ADDED .dockerignore Index: .dockerignore ================================================================== --- .dockerignore +++ .dockerignore @@ -0,0 +1,26 @@ +_FOSSIL_ +.fslckout +ajax +art +autosetup +bld +compat +debian +fossil +fossil.exe +setup +src +test +tools +win +wbld +win +www +*.a +*.lib +*.log +*.manifest +*.o +*.obj +*.pdb +*.res ADDED .fossil-settings/clean-glob Index: .fossil-settings/clean-glob ================================================================== --- .fossil-settings/clean-glob +++ .fossil-settings/clean-glob @@ -0,0 +1,17 @@ +*.a +*.lib +*.manifest +*.o +*.obj +*.pdb +*.res +Makefile +bld/* +wbld/* +win/*.c +win/*.h +win/*.exe +win/headers +win/linkopts +autoconfig.h +config.log ADDED .fossil-settings/encoding-glob Index: .fossil-settings/encoding-glob ================================================================== --- .fossil-settings/encoding-glob +++ .fossil-settings/encoding-glob @@ -0,0 +1,2 @@ +compat/zlib/contrib/dotzlib/DotZLib/*.cs +win/fossil.rc ADDED .fossil-settings/ignore-glob Index: .fossil-settings/ignore-glob ================================================================== --- .fossil-settings/ignore-glob +++ .fossil-settings/ignore-glob @@ -0,0 +1,5 @@ +compat/openssl* +compat/tcl* +fossil +fossil.exe +win/fossil.exe ADDED .project Index: .project ================================================================== --- .project +++ .project @@ -0,0 +1,11 @@ + + + fossil + + + + + + + + ADDED .settings/org.eclipse.core.resources.prefs Index: .settings/org.eclipse.core.resources.prefs ================================================================== --- .settings/org.eclipse.core.resources.prefs +++ .settings/org.eclipse.core.resources.prefs @@ -0,0 +1,2 @@ +eclipse.preferences.version=1 +encoding/=UTF-8 ADDED .settings/org.eclipse.core.runtime.prefs Index: .settings/org.eclipse.core.runtime.prefs ================================================================== --- .settings/org.eclipse.core.runtime.prefs +++ .settings/org.eclipse.core.runtime.prefs @@ -0,0 +1,2 @@ +eclipse.preferences.version=1 +line.separator=\n Index: BUILD.txt ================================================================== --- BUILD.txt +++ BUILD.txt @@ -1,56 +1,78 @@ -All of the source code for fossil is contained in the src/ subdirectory. -But there is a lot of generated code, so you will probably want to -use the Makefile. To do a complete build, just type: - - make - -That should work out-of-the-box on Macs and Linux systems. If you are -building on a Windows box, install MinGW as well as MinGW's make (or -MSYS). You can then type: - - make -f Makefile.w32 +To do a complete build, just type: + + ./configure; make + +The ./configure script builds Makefile from Makefile.in based on +your system and any options you select (run "./configure --help" +for a listing of the available options.) + +If you wish to use the original Makefile with no configuration, you can +instead use: + + make -f Makefile.classic + +On a windows box, use one of the Makefiles in the win/ subdirectory, +according to your compiler and environment. If you have MinGW or +MinGW-w64 installed on your system (Msys or Cygwin, or as +cross-compile environment on Linux or Darwin), then consider: + + make -f win/Makefile.mingw + +If you have VC++ installed on your system, then consider: + + cd win; nmake /f Makefile.msc If you have trouble, or you want to do something fancy, just look at -top level makefile. There are 6 configuration options that are all well -commented. 
Instead of editing the Makefile, consider copying the Makefile -to an alternative name such as "GNUMakefile", "BSDMakefile", or "makefile" -and editing the copy. +Makefile.classic. There are 6 configuration options that are all well +commented. Instead of editing the Makefile.classic, consider copying +Makefile.classic to an alternative name such as "GNUMakefile", +"BSDMakefile", or "makefile" and editing the copy. + -Out of source builds? --------------------------------------------------------------------------- +BUILDING OUTSIDE THE SOURCE TREE An out of source build is pretty easy: - 1. Make a new directory to do the builds in. - 2. Copy "Makefile" from the source into the build directory and - modify the SRCDIR macro along the lines of: - - SRCDIR=../src - - 3. type: "make" - -This will now keep all generates files seperate from the maintained + 1. Make and change to a new directory to do the builds in. + 2. Run the "configure" script from this directory. + 3. Type: "make" + +For example: + + mkdir build + cd build + ../configure + make + +This will now keep all generates files separate from the maintained source code. -------------------------------------------------------------------------- Here are some notes on what is happening behind the scenes: + +* The configure script (if used) examines the options given + and runs various tests with the C compiler to create Makefile + from the Makefile.in template as well as autoconfig.h * The Makefile just sets up a few macros and then invokes the real makefile in src/main.mk. The src/main.mk makefile is automatically generated by a TCL script found at src/makemake.tcl. Do not edit src/main.mk directly. Update src/makemake.tcl and then rerun it. * The *.h header files are automatically generated using a program called "makeheaders". Source code to the makeheaders program is - found in src/makeheaders.c. Documentation is found in + found in src/makeheaders.c. Documentation is found in src/makeheaders.html. * Most *.c source files are preprocessed using a program called "translate". The sources to translate are found in src/translate.c. A header comment in src/translate.c explains in detail what it does. * The src/mkindex.c program generates some C code that implements static lookup tables. See the header comment in the source code for details on what it does. + +Additional information on the build process is available from +http://www.fossil-scm.org/fossil/doc/trunk/www/makefile.wiki Index: COPYRIGHT-BSD2.txt ================================================================== --- COPYRIGHT-BSD2.txt +++ COPYRIGHT-BSD2.txt @@ -1,31 +1,31 @@ Copyright 2007 D. Richard Hipp. All rights reserved. -Redistribution and use in source and binary forms, with or -without modification, are permitted provided that the +Redistribution and use in source and binary forms, with or +without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above - copyright notice, this list of conditions and the + copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 
-THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND ANY EXPRESS -OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND ANY EXPRESS +OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE +ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF +SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, -WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE -OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, +WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE +OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -The views and conclusions contained in the software and documentation -are those of the authors and contributors and should not be interpreted +The views and conclusions contained in the software and documentation +are those of the authors and contributors and should not be interpreted as representing official policies, either expressed or implied, of anybody else. ADDED Dockerfile Index: Dockerfile ================================================================== --- Dockerfile +++ Dockerfile @@ -0,0 +1,28 @@ +### +# Dockerfile for Fossil +### +FROM fedora:23 + +### Now install some additional parts we will need for the build +RUN dnf update -y && dnf install -y gcc make zlib-devel openssl-devel tar && dnf clean all && groupadd -r fossil -g 433 && useradd -u 431 -r -g fossil -d /opt/fossil -s /sbin/nologin -c "Fossil user" fossil + +### If you want to build "trunk", change the next line accordingly. +ENV FOSSIL_INSTALL_VERSION release + +RUN curl "http://core.tcl.tk/tcl/tarball/tcl-src.tar.gz?name=tcl-src&uuid=release" | tar zx +RUN cd tcl-src/unix && ./configure --prefix=/usr --disable-load && make && make install +RUN curl "http://www.fossil-scm.org/index.html/tarball/fossil-src.tar.gz?name=fossil-src&uuid=${FOSSIL_INSTALL_VERSION}" | tar zx +RUN cd fossil-src && ./configure --disable-fusefs --json --with-th1-docs --with-th1-hooks --with-tcl --with-tcl-stubs --with-tcl-private-stubs +RUN cd fossil-src/src && mv main.c main.c.orig && sed s/\"now\"/0/ main.c +RUN cd fossil-src && make && strip fossil && cp fossil /usr/bin && cd .. && rm -rf fossil-src && chmod a+rx /usr/bin/fossil && mkdir -p /opt/fossil && chown fossil:fossil /opt/fossil + +### Build is done, remove modules no longer needed +RUN dnf remove -y gcc make zlib-devel openssl-devel tar && dnf clean all + +USER fossil + +ENV HOME /opt/fossil + +EXPOSE 8080 + +CMD ["/usr/bin/fossil", "server", "--create", "--user", "admin", "/opt/fossil/repository.fossil"] DELETED Makefile Index: Makefile ================================================================== --- Makefile +++ Makefile @@ -1,56 +0,0 @@ -#!/usr/bin/make -# -#### The toplevel directory of the source tree. Fossil can be built -# in a directory that is separate from the source tree. Just change -# the following to point from the build directory to the src/ folder. 
-# -SRCDIR = ./src - -#### The directory into which object code files should be written. -# -# -OBJDIR = ./obj - -#### C Compiler and options for use in building executables that -# will run on the platform that is doing the build. This is used -# to compile code-generator programs as part of the build process. -# See TCC below for the C compiler for building the finished binary. -# -BCC = gcc -g -O2 - -#### The suffix to add to executable files. ".exe" for windows. -# Nothing for unix. -# -E = - -#### C Compile and options for use in building executables that -# will run on the target platform. This is usually the same -# as BCC, unless you are cross-compiling. This C compiler builds -# the finished binary for fossil. The BCC compiler above is used -# for building intermediate code-generator tools. -# -#TCC = gcc -O6 -#TCC = gcc -g -O0 -Wall -fprofile-arcs -ftest-coverage -TCC = gcc -g -Os -Wall - -#### Extra arguments for linking the finished binary. Fossil needs -# to link against the Z-Lib compression library. There are no -# other dependencies. We sometimes add the -static option here -# so that we can build a static executable that will run in a -# chroot jail. -# -LIB = -lz $(LDFLAGS) -# If you're on OpenSolaris: -# LIB += lsocket -# Solaris 10 needs: -# LIB += -lsocket -lnsl -# My assumption is that the Sol10 flags will work for Sol8/9 and possibly 11. -# - -#### Tcl shell for use in running the fossil testsuite. -# -TCLSH = tclsh - -# You should not need to change anything below this line -############################################################################### -include $(SRCDIR)/main.mk ADDED Makefile.classic Index: Makefile.classic ================================================================== --- Makefile.classic +++ Makefile.classic @@ -0,0 +1,87 @@ +#!/usr/bin/make +# +# This is the top-level makefile for Fossil when the build is occurring +# on a unix platform. This works out-of-the-box on most unix platforms. +# But you are free to vary some of the definitions if desired. +# +#### The toplevel directory of the source tree. Fossil can be built +# in a directory that is separate from the source tree. Just change +# the following to point from the build directory to the src/ folder. +# +SRCDIR = ./src + +#### The directory into which object code files should be written. +# +# +OBJDIR = ./bld + +#### C Compiler and options for use in building executables that +# will run on the platform that is doing the build. This is used +# to compile code-generator programs as part of the build process. +# See TCC below for the C compiler for building the finished binary. +# +BCC = gcc + +#### The suffix to add to final executable file. When cross-compiling +# to windows, make this ".exe". Otherwise leave it blank. +# +E = + +#### C Compile and options for use in building executables that +# will run on the target platform. This is usually the same +# as BCC, unless you are cross-compiling. This C compiler builds +# the finished binary for fossil. The BCC compiler above is used +# for building intermediate code-generator tools. +# +#TCC = gcc -O6 +#TCC = gcc -g -O0 -Wall -fprofile-arcs -ftest-coverage +TCC = gcc -g -Os -Wall + +# To use the included miniz library +# FOSSIL_ENABLE_MINIZ = 1 +# TCC += -DFOSSIL_ENABLE_MINIZ + +# To add support for HTTPS +TCC += -DFOSSIL_ENABLE_SSL + +#### We sometimes add the -static option here so that we can build a +# static executable that will run in a chroot jail. 
+#LIB = -static +TCC += -DFOSSIL_DYNAMIC_BUILD=1 + +#### Extra arguments for linking the finished binary. Fossil needs +# to link against the Z-Lib compression library unless the miniz +# library in the source tree is being used. There are no other +# required dependencies. +ZLIB_LIB.0 = -lz +ZLIB_LIB.1 = +ZLIB_LIB. = $(ZLIB_LIB.0) + +# If using zlib: +LIB += $(ZLIB_LIB.$(FOSSIL_ENABLE_MINIZ)) $(LDFLAGS) + +# If using HTTPS: +LIB += -lcrypto -lssl + +#### Tcl shell for use in running the fossil testsuite. If you do not +# care about testing the end result, this can be blank. +# +TCLSH = tclsh + +# You should not need to change anything below this line +############################################################################### +# +# Automatic platform-specific options. +HOST_OS_CMD = uname -s +HOST_OS = $(HOST_OS_CMD:sh) + +LIB.SunOS= -lsocket -lnsl +LIB += $(LIB.$(HOST_OS)) + +TCC.DragonFly += -DUSE_PREAD +TCC.FreeBSD += -DUSE_PREAD +TCC.NetBSD += -DUSE_PREAD +TCC.OpenBSD += -DUSE_PREAD +TCC += $(TCC.$(HOST_OS)) + +include $(SRCDIR)/main.mk ADDED Makefile.in Index: Makefile.in ================================================================== --- Makefile.in +++ Makefile.in @@ -0,0 +1,51 @@ +#!/usr/bin/make +# +# This is the top-level makefile for Fossil when the build is occurring +# on a unix platform. This works out-of-the-box on most unix platforms. +# But you are free to vary some of the definitions if desired. +# +#### The toplevel directory of the source tree. Fossil can be built +# in a directory that is separate from the source tree. Just change +# the following to point from the build directory to the src/ folder. +# +SRCDIR = @srcdir@/src + +#### The directory into which object code files should be written. +# Having a "./" prefix in the value of this variable breaks our use of the +# "makeheaders" tool when running make on the MinGW platform, apparently +# due to some command line argument manipulation performed automatically +# by the shell. +# +# +OBJDIR = bld + +#### C Compiler and options for use in building executables that +# will run on the platform that is doing the build. This is used +# to compile code-generator programs as part of the build process. +# See TCC below for the C compiler for building the finished binary. +# +BCC = @CC_FOR_BUILD@ + +#### The suffix to add to final executable file. When cross-compiling +# to windows, make this ".exe". Otherwise leave it blank. +# +E = @EXEEXT@ + +TCC = @CC@ + +#### Tcl shell for use in running the fossil testsuite. If you do not +# care about testing the end result, this can be blank. +# +TCLSH = tclsh + +LIB = @LDFLAGS@ @EXTRA_LDFLAGS@ @LIBS@ +TCCFLAGS = @EXTRA_CFLAGS@ @CPPFLAGS@ @CFLAGS@ -DHAVE_AUTOCONFIG_H -D_HAVE_SQLITE_CONFIG_H +INSTALLDIR = $(DESTDIR)@prefix@/bin +USE_SYSTEM_SQLITE = @USE_SYSTEM_SQLITE@ +USE_LINENOISE = @USE_LINENOISE@ +FOSSIL_ENABLE_MINIZ = @FOSSIL_ENABLE_MINIZ@ + +include $(SRCDIR)/main.mk + +distclean: clean + rm -f autoconfig.h config.log Makefile DELETED Makefile.w32 Index: Makefile.w32 ================================================================== --- Makefile.w32 +++ Makefile.w32 @@ -1,67 +0,0 @@ -#!/usr/bin/make -# -#### The toplevel directory of the source tree. Fossil can be built -# in a directory that is separate from the source tree. Just change -# the following to point from the build directory to the src/ folder. -# -SRCDIR = ./src -OBJDIR = ./wobj - -#### C Compiler and options for use in building executables that -# will run on the platform that is doing the build. 
This is used -# to compile code-generator programs as part of the build process. -# See TCC below for the C compiler for building the finished binary. -# -BCC = gcc -g -O2 - -#### The suffix to add to executable files. ".exe" for windows. -# Nothing for unix. -# -E = .exe - -#### Enable HTTPS support via OpenSSL (links to libssl and libcrypto) -# -# FOSSIL_ENABLE_SSL=1 - -#### C Compile and options for use in building executables that -# will run on the target platform. This is usually the same -# as BCC, unless you are cross-compiling. This C compiler builds -# the finished binary for fossil. The BCC compiler above is used -# for building intermediate code-generator tools. -# -#TCC = gcc -O6 -#TCC = gcc -g -O0 -Wall -fprofile-arcs -ftest-coverage -#TCC = gcc -g -Os -Wall -#TCC = gcc -g -Os -Wall -DFOSSIL_I18N=0 -L/usr/local/lib -I/usr/local/include -TCC = gcc -Os -Wall -DFOSSIL_I18N=0 -L/mingw/lib -I/mingw/include - -# With HTTPS support -ifdef FOSSIL_ENABLE_SSL -TCC += -DFOSSIL_ENABLE_SSL=1 -endif - -#### Extra arguments for linking the finished binary. Fossil needs -# to link against the Z-Lib compression library. There are no -# other dependencies. We sometimes add the -static option here -# so that we can build a static executable that will run in a -# chroot jail. -# -#LIB = -lz -#LIB = -lz -lws2_32 -LIB = -lmingwex -lz -lws2_32 -# OpenSSL: -ifdef FOSSIL_ENABLE_SSL -LIB += -lcrypto -lssl -endif - -#### Tcl shell for use in running the fossil testsuite. -# -TCLSH = tclsh - -#### Include a configuration file that can override any one of these settings. -# --include config.w32 - -# You should not need to change anything below this line -############################################################################### -include $(SRCDIR)/main.mk ADDED VERSION Index: VERSION ================================================================== --- VERSION +++ VERSION @@ -0,0 +1,1 @@ +1.35 ADDED ajax/README Index: ajax/README ================================================================== --- ajax/README +++ ajax/README @@ -0,0 +1,38 @@ +This is the README for how to set up the Fossil/JSON test web page +under Apache on Unix systems. This is only intended only for +Fossil/JSON developers/tinkerers: + +First, copy cgi-bin/fossil-json.cgi.example to +cgi-bin/fossil-json.cgi. Edit it and correct the paths to the fossil +binary and the repo you want to serve. Make it executable. + +MAKE SURE that the fossil repo you use is world-writable OR that your +Web/CGI server is set up to run as the user ID of the owner of the +fossil file. ALSO: the DIRECTORY CONTAINING the repo file must be +writable by the CGI process. + +Next, set up an apache vhost entry. Mine looks like: + + + ServerAlias fjson + ScriptAlias /cgi-bin/ /home/stephan/cvs/fossil/fossil-json/ajax/cgi-bin/ + DocumentRoot /home/stephan/cvs/fossil/fossil-json/ajax + + +Now add your preferred vhost name (fjson in the above example) to /etc/hosts: + + 127.0.0.1 ...other aliases... fjson + +Restart your Apache. + +Now visit: http://fjson/ + +that will show the test/demo page. If it doesn't, edit index.html and +make sure that: + + WhAjaj.Connector.options.ajax.url = ...; + +points to your CGI script. In theory you can also do this over fossil +standalone server mode, but i haven't yet tested that particular test +page in that mode. 
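For reference, a minimal sketch of that url setting from index.html, assuming the
"fjson" vhost name and CGI path used in the examples above (adjust to your setup):

    // in index.html: point the AJAX layer at your CGI script (URL is an example)
    WhAjaj.Connector.options.ajax.url = 'http://fjson/cgi-bin/fossil-json.cgi';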
+ ADDED ajax/cgi-bin/fossil-json.cgi.example Index: ajax/cgi-bin/fossil-json.cgi.example ================================================================== --- ajax/cgi-bin/fossil-json.cgi.example +++ ajax/cgi-bin/fossil-json.cgi.example @@ -0,0 +1,2 @@ +#!/path/to/fossil/binary +repository: /path/to/repo.fsl ADDED ajax/i-test/rhino-shell.js Index: ajax/i-test/rhino-shell.js ================================================================== --- ajax/i-test/rhino-shell.js +++ ajax/i-test/rhino-shell.js @@ -0,0 +1,208 @@ +var FShell = { + serverUrl: + 'http://localhost:8080' + //'http://fjson/cgi-bin/fossil-json.cgi' + //'http://192.168.1.62:8080' + //'http://fossil.wanderinghorse.net/repos/fossil-json-java/index.cgi' + , + verbose:false, + prompt:"fossil shell > ", + wiki:{}, + consol:java.lang.System.console(), + v:function(msg){ + if(this.verbose){ + print("VERBOSE: "+msg); + } + } +}; +(function bootstrap() { + var srcdir = '../js/'; + var includes = [srcdir+'json2.js', + srcdir+'whajaj.js', + srcdir+'fossil-ajaj.js' + ]; + for( var i in includes ) { + load(includes[i]); + } + WhAjaj.Connector.prototype.sendImpl = WhAjaj.Connector.sendImpls.rhino; + FShell.fossil = new FossilAjaj({ + asynchronous:false, /* rhino-based impl doesn't support async. */ + timeout:10000, + url:FShell.serverUrl + }); + print("Server: "+FShell.serverUrl); + var cb = FShell.fossil.ajaj.callbacks; + cb.beforeSend = function(req,opt){ + if(!FShell.verbose) return; + print("SENDING REQUEST: AJAJ options="+JSON.stringify(opt)); + if(req) print("Request envelope="+WhAjaj.stringify(req)); + }; + cb.afterSend = function(req,opt){ + //if(!FShell.verbose) return; + //print("REQUEST RETURNED: opt="+JSON.stringify(opt)); + //if(req) print("Request="+WhAjaj.stringify(req)); + }; + cb.onError = function(req,opt){ + //if(!FShell.verbose) return; + print("ERROR: "+WhAjaj.stringify(opt)); + }; + cb.onResponse = function(resp,req){ + if(!FShell.verbose) return; + if(resp && resp.resultCode){ + print("Response contains error info: "+resp.resultCode+": "+resp.resultText); + } + print("GOT RESPONSE: "+(('string'===typeof resp) ? resp : WhAjaj.stringify(resp))); + }; + FShell.fossil.HAI({ + onResponse:function(resp,opt){ + assertResponseOK(resp); + } + }); +})(); + +/** + Throws an exception of cond is a falsy value. +*/ +function assert(cond, descr){ + descr = descr || "Undescribed condition."; + if(!cond){ + throw new Error("Assertion failed: "+descr); + }else{ + //print("Assertion OK: "+descr); + } +} + +/** + Convenience form of FShell.fossil.sendCommand(command,payload,ajajOpt). +*/ +function send(command,payload, ajajOpt){ + FShell.fossil.sendCommand(command,payload,ajajOpt); +} + +/** + Asserts that resp is-a Object, resp.fossil is-a string, and + !resp.resultCode. +*/ +function assertResponseOK(resp){ + assert('object' === typeof resp,'Response is-a object.'); + assert( 'string' === typeof resp.fossil, 'Response contains fossil property.'); + assert( !resp.resultCode, 'resp.resultCode='+resp.resultCode); +} +/** + Asserts that resp is-a Object, resp.fossil is-a string, and + resp.resultCode is a truthy value. If expectCode is set then + it also asserts that (resp.resultCode=='FOSSIL-'+expectCode). 
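
  A hypothetical example (the numeric code is illustrative only):

    assertResponseError(resp);       // passes for any error envelope
    assertResponseError(resp, 3002); // additionally requires resp.resultCode == 'FOSSIL-3002'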
+*/ +function assertResponseError(resp,expectCode){ + assert('object' === typeof resp,'Response is-a object.'); + assert( 'string' === typeof resp.fossil, 'Response contains fossil property.'); + assert( resp.resultCode, 'resp.resultCode='+resp.resultCode); + if(expectCode){ + assert( 'FOSSIL-'+expectCode == resp.resultCode, 'Expecting result code '+expectCode ); + } +} + +FShell.readline = (typeof readline === 'function') ? (readline) : (function() { + importPackage(java.io); + importPackage(java.lang); + var stdin = new BufferedReader(new InputStreamReader(System['in'])); + var self = this; + return function(prompt) { + if(prompt) print(prompt); + var x = stdin.readLine(); + return null===x ? x : String(x) /*convert to JS string!*/; + }; +}()); + +FShell.dispatchLine = function(line){ + var av = line.split(' '); // FIXME: to shell-like tokenization. Too tired! + var cmd = av[0]; + var key, h; + if('/' == cmd[0]) key = '/'; + else key = this.commandAliases[cmd]; + if(!key) key = cmd; + h = this.commandHandlers[key]; + if(!h){ + print("Command not known: "+cmd +" ("+key+")"); + }else if(!WhAjaj.isFunction(h)){ + print("Not a function: "+key); + } + else{ + print("Sending ["+key+"] command... "); + try{h(av);} + catch(e){ print("EXCEPTION: "+e); } + } +}; + +FShell.onResponseDefault = function(callback){ + return function(resp,req){ + assertResponseOK(resp); + print("Payload: "+(resp.payload ? WhAjaj.stringify(resp.payload) : "none")); + if(WhAjaj.isFunction(callback)){ + callback(resp,req); + } + }; +}; +FShell.commandHandlers = { + "?":function(args){ + var k; + print("Available commands...\n"); + var o = FShell.commandHandlers; + for(k in o){ + if(! o.hasOwnProperty(k)) continue; + print("\t"+k); + } + }, + "/":function(args){ + FShell.fossil.sendCommand('/json'+args[0],undefined,{ + beforeSend:function(req,opt){ + print("Sending to: "+opt.url); + }, + onResponse:FShell.onResponseDefault() + }); + }, + "eval":function(args){ + eval(args.join(' ')); + }, + "login":function(args){ + FShell.fossil.login(args[1], args[2], { + onResponse:FShell.onResponseDefault() + }); + }, + "whoami":function(args){ + FShell.fossil.whoami({ + onResponse:FShell.onResponseDefault() + }); + }, + "HAI":function(args){ + FShell.fossil.HAI({ + onResponse:FShell.onResponseDefault() + }); + } + +}; +FShell.commandAliases = { + "li":"login", + "lo":"logout", + "who":"whoami", + "hi":"HAI", + "tci":"/timeline/ci?limit=3" +}; +FShell.mainLoop = function(){ + var line; + var check = /\S/; + //var isJavaNull = /java\.lang\.null/; + //print(typeof java.lang['null']); + while( null != (line=this.readline(this.prompt)) ){ + if(null===line) break /*EOF*/; + else if( "" === line ) continue; + //print("Got line: "+line); + else if(!check.test(line)) continue; + print('typeof line = '+typeof line); + this.dispatchLine(line); + print(""); + } + print("Bye!"); +}; + +FShell.mainLoop(); ADDED ajax/i-test/rhino-test.js Index: ajax/i-test/rhino-test.js ================================================================== --- ajax/i-test/rhino-test.js +++ ajax/i-test/rhino-test.js @@ -0,0 +1,279 @@ +var TestApp = { + serverUrl: + 'http://localhost:8080' + //'http://fjson/cgi-bin/fossil-json.cgi' + //'http://192.168.1.62:8080' + //'http://fossil.wanderinghorse.net/repos/fossil-json-java/index.cgi' + , + verbose:true, + fossilBinary:'fossil', + wiki:{} +}; +(function bootstrap() { + var srcdir = '../js/'; + var includes = [srcdir+'json2.js', + srcdir+'whajaj.js', + srcdir+'fossil-ajaj.js' + ]; + for( var i in includes ) { + 
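    // Rhino's load() reads and evaluates each helper script into the global scope.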
load(includes[i]); + } + WhAjaj.Connector.prototype.sendImpl = WhAjaj.Connector.sendImpls.rhino; + TestApp.fossil = new FossilAjaj({ + asynchronous:false, /* rhino-based impl doesn't support async or timeout. */ + timeout:0, + url:TestApp.serverUrl, + fossilBinary:TestApp.fossilBinary + }); + var cb = TestApp.fossil.ajaj.callbacks; + cb.beforeSend = function(req,opt){ + if(!TestApp.verbose) return; + print("SENDING REQUEST: AJAJ options="+JSON.stringify(opt)); + if(req) print("Request envelope="+WhAjaj.stringify(req)); + }; + cb.afterSend = function(req,opt){ + //if(!TestApp.verbose) return; + //print("REQUEST RETURNED: opt="+JSON.stringify(opt)); + //if(req) print("Request="+WhAjaj.stringify(req)); + }; + cb.onError = function(req,opt){ + if(!TestApp.verbose) return; + print("ERROR: "+WhAjaj.stringify(opt)); + }; + cb.onResponse = function(resp,req){ + if(!TestApp.verbose) return; + print("GOT RESPONSE: "+(('string'===typeof resp) ? resp : WhAjaj.stringify(resp))); + }; + +})(); + +/** + Throws an exception of cond is a falsy value. +*/ +function assert(cond, descr){ + descr = descr || "Undescribed condition."; + if(!cond){ + print("Assertion FAILED: "+descr); + throw new Error("Assertion failed: "+descr); + // aarrgghh. Exceptions are of course swallowed by + // the AJAX layer, to keep from killing a browser's + // script environment. + }else{ + if(TestApp.verbose) print("Assertion OK: "+descr); + } +} + +/** + Calls func() in a try/catch block and throws an exception if + func() does NOT throw. +*/ +function assertThrows(func, descr){ + descr = descr || "Undescribed condition failed."; + var ex; + try{ + func(); + }catch(e){ + ex = e; + } + if(!ex){ + throw new Error("Function did not throw (as expected): "+descr); + }else{ + if(TestApp.verbose) print("Function threw (as expected): "+descr+": "+ex); + } +} + +/** + Convenience form of TestApp.fossil.sendCommand(command,payload,ajajOpt). +*/ +function send(command,payload, ajajOpt){ + TestApp.fossil.sendCommand(command,payload,ajajOpt); +} + +/** + Asserts that resp is-a Object, resp.fossil is-a string, and + !resp.resultCode. +*/ +function assertResponseOK(resp){ + assert('object' === typeof resp,'Response is-a object.'); + assert( 'string' === typeof resp.fossil, 'Response contains fossil property.'); + assert( undefined === resp.resultCode, 'resp.resultCode is not set'); +} +/** + Asserts that resp is-a Object, resp.fossil is-a string, and + resp.resultCode is a truthy value. If expectCode is set then + it also asserts that (resp.resultCode=='FOSSIL-'+expectCode). +*/ +function assertResponseError(resp,expectCode){ + assert('object' === typeof resp,'Response is-a object.'); + assert( 'string' === typeof resp.fossil, 'Response contains fossil property.'); + assert( !!resp.resultCode, 'resp.resultCode='+resp.resultCode); + if(expectCode){ + assert( 'FOSSIL-'+expectCode == resp.resultCode, 'Expecting result code '+expectCode ); + } +} + +function testHAI(){ + var rs; + TestApp.fossil.HAI({ + onResponse:function(resp,req){ + rs = resp; + } + }); + assertResponseOK(rs); + TestApp.serverVersion = rs.fossil; + assert( 'string' === typeof TestApp.serverVersion, 'server version = '+TestApp.serverVersion); +} +testHAI.description = 'Get server version info.'; + +function testIAmNobody(){ + TestApp.fossil.whoami('/json/whoami'); + assert('nobody' === TestApp.fossil.auth.name, 'User == nobody.' ); + assert(!TestApp.fossil.auth.authToken, 'authToken is not set.' 
); + +} +testIAmNobody.description = 'Ensure that current user is "nobody".'; + + +function testAnonymousLogin(){ + TestApp.fossil.login(); + assert('string' === typeof TestApp.fossil.auth.authToken, 'authToken = '+TestApp.fossil.auth.authToken); + assert( 'string' === typeof TestApp.fossil.auth.name, 'User name = '+TestApp.fossil.auth.name); + TestApp.fossil.userName = null; + TestApp.fossil.whoami('/json/whoami'); + assert( 'string' === typeof TestApp.fossil.auth.name, 'User name = '+TestApp.fossil.auth.name); +} +testAnonymousLogin.description = 'Perform anonymous login.'; + +function testAnonWiki(){ + var rs; + TestApp.fossil.sendCommand('/json/wiki/list',undefined,{ + beforeSend:function(req,opt){ + assert( req && (req.authToken==TestApp.fossil.auth.authToken), 'Request envelope contains expected authToken.' ); + }, + onResponse:function(resp,req){ + rs = resp; + } + }); + assertResponseOK(rs); + assert( (typeof [] === typeof rs.payload) && rs.payload.length, + "Wiki list seems to be okay."); + TestApp.wiki.list = rs.payload; + + TestApp.fossil.sendCommand('/json/wiki/get',{ + name:TestApp.wiki.list[0] + },{ + onResponse:function(resp,req){ + rs = resp; + } + }); + assertResponseOK(rs); + assert(rs.payload.name == TestApp.wiki.list[0], "Fetched page name matches expectations."); + print("Got first wiki page: "+WhAjaj.stringify(rs.payload)); + +} +testAnonWiki.description = 'Fetch wiki list as anonymous user.'; + +function testFetchCheckinArtifact(){ + var art = '18dd383e5e7684ece'; + var rs; + TestApp.fossil.sendCommand('/json/artifact',{ + 'name': art + }, + { + onResponse:function(resp,req){ + rs = resp; + } + }); + assertResponseOK(rs); + assert(3 == rs.payload.parents.length, 'Got 3 parent artifacts.'); +} +testFetchCheckinArtifact.description = '/json/artifact/CHECKIN'; + +function testAnonLogout(){ + var rs; + TestApp.fossil.logout({ + onResponse:function(resp,req){ + rs = resp; + } + }); + assertResponseOK(rs); + print("Ensure that second logout attempt fails..."); + TestApp.fossil.logout({ + onResponse:function(resp,req){ + rs = resp; + } + }); + assertResponseError(rs); +} +testAnonLogout.description = 'Log out anonymous user.'; + +function testExternalProcess(){ + + var req = { command:"HAI", requestId:'testExternalProcess()' }; + var args = [TestApp.fossilBinary, 'json', '--json-input', '-']; + var p = java.lang.Runtime.getRuntime().exec(args); + var outs = p.getOutputStream(); + var osr = new java.io.OutputStreamWriter(outs); + var osb = new java.io.BufferedWriter(osr); + var json = JSON.stringify(req); + osb.write(json,0, json.length); + osb.close(); + req = json = outs = osr = osb = undefined; + var ins = p.getInputStream(); + var isr = new java.io.InputStreamReader(ins); + var br = new java.io.BufferedReader(isr); + var line; + + while( null !== (line=br.readLine())){ + print(line); + } + br.close(); + isr.close(); + ins.close(); + p.waitFor(); +} +testExternalProcess.description = 'Run fossil as external process.'; + +function testExternalProcessHandler(){ + var aj = TestApp.fossil.ajaj; + var oldImpl = aj.sendImpl; + aj.sendImpl = FossilAjaj.rhinoLocalBinarySendImpl; + var rs; + TestApp.fossil.sendCommand('/json/HAI',undefined,{ + onResponse:function(resp,opt){ + rs = resp; + } + }); + aj.sendImpl = oldImpl; + assertResponseOK(rs); + print("Using local fossil binary via AJAX interface, we fetched: "+ + WhAjaj.stringify(rs)); +} +testExternalProcessHandler.description = 'Try local fossil binary via AJAX interface.'; + +(function runAllTests(){ + var testList = [ + 
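        // Order matters: testAnonymousLogin establishes the auth token that
        // testAnonWiki and testAnonLogout depend on.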
testHAI, + testIAmNobody, + testAnonymousLogin, + testAnonWiki, + testFetchCheckinArtifact, + testAnonLogout, + testExternalProcess, + testExternalProcessHandler + ]; + var i, f; + for( i = 0; i < testList.length; ++i ){ + f = testList[i]; + try{ + print("Running test #"+(i+1)+": "+(f.description || "no description.")); + f(); + }catch(e){ + print("Test #"+(i+1)+" failed: "+e); + throw e; + } + } + +})(); + +print("Done! If you don't see an exception message in the last few lines, you win!"); ADDED ajax/index.html Index: ajax/index.html ================================================================== --- ajax/index.html +++ ajax/index.html @@ -0,0 +1,333 @@ + + + + + Fossil/JSON raw request sending + + + + + + + + + + + + + +
(visible page text; the HTML markup is not preserved in this listing)

You know, for sending raw JSON requests to Fossil...

If you're actually using this page, then you know what you're doing and don't
need help text, hoverhelp, and a snazzy interface.

JSON API docs: https://docs.google.com/document/d/1fXViveNhDbiXgCuE7QDXQOKeFzf2qNUkBEgiUvoqFN4/edit

See also: prototype wiki editor.

Request...

Path: (if the POST textarea is not empty then it will be posted with the request)
Quick-posts: / Login: name: pw:

POST data | Request AJAJ options
Response
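The request the page sends is a Fossil/JSON envelope POSTed to a /json/... path.
A hypothetical sketch of such an envelope (the wiki page name and requestId are
placeholders; FossilAjaj.sendCommand() in ajax/js/fossil-ajaj.js below builds
the same shape automatically):

    // envelope for a request to /json/wiki/get (illustrative values only)
    var req = {
        payload: { name: "SomeWikiPage" },  // command-specific payload
        requestId: 1,                       // client-chosen identifier for this request
        authToken: undefined                // filled in after /json/login
    };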
+ + ADDED ajax/js/fossil-ajaj.js Index: ajax/js/fossil-ajaj.js ================================================================== --- ajax/js/fossil-ajaj.js +++ ajax/js/fossil-ajaj.js @@ -0,0 +1,274 @@ +/** + This file contains a WhAjaj extension for use with Fossil/JSON. + + Author: Stephan Beal (sgbeal@googlemail.com) + + License: Public Domain +*/ + +/** + Constructor for a new Fossil AJAJ client. ajajOpt may be an optional + object suitable for passing to the WhAjaj.Connector() constructor. + + On returning, this.ajaj is-a WhAjaj.Connector instance which can + be used to send requests to the back-end (though the convenience + functions of this class are the preferred way to do it). Clients + are encouraged to use FossilAjaj.sendCommand() (and friends) instead + of the underlying WhAjaj.Connector API, since this class' API + contains Fossil-specific request-calling handling (e.g. of authentication + info) whereas WhAjaj is more generic. +*/ +function FossilAjaj(ajajOpt) +{ + this.ajaj = new WhAjaj.Connector(ajajOpt); + return this; +} + +FossilAjaj.prototype.generateRequestId = function() { + return this.ajaj.generateRequestId(); +}; + +/** + Proxy for this.ajaj.sendRequest(). +*/ +FossilAjaj.prototype.sendRequest = function(req,opt) { + return this.ajaj.sendRequest(req,opt); +}; + +/** + Sends a command to the fossil back-end. Command should be the + path part of the URL, e.g. /json/stat, payload is a request-specific + value type (may often be null/undefined). ajajOpt is an optional object + holding WhAjaj.sendRequest()-compatible options. + + This function constructs a Fossil/JSON request envelope based + on the given arguments and adds this.auth.authToken and a requestId + to it. +*/ +FossilAjaj.prototype.sendCommand = function(command, payload, ajajOpt) { + var req; + ajajOpt = ajajOpt || {}; + if(payload || (this.auth && this.auth.authToken) || ajajOpt.jsonp) { + req = { + payload:payload, + requestId:('function' === typeof this.generateRequestId) ? this.generateRequestId() : undefined, + authToken:(this.auth ? this.auth.authToken : undefined), + jsonp:('string' === typeof ajajOpt.jsonp) ? ajajOpt.jsonp : undefined + }; + } + ajajOpt.method = req ? 'POST' : 'GET'; + // just for debuggering: ajajOpt.method = 'POST'; if(!req) req={}; + if(command) ajajOpt.url = this.ajaj.derivedOption('url',ajajOpt) + command; + this.ajaj.sendRequest(req,ajajOpt); +}; + +/** + Sends a login request to the back-end. + + ajajOpt is an optional configuration object suitable for passing + to sendCommand(). + + After the response returns, this.auth will be + set to the response payload. + + If name === 'anonymous' (the default if none is passed in) then this + function ignores the pw argument and must make two requests - the first + one gets the captcha code and the second one submits it. + ajajOpt.onResponse() (if set) is only called for the actual login + response (the 2nd one), as opposed to being called for both requests. + However, this.ajaj.callbacks.onResponse() _is_ called for both (because + it happens at a lower level). + + If this object has an onLogin() function it is called (with + no arguments) before the onResponse() handler of the login is called + (that is the 2nd request for anonymous logins) and any exceptions + it throws are ignored. 
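
  A hypothetical usage sketch (URL and credentials are placeholders; print()
  is the Rhino shell's print):

    var fossil = new FossilAjaj({ url: 'http://localhost:8080' });
    fossil.onLogin = function(){ print('logged in as ' + fossil.auth.name); };
    fossil.login();                     // anonymous login (two requests)
    fossil.login('someUser', 'somePw'); // named login (single request)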
+ +*/ +FossilAjaj.prototype.login = function(name,pw,ajajOpt) { + name = name || 'anonymous'; + var self = this; + var loginReq = { + name:name, + password:pw + }; + ajajOpt = this.ajaj.normalizeAjaxParameters( ajajOpt || {} ); + var oldOnResponse = ajajOpt.onResponse; + ajajOpt.onResponse = function(resp,req) { + var thisOpt = this; + //alert('login response:\n'+WhAjaj.stringify(resp)); + if( resp && resp.payload ) { + //self.userName = resp.payload.name; + //self.capabilities = resp.payload.capabilities; + self.auth = resp.payload; + } + if( WhAjaj.isFunction( self.onLogin ) ){ + try{ self.onLogin(); } + catch(e){} + } + if( WhAjaj.isFunction(oldOnResponse) ) { + oldOnResponse.apply(thisOpt,[resp,req]); + } + }; + function doLogin(){ + //alert("Sending login request..."+WhAjaj.stringify(loginReq)); + self.sendCommand('/json/login', loginReq, ajajOpt); + } + if( 'anonymous' === name ){ + this.sendCommand('/json/anonymousPassword',undefined,{ + onResponse:function(resp,req){ +/* + if( WhAjaj.isFunction(oldOnResponse) ){ + oldOnResponse.apply(this, [resp,req]); + }; +*/ + if(resp && !resp.resultCode){ + //alert("Got PW. Trying to log in..."+WhAjaj.stringify(resp)); + loginReq.anonymousSeed = resp.payload.seed; + loginReq.password = resp.payload.password; + doLogin(); + } + } + }); + } + else doLogin(); +}; + +/** + Logs out of fossil, invaliding this login token. + + ajajOpt is an optional configuration object suitable for passing + to sendCommand(). + + If this object has an onLogout() function it is called (with + no arguments) before the onResponse() handler is called. + IFF the response succeeds then this.auth is unset. +*/ +FossilAjaj.prototype.logout = function(ajajOpt) { + var self = this; + ajajOpt = this.ajaj.normalizeAjaxParameters( ajajOpt || {} ); + var oldOnResponse = ajajOpt.onResponse; + ajajOpt.onResponse = function(resp,req) { + var thisOpt = this; + self.auth = undefined; + if( WhAjaj.isFunction( self.onLogout ) ){ + try{ self.onLogout(); } + catch(e){} + } + if( WhAjaj.isFunction(oldOnResponse) ) { + oldOnResponse.apply(thisOpt,[resp,req]); + } + }; + this.sendCommand('/json/logout', undefined, ajajOpt ); +}; + +/** + Sends a HAI request to the server. /json/HAI is an alias /json/version. + + ajajOpt is an optional configuration object suitable for passing + to sendCommand(). +*/ +FossilAjaj.prototype.HAI = function(ajajOpt) { + this.sendCommand('/json/HAI', undefined, ajajOpt); +}; + + +/** + Sends a /json/whoami request. Updates this.auth to contain + the login info, removing them if the response does not contain + that data. +*/ +FossilAjaj.prototype.whoami = function(ajajOpt) { + var self = this; + ajajOpt = this.ajaj.normalizeAjaxParameters( ajajOpt || {} ); + var oldOnResponse = ajajOpt.onResponse; + ajajOpt.onResponse = function(resp,req) { + var thisOpt = this; + if( resp && resp.payload ){ + if(!self.auth || (self.auth.authToken!==resp.payload.authToken)){ + self.auth = resp.payload; + if( WhAjaj.isFunction(self.onLogin) ){ + self.onLogin(); + } + } + } + else { delete self.auth; } + if( WhAjaj.isFunction(oldOnResponse) ) { + oldOnResponse.apply(thisOpt,[resp,req]); + } + }; + self.sendCommand('/json/whoami', undefined, ajajOpt); +}; + +/** + EXPERIMENTAL concrete WhAjaj.Connector.sendImpl() implementation which + uses Rhino to connect to a local fossil binary for input and output. 
Its + signature and semantics are as described for + WhAjaj.Connector.prototype.sendImpl(), with a few exceptions and + additions: + + - It does not support timeouts or asynchronous mode. + + - The args.fossilBinary property must point to the local fossil binary + (it need not be a complete path if fossil is in the $PATH). This + function throws (without calling any request callbacks) if + args.fossilBinary is not set. fossilBinary may be set on + WhAjaj.Connector.options.ajax, in the FossilAjaj constructor call, as + the ajax options parameter to any of the FossilAjaj.sendCommand() family + of functions, or by setting + aFossilAjajInstance.ajaj.options.fossilBinary on a specific + FossilAjaj instance. + + - It uses the args.url field to create the "command" property of the + request, constructs a request envelope, spawns a fossil process in JSON + mode, feeds it the request envelope, and returns the response envelope + via the same mechanisms defined for the HTTP-based implementations. + + The interface is otherwise compatible with the "normal" + FossilAjaj.sendCommand() front-end (it is, however, fossil-specific, and + not back-end agnostic like the WhAjaj.sendImpl() interface intends). + + +*/ +FossilAjaj.rhinoLocalBinarySendImpl = function(request,args){ + var self = this; + request = request || {}; + if(!args.fossilBinary){ + throw new Error("fossilBinary is not set on AJAX options!"); + } + var url = args.url.split('?')[0].split(/\/+/); + if(url.length>1){ + // 3x shift(): protocol, host, 'json' part of path + request.command = (url.shift(),url.shift(),url.shift(), url.join('/')); + } + delete args.url; + //print("rhinoLocalBinarySendImpl SENDING: "+WhAjaj.stringify(request)); + var json; + try{ + var pargs = [args.fossilBinary, 'json', '--json-input', '-']; + var p = java.lang.Runtime.getRuntime().exec(pargs); + var outs = p.getOutputStream(); + var osr = new java.io.OutputStreamWriter(outs); + var osb = new java.io.BufferedWriter(osr); + + json = JSON.stringify(request); + osb.write(json,0, json.length); + osb.close(); + var ins = p.getInputStream(); + var isr = new java.io.InputStreamReader(ins); + var br = new java.io.BufferedReader(isr); + var line; + json = []; + while( null !== (line=br.readLine())){ + json.push(line); + } + ins.close(); + }catch(e){ + args.errorMessage = e.toString(); + WhAjaj.Connector.sendHelper.onSendError.apply( self, [request, args] ); + return undefined; + } + json = json.join(''); + //print("READ IN JSON: "+json); + WhAjaj.Connector.sendHelper.onSendSuccess.apply( self, [request, json, args] ); +}/*rhinoLocalBinary*/ ADDED ajax/js/json2.js Index: ajax/js/json2.js ================================================================== --- ajax/js/json2.js +++ ajax/js/json2.js @@ -0,0 +1,476 @@ +/* + http://www.JSON.org/json2.js + 2009-06-29 + + Public Domain. + + NO WARRANTY EXPRESSED OR IMPLIED. USE AT YOUR OWN RISK. + + See http://www.JSON.org/js.html + + This file creates a global JSON object containing two methods: stringify + and parse. + + JSON.stringify(value, replacer, space) + value any JavaScript value, usually an object or array. + + replacer an optional parameter that determines how object + values are stringified for objects. It can be a + function or an array of strings. + + space an optional parameter that specifies the indentation + of nested structures. If it is omitted, the text will + be packed without extra whitespace. If it is a number, + it will specify the number of spaces to indent at each + level. 
If it is a string (such as '\t' or ' '), + it contains the characters used to indent at each level. + + This method produces a JSON text from a JavaScript value. + + When an object value is found, if the object contains a toJSON + method, its toJSON method will be called and the result will be + stringified. A toJSON method does not serialize: it returns the + value represented by the name/value pair that should be serialized, + or undefined if nothing should be serialized. The toJSON method + will be passed the key associated with the value, and this will be + bound to the object holding the key. + + For example, this would serialize Dates as ISO strings. + + Date.prototype.toJSON = function (key) { + function f(n) { + // Format integers to have at least two digits. + return n < 10 ? '0' + n : n; + } + + return this.getUTCFullYear() + '-' + + f(this.getUTCMonth() + 1) + '-' + + f(this.getUTCDate()) + 'T' + + f(this.getUTCHours()) + ':' + + f(this.getUTCMinutes()) + ':' + + f(this.getUTCSeconds()) + 'Z'; + }; + + You can provide an optional replacer method. It will be passed the + key and value of each member, with this bound to the containing + object. The value that is returned from your method will be + serialized. If your method returns undefined, then the member will + be excluded from the serialization. + + If the replacer parameter is an array of strings, then it will be + used to select the members to be serialized. It filters the results + such that only members with keys listed in the replacer array are + stringified. + + Values that do not have JSON representations, such as undefined or + functions, will not be serialized. Such values in objects will be + dropped; in arrays they will be replaced with null. You can use + a replacer function to replace those with JSON values. + JSON.stringify(undefined) returns undefined. + + The optional space parameter produces a stringification of the + value that is filled with line breaks and indentation to make it + easier to read. + + If the space parameter is a non-empty string, then that string will + be used for indentation. If the space parameter is a number, then + the indentation will be that many spaces. + + Example: + + text = JSON.stringify(['e', {pluribus: 'unum'}]); + // text is '["e",{"pluribus":"unum"}]' + + + text = JSON.stringify(['e', {pluribus: 'unum'}], null, '\t'); + // text is '[\n\t"e",\n\t{\n\t\t"pluribus": "unum"\n\t}\n]' + + text = JSON.stringify([new Date()], function (key, value) { + return this[key] instanceof Date ? + 'Date(' + this[key] + ')' : value; + }); + // text is '["Date(---current time---)"]' + + + JSON.parse(text, reviver) + This method parses a JSON text to produce an object or array. + It can throw a SyntaxError exception. + + The optional reviver parameter is a function that can filter and + transform the results. It receives each of the keys and values, + and its return value is used instead of the original value. + If it returns what it received, then the structure is not modified. + If it returns undefined then the member is deleted. + + Example: + + // Parse the text. Values that look like ISO date strings will + // be converted to Date objects. 
+ + myData = JSON.parse(text, function (key, value) { + var a; + if (typeof value === 'string') { + a = +/^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2}(?:\.\d*)?)Z$/.exec(value); + if (a) { + return new Date(Date.UTC(+a[1], +a[2] - 1, +a[3], +a[4], + +a[5], +a[6])); + } + } + return value; + }); + + myData = JSON.parse('["Date(09/09/2001)"]', function (key, value) { + var d; + if (typeof value === 'string' && + value.slice(0, 5) === 'Date(' && + value.slice(-1) === ')') { + d = new Date(value.slice(5, -1)); + if (d) { + return d; + } + } + return value; + }); + + + This is a reference implementation. You are free to copy, modify, or + redistribute. + + This code should be minified before deployment. + See http://javascript.crockford.com/jsmin.html + + USE YOUR OWN COPY. IT IS EXTREMELY UNWISE TO LOAD CODE FROM SERVERS YOU DO + NOT CONTROL. +*/ + +/*jslint evil: true */ + +/*members "", "\b", "\t", "\n", "\f", "\r", "\"", JSON, "\\", apply, + call, charCodeAt, getUTCDate, getUTCFullYear, getUTCHours, + getUTCMinutes, getUTCMonth, getUTCSeconds, hasOwnProperty, join, + lastIndex, length, parse, prototype, push, replace, slice, stringify, + test, toJSON, toString, valueOf +*/ + +// Create a JSON object only if one does not already exist. We create the +// methods in a closure to avoid creating global variables. + +var JSON = JSON || {}; + +(function () { + + function f(n) { + // Format integers to have at least two digits. + return n < 10 ? '0' + n : n; + } + + if (typeof Date.prototype.toJSON !== 'function') { + + Date.prototype.toJSON = function (key) { + + return isFinite(this.valueOf()) ? + this.getUTCFullYear() + '-' + + f(this.getUTCMonth() + 1) + '-' + + f(this.getUTCDate()) + 'T' + + f(this.getUTCHours()) + ':' + + f(this.getUTCMinutes()) + ':' + + f(this.getUTCSeconds()) + 'Z' : null; + }; + + String.prototype.toJSON = + Number.prototype.toJSON = + Boolean.prototype.toJSON = function (key) { + return this.valueOf(); + }; + } + + var cx = /[\u0000\u00ad\u0600-\u0604\u070f\u17b4\u17b5\u200c-\u200f\u2028-\u202f\u2060-\u206f\ufeff\ufff0-\uffff]/g, + escapable = /[\\\"\x00-\x1f\x7f-\x9f\u00ad\u0600-\u0604\u070f\u17b4\u17b5\u200c-\u200f\u2028-\u202f\u2060-\u206f\ufeff\ufff0-\uffff]/g, + gap, + indent, + meta = { // table of character substitutions + '\b': '\\b', + '\t': '\\t', + '\n': '\\n', + '\f': '\\f', + '\r': '\\r', + '"' : '\\"', + '\\': '\\\\' + }, + rep; + + + function quote(string) { + +// If the string contains no control characters, no quote characters, and no +// backslash characters, then we can safely slap some quotes around it. +// Otherwise we must also replace the offending characters with safe escape +// sequences. + + escapable.lastIndex = 0; + return escapable.test(string) ? + '"' + string.replace(escapable, function (a) { + var c = meta[a]; + return typeof c === 'string' ? c : + '\\u' + ('0000' + a.charCodeAt(0).toString(16)).slice(-4); + }) + '"' : + '"' + string + '"'; + } + + + function str(key, holder) { + +// Produce a string from holder[key]. + + var i, // The loop counter. + k, // The member key. + v, // The member value. + length, + mind = gap, + partial, + value = holder[key]; + +// If the value has a toJSON method, call it to obtain a replacement value. + + if (value && typeof value === 'object' && + typeof value.toJSON === 'function') { + value = value.toJSON(key); + } + +// If we were called with a replacer function, then call the replacer to +// obtain a replacement value. 
+ + if (typeof rep === 'function') { + value = rep.call(holder, key, value); + } + +// What happens next depends on the value's type. + + switch (typeof value) { + case 'string': + return quote(value); + + case 'number': + +// JSON numbers must be finite. Encode non-finite numbers as null. + + return isFinite(value) ? String(value) : 'null'; + + case 'boolean': + case 'null': + +// If the value is a boolean or null, convert it to a string. Note: +// typeof null does not produce 'null'. The case is included here in +// the remote chance that this gets fixed someday. + + return String(value); + +// If the type is 'object', we might be dealing with an object or an array or +// null. + + case 'object': + +// Due to a specification blunder in ECMAScript, typeof null is 'object', +// so watch out for that case. + + if (!value) { + return 'null'; + } + +// Make an array to hold the partial results of stringifying this object value. + + gap += indent; + partial = []; + +// Is the value an array? + + if (Object.prototype.toString.apply(value) === '[object Array]') { + +// The value is an array. Stringify every element. Use null as a placeholder +// for non-JSON values. + + length = value.length; + for (i = 0; i < length; i += 1) { + partial[i] = str(i, value) || 'null'; + } + +// Join all of the elements together, separated with commas, and wrap them in +// brackets. + + v = partial.length === 0 ? '[]' : + gap ? '[\n' + gap + + partial.join(',\n' + gap) + '\n' + + mind + ']' : + '[' + partial.join(',') + ']'; + gap = mind; + return v; + } + +// If the replacer is an array, use it to select the members to be stringified. + + if (rep && typeof rep === 'object') { + length = rep.length; + for (i = 0; i < length; i += 1) { + k = rep[i]; + if (typeof k === 'string') { + v = str(k, value); + if (v) { + partial.push(quote(k) + (gap ? ': ' : ':') + v); + } + } + } + } else { + +// Otherwise, iterate through all of the keys in the object. + + for (k in value) { + if (Object.hasOwnProperty.call(value, k)) { + v = str(k, value); + if (v) { + partial.push(quote(k) + (gap ? ': ' : ':') + v); + } + } + } + } + +// Join all of the member texts together, separated with commas, +// and wrap them in braces. + + v = partial.length === 0 ? '{}' : + gap ? '{\n' + gap + partial.join(',\n' + gap) + '\n' + + mind + '}' : '{' + partial.join(',') + '}'; + gap = mind; + return v; + } + } + +// If the JSON object does not yet have a stringify method, give it one. + + if (typeof JSON.stringify !== 'function') { + JSON.stringify = function (value, replacer, space) { + +// The stringify method takes a value and an optional replacer, and an optional +// space parameter, and returns a JSON text. The replacer can be a function +// that can replace values, or an array of strings that will select the keys. +// A default replacer method can be provided. Use of the space parameter can +// produce text that is more easily readable. + + var i; + gap = ''; + indent = ''; + +// If the space parameter is a number, make an indent string containing that +// many spaces. + + if (typeof space === 'number') { + for (i = 0; i < space; i += 1) { + indent += ' '; + } + +// If the space parameter is a string, it will be used as the indent string. + + } else if (typeof space === 'string') { + indent = space; + } + +// If there is a replacer, it must be a function or an array. +// Otherwise, throw an error. 
+ + rep = replacer; + if (replacer && typeof replacer !== 'function' && + (typeof replacer !== 'object' || + typeof replacer.length !== 'number')) { + throw new Error('JSON.stringify'); + } + +// Make a fake root object containing our value under the key of ''. +// Return the result of stringifying the value. + + return str('', {'': value}); + }; + } + + +// If the JSON object does not yet have a parse method, give it one. + + if (typeof JSON.parse !== 'function') { + JSON.parse = function (text, reviver) { + +// The parse method takes a text and an optional reviver function, and returns +// a JavaScript value if the text is a valid JSON text. + + var j; + + function walk(holder, key) { + +// The walk method is used to recursively walk the resulting structure so +// that modifications can be made. + + var k, v, value = holder[key]; + if (value && typeof value === 'object') { + for (k in value) { + if (Object.hasOwnProperty.call(value, k)) { + v = walk(value, k); + if (v !== undefined) { + value[k] = v; + } else { + delete value[k]; + } + } + } + } + return reviver.call(holder, key, value); + } + + +// Parsing happens in four stages. In the first stage, we replace certain +// Unicode characters with escape sequences. JavaScript handles many characters +// incorrectly, either silently deleting them, or treating them as line endings. + + cx.lastIndex = 0; + if (cx.test(text)) { + text = text.replace(cx, function (a) { + return '\\u' + + ('0000' + a.charCodeAt(0).toString(16)).slice(-4); + }); + } + +// In the second stage, we run the text against regular expressions that look +// for non-JSON patterns. We are especially concerned with '()' and 'new' +// because they can cause invocation, and '=' because it can cause mutation. +// But just to be safe, we want to reject all unexpected forms. + +// We split the second stage into 4 regexp operations in order to work around +// crippling inefficiencies in IE's and Safari's regexp engines. First we +// replace the JSON backslash pairs with '@' (a non-JSON character). Second, we +// replace all simple value tokens with ']' characters. Third, we delete all +// open brackets that follow a colon or comma or that begin the text. Finally, +// we look to see that the remaining characters are only whitespace or ']' or +// ',' or ':' or '{' or '}'. If that is so, then the text is safe for eval. + + if (/^[\],:{}\s]*$/. +test(text.replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g, '@'). +replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, ']'). +replace(/(?:^|:|,)(?:\s*\[)+/g, ''))) { + +// In the third stage we use the eval function to compile the text into a +// JavaScript structure. The '{' operator is subject to a syntactic ambiguity +// in JavaScript: it can begin a block or an object literal. We wrap the text +// in parens to eliminate the ambiguity. + + j = eval('(' + text + ')'); + +// In the optional fourth stage, we recursively walk the new structure, passing +// each name/value pair to a reviver function for possible transformation. + + return typeof reviver === 'function' ? + walk({'': j}, '') : j; + } + +// If the text is not JSON parseable, then a SyntaxError is thrown. + + throw new SyntaxError('JSON.parse'); + }; + } +}()); ADDED ajax/js/whajaj.js Index: ajax/js/whajaj.js ================================================================== --- ajax/js/whajaj.js +++ ajax/js/whajaj.js @@ -0,0 +1,1221 @@ +/** + This file provides a JS interface into the core functionality of + JSON-centric back-ends. 
It sends GET or JSON POST requests to
+    a back-end and expects JSON responses.  The exact semantics of
+    the underlying back-end and overlying front-end are not its concern,
+    and it leaves the interpretation of the data up to the client/server
+    insofar as possible.
+
+    All functionality is part of a class named WhAjaj, and that class
+    acts as a namespace for this framework.
+
+    Author: Stephan Beal (http://wanderinghorse.net/home/stephan/)
+
+    License: Public Domain
+
+    This framework is directly derived from code originally found in
+    http://code.google.com/p/jsonmessage, and later in
+    http://whiki.wanderinghorse.net, where it contained quite a bit
+    of application-specific logic. It was eventually (the 3rd time i
+    needed it) split off into its own library to simplify inclusion
+    into my many mini-projects.
+*/
+
+
+/**
+    The WhAjaj function is primarily a namespace, and not intended
+    to be called or instantiated via the 'new' operator.
+*/
+function WhAjaj()
+{
+}
+
+/** Returns a millisecond Unix Epoch timestamp. */
+WhAjaj.msTimestamp = function()
+{
+    return (new Date()).getTime();
+};
+
+/** Returns a Unix Epoch timestamp (in seconds) in integer format.
+
+    Reminder to self: (1.1 %1.2) evaluates to a floating-point value
+    in JS, and thus this implementation is less than optimal.
+*/
+WhAjaj.unixTimestamp = function()
+{
+    var ts = (new Date()).getTime();
+    return parseInt( ""+((ts / 1000) % ts) );
+};
+
+/**
+    Returns true if v is-a Array instance.
+*/
+WhAjaj.isArray = function( v )
+{
+    return (v &&
+        (v instanceof Array) ||
+        (Object.prototype.toString.call(v) === "[object Array]")
+    );
+    /* Reminders to self:
+        typeof [] == "object"
+        toString.call([]) == "[object Array]"
+        ([]).toString() == empty
+    */
+};
+
+/**
+    Returns true if v is-a Object instance.
+*/
+WhAjaj.isObject = function( v )
+{
+    return v &&
+        (v instanceof Object) &&
+        ('[object Object]' === Object.prototype.toString.apply(v) );
+};
+
+/**
+    Returns true if v is-a Function instance.
+*/
+WhAjaj.isFunction = function(obj)
+{
+    return obj
+    && (
+        (obj instanceof Function)
+        || ('function' === typeof obj)
+        || ("[object Function]" === Object.prototype.toString.call(obj))
+        )
+    ;
+};
+
+/**
+    Parses a window.location.search-style string into an object
+    containing key/value pairs of URL arguments (already urldecoded).
+
+    If the str argument is not passed (arguments.length==0) then
+    window.location.search.substring(1) is used by default. If
+    neither str is passed in nor window exists then false is returned.
+
+    On success it returns an Object containing the key/value pairs
+    parsed from the string. Keys which have no value are treated
+    as having the boolean true value.
+
+    FIXME: for keys in the form "name[]", build an array of results,
+    like PHP does.
+
+*/
+WhAjaj.processUrlArgs = function(str) {
+    if( 0 === arguments.length ) {
+        if( ('undefined' === typeof window) ||
+            !window.location ||
+            !window.location.search ) return false;
+        else str = (''+window.location.search).substring(1);
+    }
+    if( ! str ) return false;
+    str = (''+str).split(/#/,2)[0]; // remove #... to avoid it being added as part of the last value.
+    var args = {};
+    var sp = str.split(/&+/);
+    var rx = /^([^=]+)(=(.+))?/;
+    var i, m;
+    for( i in sp ) {
+        m = rx.exec( sp[i] );
+        if( ! m ) continue;
+        args[decodeURIComponent(m[1])] = (m[3] ? decodeURIComponent(m[3]) : true);
+    }
+    return args;
+};
+
+/**
+    A simple wrapper around JSON.stringify(), using my own personal
+    preferred values for the 2nd and 3rd parameters.
To globally
+    set its indentation level, assign WhAjaj.stringify.indent to
+    an integer value (0 for no indentation).
+
+    This function is intended only for human-readable output, not
+    generic over-the-wire JSON output (where JSON.stringify(val) will
+    produce smaller results).
+*/
+WhAjaj.stringify = function(val) {
+    if( ! arguments.callee.indent ) arguments.callee.indent = 4;
+    return JSON.stringify(val,0,arguments.callee.indent);
+};
+
+/**
+    Each instance of this class holds state information for making
+    AJAJ requests to a back-end system. While clients may use one
+    "requester" object per connection attempt, for connections to the
+    same back-end, using an instance configured for that back-end
+    can simplify usage. This class is designed so that the actual
+    connection-related details (i.e. _how_ it connects to the
+    back-end) may be re-implemented to use a client's preferred
+    connection mechanism (e.g. jQuery).
+
+    The optional opt parameter may be an object with any (or all) of
+    the properties documented for WhAjaj.Connector.options.ajax.
+    Properties set here (or later via modification of the "options"
+    property of this object) will be used in calls to
+    WhAjaj.Connector.sendRequest(), and these override (normally) any
+    options set in WhAjaj.Connector.options.ajax. Note that
+    WhAjaj.Connector.sendRequest() _also_ takes an options object,
+    and ones passed there will override, for purposes of that one
+    request, any options passed in here or defined in
+    WhAjaj.Connector.options.ajax. See WhAjaj.Connector.options.ajax
+    and WhAjaj.Connector.prototype.sendRequest() for more details
+    about the precedence of options.
+
+    Sample usage:
+
+    @code
+    // Set up common connection-level options:
+    var cgi = new WhAjaj.Connector({
+        url: '/cgi-bin/my.cgi',
+        timeout:10000,
+        onResponse(resp,req) { alert(JSON.stringify(resp,0,4)); },
+        onError(req,opt) {
+            alert(opt.errorMessage);
+        }
+    });
+    // Any of those options may optionally be set globally in
+    // WhAjaj.Connector.options.ajax (onError(), beforeSend(), and afterSend()
+    // are often easiest/most useful to set globally).
+
+    // Get list of pages...
+    cgi.sendRequest( null, {
+        onResponse(resp,req){ alert(WhAjaj.stringify(resp)); }
+    });
+    @endcode
+
+    For common request types, clients can add functions to this
+    object which act as wrappers for backend-specific functionality. As
+    a simple example:
+
+    @code
+    cgi.login = function(name,pw,ajajOpt) {
+        this.sendRequest(
+            {command:"json/login",
+             name:name,
+             password:pw
+            }, ajajOpt );
+    };
+    @endcode
+
+    TODOs:
+
+    - Caching of page-load requests, with a configurable lifetime.
+
+    - Use-cases like the above login() function are a tiny bit
+    problematic to implement when each request has a different URL
+    path (i know this from the whiki and fossil implementations).
+    This is partly a side-effect of design decisions made back in
+    the very first days of this code's life. i need to go through
+    and see where i can bend those conventions a bit (where it won't
+    break my other apps unduly).
+*/
+WhAjaj.Connector = function(opt)
+{
+    if(WhAjaj.isObject(opt)) this.options = opt;
+    //TODO?: this.$cache = {};
+};
+
+/**
+    The core options used by WhAjaj.Connector instances for performing
+    network operations. These options can (and some _should_)
+    be changed by a client application. They can also be changed
+    on specific instances of WhAjaj.Connector, but for most applications
+    it is simpler to set them here and not have to bother with configuring
+    each WhAjaj.Connector instance.
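+
+    For example, an application might establish app-wide defaults once
+    at startup (an illustrative sketch; the URL shown is hypothetical):
+
+    @code
+    WhAjaj.Connector.options.ajax.url = '/cgi-bin/my.cgi';
+    WhAjaj.Connector.options.ajax.timeout = 15000;
+    WhAjaj.Connector.options.ajax.onError = function(req,opt) {
+        alert('AJAJ request failed: ' + opt.errorMessage);
+    };
+    @endcode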
Apps which use multiple back-ends at one time, + however, will need to customize each instance for a given back-end. +*/ +WhAjaj.Connector.options = { + /** + A (meaningless) prefix to apply to WhAjaj.Connector-generated + request IDs. + */ + requestIdPrefix:'WhAjaj.Connector-', + /** + Default options for WhAjaj.Connector.sendRequest() connection + parameters. This object holds only connection-related + options and callbacks (all optional), and not options + related to the required JSON structure of any given request. + i.e. the page name used in a get-page request are not set + here but are specified as part of the request object. + + These connection options are a "normalized form" of options + often found in various AJAX libraries like jQuery, + Prototype, dojo, etc. This approach allows us to swap out + the real connection-related parts by writing a simple proxy + which transforms our "normalized" form to the + backend-specific form. For examples, see the various + implementations stored in WhAjaj.Connector.sendImpls. + + The following callback options are, in practice, almost + always set globally to some app-wide defaults: + + - onError() to report errors using a common mechanism. + - beforeSend() to start a visual activity notification + - afterSend() to disable the visual activity notification + + However, be aware that if any given WhAjaj.Connector instance is + given its own before/afterSend callback then those will + override these. Mixing shared/global and per-instance + callbacks can potentially lead to confusing results if, e.g., + the beforeSend() and afterSend() functions have side-effects + but are not used with their proper before/after partner. + + TODO: rename this to 'ajaj' (the name is historical). The + problem with renaming it is is that the word 'ajax' is + pretty prevelant in the source tree, so i can't globally + swap it out. + */ + ajax: { + /** + URL of the back-end server/CGI. + */ + url: '/some/path', + + /** + Connection method. Some connection-related functions might + override any client-defined setting. + + Must be one of 'GET' or 'POST'. For custom connection + implementation, it may optionally be some + implementation-specified value. + + Normally the API can derive this value automatically - if the + request uses JSON data it is POSTed, else it is GETted. + */ + method:'GET', + + /** + A hint whether to run the operation asynchronously or + not. Not all concrete WhAjaj.Connector.sendImpl() + implementations can support this. Interestingly, at + least one popular AJAX toolkit does not document + supporting _synchronous_ AJAX operations. All common + browser-side implementations support async operation, but + non-browser implementations might not. + */ + asynchronous:true, + + /** + A HTTP authentication login name for the AJAX + connection. Not all concrete WhAjaj.Connector.sendImpl() + implementations can support this. + */ + loginName:undefined, + + /** + An HTTP authentication login password for the AJAJ + connection. Not all concrete WhAjaj.Connector.sendImpl() + implementations can support this. + */ + loginPassword:undefined, + + /** + A connection timeout, in milliseconds, for establishing + an AJAJ connection. Not all concrete + WhAjaj.Connector.sendImpl() implementations can support this. + */ + timeout:10000, + + /** + If an AJAJ request receives JSON data from the back-end, + that data is passed as a plain Object as the response + parameter (exception: in jsonp mode it is passed a + string (why???)). 
The initiating request object is + passed as the second parameter, but clients can normally + ignore it (only those which need a way to map specific + requests to responses will need it). The 3rd parameter + is the same as the 'this' object for the context of the + callback, but is provided because the instance-level + callbacks (set in (WhAjaj.Connector instance).callbacks, + require it in some cases (because their 'this' is + different!). + + Note that the response might contain error information + which comes from the back-end. The difference between + this error info and the info passed to the onError() + callback is that this data indicates an + application-level error, whereas onError() is used to + report connection-level problems or when the backend + produces non-JSON data (which, when not in jsonp mode, + is unexpected and is as fatal to the request as a + connection error). + */ + onResponse: function(response, request, opt){}, + + /** + If an AJAX request fails to establish a connection or it + receives non-JSON data from the back-end, this function + is called (e.g. timeout error or host name not + resolvable). It is passed the originating request and the + "normalized" connection parameters used for that + request. The connectOpt object "should" (or "might") + have an "errorMessage" property which describes the + nature of the problem. + + Clients will almost always want to replace the default + implementation with something which integrates into + their application. + */ + onError: function(request, connectOpt) + { + alert('AJAJ request failed:\n' + +'Connection information:\n' + +JSON.stringify(connectOpt,0,4) + ); + }, + + /** + Called before each connection attempt is made. Clients + can use this to, e.g., enable a visual "network activity + notification" for the user. It is passed the original + request object and the normalized connection parameters + for the request. If this function changes opt, those + changes _are_ applied to the subsequent request. If this + function throws, neither the onError() nor afterSend() + callbacks are triggered and WhAjaj.Connector.sendImpl() + propagates the exception back to the caller. + */ + beforeSend: function(request,opt){}, + + /** + Called after an AJAJ connection attempt completes, + regardless of success or failure. Passed the same + parameters as beforeSend() (see that function for + details). + + Here's an example of setting up a visual notification on + ajax operations using jQuery (but it's also easy to do + without jQuery as well): + + @code + function startAjaxNotif(req,opt) { + var me = arguments.callee; + var c = ++me.ajaxCount; + me.element.text( c + " pending AJAX operation(s)..." ); + if( 1 == c ) me.element.stop().fadeIn(); + } + startAjaxNotif.ajaxCount = 0. + startAjaxNotif.element = jQuery('#whikiAjaxNotification'); + + function endAjaxNotif() { + var c = --startAjaxNotif.ajaxCount; + startAjaxNotif.element.text( c+" pending AJAX operation(s)..." ); + if( 0 == c ) startAjaxNotif.element.stop().fadeOut(); + } + @endcode + + Set the beforeSend/afterSend properties to those + functions to enable the notifications by default. + */ + afterSend: function(request,opt){}, + + /** + If jsonp is a string then the WhAjaj-internal response + handling code ASSUMES that the response contains a JSONP-style + construct and eval()s it after afterSend() but before onResponse(). + In this case, onResponse() will get a string value for the response + instead of a response object parsed from JSON. 
+ */ + jsonp:undefined, + /** + Don't use yet. Planned future option. + */ + propagateExceptions:false + } +}; + + +/** + WhAjaj.Connector.prototype.callbacks defines callbacks analog + to the onXXX callbacks defined in WhAjaj.Connector.options.ajax, + with two notable differences: + + 1) these callbacks, if set, are called in addition to any + request-specific callback. The intention is to allow a framework to set + "framework-level" callbacks which should be called independently of the + request-specific callbacks (without interfering with them, e.g. + requiring special re-forwarding features). + + 2) The 'this' object in these callbacks is the Connector instance + associated with the callback, whereas the "other" onXXX form has its + "ajax options" object as its this. + + When this API says that an onXXX callback will be called for a request, + both the request's onXXX (if set) and this one (if set) will be called. +*/ +WhAjaj.Connector.prototype.callbacks = {}; +/** + Instance-specific values for AJAJ-level properties (as opposed to + application-level request properties). Options set here "override" those + specified in WhAjaj.Connector.options.ajax and are "overridden" by + options passed to sendRequest(). +*/ +WhAjaj.Connector.prototype.options = {}; + + +/** + Tries to find the given key in any of the following, returning + the first match found: opt, this.options, WhAjaj.Connector.options.ajax. + + Returns undefined if key is not found. +*/ +WhAjaj.Connector.prototype.derivedOption = function(key,opt) { + var v = opt ? opt[key] : undefined; + if( undefined !== v ) return v; + else v = this.options[key]; + if( undefined !== v ) return v; + else v = WhAjaj.Connector.options.ajax[key]; + return v; +}; + +/** + Returns a unique string on each call containing a generic + reandom request identifier string. This is not used by the core + API but can be used by client code to generate unique IDs for + each request (if needed). + + The exact format is unspecified and may change in the future. + + Request IDs can be used by clients to "match up" responses to + specific requests if needed. In practice, however, they are + seldom, if ever, needed. When passing several concurrent + requests through the same response callback, it might be useful + for some clients to be able to distinguish, possibly re-routing + them through other handlers based on the originating request type. + + If this.options.requestIdPrefix or + WhAjaj.Connector.options.requestIdPrefix is set then that text + is prefixed to the returned string. +*/ +WhAjaj.Connector.prototype.generateRequestId = function() +{ + if( undefined === arguments.callee.sequence ) + { + arguments.callee.sequence = 0; + } + var pref = this.options.requestIdPrefix || WhAjaj.Connector.options.requestIdPrefix || ''; + return pref + + WhAjaj.msTimestamp() + + '/'+(Math.round( Math.random() * 100000000) )+ + ':'+(++arguments.callee.sequence); +}; + +/** + Copies (SHALLOWLY) all properties in opt to this.options. +*/ +WhAjaj.Connector.prototype.addOptions = function(opt) { + var k, v; + for( k in opt ) { + if( ! opt.hasOwnProperty(k) ) continue /* proactive Prototype kludge! */; + this.options[k] = opt[k]; + } + return this.options; +}; + +/** + An internal helper object which holds several functions intended + to simplify the creation of concrete communication channel + implementations for WhAjaj.Connector.sendImpl(). These operations + take care of some of the more error-prone parts of ensuring that + onResponse(), onError(), etc. 
callbacks are called consistently + using the same rules. +*/ +WhAjaj.Connector.sendHelper = { + /** + opt is assumed to be a normalized set of + WhAjaj.Connector.sendRequest() options. This function + creates a url by concatenating opt.url and some form of + opt.urlParam. + + If opt.urlParam is an object or string then it is appended + to the url. An object is assumed to be a one-dimensional set + of simple (urlencodable) key/value pairs, and not larger + data structures. A string value is assumed to be a + well-formed, urlencoded set of key/value pairs separated by + '&' characters. + + The new/normalized URL is returned (opt is not modified). If + opt.urlParam is not set then opt.url is returned (or an + empty string if opt.url is itself a false value). + + TODO: if opt is-a Object and any key points to an array, + build up a list of keys in the form "keyname[]". We could + arguably encode sub-objects like "keyname[subkey]=...", but + i don't know if that's conventions-compatible with other + frameworks. + */ + normalizeURL: function(opt) { + var u = opt.url || ''; + if( opt.urlParam ) { + var addQ = (u.indexOf('?') >= 0) ? false : true; + var addA = addQ ? false : ((u.indexOf('&')>=0) ? true : false); + var tail = ''; + if( WhAjaj.isObject(opt.urlParam) ) { + var li = [], k; + for( k in opt.urlParam) { + li.push( k+'='+encodeURIComponent( opt.urlParam[k] ) ); + } + tail = li.join('&'); + } + else if( 'string' === typeof opt.urlParam ) { + tail = opt.urlParam; + } + u = u + (addQ ? '?' : '') + (addA ? '&' : '') + tail; + } + return u; + }, + /** + Should be called by WhAjaj.Connector.sendImpl() + implementations after a response has come back. This + function takes care of most of ensuring that framework-level + conventions involving WhAjaj.Connector.options.ajax + properties are followed. + + The request argument must be the original request passed to + the sendImpl() function. It may legally be null for GET requests. + + The opt object should be the normalized AJAX options used + for the connection. + + The resp argument may be either a plain Object or a string + (in which case it is assumed to be JSON). + + The 'this' object for this call MUST be a WhAjaj.Connector + instance in order for callback processing to work properly. + + This function takes care of the following: + + - Calling opt.afterSend() + + - If resp is a string, de-JSON-izing it to an object. + + - Calling opt.onResponse() + + - Calling opt.onError() in several common (potential) error + cases. + + - If resp is-a String and opt.jsonp then resp is assumed to be + a JSONP-form construct and is eval()d BEFORE opt.onResponse() + is called. It is arguable to eval() it first, but the logic + integrates better with the non-jsonp handler. + + The sendImpl() should return immediately after calling this. + + The sendImpl() must call only one of onSendSuccess() or + onSendError(). It must call one of them or it must implement + its own response/error handling, which is not recommended + because getting the documented semantics of the + onError/onResponse/afterSend handling correct can be tedious. 
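+
+        A skeletal custom sendImpl() might use these helpers roughly as
+        follows (an illustrative sketch; fetchSomehow() is a hypothetical
+        transport function, not part of this API):
+
+        @code
+        WhAjaj.Connector.prototype.sendImpl = function(request, args) {
+            try {
+                // Hypothetical synchronous transport:
+                var raw = fetchSomehow( args.url, request );
+                WhAjaj.Connector.sendHelper.onSendSuccess.apply( this, [request, raw, args] );
+            } catch(e) {
+                args.errorMessage = e.toString();
+                WhAjaj.Connector.sendHelper.onSendError.apply( this, [request, args] );
+            }
+        };
+        @endcode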
+ */ + onSendSuccess:function(request,resp,opt) { + var cb = this.callbacks || {}; + if( WhAjaj.isFunction(cb.afterSend) ) { + try {cb.afterSend( request, opt );} + catch(e){} + } + if( WhAjaj.isFunction(opt.afterSend) ) { + try {opt.afterSend( request, opt );} + catch(e){} + } + function doErr(){ + if( WhAjaj.isFunction(cb.onError) ) { + try {cb.onError( request, opt );} + catch(e){} + } + if( WhAjaj.isFunction(opt.onError) ) { + try {opt.onError( request, opt );} + catch(e){} + } + } + if( ! resp ) { + opt.errorMessage = "Sending of request succeeded but returned no data!"; + doErr(); + return false; + } + + if( 'string' === typeof resp ) { + try { + resp = opt.jsonp ? eval(resp) : JSON.parse(resp); + } catch(e) { + opt.errorMessage = e.toString(); + doErr(); + return; + } + } + try { + if( WhAjaj.isFunction( cb.onResponse ) ) { + cb.onResponse( resp, request, opt ); + } + if( WhAjaj.isFunction( opt.onResponse ) ) { + opt.onResponse( resp, request, opt ); + } + return true; + } + catch(e) { + opt.errorMessage = "Exception while handling inbound JSON response:\n" + + e + +"\nOriginal response data:\n"+JSON.stringify(resp,0,2) + ; + ; + doErr(); + return false; + } + }, + /** + Should be called by sendImpl() implementations after a response + has failed to connect (e.g. could not resolve host or timeout + reached). This function takes care of most of ensuring that + framework-level conventions involving WhAjaj.Connector.options.ajax + properties are followed. + + The request argument must be the original request passed to + the sendImpl() function. It may legally be null for GET + requests. + + The 'this' object for this call MUST be a WhAjaj.Connector + instance in order for callback processing to work properly. + + The opt object should be the normalized AJAX options used + for the connection. By convention, the caller of this + function "should" set opt.errorMessage to contain a + human-readable description of the error. + + The sendImpl() should return immediately after calling this. The + return value from this function is unspecified. + */ + onSendError: function(request,opt) { + var cb = this.callbacks || {}; + if( WhAjaj.isFunction(cb.afterSend) ) { + try {cb.afterSend( request, opt );} + catch(e){} + } + if( WhAjaj.isFunction(opt.afterSend) ) { + try {opt.afterSend( request, opt );} + catch(e){} + } + if( WhAjaj.isFunction( cb.onError ) ) { + try {cb.onError( request, opt );} + catch(e) {/*ignore*/} + } + if( WhAjaj.isFunction( opt.onError ) ) { + try {opt.onError( request, opt );} + catch(e) {/*ignore*/} + } + } +}; + +/** + WhAjaj.Connector.sendImpls holds several concrete + implementations of WhAjaj.Connector.prototype.sendImpl(). To use + a specific implementation by default assign + WhAjaj.Connector.prototype.sendImpl to one of these functions. + + The functions defined here require that the 'this' object be-a + WhAjaj.Connector instance. + + Historical notes: + + a) We once had an implementation based on Prototype, but that + library just pisses me off (they change base-most types' + prototypes, introducing side-effects in client code which + doesn't even use Prototype). The Prototype version at the time + had a serious toJSON() bug which caused empty arrays to + serialize as the string "[]", which broke a bunch of my code. + (That has been fixed in the mean time, but i don't use + Prototype.) 
+ + b) We once had an implementation for the dojo library, + + If/when the time comes to add Prototype/dojo support, we simply + need to port: + + http://code.google.com/p/jsonmessage/source/browse/trunk/lib/JSONMessage/JSONMessage.inc.js + + (search that file for "dojo" and "Prototype") to this tree. That + code is this code's generic grandfather and they are still very + similar, so a port is trivial. + +*/ +WhAjaj.Connector.sendImpls = { + /** + This is a concrete implementation of + WhAjaj.Connector.prototype.sendImpl() which uses the + environment's native XMLHttpRequest class to send whiki + requests and fetch the responses. + + The only argument must be a connection properties object, as + constructed by WhAjaj.Connector.normalizeAjaxParameters(). + + If window.firebug is set then window.firebug.watchXHR() is + called to enable monitoring of the XMLHttpRequest object. + + This implementation honors the loginName and loginPassword + connection parameters. + + Returns the XMLHttpRequest object. + + This implementation requires that the 'this' object be-a + WhAjaj.Connector. + + This implementation uses setTimeout() to implement the + timeout support, and thus the JS engine must provide that + functionality. + */ + XMLHttpRequest: function(request, args) + { + var json = WhAjaj.isObject(request) ? JSON.stringify(request) : request; + var xhr = new XMLHttpRequest(); + var startTime = (new Date()).getTime(); + var timeout = args.timeout || 10000/*arbitrary!*/; + var hitTimeout = false; + var done = false; + var tmid /* setTimeout() ID */; + var whself = this; + function handleTimeout() + { + hitTimeout = true; + if( ! done ) + { + var now = (new Date()).getTime(); + try { xhr.abort(); } catch(e) {/*ignore*/} + // see: http://www.w3.org/TR/XMLHttpRequest/#the-abort-method + args.errorMessage = "Timeout of "+timeout+"ms reached after "+(now-startTime)+"ms during AJAX request."; + WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); + } + return; + } + function onStateChange() + { // reminder to self: apparently 'this' is-not-a XHR :/ + if( hitTimeout ) + { /* we're too late - the error was already triggered. */ + return; + } + + if( 4 == xhr.readyState ) + { + done = true; + if( tmid ) + { + clearTimeout( tmid ); + tmid = null; + } + if( (xhr.status >= 200) && (xhr.status < 300) ) + { + WhAjaj.Connector.sendHelper.onSendSuccess.apply( whself, [request, xhr.responseText, args] ); + return; + } + else + { + if( undefined === args.errorMessage ) + { + args.errorMessage = "Error sending a '"+args.method+"' AJAX request to " + +"["+args.url+"]: " + +"Status text=["+xhr.statusText+"]" + ; + WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); + } + else { /*maybe it was was set by the timeout handler. */ } + return; + } + } + }; + + xhr.onreadystatechange = onStateChange; + if( ('undefined'!==(typeof window)) && ('firebug' in window) && ('watchXHR' in window.firebug) ) + { /* plug in to firebug lite's XHR monitor... 
*/ + window.firebug.watchXHR( xhr ); + } + try + { + //alert( JSON.stringify( args )); + function xhrOpen() + { + if( ('loginName' in args) && args.loginName ) + { + xhr.open( args.method, args.url, args.asynchronous, args.loginName, args.loginPassword ); + } + else + { + xhr.open( args.method, args.url, args.asynchronous ); + } + } + if( json && ('POST' === args.method.toUpperCase()) ) + { + xhrOpen(); + xhr.setRequestHeader("Content-Type", "application/json; charset=utf-8"); + // Google Chrome warns that it refuses to set these + // "unsafe" headers (his words, not mine): + // xhr.setRequestHeader("Content-length", json.length); + // xhr.setRequestHeader("Connection", "close"); + xhr.send( json ); + } + else /* assume GET */ + { + xhrOpen(); + xhr.send(null); + } + tmid = setTimeout( handleTimeout, timeout ); + return xhr; + } + catch(e) + { + args.errorMessage = e.toString(); + WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); + return undefined; + } + }/*XMLHttpRequest()*/, + /** + This is a concrete implementation of + WhAjaj.Connector.prototype.sendImpl() which uses the jQuery + AJAX API to send requests and fetch the responses. + + The first argument may be either null/false, an Object + containing toJSON-able data to post to the back-end, or such an + object in JSON string form. + + The second argument must be a connection properties object, as + constructed by WhAjaj.Connector.normalizeAjaxParameters(). + + If window.firebug is set then window.firebug.watchXHR() is + called to enable monitoring of the XMLHttpRequest object. + + This implementation honors the loginName and loginPassword + connection parameters. + + Returns the XMLHttpRequest object. + + This implementation requires that the 'this' object be-a + WhAjaj.Connector. + */ + jQuery:function(request,args) + { + var data = request || undefined; + var whself = this; + if( data ) { + if('string'!==typeof data) { + try { + data = JSON.stringify(data); + } + catch(e) { + WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); + return; + } + } + } + var ajopt = { + url: args.url, + data: data, + type: args.method, + async: args.asynchronous, + password: (undefined !== args.loginPassword) ? args.loginPassword : undefined, + username: (undefined !== args.loginName) ? args.loginName : undefined, + contentType: 'application/json; charset=utf-8', + error: function(xhr, textStatus, errorThrown) + { + //this === the options for this ajax request + args.errorMessage = "Error sending a '"+ajopt.type+"' request to ["+ajopt.url+"]: " + +"Status text=["+textStatus+"]" + +(errorThrown ? ("Error=["+errorThrown+"]") : "") + ; + WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); + }, + success: function(data) + { + WhAjaj.Connector.sendHelper.onSendSuccess.apply( whself, [request, data, args] ); + }, + /* Set dataType=text instead of json to keep jQuery from doing our carefully + written response handling for us. + */ + dataType: 'text' + }; + if( undefined !== args.timeout ) + { + ajopt.timeout = args.timeout; + } + try + { + return jQuery.ajax(ajopt); + } + catch(e) + { + args.errorMessage = e.toString(); + WhAjaj.Connector.sendHelper.onSendError.apply( whself, [request, args] ); + return undefined; + } + }/*jQuery()*/, + /** + This is a concrete implementation of + WhAjaj.Connector.prototype.sendImpl() which uses the rhino + Java API to send requests and fetch the responses. + + Limitations vis-a-vis the interface: + + - timeouts are not supported. 
+ + - asynchronous mode is not supported because implementing it + requires the ability to kill a running thread (which is deprecated + in the Java API). + + TODOs: + + - add socket timeouts. + + - support HTTP proxy. + + The Java APIs support this, it just hasn't been added here yet. + */ + rhino:function(request,args) + { + var self = this; + var data = request || undefined; + if( data ) { + if('string'!==typeof data) { + try { + data = JSON.stringify(data); + } + catch(e) { + WhAjaj.Connector.sendHelper.onSendError.apply( self, [request, args] ); + return; + } + } + } + var url; + var con; + var IO = new JavaImporter(java.io); + var wr; + var rd, ln, json = []; + function setIncomingCookies(list){ + if(!list || !list.length) return; + if( !self.cookies ) self.cookies = {}; + var k, v, i; + for( i = 0; i < list.length; ++i ){ + v = list[i].split('=',2); + k = decodeURIComponent(v[0]) + v = v[0] ? decodeURIComponent(v[0].split(';',2)[0]) : null; + //print("RECEIVED COOKIE: "+k+"="+v); + if(!v) { + delete self.cookies[k]; + continue; + }else{ + self.cookies[k] = v; + } + } + }; + function setOutboundCookies(conn){ + if(!self.cookies) return; + var k, v; + for( k in self.cookies ){ + if(!self.cookies.hasOwnProperty(k)) continue /*kludge for broken JS libs*/; + v = self.cookies[k]; + conn.addRequestProperty("Cookie", encodeURIComponent(k)+'='+encodeURIComponent(v)); + //print("SENDING COOKIE: "+k+"="+v); + } + }; + try{ + url = new java.net.URL( args.url ) + con = url.openConnection(/*FIXME: add proxy support!*/); + con.setRequestProperty("Accept-Charset","utf-8"); + setOutboundCookies(con); + if(data){ + con.setRequestProperty("Content-Type","application/json; charset=utf-8"); + con.setDoOutput( true ); + wr = new IO.OutputStreamWriter(con.getOutputStream()) + wr.write(data); + wr.flush(); + wr.close(); + wr = null; + //print("POSTED: "+data); + } + rd = new IO.BufferedReader(new IO.InputStreamReader(con.getInputStream())); + //var skippedHeaders = false; + while ((line = rd.readLine()) !== null) { + //print("LINE: "+line); + //if(!line.length && !skippedHeaders){ + // skippedHeaders = true; + // json = []; + // continue; + //} + json.push(line); + } + setIncomingCookies(con.getHeaderFields().get("Set-Cookie")); + }catch(e){ + args.errorMessage = e.toString(); + WhAjaj.Connector.sendHelper.onSendError.apply( self, [request, args] ); + return undefined; + } + try { if(wr) wr.close(); } catch(e) { /*ignore*/} + try { if(rd) rd.close(); } catch(e) { /*ignore*/} + json = json.join(''); + //print("READ IN JSON: "+json); + WhAjaj.Connector.sendHelper.onSendSuccess.apply( self, [request, json, args] ); + }/*rhino*/ +}; + +/** + An internal function which takes an object containing properties + for a WhAjaj.Connector network request. This function creates a new + object containing a superset of the properties from: + + a) opt + b) this.options + c) WhAjaj.Connector.options.ajax + + in that order, using the first one it finds. + + All non-function properties are _deeply_ copied via JSON cloning + in order to prevent accidental "cross-request pollenation" (been + there, done that). Functions cannot be cloned and are simply + copied by reference. + + This function throws if JSON-copying one of the options fails + (e.g. due to cyclic data structures). + + Reminder to self: this function does not "normalize" opt.urlParam + by encoding it into opt.url, mainly for historical reasons, but + also because that behaviour was specifically undesirable in this + code's genetic father. 
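+
+    For example, the merged result behaves like this (an illustrative
+    sketch):
+
+    @code
+    var cgi = new WhAjaj.Connector({ url: '/cgi-bin/my.cgi' });
+    var norm = cgi.normalizeAjaxParameters({ method: 'POST' });
+    // norm.method comes from the per-call object, norm.url from the
+    // instance's options, and anything not set in either (e.g. timeout)
+    // from the WhAjaj.Connector.options.ajax defaults.
+    @endcode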
+*/ +WhAjaj.Connector.prototype.normalizeAjaxParameters = function (opt) +{ + var rc = {}; + function merge(k,v) + { + if( rc.hasOwnProperty(k) ) return; + else if( WhAjaj.isFunction(v) ) {} + else if( WhAjaj.isObject(v) ) v = JSON.parse( JSON.stringify(v) ); + rc[k]=v; + } + function cp(obj) { + if( ! WhAjaj.isObject(obj) ) return; + var k; + for( k in obj ) { + if( ! obj.hasOwnProperty(k) ) continue /* i will always hate the Prototype designers for this. */; + merge(k, obj[k]); + } + } + cp( opt ); + cp( this.options ); + cp( WhAjaj.Connector.options.ajax ); + // no, not here: rc.url = WhAjaj.Connector.sendHelper.normalizeURL(rc); + return rc; +}; + +/** + This is the generic interface for making calls to a back-end + JSON-producing request handler. It is a simple wrapper around + WhAjaj.Connector.prototype.sendImpl(), which just normalizes the + connection options for sendImpl() and makes sure that + opt.beforeSend() is (possibly) called. + + The request parameter must either be false/null/empty or a + fully-populated JSON-able request object (which will be sent as + unencoded application/json text), depending on the type of + request being made. It is never semantically legal (in this API) + for request to be a string/number/true/array value. As a rule, + only POST requests use the request data. GET requests should + encode their data in opt.url or opt.urlParam (see below). + + opt must contain the network-related parameters for the request. + Paramters _not_ set in opt are pulled from this.options or + WhAjaj.Connector.options.ajax (in that order, using the first + value it finds). Thus the set of connection-level options used + for the request are a superset of those various sources. + + The "normalized" (or "superimposed") opt object's URL may be + modified before the request is sent, as follows: + + if opt.urlParam is a string then it is assumed to be properly + URL-encoded parameters and is appended to the opt.url. If it is + an Object then it is assumed to be a one-dimensional set of + key/value pairs with simple values (numbers, strings, booleans, + null, and NOT objects/arrays). The keys/values are URL-encoded + and appended to the URL. + + The beforeSend() callback (see below) can modify the options + object before the request attempt is made. + + The callbacks in the normalized opt object will be triggered as + follows (if they are set to Function values): + + - beforeSend(request,opt) will be called before any network + processing starts. If beforeSend() throws then no other + callbacks are triggered and this function propagates the + exception. This function is passed normalized connection options + as its second parameter, and changes this function makes to that + object _will_ be used for the pending connection attempt. + + - onError(request,opt) will be called if a connection to the + back-end cannot be established. It will be passed the original + request object (which might be null, depending on the request + type) and the normalized options object. In the error case, the + opt object passed to onError() "should" have a property called + "errorMessage" which contains a description of the problem. + + - onError(request,opt) will also be called if connection + succeeds but the response is not JSON data. + + - onResponse(response,request) will be called if the response + returns JSON data. That data might hold an error response code - + clients need to check for that. It is passed the response object + (a plain object) and the original request object. 
+ + - afterSend(request,opt) will be called directly after the + AJAX request is finished, before onError() or onResonse() are + called. Possible TODO: we explicitly do NOT pass the response to + this function in order to keep the line between the responsibilities + of the various callback clear (otherwise this could be used the same + as onResponse()). In practice it would sometimes be useful have the + response passed to this function, mainly for logging/debugging + purposes. + + The return value from this function is meaningless because + AJAX operations tend to take place asynchronously. + +*/ +WhAjaj.Connector.prototype.sendRequest = function(request,opt) +{ + if( !WhAjaj.isFunction(this.sendImpl) ) + { + throw new Error("This object has no sendImpl() member function! I don't know how to send the request!"); + } + var ex = false; + var av = Array.prototype.slice.apply( arguments, [0] ); + + /** + FIXME: how to handle the error, vis-a-vis- the callbacks, if + normalizeAjaxParameters() throws? It can throw if + (de)JSON-izing fails. + */ + var norm = this.normalizeAjaxParameters( WhAjaj.isObject(opt) ? opt : {} ); + norm.url = WhAjaj.Connector.sendHelper.normalizeURL(norm); + if( ! request ) norm.method = 'GET'; + var cb = this.callbacks || {}; + if( this.callbacks && WhAjaj.isFunction(this.callbacks.beforeSend) ) { + this.callbacks.beforeSend( request, norm ); + } + if( WhAjaj.isFunction(norm.beforeSend) ){ + norm.beforeSend( request, norm ); + } + //alert( WhAjaj.stringify(request)+'\n'+WhAjaj.stringify(norm)); + try { this.sendImpl( request, norm ); } + catch(e) { ex = e; } + if(ex) throw ex; +}; + +/** + sendImpl() holds a concrete back-end connection implementation. It + can be replaced with a custom implementation if one follows the rules + described throughout this API. See WhAjaj.Connector.sendImpls for + the concrete implementations included with this API. +*/ +//WhAjaj.Connector.prototype.sendImpl = WhAjaj.Connector.sendImpls.XMLHttpRequest; +//WhAjaj.Connector.prototype.sendImpl = WhAjaj.Connector.sendImpls.rhino; +//WhAjaj.Connector.prototype.sendImpl = WhAjaj.Connector.sendImpls.jQuery; + +if( 'undefined' !== typeof jQuery ){ + WhAjaj.Connector.prototype.sendImpl = WhAjaj.Connector.sendImpls.jQuery; +} +else { + WhAjaj.Connector.prototype.sendImpl = WhAjaj.Connector.sendImpls.XMLHttpRequest; +} ADDED ajax/wiki-editor.html Index: ajax/wiki-editor.html ================================================================== --- ajax/wiki-editor.html +++ ajax/wiki-editor.html @@ -0,0 +1,382 @@ + + + + + Fossil/JSON Wiki Editor Prototype + + + + + + + + + + + + + +

PROTOTYPE JSON-based Fossil Wiki Editor

+ +See also: main test page. + +
+Login: +
+ +or: +name: +pw: + + + +
+ + +
+Quick-posts:
+ + + + + + + +
+ + + + + + + + + + + + + + + + +
Page ListContent
+
+
+
+
+ +
Response
+ +
+
+
+
+ + ADDED auto.def Index: auto.def ================================================================== --- auto.def +++ auto.def @@ -0,0 +1,483 @@ +# System autoconfiguration. Try: ./configure --help + +use cc cc-lib + +options { + with-openssl:path|auto|tree|none + => {Look for OpenSSL in the given path, automatically, in the source tree, or none} + with-miniz=0 => {Use miniz from the source tree} + with-zlib:path|auto|tree + => {Look for zlib in the given path, automatically, or in the source tree} + with-exec-rel-paths=0 + => {Enable relative paths for external diff/gdiff} + with-legacy-mv-rm=0 => {Enable legacy behavior for mv/rm (skip checkout files)} + with-th1-docs=0 => {Enable TH1 for embedded documentation pages} + with-th1-hooks=0 => {Enable TH1 hooks for commands and web pages} + with-tcl:path => {Enable Tcl integration, with Tcl in the specified path} + with-tcl-stubs=0 => {Enable Tcl integration via stubs library mechanism} + with-tcl-private-stubs=0 + => {Enable Tcl integration via private stubs mechanism} + internal-sqlite=1 => {Don't use the internal SQLite, use the system one} + static=0 => {Link a static executable} + fusefs=1 => {Disable the Fuse Filesystem} + fossil-debug=0 => {Build with fossil debugging enabled} + json=0 => {Build with fossil JSON API enabled} +} + +# sqlite wants these types if possible +cc-with {-includes {stdint.h inttypes.h}} { + cc-check-types uint32_t uint16_t int16_t uint8_t +} + +# Use pread/pwrite system calls in place of seek + read/write if possible +define USE_PREAD [cc-check-functions pread] + +# Find tclsh for the test suite. Can't yet use jimsh for this. +cc-check-progs tclsh + +define EXTRA_CFLAGS "" +define EXTRA_LDFLAGS "" +define USE_SYSTEM_SQLITE 0 +define USE_LINENOISE 0 +define FOSSIL_ENABLE_MINIZ 0 + +# This procedure is a customized version of "cc-check-function-in-lib", +# that does not modify the LIBS variable. Its use prevents prematurely +# pulling in libraries that will be added later anyhow (e.g. "-ldl"). +proc check-function-in-lib {function libs {otherlibs {}}} { + if {[string length $otherlibs]} { + msg-checking "Checking for $function in $libs with $otherlibs..." + } else { + msg-checking "Checking for $function in $libs..." + } + set found 0 + cc-with [list -libs $otherlibs] { + if {[cctest_function $function]} { + msg-result "none needed" + define lib_$function "" + incr found + } else { + foreach lib $libs { + cc-with [list -libs -l$lib] { + if {[cctest_function $function]} { + msg-result -l$lib + define lib_$function -l$lib + incr found + break + } + } + } + } + } + if {$found} { + define [feature-define-name $function] + } else { + msg-result "no" + } + return $found +} + +if {![opt-bool internal-sqlite]} { + proc find_internal_sqlite {} { + + # On some systems (slackware), libsqlite3 requires -ldl to link. So + # search for the system SQLite once with -ldl, and once without. If + # the library can only be found with $extralibs set to -ldl, then + # the code below will append -ldl to LIBS. + # + foreach extralibs {{} {-ldl}} { + + # Locate the system SQLite by searching for sqlite3_open(). Then check + # if sqlite3_strglob() can be found as well. If we can find open() but + # not strglob(), then the system SQLite is too old to link against + # fossil. + # + if {[check-function-in-lib sqlite3_open sqlite3 $extralibs]} { + if {![check-function-in-lib sqlite3_malloc64 sqlite3 $extralibs]} { + user-error "system sqlite3 too old (require >= 3.8.7)" + } + + # Success. Update symbols and return. 
+ # + define USE_SYSTEM_SQLITE 1 + define-append LIBS -lsqlite3 + define-append LIBS $extralibs + return + } + } + user-error "system sqlite3 not found" + } + + find_internal_sqlite +} + +proc is_mingw {} { + return [string match *mingw* [get-define host]] +} + +if {[is_mingw]} { + define-append EXTRA_CFLAGS -DBROKEN_MINGW_CMDLINE + define-append LIBS -lkernel32 -lws2_32 +} else { + # + # NOTE: All platforms except MinGW should use the linenoise + # package. It is currently unsupported on Win32. + # + define USE_LINENOISE 1 +} + +if {[string match *-solaris* [get-define host]]} { + define-append EXTRA_CFLAGS {-D_XOPEN_SOURCE=500 -D__EXTENSIONS__} +} + +if {[opt-bool fossil-debug]} { + define-append EXTRA_CFLAGS -DFOSSIL_DEBUG + msg-result "Debugging support enabled" +} + +if {[opt-bool json]} { + # Reminder/FIXME (stephan): FOSSIL_ENABLE_JSON + # is required in the CFLAGS because json*.c + # have #ifdef guards around the whole file without + # reading config.h first. + define-append EXTRA_CFLAGS -DFOSSIL_ENABLE_JSON + define FOSSIL_ENABLE_JSON + msg-result "JSON support enabled" +} + +if {[opt-bool with-legacy-mv-rm]} { + define-append EXTRA_CFLAGS -DFOSSIL_ENABLE_LEGACY_MV_RM + define FOSSIL_ENABLE_LEGACY_MV_RM + msg-result "Legacy mv/rm support enabled" +} + +if {[opt-bool with-exec-rel-paths]} { + define-append EXTRA_CFLAGS -DFOSSIL_ENABLE_EXEC_REL_PATHS + define FOSSIL_ENABLE_EXEC_REL_PATHS + msg-result "Relative paths in external diff/gdiff enabled" +} + +if {[opt-bool with-th1-docs]} { + define-append EXTRA_CFLAGS -DFOSSIL_ENABLE_TH1_DOCS + define FOSSIL_ENABLE_TH1_DOCS + msg-result "TH1 embedded documentation support enabled" +} + +if {[opt-bool with-th1-hooks]} { + define-append EXTRA_CFLAGS -DFOSSIL_ENABLE_TH1_HOOKS + define FOSSIL_ENABLE_TH1_HOOKS + msg-result "TH1 hooks support enabled" +} + +#if {[opt-bool markdown]} { +# # no-op. Markdown is now enabled by default. +# msg-result "Markdown support enabled" +#} + +if {[opt-bool static]} { + # XXX: This will not work on all systems. + define-append EXTRA_LDFLAGS -static + msg-result "Trying to link statically" +} else { + define-append EXTRA_CFLAGS -DFOSSIL_DYNAMIC_BUILD=1 + define FOSSIL_DYNAMIC_BUILD +} + +# Helper for OpenSSL checking +proc check-for-openssl {msg {cflags {}} {libs {-lssl -lcrypto}}} { + msg-checking "Checking for $msg..." + set rc 0 + if {[is_mingw]} { + lappend libs -lgdi32 -lwsock32 + } + if {[info exists ::zlib_lib]} { + lappend libs $::zlib_lib + } + msg-quiet cc-with [list -cflags $cflags -libs $libs] { + if {[cc-check-includes openssl/ssl.h] && \ + [cc-check-functions SSL_new]} { + incr rc + } + } + if {!$rc && ![is_mingw]} { + # On some systems, OpenSSL appears to require -ldl to link. 
+ lappend libs -ldl + msg-quiet cc-with [list -cflags $cflags -libs $libs] { + if {[cc-check-includes openssl/ssl.h] && \ + [cc-check-functions SSL_new]} { + incr rc + } + } + } + if {$rc} { + msg-result "ok" + return 1 + } else { + msg-result "no" + return 0 + } +} + +if {[opt-bool with-miniz]} { + define FOSSIL_ENABLE_MINIZ 1 + msg-result "Using miniz for compression" +} else { + # Check for zlib, using the given location if specified + set zlibpath [opt-val with-zlib] + if {$zlibpath eq "tree"} { + set zlibdir [file dirname $autosetup(dir)]/compat/zlib + if {![file isdirectory $zlibdir]} { + user-error "The zlib in source tree directory does not exist" + } + cc-with [list -cflags "-I$zlibdir -L$zlibdir"] + define-append EXTRA_CFLAGS -I$zlibdir + define-append LIBS $zlibdir/libz.a + set ::zlib_lib $zlibdir/libz.a + msg-result "Using zlib in source tree" + } else { + if {$zlibpath ni {auto ""}} { + cc-with [list -cflags "-I$zlibpath -L$zlibpath"] + define-append EXTRA_CFLAGS -I$zlibpath + define-append EXTRA_LDFLAGS -L$zlibpath + msg-result "Using zlib from $zlibpath" + } + if {![cc-check-includes zlib.h] || ![check-function-in-lib inflateEnd z]} { + user-error "zlib not found please install it or specify the location with --with-zlib" + } + set ::zlib_lib -lz + } +} + +set ssldirs [opt-val with-openssl] +if {$ssldirs ne "none"} { + if {[opt-bool with-miniz]} { + user-error "The --with-miniz option is incompatible with OpenSSL" + } + set found 0 + if {$ssldirs eq "tree"} { + set ssldir [file dirname $autosetup(dir)]/compat/openssl + if {![file isdirectory $ssldir]} { + user-error "The OpenSSL in source tree directory does not exist" + } + set msg "ssl in $ssldir" + set cflags "-I$ssldir/include" + set ldflags "-L$ssldir" + set ssllibs "$ssldir/libssl.a $ssldir/libcrypto.a" + set found [check-for-openssl "ssl in source tree" "$cflags $ldflags" $ssllibs] + } else { + if {$ssldirs in {auto ""}} { + catch { + set cflags [exec pkg-config openssl --cflags-only-I] + set ldflags [exec pkg-config openssl --libs-only-L] + set found [check-for-openssl "ssl via pkg-config" "$cflags $ldflags"] + } msg + if {!$found} { + set ssldirs "{} /usr/sfw /usr/local/ssl /usr/lib/ssl /usr/ssl \ + /usr/pkg /usr/local /usr /usr/local/opt/openssl" + } + } + if {!$found} { + foreach dir $ssldirs { + if {$dir eq ""} { + set msg "system ssl" + set cflags "" + set ldflags "" + } else { + set msg "ssl in $dir" + set cflags "-I$dir/include" + set ldflags "-L$dir/lib" + } + if {[check-for-openssl $msg "$cflags $ldflags"]} { + incr found + break + } + } + } + } + if {$found} { + define FOSSIL_ENABLE_SSL + define-append EXTRA_CFLAGS $cflags + define-append EXTRA_LDFLAGS $ldflags + if {[info exists ssllibs]} { + define-append LIBS $ssllibs + } else { + define-append LIBS -lssl -lcrypto + } + if {[info exists ::zlib_lib]} { + define-append LIBS $::zlib_lib + } + if {[is_mingw]} { + define-append LIBS -lgdi32 -lwsock32 + } + msg-result "HTTPS support enabled" + + # Silence OpenSSL deprecation warnings on Mac OS X 10.7. + if {[string match *-darwin* [get-define host]]} { + if {[cctest -cflags {-Wdeprecated-declarations}]} { + define-append EXTRA_CFLAGS -Wdeprecated-declarations + } + } + } else { + user-error "OpenSSL not found. 
Consider --with-openssl=none to disable HTTPS support" + } +} else { + if {[info exists ::zlib_lib]} { + define-append LIBS $::zlib_lib + } +} + +set tclpath [opt-val with-tcl] +if {$tclpath ne ""} { + set tclprivatestubs [opt-bool with-tcl-private-stubs] + # Note parse-tclconfig-sh is in autosetup/local.tcl + if {$tclpath eq "1"} { + set tcldir [file dirname $autosetup(dir)]/compat/tcl-8.6 + if {$tclprivatestubs} { + set tclconfig(TCL_INCLUDE_SPEC) -I$tcldir/generic + set tclconfig(TCL_VERSION) {Private Stubs} + set tclconfig(TCL_PATCH_LEVEL) {} + set tclconfig(TCL_PREFIX) $tcldir + set tclconfig(TCL_LD_FLAGS) { } + } else { + # Use the system Tcl. Look in some likely places. + array set tclconfig [parse-tclconfig-sh \ + $tcldir/unix $tcldir/win \ + /usr /usr/local /usr/share /opt/local] + set msg "on your system" + } + } else { + array set tclconfig [parse-tclconfig-sh $tclpath] + set msg "at $tclpath" + } + if {![info exists tclconfig(TCL_INCLUDE_SPEC)]} { + user-error "Cannot find Tcl $msg" + } + set tclstubs [opt-bool with-tcl-stubs] + if {$tclprivatestubs} { + define FOSSIL_ENABLE_TCL_PRIVATE_STUBS + define USE_TCL_STUBS + } elseif {$tclstubs && $tclconfig(TCL_SUPPORTS_STUBS)} { + set libs "$tclconfig(TCL_STUB_LIB_SPEC)" + define FOSSIL_ENABLE_TCL_STUBS + define USE_TCL_STUBS + } else { + set libs "$tclconfig(TCL_LIB_SPEC) $tclconfig(TCL_LIBS)" + } + set cflags $tclconfig(TCL_INCLUDE_SPEC) + if {!$tclprivatestubs} { + set foundtcl 0; # Did we find a working Tcl library? + cc-with [list -cflags $cflags -libs $libs] { + if {$tclstubs} { + if {[cc-check-functions Tcl_InitStubs]} { + set foundtcl 1 + } + } else { + if {[cc-check-functions Tcl_CreateInterp]} { + set foundtcl 1 + } + } + } + if {!$foundtcl && [string match *-lieee* $libs]} { + # On some systems, using "-lieee" from TCL_LIB_SPEC appears + # to cause issues. + msg-result "Removing \"-lieee\" and retrying for Tcl..." + set libs [string map [list -lieee ""] $libs] + cc-with [list -cflags $cflags -libs $libs] { + if {$tclstubs} { + if {[cc-check-functions Tcl_InitStubs]} { + set foundtcl 1 + } + } else { + if {[cc-check-functions Tcl_CreateInterp]} { + set foundtcl 1 + } + } + } + } + if {!$foundtcl && ![string match *-lpthread* $libs]} { + # On some systems, TCL_LIB_SPEC appears to be missing + # "-lpthread". Try adding it. + msg-result "Adding \"-lpthread\" and retrying for Tcl..." + set libs "$libs -lpthread" + cc-with [list -cflags $cflags -libs $libs] { + if {$tclstubs} { + if {[cc-check-functions Tcl_InitStubs]} { + set foundtcl 1 + } + } else { + if {[cc-check-functions Tcl_CreateInterp]} { + set foundtcl 1 + } + } + } + } + if {!$foundtcl} { + if {$tclstubs} { + user-error "Cannot find a usable Tcl stubs library $msg" + } else { + user-error "Cannot find a usable Tcl library $msg" + } + } + } + set version $tclconfig(TCL_VERSION)$tclconfig(TCL_PATCH_LEVEL) + msg-result "Found Tcl $version at $tclconfig(TCL_PREFIX)" + if {!$tclprivatestubs} { + define-append LIBS $libs + } + define-append EXTRA_CFLAGS $cflags + if {[info exists zlibpath] && $zlibpath eq "tree"} { + # + # NOTE: When using zlib in the source tree, prevent Tcl from + # pulling in the system one. + # + set tclconfig(TCL_LD_FLAGS) [string map [list -lz ""] \ + $tclconfig(TCL_LD_FLAGS)] + } + # + # NOTE: Remove "-ldl" from the TCL_LD_FLAGS because it will be + # be checked for near the bottom of this file. 
+ # + set tclconfig(TCL_LD_FLAGS) [string map [list -ldl ""] \ + $tclconfig(TCL_LD_FLAGS)] + define-append EXTRA_LDFLAGS $tclconfig(TCL_LD_FLAGS) + define FOSSIL_ENABLE_TCL +} + +# Network functions require libraries on some systems +cc-check-function-in-lib gethostbyname nsl +if {![cc-check-function-in-lib socket {socket network}]} { + # Last resort, may be Windows + if {[is_mingw]} { + define-append LIBS -lwsock32 + } +} +cc-check-function-in-lib iconv iconv +cc-check-functions utime +cc-check-functions usleep +cc-check-functions strchrnul + +# Check for getloadavg(), and if it doesn't exist, define FOSSIL_OMIT_LOAD_AVERAGE +if {![cc-check-functions getloadavg]} { + define FOSSIL_OMIT_LOAD_AVERAGE 1 + msg-result "Load average support unavailable" +} + +# Check for getpassphrase() for Solaris 10 where getpass() truncates to 10 chars +if {![cc-check-functions getpassphrase]} { + # Haiku needs this + cc-check-function-in-lib getpass bsd +} +cc-check-function-in-lib dlopen dl +cc-check-function-in-lib sin m + +# Check for the FuseFS library +if {[opt-bool fusefs]} { + if {[cc-check-function-in-lib fuse_mount fuse]} { + define FOSSIL_HAVE_FUSEFS 1 + define-append LIBS -lfuse + msg-result "FuseFS support enabled" + } +} + +make-template Makefile.in +make-config-header autoconfig.h -auto {USE_* FOSSIL_*} ADDED autosetup/LICENSE Index: autosetup/LICENSE ================================================================== --- autosetup/LICENSE +++ autosetup/LICENSE @@ -0,0 +1,35 @@ +Unless explicitly stated, all files which form part of autosetup +are released under the following license: + +--------------------------------------------------------------------- +autosetup - A build environment "autoconfigurator" + +Copyright (c) 2010-2011, WorkWare Systems + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions +are met: + +1. Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. +2. Redistributions in binary form must reproduce the above + copyright notice, this list of conditions and the following + disclaimer in the documentation and/or other materials + provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE WORKWARE SYSTEMS ``AS IS'' AND ANY +EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A +PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL WORKWARE +SYSTEMS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, +INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS +OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, +STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) +ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF +ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +The views and conclusions contained in the software and documentation +are those of the authors and should not be interpreted as representing +official policies, either expressed or implied, of WorkWare Systems. ADDED autosetup/README.autosetup Index: autosetup/README.autosetup ================================================================== --- autosetup/README.autosetup +++ autosetup/README.autosetup @@ -0,0 +1,1 @@ +This is autosetup v0.6.5. 
See http://msteveb.github.com/autosetup/ ADDED autosetup/autosetup Index: autosetup/autosetup ================================================================== --- autosetup/autosetup +++ autosetup/autosetup @@ -0,0 +1,1921 @@ +#!/bin/sh +# Copyright (c) 2006-2011 WorkWare Systems http://www.workware.net.au/ +# All rights reserved +# vim:se syntax=tcl: +# \ +dir=`dirname "$0"`; exec "`$dir/find-tclsh`" "$0" "$@" + +set autosetup(version) 0.6.5 + +# Can be set to 1 to debug early-init problems +set autosetup(debug) 0 + +################################################################## +# +# Main flow of control, option handling +# +proc main {argv} { + global autosetup define + + # There are 3 potential directories involved: + # 1. The directory containing autosetup (this script) + # 2. The directory containing auto.def + # 3. The current directory + + # From this we need to determine: + # a. The path to this script (and related support files) + # b. The path to auto.def + # c. The build directory, where output files are created + + # This is also complicated by the fact that autosetup may + # have been run via the configure wrapper ([getenv WRAPPER] is set) + + # Here are the rules. + # a. This script is $::argv0 + # => dir, prog, exe, libdir + # b. auto.def is in the directory containing the configure wrapper, + # otherwise it is in the current directory. + # => srcdir, autodef + # c. The build directory is the current directory + # => builddir, [pwd] + + # 'misc' is needed before we can do anything, so set a temporary libdir + # in case this is the development version + set autosetup(libdir) [file dirname $::argv0]/lib + use misc + + # (a) + set autosetup(dir) [realdir [file dirname [realpath $::argv0]]] + set autosetup(prog) [file join $autosetup(dir) [file tail $::argv0]] + set autosetup(exe) [getenv WRAPPER $autosetup(prog)] + if {$autosetup(installed)} { + set autosetup(libdir) $autosetup(dir) + } else { + set autosetup(libdir) [file join $autosetup(dir) lib] + } + autosetup_add_dep $autosetup(prog) + + # (b) + if {[getenv WRAPPER ""] eq ""} { + # Invoked directly + set autosetup(srcdir) [pwd] + } else { + # Invoked via the configure wrapper + set autosetup(srcdir) [file dirname $autosetup(exe)] + } + set autosetup(autodef) [relative-path $autosetup(srcdir)/auto.def] + + # (c) + set autosetup(builddir) [pwd] + + set autosetup(argv) $argv + set autosetup(cmdline) {} + set autosetup(options) {} + set autosetup(optionhelp) {} + set autosetup(showhelp) 0 + + # Parse options + use getopt + + array set ::useropts [getopt argv] + + #"=Core Options:" + options-add { + help:=local => "display help and options. Optionally specify a module name, such as --help=system" + version => "display the version of autosetup" + ref:=text manual:=text + reference:=text => "display the autosetup command reference. 'text', 'wiki', 'asciidoc' or 'markdown'" + debug => "display debugging output as autosetup runs" + install:=. => "install autosetup to the current or given directory (in the 'autosetup/' subdirectory)" + force init:=help => "create initial auto.def, etc. 
Use --init=help for known types" + # Undocumented options + option-checking=1 + nopager + quiet + timing + conf: + } + + #parray ::useropts + if {[opt-bool version]} { + puts $autosetup(version) + exit 0 + } + + # autosetup --conf=alternate-auto.def + if {[opt-val conf] ne ""} { + set autosetup(autodef) [opt-val conf] + } + + # Debugging output (set this early) + incr autosetup(debug) [opt-bool debug] + incr autosetup(force) [opt-bool force] + incr autosetup(msg-quiet) [opt-bool quiet] + incr autosetup(msg-timing) [opt-bool timing] + + # If the local module exists, source it now to allow for + # project-local customisations + if {[file exists $autosetup(libdir)/local.tcl]} { + use local + } + + # Now any auto-load modules + foreach file [glob -nocomplain $autosetup(libdir)/*.auto $autosetup(libdir)/*/*.auto] { + automf_load source $file + } + + if {[opt-val help] ne ""} { + incr autosetup(showhelp) + use help + autosetup_help [opt-val help] + } + + if {[opt-val {manual ref reference}] ne ""} { + use help + autosetup_reference [opt-val {manual ref reference}] + } + + if {[opt-val init] ne ""} { + use init + autosetup_init [opt-val init] + } + + if {[opt-val install] ne ""} { + use install + autosetup_install [opt-val install] + } + + if {![file exists $autosetup(autodef)]} { + # Check for invalid option first + options {} + user-error "No auto.def found in \"$autosetup(srcdir)\" (use [file tail $::autosetup(exe)] --init to create one)" + } + + # Parse extra arguments into autosetup(cmdline) + foreach arg $argv { + if {[regexp {([^=]*)=(.*)} $arg -> n v]} { + dict set autosetup(cmdline) $n $v + define $n $v + } else { + user-error "Unexpected parameter: $arg" + } + } + + autosetup_add_dep $autosetup(autodef) + + set cmd [file-normalize $autosetup(exe)] + foreach arg $autosetup(argv) { + append cmd " [quote-if-needed $arg]" + } + define AUTOREMAKE $cmd + + # Log how we were invoked + configlog "Invoked as: [getenv WRAPPER $::argv0] [quote-argv $autosetup(argv)]" + + # Note that auto.def is *not* loaded in the global scope + source $autosetup(autodef) + + # Could warn here if options {} was not specified + + show-notices + + if {$autosetup(debug)} { + msg-result "Writing all defines to config.log" + configlog "================ defines ======================" + foreach n [lsort [array names define]] { + configlog "define $n $define($n)" + } + } + + exit 0 +} + +# @opt-bool option ... +# +# Check each of the named, boolean options and return 1 if any of them have +# been set by the user. +# +proc opt-bool {args} { + option-check-names {*}$args + opt_bool ::useropts {*}$args +} + +# @opt-val option-list ?default=""? +# +# Returns a list containing all the values given for the non-boolean options in 'option-list'. +# There will be one entry in the list for each option given by the user, including if the +# same option was used multiple times. +# If only a single value is required, use something like: +# +## lindex [opt-val $names] end +# +# If no options were set, $default is returned (exactly, not as a list). 
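+#
+# A short illustrative fragment: an auto.def might read a string option
+# declared as "with-tcl" like this (the option name and message are
+# examples only; any option consulted here must first be declared with
+# 'options'):
+#
+## set tclpath [opt-val with-tcl]
+## if {$tclpath ne ""} {
+##     msg-result "Tcl support requested at $tclpath"
+## }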
+# +proc opt-val {names {default ""}} { + option-check-names {*}$names + join [opt_val ::useropts $names $default] +} + +proc option-check-names {args} { + foreach o $args { + if {$o ni $::autosetup(options)} { + autosetup-error "Request for undeclared option --$o" + } + } +} + +# Parse the option definition in $opts and update +# ::useropts() and ::autosetup(optionhelp) appropriately +# +proc options-add {opts {header ""}} { + global useropts autosetup + + # First weed out comment lines + set realopts {} + foreach line [split $opts \n] { + if {![string match "#*" [string trimleft $line]]} { + append realopts $line \n + } + } + set opts $realopts + + for {set i 0} {$i < [llength $opts]} {incr i} { + set opt [lindex $opts $i] + if {[string match =* $opt]} { + # This is a special heading + lappend autosetup(optionhelp) $opt "" + set header {} + continue + } + + #puts "i=$i, opt=$opt" + regexp {^([^:=]*)(:)?(=)?(.*)$} $opt -> name colon equal value + if {$name in $autosetup(options)} { + autosetup-error "Option $name already specified" + } + + #puts "$opt => $name $colon $equal $value" + + # Find the corresponding value in the user options + # and set the default if necessary + if {[string match "-*" $opt]} { + # This is a documentation-only option, like "-C " + set opthelp $opt + } elseif {$colon eq ""} { + # Boolean option + lappend autosetup(options) $name + + if {![info exists useropts($name)]} { + set useropts($name) $value + } + if {$value eq "1"} { + set opthelp "--disable-$name" + } else { + set opthelp "--$name" + } + } else { + # String option. + lappend autosetup(options) $name + + if {$equal eq "="} { + if {[info exists useropts($name)]} { + # If the user specified the option with no value, the value will be "1" + # Replace with the default + if {$useropts($name) eq "1"} { + set useropts($name) $value + } + } + set opthelp "--$name?=$value?" + } else { + set opthelp "--$name=$value" + } + } + + # Now create the help for this option if appropriate + if {[lindex $opts $i+1] eq "=>"} { + set desc [lindex $opts $i+2] + #string match \n* $desc + if {$header ne ""} { + lappend autosetup(optionhelp) $header "" + set header "" + } + # A multi-line description + lappend autosetup(optionhelp) $opthelp $desc + incr i 2 + } + } +} + +# @module-options optionlist +# +# Like 'options', but used within a module. +proc module-options {opts} { + set header "" + if {$::autosetup(showhelp) > 1 && [llength $opts]} { + set header "Module Options:" + } + options-add $opts $header + + if {$::autosetup(showhelp)} { + # Ensure that the module isn't executed on --help + # We are running under eval or source, so use break + # to prevent further execution + #return -code break -level 2 + return -code break + } +} + +proc max {a b} { + expr {$a > $b ? 
$a : $b} +} + +proc options-wrap-desc {text length firstprefix nextprefix initial} { + set len $initial + set space $firstprefix + foreach word [split $text] { + set word [string trim $word] + if {$word == ""} { + continue + } + if {$len && [string length $space$word] + $len >= $length} { + puts "" + set len 0 + set space $nextprefix + } + incr len [string length $space$word] + puts -nonewline $space$word + set space " " + } + if {$len} { + puts "" + } +} + +proc options-show {} { + # Determine the max option width + set max 0 + foreach {opt desc} $::autosetup(optionhelp) { + if {[string match =* $opt] || [string match \n* $desc]} { + continue + } + set max [max $max [string length $opt]] + } + set indent [string repeat " " [expr $max+4]] + set cols [getenv COLUMNS 80] + catch { + lassign [exec stty size] rows cols + } + incr cols -1 + # Now output + foreach {opt desc} $::autosetup(optionhelp) { + if {[string match =* $opt]} { + puts [string range $opt 1 end] + continue + } + puts -nonewline " [format %-${max}s $opt]" + if {[string match \n* $desc]} { + puts $desc + } else { + options-wrap-desc [string trim $desc] $cols " " $indent [expr $max + 2] + } + } +} + +# @options options-spec +# +# Specifies configuration-time options which may be selected by the user +# and checked with opt-val and opt-bool. The format of options-spec follows. +# +# A boolean option is of the form: +# +## name[=0|1] => "Description of this boolean option" +# +# The default is name=0, meaning that the option is disabled by default. +# If name=1 is used to make the option enabled by default, the description should reflect +# that with text like "Disable support for ...". +# +# An argument option (one which takes a parameter) is of the form: +# +## name:[=]value => "Description of this option" +# +# If the name:value form is used, the value must be provided with the option (as --name=myvalue). +# If the name:=value form is used, the value is optional and the given value is used as the default +# if is not provided. +# +# Undocumented options are also supported by omitting the "=> description. +# These options are not displayed with --help and can be useful for internal options or as aliases. +# +# For example, --disable-lfs is an alias for --disable=largefile: +# +## lfs=1 largefile=1 => "Disable large file support" +# +proc options {optlist} { + # Allow options as a list or args + options-add $optlist "Local Options:" + + if {$::autosetup(showhelp)} { + options-show + exit 0 + } + + # Check for invalid options + if {[opt-bool option-checking]} { + foreach o [array names ::useropts] { + if {$o ni $::autosetup(options)} { + user-error "Unknown option --$o" + } + } + } +} + +proc config_guess {} { + if {[file-isexec $::autosetup(dir)/config.guess]} { + exec-with-stderr sh $::autosetup(dir)/config.guess + if {[catch {exec-with-stderr sh $::autosetup(dir)/config.guess} alias]} { + user-error $alias + } + return $alias + } else { + configlog "No config.guess, so using uname" + string tolower [exec uname -p]-unknown-[exec uname -s][exec uname -r] + } +} + +proc config_sub {alias} { + if {[file-isexec $::autosetup(dir)/config.sub]} { + if {[catch {exec-with-stderr sh $::autosetup(dir)/config.sub $alias} alias]} { + user-error $alias + } + } + return $alias +} + +# @define name ?value=1? +# +# Defines the named variable to the given value. +# These (name, value) pairs represent the results of the configuration check +# and are available to be checked, modified and substituted. 
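+#
+# For illustration, an auto.def typically records its results along
+# these lines (the commands below mirror the Fossil auto.def shown
+# earlier; treat them as a sketch rather than a recipe):
+#
+## define FOSSIL_ENABLE_TCL
+## define-append EXTRA_CFLAGS $cflags
+## make-template Makefile.in
+## make-config-header autoconfig.h -auto {USE_* FOSSIL_*}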
+# +proc define {name {value 1}} { + set ::define($name) $value + #dputs "$name <= $value" +} + +# @define-append name value ... +# +# Appends the given value(s) to the given 'defined' variable. +# If the variable is not defined or empty, it is set to $value. +# Otherwise the value is appended, separated by a space. +# Any extra values are similarly appended. +# If any value is already contained in the variable (as a substring) it is omitted. +# +proc define-append {name args} { + if {[get-define $name ""] ne ""} { + # Make a token attempt to avoid duplicates + foreach arg $args { + if {[string first $arg $::define($name)] == -1} { + append ::define($name) " " $arg + } + } + } else { + set ::define($name) [join $args] + } + #dputs "$name += [join $args] => $::define($name)" +} + +# @get-define name ?default=0? +# +# Returns the current value of the 'defined' variable, or $default +# if not set. +# +proc get-define {name {default 0}} { + if {[info exists ::define($name)]} { + #dputs "$name => $::define($name)" + return $::define($name) + } + #dputs "$name => $default" + return $default +} + +# @is-defined name +# +# Returns 1 if the given variable is defined. +# +proc is-defined {name} { + info exists ::define($name) +} + +# @all-defines +# +# Returns a dictionary (name value list) of all defined variables. +# +# This is suitable for use with 'dict', 'array set' or 'foreach' +# and allows for arbitrary processing of the defined variables. +# +proc all-defines {} { + array get ::define +} + + +# @get-env name default +# +# If $name was specified on the command line, return it. +# If $name was set in the environment, return it. +# Otherwise return $default. +# +proc get-env {name default} { + if {[dict exists $::autosetup(cmdline) $name]} { + return [dict get $::autosetup(cmdline) $name] + } + getenv $name $default +} + +# @env-is-set name +# +# Returns 1 if the $name was specified on the command line or in the environment. +# Note that an empty environment variable is not considered to be set. +# +proc env-is-set {name} { + if {[dict exists $::autosetup(cmdline) $name]} { + return 1 + } + if {[getenv $name ""] ne ""} { + return 1 + } + return 0 +} + +# @readfile filename ?default=""? +# +# Return the contents of the file, without the trailing newline. +# If the doesn't exist or can't be read, returns $default. +# +proc readfile {filename {default_value ""}} { + set result $default_value + catch { + set f [open $filename] + set result [read -nonewline $f] + close $f + } + return $result +} + +# @writefile filename value +# +# Creates the given file containing $value. +# Does not add an extra newline. 
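+#
+# A minimal read/write round trip might look like this (the file names
+# are illustrative only):
+#
+## set version [readfile VERSION "unknown"]
+## writefile VERSION.bak "$version\n"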
+# +proc writefile {filename value} { + set f [open $filename w] + puts -nonewline $f $value + close $f +} + +proc quote-if-needed {str} { + if {[string match {*[\" ]*} $str]} { + return \"[string map [list \" \\" \\ \\\\] $str]\" + } + return $str +} + +proc quote-argv {argv} { + set args {} + foreach arg $argv { + lappend args [quote-if-needed $arg] + } + join $args +} + +# @suffix suf list +# +# Takes a list and returns a new list with $suf appended +# to each element +# +## suffix .c {a b c} => {a.c b.c c.c} +# +proc suffix {suf list} { + set result {} + foreach p $list { + lappend result $p$suf + } + return $result +} + +# @prefix pre list +# +# Takes a list and returns a new list with $pre prepended +# to each element +# +## prefix jim- {a.c b.c} => {jim-a.c jim-b.c} +# +proc prefix {pre list} { + set result {} + foreach p $list { + lappend result $pre$p + } + return $result +} + +# @find-executable name +# +# Searches the path for an executable with the given name. +# Note that the name may include some parameters, e.g. "cc -mbig-endian", +# in which case the parameters are ignored. +# Returns 1 if found, or 0 if not. +# +proc find-executable {name} { + # Ignore any parameters + set name [lindex $name 0] + if {$name eq ""} { + # The empty string is never a valid executable + return 0 + } + foreach p [split-path] { + dputs "Looking for $name in $p" + set exec [file join $p $name] + if {[file-isexec $exec]} { + dputs "Found $name -> $exec" + return 1 + } + } + return 0 +} + +# @find-an-executable ?-required? name ... +# +# Given a list of possible executable names, +# searches for one of these on the path. +# +# Returns the name found, or "" if none found. +# If the first parameter is '-required', an error is generated +# if no executable is found. +# +proc find-an-executable {args} { + set required 0 + if {[lindex $args 0] eq "-required"} { + set args [lrange $args 1 end] + incr required + } + foreach name $args { + if {[find-executable $name]} { + return $name + } + } + if {$required} { + if {[llength $args] == 1} { + user-error "failed to find: [join $args]" + } else { + user-error "failed to find one of: [join $args]" + } + } + return "" +} + +# @configlog msg +# +# Writes the given message to the configuration log, config.log +# +proc configlog {msg} { + if {![info exists ::autosetup(logfh)]} { + set ::autosetup(logfh) [open config.log w] + } + puts $::autosetup(logfh) $msg +} + +# @msg-checking msg +# +# Writes the message with no newline to stdout. +# +proc msg-checking {msg} { + if {$::autosetup(msg-quiet) == 0} { + maybe-show-timestamp + puts -nonewline $msg + set ::autosetup(msg-checking) 1 + } +} + +# @msg-result msg +# +# Writes the message to stdout. +# +proc msg-result {msg} { + if {$::autosetup(msg-quiet) == 0} { + maybe-show-timestamp + puts $msg + set ::autosetup(msg-checking) 0 + show-notices + } +} + +# @msg-quiet command ... +# +# msg-quiet evaluates it's arguments as a command with output +# from msg-checking and msg-result suppressed. +# +# This is useful if a check needs to run a subcheck which isn't +# of interest to the user. 
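+#
+# e.g. to run a subsidiary check silently and report only the outcome
+# (the particular check shown is just an example):
+#
+## if {[msg-quiet cc-check-functions getloadavg]} {
+##     msg-result "Load average support available"
+## }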
+proc msg-quiet {args} { + incr ::autosetup(msg-quiet) + set rc [uplevel 1 $args] + incr ::autosetup(msg-quiet) -1 + return $rc +} + +# Will be overridden by 'use misc' +proc error-stacktrace {msg} { + return $msg +} + +proc error-location {msg} { + return $msg +} + +################################################################## +# +# Debugging output +# +proc dputs {msg} { + if {$::autosetup(debug)} { + puts $msg + } +} + +################################################################## +# +# User and system warnings and errors +# +# Usage errors such as wrong command line options + +# @user-error msg +# +# Indicate incorrect usage to the user, including if required components +# or features are not found. +# autosetup exits with a non-zero return code. +# +proc user-error {msg} { + show-notices + puts stderr "Error: $msg" + puts stderr "Try: '[file tail $::autosetup(exe)] --help' for options" + exit 1 +} + +# @user-notice msg +# +# Output the given message to stderr. +# +proc user-notice {msg} { + lappend ::autosetup(notices) $msg +} + +# Incorrect usage in the auto.def file. Identify the location. +proc autosetup-error {msg} { + autosetup-full-error [error-location $msg] +} + +# Like autosetup-error, except $msg is the full error message. +proc autosetup-full-error {msg} { + show-notices + puts stderr $msg + exit 1 +} + +proc show-notices {} { + if {$::autosetup(msg-checking)} { + puts "" + set ::autosetup(msg-checking) 0 + } + flush stdout + if {[info exists ::autosetup(notices)]} { + puts stderr [join $::autosetup(notices) \n] + unset ::autosetup(notices) + } +} + +proc maybe-show-timestamp {} { + if {$::autosetup(msg-timing) && $::autosetup(msg-checking) == 0} { + puts -nonewline [format {[%6.2f] } [expr {([clock millis] - $::autosetup(start)) % 10000 / 1000.0}]] + } +} + +proc autosetup_version {} { + return "autosetup v$::autosetup(version)" +} + +################################################################## +# +# Directory/path handling +# + +proc realdir {dir} { + set oldpwd [pwd] + cd $dir + set pwd [pwd] + cd $oldpwd + return $pwd +} + +# Follow symlinks until we get to something which is not a symlink +proc realpath {path} { + while {1} { + if {[catch { + set path [file readlink $path] + }]} { + # Not a link + break + } + } + return $path +} + +# Convert absolute path, $path into a path relative +# to the given directory (or the current dir, if not given). +# +proc relative-path {path {pwd {}}} { + set diff 0 + set same 0 + set newf {} + set prefix {} + set path [file-normalize $path] + if {$pwd eq ""} { + set pwd [pwd] + } else { + set pwd [file-normalize $pwd] + } + + if {$path eq $pwd} { + return . + } + + # Try to make the filename relative to the current dir + foreach p [split $pwd /] f [split $path /] { + if {$p ne $f} { + incr diff + } elseif {!$diff} { + incr same + } + if {$diff} { + if {$p ne ""} { + # Add .. for sibling or parent dir + lappend prefix .. + } + if {$f ne ""} { + lappend newf $f + } + } + } + if {$same == 1 || [llength $prefix] > 3} { + return $path + } + + file join [join $prefix /] [join $newf /] +} + +# Add filename as a dependency to rerun autosetup +# The name will be normalised (converted to a full path) +# +proc autosetup_add_dep {filename} { + lappend ::autosetup(deps) [file-normalize $filename] +} + +################################################################## +# +# Library module support +# + +# @use module ... +# +# Load the given library modules. +# e.g. 
'use cc cc-shared' +# +# Note that module 'X' is implemented in either 'autosetup/X.tcl' +# or 'autosetup/X/init.tcl' +# +# The latter form is useful for a complex module which requires additional +# support file. In this form, '$::usedir' is set to the module directory +# when it is loaded. +# +proc use {args} { + foreach m $args { + if {[info exists ::libmodule($m)]} { + continue + } + set ::libmodule($m) 1 + if {[info exists ::modsource($m)]} { + automf_load eval $::modsource($m) + } else { + set sources [list $::autosetup(libdir)/${m}.tcl $::autosetup(libdir)/${m}/init.tcl] + set found 0 + foreach source $sources { + if {[file exists $source]} { + incr found + break + } + } + if {$found} { + # For the convenience of the "use" source, point to the directory + # it is being loaded from + set ::usedir [file dirname $source] + automf_load source $source + autosetup_add_dep $source + } else { + autosetup-error "use: No such module: $m" + } + } + } +} + +# Load module source in the global scope by executing the given command +proc automf_load {args} { + if {[catch [list uplevel #0 $args] msg opts] ni {0 2 3}} { + autosetup-full-error [error-dump $msg $opts $::autosetup(debug)] + } +} + +# Initial settings +set autosetup(exe) $::argv0 +set autosetup(istcl) 1 +set autosetup(start) [clock millis] +set autosetup(installed) 0 +set autosetup(msg-checking) 0 +set autosetup(msg-quiet) 0 + +# Embedded modules are inserted below here +set autosetup(installed) 1 +# ----- module asciidoc-formatting ----- + +set modsource(asciidoc-formatting) { +# Copyright (c) 2010 WorkWare Systems http://www.workware.net.au/ +# All rights reserved + +# Module which provides text formatting +# asciidoc format + +use formatting + +proc para {text} { + regsub -all "\[ \t\n\]+" [string trim $text] " " +} +proc title {text} { + underline [para $text] = + nl +} +proc p {text} { + puts [para $text] + nl +} +proc code {text} { + foreach line [parse_code_block $text] { + puts " $line" + } + nl +} +proc codelines {lines} { + foreach line $lines { + puts " $line" + } + nl +} +proc nl {} { + puts "" +} +proc underline {text char} { + regexp "^(\[ \t\]*)(.*)" $text -> indent words + puts $text + puts $indent[string repeat $char [string length $words]] +} +proc section {text} { + underline "[para $text]" - + nl +} +proc subsection {text} { + underline "$text" ~ + nl +} +proc bullet {text} { + puts "* [para $text]" +} +proc indent {text} { + puts " :: " + puts [para $text] +} +proc defn {first args} { + set sep "" + if {$first ne ""} { + puts "${first}::" + } else { + puts " :: " + } + set defn [string trim [join $args \n]] + regsub -all "\n\n" $defn "\n ::\n" defn + puts $defn +} +} + +# ----- module formatting ----- + +set modsource(formatting) { +# Copyright (c) 2010 WorkWare Systems http://www.workware.net.au/ +# All rights reserved + +# Module which provides common text formatting + +# This is designed for documenation which looks like: +# code {...} +# or +# code { +# ... +# ... +# } +# In the second case, we need to work out the indenting +# and strip it from all lines but preserve the remaining indenting. +# Note that all lines need to be indented with the same initial +# spaces/tabs. +# +# Returns a list of lines with the indenting removed. 
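+#
+# A small sketch of the transformation (the indentation here is arbitrary):
+#
+## parse_code_block "\n    first line\n      second line"
+## # => {first line} {  second line}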
+# +proc parse_code_block {text} { + # If the text begins with newline, take the following text, + # otherwise just return the original + if {![regexp "^\n(.*)" $text -> text]} { + return [list [string trim $text]] + } + + # And trip spaces off the end + set text [string trimright $text] + + set min 100 + # Examine each line to determine the minimum indent + foreach line [split $text \n] { + if {$line eq ""} { + # Ignore empty lines for the indent calculation + continue + } + regexp "^(\[ \t\]*)" $line -> indent + set len [string length $indent] + if {$len < $min} { + set min $len + } + } + + # Now make a list of lines with this indent removed + set lines {} + foreach line [split $text \n] { + lappend lines [string range $line $min end] + } + + # Return the result + return $lines +} +} + +# ----- module getopt ----- + +set modsource(getopt) { +# Copyright (c) 2006 WorkWare Systems http://www.workware.net.au/ +# All rights reserved + +# Simple getopt module + +# Parse everything out of the argv list which looks like an option +# Knows about --enable-thing and --disable-thing as alternatives for --thing=0 or --thing=1 +# Everything which doesn't look like an option, or is after --, is left unchanged +proc getopt {argvname} { + upvar $argvname argv + set nargv {} + + for {set i 0} {$i < [llength $argv]} {incr i} { + set arg [lindex $argv $i] + + #dputs arg=$arg + + if {$arg eq "--"} { + # End of options + incr i + lappend nargv {*}[lrange $argv $i end] + break + } + + if {[regexp {^--([^=][^=]+)=(.*)$} $arg -> name value]} { + lappend opts($name) $value + } elseif {[regexp {^--(enable-|disable-)?([^=]*)$} $arg -> prefix name]} { + if {$prefix eq "disable-"} { + set value 0 + } else { + set value 1 + } + lappend opts($name) $value + } else { + lappend nargv $arg + } + } + + #puts "getopt: argv=[join $argv] => [join $nargv]" + #parray opts + + set argv $nargv + + return [array get opts] +} + +proc opt_val {optarrayname options {default {}}} { + upvar $optarrayname opts + + set result {} + + foreach o $options { + if {[info exists opts($o)]} { + lappend result {*}$opts($o) + } + } + if {[llength $result] == 0} { + return $default + } + return $result +} + +proc opt_bool {optarrayname args} { + upvar $optarrayname opts + + # Support the args being passed as a list + if {[llength $args] == 1} { + set args [lindex $args 0] + } + + foreach o $args { + if {[info exists opts($o)]} { + if {"1" in $opts($o) || "yes" in $opts($o)} { + return 1 + } + } + } + return 0 +} +} + +# ----- module help ----- + +set modsource(help) { +# Copyright (c) 2010 WorkWare Systems http://workware.net.au/ +# All rights reserved + +# Module which provides usage, help and the command reference + +proc autosetup_help {what} { + use_pager + + puts "Usage: [file tail $::autosetup(exe)] \[options\] \[settings\]\n" + puts "This is [autosetup_version], a build environment \"autoconfigurator\"" + puts "See the documentation online at http://msteveb.github.com/autosetup/\n" + + if {$what eq "local"} { + if {[file exists $::autosetup(autodef)]} { + # This relies on auto.def having a call to 'options' + # which will display options and quit + source $::autosetup(autodef) + } else { + options-show + } + } else { + incr ::autosetup(showhelp) + if {[catch {use $what}]} { + user-error "Unknown module: $what" + } else { + options-show + } + } + exit 0 +} + +# If not already paged and stdout is a tty, pipe the output through the pager +# This is done by reinvoking autosetup with --nopager added +proc use_pager {} { + if {![opt-bool nopager] 
&& [getenv PAGER ""] ne "" && [isatty? stdin] && [isatty? stdout]} { + if {[catch { + exec [info nameofexecutable] $::argv0 --nopager {*}$::argv |& {*}[getenv PAGER] >@stdout <@stdin 2>@stderr + } msg opts] == 1} { + if {[dict get $opts -errorcode] eq "NONE"} { + # an internal/exec error + puts stderr $msg + exit 1 + } + } + exit 0 + } +} + +# Outputs the autosetup references in one of several formats +proc autosetup_reference {{type text}} { + + use_pager + + switch -glob -- $type { + wiki {use wiki-formatting} + ascii* {use asciidoc-formatting} + md - markdown {use markdown-formatting} + default {use text-formatting} + } + + title "[autosetup_version] -- Command Reference" + + section {Introduction} + + p { + See http://msteveb.github.com/autosetup/ for the online documentation for 'autosetup' + } + + p { + 'autosetup' provides a number of built-in commands which + are documented below. These may be used from 'auto.def' to test + for features, define variables, create files from templates and + other similar actions. + } + + automf_command_reference + + exit 0 +} + +proc autosetup_output_block {type lines} { + if {[llength $lines]} { + switch $type { + code { + codelines $lines + } + p { + p [join $lines] + } + list { + foreach line $lines { + bullet $line + } + nl + } + } + } +} + +# Generate a command reference from inline documentation +proc automf_command_reference {} { + lappend files $::autosetup(prog) + lappend files {*}[lsort [glob -nocomplain $::autosetup(libdir)/*.tcl]] + + section "Core Commands" + set type p + set lines {} + set cmd {} + + foreach file $files { + set f [open $file] + while {![eof $f]} { + set line [gets $f] + + # Find lines starting with "# @*" and continuing through the remaining comment lines + if {![regexp {^# @(.*)} $line -> cmd]} { + continue + } + + # Synopsis or command? + if {$cmd eq "synopsis:"} { + section "Module: [file rootname [file tail $file]]" + } else { + subsection $cmd + } + + set lines {} + set type p + + # Now the description + while {![eof $f]} { + set line [gets $f] + + if {![regexp {^#(#)? ?(.*)} $line -> hash cmd]} { + break + } + if {$hash eq "#"} { + set t code + } elseif {[regexp {^- (.*)} $cmd -> cmd]} { + set t list + } else { + set t p + } + + #puts "hash=$hash, oldhash=$oldhash, lines=[llength $lines], cmd=$cmd" + + if {$t ne $type || $cmd eq ""} { + # Finish the current block + autosetup_output_block $type $lines + set lines {} + set type $t + } + if {$cmd ne ""} { + lappend lines $cmd + } + } + + autosetup_output_block $type $lines + } + close $f + } +} +} + +# ----- module init ----- + +set modsource(init) { +# Copyright (c) 2010 WorkWare Systems http://www.workware.net.au/ +# All rights reserved + +# Module to help create auto.def and configure + +proc autosetup_init {type} { + set help 0 + if {$type in {? help}} { + incr help + } elseif {![dict exists $::autosetup(inittypes) $type]} { + puts "Unknown type, --init=$type" + incr help + } + if {$help} { + puts "Use one of the following types (e.g. 
--init=make)\n" + foreach type [lsort [dict keys $::autosetup(inittypes)]] { + lassign [dict get $::autosetup(inittypes) $type] desc + # XXX: Use the options-show code to wrap the description + puts [format "%-10s %s" $type $desc] + } + exit 0 + } + lassign [dict get $::autosetup(inittypes) $type] desc script + + puts "Initialising $type: $desc\n" + + # All initialisations happens in the top level srcdir + cd $::autosetup(srcdir) + + uplevel #0 $script + + exit 0 +} + +proc autosetup_add_init_type {type desc script} { + dict set ::autosetup(inittypes) $type [list $desc $script] +} + +# This is for in creating build-system init scripts +# +# If the file doesn't exist, create it containing $contents +# If the file does exist, only overwrite if --force is specified. +# +proc autosetup_check_create {filename contents} { + if {[file exists $filename]} { + if {!$::autosetup(force)} { + puts "I see $filename already exists." + return + } else { + puts "I will overwrite the existing $filename because you used --force." + } + } else { + puts "I don't see $filename, so I will create it." + } + writefile $filename $contents +} +} + +# ----- module install ----- + +set modsource(install) { +# Copyright (c) 2006-2010 WorkWare Systems http://www.workware.net.au/ +# All rights reserved + +# Module which can install autosetup + +proc autosetup_install {dir} { + if {[catch { + cd $dir + file mkdir autosetup + + set f [open autosetup/autosetup w] + + set publicmodules $::autosetup(libdir)/default.auto + + # First the main script, but only up until "CUT HERE" + set in [open $::autosetup(dir)/autosetup] + while {[gets $in buf] >= 0} { + if {$buf ne "##-- CUT HERE --##"} { + puts $f $buf + continue + } + + # Insert the static modules here + # i.e. those which don't contain @synopsis: + puts $f "set autosetup(installed) 1" + foreach file [lsort [glob $::autosetup(libdir)/*.tcl]] { + set buf [readfile $file] + if {[string match "*\n# @synopsis:*" $buf]} { + lappend publicmodules $file + continue + } + set modname [file rootname [file tail $file]] + puts $f "# ----- module $modname -----" + puts $f "\nset modsource($modname) \{" + puts $f $buf + puts $f "\}\n" + } + } + close $in + close $f + exec chmod 755 autosetup/autosetup + + # Install public modules + foreach file $publicmodules { + autosetup_install_file $file autosetup + } + + # Install support files + foreach file {config.guess config.sub jimsh0.c find-tclsh test-tclsh LICENSE} { + autosetup_install_file $::autosetup(dir)/$file autosetup + } + exec chmod 755 autosetup/config.sub autosetup/config.guess autosetup/find-tclsh + + writefile autosetup/README.autosetup \ + "This is [autosetup_version]. See http://msteveb.github.com/autosetup/\n" + + } error]} { + user-error "Failed to install autosetup: $error" + } + puts "Installed [autosetup_version] to autosetup/" + + # Now create 'configure' if necessary + autosetup_create_configure + + exit 0 +} + +proc autosetup_create_configure {} { + if {[file exists configure]} { + if {!$::autosetup(force)} { + # Could this be an autosetup configure? + if {![string match "*\nWRAPPER=*" [readfile configure]]} { + puts "I see configure, but not created by autosetup, so I won't overwrite it." + puts "Remove it or use --force to overwrite." + return + } + } else { + puts "I will overwrite the existing configure because you used --force." + } + } else { + puts "I don't see configure, so I will create it." 
+ } + writefile configure \ +{#!/bin/sh +dir="`dirname "$0"`/autosetup" +WRAPPER="$0"; export WRAPPER; exec "`$dir/find-tclsh`" "$dir/autosetup" "$@" +} + catch {exec chmod 755 configure} +} + +# Append the contents of $file to filehandle $f +proc autosetup_install_append {f file} { + set in [open $file] + puts $f [read $in] + close $in +} + +proc autosetup_install_file {file dir} { + if {![file exists $file]} { + error "Missing installation file '$file'" + } + writefile [file join $dir [file tail $file]] [readfile $file]\n +} + +if {$::autosetup(installed)} { + user-error "autosetup can only be installed from development source, not from installed copy" +} +} + +# ----- module markdown-formatting ----- + +set modsource(markdown-formatting) { +# Copyright (c) 2010 WorkWare Systems http://www.workware.net.au/ +# All rights reserved + +# Module which provides text formatting +# markdown format (kramdown syntax) + +use formatting + +proc para {text} { + regsub -all "\[ \t\n\]+" [string trim $text] " " text + regsub -all {([^a-zA-Z])'([^']*)'} $text {\1**`\2`**} text + regsub -all {^'([^']*)'} $text {**`\1`**} text + regsub -all {(http[^ \t\n]*)} $text {[\1](\1)} text + return $text +} +proc title {text} { + underline [para $text] = + nl +} +proc p {text} { + puts [para $text] + nl +} +proc codelines {lines} { + puts "~~~~~~~~~~~~" + foreach line $lines { + puts $line + } + puts "~~~~~~~~~~~~" + nl +} +proc code {text} { + puts "~~~~~~~~~~~~" + foreach line [parse_code_block $text] { + puts $line + } + puts "~~~~~~~~~~~~" + nl +} +proc nl {} { + puts "" +} +proc underline {text char} { + regexp "^(\[ \t\]*)(.*)" $text -> indent words + puts $text + puts $indent[string repeat $char [string length $words]] +} +proc section {text} { + underline "[para $text]" - + nl +} +proc subsection {text} { + puts "### `$text`" + nl +} +proc bullet {text} { + puts "* [para $text]" +} +proc defn {first args} { + puts "^" + set defn [string trim [join $args \n]] + if {$first ne ""} { + puts "**${first}**" + puts -nonewline ": " + regsub -all "\n\n" $defn "\n: " defn + } + puts "$defn" +} +} + +# ----- module misc ----- + +set modsource(misc) { +# Copyright (c) 2007-2010 WorkWare Systems http://www.workware.net.au/ +# All rights reserved + +# Module containing misc procs useful to modules +# Largely for platform compatibility + +set autosetup(istcl) [info exists ::tcl_library] +set autosetup(iswin) [string equal windows $tcl_platform(platform)] + +if {$autosetup(iswin)} { + # mingw/windows separates $PATH with semicolons + # and doesn't have an executable bit + proc split-path {} { + split [getenv PATH .] {;} + } + proc file-isexec {exec} { + # Basic test for windows. We ignore .bat + if {[file isfile $exec] || [file isfile $exec.exe]} { + return 1 + } + return 0 + } +} else { + # unix separates $PATH with colons and has and executable bit + proc split-path {} { + split [getenv PATH .] : + } + proc file-isexec {exec} { + file executable $exec + } +} + +# Assume that exec can return stdout and stderr +proc exec-with-stderr {args} { + exec {*}$args 2>@1 +} + +if {$autosetup(istcl)} { + # Tcl doesn't have the env command + proc getenv {name args} { + if {[info exists ::env($name)]} { + return $::env($name) + } + if {[llength $args]} { + return [lindex $args 0] + } + return -code error "environment variable \"$name\" does not exist" + } + proc isatty? 
{channel} { + dict exists [fconfigure $channel] -xchar + } +} else { + if {$autosetup(iswin)} { + # On Windows, backslash convert all environment variables + # (Assume that Tcl does this for us) + proc getenv {name args} { + string map {\\ /} [env $name {*}$args] + } + } else { + # Jim on unix is simple + alias getenv env + } + proc isatty? {channel} { + set tty 0 + catch { + # isatty is a recent addition to Jim Tcl + set tty [$channel isatty] + } + return $tty + } +} + +# In case 'file normalize' doesn't exist +# +proc file-normalize {path} { + if {[catch {file normalize $path} result]} { + if {$path eq ""} { + return "" + } + set oldpwd [pwd] + if {[file isdir $path]} { + cd $path + set result [pwd] + } else { + cd [file dirname $path] + set result [file join [pwd] [file tail $path]] + } + cd $oldpwd + } + return $result +} + +# If everything is working properly, the only errors which occur +# should be generated in user code (e.g. auto.def). +# By default, we only want to show the error location in user code. +# We use [info frame] to achieve this, but it works differently on Tcl and Jim. +# +# This is designed to be called for incorrect usage in auto.def, via autosetup-error +# +proc error-location {msg} { + if {$::autosetup(debug)} { + return -code error $msg + } + # Search back through the stack trace for the first error in a .def file + for {set i 1} {$i < [info level]} {incr i} { + if {$::autosetup(istcl)} { + array set info [info frame -$i] + } else { + lassign [info frame -$i] info(caller) info(file) info(line) + } + if {[string match *.def $info(file)]} { + return "[relative-path $info(file)]:$info(line): Error: $msg" + } + #puts "Skipping $info(file):$info(line)" + } + return $msg +} + +# If everything is working properly, the only errors which occur +# should be generated in user code (e.g. auto.def). +# By default, we only want to show the error location in user code. +# We use [info frame] to achieve this, but it works differently on Tcl and Jim. +# +# This is designed to be called for incorrect usage in auto.def, via autosetup-error +# +proc error-stacktrace {msg} { + if {$::autosetup(debug)} { + return -code error $msg + } + # Search back through the stack trace for the first error in a .def file + for {set i 1} {$i < [info level]} {incr i} { + if {$::autosetup(istcl)} { + array set info [info frame -$i] + } else { + lassign [info frame -$i] info(caller) info(file) info(line) + } + if {[string match *.def $info(file)]} { + return "[relative-path $info(file)]:$info(line): Error: $msg" + } + #puts "Skipping $info(file):$info(line)" + } + return $msg +} + +# Given the return from [catch {...} msg opts], returns an appropriate +# error message. A nice one for Jim and a less-nice one for Tcl. +# If 'fulltrace' is set, a full stack trace is provided. +# Otherwise a simple message is provided. +# +# This is designed for developer errors, e.g. 
in module code or auto.def code +# +# +proc error-dump {msg opts fulltrace} { + if {$::autosetup(istcl)} { + if {$fulltrace} { + return "Error: [dict get $opts -errorinfo]" + } else { + return "Error: $msg" + } + } else { + lassign $opts(-errorinfo) p f l + if {$f ne ""} { + set result "$f:$l: Error: " + } + append result "$msg\n" + if {$fulltrace} { + append result [stackdump $opts(-errorinfo)] + } + + # Remove the trailing newline + string trim $result + } +} +} + +# ----- module text-formatting ----- + +set modsource(text-formatting) { +# Copyright (c) 2010 WorkWare Systems http://www.workware.net.au/ +# All rights reserved + +# Module which provides text formatting + +use formatting + +proc wordwrap {text length {firstprefix ""} {nextprefix ""}} { + set len 0 + set space $firstprefix + foreach word [split $text] { + set word [string trim $word] + if {$word == ""} { + continue + } + if {$len && [string length $space$word] + $len >= $length} { + puts "" + set len 0 + set space $nextprefix + } + incr len [string length $space$word] + + # Use man-page conventions for highlighting 'quoted' and *quoted* + # single words. + # Use x^Hx for *bold* and _^Hx for 'underline'. + # + # less and more will both understand this. + # Pipe through 'col -b' to remove them. + if {[regexp {^'(.*)'([^a-zA-Z0-9_]*)$} $word -> bareword dot]} { + regsub -all . $bareword "_\b&" word + append word $dot + } elseif {[regexp {^[*](.*)[*]([^a-zA-Z0-9_]*)$} $word -> bareword dot]} { + regsub -all . $bareword "&\b&" word + append word $dot + } + puts -nonewline $space$word + set space " " + } + if {$len} { + puts "" + } +} +proc title {text} { + underline [string trim $text] = + nl +} +proc p {text} { + wordwrap $text 80 + nl +} +proc codelines {lines} { + foreach line $lines { + puts " $line" + } + nl +} +proc nl {} { + puts "" +} +proc underline {text char} { + regexp "^(\[ \t\]*)(.*)" $text -> indent words + puts $text + puts $indent[string repeat $char [string length $words]] +} +proc section {text} { + underline "[string trim $text]" - + nl +} +proc subsection {text} { + underline "$text" ~ + nl +} +proc bullet {text} { + wordwrap $text 76 " * " " " +} +proc indent {text} { + wordwrap $text 76 " " " " +} +proc defn {first args} { + if {$first ne ""} { + underline " $first" ~ + } + foreach p $args { + if {$p ne ""} { + indent $p + } + } +} +} + +# ----- module wiki-formatting ----- + +set modsource(wiki-formatting) { +# Copyright (c) 2010 WorkWare Systems http://www.workware.net.au/ +# All rights reserved + +# Module which provides text formatting +# wiki.tcl.tk format output + +use formatting + +proc joinlines {text} { + set lines {} + foreach l [split [string trim $text] \n] { + lappend lines [string trim $l] + } + join $lines +} +proc p {text} { + puts [joinlines $text] + puts "" +} +proc title {text} { + puts "*** [joinlines $text] ***" + puts "" +} +proc codelines {lines} { + puts "======" + foreach line $lines { + puts " $line" + } + puts "======" +} +proc code {text} { + puts "======" + foreach line [parse_code_block $text] { + puts " $line" + } + puts "======" +} +proc nl {} { +} +proc section {text} { + puts "'''$text'''" + puts "" +} +proc subsection {text} { + puts "''$text''" + puts "" +} +proc bullet {text} { + puts " * [joinlines $text]" +} +proc indent {text} { + puts " : [joinlines $text]" +} +proc defn {first args} { + if {$first ne ""} { + indent '''$first''' + } + + foreach p $args { + p $p + } +} +} + + +################################################################## +# +# Entry/Exit +# +if 
{$autosetup(debug)} { + main $argv +} +if {[catch {main $argv} msg opts] == 1} { + show-notices + autosetup-full-error [error-dump $msg $opts $::autosetup(debug)] + if {!$autosetup(debug)} { + puts stderr "Try: '[file tail $autosetup(exe)] --debug' for a full stack trace" + } + exit 1 +} ADDED autosetup/cc-db.tcl Index: autosetup/cc-db.tcl ================================================================== --- autosetup/cc-db.tcl +++ autosetup/cc-db.tcl @@ -0,0 +1,15 @@ +# Copyright (c) 2011 WorkWare Systems http://www.workware.net.au/ +# All rights reserved + +# @synopsis: +# +# The 'cc-db' module provides a knowledge based of system idiosyncracies +# In general, this module can always be included + +use cc + +module-options {} + +# openbsd needs sys/types.h to detect some system headers +cc-include-needs sys/socket.h sys/types.h +cc-include-needs netinet/in.h sys/types.h ADDED autosetup/cc-lib.tcl Index: autosetup/cc-lib.tcl ================================================================== --- autosetup/cc-lib.tcl +++ autosetup/cc-lib.tcl @@ -0,0 +1,161 @@ +# Copyright (c) 2011 WorkWare Systems http://www.workware.net.au/ +# All rights reserved + +# @synopsis: +# +# Provides a library of common tests on top of the 'cc' module. + +use cc + +module-options {} + +# @cc-check-lfs +# +# The equivalent of the AC_SYS_LARGEFILE macro +# +# defines 'HAVE_LFS' if LFS is available, +# and defines '_FILE_OFFSET_BITS=64' if necessary +# +# Returns 1 if 'LFS' is available or 0 otherwise +# +proc cc-check-lfs {} { + cc-check-includes sys/types.h + msg-checking "Checking if -D_FILE_OFFSET_BITS=64 is needed..." + set lfs 1 + if {[msg-quiet cc-with {-includes sys/types.h} {cc-check-sizeof off_t}] == 8} { + msg-result no + } elseif {[msg-quiet cc-with {-includes sys/types.h -cflags -D_FILE_OFFSET_BITS=64} {cc-check-sizeof off_t}] == 8} { + define _FILE_OFFSET_BITS 64 + msg-result yes + } else { + set lfs 0 + msg-result none + } + define-feature lfs $lfs + return $lfs +} + +# @cc-check-endian +# +# The equivalent of the AC_C_BIGENDIAN macro +# +# defines 'HAVE_BIG_ENDIAN' if endian is known to be big, +# or 'HAVE_LITTLE_ENDIAN' if endian is known to be little. +# +# Returns 1 if determined, or 0 if not. +# +proc cc-check-endian {} { + cc-check-includes sys/types.h sys/param.h + set rc 0 + msg-checking "Checking endian..." + cc-with {-includes {sys/types.h sys/param.h}} { + if {[cctest -code { + #if !defined(BIG_ENDIAN) || !defined(BYTE_ORDER) + #error unknown + #elif BYTE_ORDER != BIG_ENDIAN + #error little + #endif + }]} { + define-feature big-endian + msg-result "big" + set rc 1 + } elseif {[cctest -code { + #if !defined(LITTLE_ENDIAN) || !defined(BYTE_ORDER) + #error unknown + #elif BYTE_ORDER != LITTLE_ENDIAN + #error big + #endif + }]} { + define-feature little-endian + msg-result "little" + set rc 1 + } else { + msg-result "unknown" + } + } + return $rc +} + +# @cc-check-flags flag ?...? +# +# Checks whether the given C/C++ compiler flags can be used. Defines feature +# names prefixed with 'HAVE_CFLAG' and 'HAVE_CXXFLAG' respectively, and +# appends working flags to '-cflags' and 'CFLAGS' or 'CXXFLAGS'. +proc cc-check-flags {args} { + set result 1 + array set opts [cc-get-settings] + switch -exact -- $opts(-lang) { + c++ { + set lang C++ + set prefix CXXFLAG + } + c { + set lang C + set prefix CFLAG + } + default { + autosetup-error "cc-check-flags failed with unknown language: $opts(-lang)" + } + } + foreach flag $args { + msg-checking "Checking whether the $lang compiler accepts $flag..." 
+ if {[cctest -cflags $flag]} { + msg-result yes + define-feature $prefix$flag + cc-with [list -cflags [list $flag]] + define-append ${prefix}S $flag + } else { + msg-result no + set result 0 + } + } + return $result +} + +# @cc-check-standards ver ?...? +# +# Checks whether the C/C++ compiler accepts one of the specified '-std=$ver' +# options, and appends the first working one to '-cflags' and 'CFLAGS' or +# 'CXXFLAGS'. +proc cc-check-standards {args} { + array set opts [cc-get-settings] + foreach std $args { + if {[cc-check-flags -std=$std]} { + return $std + } + } + return "" +} + +# Checks whether $keyword is usable as alignof +proc cctest_alignof {keyword} { + msg-checking "Checking for $keyword..." + if {[cctest -code [subst -nobackslashes { + printf("minimum alignment is %d == %d\n", ${keyword}(char), ${keyword}('x')); + }]]} then { + msg-result ok + define-feature $keyword + } else { + msg-result "not found" + } +} + +# @cc-check-c11 +# +# Checks for several C11/C++11 extensions and their alternatives. Currently +# checks for '_Static_assert', '_Alignof', '__alignof__', '__alignof'. +proc cc-check-c11 {} { + msg-checking "Checking for _Static_assert..." + if {[cctest -code { + _Static_assert(1, "static assertions are available"); + }]} then { + msg-result ok + define-feature _Static_assert + } else { + msg-result "not found" + } + + cctest_alignof _Alignof + cctest_alignof __alignof__ + cctest_alignof __alignof +} ADDED autosetup/cc-shared.tcl Index: autosetup/cc-shared.tcl ================================================================== --- autosetup/cc-shared.tcl +++ autosetup/cc-shared.tcl @@ -0,0 +1,112 @@ +# Copyright (c) 2010 WorkWare Systems http://www.workware.net.au/ +# All rights reserved + +# @synopsis: +# +# The 'cc-shared' module provides support for shared libraries and shared objects. 
+# It defines the following variables: +# +## SH_CFLAGS Flags to use compiling sources destined for a shared library +## SH_LDFLAGS Flags to use linking (creating) a shared library +## SH_SOPREFIX Prefix to use to set the soname when creating a shared library +## SH_SOEXT Extension for shared libs +## SH_SOEXTVER Format for versioned shared libs - %s = version +## SHOBJ_CFLAGS Flags to use compiling sources destined for a shared object +## SHOBJ_LDFLAGS Flags to use linking a shared object, undefined symbols allowed +## SHOBJ_LDFLAGS_R - as above, but all symbols must be resolved +## SH_LINKFLAGS Flags to use linking an executable which will load shared objects +## LD_LIBRARY_PATH Environment variable which specifies path to shared libraries +## STRIPLIBFLAGS Arguments to strip to strip a dynamic library + +module-options {} + +# Defaults: gcc on unix +define SHOBJ_CFLAGS -fpic +define SHOBJ_LDFLAGS -shared +define SH_CFLAGS -fpic +define SH_LDFLAGS -shared +define SH_LINKFLAGS -rdynamic +define SH_SOEXT .so +define SH_SOEXTVER .so.%s +define SH_SOPREFIX -Wl,-soname, +define LD_LIBRARY_PATH LD_LIBRARY_PATH +define STRIPLIBFLAGS --strip-unneeded + +# Note: This is a helpful reference for identifying the toolchain +# http://sourceforge.net/apps/mediawiki/predef/index.php?title=Compilers + +switch -glob -- [get-define host] { + *-*-darwin* { + define SHOBJ_CFLAGS "-dynamic -fno-common" + define SHOBJ_LDFLAGS "-bundle -undefined dynamic_lookup" + define SHOBJ_LDFLAGS_R -bundle + define SH_CFLAGS -dynamic + define SH_LDFLAGS -dynamiclib + define SH_LINKFLAGS "" + define SH_SOEXT .dylib + define SH_SOEXTVER .%s.dylib + define SH_SOPREFIX -Wl,-install_name, + define LD_LIBRARY_PATH DYLD_LIBRARY_PATH + define STRIPLIBFLAGS -x + } + *-*-ming* - *-*-cygwin - *-*-msys { + define SHOBJ_CFLAGS "" + define SHOBJ_LDFLAGS -shared + define SH_CFLAGS "" + define SH_LDFLAGS -shared + define SH_LINKFLAGS "" + define SH_SOEXT .dll + define SH_SOEXTVER .dll + define SH_SOPREFIX "" + define LD_LIBRARY_PATH PATH + } + sparc* { + if {[msg-quiet cc-check-decls __SUNPRO_C]} { + msg-result "Found sun stdio compiler" + # sun stdio compiler + # XXX: These haven't been fully tested. + define SHOBJ_CFLAGS -KPIC + define SHOBJ_LDFLAGS "-G" + define SH_CFLAGS -KPIC + define SH_LINKFLAGS -Wl,-export-dynamic + define SH_SOPREFIX -Wl,-h, + } else { + # sparc has a very small GOT table limit, so use -fPIC + define SH_CFLAGS -fPIC + define SHOBJ_CFLAGS -fPIC + } + } + *-*-solaris* { + if {[msg-quiet cc-check-decls __SUNPRO_C]} { + msg-result "Found sun stdio compiler" + # sun stdio compiler + # XXX: These haven't been fully tested. 
+ define SHOBJ_CFLAGS -KPIC + define SHOBJ_LDFLAGS "-G" + define SH_CFLAGS -KPIC + define SH_LINKFLAGS -Wl,-export-dynamic + define SH_SOPREFIX -Wl,-h, + } + } + *-*-hpux { + # XXX: These haven't been tested + define SHOBJ_CFLAGS "+O3 +z" + define SHOBJ_LDFLAGS -b + define SH_CFLAGS +z + define SH_LINKFLAGS -Wl,+s + define LD_LIBRARY_PATH SHLIB_PATH + } + *-*-haiku { + define SHOBJ_CFLAGS "" + define SHOBJ_LDFLAGS -shared + define SH_CFLAGS "" + define SH_LDFLAGS -shared + define SH_LINKFLAGS "" + define SH_SOPREFIX "" + define LD_LIBRARY_PATH LIBRARY_PATH + } +} + +if {![is-defined SHOBJ_LDFLAGS_R]} { + define SHOBJ_LDFLAGS_R [get-define SHOBJ_LDFLAGS] +} ADDED autosetup/cc.tcl Index: autosetup/cc.tcl ================================================================== --- autosetup/cc.tcl +++ autosetup/cc.tcl @@ -0,0 +1,699 @@ +# Copyright (c) 2010 WorkWare Systems http://www.workware.net.au/ +# All rights reserved + +# @synopsis: +# +# The 'cc' module supports checking various 'features' of the C or C++ +# compiler/linker environment. Common commands are cc-check-includes, +# cc-check-types, cc-check-functions, cc-with, make-autoconf-h and make-template. +# +# The following environment variables are used if set: +# +## CC - C compiler +## CXX - C++ compiler +## CCACHE - Set to "none" to disable automatic use of ccache +## CFLAGS - Additional C compiler flags +## CXXFLAGS - Additional C++ compiler flags +## LDFLAGS - Additional compiler flags during linking +## LIBS - Additional libraries to use (for all tests) +## CROSS - Tool prefix for cross compilation +# +# The following variables are defined from the corresponding +# environment variables if set. +# +## CPPFLAGS +## LINKFLAGS +## CC_FOR_BUILD +## LD + +use system + +module-options {} + +# Note that the return code is not meaningful +proc cc-check-something {name code} { + uplevel 1 $code +} + +# Checks for the existence of the given function by linking +# +proc cctest_function {function} { + cctest -link 1 -declare "extern void $function\(void);" -code "$function\();" +} + +# Checks for the existence of the given type by compiling +proc cctest_type {type} { + cctest -code "$type _x;" +} + +# Checks for the existence of the given type/structure member. +# e.g. "struct stat.st_mtime" +proc cctest_member {struct_member} { + lassign [split $struct_member .] struct member + cctest -code "static $struct _s; return sizeof(_s.$member);" +} + +# Checks for the existence of the given define by compiling +# +proc cctest_define {name} { + cctest -code "#ifndef $name\n#error not defined\n#endif" +} + +# Checks for the existence of the given name either as +# a macro (#define) or an rvalue (such as an enum) +# +proc cctest_decl {name} { + cctest -code "#ifndef $name\n(void)$name;\n#endif" +} + +# @cc-check-sizeof type ... +# +# Checks the size of the given types (between 1 and 32, inclusive). +# Defines a variable with the size determined, or "unknown" otherwise. +# e.g. for type 'long long', defines SIZEOF_LONG_LONG. +# Returns the size of the last type. +# +proc cc-check-sizeof {args} { + foreach type $args { + msg-checking "Checking for sizeof $type..." + set size unknown + # Try the most common sizes first + foreach i {4 8 1 2 16 32} { + if {[cctest -code "static int _x\[sizeof($type) == $i ? 
1 : -1\] = { 1 };"]} { + set size $i + break + } + } + msg-result $size + set define [feature-define-name $type SIZEOF_] + define $define $size + } + # Return the last result + get-define $define +} + +# Checks for each feature in $list by using the given script. +# +# When the script is evaluated, $each is set to the feature +# being checked, and $extra is set to any additional cctest args. +# +# Returns 1 if all features were found, or 0 otherwise. +proc cc-check-some-feature {list script} { + set ret 1 + foreach each $list { + if {![check-feature $each $script]} { + set ret 0 + } + } + return $ret +} + +# @cc-check-includes includes ... +# +# Checks that the given include files can be used +proc cc-check-includes {args} { + cc-check-some-feature $args { + set with {} + if {[dict exists $::autosetup(cc-include-deps) $each]} { + set deps [dict keys [dict get $::autosetup(cc-include-deps) $each]] + msg-quiet cc-check-includes {*}$deps + foreach i $deps { + if {[have-feature $i]} { + lappend with $i + } + } + } + if {[llength $with]} { + cc-with [list -includes $with] { + cctest -includes $each + } + } else { + cctest -includes $each + } + } +} + +# @cc-include-needs include required ... +# +# Ensures that when checking for 'include', a check is first +# made for each 'required' file, and if found, it is #included +proc cc-include-needs {file args} { + foreach depfile $args { + dict set ::autosetup(cc-include-deps) $file $depfile 1 + } +} + +# @cc-check-types type ... +# +# Checks that the types exist. +proc cc-check-types {args} { + cc-check-some-feature $args { + cctest_type $each + } +} + +# @cc-check-defines define ... +# +# Checks that the given preprocessor symbol is defined +proc cc-check-defines {args} { + cc-check-some-feature $args { + cctest_define $each + } +} + +# @cc-check-decls name ... +# +# Checks that each given name is either a preprocessor symbol or rvalue +# such as an enum. Note that the define used for a decl is HAVE_DECL_xxx +# rather than HAVE_xxx +proc cc-check-decls {args} { + set ret 1 + foreach name $args { + msg-checking "Checking for $name..." + set r [cctest_decl $name] + define-feature "decl $name" $r + if {$r} { + msg-result "ok" + } else { + msg-result "not found" + set ret 0 + } + } + return $ret +} + +# @cc-check-functions function ... +# +# Checks that the given functions exist (can be linked) +proc cc-check-functions {args} { + cc-check-some-feature $args { + cctest_function $each + } +} + +# @cc-check-members type.member ... +# +# Checks that the given type/structure members exist. +# A structure member is of the form "struct stat.st_mtime" +proc cc-check-members {args} { + cc-check-some-feature $args { + cctest_member $each + } +} + +# @cc-check-function-in-lib function libs ?otherlibs? +# +# Checks that the given given function can be found in one of the libs. +# +# First checks for no library required, then checks each of the libraries +# in turn. +# +# If the function is found, the feature is defined and lib_$function is defined +# to -l$lib where the function was found, or "" if no library required. +# In addition, -l$lib is added to the LIBS define. +# +# If additional libraries may be needed for linking, they should be specified +# as $extralibs as "-lotherlib1 -lotherlib2". +# These libraries are not automatically added to LIBS. +# +# Returns 1 if found or 0 if not. +# +proc cc-check-function-in-lib {function libs {otherlibs {}}} { + msg-checking "Checking libs for $function..." 
+ set found 0 + cc-with [list -libs $otherlibs] { + if {[cctest_function $function]} { + msg-result "none needed" + define lib_$function "" + incr found + } else { + foreach lib $libs { + cc-with [list -libs -l$lib] { + if {[cctest_function $function]} { + msg-result -l$lib + define lib_$function -l$lib + define-append LIBS -l$lib + incr found + break + } + } + } + } + } + if {$found} { + define [feature-define-name $function] + } else { + msg-result "no" + } + return $found +} + +# @cc-check-tools tool ... +# +# Checks for existence of the given compiler tools, taking +# into account any cross compilation prefix. +# +# For example, when checking for "ar", first AR is checked on the command +# line and then in the environment. If not found, "${host}-ar" or +# simply "ar" is assumed depending upon whether cross compiling. +# The path is searched for this executable, and if found AR is defined +# to the executable name. +# Note that even when cross compiling, the simple "ar" is used as a fallback, +# but a warning is generated. This is necessary for some toolchains. +# +# It is an error if the executable is not found. +# +proc cc-check-tools {args} { + foreach tool $args { + set TOOL [string toupper $tool] + set exe [get-env $TOOL [get-define cross]$tool] + if {[find-executable {*}$exe]} { + define $TOOL $exe + continue + } + if {[find-executable {*}$tool]} { + msg-result "Warning: Failed to find $exe, falling back to $tool which may be incorrect" + define $TOOL $tool + continue + } + user-error "Failed to find $exe" + } +} + +# @cc-check-progs prog ... +# +# Checks for existence of the given executables on the path. +# +# For example, when checking for "grep", the path is searched for +# the executable, 'grep', and if found GREP is defined as "grep". +# +# It the executable is not found, the variable is defined as false. +# Returns 1 if all programs were found, or 0 otherwise. +# +proc cc-check-progs {args} { + set failed 0 + foreach prog $args { + set PROG [string toupper $prog] + msg-checking "Checking for $prog..." + if {![find-executable $prog]} { + msg-result no + define $PROG false + incr failed + } else { + msg-result ok + define $PROG $prog + } + } + expr {!$failed} +} + +# Adds the given settings to $::autosetup(ccsettings) and +# returns the old settings. +# +proc cc-add-settings {settings} { + if {[llength $settings] % 2} { + autosetup-error "settings list is missing a value: $settings" + } + + set prev [cc-get-settings] + # workaround a bug in some versions of jimsh by forcing + # conversion of $prev to a list + llength $prev + + array set new $prev + + foreach {name value} $settings { + switch -exact -- $name { + -cflags - -includes { + # These are given as lists + lappend new($name) {*}$value + } + -declare { + lappend new($name) $value + } + -libs { + # Note that new libraries are added before previous libraries + set new($name) [list {*}$value {*}$new($name)] + } + -link - -lang - -nooutput { + set new($name) $value + } + -source - -sourcefile - -code { + # XXX: These probably are only valid directly from cctest + set new($name) $value + } + default { + autosetup-error "unknown cctest setting: $name" + } + } + } + + cc-store-settings [array get new] + + return $prev +} + +proc cc-store-settings {new} { + set ::autosetup(ccsettings) $new +} + +proc cc-get-settings {} { + return $::autosetup(ccsettings) +} + +# Similar to cc-add-settings, but each given setting +# simply replaces the existing value. 
+# +# Returns the previous settings +proc cc-update-settings {args} { + set prev [cc-get-settings] + cc-store-settings [dict merge $prev $args] + return $prev +} + +# @cc-with settings ?{ script }? +# +# Sets the given 'cctest' settings and then runs the tests in 'script'. +# Note that settings such as -lang replace the current setting, while +# those such as -includes are appended to the existing setting. +# +# If no script is given, the settings become the default for the remainder +# of the auto.def file. +# +## cc-with {-lang c++} { +## # This will check with the C++ compiler +## cc-check-types bool +## cc-with {-includes signal.h} { +## # This will check with the C++ compiler, signal.h and any existing includes. +## ... +## } +## # back to just the C++ compiler +## } +# +# The -libs setting is special in that newer values are added *before* earlier ones. +# +## cc-with {-libs {-lc -lm}} { +## cc-with {-libs -ldl} { +## cctest -libs -lsocket ... +## # libs will be in this order: -lsocket -ldl -lc -lm +## } +## } +proc cc-with {settings args} { + if {[llength $args] == 0} { + cc-add-settings $settings + } elseif {[llength $args] > 1} { + autosetup-error "usage: cc-with settings ?script?" + } else { + set save [cc-add-settings $settings] + set rc [catch {uplevel 1 [lindex $args 0]} result info] + cc-store-settings $save + if {$rc != 0} { + return -code [dict get $info -code] $result + } + return $result + } +} + +# @cctest ?settings? +# +# Low level C compiler checker. Compiles and or links a small C program +# according to the arguments and returns 1 if OK, or 0 if not. +# +# Supported settings are: +# +## -cflags cflags A list of flags to pass to the compiler +## -includes list A list of includes, e.g. {stdlib.h stdio.h} +## -declare code Code to declare before main() +## -link 1 Don't just compile, link too +## -lang c|c++ Use the C (default) or C++ compiler +## -libs liblist List of libraries to link, e.g. {-ldl -lm} +## -code code Code to compile in the body of main() +## -source code Compile a complete program. Ignore -includes, -declare and -code +## -sourcefile file Shorthand for -source [readfile [get-define srcdir]/$file] +## -nooutput 1 Treat any compiler output (e.g. 
a warning) as an error +# +# Unless -source or -sourcefile is specified, the C program looks like: +# +## #include /* same for remaining includes in the list */ +## +## declare-code /* any code in -declare, verbatim */ +## +## int main(void) { +## code /* any code in -code, verbatim */ +## return 0; +## } +# +# Any failures are recorded in 'config.log' +# +proc cctest {args} { + set src conftest__.c + set tmp conftest__ + + # Easiest way to merge in the settings + cc-with $args { + array set opts [cc-get-settings] + } + + if {[info exists opts(-sourcefile)]} { + set opts(-source) [readfile [get-define srcdir]/$opts(-sourcefile) "#error can't find $opts(-sourcefile)"] + } + if {[info exists opts(-source)]} { + set lines $opts(-source) + } else { + foreach i $opts(-includes) { + if {$opts(-code) ne "" && ![feature-checked $i]} { + # Compiling real code with an unchecked header file + # Quickly (and silently) check for it now + + # Remove all -includes from settings before checking + set saveopts [cc-update-settings -includes {}] + msg-quiet cc-check-includes $i + cc-store-settings $saveopts + } + if {$opts(-code) eq "" || [have-feature $i]} { + lappend source "#include <$i>" + } + } + lappend source {*}$opts(-declare) + lappend source "int main(void) {" + lappend source $opts(-code) + lappend source "return 0;" + lappend source "}" + + set lines [join $source \n] + } + + # Build the command line + set cmdline {} + lappend cmdline {*}[get-define CCACHE] + switch -exact -- $opts(-lang) { + c++ { + lappend cmdline {*}[get-define CXX] {*}[get-define CXXFLAGS] + } + c { + lappend cmdline {*}[get-define CC] {*}[get-define CFLAGS] + } + default { + autosetup-error "cctest called with unknown language: $opts(-lang)" + } + } + + if {!$opts(-link)} { + set tmp conftest__.o + lappend cmdline -c + } + lappend cmdline {*}$opts(-cflags) {*}[get-define cc-default-debug ""] + + lappend cmdline $src -o $tmp {*}$opts(-libs) + + # At this point we have the complete command line and the + # complete source to be compiled. Get the result from cache if + # we can + if {[info exists ::cc_cache($cmdline,$lines)]} { + msg-checking "(cached) " + set ok $::cc_cache($cmdline,$lines) + if {$::autosetup(debug)} { + configlog "From cache (ok=$ok): [join $cmdline]" + configlog "============" + configlog $lines + configlog "============" + } + return $ok + } + + writefile $src $lines\n + + set ok 1 + set err [catch {exec-with-stderr {*}$cmdline} result errinfo] + if {$err || ($opts(-nooutput) && [string length $result])} { + configlog "Failed: [join $cmdline]" + configlog $result + configlog "============" + configlog "The failed code was:" + configlog $lines + configlog "============" + set ok 0 + } elseif {$::autosetup(debug)} { + configlog "Compiled OK: [join $cmdline]" + configlog "============" + configlog $lines + configlog "============" + } + file delete $src + file delete $tmp + + # cache it + set ::cc_cache($cmdline,$lines) $ok + + return $ok +} + +# @make-autoconf-h outfile ?auto-patterns=HAVE_*? ?bare-patterns=SIZEOF_*? +# +# Deprecated - see make-config-header +proc make-autoconf-h {file {autopatterns {HAVE_*}} {barepatterns {SIZEOF_* HAVE_DECL_*}}} { + user-notice "*** make-autoconf-h is deprecated -- use make-config-header instead" + make-config-header $file -auto $autopatterns -bare $barepatterns +} + +# @make-config-header outfile ?-auto patternlist? ?-bare patternlist? ?-none patternlist? ?-str patternlist? ... 
+# +# Examines all defined variables which match the given patterns +# and writes an include file, $file, which defines each of these. +# Variables which match '-auto' are output as follows: +# - defines which have the value "0" are ignored. +# - defines which have integer values are defined as the integer value. +# - any other value is defined as a string, e.g. "value" +# Variables which match '-bare' are defined as-is. +# Variables which match '-str' are defined as a string, e.g. "value" +# Variables which match '-none' are omitted. +# +# Note that order is important. The first pattern which matches is selected +# Default behaviour is: +# +# -bare {SIZEOF_* HAVE_DECL_*} -auto HAVE_* -none * +# +# If the file would be unchanged, it is not written. +proc make-config-header {file args} { + set guard _[string toupper [regsub -all {[^a-zA-Z0-9]} [file tail $file] _]] + file mkdir [file dirname $file] + set lines {} + lappend lines "#ifndef $guard" + lappend lines "#define $guard" + + # Add some defaults + lappend args -bare {SIZEOF_* HAVE_DECL_*} -auto HAVE_* + + foreach n [lsort [dict keys [all-defines]]] { + set value [get-define $n] + set type [calc-define-output-type $n $args] + switch -exact -- $type { + -bare { + # Just output the value unchanged + } + -none { + continue + } + -str { + set value \"[string map [list \\ \\\\ \" \\\"] $value]\" + } + -auto { + # Automatically determine the type + if {$value eq "0"} { + lappend lines "/* #undef $n */" + continue + } + if {![string is integer -strict $value]} { + set value \"[string map [list \\ \\\\ \" \\\"] $value]\" + } + } + "" { + continue + } + default { + autosetup-error "Unknown type in make-config-header: $type" + } + } + lappend lines "#define $n $value" + } + lappend lines "#endif" + set buf [join $lines \n] + write-if-changed $file $buf { + msg-result "Created $file" + } +} + +proc calc-define-output-type {name spec} { + foreach {type patterns} $spec { + foreach pattern $patterns { + if {[string match $pattern $name]} { + return $type + } + } + } + return "" +} + +# Initialise some values from the environment or commandline or default settings +foreach i {LDFLAGS LIBS CPPFLAGS LINKFLAGS {CFLAGS "-g -O2"}} { + lassign $i var default + define $var [get-env $var $default] +} + +if {[env-is-set CC]} { + # Set by the user, so don't try anything else + set try [list [get-env CC ""]] +} else { + # Try some reasonable options + set try [list [get-define cross]cc [get-define cross]gcc] +} +define CC [find-an-executable {*}$try] +if {[get-define CC] eq ""} { + user-error "Could not find a C compiler. Tried: [join $try ", "]" +} + +define CPP [get-env CPP "[get-define CC] -E"] + +# XXX: Could avoid looking for a C++ compiler until requested +# Note that if CXX isn't found, we just set it to "false". It might not be needed. +if {[env-is-set CXX]} { + define CXX [find-an-executable -required [get-env CXX ""]] +} else { + define CXX [find-an-executable [get-define cross]c++ [get-define cross]g++ false] +} + +# CXXFLAGS default to CFLAGS if not specified +define CXXFLAGS [get-env CXXFLAGS [get-define CFLAGS]] + +# May need a CC_FOR_BUILD, so look for one +define CC_FOR_BUILD [find-an-executable [get-env CC_FOR_BUILD ""] cc gcc false] + +if {[get-define CC] eq ""} { + user-error "Could not find a C compiler. 
Tried: [join $try ", "]" +} + +define CCACHE [find-an-executable [get-env CCACHE ccache]] + +# Initial cctest settings +cc-store-settings {-cflags {} -includes {} -declare {} -link 0 -lang c -libs {} -code {} -nooutput 0} +set autosetup(cc-include-deps) {} + +msg-result "C compiler...[get-define CCACHE] [get-define CC] [get-define CFLAGS]" +if {[get-define CXX] ne "false"} { + msg-result "C++ compiler...[get-define CCACHE] [get-define CXX] [get-define CXXFLAGS]" +} +msg-result "Build C compiler...[get-define CC_FOR_BUILD]" + +# On Darwin, we prefer to use -g0 to avoid creating .dSYM directories +# but some compilers may not support it, so test here. +switch -glob -- [get-define host] { + *-*-darwin* { + if {[cctest -cflags {-g0}]} { + define cc-default-debug -g0 + } + } +} + +if {![cc-check-includes stdlib.h]} { + user-error "Compiler does not work. See config.log" +} ADDED autosetup/config.guess Index: autosetup/config.guess ================================================================== --- autosetup/config.guess +++ autosetup/config.guess @@ -0,0 +1,1420 @@ +#! /bin/sh +# Attempt to guess a canonical system name. +# Copyright 1992-2014 Free Software Foundation, Inc. + +timestamp='2014-03-23' + +# This file is free software; you can redistribute it and/or modify it +# under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 3 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, but +# WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, see . +# +# As a special exception to the GNU General Public License, if you +# distribute this file as part of a program that contains a +# configuration script generated by Autoconf, you may include it under +# the same distribution terms that you use for the rest of that +# program. This Exception is an additional permission under section 7 +# of the GNU General Public License, version 3 ("GPLv3"). +# +# Originally written by Per Bothner. +# +# You can get the latest version of this script from: +# http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD +# +# Please send patches with a ChangeLog entry to config-patches@gnu.org. + + +me=`echo "$0" | sed -e 's,.*/,,'` + +usage="\ +Usage: $0 [OPTION] + +Output the configuration name of the system \`$me' is run on. + +Operation modes: + -h, --help print this help, then exit + -t, --time-stamp print date of last modification, then exit + -v, --version print version number, then exit + +Report bugs and patches to ." + +version="\ +GNU config.guess ($timestamp) + +Originally written by Per Bothner. +Copyright 1992-2014 Free Software Foundation, Inc. + +This is free software; see the source for copying conditions. There is NO +warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." + +help=" +Try \`$me --help' for more information." + +# Parse command line +while test $# -gt 0 ; do + case $1 in + --time-stamp | --time* | -t ) + echo "$timestamp" ; exit ;; + --version | -v ) + echo "$version" ; exit ;; + --help | --h* | -h ) + echo "$usage"; exit ;; + -- ) # Stop option processing + shift; break ;; + - ) # Use stdin as input. 
+ break ;; + -* ) + echo "$me: invalid option $1$help" >&2 + exit 1 ;; + * ) + break ;; + esac +done + +if test $# != 0; then + echo "$me: too many arguments$help" >&2 + exit 1 +fi + +trap 'exit 1' 1 2 15 + +# CC_FOR_BUILD -- compiler used by this script. Note that the use of a +# compiler to aid in system detection is discouraged as it requires +# temporary files to be created and, as you can see below, it is a +# headache to deal with in a portable fashion. + +# Historically, `CC_FOR_BUILD' used to be named `HOST_CC'. We still +# use `HOST_CC' if defined, but it is deprecated. + +# Portable tmp directory creation inspired by the Autoconf team. + +set_cc_for_build=' +trap "exitcode=\$?; (rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null) && exit \$exitcode" 0 ; +trap "rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null; exit 1" 1 2 13 15 ; +: ${TMPDIR=/tmp} ; + { tmp=`(umask 077 && mktemp -d "$TMPDIR/cgXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" ; } || + { test -n "$RANDOM" && tmp=$TMPDIR/cg$$-$RANDOM && (umask 077 && mkdir $tmp) ; } || + { tmp=$TMPDIR/cg-$$ && (umask 077 && mkdir $tmp) && echo "Warning: creating insecure temp directory" >&2 ; } || + { echo "$me: cannot create a temporary directory in $TMPDIR" >&2 ; exit 1 ; } ; +dummy=$tmp/dummy ; +tmpfiles="$dummy.c $dummy.o $dummy.rel $dummy" ; +case $CC_FOR_BUILD,$HOST_CC,$CC in + ,,) echo "int x;" > $dummy.c ; + for c in cc gcc c89 c99 ; do + if ($c -c -o $dummy.o $dummy.c) >/dev/null 2>&1 ; then + CC_FOR_BUILD="$c"; break ; + fi ; + done ; + if test x"$CC_FOR_BUILD" = x ; then + CC_FOR_BUILD=no_compiler_found ; + fi + ;; + ,,*) CC_FOR_BUILD=$CC ;; + ,*,*) CC_FOR_BUILD=$HOST_CC ;; +esac ; set_cc_for_build= ;' + +# This is needed to find uname on a Pyramid OSx when run in the BSD universe. +# (ghazi@noc.rutgers.edu 1994-08-24) +if (test -f /.attbin/uname) >/dev/null 2>&1 ; then + PATH=$PATH:/.attbin ; export PATH +fi + +UNAME_MACHINE=`(uname -m) 2>/dev/null` || UNAME_MACHINE=unknown +UNAME_RELEASE=`(uname -r) 2>/dev/null` || UNAME_RELEASE=unknown +UNAME_SYSTEM=`(uname -s) 2>/dev/null` || UNAME_SYSTEM=unknown +UNAME_VERSION=`(uname -v) 2>/dev/null` || UNAME_VERSION=unknown + +case "${UNAME_SYSTEM}" in +Linux|GNU|GNU/*) + # If the system lacks a compiler, then just pick glibc. + # We could probably try harder. + LIBC=gnu + + eval $set_cc_for_build + cat <<-EOF > $dummy.c + #include + #if defined(__UCLIBC__) + LIBC=uclibc + #elif defined(__dietlibc__) + LIBC=dietlibc + #else + LIBC=gnu + #endif + EOF + eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^LIBC' | sed 's, ,,g'` + ;; +esac + +# Note: order is significant - the case branches are not exclusive. + +case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in + *:NetBSD:*:*) + # NetBSD (nbsd) targets should (where applicable) match one or + # more of the tuples: *-*-netbsdelf*, *-*-netbsdaout*, + # *-*-netbsdecoff* and *-*-netbsd*. For targets that recently + # switched to ELF, *-*-netbsd* would select the old + # object file format. This provides both forward + # compatibility and a consistent mechanism for selecting the + # object file format. + # + # Note: NetBSD doesn't particularly care about the vendor + # portion of the name. We always set it to "unknown". 
+ sysctl="sysctl -n hw.machine_arch" + UNAME_MACHINE_ARCH=`(/sbin/$sysctl 2>/dev/null || \ + /usr/sbin/$sysctl 2>/dev/null || echo unknown)` + case "${UNAME_MACHINE_ARCH}" in + armeb) machine=armeb-unknown ;; + arm*) machine=arm-unknown ;; + sh3el) machine=shl-unknown ;; + sh3eb) machine=sh-unknown ;; + sh5el) machine=sh5le-unknown ;; + *) machine=${UNAME_MACHINE_ARCH}-unknown ;; + esac + # The Operating System including object format, if it has switched + # to ELF recently, or will in the future. + case "${UNAME_MACHINE_ARCH}" in + arm*|i386|m68k|ns32k|sh3*|sparc|vax) + eval $set_cc_for_build + if echo __ELF__ | $CC_FOR_BUILD -E - 2>/dev/null \ + | grep -q __ELF__ + then + # Once all utilities can be ECOFF (netbsdecoff) or a.out (netbsdaout). + # Return netbsd for either. FIX? + os=netbsd + else + os=netbsdelf + fi + ;; + *) + os=netbsd + ;; + esac + # The OS release + # Debian GNU/NetBSD machines have a different userland, and + # thus, need a distinct triplet. However, they do not need + # kernel version information, so it can be replaced with a + # suitable tag, in the style of linux-gnu. + case "${UNAME_VERSION}" in + Debian*) + release='-gnu' + ;; + *) + release=`echo ${UNAME_RELEASE}|sed -e 's/[-_].*/\./'` + ;; + esac + # Since CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM: + # contains redundant information, the shorter form: + # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used. + echo "${machine}-${os}${release}" + exit ;; + *:Bitrig:*:*) + UNAME_MACHINE_ARCH=`arch | sed 's/Bitrig.//'` + echo ${UNAME_MACHINE_ARCH}-unknown-bitrig${UNAME_RELEASE} + exit ;; + *:OpenBSD:*:*) + UNAME_MACHINE_ARCH=`arch | sed 's/OpenBSD.//'` + echo ${UNAME_MACHINE_ARCH}-unknown-openbsd${UNAME_RELEASE} + exit ;; + *:ekkoBSD:*:*) + echo ${UNAME_MACHINE}-unknown-ekkobsd${UNAME_RELEASE} + exit ;; + *:SolidBSD:*:*) + echo ${UNAME_MACHINE}-unknown-solidbsd${UNAME_RELEASE} + exit ;; + macppc:MirBSD:*:*) + echo powerpc-unknown-mirbsd${UNAME_RELEASE} + exit ;; + *:MirBSD:*:*) + echo ${UNAME_MACHINE}-unknown-mirbsd${UNAME_RELEASE} + exit ;; + alpha:OSF1:*:*) + case $UNAME_RELEASE in + *4.0) + UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $3}'` + ;; + *5.*) + UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $4}'` + ;; + esac + # According to Compaq, /usr/sbin/psrinfo has been available on + # OSF/1 and Tru64 systems produced since 1995. I hope that + # covers most systems running today. This code pipes the CPU + # types through head -n 1, so we only detect the type of CPU 0. + ALPHA_CPU_TYPE=`/usr/sbin/psrinfo -v | sed -n -e 's/^ The alpha \(.*\) processor.*$/\1/p' | head -n 1` + case "$ALPHA_CPU_TYPE" in + "EV4 (21064)") + UNAME_MACHINE="alpha" ;; + "EV4.5 (21064)") + UNAME_MACHINE="alpha" ;; + "LCA4 (21066/21068)") + UNAME_MACHINE="alpha" ;; + "EV5 (21164)") + UNAME_MACHINE="alphaev5" ;; + "EV5.6 (21164A)") + UNAME_MACHINE="alphaev56" ;; + "EV5.6 (21164PC)") + UNAME_MACHINE="alphapca56" ;; + "EV5.7 (21164PC)") + UNAME_MACHINE="alphapca57" ;; + "EV6 (21264)") + UNAME_MACHINE="alphaev6" ;; + "EV6.7 (21264A)") + UNAME_MACHINE="alphaev67" ;; + "EV6.8CB (21264C)") + UNAME_MACHINE="alphaev68" ;; + "EV6.8AL (21264B)") + UNAME_MACHINE="alphaev68" ;; + "EV6.8CX (21264D)") + UNAME_MACHINE="alphaev68" ;; + "EV6.9A (21264/EV69A)") + UNAME_MACHINE="alphaev69" ;; + "EV7 (21364)") + UNAME_MACHINE="alphaev7" ;; + "EV7.9 (21364A)") + UNAME_MACHINE="alphaev79" ;; + esac + # A Pn.n version is a patched version. + # A Vn.n version is a released version. + # A Tn.n version is a released field test version. 
+ # A Xn.n version is an unreleased experimental baselevel. + # 1.2 uses "1.2" for uname -r. + echo ${UNAME_MACHINE}-dec-osf`echo ${UNAME_RELEASE} | sed -e 's/^[PVTX]//' | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'` + # Reset EXIT trap before exiting to avoid spurious non-zero exit code. + exitcode=$? + trap '' 0 + exit $exitcode ;; + Alpha\ *:Windows_NT*:*) + # How do we know it's Interix rather than the generic POSIX subsystem? + # Should we change UNAME_MACHINE based on the output of uname instead + # of the specific Alpha model? + echo alpha-pc-interix + exit ;; + 21064:Windows_NT:50:3) + echo alpha-dec-winnt3.5 + exit ;; + Amiga*:UNIX_System_V:4.0:*) + echo m68k-unknown-sysv4 + exit ;; + *:[Aa]miga[Oo][Ss]:*:*) + echo ${UNAME_MACHINE}-unknown-amigaos + exit ;; + *:[Mm]orph[Oo][Ss]:*:*) + echo ${UNAME_MACHINE}-unknown-morphos + exit ;; + *:OS/390:*:*) + echo i370-ibm-openedition + exit ;; + *:z/VM:*:*) + echo s390-ibm-zvmoe + exit ;; + *:OS400:*:*) + echo powerpc-ibm-os400 + exit ;; + arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*) + echo arm-acorn-riscix${UNAME_RELEASE} + exit ;; + arm*:riscos:*:*|arm*:RISCOS:*:*) + echo arm-unknown-riscos + exit ;; + SR2?01:HI-UX/MPP:*:* | SR8000:HI-UX/MPP:*:*) + echo hppa1.1-hitachi-hiuxmpp + exit ;; + Pyramid*:OSx*:*:* | MIS*:OSx*:*:* | MIS*:SMP_DC-OSx*:*:*) + # akee@wpdis03.wpafb.af.mil (Earle F. Ake) contributed MIS and NILE. + if test "`(/bin/universe) 2>/dev/null`" = att ; then + echo pyramid-pyramid-sysv3 + else + echo pyramid-pyramid-bsd + fi + exit ;; + NILE*:*:*:dcosx) + echo pyramid-pyramid-svr4 + exit ;; + DRS?6000:unix:4.0:6*) + echo sparc-icl-nx6 + exit ;; + DRS?6000:UNIX_SV:4.2*:7* | DRS?6000:isis:4.2*:7*) + case `/usr/bin/uname -p` in + sparc) echo sparc-icl-nx7; exit ;; + esac ;; + s390x:SunOS:*:*) + echo ${UNAME_MACHINE}-ibm-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` + exit ;; + sun4H:SunOS:5.*:*) + echo sparc-hal-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` + exit ;; + sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*) + echo sparc-sun-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` + exit ;; + i86pc:AuroraUX:5.*:* | i86xen:AuroraUX:5.*:*) + echo i386-pc-auroraux${UNAME_RELEASE} + exit ;; + i86pc:SunOS:5.*:* | i86xen:SunOS:5.*:*) + eval $set_cc_for_build + SUN_ARCH="i386" + # If there is a compiler, see if it is configured for 64-bit objects. + # Note that the Sun cc does not turn __LP64__ into 1 like gcc does. + # This test works for both compilers. + if [ "$CC_FOR_BUILD" != 'no_compiler_found' ]; then + if (echo '#ifdef __amd64'; echo IS_64BIT_ARCH; echo '#endif') | \ + (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | \ + grep IS_64BIT_ARCH >/dev/null + then + SUN_ARCH="x86_64" + fi + fi + echo ${SUN_ARCH}-pc-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` + exit ;; + sun4*:SunOS:6*:*) + # According to config.sub, this is the proper way to canonicalize + # SunOS6. Hard to guess exactly what SunOS6 will be like, but + # it's likely to be more like Solaris than SunOS4. + echo sparc-sun-solaris3`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` + exit ;; + sun4*:SunOS:*:*) + case "`/usr/bin/arch -k`" in + Series*|S4*) + UNAME_RELEASE=`uname -v` + ;; + esac + # Japanese Language versions have a version number like `4.1.3-JL'. 
+ echo sparc-sun-sunos`echo ${UNAME_RELEASE}|sed -e 's/-/_/'` + exit ;; + sun3*:SunOS:*:*) + echo m68k-sun-sunos${UNAME_RELEASE} + exit ;; + sun*:*:4.2BSD:*) + UNAME_RELEASE=`(sed 1q /etc/motd | awk '{print substr($5,1,3)}') 2>/dev/null` + test "x${UNAME_RELEASE}" = "x" && UNAME_RELEASE=3 + case "`/bin/arch`" in + sun3) + echo m68k-sun-sunos${UNAME_RELEASE} + ;; + sun4) + echo sparc-sun-sunos${UNAME_RELEASE} + ;; + esac + exit ;; + aushp:SunOS:*:*) + echo sparc-auspex-sunos${UNAME_RELEASE} + exit ;; + # The situation for MiNT is a little confusing. The machine name + # can be virtually everything (everything which is not + # "atarist" or "atariste" at least should have a processor + # > m68000). The system name ranges from "MiNT" over "FreeMiNT" + # to the lowercase version "mint" (or "freemint"). Finally + # the system name "TOS" denotes a system which is actually not + # MiNT. But MiNT is downward compatible to TOS, so this should + # be no problem. + atarist[e]:*MiNT:*:* | atarist[e]:*mint:*:* | atarist[e]:*TOS:*:*) + echo m68k-atari-mint${UNAME_RELEASE} + exit ;; + atari*:*MiNT:*:* | atari*:*mint:*:* | atarist[e]:*TOS:*:*) + echo m68k-atari-mint${UNAME_RELEASE} + exit ;; + *falcon*:*MiNT:*:* | *falcon*:*mint:*:* | *falcon*:*TOS:*:*) + echo m68k-atari-mint${UNAME_RELEASE} + exit ;; + milan*:*MiNT:*:* | milan*:*mint:*:* | *milan*:*TOS:*:*) + echo m68k-milan-mint${UNAME_RELEASE} + exit ;; + hades*:*MiNT:*:* | hades*:*mint:*:* | *hades*:*TOS:*:*) + echo m68k-hades-mint${UNAME_RELEASE} + exit ;; + *:*MiNT:*:* | *:*mint:*:* | *:*TOS:*:*) + echo m68k-unknown-mint${UNAME_RELEASE} + exit ;; + m68k:machten:*:*) + echo m68k-apple-machten${UNAME_RELEASE} + exit ;; + powerpc:machten:*:*) + echo powerpc-apple-machten${UNAME_RELEASE} + exit ;; + RISC*:Mach:*:*) + echo mips-dec-mach_bsd4.3 + exit ;; + RISC*:ULTRIX:*:*) + echo mips-dec-ultrix${UNAME_RELEASE} + exit ;; + VAX*:ULTRIX*:*:*) + echo vax-dec-ultrix${UNAME_RELEASE} + exit ;; + 2020:CLIX:*:* | 2430:CLIX:*:*) + echo clipper-intergraph-clix${UNAME_RELEASE} + exit ;; + mips:*:*:UMIPS | mips:*:*:RISCos) + eval $set_cc_for_build + sed 's/^ //' << EOF >$dummy.c +#ifdef __cplusplus +#include /* for printf() prototype */ + int main (int argc, char *argv[]) { +#else + int main (argc, argv) int argc; char *argv[]; { +#endif + #if defined (host_mips) && defined (MIPSEB) + #if defined (SYSTYPE_SYSV) + printf ("mips-mips-riscos%ssysv\n", argv[1]); exit (0); + #endif + #if defined (SYSTYPE_SVR4) + printf ("mips-mips-riscos%ssvr4\n", argv[1]); exit (0); + #endif + #if defined (SYSTYPE_BSD43) || defined(SYSTYPE_BSD) + printf ("mips-mips-riscos%sbsd\n", argv[1]); exit (0); + #endif + #endif + exit (-1); + } +EOF + $CC_FOR_BUILD -o $dummy $dummy.c && + dummyarg=`echo "${UNAME_RELEASE}" | sed -n 's/\([0-9]*\).*/\1/p'` && + SYSTEM_NAME=`$dummy $dummyarg` && + { echo "$SYSTEM_NAME"; exit; } + echo mips-mips-riscos${UNAME_RELEASE} + exit ;; + Motorola:PowerMAX_OS:*:*) + echo powerpc-motorola-powermax + exit ;; + Motorola:*:4.3:PL8-*) + echo powerpc-harris-powermax + exit ;; + Night_Hawk:*:*:PowerMAX_OS | Synergy:PowerMAX_OS:*:*) + echo powerpc-harris-powermax + exit ;; + Night_Hawk:Power_UNIX:*:*) + echo powerpc-harris-powerunix + exit ;; + m88k:CX/UX:7*:*) + echo m88k-harris-cxux7 + exit ;; + m88k:*:4*:R4*) + echo m88k-motorola-sysv4 + exit ;; + m88k:*:3*:R3*) + echo m88k-motorola-sysv3 + exit ;; + AViiON:dgux:*:*) + # DG/UX returns AViiON for all architectures + UNAME_PROCESSOR=`/usr/bin/uname -p` + if [ $UNAME_PROCESSOR = mc88100 ] || [ $UNAME_PROCESSOR = 
mc88110 ] + then + if [ ${TARGET_BINARY_INTERFACE}x = m88kdguxelfx ] || \ + [ ${TARGET_BINARY_INTERFACE}x = x ] + then + echo m88k-dg-dgux${UNAME_RELEASE} + else + echo m88k-dg-dguxbcs${UNAME_RELEASE} + fi + else + echo i586-dg-dgux${UNAME_RELEASE} + fi + exit ;; + M88*:DolphinOS:*:*) # DolphinOS (SVR3) + echo m88k-dolphin-sysv3 + exit ;; + M88*:*:R3*:*) + # Delta 88k system running SVR3 + echo m88k-motorola-sysv3 + exit ;; + XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3) + echo m88k-tektronix-sysv3 + exit ;; + Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD) + echo m68k-tektronix-bsd + exit ;; + *:IRIX*:*:*) + echo mips-sgi-irix`echo ${UNAME_RELEASE}|sed -e 's/-/_/g'` + exit ;; + ????????:AIX?:[12].1:2) # AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX. + echo romp-ibm-aix # uname -m gives an 8 hex-code CPU id + exit ;; # Note that: echo "'`uname -s`'" gives 'AIX ' + i*86:AIX:*:*) + echo i386-ibm-aix + exit ;; + ia64:AIX:*:*) + if [ -x /usr/bin/oslevel ] ; then + IBM_REV=`/usr/bin/oslevel` + else + IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE} + fi + echo ${UNAME_MACHINE}-ibm-aix${IBM_REV} + exit ;; + *:AIX:2:3) + if grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then + eval $set_cc_for_build + sed 's/^ //' << EOF >$dummy.c + #include + + main() + { + if (!__power_pc()) + exit(1); + puts("powerpc-ibm-aix3.2.5"); + exit(0); + } +EOF + if $CC_FOR_BUILD -o $dummy $dummy.c && SYSTEM_NAME=`$dummy` + then + echo "$SYSTEM_NAME" + else + echo rs6000-ibm-aix3.2.5 + fi + elif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then + echo rs6000-ibm-aix3.2.4 + else + echo rs6000-ibm-aix3.2 + fi + exit ;; + *:AIX:*:[4567]) + IBM_CPU_ID=`/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }'` + if /usr/sbin/lsattr -El ${IBM_CPU_ID} | grep ' POWER' >/dev/null 2>&1; then + IBM_ARCH=rs6000 + else + IBM_ARCH=powerpc + fi + if [ -x /usr/bin/oslevel ] ; then + IBM_REV=`/usr/bin/oslevel` + else + IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE} + fi + echo ${IBM_ARCH}-ibm-aix${IBM_REV} + exit ;; + *:AIX:*:*) + echo rs6000-ibm-aix + exit ;; + ibmrt:4.4BSD:*|romp-ibm:BSD:*) + echo romp-ibm-bsd4.4 + exit ;; + ibmrt:*BSD:*|romp-ibm:BSD:*) # covers RT/PC BSD and + echo romp-ibm-bsd${UNAME_RELEASE} # 4.3 with uname added to + exit ;; # report: romp-ibm BSD 4.3 + *:BOSX:*:*) + echo rs6000-bull-bosx + exit ;; + DPX/2?00:B.O.S.:*:*) + echo m68k-bull-sysv3 + exit ;; + 9000/[34]??:4.3bsd:1.*:*) + echo m68k-hp-bsd + exit ;; + hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*) + echo m68k-hp-bsd4.4 + exit ;; + 9000/[34678]??:HP-UX:*:*) + HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'` + case "${UNAME_MACHINE}" in + 9000/31? ) HP_ARCH=m68000 ;; + 9000/[34]?? 
) HP_ARCH=m68k ;; + 9000/[678][0-9][0-9]) + if [ -x /usr/bin/getconf ]; then + sc_cpu_version=`/usr/bin/getconf SC_CPU_VERSION 2>/dev/null` + sc_kernel_bits=`/usr/bin/getconf SC_KERNEL_BITS 2>/dev/null` + case "${sc_cpu_version}" in + 523) HP_ARCH="hppa1.0" ;; # CPU_PA_RISC1_0 + 528) HP_ARCH="hppa1.1" ;; # CPU_PA_RISC1_1 + 532) # CPU_PA_RISC2_0 + case "${sc_kernel_bits}" in + 32) HP_ARCH="hppa2.0n" ;; + 64) HP_ARCH="hppa2.0w" ;; + '') HP_ARCH="hppa2.0" ;; # HP-UX 10.20 + esac ;; + esac + fi + if [ "${HP_ARCH}" = "" ]; then + eval $set_cc_for_build + sed 's/^ //' << EOF >$dummy.c + + #define _HPUX_SOURCE + #include + #include + + int main () + { + #if defined(_SC_KERNEL_BITS) + long bits = sysconf(_SC_KERNEL_BITS); + #endif + long cpu = sysconf (_SC_CPU_VERSION); + + switch (cpu) + { + case CPU_PA_RISC1_0: puts ("hppa1.0"); break; + case CPU_PA_RISC1_1: puts ("hppa1.1"); break; + case CPU_PA_RISC2_0: + #if defined(_SC_KERNEL_BITS) + switch (bits) + { + case 64: puts ("hppa2.0w"); break; + case 32: puts ("hppa2.0n"); break; + default: puts ("hppa2.0"); break; + } break; + #else /* !defined(_SC_KERNEL_BITS) */ + puts ("hppa2.0"); break; + #endif + default: puts ("hppa1.0"); break; + } + exit (0); + } +EOF + (CCOPTS= $CC_FOR_BUILD -o $dummy $dummy.c 2>/dev/null) && HP_ARCH=`$dummy` + test -z "$HP_ARCH" && HP_ARCH=hppa + fi ;; + esac + if [ ${HP_ARCH} = "hppa2.0w" ] + then + eval $set_cc_for_build + + # hppa2.0w-hp-hpux* has a 64-bit kernel and a compiler generating + # 32-bit code. hppa64-hp-hpux* has the same kernel and a compiler + # generating 64-bit code. GNU and HP use different nomenclature: + # + # $ CC_FOR_BUILD=cc ./config.guess + # => hppa2.0w-hp-hpux11.23 + # $ CC_FOR_BUILD="cc +DA2.0w" ./config.guess + # => hppa64-hp-hpux11.23 + + if echo __LP64__ | (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | + grep -q __LP64__ + then + HP_ARCH="hppa2.0w" + else + HP_ARCH="hppa64" + fi + fi + echo ${HP_ARCH}-hp-hpux${HPUX_REV} + exit ;; + ia64:HP-UX:*:*) + HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'` + echo ia64-hp-hpux${HPUX_REV} + exit ;; + 3050*:HI-UX:*:*) + eval $set_cc_for_build + sed 's/^ //' << EOF >$dummy.c + #include + int + main () + { + long cpu = sysconf (_SC_CPU_VERSION); + /* The order matters, because CPU_IS_HP_MC68K erroneously returns + true for CPU_PA_RISC1_0. CPU_IS_PA_RISC returns correct + results, however. 
*/ + if (CPU_IS_PA_RISC (cpu)) + { + switch (cpu) + { + case CPU_PA_RISC1_0: puts ("hppa1.0-hitachi-hiuxwe2"); break; + case CPU_PA_RISC1_1: puts ("hppa1.1-hitachi-hiuxwe2"); break; + case CPU_PA_RISC2_0: puts ("hppa2.0-hitachi-hiuxwe2"); break; + default: puts ("hppa-hitachi-hiuxwe2"); break; + } + } + else if (CPU_IS_HP_MC68K (cpu)) + puts ("m68k-hitachi-hiuxwe2"); + else puts ("unknown-hitachi-hiuxwe2"); + exit (0); + } +EOF + $CC_FOR_BUILD -o $dummy $dummy.c && SYSTEM_NAME=`$dummy` && + { echo "$SYSTEM_NAME"; exit; } + echo unknown-hitachi-hiuxwe2 + exit ;; + 9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:* ) + echo hppa1.1-hp-bsd + exit ;; + 9000/8??:4.3bsd:*:*) + echo hppa1.0-hp-bsd + exit ;; + *9??*:MPE/iX:*:* | *3000*:MPE/iX:*:*) + echo hppa1.0-hp-mpeix + exit ;; + hp7??:OSF1:*:* | hp8?[79]:OSF1:*:* ) + echo hppa1.1-hp-osf + exit ;; + hp8??:OSF1:*:*) + echo hppa1.0-hp-osf + exit ;; + i*86:OSF1:*:*) + if [ -x /usr/sbin/sysversion ] ; then + echo ${UNAME_MACHINE}-unknown-osf1mk + else + echo ${UNAME_MACHINE}-unknown-osf1 + fi + exit ;; + parisc*:Lites*:*:*) + echo hppa1.1-hp-lites + exit ;; + C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*) + echo c1-convex-bsd + exit ;; + C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*) + if getsysinfo -f scalar_acc + then echo c32-convex-bsd + else echo c2-convex-bsd + fi + exit ;; + C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*) + echo c34-convex-bsd + exit ;; + C38*:ConvexOS:*:* | convex:ConvexOS:C38*:*) + echo c38-convex-bsd + exit ;; + C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*) + echo c4-convex-bsd + exit ;; + CRAY*Y-MP:*:*:*) + echo ymp-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' + exit ;; + CRAY*[A-Z]90:*:*:*) + echo ${UNAME_MACHINE}-cray-unicos${UNAME_RELEASE} \ + | sed -e 's/CRAY.*\([A-Z]90\)/\1/' \ + -e y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/ \ + -e 's/\.[^.]*$/.X/' + exit ;; + CRAY*TS:*:*:*) + echo t90-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' + exit ;; + CRAY*T3E:*:*:*) + echo alphaev5-cray-unicosmk${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' + exit ;; + CRAY*SV1:*:*:*) + echo sv1-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' + exit ;; + *:UNICOS/mp:*:*) + echo craynv-cray-unicosmp${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' + exit ;; + F30[01]:UNIX_System_V:*:* | F700:UNIX_System_V:*:*) + FUJITSU_PROC=`uname -m | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'` + FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'` + FUJITSU_REL=`echo ${UNAME_RELEASE} | sed -e 's/ /_/'` + echo "${FUJITSU_PROC}-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}" + exit ;; + 5000:UNIX_System_V:4.*:*) + FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'` + FUJITSU_REL=`echo ${UNAME_RELEASE} | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/ /_/'` + echo "sparc-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}" + exit ;; + i*86:BSD/386:*:* | i*86:BSD/OS:*:* | *:Ascend\ Embedded/OS:*:*) + echo ${UNAME_MACHINE}-pc-bsdi${UNAME_RELEASE} + exit ;; + sparc*:BSD/OS:*:*) + echo sparc-unknown-bsdi${UNAME_RELEASE} + exit ;; + *:BSD/OS:*:*) + echo ${UNAME_MACHINE}-unknown-bsdi${UNAME_RELEASE} + exit ;; + *:FreeBSD:*:*) + UNAME_PROCESSOR=`/usr/bin/uname -p` + case ${UNAME_PROCESSOR} in + amd64) + echo x86_64-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;; + *) + echo ${UNAME_PROCESSOR}-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;; + esac + exit ;; + i*:CYGWIN*:*) + echo ${UNAME_MACHINE}-pc-cygwin + exit ;; 
+ *:MINGW64*:*) + echo ${UNAME_MACHINE}-pc-mingw64 + exit ;; + *:MINGW*:*) + echo ${UNAME_MACHINE}-pc-mingw32 + exit ;; + *:MSYS*:*) + echo ${UNAME_MACHINE}-pc-msys + exit ;; + i*:windows32*:*) + # uname -m includes "-pc" on this system. + echo ${UNAME_MACHINE}-mingw32 + exit ;; + i*:PW*:*) + echo ${UNAME_MACHINE}-pc-pw32 + exit ;; + *:Interix*:*) + case ${UNAME_MACHINE} in + x86) + echo i586-pc-interix${UNAME_RELEASE} + exit ;; + authenticamd | genuineintel | EM64T) + echo x86_64-unknown-interix${UNAME_RELEASE} + exit ;; + IA64) + echo ia64-unknown-interix${UNAME_RELEASE} + exit ;; + esac ;; + [345]86:Windows_95:* | [345]86:Windows_98:* | [345]86:Windows_NT:*) + echo i${UNAME_MACHINE}-pc-mks + exit ;; + 8664:Windows_NT:*) + echo x86_64-pc-mks + exit ;; + i*:Windows_NT*:* | Pentium*:Windows_NT*:*) + # How do we know it's Interix rather than the generic POSIX subsystem? + # It also conflicts with pre-2.0 versions of AT&T UWIN. Should we + # UNAME_MACHINE based on the output of uname instead of i386? + echo i586-pc-interix + exit ;; + i*:UWIN*:*) + echo ${UNAME_MACHINE}-pc-uwin + exit ;; + amd64:CYGWIN*:*:* | x86_64:CYGWIN*:*:*) + echo x86_64-unknown-cygwin + exit ;; + p*:CYGWIN*:*) + echo powerpcle-unknown-cygwin + exit ;; + prep*:SunOS:5.*:*) + echo powerpcle-unknown-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` + exit ;; + *:GNU:*:*) + # the GNU system + echo `echo ${UNAME_MACHINE}|sed -e 's,[-/].*$,,'`-unknown-${LIBC}`echo ${UNAME_RELEASE}|sed -e 's,/.*$,,'` + exit ;; + *:GNU/*:*:*) + # other systems with GNU libc and userland + echo ${UNAME_MACHINE}-unknown-`echo ${UNAME_SYSTEM} | sed 's,^[^/]*/,,' | tr '[A-Z]' '[a-z]'``echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'`-${LIBC} + exit ;; + i*86:Minix:*:*) + echo ${UNAME_MACHINE}-pc-minix + exit ;; + aarch64:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + aarch64_be:Linux:*:*) + UNAME_MACHINE=aarch64_be + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + alpha:Linux:*:*) + case `sed -n '/^cpu model/s/^.*: \(.*\)/\1/p' < /proc/cpuinfo` in + EV5) UNAME_MACHINE=alphaev5 ;; + EV56) UNAME_MACHINE=alphaev56 ;; + PCA56) UNAME_MACHINE=alphapca56 ;; + PCA57) UNAME_MACHINE=alphapca56 ;; + EV6) UNAME_MACHINE=alphaev6 ;; + EV67) UNAME_MACHINE=alphaev67 ;; + EV68*) UNAME_MACHINE=alphaev68 ;; + esac + objdump --private-headers /bin/sh | grep -q ld.so.1 + if test "$?" 
= 0 ; then LIBC="gnulibc1" ; fi + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + arc:Linux:*:* | arceb:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + arm*:Linux:*:*) + eval $set_cc_for_build + if echo __ARM_EABI__ | $CC_FOR_BUILD -E - 2>/dev/null \ + | grep -q __ARM_EABI__ + then + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + else + if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \ + | grep -q __ARM_PCS_VFP + then + echo ${UNAME_MACHINE}-unknown-linux-${LIBC}eabi + else + echo ${UNAME_MACHINE}-unknown-linux-${LIBC}eabihf + fi + fi + exit ;; + avr32*:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + cris:Linux:*:*) + echo ${UNAME_MACHINE}-axis-linux-${LIBC} + exit ;; + crisv32:Linux:*:*) + echo ${UNAME_MACHINE}-axis-linux-${LIBC} + exit ;; + frv:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + hexagon:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + i*86:Linux:*:*) + echo ${UNAME_MACHINE}-pc-linux-${LIBC} + exit ;; + ia64:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + m32r*:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + m68*:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + mips:Linux:*:* | mips64:Linux:*:*) + eval $set_cc_for_build + sed 's/^ //' << EOF >$dummy.c + #undef CPU + #undef ${UNAME_MACHINE} + #undef ${UNAME_MACHINE}el + #if defined(__MIPSEL__) || defined(__MIPSEL) || defined(_MIPSEL) || defined(MIPSEL) + CPU=${UNAME_MACHINE}el + #else + #if defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || defined(MIPSEB) + CPU=${UNAME_MACHINE} + #else + CPU= + #endif + #endif +EOF + eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^CPU'` + test x"${CPU}" != x && { echo "${CPU}-unknown-linux-${LIBC}"; exit; } + ;; + openrisc*:Linux:*:*) + echo or1k-unknown-linux-${LIBC} + exit ;; + or32:Linux:*:* | or1k*:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + padre:Linux:*:*) + echo sparc-unknown-linux-${LIBC} + exit ;; + parisc64:Linux:*:* | hppa64:Linux:*:*) + echo hppa64-unknown-linux-${LIBC} + exit ;; + parisc:Linux:*:* | hppa:Linux:*:*) + # Look for CPU level + case `grep '^cpu[^a-z]*:' /proc/cpuinfo 2>/dev/null | cut -d' ' -f2` in + PA7*) echo hppa1.1-unknown-linux-${LIBC} ;; + PA8*) echo hppa2.0-unknown-linux-${LIBC} ;; + *) echo hppa-unknown-linux-${LIBC} ;; + esac + exit ;; + ppc64:Linux:*:*) + echo powerpc64-unknown-linux-${LIBC} + exit ;; + ppc:Linux:*:*) + echo powerpc-unknown-linux-${LIBC} + exit ;; + ppc64le:Linux:*:*) + echo powerpc64le-unknown-linux-${LIBC} + exit ;; + ppcle:Linux:*:*) + echo powerpcle-unknown-linux-${LIBC} + exit ;; + s390:Linux:*:* | s390x:Linux:*:*) + echo ${UNAME_MACHINE}-ibm-linux-${LIBC} + exit ;; + sh64*:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + sh*:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + sparc:Linux:*:* | sparc64:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + tile*:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + vax:Linux:*:*) + echo ${UNAME_MACHINE}-dec-linux-${LIBC} + exit ;; + x86_64:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + xtensa*:Linux:*:*) + echo ${UNAME_MACHINE}-unknown-linux-${LIBC} + exit ;; + i*86:DYNIX/ptx:4*:*) + # ptx 4.0 does uname -s correctly, with DYNIX/ptx in there. + # earlier versions are messed up and put the nodename in both + # sysname and nodename. 
+ echo i386-sequent-sysv4 + exit ;; + i*86:UNIX_SV:4.2MP:2.*) + # Unixware is an offshoot of SVR4, but it has its own version + # number series starting with 2... + # I am not positive that other SVR4 systems won't match this, + # I just have to hope. -- rms. + # Use sysv4.2uw... so that sysv4* matches it. + echo ${UNAME_MACHINE}-pc-sysv4.2uw${UNAME_VERSION} + exit ;; + i*86:OS/2:*:*) + # If we were able to find `uname', then EMX Unix compatibility + # is probably installed. + echo ${UNAME_MACHINE}-pc-os2-emx + exit ;; + i*86:XTS-300:*:STOP) + echo ${UNAME_MACHINE}-unknown-stop + exit ;; + i*86:atheos:*:*) + echo ${UNAME_MACHINE}-unknown-atheos + exit ;; + i*86:syllable:*:*) + echo ${UNAME_MACHINE}-pc-syllable + exit ;; + i*86:LynxOS:2.*:* | i*86:LynxOS:3.[01]*:* | i*86:LynxOS:4.[02]*:*) + echo i386-unknown-lynxos${UNAME_RELEASE} + exit ;; + i*86:*DOS:*:*) + echo ${UNAME_MACHINE}-pc-msdosdjgpp + exit ;; + i*86:*:4.*:* | i*86:SYSTEM_V:4.*:*) + UNAME_REL=`echo ${UNAME_RELEASE} | sed 's/\/MP$//'` + if grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then + echo ${UNAME_MACHINE}-univel-sysv${UNAME_REL} + else + echo ${UNAME_MACHINE}-pc-sysv${UNAME_REL} + fi + exit ;; + i*86:*:5:[678]*) + # UnixWare 7.x, OpenUNIX and OpenServer 6. + case `/bin/uname -X | grep "^Machine"` in + *486*) UNAME_MACHINE=i486 ;; + *Pentium) UNAME_MACHINE=i586 ;; + *Pent*|*Celeron) UNAME_MACHINE=i686 ;; + esac + echo ${UNAME_MACHINE}-unknown-sysv${UNAME_RELEASE}${UNAME_SYSTEM}${UNAME_VERSION} + exit ;; + i*86:*:3.2:*) + if test -f /usr/options/cb.name; then + UNAME_REL=`sed -n 's/.*Version //p' /dev/null >/dev/null ; then + UNAME_REL=`(/bin/uname -X|grep Release|sed -e 's/.*= //')` + (/bin/uname -X|grep i80486 >/dev/null) && UNAME_MACHINE=i486 + (/bin/uname -X|grep '^Machine.*Pentium' >/dev/null) \ + && UNAME_MACHINE=i586 + (/bin/uname -X|grep '^Machine.*Pent *II' >/dev/null) \ + && UNAME_MACHINE=i686 + (/bin/uname -X|grep '^Machine.*Pentium Pro' >/dev/null) \ + && UNAME_MACHINE=i686 + echo ${UNAME_MACHINE}-pc-sco$UNAME_REL + else + echo ${UNAME_MACHINE}-pc-sysv32 + fi + exit ;; + pc:*:*:*) + # Left here for compatibility: + # uname -m prints for DJGPP always 'pc', but it prints nothing about + # the processor, so we play safe by assuming i586. + # Note: whatever this is, it MUST be the same as what config.sub + # prints for the "djgpp" host, or else GDB configury will decide that + # this is a cross-build. + echo i586-pc-msdosdjgpp + exit ;; + Intel:Mach:3*:*) + echo i386-pc-mach3 + exit ;; + paragon:*:*:*) + echo i860-intel-osf1 + exit ;; + i860:*:4.*:*) # i860-SVR4 + if grep Stardent /usr/include/sys/uadmin.h >/dev/null 2>&1 ; then + echo i860-stardent-sysv${UNAME_RELEASE} # Stardent Vistra i860-SVR4 + else # Add other i860-SVR4 vendors below as they are discovered. 
+ echo i860-unknown-sysv${UNAME_RELEASE} # Unknown i860-SVR4 + fi + exit ;; + mini*:CTIX:SYS*5:*) + # "miniframe" + echo m68010-convergent-sysv + exit ;; + mc68k:UNIX:SYSTEM5:3.51m) + echo m68k-convergent-sysv + exit ;; + M680?0:D-NIX:5.3:*) + echo m68k-diab-dnix + exit ;; + M68*:*:R3V[5678]*:*) + test -r /sysV68 && { echo 'm68k-motorola-sysv'; exit; } ;; + 3[345]??:*:4.0:3.0 | 3[34]??A:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0 | 3[34]??/*:*:4.0:3.0 | 4400:*:4.0:3.0 | 4850:*:4.0:3.0 | SKA40:*:4.0:3.0 | SDS2:*:4.0:3.0 | SHG2:*:4.0:3.0 | S7501*:*:4.0:3.0) + OS_REL='' + test -r /etc/.relid \ + && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid` + /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ + && { echo i486-ncr-sysv4.3${OS_REL}; exit; } + /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \ + && { echo i586-ncr-sysv4.3${OS_REL}; exit; } ;; + 3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*) + /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ + && { echo i486-ncr-sysv4; exit; } ;; + NCR*:*:4.2:* | MPRAS*:*:4.2:*) + OS_REL='.3' + test -r /etc/.relid \ + && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid` + /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ + && { echo i486-ncr-sysv4.3${OS_REL}; exit; } + /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \ + && { echo i586-ncr-sysv4.3${OS_REL}; exit; } + /bin/uname -p 2>/dev/null | /bin/grep pteron >/dev/null \ + && { echo i586-ncr-sysv4.3${OS_REL}; exit; } ;; + m68*:LynxOS:2.*:* | m68*:LynxOS:3.0*:*) + echo m68k-unknown-lynxos${UNAME_RELEASE} + exit ;; + mc68030:UNIX_System_V:4.*:*) + echo m68k-atari-sysv4 + exit ;; + TSUNAMI:LynxOS:2.*:*) + echo sparc-unknown-lynxos${UNAME_RELEASE} + exit ;; + rs6000:LynxOS:2.*:*) + echo rs6000-unknown-lynxos${UNAME_RELEASE} + exit ;; + PowerPC:LynxOS:2.*:* | PowerPC:LynxOS:3.[01]*:* | PowerPC:LynxOS:4.[02]*:*) + echo powerpc-unknown-lynxos${UNAME_RELEASE} + exit ;; + SM[BE]S:UNIX_SV:*:*) + echo mips-dde-sysv${UNAME_RELEASE} + exit ;; + RM*:ReliantUNIX-*:*:*) + echo mips-sni-sysv4 + exit ;; + RM*:SINIX-*:*:*) + echo mips-sni-sysv4 + exit ;; + *:SINIX-*:*:*) + if uname -p 2>/dev/null >/dev/null ; then + UNAME_MACHINE=`(uname -p) 2>/dev/null` + echo ${UNAME_MACHINE}-sni-sysv4 + else + echo ns32k-sni-sysv + fi + exit ;; + PENTIUM:*:4.0*:*) # Unisys `ClearPath HMP IX 4000' SVR4/MP effort + # says + echo i586-unisys-sysv4 + exit ;; + *:UNIX_System_V:4*:FTX*) + # From Gerald Hewes . + # How about differentiating between stratus architectures? -djm + echo hppa1.1-stratus-sysv4 + exit ;; + *:*:*:FTX*) + # From seanf@swdc.stratus.com. + echo i860-stratus-sysv4 + exit ;; + i*86:VOS:*:*) + # From Paul.Green@stratus.com. + echo ${UNAME_MACHINE}-stratus-vos + exit ;; + *:VOS:*:*) + # From Paul.Green@stratus.com. + echo hppa1.1-stratus-vos + exit ;; + mc68*:A/UX:*:*) + echo m68k-apple-aux${UNAME_RELEASE} + exit ;; + news*:NEWS-OS:6*:*) + echo mips-sony-newsos6 + exit ;; + R[34]000:*System_V*:*:* | R4000:UNIX_SYSV:*:* | R*000:UNIX_SV:*:*) + if [ -d /usr/nec ]; then + echo mips-nec-sysv${UNAME_RELEASE} + else + echo mips-unknown-sysv${UNAME_RELEASE} + fi + exit ;; + BeBox:BeOS:*:*) # BeOS running on hardware made by Be, PPC only. + echo powerpc-be-beos + exit ;; + BeMac:BeOS:*:*) # BeOS running on Mac or Mac clone, PPC only. + echo powerpc-apple-beos + exit ;; + BePC:BeOS:*:*) # BeOS running on Intel PC compatible. + echo i586-pc-beos + exit ;; + BePC:Haiku:*:*) # Haiku running on Intel PC compatible. 
+ echo i586-pc-haiku + exit ;; + x86_64:Haiku:*:*) + echo x86_64-unknown-haiku + exit ;; + SX-4:SUPER-UX:*:*) + echo sx4-nec-superux${UNAME_RELEASE} + exit ;; + SX-5:SUPER-UX:*:*) + echo sx5-nec-superux${UNAME_RELEASE} + exit ;; + SX-6:SUPER-UX:*:*) + echo sx6-nec-superux${UNAME_RELEASE} + exit ;; + SX-7:SUPER-UX:*:*) + echo sx7-nec-superux${UNAME_RELEASE} + exit ;; + SX-8:SUPER-UX:*:*) + echo sx8-nec-superux${UNAME_RELEASE} + exit ;; + SX-8R:SUPER-UX:*:*) + echo sx8r-nec-superux${UNAME_RELEASE} + exit ;; + Power*:Rhapsody:*:*) + echo powerpc-apple-rhapsody${UNAME_RELEASE} + exit ;; + *:Rhapsody:*:*) + echo ${UNAME_MACHINE}-apple-rhapsody${UNAME_RELEASE} + exit ;; + *:Darwin:*:*) + UNAME_PROCESSOR=`uname -p` || UNAME_PROCESSOR=unknown + eval $set_cc_for_build + if test "$UNAME_PROCESSOR" = unknown ; then + UNAME_PROCESSOR=powerpc + fi + if test `echo "$UNAME_RELEASE" | sed -e 's/\..*//'` -le 10 ; then + if [ "$CC_FOR_BUILD" != 'no_compiler_found' ]; then + if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \ + (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | \ + grep IS_64BIT_ARCH >/dev/null + then + case $UNAME_PROCESSOR in + i386) UNAME_PROCESSOR=x86_64 ;; + powerpc) UNAME_PROCESSOR=powerpc64 ;; + esac + fi + fi + elif test "$UNAME_PROCESSOR" = i386 ; then + # Avoid executing cc on OS X 10.9, as it ships with a stub + # that puts up a graphical alert prompting to install + # developer tools. Any system running Mac OS X 10.7 or + # later (Darwin 11 and later) is required to have a 64-bit + # processor. This is not true of the ARM version of Darwin + # that Apple uses in portable devices. + UNAME_PROCESSOR=x86_64 + fi + echo ${UNAME_PROCESSOR}-apple-darwin${UNAME_RELEASE} + exit ;; + *:procnto*:*:* | *:QNX:[0123456789]*:*) + UNAME_PROCESSOR=`uname -p` + if test "$UNAME_PROCESSOR" = "x86"; then + UNAME_PROCESSOR=i386 + UNAME_MACHINE=pc + fi + echo ${UNAME_PROCESSOR}-${UNAME_MACHINE}-nto-qnx${UNAME_RELEASE} + exit ;; + *:QNX:*:4*) + echo i386-pc-qnx + exit ;; + NEO-?:NONSTOP_KERNEL:*:*) + echo neo-tandem-nsk${UNAME_RELEASE} + exit ;; + NSE-*:NONSTOP_KERNEL:*:*) + echo nse-tandem-nsk${UNAME_RELEASE} + exit ;; + NSR-?:NONSTOP_KERNEL:*:*) + echo nsr-tandem-nsk${UNAME_RELEASE} + exit ;; + *:NonStop-UX:*:*) + echo mips-compaq-nonstopux + exit ;; + BS2000:POSIX*:*:*) + echo bs2000-siemens-sysv + exit ;; + DS/*:UNIX_System_V:*:*) + echo ${UNAME_MACHINE}-${UNAME_SYSTEM}-${UNAME_RELEASE} + exit ;; + *:Plan9:*:*) + # "uname -m" is not consistent, so use $cputype instead. 386 + # is converted to i386 for consistency with other x86 + # operating systems. 
+ if test "$cputype" = "386"; then + UNAME_MACHINE=i386 + else + UNAME_MACHINE="$cputype" + fi + echo ${UNAME_MACHINE}-unknown-plan9 + exit ;; + *:TOPS-10:*:*) + echo pdp10-unknown-tops10 + exit ;; + *:TENEX:*:*) + echo pdp10-unknown-tenex + exit ;; + KS10:TOPS-20:*:* | KL10:TOPS-20:*:* | TYPE4:TOPS-20:*:*) + echo pdp10-dec-tops20 + exit ;; + XKL-1:TOPS-20:*:* | TYPE5:TOPS-20:*:*) + echo pdp10-xkl-tops20 + exit ;; + *:TOPS-20:*:*) + echo pdp10-unknown-tops20 + exit ;; + *:ITS:*:*) + echo pdp10-unknown-its + exit ;; + SEI:*:*:SEIUX) + echo mips-sei-seiux${UNAME_RELEASE} + exit ;; + *:DragonFly:*:*) + echo ${UNAME_MACHINE}-unknown-dragonfly`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` + exit ;; + *:*VMS:*:*) + UNAME_MACHINE=`(uname -p) 2>/dev/null` + case "${UNAME_MACHINE}" in + A*) echo alpha-dec-vms ; exit ;; + I*) echo ia64-dec-vms ; exit ;; + V*) echo vax-dec-vms ; exit ;; + esac ;; + *:XENIX:*:SysV) + echo i386-pc-xenix + exit ;; + i*86:skyos:*:*) + echo ${UNAME_MACHINE}-pc-skyos`echo ${UNAME_RELEASE}` | sed -e 's/ .*$//' + exit ;; + i*86:rdos:*:*) + echo ${UNAME_MACHINE}-pc-rdos + exit ;; + i*86:AROS:*:*) + echo ${UNAME_MACHINE}-pc-aros + exit ;; + x86_64:VMkernel:*:*) + echo ${UNAME_MACHINE}-unknown-esx + exit ;; +esac + +cat >&2 < in order to provide the needed +information to handle your system. + +config.guess timestamp = $timestamp + +uname -m = `(uname -m) 2>/dev/null || echo unknown` +uname -r = `(uname -r) 2>/dev/null || echo unknown` +uname -s = `(uname -s) 2>/dev/null || echo unknown` +uname -v = `(uname -v) 2>/dev/null || echo unknown` + +/usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null` +/bin/uname -X = `(/bin/uname -X) 2>/dev/null` + +hostinfo = `(hostinfo) 2>/dev/null` +/bin/universe = `(/bin/universe) 2>/dev/null` +/usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null` +/bin/arch = `(/bin/arch) 2>/dev/null` +/usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null` +/usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null` + +UNAME_MACHINE = ${UNAME_MACHINE} +UNAME_RELEASE = ${UNAME_RELEASE} +UNAME_SYSTEM = ${UNAME_SYSTEM} +UNAME_VERSION = ${UNAME_VERSION} +EOF + +exit 1 + +# Local variables: +# eval: (add-hook 'write-file-hooks 'time-stamp) +# time-stamp-start: "timestamp='" +# time-stamp-format: "%:y-%02m-%02d" +# time-stamp-end: "'" +# End: ADDED autosetup/config.sub Index: autosetup/config.sub ================================================================== --- autosetup/config.sub +++ autosetup/config.sub @@ -0,0 +1,1794 @@ +#! /bin/sh +# Configuration validation subroutine script. +# Copyright 1992-2014 Free Software Foundation, Inc. + +timestamp='2014-05-01' + +# This file is free software; you can redistribute it and/or modify it +# under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 3 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, but +# WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, see . +# +# As a special exception to the GNU General Public License, if you +# distribute this file as part of a program that contains a +# configuration script generated by Autoconf, you may include it under +# the same distribution terms that you use for the rest of that +# program. 
This Exception is an additional permission under section 7 +# of the GNU General Public License, version 3 ("GPLv3"). + + +# Please send patches with a ChangeLog entry to config-patches@gnu.org. +# +# Configuration subroutine to validate and canonicalize a configuration type. +# Supply the specified configuration type as an argument. +# If it is invalid, we print an error message on stderr and exit with code 1. +# Otherwise, we print the canonical config type on stdout and succeed. + +# You can get the latest version of this script from: +# http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD + +# This file is supposed to be the same for all GNU packages +# and recognize all the CPU types, system types and aliases +# that are meaningful with *any* GNU software. +# Each package is responsible for reporting which valid configurations +# it does not support. The user should be able to distinguish +# a failure to support a valid configuration from a meaningless +# configuration. + +# The goal of this file is to map all the various variations of a given +# machine specification into a single specification in the form: +# CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM +# or in some cases, the newer four-part form: +# CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM +# It is wrong to echo any other type of specification. + +me=`echo "$0" | sed -e 's,.*/,,'` + +usage="\ +Usage: $0 [OPTION] CPU-MFR-OPSYS + $0 [OPTION] ALIAS + +Canonicalize a configuration name. + +Operation modes: + -h, --help print this help, then exit + -t, --time-stamp print date of last modification, then exit + -v, --version print version number, then exit + +Report bugs and patches to ." + +version="\ +GNU config.sub ($timestamp) + +Copyright 1992-2014 Free Software Foundation, Inc. + +This is free software; see the source for copying conditions. There is NO +warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." + +help=" +Try \`$me --help' for more information." + +# Parse command line +while test $# -gt 0 ; do + case $1 in + --time-stamp | --time* | -t ) + echo "$timestamp" ; exit ;; + --version | -v ) + echo "$version" ; exit ;; + --help | --h* | -h ) + echo "$usage"; exit ;; + -- ) # Stop option processing + shift; break ;; + - ) # Use stdin as input. + break ;; + -* ) + echo "$me: invalid option $1$help" + exit 1 ;; + + *local*) + # First pass through any local machine types. + echo $1 + exit ;; + + * ) + break ;; + esac +done + +case $# in + 0) echo "$me: missing argument$help" >&2 + exit 1;; + 1) ;; + *) echo "$me: too many arguments$help" >&2 + exit 1;; +esac + +# Separate what the user gave into CPU-COMPANY and OS or KERNEL-OS (if any). +# Here we must recognize all the valid KERNEL-OS combinations. 
+maybe_os=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\2/'` +case $maybe_os in + nto-qnx* | linux-gnu* | linux-android* | linux-dietlibc | linux-newlib* | \ + linux-musl* | linux-uclibc* | uclinux-uclibc* | uclinux-gnu* | kfreebsd*-gnu* | \ + knetbsd*-gnu* | netbsd*-gnu* | \ + kopensolaris*-gnu* | \ + storm-chaos* | os2-emx* | rtmk-nova*) + os=-$maybe_os + basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'` + ;; + android-linux) + os=-linux-android + basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'`-unknown + ;; + *) + basic_machine=`echo $1 | sed 's/-[^-]*$//'` + if [ $basic_machine != $1 ] + then os=`echo $1 | sed 's/.*-/-/'` + else os=; fi + ;; +esac + +### Let's recognize common machines as not being operating systems so +### that things like config.sub decstation-3100 work. We also +### recognize some manufacturers as not being operating systems, so we +### can provide default operating systems below. +case $os in + -sun*os*) + # Prevent following clause from handling this invalid input. + ;; + -dec* | -mips* | -sequent* | -encore* | -pc532* | -sgi* | -sony* | \ + -att* | -7300* | -3300* | -delta* | -motorola* | -sun[234]* | \ + -unicom* | -ibm* | -next | -hp | -isi* | -apollo | -altos* | \ + -convergent* | -ncr* | -news | -32* | -3600* | -3100* | -hitachi* |\ + -c[123]* | -convex* | -sun | -crds | -omron* | -dg | -ultra | -tti* | \ + -harris | -dolphin | -highlevel | -gould | -cbm | -ns | -masscomp | \ + -apple | -axis | -knuth | -cray | -microblaze*) + os= + basic_machine=$1 + ;; + -bluegene*) + os=-cnk + ;; + -sim | -cisco | -oki | -wec | -winbond) + os= + basic_machine=$1 + ;; + -scout) + ;; + -wrs) + os=-vxworks + basic_machine=$1 + ;; + -chorusos*) + os=-chorusos + basic_machine=$1 + ;; + -chorusrdb) + os=-chorusrdb + basic_machine=$1 + ;; + -hiux*) + os=-hiuxwe2 + ;; + -sco6) + os=-sco5v6 + basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` + ;; + -sco5) + os=-sco3.2v5 + basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` + ;; + -sco4) + os=-sco3.2v4 + basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` + ;; + -sco3.2.[4-9]*) + os=`echo $os | sed -e 's/sco3.2./sco3.2v/'` + basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` + ;; + -sco3.2v[4-9]*) + # Don't forget version if it is 3.2v4 or newer. + basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` + ;; + -sco5v6*) + # Don't forget version if it is 3.2v4 or newer. + basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` + ;; + -sco*) + os=-sco3.2v2 + basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` + ;; + -udk*) + basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` + ;; + -isc) + os=-isc2.2 + basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` + ;; + -clix*) + basic_machine=clipper-intergraph + ;; + -isc*) + basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` + ;; + -lynx*178) + os=-lynxos178 + ;; + -lynx*5) + os=-lynxos5 + ;; + -lynx*) + os=-lynxos + ;; + -ptx*) + basic_machine=`echo $1 | sed -e 's/86-.*/86-sequent/'` + ;; + -windowsnt*) + os=`echo $os | sed -e 's/windowsnt/winnt/'` + ;; + -psos*) + os=-psos + ;; + -mint | -mint[0-9]*) + basic_machine=m68k-atari + os=-mint + ;; +esac + +# Decode aliases for certain CPU-COMPANY combinations. +case $basic_machine in + # Recognize the basic CPU types without company name. + # Some are omitted here because they have special meanings below. 
+ 1750a | 580 \ + | a29k \ + | aarch64 | aarch64_be \ + | alpha | alphaev[4-8] | alphaev56 | alphaev6[78] | alphapca5[67] \ + | alpha64 | alpha64ev[4-8] | alpha64ev56 | alpha64ev6[78] | alpha64pca5[67] \ + | am33_2.0 \ + | arc | arceb \ + | arm | arm[bl]e | arme[lb] | armv[2-8] | armv[3-8][lb] | armv7[arm] \ + | avr | avr32 \ + | be32 | be64 \ + | bfin \ + | c4x | c8051 | clipper \ + | d10v | d30v | dlx | dsp16xx \ + | epiphany \ + | fido | fr30 | frv \ + | h8300 | h8500 | hppa | hppa1.[01] | hppa2.0 | hppa2.0[nw] | hppa64 \ + | hexagon \ + | i370 | i860 | i960 | ia64 \ + | ip2k | iq2000 \ + | k1om \ + | le32 | le64 \ + | lm32 \ + | m32c | m32r | m32rle | m68000 | m68k | m88k \ + | maxq | mb | microblaze | microblazeel | mcore | mep | metag \ + | mips | mipsbe | mipseb | mipsel | mipsle \ + | mips16 \ + | mips64 | mips64el \ + | mips64octeon | mips64octeonel \ + | mips64orion | mips64orionel \ + | mips64r5900 | mips64r5900el \ + | mips64vr | mips64vrel \ + | mips64vr4100 | mips64vr4100el \ + | mips64vr4300 | mips64vr4300el \ + | mips64vr5000 | mips64vr5000el \ + | mips64vr5900 | mips64vr5900el \ + | mipsisa32 | mipsisa32el \ + | mipsisa32r2 | mipsisa32r2el \ + | mipsisa32r6 | mipsisa32r6el \ + | mipsisa64 | mipsisa64el \ + | mipsisa64r2 | mipsisa64r2el \ + | mipsisa64r6 | mipsisa64r6el \ + | mipsisa64sb1 | mipsisa64sb1el \ + | mipsisa64sr71k | mipsisa64sr71kel \ + | mipsr5900 | mipsr5900el \ + | mipstx39 | mipstx39el \ + | mn10200 | mn10300 \ + | moxie \ + | mt \ + | msp430 \ + | nds32 | nds32le | nds32be \ + | nios | nios2 | nios2eb | nios2el \ + | ns16k | ns32k \ + | open8 | or1k | or1knd | or32 \ + | pdp10 | pdp11 | pj | pjl \ + | powerpc | powerpc64 | powerpc64le | powerpcle \ + | pyramid \ + | rl78 | rx \ + | score \ + | sh | sh[1234] | sh[24]a | sh[24]aeb | sh[23]e | sh[34]eb | sheb | shbe | shle | sh[1234]le | sh3ele \ + | sh64 | sh64le \ + | sparc | sparc64 | sparc64b | sparc64v | sparc86x | sparclet | sparclite \ + | sparcv8 | sparcv9 | sparcv9b | sparcv9v \ + | spu \ + | tahoe | tic4x | tic54x | tic55x | tic6x | tic80 | tron \ + | ubicom32 \ + | v850 | v850e | v850e1 | v850e2 | v850es | v850e2v3 \ + | we32k \ + | x86 | xc16x | xstormy16 | xtensa \ + | z8k | z80) + basic_machine=$basic_machine-unknown + ;; + c54x) + basic_machine=tic54x-unknown + ;; + c55x) + basic_machine=tic55x-unknown + ;; + c6x) + basic_machine=tic6x-unknown + ;; + m6811 | m68hc11 | m6812 | m68hc12 | m68hcs12x | nvptx | picochip) + basic_machine=$basic_machine-unknown + os=-none + ;; + m88110 | m680[12346]0 | m683?2 | m68360 | m5200 | v70 | w65 | z8k) + ;; + ms1) + basic_machine=mt-unknown + ;; + + strongarm | thumb | xscale) + basic_machine=arm-unknown + ;; + xgate) + basic_machine=$basic_machine-unknown + os=-none + ;; + xscaleeb) + basic_machine=armeb-unknown + ;; + + xscaleel) + basic_machine=armel-unknown + ;; + + # We use `pc' rather than `unknown' + # because (1) that's what they normally are, and + # (2) the word "unknown" tends to confuse beginning users. + i*86 | x86_64) + basic_machine=$basic_machine-pc + ;; + # Object if more than one company name word. + *-*-*) + echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2 + exit 1 + ;; + # Recognize the basic CPU types with company name. 
+ 580-* \ + | a29k-* \ + | aarch64-* | aarch64_be-* \ + | alpha-* | alphaev[4-8]-* | alphaev56-* | alphaev6[78]-* \ + | alpha64-* | alpha64ev[4-8]-* | alpha64ev56-* | alpha64ev6[78]-* \ + | alphapca5[67]-* | alpha64pca5[67]-* | arc-* | arceb-* \ + | arm-* | armbe-* | armle-* | armeb-* | armv*-* \ + | avr-* | avr32-* \ + | be32-* | be64-* \ + | bfin-* | bs2000-* \ + | c[123]* | c30-* | [cjt]90-* | c4x-* \ + | c8051-* | clipper-* | craynv-* | cydra-* \ + | d10v-* | d30v-* | dlx-* \ + | elxsi-* \ + | f30[01]-* | f700-* | fido-* | fr30-* | frv-* | fx80-* \ + | h8300-* | h8500-* \ + | hppa-* | hppa1.[01]-* | hppa2.0-* | hppa2.0[nw]-* | hppa64-* \ + | hexagon-* \ + | i*86-* | i860-* | i960-* | ia64-* \ + | ip2k-* | iq2000-* \ + | k1om-* \ + | le32-* | le64-* \ + | lm32-* \ + | m32c-* | m32r-* | m32rle-* \ + | m68000-* | m680[012346]0-* | m68360-* | m683?2-* | m68k-* \ + | m88110-* | m88k-* | maxq-* | mcore-* | metag-* \ + | microblaze-* | microblazeel-* \ + | mips-* | mipsbe-* | mipseb-* | mipsel-* | mipsle-* \ + | mips16-* \ + | mips64-* | mips64el-* \ + | mips64octeon-* | mips64octeonel-* \ + | mips64orion-* | mips64orionel-* \ + | mips64r5900-* | mips64r5900el-* \ + | mips64vr-* | mips64vrel-* \ + | mips64vr4100-* | mips64vr4100el-* \ + | mips64vr4300-* | mips64vr4300el-* \ + | mips64vr5000-* | mips64vr5000el-* \ + | mips64vr5900-* | mips64vr5900el-* \ + | mipsisa32-* | mipsisa32el-* \ + | mipsisa32r2-* | mipsisa32r2el-* \ + | mipsisa32r6-* | mipsisa32r6el-* \ + | mipsisa64-* | mipsisa64el-* \ + | mipsisa64r2-* | mipsisa64r2el-* \ + | mipsisa64r6-* | mipsisa64r6el-* \ + | mipsisa64sb1-* | mipsisa64sb1el-* \ + | mipsisa64sr71k-* | mipsisa64sr71kel-* \ + | mipsr5900-* | mipsr5900el-* \ + | mipstx39-* | mipstx39el-* \ + | mmix-* \ + | mt-* \ + | msp430-* \ + | nds32-* | nds32le-* | nds32be-* \ + | nios-* | nios2-* | nios2eb-* | nios2el-* \ + | none-* | np1-* | ns16k-* | ns32k-* \ + | open8-* \ + | or1k*-* \ + | orion-* \ + | pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \ + | powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* \ + | pyramid-* \ + | rl78-* | romp-* | rs6000-* | rx-* \ + | sh-* | sh[1234]-* | sh[24]a-* | sh[24]aeb-* | sh[23]e-* | sh[34]eb-* | sheb-* | shbe-* \ + | shle-* | sh[1234]le-* | sh3ele-* | sh64-* | sh64le-* \ + | sparc-* | sparc64-* | sparc64b-* | sparc64v-* | sparc86x-* | sparclet-* \ + | sparclite-* \ + | sparcv8-* | sparcv9-* | sparcv9b-* | sparcv9v-* | sv1-* | sx?-* \ + | tahoe-* \ + | tic30-* | tic4x-* | tic54x-* | tic55x-* | tic6x-* | tic80-* \ + | tile*-* \ + | tron-* \ + | ubicom32-* \ + | v850-* | v850e-* | v850e1-* | v850es-* | v850e2-* | v850e2v3-* \ + | vax-* \ + | we32k-* \ + | x86-* | x86_64-* | xc16x-* | xps100-* \ + | xstormy16-* | xtensa*-* \ + | ymp-* \ + | z8k-* | z80-*) + ;; + # Recognize the basic CPU types without company name, with glob match. + xtensa*) + basic_machine=$basic_machine-unknown + ;; + # Recognize the various machine names and aliases which stand + # for a CPU type and a company and sometimes even an OS. 
+ 386bsd) + basic_machine=i386-unknown + os=-bsd + ;; + 3b1 | 7300 | 7300-att | att-7300 | pc7300 | safari | unixpc) + basic_machine=m68000-att + ;; + 3b*) + basic_machine=we32k-att + ;; + a29khif) + basic_machine=a29k-amd + os=-udi + ;; + abacus) + basic_machine=abacus-unknown + ;; + adobe68k) + basic_machine=m68010-adobe + os=-scout + ;; + alliant | fx80) + basic_machine=fx80-alliant + ;; + altos | altos3068) + basic_machine=m68k-altos + ;; + am29k) + basic_machine=a29k-none + os=-bsd + ;; + amd64) + basic_machine=x86_64-pc + ;; + amd64-*) + basic_machine=x86_64-`echo $basic_machine | sed 's/^[^-]*-//'` + ;; + amdahl) + basic_machine=580-amdahl + os=-sysv + ;; + amiga | amiga-*) + basic_machine=m68k-unknown + ;; + amigaos | amigados) + basic_machine=m68k-unknown + os=-amigaos + ;; + amigaunix | amix) + basic_machine=m68k-unknown + os=-sysv4 + ;; + apollo68) + basic_machine=m68k-apollo + os=-sysv + ;; + apollo68bsd) + basic_machine=m68k-apollo + os=-bsd + ;; + aros) + basic_machine=i386-pc + os=-aros + ;; + aux) + basic_machine=m68k-apple + os=-aux + ;; + balance) + basic_machine=ns32k-sequent + os=-dynix + ;; + blackfin) + basic_machine=bfin-unknown + os=-linux + ;; + blackfin-*) + basic_machine=bfin-`echo $basic_machine | sed 's/^[^-]*-//'` + os=-linux + ;; + bluegene*) + basic_machine=powerpc-ibm + os=-cnk + ;; + c54x-*) + basic_machine=tic54x-`echo $basic_machine | sed 's/^[^-]*-//'` + ;; + c55x-*) + basic_machine=tic55x-`echo $basic_machine | sed 's/^[^-]*-//'` + ;; + c6x-*) + basic_machine=tic6x-`echo $basic_machine | sed 's/^[^-]*-//'` + ;; + c90) + basic_machine=c90-cray + os=-unicos + ;; + cegcc) + basic_machine=arm-unknown + os=-cegcc + ;; + convex-c1) + basic_machine=c1-convex + os=-bsd + ;; + convex-c2) + basic_machine=c2-convex + os=-bsd + ;; + convex-c32) + basic_machine=c32-convex + os=-bsd + ;; + convex-c34) + basic_machine=c34-convex + os=-bsd + ;; + convex-c38) + basic_machine=c38-convex + os=-bsd + ;; + cray | j90) + basic_machine=j90-cray + os=-unicos + ;; + craynv) + basic_machine=craynv-cray + os=-unicosmp + ;; + cr16 | cr16-*) + basic_machine=cr16-unknown + os=-elf + ;; + crds | unos) + basic_machine=m68k-crds + ;; + crisv32 | crisv32-* | etraxfs*) + basic_machine=crisv32-axis + ;; + cris | cris-* | etrax*) + basic_machine=cris-axis + ;; + crx) + basic_machine=crx-unknown + os=-elf + ;; + da30 | da30-*) + basic_machine=m68k-da30 + ;; + decstation | decstation-3100 | pmax | pmax-* | pmin | dec3100 | decstatn) + basic_machine=mips-dec + ;; + decsystem10* | dec10*) + basic_machine=pdp10-dec + os=-tops10 + ;; + decsystem20* | dec20*) + basic_machine=pdp10-dec + os=-tops20 + ;; + delta | 3300 | motorola-3300 | motorola-delta \ + | 3300-motorola | delta-motorola) + basic_machine=m68k-motorola + ;; + delta88) + basic_machine=m88k-motorola + os=-sysv3 + ;; + dicos) + basic_machine=i686-pc + os=-dicos + ;; + djgpp) + basic_machine=i586-pc + os=-msdosdjgpp + ;; + dpx20 | dpx20-*) + basic_machine=rs6000-bull + os=-bosx + ;; + dpx2* | dpx2*-bull) + basic_machine=m68k-bull + os=-sysv3 + ;; + ebmon29k) + basic_machine=a29k-amd + os=-ebmon + ;; + elxsi) + basic_machine=elxsi-elxsi + os=-bsd + ;; + encore | umax | mmax) + basic_machine=ns32k-encore + ;; + es1800 | OSE68k | ose68k | ose | OSE) + basic_machine=m68k-ericsson + os=-ose + ;; + fx2800) + basic_machine=i860-alliant + ;; + genix) + basic_machine=ns32k-ns + ;; + gmicro) + basic_machine=tron-gmicro + os=-sysv + ;; + go32) + basic_machine=i386-pc + os=-go32 + ;; + h3050r* | hiux*) + basic_machine=hppa1.1-hitachi + os=-hiuxwe2 
+ ;; + h8300hms) + basic_machine=h8300-hitachi + os=-hms + ;; + h8300xray) + basic_machine=h8300-hitachi + os=-xray + ;; + h8500hms) + basic_machine=h8500-hitachi + os=-hms + ;; + harris) + basic_machine=m88k-harris + os=-sysv3 + ;; + hp300-*) + basic_machine=m68k-hp + ;; + hp300bsd) + basic_machine=m68k-hp + os=-bsd + ;; + hp300hpux) + basic_machine=m68k-hp + os=-hpux + ;; + hp3k9[0-9][0-9] | hp9[0-9][0-9]) + basic_machine=hppa1.0-hp + ;; + hp9k2[0-9][0-9] | hp9k31[0-9]) + basic_machine=m68000-hp + ;; + hp9k3[2-9][0-9]) + basic_machine=m68k-hp + ;; + hp9k6[0-9][0-9] | hp6[0-9][0-9]) + basic_machine=hppa1.0-hp + ;; + hp9k7[0-79][0-9] | hp7[0-79][0-9]) + basic_machine=hppa1.1-hp + ;; + hp9k78[0-9] | hp78[0-9]) + # FIXME: really hppa2.0-hp + basic_machine=hppa1.1-hp + ;; + hp9k8[67]1 | hp8[67]1 | hp9k80[24] | hp80[24] | hp9k8[78]9 | hp8[78]9 | hp9k893 | hp893) + # FIXME: really hppa2.0-hp + basic_machine=hppa1.1-hp + ;; + hp9k8[0-9][13679] | hp8[0-9][13679]) + basic_machine=hppa1.1-hp + ;; + hp9k8[0-9][0-9] | hp8[0-9][0-9]) + basic_machine=hppa1.0-hp + ;; + hppa-next) + os=-nextstep3 + ;; + hppaosf) + basic_machine=hppa1.1-hp + os=-osf + ;; + hppro) + basic_machine=hppa1.1-hp + os=-proelf + ;; + i370-ibm* | ibm*) + basic_machine=i370-ibm + ;; + i*86v32) + basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` + os=-sysv32 + ;; + i*86v4*) + basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` + os=-sysv4 + ;; + i*86v) + basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` + os=-sysv + ;; + i*86sol2) + basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` + os=-solaris2 + ;; + i386mach) + basic_machine=i386-mach + os=-mach + ;; + i386-vsta | vsta) + basic_machine=i386-unknown + os=-vsta + ;; + iris | iris4d) + basic_machine=mips-sgi + case $os in + -irix*) + ;; + *) + os=-irix4 + ;; + esac + ;; + isi68 | isi) + basic_machine=m68k-isi + os=-sysv + ;; + m68knommu) + basic_machine=m68k-unknown + os=-linux + ;; + m68knommu-*) + basic_machine=m68k-`echo $basic_machine | sed 's/^[^-]*-//'` + os=-linux + ;; + m88k-omron*) + basic_machine=m88k-omron + ;; + magnum | m3230) + basic_machine=mips-mips + os=-sysv + ;; + merlin) + basic_machine=ns32k-utek + os=-sysv + ;; + microblaze*) + basic_machine=microblaze-xilinx + ;; + mingw64) + basic_machine=x86_64-pc + os=-mingw64 + ;; + mingw32) + basic_machine=i686-pc + os=-mingw32 + ;; + mingw32ce) + basic_machine=arm-unknown + os=-mingw32ce + ;; + miniframe) + basic_machine=m68000-convergent + ;; + *mint | -mint[0-9]* | *MiNT | *MiNT[0-9]*) + basic_machine=m68k-atari + os=-mint + ;; + mips3*-*) + basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'` + ;; + mips3*) + basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'`-unknown + ;; + monitor) + basic_machine=m68k-rom68k + os=-coff + ;; + morphos) + basic_machine=powerpc-unknown + os=-morphos + ;; + msdos) + basic_machine=i386-pc + os=-msdos + ;; + ms1-*) + basic_machine=`echo $basic_machine | sed -e 's/ms1-/mt-/'` + ;; + msys) + basic_machine=i686-pc + os=-msys + ;; + mvs) + basic_machine=i370-ibm + os=-mvs + ;; + nacl) + basic_machine=le32-unknown + os=-nacl + ;; + ncr3000) + basic_machine=i486-ncr + os=-sysv4 + ;; + netbsd386) + basic_machine=i386-unknown + os=-netbsd + ;; + netwinder) + basic_machine=armv4l-rebel + os=-linux + ;; + news | news700 | news800 | news900) + basic_machine=m68k-sony + os=-newsos + ;; + news1000) + basic_machine=m68030-sony + os=-newsos + ;; + news-3600 | risc-news) + basic_machine=mips-sony + os=-newsos + ;; + necv70) + basic_machine=v70-nec + os=-sysv + ;; + next | m*-next ) + 
basic_machine=m68k-next + case $os in + -nextstep* ) + ;; + -ns2*) + os=-nextstep2 + ;; + *) + os=-nextstep3 + ;; + esac + ;; + nh3000) + basic_machine=m68k-harris + os=-cxux + ;; + nh[45]000) + basic_machine=m88k-harris + os=-cxux + ;; + nindy960) + basic_machine=i960-intel + os=-nindy + ;; + mon960) + basic_machine=i960-intel + os=-mon960 + ;; + nonstopux) + basic_machine=mips-compaq + os=-nonstopux + ;; + np1) + basic_machine=np1-gould + ;; + neo-tandem) + basic_machine=neo-tandem + ;; + nse-tandem) + basic_machine=nse-tandem + ;; + nsr-tandem) + basic_machine=nsr-tandem + ;; + op50n-* | op60c-*) + basic_machine=hppa1.1-oki + os=-proelf + ;; + openrisc | openrisc-*) + basic_machine=or32-unknown + ;; + os400) + basic_machine=powerpc-ibm + os=-os400 + ;; + OSE68000 | ose68000) + basic_machine=m68000-ericsson + os=-ose + ;; + os68k) + basic_machine=m68k-none + os=-os68k + ;; + pa-hitachi) + basic_machine=hppa1.1-hitachi + os=-hiuxwe2 + ;; + paragon) + basic_machine=i860-intel + os=-osf + ;; + parisc) + basic_machine=hppa-unknown + os=-linux + ;; + parisc-*) + basic_machine=hppa-`echo $basic_machine | sed 's/^[^-]*-//'` + os=-linux + ;; + pbd) + basic_machine=sparc-tti + ;; + pbb) + basic_machine=m68k-tti + ;; + pc532 | pc532-*) + basic_machine=ns32k-pc532 + ;; + pc98) + basic_machine=i386-pc + ;; + pc98-*) + basic_machine=i386-`echo $basic_machine | sed 's/^[^-]*-//'` + ;; + pentium | p5 | k5 | k6 | nexgen | viac3) + basic_machine=i586-pc + ;; + pentiumpro | p6 | 6x86 | athlon | athlon_*) + basic_machine=i686-pc + ;; + pentiumii | pentium2 | pentiumiii | pentium3) + basic_machine=i686-pc + ;; + pentium4) + basic_machine=i786-pc + ;; + pentium-* | p5-* | k5-* | k6-* | nexgen-* | viac3-*) + basic_machine=i586-`echo $basic_machine | sed 's/^[^-]*-//'` + ;; + pentiumpro-* | p6-* | 6x86-* | athlon-*) + basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'` + ;; + pentiumii-* | pentium2-* | pentiumiii-* | pentium3-*) + basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'` + ;; + pentium4-*) + basic_machine=i786-`echo $basic_machine | sed 's/^[^-]*-//'` + ;; + pn) + basic_machine=pn-gould + ;; + power) basic_machine=power-ibm + ;; + ppc | ppcbe) basic_machine=powerpc-unknown + ;; + ppc-* | ppcbe-*) + basic_machine=powerpc-`echo $basic_machine | sed 's/^[^-]*-//'` + ;; + ppcle | powerpclittle | ppc-le | powerpc-little) + basic_machine=powerpcle-unknown + ;; + ppcle-* | powerpclittle-*) + basic_machine=powerpcle-`echo $basic_machine | sed 's/^[^-]*-//'` + ;; + ppc64) basic_machine=powerpc64-unknown + ;; + ppc64-*) basic_machine=powerpc64-`echo $basic_machine | sed 's/^[^-]*-//'` + ;; + ppc64le | powerpc64little | ppc64-le | powerpc64-little) + basic_machine=powerpc64le-unknown + ;; + ppc64le-* | powerpc64little-*) + basic_machine=powerpc64le-`echo $basic_machine | sed 's/^[^-]*-//'` + ;; + ps2) + basic_machine=i386-ibm + ;; + pw32) + basic_machine=i586-unknown + os=-pw32 + ;; + rdos | rdos64) + basic_machine=x86_64-pc + os=-rdos + ;; + rdos32) + basic_machine=i386-pc + os=-rdos + ;; + rom68k) + basic_machine=m68k-rom68k + os=-coff + ;; + rm[46]00) + basic_machine=mips-siemens + ;; + rtpc | rtpc-*) + basic_machine=romp-ibm + ;; + s390 | s390-*) + basic_machine=s390-ibm + ;; + s390x | s390x-*) + basic_machine=s390x-ibm + ;; + sa29200) + basic_machine=a29k-amd + os=-udi + ;; + sb1) + basic_machine=mipsisa64sb1-unknown + ;; + sb1el) + basic_machine=mipsisa64sb1el-unknown + ;; + sde) + basic_machine=mipsisa32-sde + os=-elf + ;; + sei) + basic_machine=mips-sei + os=-seiux + ;; + sequent) + 
basic_machine=i386-sequent + ;; + sh) + basic_machine=sh-hitachi + os=-hms + ;; + sh5el) + basic_machine=sh5le-unknown + ;; + sh64) + basic_machine=sh64-unknown + ;; + sparclite-wrs | simso-wrs) + basic_machine=sparclite-wrs + os=-vxworks + ;; + sps7) + basic_machine=m68k-bull + os=-sysv2 + ;; + spur) + basic_machine=spur-unknown + ;; + st2000) + basic_machine=m68k-tandem + ;; + stratus) + basic_machine=i860-stratus + os=-sysv4 + ;; + strongarm-* | thumb-*) + basic_machine=arm-`echo $basic_machine | sed 's/^[^-]*-//'` + ;; + sun2) + basic_machine=m68000-sun + ;; + sun2os3) + basic_machine=m68000-sun + os=-sunos3 + ;; + sun2os4) + basic_machine=m68000-sun + os=-sunos4 + ;; + sun3os3) + basic_machine=m68k-sun + os=-sunos3 + ;; + sun3os4) + basic_machine=m68k-sun + os=-sunos4 + ;; + sun4os3) + basic_machine=sparc-sun + os=-sunos3 + ;; + sun4os4) + basic_machine=sparc-sun + os=-sunos4 + ;; + sun4sol2) + basic_machine=sparc-sun + os=-solaris2 + ;; + sun3 | sun3-*) + basic_machine=m68k-sun + ;; + sun4) + basic_machine=sparc-sun + ;; + sun386 | sun386i | roadrunner) + basic_machine=i386-sun + ;; + sv1) + basic_machine=sv1-cray + os=-unicos + ;; + symmetry) + basic_machine=i386-sequent + os=-dynix + ;; + t3e) + basic_machine=alphaev5-cray + os=-unicos + ;; + t90) + basic_machine=t90-cray + os=-unicos + ;; + tile*) + basic_machine=$basic_machine-unknown + os=-linux-gnu + ;; + tx39) + basic_machine=mipstx39-unknown + ;; + tx39el) + basic_machine=mipstx39el-unknown + ;; + toad1) + basic_machine=pdp10-xkl + os=-tops20 + ;; + tower | tower-32) + basic_machine=m68k-ncr + ;; + tpf) + basic_machine=s390x-ibm + os=-tpf + ;; + udi29k) + basic_machine=a29k-amd + os=-udi + ;; + ultra3) + basic_machine=a29k-nyu + os=-sym1 + ;; + v810 | necv810) + basic_machine=v810-nec + os=-none + ;; + vaxv) + basic_machine=vax-dec + os=-sysv + ;; + vms) + basic_machine=vax-dec + os=-vms + ;; + vpp*|vx|vx-*) + basic_machine=f301-fujitsu + ;; + vxworks960) + basic_machine=i960-wrs + os=-vxworks + ;; + vxworks68) + basic_machine=m68k-wrs + os=-vxworks + ;; + vxworks29k) + basic_machine=a29k-wrs + os=-vxworks + ;; + w65*) + basic_machine=w65-wdc + os=-none + ;; + w89k-*) + basic_machine=hppa1.1-winbond + os=-proelf + ;; + xbox) + basic_machine=i686-pc + os=-mingw32 + ;; + xps | xps100) + basic_machine=xps100-honeywell + ;; + xscale-* | xscalee[bl]-*) + basic_machine=`echo $basic_machine | sed 's/^xscale/arm/'` + ;; + ymp) + basic_machine=ymp-cray + os=-unicos + ;; + z8k-*-coff) + basic_machine=z8k-unknown + os=-sim + ;; + z80-*-coff) + basic_machine=z80-unknown + os=-sim + ;; + none) + basic_machine=none-none + os=-none + ;; + +# Here we handle the default manufacturer of certain CPU types. It is in +# some cases the only manufacturer, in others, it is the most popular. 
+ w89k) + basic_machine=hppa1.1-winbond + ;; + op50n) + basic_machine=hppa1.1-oki + ;; + op60c) + basic_machine=hppa1.1-oki + ;; + romp) + basic_machine=romp-ibm + ;; + mmix) + basic_machine=mmix-knuth + ;; + rs6000) + basic_machine=rs6000-ibm + ;; + vax) + basic_machine=vax-dec + ;; + pdp10) + # there are many clones, so DEC is not a safe bet + basic_machine=pdp10-unknown + ;; + pdp11) + basic_machine=pdp11-dec + ;; + we32k) + basic_machine=we32k-att + ;; + sh[1234] | sh[24]a | sh[24]aeb | sh[34]eb | sh[1234]le | sh[23]ele) + basic_machine=sh-unknown + ;; + sparc | sparcv8 | sparcv9 | sparcv9b | sparcv9v) + basic_machine=sparc-sun + ;; + cydra) + basic_machine=cydra-cydrome + ;; + orion) + basic_machine=orion-highlevel + ;; + orion105) + basic_machine=clipper-highlevel + ;; + mac | mpw | mac-mpw) + basic_machine=m68k-apple + ;; + pmac | pmac-mpw) + basic_machine=powerpc-apple + ;; + *-unknown) + # Make sure to match an already-canonicalized machine name. + ;; + *) + echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2 + exit 1 + ;; +esac + +# Here we canonicalize certain aliases for manufacturers. +case $basic_machine in + *-digital*) + basic_machine=`echo $basic_machine | sed 's/digital.*/dec/'` + ;; + *-commodore*) + basic_machine=`echo $basic_machine | sed 's/commodore.*/cbm/'` + ;; + *) + ;; +esac + +# Decode manufacturer-specific aliases for certain operating systems. + +if [ x"$os" != x"" ] +then +case $os in + # First match some system type aliases + # that might get confused with valid system types. + # -solaris* is a basic system type, with this one exception. + -auroraux) + os=-auroraux + ;; + -solaris1 | -solaris1.*) + os=`echo $os | sed -e 's|solaris1|sunos4|'` + ;; + -solaris) + os=-solaris2 + ;; + -svr4*) + os=-sysv4 + ;; + -unixware*) + os=-sysv4.2uw + ;; + -gnu/linux*) + os=`echo $os | sed -e 's|gnu/linux|linux-gnu|'` + ;; + # First accept the basic system types. + # The portable systems comes first. + # Each alternative MUST END IN A *, to match a version number. + # -sysv* is not here because it comes later, after sysvr4. 
+ -gnu* | -bsd* | -mach* | -minix* | -genix* | -ultrix* | -irix* \ + | -*vms* | -sco* | -esix* | -isc* | -aix* | -cnk* | -sunos | -sunos[34]*\ + | -hpux* | -unos* | -osf* | -luna* | -dgux* | -auroraux* | -solaris* \ + | -sym* | -kopensolaris* | -plan9* \ + | -amigaos* | -amigados* | -msdos* | -newsos* | -unicos* | -aof* \ + | -aos* | -aros* \ + | -nindy* | -vxsim* | -vxworks* | -ebmon* | -hms* | -mvs* \ + | -clix* | -riscos* | -uniplus* | -iris* | -rtu* | -xenix* \ + | -hiux* | -386bsd* | -knetbsd* | -mirbsd* | -netbsd* \ + | -bitrig* | -openbsd* | -solidbsd* \ + | -ekkobsd* | -kfreebsd* | -freebsd* | -riscix* | -lynxos* \ + | -bosx* | -nextstep* | -cxux* | -aout* | -elf* | -oabi* \ + | -ptx* | -coff* | -ecoff* | -winnt* | -domain* | -vsta* \ + | -udi* | -eabi* | -lites* | -ieee* | -go32* | -aux* \ + | -chorusos* | -chorusrdb* | -cegcc* \ + | -cygwin* | -msys* | -pe* | -psos* | -moss* | -proelf* | -rtems* \ + | -mingw32* | -mingw64* | -linux-gnu* | -linux-android* \ + | -linux-newlib* | -linux-musl* | -linux-uclibc* \ + | -uxpv* | -beos* | -mpeix* | -udk* \ + | -interix* | -uwin* | -mks* | -rhapsody* | -darwin* | -opened* \ + | -openstep* | -oskit* | -conix* | -pw32* | -nonstopux* \ + | -storm-chaos* | -tops10* | -tenex* | -tops20* | -its* \ + | -os2* | -vos* | -palmos* | -uclinux* | -nucleus* \ + | -morphos* | -superux* | -rtmk* | -rtmk-nova* | -windiss* \ + | -powermax* | -dnix* | -nx6 | -nx7 | -sei* | -dragonfly* \ + | -skyos* | -haiku* | -rdos* | -toppers* | -drops* | -es* | -tirtos*) + # Remember, each alternative MUST END IN *, to match a version number. + ;; + -qnx*) + case $basic_machine in + x86-* | i*86-*) + ;; + *) + os=-nto$os + ;; + esac + ;; + -nto-qnx*) + ;; + -nto*) + os=`echo $os | sed -e 's|nto|nto-qnx|'` + ;; + -sim | -es1800* | -hms* | -xray | -os68k* | -none* | -v88r* \ + | -windows* | -osx | -abug | -netware* | -os9* | -beos* | -haiku* \ + | -macos* | -mpw* | -magic* | -mmixware* | -mon960* | -lnews*) + ;; + -mac*) + os=`echo $os | sed -e 's|mac|macos|'` + ;; + -linux-dietlibc) + os=-linux-dietlibc + ;; + -linux*) + os=`echo $os | sed -e 's|linux|linux-gnu|'` + ;; + -sunos5*) + os=`echo $os | sed -e 's|sunos5|solaris2|'` + ;; + -sunos6*) + os=`echo $os | sed -e 's|sunos6|solaris3|'` + ;; + -opened*) + os=-openedition + ;; + -os400*) + os=-os400 + ;; + -wince*) + os=-wince + ;; + -osfrose*) + os=-osfrose + ;; + -osf*) + os=-osf + ;; + -utek*) + os=-bsd + ;; + -dynix*) + os=-bsd + ;; + -acis*) + os=-aos + ;; + -atheos*) + os=-atheos + ;; + -syllable*) + os=-syllable + ;; + -386bsd) + os=-bsd + ;; + -ctix* | -uts*) + os=-sysv + ;; + -nova*) + os=-rtmk-nova + ;; + -ns2 ) + os=-nextstep2 + ;; + -nsk*) + os=-nsk + ;; + # Preserve the version number of sinix5. + -sinix5.*) + os=`echo $os | sed -e 's|sinix|sysv|'` + ;; + -sinix*) + os=-sysv4 + ;; + -tpf*) + os=-tpf + ;; + -triton*) + os=-sysv3 + ;; + -oss*) + os=-sysv3 + ;; + -svr4) + os=-sysv4 + ;; + -svr3) + os=-sysv3 + ;; + -sysvr4) + os=-sysv4 + ;; + # This must come after -sysvr4. + -sysv*) + ;; + -ose*) + os=-ose + ;; + -es1800*) + os=-ose + ;; + -xenix) + os=-xenix + ;; + -*mint | -mint[0-9]* | -*MiNT | -MiNT[0-9]*) + os=-mint + ;; + -aros*) + os=-aros + ;; + -zvmoe) + os=-zvmoe + ;; + -dicos*) + os=-dicos + ;; + -nacl*) + ;; + -none) + ;; + *) + # Get rid of the `-' at the beginning of $os. + os=`echo $os | sed 's/[^-]*-//'` + echo Invalid configuration \`$1\': system \`$os\' not recognized 1>&2 + exit 1 + ;; +esac +else + +# Here we handle the default operating systems that come with various machines. 
+# The value should be what the vendor currently ships out the door with their +# machine or put another way, the most popular os provided with the machine. + +# Note that if you're going to try to match "-MANUFACTURER" here (say, +# "-sun"), then you have to tell the case statement up towards the top +# that MANUFACTURER isn't an operating system. Otherwise, code above +# will signal an error saying that MANUFACTURER isn't an operating +# system, and we'll never get to this point. + +case $basic_machine in + score-*) + os=-elf + ;; + spu-*) + os=-elf + ;; + *-acorn) + os=-riscix1.2 + ;; + arm*-rebel) + os=-linux + ;; + arm*-semi) + os=-aout + ;; + c4x-* | tic4x-*) + os=-coff + ;; + c8051-*) + os=-elf + ;; + hexagon-*) + os=-elf + ;; + tic54x-*) + os=-coff + ;; + tic55x-*) + os=-coff + ;; + tic6x-*) + os=-coff + ;; + # This must come before the *-dec entry. + pdp10-*) + os=-tops20 + ;; + pdp11-*) + os=-none + ;; + *-dec | vax-*) + os=-ultrix4.2 + ;; + m68*-apollo) + os=-domain + ;; + i386-sun) + os=-sunos4.0.2 + ;; + m68000-sun) + os=-sunos3 + ;; + m68*-cisco) + os=-aout + ;; + mep-*) + os=-elf + ;; + mips*-cisco) + os=-elf + ;; + mips*-*) + os=-elf + ;; + or32-*) + os=-coff + ;; + *-tti) # must be before sparc entry or we get the wrong os. + os=-sysv3 + ;; + sparc-* | *-sun) + os=-sunos4.1.1 + ;; + *-be) + os=-beos + ;; + *-haiku) + os=-haiku + ;; + *-ibm) + os=-aix + ;; + *-knuth) + os=-mmixware + ;; + *-wec) + os=-proelf + ;; + *-winbond) + os=-proelf + ;; + *-oki) + os=-proelf + ;; + *-hp) + os=-hpux + ;; + *-hitachi) + os=-hiux + ;; + i860-* | *-att | *-ncr | *-altos | *-motorola | *-convergent) + os=-sysv + ;; + *-cbm) + os=-amigaos + ;; + *-dg) + os=-dgux + ;; + *-dolphin) + os=-sysv3 + ;; + m68k-ccur) + os=-rtu + ;; + m88k-omron*) + os=-luna + ;; + *-next ) + os=-nextstep + ;; + *-sequent) + os=-ptx + ;; + *-crds) + os=-unos + ;; + *-ns) + os=-genix + ;; + i370-*) + os=-mvs + ;; + *-next) + os=-nextstep3 + ;; + *-gould) + os=-sysv + ;; + *-highlevel) + os=-bsd + ;; + *-encore) + os=-bsd + ;; + *-sgi) + os=-irix + ;; + *-siemens) + os=-sysv4 + ;; + *-masscomp) + os=-rtu + ;; + f30[01]-fujitsu | f700-fujitsu) + os=-uxpv + ;; + *-rom68k) + os=-coff + ;; + *-*bug) + os=-coff + ;; + *-apple) + os=-macos + ;; + *-atari*) + os=-mint + ;; + *) + os=-none + ;; +esac +fi + +# Here we handle the case where we know the os, and the CPU type, but not the +# manufacturer. We pick the logical manufacturer. 
+vendor=unknown +case $basic_machine in + *-unknown) + case $os in + -riscix*) + vendor=acorn + ;; + -sunos*) + vendor=sun + ;; + -cnk*|-aix*) + vendor=ibm + ;; + -beos*) + vendor=be + ;; + -hpux*) + vendor=hp + ;; + -mpeix*) + vendor=hp + ;; + -hiux*) + vendor=hitachi + ;; + -unos*) + vendor=crds + ;; + -dgux*) + vendor=dg + ;; + -luna*) + vendor=omron + ;; + -genix*) + vendor=ns + ;; + -mvs* | -opened*) + vendor=ibm + ;; + -os400*) + vendor=ibm + ;; + -ptx*) + vendor=sequent + ;; + -tpf*) + vendor=ibm + ;; + -vxsim* | -vxworks* | -windiss*) + vendor=wrs + ;; + -aux*) + vendor=apple + ;; + -hms*) + vendor=hitachi + ;; + -mpw* | -macos*) + vendor=apple + ;; + -*mint | -mint[0-9]* | -*MiNT | -MiNT[0-9]*) + vendor=atari + ;; + -vos*) + vendor=stratus + ;; + esac + basic_machine=`echo $basic_machine | sed "s/unknown/$vendor/"` + ;; +esac + +echo $basic_machine$os +exit + +# Local variables: +# eval: (add-hook 'write-file-hooks 'time-stamp) +# time-stamp-start: "timestamp='" +# time-stamp-format: "%:y-%02m-%02d" +# time-stamp-end: "'" +# End: ADDED autosetup/default.auto Index: autosetup/default.auto ================================================================== --- autosetup/default.auto +++ autosetup/default.auto @@ -0,0 +1,25 @@ +# Copyright (c) 2012 WorkWare Systems http://www.workware.net.au/ +# All rights reserved + +# Auto-load module for 'make' build system integration + +use init + +autosetup_add_init_type make {Simple "make" build system} { + autosetup_check_create auto.def \ +{# Initial auto.def created by 'autosetup --init=make' + +use cc + +# Add any user options here +options { +} + +make-config-header config.h +make-template Makefile.in +} + + if {![file exists Makefile.in]} { + puts "Note: I don't see Makefile.in. You will probably need to create one." + } +} ADDED autosetup/find-tclsh Index: autosetup/find-tclsh ================================================================== --- autosetup/find-tclsh +++ autosetup/find-tclsh @@ -0,0 +1,16 @@ +#!/bin/sh +# Looks for a suitable tclsh or jimsh in the PATH +# If not found, builds a bootstrap jimsh from source +d=`dirname "$0"` +{ "$d/jimsh0" "$d/test-tclsh"; } 2>/dev/null && exit 0 +PATH="$PATH:$d"; export PATH +for tclsh in jimsh tclsh tclsh8.5 tclsh8.6; do + { $tclsh "$d/test-tclsh"; } 2>/dev/null && exit 0 +done +echo 1>&2 "No installed jimsh or tclsh, building local bootstrap jimsh0" +for cc in ${CC_FOR_BUILD:-cc} gcc; do + { $cc -o "$d/jimsh0" "$d/jimsh0.c"; } 2>/dev/null || continue + "$d/jimsh0" "$d/test-tclsh" && exit 0 +done +echo 1>&2 "No working C compiler found. Tried ${CC_FOR_BUILD:-cc} and gcc." +echo false ADDED autosetup/jimsh0.c Index: autosetup/jimsh0.c ================================================================== --- autosetup/jimsh0.c +++ autosetup/jimsh0.c @@ -0,0 +1,21993 @@ +/* This is single source file, bootstrap version of Jim Tcl. See http://jim.tcl.tk/ */ +#define _GNU_SOURCE +#define JIM_TCL_COMPAT +#define JIM_REFERENCES +#define JIM_ANSIC +#define JIM_REGEXP +#define HAVE_NO_AUTOCONF +#define _JIMAUTOCONF_H +#define TCL_LIBRARY "." 
+#define jim_ext_bootstrap +#define jim_ext_aio +#define jim_ext_readdir +#define jim_ext_glob +#define jim_ext_regexp +#define jim_ext_file +#define jim_ext_exec +#define jim_ext_clock +#define jim_ext_array +#define jim_ext_stdlib +#define jim_ext_tclcompat +#if defined(_MSC_VER) +#define TCL_PLATFORM_OS "windows" +#define TCL_PLATFORM_PLATFORM "windows" +#define TCL_PLATFORM_PATH_SEPARATOR ";" +#define HAVE_MKDIR_ONE_ARG +#define HAVE_SYSTEM +#elif defined(__MINGW32__) +#define TCL_PLATFORM_OS "mingw" +#define TCL_PLATFORM_PLATFORM "windows" +#define TCL_PLATFORM_PATH_SEPARATOR ";" +#define HAVE_MKDIR_ONE_ARG +#define HAVE_SYSTEM +#define HAVE_SYS_TIME_H +#define HAVE_DIRENT_H +#define HAVE_UNISTD_H +#else +#define TCL_PLATFORM_OS "unknown" +#define TCL_PLATFORM_PLATFORM "unix" +#define TCL_PLATFORM_PATH_SEPARATOR ":" +#define HAVE_VFORK +#define HAVE_WAITPID +#define HAVE_ISATTY +#define HAVE_MKSTEMP +#define HAVE_LINK +#define HAVE_SYS_TIME_H +#define HAVE_DIRENT_H +#define HAVE_UNISTD_H +#endif +#define JIM_VERSION 76 +#ifndef JIM_WIN32COMPAT_H +#define JIM_WIN32COMPAT_H + + + +#ifdef __cplusplus +extern "C" { +#endif + + +#if defined(_WIN32) || defined(WIN32) + +#define HAVE_DLOPEN +void *dlopen(const char *path, int mode); +int dlclose(void *handle); +void *dlsym(void *handle, const char *symbol); +char *dlerror(void); + + +#define JIM_SPRINTF_DOUBLE_NEEDS_FIX + +#ifdef _MSC_VER + + +#if _MSC_VER >= 1000 + #pragma warning(disable:4146) +#endif + +#include +#define jim_wide _int64 +#ifndef LLONG_MAX + #define LLONG_MAX 9223372036854775807I64 +#endif +#ifndef LLONG_MIN + #define LLONG_MIN (-LLONG_MAX - 1I64) +#endif +#define JIM_WIDE_MIN LLONG_MIN +#define JIM_WIDE_MAX LLONG_MAX +#define JIM_WIDE_MODIFIER "I64d" +#define strcasecmp _stricmp +#define strtoull _strtoui64 +#define snprintf _snprintf + +#include + +struct timeval { + long tv_sec; + long tv_usec; +}; + +int gettimeofday(struct timeval *tv, void *unused); + +#define HAVE_OPENDIR +struct dirent { + char *d_name; +}; + +typedef struct DIR { + long handle; + struct _finddata_t info; + struct dirent result; + char *name; +} DIR; + +DIR *opendir(const char *name); +int closedir(DIR *dir); +struct dirent *readdir(DIR *dir); + +#elif defined(__MINGW32__) + +#include +#define strtod __strtod + +#endif + +#endif + +#ifdef __cplusplus +} +#endif + +#endif +#ifndef UTF8_UTIL_H +#define UTF8_UTIL_H + +#ifdef __cplusplus +extern "C" { +#endif + + + +#define MAX_UTF8_LEN 4 + +int utf8_fromunicode(char *p, unsigned uc); + +#ifndef JIM_UTF8 +#include + + +#define utf8_strlen(S, B) ((B) < 0 ? 
strlen(S) : (B)) +#define utf8_tounicode(S, CP) (*(CP) = (unsigned char)*(S), 1) +#define utf8_getchars(CP, C) (*(CP) = (C), 1) +#define utf8_upper(C) toupper(C) +#define utf8_title(C) toupper(C) +#define utf8_lower(C) tolower(C) +#define utf8_index(C, I) (I) +#define utf8_charlen(C) 1 +#define utf8_prev_len(S, L) 1 + +#else + +#endif + +#ifdef __cplusplus +} +#endif + +#endif + +#ifndef __JIM__H +#define __JIM__H + +#ifdef __cplusplus +extern "C" { +#endif + +#include +#include +#include +#include +#include + + +#ifndef HAVE_NO_AUTOCONF +#endif + + + +#ifndef jim_wide +# ifdef HAVE_LONG_LONG +# define jim_wide long long +# ifndef LLONG_MAX +# define LLONG_MAX 9223372036854775807LL +# endif +# ifndef LLONG_MIN +# define LLONG_MIN (-LLONG_MAX - 1LL) +# endif +# define JIM_WIDE_MIN LLONG_MIN +# define JIM_WIDE_MAX LLONG_MAX +# else +# define jim_wide long +# define JIM_WIDE_MIN LONG_MIN +# define JIM_WIDE_MAX LONG_MAX +# endif + + +# ifdef HAVE_LONG_LONG +# define JIM_WIDE_MODIFIER "lld" +# else +# define JIM_WIDE_MODIFIER "ld" +# define strtoull strtoul +# endif +#endif + +#define UCHAR(c) ((unsigned char)(c)) + + +#define JIM_OK 0 +#define JIM_ERR 1 +#define JIM_RETURN 2 +#define JIM_BREAK 3 +#define JIM_CONTINUE 4 +#define JIM_SIGNAL 5 +#define JIM_EXIT 6 + +#define JIM_EVAL 7 + +#define JIM_MAX_CALLFRAME_DEPTH 1000 +#define JIM_MAX_EVAL_DEPTH 2000 + + +#define JIM_PRIV_FLAG_SHIFT 20 + +#define JIM_NONE 0 +#define JIM_ERRMSG 1 +#define JIM_ENUM_ABBREV 2 +#define JIM_UNSHARED 4 +#define JIM_MUSTEXIST 8 + + +#define JIM_SUBST_NOVAR 1 +#define JIM_SUBST_NOCMD 2 +#define JIM_SUBST_NOESC 4 +#define JIM_SUBST_FLAG 128 + + +#define JIM_CASESENS 0 +#define JIM_NOCASE 1 + + +#define JIM_PATH_LEN 1024 + + +#define JIM_NOTUSED(V) ((void) V) + +#define JIM_LIBPATH "auto_path" +#define JIM_INTERACTIVE "tcl_interactive" + + +typedef struct Jim_Stack { + int len; + int maxlen; + void **vector; +} Jim_Stack; + + +typedef struct Jim_HashEntry { + void *key; + union { + void *val; + int intval; + } u; + struct Jim_HashEntry *next; +} Jim_HashEntry; + +typedef struct Jim_HashTableType { + unsigned int (*hashFunction)(const void *key); + void *(*keyDup)(void *privdata, const void *key); + void *(*valDup)(void *privdata, const void *obj); + int (*keyCompare)(void *privdata, const void *key1, const void *key2); + void (*keyDestructor)(void *privdata, void *key); + void (*valDestructor)(void *privdata, void *obj); +} Jim_HashTableType; + +typedef struct Jim_HashTable { + Jim_HashEntry **table; + const Jim_HashTableType *type; + void *privdata; + unsigned int size; + unsigned int sizemask; + unsigned int used; + unsigned int collisions; + unsigned int uniq; +} Jim_HashTable; + +typedef struct Jim_HashTableIterator { + Jim_HashTable *ht; + Jim_HashEntry *entry, *nextEntry; + int index; +} Jim_HashTableIterator; + + +#define JIM_HT_INITIAL_SIZE 16 + + +#define Jim_FreeEntryVal(ht, entry) \ + if ((ht)->type->valDestructor) \ + (ht)->type->valDestructor((ht)->privdata, (entry)->u.val) + +#define Jim_SetHashVal(ht, entry, _val_) do { \ + if ((ht)->type->valDup) \ + (entry)->u.val = (ht)->type->valDup((ht)->privdata, (_val_)); \ + else \ + (entry)->u.val = (_val_); \ +} while(0) + +#define Jim_FreeEntryKey(ht, entry) \ + if ((ht)->type->keyDestructor) \ + (ht)->type->keyDestructor((ht)->privdata, (entry)->key) + +#define Jim_SetHashKey(ht, entry, _key_) do { \ + if ((ht)->type->keyDup) \ + (entry)->key = (ht)->type->keyDup((ht)->privdata, (_key_)); \ + else \ + (entry)->key = (void *)(_key_); \ +} while(0) + +#define 
Jim_CompareHashKeys(ht, key1, key2) \ + (((ht)->type->keyCompare) ? \ + (ht)->type->keyCompare((ht)->privdata, (key1), (key2)) : \ + (key1) == (key2)) + +#define Jim_HashKey(ht, key) ((ht)->type->hashFunction(key) + (ht)->uniq) + +#define Jim_GetHashEntryKey(he) ((he)->key) +#define Jim_GetHashEntryVal(he) ((he)->u.val) +#define Jim_GetHashTableCollisions(ht) ((ht)->collisions) +#define Jim_GetHashTableSize(ht) ((ht)->size) +#define Jim_GetHashTableUsed(ht) ((ht)->used) + + +typedef struct Jim_Obj { + char *bytes; + const struct Jim_ObjType *typePtr; + int refCount; + int length; + + union { + + jim_wide wideValue; + + int intValue; + + double doubleValue; + + void *ptr; + + struct { + void *ptr1; + void *ptr2; + } twoPtrValue; + + struct { + struct Jim_Var *varPtr; + unsigned long callFrameId; + int global; + } varValue; + + struct { + struct Jim_Obj *nsObj; + struct Jim_Cmd *cmdPtr; + unsigned long procEpoch; + } cmdValue; + + struct { + struct Jim_Obj **ele; + int len; + int maxLen; + } listValue; + + struct { + int maxLength; + int charLength; + } strValue; + + struct { + unsigned long id; + struct Jim_Reference *refPtr; + } refValue; + + struct { + struct Jim_Obj *fileNameObj; + int lineNumber; + } sourceValue; + + struct { + struct Jim_Obj *varNameObjPtr; + struct Jim_Obj *indexObjPtr; + } dictSubstValue; + + struct { + void *compre; + unsigned flags; + } regexpValue; + struct { + int line; + int argc; + } scriptLineValue; + } internalRep; + struct Jim_Obj *prevObjPtr; + struct Jim_Obj *nextObjPtr; +} Jim_Obj; + + +#define Jim_IncrRefCount(objPtr) \ + ++(objPtr)->refCount +#define Jim_DecrRefCount(interp, objPtr) \ + if (--(objPtr)->refCount <= 0) Jim_FreeObj(interp, objPtr) +#define Jim_IsShared(objPtr) \ + ((objPtr)->refCount > 1) + +#define Jim_FreeNewObj Jim_FreeObj + + +#define Jim_FreeIntRep(i,o) \ + if ((o)->typePtr && (o)->typePtr->freeIntRepProc) \ + (o)->typePtr->freeIntRepProc(i, o) + + +#define Jim_GetIntRepPtr(o) (o)->internalRep.ptr + + +#define Jim_SetIntRepPtr(o, p) \ + (o)->internalRep.ptr = (p) + + +struct Jim_Interp; + +typedef void (Jim_FreeInternalRepProc)(struct Jim_Interp *interp, + struct Jim_Obj *objPtr); +typedef void (Jim_DupInternalRepProc)(struct Jim_Interp *interp, + struct Jim_Obj *srcPtr, Jim_Obj *dupPtr); +typedef void (Jim_UpdateStringProc)(struct Jim_Obj *objPtr); + +typedef struct Jim_ObjType { + const char *name; + Jim_FreeInternalRepProc *freeIntRepProc; + Jim_DupInternalRepProc *dupIntRepProc; + Jim_UpdateStringProc *updateStringProc; + int flags; +} Jim_ObjType; + + +#define JIM_TYPE_NONE 0 +#define JIM_TYPE_REFERENCES 1 + +#define JIM_PRIV_FLAG_SHIFT 20 + + + +typedef struct Jim_CallFrame { + unsigned long id; + int level; + struct Jim_HashTable vars; + struct Jim_HashTable *staticVars; + struct Jim_CallFrame *parent; + Jim_Obj *const *argv; + int argc; + Jim_Obj *procArgsObjPtr; + Jim_Obj *procBodyObjPtr; + struct Jim_CallFrame *next; + Jim_Obj *nsObj; + Jim_Obj *fileNameObj; + int line; + Jim_Stack *localCommands; + int tailcall; + struct Jim_Obj *tailcallObj; + struct Jim_Cmd *tailcallCmd; +} Jim_CallFrame; + +typedef struct Jim_Var { + Jim_Obj *objPtr; + struct Jim_CallFrame *linkFramePtr; +} Jim_Var; + + +typedef int Jim_CmdProc(struct Jim_Interp *interp, int argc, + Jim_Obj *const *argv); +typedef void Jim_DelCmdProc(struct Jim_Interp *interp, void *privData); + + + +typedef struct Jim_Cmd { + int inUse; + int isproc; + struct Jim_Cmd *prevCmd; + union { + struct { + + Jim_CmdProc *cmdProc; + Jim_DelCmdProc *delProc; + void *privData; + 
} native; + struct { + + Jim_Obj *argListObjPtr; + Jim_Obj *bodyObjPtr; + Jim_HashTable *staticVars; + int argListLen; + int reqArity; + int optArity; + int argsPos; + int upcall; + struct Jim_ProcArg { + Jim_Obj *nameObjPtr; + Jim_Obj *defaultObjPtr; + } *arglist; + Jim_Obj *nsObj; + } proc; + } u; +} Jim_Cmd; + + +typedef struct Jim_PrngState { + unsigned char sbox[256]; + unsigned int i, j; +} Jim_PrngState; + +typedef struct Jim_Interp { + Jim_Obj *result; + int errorLine; + Jim_Obj *errorFileNameObj; + int addStackTrace; + int maxCallFrameDepth; + int maxEvalDepth; + int evalDepth; + int returnCode; + int returnLevel; + int exitCode; + long id; + int signal_level; + jim_wide sigmask; + int (*signal_set_result)(struct Jim_Interp *interp, jim_wide sigmask); + Jim_CallFrame *framePtr; + Jim_CallFrame *topFramePtr; + struct Jim_HashTable commands; + unsigned long procEpoch; /* Incremented every time the result + of procedures names lookup caching + may no longer be valid. */ + unsigned long callFrameEpoch; /* Incremented every time a new + callframe is created. This id is used for the + 'ID' field contained in the Jim_CallFrame + structure. */ + int local; + Jim_Obj *liveList; + Jim_Obj *freeList; + Jim_Obj *currentScriptObj; + Jim_Obj *nullScriptObj; + Jim_Obj *emptyObj; + Jim_Obj *trueObj; + Jim_Obj *falseObj; + unsigned long referenceNextId; + struct Jim_HashTable references; + unsigned long lastCollectId; /* reference max Id of the last GC + execution. It's set to -1 while the collection + is running as sentinel to avoid to recursive + calls via the [collect] command inside + finalizers. */ + time_t lastCollectTime; + Jim_Obj *stackTrace; + Jim_Obj *errorProc; + Jim_Obj *unknown; + int unknown_called; + int errorFlag; + void *cmdPrivData; /* Used to pass the private data pointer to + a command. It is set to what the user specified + via Jim_CreateCommand(). */ + + struct Jim_CallFrame *freeFramesList; + struct Jim_HashTable assocData; + Jim_PrngState *prngState; + struct Jim_HashTable packages; + Jim_Stack *loadHandles; +} Jim_Interp; + +#define Jim_InterpIncrProcEpoch(i) (i)->procEpoch++ +#define Jim_SetResultString(i,s,l) Jim_SetResult(i, Jim_NewStringObj(i,s,l)) +#define Jim_SetResultInt(i,intval) Jim_SetResult(i, Jim_NewIntObj(i,intval)) + +#define Jim_SetResultBool(i,b) Jim_SetResultInt(i, b) +#define Jim_SetEmptyResult(i) Jim_SetResult(i, (i)->emptyObj) +#define Jim_GetResult(i) ((i)->result) +#define Jim_CmdPrivData(i) ((i)->cmdPrivData) + +#define Jim_SetResult(i,o) do { \ + Jim_Obj *_resultObjPtr_ = (o); \ + Jim_IncrRefCount(_resultObjPtr_); \ + Jim_DecrRefCount(i,(i)->result); \ + (i)->result = _resultObjPtr_; \ +} while(0) + + +#define Jim_GetId(i) (++(i)->id) + + +#define JIM_REFERENCE_TAGLEN 7 /* The tag is fixed-length, because the reference + string representation must be fixed length. 
*/ +typedef struct Jim_Reference { + Jim_Obj *objPtr; + Jim_Obj *finalizerCmdNamePtr; + char tag[JIM_REFERENCE_TAGLEN+1]; +} Jim_Reference; + + +#define Jim_NewEmptyStringObj(i) Jim_NewStringObj(i, "", 0) +#define Jim_FreeHashTableIterator(iter) Jim_Free(iter) + +#define JIM_EXPORT + + +JIM_EXPORT void *Jim_Alloc (int size); +JIM_EXPORT void *Jim_Realloc(void *ptr, int size); +JIM_EXPORT void Jim_Free (void *ptr); +JIM_EXPORT char * Jim_StrDup (const char *s); +JIM_EXPORT char *Jim_StrDupLen(const char *s, int l); + + +JIM_EXPORT char **Jim_GetEnviron(void); +JIM_EXPORT void Jim_SetEnviron(char **env); +JIM_EXPORT int Jim_MakeTempFile(Jim_Interp *interp, const char *template); + + +JIM_EXPORT int Jim_Eval(Jim_Interp *interp, const char *script); + + +JIM_EXPORT int Jim_EvalSource(Jim_Interp *interp, const char *filename, int lineno, const char *script); + +#define Jim_Eval_Named(I, S, F, L) Jim_EvalSource((I), (F), (L), (S)) + +JIM_EXPORT int Jim_EvalGlobal(Jim_Interp *interp, const char *script); +JIM_EXPORT int Jim_EvalFile(Jim_Interp *interp, const char *filename); +JIM_EXPORT int Jim_EvalFileGlobal(Jim_Interp *interp, const char *filename); +JIM_EXPORT int Jim_EvalObj (Jim_Interp *interp, Jim_Obj *scriptObjPtr); +JIM_EXPORT int Jim_EvalObjVector (Jim_Interp *interp, int objc, + Jim_Obj *const *objv); +JIM_EXPORT int Jim_EvalObjList(Jim_Interp *interp, Jim_Obj *listObj); +JIM_EXPORT int Jim_EvalObjPrefix(Jim_Interp *interp, Jim_Obj *prefix, + int objc, Jim_Obj *const *objv); +#define Jim_EvalPrefix(i, p, oc, ov) Jim_EvalObjPrefix((i), Jim_NewStringObj((i), (p), -1), (oc), (ov)) +JIM_EXPORT int Jim_EvalNamespace(Jim_Interp *interp, Jim_Obj *scriptObj, Jim_Obj *nsObj); +JIM_EXPORT int Jim_SubstObj (Jim_Interp *interp, Jim_Obj *substObjPtr, + Jim_Obj **resObjPtrPtr, int flags); + + +JIM_EXPORT void Jim_InitStack(Jim_Stack *stack); +JIM_EXPORT void Jim_FreeStack(Jim_Stack *stack); +JIM_EXPORT int Jim_StackLen(Jim_Stack *stack); +JIM_EXPORT void Jim_StackPush(Jim_Stack *stack, void *element); +JIM_EXPORT void * Jim_StackPop(Jim_Stack *stack); +JIM_EXPORT void * Jim_StackPeek(Jim_Stack *stack); +JIM_EXPORT void Jim_FreeStackElements(Jim_Stack *stack, void (*freeFunc)(void *ptr)); + + +JIM_EXPORT int Jim_InitHashTable (Jim_HashTable *ht, + const Jim_HashTableType *type, void *privdata); +JIM_EXPORT void Jim_ExpandHashTable (Jim_HashTable *ht, + unsigned int size); +JIM_EXPORT int Jim_AddHashEntry (Jim_HashTable *ht, const void *key, + void *val); +JIM_EXPORT int Jim_ReplaceHashEntry (Jim_HashTable *ht, + const void *key, void *val); +JIM_EXPORT int Jim_DeleteHashEntry (Jim_HashTable *ht, + const void *key); +JIM_EXPORT int Jim_FreeHashTable (Jim_HashTable *ht); +JIM_EXPORT Jim_HashEntry * Jim_FindHashEntry (Jim_HashTable *ht, + const void *key); +JIM_EXPORT void Jim_ResizeHashTable (Jim_HashTable *ht); +JIM_EXPORT Jim_HashTableIterator *Jim_GetHashTableIterator + (Jim_HashTable *ht); +JIM_EXPORT Jim_HashEntry * Jim_NextHashEntry + (Jim_HashTableIterator *iter); + + +JIM_EXPORT Jim_Obj * Jim_NewObj (Jim_Interp *interp); +JIM_EXPORT void Jim_FreeObj (Jim_Interp *interp, Jim_Obj *objPtr); +JIM_EXPORT void Jim_InvalidateStringRep (Jim_Obj *objPtr); +JIM_EXPORT Jim_Obj * Jim_DuplicateObj (Jim_Interp *interp, + Jim_Obj *objPtr); +JIM_EXPORT const char * Jim_GetString(Jim_Obj *objPtr, + int *lenPtr); +JIM_EXPORT const char *Jim_String(Jim_Obj *objPtr); +JIM_EXPORT int Jim_Length(Jim_Obj *objPtr); + + +JIM_EXPORT Jim_Obj * Jim_NewStringObj (Jim_Interp *interp, + const char *s, int len); +JIM_EXPORT 
Jim_Obj *Jim_NewStringObjUtf8(Jim_Interp *interp, + const char *s, int charlen); +JIM_EXPORT Jim_Obj * Jim_NewStringObjNoAlloc (Jim_Interp *interp, + char *s, int len); +JIM_EXPORT void Jim_AppendString (Jim_Interp *interp, Jim_Obj *objPtr, + const char *str, int len); +JIM_EXPORT void Jim_AppendObj (Jim_Interp *interp, Jim_Obj *objPtr, + Jim_Obj *appendObjPtr); +JIM_EXPORT void Jim_AppendStrings (Jim_Interp *interp, + Jim_Obj *objPtr, ...); +JIM_EXPORT int Jim_StringEqObj(Jim_Obj *aObjPtr, Jim_Obj *bObjPtr); +JIM_EXPORT int Jim_StringMatchObj (Jim_Interp *interp, Jim_Obj *patternObjPtr, + Jim_Obj *objPtr, int nocase); +JIM_EXPORT Jim_Obj * Jim_StringRangeObj (Jim_Interp *interp, + Jim_Obj *strObjPtr, Jim_Obj *firstObjPtr, + Jim_Obj *lastObjPtr); +JIM_EXPORT Jim_Obj * Jim_FormatString (Jim_Interp *interp, + Jim_Obj *fmtObjPtr, int objc, Jim_Obj *const *objv); +JIM_EXPORT Jim_Obj * Jim_ScanString (Jim_Interp *interp, Jim_Obj *strObjPtr, + Jim_Obj *fmtObjPtr, int flags); +JIM_EXPORT int Jim_CompareStringImmediate (Jim_Interp *interp, + Jim_Obj *objPtr, const char *str); +JIM_EXPORT int Jim_StringCompareObj(Jim_Interp *interp, Jim_Obj *firstObjPtr, + Jim_Obj *secondObjPtr, int nocase); +JIM_EXPORT int Jim_StringCompareLenObj(Jim_Interp *interp, Jim_Obj *firstObjPtr, + Jim_Obj *secondObjPtr, int nocase); +JIM_EXPORT int Jim_Utf8Length(Jim_Interp *interp, Jim_Obj *objPtr); + + +JIM_EXPORT Jim_Obj * Jim_NewReference (Jim_Interp *interp, + Jim_Obj *objPtr, Jim_Obj *tagPtr, Jim_Obj *cmdNamePtr); +JIM_EXPORT Jim_Reference * Jim_GetReference (Jim_Interp *interp, + Jim_Obj *objPtr); +JIM_EXPORT int Jim_SetFinalizer (Jim_Interp *interp, Jim_Obj *objPtr, Jim_Obj *cmdNamePtr); +JIM_EXPORT int Jim_GetFinalizer (Jim_Interp *interp, Jim_Obj *objPtr, Jim_Obj **cmdNamePtrPtr); + + +JIM_EXPORT Jim_Interp * Jim_CreateInterp (void); +JIM_EXPORT void Jim_FreeInterp (Jim_Interp *i); +JIM_EXPORT int Jim_GetExitCode (Jim_Interp *interp); +JIM_EXPORT const char *Jim_ReturnCode(int code); +JIM_EXPORT void Jim_SetResultFormatted(Jim_Interp *interp, const char *format, ...); + + +JIM_EXPORT void Jim_RegisterCoreCommands (Jim_Interp *interp); +JIM_EXPORT int Jim_CreateCommand (Jim_Interp *interp, + const char *cmdName, Jim_CmdProc cmdProc, void *privData, + Jim_DelCmdProc delProc); +JIM_EXPORT int Jim_DeleteCommand (Jim_Interp *interp, + const char *cmdName); +JIM_EXPORT int Jim_RenameCommand (Jim_Interp *interp, + const char *oldName, const char *newName); +JIM_EXPORT Jim_Cmd * Jim_GetCommand (Jim_Interp *interp, + Jim_Obj *objPtr, int flags); +JIM_EXPORT int Jim_SetVariable (Jim_Interp *interp, + Jim_Obj *nameObjPtr, Jim_Obj *valObjPtr); +JIM_EXPORT int Jim_SetVariableStr (Jim_Interp *interp, + const char *name, Jim_Obj *objPtr); +JIM_EXPORT int Jim_SetGlobalVariableStr (Jim_Interp *interp, + const char *name, Jim_Obj *objPtr); +JIM_EXPORT int Jim_SetVariableStrWithStr (Jim_Interp *interp, + const char *name, const char *val); +JIM_EXPORT int Jim_SetVariableLink (Jim_Interp *interp, + Jim_Obj *nameObjPtr, Jim_Obj *targetNameObjPtr, + Jim_CallFrame *targetCallFrame); +JIM_EXPORT Jim_Obj * Jim_MakeGlobalNamespaceName(Jim_Interp *interp, + Jim_Obj *nameObjPtr); +JIM_EXPORT Jim_Obj * Jim_GetVariable (Jim_Interp *interp, + Jim_Obj *nameObjPtr, int flags); +JIM_EXPORT Jim_Obj * Jim_GetGlobalVariable (Jim_Interp *interp, + Jim_Obj *nameObjPtr, int flags); +JIM_EXPORT Jim_Obj * Jim_GetVariableStr (Jim_Interp *interp, + const char *name, int flags); +JIM_EXPORT Jim_Obj * Jim_GetGlobalVariableStr (Jim_Interp *interp, + const 
char *name, int flags); +JIM_EXPORT int Jim_UnsetVariable (Jim_Interp *interp, + Jim_Obj *nameObjPtr, int flags); + + +JIM_EXPORT Jim_CallFrame *Jim_GetCallFrameByLevel(Jim_Interp *interp, + Jim_Obj *levelObjPtr); + + +JIM_EXPORT int Jim_Collect (Jim_Interp *interp); +JIM_EXPORT void Jim_CollectIfNeeded (Jim_Interp *interp); + + +JIM_EXPORT int Jim_GetIndex (Jim_Interp *interp, Jim_Obj *objPtr, + int *indexPtr); + + +JIM_EXPORT Jim_Obj * Jim_NewListObj (Jim_Interp *interp, + Jim_Obj *const *elements, int len); +JIM_EXPORT void Jim_ListInsertElements (Jim_Interp *interp, + Jim_Obj *listPtr, int listindex, int objc, Jim_Obj *const *objVec); +JIM_EXPORT void Jim_ListAppendElement (Jim_Interp *interp, + Jim_Obj *listPtr, Jim_Obj *objPtr); +JIM_EXPORT void Jim_ListAppendList (Jim_Interp *interp, + Jim_Obj *listPtr, Jim_Obj *appendListPtr); +JIM_EXPORT int Jim_ListLength (Jim_Interp *interp, Jim_Obj *objPtr); +JIM_EXPORT int Jim_ListIndex (Jim_Interp *interp, Jim_Obj *listPrt, + int listindex, Jim_Obj **objPtrPtr, int seterr); +JIM_EXPORT Jim_Obj *Jim_ListGetIndex(Jim_Interp *interp, Jim_Obj *listPtr, int idx); +JIM_EXPORT int Jim_SetListIndex (Jim_Interp *interp, + Jim_Obj *varNamePtr, Jim_Obj *const *indexv, int indexc, + Jim_Obj *newObjPtr); +JIM_EXPORT Jim_Obj * Jim_ConcatObj (Jim_Interp *interp, int objc, + Jim_Obj *const *objv); +JIM_EXPORT Jim_Obj *Jim_ListJoin(Jim_Interp *interp, + Jim_Obj *listObjPtr, const char *joinStr, int joinStrLen); + + +JIM_EXPORT Jim_Obj * Jim_NewDictObj (Jim_Interp *interp, + Jim_Obj *const *elements, int len); +JIM_EXPORT int Jim_DictKey (Jim_Interp *interp, Jim_Obj *dictPtr, + Jim_Obj *keyPtr, Jim_Obj **objPtrPtr, int flags); +JIM_EXPORT int Jim_DictKeysVector (Jim_Interp *interp, + Jim_Obj *dictPtr, Jim_Obj *const *keyv, int keyc, + Jim_Obj **objPtrPtr, int flags); +JIM_EXPORT int Jim_SetDictKeysVector (Jim_Interp *interp, + Jim_Obj *varNamePtr, Jim_Obj *const *keyv, int keyc, + Jim_Obj *newObjPtr, int flags); +JIM_EXPORT int Jim_DictPairs(Jim_Interp *interp, + Jim_Obj *dictPtr, Jim_Obj ***objPtrPtr, int *len); +JIM_EXPORT int Jim_DictAddElement(Jim_Interp *interp, Jim_Obj *objPtr, + Jim_Obj *keyObjPtr, Jim_Obj *valueObjPtr); +JIM_EXPORT int Jim_DictKeys(Jim_Interp *interp, Jim_Obj *objPtr, Jim_Obj *patternObj); +JIM_EXPORT int Jim_DictValues(Jim_Interp *interp, Jim_Obj *dictObjPtr, Jim_Obj *patternObjPtr); +JIM_EXPORT int Jim_DictSize(Jim_Interp *interp, Jim_Obj *objPtr); +JIM_EXPORT int Jim_DictInfo(Jim_Interp *interp, Jim_Obj *objPtr); + + +JIM_EXPORT int Jim_GetReturnCode (Jim_Interp *interp, Jim_Obj *objPtr, + int *intPtr); + + +JIM_EXPORT int Jim_EvalExpression (Jim_Interp *interp, + Jim_Obj *exprObjPtr, Jim_Obj **exprResultPtrPtr); +JIM_EXPORT int Jim_GetBoolFromExpr (Jim_Interp *interp, + Jim_Obj *exprObjPtr, int *boolPtr); + + +JIM_EXPORT int Jim_GetWide (Jim_Interp *interp, Jim_Obj *objPtr, + jim_wide *widePtr); +JIM_EXPORT int Jim_GetLong (Jim_Interp *interp, Jim_Obj *objPtr, + long *longPtr); +#define Jim_NewWideObj Jim_NewIntObj +JIM_EXPORT Jim_Obj * Jim_NewIntObj (Jim_Interp *interp, + jim_wide wideValue); + + +JIM_EXPORT int Jim_GetDouble(Jim_Interp *interp, Jim_Obj *objPtr, + double *doublePtr); +JIM_EXPORT void Jim_SetDouble(Jim_Interp *interp, Jim_Obj *objPtr, + double doubleValue); +JIM_EXPORT Jim_Obj * Jim_NewDoubleObj(Jim_Interp *interp, double doubleValue); + + +JIM_EXPORT void Jim_WrongNumArgs (Jim_Interp *interp, int argc, + Jim_Obj *const *argv, const char *msg); +JIM_EXPORT int Jim_GetEnum (Jim_Interp *interp, Jim_Obj *objPtr, + 
const char * const *tablePtr, int *indexPtr, const char *name, int flags); +JIM_EXPORT int Jim_ScriptIsComplete (const char *s, int len, + char *stateCharPtr); +JIM_EXPORT int Jim_FindByName(const char *name, const char * const array[], size_t len); + + +typedef void (Jim_InterpDeleteProc)(Jim_Interp *interp, void *data); +JIM_EXPORT void * Jim_GetAssocData(Jim_Interp *interp, const char *key); +JIM_EXPORT int Jim_SetAssocData(Jim_Interp *interp, const char *key, + Jim_InterpDeleteProc *delProc, void *data); +JIM_EXPORT int Jim_DeleteAssocData(Jim_Interp *interp, const char *key); + + + +JIM_EXPORT int Jim_PackageProvide (Jim_Interp *interp, + const char *name, const char *ver, int flags); +JIM_EXPORT int Jim_PackageRequire (Jim_Interp *interp, + const char *name, int flags); + + +JIM_EXPORT void Jim_MakeErrorMessage (Jim_Interp *interp); + + +JIM_EXPORT int Jim_InteractivePrompt (Jim_Interp *interp); +JIM_EXPORT void Jim_HistoryLoad(const char *filename); +JIM_EXPORT void Jim_HistorySave(const char *filename); +JIM_EXPORT char *Jim_HistoryGetline(const char *prompt); +JIM_EXPORT void Jim_HistoryAdd(const char *line); +JIM_EXPORT void Jim_HistoryShow(void); + + +JIM_EXPORT int Jim_InitStaticExtensions(Jim_Interp *interp); +JIM_EXPORT int Jim_StringToWide(const char *str, jim_wide *widePtr, int base); +JIM_EXPORT int Jim_IsBigEndian(void); + +#define Jim_CheckSignal(i) ((i)->signal_level && (i)->sigmask) + + +JIM_EXPORT int Jim_LoadLibrary(Jim_Interp *interp, const char *pathName); +JIM_EXPORT void Jim_FreeLoadHandles(Jim_Interp *interp); + + +JIM_EXPORT FILE *Jim_AioFilehandle(Jim_Interp *interp, Jim_Obj *command); + + +JIM_EXPORT int Jim_IsDict(Jim_Obj *objPtr); +JIM_EXPORT int Jim_IsList(Jim_Obj *objPtr); + +#ifdef __cplusplus +} +#endif + +#endif + +#ifndef JIM_SUBCMD_H +#define JIM_SUBCMD_H + + +#ifdef __cplusplus +extern "C" { +#endif + + +#define JIM_MODFLAG_HIDDEN 0x0001 +#define JIM_MODFLAG_FULLARGV 0x0002 + + + +typedef int jim_subcmd_function(Jim_Interp *interp, int argc, Jim_Obj *const *argv); + +typedef struct { + const char *cmd; + const char *args; + jim_subcmd_function *function; + short minargs; + short maxargs; + unsigned short flags; +} jim_subcmd_type; + +const jim_subcmd_type * +Jim_ParseSubCmd(Jim_Interp *interp, const jim_subcmd_type *command_table, int argc, Jim_Obj *const *argv); + +int Jim_SubCmdProc(Jim_Interp *interp, int argc, Jim_Obj *const *argv); + +int Jim_CallSubCmd(Jim_Interp *interp, const jim_subcmd_type *ct, int argc, Jim_Obj *const *argv); + +#ifdef __cplusplus +} +#endif + +#endif +#ifndef JIMREGEXP_H +#define JIMREGEXP_H + + +#ifdef __cplusplus +extern "C" { +#endif + +#include + +typedef struct { + int rm_so; + int rm_eo; +} regmatch_t; + + +typedef struct regexp { + + int re_nsub; + + + int cflags; + int err; + int regstart; + int reganch; + int regmust; + int regmlen; + int *program; + + + const char *regparse; + int p; + int proglen; + + + int eflags; + const char *start; + const char *reginput; + const char *regbol; + + + regmatch_t *pmatch; + int nmatch; +} regexp; + +typedef regexp regex_t; + +#define REG_EXTENDED 0 +#define REG_NEWLINE 1 +#define REG_ICASE 2 + +#define REG_NOTBOL 16 + +enum { + REG_NOERROR, + REG_NOMATCH, + REG_BADPAT, + REG_ERR_NULL_ARGUMENT, + REG_ERR_UNKNOWN, + REG_ERR_TOO_BIG, + REG_ERR_NOMEM, + REG_ERR_TOO_MANY_PAREN, + REG_ERR_UNMATCHED_PAREN, + REG_ERR_UNMATCHED_BRACES, + REG_ERR_BAD_COUNT, + REG_ERR_JUNK_ON_END, + REG_ERR_OPERAND_COULD_BE_EMPTY, + REG_ERR_NESTED_COUNT, + REG_ERR_INTERNAL, + 
REG_ERR_COUNT_FOLLOWS_NOTHING, + REG_ERR_TRAILING_BACKSLASH, + REG_ERR_CORRUPTED, + REG_ERR_NULL_CHAR, + REG_ERR_NUM +}; + +int regcomp(regex_t *preg, const char *regex, int cflags); +int regexec(regex_t *preg, const char *string, size_t nmatch, regmatch_t pmatch[], int eflags); +size_t regerror(int errcode, const regex_t *preg, char *errbuf, size_t errbuf_size); +void regfree(regex_t *preg); + +#ifdef __cplusplus +} +#endif + +#endif +int Jim_bootstrapInit(Jim_Interp *interp) +{ + if (Jim_PackageProvide(interp, "bootstrap", "1.0", JIM_ERRMSG)) + return JIM_ERR; + + return Jim_EvalSource(interp, "bootstrap.tcl", 1, +"\n" +"\n" +"proc package {args} {}\n" +); +} +int Jim_initjimshInit(Jim_Interp *interp) +{ + if (Jim_PackageProvide(interp, "initjimsh", "1.0", JIM_ERRMSG)) + return JIM_ERR; + + return Jim_EvalSource(interp, "initjimsh.tcl", 1, +"\n" +"\n" +"\n" +"proc _jimsh_init {} {\n" +" rename _jimsh_init {}\n" +" global jim::exe jim::argv0 tcl_interactive auto_path tcl_platform\n" +"\n" +"\n" +" if {[exists jim::argv0]} {\n" +" if {[string match \"*/*\" $jim::argv0]} {\n" +" set jim::exe [file join [pwd] $jim::argv0]\n" +" } else {\n" +" foreach path [split [env PATH \"\"] $tcl_platform(pathSeparator)] {\n" +" set exec [file join [pwd] [string map {\\\\ /} $path] $jim::argv0]\n" +" if {[file executable $exec]} {\n" +" set jim::exe $exec\n" +" break\n" +" }\n" +" }\n" +" }\n" +" }\n" +"\n" +"\n" +" lappend p {*}[split [env JIMLIB {}] $tcl_platform(pathSeparator)]\n" +" if {[exists jim::exe]} {\n" +" lappend p [file dirname $jim::exe]\n" +" }\n" +" lappend p {*}$auto_path\n" +" set auto_path $p\n" +"\n" +" if {$tcl_interactive && [env HOME {}] ne \"\"} {\n" +" foreach src {.jimrc jimrc.tcl} {\n" +" if {[file exists [env HOME]/$src]} {\n" +" uplevel #0 source [env HOME]/$src\n" +" break\n" +" }\n" +" }\n" +" }\n" +" return \"\"\n" +"}\n" +"\n" +"if {$tcl_platform(platform) eq \"windows\"} {\n" +" set jim::argv0 [string map {\\\\ /} $jim::argv0]\n" +"}\n" +"\n" +"_jimsh_init\n" +); +} +int Jim_globInit(Jim_Interp *interp) +{ + if (Jim_PackageProvide(interp, "glob", "1.0", JIM_ERRMSG)) + return JIM_ERR; + + return Jim_EvalSource(interp, "glob.tcl", 1, +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"package require readdir\n" +"\n" +"\n" +"proc glob.globdir {dir pattern} {\n" +" if {[file exists $dir/$pattern]} {\n" +"\n" +" return [list $pattern]\n" +" }\n" +"\n" +" set result {}\n" +" set files [readdir $dir]\n" +" lappend files . 
..\n" +"\n" +" foreach name $files {\n" +" if {[string match $pattern $name]} {\n" +"\n" +" if {[string index $name 0] eq \".\" && [string index $pattern 0] ne \".\"} {\n" +" continue\n" +" }\n" +" lappend result $name\n" +" }\n" +" }\n" +"\n" +" return $result\n" +"}\n" +"\n" +"\n" +"\n" +"\n" +"proc glob.explode {pattern} {\n" +" set oldexp {}\n" +" set newexp {\"\"}\n" +"\n" +" while 1 {\n" +" set oldexp $newexp\n" +" set newexp {}\n" +" set ob [string first \\{ $pattern]\n" +" set cb [string first \\} $pattern]\n" +"\n" +" if {$ob < $cb && $ob != -1} {\n" +" set mid [string range $pattern 0 $ob-1]\n" +" set subexp [lassign [glob.explode [string range $pattern $ob+1 end]] pattern]\n" +" if {$pattern eq \"\"} {\n" +" error \"unmatched open brace in glob pattern\"\n" +" }\n" +" set pattern [string range $pattern 1 end]\n" +"\n" +" foreach subs $subexp {\n" +" foreach sub [split $subs ,] {\n" +" foreach old $oldexp {\n" +" lappend newexp $old$mid$sub\n" +" }\n" +" }\n" +" }\n" +" } elseif {$cb != -1} {\n" +" set suf [string range $pattern 0 $cb-1]\n" +" set rest [string range $pattern $cb end]\n" +" break\n" +" } else {\n" +" set suf $pattern\n" +" set rest \"\"\n" +" break\n" +" }\n" +" }\n" +"\n" +" foreach old $oldexp {\n" +" lappend newexp $old$suf\n" +" }\n" +" list $rest {*}$newexp\n" +"}\n" +"\n" +"\n" +"\n" +"proc glob.glob {base pattern} {\n" +" set dir [file dirname $pattern]\n" +" if {$pattern eq $dir || $pattern eq \"\"} {\n" +" return [list [file join $base $dir] $pattern]\n" +" } elseif {$pattern eq [file tail $pattern]} {\n" +" set dir \"\"\n" +" }\n" +"\n" +"\n" +" set dirlist [glob.glob $base $dir]\n" +" set pattern [file tail $pattern]\n" +"\n" +"\n" +" set result {}\n" +" foreach {realdir dir} $dirlist {\n" +" if {![file isdir $realdir]} {\n" +" continue\n" +" }\n" +" if {[string index $dir end] ne \"/\" && $dir ne \"\"} {\n" +" append dir /\n" +" }\n" +" foreach name [glob.globdir $realdir $pattern] {\n" +" lappend result [file join $realdir $name] $dir$name\n" +" }\n" +" }\n" +" return $result\n" +"}\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"proc glob {args} {\n" +" set nocomplain 0\n" +" set base \"\"\n" +" set tails 0\n" +"\n" +" set n 0\n" +" foreach arg $args {\n" +" if {[info exists param]} {\n" +" set $param $arg\n" +" unset param\n" +" incr n\n" +" continue\n" +" }\n" +" switch -glob -- $arg {\n" +" -d* {\n" +" set switch $arg\n" +" set param base\n" +" }\n" +" -n* {\n" +" set nocomplain 1\n" +" }\n" +" -ta* {\n" +" set tails 1\n" +" }\n" +" -- {\n" +" incr n\n" +" break\n" +" }\n" +" -* {\n" +" return -code error \"bad option \\\"$arg\\\": must be -directory, -nocomplain, -tails, or --\"\n" +" }\n" +" * {\n" +" break\n" +" }\n" +" }\n" +" incr n\n" +" }\n" +" if {[info exists param]} {\n" +" return -code error \"missing argument to \\\"$switch\\\"\"\n" +" }\n" +" if {[llength $args] <= $n} {\n" +" return -code error \"wrong # args: should be \\\"glob ?options? 
pattern ?pattern ...?\\\"\"\n" +" }\n" +"\n" +" set args [lrange $args $n end]\n" +"\n" +" set result {}\n" +" foreach pattern $args {\n" +" set escpattern [string map {\n" +" \\\\\\\\ \\x01 \\\\\\{ \\x02 \\\\\\} \\x03 \\\\, \\x04\n" +" } $pattern]\n" +" set patexps [lassign [glob.explode $escpattern] rest]\n" +" if {$rest ne \"\"} {\n" +" return -code error \"unmatched close brace in glob pattern\"\n" +" }\n" +" foreach patexp $patexps {\n" +" set patexp [string map {\n" +" \\x01 \\\\\\\\ \\x02 \\{ \\x03 \\} \\x04 ,\n" +" } $patexp]\n" +" foreach {realname name} [glob.glob $base $patexp] {\n" +" incr n\n" +" if {$tails} {\n" +" lappend result $name\n" +" } else {\n" +" lappend result [file join $base $name]\n" +" }\n" +" }\n" +" }\n" +" }\n" +"\n" +" if {!$nocomplain && [llength $result] == 0} {\n" +" set s $(([llength $args] > 1) ? \"s\" : \"\")\n" +" return -code error \"no files matched glob pattern$s \\\"[join $args]\\\"\"\n" +" }\n" +"\n" +" return $result\n" +"}\n" +); +} +int Jim_stdlibInit(Jim_Interp *interp) +{ + if (Jim_PackageProvide(interp, "stdlib", "1.0", JIM_ERRMSG)) + return JIM_ERR; + + return Jim_EvalSource(interp, "stdlib.tcl", 1, +"\n" +"\n" +"\n" +"proc lambda {arglist args} {\n" +" tailcall proc [ref {} function lambda.finalizer] $arglist {*}$args\n" +"}\n" +"\n" +"proc lambda.finalizer {name val} {\n" +" rename $name {}\n" +"}\n" +"\n" +"\n" +"proc curry {args} {\n" +" alias [ref {} function lambda.finalizer] {*}$args\n" +"}\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"proc function {value} {\n" +" return $value\n" +"}\n" +"\n" +"\n" +"\n" +"\n" +"proc stacktrace {{skip 0}} {\n" +" set trace {}\n" +" incr skip\n" +" foreach level [range $skip [info level]] {\n" +" lappend trace {*}[info frame -$level]\n" +" }\n" +" return $trace\n" +"}\n" +"\n" +"\n" +"proc stackdump {stacktrace} {\n" +" set lines {}\n" +" foreach {l f p} [lreverse $stacktrace] {\n" +" set line {}\n" +" if {$p ne \"\"} {\n" +" append line \"in procedure '$p' \"\n" +" if {$f ne \"\"} {\n" +" append line \"called \"\n" +" }\n" +" }\n" +" if {$f ne \"\"} {\n" +" append line \"at file \\\"$f\\\", line $l\"\n" +" }\n" +" if {$line ne \"\"} {\n" +" lappend lines $line\n" +" }\n" +" }\n" +" join $lines \\n\n" +"}\n" +"\n" +"\n" +"\n" +"proc errorInfo {msg {stacktrace \"\"}} {\n" +" if {$stacktrace eq \"\"} {\n" +"\n" +" set stacktrace [info stacktrace]\n" +"\n" +" lappend stacktrace {*}[stacktrace 1]\n" +" }\n" +" lassign $stacktrace p f l\n" +" if {$f ne \"\"} {\n" +" set result \"$f:$l: Error: \"\n" +" }\n" +" append result \"$msg\\n\"\n" +" append result [stackdump $stacktrace]\n" +"\n" +"\n" +" string trim $result\n" +"}\n" +"\n" +"\n" +"\n" +"proc {info nameofexecutable} {} {\n" +" if {[exists ::jim::exe]} {\n" +" return $::jim::exe\n" +" }\n" +"}\n" +"\n" +"\n" +"proc {dict with} {&dictVar {args key} script} {\n" +" set keys {}\n" +" foreach {n v} [dict get $dictVar {*}$key] {\n" +" upvar $n var_$n\n" +" set var_$n $v\n" +" lappend keys $n\n" +" }\n" +" catch {uplevel 1 $script} msg opts\n" +" if {[info exists dictVar] && ([llength $key] == 0 || [dict exists $dictVar {*}$key])} {\n" +" foreach n $keys {\n" +" if {[info exists var_$n]} {\n" +" dict set dictVar {*}$key $n [set var_$n]\n" +" } else {\n" +" dict unset dictVar {*}$key $n\n" +" }\n" +" }\n" +" }\n" +" return {*}$opts $msg\n" +"}\n" +"\n" +"\n" +"proc {dict update} {&varName args script} {\n" +" set keys {}\n" +" foreach {n v} $args {\n" +" upvar $v var_$v\n" +" if {[dict exists $varName $n]} {\n" +" set var_$v [dict get 
$varName $n]\n" +" }\n" +" }\n" +" catch {uplevel 1 $script} msg opts\n" +" if {[info exists varName]} {\n" +" foreach {n v} $args {\n" +" if {[info exists var_$v]} {\n" +" dict set varName $n [set var_$v]\n" +" } else {\n" +" dict unset varName $n\n" +" }\n" +" }\n" +" }\n" +" return {*}$opts $msg\n" +"}\n" +"\n" +"\n" +"\n" +"proc {dict merge} {dict args} {\n" +" foreach d $args {\n" +"\n" +" dict size $d\n" +" foreach {k v} $d {\n" +" dict set dict $k $v\n" +" }\n" +" }\n" +" return $dict\n" +"}\n" +"\n" +"proc {dict replace} {dictionary {args {key value}}} {\n" +" if {[llength ${key value}] % 2} {\n" +" tailcall {dict replace}\n" +" }\n" +" tailcall dict merge $dictionary ${key value}\n" +"}\n" +"\n" +"\n" +"proc {dict lappend} {varName key {args value}} {\n" +" upvar $varName dict\n" +" if {[exists dict] && [dict exists $dict $key]} {\n" +" set list [dict get $dict $key]\n" +" }\n" +" lappend list {*}$value\n" +" dict set dict $key $list\n" +"}\n" +"\n" +"\n" +"proc {dict append} {varName key {args value}} {\n" +" upvar $varName dict\n" +" if {[exists dict] && [dict exists $dict $key]} {\n" +" set str [dict get $dict $key]\n" +" }\n" +" append str {*}$value\n" +" dict set dict $key $str\n" +"}\n" +"\n" +"\n" +"proc {dict incr} {varName key {increment 1}} {\n" +" upvar $varName dict\n" +" if {[exists dict] && [dict exists $dict $key]} {\n" +" set value [dict get $dict $key]\n" +" }\n" +" incr value $increment\n" +" dict set dict $key $value\n" +"}\n" +"\n" +"\n" +"proc {dict remove} {dictionary {args key}} {\n" +" foreach k $key {\n" +" dict unset dictionary $k\n" +" }\n" +" return $dictionary\n" +"}\n" +"\n" +"\n" +"proc {dict values} {dictionary {pattern *}} {\n" +" dict keys [lreverse $dictionary] $pattern\n" +"}\n" +"\n" +"\n" +"proc {dict for} {vars dictionary script} {\n" +" if {[llength $vars] != 2} {\n" +" return -code error \"must have exactly two variable names\"\n" +" }\n" +" dict size $dictionary\n" +" tailcall foreach $vars $dictionary $script\n" +"}\n" +); +} +int Jim_tclcompatInit(Jim_Interp *interp) +{ + if (Jim_PackageProvide(interp, "tclcompat", "1.0", JIM_ERRMSG)) + return JIM_ERR; + + return Jim_EvalSource(interp, "tclcompat.tcl", 1, +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"set env [env]\n" +"\n" +"\n" +"if {[info commands stdout] ne \"\"} {\n" +"\n" +" foreach p {gets flush close eof seek tell} {\n" +" proc $p {chan args} {p} {\n" +" tailcall $chan $p {*}$args\n" +" }\n" +" }\n" +" unset p\n" +"\n" +"\n" +"\n" +" proc puts {{-nonewline {}} {chan stdout} msg} {\n" +" if {${-nonewline} ni {-nonewline {}}} {\n" +" tailcall ${-nonewline} puts $msg\n" +" }\n" +" tailcall $chan puts {*}${-nonewline} $msg\n" +" }\n" +"\n" +"\n" +"\n" +"\n" +"\n" +" proc read {{-nonewline {}} chan} {\n" +" if {${-nonewline} ni {-nonewline {}}} {\n" +" tailcall ${-nonewline} read {*}${chan}\n" +" }\n" +" tailcall $chan read {*}${-nonewline}\n" +" }\n" +"\n" +" proc fconfigure {f args} {\n" +" foreach {n v} $args {\n" +" switch -glob -- $n {\n" +" -bl* {\n" +" $f ndelay $(!$v)\n" +" }\n" +" -bu* {\n" +" $f buffering $v\n" +" }\n" +" -tr* {\n" +"\n" +" }\n" +" default {\n" +" return -code error \"fconfigure: unknown option $n\"\n" +" }\n" +" }\n" +" }\n" +" }\n" +"}\n" +"\n" +"\n" +"proc fileevent {args} {\n" +" tailcall {*}$args\n" +"}\n" +"\n" +"\n" +"\n" +"\n" +"proc parray {arrayname {pattern *} {puts puts}} {\n" +" upvar $arrayname a\n" +"\n" +" set max 0\n" +" foreach name [array names a $pattern]] {\n" +" if {[string length $name] > $max} {\n" +" set max [string length 
$name]\n" +" }\n" +" }\n" +" incr max [string length $arrayname]\n" +" incr max 2\n" +" foreach name [lsort [array names a $pattern]] {\n" +" $puts [format \"%-${max}s = %s\" $arrayname\\($name\\) $a($name)]\n" +" }\n" +"}\n" +"\n" +"\n" +"proc {file copy} {{force {}} source target} {\n" +" try {\n" +" if {$force ni {{} -force}} {\n" +" error \"bad option \\\"$force\\\": should be -force\"\n" +" }\n" +"\n" +" set in [open $source rb]\n" +"\n" +" if {[file exists $target]} {\n" +" if {$force eq \"\"} {\n" +" error \"error copying \\\"$source\\\" to \\\"$target\\\": file already exists\"\n" +" }\n" +"\n" +" if {$source eq $target} {\n" +" return\n" +" }\n" +"\n" +"\n" +" file stat $source ss\n" +" file stat $target ts\n" +" if {$ss(dev) == $ts(dev) && $ss(ino) == $ts(ino) && $ss(ino)} {\n" +" return\n" +" }\n" +" }\n" +" set out [open $target wb]\n" +" $in copyto $out\n" +" $out close\n" +" } on error {msg opts} {\n" +" incr opts(-level)\n" +" return {*}$opts $msg\n" +" } finally {\n" +" catch {$in close}\n" +" }\n" +"}\n" +"\n" +"\n" +"\n" +"proc popen {cmd {mode r}} {\n" +" lassign [socket pipe] r w\n" +" try {\n" +" if {[string match \"w*\" $mode]} {\n" +" lappend cmd <@$r &\n" +" set pids [exec {*}$cmd]\n" +" $r close\n" +" set f $w\n" +" } else {\n" +" lappend cmd >@$w &\n" +" set pids [exec {*}$cmd]\n" +" $w close\n" +" set f $r\n" +" }\n" +" lambda {cmd args} {f pids} {\n" +" if {$cmd eq \"pid\"} {\n" +" return $pids\n" +" }\n" +" if {$cmd eq \"close\"} {\n" +" $f close\n" +"\n" +" foreach p $pids { os.wait $p }\n" +" return\n" +" }\n" +" tailcall $f $cmd {*}$args\n" +" }\n" +" } on error {error opts} {\n" +" $r close\n" +" $w close\n" +" error $error\n" +" }\n" +"}\n" +"\n" +"\n" +"local proc pid {{channelId {}}} {\n" +" if {$channelId eq \"\"} {\n" +" tailcall upcall pid\n" +" }\n" +" if {[catch {$channelId tell}]} {\n" +" return -code error \"can not find channel named \\\"$channelId\\\"\"\n" +" }\n" +" if {[catch {$channelId pid} pids]} {\n" +" return \"\"\n" +" }\n" +" return $pids\n" +"}\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"\n" +"proc try {args} {\n" +" set catchopts {}\n" +" while {[string match -* [lindex $args 0]]} {\n" +" set args [lassign $args opt]\n" +" if {$opt eq \"--\"} {\n" +" break\n" +" }\n" +" lappend catchopts $opt\n" +" }\n" +" if {[llength $args] == 0} {\n" +" return -code error {wrong # args: should be \"try ?options? 
script ?argument ...?\"}\n" +" }\n" +" set args [lassign $args script]\n" +" set code [catch -eval {*}$catchopts {uplevel 1 $script} msg opts]\n" +"\n" +" set handled 0\n" +"\n" +" foreach {on codes vars script} $args {\n" +" switch -- $on \\\n" +" on {\n" +" if {!$handled && ($codes eq \"*\" || [info returncode $code] in $codes)} {\n" +" lassign $vars msgvar optsvar\n" +" if {$msgvar ne \"\"} {\n" +" upvar $msgvar hmsg\n" +" set hmsg $msg\n" +" }\n" +" if {$optsvar ne \"\"} {\n" +" upvar $optsvar hopts\n" +" set hopts $opts\n" +" }\n" +"\n" +" set code [catch {uplevel 1 $script} msg opts]\n" +" incr handled\n" +" }\n" +" } \\\n" +" finally {\n" +" set finalcode [catch {uplevel 1 $codes} finalmsg finalopts]\n" +" if {$finalcode} {\n" +"\n" +" set code $finalcode\n" +" set msg $finalmsg\n" +" set opts $finalopts\n" +" }\n" +" break\n" +" } \\\n" +" default {\n" +" return -code error \"try: expected 'on' or 'finally', got '$on'\"\n" +" }\n" +" }\n" +"\n" +" if {$code} {\n" +" incr opts(-level)\n" +" return {*}$opts $msg\n" +" }\n" +" return $msg\n" +"}\n" +"\n" +"\n" +"\n" +"proc throw {code {msg \"\"}} {\n" +" return -code $code $msg\n" +"}\n" +"\n" +"\n" +"proc {file delete force} {path} {\n" +" foreach e [readdir $path] {\n" +" file delete -force $path/$e\n" +" }\n" +" file delete $path\n" +"}\n" +); +} + + +#include +#include +#include +#include +#ifdef HAVE_UNISTD_H +#include +#include +#endif + + +#if defined(HAVE_SYS_SOCKET_H) && defined(HAVE_SELECT) && defined(HAVE_NETINET_IN_H) && defined(HAVE_NETDB_H) && defined(HAVE_ARPA_INET_H) +#include +#include +#include +#include +#ifdef HAVE_SYS_UN_H +#include +#endif +#else +#define JIM_ANSIC +#endif + + +#define AIO_CMD_LEN 32 +#define AIO_BUF_LEN 256 + +#ifndef HAVE_FTELLO + #define ftello ftell +#endif +#ifndef HAVE_FSEEKO + #define fseeko fseek +#endif + +#define AIO_KEEPOPEN 1 + +#if defined(JIM_IPV6) +#define IPV6 1 +#else +#define IPV6 0 +#ifndef PF_INET6 +#define PF_INET6 0 +#endif +#endif + + +typedef struct AioFile +{ + FILE *fp; + Jim_Obj *filename; + int type; + int openFlags; + int fd; + Jim_Obj *rEvent; + Jim_Obj *wEvent; + Jim_Obj *eEvent; + int addr_family; +} AioFile; + +static int JimAioSubCmdProc(Jim_Interp *interp, int argc, Jim_Obj *const *argv); +static int JimMakeChannel(Jim_Interp *interp, FILE *fh, int fd, Jim_Obj *filename, + const char *hdlfmt, int family, const char *mode); + + +static void JimAioSetError(Jim_Interp *interp, Jim_Obj *name) +{ + if (name) { + Jim_SetResultFormatted(interp, "%#s: %s", name, strerror(errno)); + } + else { + Jim_SetResultString(interp, strerror(errno), -1); + } +} + +static void JimAioDelProc(Jim_Interp *interp, void *privData) +{ + AioFile *af = privData; + + JIM_NOTUSED(interp); + + Jim_DecrRefCount(interp, af->filename); + +#ifdef jim_ext_eventloop + + Jim_DeleteFileHandler(interp, af->fp, JIM_EVENT_READABLE | JIM_EVENT_WRITABLE | JIM_EVENT_EXCEPTION); +#endif + + if (!(af->openFlags & AIO_KEEPOPEN)) { + fclose(af->fp); + } + + Jim_Free(af); +} + +static int JimCheckStreamError(Jim_Interp *interp, AioFile *af) +{ + if (!ferror(af->fp)) { + return JIM_OK; + } + clearerr(af->fp); + + if (feof(af->fp) || errno == EAGAIN || errno == EINTR) { + return JIM_OK; + } +#ifdef ECONNRESET + if (errno == ECONNRESET) { + return JIM_OK; + } +#endif +#ifdef ECONNABORTED + if (errno != ECONNABORTED) { + return JIM_OK; + } +#endif + JimAioSetError(interp, af->filename); + return JIM_ERR; +} + +static int aio_cmd_read(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + AioFile *af = 
Jim_CmdPrivData(interp); + char buf[AIO_BUF_LEN]; + Jim_Obj *objPtr; + int nonewline = 0; + jim_wide neededLen = -1; + + if (argc && Jim_CompareStringImmediate(interp, argv[0], "-nonewline")) { + nonewline = 1; + argv++; + argc--; + } + if (argc == 1) { + if (Jim_GetWide(interp, argv[0], &neededLen) != JIM_OK) + return JIM_ERR; + if (neededLen < 0) { + Jim_SetResultString(interp, "invalid parameter: negative len", -1); + return JIM_ERR; + } + } + else if (argc) { + return -1; + } + objPtr = Jim_NewStringObj(interp, NULL, 0); + while (neededLen != 0) { + int retval; + int readlen; + + if (neededLen == -1) { + readlen = AIO_BUF_LEN; + } + else { + readlen = (neededLen > AIO_BUF_LEN ? AIO_BUF_LEN : neededLen); + } + retval = fread(buf, 1, readlen, af->fp); + if (retval > 0) { + Jim_AppendString(interp, objPtr, buf, retval); + if (neededLen != -1) { + neededLen -= retval; + } + } + if (retval != readlen) + break; + } + + if (JimCheckStreamError(interp, af)) { + Jim_FreeNewObj(interp, objPtr); + return JIM_ERR; + } + if (nonewline) { + int len; + const char *s = Jim_GetString(objPtr, &len); + + if (len > 0 && s[len - 1] == '\n') { + objPtr->length--; + objPtr->bytes[objPtr->length] = '\0'; + } + } + Jim_SetResult(interp, objPtr); + return JIM_OK; +} + +static int aio_cmd_copy(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + AioFile *af = Jim_CmdPrivData(interp); + jim_wide count = 0; + jim_wide maxlen = JIM_WIDE_MAX; + FILE *outfh = Jim_AioFilehandle(interp, argv[0]); + + if (outfh == NULL) { + return JIM_ERR; + } + + if (argc == 2) { + if (Jim_GetWide(interp, argv[1], &maxlen) != JIM_OK) { + return JIM_ERR; + } + } + + while (count < maxlen) { + int ch = fgetc(af->fp); + + if (ch == EOF || fputc(ch, outfh) == EOF) { + break; + } + count++; + } + + if (ferror(af->fp)) { + Jim_SetResultFormatted(interp, "error while reading: %s", strerror(errno)); + clearerr(af->fp); + return JIM_ERR; + } + + if (ferror(outfh)) { + Jim_SetResultFormatted(interp, "error while writing: %s", strerror(errno)); + clearerr(outfh); + return JIM_ERR; + } + + Jim_SetResultInt(interp, count); + + return JIM_OK; +} + +static int aio_cmd_gets(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + AioFile *af = Jim_CmdPrivData(interp); + char buf[AIO_BUF_LEN]; + Jim_Obj *objPtr; + int len; + + errno = 0; + + objPtr = Jim_NewStringObj(interp, NULL, 0); + while (1) { + buf[AIO_BUF_LEN - 1] = '_'; + if (fgets(buf, AIO_BUF_LEN, af->fp) == NULL) + break; + + if (buf[AIO_BUF_LEN - 1] == '\0' && buf[AIO_BUF_LEN - 2] != '\n') { + Jim_AppendString(interp, objPtr, buf, AIO_BUF_LEN - 1); + } + else { + len = strlen(buf); + + if (len && (buf[len - 1] == '\n')) { + + len--; + } + + Jim_AppendString(interp, objPtr, buf, len); + break; + } + } + if (JimCheckStreamError(interp, af)) { + + Jim_FreeNewObj(interp, objPtr); + return JIM_ERR; + } + + if (argc) { + if (Jim_SetVariable(interp, argv[0], objPtr) != JIM_OK) { + Jim_FreeNewObj(interp, objPtr); + return JIM_ERR; + } + + len = Jim_Length(objPtr); + + if (len == 0 && feof(af->fp)) { + + len = -1; + } + Jim_SetResultInt(interp, len); + } + else { + Jim_SetResult(interp, objPtr); + } + return JIM_OK; +} + +static int aio_cmd_puts(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + AioFile *af = Jim_CmdPrivData(interp); + int wlen; + const char *wdata; + Jim_Obj *strObj; + + if (argc == 2) { + if (!Jim_CompareStringImmediate(interp, argv[0], "-nonewline")) { + return -1; + } + strObj = argv[1]; + } + else { + strObj = argv[0]; + } + + wdata = Jim_GetString(strObj, &wlen); + 
if (fwrite(wdata, 1, wlen, af->fp) == (unsigned)wlen) { + if (argc == 2 || putc('\n', af->fp) != EOF) { + return JIM_OK; + } + } + JimAioSetError(interp, af->filename); + return JIM_ERR; +} + +static int aio_cmd_isatty(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ +#ifdef HAVE_ISATTY + AioFile *af = Jim_CmdPrivData(interp); + Jim_SetResultInt(interp, isatty(fileno(af->fp))); +#else + Jim_SetResultInt(interp, 0); +#endif + + return JIM_OK; +} + + +static int aio_cmd_flush(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + AioFile *af = Jim_CmdPrivData(interp); + + if (fflush(af->fp) == EOF) { + JimAioSetError(interp, af->filename); + return JIM_ERR; + } + return JIM_OK; +} + +static int aio_cmd_eof(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + AioFile *af = Jim_CmdPrivData(interp); + + Jim_SetResultInt(interp, feof(af->fp)); + return JIM_OK; +} + +static int aio_cmd_close(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc == 3) { +#if !defined(JIM_ANSIC) && defined(HAVE_SHUTDOWN) + static const char * const options[] = { "r", "w", NULL }; + enum { OPT_R, OPT_W, }; + int option; + AioFile *af = Jim_CmdPrivData(interp); + + if (Jim_GetEnum(interp, argv[2], options, &option, NULL, JIM_ERRMSG) != JIM_OK) { + return JIM_ERR; + } + if (shutdown(af->fd, option == OPT_R ? SHUT_RD : SHUT_WR) == 0) { + return JIM_OK; + } + JimAioSetError(interp, NULL); +#else + Jim_SetResultString(interp, "async close not supported", -1); +#endif + return JIM_ERR; + } + + return Jim_DeleteCommand(interp, Jim_String(argv[0])); +} + +static int aio_cmd_seek(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + AioFile *af = Jim_CmdPrivData(interp); + int orig = SEEK_SET; + jim_wide offset; + + if (argc == 2) { + if (Jim_CompareStringImmediate(interp, argv[1], "start")) + orig = SEEK_SET; + else if (Jim_CompareStringImmediate(interp, argv[1], "current")) + orig = SEEK_CUR; + else if (Jim_CompareStringImmediate(interp, argv[1], "end")) + orig = SEEK_END; + else { + return -1; + } + } + if (Jim_GetWide(interp, argv[0], &offset) != JIM_OK) { + return JIM_ERR; + } + if (fseeko(af->fp, offset, orig) == -1) { + JimAioSetError(interp, af->filename); + return JIM_ERR; + } + return JIM_OK; +} + +static int aio_cmd_tell(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + AioFile *af = Jim_CmdPrivData(interp); + + Jim_SetResultInt(interp, ftello(af->fp)); + return JIM_OK; +} + +static int aio_cmd_filename(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + AioFile *af = Jim_CmdPrivData(interp); + + Jim_SetResult(interp, af->filename); + return JIM_OK; +} + +#ifdef O_NDELAY +static int aio_cmd_ndelay(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + AioFile *af = Jim_CmdPrivData(interp); + + int fmode = fcntl(af->fd, F_GETFL); + + if (argc) { + long nb; + + if (Jim_GetLong(interp, argv[0], &nb) != JIM_OK) { + return JIM_ERR; + } + if (nb) { + fmode |= O_NDELAY; + } + else { + fmode &= ~O_NDELAY; + } + (void)fcntl(af->fd, F_SETFL, fmode); + } + Jim_SetResultInt(interp, (fmode & O_NONBLOCK) ? 
1 : 0); + return JIM_OK; +} +#endif + +static int aio_cmd_buffering(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + AioFile *af = Jim_CmdPrivData(interp); + + static const char * const options[] = { + "none", + "line", + "full", + NULL + }; + enum + { + OPT_NONE, + OPT_LINE, + OPT_FULL, + }; + int option; + + if (Jim_GetEnum(interp, argv[0], options, &option, NULL, JIM_ERRMSG) != JIM_OK) { + return JIM_ERR; + } + switch (option) { + case OPT_NONE: + setvbuf(af->fp, NULL, _IONBF, 0); + break; + case OPT_LINE: + setvbuf(af->fp, NULL, _IOLBF, BUFSIZ); + break; + case OPT_FULL: + setvbuf(af->fp, NULL, _IOFBF, BUFSIZ); + break; + } + return JIM_OK; +} + +#ifdef jim_ext_eventloop +static void JimAioFileEventFinalizer(Jim_Interp *interp, void *clientData) +{ + Jim_Obj **objPtrPtr = clientData; + + Jim_DecrRefCount(interp, *objPtrPtr); + *objPtrPtr = NULL; +} + +static int JimAioFileEventHandler(Jim_Interp *interp, void *clientData, int mask) +{ + Jim_Obj **objPtrPtr = clientData; + + return Jim_EvalObjBackground(interp, *objPtrPtr); +} + +static int aio_eventinfo(Jim_Interp *interp, AioFile * af, unsigned mask, Jim_Obj **scriptHandlerObj, + int argc, Jim_Obj * const *argv) +{ + if (argc == 0) { + + if (*scriptHandlerObj) { + Jim_SetResult(interp, *scriptHandlerObj); + } + return JIM_OK; + } + + if (*scriptHandlerObj) { + + Jim_DeleteFileHandler(interp, af->fp, mask); + } + + + if (Jim_Length(argv[0]) == 0) { + + return JIM_OK; + } + + + Jim_IncrRefCount(argv[0]); + *scriptHandlerObj = argv[0]; + + Jim_CreateFileHandler(interp, af->fp, mask, + JimAioFileEventHandler, scriptHandlerObj, JimAioFileEventFinalizer); + + return JIM_OK; +} + +static int aio_cmd_readable(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + AioFile *af = Jim_CmdPrivData(interp); + + return aio_eventinfo(interp, af, JIM_EVENT_READABLE, &af->rEvent, argc, argv); +} + +static int aio_cmd_writable(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + AioFile *af = Jim_CmdPrivData(interp); + + return aio_eventinfo(interp, af, JIM_EVENT_WRITABLE, &af->wEvent, argc, argv); +} + +static int aio_cmd_onexception(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + AioFile *af = Jim_CmdPrivData(interp); + + return aio_eventinfo(interp, af, JIM_EVENT_EXCEPTION, &af->eEvent, argc, argv); +} +#endif + +static const jim_subcmd_type aio_command_table[] = { + { "read", + "?-nonewline? ?len?", + aio_cmd_read, + 0, + 2, + + }, + { "copyto", + "handle ?size?", + aio_cmd_copy, + 1, + 2, + + }, + { "gets", + "?var?", + aio_cmd_gets, + 0, + 1, + + }, + { "puts", + "?-nonewline? 
str", + aio_cmd_puts, + 1, + 2, + + }, + { "isatty", + NULL, + aio_cmd_isatty, + 0, + 0, + + }, + { "flush", + NULL, + aio_cmd_flush, + 0, + 0, + + }, + { "eof", + NULL, + aio_cmd_eof, + 0, + 0, + + }, + { "close", + "?r(ead)|w(rite)?", + aio_cmd_close, + 0, + 1, + JIM_MODFLAG_FULLARGV, + + }, + { "seek", + "offset ?start|current|end", + aio_cmd_seek, + 1, + 2, + + }, + { "tell", + NULL, + aio_cmd_tell, + 0, + 0, + + }, + { "filename", + NULL, + aio_cmd_filename, + 0, + 0, + + }, +#ifdef O_NDELAY + { "ndelay", + "?0|1?", + aio_cmd_ndelay, + 0, + 1, + + }, +#endif + { "buffering", + "none|line|full", + aio_cmd_buffering, + 1, + 1, + + }, +#ifdef jim_ext_eventloop + { "readable", + "?readable-script?", + aio_cmd_readable, + 0, + 1, + + }, + { "writable", + "?writable-script?", + aio_cmd_writable, + 0, + 1, + + }, + { "onexception", + "?exception-script?", + aio_cmd_onexception, + 0, + 1, + + }, +#endif + { NULL } +}; + +static int JimAioSubCmdProc(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + return Jim_CallSubCmd(interp, Jim_ParseSubCmd(interp, aio_command_table, argc, argv), argc, argv); +} + +static int JimAioOpenCommand(Jim_Interp *interp, int argc, + Jim_Obj *const *argv) +{ + const char *mode; + + if (argc != 2 && argc != 3) { + Jim_WrongNumArgs(interp, 1, argv, "filename ?mode?"); + return JIM_ERR; + } + + mode = (argc == 3) ? Jim_String(argv[2]) : "r"; + +#ifdef jim_ext_tclcompat + { + const char *filename = Jim_String(argv[1]); + + + if (*filename == '|') { + Jim_Obj *evalObj[3]; + + evalObj[0] = Jim_NewStringObj(interp, "::popen", -1); + evalObj[1] = Jim_NewStringObj(interp, filename + 1, -1); + evalObj[2] = Jim_NewStringObj(interp, mode, -1); + + return Jim_EvalObjVector(interp, 3, evalObj); + } + } +#endif + return JimMakeChannel(interp, NULL, -1, argv[1], "aio.handle%ld", 0, mode); +} + +static int JimMakeChannel(Jim_Interp *interp, FILE *fh, int fd, Jim_Obj *filename, + const char *hdlfmt, int family, const char *mode) +{ + AioFile *af; + char buf[AIO_CMD_LEN]; + int openFlags = 0; + + if (fh) { + filename = Jim_NewStringObj(interp, hdlfmt, -1); + openFlags = AIO_KEEPOPEN; + } + + Jim_IncrRefCount(filename); + + if (fh == NULL) { +#if !defined(JIM_ANSIC) + if (fd >= 0) { + fh = fdopen(fd, mode); + } + else +#endif + fh = fopen(Jim_String(filename), mode); + + if (fh == NULL) { + JimAioSetError(interp, filename); +#if !defined(JIM_ANSIC) + if (fd >= 0) { + close(fd); + } +#endif + Jim_DecrRefCount(interp, filename); + return JIM_ERR; + } + } + + + af = Jim_Alloc(sizeof(*af)); + memset(af, 0, sizeof(*af)); + af->fp = fh; + af->fd = fileno(fh); + af->filename = filename; +#ifdef FD_CLOEXEC + if ((openFlags & AIO_KEEPOPEN) == 0) { + (void)fcntl(af->fd, F_SETFD, FD_CLOEXEC); + } +#endif + af->openFlags = openFlags; + af->addr_family = family; + snprintf(buf, sizeof(buf), hdlfmt, Jim_GetId(interp)); + Jim_CreateCommand(interp, buf, JimAioSubCmdProc, af, JimAioDelProc); + + Jim_SetResult(interp, Jim_MakeGlobalNamespaceName(interp, Jim_NewStringObj(interp, buf, -1))); + + return JIM_OK; +} + +#if defined(HAVE_PIPE) || (defined(HAVE_SOCKETPAIR) && defined(HAVE_SYS_UN_H)) +static int JimMakeChannelPair(Jim_Interp *interp, int p[2], Jim_Obj *filename, + const char *hdlfmt, int family, const char *mode[2]) +{ + if (JimMakeChannel(interp, NULL, p[0], filename, hdlfmt, family, mode[0]) == JIM_OK) { + Jim_Obj *objPtr = Jim_NewListObj(interp, NULL, 0); + Jim_ListAppendElement(interp, objPtr, Jim_GetResult(interp)); + + if (JimMakeChannel(interp, NULL, p[1], filename, hdlfmt, family, 
mode[1]) == JIM_OK) { + Jim_ListAppendElement(interp, objPtr, Jim_GetResult(interp)); + Jim_SetResult(interp, objPtr); + return JIM_OK; + } + } + + + close(p[0]); + close(p[1]); + JimAioSetError(interp, NULL); + return JIM_ERR; +} +#endif + + +int Jim_MakeTempFile(Jim_Interp *interp, const char *template) +{ +#ifdef HAVE_MKSTEMP + int fd; + mode_t mask; + Jim_Obj *filenameObj; + + if (template == NULL) { + const char *tmpdir = getenv("TMPDIR"); + if (tmpdir == NULL || *tmpdir == '\0' || access(tmpdir, W_OK) != 0) { + tmpdir = "/tmp/"; + } + filenameObj = Jim_NewStringObj(interp, tmpdir, -1); + if (tmpdir[0] && tmpdir[strlen(tmpdir) - 1] != '/') { + Jim_AppendString(interp, filenameObj, "/", 1); + } + Jim_AppendString(interp, filenameObj, "tcl.tmp.XXXXXX", -1); + } + else { + filenameObj = Jim_NewStringObj(interp, template, -1); + } + + mask = umask(S_IXUSR | S_IRWXG | S_IRWXO); + + + fd = mkstemp(filenameObj->bytes); + umask(mask); + if (fd < 0) { + JimAioSetError(interp, filenameObj); + Jim_FreeNewObj(interp, filenameObj); + return -1; + } + + Jim_SetResult(interp, filenameObj); + return fd; +#else + Jim_SetResultString(interp, "platform has no tempfile support", -1); + return -1; +#endif +} + +FILE *Jim_AioFilehandle(Jim_Interp *interp, Jim_Obj *command) +{ + Jim_Cmd *cmdPtr = Jim_GetCommand(interp, command, JIM_ERRMSG); + + + if (cmdPtr && !cmdPtr->isproc && cmdPtr->u.native.cmdProc == JimAioSubCmdProc) { + return ((AioFile *) cmdPtr->u.native.privData)->fp; + } + Jim_SetResultFormatted(interp, "Not a filehandle: \"%#s\"", command); + return NULL; +} + +int Jim_aioInit(Jim_Interp *interp) +{ + if (Jim_PackageProvide(interp, "aio", "1.0", JIM_ERRMSG)) + return JIM_ERR; + + Jim_CreateCommand(interp, "open", JimAioOpenCommand, NULL, NULL); +#ifndef JIM_ANSIC + Jim_CreateCommand(interp, "socket", JimAioSockCommand, NULL, NULL); +#endif + + + JimMakeChannel(interp, stdin, -1, NULL, "stdin", 0, "r"); + JimMakeChannel(interp, stdout, -1, NULL, "stdout", 0, "w"); + JimMakeChannel(interp, stderr, -1, NULL, "stderr", 0, "w"); + + return JIM_OK; +} + +#include +#include +#include + + +#ifdef HAVE_DIRENT_H +#include +#endif + +int Jim_ReaddirCmd(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + const char *dirPath; + DIR *dirPtr; + struct dirent *entryPtr; + int nocomplain = 0; + + if (argc == 3 && Jim_CompareStringImmediate(interp, argv[1], "-nocomplain")) { + nocomplain = 1; + } + if (argc != 2 && !nocomplain) { + Jim_WrongNumArgs(interp, 1, argv, "?-nocomplain? 
dirPath"); + return JIM_ERR; + } + + dirPath = Jim_String(argv[1 + nocomplain]); + + dirPtr = opendir(dirPath); + if (dirPtr == NULL) { + if (nocomplain) { + return JIM_OK; + } + Jim_SetResultString(interp, strerror(errno), -1); + return JIM_ERR; + } + else { + Jim_Obj *listObj = Jim_NewListObj(interp, NULL, 0); + + while ((entryPtr = readdir(dirPtr)) != NULL) { + if (entryPtr->d_name[0] == '.') { + if (entryPtr->d_name[1] == '\0') { + continue; + } + if ((entryPtr->d_name[1] == '.') && (entryPtr->d_name[2] == '\0')) + continue; + } + Jim_ListAppendElement(interp, listObj, Jim_NewStringObj(interp, entryPtr->d_name, -1)); + } + closedir(dirPtr); + + Jim_SetResult(interp, listObj); + + return JIM_OK; + } +} + +int Jim_readdirInit(Jim_Interp *interp) +{ + if (Jim_PackageProvide(interp, "readdir", "1.0", JIM_ERRMSG)) + return JIM_ERR; + + Jim_CreateCommand(interp, "readdir", Jim_ReaddirCmd, NULL, NULL); + return JIM_OK; +} + +#include +#include + +#if defined(JIM_REGEXP) +#else + #include +#endif + +static void FreeRegexpInternalRep(Jim_Interp *interp, Jim_Obj *objPtr) +{ + regfree(objPtr->internalRep.regexpValue.compre); + Jim_Free(objPtr->internalRep.regexpValue.compre); +} + +static const Jim_ObjType regexpObjType = { + "regexp", + FreeRegexpInternalRep, + NULL, + NULL, + JIM_TYPE_NONE +}; + +static regex_t *SetRegexpFromAny(Jim_Interp *interp, Jim_Obj *objPtr, unsigned flags) +{ + regex_t *compre; + const char *pattern; + int ret; + + + if (objPtr->typePtr == ®expObjType && + objPtr->internalRep.regexpValue.compre && objPtr->internalRep.regexpValue.flags == flags) { + + return objPtr->internalRep.regexpValue.compre; + } + + + + + pattern = Jim_String(objPtr); + compre = Jim_Alloc(sizeof(regex_t)); + + if ((ret = regcomp(compre, pattern, REG_EXTENDED | flags)) != 0) { + char buf[100]; + + regerror(ret, compre, buf, sizeof(buf)); + Jim_SetResultFormatted(interp, "couldn't compile regular expression pattern: %s", buf); + regfree(compre); + Jim_Free(compre); + return NULL; + } + + Jim_FreeIntRep(interp, objPtr); + + objPtr->typePtr = ®expObjType; + objPtr->internalRep.regexpValue.flags = flags; + objPtr->internalRep.regexpValue.compre = compre; + + return compre; +} + +int Jim_RegexpCmd(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int opt_indices = 0; + int opt_all = 0; + int opt_inline = 0; + regex_t *regex; + int match, i, j; + int offset = 0; + regmatch_t *pmatch = NULL; + int source_len; + int result = JIM_OK; + const char *pattern; + const char *source_str; + int num_matches = 0; + int num_vars; + Jim_Obj *resultListObj = NULL; + int regcomp_flags = 0; + int eflags = 0; + int option; + enum { + OPT_INDICES, OPT_NOCASE, OPT_LINE, OPT_ALL, OPT_INLINE, OPT_START, OPT_END + }; + static const char * const options[] = { + "-indices", "-nocase", "-line", "-all", "-inline", "-start", "--", NULL + }; + + if (argc < 3) { + wrongNumArgs: + Jim_WrongNumArgs(interp, 1, argv, + "?switches? exp string ?matchVar? 
?subMatchVar subMatchVar ...?"); + return JIM_ERR; + } + + for (i = 1; i < argc; i++) { + const char *opt = Jim_String(argv[i]); + + if (*opt != '-') { + break; + } + if (Jim_GetEnum(interp, argv[i], options, &option, "switch", JIM_ERRMSG | JIM_ENUM_ABBREV) != JIM_OK) { + return JIM_ERR; + } + if (option == OPT_END) { + i++; + break; + } + switch (option) { + case OPT_INDICES: + opt_indices = 1; + break; + + case OPT_NOCASE: + regcomp_flags |= REG_ICASE; + break; + + case OPT_LINE: + regcomp_flags |= REG_NEWLINE; + break; + + case OPT_ALL: + opt_all = 1; + break; + + case OPT_INLINE: + opt_inline = 1; + break; + + case OPT_START: + if (++i == argc) { + goto wrongNumArgs; + } + if (Jim_GetIndex(interp, argv[i], &offset) != JIM_OK) { + return JIM_ERR; + } + break; + } + } + if (argc - i < 2) { + goto wrongNumArgs; + } + + regex = SetRegexpFromAny(interp, argv[i], regcomp_flags); + if (!regex) { + return JIM_ERR; + } + + pattern = Jim_String(argv[i]); + source_str = Jim_GetString(argv[i + 1], &source_len); + + num_vars = argc - i - 2; + + if (opt_inline) { + if (num_vars) { + Jim_SetResultString(interp, "regexp match variables not allowed when using -inline", + -1); + result = JIM_ERR; + goto done; + } + num_vars = regex->re_nsub + 1; + } + + pmatch = Jim_Alloc((num_vars + 1) * sizeof(*pmatch)); + + if (offset) { + if (offset < 0) { + offset += source_len + 1; + } + if (offset > source_len) { + source_str += source_len; + } + else if (offset > 0) { + source_str += offset; + } + eflags |= REG_NOTBOL; + } + + if (opt_inline) { + resultListObj = Jim_NewListObj(interp, NULL, 0); + } + + next_match: + match = regexec(regex, source_str, num_vars + 1, pmatch, eflags); + if (match >= REG_BADPAT) { + char buf[100]; + + regerror(match, regex, buf, sizeof(buf)); + Jim_SetResultFormatted(interp, "error while matching pattern: %s", buf); + result = JIM_ERR; + goto done; + } + + if (match == REG_NOMATCH) { + goto done; + } + + num_matches++; + + if (opt_all && !opt_inline) { + + goto try_next_match; + } + + + j = 0; + for (i += 2; opt_inline ? 
j < num_vars : i < argc; i++, j++) { + Jim_Obj *resultObj; + + if (opt_indices) { + resultObj = Jim_NewListObj(interp, NULL, 0); + } + else { + resultObj = Jim_NewStringObj(interp, "", 0); + } + + if (pmatch[j].rm_so == -1) { + if (opt_indices) { + Jim_ListAppendElement(interp, resultObj, Jim_NewIntObj(interp, -1)); + Jim_ListAppendElement(interp, resultObj, Jim_NewIntObj(interp, -1)); + } + } + else { + int len = pmatch[j].rm_eo - pmatch[j].rm_so; + + if (opt_indices) { + Jim_ListAppendElement(interp, resultObj, Jim_NewIntObj(interp, + offset + pmatch[j].rm_so)); + Jim_ListAppendElement(interp, resultObj, Jim_NewIntObj(interp, + offset + pmatch[j].rm_so + len - 1)); + } + else { + Jim_AppendString(interp, resultObj, source_str + pmatch[j].rm_so, len); + } + } + + if (opt_inline) { + Jim_ListAppendElement(interp, resultListObj, resultObj); + } + else { + + result = Jim_SetVariable(interp, argv[i], resultObj); + + if (result != JIM_OK) { + Jim_FreeObj(interp, resultObj); + break; + } + } + } + + try_next_match: + if (opt_all && (pattern[0] != '^' || (regcomp_flags & REG_NEWLINE)) && *source_str) { + if (pmatch[0].rm_eo) { + offset += pmatch[0].rm_eo; + source_str += pmatch[0].rm_eo; + } + else { + source_str++; + offset++; + } + if (*source_str) { + eflags = REG_NOTBOL; + goto next_match; + } + } + + done: + if (result == JIM_OK) { + if (opt_inline) { + Jim_SetResult(interp, resultListObj); + } + else { + Jim_SetResultInt(interp, num_matches); + } + } + + Jim_Free(pmatch); + return result; +} + +#define MAX_SUB_MATCHES 50 + +int Jim_RegsubCmd(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int regcomp_flags = 0; + int regexec_flags = 0; + int opt_all = 0; + int offset = 0; + regex_t *regex; + const char *p; + int result; + regmatch_t pmatch[MAX_SUB_MATCHES + 1]; + int num_matches = 0; + + int i, j, n; + Jim_Obj *varname; + Jim_Obj *resultObj; + const char *source_str; + int source_len; + const char *replace_str; + int replace_len; + const char *pattern; + int option; + enum { + OPT_NOCASE, OPT_LINE, OPT_ALL, OPT_START, OPT_END + }; + static const char * const options[] = { + "-nocase", "-line", "-all", "-start", "--", NULL + }; + + if (argc < 4) { + wrongNumArgs: + Jim_WrongNumArgs(interp, 1, argv, + "?switches? 
exp string subSpec ?varName?"); + return JIM_ERR; + } + + for (i = 1; i < argc; i++) { + const char *opt = Jim_String(argv[i]); + + if (*opt != '-') { + break; + } + if (Jim_GetEnum(interp, argv[i], options, &option, "switch", JIM_ERRMSG | JIM_ENUM_ABBREV) != JIM_OK) { + return JIM_ERR; + } + if (option == OPT_END) { + i++; + break; + } + switch (option) { + case OPT_NOCASE: + regcomp_flags |= REG_ICASE; + break; + + case OPT_LINE: + regcomp_flags |= REG_NEWLINE; + break; + + case OPT_ALL: + opt_all = 1; + break; + + case OPT_START: + if (++i == argc) { + goto wrongNumArgs; + } + if (Jim_GetIndex(interp, argv[i], &offset) != JIM_OK) { + return JIM_ERR; + } + break; + } + } + if (argc - i != 3 && argc - i != 4) { + goto wrongNumArgs; + } + + regex = SetRegexpFromAny(interp, argv[i], regcomp_flags); + if (!regex) { + return JIM_ERR; + } + pattern = Jim_String(argv[i]); + + source_str = Jim_GetString(argv[i + 1], &source_len); + replace_str = Jim_GetString(argv[i + 2], &replace_len); + varname = argv[i + 3]; + + + resultObj = Jim_NewStringObj(interp, "", 0); + + if (offset) { + if (offset < 0) { + offset += source_len + 1; + } + if (offset > source_len) { + offset = source_len; + } + else if (offset < 0) { + offset = 0; + } + } + + + Jim_AppendString(interp, resultObj, source_str, offset); + + + n = source_len - offset; + p = source_str + offset; + do { + int match = regexec(regex, p, MAX_SUB_MATCHES, pmatch, regexec_flags); + + if (match >= REG_BADPAT) { + char buf[100]; + + regerror(match, regex, buf, sizeof(buf)); + Jim_SetResultFormatted(interp, "error while matching pattern: %s", buf); + return JIM_ERR; + } + if (match == REG_NOMATCH) { + break; + } + + num_matches++; + + Jim_AppendString(interp, resultObj, p, pmatch[0].rm_so); + + + for (j = 0; j < replace_len; j++) { + int idx; + int c = replace_str[j]; + + if (c == '&') { + idx = 0; + } + else if (c == '\\' && j < replace_len) { + c = replace_str[++j]; + if ((c >= '0') && (c <= '9')) { + idx = c - '0'; + } + else if ((c == '\\') || (c == '&')) { + Jim_AppendString(interp, resultObj, replace_str + j, 1); + continue; + } + else { + Jim_AppendString(interp, resultObj, replace_str + j - 1, 2); + continue; + } + } + else { + Jim_AppendString(interp, resultObj, replace_str + j, 1); + continue; + } + if ((idx < MAX_SUB_MATCHES) && pmatch[idx].rm_so != -1 && pmatch[idx].rm_eo != -1) { + Jim_AppendString(interp, resultObj, p + pmatch[idx].rm_so, + pmatch[idx].rm_eo - pmatch[idx].rm_so); + } + } + + p += pmatch[0].rm_eo; + n -= pmatch[0].rm_eo; + + + if (!opt_all || n == 0) { + break; + } + + + if ((regcomp_flags & REG_NEWLINE) == 0 && pattern[0] == '^') { + break; + } + + + if (pattern[0] == '\0' && n) { + + Jim_AppendString(interp, resultObj, p, 1); + p++; + n--; + } + + regexec_flags |= REG_NOTBOL; + } while (n); + + Jim_AppendString(interp, resultObj, p, -1); + + + if (argc - i == 4) { + result = Jim_SetVariable(interp, varname, resultObj); + + if (result == JIM_OK) { + Jim_SetResultInt(interp, num_matches); + } + else { + Jim_FreeObj(interp, resultObj); + } + } + else { + Jim_SetResult(interp, resultObj); + result = JIM_OK; + } + + return result; +} + +int Jim_regexpInit(Jim_Interp *interp) +{ + if (Jim_PackageProvide(interp, "regexp", "1.0", JIM_ERRMSG)) + return JIM_ERR; + + Jim_CreateCommand(interp, "regexp", Jim_RegexpCmd, NULL, NULL); + Jim_CreateCommand(interp, "regsub", Jim_RegsubCmd, NULL, NULL); + return JIM_OK; +} + +#include +#include +#include +#include +#include +#include + + +#ifdef HAVE_UTIMES +#include +#endif +#ifdef 
HAVE_UNISTD_H +#include +#elif defined(_MSC_VER) +#include +#define F_OK 0 +#define W_OK 2 +#define R_OK 4 +#define S_ISREG(m) (((m) & S_IFMT) == S_IFREG) +#define S_ISDIR(m) (((m) & S_IFMT) == S_IFDIR) +#endif + +# ifndef MAXPATHLEN +# define MAXPATHLEN JIM_PATH_LEN +# endif + +#if defined(__MINGW32__) || defined(_MSC_VER) +#define ISWINDOWS 1 +#else +#define ISWINDOWS 0 +#endif + + +static const char *JimGetFileType(int mode) +{ + if (S_ISREG(mode)) { + return "file"; + } + else if (S_ISDIR(mode)) { + return "directory"; + } +#ifdef S_ISCHR + else if (S_ISCHR(mode)) { + return "characterSpecial"; + } +#endif +#ifdef S_ISBLK + else if (S_ISBLK(mode)) { + return "blockSpecial"; + } +#endif +#ifdef S_ISFIFO + else if (S_ISFIFO(mode)) { + return "fifo"; + } +#endif +#ifdef S_ISLNK + else if (S_ISLNK(mode)) { + return "link"; + } +#endif +#ifdef S_ISSOCK + else if (S_ISSOCK(mode)) { + return "socket"; + } +#endif + return "unknown"; +} + +static void AppendStatElement(Jim_Interp *interp, Jim_Obj *listObj, const char *key, jim_wide value) +{ + Jim_ListAppendElement(interp, listObj, Jim_NewStringObj(interp, key, -1)); + Jim_ListAppendElement(interp, listObj, Jim_NewIntObj(interp, value)); +} + +static int StoreStatData(Jim_Interp *interp, Jim_Obj *varName, const struct stat *sb) +{ + + Jim_Obj *listObj = Jim_NewListObj(interp, NULL, 0); + + AppendStatElement(interp, listObj, "dev", sb->st_dev); + AppendStatElement(interp, listObj, "ino", sb->st_ino); + AppendStatElement(interp, listObj, "mode", sb->st_mode); + AppendStatElement(interp, listObj, "nlink", sb->st_nlink); + AppendStatElement(interp, listObj, "uid", sb->st_uid); + AppendStatElement(interp, listObj, "gid", sb->st_gid); + AppendStatElement(interp, listObj, "size", sb->st_size); + AppendStatElement(interp, listObj, "atime", sb->st_atime); + AppendStatElement(interp, listObj, "mtime", sb->st_mtime); + AppendStatElement(interp, listObj, "ctime", sb->st_ctime); + Jim_ListAppendElement(interp, listObj, Jim_NewStringObj(interp, "type", -1)); + Jim_ListAppendElement(interp, listObj, Jim_NewStringObj(interp, JimGetFileType((int)sb->st_mode), -1)); + + + if (varName) { + Jim_Obj *objPtr = Jim_GetVariable(interp, varName, JIM_NONE); + if (objPtr) { + if (Jim_DictSize(interp, objPtr) < 0) { + + Jim_SetResultFormatted(interp, "can't set \"%#s(dev)\": variable isn't array", varName); + Jim_FreeNewObj(interp, listObj); + return JIM_ERR; + } + + if (Jim_IsShared(objPtr)) + objPtr = Jim_DuplicateObj(interp, objPtr); + + + Jim_ListAppendList(interp, objPtr, listObj); + Jim_DictSize(interp, objPtr); + Jim_InvalidateStringRep(objPtr); + + Jim_FreeNewObj(interp, listObj); + listObj = objPtr; + } + Jim_SetVariable(interp, varName, listObj); + } + + + Jim_SetResult(interp, listObj); + + return JIM_OK; +} + +static int file_cmd_dirname(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + const char *path = Jim_String(argv[0]); + const char *p = strrchr(path, '/'); + + if (!p && path[0] == '.' && path[1] == '.' 
&& path[2] == '\0') { + Jim_SetResultString(interp, "..", -1); + } else if (!p) { + Jim_SetResultString(interp, ".", -1); + } + else if (p == path) { + Jim_SetResultString(interp, "/", -1); + } + else if (ISWINDOWS && p[-1] == ':') { + + Jim_SetResultString(interp, path, p - path + 1); + } + else { + Jim_SetResultString(interp, path, p - path); + } + return JIM_OK; +} + +static int file_cmd_rootname(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + const char *path = Jim_String(argv[0]); + const char *lastSlash = strrchr(path, '/'); + const char *p = strrchr(path, '.'); + + if (p == NULL || (lastSlash != NULL && lastSlash > p)) { + Jim_SetResult(interp, argv[0]); + } + else { + Jim_SetResultString(interp, path, p - path); + } + return JIM_OK; +} + +static int file_cmd_extension(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + const char *path = Jim_String(argv[0]); + const char *lastSlash = strrchr(path, '/'); + const char *p = strrchr(path, '.'); + + if (p == NULL || (lastSlash != NULL && lastSlash >= p)) { + p = ""; + } + Jim_SetResultString(interp, p, -1); + return JIM_OK; +} + +static int file_cmd_tail(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + const char *path = Jim_String(argv[0]); + const char *lastSlash = strrchr(path, '/'); + + if (lastSlash) { + Jim_SetResultString(interp, lastSlash + 1, -1); + } + else { + Jim_SetResult(interp, argv[0]); + } + return JIM_OK; +} + +static int file_cmd_normalize(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ +#ifdef HAVE_REALPATH + const char *path = Jim_String(argv[0]); + char *newname = Jim_Alloc(MAXPATHLEN + 1); + + if (realpath(path, newname)) { + Jim_SetResult(interp, Jim_NewStringObjNoAlloc(interp, newname, -1)); + return JIM_OK; + } + else { + Jim_Free(newname); + Jim_SetResultFormatted(interp, "can't normalize \"%#s\": %s", argv[0], strerror(errno)); + return JIM_ERR; + } +#else + Jim_SetResultString(interp, "Not implemented", -1); + return JIM_ERR; +#endif +} + +static int file_cmd_join(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int i; + char *newname = Jim_Alloc(MAXPATHLEN + 1); + char *last = newname; + + *newname = 0; + + + for (i = 0; i < argc; i++) { + int len; + const char *part = Jim_GetString(argv[i], &len); + + if (*part == '/') { + + last = newname; + } + else if (ISWINDOWS && strchr(part, ':')) { + + last = newname; + } + else if (part[0] == '.') { + if (part[1] == '/') { + part += 2; + len -= 2; + } + else if (part[1] == 0 && last != newname) { + + continue; + } + } + + + if (last != newname && last[-1] != '/') { + *last++ = '/'; + } + + if (len) { + if (last + len - newname >= MAXPATHLEN) { + Jim_Free(newname); + Jim_SetResultString(interp, "Path too long", -1); + return JIM_ERR; + } + memcpy(last, part, len); + last += len; + } + + + if (last > newname + 1 && last[-1] == '/') { + + if (!ISWINDOWS || !(last > newname + 2 && last[-2] == ':')) { + *--last = 0; + } + } + } + + *last = 0; + + + + Jim_SetResult(interp, Jim_NewStringObjNoAlloc(interp, newname, last - newname)); + + return JIM_OK; +} + +static int file_access(Jim_Interp *interp, Jim_Obj *filename, int mode) +{ + Jim_SetResultBool(interp, access(Jim_String(filename), mode) != -1); + + return JIM_OK; +} + +static int file_cmd_readable(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + return file_access(interp, argv[0], R_OK); +} + +static int file_cmd_writable(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + return file_access(interp, argv[0], W_OK); +} + +static int file_cmd_executable(Jim_Interp 
*interp, int argc, Jim_Obj *const *argv) +{ +#ifdef X_OK + return file_access(interp, argv[0], X_OK); +#else + + Jim_SetResultBool(interp, 1); + return JIM_OK; +#endif +} + +static int file_cmd_exists(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + return file_access(interp, argv[0], F_OK); +} + +static int file_cmd_delete(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int force = Jim_CompareStringImmediate(interp, argv[0], "-force"); + + if (force || Jim_CompareStringImmediate(interp, argv[0], "--")) { + argc++; + argv--; + } + + while (argc--) { + const char *path = Jim_String(argv[0]); + + if (unlink(path) == -1 && errno != ENOENT) { + if (rmdir(path) == -1) { + + if (!force || Jim_EvalPrefix(interp, "file delete force", 1, argv) != JIM_OK) { + Jim_SetResultFormatted(interp, "couldn't delete file \"%s\": %s", path, + strerror(errno)); + return JIM_ERR; + } + } + } + argv++; + } + return JIM_OK; +} + +#ifdef HAVE_MKDIR_ONE_ARG +#define MKDIR_DEFAULT(PATHNAME) mkdir(PATHNAME) +#else +#define MKDIR_DEFAULT(PATHNAME) mkdir(PATHNAME, 0755) +#endif + +static int mkdir_all(char *path) +{ + int ok = 1; + + + goto first; + + while (ok--) { + + { + char *slash = strrchr(path, '/'); + + if (slash && slash != path) { + *slash = 0; + if (mkdir_all(path) != 0) { + return -1; + } + *slash = '/'; + } + } + first: + if (MKDIR_DEFAULT(path) == 0) { + return 0; + } + if (errno == ENOENT) { + + continue; + } + + if (errno == EEXIST) { + struct stat sb; + + if (stat(path, &sb) == 0 && S_ISDIR(sb.st_mode)) { + return 0; + } + + errno = EEXIST; + } + + break; + } + return -1; +} + +static int file_cmd_mkdir(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + while (argc--) { + char *path = Jim_StrDup(Jim_String(argv[0])); + int rc = mkdir_all(path); + + Jim_Free(path); + if (rc != 0) { + Jim_SetResultFormatted(interp, "can't create directory \"%#s\": %s", argv[0], + strerror(errno)); + return JIM_ERR; + } + argv++; + } + return JIM_OK; +} + +static int file_cmd_tempfile(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int fd = Jim_MakeTempFile(interp, (argc >= 1) ? 
Jim_String(argv[0]) : NULL); + + if (fd < 0) { + return JIM_ERR; + } + close(fd); + + return JIM_OK; +} + +static int file_cmd_rename(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + const char *source; + const char *dest; + int force = 0; + + if (argc == 3) { + if (!Jim_CompareStringImmediate(interp, argv[0], "-force")) { + return -1; + } + force++; + argv++; + argc--; + } + + source = Jim_String(argv[0]); + dest = Jim_String(argv[1]); + + if (!force && access(dest, F_OK) == 0) { + Jim_SetResultFormatted(interp, "error renaming \"%#s\" to \"%#s\": target exists", argv[0], + argv[1]); + return JIM_ERR; + } + + if (rename(source, dest) != 0) { + Jim_SetResultFormatted(interp, "error renaming \"%#s\" to \"%#s\": %s", argv[0], argv[1], + strerror(errno)); + return JIM_ERR; + } + + return JIM_OK; +} + +#if defined(HAVE_LINK) && defined(HAVE_SYMLINK) +static int file_cmd_link(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int ret; + const char *source; + const char *dest; + static const char * const options[] = { "-hard", "-symbolic", NULL }; + enum { OPT_HARD, OPT_SYMBOLIC, }; + int option = OPT_HARD; + + if (argc == 3) { + if (Jim_GetEnum(interp, argv[0], options, &option, NULL, JIM_ENUM_ABBREV | JIM_ERRMSG) != JIM_OK) { + return JIM_ERR; + } + argv++; + argc--; + } + + dest = Jim_String(argv[0]); + source = Jim_String(argv[1]); + + if (option == OPT_HARD) { + ret = link(source, dest); + } + else { + ret = symlink(source, dest); + } + + if (ret != 0) { + Jim_SetResultFormatted(interp, "error linking \"%#s\" to \"%#s\": %s", argv[0], argv[1], + strerror(errno)); + return JIM_ERR; + } + + return JIM_OK; +} +#endif + +static int file_stat(Jim_Interp *interp, Jim_Obj *filename, struct stat *sb) +{ + const char *path = Jim_String(filename); + + if (stat(path, sb) == -1) { + Jim_SetResultFormatted(interp, "could not read \"%#s\": %s", filename, strerror(errno)); + return JIM_ERR; + } + return JIM_OK; +} + +#ifdef HAVE_LSTAT +static int file_lstat(Jim_Interp *interp, Jim_Obj *filename, struct stat *sb) +{ + const char *path = Jim_String(filename); + + if (lstat(path, sb) == -1) { + Jim_SetResultFormatted(interp, "could not read \"%#s\": %s", filename, strerror(errno)); + return JIM_ERR; + } + return JIM_OK; +} +#else +#define file_lstat file_stat +#endif + +static int file_cmd_atime(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + struct stat sb; + + if (file_stat(interp, argv[0], &sb) != JIM_OK) { + return JIM_ERR; + } + Jim_SetResultInt(interp, sb.st_atime); + return JIM_OK; +} + +static int file_cmd_mtime(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + struct stat sb; + + if (argc == 2) { +#ifdef HAVE_UTIMES + jim_wide newtime; + struct timeval times[2]; + + if (Jim_GetWide(interp, argv[1], &newtime) != JIM_OK) { + return JIM_ERR; + } + + times[1].tv_sec = times[0].tv_sec = newtime; + times[1].tv_usec = times[0].tv_usec = 0; + + if (utimes(Jim_String(argv[0]), times) != 0) { + Jim_SetResultFormatted(interp, "can't set time on \"%#s\": %s", argv[0], strerror(errno)); + return JIM_ERR; + } +#else + Jim_SetResultString(interp, "Not implemented", -1); + return JIM_ERR; +#endif + } + if (file_stat(interp, argv[0], &sb) != JIM_OK) { + return JIM_ERR; + } + Jim_SetResultInt(interp, sb.st_mtime); + return JIM_OK; +} + +static int file_cmd_copy(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + return Jim_EvalPrefix(interp, "file copy", argc, argv); +} + +static int file_cmd_size(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + struct stat sb; + + if 
(file_stat(interp, argv[0], &sb) != JIM_OK) { + return JIM_ERR; + } + Jim_SetResultInt(interp, sb.st_size); + return JIM_OK; +} + +static int file_cmd_isdirectory(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + struct stat sb; + int ret = 0; + + if (file_stat(interp, argv[0], &sb) == JIM_OK) { + ret = S_ISDIR(sb.st_mode); + } + Jim_SetResultInt(interp, ret); + return JIM_OK; +} + +static int file_cmd_isfile(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + struct stat sb; + int ret = 0; + + if (file_stat(interp, argv[0], &sb) == JIM_OK) { + ret = S_ISREG(sb.st_mode); + } + Jim_SetResultInt(interp, ret); + return JIM_OK; +} + +#ifdef HAVE_GETEUID +static int file_cmd_owned(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + struct stat sb; + int ret = 0; + + if (file_stat(interp, argv[0], &sb) == JIM_OK) { + ret = (geteuid() == sb.st_uid); + } + Jim_SetResultInt(interp, ret); + return JIM_OK; +} +#endif + +#if defined(HAVE_READLINK) +static int file_cmd_readlink(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + const char *path = Jim_String(argv[0]); + char *linkValue = Jim_Alloc(MAXPATHLEN + 1); + + int linkLength = readlink(path, linkValue, MAXPATHLEN); + + if (linkLength == -1) { + Jim_Free(linkValue); + Jim_SetResultFormatted(interp, "couldn't readlink \"%#s\": %s", argv[0], strerror(errno)); + return JIM_ERR; + } + linkValue[linkLength] = 0; + Jim_SetResult(interp, Jim_NewStringObjNoAlloc(interp, linkValue, linkLength)); + return JIM_OK; +} +#endif + +static int file_cmd_type(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + struct stat sb; + + if (file_lstat(interp, argv[0], &sb) != JIM_OK) { + return JIM_ERR; + } + Jim_SetResultString(interp, JimGetFileType((int)sb.st_mode), -1); + return JIM_OK; +} + +#ifdef HAVE_LSTAT +static int file_cmd_lstat(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + struct stat sb; + + if (file_lstat(interp, argv[0], &sb) != JIM_OK) { + return JIM_ERR; + } + return StoreStatData(interp, argc == 2 ? argv[1] : NULL, &sb); +} +#else +#define file_cmd_lstat file_cmd_stat +#endif + +static int file_cmd_stat(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + struct stat sb; + + if (file_stat(interp, argv[0], &sb) != JIM_OK) { + return JIM_ERR; + } + return StoreStatData(interp, argc == 2 ? argv[1] : NULL, &sb); +} + +static const jim_subcmd_type file_command_table[] = { + { "atime", + "name", + file_cmd_atime, + 1, + 1, + + }, + { "mtime", + "name ?time?", + file_cmd_mtime, + 1, + 2, + + }, + { "copy", + "?-force? source dest", + file_cmd_copy, + 2, + 3, + + }, + { "dirname", + "name", + file_cmd_dirname, + 1, + 1, + + }, + { "rootname", + "name", + file_cmd_rootname, + 1, + 1, + + }, + { "extension", + "name", + file_cmd_extension, + 1, + 1, + + }, + { "tail", + "name", + file_cmd_tail, + 1, + 1, + + }, + { "normalize", + "name", + file_cmd_normalize, + 1, + 1, + + }, + { "join", + "name ?name ...?", + file_cmd_join, + 1, + -1, + + }, + { "readable", + "name", + file_cmd_readable, + 1, + 1, + + }, + { "writable", + "name", + file_cmd_writable, + 1, + 1, + + }, + { "executable", + "name", + file_cmd_executable, + 1, + 1, + + }, + { "exists", + "name", + file_cmd_exists, + 1, + 1, + + }, + { "delete", + "?-force|--? name ...", + file_cmd_delete, + 1, + -1, + + }, + { "mkdir", + "dir ...", + file_cmd_mkdir, + 1, + -1, + + }, + { "tempfile", + "?template?", + file_cmd_tempfile, + 0, + 1, + + }, + { "rename", + "?-force? 
source dest", + file_cmd_rename, + 2, + 3, + + }, +#if defined(HAVE_LINK) && defined(HAVE_SYMLINK) + { "link", + "?-symbolic|-hard? newname target", + file_cmd_link, + 2, + 3, + + }, +#endif +#if defined(HAVE_READLINK) + { "readlink", + "name", + file_cmd_readlink, + 1, + 1, + + }, +#endif + { "size", + "name", + file_cmd_size, + 1, + 1, + + }, + { "stat", + "name ?var?", + file_cmd_stat, + 1, + 2, + + }, + { "lstat", + "name ?var?", + file_cmd_lstat, + 1, + 2, + + }, + { "type", + "name", + file_cmd_type, + 1, + 1, + + }, +#ifdef HAVE_GETEUID + { "owned", + "name", + file_cmd_owned, + 1, + 1, + + }, +#endif + { "isdirectory", + "name", + file_cmd_isdirectory, + 1, + 1, + + }, + { "isfile", + "name", + file_cmd_isfile, + 1, + 1, + + }, + { + NULL + } +}; + +static int Jim_CdCmd(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + const char *path; + + if (argc != 2) { + Jim_WrongNumArgs(interp, 1, argv, "dirname"); + return JIM_ERR; + } + + path = Jim_String(argv[1]); + + if (chdir(path) != 0) { + Jim_SetResultFormatted(interp, "couldn't change working directory to \"%s\": %s", path, + strerror(errno)); + return JIM_ERR; + } + return JIM_OK; +} + +static int Jim_PwdCmd(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + char *cwd = Jim_Alloc(MAXPATHLEN); + + if (getcwd(cwd, MAXPATHLEN) == NULL) { + Jim_SetResultString(interp, "Failed to get pwd", -1); + Jim_Free(cwd); + return JIM_ERR; + } + else if (ISWINDOWS) { + + char *p = cwd; + while ((p = strchr(p, '\\')) != NULL) { + *p++ = '/'; + } + } + + Jim_SetResultString(interp, cwd, -1); + + Jim_Free(cwd); + return JIM_OK; +} + +int Jim_fileInit(Jim_Interp *interp) +{ + if (Jim_PackageProvide(interp, "file", "1.0", JIM_ERRMSG)) + return JIM_ERR; + + Jim_CreateCommand(interp, "file", Jim_SubCmdProc, (void *)file_command_table, NULL); + Jim_CreateCommand(interp, "pwd", Jim_PwdCmd, NULL, NULL); + Jim_CreateCommand(interp, "cd", Jim_CdCmd, NULL, NULL); + return JIM_OK; +} + +#include +#include + + +#if (!defined(HAVE_VFORK) || !defined(HAVE_WAITPID)) && !defined(__MINGW32__) +static int Jim_ExecCmd(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *cmdlineObj = Jim_NewEmptyStringObj(interp); + int i, j; + int rc; + + + for (i = 1; i < argc; i++) { + int len; + const char *arg = Jim_GetString(argv[i], &len); + + if (i > 1) { + Jim_AppendString(interp, cmdlineObj, " ", 1); + } + if (strpbrk(arg, "\\\" ") == NULL) { + + Jim_AppendString(interp, cmdlineObj, arg, len); + continue; + } + + Jim_AppendString(interp, cmdlineObj, "\"", 1); + for (j = 0; j < len; j++) { + if (arg[j] == '\\' || arg[j] == '"') { + Jim_AppendString(interp, cmdlineObj, "\\", 1); + } + Jim_AppendString(interp, cmdlineObj, &arg[j], 1); + } + Jim_AppendString(interp, cmdlineObj, "\"", 1); + } + rc = system(Jim_String(cmdlineObj)); + + Jim_FreeNewObj(interp, cmdlineObj); + + if (rc) { + Jim_Obj *errorCode = Jim_NewListObj(interp, NULL, 0); + Jim_ListAppendElement(interp, errorCode, Jim_NewStringObj(interp, "CHILDSTATUS", -1)); + Jim_ListAppendElement(interp, errorCode, Jim_NewIntObj(interp, 0)); + Jim_ListAppendElement(interp, errorCode, Jim_NewIntObj(interp, rc)); + Jim_SetGlobalVariableStr(interp, "errorCode", errorCode); + return JIM_ERR; + } + + return JIM_OK; +} + +int Jim_execInit(Jim_Interp *interp) +{ + if (Jim_PackageProvide(interp, "exec", "1.0", JIM_ERRMSG)) + return JIM_ERR; + + Jim_CreateCommand(interp, "exec", Jim_ExecCmd, NULL, NULL); + return JIM_OK; +} +#else + + +#include +#include + +#if defined(__MINGW32__) + + #ifndef STRICT + #define 
STRICT + #endif + #define WIN32_LEAN_AND_MEAN + #include + #include + + typedef HANDLE fdtype; + typedef HANDLE pidtype; + #define JIM_BAD_FD INVALID_HANDLE_VALUE + #define JIM_BAD_PID INVALID_HANDLE_VALUE + #define JimCloseFd CloseHandle + + #define WIFEXITED(STATUS) 1 + #define WEXITSTATUS(STATUS) (STATUS) + #define WIFSIGNALED(STATUS) 0 + #define WTERMSIG(STATUS) 0 + #define WNOHANG 1 + + static fdtype JimFileno(FILE *fh); + static pidtype JimWaitPid(pidtype pid, int *status, int nohang); + static fdtype JimDupFd(fdtype infd); + static fdtype JimOpenForRead(const char *filename); + static FILE *JimFdOpenForRead(fdtype fd); + static int JimPipe(fdtype pipefd[2]); + static pidtype JimStartWinProcess(Jim_Interp *interp, char **argv, char *env, + fdtype inputId, fdtype outputId, fdtype errorId); + static int JimErrno(void); +#else + #include + #include + #include + #include + + typedef int fdtype; + typedef int pidtype; + #define JimPipe pipe + #define JimErrno() errno + #define JIM_BAD_FD -1 + #define JIM_BAD_PID -1 + #define JimFileno fileno + #define JimReadFd read + #define JimCloseFd close + #define JimWaitPid waitpid + #define JimDupFd dup + #define JimFdOpenForRead(FD) fdopen((FD), "r") + #define JimOpenForRead(NAME) open((NAME), O_RDONLY, 0) + + #ifndef HAVE_EXECVPE + #define execvpe(ARG0, ARGV, ENV) execvp(ARG0, ARGV) + #endif +#endif + +static const char *JimStrError(void); +static char **JimSaveEnv(char **env); +static void JimRestoreEnv(char **env); +static int JimCreatePipeline(Jim_Interp *interp, int argc, Jim_Obj *const *argv, + pidtype **pidArrayPtr, fdtype *inPipePtr, fdtype *outPipePtr, fdtype *errFilePtr); +static void JimDetachPids(Jim_Interp *interp, int numPids, const pidtype *pidPtr); +static int JimCleanupChildren(Jim_Interp *interp, int numPids, pidtype *pidPtr, fdtype errorId); +static fdtype JimCreateTemp(Jim_Interp *interp, const char *contents, int len); +static fdtype JimOpenForWrite(const char *filename, int append); +static int JimRewindFd(fdtype fd); + +static void Jim_SetResultErrno(Jim_Interp *interp, const char *msg) +{ + Jim_SetResultFormatted(interp, "%s: %s", msg, JimStrError()); +} + +static const char *JimStrError(void) +{ + return strerror(JimErrno()); +} + +static void Jim_RemoveTrailingNewline(Jim_Obj *objPtr) +{ + int len; + const char *s = Jim_GetString(objPtr, &len); + + if (len > 0 && s[len - 1] == '\n') { + objPtr->length--; + objPtr->bytes[objPtr->length] = '\0'; + } +} + +static int JimAppendStreamToString(Jim_Interp *interp, fdtype fd, Jim_Obj *strObj) +{ + char buf[256]; + FILE *fh = JimFdOpenForRead(fd); + if (fh == NULL) { + return JIM_ERR; + } + + while (1) { + int retval = fread(buf, 1, sizeof(buf), fh); + if (retval > 0) { + Jim_AppendString(interp, strObj, buf, retval); + } + if (retval != sizeof(buf)) { + break; + } + } + Jim_RemoveTrailingNewline(strObj); + fclose(fh); + return JIM_OK; +} + +static char **JimBuildEnv(Jim_Interp *interp) +{ + int i; + int size; + int num; + int n; + char **envptr; + char *envdata; + + Jim_Obj *objPtr = Jim_GetGlobalVariableStr(interp, "env", JIM_NONE); + + if (!objPtr) { + return Jim_GetEnviron(); + } + + + + num = Jim_ListLength(interp, objPtr); + if (num % 2) { + + num--; + } + size = Jim_Length(objPtr) + 2; + + envptr = Jim_Alloc(sizeof(*envptr) * (num / 2 + 1) + size); + envdata = (char *)&envptr[num / 2 + 1]; + + n = 0; + for (i = 0; i < num; i += 2) { + const char *s1, *s2; + Jim_Obj *elemObj; + + Jim_ListIndex(interp, objPtr, i, &elemObj, JIM_NONE); + s1 = Jim_String(elemObj); + 
Jim_ListIndex(interp, objPtr, i + 1, &elemObj, JIM_NONE); + s2 = Jim_String(elemObj); + + envptr[n] = envdata; + envdata += sprintf(envdata, "%s=%s", s1, s2); + envdata++; + n++; + } + envptr[n] = NULL; + *envdata = 0; + + return envptr; +} + +static void JimFreeEnv(char **env, char **original_environ) +{ + if (env != original_environ) { + Jim_Free(env); + } +} + +static int JimCheckWaitStatus(Jim_Interp *interp, pidtype pid, int waitStatus) +{ + Jim_Obj *errorCode = Jim_NewListObj(interp, NULL, 0); + int rc = JIM_ERR; + + if (WIFEXITED(waitStatus)) { + if (WEXITSTATUS(waitStatus) == 0) { + Jim_ListAppendElement(interp, errorCode, Jim_NewStringObj(interp, "NONE", -1)); + rc = JIM_OK; + } + else { + Jim_ListAppendElement(interp, errorCode, Jim_NewStringObj(interp, "CHILDSTATUS", -1)); + Jim_ListAppendElement(interp, errorCode, Jim_NewIntObj(interp, (long)pid)); + Jim_ListAppendElement(interp, errorCode, Jim_NewIntObj(interp, WEXITSTATUS(waitStatus))); + } + } + else { + const char *type; + const char *action; + + if (WIFSIGNALED(waitStatus)) { + type = "CHILDKILLED"; + action = "killed"; + } + else { + type = "CHILDSUSP"; + action = "suspended"; + } + + Jim_ListAppendElement(interp, errorCode, Jim_NewStringObj(interp, type, -1)); + +#ifdef jim_ext_signal + Jim_SetResultFormatted(interp, "child %s by signal %s", action, Jim_SignalId(WTERMSIG(waitStatus))); + Jim_ListAppendElement(interp, errorCode, Jim_NewStringObj(interp, Jim_SignalId(WTERMSIG(waitStatus)), -1)); + Jim_ListAppendElement(interp, errorCode, Jim_NewIntObj(interp, pid)); + Jim_ListAppendElement(interp, errorCode, Jim_NewStringObj(interp, Jim_SignalName(WTERMSIG(waitStatus)), -1)); +#else + Jim_SetResultFormatted(interp, "child %s by signal %d", action, WTERMSIG(waitStatus)); + Jim_ListAppendElement(interp, errorCode, Jim_NewIntObj(interp, WTERMSIG(waitStatus))); + Jim_ListAppendElement(interp, errorCode, Jim_NewIntObj(interp, (long)pid)); + Jim_ListAppendElement(interp, errorCode, Jim_NewIntObj(interp, WTERMSIG(waitStatus))); +#endif + } + Jim_SetGlobalVariableStr(interp, "errorCode", errorCode); + return rc; +} + + +struct WaitInfo +{ + pidtype pid; + int status; + int flags; +}; + +struct WaitInfoTable { + struct WaitInfo *info; + int size; + int used; +}; + + +#define WI_DETACHED 2 + +#define WAIT_TABLE_GROW_BY 4 + +static void JimFreeWaitInfoTable(struct Jim_Interp *interp, void *privData) +{ + struct WaitInfoTable *table = privData; + + Jim_Free(table->info); + Jim_Free(table); +} + +static struct WaitInfoTable *JimAllocWaitInfoTable(void) +{ + struct WaitInfoTable *table = Jim_Alloc(sizeof(*table)); + table->info = NULL; + table->size = table->used = 0; + + return table; +} + +static int Jim_ExecCmd(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + fdtype outputId; + fdtype errorId; + pidtype *pidPtr; + int numPids, result; + + if (argc > 1 && Jim_CompareStringImmediate(interp, argv[argc - 1], "&")) { + Jim_Obj *listObj; + int i; + + argc--; + numPids = JimCreatePipeline(interp, argc - 1, argv + 1, &pidPtr, NULL, NULL, NULL); + if (numPids < 0) { + return JIM_ERR; + } + + listObj = Jim_NewListObj(interp, NULL, 0); + for (i = 0; i < numPids; i++) { + Jim_ListAppendElement(interp, listObj, Jim_NewIntObj(interp, (long)pidPtr[i])); + } + Jim_SetResult(interp, listObj); + JimDetachPids(interp, numPids, pidPtr); + Jim_Free(pidPtr); + return JIM_OK; + } + + numPids = + JimCreatePipeline(interp, argc - 1, argv + 1, &pidPtr, NULL, &outputId, &errorId); + + if (numPids < 0) { + return JIM_ERR; + } + + 
Jim_SetResultString(interp, "", 0); + + result = JIM_OK; + if (outputId != JIM_BAD_FD) { + result = JimAppendStreamToString(interp, outputId, Jim_GetResult(interp)); + if (result < 0) { + Jim_SetResultErrno(interp, "error reading from output pipe"); + } + } + + if (JimCleanupChildren(interp, numPids, pidPtr, errorId) != JIM_OK) { + result = JIM_ERR; + } + return result; +} + +static void JimReapDetachedPids(struct WaitInfoTable *table) +{ + struct WaitInfo *waitPtr; + int count; + int dest; + + if (!table) { + return; + } + + waitPtr = table->info; + dest = 0; + for (count = table->used; count > 0; waitPtr++, count--) { + if (waitPtr->flags & WI_DETACHED) { + int status; + pidtype pid = JimWaitPid(waitPtr->pid, &status, WNOHANG); + if (pid == waitPtr->pid) { + + table->used--; + continue; + } + } + if (waitPtr != &table->info[dest]) { + table->info[dest] = *waitPtr; + } + dest++; + } +} + +static pidtype JimWaitForProcess(struct WaitInfoTable *table, pidtype pid, int *statusPtr) +{ + int i; + + + for (i = 0; i < table->used; i++) { + if (pid == table->info[i].pid) { + + JimWaitPid(pid, statusPtr, 0); + + + if (i != table->used - 1) { + table->info[i] = table->info[table->used - 1]; + } + table->used--; + return pid; + } + } + + + return JIM_BAD_PID; +} + +static void JimDetachPids(Jim_Interp *interp, int numPids, const pidtype *pidPtr) +{ + int j; + struct WaitInfoTable *table = Jim_CmdPrivData(interp); + + for (j = 0; j < numPids; j++) { + + int i; + for (i = 0; i < table->used; i++) { + if (pidPtr[j] == table->info[i].pid) { + table->info[i].flags |= WI_DETACHED; + break; + } + } + } +} + +static FILE *JimGetAioFilehandle(Jim_Interp *interp, const char *name) +{ + FILE *fh; + Jim_Obj *fhObj; + + fhObj = Jim_NewStringObj(interp, name, -1); + Jim_IncrRefCount(fhObj); + fh = Jim_AioFilehandle(interp, fhObj); + Jim_DecrRefCount(interp, fhObj); + + return fh; +} + +static int +JimCreatePipeline(Jim_Interp *interp, int argc, Jim_Obj *const *argv, pidtype **pidArrayPtr, + fdtype *inPipePtr, fdtype *outPipePtr, fdtype *errFilePtr) +{ + pidtype *pidPtr = NULL; /* Points to malloc-ed array holding all + * the pids of child processes. */ + int numPids = 0; /* Actual number of processes that exist + * at *pidPtr right now. */ + int cmdCount; /* Count of number of distinct commands + * found in argc/argv. */ + const char *input = NULL; /* Describes input for pipeline, depending + * on "inputFile". NULL means take input + * from stdin/pipe. */ + int input_len = 0; + +#define FILE_NAME 0 +#define FILE_APPEND 1 +#define FILE_HANDLE 2 +#define FILE_TEXT 3 + + int inputFile = FILE_NAME; /* 1 means input is name of input file. + * 2 means input is filehandle name. + * 0 means input holds actual + * text to be input to command. */ + + int outputFile = FILE_NAME; /* 0 means output is the name of output file. + * 1 means output is the name of output file, and append. + * 2 means output is filehandle name. + * All this is ignored if output is NULL + */ + int errorFile = FILE_NAME; /* 0 means error is the name of error file. + * 1 means error is the name of error file, and append. + * 2 means error is filehandle name. + * All this is ignored if error is NULL + */ + const char *output = NULL; /* Holds name of output file to pipe to, + * or NULL if output goes to stdout/pipe. */ + const char *error = NULL; /* Holds name of stderr file to pipe to, + * or NULL if stderr goes to stderr/pipe. 
*/ + fdtype inputId = JIM_BAD_FD; + fdtype outputId = JIM_BAD_FD; + fdtype errorId = JIM_BAD_FD; + fdtype lastOutputId = JIM_BAD_FD; + fdtype pipeIds[2]; + int firstArg, lastArg; /* Indexes of first and last arguments in + * current command. */ + int lastBar; + int i; + pidtype pid; + char **save_environ; + struct WaitInfoTable *table = Jim_CmdPrivData(interp); + + + char **arg_array = Jim_Alloc(sizeof(*arg_array) * (argc + 1)); + int arg_count = 0; + + JimReapDetachedPids(table); + + if (inPipePtr != NULL) { + *inPipePtr = JIM_BAD_FD; + } + if (outPipePtr != NULL) { + *outPipePtr = JIM_BAD_FD; + } + if (errFilePtr != NULL) { + *errFilePtr = JIM_BAD_FD; + } + pipeIds[0] = pipeIds[1] = JIM_BAD_FD; + + cmdCount = 1; + lastBar = -1; + for (i = 0; i < argc; i++) { + const char *arg = Jim_String(argv[i]); + + if (arg[0] == '<') { + inputFile = FILE_NAME; + input = arg + 1; + if (*input == '<') { + inputFile = FILE_TEXT; + input_len = Jim_Length(argv[i]) - 2; + input++; + } + else if (*input == '@') { + inputFile = FILE_HANDLE; + input++; + } + + if (!*input && ++i < argc) { + input = Jim_GetString(argv[i], &input_len); + } + } + else if (arg[0] == '>') { + int dup_error = 0; + + outputFile = FILE_NAME; + + output = arg + 1; + if (*output == '>') { + outputFile = FILE_APPEND; + output++; + } + if (*output == '&') { + + output++; + dup_error = 1; + } + if (*output == '@') { + outputFile = FILE_HANDLE; + output++; + } + if (!*output && ++i < argc) { + output = Jim_String(argv[i]); + } + if (dup_error) { + errorFile = outputFile; + error = output; + } + } + else if (arg[0] == '2' && arg[1] == '>') { + error = arg + 2; + errorFile = FILE_NAME; + + if (*error == '@') { + errorFile = FILE_HANDLE; + error++; + } + else if (*error == '>') { + errorFile = FILE_APPEND; + error++; + } + if (!*error && ++i < argc) { + error = Jim_String(argv[i]); + } + } + else { + if (strcmp(arg, "|") == 0 || strcmp(arg, "|&") == 0) { + if (i == lastBar + 1 || i == argc - 1) { + Jim_SetResultString(interp, "illegal use of | or |& in command", -1); + goto badargs; + } + lastBar = i; + cmdCount++; + } + + arg_array[arg_count++] = (char *)arg; + continue; + } + + if (i >= argc) { + Jim_SetResultFormatted(interp, "can't specify \"%s\" as last word in command", arg); + goto badargs; + } + } + + if (arg_count == 0) { + Jim_SetResultString(interp, "didn't specify command to execute", -1); +badargs: + Jim_Free(arg_array); + return -1; + } + + + save_environ = JimSaveEnv(JimBuildEnv(interp)); + + if (input != NULL) { + if (inputFile == FILE_TEXT) { + inputId = JimCreateTemp(interp, input, input_len); + if (inputId == JIM_BAD_FD) { + goto error; + } + } + else if (inputFile == FILE_HANDLE) { + + FILE *fh = JimGetAioFilehandle(interp, input); + + if (fh == NULL) { + goto error; + } + inputId = JimDupFd(JimFileno(fh)); + } + else { + inputId = JimOpenForRead(input); + if (inputId == JIM_BAD_FD) { + Jim_SetResultFormatted(interp, "couldn't read file \"%s\": %s", input, JimStrError()); + goto error; + } + } + } + else if (inPipePtr != NULL) { + if (JimPipe(pipeIds) != 0) { + Jim_SetResultErrno(interp, "couldn't create input pipe for command"); + goto error; + } + inputId = pipeIds[0]; + *inPipePtr = pipeIds[1]; + pipeIds[0] = pipeIds[1] = JIM_BAD_FD; + } + + if (output != NULL) { + if (outputFile == FILE_HANDLE) { + FILE *fh = JimGetAioFilehandle(interp, output); + if (fh == NULL) { + goto error; + } + fflush(fh); + lastOutputId = JimDupFd(JimFileno(fh)); + } + else { + lastOutputId = JimOpenForWrite(output, outputFile == FILE_APPEND); 
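+ /* bail out of pipeline setup if the stdout redirection target could not be opened for writing */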
+ if (lastOutputId == JIM_BAD_FD) { + Jim_SetResultFormatted(interp, "couldn't write file \"%s\": %s", output, JimStrError()); + goto error; + } + } + } + else if (outPipePtr != NULL) { + if (JimPipe(pipeIds) != 0) { + Jim_SetResultErrno(interp, "couldn't create output pipe"); + goto error; + } + lastOutputId = pipeIds[1]; + *outPipePtr = pipeIds[0]; + pipeIds[0] = pipeIds[1] = JIM_BAD_FD; + } + + if (error != NULL) { + if (errorFile == FILE_HANDLE) { + if (strcmp(error, "1") == 0) { + + if (lastOutputId != JIM_BAD_FD) { + errorId = JimDupFd(lastOutputId); + } + else { + + error = "stdout"; + } + } + if (errorId == JIM_BAD_FD) { + FILE *fh = JimGetAioFilehandle(interp, error); + if (fh == NULL) { + goto error; + } + fflush(fh); + errorId = JimDupFd(JimFileno(fh)); + } + } + else { + errorId = JimOpenForWrite(error, errorFile == FILE_APPEND); + if (errorId == JIM_BAD_FD) { + Jim_SetResultFormatted(interp, "couldn't write file \"%s\": %s", error, JimStrError()); + goto error; + } + } + } + else if (errFilePtr != NULL) { + errorId = JimCreateTemp(interp, NULL, 0); + if (errorId == JIM_BAD_FD) { + goto error; + } + *errFilePtr = JimDupFd(errorId); + } + + + pidPtr = Jim_Alloc(cmdCount * sizeof(*pidPtr)); + for (i = 0; i < numPids; i++) { + pidPtr[i] = JIM_BAD_PID; + } + for (firstArg = 0; firstArg < arg_count; numPids++, firstArg = lastArg + 1) { + int pipe_dup_err = 0; + fdtype origErrorId = errorId; + + for (lastArg = firstArg; lastArg < arg_count; lastArg++) { + if (arg_array[lastArg][0] == '|') { + if (arg_array[lastArg][1] == '&') { + pipe_dup_err = 1; + } + break; + } + } + + arg_array[lastArg] = NULL; + if (lastArg == arg_count) { + outputId = lastOutputId; + } + else { + if (JimPipe(pipeIds) != 0) { + Jim_SetResultErrno(interp, "couldn't create pipe"); + goto error; + } + outputId = pipeIds[1]; + } + + + if (pipe_dup_err) { + errorId = outputId; + } + + + +#ifdef __MINGW32__ + pid = JimStartWinProcess(interp, &arg_array[firstArg], save_environ ? 
save_environ[0] : NULL, inputId, outputId, errorId); + if (pid == JIM_BAD_PID) { + Jim_SetResultFormatted(interp, "couldn't exec \"%s\"", arg_array[firstArg]); + goto error; + } +#else + pid = vfork(); + if (pid < 0) { + Jim_SetResultErrno(interp, "couldn't fork child process"); + goto error; + } + if (pid == 0) { + + + if (inputId != -1) dup2(inputId, 0); + if (outputId != -1) dup2(outputId, 1); + if (errorId != -1) dup2(errorId, 2); + + for (i = 3; (i <= outputId) || (i <= inputId) || (i <= errorId); i++) { + close(i); + } + + + (void)signal(SIGPIPE, SIG_DFL); + + execvpe(arg_array[firstArg], &arg_array[firstArg], Jim_GetEnviron()); + + + fprintf(stderr, "couldn't exec \"%s\"\n", arg_array[firstArg]); + _exit(127); + } +#endif + + + + if (table->used == table->size) { + table->size += WAIT_TABLE_GROW_BY; + table->info = Jim_Realloc(table->info, table->size * sizeof(*table->info)); + } + + table->info[table->used].pid = pid; + table->info[table->used].flags = 0; + table->used++; + + pidPtr[numPids] = pid; + + + errorId = origErrorId; + + + if (inputId != JIM_BAD_FD) { + JimCloseFd(inputId); + } + if (outputId != JIM_BAD_FD) { + JimCloseFd(outputId); + } + inputId = pipeIds[0]; + pipeIds[0] = pipeIds[1] = JIM_BAD_FD; + } + *pidArrayPtr = pidPtr; + + + cleanup: + if (inputId != JIM_BAD_FD) { + JimCloseFd(inputId); + } + if (lastOutputId != JIM_BAD_FD) { + JimCloseFd(lastOutputId); + } + if (errorId != JIM_BAD_FD) { + JimCloseFd(errorId); + } + Jim_Free(arg_array); + + JimRestoreEnv(save_environ); + + return numPids; + + + error: + if ((inPipePtr != NULL) && (*inPipePtr != JIM_BAD_FD)) { + JimCloseFd(*inPipePtr); + *inPipePtr = JIM_BAD_FD; + } + if ((outPipePtr != NULL) && (*outPipePtr != JIM_BAD_FD)) { + JimCloseFd(*outPipePtr); + *outPipePtr = JIM_BAD_FD; + } + if ((errFilePtr != NULL) && (*errFilePtr != JIM_BAD_FD)) { + JimCloseFd(*errFilePtr); + *errFilePtr = JIM_BAD_FD; + } + if (pipeIds[0] != JIM_BAD_FD) { + JimCloseFd(pipeIds[0]); + } + if (pipeIds[1] != JIM_BAD_FD) { + JimCloseFd(pipeIds[1]); + } + if (pidPtr != NULL) { + for (i = 0; i < numPids; i++) { + if (pidPtr[i] != JIM_BAD_PID) { + JimDetachPids(interp, 1, &pidPtr[i]); + } + } + Jim_Free(pidPtr); + } + numPids = -1; + goto cleanup; +} + + +static int JimCleanupChildren(Jim_Interp *interp, int numPids, pidtype *pidPtr, fdtype errorId) +{ + struct WaitInfoTable *table = Jim_CmdPrivData(interp); + int result = JIM_OK; + int i; + + for (i = 0; i < numPids; i++) { + int waitStatus = 0; + if (JimWaitForProcess(table, pidPtr[i], &waitStatus) != JIM_BAD_PID) { + if (JimCheckWaitStatus(interp, pidPtr[i], waitStatus) != JIM_OK) { + result = JIM_ERR; + } + } + } + Jim_Free(pidPtr); + + if (errorId != JIM_BAD_FD) { + JimRewindFd(errorId); + if (JimAppendStreamToString(interp, errorId, Jim_GetResult(interp)) != JIM_OK) { + result = JIM_ERR; + } + } + + Jim_RemoveTrailingNewline(Jim_GetResult(interp)); + + return result; +} + +int Jim_execInit(Jim_Interp *interp) +{ + if (Jim_PackageProvide(interp, "exec", "1.0", JIM_ERRMSG)) + return JIM_ERR; + +#ifdef SIGPIPE + (void)signal(SIGPIPE, SIG_IGN); +#endif + + Jim_CreateCommand(interp, "exec", Jim_ExecCmd, JimAllocWaitInfoTable(), JimFreeWaitInfoTable); + return JIM_OK; +} + +#if defined(__MINGW32__) + + +static SECURITY_ATTRIBUTES *JimStdSecAttrs(void) +{ + static SECURITY_ATTRIBUTES secAtts; + + secAtts.nLength = sizeof(SECURITY_ATTRIBUTES); + secAtts.lpSecurityDescriptor = NULL; + secAtts.bInheritHandle = TRUE; + return &secAtts; +} + +static int JimErrno(void) +{ + switch (GetLastError()) { 
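+ /* map the most common Win32 error codes onto their closest errno equivalents */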
+ case ERROR_FILE_NOT_FOUND: return ENOENT; + case ERROR_PATH_NOT_FOUND: return ENOENT; + case ERROR_TOO_MANY_OPEN_FILES: return EMFILE; + case ERROR_ACCESS_DENIED: return EACCES; + case ERROR_INVALID_HANDLE: return EBADF; + case ERROR_BAD_ENVIRONMENT: return E2BIG; + case ERROR_BAD_FORMAT: return ENOEXEC; + case ERROR_INVALID_ACCESS: return EACCES; + case ERROR_INVALID_DRIVE: return ENOENT; + case ERROR_CURRENT_DIRECTORY: return EACCES; + case ERROR_NOT_SAME_DEVICE: return EXDEV; + case ERROR_NO_MORE_FILES: return ENOENT; + case ERROR_WRITE_PROTECT: return EROFS; + case ERROR_BAD_UNIT: return ENXIO; + case ERROR_NOT_READY: return EBUSY; + case ERROR_BAD_COMMAND: return EIO; + case ERROR_CRC: return EIO; + case ERROR_BAD_LENGTH: return EIO; + case ERROR_SEEK: return EIO; + case ERROR_WRITE_FAULT: return EIO; + case ERROR_READ_FAULT: return EIO; + case ERROR_GEN_FAILURE: return EIO; + case ERROR_SHARING_VIOLATION: return EACCES; + case ERROR_LOCK_VIOLATION: return EACCES; + case ERROR_SHARING_BUFFER_EXCEEDED: return ENFILE; + case ERROR_HANDLE_DISK_FULL: return ENOSPC; + case ERROR_NOT_SUPPORTED: return ENODEV; + case ERROR_REM_NOT_LIST: return EBUSY; + case ERROR_DUP_NAME: return EEXIST; + case ERROR_BAD_NETPATH: return ENOENT; + case ERROR_NETWORK_BUSY: return EBUSY; + case ERROR_DEV_NOT_EXIST: return ENODEV; + case ERROR_TOO_MANY_CMDS: return EAGAIN; + case ERROR_ADAP_HDW_ERR: return EIO; + case ERROR_BAD_NET_RESP: return EIO; + case ERROR_UNEXP_NET_ERR: return EIO; + case ERROR_NETNAME_DELETED: return ENOENT; + case ERROR_NETWORK_ACCESS_DENIED: return EACCES; + case ERROR_BAD_DEV_TYPE: return ENODEV; + case ERROR_BAD_NET_NAME: return ENOENT; + case ERROR_TOO_MANY_NAMES: return ENFILE; + case ERROR_TOO_MANY_SESS: return EIO; + case ERROR_SHARING_PAUSED: return EAGAIN; + case ERROR_REDIR_PAUSED: return EAGAIN; + case ERROR_FILE_EXISTS: return EEXIST; + case ERROR_CANNOT_MAKE: return ENOSPC; + case ERROR_OUT_OF_STRUCTURES: return ENFILE; + case ERROR_ALREADY_ASSIGNED: return EEXIST; + case ERROR_INVALID_PASSWORD: return EPERM; + case ERROR_NET_WRITE_FAULT: return EIO; + case ERROR_NO_PROC_SLOTS: return EAGAIN; + case ERROR_DISK_CHANGE: return EXDEV; + case ERROR_BROKEN_PIPE: return EPIPE; + case ERROR_OPEN_FAILED: return ENOENT; + case ERROR_DISK_FULL: return ENOSPC; + case ERROR_NO_MORE_SEARCH_HANDLES: return EMFILE; + case ERROR_INVALID_TARGET_HANDLE: return EBADF; + case ERROR_INVALID_NAME: return ENOENT; + case ERROR_PROC_NOT_FOUND: return ESRCH; + case ERROR_WAIT_NO_CHILDREN: return ECHILD; + case ERROR_CHILD_NOT_COMPLETE: return ECHILD; + case ERROR_DIRECT_ACCESS_HANDLE: return EBADF; + case ERROR_SEEK_ON_DEVICE: return ESPIPE; + case ERROR_BUSY_DRIVE: return EAGAIN; + case ERROR_DIR_NOT_EMPTY: return EEXIST; + case ERROR_NOT_LOCKED: return EACCES; + case ERROR_BAD_PATHNAME: return ENOENT; + case ERROR_LOCK_FAILED: return EACCES; + case ERROR_ALREADY_EXISTS: return EEXIST; + case ERROR_FILENAME_EXCED_RANGE: return ENAMETOOLONG; + case ERROR_BAD_PIPE: return EPIPE; + case ERROR_PIPE_BUSY: return EAGAIN; + case ERROR_PIPE_NOT_CONNECTED: return EPIPE; + case ERROR_DIRECTORY: return ENOTDIR; + } + return EINVAL; +} + +static int JimPipe(fdtype pipefd[2]) +{ + if (CreatePipe(&pipefd[0], &pipefd[1], NULL, 0)) { + return 0; + } + return -1; +} + +static fdtype JimDupFd(fdtype infd) +{ + fdtype dupfd; + pidtype pid = GetCurrentProcess(); + + if (DuplicateHandle(pid, infd, pid, &dupfd, 0, TRUE, DUPLICATE_SAME_ACCESS)) { + return dupfd; + } + return JIM_BAD_FD; +} + +static int 
JimRewindFd(fdtype fd) +{ + return SetFilePointer(fd, 0, NULL, FILE_BEGIN) == INVALID_SET_FILE_POINTER ? -1 : 0; +} + +#if 0 +static int JimReadFd(fdtype fd, char *buffer, size_t len) +{ + DWORD num; + + if (ReadFile(fd, buffer, len, &num, NULL)) { + return num; + } + if (GetLastError() == ERROR_HANDLE_EOF || GetLastError() == ERROR_BROKEN_PIPE) { + return 0; + } + return -1; +} +#endif + +static FILE *JimFdOpenForRead(fdtype fd) +{ + return _fdopen(_open_osfhandle((int)fd, _O_RDONLY | _O_TEXT), "r"); +} + +static fdtype JimFileno(FILE *fh) +{ + return (fdtype)_get_osfhandle(_fileno(fh)); +} + +static fdtype JimOpenForRead(const char *filename) +{ + return CreateFile(filename, GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE, + JimStdSecAttrs(), OPEN_EXISTING, 0, NULL); +} + +static fdtype JimOpenForWrite(const char *filename, int append) +{ + return CreateFile(filename, append ? FILE_APPEND_DATA : GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, + JimStdSecAttrs(), append ? OPEN_ALWAYS : CREATE_ALWAYS, 0, (HANDLE) NULL); +} + +static FILE *JimFdOpenForWrite(fdtype fd) +{ + return _fdopen(_open_osfhandle((int)fd, _O_TEXT), "w"); +} + +static pidtype JimWaitPid(pidtype pid, int *status, int nohang) +{ + DWORD ret = WaitForSingleObject(pid, nohang ? 0 : INFINITE); + if (ret == WAIT_TIMEOUT || ret == WAIT_FAILED) { + + return JIM_BAD_PID; + } + GetExitCodeProcess(pid, &ret); + *status = ret; + CloseHandle(pid); + return pid; +} + +static HANDLE JimCreateTemp(Jim_Interp *interp, const char *contents, int len) +{ + char name[MAX_PATH]; + HANDLE handle; + + if (!GetTempPath(MAX_PATH, name) || !GetTempFileName(name, "JIM", 0, name)) { + return JIM_BAD_FD; + } + + handle = CreateFile(name, GENERIC_READ | GENERIC_WRITE, 0, JimStdSecAttrs(), + CREATE_ALWAYS, FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE, + NULL); + + if (handle == INVALID_HANDLE_VALUE) { + goto error; + } + + if (contents != NULL) { + + FILE *fh = JimFdOpenForWrite(JimDupFd(handle)); + if (fh == NULL) { + goto error; + } + + if (fwrite(contents, len, 1, fh) != 1) { + fclose(fh); + goto error; + } + fseek(fh, 0, SEEK_SET); + fclose(fh); + } + return handle; + + error: + Jim_SetResultErrno(interp, "failed to create temp file"); + CloseHandle(handle); + DeleteFile(name); + return JIM_BAD_FD; +} + +static int +JimWinFindExecutable(const char *originalName, char fullPath[MAX_PATH]) +{ + int i; + static char extensions[][5] = {".exe", "", ".bat"}; + + for (i = 0; i < (int) (sizeof(extensions) / sizeof(extensions[0])); i++) { + lstrcpyn(fullPath, originalName, MAX_PATH - 5); + lstrcat(fullPath, extensions[i]); + + if (SearchPath(NULL, fullPath, NULL, MAX_PATH, fullPath, NULL) == 0) { + continue; + } + if (GetFileAttributes(fullPath) & FILE_ATTRIBUTE_DIRECTORY) { + continue; + } + return 0; + } + + return -1; +} + +static char **JimSaveEnv(char **env) +{ + return env; +} + +static void JimRestoreEnv(char **env) +{ + JimFreeEnv(env, Jim_GetEnviron()); +} + +static Jim_Obj * +JimWinBuildCommandLine(Jim_Interp *interp, char **argv) +{ + char *start, *special; + int quote, i; + + Jim_Obj *strObj = Jim_NewStringObj(interp, "", 0); + + for (i = 0; argv[i]; i++) { + if (i > 0) { + Jim_AppendString(interp, strObj, " ", 1); + } + + if (argv[i][0] == '\0') { + quote = 1; + } + else { + quote = 0; + for (start = argv[i]; *start != '\0'; start++) { + if (isspace(UCHAR(*start))) { + quote = 1; + break; + } + } + } + if (quote) { + Jim_AppendString(interp, strObj, "\"" , 1); + } + + start = argv[i]; + for (special = argv[i]; ; ) { + if 
((*special == '\\') && (special[1] == '\\' || + special[1] == '"' || (quote && special[1] == '\0'))) { + Jim_AppendString(interp, strObj, start, special - start); + start = special; + while (1) { + special++; + if (*special == '"' || (quote && *special == '\0')) { + + Jim_AppendString(interp, strObj, start, special - start); + break; + } + if (*special != '\\') { + break; + } + } + Jim_AppendString(interp, strObj, start, special - start); + start = special; + } + if (*special == '"') { + if (special == start) { + Jim_AppendString(interp, strObj, "\"", 1); + } + else { + Jim_AppendString(interp, strObj, start, special - start); + } + Jim_AppendString(interp, strObj, "\\\"", 2); + start = special + 1; + } + if (*special == '\0') { + break; + } + special++; + } + Jim_AppendString(interp, strObj, start, special - start); + if (quote) { + Jim_AppendString(interp, strObj, "\"", 1); + } + } + return strObj; +} + +static pidtype +JimStartWinProcess(Jim_Interp *interp, char **argv, char *env, fdtype inputId, fdtype outputId, fdtype errorId) +{ + STARTUPINFO startInfo; + PROCESS_INFORMATION procInfo; + HANDLE hProcess, h; + char execPath[MAX_PATH]; + pidtype pid = JIM_BAD_PID; + Jim_Obj *cmdLineObj; + + if (JimWinFindExecutable(argv[0], execPath) < 0) { + return JIM_BAD_PID; + } + argv[0] = execPath; + + hProcess = GetCurrentProcess(); + cmdLineObj = JimWinBuildCommandLine(interp, argv); + + + ZeroMemory(&startInfo, sizeof(startInfo)); + startInfo.cb = sizeof(startInfo); + startInfo.dwFlags = STARTF_USESTDHANDLES; + startInfo.hStdInput = INVALID_HANDLE_VALUE; + startInfo.hStdOutput= INVALID_HANDLE_VALUE; + startInfo.hStdError = INVALID_HANDLE_VALUE; + + if (inputId == JIM_BAD_FD) { + if (CreatePipe(&startInfo.hStdInput, &h, JimStdSecAttrs(), 0) != FALSE) { + CloseHandle(h); + } + } else { + DuplicateHandle(hProcess, inputId, hProcess, &startInfo.hStdInput, + 0, TRUE, DUPLICATE_SAME_ACCESS); + } + if (startInfo.hStdInput == JIM_BAD_FD) { + goto end; + } + + if (outputId == JIM_BAD_FD) { + startInfo.hStdOutput = CreateFile("NUL:", GENERIC_WRITE, 0, + JimStdSecAttrs(), OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL); + } else { + DuplicateHandle(hProcess, outputId, hProcess, &startInfo.hStdOutput, + 0, TRUE, DUPLICATE_SAME_ACCESS); + } + if (startInfo.hStdOutput == JIM_BAD_FD) { + goto end; + } + + if (errorId == JIM_BAD_FD) { + + startInfo.hStdError = CreateFile("NUL:", GENERIC_WRITE, 0, + JimStdSecAttrs(), OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); + } else { + DuplicateHandle(hProcess, errorId, hProcess, &startInfo.hStdError, + 0, TRUE, DUPLICATE_SAME_ACCESS); + } + if (startInfo.hStdError == JIM_BAD_FD) { + goto end; + } + + if (!CreateProcess(NULL, (char *)Jim_String(cmdLineObj), NULL, NULL, TRUE, + 0, env, NULL, &startInfo, &procInfo)) { + goto end; + } + + + WaitForInputIdle(procInfo.hProcess, 5000); + CloseHandle(procInfo.hThread); + + pid = procInfo.hProcess; + + end: + Jim_FreeNewObj(interp, cmdLineObj); + if (startInfo.hStdInput != JIM_BAD_FD) { + CloseHandle(startInfo.hStdInput); + } + if (startInfo.hStdOutput != JIM_BAD_FD) { + CloseHandle(startInfo.hStdOutput); + } + if (startInfo.hStdError != JIM_BAD_FD) { + CloseHandle(startInfo.hStdError); + } + return pid; +} +#else + +static int JimOpenForWrite(const char *filename, int append) +{ + return open(filename, O_WRONLY | O_CREAT | (append ? 
O_APPEND : O_TRUNC), 0666); +} + +static int JimRewindFd(int fd) +{ + return lseek(fd, 0L, SEEK_SET); +} + +static int JimCreateTemp(Jim_Interp *interp, const char *contents, int len) +{ + int fd = Jim_MakeTempFile(interp, NULL); + + if (fd != JIM_BAD_FD) { + unlink(Jim_String(Jim_GetResult(interp))); + if (contents) { + if (write(fd, contents, len) != len) { + Jim_SetResultErrno(interp, "couldn't write temp file"); + close(fd); + return -1; + } + lseek(fd, 0L, SEEK_SET); + } + } + return fd; +} + +static char **JimSaveEnv(char **env) +{ + char **saveenv = Jim_GetEnviron(); + Jim_SetEnviron(env); + return saveenv; +} + +static void JimRestoreEnv(char **env) +{ + JimFreeEnv(Jim_GetEnviron(), env); + Jim_SetEnviron(env); +} +#endif +#endif + + +#ifndef _XOPEN_SOURCE +#define _XOPEN_SOURCE 500 +#endif + +#include +#include +#include +#include + + +#ifdef HAVE_SYS_TIME_H +#include +#endif + +static int clock_cmd_format(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + + char buf[100]; + time_t t; + long seconds; + + const char *format = "%a %b %d %H:%M:%S %Z %Y"; + + if (argc == 2 || (argc == 3 && !Jim_CompareStringImmediate(interp, argv[1], "-format"))) { + return -1; + } + + if (argc == 3) { + format = Jim_String(argv[2]); + } + + if (Jim_GetLong(interp, argv[0], &seconds) != JIM_OK) { + return JIM_ERR; + } + t = seconds; + + if (strftime(buf, sizeof(buf), format, localtime(&t)) == 0) { + Jim_SetResultString(interp, "format string too long", -1); + return JIM_ERR; + } + + Jim_SetResultString(interp, buf, -1); + + return JIM_OK; +} + +#ifdef HAVE_STRPTIME +static int clock_cmd_scan(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + char *pt; + struct tm tm; + time_t now = time(0); + + if (!Jim_CompareStringImmediate(interp, argv[1], "-format")) { + return -1; + } + + + localtime_r(&now, &tm); + + pt = strptime(Jim_String(argv[0]), Jim_String(argv[2]), &tm); + if (pt == 0 || *pt != 0) { + Jim_SetResultString(interp, "Failed to parse time according to format", -1); + return JIM_ERR; + } + + + Jim_SetResultInt(interp, mktime(&tm)); + + return JIM_OK; +} +#endif + +static int clock_cmd_seconds(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_SetResultInt(interp, time(NULL)); + + return JIM_OK; +} + +static int clock_cmd_micros(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + struct timeval tv; + + gettimeofday(&tv, NULL); + + Jim_SetResultInt(interp, (jim_wide) tv.tv_sec * 1000000 + tv.tv_usec); + + return JIM_OK; +} + +static int clock_cmd_millis(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + struct timeval tv; + + gettimeofday(&tv, NULL); + + Jim_SetResultInt(interp, (jim_wide) tv.tv_sec * 1000 + tv.tv_usec / 1000); + + return JIM_OK; +} + +static const jim_subcmd_type clock_command_table[] = { + { "seconds", + NULL, + clock_cmd_seconds, + 0, + 0, + + }, + { "clicks", + NULL, + clock_cmd_micros, + 0, + 0, + + }, + { "microseconds", + NULL, + clock_cmd_micros, + 0, + 0, + + }, + { "milliseconds", + NULL, + clock_cmd_millis, + 0, + 0, + + }, + { "format", + "seconds ?-format format?", + clock_cmd_format, + 1, + 3, + + }, +#ifdef HAVE_STRPTIME + { "scan", + "str -format format", + clock_cmd_scan, + 3, + 3, + + }, +#endif + { NULL } +}; + +int Jim_clockInit(Jim_Interp *interp) +{ + if (Jim_PackageProvide(interp, "clock", "1.0", JIM_ERRMSG)) + return JIM_ERR; + + Jim_CreateCommand(interp, "clock", Jim_SubCmdProc, (void *)clock_command_table, NULL); + return JIM_OK; +} + +#include +#include +#include +#include +#include + + +static int 
array_cmd_exists(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + + Jim_SetResultInt(interp, Jim_GetVariable(interp, argv[0], 0) != 0); + return JIM_OK; +} + +static int array_cmd_get(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *objPtr = Jim_GetVariable(interp, argv[0], JIM_NONE); + Jim_Obj *patternObj; + + if (!objPtr) { + return JIM_OK; + } + + patternObj = (argc == 1) ? NULL : argv[1]; + + + if (patternObj == NULL || Jim_CompareStringImmediate(interp, patternObj, "*")) { + if (Jim_IsList(objPtr) && Jim_ListLength(interp, objPtr) % 2 == 0) { + + Jim_SetResult(interp, objPtr); + return JIM_OK; + } + } + + + return Jim_DictValues(interp, objPtr, patternObj); +} + +static int array_cmd_names(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *objPtr = Jim_GetVariable(interp, argv[0], JIM_NONE); + + if (!objPtr) { + return JIM_OK; + } + + return Jim_DictKeys(interp, objPtr, argc == 1 ? NULL : argv[1]); +} + +static int array_cmd_unset(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int i; + int len; + Jim_Obj *resultObj; + Jim_Obj *objPtr; + Jim_Obj **dictValuesObj; + + if (argc == 1 || Jim_CompareStringImmediate(interp, argv[1], "*")) { + + Jim_UnsetVariable(interp, argv[0], JIM_NONE); + return JIM_OK; + } + + objPtr = Jim_GetVariable(interp, argv[0], JIM_NONE); + + if (objPtr == NULL) { + + return JIM_OK; + } + + if (Jim_DictPairs(interp, objPtr, &dictValuesObj, &len) != JIM_OK) { + return JIM_ERR; + } + + + resultObj = Jim_NewDictObj(interp, NULL, 0); + + for (i = 0; i < len; i += 2) { + if (!Jim_StringMatchObj(interp, argv[1], dictValuesObj[i], 0)) { + Jim_DictAddElement(interp, resultObj, dictValuesObj[i], dictValuesObj[i + 1]); + } + } + Jim_Free(dictValuesObj); + + Jim_SetVariable(interp, argv[0], resultObj); + return JIM_OK; +} + +static int array_cmd_size(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *objPtr; + int len = 0; + + + objPtr = Jim_GetVariable(interp, argv[0], JIM_NONE); + if (objPtr) { + len = Jim_DictSize(interp, objPtr); + if (len < 0) { + return JIM_ERR; + } + } + + Jim_SetResultInt(interp, len); + + return JIM_OK; +} + +static int array_cmd_stat(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *objPtr = Jim_GetVariable(interp, argv[0], JIM_NONE); + if (objPtr) { + return Jim_DictInfo(interp, objPtr); + } + Jim_SetResultFormatted(interp, "\"%#s\" isn't an array", argv[0], NULL); + return JIM_ERR; +} + +static int array_cmd_set(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int i; + int len; + Jim_Obj *listObj = argv[1]; + Jim_Obj *dictObj; + + len = Jim_ListLength(interp, listObj); + if (len % 2) { + Jim_SetResultString(interp, "list must have an even number of elements", -1); + return JIM_ERR; + } + + dictObj = Jim_GetVariable(interp, argv[0], JIM_UNSHARED); + if (!dictObj) { + + return Jim_SetVariable(interp, argv[0], listObj); + } + else if (Jim_DictSize(interp, dictObj) < 0) { + return JIM_ERR; + } + + if (Jim_IsShared(dictObj)) { + dictObj = Jim_DuplicateObj(interp, dictObj); + } + + for (i = 0; i < len; i += 2) { + Jim_Obj *nameObj; + Jim_Obj *valueObj; + + Jim_ListIndex(interp, listObj, i, &nameObj, JIM_NONE); + Jim_ListIndex(interp, listObj, i + 1, &valueObj, JIM_NONE); + + Jim_DictAddElement(interp, dictObj, nameObj, valueObj); + } + return Jim_SetVariable(interp, argv[0], dictObj); +} + +static const jim_subcmd_type array_command_table[] = { + { "exists", + "arrayName", + array_cmd_exists, + 1, + 1, + + }, + { "get", + "arrayName ?pattern?", + array_cmd_get, + 1, 
+ 2, + + }, + { "names", + "arrayName ?pattern?", + array_cmd_names, + 1, + 2, + + }, + { "set", + "arrayName list", + array_cmd_set, + 2, + 2, + + }, + { "size", + "arrayName", + array_cmd_size, + 1, + 1, + + }, + { "stat", + "arrayName", + array_cmd_stat, + 1, + 1, + + }, + { "unset", + "arrayName ?pattern?", + array_cmd_unset, + 1, + 2, + + }, + { NULL + } +}; + +int Jim_arrayInit(Jim_Interp *interp) +{ + if (Jim_PackageProvide(interp, "array", "1.0", JIM_ERRMSG)) + return JIM_ERR; + + Jim_CreateCommand(interp, "array", Jim_SubCmdProc, (void *)array_command_table, NULL); + return JIM_OK; +} +int Jim_InitStaticExtensions(Jim_Interp *interp) +{ +extern int Jim_bootstrapInit(Jim_Interp *); +extern int Jim_aioInit(Jim_Interp *); +extern int Jim_readdirInit(Jim_Interp *); +extern int Jim_globInit(Jim_Interp *); +extern int Jim_regexpInit(Jim_Interp *); +extern int Jim_fileInit(Jim_Interp *); +extern int Jim_execInit(Jim_Interp *); +extern int Jim_clockInit(Jim_Interp *); +extern int Jim_arrayInit(Jim_Interp *); +extern int Jim_stdlibInit(Jim_Interp *); +extern int Jim_tclcompatInit(Jim_Interp *); +Jim_bootstrapInit(interp); +Jim_aioInit(interp); +Jim_readdirInit(interp); +Jim_globInit(interp); +Jim_regexpInit(interp); +Jim_fileInit(interp); +Jim_execInit(interp); +Jim_clockInit(interp); +Jim_arrayInit(interp); +Jim_stdlibInit(interp); +Jim_tclcompatInit(interp); +return JIM_OK; +} +#define JIM_OPTIMIZATION + +#include +#include + +#include +#include +#include +#include +#include +#include +#include +#include + + +#ifdef HAVE_SYS_TIME_H +#include +#endif +#ifdef HAVE_BACKTRACE +#include +#endif +#ifdef HAVE_CRT_EXTERNS_H +#include +#endif + + +#include + + + + + +#ifndef TCL_LIBRARY +#define TCL_LIBRARY "." +#endif +#ifndef TCL_PLATFORM_OS +#define TCL_PLATFORM_OS "unknown" +#endif +#ifndef TCL_PLATFORM_PLATFORM +#define TCL_PLATFORM_PLATFORM "unknown" +#endif +#ifndef TCL_PLATFORM_PATH_SEPARATOR +#define TCL_PLATFORM_PATH_SEPARATOR ":" +#endif + + + + + + + +#ifdef JIM_MAINTAINER +#define JIM_DEBUG_COMMAND +#define JIM_DEBUG_PANIC +#endif + + + +#define JIM_INTEGER_SPACE 24 + +const char *jim_tt_name(int type); + +#ifdef JIM_DEBUG_PANIC +static void JimPanicDump(int fail_condition, const char *fmt, ...); +#define JimPanic(X) JimPanicDump X +#else +#define JimPanic(X) +#endif + + +static char JimEmptyStringRep[] = ""; + +static void JimFreeCallFrame(Jim_Interp *interp, Jim_CallFrame *cf, int action); +static int ListSetIndex(Jim_Interp *interp, Jim_Obj *listPtr, int listindex, Jim_Obj *newObjPtr, + int flags); +static int JimDeleteLocalProcs(Jim_Interp *interp, Jim_Stack *localCommands); +static Jim_Obj *JimExpandDictSugar(Jim_Interp *interp, Jim_Obj *objPtr); +static void SetDictSubstFromAny(Jim_Interp *interp, Jim_Obj *objPtr); +static Jim_Obj **JimDictPairs(Jim_Obj *dictPtr, int *len); +static void JimSetFailedEnumResult(Jim_Interp *interp, const char *arg, const char *badtype, + const char *prefix, const char *const *tablePtr, const char *name); +static int JimCallProcedure(Jim_Interp *interp, Jim_Cmd *cmd, int argc, Jim_Obj *const *argv); +static int JimGetWideNoErr(Jim_Interp *interp, Jim_Obj *objPtr, jim_wide * widePtr); +static int JimSign(jim_wide w); +static int JimValidName(Jim_Interp *interp, const char *type, Jim_Obj *nameObjPtr); +static void JimPrngSeed(Jim_Interp *interp, unsigned char *seed, int seedLen); +static void JimRandomBytes(Jim_Interp *interp, void *dest, unsigned int len); + + + +#define JimWideValue(objPtr) (objPtr)->internalRep.wideValue + +#define 
JimObjTypeName(O) ((O)->typePtr ? (O)->typePtr->name : "none") + +static int utf8_tounicode_case(const char *s, int *uc, int upper) +{ + int l = utf8_tounicode(s, uc); + if (upper) { + *uc = utf8_upper(*uc); + } + return l; +} + + +#define JIM_CHARSET_SCAN 2 +#define JIM_CHARSET_GLOB 0 + +static const char *JimCharsetMatch(const char *pattern, int c, int flags) +{ + int not = 0; + int pchar; + int match = 0; + int nocase = 0; + + if (flags & JIM_NOCASE) { + nocase++; + c = utf8_upper(c); + } + + if (flags & JIM_CHARSET_SCAN) { + if (*pattern == '^') { + not++; + pattern++; + } + + + if (*pattern == ']') { + goto first; + } + } + + while (*pattern && *pattern != ']') { + + if (pattern[0] == '\\') { +first: + pattern += utf8_tounicode_case(pattern, &pchar, nocase); + } + else { + + int start; + int end; + + pattern += utf8_tounicode_case(pattern, &start, nocase); + if (pattern[0] == '-' && pattern[1]) { + + pattern += utf8_tounicode(pattern, &pchar); + pattern += utf8_tounicode_case(pattern, &end, nocase); + + + if ((c >= start && c <= end) || (c >= end && c <= start)) { + match = 1; + } + continue; + } + pchar = start; + } + + if (pchar == c) { + match = 1; + } + } + if (not) { + match = !match; + } + + return match ? pattern : NULL; +} + + + +static int JimGlobMatch(const char *pattern, const char *string, int nocase) +{ + int c; + int pchar; + while (*pattern) { + switch (pattern[0]) { + case '*': + while (pattern[1] == '*') { + pattern++; + } + pattern++; + if (!pattern[0]) { + return 1; + } + while (*string) { + + if (JimGlobMatch(pattern, string, nocase)) + return 1; + string += utf8_tounicode(string, &c); + } + return 0; + + case '?': + string += utf8_tounicode(string, &c); + break; + + case '[': { + string += utf8_tounicode(string, &c); + pattern = JimCharsetMatch(pattern + 1, c, nocase ? JIM_NOCASE : 0); + if (!pattern) { + return 0; + } + if (!*pattern) { + + continue; + } + break; + } + case '\\': + if (pattern[1]) { + pattern++; + } + + default: + string += utf8_tounicode_case(string, &c, nocase); + utf8_tounicode_case(pattern, &pchar, nocase); + if (pchar != c) { + return 0; + } + break; + } + pattern += utf8_tounicode_case(pattern, &pchar, nocase); + if (!*string) { + while (*pattern == '*') { + pattern++; + } + break; + } + } + if (!*pattern && !*string) { + return 1; + } + return 0; +} + +static int JimStringCompare(const char *s1, int l1, const char *s2, int l2) +{ + if (l1 < l2) { + return memcmp(s1, s2, l1) <= 0 ? -1 : 1; + } + else if (l2 < l1) { + return memcmp(s1, s2, l2) >= 0 ? 
1 : -1; + } + else { + return JimSign(memcmp(s1, s2, l1)); + } +} + +static int JimStringCompareLen(const char *s1, const char *s2, int maxchars, int nocase) +{ + while (*s1 && *s2 && maxchars) { + int c1, c2; + s1 += utf8_tounicode_case(s1, &c1, nocase); + s2 += utf8_tounicode_case(s2, &c2, nocase); + if (c1 != c2) { + return JimSign(c1 - c2); + } + maxchars--; + } + if (!maxchars) { + return 0; + } + + if (*s1) { + return 1; + } + if (*s2) { + return -1; + } + return 0; +} + +static int JimStringFirst(const char *s1, int l1, const char *s2, int l2, int idx) +{ + int i; + int l1bytelen; + + if (!l1 || !l2 || l1 > l2) { + return -1; + } + if (idx < 0) + idx = 0; + s2 += utf8_index(s2, idx); + + l1bytelen = utf8_index(s1, l1); + + for (i = idx; i <= l2 - l1; i++) { + int c; + if (memcmp(s2, s1, l1bytelen) == 0) { + return i; + } + s2 += utf8_tounicode(s2, &c); + } + return -1; +} + +static int JimStringLast(const char *s1, int l1, const char *s2, int l2) +{ + const char *p; + + if (!l1 || !l2 || l1 > l2) + return -1; + + + for (p = s2 + l2 - 1; p != s2 - 1; p--) { + if (*p == *s1 && memcmp(s1, p, l1) == 0) { + return p - s2; + } + } + return -1; +} + +#ifdef JIM_UTF8 +static int JimStringLastUtf8(const char *s1, int l1, const char *s2, int l2) +{ + int n = JimStringLast(s1, utf8_index(s1, l1), s2, utf8_index(s2, l2)); + if (n > 0) { + n = utf8_strlen(s2, n); + } + return n; +} +#endif + +static int JimCheckConversion(const char *str, const char *endptr) +{ + if (str[0] == '\0' || str == endptr) { + return JIM_ERR; + } + + if (endptr[0] != '\0') { + while (*endptr) { + if (!isspace(UCHAR(*endptr))) { + return JIM_ERR; + } + endptr++; + } + } + return JIM_OK; +} + +static int JimNumberBase(const char *str, int *base, int *sign) +{ + int i = 0; + + *base = 10; + + while (isspace(UCHAR(str[i]))) { + i++; + } + + if (str[i] == '-') { + *sign = -1; + i++; + } + else { + if (str[i] == '+') { + i++; + } + *sign = 1; + } + + if (str[i] != '0') { + + return 0; + } + + + switch (str[i + 1]) { + case 'x': case 'X': *base = 16; break; + case 'o': case 'O': *base = 8; break; + case 'b': case 'B': *base = 2; break; + default: return 0; + } + i += 2; + + if (str[i] != '-' && str[i] != '+' && !isspace(UCHAR(str[i]))) { + + return i; + } + + *base = 10; + return 0; +} + +static long jim_strtol(const char *str, char **endptr) +{ + int sign; + int base; + int i = JimNumberBase(str, &base, &sign); + + if (base != 10) { + long value = strtol(str + i, endptr, base); + if (endptr == NULL || *endptr != str + i) { + return value * sign; + } + } + + + return strtol(str, endptr, 10); +} + + +static jim_wide jim_strtoull(const char *str, char **endptr) +{ +#ifdef HAVE_LONG_LONG + int sign; + int base; + int i = JimNumberBase(str, &base, &sign); + + if (base != 10) { + jim_wide value = strtoull(str + i, endptr, base); + if (endptr == NULL || *endptr != str + i) { + return value * sign; + } + } + + + return strtoull(str, endptr, 10); +#else + return (unsigned long)jim_strtol(str, endptr); +#endif +} + +int Jim_StringToWide(const char *str, jim_wide * widePtr, int base) +{ + char *endptr; + + if (base) { + *widePtr = strtoull(str, &endptr, base); + } + else { + *widePtr = jim_strtoull(str, &endptr); + } + + return JimCheckConversion(str, endptr); +} + +int Jim_StringToDouble(const char *str, double *doublePtr) +{ + char *endptr; + + + errno = 0; + + *doublePtr = strtod(str, &endptr); + + return JimCheckConversion(str, endptr); +} + +static jim_wide JimPowWide(jim_wide b, jim_wide e) +{ + jim_wide i, res = 1; + + if ((b == 
0 && e != 0) || (e < 0)) + return 0; + for (i = 0; i < e; i++) { + res *= b; + } + return res; +} + +#ifdef JIM_DEBUG_PANIC +static void JimPanicDump(int condition, const char *fmt, ...) +{ + va_list ap; + + if (!condition) { + return; + } + + va_start(ap, fmt); + + fprintf(stderr, "\nJIM INTERPRETER PANIC: "); + vfprintf(stderr, fmt, ap); + fprintf(stderr, "\n\n"); + va_end(ap); + +#ifdef HAVE_BACKTRACE + { + void *array[40]; + int size, i; + char **strings; + + size = backtrace(array, 40); + strings = backtrace_symbols(array, size); + for (i = 0; i < size; i++) + fprintf(stderr, "[backtrace] %s\n", strings[i]); + fprintf(stderr, "[backtrace] Include the above lines and the output\n"); + fprintf(stderr, "[backtrace] of 'nm ' in the bug report.\n"); + } +#endif + + exit(1); +} +#endif + + +void *Jim_Alloc(int size) +{ + return size ? malloc(size) : NULL; +} + +void Jim_Free(void *ptr) +{ + free(ptr); +} + +void *Jim_Realloc(void *ptr, int size) +{ + return realloc(ptr, size); +} + +char *Jim_StrDup(const char *s) +{ + return strdup(s); +} + +char *Jim_StrDupLen(const char *s, int l) +{ + char *copy = Jim_Alloc(l + 1); + + memcpy(copy, s, l + 1); + copy[l] = 0; + return copy; +} + + + +static jim_wide JimClock(void) +{ + struct timeval tv; + + gettimeofday(&tv, NULL); + return (jim_wide) tv.tv_sec * 1000000 + tv.tv_usec; +} + + + +static void JimExpandHashTableIfNeeded(Jim_HashTable *ht); +static unsigned int JimHashTableNextPower(unsigned int size); +static Jim_HashEntry *JimInsertHashEntry(Jim_HashTable *ht, const void *key, int replace); + + + + +unsigned int Jim_IntHashFunction(unsigned int key) +{ + key += ~(key << 15); + key ^= (key >> 10); + key += (key << 3); + key ^= (key >> 6); + key += ~(key << 11); + key ^= (key >> 16); + return key; +} + +unsigned int Jim_GenHashFunction(const unsigned char *buf, int len) +{ + unsigned int h = 0; + + while (len--) + h += (h << 3) + *buf++; + return h; +} + + + + +static void JimResetHashTable(Jim_HashTable *ht) +{ + ht->table = NULL; + ht->size = 0; + ht->sizemask = 0; + ht->used = 0; + ht->collisions = 0; +#ifdef JIM_RANDOMISE_HASH + ht->uniq = (rand() ^ time(NULL) ^ clock()); +#else + ht->uniq = 0; +#endif +} + +static void JimInitHashTableIterator(Jim_HashTable *ht, Jim_HashTableIterator *iter) +{ + iter->ht = ht; + iter->index = -1; + iter->entry = NULL; + iter->nextEntry = NULL; +} + + +int Jim_InitHashTable(Jim_HashTable *ht, const Jim_HashTableType *type, void *privDataPtr) +{ + JimResetHashTable(ht); + ht->type = type; + ht->privdata = privDataPtr; + return JIM_OK; +} + +void Jim_ResizeHashTable(Jim_HashTable *ht) +{ + int minimal = ht->used; + + if (minimal < JIM_HT_INITIAL_SIZE) + minimal = JIM_HT_INITIAL_SIZE; + Jim_ExpandHashTable(ht, minimal); +} + + +void Jim_ExpandHashTable(Jim_HashTable *ht, unsigned int size) +{ + Jim_HashTable n; + unsigned int realsize = JimHashTableNextPower(size), i; + + if (size <= ht->used) + return; + + Jim_InitHashTable(&n, ht->type, ht->privdata); + n.size = realsize; + n.sizemask = realsize - 1; + n.table = Jim_Alloc(realsize * sizeof(Jim_HashEntry *)); + + n.uniq = ht->uniq; + + + memset(n.table, 0, realsize * sizeof(Jim_HashEntry *)); + + n.used = ht->used; + for (i = 0; ht->used > 0; i++) { + Jim_HashEntry *he, *nextHe; + + if (ht->table[i] == NULL) + continue; + + + he = ht->table[i]; + while (he) { + unsigned int h; + + nextHe = he->next; + + h = Jim_HashKey(ht, he->key) & n.sizemask; + he->next = n.table[h]; + n.table[h] = he; + ht->used--; + + he = nextHe; + } + } + assert(ht->used == 0); + 
Jim_Free(ht->table); + + + *ht = n; +} + + +int Jim_AddHashEntry(Jim_HashTable *ht, const void *key, void *val) +{ + Jim_HashEntry *entry; + + entry = JimInsertHashEntry(ht, key, 0); + if (entry == NULL) + return JIM_ERR; + + + Jim_SetHashKey(ht, entry, key); + Jim_SetHashVal(ht, entry, val); + return JIM_OK; +} + + +int Jim_ReplaceHashEntry(Jim_HashTable *ht, const void *key, void *val) +{ + int existed; + Jim_HashEntry *entry; + + entry = JimInsertHashEntry(ht, key, 1); + if (entry->key) { + if (ht->type->valDestructor && ht->type->valDup) { + void *newval = ht->type->valDup(ht->privdata, val); + ht->type->valDestructor(ht->privdata, entry->u.val); + entry->u.val = newval; + } + else { + Jim_FreeEntryVal(ht, entry); + Jim_SetHashVal(ht, entry, val); + } + existed = 1; + } + else { + + Jim_SetHashKey(ht, entry, key); + Jim_SetHashVal(ht, entry, val); + existed = 0; + } + + return existed; +} + + +int Jim_DeleteHashEntry(Jim_HashTable *ht, const void *key) +{ + unsigned int h; + Jim_HashEntry *he, *prevHe; + + if (ht->used == 0) + return JIM_ERR; + h = Jim_HashKey(ht, key) & ht->sizemask; + he = ht->table[h]; + + prevHe = NULL; + while (he) { + if (Jim_CompareHashKeys(ht, key, he->key)) { + + if (prevHe) + prevHe->next = he->next; + else + ht->table[h] = he->next; + Jim_FreeEntryKey(ht, he); + Jim_FreeEntryVal(ht, he); + Jim_Free(he); + ht->used--; + return JIM_OK; + } + prevHe = he; + he = he->next; + } + return JIM_ERR; +} + + +int Jim_FreeHashTable(Jim_HashTable *ht) +{ + unsigned int i; + + + for (i = 0; ht->used > 0; i++) { + Jim_HashEntry *he, *nextHe; + + if ((he = ht->table[i]) == NULL) + continue; + while (he) { + nextHe = he->next; + Jim_FreeEntryKey(ht, he); + Jim_FreeEntryVal(ht, he); + Jim_Free(he); + ht->used--; + he = nextHe; + } + } + + Jim_Free(ht->table); + + JimResetHashTable(ht); + return JIM_OK; +} + +Jim_HashEntry *Jim_FindHashEntry(Jim_HashTable *ht, const void *key) +{ + Jim_HashEntry *he; + unsigned int h; + + if (ht->used == 0) + return NULL; + h = Jim_HashKey(ht, key) & ht->sizemask; + he = ht->table[h]; + while (he) { + if (Jim_CompareHashKeys(ht, key, he->key)) + return he; + he = he->next; + } + return NULL; +} + +Jim_HashTableIterator *Jim_GetHashTableIterator(Jim_HashTable *ht) +{ + Jim_HashTableIterator *iter = Jim_Alloc(sizeof(*iter)); + JimInitHashTableIterator(ht, iter); + return iter; +} + +Jim_HashEntry *Jim_NextHashEntry(Jim_HashTableIterator *iter) +{ + while (1) { + if (iter->entry == NULL) { + iter->index++; + if (iter->index >= (signed)iter->ht->size) + break; + iter->entry = iter->ht->table[iter->index]; + } + else { + iter->entry = iter->nextEntry; + } + if (iter->entry) { + iter->nextEntry = iter->entry->next; + return iter->entry; + } + } + return NULL; +} + + + + +static void JimExpandHashTableIfNeeded(Jim_HashTable *ht) +{ + if (ht->size == 0) + Jim_ExpandHashTable(ht, JIM_HT_INITIAL_SIZE); + if (ht->size == ht->used) + Jim_ExpandHashTable(ht, ht->size * 2); +} + + +static unsigned int JimHashTableNextPower(unsigned int size) +{ + unsigned int i = JIM_HT_INITIAL_SIZE; + + if (size >= 2147483648U) + return 2147483648U; + while (1) { + if (i >= size) + return i; + i *= 2; + } +} + +static Jim_HashEntry *JimInsertHashEntry(Jim_HashTable *ht, const void *key, int replace) +{ + unsigned int h; + Jim_HashEntry *he; + + + JimExpandHashTableIfNeeded(ht); + + + h = Jim_HashKey(ht, key) & ht->sizemask; + + he = ht->table[h]; + while (he) { + if (Jim_CompareHashKeys(ht, key, he->key)) + return replace ? 
he : NULL; + he = he->next; + } + + + he = Jim_Alloc(sizeof(*he)); + he->next = ht->table[h]; + ht->table[h] = he; + ht->used++; + he->key = NULL; + + return he; +} + + + +static unsigned int JimStringCopyHTHashFunction(const void *key) +{ + return Jim_GenHashFunction(key, strlen(key)); +} + +static void *JimStringCopyHTDup(void *privdata, const void *key) +{ + return Jim_StrDup(key); +} + +static int JimStringCopyHTKeyCompare(void *privdata, const void *key1, const void *key2) +{ + return strcmp(key1, key2) == 0; +} + +static void JimStringCopyHTKeyDestructor(void *privdata, void *key) +{ + Jim_Free(key); +} + +static const Jim_HashTableType JimPackageHashTableType = { + JimStringCopyHTHashFunction, + JimStringCopyHTDup, + NULL, + JimStringCopyHTKeyCompare, + JimStringCopyHTKeyDestructor, + NULL +}; + +typedef struct AssocDataValue +{ + Jim_InterpDeleteProc *delProc; + void *data; +} AssocDataValue; + +static void JimAssocDataHashTableValueDestructor(void *privdata, void *data) +{ + AssocDataValue *assocPtr = (AssocDataValue *) data; + + if (assocPtr->delProc != NULL) + assocPtr->delProc((Jim_Interp *)privdata, assocPtr->data); + Jim_Free(data); +} + +static const Jim_HashTableType JimAssocDataHashTableType = { + JimStringCopyHTHashFunction, + JimStringCopyHTDup, + NULL, + JimStringCopyHTKeyCompare, + JimStringCopyHTKeyDestructor, + JimAssocDataHashTableValueDestructor +}; + +void Jim_InitStack(Jim_Stack *stack) +{ + stack->len = 0; + stack->maxlen = 0; + stack->vector = NULL; +} + +void Jim_FreeStack(Jim_Stack *stack) +{ + Jim_Free(stack->vector); +} + +int Jim_StackLen(Jim_Stack *stack) +{ + return stack->len; +} + +void Jim_StackPush(Jim_Stack *stack, void *element) +{ + int neededLen = stack->len + 1; + + if (neededLen > stack->maxlen) { + stack->maxlen = neededLen < 20 ? 
20 : neededLen * 2; + stack->vector = Jim_Realloc(stack->vector, sizeof(void *) * stack->maxlen); + } + stack->vector[stack->len] = element; + stack->len++; +} + +void *Jim_StackPop(Jim_Stack *stack) +{ + if (stack->len == 0) + return NULL; + stack->len--; + return stack->vector[stack->len]; +} + +void *Jim_StackPeek(Jim_Stack *stack) +{ + if (stack->len == 0) + return NULL; + return stack->vector[stack->len - 1]; +} + +void Jim_FreeStackElements(Jim_Stack *stack, void (*freeFunc) (void *ptr)) +{ + int i; + + for (i = 0; i < stack->len; i++) + freeFunc(stack->vector[i]); +} + + + +#define JIM_TT_NONE 0 +#define JIM_TT_STR 1 +#define JIM_TT_ESC 2 +#define JIM_TT_VAR 3 +#define JIM_TT_DICTSUGAR 4 +#define JIM_TT_CMD 5 + +#define JIM_TT_SEP 6 +#define JIM_TT_EOL 7 +#define JIM_TT_EOF 8 + +#define JIM_TT_LINE 9 +#define JIM_TT_WORD 10 + + +#define JIM_TT_SUBEXPR_START 11 +#define JIM_TT_SUBEXPR_END 12 +#define JIM_TT_SUBEXPR_COMMA 13 +#define JIM_TT_EXPR_INT 14 +#define JIM_TT_EXPR_DOUBLE 15 + +#define JIM_TT_EXPRSUGAR 16 + + +#define JIM_TT_EXPR_OP 20 + +#define TOKEN_IS_SEP(type) (type >= JIM_TT_SEP && type <= JIM_TT_EOF) + + +#define JIM_PS_DEF 0 +#define JIM_PS_QUOTE 1 +#define JIM_PS_DICTSUGAR 2 + +struct JimParseMissing { + int ch; + int line; +}; + +struct JimParserCtx +{ + const char *p; + int len; + int linenr; + const char *tstart; + const char *tend; + int tline; + int tt; + int eof; + int state; + int comment; + struct JimParseMissing missing; +}; + +static int JimParseScript(struct JimParserCtx *pc); +static int JimParseSep(struct JimParserCtx *pc); +static int JimParseEol(struct JimParserCtx *pc); +static int JimParseCmd(struct JimParserCtx *pc); +static int JimParseQuote(struct JimParserCtx *pc); +static int JimParseVar(struct JimParserCtx *pc); +static int JimParseBrace(struct JimParserCtx *pc); +static int JimParseStr(struct JimParserCtx *pc); +static int JimParseComment(struct JimParserCtx *pc); +static void JimParseSubCmd(struct JimParserCtx *pc); +static int JimParseSubQuote(struct JimParserCtx *pc); +static Jim_Obj *JimParserGetTokenObj(Jim_Interp *interp, struct JimParserCtx *pc); + +static void JimParserInit(struct JimParserCtx *pc, const char *prg, int len, int linenr) +{ + pc->p = prg; + pc->len = len; + pc->tstart = NULL; + pc->tend = NULL; + pc->tline = 0; + pc->tt = JIM_TT_NONE; + pc->eof = 0; + pc->state = JIM_PS_DEF; + pc->linenr = linenr; + pc->comment = 1; + pc->missing.ch = ' '; + pc->missing.line = linenr; +} + +static int JimParseScript(struct JimParserCtx *pc) +{ + while (1) { + if (!pc->len) { + pc->tstart = pc->p; + pc->tend = pc->p - 1; + pc->tline = pc->linenr; + pc->tt = JIM_TT_EOL; + pc->eof = 1; + return JIM_OK; + } + switch (*(pc->p)) { + case '\\': + if (*(pc->p + 1) == '\n' && pc->state == JIM_PS_DEF) { + return JimParseSep(pc); + } + pc->comment = 0; + return JimParseStr(pc); + case ' ': + case '\t': + case '\r': + case '\f': + if (pc->state == JIM_PS_DEF) + return JimParseSep(pc); + pc->comment = 0; + return JimParseStr(pc); + case '\n': + case ';': + pc->comment = 1; + if (pc->state == JIM_PS_DEF) + return JimParseEol(pc); + return JimParseStr(pc); + case '[': + pc->comment = 0; + return JimParseCmd(pc); + case '$': + pc->comment = 0; + if (JimParseVar(pc) == JIM_ERR) { + + pc->tstart = pc->tend = pc->p++; + pc->len--; + pc->tt = JIM_TT_ESC; + } + return JIM_OK; + case '#': + if (pc->comment) { + JimParseComment(pc); + continue; + } + return JimParseStr(pc); + default: + pc->comment = 0; + return JimParseStr(pc); + } + return JIM_OK; + } +} + 
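+/* JimParseSep: consume a run of blanks, tabs and backslash-newline
+ * continuations (a bare newline is left for JimParseEol) and record the
+ * span as a JIM_TT_SEP separator token. */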
+static int JimParseSep(struct JimParserCtx *pc) +{ + pc->tstart = pc->p; + pc->tline = pc->linenr; + while (isspace(UCHAR(*pc->p)) || (*pc->p == '\\' && *(pc->p + 1) == '\n')) { + if (*pc->p == '\n') { + break; + } + if (*pc->p == '\\') { + pc->p++; + pc->len--; + pc->linenr++; + } + pc->p++; + pc->len--; + } + pc->tend = pc->p - 1; + pc->tt = JIM_TT_SEP; + return JIM_OK; +} + +static int JimParseEol(struct JimParserCtx *pc) +{ + pc->tstart = pc->p; + pc->tline = pc->linenr; + while (isspace(UCHAR(*pc->p)) || *pc->p == ';') { + if (*pc->p == '\n') + pc->linenr++; + pc->p++; + pc->len--; + } + pc->tend = pc->p - 1; + pc->tt = JIM_TT_EOL; + return JIM_OK; +} + + +static void JimParseSubBrace(struct JimParserCtx *pc) +{ + int level = 1; + + + pc->p++; + pc->len--; + while (pc->len) { + switch (*pc->p) { + case '\\': + if (pc->len > 1) { + if (*++pc->p == '\n') { + pc->linenr++; + } + pc->len--; + } + break; + + case '{': + level++; + break; + + case '}': + if (--level == 0) { + pc->tend = pc->p - 1; + pc->p++; + pc->len--; + return; + } + break; + + case '\n': + pc->linenr++; + break; + } + pc->p++; + pc->len--; + } + pc->missing.ch = '{'; + pc->missing.line = pc->tline; + pc->tend = pc->p - 1; +} + +static int JimParseSubQuote(struct JimParserCtx *pc) +{ + int tt = JIM_TT_STR; + int line = pc->tline; + + + pc->p++; + pc->len--; + while (pc->len) { + switch (*pc->p) { + case '\\': + if (pc->len > 1) { + if (*++pc->p == '\n') { + pc->linenr++; + } + pc->len--; + tt = JIM_TT_ESC; + } + break; + + case '"': + pc->tend = pc->p - 1; + pc->p++; + pc->len--; + return tt; + + case '[': + JimParseSubCmd(pc); + tt = JIM_TT_ESC; + continue; + + case '\n': + pc->linenr++; + break; + + case '$': + tt = JIM_TT_ESC; + break; + } + pc->p++; + pc->len--; + } + pc->missing.ch = '"'; + pc->missing.line = line; + pc->tend = pc->p - 1; + return tt; +} + +static void JimParseSubCmd(struct JimParserCtx *pc) +{ + int level = 1; + int startofword = 1; + int line = pc->tline; + + + pc->p++; + pc->len--; + while (pc->len) { + switch (*pc->p) { + case '\\': + if (pc->len > 1) { + if (*++pc->p == '\n') { + pc->linenr++; + } + pc->len--; + } + break; + + case '[': + level++; + break; + + case ']': + if (--level == 0) { + pc->tend = pc->p - 1; + pc->p++; + pc->len--; + return; + } + break; + + case '"': + if (startofword) { + JimParseSubQuote(pc); + continue; + } + break; + + case '{': + JimParseSubBrace(pc); + startofword = 0; + continue; + + case '\n': + pc->linenr++; + break; + } + startofword = isspace(UCHAR(*pc->p)); + pc->p++; + pc->len--; + } + pc->missing.ch = '['; + pc->missing.line = line; + pc->tend = pc->p - 1; +} + +static int JimParseBrace(struct JimParserCtx *pc) +{ + pc->tstart = pc->p + 1; + pc->tline = pc->linenr; + pc->tt = JIM_TT_STR; + JimParseSubBrace(pc); + return JIM_OK; +} + +static int JimParseCmd(struct JimParserCtx *pc) +{ + pc->tstart = pc->p + 1; + pc->tline = pc->linenr; + pc->tt = JIM_TT_CMD; + JimParseSubCmd(pc); + return JIM_OK; +} + +static int JimParseQuote(struct JimParserCtx *pc) +{ + pc->tstart = pc->p + 1; + pc->tline = pc->linenr; + pc->tt = JimParseSubQuote(pc); + return JIM_OK; +} + +static int JimParseVar(struct JimParserCtx *pc) +{ + + pc->p++; + pc->len--; + +#ifdef EXPRSUGAR_BRACKET + if (*pc->p == '[') { + + JimParseCmd(pc); + pc->tt = JIM_TT_EXPRSUGAR; + return JIM_OK; + } +#endif + + pc->tstart = pc->p; + pc->tt = JIM_TT_VAR; + pc->tline = pc->linenr; + + if (*pc->p == '{') { + pc->tstart = ++pc->p; + pc->len--; + + while (pc->len && *pc->p != '}') { + if (*pc->p == '\n') 
{ + pc->linenr++; + } + pc->p++; + pc->len--; + } + pc->tend = pc->p - 1; + if (pc->len) { + pc->p++; + pc->len--; + } + } + else { + while (1) { + + if (pc->p[0] == ':' && pc->p[1] == ':') { + while (*pc->p == ':') { + pc->p++; + pc->len--; + } + continue; + } + if (isalnum(UCHAR(*pc->p)) || *pc->p == '_' || UCHAR(*pc->p) >= 0x80) { + pc->p++; + pc->len--; + continue; + } + break; + } + + if (*pc->p == '(') { + int count = 1; + const char *paren = NULL; + + pc->tt = JIM_TT_DICTSUGAR; + + while (count && pc->len) { + pc->p++; + pc->len--; + if (*pc->p == '\\' && pc->len >= 1) { + pc->p++; + pc->len--; + } + else if (*pc->p == '(') { + count++; + } + else if (*pc->p == ')') { + paren = pc->p; + count--; + } + } + if (count == 0) { + pc->p++; + pc->len--; + } + else if (paren) { + + paren++; + pc->len += (pc->p - paren); + pc->p = paren; + } +#ifndef EXPRSUGAR_BRACKET + if (*pc->tstart == '(') { + pc->tt = JIM_TT_EXPRSUGAR; + } +#endif + } + pc->tend = pc->p - 1; + } + if (pc->tstart == pc->p) { + pc->p--; + pc->len++; + return JIM_ERR; + } + return JIM_OK; +} + +static int JimParseStr(struct JimParserCtx *pc) +{ + if (pc->tt == JIM_TT_SEP || pc->tt == JIM_TT_EOL || + pc->tt == JIM_TT_NONE || pc->tt == JIM_TT_STR) { + + if (*pc->p == '{') { + return JimParseBrace(pc); + } + if (*pc->p == '"') { + pc->state = JIM_PS_QUOTE; + pc->p++; + pc->len--; + + pc->missing.line = pc->tline; + } + } + pc->tstart = pc->p; + pc->tline = pc->linenr; + while (1) { + if (pc->len == 0) { + if (pc->state == JIM_PS_QUOTE) { + pc->missing.ch = '"'; + } + pc->tend = pc->p - 1; + pc->tt = JIM_TT_ESC; + return JIM_OK; + } + switch (*pc->p) { + case '\\': + if (pc->state == JIM_PS_DEF && *(pc->p + 1) == '\n') { + pc->tend = pc->p - 1; + pc->tt = JIM_TT_ESC; + return JIM_OK; + } + if (pc->len >= 2) { + if (*(pc->p + 1) == '\n') { + pc->linenr++; + } + pc->p++; + pc->len--; + } + else if (pc->len == 1) { + + pc->missing.ch = '\\'; + } + break; + case '(': + + if (pc->len > 1 && pc->p[1] != '$') { + break; + } + case ')': + + if (*pc->p == '(' || pc->tt == JIM_TT_VAR) { + if (pc->p == pc->tstart) { + + pc->p++; + pc->len--; + } + pc->tend = pc->p - 1; + pc->tt = JIM_TT_ESC; + return JIM_OK; + } + break; + + case '$': + case '[': + pc->tend = pc->p - 1; + pc->tt = JIM_TT_ESC; + return JIM_OK; + case ' ': + case '\t': + case '\n': + case '\r': + case '\f': + case ';': + if (pc->state == JIM_PS_DEF) { + pc->tend = pc->p - 1; + pc->tt = JIM_TT_ESC; + return JIM_OK; + } + else if (*pc->p == '\n') { + pc->linenr++; + } + break; + case '"': + if (pc->state == JIM_PS_QUOTE) { + pc->tend = pc->p - 1; + pc->tt = JIM_TT_ESC; + pc->p++; + pc->len--; + pc->state = JIM_PS_DEF; + return JIM_OK; + } + break; + } + pc->p++; + pc->len--; + } + return JIM_OK; +} + +static int JimParseComment(struct JimParserCtx *pc) +{ + while (*pc->p) { + if (*pc->p == '\\') { + pc->p++; + pc->len--; + if (pc->len == 0) { + pc->missing.ch = '\\'; + return JIM_OK; + } + if (*pc->p == '\n') { + pc->linenr++; + } + } + else if (*pc->p == '\n') { + pc->p++; + pc->len--; + pc->linenr++; + break; + } + pc->p++; + pc->len--; + } + return JIM_OK; +} + + +static int xdigitval(int c) +{ + if (c >= '0' && c <= '9') + return c - '0'; + if (c >= 'a' && c <= 'f') + return c - 'a' + 10; + if (c >= 'A' && c <= 'F') + return c - 'A' + 10; + return -1; +} + +static int odigitval(int c) +{ + if (c >= '0' && c <= '7') + return c - '0'; + return -1; +} + +static int JimEscape(char *dest, const char *s, int slen) +{ + char *p = dest; + int i, len; + + if (slen == -1) + slen = 
strlen(s); + + for (i = 0; i < slen; i++) { + switch (s[i]) { + case '\\': + switch (s[i + 1]) { + case 'a': + *p++ = 0x7; + i++; + break; + case 'b': + *p++ = 0x8; + i++; + break; + case 'f': + *p++ = 0xc; + i++; + break; + case 'n': + *p++ = 0xa; + i++; + break; + case 'r': + *p++ = 0xd; + i++; + break; + case 't': + *p++ = 0x9; + i++; + break; + case 'u': + case 'U': + case 'x': + { + unsigned val = 0; + int k; + int maxchars = 2; + + i++; + + if (s[i] == 'U') { + maxchars = 8; + } + else if (s[i] == 'u') { + if (s[i + 1] == '{') { + maxchars = 6; + i++; + } + else { + maxchars = 4; + } + } + + for (k = 0; k < maxchars; k++) { + int c = xdigitval(s[i + k + 1]); + if (c == -1) { + break; + } + val = (val << 4) | c; + } + + if (s[i] == '{') { + if (k == 0 || val > 0x1fffff || s[i + k + 1] != '}') { + + i--; + k = 0; + } + else { + + k++; + } + } + if (k) { + + if (s[i] == 'x') { + *p++ = val; + } + else { + p += utf8_fromunicode(p, val); + } + i += k; + break; + } + + *p++ = s[i]; + } + break; + case 'v': + *p++ = 0xb; + i++; + break; + case '\0': + *p++ = '\\'; + i++; + break; + case '\n': + + *p++ = ' '; + do { + i++; + } while (s[i + 1] == ' ' || s[i + 1] == '\t'); + break; + case '0': + case '1': + case '2': + case '3': + case '4': + case '5': + case '6': + case '7': + + { + int val = 0; + int c = odigitval(s[i + 1]); + + val = c; + c = odigitval(s[i + 2]); + if (c == -1) { + *p++ = val; + i++; + break; + } + val = (val * 8) + c; + c = odigitval(s[i + 3]); + if (c == -1) { + *p++ = val; + i += 2; + break; + } + val = (val * 8) + c; + *p++ = val; + i += 3; + } + break; + default: + *p++ = s[i + 1]; + i++; + break; + } + break; + default: + *p++ = s[i]; + break; + } + } + len = p - dest; + *p = '\0'; + return len; +} + +static Jim_Obj *JimParserGetTokenObj(Jim_Interp *interp, struct JimParserCtx *pc) +{ + const char *start, *end; + char *token; + int len; + + start = pc->tstart; + end = pc->tend; + if (start > end) { + len = 0; + token = Jim_Alloc(1); + token[0] = '\0'; + } + else { + len = (end - start) + 1; + token = Jim_Alloc(len + 1); + if (pc->tt != JIM_TT_ESC) { + + memcpy(token, start, len); + token[len] = '\0'; + } + else { + + len = JimEscape(token, start, len); + } + } + + return Jim_NewStringObjNoAlloc(interp, token, len); +} + +int Jim_ScriptIsComplete(const char *s, int len, char *stateCharPtr) +{ + struct JimParserCtx parser; + + JimParserInit(&parser, s, len, 1); + while (!parser.eof) { + JimParseScript(&parser); + } + if (stateCharPtr) { + *stateCharPtr = parser.missing.ch; + } + return parser.missing.ch == ' '; +} + +static int JimParseListSep(struct JimParserCtx *pc); +static int JimParseListStr(struct JimParserCtx *pc); +static int JimParseListQuote(struct JimParserCtx *pc); + +static int JimParseList(struct JimParserCtx *pc) +{ + if (isspace(UCHAR(*pc->p))) { + return JimParseListSep(pc); + } + switch (*pc->p) { + case '"': + return JimParseListQuote(pc); + + case '{': + return JimParseBrace(pc); + + default: + if (pc->len) { + return JimParseListStr(pc); + } + break; + } + + pc->tstart = pc->tend = pc->p; + pc->tline = pc->linenr; + pc->tt = JIM_TT_EOL; + pc->eof = 1; + return JIM_OK; +} + +static int JimParseListSep(struct JimParserCtx *pc) +{ + pc->tstart = pc->p; + pc->tline = pc->linenr; + while (isspace(UCHAR(*pc->p))) { + if (*pc->p == '\n') { + pc->linenr++; + } + pc->p++; + pc->len--; + } + pc->tend = pc->p - 1; + pc->tt = JIM_TT_SEP; + return JIM_OK; +} + +static int JimParseListQuote(struct JimParserCtx *pc) +{ + pc->p++; + pc->len--; + + pc->tstart = 
pc->p; + pc->tline = pc->linenr; + pc->tt = JIM_TT_STR; + + while (pc->len) { + switch (*pc->p) { + case '\\': + pc->tt = JIM_TT_ESC; + if (--pc->len == 0) { + + pc->tend = pc->p; + return JIM_OK; + } + pc->p++; + break; + case '\n': + pc->linenr++; + break; + case '"': + pc->tend = pc->p - 1; + pc->p++; + pc->len--; + return JIM_OK; + } + pc->p++; + pc->len--; + } + + pc->tend = pc->p - 1; + return JIM_OK; +} + +static int JimParseListStr(struct JimParserCtx *pc) +{ + pc->tstart = pc->p; + pc->tline = pc->linenr; + pc->tt = JIM_TT_STR; + + while (pc->len) { + if (isspace(UCHAR(*pc->p))) { + pc->tend = pc->p - 1; + return JIM_OK; + } + if (*pc->p == '\\') { + if (--pc->len == 0) { + + pc->tend = pc->p; + return JIM_OK; + } + pc->tt = JIM_TT_ESC; + pc->p++; + } + pc->p++; + pc->len--; + } + pc->tend = pc->p - 1; + return JIM_OK; +} + + + +Jim_Obj *Jim_NewObj(Jim_Interp *interp) +{ + Jim_Obj *objPtr; + + + if (interp->freeList != NULL) { + + objPtr = interp->freeList; + interp->freeList = objPtr->nextObjPtr; + } + else { + + objPtr = Jim_Alloc(sizeof(*objPtr)); + } + + objPtr->refCount = 0; + + + objPtr->prevObjPtr = NULL; + objPtr->nextObjPtr = interp->liveList; + if (interp->liveList) + interp->liveList->prevObjPtr = objPtr; + interp->liveList = objPtr; + + return objPtr; +} + +void Jim_FreeObj(Jim_Interp *interp, Jim_Obj *objPtr) +{ + + JimPanic((objPtr->refCount != 0, "!!!Object %p freed with bad refcount %d, type=%s", objPtr, + objPtr->refCount, objPtr->typePtr ? objPtr->typePtr->name : "")); + + + Jim_FreeIntRep(interp, objPtr); + + if (objPtr->bytes != NULL) { + if (objPtr->bytes != JimEmptyStringRep) + Jim_Free(objPtr->bytes); + } + + if (objPtr->prevObjPtr) + objPtr->prevObjPtr->nextObjPtr = objPtr->nextObjPtr; + if (objPtr->nextObjPtr) + objPtr->nextObjPtr->prevObjPtr = objPtr->prevObjPtr; + if (interp->liveList == objPtr) + interp->liveList = objPtr->nextObjPtr; +#ifdef JIM_DISABLE_OBJECT_POOL + Jim_Free(objPtr); +#else + + objPtr->prevObjPtr = NULL; + objPtr->nextObjPtr = interp->freeList; + if (interp->freeList) + interp->freeList->prevObjPtr = objPtr; + interp->freeList = objPtr; + objPtr->refCount = -1; +#endif +} + + +void Jim_InvalidateStringRep(Jim_Obj *objPtr) +{ + if (objPtr->bytes != NULL) { + if (objPtr->bytes != JimEmptyStringRep) + Jim_Free(objPtr->bytes); + } + objPtr->bytes = NULL; +} + + +Jim_Obj *Jim_DuplicateObj(Jim_Interp *interp, Jim_Obj *objPtr) +{ + Jim_Obj *dupPtr; + + dupPtr = Jim_NewObj(interp); + if (objPtr->bytes == NULL) { + + dupPtr->bytes = NULL; + } + else if (objPtr->length == 0) { + + dupPtr->bytes = JimEmptyStringRep; + dupPtr->length = 0; + dupPtr->typePtr = NULL; + return dupPtr; + } + else { + dupPtr->bytes = Jim_Alloc(objPtr->length + 1); + dupPtr->length = objPtr->length; + + memcpy(dupPtr->bytes, objPtr->bytes, objPtr->length + 1); + } + + + dupPtr->typePtr = objPtr->typePtr; + if (objPtr->typePtr != NULL) { + if (objPtr->typePtr->dupIntRepProc == NULL) { + dupPtr->internalRep = objPtr->internalRep; + } + else { + + objPtr->typePtr->dupIntRepProc(interp, objPtr, dupPtr); + } + } + return dupPtr; +} + +const char *Jim_GetString(Jim_Obj *objPtr, int *lenPtr) +{ + if (objPtr->bytes == NULL) { + + JimPanic((objPtr->typePtr->updateStringProc == NULL, "UpdateStringProc called against '%s' type.", objPtr->typePtr->name)); + objPtr->typePtr->updateStringProc(objPtr); + } + if (lenPtr) + *lenPtr = objPtr->length; + return objPtr->bytes; +} + + +int Jim_Length(Jim_Obj *objPtr) +{ + if (objPtr->bytes == NULL) { + + 
JimPanic((objPtr->typePtr->updateStringProc == NULL, "UpdateStringProc called against '%s' type.", objPtr->typePtr->name)); + objPtr->typePtr->updateStringProc(objPtr); + } + return objPtr->length; +} + + +const char *Jim_String(Jim_Obj *objPtr) +{ + if (objPtr->bytes == NULL) { + + JimPanic((objPtr->typePtr == NULL, "UpdateStringProc called against typeless value.")); + JimPanic((objPtr->typePtr->updateStringProc == NULL, "UpdateStringProc called against '%s' type.", objPtr->typePtr->name)); + objPtr->typePtr->updateStringProc(objPtr); + } + return objPtr->bytes; +} + +static void JimSetStringBytes(Jim_Obj *objPtr, const char *str) +{ + objPtr->bytes = Jim_StrDup(str); + objPtr->length = strlen(str); +} + +static void FreeDictSubstInternalRep(Jim_Interp *interp, Jim_Obj *objPtr); +static void DupDictSubstInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr); + +static const Jim_ObjType dictSubstObjType = { + "dict-substitution", + FreeDictSubstInternalRep, + DupDictSubstInternalRep, + NULL, + JIM_TYPE_NONE, +}; + +static void FreeInterpolatedInternalRep(Jim_Interp *interp, Jim_Obj *objPtr) +{ + Jim_DecrRefCount(interp, objPtr->internalRep.dictSubstValue.indexObjPtr); +} + +static const Jim_ObjType interpolatedObjType = { + "interpolated", + FreeInterpolatedInternalRep, + NULL, + NULL, + JIM_TYPE_NONE, +}; + +static void DupStringInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr); +static int SetStringFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr); + +static const Jim_ObjType stringObjType = { + "string", + NULL, + DupStringInternalRep, + NULL, + JIM_TYPE_REFERENCES, +}; + +static void DupStringInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr) +{ + JIM_NOTUSED(interp); + + dupPtr->internalRep.strValue.maxLength = srcPtr->length; + dupPtr->internalRep.strValue.charLength = srcPtr->internalRep.strValue.charLength; +} + +static int SetStringFromAny(Jim_Interp *interp, Jim_Obj *objPtr) +{ + if (objPtr->typePtr != &stringObjType) { + + if (objPtr->bytes == NULL) { + + JimPanic((objPtr->typePtr->updateStringProc == NULL, "UpdateStringProc called against '%s' type.", objPtr->typePtr->name)); + objPtr->typePtr->updateStringProc(objPtr); + } + + Jim_FreeIntRep(interp, objPtr); + + objPtr->typePtr = &stringObjType; + objPtr->internalRep.strValue.maxLength = objPtr->length; + + objPtr->internalRep.strValue.charLength = -1; + } + return JIM_OK; +} + +int Jim_Utf8Length(Jim_Interp *interp, Jim_Obj *objPtr) +{ +#ifdef JIM_UTF8 + SetStringFromAny(interp, objPtr); + + if (objPtr->internalRep.strValue.charLength < 0) { + objPtr->internalRep.strValue.charLength = utf8_strlen(objPtr->bytes, objPtr->length); + } + return objPtr->internalRep.strValue.charLength; +#else + return Jim_Length(objPtr); +#endif +} + + +Jim_Obj *Jim_NewStringObj(Jim_Interp *interp, const char *s, int len) +{ + Jim_Obj *objPtr = Jim_NewObj(interp); + + + if (len == -1) + len = strlen(s); + + if (len == 0) { + objPtr->bytes = JimEmptyStringRep; + } + else { + objPtr->bytes = Jim_Alloc(len + 1); + memcpy(objPtr->bytes, s, len); + objPtr->bytes[len] = '\0'; + } + objPtr->length = len; + + + objPtr->typePtr = NULL; + return objPtr; +} + + +Jim_Obj *Jim_NewStringObjUtf8(Jim_Interp *interp, const char *s, int charlen) +{ +#ifdef JIM_UTF8 + + int bytelen = utf8_index(s, charlen); + + Jim_Obj *objPtr = Jim_NewStringObj(interp, s, bytelen); + + + objPtr->typePtr = &stringObjType; + objPtr->internalRep.strValue.maxLength = bytelen; + objPtr->internalRep.strValue.charLength = charlen; + + return 
objPtr; +#else + return Jim_NewStringObj(interp, s, charlen); +#endif +} + +Jim_Obj *Jim_NewStringObjNoAlloc(Jim_Interp *interp, char *s, int len) +{ + Jim_Obj *objPtr = Jim_NewObj(interp); + + objPtr->bytes = s; + objPtr->length = (len == -1) ? strlen(s) : len; + objPtr->typePtr = NULL; + return objPtr; +} + +static void StringAppendString(Jim_Obj *objPtr, const char *str, int len) +{ + int needlen; + + if (len == -1) + len = strlen(str); + needlen = objPtr->length + len; + if (objPtr->internalRep.strValue.maxLength < needlen || + objPtr->internalRep.strValue.maxLength == 0) { + needlen *= 2; + + if (needlen < 7) { + needlen = 7; + } + if (objPtr->bytes == JimEmptyStringRep) { + objPtr->bytes = Jim_Alloc(needlen + 1); + } + else { + objPtr->bytes = Jim_Realloc(objPtr->bytes, needlen + 1); + } + objPtr->internalRep.strValue.maxLength = needlen; + } + memcpy(objPtr->bytes + objPtr->length, str, len); + objPtr->bytes[objPtr->length + len] = '\0'; + + if (objPtr->internalRep.strValue.charLength >= 0) { + + objPtr->internalRep.strValue.charLength += utf8_strlen(objPtr->bytes + objPtr->length, len); + } + objPtr->length += len; +} + +void Jim_AppendString(Jim_Interp *interp, Jim_Obj *objPtr, const char *str, int len) +{ + JimPanic((Jim_IsShared(objPtr), "Jim_AppendString called with shared object")); + SetStringFromAny(interp, objPtr); + StringAppendString(objPtr, str, len); +} + +void Jim_AppendObj(Jim_Interp *interp, Jim_Obj *objPtr, Jim_Obj *appendObjPtr) +{ + int len; + const char *str = Jim_GetString(appendObjPtr, &len); + Jim_AppendString(interp, objPtr, str, len); +} + +void Jim_AppendStrings(Jim_Interp *interp, Jim_Obj *objPtr, ...) +{ + va_list ap; + + SetStringFromAny(interp, objPtr); + va_start(ap, objPtr); + while (1) { + const char *s = va_arg(ap, const char *); + + if (s == NULL) + break; + Jim_AppendString(interp, objPtr, s, -1); + } + va_end(ap); +} + +int Jim_StringEqObj(Jim_Obj *aObjPtr, Jim_Obj *bObjPtr) +{ + if (aObjPtr == bObjPtr) { + return 1; + } + else { + int Alen, Blen; + const char *sA = Jim_GetString(aObjPtr, &Alen); + const char *sB = Jim_GetString(bObjPtr, &Blen); + + return Alen == Blen && memcmp(sA, sB, Alen) == 0; + } +} + +int Jim_StringMatchObj(Jim_Interp *interp, Jim_Obj *patternObjPtr, Jim_Obj *objPtr, int nocase) +{ + return JimGlobMatch(Jim_String(patternObjPtr), Jim_String(objPtr), nocase); +} + +int Jim_StringCompareObj(Jim_Interp *interp, Jim_Obj *firstObjPtr, Jim_Obj *secondObjPtr, int nocase) +{ + int l1, l2; + const char *s1 = Jim_GetString(firstObjPtr, &l1); + const char *s2 = Jim_GetString(secondObjPtr, &l2); + + if (nocase) { + + return JimStringCompareLen(s1, s2, -1, nocase); + } + return JimStringCompare(s1, l1, s2, l2); +} + +int Jim_StringCompareLenObj(Jim_Interp *interp, Jim_Obj *firstObjPtr, Jim_Obj *secondObjPtr, int nocase) +{ + const char *s1 = Jim_String(firstObjPtr); + const char *s2 = Jim_String(secondObjPtr); + + return JimStringCompareLen(s1, s2, Jim_Utf8Length(interp, firstObjPtr), nocase); +} + +static int JimRelToAbsIndex(int len, int idx) +{ + if (idx < 0) + return len + idx; + return idx; +} + +static void JimRelToAbsRange(int len, int *firstPtr, int *lastPtr, int *rangeLenPtr) +{ + int rangeLen; + + if (*firstPtr > *lastPtr) { + rangeLen = 0; + } + else { + rangeLen = *lastPtr - *firstPtr + 1; + if (rangeLen) { + if (*firstPtr < 0) { + rangeLen += *firstPtr; + *firstPtr = 0; + } + if (*lastPtr >= len) { + rangeLen -= (*lastPtr - (len - 1)); + *lastPtr = len - 1; + } + } + } + if (rangeLen < 0) + rangeLen = 0; + + *rangeLenPtr = 
rangeLen; +} + +static int JimStringGetRange(Jim_Interp *interp, Jim_Obj *firstObjPtr, Jim_Obj *lastObjPtr, + int len, int *first, int *last, int *range) +{ + if (Jim_GetIndex(interp, firstObjPtr, first) != JIM_OK) { + return JIM_ERR; + } + if (Jim_GetIndex(interp, lastObjPtr, last) != JIM_OK) { + return JIM_ERR; + } + *first = JimRelToAbsIndex(len, *first); + *last = JimRelToAbsIndex(len, *last); + JimRelToAbsRange(len, first, last, range); + return JIM_OK; +} + +Jim_Obj *Jim_StringByteRangeObj(Jim_Interp *interp, + Jim_Obj *strObjPtr, Jim_Obj *firstObjPtr, Jim_Obj *lastObjPtr) +{ + int first, last; + const char *str; + int rangeLen; + int bytelen; + + str = Jim_GetString(strObjPtr, &bytelen); + + if (JimStringGetRange(interp, firstObjPtr, lastObjPtr, bytelen, &first, &last, &rangeLen) != JIM_OK) { + return NULL; + } + + if (first == 0 && rangeLen == bytelen) { + return strObjPtr; + } + return Jim_NewStringObj(interp, str + first, rangeLen); +} + +Jim_Obj *Jim_StringRangeObj(Jim_Interp *interp, + Jim_Obj *strObjPtr, Jim_Obj *firstObjPtr, Jim_Obj *lastObjPtr) +{ +#ifdef JIM_UTF8 + int first, last; + const char *str; + int len, rangeLen; + int bytelen; + + str = Jim_GetString(strObjPtr, &bytelen); + len = Jim_Utf8Length(interp, strObjPtr); + + if (JimStringGetRange(interp, firstObjPtr, lastObjPtr, len, &first, &last, &rangeLen) != JIM_OK) { + return NULL; + } + + if (first == 0 && rangeLen == len) { + return strObjPtr; + } + if (len == bytelen) { + + return Jim_NewStringObj(interp, str + first, rangeLen); + } + return Jim_NewStringObjUtf8(interp, str + utf8_index(str, first), rangeLen); +#else + return Jim_StringByteRangeObj(interp, strObjPtr, firstObjPtr, lastObjPtr); +#endif +} + +Jim_Obj *JimStringReplaceObj(Jim_Interp *interp, + Jim_Obj *strObjPtr, Jim_Obj *firstObjPtr, Jim_Obj *lastObjPtr, Jim_Obj *newStrObj) +{ + int first, last; + const char *str; + int len, rangeLen; + Jim_Obj *objPtr; + + len = Jim_Utf8Length(interp, strObjPtr); + + if (JimStringGetRange(interp, firstObjPtr, lastObjPtr, len, &first, &last, &rangeLen) != JIM_OK) { + return NULL; + } + + if (last < first) { + return strObjPtr; + } + + str = Jim_String(strObjPtr); + + + objPtr = Jim_NewStringObjUtf8(interp, str, first); + + + if (newStrObj) { + Jim_AppendObj(interp, objPtr, newStrObj); + } + + + Jim_AppendString(interp, objPtr, str + utf8_index(str, last + 1), len - last - 1); + + return objPtr; +} + +static void JimStrCopyUpperLower(char *dest, const char *str, int uc) +{ + while (*str) { + int c; + str += utf8_tounicode(str, &c); + dest += utf8_getchars(dest, uc ? 
utf8_upper(c) : utf8_lower(c)); + } + *dest = 0; +} + +static Jim_Obj *JimStringToLower(Jim_Interp *interp, Jim_Obj *strObjPtr) +{ + char *buf; + int len; + const char *str; + + SetStringFromAny(interp, strObjPtr); + + str = Jim_GetString(strObjPtr, &len); + +#ifdef JIM_UTF8 + len *= 2; +#endif + buf = Jim_Alloc(len + 1); + JimStrCopyUpperLower(buf, str, 0); + return Jim_NewStringObjNoAlloc(interp, buf, -1); +} + +static Jim_Obj *JimStringToUpper(Jim_Interp *interp, Jim_Obj *strObjPtr) +{ + char *buf; + const char *str; + int len; + + if (strObjPtr->typePtr != &stringObjType) { + SetStringFromAny(interp, strObjPtr); + } + + str = Jim_GetString(strObjPtr, &len); + +#ifdef JIM_UTF8 + len *= 2; +#endif + buf = Jim_Alloc(len + 1); + JimStrCopyUpperLower(buf, str, 1); + return Jim_NewStringObjNoAlloc(interp, buf, -1); +} + +static Jim_Obj *JimStringToTitle(Jim_Interp *interp, Jim_Obj *strObjPtr) +{ + char *buf, *p; + int len; + int c; + const char *str; + + str = Jim_GetString(strObjPtr, &len); + if (len == 0) { + return strObjPtr; + } +#ifdef JIM_UTF8 + len *= 2; +#endif + buf = p = Jim_Alloc(len + 1); + + str += utf8_tounicode(str, &c); + p += utf8_getchars(p, utf8_title(c)); + + JimStrCopyUpperLower(p, str, 0); + + return Jim_NewStringObjNoAlloc(interp, buf, -1); +} + +static const char *utf8_memchr(const char *str, int len, int c) +{ +#ifdef JIM_UTF8 + while (len) { + int sc; + int n = utf8_tounicode(str, &sc); + if (sc == c) { + return str; + } + str += n; + len -= n; + } + return NULL; +#else + return memchr(str, c, len); +#endif +} + +static const char *JimFindTrimLeft(const char *str, int len, const char *trimchars, int trimlen) +{ + while (len) { + int c; + int n = utf8_tounicode(str, &c); + + if (utf8_memchr(trimchars, trimlen, c) == NULL) { + + break; + } + str += n; + len -= n; + } + return str; +} + +static const char *JimFindTrimRight(const char *str, int len, const char *trimchars, int trimlen) +{ + str += len; + + while (len) { + int c; + int n = utf8_prev_len(str, len); + + len -= n; + str -= n; + + n = utf8_tounicode(str, &c); + + if (utf8_memchr(trimchars, trimlen, c) == NULL) { + return str + n; + } + } + + return NULL; +} + +static const char default_trim_chars[] = " \t\n\r"; + +static int default_trim_chars_len = sizeof(default_trim_chars); + +static Jim_Obj *JimStringTrimLeft(Jim_Interp *interp, Jim_Obj *strObjPtr, Jim_Obj *trimcharsObjPtr) +{ + int len; + const char *str = Jim_GetString(strObjPtr, &len); + const char *trimchars = default_trim_chars; + int trimcharslen = default_trim_chars_len; + const char *newstr; + + if (trimcharsObjPtr) { + trimchars = Jim_GetString(trimcharsObjPtr, &trimcharslen); + } + + newstr = JimFindTrimLeft(str, len, trimchars, trimcharslen); + if (newstr == str) { + return strObjPtr; + } + + return Jim_NewStringObj(interp, newstr, len - (newstr - str)); +} + +static Jim_Obj *JimStringTrimRight(Jim_Interp *interp, Jim_Obj *strObjPtr, Jim_Obj *trimcharsObjPtr) +{ + int len; + const char *trimchars = default_trim_chars; + int trimcharslen = default_trim_chars_len; + const char *nontrim; + + if (trimcharsObjPtr) { + trimchars = Jim_GetString(trimcharsObjPtr, &trimcharslen); + } + + SetStringFromAny(interp, strObjPtr); + + len = Jim_Length(strObjPtr); + nontrim = JimFindTrimRight(strObjPtr->bytes, len, trimchars, trimcharslen); + + if (nontrim == NULL) { + + return Jim_NewEmptyStringObj(interp); + } + if (nontrim == strObjPtr->bytes + len) { + + return strObjPtr; + } + + if (Jim_IsShared(strObjPtr)) { + strObjPtr = Jim_NewStringObj(interp, 
strObjPtr->bytes, (nontrim - strObjPtr->bytes)); + } + else { + + strObjPtr->bytes[nontrim - strObjPtr->bytes] = 0; + strObjPtr->length = (nontrim - strObjPtr->bytes); + } + + return strObjPtr; +} + +static Jim_Obj *JimStringTrim(Jim_Interp *interp, Jim_Obj *strObjPtr, Jim_Obj *trimcharsObjPtr) +{ + + Jim_Obj *objPtr = JimStringTrimLeft(interp, strObjPtr, trimcharsObjPtr); + + + strObjPtr = JimStringTrimRight(interp, objPtr, trimcharsObjPtr); + + + if (objPtr != strObjPtr && objPtr->refCount == 0) { + + Jim_FreeNewObj(interp, objPtr); + } + + return strObjPtr; +} + + +#ifdef HAVE_ISASCII +#define jim_isascii isascii +#else +static int jim_isascii(int c) +{ + return !(c & ~0x7f); +} +#endif + +static int JimStringIs(Jim_Interp *interp, Jim_Obj *strObjPtr, Jim_Obj *strClass, int strict) +{ + static const char * const strclassnames[] = { + "integer", "alpha", "alnum", "ascii", "digit", + "double", "lower", "upper", "space", "xdigit", + "control", "print", "graph", "punct", + NULL + }; + enum { + STR_IS_INTEGER, STR_IS_ALPHA, STR_IS_ALNUM, STR_IS_ASCII, STR_IS_DIGIT, + STR_IS_DOUBLE, STR_IS_LOWER, STR_IS_UPPER, STR_IS_SPACE, STR_IS_XDIGIT, + STR_IS_CONTROL, STR_IS_PRINT, STR_IS_GRAPH, STR_IS_PUNCT + }; + int strclass; + int len; + int i; + const char *str; + int (*isclassfunc)(int c) = NULL; + + if (Jim_GetEnum(interp, strClass, strclassnames, &strclass, "class", JIM_ERRMSG | JIM_ENUM_ABBREV) != JIM_OK) { + return JIM_ERR; + } + + str = Jim_GetString(strObjPtr, &len); + if (len == 0) { + Jim_SetResultBool(interp, !strict); + return JIM_OK; + } + + switch (strclass) { + case STR_IS_INTEGER: + { + jim_wide w; + Jim_SetResultBool(interp, JimGetWideNoErr(interp, strObjPtr, &w) == JIM_OK); + return JIM_OK; + } + + case STR_IS_DOUBLE: + { + double d; + Jim_SetResultBool(interp, Jim_GetDouble(interp, strObjPtr, &d) == JIM_OK && errno != ERANGE); + return JIM_OK; + } + + case STR_IS_ALPHA: isclassfunc = isalpha; break; + case STR_IS_ALNUM: isclassfunc = isalnum; break; + case STR_IS_ASCII: isclassfunc = jim_isascii; break; + case STR_IS_DIGIT: isclassfunc = isdigit; break; + case STR_IS_LOWER: isclassfunc = islower; break; + case STR_IS_UPPER: isclassfunc = isupper; break; + case STR_IS_SPACE: isclassfunc = isspace; break; + case STR_IS_XDIGIT: isclassfunc = isxdigit; break; + case STR_IS_CONTROL: isclassfunc = iscntrl; break; + case STR_IS_PRINT: isclassfunc = isprint; break; + case STR_IS_GRAPH: isclassfunc = isgraph; break; + case STR_IS_PUNCT: isclassfunc = ispunct; break; + default: + return JIM_ERR; + } + + for (i = 0; i < len; i++) { + if (!isclassfunc(str[i])) { + Jim_SetResultBool(interp, 0); + return JIM_OK; + } + } + Jim_SetResultBool(interp, 1); + return JIM_OK; +} + + + +static const Jim_ObjType comparedStringObjType = { + "compared-string", + NULL, + NULL, + NULL, + JIM_TYPE_REFERENCES, +}; + +int Jim_CompareStringImmediate(Jim_Interp *interp, Jim_Obj *objPtr, const char *str) +{ + if (objPtr->typePtr == &comparedStringObjType && objPtr->internalRep.ptr == str) { + return 1; + } + else { + const char *objStr = Jim_String(objPtr); + + if (strcmp(str, objStr) != 0) + return 0; + + if (objPtr->typePtr != &comparedStringObjType) { + Jim_FreeIntRep(interp, objPtr); + objPtr->typePtr = &comparedStringObjType; + } + objPtr->internalRep.ptr = (char *)str; + return 1; + } +} + +static int qsortCompareStringPointers(const void *a, const void *b) +{ + char *const *sa = (char *const *)a; + char *const *sb = (char *const *)b; + + return strcmp(*sa, *sb); +} + + + +static void 
FreeSourceInternalRep(Jim_Interp *interp, Jim_Obj *objPtr); +static void DupSourceInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr); + +static const Jim_ObjType sourceObjType = { + "source", + FreeSourceInternalRep, + DupSourceInternalRep, + NULL, + JIM_TYPE_REFERENCES, +}; + +void FreeSourceInternalRep(Jim_Interp *interp, Jim_Obj *objPtr) +{ + Jim_DecrRefCount(interp, objPtr->internalRep.sourceValue.fileNameObj); +} + +void DupSourceInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr) +{ + dupPtr->internalRep.sourceValue = srcPtr->internalRep.sourceValue; + Jim_IncrRefCount(dupPtr->internalRep.sourceValue.fileNameObj); +} + +static void JimSetSourceInfo(Jim_Interp *interp, Jim_Obj *objPtr, + Jim_Obj *fileNameObj, int lineNumber) +{ + JimPanic((Jim_IsShared(objPtr), "JimSetSourceInfo called with shared object")); + JimPanic((objPtr->typePtr != NULL, "JimSetSourceInfo called with typed object")); + Jim_IncrRefCount(fileNameObj); + objPtr->internalRep.sourceValue.fileNameObj = fileNameObj; + objPtr->internalRep.sourceValue.lineNumber = lineNumber; + objPtr->typePtr = &sourceObjType; +} + +static const Jim_ObjType scriptLineObjType = { + "scriptline", + NULL, + NULL, + NULL, + JIM_NONE, +}; + +static Jim_Obj *JimNewScriptLineObj(Jim_Interp *interp, int argc, int line) +{ + Jim_Obj *objPtr; + +#ifdef DEBUG_SHOW_SCRIPT + char buf[100]; + snprintf(buf, sizeof(buf), "line=%d, argc=%d", line, argc); + objPtr = Jim_NewStringObj(interp, buf, -1); +#else + objPtr = Jim_NewEmptyStringObj(interp); +#endif + objPtr->typePtr = &scriptLineObjType; + objPtr->internalRep.scriptLineValue.argc = argc; + objPtr->internalRep.scriptLineValue.line = line; + + return objPtr; +} + +static void FreeScriptInternalRep(Jim_Interp *interp, Jim_Obj *objPtr); +static void DupScriptInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr); +static void JimSetScriptFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr); +static int JimParseCheckMissing(Jim_Interp *interp, int ch); + +static const Jim_ObjType scriptObjType = { + "script", + FreeScriptInternalRep, + DupScriptInternalRep, + NULL, + JIM_TYPE_REFERENCES, +}; + +typedef struct ScriptToken +{ + Jim_Obj *objPtr; + int type; +} ScriptToken; + +typedef struct ScriptObj +{ + ScriptToken *token; + Jim_Obj *fileNameObj; + int len; + int substFlags; + int inUse; /* Used to share a ScriptObj. Currently + only used by Jim_EvalObj() as protection against + shimmering of the currently evaluated object. 
*/ + int firstline; + int linenr; + int missing; +} ScriptObj; + +void FreeScriptInternalRep(Jim_Interp *interp, Jim_Obj *objPtr) +{ + int i; + struct ScriptObj *script = (void *)objPtr->internalRep.ptr; + + if (--script->inUse != 0) + return; + for (i = 0; i < script->len; i++) { + Jim_DecrRefCount(interp, script->token[i].objPtr); + } + Jim_Free(script->token); + Jim_DecrRefCount(interp, script->fileNameObj); + Jim_Free(script); +} + +void DupScriptInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr) +{ + JIM_NOTUSED(interp); + JIM_NOTUSED(srcPtr); + + dupPtr->typePtr = NULL; +} + +typedef struct +{ + const char *token; + int len; + int type; + int line; +} ParseToken; + +typedef struct +{ + + ParseToken *list; + int size; + int count; + ParseToken static_list[20]; +} ParseTokenList; + +static void ScriptTokenListInit(ParseTokenList *tokenlist) +{ + tokenlist->list = tokenlist->static_list; + tokenlist->size = sizeof(tokenlist->static_list) / sizeof(ParseToken); + tokenlist->count = 0; +} + +static void ScriptTokenListFree(ParseTokenList *tokenlist) +{ + if (tokenlist->list != tokenlist->static_list) { + Jim_Free(tokenlist->list); + } +} + +static void ScriptAddToken(ParseTokenList *tokenlist, const char *token, int len, int type, + int line) +{ + ParseToken *t; + + if (tokenlist->count == tokenlist->size) { + + tokenlist->size *= 2; + if (tokenlist->list != tokenlist->static_list) { + tokenlist->list = + Jim_Realloc(tokenlist->list, tokenlist->size * sizeof(*tokenlist->list)); + } + else { + + tokenlist->list = Jim_Alloc(tokenlist->size * sizeof(*tokenlist->list)); + memcpy(tokenlist->list, tokenlist->static_list, + tokenlist->count * sizeof(*tokenlist->list)); + } + } + t = &tokenlist->list[tokenlist->count++]; + t->token = token; + t->len = len; + t->type = type; + t->line = line; +} + +static int JimCountWordTokens(ParseToken *t) +{ + int expand = 1; + int count = 0; + + + if (t->type == JIM_TT_STR && !TOKEN_IS_SEP(t[1].type)) { + if ((t->len == 1 && *t->token == '*') || (t->len == 6 && strncmp(t->token, "expand", 6) == 0)) { + + expand = -1; + t++; + } + } + + + while (!TOKEN_IS_SEP(t->type)) { + t++; + count++; + } + + return count * expand; +} + +static Jim_Obj *JimMakeScriptObj(Jim_Interp *interp, const ParseToken *t) +{ + Jim_Obj *objPtr; + + if (t->type == JIM_TT_ESC && memchr(t->token, '\\', t->len) != NULL) { + + int len = t->len; + char *str = Jim_Alloc(len + 1); + len = JimEscape(str, t->token, len); + objPtr = Jim_NewStringObjNoAlloc(interp, str, len); + } + else { + objPtr = Jim_NewStringObj(interp, t->token, t->len); + } + return objPtr; +} + +static void ScriptObjAddTokens(Jim_Interp *interp, struct ScriptObj *script, + ParseTokenList *tokenlist) +{ + int i; + struct ScriptToken *token; + + int lineargs = 0; + + ScriptToken *linefirst; + int count; + int linenr; + +#ifdef DEBUG_SHOW_SCRIPT_TOKENS + printf("==== Tokens ====\n"); + for (i = 0; i < tokenlist->count; i++) { + printf("[%2d]@%d %s '%.*s'\n", i, tokenlist->list[i].line, jim_tt_name(tokenlist->list[i].type), + tokenlist->list[i].len, tokenlist->list[i].token); + } +#endif + + + count = tokenlist->count; + for (i = 0; i < tokenlist->count; i++) { + if (tokenlist->list[i].type == JIM_TT_EOL) { + count++; + } + } + linenr = script->firstline = tokenlist->list[0].line; + + token = script->token = Jim_Alloc(sizeof(ScriptToken) * count); + + + linefirst = token++; + + for (i = 0; i < tokenlist->count; ) { + + int wordtokens; + + + while (tokenlist->list[i].type == JIM_TT_SEP) { + i++; + } + + wordtokens 
= JimCountWordTokens(tokenlist->list + i); + + if (wordtokens == 0) { + + if (lineargs) { + linefirst->type = JIM_TT_LINE; + linefirst->objPtr = JimNewScriptLineObj(interp, lineargs, linenr); + Jim_IncrRefCount(linefirst->objPtr); + + + lineargs = 0; + linefirst = token++; + } + i++; + continue; + } + else if (wordtokens != 1) { + + token->type = JIM_TT_WORD; + token->objPtr = Jim_NewIntObj(interp, wordtokens); + Jim_IncrRefCount(token->objPtr); + token++; + if (wordtokens < 0) { + + i++; + wordtokens = -wordtokens - 1; + lineargs--; + } + } + + if (lineargs == 0) { + + linenr = tokenlist->list[i].line; + } + lineargs++; + + + while (wordtokens--) { + const ParseToken *t = &tokenlist->list[i++]; + + token->type = t->type; + token->objPtr = JimMakeScriptObj(interp, t); + Jim_IncrRefCount(token->objPtr); + + JimSetSourceInfo(interp, token->objPtr, script->fileNameObj, t->line); + token++; + } + } + + if (lineargs == 0) { + token--; + } + + script->len = token - script->token; + + JimPanic((script->len >= count, "allocated script array is too short")); + +#ifdef DEBUG_SHOW_SCRIPT + printf("==== Script (%s) ====\n", Jim_String(script->fileNameObj)); + for (i = 0; i < script->len; i++) { + const ScriptToken *t = &script->token[i]; + printf("[%2d] %s %s\n", i, jim_tt_name(t->type), Jim_String(t->objPtr)); + } +#endif + +} + +static int JimParseCheckMissing(Jim_Interp *interp, int ch) +{ + const char *msg; + + switch (ch) { + case '\\': + case ' ': + return JIM_OK; + + case '[': + msg = "unmatched \"[\""; + break; + case '{': + msg = "missing close-brace"; + break; + case '"': + default: + msg = "missing quote"; + break; + } + + Jim_SetResultString(interp, msg, -1); + return JIM_ERR; +} + +static void SubstObjAddTokens(Jim_Interp *interp, struct ScriptObj *script, + ParseTokenList *tokenlist) +{ + int i; + struct ScriptToken *token; + + token = script->token = Jim_Alloc(sizeof(ScriptToken) * tokenlist->count); + + for (i = 0; i < tokenlist->count; i++) { + const ParseToken *t = &tokenlist->list[i]; + + + token->type = t->type; + token->objPtr = JimMakeScriptObj(interp, t); + Jim_IncrRefCount(token->objPtr); + token++; + } + + script->len = i; +} + +static void JimSetScriptFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr) +{ + int scriptTextLen; + const char *scriptText = Jim_GetString(objPtr, &scriptTextLen); + struct JimParserCtx parser; + struct ScriptObj *script; + ParseTokenList tokenlist; + int line = 1; + + + if (objPtr->typePtr == &sourceObjType) { + line = objPtr->internalRep.sourceValue.lineNumber; + } + + + ScriptTokenListInit(&tokenlist); + + JimParserInit(&parser, scriptText, scriptTextLen, line); + while (!parser.eof) { + JimParseScript(&parser); + ScriptAddToken(&tokenlist, parser.tstart, parser.tend - parser.tstart + 1, parser.tt, + parser.tline); + } + + + ScriptAddToken(&tokenlist, scriptText + scriptTextLen, 0, JIM_TT_EOF, 0); + + + script = Jim_Alloc(sizeof(*script)); + memset(script, 0, sizeof(*script)); + script->inUse = 1; + if (objPtr->typePtr == &sourceObjType) { + script->fileNameObj = objPtr->internalRep.sourceValue.fileNameObj; + } + else { + script->fileNameObj = interp->emptyObj; + } + Jim_IncrRefCount(script->fileNameObj); + script->missing = parser.missing.ch; + script->linenr = parser.missing.line; + + ScriptObjAddTokens(interp, script, &tokenlist); + + + ScriptTokenListFree(&tokenlist); + + + Jim_FreeIntRep(interp, objPtr); + Jim_SetIntRepPtr(objPtr, script); + objPtr->typePtr = &scriptObjType; +} + +static void JimAddErrorToStack(Jim_Interp *interp, ScriptObj 
*script); + +ScriptObj *JimGetScript(Jim_Interp *interp, Jim_Obj *objPtr) +{ + if (objPtr == interp->emptyObj) { + + objPtr = interp->nullScriptObj; + } + + if (objPtr->typePtr != &scriptObjType || ((struct ScriptObj *)Jim_GetIntRepPtr(objPtr))->substFlags) { + JimSetScriptFromAny(interp, objPtr); + } + + return (ScriptObj *)Jim_GetIntRepPtr(objPtr); +} + +static int JimScriptValid(Jim_Interp *interp, ScriptObj *script) +{ + if (JimParseCheckMissing(interp, script->missing) == JIM_ERR) { + JimAddErrorToStack(interp, script); + return 0; + } + return 1; +} + + +static void JimIncrCmdRefCount(Jim_Cmd *cmdPtr) +{ + cmdPtr->inUse++; +} + +static void JimDecrCmdRefCount(Jim_Interp *interp, Jim_Cmd *cmdPtr) +{ + if (--cmdPtr->inUse == 0) { + if (cmdPtr->isproc) { + Jim_DecrRefCount(interp, cmdPtr->u.proc.argListObjPtr); + Jim_DecrRefCount(interp, cmdPtr->u.proc.bodyObjPtr); + Jim_DecrRefCount(interp, cmdPtr->u.proc.nsObj); + if (cmdPtr->u.proc.staticVars) { + Jim_FreeHashTable(cmdPtr->u.proc.staticVars); + Jim_Free(cmdPtr->u.proc.staticVars); + } + } + else { + + if (cmdPtr->u.native.delProc) { + cmdPtr->u.native.delProc(interp, cmdPtr->u.native.privData); + } + } + if (cmdPtr->prevCmd) { + + JimDecrCmdRefCount(interp, cmdPtr->prevCmd); + } + Jim_Free(cmdPtr); + } +} + + +static void JimVariablesHTValDestructor(void *interp, void *val) +{ + Jim_DecrRefCount(interp, ((Jim_Var *)val)->objPtr); + Jim_Free(val); +} + +static const Jim_HashTableType JimVariablesHashTableType = { + JimStringCopyHTHashFunction, + JimStringCopyHTDup, + NULL, + JimStringCopyHTKeyCompare, + JimStringCopyHTKeyDestructor, + JimVariablesHTValDestructor +}; + +static void JimCommandsHT_ValDestructor(void *interp, void *val) +{ + JimDecrCmdRefCount(interp, val); +} + +static const Jim_HashTableType JimCommandsHashTableType = { + JimStringCopyHTHashFunction, + JimStringCopyHTDup, + NULL, + JimStringCopyHTKeyCompare, + JimStringCopyHTKeyDestructor, + JimCommandsHT_ValDestructor +}; + + + +#ifdef jim_ext_namespace +static Jim_Obj *JimQualifyNameObj(Jim_Interp *interp, Jim_Obj *nsObj) +{ + const char *name = Jim_String(nsObj); + if (name[0] == ':' && name[1] == ':') { + + while (*++name == ':') { + } + nsObj = Jim_NewStringObj(interp, name, -1); + } + else if (Jim_Length(interp->framePtr->nsObj)) { + + nsObj = Jim_DuplicateObj(interp, interp->framePtr->nsObj); + Jim_AppendStrings(interp, nsObj, "::", name, NULL); + } + return nsObj; +} + +Jim_Obj *Jim_MakeGlobalNamespaceName(Jim_Interp *interp, Jim_Obj *nameObjPtr) +{ + Jim_Obj *resultObj; + + const char *name = Jim_String(nameObjPtr); + if (name[0] == ':' && name[1] == ':') { + return nameObjPtr; + } + Jim_IncrRefCount(nameObjPtr); + resultObj = Jim_NewStringObj(interp, "::", -1); + Jim_AppendObj(interp, resultObj, nameObjPtr); + Jim_DecrRefCount(interp, nameObjPtr); + + return resultObj; +} + +static const char *JimQualifyName(Jim_Interp *interp, const char *name, Jim_Obj **objPtrPtr) +{ + Jim_Obj *objPtr = interp->emptyObj; + + if (name[0] == ':' && name[1] == ':') { + + while (*++name == ':') { + } + } + else if (Jim_Length(interp->framePtr->nsObj)) { + + objPtr = Jim_DuplicateObj(interp, interp->framePtr->nsObj); + Jim_AppendStrings(interp, objPtr, "::", name, NULL); + name = Jim_String(objPtr); + } + Jim_IncrRefCount(objPtr); + *objPtrPtr = objPtr; + return name; +} + + #define JimFreeQualifiedName(INTERP, OBJ) Jim_DecrRefCount((INTERP), (OBJ)) + +#else + + #define JimQualifyName(INTERP, NAME, DUMMY) (((NAME)[0] == ':' && (NAME)[1] == ':') ? 
(NAME) + 2 : (NAME)) + #define JimFreeQualifiedName(INTERP, DUMMY) (void)(DUMMY) + +Jim_Obj *Jim_MakeGlobalNamespaceName(Jim_Interp *interp, Jim_Obj *nameObjPtr) +{ + return nameObjPtr; +} +#endif + +static int JimCreateCommand(Jim_Interp *interp, const char *name, Jim_Cmd *cmd) +{ + Jim_HashEntry *he = Jim_FindHashEntry(&interp->commands, name); + if (he) { + + Jim_InterpIncrProcEpoch(interp); + } + + if (he && interp->local) { + + cmd->prevCmd = Jim_GetHashEntryVal(he); + Jim_SetHashVal(&interp->commands, he, cmd); + } + else { + if (he) { + + Jim_DeleteHashEntry(&interp->commands, name); + } + + Jim_AddHashEntry(&interp->commands, name, cmd); + } + return JIM_OK; +} + + +int Jim_CreateCommand(Jim_Interp *interp, const char *cmdNameStr, + Jim_CmdProc cmdProc, void *privData, Jim_DelCmdProc delProc) +{ + Jim_Cmd *cmdPtr = Jim_Alloc(sizeof(*cmdPtr)); + + + memset(cmdPtr, 0, sizeof(*cmdPtr)); + cmdPtr->inUse = 1; + cmdPtr->u.native.delProc = delProc; + cmdPtr->u.native.cmdProc = cmdProc; + cmdPtr->u.native.privData = privData; + + JimCreateCommand(interp, cmdNameStr, cmdPtr); + + return JIM_OK; +} + +static int JimCreateProcedureStatics(Jim_Interp *interp, Jim_Cmd *cmdPtr, Jim_Obj *staticsListObjPtr) +{ + int len, i; + + len = Jim_ListLength(interp, staticsListObjPtr); + if (len == 0) { + return JIM_OK; + } + + cmdPtr->u.proc.staticVars = Jim_Alloc(sizeof(Jim_HashTable)); + Jim_InitHashTable(cmdPtr->u.proc.staticVars, &JimVariablesHashTableType, interp); + for (i = 0; i < len; i++) { + Jim_Obj *objPtr, *initObjPtr, *nameObjPtr; + Jim_Var *varPtr; + int subLen; + + objPtr = Jim_ListGetIndex(interp, staticsListObjPtr, i); + + subLen = Jim_ListLength(interp, objPtr); + if (subLen == 1 || subLen == 2) { + nameObjPtr = Jim_ListGetIndex(interp, objPtr, 0); + if (subLen == 1) { + initObjPtr = Jim_GetVariable(interp, nameObjPtr, JIM_NONE); + if (initObjPtr == NULL) { + Jim_SetResultFormatted(interp, + "variable for initialization of static \"%#s\" not found in the local context", + nameObjPtr); + return JIM_ERR; + } + } + else { + initObjPtr = Jim_ListGetIndex(interp, objPtr, 1); + } + if (JimValidName(interp, "static variable", nameObjPtr) != JIM_OK) { + return JIM_ERR; + } + + varPtr = Jim_Alloc(sizeof(*varPtr)); + varPtr->objPtr = initObjPtr; + Jim_IncrRefCount(initObjPtr); + varPtr->linkFramePtr = NULL; + if (Jim_AddHashEntry(cmdPtr->u.proc.staticVars, + Jim_String(nameObjPtr), varPtr) != JIM_OK) { + Jim_SetResultFormatted(interp, + "static variable name \"%#s\" duplicated in statics list", nameObjPtr); + Jim_DecrRefCount(interp, initObjPtr); + Jim_Free(varPtr); + return JIM_ERR; + } + } + else { + Jim_SetResultFormatted(interp, "too many fields in static specifier \"%#s\"", + objPtr); + return JIM_ERR; + } + } + return JIM_OK; +} + +static void JimUpdateProcNamespace(Jim_Interp *interp, Jim_Cmd *cmdPtr, const char *cmdname) +{ +#ifdef jim_ext_namespace + if (cmdPtr->isproc) { + + const char *pt = strrchr(cmdname, ':'); + if (pt && pt != cmdname && pt[-1] == ':') { + Jim_DecrRefCount(interp, cmdPtr->u.proc.nsObj); + cmdPtr->u.proc.nsObj = Jim_NewStringObj(interp, cmdname, pt - cmdname - 1); + Jim_IncrRefCount(cmdPtr->u.proc.nsObj); + + if (Jim_FindHashEntry(&interp->commands, pt + 1)) { + + Jim_InterpIncrProcEpoch(interp); + } + } + } +#endif +} + +static Jim_Cmd *JimCreateProcedureCmd(Jim_Interp *interp, Jim_Obj *argListObjPtr, + Jim_Obj *staticsListObjPtr, Jim_Obj *bodyObjPtr, Jim_Obj *nsObj) +{ + Jim_Cmd *cmdPtr; + int argListLen; + int i; + + argListLen = Jim_ListLength(interp, 
argListObjPtr); + + + cmdPtr = Jim_Alloc(sizeof(*cmdPtr) + sizeof(struct Jim_ProcArg) * argListLen); + memset(cmdPtr, 0, sizeof(*cmdPtr)); + cmdPtr->inUse = 1; + cmdPtr->isproc = 1; + cmdPtr->u.proc.argListObjPtr = argListObjPtr; + cmdPtr->u.proc.argListLen = argListLen; + cmdPtr->u.proc.bodyObjPtr = bodyObjPtr; + cmdPtr->u.proc.argsPos = -1; + cmdPtr->u.proc.arglist = (struct Jim_ProcArg *)(cmdPtr + 1); + cmdPtr->u.proc.nsObj = nsObj ? nsObj : interp->emptyObj; + Jim_IncrRefCount(argListObjPtr); + Jim_IncrRefCount(bodyObjPtr); + Jim_IncrRefCount(cmdPtr->u.proc.nsObj); + + + if (staticsListObjPtr && JimCreateProcedureStatics(interp, cmdPtr, staticsListObjPtr) != JIM_OK) { + goto err; + } + + + + for (i = 0; i < argListLen; i++) { + Jim_Obj *argPtr; + Jim_Obj *nameObjPtr; + Jim_Obj *defaultObjPtr; + int len; + + + argPtr = Jim_ListGetIndex(interp, argListObjPtr, i); + len = Jim_ListLength(interp, argPtr); + if (len == 0) { + Jim_SetResultString(interp, "argument with no name", -1); +err: + JimDecrCmdRefCount(interp, cmdPtr); + return NULL; + } + if (len > 2) { + Jim_SetResultFormatted(interp, "too many fields in argument specifier \"%#s\"", argPtr); + goto err; + } + + if (len == 2) { + + nameObjPtr = Jim_ListGetIndex(interp, argPtr, 0); + defaultObjPtr = Jim_ListGetIndex(interp, argPtr, 1); + } + else { + + nameObjPtr = argPtr; + defaultObjPtr = NULL; + } + + + if (Jim_CompareStringImmediate(interp, nameObjPtr, "args")) { + if (cmdPtr->u.proc.argsPos >= 0) { + Jim_SetResultString(interp, "'args' specified more than once", -1); + goto err; + } + cmdPtr->u.proc.argsPos = i; + } + else { + if (len == 2) { + cmdPtr->u.proc.optArity++; + } + else { + cmdPtr->u.proc.reqArity++; + } + } + + cmdPtr->u.proc.arglist[i].nameObjPtr = nameObjPtr; + cmdPtr->u.proc.arglist[i].defaultObjPtr = defaultObjPtr; + } + + return cmdPtr; +} + +int Jim_DeleteCommand(Jim_Interp *interp, const char *name) +{ + int ret = JIM_OK; + Jim_Obj *qualifiedNameObj; + const char *qualname = JimQualifyName(interp, name, &qualifiedNameObj); + + if (Jim_DeleteHashEntry(&interp->commands, qualname) == JIM_ERR) { + Jim_SetResultFormatted(interp, "can't delete \"%s\": command doesn't exist", name); + ret = JIM_ERR; + } + else { + Jim_InterpIncrProcEpoch(interp); + } + + JimFreeQualifiedName(interp, qualifiedNameObj); + + return ret; +} + +int Jim_RenameCommand(Jim_Interp *interp, const char *oldName, const char *newName) +{ + int ret = JIM_ERR; + Jim_HashEntry *he; + Jim_Cmd *cmdPtr; + Jim_Obj *qualifiedOldNameObj; + Jim_Obj *qualifiedNewNameObj; + const char *fqold; + const char *fqnew; + + if (newName[0] == 0) { + return Jim_DeleteCommand(interp, oldName); + } + + fqold = JimQualifyName(interp, oldName, &qualifiedOldNameObj); + fqnew = JimQualifyName(interp, newName, &qualifiedNewNameObj); + + + he = Jim_FindHashEntry(&interp->commands, fqold); + if (he == NULL) { + Jim_SetResultFormatted(interp, "can't rename \"%s\": command doesn't exist", oldName); + } + else if (Jim_FindHashEntry(&interp->commands, fqnew)) { + Jim_SetResultFormatted(interp, "can't rename to \"%s\": command already exists", newName); + } + else { + + cmdPtr = Jim_GetHashEntryVal(he); + JimIncrCmdRefCount(cmdPtr); + JimUpdateProcNamespace(interp, cmdPtr, fqnew); + Jim_AddHashEntry(&interp->commands, fqnew, cmdPtr); + + + Jim_DeleteHashEntry(&interp->commands, fqold); + + + Jim_InterpIncrProcEpoch(interp); + + ret = JIM_OK; + } + + JimFreeQualifiedName(interp, qualifiedOldNameObj); + JimFreeQualifiedName(interp, qualifiedNewNameObj); + + return ret; +} + + 
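+/* Cached command lookups: a Jim_Obj of type "command" stores the resolved
+ * Jim_Cmd pointer along with the procEpoch and namespace it was resolved
+ * against, so Jim_GetCommand can detect stale entries and re-resolve them. */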
+static void FreeCommandInternalRep(Jim_Interp *interp, Jim_Obj *objPtr) +{ + Jim_DecrRefCount(interp, objPtr->internalRep.cmdValue.nsObj); +} + +static void DupCommandInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr) +{ + dupPtr->internalRep.cmdValue = srcPtr->internalRep.cmdValue; + dupPtr->typePtr = srcPtr->typePtr; + Jim_IncrRefCount(dupPtr->internalRep.cmdValue.nsObj); +} + +static const Jim_ObjType commandObjType = { + "command", + FreeCommandInternalRep, + DupCommandInternalRep, + NULL, + JIM_TYPE_REFERENCES, +}; + +Jim_Cmd *Jim_GetCommand(Jim_Interp *interp, Jim_Obj *objPtr, int flags) +{ + Jim_Cmd *cmd; + + if (objPtr->typePtr != &commandObjType || + objPtr->internalRep.cmdValue.procEpoch != interp->procEpoch +#ifdef jim_ext_namespace + || !Jim_StringEqObj(objPtr->internalRep.cmdValue.nsObj, interp->framePtr->nsObj) +#endif + ) { + + + + const char *name = Jim_String(objPtr); + Jim_HashEntry *he; + + if (name[0] == ':' && name[1] == ':') { + while (*++name == ':') { + } + } +#ifdef jim_ext_namespace + else if (Jim_Length(interp->framePtr->nsObj)) { + + Jim_Obj *nameObj = Jim_DuplicateObj(interp, interp->framePtr->nsObj); + Jim_AppendStrings(interp, nameObj, "::", name, NULL); + he = Jim_FindHashEntry(&interp->commands, Jim_String(nameObj)); + Jim_FreeNewObj(interp, nameObj); + if (he) { + goto found; + } + } +#endif + + + he = Jim_FindHashEntry(&interp->commands, name); + if (he == NULL) { + if (flags & JIM_ERRMSG) { + Jim_SetResultFormatted(interp, "invalid command name \"%#s\"", objPtr); + } + return NULL; + } +#ifdef jim_ext_namespace +found: +#endif + cmd = Jim_GetHashEntryVal(he); + + + Jim_FreeIntRep(interp, objPtr); + objPtr->typePtr = &commandObjType; + objPtr->internalRep.cmdValue.procEpoch = interp->procEpoch; + objPtr->internalRep.cmdValue.cmdPtr = cmd; + objPtr->internalRep.cmdValue.nsObj = interp->framePtr->nsObj; + Jim_IncrRefCount(interp->framePtr->nsObj); + } + else { + cmd = objPtr->internalRep.cmdValue.cmdPtr; + } + while (cmd->u.proc.upcall) { + cmd = cmd->prevCmd; + } + return cmd; +} + + + +#define JIM_DICT_SUGAR 100 + +static int SetVariableFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr); + +static const Jim_ObjType variableObjType = { + "variable", + NULL, + NULL, + NULL, + JIM_TYPE_REFERENCES, +}; + +static int JimValidName(Jim_Interp *interp, const char *type, Jim_Obj *nameObjPtr) +{ + + if (nameObjPtr->typePtr != &variableObjType) { + int len; + const char *str = Jim_GetString(nameObjPtr, &len); + if (memchr(str, '\0', len)) { + Jim_SetResultFormatted(interp, "%s name contains embedded null", type); + return JIM_ERR; + } + } + return JIM_OK; +} + +static int SetVariableFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr) +{ + const char *varName; + Jim_CallFrame *framePtr; + Jim_HashEntry *he; + int global; + int len; + + + if (objPtr->typePtr == &variableObjType) { + framePtr = objPtr->internalRep.varValue.global ? 
interp->topFramePtr : interp->framePtr; + if (objPtr->internalRep.varValue.callFrameId == framePtr->id) { + + return JIM_OK; + } + + } + else if (objPtr->typePtr == &dictSubstObjType) { + return JIM_DICT_SUGAR; + } + else if (JimValidName(interp, "variable", objPtr) != JIM_OK) { + return JIM_ERR; + } + + + varName = Jim_GetString(objPtr, &len); + + + if (len && varName[len - 1] == ')' && strchr(varName, '(') != NULL) { + return JIM_DICT_SUGAR; + } + + if (varName[0] == ':' && varName[1] == ':') { + while (*++varName == ':') { + } + global = 1; + framePtr = interp->topFramePtr; + } + else { + global = 0; + framePtr = interp->framePtr; + } + + + he = Jim_FindHashEntry(&framePtr->vars, varName); + if (he == NULL) { + if (!global && framePtr->staticVars) { + + he = Jim_FindHashEntry(framePtr->staticVars, varName); + } + if (he == NULL) { + return JIM_ERR; + } + } + + + Jim_FreeIntRep(interp, objPtr); + objPtr->typePtr = &variableObjType; + objPtr->internalRep.varValue.callFrameId = framePtr->id; + objPtr->internalRep.varValue.varPtr = Jim_GetHashEntryVal(he); + objPtr->internalRep.varValue.global = global; + return JIM_OK; +} + + +static int JimDictSugarSet(Jim_Interp *interp, Jim_Obj *ObjPtr, Jim_Obj *valObjPtr); +static Jim_Obj *JimDictSugarGet(Jim_Interp *interp, Jim_Obj *ObjPtr, int flags); + +static Jim_Var *JimCreateVariable(Jim_Interp *interp, Jim_Obj *nameObjPtr, Jim_Obj *valObjPtr) +{ + const char *name; + Jim_CallFrame *framePtr; + int global; + + + Jim_Var *var = Jim_Alloc(sizeof(*var)); + + var->objPtr = valObjPtr; + Jim_IncrRefCount(valObjPtr); + var->linkFramePtr = NULL; + + name = Jim_String(nameObjPtr); + if (name[0] == ':' && name[1] == ':') { + while (*++name == ':') { + } + framePtr = interp->topFramePtr; + global = 1; + } + else { + framePtr = interp->framePtr; + global = 0; + } + + + Jim_AddHashEntry(&framePtr->vars, name, var); + + + Jim_FreeIntRep(interp, nameObjPtr); + nameObjPtr->typePtr = &variableObjType; + nameObjPtr->internalRep.varValue.callFrameId = framePtr->id; + nameObjPtr->internalRep.varValue.varPtr = var; + nameObjPtr->internalRep.varValue.global = global; + + return var; +} + + +int Jim_SetVariable(Jim_Interp *interp, Jim_Obj *nameObjPtr, Jim_Obj *valObjPtr) +{ + int err; + Jim_Var *var; + + switch (SetVariableFromAny(interp, nameObjPtr)) { + case JIM_DICT_SUGAR: + return JimDictSugarSet(interp, nameObjPtr, valObjPtr); + + case JIM_ERR: + if (JimValidName(interp, "variable", nameObjPtr) != JIM_OK) { + return JIM_ERR; + } + JimCreateVariable(interp, nameObjPtr, valObjPtr); + break; + + case JIM_OK: + var = nameObjPtr->internalRep.varValue.varPtr; + if (var->linkFramePtr == NULL) { + Jim_IncrRefCount(valObjPtr); + Jim_DecrRefCount(interp, var->objPtr); + var->objPtr = valObjPtr; + } + else { + Jim_CallFrame *savedCallFrame; + + savedCallFrame = interp->framePtr; + interp->framePtr = var->linkFramePtr; + err = Jim_SetVariable(interp, var->objPtr, valObjPtr); + interp->framePtr = savedCallFrame; + if (err != JIM_OK) + return err; + } + } + return JIM_OK; +} + +int Jim_SetVariableStr(Jim_Interp *interp, const char *name, Jim_Obj *objPtr) +{ + Jim_Obj *nameObjPtr; + int result; + + nameObjPtr = Jim_NewStringObj(interp, name, -1); + Jim_IncrRefCount(nameObjPtr); + result = Jim_SetVariable(interp, nameObjPtr, objPtr); + Jim_DecrRefCount(interp, nameObjPtr); + return result; +} + +int Jim_SetGlobalVariableStr(Jim_Interp *interp, const char *name, Jim_Obj *objPtr) +{ + Jim_CallFrame *savedFramePtr; + int result; + + savedFramePtr = interp->framePtr; + 
interp->framePtr = interp->topFramePtr; + result = Jim_SetVariableStr(interp, name, objPtr); + interp->framePtr = savedFramePtr; + return result; +} + +int Jim_SetVariableStrWithStr(Jim_Interp *interp, const char *name, const char *val) +{ + Jim_Obj *nameObjPtr, *valObjPtr; + int result; + + nameObjPtr = Jim_NewStringObj(interp, name, -1); + valObjPtr = Jim_NewStringObj(interp, val, -1); + Jim_IncrRefCount(nameObjPtr); + Jim_IncrRefCount(valObjPtr); + result = Jim_SetVariable(interp, nameObjPtr, valObjPtr); + Jim_DecrRefCount(interp, nameObjPtr); + Jim_DecrRefCount(interp, valObjPtr); + return result; +} + +int Jim_SetVariableLink(Jim_Interp *interp, Jim_Obj *nameObjPtr, + Jim_Obj *targetNameObjPtr, Jim_CallFrame *targetCallFrame) +{ + const char *varName; + const char *targetName; + Jim_CallFrame *framePtr; + Jim_Var *varPtr; + + + switch (SetVariableFromAny(interp, nameObjPtr)) { + case JIM_DICT_SUGAR: + + Jim_SetResultFormatted(interp, "bad variable name \"%#s\": upvar won't create a scalar variable that looks like an array element", nameObjPtr); + return JIM_ERR; + + case JIM_OK: + varPtr = nameObjPtr->internalRep.varValue.varPtr; + + if (varPtr->linkFramePtr == NULL) { + Jim_SetResultFormatted(interp, "variable \"%#s\" already exists", nameObjPtr); + return JIM_ERR; + } + + + varPtr->linkFramePtr = NULL; + break; + } + + + + varName = Jim_String(nameObjPtr); + + if (varName[0] == ':' && varName[1] == ':') { + while (*++varName == ':') { + } + + framePtr = interp->topFramePtr; + } + else { + framePtr = interp->framePtr; + } + + targetName = Jim_String(targetNameObjPtr); + if (targetName[0] == ':' && targetName[1] == ':') { + while (*++targetName == ':') { + } + targetNameObjPtr = Jim_NewStringObj(interp, targetName, -1); + targetCallFrame = interp->topFramePtr; + } + Jim_IncrRefCount(targetNameObjPtr); + + if (framePtr->level < targetCallFrame->level) { + Jim_SetResultFormatted(interp, + "bad variable name \"%#s\": upvar won't create namespace variable that refers to procedure variable", + nameObjPtr); + Jim_DecrRefCount(interp, targetNameObjPtr); + return JIM_ERR; + } + + + if (framePtr == targetCallFrame) { + Jim_Obj *objPtr = targetNameObjPtr; + + + while (1) { + if (strcmp(Jim_String(objPtr), varName) == 0) { + Jim_SetResultString(interp, "can't upvar from variable to itself", -1); + Jim_DecrRefCount(interp, targetNameObjPtr); + return JIM_ERR; + } + if (SetVariableFromAny(interp, objPtr) != JIM_OK) + break; + varPtr = objPtr->internalRep.varValue.varPtr; + if (varPtr->linkFramePtr != targetCallFrame) + break; + objPtr = varPtr->objPtr; + } + } + + + Jim_SetVariable(interp, nameObjPtr, targetNameObjPtr); + + nameObjPtr->internalRep.varValue.varPtr->linkFramePtr = targetCallFrame; + Jim_DecrRefCount(interp, targetNameObjPtr); + return JIM_OK; +} + +Jim_Obj *Jim_GetVariable(Jim_Interp *interp, Jim_Obj *nameObjPtr, int flags) +{ + switch (SetVariableFromAny(interp, nameObjPtr)) { + case JIM_OK:{ + Jim_Var *varPtr = nameObjPtr->internalRep.varValue.varPtr; + + if (varPtr->linkFramePtr == NULL) { + return varPtr->objPtr; + } + else { + Jim_Obj *objPtr; + + + Jim_CallFrame *savedCallFrame = interp->framePtr; + + interp->framePtr = varPtr->linkFramePtr; + objPtr = Jim_GetVariable(interp, varPtr->objPtr, flags); + interp->framePtr = savedCallFrame; + if (objPtr) { + return objPtr; + } + + } + } + break; + + case JIM_DICT_SUGAR: + + return JimDictSugarGet(interp, nameObjPtr, flags); + } + if (flags & JIM_ERRMSG) { + Jim_SetResultFormatted(interp, "can't read \"%#s\": no such variable", 
nameObjPtr); + } + return NULL; +} + +Jim_Obj *Jim_GetGlobalVariable(Jim_Interp *interp, Jim_Obj *nameObjPtr, int flags) +{ + Jim_CallFrame *savedFramePtr; + Jim_Obj *objPtr; + + savedFramePtr = interp->framePtr; + interp->framePtr = interp->topFramePtr; + objPtr = Jim_GetVariable(interp, nameObjPtr, flags); + interp->framePtr = savedFramePtr; + + return objPtr; +} + +Jim_Obj *Jim_GetVariableStr(Jim_Interp *interp, const char *name, int flags) +{ + Jim_Obj *nameObjPtr, *varObjPtr; + + nameObjPtr = Jim_NewStringObj(interp, name, -1); + Jim_IncrRefCount(nameObjPtr); + varObjPtr = Jim_GetVariable(interp, nameObjPtr, flags); + Jim_DecrRefCount(interp, nameObjPtr); + return varObjPtr; +} + +Jim_Obj *Jim_GetGlobalVariableStr(Jim_Interp *interp, const char *name, int flags) +{ + Jim_CallFrame *savedFramePtr; + Jim_Obj *objPtr; + + savedFramePtr = interp->framePtr; + interp->framePtr = interp->topFramePtr; + objPtr = Jim_GetVariableStr(interp, name, flags); + interp->framePtr = savedFramePtr; + + return objPtr; +} + +int Jim_UnsetVariable(Jim_Interp *interp, Jim_Obj *nameObjPtr, int flags) +{ + Jim_Var *varPtr; + int retval; + Jim_CallFrame *framePtr; + + retval = SetVariableFromAny(interp, nameObjPtr); + if (retval == JIM_DICT_SUGAR) { + + return JimDictSugarSet(interp, nameObjPtr, NULL); + } + else if (retval == JIM_OK) { + varPtr = nameObjPtr->internalRep.varValue.varPtr; + + + if (varPtr->linkFramePtr) { + framePtr = interp->framePtr; + interp->framePtr = varPtr->linkFramePtr; + retval = Jim_UnsetVariable(interp, varPtr->objPtr, JIM_NONE); + interp->framePtr = framePtr; + } + else { + const char *name = Jim_String(nameObjPtr); + if (nameObjPtr->internalRep.varValue.global) { + name += 2; + framePtr = interp->topFramePtr; + } + else { + framePtr = interp->framePtr; + } + + retval = Jim_DeleteHashEntry(&framePtr->vars, name); + if (retval == JIM_OK) { + + framePtr->id = interp->callFrameEpoch++; + } + } + } + if (retval != JIM_OK && (flags & JIM_ERRMSG)) { + Jim_SetResultFormatted(interp, "can't unset \"%#s\": no such variable", nameObjPtr); + } + return retval; +} + + + +static void JimDictSugarParseVarKey(Jim_Interp *interp, Jim_Obj *objPtr, + Jim_Obj **varPtrPtr, Jim_Obj **keyPtrPtr) +{ + const char *str, *p; + int len, keyLen; + Jim_Obj *varObjPtr, *keyObjPtr; + + str = Jim_GetString(objPtr, &len); + + p = strchr(str, '('); + JimPanic((p == NULL, "JimDictSugarParseVarKey() called for non-dict-sugar (%s)", str)); + + varObjPtr = Jim_NewStringObj(interp, str, p - str); + + p++; + keyLen = (str + len) - p; + if (str[len - 1] == ')') { + keyLen--; + } + + + keyObjPtr = Jim_NewStringObj(interp, p, keyLen); + + Jim_IncrRefCount(varObjPtr); + Jim_IncrRefCount(keyObjPtr); + *varPtrPtr = varObjPtr; + *keyPtrPtr = keyObjPtr; +} + +static int JimDictSugarSet(Jim_Interp *interp, Jim_Obj *objPtr, Jim_Obj *valObjPtr) +{ + int err; + + SetDictSubstFromAny(interp, objPtr); + + err = Jim_SetDictKeysVector(interp, objPtr->internalRep.dictSubstValue.varNameObjPtr, + &objPtr->internalRep.dictSubstValue.indexObjPtr, 1, valObjPtr, JIM_MUSTEXIST); + + if (err == JIM_OK) { + + Jim_SetEmptyResult(interp); + } + else { + if (!valObjPtr) { + + if (Jim_GetVariable(interp, objPtr->internalRep.dictSubstValue.varNameObjPtr, JIM_NONE)) { + Jim_SetResultFormatted(interp, "can't unset \"%#s\": no such element in array", + objPtr); + return err; + } + } + + Jim_SetResultFormatted(interp, "can't %s \"%#s\": variable isn't array", + (valObjPtr ? 
"set" : "unset"), objPtr); + } + return err; +} + +static Jim_Obj *JimDictExpandArrayVariable(Jim_Interp *interp, Jim_Obj *varObjPtr, + Jim_Obj *keyObjPtr, int flags) +{ + Jim_Obj *dictObjPtr; + Jim_Obj *resObjPtr = NULL; + int ret; + + dictObjPtr = Jim_GetVariable(interp, varObjPtr, JIM_ERRMSG); + if (!dictObjPtr) { + return NULL; + } + + ret = Jim_DictKey(interp, dictObjPtr, keyObjPtr, &resObjPtr, JIM_NONE); + if (ret != JIM_OK) { + Jim_SetResultFormatted(interp, + "can't read \"%#s(%#s)\": %s array", varObjPtr, keyObjPtr, + ret < 0 ? "variable isn't" : "no such element in"); + } + else if ((flags & JIM_UNSHARED) && Jim_IsShared(dictObjPtr)) { + + Jim_SetVariable(interp, varObjPtr, Jim_DuplicateObj(interp, dictObjPtr)); + } + + return resObjPtr; +} + + +static Jim_Obj *JimDictSugarGet(Jim_Interp *interp, Jim_Obj *objPtr, int flags) +{ + SetDictSubstFromAny(interp, objPtr); + + return JimDictExpandArrayVariable(interp, + objPtr->internalRep.dictSubstValue.varNameObjPtr, + objPtr->internalRep.dictSubstValue.indexObjPtr, flags); +} + + + +void FreeDictSubstInternalRep(Jim_Interp *interp, Jim_Obj *objPtr) +{ + Jim_DecrRefCount(interp, objPtr->internalRep.dictSubstValue.varNameObjPtr); + Jim_DecrRefCount(interp, objPtr->internalRep.dictSubstValue.indexObjPtr); +} + +void DupDictSubstInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr) +{ + JIM_NOTUSED(interp); + + dupPtr->internalRep.dictSubstValue.varNameObjPtr = + srcPtr->internalRep.dictSubstValue.varNameObjPtr; + dupPtr->internalRep.dictSubstValue.indexObjPtr = srcPtr->internalRep.dictSubstValue.indexObjPtr; + dupPtr->typePtr = &dictSubstObjType; +} + + +static void SetDictSubstFromAny(Jim_Interp *interp, Jim_Obj *objPtr) +{ + if (objPtr->typePtr != &dictSubstObjType) { + Jim_Obj *varObjPtr, *keyObjPtr; + + if (objPtr->typePtr == &interpolatedObjType) { + + + varObjPtr = objPtr->internalRep.dictSubstValue.varNameObjPtr; + keyObjPtr = objPtr->internalRep.dictSubstValue.indexObjPtr; + + Jim_IncrRefCount(varObjPtr); + Jim_IncrRefCount(keyObjPtr); + } + else { + JimDictSugarParseVarKey(interp, objPtr, &varObjPtr, &keyObjPtr); + } + + Jim_FreeIntRep(interp, objPtr); + objPtr->typePtr = &dictSubstObjType; + objPtr->internalRep.dictSubstValue.varNameObjPtr = varObjPtr; + objPtr->internalRep.dictSubstValue.indexObjPtr = keyObjPtr; + } +} + +static Jim_Obj *JimExpandDictSugar(Jim_Interp *interp, Jim_Obj *objPtr) +{ + Jim_Obj *resObjPtr = NULL; + Jim_Obj *substKeyObjPtr = NULL; + + SetDictSubstFromAny(interp, objPtr); + + if (Jim_SubstObj(interp, objPtr->internalRep.dictSubstValue.indexObjPtr, + &substKeyObjPtr, JIM_NONE) + != JIM_OK) { + return NULL; + } + Jim_IncrRefCount(substKeyObjPtr); + resObjPtr = + JimDictExpandArrayVariable(interp, objPtr->internalRep.dictSubstValue.varNameObjPtr, + substKeyObjPtr, 0); + Jim_DecrRefCount(interp, substKeyObjPtr); + + return resObjPtr; +} + +static Jim_Obj *JimExpandExprSugar(Jim_Interp *interp, Jim_Obj *objPtr) +{ + Jim_Obj *resultObjPtr; + + if (Jim_EvalExpression(interp, objPtr, &resultObjPtr) == JIM_OK) { + + resultObjPtr->refCount--; + return resultObjPtr; + } + return NULL; +} + + +static Jim_CallFrame *JimCreateCallFrame(Jim_Interp *interp, Jim_CallFrame *parent, Jim_Obj *nsObj) +{ + Jim_CallFrame *cf; + + if (interp->freeFramesList) { + cf = interp->freeFramesList; + interp->freeFramesList = cf->next; + + cf->argv = NULL; + cf->argc = 0; + cf->procArgsObjPtr = NULL; + cf->procBodyObjPtr = NULL; + cf->next = NULL; + cf->staticVars = NULL; + cf->localCommands = NULL; + cf->tailcall = 0; + 
cf->tailcallObj = NULL; + cf->tailcallCmd = NULL; + } + else { + cf = Jim_Alloc(sizeof(*cf)); + memset(cf, 0, sizeof(*cf)); + + Jim_InitHashTable(&cf->vars, &JimVariablesHashTableType, interp); + } + + cf->id = interp->callFrameEpoch++; + cf->parent = parent; + cf->level = parent ? parent->level + 1 : 0; + cf->nsObj = nsObj; + Jim_IncrRefCount(nsObj); + + return cf; +} + +static int JimDeleteLocalProcs(Jim_Interp *interp, Jim_Stack *localCommands) +{ + + if (localCommands) { + Jim_Obj *cmdNameObj; + + while ((cmdNameObj = Jim_StackPop(localCommands)) != NULL) { + Jim_HashEntry *he; + Jim_Obj *fqObjName; + Jim_HashTable *ht = &interp->commands; + + const char *fqname = JimQualifyName(interp, Jim_String(cmdNameObj), &fqObjName); + + he = Jim_FindHashEntry(ht, fqname); + + if (he) { + Jim_Cmd *cmd = Jim_GetHashEntryVal(he); + if (cmd->prevCmd) { + Jim_Cmd *prevCmd = cmd->prevCmd; + cmd->prevCmd = NULL; + + + JimDecrCmdRefCount(interp, cmd); + + + Jim_SetHashVal(ht, he, prevCmd); + } + else { + Jim_DeleteHashEntry(ht, fqname); + Jim_InterpIncrProcEpoch(interp); + } + } + Jim_DecrRefCount(interp, cmdNameObj); + JimFreeQualifiedName(interp, fqObjName); + } + Jim_FreeStack(localCommands); + Jim_Free(localCommands); + } + return JIM_OK; +} + + +#define JIM_FCF_FULL 0 +#define JIM_FCF_REUSE 1 +static void JimFreeCallFrame(Jim_Interp *interp, Jim_CallFrame *cf, int action) + { + JimDeleteLocalProcs(interp, cf->localCommands); + + if (cf->procArgsObjPtr) + Jim_DecrRefCount(interp, cf->procArgsObjPtr); + if (cf->procBodyObjPtr) + Jim_DecrRefCount(interp, cf->procBodyObjPtr); + Jim_DecrRefCount(interp, cf->nsObj); + if (action == JIM_FCF_FULL || cf->vars.size != JIM_HT_INITIAL_SIZE) + Jim_FreeHashTable(&cf->vars); + else { + int i; + Jim_HashEntry **table = cf->vars.table, *he; + + for (i = 0; i < JIM_HT_INITIAL_SIZE; i++) { + he = table[i]; + while (he != NULL) { + Jim_HashEntry *nextEntry = he->next; + Jim_Var *varPtr = Jim_GetHashEntryVal(he); + + Jim_DecrRefCount(interp, varPtr->objPtr); + Jim_Free(Jim_GetHashEntryKey(he)); + Jim_Free(varPtr); + Jim_Free(he); + table[i] = NULL; + he = nextEntry; + } + } + cf->vars.used = 0; + } + cf->next = interp->freeFramesList; + interp->freeFramesList = cf; +} + + +#ifdef JIM_REFERENCES + +static void JimReferencesHTValDestructor(void *interp, void *val) +{ + Jim_Reference *refPtr = (void *)val; + + Jim_DecrRefCount(interp, refPtr->objPtr); + if (refPtr->finalizerCmdNamePtr != NULL) { + Jim_DecrRefCount(interp, refPtr->finalizerCmdNamePtr); + } + Jim_Free(val); +} + +static unsigned int JimReferencesHTHashFunction(const void *key) +{ + + const unsigned long *widePtr = key; + unsigned int intValue = (unsigned int)*widePtr; + + return Jim_IntHashFunction(intValue); +} + +static void *JimReferencesHTKeyDup(void *privdata, const void *key) +{ + void *copy = Jim_Alloc(sizeof(unsigned long)); + + JIM_NOTUSED(privdata); + + memcpy(copy, key, sizeof(unsigned long)); + return copy; +} + +static int JimReferencesHTKeyCompare(void *privdata, const void *key1, const void *key2) +{ + JIM_NOTUSED(privdata); + + return memcmp(key1, key2, sizeof(unsigned long)) == 0; +} + +static void JimReferencesHTKeyDestructor(void *privdata, void *key) +{ + JIM_NOTUSED(privdata); + + Jim_Free(key); +} + +static const Jim_HashTableType JimReferencesHashTableType = { + JimReferencesHTHashFunction, + JimReferencesHTKeyDup, + NULL, + JimReferencesHTKeyCompare, + JimReferencesHTKeyDestructor, + JimReferencesHTValDestructor +}; + + + +#define JIM_REFERENCE_SPACE (35+JIM_REFERENCE_TAGLEN) + 
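+/* References print as "<reference.<tag>.<id>": a JIM_REFERENCE_TAGLEN character
+ * tag followed by a zero-padded 20 digit decimal id, JIM_REFERENCE_SPACE bytes
+ * in total. JimFormatReference() builds that string and SetReferenceFromAny()
+ * parses it back (format as produced by JimFormatReference() below). */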
+static int JimFormatReference(char *buf, Jim_Reference *refPtr, unsigned long id)
+{
+    const char *fmt = "<reference.<%s>.%020lu>";
+
+    sprintf(buf, fmt, refPtr->tag, id);
+    return JIM_REFERENCE_SPACE;
+}
+
+static void UpdateStringOfReference(struct Jim_Obj *objPtr);
+
+static const Jim_ObjType referenceObjType = {
+    "reference",
+    NULL,
+    NULL,
+    UpdateStringOfReference,
+    JIM_TYPE_REFERENCES,
+};
+
+static void UpdateStringOfReference(struct Jim_Obj *objPtr)
+{
+    char buf[JIM_REFERENCE_SPACE + 1];
+
+    JimFormatReference(buf, objPtr->internalRep.refValue.refPtr, objPtr->internalRep.refValue.id);
+    JimSetStringBytes(objPtr, buf);
+}
+
+static int isrefchar(int c)
+{
+    return (c == '_' || isalnum(c));
+}
+
+static int SetReferenceFromAny(Jim_Interp *interp, Jim_Obj *objPtr)
+{
+    unsigned long value;
+    int i, len;
+    const char *str, *start, *end;
+    char refId[21];
+    Jim_Reference *refPtr;
+    Jim_HashEntry *he;
+    char *endptr;
+
+    str = Jim_GetString(objPtr, &len);
+
+    if (len < JIM_REFERENCE_SPACE)
+        goto badformat;
+
+    start = str;
+    end = str + len - 1;
+    while (*start == ' ')
+        start++;
+    while (*end == ' ' && end > start)
+        end--;
+    if (end - start + 1 != JIM_REFERENCE_SPACE)
+        goto badformat;
+
+    /* Restored from the upstream Jim Tcl source (the original lines were lost
+     * in extraction): validate the "<reference.<tag>.<id>" wrapper, then parse
+     * the zero-padded id and look it up in the references table. */
+    if (memcmp(start, "<reference.<", 12) != 0 ||
+        start[12 + JIM_REFERENCE_TAGLEN] != '>' || end[0] != '>') {
+        goto badformat;
+    }
+
+    memcpy(refId, start + 14 + JIM_REFERENCE_TAGLEN, 20);
+    refId[20] = '\0';
+
+    value = strtoul(refId, &endptr, 10);
+    if (JimCheckConversion(refId, endptr) != JIM_OK)
+        goto badformat;
+
+    he = Jim_FindHashEntry(&interp->references, &value);
+    if (he == NULL) {
+        Jim_SetResultFormatted(interp, "invalid reference id \"%#s\"", objPtr);
+        return JIM_ERR;
+    }
+    refPtr = Jim_GetHashEntryVal(he);
+
+    Jim_FreeIntRep(interp, objPtr);
+    objPtr->typePtr = &referenceObjType;
+    objPtr->internalRep.refValue.id = value;
+    objPtr->internalRep.refValue.refPtr = refPtr;
+    return JIM_OK;
+
+  badformat:
+    Jim_SetResultFormatted(interp, "expected reference but got \"%#s\"", objPtr);
+    return JIM_ERR;
+}
+
+Jim_Obj *Jim_NewReference(Jim_Interp *interp, Jim_Obj *objPtr, Jim_Obj *tagPtr, Jim_Obj *cmdNamePtr)
+{
+    struct Jim_Reference *refPtr;
+    unsigned long id;
+    Jim_Obj *refObjPtr;
+    const char *tag;
+    int tagLen, i;
+
+    Jim_CollectIfNeeded(interp);
+
+    refPtr = Jim_Alloc(sizeof(*refPtr));
+    refPtr->objPtr = objPtr;
+    Jim_IncrRefCount(objPtr);
+    refPtr->finalizerCmdNamePtr = cmdNamePtr;
+    if (cmdNamePtr)
+        Jim_IncrRefCount(cmdNamePtr);
+    id = interp->referenceNextId++;
+    Jim_AddHashEntry(&interp->references, &id, refPtr);
+    refObjPtr = Jim_NewObj(interp);
+    refObjPtr->typePtr = &referenceObjType;
+    refObjPtr->bytes = NULL;
+    refObjPtr->internalRep.refValue.id = id;
+    refObjPtr->internalRep.refValue.refPtr = refPtr;
+    interp->referenceNextId++;
+    tag = Jim_GetString(tagPtr, &tagLen);
+    if (tagLen > JIM_REFERENCE_TAGLEN)
+        tagLen = JIM_REFERENCE_TAGLEN;
+    for (i = 0; i < JIM_REFERENCE_TAGLEN; i++) {
+        if (i < tagLen && isrefchar(tag[i]))
+            refPtr->tag[i] = tag[i];
+        else
+            refPtr->tag[i] = '_';
+    }
+    refPtr->tag[JIM_REFERENCE_TAGLEN] = '\0';
+    return refObjPtr;
+}
+
+Jim_Reference *Jim_GetReference(Jim_Interp *interp, Jim_Obj *objPtr)
+{
+    if (objPtr->typePtr != &referenceObjType && SetReferenceFromAny(interp, objPtr) == JIM_ERR)
+        return NULL;
+    return objPtr->internalRep.refValue.refPtr;
+}
+
+int Jim_SetFinalizer(Jim_Interp *interp, Jim_Obj *objPtr, Jim_Obj *cmdNamePtr)
+{
+    Jim_Reference *refPtr;
+
+    if ((refPtr = Jim_GetReference(interp, objPtr)) == NULL)
+        return JIM_ERR;
+    Jim_IncrRefCount(cmdNamePtr);
+    if (refPtr->finalizerCmdNamePtr)
+        Jim_DecrRefCount(interp, refPtr->finalizerCmdNamePtr);
+    refPtr->finalizerCmdNamePtr = cmdNamePtr;
+    return JIM_OK;
+}
+
+int Jim_GetFinalizer(Jim_Interp *interp, Jim_Obj *objPtr, Jim_Obj **cmdNamePtrPtr)
+{
+    Jim_Reference *refPtr;
+
+    if ((refPtr = Jim_GetReference(interp, objPtr))
== NULL) + return JIM_ERR; + *cmdNamePtrPtr = refPtr->finalizerCmdNamePtr; + return JIM_OK; +} + + + +static const Jim_HashTableType JimRefMarkHashTableType = { + JimReferencesHTHashFunction, + JimReferencesHTKeyDup, + NULL, + JimReferencesHTKeyCompare, + JimReferencesHTKeyDestructor, + NULL +}; + + +int Jim_Collect(Jim_Interp *interp) +{ + int collected = 0; + return collected; +} + +#define JIM_COLLECT_ID_PERIOD 5000 +#define JIM_COLLECT_TIME_PERIOD 300 + +void Jim_CollectIfNeeded(Jim_Interp *interp) +{ + unsigned long elapsedId; + int elapsedTime; + + elapsedId = interp->referenceNextId - interp->lastCollectId; + elapsedTime = time(NULL) - interp->lastCollectTime; + + + if (elapsedId > JIM_COLLECT_ID_PERIOD || elapsedTime > JIM_COLLECT_TIME_PERIOD) { + Jim_Collect(interp); + } +} +#endif + +int Jim_IsBigEndian(void) +{ + union { + unsigned short s; + unsigned char c[2]; + } uval = {0x0102}; + + return uval.c[0] == 1; +} + + +Jim_Interp *Jim_CreateInterp(void) +{ + Jim_Interp *i = Jim_Alloc(sizeof(*i)); + + memset(i, 0, sizeof(*i)); + + i->maxCallFrameDepth = JIM_MAX_CALLFRAME_DEPTH; + i->maxEvalDepth = JIM_MAX_EVAL_DEPTH; + i->lastCollectTime = time(NULL); + + Jim_InitHashTable(&i->commands, &JimCommandsHashTableType, i); +#ifdef JIM_REFERENCES + Jim_InitHashTable(&i->references, &JimReferencesHashTableType, i); +#endif + Jim_InitHashTable(&i->assocData, &JimAssocDataHashTableType, i); + Jim_InitHashTable(&i->packages, &JimPackageHashTableType, NULL); + i->emptyObj = Jim_NewEmptyStringObj(i); + i->trueObj = Jim_NewIntObj(i, 1); + i->falseObj = Jim_NewIntObj(i, 0); + i->framePtr = i->topFramePtr = JimCreateCallFrame(i, NULL, i->emptyObj); + i->errorFileNameObj = i->emptyObj; + i->result = i->emptyObj; + i->stackTrace = Jim_NewListObj(i, NULL, 0); + i->unknown = Jim_NewStringObj(i, "unknown", -1); + i->errorProc = i->emptyObj; + i->currentScriptObj = Jim_NewEmptyStringObj(i); + i->nullScriptObj = Jim_NewEmptyStringObj(i); + Jim_IncrRefCount(i->emptyObj); + Jim_IncrRefCount(i->errorFileNameObj); + Jim_IncrRefCount(i->result); + Jim_IncrRefCount(i->stackTrace); + Jim_IncrRefCount(i->unknown); + Jim_IncrRefCount(i->currentScriptObj); + Jim_IncrRefCount(i->nullScriptObj); + Jim_IncrRefCount(i->errorProc); + Jim_IncrRefCount(i->trueObj); + Jim_IncrRefCount(i->falseObj); + + + Jim_SetVariableStrWithStr(i, JIM_LIBPATH, TCL_LIBRARY); + Jim_SetVariableStrWithStr(i, JIM_INTERACTIVE, "0"); + + Jim_SetVariableStrWithStr(i, "tcl_platform(os)", TCL_PLATFORM_OS); + Jim_SetVariableStrWithStr(i, "tcl_platform(platform)", TCL_PLATFORM_PLATFORM); + Jim_SetVariableStrWithStr(i, "tcl_platform(pathSeparator)", TCL_PLATFORM_PATH_SEPARATOR); + Jim_SetVariableStrWithStr(i, "tcl_platform(byteOrder)", Jim_IsBigEndian() ? 
"bigEndian" : "littleEndian"); + Jim_SetVariableStrWithStr(i, "tcl_platform(threaded)", "0"); + Jim_SetVariableStr(i, "tcl_platform(pointerSize)", Jim_NewIntObj(i, sizeof(void *))); + Jim_SetVariableStr(i, "tcl_platform(wordSize)", Jim_NewIntObj(i, sizeof(jim_wide))); + + return i; +} + +void Jim_FreeInterp(Jim_Interp *i) +{ + Jim_CallFrame *cf, *cfx; + + Jim_Obj *objPtr, *nextObjPtr; + + + for (cf = i->framePtr; cf; cf = cfx) { + cfx = cf->parent; + JimFreeCallFrame(i, cf, JIM_FCF_FULL); + } + + Jim_DecrRefCount(i, i->emptyObj); + Jim_DecrRefCount(i, i->trueObj); + Jim_DecrRefCount(i, i->falseObj); + Jim_DecrRefCount(i, i->result); + Jim_DecrRefCount(i, i->stackTrace); + Jim_DecrRefCount(i, i->errorProc); + Jim_DecrRefCount(i, i->unknown); + Jim_DecrRefCount(i, i->errorFileNameObj); + Jim_DecrRefCount(i, i->currentScriptObj); + Jim_DecrRefCount(i, i->nullScriptObj); + Jim_FreeHashTable(&i->commands); +#ifdef JIM_REFERENCES + Jim_FreeHashTable(&i->references); +#endif + Jim_FreeHashTable(&i->packages); + Jim_Free(i->prngState); + Jim_FreeHashTable(&i->assocData); + +#ifdef JIM_MAINTAINER + if (i->liveList != NULL) { + objPtr = i->liveList; + + printf("\n-------------------------------------\n"); + printf("Objects still in the free list:\n"); + while (objPtr) { + const char *type = objPtr->typePtr ? objPtr->typePtr->name : "string"; + + if (objPtr->bytes && strlen(objPtr->bytes) > 20) { + printf("%p (%d) %-10s: '%.20s...'\n", + (void *)objPtr, objPtr->refCount, type, objPtr->bytes); + } + else { + printf("%p (%d) %-10s: '%s'\n", + (void *)objPtr, objPtr->refCount, type, objPtr->bytes ? objPtr->bytes : "(null)"); + } + if (objPtr->typePtr == &sourceObjType) { + printf("FILE %s LINE %d\n", + Jim_String(objPtr->internalRep.sourceValue.fileNameObj), + objPtr->internalRep.sourceValue.lineNumber); + } + objPtr = objPtr->nextObjPtr; + } + printf("-------------------------------------\n\n"); + JimPanic((1, "Live list non empty freeing the interpreter! 
Leak?")); + } +#endif + + + objPtr = i->freeList; + while (objPtr) { + nextObjPtr = objPtr->nextObjPtr; + Jim_Free(objPtr); + objPtr = nextObjPtr; + } + + + for (cf = i->freeFramesList; cf; cf = cfx) { + cfx = cf->next; + if (cf->vars.table) + Jim_FreeHashTable(&cf->vars); + Jim_Free(cf); + } + + + Jim_Free(i); +} + +Jim_CallFrame *Jim_GetCallFrameByLevel(Jim_Interp *interp, Jim_Obj *levelObjPtr) +{ + long level; + const char *str; + Jim_CallFrame *framePtr; + + if (levelObjPtr) { + str = Jim_String(levelObjPtr); + if (str[0] == '#') { + char *endptr; + + level = jim_strtol(str + 1, &endptr); + if (str[1] == '\0' || endptr[0] != '\0') { + level = -1; + } + } + else { + if (Jim_GetLong(interp, levelObjPtr, &level) != JIM_OK || level < 0) { + level = -1; + } + else { + + level = interp->framePtr->level - level; + } + } + } + else { + str = "1"; + level = interp->framePtr->level - 1; + } + + if (level == 0) { + return interp->topFramePtr; + } + if (level > 0) { + + for (framePtr = interp->framePtr; framePtr; framePtr = framePtr->parent) { + if (framePtr->level == level) { + return framePtr; + } + } + } + + Jim_SetResultFormatted(interp, "bad level \"%s\"", str); + return NULL; +} + +static Jim_CallFrame *JimGetCallFrameByInteger(Jim_Interp *interp, Jim_Obj *levelObjPtr) +{ + long level; + Jim_CallFrame *framePtr; + + if (Jim_GetLong(interp, levelObjPtr, &level) == JIM_OK) { + if (level <= 0) { + + level = interp->framePtr->level + level; + } + + if (level == 0) { + return interp->topFramePtr; + } + + + for (framePtr = interp->framePtr; framePtr; framePtr = framePtr->parent) { + if (framePtr->level == level) { + return framePtr; + } + } + } + + Jim_SetResultFormatted(interp, "bad level \"%#s\"", levelObjPtr); + return NULL; +} + +static void JimResetStackTrace(Jim_Interp *interp) +{ + Jim_DecrRefCount(interp, interp->stackTrace); + interp->stackTrace = Jim_NewListObj(interp, NULL, 0); + Jim_IncrRefCount(interp->stackTrace); +} + +static void JimSetStackTrace(Jim_Interp *interp, Jim_Obj *stackTraceObj) +{ + int len; + + + Jim_IncrRefCount(stackTraceObj); + Jim_DecrRefCount(interp, interp->stackTrace); + interp->stackTrace = stackTraceObj; + interp->errorFlag = 1; + + len = Jim_ListLength(interp, interp->stackTrace); + if (len >= 3) { + if (Jim_Length(Jim_ListGetIndex(interp, interp->stackTrace, len - 2)) == 0) { + interp->addStackTrace = 1; + } + } +} + +static void JimAppendStackTrace(Jim_Interp *interp, const char *procname, + Jim_Obj *fileNameObj, int linenr) +{ + if (strcmp(procname, "unknown") == 0) { + procname = ""; + } + if (!*procname && !Jim_Length(fileNameObj)) { + + return; + } + + if (Jim_IsShared(interp->stackTrace)) { + Jim_DecrRefCount(interp, interp->stackTrace); + interp->stackTrace = Jim_DuplicateObj(interp, interp->stackTrace); + Jim_IncrRefCount(interp->stackTrace); + } + + + if (!*procname && Jim_Length(fileNameObj)) { + + int len = Jim_ListLength(interp, interp->stackTrace); + + if (len >= 3) { + Jim_Obj *objPtr = Jim_ListGetIndex(interp, interp->stackTrace, len - 3); + if (Jim_Length(objPtr)) { + + objPtr = Jim_ListGetIndex(interp, interp->stackTrace, len - 2); + if (Jim_Length(objPtr) == 0) { + + ListSetIndex(interp, interp->stackTrace, len - 2, fileNameObj, 0); + ListSetIndex(interp, interp->stackTrace, len - 1, Jim_NewIntObj(interp, linenr), 0); + return; + } + } + } + } + + Jim_ListAppendElement(interp, interp->stackTrace, Jim_NewStringObj(interp, procname, -1)); + Jim_ListAppendElement(interp, interp->stackTrace, fileNameObj); + Jim_ListAppendElement(interp, 
interp->stackTrace, Jim_NewIntObj(interp, linenr)); +} + +int Jim_SetAssocData(Jim_Interp *interp, const char *key, Jim_InterpDeleteProc * delProc, + void *data) +{ + AssocDataValue *assocEntryPtr = (AssocDataValue *) Jim_Alloc(sizeof(AssocDataValue)); + + assocEntryPtr->delProc = delProc; + assocEntryPtr->data = data; + return Jim_AddHashEntry(&interp->assocData, key, assocEntryPtr); +} + +void *Jim_GetAssocData(Jim_Interp *interp, const char *key) +{ + Jim_HashEntry *entryPtr = Jim_FindHashEntry(&interp->assocData, key); + + if (entryPtr != NULL) { + AssocDataValue *assocEntryPtr = Jim_GetHashEntryVal(entryPtr); + return assocEntryPtr->data; + } + return NULL; +} + +int Jim_DeleteAssocData(Jim_Interp *interp, const char *key) +{ + return Jim_DeleteHashEntry(&interp->assocData, key); +} + +int Jim_GetExitCode(Jim_Interp *interp) +{ + return interp->exitCode; +} + +static void UpdateStringOfInt(struct Jim_Obj *objPtr); +static int SetIntFromAny(Jim_Interp *interp, Jim_Obj *objPtr, int flags); + +static const Jim_ObjType intObjType = { + "int", + NULL, + NULL, + UpdateStringOfInt, + JIM_TYPE_NONE, +}; + +static const Jim_ObjType coercedDoubleObjType = { + "coerced-double", + NULL, + NULL, + UpdateStringOfInt, + JIM_TYPE_NONE, +}; + + +static void UpdateStringOfInt(struct Jim_Obj *objPtr) +{ + char buf[JIM_INTEGER_SPACE + 1]; + jim_wide wideValue = JimWideValue(objPtr); + int pos = 0; + + if (wideValue == 0) { + buf[pos++] = '0'; + } + else { + char tmp[JIM_INTEGER_SPACE]; + int num = 0; + int i; + + if (wideValue < 0) { + buf[pos++] = '-'; + i = wideValue % 10; + tmp[num++] = (i > 0) ? (10 - i) : -i; + wideValue /= -10; + } + + while (wideValue) { + tmp[num++] = wideValue % 10; + wideValue /= 10; + } + + for (i = 0; i < num; i++) { + buf[pos++] = '0' + tmp[num - i - 1]; + } + } + buf[pos] = 0; + + JimSetStringBytes(objPtr, buf); +} + +static int SetIntFromAny(Jim_Interp *interp, Jim_Obj *objPtr, int flags) +{ + jim_wide wideValue; + const char *str; + + if (objPtr->typePtr == &coercedDoubleObjType) { + + objPtr->typePtr = &intObjType; + return JIM_OK; + } + + + str = Jim_String(objPtr); + + if (Jim_StringToWide(str, &wideValue, 0) != JIM_OK) { + if (flags & JIM_ERRMSG) { + Jim_SetResultFormatted(interp, "expected integer but got \"%#s\"", objPtr); + } + return JIM_ERR; + } + if ((wideValue == JIM_WIDE_MIN || wideValue == JIM_WIDE_MAX) && errno == ERANGE) { + Jim_SetResultString(interp, "Integer value too big to be represented", -1); + return JIM_ERR; + } + + Jim_FreeIntRep(interp, objPtr); + objPtr->typePtr = &intObjType; + objPtr->internalRep.wideValue = wideValue; + return JIM_OK; +} + +#ifdef JIM_OPTIMIZATION +static int JimIsWide(Jim_Obj *objPtr) +{ + return objPtr->typePtr == &intObjType; +} +#endif + +int Jim_GetWide(Jim_Interp *interp, Jim_Obj *objPtr, jim_wide * widePtr) +{ + if (objPtr->typePtr != &intObjType && SetIntFromAny(interp, objPtr, JIM_ERRMSG) == JIM_ERR) + return JIM_ERR; + *widePtr = JimWideValue(objPtr); + return JIM_OK; +} + + +static int JimGetWideNoErr(Jim_Interp *interp, Jim_Obj *objPtr, jim_wide * widePtr) +{ + if (objPtr->typePtr != &intObjType && SetIntFromAny(interp, objPtr, JIM_NONE) == JIM_ERR) + return JIM_ERR; + *widePtr = JimWideValue(objPtr); + return JIM_OK; +} + +int Jim_GetLong(Jim_Interp *interp, Jim_Obj *objPtr, long *longPtr) +{ + jim_wide wideValue; + int retval; + + retval = Jim_GetWide(interp, objPtr, &wideValue); + if (retval == JIM_OK) { + *longPtr = (long)wideValue; + return JIM_OK; + } + return JIM_ERR; +} + +Jim_Obj *Jim_NewIntObj(Jim_Interp 
*interp, jim_wide wideValue) +{ + Jim_Obj *objPtr; + + objPtr = Jim_NewObj(interp); + objPtr->typePtr = &intObjType; + objPtr->bytes = NULL; + objPtr->internalRep.wideValue = wideValue; + return objPtr; +} + +#define JIM_DOUBLE_SPACE 30 + +static void UpdateStringOfDouble(struct Jim_Obj *objPtr); +static int SetDoubleFromAny(Jim_Interp *interp, Jim_Obj *objPtr); + +static const Jim_ObjType doubleObjType = { + "double", + NULL, + NULL, + UpdateStringOfDouble, + JIM_TYPE_NONE, +}; + +#ifndef HAVE_ISNAN +#undef isnan +#define isnan(X) ((X) != (X)) +#endif +#ifndef HAVE_ISINF +#undef isinf +#define isinf(X) (1.0 / (X) == 0.0) +#endif + +static void UpdateStringOfDouble(struct Jim_Obj *objPtr) +{ + double value = objPtr->internalRep.doubleValue; + + if (isnan(value)) { + JimSetStringBytes(objPtr, "NaN"); + return; + } + if (isinf(value)) { + if (value < 0) { + JimSetStringBytes(objPtr, "-Inf"); + } + else { + JimSetStringBytes(objPtr, "Inf"); + } + return; + } + { + char buf[JIM_DOUBLE_SPACE + 1]; + int i; + int len = sprintf(buf, "%.12g", value); + + + for (i = 0; i < len; i++) { + if (buf[i] == '.' || buf[i] == 'e') { +#if defined(JIM_SPRINTF_DOUBLE_NEEDS_FIX) + char *e = strchr(buf, 'e'); + if (e && (e[1] == '-' || e[1] == '+') && e[2] == '0') { + + e += 2; + memmove(e, e + 1, len - (e - buf)); + } +#endif + break; + } + } + if (buf[i] == '\0') { + buf[i++] = '.'; + buf[i++] = '0'; + buf[i] = '\0'; + } + JimSetStringBytes(objPtr, buf); + } +} + +static int SetDoubleFromAny(Jim_Interp *interp, Jim_Obj *objPtr) +{ + double doubleValue; + jim_wide wideValue; + const char *str; + + str = Jim_String(objPtr); + +#ifdef HAVE_LONG_LONG + +#define MIN_INT_IN_DOUBLE -(1LL << 53) +#define MAX_INT_IN_DOUBLE -(MIN_INT_IN_DOUBLE + 1) + + if (objPtr->typePtr == &intObjType + && JimWideValue(objPtr) >= MIN_INT_IN_DOUBLE + && JimWideValue(objPtr) <= MAX_INT_IN_DOUBLE) { + + + objPtr->typePtr = &coercedDoubleObjType; + return JIM_OK; + } + else +#endif + if (Jim_StringToWide(str, &wideValue, 10) == JIM_OK) { + + Jim_FreeIntRep(interp, objPtr); + objPtr->typePtr = &coercedDoubleObjType; + objPtr->internalRep.wideValue = wideValue; + return JIM_OK; + } + else { + + if (Jim_StringToDouble(str, &doubleValue) != JIM_OK) { + Jim_SetResultFormatted(interp, "expected floating-point number but got \"%#s\"", objPtr); + return JIM_ERR; + } + + Jim_FreeIntRep(interp, objPtr); + } + objPtr->typePtr = &doubleObjType; + objPtr->internalRep.doubleValue = doubleValue; + return JIM_OK; +} + +int Jim_GetDouble(Jim_Interp *interp, Jim_Obj *objPtr, double *doublePtr) +{ + if (objPtr->typePtr == &coercedDoubleObjType) { + *doublePtr = JimWideValue(objPtr); + return JIM_OK; + } + if (objPtr->typePtr != &doubleObjType && SetDoubleFromAny(interp, objPtr) == JIM_ERR) + return JIM_ERR; + + if (objPtr->typePtr == &coercedDoubleObjType) { + *doublePtr = JimWideValue(objPtr); + } + else { + *doublePtr = objPtr->internalRep.doubleValue; + } + return JIM_OK; +} + +Jim_Obj *Jim_NewDoubleObj(Jim_Interp *interp, double doubleValue) +{ + Jim_Obj *objPtr; + + objPtr = Jim_NewObj(interp); + objPtr->typePtr = &doubleObjType; + objPtr->bytes = NULL; + objPtr->internalRep.doubleValue = doubleValue; + return objPtr; +} + +static void ListInsertElements(Jim_Obj *listPtr, int idx, int elemc, Jim_Obj *const *elemVec); +static void ListAppendElement(Jim_Obj *listPtr, Jim_Obj *objPtr); +static void FreeListInternalRep(Jim_Interp *interp, Jim_Obj *objPtr); +static void DupListInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr); +static void 
UpdateStringOfList(struct Jim_Obj *objPtr); +static int SetListFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr); + +static const Jim_ObjType listObjType = { + "list", + FreeListInternalRep, + DupListInternalRep, + UpdateStringOfList, + JIM_TYPE_NONE, +}; + +void FreeListInternalRep(Jim_Interp *interp, Jim_Obj *objPtr) +{ + int i; + + for (i = 0; i < objPtr->internalRep.listValue.len; i++) { + Jim_DecrRefCount(interp, objPtr->internalRep.listValue.ele[i]); + } + Jim_Free(objPtr->internalRep.listValue.ele); +} + +void DupListInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr) +{ + int i; + + JIM_NOTUSED(interp); + + dupPtr->internalRep.listValue.len = srcPtr->internalRep.listValue.len; + dupPtr->internalRep.listValue.maxLen = srcPtr->internalRep.listValue.maxLen; + dupPtr->internalRep.listValue.ele = + Jim_Alloc(sizeof(Jim_Obj *) * srcPtr->internalRep.listValue.maxLen); + memcpy(dupPtr->internalRep.listValue.ele, srcPtr->internalRep.listValue.ele, + sizeof(Jim_Obj *) * srcPtr->internalRep.listValue.len); + for (i = 0; i < dupPtr->internalRep.listValue.len; i++) { + Jim_IncrRefCount(dupPtr->internalRep.listValue.ele[i]); + } + dupPtr->typePtr = &listObjType; +} + +#define JIM_ELESTR_SIMPLE 0 +#define JIM_ELESTR_BRACE 1 +#define JIM_ELESTR_QUOTE 2 +static unsigned char ListElementQuotingType(const char *s, int len) +{ + int i, level, blevel, trySimple = 1; + + + if (len == 0) + return JIM_ELESTR_BRACE; + if (s[0] == '"' || s[0] == '{') { + trySimple = 0; + goto testbrace; + } + for (i = 0; i < len; i++) { + switch (s[i]) { + case ' ': + case '$': + case '"': + case '[': + case ']': + case ';': + case '\\': + case '\r': + case '\n': + case '\t': + case '\f': + case '\v': + trySimple = 0; + case '{': + case '}': + goto testbrace; + } + } + return JIM_ELESTR_SIMPLE; + + testbrace: + + if (s[len - 1] == '\\') + return JIM_ELESTR_QUOTE; + level = 0; + blevel = 0; + for (i = 0; i < len; i++) { + switch (s[i]) { + case '{': + level++; + break; + case '}': + level--; + if (level < 0) + return JIM_ELESTR_QUOTE; + break; + case '[': + blevel++; + break; + case ']': + blevel--; + break; + case '\\': + if (s[i + 1] == '\n') + return JIM_ELESTR_QUOTE; + else if (s[i + 1] != '\0') + i++; + break; + } + } + if (blevel < 0) { + return JIM_ELESTR_QUOTE; + } + + if (level == 0) { + if (!trySimple) + return JIM_ELESTR_BRACE; + for (i = 0; i < len; i++) { + switch (s[i]) { + case ' ': + case '$': + case '"': + case '[': + case ']': + case ';': + case '\\': + case '\r': + case '\n': + case '\t': + case '\f': + case '\v': + return JIM_ELESTR_BRACE; + break; + } + } + return JIM_ELESTR_SIMPLE; + } + return JIM_ELESTR_QUOTE; +} + +static int BackslashQuoteString(const char *s, int len, char *q) +{ + char *p = q; + + while (len--) { + switch (*s) { + case ' ': + case '$': + case '"': + case '[': + case ']': + case '{': + case '}': + case ';': + case '\\': + *p++ = '\\'; + *p++ = *s++; + break; + case '\n': + *p++ = '\\'; + *p++ = 'n'; + s++; + break; + case '\r': + *p++ = '\\'; + *p++ = 'r'; + s++; + break; + case '\t': + *p++ = '\\'; + *p++ = 't'; + s++; + break; + case '\f': + *p++ = '\\'; + *p++ = 'f'; + s++; + break; + case '\v': + *p++ = '\\'; + *p++ = 'v'; + s++; + break; + default: + *p++ = *s++; + break; + } + } + *p = '\0'; + + return p - q; +} + +static void JimMakeListStringRep(Jim_Obj *objPtr, Jim_Obj **objv, int objc) +{ + #define STATIC_QUOTING_LEN 32 + int i, bufLen, realLength; + const char *strRep; + char *p; + unsigned char *quotingType, staticQuoting[STATIC_QUOTING_LEN]; + + + if 
(objc > STATIC_QUOTING_LEN) { + quotingType = Jim_Alloc(objc); + } + else { + quotingType = staticQuoting; + } + bufLen = 0; + for (i = 0; i < objc; i++) { + int len; + + strRep = Jim_GetString(objv[i], &len); + quotingType[i] = ListElementQuotingType(strRep, len); + switch (quotingType[i]) { + case JIM_ELESTR_SIMPLE: + if (i != 0 || strRep[0] != '#') { + bufLen += len; + break; + } + + quotingType[i] = JIM_ELESTR_BRACE; + + case JIM_ELESTR_BRACE: + bufLen += len + 2; + break; + case JIM_ELESTR_QUOTE: + bufLen += len * 2; + break; + } + bufLen++; + } + bufLen++; + + + p = objPtr->bytes = Jim_Alloc(bufLen + 1); + realLength = 0; + for (i = 0; i < objc; i++) { + int len, qlen; + + strRep = Jim_GetString(objv[i], &len); + + switch (quotingType[i]) { + case JIM_ELESTR_SIMPLE: + memcpy(p, strRep, len); + p += len; + realLength += len; + break; + case JIM_ELESTR_BRACE: + *p++ = '{'; + memcpy(p, strRep, len); + p += len; + *p++ = '}'; + realLength += len + 2; + break; + case JIM_ELESTR_QUOTE: + if (i == 0 && strRep[0] == '#') { + *p++ = '\\'; + realLength++; + } + qlen = BackslashQuoteString(strRep, len, p); + p += qlen; + realLength += qlen; + break; + } + + if (i + 1 != objc) { + *p++ = ' '; + realLength++; + } + } + *p = '\0'; + objPtr->length = realLength; + + if (quotingType != staticQuoting) { + Jim_Free(quotingType); + } +} + +static void UpdateStringOfList(struct Jim_Obj *objPtr) +{ + JimMakeListStringRep(objPtr, objPtr->internalRep.listValue.ele, objPtr->internalRep.listValue.len); +} + +static int SetListFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr) +{ + struct JimParserCtx parser; + const char *str; + int strLen; + Jim_Obj *fileNameObj; + int linenr; + + if (objPtr->typePtr == &listObjType) { + return JIM_OK; + } + + if (Jim_IsDict(objPtr) && objPtr->bytes == NULL) { + Jim_Obj **listObjPtrPtr; + int len; + int i; + + listObjPtrPtr = JimDictPairs(objPtr, &len); + for (i = 0; i < len; i++) { + Jim_IncrRefCount(listObjPtrPtr[i]); + } + + + Jim_FreeIntRep(interp, objPtr); + objPtr->typePtr = &listObjType; + objPtr->internalRep.listValue.len = len; + objPtr->internalRep.listValue.maxLen = len; + objPtr->internalRep.listValue.ele = listObjPtrPtr; + + return JIM_OK; + } + + + if (objPtr->typePtr == &sourceObjType) { + fileNameObj = objPtr->internalRep.sourceValue.fileNameObj; + linenr = objPtr->internalRep.sourceValue.lineNumber; + } + else { + fileNameObj = interp->emptyObj; + linenr = 1; + } + Jim_IncrRefCount(fileNameObj); + + + str = Jim_GetString(objPtr, &strLen); + + Jim_FreeIntRep(interp, objPtr); + objPtr->typePtr = &listObjType; + objPtr->internalRep.listValue.len = 0; + objPtr->internalRep.listValue.maxLen = 0; + objPtr->internalRep.listValue.ele = NULL; + + + if (strLen) { + JimParserInit(&parser, str, strLen, linenr); + while (!parser.eof) { + Jim_Obj *elementPtr; + + JimParseList(&parser); + if (parser.tt != JIM_TT_STR && parser.tt != JIM_TT_ESC) + continue; + elementPtr = JimParserGetTokenObj(interp, &parser); + JimSetSourceInfo(interp, elementPtr, fileNameObj, parser.tline); + ListAppendElement(objPtr, elementPtr); + } + } + Jim_DecrRefCount(interp, fileNameObj); + return JIM_OK; +} + +Jim_Obj *Jim_NewListObj(Jim_Interp *interp, Jim_Obj *const *elements, int len) +{ + Jim_Obj *objPtr; + + objPtr = Jim_NewObj(interp); + objPtr->typePtr = &listObjType; + objPtr->bytes = NULL; + objPtr->internalRep.listValue.ele = NULL; + objPtr->internalRep.listValue.len = 0; + objPtr->internalRep.listValue.maxLen = 0; + + if (len) { + ListInsertElements(objPtr, 0, len, elements); + } + + 
return objPtr; +} + +static void JimListGetElements(Jim_Interp *interp, Jim_Obj *listObj, int *listLen, + Jim_Obj ***listVec) +{ + *listLen = Jim_ListLength(interp, listObj); + *listVec = listObj->internalRep.listValue.ele; +} + + +static int JimSign(jim_wide w) +{ + if (w == 0) { + return 0; + } + else if (w < 0) { + return -1; + } + return 1; +} + + +struct lsort_info { + jmp_buf jmpbuf; + Jim_Obj *command; + Jim_Interp *interp; + enum { + JIM_LSORT_ASCII, + JIM_LSORT_NOCASE, + JIM_LSORT_INTEGER, + JIM_LSORT_REAL, + JIM_LSORT_COMMAND + } type; + int order; + int index; + int indexed; + int unique; + int (*subfn)(Jim_Obj **, Jim_Obj **); +}; + +static struct lsort_info *sort_info; + +static int ListSortIndexHelper(Jim_Obj **lhsObj, Jim_Obj **rhsObj) +{ + Jim_Obj *lObj, *rObj; + + if (Jim_ListIndex(sort_info->interp, *lhsObj, sort_info->index, &lObj, JIM_ERRMSG) != JIM_OK || + Jim_ListIndex(sort_info->interp, *rhsObj, sort_info->index, &rObj, JIM_ERRMSG) != JIM_OK) { + longjmp(sort_info->jmpbuf, JIM_ERR); + } + return sort_info->subfn(&lObj, &rObj); +} + + +static int ListSortString(Jim_Obj **lhsObj, Jim_Obj **rhsObj) +{ + return Jim_StringCompareObj(sort_info->interp, *lhsObj, *rhsObj, 0) * sort_info->order; +} + +static int ListSortStringNoCase(Jim_Obj **lhsObj, Jim_Obj **rhsObj) +{ + return Jim_StringCompareObj(sort_info->interp, *lhsObj, *rhsObj, 1) * sort_info->order; +} + +static int ListSortInteger(Jim_Obj **lhsObj, Jim_Obj **rhsObj) +{ + jim_wide lhs = 0, rhs = 0; + + if (Jim_GetWide(sort_info->interp, *lhsObj, &lhs) != JIM_OK || + Jim_GetWide(sort_info->interp, *rhsObj, &rhs) != JIM_OK) { + longjmp(sort_info->jmpbuf, JIM_ERR); + } + + return JimSign(lhs - rhs) * sort_info->order; +} + +static int ListSortReal(Jim_Obj **lhsObj, Jim_Obj **rhsObj) +{ + double lhs = 0, rhs = 0; + + if (Jim_GetDouble(sort_info->interp, *lhsObj, &lhs) != JIM_OK || + Jim_GetDouble(sort_info->interp, *rhsObj, &rhs) != JIM_OK) { + longjmp(sort_info->jmpbuf, JIM_ERR); + } + if (lhs == rhs) { + return 0; + } + if (lhs > rhs) { + return sort_info->order; + } + return -sort_info->order; +} + +static int ListSortCommand(Jim_Obj **lhsObj, Jim_Obj **rhsObj) +{ + Jim_Obj *compare_script; + int rc; + + jim_wide ret = 0; + + + compare_script = Jim_DuplicateObj(sort_info->interp, sort_info->command); + Jim_ListAppendElement(sort_info->interp, compare_script, *lhsObj); + Jim_ListAppendElement(sort_info->interp, compare_script, *rhsObj); + + rc = Jim_EvalObj(sort_info->interp, compare_script); + + if (rc != JIM_OK || Jim_GetWide(sort_info->interp, Jim_GetResult(sort_info->interp), &ret) != JIM_OK) { + longjmp(sort_info->jmpbuf, rc); + } + + return JimSign(ret) * sort_info->order; +} + +static void ListRemoveDuplicates(Jim_Obj *listObjPtr, int (*comp)(Jim_Obj **lhs, Jim_Obj **rhs)) +{ + int src; + int dst = 0; + Jim_Obj **ele = listObjPtr->internalRep.listValue.ele; + + for (src = 1; src < listObjPtr->internalRep.listValue.len; src++) { + if (comp(&ele[dst], &ele[src]) == 0) { + + Jim_DecrRefCount(sort_info->interp, ele[dst]); + } + else { + + dst++; + } + ele[dst] = ele[src]; + } + + ele[++dst] = ele[src]; + + + listObjPtr->internalRep.listValue.len = dst; +} + + +static int ListSortElements(Jim_Interp *interp, Jim_Obj *listObjPtr, struct lsort_info *info) +{ + struct lsort_info *prev_info; + + typedef int (qsort_comparator) (const void *, const void *); + int (*fn) (Jim_Obj **, Jim_Obj **); + Jim_Obj **vector; + int len; + int rc; + + JimPanic((Jim_IsShared(listObjPtr), "ListSortElements called with shared object")); 
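+    /* qsort() passes no user data to its comparator, so the comparison
+     * callbacks read the file-scope sort_info pointer; the previous value is
+     * saved and restored around the sort so sorts can nest, and comparator
+     * errors longjmp() back to the setjmp() below. */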
+ SetListFromAny(interp, listObjPtr); + + + prev_info = sort_info; + sort_info = info; + + vector = listObjPtr->internalRep.listValue.ele; + len = listObjPtr->internalRep.listValue.len; + switch (info->type) { + case JIM_LSORT_ASCII: + fn = ListSortString; + break; + case JIM_LSORT_NOCASE: + fn = ListSortStringNoCase; + break; + case JIM_LSORT_INTEGER: + fn = ListSortInteger; + break; + case JIM_LSORT_REAL: + fn = ListSortReal; + break; + case JIM_LSORT_COMMAND: + fn = ListSortCommand; + break; + default: + fn = NULL; + JimPanic((1, "ListSort called with invalid sort type")); + } + + if (info->indexed) { + + info->subfn = fn; + fn = ListSortIndexHelper; + } + + if ((rc = setjmp(info->jmpbuf)) == 0) { + qsort(vector, len, sizeof(Jim_Obj *), (qsort_comparator *) fn); + + if (info->unique && len > 1) { + ListRemoveDuplicates(listObjPtr, fn); + } + + Jim_InvalidateStringRep(listObjPtr); + } + sort_info = prev_info; + + return rc; +} + +static void ListInsertElements(Jim_Obj *listPtr, int idx, int elemc, Jim_Obj *const *elemVec) +{ + int currentLen = listPtr->internalRep.listValue.len; + int requiredLen = currentLen + elemc; + int i; + Jim_Obj **point; + + if (requiredLen > listPtr->internalRep.listValue.maxLen) { + if (requiredLen < 2) { + + requiredLen = 4; + } + else { + requiredLen *= 2; + } + + listPtr->internalRep.listValue.ele = Jim_Realloc(listPtr->internalRep.listValue.ele, + sizeof(Jim_Obj *) * requiredLen); + + listPtr->internalRep.listValue.maxLen = requiredLen; + } + if (idx < 0) { + idx = currentLen; + } + point = listPtr->internalRep.listValue.ele + idx; + memmove(point + elemc, point, (currentLen - idx) * sizeof(Jim_Obj *)); + for (i = 0; i < elemc; ++i) { + point[i] = elemVec[i]; + Jim_IncrRefCount(point[i]); + } + listPtr->internalRep.listValue.len += elemc; +} + +static void ListAppendElement(Jim_Obj *listPtr, Jim_Obj *objPtr) +{ + ListInsertElements(listPtr, -1, 1, &objPtr); +} + +static void ListAppendList(Jim_Obj *listPtr, Jim_Obj *appendListPtr) +{ + ListInsertElements(listPtr, -1, + appendListPtr->internalRep.listValue.len, appendListPtr->internalRep.listValue.ele); +} + +void Jim_ListAppendElement(Jim_Interp *interp, Jim_Obj *listPtr, Jim_Obj *objPtr) +{ + JimPanic((Jim_IsShared(listPtr), "Jim_ListAppendElement called with shared object")); + SetListFromAny(interp, listPtr); + Jim_InvalidateStringRep(listPtr); + ListAppendElement(listPtr, objPtr); +} + +void Jim_ListAppendList(Jim_Interp *interp, Jim_Obj *listPtr, Jim_Obj *appendListPtr) +{ + JimPanic((Jim_IsShared(listPtr), "Jim_ListAppendList called with shared object")); + SetListFromAny(interp, listPtr); + SetListFromAny(interp, appendListPtr); + Jim_InvalidateStringRep(listPtr); + ListAppendList(listPtr, appendListPtr); +} + +int Jim_ListLength(Jim_Interp *interp, Jim_Obj *objPtr) +{ + SetListFromAny(interp, objPtr); + return objPtr->internalRep.listValue.len; +} + +void Jim_ListInsertElements(Jim_Interp *interp, Jim_Obj *listPtr, int idx, + int objc, Jim_Obj *const *objVec) +{ + JimPanic((Jim_IsShared(listPtr), "Jim_ListInsertElement called with shared object")); + SetListFromAny(interp, listPtr); + if (idx >= 0 && idx > listPtr->internalRep.listValue.len) + idx = listPtr->internalRep.listValue.len; + else if (idx < 0) + idx = 0; + Jim_InvalidateStringRep(listPtr); + ListInsertElements(listPtr, idx, objc, objVec); +} + +Jim_Obj *Jim_ListGetIndex(Jim_Interp *interp, Jim_Obj *listPtr, int idx) +{ + SetListFromAny(interp, listPtr); + if ((idx >= 0 && idx >= listPtr->internalRep.listValue.len) || + (idx < 0 && (-idx - 
1) >= listPtr->internalRep.listValue.len)) { + return NULL; + } + if (idx < 0) + idx = listPtr->internalRep.listValue.len + idx; + return listPtr->internalRep.listValue.ele[idx]; +} + +int Jim_ListIndex(Jim_Interp *interp, Jim_Obj *listPtr, int idx, Jim_Obj **objPtrPtr, int flags) +{ + *objPtrPtr = Jim_ListGetIndex(interp, listPtr, idx); + if (*objPtrPtr == NULL) { + if (flags & JIM_ERRMSG) { + Jim_SetResultString(interp, "list index out of range", -1); + } + return JIM_ERR; + } + return JIM_OK; +} + +static int ListSetIndex(Jim_Interp *interp, Jim_Obj *listPtr, int idx, + Jim_Obj *newObjPtr, int flags) +{ + SetListFromAny(interp, listPtr); + if ((idx >= 0 && idx >= listPtr->internalRep.listValue.len) || + (idx < 0 && (-idx - 1) >= listPtr->internalRep.listValue.len)) { + if (flags & JIM_ERRMSG) { + Jim_SetResultString(interp, "list index out of range", -1); + } + return JIM_ERR; + } + if (idx < 0) + idx = listPtr->internalRep.listValue.len + idx; + Jim_DecrRefCount(interp, listPtr->internalRep.listValue.ele[idx]); + listPtr->internalRep.listValue.ele[idx] = newObjPtr; + Jim_IncrRefCount(newObjPtr); + return JIM_OK; +} + +int Jim_ListSetIndex(Jim_Interp *interp, Jim_Obj *varNamePtr, + Jim_Obj *const *indexv, int indexc, Jim_Obj *newObjPtr) +{ + Jim_Obj *varObjPtr, *objPtr, *listObjPtr; + int shared, i, idx; + + varObjPtr = objPtr = Jim_GetVariable(interp, varNamePtr, JIM_ERRMSG | JIM_UNSHARED); + if (objPtr == NULL) + return JIM_ERR; + if ((shared = Jim_IsShared(objPtr))) + varObjPtr = objPtr = Jim_DuplicateObj(interp, objPtr); + for (i = 0; i < indexc - 1; i++) { + listObjPtr = objPtr; + if (Jim_GetIndex(interp, indexv[i], &idx) != JIM_OK) + goto err; + if (Jim_ListIndex(interp, listObjPtr, idx, &objPtr, JIM_ERRMSG) != JIM_OK) { + goto err; + } + if (Jim_IsShared(objPtr)) { + objPtr = Jim_DuplicateObj(interp, objPtr); + ListSetIndex(interp, listObjPtr, idx, objPtr, JIM_NONE); + } + Jim_InvalidateStringRep(listObjPtr); + } + if (Jim_GetIndex(interp, indexv[indexc - 1], &idx) != JIM_OK) + goto err; + if (ListSetIndex(interp, objPtr, idx, newObjPtr, JIM_ERRMSG) == JIM_ERR) + goto err; + Jim_InvalidateStringRep(objPtr); + Jim_InvalidateStringRep(varObjPtr); + if (Jim_SetVariable(interp, varNamePtr, varObjPtr) != JIM_OK) + goto err; + Jim_SetResult(interp, varObjPtr); + return JIM_OK; + err: + if (shared) { + Jim_FreeNewObj(interp, varObjPtr); + } + return JIM_ERR; +} + +Jim_Obj *Jim_ListJoin(Jim_Interp *interp, Jim_Obj *listObjPtr, const char *joinStr, int joinStrLen) +{ + int i; + int listLen = Jim_ListLength(interp, listObjPtr); + Jim_Obj *resObjPtr = Jim_NewEmptyStringObj(interp); + + for (i = 0; i < listLen; ) { + Jim_AppendObj(interp, resObjPtr, Jim_ListGetIndex(interp, listObjPtr, i)); + if (++i != listLen) { + Jim_AppendString(interp, resObjPtr, joinStr, joinStrLen); + } + } + return resObjPtr; +} + +Jim_Obj *Jim_ConcatObj(Jim_Interp *interp, int objc, Jim_Obj *const *objv) +{ + int i; + + for (i = 0; i < objc; i++) { + if (!Jim_IsList(objv[i])) + break; + } + if (i == objc) { + Jim_Obj *objPtr = Jim_NewListObj(interp, NULL, 0); + + for (i = 0; i < objc; i++) + ListAppendList(objPtr, objv[i]); + return objPtr; + } + else { + + int len = 0, objLen; + char *bytes, *p; + + + for (i = 0; i < objc; i++) { + len += Jim_Length(objv[i]); + } + if (objc) + len += objc - 1; + + p = bytes = Jim_Alloc(len + 1); + for (i = 0; i < objc; i++) { + const char *s = Jim_GetString(objv[i], &objLen); + + + while (objLen && isspace(UCHAR(*s))) { + s++; + objLen--; + len--; + } + + while (objLen && 
isspace(UCHAR(s[objLen - 1]))) { + + if (objLen > 1 && s[objLen - 2] == '\\') { + break; + } + objLen--; + len--; + } + memcpy(p, s, objLen); + p += objLen; + if (i + 1 != objc) { + if (objLen) + *p++ = ' '; + else { + len--; + } + } + } + *p = '\0'; + return Jim_NewStringObjNoAlloc(interp, bytes, len); + } +} + +Jim_Obj *Jim_ListRange(Jim_Interp *interp, Jim_Obj *listObjPtr, Jim_Obj *firstObjPtr, + Jim_Obj *lastObjPtr) +{ + int first, last; + int len, rangeLen; + + if (Jim_GetIndex(interp, firstObjPtr, &first) != JIM_OK || + Jim_GetIndex(interp, lastObjPtr, &last) != JIM_OK) + return NULL; + len = Jim_ListLength(interp, listObjPtr); + first = JimRelToAbsIndex(len, first); + last = JimRelToAbsIndex(len, last); + JimRelToAbsRange(len, &first, &last, &rangeLen); + if (first == 0 && last == len) { + return listObjPtr; + } + return Jim_NewListObj(interp, listObjPtr->internalRep.listValue.ele + first, rangeLen); +} + +static void FreeDictInternalRep(Jim_Interp *interp, Jim_Obj *objPtr); +static void DupDictInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr); +static void UpdateStringOfDict(struct Jim_Obj *objPtr); +static int SetDictFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr); + + +static unsigned int JimObjectHTHashFunction(const void *key) +{ + int len; + const char *str = Jim_GetString((Jim_Obj *)key, &len); + return Jim_GenHashFunction((const unsigned char *)str, len); +} + +static int JimObjectHTKeyCompare(void *privdata, const void *key1, const void *key2) +{ + return Jim_StringEqObj((Jim_Obj *)key1, (Jim_Obj *)key2); +} + +static void *JimObjectHTKeyValDup(void *privdata, const void *val) +{ + Jim_IncrRefCount((Jim_Obj *)val); + return (void *)val; +} + +static void JimObjectHTKeyValDestructor(void *interp, void *val) +{ + Jim_DecrRefCount(interp, (Jim_Obj *)val); +} + +static const Jim_HashTableType JimDictHashTableType = { + JimObjectHTHashFunction, + JimObjectHTKeyValDup, + JimObjectHTKeyValDup, + JimObjectHTKeyCompare, + JimObjectHTKeyValDestructor, + JimObjectHTKeyValDestructor +}; + +static const Jim_ObjType dictObjType = { + "dict", + FreeDictInternalRep, + DupDictInternalRep, + UpdateStringOfDict, + JIM_TYPE_NONE, +}; + +void FreeDictInternalRep(Jim_Interp *interp, Jim_Obj *objPtr) +{ + JIM_NOTUSED(interp); + + Jim_FreeHashTable(objPtr->internalRep.ptr); + Jim_Free(objPtr->internalRep.ptr); +} + +void DupDictInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr) +{ + Jim_HashTable *ht, *dupHt; + Jim_HashTableIterator htiter; + Jim_HashEntry *he; + + + ht = srcPtr->internalRep.ptr; + dupHt = Jim_Alloc(sizeof(*dupHt)); + Jim_InitHashTable(dupHt, &JimDictHashTableType, interp); + if (ht->size != 0) + Jim_ExpandHashTable(dupHt, ht->size); + + JimInitHashTableIterator(ht, &htiter); + while ((he = Jim_NextHashEntry(&htiter)) != NULL) { + Jim_AddHashEntry(dupHt, he->key, he->u.val); + } + + dupPtr->internalRep.ptr = dupHt; + dupPtr->typePtr = &dictObjType; +} + +static Jim_Obj **JimDictPairs(Jim_Obj *dictPtr, int *len) +{ + Jim_HashTable *ht; + Jim_HashTableIterator htiter; + Jim_HashEntry *he; + Jim_Obj **objv; + int i; + + ht = dictPtr->internalRep.ptr; + + + objv = Jim_Alloc((ht->used * 2) * sizeof(Jim_Obj *)); + JimInitHashTableIterator(ht, &htiter); + i = 0; + while ((he = Jim_NextHashEntry(&htiter)) != NULL) { + objv[i++] = Jim_GetHashEntryKey(he); + objv[i++] = Jim_GetHashEntryVal(he); + } + *len = i; + return objv; +} + +static void UpdateStringOfDict(struct Jim_Obj *objPtr) +{ + + int len; + Jim_Obj **objv = JimDictPairs(objPtr, &len); + + + 
JimMakeListStringRep(objPtr, objv, len); + + Jim_Free(objv); +} + +static int SetDictFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr) +{ + int listlen; + + if (objPtr->typePtr == &dictObjType) { + return JIM_OK; + } + + if (Jim_IsList(objPtr) && Jim_IsShared(objPtr)) { + Jim_String(objPtr); + } + + + listlen = Jim_ListLength(interp, objPtr); + if (listlen % 2) { + Jim_SetResultString(interp, "missing value to go with key", -1); + return JIM_ERR; + } + else { + + Jim_HashTable *ht; + int i; + + ht = Jim_Alloc(sizeof(*ht)); + Jim_InitHashTable(ht, &JimDictHashTableType, interp); + + for (i = 0; i < listlen; i += 2) { + Jim_Obj *keyObjPtr = Jim_ListGetIndex(interp, objPtr, i); + Jim_Obj *valObjPtr = Jim_ListGetIndex(interp, objPtr, i + 1); + + Jim_ReplaceHashEntry(ht, keyObjPtr, valObjPtr); + } + + Jim_FreeIntRep(interp, objPtr); + objPtr->typePtr = &dictObjType; + objPtr->internalRep.ptr = ht; + + return JIM_OK; + } +} + + + +static int DictAddElement(Jim_Interp *interp, Jim_Obj *objPtr, + Jim_Obj *keyObjPtr, Jim_Obj *valueObjPtr) +{ + Jim_HashTable *ht = objPtr->internalRep.ptr; + + if (valueObjPtr == NULL) { + return Jim_DeleteHashEntry(ht, keyObjPtr); + } + Jim_ReplaceHashEntry(ht, keyObjPtr, valueObjPtr); + return JIM_OK; +} + +int Jim_DictAddElement(Jim_Interp *interp, Jim_Obj *objPtr, + Jim_Obj *keyObjPtr, Jim_Obj *valueObjPtr) +{ + JimPanic((Jim_IsShared(objPtr), "Jim_DictAddElement called with shared object")); + if (SetDictFromAny(interp, objPtr) != JIM_OK) { + return JIM_ERR; + } + Jim_InvalidateStringRep(objPtr); + return DictAddElement(interp, objPtr, keyObjPtr, valueObjPtr); +} + +Jim_Obj *Jim_NewDictObj(Jim_Interp *interp, Jim_Obj *const *elements, int len) +{ + Jim_Obj *objPtr; + int i; + + JimPanic((len % 2, "Jim_NewDictObj() 'len' argument must be even")); + + objPtr = Jim_NewObj(interp); + objPtr->typePtr = &dictObjType; + objPtr->bytes = NULL; + objPtr->internalRep.ptr = Jim_Alloc(sizeof(Jim_HashTable)); + Jim_InitHashTable(objPtr->internalRep.ptr, &JimDictHashTableType, interp); + for (i = 0; i < len; i += 2) + DictAddElement(interp, objPtr, elements[i], elements[i + 1]); + return objPtr; +} + +int Jim_DictKey(Jim_Interp *interp, Jim_Obj *dictPtr, Jim_Obj *keyPtr, + Jim_Obj **objPtrPtr, int flags) +{ + Jim_HashEntry *he; + Jim_HashTable *ht; + + if (SetDictFromAny(interp, dictPtr) != JIM_OK) { + return -1; + } + ht = dictPtr->internalRep.ptr; + if ((he = Jim_FindHashEntry(ht, keyPtr)) == NULL) { + if (flags & JIM_ERRMSG) { + Jim_SetResultFormatted(interp, "key \"%#s\" not known in dictionary", keyPtr); + } + return JIM_ERR; + } + *objPtrPtr = he->u.val; + return JIM_OK; +} + + +int Jim_DictPairs(Jim_Interp *interp, Jim_Obj *dictPtr, Jim_Obj ***objPtrPtr, int *len) +{ + if (SetDictFromAny(interp, dictPtr) != JIM_OK) { + return JIM_ERR; + } + *objPtrPtr = JimDictPairs(dictPtr, len); + + return JIM_OK; +} + + + +int Jim_DictKeysVector(Jim_Interp *interp, Jim_Obj *dictPtr, + Jim_Obj *const *keyv, int keyc, Jim_Obj **objPtrPtr, int flags) +{ + int i; + + if (keyc == 0) { + *objPtrPtr = dictPtr; + return JIM_OK; + } + + for (i = 0; i < keyc; i++) { + Jim_Obj *objPtr; + + int rc = Jim_DictKey(interp, dictPtr, keyv[i], &objPtr, flags); + if (rc != JIM_OK) { + return rc; + } + dictPtr = objPtr; + } + *objPtrPtr = dictPtr; + return JIM_OK; +} + +int Jim_SetDictKeysVector(Jim_Interp *interp, Jim_Obj *varNamePtr, + Jim_Obj *const *keyv, int keyc, Jim_Obj *newObjPtr, int flags) +{ + Jim_Obj *varObjPtr, *objPtr, *dictObjPtr; + int shared, i; + + varObjPtr = objPtr = 
Jim_GetVariable(interp, varNamePtr, flags); + if (objPtr == NULL) { + if (newObjPtr == NULL && (flags & JIM_MUSTEXIST)) { + + return JIM_ERR; + } + varObjPtr = objPtr = Jim_NewDictObj(interp, NULL, 0); + if (Jim_SetVariable(interp, varNamePtr, objPtr) != JIM_OK) { + Jim_FreeNewObj(interp, varObjPtr); + return JIM_ERR; + } + } + if ((shared = Jim_IsShared(objPtr))) + varObjPtr = objPtr = Jim_DuplicateObj(interp, objPtr); + for (i = 0; i < keyc; i++) { + dictObjPtr = objPtr; + + + if (SetDictFromAny(interp, dictObjPtr) != JIM_OK) { + goto err; + } + + if (i == keyc - 1) { + + if (Jim_DictAddElement(interp, objPtr, keyv[keyc - 1], newObjPtr) != JIM_OK) { + if (newObjPtr || (flags & JIM_MUSTEXIST)) { + goto err; + } + } + break; + } + + + Jim_InvalidateStringRep(dictObjPtr); + if (Jim_DictKey(interp, dictObjPtr, keyv[i], &objPtr, + newObjPtr ? JIM_NONE : JIM_ERRMSG) == JIM_OK) { + if (Jim_IsShared(objPtr)) { + objPtr = Jim_DuplicateObj(interp, objPtr); + DictAddElement(interp, dictObjPtr, keyv[i], objPtr); + } + } + else { + if (newObjPtr == NULL) { + goto err; + } + objPtr = Jim_NewDictObj(interp, NULL, 0); + DictAddElement(interp, dictObjPtr, keyv[i], objPtr); + } + } + + Jim_InvalidateStringRep(objPtr); + Jim_InvalidateStringRep(varObjPtr); + if (Jim_SetVariable(interp, varNamePtr, varObjPtr) != JIM_OK) { + goto err; + } + Jim_SetResult(interp, varObjPtr); + return JIM_OK; + err: + if (shared) { + Jim_FreeNewObj(interp, varObjPtr); + } + return JIM_ERR; +} + +static void UpdateStringOfIndex(struct Jim_Obj *objPtr); +static int SetIndexFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr); + +static const Jim_ObjType indexObjType = { + "index", + NULL, + NULL, + UpdateStringOfIndex, + JIM_TYPE_NONE, +}; + +static void UpdateStringOfIndex(struct Jim_Obj *objPtr) +{ + if (objPtr->internalRep.intValue == -1) { + JimSetStringBytes(objPtr, "end"); + } + else { + char buf[JIM_INTEGER_SPACE + 1]; + if (objPtr->internalRep.intValue >= 0) { + sprintf(buf, "%d", objPtr->internalRep.intValue); + } + else { + + sprintf(buf, "end%d", objPtr->internalRep.intValue + 1); + } + JimSetStringBytes(objPtr, buf); + } +} + +static int SetIndexFromAny(Jim_Interp *interp, Jim_Obj *objPtr) +{ + int idx, end = 0; + const char *str; + char *endptr; + + + str = Jim_String(objPtr); + + + if (strncmp(str, "end", 3) == 0) { + end = 1; + str += 3; + idx = 0; + } + else { + idx = jim_strtol(str, &endptr); + + if (endptr == str) { + goto badindex; + } + str = endptr; + } + + + if (*str == '+' || *str == '-') { + int sign = (*str == '+' ? 1 : -1); + + idx += sign * jim_strtol(++str, &endptr); + if (str == endptr || *endptr) { + goto badindex; + } + str = endptr; + } + + while (isspace(UCHAR(*str))) { + str++; + } + if (*str) { + goto badindex; + } + if (end) { + if (idx > 0) { + idx = INT_MAX; + } + else { + + idx--; + } + } + else if (idx < 0) { + idx = -INT_MAX; + } + + + Jim_FreeIntRep(interp, objPtr); + objPtr->typePtr = &indexObjType; + objPtr->internalRep.intValue = idx; + return JIM_OK; + + badindex: + Jim_SetResultFormatted(interp, + "bad index \"%#s\": must be integer?[+-]integer? 
or end?[+-]integer?", objPtr); + return JIM_ERR; +} + +int Jim_GetIndex(Jim_Interp *interp, Jim_Obj *objPtr, int *indexPtr) +{ + + if (objPtr->typePtr == &intObjType) { + jim_wide val = JimWideValue(objPtr); + + if (val < 0) + *indexPtr = -INT_MAX; + else if (val > INT_MAX) + *indexPtr = INT_MAX; + else + *indexPtr = (int)val; + return JIM_OK; + } + if (objPtr->typePtr != &indexObjType && SetIndexFromAny(interp, objPtr) == JIM_ERR) + return JIM_ERR; + *indexPtr = objPtr->internalRep.intValue; + return JIM_OK; +} + + + +static const char * const jimReturnCodes[] = { + "ok", + "error", + "return", + "break", + "continue", + "signal", + "exit", + "eval", + NULL +}; + +#define jimReturnCodesSize (sizeof(jimReturnCodes)/sizeof(*jimReturnCodes)) + +static const Jim_ObjType returnCodeObjType = { + "return-code", + NULL, + NULL, + NULL, + JIM_TYPE_NONE, +}; + +const char *Jim_ReturnCode(int code) +{ + if (code < 0 || code >= (int)jimReturnCodesSize) { + return "?"; + } + else { + return jimReturnCodes[code]; + } +} + +static int SetReturnCodeFromAny(Jim_Interp *interp, Jim_Obj *objPtr) +{ + int returnCode; + jim_wide wideValue; + + + if (JimGetWideNoErr(interp, objPtr, &wideValue) != JIM_ERR) + returnCode = (int)wideValue; + else if (Jim_GetEnum(interp, objPtr, jimReturnCodes, &returnCode, NULL, JIM_NONE) != JIM_OK) { + Jim_SetResultFormatted(interp, "expected return code but got \"%#s\"", objPtr); + return JIM_ERR; + } + + Jim_FreeIntRep(interp, objPtr); + objPtr->typePtr = &returnCodeObjType; + objPtr->internalRep.intValue = returnCode; + return JIM_OK; +} + +int Jim_GetReturnCode(Jim_Interp *interp, Jim_Obj *objPtr, int *intPtr) +{ + if (objPtr->typePtr != &returnCodeObjType && SetReturnCodeFromAny(interp, objPtr) == JIM_ERR) + return JIM_ERR; + *intPtr = objPtr->internalRep.intValue; + return JIM_OK; +} + +static int JimParseExprOperator(struct JimParserCtx *pc); +static int JimParseExprNumber(struct JimParserCtx *pc); +static int JimParseExprIrrational(struct JimParserCtx *pc); + + + + +enum +{ + + + JIM_EXPROP_MUL = JIM_TT_EXPR_OP, + JIM_EXPROP_DIV, + JIM_EXPROP_MOD, + JIM_EXPROP_SUB, + JIM_EXPROP_ADD, + JIM_EXPROP_LSHIFT, + JIM_EXPROP_RSHIFT, + JIM_EXPROP_ROTL, + JIM_EXPROP_ROTR, + JIM_EXPROP_LT, + JIM_EXPROP_GT, + JIM_EXPROP_LTE, + JIM_EXPROP_GTE, + JIM_EXPROP_NUMEQ, + JIM_EXPROP_NUMNE, + JIM_EXPROP_BITAND, + JIM_EXPROP_BITXOR, + JIM_EXPROP_BITOR, + + + JIM_EXPROP_LOGICAND, + JIM_EXPROP_LOGICAND_LEFT, + JIM_EXPROP_LOGICAND_RIGHT, + + + JIM_EXPROP_LOGICOR, + JIM_EXPROP_LOGICOR_LEFT, + JIM_EXPROP_LOGICOR_RIGHT, + + + + JIM_EXPROP_TERNARY, + JIM_EXPROP_TERNARY_LEFT, + JIM_EXPROP_TERNARY_RIGHT, + + + JIM_EXPROP_COLON, + JIM_EXPROP_COLON_LEFT, + JIM_EXPROP_COLON_RIGHT, + + JIM_EXPROP_POW, + + + JIM_EXPROP_STREQ, + JIM_EXPROP_STRNE, + JIM_EXPROP_STRIN, + JIM_EXPROP_STRNI, + + + JIM_EXPROP_NOT, + JIM_EXPROP_BITNOT, + JIM_EXPROP_UNARYMINUS, + JIM_EXPROP_UNARYPLUS, + + + JIM_EXPROP_FUNC_FIRST, + JIM_EXPROP_FUNC_INT = JIM_EXPROP_FUNC_FIRST, + JIM_EXPROP_FUNC_WIDE, + JIM_EXPROP_FUNC_ABS, + JIM_EXPROP_FUNC_DOUBLE, + JIM_EXPROP_FUNC_ROUND, + JIM_EXPROP_FUNC_RAND, + JIM_EXPROP_FUNC_SRAND, + + + JIM_EXPROP_FUNC_SIN, + JIM_EXPROP_FUNC_COS, + JIM_EXPROP_FUNC_TAN, + JIM_EXPROP_FUNC_ASIN, + JIM_EXPROP_FUNC_ACOS, + JIM_EXPROP_FUNC_ATAN, + JIM_EXPROP_FUNC_SINH, + JIM_EXPROP_FUNC_COSH, + JIM_EXPROP_FUNC_TANH, + JIM_EXPROP_FUNC_CEIL, + JIM_EXPROP_FUNC_FLOOR, + JIM_EXPROP_FUNC_EXP, + JIM_EXPROP_FUNC_LOG, + JIM_EXPROP_FUNC_LOG10, + JIM_EXPROP_FUNC_SQRT, + JIM_EXPROP_FUNC_POW, +}; + +struct JimExprState +{ + 
Jim_Obj **stack; + int stacklen; + int opcode; + int skip; +}; + + +typedef struct Jim_ExprOperator +{ + const char *name; + int (*funcop) (Jim_Interp *interp, struct JimExprState * e); + unsigned char precedence; + unsigned char arity; + unsigned char lazy; + unsigned char namelen; +} Jim_ExprOperator; + +static void ExprPush(struct JimExprState *e, Jim_Obj *obj) +{ + Jim_IncrRefCount(obj); + e->stack[e->stacklen++] = obj; +} + +static Jim_Obj *ExprPop(struct JimExprState *e) +{ + return e->stack[--e->stacklen]; +} + +static int JimExprOpNumUnary(Jim_Interp *interp, struct JimExprState *e) +{ + int intresult = 1; + int rc = JIM_OK; + Jim_Obj *A = ExprPop(e); + double dA, dC = 0; + jim_wide wA, wC = 0; + + if ((A->typePtr != &doubleObjType || A->bytes) && JimGetWideNoErr(interp, A, &wA) == JIM_OK) { + switch (e->opcode) { + case JIM_EXPROP_FUNC_INT: + case JIM_EXPROP_FUNC_WIDE: + case JIM_EXPROP_FUNC_ROUND: + case JIM_EXPROP_UNARYPLUS: + wC = wA; + break; + case JIM_EXPROP_FUNC_DOUBLE: + dC = wA; + intresult = 0; + break; + case JIM_EXPROP_FUNC_ABS: + wC = wA >= 0 ? wA : -wA; + break; + case JIM_EXPROP_UNARYMINUS: + wC = -wA; + break; + case JIM_EXPROP_NOT: + wC = !wA; + break; + default: + abort(); + } + } + else if ((rc = Jim_GetDouble(interp, A, &dA)) == JIM_OK) { + switch (e->opcode) { + case JIM_EXPROP_FUNC_INT: + case JIM_EXPROP_FUNC_WIDE: + wC = dA; + break; + case JIM_EXPROP_FUNC_ROUND: + wC = dA < 0 ? (dA - 0.5) : (dA + 0.5); + break; + case JIM_EXPROP_FUNC_DOUBLE: + case JIM_EXPROP_UNARYPLUS: + dC = dA; + intresult = 0; + break; + case JIM_EXPROP_FUNC_ABS: + dC = dA >= 0 ? dA : -dA; + intresult = 0; + break; + case JIM_EXPROP_UNARYMINUS: + dC = -dA; + intresult = 0; + break; + case JIM_EXPROP_NOT: + wC = !dA; + break; + default: + abort(); + } + } + + if (rc == JIM_OK) { + if (intresult) { + ExprPush(e, Jim_NewIntObj(interp, wC)); + } + else { + ExprPush(e, Jim_NewDoubleObj(interp, dC)); + } + } + + Jim_DecrRefCount(interp, A); + + return rc; +} + +static double JimRandDouble(Jim_Interp *interp) +{ + unsigned long x; + JimRandomBytes(interp, &x, sizeof(x)); + + return (double)x / (unsigned long)~0; +} + +static int JimExprOpIntUnary(Jim_Interp *interp, struct JimExprState *e) +{ + Jim_Obj *A = ExprPop(e); + jim_wide wA; + + int rc = Jim_GetWide(interp, A, &wA); + if (rc == JIM_OK) { + switch (e->opcode) { + case JIM_EXPROP_BITNOT: + ExprPush(e, Jim_NewIntObj(interp, ~wA)); + break; + case JIM_EXPROP_FUNC_SRAND: + JimPrngSeed(interp, (unsigned char *)&wA, sizeof(wA)); + ExprPush(e, Jim_NewDoubleObj(interp, JimRandDouble(interp))); + break; + default: + abort(); + } + } + + Jim_DecrRefCount(interp, A); + + return rc; +} + +static int JimExprOpNone(Jim_Interp *interp, struct JimExprState *e) +{ + JimPanic((e->opcode != JIM_EXPROP_FUNC_RAND, "JimExprOpNone only support rand()")); + + ExprPush(e, Jim_NewDoubleObj(interp, JimRandDouble(interp))); + + return JIM_OK; +} + +#ifdef JIM_MATH_FUNCTIONS +static int JimExprOpDoubleUnary(Jim_Interp *interp, struct JimExprState *e) +{ + int rc; + Jim_Obj *A = ExprPop(e); + double dA, dC; + + rc = Jim_GetDouble(interp, A, &dA); + if (rc == JIM_OK) { + switch (e->opcode) { + case JIM_EXPROP_FUNC_SIN: + dC = sin(dA); + break; + case JIM_EXPROP_FUNC_COS: + dC = cos(dA); + break; + case JIM_EXPROP_FUNC_TAN: + dC = tan(dA); + break; + case JIM_EXPROP_FUNC_ASIN: + dC = asin(dA); + break; + case JIM_EXPROP_FUNC_ACOS: + dC = acos(dA); + break; + case JIM_EXPROP_FUNC_ATAN: + dC = atan(dA); + break; + case JIM_EXPROP_FUNC_SINH: + dC = sinh(dA); + break; + 
case JIM_EXPROP_FUNC_COSH: + dC = cosh(dA); + break; + case JIM_EXPROP_FUNC_TANH: + dC = tanh(dA); + break; + case JIM_EXPROP_FUNC_CEIL: + dC = ceil(dA); + break; + case JIM_EXPROP_FUNC_FLOOR: + dC = floor(dA); + break; + case JIM_EXPROP_FUNC_EXP: + dC = exp(dA); + break; + case JIM_EXPROP_FUNC_LOG: + dC = log(dA); + break; + case JIM_EXPROP_FUNC_LOG10: + dC = log10(dA); + break; + case JIM_EXPROP_FUNC_SQRT: + dC = sqrt(dA); + break; + default: + abort(); + } + ExprPush(e, Jim_NewDoubleObj(interp, dC)); + } + + Jim_DecrRefCount(interp, A); + + return rc; +} +#endif + + +static int JimExprOpIntBin(Jim_Interp *interp, struct JimExprState *e) +{ + Jim_Obj *B = ExprPop(e); + Jim_Obj *A = ExprPop(e); + jim_wide wA, wB; + int rc = JIM_ERR; + + if (Jim_GetWide(interp, A, &wA) == JIM_OK && Jim_GetWide(interp, B, &wB) == JIM_OK) { + jim_wide wC; + + rc = JIM_OK; + + switch (e->opcode) { + case JIM_EXPROP_LSHIFT: + wC = wA << wB; + break; + case JIM_EXPROP_RSHIFT: + wC = wA >> wB; + break; + case JIM_EXPROP_BITAND: + wC = wA & wB; + break; + case JIM_EXPROP_BITXOR: + wC = wA ^ wB; + break; + case JIM_EXPROP_BITOR: + wC = wA | wB; + break; + case JIM_EXPROP_MOD: + if (wB == 0) { + wC = 0; + Jim_SetResultString(interp, "Division by zero", -1); + rc = JIM_ERR; + } + else { + int negative = 0; + + if (wB < 0) { + wB = -wB; + wA = -wA; + negative = 1; + } + wC = wA % wB; + if (wC < 0) { + wC += wB; + } + if (negative) { + wC = -wC; + } + } + break; + case JIM_EXPROP_ROTL: + case JIM_EXPROP_ROTR:{ + + unsigned long uA = (unsigned long)wA; + unsigned long uB = (unsigned long)wB; + const unsigned int S = sizeof(unsigned long) * 8; + + + uB %= S; + + if (e->opcode == JIM_EXPROP_ROTR) { + uB = S - uB; + } + wC = (unsigned long)(uA << uB) | (uA >> (S - uB)); + break; + } + default: + abort(); + } + ExprPush(e, Jim_NewIntObj(interp, wC)); + + } + + Jim_DecrRefCount(interp, A); + Jim_DecrRefCount(interp, B); + + return rc; +} + + + +static int JimExprOpBin(Jim_Interp *interp, struct JimExprState *e) +{ + int intresult = 1; + int rc = JIM_OK; + double dA, dB, dC = 0; + jim_wide wA, wB, wC = 0; + + Jim_Obj *B = ExprPop(e); + Jim_Obj *A = ExprPop(e); + + if ((A->typePtr != &doubleObjType || A->bytes) && + (B->typePtr != &doubleObjType || B->bytes) && + JimGetWideNoErr(interp, A, &wA) == JIM_OK && JimGetWideNoErr(interp, B, &wB) == JIM_OK) { + + + + switch (e->opcode) { + case JIM_EXPROP_POW: + case JIM_EXPROP_FUNC_POW: + wC = JimPowWide(wA, wB); + break; + case JIM_EXPROP_ADD: + wC = wA + wB; + break; + case JIM_EXPROP_SUB: + wC = wA - wB; + break; + case JIM_EXPROP_MUL: + wC = wA * wB; + break; + case JIM_EXPROP_DIV: + if (wB == 0) { + Jim_SetResultString(interp, "Division by zero", -1); + rc = JIM_ERR; + } + else { + if (wB < 0) { + wB = -wB; + wA = -wA; + } + wC = wA / wB; + if (wA % wB < 0) { + wC--; + } + } + break; + case JIM_EXPROP_LT: + wC = wA < wB; + break; + case JIM_EXPROP_GT: + wC = wA > wB; + break; + case JIM_EXPROP_LTE: + wC = wA <= wB; + break; + case JIM_EXPROP_GTE: + wC = wA >= wB; + break; + case JIM_EXPROP_NUMEQ: + wC = wA == wB; + break; + case JIM_EXPROP_NUMNE: + wC = wA != wB; + break; + default: + abort(); + } + } + else if (Jim_GetDouble(interp, A, &dA) == JIM_OK && Jim_GetDouble(interp, B, &dB) == JIM_OK) { + intresult = 0; + switch (e->opcode) { + case JIM_EXPROP_POW: + case JIM_EXPROP_FUNC_POW: +#ifdef JIM_MATH_FUNCTIONS + dC = pow(dA, dB); +#else + Jim_SetResultString(interp, "unsupported", -1); + rc = JIM_ERR; +#endif + break; + case JIM_EXPROP_ADD: + dC = dA + dB; + break; + case 
JIM_EXPROP_SUB: + dC = dA - dB; + break; + case JIM_EXPROP_MUL: + dC = dA * dB; + break; + case JIM_EXPROP_DIV: + if (dB == 0) { +#ifdef INFINITY + dC = dA < 0 ? -INFINITY : INFINITY; +#else + dC = (dA < 0 ? -1.0 : 1.0) * strtod("Inf", NULL); +#endif + } + else { + dC = dA / dB; + } + break; + case JIM_EXPROP_LT: + wC = dA < dB; + intresult = 1; + break; + case JIM_EXPROP_GT: + wC = dA > dB; + intresult = 1; + break; + case JIM_EXPROP_LTE: + wC = dA <= dB; + intresult = 1; + break; + case JIM_EXPROP_GTE: + wC = dA >= dB; + intresult = 1; + break; + case JIM_EXPROP_NUMEQ: + wC = dA == dB; + intresult = 1; + break; + case JIM_EXPROP_NUMNE: + wC = dA != dB; + intresult = 1; + break; + default: + abort(); + } + } + else { + + + + int i = Jim_StringCompareObj(interp, A, B, 0); + + switch (e->opcode) { + case JIM_EXPROP_LT: + wC = i < 0; + break; + case JIM_EXPROP_GT: + wC = i > 0; + break; + case JIM_EXPROP_LTE: + wC = i <= 0; + break; + case JIM_EXPROP_GTE: + wC = i >= 0; + break; + case JIM_EXPROP_NUMEQ: + wC = i == 0; + break; + case JIM_EXPROP_NUMNE: + wC = i != 0; + break; + default: + rc = JIM_ERR; + break; + } + } + + if (rc == JIM_OK) { + if (intresult) { + ExprPush(e, Jim_NewIntObj(interp, wC)); + } + else { + ExprPush(e, Jim_NewDoubleObj(interp, dC)); + } + } + + Jim_DecrRefCount(interp, A); + Jim_DecrRefCount(interp, B); + + return rc; +} + +static int JimSearchList(Jim_Interp *interp, Jim_Obj *listObjPtr, Jim_Obj *valObj) +{ + int listlen; + int i; + + listlen = Jim_ListLength(interp, listObjPtr); + for (i = 0; i < listlen; i++) { + if (Jim_StringEqObj(Jim_ListGetIndex(interp, listObjPtr, i), valObj)) { + return 1; + } + } + return 0; +} + +static int JimExprOpStrBin(Jim_Interp *interp, struct JimExprState *e) +{ + Jim_Obj *B = ExprPop(e); + Jim_Obj *A = ExprPop(e); + + jim_wide wC; + + switch (e->opcode) { + case JIM_EXPROP_STREQ: + case JIM_EXPROP_STRNE: + wC = Jim_StringEqObj(A, B); + if (e->opcode == JIM_EXPROP_STRNE) { + wC = !wC; + } + break; + case JIM_EXPROP_STRIN: + wC = JimSearchList(interp, B, A); + break; + case JIM_EXPROP_STRNI: + wC = !JimSearchList(interp, B, A); + break; + default: + abort(); + } + ExprPush(e, Jim_NewIntObj(interp, wC)); + + Jim_DecrRefCount(interp, A); + Jim_DecrRefCount(interp, B); + + return JIM_OK; +} + +static int ExprBool(Jim_Interp *interp, Jim_Obj *obj) +{ + long l; + double d; + + if (Jim_GetLong(interp, obj, &l) == JIM_OK) { + return l != 0; + } + if (Jim_GetDouble(interp, obj, &d) == JIM_OK) { + return d != 0; + } + return -1; +} + +static int JimExprOpAndLeft(Jim_Interp *interp, struct JimExprState *e) +{ + Jim_Obj *skip = ExprPop(e); + Jim_Obj *A = ExprPop(e); + int rc = JIM_OK; + + switch (ExprBool(interp, A)) { + case 0: + + e->skip = JimWideValue(skip); + ExprPush(e, Jim_NewIntObj(interp, 0)); + break; + + case 1: + + break; + + case -1: + + rc = JIM_ERR; + } + Jim_DecrRefCount(interp, A); + Jim_DecrRefCount(interp, skip); + + return rc; +} + +static int JimExprOpOrLeft(Jim_Interp *interp, struct JimExprState *e) +{ + Jim_Obj *skip = ExprPop(e); + Jim_Obj *A = ExprPop(e); + int rc = JIM_OK; + + switch (ExprBool(interp, A)) { + case 0: + + break; + + case 1: + + e->skip = JimWideValue(skip); + ExprPush(e, Jim_NewIntObj(interp, 1)); + break; + + case -1: + + rc = JIM_ERR; + break; + } + Jim_DecrRefCount(interp, A); + Jim_DecrRefCount(interp, skip); + + return rc; +} + +static int JimExprOpAndOrRight(Jim_Interp *interp, struct JimExprState *e) +{ + Jim_Obj *A = ExprPop(e); + int rc = JIM_OK; + + switch (ExprBool(interp, A)) { + case 0: + 
ExprPush(e, Jim_NewIntObj(interp, 0)); + break; + + case 1: + ExprPush(e, Jim_NewIntObj(interp, 1)); + break; + + case -1: + + rc = JIM_ERR; + break; + } + Jim_DecrRefCount(interp, A); + + return rc; +} + +static int JimExprOpTernaryLeft(Jim_Interp *interp, struct JimExprState *e) +{ + Jim_Obj *skip = ExprPop(e); + Jim_Obj *A = ExprPop(e); + int rc = JIM_OK; + + + ExprPush(e, A); + + switch (ExprBool(interp, A)) { + case 0: + + e->skip = JimWideValue(skip); + + ExprPush(e, Jim_NewIntObj(interp, 0)); + break; + + case 1: + + break; + + case -1: + + rc = JIM_ERR; + break; + } + Jim_DecrRefCount(interp, A); + Jim_DecrRefCount(interp, skip); + + return rc; +} + +static int JimExprOpColonLeft(Jim_Interp *interp, struct JimExprState *e) +{ + Jim_Obj *skip = ExprPop(e); + Jim_Obj *B = ExprPop(e); + Jim_Obj *A = ExprPop(e); + + + if (ExprBool(interp, A)) { + + e->skip = JimWideValue(skip); + + ExprPush(e, B); + } + + Jim_DecrRefCount(interp, skip); + Jim_DecrRefCount(interp, A); + Jim_DecrRefCount(interp, B); + return JIM_OK; +} + +static int JimExprOpNull(Jim_Interp *interp, struct JimExprState *e) +{ + return JIM_OK; +} + +enum +{ + LAZY_NONE, + LAZY_OP, + LAZY_LEFT, + LAZY_RIGHT +}; + +#define OPRINIT(N, P, A, F) {N, F, P, A, LAZY_NONE, sizeof(N) - 1} +#define OPRINIT_LAZY(N, P, A, F, L) {N, F, P, A, L, sizeof(N) - 1} + +static const struct Jim_ExprOperator Jim_ExprOperators[] = { + OPRINIT("*", 110, 2, JimExprOpBin), + OPRINIT("/", 110, 2, JimExprOpBin), + OPRINIT("%", 110, 2, JimExprOpIntBin), + + OPRINIT("-", 100, 2, JimExprOpBin), + OPRINIT("+", 100, 2, JimExprOpBin), + + OPRINIT("<<", 90, 2, JimExprOpIntBin), + OPRINIT(">>", 90, 2, JimExprOpIntBin), + + OPRINIT("<<<", 90, 2, JimExprOpIntBin), + OPRINIT(">>>", 90, 2, JimExprOpIntBin), + + OPRINIT("<", 80, 2, JimExprOpBin), + OPRINIT(">", 80, 2, JimExprOpBin), + OPRINIT("<=", 80, 2, JimExprOpBin), + OPRINIT(">=", 80, 2, JimExprOpBin), + + OPRINIT("==", 70, 2, JimExprOpBin), + OPRINIT("!=", 70, 2, JimExprOpBin), + + OPRINIT("&", 50, 2, JimExprOpIntBin), + OPRINIT("^", 49, 2, JimExprOpIntBin), + OPRINIT("|", 48, 2, JimExprOpIntBin), + + OPRINIT_LAZY("&&", 10, 2, NULL, LAZY_OP), + OPRINIT_LAZY(NULL, 10, 2, JimExprOpAndLeft, LAZY_LEFT), + OPRINIT_LAZY(NULL, 10, 2, JimExprOpAndOrRight, LAZY_RIGHT), + + OPRINIT_LAZY("||", 9, 2, NULL, LAZY_OP), + OPRINIT_LAZY(NULL, 9, 2, JimExprOpOrLeft, LAZY_LEFT), + OPRINIT_LAZY(NULL, 9, 2, JimExprOpAndOrRight, LAZY_RIGHT), + + OPRINIT_LAZY("?", 5, 2, JimExprOpNull, LAZY_OP), + OPRINIT_LAZY(NULL, 5, 2, JimExprOpTernaryLeft, LAZY_LEFT), + OPRINIT_LAZY(NULL, 5, 2, JimExprOpNull, LAZY_RIGHT), + + OPRINIT_LAZY(":", 5, 2, JimExprOpNull, LAZY_OP), + OPRINIT_LAZY(NULL, 5, 2, JimExprOpColonLeft, LAZY_LEFT), + OPRINIT_LAZY(NULL, 5, 2, JimExprOpNull, LAZY_RIGHT), + + OPRINIT("**", 250, 2, JimExprOpBin), + + OPRINIT("eq", 60, 2, JimExprOpStrBin), + OPRINIT("ne", 60, 2, JimExprOpStrBin), + + OPRINIT("in", 55, 2, JimExprOpStrBin), + OPRINIT("ni", 55, 2, JimExprOpStrBin), + + OPRINIT("!", 150, 1, JimExprOpNumUnary), + OPRINIT("~", 150, 1, JimExprOpIntUnary), + OPRINIT(NULL, 150, 1, JimExprOpNumUnary), + OPRINIT(NULL, 150, 1, JimExprOpNumUnary), + + + + OPRINIT("int", 200, 1, JimExprOpNumUnary), + OPRINIT("wide", 200, 1, JimExprOpNumUnary), + OPRINIT("abs", 200, 1, JimExprOpNumUnary), + OPRINIT("double", 200, 1, JimExprOpNumUnary), + OPRINIT("round", 200, 1, JimExprOpNumUnary), + OPRINIT("rand", 200, 0, JimExprOpNone), + OPRINIT("srand", 200, 1, JimExprOpIntUnary), + +#ifdef JIM_MATH_FUNCTIONS + OPRINIT("sin", 200, 1, 
JimExprOpDoubleUnary), + OPRINIT("cos", 200, 1, JimExprOpDoubleUnary), + OPRINIT("tan", 200, 1, JimExprOpDoubleUnary), + OPRINIT("asin", 200, 1, JimExprOpDoubleUnary), + OPRINIT("acos", 200, 1, JimExprOpDoubleUnary), + OPRINIT("atan", 200, 1, JimExprOpDoubleUnary), + OPRINIT("sinh", 200, 1, JimExprOpDoubleUnary), + OPRINIT("cosh", 200, 1, JimExprOpDoubleUnary), + OPRINIT("tanh", 200, 1, JimExprOpDoubleUnary), + OPRINIT("ceil", 200, 1, JimExprOpDoubleUnary), + OPRINIT("floor", 200, 1, JimExprOpDoubleUnary), + OPRINIT("exp", 200, 1, JimExprOpDoubleUnary), + OPRINIT("log", 200, 1, JimExprOpDoubleUnary), + OPRINIT("log10", 200, 1, JimExprOpDoubleUnary), + OPRINIT("sqrt", 200, 1, JimExprOpDoubleUnary), + OPRINIT("pow", 200, 2, JimExprOpBin), +#endif +}; +#undef OPRINIT +#undef OPRINIT_LAZY + +#define JIM_EXPR_OPERATORS_NUM \ + (sizeof(Jim_ExprOperators)/sizeof(struct Jim_ExprOperator)) + +static int JimParseExpression(struct JimParserCtx *pc) +{ + + while (isspace(UCHAR(*pc->p)) || (*(pc->p) == '\\' && *(pc->p + 1) == '\n')) { + if (*pc->p == '\n') { + pc->linenr++; + } + pc->p++; + pc->len--; + } + + + pc->tline = pc->linenr; + pc->tstart = pc->p; + + if (pc->len == 0) { + pc->tend = pc->p; + pc->tt = JIM_TT_EOL; + pc->eof = 1; + return JIM_OK; + } + switch (*(pc->p)) { + case '(': + pc->tt = JIM_TT_SUBEXPR_START; + goto singlechar; + case ')': + pc->tt = JIM_TT_SUBEXPR_END; + goto singlechar; + case ',': + pc->tt = JIM_TT_SUBEXPR_COMMA; +singlechar: + pc->tend = pc->p; + pc->p++; + pc->len--; + break; + case '[': + return JimParseCmd(pc); + case '$': + if (JimParseVar(pc) == JIM_ERR) + return JimParseExprOperator(pc); + else { + + if (pc->tt == JIM_TT_EXPRSUGAR) { + return JIM_ERR; + } + return JIM_OK; + } + break; + case '0': + case '1': + case '2': + case '3': + case '4': + case '5': + case '6': + case '7': + case '8': + case '9': + case '.': + return JimParseExprNumber(pc); + case '"': + return JimParseQuote(pc); + case '{': + return JimParseBrace(pc); + + case 'N': + case 'I': + case 'n': + case 'i': + if (JimParseExprIrrational(pc) == JIM_ERR) + return JimParseExprOperator(pc); + break; + default: + return JimParseExprOperator(pc); + break; + } + return JIM_OK; +} + +static int JimParseExprNumber(struct JimParserCtx *pc) +{ + char *end; + + + pc->tt = JIM_TT_EXPR_INT; + + jim_strtoull(pc->p, (char **)&pc->p); + + if (strchr("eENnIi.", *pc->p) || pc->p == pc->tstart) { + if (strtod(pc->tstart, &end)) { } + if (end == pc->tstart) + return JIM_ERR; + if (end > pc->p) { + + pc->tt = JIM_TT_EXPR_DOUBLE; + pc->p = end; + } + } + pc->tend = pc->p - 1; + pc->len -= (pc->p - pc->tstart); + return JIM_OK; +} + +static int JimParseExprIrrational(struct JimParserCtx *pc) +{ + const char *irrationals[] = { "NaN", "nan", "NAN", "Inf", "inf", "INF", NULL }; + int i; + + for (i = 0; irrationals[i]; i++) { + const char *irr = irrationals[i]; + + if (strncmp(irr, pc->p, 3) == 0) { + pc->p += 3; + pc->len -= 3; + pc->tend = pc->p - 1; + pc->tt = JIM_TT_EXPR_DOUBLE; + return JIM_OK; + } + } + return JIM_ERR; +} + +static int JimParseExprOperator(struct JimParserCtx *pc) +{ + int i; + int bestIdx = -1, bestLen = 0; + + + for (i = 0; i < (signed)JIM_EXPR_OPERATORS_NUM; i++) { + const char * const opname = Jim_ExprOperators[i].name; + const int oplen = Jim_ExprOperators[i].namelen; + + if (opname == NULL || opname[0] != pc->p[0]) { + continue; + } + + if (oplen > bestLen && strncmp(opname, pc->p, oplen) == 0) { + bestIdx = i + JIM_TT_EXPR_OP; + bestLen = oplen; + } + } + if (bestIdx == -1) { + return JIM_ERR; 
+ } + + + if (bestIdx >= JIM_EXPROP_FUNC_FIRST) { + const char *p = pc->p + bestLen; + int len = pc->len - bestLen; + + while (len && isspace(UCHAR(*p))) { + len--; + p++; + } + if (*p != '(') { + return JIM_ERR; + } + } + pc->tend = pc->p + bestLen - 1; + pc->p += bestLen; + pc->len -= bestLen; + + pc->tt = bestIdx; + return JIM_OK; +} + +static const struct Jim_ExprOperator *JimExprOperatorInfoByOpcode(int opcode) +{ + static Jim_ExprOperator dummy_op; + if (opcode < JIM_TT_EXPR_OP) { + return &dummy_op; + } + return &Jim_ExprOperators[opcode - JIM_TT_EXPR_OP]; +} + +const char *jim_tt_name(int type) +{ + static const char * const tt_names[JIM_TT_EXPR_OP] = + { "NIL", "STR", "ESC", "VAR", "ARY", "CMD", "SEP", "EOL", "EOF", "LIN", "WRD", "(((", ")))", ",,,", "INT", + "DBL", "$()" }; + if (type < JIM_TT_EXPR_OP) { + return tt_names[type]; + } + else { + const struct Jim_ExprOperator *op = JimExprOperatorInfoByOpcode(type); + static char buf[20]; + + if (op->name) { + return op->name; + } + sprintf(buf, "(%d)", type); + return buf; + } +} + +static void FreeExprInternalRep(Jim_Interp *interp, Jim_Obj *objPtr); +static void DupExprInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr); +static int SetExprFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr); + +static const Jim_ObjType exprObjType = { + "expression", + FreeExprInternalRep, + DupExprInternalRep, + NULL, + JIM_TYPE_REFERENCES, +}; + + +typedef struct ExprByteCode +{ + ScriptToken *token; + int len; + int inUse; +} ExprByteCode; + +static void ExprFreeByteCode(Jim_Interp *interp, ExprByteCode * expr) +{ + int i; + + for (i = 0; i < expr->len; i++) { + Jim_DecrRefCount(interp, expr->token[i].objPtr); + } + Jim_Free(expr->token); + Jim_Free(expr); +} + +static void FreeExprInternalRep(Jim_Interp *interp, Jim_Obj *objPtr) +{ + ExprByteCode *expr = (void *)objPtr->internalRep.ptr; + + if (expr) { + if (--expr->inUse != 0) { + return; + } + + ExprFreeByteCode(interp, expr); + } +} + +static void DupExprInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr) +{ + JIM_NOTUSED(interp); + JIM_NOTUSED(srcPtr); + + + dupPtr->typePtr = NULL; +} + + +static int ExprCheckCorrectness(ExprByteCode * expr) +{ + int i; + int stacklen = 0; + int ternary = 0; + + for (i = 0; i < expr->len; i++) { + ScriptToken *t = &expr->token[i]; + const struct Jim_ExprOperator *op = JimExprOperatorInfoByOpcode(t->type); + + stacklen -= op->arity; + if (stacklen < 0) { + break; + } + if (t->type == JIM_EXPROP_TERNARY || t->type == JIM_EXPROP_TERNARY_LEFT) { + ternary++; + } + else if (t->type == JIM_EXPROP_COLON || t->type == JIM_EXPROP_COLON_LEFT) { + ternary--; + } + + + stacklen++; + } + if (stacklen != 1 || ternary != 0) { + return JIM_ERR; + } + return JIM_OK; +} + +static int ExprAddLazyOperator(Jim_Interp *interp, ExprByteCode * expr, ParseToken *t) +{ + int i; + + int leftindex, arity, offset; + + + leftindex = expr->len - 1; + + arity = 1; + while (arity) { + ScriptToken *tt = &expr->token[leftindex]; + + if (tt->type >= JIM_TT_EXPR_OP) { + arity += JimExprOperatorInfoByOpcode(tt->type)->arity; + } + arity--; + if (--leftindex < 0) { + return JIM_ERR; + } + } + leftindex++; + + + memmove(&expr->token[leftindex + 2], &expr->token[leftindex], + sizeof(*expr->token) * (expr->len - leftindex)); + expr->len += 2; + offset = (expr->len - leftindex) - 1; + + expr->token[leftindex + 1].type = t->type + 1; + expr->token[leftindex + 1].objPtr = interp->emptyObj; + + expr->token[leftindex].type = JIM_TT_EXPR_INT; + expr->token[leftindex].objPtr 
= Jim_NewIntObj(interp, offset); + + + expr->token[expr->len].objPtr = interp->emptyObj; + expr->token[expr->len].type = t->type + 2; + expr->len++; + + + for (i = leftindex - 1; i > 0; i--) { + const struct Jim_ExprOperator *op = JimExprOperatorInfoByOpcode(expr->token[i].type); + if (op->lazy == LAZY_LEFT) { + if (JimWideValue(expr->token[i - 1].objPtr) + i - 1 >= leftindex) { + JimWideValue(expr->token[i - 1].objPtr) += 2; + } + } + } + return JIM_OK; +} + +static int ExprAddOperator(Jim_Interp *interp, ExprByteCode * expr, ParseToken *t) +{ + struct ScriptToken *token = &expr->token[expr->len]; + const struct Jim_ExprOperator *op = JimExprOperatorInfoByOpcode(t->type); + + if (op->lazy == LAZY_OP) { + if (ExprAddLazyOperator(interp, expr, t) != JIM_OK) { + Jim_SetResultFormatted(interp, "Expression has bad operands to %s", op->name); + return JIM_ERR; + } + } + else { + token->objPtr = interp->emptyObj; + token->type = t->type; + expr->len++; + } + return JIM_OK; +} + +static int ExprTernaryGetColonLeftIndex(ExprByteCode *expr, int right_index) +{ + int ternary_count = 1; + + right_index--; + + while (right_index > 1) { + if (expr->token[right_index].type == JIM_EXPROP_TERNARY_LEFT) { + ternary_count--; + } + else if (expr->token[right_index].type == JIM_EXPROP_COLON_RIGHT) { + ternary_count++; + } + else if (expr->token[right_index].type == JIM_EXPROP_COLON_LEFT && ternary_count == 1) { + return right_index; + } + right_index--; + } + + + return -1; +} + +static int ExprTernaryGetMoveIndices(ExprByteCode *expr, int right_index, int *prev_right_index, int *prev_left_index) +{ + int i = right_index - 1; + int ternary_count = 1; + + while (i > 1) { + if (expr->token[i].type == JIM_EXPROP_TERNARY_LEFT) { + if (--ternary_count == 0 && expr->token[i - 2].type == JIM_EXPROP_COLON_RIGHT) { + *prev_right_index = i - 2; + *prev_left_index = ExprTernaryGetColonLeftIndex(expr, *prev_right_index); + return 1; + } + } + else if (expr->token[i].type == JIM_EXPROP_COLON_RIGHT) { + if (ternary_count == 0) { + return 0; + } + ternary_count++; + } + i--; + } + return 0; +} + +static void ExprTernaryReorderExpression(Jim_Interp *interp, ExprByteCode *expr) +{ + int i; + + for (i = expr->len - 1; i > 1; i--) { + int prev_right_index; + int prev_left_index; + int j; + ScriptToken tmp; + + if (expr->token[i].type != JIM_EXPROP_COLON_RIGHT) { + continue; + } + + + if (ExprTernaryGetMoveIndices(expr, i, &prev_right_index, &prev_left_index) == 0) { + continue; + } + + tmp = expr->token[prev_right_index]; + for (j = prev_right_index; j < i; j++) { + expr->token[j] = expr->token[j + 1]; + } + expr->token[i] = tmp; + + JimWideValue(expr->token[prev_left_index-1].objPtr) += (i - prev_right_index); + + + i++; + } +} + +static ExprByteCode *ExprCreateByteCode(Jim_Interp *interp, const ParseTokenList *tokenlist, Jim_Obj *fileNameObj) +{ + Jim_Stack stack; + ExprByteCode *expr; + int ok = 1; + int i; + int prevtt = JIM_TT_NONE; + int have_ternary = 0; + + + int count = tokenlist->count - 1; + + expr = Jim_Alloc(sizeof(*expr)); + expr->inUse = 1; + expr->len = 0; + + Jim_InitStack(&stack); + + for (i = 0; i < tokenlist->count; i++) { + ParseToken *t = &tokenlist->list[i]; + const struct Jim_ExprOperator *op = JimExprOperatorInfoByOpcode(t->type); + + if (op->lazy == LAZY_OP) { + count += 2; + + if (t->type == JIM_EXPROP_TERNARY) { + have_ternary = 1; + } + } + } + + expr->token = Jim_Alloc(sizeof(ScriptToken) * count); + + for (i = 0; i < tokenlist->count && ok; i++) { + ParseToken *t = &tokenlist->list[i]; + + + 
struct ScriptToken *token = &expr->token[expr->len]; + + if (t->type == JIM_TT_EOL) { + break; + } + + switch (t->type) { + case JIM_TT_STR: + case JIM_TT_ESC: + case JIM_TT_VAR: + case JIM_TT_DICTSUGAR: + case JIM_TT_EXPRSUGAR: + case JIM_TT_CMD: + token->type = t->type; +strexpr: + token->objPtr = Jim_NewStringObj(interp, t->token, t->len); + if (t->type == JIM_TT_CMD) { + + JimSetSourceInfo(interp, token->objPtr, fileNameObj, t->line); + } + expr->len++; + break; + + case JIM_TT_EXPR_INT: + case JIM_TT_EXPR_DOUBLE: + { + char *endptr; + if (t->type == JIM_TT_EXPR_INT) { + token->objPtr = Jim_NewIntObj(interp, jim_strtoull(t->token, &endptr)); + } + else { + token->objPtr = Jim_NewDoubleObj(interp, strtod(t->token, &endptr)); + } + if (endptr != t->token + t->len) { + + Jim_FreeNewObj(interp, token->objPtr); + token->type = JIM_TT_STR; + goto strexpr; + } + token->type = t->type; + expr->len++; + } + break; + + case JIM_TT_SUBEXPR_START: + Jim_StackPush(&stack, t); + prevtt = JIM_TT_NONE; + continue; + + case JIM_TT_SUBEXPR_COMMA: + + continue; + + case JIM_TT_SUBEXPR_END: + ok = 0; + while (Jim_StackLen(&stack)) { + ParseToken *tt = Jim_StackPop(&stack); + + if (tt->type == JIM_TT_SUBEXPR_START) { + ok = 1; + break; + } + + if (ExprAddOperator(interp, expr, tt) != JIM_OK) { + goto err; + } + } + if (!ok) { + Jim_SetResultString(interp, "Unexpected close parenthesis", -1); + goto err; + } + break; + + + default:{ + + const struct Jim_ExprOperator *op; + ParseToken *tt; + + + if (prevtt == JIM_TT_NONE || prevtt >= JIM_TT_EXPR_OP) { + if (t->type == JIM_EXPROP_SUB) { + t->type = JIM_EXPROP_UNARYMINUS; + } + else if (t->type == JIM_EXPROP_ADD) { + t->type = JIM_EXPROP_UNARYPLUS; + } + } + + op = JimExprOperatorInfoByOpcode(t->type); + + + while ((tt = Jim_StackPeek(&stack)) != NULL) { + const struct Jim_ExprOperator *tt_op = + JimExprOperatorInfoByOpcode(tt->type); + + + + if (op->arity != 1 && tt_op->precedence >= op->precedence) { + if (ExprAddOperator(interp, expr, tt) != JIM_OK) { + ok = 0; + goto err; + } + Jim_StackPop(&stack); + } + else { + break; + } + } + Jim_StackPush(&stack, t); + break; + } + } + prevtt = t->type; + } + + + while (Jim_StackLen(&stack)) { + ParseToken *tt = Jim_StackPop(&stack); + + if (tt->type == JIM_TT_SUBEXPR_START) { + ok = 0; + Jim_SetResultString(interp, "Missing close parenthesis", -1); + goto err; + } + if (ExprAddOperator(interp, expr, tt) != JIM_OK) { + ok = 0; + goto err; + } + } + + if (have_ternary) { + ExprTernaryReorderExpression(interp, expr); + } + + err: + + Jim_FreeStack(&stack); + + for (i = 0; i < expr->len; i++) { + Jim_IncrRefCount(expr->token[i].objPtr); + } + + if (!ok) { + ExprFreeByteCode(interp, expr); + return NULL; + } + + return expr; +} + + +static int SetExprFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr) +{ + int exprTextLen; + const char *exprText; + struct JimParserCtx parser; + struct ExprByteCode *expr; + ParseTokenList tokenlist; + int line; + Jim_Obj *fileNameObj; + int rc = JIM_ERR; + + + if (objPtr->typePtr == &sourceObjType) { + fileNameObj = objPtr->internalRep.sourceValue.fileNameObj; + line = objPtr->internalRep.sourceValue.lineNumber; + } + else { + fileNameObj = interp->emptyObj; + line = 1; + } + Jim_IncrRefCount(fileNameObj); + + exprText = Jim_GetString(objPtr, &exprTextLen); + + + ScriptTokenListInit(&tokenlist); + + JimParserInit(&parser, exprText, exprTextLen, line); + while (!parser.eof) { + if (JimParseExpression(&parser) != JIM_OK) { + ScriptTokenListFree(&tokenlist); + invalidexpr: + 
Jim_SetResultFormatted(interp, "syntax error in expression: \"%#s\"", objPtr); + expr = NULL; + goto err; + } + + ScriptAddToken(&tokenlist, parser.tstart, parser.tend - parser.tstart + 1, parser.tt, + parser.tline); + } + +#ifdef DEBUG_SHOW_EXPR_TOKENS + { + int i; + printf("==== Expr Tokens (%s) ====\n", Jim_String(fileNameObj)); + for (i = 0; i < tokenlist.count; i++) { + printf("[%2d]@%d %s '%.*s'\n", i, tokenlist.list[i].line, jim_tt_name(tokenlist.list[i].type), + tokenlist.list[i].len, tokenlist.list[i].token); + } + } +#endif + + if (JimParseCheckMissing(interp, parser.missing.ch) == JIM_ERR) { + ScriptTokenListFree(&tokenlist); + Jim_DecrRefCount(interp, fileNameObj); + return JIM_ERR; + } + + + expr = ExprCreateByteCode(interp, &tokenlist, fileNameObj); + + + ScriptTokenListFree(&tokenlist); + + if (!expr) { + goto err; + } + +#ifdef DEBUG_SHOW_EXPR + { + int i; + + printf("==== Expr ====\n"); + for (i = 0; i < expr->len; i++) { + ScriptToken *t = &expr->token[i]; + + printf("[%2d] %s '%s'\n", i, jim_tt_name(t->type), Jim_String(t->objPtr)); + } + } +#endif + + + if (ExprCheckCorrectness(expr) != JIM_OK) { + ExprFreeByteCode(interp, expr); + goto invalidexpr; + } + + rc = JIM_OK; + + err: + + Jim_DecrRefCount(interp, fileNameObj); + Jim_FreeIntRep(interp, objPtr); + Jim_SetIntRepPtr(objPtr, expr); + objPtr->typePtr = &exprObjType; + return rc; +} + +static ExprByteCode *JimGetExpression(Jim_Interp *interp, Jim_Obj *objPtr) +{ + if (objPtr->typePtr != &exprObjType) { + if (SetExprFromAny(interp, objPtr) != JIM_OK) { + return NULL; + } + } + return (ExprByteCode *) Jim_GetIntRepPtr(objPtr); +} + +#ifdef JIM_OPTIMIZATION +static Jim_Obj *JimExprIntValOrVar(Jim_Interp *interp, const ScriptToken *token) +{ + if (token->type == JIM_TT_EXPR_INT) + return token->objPtr; + else if (token->type == JIM_TT_VAR) + return Jim_GetVariable(interp, token->objPtr, JIM_NONE); + else if (token->type == JIM_TT_DICTSUGAR) + return JimExpandDictSugar(interp, token->objPtr); + else + return NULL; +} +#endif + +#define JIM_EE_STATICSTACK_LEN 10 + +int Jim_EvalExpression(Jim_Interp *interp, Jim_Obj *exprObjPtr, Jim_Obj **exprResultPtrPtr) +{ + ExprByteCode *expr; + Jim_Obj *staticStack[JIM_EE_STATICSTACK_LEN]; + int i; + int retcode = JIM_OK; + struct JimExprState e; + + expr = JimGetExpression(interp, exprObjPtr); + if (!expr) { + return JIM_ERR; + } + +#ifdef JIM_OPTIMIZATION + { + Jim_Obj *objPtr; + + + switch (expr->len) { + case 1: + objPtr = JimExprIntValOrVar(interp, &expr->token[0]); + if (objPtr) { + Jim_IncrRefCount(objPtr); + *exprResultPtrPtr = objPtr; + return JIM_OK; + } + break; + + case 2: + if (expr->token[1].type == JIM_EXPROP_NOT) { + objPtr = JimExprIntValOrVar(interp, &expr->token[0]); + + if (objPtr && JimIsWide(objPtr)) { + *exprResultPtrPtr = JimWideValue(objPtr) ? 
interp->falseObj : interp->trueObj; + Jim_IncrRefCount(*exprResultPtrPtr); + return JIM_OK; + } + } + break; + + case 3: + objPtr = JimExprIntValOrVar(interp, &expr->token[0]); + if (objPtr && JimIsWide(objPtr)) { + Jim_Obj *objPtr2 = JimExprIntValOrVar(interp, &expr->token[1]); + if (objPtr2 && JimIsWide(objPtr2)) { + jim_wide wideValueA = JimWideValue(objPtr); + jim_wide wideValueB = JimWideValue(objPtr2); + int cmpRes; + switch (expr->token[2].type) { + case JIM_EXPROP_LT: + cmpRes = wideValueA < wideValueB; + break; + case JIM_EXPROP_LTE: + cmpRes = wideValueA <= wideValueB; + break; + case JIM_EXPROP_GT: + cmpRes = wideValueA > wideValueB; + break; + case JIM_EXPROP_GTE: + cmpRes = wideValueA >= wideValueB; + break; + case JIM_EXPROP_NUMEQ: + cmpRes = wideValueA == wideValueB; + break; + case JIM_EXPROP_NUMNE: + cmpRes = wideValueA != wideValueB; + break; + default: + goto noopt; + } + *exprResultPtrPtr = cmpRes ? interp->trueObj : interp->falseObj; + Jim_IncrRefCount(*exprResultPtrPtr); + return JIM_OK; + } + } + break; + } + } +noopt: +#endif + + expr->inUse++; + + + + if (expr->len > JIM_EE_STATICSTACK_LEN) + e.stack = Jim_Alloc(sizeof(Jim_Obj *) * expr->len); + else + e.stack = staticStack; + + e.stacklen = 0; + + + for (i = 0; i < expr->len && retcode == JIM_OK; i++) { + Jim_Obj *objPtr; + + switch (expr->token[i].type) { + case JIM_TT_EXPR_INT: + case JIM_TT_EXPR_DOUBLE: + case JIM_TT_STR: + ExprPush(&e, expr->token[i].objPtr); + break; + + case JIM_TT_VAR: + objPtr = Jim_GetVariable(interp, expr->token[i].objPtr, JIM_ERRMSG); + if (objPtr) { + ExprPush(&e, objPtr); + } + else { + retcode = JIM_ERR; + } + break; + + case JIM_TT_DICTSUGAR: + objPtr = JimExpandDictSugar(interp, expr->token[i].objPtr); + if (objPtr) { + ExprPush(&e, objPtr); + } + else { + retcode = JIM_ERR; + } + break; + + case JIM_TT_ESC: + retcode = Jim_SubstObj(interp, expr->token[i].objPtr, &objPtr, JIM_NONE); + if (retcode == JIM_OK) { + ExprPush(&e, objPtr); + } + break; + + case JIM_TT_CMD: + retcode = Jim_EvalObj(interp, expr->token[i].objPtr); + if (retcode == JIM_OK) { + ExprPush(&e, Jim_GetResult(interp)); + } + break; + + default:{ + + e.skip = 0; + e.opcode = expr->token[i].type; + + retcode = JimExprOperatorInfoByOpcode(e.opcode)->funcop(interp, &e); + + i += e.skip; + continue; + } + } + } + + expr->inUse--; + + if (retcode == JIM_OK) { + *exprResultPtrPtr = ExprPop(&e); + } + else { + for (i = 0; i < e.stacklen; i++) { + Jim_DecrRefCount(interp, e.stack[i]); + } + } + if (e.stack != staticStack) { + Jim_Free(e.stack); + } + return retcode; +} + +int Jim_GetBoolFromExpr(Jim_Interp *interp, Jim_Obj *exprObjPtr, int *boolPtr) +{ + int retcode; + jim_wide wideValue; + double doubleValue; + Jim_Obj *exprResultPtr; + + retcode = Jim_EvalExpression(interp, exprObjPtr, &exprResultPtr); + if (retcode != JIM_OK) + return retcode; + + if (JimGetWideNoErr(interp, exprResultPtr, &wideValue) != JIM_OK) { + if (Jim_GetDouble(interp, exprResultPtr, &doubleValue) != JIM_OK) { + Jim_DecrRefCount(interp, exprResultPtr); + return JIM_ERR; + } + else { + Jim_DecrRefCount(interp, exprResultPtr); + *boolPtr = doubleValue != 0; + return JIM_OK; + } + } + *boolPtr = wideValue != 0; + + Jim_DecrRefCount(interp, exprResultPtr); + return JIM_OK; +} + + + + +typedef struct ScanFmtPartDescr +{ + char *arg; + char *prefix; + size_t width; + int pos; + char type; + char modifier; +} ScanFmtPartDescr; + + +typedef struct ScanFmtStringObj +{ + jim_wide size; + char *stringRep; + size_t count; + size_t convCount; + size_t maxPos; + 
const char *error; + char *scratch; + ScanFmtPartDescr descr[1]; +} ScanFmtStringObj; + + +static void FreeScanFmtInternalRep(Jim_Interp *interp, Jim_Obj *objPtr); +static void DupScanFmtInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr); +static void UpdateStringOfScanFmt(Jim_Obj *objPtr); + +static const Jim_ObjType scanFmtStringObjType = { + "scanformatstring", + FreeScanFmtInternalRep, + DupScanFmtInternalRep, + UpdateStringOfScanFmt, + JIM_TYPE_NONE, +}; + +void FreeScanFmtInternalRep(Jim_Interp *interp, Jim_Obj *objPtr) +{ + JIM_NOTUSED(interp); + Jim_Free((char *)objPtr->internalRep.ptr); + objPtr->internalRep.ptr = 0; +} + +void DupScanFmtInternalRep(Jim_Interp *interp, Jim_Obj *srcPtr, Jim_Obj *dupPtr) +{ + size_t size = (size_t) ((ScanFmtStringObj *) srcPtr->internalRep.ptr)->size; + ScanFmtStringObj *newVec = (ScanFmtStringObj *) Jim_Alloc(size); + + JIM_NOTUSED(interp); + memcpy(newVec, srcPtr->internalRep.ptr, size); + dupPtr->internalRep.ptr = newVec; + dupPtr->typePtr = &scanFmtStringObjType; +} + +static void UpdateStringOfScanFmt(Jim_Obj *objPtr) +{ + JimSetStringBytes(objPtr, ((ScanFmtStringObj *) objPtr->internalRep.ptr)->stringRep); +} + + +static int SetScanFmtFromAny(Jim_Interp *interp, Jim_Obj *objPtr) +{ + ScanFmtStringObj *fmtObj; + char *buffer; + int maxCount, i, approxSize, lastPos = -1; + const char *fmt = objPtr->bytes; + int maxFmtLen = objPtr->length; + const char *fmtEnd = fmt + maxFmtLen; + int curr; + + Jim_FreeIntRep(interp, objPtr); + + for (i = 0, maxCount = 0; i < maxFmtLen; ++i) + if (fmt[i] == '%') + ++maxCount; + + approxSize = sizeof(ScanFmtStringObj) + +(maxCount + 1) * sizeof(ScanFmtPartDescr) + +maxFmtLen * sizeof(char) + 3 + 1 + + maxFmtLen * sizeof(char) + 1 + + maxFmtLen * sizeof(char) + +(maxCount + 1) * sizeof(char) + +1; + fmtObj = (ScanFmtStringObj *) Jim_Alloc(approxSize); + memset(fmtObj, 0, approxSize); + fmtObj->size = approxSize; + fmtObj->maxPos = 0; + fmtObj->scratch = (char *)&fmtObj->descr[maxCount + 1]; + fmtObj->stringRep = fmtObj->scratch + maxFmtLen + 3 + 1; + memcpy(fmtObj->stringRep, fmt, maxFmtLen); + buffer = fmtObj->stringRep + maxFmtLen + 1; + objPtr->internalRep.ptr = fmtObj; + objPtr->typePtr = &scanFmtStringObjType; + for (i = 0, curr = 0; fmt < fmtEnd; ++fmt) { + int width = 0, skip; + ScanFmtPartDescr *descr = &fmtObj->descr[curr]; + + fmtObj->count++; + descr->width = 0; + + if (*fmt != '%' || fmt[1] == '%') { + descr->type = 0; + descr->prefix = &buffer[i]; + for (; fmt < fmtEnd; ++fmt) { + if (*fmt == '%') { + if (fmt[1] != '%') + break; + ++fmt; + } + buffer[i++] = *fmt; + } + buffer[i++] = 0; + } + + ++fmt; + + if (fmt >= fmtEnd) + goto done; + descr->pos = 0; + if (*fmt == '*') { + descr->pos = -1; + ++fmt; + } + else + fmtObj->convCount++; + + if (sscanf(fmt, "%d%n", &width, &skip) == 1) { + fmt += skip; + + if (descr->pos != -1 && *fmt == '$') { + int prev; + + ++fmt; + descr->pos = width; + width = 0; + + if ((lastPos == 0 && descr->pos > 0) + || (lastPos > 0 && descr->pos == 0)) { + fmtObj->error = "cannot mix \"%\" and \"%n$\" conversion specifiers"; + return JIM_ERR; + } + + for (prev = 0; prev < curr; ++prev) { + if (fmtObj->descr[prev].pos == -1) + continue; + if (fmtObj->descr[prev].pos == descr->pos) { + fmtObj->error = + "variable is assigned by multiple \"%n$\" conversion specifiers"; + return JIM_ERR; + } + } + + if (sscanf(fmt, "%d%n", &width, &skip) == 1) { + descr->width = width; + fmt += skip; + } + if (descr->pos > 0 && (size_t) descr->pos > fmtObj->maxPos) + fmtObj->maxPos = 
descr->pos; + } + else { + + descr->width = width; + } + } + + if (lastPos == -1) + lastPos = descr->pos; + + if (*fmt == '[') { + int swapped = 1, beg = i, end, j; + + descr->type = '['; + descr->arg = &buffer[i]; + ++fmt; + if (*fmt == '^') + buffer[i++] = *fmt++; + if (*fmt == ']') + buffer[i++] = *fmt++; + while (*fmt && *fmt != ']') + buffer[i++] = *fmt++; + if (*fmt != ']') { + fmtObj->error = "unmatched [ in format string"; + return JIM_ERR; + } + end = i; + buffer[i++] = 0; + + while (swapped) { + swapped = 0; + for (j = beg + 1; j < end - 1; ++j) { + if (buffer[j] == '-' && buffer[j - 1] > buffer[j + 1]) { + char tmp = buffer[j - 1]; + + buffer[j - 1] = buffer[j + 1]; + buffer[j + 1] = tmp; + swapped = 1; + } + } + } + } + else { + + if (strchr("hlL", *fmt) != 0) + descr->modifier = tolower((int)*fmt++); + + descr->type = *fmt; + if (strchr("efgcsndoxui", *fmt) == 0) { + fmtObj->error = "bad scan conversion character"; + return JIM_ERR; + } + else if (*fmt == 'c' && descr->width != 0) { + fmtObj->error = "field width may not be specified in %c " "conversion"; + return JIM_ERR; + } + else if (*fmt == 'u' && descr->modifier == 'l') { + fmtObj->error = "unsigned wide not supported"; + return JIM_ERR; + } + } + curr++; + } + done: + return JIM_OK; +} + + + +#define FormatGetCnvCount(_fo_) \ + ((ScanFmtStringObj*)((_fo_)->internalRep.ptr))->convCount +#define FormatGetMaxPos(_fo_) \ + ((ScanFmtStringObj*)((_fo_)->internalRep.ptr))->maxPos +#define FormatGetError(_fo_) \ + ((ScanFmtStringObj*)((_fo_)->internalRep.ptr))->error + +static Jim_Obj *JimScanAString(Jim_Interp *interp, const char *sdescr, const char *str) +{ + char *buffer = Jim_StrDup(str); + char *p = buffer; + + while (*str) { + int c; + int n; + + if (!sdescr && isspace(UCHAR(*str))) + break; + + n = utf8_tounicode(str, &c); + if (sdescr && !JimCharsetMatch(sdescr, c, JIM_CHARSET_SCAN)) + break; + while (n--) + *p++ = *str++; + } + *p = 0; + return Jim_NewStringObjNoAlloc(interp, buffer, p - buffer); +} + + +static int ScanOneEntry(Jim_Interp *interp, const char *str, int pos, int strLen, + ScanFmtStringObj * fmtObj, long idx, Jim_Obj **valObjPtr) +{ + const char *tok; + const ScanFmtPartDescr *descr = &fmtObj->descr[idx]; + size_t scanned = 0; + size_t anchor = pos; + int i; + Jim_Obj *tmpObj = NULL; + + + *valObjPtr = 0; + if (descr->prefix) { + for (i = 0; pos < strLen && descr->prefix[i]; ++i) { + + if (isspace(UCHAR(descr->prefix[i]))) + while (pos < strLen && isspace(UCHAR(str[pos]))) + ++pos; + else if (descr->prefix[i] != str[pos]) + break; + else + ++pos; + } + if (pos >= strLen) { + return -1; + } + else if (descr->prefix[i] != 0) + return 0; + } + + if (descr->type != 'c' && descr->type != '[' && descr->type != 'n') + while (isspace(UCHAR(str[pos]))) + ++pos; + + scanned = pos - anchor; + + + if (descr->type == 'n') { + + *valObjPtr = Jim_NewIntObj(interp, anchor + scanned); + } + else if (pos >= strLen) { + + return -1; + } + else if (descr->type == 'c') { + int c; + scanned += utf8_tounicode(&str[pos], &c); + *valObjPtr = Jim_NewIntObj(interp, c); + return scanned; + } + else { + + if (descr->width > 0) { + size_t sLen = utf8_strlen(&str[pos], strLen - pos); + size_t tLen = descr->width > sLen ? sLen : descr->width; + + tmpObj = Jim_NewStringObjUtf8(interp, str + pos, tLen); + tok = tmpObj->bytes; + } + else { + + tok = &str[pos]; + } + switch (descr->type) { + case 'd': + case 'o': + case 'x': + case 'u': + case 'i':{ + char *endp; + jim_wide w; + + int base = descr->type == 'o' ? 8 + : descr->type == 'x' ? 
16 : descr->type == 'i' ? 0 : 10; + + + if (base == 0) { + w = jim_strtoull(tok, &endp); + } + else { + w = strtoull(tok, &endp, base); + } + + if (endp != tok) { + + *valObjPtr = Jim_NewIntObj(interp, w); + + + scanned += endp - tok; + } + else { + scanned = *tok ? 0 : -1; + } + break; + } + case 's': + case '[':{ + *valObjPtr = JimScanAString(interp, descr->arg, tok); + scanned += Jim_Length(*valObjPtr); + break; + } + case 'e': + case 'f': + case 'g':{ + char *endp; + double value = strtod(tok, &endp); + + if (endp != tok) { + + *valObjPtr = Jim_NewDoubleObj(interp, value); + + scanned += endp - tok; + } + else { + scanned = *tok ? 0 : -1; + } + break; + } + } + if (tmpObj) { + Jim_FreeNewObj(interp, tmpObj); + } + } + return scanned; +} + + +Jim_Obj *Jim_ScanString(Jim_Interp *interp, Jim_Obj *strObjPtr, Jim_Obj *fmtObjPtr, int flags) +{ + size_t i, pos; + int scanned = 1; + const char *str = Jim_String(strObjPtr); + int strLen = Jim_Utf8Length(interp, strObjPtr); + Jim_Obj *resultList = 0; + Jim_Obj **resultVec = 0; + int resultc; + Jim_Obj *emptyStr = 0; + ScanFmtStringObj *fmtObj; + + + JimPanic((fmtObjPtr->typePtr != &scanFmtStringObjType, "Jim_ScanString() for non-scan format")); + + fmtObj = (ScanFmtStringObj *) fmtObjPtr->internalRep.ptr; + + if (fmtObj->error != 0) { + if (flags & JIM_ERRMSG) + Jim_SetResultString(interp, fmtObj->error, -1); + return 0; + } + + emptyStr = Jim_NewEmptyStringObj(interp); + Jim_IncrRefCount(emptyStr); + + resultList = Jim_NewListObj(interp, NULL, 0); + if (fmtObj->maxPos > 0) { + for (i = 0; i < fmtObj->maxPos; ++i) + Jim_ListAppendElement(interp, resultList, emptyStr); + JimListGetElements(interp, resultList, &resultc, &resultVec); + } + + for (i = 0, pos = 0; i < fmtObj->count; ++i) { + ScanFmtPartDescr *descr = &(fmtObj->descr[i]); + Jim_Obj *value = 0; + + + if (descr->type == 0) + continue; + + if (scanned > 0) + scanned = ScanOneEntry(interp, str, pos, strLen, fmtObj, i, &value); + + if (scanned == -1 && i == 0) + goto eof; + + pos += scanned; + + + if (value == 0) + value = Jim_NewEmptyStringObj(interp); + + if (descr->pos == -1) { + Jim_FreeNewObj(interp, value); + } + else if (descr->pos == 0) + + Jim_ListAppendElement(interp, resultList, value); + else if (resultVec[descr->pos - 1] == emptyStr) { + + Jim_DecrRefCount(interp, resultVec[descr->pos - 1]); + Jim_IncrRefCount(value); + resultVec[descr->pos - 1] = value; + } + else { + + Jim_FreeNewObj(interp, value); + goto err; + } + } + Jim_DecrRefCount(interp, emptyStr); + return resultList; + eof: + Jim_DecrRefCount(interp, emptyStr); + Jim_FreeNewObj(interp, resultList); + return (Jim_Obj *)EOF; + err: + Jim_DecrRefCount(interp, emptyStr); + Jim_FreeNewObj(interp, resultList); + return 0; +} + + +static void JimPrngInit(Jim_Interp *interp) +{ +#define PRNG_SEED_SIZE 256 + int i; + unsigned int *seed; + time_t t = time(NULL); + + interp->prngState = Jim_Alloc(sizeof(Jim_PrngState)); + + seed = Jim_Alloc(PRNG_SEED_SIZE * sizeof(*seed)); + for (i = 0; i < PRNG_SEED_SIZE; i++) { + seed[i] = (rand() ^ t ^ clock()); + } + JimPrngSeed(interp, (unsigned char *)seed, PRNG_SEED_SIZE * sizeof(*seed)); + Jim_Free(seed); +} + + +static void JimRandomBytes(Jim_Interp *interp, void *dest, unsigned int len) +{ + Jim_PrngState *prng; + unsigned char *destByte = (unsigned char *)dest; + unsigned int si, sj, x; + + + if (interp->prngState == NULL) + JimPrngInit(interp); + prng = interp->prngState; + + for (x = 0; x < len; x++) { + prng->i = (prng->i + 1) & 0xff; + si = prng->sbox[prng->i]; + prng->j = 
(prng->j + si) & 0xff; + sj = prng->sbox[prng->j]; + prng->sbox[prng->i] = sj; + prng->sbox[prng->j] = si; + *destByte++ = prng->sbox[(si + sj) & 0xff]; + } +} + + +static void JimPrngSeed(Jim_Interp *interp, unsigned char *seed, int seedLen) +{ + int i; + Jim_PrngState *prng; + + + if (interp->prngState == NULL) + JimPrngInit(interp); + prng = interp->prngState; + + + for (i = 0; i < 256; i++) + prng->sbox[i] = i; + + for (i = 0; i < seedLen; i++) { + unsigned char t; + + t = prng->sbox[i & 0xFF]; + prng->sbox[i & 0xFF] = prng->sbox[seed[i]]; + prng->sbox[seed[i]] = t; + } + prng->i = prng->j = 0; + + for (i = 0; i < 256; i += seedLen) { + JimRandomBytes(interp, seed, seedLen); + } +} + + +static int Jim_IncrCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + jim_wide wideValue, increment = 1; + Jim_Obj *intObjPtr; + + if (argc != 2 && argc != 3) { + Jim_WrongNumArgs(interp, 1, argv, "varName ?increment?"); + return JIM_ERR; + } + if (argc == 3) { + if (Jim_GetWide(interp, argv[2], &increment) != JIM_OK) + return JIM_ERR; + } + intObjPtr = Jim_GetVariable(interp, argv[1], JIM_UNSHARED); + if (!intObjPtr) { + + wideValue = 0; + } + else if (Jim_GetWide(interp, intObjPtr, &wideValue) != JIM_OK) { + return JIM_ERR; + } + if (!intObjPtr || Jim_IsShared(intObjPtr)) { + intObjPtr = Jim_NewIntObj(interp, wideValue + increment); + if (Jim_SetVariable(interp, argv[1], intObjPtr) != JIM_OK) { + Jim_FreeNewObj(interp, intObjPtr); + return JIM_ERR; + } + } + else { + + Jim_InvalidateStringRep(intObjPtr); + JimWideValue(intObjPtr) = wideValue + increment; + + if (argv[1]->typePtr != &variableObjType) { + + Jim_SetVariable(interp, argv[1], intObjPtr); + } + } + Jim_SetResult(interp, intObjPtr); + return JIM_OK; +} + + +#define JIM_EVAL_SARGV_LEN 8 +#define JIM_EVAL_SINTV_LEN 8 + + +static int JimUnknown(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int retcode; + + if (interp->unknown_called > 50) { + return JIM_ERR; + } + + + + if (Jim_GetCommand(interp, interp->unknown, JIM_NONE) == NULL) + return JIM_ERR; + + interp->unknown_called++; + + retcode = Jim_EvalObjPrefix(interp, interp->unknown, argc, argv); + interp->unknown_called--; + + return retcode; +} + +static int JimInvokeCommand(Jim_Interp *interp, int objc, Jim_Obj *const *objv) +{ + int retcode; + Jim_Cmd *cmdPtr; + +#if 0 + printf("invoke"); + int j; + for (j = 0; j < objc; j++) { + printf(" '%s'", Jim_String(objv[j])); + } + printf("\n"); +#endif + + if (interp->framePtr->tailcallCmd) { + + cmdPtr = interp->framePtr->tailcallCmd; + interp->framePtr->tailcallCmd = NULL; + } + else { + cmdPtr = Jim_GetCommand(interp, objv[0], JIM_ERRMSG); + if (cmdPtr == NULL) { + return JimUnknown(interp, objc, objv); + } + JimIncrCmdRefCount(cmdPtr); + } + + if (interp->evalDepth == interp->maxEvalDepth) { + Jim_SetResultString(interp, "Infinite eval recursion", -1); + retcode = JIM_ERR; + goto out; + } + interp->evalDepth++; + + + Jim_SetEmptyResult(interp); + if (cmdPtr->isproc) { + retcode = JimCallProcedure(interp, cmdPtr, objc, objv); + } + else { + interp->cmdPrivData = cmdPtr->u.native.privData; + retcode = cmdPtr->u.native.cmdProc(interp, objc, objv); + } + interp->evalDepth--; + +out: + JimDecrCmdRefCount(interp, cmdPtr); + + return retcode; +} + +int Jim_EvalObjVector(Jim_Interp *interp, int objc, Jim_Obj *const *objv) +{ + int i, retcode; + + + for (i = 0; i < objc; i++) + Jim_IncrRefCount(objv[i]); + + retcode = JimInvokeCommand(interp, objc, objv); + + + for (i = 0; i < objc; i++) + Jim_DecrRefCount(interp, objv[i]); 
+ + return retcode; +} + +int Jim_EvalObjPrefix(Jim_Interp *interp, Jim_Obj *prefix, int objc, Jim_Obj *const *objv) +{ + int ret; + Jim_Obj **nargv = Jim_Alloc((objc + 1) * sizeof(*nargv)); + + nargv[0] = prefix; + memcpy(&nargv[1], &objv[0], sizeof(nargv[0]) * objc); + ret = Jim_EvalObjVector(interp, objc + 1, nargv); + Jim_Free(nargv); + return ret; +} + +static void JimAddErrorToStack(Jim_Interp *interp, ScriptObj *script) +{ + if (!interp->errorFlag) { + + interp->errorFlag = 1; + Jim_IncrRefCount(script->fileNameObj); + Jim_DecrRefCount(interp, interp->errorFileNameObj); + interp->errorFileNameObj = script->fileNameObj; + interp->errorLine = script->linenr; + + JimResetStackTrace(interp); + + interp->addStackTrace++; + } + + + if (interp->addStackTrace > 0) { + + + JimAppendStackTrace(interp, Jim_String(interp->errorProc), script->fileNameObj, script->linenr); + + if (Jim_Length(script->fileNameObj)) { + interp->addStackTrace = 0; + } + + Jim_DecrRefCount(interp, interp->errorProc); + interp->errorProc = interp->emptyObj; + Jim_IncrRefCount(interp->errorProc); + } +} + +static int JimSubstOneToken(Jim_Interp *interp, const ScriptToken *token, Jim_Obj **objPtrPtr) +{ + Jim_Obj *objPtr; + + switch (token->type) { + case JIM_TT_STR: + case JIM_TT_ESC: + objPtr = token->objPtr; + break; + case JIM_TT_VAR: + objPtr = Jim_GetVariable(interp, token->objPtr, JIM_ERRMSG); + break; + case JIM_TT_DICTSUGAR: + objPtr = JimExpandDictSugar(interp, token->objPtr); + break; + case JIM_TT_EXPRSUGAR: + objPtr = JimExpandExprSugar(interp, token->objPtr); + break; + case JIM_TT_CMD: + switch (Jim_EvalObj(interp, token->objPtr)) { + case JIM_OK: + case JIM_RETURN: + objPtr = interp->result; + break; + case JIM_BREAK: + + return JIM_BREAK; + case JIM_CONTINUE: + + return JIM_CONTINUE; + default: + return JIM_ERR; + } + break; + default: + JimPanic((1, + "default token type (%d) reached " "in Jim_SubstObj().", token->type)); + objPtr = NULL; + break; + } + if (objPtr) { + *objPtrPtr = objPtr; + return JIM_OK; + } + return JIM_ERR; +} + +static Jim_Obj *JimInterpolateTokens(Jim_Interp *interp, const ScriptToken * token, int tokens, int flags) +{ + int totlen = 0, i; + Jim_Obj **intv; + Jim_Obj *sintv[JIM_EVAL_SINTV_LEN]; + Jim_Obj *objPtr; + char *s; + + if (tokens <= JIM_EVAL_SINTV_LEN) + intv = sintv; + else + intv = Jim_Alloc(sizeof(Jim_Obj *) * tokens); + + for (i = 0; i < tokens; i++) { + switch (JimSubstOneToken(interp, &token[i], &intv[i])) { + case JIM_OK: + case JIM_RETURN: + break; + case JIM_BREAK: + if (flags & JIM_SUBST_FLAG) { + + tokens = i; + continue; + } + + + case JIM_CONTINUE: + if (flags & JIM_SUBST_FLAG) { + intv[i] = NULL; + continue; + } + + + default: + while (i--) { + Jim_DecrRefCount(interp, intv[i]); + } + if (intv != sintv) { + Jim_Free(intv); + } + return NULL; + } + Jim_IncrRefCount(intv[i]); + Jim_String(intv[i]); + totlen += intv[i]->length; + } + + + if (tokens == 1 && intv[0] && intv == sintv) { + Jim_DecrRefCount(interp, intv[0]); + return intv[0]; + } + + objPtr = Jim_NewStringObjNoAlloc(interp, NULL, 0); + + if (tokens == 4 && token[0].type == JIM_TT_ESC && token[1].type == JIM_TT_ESC + && token[2].type == JIM_TT_VAR) { + + objPtr->typePtr = &interpolatedObjType; + objPtr->internalRep.dictSubstValue.varNameObjPtr = token[0].objPtr; + objPtr->internalRep.dictSubstValue.indexObjPtr = intv[2]; + Jim_IncrRefCount(intv[2]); + } + else if (tokens && intv[0] && intv[0]->typePtr == &sourceObjType) { + + JimSetSourceInfo(interp, objPtr, 
intv[0]->internalRep.sourceValue.fileNameObj, intv[0]->internalRep.sourceValue.lineNumber); + } + + + s = objPtr->bytes = Jim_Alloc(totlen + 1); + objPtr->length = totlen; + for (i = 0; i < tokens; i++) { + if (intv[i]) { + memcpy(s, intv[i]->bytes, intv[i]->length); + s += intv[i]->length; + Jim_DecrRefCount(interp, intv[i]); + } + } + objPtr->bytes[totlen] = '\0'; + + if (intv != sintv) { + Jim_Free(intv); + } + + return objPtr; +} + + +static int JimEvalObjList(Jim_Interp *interp, Jim_Obj *listPtr) +{ + int retcode = JIM_OK; + + JimPanic((Jim_IsList(listPtr) == 0, "JimEvalObjList() invoked on non-list.")); + + if (listPtr->internalRep.listValue.len) { + Jim_IncrRefCount(listPtr); + retcode = JimInvokeCommand(interp, + listPtr->internalRep.listValue.len, + listPtr->internalRep.listValue.ele); + Jim_DecrRefCount(interp, listPtr); + } + return retcode; +} + +int Jim_EvalObjList(Jim_Interp *interp, Jim_Obj *listPtr) +{ + SetListFromAny(interp, listPtr); + return JimEvalObjList(interp, listPtr); +} + +int Jim_EvalObj(Jim_Interp *interp, Jim_Obj *scriptObjPtr) +{ + int i; + ScriptObj *script; + ScriptToken *token; + int retcode = JIM_OK; + Jim_Obj *sargv[JIM_EVAL_SARGV_LEN], **argv = NULL; + Jim_Obj *prevScriptObj; + + if (Jim_IsList(scriptObjPtr) && scriptObjPtr->bytes == NULL) { + return JimEvalObjList(interp, scriptObjPtr); + } + + Jim_IncrRefCount(scriptObjPtr); + script = JimGetScript(interp, scriptObjPtr); + if (!JimScriptValid(interp, script)) { + Jim_DecrRefCount(interp, scriptObjPtr); + return JIM_ERR; + } + + Jim_SetEmptyResult(interp); + + token = script->token; + +#ifdef JIM_OPTIMIZATION + if (script->len == 0) { + Jim_DecrRefCount(interp, scriptObjPtr); + return JIM_OK; + } + if (script->len == 3 + && token[1].objPtr->typePtr == &commandObjType + && token[1].objPtr->internalRep.cmdValue.cmdPtr->isproc == 0 + && token[1].objPtr->internalRep.cmdValue.cmdPtr->u.native.cmdProc == Jim_IncrCoreCommand + && token[2].objPtr->typePtr == &variableObjType) { + + Jim_Obj *objPtr = Jim_GetVariable(interp, token[2].objPtr, JIM_NONE); + + if (objPtr && !Jim_IsShared(objPtr) && objPtr->typePtr == &intObjType) { + JimWideValue(objPtr)++; + Jim_InvalidateStringRep(objPtr); + Jim_DecrRefCount(interp, scriptObjPtr); + Jim_SetResult(interp, objPtr); + return JIM_OK; + } + } +#endif + + script->inUse++; + + + prevScriptObj = interp->currentScriptObj; + interp->currentScriptObj = scriptObjPtr; + + interp->errorFlag = 0; + argv = sargv; + + for (i = 0; i < script->len && retcode == JIM_OK; ) { + int argc; + int j; + + + argc = token[i].objPtr->internalRep.scriptLineValue.argc; + script->linenr = token[i].objPtr->internalRep.scriptLineValue.line; + + + if (argc > JIM_EVAL_SARGV_LEN) + argv = Jim_Alloc(sizeof(Jim_Obj *) * argc); + + + i++; + + for (j = 0; j < argc; j++) { + long wordtokens = 1; + int expand = 0; + Jim_Obj *wordObjPtr = NULL; + + if (token[i].type == JIM_TT_WORD) { + wordtokens = JimWideValue(token[i++].objPtr); + if (wordtokens < 0) { + expand = 1; + wordtokens = -wordtokens; + } + } + + if (wordtokens == 1) { + + switch (token[i].type) { + case JIM_TT_ESC: + case JIM_TT_STR: + wordObjPtr = token[i].objPtr; + break; + case JIM_TT_VAR: + wordObjPtr = Jim_GetVariable(interp, token[i].objPtr, JIM_ERRMSG); + break; + case JIM_TT_EXPRSUGAR: + wordObjPtr = JimExpandExprSugar(interp, token[i].objPtr); + break; + case JIM_TT_DICTSUGAR: + wordObjPtr = JimExpandDictSugar(interp, token[i].objPtr); + break; + case JIM_TT_CMD: + retcode = Jim_EvalObj(interp, token[i].objPtr); + if (retcode == JIM_OK) 
{ + wordObjPtr = Jim_GetResult(interp); + } + break; + default: + JimPanic((1, "default token type reached " "in Jim_EvalObj().")); + } + } + else { + wordObjPtr = JimInterpolateTokens(interp, token + i, wordtokens, JIM_NONE); + } + + if (!wordObjPtr) { + if (retcode == JIM_OK) { + retcode = JIM_ERR; + } + break; + } + + Jim_IncrRefCount(wordObjPtr); + i += wordtokens; + + if (!expand) { + argv[j] = wordObjPtr; + } + else { + + int len = Jim_ListLength(interp, wordObjPtr); + int newargc = argc + len - 1; + int k; + + if (len > 1) { + if (argv == sargv) { + if (newargc > JIM_EVAL_SARGV_LEN) { + argv = Jim_Alloc(sizeof(*argv) * newargc); + memcpy(argv, sargv, sizeof(*argv) * j); + } + } + else { + + argv = Jim_Realloc(argv, sizeof(*argv) * newargc); + } + } + + + for (k = 0; k < len; k++) { + argv[j++] = wordObjPtr->internalRep.listValue.ele[k]; + Jim_IncrRefCount(wordObjPtr->internalRep.listValue.ele[k]); + } + + Jim_DecrRefCount(interp, wordObjPtr); + + + j--; + argc += len - 1; + } + } + + if (retcode == JIM_OK && argc) { + + retcode = JimInvokeCommand(interp, argc, argv); + + if (Jim_CheckSignal(interp)) { + retcode = JIM_SIGNAL; + } + } + + + while (j-- > 0) { + Jim_DecrRefCount(interp, argv[j]); + } + + if (argv != sargv) { + Jim_Free(argv); + argv = sargv; + } + } + + + if (retcode == JIM_ERR) { + JimAddErrorToStack(interp, script); + } + + else if (retcode != JIM_RETURN || interp->returnCode != JIM_ERR) { + + interp->addStackTrace = 0; + } + + + interp->currentScriptObj = prevScriptObj; + + Jim_FreeIntRep(interp, scriptObjPtr); + scriptObjPtr->typePtr = &scriptObjType; + Jim_SetIntRepPtr(scriptObjPtr, script); + Jim_DecrRefCount(interp, scriptObjPtr); + + return retcode; +} + +static int JimSetProcArg(Jim_Interp *interp, Jim_Obj *argNameObj, Jim_Obj *argValObj) +{ + int retcode; + + const char *varname = Jim_String(argNameObj); + if (*varname == '&') { + + Jim_Obj *objPtr; + Jim_CallFrame *savedCallFrame = interp->framePtr; + + interp->framePtr = interp->framePtr->parent; + objPtr = Jim_GetVariable(interp, argValObj, JIM_ERRMSG); + interp->framePtr = savedCallFrame; + if (!objPtr) { + return JIM_ERR; + } + + + objPtr = Jim_NewStringObj(interp, varname + 1, -1); + Jim_IncrRefCount(objPtr); + retcode = Jim_SetVariableLink(interp, objPtr, argValObj, interp->framePtr->parent); + Jim_DecrRefCount(interp, objPtr); + } + else { + retcode = Jim_SetVariable(interp, argNameObj, argValObj); + } + return retcode; +} + +static void JimSetProcWrongArgs(Jim_Interp *interp, Jim_Obj *procNameObj, Jim_Cmd *cmd) +{ + + Jim_Obj *argmsg = Jim_NewStringObj(interp, "", 0); + int i; + + for (i = 0; i < cmd->u.proc.argListLen; i++) { + Jim_AppendString(interp, argmsg, " ", 1); + + if (i == cmd->u.proc.argsPos) { + if (cmd->u.proc.arglist[i].defaultObjPtr) { + + Jim_AppendString(interp, argmsg, "?", 1); + Jim_AppendObj(interp, argmsg, cmd->u.proc.arglist[i].defaultObjPtr); + Jim_AppendString(interp, argmsg, " ...?", -1); + } + else { + + Jim_AppendString(interp, argmsg, "?arg...?", -1); + } + } + else { + if (cmd->u.proc.arglist[i].defaultObjPtr) { + Jim_AppendString(interp, argmsg, "?", 1); + Jim_AppendObj(interp, argmsg, cmd->u.proc.arglist[i].nameObjPtr); + Jim_AppendString(interp, argmsg, "?", 1); + } + else { + const char *arg = Jim_String(cmd->u.proc.arglist[i].nameObjPtr); + if (*arg == '&') { + arg++; + } + Jim_AppendString(interp, argmsg, arg, -1); + } + } + } + Jim_SetResultFormatted(interp, "wrong # args: should be \"%#s%#s\"", procNameObj, argmsg); + Jim_FreeNewObj(interp, argmsg); +} + +#ifdef 
jim_ext_namespace +int Jim_EvalNamespace(Jim_Interp *interp, Jim_Obj *scriptObj, Jim_Obj *nsObj) +{ + Jim_CallFrame *callFramePtr; + int retcode; + + + callFramePtr = JimCreateCallFrame(interp, interp->framePtr, nsObj); + callFramePtr->argv = &interp->emptyObj; + callFramePtr->argc = 0; + callFramePtr->procArgsObjPtr = NULL; + callFramePtr->procBodyObjPtr = scriptObj; + callFramePtr->staticVars = NULL; + callFramePtr->fileNameObj = interp->emptyObj; + callFramePtr->line = 0; + Jim_IncrRefCount(scriptObj); + interp->framePtr = callFramePtr; + + + if (interp->framePtr->level == interp->maxCallFrameDepth) { + Jim_SetResultString(interp, "Too many nested calls. Infinite recursion?", -1); + retcode = JIM_ERR; + } + else { + + retcode = Jim_EvalObj(interp, scriptObj); + } + + + interp->framePtr = interp->framePtr->parent; + JimFreeCallFrame(interp, callFramePtr, JIM_FCF_REUSE); + + return retcode; +} +#endif + +static int JimCallProcedure(Jim_Interp *interp, Jim_Cmd *cmd, int argc, Jim_Obj *const *argv) +{ + Jim_CallFrame *callFramePtr; + int i, d, retcode, optargs; + ScriptObj *script; + + + if (argc - 1 < cmd->u.proc.reqArity || + (cmd->u.proc.argsPos < 0 && argc - 1 > cmd->u.proc.reqArity + cmd->u.proc.optArity)) { + JimSetProcWrongArgs(interp, argv[0], cmd); + return JIM_ERR; + } + + if (Jim_Length(cmd->u.proc.bodyObjPtr) == 0) { + + return JIM_OK; + } + + + if (interp->framePtr->level == interp->maxCallFrameDepth) { + Jim_SetResultString(interp, "Too many nested calls. Infinite recursion?", -1); + return JIM_ERR; + } + + + callFramePtr = JimCreateCallFrame(interp, interp->framePtr, cmd->u.proc.nsObj); + callFramePtr->argv = argv; + callFramePtr->argc = argc; + callFramePtr->procArgsObjPtr = cmd->u.proc.argListObjPtr; + callFramePtr->procBodyObjPtr = cmd->u.proc.bodyObjPtr; + callFramePtr->staticVars = cmd->u.proc.staticVars; + + + script = JimGetScript(interp, interp->currentScriptObj); + callFramePtr->fileNameObj = script->fileNameObj; + callFramePtr->line = script->linenr; + + Jim_IncrRefCount(cmd->u.proc.argListObjPtr); + Jim_IncrRefCount(cmd->u.proc.bodyObjPtr); + interp->framePtr = callFramePtr; + + + optargs = (argc - 1 - cmd->u.proc.reqArity); + + + i = 1; + for (d = 0; d < cmd->u.proc.argListLen; d++) { + Jim_Obj *nameObjPtr = cmd->u.proc.arglist[d].nameObjPtr; + if (d == cmd->u.proc.argsPos) { + + Jim_Obj *listObjPtr; + int argsLen = 0; + if (cmd->u.proc.reqArity + cmd->u.proc.optArity < argc - 1) { + argsLen = argc - 1 - (cmd->u.proc.reqArity + cmd->u.proc.optArity); + } + listObjPtr = Jim_NewListObj(interp, &argv[i], argsLen); + + + if (cmd->u.proc.arglist[d].defaultObjPtr) { + nameObjPtr =cmd->u.proc.arglist[d].defaultObjPtr; + } + retcode = Jim_SetVariable(interp, nameObjPtr, listObjPtr); + if (retcode != JIM_OK) { + goto badargset; + } + + i += argsLen; + continue; + } + + + if (cmd->u.proc.arglist[d].defaultObjPtr == NULL || optargs-- > 0) { + retcode = JimSetProcArg(interp, nameObjPtr, argv[i++]); + } + else { + + retcode = Jim_SetVariable(interp, nameObjPtr, cmd->u.proc.arglist[d].defaultObjPtr); + } + if (retcode != JIM_OK) { + goto badargset; + } + } + + + retcode = Jim_EvalObj(interp, cmd->u.proc.bodyObjPtr); + +badargset: + + + interp->framePtr = interp->framePtr->parent; + JimFreeCallFrame(interp, callFramePtr, JIM_FCF_REUSE); + + if (interp->framePtr->tailcallObj) { + + if (interp->framePtr->tailcall++ == 0) { + + do { + Jim_Obj *tailcallObj = interp->framePtr->tailcallObj; + + interp->framePtr->tailcallObj = NULL; + + if (retcode == JIM_EVAL) { + retcode = 
Jim_EvalObjList(interp, tailcallObj); + if (retcode == JIM_RETURN) { + interp->returnLevel++; + } + } + Jim_DecrRefCount(interp, tailcallObj); + } while (interp->framePtr->tailcallObj); + + + if (interp->framePtr->tailcallCmd) { + JimDecrCmdRefCount(interp, interp->framePtr->tailcallCmd); + interp->framePtr->tailcallCmd = NULL; + } + } + interp->framePtr->tailcall--; + } + + + if (retcode == JIM_RETURN) { + if (--interp->returnLevel <= 0) { + retcode = interp->returnCode; + interp->returnCode = JIM_OK; + interp->returnLevel = 0; + } + } + else if (retcode == JIM_ERR) { + interp->addStackTrace++; + Jim_DecrRefCount(interp, interp->errorProc); + interp->errorProc = argv[0]; + Jim_IncrRefCount(interp->errorProc); + } + + return retcode; +} + +int Jim_EvalSource(Jim_Interp *interp, const char *filename, int lineno, const char *script) +{ + int retval; + Jim_Obj *scriptObjPtr; + + scriptObjPtr = Jim_NewStringObj(interp, script, -1); + Jim_IncrRefCount(scriptObjPtr); + + if (filename) { + Jim_Obj *prevScriptObj; + + JimSetSourceInfo(interp, scriptObjPtr, Jim_NewStringObj(interp, filename, -1), lineno); + + prevScriptObj = interp->currentScriptObj; + interp->currentScriptObj = scriptObjPtr; + + retval = Jim_EvalObj(interp, scriptObjPtr); + + interp->currentScriptObj = prevScriptObj; + } + else { + retval = Jim_EvalObj(interp, scriptObjPtr); + } + Jim_DecrRefCount(interp, scriptObjPtr); + return retval; +} + +int Jim_Eval(Jim_Interp *interp, const char *script) +{ + return Jim_EvalObj(interp, Jim_NewStringObj(interp, script, -1)); +} + + +int Jim_EvalGlobal(Jim_Interp *interp, const char *script) +{ + int retval; + Jim_CallFrame *savedFramePtr = interp->framePtr; + + interp->framePtr = interp->topFramePtr; + retval = Jim_Eval(interp, script); + interp->framePtr = savedFramePtr; + + return retval; +} + +int Jim_EvalFileGlobal(Jim_Interp *interp, const char *filename) +{ + int retval; + Jim_CallFrame *savedFramePtr = interp->framePtr; + + interp->framePtr = interp->topFramePtr; + retval = Jim_EvalFile(interp, filename); + interp->framePtr = savedFramePtr; + + return retval; +} + +#include <sys/stat.h> + +int Jim_EvalFile(Jim_Interp *interp, const char *filename) +{ + FILE *fp; + char *buf; + Jim_Obj *scriptObjPtr; + Jim_Obj *prevScriptObj; + struct stat sb; + int retcode; + int readlen; + + if (stat(filename, &sb) != 0 || (fp = fopen(filename, "rt")) == NULL) { + Jim_SetResultFormatted(interp, "couldn't read file \"%s\": %s", filename, strerror(errno)); + return JIM_ERR; + } + if (sb.st_size == 0) { + fclose(fp); + return JIM_OK; + } + + buf = Jim_Alloc(sb.st_size + 1); + readlen = fread(buf, 1, sb.st_size, fp); + if (ferror(fp)) { + fclose(fp); + Jim_Free(buf); + Jim_SetResultFormatted(interp, "failed to load file \"%s\": %s", filename, strerror(errno)); + return JIM_ERR; + } + fclose(fp); + buf[readlen] = 0; + + scriptObjPtr = Jim_NewStringObjNoAlloc(interp, buf, readlen); + JimSetSourceInfo(interp, scriptObjPtr, Jim_NewStringObj(interp, filename, -1), 1); + Jim_IncrRefCount(scriptObjPtr); + + prevScriptObj = interp->currentScriptObj; + interp->currentScriptObj = scriptObjPtr; + + retcode = Jim_EvalObj(interp, scriptObjPtr); + + + if (retcode == JIM_RETURN) { + if (--interp->returnLevel <= 0) { + retcode = interp->returnCode; + interp->returnCode = JIM_OK; + interp->returnLevel = 0; + } + } + if (retcode == JIM_ERR) { + + interp->addStackTrace++; + } + + interp->currentScriptObj = prevScriptObj; + + Jim_DecrRefCount(interp, scriptObjPtr); + + return retcode; +} + +static void JimParseSubst(struct JimParserCtx 
*pc, int flags) +{ + pc->tstart = pc->p; + pc->tline = pc->linenr; + + if (pc->len == 0) { + pc->tend = pc->p; + pc->tt = JIM_TT_EOL; + pc->eof = 1; + return; + } + if (*pc->p == '[' && !(flags & JIM_SUBST_NOCMD)) { + JimParseCmd(pc); + return; + } + if (*pc->p == '$' && !(flags & JIM_SUBST_NOVAR)) { + if (JimParseVar(pc) == JIM_OK) { + return; + } + + pc->tstart = pc->p; + flags |= JIM_SUBST_NOVAR; + } + while (pc->len) { + if (*pc->p == '$' && !(flags & JIM_SUBST_NOVAR)) { + break; + } + if (*pc->p == '[' && !(flags & JIM_SUBST_NOCMD)) { + break; + } + if (*pc->p == '\\' && pc->len > 1) { + pc->p++; + pc->len--; + } + pc->p++; + pc->len--; + } + pc->tend = pc->p - 1; + pc->tt = (flags & JIM_SUBST_NOESC) ? JIM_TT_STR : JIM_TT_ESC; +} + + +static int SetSubstFromAny(Jim_Interp *interp, struct Jim_Obj *objPtr, int flags) +{ + int scriptTextLen; + const char *scriptText = Jim_GetString(objPtr, &scriptTextLen); + struct JimParserCtx parser; + struct ScriptObj *script = Jim_Alloc(sizeof(*script)); + ParseTokenList tokenlist; + + + ScriptTokenListInit(&tokenlist); + + JimParserInit(&parser, scriptText, scriptTextLen, 1); + while (1) { + JimParseSubst(&parser, flags); + if (parser.eof) { + + break; + } + ScriptAddToken(&tokenlist, parser.tstart, parser.tend - parser.tstart + 1, parser.tt, + parser.tline); + } + + + script->inUse = 1; + script->substFlags = flags; + script->fileNameObj = interp->emptyObj; + Jim_IncrRefCount(script->fileNameObj); + SubstObjAddTokens(interp, script, &tokenlist); + + + ScriptTokenListFree(&tokenlist); + +#ifdef DEBUG_SHOW_SUBST + { + int i; + + printf("==== Subst ====\n"); + for (i = 0; i < script->len; i++) { + printf("[%2d] %s '%s'\n", i, jim_tt_name(script->token[i].type), + Jim_String(script->token[i].objPtr)); + } + } +#endif + + + Jim_FreeIntRep(interp, objPtr); + Jim_SetIntRepPtr(objPtr, script); + objPtr->typePtr = &scriptObjType; + return JIM_OK; +} + +static ScriptObj *Jim_GetSubst(Jim_Interp *interp, Jim_Obj *objPtr, int flags) +{ + if (objPtr->typePtr != &scriptObjType || ((ScriptObj *)Jim_GetIntRepPtr(objPtr))->substFlags != flags) + SetSubstFromAny(interp, objPtr, flags); + return (ScriptObj *) Jim_GetIntRepPtr(objPtr); +} + +int Jim_SubstObj(Jim_Interp *interp, Jim_Obj *substObjPtr, Jim_Obj **resObjPtrPtr, int flags) +{ + ScriptObj *script = Jim_GetSubst(interp, substObjPtr, flags); + + Jim_IncrRefCount(substObjPtr); + script->inUse++; + + *resObjPtrPtr = JimInterpolateTokens(interp, script->token, script->len, flags); + + script->inUse--; + Jim_DecrRefCount(interp, substObjPtr); + if (*resObjPtrPtr == NULL) { + return JIM_ERR; + } + return JIM_OK; +} + +void Jim_WrongNumArgs(Jim_Interp *interp, int argc, Jim_Obj *const *argv, const char *msg) +{ + Jim_Obj *objPtr; + Jim_Obj *listObjPtr = Jim_NewListObj(interp, argv, argc); + + if (*msg) { + Jim_ListAppendElement(interp, listObjPtr, Jim_NewStringObj(interp, msg, -1)); + } + Jim_IncrRefCount(listObjPtr); + objPtr = Jim_ListJoin(interp, listObjPtr, " ", 1); + Jim_DecrRefCount(interp, listObjPtr); + + Jim_IncrRefCount(objPtr); + Jim_SetResultFormatted(interp, "wrong # args: should be \"%#s\"", objPtr); + Jim_DecrRefCount(interp, objPtr); +} + +typedef void JimHashtableIteratorCallbackType(Jim_Interp *interp, Jim_Obj *listObjPtr, + Jim_HashEntry *he, int type); + +#define JimTrivialMatch(pattern) (strpbrk((pattern), "*[?\\") == NULL) + +static Jim_Obj *JimHashtablePatternMatch(Jim_Interp *interp, Jim_HashTable *ht, Jim_Obj *patternObjPtr, + JimHashtableIteratorCallbackType *callback, int type) +{ + 
Jim_HashEntry *he; + Jim_Obj *listObjPtr = Jim_NewListObj(interp, NULL, 0); + + + if (patternObjPtr && JimTrivialMatch(Jim_String(patternObjPtr))) { + he = Jim_FindHashEntry(ht, Jim_String(patternObjPtr)); + if (he) { + callback(interp, listObjPtr, he, type); + } + } + else { + Jim_HashTableIterator htiter; + JimInitHashTableIterator(ht, &htiter); + while ((he = Jim_NextHashEntry(&htiter)) != NULL) { + if (patternObjPtr == NULL || JimGlobMatch(Jim_String(patternObjPtr), he->key, 0)) { + callback(interp, listObjPtr, he, type); + } + } + } + return listObjPtr; +} + + +#define JIM_CMDLIST_COMMANDS 0 +#define JIM_CMDLIST_PROCS 1 +#define JIM_CMDLIST_CHANNELS 2 + +static void JimCommandMatch(Jim_Interp *interp, Jim_Obj *listObjPtr, + Jim_HashEntry *he, int type) +{ + Jim_Cmd *cmdPtr = Jim_GetHashEntryVal(he); + Jim_Obj *objPtr; + + if (type == JIM_CMDLIST_PROCS && !cmdPtr->isproc) { + + return; + } + + objPtr = Jim_NewStringObj(interp, he->key, -1); + Jim_IncrRefCount(objPtr); + + if (type != JIM_CMDLIST_CHANNELS || Jim_AioFilehandle(interp, objPtr)) { + Jim_ListAppendElement(interp, listObjPtr, objPtr); + } + Jim_DecrRefCount(interp, objPtr); +} + + +static Jim_Obj *JimCommandsList(Jim_Interp *interp, Jim_Obj *patternObjPtr, int type) +{ + return JimHashtablePatternMatch(interp, &interp->commands, patternObjPtr, JimCommandMatch, type); +} + + +#define JIM_VARLIST_GLOBALS 0 +#define JIM_VARLIST_LOCALS 1 +#define JIM_VARLIST_VARS 2 + +#define JIM_VARLIST_VALUES 0x1000 + +static void JimVariablesMatch(Jim_Interp *interp, Jim_Obj *listObjPtr, + Jim_HashEntry *he, int type) +{ + Jim_Var *varPtr = Jim_GetHashEntryVal(he); + + if (type != JIM_VARLIST_LOCALS || varPtr->linkFramePtr == NULL) { + Jim_ListAppendElement(interp, listObjPtr, Jim_NewStringObj(interp, he->key, -1)); + if (type & JIM_VARLIST_VALUES) { + Jim_ListAppendElement(interp, listObjPtr, varPtr->objPtr); + } + } +} + + +static Jim_Obj *JimVariablesList(Jim_Interp *interp, Jim_Obj *patternObjPtr, int mode) +{ + if (mode == JIM_VARLIST_LOCALS && interp->framePtr == interp->topFramePtr) { + return interp->emptyObj; + } + else { + Jim_CallFrame *framePtr = (mode == JIM_VARLIST_GLOBALS) ? interp->topFramePtr : interp->framePtr; + return JimHashtablePatternMatch(interp, &framePtr->vars, patternObjPtr, JimVariablesMatch, mode); + } +} + +static int JimInfoLevel(Jim_Interp *interp, Jim_Obj *levelObjPtr, + Jim_Obj **objPtrPtr, int info_level_cmd) +{ + Jim_CallFrame *targetCallFrame; + + targetCallFrame = JimGetCallFrameByInteger(interp, levelObjPtr); + if (targetCallFrame == NULL) { + return JIM_ERR; + } + + if (targetCallFrame == interp->topFramePtr) { + Jim_SetResultFormatted(interp, "bad level \"%#s\"", levelObjPtr); + return JIM_ERR; + } + if (info_level_cmd) { + *objPtrPtr = Jim_NewListObj(interp, targetCallFrame->argv, targetCallFrame->argc); + } + else { + Jim_Obj *listObj = Jim_NewListObj(interp, NULL, 0); + + Jim_ListAppendElement(interp, listObj, targetCallFrame->argv[0]); + Jim_ListAppendElement(interp, listObj, targetCallFrame->fileNameObj); + Jim_ListAppendElement(interp, listObj, Jim_NewIntObj(interp, targetCallFrame->line)); + *objPtrPtr = listObj; + } + return JIM_OK; +} + + + +static int Jim_PutsCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc != 2 && argc != 3) { + Jim_WrongNumArgs(interp, 1, argv, "?-nonewline? 
string"); + return JIM_ERR; + } + if (argc == 3) { + if (!Jim_CompareStringImmediate(interp, argv[1], "-nonewline")) { + Jim_SetResultString(interp, "The second argument must " "be -nonewline", -1); + return JIM_ERR; + } + else { + fputs(Jim_String(argv[2]), stdout); + } + } + else { + puts(Jim_String(argv[1])); + } + return JIM_OK; +} + + +static int JimAddMulHelper(Jim_Interp *interp, int argc, Jim_Obj *const *argv, int op) +{ + jim_wide wideValue, res; + double doubleValue, doubleRes; + int i; + + res = (op == JIM_EXPROP_ADD) ? 0 : 1; + + for (i = 1; i < argc; i++) { + if (Jim_GetWide(interp, argv[i], &wideValue) != JIM_OK) + goto trydouble; + if (op == JIM_EXPROP_ADD) + res += wideValue; + else + res *= wideValue; + } + Jim_SetResultInt(interp, res); + return JIM_OK; + trydouble: + doubleRes = (double)res; + for (; i < argc; i++) { + if (Jim_GetDouble(interp, argv[i], &doubleValue) != JIM_OK) + return JIM_ERR; + if (op == JIM_EXPROP_ADD) + doubleRes += doubleValue; + else + doubleRes *= doubleValue; + } + Jim_SetResult(interp, Jim_NewDoubleObj(interp, doubleRes)); + return JIM_OK; +} + + +static int JimSubDivHelper(Jim_Interp *interp, int argc, Jim_Obj *const *argv, int op) +{ + jim_wide wideValue, res = 0; + double doubleValue, doubleRes = 0; + int i = 2; + + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "number ?number ... number?"); + return JIM_ERR; + } + else if (argc == 2) { + if (Jim_GetWide(interp, argv[1], &wideValue) != JIM_OK) { + if (Jim_GetDouble(interp, argv[1], &doubleValue) != JIM_OK) { + return JIM_ERR; + } + else { + if (op == JIM_EXPROP_SUB) + doubleRes = -doubleValue; + else + doubleRes = 1.0 / doubleValue; + Jim_SetResult(interp, Jim_NewDoubleObj(interp, doubleRes)); + return JIM_OK; + } + } + if (op == JIM_EXPROP_SUB) { + res = -wideValue; + Jim_SetResultInt(interp, res); + } + else { + doubleRes = 1.0 / wideValue; + Jim_SetResult(interp, Jim_NewDoubleObj(interp, doubleRes)); + } + return JIM_OK; + } + else { + if (Jim_GetWide(interp, argv[1], &res) != JIM_OK) { + if (Jim_GetDouble(interp, argv[1], &doubleRes) + != JIM_OK) { + return JIM_ERR; + } + else { + goto trydouble; + } + } + } + for (i = 2; i < argc; i++) { + if (Jim_GetWide(interp, argv[i], &wideValue) != JIM_OK) { + doubleRes = (double)res; + goto trydouble; + } + if (op == JIM_EXPROP_SUB) + res -= wideValue; + else + res /= wideValue; + } + Jim_SetResultInt(interp, res); + return JIM_OK; + trydouble: + for (; i < argc; i++) { + if (Jim_GetDouble(interp, argv[i], &doubleValue) != JIM_OK) + return JIM_ERR; + if (op == JIM_EXPROP_SUB) + doubleRes -= doubleValue; + else + doubleRes /= doubleValue; + } + Jim_SetResult(interp, Jim_NewDoubleObj(interp, doubleRes)); + return JIM_OK; +} + + + +static int Jim_AddCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + return JimAddMulHelper(interp, argc, argv, JIM_EXPROP_ADD); +} + + +static int Jim_MulCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + return JimAddMulHelper(interp, argc, argv, JIM_EXPROP_MUL); +} + + +static int Jim_SubCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + return JimSubDivHelper(interp, argc, argv, JIM_EXPROP_SUB); +} + + +static int Jim_DivCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + return JimSubDivHelper(interp, argc, argv, JIM_EXPROP_DIV); +} + + +static int Jim_SetCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc != 2 && argc != 3) { + Jim_WrongNumArgs(interp, 1, argv, "varName ?newValue?"); + return JIM_ERR; + } + if (argc 
== 2) { + Jim_Obj *objPtr; + + objPtr = Jim_GetVariable(interp, argv[1], JIM_ERRMSG); + if (!objPtr) + return JIM_ERR; + Jim_SetResult(interp, objPtr); + return JIM_OK; + } + + if (Jim_SetVariable(interp, argv[1], argv[2]) != JIM_OK) + return JIM_ERR; + Jim_SetResult(interp, argv[2]); + return JIM_OK; +} + +static int Jim_UnsetCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int i = 1; + int complain = 1; + + while (i < argc) { + if (Jim_CompareStringImmediate(interp, argv[i], "--")) { + i++; + break; + } + if (Jim_CompareStringImmediate(interp, argv[i], "-nocomplain")) { + complain = 0; + i++; + continue; + } + break; + } + + while (i < argc) { + if (Jim_UnsetVariable(interp, argv[i], complain ? JIM_ERRMSG : JIM_NONE) != JIM_OK + && complain) { + return JIM_ERR; + } + i++; + } + return JIM_OK; +} + + +static int Jim_WhileCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc != 3) { + Jim_WrongNumArgs(interp, 1, argv, "condition body"); + return JIM_ERR; + } + + + while (1) { + int boolean, retval; + + if ((retval = Jim_GetBoolFromExpr(interp, argv[1], &boolean)) != JIM_OK) + return retval; + if (!boolean) + break; + + if ((retval = Jim_EvalObj(interp, argv[2])) != JIM_OK) { + switch (retval) { + case JIM_BREAK: + goto out; + break; + case JIM_CONTINUE: + continue; + break; + default: + return retval; + } + } + } + out: + Jim_SetEmptyResult(interp); + return JIM_OK; +} + + +static int Jim_ForCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int retval; + int boolean = 1; + Jim_Obj *varNamePtr = NULL; + Jim_Obj *stopVarNamePtr = NULL; + + if (argc != 5) { + Jim_WrongNumArgs(interp, 1, argv, "start test next body"); + return JIM_ERR; + } + + + if ((retval = Jim_EvalObj(interp, argv[1])) != JIM_OK) { + return retval; + } + + retval = Jim_GetBoolFromExpr(interp, argv[2], &boolean); + + +#ifdef JIM_OPTIMIZATION + if (retval == JIM_OK && boolean) { + ScriptObj *incrScript; + ExprByteCode *expr; + jim_wide stop, currentVal; + Jim_Obj *objPtr; + int cmpOffset; + + + expr = JimGetExpression(interp, argv[2]); + incrScript = JimGetScript(interp, argv[3]); + + + if (incrScript == NULL || incrScript->len != 3 || !expr || expr->len != 3) { + goto evalstart; + } + + if (incrScript->token[1].type != JIM_TT_ESC || + expr->token[0].type != JIM_TT_VAR || + (expr->token[1].type != JIM_TT_EXPR_INT && expr->token[1].type != JIM_TT_VAR)) { + goto evalstart; + } + + if (expr->token[2].type == JIM_EXPROP_LT) { + cmpOffset = 0; + } + else if (expr->token[2].type == JIM_EXPROP_LTE) { + cmpOffset = 1; + } + else { + goto evalstart; + } + + + if (!Jim_CompareStringImmediate(interp, incrScript->token[1].objPtr, "incr")) { + goto evalstart; + } + + + if (!Jim_StringEqObj(incrScript->token[2].objPtr, expr->token[0].objPtr)) { + goto evalstart; + } + + + if (expr->token[1].type == JIM_TT_EXPR_INT) { + if (Jim_GetWide(interp, expr->token[1].objPtr, &stop) == JIM_ERR) { + goto evalstart; + } + } + else { + stopVarNamePtr = expr->token[1].objPtr; + Jim_IncrRefCount(stopVarNamePtr); + + stop = 0; + } + + + varNamePtr = expr->token[0].objPtr; + Jim_IncrRefCount(varNamePtr); + + objPtr = Jim_GetVariable(interp, varNamePtr, JIM_NONE); + if (objPtr == NULL || Jim_GetWide(interp, objPtr, &currentVal) != JIM_OK) { + goto testcond; + } + + + while (retval == JIM_OK) { + + + + + if (stopVarNamePtr) { + objPtr = Jim_GetVariable(interp, stopVarNamePtr, JIM_NONE); + if (objPtr == NULL || Jim_GetWide(interp, objPtr, &stop) != JIM_OK) { + goto testcond; + } + } + + if (currentVal >= 
stop + cmpOffset) { + break; + } + + + retval = Jim_EvalObj(interp, argv[4]); + if (retval == JIM_OK || retval == JIM_CONTINUE) { + retval = JIM_OK; + + objPtr = Jim_GetVariable(interp, varNamePtr, JIM_ERRMSG); + + + if (objPtr == NULL) { + retval = JIM_ERR; + goto out; + } + if (!Jim_IsShared(objPtr) && objPtr->typePtr == &intObjType) { + currentVal = ++JimWideValue(objPtr); + Jim_InvalidateStringRep(objPtr); + } + else { + if (Jim_GetWide(interp, objPtr, &currentVal) != JIM_OK || + Jim_SetVariable(interp, varNamePtr, Jim_NewIntObj(interp, + ++currentVal)) != JIM_OK) { + goto evalnext; + } + } + } + } + goto out; + } + evalstart: +#endif + + while (boolean && (retval == JIM_OK || retval == JIM_CONTINUE)) { + + retval = Jim_EvalObj(interp, argv[4]); + + if (retval == JIM_OK || retval == JIM_CONTINUE) { + + evalnext: + retval = Jim_EvalObj(interp, argv[3]); + if (retval == JIM_OK || retval == JIM_CONTINUE) { + + testcond: + retval = Jim_GetBoolFromExpr(interp, argv[2], &boolean); + } + } + } + out: + if (stopVarNamePtr) { + Jim_DecrRefCount(interp, stopVarNamePtr); + } + if (varNamePtr) { + Jim_DecrRefCount(interp, varNamePtr); + } + + if (retval == JIM_CONTINUE || retval == JIM_BREAK || retval == JIM_OK) { + Jim_SetEmptyResult(interp); + return JIM_OK; + } + + return retval; +} + + +static int Jim_LoopCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int retval; + jim_wide i; + jim_wide limit; + jim_wide incr = 1; + Jim_Obj *bodyObjPtr; + + if (argc != 5 && argc != 6) { + Jim_WrongNumArgs(interp, 1, argv, "var first limit ?incr? body"); + return JIM_ERR; + } + + if (Jim_GetWide(interp, argv[2], &i) != JIM_OK || + Jim_GetWide(interp, argv[3], &limit) != JIM_OK || + (argc == 6 && Jim_GetWide(interp, argv[4], &incr) != JIM_OK)) { + return JIM_ERR; + } + bodyObjPtr = (argc == 5) ? 
argv[4] : argv[5]; + + retval = Jim_SetVariable(interp, argv[1], argv[2]); + + while (((i < limit && incr > 0) || (i > limit && incr < 0)) && retval == JIM_OK) { + retval = Jim_EvalObj(interp, bodyObjPtr); + if (retval == JIM_OK || retval == JIM_CONTINUE) { + Jim_Obj *objPtr = Jim_GetVariable(interp, argv[1], JIM_ERRMSG); + + retval = JIM_OK; + + + i += incr; + + if (objPtr && !Jim_IsShared(objPtr) && objPtr->typePtr == &intObjType) { + if (argv[1]->typePtr != &variableObjType) { + if (Jim_SetVariable(interp, argv[1], objPtr) != JIM_OK) { + return JIM_ERR; + } + } + JimWideValue(objPtr) = i; + Jim_InvalidateStringRep(objPtr); + + if (argv[1]->typePtr != &variableObjType) { + if (Jim_SetVariable(interp, argv[1], objPtr) != JIM_OK) { + retval = JIM_ERR; + break; + } + } + } + else { + objPtr = Jim_NewIntObj(interp, i); + retval = Jim_SetVariable(interp, argv[1], objPtr); + if (retval != JIM_OK) { + Jim_FreeNewObj(interp, objPtr); + } + } + } + } + + if (retval == JIM_OK || retval == JIM_CONTINUE || retval == JIM_BREAK) { + Jim_SetEmptyResult(interp); + return JIM_OK; + } + return retval; +} + +typedef struct { + Jim_Obj *objPtr; + int idx; +} Jim_ListIter; + +static void JimListIterInit(Jim_ListIter *iter, Jim_Obj *objPtr) +{ + iter->objPtr = objPtr; + iter->idx = 0; +} + +static Jim_Obj *JimListIterNext(Jim_Interp *interp, Jim_ListIter *iter) +{ + if (iter->idx >= Jim_ListLength(interp, iter->objPtr)) { + return NULL; + } + return iter->objPtr->internalRep.listValue.ele[iter->idx++]; +} + +static int JimListIterDone(Jim_Interp *interp, Jim_ListIter *iter) +{ + return iter->idx >= Jim_ListLength(interp, iter->objPtr); +} + + +static int JimForeachMapHelper(Jim_Interp *interp, int argc, Jim_Obj *const *argv, int doMap) +{ + int result = JIM_OK; + int i, numargs; + Jim_ListIter twoiters[2]; + Jim_ListIter *iters; + Jim_Obj *script; + Jim_Obj *resultObj; + + if (argc < 4 || argc % 2 != 0) { + Jim_WrongNumArgs(interp, 1, argv, "varList list ?varList list ...? 
script"); + return JIM_ERR; + } + script = argv[argc - 1]; + numargs = (argc - 1 - 1); + + if (numargs == 2) { + iters = twoiters; + } + else { + iters = Jim_Alloc(numargs * sizeof(*iters)); + } + for (i = 0; i < numargs; i++) { + JimListIterInit(&iters[i], argv[i + 1]); + if (i % 2 == 0 && JimListIterDone(interp, &iters[i])) { + result = JIM_ERR; + } + } + if (result != JIM_OK) { + Jim_SetResultString(interp, "foreach varlist is empty", -1); + return result; + } + + if (doMap) { + resultObj = Jim_NewListObj(interp, NULL, 0); + } + else { + resultObj = interp->emptyObj; + } + Jim_IncrRefCount(resultObj); + + while (1) { + + for (i = 0; i < numargs; i += 2) { + if (!JimListIterDone(interp, &iters[i + 1])) { + break; + } + } + if (i == numargs) { + + break; + } + + + for (i = 0; i < numargs; i += 2) { + Jim_Obj *varName; + + + JimListIterInit(&iters[i], argv[i + 1]); + while ((varName = JimListIterNext(interp, &iters[i])) != NULL) { + Jim_Obj *valObj = JimListIterNext(interp, &iters[i + 1]); + if (!valObj) { + + valObj = interp->emptyObj; + } + + Jim_IncrRefCount(valObj); + result = Jim_SetVariable(interp, varName, valObj); + Jim_DecrRefCount(interp, valObj); + if (result != JIM_OK) { + goto err; + } + } + } + switch (result = Jim_EvalObj(interp, script)) { + case JIM_OK: + if (doMap) { + Jim_ListAppendElement(interp, resultObj, interp->result); + } + break; + case JIM_CONTINUE: + break; + case JIM_BREAK: + goto out; + default: + goto err; + } + } + out: + result = JIM_OK; + Jim_SetResult(interp, resultObj); + err: + Jim_DecrRefCount(interp, resultObj); + if (numargs > 2) { + Jim_Free(iters); + } + return result; +} + + +static int Jim_ForeachCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + return JimForeachMapHelper(interp, argc, argv, 0); +} + + +static int Jim_LmapCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + return JimForeachMapHelper(interp, argc, argv, 1); +} + + +static int Jim_LassignCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int result = JIM_ERR; + int i; + Jim_ListIter iter; + Jim_Obj *resultObj; + + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "varList list ?varName ...?"); + return JIM_ERR; + } + + JimListIterInit(&iter, argv[1]); + + for (i = 2; i < argc; i++) { + Jim_Obj *valObj = JimListIterNext(interp, &iter); + result = Jim_SetVariable(interp, argv[i], valObj ? 
valObj : interp->emptyObj); + if (result != JIM_OK) { + return result; + } + } + + resultObj = Jim_NewListObj(interp, NULL, 0); + while (!JimListIterDone(interp, &iter)) { + Jim_ListAppendElement(interp, resultObj, JimListIterNext(interp, &iter)); + } + + Jim_SetResult(interp, resultObj); + + return JIM_OK; +} + + +static int Jim_IfCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int boolean, retval, current = 1, falsebody = 0; + + if (argc >= 3) { + while (1) { + + if (current >= argc) + goto err; + if ((retval = Jim_GetBoolFromExpr(interp, argv[current++], &boolean)) + != JIM_OK) + return retval; + + if (current >= argc) + goto err; + if (Jim_CompareStringImmediate(interp, argv[current], "then")) + current++; + + if (current >= argc) + goto err; + if (boolean) + return Jim_EvalObj(interp, argv[current]); + + if (++current >= argc) { + Jim_SetResult(interp, Jim_NewEmptyStringObj(interp)); + return JIM_OK; + } + falsebody = current++; + if (Jim_CompareStringImmediate(interp, argv[falsebody], "else")) { + + if (current != argc - 1) + goto err; + return Jim_EvalObj(interp, argv[current]); + } + else if (Jim_CompareStringImmediate(interp, argv[falsebody], "elseif")) + continue; + + else if (falsebody != argc - 1) + goto err; + return Jim_EvalObj(interp, argv[falsebody]); + } + return JIM_OK; + } + err: + Jim_WrongNumArgs(interp, 1, argv, "condition ?then? trueBody ?elseif ...? ?else? falseBody"); + return JIM_ERR; +} + + + +int Jim_CommandMatchObj(Jim_Interp *interp, Jim_Obj *commandObj, Jim_Obj *patternObj, + Jim_Obj *stringObj, int nocase) +{ + Jim_Obj *parms[4]; + int argc = 0; + long eq; + int rc; + + parms[argc++] = commandObj; + if (nocase) { + parms[argc++] = Jim_NewStringObj(interp, "-nocase", -1); + } + parms[argc++] = patternObj; + parms[argc++] = stringObj; + + rc = Jim_EvalObjVector(interp, argc, parms); + + if (rc != JIM_OK || Jim_GetLong(interp, Jim_GetResult(interp), &eq) != JIM_OK) { + eq = -rc; + } + + return eq; +} + +enum +{ SWITCH_EXACT, SWITCH_GLOB, SWITCH_RE, SWITCH_CMD }; + + +static int Jim_SwitchCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int matchOpt = SWITCH_EXACT, opt = 1, patCount, i; + Jim_Obj *command = 0, *const *caseList = 0, *strObj; + Jim_Obj *script = 0; + + if (argc < 3) { + wrongnumargs: + Jim_WrongNumArgs(interp, 1, argv, "?options? string " + "pattern body ... ?default body? 
or " "{pattern body ?pattern body ...?}"); + return JIM_ERR; + } + for (opt = 1; opt < argc; ++opt) { + const char *option = Jim_String(argv[opt]); + + if (*option != '-') + break; + else if (strncmp(option, "--", 2) == 0) { + ++opt; + break; + } + else if (strncmp(option, "-exact", 2) == 0) + matchOpt = SWITCH_EXACT; + else if (strncmp(option, "-glob", 2) == 0) + matchOpt = SWITCH_GLOB; + else if (strncmp(option, "-regexp", 2) == 0) + matchOpt = SWITCH_RE; + else if (strncmp(option, "-command", 2) == 0) { + matchOpt = SWITCH_CMD; + if ((argc - opt) < 2) + goto wrongnumargs; + command = argv[++opt]; + } + else { + Jim_SetResultFormatted(interp, + "bad option \"%#s\": must be -exact, -glob, -regexp, -command procname or --", + argv[opt]); + return JIM_ERR; + } + if ((argc - opt) < 2) + goto wrongnumargs; + } + strObj = argv[opt++]; + patCount = argc - opt; + if (patCount == 1) { + Jim_Obj **vector; + + JimListGetElements(interp, argv[opt], &patCount, &vector); + caseList = vector; + } + else + caseList = &argv[opt]; + if (patCount == 0 || patCount % 2 != 0) + goto wrongnumargs; + for (i = 0; script == 0 && i < patCount; i += 2) { + Jim_Obj *patObj = caseList[i]; + + if (!Jim_CompareStringImmediate(interp, patObj, "default") + || i < (patCount - 2)) { + switch (matchOpt) { + case SWITCH_EXACT: + if (Jim_StringEqObj(strObj, patObj)) + script = caseList[i + 1]; + break; + case SWITCH_GLOB: + if (Jim_StringMatchObj(interp, patObj, strObj, 0)) + script = caseList[i + 1]; + break; + case SWITCH_RE: + command = Jim_NewStringObj(interp, "regexp", -1); + + case SWITCH_CMD:{ + int rc = Jim_CommandMatchObj(interp, command, patObj, strObj, 0); + + if (argc - opt == 1) { + Jim_Obj **vector; + + JimListGetElements(interp, argv[opt], &patCount, &vector); + caseList = vector; + } + + if (rc < 0) { + return -rc; + } + if (rc) + script = caseList[i + 1]; + break; + } + } + } + else { + script = caseList[i + 1]; + } + } + for (; i < patCount && Jim_CompareStringImmediate(interp, script, "-"); i += 2) + script = caseList[i + 1]; + if (script && Jim_CompareStringImmediate(interp, script, "-")) { + Jim_SetResultFormatted(interp, "no body specified for pattern \"%#s\"", caseList[i - 2]); + return JIM_ERR; + } + Jim_SetEmptyResult(interp); + if (script) { + return Jim_EvalObj(interp, script); + } + return JIM_OK; +} + + +static int Jim_ListCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *listObjPtr; + + listObjPtr = Jim_NewListObj(interp, argv + 1, argc - 1); + Jim_SetResult(interp, listObjPtr); + return JIM_OK; +} + + +static int Jim_LindexCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *objPtr, *listObjPtr; + int i; + int idx; + + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "list ?index ...?"); + return JIM_ERR; + } + objPtr = argv[1]; + Jim_IncrRefCount(objPtr); + for (i = 2; i < argc; i++) { + listObjPtr = objPtr; + if (Jim_GetIndex(interp, argv[i], &idx) != JIM_OK) { + Jim_DecrRefCount(interp, listObjPtr); + return JIM_ERR; + } + if (Jim_ListIndex(interp, listObjPtr, idx, &objPtr, JIM_NONE) != JIM_OK) { + Jim_DecrRefCount(interp, listObjPtr); + Jim_SetEmptyResult(interp); + return JIM_OK; + } + Jim_IncrRefCount(objPtr); + Jim_DecrRefCount(interp, listObjPtr); + } + Jim_SetResult(interp, objPtr); + Jim_DecrRefCount(interp, objPtr); + return JIM_OK; +} + + +static int Jim_LlengthCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc != 2) { + Jim_WrongNumArgs(interp, 1, argv, "list"); + return JIM_ERR; + } + 
Jim_SetResultInt(interp, Jim_ListLength(interp, argv[1])); + return JIM_OK; +} + + +static int Jim_LsearchCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + static const char * const options[] = { + "-bool", "-not", "-nocase", "-exact", "-glob", "-regexp", "-all", "-inline", "-command", + NULL + }; + enum + { OPT_BOOL, OPT_NOT, OPT_NOCASE, OPT_EXACT, OPT_GLOB, OPT_REGEXP, OPT_ALL, OPT_INLINE, + OPT_COMMAND }; + int i; + int opt_bool = 0; + int opt_not = 0; + int opt_nocase = 0; + int opt_all = 0; + int opt_inline = 0; + int opt_match = OPT_EXACT; + int listlen; + int rc = JIM_OK; + Jim_Obj *listObjPtr = NULL; + Jim_Obj *commandObj = NULL; + + if (argc < 3) { + wrongargs: + Jim_WrongNumArgs(interp, 1, argv, + "?-exact|-glob|-regexp|-command 'command'? ?-bool|-inline? ?-not? ?-nocase? ?-all? list value"); + return JIM_ERR; + } + + for (i = 1; i < argc - 2; i++) { + int option; + + if (Jim_GetEnum(interp, argv[i], options, &option, NULL, JIM_ERRMSG) != JIM_OK) { + return JIM_ERR; + } + switch (option) { + case OPT_BOOL: + opt_bool = 1; + opt_inline = 0; + break; + case OPT_NOT: + opt_not = 1; + break; + case OPT_NOCASE: + opt_nocase = 1; + break; + case OPT_INLINE: + opt_inline = 1; + opt_bool = 0; + break; + case OPT_ALL: + opt_all = 1; + break; + case OPT_COMMAND: + if (i >= argc - 2) { + goto wrongargs; + } + commandObj = argv[++i]; + + case OPT_EXACT: + case OPT_GLOB: + case OPT_REGEXP: + opt_match = option; + break; + } + } + + argv += i; + + if (opt_all) { + listObjPtr = Jim_NewListObj(interp, NULL, 0); + } + if (opt_match == OPT_REGEXP) { + commandObj = Jim_NewStringObj(interp, "regexp", -1); + } + if (commandObj) { + Jim_IncrRefCount(commandObj); + } + + listlen = Jim_ListLength(interp, argv[0]); + for (i = 0; i < listlen; i++) { + int eq = 0; + Jim_Obj *objPtr = Jim_ListGetIndex(interp, argv[0], i); + + switch (opt_match) { + case OPT_EXACT: + eq = Jim_StringCompareObj(interp, argv[1], objPtr, opt_nocase) == 0; + break; + + case OPT_GLOB: + eq = Jim_StringMatchObj(interp, argv[1], objPtr, opt_nocase); + break; + + case OPT_REGEXP: + case OPT_COMMAND: + eq = Jim_CommandMatchObj(interp, commandObj, argv[1], objPtr, opt_nocase); + if (eq < 0) { + if (listObjPtr) { + Jim_FreeNewObj(interp, listObjPtr); + } + rc = JIM_ERR; + goto done; + } + break; + } + + + if (!eq && opt_bool && opt_not && !opt_all) { + continue; + } + + if ((!opt_bool && eq == !opt_not) || (opt_bool && (eq || opt_all))) { + + Jim_Obj *resultObj; + + if (opt_bool) { + resultObj = Jim_NewIntObj(interp, eq ^ opt_not); + } + else if (!opt_inline) { + resultObj = Jim_NewIntObj(interp, i); + } + else { + resultObj = objPtr; + } + + if (opt_all) { + Jim_ListAppendElement(interp, listObjPtr, resultObj); + } + else { + Jim_SetResult(interp, resultObj); + goto done; + } + } + } + + if (opt_all) { + Jim_SetResult(interp, listObjPtr); + } + else { + + if (opt_bool) { + Jim_SetResultBool(interp, opt_not); + } + else if (!opt_inline) { + Jim_SetResultInt(interp, -1); + } + } + + done: + if (commandObj) { + Jim_DecrRefCount(interp, commandObj); + } + return rc; +} + + +static int Jim_LappendCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *listObjPtr; + int shared, i; + + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "varName ?value value ...?"); + return JIM_ERR; + } + listObjPtr = Jim_GetVariable(interp, argv[1], JIM_UNSHARED); + if (!listObjPtr) { + + listObjPtr = Jim_NewListObj(interp, NULL, 0); + if (Jim_SetVariable(interp, argv[1], listObjPtr) != JIM_OK) { + 
Jim_FreeNewObj(interp, listObjPtr); + return JIM_ERR; + } + } + shared = Jim_IsShared(listObjPtr); + if (shared) + listObjPtr = Jim_DuplicateObj(interp, listObjPtr); + for (i = 2; i < argc; i++) + Jim_ListAppendElement(interp, listObjPtr, argv[i]); + if (Jim_SetVariable(interp, argv[1], listObjPtr) != JIM_OK) { + if (shared) + Jim_FreeNewObj(interp, listObjPtr); + return JIM_ERR; + } + Jim_SetResult(interp, listObjPtr); + return JIM_OK; +} + + +static int Jim_LinsertCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int idx, len; + Jim_Obj *listPtr; + + if (argc < 3) { + Jim_WrongNumArgs(interp, 1, argv, "list index ?element ...?"); + return JIM_ERR; + } + listPtr = argv[1]; + if (Jim_IsShared(listPtr)) + listPtr = Jim_DuplicateObj(interp, listPtr); + if (Jim_GetIndex(interp, argv[2], &idx) != JIM_OK) + goto err; + len = Jim_ListLength(interp, listPtr); + if (idx >= len) + idx = len; + else if (idx < 0) + idx = len + idx + 1; + Jim_ListInsertElements(interp, listPtr, idx, argc - 3, &argv[3]); + Jim_SetResult(interp, listPtr); + return JIM_OK; + err: + if (listPtr != argv[1]) { + Jim_FreeNewObj(interp, listPtr); + } + return JIM_ERR; +} + + +static int Jim_LreplaceCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int first, last, len, rangeLen; + Jim_Obj *listObj; + Jim_Obj *newListObj; + + if (argc < 4) { + Jim_WrongNumArgs(interp, 1, argv, "list first last ?element ...?"); + return JIM_ERR; + } + if (Jim_GetIndex(interp, argv[2], &first) != JIM_OK || + Jim_GetIndex(interp, argv[3], &last) != JIM_OK) { + return JIM_ERR; + } + + listObj = argv[1]; + len = Jim_ListLength(interp, listObj); + + first = JimRelToAbsIndex(len, first); + last = JimRelToAbsIndex(len, last); + JimRelToAbsRange(len, &first, &last, &rangeLen); + + + + if (first < len) { + + } + else if (len == 0) { + + first = 0; + } + else { + Jim_SetResultString(interp, "list doesn't contain element ", -1); + Jim_AppendObj(interp, Jim_GetResult(interp), argv[2]); + return JIM_ERR; + } + + + newListObj = Jim_NewListObj(interp, listObj->internalRep.listValue.ele, first); + + + ListInsertElements(newListObj, -1, argc - 4, argv + 4); + + + ListInsertElements(newListObj, -1, len - first - rangeLen, listObj->internalRep.listValue.ele + first + rangeLen); + + Jim_SetResult(interp, newListObj); + return JIM_OK; +} + + +static int Jim_LsetCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc < 3) { + Jim_WrongNumArgs(interp, 1, argv, "listVar ?index...? newVal"); + return JIM_ERR; + } + else if (argc == 3) { + + if (Jim_SetVariable(interp, argv[1], argv[2]) != JIM_OK) + return JIM_ERR; + Jim_SetResult(interp, argv[2]); + return JIM_OK; + } + return Jim_ListSetIndex(interp, argv[1], argv + 2, argc - 3, argv[argc - 1]); +} + + +static int Jim_LsortCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const argv[]) +{ + static const char * const options[] = { + "-ascii", "-nocase", "-increasing", "-decreasing", "-command", "-integer", "-real", "-index", "-unique", NULL + }; + enum + { OPT_ASCII, OPT_NOCASE, OPT_INCREASING, OPT_DECREASING, OPT_COMMAND, OPT_INTEGER, OPT_REAL, OPT_INDEX, OPT_UNIQUE }; + Jim_Obj *resObj; + int i; + int retCode; + + struct lsort_info info; + + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "?options? 
list"); + return JIM_ERR; + } + + info.type = JIM_LSORT_ASCII; + info.order = 1; + info.indexed = 0; + info.unique = 0; + info.command = NULL; + info.interp = interp; + + for (i = 1; i < (argc - 1); i++) { + int option; + + if (Jim_GetEnum(interp, argv[i], options, &option, NULL, JIM_ENUM_ABBREV | JIM_ERRMSG) + != JIM_OK) + return JIM_ERR; + switch (option) { + case OPT_ASCII: + info.type = JIM_LSORT_ASCII; + break; + case OPT_NOCASE: + info.type = JIM_LSORT_NOCASE; + break; + case OPT_INTEGER: + info.type = JIM_LSORT_INTEGER; + break; + case OPT_REAL: + info.type = JIM_LSORT_REAL; + break; + case OPT_INCREASING: + info.order = 1; + break; + case OPT_DECREASING: + info.order = -1; + break; + case OPT_UNIQUE: + info.unique = 1; + break; + case OPT_COMMAND: + if (i >= (argc - 2)) { + Jim_SetResultString(interp, "\"-command\" option must be followed by comparison command", -1); + return JIM_ERR; + } + info.type = JIM_LSORT_COMMAND; + info.command = argv[i + 1]; + i++; + break; + case OPT_INDEX: + if (i >= (argc - 2)) { + Jim_SetResultString(interp, "\"-index\" option must be followed by list index", -1); + return JIM_ERR; + } + if (Jim_GetIndex(interp, argv[i + 1], &info.index) != JIM_OK) { + return JIM_ERR; + } + info.indexed = 1; + i++; + break; + } + } + resObj = Jim_DuplicateObj(interp, argv[argc - 1]); + retCode = ListSortElements(interp, resObj, &info); + if (retCode == JIM_OK) { + Jim_SetResult(interp, resObj); + } + else { + Jim_FreeNewObj(interp, resObj); + } + return retCode; +} + + +static int Jim_AppendCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *stringObjPtr; + int i; + + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "varName ?value ...?"); + return JIM_ERR; + } + if (argc == 2) { + stringObjPtr = Jim_GetVariable(interp, argv[1], JIM_ERRMSG); + if (!stringObjPtr) + return JIM_ERR; + } + else { + int freeobj = 0; + stringObjPtr = Jim_GetVariable(interp, argv[1], JIM_UNSHARED); + if (!stringObjPtr) { + + stringObjPtr = Jim_NewEmptyStringObj(interp); + freeobj = 1; + } + else if (Jim_IsShared(stringObjPtr)) { + freeobj = 1; + stringObjPtr = Jim_DuplicateObj(interp, stringObjPtr); + } + for (i = 2; i < argc; i++) { + Jim_AppendObj(interp, stringObjPtr, argv[i]); + } + if (Jim_SetVariable(interp, argv[1], stringObjPtr) != JIM_OK) { + if (freeobj) { + Jim_FreeNewObj(interp, stringObjPtr); + } + return JIM_ERR; + } + } + Jim_SetResult(interp, stringObjPtr); + return JIM_OK; +} + + +static int Jim_DebugCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ +#if !defined(JIM_DEBUG_COMMAND) + Jim_SetResultString(interp, "unsupported", -1); + return JIM_ERR; +#endif +} + + +static int Jim_EvalCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int rc; + + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "arg ?arg ...?"); + return JIM_ERR; + } + + if (argc == 2) { + rc = Jim_EvalObj(interp, argv[1]); + } + else { + rc = Jim_EvalObj(interp, Jim_ConcatObj(interp, argc - 1, argv + 1)); + } + + if (rc == JIM_ERR) { + + interp->addStackTrace++; + } + return rc; +} + + +static int Jim_UplevelCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc >= 2) { + int retcode; + Jim_CallFrame *savedCallFrame, *targetCallFrame; + int savedTailcall; + const char *str; + + + savedCallFrame = interp->framePtr; + + + str = Jim_String(argv[1]); + if ((str[0] >= '0' && str[0] <= '9') || str[0] == '#') { + targetCallFrame = Jim_GetCallFrameByLevel(interp, argv[1]); + argc--; + argv++; + } + else { + targetCallFrame = 
Jim_GetCallFrameByLevel(interp, NULL); + } + if (targetCallFrame == NULL) { + return JIM_ERR; + } + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv - 1, "?level? command ?arg ...?"); + return JIM_ERR; + } + + interp->framePtr = targetCallFrame; + + savedTailcall = interp->framePtr->tailcall; + interp->framePtr->tailcall = 0; + if (argc == 2) { + retcode = Jim_EvalObj(interp, argv[1]); + } + else { + retcode = Jim_EvalObj(interp, Jim_ConcatObj(interp, argc - 1, argv + 1)); + } + interp->framePtr->tailcall = savedTailcall; + interp->framePtr = savedCallFrame; + return retcode; + } + else { + Jim_WrongNumArgs(interp, 1, argv, "?level? command ?arg ...?"); + return JIM_ERR; + } +} + + +static int Jim_ExprCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *exprResultPtr; + int retcode; + + if (argc == 2) { + retcode = Jim_EvalExpression(interp, argv[1], &exprResultPtr); + } + else if (argc > 2) { + Jim_Obj *objPtr; + + objPtr = Jim_ConcatObj(interp, argc - 1, argv + 1); + Jim_IncrRefCount(objPtr); + retcode = Jim_EvalExpression(interp, objPtr, &exprResultPtr); + Jim_DecrRefCount(interp, objPtr); + } + else { + Jim_WrongNumArgs(interp, 1, argv, "expression ?...?"); + return JIM_ERR; + } + if (retcode != JIM_OK) + return retcode; + Jim_SetResult(interp, exprResultPtr); + Jim_DecrRefCount(interp, exprResultPtr); + return JIM_OK; +} + + +static int Jim_BreakCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc != 1) { + Jim_WrongNumArgs(interp, 1, argv, ""); + return JIM_ERR; + } + return JIM_BREAK; +} + + +static int Jim_ContinueCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc != 1) { + Jim_WrongNumArgs(interp, 1, argv, ""); + return JIM_ERR; + } + return JIM_CONTINUE; +} + + +static int Jim_ReturnCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int i; + Jim_Obj *stackTraceObj = NULL; + Jim_Obj *errorCodeObj = NULL; + int returnCode = JIM_OK; + long level = 1; + + for (i = 1; i < argc - 1; i += 2) { + if (Jim_CompareStringImmediate(interp, argv[i], "-code")) { + if (Jim_GetReturnCode(interp, argv[i + 1], &returnCode) == JIM_ERR) { + return JIM_ERR; + } + } + else if (Jim_CompareStringImmediate(interp, argv[i], "-errorinfo")) { + stackTraceObj = argv[i + 1]; + } + else if (Jim_CompareStringImmediate(interp, argv[i], "-errorcode")) { + errorCodeObj = argv[i + 1]; + } + else if (Jim_CompareStringImmediate(interp, argv[i], "-level")) { + if (Jim_GetLong(interp, argv[i + 1], &level) != JIM_OK || level < 0) { + Jim_SetResultFormatted(interp, "bad level \"%#s\"", argv[i + 1]); + return JIM_ERR; + } + } + else { + break; + } + } + + if (i != argc - 1 && i != argc) { + Jim_WrongNumArgs(interp, 1, argv, + "?-code code? ?-errorinfo stacktrace? ?-level level? 
?result?"); + } + + + if (stackTraceObj && returnCode == JIM_ERR) { + JimSetStackTrace(interp, stackTraceObj); + } + + if (errorCodeObj && returnCode == JIM_ERR) { + Jim_SetGlobalVariableStr(interp, "errorCode", errorCodeObj); + } + interp->returnCode = returnCode; + interp->returnLevel = level; + + if (i == argc - 1) { + Jim_SetResult(interp, argv[i]); + } + return JIM_RETURN; +} + + +static int Jim_TailcallCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (interp->framePtr->level == 0) { + Jim_SetResultString(interp, "tailcall can only be called from a proc or lambda", -1); + return JIM_ERR; + } + else if (argc >= 2) { + + Jim_CallFrame *cf = interp->framePtr->parent; + + Jim_Cmd *cmdPtr = Jim_GetCommand(interp, argv[1], JIM_ERRMSG); + if (cmdPtr == NULL) { + return JIM_ERR; + } + + JimPanic((cf->tailcallCmd != NULL, "Already have a tailcallCmd")); + + + JimIncrCmdRefCount(cmdPtr); + cf->tailcallCmd = cmdPtr; + + + JimPanic((cf->tailcallObj != NULL, "Already have a tailcallobj")); + + cf->tailcallObj = Jim_NewListObj(interp, argv + 1, argc - 1); + Jim_IncrRefCount(cf->tailcallObj); + + + return JIM_EVAL; + } + return JIM_OK; +} + +static int JimAliasCmd(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *cmdList; + Jim_Obj *prefixListObj = Jim_CmdPrivData(interp); + + + cmdList = Jim_DuplicateObj(interp, prefixListObj); + Jim_ListInsertElements(interp, cmdList, Jim_ListLength(interp, cmdList), argc - 1, argv + 1); + + return JimEvalObjList(interp, cmdList); +} + +static void JimAliasCmdDelete(Jim_Interp *interp, void *privData) +{ + Jim_Obj *prefixListObj = privData; + Jim_DecrRefCount(interp, prefixListObj); +} + +static int Jim_AliasCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *prefixListObj; + const char *newname; + + if (argc < 3) { + Jim_WrongNumArgs(interp, 1, argv, "newname command ?args ...?"); + return JIM_ERR; + } + + prefixListObj = Jim_NewListObj(interp, argv + 2, argc - 2); + Jim_IncrRefCount(prefixListObj); + newname = Jim_String(argv[1]); + if (newname[0] == ':' && newname[1] == ':') { + while (*++newname == ':') { + } + } + + Jim_SetResult(interp, argv[1]); + + return Jim_CreateCommand(interp, newname, JimAliasCmd, prefixListObj, JimAliasCmdDelete); +} + + +static int Jim_ProcCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Cmd *cmd; + + if (argc != 4 && argc != 5) { + Jim_WrongNumArgs(interp, 1, argv, "name arglist ?statics? 
body"); + return JIM_ERR; + } + + if (JimValidName(interp, "procedure", argv[1]) != JIM_OK) { + return JIM_ERR; + } + + if (argc == 4) { + cmd = JimCreateProcedureCmd(interp, argv[2], NULL, argv[3], NULL); + } + else { + cmd = JimCreateProcedureCmd(interp, argv[2], argv[3], argv[4], NULL); + } + + if (cmd) { + + Jim_Obj *qualifiedCmdNameObj; + const char *cmdname = JimQualifyName(interp, Jim_String(argv[1]), &qualifiedCmdNameObj); + + JimCreateCommand(interp, cmdname, cmd); + + + JimUpdateProcNamespace(interp, cmd, cmdname); + + JimFreeQualifiedName(interp, qualifiedCmdNameObj); + + + Jim_SetResult(interp, argv[1]); + return JIM_OK; + } + return JIM_ERR; +} + + +static int Jim_LocalCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int retcode; + + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "cmd ?args ...?"); + return JIM_ERR; + } + + + interp->local++; + retcode = Jim_EvalObjVector(interp, argc - 1, argv + 1); + interp->local--; + + + + if (retcode == 0) { + Jim_Obj *cmdNameObj = Jim_GetResult(interp); + + if (Jim_GetCommand(interp, cmdNameObj, JIM_ERRMSG) == NULL) { + return JIM_ERR; + } + if (interp->framePtr->localCommands == NULL) { + interp->framePtr->localCommands = Jim_Alloc(sizeof(*interp->framePtr->localCommands)); + Jim_InitStack(interp->framePtr->localCommands); + } + Jim_IncrRefCount(cmdNameObj); + Jim_StackPush(interp->framePtr->localCommands, cmdNameObj); + } + + return retcode; +} + + +static int Jim_UpcallCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "cmd ?args ...?"); + return JIM_ERR; + } + else { + int retcode; + + Jim_Cmd *cmdPtr = Jim_GetCommand(interp, argv[1], JIM_ERRMSG); + if (cmdPtr == NULL || !cmdPtr->isproc || !cmdPtr->prevCmd) { + Jim_SetResultFormatted(interp, "no previous command: \"%#s\"", argv[1]); + return JIM_ERR; + } + + cmdPtr->u.proc.upcall++; + JimIncrCmdRefCount(cmdPtr); + + + retcode = Jim_EvalObjVector(interp, argc - 1, argv + 1); + + + cmdPtr->u.proc.upcall--; + JimDecrCmdRefCount(interp, cmdPtr); + + return retcode; + } +} + + +static int Jim_ApplyCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "lambdaExpr ?arg ...?"); + return JIM_ERR; + } + else { + int ret; + Jim_Cmd *cmd; + Jim_Obj *argListObjPtr; + Jim_Obj *bodyObjPtr; + Jim_Obj *nsObj = NULL; + Jim_Obj **nargv; + + int len = Jim_ListLength(interp, argv[1]); + if (len != 2 && len != 3) { + Jim_SetResultFormatted(interp, "can't interpret \"%#s\" as a lambda expression", argv[1]); + return JIM_ERR; + } + + if (len == 3) { +#ifdef jim_ext_namespace + + nsObj = JimQualifyNameObj(interp, Jim_ListGetIndex(interp, argv[1], 2)); +#else + Jim_SetResultString(interp, "namespaces not enabled", -1); + return JIM_ERR; +#endif + } + argListObjPtr = Jim_ListGetIndex(interp, argv[1], 0); + bodyObjPtr = Jim_ListGetIndex(interp, argv[1], 1); + + cmd = JimCreateProcedureCmd(interp, argListObjPtr, NULL, bodyObjPtr, nsObj); + + if (cmd) { + + nargv = Jim_Alloc((argc - 2 + 1) * sizeof(*nargv)); + nargv[0] = Jim_NewStringObj(interp, "apply lambdaExpr", -1); + Jim_IncrRefCount(nargv[0]); + memcpy(&nargv[1], argv + 2, (argc - 2) * sizeof(*nargv)); + ret = JimCallProcedure(interp, cmd, argc - 2 + 1, nargv); + Jim_DecrRefCount(interp, nargv[0]); + Jim_Free(nargv); + + JimDecrCmdRefCount(interp, cmd); + return ret; + } + return JIM_ERR; + } +} + + + +static int Jim_ConcatCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + 
Jim_SetResult(interp, Jim_ConcatObj(interp, argc - 1, argv + 1)); + return JIM_OK; +} + + +static int Jim_UpvarCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int i; + Jim_CallFrame *targetCallFrame; + + + if (argc > 3 && (argc % 2 == 0)) { + targetCallFrame = Jim_GetCallFrameByLevel(interp, argv[1]); + argc--; + argv++; + } + else { + targetCallFrame = Jim_GetCallFrameByLevel(interp, NULL); + } + if (targetCallFrame == NULL) { + return JIM_ERR; + } + + + if (argc < 3) { + Jim_WrongNumArgs(interp, 1, argv, "?level? otherVar localVar ?otherVar localVar ...?"); + return JIM_ERR; + } + + + for (i = 1; i < argc; i += 2) { + if (Jim_SetVariableLink(interp, argv[i + 1], argv[i], targetCallFrame) != JIM_OK) + return JIM_ERR; + } + return JIM_OK; +} + + +static int Jim_GlobalCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int i; + + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "varName ?varName ...?"); + return JIM_ERR; + } + + if (interp->framePtr->level == 0) + return JIM_OK; + for (i = 1; i < argc; i++) { + + const char *name = Jim_String(argv[i]); + if (name[0] != ':' || name[1] != ':') { + if (Jim_SetVariableLink(interp, argv[i], argv[i], interp->topFramePtr) != JIM_OK) + return JIM_ERR; + } + } + return JIM_OK; +} + +static Jim_Obj *JimStringMap(Jim_Interp *interp, Jim_Obj *mapListObjPtr, + Jim_Obj *objPtr, int nocase) +{ + int numMaps; + const char *str, *noMatchStart = NULL; + int strLen, i; + Jim_Obj *resultObjPtr; + + numMaps = Jim_ListLength(interp, mapListObjPtr); + if (numMaps % 2) { + Jim_SetResultString(interp, "list must contain an even number of elements", -1); + return NULL; + } + + str = Jim_String(objPtr); + strLen = Jim_Utf8Length(interp, objPtr); + + + resultObjPtr = Jim_NewStringObj(interp, "", 0); + while (strLen) { + for (i = 0; i < numMaps; i += 2) { + Jim_Obj *objPtr; + const char *k; + int kl; + + objPtr = Jim_ListGetIndex(interp, mapListObjPtr, i); + k = Jim_String(objPtr); + kl = Jim_Utf8Length(interp, objPtr); + + if (strLen >= kl && kl) { + int rc; + rc = JimStringCompareLen(str, k, kl, nocase); + if (rc == 0) { + if (noMatchStart) { + Jim_AppendString(interp, resultObjPtr, noMatchStart, str - noMatchStart); + noMatchStart = NULL; + } + Jim_AppendObj(interp, resultObjPtr, Jim_ListGetIndex(interp, mapListObjPtr, i + 1)); + str += utf8_index(str, kl); + strLen -= kl; + break; + } + } + } + if (i == numMaps) { + int c; + if (noMatchStart == NULL) + noMatchStart = str; + str += utf8_tounicode(str, &c); + strLen--; + } + } + if (noMatchStart) { + Jim_AppendString(interp, resultObjPtr, noMatchStart, str - noMatchStart); + } + return resultObjPtr; +} + + +static int Jim_StringCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int len; + int opt_case = 1; + int option; + static const char * const options[] = { + "bytelength", "length", "compare", "match", "equal", "is", "byterange", "range", "replace", + "map", "repeat", "reverse", "index", "first", "last", "cat", + "trim", "trimleft", "trimright", "tolower", "toupper", "totitle", NULL + }; + enum + { + OPT_BYTELENGTH, OPT_LENGTH, OPT_COMPARE, OPT_MATCH, OPT_EQUAL, OPT_IS, OPT_BYTERANGE, OPT_RANGE, OPT_REPLACE, + OPT_MAP, OPT_REPEAT, OPT_REVERSE, OPT_INDEX, OPT_FIRST, OPT_LAST, OPT_CAT, + OPT_TRIM, OPT_TRIMLEFT, OPT_TRIMRIGHT, OPT_TOLOWER, OPT_TOUPPER, OPT_TOTITLE + }; + static const char * const nocase_options[] = { + "-nocase", NULL + }; + static const char * const nocase_length_options[] = { + "-nocase", "-length", NULL + }; + + if (argc < 2) { + 
Jim_WrongNumArgs(interp, 1, argv, "option ?arguments ...?"); + return JIM_ERR; + } + if (Jim_GetEnum(interp, argv[1], options, &option, NULL, + JIM_ERRMSG | JIM_ENUM_ABBREV) != JIM_OK) + return JIM_ERR; + + switch (option) { + case OPT_LENGTH: + case OPT_BYTELENGTH: + if (argc != 3) { + Jim_WrongNumArgs(interp, 2, argv, "string"); + return JIM_ERR; + } + if (option == OPT_LENGTH) { + len = Jim_Utf8Length(interp, argv[2]); + } + else { + len = Jim_Length(argv[2]); + } + Jim_SetResultInt(interp, len); + return JIM_OK; + + case OPT_CAT:{ + Jim_Obj *objPtr; + if (argc == 3) { + + objPtr = argv[2]; + } + else { + int i; + + objPtr = Jim_NewStringObj(interp, "", 0); + + for (i = 2; i < argc; i++) { + Jim_AppendObj(interp, objPtr, argv[i]); + } + } + Jim_SetResult(interp, objPtr); + return JIM_OK; + } + + case OPT_COMPARE: + case OPT_EQUAL: + { + + long opt_length = -1; + int n = argc - 4; + int i = 2; + while (n > 0) { + int subopt; + if (Jim_GetEnum(interp, argv[i++], nocase_length_options, &subopt, NULL, + JIM_ENUM_ABBREV) != JIM_OK) { +badcompareargs: + Jim_WrongNumArgs(interp, 2, argv, "?-nocase? ?-length int? string1 string2"); + return JIM_ERR; + } + if (subopt == 0) { + + opt_case = 0; + n--; + } + else { + + if (n < 2) { + goto badcompareargs; + } + if (Jim_GetLong(interp, argv[i++], &opt_length) != JIM_OK) { + return JIM_ERR; + } + n -= 2; + } + } + if (n) { + goto badcompareargs; + } + argv += argc - 2; + if (opt_length < 0 && option != OPT_COMPARE && opt_case) { + + Jim_SetResultBool(interp, Jim_StringEqObj(argv[0], argv[1])); + } + else { + if (opt_length >= 0) { + n = JimStringCompareLen(Jim_String(argv[0]), Jim_String(argv[1]), opt_length, !opt_case); + } + else { + n = Jim_StringCompareObj(interp, argv[0], argv[1], !opt_case); + } + Jim_SetResultInt(interp, option == OPT_COMPARE ? n : n == 0); + } + return JIM_OK; + } + + case OPT_MATCH: + if (argc != 4 && + (argc != 5 || + Jim_GetEnum(interp, argv[2], nocase_options, &opt_case, NULL, + JIM_ENUM_ABBREV) != JIM_OK)) { + Jim_WrongNumArgs(interp, 2, argv, "?-nocase? pattern string"); + return JIM_ERR; + } + if (opt_case == 0) { + argv++; + } + Jim_SetResultBool(interp, Jim_StringMatchObj(interp, argv[2], argv[3], !opt_case)); + return JIM_OK; + + case OPT_MAP:{ + Jim_Obj *objPtr; + + if (argc != 4 && + (argc != 5 || + Jim_GetEnum(interp, argv[2], nocase_options, &opt_case, NULL, + JIM_ENUM_ABBREV) != JIM_OK)) { + Jim_WrongNumArgs(interp, 2, argv, "?-nocase? mapList string"); + return JIM_ERR; + } + + if (opt_case == 0) { + argv++; + } + objPtr = JimStringMap(interp, argv[2], argv[3], !opt_case); + if (objPtr == NULL) { + return JIM_ERR; + } + Jim_SetResult(interp, objPtr); + return JIM_OK; + } + + case OPT_RANGE: + case OPT_BYTERANGE:{ + Jim_Obj *objPtr; + + if (argc != 5) { + Jim_WrongNumArgs(interp, 2, argv, "string first last"); + return JIM_ERR; + } + if (option == OPT_RANGE) { + objPtr = Jim_StringRangeObj(interp, argv[2], argv[3], argv[4]); + } + else + { + objPtr = Jim_StringByteRangeObj(interp, argv[2], argv[3], argv[4]); + } + + if (objPtr == NULL) { + return JIM_ERR; + } + Jim_SetResult(interp, objPtr); + return JIM_OK; + } + + case OPT_REPLACE:{ + Jim_Obj *objPtr; + + if (argc != 5 && argc != 6) { + Jim_WrongNumArgs(interp, 2, argv, "string first last ?string?"); + return JIM_ERR; + } + objPtr = JimStringReplaceObj(interp, argv[2], argv[3], argv[4], argc == 6 ? 
argv[5] : NULL); + if (objPtr == NULL) { + return JIM_ERR; + } + Jim_SetResult(interp, objPtr); + return JIM_OK; + } + + + case OPT_REPEAT:{ + Jim_Obj *objPtr; + jim_wide count; + + if (argc != 4) { + Jim_WrongNumArgs(interp, 2, argv, "string count"); + return JIM_ERR; + } + if (Jim_GetWide(interp, argv[3], &count) != JIM_OK) { + return JIM_ERR; + } + objPtr = Jim_NewStringObj(interp, "", 0); + if (count > 0) { + while (count--) { + Jim_AppendObj(interp, objPtr, argv[2]); + } + } + Jim_SetResult(interp, objPtr); + return JIM_OK; + } + + case OPT_REVERSE:{ + char *buf, *p; + const char *str; + int len; + int i; + + if (argc != 3) { + Jim_WrongNumArgs(interp, 2, argv, "string"); + return JIM_ERR; + } + + str = Jim_GetString(argv[2], &len); + buf = Jim_Alloc(len + 1); + p = buf + len; + *p = 0; + for (i = 0; i < len; ) { + int c; + int l = utf8_tounicode(str, &c); + memcpy(p - l, str, l); + p -= l; + i += l; + str += l; + } + Jim_SetResult(interp, Jim_NewStringObjNoAlloc(interp, buf, len)); + return JIM_OK; + } + + case OPT_INDEX:{ + int idx; + const char *str; + + if (argc != 4) { + Jim_WrongNumArgs(interp, 2, argv, "string index"); + return JIM_ERR; + } + if (Jim_GetIndex(interp, argv[3], &idx) != JIM_OK) { + return JIM_ERR; + } + str = Jim_String(argv[2]); + len = Jim_Utf8Length(interp, argv[2]); + if (idx != INT_MIN && idx != INT_MAX) { + idx = JimRelToAbsIndex(len, idx); + } + if (idx < 0 || idx >= len || str == NULL) { + Jim_SetResultString(interp, "", 0); + } + else if (len == Jim_Length(argv[2])) { + + Jim_SetResultString(interp, str + idx, 1); + } + else { + int c; + int i = utf8_index(str, idx); + Jim_SetResultString(interp, str + i, utf8_tounicode(str + i, &c)); + } + return JIM_OK; + } + + case OPT_FIRST: + case OPT_LAST:{ + int idx = 0, l1, l2; + const char *s1, *s2; + + if (argc != 4 && argc != 5) { + Jim_WrongNumArgs(interp, 2, argv, "subString string ?index?"); + return JIM_ERR; + } + s1 = Jim_String(argv[2]); + s2 = Jim_String(argv[3]); + l1 = Jim_Utf8Length(interp, argv[2]); + l2 = Jim_Utf8Length(interp, argv[3]); + if (argc == 5) { + if (Jim_GetIndex(interp, argv[4], &idx) != JIM_OK) { + return JIM_ERR; + } + idx = JimRelToAbsIndex(l2, idx); + } + else if (option == OPT_LAST) { + idx = l2; + } + if (option == OPT_FIRST) { + Jim_SetResultInt(interp, JimStringFirst(s1, l1, s2, l2, idx)); + } + else { +#ifdef JIM_UTF8 + Jim_SetResultInt(interp, JimStringLastUtf8(s1, l1, s2, idx)); +#else + Jim_SetResultInt(interp, JimStringLast(s1, l1, s2, idx)); +#endif + } + return JIM_OK; + } + + case OPT_TRIM: + case OPT_TRIMLEFT: + case OPT_TRIMRIGHT:{ + Jim_Obj *trimchars; + + if (argc != 3 && argc != 4) { + Jim_WrongNumArgs(interp, 2, argv, "string ?trimchars?"); + return JIM_ERR; + } + trimchars = (argc == 4 ? 
argv[3] : NULL); + if (option == OPT_TRIM) { + Jim_SetResult(interp, JimStringTrim(interp, argv[2], trimchars)); + } + else if (option == OPT_TRIMLEFT) { + Jim_SetResult(interp, JimStringTrimLeft(interp, argv[2], trimchars)); + } + else if (option == OPT_TRIMRIGHT) { + Jim_SetResult(interp, JimStringTrimRight(interp, argv[2], trimchars)); + } + return JIM_OK; + } + + case OPT_TOLOWER: + case OPT_TOUPPER: + case OPT_TOTITLE: + if (argc != 3) { + Jim_WrongNumArgs(interp, 2, argv, "string"); + return JIM_ERR; + } + if (option == OPT_TOLOWER) { + Jim_SetResult(interp, JimStringToLower(interp, argv[2])); + } + else if (option == OPT_TOUPPER) { + Jim_SetResult(interp, JimStringToUpper(interp, argv[2])); + } + else { + Jim_SetResult(interp, JimStringToTitle(interp, argv[2])); + } + return JIM_OK; + + case OPT_IS: + if (argc == 4 || (argc == 5 && Jim_CompareStringImmediate(interp, argv[3], "-strict"))) { + return JimStringIs(interp, argv[argc - 1], argv[2], argc == 5); + } + Jim_WrongNumArgs(interp, 2, argv, "class ?-strict? str"); + return JIM_ERR; + } + return JIM_OK; +} + + +static int Jim_TimeCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + long i, count = 1; + jim_wide start, elapsed; + char buf[60]; + const char *fmt = "%" JIM_WIDE_MODIFIER " microseconds per iteration"; + + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "script ?count?"); + return JIM_ERR; + } + if (argc == 3) { + if (Jim_GetLong(interp, argv[2], &count) != JIM_OK) + return JIM_ERR; + } + if (count < 0) + return JIM_OK; + i = count; + start = JimClock(); + while (i-- > 0) { + int retval; + + retval = Jim_EvalObj(interp, argv[1]); + if (retval != JIM_OK) { + return retval; + } + } + elapsed = JimClock() - start; + sprintf(buf, fmt, count == 0 ? 0 : elapsed / count); + Jim_SetResultString(interp, buf, -1); + return JIM_OK; +} + + +static int Jim_ExitCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + long exitCode = 0; + + if (argc > 2) { + Jim_WrongNumArgs(interp, 1, argv, "?exitCode?"); + return JIM_ERR; + } + if (argc == 2) { + if (Jim_GetLong(interp, argv[1], &exitCode) != JIM_OK) + return JIM_ERR; + } + interp->exitCode = exitCode; + return JIM_EXIT; +} + + +static int Jim_CatchCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int exitCode = 0; + int i; + int sig = 0; + + + jim_wide ignore_mask = (1 << JIM_EXIT) | (1 << JIM_EVAL) | (1 << JIM_SIGNAL); + static const int max_ignore_code = sizeof(ignore_mask) * 8; + + Jim_SetGlobalVariableStr(interp, "errorCode", Jim_NewStringObj(interp, "NONE", -1)); + + for (i = 1; i < argc - 1; i++) { + const char *arg = Jim_String(argv[i]); + jim_wide option; + int ignore; + + + if (strcmp(arg, "--") == 0) { + i++; + break; + } + if (*arg != '-') { + break; + } + + if (strncmp(arg, "-no", 3) == 0) { + arg += 3; + ignore = 1; + } + else { + arg++; + ignore = 0; + } + + if (Jim_StringToWide(arg, &option, 10) != JIM_OK) { + option = -1; + } + if (option < 0) { + option = Jim_FindByName(arg, jimReturnCodes, jimReturnCodesSize); + } + if (option < 0) { + goto wrongargs; + } + + if (ignore) { + ignore_mask |= (1 << option); + } + else { + ignore_mask &= ~(1 << option); + } + } + + argc -= i; + if (argc < 1 || argc > 3) { + wrongargs: + Jim_WrongNumArgs(interp, 1, argv, + "?-?no?code ... --? script ?resultVarName? 
?optionVarName?"); + return JIM_ERR; + } + argv += i; + + if ((ignore_mask & (1 << JIM_SIGNAL)) == 0) { + sig++; + } + + interp->signal_level += sig; + if (Jim_CheckSignal(interp)) { + + exitCode = JIM_SIGNAL; + } + else { + exitCode = Jim_EvalObj(interp, argv[0]); + + interp->errorFlag = 0; + } + interp->signal_level -= sig; + + + if (exitCode >= 0 && exitCode < max_ignore_code && (((unsigned jim_wide)1 << exitCode) & ignore_mask)) { + + return exitCode; + } + + if (sig && exitCode == JIM_SIGNAL) { + + if (interp->signal_set_result) { + interp->signal_set_result(interp, interp->sigmask); + } + else { + Jim_SetResultInt(interp, interp->sigmask); + } + interp->sigmask = 0; + } + + if (argc >= 2) { + if (Jim_SetVariable(interp, argv[1], Jim_GetResult(interp)) != JIM_OK) { + return JIM_ERR; + } + if (argc == 3) { + Jim_Obj *optListObj = Jim_NewListObj(interp, NULL, 0); + + Jim_ListAppendElement(interp, optListObj, Jim_NewStringObj(interp, "-code", -1)); + Jim_ListAppendElement(interp, optListObj, + Jim_NewIntObj(interp, exitCode == JIM_RETURN ? interp->returnCode : exitCode)); + Jim_ListAppendElement(interp, optListObj, Jim_NewStringObj(interp, "-level", -1)); + Jim_ListAppendElement(interp, optListObj, Jim_NewIntObj(interp, interp->returnLevel)); + if (exitCode == JIM_ERR) { + Jim_Obj *errorCode; + Jim_ListAppendElement(interp, optListObj, Jim_NewStringObj(interp, "-errorinfo", + -1)); + Jim_ListAppendElement(interp, optListObj, interp->stackTrace); + + errorCode = Jim_GetGlobalVariableStr(interp, "errorCode", JIM_NONE); + if (errorCode) { + Jim_ListAppendElement(interp, optListObj, Jim_NewStringObj(interp, "-errorcode", -1)); + Jim_ListAppendElement(interp, optListObj, errorCode); + } + } + if (Jim_SetVariable(interp, argv[2], optListObj) != JIM_OK) { + return JIM_ERR; + } + } + } + Jim_SetResultInt(interp, exitCode); + return JIM_OK; +} + +#ifdef JIM_REFERENCES + + +static int Jim_RefCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc != 3 && argc != 4) { + Jim_WrongNumArgs(interp, 1, argv, "string tag ?finalizer?"); + return JIM_ERR; + } + if (argc == 3) { + Jim_SetResult(interp, Jim_NewReference(interp, argv[1], argv[2], NULL)); + } + else { + Jim_SetResult(interp, Jim_NewReference(interp, argv[1], argv[2], argv[3])); + } + return JIM_OK; +} + + +static int Jim_GetrefCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Reference *refPtr; + + if (argc != 2) { + Jim_WrongNumArgs(interp, 1, argv, "reference"); + return JIM_ERR; + } + if ((refPtr = Jim_GetReference(interp, argv[1])) == NULL) + return JIM_ERR; + Jim_SetResult(interp, refPtr->objPtr); + return JIM_OK; +} + + +static int Jim_SetrefCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Reference *refPtr; + + if (argc != 3) { + Jim_WrongNumArgs(interp, 1, argv, "reference newValue"); + return JIM_ERR; + } + if ((refPtr = Jim_GetReference(interp, argv[1])) == NULL) + return JIM_ERR; + Jim_IncrRefCount(argv[2]); + Jim_DecrRefCount(interp, refPtr->objPtr); + refPtr->objPtr = argv[2]; + Jim_SetResult(interp, argv[2]); + return JIM_OK; +} + + +static int Jim_CollectCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc != 1) { + Jim_WrongNumArgs(interp, 1, argv, ""); + return JIM_ERR; + } + Jim_SetResultInt(interp, Jim_Collect(interp)); + + + while (interp->freeList) { + Jim_Obj *nextObjPtr = interp->freeList->nextObjPtr; + Jim_Free(interp->freeList); + interp->freeList = nextObjPtr; + } + + return JIM_OK; +} + + +static int 
Jim_FinalizeCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc != 2 && argc != 3) { + Jim_WrongNumArgs(interp, 1, argv, "reference ?finalizerProc?"); + return JIM_ERR; + } + if (argc == 2) { + Jim_Obj *cmdNamePtr; + + if (Jim_GetFinalizer(interp, argv[1], &cmdNamePtr) != JIM_OK) + return JIM_ERR; + if (cmdNamePtr != NULL) + Jim_SetResult(interp, cmdNamePtr); + } + else { + if (Jim_SetFinalizer(interp, argv[1], argv[2]) != JIM_OK) + return JIM_ERR; + Jim_SetResult(interp, argv[2]); + } + return JIM_OK; +} + + +static int JimInfoReferences(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *listObjPtr; + Jim_HashTableIterator htiter; + Jim_HashEntry *he; + + listObjPtr = Jim_NewListObj(interp, NULL, 0); + + JimInitHashTableIterator(&interp->references, &htiter); + while ((he = Jim_NextHashEntry(&htiter)) != NULL) { + char buf[JIM_REFERENCE_SPACE + 1]; + Jim_Reference *refPtr = Jim_GetHashEntryVal(he); + const unsigned long *refId = he->key; + + JimFormatReference(buf, refPtr, *refId); + Jim_ListAppendElement(interp, listObjPtr, Jim_NewStringObj(interp, buf, -1)); + } + Jim_SetResult(interp, listObjPtr); + return JIM_OK; +} +#endif + + +static int Jim_RenameCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc != 3) { + Jim_WrongNumArgs(interp, 1, argv, "oldName newName"); + return JIM_ERR; + } + + if (JimValidName(interp, "new procedure", argv[2])) { + return JIM_ERR; + } + + return Jim_RenameCommand(interp, Jim_String(argv[1]), Jim_String(argv[2])); +} + +#define JIM_DICTMATCH_VALUES 0x0001 + +typedef void JimDictMatchCallbackType(Jim_Interp *interp, Jim_Obj *listObjPtr, Jim_HashEntry *he, int type); + +static void JimDictMatchKeys(Jim_Interp *interp, Jim_Obj *listObjPtr, Jim_HashEntry *he, int type) +{ + Jim_ListAppendElement(interp, listObjPtr, (Jim_Obj *)he->key); + if (type & JIM_DICTMATCH_VALUES) { + Jim_ListAppendElement(interp, listObjPtr, Jim_GetHashEntryVal(he)); + } +} + +static Jim_Obj *JimDictPatternMatch(Jim_Interp *interp, Jim_HashTable *ht, Jim_Obj *patternObjPtr, + JimDictMatchCallbackType *callback, int type) +{ + Jim_HashEntry *he; + Jim_Obj *listObjPtr = Jim_NewListObj(interp, NULL, 0); + + + Jim_HashTableIterator htiter; + JimInitHashTableIterator(ht, &htiter); + while ((he = Jim_NextHashEntry(&htiter)) != NULL) { + if (patternObjPtr == NULL || JimGlobMatch(Jim_String(patternObjPtr), Jim_String((Jim_Obj *)he->key), 0)) { + callback(interp, listObjPtr, he, type); + } + } + + return listObjPtr; +} + + +int Jim_DictKeys(Jim_Interp *interp, Jim_Obj *objPtr, Jim_Obj *patternObjPtr) +{ + if (SetDictFromAny(interp, objPtr) != JIM_OK) { + return JIM_ERR; + } + Jim_SetResult(interp, JimDictPatternMatch(interp, objPtr->internalRep.ptr, patternObjPtr, JimDictMatchKeys, 0)); + return JIM_OK; +} + +int Jim_DictValues(Jim_Interp *interp, Jim_Obj *objPtr, Jim_Obj *patternObjPtr) +{ + if (SetDictFromAny(interp, objPtr) != JIM_OK) { + return JIM_ERR; + } + Jim_SetResult(interp, JimDictPatternMatch(interp, objPtr->internalRep.ptr, patternObjPtr, JimDictMatchKeys, JIM_DICTMATCH_VALUES)); + return JIM_OK; +} + +int Jim_DictSize(Jim_Interp *interp, Jim_Obj *objPtr) +{ + if (SetDictFromAny(interp, objPtr) != JIM_OK) { + return -1; + } + return ((Jim_HashTable *)objPtr->internalRep.ptr)->used; +} + +int Jim_DictInfo(Jim_Interp *interp, Jim_Obj *objPtr) +{ + Jim_HashTable *ht; + unsigned int i; + + if (SetDictFromAny(interp, objPtr) != JIM_OK) { + return JIM_ERR; + } + + ht = (Jim_HashTable *)objPtr->internalRep.ptr; + + + 
printf("%d entries in table, %d buckets\n", ht->used, ht->size); + + for (i = 0; i < ht->size; i++) { + Jim_HashEntry *he = ht->table[i]; + + if (he) { + printf("%d: ", i); + + while (he) { + printf(" %s", Jim_String(he->key)); + he = he->next; + } + printf("\n"); + } + } + return JIM_OK; +} + +static int Jim_EvalEnsemble(Jim_Interp *interp, const char *basecmd, const char *subcmd, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *prefixObj = Jim_NewStringObj(interp, basecmd, -1); + + Jim_AppendString(interp, prefixObj, " ", 1); + Jim_AppendString(interp, prefixObj, subcmd, -1); + + return Jim_EvalObjPrefix(interp, prefixObj, argc, argv); +} + + +static int Jim_DictCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *objPtr; + int option; + static const char * const options[] = { + "create", "get", "set", "unset", "exists", "keys", "size", "info", + "merge", "with", "append", "lappend", "incr", "remove", "values", "for", + "replace", "update", NULL + }; + enum + { + OPT_CREATE, OPT_GET, OPT_SET, OPT_UNSET, OPT_EXISTS, OPT_KEYS, OPT_SIZE, OPT_INFO, + OPT_MERGE, OPT_WITH, OPT_APPEND, OPT_LAPPEND, OPT_INCR, OPT_REMOVE, OPT_VALUES, OPT_FOR, + OPT_REPLACE, OPT_UPDATE, + }; + + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "subcommand ?arguments ...?"); + return JIM_ERR; + } + + if (Jim_GetEnum(interp, argv[1], options, &option, "subcommand", JIM_ERRMSG) != JIM_OK) { + return JIM_ERR; + } + + switch (option) { + case OPT_GET: + if (argc < 3) { + Jim_WrongNumArgs(interp, 2, argv, "dictionary ?key ...?"); + return JIM_ERR; + } + if (Jim_DictKeysVector(interp, argv[2], argv + 3, argc - 3, &objPtr, + JIM_ERRMSG) != JIM_OK) { + return JIM_ERR; + } + Jim_SetResult(interp, objPtr); + return JIM_OK; + + case OPT_SET: + if (argc < 5) { + Jim_WrongNumArgs(interp, 2, argv, "varName key ?key ...? value"); + return JIM_ERR; + } + return Jim_SetDictKeysVector(interp, argv[2], argv + 3, argc - 4, argv[argc - 1], JIM_ERRMSG); + + case OPT_EXISTS: + if (argc < 4) { + Jim_WrongNumArgs(interp, 2, argv, "dictionary key ?key ...?"); + return JIM_ERR; + } + else { + int rc = Jim_DictKeysVector(interp, argv[2], argv + 3, argc - 3, &objPtr, JIM_ERRMSG); + if (rc < 0) { + return JIM_ERR; + } + Jim_SetResultBool(interp, rc == JIM_OK); + return JIM_OK; + } + + case OPT_UNSET: + if (argc < 4) { + Jim_WrongNumArgs(interp, 2, argv, "varName key ?key ...?"); + return JIM_ERR; + } + if (Jim_SetDictKeysVector(interp, argv[2], argv + 3, argc - 3, NULL, 0) != JIM_OK) { + return JIM_ERR; + } + return JIM_OK; + + case OPT_KEYS: + if (argc != 3 && argc != 4) { + Jim_WrongNumArgs(interp, 2, argv, "dictionary ?pattern?"); + return JIM_ERR; + } + return Jim_DictKeys(interp, argv[2], argc == 4 ? 
argv[3] : NULL); + + case OPT_SIZE: + if (argc != 3) { + Jim_WrongNumArgs(interp, 2, argv, "dictionary"); + return JIM_ERR; + } + else if (Jim_DictSize(interp, argv[2]) < 0) { + return JIM_ERR; + } + Jim_SetResultInt(interp, Jim_DictSize(interp, argv[2])); + return JIM_OK; + + case OPT_MERGE: + if (argc == 2) { + return JIM_OK; + } + if (Jim_DictSize(interp, argv[2]) < 0) { + return JIM_ERR; + } + + break; + + case OPT_UPDATE: + if (argc < 6 || argc % 2) { + + argc = 2; + } + break; + + case OPT_CREATE: + if (argc % 2) { + Jim_WrongNumArgs(interp, 2, argv, "?key value ...?"); + return JIM_ERR; + } + objPtr = Jim_NewDictObj(interp, argv + 2, argc - 2); + Jim_SetResult(interp, objPtr); + return JIM_OK; + + case OPT_INFO: + if (argc != 3) { + Jim_WrongNumArgs(interp, 2, argv, "dictionary"); + return JIM_ERR; + } + return Jim_DictInfo(interp, argv[2]); + } + + return Jim_EvalEnsemble(interp, "dict", options[option], argc - 2, argv + 2); +} + + +static int Jim_SubstCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + static const char * const options[] = { + "-nobackslashes", "-nocommands", "-novariables", NULL + }; + enum + { OPT_NOBACKSLASHES, OPT_NOCOMMANDS, OPT_NOVARIABLES }; + int i; + int flags = JIM_SUBST_FLAG; + Jim_Obj *objPtr; + + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "?options? string"); + return JIM_ERR; + } + for (i = 1; i < (argc - 1); i++) { + int option; + + if (Jim_GetEnum(interp, argv[i], options, &option, NULL, + JIM_ERRMSG | JIM_ENUM_ABBREV) != JIM_OK) { + return JIM_ERR; + } + switch (option) { + case OPT_NOBACKSLASHES: + flags |= JIM_SUBST_NOESC; + break; + case OPT_NOCOMMANDS: + flags |= JIM_SUBST_NOCMD; + break; + case OPT_NOVARIABLES: + flags |= JIM_SUBST_NOVAR; + break; + } + } + if (Jim_SubstObj(interp, argv[argc - 1], &objPtr, flags) != JIM_OK) { + return JIM_ERR; + } + Jim_SetResult(interp, objPtr); + return JIM_OK; +} + + +static int Jim_InfoCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int cmd; + Jim_Obj *objPtr; + int mode = 0; + + static const char * const commands[] = { + "body", "statics", "commands", "procs", "channels", "exists", "globals", "level", "frame", "locals", + "vars", "version", "patchlevel", "complete", "args", "hostname", + "script", "source", "stacktrace", "nameofexecutable", "returncodes", + "references", "alias", NULL + }; + enum + { INFO_BODY, INFO_STATICS, INFO_COMMANDS, INFO_PROCS, INFO_CHANNELS, INFO_EXISTS, INFO_GLOBALS, INFO_LEVEL, + INFO_FRAME, INFO_LOCALS, INFO_VARS, INFO_VERSION, INFO_PATCHLEVEL, INFO_COMPLETE, INFO_ARGS, + INFO_HOSTNAME, INFO_SCRIPT, INFO_SOURCE, INFO_STACKTRACE, INFO_NAMEOFEXECUTABLE, + INFO_RETURNCODES, INFO_REFERENCES, INFO_ALIAS, + }; + +#ifdef jim_ext_namespace + int nons = 0; + + if (argc > 2 && Jim_CompareStringImmediate(interp, argv[1], "-nons")) { + + argc--; + argv++; + nons = 1; + } +#endif + + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "subcommand ?args ...?"); + return JIM_ERR; + } + if (Jim_GetEnum(interp, argv[1], commands, &cmd, "subcommand", JIM_ERRMSG | JIM_ENUM_ABBREV) + != JIM_OK) { + return JIM_ERR; + } + + + switch (cmd) { + case INFO_EXISTS: + if (argc != 3) { + Jim_WrongNumArgs(interp, 2, argv, "varName"); + return JIM_ERR; + } + Jim_SetResultBool(interp, Jim_GetVariable(interp, argv[2], 0) != NULL); + break; + + case INFO_ALIAS:{ + Jim_Cmd *cmdPtr; + + if (argc != 3) { + Jim_WrongNumArgs(interp, 2, argv, "command"); + return JIM_ERR; + } + if ((cmdPtr = Jim_GetCommand(interp, argv[2], JIM_ERRMSG)) == NULL) { + return JIM_ERR; + } + 
if (cmdPtr->isproc || cmdPtr->u.native.cmdProc != JimAliasCmd) { + Jim_SetResultFormatted(interp, "command \"%#s\" is not an alias", argv[2]); + return JIM_ERR; + } + Jim_SetResult(interp, (Jim_Obj *)cmdPtr->u.native.privData); + return JIM_OK; + } + + case INFO_CHANNELS: + mode++; +#ifndef jim_ext_aio + Jim_SetResultString(interp, "aio not enabled", -1); + return JIM_ERR; +#endif + case INFO_PROCS: + mode++; + case INFO_COMMANDS: + + if (argc != 2 && argc != 3) { + Jim_WrongNumArgs(interp, 2, argv, "?pattern?"); + return JIM_ERR; + } +#ifdef jim_ext_namespace + if (!nons) { + if (Jim_Length(interp->framePtr->nsObj) || (argc == 3 && JimGlobMatch("::*", Jim_String(argv[2]), 0))) { + return Jim_EvalPrefix(interp, "namespace info", argc - 1, argv + 1); + } + } +#endif + Jim_SetResult(interp, JimCommandsList(interp, (argc == 3) ? argv[2] : NULL, mode)); + break; + + case INFO_VARS: + mode++; + case INFO_LOCALS: + mode++; + case INFO_GLOBALS: + + if (argc != 2 && argc != 3) { + Jim_WrongNumArgs(interp, 2, argv, "?pattern?"); + return JIM_ERR; + } +#ifdef jim_ext_namespace + if (!nons) { + if (Jim_Length(interp->framePtr->nsObj) || (argc == 3 && JimGlobMatch("::*", Jim_String(argv[2]), 0))) { + return Jim_EvalPrefix(interp, "namespace info", argc - 1, argv + 1); + } + } +#endif + Jim_SetResult(interp, JimVariablesList(interp, argc == 3 ? argv[2] : NULL, mode)); + break; + + case INFO_SCRIPT: + if (argc != 2) { + Jim_WrongNumArgs(interp, 2, argv, ""); + return JIM_ERR; + } + Jim_SetResult(interp, JimGetScript(interp, interp->currentScriptObj)->fileNameObj); + break; + + case INFO_SOURCE:{ + jim_wide line; + Jim_Obj *resObjPtr; + Jim_Obj *fileNameObj; + + if (argc != 3 && argc != 5) { + Jim_WrongNumArgs(interp, 2, argv, "source ?filename line?"); + return JIM_ERR; + } + if (argc == 5) { + if (Jim_GetWide(interp, argv[4], &line) != JIM_OK) { + return JIM_ERR; + } + resObjPtr = Jim_NewStringObj(interp, Jim_String(argv[2]), Jim_Length(argv[2])); + JimSetSourceInfo(interp, resObjPtr, argv[3], line); + } + else { + if (argv[2]->typePtr == &sourceObjType) { + fileNameObj = argv[2]->internalRep.sourceValue.fileNameObj; + line = argv[2]->internalRep.sourceValue.lineNumber; + } + else if (argv[2]->typePtr == &scriptObjType) { + ScriptObj *script = JimGetScript(interp, argv[2]); + fileNameObj = script->fileNameObj; + line = script->firstline; + } + else { + fileNameObj = interp->emptyObj; + line = 1; + } + resObjPtr = Jim_NewListObj(interp, NULL, 0); + Jim_ListAppendElement(interp, resObjPtr, fileNameObj); + Jim_ListAppendElement(interp, resObjPtr, Jim_NewIntObj(interp, line)); + } + Jim_SetResult(interp, resObjPtr); + break; + } + + case INFO_STACKTRACE: + Jim_SetResult(interp, interp->stackTrace); + break; + + case INFO_LEVEL: + case INFO_FRAME: + switch (argc) { + case 2: + Jim_SetResultInt(interp, interp->framePtr->level); + break; + + case 3: + if (JimInfoLevel(interp, argv[2], &objPtr, cmd == INFO_LEVEL) != JIM_OK) { + return JIM_ERR; + } + Jim_SetResult(interp, objPtr); + break; + + default: + Jim_WrongNumArgs(interp, 2, argv, "?levelNum?"); + return JIM_ERR; + } + break; + + case INFO_BODY: + case INFO_STATICS: + case INFO_ARGS:{ + Jim_Cmd *cmdPtr; + + if (argc != 3) { + Jim_WrongNumArgs(interp, 2, argv, "procname"); + return JIM_ERR; + } + if ((cmdPtr = Jim_GetCommand(interp, argv[2], JIM_ERRMSG)) == NULL) { + return JIM_ERR; + } + if (!cmdPtr->isproc) { + Jim_SetResultFormatted(interp, "command \"%#s\" is not a procedure", argv[2]); + return JIM_ERR; + } + switch (cmd) { + case INFO_BODY: + 
Jim_SetResult(interp, cmdPtr->u.proc.bodyObjPtr); + break; + case INFO_ARGS: + Jim_SetResult(interp, cmdPtr->u.proc.argListObjPtr); + break; + case INFO_STATICS: + if (cmdPtr->u.proc.staticVars) { + int mode = JIM_VARLIST_LOCALS | JIM_VARLIST_VALUES; + Jim_SetResult(interp, JimHashtablePatternMatch(interp, cmdPtr->u.proc.staticVars, + NULL, JimVariablesMatch, mode)); + } + break; + } + break; + } + + case INFO_VERSION: + case INFO_PATCHLEVEL:{ + char buf[(JIM_INTEGER_SPACE * 2) + 1]; + + sprintf(buf, "%d.%d", JIM_VERSION / 100, JIM_VERSION % 100); + Jim_SetResultString(interp, buf, -1); + break; + } + + case INFO_COMPLETE: + if (argc != 3 && argc != 4) { + Jim_WrongNumArgs(interp, 2, argv, "script ?missing?"); + return JIM_ERR; + } + else { + int len; + const char *s = Jim_GetString(argv[2], &len); + char missing; + + Jim_SetResultBool(interp, Jim_ScriptIsComplete(s, len, &missing)); + if (missing != ' ' && argc == 4) { + Jim_SetVariable(interp, argv[3], Jim_NewStringObj(interp, &missing, 1)); + } + } + break; + + case INFO_HOSTNAME: + + return Jim_Eval(interp, "os.gethostname"); + + case INFO_NAMEOFEXECUTABLE: + + return Jim_Eval(interp, "{info nameofexecutable}"); + + case INFO_RETURNCODES: + if (argc == 2) { + int i; + Jim_Obj *listObjPtr = Jim_NewListObj(interp, NULL, 0); + + for (i = 0; jimReturnCodes[i]; i++) { + Jim_ListAppendElement(interp, listObjPtr, Jim_NewIntObj(interp, i)); + Jim_ListAppendElement(interp, listObjPtr, Jim_NewStringObj(interp, + jimReturnCodes[i], -1)); + } + + Jim_SetResult(interp, listObjPtr); + } + else if (argc == 3) { + long code; + const char *name; + + if (Jim_GetLong(interp, argv[2], &code) != JIM_OK) { + return JIM_ERR; + } + name = Jim_ReturnCode(code); + if (*name == '?') { + Jim_SetResultInt(interp, code); + } + else { + Jim_SetResultString(interp, name, -1); + } + } + else { + Jim_WrongNumArgs(interp, 2, argv, "?code?"); + return JIM_ERR; + } + break; + case INFO_REFERENCES: +#ifdef JIM_REFERENCES + return JimInfoReferences(interp, argc, argv); +#else + Jim_SetResultString(interp, "not supported", -1); + return JIM_ERR; +#endif + } + return JIM_OK; +} + + +static int Jim_ExistsCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *objPtr; + int result = 0; + + static const char * const options[] = { + "-command", "-proc", "-alias", "-var", NULL + }; + enum + { + OPT_COMMAND, OPT_PROC, OPT_ALIAS, OPT_VAR + }; + int option; + + if (argc == 2) { + option = OPT_VAR; + objPtr = argv[1]; + } + else if (argc == 3) { + if (Jim_GetEnum(interp, argv[1], options, &option, NULL, JIM_ERRMSG | JIM_ENUM_ABBREV) != JIM_OK) { + return JIM_ERR; + } + objPtr = argv[2]; + } + else { + Jim_WrongNumArgs(interp, 1, argv, "?option? 
name"); + return JIM_ERR; + } + + if (option == OPT_VAR) { + result = Jim_GetVariable(interp, objPtr, 0) != NULL; + } + else { + + Jim_Cmd *cmd = Jim_GetCommand(interp, objPtr, JIM_NONE); + + if (cmd) { + switch (option) { + case OPT_COMMAND: + result = 1; + break; + + case OPT_ALIAS: + result = cmd->isproc == 0 && cmd->u.native.cmdProc == JimAliasCmd; + break; + + case OPT_PROC: + result = cmd->isproc; + break; + } + } + } + Jim_SetResultBool(interp, result); + return JIM_OK; +} + + +static int Jim_SplitCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + const char *str, *splitChars, *noMatchStart; + int splitLen, strLen; + Jim_Obj *resObjPtr; + int c; + int len; + + if (argc != 2 && argc != 3) { + Jim_WrongNumArgs(interp, 1, argv, "string ?splitChars?"); + return JIM_ERR; + } + + str = Jim_GetString(argv[1], &len); + if (len == 0) { + return JIM_OK; + } + strLen = Jim_Utf8Length(interp, argv[1]); + + + if (argc == 2) { + splitChars = " \n\t\r"; + splitLen = 4; + } + else { + splitChars = Jim_String(argv[2]); + splitLen = Jim_Utf8Length(interp, argv[2]); + } + + noMatchStart = str; + resObjPtr = Jim_NewListObj(interp, NULL, 0); + + + if (splitLen) { + Jim_Obj *objPtr; + while (strLen--) { + const char *sc = splitChars; + int scLen = splitLen; + int sl = utf8_tounicode(str, &c); + while (scLen--) { + int pc; + sc += utf8_tounicode(sc, &pc); + if (c == pc) { + objPtr = Jim_NewStringObj(interp, noMatchStart, (str - noMatchStart)); + Jim_ListAppendElement(interp, resObjPtr, objPtr); + noMatchStart = str + sl; + break; + } + } + str += sl; + } + objPtr = Jim_NewStringObj(interp, noMatchStart, (str - noMatchStart)); + Jim_ListAppendElement(interp, resObjPtr, objPtr); + } + else { + Jim_Obj **commonObj = NULL; +#define NUM_COMMON (128 - 9) + while (strLen--) { + int n = utf8_tounicode(str, &c); +#ifdef JIM_OPTIMIZATION + if (c >= 9 && c < 128) { + + c -= 9; + if (!commonObj) { + commonObj = Jim_Alloc(sizeof(*commonObj) * NUM_COMMON); + memset(commonObj, 0, sizeof(*commonObj) * NUM_COMMON); + } + if (!commonObj[c]) { + commonObj[c] = Jim_NewStringObj(interp, str, 1); + } + Jim_ListAppendElement(interp, resObjPtr, commonObj[c]); + str++; + continue; + } +#endif + Jim_ListAppendElement(interp, resObjPtr, Jim_NewStringObjUtf8(interp, str, 1)); + str += n; + } + Jim_Free(commonObj); + } + + Jim_SetResult(interp, resObjPtr); + return JIM_OK; +} + + +static int Jim_JoinCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + const char *joinStr; + int joinStrLen; + + if (argc != 2 && argc != 3) { + Jim_WrongNumArgs(interp, 1, argv, "list ?joinString?"); + return JIM_ERR; + } + + if (argc == 2) { + joinStr = " "; + joinStrLen = 1; + } + else { + joinStr = Jim_GetString(argv[2], &joinStrLen); + } + Jim_SetResult(interp, Jim_ListJoin(interp, argv[1], joinStr, joinStrLen)); + return JIM_OK; +} + + +static int Jim_FormatCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *objPtr; + + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "formatString ?arg arg ...?"); + return JIM_ERR; + } + objPtr = Jim_FormatString(interp, argv[1], argc - 2, argv + 2); + if (objPtr == NULL) + return JIM_ERR; + Jim_SetResult(interp, objPtr); + return JIM_OK; +} + + +static int Jim_ScanCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *listPtr, **outVec; + int outc, i; + + if (argc < 3) { + Jim_WrongNumArgs(interp, 1, argv, "string format ?varName varName ...?"); + return JIM_ERR; + } + if (argv[2]->typePtr != &scanFmtStringObjType) + 
SetScanFmtFromAny(interp, argv[2]); + if (FormatGetError(argv[2]) != 0) { + Jim_SetResultString(interp, FormatGetError(argv[2]), -1); + return JIM_ERR; + } + if (argc > 3) { + int maxPos = FormatGetMaxPos(argv[2]); + int count = FormatGetCnvCount(argv[2]); + + if (maxPos > argc - 3) { + Jim_SetResultString(interp, "\"%n$\" argument index out of range", -1); + return JIM_ERR; + } + else if (count > argc - 3) { + Jim_SetResultString(interp, "different numbers of variable names and " + "field specifiers", -1); + return JIM_ERR; + } + else if (count < argc - 3) { + Jim_SetResultString(interp, "variable is not assigned by any " + "conversion specifiers", -1); + return JIM_ERR; + } + } + listPtr = Jim_ScanString(interp, argv[1], argv[2], JIM_ERRMSG); + if (listPtr == 0) + return JIM_ERR; + if (argc > 3) { + int rc = JIM_OK; + int count = 0; + + if (listPtr != 0 && listPtr != (Jim_Obj *)EOF) { + int len = Jim_ListLength(interp, listPtr); + + if (len != 0) { + JimListGetElements(interp, listPtr, &outc, &outVec); + for (i = 0; i < outc; ++i) { + if (Jim_Length(outVec[i]) > 0) { + ++count; + if (Jim_SetVariable(interp, argv[3 + i], outVec[i]) != JIM_OK) { + rc = JIM_ERR; + } + } + } + } + Jim_FreeNewObj(interp, listPtr); + } + else { + count = -1; + } + if (rc == JIM_OK) { + Jim_SetResultInt(interp, count); + } + return rc; + } + else { + if (listPtr == (Jim_Obj *)EOF) { + Jim_SetResult(interp, Jim_NewListObj(interp, 0, 0)); + return JIM_OK; + } + Jim_SetResult(interp, listPtr); + } + return JIM_OK; +} + + +static int Jim_ErrorCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + if (argc != 2 && argc != 3) { + Jim_WrongNumArgs(interp, 1, argv, "message ?stacktrace?"); + return JIM_ERR; + } + Jim_SetResult(interp, argv[1]); + if (argc == 3) { + JimSetStackTrace(interp, argv[2]); + return JIM_ERR; + } + interp->addStackTrace++; + return JIM_ERR; +} + + +static int Jim_LrangeCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *objPtr; + + if (argc != 4) { + Jim_WrongNumArgs(interp, 1, argv, "list first last"); + return JIM_ERR; + } + if ((objPtr = Jim_ListRange(interp, argv[1], argv[2], argv[3])) == NULL) + return JIM_ERR; + Jim_SetResult(interp, objPtr); + return JIM_OK; +} + + +static int Jim_LrepeatCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *objPtr; + long count; + + if (argc < 2 || Jim_GetLong(interp, argv[1], &count) != JIM_OK || count < 0) { + Jim_WrongNumArgs(interp, 1, argv, "count ?value ...?"); + return JIM_ERR; + } + + if (count == 0 || argc == 2) { + return JIM_OK; + } + + argc -= 2; + argv += 2; + + objPtr = Jim_NewListObj(interp, argv, argc); + while (--count) { + ListInsertElements(objPtr, -1, argc, argv); + } + + Jim_SetResult(interp, objPtr); + return JIM_OK; +} + +char **Jim_GetEnviron(void) +{ +#if defined(HAVE__NSGETENVIRON) + return *_NSGetEnviron(); +#else + #if !defined(NO_ENVIRON_EXTERN) + extern char **environ; + #endif + + return environ; +#endif +} + +void Jim_SetEnviron(char **env) +{ +#if defined(HAVE__NSGETENVIRON) + *_NSGetEnviron() = env; +#else + #if !defined(NO_ENVIRON_EXTERN) + extern char **environ; + #endif + + environ = env; +#endif +} + + +static int Jim_EnvCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + const char *key; + const char *val; + + if (argc == 1) { + char **e = Jim_GetEnviron(); + + int i; + Jim_Obj *listObjPtr = Jim_NewListObj(interp, NULL, 0); + + for (i = 0; e[i]; i++) { + const char *equals = strchr(e[i], '='); + + if (equals) { + 
Jim_ListAppendElement(interp, listObjPtr, Jim_NewStringObj(interp, e[i], + equals - e[i])); + Jim_ListAppendElement(interp, listObjPtr, Jim_NewStringObj(interp, equals + 1, -1)); + } + } + + Jim_SetResult(interp, listObjPtr); + return JIM_OK; + } + + if (argc < 2) { + Jim_WrongNumArgs(interp, 1, argv, "varName ?default?"); + return JIM_ERR; + } + key = Jim_String(argv[1]); + val = getenv(key); + if (val == NULL) { + if (argc < 3) { + Jim_SetResultFormatted(interp, "environment variable \"%#s\" does not exist", argv[1]); + return JIM_ERR; + } + val = Jim_String(argv[2]); + } + Jim_SetResult(interp, Jim_NewStringObj(interp, val, -1)); + return JIM_OK; +} + + +static int Jim_SourceCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + int retval; + + if (argc != 2) { + Jim_WrongNumArgs(interp, 1, argv, "fileName"); + return JIM_ERR; + } + retval = Jim_EvalFile(interp, Jim_String(argv[1])); + if (retval == JIM_RETURN) + return JIM_OK; + return retval; +} + + +static int Jim_LreverseCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + Jim_Obj *revObjPtr, **ele; + int len; + + if (argc != 2) { + Jim_WrongNumArgs(interp, 1, argv, "list"); + return JIM_ERR; + } + JimListGetElements(interp, argv[1], &len, &ele); + len--; + revObjPtr = Jim_NewListObj(interp, NULL, 0); + while (len >= 0) + ListAppendElement(revObjPtr, ele[len--]); + Jim_SetResult(interp, revObjPtr); + return JIM_OK; +} + +static int JimRangeLen(jim_wide start, jim_wide end, jim_wide step) +{ + jim_wide len; + + if (step == 0) + return -1; + if (start == end) + return 0; + else if (step > 0 && start > end) + return -1; + else if (step < 0 && end > start) + return -1; + len = end - start; + if (len < 0) + len = -len; + if (step < 0) + step = -step; + len = 1 + ((len - 1) / step); + if (len > INT_MAX) + len = INT_MAX; + return (int)((len < 0) ? -1 : len); +} + + +static int Jim_RangeCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + jim_wide start = 0, end, step = 1; + int len, i; + Jim_Obj *objPtr; + + if (argc < 2 || argc > 4) { + Jim_WrongNumArgs(interp, 1, argv, "?start? end ?step?"); + return JIM_ERR; + } + if (argc == 2) { + if (Jim_GetWide(interp, argv[1], &end) != JIM_OK) + return JIM_ERR; + } + else { + if (Jim_GetWide(interp, argv[1], &start) != JIM_OK || + Jim_GetWide(interp, argv[2], &end) != JIM_OK) + return JIM_ERR; + if (argc == 4 && Jim_GetWide(interp, argv[3], &step) != JIM_OK) + return JIM_ERR; + } + if ((len = JimRangeLen(start, end, step)) == -1) { + Jim_SetResultString(interp, "Invalid (infinite?) range specified", -1); + return JIM_ERR; + } + objPtr = Jim_NewListObj(interp, NULL, 0); + for (i = 0; i < len; i++) + ListAppendElement(objPtr, Jim_NewIntObj(interp, start + i * step)); + Jim_SetResult(interp, objPtr); + return JIM_OK; +} + + +static int Jim_RandCoreCommand(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + jim_wide min = 0, max = 0, len, maxMul; + + if (argc < 1 || argc > 3) { + Jim_WrongNumArgs(interp, 1, argv, "?min? max"); + return JIM_ERR; + } + if (argc == 1) { + max = JIM_WIDE_MAX; + } else if (argc == 2) { + if (Jim_GetWide(interp, argv[1], &max) != JIM_OK) + return JIM_ERR; + } else if (argc == 3) { + if (Jim_GetWide(interp, argv[1], &min) != JIM_OK || + Jim_GetWide(interp, argv[2], &max) != JIM_OK) + return JIM_ERR; + } + len = max-min; + if (len < 0) { + Jim_SetResultString(interp, "Invalid arguments (max < min)", -1); + return JIM_ERR; + } + maxMul = JIM_WIDE_MAX - (len ? 
(JIM_WIDE_MAX%len) : 0); + while (1) { + jim_wide r; + + JimRandomBytes(interp, &r, sizeof(jim_wide)); + if (r < 0 || r >= maxMul) continue; + r = (len == 0) ? 0 : r%len; + Jim_SetResultInt(interp, min+r); + return JIM_OK; + } +} + +static const struct { + const char *name; + Jim_CmdProc *cmdProc; +} Jim_CoreCommandsTable[] = { + {"alias", Jim_AliasCoreCommand}, + {"set", Jim_SetCoreCommand}, + {"unset", Jim_UnsetCoreCommand}, + {"puts", Jim_PutsCoreCommand}, + {"+", Jim_AddCoreCommand}, + {"*", Jim_MulCoreCommand}, + {"-", Jim_SubCoreCommand}, + {"/", Jim_DivCoreCommand}, + {"incr", Jim_IncrCoreCommand}, + {"while", Jim_WhileCoreCommand}, + {"loop", Jim_LoopCoreCommand}, + {"for", Jim_ForCoreCommand}, + {"foreach", Jim_ForeachCoreCommand}, + {"lmap", Jim_LmapCoreCommand}, + {"lassign", Jim_LassignCoreCommand}, + {"if", Jim_IfCoreCommand}, + {"switch", Jim_SwitchCoreCommand}, + {"list", Jim_ListCoreCommand}, + {"lindex", Jim_LindexCoreCommand}, + {"lset", Jim_LsetCoreCommand}, + {"lsearch", Jim_LsearchCoreCommand}, + {"llength", Jim_LlengthCoreCommand}, + {"lappend", Jim_LappendCoreCommand}, + {"linsert", Jim_LinsertCoreCommand}, + {"lreplace", Jim_LreplaceCoreCommand}, + {"lsort", Jim_LsortCoreCommand}, + {"append", Jim_AppendCoreCommand}, + {"debug", Jim_DebugCoreCommand}, + {"eval", Jim_EvalCoreCommand}, + {"uplevel", Jim_UplevelCoreCommand}, + {"expr", Jim_ExprCoreCommand}, + {"break", Jim_BreakCoreCommand}, + {"continue", Jim_ContinueCoreCommand}, + {"proc", Jim_ProcCoreCommand}, + {"concat", Jim_ConcatCoreCommand}, + {"return", Jim_ReturnCoreCommand}, + {"upvar", Jim_UpvarCoreCommand}, + {"global", Jim_GlobalCoreCommand}, + {"string", Jim_StringCoreCommand}, + {"time", Jim_TimeCoreCommand}, + {"exit", Jim_ExitCoreCommand}, + {"catch", Jim_CatchCoreCommand}, +#ifdef JIM_REFERENCES + {"ref", Jim_RefCoreCommand}, + {"getref", Jim_GetrefCoreCommand}, + {"setref", Jim_SetrefCoreCommand}, + {"finalize", Jim_FinalizeCoreCommand}, + {"collect", Jim_CollectCoreCommand}, +#endif + {"rename", Jim_RenameCoreCommand}, + {"dict", Jim_DictCoreCommand}, + {"subst", Jim_SubstCoreCommand}, + {"info", Jim_InfoCoreCommand}, + {"exists", Jim_ExistsCoreCommand}, + {"split", Jim_SplitCoreCommand}, + {"join", Jim_JoinCoreCommand}, + {"format", Jim_FormatCoreCommand}, + {"scan", Jim_ScanCoreCommand}, + {"error", Jim_ErrorCoreCommand}, + {"lrange", Jim_LrangeCoreCommand}, + {"lrepeat", Jim_LrepeatCoreCommand}, + {"env", Jim_EnvCoreCommand}, + {"source", Jim_SourceCoreCommand}, + {"lreverse", Jim_LreverseCoreCommand}, + {"range", Jim_RangeCoreCommand}, + {"rand", Jim_RandCoreCommand}, + {"tailcall", Jim_TailcallCoreCommand}, + {"local", Jim_LocalCoreCommand}, + {"upcall", Jim_UpcallCoreCommand}, + {"apply", Jim_ApplyCoreCommand}, + {NULL, NULL}, +}; + +void Jim_RegisterCoreCommands(Jim_Interp *interp) +{ + int i = 0; + + while (Jim_CoreCommandsTable[i].name != NULL) { + Jim_CreateCommand(interp, + Jim_CoreCommandsTable[i].name, Jim_CoreCommandsTable[i].cmdProc, NULL, NULL); + i++; + } +} + +void Jim_MakeErrorMessage(Jim_Interp *interp) +{ + Jim_Obj *argv[2]; + + argv[0] = Jim_NewStringObj(interp, "errorInfo", -1); + argv[1] = interp->result; + + Jim_EvalObjVector(interp, 2, argv); +} + +static void JimSetFailedEnumResult(Jim_Interp *interp, const char *arg, const char *badtype, + const char *prefix, const char *const *tablePtr, const char *name) +{ + int count; + char **tablePtrSorted; + int i; + + for (count = 0; tablePtr[count]; count++) { + } + + if (name == NULL) { + name = "option"; + } + + 
Jim_SetResultFormatted(interp, "%s%s \"%s\": must be ", badtype, name, arg); + tablePtrSorted = Jim_Alloc(sizeof(char *) * count); + memcpy(tablePtrSorted, tablePtr, sizeof(char *) * count); + qsort(tablePtrSorted, count, sizeof(char *), qsortCompareStringPointers); + for (i = 0; i < count; i++) { + if (i + 1 == count && count > 1) { + Jim_AppendString(interp, Jim_GetResult(interp), "or ", -1); + } + Jim_AppendStrings(interp, Jim_GetResult(interp), prefix, tablePtrSorted[i], NULL); + if (i + 1 != count) { + Jim_AppendString(interp, Jim_GetResult(interp), ", ", -1); + } + } + Jim_Free(tablePtrSorted); +} + +int Jim_GetEnum(Jim_Interp *interp, Jim_Obj *objPtr, + const char *const *tablePtr, int *indexPtr, const char *name, int flags) +{ + const char *bad = "bad "; + const char *const *entryPtr = NULL; + int i; + int match = -1; + int arglen; + const char *arg = Jim_GetString(objPtr, &arglen); + + *indexPtr = -1; + + for (entryPtr = tablePtr, i = 0; *entryPtr != NULL; entryPtr++, i++) { + if (Jim_CompareStringImmediate(interp, objPtr, *entryPtr)) { + + *indexPtr = i; + return JIM_OK; + } + if (flags & JIM_ENUM_ABBREV) { + if (strncmp(arg, *entryPtr, arglen) == 0) { + if (*arg == '-' && arglen == 1) { + break; + } + if (match >= 0) { + bad = "ambiguous "; + goto ambiguous; + } + match = i; + } + } + } + + + if (match >= 0) { + *indexPtr = match; + return JIM_OK; + } + + ambiguous: + if (flags & JIM_ERRMSG) { + JimSetFailedEnumResult(interp, arg, bad, "", tablePtr, name); + } + return JIM_ERR; +} + +int Jim_FindByName(const char *name, const char * const array[], size_t len) +{ + int i; + + for (i = 0; i < (int)len; i++) { + if (array[i] && strcmp(array[i], name) == 0) { + return i; + } + } + return -1; +} + +int Jim_IsDict(Jim_Obj *objPtr) +{ + return objPtr->typePtr == &dictObjType; +} + +int Jim_IsList(Jim_Obj *objPtr) +{ + return objPtr->typePtr == &listObjType; +} + +void Jim_SetResultFormatted(Jim_Interp *interp, const char *format, ...) 
+{ + + int len = strlen(format); + int extra = 0; + int n = 0; + const char *params[5]; + char *buf; + va_list args; + int i; + + va_start(args, format); + + for (i = 0; i < len && n < 5; i++) { + int l; + + if (strncmp(format + i, "%s", 2) == 0) { + params[n] = va_arg(args, char *); + + l = strlen(params[n]); + } + else if (strncmp(format + i, "%#s", 3) == 0) { + Jim_Obj *objPtr = va_arg(args, Jim_Obj *); + + params[n] = Jim_GetString(objPtr, &l); + } + else { + if (format[i] == '%') { + i++; + } + continue; + } + n++; + extra += l; + } + + len += extra; + buf = Jim_Alloc(len + 1); + len = snprintf(buf, len + 1, format, params[0], params[1], params[2], params[3], params[4]); + + va_end(args); + + Jim_SetResult(interp, Jim_NewStringObjNoAlloc(interp, buf, len)); +} + + +#ifndef jim_ext_package +int Jim_PackageProvide(Jim_Interp *interp, const char *name, const char *ver, int flags) +{ + return JIM_OK; +} +#endif +#ifndef jim_ext_aio +FILE *Jim_AioFilehandle(Jim_Interp *interp, Jim_Obj *fhObj) +{ + Jim_SetResultString(interp, "aio not enabled", -1); + return NULL; +} +#endif + + +#include +#include + + +static int subcmd_null(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + + return JIM_OK; +} + +static const jim_subcmd_type dummy_subcmd = { + "dummy", NULL, subcmd_null, 0, 0, JIM_MODFLAG_HIDDEN +}; + +static void add_commands(Jim_Interp *interp, const jim_subcmd_type * ct, const char *sep) +{ + const char *s = ""; + + for (; ct->cmd; ct++) { + if (!(ct->flags & JIM_MODFLAG_HIDDEN)) { + Jim_AppendStrings(interp, Jim_GetResult(interp), s, ct->cmd, NULL); + s = sep; + } + } +} + +static void bad_subcmd(Jim_Interp *interp, const jim_subcmd_type * command_table, const char *type, + Jim_Obj *cmd, Jim_Obj *subcmd) +{ + Jim_SetResult(interp, Jim_NewEmptyStringObj(interp)); + Jim_AppendStrings(interp, Jim_GetResult(interp), Jim_String(cmd), ", ", type, + " command \"", Jim_String(subcmd), "\": should be ", NULL); + add_commands(interp, command_table, ", "); +} + +static void show_cmd_usage(Jim_Interp *interp, const jim_subcmd_type * command_table, int argc, + Jim_Obj *const *argv) +{ + Jim_SetResult(interp, Jim_NewEmptyStringObj(interp)); + Jim_AppendStrings(interp, Jim_GetResult(interp), "Usage: \"", Jim_String(argv[0]), + " command ... 
\", where command is one of: ", NULL); + add_commands(interp, command_table, ", "); +} + +static void add_cmd_usage(Jim_Interp *interp, const jim_subcmd_type * ct, Jim_Obj *cmd) +{ + if (cmd) { + Jim_AppendStrings(interp, Jim_GetResult(interp), Jim_String(cmd), " ", NULL); + } + Jim_AppendStrings(interp, Jim_GetResult(interp), ct->cmd, NULL); + if (ct->args && *ct->args) { + Jim_AppendStrings(interp, Jim_GetResult(interp), " ", ct->args, NULL); + } +} + +static void set_wrong_args(Jim_Interp *interp, const jim_subcmd_type * command_table, Jim_Obj *subcmd) +{ + Jim_SetResultString(interp, "wrong # args: should be \"", -1); + add_cmd_usage(interp, command_table, subcmd); + Jim_AppendStrings(interp, Jim_GetResult(interp), "\"", NULL); +} + +const jim_subcmd_type *Jim_ParseSubCmd(Jim_Interp *interp, const jim_subcmd_type * command_table, + int argc, Jim_Obj *const *argv) +{ + const jim_subcmd_type *ct; + const jim_subcmd_type *partial = 0; + int cmdlen; + Jim_Obj *cmd; + const char *cmdstr; + const char *cmdname; + int help = 0; + + cmdname = Jim_String(argv[0]); + + if (argc < 2) { + Jim_SetResult(interp, Jim_NewEmptyStringObj(interp)); + Jim_AppendStrings(interp, Jim_GetResult(interp), "wrong # args: should be \"", cmdname, + " command ...\"\n", NULL); + Jim_AppendStrings(interp, Jim_GetResult(interp), "Use \"", cmdname, " -help ?command?\" for help", NULL); + return 0; + } + + cmd = argv[1]; + + + if (Jim_CompareStringImmediate(interp, cmd, "-help")) { + if (argc == 2) { + + show_cmd_usage(interp, command_table, argc, argv); + return &dummy_subcmd; + } + help = 1; + + + cmd = argv[2]; + } + + + if (Jim_CompareStringImmediate(interp, cmd, "-commands")) { + + Jim_SetResult(interp, Jim_NewEmptyStringObj(interp)); + add_commands(interp, command_table, " "); + return &dummy_subcmd; + } + + cmdstr = Jim_GetString(cmd, &cmdlen); + + for (ct = command_table; ct->cmd; ct++) { + if (Jim_CompareStringImmediate(interp, cmd, ct->cmd)) { + + break; + } + if (strncmp(cmdstr, ct->cmd, cmdlen) == 0) { + if (partial) { + + if (help) { + + show_cmd_usage(interp, command_table, argc, argv); + return &dummy_subcmd; + } + bad_subcmd(interp, command_table, "ambiguous", argv[0], argv[1 + help]); + return 0; + } + partial = ct; + } + continue; + } + + + if (partial && !ct->cmd) { + ct = partial; + } + + if (!ct->cmd) { + + if (help) { + + show_cmd_usage(interp, command_table, argc, argv); + return &dummy_subcmd; + } + bad_subcmd(interp, command_table, "unknown", argv[0], argv[1 + help]); + return 0; + } + + if (help) { + Jim_SetResultString(interp, "Usage: ", -1); + + add_cmd_usage(interp, ct, argv[0]); + return &dummy_subcmd; + } + + + if (argc - 2 < ct->minargs || (ct->maxargs >= 0 && argc - 2 > ct->maxargs)) { + Jim_SetResultString(interp, "wrong # args: should be \"", -1); + + add_cmd_usage(interp, ct, argv[0]); + Jim_AppendStrings(interp, Jim_GetResult(interp), "\"", NULL); + + return 0; + } + + + return ct; +} + +int Jim_CallSubCmd(Jim_Interp *interp, const jim_subcmd_type * ct, int argc, Jim_Obj *const *argv) +{ + int ret = JIM_ERR; + + if (ct) { + if (ct->flags & JIM_MODFLAG_FULLARGV) { + ret = ct->function(interp, argc, argv); + } + else { + ret = ct->function(interp, argc - 2, argv + 2); + } + if (ret < 0) { + set_wrong_args(interp, ct, argv[0]); + ret = JIM_ERR; + } + } + return ret; +} + +int Jim_SubCmdProc(Jim_Interp *interp, int argc, Jim_Obj *const *argv) +{ + const jim_subcmd_type *ct = + Jim_ParseSubCmd(interp, (const jim_subcmd_type *)Jim_CmdPrivData(interp), argc, argv); + + return 
Jim_CallSubCmd(interp, ct, argc, argv); +} + +#include +#include +#include +#include +#include + + +int utf8_fromunicode(char *p, unsigned uc) +{ + if (uc <= 0x7f) { + *p = uc; + return 1; + } + else if (uc <= 0x7ff) { + *p++ = 0xc0 | ((uc & 0x7c0) >> 6); + *p = 0x80 | (uc & 0x3f); + return 2; + } + else if (uc <= 0xffff) { + *p++ = 0xe0 | ((uc & 0xf000) >> 12); + *p++ = 0x80 | ((uc & 0xfc0) >> 6); + *p = 0x80 | (uc & 0x3f); + return 3; + } + + else { + *p++ = 0xf0 | ((uc & 0x1c0000) >> 18); + *p++ = 0x80 | ((uc & 0x3f000) >> 12); + *p++ = 0x80 | ((uc & 0xfc0) >> 6); + *p = 0x80 | (uc & 0x3f); + return 4; + } +} + +#include +#include + + +#define JIM_INTEGER_SPACE 24 +#define MAX_FLOAT_WIDTH 320 + +Jim_Obj *Jim_FormatString(Jim_Interp *interp, Jim_Obj *fmtObjPtr, int objc, Jim_Obj *const *objv) +{ + const char *span, *format, *formatEnd, *msg; + int numBytes = 0, objIndex = 0, gotXpg = 0, gotSequential = 0; + static const char * const mixedXPG = + "cannot mix \"%\" and \"%n$\" conversion specifiers"; + static const char * const badIndex[2] = { + "not enough arguments for all format specifiers", + "\"%n$\" argument index out of range" + }; + int formatLen; + Jim_Obj *resultPtr; + + char *num_buffer = NULL; + int num_buffer_size = 0; + + span = format = Jim_GetString(fmtObjPtr, &formatLen); + formatEnd = format + formatLen; + resultPtr = Jim_NewEmptyStringObj(interp); + + while (format != formatEnd) { + char *end; + int gotMinus, sawFlag; + int gotPrecision, useShort; + long width, precision; + int newXpg; + int ch; + int step; + int doubleType; + char pad = ' '; + char spec[2*JIM_INTEGER_SPACE + 12]; + char *p; + + int formatted_chars; + int formatted_bytes; + const char *formatted_buf; + + step = utf8_tounicode(format, &ch); + format += step; + if (ch != '%') { + numBytes += step; + continue; + } + if (numBytes) { + Jim_AppendString(interp, resultPtr, span, numBytes); + numBytes = 0; + } + + + step = utf8_tounicode(format, &ch); + if (ch == '%') { + span = format; + numBytes = step; + format += step; + continue; + } + + + newXpg = 0; + if (isdigit(ch)) { + int position = strtoul(format, &end, 10); + if (*end == '$') { + newXpg = 1; + objIndex = position - 1; + format = end + 1; + step = utf8_tounicode(format, &ch); + } + } + if (newXpg) { + if (gotSequential) { + msg = mixedXPG; + goto errorMsg; + } + gotXpg = 1; + } else { + if (gotXpg) { + msg = mixedXPG; + goto errorMsg; + } + gotSequential = 1; + } + if ((objIndex < 0) || (objIndex >= objc)) { + msg = badIndex[gotXpg]; + goto errorMsg; + } + + p = spec; + *p++ = '%'; + + gotMinus = 0; + sawFlag = 1; + do { + switch (ch) { + case '-': + gotMinus = 1; + break; + case '0': + pad = ch; + break; + case ' ': + case '+': + case '#': + break; + default: + sawFlag = 0; + continue; + } + *p++ = ch; + format += step; + step = utf8_tounicode(format, &ch); + } while (sawFlag); + + + width = 0; + if (isdigit(ch)) { + width = strtoul(format, &end, 10); + format = end; + step = utf8_tounicode(format, &ch); + } else if (ch == '*') { + if (objIndex >= objc - 1) { + msg = badIndex[gotXpg]; + goto errorMsg; + } + if (Jim_GetLong(interp, objv[objIndex], &width) != JIM_OK) { + goto error; + } + if (width < 0) { + width = -width; + if (!gotMinus) { + *p++ = '-'; + gotMinus = 1; + } + } + objIndex++; + format += step; + step = utf8_tounicode(format, &ch); + } + + + gotPrecision = precision = 0; + if (ch == '.') { + gotPrecision = 1; + format += step; + step = utf8_tounicode(format, &ch); + } + if (isdigit(ch)) { + precision = strtoul(format, &end, 10); + 
format = end; + step = utf8_tounicode(format, &ch); + } else if (ch == '*') { + if (objIndex >= objc - 1) { + msg = badIndex[gotXpg]; + goto errorMsg; + } + if (Jim_GetLong(interp, objv[objIndex], &precision) != JIM_OK) { + goto error; + } + + + if (precision < 0) { + precision = 0; + } + objIndex++; + format += step; + step = utf8_tounicode(format, &ch); + } + + + useShort = 0; + if (ch == 'h') { + useShort = 1; + format += step; + step = utf8_tounicode(format, &ch); + } else if (ch == 'l') { + + format += step; + step = utf8_tounicode(format, &ch); + if (ch == 'l') { + format += step; + step = utf8_tounicode(format, &ch); + } + } + + format += step; + span = format; + + + if (ch == 'i') { + ch = 'd'; + } + + doubleType = 0; + + switch (ch) { + case '\0': + msg = "format string ended in middle of field specifier"; + goto errorMsg; + case 's': { + formatted_buf = Jim_GetString(objv[objIndex], &formatted_bytes); + formatted_chars = Jim_Utf8Length(interp, objv[objIndex]); + if (gotPrecision && (precision < formatted_chars)) { + + formatted_chars = precision; + formatted_bytes = utf8_index(formatted_buf, precision); + } + break; + } + case 'c': { + jim_wide code; + + if (Jim_GetWide(interp, objv[objIndex], &code) != JIM_OK) { + goto error; + } + + formatted_bytes = utf8_getchars(spec, code); + formatted_buf = spec; + formatted_chars = 1; + break; + } + case 'b': { + unsigned jim_wide w; + int length; + int i; + int j; + + if (Jim_GetWide(interp, objv[objIndex], (jim_wide *)&w) != JIM_OK) { + goto error; + } + length = sizeof(w) * 8; + + + + if (num_buffer_size < length + 1) { + num_buffer_size = length + 1; + num_buffer = Jim_Realloc(num_buffer, num_buffer_size); + } + + j = 0; + for (i = length; i > 0; ) { + i--; + if (w & ((unsigned jim_wide)1 << i)) { + num_buffer[j++] = '1'; + } + else if (j || i == 0) { + num_buffer[j++] = '0'; + } + } + num_buffer[j] = 0; + formatted_chars = formatted_bytes = j; + formatted_buf = num_buffer; + break; + } + + case 'e': + case 'E': + case 'f': + case 'g': + case 'G': + doubleType = 1; + + case 'd': + case 'u': + case 'o': + case 'x': + case 'X': { + jim_wide w; + double d; + int length; + + + if (width) { + p += sprintf(p, "%ld", width); + } + if (gotPrecision) { + p += sprintf(p, ".%ld", precision); + } + + + if (doubleType) { + if (Jim_GetDouble(interp, objv[objIndex], &d) != JIM_OK) { + goto error; + } + length = MAX_FLOAT_WIDTH; + } + else { + if (Jim_GetWide(interp, objv[objIndex], &w) != JIM_OK) { + goto error; + } + length = JIM_INTEGER_SPACE; + if (useShort) { + if (ch == 'd') { + w = (short)w; + } + else { + w = (unsigned short)w; + } + } + *p++ = 'l'; +#ifdef HAVE_LONG_LONG + if (sizeof(long long) == sizeof(jim_wide)) { + *p++ = 'l'; + } +#endif + } + + *p++ = (char) ch; + *p = '\0'; + + + if (width > length) { + length = width; + } + if (gotPrecision) { + length += precision; + } + + + if (num_buffer_size < length + 1) { + num_buffer_size = length + 1; + num_buffer = Jim_Realloc(num_buffer, num_buffer_size); + } + + if (doubleType) { + snprintf(num_buffer, length + 1, spec, d); + } + else { + formatted_bytes = snprintf(num_buffer, length + 1, spec, w); + } + formatted_chars = formatted_bytes = strlen(num_buffer); + formatted_buf = num_buffer; + break; + } + + default: { + + spec[0] = ch; + spec[1] = '\0'; + Jim_SetResultFormatted(interp, "bad field specifier \"%s\"", spec); + goto error; + } + } + + if (!gotMinus) { + while (formatted_chars < width) { + Jim_AppendString(interp, resultPtr, &pad, 1); + formatted_chars++; + } + } + + 
Jim_AppendString(interp, resultPtr, formatted_buf, formatted_bytes); + + while (formatted_chars < width) { + Jim_AppendString(interp, resultPtr, &pad, 1); + formatted_chars++; + } + + objIndex += gotSequential; + } + if (numBytes) { + Jim_AppendString(interp, resultPtr, span, numBytes); + } + + Jim_Free(num_buffer); + return resultPtr; + + errorMsg: + Jim_SetResultString(interp, msg, -1); + error: + Jim_FreeNewObj(interp, resultPtr); + Jim_Free(num_buffer); + return NULL; +} + + +#if defined(JIM_REGEXP) +#include +#include +#include +#include + + + +#define REG_MAX_PAREN 100 + + + +#define END 0 +#define BOL 1 +#define EOL 2 +#define ANY 3 +#define ANYOF 4 +#define ANYBUT 5 +#define BRANCH 6 +#define BACK 7 +#define EXACTLY 8 +#define NOTHING 9 +#define REP 10 +#define REPMIN 11 +#define REPX 12 +#define REPXMIN 13 + +#define WORDA 15 +#define WORDZ 16 + +#define OPENNC 1000 +#define OPEN 1001 + + + + +#define CLOSENC 2000 +#define CLOSE 2001 +#define CLOSE_END (CLOSE+REG_MAX_PAREN) + +#define REG_MAGIC 0xFADED00D + + +#define OP(preg, p) (preg->program[p]) +#define NEXT(preg, p) (preg->program[p + 1]) +#define OPERAND(p) ((p) + 2) + + + + +#define FAIL(R,M) { (R)->err = (M); return (M); } +#define ISMULT(c) ((c) == '*' || (c) == '+' || (c) == '?' || (c) == '{') +#define META "^$.[()|?{+*" + +#define HASWIDTH 1 +#define SIMPLE 2 +#define SPSTART 4 +#define WORST 0 + +#define MAX_REP_COUNT 1000000 + +static int reg(regex_t *preg, int paren , int *flagp ); +static int regpiece(regex_t *preg, int *flagp ); +static int regbranch(regex_t *preg, int *flagp ); +static int regatom(regex_t *preg, int *flagp ); +static int regnode(regex_t *preg, int op ); +static int regnext(regex_t *preg, int p ); +static void regc(regex_t *preg, int b ); +static int reginsert(regex_t *preg, int op, int size, int opnd ); +static void regtail(regex_t *preg, int p, int val); +static void regoptail(regex_t *preg, int p, int val ); +static int regopsize(regex_t *preg, int p ); + +static int reg_range_find(const int *string, int c); +static const char *str_find(const char *string, int c, int nocase); +static int prefix_cmp(const int *prog, int proglen, const char *string, int nocase); + + +#ifdef DEBUG +static int regnarrate = 0; +static void regdump(regex_t *preg); +static const char *regprop( int op ); +#endif + + +static int str_int_len(const int *seq) +{ + int n = 0; + while (*seq++) { + n++; + } + return n; +} + +int regcomp(regex_t *preg, const char *exp, int cflags) +{ + int scan; + int longest; + unsigned len; + int flags; + +#ifdef DEBUG + fprintf(stderr, "Compiling: '%s'\n", exp); +#endif + memset(preg, 0, sizeof(*preg)); + + if (exp == NULL) + FAIL(preg, REG_ERR_NULL_ARGUMENT); + + + preg->cflags = cflags; + preg->regparse = exp; + + + preg->proglen = (strlen(exp) + 1) * 5; + preg->program = malloc(preg->proglen * sizeof(int)); + if (preg->program == NULL) + FAIL(preg, REG_ERR_NOMEM); + + regc(preg, REG_MAGIC); + if (reg(preg, 0, &flags) == 0) { + return preg->err; + } + + + if (preg->re_nsub >= REG_MAX_PAREN) + FAIL(preg,REG_ERR_TOO_BIG); + + + preg->regstart = 0; + preg->reganch = 0; + preg->regmust = 0; + preg->regmlen = 0; + scan = 1; + if (OP(preg, regnext(preg, scan)) == END) { + scan = OPERAND(scan); + + + if (OP(preg, scan) == EXACTLY) { + preg->regstart = preg->program[OPERAND(scan)]; + } + else if (OP(preg, scan) == BOL) + preg->reganch++; + + if (flags&SPSTART) { + longest = 0; + len = 0; + for (; scan != 0; scan = regnext(preg, scan)) { + if (OP(preg, scan) == EXACTLY) { + int plen = 
str_int_len(preg->program + OPERAND(scan)); + if (plen >= len) { + longest = OPERAND(scan); + len = plen; + } + } + } + preg->regmust = longest; + preg->regmlen = len; + } + } + +#ifdef DEBUG + regdump(preg); +#endif + + return 0; +} + +static int reg(regex_t *preg, int paren , int *flagp ) +{ + int ret; + int br; + int ender; + int parno = 0; + int flags; + + *flagp = HASWIDTH; + + + if (paren) { + if (preg->regparse[0] == '?' && preg->regparse[1] == ':') { + + preg->regparse += 2; + parno = -1; + } + else { + parno = ++preg->re_nsub; + } + ret = regnode(preg, OPEN+parno); + } else + ret = 0; + + + br = regbranch(preg, &flags); + if (br == 0) + return 0; + if (ret != 0) + regtail(preg, ret, br); + else + ret = br; + if (!(flags&HASWIDTH)) + *flagp &= ~HASWIDTH; + *flagp |= flags&SPSTART; + while (*preg->regparse == '|') { + preg->regparse++; + br = regbranch(preg, &flags); + if (br == 0) + return 0; + regtail(preg, ret, br); + if (!(flags&HASWIDTH)) + *flagp &= ~HASWIDTH; + *flagp |= flags&SPSTART; + } + + + ender = regnode(preg, (paren) ? CLOSE+parno : END); + regtail(preg, ret, ender); + + + for (br = ret; br != 0; br = regnext(preg, br)) + regoptail(preg, br, ender); + + + if (paren && *preg->regparse++ != ')') { + preg->err = REG_ERR_UNMATCHED_PAREN; + return 0; + } else if (!paren && *preg->regparse != '\0') { + if (*preg->regparse == ')') { + preg->err = REG_ERR_UNMATCHED_PAREN; + return 0; + } else { + preg->err = REG_ERR_JUNK_ON_END; + return 0; + } + } + + return(ret); +} + +static int regbranch(regex_t *preg, int *flagp ) +{ + int ret; + int chain; + int latest; + int flags; + + *flagp = WORST; + + ret = regnode(preg, BRANCH); + chain = 0; + while (*preg->regparse != '\0' && *preg->regparse != ')' && + *preg->regparse != '|') { + latest = regpiece(preg, &flags); + if (latest == 0) + return 0; + *flagp |= flags&HASWIDTH; + if (chain == 0) { + *flagp |= flags&SPSTART; + } + else { + regtail(preg, chain, latest); + } + chain = latest; + } + if (chain == 0) + (void) regnode(preg, NOTHING); + + return(ret); +} + +static int regpiece(regex_t *preg, int *flagp) +{ + int ret; + char op; + int next; + int flags; + int min; + int max; + + ret = regatom(preg, &flags); + if (ret == 0) + return 0; + + op = *preg->regparse; + if (!ISMULT(op)) { + *flagp = flags; + return(ret); + } + + if (!(flags&HASWIDTH) && op != '?') { + preg->err = REG_ERR_OPERAND_COULD_BE_EMPTY; + return 0; + } + + + if (op == '{') { + char *end; + + min = strtoul(preg->regparse + 1, &end, 10); + if (end == preg->regparse + 1) { + preg->err = REG_ERR_BAD_COUNT; + return 0; + } + if (*end == '}') { + max = min; + } + else { + preg->regparse = end; + max = strtoul(preg->regparse + 1, &end, 10); + if (*end != '}') { + preg->err = REG_ERR_UNMATCHED_BRACES; + return 0; + } + } + if (end == preg->regparse + 1) { + max = MAX_REP_COUNT; + } + else if (max < min || max >= 100) { + preg->err = REG_ERR_BAD_COUNT; + return 0; + } + if (min >= 100) { + preg->err = REG_ERR_BAD_COUNT; + return 0; + } + + preg->regparse = strchr(preg->regparse, '}'); + } + else { + min = (op == '+'); + max = (op == '?' ? 1 : MAX_REP_COUNT); + } + + if (preg->regparse[1] == '?') { + preg->regparse++; + next = reginsert(preg, flags & SIMPLE ? REPMIN : REPXMIN, 5, ret); + } + else { + next = reginsert(preg, flags & SIMPLE ? REP: REPX, 5, ret); + } + preg->program[ret + 2] = max; + preg->program[ret + 3] = min; + preg->program[ret + 4] = 0; + + *flagp = (min) ? 
(WORST|HASWIDTH) : (WORST|SPSTART); + + if (!(flags & SIMPLE)) { + int back = regnode(preg, BACK); + regtail(preg, back, ret); + regtail(preg, next, back); + } + + preg->regparse++; + if (ISMULT(*preg->regparse)) { + preg->err = REG_ERR_NESTED_COUNT; + return 0; + } + + return ret; +} + +static void reg_addrange(regex_t *preg, int lower, int upper) +{ + if (lower > upper) { + reg_addrange(preg, upper, lower); + } + + regc(preg, upper - lower + 1); + regc(preg, lower); +} + +static void reg_addrange_str(regex_t *preg, const char *str) +{ + while (*str) { + reg_addrange(preg, *str, *str); + str++; + } +} + +static int reg_utf8_tounicode_case(const char *s, int *uc, int upper) +{ + int l = utf8_tounicode(s, uc); + if (upper) { + *uc = utf8_upper(*uc); + } + return l; +} + +static int hexdigitval(int c) +{ + if (c >= '0' && c <= '9') + return c - '0'; + if (c >= 'a' && c <= 'f') + return c - 'a' + 10; + if (c >= 'A' && c <= 'F') + return c - 'A' + 10; + return -1; +} + +static int parse_hex(const char *s, int n, int *uc) +{ + int val = 0; + int k; + + for (k = 0; k < n; k++) { + int c = hexdigitval(*s++); + if (c == -1) { + break; + } + val = (val << 4) | c; + } + if (k) { + *uc = val; + } + return k; +} + +static int reg_decode_escape(const char *s, int *ch) +{ + int n; + const char *s0 = s; + + *ch = *s++; + + switch (*ch) { + case 'b': *ch = '\b'; break; + case 'e': *ch = 27; break; + case 'f': *ch = '\f'; break; + case 'n': *ch = '\n'; break; + case 'r': *ch = '\r'; break; + case 't': *ch = '\t'; break; + case 'v': *ch = '\v'; break; + case 'u': + if (*s == '{') { + + n = parse_hex(s + 1, 6, ch); + if (n > 0 && s[n + 1] == '}' && *ch >= 0 && *ch <= 0x1fffff) { + s += n + 2; + } + else { + + *ch = 'u'; + } + } + else if ((n = parse_hex(s, 4, ch)) > 0) { + s += n; + } + break; + case 'U': + if ((n = parse_hex(s, 8, ch)) > 0) { + s += n; + } + break; + case 'x': + if ((n = parse_hex(s, 2, ch)) > 0) { + s += n; + } + break; + case '\0': + s--; + *ch = '\\'; + break; + } + return s - s0; +} + +static int regatom(regex_t *preg, int *flagp) +{ + int ret; + int flags; + int nocase = (preg->cflags & REG_ICASE); + + int ch; + int n = reg_utf8_tounicode_case(preg->regparse, &ch, nocase); + + *flagp = WORST; + + preg->regparse += n; + switch (ch) { + + case '^': + ret = regnode(preg, BOL); + break; + case '$': + ret = regnode(preg, EOL); + break; + case '.': + ret = regnode(preg, ANY); + *flagp |= HASWIDTH|SIMPLE; + break; + case '[': { + const char *pattern = preg->regparse; + + if (*pattern == '^') { + ret = regnode(preg, ANYBUT); + pattern++; + } else + ret = regnode(preg, ANYOF); + + + if (*pattern == ']' || *pattern == '-') { + reg_addrange(preg, *pattern, *pattern); + pattern++; + } + + while (*pattern && *pattern != ']') { + + int start; + int end; + + pattern += reg_utf8_tounicode_case(pattern, &start, nocase); + if (start == '\\') { + pattern += reg_decode_escape(pattern, &start); + if (start == 0) { + preg->err = REG_ERR_NULL_CHAR; + return 0; + } + } + if (pattern[0] == '-' && pattern[1] && pattern[1] != ']') { + + pattern += utf8_tounicode(pattern, &end); + pattern += reg_utf8_tounicode_case(pattern, &end, nocase); + if (end == '\\') { + pattern += reg_decode_escape(pattern, &end); + if (end == 0) { + preg->err = REG_ERR_NULL_CHAR; + return 0; + } + } + + reg_addrange(preg, start, end); + continue; + } + if (start == '[') { + if (strncmp(pattern, ":alpha:]", 8) == 0) { + if ((preg->cflags & REG_ICASE) == 0) { + reg_addrange(preg, 'a', 'z'); + } + reg_addrange(preg, 'A', 'Z'); + pattern += 
8; + continue; + } + if (strncmp(pattern, ":alnum:]", 8) == 0) { + if ((preg->cflags & REG_ICASE) == 0) { + reg_addrange(preg, 'a', 'z'); + } + reg_addrange(preg, 'A', 'Z'); + reg_addrange(preg, '0', '9'); + pattern += 8; + continue; + } + if (strncmp(pattern, ":space:]", 8) == 0) { + reg_addrange_str(preg, " \t\r\n\f\v"); + pattern += 8; + continue; + } + } + + reg_addrange(preg, start, start); + } + regc(preg, '\0'); + + if (*pattern) { + pattern++; + } + preg->regparse = pattern; + + *flagp |= HASWIDTH|SIMPLE; + } + break; + case '(': + ret = reg(preg, 1, &flags); + if (ret == 0) + return 0; + *flagp |= flags&(HASWIDTH|SPSTART); + break; + case '\0': + case '|': + case ')': + preg->err = REG_ERR_INTERNAL; + return 0; + case '?': + case '+': + case '*': + case '{': + preg->err = REG_ERR_COUNT_FOLLOWS_NOTHING; + return 0; + case '\\': + switch (*preg->regparse++) { + case '\0': + preg->err = REG_ERR_TRAILING_BACKSLASH; + return 0; + case '<': + case 'm': + ret = regnode(preg, WORDA); + break; + case '>': + case 'M': + ret = regnode(preg, WORDZ); + break; + case 'd': + ret = regnode(preg, ANYOF); + reg_addrange(preg, '0', '9'); + regc(preg, '\0'); + *flagp |= HASWIDTH|SIMPLE; + break; + case 'w': + ret = regnode(preg, ANYOF); + if ((preg->cflags & REG_ICASE) == 0) { + reg_addrange(preg, 'a', 'z'); + } + reg_addrange(preg, 'A', 'Z'); + reg_addrange(preg, '0', '9'); + reg_addrange(preg, '_', '_'); + regc(preg, '\0'); + *flagp |= HASWIDTH|SIMPLE; + break; + case 's': + ret = regnode(preg, ANYOF); + reg_addrange_str(preg," \t\r\n\f\v"); + regc(preg, '\0'); + *flagp |= HASWIDTH|SIMPLE; + break; + + default: + + + preg->regparse--; + goto de_fault; + } + break; + de_fault: + default: { + int added = 0; + + + preg->regparse -= n; + + ret = regnode(preg, EXACTLY); + + + + while (*preg->regparse && strchr(META, *preg->regparse) == NULL) { + n = reg_utf8_tounicode_case(preg->regparse, &ch, (preg->cflags & REG_ICASE)); + if (ch == '\\' && preg->regparse[n]) { + if (strchr("<>mMwds", preg->regparse[n])) { + + break; + } + n += reg_decode_escape(preg->regparse + n, &ch); + if (ch == 0) { + preg->err = REG_ERR_NULL_CHAR; + return 0; + } + } + + + if (ISMULT(preg->regparse[n])) { + + if (added) { + + break; + } + + regc(preg, ch); + added++; + preg->regparse += n; + break; + } + + + regc(preg, ch); + added++; + preg->regparse += n; + } + regc(preg, '\0'); + + *flagp |= HASWIDTH; + if (added == 1) + *flagp |= SIMPLE; + break; + } + break; + } + + return(ret); +} + +static void reg_grow(regex_t *preg, int n) +{ + if (preg->p + n >= preg->proglen) { + preg->proglen = (preg->p + n) * 2; + preg->program = realloc(preg->program, preg->proglen * sizeof(int)); + } +} + + +static int regnode(regex_t *preg, int op) +{ + reg_grow(preg, 2); + + + preg->program[preg->p++] = op; + preg->program[preg->p++] = 0; + + + return preg->p - 2; +} + +static void regc(regex_t *preg, int b ) +{ + reg_grow(preg, 1); + preg->program[preg->p++] = b; +} + +static int reginsert(regex_t *preg, int op, int size, int opnd ) +{ + reg_grow(preg, size); + + + memmove(preg->program + opnd + size, preg->program + opnd, sizeof(int) * (preg->p - opnd)); + + memset(preg->program + opnd, 0, sizeof(int) * size); + + preg->program[opnd] = op; + + preg->p += size; + + return opnd + size; +} + +static void regtail(regex_t *preg, int p, int val) +{ + int scan; + int temp; + int offset; + + + scan = p; + for (;;) { + temp = regnext(preg, scan); + if (temp == 0) + break; + scan = temp; + } + + if (OP(preg, scan) == BACK) + offset = scan - val; + else + 
offset = val - scan; + + preg->program[scan + 1] = offset; +} + + +static void regoptail(regex_t *preg, int p, int val ) +{ + + if (p != 0 && OP(preg, p) == BRANCH) { + regtail(preg, OPERAND(p), val); + } +} + + +static int regtry(regex_t *preg, const char *string ); +static int regmatch(regex_t *preg, int prog); +static int regrepeat(regex_t *preg, int p, int max); + +int regexec(regex_t *preg, const char *string, size_t nmatch, regmatch_t pmatch[], int eflags) +{ + const char *s; + int scan; + + + if (preg == NULL || preg->program == NULL || string == NULL) { + return REG_ERR_NULL_ARGUMENT; + } + + + if (*preg->program != REG_MAGIC) { + return REG_ERR_CORRUPTED; + } + +#ifdef DEBUG + fprintf(stderr, "regexec: %s\n", string); + regdump(preg); +#endif + + preg->eflags = eflags; + preg->pmatch = pmatch; + preg->nmatch = nmatch; + preg->start = string; + + + for (scan = OPERAND(1); scan != 0; scan += regopsize(preg, scan)) { + int op = OP(preg, scan); + if (op == END) + break; + if (op == REPX || op == REPXMIN) + preg->program[scan + 4] = 0; + } + + + if (preg->regmust != 0) { + s = string; + while ((s = str_find(s, preg->program[preg->regmust], preg->cflags & REG_ICASE)) != NULL) { + if (prefix_cmp(preg->program + preg->regmust, preg->regmlen, s, preg->cflags & REG_ICASE) >= 0) { + break; + } + s++; + } + if (s == NULL) + return REG_NOMATCH; + } + + + preg->regbol = string; + + + if (preg->reganch) { + if (eflags & REG_NOTBOL) { + + goto nextline; + } + while (1) { + if (regtry(preg, string)) { + return REG_NOERROR; + } + if (*string) { +nextline: + if (preg->cflags & REG_NEWLINE) { + + string = strchr(string, '\n'); + if (string) { + preg->regbol = ++string; + continue; + } + } + } + return REG_NOMATCH; + } + } + + + s = string; + if (preg->regstart != '\0') { + + while ((s = str_find(s, preg->regstart, preg->cflags & REG_ICASE)) != NULL) { + if (regtry(preg, s)) + return REG_NOERROR; + s++; + } + } + else + + while (1) { + if (regtry(preg, s)) + return REG_NOERROR; + if (*s == '\0') { + break; + } + else { + int c; + s += utf8_tounicode(s, &c); + } + } + + + return REG_NOMATCH; +} + + +static int regtry( regex_t *preg, const char *string ) +{ + int i; + + preg->reginput = string; + + for (i = 0; i < preg->nmatch; i++) { + preg->pmatch[i].rm_so = -1; + preg->pmatch[i].rm_eo = -1; + } + if (regmatch(preg, 1)) { + preg->pmatch[0].rm_so = string - preg->start; + preg->pmatch[0].rm_eo = preg->reginput - preg->start; + return(1); + } else + return(0); +} + +static int prefix_cmp(const int *prog, int proglen, const char *string, int nocase) +{ + const char *s = string; + while (proglen && *s) { + int ch; + int n = reg_utf8_tounicode_case(s, &ch, nocase); + if (ch != *prog) { + return -1; + } + prog++; + s += n; + proglen--; + } + if (proglen == 0) { + return s - string; + } + return -1; +} + +static int reg_range_find(const int *range, int c) +{ + while (*range) { + + if (c >= range[1] && c <= (range[0] + range[1] - 1)) { + return 1; + } + range += 2; + } + return 0; +} + +static const char *str_find(const char *string, int c, int nocase) +{ + if (nocase) { + + c = utf8_upper(c); + } + while (*string) { + int ch; + int n = reg_utf8_tounicode_case(string, &ch, nocase); + if (c == ch) { + return string; + } + string += n; + } + return NULL; +} + +static int reg_iseol(regex_t *preg, int ch) +{ + if (preg->cflags & REG_NEWLINE) { + return ch == '\0' || ch == '\n'; + } + else { + return ch == '\0'; + } +} + +static int regmatchsimplerepeat(regex_t *preg, int scan, int matchmin) +{ + int nextch = 
'\0'; + const char *save; + int no; + int c; + + int max = preg->program[scan + 2]; + int min = preg->program[scan + 3]; + int next = regnext(preg, scan); + + if (OP(preg, next) == EXACTLY) { + nextch = preg->program[OPERAND(next)]; + } + save = preg->reginput; + no = regrepeat(preg, scan + 5, max); + if (no < min) { + return 0; + } + if (matchmin) { + + max = no; + no = min; + } + + while (1) { + if (matchmin) { + if (no > max) { + break; + } + } + else { + if (no < min) { + break; + } + } + preg->reginput = save + utf8_index(save, no); + reg_utf8_tounicode_case(preg->reginput, &c, (preg->cflags & REG_ICASE)); + + if (reg_iseol(preg, nextch) || c == nextch) { + if (regmatch(preg, next)) { + return(1); + } + } + if (matchmin) { + + no++; + } + else { + + no--; + } + } + return(0); +} + +static int regmatchrepeat(regex_t *preg, int scan, int matchmin) +{ + int *scanpt = preg->program + scan; + + int max = scanpt[2]; + int min = scanpt[3]; + + + if (scanpt[4] < min) { + + scanpt[4]++; + if (regmatch(preg, scan + 5)) { + return 1; + } + scanpt[4]--; + return 0; + } + if (scanpt[4] > max) { + return 0; + } + + if (matchmin) { + + if (regmatch(preg, regnext(preg, scan))) { + return 1; + } + + scanpt[4]++; + if (regmatch(preg, scan + 5)) { + return 1; + } + scanpt[4]--; + return 0; + } + + if (scanpt[4] < max) { + scanpt[4]++; + if (regmatch(preg, scan + 5)) { + return 1; + } + scanpt[4]--; + } + + return regmatch(preg, regnext(preg, scan)); +} + + +static int regmatch(regex_t *preg, int prog) +{ + int scan; + int next; + const char *save; + + scan = prog; + +#ifdef DEBUG + if (scan != 0 && regnarrate) + fprintf(stderr, "%s(\n", regprop(scan)); +#endif + while (scan != 0) { + int n; + int c; +#ifdef DEBUG + if (regnarrate) { + fprintf(stderr, "%3d: %s...\n", scan, regprop(OP(preg, scan))); + } +#endif + next = regnext(preg, scan); + n = reg_utf8_tounicode_case(preg->reginput, &c, (preg->cflags & REG_ICASE)); + + switch (OP(preg, scan)) { + case BOL: + if (preg->reginput != preg->regbol) + return(0); + break; + case EOL: + if (!reg_iseol(preg, c)) { + return(0); + } + break; + case WORDA: + + if ((!isalnum(UCHAR(c))) && c != '_') + return(0); + + if (preg->reginput > preg->regbol && + (isalnum(UCHAR(preg->reginput[-1])) || preg->reginput[-1] == '_')) + return(0); + break; + case WORDZ: + + if (preg->reginput > preg->regbol) { + + if (reg_iseol(preg, c) || !isalnum(UCHAR(c)) || c != '_') { + c = preg->reginput[-1]; + + if (isalnum(UCHAR(c)) || c == '_') { + break; + } + } + } + + return(0); + + case ANY: + if (reg_iseol(preg, c)) + return 0; + preg->reginput += n; + break; + case EXACTLY: { + int opnd; + int len; + int slen; + + opnd = OPERAND(scan); + len = str_int_len(preg->program + opnd); + + slen = prefix_cmp(preg->program + opnd, len, preg->reginput, preg->cflags & REG_ICASE); + if (slen < 0) { + return(0); + } + preg->reginput += slen; + } + break; + case ANYOF: + if (reg_iseol(preg, c) || reg_range_find(preg->program + OPERAND(scan), c) == 0) { + return(0); + } + preg->reginput += n; + break; + case ANYBUT: + if (reg_iseol(preg, c) || reg_range_find(preg->program + OPERAND(scan), c) != 0) { + return(0); + } + preg->reginput += n; + break; + case NOTHING: + break; + case BACK: + break; + case BRANCH: + if (OP(preg, next) != BRANCH) + next = OPERAND(scan); + else { + do { + save = preg->reginput; + if (regmatch(preg, OPERAND(scan))) { + return(1); + } + preg->reginput = save; + scan = regnext(preg, scan); + } while (scan != 0 && OP(preg, scan) == BRANCH); + return(0); + + } + break; + case 
REP: + case REPMIN: + return regmatchsimplerepeat(preg, scan, OP(preg, scan) == REPMIN); + + case REPX: + case REPXMIN: + return regmatchrepeat(preg, scan, OP(preg, scan) == REPXMIN); + + case END: + return 1; + + case OPENNC: + case CLOSENC: + return regmatch(preg, next); + + default: + if (OP(preg, scan) >= OPEN+1 && OP(preg, scan) < CLOSE_END) { + save = preg->reginput; + if (regmatch(preg, next)) { + if (OP(preg, scan) < CLOSE) { + int no = OP(preg, scan) - OPEN; + if (no < preg->nmatch && preg->pmatch[no].rm_so == -1) { + preg->pmatch[no].rm_so = save - preg->start; + } + } + else { + int no = OP(preg, scan) - CLOSE; + if (no < preg->nmatch && preg->pmatch[no].rm_eo == -1) { + preg->pmatch[no].rm_eo = save - preg->start; + } + } + return(1); + } + return(0); + } + return REG_ERR_INTERNAL; + } + + scan = next; + } + + return REG_ERR_INTERNAL; +} + +static int regrepeat(regex_t *preg, int p, int max) +{ + int count = 0; + const char *scan; + int opnd; + int ch; + int n; + + scan = preg->reginput; + opnd = OPERAND(p); + switch (OP(preg, p)) { + case ANY: + + while (!reg_iseol(preg, *scan) && count < max) { + count++; + scan++; + } + break; + case EXACTLY: + while (count < max) { + n = reg_utf8_tounicode_case(scan, &ch, preg->cflags & REG_ICASE); + if (preg->program[opnd] != ch) { + break; + } + count++; + scan += n; + } + break; + case ANYOF: + while (count < max) { + n = reg_utf8_tounicode_case(scan, &ch, preg->cflags & REG_ICASE); + if (reg_iseol(preg, ch) || reg_range_find(preg->program + opnd, ch) == 0) { + break; + } + count++; + scan += n; + } + break; + case ANYBUT: + while (count < max) { + n = reg_utf8_tounicode_case(scan, &ch, preg->cflags & REG_ICASE); + if (reg_iseol(preg, ch) || reg_range_find(preg->program + opnd, ch) != 0) { + break; + } + count++; + scan += n; + } + break; + default: + preg->err = REG_ERR_INTERNAL; + count = 0; + break; + } + preg->reginput = scan; + + return(count); +} + +static int regnext(regex_t *preg, int p ) +{ + int offset; + + offset = NEXT(preg, p); + + if (offset == 0) + return 0; + + if (OP(preg, p) == BACK) + return(p-offset); + else + return(p+offset); +} + +static int regopsize(regex_t *preg, int p ) +{ + + switch (OP(preg, p)) { + case REP: + case REPMIN: + case REPX: + case REPXMIN: + return 5; + + case ANYOF: + case ANYBUT: + case EXACTLY: { + int s = p + 2; + while (preg->program[s++]) { + } + return s - p; + } + } + return 2; +} + + +size_t regerror(int errcode, const regex_t *preg, char *errbuf, size_t errbuf_size) +{ + static const char *error_strings[] = { + "success", + "no match", + "bad pattern", + "null argument", + "unknown error", + "too big", + "out of memory", + "too many ()", + "parentheses () not balanced", + "braces {} not balanced", + "invalid repetition count(s)", + "extra characters", + "*+ of empty atom", + "nested count", + "internal error", + "count follows nothing", + "trailing backslash", + "corrupted program", + "contains null char", + }; + const char *err; + + if (errcode < 0 || errcode >= REG_ERR_NUM) { + err = "Bad error code"; + } + else { + err = error_strings[errcode]; + } + + return snprintf(errbuf, errbuf_size, "%s", err); +} + +void regfree(regex_t *preg) +{ + free(preg->program); +} + +#endif + +#if defined(_WIN32) || defined(WIN32) +#ifndef STRICT +#define STRICT +#endif +#define WIN32_LEAN_AND_MEAN +#include + +#if defined(HAVE_DLOPEN_COMPAT) +void *dlopen(const char *path, int mode) +{ + JIM_NOTUSED(mode); + + return (void *)LoadLibraryA(path); +} + +int dlclose(void *handle) +{ + 
FreeLibrary((HANDLE)handle); + return 0; +} + +void *dlsym(void *handle, const char *symbol) +{ + return GetProcAddress((HMODULE)handle, symbol); +} + +char *dlerror(void) +{ + static char msg[121]; + FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM, NULL, GetLastError(), + LANG_NEUTRAL, msg, sizeof(msg) - 1, NULL); + return msg; +} +#endif + +#ifdef _MSC_VER + +#include + + +int gettimeofday(struct timeval *tv, void *unused) +{ + struct _timeb tb; + + _ftime(&tb); + tv->tv_sec = tb.time; + tv->tv_usec = tb.millitm * 1000; + + return 0; +} + + +DIR *opendir(const char *name) +{ + DIR *dir = 0; + + if (name && name[0]) { + size_t base_length = strlen(name); + const char *all = + strchr("/\\", name[base_length - 1]) ? "*" : "/*"; + + if ((dir = (DIR *) Jim_Alloc(sizeof *dir)) != 0 && + (dir->name = (char *)Jim_Alloc(base_length + strlen(all) + 1)) != 0) { + strcat(strcpy(dir->name, name), all); + + if ((dir->handle = (long)_findfirst(dir->name, &dir->info)) != -1) + dir->result.d_name = 0; + else { + Jim_Free(dir->name); + Jim_Free(dir); + dir = 0; + } + } + else { + Jim_Free(dir); + dir = 0; + errno = ENOMEM; + } + } + else { + errno = EINVAL; + } + return dir; +} + +int closedir(DIR * dir) +{ + int result = -1; + + if (dir) { + if (dir->handle != -1) + result = _findclose(dir->handle); + Jim_Free(dir->name); + Jim_Free(dir); + } + if (result == -1) + errno = EBADF; + return result; +} + +struct dirent *readdir(DIR * dir) +{ + struct dirent *result = 0; + + if (dir && dir->handle != -1) { + if (!dir->result.d_name || _findnext(dir->handle, &dir->info) != -1) { + result = &dir->result; + result->d_name = dir->info.name; + } + } + else { + errno = EBADF; + } + return result; +} +#endif +#endif +#ifndef JIM_BOOTSTRAP_LIB_ONLY +#include +#include + + +#ifdef USE_LINENOISE +#include +#include "linenoise.h" +#else +#define MAX_LINE_LEN 512 +#endif + +char *Jim_HistoryGetline(const char *prompt) +{ +#ifdef USE_LINENOISE + return linenoise(prompt); +#else + int len; + char *line = malloc(MAX_LINE_LEN); + + fputs(prompt, stdout); + fflush(stdout); + + if (fgets(line, MAX_LINE_LEN, stdin) == NULL) { + free(line); + return NULL; + } + len = strlen(line); + if (len && line[len - 1] == '\n') { + line[len - 1] = '\0'; + } + return line; +#endif +} + +void Jim_HistoryLoad(const char *filename) +{ +#ifdef USE_LINENOISE + linenoiseHistoryLoad(filename); +#endif +} + +void Jim_HistoryAdd(const char *line) +{ +#ifdef USE_LINENOISE + linenoiseHistoryAdd(line); +#endif +} + +void Jim_HistorySave(const char *filename) +{ +#ifdef USE_LINENOISE + linenoiseHistorySave(filename); +#endif +} + +void Jim_HistoryShow(void) +{ +#ifdef USE_LINENOISE + + int i; + int len; + char **history = linenoiseHistory(&len); + for (i = 0; i < len; i++) { + printf("%4d %s\n", i + 1, history[i]); + } +#endif +} + +int Jim_InteractivePrompt(Jim_Interp *interp) +{ + int retcode = JIM_OK; + char *history_file = NULL; +#ifdef USE_LINENOISE + const char *home; + + home = getenv("HOME"); + if (home && isatty(STDIN_FILENO)) { + int history_len = strlen(home) + sizeof("/.jim_history"); + history_file = Jim_Alloc(history_len); + snprintf(history_file, history_len, "%s/.jim_history", home); + Jim_HistoryLoad(history_file); + } +#endif + + printf("Welcome to Jim version %d.%d\n", + JIM_VERSION / 100, JIM_VERSION % 100); + Jim_SetVariableStrWithStr(interp, JIM_INTERACTIVE, "1"); + + while (1) { + Jim_Obj *scriptObjPtr; + const char *result; + int reslen; + char prompt[20]; + const char *str; + + if (retcode != 0) { + const char *retcodestr = 
Jim_ReturnCode(retcode); + + if (*retcodestr == '?') { + snprintf(prompt, sizeof(prompt) - 3, "[%d] ", retcode); + } + else { + snprintf(prompt, sizeof(prompt) - 3, "[%s] ", retcodestr); + } + } + else { + prompt[0] = '\0'; + } + strcat(prompt, ". "); + + scriptObjPtr = Jim_NewStringObj(interp, "", 0); + Jim_IncrRefCount(scriptObjPtr); + while (1) { + char state; + int len; + char *line; + + line = Jim_HistoryGetline(prompt); + if (line == NULL) { + if (errno == EINTR) { + continue; + } + Jim_DecrRefCount(interp, scriptObjPtr); + retcode = JIM_OK; + goto out; + } + if (Jim_Length(scriptObjPtr) != 0) { + Jim_AppendString(interp, scriptObjPtr, "\n", 1); + } + Jim_AppendString(interp, scriptObjPtr, line, -1); + free(line); + str = Jim_GetString(scriptObjPtr, &len); + if (len == 0) { + continue; + } + if (Jim_ScriptIsComplete(str, len, &state)) + break; + + snprintf(prompt, sizeof(prompt), "%c> ", state); + } +#ifdef USE_LINENOISE + if (strcmp(str, "h") == 0) { + + Jim_HistoryShow(); + Jim_DecrRefCount(interp, scriptObjPtr); + continue; + } + + Jim_HistoryAdd(Jim_String(scriptObjPtr)); + if (history_file) { + Jim_HistorySave(history_file); + } +#endif + retcode = Jim_EvalObj(interp, scriptObjPtr); + Jim_DecrRefCount(interp, scriptObjPtr); + + if (retcode == JIM_EXIT) { + retcode = JIM_EXIT; + break; + } + if (retcode == JIM_ERR) { + Jim_MakeErrorMessage(interp); + } + result = Jim_GetString(Jim_GetResult(interp), &reslen); + if (reslen) { + printf("%s\n", result); + } + } + out: + Jim_Free(history_file); + return retcode; +} + +#include +#include +#include + + + +extern int Jim_initjimshInit(Jim_Interp *interp); + +static void JimSetArgv(Jim_Interp *interp, int argc, char *const argv[]) +{ + int n; + Jim_Obj *listObj = Jim_NewListObj(interp, NULL, 0); + + + for (n = 0; n < argc; n++) { + Jim_Obj *obj = Jim_NewStringObj(interp, argv[n], -1); + + Jim_ListAppendElement(interp, listObj, obj); + } + + Jim_SetVariableStr(interp, "argv", listObj); + Jim_SetVariableStr(interp, "argc", Jim_NewIntObj(interp, argc)); +} + +static void JimPrintErrorMessage(Jim_Interp *interp) +{ + Jim_MakeErrorMessage(interp); + fprintf(stderr, "%s\n", Jim_String(Jim_GetResult(interp))); +} + +int main(int argc, char *const argv[]) +{ + int retcode; + Jim_Interp *interp; + + if (argc > 1 && strcmp(argv[1], "--version") == 0) { + printf("%d.%d\n", JIM_VERSION / 100, JIM_VERSION % 100); + return 0; + } + + + interp = Jim_CreateInterp(); + Jim_RegisterCoreCommands(interp); + + + if (Jim_InitStaticExtensions(interp) != JIM_OK) { + JimPrintErrorMessage(interp); + } + + Jim_SetVariableStrWithStr(interp, "jim::argv0", argv[0]); + Jim_SetVariableStrWithStr(interp, JIM_INTERACTIVE, argc == 1 ? 
"1" : "0"); + retcode = Jim_initjimshInit(interp); + + if (argc == 1) { + if (retcode == JIM_ERR) { + JimPrintErrorMessage(interp); + } + if (retcode != JIM_EXIT) { + JimSetArgv(interp, 0, NULL); + retcode = Jim_InteractivePrompt(interp); + } + } + else { + if (argc > 2 && strcmp(argv[1], "-e") == 0) { + JimSetArgv(interp, argc - 3, argv + 3); + retcode = Jim_Eval(interp, argv[2]); + if (retcode != JIM_ERR) { + printf("%s\n", Jim_String(Jim_GetResult(interp))); + } + } + else { + Jim_SetVariableStr(interp, "argv0", Jim_NewStringObj(interp, argv[1], -1)); + JimSetArgv(interp, argc - 2, argv + 2); + retcode = Jim_EvalFile(interp, argv[1]); + } + if (retcode == JIM_ERR) { + JimPrintErrorMessage(interp); + } + } + if (retcode == JIM_EXIT) { + retcode = Jim_GetExitCode(interp); + } + else if (retcode == JIM_ERR) { + retcode = 1; + } + else { + retcode = 0; + } + Jim_FreeInterp(interp); + return retcode; +} +#endif ADDED autosetup/local.tcl Index: autosetup/local.tcl ================================================================== --- autosetup/local.tcl +++ autosetup/local.tcl @@ -0,0 +1,225 @@ +# For this project, disable the pager for --help +set useropts(nopager) 1 + +# Searches for a usable Tcl (prefer 8.6, 8.5, 8.4) in the given paths +# Returns a dictionary of the contents of the tclConfig.sh file, or +# empty if not found +proc parse-tclconfig-sh {args} { + foreach p $args { + # Allow pointing directly to the path containing tclConfig.sh + if {[file exists $p/tclConfig.sh]} { + return [parse-tclconfig-sh-file $p/tclConfig.sh] + } + # Some systems allow for multiple versions + foreach libpath {lib/tcl8.6 lib/tcl8.5 lib/tcl8.4 lib/tcl tcl lib} { + if {[file exists $p/$libpath/tclConfig.sh]} { + return [parse-tclconfig-sh-file $p/$libpath/tclConfig.sh] + } + } + } +} + +proc parse-tclconfig-sh-file {filename} { + foreach line [split [readfile $filename] \n] { + if {[regexp {^(TCL_[^=]*)=(.*)$} $line -> name value]} { + set value [regsub -all {\$\{.*\}} $value ""] + set tclconfig($name) [string trim $value '] + } + } + return [array get tclconfig] +} + +# The complex extension checking is done here. + +global withinfo +global extdb + +# Final determination of module status +dict set extdb status {} + +# Returns 1 if the extension has the attribute +proc ext-has {ext attr} { + expr {$attr in [dict get $::extdb attrs $ext]} +} + +# Returns an entry from the extension 'info' table, or $default otherwise +proc ext-get {ext key {default {}}} { + if {[dict exists $::extdb info $ext $key]} { + return [dict get $::extdb info $ext $key] + } else { + return $default + } +} + +# Set the status of the extension to the given value, and returns the value +proc ext-set-status {ext value} { + dict set ::extdb status $ext $value + return $value +} + +# Returns the status of the extension, or ? if unknown +proc ext-get-status {ext} { + if {[dict exists $::extdb status $ext]} { + return [dict get $::extdb status $ext] + } + return ? 
+} + +proc check-extension-status {ext required} { + global withinfo + + set status [ext-get-status $ext] + + if {$ext in $withinfo(without)} { + # Disabled without further ado + msg-result "Extension $ext...disabled" + return [ext-set-status $ext n] + } + + if {$status in {m y n}} { + return $status + } + + # required is "required" if this extension *must* be enabled + # required is "wanted" if it is not fatal for this extension + # not to be enabled + + array set depinfo {m 0 y 0 n 0} + + # Check direct dependencies + if [ext-get $ext check 1] { + # "check" conditions are met + } else { + # not met + incr depinfo(n) + } + + if {$depinfo(n) == 0} { + # Now extension dependencies + foreach i [ext-get $ext dep] { + set status [check-extension-status $i $required] + #puts "$ext: dep $i $required => $status" + incr depinfo($status) + if {$depinfo(n)} { + break + } + } + } + + #parray depinfo + + if {$depinfo(n)} { + msg-checking "Extension $ext..." + if {$required eq "required"} { + user-error "dependencies not met" + } + msg-result "disabled (dependencies)" + return [ext-set-status $ext n] + } + + # Selected as a module? + if {$ext in $withinfo(mod)} { + if {[ext-has $ext tcl]} { + # Easy, a Tcl module + msg-result "Extension $ext...tcl" + } elseif {[ext-has $ext static]} { + user-error "Extension $ext can't be a module" + } else { + msg-result "Extension $ext...module" + foreach i [ext-get $ext libdep] { + define-append LDLIBS_$ext [get-define $i ""] + } + } + return [ext-set-status $ext m] + } + + # Selected as a static extension? + if {[ext-has $ext shared]} { + user-error "Extension $ext can only be selected as a module" + } elseif {$ext in $withinfo(ext) || $required eq "$required"} { + msg-result "Extension $ext...enabled" + } elseif {$ext in $withinfo(maybe)} { + msg-result "Extension $ext...enabled (default)" + } else { + # Could be selected, but isn't (yet) + return [ext-set-status $ext x] + } + foreach i [ext-get $ext libdep] { + define-append LDLIBS [get-define $i ""] + } + return [ext-set-status $ext y] +} + +# Examines the user options (the $withinfo array) +# and the extension database ($extdb) to determine +# what is selected, and in what way. +# +# The results are available via ext-get-status +# And a dictionary is returned containing four keys: +# static-c extensions which are static C +# static-tcl extensions which are static Tcl +# module-c extensions which are C modules +# module-tcl extensions which are Tcl modules +proc check-extensions {} { + global extdb withinfo + + # Check valid extension names + foreach i [concat $withinfo(ext) $withinfo(mod)] { + if {![dict exists $extdb attrs $i]} { + user-error "Unknown extension: $i" + } + } + + set extlist [lsort [dict keys [dict get $extdb attrs]]] + + set withinfo(maybe) {} + + # Now work out the default status. We have. 
+ # normal case, include !optional if possible + # --without=default, don't include optional + if {$withinfo(nodefault)} { + lappend withinfo(maybe) stdlib + } else { + foreach i $extlist { + if {![ext-has $i optional]} { + lappend withinfo(maybe) $i + } + } + } + + foreach i $extlist { + define LDLIBS_$i "" + } + + foreach i [concat $withinfo(ext) $withinfo(mod)] { + check-extension-status $i required + } + foreach i $withinfo(maybe) { + check-extension-status $i wanted + } + + array set extinfo {static-c {} static-tcl {} module-c {} module-tcl {}} + + foreach i $extlist { + set status [ext-get-status $i] + set tcl [ext-has $i tcl] + switch $status,$tcl { + y,1 { + define jim_ext_$i + lappend extinfo(static-tcl) $i + } + y,0 { + define jim_ext_$i + lappend extinfo(static-c) $i + # If there are any static C++ extensions, jimsh must be linked using + # the C++ compiler + if {[ext-has $i cpp]} { + define HAVE_CXX_EXTENSIONS + } + } + m,1 { lappend extinfo(module-tcl) $i } + m,0 { lappend extinfo(module-c) $i } + } + } + return [array get extinfo] +} ADDED autosetup/system.tcl Index: autosetup/system.tcl ================================================================== --- autosetup/system.tcl +++ autosetup/system.tcl @@ -0,0 +1,271 @@ +# Copyright (c) 2010 WorkWare Systems http://www.workware.net.au/ +# All rights reserved + +# @synopsis: +# +# This module supports common system interrogation and options +# such as --host, --build, --prefix, and setting srcdir, builddir, and EXEXT. +# +# It also support the 'feature' naming convention, where searching +# for a feature such as sys/type.h defines HAVE_SYS_TYPES_H +# +module-options { + host:host-alias => {a complete or partial cpu-vendor-opsys for the system where + the application will run (defaults to the same value as --build)} + build:build-alias => {a complete or partial cpu-vendor-opsys for the system + where the application will be built (defaults to the + result of running config.guess)} + prefix:dir => {the target directory for the build (defaults to /usr/local)} + + # These (hidden) options are supported for autoconf/automake compatibility + exec-prefix: + bindir: + sbindir: + includedir: + mandir: + infodir: + libexecdir: + datadir: + libdir: + sysconfdir: + sharedstatedir: + localstatedir: + maintainer-mode=0 + dependency-tracking=0 +} + +# Returns 1 if exists, or 0 if not +# +proc check-feature {name code} { + msg-checking "Checking for $name..." + set r [uplevel 1 $code] + define-feature $name $r + if {$r} { + msg-result "ok" + } else { + msg-result "not found" + } + return $r +} + +# @have-feature name ?default=0? +# +# Returns the value of the feature if defined, or $default if not. +# See 'feature-define-name' for how the feature name +# is translated into the define name. +# +proc have-feature {name {default 0}} { + get-define [feature-define-name $name] $default +} + +# @define-feature name ?value=1? +# +# Sets the feature 'define' to the given value. +# See 'feature-define-name' for how the feature name +# is translated into the define name. +# +proc define-feature {name {value 1}} { + define [feature-define-name $name] $value +} + +# @feature-checked name +# +# Returns 1 if the feature has been checked, whether true or not +# +proc feature-checked {name} { + is-defined [feature-define-name $name] +} + +# @feature-define-name name ?prefix=HAVE_? +# +# Converts a name to the corresponding define, +# e.g. sys/stat.h becomes HAVE_SYS_STAT_H. +# +# Converts * to P and all non-alphanumeric to underscore. 
+# +proc feature-define-name {name {prefix HAVE_}} { + string toupper $prefix[regsub -all {[^a-zA-Z0-9]} [regsub -all {[*]} $name p] _] +} + +# If $file doesn't exist, or it's contents are different than $buf, +# the file is written and $script is executed. +# Otherwise a "file is unchanged" message is displayed. +proc write-if-changed {file buf {script {}}} { + set old [readfile $file ""] + if {$old eq $buf && [file exists $file]} { + msg-result "$file is unchanged" + } else { + writefile $file $buf\n + uplevel 1 $script + } +} + +# @make-template template ?outfile? +# +# Reads the input file /$template and writes the output file $outfile. +# If $outfile is blank/omitted, $template should end with ".in" which +# is removed to create the output file name. +# +# Each pattern of the form @define@ is replaced the the corresponding +# define, if it exists, or left unchanged if not. +# +# The special value @srcdir@ is substituted with the relative +# path to the source directory from the directory where the output +# file is created, while the special value @top_srcdir@ is substituted +# with the relative path to the top level source directory. +# +# Conditional sections may be specified as follows: +## @if name == value +## lines +## @else +## lines +## @endif +# +# Where 'name' is a defined variable name and @else is optional. +# If the expression does not match, all lines through '@endif' are ignored. +# +# The alternative forms may also be used: +## @if name +## @if name != value +# +# Where the first form is true if the variable is defined, but not empty or 0 +# +# Currently these expressions can't be nested. +# +proc make-template {template {out {}}} { + set infile [file join $::autosetup(srcdir) $template] + + if {![file exists $infile]} { + user-error "Template $template is missing" + } + + # Define this as late as possible + define AUTODEPS $::autosetup(deps) + + if {$out eq ""} { + if {[file ext $template] ne ".in"} { + autosetup-error "make_template $template has no target file and can't guess" + } + set out [file rootname $template] + } + + set outdir [file dirname $out] + + # Make sure the directory exists + file mkdir $outdir + + # Set up srcdir and top_srcdir to be relative to the target dir + define srcdir [relative-path [file join $::autosetup(srcdir) $outdir] $outdir] + define top_srcdir [relative-path $::autosetup(srcdir) $outdir] + + set mapping {} + foreach {n v} [array get ::define] { + lappend mapping @$n@ $v + } + set result {} + foreach line [split [readfile $infile] \n] { + if {[info exists cond]} { + set l [string trimright $line] + if {$l eq "@endif"} { + unset cond + continue + } + if {$l eq "@else"} { + set cond [expr {!$cond}] + continue + } + if {$cond} { + lappend result $line + } + continue + } + if {[regexp {^@if\s+(\w+)(.*)} $line -> name expression]} { + lassign $expression equal value + set varval [get-define $name ""] + if {$equal eq ""} { + set cond [expr {$varval ni {"" 0}}] + } else { + set cond [expr {$varval eq $value}] + if {$equal ne "=="} { + set cond [expr {!$cond}] + } + } + continue + } + lappend result $line + } + writefile $out [string map $mapping [join $result \n]]\n + + msg-result "Created [relative-path $out] from [relative-path $template]" +} + +# build/host tuples and cross-compilation prefix +set build [opt-val build] +define build_alias $build +if {$build eq ""} { + define build [config_guess] +} else { + define build [config_sub $build] +} + +set host [opt-val host] +define host_alias $host +if {$host eq ""} { + define host 
[get-define build] + set cross "" +} else { + define host [config_sub $host] + set cross $host- +} +define cross [get-env CROSS $cross] + +# Do "define defaultprefix myvalue" to set the default prefix *before* the first "use" +set prefix [opt-val prefix [get-define defaultprefix /usr/local]] + +# These are for compatibility with autoconf +define target [get-define host] +define prefix $prefix +define builddir $autosetup(builddir) +define srcdir $autosetup(srcdir) +# Allow this to come from the environment +define top_srcdir [get-env top_srcdir [get-define srcdir]] + +# autoconf supports all of these +set exec_prefix [opt-val exec-prefix $prefix] +define exec_prefix $exec_prefix +foreach {name defpath} { + bindir /bin + sbindir /sbin + libexecdir /libexec + libdir /lib +} { + define $name [opt-val $name $exec_prefix$defpath] +} +foreach {name defpath} { + datadir /share + sysconfdir /etc + sharedstatedir /com + localstatedir /var + infodir /share/info + mandir /share/man + includedir /include +} { + define $name [opt-val $name $prefix$defpath] +} + +define SHELL [get-env SHELL [find-an-executable sh bash ksh]] + +# Windows vs. non-Windows +switch -glob -- [get-define host] { + *-*-ming* - *-*-cygwin - *-*-msys { + define-feature windows + define EXEEXT .exe + } + default { + define EXEEXT "" + } +} + +# Display +msg-result "Host System...[get-define host]" +msg-result "Build System...[get-define build]" ADDED autosetup/test-tclsh Index: autosetup/test-tclsh ================================================================== --- autosetup/test-tclsh +++ autosetup/test-tclsh @@ -0,0 +1,20 @@ +# A small Tcl script to verify that the chosen +# interpreter works. Sometimes we might e.g. pick up +# an interpreter for a different arch. +# Outputs the full path to the interpreter + +if {[catch {info version} version] == 0} { + # This is Jim Tcl + if {$version >= 0.72} { + # Ensure that regexp works + regexp (a.*?) a + puts [info nameofexecutable] + exit 0 + } +} elseif {[catch {info tclversion} version] == 0} { + if {$version >= 8.5 && ![string match 8.5a* [info patchlevel]]} { + puts [info nameofexecutable] + exit 0 + } +} +exit 1 DELETED ci_cvs.txt Index: ci_cvs.txt ================================================================== --- ci_cvs.txt +++ ci_cvs.txt @@ -1,192 +0,0 @@ -=============================================================================== - -First experimental codes ... - -tools/import-cvs.tcl -tools/lib/rcsparser.tcl - -No actual import, right now only working on getting csets right. The -code uses CVSROOT/history as foundation, and augments that with data -from the individual RCS files (commit messages). - -Statistics of a run ... - 3516 csets. - - 1545 breaks on user change - 558 breaks on file duplicate - 13 breaks on branch/trunk change - 1402 breaks on commit message change - -Time statistics ... - 3297 were processed in <= 1 seconds (93.77%) - 217 were processed in between 2 seconds and 14 minutes. - 1 was processed in ~41 minutes - 1 was processed in ~22 hours - -Time fuzz - Differences between csets range from 0 seconds to 66 -days. Needs stats analysis to see if there is an obvious break. Even -so the times within csets and between csets overlap a great deal, -making time a bad criterium for cset separation, IMHO. - -Leaving that topic, back to the current cset separator ... - -It has a problem: - The history file is not starting at the root! 
- -Examples: - The first three changesets are - - =============================/user - M {Wed Nov 22 09:28:49 AM PST 2000} ericm 1.4 tcllib/modules/ftpd/ChangeLog - M {Wed Nov 22 09:28:49 AM PST 2000} ericm 1.7 tcllib/modules/ftpd/ftpd.tcl - files: 2 - delta: 0 - range: 0 seconds - =============================/cmsg - M {Wed Nov 29 02:14:33 PM PST 2000} ericm 1.3 tcllib/aclocal.m4 - files: 1 - delta: - range: 0 seconds - =============================/cmsg - M {Sun Feb 04 12:28:35 AM PST 2001} ericm 1.9 tcllib/modules/mime/ChangeLog - M {Sun Feb 04 12:28:35 AM PST 2001} ericm 1.12 tcllib/modules/mime/mime.tcl - files: 2 - delta: 0 - range: 0 seconds - -All csets modify files which already have several revisions. We have -no csets from before that in the history, but these csets are in the -RCS files. - -I wonder, is SF maybe removing old entries from the history when it -grows too large ? - -This also affects incremental import ... I cannot assume that the -history always grows. It may shrink ... I cannot keep an offset, will -have to record the time of the last entry, or even the full entry -processed last, to allow me to skip ahead to anything not known yet. - -I might have to try to implement the algorithm outlined below, -matching the revision trees of the individual RCS files to each other -to form the global tree of revisions. Maybe we can use the history to -help in the matchup, for the parts where we do have it. - -Wait. This might be easier ... Take the delta information from the RCS -files and generate a fake history ... Actually, this might even allow -us to create a total history ... No, not quite, the merge entries the -actual history may contain will be missing. These we can mix in from -the actual history, as much as we have. - -Still, lets try that, a fake history, and then run this script on it -to see if/where are differences. - -=============================================================================== - - -Notes about CVS import, regarding CVS. - -- Problem: CVS does not really track changesets, but only individual - revisions of files. To recover changesets it is necessary to look at - author, branch, timestamp information, and the commit messages. Even - so this is only heuristic, not foolproof. - - Existing tool: cvsps. - - Processes the output of 'cvs log' to recover changesets. Problem: - Sees only a linear list of revisions, does not see branchpoints, - etc. Cannot use the tree structure to help in making the decisions. - -- Problem: CVS does not track merge-points at all. Recovery through - heuristics is brittle at best, looking for keywords in commit - messages which might indicate that a branch was merged with some - other. - - -Ideas regarding an algorithm to recover changesets. - -Key feature: Uses the per-file revision trees to help in uncovering -the underlying changesets and global revision tree G. - -The per-file revision tree for a file X is in essence the global -revision tree with all nodes not pertaining to X removed from it. In -the reverse this allows us to built up the global revision tree from -the per-file trees by matching nodes to each other and extending. - -Start with the per file revision tree of a single file as initial -approximation of the global tree. All nodes of this tree refer to the -revision of the file belonging to it, and through that the file -itself. At each step the global tree contains the nodes for a finite -set of files, and all nodes in the tree refer to revisions of all -files in the set, making the mapping total. 
- -To add a file X to the tree take the per-file revision tree R and -performs the following actions: - -- For each node N in R use the tuple - to identify a set of nodes in G which may match N. Use the timestamp - to locate the node nearest in time. - -- This process will leave nodes in N unmapped. If there are unmapped - nodes which have no neighbouring mapped nodes we have to - abort. - - Otherwise take the nodes which have mapped neighbours. Trace the - edges and see which of these nodes are connected in the local - tree. Then look at the identified neighbours and trace their - connections. - - If two global nodes have a direct connection, but a multi-edge - connection in the local tree insert global nodes mapping to the - local nodes and map them together. This expands the global tree to - hold the revisions added by the new file. - - Otherwise, both sides have multi-edge connections then abort. This - looks like a merge of two different branches, but there are no such - in CVS ... Wait ... sort the nodes over time and fit the new nodes - in between the other nodes, per the timestamps. We have overlapping - / alternating changes to one file and others. - - A last possibility is that a node is only connected to a mapped - parent. This may be a new branch, or again an alternating change on - the given line. Symbols on the revisions will help to map this. - -- We now have an extended global tree which incorporates the revisions - of the new file. However new nodes will refer only to the new file, - and old nodes may not refer to the new file. This has to be fixed, - as all nodes have to refer to all files. - - Run over the tree and look at each parent/child pair. If a file is - not referenced in the child, but the parent, then copy a reference - to the file revision on the parent forward to the child. This - signals that the file did not change in the given revision. - -- After all files have been integrated in this manner we have global - revision tree capturing all changesets, including the unchanged - files per changeset. - - -This algorithm has to be refined to also take Attic/ files into -account. - -------------------------------------------------------------------------- - -Two archive files mapping to the same user file. How are they -interleaved ? - -(a) sqlite/src/os_unix.h,v -(b) sqlite/src/Attic/os_unix.h,v - -Problem: Max version of (a) is 1.9 - Max version of (b) is 1.11 - cvs co 1.10 -> no longer in the repository. - -This seems to indicate that the non-Attic file is relevant. - --------------------------------------------------------------------------- - -tcllib - more problems - tklib/pie.tcl,v - - -invalid change text in -/home/aku/Projects/Tcl/Fossil/Devel/Examples/cvs-tcllib/tklib/modules/tkpiechart/pie.tcl,v - -Possibly braces ? DELETED ci_fossil.txt Index: ci_fossil.txt ================================================================== --- ci_fossil.txt +++ ci_fossil.txt @@ -1,127 +0,0 @@ - -To perform CVS imports for fossil we need at least the ability to -parse CVS files, i.e. RCS files, with slight differences. - -For the general architecture of the import facility we have two major -paths to choose between. - -One is to use an external tool which processes a cvs repository and -drives fossil through its CLI to insert the found changesets. - -The other is to integrate the whole facility into the fossil binary -itself. - -I dislike the second choice. 
It may be faster, as the implementation -can use all internal functionality of fossil to perform the import, -however it will also bloat the binary with functionality not needed -most of the time. Which becomes especially obvious if more importers -are to be written, like for monotone, bazaar, mercurial, bitkeeper, -git, SVN, Arc, etc. Keeping all this out of the core fossil binary is -IMHO more beneficial in the long term, also from a maintenance point -of view. The tools can evolve separately. Especially important for CVS -as it will have to deal with lots of broken repositories, all -different. - -However, nothing speaks against looking for common parts in all -possible import tools, and having these in the fossil core, as a -general backend all importer may use. Something like that has already -been proposed: The deconstruct|reconstruct methods. For us, actually -only reconstruct is important. Taking an unordered collection of files -(data, and manifests) it generates a proper fossil repository. With -that method implemented all import tools only have to generate the -necessary collection and then leave the main work of filling the -database to fossil itself. - -The disadvantage of this method is however that it will gobble up a -lot of temporary space in the filesystem to hold all unique revisions -of all files in their expanded form. - -It might be worthwhile to consider an extension of 'reconstruct' which -is able to incrementally add a set of files to an existing fossil -repository already containing revisions. In that case the import tool -can be changed to incrementally generate the collection for a -particular revision, import it, and iterate over all revisions in the -origin repository. This is of course also dependent on the origin -repository itself, how well it supports such incremental export. - -This also leads to a possible method for performing the import using -only existing functionality ('reconstruct' has not been implemented -yet). Instead generating an unordered collection for each revision -generate a properly setup workspace, simply commit it. This will -require use of rm, add and update methods as well, to remove old and -enter new files, and point the fossil repository to the correct parent -revision from the new revision is derived. - -The relative efficiency (in time) of these incremental methods versus -importing a complete collection of files encoding the entire origin -repository however is not clear. - ----------------------------------- - -reconstruct - -The core logic for handling content is in the file "content.c", in -particular the functions 'content_put' and 'content_deltify'. One of -the main users of these functions is in the file "checkin.c", see the -function 'commit_cmd'. - -The logic is clear. The new modified files are simply stored without -delta-compression, using 'content_put'. And should fosssil have an id -for the _previous_ revision of the committed file it uses -'content_deltify' to convert the already stored data for that revision -into a delta with the just stored new revision as origin. - -In other words, fossil produces reverse deltas, with leaf revisions -stored just zip-compressed (plain) and older revisions using both zip- -and delta-compression. - -Of note is that the underlying logic in 'content_deltify' gives up on -delta compression if the involved files are either not large enough, -or if the achieved compression factor was not high enough. In that -case the old revision of the file is left plain. 
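
A minimal sketch of that commit path (this is not fossil's actual code: the content_put/content_deltify names come from content.c, but the signatures, the size threshold, and the fake delta size below are invented stand-ins for illustration):

    /*
    ** Illustrative sketch only. Models the storage decision described
    ** above: new revisions are stored plain, and the *previous* revision
    ** is then rewritten as a delta against the new one, unless the blob
    ** is too small or the delta would not shrink it enough.
    */
    #include <stdio.h>
    #include <string.h>

    static int nextRid = 1;

    /* Stand-in for content_put(): store a blob plain, return a record id. */
    static int content_put(const char *zData){
      printf("rid %d stored plain (%u bytes)\n", nextRid, (unsigned)strlen(zData));
      return nextRid++;
    }

    /* Stand-in for content_deltify(): re-store record rid as a delta whose
    ** origin is srcid, or leave it plain if deltification is not worthwhile.
    ** The threshold values here are invented for the example. */
    static void content_deltify(int rid, int srcid, unsigned oldSz, unsigned deltaSz){
      if( oldSz<50 || deltaSz*4 > oldSz*3 ){
        printf("rid %d left plain (delta not worthwhile)\n", rid);
      }else{
        printf("rid %d rewritten as a delta against rid %d\n", rid, srcid);
      }
    }

    int main(void){
      const char *zOld = "first revision of some file, padded to be comfortably long";
      const char *zNew = "second revision of some file, only slightly changed";
      int ridOld = content_put(zOld);   /* parent revision, already stored  */
      int ridNew = content_put(zNew);   /* new leaf: always kept plain      */
      /* Reverse delta: the previous revision is re-expressed in terms of
      ** the new one, never the other way around. */
      content_deltify(ridOld, ridNew, (unsigned)strlen(zOld), 12 /* fake delta size */);
      return 0;
    }
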
- -The scheme can thus be called a 'truncated reverse delta'. - -The manifest is created and committed after the modified files. It -uses the same logic as for the regular files. The new leaf is stored -plain, and storage of the parent manifest is modified to be a delta -with the current as origin. - -Further note that for a checkin of a merge result oonly the primary -parent is modified in that way. The secondary parent, the one merged -into the current revision is not touched. I.e. from the storage layer -point of view this revision is still a leaf and the data is kept -stored plain, not delta-compressed. - - - -Now the "reconstruct" can be done like so: - -- Scan the files in the indicated directory, and look for a manifest. - -- When the manifest has been found parse its contents and follow the - chain of parent links to locate the root manifest (no parent). - -- Import the files referenced by the root manifest, then the manifest - itself. This can be done using a modified form of the 'commit_cmd' - which does not have to construct a manifest on its own from vfile, - vmerge, etc. - -- After that recursively apply the import of the previous step to the - children of the root, and so on. - -For an incremental "reconstruct" the collection of files would not be -a single tree with a root, but a forest, and the roots to look for are -not manifests without parent, but with a parent which is already -present in the repository. After one such root has been found and -processed the unprocessed files have to be searched further for more -roots, and only if no such are found anymore will the remaining files -be considered as superfluous. - -We can use the functions in "manifest.c" for the parsing and following -the parental chain. - -Hm. But we have no direct child information. So the above algorithm -has to be modified, we have to scan all manifests before we start -importing, and we have to create a reverse index, from manifest to -children so that we can perform the import from root to leaves. ADDED compat/tcl-8.6/generic/tcl.h Index: compat/tcl-8.6/generic/tcl.h ================================================================== --- compat/tcl-8.6/generic/tcl.h +++ compat/tcl-8.6/generic/tcl.h @@ -0,0 +1,2653 @@ +/* + * tcl.h -- + * + * This header file describes the externally-visible facilities of the + * Tcl interpreter. + * + * Copyright (c) 1987-1994 The Regents of the University of California. + * Copyright (c) 1993-1996 Lucent Technologies. + * Copyright (c) 1994-1998 Sun Microsystems, Inc. + * Copyright (c) 1998-2000 by Scriptics Corporation. + * Copyright (c) 2002 by Kevin B. Kenny. All rights reserved. + * + * See the file "license.terms" for information on usage and redistribution of + * this file, and for a DISCLAIMER OF ALL WARRANTIES. + */ + +#ifndef _TCL +#define _TCL + +/* + * For C++ compilers, use extern "C" + */ + +#ifdef __cplusplus +extern "C" { +#endif + +/* + * The following defines are used to indicate the various release levels. 
+ */ + +#define TCL_ALPHA_RELEASE 0 +#define TCL_BETA_RELEASE 1 +#define TCL_FINAL_RELEASE 2 + +/* + * When version numbers change here, must also go into the following files and + * update the version numbers: + * + * library/init.tcl (1 LOC patch) + * unix/configure.in (2 LOC Major, 2 LOC minor, 1 LOC patch) + * win/configure.in (as above) + * win/tcl.m4 (not patchlevel) + * win/makefile.bc (not patchlevel) 2 LOC + * README (sections 0 and 2, with and without separator) + * macosx/Tcl.pbproj/project.pbxproj (not patchlevel) 1 LOC + * macosx/Tcl.pbproj/default.pbxuser (not patchlevel) 1 LOC + * macosx/Tcl.xcode/project.pbxproj (not patchlevel) 2 LOC + * macosx/Tcl.xcode/default.pbxuser (not patchlevel) 1 LOC + * macosx/Tcl-Common.xcconfig (not patchlevel) 1 LOC + * win/README (not patchlevel) (sections 0 and 2) + * unix/tcl.spec (1 LOC patch) + * tools/tcl.hpj.in (not patchlevel, for windows installer) + */ + +#define TCL_MAJOR_VERSION 8 +#define TCL_MINOR_VERSION 6 +#define TCL_RELEASE_LEVEL TCL_FINAL_RELEASE +#define TCL_RELEASE_SERIAL 0 + +#define TCL_VERSION "8.6" +#define TCL_PATCH_LEVEL "8.6.0" + +/* + *---------------------------------------------------------------------------- + * The following definitions set up the proper options for Windows compilers. + * We use this method because there is no autoconf equivalent. + */ + +#ifndef __WIN32__ +# if defined(_WIN32) || defined(WIN32) || defined(__MINGW32__) || defined(__BORLANDC__) || (defined(__WATCOMC__) && defined(__WINDOWS_386__)) +# define __WIN32__ +# ifndef WIN32 +# define WIN32 +# endif +# ifndef _WIN32 +# define _WIN32 +# endif +# endif +#endif + +/* + * STRICT: See MSDN Article Q83456 + */ + +#ifdef __WIN32__ +# ifndef STRICT +# define STRICT +# endif +#endif /* __WIN32__ */ + +/* + * Utility macros: STRINGIFY takes an argument and wraps it in "" (double + * quotation marks), JOIN joins two arguments. + */ + +#ifndef STRINGIFY +# define STRINGIFY(x) STRINGIFY1(x) +# define STRINGIFY1(x) #x +#endif +#ifndef JOIN +# define JOIN(a,b) JOIN1(a,b) +# define JOIN1(a,b) a##b +#endif + +/* + * A special definition used to allow this header file to be included from + * windows resource files so that they can obtain version information. + * RC_INVOKED is defined by default by the windows RC tool. + * + * Resource compilers don't like all the C stuff, like typedefs and function + * declarations, that occur below, so block them out. + */ + +#ifndef RC_INVOKED + +/* + * Special macro to define mutexes, that doesn't do anything if we are not + * using threads. + */ + +#ifdef TCL_THREADS +#define TCL_DECLARE_MUTEX(name) static Tcl_Mutex name; +#else +#define TCL_DECLARE_MUTEX(name) +#endif + +/* + * Tcl's public routine Tcl_FSSeek() uses the values SEEK_SET, SEEK_CUR, and + * SEEK_END, all #define'd by stdio.h . + * + * Also, many extensions need stdio.h, and they've grown accustomed to tcl.h + * providing it for them rather than #include-ing it themselves as they + * should, so also for their sake, we keep the #include to be consistent with + * prior Tcl releases. + */ + +#include + +/* + *---------------------------------------------------------------------------- + * Support for functions with a variable number of arguments. + * + * The following TCL_VARARGS* macros are to support old extensions + * written for older versions of Tcl where the macros permitted + * support for the varargs.h system as well as stdarg.h . + * + * New code should just directly be written to use stdarg.h conventions. 
+ */ + +#include +#ifndef TCL_NO_DEPRECATED +# define TCL_VARARGS(type, name) (type name, ...) +# define TCL_VARARGS_DEF(type, name) (type name, ...) +# define TCL_VARARGS_START(type, name, list) (va_start(list, name), name) +#endif +#if defined(__GNUC__) && (__GNUC__ > 2) +# define TCL_FORMAT_PRINTF(a,b) __attribute__ ((__format__ (__printf__, a, b))) +#else +# define TCL_FORMAT_PRINTF(a,b) +#endif + +/* + * Allow a part of Tcl's API to be explicitly marked as deprecated. + * + * Used to make TIP 330/336 generate moans even if people use the + * compatibility macros. Change your code, guys! We won't support you forever. + */ + +#if defined(__GNUC__) && ((__GNUC__ >= 4) || ((__GNUC__ == 3) && (__GNUC_MINOR__ >= 1))) +# if (__GNUC__ > 4) || ((__GNUC__ == 4) && (__GNUC__MINOR__ >= 5)) +# define TCL_DEPRECATED_API(msg) __attribute__ ((__deprecated__ (msg))) +# else +# define TCL_DEPRECATED_API(msg) __attribute__ ((__deprecated__)) +# endif +#else +# define TCL_DEPRECATED_API(msg) /* nothing portable */ +#endif + +/* + *---------------------------------------------------------------------------- + * Macros used to declare a function to be exported by a DLL. Used by Windows, + * maps to no-op declarations on non-Windows systems. The default build on + * windows is for a DLL, which causes the DLLIMPORT and DLLEXPORT macros to be + * nonempty. To build a static library, the macro STATIC_BUILD should be + * defined. + * + * Note: when building static but linking dynamically to MSVCRT we must still + * correctly decorate the C library imported function. Use CRTIMPORT + * for this purpose. _DLL is defined by the compiler when linking to + * MSVCRT. + */ + +#if (defined(__WIN32__) && (defined(_MSC_VER) || (__BORLANDC__ >= 0x0550) || defined(__LCC__) || defined(__WATCOMC__) || (defined(__GNUC__) && defined(__declspec)))) +# define HAVE_DECLSPEC 1 +# ifdef STATIC_BUILD +# define DLLIMPORT +# define DLLEXPORT +# ifdef _DLL +# define CRTIMPORT __declspec(dllimport) +# else +# define CRTIMPORT +# endif +# else +# define DLLIMPORT __declspec(dllimport) +# define DLLEXPORT __declspec(dllexport) +# define CRTIMPORT __declspec(dllimport) +# endif +#else +# define DLLIMPORT +# if defined(__GNUC__) && __GNUC__ > 3 +# define DLLEXPORT __attribute__ ((visibility("default"))) +# else +# define DLLEXPORT +# endif +# define CRTIMPORT +#endif + +/* + * These macros are used to control whether functions are being declared for + * import or export. If a function is being declared while it is being built + * to be included in a shared library, then it should have the DLLEXPORT + * storage class. If is being declared for use by a module that is going to + * link against the shared library, then it should have the DLLIMPORT storage + * class. If the symbol is beind declared for a static build or for use from a + * stub library, then the storage class should be empty. + * + * The convention is that a macro called BUILD_xxxx, where xxxx is the name of + * a library we are building, is set on the compile line for sources that are + * to be placed in the library. When this macro is set, the storage class will + * be set to DLLEXPORT. At the end of the header file, the storage class will + * be reset to DLLIMPORT. 
+ */ + +#undef TCL_STORAGE_CLASS +#ifdef BUILD_tcl +# define TCL_STORAGE_CLASS DLLEXPORT +#else +# ifdef USE_TCL_STUBS +# define TCL_STORAGE_CLASS +# else +# define TCL_STORAGE_CLASS DLLIMPORT +# endif +#endif + +/* + * The following _ANSI_ARGS_ macro is to support old extensions + * written for older versions of Tcl where it permitted support + * for compilers written in the pre-prototype era of C. + * + * New code should use prototypes. + */ + +#ifndef TCL_NO_DEPRECATED +# undef _ANSI_ARGS_ +# define _ANSI_ARGS_(x) x +#endif + +/* + * Definitions that allow this header file to be used either with or without + * ANSI C features. + */ + +#ifndef INLINE +# define INLINE +#endif + +#ifdef NO_CONST +# ifndef const +# define const +# endif +#endif +#ifndef CONST +# define CONST const +#endif + +#ifdef USE_NON_CONST +# ifdef USE_COMPAT_CONST +# error define at most one of USE_NON_CONST and USE_COMPAT_CONST +# endif +# define CONST84 +# define CONST84_RETURN +#else +# ifdef USE_COMPAT_CONST +# define CONST84 +# define CONST84_RETURN const +# else +# define CONST84 const +# define CONST84_RETURN const +# endif +#endif + +#ifndef CONST86 +# define CONST86 CONST84 +#endif + +/* + * Make sure EXTERN isn't defined elsewhere. + */ + +#ifdef EXTERN +# undef EXTERN +#endif /* EXTERN */ + +#ifdef __cplusplus +# define EXTERN extern "C" TCL_STORAGE_CLASS +#else +# define EXTERN extern TCL_STORAGE_CLASS +#endif + +/* + *---------------------------------------------------------------------------- + * The following code is copied from winnt.h. If we don't replicate it here, + * then can't be included after tcl.h, since tcl.h also defines + * VOID. This block is skipped under Cygwin and Mingw. + */ + +#if defined(__WIN32__) && !defined(HAVE_WINNT_IGNORE_VOID) +#ifndef VOID +#define VOID void +typedef char CHAR; +typedef short SHORT; +typedef long LONG; +#endif +#endif /* __WIN32__ && !HAVE_WINNT_IGNORE_VOID */ + +/* + * Macro to use instead of "void" for arguments that must have type "void *" + * in ANSI C; maps them to type "char *" in non-ANSI systems. + */ + +#ifndef NO_VOID +# define VOID void +#else +# define VOID char +#endif + +/* + * Miscellaneous declarations. + */ + +#ifndef _CLIENTDATA +# ifndef NO_VOID + typedef void *ClientData; +# else + typedef int *ClientData; +# endif +# define _CLIENTDATA +#endif + +/* + * Darwin specific configure overrides (to support fat compiles, where + * configure runs only once for multiple architectures): + */ + +#ifdef __APPLE__ +# ifdef __LP64__ +# undef TCL_WIDE_INT_TYPE +# define TCL_WIDE_INT_IS_LONG 1 +# define TCL_CFG_DO64BIT 1 +# else /* !__LP64__ */ +# define TCL_WIDE_INT_TYPE long long +# undef TCL_WIDE_INT_IS_LONG +# undef TCL_CFG_DO64BIT +# endif /* __LP64__ */ +# undef HAVE_STRUCT_STAT64 +#endif /* __APPLE__ */ + +/* + * Define Tcl_WideInt to be a type that is (at least) 64-bits wide, and define + * Tcl_WideUInt to be the unsigned variant of that type (assuming that where + * we have one, we can have the other.) + * + * Also defines the following macros: + * TCL_WIDE_INT_IS_LONG - if wide ints are really longs (i.e. we're on a real + * 64-bit system.) + * Tcl_WideAsLong - forgetful converter from wideInt to long. + * Tcl_LongAsWide - sign-extending converter from long to wideInt. + * Tcl_WideAsDouble - converter from wideInt to double. + * Tcl_DoubleAsWide - converter from double to wideInt. 
+ * + * The following invariant should hold for any long value 'longVal': + * longVal == Tcl_WideAsLong(Tcl_LongAsWide(longVal)) + * + * Note on converting between Tcl_WideInt and strings. This implementation (in + * tclObj.c) depends on the function + * sprintf(...,"%" TCL_LL_MODIFIER "d",...). + */ + +#if !defined(TCL_WIDE_INT_TYPE)&&!defined(TCL_WIDE_INT_IS_LONG) +# if defined(__WIN32__) +# define TCL_WIDE_INT_TYPE __int64 +# ifdef __BORLANDC__ +# define TCL_LL_MODIFIER "L" +# else /* __BORLANDC__ */ +# define TCL_LL_MODIFIER "I64" +# endif /* __BORLANDC__ */ +# elif defined(__GNUC__) +# define TCL_WIDE_INT_TYPE long long +# define TCL_LL_MODIFIER "ll" +# else /* ! __WIN32__ && ! __GNUC__ */ +/* + * Don't know what platform it is and configure hasn't discovered what is + * going on for us. Try to guess... + */ +# ifdef NO_LIMITS_H +# error please define either TCL_WIDE_INT_TYPE or TCL_WIDE_INT_IS_LONG +# else /* !NO_LIMITS_H */ +# include +# if (INT_MAX < LONG_MAX) +# define TCL_WIDE_INT_IS_LONG 1 +# else +# define TCL_WIDE_INT_TYPE long long +# endif +# endif /* NO_LIMITS_H */ +# endif /* __WIN32__ */ +#endif /* !TCL_WIDE_INT_TYPE & !TCL_WIDE_INT_IS_LONG */ +#ifdef TCL_WIDE_INT_IS_LONG +# undef TCL_WIDE_INT_TYPE +# define TCL_WIDE_INT_TYPE long +#endif /* TCL_WIDE_INT_IS_LONG */ + +typedef TCL_WIDE_INT_TYPE Tcl_WideInt; +typedef unsigned TCL_WIDE_INT_TYPE Tcl_WideUInt; + +#ifdef TCL_WIDE_INT_IS_LONG +# define Tcl_WideAsLong(val) ((long)(val)) +# define Tcl_LongAsWide(val) ((long)(val)) +# define Tcl_WideAsDouble(val) ((double)((long)(val))) +# define Tcl_DoubleAsWide(val) ((long)((double)(val))) +# ifndef TCL_LL_MODIFIER +# define TCL_LL_MODIFIER "l" +# endif /* !TCL_LL_MODIFIER */ +#else /* TCL_WIDE_INT_IS_LONG */ +/* + * The next short section of defines are only done when not running on Windows + * or some other strange platform. + */ +# ifndef TCL_LL_MODIFIER +# define TCL_LL_MODIFIER "ll" +# endif /* !TCL_LL_MODIFIER */ +# define Tcl_WideAsLong(val) ((long)((Tcl_WideInt)(val))) +# define Tcl_LongAsWide(val) ((Tcl_WideInt)((long)(val))) +# define Tcl_WideAsDouble(val) ((double)((Tcl_WideInt)(val))) +# define Tcl_DoubleAsWide(val) ((Tcl_WideInt)((double)(val))) +#endif /* TCL_WIDE_INT_IS_LONG */ + +#if defined(__WIN32__) +# ifdef __BORLANDC__ + typedef struct stati64 Tcl_StatBuf; +# elif defined(_WIN64) + typedef struct __stat64 Tcl_StatBuf; +# elif (defined(_MSC_VER) && (_MSC_VER < 1400)) || defined(_USE_32BIT_TIME_T) + typedef struct _stati64 Tcl_StatBuf; +# else + typedef struct _stat32i64 Tcl_StatBuf; +# endif /* _MSC_VER < 1400 */ +#elif defined(__CYGWIN__) + typedef struct _stat32i64 { + dev_t st_dev; + unsigned short st_ino; + unsigned short st_mode; + short st_nlink; + short st_uid; + short st_gid; + /* Here is a 2-byte gap */ + dev_t st_rdev; + /* Here is a 4-byte gap */ + long long st_size; + struct {long tv_sec;} st_atim; + struct {long tv_sec;} st_mtim; + struct {long tv_sec;} st_ctim; + /* Here is a 4-byte gap */ + } Tcl_StatBuf; +#elif defined(HAVE_STRUCT_STAT64) + typedef struct stat64 Tcl_StatBuf; +#else + typedef struct stat Tcl_StatBuf; +#endif + +/* + *---------------------------------------------------------------------------- + * Data structures defined opaquely in this module. The definitions below just + * provide dummy types. A few fields are made visible in Tcl_Interp + * structures, namely those used for returning a string result from commands. + * Direct access to the result field is discouraged in Tcl 8.0. 
The + * interpreter result is either an object or a string, and the two values are + * kept consistent unless some C code sets interp->result directly. + * Programmers should use either the function Tcl_GetObjResult() or + * Tcl_GetStringResult() to read the interpreter's result. See the SetResult + * man page for details. + * + * Note: any change to the Tcl_Interp definition below must be mirrored in the + * "real" definition in tclInt.h. + * + * Note: Tcl_ObjCmdProc functions do not directly set result and freeProc. + * Instead, they set a Tcl_Obj member in the "real" structure that can be + * accessed with Tcl_GetObjResult() and Tcl_SetObjResult(). + */ + +typedef struct Tcl_Interp +#ifndef TCL_NO_DEPRECATED +{ + /* TIP #330: Strongly discourage extensions from using the string + * result. */ +#ifdef USE_INTERP_RESULT + char *result TCL_DEPRECATED_API("use Tcl_GetResult/Tcl_SetResult"); + /* If the last command returned a string + * result, this points to it. */ + void (*freeProc) (char *blockPtr) + TCL_DEPRECATED_API("use Tcl_GetResult/Tcl_SetResult"); + /* Zero means the string result is statically + * allocated. TCL_DYNAMIC means it was + * allocated with ckalloc and should be freed + * with ckfree. Other values give the address + * of function to invoke to free the result. + * Tcl_Eval must free it before executing next + * command. */ +#else + char *resultDontUse; /* Don't use in extensions! */ + void (*freeProcDontUse) (char *); /* Don't use in extensions! */ +#endif +#ifdef USE_INTERP_ERRORLINE + int errorLine TCL_DEPRECATED_API("use Tcl_GetErrorLine/Tcl_SetErrorLine"); + /* When TCL_ERROR is returned, this gives the + * line number within the command where the + * error occurred (1 if first line). */ +#else + int errorLineDontUse; /* Don't use in extensions! */ +#endif +} +#endif /* TCL_NO_DEPRECATED */ +Tcl_Interp; + +typedef struct Tcl_AsyncHandler_ *Tcl_AsyncHandler; +typedef struct Tcl_Channel_ *Tcl_Channel; +typedef struct Tcl_ChannelTypeVersion_ *Tcl_ChannelTypeVersion; +typedef struct Tcl_Command_ *Tcl_Command; +typedef struct Tcl_Condition_ *Tcl_Condition; +typedef struct Tcl_Dict_ *Tcl_Dict; +typedef struct Tcl_EncodingState_ *Tcl_EncodingState; +typedef struct Tcl_Encoding_ *Tcl_Encoding; +typedef struct Tcl_Event Tcl_Event; +typedef struct Tcl_InterpState_ *Tcl_InterpState; +typedef struct Tcl_LoadHandle_ *Tcl_LoadHandle; +typedef struct Tcl_Mutex_ *Tcl_Mutex; +typedef struct Tcl_Pid_ *Tcl_Pid; +typedef struct Tcl_RegExp_ *Tcl_RegExp; +typedef struct Tcl_ThreadDataKey_ *Tcl_ThreadDataKey; +typedef struct Tcl_ThreadId_ *Tcl_ThreadId; +typedef struct Tcl_TimerToken_ *Tcl_TimerToken; +typedef struct Tcl_Trace_ *Tcl_Trace; +typedef struct Tcl_Var_ *Tcl_Var; +typedef struct Tcl_ZLibStream_ *Tcl_ZlibStream; + +/* + *---------------------------------------------------------------------------- + * Definition of the interface to functions implementing threads. A function + * following this definition is given to each call of 'Tcl_CreateThread' and + * will be called as the main fuction of the new thread created by that call. + */ + +#if defined __WIN32__ +typedef unsigned (__stdcall Tcl_ThreadCreateProc) (ClientData clientData); +#else +typedef void (Tcl_ThreadCreateProc) (ClientData clientData); +#endif + +/* + * Threading function return types used for abstracting away platform + * differences when writing a Tcl_ThreadCreateProc. See the NewThread function + * in generic/tclThreadTest.c for it's usage. 
+ */ + +#if defined __WIN32__ +# define Tcl_ThreadCreateType unsigned __stdcall +# define TCL_THREAD_CREATE_RETURN return 0 +#else +# define Tcl_ThreadCreateType void +# define TCL_THREAD_CREATE_RETURN +#endif + +/* + * Definition of values for default stacksize and the possible flags to be + * given to Tcl_CreateThread. + */ + +#define TCL_THREAD_STACK_DEFAULT (0) /* Use default size for stack. */ +#define TCL_THREAD_NOFLAGS (0000) /* Standard flags, default + * behaviour. */ +#define TCL_THREAD_JOINABLE (0001) /* Mark the thread as joinable. */ + +/* + * Flag values passed to Tcl_StringCaseMatch. + */ + +#define TCL_MATCH_NOCASE (1<<0) + +/* + * Flag values passed to Tcl_GetRegExpFromObj. + */ + +#define TCL_REG_BASIC 000000 /* BREs (convenience). */ +#define TCL_REG_EXTENDED 000001 /* EREs. */ +#define TCL_REG_ADVF 000002 /* Advanced features in EREs. */ +#define TCL_REG_ADVANCED 000003 /* AREs (which are also EREs). */ +#define TCL_REG_QUOTE 000004 /* No special characters, none. */ +#define TCL_REG_NOCASE 000010 /* Ignore case. */ +#define TCL_REG_NOSUB 000020 /* Don't care about subexpressions. */ +#define TCL_REG_EXPANDED 000040 /* Expanded format, white space & + * comments. */ +#define TCL_REG_NLSTOP 000100 /* \n doesn't match . or [^ ] */ +#define TCL_REG_NLANCH 000200 /* ^ matches after \n, $ before. */ +#define TCL_REG_NEWLINE 000300 /* Newlines are line terminators. */ +#define TCL_REG_CANMATCH 001000 /* Report details on partial/limited + * matches. */ + +/* + * Flags values passed to Tcl_RegExpExecObj. + */ + +#define TCL_REG_NOTBOL 0001 /* Beginning of string does not match ^. */ +#define TCL_REG_NOTEOL 0002 /* End of string does not match $. */ + +/* + * Structures filled in by Tcl_RegExpInfo. Note that all offset values are + * relative to the start of the match string, not the beginning of the entire + * string. + */ + +typedef struct Tcl_RegExpIndices { + long start; /* Character offset of first character in + * match. */ + long end; /* Character offset of first character after + * the match. */ +} Tcl_RegExpIndices; + +typedef struct Tcl_RegExpInfo { + int nsubs; /* Number of subexpressions in the compiled + * expression. */ + Tcl_RegExpIndices *matches; /* Array of nsubs match offset pairs. */ + long extendStart; /* The offset at which a subsequent match + * might begin. */ + long reserved; /* Reserved for later use. */ +} Tcl_RegExpInfo; + +/* + * Picky compilers complain if this typdef doesn't appear before the struct's + * reference in tclDecls.h. + */ + +typedef Tcl_StatBuf *Tcl_Stat_; +typedef struct stat *Tcl_OldStat_; + +/* + *---------------------------------------------------------------------------- + * When a TCL command returns, the interpreter contains a result from the + * command. Programmers are strongly encouraged to use one of the functions + * Tcl_GetObjResult() or Tcl_GetStringResult() to read the interpreter's + * result. See the SetResult man page for details. Besides this result, the + * command function returns an integer code, which is one of the following: + * + * TCL_OK Command completed normally; the interpreter's result + * contains the command's result. + * TCL_ERROR The command couldn't be completed successfully; the + * interpreter's result describes what went wrong. + * TCL_RETURN The command requests that the current function return; + * the interpreter's result contains the function's + * return value. + * TCL_BREAK The command requests that the innermost loop be + * exited; the interpreter's result is meaningless. 
+ * TCL_CONTINUE Go on to the next iteration of the current loop; the + * interpreter's result is meaningless. + */ + +#define TCL_OK 0 +#define TCL_ERROR 1 +#define TCL_RETURN 2 +#define TCL_BREAK 3 +#define TCL_CONTINUE 4 + +#define TCL_RESULT_SIZE 200 + +/* + *---------------------------------------------------------------------------- + * Flags to control what substitutions are performed by Tcl_SubstObj(): + */ + +#define TCL_SUBST_COMMANDS 001 +#define TCL_SUBST_VARIABLES 002 +#define TCL_SUBST_BACKSLASHES 004 +#define TCL_SUBST_ALL 007 + +/* + * Argument descriptors for math function callbacks in expressions: + */ + +typedef enum { + TCL_INT, TCL_DOUBLE, TCL_EITHER, TCL_WIDE_INT +} Tcl_ValueType; + +typedef struct Tcl_Value { + Tcl_ValueType type; /* Indicates intValue or doubleValue is valid, + * or both. */ + long intValue; /* Integer value. */ + double doubleValue; /* Double-precision floating value. */ + Tcl_WideInt wideValue; /* Wide (min. 64-bit) integer value. */ +} Tcl_Value; + +/* + * Forward declaration of Tcl_Obj to prevent an error when the forward + * reference to Tcl_Obj is encountered in the function types declared below. + */ + +struct Tcl_Obj; + +/* + *---------------------------------------------------------------------------- + * Function types defined by Tcl: + */ + +typedef int (Tcl_AppInitProc) (Tcl_Interp *interp); +typedef int (Tcl_AsyncProc) (ClientData clientData, Tcl_Interp *interp, + int code); +typedef void (Tcl_ChannelProc) (ClientData clientData, int mask); +typedef void (Tcl_CloseProc) (ClientData data); +typedef void (Tcl_CmdDeleteProc) (ClientData clientData); +typedef int (Tcl_CmdProc) (ClientData clientData, Tcl_Interp *interp, + int argc, CONST84 char *argv[]); +typedef void (Tcl_CmdTraceProc) (ClientData clientData, Tcl_Interp *interp, + int level, char *command, Tcl_CmdProc *proc, + ClientData cmdClientData, int argc, CONST84 char *argv[]); +typedef int (Tcl_CmdObjTraceProc) (ClientData clientData, Tcl_Interp *interp, + int level, const char *command, Tcl_Command commandInfo, int objc, + struct Tcl_Obj *const *objv); +typedef void (Tcl_CmdObjTraceDeleteProc) (ClientData clientData); +typedef void (Tcl_DupInternalRepProc) (struct Tcl_Obj *srcPtr, + struct Tcl_Obj *dupPtr); +typedef int (Tcl_EncodingConvertProc) (ClientData clientData, const char *src, + int srcLen, int flags, Tcl_EncodingState *statePtr, char *dst, + int dstLen, int *srcReadPtr, int *dstWrotePtr, int *dstCharsPtr); +typedef void (Tcl_EncodingFreeProc) (ClientData clientData); +typedef int (Tcl_EventProc) (Tcl_Event *evPtr, int flags); +typedef void (Tcl_EventCheckProc) (ClientData clientData, int flags); +typedef int (Tcl_EventDeleteProc) (Tcl_Event *evPtr, ClientData clientData); +typedef void (Tcl_EventSetupProc) (ClientData clientData, int flags); +typedef void (Tcl_ExitProc) (ClientData clientData); +typedef void (Tcl_FileProc) (ClientData clientData, int mask); +typedef void (Tcl_FileFreeProc) (ClientData clientData); +typedef void (Tcl_FreeInternalRepProc) (struct Tcl_Obj *objPtr); +typedef void (Tcl_FreeProc) (char *blockPtr); +typedef void (Tcl_IdleProc) (ClientData clientData); +typedef void (Tcl_InterpDeleteProc) (ClientData clientData, + Tcl_Interp *interp); +typedef int (Tcl_MathProc) (ClientData clientData, Tcl_Interp *interp, + Tcl_Value *args, Tcl_Value *resultPtr); +typedef void (Tcl_NamespaceDeleteProc) (ClientData clientData); +typedef int (Tcl_ObjCmdProc) (ClientData clientData, Tcl_Interp *interp, + int objc, struct Tcl_Obj *const *objv); +typedef int 
(Tcl_PackageInitProc) (Tcl_Interp *interp); +typedef int (Tcl_PackageUnloadProc) (Tcl_Interp *interp, int flags); +typedef void (Tcl_PanicProc) (const char *format, ...); +typedef void (Tcl_TcpAcceptProc) (ClientData callbackData, Tcl_Channel chan, + char *address, int port); +typedef void (Tcl_TimerProc) (ClientData clientData); +typedef int (Tcl_SetFromAnyProc) (Tcl_Interp *interp, struct Tcl_Obj *objPtr); +typedef void (Tcl_UpdateStringProc) (struct Tcl_Obj *objPtr); +typedef char * (Tcl_VarTraceProc) (ClientData clientData, Tcl_Interp *interp, + CONST84 char *part1, CONST84 char *part2, int flags); +typedef void (Tcl_CommandTraceProc) (ClientData clientData, Tcl_Interp *interp, + const char *oldName, const char *newName, int flags); +typedef void (Tcl_CreateFileHandlerProc) (int fd, int mask, Tcl_FileProc *proc, + ClientData clientData); +typedef void (Tcl_DeleteFileHandlerProc) (int fd); +typedef void (Tcl_AlertNotifierProc) (ClientData clientData); +typedef void (Tcl_ServiceModeHookProc) (int mode); +typedef ClientData (Tcl_InitNotifierProc) (void); +typedef void (Tcl_FinalizeNotifierProc) (ClientData clientData); +typedef void (Tcl_MainLoopProc) (void); + +/* + *---------------------------------------------------------------------------- + * The following structure represents a type of object, which is a particular + * internal representation for an object plus a set of functions that provide + * standard operations on objects of that type. + */ + +typedef struct Tcl_ObjType { + const char *name; /* Name of the type, e.g. "int". */ + Tcl_FreeInternalRepProc *freeIntRepProc; + /* Called to free any storage for the type's + * internal rep. NULL if the internal rep does + * not need freeing. */ + Tcl_DupInternalRepProc *dupIntRepProc; + /* Called to create a new object as a copy of + * an existing object. */ + Tcl_UpdateStringProc *updateStringProc; + /* Called to update the string rep from the + * type's internal representation. */ + Tcl_SetFromAnyProc *setFromAnyProc; + /* Called to convert the object's internal rep + * to this type. Frees the internal rep of the + * old type. Returns TCL_ERROR on failure. */ +} Tcl_ObjType; + +/* + * One of the following structures exists for each object in the Tcl system. + * An object stores a value as either a string, some internal representation, + * or both. + */ + +typedef struct Tcl_Obj { + int refCount; /* When 0 the object will be freed. */ + char *bytes; /* This points to the first byte of the + * object's string representation. The array + * must be followed by a null byte (i.e., at + * offset length) but may also contain + * embedded null characters. The array's + * storage is allocated by ckalloc. NULL means + * the string rep is invalid and must be + * regenerated from the internal rep. Clients + * should use Tcl_GetStringFromObj or + * Tcl_GetString to get a pointer to the byte + * array as a readonly value. */ + int length; /* The number of bytes at *bytes, not + * including the terminating null. */ + const Tcl_ObjType *typePtr; /* Denotes the object's type. Always + * corresponds to the type of the object's + * internal rep. NULL indicates the object has + * no internal rep (has no type). */ + union { /* The internal representation: */ + long longValue; /* - an long integer value. */ + double doubleValue; /* - a double-precision floating value. */ + void *otherValuePtr; /* - another, type-specific value. */ + Tcl_WideInt wideValue; /* - a long long value. */ + struct { /* - internal rep as two pointers. 
*/ + void *ptr1; + void *ptr2; + } twoPtrValue; + struct { /* - internal rep as a pointer and a long, + * the main use of which is a bignum's + * tightly packed fields, where the alloc, + * used and signum flags are packed into a + * single word with everything else hung + * off the pointer. */ + void *ptr; + unsigned long value; + } ptrAndLongRep; + } internalRep; +} Tcl_Obj; + +/* + * Macros to increment and decrement a Tcl_Obj's reference count, and to test + * whether an object is shared (i.e. has reference count > 1). Note: clients + * should use Tcl_DecrRefCount() when they are finished using an object, and + * should never call TclFreeObj() directly. TclFreeObj() is only defined and + * made public in tcl.h to support Tcl_DecrRefCount's macro definition. + */ + +void Tcl_IncrRefCount(Tcl_Obj *objPtr); +void Tcl_DecrRefCount(Tcl_Obj *objPtr); +int Tcl_IsShared(Tcl_Obj *objPtr); + +/* + *---------------------------------------------------------------------------- + * The following structure contains the state needed by Tcl_SaveResult. No-one + * outside of Tcl should access any of these fields. This structure is + * typically allocated on the stack. + */ + +typedef struct Tcl_SavedResult { + char *result; + Tcl_FreeProc *freeProc; + Tcl_Obj *objResultPtr; + char *appendResult; + int appendAvl; + int appendUsed; + char resultSpace[TCL_RESULT_SIZE+1]; +} Tcl_SavedResult; + +/* + *---------------------------------------------------------------------------- + * The following definitions support Tcl's namespace facility. Note: the first + * five fields must match exactly the fields in a Namespace structure (see + * tclInt.h). + */ + +typedef struct Tcl_Namespace { + char *name; /* The namespace's name within its parent + * namespace. This contains no ::'s. The name + * of the global namespace is "" although "::" + * is an synonym. */ + char *fullName; /* The namespace's fully qualified name. This + * starts with ::. */ + ClientData clientData; /* Arbitrary value associated with this + * namespace. */ + Tcl_NamespaceDeleteProc *deleteProc; + /* Function invoked when deleting the + * namespace to, e.g., free clientData. */ + struct Tcl_Namespace *parentPtr; + /* Points to the namespace that contains this + * one. NULL if this is the global + * namespace. */ +} Tcl_Namespace; + +/* + *---------------------------------------------------------------------------- + * The following structure represents a call frame, or activation record. A + * call frame defines a naming context for a procedure call: its local scope + * (for local variables) and its namespace scope (used for non-local + * variables; often the global :: namespace). A call frame can also define the + * naming context for a namespace eval or namespace inscope command: the + * namespace in which the command's code should execute. The Tcl_CallFrame + * structures exist only while procedures or namespace eval/inscope's are + * being executed, and provide a Tcl call stack. + * + * A call frame is initialized and pushed using Tcl_PushCallFrame and popped + * using Tcl_PopCallFrame. Storage for a Tcl_CallFrame must be provided by the + * Tcl_PushCallFrame caller, and callers typically allocate them on the C call + * stack for efficiency. For this reason, Tcl_CallFrame is defined as a + * structure and not as an opaque token. However, most Tcl_CallFrame fields + * are hidden since applications should not access them directly; others are + * declared as "dummyX". + * + * WARNING!! 
The structure definition must be kept consistent with the + * CallFrame structure in tclInt.h. If you change one, change the other. + */ + +typedef struct Tcl_CallFrame { + Tcl_Namespace *nsPtr; + int dummy1; + int dummy2; + void *dummy3; + void *dummy4; + void *dummy5; + int dummy6; + void *dummy7; + void *dummy8; + int dummy9; + void *dummy10; + void *dummy11; + void *dummy12; + void *dummy13; +} Tcl_CallFrame; + +/* + *---------------------------------------------------------------------------- + * Information about commands that is returned by Tcl_GetCommandInfo and + * passed to Tcl_SetCommandInfo. objProc is an objc/objv object-based command + * function while proc is a traditional Tcl argc/argv string-based function. + * Tcl_CreateObjCommand and Tcl_CreateCommand ensure that both objProc and + * proc are non-NULL and can be called to execute the command. However, it may + * be faster to call one instead of the other. The member isNativeObjectProc + * is set to 1 if an object-based function was registered by + * Tcl_CreateObjCommand, and to 0 if a string-based function was registered by + * Tcl_CreateCommand. The other function is typically set to a compatibility + * wrapper that does string-to-object or object-to-string argument conversions + * then calls the other function. + */ + +typedef struct Tcl_CmdInfo { + int isNativeObjectProc; /* 1 if objProc was registered by a call to + * Tcl_CreateObjCommand; 0 otherwise. + * Tcl_SetCmdInfo does not modify this + * field. */ + Tcl_ObjCmdProc *objProc; /* Command's object-based function. */ + ClientData objClientData; /* ClientData for object proc. */ + Tcl_CmdProc *proc; /* Command's string-based function. */ + ClientData clientData; /* ClientData for string proc. */ + Tcl_CmdDeleteProc *deleteProc; + /* Function to call when command is + * deleted. */ + ClientData deleteData; /* Value to pass to deleteProc (usually the + * same as clientData). */ + Tcl_Namespace *namespacePtr;/* Points to the namespace that contains this + * command. Note that Tcl_SetCmdInfo will not + * change a command's namespace; use + * TclRenameCommand or Tcl_Eval (of 'rename') + * to do that. */ +} Tcl_CmdInfo; + +/* + *---------------------------------------------------------------------------- + * The structure defined below is used to hold dynamic strings. The only + * fields that clients should use are string and length, accessible via the + * macros Tcl_DStringValue and Tcl_DStringLength. + */ + +#define TCL_DSTRING_STATIC_SIZE 200 +typedef struct Tcl_DString { + char *string; /* Points to beginning of string: either + * staticSpace below or a malloced array. */ + int length; /* Number of non-NULL characters in the + * string. */ + int spaceAvl; /* Total number of bytes available for the + * string and its terminating NULL char. */ + char staticSpace[TCL_DSTRING_STATIC_SIZE]; + /* Space to use in common case where string is + * small. */ +} Tcl_DString; + +#define Tcl_DStringLength(dsPtr) ((dsPtr)->length) +#define Tcl_DStringValue(dsPtr) ((dsPtr)->string) +#define Tcl_DStringTrunc Tcl_DStringSetLength + +/* + * Definitions for the maximum number of digits of precision that may be + * specified in the "tcl_precision" variable, and the number of bytes of + * buffer space required by Tcl_PrintDouble. + */ + +#define TCL_MAX_PREC 17 +#define TCL_DOUBLE_SPACE (TCL_MAX_PREC+10) + +/* + * Definition for a number of bytes of buffer space sufficient to hold the + * string representation of an integer in base 10 (assuming the existence of + * 64-bit integers). 
+ */ + +#define TCL_INTEGER_SPACE 24 + +/* + * Flag values passed to Tcl_ConvertElement. + * TCL_DONT_USE_BRACES forces it not to enclose the element in braces, but to + * use backslash quoting instead. + * TCL_DONT_QUOTE_HASH disables the default quoting of the '#' character. It + * is safe to leave the hash unquoted when the element is not the first + * element of a list, and this flag can be used by the caller to indicate + * that condition. + */ + +#define TCL_DONT_USE_BRACES 1 +#define TCL_DONT_QUOTE_HASH 8 + +/* + * Flag that may be passed to Tcl_GetIndexFromObj to force it to disallow + * abbreviated strings. + */ + +#define TCL_EXACT 1 + +/* + *---------------------------------------------------------------------------- + * Flag values passed to Tcl_RecordAndEval, Tcl_EvalObj, Tcl_EvalObjv. + * WARNING: these bit choices must not conflict with the bit choices for + * evalFlag bits in tclInt.h! + * + * Meanings: + * TCL_NO_EVAL: Just record this command + * TCL_EVAL_GLOBAL: Execute script in global namespace + * TCL_EVAL_DIRECT: Do not compile this script + * TCL_EVAL_INVOKE: Magical Tcl_EvalObjv mode for aliases/ensembles + * o Run in iPtr->lookupNsPtr or global namespace + * o Cut out of error traces + * o Don't reset the flags controlling ensemble + * error message rewriting. + * TCL_CANCEL_UNWIND: Magical Tcl_CancelEval mode that causes the + * stack for the script in progress to be + * completely unwound. + * TCL_EVAL_NOERR: Do no exception reporting at all, just return + * as the caller will report. + */ + +#define TCL_NO_EVAL 0x010000 +#define TCL_EVAL_GLOBAL 0x020000 +#define TCL_EVAL_DIRECT 0x040000 +#define TCL_EVAL_INVOKE 0x080000 +#define TCL_CANCEL_UNWIND 0x100000 +#define TCL_EVAL_NOERR 0x200000 + +/* + * Special freeProc values that may be passed to Tcl_SetResult (see the man + * page for details): + */ + +#define TCL_VOLATILE ((Tcl_FreeProc *) 1) +#define TCL_STATIC ((Tcl_FreeProc *) 0) +#define TCL_DYNAMIC ((Tcl_FreeProc *) 3) + +/* + * Flag values passed to variable-related functions. + * WARNING: these bit choices must not conflict with the bit choice for + * TCL_CANCEL_UNWIND, above. + */ + +#define TCL_GLOBAL_ONLY 1 +#define TCL_NAMESPACE_ONLY 2 +#define TCL_APPEND_VALUE 4 +#define TCL_LIST_ELEMENT 8 +#define TCL_TRACE_READS 0x10 +#define TCL_TRACE_WRITES 0x20 +#define TCL_TRACE_UNSETS 0x40 +#define TCL_TRACE_DESTROYED 0x80 +#define TCL_INTERP_DESTROYED 0x100 +#define TCL_LEAVE_ERR_MSG 0x200 +#define TCL_TRACE_ARRAY 0x800 +#ifndef TCL_REMOVE_OBSOLETE_TRACES +/* Required to support old variable/vdelete/vinfo traces. */ +#define TCL_TRACE_OLD_STYLE 0x1000 +#endif +/* Indicate the semantics of the result of a trace. */ +#define TCL_TRACE_RESULT_DYNAMIC 0x8000 +#define TCL_TRACE_RESULT_OBJECT 0x10000 + +/* + * Flag values for ensemble commands. + */ + +#define TCL_ENSEMBLE_PREFIX 0x02/* Flag value to say whether to allow + * unambiguous prefixes of commands or to + * require exact matches for command names. */ + +/* + * Flag values passed to command-related functions. + */ + +#define TCL_TRACE_RENAME 0x2000 +#define TCL_TRACE_DELETE 0x4000 + +#define TCL_ALLOW_INLINE_COMPILATION 0x20000 + +/* + * The TCL_PARSE_PART1 flag is deprecated and has no effect. The part1 is now + * always parsed whenever the part2 is NULL. 
(This is to avoid a common error + * when converting code to use the new object based APIs and forgetting to + * give the flag) + */ + +#ifndef TCL_NO_DEPRECATED +# define TCL_PARSE_PART1 0x400 +#endif + +/* + * Types for linked variables: + */ + +#define TCL_LINK_INT 1 +#define TCL_LINK_DOUBLE 2 +#define TCL_LINK_BOOLEAN 3 +#define TCL_LINK_STRING 4 +#define TCL_LINK_WIDE_INT 5 +#define TCL_LINK_CHAR 6 +#define TCL_LINK_UCHAR 7 +#define TCL_LINK_SHORT 8 +#define TCL_LINK_USHORT 9 +#define TCL_LINK_UINT 10 +#define TCL_LINK_LONG 11 +#define TCL_LINK_ULONG 12 +#define TCL_LINK_FLOAT 13 +#define TCL_LINK_WIDE_UINT 14 +#define TCL_LINK_READ_ONLY 0x80 + +/* + *---------------------------------------------------------------------------- + * Forward declarations of Tcl_HashTable and related types. + */ + +typedef struct Tcl_HashKeyType Tcl_HashKeyType; +typedef struct Tcl_HashTable Tcl_HashTable; +typedef struct Tcl_HashEntry Tcl_HashEntry; + +typedef unsigned (Tcl_HashKeyProc) (Tcl_HashTable *tablePtr, void *keyPtr); +typedef int (Tcl_CompareHashKeysProc) (void *keyPtr, Tcl_HashEntry *hPtr); +typedef Tcl_HashEntry * (Tcl_AllocHashEntryProc) (Tcl_HashTable *tablePtr, + void *keyPtr); +typedef void (Tcl_FreeHashEntryProc) (Tcl_HashEntry *hPtr); + +/* + * This flag controls whether the hash table stores the hash of a key, or + * recalculates it. There should be no reason for turning this flag off as it + * is completely binary and source compatible unless you directly access the + * bucketPtr member of the Tcl_HashTableEntry structure. This member has been + * removed and the space used to store the hash value. + */ + +#ifndef TCL_HASH_KEY_STORE_HASH +# define TCL_HASH_KEY_STORE_HASH 1 +#endif + +/* + * Structure definition for an entry in a hash table. No-one outside Tcl + * should access any of these fields directly; use the macros defined below. + */ + +struct Tcl_HashEntry { + Tcl_HashEntry *nextPtr; /* Pointer to next entry in this hash bucket, + * or NULL for end of chain. */ + Tcl_HashTable *tablePtr; /* Pointer to table containing entry. */ +#if TCL_HASH_KEY_STORE_HASH + void *hash; /* Hash value, stored as pointer to ensure + * that the offsets of the fields in this + * structure are not changed. */ +#else + Tcl_HashEntry **bucketPtr; /* Pointer to bucket that points to first + * entry in this entry's chain: used for + * deleting the entry. */ +#endif + ClientData clientData; /* Application stores something here with + * Tcl_SetHashValue. */ + union { /* Key has one of these forms: */ + char *oneWordValue; /* One-word value for key. */ + Tcl_Obj *objPtr; /* Tcl_Obj * key value. */ + int words[1]; /* Multiple integer words for key. The actual + * size will be as large as necessary for this + * table's keys. */ + char string[1]; /* String for key. The actual size will be as + * large as needed to hold the key. */ + } key; /* MUST BE LAST FIELD IN RECORD!! */ +}; + +/* + * Flags used in Tcl_HashKeyType. + * + * TCL_HASH_KEY_RANDOMIZE_HASH - + * There are some things, pointers for example + * which don't hash well because they do not use + * the lower bits. If this flag is set then the + * hash table will attempt to rectify this by + * randomising the bits and then using the upper + * N bits as the index into the table. + * TCL_HASH_KEY_SYSTEM_HASH - If this flag is set then all memory internally + * allocated for the hash table that is not for an + * entry will use the system heap. 
+ */ + +#define TCL_HASH_KEY_RANDOMIZE_HASH 0x1 +#define TCL_HASH_KEY_SYSTEM_HASH 0x2 + +/* + * Structure definition for the methods associated with a hash table key type. + */ + +#define TCL_HASH_KEY_TYPE_VERSION 1 +struct Tcl_HashKeyType { + int version; /* Version of the table. If this structure is + * extended in future then the version can be + * used to distinguish between different + * structures. */ + int flags; /* Flags, see above for details. */ + Tcl_HashKeyProc *hashKeyProc; + /* Calculates a hash value for the key. If + * this is NULL then the pointer itself is + * used as a hash value. */ + Tcl_CompareHashKeysProc *compareKeysProc; + /* Compares two keys and returns zero if they + * do not match, and non-zero if they do. If + * this is NULL then the pointers are + * compared. */ + Tcl_AllocHashEntryProc *allocEntryProc; + /* Called to allocate memory for a new entry, + * i.e. if the key is a string then this could + * allocate a single block which contains + * enough space for both the entry and the + * string. Only the key field of the allocated + * Tcl_HashEntry structure needs to be filled + * in. If something else needs to be done to + * the key, i.e. incrementing a reference + * count then that should be done by this + * function. If this is NULL then Tcl_Alloc is + * used to allocate enough space for a + * Tcl_HashEntry and the key pointer is + * assigned to key.oneWordValue. */ + Tcl_FreeHashEntryProc *freeEntryProc; + /* Called to free memory associated with an + * entry. If something else needs to be done + * to the key, i.e. decrementing a reference + * count then that should be done by this + * function. If this is NULL then Tcl_Free is + * used to free the Tcl_HashEntry. */ +}; + +/* + * Structure definition for a hash table. Must be in tcl.h so clients can + * allocate space for these structures, but clients should never access any + * fields in this structure. + */ + +#define TCL_SMALL_HASH_TABLE 4 +struct Tcl_HashTable { + Tcl_HashEntry **buckets; /* Pointer to bucket array. Each element + * points to first entry in bucket's hash + * chain, or NULL. */ + Tcl_HashEntry *staticBuckets[TCL_SMALL_HASH_TABLE]; + /* Bucket array used for small tables (to + * avoid mallocs and frees). */ + int numBuckets; /* Total number of buckets allocated at + * **bucketPtr. */ + int numEntries; /* Total number of entries present in + * table. */ + int rebuildSize; /* Enlarge table when numEntries gets to be + * this large. */ + int downShift; /* Shift count used in hashing function. + * Designed to use high-order bits of + * randomized keys. */ + int mask; /* Mask value used in hashing function. */ + int keyType; /* Type of keys used in this table. It's + * either TCL_CUSTOM_KEYS, TCL_STRING_KEYS, + * TCL_ONE_WORD_KEYS, or an integer giving the + * number of ints that is the size of the + * key. */ + Tcl_HashEntry *(*findProc) (Tcl_HashTable *tablePtr, const char *key); + Tcl_HashEntry *(*createProc) (Tcl_HashTable *tablePtr, const char *key, + int *newPtr); + const Tcl_HashKeyType *typePtr; + /* Type of the keys used in the + * Tcl_HashTable. */ +}; + +/* + * Structure definition for information used to keep track of searches through + * hash tables: + */ + +typedef struct Tcl_HashSearch { + Tcl_HashTable *tablePtr; /* Table being searched. */ + int nextIndex; /* Index of next bucket to be enumerated after + * present one. */ + Tcl_HashEntry *nextEntryPtr;/* Next entry to be enumerated in the current + * bucket. 
*/ +} Tcl_HashSearch; + +/* + * Acceptable key types for hash tables: + * + * TCL_STRING_KEYS: The keys are strings, they are copied into the + * entry. + * TCL_ONE_WORD_KEYS: The keys are pointers, the pointer is stored + * in the entry. + * TCL_CUSTOM_TYPE_KEYS: The keys are arbitrary types which are copied + * into the entry. + * TCL_CUSTOM_PTR_KEYS: The keys are pointers to arbitrary types, the + * pointer is stored in the entry. + * + * While maintaining binary compatability the above have to be distinct values + * as they are used to differentiate between old versions of the hash table + * which don't have a typePtr and new ones which do. Once binary compatability + * is discarded in favour of making more wide spread changes TCL_STRING_KEYS + * can be the same as TCL_CUSTOM_TYPE_KEYS, and TCL_ONE_WORD_KEYS can be the + * same as TCL_CUSTOM_PTR_KEYS because they simply determine how the key is + * accessed from the entry and not the behaviour. + */ + +#define TCL_STRING_KEYS (0) +#define TCL_ONE_WORD_KEYS (1) +#define TCL_CUSTOM_TYPE_KEYS (-2) +#define TCL_CUSTOM_PTR_KEYS (-1) + +/* + * Structure definition for information used to keep track of searches through + * dictionaries. These fields should not be accessed by code outside + * tclDictObj.c + */ + +typedef struct { + void *next; /* Search position for underlying hash + * table. */ + int epoch; /* Epoch marker for dictionary being searched, + * or -1 if search has terminated. */ + Tcl_Dict dictionaryPtr; /* Reference to dictionary being searched. */ +} Tcl_DictSearch; + +/* + *---------------------------------------------------------------------------- + * Flag values to pass to Tcl_DoOneEvent to disable searches for some kinds of + * events: + */ + +#define TCL_DONT_WAIT (1<<1) +#define TCL_WINDOW_EVENTS (1<<2) +#define TCL_FILE_EVENTS (1<<3) +#define TCL_TIMER_EVENTS (1<<4) +#define TCL_IDLE_EVENTS (1<<5) /* WAS 0x10 ???? */ +#define TCL_ALL_EVENTS (~TCL_DONT_WAIT) + +/* + * The following structure defines a generic event for the Tcl event system. + * These are the things that are queued in calls to Tcl_QueueEvent and + * serviced later by Tcl_DoOneEvent. There can be many different kinds of + * events with different fields, corresponding to window events, timer events, + * etc. The structure for a particular event consists of a Tcl_Event header + * followed by additional information specific to that event. + */ + +struct Tcl_Event { + Tcl_EventProc *proc; /* Function to call to service this event. */ + struct Tcl_Event *nextPtr; /* Next in list of pending events, or NULL. */ +}; + +/* + * Positions to pass to Tcl_QueueEvent: + */ + +typedef enum { + TCL_QUEUE_TAIL, TCL_QUEUE_HEAD, TCL_QUEUE_MARK +} Tcl_QueuePosition; + +/* + * Values to pass to Tcl_SetServiceMode to specify the behavior of notifier + * event routines. + */ + +#define TCL_SERVICE_NONE 0 +#define TCL_SERVICE_ALL 1 + +/* + * The following structure keeps is used to hold a time value, either as an + * absolute time (the number of seconds from the epoch) or as an elapsed time. + * On Unix systems the epoch is Midnight Jan 1, 1970 GMT. + */ + +typedef struct Tcl_Time { + long sec; /* Seconds. */ + long usec; /* Microseconds. 
*/ +} Tcl_Time; + +typedef void (Tcl_SetTimerProc) (CONST86 Tcl_Time *timePtr); +typedef int (Tcl_WaitForEventProc) (CONST86 Tcl_Time *timePtr); + +/* + * TIP #233 (Virtualized Time) + */ + +typedef void (Tcl_GetTimeProc) (Tcl_Time *timebuf, ClientData clientData); +typedef void (Tcl_ScaleTimeProc) (Tcl_Time *timebuf, ClientData clientData); + +/* + *---------------------------------------------------------------------------- + * Bits to pass to Tcl_CreateFileHandler and Tcl_CreateChannelHandler to + * indicate what sorts of events are of interest: + */ + +#define TCL_READABLE (1<<1) +#define TCL_WRITABLE (1<<2) +#define TCL_EXCEPTION (1<<3) + +/* + * Flag values to pass to Tcl_OpenCommandChannel to indicate the disposition + * of the stdio handles. TCL_STDIN, TCL_STDOUT, TCL_STDERR, are also used in + * Tcl_GetStdChannel. + */ + +#define TCL_STDIN (1<<1) +#define TCL_STDOUT (1<<2) +#define TCL_STDERR (1<<3) +#define TCL_ENFORCE_MODE (1<<4) + +/* + * Bits passed to Tcl_DriverClose2Proc to indicate which side of a channel + * should be closed. + */ + +#define TCL_CLOSE_READ (1<<1) +#define TCL_CLOSE_WRITE (1<<2) + +/* + * Value to use as the closeProc for a channel that supports the close2Proc + * interface. + */ + +#define TCL_CLOSE2PROC ((Tcl_DriverCloseProc *) 1) + +/* + * Channel version tag. This was introduced in 8.3.2/8.4. + */ + +#define TCL_CHANNEL_VERSION_1 ((Tcl_ChannelTypeVersion) 0x1) +#define TCL_CHANNEL_VERSION_2 ((Tcl_ChannelTypeVersion) 0x2) +#define TCL_CHANNEL_VERSION_3 ((Tcl_ChannelTypeVersion) 0x3) +#define TCL_CHANNEL_VERSION_4 ((Tcl_ChannelTypeVersion) 0x4) +#define TCL_CHANNEL_VERSION_5 ((Tcl_ChannelTypeVersion) 0x5) + +/* + * TIP #218: Channel Actions, Ids for Tcl_DriverThreadActionProc. + */ + +#define TCL_CHANNEL_THREAD_INSERT (0) +#define TCL_CHANNEL_THREAD_REMOVE (1) + +/* + * Typedefs for the various operations in a channel type: + */ + +typedef int (Tcl_DriverBlockModeProc) (ClientData instanceData, int mode); +typedef int (Tcl_DriverCloseProc) (ClientData instanceData, + Tcl_Interp *interp); +typedef int (Tcl_DriverClose2Proc) (ClientData instanceData, + Tcl_Interp *interp, int flags); +typedef int (Tcl_DriverInputProc) (ClientData instanceData, char *buf, + int toRead, int *errorCodePtr); +typedef int (Tcl_DriverOutputProc) (ClientData instanceData, + CONST84 char *buf, int toWrite, int *errorCodePtr); +typedef int (Tcl_DriverSeekProc) (ClientData instanceData, long offset, + int mode, int *errorCodePtr); +typedef int (Tcl_DriverSetOptionProc) (ClientData instanceData, + Tcl_Interp *interp, const char *optionName, + const char *value); +typedef int (Tcl_DriverGetOptionProc) (ClientData instanceData, + Tcl_Interp *interp, CONST84 char *optionName, + Tcl_DString *dsPtr); +typedef void (Tcl_DriverWatchProc) (ClientData instanceData, int mask); +typedef int (Tcl_DriverGetHandleProc) (ClientData instanceData, + int direction, ClientData *handlePtr); +typedef int (Tcl_DriverFlushProc) (ClientData instanceData); +typedef int (Tcl_DriverHandlerProc) (ClientData instanceData, + int interestMask); +typedef Tcl_WideInt (Tcl_DriverWideSeekProc) (ClientData instanceData, + Tcl_WideInt offset, int mode, int *errorCodePtr); +/* + * TIP #218, Channel Thread Actions + */ +typedef void (Tcl_DriverThreadActionProc) (ClientData instanceData, + int action); +/* + * TIP #208, File Truncation (etc.) 
+ */ +typedef int (Tcl_DriverTruncateProc) (ClientData instanceData, + Tcl_WideInt length); + +/* + * struct Tcl_ChannelType: + * + * One such structure exists for each type (kind) of channel. It collects + * together in one place all the functions that are part of the specific + * channel type. + * + * It is recommend that the Tcl_Channel* functions are used to access elements + * of this structure, instead of direct accessing. + */ + +typedef struct Tcl_ChannelType { + const char *typeName; /* The name of the channel type in Tcl + * commands. This storage is owned by channel + * type. */ + Tcl_ChannelTypeVersion version; + /* Version of the channel type. */ + Tcl_DriverCloseProc *closeProc; + /* Function to call to close the channel, or + * TCL_CLOSE2PROC if the close2Proc should be + * used instead. */ + Tcl_DriverInputProc *inputProc; + /* Function to call for input on channel. */ + Tcl_DriverOutputProc *outputProc; + /* Function to call for output on channel. */ + Tcl_DriverSeekProc *seekProc; + /* Function to call to seek on the channel. + * May be NULL. */ + Tcl_DriverSetOptionProc *setOptionProc; + /* Set an option on a channel. */ + Tcl_DriverGetOptionProc *getOptionProc; + /* Get an option from a channel. */ + Tcl_DriverWatchProc *watchProc; + /* Set up the notifier to watch for events on + * this channel. */ + Tcl_DriverGetHandleProc *getHandleProc; + /* Get an OS handle from the channel or NULL + * if not supported. */ + Tcl_DriverClose2Proc *close2Proc; + /* Function to call to close the channel if + * the device supports closing the read & + * write sides independently. */ + Tcl_DriverBlockModeProc *blockModeProc; + /* Set blocking mode for the raw channel. May + * be NULL. */ + /* + * Only valid in TCL_CHANNEL_VERSION_2 channels or later. + */ + Tcl_DriverFlushProc *flushProc; + /* Function to call to flush a channel. May be + * NULL. */ + Tcl_DriverHandlerProc *handlerProc; + /* Function to call to handle a channel event. + * This will be passed up the stacked channel + * chain. */ + /* + * Only valid in TCL_CHANNEL_VERSION_3 channels or later. + */ + Tcl_DriverWideSeekProc *wideSeekProc; + /* Function to call to seek on the channel + * which can handle 64-bit offsets. May be + * NULL, and must be NULL if seekProc is + * NULL. */ + /* + * Only valid in TCL_CHANNEL_VERSION_4 channels or later. + * TIP #218, Channel Thread Actions. + */ + Tcl_DriverThreadActionProc *threadActionProc; + /* Function to call to notify the driver of + * thread specific activity for a channel. May + * be NULL. */ + /* + * Only valid in TCL_CHANNEL_VERSION_5 channels or later. + * TIP #208, File Truncation. + */ + Tcl_DriverTruncateProc *truncateProc; + /* Function to call to truncate the underlying + * file to a particular length. May be NULL if + * the channel does not support truncation. */ +} Tcl_ChannelType; + +/* + * The following flags determine whether the blockModeProc above should set + * the channel into blocking or nonblocking mode. They are passed as arguments + * to the blockModeProc function in the above structure. + */ + +#define TCL_MODE_BLOCKING 0 /* Put channel into blocking mode. */ +#define TCL_MODE_NONBLOCKING 1 /* Put channel into nonblocking + * mode. */ + +/* + *---------------------------------------------------------------------------- + * Enum for different types of file paths. 
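+ *
+ * For example (illustrative only; fileName is an arbitrary char * chosen
+ * for the sketch), a caller might classify a user-supplied path with
+ * Tcl_GetPathType() and handle relative paths specially:
+ *
+ *     if (Tcl_GetPathType(fileName) != TCL_PATH_ABSOLUTE) {
+ *         ... resolve fileName against the current directory ...
+ *     }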
+ */ + +typedef enum Tcl_PathType { + TCL_PATH_ABSOLUTE, + TCL_PATH_RELATIVE, + TCL_PATH_VOLUME_RELATIVE +} Tcl_PathType; + +/* + * The following structure is used to pass glob type data amongst the various + * glob routines and Tcl_FSMatchInDirectory. + */ + +typedef struct Tcl_GlobTypeData { + int type; /* Corresponds to bcdpfls as in 'find -t'. */ + int perm; /* Corresponds to file permissions. */ + Tcl_Obj *macType; /* Acceptable Mac type. */ + Tcl_Obj *macCreator; /* Acceptable Mac creator. */ +} Tcl_GlobTypeData; + +/* + * Type and permission definitions for glob command. + */ + +#define TCL_GLOB_TYPE_BLOCK (1<<0) +#define TCL_GLOB_TYPE_CHAR (1<<1) +#define TCL_GLOB_TYPE_DIR (1<<2) +#define TCL_GLOB_TYPE_PIPE (1<<3) +#define TCL_GLOB_TYPE_FILE (1<<4) +#define TCL_GLOB_TYPE_LINK (1<<5) +#define TCL_GLOB_TYPE_SOCK (1<<6) +#define TCL_GLOB_TYPE_MOUNT (1<<7) + +#define TCL_GLOB_PERM_RONLY (1<<0) +#define TCL_GLOB_PERM_HIDDEN (1<<1) +#define TCL_GLOB_PERM_R (1<<2) +#define TCL_GLOB_PERM_W (1<<3) +#define TCL_GLOB_PERM_X (1<<4) + +/* + * Flags for the unload callback function. + */ + +#define TCL_UNLOAD_DETACH_FROM_INTERPRETER (1<<0) +#define TCL_UNLOAD_DETACH_FROM_PROCESS (1<<1) + +/* + * Typedefs for the various filesystem operations: + */ + +typedef int (Tcl_FSStatProc) (Tcl_Obj *pathPtr, Tcl_StatBuf *buf); +typedef int (Tcl_FSAccessProc) (Tcl_Obj *pathPtr, int mode); +typedef Tcl_Channel (Tcl_FSOpenFileChannelProc) (Tcl_Interp *interp, + Tcl_Obj *pathPtr, int mode, int permissions); +typedef int (Tcl_FSMatchInDirectoryProc) (Tcl_Interp *interp, Tcl_Obj *result, + Tcl_Obj *pathPtr, const char *pattern, Tcl_GlobTypeData *types); +typedef Tcl_Obj * (Tcl_FSGetCwdProc) (Tcl_Interp *interp); +typedef int (Tcl_FSChdirProc) (Tcl_Obj *pathPtr); +typedef int (Tcl_FSLstatProc) (Tcl_Obj *pathPtr, Tcl_StatBuf *buf); +typedef int (Tcl_FSCreateDirectoryProc) (Tcl_Obj *pathPtr); +typedef int (Tcl_FSDeleteFileProc) (Tcl_Obj *pathPtr); +typedef int (Tcl_FSCopyDirectoryProc) (Tcl_Obj *srcPathPtr, + Tcl_Obj *destPathPtr, Tcl_Obj **errorPtr); +typedef int (Tcl_FSCopyFileProc) (Tcl_Obj *srcPathPtr, Tcl_Obj *destPathPtr); +typedef int (Tcl_FSRemoveDirectoryProc) (Tcl_Obj *pathPtr, int recursive, + Tcl_Obj **errorPtr); +typedef int (Tcl_FSRenameFileProc) (Tcl_Obj *srcPathPtr, Tcl_Obj *destPathPtr); +typedef void (Tcl_FSUnloadFileProc) (Tcl_LoadHandle loadHandle); +typedef Tcl_Obj * (Tcl_FSListVolumesProc) (void); +/* We have to declare the utime structure here. 
*/ +struct utimbuf; +typedef int (Tcl_FSUtimeProc) (Tcl_Obj *pathPtr, struct utimbuf *tval); +typedef int (Tcl_FSNormalizePathProc) (Tcl_Interp *interp, Tcl_Obj *pathPtr, + int nextCheckpoint); +typedef int (Tcl_FSFileAttrsGetProc) (Tcl_Interp *interp, int index, + Tcl_Obj *pathPtr, Tcl_Obj **objPtrRef); +typedef const char *CONST86 * (Tcl_FSFileAttrStringsProc) (Tcl_Obj *pathPtr, + Tcl_Obj **objPtrRef); +typedef int (Tcl_FSFileAttrsSetProc) (Tcl_Interp *interp, int index, + Tcl_Obj *pathPtr, Tcl_Obj *objPtr); +typedef Tcl_Obj * (Tcl_FSLinkProc) (Tcl_Obj *pathPtr, Tcl_Obj *toPtr, + int linkType); +typedef int (Tcl_FSLoadFileProc) (Tcl_Interp *interp, Tcl_Obj *pathPtr, + Tcl_LoadHandle *handlePtr, Tcl_FSUnloadFileProc **unloadProcPtr); +typedef int (Tcl_FSPathInFilesystemProc) (Tcl_Obj *pathPtr, + ClientData *clientDataPtr); +typedef Tcl_Obj * (Tcl_FSFilesystemPathTypeProc) (Tcl_Obj *pathPtr); +typedef Tcl_Obj * (Tcl_FSFilesystemSeparatorProc) (Tcl_Obj *pathPtr); +typedef void (Tcl_FSFreeInternalRepProc) (ClientData clientData); +typedef ClientData (Tcl_FSDupInternalRepProc) (ClientData clientData); +typedef Tcl_Obj * (Tcl_FSInternalToNormalizedProc) (ClientData clientData); +typedef ClientData (Tcl_FSCreateInternalRepProc) (Tcl_Obj *pathPtr); + +typedef struct Tcl_FSVersion_ *Tcl_FSVersion; + +/* + *---------------------------------------------------------------------------- + * Data structures related to hooking into the filesystem + */ + +/* + * Filesystem version tag. This was introduced in 8.4. + */ + +#define TCL_FILESYSTEM_VERSION_1 ((Tcl_FSVersion) 0x1) + +/* + * struct Tcl_Filesystem: + * + * One such structure exists for each type (kind) of filesystem. It collects + * together in one place all the functions that are part of the specific + * filesystem. Tcl always accesses the filesystem through one of these + * structures. + * + * Not all entries need be non-NULL; any which are NULL are simply ignored. + * However, a complete filesystem should provide all of these functions. The + * explanations in the structure show the importance of each function. + */ + +typedef struct Tcl_Filesystem { + const char *typeName; /* The name of the filesystem. */ + int structureLength; /* Length of this structure, so future binary + * compatibility can be assured. */ + Tcl_FSVersion version; /* Version of the filesystem type. */ + Tcl_FSPathInFilesystemProc *pathInFilesystemProc; + /* Function to check whether a path is in this + * filesystem. This is the most important + * filesystem function. */ + Tcl_FSDupInternalRepProc *dupInternalRepProc; + /* Function to duplicate internal fs rep. May + * be NULL (but then fs is less efficient). */ + Tcl_FSFreeInternalRepProc *freeInternalRepProc; + /* Function to free internal fs rep. Must be + * implemented if internal representations + * need freeing, otherwise it can be NULL. */ + Tcl_FSInternalToNormalizedProc *internalToNormalizedProc; + /* Function to convert internal representation + * to a normalized path. Only required if the + * fs creates pure path objects with no + * string/path representation. */ + Tcl_FSCreateInternalRepProc *createInternalRepProc; + /* Function to create a filesystem-specific + * internal representation. May be NULL if + * paths have no internal representation, or + * if the Tcl_FSPathInFilesystemProc for this + * filesystem always immediately creates an + * internal representation for paths it + * accepts. */ + Tcl_FSNormalizePathProc *normalizePathProc; + /* Function to normalize a path. 
Should be + * implemented for all filesystems which can + * have multiple string representations for + * the same path object. */ + Tcl_FSFilesystemPathTypeProc *filesystemPathTypeProc; + /* Function to determine the type of a path in + * this filesystem. May be NULL. */ + Tcl_FSFilesystemSeparatorProc *filesystemSeparatorProc; + /* Function to return the separator + * character(s) for this filesystem. Must be + * implemented. */ + Tcl_FSStatProc *statProc; /* Function to process a 'Tcl_FSStat()' call. + * Must be implemented for any reasonable + * filesystem. */ + Tcl_FSAccessProc *accessProc; + /* Function to process a 'Tcl_FSAccess()' + * call. Must be implemented for any + * reasonable filesystem. */ + Tcl_FSOpenFileChannelProc *openFileChannelProc; + /* Function to process a + * 'Tcl_FSOpenFileChannel()' call. Must be + * implemented for any reasonable + * filesystem. */ + Tcl_FSMatchInDirectoryProc *matchInDirectoryProc; + /* Function to process a + * 'Tcl_FSMatchInDirectory()'. If not + * implemented, then glob and recursive copy + * functionality will be lacking in the + * filesystem. */ + Tcl_FSUtimeProc *utimeProc; /* Function to process a 'Tcl_FSUtime()' call. + * Required to allow setting (not reading) of + * times with 'file mtime', 'file atime' and + * the open-r/open-w/fcopy implementation of + * 'file copy'. */ + Tcl_FSLinkProc *linkProc; /* Function to process a 'Tcl_FSLink()' call. + * Should be implemented only if the + * filesystem supports links (reading or + * creating). */ + Tcl_FSListVolumesProc *listVolumesProc; + /* Function to list any filesystem volumes + * added by this filesystem. Should be + * implemented only if the filesystem adds + * volumes at the head of the filesystem. */ + Tcl_FSFileAttrStringsProc *fileAttrStringsProc; + /* Function to list all attributes strings + * which are valid for this filesystem. If not + * implemented the filesystem will not support + * the 'file attributes' command. This allows + * arbitrary additional information to be + * attached to files in the filesystem. */ + Tcl_FSFileAttrsGetProc *fileAttrsGetProc; + /* Function to process a + * 'Tcl_FSFileAttrsGet()' call, used by 'file + * attributes'. */ + Tcl_FSFileAttrsSetProc *fileAttrsSetProc; + /* Function to process a + * 'Tcl_FSFileAttrsSet()' call, used by 'file + * attributes'. */ + Tcl_FSCreateDirectoryProc *createDirectoryProc; + /* Function to process a + * 'Tcl_FSCreateDirectory()' call. Should be + * implemented unless the FS is read-only. */ + Tcl_FSRemoveDirectoryProc *removeDirectoryProc; + /* Function to process a + * 'Tcl_FSRemoveDirectory()' call. Should be + * implemented unless the FS is read-only. */ + Tcl_FSDeleteFileProc *deleteFileProc; + /* Function to process a 'Tcl_FSDeleteFile()' + * call. Should be implemented unless the FS + * is read-only. */ + Tcl_FSCopyFileProc *copyFileProc; + /* Function to process a 'Tcl_FSCopyFile()' + * call. If not implemented Tcl will fall back + * on open-r, open-w and fcopy as a copying + * mechanism, for copying actions initiated in + * Tcl (not C). */ + Tcl_FSRenameFileProc *renameFileProc; + /* Function to process a 'Tcl_FSRenameFile()' + * call. If not implemented, Tcl will fall + * back on a copy and delete mechanism, for + * rename actions initiated in Tcl (not C). */ + Tcl_FSCopyDirectoryProc *copyDirectoryProc; + /* Function to process a + * 'Tcl_FSCopyDirectory()' call. 
If not + * implemented, Tcl will fall back on a + * recursive create-dir, file copy mechanism, + * for copying actions initiated in Tcl (not + * C). */ + Tcl_FSLstatProc *lstatProc; /* Function to process a 'Tcl_FSLstat()' call. + * If not implemented, Tcl will attempt to use + * the 'statProc' defined above instead. */ + Tcl_FSLoadFileProc *loadFileProc; + /* Function to process a 'Tcl_FSLoadFile()' + * call. If not implemented, Tcl will fall + * back on a copy to native-temp followed by a + * Tcl_FSLoadFile on that temporary copy. */ + Tcl_FSGetCwdProc *getCwdProc; + /* Function to process a 'Tcl_FSGetCwd()' + * call. Most filesystems need not implement + * this. It will usually only be called once, + * if 'getcwd' is called before 'chdir'. May + * be NULL. */ + Tcl_FSChdirProc *chdirProc; /* Function to process a 'Tcl_FSChdir()' call. + * If filesystems do not implement this, it + * will be emulated by a series of directory + * access checks. Otherwise, virtual + * filesystems which do implement it need only + * respond with a positive return result if + * the dirName is a valid directory in their + * filesystem. They need not remember the + * result, since that will be automatically + * remembered for use by GetCwd. Real + * filesystems should carry out the correct + * action (i.e. call the correct system + * 'chdir' api). If not implemented, then 'cd' + * and 'pwd' will fail inside the + * filesystem. */ +} Tcl_Filesystem; + +/* + * The following definitions are used as values for the 'linkAction' flag to + * Tcl_FSLink, or the linkProc of any filesystem. Any combination of flags can + * be given. For link creation, the linkProc should create a link which + * matches any of the types given. + * + * TCL_CREATE_SYMBOLIC_LINK - Create a symbolic or soft link. + * TCL_CREATE_HARD_LINK - Create a hard link. + */ + +#define TCL_CREATE_SYMBOLIC_LINK 0x01 +#define TCL_CREATE_HARD_LINK 0x02 + +/* + *---------------------------------------------------------------------------- + * The following structure represents the Notifier functions that you can + * override with the Tcl_SetNotifier call. + */ + +typedef struct Tcl_NotifierProcs { + Tcl_SetTimerProc *setTimerProc; + Tcl_WaitForEventProc *waitForEventProc; + Tcl_CreateFileHandlerProc *createFileHandlerProc; + Tcl_DeleteFileHandlerProc *deleteFileHandlerProc; + Tcl_InitNotifierProc *initNotifierProc; + Tcl_FinalizeNotifierProc *finalizeNotifierProc; + Tcl_AlertNotifierProc *alertNotifierProc; + Tcl_ServiceModeHookProc *serviceModeHookProc; +} Tcl_NotifierProcs; + +/* + *---------------------------------------------------------------------------- + * The following data structures and declarations are for the new Tcl parser. + * + * For each word of a command, and for each piece of a word such as a variable + * reference, one of the following structures is created to describe the + * token. + */ + +typedef struct Tcl_Token { + int type; /* Type of token, such as TCL_TOKEN_WORD; see + * below for valid types. */ + const char *start; /* First character in token. */ + int size; /* Number of bytes in token. */ + int numComponents; /* If this token is composed of other tokens, + * this field tells how many of them there are + * (including components of components, etc.). + * The component tokens immediately follow + * this one. */ +} Tcl_Token; + +/* + * Type values defined for Tcl_Token structures. These values are defined as + * mask bits so that it's easy to check for collections of types. 
+ * + * TCL_TOKEN_WORD - The token describes one word of a command, + * from the first non-blank character of the word + * (which may be " or {) up to but not including + * the space, semicolon, or bracket that + * terminates the word. NumComponents counts the + * total number of sub-tokens that make up the + * word. This includes, for example, sub-tokens + * of TCL_TOKEN_VARIABLE tokens. + * TCL_TOKEN_SIMPLE_WORD - This token is just like TCL_TOKEN_WORD except + * that the word is guaranteed to consist of a + * single TCL_TOKEN_TEXT sub-token. + * TCL_TOKEN_TEXT - The token describes a range of literal text + * that is part of a word. NumComponents is + * always 0. + * TCL_TOKEN_BS - The token describes a backslash sequence that + * must be collapsed. NumComponents is always 0. + * TCL_TOKEN_COMMAND - The token describes a command whose result + * must be substituted into the word. The token + * includes the enclosing brackets. NumComponents + * is always 0. + * TCL_TOKEN_VARIABLE - The token describes a variable substitution, + * including the dollar sign, variable name, and + * array index (if there is one) up through the + * right parentheses. NumComponents tells how + * many additional tokens follow to represent the + * variable name. The first token will be a + * TCL_TOKEN_TEXT token that describes the + * variable name. If the variable is an array + * reference then there will be one or more + * additional tokens, of type TCL_TOKEN_TEXT, + * TCL_TOKEN_BS, TCL_TOKEN_COMMAND, and + * TCL_TOKEN_VARIABLE, that describe the array + * index; numComponents counts the total number + * of nested tokens that make up the variable + * reference, including sub-tokens of + * TCL_TOKEN_VARIABLE tokens. + * TCL_TOKEN_SUB_EXPR - The token describes one subexpression of an + * expression, from the first non-blank character + * of the subexpression up to but not including + * the space, brace, or bracket that terminates + * the subexpression. NumComponents counts the + * total number of following subtokens that make + * up the subexpression; this includes all + * subtokens for any nested TCL_TOKEN_SUB_EXPR + * tokens. For example, a numeric value used as a + * primitive operand is described by a + * TCL_TOKEN_SUB_EXPR token followed by a + * TCL_TOKEN_TEXT token. A binary subexpression + * is described by a TCL_TOKEN_SUB_EXPR token + * followed by the TCL_TOKEN_OPERATOR token for + * the operator, then TCL_TOKEN_SUB_EXPR tokens + * for the left then the right operands. + * TCL_TOKEN_OPERATOR - The token describes one expression operator. + * An operator might be the name of a math + * function such as "abs". A TCL_TOKEN_OPERATOR + * token is always preceeded by one + * TCL_TOKEN_SUB_EXPR token for the operator's + * subexpression, and is followed by zero or more + * TCL_TOKEN_SUB_EXPR tokens for the operator's + * operands. NumComponents is always 0. + * TCL_TOKEN_EXPAND_WORD - This token is just like TCL_TOKEN_WORD except + * that it marks a word that began with the + * literal character prefix "{*}". This word is + * marked to be expanded - that is, broken into + * words after substitution is complete. + */ + +#define TCL_TOKEN_WORD 1 +#define TCL_TOKEN_SIMPLE_WORD 2 +#define TCL_TOKEN_TEXT 4 +#define TCL_TOKEN_BS 8 +#define TCL_TOKEN_COMMAND 16 +#define TCL_TOKEN_VARIABLE 32 +#define TCL_TOKEN_SUB_EXPR 64 +#define TCL_TOKEN_OPERATOR 128 +#define TCL_TOKEN_EXPAND_WORD 256 + +/* + * Parsing error types. 
On any parsing error, one of these values will be + * stored in the error field of the Tcl_Parse structure defined below. + */ + +#define TCL_PARSE_SUCCESS 0 +#define TCL_PARSE_QUOTE_EXTRA 1 +#define TCL_PARSE_BRACE_EXTRA 2 +#define TCL_PARSE_MISSING_BRACE 3 +#define TCL_PARSE_MISSING_BRACKET 4 +#define TCL_PARSE_MISSING_PAREN 5 +#define TCL_PARSE_MISSING_QUOTE 6 +#define TCL_PARSE_MISSING_VAR_BRACE 7 +#define TCL_PARSE_SYNTAX 8 +#define TCL_PARSE_BAD_NUMBER 9 + +/* + * A structure of the following type is filled in by Tcl_ParseCommand. It + * describes a single command parsed from an input string. + */ + +#define NUM_STATIC_TOKENS 20 + +typedef struct Tcl_Parse { + const char *commentStart; /* Pointer to # that begins the first of one + * or more comments preceding the command. */ + int commentSize; /* Number of bytes in comments (up through + * newline character that terminates the last + * comment). If there were no comments, this + * field is 0. */ + const char *commandStart; /* First character in first word of + * command. */ + int commandSize; /* Number of bytes in command, including first + * character of first word, up through the + * terminating newline, close bracket, or + * semicolon. */ + int numWords; /* Total number of words in command. May be + * 0. */ + Tcl_Token *tokenPtr; /* Pointer to first token representing the + * words of the command. Initially points to + * staticTokens, but may change to point to + * malloc-ed space if command exceeds space in + * staticTokens. */ + int numTokens; /* Total number of tokens in command. */ + int tokensAvailable; /* Total number of tokens available at + * *tokenPtr. */ + int errorType; /* One of the parsing error types defined + * above. */ + + /* + * The fields below are intended only for the private use of the parser. + * They should not be used by functions that invoke Tcl_ParseCommand. + */ + + const char *string; /* The original command string passed to + * Tcl_ParseCommand. */ + const char *end; /* Points to the character just after the last + * one in the command string. */ + Tcl_Interp *interp; /* Interpreter to use for error reporting, or + * NULL. */ + const char *term; /* Points to character in string that + * terminated most recent token. Filled in by + * ParseTokens. If an error occurs, points to + * beginning of region where the error + * occurred (e.g. the open brace if the close + * brace is missing). */ + int incomplete; /* This field is set to 1 by Tcl_ParseCommand + * if the command appears to be incomplete. + * This information is used by + * Tcl_CommandComplete. */ + Tcl_Token staticTokens[NUM_STATIC_TOKENS]; + /* Initial space for tokens for command. This + * space should be large enough to accommodate + * most commands; dynamic space is allocated + * for very large commands that don't fit + * here. */ +} Tcl_Parse; + +/* + *---------------------------------------------------------------------------- + * The following structure represents a user-defined encoding. It collects + * together all the functions that are used by the specific encoding. + */ + +typedef struct Tcl_EncodingType { + const char *encodingName; /* The name of the encoding, e.g. "euc-jp". + * This name is the unique key for this + * encoding type. */ + Tcl_EncodingConvertProc *toUtfProc; + /* Function to convert from external encoding + * into UTF-8. */ + Tcl_EncodingConvertProc *fromUtfProc; + /* Function to convert from UTF-8 into + * external encoding. 
*/ + Tcl_EncodingFreeProc *freeProc; + /* If non-NULL, function to call when this + * encoding is deleted. */ + ClientData clientData; /* Arbitrary value associated with encoding + * type. Passed to conversion functions. */ + int nullSize; /* Number of zero bytes that signify + * end-of-string in this encoding. This number + * is used to determine the source string + * length when the srcLen argument is + * negative. Must be 1 or 2. */ +} Tcl_EncodingType; + +/* + * The following definitions are used as values for the conversion control + * flags argument when converting text from one character set to another: + * + * TCL_ENCODING_START - Signifies that the source buffer is the first + * block in a (potentially multi-block) input + * stream. Tells the conversion function to reset + * to an initial state and perform any + * initialization that needs to occur before the + * first byte is converted. If the source buffer + * contains the entire input stream to be + * converted, this flag should be set. + * TCL_ENCODING_END - Signifies that the source buffer is the last + * block in a (potentially multi-block) input + * stream. Tells the conversion routine to + * perform any finalization that needs to occur + * after the last byte is converted and then to + * reset to an initial state. If the source + * buffer contains the entire input stream to be + * converted, this flag should be set. + * TCL_ENCODING_STOPONERROR - If set, then the converter will return + * immediately upon encountering an invalid byte + * sequence or a source character that has no + * mapping in the target encoding. If clear, then + * the converter will skip the problem, + * substituting one or more "close" characters in + * the destination buffer and then continue to + * convert the source. + */ + +#define TCL_ENCODING_START 0x01 +#define TCL_ENCODING_END 0x02 +#define TCL_ENCODING_STOPONERROR 0x04 + +/* + * The following definitions are the error codes returned by the conversion + * routines: + * + * TCL_OK - All characters were converted. + * TCL_CONVERT_NOSPACE - The output buffer would not have been large + * enough for all of the converted data; as many + * characters as could fit were converted though. + * TCL_CONVERT_MULTIBYTE - The last few bytes in the source string were + * the beginning of a multibyte sequence, but + * more bytes were needed to complete this + * sequence. A subsequent call to the conversion + * routine should pass the beginning of this + * unconverted sequence plus additional bytes + * from the source stream to properly convert the + * formerly split-up multibyte sequence. + * TCL_CONVERT_SYNTAX - The source stream contained an invalid + * character sequence. This may occur if the + * input stream has been damaged or if the input + * encoding method was misidentified. This error + * is reported only if TCL_ENCODING_STOPONERROR + * was specified. + * TCL_CONVERT_UNKNOWN - The source string contained a character that + * could not be represented in the target + * encoding. This error is reported only if + * TCL_ENCODING_STOPONERROR was specified. + */ + +#define TCL_CONVERT_MULTIBYTE (-1) +#define TCL_CONVERT_SYNTAX (-2) +#define TCL_CONVERT_UNKNOWN (-3) +#define TCL_CONVERT_NOSPACE (-4) + +/* + * The maximum number of bytes that are necessary to represent a single + * Unicode character in UTF-8. The valid values should be 3, 4 or 6 + * (or perhaps 1 if we want to support a non-unicode enabled core). If 3 or + * 4, then Tcl_UniChar must be 2-bytes in size (UCS-2) (the default). 
If 6, + * then Tcl_UniChar must be 4-bytes in size (UCS-4). At this time UCS-2 mode + * is the default and recommended mode. UCS-4 is experimental and not + * recommended. It works for the core, but most extensions expect UCS-2. + */ + +#ifndef TCL_UTF_MAX +#define TCL_UTF_MAX 3 +#endif + +/* + * This represents a Unicode character. Any changes to this should also be + * reflected in regcustom.h. + */ + +#if TCL_UTF_MAX > 4 + /* + * unsigned int isn't 100% accurate as it should be a strict 4-byte value + * (perhaps wchar_t). 64-bit systems may have troubles. The size of this + * value must be reflected correctly in regcustom.h and + * in tclEncoding.c. + * XXX: Tcl is currently UCS-2 and planning UTF-16 for the Unicode + * XXX: string rep that Tcl_UniChar represents. Changing the size + * XXX: of Tcl_UniChar is /not/ supported. + */ +typedef unsigned int Tcl_UniChar; +#else +typedef unsigned short Tcl_UniChar; +#endif + +/* + *---------------------------------------------------------------------------- + * TIP #59: The following structure is used in calls 'Tcl_RegisterConfig' to + * provide the system with the embedded configuration data. + */ + +typedef struct Tcl_Config { + const char *key; /* Configuration key to register. ASCII + * encoded, thus UTF-8. */ + const char *value; /* The value associated with the key. System + * encoding. */ +} Tcl_Config; + +/* + *---------------------------------------------------------------------------- + * Flags for TIP#143 limits, detailing which limits are active in an + * interpreter. Used for Tcl_{Add,Remove}LimitHandler type argument. + */ + +#define TCL_LIMIT_COMMANDS 0x01 +#define TCL_LIMIT_TIME 0x02 + +/* + * Structure containing information about a limit handler to be called when a + * command- or time-limit is exceeded by an interpreter. + */ + +typedef void (Tcl_LimitHandlerProc) (ClientData clientData, Tcl_Interp *interp); +typedef void (Tcl_LimitHandlerDeleteProc) (ClientData clientData); + +/* + *---------------------------------------------------------------------------- + * Override definitions for libtommath. + */ + +typedef struct mp_int mp_int; +#define MP_INT_DECLARED +typedef unsigned int mp_digit; +#define MP_DIGIT_DECLARED + +/* + *---------------------------------------------------------------------------- + * Definitions needed for Tcl_ParseArgvObj routines. + * Based on tkArgv.c. + * Modifications from the original are copyright (c) Sam Bromley 2006 + */ + +typedef struct { + int type; /* Indicates the option type; see below. */ + const char *keyStr; /* The key string that flags the option in the + * argv array. */ + void *srcPtr; /* Value to be used in setting dst; usage + * depends on type.*/ + void *dstPtr; /* Address of value to be modified; usage + * depends on type.*/ + const char *helpStr; /* Documentation message describing this + * option. */ + ClientData clientData; /* Word to pass to function callbacks. */ +} Tcl_ArgvInfo; + +/* + * Legal values for the type field of a Tcl_ArgInfo: see the user + * documentation for details. 
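+ *
+ * As a hypothetical example (option names and variables chosen purely for
+ * illustration), a command might describe its options with a table like
+ * the one below and pass it, together with the interp and the command's
+ * objc/objv, to the argument parser (Tcl_ParseArgsObjv() in Tcl 8.6):
+ *
+ *     static int verbose = 0, count = 1;
+ *     static const Tcl_ArgvInfo argTable[] = {
+ *         {TCL_ARGV_CONSTANT, "-verbose", (void *) 1, &verbose,
+ *             "Enable verbose output", NULL},
+ *         {TCL_ARGV_INT, "-count", NULL, &count,
+ *             "Number of repetitions", NULL},
+ *         TCL_ARGV_AUTO_HELP,
+ *         TCL_ARGV_TABLE_END
+ *     };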
+ */ + +#define TCL_ARGV_CONSTANT 15 +#define TCL_ARGV_INT 16 +#define TCL_ARGV_STRING 17 +#define TCL_ARGV_REST 18 +#define TCL_ARGV_FLOAT 19 +#define TCL_ARGV_FUNC 20 +#define TCL_ARGV_GENFUNC 21 +#define TCL_ARGV_HELP 22 +#define TCL_ARGV_END 23 + +/* + * Types of callback functions for the TCL_ARGV_FUNC and TCL_ARGV_GENFUNC + * argument types: + */ + +typedef int (Tcl_ArgvFuncProc)(ClientData clientData, Tcl_Obj *objPtr, + void *dstPtr); +typedef int (Tcl_ArgvGenFuncProc)(ClientData clientData, Tcl_Interp *interp, + int objc, Tcl_Obj *const *objv, void *dstPtr); + +/* + * Shorthand for commonly used argTable entries. + */ + +#define TCL_ARGV_AUTO_HELP \ + {TCL_ARGV_HELP, "-help", NULL, NULL, \ + "Print summary of command-line options and abort", NULL} +#define TCL_ARGV_AUTO_REST \ + {TCL_ARGV_REST, "--", NULL, NULL, \ + "Marks the end of the options", NULL} +#define TCL_ARGV_TABLE_END \ + {TCL_ARGV_END, NULL, NULL, NULL, NULL, NULL} + +/* + *---------------------------------------------------------------------------- + * Definitions needed for Tcl_Zlib routines. [TIP #234] + * + * Constants for the format flags describing what sort of data format is + * desired/expected for the Tcl_ZlibDeflate, Tcl_ZlibInflate and + * Tcl_ZlibStreamInit functions. + */ + +#define TCL_ZLIB_FORMAT_RAW 1 +#define TCL_ZLIB_FORMAT_ZLIB 2 +#define TCL_ZLIB_FORMAT_GZIP 4 +#define TCL_ZLIB_FORMAT_AUTO 8 + +/* + * Constants that describe whether the stream is to operate in compressing or + * decompressing mode. + */ + +#define TCL_ZLIB_STREAM_DEFLATE 16 +#define TCL_ZLIB_STREAM_INFLATE 32 + +/* + * Constants giving compression levels. Use of TCL_ZLIB_COMPRESS_DEFAULT is + * recommended. + */ + +#define TCL_ZLIB_COMPRESS_NONE 0 +#define TCL_ZLIB_COMPRESS_FAST 1 +#define TCL_ZLIB_COMPRESS_BEST 9 +#define TCL_ZLIB_COMPRESS_DEFAULT (-1) + +/* + * Constants for types of flushing, used with Tcl_ZlibFlush. + */ + +#define TCL_ZLIB_NO_FLUSH 0 +#define TCL_ZLIB_FLUSH 2 +#define TCL_ZLIB_FULLFLUSH 3 +#define TCL_ZLIB_FINALIZE 4 + +/* + *---------------------------------------------------------------------------- + * Definitions needed for the Tcl_LoadFile function. [TIP #416] + */ + +#define TCL_LOAD_GLOBAL 1 +#define TCL_LOAD_LAZY 2 + +/* + *---------------------------------------------------------------------------- + * Single public declaration for NRE. + */ + +typedef int (Tcl_NRPostProc) (ClientData data[], Tcl_Interp *interp, + int result); + +/* + *---------------------------------------------------------------------------- + * The following constant is used to test for older versions of Tcl in the + * stubs tables. + * + * Jan Nijtman's plus patch uses 0xFCA1BACF, so we need to pick a different + * value since the stubs tables don't match. + */ + +#define TCL_STUB_MAGIC ((int) 0xFCA3BACF) + +/* + * The following function is required to be defined in all stubs aware + * extensions. The function is actually implemented in the stub library, not + * the main Tcl library, although there is a trivial implementation in the + * main library in case an extension is statically linked into an application. + */ + +const char * Tcl_InitStubs(Tcl_Interp *interp, const char *version, + int exact); +const char * TclTomMathInitializeStubs(Tcl_Interp *interp, + const char *version, int epoch, int revision); + +/* + * When not using stubs, make it a macro. 
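+ *
+ * When stubs are in use (USE_TCL_STUBS defined and the extension linked
+ * against the stub library), the real function above is called instead,
+ * normally as the first action of the package's init function.  A minimal
+ * sketch, with the package name and version chosen only for illustration:
+ *
+ *     int
+ *     Example_Init(Tcl_Interp *interp)
+ *     {
+ *         if (Tcl_InitStubs(interp, "8.6", 0) == NULL) {
+ *             return TCL_ERROR;
+ *         }
+ *         return Tcl_PkgProvide(interp, "example", "1.0");
+ *     }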
+ */ + +#ifndef USE_TCL_STUBS +#define Tcl_InitStubs(interp, version, exact) \ + Tcl_PkgInitStubsCheck(interp, version, exact) +#endif + +/* + * TODO - tommath stubs export goes here! + */ + +/* + * Public functions that are not accessible via the stubs table. + * Tcl_GetMemoryInfo is needed for AOLserver. [Bug 1868171] + */ + +#define Tcl_Main(argc, argv, proc) Tcl_MainEx(argc, argv, proc, \ + (Tcl_FindExecutable(argv[0]), (Tcl_CreateInterp)())) +EXTERN void Tcl_MainEx(int argc, char **argv, + Tcl_AppInitProc *appInitProc, Tcl_Interp *interp); +EXTERN const char * Tcl_PkgInitStubsCheck(Tcl_Interp *interp, + const char *version, int exact); +#if defined(TCL_THREADS) && defined(USE_THREAD_ALLOC) +EXTERN void Tcl_GetMemoryInfo(Tcl_DString *dsPtr); +#endif + +/* + *---------------------------------------------------------------------------- + * Include the public function declarations that are accessible via the stubs + * table. + */ + +#include "tclDecls.h" + +/* + * Include platform specific public function declarations that are accessible + * via the stubs table. + */ + +#include "tclPlatDecls.h" + +/* + *---------------------------------------------------------------------------- + * The following declarations either map ckalloc and ckfree to malloc and + * free, or they map them to functions with all sorts of debugging hooks + * defined in tclCkalloc.c. + */ + +#ifdef TCL_MEM_DEBUG + +# define ckalloc(x) \ + ((VOID *) Tcl_DbCkalloc((unsigned)(x), __FILE__, __LINE__)) +# define ckfree(x) \ + Tcl_DbCkfree((char *)(x), __FILE__, __LINE__) +# define ckrealloc(x,y) \ + ((VOID *) Tcl_DbCkrealloc((char *)(x), (unsigned)(y), __FILE__, __LINE__)) +# define attemptckalloc(x) \ + ((VOID *) Tcl_AttemptDbCkalloc((unsigned)(x), __FILE__, __LINE__)) +# define attemptckrealloc(x,y) \ + ((VOID *) Tcl_AttemptDbCkrealloc((char *)(x), (unsigned)(y), __FILE__, __LINE__)) + +#else /* !TCL_MEM_DEBUG */ + +/* + * If we are not using the debugging allocator, we should call the Tcl_Alloc, + * et al. routines in order to guarantee that every module is using the same + * memory allocator both inside and outside of the Tcl library. + */ + +# define ckalloc(x) \ + ((VOID *) Tcl_Alloc((unsigned)(x))) +# define ckfree(x) \ + Tcl_Free((char *)(x)) +# define ckrealloc(x,y) \ + ((VOID *) Tcl_Realloc((char *)(x), (unsigned)(y))) +# define attemptckalloc(x) \ + ((VOID *) Tcl_AttemptAlloc((unsigned)(x))) +# define attemptckrealloc(x,y) \ + ((VOID *) Tcl_AttemptRealloc((char *)(x), (unsigned)(y))) +# undef Tcl_InitMemory +# define Tcl_InitMemory(x) +# undef Tcl_DumpActiveMemory +# define Tcl_DumpActiveMemory(x) +# undef Tcl_ValidateAllMemory +# define Tcl_ValidateAllMemory(x,y) + +#endif /* !TCL_MEM_DEBUG */ + +#ifdef TCL_MEM_DEBUG +# define Tcl_IncrRefCount(objPtr) \ + Tcl_DbIncrRefCount(objPtr, __FILE__, __LINE__) +# define Tcl_DecrRefCount(objPtr) \ + Tcl_DbDecrRefCount(objPtr, __FILE__, __LINE__) +# define Tcl_IsShared(objPtr) \ + Tcl_DbIsShared(objPtr, __FILE__, __LINE__) +#else +# define Tcl_IncrRefCount(objPtr) \ + ++(objPtr)->refCount + /* + * Use do/while0 idiom for optimum correctness without compiler warnings. + * http://c2.com/cgi/wiki?TrivialDoWhileLoop + */ +# define Tcl_DecrRefCount(objPtr) \ + do { \ + Tcl_Obj *_objPtr = (objPtr); \ + if (--(_objPtr)->refCount <= 0) { \ + TclFreeObj(_objPtr); \ + } \ + } while(0) +# define Tcl_IsShared(objPtr) \ + ((objPtr)->refCount > 1) +#endif + +/* + * Macros and definitions that help to debug the use of Tcl objects. 
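+ *
+ * The reference-counting macros above follow the usual hold/release
+ * pattern; a rough sketch (the variable name is illustrative only):
+ *
+ *     Tcl_Obj *valuePtr = Tcl_NewStringObj("value", -1);
+ *     Tcl_IncrRefCount(valuePtr);      -- hold a reference; refCount was 0
+ *     ... use valuePtr ...
+ *     Tcl_DecrRefCount(valuePtr);      -- release; may free valuePtr
+ *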
When + * TCL_MEM_DEBUG is defined, the Tcl_New declarations are overridden to call + * debugging versions of the object creation functions. + */ + +#ifdef TCL_MEM_DEBUG +# undef Tcl_NewBignumObj +# define Tcl_NewBignumObj(val) \ + Tcl_DbNewBignumObj(val, __FILE__, __LINE__) +# undef Tcl_NewBooleanObj +# define Tcl_NewBooleanObj(val) \ + Tcl_DbNewBooleanObj(val, __FILE__, __LINE__) +# undef Tcl_NewByteArrayObj +# define Tcl_NewByteArrayObj(bytes, len) \ + Tcl_DbNewByteArrayObj(bytes, len, __FILE__, __LINE__) +# undef Tcl_NewDoubleObj +# define Tcl_NewDoubleObj(val) \ + Tcl_DbNewDoubleObj(val, __FILE__, __LINE__) +# undef Tcl_NewIntObj +# define Tcl_NewIntObj(val) \ + Tcl_DbNewLongObj(val, __FILE__, __LINE__) +# undef Tcl_NewListObj +# define Tcl_NewListObj(objc, objv) \ + Tcl_DbNewListObj(objc, objv, __FILE__, __LINE__) +# undef Tcl_NewLongObj +# define Tcl_NewLongObj(val) \ + Tcl_DbNewLongObj(val, __FILE__, __LINE__) +# undef Tcl_NewObj +# define Tcl_NewObj() \ + Tcl_DbNewObj(__FILE__, __LINE__) +# undef Tcl_NewStringObj +# define Tcl_NewStringObj(bytes, len) \ + Tcl_DbNewStringObj(bytes, len, __FILE__, __LINE__) +# undef Tcl_NewWideIntObj +# define Tcl_NewWideIntObj(val) \ + Tcl_DbNewWideIntObj(val, __FILE__, __LINE__) +#endif /* TCL_MEM_DEBUG */ + +/* + *---------------------------------------------------------------------------- + * Macros for clients to use to access fields of hash entries: + */ + +#define Tcl_GetHashValue(h) ((h)->clientData) +#define Tcl_SetHashValue(h, value) ((h)->clientData = (ClientData) (value)) +#define Tcl_GetHashKey(tablePtr, h) \ + ((void *) (((tablePtr)->keyType == TCL_ONE_WORD_KEYS || \ + (tablePtr)->keyType == TCL_CUSTOM_PTR_KEYS) \ + ? (h)->key.oneWordValue \ + : (h)->key.string)) + +/* + * Macros to use for clients to use to invoke find and create functions for + * hash tables: + */ + +#undef Tcl_FindHashEntry +#define Tcl_FindHashEntry(tablePtr, key) \ + (*((tablePtr)->findProc))(tablePtr, (const char *)(key)) +#undef Tcl_CreateHashEntry +#define Tcl_CreateHashEntry(tablePtr, key, newPtr) \ + (*((tablePtr)->createProc))(tablePtr, (const char *)(key), newPtr) + +/* + *---------------------------------------------------------------------------- + * Macros that eliminate the overhead of the thread synchronization functions + * when compiling without thread support. + */ + +#ifndef TCL_THREADS +#undef Tcl_MutexLock +#define Tcl_MutexLock(mutexPtr) +#undef Tcl_MutexUnlock +#define Tcl_MutexUnlock(mutexPtr) +#undef Tcl_MutexFinalize +#define Tcl_MutexFinalize(mutexPtr) +#undef Tcl_ConditionNotify +#define Tcl_ConditionNotify(condPtr) +#undef Tcl_ConditionWait +#define Tcl_ConditionWait(condPtr, mutexPtr, timePtr) +#undef Tcl_ConditionFinalize +#define Tcl_ConditionFinalize(condPtr) +#endif /* TCL_THREADS */ + +/* + *---------------------------------------------------------------------------- + * Deprecated Tcl functions: + */ + +#ifndef TCL_NO_DEPRECATED +# undef Tcl_EvalObj +# define Tcl_EvalObj(interp,objPtr) \ + Tcl_EvalObjEx((interp),(objPtr),0) +# undef Tcl_GlobalEvalObj +# define Tcl_GlobalEvalObj(interp,objPtr) \ + Tcl_EvalObjEx((interp),(objPtr),TCL_EVAL_GLOBAL) + +/* + * These function have been renamed. The old names are deprecated, but we + * define these macros for backwards compatibilty. 
+ */ + +# define Tcl_Ckalloc Tcl_Alloc +# define Tcl_Ckfree Tcl_Free +# define Tcl_Ckrealloc Tcl_Realloc +# define Tcl_Return Tcl_SetResult +# define Tcl_TildeSubst Tcl_TranslateFileName +# define panic Tcl_Panic +# define panicVA Tcl_PanicVA +#endif /* !TCL_NO_DEPRECATED */ + +/* + *---------------------------------------------------------------------------- + * Convenience declaration of Tcl_AppInit for backwards compatibility. This + * function is not *implemented* by the tcl library, so the storage class is + * neither DLLEXPORT nor DLLIMPORT. + */ + +extern Tcl_AppInitProc Tcl_AppInit; + +#endif /* RC_INVOKED */ + +/* + * end block for C++ + */ + +#ifdef __cplusplus +} +#endif + +#endif /* _TCL */ + +/* + * Local Variables: + * mode: c + * c-basic-offset: 4 + * fill-column: 78 + * End: + */ ADDED compat/tcl-8.6/generic/tclDecls.h Index: compat/tcl-8.6/generic/tclDecls.h ================================================================== --- compat/tcl-8.6/generic/tclDecls.h +++ compat/tcl-8.6/generic/tclDecls.h @@ -0,0 +1,3806 @@ +/* + * tclDecls.h -- + * + * Declarations of functions in the platform independent public Tcl API. + * + * Copyright (c) 1998-1999 by Scriptics Corporation. + * + * See the file "license.terms" for information on usage and redistribution + * of this file, and for a DISCLAIMER OF ALL WARRANTIES. + */ + +#ifndef _TCLDECLS +#define _TCLDECLS + +#undef TCL_STORAGE_CLASS +#ifdef BUILD_tcl +# define TCL_STORAGE_CLASS DLLEXPORT +#else +# ifdef USE_TCL_STUBS +# define TCL_STORAGE_CLASS +# else +# define TCL_STORAGE_CLASS DLLIMPORT +# endif +#endif + +/* + * WARNING: This file is automatically generated by the tools/genStubs.tcl + * script. Any modifications to the function declarations below should be made + * in the generic/tcl.decls script. + */ + +/* !BEGIN!: Do not edit below this line. */ + +/* + * Exported function declarations: + */ + +/* 0 */ +EXTERN int Tcl_PkgProvideEx(Tcl_Interp *interp, + const char *name, const char *version, + const void *clientData); +/* 1 */ +EXTERN CONST84_RETURN char * Tcl_PkgRequireEx(Tcl_Interp *interp, + const char *name, const char *version, + int exact, void *clientDataPtr); +/* 2 */ +EXTERN void Tcl_Panic(const char *format, ...) 
TCL_FORMAT_PRINTF(1, 2); +/* 3 */ +EXTERN char * Tcl_Alloc(unsigned int size); +/* 4 */ +EXTERN void Tcl_Free(char *ptr); +/* 5 */ +EXTERN char * Tcl_Realloc(char *ptr, unsigned int size); +/* 6 */ +EXTERN char * Tcl_DbCkalloc(unsigned int size, const char *file, + int line); +/* 7 */ +EXTERN void Tcl_DbCkfree(char *ptr, const char *file, int line); +/* 8 */ +EXTERN char * Tcl_DbCkrealloc(char *ptr, unsigned int size, + const char *file, int line); +#if !defined(__WIN32__) && !defined(MAC_OSX_TCL) /* UNIX */ +/* 9 */ +EXTERN void Tcl_CreateFileHandler(int fd, int mask, + Tcl_FileProc *proc, ClientData clientData); +#endif /* UNIX */ +#ifdef MAC_OSX_TCL /* MACOSX */ +/* 9 */ +EXTERN void Tcl_CreateFileHandler(int fd, int mask, + Tcl_FileProc *proc, ClientData clientData); +#endif /* MACOSX */ +#if !defined(__WIN32__) && !defined(MAC_OSX_TCL) /* UNIX */ +/* 10 */ +EXTERN void Tcl_DeleteFileHandler(int fd); +#endif /* UNIX */ +#ifdef MAC_OSX_TCL /* MACOSX */ +/* 10 */ +EXTERN void Tcl_DeleteFileHandler(int fd); +#endif /* MACOSX */ +/* 11 */ +EXTERN void Tcl_SetTimer(const Tcl_Time *timePtr); +/* 12 */ +EXTERN void Tcl_Sleep(int ms); +/* 13 */ +EXTERN int Tcl_WaitForEvent(const Tcl_Time *timePtr); +/* 14 */ +EXTERN int Tcl_AppendAllObjTypes(Tcl_Interp *interp, + Tcl_Obj *objPtr); +/* 15 */ +EXTERN void Tcl_AppendStringsToObj(Tcl_Obj *objPtr, ...); +/* 16 */ +EXTERN void Tcl_AppendToObj(Tcl_Obj *objPtr, const char *bytes, + int length); +/* 17 */ +EXTERN Tcl_Obj * Tcl_ConcatObj(int objc, Tcl_Obj *const objv[]); +/* 18 */ +EXTERN int Tcl_ConvertToType(Tcl_Interp *interp, + Tcl_Obj *objPtr, const Tcl_ObjType *typePtr); +/* 19 */ +EXTERN void Tcl_DbDecrRefCount(Tcl_Obj *objPtr, const char *file, + int line); +/* 20 */ +EXTERN void Tcl_DbIncrRefCount(Tcl_Obj *objPtr, const char *file, + int line); +/* 21 */ +EXTERN int Tcl_DbIsShared(Tcl_Obj *objPtr, const char *file, + int line); +/* 22 */ +EXTERN Tcl_Obj * Tcl_DbNewBooleanObj(int boolValue, const char *file, + int line); +/* 23 */ +EXTERN Tcl_Obj * Tcl_DbNewByteArrayObj(const unsigned char *bytes, + int length, const char *file, int line); +/* 24 */ +EXTERN Tcl_Obj * Tcl_DbNewDoubleObj(double doubleValue, + const char *file, int line); +/* 25 */ +EXTERN Tcl_Obj * Tcl_DbNewListObj(int objc, Tcl_Obj *const *objv, + const char *file, int line); +/* 26 */ +EXTERN Tcl_Obj * Tcl_DbNewLongObj(long longValue, const char *file, + int line); +/* 27 */ +EXTERN Tcl_Obj * Tcl_DbNewObj(const char *file, int line); +/* 28 */ +EXTERN Tcl_Obj * Tcl_DbNewStringObj(const char *bytes, int length, + const char *file, int line); +/* 29 */ +EXTERN Tcl_Obj * Tcl_DuplicateObj(Tcl_Obj *objPtr); +/* 30 */ +EXTERN void TclFreeObj(Tcl_Obj *objPtr); +/* 31 */ +EXTERN int Tcl_GetBoolean(Tcl_Interp *interp, const char *src, + int *boolPtr); +/* 32 */ +EXTERN int Tcl_GetBooleanFromObj(Tcl_Interp *interp, + Tcl_Obj *objPtr, int *boolPtr); +/* 33 */ +EXTERN unsigned char * Tcl_GetByteArrayFromObj(Tcl_Obj *objPtr, + int *lengthPtr); +/* 34 */ +EXTERN int Tcl_GetDouble(Tcl_Interp *interp, const char *src, + double *doublePtr); +/* 35 */ +EXTERN int Tcl_GetDoubleFromObj(Tcl_Interp *interp, + Tcl_Obj *objPtr, double *doublePtr); +/* 36 */ +EXTERN int Tcl_GetIndexFromObj(Tcl_Interp *interp, + Tcl_Obj *objPtr, + CONST84 char *const *tablePtr, + const char *msg, int flags, int *indexPtr); +/* 37 */ +EXTERN int Tcl_GetInt(Tcl_Interp *interp, const char *src, + int *intPtr); +/* 38 */ +EXTERN int Tcl_GetIntFromObj(Tcl_Interp *interp, + Tcl_Obj *objPtr, int *intPtr); +/* 39 */ +EXTERN int 
Tcl_GetLongFromObj(Tcl_Interp *interp, + Tcl_Obj *objPtr, long *longPtr); +/* 40 */ +EXTERN CONST86 Tcl_ObjType * Tcl_GetObjType(const char *typeName); +/* 41 */ +EXTERN char * Tcl_GetStringFromObj(Tcl_Obj *objPtr, int *lengthPtr); +/* 42 */ +EXTERN void Tcl_InvalidateStringRep(Tcl_Obj *objPtr); +/* 43 */ +EXTERN int Tcl_ListObjAppendList(Tcl_Interp *interp, + Tcl_Obj *listPtr, Tcl_Obj *elemListPtr); +/* 44 */ +EXTERN int Tcl_ListObjAppendElement(Tcl_Interp *interp, + Tcl_Obj *listPtr, Tcl_Obj *objPtr); +/* 45 */ +EXTERN int Tcl_ListObjGetElements(Tcl_Interp *interp, + Tcl_Obj *listPtr, int *objcPtr, + Tcl_Obj ***objvPtr); +/* 46 */ +EXTERN int Tcl_ListObjIndex(Tcl_Interp *interp, + Tcl_Obj *listPtr, int index, + Tcl_Obj **objPtrPtr); +/* 47 */ +EXTERN int Tcl_ListObjLength(Tcl_Interp *interp, + Tcl_Obj *listPtr, int *lengthPtr); +/* 48 */ +EXTERN int Tcl_ListObjReplace(Tcl_Interp *interp, + Tcl_Obj *listPtr, int first, int count, + int objc, Tcl_Obj *const objv[]); +/* 49 */ +EXTERN Tcl_Obj * Tcl_NewBooleanObj(int boolValue); +/* 50 */ +EXTERN Tcl_Obj * Tcl_NewByteArrayObj(const unsigned char *bytes, + int length); +/* 51 */ +EXTERN Tcl_Obj * Tcl_NewDoubleObj(double doubleValue); +/* 52 */ +EXTERN Tcl_Obj * Tcl_NewIntObj(int intValue); +/* 53 */ +EXTERN Tcl_Obj * Tcl_NewListObj(int objc, Tcl_Obj *const objv[]); +/* 54 */ +EXTERN Tcl_Obj * Tcl_NewLongObj(long longValue); +/* 55 */ +EXTERN Tcl_Obj * Tcl_NewObj(void); +/* 56 */ +EXTERN Tcl_Obj * Tcl_NewStringObj(const char *bytes, int length); +/* 57 */ +EXTERN void Tcl_SetBooleanObj(Tcl_Obj *objPtr, int boolValue); +/* 58 */ +EXTERN unsigned char * Tcl_SetByteArrayLength(Tcl_Obj *objPtr, int length); +/* 59 */ +EXTERN void Tcl_SetByteArrayObj(Tcl_Obj *objPtr, + const unsigned char *bytes, int length); +/* 60 */ +EXTERN void Tcl_SetDoubleObj(Tcl_Obj *objPtr, double doubleValue); +/* 61 */ +EXTERN void Tcl_SetIntObj(Tcl_Obj *objPtr, int intValue); +/* 62 */ +EXTERN void Tcl_SetListObj(Tcl_Obj *objPtr, int objc, + Tcl_Obj *const objv[]); +/* 63 */ +EXTERN void Tcl_SetLongObj(Tcl_Obj *objPtr, long longValue); +/* 64 */ +EXTERN void Tcl_SetObjLength(Tcl_Obj *objPtr, int length); +/* 65 */ +EXTERN void Tcl_SetStringObj(Tcl_Obj *objPtr, const char *bytes, + int length); +/* 66 */ +EXTERN void Tcl_AddErrorInfo(Tcl_Interp *interp, + const char *message); +/* 67 */ +EXTERN void Tcl_AddObjErrorInfo(Tcl_Interp *interp, + const char *message, int length); +/* 68 */ +EXTERN void Tcl_AllowExceptions(Tcl_Interp *interp); +/* 69 */ +EXTERN void Tcl_AppendElement(Tcl_Interp *interp, + const char *element); +/* 70 */ +EXTERN void Tcl_AppendResult(Tcl_Interp *interp, ...); +/* 71 */ +EXTERN Tcl_AsyncHandler Tcl_AsyncCreate(Tcl_AsyncProc *proc, + ClientData clientData); +/* 72 */ +EXTERN void Tcl_AsyncDelete(Tcl_AsyncHandler async); +/* 73 */ +EXTERN int Tcl_AsyncInvoke(Tcl_Interp *interp, int code); +/* 74 */ +EXTERN void Tcl_AsyncMark(Tcl_AsyncHandler async); +/* 75 */ +EXTERN int Tcl_AsyncReady(void); +/* 76 */ +EXTERN void Tcl_BackgroundError(Tcl_Interp *interp); +/* 77 */ +EXTERN char Tcl_Backslash(const char *src, int *readPtr); +/* 78 */ +EXTERN int Tcl_BadChannelOption(Tcl_Interp *interp, + const char *optionName, + const char *optionList); +/* 79 */ +EXTERN void Tcl_CallWhenDeleted(Tcl_Interp *interp, + Tcl_InterpDeleteProc *proc, + ClientData clientData); +/* 80 */ +EXTERN void Tcl_CancelIdleCall(Tcl_IdleProc *idleProc, + ClientData clientData); +/* 81 */ +EXTERN int Tcl_Close(Tcl_Interp *interp, Tcl_Channel chan); +/* 82 */ +EXTERN int 
Tcl_CommandComplete(const char *cmd); +/* 83 */ +EXTERN char * Tcl_Concat(int argc, CONST84 char *const *argv); +/* 84 */ +EXTERN int Tcl_ConvertElement(const char *src, char *dst, + int flags); +/* 85 */ +EXTERN int Tcl_ConvertCountedElement(const char *src, + int length, char *dst, int flags); +/* 86 */ +EXTERN int Tcl_CreateAlias(Tcl_Interp *slave, + const char *slaveCmd, Tcl_Interp *target, + const char *targetCmd, int argc, + CONST84 char *const *argv); +/* 87 */ +EXTERN int Tcl_CreateAliasObj(Tcl_Interp *slave, + const char *slaveCmd, Tcl_Interp *target, + const char *targetCmd, int objc, + Tcl_Obj *const objv[]); +/* 88 */ +EXTERN Tcl_Channel Tcl_CreateChannel(const Tcl_ChannelType *typePtr, + const char *chanName, + ClientData instanceData, int mask); +/* 89 */ +EXTERN void Tcl_CreateChannelHandler(Tcl_Channel chan, int mask, + Tcl_ChannelProc *proc, ClientData clientData); +/* 90 */ +EXTERN void Tcl_CreateCloseHandler(Tcl_Channel chan, + Tcl_CloseProc *proc, ClientData clientData); +/* 91 */ +EXTERN Tcl_Command Tcl_CreateCommand(Tcl_Interp *interp, + const char *cmdName, Tcl_CmdProc *proc, + ClientData clientData, + Tcl_CmdDeleteProc *deleteProc); +/* 92 */ +EXTERN void Tcl_CreateEventSource(Tcl_EventSetupProc *setupProc, + Tcl_EventCheckProc *checkProc, + ClientData clientData); +/* 93 */ +EXTERN void Tcl_CreateExitHandler(Tcl_ExitProc *proc, + ClientData clientData); +/* 94 */ +EXTERN Tcl_Interp * Tcl_CreateInterp(void); +/* 95 */ +EXTERN void Tcl_CreateMathFunc(Tcl_Interp *interp, + const char *name, int numArgs, + Tcl_ValueType *argTypes, Tcl_MathProc *proc, + ClientData clientData); +/* 96 */ +EXTERN Tcl_Command Tcl_CreateObjCommand(Tcl_Interp *interp, + const char *cmdName, Tcl_ObjCmdProc *proc, + ClientData clientData, + Tcl_CmdDeleteProc *deleteProc); +/* 97 */ +EXTERN Tcl_Interp * Tcl_CreateSlave(Tcl_Interp *interp, + const char *slaveName, int isSafe); +/* 98 */ +EXTERN Tcl_TimerToken Tcl_CreateTimerHandler(int milliseconds, + Tcl_TimerProc *proc, ClientData clientData); +/* 99 */ +EXTERN Tcl_Trace Tcl_CreateTrace(Tcl_Interp *interp, int level, + Tcl_CmdTraceProc *proc, + ClientData clientData); +/* 100 */ +EXTERN void Tcl_DeleteAssocData(Tcl_Interp *interp, + const char *name); +/* 101 */ +EXTERN void Tcl_DeleteChannelHandler(Tcl_Channel chan, + Tcl_ChannelProc *proc, ClientData clientData); +/* 102 */ +EXTERN void Tcl_DeleteCloseHandler(Tcl_Channel chan, + Tcl_CloseProc *proc, ClientData clientData); +/* 103 */ +EXTERN int Tcl_DeleteCommand(Tcl_Interp *interp, + const char *cmdName); +/* 104 */ +EXTERN int Tcl_DeleteCommandFromToken(Tcl_Interp *interp, + Tcl_Command command); +/* 105 */ +EXTERN void Tcl_DeleteEvents(Tcl_EventDeleteProc *proc, + ClientData clientData); +/* 106 */ +EXTERN void Tcl_DeleteEventSource(Tcl_EventSetupProc *setupProc, + Tcl_EventCheckProc *checkProc, + ClientData clientData); +/* 107 */ +EXTERN void Tcl_DeleteExitHandler(Tcl_ExitProc *proc, + ClientData clientData); +/* 108 */ +EXTERN void Tcl_DeleteHashEntry(Tcl_HashEntry *entryPtr); +/* 109 */ +EXTERN void Tcl_DeleteHashTable(Tcl_HashTable *tablePtr); +/* 110 */ +EXTERN void Tcl_DeleteInterp(Tcl_Interp *interp); +/* 111 */ +EXTERN void Tcl_DetachPids(int numPids, Tcl_Pid *pidPtr); +/* 112 */ +EXTERN void Tcl_DeleteTimerHandler(Tcl_TimerToken token); +/* 113 */ +EXTERN void Tcl_DeleteTrace(Tcl_Interp *interp, Tcl_Trace trace); +/* 114 */ +EXTERN void Tcl_DontCallWhenDeleted(Tcl_Interp *interp, + Tcl_InterpDeleteProc *proc, + ClientData clientData); +/* 115 */ +EXTERN int Tcl_DoOneEvent(int 
flags); +/* 116 */ +EXTERN void Tcl_DoWhenIdle(Tcl_IdleProc *proc, + ClientData clientData); +/* 117 */ +EXTERN char * Tcl_DStringAppend(Tcl_DString *dsPtr, + const char *bytes, int length); +/* 118 */ +EXTERN char * Tcl_DStringAppendElement(Tcl_DString *dsPtr, + const char *element); +/* 119 */ +EXTERN void Tcl_DStringEndSublist(Tcl_DString *dsPtr); +/* 120 */ +EXTERN void Tcl_DStringFree(Tcl_DString *dsPtr); +/* 121 */ +EXTERN void Tcl_DStringGetResult(Tcl_Interp *interp, + Tcl_DString *dsPtr); +/* 122 */ +EXTERN void Tcl_DStringInit(Tcl_DString *dsPtr); +/* 123 */ +EXTERN void Tcl_DStringResult(Tcl_Interp *interp, + Tcl_DString *dsPtr); +/* 124 */ +EXTERN void Tcl_DStringSetLength(Tcl_DString *dsPtr, int length); +/* 125 */ +EXTERN void Tcl_DStringStartSublist(Tcl_DString *dsPtr); +/* 126 */ +EXTERN int Tcl_Eof(Tcl_Channel chan); +/* 127 */ +EXTERN CONST84_RETURN char * Tcl_ErrnoId(void); +/* 128 */ +EXTERN CONST84_RETURN char * Tcl_ErrnoMsg(int err); +/* 129 */ +EXTERN int Tcl_Eval(Tcl_Interp *interp, const char *script); +/* 130 */ +EXTERN int Tcl_EvalFile(Tcl_Interp *interp, + const char *fileName); +/* 131 */ +EXTERN int Tcl_EvalObj(Tcl_Interp *interp, Tcl_Obj *objPtr); +/* 132 */ +EXTERN void Tcl_EventuallyFree(ClientData clientData, + Tcl_FreeProc *freeProc); +/* 133 */ +EXTERN void Tcl_Exit(int status); +/* 134 */ +EXTERN int Tcl_ExposeCommand(Tcl_Interp *interp, + const char *hiddenCmdToken, + const char *cmdName); +/* 135 */ +EXTERN int Tcl_ExprBoolean(Tcl_Interp *interp, const char *expr, + int *ptr); +/* 136 */ +EXTERN int Tcl_ExprBooleanObj(Tcl_Interp *interp, + Tcl_Obj *objPtr, int *ptr); +/* 137 */ +EXTERN int Tcl_ExprDouble(Tcl_Interp *interp, const char *expr, + double *ptr); +/* 138 */ +EXTERN int Tcl_ExprDoubleObj(Tcl_Interp *interp, + Tcl_Obj *objPtr, double *ptr); +/* 139 */ +EXTERN int Tcl_ExprLong(Tcl_Interp *interp, const char *expr, + long *ptr); +/* 140 */ +EXTERN int Tcl_ExprLongObj(Tcl_Interp *interp, Tcl_Obj *objPtr, + long *ptr); +/* 141 */ +EXTERN int Tcl_ExprObj(Tcl_Interp *interp, Tcl_Obj *objPtr, + Tcl_Obj **resultPtrPtr); +/* 142 */ +EXTERN int Tcl_ExprString(Tcl_Interp *interp, const char *expr); +/* 143 */ +EXTERN void Tcl_Finalize(void); +/* 144 */ +EXTERN void Tcl_FindExecutable(const char *argv0); +/* 145 */ +EXTERN Tcl_HashEntry * Tcl_FirstHashEntry(Tcl_HashTable *tablePtr, + Tcl_HashSearch *searchPtr); +/* 146 */ +EXTERN int Tcl_Flush(Tcl_Channel chan); +/* 147 */ +EXTERN void Tcl_FreeResult(Tcl_Interp *interp); +/* 148 */ +EXTERN int Tcl_GetAlias(Tcl_Interp *interp, + const char *slaveCmd, + Tcl_Interp **targetInterpPtr, + CONST84 char **targetCmdPtr, int *argcPtr, + CONST84 char ***argvPtr); +/* 149 */ +EXTERN int Tcl_GetAliasObj(Tcl_Interp *interp, + const char *slaveCmd, + Tcl_Interp **targetInterpPtr, + CONST84 char **targetCmdPtr, int *objcPtr, + Tcl_Obj ***objv); +/* 150 */ +EXTERN ClientData Tcl_GetAssocData(Tcl_Interp *interp, + const char *name, + Tcl_InterpDeleteProc **procPtr); +/* 151 */ +EXTERN Tcl_Channel Tcl_GetChannel(Tcl_Interp *interp, + const char *chanName, int *modePtr); +/* 152 */ +EXTERN int Tcl_GetChannelBufferSize(Tcl_Channel chan); +/* 153 */ +EXTERN int Tcl_GetChannelHandle(Tcl_Channel chan, int direction, + ClientData *handlePtr); +/* 154 */ +EXTERN ClientData Tcl_GetChannelInstanceData(Tcl_Channel chan); +/* 155 */ +EXTERN int Tcl_GetChannelMode(Tcl_Channel chan); +/* 156 */ +EXTERN CONST84_RETURN char * Tcl_GetChannelName(Tcl_Channel chan); +/* 157 */ +EXTERN int Tcl_GetChannelOption(Tcl_Interp *interp, + 
Tcl_Channel chan, const char *optionName, + Tcl_DString *dsPtr); +/* 158 */ +EXTERN CONST86 Tcl_ChannelType * Tcl_GetChannelType(Tcl_Channel chan); +/* 159 */ +EXTERN int Tcl_GetCommandInfo(Tcl_Interp *interp, + const char *cmdName, Tcl_CmdInfo *infoPtr); +/* 160 */ +EXTERN CONST84_RETURN char * Tcl_GetCommandName(Tcl_Interp *interp, + Tcl_Command command); +/* 161 */ +EXTERN int Tcl_GetErrno(void); +/* 162 */ +EXTERN CONST84_RETURN char * Tcl_GetHostName(void); +/* 163 */ +EXTERN int Tcl_GetInterpPath(Tcl_Interp *askInterp, + Tcl_Interp *slaveInterp); +/* 164 */ +EXTERN Tcl_Interp * Tcl_GetMaster(Tcl_Interp *interp); +/* 165 */ +EXTERN const char * Tcl_GetNameOfExecutable(void); +/* 166 */ +EXTERN Tcl_Obj * Tcl_GetObjResult(Tcl_Interp *interp); +#if !defined(__WIN32__) && !defined(MAC_OSX_TCL) /* UNIX */ +/* 167 */ +EXTERN int Tcl_GetOpenFile(Tcl_Interp *interp, + const char *chanID, int forWriting, + int checkUsage, ClientData *filePtr); +#endif /* UNIX */ +#ifdef MAC_OSX_TCL /* MACOSX */ +/* 167 */ +EXTERN int Tcl_GetOpenFile(Tcl_Interp *interp, + const char *chanID, int forWriting, + int checkUsage, ClientData *filePtr); +#endif /* MACOSX */ +/* 168 */ +EXTERN Tcl_PathType Tcl_GetPathType(const char *path); +/* 169 */ +EXTERN int Tcl_Gets(Tcl_Channel chan, Tcl_DString *dsPtr); +/* 170 */ +EXTERN int Tcl_GetsObj(Tcl_Channel chan, Tcl_Obj *objPtr); +/* 171 */ +EXTERN int Tcl_GetServiceMode(void); +/* 172 */ +EXTERN Tcl_Interp * Tcl_GetSlave(Tcl_Interp *interp, + const char *slaveName); +/* 173 */ +EXTERN Tcl_Channel Tcl_GetStdChannel(int type); +/* 174 */ +EXTERN CONST84_RETURN char * Tcl_GetStringResult(Tcl_Interp *interp); +/* 175 */ +EXTERN CONST84_RETURN char * Tcl_GetVar(Tcl_Interp *interp, + const char *varName, int flags); +/* 176 */ +EXTERN CONST84_RETURN char * Tcl_GetVar2(Tcl_Interp *interp, + const char *part1, const char *part2, + int flags); +/* 177 */ +EXTERN int Tcl_GlobalEval(Tcl_Interp *interp, + const char *command); +/* 178 */ +EXTERN int Tcl_GlobalEvalObj(Tcl_Interp *interp, + Tcl_Obj *objPtr); +/* 179 */ +EXTERN int Tcl_HideCommand(Tcl_Interp *interp, + const char *cmdName, + const char *hiddenCmdToken); +/* 180 */ +EXTERN int Tcl_Init(Tcl_Interp *interp); +/* 181 */ +EXTERN void Tcl_InitHashTable(Tcl_HashTable *tablePtr, + int keyType); +/* 182 */ +EXTERN int Tcl_InputBlocked(Tcl_Channel chan); +/* 183 */ +EXTERN int Tcl_InputBuffered(Tcl_Channel chan); +/* 184 */ +EXTERN int Tcl_InterpDeleted(Tcl_Interp *interp); +/* 185 */ +EXTERN int Tcl_IsSafe(Tcl_Interp *interp); +/* 186 */ +EXTERN char * Tcl_JoinPath(int argc, CONST84 char *const *argv, + Tcl_DString *resultPtr); +/* 187 */ +EXTERN int Tcl_LinkVar(Tcl_Interp *interp, const char *varName, + char *addr, int type); +/* Slot 188 is reserved */ +/* 189 */ +EXTERN Tcl_Channel Tcl_MakeFileChannel(ClientData handle, int mode); +/* 190 */ +EXTERN int Tcl_MakeSafe(Tcl_Interp *interp); +/* 191 */ +EXTERN Tcl_Channel Tcl_MakeTcpClientChannel(ClientData tcpSocket); +/* 192 */ +EXTERN char * Tcl_Merge(int argc, CONST84 char *const *argv); +/* 193 */ +EXTERN Tcl_HashEntry * Tcl_NextHashEntry(Tcl_HashSearch *searchPtr); +/* 194 */ +EXTERN void Tcl_NotifyChannel(Tcl_Channel channel, int mask); +/* 195 */ +EXTERN Tcl_Obj * Tcl_ObjGetVar2(Tcl_Interp *interp, Tcl_Obj *part1Ptr, + Tcl_Obj *part2Ptr, int flags); +/* 196 */ +EXTERN Tcl_Obj * Tcl_ObjSetVar2(Tcl_Interp *interp, Tcl_Obj *part1Ptr, + Tcl_Obj *part2Ptr, Tcl_Obj *newValuePtr, + int flags); +/* 197 */ +EXTERN Tcl_Channel Tcl_OpenCommandChannel(Tcl_Interp *interp, int argc, 
+ CONST84 char **argv, int flags); +/* 198 */ +EXTERN Tcl_Channel Tcl_OpenFileChannel(Tcl_Interp *interp, + const char *fileName, const char *modeString, + int permissions); +/* 199 */ +EXTERN Tcl_Channel Tcl_OpenTcpClient(Tcl_Interp *interp, int port, + const char *address, const char *myaddr, + int myport, int async); +/* 200 */ +EXTERN Tcl_Channel Tcl_OpenTcpServer(Tcl_Interp *interp, int port, + const char *host, + Tcl_TcpAcceptProc *acceptProc, + ClientData callbackData); +/* 201 */ +EXTERN void Tcl_Preserve(ClientData data); +/* 202 */ +EXTERN void Tcl_PrintDouble(Tcl_Interp *interp, double value, + char *dst); +/* 203 */ +EXTERN int Tcl_PutEnv(const char *assignment); +/* 204 */ +EXTERN CONST84_RETURN char * Tcl_PosixError(Tcl_Interp *interp); +/* 205 */ +EXTERN void Tcl_QueueEvent(Tcl_Event *evPtr, + Tcl_QueuePosition position); +/* 206 */ +EXTERN int Tcl_Read(Tcl_Channel chan, char *bufPtr, int toRead); +/* 207 */ +EXTERN void Tcl_ReapDetachedProcs(void); +/* 208 */ +EXTERN int Tcl_RecordAndEval(Tcl_Interp *interp, + const char *cmd, int flags); +/* 209 */ +EXTERN int Tcl_RecordAndEvalObj(Tcl_Interp *interp, + Tcl_Obj *cmdPtr, int flags); +/* 210 */ +EXTERN void Tcl_RegisterChannel(Tcl_Interp *interp, + Tcl_Channel chan); +/* 211 */ +EXTERN void Tcl_RegisterObjType(const Tcl_ObjType *typePtr); +/* 212 */ +EXTERN Tcl_RegExp Tcl_RegExpCompile(Tcl_Interp *interp, + const char *pattern); +/* 213 */ +EXTERN int Tcl_RegExpExec(Tcl_Interp *interp, Tcl_RegExp regexp, + const char *text, const char *start); +/* 214 */ +EXTERN int Tcl_RegExpMatch(Tcl_Interp *interp, const char *text, + const char *pattern); +/* 215 */ +EXTERN void Tcl_RegExpRange(Tcl_RegExp regexp, int index, + CONST84 char **startPtr, + CONST84 char **endPtr); +/* 216 */ +EXTERN void Tcl_Release(ClientData clientData); +/* 217 */ +EXTERN void Tcl_ResetResult(Tcl_Interp *interp); +/* 218 */ +EXTERN int Tcl_ScanElement(const char *src, int *flagPtr); +/* 219 */ +EXTERN int Tcl_ScanCountedElement(const char *src, int length, + int *flagPtr); +/* 220 */ +EXTERN int Tcl_SeekOld(Tcl_Channel chan, int offset, int mode); +/* 221 */ +EXTERN int Tcl_ServiceAll(void); +/* 222 */ +EXTERN int Tcl_ServiceEvent(int flags); +/* 223 */ +EXTERN void Tcl_SetAssocData(Tcl_Interp *interp, + const char *name, Tcl_InterpDeleteProc *proc, + ClientData clientData); +/* 224 */ +EXTERN void Tcl_SetChannelBufferSize(Tcl_Channel chan, int sz); +/* 225 */ +EXTERN int Tcl_SetChannelOption(Tcl_Interp *interp, + Tcl_Channel chan, const char *optionName, + const char *newValue); +/* 226 */ +EXTERN int Tcl_SetCommandInfo(Tcl_Interp *interp, + const char *cmdName, + const Tcl_CmdInfo *infoPtr); +/* 227 */ +EXTERN void Tcl_SetErrno(int err); +/* 228 */ +EXTERN void Tcl_SetErrorCode(Tcl_Interp *interp, ...); +/* 229 */ +EXTERN void Tcl_SetMaxBlockTime(const Tcl_Time *timePtr); +/* 230 */ +EXTERN void Tcl_SetPanicProc(Tcl_PanicProc *panicProc); +/* 231 */ +EXTERN int Tcl_SetRecursionLimit(Tcl_Interp *interp, int depth); +/* 232 */ +EXTERN void Tcl_SetResult(Tcl_Interp *interp, char *result, + Tcl_FreeProc *freeProc); +/* 233 */ +EXTERN int Tcl_SetServiceMode(int mode); +/* 234 */ +EXTERN void Tcl_SetObjErrorCode(Tcl_Interp *interp, + Tcl_Obj *errorObjPtr); +/* 235 */ +EXTERN void Tcl_SetObjResult(Tcl_Interp *interp, + Tcl_Obj *resultObjPtr); +/* 236 */ +EXTERN void Tcl_SetStdChannel(Tcl_Channel channel, int type); +/* 237 */ +EXTERN CONST84_RETURN char * Tcl_SetVar(Tcl_Interp *interp, + const char *varName, const char *newValue, + int flags); +/* 238 */ +EXTERN 
CONST84_RETURN char * Tcl_SetVar2(Tcl_Interp *interp, + const char *part1, const char *part2, + const char *newValue, int flags); +/* 239 */ +EXTERN CONST84_RETURN char * Tcl_SignalId(int sig); +/* 240 */ +EXTERN CONST84_RETURN char * Tcl_SignalMsg(int sig); +/* 241 */ +EXTERN void Tcl_SourceRCFile(Tcl_Interp *interp); +/* 242 */ +EXTERN int Tcl_SplitList(Tcl_Interp *interp, + const char *listStr, int *argcPtr, + CONST84 char ***argvPtr); +/* 243 */ +EXTERN void Tcl_SplitPath(const char *path, int *argcPtr, + CONST84 char ***argvPtr); +/* 244 */ +EXTERN void Tcl_StaticPackage(Tcl_Interp *interp, + const char *pkgName, + Tcl_PackageInitProc *initProc, + Tcl_PackageInitProc *safeInitProc); +/* 245 */ +EXTERN int Tcl_StringMatch(const char *str, const char *pattern); +/* 246 */ +EXTERN int Tcl_TellOld(Tcl_Channel chan); +/* 247 */ +EXTERN int Tcl_TraceVar(Tcl_Interp *interp, const char *varName, + int flags, Tcl_VarTraceProc *proc, + ClientData clientData); +/* 248 */ +EXTERN int Tcl_TraceVar2(Tcl_Interp *interp, const char *part1, + const char *part2, int flags, + Tcl_VarTraceProc *proc, + ClientData clientData); +/* 249 */ +EXTERN char * Tcl_TranslateFileName(Tcl_Interp *interp, + const char *name, Tcl_DString *bufferPtr); +/* 250 */ +EXTERN int Tcl_Ungets(Tcl_Channel chan, const char *str, + int len, int atHead); +/* 251 */ +EXTERN void Tcl_UnlinkVar(Tcl_Interp *interp, + const char *varName); +/* 252 */ +EXTERN int Tcl_UnregisterChannel(Tcl_Interp *interp, + Tcl_Channel chan); +/* 253 */ +EXTERN int Tcl_UnsetVar(Tcl_Interp *interp, const char *varName, + int flags); +/* 254 */ +EXTERN int Tcl_UnsetVar2(Tcl_Interp *interp, const char *part1, + const char *part2, int flags); +/* 255 */ +EXTERN void Tcl_UntraceVar(Tcl_Interp *interp, + const char *varName, int flags, + Tcl_VarTraceProc *proc, + ClientData clientData); +/* 256 */ +EXTERN void Tcl_UntraceVar2(Tcl_Interp *interp, + const char *part1, const char *part2, + int flags, Tcl_VarTraceProc *proc, + ClientData clientData); +/* 257 */ +EXTERN void Tcl_UpdateLinkedVar(Tcl_Interp *interp, + const char *varName); +/* 258 */ +EXTERN int Tcl_UpVar(Tcl_Interp *interp, const char *frameName, + const char *varName, const char *localName, + int flags); +/* 259 */ +EXTERN int Tcl_UpVar2(Tcl_Interp *interp, const char *frameName, + const char *part1, const char *part2, + const char *localName, int flags); +/* 260 */ +EXTERN int Tcl_VarEval(Tcl_Interp *interp, ...); +/* 261 */ +EXTERN ClientData Tcl_VarTraceInfo(Tcl_Interp *interp, + const char *varName, int flags, + Tcl_VarTraceProc *procPtr, + ClientData prevClientData); +/* 262 */ +EXTERN ClientData Tcl_VarTraceInfo2(Tcl_Interp *interp, + const char *part1, const char *part2, + int flags, Tcl_VarTraceProc *procPtr, + ClientData prevClientData); +/* 263 */ +EXTERN int Tcl_Write(Tcl_Channel chan, const char *s, int slen); +/* 264 */ +EXTERN void Tcl_WrongNumArgs(Tcl_Interp *interp, int objc, + Tcl_Obj *const objv[], const char *message); +/* 265 */ +EXTERN int Tcl_DumpActiveMemory(const char *fileName); +/* 266 */ +EXTERN void Tcl_ValidateAllMemory(const char *file, int line); +/* 267 */ +EXTERN void Tcl_AppendResultVA(Tcl_Interp *interp, + va_list argList); +/* 268 */ +EXTERN void Tcl_AppendStringsToObjVA(Tcl_Obj *objPtr, + va_list argList); +/* 269 */ +EXTERN char * Tcl_HashStats(Tcl_HashTable *tablePtr); +/* 270 */ +EXTERN CONST84_RETURN char * Tcl_ParseVar(Tcl_Interp *interp, + const char *start, CONST84 char **termPtr); +/* 271 */ +EXTERN CONST84_RETURN char * Tcl_PkgPresent(Tcl_Interp 
*interp, + const char *name, const char *version, + int exact); +/* 272 */ +EXTERN CONST84_RETURN char * Tcl_PkgPresentEx(Tcl_Interp *interp, + const char *name, const char *version, + int exact, void *clientDataPtr); +/* 273 */ +EXTERN int Tcl_PkgProvide(Tcl_Interp *interp, const char *name, + const char *version); +/* 274 */ +EXTERN CONST84_RETURN char * Tcl_PkgRequire(Tcl_Interp *interp, + const char *name, const char *version, + int exact); +/* 275 */ +EXTERN void Tcl_SetErrorCodeVA(Tcl_Interp *interp, + va_list argList); +/* 276 */ +EXTERN int Tcl_VarEvalVA(Tcl_Interp *interp, va_list argList); +/* 277 */ +EXTERN Tcl_Pid Tcl_WaitPid(Tcl_Pid pid, int *statPtr, int options); +/* 278 */ +EXTERN void Tcl_PanicVA(const char *format, va_list argList); +/* 279 */ +EXTERN void Tcl_GetVersion(int *major, int *minor, + int *patchLevel, int *type); +/* 280 */ +EXTERN void Tcl_InitMemory(Tcl_Interp *interp); +/* 281 */ +EXTERN Tcl_Channel Tcl_StackChannel(Tcl_Interp *interp, + const Tcl_ChannelType *typePtr, + ClientData instanceData, int mask, + Tcl_Channel prevChan); +/* 282 */ +EXTERN int Tcl_UnstackChannel(Tcl_Interp *interp, + Tcl_Channel chan); +/* 283 */ +EXTERN Tcl_Channel Tcl_GetStackedChannel(Tcl_Channel chan); +/* 284 */ +EXTERN void Tcl_SetMainLoop(Tcl_MainLoopProc *proc); +/* Slot 285 is reserved */ +/* 286 */ +EXTERN void Tcl_AppendObjToObj(Tcl_Obj *objPtr, + Tcl_Obj *appendObjPtr); +/* 287 */ +EXTERN Tcl_Encoding Tcl_CreateEncoding(const Tcl_EncodingType *typePtr); +/* 288 */ +EXTERN void Tcl_CreateThreadExitHandler(Tcl_ExitProc *proc, + ClientData clientData); +/* 289 */ +EXTERN void Tcl_DeleteThreadExitHandler(Tcl_ExitProc *proc, + ClientData clientData); +/* 290 */ +EXTERN void Tcl_DiscardResult(Tcl_SavedResult *statePtr); +/* 291 */ +EXTERN int Tcl_EvalEx(Tcl_Interp *interp, const char *script, + int numBytes, int flags); +/* 292 */ +EXTERN int Tcl_EvalObjv(Tcl_Interp *interp, int objc, + Tcl_Obj *const objv[], int flags); +/* 293 */ +EXTERN int Tcl_EvalObjEx(Tcl_Interp *interp, Tcl_Obj *objPtr, + int flags); +/* 294 */ +EXTERN void Tcl_ExitThread(int status); +/* 295 */ +EXTERN int Tcl_ExternalToUtf(Tcl_Interp *interp, + Tcl_Encoding encoding, const char *src, + int srcLen, int flags, + Tcl_EncodingState *statePtr, char *dst, + int dstLen, int *srcReadPtr, + int *dstWrotePtr, int *dstCharsPtr); +/* 296 */ +EXTERN char * Tcl_ExternalToUtfDString(Tcl_Encoding encoding, + const char *src, int srcLen, + Tcl_DString *dsPtr); +/* 297 */ +EXTERN void Tcl_FinalizeThread(void); +/* 298 */ +EXTERN void Tcl_FinalizeNotifier(ClientData clientData); +/* 299 */ +EXTERN void Tcl_FreeEncoding(Tcl_Encoding encoding); +/* 300 */ +EXTERN Tcl_ThreadId Tcl_GetCurrentThread(void); +/* 301 */ +EXTERN Tcl_Encoding Tcl_GetEncoding(Tcl_Interp *interp, const char *name); +/* 302 */ +EXTERN CONST84_RETURN char * Tcl_GetEncodingName(Tcl_Encoding encoding); +/* 303 */ +EXTERN void Tcl_GetEncodingNames(Tcl_Interp *interp); +/* 304 */ +EXTERN int Tcl_GetIndexFromObjStruct(Tcl_Interp *interp, + Tcl_Obj *objPtr, const void *tablePtr, + int offset, const char *msg, int flags, + int *indexPtr); +/* 305 */ +EXTERN void * Tcl_GetThreadData(Tcl_ThreadDataKey *keyPtr, + int size); +/* 306 */ +EXTERN Tcl_Obj * Tcl_GetVar2Ex(Tcl_Interp *interp, const char *part1, + const char *part2, int flags); +/* 307 */ +EXTERN ClientData Tcl_InitNotifier(void); +/* 308 */ +EXTERN void Tcl_MutexLock(Tcl_Mutex *mutexPtr); +/* 309 */ +EXTERN void Tcl_MutexUnlock(Tcl_Mutex *mutexPtr); +/* 310 */ +EXTERN void 
Tcl_ConditionNotify(Tcl_Condition *condPtr); +/* 311 */ +EXTERN void Tcl_ConditionWait(Tcl_Condition *condPtr, + Tcl_Mutex *mutexPtr, const Tcl_Time *timePtr); +/* 312 */ +EXTERN int Tcl_NumUtfChars(const char *src, int length); +/* 313 */ +EXTERN int Tcl_ReadChars(Tcl_Channel channel, Tcl_Obj *objPtr, + int charsToRead, int appendFlag); +/* 314 */ +EXTERN void Tcl_RestoreResult(Tcl_Interp *interp, + Tcl_SavedResult *statePtr); +/* 315 */ +EXTERN void Tcl_SaveResult(Tcl_Interp *interp, + Tcl_SavedResult *statePtr); +/* 316 */ +EXTERN int Tcl_SetSystemEncoding(Tcl_Interp *interp, + const char *name); +/* 317 */ +EXTERN Tcl_Obj * Tcl_SetVar2Ex(Tcl_Interp *interp, const char *part1, + const char *part2, Tcl_Obj *newValuePtr, + int flags); +/* 318 */ +EXTERN void Tcl_ThreadAlert(Tcl_ThreadId threadId); +/* 319 */ +EXTERN void Tcl_ThreadQueueEvent(Tcl_ThreadId threadId, + Tcl_Event *evPtr, Tcl_QueuePosition position); +/* 320 */ +EXTERN Tcl_UniChar Tcl_UniCharAtIndex(const char *src, int index); +/* 321 */ +EXTERN Tcl_UniChar Tcl_UniCharToLower(int ch); +/* 322 */ +EXTERN Tcl_UniChar Tcl_UniCharToTitle(int ch); +/* 323 */ +EXTERN Tcl_UniChar Tcl_UniCharToUpper(int ch); +/* 324 */ +EXTERN int Tcl_UniCharToUtf(int ch, char *buf); +/* 325 */ +EXTERN CONST84_RETURN char * Tcl_UtfAtIndex(const char *src, int index); +/* 326 */ +EXTERN int Tcl_UtfCharComplete(const char *src, int length); +/* 327 */ +EXTERN int Tcl_UtfBackslash(const char *src, int *readPtr, + char *dst); +/* 328 */ +EXTERN CONST84_RETURN char * Tcl_UtfFindFirst(const char *src, int ch); +/* 329 */ +EXTERN CONST84_RETURN char * Tcl_UtfFindLast(const char *src, int ch); +/* 330 */ +EXTERN CONST84_RETURN char * Tcl_UtfNext(const char *src); +/* 331 */ +EXTERN CONST84_RETURN char * Tcl_UtfPrev(const char *src, const char *start); +/* 332 */ +EXTERN int Tcl_UtfToExternal(Tcl_Interp *interp, + Tcl_Encoding encoding, const char *src, + int srcLen, int flags, + Tcl_EncodingState *statePtr, char *dst, + int dstLen, int *srcReadPtr, + int *dstWrotePtr, int *dstCharsPtr); +/* 333 */ +EXTERN char * Tcl_UtfToExternalDString(Tcl_Encoding encoding, + const char *src, int srcLen, + Tcl_DString *dsPtr); +/* 334 */ +EXTERN int Tcl_UtfToLower(char *src); +/* 335 */ +EXTERN int Tcl_UtfToTitle(char *src); +/* 336 */ +EXTERN int Tcl_UtfToUniChar(const char *src, Tcl_UniChar *chPtr); +/* 337 */ +EXTERN int Tcl_UtfToUpper(char *src); +/* 338 */ +EXTERN int Tcl_WriteChars(Tcl_Channel chan, const char *src, + int srcLen); +/* 339 */ +EXTERN int Tcl_WriteObj(Tcl_Channel chan, Tcl_Obj *objPtr); +/* 340 */ +EXTERN char * Tcl_GetString(Tcl_Obj *objPtr); +/* 341 */ +EXTERN CONST84_RETURN char * Tcl_GetDefaultEncodingDir(void); +/* 342 */ +EXTERN void Tcl_SetDefaultEncodingDir(const char *path); +/* 343 */ +EXTERN void Tcl_AlertNotifier(ClientData clientData); +/* 344 */ +EXTERN void Tcl_ServiceModeHook(int mode); +/* 345 */ +EXTERN int Tcl_UniCharIsAlnum(int ch); +/* 346 */ +EXTERN int Tcl_UniCharIsAlpha(int ch); +/* 347 */ +EXTERN int Tcl_UniCharIsDigit(int ch); +/* 348 */ +EXTERN int Tcl_UniCharIsLower(int ch); +/* 349 */ +EXTERN int Tcl_UniCharIsSpace(int ch); +/* 350 */ +EXTERN int Tcl_UniCharIsUpper(int ch); +/* 351 */ +EXTERN int Tcl_UniCharIsWordChar(int ch); +/* 352 */ +EXTERN int Tcl_UniCharLen(const Tcl_UniChar *uniStr); +/* 353 */ +EXTERN int Tcl_UniCharNcmp(const Tcl_UniChar *ucs, + const Tcl_UniChar *uct, + unsigned long numChars); +/* 354 */ +EXTERN char * Tcl_UniCharToUtfDString(const Tcl_UniChar *uniStr, + int uniLength, Tcl_DString *dsPtr); +/* 
355 */ +EXTERN Tcl_UniChar * Tcl_UtfToUniCharDString(const char *src, int length, + Tcl_DString *dsPtr); +/* 356 */ +EXTERN Tcl_RegExp Tcl_GetRegExpFromObj(Tcl_Interp *interp, + Tcl_Obj *patObj, int flags); +/* 357 */ +EXTERN Tcl_Obj * Tcl_EvalTokens(Tcl_Interp *interp, + Tcl_Token *tokenPtr, int count); +/* 358 */ +EXTERN void Tcl_FreeParse(Tcl_Parse *parsePtr); +/* 359 */ +EXTERN void Tcl_LogCommandInfo(Tcl_Interp *interp, + const char *script, const char *command, + int length); +/* 360 */ +EXTERN int Tcl_ParseBraces(Tcl_Interp *interp, + const char *start, int numBytes, + Tcl_Parse *parsePtr, int append, + CONST84 char **termPtr); +/* 361 */ +EXTERN int Tcl_ParseCommand(Tcl_Interp *interp, + const char *start, int numBytes, int nested, + Tcl_Parse *parsePtr); +/* 362 */ +EXTERN int Tcl_ParseExpr(Tcl_Interp *interp, const char *start, + int numBytes, Tcl_Parse *parsePtr); +/* 363 */ +EXTERN int Tcl_ParseQuotedString(Tcl_Interp *interp, + const char *start, int numBytes, + Tcl_Parse *parsePtr, int append, + CONST84 char **termPtr); +/* 364 */ +EXTERN int Tcl_ParseVarName(Tcl_Interp *interp, + const char *start, int numBytes, + Tcl_Parse *parsePtr, int append); +/* 365 */ +EXTERN char * Tcl_GetCwd(Tcl_Interp *interp, Tcl_DString *cwdPtr); +/* 366 */ +EXTERN int Tcl_Chdir(const char *dirName); +/* 367 */ +EXTERN int Tcl_Access(const char *path, int mode); +/* 368 */ +EXTERN int Tcl_Stat(const char *path, struct stat *bufPtr); +/* 369 */ +EXTERN int Tcl_UtfNcmp(const char *s1, const char *s2, + unsigned long n); +/* 370 */ +EXTERN int Tcl_UtfNcasecmp(const char *s1, const char *s2, + unsigned long n); +/* 371 */ +EXTERN int Tcl_StringCaseMatch(const char *str, + const char *pattern, int nocase); +/* 372 */ +EXTERN int Tcl_UniCharIsControl(int ch); +/* 373 */ +EXTERN int Tcl_UniCharIsGraph(int ch); +/* 374 */ +EXTERN int Tcl_UniCharIsPrint(int ch); +/* 375 */ +EXTERN int Tcl_UniCharIsPunct(int ch); +/* 376 */ +EXTERN int Tcl_RegExpExecObj(Tcl_Interp *interp, + Tcl_RegExp regexp, Tcl_Obj *textObj, + int offset, int nmatches, int flags); +/* 377 */ +EXTERN void Tcl_RegExpGetInfo(Tcl_RegExp regexp, + Tcl_RegExpInfo *infoPtr); +/* 378 */ +EXTERN Tcl_Obj * Tcl_NewUnicodeObj(const Tcl_UniChar *unicode, + int numChars); +/* 379 */ +EXTERN void Tcl_SetUnicodeObj(Tcl_Obj *objPtr, + const Tcl_UniChar *unicode, int numChars); +/* 380 */ +EXTERN int Tcl_GetCharLength(Tcl_Obj *objPtr); +/* 381 */ +EXTERN Tcl_UniChar Tcl_GetUniChar(Tcl_Obj *objPtr, int index); +/* 382 */ +EXTERN Tcl_UniChar * Tcl_GetUnicode(Tcl_Obj *objPtr); +/* 383 */ +EXTERN Tcl_Obj * Tcl_GetRange(Tcl_Obj *objPtr, int first, int last); +/* 384 */ +EXTERN void Tcl_AppendUnicodeToObj(Tcl_Obj *objPtr, + const Tcl_UniChar *unicode, int length); +/* 385 */ +EXTERN int Tcl_RegExpMatchObj(Tcl_Interp *interp, + Tcl_Obj *textObj, Tcl_Obj *patternObj); +/* 386 */ +EXTERN void Tcl_SetNotifier(Tcl_NotifierProcs *notifierProcPtr); +/* 387 */ +EXTERN Tcl_Mutex * Tcl_GetAllocMutex(void); +/* 388 */ +EXTERN int Tcl_GetChannelNames(Tcl_Interp *interp); +/* 389 */ +EXTERN int Tcl_GetChannelNamesEx(Tcl_Interp *interp, + const char *pattern); +/* 390 */ +EXTERN int Tcl_ProcObjCmd(ClientData clientData, + Tcl_Interp *interp, int objc, + Tcl_Obj *const objv[]); +/* 391 */ +EXTERN void Tcl_ConditionFinalize(Tcl_Condition *condPtr); +/* 392 */ +EXTERN void Tcl_MutexFinalize(Tcl_Mutex *mutex); +/* 393 */ +EXTERN int Tcl_CreateThread(Tcl_ThreadId *idPtr, + Tcl_ThreadCreateProc *proc, + ClientData clientData, int stackSize, + int flags); +/* 394 */ +EXTERN int 
Tcl_ReadRaw(Tcl_Channel chan, char *dst, + int bytesToRead); +/* 395 */ +EXTERN int Tcl_WriteRaw(Tcl_Channel chan, const char *src, + int srcLen); +/* 396 */ +EXTERN Tcl_Channel Tcl_GetTopChannel(Tcl_Channel chan); +/* 397 */ +EXTERN int Tcl_ChannelBuffered(Tcl_Channel chan); +/* 398 */ +EXTERN CONST84_RETURN char * Tcl_ChannelName( + const Tcl_ChannelType *chanTypePtr); +/* 399 */ +EXTERN Tcl_ChannelTypeVersion Tcl_ChannelVersion( + const Tcl_ChannelType *chanTypePtr); +/* 400 */ +EXTERN Tcl_DriverBlockModeProc * Tcl_ChannelBlockModeProc( + const Tcl_ChannelType *chanTypePtr); +/* 401 */ +EXTERN Tcl_DriverCloseProc * Tcl_ChannelCloseProc( + const Tcl_ChannelType *chanTypePtr); +/* 402 */ +EXTERN Tcl_DriverClose2Proc * Tcl_ChannelClose2Proc( + const Tcl_ChannelType *chanTypePtr); +/* 403 */ +EXTERN Tcl_DriverInputProc * Tcl_ChannelInputProc( + const Tcl_ChannelType *chanTypePtr); +/* 404 */ +EXTERN Tcl_DriverOutputProc * Tcl_ChannelOutputProc( + const Tcl_ChannelType *chanTypePtr); +/* 405 */ +EXTERN Tcl_DriverSeekProc * Tcl_ChannelSeekProc( + const Tcl_ChannelType *chanTypePtr); +/* 406 */ +EXTERN Tcl_DriverSetOptionProc * Tcl_ChannelSetOptionProc( + const Tcl_ChannelType *chanTypePtr); +/* 407 */ +EXTERN Tcl_DriverGetOptionProc * Tcl_ChannelGetOptionProc( + const Tcl_ChannelType *chanTypePtr); +/* 408 */ +EXTERN Tcl_DriverWatchProc * Tcl_ChannelWatchProc( + const Tcl_ChannelType *chanTypePtr); +/* 409 */ +EXTERN Tcl_DriverGetHandleProc * Tcl_ChannelGetHandleProc( + const Tcl_ChannelType *chanTypePtr); +/* 410 */ +EXTERN Tcl_DriverFlushProc * Tcl_ChannelFlushProc( + const Tcl_ChannelType *chanTypePtr); +/* 411 */ +EXTERN Tcl_DriverHandlerProc * Tcl_ChannelHandlerProc( + const Tcl_ChannelType *chanTypePtr); +/* 412 */ +EXTERN int Tcl_JoinThread(Tcl_ThreadId threadId, int *result); +/* 413 */ +EXTERN int Tcl_IsChannelShared(Tcl_Channel channel); +/* 414 */ +EXTERN int Tcl_IsChannelRegistered(Tcl_Interp *interp, + Tcl_Channel channel); +/* 415 */ +EXTERN void Tcl_CutChannel(Tcl_Channel channel); +/* 416 */ +EXTERN void Tcl_SpliceChannel(Tcl_Channel channel); +/* 417 */ +EXTERN void Tcl_ClearChannelHandlers(Tcl_Channel channel); +/* 418 */ +EXTERN int Tcl_IsChannelExisting(const char *channelName); +/* 419 */ +EXTERN int Tcl_UniCharNcasecmp(const Tcl_UniChar *ucs, + const Tcl_UniChar *uct, + unsigned long numChars); +/* 420 */ +EXTERN int Tcl_UniCharCaseMatch(const Tcl_UniChar *uniStr, + const Tcl_UniChar *uniPattern, int nocase); +/* 421 */ +EXTERN Tcl_HashEntry * Tcl_FindHashEntry(Tcl_HashTable *tablePtr, + const void *key); +/* 422 */ +EXTERN Tcl_HashEntry * Tcl_CreateHashEntry(Tcl_HashTable *tablePtr, + const void *key, int *newPtr); +/* 423 */ +EXTERN void Tcl_InitCustomHashTable(Tcl_HashTable *tablePtr, + int keyType, const Tcl_HashKeyType *typePtr); +/* 424 */ +EXTERN void Tcl_InitObjHashTable(Tcl_HashTable *tablePtr); +/* 425 */ +EXTERN ClientData Tcl_CommandTraceInfo(Tcl_Interp *interp, + const char *varName, int flags, + Tcl_CommandTraceProc *procPtr, + ClientData prevClientData); +/* 426 */ +EXTERN int Tcl_TraceCommand(Tcl_Interp *interp, + const char *varName, int flags, + Tcl_CommandTraceProc *proc, + ClientData clientData); +/* 427 */ +EXTERN void Tcl_UntraceCommand(Tcl_Interp *interp, + const char *varName, int flags, + Tcl_CommandTraceProc *proc, + ClientData clientData); +/* 428 */ +EXTERN char * Tcl_AttemptAlloc(unsigned int size); +/* 429 */ +EXTERN char * Tcl_AttemptDbCkalloc(unsigned int size, + const char *file, int line); +/* 430 */ +EXTERN char * 
Tcl_AttemptRealloc(char *ptr, unsigned int size); +/* 431 */ +EXTERN char * Tcl_AttemptDbCkrealloc(char *ptr, unsigned int size, + const char *file, int line); +/* 432 */ +EXTERN int Tcl_AttemptSetObjLength(Tcl_Obj *objPtr, int length); +/* 433 */ +EXTERN Tcl_ThreadId Tcl_GetChannelThread(Tcl_Channel channel); +/* 434 */ +EXTERN Tcl_UniChar * Tcl_GetUnicodeFromObj(Tcl_Obj *objPtr, + int *lengthPtr); +/* 435 */ +EXTERN int Tcl_GetMathFuncInfo(Tcl_Interp *interp, + const char *name, int *numArgsPtr, + Tcl_ValueType **argTypesPtr, + Tcl_MathProc **procPtr, + ClientData *clientDataPtr); +/* 436 */ +EXTERN Tcl_Obj * Tcl_ListMathFuncs(Tcl_Interp *interp, + const char *pattern); +/* 437 */ +EXTERN Tcl_Obj * Tcl_SubstObj(Tcl_Interp *interp, Tcl_Obj *objPtr, + int flags); +/* 438 */ +EXTERN int Tcl_DetachChannel(Tcl_Interp *interp, + Tcl_Channel channel); +/* 439 */ +EXTERN int Tcl_IsStandardChannel(Tcl_Channel channel); +/* 440 */ +EXTERN int Tcl_FSCopyFile(Tcl_Obj *srcPathPtr, + Tcl_Obj *destPathPtr); +/* 441 */ +EXTERN int Tcl_FSCopyDirectory(Tcl_Obj *srcPathPtr, + Tcl_Obj *destPathPtr, Tcl_Obj **errorPtr); +/* 442 */ +EXTERN int Tcl_FSCreateDirectory(Tcl_Obj *pathPtr); +/* 443 */ +EXTERN int Tcl_FSDeleteFile(Tcl_Obj *pathPtr); +/* 444 */ +EXTERN int Tcl_FSLoadFile(Tcl_Interp *interp, Tcl_Obj *pathPtr, + const char *sym1, const char *sym2, + Tcl_PackageInitProc **proc1Ptr, + Tcl_PackageInitProc **proc2Ptr, + Tcl_LoadHandle *handlePtr, + Tcl_FSUnloadFileProc **unloadProcPtr); +/* 445 */ +EXTERN int Tcl_FSMatchInDirectory(Tcl_Interp *interp, + Tcl_Obj *result, Tcl_Obj *pathPtr, + const char *pattern, Tcl_GlobTypeData *types); +/* 446 */ +EXTERN Tcl_Obj * Tcl_FSLink(Tcl_Obj *pathPtr, Tcl_Obj *toPtr, + int linkAction); +/* 447 */ +EXTERN int Tcl_FSRemoveDirectory(Tcl_Obj *pathPtr, + int recursive, Tcl_Obj **errorPtr); +/* 448 */ +EXTERN int Tcl_FSRenameFile(Tcl_Obj *srcPathPtr, + Tcl_Obj *destPathPtr); +/* 449 */ +EXTERN int Tcl_FSLstat(Tcl_Obj *pathPtr, Tcl_StatBuf *buf); +/* 450 */ +EXTERN int Tcl_FSUtime(Tcl_Obj *pathPtr, struct utimbuf *tval); +/* 451 */ +EXTERN int Tcl_FSFileAttrsGet(Tcl_Interp *interp, int index, + Tcl_Obj *pathPtr, Tcl_Obj **objPtrRef); +/* 452 */ +EXTERN int Tcl_FSFileAttrsSet(Tcl_Interp *interp, int index, + Tcl_Obj *pathPtr, Tcl_Obj *objPtr); +/* 453 */ +EXTERN const char *CONST86 * Tcl_FSFileAttrStrings(Tcl_Obj *pathPtr, + Tcl_Obj **objPtrRef); +/* 454 */ +EXTERN int Tcl_FSStat(Tcl_Obj *pathPtr, Tcl_StatBuf *buf); +/* 455 */ +EXTERN int Tcl_FSAccess(Tcl_Obj *pathPtr, int mode); +/* 456 */ +EXTERN Tcl_Channel Tcl_FSOpenFileChannel(Tcl_Interp *interp, + Tcl_Obj *pathPtr, const char *modeString, + int permissions); +/* 457 */ +EXTERN Tcl_Obj * Tcl_FSGetCwd(Tcl_Interp *interp); +/* 458 */ +EXTERN int Tcl_FSChdir(Tcl_Obj *pathPtr); +/* 459 */ +EXTERN int Tcl_FSConvertToPathType(Tcl_Interp *interp, + Tcl_Obj *pathPtr); +/* 460 */ +EXTERN Tcl_Obj * Tcl_FSJoinPath(Tcl_Obj *listObj, int elements); +/* 461 */ +EXTERN Tcl_Obj * Tcl_FSSplitPath(Tcl_Obj *pathPtr, int *lenPtr); +/* 462 */ +EXTERN int Tcl_FSEqualPaths(Tcl_Obj *firstPtr, + Tcl_Obj *secondPtr); +/* 463 */ +EXTERN Tcl_Obj * Tcl_FSGetNormalizedPath(Tcl_Interp *interp, + Tcl_Obj *pathPtr); +/* 464 */ +EXTERN Tcl_Obj * Tcl_FSJoinToPath(Tcl_Obj *pathPtr, int objc, + Tcl_Obj *const objv[]); +/* 465 */ +EXTERN ClientData Tcl_FSGetInternalRep(Tcl_Obj *pathPtr, + const Tcl_Filesystem *fsPtr); +/* 466 */ +EXTERN Tcl_Obj * Tcl_FSGetTranslatedPath(Tcl_Interp *interp, + Tcl_Obj *pathPtr); +/* 467 */ +EXTERN int 
Tcl_FSEvalFile(Tcl_Interp *interp, Tcl_Obj *fileName); +/* 468 */ +EXTERN Tcl_Obj * Tcl_FSNewNativePath( + const Tcl_Filesystem *fromFilesystem, + ClientData clientData); +/* 469 */ +EXTERN const void * Tcl_FSGetNativePath(Tcl_Obj *pathPtr); +/* 470 */ +EXTERN Tcl_Obj * Tcl_FSFileSystemInfo(Tcl_Obj *pathPtr); +/* 471 */ +EXTERN Tcl_Obj * Tcl_FSPathSeparator(Tcl_Obj *pathPtr); +/* 472 */ +EXTERN Tcl_Obj * Tcl_FSListVolumes(void); +/* 473 */ +EXTERN int Tcl_FSRegister(ClientData clientData, + const Tcl_Filesystem *fsPtr); +/* 474 */ +EXTERN int Tcl_FSUnregister(const Tcl_Filesystem *fsPtr); +/* 475 */ +EXTERN ClientData Tcl_FSData(const Tcl_Filesystem *fsPtr); +/* 476 */ +EXTERN const char * Tcl_FSGetTranslatedStringPath(Tcl_Interp *interp, + Tcl_Obj *pathPtr); +/* 477 */ +EXTERN CONST86 Tcl_Filesystem * Tcl_FSGetFileSystemForPath(Tcl_Obj *pathPtr); +/* 478 */ +EXTERN Tcl_PathType Tcl_FSGetPathType(Tcl_Obj *pathPtr); +/* 479 */ +EXTERN int Tcl_OutputBuffered(Tcl_Channel chan); +/* 480 */ +EXTERN void Tcl_FSMountsChanged(const Tcl_Filesystem *fsPtr); +/* 481 */ +EXTERN int Tcl_EvalTokensStandard(Tcl_Interp *interp, + Tcl_Token *tokenPtr, int count); +/* 482 */ +EXTERN void Tcl_GetTime(Tcl_Time *timeBuf); +/* 483 */ +EXTERN Tcl_Trace Tcl_CreateObjTrace(Tcl_Interp *interp, int level, + int flags, Tcl_CmdObjTraceProc *objProc, + ClientData clientData, + Tcl_CmdObjTraceDeleteProc *delProc); +/* 484 */ +EXTERN int Tcl_GetCommandInfoFromToken(Tcl_Command token, + Tcl_CmdInfo *infoPtr); +/* 485 */ +EXTERN int Tcl_SetCommandInfoFromToken(Tcl_Command token, + const Tcl_CmdInfo *infoPtr); +/* 486 */ +EXTERN Tcl_Obj * Tcl_DbNewWideIntObj(Tcl_WideInt wideValue, + const char *file, int line); +/* 487 */ +EXTERN int Tcl_GetWideIntFromObj(Tcl_Interp *interp, + Tcl_Obj *objPtr, Tcl_WideInt *widePtr); +/* 488 */ +EXTERN Tcl_Obj * Tcl_NewWideIntObj(Tcl_WideInt wideValue); +/* 489 */ +EXTERN void Tcl_SetWideIntObj(Tcl_Obj *objPtr, + Tcl_WideInt wideValue); +/* 490 */ +EXTERN Tcl_StatBuf * Tcl_AllocStatBuf(void); +/* 491 */ +EXTERN Tcl_WideInt Tcl_Seek(Tcl_Channel chan, Tcl_WideInt offset, + int mode); +/* 492 */ +EXTERN Tcl_WideInt Tcl_Tell(Tcl_Channel chan); +/* 493 */ +EXTERN Tcl_DriverWideSeekProc * Tcl_ChannelWideSeekProc( + const Tcl_ChannelType *chanTypePtr); +/* 494 */ +EXTERN int Tcl_DictObjPut(Tcl_Interp *interp, Tcl_Obj *dictPtr, + Tcl_Obj *keyPtr, Tcl_Obj *valuePtr); +/* 495 */ +EXTERN int Tcl_DictObjGet(Tcl_Interp *interp, Tcl_Obj *dictPtr, + Tcl_Obj *keyPtr, Tcl_Obj **valuePtrPtr); +/* 496 */ +EXTERN int Tcl_DictObjRemove(Tcl_Interp *interp, + Tcl_Obj *dictPtr, Tcl_Obj *keyPtr); +/* 497 */ +EXTERN int Tcl_DictObjSize(Tcl_Interp *interp, Tcl_Obj *dictPtr, + int *sizePtr); +/* 498 */ +EXTERN int Tcl_DictObjFirst(Tcl_Interp *interp, + Tcl_Obj *dictPtr, Tcl_DictSearch *searchPtr, + Tcl_Obj **keyPtrPtr, Tcl_Obj **valuePtrPtr, + int *donePtr); +/* 499 */ +EXTERN void Tcl_DictObjNext(Tcl_DictSearch *searchPtr, + Tcl_Obj **keyPtrPtr, Tcl_Obj **valuePtrPtr, + int *donePtr); +/* 500 */ +EXTERN void Tcl_DictObjDone(Tcl_DictSearch *searchPtr); +/* 501 */ +EXTERN int Tcl_DictObjPutKeyList(Tcl_Interp *interp, + Tcl_Obj *dictPtr, int keyc, + Tcl_Obj *const *keyv, Tcl_Obj *valuePtr); +/* 502 */ +EXTERN int Tcl_DictObjRemoveKeyList(Tcl_Interp *interp, + Tcl_Obj *dictPtr, int keyc, + Tcl_Obj *const *keyv); +/* 503 */ +EXTERN Tcl_Obj * Tcl_NewDictObj(void); +/* 504 */ +EXTERN Tcl_Obj * Tcl_DbNewDictObj(const char *file, int line); +/* 505 */ +EXTERN void Tcl_RegisterConfig(Tcl_Interp *interp, + const char *pkgName, + 
const Tcl_Config *configuration, + const char *valEncoding); +/* 506 */ +EXTERN Tcl_Namespace * Tcl_CreateNamespace(Tcl_Interp *interp, + const char *name, ClientData clientData, + Tcl_NamespaceDeleteProc *deleteProc); +/* 507 */ +EXTERN void Tcl_DeleteNamespace(Tcl_Namespace *nsPtr); +/* 508 */ +EXTERN int Tcl_AppendExportList(Tcl_Interp *interp, + Tcl_Namespace *nsPtr, Tcl_Obj *objPtr); +/* 509 */ +EXTERN int Tcl_Export(Tcl_Interp *interp, Tcl_Namespace *nsPtr, + const char *pattern, int resetListFirst); +/* 510 */ +EXTERN int Tcl_Import(Tcl_Interp *interp, Tcl_Namespace *nsPtr, + const char *pattern, int allowOverwrite); +/* 511 */ +EXTERN int Tcl_ForgetImport(Tcl_Interp *interp, + Tcl_Namespace *nsPtr, const char *pattern); +/* 512 */ +EXTERN Tcl_Namespace * Tcl_GetCurrentNamespace(Tcl_Interp *interp); +/* 513 */ +EXTERN Tcl_Namespace * Tcl_GetGlobalNamespace(Tcl_Interp *interp); +/* 514 */ +EXTERN Tcl_Namespace * Tcl_FindNamespace(Tcl_Interp *interp, + const char *name, + Tcl_Namespace *contextNsPtr, int flags); +/* 515 */ +EXTERN Tcl_Command Tcl_FindCommand(Tcl_Interp *interp, const char *name, + Tcl_Namespace *contextNsPtr, int flags); +/* 516 */ +EXTERN Tcl_Command Tcl_GetCommandFromObj(Tcl_Interp *interp, + Tcl_Obj *objPtr); +/* 517 */ +EXTERN void Tcl_GetCommandFullName(Tcl_Interp *interp, + Tcl_Command command, Tcl_Obj *objPtr); +/* 518 */ +EXTERN int Tcl_FSEvalFileEx(Tcl_Interp *interp, + Tcl_Obj *fileName, const char *encodingName); +/* 519 */ +EXTERN Tcl_ExitProc * Tcl_SetExitProc(Tcl_ExitProc *proc); +/* 520 */ +EXTERN void Tcl_LimitAddHandler(Tcl_Interp *interp, int type, + Tcl_LimitHandlerProc *handlerProc, + ClientData clientData, + Tcl_LimitHandlerDeleteProc *deleteProc); +/* 521 */ +EXTERN void Tcl_LimitRemoveHandler(Tcl_Interp *interp, int type, + Tcl_LimitHandlerProc *handlerProc, + ClientData clientData); +/* 522 */ +EXTERN int Tcl_LimitReady(Tcl_Interp *interp); +/* 523 */ +EXTERN int Tcl_LimitCheck(Tcl_Interp *interp); +/* 524 */ +EXTERN int Tcl_LimitExceeded(Tcl_Interp *interp); +/* 525 */ +EXTERN void Tcl_LimitSetCommands(Tcl_Interp *interp, + int commandLimit); +/* 526 */ +EXTERN void Tcl_LimitSetTime(Tcl_Interp *interp, + Tcl_Time *timeLimitPtr); +/* 527 */ +EXTERN void Tcl_LimitSetGranularity(Tcl_Interp *interp, int type, + int granularity); +/* 528 */ +EXTERN int Tcl_LimitTypeEnabled(Tcl_Interp *interp, int type); +/* 529 */ +EXTERN int Tcl_LimitTypeExceeded(Tcl_Interp *interp, int type); +/* 530 */ +EXTERN void Tcl_LimitTypeSet(Tcl_Interp *interp, int type); +/* 531 */ +EXTERN void Tcl_LimitTypeReset(Tcl_Interp *interp, int type); +/* 532 */ +EXTERN int Tcl_LimitGetCommands(Tcl_Interp *interp); +/* 533 */ +EXTERN void Tcl_LimitGetTime(Tcl_Interp *interp, + Tcl_Time *timeLimitPtr); +/* 534 */ +EXTERN int Tcl_LimitGetGranularity(Tcl_Interp *interp, int type); +/* 535 */ +EXTERN Tcl_InterpState Tcl_SaveInterpState(Tcl_Interp *interp, int status); +/* 536 */ +EXTERN int Tcl_RestoreInterpState(Tcl_Interp *interp, + Tcl_InterpState state); +/* 537 */ +EXTERN void Tcl_DiscardInterpState(Tcl_InterpState state); +/* 538 */ +EXTERN int Tcl_SetReturnOptions(Tcl_Interp *interp, + Tcl_Obj *options); +/* 539 */ +EXTERN Tcl_Obj * Tcl_GetReturnOptions(Tcl_Interp *interp, int result); +/* 540 */ +EXTERN int Tcl_IsEnsemble(Tcl_Command token); +/* 541 */ +EXTERN Tcl_Command Tcl_CreateEnsemble(Tcl_Interp *interp, + const char *name, + Tcl_Namespace *namespacePtr, int flags); +/* 542 */ +EXTERN Tcl_Command Tcl_FindEnsemble(Tcl_Interp *interp, + Tcl_Obj *cmdNameObj, int flags); 
+/* 543 */ +EXTERN int Tcl_SetEnsembleSubcommandList(Tcl_Interp *interp, + Tcl_Command token, Tcl_Obj *subcmdList); +/* 544 */ +EXTERN int Tcl_SetEnsembleMappingDict(Tcl_Interp *interp, + Tcl_Command token, Tcl_Obj *mapDict); +/* 545 */ +EXTERN int Tcl_SetEnsembleUnknownHandler(Tcl_Interp *interp, + Tcl_Command token, Tcl_Obj *unknownList); +/* 546 */ +EXTERN int Tcl_SetEnsembleFlags(Tcl_Interp *interp, + Tcl_Command token, int flags); +/* 547 */ +EXTERN int Tcl_GetEnsembleSubcommandList(Tcl_Interp *interp, + Tcl_Command token, Tcl_Obj **subcmdListPtr); +/* 548 */ +EXTERN int Tcl_GetEnsembleMappingDict(Tcl_Interp *interp, + Tcl_Command token, Tcl_Obj **mapDictPtr); +/* 549 */ +EXTERN int Tcl_GetEnsembleUnknownHandler(Tcl_Interp *interp, + Tcl_Command token, Tcl_Obj **unknownListPtr); +/* 550 */ +EXTERN int Tcl_GetEnsembleFlags(Tcl_Interp *interp, + Tcl_Command token, int *flagsPtr); +/* 551 */ +EXTERN int Tcl_GetEnsembleNamespace(Tcl_Interp *interp, + Tcl_Command token, + Tcl_Namespace **namespacePtrPtr); +/* 552 */ +EXTERN void Tcl_SetTimeProc(Tcl_GetTimeProc *getProc, + Tcl_ScaleTimeProc *scaleProc, + ClientData clientData); +/* 553 */ +EXTERN void Tcl_QueryTimeProc(Tcl_GetTimeProc **getProc, + Tcl_ScaleTimeProc **scaleProc, + ClientData *clientData); +/* 554 */ +EXTERN Tcl_DriverThreadActionProc * Tcl_ChannelThreadActionProc( + const Tcl_ChannelType *chanTypePtr); +/* 555 */ +EXTERN Tcl_Obj * Tcl_NewBignumObj(mp_int *value); +/* 556 */ +EXTERN Tcl_Obj * Tcl_DbNewBignumObj(mp_int *value, const char *file, + int line); +/* 557 */ +EXTERN void Tcl_SetBignumObj(Tcl_Obj *obj, mp_int *value); +/* 558 */ +EXTERN int Tcl_GetBignumFromObj(Tcl_Interp *interp, + Tcl_Obj *obj, mp_int *value); +/* 559 */ +EXTERN int Tcl_TakeBignumFromObj(Tcl_Interp *interp, + Tcl_Obj *obj, mp_int *value); +/* 560 */ +EXTERN int Tcl_TruncateChannel(Tcl_Channel chan, + Tcl_WideInt length); +/* 561 */ +EXTERN Tcl_DriverTruncateProc * Tcl_ChannelTruncateProc( + const Tcl_ChannelType *chanTypePtr); +/* 562 */ +EXTERN void Tcl_SetChannelErrorInterp(Tcl_Interp *interp, + Tcl_Obj *msg); +/* 563 */ +EXTERN void Tcl_GetChannelErrorInterp(Tcl_Interp *interp, + Tcl_Obj **msg); +/* 564 */ +EXTERN void Tcl_SetChannelError(Tcl_Channel chan, Tcl_Obj *msg); +/* 565 */ +EXTERN void Tcl_GetChannelError(Tcl_Channel chan, Tcl_Obj **msg); +/* 566 */ +EXTERN int Tcl_InitBignumFromDouble(Tcl_Interp *interp, + double initval, mp_int *toInit); +/* 567 */ +EXTERN Tcl_Obj * Tcl_GetNamespaceUnknownHandler(Tcl_Interp *interp, + Tcl_Namespace *nsPtr); +/* 568 */ +EXTERN int Tcl_SetNamespaceUnknownHandler(Tcl_Interp *interp, + Tcl_Namespace *nsPtr, Tcl_Obj *handlerPtr); +/* 569 */ +EXTERN int Tcl_GetEncodingFromObj(Tcl_Interp *interp, + Tcl_Obj *objPtr, Tcl_Encoding *encodingPtr); +/* 570 */ +EXTERN Tcl_Obj * Tcl_GetEncodingSearchPath(void); +/* 571 */ +EXTERN int Tcl_SetEncodingSearchPath(Tcl_Obj *searchPath); +/* 572 */ +EXTERN const char * Tcl_GetEncodingNameFromEnvironment( + Tcl_DString *bufPtr); +/* 573 */ +EXTERN int Tcl_PkgRequireProc(Tcl_Interp *interp, + const char *name, int objc, + Tcl_Obj *const objv[], void *clientDataPtr); +/* 574 */ +EXTERN void Tcl_AppendObjToErrorInfo(Tcl_Interp *interp, + Tcl_Obj *objPtr); +/* 575 */ +EXTERN void Tcl_AppendLimitedToObj(Tcl_Obj *objPtr, + const char *bytes, int length, int limit, + const char *ellipsis); +/* 576 */ +EXTERN Tcl_Obj * Tcl_Format(Tcl_Interp *interp, const char *format, + int objc, Tcl_Obj *const objv[]); +/* 577 */ +EXTERN int Tcl_AppendFormatToObj(Tcl_Interp *interp, + Tcl_Obj 
*objPtr, const char *format, + int objc, Tcl_Obj *const objv[]); +/* 578 */ +EXTERN Tcl_Obj * Tcl_ObjPrintf(const char *format, ...) TCL_FORMAT_PRINTF(1, 2); +/* 579 */ +EXTERN void Tcl_AppendPrintfToObj(Tcl_Obj *objPtr, + const char *format, ...) TCL_FORMAT_PRINTF(2, 3); +/* 580 */ +EXTERN int Tcl_CancelEval(Tcl_Interp *interp, + Tcl_Obj *resultObjPtr, ClientData clientData, + int flags); +/* 581 */ +EXTERN int Tcl_Canceled(Tcl_Interp *interp, int flags); +/* 582 */ +EXTERN int Tcl_CreatePipe(Tcl_Interp *interp, + Tcl_Channel *rchan, Tcl_Channel *wchan, + int flags); +/* 583 */ +EXTERN Tcl_Command Tcl_NRCreateCommand(Tcl_Interp *interp, + const char *cmdName, Tcl_ObjCmdProc *proc, + Tcl_ObjCmdProc *nreProc, + ClientData clientData, + Tcl_CmdDeleteProc *deleteProc); +/* 584 */ +EXTERN int Tcl_NREvalObj(Tcl_Interp *interp, Tcl_Obj *objPtr, + int flags); +/* 585 */ +EXTERN int Tcl_NREvalObjv(Tcl_Interp *interp, int objc, + Tcl_Obj *const objv[], int flags); +/* 586 */ +EXTERN int Tcl_NRCmdSwap(Tcl_Interp *interp, Tcl_Command cmd, + int objc, Tcl_Obj *const objv[], int flags); +/* 587 */ +EXTERN void Tcl_NRAddCallback(Tcl_Interp *interp, + Tcl_NRPostProc *postProcPtr, + ClientData data0, ClientData data1, + ClientData data2, ClientData data3); +/* 588 */ +EXTERN int Tcl_NRCallObjProc(Tcl_Interp *interp, + Tcl_ObjCmdProc *objProc, + ClientData clientData, int objc, + Tcl_Obj *const objv[]); +/* 589 */ +EXTERN unsigned Tcl_GetFSDeviceFromStat(const Tcl_StatBuf *statPtr); +/* 590 */ +EXTERN unsigned Tcl_GetFSInodeFromStat(const Tcl_StatBuf *statPtr); +/* 591 */ +EXTERN unsigned Tcl_GetModeFromStat(const Tcl_StatBuf *statPtr); +/* 592 */ +EXTERN int Tcl_GetLinkCountFromStat(const Tcl_StatBuf *statPtr); +/* 593 */ +EXTERN int Tcl_GetUserIdFromStat(const Tcl_StatBuf *statPtr); +/* 594 */ +EXTERN int Tcl_GetGroupIdFromStat(const Tcl_StatBuf *statPtr); +/* 595 */ +EXTERN int Tcl_GetDeviceTypeFromStat(const Tcl_StatBuf *statPtr); +/* 596 */ +EXTERN Tcl_WideInt Tcl_GetAccessTimeFromStat(const Tcl_StatBuf *statPtr); +/* 597 */ +EXTERN Tcl_WideInt Tcl_GetModificationTimeFromStat( + const Tcl_StatBuf *statPtr); +/* 598 */ +EXTERN Tcl_WideInt Tcl_GetChangeTimeFromStat(const Tcl_StatBuf *statPtr); +/* 599 */ +EXTERN Tcl_WideUInt Tcl_GetSizeFromStat(const Tcl_StatBuf *statPtr); +/* 600 */ +EXTERN Tcl_WideUInt Tcl_GetBlocksFromStat(const Tcl_StatBuf *statPtr); +/* 601 */ +EXTERN unsigned Tcl_GetBlockSizeFromStat(const Tcl_StatBuf *statPtr); +/* 602 */ +EXTERN int Tcl_SetEnsembleParameterList(Tcl_Interp *interp, + Tcl_Command token, Tcl_Obj *paramList); +/* 603 */ +EXTERN int Tcl_GetEnsembleParameterList(Tcl_Interp *interp, + Tcl_Command token, Tcl_Obj **paramListPtr); +/* 604 */ +EXTERN int Tcl_ParseArgsObjv(Tcl_Interp *interp, + const Tcl_ArgvInfo *argTable, int *objcPtr, + Tcl_Obj *const *objv, Tcl_Obj ***remObjv); +/* 605 */ +EXTERN int Tcl_GetErrorLine(Tcl_Interp *interp); +/* 606 */ +EXTERN void Tcl_SetErrorLine(Tcl_Interp *interp, int lineNum); +/* 607 */ +EXTERN void Tcl_TransferResult(Tcl_Interp *sourceInterp, + int result, Tcl_Interp *targetInterp); +/* 608 */ +EXTERN int Tcl_InterpActive(Tcl_Interp *interp); +/* 609 */ +EXTERN void Tcl_BackgroundException(Tcl_Interp *interp, int code); +/* 610 */ +EXTERN int Tcl_ZlibDeflate(Tcl_Interp *interp, int format, + Tcl_Obj *data, int level, + Tcl_Obj *gzipHeaderDictObj); +/* 611 */ +EXTERN int Tcl_ZlibInflate(Tcl_Interp *interp, int format, + Tcl_Obj *data, int buffersize, + Tcl_Obj *gzipHeaderDictObj); +/* 612 */ +EXTERN unsigned int Tcl_ZlibCRC32(unsigned 
int crc, + const unsigned char *buf, int len); +/* 613 */ +EXTERN unsigned int Tcl_ZlibAdler32(unsigned int adler, + const unsigned char *buf, int len); +/* 614 */ +EXTERN int Tcl_ZlibStreamInit(Tcl_Interp *interp, int mode, + int format, int level, Tcl_Obj *dictObj, + Tcl_ZlibStream *zshandle); +/* 615 */ +EXTERN Tcl_Obj * Tcl_ZlibStreamGetCommandName(Tcl_ZlibStream zshandle); +/* 616 */ +EXTERN int Tcl_ZlibStreamEof(Tcl_ZlibStream zshandle); +/* 617 */ +EXTERN int Tcl_ZlibStreamChecksum(Tcl_ZlibStream zshandle); +/* 618 */ +EXTERN int Tcl_ZlibStreamPut(Tcl_ZlibStream zshandle, + Tcl_Obj *data, int flush); +/* 619 */ +EXTERN int Tcl_ZlibStreamGet(Tcl_ZlibStream zshandle, + Tcl_Obj *data, int count); +/* 620 */ +EXTERN int Tcl_ZlibStreamClose(Tcl_ZlibStream zshandle); +/* 621 */ +EXTERN int Tcl_ZlibStreamReset(Tcl_ZlibStream zshandle); +/* 622 */ +EXTERN void Tcl_SetStartupScript(Tcl_Obj *path, + const char *encoding); +/* 623 */ +EXTERN Tcl_Obj * Tcl_GetStartupScript(const char **encodingPtr); +/* 624 */ +EXTERN int Tcl_CloseEx(Tcl_Interp *interp, Tcl_Channel chan, + int flags); +/* 625 */ +EXTERN int Tcl_NRExprObj(Tcl_Interp *interp, Tcl_Obj *objPtr, + Tcl_Obj *resultPtr); +/* 626 */ +EXTERN int Tcl_NRSubstObj(Tcl_Interp *interp, Tcl_Obj *objPtr, + int flags); +/* 627 */ +EXTERN int Tcl_LoadFile(Tcl_Interp *interp, Tcl_Obj *pathPtr, + const char *const symv[], int flags, + void *procPtrs, Tcl_LoadHandle *handlePtr); +/* 628 */ +EXTERN void * Tcl_FindSymbol(Tcl_Interp *interp, + Tcl_LoadHandle handle, const char *symbol); +/* 629 */ +EXTERN int Tcl_FSUnloadFile(Tcl_Interp *interp, + Tcl_LoadHandle handlePtr); +/* 630 */ +EXTERN void Tcl_ZlibStreamSetCompressionDictionary( + Tcl_ZlibStream zhandle, + Tcl_Obj *compressionDictionaryObj); + +typedef struct { + const struct TclPlatStubs *tclPlatStubs; + const struct TclIntStubs *tclIntStubs; + const struct TclIntPlatStubs *tclIntPlatStubs; +} TclStubHooks; + +typedef struct TclStubs { + int magic; + const TclStubHooks *hooks; + + int (*tcl_PkgProvideEx) (Tcl_Interp *interp, const char *name, const char *version, const void *clientData); /* 0 */ + CONST84_RETURN char * (*tcl_PkgRequireEx) (Tcl_Interp *interp, const char *name, const char *version, int exact, void *clientDataPtr); /* 1 */ + void (*tcl_Panic) (const char *format, ...) 
TCL_FORMAT_PRINTF(1, 2); /* 2 */ + char * (*tcl_Alloc) (unsigned int size); /* 3 */ + void (*tcl_Free) (char *ptr); /* 4 */ + char * (*tcl_Realloc) (char *ptr, unsigned int size); /* 5 */ + char * (*tcl_DbCkalloc) (unsigned int size, const char *file, int line); /* 6 */ + void (*tcl_DbCkfree) (char *ptr, const char *file, int line); /* 7 */ + char * (*tcl_DbCkrealloc) (char *ptr, unsigned int size, const char *file, int line); /* 8 */ +#if !defined(__WIN32__) && !defined(MAC_OSX_TCL) /* UNIX */ + void (*tcl_CreateFileHandler) (int fd, int mask, Tcl_FileProc *proc, ClientData clientData); /* 9 */ +#endif /* UNIX */ +#if defined(__WIN32__) /* WIN */ + void (*reserved9)(void); +#endif /* WIN */ +#ifdef MAC_OSX_TCL /* MACOSX */ + void (*tcl_CreateFileHandler) (int fd, int mask, Tcl_FileProc *proc, ClientData clientData); /* 9 */ +#endif /* MACOSX */ +#if !defined(__WIN32__) && !defined(MAC_OSX_TCL) /* UNIX */ + void (*tcl_DeleteFileHandler) (int fd); /* 10 */ +#endif /* UNIX */ +#if defined(__WIN32__) /* WIN */ + void (*reserved10)(void); +#endif /* WIN */ +#ifdef MAC_OSX_TCL /* MACOSX */ + void (*tcl_DeleteFileHandler) (int fd); /* 10 */ +#endif /* MACOSX */ + void (*tcl_SetTimer) (const Tcl_Time *timePtr); /* 11 */ + void (*tcl_Sleep) (int ms); /* 12 */ + int (*tcl_WaitForEvent) (const Tcl_Time *timePtr); /* 13 */ + int (*tcl_AppendAllObjTypes) (Tcl_Interp *interp, Tcl_Obj *objPtr); /* 14 */ + void (*tcl_AppendStringsToObj) (Tcl_Obj *objPtr, ...); /* 15 */ + void (*tcl_AppendToObj) (Tcl_Obj *objPtr, const char *bytes, int length); /* 16 */ + Tcl_Obj * (*tcl_ConcatObj) (int objc, Tcl_Obj *const objv[]); /* 17 */ + int (*tcl_ConvertToType) (Tcl_Interp *interp, Tcl_Obj *objPtr, const Tcl_ObjType *typePtr); /* 18 */ + void (*tcl_DbDecrRefCount) (Tcl_Obj *objPtr, const char *file, int line); /* 19 */ + void (*tcl_DbIncrRefCount) (Tcl_Obj *objPtr, const char *file, int line); /* 20 */ + int (*tcl_DbIsShared) (Tcl_Obj *objPtr, const char *file, int line); /* 21 */ + Tcl_Obj * (*tcl_DbNewBooleanObj) (int boolValue, const char *file, int line); /* 22 */ + Tcl_Obj * (*tcl_DbNewByteArrayObj) (const unsigned char *bytes, int length, const char *file, int line); /* 23 */ + Tcl_Obj * (*tcl_DbNewDoubleObj) (double doubleValue, const char *file, int line); /* 24 */ + Tcl_Obj * (*tcl_DbNewListObj) (int objc, Tcl_Obj *const *objv, const char *file, int line); /* 25 */ + Tcl_Obj * (*tcl_DbNewLongObj) (long longValue, const char *file, int line); /* 26 */ + Tcl_Obj * (*tcl_DbNewObj) (const char *file, int line); /* 27 */ + Tcl_Obj * (*tcl_DbNewStringObj) (const char *bytes, int length, const char *file, int line); /* 28 */ + Tcl_Obj * (*tcl_DuplicateObj) (Tcl_Obj *objPtr); /* 29 */ + void (*tclFreeObj) (Tcl_Obj *objPtr); /* 30 */ + int (*tcl_GetBoolean) (Tcl_Interp *interp, const char *src, int *boolPtr); /* 31 */ + int (*tcl_GetBooleanFromObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, int *boolPtr); /* 32 */ + unsigned char * (*tcl_GetByteArrayFromObj) (Tcl_Obj *objPtr, int *lengthPtr); /* 33 */ + int (*tcl_GetDouble) (Tcl_Interp *interp, const char *src, double *doublePtr); /* 34 */ + int (*tcl_GetDoubleFromObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, double *doublePtr); /* 35 */ + int (*tcl_GetIndexFromObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, CONST84 char *const *tablePtr, const char *msg, int flags, int *indexPtr); /* 36 */ + int (*tcl_GetInt) (Tcl_Interp *interp, const char *src, int *intPtr); /* 37 */ + int (*tcl_GetIntFromObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, int *intPtr); /* 38 */ + int 
(*tcl_GetLongFromObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, long *longPtr); /* 39 */ + CONST86 Tcl_ObjType * (*tcl_GetObjType) (const char *typeName); /* 40 */ + char * (*tcl_GetStringFromObj) (Tcl_Obj *objPtr, int *lengthPtr); /* 41 */ + void (*tcl_InvalidateStringRep) (Tcl_Obj *objPtr); /* 42 */ + int (*tcl_ListObjAppendList) (Tcl_Interp *interp, Tcl_Obj *listPtr, Tcl_Obj *elemListPtr); /* 43 */ + int (*tcl_ListObjAppendElement) (Tcl_Interp *interp, Tcl_Obj *listPtr, Tcl_Obj *objPtr); /* 44 */ + int (*tcl_ListObjGetElements) (Tcl_Interp *interp, Tcl_Obj *listPtr, int *objcPtr, Tcl_Obj ***objvPtr); /* 45 */ + int (*tcl_ListObjIndex) (Tcl_Interp *interp, Tcl_Obj *listPtr, int index, Tcl_Obj **objPtrPtr); /* 46 */ + int (*tcl_ListObjLength) (Tcl_Interp *interp, Tcl_Obj *listPtr, int *lengthPtr); /* 47 */ + int (*tcl_ListObjReplace) (Tcl_Interp *interp, Tcl_Obj *listPtr, int first, int count, int objc, Tcl_Obj *const objv[]); /* 48 */ + Tcl_Obj * (*tcl_NewBooleanObj) (int boolValue); /* 49 */ + Tcl_Obj * (*tcl_NewByteArrayObj) (const unsigned char *bytes, int length); /* 50 */ + Tcl_Obj * (*tcl_NewDoubleObj) (double doubleValue); /* 51 */ + Tcl_Obj * (*tcl_NewIntObj) (int intValue); /* 52 */ + Tcl_Obj * (*tcl_NewListObj) (int objc, Tcl_Obj *const objv[]); /* 53 */ + Tcl_Obj * (*tcl_NewLongObj) (long longValue); /* 54 */ + Tcl_Obj * (*tcl_NewObj) (void); /* 55 */ + Tcl_Obj * (*tcl_NewStringObj) (const char *bytes, int length); /* 56 */ + void (*tcl_SetBooleanObj) (Tcl_Obj *objPtr, int boolValue); /* 57 */ + unsigned char * (*tcl_SetByteArrayLength) (Tcl_Obj *objPtr, int length); /* 58 */ + void (*tcl_SetByteArrayObj) (Tcl_Obj *objPtr, const unsigned char *bytes, int length); /* 59 */ + void (*tcl_SetDoubleObj) (Tcl_Obj *objPtr, double doubleValue); /* 60 */ + void (*tcl_SetIntObj) (Tcl_Obj *objPtr, int intValue); /* 61 */ + void (*tcl_SetListObj) (Tcl_Obj *objPtr, int objc, Tcl_Obj *const objv[]); /* 62 */ + void (*tcl_SetLongObj) (Tcl_Obj *objPtr, long longValue); /* 63 */ + void (*tcl_SetObjLength) (Tcl_Obj *objPtr, int length); /* 64 */ + void (*tcl_SetStringObj) (Tcl_Obj *objPtr, const char *bytes, int length); /* 65 */ + void (*tcl_AddErrorInfo) (Tcl_Interp *interp, const char *message); /* 66 */ + void (*tcl_AddObjErrorInfo) (Tcl_Interp *interp, const char *message, int length); /* 67 */ + void (*tcl_AllowExceptions) (Tcl_Interp *interp); /* 68 */ + void (*tcl_AppendElement) (Tcl_Interp *interp, const char *element); /* 69 */ + void (*tcl_AppendResult) (Tcl_Interp *interp, ...); /* 70 */ + Tcl_AsyncHandler (*tcl_AsyncCreate) (Tcl_AsyncProc *proc, ClientData clientData); /* 71 */ + void (*tcl_AsyncDelete) (Tcl_AsyncHandler async); /* 72 */ + int (*tcl_AsyncInvoke) (Tcl_Interp *interp, int code); /* 73 */ + void (*tcl_AsyncMark) (Tcl_AsyncHandler async); /* 74 */ + int (*tcl_AsyncReady) (void); /* 75 */ + void (*tcl_BackgroundError) (Tcl_Interp *interp); /* 76 */ + char (*tcl_Backslash) (const char *src, int *readPtr); /* 77 */ + int (*tcl_BadChannelOption) (Tcl_Interp *interp, const char *optionName, const char *optionList); /* 78 */ + void (*tcl_CallWhenDeleted) (Tcl_Interp *interp, Tcl_InterpDeleteProc *proc, ClientData clientData); /* 79 */ + void (*tcl_CancelIdleCall) (Tcl_IdleProc *idleProc, ClientData clientData); /* 80 */ + int (*tcl_Close) (Tcl_Interp *interp, Tcl_Channel chan); /* 81 */ + int (*tcl_CommandComplete) (const char *cmd); /* 82 */ + char * (*tcl_Concat) (int argc, CONST84 char *const *argv); /* 83 */ + int (*tcl_ConvertElement) (const char *src, char *dst, int 
flags); /* 84 */ + int (*tcl_ConvertCountedElement) (const char *src, int length, char *dst, int flags); /* 85 */ + int (*tcl_CreateAlias) (Tcl_Interp *slave, const char *slaveCmd, Tcl_Interp *target, const char *targetCmd, int argc, CONST84 char *const *argv); /* 86 */ + int (*tcl_CreateAliasObj) (Tcl_Interp *slave, const char *slaveCmd, Tcl_Interp *target, const char *targetCmd, int objc, Tcl_Obj *const objv[]); /* 87 */ + Tcl_Channel (*tcl_CreateChannel) (const Tcl_ChannelType *typePtr, const char *chanName, ClientData instanceData, int mask); /* 88 */ + void (*tcl_CreateChannelHandler) (Tcl_Channel chan, int mask, Tcl_ChannelProc *proc, ClientData clientData); /* 89 */ + void (*tcl_CreateCloseHandler) (Tcl_Channel chan, Tcl_CloseProc *proc, ClientData clientData); /* 90 */ + Tcl_Command (*tcl_CreateCommand) (Tcl_Interp *interp, const char *cmdName, Tcl_CmdProc *proc, ClientData clientData, Tcl_CmdDeleteProc *deleteProc); /* 91 */ + void (*tcl_CreateEventSource) (Tcl_EventSetupProc *setupProc, Tcl_EventCheckProc *checkProc, ClientData clientData); /* 92 */ + void (*tcl_CreateExitHandler) (Tcl_ExitProc *proc, ClientData clientData); /* 93 */ + Tcl_Interp * (*tcl_CreateInterp) (void); /* 94 */ + void (*tcl_CreateMathFunc) (Tcl_Interp *interp, const char *name, int numArgs, Tcl_ValueType *argTypes, Tcl_MathProc *proc, ClientData clientData); /* 95 */ + Tcl_Command (*tcl_CreateObjCommand) (Tcl_Interp *interp, const char *cmdName, Tcl_ObjCmdProc *proc, ClientData clientData, Tcl_CmdDeleteProc *deleteProc); /* 96 */ + Tcl_Interp * (*tcl_CreateSlave) (Tcl_Interp *interp, const char *slaveName, int isSafe); /* 97 */ + Tcl_TimerToken (*tcl_CreateTimerHandler) (int milliseconds, Tcl_TimerProc *proc, ClientData clientData); /* 98 */ + Tcl_Trace (*tcl_CreateTrace) (Tcl_Interp *interp, int level, Tcl_CmdTraceProc *proc, ClientData clientData); /* 99 */ + void (*tcl_DeleteAssocData) (Tcl_Interp *interp, const char *name); /* 100 */ + void (*tcl_DeleteChannelHandler) (Tcl_Channel chan, Tcl_ChannelProc *proc, ClientData clientData); /* 101 */ + void (*tcl_DeleteCloseHandler) (Tcl_Channel chan, Tcl_CloseProc *proc, ClientData clientData); /* 102 */ + int (*tcl_DeleteCommand) (Tcl_Interp *interp, const char *cmdName); /* 103 */ + int (*tcl_DeleteCommandFromToken) (Tcl_Interp *interp, Tcl_Command command); /* 104 */ + void (*tcl_DeleteEvents) (Tcl_EventDeleteProc *proc, ClientData clientData); /* 105 */ + void (*tcl_DeleteEventSource) (Tcl_EventSetupProc *setupProc, Tcl_EventCheckProc *checkProc, ClientData clientData); /* 106 */ + void (*tcl_DeleteExitHandler) (Tcl_ExitProc *proc, ClientData clientData); /* 107 */ + void (*tcl_DeleteHashEntry) (Tcl_HashEntry *entryPtr); /* 108 */ + void (*tcl_DeleteHashTable) (Tcl_HashTable *tablePtr); /* 109 */ + void (*tcl_DeleteInterp) (Tcl_Interp *interp); /* 110 */ + void (*tcl_DetachPids) (int numPids, Tcl_Pid *pidPtr); /* 111 */ + void (*tcl_DeleteTimerHandler) (Tcl_TimerToken token); /* 112 */ + void (*tcl_DeleteTrace) (Tcl_Interp *interp, Tcl_Trace trace); /* 113 */ + void (*tcl_DontCallWhenDeleted) (Tcl_Interp *interp, Tcl_InterpDeleteProc *proc, ClientData clientData); /* 114 */ + int (*tcl_DoOneEvent) (int flags); /* 115 */ + void (*tcl_DoWhenIdle) (Tcl_IdleProc *proc, ClientData clientData); /* 116 */ + char * (*tcl_DStringAppend) (Tcl_DString *dsPtr, const char *bytes, int length); /* 117 */ + char * (*tcl_DStringAppendElement) (Tcl_DString *dsPtr, const char *element); /* 118 */ + void (*tcl_DStringEndSublist) (Tcl_DString *dsPtr); /* 119 */ + void 
(*tcl_DStringFree) (Tcl_DString *dsPtr); /* 120 */ + void (*tcl_DStringGetResult) (Tcl_Interp *interp, Tcl_DString *dsPtr); /* 121 */ + void (*tcl_DStringInit) (Tcl_DString *dsPtr); /* 122 */ + void (*tcl_DStringResult) (Tcl_Interp *interp, Tcl_DString *dsPtr); /* 123 */ + void (*tcl_DStringSetLength) (Tcl_DString *dsPtr, int length); /* 124 */ + void (*tcl_DStringStartSublist) (Tcl_DString *dsPtr); /* 125 */ + int (*tcl_Eof) (Tcl_Channel chan); /* 126 */ + CONST84_RETURN char * (*tcl_ErrnoId) (void); /* 127 */ + CONST84_RETURN char * (*tcl_ErrnoMsg) (int err); /* 128 */ + int (*tcl_Eval) (Tcl_Interp *interp, const char *script); /* 129 */ + int (*tcl_EvalFile) (Tcl_Interp *interp, const char *fileName); /* 130 */ + int (*tcl_EvalObj) (Tcl_Interp *interp, Tcl_Obj *objPtr); /* 131 */ + void (*tcl_EventuallyFree) (ClientData clientData, Tcl_FreeProc *freeProc); /* 132 */ + void (*tcl_Exit) (int status); /* 133 */ + int (*tcl_ExposeCommand) (Tcl_Interp *interp, const char *hiddenCmdToken, const char *cmdName); /* 134 */ + int (*tcl_ExprBoolean) (Tcl_Interp *interp, const char *expr, int *ptr); /* 135 */ + int (*tcl_ExprBooleanObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, int *ptr); /* 136 */ + int (*tcl_ExprDouble) (Tcl_Interp *interp, const char *expr, double *ptr); /* 137 */ + int (*tcl_ExprDoubleObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, double *ptr); /* 138 */ + int (*tcl_ExprLong) (Tcl_Interp *interp, const char *expr, long *ptr); /* 139 */ + int (*tcl_ExprLongObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, long *ptr); /* 140 */ + int (*tcl_ExprObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, Tcl_Obj **resultPtrPtr); /* 141 */ + int (*tcl_ExprString) (Tcl_Interp *interp, const char *expr); /* 142 */ + void (*tcl_Finalize) (void); /* 143 */ + void (*tcl_FindExecutable) (const char *argv0); /* 144 */ + Tcl_HashEntry * (*tcl_FirstHashEntry) (Tcl_HashTable *tablePtr, Tcl_HashSearch *searchPtr); /* 145 */ + int (*tcl_Flush) (Tcl_Channel chan); /* 146 */ + void (*tcl_FreeResult) (Tcl_Interp *interp); /* 147 */ + int (*tcl_GetAlias) (Tcl_Interp *interp, const char *slaveCmd, Tcl_Interp **targetInterpPtr, CONST84 char **targetCmdPtr, int *argcPtr, CONST84 char ***argvPtr); /* 148 */ + int (*tcl_GetAliasObj) (Tcl_Interp *interp, const char *slaveCmd, Tcl_Interp **targetInterpPtr, CONST84 char **targetCmdPtr, int *objcPtr, Tcl_Obj ***objv); /* 149 */ + ClientData (*tcl_GetAssocData) (Tcl_Interp *interp, const char *name, Tcl_InterpDeleteProc **procPtr); /* 150 */ + Tcl_Channel (*tcl_GetChannel) (Tcl_Interp *interp, const char *chanName, int *modePtr); /* 151 */ + int (*tcl_GetChannelBufferSize) (Tcl_Channel chan); /* 152 */ + int (*tcl_GetChannelHandle) (Tcl_Channel chan, int direction, ClientData *handlePtr); /* 153 */ + ClientData (*tcl_GetChannelInstanceData) (Tcl_Channel chan); /* 154 */ + int (*tcl_GetChannelMode) (Tcl_Channel chan); /* 155 */ + CONST84_RETURN char * (*tcl_GetChannelName) (Tcl_Channel chan); /* 156 */ + int (*tcl_GetChannelOption) (Tcl_Interp *interp, Tcl_Channel chan, const char *optionName, Tcl_DString *dsPtr); /* 157 */ + CONST86 Tcl_ChannelType * (*tcl_GetChannelType) (Tcl_Channel chan); /* 158 */ + int (*tcl_GetCommandInfo) (Tcl_Interp *interp, const char *cmdName, Tcl_CmdInfo *infoPtr); /* 159 */ + CONST84_RETURN char * (*tcl_GetCommandName) (Tcl_Interp *interp, Tcl_Command command); /* 160 */ + int (*tcl_GetErrno) (void); /* 161 */ + CONST84_RETURN char * (*tcl_GetHostName) (void); /* 162 */ + int (*tcl_GetInterpPath) (Tcl_Interp *askInterp, Tcl_Interp *slaveInterp); /* 163 */ + 
Tcl_Interp * (*tcl_GetMaster) (Tcl_Interp *interp); /* 164 */ + const char * (*tcl_GetNameOfExecutable) (void); /* 165 */ + Tcl_Obj * (*tcl_GetObjResult) (Tcl_Interp *interp); /* 166 */ +#if !defined(__WIN32__) && !defined(MAC_OSX_TCL) /* UNIX */ + int (*tcl_GetOpenFile) (Tcl_Interp *interp, const char *chanID, int forWriting, int checkUsage, ClientData *filePtr); /* 167 */ +#endif /* UNIX */ +#if defined(__WIN32__) /* WIN */ + void (*reserved167)(void); +#endif /* WIN */ +#ifdef MAC_OSX_TCL /* MACOSX */ + int (*tcl_GetOpenFile) (Tcl_Interp *interp, const char *chanID, int forWriting, int checkUsage, ClientData *filePtr); /* 167 */ +#endif /* MACOSX */ + Tcl_PathType (*tcl_GetPathType) (const char *path); /* 168 */ + int (*tcl_Gets) (Tcl_Channel chan, Tcl_DString *dsPtr); /* 169 */ + int (*tcl_GetsObj) (Tcl_Channel chan, Tcl_Obj *objPtr); /* 170 */ + int (*tcl_GetServiceMode) (void); /* 171 */ + Tcl_Interp * (*tcl_GetSlave) (Tcl_Interp *interp, const char *slaveName); /* 172 */ + Tcl_Channel (*tcl_GetStdChannel) (int type); /* 173 */ + CONST84_RETURN char * (*tcl_GetStringResult) (Tcl_Interp *interp); /* 174 */ + CONST84_RETURN char * (*tcl_GetVar) (Tcl_Interp *interp, const char *varName, int flags); /* 175 */ + CONST84_RETURN char * (*tcl_GetVar2) (Tcl_Interp *interp, const char *part1, const char *part2, int flags); /* 176 */ + int (*tcl_GlobalEval) (Tcl_Interp *interp, const char *command); /* 177 */ + int (*tcl_GlobalEvalObj) (Tcl_Interp *interp, Tcl_Obj *objPtr); /* 178 */ + int (*tcl_HideCommand) (Tcl_Interp *interp, const char *cmdName, const char *hiddenCmdToken); /* 179 */ + int (*tcl_Init) (Tcl_Interp *interp); /* 180 */ + void (*tcl_InitHashTable) (Tcl_HashTable *tablePtr, int keyType); /* 181 */ + int (*tcl_InputBlocked) (Tcl_Channel chan); /* 182 */ + int (*tcl_InputBuffered) (Tcl_Channel chan); /* 183 */ + int (*tcl_InterpDeleted) (Tcl_Interp *interp); /* 184 */ + int (*tcl_IsSafe) (Tcl_Interp *interp); /* 185 */ + char * (*tcl_JoinPath) (int argc, CONST84 char *const *argv, Tcl_DString *resultPtr); /* 186 */ + int (*tcl_LinkVar) (Tcl_Interp *interp, const char *varName, char *addr, int type); /* 187 */ + void (*reserved188)(void); + Tcl_Channel (*tcl_MakeFileChannel) (ClientData handle, int mode); /* 189 */ + int (*tcl_MakeSafe) (Tcl_Interp *interp); /* 190 */ + Tcl_Channel (*tcl_MakeTcpClientChannel) (ClientData tcpSocket); /* 191 */ + char * (*tcl_Merge) (int argc, CONST84 char *const *argv); /* 192 */ + Tcl_HashEntry * (*tcl_NextHashEntry) (Tcl_HashSearch *searchPtr); /* 193 */ + void (*tcl_NotifyChannel) (Tcl_Channel channel, int mask); /* 194 */ + Tcl_Obj * (*tcl_ObjGetVar2) (Tcl_Interp *interp, Tcl_Obj *part1Ptr, Tcl_Obj *part2Ptr, int flags); /* 195 */ + Tcl_Obj * (*tcl_ObjSetVar2) (Tcl_Interp *interp, Tcl_Obj *part1Ptr, Tcl_Obj *part2Ptr, Tcl_Obj *newValuePtr, int flags); /* 196 */ + Tcl_Channel (*tcl_OpenCommandChannel) (Tcl_Interp *interp, int argc, CONST84 char **argv, int flags); /* 197 */ + Tcl_Channel (*tcl_OpenFileChannel) (Tcl_Interp *interp, const char *fileName, const char *modeString, int permissions); /* 198 */ + Tcl_Channel (*tcl_OpenTcpClient) (Tcl_Interp *interp, int port, const char *address, const char *myaddr, int myport, int async); /* 199 */ + Tcl_Channel (*tcl_OpenTcpServer) (Tcl_Interp *interp, int port, const char *host, Tcl_TcpAcceptProc *acceptProc, ClientData callbackData); /* 200 */ + void (*tcl_Preserve) (ClientData data); /* 201 */ + void (*tcl_PrintDouble) (Tcl_Interp *interp, double value, char *dst); /* 202 */ + int (*tcl_PutEnv) 
(const char *assignment); /* 203 */ + CONST84_RETURN char * (*tcl_PosixError) (Tcl_Interp *interp); /* 204 */ + void (*tcl_QueueEvent) (Tcl_Event *evPtr, Tcl_QueuePosition position); /* 205 */ + int (*tcl_Read) (Tcl_Channel chan, char *bufPtr, int toRead); /* 206 */ + void (*tcl_ReapDetachedProcs) (void); /* 207 */ + int (*tcl_RecordAndEval) (Tcl_Interp *interp, const char *cmd, int flags); /* 208 */ + int (*tcl_RecordAndEvalObj) (Tcl_Interp *interp, Tcl_Obj *cmdPtr, int flags); /* 209 */ + void (*tcl_RegisterChannel) (Tcl_Interp *interp, Tcl_Channel chan); /* 210 */ + void (*tcl_RegisterObjType) (const Tcl_ObjType *typePtr); /* 211 */ + Tcl_RegExp (*tcl_RegExpCompile) (Tcl_Interp *interp, const char *pattern); /* 212 */ + int (*tcl_RegExpExec) (Tcl_Interp *interp, Tcl_RegExp regexp, const char *text, const char *start); /* 213 */ + int (*tcl_RegExpMatch) (Tcl_Interp *interp, const char *text, const char *pattern); /* 214 */ + void (*tcl_RegExpRange) (Tcl_RegExp regexp, int index, CONST84 char **startPtr, CONST84 char **endPtr); /* 215 */ + void (*tcl_Release) (ClientData clientData); /* 216 */ + void (*tcl_ResetResult) (Tcl_Interp *interp); /* 217 */ + int (*tcl_ScanElement) (const char *src, int *flagPtr); /* 218 */ + int (*tcl_ScanCountedElement) (const char *src, int length, int *flagPtr); /* 219 */ + int (*tcl_SeekOld) (Tcl_Channel chan, int offset, int mode); /* 220 */ + int (*tcl_ServiceAll) (void); /* 221 */ + int (*tcl_ServiceEvent) (int flags); /* 222 */ + void (*tcl_SetAssocData) (Tcl_Interp *interp, const char *name, Tcl_InterpDeleteProc *proc, ClientData clientData); /* 223 */ + void (*tcl_SetChannelBufferSize) (Tcl_Channel chan, int sz); /* 224 */ + int (*tcl_SetChannelOption) (Tcl_Interp *interp, Tcl_Channel chan, const char *optionName, const char *newValue); /* 225 */ + int (*tcl_SetCommandInfo) (Tcl_Interp *interp, const char *cmdName, const Tcl_CmdInfo *infoPtr); /* 226 */ + void (*tcl_SetErrno) (int err); /* 227 */ + void (*tcl_SetErrorCode) (Tcl_Interp *interp, ...); /* 228 */ + void (*tcl_SetMaxBlockTime) (const Tcl_Time *timePtr); /* 229 */ + void (*tcl_SetPanicProc) (Tcl_PanicProc *panicProc); /* 230 */ + int (*tcl_SetRecursionLimit) (Tcl_Interp *interp, int depth); /* 231 */ + void (*tcl_SetResult) (Tcl_Interp *interp, char *result, Tcl_FreeProc *freeProc); /* 232 */ + int (*tcl_SetServiceMode) (int mode); /* 233 */ + void (*tcl_SetObjErrorCode) (Tcl_Interp *interp, Tcl_Obj *errorObjPtr); /* 234 */ + void (*tcl_SetObjResult) (Tcl_Interp *interp, Tcl_Obj *resultObjPtr); /* 235 */ + void (*tcl_SetStdChannel) (Tcl_Channel channel, int type); /* 236 */ + CONST84_RETURN char * (*tcl_SetVar) (Tcl_Interp *interp, const char *varName, const char *newValue, int flags); /* 237 */ + CONST84_RETURN char * (*tcl_SetVar2) (Tcl_Interp *interp, const char *part1, const char *part2, const char *newValue, int flags); /* 238 */ + CONST84_RETURN char * (*tcl_SignalId) (int sig); /* 239 */ + CONST84_RETURN char * (*tcl_SignalMsg) (int sig); /* 240 */ + void (*tcl_SourceRCFile) (Tcl_Interp *interp); /* 241 */ + int (*tcl_SplitList) (Tcl_Interp *interp, const char *listStr, int *argcPtr, CONST84 char ***argvPtr); /* 242 */ + void (*tcl_SplitPath) (const char *path, int *argcPtr, CONST84 char ***argvPtr); /* 243 */ + void (*tcl_StaticPackage) (Tcl_Interp *interp, const char *pkgName, Tcl_PackageInitProc *initProc, Tcl_PackageInitProc *safeInitProc); /* 244 */ + int (*tcl_StringMatch) (const char *str, const char *pattern); /* 245 */ + int (*tcl_TellOld) (Tcl_Channel chan); /* 246 */ + int 
(*tcl_TraceVar) (Tcl_Interp *interp, const char *varName, int flags, Tcl_VarTraceProc *proc, ClientData clientData); /* 247 */ + int (*tcl_TraceVar2) (Tcl_Interp *interp, const char *part1, const char *part2, int flags, Tcl_VarTraceProc *proc, ClientData clientData); /* 248 */ + char * (*tcl_TranslateFileName) (Tcl_Interp *interp, const char *name, Tcl_DString *bufferPtr); /* 249 */ + int (*tcl_Ungets) (Tcl_Channel chan, const char *str, int len, int atHead); /* 250 */ + void (*tcl_UnlinkVar) (Tcl_Interp *interp, const char *varName); /* 251 */ + int (*tcl_UnregisterChannel) (Tcl_Interp *interp, Tcl_Channel chan); /* 252 */ + int (*tcl_UnsetVar) (Tcl_Interp *interp, const char *varName, int flags); /* 253 */ + int (*tcl_UnsetVar2) (Tcl_Interp *interp, const char *part1, const char *part2, int flags); /* 254 */ + void (*tcl_UntraceVar) (Tcl_Interp *interp, const char *varName, int flags, Tcl_VarTraceProc *proc, ClientData clientData); /* 255 */ + void (*tcl_UntraceVar2) (Tcl_Interp *interp, const char *part1, const char *part2, int flags, Tcl_VarTraceProc *proc, ClientData clientData); /* 256 */ + void (*tcl_UpdateLinkedVar) (Tcl_Interp *interp, const char *varName); /* 257 */ + int (*tcl_UpVar) (Tcl_Interp *interp, const char *frameName, const char *varName, const char *localName, int flags); /* 258 */ + int (*tcl_UpVar2) (Tcl_Interp *interp, const char *frameName, const char *part1, const char *part2, const char *localName, int flags); /* 259 */ + int (*tcl_VarEval) (Tcl_Interp *interp, ...); /* 260 */ + ClientData (*tcl_VarTraceInfo) (Tcl_Interp *interp, const char *varName, int flags, Tcl_VarTraceProc *procPtr, ClientData prevClientData); /* 261 */ + ClientData (*tcl_VarTraceInfo2) (Tcl_Interp *interp, const char *part1, const char *part2, int flags, Tcl_VarTraceProc *procPtr, ClientData prevClientData); /* 262 */ + int (*tcl_Write) (Tcl_Channel chan, const char *s, int slen); /* 263 */ + void (*tcl_WrongNumArgs) (Tcl_Interp *interp, int objc, Tcl_Obj *const objv[], const char *message); /* 264 */ + int (*tcl_DumpActiveMemory) (const char *fileName); /* 265 */ + void (*tcl_ValidateAllMemory) (const char *file, int line); /* 266 */ + void (*tcl_AppendResultVA) (Tcl_Interp *interp, va_list argList); /* 267 */ + void (*tcl_AppendStringsToObjVA) (Tcl_Obj *objPtr, va_list argList); /* 268 */ + char * (*tcl_HashStats) (Tcl_HashTable *tablePtr); /* 269 */ + CONST84_RETURN char * (*tcl_ParseVar) (Tcl_Interp *interp, const char *start, CONST84 char **termPtr); /* 270 */ + CONST84_RETURN char * (*tcl_PkgPresent) (Tcl_Interp *interp, const char *name, const char *version, int exact); /* 271 */ + CONST84_RETURN char * (*tcl_PkgPresentEx) (Tcl_Interp *interp, const char *name, const char *version, int exact, void *clientDataPtr); /* 272 */ + int (*tcl_PkgProvide) (Tcl_Interp *interp, const char *name, const char *version); /* 273 */ + CONST84_RETURN char * (*tcl_PkgRequire) (Tcl_Interp *interp, const char *name, const char *version, int exact); /* 274 */ + void (*tcl_SetErrorCodeVA) (Tcl_Interp *interp, va_list argList); /* 275 */ + int (*tcl_VarEvalVA) (Tcl_Interp *interp, va_list argList); /* 276 */ + Tcl_Pid (*tcl_WaitPid) (Tcl_Pid pid, int *statPtr, int options); /* 277 */ + void (*tcl_PanicVA) (const char *format, va_list argList); /* 278 */ + void (*tcl_GetVersion) (int *major, int *minor, int *patchLevel, int *type); /* 279 */ + void (*tcl_InitMemory) (Tcl_Interp *interp); /* 280 */ + Tcl_Channel (*tcl_StackChannel) (Tcl_Interp *interp, const Tcl_ChannelType *typePtr, ClientData instanceData, 
int mask, Tcl_Channel prevChan); /* 281 */ + int (*tcl_UnstackChannel) (Tcl_Interp *interp, Tcl_Channel chan); /* 282 */ + Tcl_Channel (*tcl_GetStackedChannel) (Tcl_Channel chan); /* 283 */ + void (*tcl_SetMainLoop) (Tcl_MainLoopProc *proc); /* 284 */ + void (*reserved285)(void); + void (*tcl_AppendObjToObj) (Tcl_Obj *objPtr, Tcl_Obj *appendObjPtr); /* 286 */ + Tcl_Encoding (*tcl_CreateEncoding) (const Tcl_EncodingType *typePtr); /* 287 */ + void (*tcl_CreateThreadExitHandler) (Tcl_ExitProc *proc, ClientData clientData); /* 288 */ + void (*tcl_DeleteThreadExitHandler) (Tcl_ExitProc *proc, ClientData clientData); /* 289 */ + void (*tcl_DiscardResult) (Tcl_SavedResult *statePtr); /* 290 */ + int (*tcl_EvalEx) (Tcl_Interp *interp, const char *script, int numBytes, int flags); /* 291 */ + int (*tcl_EvalObjv) (Tcl_Interp *interp, int objc, Tcl_Obj *const objv[], int flags); /* 292 */ + int (*tcl_EvalObjEx) (Tcl_Interp *interp, Tcl_Obj *objPtr, int flags); /* 293 */ + void (*tcl_ExitThread) (int status); /* 294 */ + int (*tcl_ExternalToUtf) (Tcl_Interp *interp, Tcl_Encoding encoding, const char *src, int srcLen, int flags, Tcl_EncodingState *statePtr, char *dst, int dstLen, int *srcReadPtr, int *dstWrotePtr, int *dstCharsPtr); /* 295 */ + char * (*tcl_ExternalToUtfDString) (Tcl_Encoding encoding, const char *src, int srcLen, Tcl_DString *dsPtr); /* 296 */ + void (*tcl_FinalizeThread) (void); /* 297 */ + void (*tcl_FinalizeNotifier) (ClientData clientData); /* 298 */ + void (*tcl_FreeEncoding) (Tcl_Encoding encoding); /* 299 */ + Tcl_ThreadId (*tcl_GetCurrentThread) (void); /* 300 */ + Tcl_Encoding (*tcl_GetEncoding) (Tcl_Interp *interp, const char *name); /* 301 */ + CONST84_RETURN char * (*tcl_GetEncodingName) (Tcl_Encoding encoding); /* 302 */ + void (*tcl_GetEncodingNames) (Tcl_Interp *interp); /* 303 */ + int (*tcl_GetIndexFromObjStruct) (Tcl_Interp *interp, Tcl_Obj *objPtr, const void *tablePtr, int offset, const char *msg, int flags, int *indexPtr); /* 304 */ + void * (*tcl_GetThreadData) (Tcl_ThreadDataKey *keyPtr, int size); /* 305 */ + Tcl_Obj * (*tcl_GetVar2Ex) (Tcl_Interp *interp, const char *part1, const char *part2, int flags); /* 306 */ + ClientData (*tcl_InitNotifier) (void); /* 307 */ + void (*tcl_MutexLock) (Tcl_Mutex *mutexPtr); /* 308 */ + void (*tcl_MutexUnlock) (Tcl_Mutex *mutexPtr); /* 309 */ + void (*tcl_ConditionNotify) (Tcl_Condition *condPtr); /* 310 */ + void (*tcl_ConditionWait) (Tcl_Condition *condPtr, Tcl_Mutex *mutexPtr, const Tcl_Time *timePtr); /* 311 */ + int (*tcl_NumUtfChars) (const char *src, int length); /* 312 */ + int (*tcl_ReadChars) (Tcl_Channel channel, Tcl_Obj *objPtr, int charsToRead, int appendFlag); /* 313 */ + void (*tcl_RestoreResult) (Tcl_Interp *interp, Tcl_SavedResult *statePtr); /* 314 */ + void (*tcl_SaveResult) (Tcl_Interp *interp, Tcl_SavedResult *statePtr); /* 315 */ + int (*tcl_SetSystemEncoding) (Tcl_Interp *interp, const char *name); /* 316 */ + Tcl_Obj * (*tcl_SetVar2Ex) (Tcl_Interp *interp, const char *part1, const char *part2, Tcl_Obj *newValuePtr, int flags); /* 317 */ + void (*tcl_ThreadAlert) (Tcl_ThreadId threadId); /* 318 */ + void (*tcl_ThreadQueueEvent) (Tcl_ThreadId threadId, Tcl_Event *evPtr, Tcl_QueuePosition position); /* 319 */ + Tcl_UniChar (*tcl_UniCharAtIndex) (const char *src, int index); /* 320 */ + Tcl_UniChar (*tcl_UniCharToLower) (int ch); /* 321 */ + Tcl_UniChar (*tcl_UniCharToTitle) (int ch); /* 322 */ + Tcl_UniChar (*tcl_UniCharToUpper) (int ch); /* 323 */ + int (*tcl_UniCharToUtf) (int ch, char *buf); /* 324 
*/ + CONST84_RETURN char * (*tcl_UtfAtIndex) (const char *src, int index); /* 325 */ + int (*tcl_UtfCharComplete) (const char *src, int length); /* 326 */ + int (*tcl_UtfBackslash) (const char *src, int *readPtr, char *dst); /* 327 */ + CONST84_RETURN char * (*tcl_UtfFindFirst) (const char *src, int ch); /* 328 */ + CONST84_RETURN char * (*tcl_UtfFindLast) (const char *src, int ch); /* 329 */ + CONST84_RETURN char * (*tcl_UtfNext) (const char *src); /* 330 */ + CONST84_RETURN char * (*tcl_UtfPrev) (const char *src, const char *start); /* 331 */ + int (*tcl_UtfToExternal) (Tcl_Interp *interp, Tcl_Encoding encoding, const char *src, int srcLen, int flags, Tcl_EncodingState *statePtr, char *dst, int dstLen, int *srcReadPtr, int *dstWrotePtr, int *dstCharsPtr); /* 332 */ + char * (*tcl_UtfToExternalDString) (Tcl_Encoding encoding, const char *src, int srcLen, Tcl_DString *dsPtr); /* 333 */ + int (*tcl_UtfToLower) (char *src); /* 334 */ + int (*tcl_UtfToTitle) (char *src); /* 335 */ + int (*tcl_UtfToUniChar) (const char *src, Tcl_UniChar *chPtr); /* 336 */ + int (*tcl_UtfToUpper) (char *src); /* 337 */ + int (*tcl_WriteChars) (Tcl_Channel chan, const char *src, int srcLen); /* 338 */ + int (*tcl_WriteObj) (Tcl_Channel chan, Tcl_Obj *objPtr); /* 339 */ + char * (*tcl_GetString) (Tcl_Obj *objPtr); /* 340 */ + CONST84_RETURN char * (*tcl_GetDefaultEncodingDir) (void); /* 341 */ + void (*tcl_SetDefaultEncodingDir) (const char *path); /* 342 */ + void (*tcl_AlertNotifier) (ClientData clientData); /* 343 */ + void (*tcl_ServiceModeHook) (int mode); /* 344 */ + int (*tcl_UniCharIsAlnum) (int ch); /* 345 */ + int (*tcl_UniCharIsAlpha) (int ch); /* 346 */ + int (*tcl_UniCharIsDigit) (int ch); /* 347 */ + int (*tcl_UniCharIsLower) (int ch); /* 348 */ + int (*tcl_UniCharIsSpace) (int ch); /* 349 */ + int (*tcl_UniCharIsUpper) (int ch); /* 350 */ + int (*tcl_UniCharIsWordChar) (int ch); /* 351 */ + int (*tcl_UniCharLen) (const Tcl_UniChar *uniStr); /* 352 */ + int (*tcl_UniCharNcmp) (const Tcl_UniChar *ucs, const Tcl_UniChar *uct, unsigned long numChars); /* 353 */ + char * (*tcl_UniCharToUtfDString) (const Tcl_UniChar *uniStr, int uniLength, Tcl_DString *dsPtr); /* 354 */ + Tcl_UniChar * (*tcl_UtfToUniCharDString) (const char *src, int length, Tcl_DString *dsPtr); /* 355 */ + Tcl_RegExp (*tcl_GetRegExpFromObj) (Tcl_Interp *interp, Tcl_Obj *patObj, int flags); /* 356 */ + Tcl_Obj * (*tcl_EvalTokens) (Tcl_Interp *interp, Tcl_Token *tokenPtr, int count); /* 357 */ + void (*tcl_FreeParse) (Tcl_Parse *parsePtr); /* 358 */ + void (*tcl_LogCommandInfo) (Tcl_Interp *interp, const char *script, const char *command, int length); /* 359 */ + int (*tcl_ParseBraces) (Tcl_Interp *interp, const char *start, int numBytes, Tcl_Parse *parsePtr, int append, CONST84 char **termPtr); /* 360 */ + int (*tcl_ParseCommand) (Tcl_Interp *interp, const char *start, int numBytes, int nested, Tcl_Parse *parsePtr); /* 361 */ + int (*tcl_ParseExpr) (Tcl_Interp *interp, const char *start, int numBytes, Tcl_Parse *parsePtr); /* 362 */ + int (*tcl_ParseQuotedString) (Tcl_Interp *interp, const char *start, int numBytes, Tcl_Parse *parsePtr, int append, CONST84 char **termPtr); /* 363 */ + int (*tcl_ParseVarName) (Tcl_Interp *interp, const char *start, int numBytes, Tcl_Parse *parsePtr, int append); /* 364 */ + char * (*tcl_GetCwd) (Tcl_Interp *interp, Tcl_DString *cwdPtr); /* 365 */ + int (*tcl_Chdir) (const char *dirName); /* 366 */ + int (*tcl_Access) (const char *path, int mode); /* 367 */ + int (*tcl_Stat) (const char *path, struct stat 
*bufPtr); /* 368 */ + int (*tcl_UtfNcmp) (const char *s1, const char *s2, unsigned long n); /* 369 */ + int (*tcl_UtfNcasecmp) (const char *s1, const char *s2, unsigned long n); /* 370 */ + int (*tcl_StringCaseMatch) (const char *str, const char *pattern, int nocase); /* 371 */ + int (*tcl_UniCharIsControl) (int ch); /* 372 */ + int (*tcl_UniCharIsGraph) (int ch); /* 373 */ + int (*tcl_UniCharIsPrint) (int ch); /* 374 */ + int (*tcl_UniCharIsPunct) (int ch); /* 375 */ + int (*tcl_RegExpExecObj) (Tcl_Interp *interp, Tcl_RegExp regexp, Tcl_Obj *textObj, int offset, int nmatches, int flags); /* 376 */ + void (*tcl_RegExpGetInfo) (Tcl_RegExp regexp, Tcl_RegExpInfo *infoPtr); /* 377 */ + Tcl_Obj * (*tcl_NewUnicodeObj) (const Tcl_UniChar *unicode, int numChars); /* 378 */ + void (*tcl_SetUnicodeObj) (Tcl_Obj *objPtr, const Tcl_UniChar *unicode, int numChars); /* 379 */ + int (*tcl_GetCharLength) (Tcl_Obj *objPtr); /* 380 */ + Tcl_UniChar (*tcl_GetUniChar) (Tcl_Obj *objPtr, int index); /* 381 */ + Tcl_UniChar * (*tcl_GetUnicode) (Tcl_Obj *objPtr); /* 382 */ + Tcl_Obj * (*tcl_GetRange) (Tcl_Obj *objPtr, int first, int last); /* 383 */ + void (*tcl_AppendUnicodeToObj) (Tcl_Obj *objPtr, const Tcl_UniChar *unicode, int length); /* 384 */ + int (*tcl_RegExpMatchObj) (Tcl_Interp *interp, Tcl_Obj *textObj, Tcl_Obj *patternObj); /* 385 */ + void (*tcl_SetNotifier) (Tcl_NotifierProcs *notifierProcPtr); /* 386 */ + Tcl_Mutex * (*tcl_GetAllocMutex) (void); /* 387 */ + int (*tcl_GetChannelNames) (Tcl_Interp *interp); /* 388 */ + int (*tcl_GetChannelNamesEx) (Tcl_Interp *interp, const char *pattern); /* 389 */ + int (*tcl_ProcObjCmd) (ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[]); /* 390 */ + void (*tcl_ConditionFinalize) (Tcl_Condition *condPtr); /* 391 */ + void (*tcl_MutexFinalize) (Tcl_Mutex *mutex); /* 392 */ + int (*tcl_CreateThread) (Tcl_ThreadId *idPtr, Tcl_ThreadCreateProc *proc, ClientData clientData, int stackSize, int flags); /* 393 */ + int (*tcl_ReadRaw) (Tcl_Channel chan, char *dst, int bytesToRead); /* 394 */ + int (*tcl_WriteRaw) (Tcl_Channel chan, const char *src, int srcLen); /* 395 */ + Tcl_Channel (*tcl_GetTopChannel) (Tcl_Channel chan); /* 396 */ + int (*tcl_ChannelBuffered) (Tcl_Channel chan); /* 397 */ + CONST84_RETURN char * (*tcl_ChannelName) (const Tcl_ChannelType *chanTypePtr); /* 398 */ + Tcl_ChannelTypeVersion (*tcl_ChannelVersion) (const Tcl_ChannelType *chanTypePtr); /* 399 */ + Tcl_DriverBlockModeProc * (*tcl_ChannelBlockModeProc) (const Tcl_ChannelType *chanTypePtr); /* 400 */ + Tcl_DriverCloseProc * (*tcl_ChannelCloseProc) (const Tcl_ChannelType *chanTypePtr); /* 401 */ + Tcl_DriverClose2Proc * (*tcl_ChannelClose2Proc) (const Tcl_ChannelType *chanTypePtr); /* 402 */ + Tcl_DriverInputProc * (*tcl_ChannelInputProc) (const Tcl_ChannelType *chanTypePtr); /* 403 */ + Tcl_DriverOutputProc * (*tcl_ChannelOutputProc) (const Tcl_ChannelType *chanTypePtr); /* 404 */ + Tcl_DriverSeekProc * (*tcl_ChannelSeekProc) (const Tcl_ChannelType *chanTypePtr); /* 405 */ + Tcl_DriverSetOptionProc * (*tcl_ChannelSetOptionProc) (const Tcl_ChannelType *chanTypePtr); /* 406 */ + Tcl_DriverGetOptionProc * (*tcl_ChannelGetOptionProc) (const Tcl_ChannelType *chanTypePtr); /* 407 */ + Tcl_DriverWatchProc * (*tcl_ChannelWatchProc) (const Tcl_ChannelType *chanTypePtr); /* 408 */ + Tcl_DriverGetHandleProc * (*tcl_ChannelGetHandleProc) (const Tcl_ChannelType *chanTypePtr); /* 409 */ + Tcl_DriverFlushProc * (*tcl_ChannelFlushProc) (const Tcl_ChannelType *chanTypePtr); /* 410 */ + 
Tcl_DriverHandlerProc * (*tcl_ChannelHandlerProc) (const Tcl_ChannelType *chanTypePtr); /* 411 */ + int (*tcl_JoinThread) (Tcl_ThreadId threadId, int *result); /* 412 */ + int (*tcl_IsChannelShared) (Tcl_Channel channel); /* 413 */ + int (*tcl_IsChannelRegistered) (Tcl_Interp *interp, Tcl_Channel channel); /* 414 */ + void (*tcl_CutChannel) (Tcl_Channel channel); /* 415 */ + void (*tcl_SpliceChannel) (Tcl_Channel channel); /* 416 */ + void (*tcl_ClearChannelHandlers) (Tcl_Channel channel); /* 417 */ + int (*tcl_IsChannelExisting) (const char *channelName); /* 418 */ + int (*tcl_UniCharNcasecmp) (const Tcl_UniChar *ucs, const Tcl_UniChar *uct, unsigned long numChars); /* 419 */ + int (*tcl_UniCharCaseMatch) (const Tcl_UniChar *uniStr, const Tcl_UniChar *uniPattern, int nocase); /* 420 */ + Tcl_HashEntry * (*tcl_FindHashEntry) (Tcl_HashTable *tablePtr, const void *key); /* 421 */ + Tcl_HashEntry * (*tcl_CreateHashEntry) (Tcl_HashTable *tablePtr, const void *key, int *newPtr); /* 422 */ + void (*tcl_InitCustomHashTable) (Tcl_HashTable *tablePtr, int keyType, const Tcl_HashKeyType *typePtr); /* 423 */ + void (*tcl_InitObjHashTable) (Tcl_HashTable *tablePtr); /* 424 */ + ClientData (*tcl_CommandTraceInfo) (Tcl_Interp *interp, const char *varName, int flags, Tcl_CommandTraceProc *procPtr, ClientData prevClientData); /* 425 */ + int (*tcl_TraceCommand) (Tcl_Interp *interp, const char *varName, int flags, Tcl_CommandTraceProc *proc, ClientData clientData); /* 426 */ + void (*tcl_UntraceCommand) (Tcl_Interp *interp, const char *varName, int flags, Tcl_CommandTraceProc *proc, ClientData clientData); /* 427 */ + char * (*tcl_AttemptAlloc) (unsigned int size); /* 428 */ + char * (*tcl_AttemptDbCkalloc) (unsigned int size, const char *file, int line); /* 429 */ + char * (*tcl_AttemptRealloc) (char *ptr, unsigned int size); /* 430 */ + char * (*tcl_AttemptDbCkrealloc) (char *ptr, unsigned int size, const char *file, int line); /* 431 */ + int (*tcl_AttemptSetObjLength) (Tcl_Obj *objPtr, int length); /* 432 */ + Tcl_ThreadId (*tcl_GetChannelThread) (Tcl_Channel channel); /* 433 */ + Tcl_UniChar * (*tcl_GetUnicodeFromObj) (Tcl_Obj *objPtr, int *lengthPtr); /* 434 */ + int (*tcl_GetMathFuncInfo) (Tcl_Interp *interp, const char *name, int *numArgsPtr, Tcl_ValueType **argTypesPtr, Tcl_MathProc **procPtr, ClientData *clientDataPtr); /* 435 */ + Tcl_Obj * (*tcl_ListMathFuncs) (Tcl_Interp *interp, const char *pattern); /* 436 */ + Tcl_Obj * (*tcl_SubstObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, int flags); /* 437 */ + int (*tcl_DetachChannel) (Tcl_Interp *interp, Tcl_Channel channel); /* 438 */ + int (*tcl_IsStandardChannel) (Tcl_Channel channel); /* 439 */ + int (*tcl_FSCopyFile) (Tcl_Obj *srcPathPtr, Tcl_Obj *destPathPtr); /* 440 */ + int (*tcl_FSCopyDirectory) (Tcl_Obj *srcPathPtr, Tcl_Obj *destPathPtr, Tcl_Obj **errorPtr); /* 441 */ + int (*tcl_FSCreateDirectory) (Tcl_Obj *pathPtr); /* 442 */ + int (*tcl_FSDeleteFile) (Tcl_Obj *pathPtr); /* 443 */ + int (*tcl_FSLoadFile) (Tcl_Interp *interp, Tcl_Obj *pathPtr, const char *sym1, const char *sym2, Tcl_PackageInitProc **proc1Ptr, Tcl_PackageInitProc **proc2Ptr, Tcl_LoadHandle *handlePtr, Tcl_FSUnloadFileProc **unloadProcPtr); /* 444 */ + int (*tcl_FSMatchInDirectory) (Tcl_Interp *interp, Tcl_Obj *result, Tcl_Obj *pathPtr, const char *pattern, Tcl_GlobTypeData *types); /* 445 */ + Tcl_Obj * (*tcl_FSLink) (Tcl_Obj *pathPtr, Tcl_Obj *toPtr, int linkAction); /* 446 */ + int (*tcl_FSRemoveDirectory) (Tcl_Obj *pathPtr, int recursive, Tcl_Obj **errorPtr); /* 447 */ + 
int (*tcl_FSRenameFile) (Tcl_Obj *srcPathPtr, Tcl_Obj *destPathPtr); /* 448 */ + int (*tcl_FSLstat) (Tcl_Obj *pathPtr, Tcl_StatBuf *buf); /* 449 */ + int (*tcl_FSUtime) (Tcl_Obj *pathPtr, struct utimbuf *tval); /* 450 */ + int (*tcl_FSFileAttrsGet) (Tcl_Interp *interp, int index, Tcl_Obj *pathPtr, Tcl_Obj **objPtrRef); /* 451 */ + int (*tcl_FSFileAttrsSet) (Tcl_Interp *interp, int index, Tcl_Obj *pathPtr, Tcl_Obj *objPtr); /* 452 */ + const char *CONST86 * (*tcl_FSFileAttrStrings) (Tcl_Obj *pathPtr, Tcl_Obj **objPtrRef); /* 453 */ + int (*tcl_FSStat) (Tcl_Obj *pathPtr, Tcl_StatBuf *buf); /* 454 */ + int (*tcl_FSAccess) (Tcl_Obj *pathPtr, int mode); /* 455 */ + Tcl_Channel (*tcl_FSOpenFileChannel) (Tcl_Interp *interp, Tcl_Obj *pathPtr, const char *modeString, int permissions); /* 456 */ + Tcl_Obj * (*tcl_FSGetCwd) (Tcl_Interp *interp); /* 457 */ + int (*tcl_FSChdir) (Tcl_Obj *pathPtr); /* 458 */ + int (*tcl_FSConvertToPathType) (Tcl_Interp *interp, Tcl_Obj *pathPtr); /* 459 */ + Tcl_Obj * (*tcl_FSJoinPath) (Tcl_Obj *listObj, int elements); /* 460 */ + Tcl_Obj * (*tcl_FSSplitPath) (Tcl_Obj *pathPtr, int *lenPtr); /* 461 */ + int (*tcl_FSEqualPaths) (Tcl_Obj *firstPtr, Tcl_Obj *secondPtr); /* 462 */ + Tcl_Obj * (*tcl_FSGetNormalizedPath) (Tcl_Interp *interp, Tcl_Obj *pathPtr); /* 463 */ + Tcl_Obj * (*tcl_FSJoinToPath) (Tcl_Obj *pathPtr, int objc, Tcl_Obj *const objv[]); /* 464 */ + ClientData (*tcl_FSGetInternalRep) (Tcl_Obj *pathPtr, const Tcl_Filesystem *fsPtr); /* 465 */ + Tcl_Obj * (*tcl_FSGetTranslatedPath) (Tcl_Interp *interp, Tcl_Obj *pathPtr); /* 466 */ + int (*tcl_FSEvalFile) (Tcl_Interp *interp, Tcl_Obj *fileName); /* 467 */ + Tcl_Obj * (*tcl_FSNewNativePath) (const Tcl_Filesystem *fromFilesystem, ClientData clientData); /* 468 */ + const void * (*tcl_FSGetNativePath) (Tcl_Obj *pathPtr); /* 469 */ + Tcl_Obj * (*tcl_FSFileSystemInfo) (Tcl_Obj *pathPtr); /* 470 */ + Tcl_Obj * (*tcl_FSPathSeparator) (Tcl_Obj *pathPtr); /* 471 */ + Tcl_Obj * (*tcl_FSListVolumes) (void); /* 472 */ + int (*tcl_FSRegister) (ClientData clientData, const Tcl_Filesystem *fsPtr); /* 473 */ + int (*tcl_FSUnregister) (const Tcl_Filesystem *fsPtr); /* 474 */ + ClientData (*tcl_FSData) (const Tcl_Filesystem *fsPtr); /* 475 */ + const char * (*tcl_FSGetTranslatedStringPath) (Tcl_Interp *interp, Tcl_Obj *pathPtr); /* 476 */ + CONST86 Tcl_Filesystem * (*tcl_FSGetFileSystemForPath) (Tcl_Obj *pathPtr); /* 477 */ + Tcl_PathType (*tcl_FSGetPathType) (Tcl_Obj *pathPtr); /* 478 */ + int (*tcl_OutputBuffered) (Tcl_Channel chan); /* 479 */ + void (*tcl_FSMountsChanged) (const Tcl_Filesystem *fsPtr); /* 480 */ + int (*tcl_EvalTokensStandard) (Tcl_Interp *interp, Tcl_Token *tokenPtr, int count); /* 481 */ + void (*tcl_GetTime) (Tcl_Time *timeBuf); /* 482 */ + Tcl_Trace (*tcl_CreateObjTrace) (Tcl_Interp *interp, int level, int flags, Tcl_CmdObjTraceProc *objProc, ClientData clientData, Tcl_CmdObjTraceDeleteProc *delProc); /* 483 */ + int (*tcl_GetCommandInfoFromToken) (Tcl_Command token, Tcl_CmdInfo *infoPtr); /* 484 */ + int (*tcl_SetCommandInfoFromToken) (Tcl_Command token, const Tcl_CmdInfo *infoPtr); /* 485 */ + Tcl_Obj * (*tcl_DbNewWideIntObj) (Tcl_WideInt wideValue, const char *file, int line); /* 486 */ + int (*tcl_GetWideIntFromObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, Tcl_WideInt *widePtr); /* 487 */ + Tcl_Obj * (*tcl_NewWideIntObj) (Tcl_WideInt wideValue); /* 488 */ + void (*tcl_SetWideIntObj) (Tcl_Obj *objPtr, Tcl_WideInt wideValue); /* 489 */ + Tcl_StatBuf * (*tcl_AllocStatBuf) (void); /* 490 */ + Tcl_WideInt 
(*tcl_Seek) (Tcl_Channel chan, Tcl_WideInt offset, int mode); /* 491 */ + Tcl_WideInt (*tcl_Tell) (Tcl_Channel chan); /* 492 */ + Tcl_DriverWideSeekProc * (*tcl_ChannelWideSeekProc) (const Tcl_ChannelType *chanTypePtr); /* 493 */ + int (*tcl_DictObjPut) (Tcl_Interp *interp, Tcl_Obj *dictPtr, Tcl_Obj *keyPtr, Tcl_Obj *valuePtr); /* 494 */ + int (*tcl_DictObjGet) (Tcl_Interp *interp, Tcl_Obj *dictPtr, Tcl_Obj *keyPtr, Tcl_Obj **valuePtrPtr); /* 495 */ + int (*tcl_DictObjRemove) (Tcl_Interp *interp, Tcl_Obj *dictPtr, Tcl_Obj *keyPtr); /* 496 */ + int (*tcl_DictObjSize) (Tcl_Interp *interp, Tcl_Obj *dictPtr, int *sizePtr); /* 497 */ + int (*tcl_DictObjFirst) (Tcl_Interp *interp, Tcl_Obj *dictPtr, Tcl_DictSearch *searchPtr, Tcl_Obj **keyPtrPtr, Tcl_Obj **valuePtrPtr, int *donePtr); /* 498 */ + void (*tcl_DictObjNext) (Tcl_DictSearch *searchPtr, Tcl_Obj **keyPtrPtr, Tcl_Obj **valuePtrPtr, int *donePtr); /* 499 */ + void (*tcl_DictObjDone) (Tcl_DictSearch *searchPtr); /* 500 */ + int (*tcl_DictObjPutKeyList) (Tcl_Interp *interp, Tcl_Obj *dictPtr, int keyc, Tcl_Obj *const *keyv, Tcl_Obj *valuePtr); /* 501 */ + int (*tcl_DictObjRemoveKeyList) (Tcl_Interp *interp, Tcl_Obj *dictPtr, int keyc, Tcl_Obj *const *keyv); /* 502 */ + Tcl_Obj * (*tcl_NewDictObj) (void); /* 503 */ + Tcl_Obj * (*tcl_DbNewDictObj) (const char *file, int line); /* 504 */ + void (*tcl_RegisterConfig) (Tcl_Interp *interp, const char *pkgName, const Tcl_Config *configuration, const char *valEncoding); /* 505 */ + Tcl_Namespace * (*tcl_CreateNamespace) (Tcl_Interp *interp, const char *name, ClientData clientData, Tcl_NamespaceDeleteProc *deleteProc); /* 506 */ + void (*tcl_DeleteNamespace) (Tcl_Namespace *nsPtr); /* 507 */ + int (*tcl_AppendExportList) (Tcl_Interp *interp, Tcl_Namespace *nsPtr, Tcl_Obj *objPtr); /* 508 */ + int (*tcl_Export) (Tcl_Interp *interp, Tcl_Namespace *nsPtr, const char *pattern, int resetListFirst); /* 509 */ + int (*tcl_Import) (Tcl_Interp *interp, Tcl_Namespace *nsPtr, const char *pattern, int allowOverwrite); /* 510 */ + int (*tcl_ForgetImport) (Tcl_Interp *interp, Tcl_Namespace *nsPtr, const char *pattern); /* 511 */ + Tcl_Namespace * (*tcl_GetCurrentNamespace) (Tcl_Interp *interp); /* 512 */ + Tcl_Namespace * (*tcl_GetGlobalNamespace) (Tcl_Interp *interp); /* 513 */ + Tcl_Namespace * (*tcl_FindNamespace) (Tcl_Interp *interp, const char *name, Tcl_Namespace *contextNsPtr, int flags); /* 514 */ + Tcl_Command (*tcl_FindCommand) (Tcl_Interp *interp, const char *name, Tcl_Namespace *contextNsPtr, int flags); /* 515 */ + Tcl_Command (*tcl_GetCommandFromObj) (Tcl_Interp *interp, Tcl_Obj *objPtr); /* 516 */ + void (*tcl_GetCommandFullName) (Tcl_Interp *interp, Tcl_Command command, Tcl_Obj *objPtr); /* 517 */ + int (*tcl_FSEvalFileEx) (Tcl_Interp *interp, Tcl_Obj *fileName, const char *encodingName); /* 518 */ + Tcl_ExitProc * (*tcl_SetExitProc) (Tcl_ExitProc *proc); /* 519 */ + void (*tcl_LimitAddHandler) (Tcl_Interp *interp, int type, Tcl_LimitHandlerProc *handlerProc, ClientData clientData, Tcl_LimitHandlerDeleteProc *deleteProc); /* 520 */ + void (*tcl_LimitRemoveHandler) (Tcl_Interp *interp, int type, Tcl_LimitHandlerProc *handlerProc, ClientData clientData); /* 521 */ + int (*tcl_LimitReady) (Tcl_Interp *interp); /* 522 */ + int (*tcl_LimitCheck) (Tcl_Interp *interp); /* 523 */ + int (*tcl_LimitExceeded) (Tcl_Interp *interp); /* 524 */ + void (*tcl_LimitSetCommands) (Tcl_Interp *interp, int commandLimit); /* 525 */ + void (*tcl_LimitSetTime) (Tcl_Interp *interp, Tcl_Time *timeLimitPtr); /* 526 */ + void 
(*tcl_LimitSetGranularity) (Tcl_Interp *interp, int type, int granularity); /* 527 */ + int (*tcl_LimitTypeEnabled) (Tcl_Interp *interp, int type); /* 528 */ + int (*tcl_LimitTypeExceeded) (Tcl_Interp *interp, int type); /* 529 */ + void (*tcl_LimitTypeSet) (Tcl_Interp *interp, int type); /* 530 */ + void (*tcl_LimitTypeReset) (Tcl_Interp *interp, int type); /* 531 */ + int (*tcl_LimitGetCommands) (Tcl_Interp *interp); /* 532 */ + void (*tcl_LimitGetTime) (Tcl_Interp *interp, Tcl_Time *timeLimitPtr); /* 533 */ + int (*tcl_LimitGetGranularity) (Tcl_Interp *interp, int type); /* 534 */ + Tcl_InterpState (*tcl_SaveInterpState) (Tcl_Interp *interp, int status); /* 535 */ + int (*tcl_RestoreInterpState) (Tcl_Interp *interp, Tcl_InterpState state); /* 536 */ + void (*tcl_DiscardInterpState) (Tcl_InterpState state); /* 537 */ + int (*tcl_SetReturnOptions) (Tcl_Interp *interp, Tcl_Obj *options); /* 538 */ + Tcl_Obj * (*tcl_GetReturnOptions) (Tcl_Interp *interp, int result); /* 539 */ + int (*tcl_IsEnsemble) (Tcl_Command token); /* 540 */ + Tcl_Command (*tcl_CreateEnsemble) (Tcl_Interp *interp, const char *name, Tcl_Namespace *namespacePtr, int flags); /* 541 */ + Tcl_Command (*tcl_FindEnsemble) (Tcl_Interp *interp, Tcl_Obj *cmdNameObj, int flags); /* 542 */ + int (*tcl_SetEnsembleSubcommandList) (Tcl_Interp *interp, Tcl_Command token, Tcl_Obj *subcmdList); /* 543 */ + int (*tcl_SetEnsembleMappingDict) (Tcl_Interp *interp, Tcl_Command token, Tcl_Obj *mapDict); /* 544 */ + int (*tcl_SetEnsembleUnknownHandler) (Tcl_Interp *interp, Tcl_Command token, Tcl_Obj *unknownList); /* 545 */ + int (*tcl_SetEnsembleFlags) (Tcl_Interp *interp, Tcl_Command token, int flags); /* 546 */ + int (*tcl_GetEnsembleSubcommandList) (Tcl_Interp *interp, Tcl_Command token, Tcl_Obj **subcmdListPtr); /* 547 */ + int (*tcl_GetEnsembleMappingDict) (Tcl_Interp *interp, Tcl_Command token, Tcl_Obj **mapDictPtr); /* 548 */ + int (*tcl_GetEnsembleUnknownHandler) (Tcl_Interp *interp, Tcl_Command token, Tcl_Obj **unknownListPtr); /* 549 */ + int (*tcl_GetEnsembleFlags) (Tcl_Interp *interp, Tcl_Command token, int *flagsPtr); /* 550 */ + int (*tcl_GetEnsembleNamespace) (Tcl_Interp *interp, Tcl_Command token, Tcl_Namespace **namespacePtrPtr); /* 551 */ + void (*tcl_SetTimeProc) (Tcl_GetTimeProc *getProc, Tcl_ScaleTimeProc *scaleProc, ClientData clientData); /* 552 */ + void (*tcl_QueryTimeProc) (Tcl_GetTimeProc **getProc, Tcl_ScaleTimeProc **scaleProc, ClientData *clientData); /* 553 */ + Tcl_DriverThreadActionProc * (*tcl_ChannelThreadActionProc) (const Tcl_ChannelType *chanTypePtr); /* 554 */ + Tcl_Obj * (*tcl_NewBignumObj) (mp_int *value); /* 555 */ + Tcl_Obj * (*tcl_DbNewBignumObj) (mp_int *value, const char *file, int line); /* 556 */ + void (*tcl_SetBignumObj) (Tcl_Obj *obj, mp_int *value); /* 557 */ + int (*tcl_GetBignumFromObj) (Tcl_Interp *interp, Tcl_Obj *obj, mp_int *value); /* 558 */ + int (*tcl_TakeBignumFromObj) (Tcl_Interp *interp, Tcl_Obj *obj, mp_int *value); /* 559 */ + int (*tcl_TruncateChannel) (Tcl_Channel chan, Tcl_WideInt length); /* 560 */ + Tcl_DriverTruncateProc * (*tcl_ChannelTruncateProc) (const Tcl_ChannelType *chanTypePtr); /* 561 */ + void (*tcl_SetChannelErrorInterp) (Tcl_Interp *interp, Tcl_Obj *msg); /* 562 */ + void (*tcl_GetChannelErrorInterp) (Tcl_Interp *interp, Tcl_Obj **msg); /* 563 */ + void (*tcl_SetChannelError) (Tcl_Channel chan, Tcl_Obj *msg); /* 564 */ + void (*tcl_GetChannelError) (Tcl_Channel chan, Tcl_Obj **msg); /* 565 */ + int (*tcl_InitBignumFromDouble) (Tcl_Interp *interp, double 
initval, mp_int *toInit); /* 566 */ + Tcl_Obj * (*tcl_GetNamespaceUnknownHandler) (Tcl_Interp *interp, Tcl_Namespace *nsPtr); /* 567 */ + int (*tcl_SetNamespaceUnknownHandler) (Tcl_Interp *interp, Tcl_Namespace *nsPtr, Tcl_Obj *handlerPtr); /* 568 */ + int (*tcl_GetEncodingFromObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, Tcl_Encoding *encodingPtr); /* 569 */ + Tcl_Obj * (*tcl_GetEncodingSearchPath) (void); /* 570 */ + int (*tcl_SetEncodingSearchPath) (Tcl_Obj *searchPath); /* 571 */ + const char * (*tcl_GetEncodingNameFromEnvironment) (Tcl_DString *bufPtr); /* 572 */ + int (*tcl_PkgRequireProc) (Tcl_Interp *interp, const char *name, int objc, Tcl_Obj *const objv[], void *clientDataPtr); /* 573 */ + void (*tcl_AppendObjToErrorInfo) (Tcl_Interp *interp, Tcl_Obj *objPtr); /* 574 */ + void (*tcl_AppendLimitedToObj) (Tcl_Obj *objPtr, const char *bytes, int length, int limit, const char *ellipsis); /* 575 */ + Tcl_Obj * (*tcl_Format) (Tcl_Interp *interp, const char *format, int objc, Tcl_Obj *const objv[]); /* 576 */ + int (*tcl_AppendFormatToObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, const char *format, int objc, Tcl_Obj *const objv[]); /* 577 */ + Tcl_Obj * (*tcl_ObjPrintf) (const char *format, ...) TCL_FORMAT_PRINTF(1, 2); /* 578 */ + void (*tcl_AppendPrintfToObj) (Tcl_Obj *objPtr, const char *format, ...) TCL_FORMAT_PRINTF(2, 3); /* 579 */ + int (*tcl_CancelEval) (Tcl_Interp *interp, Tcl_Obj *resultObjPtr, ClientData clientData, int flags); /* 580 */ + int (*tcl_Canceled) (Tcl_Interp *interp, int flags); /* 581 */ + int (*tcl_CreatePipe) (Tcl_Interp *interp, Tcl_Channel *rchan, Tcl_Channel *wchan, int flags); /* 582 */ + Tcl_Command (*tcl_NRCreateCommand) (Tcl_Interp *interp, const char *cmdName, Tcl_ObjCmdProc *proc, Tcl_ObjCmdProc *nreProc, ClientData clientData, Tcl_CmdDeleteProc *deleteProc); /* 583 */ + int (*tcl_NREvalObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, int flags); /* 584 */ + int (*tcl_NREvalObjv) (Tcl_Interp *interp, int objc, Tcl_Obj *const objv[], int flags); /* 585 */ + int (*tcl_NRCmdSwap) (Tcl_Interp *interp, Tcl_Command cmd, int objc, Tcl_Obj *const objv[], int flags); /* 586 */ + void (*tcl_NRAddCallback) (Tcl_Interp *interp, Tcl_NRPostProc *postProcPtr, ClientData data0, ClientData data1, ClientData data2, ClientData data3); /* 587 */ + int (*tcl_NRCallObjProc) (Tcl_Interp *interp, Tcl_ObjCmdProc *objProc, ClientData clientData, int objc, Tcl_Obj *const objv[]); /* 588 */ + unsigned (*tcl_GetFSDeviceFromStat) (const Tcl_StatBuf *statPtr); /* 589 */ + unsigned (*tcl_GetFSInodeFromStat) (const Tcl_StatBuf *statPtr); /* 590 */ + unsigned (*tcl_GetModeFromStat) (const Tcl_StatBuf *statPtr); /* 591 */ + int (*tcl_GetLinkCountFromStat) (const Tcl_StatBuf *statPtr); /* 592 */ + int (*tcl_GetUserIdFromStat) (const Tcl_StatBuf *statPtr); /* 593 */ + int (*tcl_GetGroupIdFromStat) (const Tcl_StatBuf *statPtr); /* 594 */ + int (*tcl_GetDeviceTypeFromStat) (const Tcl_StatBuf *statPtr); /* 595 */ + Tcl_WideInt (*tcl_GetAccessTimeFromStat) (const Tcl_StatBuf *statPtr); /* 596 */ + Tcl_WideInt (*tcl_GetModificationTimeFromStat) (const Tcl_StatBuf *statPtr); /* 597 */ + Tcl_WideInt (*tcl_GetChangeTimeFromStat) (const Tcl_StatBuf *statPtr); /* 598 */ + Tcl_WideUInt (*tcl_GetSizeFromStat) (const Tcl_StatBuf *statPtr); /* 599 */ + Tcl_WideUInt (*tcl_GetBlocksFromStat) (const Tcl_StatBuf *statPtr); /* 600 */ + unsigned (*tcl_GetBlockSizeFromStat) (const Tcl_StatBuf *statPtr); /* 601 */ + int (*tcl_SetEnsembleParameterList) (Tcl_Interp *interp, Tcl_Command token, Tcl_Obj *paramList); /* 602 
*/ + int (*tcl_GetEnsembleParameterList) (Tcl_Interp *interp, Tcl_Command token, Tcl_Obj **paramListPtr); /* 603 */ + int (*tcl_ParseArgsObjv) (Tcl_Interp *interp, const Tcl_ArgvInfo *argTable, int *objcPtr, Tcl_Obj *const *objv, Tcl_Obj ***remObjv); /* 604 */ + int (*tcl_GetErrorLine) (Tcl_Interp *interp); /* 605 */ + void (*tcl_SetErrorLine) (Tcl_Interp *interp, int lineNum); /* 606 */ + void (*tcl_TransferResult) (Tcl_Interp *sourceInterp, int result, Tcl_Interp *targetInterp); /* 607 */ + int (*tcl_InterpActive) (Tcl_Interp *interp); /* 608 */ + void (*tcl_BackgroundException) (Tcl_Interp *interp, int code); /* 609 */ + int (*tcl_ZlibDeflate) (Tcl_Interp *interp, int format, Tcl_Obj *data, int level, Tcl_Obj *gzipHeaderDictObj); /* 610 */ + int (*tcl_ZlibInflate) (Tcl_Interp *interp, int format, Tcl_Obj *data, int buffersize, Tcl_Obj *gzipHeaderDictObj); /* 611 */ + unsigned int (*tcl_ZlibCRC32) (unsigned int crc, const unsigned char *buf, int len); /* 612 */ + unsigned int (*tcl_ZlibAdler32) (unsigned int adler, const unsigned char *buf, int len); /* 613 */ + int (*tcl_ZlibStreamInit) (Tcl_Interp *interp, int mode, int format, int level, Tcl_Obj *dictObj, Tcl_ZlibStream *zshandle); /* 614 */ + Tcl_Obj * (*tcl_ZlibStreamGetCommandName) (Tcl_ZlibStream zshandle); /* 615 */ + int (*tcl_ZlibStreamEof) (Tcl_ZlibStream zshandle); /* 616 */ + int (*tcl_ZlibStreamChecksum) (Tcl_ZlibStream zshandle); /* 617 */ + int (*tcl_ZlibStreamPut) (Tcl_ZlibStream zshandle, Tcl_Obj *data, int flush); /* 618 */ + int (*tcl_ZlibStreamGet) (Tcl_ZlibStream zshandle, Tcl_Obj *data, int count); /* 619 */ + int (*tcl_ZlibStreamClose) (Tcl_ZlibStream zshandle); /* 620 */ + int (*tcl_ZlibStreamReset) (Tcl_ZlibStream zshandle); /* 621 */ + void (*tcl_SetStartupScript) (Tcl_Obj *path, const char *encoding); /* 622 */ + Tcl_Obj * (*tcl_GetStartupScript) (const char **encodingPtr); /* 623 */ + int (*tcl_CloseEx) (Tcl_Interp *interp, Tcl_Channel chan, int flags); /* 624 */ + int (*tcl_NRExprObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, Tcl_Obj *resultPtr); /* 625 */ + int (*tcl_NRSubstObj) (Tcl_Interp *interp, Tcl_Obj *objPtr, int flags); /* 626 */ + int (*tcl_LoadFile) (Tcl_Interp *interp, Tcl_Obj *pathPtr, const char *const symv[], int flags, void *procPtrs, Tcl_LoadHandle *handlePtr); /* 627 */ + void * (*tcl_FindSymbol) (Tcl_Interp *interp, Tcl_LoadHandle handle, const char *symbol); /* 628 */ + int (*tcl_FSUnloadFile) (Tcl_Interp *interp, Tcl_LoadHandle handlePtr); /* 629 */ + void (*tcl_ZlibStreamSetCompressionDictionary) (Tcl_ZlibStream zhandle, Tcl_Obj *compressionDictionaryObj); /* 630 */ +} TclStubs; + +#ifdef __cplusplus +extern "C" { +#endif +extern const TclStubs *tclStubsPtr; +#ifdef __cplusplus +} +#endif + +#if defined(USE_TCL_STUBS) + +/* + * Inline function declarations: + */ + +#define Tcl_PkgProvideEx \ + (tclStubsPtr->tcl_PkgProvideEx) /* 0 */ +#define Tcl_PkgRequireEx \ + (tclStubsPtr->tcl_PkgRequireEx) /* 1 */ +#define Tcl_Panic \ + (tclStubsPtr->tcl_Panic) /* 2 */ +#define Tcl_Alloc \ + (tclStubsPtr->tcl_Alloc) /* 3 */ +#define Tcl_Free \ + (tclStubsPtr->tcl_Free) /* 4 */ +#define Tcl_Realloc \ + (tclStubsPtr->tcl_Realloc) /* 5 */ +#define Tcl_DbCkalloc \ + (tclStubsPtr->tcl_DbCkalloc) /* 6 */ +#define Tcl_DbCkfree \ + (tclStubsPtr->tcl_DbCkfree) /* 7 */ +#define Tcl_DbCkrealloc \ + (tclStubsPtr->tcl_DbCkrealloc) /* 8 */ +#if !defined(__WIN32__) && !defined(MAC_OSX_TCL) /* UNIX */ +#define Tcl_CreateFileHandler \ + (tclStubsPtr->tcl_CreateFileHandler) /* 9 */ +#endif /* UNIX */ +#ifdef MAC_OSX_TCL 
/* MACOSX */ +#define Tcl_CreateFileHandler \ + (tclStubsPtr->tcl_CreateFileHandler) /* 9 */ +#endif /* MACOSX */ +#if !defined(__WIN32__) && !defined(MAC_OSX_TCL) /* UNIX */ +#define Tcl_DeleteFileHandler \ + (tclStubsPtr->tcl_DeleteFileHandler) /* 10 */ +#endif /* UNIX */ +#ifdef MAC_OSX_TCL /* MACOSX */ +#define Tcl_DeleteFileHandler \ + (tclStubsPtr->tcl_DeleteFileHandler) /* 10 */ +#endif /* MACOSX */ +#define Tcl_SetTimer \ + (tclStubsPtr->tcl_SetTimer) /* 11 */ +#define Tcl_Sleep \ + (tclStubsPtr->tcl_Sleep) /* 12 */ +#define Tcl_WaitForEvent \ + (tclStubsPtr->tcl_WaitForEvent) /* 13 */ +#define Tcl_AppendAllObjTypes \ + (tclStubsPtr->tcl_AppendAllObjTypes) /* 14 */ +#define Tcl_AppendStringsToObj \ + (tclStubsPtr->tcl_AppendStringsToObj) /* 15 */ +#define Tcl_AppendToObj \ + (tclStubsPtr->tcl_AppendToObj) /* 16 */ +#define Tcl_ConcatObj \ + (tclStubsPtr->tcl_ConcatObj) /* 17 */ +#define Tcl_ConvertToType \ + (tclStubsPtr->tcl_ConvertToType) /* 18 */ +#define Tcl_DbDecrRefCount \ + (tclStubsPtr->tcl_DbDecrRefCount) /* 19 */ +#define Tcl_DbIncrRefCount \ + (tclStubsPtr->tcl_DbIncrRefCount) /* 20 */ +#define Tcl_DbIsShared \ + (tclStubsPtr->tcl_DbIsShared) /* 21 */ +#define Tcl_DbNewBooleanObj \ + (tclStubsPtr->tcl_DbNewBooleanObj) /* 22 */ +#define Tcl_DbNewByteArrayObj \ + (tclStubsPtr->tcl_DbNewByteArrayObj) /* 23 */ +#define Tcl_DbNewDoubleObj \ + (tclStubsPtr->tcl_DbNewDoubleObj) /* 24 */ +#define Tcl_DbNewListObj \ + (tclStubsPtr->tcl_DbNewListObj) /* 25 */ +#define Tcl_DbNewLongObj \ + (tclStubsPtr->tcl_DbNewLongObj) /* 26 */ +#define Tcl_DbNewObj \ + (tclStubsPtr->tcl_DbNewObj) /* 27 */ +#define Tcl_DbNewStringObj \ + (tclStubsPtr->tcl_DbNewStringObj) /* 28 */ +#define Tcl_DuplicateObj \ + (tclStubsPtr->tcl_DuplicateObj) /* 29 */ +#define TclFreeObj \ + (tclStubsPtr->tclFreeObj) /* 30 */ +#define Tcl_GetBoolean \ + (tclStubsPtr->tcl_GetBoolean) /* 31 */ +#define Tcl_GetBooleanFromObj \ + (tclStubsPtr->tcl_GetBooleanFromObj) /* 32 */ +#define Tcl_GetByteArrayFromObj \ + (tclStubsPtr->tcl_GetByteArrayFromObj) /* 33 */ +#define Tcl_GetDouble \ + (tclStubsPtr->tcl_GetDouble) /* 34 */ +#define Tcl_GetDoubleFromObj \ + (tclStubsPtr->tcl_GetDoubleFromObj) /* 35 */ +#define Tcl_GetIndexFromObj \ + (tclStubsPtr->tcl_GetIndexFromObj) /* 36 */ +#define Tcl_GetInt \ + (tclStubsPtr->tcl_GetInt) /* 37 */ +#define Tcl_GetIntFromObj \ + (tclStubsPtr->tcl_GetIntFromObj) /* 38 */ +#define Tcl_GetLongFromObj \ + (tclStubsPtr->tcl_GetLongFromObj) /* 39 */ +#define Tcl_GetObjType \ + (tclStubsPtr->tcl_GetObjType) /* 40 */ +#define Tcl_GetStringFromObj \ + (tclStubsPtr->tcl_GetStringFromObj) /* 41 */ +#define Tcl_InvalidateStringRep \ + (tclStubsPtr->tcl_InvalidateStringRep) /* 42 */ +#define Tcl_ListObjAppendList \ + (tclStubsPtr->tcl_ListObjAppendList) /* 43 */ +#define Tcl_ListObjAppendElement \ + (tclStubsPtr->tcl_ListObjAppendElement) /* 44 */ +#define Tcl_ListObjGetElements \ + (tclStubsPtr->tcl_ListObjGetElements) /* 45 */ +#define Tcl_ListObjIndex \ + (tclStubsPtr->tcl_ListObjIndex) /* 46 */ +#define Tcl_ListObjLength \ + (tclStubsPtr->tcl_ListObjLength) /* 47 */ +#define Tcl_ListObjReplace \ + (tclStubsPtr->tcl_ListObjReplace) /* 48 */ +#define Tcl_NewBooleanObj \ + (tclStubsPtr->tcl_NewBooleanObj) /* 49 */ +#define Tcl_NewByteArrayObj \ + (tclStubsPtr->tcl_NewByteArrayObj) /* 50 */ +#define Tcl_NewDoubleObj \ + (tclStubsPtr->tcl_NewDoubleObj) /* 51 */ +#define Tcl_NewIntObj \ + (tclStubsPtr->tcl_NewIntObj) /* 52 */ +#define Tcl_NewListObj \ + (tclStubsPtr->tcl_NewListObj) /* 53 */ 
+#define Tcl_NewLongObj \ + (tclStubsPtr->tcl_NewLongObj) /* 54 */ +#define Tcl_NewObj \ + (tclStubsPtr->tcl_NewObj) /* 55 */ +#define Tcl_NewStringObj \ + (tclStubsPtr->tcl_NewStringObj) /* 56 */ +#define Tcl_SetBooleanObj \ + (tclStubsPtr->tcl_SetBooleanObj) /* 57 */ +#define Tcl_SetByteArrayLength \ + (tclStubsPtr->tcl_SetByteArrayLength) /* 58 */ +#define Tcl_SetByteArrayObj \ + (tclStubsPtr->tcl_SetByteArrayObj) /* 59 */ +#define Tcl_SetDoubleObj \ + (tclStubsPtr->tcl_SetDoubleObj) /* 60 */ +#define Tcl_SetIntObj \ + (tclStubsPtr->tcl_SetIntObj) /* 61 */ +#define Tcl_SetListObj \ + (tclStubsPtr->tcl_SetListObj) /* 62 */ +#define Tcl_SetLongObj \ + (tclStubsPtr->tcl_SetLongObj) /* 63 */ +#define Tcl_SetObjLength \ + (tclStubsPtr->tcl_SetObjLength) /* 64 */ +#define Tcl_SetStringObj \ + (tclStubsPtr->tcl_SetStringObj) /* 65 */ +#define Tcl_AddErrorInfo \ + (tclStubsPtr->tcl_AddErrorInfo) /* 66 */ +#define Tcl_AddObjErrorInfo \ + (tclStubsPtr->tcl_AddObjErrorInfo) /* 67 */ +#define Tcl_AllowExceptions \ + (tclStubsPtr->tcl_AllowExceptions) /* 68 */ +#define Tcl_AppendElement \ + (tclStubsPtr->tcl_AppendElement) /* 69 */ +#define Tcl_AppendResult \ + (tclStubsPtr->tcl_AppendResult) /* 70 */ +#define Tcl_AsyncCreate \ + (tclStubsPtr->tcl_AsyncCreate) /* 71 */ +#define Tcl_AsyncDelete \ + (tclStubsPtr->tcl_AsyncDelete) /* 72 */ +#define Tcl_AsyncInvoke \ + (tclStubsPtr->tcl_AsyncInvoke) /* 73 */ +#define Tcl_AsyncMark \ + (tclStubsPtr->tcl_AsyncMark) /* 74 */ +#define Tcl_AsyncReady \ + (tclStubsPtr->tcl_AsyncReady) /* 75 */ +#define Tcl_BackgroundError \ + (tclStubsPtr->tcl_BackgroundError) /* 76 */ +#define Tcl_Backslash \ + (tclStubsPtr->tcl_Backslash) /* 77 */ +#define Tcl_BadChannelOption \ + (tclStubsPtr->tcl_BadChannelOption) /* 78 */ +#define Tcl_CallWhenDeleted \ + (tclStubsPtr->tcl_CallWhenDeleted) /* 79 */ +#define Tcl_CancelIdleCall \ + (tclStubsPtr->tcl_CancelIdleCall) /* 80 */ +#define Tcl_Close \ + (tclStubsPtr->tcl_Close) /* 81 */ +#define Tcl_CommandComplete \ + (tclStubsPtr->tcl_CommandComplete) /* 82 */ +#define Tcl_Concat \ + (tclStubsPtr->tcl_Concat) /* 83 */ +#define Tcl_ConvertElement \ + (tclStubsPtr->tcl_ConvertElement) /* 84 */ +#define Tcl_ConvertCountedElement \ + (tclStubsPtr->tcl_ConvertCountedElement) /* 85 */ +#define Tcl_CreateAlias \ + (tclStubsPtr->tcl_CreateAlias) /* 86 */ +#define Tcl_CreateAliasObj \ + (tclStubsPtr->tcl_CreateAliasObj) /* 87 */ +#define Tcl_CreateChannel \ + (tclStubsPtr->tcl_CreateChannel) /* 88 */ +#define Tcl_CreateChannelHandler \ + (tclStubsPtr->tcl_CreateChannelHandler) /* 89 */ +#define Tcl_CreateCloseHandler \ + (tclStubsPtr->tcl_CreateCloseHandler) /* 90 */ +#define Tcl_CreateCommand \ + (tclStubsPtr->tcl_CreateCommand) /* 91 */ +#define Tcl_CreateEventSource \ + (tclStubsPtr->tcl_CreateEventSource) /* 92 */ +#define Tcl_CreateExitHandler \ + (tclStubsPtr->tcl_CreateExitHandler) /* 93 */ +#define Tcl_CreateInterp \ + (tclStubsPtr->tcl_CreateInterp) /* 94 */ +#define Tcl_CreateMathFunc \ + (tclStubsPtr->tcl_CreateMathFunc) /* 95 */ +#define Tcl_CreateObjCommand \ + (tclStubsPtr->tcl_CreateObjCommand) /* 96 */ +#define Tcl_CreateSlave \ + (tclStubsPtr->tcl_CreateSlave) /* 97 */ +#define Tcl_CreateTimerHandler \ + (tclStubsPtr->tcl_CreateTimerHandler) /* 98 */ +#define Tcl_CreateTrace \ + (tclStubsPtr->tcl_CreateTrace) /* 99 */ +#define Tcl_DeleteAssocData \ + (tclStubsPtr->tcl_DeleteAssocData) /* 100 */ +#define Tcl_DeleteChannelHandler \ + (tclStubsPtr->tcl_DeleteChannelHandler) /* 101 */ +#define Tcl_DeleteCloseHandler \ + 
(tclStubsPtr->tcl_DeleteCloseHandler) /* 102 */ +#define Tcl_DeleteCommand \ + (tclStubsPtr->tcl_DeleteCommand) /* 103 */ +#define Tcl_DeleteCommandFromToken \ + (tclStubsPtr->tcl_DeleteCommandFromToken) /* 104 */ +#define Tcl_DeleteEvents \ + (tclStubsPtr->tcl_DeleteEvents) /* 105 */ +#define Tcl_DeleteEventSource \ + (tclStubsPtr->tcl_DeleteEventSource) /* 106 */ +#define Tcl_DeleteExitHandler \ + (tclStubsPtr->tcl_DeleteExitHandler) /* 107 */ +#define Tcl_DeleteHashEntry \ + (tclStubsPtr->tcl_DeleteHashEntry) /* 108 */ +#define Tcl_DeleteHashTable \ + (tclStubsPtr->tcl_DeleteHashTable) /* 109 */ +#define Tcl_DeleteInterp \ + (tclStubsPtr->tcl_DeleteInterp) /* 110 */ +#define Tcl_DetachPids \ + (tclStubsPtr->tcl_DetachPids) /* 111 */ +#define Tcl_DeleteTimerHandler \ + (tclStubsPtr->tcl_DeleteTimerHandler) /* 112 */ +#define Tcl_DeleteTrace \ + (tclStubsPtr->tcl_DeleteTrace) /* 113 */ +#define Tcl_DontCallWhenDeleted \ + (tclStubsPtr->tcl_DontCallWhenDeleted) /* 114 */ +#define Tcl_DoOneEvent \ + (tclStubsPtr->tcl_DoOneEvent) /* 115 */ +#define Tcl_DoWhenIdle \ + (tclStubsPtr->tcl_DoWhenIdle) /* 116 */ +#define Tcl_DStringAppend \ + (tclStubsPtr->tcl_DStringAppend) /* 117 */ +#define Tcl_DStringAppendElement \ + (tclStubsPtr->tcl_DStringAppendElement) /* 118 */ +#define Tcl_DStringEndSublist \ + (tclStubsPtr->tcl_DStringEndSublist) /* 119 */ +#define Tcl_DStringFree \ + (tclStubsPtr->tcl_DStringFree) /* 120 */ +#define Tcl_DStringGetResult \ + (tclStubsPtr->tcl_DStringGetResult) /* 121 */ +#define Tcl_DStringInit \ + (tclStubsPtr->tcl_DStringInit) /* 122 */ +#define Tcl_DStringResult \ + (tclStubsPtr->tcl_DStringResult) /* 123 */ +#define Tcl_DStringSetLength \ + (tclStubsPtr->tcl_DStringSetLength) /* 124 */ +#define Tcl_DStringStartSublist \ + (tclStubsPtr->tcl_DStringStartSublist) /* 125 */ +#define Tcl_Eof \ + (tclStubsPtr->tcl_Eof) /* 126 */ +#define Tcl_ErrnoId \ + (tclStubsPtr->tcl_ErrnoId) /* 127 */ +#define Tcl_ErrnoMsg \ + (tclStubsPtr->tcl_ErrnoMsg) /* 128 */ +#define Tcl_Eval \ + (tclStubsPtr->tcl_Eval) /* 129 */ +#define Tcl_EvalFile \ + (tclStubsPtr->tcl_EvalFile) /* 130 */ +#define Tcl_EvalObj \ + (tclStubsPtr->tcl_EvalObj) /* 131 */ +#define Tcl_EventuallyFree \ + (tclStubsPtr->tcl_EventuallyFree) /* 132 */ +#define Tcl_Exit \ + (tclStubsPtr->tcl_Exit) /* 133 */ +#define Tcl_ExposeCommand \ + (tclStubsPtr->tcl_ExposeCommand) /* 134 */ +#define Tcl_ExprBoolean \ + (tclStubsPtr->tcl_ExprBoolean) /* 135 */ +#define Tcl_ExprBooleanObj \ + (tclStubsPtr->tcl_ExprBooleanObj) /* 136 */ +#define Tcl_ExprDouble \ + (tclStubsPtr->tcl_ExprDouble) /* 137 */ +#define Tcl_ExprDoubleObj \ + (tclStubsPtr->tcl_ExprDoubleObj) /* 138 */ +#define Tcl_ExprLong \ + (tclStubsPtr->tcl_ExprLong) /* 139 */ +#define Tcl_ExprLongObj \ + (tclStubsPtr->tcl_ExprLongObj) /* 140 */ +#define Tcl_ExprObj \ + (tclStubsPtr->tcl_ExprObj) /* 141 */ +#define Tcl_ExprString \ + (tclStubsPtr->tcl_ExprString) /* 142 */ +#define Tcl_Finalize \ + (tclStubsPtr->tcl_Finalize) /* 143 */ +#define Tcl_FindExecutable \ + (tclStubsPtr->tcl_FindExecutable) /* 144 */ +#define Tcl_FirstHashEntry \ + (tclStubsPtr->tcl_FirstHashEntry) /* 145 */ +#define Tcl_Flush \ + (tclStubsPtr->tcl_Flush) /* 146 */ +#define Tcl_FreeResult \ + (tclStubsPtr->tcl_FreeResult) /* 147 */ +#define Tcl_GetAlias \ + (tclStubsPtr->tcl_GetAlias) /* 148 */ +#define Tcl_GetAliasObj \ + (tclStubsPtr->tcl_GetAliasObj) /* 149 */ +#define Tcl_GetAssocData \ + (tclStubsPtr->tcl_GetAssocData) /* 150 */ +#define Tcl_GetChannel \ + (tclStubsPtr->tcl_GetChannel) /* 
151 */ +#define Tcl_GetChannelBufferSize \ + (tclStubsPtr->tcl_GetChannelBufferSize) /* 152 */ +#define Tcl_GetChannelHandle \ + (tclStubsPtr->tcl_GetChannelHandle) /* 153 */ +#define Tcl_GetChannelInstanceData \ + (tclStubsPtr->tcl_GetChannelInstanceData) /* 154 */ +#define Tcl_GetChannelMode \ + (tclStubsPtr->tcl_GetChannelMode) /* 155 */ +#define Tcl_GetChannelName \ + (tclStubsPtr->tcl_GetChannelName) /* 156 */ +#define Tcl_GetChannelOption \ + (tclStubsPtr->tcl_GetChannelOption) /* 157 */ +#define Tcl_GetChannelType \ + (tclStubsPtr->tcl_GetChannelType) /* 158 */ +#define Tcl_GetCommandInfo \ + (tclStubsPtr->tcl_GetCommandInfo) /* 159 */ +#define Tcl_GetCommandName \ + (tclStubsPtr->tcl_GetCommandName) /* 160 */ +#define Tcl_GetErrno \ + (tclStubsPtr->tcl_GetErrno) /* 161 */ +#define Tcl_GetHostName \ + (tclStubsPtr->tcl_GetHostName) /* 162 */ +#define Tcl_GetInterpPath \ + (tclStubsPtr->tcl_GetInterpPath) /* 163 */ +#define Tcl_GetMaster \ + (tclStubsPtr->tcl_GetMaster) /* 164 */ +#define Tcl_GetNameOfExecutable \ + (tclStubsPtr->tcl_GetNameOfExecutable) /* 165 */ +#define Tcl_GetObjResult \ + (tclStubsPtr->tcl_GetObjResult) /* 166 */ +#if !defined(__WIN32__) && !defined(MAC_OSX_TCL) /* UNIX */ +#define Tcl_GetOpenFile \ + (tclStubsPtr->tcl_GetOpenFile) /* 167 */ +#endif /* UNIX */ +#ifdef MAC_OSX_TCL /* MACOSX */ +#define Tcl_GetOpenFile \ + (tclStubsPtr->tcl_GetOpenFile) /* 167 */ +#endif /* MACOSX */ +#define Tcl_GetPathType \ + (tclStubsPtr->tcl_GetPathType) /* 168 */ +#define Tcl_Gets \ + (tclStubsPtr->tcl_Gets) /* 169 */ +#define Tcl_GetsObj \ + (tclStubsPtr->tcl_GetsObj) /* 170 */ +#define Tcl_GetServiceMode \ + (tclStubsPtr->tcl_GetServiceMode) /* 171 */ +#define Tcl_GetSlave \ + (tclStubsPtr->tcl_GetSlave) /* 172 */ +#define Tcl_GetStdChannel \ + (tclStubsPtr->tcl_GetStdChannel) /* 173 */ +#define Tcl_GetStringResult \ + (tclStubsPtr->tcl_GetStringResult) /* 174 */ +#define Tcl_GetVar \ + (tclStubsPtr->tcl_GetVar) /* 175 */ +#define Tcl_GetVar2 \ + (tclStubsPtr->tcl_GetVar2) /* 176 */ +#define Tcl_GlobalEval \ + (tclStubsPtr->tcl_GlobalEval) /* 177 */ +#define Tcl_GlobalEvalObj \ + (tclStubsPtr->tcl_GlobalEvalObj) /* 178 */ +#define Tcl_HideCommand \ + (tclStubsPtr->tcl_HideCommand) /* 179 */ +#define Tcl_Init \ + (tclStubsPtr->tcl_Init) /* 180 */ +#define Tcl_InitHashTable \ + (tclStubsPtr->tcl_InitHashTable) /* 181 */ +#define Tcl_InputBlocked \ + (tclStubsPtr->tcl_InputBlocked) /* 182 */ +#define Tcl_InputBuffered \ + (tclStubsPtr->tcl_InputBuffered) /* 183 */ +#define Tcl_InterpDeleted \ + (tclStubsPtr->tcl_InterpDeleted) /* 184 */ +#define Tcl_IsSafe \ + (tclStubsPtr->tcl_IsSafe) /* 185 */ +#define Tcl_JoinPath \ + (tclStubsPtr->tcl_JoinPath) /* 186 */ +#define Tcl_LinkVar \ + (tclStubsPtr->tcl_LinkVar) /* 187 */ +/* Slot 188 is reserved */ +#define Tcl_MakeFileChannel \ + (tclStubsPtr->tcl_MakeFileChannel) /* 189 */ +#define Tcl_MakeSafe \ + (tclStubsPtr->tcl_MakeSafe) /* 190 */ +#define Tcl_MakeTcpClientChannel \ + (tclStubsPtr->tcl_MakeTcpClientChannel) /* 191 */ +#define Tcl_Merge \ + (tclStubsPtr->tcl_Merge) /* 192 */ +#define Tcl_NextHashEntry \ + (tclStubsPtr->tcl_NextHashEntry) /* 193 */ +#define Tcl_NotifyChannel \ + (tclStubsPtr->tcl_NotifyChannel) /* 194 */ +#define Tcl_ObjGetVar2 \ + (tclStubsPtr->tcl_ObjGetVar2) /* 195 */ +#define Tcl_ObjSetVar2 \ + (tclStubsPtr->tcl_ObjSetVar2) /* 196 */ +#define Tcl_OpenCommandChannel \ + (tclStubsPtr->tcl_OpenCommandChannel) /* 197 */ +#define Tcl_OpenFileChannel \ + (tclStubsPtr->tcl_OpenFileChannel) /* 198 */ +#define 
Tcl_OpenTcpClient \ + (tclStubsPtr->tcl_OpenTcpClient) /* 199 */ +#define Tcl_OpenTcpServer \ + (tclStubsPtr->tcl_OpenTcpServer) /* 200 */ +#define Tcl_Preserve \ + (tclStubsPtr->tcl_Preserve) /* 201 */ +#define Tcl_PrintDouble \ + (tclStubsPtr->tcl_PrintDouble) /* 202 */ +#define Tcl_PutEnv \ + (tclStubsPtr->tcl_PutEnv) /* 203 */ +#define Tcl_PosixError \ + (tclStubsPtr->tcl_PosixError) /* 204 */ +#define Tcl_QueueEvent \ + (tclStubsPtr->tcl_QueueEvent) /* 205 */ +#define Tcl_Read \ + (tclStubsPtr->tcl_Read) /* 206 */ +#define Tcl_ReapDetachedProcs \ + (tclStubsPtr->tcl_ReapDetachedProcs) /* 207 */ +#define Tcl_RecordAndEval \ + (tclStubsPtr->tcl_RecordAndEval) /* 208 */ +#define Tcl_RecordAndEvalObj \ + (tclStubsPtr->tcl_RecordAndEvalObj) /* 209 */ +#define Tcl_RegisterChannel \ + (tclStubsPtr->tcl_RegisterChannel) /* 210 */ +#define Tcl_RegisterObjType \ + (tclStubsPtr->tcl_RegisterObjType) /* 211 */ +#define Tcl_RegExpCompile \ + (tclStubsPtr->tcl_RegExpCompile) /* 212 */ +#define Tcl_RegExpExec \ + (tclStubsPtr->tcl_RegExpExec) /* 213 */ +#define Tcl_RegExpMatch \ + (tclStubsPtr->tcl_RegExpMatch) /* 214 */ +#define Tcl_RegExpRange \ + (tclStubsPtr->tcl_RegExpRange) /* 215 */ +#define Tcl_Release \ + (tclStubsPtr->tcl_Release) /* 216 */ +#define Tcl_ResetResult \ + (tclStubsPtr->tcl_ResetResult) /* 217 */ +#define Tcl_ScanElement \ + (tclStubsPtr->tcl_ScanElement) /* 218 */ +#define Tcl_ScanCountedElement \ + (tclStubsPtr->tcl_ScanCountedElement) /* 219 */ +#define Tcl_SeekOld \ + (tclStubsPtr->tcl_SeekOld) /* 220 */ +#define Tcl_ServiceAll \ + (tclStubsPtr->tcl_ServiceAll) /* 221 */ +#define Tcl_ServiceEvent \ + (tclStubsPtr->tcl_ServiceEvent) /* 222 */ +#define Tcl_SetAssocData \ + (tclStubsPtr->tcl_SetAssocData) /* 223 */ +#define Tcl_SetChannelBufferSize \ + (tclStubsPtr->tcl_SetChannelBufferSize) /* 224 */ +#define Tcl_SetChannelOption \ + (tclStubsPtr->tcl_SetChannelOption) /* 225 */ +#define Tcl_SetCommandInfo \ + (tclStubsPtr->tcl_SetCommandInfo) /* 226 */ +#define Tcl_SetErrno \ + (tclStubsPtr->tcl_SetErrno) /* 227 */ +#define Tcl_SetErrorCode \ + (tclStubsPtr->tcl_SetErrorCode) /* 228 */ +#define Tcl_SetMaxBlockTime \ + (tclStubsPtr->tcl_SetMaxBlockTime) /* 229 */ +#define Tcl_SetPanicProc \ + (tclStubsPtr->tcl_SetPanicProc) /* 230 */ +#define Tcl_SetRecursionLimit \ + (tclStubsPtr->tcl_SetRecursionLimit) /* 231 */ +#define Tcl_SetResult \ + (tclStubsPtr->tcl_SetResult) /* 232 */ +#define Tcl_SetServiceMode \ + (tclStubsPtr->tcl_SetServiceMode) /* 233 */ +#define Tcl_SetObjErrorCode \ + (tclStubsPtr->tcl_SetObjErrorCode) /* 234 */ +#define Tcl_SetObjResult \ + (tclStubsPtr->tcl_SetObjResult) /* 235 */ +#define Tcl_SetStdChannel \ + (tclStubsPtr->tcl_SetStdChannel) /* 236 */ +#define Tcl_SetVar \ + (tclStubsPtr->tcl_SetVar) /* 237 */ +#define Tcl_SetVar2 \ + (tclStubsPtr->tcl_SetVar2) /* 238 */ +#define Tcl_SignalId \ + (tclStubsPtr->tcl_SignalId) /* 239 */ +#define Tcl_SignalMsg \ + (tclStubsPtr->tcl_SignalMsg) /* 240 */ +#define Tcl_SourceRCFile \ + (tclStubsPtr->tcl_SourceRCFile) /* 241 */ +#define Tcl_SplitList \ + (tclStubsPtr->tcl_SplitList) /* 242 */ +#define Tcl_SplitPath \ + (tclStubsPtr->tcl_SplitPath) /* 243 */ +#define Tcl_StaticPackage \ + (tclStubsPtr->tcl_StaticPackage) /* 244 */ +#define Tcl_StringMatch \ + (tclStubsPtr->tcl_StringMatch) /* 245 */ +#define Tcl_TellOld \ + (tclStubsPtr->tcl_TellOld) /* 246 */ +#define Tcl_TraceVar \ + (tclStubsPtr->tcl_TraceVar) /* 247 */ +#define Tcl_TraceVar2 \ + (tclStubsPtr->tcl_TraceVar2) /* 248 */ +#define 
Tcl_TranslateFileName \ + (tclStubsPtr->tcl_TranslateFileName) /* 249 */ +#define Tcl_Ungets \ + (tclStubsPtr->tcl_Ungets) /* 250 */ +#define Tcl_UnlinkVar \ + (tclStubsPtr->tcl_UnlinkVar) /* 251 */ +#define Tcl_UnregisterChannel \ + (tclStubsPtr->tcl_UnregisterChannel) /* 252 */ +#define Tcl_UnsetVar \ + (tclStubsPtr->tcl_UnsetVar) /* 253 */ +#define Tcl_UnsetVar2 \ + (tclStubsPtr->tcl_UnsetVar2) /* 254 */ +#define Tcl_UntraceVar \ + (tclStubsPtr->tcl_UntraceVar) /* 255 */ +#define Tcl_UntraceVar2 \ + (tclStubsPtr->tcl_UntraceVar2) /* 256 */ +#define Tcl_UpdateLinkedVar \ + (tclStubsPtr->tcl_UpdateLinkedVar) /* 257 */ +#define Tcl_UpVar \ + (tclStubsPtr->tcl_UpVar) /* 258 */ +#define Tcl_UpVar2 \ + (tclStubsPtr->tcl_UpVar2) /* 259 */ +#define Tcl_VarEval \ + (tclStubsPtr->tcl_VarEval) /* 260 */ +#define Tcl_VarTraceInfo \ + (tclStubsPtr->tcl_VarTraceInfo) /* 261 */ +#define Tcl_VarTraceInfo2 \ + (tclStubsPtr->tcl_VarTraceInfo2) /* 262 */ +#define Tcl_Write \ + (tclStubsPtr->tcl_Write) /* 263 */ +#define Tcl_WrongNumArgs \ + (tclStubsPtr->tcl_WrongNumArgs) /* 264 */ +#define Tcl_DumpActiveMemory \ + (tclStubsPtr->tcl_DumpActiveMemory) /* 265 */ +#define Tcl_ValidateAllMemory \ + (tclStubsPtr->tcl_ValidateAllMemory) /* 266 */ +#define Tcl_AppendResultVA \ + (tclStubsPtr->tcl_AppendResultVA) /* 267 */ +#define Tcl_AppendStringsToObjVA \ + (tclStubsPtr->tcl_AppendStringsToObjVA) /* 268 */ +#define Tcl_HashStats \ + (tclStubsPtr->tcl_HashStats) /* 269 */ +#define Tcl_ParseVar \ + (tclStubsPtr->tcl_ParseVar) /* 270 */ +#define Tcl_PkgPresent \ + (tclStubsPtr->tcl_PkgPresent) /* 271 */ +#define Tcl_PkgPresentEx \ + (tclStubsPtr->tcl_PkgPresentEx) /* 272 */ +#define Tcl_PkgProvide \ + (tclStubsPtr->tcl_PkgProvide) /* 273 */ +#define Tcl_PkgRequire \ + (tclStubsPtr->tcl_PkgRequire) /* 274 */ +#define Tcl_SetErrorCodeVA \ + (tclStubsPtr->tcl_SetErrorCodeVA) /* 275 */ +#define Tcl_VarEvalVA \ + (tclStubsPtr->tcl_VarEvalVA) /* 276 */ +#define Tcl_WaitPid \ + (tclStubsPtr->tcl_WaitPid) /* 277 */ +#define Tcl_PanicVA \ + (tclStubsPtr->tcl_PanicVA) /* 278 */ +#define Tcl_GetVersion \ + (tclStubsPtr->tcl_GetVersion) /* 279 */ +#define Tcl_InitMemory \ + (tclStubsPtr->tcl_InitMemory) /* 280 */ +#define Tcl_StackChannel \ + (tclStubsPtr->tcl_StackChannel) /* 281 */ +#define Tcl_UnstackChannel \ + (tclStubsPtr->tcl_UnstackChannel) /* 282 */ +#define Tcl_GetStackedChannel \ + (tclStubsPtr->tcl_GetStackedChannel) /* 283 */ +#define Tcl_SetMainLoop \ + (tclStubsPtr->tcl_SetMainLoop) /* 284 */ +/* Slot 285 is reserved */ +#define Tcl_AppendObjToObj \ + (tclStubsPtr->tcl_AppendObjToObj) /* 286 */ +#define Tcl_CreateEncoding \ + (tclStubsPtr->tcl_CreateEncoding) /* 287 */ +#define Tcl_CreateThreadExitHandler \ + (tclStubsPtr->tcl_CreateThreadExitHandler) /* 288 */ +#define Tcl_DeleteThreadExitHandler \ + (tclStubsPtr->tcl_DeleteThreadExitHandler) /* 289 */ +#define Tcl_DiscardResult \ + (tclStubsPtr->tcl_DiscardResult) /* 290 */ +#define Tcl_EvalEx \ + (tclStubsPtr->tcl_EvalEx) /* 291 */ +#define Tcl_EvalObjv \ + (tclStubsPtr->tcl_EvalObjv) /* 292 */ +#define Tcl_EvalObjEx \ + (tclStubsPtr->tcl_EvalObjEx) /* 293 */ +#define Tcl_ExitThread \ + (tclStubsPtr->tcl_ExitThread) /* 294 */ +#define Tcl_ExternalToUtf \ + (tclStubsPtr->tcl_ExternalToUtf) /* 295 */ +#define Tcl_ExternalToUtfDString \ + (tclStubsPtr->tcl_ExternalToUtfDString) /* 296 */ +#define Tcl_FinalizeThread \ + (tclStubsPtr->tcl_FinalizeThread) /* 297 */ +#define Tcl_FinalizeNotifier \ + (tclStubsPtr->tcl_FinalizeNotifier) /* 298 */ +#define 
Tcl_FreeEncoding \ + (tclStubsPtr->tcl_FreeEncoding) /* 299 */ +#define Tcl_GetCurrentThread \ + (tclStubsPtr->tcl_GetCurrentThread) /* 300 */ +#define Tcl_GetEncoding \ + (tclStubsPtr->tcl_GetEncoding) /* 301 */ +#define Tcl_GetEncodingName \ + (tclStubsPtr->tcl_GetEncodingName) /* 302 */ +#define Tcl_GetEncodingNames \ + (tclStubsPtr->tcl_GetEncodingNames) /* 303 */ +#define Tcl_GetIndexFromObjStruct \ + (tclStubsPtr->tcl_GetIndexFromObjStruct) /* 304 */ +#define Tcl_GetThreadData \ + (tclStubsPtr->tcl_GetThreadData) /* 305 */ +#define Tcl_GetVar2Ex \ + (tclStubsPtr->tcl_GetVar2Ex) /* 306 */ +#define Tcl_InitNotifier \ + (tclStubsPtr->tcl_InitNotifier) /* 307 */ +#define Tcl_MutexLock \ + (tclStubsPtr->tcl_MutexLock) /* 308 */ +#define Tcl_MutexUnlock \ + (tclStubsPtr->tcl_MutexUnlock) /* 309 */ +#define Tcl_ConditionNotify \ + (tclStubsPtr->tcl_ConditionNotify) /* 310 */ +#define Tcl_ConditionWait \ + (tclStubsPtr->tcl_ConditionWait) /* 311 */ +#define Tcl_NumUtfChars \ + (tclStubsPtr->tcl_NumUtfChars) /* 312 */ +#define Tcl_ReadChars \ + (tclStubsPtr->tcl_ReadChars) /* 313 */ +#define Tcl_RestoreResult \ + (tclStubsPtr->tcl_RestoreResult) /* 314 */ +#define Tcl_SaveResult \ + (tclStubsPtr->tcl_SaveResult) /* 315 */ +#define Tcl_SetSystemEncoding \ + (tclStubsPtr->tcl_SetSystemEncoding) /* 316 */ +#define Tcl_SetVar2Ex \ + (tclStubsPtr->tcl_SetVar2Ex) /* 317 */ +#define Tcl_ThreadAlert \ + (tclStubsPtr->tcl_ThreadAlert) /* 318 */ +#define Tcl_ThreadQueueEvent \ + (tclStubsPtr->tcl_ThreadQueueEvent) /* 319 */ +#define Tcl_UniCharAtIndex \ + (tclStubsPtr->tcl_UniCharAtIndex) /* 320 */ +#define Tcl_UniCharToLower \ + (tclStubsPtr->tcl_UniCharToLower) /* 321 */ +#define Tcl_UniCharToTitle \ + (tclStubsPtr->tcl_UniCharToTitle) /* 322 */ +#define Tcl_UniCharToUpper \ + (tclStubsPtr->tcl_UniCharToUpper) /* 323 */ +#define Tcl_UniCharToUtf \ + (tclStubsPtr->tcl_UniCharToUtf) /* 324 */ +#define Tcl_UtfAtIndex \ + (tclStubsPtr->tcl_UtfAtIndex) /* 325 */ +#define Tcl_UtfCharComplete \ + (tclStubsPtr->tcl_UtfCharComplete) /* 326 */ +#define Tcl_UtfBackslash \ + (tclStubsPtr->tcl_UtfBackslash) /* 327 */ +#define Tcl_UtfFindFirst \ + (tclStubsPtr->tcl_UtfFindFirst) /* 328 */ +#define Tcl_UtfFindLast \ + (tclStubsPtr->tcl_UtfFindLast) /* 329 */ +#define Tcl_UtfNext \ + (tclStubsPtr->tcl_UtfNext) /* 330 */ +#define Tcl_UtfPrev \ + (tclStubsPtr->tcl_UtfPrev) /* 331 */ +#define Tcl_UtfToExternal \ + (tclStubsPtr->tcl_UtfToExternal) /* 332 */ +#define Tcl_UtfToExternalDString \ + (tclStubsPtr->tcl_UtfToExternalDString) /* 333 */ +#define Tcl_UtfToLower \ + (tclStubsPtr->tcl_UtfToLower) /* 334 */ +#define Tcl_UtfToTitle \ + (tclStubsPtr->tcl_UtfToTitle) /* 335 */ +#define Tcl_UtfToUniChar \ + (tclStubsPtr->tcl_UtfToUniChar) /* 336 */ +#define Tcl_UtfToUpper \ + (tclStubsPtr->tcl_UtfToUpper) /* 337 */ +#define Tcl_WriteChars \ + (tclStubsPtr->tcl_WriteChars) /* 338 */ +#define Tcl_WriteObj \ + (tclStubsPtr->tcl_WriteObj) /* 339 */ +#define Tcl_GetString \ + (tclStubsPtr->tcl_GetString) /* 340 */ +#define Tcl_GetDefaultEncodingDir \ + (tclStubsPtr->tcl_GetDefaultEncodingDir) /* 341 */ +#define Tcl_SetDefaultEncodingDir \ + (tclStubsPtr->tcl_SetDefaultEncodingDir) /* 342 */ +#define Tcl_AlertNotifier \ + (tclStubsPtr->tcl_AlertNotifier) /* 343 */ +#define Tcl_ServiceModeHook \ + (tclStubsPtr->tcl_ServiceModeHook) /* 344 */ +#define Tcl_UniCharIsAlnum \ + (tclStubsPtr->tcl_UniCharIsAlnum) /* 345 */ +#define Tcl_UniCharIsAlpha \ + (tclStubsPtr->tcl_UniCharIsAlpha) /* 346 */ +#define Tcl_UniCharIsDigit \ + 
(tclStubsPtr->tcl_UniCharIsDigit) /* 347 */ +#define Tcl_UniCharIsLower \ + (tclStubsPtr->tcl_UniCharIsLower) /* 348 */ +#define Tcl_UniCharIsSpace \ + (tclStubsPtr->tcl_UniCharIsSpace) /* 349 */ +#define Tcl_UniCharIsUpper \ + (tclStubsPtr->tcl_UniCharIsUpper) /* 350 */ +#define Tcl_UniCharIsWordChar \ + (tclStubsPtr->tcl_UniCharIsWordChar) /* 351 */ +#define Tcl_UniCharLen \ + (tclStubsPtr->tcl_UniCharLen) /* 352 */ +#define Tcl_UniCharNcmp \ + (tclStubsPtr->tcl_UniCharNcmp) /* 353 */ +#define Tcl_UniCharToUtfDString \ + (tclStubsPtr->tcl_UniCharToUtfDString) /* 354 */ +#define Tcl_UtfToUniCharDString \ + (tclStubsPtr->tcl_UtfToUniCharDString) /* 355 */ +#define Tcl_GetRegExpFromObj \ + (tclStubsPtr->tcl_GetRegExpFromObj) /* 356 */ +#define Tcl_EvalTokens \ + (tclStubsPtr->tcl_EvalTokens) /* 357 */ +#define Tcl_FreeParse \ + (tclStubsPtr->tcl_FreeParse) /* 358 */ +#define Tcl_LogCommandInfo \ + (tclStubsPtr->tcl_LogCommandInfo) /* 359 */ +#define Tcl_ParseBraces \ + (tclStubsPtr->tcl_ParseBraces) /* 360 */ +#define Tcl_ParseCommand \ + (tclStubsPtr->tcl_ParseCommand) /* 361 */ +#define Tcl_ParseExpr \ + (tclStubsPtr->tcl_ParseExpr) /* 362 */ +#define Tcl_ParseQuotedString \ + (tclStubsPtr->tcl_ParseQuotedString) /* 363 */ +#define Tcl_ParseVarName \ + (tclStubsPtr->tcl_ParseVarName) /* 364 */ +#define Tcl_GetCwd \ + (tclStubsPtr->tcl_GetCwd) /* 365 */ +#define Tcl_Chdir \ + (tclStubsPtr->tcl_Chdir) /* 366 */ +#define Tcl_Access \ + (tclStubsPtr->tcl_Access) /* 367 */ +#define Tcl_Stat \ + (tclStubsPtr->tcl_Stat) /* 368 */ +#define Tcl_UtfNcmp \ + (tclStubsPtr->tcl_UtfNcmp) /* 369 */ +#define Tcl_UtfNcasecmp \ + (tclStubsPtr->tcl_UtfNcasecmp) /* 370 */ +#define Tcl_StringCaseMatch \ + (tclStubsPtr->tcl_StringCaseMatch) /* 371 */ +#define Tcl_UniCharIsControl \ + (tclStubsPtr->tcl_UniCharIsControl) /* 372 */ +#define Tcl_UniCharIsGraph \ + (tclStubsPtr->tcl_UniCharIsGraph) /* 373 */ +#define Tcl_UniCharIsPrint \ + (tclStubsPtr->tcl_UniCharIsPrint) /* 374 */ +#define Tcl_UniCharIsPunct \ + (tclStubsPtr->tcl_UniCharIsPunct) /* 375 */ +#define Tcl_RegExpExecObj \ + (tclStubsPtr->tcl_RegExpExecObj) /* 376 */ +#define Tcl_RegExpGetInfo \ + (tclStubsPtr->tcl_RegExpGetInfo) /* 377 */ +#define Tcl_NewUnicodeObj \ + (tclStubsPtr->tcl_NewUnicodeObj) /* 378 */ +#define Tcl_SetUnicodeObj \ + (tclStubsPtr->tcl_SetUnicodeObj) /* 379 */ +#define Tcl_GetCharLength \ + (tclStubsPtr->tcl_GetCharLength) /* 380 */ +#define Tcl_GetUniChar \ + (tclStubsPtr->tcl_GetUniChar) /* 381 */ +#define Tcl_GetUnicode \ + (tclStubsPtr->tcl_GetUnicode) /* 382 */ +#define Tcl_GetRange \ + (tclStubsPtr->tcl_GetRange) /* 383 */ +#define Tcl_AppendUnicodeToObj \ + (tclStubsPtr->tcl_AppendUnicodeToObj) /* 384 */ +#define Tcl_RegExpMatchObj \ + (tclStubsPtr->tcl_RegExpMatchObj) /* 385 */ +#define Tcl_SetNotifier \ + (tclStubsPtr->tcl_SetNotifier) /* 386 */ +#define Tcl_GetAllocMutex \ + (tclStubsPtr->tcl_GetAllocMutex) /* 387 */ +#define Tcl_GetChannelNames \ + (tclStubsPtr->tcl_GetChannelNames) /* 388 */ +#define Tcl_GetChannelNamesEx \ + (tclStubsPtr->tcl_GetChannelNamesEx) /* 389 */ +#define Tcl_ProcObjCmd \ + (tclStubsPtr->tcl_ProcObjCmd) /* 390 */ +#define Tcl_ConditionFinalize \ + (tclStubsPtr->tcl_ConditionFinalize) /* 391 */ +#define Tcl_MutexFinalize \ + (tclStubsPtr->tcl_MutexFinalize) /* 392 */ +#define Tcl_CreateThread \ + (tclStubsPtr->tcl_CreateThread) /* 393 */ +#define Tcl_ReadRaw \ + (tclStubsPtr->tcl_ReadRaw) /* 394 */ +#define Tcl_WriteRaw \ + (tclStubsPtr->tcl_WriteRaw) /* 395 */ +#define Tcl_GetTopChannel \ + 
(tclStubsPtr->tcl_GetTopChannel) /* 396 */ +#define Tcl_ChannelBuffered \ + (tclStubsPtr->tcl_ChannelBuffered) /* 397 */ +#define Tcl_ChannelName \ + (tclStubsPtr->tcl_ChannelName) /* 398 */ +#define Tcl_ChannelVersion \ + (tclStubsPtr->tcl_ChannelVersion) /* 399 */ +#define Tcl_ChannelBlockModeProc \ + (tclStubsPtr->tcl_ChannelBlockModeProc) /* 400 */ +#define Tcl_ChannelCloseProc \ + (tclStubsPtr->tcl_ChannelCloseProc) /* 401 */ +#define Tcl_ChannelClose2Proc \ + (tclStubsPtr->tcl_ChannelClose2Proc) /* 402 */ +#define Tcl_ChannelInputProc \ + (tclStubsPtr->tcl_ChannelInputProc) /* 403 */ +#define Tcl_ChannelOutputProc \ + (tclStubsPtr->tcl_ChannelOutputProc) /* 404 */ +#define Tcl_ChannelSeekProc \ + (tclStubsPtr->tcl_ChannelSeekProc) /* 405 */ +#define Tcl_ChannelSetOptionProc \ + (tclStubsPtr->tcl_ChannelSetOptionProc) /* 406 */ +#define Tcl_ChannelGetOptionProc \ + (tclStubsPtr->tcl_ChannelGetOptionProc) /* 407 */ +#define Tcl_ChannelWatchProc \ + (tclStubsPtr->tcl_ChannelWatchProc) /* 408 */ +#define Tcl_ChannelGetHandleProc \ + (tclStubsPtr->tcl_ChannelGetHandleProc) /* 409 */ +#define Tcl_ChannelFlushProc \ + (tclStubsPtr->tcl_ChannelFlushProc) /* 410 */ +#define Tcl_ChannelHandlerProc \ + (tclStubsPtr->tcl_ChannelHandlerProc) /* 411 */ +#define Tcl_JoinThread \ + (tclStubsPtr->tcl_JoinThread) /* 412 */ +#define Tcl_IsChannelShared \ + (tclStubsPtr->tcl_IsChannelShared) /* 413 */ +#define Tcl_IsChannelRegistered \ + (tclStubsPtr->tcl_IsChannelRegistered) /* 414 */ +#define Tcl_CutChannel \ + (tclStubsPtr->tcl_CutChannel) /* 415 */ +#define Tcl_SpliceChannel \ + (tclStubsPtr->tcl_SpliceChannel) /* 416 */ +#define Tcl_ClearChannelHandlers \ + (tclStubsPtr->tcl_ClearChannelHandlers) /* 417 */ +#define Tcl_IsChannelExisting \ + (tclStubsPtr->tcl_IsChannelExisting) /* 418 */ +#define Tcl_UniCharNcasecmp \ + (tclStubsPtr->tcl_UniCharNcasecmp) /* 419 */ +#define Tcl_UniCharCaseMatch \ + (tclStubsPtr->tcl_UniCharCaseMatch) /* 420 */ +#define Tcl_FindHashEntry \ + (tclStubsPtr->tcl_FindHashEntry) /* 421 */ +#define Tcl_CreateHashEntry \ + (tclStubsPtr->tcl_CreateHashEntry) /* 422 */ +#define Tcl_InitCustomHashTable \ + (tclStubsPtr->tcl_InitCustomHashTable) /* 423 */ +#define Tcl_InitObjHashTable \ + (tclStubsPtr->tcl_InitObjHashTable) /* 424 */ +#define Tcl_CommandTraceInfo \ + (tclStubsPtr->tcl_CommandTraceInfo) /* 425 */ +#define Tcl_TraceCommand \ + (tclStubsPtr->tcl_TraceCommand) /* 426 */ +#define Tcl_UntraceCommand \ + (tclStubsPtr->tcl_UntraceCommand) /* 427 */ +#define Tcl_AttemptAlloc \ + (tclStubsPtr->tcl_AttemptAlloc) /* 428 */ +#define Tcl_AttemptDbCkalloc \ + (tclStubsPtr->tcl_AttemptDbCkalloc) /* 429 */ +#define Tcl_AttemptRealloc \ + (tclStubsPtr->tcl_AttemptRealloc) /* 430 */ +#define Tcl_AttemptDbCkrealloc \ + (tclStubsPtr->tcl_AttemptDbCkrealloc) /* 431 */ +#define Tcl_AttemptSetObjLength \ + (tclStubsPtr->tcl_AttemptSetObjLength) /* 432 */ +#define Tcl_GetChannelThread \ + (tclStubsPtr->tcl_GetChannelThread) /* 433 */ +#define Tcl_GetUnicodeFromObj \ + (tclStubsPtr->tcl_GetUnicodeFromObj) /* 434 */ +#define Tcl_GetMathFuncInfo \ + (tclStubsPtr->tcl_GetMathFuncInfo) /* 435 */ +#define Tcl_ListMathFuncs \ + (tclStubsPtr->tcl_ListMathFuncs) /* 436 */ +#define Tcl_SubstObj \ + (tclStubsPtr->tcl_SubstObj) /* 437 */ +#define Tcl_DetachChannel \ + (tclStubsPtr->tcl_DetachChannel) /* 438 */ +#define Tcl_IsStandardChannel \ + (tclStubsPtr->tcl_IsStandardChannel) /* 439 */ +#define Tcl_FSCopyFile \ + (tclStubsPtr->tcl_FSCopyFile) /* 440 */ +#define Tcl_FSCopyDirectory \ + 
(tclStubsPtr->tcl_FSCopyDirectory) /* 441 */ +#define Tcl_FSCreateDirectory \ + (tclStubsPtr->tcl_FSCreateDirectory) /* 442 */ +#define Tcl_FSDeleteFile \ + (tclStubsPtr->tcl_FSDeleteFile) /* 443 */ +#define Tcl_FSLoadFile \ + (tclStubsPtr->tcl_FSLoadFile) /* 444 */ +#define Tcl_FSMatchInDirectory \ + (tclStubsPtr->tcl_FSMatchInDirectory) /* 445 */ +#define Tcl_FSLink \ + (tclStubsPtr->tcl_FSLink) /* 446 */ +#define Tcl_FSRemoveDirectory \ + (tclStubsPtr->tcl_FSRemoveDirectory) /* 447 */ +#define Tcl_FSRenameFile \ + (tclStubsPtr->tcl_FSRenameFile) /* 448 */ +#define Tcl_FSLstat \ + (tclStubsPtr->tcl_FSLstat) /* 449 */ +#define Tcl_FSUtime \ + (tclStubsPtr->tcl_FSUtime) /* 450 */ +#define Tcl_FSFileAttrsGet \ + (tclStubsPtr->tcl_FSFileAttrsGet) /* 451 */ +#define Tcl_FSFileAttrsSet \ + (tclStubsPtr->tcl_FSFileAttrsSet) /* 452 */ +#define Tcl_FSFileAttrStrings \ + (tclStubsPtr->tcl_FSFileAttrStrings) /* 453 */ +#define Tcl_FSStat \ + (tclStubsPtr->tcl_FSStat) /* 454 */ +#define Tcl_FSAccess \ + (tclStubsPtr->tcl_FSAccess) /* 455 */ +#define Tcl_FSOpenFileChannel \ + (tclStubsPtr->tcl_FSOpenFileChannel) /* 456 */ +#define Tcl_FSGetCwd \ + (tclStubsPtr->tcl_FSGetCwd) /* 457 */ +#define Tcl_FSChdir \ + (tclStubsPtr->tcl_FSChdir) /* 458 */ +#define Tcl_FSConvertToPathType \ + (tclStubsPtr->tcl_FSConvertToPathType) /* 459 */ +#define Tcl_FSJoinPath \ + (tclStubsPtr->tcl_FSJoinPath) /* 460 */ +#define Tcl_FSSplitPath \ + (tclStubsPtr->tcl_FSSplitPath) /* 461 */ +#define Tcl_FSEqualPaths \ + (tclStubsPtr->tcl_FSEqualPaths) /* 462 */ +#define Tcl_FSGetNormalizedPath \ + (tclStubsPtr->tcl_FSGetNormalizedPath) /* 463 */ +#define Tcl_FSJoinToPath \ + (tclStubsPtr->tcl_FSJoinToPath) /* 464 */ +#define Tcl_FSGetInternalRep \ + (tclStubsPtr->tcl_FSGetInternalRep) /* 465 */ +#define Tcl_FSGetTranslatedPath \ + (tclStubsPtr->tcl_FSGetTranslatedPath) /* 466 */ +#define Tcl_FSEvalFile \ + (tclStubsPtr->tcl_FSEvalFile) /* 467 */ +#define Tcl_FSNewNativePath \ + (tclStubsPtr->tcl_FSNewNativePath) /* 468 */ +#define Tcl_FSGetNativePath \ + (tclStubsPtr->tcl_FSGetNativePath) /* 469 */ +#define Tcl_FSFileSystemInfo \ + (tclStubsPtr->tcl_FSFileSystemInfo) /* 470 */ +#define Tcl_FSPathSeparator \ + (tclStubsPtr->tcl_FSPathSeparator) /* 471 */ +#define Tcl_FSListVolumes \ + (tclStubsPtr->tcl_FSListVolumes) /* 472 */ +#define Tcl_FSRegister \ + (tclStubsPtr->tcl_FSRegister) /* 473 */ +#define Tcl_FSUnregister \ + (tclStubsPtr->tcl_FSUnregister) /* 474 */ +#define Tcl_FSData \ + (tclStubsPtr->tcl_FSData) /* 475 */ +#define Tcl_FSGetTranslatedStringPath \ + (tclStubsPtr->tcl_FSGetTranslatedStringPath) /* 476 */ +#define Tcl_FSGetFileSystemForPath \ + (tclStubsPtr->tcl_FSGetFileSystemForPath) /* 477 */ +#define Tcl_FSGetPathType \ + (tclStubsPtr->tcl_FSGetPathType) /* 478 */ +#define Tcl_OutputBuffered \ + (tclStubsPtr->tcl_OutputBuffered) /* 479 */ +#define Tcl_FSMountsChanged \ + (tclStubsPtr->tcl_FSMountsChanged) /* 480 */ +#define Tcl_EvalTokensStandard \ + (tclStubsPtr->tcl_EvalTokensStandard) /* 481 */ +#define Tcl_GetTime \ + (tclStubsPtr->tcl_GetTime) /* 482 */ +#define Tcl_CreateObjTrace \ + (tclStubsPtr->tcl_CreateObjTrace) /* 483 */ +#define Tcl_GetCommandInfoFromToken \ + (tclStubsPtr->tcl_GetCommandInfoFromToken) /* 484 */ +#define Tcl_SetCommandInfoFromToken \ + (tclStubsPtr->tcl_SetCommandInfoFromToken) /* 485 */ +#define Tcl_DbNewWideIntObj \ + (tclStubsPtr->tcl_DbNewWideIntObj) /* 486 */ +#define Tcl_GetWideIntFromObj \ + (tclStubsPtr->tcl_GetWideIntFromObj) /* 487 */ +#define Tcl_NewWideIntObj \ + 
(tclStubsPtr->tcl_NewWideIntObj) /* 488 */ +#define Tcl_SetWideIntObj \ + (tclStubsPtr->tcl_SetWideIntObj) /* 489 */ +#define Tcl_AllocStatBuf \ + (tclStubsPtr->tcl_AllocStatBuf) /* 490 */ +#define Tcl_Seek \ + (tclStubsPtr->tcl_Seek) /* 491 */ +#define Tcl_Tell \ + (tclStubsPtr->tcl_Tell) /* 492 */ +#define Tcl_ChannelWideSeekProc \ + (tclStubsPtr->tcl_ChannelWideSeekProc) /* 493 */ +#define Tcl_DictObjPut \ + (tclStubsPtr->tcl_DictObjPut) /* 494 */ +#define Tcl_DictObjGet \ + (tclStubsPtr->tcl_DictObjGet) /* 495 */ +#define Tcl_DictObjRemove \ + (tclStubsPtr->tcl_DictObjRemove) /* 496 */ +#define Tcl_DictObjSize \ + (tclStubsPtr->tcl_DictObjSize) /* 497 */ +#define Tcl_DictObjFirst \ + (tclStubsPtr->tcl_DictObjFirst) /* 498 */ +#define Tcl_DictObjNext \ + (tclStubsPtr->tcl_DictObjNext) /* 499 */ +#define Tcl_DictObjDone \ + (tclStubsPtr->tcl_DictObjDone) /* 500 */ +#define Tcl_DictObjPutKeyList \ + (tclStubsPtr->tcl_DictObjPutKeyList) /* 501 */ +#define Tcl_DictObjRemoveKeyList \ + (tclStubsPtr->tcl_DictObjRemoveKeyList) /* 502 */ +#define Tcl_NewDictObj \ + (tclStubsPtr->tcl_NewDictObj) /* 503 */ +#define Tcl_DbNewDictObj \ + (tclStubsPtr->tcl_DbNewDictObj) /* 504 */ +#define Tcl_RegisterConfig \ + (tclStubsPtr->tcl_RegisterConfig) /* 505 */ +#define Tcl_CreateNamespace \ + (tclStubsPtr->tcl_CreateNamespace) /* 506 */ +#define Tcl_DeleteNamespace \ + (tclStubsPtr->tcl_DeleteNamespace) /* 507 */ +#define Tcl_AppendExportList \ + (tclStubsPtr->tcl_AppendExportList) /* 508 */ +#define Tcl_Export \ + (tclStubsPtr->tcl_Export) /* 509 */ +#define Tcl_Import \ + (tclStubsPtr->tcl_Import) /* 510 */ +#define Tcl_ForgetImport \ + (tclStubsPtr->tcl_ForgetImport) /* 511 */ +#define Tcl_GetCurrentNamespace \ + (tclStubsPtr->tcl_GetCurrentNamespace) /* 512 */ +#define Tcl_GetGlobalNamespace \ + (tclStubsPtr->tcl_GetGlobalNamespace) /* 513 */ +#define Tcl_FindNamespace \ + (tclStubsPtr->tcl_FindNamespace) /* 514 */ +#define Tcl_FindCommand \ + (tclStubsPtr->tcl_FindCommand) /* 515 */ +#define Tcl_GetCommandFromObj \ + (tclStubsPtr->tcl_GetCommandFromObj) /* 516 */ +#define Tcl_GetCommandFullName \ + (tclStubsPtr->tcl_GetCommandFullName) /* 517 */ +#define Tcl_FSEvalFileEx \ + (tclStubsPtr->tcl_FSEvalFileEx) /* 518 */ +#define Tcl_SetExitProc \ + (tclStubsPtr->tcl_SetExitProc) /* 519 */ +#define Tcl_LimitAddHandler \ + (tclStubsPtr->tcl_LimitAddHandler) /* 520 */ +#define Tcl_LimitRemoveHandler \ + (tclStubsPtr->tcl_LimitRemoveHandler) /* 521 */ +#define Tcl_LimitReady \ + (tclStubsPtr->tcl_LimitReady) /* 522 */ +#define Tcl_LimitCheck \ + (tclStubsPtr->tcl_LimitCheck) /* 523 */ +#define Tcl_LimitExceeded \ + (tclStubsPtr->tcl_LimitExceeded) /* 524 */ +#define Tcl_LimitSetCommands \ + (tclStubsPtr->tcl_LimitSetCommands) /* 525 */ +#define Tcl_LimitSetTime \ + (tclStubsPtr->tcl_LimitSetTime) /* 526 */ +#define Tcl_LimitSetGranularity \ + (tclStubsPtr->tcl_LimitSetGranularity) /* 527 */ +#define Tcl_LimitTypeEnabled \ + (tclStubsPtr->tcl_LimitTypeEnabled) /* 528 */ +#define Tcl_LimitTypeExceeded \ + (tclStubsPtr->tcl_LimitTypeExceeded) /* 529 */ +#define Tcl_LimitTypeSet \ + (tclStubsPtr->tcl_LimitTypeSet) /* 530 */ +#define Tcl_LimitTypeReset \ + (tclStubsPtr->tcl_LimitTypeReset) /* 531 */ +#define Tcl_LimitGetCommands \ + (tclStubsPtr->tcl_LimitGetCommands) /* 532 */ +#define Tcl_LimitGetTime \ + (tclStubsPtr->tcl_LimitGetTime) /* 533 */ +#define Tcl_LimitGetGranularity \ + (tclStubsPtr->tcl_LimitGetGranularity) /* 534 */ +#define Tcl_SaveInterpState \ + (tclStubsPtr->tcl_SaveInterpState) /* 535 */ 
+#define Tcl_RestoreInterpState \ + (tclStubsPtr->tcl_RestoreInterpState) /* 536 */ +#define Tcl_DiscardInterpState \ + (tclStubsPtr->tcl_DiscardInterpState) /* 537 */ +#define Tcl_SetReturnOptions \ + (tclStubsPtr->tcl_SetReturnOptions) /* 538 */ +#define Tcl_GetReturnOptions \ + (tclStubsPtr->tcl_GetReturnOptions) /* 539 */ +#define Tcl_IsEnsemble \ + (tclStubsPtr->tcl_IsEnsemble) /* 540 */ +#define Tcl_CreateEnsemble \ + (tclStubsPtr->tcl_CreateEnsemble) /* 541 */ +#define Tcl_FindEnsemble \ + (tclStubsPtr->tcl_FindEnsemble) /* 542 */ +#define Tcl_SetEnsembleSubcommandList \ + (tclStubsPtr->tcl_SetEnsembleSubcommandList) /* 543 */ +#define Tcl_SetEnsembleMappingDict \ + (tclStubsPtr->tcl_SetEnsembleMappingDict) /* 544 */ +#define Tcl_SetEnsembleUnknownHandler \ + (tclStubsPtr->tcl_SetEnsembleUnknownHandler) /* 545 */ +#define Tcl_SetEnsembleFlags \ + (tclStubsPtr->tcl_SetEnsembleFlags) /* 546 */ +#define Tcl_GetEnsembleSubcommandList \ + (tclStubsPtr->tcl_GetEnsembleSubcommandList) /* 547 */ +#define Tcl_GetEnsembleMappingDict \ + (tclStubsPtr->tcl_GetEnsembleMappingDict) /* 548 */ +#define Tcl_GetEnsembleUnknownHandler \ + (tclStubsPtr->tcl_GetEnsembleUnknownHandler) /* 549 */ +#define Tcl_GetEnsembleFlags \ + (tclStubsPtr->tcl_GetEnsembleFlags) /* 550 */ +#define Tcl_GetEnsembleNamespace \ + (tclStubsPtr->tcl_GetEnsembleNamespace) /* 551 */ +#define Tcl_SetTimeProc \ + (tclStubsPtr->tcl_SetTimeProc) /* 552 */ +#define Tcl_QueryTimeProc \ + (tclStubsPtr->tcl_QueryTimeProc) /* 553 */ +#define Tcl_ChannelThreadActionProc \ + (tclStubsPtr->tcl_ChannelThreadActionProc) /* 554 */ +#define Tcl_NewBignumObj \ + (tclStubsPtr->tcl_NewBignumObj) /* 555 */ +#define Tcl_DbNewBignumObj \ + (tclStubsPtr->tcl_DbNewBignumObj) /* 556 */ +#define Tcl_SetBignumObj \ + (tclStubsPtr->tcl_SetBignumObj) /* 557 */ +#define Tcl_GetBignumFromObj \ + (tclStubsPtr->tcl_GetBignumFromObj) /* 558 */ +#define Tcl_TakeBignumFromObj \ + (tclStubsPtr->tcl_TakeBignumFromObj) /* 559 */ +#define Tcl_TruncateChannel \ + (tclStubsPtr->tcl_TruncateChannel) /* 560 */ +#define Tcl_ChannelTruncateProc \ + (tclStubsPtr->tcl_ChannelTruncateProc) /* 561 */ +#define Tcl_SetChannelErrorInterp \ + (tclStubsPtr->tcl_SetChannelErrorInterp) /* 562 */ +#define Tcl_GetChannelErrorInterp \ + (tclStubsPtr->tcl_GetChannelErrorInterp) /* 563 */ +#define Tcl_SetChannelError \ + (tclStubsPtr->tcl_SetChannelError) /* 564 */ +#define Tcl_GetChannelError \ + (tclStubsPtr->tcl_GetChannelError) /* 565 */ +#define Tcl_InitBignumFromDouble \ + (tclStubsPtr->tcl_InitBignumFromDouble) /* 566 */ +#define Tcl_GetNamespaceUnknownHandler \ + (tclStubsPtr->tcl_GetNamespaceUnknownHandler) /* 567 */ +#define Tcl_SetNamespaceUnknownHandler \ + (tclStubsPtr->tcl_SetNamespaceUnknownHandler) /* 568 */ +#define Tcl_GetEncodingFromObj \ + (tclStubsPtr->tcl_GetEncodingFromObj) /* 569 */ +#define Tcl_GetEncodingSearchPath \ + (tclStubsPtr->tcl_GetEncodingSearchPath) /* 570 */ +#define Tcl_SetEncodingSearchPath \ + (tclStubsPtr->tcl_SetEncodingSearchPath) /* 571 */ +#define Tcl_GetEncodingNameFromEnvironment \ + (tclStubsPtr->tcl_GetEncodingNameFromEnvironment) /* 572 */ +#define Tcl_PkgRequireProc \ + (tclStubsPtr->tcl_PkgRequireProc) /* 573 */ +#define Tcl_AppendObjToErrorInfo \ + (tclStubsPtr->tcl_AppendObjToErrorInfo) /* 574 */ +#define Tcl_AppendLimitedToObj \ + (tclStubsPtr->tcl_AppendLimitedToObj) /* 575 */ +#define Tcl_Format \ + (tclStubsPtr->tcl_Format) /* 576 */ +#define Tcl_AppendFormatToObj \ + (tclStubsPtr->tcl_AppendFormatToObj) /* 577 */ +#define 
Tcl_ObjPrintf \ + (tclStubsPtr->tcl_ObjPrintf) /* 578 */ +#define Tcl_AppendPrintfToObj \ + (tclStubsPtr->tcl_AppendPrintfToObj) /* 579 */ +#define Tcl_CancelEval \ + (tclStubsPtr->tcl_CancelEval) /* 580 */ +#define Tcl_Canceled \ + (tclStubsPtr->tcl_Canceled) /* 581 */ +#define Tcl_CreatePipe \ + (tclStubsPtr->tcl_CreatePipe) /* 582 */ +#define Tcl_NRCreateCommand \ + (tclStubsPtr->tcl_NRCreateCommand) /* 583 */ +#define Tcl_NREvalObj \ + (tclStubsPtr->tcl_NREvalObj) /* 584 */ +#define Tcl_NREvalObjv \ + (tclStubsPtr->tcl_NREvalObjv) /* 585 */ +#define Tcl_NRCmdSwap \ + (tclStubsPtr->tcl_NRCmdSwap) /* 586 */ +#define Tcl_NRAddCallback \ + (tclStubsPtr->tcl_NRAddCallback) /* 587 */ +#define Tcl_NRCallObjProc \ + (tclStubsPtr->tcl_NRCallObjProc) /* 588 */ +#define Tcl_GetFSDeviceFromStat \ + (tclStubsPtr->tcl_GetFSDeviceFromStat) /* 589 */ +#define Tcl_GetFSInodeFromStat \ + (tclStubsPtr->tcl_GetFSInodeFromStat) /* 590 */ +#define Tcl_GetModeFromStat \ + (tclStubsPtr->tcl_GetModeFromStat) /* 591 */ +#define Tcl_GetLinkCountFromStat \ + (tclStubsPtr->tcl_GetLinkCountFromStat) /* 592 */ +#define Tcl_GetUserIdFromStat \ + (tclStubsPtr->tcl_GetUserIdFromStat) /* 593 */ +#define Tcl_GetGroupIdFromStat \ + (tclStubsPtr->tcl_GetGroupIdFromStat) /* 594 */ +#define Tcl_GetDeviceTypeFromStat \ + (tclStubsPtr->tcl_GetDeviceTypeFromStat) /* 595 */ +#define Tcl_GetAccessTimeFromStat \ + (tclStubsPtr->tcl_GetAccessTimeFromStat) /* 596 */ +#define Tcl_GetModificationTimeFromStat \ + (tclStubsPtr->tcl_GetModificationTimeFromStat) /* 597 */ +#define Tcl_GetChangeTimeFromStat \ + (tclStubsPtr->tcl_GetChangeTimeFromStat) /* 598 */ +#define Tcl_GetSizeFromStat \ + (tclStubsPtr->tcl_GetSizeFromStat) /* 599 */ +#define Tcl_GetBlocksFromStat \ + (tclStubsPtr->tcl_GetBlocksFromStat) /* 600 */ +#define Tcl_GetBlockSizeFromStat \ + (tclStubsPtr->tcl_GetBlockSizeFromStat) /* 601 */ +#define Tcl_SetEnsembleParameterList \ + (tclStubsPtr->tcl_SetEnsembleParameterList) /* 602 */ +#define Tcl_GetEnsembleParameterList \ + (tclStubsPtr->tcl_GetEnsembleParameterList) /* 603 */ +#define Tcl_ParseArgsObjv \ + (tclStubsPtr->tcl_ParseArgsObjv) /* 604 */ +#define Tcl_GetErrorLine \ + (tclStubsPtr->tcl_GetErrorLine) /* 605 */ +#define Tcl_SetErrorLine \ + (tclStubsPtr->tcl_SetErrorLine) /* 606 */ +#define Tcl_TransferResult \ + (tclStubsPtr->tcl_TransferResult) /* 607 */ +#define Tcl_InterpActive \ + (tclStubsPtr->tcl_InterpActive) /* 608 */ +#define Tcl_BackgroundException \ + (tclStubsPtr->tcl_BackgroundException) /* 609 */ +#define Tcl_ZlibDeflate \ + (tclStubsPtr->tcl_ZlibDeflate) /* 610 */ +#define Tcl_ZlibInflate \ + (tclStubsPtr->tcl_ZlibInflate) /* 611 */ +#define Tcl_ZlibCRC32 \ + (tclStubsPtr->tcl_ZlibCRC32) /* 612 */ +#define Tcl_ZlibAdler32 \ + (tclStubsPtr->tcl_ZlibAdler32) /* 613 */ +#define Tcl_ZlibStreamInit \ + (tclStubsPtr->tcl_ZlibStreamInit) /* 614 */ +#define Tcl_ZlibStreamGetCommandName \ + (tclStubsPtr->tcl_ZlibStreamGetCommandName) /* 615 */ +#define Tcl_ZlibStreamEof \ + (tclStubsPtr->tcl_ZlibStreamEof) /* 616 */ +#define Tcl_ZlibStreamChecksum \ + (tclStubsPtr->tcl_ZlibStreamChecksum) /* 617 */ +#define Tcl_ZlibStreamPut \ + (tclStubsPtr->tcl_ZlibStreamPut) /* 618 */ +#define Tcl_ZlibStreamGet \ + (tclStubsPtr->tcl_ZlibStreamGet) /* 619 */ +#define Tcl_ZlibStreamClose \ + (tclStubsPtr->tcl_ZlibStreamClose) /* 620 */ +#define Tcl_ZlibStreamReset \ + (tclStubsPtr->tcl_ZlibStreamReset) /* 621 */ +#define Tcl_SetStartupScript \ + (tclStubsPtr->tcl_SetStartupScript) /* 622 */ +#define Tcl_GetStartupScript \ + 
(tclStubsPtr->tcl_GetStartupScript) /* 623 */ +#define Tcl_CloseEx \ + (tclStubsPtr->tcl_CloseEx) /* 624 */ +#define Tcl_NRExprObj \ + (tclStubsPtr->tcl_NRExprObj) /* 625 */ +#define Tcl_NRSubstObj \ + (tclStubsPtr->tcl_NRSubstObj) /* 626 */ +#define Tcl_LoadFile \ + (tclStubsPtr->tcl_LoadFile) /* 627 */ +#define Tcl_FindSymbol \ + (tclStubsPtr->tcl_FindSymbol) /* 628 */ +#define Tcl_FSUnloadFile \ + (tclStubsPtr->tcl_FSUnloadFile) /* 629 */ +#define Tcl_ZlibStreamSetCompressionDictionary \ + (tclStubsPtr->tcl_ZlibStreamSetCompressionDictionary) /* 630 */ + +#endif /* defined(USE_TCL_STUBS) */ + +/* !END!: Do not edit above this line. */ + +#if defined(USE_TCL_STUBS) +# undef Tcl_CreateInterp +# undef Tcl_FindExecutable +# undef Tcl_GetStringResult +# undef Tcl_Init +# undef Tcl_SetPanicProc +# undef Tcl_SetVar +# undef Tcl_StaticPackage +# undef TclFSGetNativePath +# define Tcl_CreateInterp() (tclStubsPtr->tcl_CreateInterp()) +# define Tcl_GetStringResult(interp) (tclStubsPtr->tcl_GetStringResult(interp)) +# define Tcl_Init(interp) (tclStubsPtr->tcl_Init(interp)) +# define Tcl_SetPanicProc(proc) (tclStubsPtr->tcl_SetPanicProc(proc)) +# define Tcl_SetVar(interp, varName, newValue, flags) \ + (tclStubsPtr->tcl_SetVar(interp, varName, newValue, flags)) +#endif + +#if defined(_WIN32) && defined(UNICODE) +# define Tcl_FindExecutable(arg) ((Tcl_FindExecutable)((const char *)(arg))) +# define Tcl_MainEx Tcl_MainExW + EXTERN void Tcl_MainExW(int argc, wchar_t **argv, + Tcl_AppInitProc *appInitProc, Tcl_Interp *interp); +#endif + +#undef TCL_STORAGE_CLASS +#define TCL_STORAGE_CLASS DLLIMPORT + +#endif /* _TCLDECLS */ ADDED compat/tcl-8.6/generic/tclPlatDecls.h Index: compat/tcl-8.6/generic/tclPlatDecls.h ================================================================== --- compat/tcl-8.6/generic/tclPlatDecls.h +++ compat/tcl-8.6/generic/tclPlatDecls.h @@ -0,0 +1,120 @@ +/* + * tclPlatDecls.h -- + * + * Declarations of platform specific Tcl APIs. + * + * Copyright (c) 1998-1999 by Scriptics Corporation. + * All rights reserved. + */ + +#ifndef _TCLPLATDECLS +#define _TCLPLATDECLS + +#undef TCL_STORAGE_CLASS +#ifdef BUILD_tcl +# define TCL_STORAGE_CLASS DLLEXPORT +#else +# ifdef USE_TCL_STUBS +# define TCL_STORAGE_CLASS +# else +# define TCL_STORAGE_CLASS DLLIMPORT +# endif +#endif + +/* + * WARNING: This file is automatically generated by the tools/genStubs.tcl + * script. Any modifications to the function declarations below should be made + * in the generic/tcl.decls script. + */ + +/* + * TCHAR is needed here for win32, so if it is not defined yet do it here. + * This way, we don't need to include just for one define. + */ +#if (defined(_WIN32) || defined(__CYGWIN__)) && !defined(_TCHAR_DEFINED) +# if defined(_UNICODE) + typedef wchar_t TCHAR; +# else + typedef char TCHAR; +# endif +# define _TCHAR_DEFINED +#endif + +/* !BEGIN!: Do not edit below this line. 
*/ + +/* + * Exported function declarations: + */ + +#if defined(__WIN32__) || defined(__CYGWIN__) /* WIN */ +/* 0 */ +EXTERN TCHAR * Tcl_WinUtfToTChar(const char *str, int len, + Tcl_DString *dsPtr); +/* 1 */ +EXTERN char * Tcl_WinTCharToUtf(const TCHAR *str, int len, + Tcl_DString *dsPtr); +#endif /* WIN */ +#ifdef MAC_OSX_TCL /* MACOSX */ +/* 0 */ +EXTERN int Tcl_MacOSXOpenBundleResources(Tcl_Interp *interp, + const char *bundleName, int hasResourceFile, + int maxPathLen, char *libraryPath); +/* 1 */ +EXTERN int Tcl_MacOSXOpenVersionedBundleResources( + Tcl_Interp *interp, const char *bundleName, + const char *bundleVersion, + int hasResourceFile, int maxPathLen, + char *libraryPath); +#endif /* MACOSX */ + +typedef struct TclPlatStubs { + int magic; + void *hooks; + +#if defined(__WIN32__) || defined(__CYGWIN__) /* WIN */ + TCHAR * (*tcl_WinUtfToTChar) (const char *str, int len, Tcl_DString *dsPtr); /* 0 */ + char * (*tcl_WinTCharToUtf) (const TCHAR *str, int len, Tcl_DString *dsPtr); /* 1 */ +#endif /* WIN */ +#ifdef MAC_OSX_TCL /* MACOSX */ + int (*tcl_MacOSXOpenBundleResources) (Tcl_Interp *interp, const char *bundleName, int hasResourceFile, int maxPathLen, char *libraryPath); /* 0 */ + int (*tcl_MacOSXOpenVersionedBundleResources) (Tcl_Interp *interp, const char *bundleName, const char *bundleVersion, int hasResourceFile, int maxPathLen, char *libraryPath); /* 1 */ +#endif /* MACOSX */ +} TclPlatStubs; + +#ifdef __cplusplus +extern "C" { +#endif +extern const TclPlatStubs *tclPlatStubsPtr; +#ifdef __cplusplus +} +#endif + +#if defined(USE_TCL_STUBS) + +/* + * Inline function declarations: + */ + +#if defined(__WIN32__) || defined(__CYGWIN__) /* WIN */ +#define Tcl_WinUtfToTChar \ + (tclPlatStubsPtr->tcl_WinUtfToTChar) /* 0 */ +#define Tcl_WinTCharToUtf \ + (tclPlatStubsPtr->tcl_WinTCharToUtf) /* 1 */ +#endif /* WIN */ +#ifdef MAC_OSX_TCL /* MACOSX */ +#define Tcl_MacOSXOpenBundleResources \ + (tclPlatStubsPtr->tcl_MacOSXOpenBundleResources) /* 0 */ +#define Tcl_MacOSXOpenVersionedBundleResources \ + (tclPlatStubsPtr->tcl_MacOSXOpenVersionedBundleResources) /* 1 */ +#endif /* MACOSX */ + +#endif /* defined(USE_TCL_STUBS) */ + +/* !END!: Do not edit above this line. 
*/ + +#undef TCL_STORAGE_CLASS +#define TCL_STORAGE_CLASS DLLIMPORT + +#endif /* _TCLPLATDECLS */ + + ADDED compat/zlib/CMakeLists.txt Index: compat/zlib/CMakeLists.txt ================================================================== --- compat/zlib/CMakeLists.txt +++ compat/zlib/CMakeLists.txt @@ -0,0 +1,249 @@ +cmake_minimum_required(VERSION 2.4.4) +set(CMAKE_ALLOW_LOOSE_LOOP_CONSTRUCTS ON) + +project(zlib C) + +set(VERSION "1.2.8") + +option(ASM686 "Enable building i686 assembly implementation") +option(AMD64 "Enable building amd64 assembly implementation") + +set(INSTALL_BIN_DIR "${CMAKE_INSTALL_PREFIX}/bin" CACHE PATH "Installation directory for executables") +set(INSTALL_LIB_DIR "${CMAKE_INSTALL_PREFIX}/lib" CACHE PATH "Installation directory for libraries") +set(INSTALL_INC_DIR "${CMAKE_INSTALL_PREFIX}/include" CACHE PATH "Installation directory for headers") +set(INSTALL_MAN_DIR "${CMAKE_INSTALL_PREFIX}/share/man" CACHE PATH "Installation directory for manual pages") +set(INSTALL_PKGCONFIG_DIR "${CMAKE_INSTALL_PREFIX}/share/pkgconfig" CACHE PATH "Installation directory for pkgconfig (.pc) files") + +include(CheckTypeSize) +include(CheckFunctionExists) +include(CheckIncludeFile) +include(CheckCSourceCompiles) +enable_testing() + +check_include_file(sys/types.h HAVE_SYS_TYPES_H) +check_include_file(stdint.h HAVE_STDINT_H) +check_include_file(stddef.h HAVE_STDDEF_H) + +# +# Check to see if we have large file support +# +set(CMAKE_REQUIRED_DEFINITIONS -D_LARGEFILE64_SOURCE=1) +# We add these other definitions here because CheckTypeSize.cmake +# in CMake 2.4.x does not automatically do so and we want +# compatibility with CMake 2.4.x. +if(HAVE_SYS_TYPES_H) + list(APPEND CMAKE_REQUIRED_DEFINITIONS -DHAVE_SYS_TYPES_H) +endif() +if(HAVE_STDINT_H) + list(APPEND CMAKE_REQUIRED_DEFINITIONS -DHAVE_STDINT_H) +endif() +if(HAVE_STDDEF_H) + list(APPEND CMAKE_REQUIRED_DEFINITIONS -DHAVE_STDDEF_H) +endif() +check_type_size(off64_t OFF64_T) +if(HAVE_OFF64_T) + add_definitions(-D_LARGEFILE64_SOURCE=1) +endif() +set(CMAKE_REQUIRED_DEFINITIONS) # clear variable + +# +# Check for fseeko +# +check_function_exists(fseeko HAVE_FSEEKO) +if(NOT HAVE_FSEEKO) + add_definitions(-DNO_FSEEKO) +endif() + +# +# Check for unistd.h +# +check_include_file(unistd.h Z_HAVE_UNISTD_H) + +if(MSVC) + set(CMAKE_DEBUG_POSTFIX "d") + add_definitions(-D_CRT_SECURE_NO_DEPRECATE) + add_definitions(-D_CRT_NONSTDC_NO_DEPRECATE) + include_directories(${CMAKE_CURRENT_SOURCE_DIR}) +endif() + +if(NOT CMAKE_CURRENT_SOURCE_DIR STREQUAL CMAKE_CURRENT_BINARY_DIR) + # If we're doing an out of source build and the user has a zconf.h + # in their source tree... 
+ if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/zconf.h) + message(STATUS "Renaming") + message(STATUS " ${CMAKE_CURRENT_SOURCE_DIR}/zconf.h") + message(STATUS "to 'zconf.h.included' because this file is included with zlib") + message(STATUS "but CMake generates it automatically in the build directory.") + file(RENAME ${CMAKE_CURRENT_SOURCE_DIR}/zconf.h ${CMAKE_CURRENT_SOURCE_DIR}/zconf.h.included) + endif() +endif() + +set(ZLIB_PC ${CMAKE_CURRENT_BINARY_DIR}/zlib.pc) +configure_file( ${CMAKE_CURRENT_SOURCE_DIR}/zlib.pc.cmakein + ${ZLIB_PC} @ONLY) +configure_file( ${CMAKE_CURRENT_SOURCE_DIR}/zconf.h.cmakein + ${CMAKE_CURRENT_BINARY_DIR}/zconf.h @ONLY) +include_directories(${CMAKE_CURRENT_BINARY_DIR} ${CMAKE_SOURCE_DIR}) + + +#============================================================================ +# zlib +#============================================================================ + +set(ZLIB_PUBLIC_HDRS + ${CMAKE_CURRENT_BINARY_DIR}/zconf.h + zlib.h +) +set(ZLIB_PRIVATE_HDRS + crc32.h + deflate.h + gzguts.h + inffast.h + inffixed.h + inflate.h + inftrees.h + trees.h + zutil.h +) +set(ZLIB_SRCS + adler32.c + compress.c + crc32.c + deflate.c + gzclose.c + gzlib.c + gzread.c + gzwrite.c + inflate.c + infback.c + inftrees.c + inffast.c + trees.c + uncompr.c + zutil.c +) + +if(NOT MINGW) + set(ZLIB_DLL_SRCS + win32/zlib1.rc # If present will override custom build rule below. + ) +endif() + +if(CMAKE_COMPILER_IS_GNUCC) + if(ASM686) + set(ZLIB_ASMS contrib/asm686/match.S) + elseif (AMD64) + set(ZLIB_ASMS contrib/amd64/amd64-match.S) + endif () + + if(ZLIB_ASMS) + add_definitions(-DASMV) + set_source_files_properties(${ZLIB_ASMS} PROPERTIES LANGUAGE C COMPILE_FLAGS -DNO_UNDERLINE) + endif() +endif() + +if(MSVC) + if(ASM686) + ENABLE_LANGUAGE(ASM_MASM) + set(ZLIB_ASMS + contrib/masmx86/inffas32.asm + contrib/masmx86/match686.asm + ) + elseif (AMD64) + ENABLE_LANGUAGE(ASM_MASM) + set(ZLIB_ASMS + contrib/masmx64/gvmat64.asm + contrib/masmx64/inffasx64.asm + ) + endif() + + if(ZLIB_ASMS) + add_definitions(-DASMV -DASMINF) + endif() +endif() + +# parse the full version number from zlib.h and include in ZLIB_FULL_VERSION +file(READ ${CMAKE_CURRENT_SOURCE_DIR}/zlib.h _zlib_h_contents) +string(REGEX REPLACE ".*#define[ \t]+ZLIB_VERSION[ \t]+\"([-0-9A-Za-z.]+)\".*" + "\\1" ZLIB_FULL_VERSION ${_zlib_h_contents}) + +if(MINGW) + # This gets us DLL resource information when compiling on MinGW. + if(NOT CMAKE_RC_COMPILER) + set(CMAKE_RC_COMPILER windres.exe) + endif() + + add_custom_command(OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/zlib1rc.obj + COMMAND ${CMAKE_RC_COMPILER} + -D GCC_WINDRES + -I ${CMAKE_CURRENT_SOURCE_DIR} + -I ${CMAKE_CURRENT_BINARY_DIR} + -o ${CMAKE_CURRENT_BINARY_DIR}/zlib1rc.obj + -i ${CMAKE_CURRENT_SOURCE_DIR}/win32/zlib1.rc) + set(ZLIB_DLL_SRCS ${CMAKE_CURRENT_BINARY_DIR}/zlib1rc.obj) +endif(MINGW) + +add_library(zlib SHARED ${ZLIB_SRCS} ${ZLIB_ASMS} ${ZLIB_DLL_SRCS} ${ZLIB_PUBLIC_HDRS} ${ZLIB_PRIVATE_HDRS}) +add_library(zlibstatic STATIC ${ZLIB_SRCS} ${ZLIB_ASMS} ${ZLIB_PUBLIC_HDRS} ${ZLIB_PRIVATE_HDRS}) +set_target_properties(zlib PROPERTIES DEFINE_SYMBOL ZLIB_DLL) +set_target_properties(zlib PROPERTIES SOVERSION 1) + +if(NOT CYGWIN) + # This property causes shared libraries on Linux to have the full version + # encoded into their final filename. We disable this on Cygwin because + # it causes cygz-${ZLIB_FULL_VERSION}.dll to be created when cygz.dll + # seems to be the default. 
+ # + # This has no effect with MSVC, on that platform the version info for + # the DLL comes from the resource file win32/zlib1.rc + set_target_properties(zlib PROPERTIES VERSION ${ZLIB_FULL_VERSION}) +endif() + +if(UNIX) + # On unix-like platforms the library is almost always called libz + set_target_properties(zlib zlibstatic PROPERTIES OUTPUT_NAME z) + if(NOT APPLE) + set_target_properties(zlib PROPERTIES LINK_FLAGS "-Wl,--version-script,\"${CMAKE_CURRENT_SOURCE_DIR}/zlib.map\"") + endif() +elseif(BUILD_SHARED_LIBS AND WIN32) + # Creates zlib1.dll when building shared library version + set_target_properties(zlib PROPERTIES SUFFIX "1.dll") +endif() + +if(NOT SKIP_INSTALL_LIBRARIES AND NOT SKIP_INSTALL_ALL ) + install(TARGETS zlib zlibstatic + RUNTIME DESTINATION "${INSTALL_BIN_DIR}" + ARCHIVE DESTINATION "${INSTALL_LIB_DIR}" + LIBRARY DESTINATION "${INSTALL_LIB_DIR}" ) +endif() +if(NOT SKIP_INSTALL_HEADERS AND NOT SKIP_INSTALL_ALL ) + install(FILES ${ZLIB_PUBLIC_HDRS} DESTINATION "${INSTALL_INC_DIR}") +endif() +if(NOT SKIP_INSTALL_FILES AND NOT SKIP_INSTALL_ALL ) + install(FILES zlib.3 DESTINATION "${INSTALL_MAN_DIR}/man3") +endif() +if(NOT SKIP_INSTALL_FILES AND NOT SKIP_INSTALL_ALL ) + install(FILES ${ZLIB_PC} DESTINATION "${INSTALL_PKGCONFIG_DIR}") +endif() + +#============================================================================ +# Example binaries +#============================================================================ + +add_executable(example test/example.c) +target_link_libraries(example zlib) +add_test(example example) + +add_executable(minigzip test/minigzip.c) +target_link_libraries(minigzip zlib) + +if(HAVE_OFF64_T) + add_executable(example64 test/example.c) + target_link_libraries(example64 zlib) + set_target_properties(example64 PROPERTIES COMPILE_FLAGS "-D_FILE_OFFSET_BITS=64") + add_test(example64 example64) + + add_executable(minigzip64 test/minigzip.c) + target_link_libraries(minigzip64 zlib) + set_target_properties(minigzip64 PROPERTIES COMPILE_FLAGS "-D_FILE_OFFSET_BITS=64") +endif() ADDED compat/zlib/ChangeLog Index: compat/zlib/ChangeLog ================================================================== --- compat/zlib/ChangeLog +++ compat/zlib/ChangeLog @@ -0,0 +1,1472 @@ + + ChangeLog file for zlib + +Changes in 1.2.8 (28 Apr 2013) +- Update contrib/minizip/iowin32.c for Windows RT [Vollant] +- Do not force Z_CONST for C++ +- Clean up contrib/vstudio [Ro§] +- Correct spelling error in zlib.h +- Fix mixed line endings in contrib/vstudio + +Changes in 1.2.7.3 (13 Apr 2013) +- Fix version numbers and DLL names in contrib/vstudio/*/zlib.rc + +Changes in 1.2.7.2 (13 Apr 2013) +- Change check for a four-byte type back to hexadecimal +- Fix typo in win32/Makefile.msc +- Add casts in gzwrite.c for pointer differences + +Changes in 1.2.7.1 (24 Mar 2013) +- Replace use of unsafe string functions with snprintf if available +- Avoid including stddef.h on Windows for Z_SOLO compile [Niessink] +- Fix gzgetc undefine when Z_PREFIX set [Turk] +- Eliminate use of mktemp in Makefile (not always available) +- Fix bug in 'F' mode for gzopen() +- Add inflateGetDictionary() function +- Correct comment in deflate.h +- Use _snprintf for snprintf in Microsoft C +- On Darwin, only use /usr/bin/libtool if libtool is not Apple +- Delete "--version" file if created by "ar --version" [Richard G.] 
+- Fix configure check for veracity of compiler error return codes +- Fix CMake compilation of static lib for MSVC2010 x64 +- Remove unused variable in infback9.c +- Fix argument checks in gzlog_compress() and gzlog_write() +- Clean up the usage of z_const and respect const usage within zlib +- Clean up examples/gzlog.[ch] comparisons of different types +- Avoid shift equal to bits in type (caused endless loop) +- Fix unintialized value bug in gzputc() introduced by const patches +- Fix memory allocation error in examples/zran.c [Nor] +- Fix bug where gzopen(), gzclose() would write an empty file +- Fix bug in gzclose() when gzwrite() runs out of memory +- Check for input buffer malloc failure in examples/gzappend.c +- Add note to contrib/blast to use binary mode in stdio +- Fix comparisons of differently signed integers in contrib/blast +- Check for invalid code length codes in contrib/puff +- Fix serious but very rare decompression bug in inftrees.c +- Update inflateBack() comments, since inflate() can be faster +- Use underscored I/O function names for WINAPI_FAMILY +- Add _tr_flush_bits to the external symbols prefixed by --zprefix +- Add contrib/vstudio/vc10 pre-build step for static only +- Quote --version-script argument in CMakeLists.txt +- Don't specify --version-script on Apple platforms in CMakeLists.txt +- Fix casting error in contrib/testzlib/testzlib.c +- Fix types in contrib/minizip to match result of get_crc_table() +- Simplify contrib/vstudio/vc10 with 'd' suffix +- Add TOP support to win32/Makefile.msc +- Suport i686 and amd64 assembler builds in CMakeLists.txt +- Fix typos in the use of _LARGEFILE64_SOURCE in zconf.h +- Add vc11 and vc12 build files to contrib/vstudio +- Add gzvprintf() as an undocumented function in zlib +- Fix configure for Sun shell +- Remove runtime check in configure for four-byte integer type +- Add casts and consts to ease user conversion to C++ +- Add man pages for minizip and miniunzip +- In Makefile uninstall, don't rm if preceding cd fails +- Do not return Z_BUF_ERROR if deflateParam() has nothing to write + +Changes in 1.2.7 (2 May 2012) +- Replace use of memmove() with a simple copy for portability +- Test for existence of strerror +- Restore gzgetc_ for backward compatibility with 1.2.6 +- Fix build with non-GNU make on Solaris +- Require gcc 4.0 or later on Mac OS X to use the hidden attribute +- Include unistd.h for Watcom C +- Use __WATCOMC__ instead of __WATCOM__ +- Do not use the visibility attribute if NO_VIZ defined +- Improve the detection of no hidden visibility attribute +- Avoid using __int64 for gcc or solo compilation +- Cast to char * in gzprintf to avoid warnings [Zinser] +- Fix make_vms.com for VAX [Zinser] +- Don't use library or built-in byte swaps +- Simplify test and use of gcc hidden attribute +- Fix bug in gzclose_w() when gzwrite() fails to allocate memory +- Add "x" (O_EXCL) and "e" (O_CLOEXEC) modes support to gzopen() +- Fix bug in test/minigzip.c for configure --solo +- Fix contrib/vstudio project link errors [Mohanathas] +- Add ability to choose the builder in make_vms.com [Schweda] +- Add DESTDIR support to mingw32 win32/Makefile.gcc +- Fix comments in win32/Makefile.gcc for proper usage +- Allow overriding the default install locations for cmake +- Generate and install the pkg-config file with cmake +- Build both a static and a shared version of zlib with cmake +- Include version symbols for cmake builds +- If using cmake with MSVC, add the source directory to the includes +- Remove unneeded EXTRA_CFLAGS from 
win32/Makefile.gcc [Truta] +- Move obsolete emx makefile to old [Truta] +- Allow the use of -Wundef when compiling or using zlib +- Avoid the use of the -u option with mktemp +- Improve inflate() documentation on the use of Z_FINISH +- Recognize clang as gcc +- Add gzopen_w() in Windows for wide character path names +- Rename zconf.h in CMakeLists.txt to move it out of the way +- Add source directory in CMakeLists.txt for building examples +- Look in build directory for zlib.pc in CMakeLists.txt +- Remove gzflags from zlibvc.def in vc9 and vc10 +- Fix contrib/minizip compilation in the MinGW environment +- Update ./configure for Solaris, support --64 [Mooney] +- Remove -R. from Solaris shared build (possible security issue) +- Avoid race condition for parallel make (-j) running example +- Fix type mismatch between get_crc_table() and crc_table +- Fix parsing of version with "-" in CMakeLists.txt [Snider, Ziegler] +- Fix the path to zlib.map in CMakeLists.txt +- Force the native libtool in Mac OS X to avoid GNU libtool [Beebe] +- Add instructions to win32/Makefile.gcc for shared install [Torri] + +Changes in 1.2.6.1 (12 Feb 2012) +- Avoid the use of the Objective-C reserved name "id" +- Include io.h in gzguts.h for Microsoft compilers +- Fix problem with ./configure --prefix and gzgetc macro +- Include gz_header definition when compiling zlib solo +- Put gzflags() functionality back in zutil.c +- Avoid library header include in crc32.c for Z_SOLO +- Use name in GCC_CLASSIC as C compiler for coverage testing, if set +- Minor cleanup in contrib/minizip/zip.c [Vollant] +- Update make_vms.com [Zinser] +- Remove unnecessary gzgetc_ function +- Use optimized byte swap operations for Microsoft and GNU [Snyder] +- Fix minor typo in zlib.h comments [Rzesniowiecki] + +Changes in 1.2.6 (29 Jan 2012) +- Update the Pascal interface in contrib/pascal +- Fix function numbers for gzgetc_ in zlibvc.def files +- Fix configure.ac for contrib/minizip [Schiffer] +- Fix large-entry detection in minizip on 64-bit systems [Schiffer] +- Have ./configure use the compiler return code for error indication +- Fix CMakeLists.txt for cross compilation [McClure] +- Fix contrib/minizip/zip.c for 64-bit architectures [Dalsnes] +- Fix compilation of contrib/minizip on FreeBSD [Marquez] +- Correct suggested usages in win32/Makefile.msc [Shachar, Horvath] +- Include io.h for Turbo C / Borland C on all platforms [Truta] +- Make version explicit in contrib/minizip/configure.ac [Bosmans] +- Avoid warning for no encryption in contrib/minizip/zip.c [Vollant] +- Minor cleanup up contrib/minizip/unzip.c [Vollant] +- Fix bug when compiling minizip with C++ [Vollant] +- Protect for long name and extra fields in contrib/minizip [Vollant] +- Avoid some warnings in contrib/minizip [Vollant] +- Add -I../.. -L../.. 
to CFLAGS for minizip and miniunzip +- Add missing libs to minizip linker command +- Add support for VPATH builds in contrib/minizip +- Add an --enable-demos option to contrib/minizip/configure +- Add the generation of configure.log by ./configure +- Exit when required parameters not provided to win32/Makefile.gcc +- Have gzputc return the character written instead of the argument +- Use the -m option on ldconfig for BSD systems [Tobias] +- Correct in zlib.map when deflateResetKeep was added + +Changes in 1.2.5.3 (15 Jan 2012) +- Restore gzgetc function for binary compatibility +- Do not use _lseeki64 under Borland C++ [Truta] +- Update win32/Makefile.msc to build test/*.c [Truta] +- Remove old/visualc6 given CMakefile and other alternatives +- Update AS400 build files and documentation [Monnerat] +- Update win32/Makefile.gcc to build test/*.c [Truta] +- Permit stronger flushes after Z_BLOCK flushes +- Avoid extraneous empty blocks when doing empty flushes +- Permit Z_NULL arguments to deflatePending +- Allow deflatePrime() to insert bits in the middle of a stream +- Remove second empty static block for Z_PARTIAL_FLUSH +- Write out all of the available bits when using Z_BLOCK +- Insert the first two strings in the hash table after a flush + +Changes in 1.2.5.2 (17 Dec 2011) +- fix ld error: unable to find version dependency 'ZLIB_1.2.5' +- use relative symlinks for shared libs +- Avoid searching past window for Z_RLE strategy +- Assure that high-water mark initialization is always applied in deflate +- Add assertions to fill_window() in deflate.c to match comments +- Update python link in README +- Correct spelling error in gzread.c +- Fix bug in gzgets() for a concatenated empty gzip stream +- Correct error in comment for gz_make() +- Change gzread() and related to ignore junk after gzip streams +- Allow gzread() and related to continue after gzclearerr() +- Allow gzrewind() and gzseek() after a premature end-of-file +- Simplify gzseek() now that raw after gzip is ignored +- Change gzgetc() to a macro for speed (~40% speedup in testing) +- Fix gzclose() to return the actual error last encountered +- Always add large file support for windows +- Include zconf.h for windows large file support +- Include zconf.h.cmakein for windows large file support +- Update zconf.h.cmakein on make distclean +- Merge vestigial vsnprintf determination from zutil.h to gzguts.h +- Clarify how gzopen() appends in zlib.h comments +- Correct documentation of gzdirect() since junk at end now ignored +- Add a transparent write mode to gzopen() when 'T' is in the mode +- Update python link in zlib man page +- Get inffixed.h and MAKEFIXED result to match +- Add a ./config --solo option to make zlib subset with no libary use +- Add undocumented inflateResetKeep() function for CAB file decoding +- Add --cover option to ./configure for gcc coverage testing +- Add #define ZLIB_CONST option to use const in the z_stream interface +- Add comment to gzdopen() in zlib.h to use dup() when using fileno() +- Note behavior of uncompress() to provide as much data as it can +- Add files in contrib/minizip to aid in building libminizip +- Split off AR options in Makefile.in and configure +- Change ON macro to Z_ARG to avoid application conflicts +- Facilitate compilation with Borland C++ for pragmas and vsnprintf +- Include io.h for Turbo C / Borland C++ +- Move example.c and minigzip.c to test/ +- Simplify incomplete code table filling in inflate_table() +- Remove code from inflate.c and infback.c that is impossible to execute +- 
Test the inflate code with full coverage +- Allow deflateSetDictionary, inflateSetDictionary at any time (in raw) +- Add deflateResetKeep and fix inflateResetKeep to retain dictionary +- Fix gzwrite.c to accommodate reduced memory zlib compilation +- Have inflate() with Z_FINISH avoid the allocation of a window +- Do not set strm->adler when doing raw inflate +- Fix gzeof() to behave just like feof() when read is not past end of file +- Fix bug in gzread.c when end-of-file is reached +- Avoid use of Z_BUF_ERROR in gz* functions except for premature EOF +- Document gzread() capability to read concurrently written files +- Remove hard-coding of resource compiler in CMakeLists.txt [Blammo] + +Changes in 1.2.5.1 (10 Sep 2011) +- Update FAQ entry on shared builds (#13) +- Avoid symbolic argument to chmod in Makefile.in +- Fix bug and add consts in contrib/puff [Oberhumer] +- Update contrib/puff/zeros.raw test file to have all block types +- Add full coverage test for puff in contrib/puff/Makefile +- Fix static-only-build install in Makefile.in +- Fix bug in unzGetCurrentFileInfo() in contrib/minizip [Kuno] +- Add libz.a dependency to shared in Makefile.in for parallel builds +- Spell out "number" (instead of "nb") in zlib.h for total_in, total_out +- Replace $(...) with `...` in configure for non-bash sh [Bowler] +- Add darwin* to Darwin* and solaris* to SunOS\ 5* in configure [Groffen] +- Add solaris* to Linux* in configure to allow gcc use [Groffen] +- Add *bsd* to Linux* case in configure [Bar-Lev] +- Add inffast.obj to dependencies in win32/Makefile.msc +- Correct spelling error in deflate.h [Kohler] +- Change libzdll.a again to libz.dll.a (!) in win32/Makefile.gcc +- Add test to configure for GNU C looking for gcc in output of $cc -v +- Add zlib.pc generation to win32/Makefile.gcc [Weigelt] +- Fix bug in zlib.h for _FILE_OFFSET_BITS set and _LARGEFILE64_SOURCE not +- Add comment in zlib.h that adler32_combine with len2 < 0 makes no sense +- Make NO_DIVIDE option in adler32.c much faster (thanks to John Reiser) +- Make stronger test in zconf.h to include unistd.h for LFS +- Apply Darwin patches for 64-bit file offsets to contrib/minizip [Slack] +- Fix zlib.h LFS support when Z_PREFIX used +- Add updated as400 support (removed from old) [Monnerat] +- Avoid deflate sensitivity to volatile input data +- Avoid division in adler32_combine for NO_DIVIDE +- Clarify the use of Z_FINISH with deflateBound() amount of space +- Set binary for output file in puff.c +- Use u4 type for crc_table to avoid conversion warnings +- Apply casts in zlib.h to avoid conversion warnings +- Add OF to prototypes for adler32_combine_ and crc32_combine_ [Miller] +- Improve inflateSync() documentation to note indeterminancy +- Add deflatePending() function to return the amount of pending output +- Correct the spelling of "specification" in FAQ [Randers-Pehrson] +- Add a check in configure for stdarg.h, use for gzprintf() +- Check that pointers fit in ints when gzprint() compiled old style +- Add dummy name before $(SHAREDLIBV) in Makefile [Bar-Lev, Bowler] +- Delete line in configure that adds -L. libz.a to LDFLAGS [Weigelt] +- Add debug records in assmebler code [Londer] +- Update RFC references to use http://tools.ietf.org/html/... 
[Li] +- Add --archs option, use of libtool to configure for Mac OS X [Borstel] + +Changes in 1.2.5 (19 Apr 2010) +- Disable visibility attribute in win32/Makefile.gcc [Bar-Lev] +- Default to libdir as sharedlibdir in configure [Nieder] +- Update copyright dates on modified source files +- Update trees.c to be able to generate modified trees.h +- Exit configure for MinGW, suggesting win32/Makefile.gcc +- Check for NULL path in gz_open [Homurlu] + +Changes in 1.2.4.5 (18 Apr 2010) +- Set sharedlibdir in configure [Torok] +- Set LDFLAGS in Makefile.in [Bar-Lev] +- Avoid mkdir objs race condition in Makefile.in [Bowler] +- Add ZLIB_INTERNAL in front of internal inter-module functions and arrays +- Define ZLIB_INTERNAL to hide internal functions and arrays for GNU C +- Don't use hidden attribute when it is a warning generator (e.g. Solaris) + +Changes in 1.2.4.4 (18 Apr 2010) +- Fix CROSS_PREFIX executable testing, CHOST extract, mingw* [Torok] +- Undefine _LARGEFILE64_SOURCE in zconf.h if it is zero, but not if empty +- Try to use bash or ksh regardless of functionality of /bin/sh +- Fix configure incompatibility with NetBSD sh +- Remove attempt to run under bash or ksh since have better NetBSD fix +- Fix win32/Makefile.gcc for MinGW [Bar-Lev] +- Add diagnostic messages when using CROSS_PREFIX in configure +- Added --sharedlibdir option to configure [Weigelt] +- Use hidden visibility attribute when available [Frysinger] + +Changes in 1.2.4.3 (10 Apr 2010) +- Only use CROSS_PREFIX in configure for ar and ranlib if they exist +- Use CROSS_PREFIX for nm [Bar-Lev] +- Assume _LARGEFILE64_SOURCE defined is equivalent to true +- Avoid use of undefined symbols in #if with && and || +- Make *64 prototypes in gzguts.h consistent with functions +- Add -shared load option for MinGW in configure [Bowler] +- Move z_off64_t to public interface, use instead of off64_t +- Remove ! 
from shell test in configure (not portable to Solaris) +- Change +0 macro tests to -0 for possibly increased portability + +Changes in 1.2.4.2 (9 Apr 2010) +- Add consistent carriage returns to readme.txt's in masmx86 and masmx64 +- Really provide prototypes for *64 functions when building without LFS +- Only define unlink() in minigzip.c if unistd.h not included +- Update README to point to contrib/vstudio project files +- Move projects/vc6 to old/ and remove projects/ +- Include stdlib.h in minigzip.c for setmode() definition under WinCE +- Clean up assembler builds in win32/Makefile.msc [Rowe] +- Include sys/types.h for Microsoft for off_t definition +- Fix memory leak on error in gz_open() +- Symbolize nm as $NM in configure [Weigelt] +- Use TEST_LDSHARED instead of LDSHARED to link test programs [Weigelt] +- Add +0 to _FILE_OFFSET_BITS and _LFS64_LARGEFILE in case not defined +- Fix bug in gzeof() to take into account unused input data +- Avoid initialization of structures with variables in puff.c +- Updated win32/README-WIN32.txt [Rowe] + +Changes in 1.2.4.1 (28 Mar 2010) +- Remove the use of [a-z] constructs for sed in configure [gentoo 310225] +- Remove $(SHAREDLIB) from LIBS in Makefile.in [Creech] +- Restore "for debugging" comment on sprintf() in gzlib.c +- Remove fdopen for MVS from gzguts.h +- Put new README-WIN32.txt in win32 [Rowe] +- Add check for shell to configure and invoke another shell if needed +- Fix big fat stinking bug in gzseek() on uncompressed files +- Remove vestigial F_OPEN64 define in zutil.h +- Set and check the value of _LARGEFILE_SOURCE and _LARGEFILE64_SOURCE +- Avoid errors on non-LFS systems when applications define LFS macros +- Set EXE to ".exe" in configure for MINGW [Kahle] +- Match crc32() in crc32.c exactly to the prototype in zlib.h [Sherrill] +- Add prefix for cross-compilation in win32/makefile.gcc [Bar-Lev] +- Add DLL install in win32/makefile.gcc [Bar-Lev] +- Allow Linux* or linux* from uname in configure [Bar-Lev] +- Allow ldconfig to be redefined in configure and Makefile.in [Bar-Lev] +- Add cross-compilation prefixes to configure [Bar-Lev] +- Match type exactly in gz_load() invocation in gzread.c +- Match type exactly of zcalloc() in zutil.c to zlib.h alloc_func +- Provide prototypes for *64 functions when building zlib without LFS +- Don't use -lc when linking shared library on MinGW +- Remove errno.h check in configure and vestigial errno code in zutil.h + +Changes in 1.2.4 (14 Mar 2010) +- Fix VER3 extraction in configure for no fourth subversion +- Update zlib.3, add docs to Makefile.in to make .pdf out of it +- Add zlib.3.pdf to distribution +- Don't set error code in gzerror() if passed pointer is NULL +- Apply destination directory fixes to CMakeLists.txt [Lowman] +- Move #cmakedefine's to a new zconf.in.cmakein +- Restore zconf.h for builds that don't use configure or cmake +- Add distclean to dummy Makefile for convenience +- Update and improve INDEX, README, and FAQ +- Update CMakeLists.txt for the return of zconf.h [Lowman] +- Update contrib/vstudio/vc9 and vc10 [Vollant] +- Change libz.dll.a back to libzdll.a in win32/Makefile.gcc +- Apply license and readme changes to contrib/asm686 [Raiter] +- Check file name lengths and add -c option in minigzip.c [Li] +- Update contrib/amd64 and contrib/masmx86/ [Vollant] +- Avoid use of "eof" parameter in trees.c to not shadow library variable +- Update make_vms.com for removal of zlibdefs.h [Zinser] +- Update assembler code and vstudio projects in contrib [Vollant] +- Remove outdated 
assembler code contrib/masm686 and contrib/asm586 +- Remove old vc7 and vc8 from contrib/vstudio +- Update win32/Makefile.msc, add ZLIB_VER_SUBREVISION [Rowe] +- Fix memory leaks in gzclose_r() and gzclose_w(), file leak in gz_open() +- Add contrib/gcc_gvmat64 for longest_match and inflate_fast [Vollant] +- Remove *64 functions from win32/zlib.def (they're not 64-bit yet) +- Fix bug in void-returning vsprintf() case in gzwrite.c +- Fix name change from inflate.h in contrib/inflate86/inffas86.c +- Check if temporary file exists before removing in make_vms.com [Zinser] +- Fix make install and uninstall for --static option +- Fix usage of _MSC_VER in gzguts.h and zutil.h [Truta] +- Update readme.txt in contrib/masmx64 and masmx86 to assemble + +Changes in 1.2.3.9 (21 Feb 2010) +- Expunge gzio.c +- Move as400 build information to old +- Fix updates in contrib/minizip and contrib/vstudio +- Add const to vsnprintf test in configure to avoid warnings [Weigelt] +- Delete zconf.h (made by configure) [Weigelt] +- Change zconf.in.h to zconf.h.in per convention [Weigelt] +- Check for NULL buf in gzgets() +- Return empty string for gzgets() with len == 1 (like fgets()) +- Fix description of gzgets() in zlib.h for end-of-file, NULL return +- Update minizip to 1.1 [Vollant] +- Avoid MSVC loss of data warnings in gzread.c, gzwrite.c +- Note in zlib.h that gzerror() should be used to distinguish from EOF +- Remove use of snprintf() from gzlib.c +- Fix bug in gzseek() +- Update contrib/vstudio, adding vc9 and vc10 [Kuno, Vollant] +- Fix zconf.h generation in CMakeLists.txt [Lowman] +- Improve comments in zconf.h where modified by configure + +Changes in 1.2.3.8 (13 Feb 2010) +- Clean up text files (tabs, trailing whitespace, etc.) [Oberhumer] +- Use z_off64_t in gz_zero() and gz_skip() to match state->skip +- Avoid comparison problem when sizeof(int) == sizeof(z_off64_t) +- Revert to Makefile.in from 1.2.3.6 (live with the clutter) +- Fix missing error return in gzflush(), add zlib.h note +- Add *64 functions to zlib.map [Levin] +- Fix signed/unsigned comparison in gz_comp() +- Use SFLAGS when testing shared linking in configure +- Add --64 option to ./configure to use -m64 with gcc +- Fix ./configure --help to correctly name options +- Have make fail if a test fails [Levin] +- Avoid buffer overrun in contrib/masmx64/gvmat64.asm [Simpson] +- Remove assembler object files from contrib + +Changes in 1.2.3.7 (24 Jan 2010) +- Always gzopen() with O_LARGEFILE if available +- Fix gzdirect() to work immediately after gzopen() or gzdopen() +- Make gzdirect() more precise when the state changes while reading +- Improve zlib.h documentation in many places +- Catch memory allocation failure in gz_open() +- Complete close operation if seek forward in gzclose_w() fails +- Return Z_ERRNO from gzclose_r() if close() fails +- Return Z_STREAM_ERROR instead of EOF for gzclose() being passed NULL +- Return zero for gzwrite() errors to match zlib.h description +- Return -1 on gzputs() error to match zlib.h description +- Add zconf.in.h to allow recovery from configure modification [Weigelt] +- Fix static library permissions in Makefile.in [Weigelt] +- Avoid warnings in configure tests that hide functionality [Weigelt] +- Add *BSD and DragonFly to Linux case in configure [gentoo 123571] +- Change libzdll.a to libz.dll.a in win32/Makefile.gcc [gentoo 288212] +- Avoid access of uninitialized data for first inflateReset2 call [Gomes] +- Keep object files in subdirectories to reduce the clutter somewhat +- Remove default Makefile 
and zlibdefs.h, add dummy Makefile +- Add new external functions to Z_PREFIX, remove duplicates, z_z_ -> z_ +- Remove zlibdefs.h completely -- modify zconf.h instead + +Changes in 1.2.3.6 (17 Jan 2010) +- Avoid void * arithmetic in gzread.c and gzwrite.c +- Make compilers happier with const char * for gz_error message +- Avoid unused parameter warning in inflate.c +- Avoid signed-unsigned comparison warning in inflate.c +- Indent #pragma's for traditional C +- Fix usage of strwinerror() in glib.c, change to gz_strwinerror() +- Correct email address in configure for system options +- Update make_vms.com and add make_vms.com to contrib/minizip [Zinser] +- Update zlib.map [Brown] +- Fix Makefile.in for Solaris 10 make of example64 and minizip64 [Torok] +- Apply various fixes to CMakeLists.txt [Lowman] +- Add checks on len in gzread() and gzwrite() +- Add error message for no more room for gzungetc() +- Remove zlib version check in gzwrite() +- Defer compression of gzprintf() result until need to +- Use snprintf() in gzdopen() if available +- Remove USE_MMAP configuration determination (only used by minigzip) +- Remove examples/pigz.c (available separately) +- Update examples/gun.c to 1.6 + +Changes in 1.2.3.5 (8 Jan 2010) +- Add space after #if in zutil.h for some compilers +- Fix relatively harmless bug in deflate_fast() [Exarevsky] +- Fix same problem in deflate_slow() +- Add $(SHAREDLIBV) to LIBS in Makefile.in [Brown] +- Add deflate_rle() for faster Z_RLE strategy run-length encoding +- Add deflate_huff() for faster Z_HUFFMAN_ONLY encoding +- Change name of "write" variable in inffast.c to avoid library collisions +- Fix premature EOF from gzread() in gzio.c [Brown] +- Use zlib header window size if windowBits is 0 in inflateInit2() +- Remove compressBound() call in deflate.c to avoid linking compress.o +- Replace use of errno in gz* with functions, support WinCE [Alves] +- Provide alternative to perror() in minigzip.c for WinCE [Alves] +- Don't use _vsnprintf on later versions of MSVC [Lowman] +- Add CMake build script and input file [Lowman] +- Update contrib/minizip to 1.1 [Svensson, Vollant] +- Moved nintendods directory from contrib to . +- Replace gzio.c with a new set of routines with the same functionality +- Add gzbuffer(), gzoffset(), gzclose_r(), gzclose_w() as part of above +- Update contrib/minizip to 1.1b +- Change gzeof() to return 0 on error instead of -1 to agree with zlib.h + +Changes in 1.2.3.4 (21 Dec 2009) +- Use old school .SUFFIXES in Makefile.in for FreeBSD compatibility +- Update comments in configure and Makefile.in for default --shared +- Fix test -z's in configure [Marquess] +- Build examplesh and minigzipsh when not testing +- Change NULL's to Z_NULL's in deflate.c and in comments in zlib.h +- Import LDFLAGS from the environment in configure +- Fix configure to populate SFLAGS with discovered CFLAGS options +- Adapt make_vms.com to the new Makefile.in [Zinser] +- Add zlib2ansi script for C++ compilation [Marquess] +- Add _FILE_OFFSET_BITS=64 test to make test (when applicable) +- Add AMD64 assembler code for longest match to contrib [Teterin] +- Include options from $SFLAGS when doing $LDSHARED +- Simplify 64-bit file support by introducing z_off64_t type +- Make shared object files in objs directory to work around old Sun cc +- Use only three-part version number for Darwin shared compiles +- Add rc option to ar in Makefile.in for when ./configure not run +- Add -WI,-rpath,. 
to LDFLAGS for OSF 1 V4* +- Set LD_LIBRARYN32_PATH for SGI IRIX shared compile +- Protect against _FILE_OFFSET_BITS being defined when compiling zlib +- Rename Makefile.in targets allstatic to static and allshared to shared +- Fix static and shared Makefile.in targets to be independent +- Correct error return bug in gz_open() by setting state [Brown] +- Put spaces before ;;'s in configure for better sh compatibility +- Add pigz.c (parallel implementation of gzip) to examples/ +- Correct constant in crc32.c to UL [Leventhal] +- Reject negative lengths in crc32_combine() +- Add inflateReset2() function to work like inflateEnd()/inflateInit2() +- Include sys/types.h for _LARGEFILE64_SOURCE [Brown] +- Correct typo in doc/algorithm.txt [Janik] +- Fix bug in adler32_combine() [Zhu] +- Catch missing-end-of-block-code error in all inflates and in puff + Assures that random input to inflate eventually results in an error +- Added enough.c (calculation of ENOUGH for inftrees.h) to examples/ +- Update ENOUGH and its usage to reflect discovered bounds +- Fix gzerror() error report on empty input file [Brown] +- Add ush casts in trees.c to avoid pedantic runtime errors +- Fix typo in zlib.h uncompress() description [Reiss] +- Correct inflate() comments with regard to automatic header detection +- Remove deprecation comment on Z_PARTIAL_FLUSH (it stays) +- Put new version of gzlog (2.0) in examples with interruption recovery +- Add puff compile option to permit invalid distance-too-far streams +- Add puff TEST command options, ability to read piped input +- Prototype the *64 functions in zlib.h when _FILE_OFFSET_BITS == 64, but + _LARGEFILE64_SOURCE not defined +- Fix Z_FULL_FLUSH to truly erase the past by resetting s->strstart +- Fix deflateSetDictionary() to use all 32K for output consistency +- Remove extraneous #define MIN_LOOKAHEAD in deflate.c (in deflate.h) +- Clear bytes after deflate lookahead to avoid use of uninitialized data +- Change a limit in inftrees.c to be more transparent to Coverity Prevent +- Update win32/zlib.def with exported symbols from zlib.h +- Correct spelling errors in zlib.h [Willem, Sobrado] +- Allow Z_BLOCK for deflate() to force a new block +- Allow negative bits in inflatePrime() to delete existing bit buffer +- Add Z_TREES flush option to inflate() to return at end of trees +- Add inflateMark() to return current state information for random access +- Add Makefile for NintendoDS to contrib [Costa] +- Add -w in configure compile tests to avoid spurious warnings [Beucler] +- Fix typos in zlib.h comments for deflateSetDictionary() +- Fix EOF detection in transparent gzread() [Maier] + +Changes in 1.2.3.3 (2 October 2006) +- Make --shared the default for configure, add a --static option +- Add compile option to permit invalid distance-too-far streams +- Add inflateUndermine() function which is required to enable above +- Remove use of "this" variable name for C++ compatibility [Marquess] +- Add testing of shared library in make test, if shared library built +- Use ftello() and fseeko() if available instead of ftell() and fseek() +- Provide two versions of all functions that use the z_off_t type for + binary compatibility -- a normal version and a 64-bit offset version, + per the Large File Support Extension when _LARGEFILE64_SOURCE is + defined; use the 64-bit versions by default when _FILE_OFFSET_BITS + is defined to be 64 +- Add a --uname= option to configure to perhaps help with cross-compiling + +Changes in 1.2.3.2 (3 September 2006) +- Turn off silly Borland warnings 
[Hay] +- Use off64_t and define _LARGEFILE64_SOURCE when present +- Fix missing dependency on inffixed.h in Makefile.in +- Rig configure --shared to build both shared and static [Teredesai, Truta] +- Remove zconf.in.h and instead create a new zlibdefs.h file +- Fix contrib/minizip/unzip.c non-encrypted after encrypted [Vollant] +- Add treebuild.xml (see http://treebuild.metux.de/) [Weigelt] + +Changes in 1.2.3.1 (16 August 2006) +- Add watcom directory with OpenWatcom make files [Daniel] +- Remove #undef of FAR in zconf.in.h for MVS [Fedtke] +- Update make_vms.com [Zinser] +- Use -fPIC for shared build in configure [Teredesai, Nicholson] +- Use only major version number for libz.so on IRIX and OSF1 [Reinholdtsen] +- Use fdopen() (not _fdopen()) for Interix in zutil.h [Bäck] +- Add some FAQ entries about the contrib directory +- Update the MVS question in the FAQ +- Avoid extraneous reads after EOF in gzio.c [Brown] +- Correct spelling of "successfully" in gzio.c [Randers-Pehrson] +- Add comments to zlib.h about gzerror() usage [Brown] +- Set extra flags in gzip header in gzopen() like deflate() does +- Make configure options more compatible with double-dash conventions + [Weigelt] +- Clean up compilation under Solaris SunStudio cc [Rowe, Reinholdtsen] +- Fix uninstall target in Makefile.in [Truta] +- Add pkgconfig support [Weigelt] +- Use $(DESTDIR) macro in Makefile.in [Reinholdtsen, Weigelt] +- Replace set_data_type() with a more accurate detect_data_type() in + trees.c, according to the txtvsbin.txt document [Truta] +- Swap the order of #include <stdio.h> and #include "zlib.h" in + gzio.c, example.c and minigzip.c [Truta] +- Shut up annoying VS2005 warnings about standard C deprecation [Rowe, + Truta] (where?) +- Fix target "clean" from win32/Makefile.bor [Truta] +- Create .pdb and .manifest files in win32/makefile.msc [Ziegler, Rowe] +- Update zlib www home address in win32/DLL_FAQ.txt [Truta] +- Update contrib/masmx86/inffas32.asm for VS2005 [Vollant, Van Wassenhove] +- Enable browse info in the "Debug" and "ASM Debug" configurations in + the Visual C++ 6 project, and set (non-ASM) "Debug" as default [Truta] +- Add pkgconfig support [Weigelt] +- Add ZLIB_VER_MAJOR, ZLIB_VER_MINOR and ZLIB_VER_REVISION in zlib.h, + for use in win32/zlib1.rc [Polushin, Rowe, Truta] +- Add a document that explains the new text detection scheme to + doc/txtvsbin.txt [Truta] +- Add rfc1950.txt, rfc1951.txt and rfc1952.txt to doc/ [Truta] +- Move algorithm.txt into doc/ [Truta] +- Synchronize FAQ with website +- Fix compressBound(), was low for some pathological cases [Fearnley] +- Take into account wrapper variations in deflateBound() +- Set examples/zpipe.c input and output to binary mode for Windows +- Update examples/zlib_how.html with new zpipe.c (also web site) +- Fix some warnings in examples/gzlog.c and examples/zran.c (it seems + that gcc became pickier in 4.0) +- Add zlib.map for Linux: "All symbols from zlib-1.1.4 remain + un-versioned, the patch adds versioning only for symbols introduced in + zlib-1.2.0 or later. It also declares as local those symbols which are + not designed to be exported."
[Levin] +- Update Z_PREFIX list in zconf.in.h, add --zprefix option to configure +- Do not initialize global static by default in trees.c, add a response + NO_INIT_GLOBAL_POINTERS to initialize them if needed [Marquess] +- Don't use strerror() in gzio.c under WinCE [Yakimov] +- Don't use errno.h in zutil.h under WinCE [Yakimov] +- Move arguments for AR to its usage to allow replacing ar [Marot] +- Add HAVE_VISIBILITY_PRAGMA in zconf.in.h for Mozilla [Randers-Pehrson] +- Improve inflateInit() and inflateInit2() documentation +- Fix structure size comment in inflate.h +- Change configure help option from --h* to --help [Santos] + +Changes in 1.2.3 (18 July 2005) +- Apply security vulnerability fixes to contrib/infback9 as well +- Clean up some text files (carriage returns, trailing space) +- Update testzlib, vstudio, masmx64, and masmx86 in contrib [Vollant] + +Changes in 1.2.2.4 (11 July 2005) +- Add inflatePrime() function for starting inflation at bit boundary +- Avoid some Visual C warnings in deflate.c +- Avoid more silly Visual C warnings in inflate.c and inftrees.c for 64-bit + compile +- Fix some spelling errors in comments [Betts] +- Correct inflateInit2() error return documentation in zlib.h +- Add zran.c example of compressed data random access to examples + directory, shows use of inflatePrime() +- Fix cast for assignments to strm->state in inflate.c and infback.c +- Fix zlibCompileFlags() in zutil.c to use 1L for long shifts [Oberhumer] +- Move declarations of gf2 functions to right place in crc32.c [Oberhumer] +- Add cast in trees.c t avoid a warning [Oberhumer] +- Avoid some warnings in fitblk.c, gun.c, gzjoin.c in examples [Oberhumer] +- Update make_vms.com [Zinser] +- Initialize state->write in inflateReset() since copied in inflate_fast() +- Be more strict on incomplete code sets in inflate_table() and increase + ENOUGH and MAXD -- this repairs a possible security vulnerability for + invalid inflate input. Thanks to Tavis Ormandy and Markus Oberhumer for + discovering the vulnerability and providing test cases. 
+- Add ia64 support to configure for HP-UX [Smith] +- Add error return to gzread() for format or i/o error [Levin] +- Use malloc.h for OS/2 [Necasek] + +Changes in 1.2.2.3 (27 May 2005) +- Replace 1U constants in inflate.c and inftrees.c for 64-bit compile +- Typecast fread() return values in gzio.c [Vollant] +- Remove trailing space in minigzip.c outmode (VC++ can't deal with it) +- Fix crc check bug in gzread() after gzungetc() [Heiner] +- Add the deflateTune() function to adjust internal compression parameters +- Add a fast gzip decompressor, gun.c, to examples (use of inflateBack) +- Remove an incorrect assertion in examples/zpipe.c +- Add C++ wrapper in infback9.h [Donais] +- Fix bug in inflateCopy() when decoding fixed codes +- Note in zlib.h how much deflateSetDictionary() actually uses +- Remove USE_DICT_HEAD in deflate.c (would mess up inflate if used) +- Add _WIN32_WCE to define WIN32 in zconf.in.h [Spencer] +- Don't include stderr.h or errno.h for _WIN32_WCE in zutil.h [Spencer] +- Add gzdirect() function to indicate transparent reads +- Update contrib/minizip [Vollant] +- Fix compilation of deflate.c when both ASMV and FASTEST [Oberhumer] +- Add casts in crc32.c to avoid warnings [Oberhumer] +- Add contrib/masmx64 [Vollant] +- Update contrib/asm586, asm686, masmx86, testzlib, vstudio [Vollant] + +Changes in 1.2.2.2 (30 December 2004) +- Replace structure assignments in deflate.c and inflate.c with zmemcpy to + avoid implicit memcpy calls (portability for no-library compilation) +- Increase sprintf() buffer size in gzdopen() to allow for large numbers +- Add INFLATE_STRICT to check distances against zlib header +- Improve WinCE errno handling and comments [Chang] +- Remove comment about no gzip header processing in FAQ +- Add Z_FIXED strategy option to deflateInit2() to force fixed trees +- Add updated make_vms.com [Coghlan], update README +- Create a new "examples" directory, move gzappend.c there, add zpipe.c, + fitblk.c, gzlog.[ch], gzjoin.c, and zlib_how.html. 
+- Add FAQ entry and comments in deflate.c on uninitialized memory access +- Add Solaris 9 make options in configure [Gilbert] +- Allow strerror() usage in gzio.c for STDC +- Fix DecompressBuf in contrib/delphi/ZLib.pas [ManChesTer] +- Update contrib/masmx86/inffas32.asm and gvmat32.asm [Vollant] +- Use z_off_t for adler32_combine() and crc32_combine() lengths +- Make adler32() much faster for small len +- Use OS_CODE in deflate() default gzip header + +Changes in 1.2.2.1 (31 October 2004) +- Allow inflateSetDictionary() call for raw inflate +- Fix inflate header crc check bug for file names and comments +- Add deflateSetHeader() and gz_header structure for custom gzip headers +- Add inflateGetheader() to retrieve gzip headers +- Add crc32_combine() and adler32_combine() functions +- Add alloc_func, free_func, in_func, out_func to Z_PREFIX list +- Use zstreamp consistently in zlib.h (inflate_back functions) +- Remove GUNZIP condition from definition of inflate_mode in inflate.h + and in contrib/inflate86/inffast.S [Truta, Anderson] +- Add support for AMD64 in contrib/inflate86/inffas86.c [Anderson] +- Update projects/README.projects and projects/visualc6 [Truta] +- Update win32/DLL_FAQ.txt [Truta] +- Avoid warning under NO_GZCOMPRESS in gzio.c; fix typo [Truta] +- Deprecate Z_ASCII; use Z_TEXT instead [Truta] +- Use a new algorithm for setting strm->data_type in trees.c [Truta] +- Do not define an exit() prototype in zutil.c unless DEBUG defined +- Remove prototype of exit() from zutil.c, example.c, minigzip.c [Truta] +- Add comment in zlib.h for Z_NO_FLUSH parameter to deflate() +- Fix Darwin build version identification [Peterson] + +Changes in 1.2.2 (3 October 2004) +- Update zlib.h comments on gzip in-memory processing +- Set adler to 1 in inflateReset() to support Java test suite [Walles] +- Add contrib/dotzlib [Ravn] +- Update win32/DLL_FAQ.txt [Truta] +- Update contrib/minizip [Vollant] +- Move contrib/visual-basic.txt to old/ [Truta] +- Fix assembler builds in projects/visualc6/ [Truta] + +Changes in 1.2.1.2 (9 September 2004) +- Update INDEX file +- Fix trees.c to update strm->data_type (no one ever noticed!) 
+- Fix bug in error case in inflate.c, infback.c, and infback9.c [Brown] +- Add "volatile" to crc table flag declaration (for DYNAMIC_CRC_TABLE) +- Add limited multitasking protection to DYNAMIC_CRC_TABLE +- Add NO_vsnprintf for VMS in zutil.h [Mozilla] +- Don't declare strerror() under VMS [Mozilla] +- Add comment to DYNAMIC_CRC_TABLE to use get_crc_table() to initialize +- Update contrib/ada [Anisimkov] +- Update contrib/minizip [Vollant] +- Fix configure to not hardcode directories for Darwin [Peterson] +- Fix gzio.c to not return error on empty files [Brown] +- Fix indentation; update version in contrib/delphi/ZLib.pas and + contrib/pascal/zlibpas.pas [Truta] +- Update mkasm.bat in contrib/masmx86 [Truta] +- Update contrib/untgz [Truta] +- Add projects/README.projects [Truta] +- Add project for MS Visual C++ 6.0 in projects/visualc6 [Cadieux, Truta] +- Update win32/DLL_FAQ.txt [Truta] +- Update list of Z_PREFIX symbols in zconf.h [Randers-Pehrson, Truta] +- Remove an unnecessary assignment to curr in inftrees.c [Truta] +- Add OS/2 to exe builds in configure [Poltorak] +- Remove err dummy parameter in zlib.h [Kientzle] + +Changes in 1.2.1.1 (9 January 2004) +- Update email address in README +- Several FAQ updates +- Fix a big fat bug in inftrees.c that prevented decoding valid + dynamic blocks with only literals and no distance codes -- + Thanks to "Hot Emu" for the bug report and sample file +- Add a note to puff.c on no distance codes case. + +Changes in 1.2.1 (17 November 2003) +- Remove a tab in contrib/gzappend/gzappend.c +- Update some interfaces in contrib for new zlib functions +- Update zlib version number in some contrib entries +- Add Windows CE definition for ptrdiff_t in zutil.h [Mai, Truta] +- Support shared libraries on Hurd and KFreeBSD [Brown] +- Fix error in NO_DIVIDE option of adler32.c + +Changes in 1.2.0.8 (4 November 2003) +- Update version in contrib/delphi/ZLib.pas and contrib/pascal/zlibpas.pas +- Add experimental NO_DIVIDE #define in adler32.c + - Possibly faster on some processors (let me know if it is) +- Correct Z_BLOCK to not return on first inflate call if no wrap +- Fix strm->data_type on inflate() return to correctly indicate EOB +- Add deflatePrime() function for appending in the middle of a byte +- Add contrib/gzappend for an example of appending to a stream +- Update win32/DLL_FAQ.txt [Truta] +- Delete Turbo C comment in README [Truta] +- Improve some indentation in zconf.h [Truta] +- Fix infinite loop on bad input in configure script [Church] +- Fix gzeof() for concatenated gzip files [Johnson] +- Add example to contrib/visual-basic.txt [Michael B.] 
+- Add -p to mkdir's in Makefile.in [vda] +- Fix configure to properly detect presence or lack of printf functions +- Add AS400 support [Monnerat] +- Add a little Cygwin support [Wilson] + +Changes in 1.2.0.7 (21 September 2003) +- Correct some debug formats in contrib/infback9 +- Cast a type in a debug statement in trees.c +- Change search and replace delimiter in configure from % to # [Beebe] +- Update contrib/untgz to 0.2 with various fixes [Truta] +- Add build support for Amiga [Nikl] +- Remove some directories in old that have been updated to 1.2 +- Add dylib building for Mac OS X in configure and Makefile.in +- Remove old distribution stuff from Makefile +- Update README to point to DLL_FAQ.txt, and add comment on Mac OS X +- Update links in README + +Changes in 1.2.0.6 (13 September 2003) +- Minor FAQ updates +- Update contrib/minizip to 1.00 [Vollant] +- Remove test of gz functions in example.c when GZ_COMPRESS defined [Truta] +- Update POSTINC comment for 68060 [Nikl] +- Add contrib/infback9 with deflate64 decoding (unsupported) +- For MVS define NO_vsnprintf and undefine FAR [van Burik] +- Add pragma for fdopen on MVS [van Burik] + +Changes in 1.2.0.5 (8 September 2003) +- Add OF to inflateBackEnd() declaration in zlib.h +- Remember start when using gzdopen in the middle of a file +- Use internal off_t counters in gz* functions to properly handle seeks +- Perform more rigorous check for distance-too-far in inffast.c +- Add Z_BLOCK flush option to return from inflate at block boundary +- Set strm->data_type on return from inflate + - Indicate bits unused, if at block boundary, and if in last block +- Replace size_t with ptrdiff_t in crc32.c, and check for correct size +- Add condition so old NO_DEFLATE define still works for compatibility +- FAQ update regarding the Windows DLL [Truta] +- INDEX update: add qnx entry, remove aix entry [Truta] +- Install zlib.3 into mandir [Wilson] +- Move contrib/zlib_dll_FAQ.txt to win32/DLL_FAQ.txt; update [Truta] +- Adapt the zlib interface to the new DLL convention guidelines [Truta] +- Introduce ZLIB_WINAPI macro to allow the export of functions using + the WINAPI calling convention, for Visual Basic [Vollant, Truta] +- Update msdos and win32 scripts and makefiles [Truta] +- Export symbols by name, not by ordinal, in win32/zlib.def [Truta] +- Add contrib/ada [Anisimkov] +- Move asm files from contrib/vstudio/vc70_32 to contrib/asm386 [Truta] +- Rename contrib/asm386 to contrib/masmx86 [Truta, Vollant] +- Add contrib/masm686 [Truta] +- Fix offsets in contrib/inflate86 and contrib/masmx86/inffas32.asm + [Truta, Vollant] +- Update contrib/delphi; rename to contrib/pascal; add example [Truta] +- Remove contrib/delphi2; add a new contrib/delphi [Truta] +- Avoid inclusion of the nonstandard in contrib/iostream, + and fix some method prototypes [Truta] +- Fix the ZCR_SEED2 constant to avoid warnings in contrib/minizip + [Truta] +- Avoid the use of backslash (\) in contrib/minizip [Vollant] +- Fix file time handling in contrib/untgz; update makefiles [Truta] +- Update contrib/vstudio/vc70_32 to comply with the new DLL guidelines + [Vollant] +- Remove contrib/vstudio/vc15_16 [Vollant] +- Rename contrib/vstudio/vc70_32 to contrib/vstudio/vc7 [Truta] +- Update README.contrib [Truta] +- Invert the assignment order of match_head and s->prev[...] 
in + INSERT_STRING [Truta] +- Compare TOO_FAR with 32767 instead of 32768, to avoid 16-bit warnings + [Truta] +- Compare function pointers with 0, not with NULL or Z_NULL [Truta] +- Fix prototype of syncsearch in inflate.c [Truta] +- Introduce ASMINF macro to be enabled when using an ASM implementation + of inflate_fast [Truta] +- Change NO_DEFLATE to NO_GZCOMPRESS [Truta] +- Modify test_gzio in example.c to take a single file name as a + parameter [Truta] +- Exit the example.c program if gzopen fails [Truta] +- Add type casts around strlen in example.c [Truta] +- Remove casting to sizeof in minigzip.c; give a proper type + to the variable compared with SUFFIX_LEN [Truta] +- Update definitions of STDC and STDC99 in zconf.h [Truta] +- Synchronize zconf.h with the new Windows DLL interface [Truta] +- Use SYS16BIT instead of __32BIT__ to distinguish between + 16- and 32-bit platforms [Truta] +- Use far memory allocators in small 16-bit memory models for + Turbo C [Truta] +- Add info about the use of ASMV, ASMINF and ZLIB_WINAPI in + zlibCompileFlags [Truta] +- Cygwin has vsnprintf [Wilson] +- In Windows16, OS_CODE is 0, as in MSDOS [Truta] +- In Cygwin, OS_CODE is 3 (Unix), not 11 (Windows32) [Wilson] + +Changes in 1.2.0.4 (10 August 2003) +- Minor FAQ updates +- Be more strict when checking inflateInit2's windowBits parameter +- Change NO_GUNZIP compile option to NO_GZIP to cover deflate as well +- Add gzip wrapper option to deflateInit2 using windowBits +- Add updated QNX rule in configure and qnx directory [Bonnefoy] +- Make inflate distance-too-far checks more rigorous +- Clean up FAR usage in inflate +- Add casting to sizeof() in gzio.c and minigzip.c + +Changes in 1.2.0.3 (19 July 2003) +- Fix silly error in gzungetc() implementation [Vollant] +- Update contrib/minizip and contrib/vstudio [Vollant] +- Fix printf format in example.c +- Correct cdecl support in zconf.in.h [Anisimkov] +- Minor FAQ updates + +Changes in 1.2.0.2 (13 July 2003) +- Add ZLIB_VERNUM in zlib.h for numerical preprocessor comparisons +- Attempt to avoid warnings in crc32.c for pointer-int conversion +- Add AIX to configure, remove aix directory [Bakker] +- Add some casts to minigzip.c +- Improve checking after insecure sprintf() or vsprintf() calls +- Remove #elif's from crc32.c +- Change leave label to inf_leave in inflate.c and infback.c to avoid + library conflicts +- Remove inflate gzip decoding by default--only enable gzip decoding by + special request for stricter backward compatibility +- Add zlibCompileFlags() function to return compilation information +- More typecasting in deflate.c to avoid warnings +- Remove leading underscore from _Capital #defines [Truta] +- Fix configure to link shared library when testing +- Add some Windows CE target adjustments [Mai] +- Remove #define ZLIB_DLL in zconf.h [Vollant] +- Add zlib.3 [Rodgers] +- Update RFC URL in deflate.c and algorithm.txt [Mai] +- Add zlib_dll_FAQ.txt to contrib [Truta] +- Add UL to some constants [Truta] +- Update minizip and vstudio [Vollant] +- Remove vestigial NEED_DUMMY_RETURN from zconf.in.h +- Expand use of NO_DUMMY_DECL to avoid all dummy structures +- Added iostream3 to contrib [Schwardt] +- Replace rewind() with fseek() for WinCE [Truta] +- Improve setting of zlib format compression level flags + - Report 0 for huffman and rle strategies and for level == 0 or 1 + - Report 2 only for level == 6 +- Only deal with 64K limit when necessary at compile time [Truta] +- Allow TOO_FAR check to be turned off at compile time [Truta] +- Add 
gzclearerr() function [Souza] +- Add gzungetc() function + +Changes in 1.2.0.1 (17 March 2003) +- Add Z_RLE strategy for run-length encoding [Truta] + - When Z_RLE requested, restrict matches to distance one + - Update zlib.h, minigzip.c, gzopen(), gzdopen() for Z_RLE +- Correct FASTEST compilation to allow level == 0 +- Clean up what gets compiled for FASTEST +- Incorporate changes to zconf.in.h [Vollant] + - Refine detection of Turbo C need for dummy returns + - Refine ZLIB_DLL compilation + - Include additional header file on VMS for off_t typedef +- Try to use _vsnprintf where it supplants vsprintf [Vollant] +- Add some casts in inffast.c +- Enchance comments in zlib.h on what happens if gzprintf() tries to + write more than 4095 bytes before compression +- Remove unused state from inflateBackEnd() +- Remove exit(0) from minigzip.c, example.c +- Get rid of all those darn tabs +- Add "check" target to Makefile.in that does the same thing as "test" +- Add "mostlyclean" and "maintainer-clean" targets to Makefile.in +- Update contrib/inflate86 [Anderson] +- Update contrib/testzlib, contrib/vstudio, contrib/minizip [Vollant] +- Add msdos and win32 directories with makefiles [Truta] +- More additions and improvements to the FAQ + +Changes in 1.2.0 (9 March 2003) +- New and improved inflate code + - About 20% faster + - Does not allocate 32K window unless and until needed + - Automatically detects and decompresses gzip streams + - Raw inflate no longer needs an extra dummy byte at end + - Added inflateBack functions using a callback interface--even faster + than inflate, useful for file utilities (gzip, zip) + - Added inflateCopy() function to record state for random access on + externally generated deflate streams (e.g. in gzip files) + - More readable code (I hope) +- New and improved crc32() + - About 50% faster, thanks to suggestions from Rodney Brown +- Add deflateBound() and compressBound() functions +- Fix memory leak in deflateInit2() +- Permit setting dictionary for raw deflate (for parallel deflate) +- Fix const declaration for gzwrite() +- Check for some malloc() failures in gzio.c +- Fix bug in gzopen() on single-byte file 0x1f +- Fix bug in gzread() on concatenated file with 0x1f at end of buffer + and next buffer doesn't start with 0x8b +- Fix uncompress() to return Z_DATA_ERROR on truncated input +- Free memory at end of example.c +- Remove MAX #define in trees.c (conflicted with some libraries) +- Fix static const's in deflate.c, gzio.c, and zutil.[ch] +- Declare malloc() and free() in gzio.c if STDC not defined +- Use malloc() instead of calloc() in zutil.c if int big enough +- Define STDC for AIX +- Add aix/ with approach for compiling shared library on AIX +- Add HP-UX support for shared libraries in configure +- Add OpenUNIX support for shared libraries in configure +- Use $cc instead of gcc to build shared library +- Make prefix directory if needed when installing +- Correct Macintosh avoidance of typedef Byte in zconf.h +- Correct Turbo C memory allocation when under Linux +- Use libz.a instead of -lz in Makefile (assure use of compiled library) +- Update configure to check for snprintf or vsnprintf functions and their + return value, warn during make if using an insecure function +- Fix configure problem with compile-time knowledge of HAVE_UNISTD_H that + is lost when library is used--resolution is to build new zconf.h +- Documentation improvements (in zlib.h): + - Document raw deflate and inflate + - Update RFCs URL + - Point out that zlib and gzip formats are 
different + - Note that Z_BUF_ERROR is not fatal + - Document string limit for gzprintf() and possible buffer overflow + - Note requirement on avail_out when flushing + - Note permitted values of flush parameter of inflate() +- Add some FAQs (and even answers) to the FAQ +- Add contrib/inflate86/ for x86 faster inflate +- Add contrib/blast/ for PKWare Data Compression Library decompression +- Add contrib/puff/ simple inflate for deflate format description + +Changes in 1.1.4 (11 March 2002) +- ZFREE was repeated on same allocation on some error conditions. + This creates a security problem described in + http://www.zlib.org/advisory-2002-03-11.txt +- Returned incorrect error (Z_MEM_ERROR) on some invalid data +- Avoid accesses before window for invalid distances with inflate window + less than 32K. +- force windowBits > 8 to avoid a bug in the encoder for a window size + of 256 bytes. (A complete fix will be available in 1.1.5). + +Changes in 1.1.3 (9 July 1998) +- fix "an inflate input buffer bug that shows up on rare but persistent + occasions" (Mark) +- fix gzread and gztell for concatenated .gz files (Didier Le Botlan) +- fix gzseek(..., SEEK_SET) in write mode +- fix crc check after a gzeek (Frank Faubert) +- fix miniunzip when the last entry in a zip file is itself a zip file + (J Lillge) +- add contrib/asm586 and contrib/asm686 (Brian Raiter) + See http://www.muppetlabs.com/~breadbox/software/assembly.html +- add support for Delphi 3 in contrib/delphi (Bob Dellaca) +- add support for C++Builder 3 and Delphi 3 in contrib/delphi2 (Davide Moretti) +- do not exit prematurely in untgz if 0 at start of block (Magnus Holmgren) +- use macro EXTERN instead of extern to support DLL for BeOS (Sander Stoks) +- added a FAQ file + +- Support gzdopen on Mac with Metrowerks (Jason Linhart) +- Do not redefine Byte on Mac (Brad Pettit & Jason Linhart) +- define SEEK_END too if SEEK_SET is not defined (Albert Chin-A-Young) +- avoid some warnings with Borland C (Tom Tanner) +- fix a problem in contrib/minizip/zip.c for 16-bit MSDOS (Gilles Vollant) +- emulate utime() for WIN32 in contrib/untgz (Gilles Vollant) +- allow several arguments to configure (Tim Mooney, Frodo Looijaard) +- use libdir and includedir in Makefile.in (Tim Mooney) +- support shared libraries on OSF1 V4 (Tim Mooney) +- remove so_locations in "make clean" (Tim Mooney) +- fix maketree.c compilation error (Glenn, Mark) +- Python interface to zlib now in Python 1.5 (Jeremy Hylton) +- new Makefile.riscos (Rich Walker) +- initialize static descriptors in trees.c for embedded targets (Nick Smith) +- use "foo-gz" in example.c for RISCOS and VMS (Nick Smith) +- add the OS/2 files in Makefile.in too (Andrew Zabolotny) +- fix fdopen and halloc macros for Microsoft C 6.0 (Tom Lane) +- fix maketree.c to allow clean compilation of inffixed.h (Mark) +- fix parameter check in deflateCopy (Gunther Nikl) +- cleanup trees.c, use compressed_len only in debug mode (Christian Spieler) +- Many portability patches by Christian Spieler: + . zutil.c, zutil.h: added "const" for zmem* + . Make_vms.com: fixed some typos + . Make_vms.com: msdos/Makefile.*: removed zutil.h from some dependency lists + . msdos/Makefile.msc: remove "default rtl link library" info from obj files + . msdos/Makefile.*: use model-dependent name for the built zlib library + . 
msdos/Makefile.emx, nt/Makefile.emx, nt/Makefile.gcc: + new makefiles, for emx (DOS/OS2), emx&rsxnt and mingw32 (Windows 9x / NT) +- use define instead of typedef for Bytef also for MSC small/medium (Tom Lane) +- replace __far with _far for better portability (Christian Spieler, Tom Lane) +- fix test for errno.h in configure (Tim Newsham) + +Changes in 1.1.2 (19 March 98) +- added contrib/minzip, mini zip and unzip based on zlib (Gilles Vollant) + See http://www.winimage.com/zLibDll/unzip.html +- preinitialize the inflate tables for fixed codes, to make the code + completely thread safe (Mark) +- some simplifications and slight speed-up to the inflate code (Mark) +- fix gzeof on non-compressed files (Allan Schrum) +- add -std1 option in configure for OSF1 to fix gzprintf (Martin Mokrejs) +- use default value of 4K for Z_BUFSIZE for 16-bit MSDOS (Tim Wegner + Glenn) +- added os2/Makefile.def and os2/zlib.def (Andrew Zabolotny) +- add shared lib support for UNIX_SV4.2MP (MATSUURA Takanori) +- do not wrap extern "C" around system includes (Tom Lane) +- mention zlib binding for TCL in README (Andreas Kupries) +- added amiga/Makefile.pup for Amiga powerUP SAS/C PPC (Andreas Kleinert) +- allow "make install prefix=..." even after configure (Glenn Randers-Pehrson) +- allow "configure --prefix $HOME" (Tim Mooney) +- remove warnings in example.c and gzio.c (Glenn Randers-Pehrson) +- move Makefile.sas to amiga/Makefile.sas + +Changes in 1.1.1 (27 Feb 98) +- fix macros _tr_tally_* in deflate.h for debug mode (Glenn Randers-Pehrson) +- remove block truncation heuristic which had very marginal effect for zlib + (smaller lit_bufsize than in gzip 1.2.4) and degraded a little the + compression ratio on some files. This also allows inlining _tr_tally for + matches in deflate_slow. +- added msdos/Makefile.w32 for WIN32 Microsoft Visual C++ (Bob Frazier) + +Changes in 1.1.0 (24 Feb 98) +- do not return STREAM_END prematurely in inflate (John Bowler) +- revert to the zlib 1.0.8 inflate to avoid the gcc 2.8.0 bug (Jeremy Buhler) +- compile with -DFASTEST to get compression code optimized for speed only +- in minigzip, try mmap'ing the input file first (Miguel Albrecht) +- increase size of I/O buffers in minigzip.c and gzio.c (not a big gain + on Sun but significant on HP) + +- add a pointer to experimental unzip library in README (Gilles Vollant) +- initialize variable gcc in configure (Chris Herborth) + +Changes in 1.0.9 (17 Feb 1998) +- added gzputs and gzgets functions +- do not clear eof flag in gzseek (Mark Diekhans) +- fix gzseek for files in transparent mode (Mark Diekhans) +- do not assume that vsprintf returns the number of bytes written (Jens Krinke) +- replace EXPORT with ZEXPORT to avoid conflict with other programs +- added compress2 in zconf.h, zlib.def, zlib.dnt +- new asm code from Gilles Vollant in contrib/asm386 +- simplify the inflate code (Mark): + . Replace ZALLOC's in huft_build() with single ZALLOC in inflate_blocks_new() + . ZALLOC the length list in inflate_trees_fixed() instead of using stack + . ZALLOC the value area for huft_build() instead of using stack + . 
Simplify Z_FINISH check in inflate() + +- Avoid gcc 2.8.0 comparison bug a little differently than zlib 1.0.8 +- in inftrees.c, avoid cc -O bug on HP (Farshid Elahi) +- in zconf.h move the ZLIB_DLL stuff earlier to avoid problems with + the declaration of FAR (Gilles Vollant) +- install libz.so* with mode 755 (executable) instead of 644 (Marc Lehmann) +- read_buf buf parameter of type Bytef* instead of charf* +- zmemcpy parameters are of type Bytef*, not charf* (Joseph Strout) +- do not redeclare unlink in minigzip.c for WIN32 (John Bowler) +- fix check for presence of directories in "make install" (Ian Willis) + +Changes in 1.0.8 (27 Jan 1998) +- fixed offsets in contrib/asm386/gvmat32.asm (Gilles Vollant) +- fix gzgetc and gzputc for big endian systems (Markus Oberhumer) +- added compress2() to allow setting the compression level +- include sys/types.h to get off_t on some systems (Marc Lehmann & QingLong) +- use constant arrays for the static trees in trees.c instead of computing + them at run time (thanks to Ken Raeburn for this suggestion). To create + trees.h, compile with GEN_TREES_H and run "make test". +- check return code of example in "make test" and display result +- pass minigzip command line options to file_compress +- simplifying code of inflateSync to avoid gcc 2.8 bug + +- support CC="gcc -Wall" in configure -s (QingLong) +- avoid a flush caused by ftell in gzopen for write mode (Ken Raeburn) +- fix test for shared library support to avoid compiler warnings +- zlib.lib -> zlib.dll in msdos/zlib.rc (Gilles Vollant) +- check for TARGET_OS_MAC in addition to MACOS (Brad Pettit) +- do not use fdopen for Metrowerks on Mac (Brad Pettit) +- add checks for gzputc and gzputc in example.c +- avoid warnings in gzio.c and deflate.c (Andreas Kleinert) +- use const for the CRC table (Ken Raeburn) +- fixed "make uninstall" for shared libraries +- use Tracev instead of Trace in infblock.c +- in example.c use correct compressed length for test_sync +- suppress +vnocompatwarnings in configure for HPUX (not always supported) + +Changes in 1.0.7 (20 Jan 1998) +- fix gzseek which was broken in write mode +- return error for gzseek to negative absolute position +- fix configure for Linux (Chun-Chung Chen) +- increase stack space for MSC (Tim Wegner) +- get_crc_table and inflateSyncPoint are EXPORTed (Gilles Vollant) +- define EXPORTVA for gzprintf (Gilles Vollant) +- added man page zlib.3 (Rick Rodgers) +- for contrib/untgz, fix makedir() and improve Makefile + +- check gzseek in write mode in example.c +- allocate extra buffer for seeks only if gzseek is actually called +- avoid signed/unsigned comparisons (Tim Wegner, Gilles Vollant) +- add inflateSyncPoint in zconf.h +- fix list of exported functions in nt/zlib.dnt and msdos/zlib.def + +Changes in 1.0.6 (19 Jan 1998) +- add functions gzprintf, gzputc, gzgetc, gztell, gzeof, gzseek, gzrewind and + gzsetparams (thanks to Roland Giersig and Kevin Ruland for some of this code) +- Fix a deflate bug occurring only with compression level 0 (thanks to + Andy Buckler for finding this one). +- In minigzip, pass transparently also the first byte for .Z files. +- return Z_BUF_ERROR instead of Z_OK if output buffer full in uncompress() +- check Z_FINISH in inflate (thanks to Marc Schluper) +- Implement deflateCopy (thanks to Adam Costello) +- make static libraries by default in configure, add --shared option.
+- move MSDOS or Windows specific files to directory msdos +- suppress the notion of partial flush to simplify the interface + (but the symbol Z_PARTIAL_FLUSH is kept for compatibility with 1.0.4) +- suppress history buffer provided by application to simplify the interface + (this feature was not implemented anyway in 1.0.4) +- next_in and avail_in must be initialized before calling inflateInit or + inflateInit2 +- add EXPORT in all exported functions (for Windows DLL) +- added Makefile.nt (thanks to Stephen Williams) +- added the unsupported "contrib" directory: + contrib/asm386/ by Gilles Vollant + 386 asm code replacing longest_match(). + contrib/iostream/ by Kevin Ruland + A C++ I/O streams interface to the zlib gz* functions + contrib/iostream2/ by Tyge Løvset + Another C++ I/O streams interface + contrib/untgz/ by "Pedro A. Aranda Gutiérrez" + A very simple tar.gz file extractor using zlib + contrib/visual-basic.txt by Carlos Rios + How to use compress(), uncompress() and the gz* functions from VB. +- pass params -f (filtered data), -h (huffman only), -1 to -9 (compression + level) in minigzip (thanks to Tom Lane) + +- use const for rommable constants in deflate +- added test for gzseek and gztell in example.c +- add undocumented function inflateSyncPoint() (hack for Paul Mackerras) +- add undocumented function zError to convert error code to string + (for Tim Smithers) +- Allow compilation of gzio with -DNO_DEFLATE to avoid the compression code. +- Use default memcpy for Symantec MSDOS compiler. +- Add EXPORT keyword for check_func (needed for Windows DLL) +- add current directory to LD_LIBRARY_PATH for "make test" +- create also a link for libz.so.1 +- added support for FUJITSU UXP/DS (thanks to Toshiaki Nomura) +- use $(SHAREDLIB) instead of libz.so in Makefile.in (for HPUX) +- added -soname for Linux in configure (Chun-Chung Chen, +- assign numbers to the exported functions in zlib.def (for Windows DLL) +- add advice in zlib.h for best usage of deflateSetDictionary +- work around compiler bug on Atari (cast Z_NULL in call of s->checkfn) +- allow compilation with ANSI keywords only enabled for TurboC in large model +- avoid "versionString"[0] (Borland bug) +- add NEED_DUMMY_RETURN for Borland +- use variable z_verbose for tracing in debug mode (L. Peter Deutsch). +- allow compilation with CC +- defined STDC for OS/2 (David Charlap) +- limit external names to 8 chars for MVS (Thomas Lund) +- in minigzip.c, use static buffers only for 16-bit systems +- fix suffix check for "minigzip -d foo.gz" +- do not return an error for the 2nd of two consecutive gzflush() (Felix Lee) +- use _fdopen instead of fdopen for MSC >= 6.0 (Thomas Fanslau) +- added makelcc.bat for lcc-win32 (Tom St Denis) +- in Makefile.dj2, use copy and del instead of install and rm (Frank Donahoe) +- Avoid expanded $Id$. Use "rcs -kb" or "cvs admin -kb" to avoid Id expansion.
+- check for unistd.h in configure (for off_t) +- remove useless check parameter in inflate_blocks_free +- avoid useless assignment of s->check to itself in inflate_blocks_new +- do not flush twice in gzclose (thanks to Ken Raeburn) +- rename FOPEN as F_OPEN to avoid clash with /usr/include/sys/file.h +- use NO_ERRNO_H instead of enumeration of operating systems with errno.h +- work around buggy fclose on pipes for HP/UX +- support zlib DLL with BORLAND C++ 5.0 (thanks to Glenn Randers-Pehrson) +- fix configure if CC is already equal to gcc + +Changes in 1.0.5 (3 Jan 98) +- Fix inflate to terminate gracefully when fed corrupted or invalid data +- Use const for rommable constants in inflate +- Eliminate memory leaks on error conditions in inflate +- Removed some vestigial code in inflate +- Update web address in README + +Changes in 1.0.4 (24 Jul 96) +- In very rare conditions, deflate(s, Z_FINISH) could fail to produce an EOF + bit, so the decompressor could decompress all the correct data but went + on to attempt decompressing extra garbage data. This affected minigzip too. +- zlibVersion and gzerror return const char* (needed for DLL) +- port to RISCOS (no fdopen, no multiple dots, no unlink, no fileno) +- use z_error only for DEBUG (avoid problem with DLLs) + +Changes in 1.0.3 (2 Jul 96) +- use z_streamp instead of z_stream *, which is now a far pointer in MSDOS + small and medium models; this makes the library incompatible with previous + versions for these models. (No effect in large model or on other systems.) +- return OK instead of BUF_ERROR if previous deflate call returned with + avail_out as zero but there is nothing to do +- added memcmp for non STDC compilers +- define NO_DUMMY_DECL for more Mac compilers (.h files merged incorrectly) +- define __32BIT__ if __386__ or i386 is defined (pb. with Watcom and SCO) +- better check for 16-bit mode MSC (avoids problem with Symantec) + +Changes in 1.0.2 (23 May 96) +- added Windows DLL support +- added a function zlibVersion (for the DLL support) +- fixed declarations using Bytef in infutil.c (pb with MSDOS medium model) +- Bytef is define's instead of typedef'd only for Borland C +- avoid reading uninitialized memory in example.c +- mention in README that the zlib format is now RFC1950 +- updated Makefile.dj2 +- added algorithm.doc + +Changes in 1.0.1 (20 May 96) [1.0 skipped to avoid confusion] +- fix array overlay in deflate.c which sometimes caused bad compressed data +- fix inflate bug with empty stored block +- fix MSDOS medium model which was broken in 0.99 +- fix deflateParams() which could generated bad compressed data. +- Bytef is define'd instead of typedef'ed (work around Borland bug) +- added an INDEX file +- new makefiles for DJGPP (Makefile.dj2), 32-bit Borland (Makefile.b32), + Watcom (Makefile.wat), Amiga SAS/C (Makefile.sas) +- speed up adler32 for modern machines without auto-increment +- added -ansi for IRIX in configure +- static_init_done in trees.c is an int +- define unlink as delete for VMS +- fix configure for QNX +- add configure branch for SCO and HPUX +- avoid many warnings (unused variables, dead assignments, etc...) 
+- no fdopen for BeOS +- fix the Watcom fix for 32 bit mode (define FAR as empty) +- removed redefinition of Byte for MWERKS +- work around an MWERKS bug (incorrect merge of all .h files) + +Changes in 0.99 (27 Jan 96) +- allow preset dictionary shared between compressor and decompressor +- allow compression level 0 (no compression) +- add deflateParams in zlib.h: allow dynamic change of compression level + and compression strategy. +- test large buffers and deflateParams in example.c +- add optional "configure" to build zlib as a shared library +- suppress Makefile.qnx, use configure instead +- fixed deflate for 64-bit systems (detected on Cray) +- fixed inflate_blocks for 64-bit systems (detected on Alpha) +- declare Z_DEFLATED in zlib.h (possible parameter for deflateInit2) +- always return Z_BUF_ERROR when deflate() has nothing to do +- deflateInit and inflateInit are now macros to allow version checking +- prefix all global functions and types with z_ with -DZ_PREFIX +- make falloc completely reentrant (inftrees.c) +- fixed very unlikely race condition in ct_static_init +- free in reverse order of allocation to help memory manager +- use zlib-1.0/* instead of zlib/* inside the tar.gz +- make zlib warning-free with "gcc -O3 -Wall -Wwrite-strings -Wpointer-arith + -Wconversion -Wstrict-prototypes -Wmissing-prototypes" +- allow gzread on concatenated .gz files +- deflateEnd now returns Z_DATA_ERROR if it was premature +- deflate is finally (?) fully deterministic (no matches beyond end of input) +- Document Z_SYNC_FLUSH +- add uninstall in Makefile +- Check for __cplusplus in zlib.h +- Better test in ct_align for partial flush +- avoid harmless warnings for Borland C++ +- initialize hash_head in deflate.c +- avoid warning on fdopen (gzio.c) for HP cc -Aa +- include stdlib.h for STDC compilers +- include errno.h for Cray +- ignore error if ranlib doesn't exist +- call ranlib twice for NeXTSTEP +- use exec_prefix instead of prefix for libz.a +- renamed ct_* as _tr_* to avoid conflict with applications +- clear z->msg in inflateInit2 before any error return +- initialize opaque in example.c, gzio.c, deflate.c and inflate.c +- fixed typo in zconf.h (_GNUC__ => __GNUC__) +- check for WIN32 in zconf.h and zutil.c (avoid farmalloc in 32-bit mode) +- fix typo in Make_vms.com (f$trnlnm -> f$getsyi) +- in fcalloc, normalize pointer if size > 65520 bytes +- don't use special fcalloc for 32 bit Borland C++ +- use STDC instead of __GO32__ to avoid redeclaring exit, calloc, etc... +- use Z_BINARY instead of BINARY +- document that gzclose after gzdopen will close the file +- allow "a" as mode in gzopen. +- fix error checking in gzread +- allow skipping .gz extra-field on pipes +- added reference to Perl interface in README +- put the crc table in FAR data (I dislike more and more the medium model :) +- added get_crc_table +- added a dimension to all arrays (Borland C can't count). +- workaround Borland C bug in declaration of inflate_codes_new & inflate_fast +- guard against multiple inclusion of *.h (for precompiled header on Mac) +- Watcom C pretends to be Microsoft C small model even in 32 bit mode. +- don't use unsized arrays to avoid silly warnings by Visual C++: + warning C4746: 'inflate_mask' : unsized array treated as '__far' + (what's wrong with far data in far model?).
+- define enum out of inflate_blocks_state to allow compilation with C++ + +Changes in 0.95 (16 Aug 95) +- fix MSDOS small and medium model (now easier to adapt to any compiler) +- inlined send_bits +- fix the final (:-) bug for deflate with flush (output was correct but + not completely flushed in rare occasions). +- default window size is same for compression and decompression + (it's now sufficient to set MAX_WBITS in zconf.h). +- voidp -> voidpf and voidnp -> voidp (for consistency with other + typedefs and because voidnp was not near in large model). + +Changes in 0.94 (13 Aug 95) +- support MSDOS medium model +- fix deflate with flush (could sometimes generate bad output) +- fix deflateReset (zlib header was incorrectly suppressed) +- added support for VMS +- allow a compression level in gzopen() +- gzflush now calls fflush +- For deflate with flush, flush even if no more input is provided. +- rename libgz.a as libz.a +- avoid complex expression in infcodes.c triggering Turbo C bug +- work around a problem with gcc on Alpha (in INSERT_STRING) +- don't use inline functions (problem with some gcc versions) +- allow renaming of Byte, uInt, etc... with #define. +- avoid warning about (unused) pointer before start of array in deflate.c +- avoid various warnings in gzio.c, example.c, infblock.c, adler32.c, zutil.c +- avoid reserved word 'new' in trees.c + +Changes in 0.93 (25 June 95) +- temporarily disable inline functions +- make deflate deterministic +- give enough lookahead for PARTIAL_FLUSH +- Set binary mode for stdin/stdout in minigzip.c for OS/2 +- don't even use signed char in inflate (not portable enough) +- fix inflate memory leak for segmented architectures + +Changes in 0.92 (3 May 95) +- don't assume that char is signed (problem on SGI) +- Clear bit buffer when starting a stored block +- no memcpy on Pyramid +- suppressed inftest.c +- optimized fill_window, put longest_match inline for gcc +- optimized inflate on stored blocks. +- untabify all sources to simplify patches + +Changes in 0.91 (2 May 95) +- Default MEM_LEVEL is 8 (not 9 for Unix) as documented in zlib.h +- Document the memory requirements in zconf.h +- added "make install" +- fix sync search logic in inflateSync +- deflate(Z_FULL_FLUSH) now works even if output buffer too short +- after inflateSync, don't scare people with just "lo world" +- added support for DJGPP + +Changes in 0.9 (1 May 95) +- don't assume that zalloc clears the allocated memory (the TurboC bug + was Mark's bug after all :) +- let again gzread copy uncompressed data unchanged (was working in 0.71) +- deflate(Z_FULL_FLUSH), inflateReset and inflateSync are now fully implemented +- added a test of inflateSync in example.c +- moved MAX_WBITS to zconf.h because users might want to change that. +- document explicitly that zalloc(64K) on MSDOS must return a normalized + pointer (zero offset) +- added Makefiles for Microsoft C, Turbo C, Borland C++ +- faster crc32() + +Changes in 0.8 (29 April 95) +- added fast inflate (inffast.c) +- deflate(Z_FINISH) now returns Z_STREAM_END when done. Warning: this + is incompatible with previous versions of zlib which returned Z_OK. 
+- work around a TurboC compiler bug (bad code for b << 0, see infutil.h) + (actually that was not a compiler bug, see 0.81 above) +- gzread no longer reads one extra byte in certain cases +- In gzio destroy(), don't reference a freed structure +- avoid many warnings for MSDOS +- avoid the ERROR symbol which is used by MS Windows + +Changes in 0.71 (14 April 95) +- Fixed more MSDOS compilation problems :( There is still a bug with + TurboC large model. + +Changes in 0.7 (14 April 95) +- Added full inflate support. +- Simplified the crc32() interface. The pre- and post-conditioning + (one's complement) is now done inside crc32(). WARNING: this is + incompatible with previous versions; see zlib.h for the new usage. + +Changes in 0.61 (12 April 95) +- workaround for a bug in TurboC. example and minigzip now work on MSDOS. + +Changes in 0.6 (11 April 95) +- added minigzip.c +- added gzdopen to reopen a file descriptor as gzFile +- added transparent reading of non-gziped files in gzread. +- fixed bug in gzread (don't read crc as data) +- fixed bug in destroy (gzio.c) (don't return Z_STREAM_END for gzclose). +- don't allocate big arrays in the stack (for MSDOS) +- fix some MSDOS compilation problems + +Changes in 0.5: +- do real compression in deflate.c. Z_PARTIAL_FLUSH is supported but + not yet Z_FULL_FLUSH. +- support decompression but only in a single step (forced Z_FINISH) +- added opaque object for zalloc and zfree. +- added deflateReset and inflateReset +- added a variable zlib_version for consistency checking. +- renamed the 'filter' parameter of deflateInit2 as 'strategy'. + Added Z_FILTERED and Z_HUFFMAN_ONLY constants. + +Changes in 0.4: +- avoid "zip" everywhere, use zlib instead of ziplib. +- suppress Z_BLOCK_FLUSH, interpret Z_PARTIAL_FLUSH as block flush + if compression method == 8. +- added adler32 and crc32 +- renamed deflateOptions as deflateInit2, call one or the other but not both +- added the method parameter for deflateInit2. +- added inflateInit2 +- simplied considerably deflateInit and inflateInit by not supporting + user-provided history buffer. This is supported only in deflateInit2 + and inflateInit2. + +Changes in 0.3: +- prefix all macro names with Z_ +- use Z_FINISH instead of deflateEnd to finish compression. +- added Z_HUFFMAN_ONLY +- added gzerror() ADDED compat/zlib/FAQ Index: compat/zlib/FAQ ================================================================== --- compat/zlib/FAQ +++ compat/zlib/FAQ @@ -0,0 +1,368 @@ + + Frequently Asked Questions about zlib + + +If your question is not there, please check the zlib home page +http://zlib.net/ which may have more recent information. +The lastest zlib FAQ is at http://zlib.net/zlib_faq.html + + + 1. Is zlib Y2K-compliant? + + Yes. zlib doesn't handle dates. + + 2. Where can I get a Windows DLL version? + + The zlib sources can be compiled without change to produce a DLL. See the + file win32/DLL_FAQ.txt in the zlib distribution. Pointers to the + precompiled DLL are found in the zlib web site at http://zlib.net/ . + + 3. Where can I get a Visual Basic interface to zlib? + + See + * http://marknelson.us/1997/01/01/zlib-engine/ + * win32/DLL_FAQ.txt in the zlib distribution + + 4. compress() returns Z_BUF_ERROR. + + Make sure that before the call of compress(), the length of the compressed + buffer is equal to the available size of the compressed buffer and not + zero. For Visual Basic, check that this parameter is passed by reference + ("as any"), not by value ("as long"). + + 5. 
deflate() or inflate() returns Z_BUF_ERROR. + + Before making the call, make sure that avail_in and avail_out are not zero. + When setting the parameter flush equal to Z_FINISH, also make sure that + avail_out is big enough to allow processing all pending input. Note that a + Z_BUF_ERROR is not fatal--another call to deflate() or inflate() can be + made with more input or output space. A Z_BUF_ERROR may in fact be + unavoidable depending on how the functions are used, since it is not + possible to tell whether or not there is more output pending when + strm.avail_out returns with zero. See http://zlib.net/zlib_how.html for a + heavily annotated example. + + 6. Where's the zlib documentation (man pages, etc.)? + + It's in zlib.h . Examples of zlib usage are in the files test/example.c + and test/minigzip.c, with more in examples/ . + + 7. Why don't you use GNU autoconf or libtool or ...? + + Because we would like to keep zlib as a very small and simple package. + zlib is rather portable and doesn't need much configuration. + + 8. I found a bug in zlib. + + Most of the time, such problems are due to an incorrect usage of zlib. + Please try to reproduce the problem with a small program and send the + corresponding source to us at zlib@gzip.org . Do not send multi-megabyte + data files without prior agreement. + + 9. Why do I get "undefined reference to gzputc"? + + If "make test" produces something like + + example.o(.text+0x154): undefined reference to `gzputc' + + check that you don't have old files libz.* in /usr/lib, /usr/local/lib or + /usr/X11R6/lib. Remove any old versions, then do "make install". + +10. I need a Delphi interface to zlib. + + See the contrib/delphi directory in the zlib distribution. + +11. Can zlib handle .zip archives? + + Not by itself, no. See the directory contrib/minizip in the zlib + distribution. + +12. Can zlib handle .Z files? + + No, sorry. You have to spawn an uncompress or gunzip subprocess, or adapt + the code of uncompress on your own. + +13. How can I make a Unix shared library? + + By default a shared (and a static) library is built for Unix. So: + + make distclean + ./configure + make + +14. How do I install a shared zlib library on Unix? + + After the above, then: + + make install + + However, many flavors of Unix come with a shared zlib already installed. + Before going to the trouble of compiling a shared version of zlib and + trying to install it, you may want to check if it's already there! If you + can #include , it's there. The -lz option will probably link to + it. You can check the version at the top of zlib.h or with the + ZLIB_VERSION symbol defined in zlib.h . + +15. I have a question about OttoPDF. + + We are not the authors of OttoPDF. The real author is on the OttoPDF web + site: Joel Hainley, jhainley@myndkryme.com. + +16. Can zlib decode Flate data in an Adobe PDF file? + + Yes. See http://www.pdflib.com/ . To modify PDF forms, see + http://sourceforge.net/projects/acroformtool/ . + +17. Why am I getting this "register_frame_info not found" error on Solaris? + + After installing zlib 1.1.4 on Solaris 2.6, running applications using zlib + generates an error such as: + + ld.so.1: rpm: fatal: relocation error: file /usr/local/lib/libz.so: + symbol __register_frame_info: referenced symbol not found + + The symbol __register_frame_info is not part of zlib, it is generated by + the C compiler (cc or gcc). You must recompile applications using zlib + which have this problem. This problem is specific to Solaris. 
See + http://www.sunfreeware.com for Solaris versions of zlib and applications + using zlib. + +18. Why does gzip give an error on a file I make with compress/deflate? + + The compress and deflate functions produce data in the zlib format, which + is different and incompatible with the gzip format. The gz* functions in + zlib on the other hand use the gzip format. Both the zlib and gzip formats + use the same compressed data format internally, but have different headers + and trailers around the compressed data. + +19. Ok, so why are there two different formats? + + The gzip format was designed to retain the directory information about a + single file, such as the name and last modification date. The zlib format + on the other hand was designed for in-memory and communication channel + applications, and has a much more compact header and trailer and uses a + faster integrity check than gzip. + +20. Well that's nice, but how do I make a gzip file in memory? + + You can request that deflate write the gzip format instead of the zlib + format using deflateInit2(). You can also request that inflate decode the + gzip format using inflateInit2(). Read zlib.h for more details. + +21. Is zlib thread-safe? + + Yes. However any library routines that zlib uses and any application- + provided memory allocation routines must also be thread-safe. zlib's gz* + functions use stdio library routines, and most of zlib's functions use the + library memory allocation routines by default. zlib's *Init* functions + allow for the application to provide custom memory allocation routines. + + Of course, you should only operate on any given zlib or gzip stream from a + single thread at a time. + +22. Can I use zlib in my commercial application? + + Yes. Please read the license in zlib.h. + +23. Is zlib under the GNU license? + + No. Please read the license in zlib.h. + +24. The license says that altered source versions must be "plainly marked". So + what exactly do I need to do to meet that requirement? + + You need to change the ZLIB_VERSION and ZLIB_VERNUM #defines in zlib.h. In + particular, the final version number needs to be changed to "f", and an + identification string should be appended to ZLIB_VERSION. Version numbers + x.x.x.f are reserved for modifications to zlib by others than the zlib + maintainers. For example, if the version of the base zlib you are altering + is "1.2.3.4", then in zlib.h you should change ZLIB_VERNUM to 0x123f, and + ZLIB_VERSION to something like "1.2.3.f-zachary-mods-v3". You can also + update the version strings in deflate.c and inftrees.c. + + For altered source distributions, you should also note the origin and + nature of the changes in zlib.h, as well as in ChangeLog and README, along + with the dates of the alterations. The origin should include at least your + name (or your company's name), and an email address to contact for help or + issues with the library. + + Note that distributing a compiled zlib library along with zlib.h and + zconf.h is also a source distribution, and so you should change + ZLIB_VERSION and ZLIB_VERNUM and note the origin and nature of the changes + in zlib.h as you would for a full source distribution. + +25. Will zlib work on a big-endian or little-endian architecture, and can I + exchange compressed data between them? + + Yes and yes. + +26. Will zlib work on a 64-bit machine? + + Yes. It has been tested on 64-bit machines, and has no dependence on any + data types being limited to 32-bits in length. 
If you have any + difficulties, please provide a complete problem report to zlib@gzip.org + +27. Will zlib decompress data from the PKWare Data Compression Library? + + No. The PKWare DCL uses a completely different compressed data format than + does PKZIP and zlib. However, you can look in zlib's contrib/blast + directory for a possible solution to your problem. + +28. Can I access data randomly in a compressed stream? + + No, not without some preparation. If when compressing you periodically use + Z_FULL_FLUSH, carefully write all the pending data at those points, and + keep an index of those locations, then you can start decompression at those + points. You have to be careful to not use Z_FULL_FLUSH too often, since it + can significantly degrade compression. Alternatively, you can scan a + deflate stream once to generate an index, and then use that index for + random access. See examples/zran.c . + +29. Does zlib work on MVS, OS/390, CICS, etc.? + + It has in the past, but we have not heard of any recent evidence. There + were working ports of zlib 1.1.4 to MVS, but those links no longer work. + If you know of recent, successful applications of zlib on these operating + systems, please let us know. Thanks. + +30. Is there some simpler, easier to read version of inflate I can look at to + understand the deflate format? + + First off, you should read RFC 1951. Second, yes. Look in zlib's + contrib/puff directory. + +31. Does zlib infringe on any patents? + + As far as we know, no. In fact, that was originally the whole point behind + zlib. Look here for some more information: + + http://www.gzip.org/#faq11 + +32. Can zlib work with greater than 4 GB of data? + + Yes. inflate() and deflate() will process any amount of data correctly. + Each call of inflate() or deflate() is limited to input and output chunks + of the maximum value that can be stored in the compiler's "unsigned int" + type, but there is no limit to the number of chunks. Note however that the + strm.total_in and strm_total_out counters may be limited to 4 GB. These + counters are provided as a convenience and are not used internally by + inflate() or deflate(). The application can easily set up its own counters + updated after each call of inflate() or deflate() to count beyond 4 GB. + compress() and uncompress() may be limited to 4 GB, since they operate in a + single call. gzseek() and gztell() may be limited to 4 GB depending on how + zlib is compiled. See the zlibCompileFlags() function in zlib.h. + + The word "may" appears several times above since there is a 4 GB limit only + if the compiler's "long" type is 32 bits. If the compiler's "long" type is + 64 bits, then the limit is 16 exabytes. + +33. Does zlib have any security vulnerabilities? + + The only one that we are aware of is potentially in gzprintf(). If zlib is + compiled to use sprintf() or vsprintf(), then there is no protection + against a buffer overflow of an 8K string space (or other value as set by + gzbuffer()), other than the caller of gzprintf() assuring that the output + will not exceed 8K. On the other hand, if zlib is compiled to use + snprintf() or vsnprintf(), which should normally be the case, then there is + no vulnerability. The ./configure script will display warnings if an + insecure variation of sprintf() will be used by gzprintf(). Also the + zlibCompileFlags() function will return information on what variant of + sprintf() is used by gzprintf(). 
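+    For example, a minimal check of those flag bits from C could look like
+    the sketch below (the bit layout follows the zlibCompileFlags()
+    description in zlib.h; the program itself is only an illustrative
+    sketch, not part of the zlib distribution):
+
+        #include <stdio.h>
+        #include <zlib.h>
+
+        int main(void)
+        {
+            uLong flags = zlibCompileFlags();
+
+            /* Bits 24-26 describe the sprintf variant used by gzprintf()
+               (see the zlibCompileFlags() comment in zlib.h); bit 25 set
+               means a length-unchecked *printf variant is used. */
+            if (flags & 0x2000000UL)
+                printf("gzprintf() uses an insecure sprintf variant\n");
+            else
+                printf("gzprintf() length-checks its output buffer\n");
+            return 0;
+        }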
+ + If you don't have snprintf() or vsnprintf() and would like one, you can + find a portable implementation here: + + http://www.ijs.si/software/snprintf/ + + Note that you should be using the most recent version of zlib. Versions + 1.1.3 and before were subject to a double-free vulnerability, and versions + 1.2.1 and 1.2.2 were subject to an access exception when decompressing + invalid compressed data. + +34. Is there a Java version of zlib? + + Probably what you want is to use zlib in Java. zlib is already included + as part of the Java SDK in the java.util.zip package. If you really want + a version of zlib written in the Java language, look on the zlib home + page for links: http://zlib.net/ . + +35. I get this or that compiler or source-code scanner warning when I crank it + up to maximally-pedantic. Can't you guys write proper code? + + Many years ago, we gave up attempting to avoid warnings on every compiler + in the universe. It just got to be a waste of time, and some compilers + were downright silly as well as contradicted each other. So now, we simply + make sure that the code always works. + +36. Valgrind (or some similar memory access checker) says that deflate is + performing a conditional jump that depends on an uninitialized value. + Isn't that a bug? + + No. That is intentional for performance reasons, and the output of deflate + is not affected. This only started showing up recently since zlib 1.2.x + uses malloc() by default for allocations, whereas earlier versions used + calloc(), which zeros out the allocated memory. Even though the code was + correct, versions 1.2.4 and later was changed to not stimulate these + checkers. + +37. Will zlib read the (insert any ancient or arcane format here) compressed + data format? + + Probably not. Look in the comp.compression FAQ for pointers to various + formats and associated software. + +38. How can I encrypt/decrypt zip files with zlib? + + zlib doesn't support encryption. The original PKZIP encryption is very + weak and can be broken with freely available programs. To get strong + encryption, use GnuPG, http://www.gnupg.org/ , which already includes zlib + compression. For PKZIP compatible "encryption", look at + http://www.info-zip.org/ + +39. What's the difference between the "gzip" and "deflate" HTTP 1.1 encodings? + + "gzip" is the gzip format, and "deflate" is the zlib format. They should + probably have called the second one "zlib" instead to avoid confusion with + the raw deflate compressed data format. While the HTTP 1.1 RFC 2616 + correctly points to the zlib specification in RFC 1950 for the "deflate" + transfer encoding, there have been reports of servers and browsers that + incorrectly produce or expect raw deflate data per the deflate + specification in RFC 1951, most notably Microsoft. So even though the + "deflate" transfer encoding using the zlib format would be the more + efficient approach (and in fact exactly what the zlib format was designed + for), using the "gzip" transfer encoding is probably more reliable due to + an unfortunate choice of name on the part of the HTTP 1.1 authors. + + Bottom line: use the gzip format for HTTP 1.1 encoding. + +40. Does zlib support the new "Deflate64" format introduced by PKWare? + + No. PKWare has apparently decided to keep that format proprietary, since + they have not documented it as they have previous compression formats. 
In + any case, the compression improvements are so modest compared to other more + modern approaches, that it's not worth the effort to implement. + +41. I'm having a problem with the zip functions in zlib, can you help? + + There are no zip functions in zlib. You are probably using minizip by + Giles Vollant, which is found in the contrib directory of zlib. It is not + part of zlib. In fact none of the stuff in contrib is part of zlib. The + files in there are not supported by the zlib authors. You need to contact + the authors of the respective contribution for help. + +42. The match.asm code in contrib is under the GNU General Public License. + Since it's part of zlib, doesn't that mean that all of zlib falls under the + GNU GPL? + + No. The files in contrib are not part of zlib. They were contributed by + other authors and are provided as a convenience to the user within the zlib + distribution. Each item in contrib has its own license. + +43. Is zlib subject to export controls? What is its ECCN? + + zlib is not subject to export controls, and so is classified as EAR99. + +44. Can you please sign these lengthy legal documents and fax them back to us + so that we can use your software in our product? + + No. Go away. Shoo. ADDED compat/zlib/INDEX Index: compat/zlib/INDEX ================================================================== --- compat/zlib/INDEX +++ compat/zlib/INDEX @@ -0,0 +1,68 @@ +CMakeLists.txt cmake build file +ChangeLog history of changes +FAQ Frequently Asked Questions about zlib +INDEX this file +Makefile dummy Makefile that tells you to ./configure +Makefile.in template for Unix Makefile +README guess what +configure configure script for Unix +make_vms.com makefile for VMS +test/example.c zlib usages examples for build testing +test/minigzip.c minimal gzip-like functionality for build testing +test/infcover.c inf*.c code coverage for build coverage testing +treebuild.xml XML description of source file dependencies +zconf.h.cmakein zconf.h template for cmake +zconf.h.in zconf.h template for configure +zlib.3 Man page for zlib +zlib.3.pdf Man page in PDF format +zlib.map Linux symbol information +zlib.pc.in Template for pkg-config descriptor +zlib.pc.cmakein zlib.pc template for cmake +zlib2ansi perl script to convert source files for C++ compilation + +amiga/ makefiles for Amiga SAS C +as400/ makefiles for AS/400 +doc/ documentation for formats and algorithms +msdos/ makefiles for MSDOS +nintendods/ makefile for Nintendo DS +old/ makefiles for various architectures and zlib documentation + files that have not yet been updated for zlib 1.2.x +qnx/ makefiles for QNX +watcom/ makefiles for OpenWatcom +win32/ makefiles for Windows + + zlib public header files (required for library use): +zconf.h +zlib.h + + private source files used to build the zlib library: +adler32.c +compress.c +crc32.c +crc32.h +deflate.c +deflate.h +gzclose.c +gzguts.h +gzlib.c +gzread.c +gzwrite.c +infback.c +inffast.c +inffast.h +inffixed.h +inflate.c +inflate.h +inftrees.c +inftrees.h +trees.c +trees.h +uncompr.c +zutil.c +zutil.h + + source files for sample programs +See examples/README.examples + + unsupported contributions by third parties +See contrib/README.contrib ADDED compat/zlib/Makefile Index: compat/zlib/Makefile ================================================================== --- compat/zlib/Makefile +++ compat/zlib/Makefile @@ -0,0 +1,5 @@ +all: + -@echo "Please use ./configure first. Thank you." 
+ +distclean: + make -f Makefile.in distclean ADDED compat/zlib/Makefile.in Index: compat/zlib/Makefile.in ================================================================== --- compat/zlib/Makefile.in +++ compat/zlib/Makefile.in @@ -0,0 +1,288 @@ +# Makefile for zlib +# Copyright (C) 1995-2013 Jean-loup Gailly, Mark Adler +# For conditions of distribution and use, see copyright notice in zlib.h + +# To compile and test, type: +# ./configure; make test +# Normally configure builds both a static and a shared library. +# If you want to build just a static library, use: ./configure --static + +# To use the asm code, type: +# cp contrib/asm?86/match.S ./match.S +# make LOC=-DASMV OBJA=match.o + +# To install /usr/local/lib/libz.* and /usr/local/include/zlib.h, type: +# make install +# To install in $HOME instead of /usr/local, use: +# make install prefix=$HOME + +CC=cc + +CFLAGS=-O +#CFLAGS=-O -DMAX_WBITS=14 -DMAX_MEM_LEVEL=7 +#CFLAGS=-g -DDEBUG +#CFLAGS=-O3 -Wall -Wwrite-strings -Wpointer-arith -Wconversion \ +# -Wstrict-prototypes -Wmissing-prototypes + +SFLAGS=-O +LDFLAGS= +TEST_LDFLAGS=-L. libz.a +LDSHARED=$(CC) +CPP=$(CC) -E + +STATICLIB=libz.a +SHAREDLIB=libz.so +SHAREDLIBV=libz.so.1.2.8 +SHAREDLIBM=libz.so.1 +LIBS=$(STATICLIB) $(SHAREDLIBV) + +AR=ar +ARFLAGS=rc +RANLIB=ranlib +LDCONFIG=ldconfig +LDSHAREDLIBC=-lc +TAR=tar +SHELL=/bin/sh +EXE= + +prefix = /usr/local +exec_prefix = ${prefix} +libdir = ${exec_prefix}/lib +sharedlibdir = ${libdir} +includedir = ${prefix}/include +mandir = ${prefix}/share/man +man3dir = ${mandir}/man3 +pkgconfigdir = ${libdir}/pkgconfig + +OBJZ = adler32.o crc32.o deflate.o infback.o inffast.o inflate.o inftrees.o trees.o zutil.o +OBJG = compress.o uncompr.o gzclose.o gzlib.o gzread.o gzwrite.o +OBJC = $(OBJZ) $(OBJG) + +PIC_OBJZ = adler32.lo crc32.lo deflate.lo infback.lo inffast.lo inflate.lo inftrees.lo trees.lo zutil.lo +PIC_OBJG = compress.lo uncompr.lo gzclose.lo gzlib.lo gzread.lo gzwrite.lo +PIC_OBJC = $(PIC_OBJZ) $(PIC_OBJG) + +# to use the asm code: make OBJA=match.o, PIC_OBJA=match.lo +OBJA = +PIC_OBJA = + +OBJS = $(OBJC) $(OBJA) + +PIC_OBJS = $(PIC_OBJC) $(PIC_OBJA) + +all: static shared + +static: example$(EXE) minigzip$(EXE) + +shared: examplesh$(EXE) minigzipsh$(EXE) + +all64: example64$(EXE) minigzip64$(EXE) + +check: test + +test: all teststatic testshared + +teststatic: static + @TMPST=tmpst_$$; \ + if echo hello world | ./minigzip | ./minigzip -d && ./example $$TMPST ; then \ + echo ' *** zlib test OK ***'; \ + else \ + echo ' *** zlib test FAILED ***'; false; \ + fi; \ + rm -f $$TMPST + +testshared: shared + @LD_LIBRARY_PATH=`pwd`:$(LD_LIBRARY_PATH) ; export LD_LIBRARY_PATH; \ + LD_LIBRARYN32_PATH=`pwd`:$(LD_LIBRARYN32_PATH) ; export LD_LIBRARYN32_PATH; \ + DYLD_LIBRARY_PATH=`pwd`:$(DYLD_LIBRARY_PATH) ; export DYLD_LIBRARY_PATH; \ + SHLIB_PATH=`pwd`:$(SHLIB_PATH) ; export SHLIB_PATH; \ + TMPSH=tmpsh_$$; \ + if echo hello world | ./minigzipsh | ./minigzipsh -d && ./examplesh $$TMPSH; then \ + echo ' *** zlib shared test OK ***'; \ + else \ + echo ' *** zlib shared test FAILED ***'; false; \ + fi; \ + rm -f $$TMPSH + +test64: all64 + @TMP64=tmp64_$$; \ + if echo hello world | ./minigzip64 | ./minigzip64 -d && ./example64 $$TMP64; then \ + echo ' *** zlib 64-bit test OK ***'; \ + else \ + echo ' *** zlib 64-bit test FAILED ***'; false; \ + fi; \ + rm -f $$TMP64 + +infcover.o: test/infcover.c zlib.h zconf.h + $(CC) $(CFLAGS) -I. 
-c -o $@ test/infcover.c + +infcover: infcover.o libz.a + $(CC) $(CFLAGS) -o $@ infcover.o libz.a + +cover: infcover + rm -f *.gcda + ./infcover + gcov inf*.c + +libz.a: $(OBJS) + $(AR) $(ARFLAGS) $@ $(OBJS) + -@ ($(RANLIB) $@ || true) >/dev/null 2>&1 + +match.o: match.S + $(CPP) match.S > _match.s + $(CC) -c _match.s + mv _match.o match.o + rm -f _match.s + +match.lo: match.S + $(CPP) match.S > _match.s + $(CC) -c -fPIC _match.s + mv _match.o match.lo + rm -f _match.s + +example.o: test/example.c zlib.h zconf.h + $(CC) $(CFLAGS) -I. -c -o $@ test/example.c + +minigzip.o: test/minigzip.c zlib.h zconf.h + $(CC) $(CFLAGS) -I. -c -o $@ test/minigzip.c + +example64.o: test/example.c zlib.h zconf.h + $(CC) $(CFLAGS) -I. -D_FILE_OFFSET_BITS=64 -c -o $@ test/example.c + +minigzip64.o: test/minigzip.c zlib.h zconf.h + $(CC) $(CFLAGS) -I. -D_FILE_OFFSET_BITS=64 -c -o $@ test/minigzip.c + +.SUFFIXES: .lo + +.c.lo: + -@mkdir objs 2>/dev/null || test -d objs + $(CC) $(SFLAGS) -DPIC -c -o objs/$*.o $< + -@mv objs/$*.o $@ + +placebo $(SHAREDLIBV): $(PIC_OBJS) libz.a + $(LDSHARED) $(SFLAGS) -o $@ $(PIC_OBJS) $(LDSHAREDLIBC) $(LDFLAGS) + rm -f $(SHAREDLIB) $(SHAREDLIBM) + ln -s $@ $(SHAREDLIB) + ln -s $@ $(SHAREDLIBM) + -@rmdir objs + +example$(EXE): example.o $(STATICLIB) + $(CC) $(CFLAGS) -o $@ example.o $(TEST_LDFLAGS) + +minigzip$(EXE): minigzip.o $(STATICLIB) + $(CC) $(CFLAGS) -o $@ minigzip.o $(TEST_LDFLAGS) + +examplesh$(EXE): example.o $(SHAREDLIBV) + $(CC) $(CFLAGS) -o $@ example.o -L. $(SHAREDLIBV) + +minigzipsh$(EXE): minigzip.o $(SHAREDLIBV) + $(CC) $(CFLAGS) -o $@ minigzip.o -L. $(SHAREDLIBV) + +example64$(EXE): example64.o $(STATICLIB) + $(CC) $(CFLAGS) -o $@ example64.o $(TEST_LDFLAGS) + +minigzip64$(EXE): minigzip64.o $(STATICLIB) + $(CC) $(CFLAGS) -o $@ minigzip64.o $(TEST_LDFLAGS) + +install-libs: $(LIBS) + -@if [ ! -d $(DESTDIR)$(exec_prefix) ]; then mkdir -p $(DESTDIR)$(exec_prefix); fi + -@if [ ! -d $(DESTDIR)$(libdir) ]; then mkdir -p $(DESTDIR)$(libdir); fi + -@if [ ! -d $(DESTDIR)$(sharedlibdir) ]; then mkdir -p $(DESTDIR)$(sharedlibdir); fi + -@if [ ! -d $(DESTDIR)$(man3dir) ]; then mkdir -p $(DESTDIR)$(man3dir); fi + -@if [ ! -d $(DESTDIR)$(pkgconfigdir) ]; then mkdir -p $(DESTDIR)$(pkgconfigdir); fi + cp $(STATICLIB) $(DESTDIR)$(libdir) + chmod 644 $(DESTDIR)$(libdir)/$(STATICLIB) + -@($(RANLIB) $(DESTDIR)$(libdir)/libz.a || true) >/dev/null 2>&1 + -@if test -n "$(SHAREDLIBV)"; then \ + cp $(SHAREDLIBV) $(DESTDIR)$(sharedlibdir); \ + echo "cp $(SHAREDLIBV) $(DESTDIR)$(sharedlibdir)"; \ + chmod 755 $(DESTDIR)$(sharedlibdir)/$(SHAREDLIBV); \ + echo "chmod 755 $(DESTDIR)$(sharedlibdir)/$(SHAREDLIBV)"; \ + rm -f $(DESTDIR)$(sharedlibdir)/$(SHAREDLIB) $(DESTDIR)$(sharedlibdir)/$(SHAREDLIBM); \ + ln -s $(SHAREDLIBV) $(DESTDIR)$(sharedlibdir)/$(SHAREDLIB); \ + ln -s $(SHAREDLIBV) $(DESTDIR)$(sharedlibdir)/$(SHAREDLIBM); \ + ($(LDCONFIG) || true) >/dev/null 2>&1; \ + fi + cp zlib.3 $(DESTDIR)$(man3dir) + chmod 644 $(DESTDIR)$(man3dir)/zlib.3 + cp zlib.pc $(DESTDIR)$(pkgconfigdir) + chmod 644 $(DESTDIR)$(pkgconfigdir)/zlib.pc +# The ranlib in install is needed on NeXTSTEP which checks file times +# ldconfig is for Linux + +install: install-libs + -@if [ ! 
-d $(DESTDIR)$(includedir) ]; then mkdir -p $(DESTDIR)$(includedir); fi + cp zlib.h zconf.h $(DESTDIR)$(includedir) + chmod 644 $(DESTDIR)$(includedir)/zlib.h $(DESTDIR)$(includedir)/zconf.h + +uninstall: + cd $(DESTDIR)$(includedir) && rm -f zlib.h zconf.h + cd $(DESTDIR)$(libdir) && rm -f libz.a; \ + if test -n "$(SHAREDLIBV)" -a -f $(SHAREDLIBV); then \ + rm -f $(SHAREDLIBV) $(SHAREDLIB) $(SHAREDLIBM); \ + fi + cd $(DESTDIR)$(man3dir) && rm -f zlib.3 + cd $(DESTDIR)$(pkgconfigdir) && rm -f zlib.pc + +docs: zlib.3.pdf + +zlib.3.pdf: zlib.3 + groff -mandoc -f H -T ps zlib.3 | ps2pdf - zlib.3.pdf + +zconf.h.cmakein: zconf.h.in + -@ TEMPFILE=zconfh_$$; \ + echo "/#define ZCONF_H/ a\\\\\n#cmakedefine Z_PREFIX\\\\\n#cmakedefine Z_HAVE_UNISTD_H\n" >> $$TEMPFILE &&\ + sed -f $$TEMPFILE zconf.h.in > zconf.h.cmakein &&\ + touch -r zconf.h.in zconf.h.cmakein &&\ + rm $$TEMPFILE + +zconf: zconf.h.in + cp -p zconf.h.in zconf.h + +mostlyclean: clean +clean: + rm -f *.o *.lo *~ \ + example$(EXE) minigzip$(EXE) examplesh$(EXE) minigzipsh$(EXE) \ + example64$(EXE) minigzip64$(EXE) \ + infcover \ + libz.* foo.gz so_locations \ + _match.s maketree contrib/infback9/*.o + rm -rf objs + rm -f *.gcda *.gcno *.gcov + rm -f contrib/infback9/*.gcda contrib/infback9/*.gcno contrib/infback9/*.gcov + +maintainer-clean: distclean +distclean: clean zconf zconf.h.cmakein docs + rm -f Makefile zlib.pc configure.log + -@rm -f .DS_Store + -@printf 'all:\n\t-@echo "Please use ./configure first. Thank you."\n' > Makefile + -@printf '\ndistclean:\n\tmake -f Makefile.in distclean\n' >> Makefile + -@touch -r Makefile.in Makefile + +tags: + etags *.[ch] + +depend: + makedepend -- $(CFLAGS) -- *.[ch] + +# DO NOT DELETE THIS LINE -- make depend depends on it. + +adler32.o zutil.o: zutil.h zlib.h zconf.h +gzclose.o gzlib.o gzread.o gzwrite.o: zlib.h zconf.h gzguts.h +compress.o example.o minigzip.o uncompr.o: zlib.h zconf.h +crc32.o: zutil.h zlib.h zconf.h crc32.h +deflate.o: deflate.h zutil.h zlib.h zconf.h +infback.o inflate.o: zutil.h zlib.h zconf.h inftrees.h inflate.h inffast.h inffixed.h +inffast.o: zutil.h zlib.h zconf.h inftrees.h inflate.h inffast.h +inftrees.o: zutil.h zlib.h zconf.h inftrees.h +trees.o: deflate.h zutil.h zlib.h zconf.h trees.h + +adler32.lo zutil.lo: zutil.h zlib.h zconf.h +gzclose.lo gzlib.lo gzread.lo gzwrite.lo: zlib.h zconf.h gzguts.h +compress.lo example.lo minigzip.lo uncompr.lo: zlib.h zconf.h +crc32.lo: zutil.h zlib.h zconf.h crc32.h +deflate.lo: deflate.h zutil.h zlib.h zconf.h +infback.lo inflate.lo: zutil.h zlib.h zconf.h inftrees.h inflate.h inffast.h inffixed.h +inffast.lo: zutil.h zlib.h zconf.h inftrees.h inflate.h inffast.h +inftrees.lo: zutil.h zlib.h zconf.h inftrees.h +trees.lo: deflate.h zutil.h zlib.h zconf.h trees.h ADDED compat/zlib/README Index: compat/zlib/README ================================================================== --- compat/zlib/README +++ compat/zlib/README @@ -0,0 +1,115 @@ +ZLIB DATA COMPRESSION LIBRARY + +zlib 1.2.8 is a general purpose data compression library. All the code is +thread safe. The data format used by the zlib library is described by RFCs +(Request for Comments) 1950 to 1952 in the files +http://tools.ietf.org/html/rfc1950 (zlib format), rfc1951 (deflate format) and +rfc1952 (gzip format). + +All functions of the compression library are documented in the file zlib.h +(volunteer to write man pages welcome, contact zlib@gzip.org). 
A usage example +of the library is given in the file test/example.c which also tests that +the library is working correctly. Another example is given in the file +test/minigzip.c. The compression library itself is composed of all source +files in the root directory. + +To compile all files and run the test program, follow the instructions given at +the top of Makefile.in. In short "./configure; make test", and if that goes +well, "make install" should work for most flavors of Unix. For Windows, use +one of the special makefiles in win32/ or contrib/vstudio/ . For VMS, use +make_vms.com. + +Questions about zlib should be sent to , or to Gilles Vollant + for the Windows DLL version. The zlib home page is +http://zlib.net/ . Before reporting a problem, please check this site to +verify that you have the latest version of zlib; otherwise get the latest +version and check whether the problem still exists or not. + +PLEASE read the zlib FAQ http://zlib.net/zlib_faq.html before asking for help. + +Mark Nelson wrote an article about zlib for the Jan. 1997 +issue of Dr. Dobb's Journal; a copy of the article is available at +http://marknelson.us/1997/01/01/zlib-engine/ . + +The changes made in version 1.2.8 are documented in the file ChangeLog. + +Unsupported third party contributions are provided in directory contrib/ . + +zlib is available in Java using the java.util.zip package, documented at +http://java.sun.com/developer/technicalArticles/Programming/compression/ . + +A Perl interface to zlib written by Paul Marquess is available +at CPAN (Comprehensive Perl Archive Network) sites, including +http://search.cpan.org/~pmqs/IO-Compress-Zlib/ . + +A Python interface to zlib written by A.M. Kuchling is +available in Python 1.5 and later versions, see +http://docs.python.org/library/zlib.html . + +zlib is built into tcl: http://wiki.tcl.tk/4610 . + +An experimental package to read and write files in .zip format, written on top +of zlib by Gilles Vollant , is available in the +contrib/minizip directory of zlib. + + +Notes for some targets: + +- For Windows DLL versions, please see win32/DLL_FAQ.txt + +- For 64-bit Irix, deflate.c must be compiled without any optimization. With + -O, one libpng test fails. The test works in 32 bit mode (with the -n32 + compiler flag). The compiler bug has been reported to SGI. + +- zlib doesn't work with gcc 2.6.3 on a DEC 3000/300LX under OSF/1 2.1 it works + when compiled with cc. + +- On Digital Unix 4.0D (formely OSF/1) on AlphaServer, the cc option -std1 is + necessary to get gzprintf working correctly. This is done by configure. + +- zlib doesn't work on HP-UX 9.05 with some versions of /bin/cc. It works with + other compilers. Use "make test" to check your compiler. + +- gzdopen is not supported on RISCOS or BEOS. + +- For PalmOs, see http://palmzlib.sourceforge.net/ + + +Acknowledgments: + + The deflate format used by zlib was defined by Phil Katz. The deflate and + zlib specifications were written by L. Peter Deutsch. Thanks to all the + people who reported problems and suggested various improvements in zlib; they + are too numerous to cite here. + +Copyright notice: + + (C) 1995-2013 Jean-loup Gailly and Mark Adler + + This software is provided 'as-is', without any express or implied + warranty. In no event will the authors be held liable for any damages + arising from the use of this software. 
+ + Permission is granted to anyone to use this software for any purpose, + including commercial applications, and to alter it and redistribute it + freely, subject to the following restrictions: + + 1. The origin of this software must not be misrepresented; you must not + claim that you wrote the original software. If you use this software + in a product, an acknowledgment in the product documentation would be + appreciated but is not required. + 2. Altered source versions must be plainly marked as such, and must not be + misrepresented as being the original software. + 3. This notice may not be removed or altered from any source distribution. + + Jean-loup Gailly Mark Adler + jloup@gzip.org madler@alumni.caltech.edu + +If you use the zlib library in a product, we would appreciate *not* receiving +lengthy legal documents to sign. The sources are provided for free but without +warranty of any kind. The library has been entirely written by Jean-loup +Gailly and Mark Adler; it does not include third-party code. + +If you redistribute modified sources, we would appreciate that you include in +the file ChangeLog history information documenting your changes. Please read +the FAQ for more information on the distribution of modified source versions. ADDED compat/zlib/adler32.c Index: compat/zlib/adler32.c ================================================================== --- compat/zlib/adler32.c +++ compat/zlib/adler32.c @@ -0,0 +1,179 @@ +/* adler32.c -- compute the Adler-32 checksum of a data stream + * Copyright (C) 1995-2011 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* @(#) $Id$ */ + +#include "zutil.h" + +#define local static + +local uLong adler32_combine_ OF((uLong adler1, uLong adler2, z_off64_t len2)); + +#define BASE 65521 /* largest prime smaller than 65536 */ +#define NMAX 5552 +/* NMAX is the largest n such that 255n(n+1)/2 + (n+1)(BASE-1) <= 2^32-1 */ + +#define DO1(buf,i) {adler += (buf)[i]; sum2 += adler;} +#define DO2(buf,i) DO1(buf,i); DO1(buf,i+1); +#define DO4(buf,i) DO2(buf,i); DO2(buf,i+2); +#define DO8(buf,i) DO4(buf,i); DO4(buf,i+4); +#define DO16(buf) DO8(buf,0); DO8(buf,8); + +/* use NO_DIVIDE if your processor does not do division in hardware -- + try it both ways to see which is faster */ +#ifdef NO_DIVIDE +/* note that this assumes BASE is 65521, where 65536 % 65521 == 15 + (thank you to John Reiser for pointing this out) */ +# define CHOP(a) \ + do { \ + unsigned long tmp = a >> 16; \ + a &= 0xffffUL; \ + a += (tmp << 4) - tmp; \ + } while (0) +# define MOD28(a) \ + do { \ + CHOP(a); \ + if (a >= BASE) a -= BASE; \ + } while (0) +# define MOD(a) \ + do { \ + CHOP(a); \ + MOD28(a); \ + } while (0) +# define MOD63(a) \ + do { /* this assumes a is not negative */ \ + z_off64_t tmp = a >> 32; \ + a &= 0xffffffffL; \ + a += (tmp << 8) - (tmp << 5) + tmp; \ + tmp = a >> 16; \ + a &= 0xffffL; \ + a += (tmp << 4) - tmp; \ + tmp = a >> 16; \ + a &= 0xffffL; \ + a += (tmp << 4) - tmp; \ + if (a >= BASE) a -= BASE; \ + } while (0) +#else +# define MOD(a) a %= BASE +# define MOD28(a) a %= BASE +# define MOD63(a) a %= BASE +#endif + +/* ========================================================================= */ +uLong ZEXPORT adler32(adler, buf, len) + uLong adler; + const Bytef *buf; + uInt len; +{ + unsigned long sum2; + unsigned n; + + /* split Adler-32 into component sums */ + sum2 = (adler >> 16) & 0xffff; + adler &= 0xffff; + + /* in case user likes doing a byte at a time, keep it fast */ + if (len == 1) { + adler += 
buf[0]; + if (adler >= BASE) + adler -= BASE; + sum2 += adler; + if (sum2 >= BASE) + sum2 -= BASE; + return adler | (sum2 << 16); + } + + /* initial Adler-32 value (deferred check for len == 1 speed) */ + if (buf == Z_NULL) + return 1L; + + /* in case short lengths are provided, keep it somewhat fast */ + if (len < 16) { + while (len--) { + adler += *buf++; + sum2 += adler; + } + if (adler >= BASE) + adler -= BASE; + MOD28(sum2); /* only added so many BASE's */ + return adler | (sum2 << 16); + } + + /* do length NMAX blocks -- requires just one modulo operation */ + while (len >= NMAX) { + len -= NMAX; + n = NMAX / 16; /* NMAX is divisible by 16 */ + do { + DO16(buf); /* 16 sums unrolled */ + buf += 16; + } while (--n); + MOD(adler); + MOD(sum2); + } + + /* do remaining bytes (less than NMAX, still just one modulo) */ + if (len) { /* avoid modulos if none remaining */ + while (len >= 16) { + len -= 16; + DO16(buf); + buf += 16; + } + while (len--) { + adler += *buf++; + sum2 += adler; + } + MOD(adler); + MOD(sum2); + } + + /* return recombined sums */ + return adler | (sum2 << 16); +} + +/* ========================================================================= */ +local uLong adler32_combine_(adler1, adler2, len2) + uLong adler1; + uLong adler2; + z_off64_t len2; +{ + unsigned long sum1; + unsigned long sum2; + unsigned rem; + + /* for negative len, return invalid adler32 as a clue for debugging */ + if (len2 < 0) + return 0xffffffffUL; + + /* the derivation of this formula is left as an exercise for the reader */ + MOD63(len2); /* assumes len2 >= 0 */ + rem = (unsigned)len2; + sum1 = adler1 & 0xffff; + sum2 = rem * sum1; + MOD(sum2); + sum1 += (adler2 & 0xffff) + BASE - 1; + sum2 += ((adler1 >> 16) & 0xffff) + ((adler2 >> 16) & 0xffff) + BASE - rem; + if (sum1 >= BASE) sum1 -= BASE; + if (sum1 >= BASE) sum1 -= BASE; + if (sum2 >= (BASE << 1)) sum2 -= (BASE << 1); + if (sum2 >= BASE) sum2 -= BASE; + return sum1 | (sum2 << 16); +} + +/* ========================================================================= */ +uLong ZEXPORT adler32_combine(adler1, adler2, len2) + uLong adler1; + uLong adler2; + z_off_t len2; +{ + return adler32_combine_(adler1, adler2, len2); +} + +uLong ZEXPORT adler32_combine64(adler1, adler2, len2) + uLong adler1; + uLong adler2; + z_off64_t len2; +{ + return adler32_combine_(adler1, adler2, len2); +} ADDED compat/zlib/amiga/Makefile.pup Index: compat/zlib/amiga/Makefile.pup ================================================================== --- compat/zlib/amiga/Makefile.pup +++ compat/zlib/amiga/Makefile.pup @@ -0,0 +1,69 @@ +# Amiga powerUP (TM) Makefile +# makefile for libpng and SAS C V6.58/7.00 PPC compiler +# Copyright (C) 1998 by Andreas R. 
Kleinert + +LIBNAME = libzip.a + +CC = scppc +CFLAGS = NOSTKCHK NOSINT OPTIMIZE OPTGO OPTPEEP OPTINLOCAL OPTINL \ + OPTLOOP OPTRDEP=8 OPTDEP=8 OPTCOMP=8 NOVER +AR = ppc-amigaos-ar cr +RANLIB = ppc-amigaos-ranlib +LD = ppc-amigaos-ld -r +LDFLAGS = -o +LDLIBS = LIB:scppc.a LIB:end.o +RM = delete quiet + +OBJS = adler32.o compress.o crc32.o gzclose.o gzlib.o gzread.o gzwrite.o \ + uncompr.o deflate.o trees.o zutil.o inflate.o infback.o inftrees.o inffast.o + +TEST_OBJS = example.o minigzip.o + +all: example minigzip + +check: test +test: all + example + echo hello world | minigzip | minigzip -d + +$(LIBNAME): $(OBJS) + $(AR) $@ $(OBJS) + -$(RANLIB) $@ + +example: example.o $(LIBNAME) + $(LD) $(LDFLAGS) $@ LIB:c_ppc.o $@.o $(LIBNAME) $(LDLIBS) + +minigzip: minigzip.o $(LIBNAME) + $(LD) $(LDFLAGS) $@ LIB:c_ppc.o $@.o $(LIBNAME) $(LDLIBS) + +mostlyclean: clean +clean: + $(RM) *.o example minigzip $(LIBNAME) foo.gz + +zip: + zip -ul9 zlib README ChangeLog Makefile Make????.??? Makefile.?? \ + descrip.mms *.[ch] + +tgz: + cd ..; tar cfz zlib/zlib.tgz zlib/README zlib/ChangeLog zlib/Makefile \ + zlib/Make????.??? zlib/Makefile.?? zlib/descrip.mms zlib/*.[ch] + +# DO NOT DELETE THIS LINE -- make depend depends on it. + +adler32.o: zlib.h zconf.h +compress.o: zlib.h zconf.h +crc32.o: crc32.h zlib.h zconf.h +deflate.o: deflate.h zutil.h zlib.h zconf.h +example.o: zlib.h zconf.h +gzclose.o: zlib.h zconf.h gzguts.h +gzlib.o: zlib.h zconf.h gzguts.h +gzread.o: zlib.h zconf.h gzguts.h +gzwrite.o: zlib.h zconf.h gzguts.h +inffast.o: zutil.h zlib.h zconf.h inftrees.h inflate.h inffast.h +inflate.o: zutil.h zlib.h zconf.h inftrees.h inflate.h inffast.h +infback.o: zutil.h zlib.h zconf.h inftrees.h inflate.h inffast.h +inftrees.o: zutil.h zlib.h zconf.h inftrees.h +minigzip.o: zlib.h zconf.h +trees.o: deflate.h zutil.h zlib.h zconf.h trees.h +uncompr.o: zlib.h zconf.h +zutil.o: zutil.h zlib.h zconf.h ADDED compat/zlib/amiga/Makefile.sas Index: compat/zlib/amiga/Makefile.sas ================================================================== --- compat/zlib/amiga/Makefile.sas +++ compat/zlib/amiga/Makefile.sas @@ -0,0 +1,68 @@ +# SMakefile for zlib +# Modified from the standard UNIX Makefile Copyright Jean-loup Gailly +# Osma Ahvenlampi +# Amiga, SAS/C 6.56 & Smake + +CC=sc +CFLAGS=OPT +#CFLAGS=OPT CPU=68030 +#CFLAGS=DEBUG=LINE +LDFLAGS=LIB z.lib + +SCOPTIONS=OPTSCHED OPTINLINE OPTALIAS OPTTIME OPTINLOCAL STRMERGE \ + NOICONS PARMS=BOTH NOSTACKCHECK UTILLIB NOVERSION ERRORREXX \ + DEF=POSTINC + +OBJS = adler32.o compress.o crc32.o gzclose.o gzlib.o gzread.o gzwrite.o \ + uncompr.o deflate.o trees.o zutil.o inflate.o infback.o inftrees.o inffast.o + +TEST_OBJS = example.o minigzip.o + +all: SCOPTIONS example minigzip + +check: test +test: all + example + echo hello world | minigzip | minigzip -d + +install: z.lib + copy clone zlib.h zconf.h INCLUDE: + copy clone z.lib LIB: + +z.lib: $(OBJS) + oml z.lib r $(OBJS) + +example: example.o z.lib + $(CC) $(CFLAGS) LINK TO $@ example.o $(LDFLAGS) + +minigzip: minigzip.o z.lib + $(CC) $(CFLAGS) LINK TO $@ minigzip.o $(LDFLAGS) + +mostlyclean: clean +clean: + -delete force quiet example minigzip *.o z.lib foo.gz *.lnk SCOPTIONS + +SCOPTIONS: Makefile.sas + copy to $@ 64K on 16-bit machine: */ + if ((uLong)stream.avail_in != sourceLen) return Z_BUF_ERROR; +#endif + stream.next_out = dest; + stream.avail_out = (uInt)*destLen; + if ((uLong)stream.avail_out != *destLen) return Z_BUF_ERROR; + + stream.zalloc = (alloc_func)0; + stream.zfree = (free_func)0; + stream.opaque = 
(voidpf)0; + + err = deflateInit(&stream, level); + if (err != Z_OK) return err; + + err = deflate(&stream, Z_FINISH); + if (err != Z_STREAM_END) { + deflateEnd(&stream); + return err == Z_OK ? Z_BUF_ERROR : err; + } + *destLen = stream.total_out; + + err = deflateEnd(&stream); + return err; +} + +/* =========================================================================== + */ +int ZEXPORT compress (dest, destLen, source, sourceLen) + Bytef *dest; + uLongf *destLen; + const Bytef *source; + uLong sourceLen; +{ + return compress2(dest, destLen, source, sourceLen, Z_DEFAULT_COMPRESSION); +} + +/* =========================================================================== + If the default memLevel or windowBits for deflateInit() is changed, then + this function needs to be updated. + */ +uLong ZEXPORT compressBound (sourceLen) + uLong sourceLen; +{ + return sourceLen + (sourceLen >> 12) + (sourceLen >> 14) + + (sourceLen >> 25) + 13; +} ADDED compat/zlib/configure Index: compat/zlib/configure ================================================================== --- compat/zlib/configure +++ compat/zlib/configure @@ -0,0 +1,831 @@ +#!/bin/sh +# configure script for zlib. +# +# Normally configure builds both a static and a shared library. +# If you want to build just a static library, use: ./configure --static +# +# To impose specific compiler or flags or install directory, use for example: +# prefix=$HOME CC=cc CFLAGS="-O4" ./configure +# or for csh/tcsh users: +# (setenv prefix $HOME; setenv CC cc; setenv CFLAGS "-O4"; ./configure) + +# Incorrect settings of CC or CFLAGS may prevent creating a shared library. +# If you have problems, try without defining CC and CFLAGS before reporting +# an error. + +# start off configure.log +echo -------------------- >> configure.log +echo $0 $* >> configure.log +date >> configure.log + +# set command prefix for cross-compilation +if [ -n "${CHOST}" ]; then + uname="`echo "${CHOST}" | sed -e 's/^[^-]*-\([^-]*\)$/\1/' -e 's/^[^-]*-[^-]*-\([^-]*\)$/\1/' -e 's/^[^-]*-[^-]*-\([^-]*\)-.*$/\1/'`" + CROSS_PREFIX="${CHOST}-" +fi + +# destination name for static library +STATICLIB=libz.a + +# extract zlib version numbers from zlib.h +VER=`sed -n -e '/VERSION "/s/.*"\(.*\)".*/\1/p' < zlib.h` +VER3=`sed -n -e '/VERSION "/s/.*"\([0-9]*\\.[0-9]*\\.[0-9]*\).*/\1/p' < zlib.h` +VER2=`sed -n -e '/VERSION "/s/.*"\([0-9]*\\.[0-9]*\)\\..*/\1/p' < zlib.h` +VER1=`sed -n -e '/VERSION "/s/.*"\([0-9]*\)\\..*/\1/p' < zlib.h` + +# establish commands for library building +if "${CROSS_PREFIX}ar" --version >/dev/null 2>/dev/null || test $? -lt 126; then + AR=${AR-"${CROSS_PREFIX}ar"} + test -n "${CROSS_PREFIX}" && echo Using ${AR} | tee -a configure.log +else + AR=${AR-"ar"} + test -n "${CROSS_PREFIX}" && echo Using ${AR} | tee -a configure.log +fi +ARFLAGS=${ARFLAGS-"rc"} +if "${CROSS_PREFIX}ranlib" --version >/dev/null 2>/dev/null || test $? -lt 126; then + RANLIB=${RANLIB-"${CROSS_PREFIX}ranlib"} + test -n "${CROSS_PREFIX}" && echo Using ${RANLIB} | tee -a configure.log +else + RANLIB=${RANLIB-"ranlib"} +fi +if "${CROSS_PREFIX}nm" --version >/dev/null 2>/dev/null || test $? 
-lt 126; then + NM=${NM-"${CROSS_PREFIX}nm"} + test -n "${CROSS_PREFIX}" && echo Using ${NM} | tee -a configure.log +else + NM=${NM-"nm"} +fi + +# set defaults before processing command line options +LDCONFIG=${LDCONFIG-"ldconfig"} +LDSHAREDLIBC="${LDSHAREDLIBC--lc}" +ARCHS= +prefix=${prefix-/usr/local} +exec_prefix=${exec_prefix-'${prefix}'} +libdir=${libdir-'${exec_prefix}/lib'} +sharedlibdir=${sharedlibdir-'${libdir}'} +includedir=${includedir-'${prefix}/include'} +mandir=${mandir-'${prefix}/share/man'} +shared_ext='.so' +shared=1 +solo=0 +cover=0 +zprefix=0 +zconst=0 +build64=0 +gcc=0 +old_cc="$CC" +old_cflags="$CFLAGS" +OBJC='$(OBJZ) $(OBJG)' +PIC_OBJC='$(PIC_OBJZ) $(PIC_OBJG)' + +# leave this script, optionally in a bad way +leave() +{ + if test "$*" != "0"; then + echo "** $0 aborting." | tee -a configure.log + fi + rm -f $test.[co] $test $test$shared_ext $test.gcno ./--version + echo -------------------- >> configure.log + echo >> configure.log + echo >> configure.log + exit $1 +} + +# process command line options +while test $# -ge 1 +do +case "$1" in + -h* | --help) + echo 'usage:' | tee -a configure.log + echo ' configure [--const] [--zprefix] [--prefix=PREFIX] [--eprefix=EXPREFIX]' | tee -a configure.log + echo ' [--static] [--64] [--libdir=LIBDIR] [--sharedlibdir=LIBDIR]' | tee -a configure.log + echo ' [--includedir=INCLUDEDIR] [--archs="-arch i386 -arch x86_64"]' | tee -a configure.log + exit 0 ;; + -p*=* | --prefix=*) prefix=`echo $1 | sed 's/.*=//'`; shift ;; + -e*=* | --eprefix=*) exec_prefix=`echo $1 | sed 's/.*=//'`; shift ;; + -l*=* | --libdir=*) libdir=`echo $1 | sed 's/.*=//'`; shift ;; + --sharedlibdir=*) sharedlibdir=`echo $1 | sed 's/.*=//'`; shift ;; + -i*=* | --includedir=*) includedir=`echo $1 | sed 's/.*=//'`;shift ;; + -u*=* | --uname=*) uname=`echo $1 | sed 's/.*=//'`;shift ;; + -p* | --prefix) prefix="$2"; shift; shift ;; + -e* | --eprefix) exec_prefix="$2"; shift; shift ;; + -l* | --libdir) libdir="$2"; shift; shift ;; + -i* | --includedir) includedir="$2"; shift; shift ;; + -s* | --shared | --enable-shared) shared=1; shift ;; + -t | --static) shared=0; shift ;; + --solo) solo=1; shift ;; + --cover) cover=1; shift ;; + -z* | --zprefix) zprefix=1; shift ;; + -6* | --64) build64=1; shift ;; + -a*=* | --archs=*) ARCHS=`echo $1 | sed 's/.*=//'`; shift ;; + --sysconfdir=*) echo "ignored option: --sysconfdir" | tee -a configure.log; shift ;; + --localstatedir=*) echo "ignored option: --localstatedir" | tee -a configure.log; shift ;; + -c* | --const) zconst=1; shift ;; + *) + echo "unknown option: $1" | tee -a configure.log + echo "$0 --help for help" | tee -a configure.log + leave 1;; + esac +done + +# temporary file name +test=ztest$$ + +# put arguments in log, also put test file in log if used in arguments +show() +{ + case "$*" in + *$test.c*) + echo === $test.c === >> configure.log + cat $test.c >> configure.log + echo === >> configure.log;; + esac + echo $* >> configure.log +} + +# check for gcc vs. cc and set compile and link flags based on the system identified by uname +cat > $test.c <&1` in + *gcc*) gcc=1 ;; +esac + +show $cc -c $test.c +if test "$gcc" -eq 1 && ($cc -c $test.c) >> configure.log 2>&1; then + echo ... 
using gcc >> configure.log + CC="$cc" + CFLAGS="${CFLAGS--O3} ${ARCHS}" + SFLAGS="${CFLAGS--O3} -fPIC" + LDFLAGS="${LDFLAGS} ${ARCHS}" + if test $build64 -eq 1; then + CFLAGS="${CFLAGS} -m64" + SFLAGS="${SFLAGS} -m64" + fi + if test "${ZLIBGCCWARN}" = "YES"; then + if test "$zconst" -eq 1; then + CFLAGS="${CFLAGS} -Wall -Wextra -Wcast-qual -pedantic -DZLIB_CONST" + else + CFLAGS="${CFLAGS} -Wall -Wextra -pedantic" + fi + fi + if test -z "$uname"; then + uname=`(uname -s || echo unknown) 2>/dev/null` + fi + case "$uname" in + Linux* | linux* | GNU | GNU/* | solaris*) + LDSHARED=${LDSHARED-"$cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map"} ;; + *BSD | *bsd* | DragonFly) + LDSHARED=${LDSHARED-"$cc -shared -Wl,-soname,libz.so.1,--version-script,zlib.map"} + LDCONFIG="ldconfig -m" ;; + CYGWIN* | Cygwin* | cygwin* | OS/2*) + EXE='.exe' ;; + MINGW* | mingw*) +# temporary bypass + rm -f $test.[co] $test $test$shared_ext + echo "Please use win32/Makefile.gcc instead." | tee -a configure.log + leave 1 + LDSHARED=${LDSHARED-"$cc -shared"} + LDSHAREDLIBC="" + EXE='.exe' ;; + QNX*) # This is for QNX6. I suppose that the QNX rule below is for QNX2,QNX4 + # (alain.bonnefoy@icbt.com) + LDSHARED=${LDSHARED-"$cc -shared -Wl,-hlibz.so.1"} ;; + HP-UX*) + LDSHARED=${LDSHARED-"$cc -shared $SFLAGS"} + case `(uname -m || echo unknown) 2>/dev/null` in + ia64) + shared_ext='.so' + SHAREDLIB='libz.so' ;; + *) + shared_ext='.sl' + SHAREDLIB='libz.sl' ;; + esac ;; + Darwin* | darwin*) + shared_ext='.dylib' + SHAREDLIB=libz$shared_ext + SHAREDLIBV=libz.$VER$shared_ext + SHAREDLIBM=libz.$VER1$shared_ext + LDSHARED=${LDSHARED-"$cc -dynamiclib -install_name $libdir/$SHAREDLIBM -compatibility_version $VER1 -current_version $VER3"} + if libtool -V 2>&1 | grep Apple > /dev/null; then + AR="libtool" + else + AR="/usr/bin/libtool" + fi + ARFLAGS="-o" ;; + *) LDSHARED=${LDSHARED-"$cc -shared"} ;; + esac +else + # find system name and corresponding cc options + CC=${CC-cc} + gcc=0 + echo ... using $CC >> configure.log + if test -z "$uname"; then + uname=`(uname -sr || echo unknown) 2>/dev/null` + fi + case "$uname" in + HP-UX*) SFLAGS=${CFLAGS-"-O +z"} + CFLAGS=${CFLAGS-"-O"} +# LDSHARED=${LDSHARED-"ld -b +vnocompatwarnings"} + LDSHARED=${LDSHARED-"ld -b"} + case `(uname -m || echo unknown) 2>/dev/null` in + ia64) + shared_ext='.so' + SHAREDLIB='libz.so' ;; + *) + shared_ext='.sl' + SHAREDLIB='libz.sl' ;; + esac ;; + IRIX*) SFLAGS=${CFLAGS-"-ansi -O2 -rpath ."} + CFLAGS=${CFLAGS-"-ansi -O2"} + LDSHARED=${LDSHARED-"cc -shared -Wl,-soname,libz.so.1"} ;; + OSF1\ V4*) SFLAGS=${CFLAGS-"-O -std1"} + CFLAGS=${CFLAGS-"-O -std1"} + LDFLAGS="${LDFLAGS} -Wl,-rpath,." + LDSHARED=${LDSHARED-"cc -shared -Wl,-soname,libz.so -Wl,-msym -Wl,-rpath,$(libdir) -Wl,-set_version,${VER}:1.0"} ;; + OSF1*) SFLAGS=${CFLAGS-"-O -std1"} + CFLAGS=${CFLAGS-"-O -std1"} + LDSHARED=${LDSHARED-"cc -shared -Wl,-soname,libz.so.1"} ;; + QNX*) SFLAGS=${CFLAGS-"-4 -O"} + CFLAGS=${CFLAGS-"-4 -O"} + LDSHARED=${LDSHARED-"cc"} + RANLIB=${RANLIB-"true"} + AR="cc" + ARFLAGS="-A" ;; + SCO_SV\ 3.2*) SFLAGS=${CFLAGS-"-O3 -dy -KPIC "} + CFLAGS=${CFLAGS-"-O3"} + LDSHARED=${LDSHARED-"cc -dy -KPIC -G"} ;; + SunOS\ 5* | solaris*) + LDSHARED=${LDSHARED-"cc -G -h libz$shared_ext.$VER1"} + SFLAGS=${CFLAGS-"-fast -KPIC"} + CFLAGS=${CFLAGS-"-fast"} + if test $build64 -eq 1; then + # old versions of SunPRO/Workshop/Studio don't support -m64, + # but newer ones do. Check for it. 
+ flag64=`$CC -flags | egrep -- '^-m64'` + if test x"$flag64" != x"" ; then + CFLAGS="${CFLAGS} -m64" + SFLAGS="${SFLAGS} -m64" + else + case `(uname -m || echo unknown) 2>/dev/null` in + i86*) + SFLAGS="$SFLAGS -xarch=amd64" + CFLAGS="$CFLAGS -xarch=amd64" ;; + *) + SFLAGS="$SFLAGS -xarch=v9" + CFLAGS="$CFLAGS -xarch=v9" ;; + esac + fi + fi + ;; + SunOS\ 4*) SFLAGS=${CFLAGS-"-O2 -PIC"} + CFLAGS=${CFLAGS-"-O2"} + LDSHARED=${LDSHARED-"ld"} ;; + SunStudio\ 9*) SFLAGS=${CFLAGS-"-fast -xcode=pic32 -xtarget=ultra3 -xarch=v9b"} + CFLAGS=${CFLAGS-"-fast -xtarget=ultra3 -xarch=v9b"} + LDSHARED=${LDSHARED-"cc -xarch=v9b"} ;; + UNIX_System_V\ 4.2.0) + SFLAGS=${CFLAGS-"-KPIC -O"} + CFLAGS=${CFLAGS-"-O"} + LDSHARED=${LDSHARED-"cc -G"} ;; + UNIX_SV\ 4.2MP) + SFLAGS=${CFLAGS-"-Kconform_pic -O"} + CFLAGS=${CFLAGS-"-O"} + LDSHARED=${LDSHARED-"cc -G"} ;; + OpenUNIX\ 5) + SFLAGS=${CFLAGS-"-KPIC -O"} + CFLAGS=${CFLAGS-"-O"} + LDSHARED=${LDSHARED-"cc -G"} ;; + AIX*) # Courtesy of dbakker@arrayasolutions.com + SFLAGS=${CFLAGS-"-O -qmaxmem=8192"} + CFLAGS=${CFLAGS-"-O -qmaxmem=8192"} + LDSHARED=${LDSHARED-"xlc -G"} ;; + # send working options for other systems to zlib@gzip.org + *) SFLAGS=${CFLAGS-"-O"} + CFLAGS=${CFLAGS-"-O"} + LDSHARED=${LDSHARED-"cc -shared"} ;; + esac +fi + +# destination names for shared library if not defined above +SHAREDLIB=${SHAREDLIB-"libz$shared_ext"} +SHAREDLIBV=${SHAREDLIBV-"libz$shared_ext.$VER"} +SHAREDLIBM=${SHAREDLIBM-"libz$shared_ext.$VER1"} + +echo >> configure.log + +# define functions for testing compiler and library characteristics and logging the results + +cat > $test.c </dev/null; then + try() + { + show $* + test "`( $* ) 2>&1 | tee -a configure.log`" = "" + } + echo - using any output from compiler to indicate an error >> configure.log +else +try() +{ + show $* + ( $* ) >> configure.log 2>&1 + ret=$? + if test $ret -ne 0; then + echo "(exit code "$ret")" >> configure.log + fi + return $ret +} +fi + +tryboth() +{ + show $* + got=`( $* ) 2>&1` + ret=$? + printf %s "$got" >> configure.log + if test $ret -ne 0; then + return $ret + fi + test "$got" = "" +} + +cat > $test.c << EOF +int foo() { return 0; } +EOF +echo "Checking for obsessive-compulsive compiler options..." >> configure.log +if try $CC -c $CFLAGS $test.c; then + : +else + echo "Compiler error reporting is too harsh for $0 (perhaps remove -Werror)." | tee -a configure.log + leave 1 +fi + +echo >> configure.log + +# see if shared library build supported +cat > $test.c <> configure.log + show "$NM $test.o | grep _hello" + if test "`$NM $test.o | grep _hello | tee -a configure.log`" = ""; then + CPP="$CPP -DNO_UNDERLINE" + echo Checking for underline in external names... No. | tee -a configure.log + else + echo Checking for underline in external names... Yes. | tee -a configure.log + fi ;; +esac + +echo >> configure.log + +# check for large file support, and if none, check for fseeko() +cat > $test.c < +off64_t dummy = 0; +EOF +if try $CC -c $CFLAGS -D_LARGEFILE64_SOURCE=1 $test.c; then + CFLAGS="${CFLAGS} -D_LARGEFILE64_SOURCE=1" + SFLAGS="${SFLAGS} -D_LARGEFILE64_SOURCE=1" + ALL="${ALL} all64" + TEST="${TEST} test64" + echo "Checking for off64_t... Yes." | tee -a configure.log + echo "Checking for fseeko... Yes." | tee -a configure.log +else + echo "Checking for off64_t... No." | tee -a configure.log + echo >> configure.log + cat > $test.c < +int main(void) { + fseeko(NULL, 0, 0); + return 0; +} +EOF + if try $CC $CFLAGS -o $test $test.c; then + echo "Checking for fseeko... Yes." 
| tee -a configure.log + else + CFLAGS="${CFLAGS} -DNO_FSEEKO" + SFLAGS="${SFLAGS} -DNO_FSEEKO" + echo "Checking for fseeko... No." | tee -a configure.log + fi +fi + +echo >> configure.log + +# check for strerror() for use by gz* functions +cat > $test.c < +#include +int main() { return strlen(strerror(errno)); } +EOF +if try $CC $CFLAGS -o $test $test.c; then + echo "Checking for strerror... Yes." | tee -a configure.log +else + CFLAGS="${CFLAGS} -DNO_STRERROR" + SFLAGS="${SFLAGS} -DNO_STRERROR" + echo "Checking for strerror... No." | tee -a configure.log +fi + +# copy clean zconf.h for subsequent edits +cp -p zconf.h.in zconf.h + +echo >> configure.log + +# check for unistd.h and save result in zconf.h +cat > $test.c < +int main() { return 0; } +EOF +if try $CC -c $CFLAGS $test.c; then + sed < zconf.h "/^#ifdef HAVE_UNISTD_H.* may be/s/def HAVE_UNISTD_H\(.*\) may be/ 1\1 was/" > zconf.temp.h + mv zconf.temp.h zconf.h + echo "Checking for unistd.h... Yes." | tee -a configure.log +else + echo "Checking for unistd.h... No." | tee -a configure.log +fi + +echo >> configure.log + +# check for stdarg.h and save result in zconf.h +cat > $test.c < +int main() { return 0; } +EOF +if try $CC -c $CFLAGS $test.c; then + sed < zconf.h "/^#ifdef HAVE_STDARG_H.* may be/s/def HAVE_STDARG_H\(.*\) may be/ 1\1 was/" > zconf.temp.h + mv zconf.temp.h zconf.h + echo "Checking for stdarg.h... Yes." | tee -a configure.log +else + echo "Checking for stdarg.h... No." | tee -a configure.log +fi + +# if the z_ prefix was requested, save that in zconf.h +if test $zprefix -eq 1; then + sed < zconf.h "/#ifdef Z_PREFIX.* may be/s/def Z_PREFIX\(.*\) may be/ 1\1 was/" > zconf.temp.h + mv zconf.temp.h zconf.h + echo >> configure.log + echo "Using z_ prefix on all symbols." | tee -a configure.log +fi + +# if --solo compilation was requested, save that in zconf.h and remove gz stuff from object lists +if test $solo -eq 1; then + sed '/#define ZCONF_H/a\ +#define Z_SOLO + +' < zconf.h > zconf.temp.h + mv zconf.temp.h zconf.h +OBJC='$(OBJZ)' +PIC_OBJC='$(PIC_OBJZ)' +fi + +# if code coverage testing was requested, use older gcc if defined, e.g. "gcc-4.2" on Mac OS X +if test $cover -eq 1; then + CFLAGS="${CFLAGS} -fprofile-arcs -ftest-coverage" + if test -n "$GCC_CLASSIC"; then + CC=$GCC_CLASSIC + fi +fi + +echo >> configure.log + +# conduct a series of tests to resolve eight possible cases of using "vs" or "s" printf functions +# (using stdarg or not), with or without "n" (proving size of buffer), and with or without a +# return value. The most secure result is vsnprintf() with a return value. snprintf() with a +# return value is secure as well, but then gzprintf() will be limited to 20 arguments. +cat > $test.c < +#include +#include "zconf.h" +int main() +{ +#ifndef STDC + choke me +#endif + return 0; +} +EOF +if try $CC -c $CFLAGS $test.c; then + echo "Checking whether to use vs[n]printf() or s[n]printf()... using vs[n]printf()." | tee -a configure.log + + echo >> configure.log + cat > $test.c < +#include +int mytest(const char *fmt, ...) +{ + char buf[20]; + va_list ap; + va_start(ap, fmt); + vsnprintf(buf, sizeof(buf), fmt, ap); + va_end(ap); + return 0; +} +int main() +{ + return (mytest("Hello%d\n", 1)); +} +EOF + if try $CC $CFLAGS -o $test $test.c; then + echo "Checking for vsnprintf() in stdio.h... Yes." | tee -a configure.log + + echo >> configure.log + cat >$test.c < +#include +int mytest(const char *fmt, ...) 
+{ + int n; + char buf[20]; + va_list ap; + va_start(ap, fmt); + n = vsnprintf(buf, sizeof(buf), fmt, ap); + va_end(ap); + return n; +} +int main() +{ + return (mytest("Hello%d\n", 1)); +} +EOF + + if try $CC -c $CFLAGS $test.c; then + echo "Checking for return value of vsnprintf()... Yes." | tee -a configure.log + else + CFLAGS="$CFLAGS -DHAS_vsnprintf_void" + SFLAGS="$SFLAGS -DHAS_vsnprintf_void" + echo "Checking for return value of vsnprintf()... No." | tee -a configure.log + echo " WARNING: apparently vsnprintf() does not return a value. zlib" | tee -a configure.log + echo " can build but will be open to possible string-format security" | tee -a configure.log + echo " vulnerabilities." | tee -a configure.log + fi + else + CFLAGS="$CFLAGS -DNO_vsnprintf" + SFLAGS="$SFLAGS -DNO_vsnprintf" + echo "Checking for vsnprintf() in stdio.h... No." | tee -a configure.log + echo " WARNING: vsnprintf() not found, falling back to vsprintf(). zlib" | tee -a configure.log + echo " can build but will be open to possible buffer-overflow security" | tee -a configure.log + echo " vulnerabilities." | tee -a configure.log + + echo >> configure.log + cat >$test.c < +#include +int mytest(const char *fmt, ...) +{ + int n; + char buf[20]; + va_list ap; + va_start(ap, fmt); + n = vsprintf(buf, fmt, ap); + va_end(ap); + return n; +} +int main() +{ + return (mytest("Hello%d\n", 1)); +} +EOF + + if try $CC -c $CFLAGS $test.c; then + echo "Checking for return value of vsprintf()... Yes." | tee -a configure.log + else + CFLAGS="$CFLAGS -DHAS_vsprintf_void" + SFLAGS="$SFLAGS -DHAS_vsprintf_void" + echo "Checking for return value of vsprintf()... No." | tee -a configure.log + echo " WARNING: apparently vsprintf() does not return a value. zlib" | tee -a configure.log + echo " can build but will be open to possible string-format security" | tee -a configure.log + echo " vulnerabilities." | tee -a configure.log + fi + fi +else + echo "Checking whether to use vs[n]printf() or s[n]printf()... using s[n]printf()." | tee -a configure.log + + echo >> configure.log + cat >$test.c < +int mytest() +{ + char buf[20]; + snprintf(buf, sizeof(buf), "%s", "foo"); + return 0; +} +int main() +{ + return (mytest()); +} +EOF + + if try $CC $CFLAGS -o $test $test.c; then + echo "Checking for snprintf() in stdio.h... Yes." | tee -a configure.log + + echo >> configure.log + cat >$test.c < +int mytest() +{ + char buf[20]; + return snprintf(buf, sizeof(buf), "%s", "foo"); +} +int main() +{ + return (mytest()); +} +EOF + + if try $CC -c $CFLAGS $test.c; then + echo "Checking for return value of snprintf()... Yes." | tee -a configure.log + else + CFLAGS="$CFLAGS -DHAS_snprintf_void" + SFLAGS="$SFLAGS -DHAS_snprintf_void" + echo "Checking for return value of snprintf()... No." | tee -a configure.log + echo " WARNING: apparently snprintf() does not return a value. zlib" | tee -a configure.log + echo " can build but will be open to possible string-format security" | tee -a configure.log + echo " vulnerabilities." | tee -a configure.log + fi + else + CFLAGS="$CFLAGS -DNO_snprintf" + SFLAGS="$SFLAGS -DNO_snprintf" + echo "Checking for snprintf() in stdio.h... No." | tee -a configure.log + echo " WARNING: snprintf() not found, falling back to sprintf(). zlib" | tee -a configure.log + echo " can build but will be open to possible buffer-overflow security" | tee -a configure.log + echo " vulnerabilities." 
| tee -a configure.log + + echo >> configure.log + cat >$test.c < +int mytest() +{ + char buf[20]; + return sprintf(buf, "%s", "foo"); +} +int main() +{ + return (mytest()); +} +EOF + + if try $CC -c $CFLAGS $test.c; then + echo "Checking for return value of sprintf()... Yes." | tee -a configure.log + else + CFLAGS="$CFLAGS -DHAS_sprintf_void" + SFLAGS="$SFLAGS -DHAS_sprintf_void" + echo "Checking for return value of sprintf()... No." | tee -a configure.log + echo " WARNING: apparently sprintf() does not return a value. zlib" | tee -a configure.log + echo " can build but will be open to possible string-format security" | tee -a configure.log + echo " vulnerabilities." | tee -a configure.log + fi + fi +fi + +# see if we can hide zlib internal symbols that are linked between separate source files +if test "$gcc" -eq 1; then + echo >> configure.log + cat > $test.c <> configure.log +echo ALL = $ALL >> configure.log +echo AR = $AR >> configure.log +echo ARFLAGS = $ARFLAGS >> configure.log +echo CC = $CC >> configure.log +echo CFLAGS = $CFLAGS >> configure.log +echo CPP = $CPP >> configure.log +echo EXE = $EXE >> configure.log +echo LDCONFIG = $LDCONFIG >> configure.log +echo LDFLAGS = $LDFLAGS >> configure.log +echo LDSHARED = $LDSHARED >> configure.log +echo LDSHAREDLIBC = $LDSHAREDLIBC >> configure.log +echo OBJC = $OBJC >> configure.log +echo PIC_OBJC = $PIC_OBJC >> configure.log +echo RANLIB = $RANLIB >> configure.log +echo SFLAGS = $SFLAGS >> configure.log +echo SHAREDLIB = $SHAREDLIB >> configure.log +echo SHAREDLIBM = $SHAREDLIBM >> configure.log +echo SHAREDLIBV = $SHAREDLIBV >> configure.log +echo STATICLIB = $STATICLIB >> configure.log +echo TEST = $TEST >> configure.log +echo VER = $VER >> configure.log +echo Z_U4 = $Z_U4 >> configure.log +echo exec_prefix = $exec_prefix >> configure.log +echo includedir = $includedir >> configure.log +echo libdir = $libdir >> configure.log +echo mandir = $mandir >> configure.log +echo prefix = $prefix >> configure.log +echo sharedlibdir = $sharedlibdir >> configure.log +echo uname = $uname >> configure.log + +# udpate Makefile with the configure results +sed < Makefile.in " +/^CC *=/s#=.*#=$CC# +/^CFLAGS *=/s#=.*#=$CFLAGS# +/^SFLAGS *=/s#=.*#=$SFLAGS# +/^LDFLAGS *=/s#=.*#=$LDFLAGS# +/^LDSHARED *=/s#=.*#=$LDSHARED# +/^CPP *=/s#=.*#=$CPP# +/^STATICLIB *=/s#=.*#=$STATICLIB# +/^SHAREDLIB *=/s#=.*#=$SHAREDLIB# +/^SHAREDLIBV *=/s#=.*#=$SHAREDLIBV# +/^SHAREDLIBM *=/s#=.*#=$SHAREDLIBM# +/^AR *=/s#=.*#=$AR# +/^ARFLAGS *=/s#=.*#=$ARFLAGS# +/^RANLIB *=/s#=.*#=$RANLIB# +/^LDCONFIG *=/s#=.*#=$LDCONFIG# +/^LDSHAREDLIBC *=/s#=.*#=$LDSHAREDLIBC# +/^EXE *=/s#=.*#=$EXE# +/^prefix *=/s#=.*#=$prefix# +/^exec_prefix *=/s#=.*#=$exec_prefix# +/^libdir *=/s#=.*#=$libdir# +/^sharedlibdir *=/s#=.*#=$sharedlibdir# +/^includedir *=/s#=.*#=$includedir# +/^mandir *=/s#=.*#=$mandir# +/^OBJC *=/s#=.*#= $OBJC# +/^PIC_OBJC *=/s#=.*#= $PIC_OBJC# +/^all: */s#:.*#: $ALL# +/^test: */s#:.*#: $TEST# +" > Makefile + +# create zlib.pc with the configure results +sed < zlib.pc.in " +/^CC *=/s#=.*#=$CC# +/^CFLAGS *=/s#=.*#=$CFLAGS# +/^CPP *=/s#=.*#=$CPP# +/^LDSHARED *=/s#=.*#=$LDSHARED# +/^STATICLIB *=/s#=.*#=$STATICLIB# +/^SHAREDLIB *=/s#=.*#=$SHAREDLIB# +/^SHAREDLIBV *=/s#=.*#=$SHAREDLIBV# +/^SHAREDLIBM *=/s#=.*#=$SHAREDLIBM# +/^AR *=/s#=.*#=$AR# +/^ARFLAGS *=/s#=.*#=$ARFLAGS# +/^RANLIB *=/s#=.*#=$RANLIB# +/^EXE *=/s#=.*#=$EXE# +/^prefix *=/s#=.*#=$prefix# +/^exec_prefix *=/s#=.*#=$exec_prefix# +/^libdir *=/s#=.*#=$libdir# +/^sharedlibdir *=/s#=.*#=$sharedlibdir# +/^includedir 
*=/s#=.*#=$includedir# +/^mandir *=/s#=.*#=$mandir# +/^LDFLAGS *=/s#=.*#=$LDFLAGS# +" | sed -e " +s/\@VERSION\@/$VER/g; +" > zlib.pc + +# done +leave 0 ADDED compat/zlib/contrib/README.contrib Index: compat/zlib/contrib/README.contrib ================================================================== --- compat/zlib/contrib/README.contrib +++ compat/zlib/contrib/README.contrib @@ -0,0 +1,78 @@ +All files under this contrib directory are UNSUPPORTED. There were +provided by users of zlib and were not tested by the authors of zlib. +Use at your own risk. Please contact the authors of the contributions +for help about these, not the zlib authors. Thanks. + + +ada/ by Dmitriy Anisimkov + Support for Ada + See http://zlib-ada.sourceforge.net/ + +amd64/ by Mikhail Teterin + asm code for AMD64 + See patch at http://www.freebsd.org/cgi/query-pr.cgi?pr=bin/96393 + +asm686/ by Brian Raiter + asm code for Pentium and PPro/PII, using the AT&T (GNU as) syntax + See http://www.muppetlabs.com/~breadbox/software/assembly.html + +blast/ by Mark Adler + Decompressor for output of PKWare Data Compression Library (DCL) + +delphi/ by Cosmin Truta + Support for Delphi and C++ Builder + +dotzlib/ by Henrik Ravn + Support for Microsoft .Net and Visual C++ .Net + +gcc_gvmat64/by Gilles Vollant + GCC Version of x86 64-bit (AMD64 and Intel EM64t) code for x64 + assembler to replace longest_match() and inflate_fast() + +infback9/ by Mark Adler + Unsupported diffs to infback to decode the deflate64 format + +inflate86/ by Chris Anderson + Tuned x86 gcc asm code to replace inflate_fast() + +iostream/ by Kevin Ruland + A C++ I/O streams interface to the zlib gz* functions + +iostream2/ by Tyge Løvset + Another C++ I/O streams interface + +iostream3/ by Ludwig Schwardt + and Kevin Ruland + Yet another C++ I/O streams interface + +masmx64/ by Gilles Vollant + x86 64-bit (AMD64 and Intel EM64t) code for x64 assembler to + replace longest_match() and inflate_fast(), also masm x86 + 64-bits translation of Chris Anderson inflate_fast() + +masmx86/ by Gilles Vollant + x86 asm code to replace longest_match() and inflate_fast(), + for Visual C++ and MASM (32 bits). + Based on Brian Raiter (asm686) and Chris Anderson (inflate86) + +minizip/ by Gilles Vollant + Mini zip and unzip based on zlib + Includes Zip64 support by Mathias Svensson + See http://www.winimage.com/zLibDll/unzip.html + +pascal/ by Bob Dellaca et al. + Support for Pascal + +puff/ by Mark Adler + Small, low memory usage inflate. Also serves to provide an + unambiguous description of the deflate format. + +testzlib/ by Gilles Vollant + Example of the use of zlib + +untgz/ by Pedro A. Aranda Gutierrez + A very simple tar.gz file extractor using zlib + +vstudio/ by Gilles Vollant + Building a minizip-enhanced zlib with Microsoft Visual Studio + Includes vc11 from kreuzerkrieg and vc12 from davispuh ADDED compat/zlib/contrib/ada/buffer_demo.adb Index: compat/zlib/contrib/ada/buffer_demo.adb ================================================================== --- compat/zlib/contrib/ada/buffer_demo.adb +++ compat/zlib/contrib/ada/buffer_demo.adb @@ -0,0 +1,106 @@ +---------------------------------------------------------------- +-- ZLib for Ada thick binding. -- +-- -- +-- Copyright (C) 2002-2004 Dmitriy Anisimkov -- +-- -- +-- Open source license information is in the zlib.ads file. 
-- +---------------------------------------------------------------- +-- +-- $Id: buffer_demo.adb,v 1.3 2004/09/06 06:55:35 vagul Exp $ + +-- This demo program provided by Dr Steve Sangwine +-- +-- Demonstration of a problem with Zlib-Ada (already fixed) when a buffer +-- of exactly the correct size is used for decompressed data, and the last +-- few bytes passed in to Zlib are checksum bytes. + +-- This program compresses a string of text, and then decompresses the +-- compressed text into a buffer of the same size as the original text. + +with Ada.Streams; use Ada.Streams; +with Ada.Text_IO; + +with ZLib; use ZLib; + +procedure Buffer_Demo is + EOL : Character renames ASCII.LF; + Text : constant String + := "Four score and seven years ago our fathers brought forth," & EOL & + "upon this continent, a new nation, conceived in liberty," & EOL & + "and dedicated to the proposition that `all men are created equal'."; + + Source : Stream_Element_Array (1 .. Text'Length); + for Source'Address use Text'Address; + +begin + Ada.Text_IO.Put (Text); + Ada.Text_IO.New_Line; + Ada.Text_IO.Put_Line + ("Uncompressed size : " & Positive'Image (Text'Length) & " bytes"); + + declare + Compressed_Data : Stream_Element_Array (1 .. Text'Length); + L : Stream_Element_Offset; + begin + Compress : declare + Compressor : Filter_Type; + I : Stream_Element_Offset; + begin + Deflate_Init (Compressor); + + -- Compress the whole of T at once. + + Translate (Compressor, Source, I, Compressed_Data, L, Finish); + pragma Assert (I = Source'Last); + + Close (Compressor); + + Ada.Text_IO.Put_Line + ("Compressed size : " + & Stream_Element_Offset'Image (L) & " bytes"); + end Compress; + + -- Now we decompress the data, passing short blocks of data to Zlib + -- (because this demonstrates the problem - the last block passed will + -- contain checksum information and there will be no output, only a + -- check inside Zlib that the checksum is correct). + + Decompress : declare + Decompressor : Filter_Type; + + Uncompressed_Data : Stream_Element_Array (1 .. Text'Length); + + Block_Size : constant := 4; + -- This makes sure that the last block contains + -- only Adler checksum data. + + P : Stream_Element_Offset := Compressed_Data'First - 1; + O : Stream_Element_Offset; + begin + Inflate_Init (Decompressor); + + loop + Translate + (Decompressor, + Compressed_Data + (P + 1 .. Stream_Element_Offset'Min (P + Block_Size, L)), + P, + Uncompressed_Data + (Total_Out (Decompressor) + 1 .. Uncompressed_Data'Last), + O, + No_Flush); + + Ada.Text_IO.Put_Line + ("Total in : " & Count'Image (Total_In (Decompressor)) & + ", out : " & Count'Image (Total_Out (Decompressor))); + + exit when P = L; + end loop; + + Ada.Text_IO.New_Line; + Ada.Text_IO.Put_Line + ("Decompressed text matches original text : " + & Boolean'Image (Uncompressed_Data = Source)); + end Decompress; + end; +end Buffer_Demo; ADDED compat/zlib/contrib/ada/mtest.adb Index: compat/zlib/contrib/ada/mtest.adb ================================================================== --- compat/zlib/contrib/ada/mtest.adb +++ compat/zlib/contrib/ada/mtest.adb @@ -0,0 +1,156 @@ +---------------------------------------------------------------- +-- ZLib for Ada thick binding. -- +-- -- +-- Copyright (C) 2002-2003 Dmitriy Anisimkov -- +-- -- +-- Open source license information is in the zlib.ads file. -- +---------------------------------------------------------------- +-- Continuous test for ZLib multithreading. 
If the test would fail +-- we should provide thread safe allocation routines for the Z_Stream. +-- +-- $Id: mtest.adb,v 1.4 2004/07/23 07:49:54 vagul Exp $ + +with ZLib; +with Ada.Streams; +with Ada.Numerics.Discrete_Random; +with Ada.Text_IO; +with Ada.Exceptions; +with Ada.Task_Identification; + +procedure MTest is + use Ada.Streams; + use ZLib; + + Stop : Boolean := False; + + pragma Atomic (Stop); + + subtype Visible_Symbols is Stream_Element range 16#20# .. 16#7E#; + + package Random_Elements is + new Ada.Numerics.Discrete_Random (Visible_Symbols); + + task type Test_Task; + + task body Test_Task is + Buffer : Stream_Element_Array (1 .. 100_000); + Gen : Random_Elements.Generator; + + Buffer_First : Stream_Element_Offset; + Compare_First : Stream_Element_Offset; + + Deflate : Filter_Type; + Inflate : Filter_Type; + + procedure Further (Item : in Stream_Element_Array); + + procedure Read_Buffer + (Item : out Ada.Streams.Stream_Element_Array; + Last : out Ada.Streams.Stream_Element_Offset); + + ------------- + -- Further -- + ------------- + + procedure Further (Item : in Stream_Element_Array) is + + procedure Compare (Item : in Stream_Element_Array); + + ------------- + -- Compare -- + ------------- + + procedure Compare (Item : in Stream_Element_Array) is + Next_First : Stream_Element_Offset := Compare_First + Item'Length; + begin + if Buffer (Compare_First .. Next_First - 1) /= Item then + raise Program_Error; + end if; + + Compare_First := Next_First; + end Compare; + + procedure Compare_Write is new ZLib.Write (Write => Compare); + begin + Compare_Write (Inflate, Item, No_Flush); + end Further; + + ----------------- + -- Read_Buffer -- + ----------------- + + procedure Read_Buffer + (Item : out Ada.Streams.Stream_Element_Array; + Last : out Ada.Streams.Stream_Element_Offset) + is + Buff_Diff : Stream_Element_Offset := Buffer'Last - Buffer_First; + Next_First : Stream_Element_Offset; + begin + if Item'Length <= Buff_Diff then + Last := Item'Last; + + Next_First := Buffer_First + Item'Length; + + Item := Buffer (Buffer_First .. Next_First - 1); + + Buffer_First := Next_First; + else + Last := Item'First + Buff_Diff; + Item (Item'First .. Last) := Buffer (Buffer_First .. Buffer'Last); + Buffer_First := Buffer'Last + 1; + end if; + end Read_Buffer; + + procedure Translate is new Generic_Translate + (Data_In => Read_Buffer, + Data_Out => Further); + + begin + Random_Elements.Reset (Gen); + + Buffer := (others => 20); + + Main : loop + for J in Buffer'Range loop + Buffer (J) := Random_Elements.Random (Gen); + + Deflate_Init (Deflate); + Inflate_Init (Inflate); + + Buffer_First := Buffer'First; + Compare_First := Buffer'First; + + Translate (Deflate); + + if Compare_First /= Buffer'Last + 1 then + raise Program_Error; + end if; + + Ada.Text_IO.Put_Line + (Ada.Task_Identification.Image + (Ada.Task_Identification.Current_Task) + & Stream_Element_Offset'Image (J) + & ZLib.Count'Image (Total_Out (Deflate))); + + Close (Deflate); + Close (Inflate); + + exit Main when Stop; + end loop; + end loop Main; + exception + when E : others => + Ada.Text_IO.Put_Line (Ada.Exceptions.Exception_Information (E)); + Stop := True; + end Test_Task; + + Test : array (1 .. 
4) of Test_Task; + + pragma Unreferenced (Test); + + Dummy : Character; + +begin + Ada.Text_IO.Get_Immediate (Dummy); + Stop := True; +end MTest; ADDED compat/zlib/contrib/ada/read.adb Index: compat/zlib/contrib/ada/read.adb ================================================================== --- compat/zlib/contrib/ada/read.adb +++ compat/zlib/contrib/ada/read.adb @@ -0,0 +1,156 @@ +---------------------------------------------------------------- +-- ZLib for Ada thick binding. -- +-- -- +-- Copyright (C) 2002-2003 Dmitriy Anisimkov -- +-- -- +-- Open source license information is in the zlib.ads file. -- +---------------------------------------------------------------- + +-- $Id: read.adb,v 1.8 2004/05/31 10:53:40 vagul Exp $ + +-- Test/demo program for the generic read interface. + +with Ada.Numerics.Discrete_Random; +with Ada.Streams; +with Ada.Text_IO; + +with ZLib; + +procedure Read is + + use Ada.Streams; + + ------------------------------------ + -- Test configuration parameters -- + ------------------------------------ + + File_Size : Stream_Element_Offset := 100_000; + + Continuous : constant Boolean := False; + -- If this constant is True, the test would be repeated again and again, + -- with increment File_Size for every iteration. + + Header : constant ZLib.Header_Type := ZLib.Default; + -- Do not use Header other than Default in ZLib versions 1.1.4 and older. + + Init_Random : constant := 8; + -- We are using the same random sequence, in case of we catch bug, + -- so we would be able to reproduce it. + + -- End -- + + Pack_Size : Stream_Element_Offset; + Offset : Stream_Element_Offset; + + Filter : ZLib.Filter_Type; + + subtype Visible_Symbols + is Stream_Element range 16#20# .. 16#7E#; + + package Random_Elements is new + Ada.Numerics.Discrete_Random (Visible_Symbols); + + Gen : Random_Elements.Generator; + Period : constant Stream_Element_Offset := 200; + -- Period constant variable for random generator not to be very random. + -- Bigger period, harder random. + + Read_Buffer : Stream_Element_Array (1 .. 2048); + Read_First : Stream_Element_Offset; + Read_Last : Stream_Element_Offset; + + procedure Reset; + + procedure Read + (Item : out Stream_Element_Array; + Last : out Stream_Element_Offset); + -- this procedure is for generic instantiation of + -- ZLib.Read + -- reading data from the File_In. + + procedure Read is new ZLib.Read + (Read, + Read_Buffer, + Rest_First => Read_First, + Rest_Last => Read_Last); + + ---------- + -- Read -- + ---------- + + procedure Read + (Item : out Stream_Element_Array; + Last : out Stream_Element_Offset) is + begin + Last := Stream_Element_Offset'Min + (Item'Last, + Item'First + File_Size - Offset); + + for J in Item'First .. Last loop + if J < Item'First + Period then + Item (J) := Random_Elements.Random (Gen); + else + Item (J) := Item (J - Period); + end if; + + Offset := Offset + 1; + end loop; + end Read; + + ----------- + -- Reset -- + ----------- + + procedure Reset is + begin + Random_Elements.Reset (Gen, Init_Random); + Pack_Size := 0; + Offset := 1; + Read_First := Read_Buffer'Last + 1; + Read_Last := Read_Buffer'Last; + end Reset; + +begin + Ada.Text_IO.Put_Line ("ZLib " & ZLib.Version); + + loop + for Level in ZLib.Compression_Level'Range loop + + Ada.Text_IO.Put ("Level =" + & ZLib.Compression_Level'Image (Level)); + + -- Deflate using generic instantiation. 
+ + ZLib.Deflate_Init + (Filter, + Level, + Header => Header); + + Reset; + + Ada.Text_IO.Put + (Stream_Element_Offset'Image (File_Size) & " ->"); + + loop + declare + Buffer : Stream_Element_Array (1 .. 1024); + Last : Stream_Element_Offset; + begin + Read (Filter, Buffer, Last); + + Pack_Size := Pack_Size + Last - Buffer'First + 1; + + exit when Last < Buffer'Last; + end; + end loop; + + Ada.Text_IO.Put_Line (Stream_Element_Offset'Image (Pack_Size)); + + ZLib.Close (Filter); + end loop; + + exit when not Continuous; + + File_Size := File_Size + 1; + end loop; +end Read; ADDED compat/zlib/contrib/ada/readme.txt Index: compat/zlib/contrib/ada/readme.txt ================================================================== --- compat/zlib/contrib/ada/readme.txt +++ compat/zlib/contrib/ada/readme.txt @@ -0,0 +1,65 @@ + ZLib for Ada thick binding (ZLib.Ada) + Release 1.3 + +ZLib.Ada is a thick binding interface to the popular ZLib data +compression library, available at http://www.gzip.org/zlib/. +It provides Ada-style access to the ZLib C library. + + + Here are the main changes since ZLib.Ada 1.2: + +- Attension: ZLib.Read generic routine have a initialization requirement + for Read_Last parameter now. It is a bit incompartible with previous version, + but extends functionality, we could use new parameters Allow_Read_Some and + Flush now. + +- Added Is_Open routines to ZLib and ZLib.Streams packages. + +- Add pragma Assert to check Stream_Element is 8 bit. + +- Fix extraction to buffer with exact known decompressed size. Error reported by + Steve Sangwine. + +- Fix definition of ULong (changed to unsigned_long), fix regression on 64 bits + computers. Patch provided by Pascal Obry. + +- Add Status_Error exception definition. + +- Add pragma Assertion that Ada.Streams.Stream_Element size is 8 bit. + + + How to build ZLib.Ada under GNAT + +You should have the ZLib library already build on your computer, before +building ZLib.Ada. Make the directory of ZLib.Ada sources current and +issue the command: + + gnatmake test -largs -L -lz + +Or use the GNAT project file build for GNAT 3.15 or later: + + gnatmake -Pzlib.gpr -L + + + How to build ZLib.Ada under Aonix ObjectAda for Win32 7.2.2 + +1. Make a project with all *.ads and *.adb files from the distribution. +2. Build the libz.a library from the ZLib C sources. +3. Rename libz.a to z.lib. +4. Add the library z.lib to the project. +5. Add the libc.lib library from the ObjectAda distribution to the project. +6. Build the executable using test.adb as a main procedure. + + + How to use ZLib.Ada + +The source files test.adb and read.adb are small demo programs that show +the main functionality of ZLib.Ada. + +The routines from the package specifications are commented. + + +Homepage: http://zlib-ada.sourceforge.net/ +Author: Dmitriy Anisimkov + +Contributors: Pascal Obry , Steve Sangwine ADDED compat/zlib/contrib/ada/test.adb Index: compat/zlib/contrib/ada/test.adb ================================================================== --- compat/zlib/contrib/ada/test.adb +++ compat/zlib/contrib/ada/test.adb @@ -0,0 +1,463 @@ +---------------------------------------------------------------- +-- ZLib for Ada thick binding. -- +-- -- +-- Copyright (C) 2002-2003 Dmitriy Anisimkov -- +-- -- +-- Open source license information is in the zlib.ads file. -- +---------------------------------------------------------------- + +-- $Id: test.adb,v 1.17 2003/08/12 12:13:30 vagul Exp $ + +-- The program has a few aims. +-- 1. Test ZLib.Ada95 thick binding functionality. 
+-- 2. Show the example of use main functionality of the ZLib.Ada95 binding. +-- 3. Build this program automatically compile all ZLib.Ada95 packages under +-- GNAT Ada95 compiler. + +with ZLib.Streams; +with Ada.Streams.Stream_IO; +with Ada.Numerics.Discrete_Random; + +with Ada.Text_IO; + +with Ada.Calendar; + +procedure Test is + + use Ada.Streams; + use Stream_IO; + + ------------------------------------ + -- Test configuration parameters -- + ------------------------------------ + + File_Size : Count := 100_000; + Continuous : constant Boolean := False; + + Header : constant ZLib.Header_Type := ZLib.Default; + -- ZLib.None; + -- ZLib.Auto; + -- ZLib.GZip; + -- Do not use Header other then Default in ZLib versions 1.1.4 + -- and older. + + Strategy : constant ZLib.Strategy_Type := ZLib.Default_Strategy; + Init_Random : constant := 10; + + -- End -- + + In_File_Name : constant String := "testzlib.in"; + -- Name of the input file + + Z_File_Name : constant String := "testzlib.zlb"; + -- Name of the compressed file. + + Out_File_Name : constant String := "testzlib.out"; + -- Name of the decompressed file. + + File_In : File_Type; + File_Out : File_Type; + File_Back : File_Type; + File_Z : ZLib.Streams.Stream_Type; + + Filter : ZLib.Filter_Type; + + Time_Stamp : Ada.Calendar.Time; + + procedure Generate_File; + -- Generate file of spetsified size with some random data. + -- The random data is repeatable, for the good compression. + + procedure Compare_Streams + (Left, Right : in out Root_Stream_Type'Class); + -- The procedure compearing data in 2 streams. + -- It is for compare data before and after compression/decompression. + + procedure Compare_Files (Left, Right : String); + -- Compare files. Based on the Compare_Streams. + + procedure Copy_Streams + (Source, Target : in out Root_Stream_Type'Class; + Buffer_Size : in Stream_Element_Offset := 1024); + -- Copying data from one stream to another. It is for test stream + -- interface of the library. + + procedure Data_In + (Item : out Stream_Element_Array; + Last : out Stream_Element_Offset); + -- this procedure is for generic instantiation of + -- ZLib.Generic_Translate. + -- reading data from the File_In. + + procedure Data_Out (Item : in Stream_Element_Array); + -- this procedure is for generic instantiation of + -- ZLib.Generic_Translate. + -- writing data to the File_Out. + + procedure Stamp; + -- Store the timestamp to the local variable. + + procedure Print_Statistic (Msg : String; Data_Size : ZLib.Count); + -- Print the time statistic with the message. + + procedure Translate is new ZLib.Generic_Translate + (Data_In => Data_In, + Data_Out => Data_Out); + -- This procedure is moving data from File_In to File_Out + -- with compression or decompression, depend on initialization of + -- Filter parameter. + + ------------------- + -- Compare_Files -- + ------------------- + + procedure Compare_Files (Left, Right : String) is + Left_File, Right_File : File_Type; + begin + Open (Left_File, In_File, Left); + Open (Right_File, In_File, Right); + Compare_Streams (Stream (Left_File).all, Stream (Right_File).all); + Close (Left_File); + Close (Right_File); + end Compare_Files; + + --------------------- + -- Compare_Streams -- + --------------------- + + procedure Compare_Streams + (Left, Right : in out Ada.Streams.Root_Stream_Type'Class) + is + Left_Buffer, Right_Buffer : Stream_Element_Array (0 .. 
16#FFF#); + Left_Last, Right_Last : Stream_Element_Offset; + begin + loop + Read (Left, Left_Buffer, Left_Last); + Read (Right, Right_Buffer, Right_Last); + + if Left_Last /= Right_Last then + Ada.Text_IO.Put_Line ("Compare error :" + & Stream_Element_Offset'Image (Left_Last) + & " /= " + & Stream_Element_Offset'Image (Right_Last)); + + raise Constraint_Error; + + elsif Left_Buffer (0 .. Left_Last) + /= Right_Buffer (0 .. Right_Last) + then + Ada.Text_IO.Put_Line ("ERROR: IN and OUT files is not equal."); + raise Constraint_Error; + + end if; + + exit when Left_Last < Left_Buffer'Last; + end loop; + end Compare_Streams; + + ------------------ + -- Copy_Streams -- + ------------------ + + procedure Copy_Streams + (Source, Target : in out Ada.Streams.Root_Stream_Type'Class; + Buffer_Size : in Stream_Element_Offset := 1024) + is + Buffer : Stream_Element_Array (1 .. Buffer_Size); + Last : Stream_Element_Offset; + begin + loop + Read (Source, Buffer, Last); + Write (Target, Buffer (1 .. Last)); + + exit when Last < Buffer'Last; + end loop; + end Copy_Streams; + + ------------- + -- Data_In -- + ------------- + + procedure Data_In + (Item : out Stream_Element_Array; + Last : out Stream_Element_Offset) is + begin + Read (File_In, Item, Last); + end Data_In; + + -------------- + -- Data_Out -- + -------------- + + procedure Data_Out (Item : in Stream_Element_Array) is + begin + Write (File_Out, Item); + end Data_Out; + + ------------------- + -- Generate_File -- + ------------------- + + procedure Generate_File is + subtype Visible_Symbols is Stream_Element range 16#20# .. 16#7E#; + + package Random_Elements is + new Ada.Numerics.Discrete_Random (Visible_Symbols); + + Gen : Random_Elements.Generator; + Buffer : Stream_Element_Array := (1 .. 77 => 16#20#) & 10; + + Buffer_Count : constant Count := File_Size / Buffer'Length; + -- Number of same buffers in the packet. + + Density : constant Count := 30; -- from 0 to Buffer'Length - 2; + + procedure Fill_Buffer (J, D : in Count); + -- Change the part of the buffer. + + ----------------- + -- Fill_Buffer -- + ----------------- + + procedure Fill_Buffer (J, D : in Count) is + begin + for K in 0 .. D loop + Buffer + (Stream_Element_Offset ((J + K) mod (Buffer'Length - 1) + 1)) + := Random_Elements.Random (Gen); + + end loop; + end Fill_Buffer; + + begin + Random_Elements.Reset (Gen, Init_Random); + + Create (File_In, Out_File, In_File_Name); + + Fill_Buffer (1, Buffer'Length - 2); + + for J in 1 .. Buffer_Count loop + Write (File_In, Buffer); + + Fill_Buffer (J, Density); + end loop; + + -- fill remain size. + + Write + (File_In, + Buffer + (1 .. 
Stream_Element_Offset + (File_Size - Buffer'Length * Buffer_Count))); + + Flush (File_In); + Close (File_In); + end Generate_File; + + --------------------- + -- Print_Statistic -- + --------------------- + + procedure Print_Statistic (Msg : String; Data_Size : ZLib.Count) is + use Ada.Calendar; + use Ada.Text_IO; + + package Count_IO is new Integer_IO (ZLib.Count); + + Curr_Dur : Duration := Clock - Time_Stamp; + begin + Put (Msg); + + Set_Col (20); + Ada.Text_IO.Put ("size ="); + + Count_IO.Put + (Data_Size, + Width => Stream_IO.Count'Image (File_Size)'Length); + + Put_Line (" duration =" & Duration'Image (Curr_Dur)); + end Print_Statistic; + + ----------- + -- Stamp -- + ----------- + + procedure Stamp is + begin + Time_Stamp := Ada.Calendar.Clock; + end Stamp; + +begin + Ada.Text_IO.Put_Line ("ZLib " & ZLib.Version); + + loop + Generate_File; + + for Level in ZLib.Compression_Level'Range loop + + Ada.Text_IO.Put_Line ("Level =" + & ZLib.Compression_Level'Image (Level)); + + -- Test generic interface. + Open (File_In, In_File, In_File_Name); + Create (File_Out, Out_File, Z_File_Name); + + Stamp; + + -- Deflate using generic instantiation. + + ZLib.Deflate_Init + (Filter => Filter, + Level => Level, + Strategy => Strategy, + Header => Header); + + Translate (Filter); + Print_Statistic ("Generic compress", ZLib.Total_Out (Filter)); + ZLib.Close (Filter); + + Close (File_In); + Close (File_Out); + + Open (File_In, In_File, Z_File_Name); + Create (File_Out, Out_File, Out_File_Name); + + Stamp; + + -- Inflate using generic instantiation. + + ZLib.Inflate_Init (Filter, Header => Header); + + Translate (Filter); + Print_Statistic ("Generic decompress", ZLib.Total_Out (Filter)); + + ZLib.Close (Filter); + + Close (File_In); + Close (File_Out); + + Compare_Files (In_File_Name, Out_File_Name); + + -- Test stream interface. + + -- Compress to the back stream. + + Open (File_In, In_File, In_File_Name); + Create (File_Back, Out_File, Z_File_Name); + + Stamp; + + ZLib.Streams.Create + (Stream => File_Z, + Mode => ZLib.Streams.Out_Stream, + Back => ZLib.Streams.Stream_Access + (Stream (File_Back)), + Back_Compressed => True, + Level => Level, + Strategy => Strategy, + Header => Header); + + Copy_Streams + (Source => Stream (File_In).all, + Target => File_Z); + + -- Flushing internal buffers to the back stream. + + ZLib.Streams.Flush (File_Z, ZLib.Finish); + + Print_Statistic ("Write compress", + ZLib.Streams.Write_Total_Out (File_Z)); + + ZLib.Streams.Close (File_Z); + + Close (File_In); + Close (File_Back); + + -- Compare reading from original file and from + -- decompression stream. + + Open (File_In, In_File, In_File_Name); + Open (File_Back, In_File, Z_File_Name); + + ZLib.Streams.Create + (Stream => File_Z, + Mode => ZLib.Streams.In_Stream, + Back => ZLib.Streams.Stream_Access + (Stream (File_Back)), + Back_Compressed => True, + Header => Header); + + Stamp; + Compare_Streams (Stream (File_In).all, File_Z); + + Print_Statistic ("Read decompress", + ZLib.Streams.Read_Total_Out (File_Z)); + + ZLib.Streams.Close (File_Z); + Close (File_In); + Close (File_Back); + + -- Compress by reading from compression stream. 
+ + Open (File_Back, In_File, In_File_Name); + Create (File_Out, Out_File, Z_File_Name); + + ZLib.Streams.Create + (Stream => File_Z, + Mode => ZLib.Streams.In_Stream, + Back => ZLib.Streams.Stream_Access + (Stream (File_Back)), + Back_Compressed => False, + Level => Level, + Strategy => Strategy, + Header => Header); + + Stamp; + Copy_Streams + (Source => File_Z, + Target => Stream (File_Out).all); + + Print_Statistic ("Read compress", + ZLib.Streams.Read_Total_Out (File_Z)); + + ZLib.Streams.Close (File_Z); + + Close (File_Out); + Close (File_Back); + + -- Decompress to decompression stream. + + Open (File_In, In_File, Z_File_Name); + Create (File_Back, Out_File, Out_File_Name); + + ZLib.Streams.Create + (Stream => File_Z, + Mode => ZLib.Streams.Out_Stream, + Back => ZLib.Streams.Stream_Access + (Stream (File_Back)), + Back_Compressed => False, + Header => Header); + + Stamp; + + Copy_Streams + (Source => Stream (File_In).all, + Target => File_Z); + + Print_Statistic ("Write decompress", + ZLib.Streams.Write_Total_Out (File_Z)); + + ZLib.Streams.Close (File_Z); + Close (File_In); + Close (File_Back); + + Compare_Files (In_File_Name, Out_File_Name); + end loop; + + Ada.Text_IO.Put_Line (Count'Image (File_Size) & " Ok."); + + exit when not Continuous; + + File_Size := File_Size + 1; + end loop; +end Test; ADDED compat/zlib/contrib/ada/zlib-streams.adb Index: compat/zlib/contrib/ada/zlib-streams.adb ================================================================== --- compat/zlib/contrib/ada/zlib-streams.adb +++ compat/zlib/contrib/ada/zlib-streams.adb @@ -0,0 +1,225 @@ +---------------------------------------------------------------- +-- ZLib for Ada thick binding. -- +-- -- +-- Copyright (C) 2002-2003 Dmitriy Anisimkov -- +-- -- +-- Open source license information is in the zlib.ads file. -- +---------------------------------------------------------------- + +-- $Id: zlib-streams.adb,v 1.10 2004/05/31 10:53:40 vagul Exp $ + +with Ada.Unchecked_Deallocation; + +package body ZLib.Streams is + + ----------- + -- Close -- + ----------- + + procedure Close (Stream : in out Stream_Type) is + procedure Free is new Ada.Unchecked_Deallocation + (Stream_Element_Array, Buffer_Access); + begin + if Stream.Mode = Out_Stream or Stream.Mode = Duplex then + -- We should flush the data written by the writer. + + Flush (Stream, Finish); + + Close (Stream.Writer); + end if; + + if Stream.Mode = In_Stream or Stream.Mode = Duplex then + Close (Stream.Reader); + Free (Stream.Buffer); + end if; + end Close; + + ------------ + -- Create -- + ------------ + + procedure Create + (Stream : out Stream_Type; + Mode : in Stream_Mode; + Back : in Stream_Access; + Back_Compressed : in Boolean; + Level : in Compression_Level := Default_Compression; + Strategy : in Strategy_Type := Default_Strategy; + Header : in Header_Type := Default; + Read_Buffer_Size : in Ada.Streams.Stream_Element_Offset + := Default_Buffer_Size; + Write_Buffer_Size : in Ada.Streams.Stream_Element_Offset + := Default_Buffer_Size) + is + + subtype Buffer_Subtype is Stream_Element_Array (1 .. 
Read_Buffer_Size); + + procedure Init_Filter + (Filter : in out Filter_Type; + Compress : in Boolean); + + ----------------- + -- Init_Filter -- + ----------------- + + procedure Init_Filter + (Filter : in out Filter_Type; + Compress : in Boolean) is + begin + if Compress then + Deflate_Init + (Filter, Level, Strategy, Header => Header); + else + Inflate_Init (Filter, Header => Header); + end if; + end Init_Filter; + + begin + Stream.Back := Back; + Stream.Mode := Mode; + + if Mode = Out_Stream or Mode = Duplex then + Init_Filter (Stream.Writer, Back_Compressed); + Stream.Buffer_Size := Write_Buffer_Size; + else + Stream.Buffer_Size := 0; + end if; + + if Mode = In_Stream or Mode = Duplex then + Init_Filter (Stream.Reader, not Back_Compressed); + + Stream.Buffer := new Buffer_Subtype; + Stream.Rest_First := Stream.Buffer'Last + 1; + Stream.Rest_Last := Stream.Buffer'Last; + end if; + end Create; + + ----------- + -- Flush -- + ----------- + + procedure Flush + (Stream : in out Stream_Type; + Mode : in Flush_Mode := Sync_Flush) + is + Buffer : Stream_Element_Array (1 .. Stream.Buffer_Size); + Last : Stream_Element_Offset; + begin + loop + Flush (Stream.Writer, Buffer, Last, Mode); + + Ada.Streams.Write (Stream.Back.all, Buffer (1 .. Last)); + + exit when Last < Buffer'Last; + end loop; + end Flush; + + ------------- + -- Is_Open -- + ------------- + + function Is_Open (Stream : Stream_Type) return Boolean is + begin + return Is_Open (Stream.Reader) or else Is_Open (Stream.Writer); + end Is_Open; + + ---------- + -- Read -- + ---------- + + procedure Read + (Stream : in out Stream_Type; + Item : out Stream_Element_Array; + Last : out Stream_Element_Offset) + is + + procedure Read + (Item : out Stream_Element_Array; + Last : out Stream_Element_Offset); + + ---------- + -- Read -- + ---------- + + procedure Read + (Item : out Stream_Element_Array; + Last : out Stream_Element_Offset) is + begin + Ada.Streams.Read (Stream.Back.all, Item, Last); + end Read; + + procedure Read is new ZLib.Read + (Read => Read, + Buffer => Stream.Buffer.all, + Rest_First => Stream.Rest_First, + Rest_Last => Stream.Rest_Last); + + begin + Read (Stream.Reader, Item, Last); + end Read; + + ------------------- + -- Read_Total_In -- + ------------------- + + function Read_Total_In (Stream : in Stream_Type) return Count is + begin + return Total_In (Stream.Reader); + end Read_Total_In; + + -------------------- + -- Read_Total_Out -- + -------------------- + + function Read_Total_Out (Stream : in Stream_Type) return Count is + begin + return Total_Out (Stream.Reader); + end Read_Total_Out; + + ----------- + -- Write -- + ----------- + + procedure Write + (Stream : in out Stream_Type; + Item : in Stream_Element_Array) + is + + procedure Write (Item : in Stream_Element_Array); + + ----------- + -- Write -- + ----------- + + procedure Write (Item : in Stream_Element_Array) is + begin + Ada.Streams.Write (Stream.Back.all, Item); + end Write; + + procedure Write is new ZLib.Write + (Write => Write, + Buffer_Size => Stream.Buffer_Size); + + begin + Write (Stream.Writer, Item, No_Flush); + end Write; + + -------------------- + -- Write_Total_In -- + -------------------- + + function Write_Total_In (Stream : in Stream_Type) return Count is + begin + return Total_In (Stream.Writer); + end Write_Total_In; + + --------------------- + -- Write_Total_Out -- + --------------------- + + function Write_Total_Out (Stream : in Stream_Type) return Count is + begin + return Total_Out (Stream.Writer); + end Write_Total_Out; + +end 
ZLib.Streams; ADDED compat/zlib/contrib/ada/zlib-streams.ads Index: compat/zlib/contrib/ada/zlib-streams.ads ================================================================== --- compat/zlib/contrib/ada/zlib-streams.ads +++ compat/zlib/contrib/ada/zlib-streams.ads @@ -0,0 +1,114 @@ +---------------------------------------------------------------- +-- ZLib for Ada thick binding. -- +-- -- +-- Copyright (C) 2002-2003 Dmitriy Anisimkov -- +-- -- +-- Open source license information is in the zlib.ads file. -- +---------------------------------------------------------------- + +-- $Id: zlib-streams.ads,v 1.12 2004/05/31 10:53:40 vagul Exp $ + +package ZLib.Streams is + + type Stream_Mode is (In_Stream, Out_Stream, Duplex); + + type Stream_Access is access all Ada.Streams.Root_Stream_Type'Class; + + type Stream_Type is + new Ada.Streams.Root_Stream_Type with private; + + procedure Read + (Stream : in out Stream_Type; + Item : out Ada.Streams.Stream_Element_Array; + Last : out Ada.Streams.Stream_Element_Offset); + + procedure Write + (Stream : in out Stream_Type; + Item : in Ada.Streams.Stream_Element_Array); + + procedure Flush + (Stream : in out Stream_Type; + Mode : in Flush_Mode := Sync_Flush); + -- Flush the written data to the back stream, + -- all data placed to the compressor is flushing to the Back stream. + -- Should not be used untill necessary, becouse it is decreasing + -- compression. + + function Read_Total_In (Stream : in Stream_Type) return Count; + pragma Inline (Read_Total_In); + -- Return total number of bytes read from back stream so far. + + function Read_Total_Out (Stream : in Stream_Type) return Count; + pragma Inline (Read_Total_Out); + -- Return total number of bytes read so far. + + function Write_Total_In (Stream : in Stream_Type) return Count; + pragma Inline (Write_Total_In); + -- Return total number of bytes written so far. + + function Write_Total_Out (Stream : in Stream_Type) return Count; + pragma Inline (Write_Total_Out); + -- Return total number of bytes written to the back stream. + + procedure Create + (Stream : out Stream_Type; + Mode : in Stream_Mode; + Back : in Stream_Access; + Back_Compressed : in Boolean; + Level : in Compression_Level := Default_Compression; + Strategy : in Strategy_Type := Default_Strategy; + Header : in Header_Type := Default; + Read_Buffer_Size : in Ada.Streams.Stream_Element_Offset + := Default_Buffer_Size; + Write_Buffer_Size : in Ada.Streams.Stream_Element_Offset + := Default_Buffer_Size); + -- Create the Comression/Decompression stream. + -- If mode is In_Stream then Write operation is disabled. + -- If mode is Out_Stream then Read operation is disabled. + + -- If Back_Compressed is true then + -- Data written to the Stream is compressing to the Back stream + -- and data read from the Stream is decompressed data from the Back stream. + + -- If Back_Compressed is false then + -- Data written to the Stream is decompressing to the Back stream + -- and data read from the Stream is compressed data from the Back stream. + + -- !!! When the Need_Header is False ZLib-Ada is using undocumented + -- ZLib 1.1.4 functionality to do not create/wait for ZLib headers. 
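+
+   --  A minimal usage sketch (illustrative only; the names File_Z,
+   --  File_In, File_Back and Copy_Streams are taken from test.adb in
+   --  this directory, which contains the complete, tested example).
+   --  To compress while writing, wrap an already-open back stream in
+   --  an Out_Stream with Back_Compressed => True, copy the data in,
+   --  then flush with Finish and close:
+   --
+   --     ZLib.Streams.Create
+   --       (Stream          => File_Z,
+   --        Mode            => ZLib.Streams.Out_Stream,
+   --        Back            => ZLib.Streams.Stream_Access
+   --                             (Stream (File_Back)),
+   --        Back_Compressed => True);
+   --     Copy_Streams (Source => Stream (File_In).all, Target => File_Z);
+   --     ZLib.Streams.Flush (File_Z, ZLib.Finish);
+   --     ZLib.Streams.Close (File_Z);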
+ + function Is_Open (Stream : Stream_Type) return Boolean; + + procedure Close (Stream : in out Stream_Type); + +private + + use Ada.Streams; + + type Buffer_Access is access all Stream_Element_Array; + + type Stream_Type + is new Root_Stream_Type with + record + Mode : Stream_Mode; + + Buffer : Buffer_Access; + Rest_First : Stream_Element_Offset; + Rest_Last : Stream_Element_Offset; + -- Buffer for Read operation. + -- We need to have this buffer in the record + -- becouse not all read data from back stream + -- could be processed during the read operation. + + Buffer_Size : Stream_Element_Offset; + -- Buffer size for write operation. + -- We do not need to have this buffer + -- in the record becouse all data could be + -- processed in the write operation. + + Back : Stream_Access; + Reader : Filter_Type; + Writer : Filter_Type; + end record; + +end ZLib.Streams; ADDED compat/zlib/contrib/ada/zlib-thin.adb Index: compat/zlib/contrib/ada/zlib-thin.adb ================================================================== --- compat/zlib/contrib/ada/zlib-thin.adb +++ compat/zlib/contrib/ada/zlib-thin.adb @@ -0,0 +1,141 @@ +---------------------------------------------------------------- +-- ZLib for Ada thick binding. -- +-- -- +-- Copyright (C) 2002-2003 Dmitriy Anisimkov -- +-- -- +-- Open source license information is in the zlib.ads file. -- +---------------------------------------------------------------- + +-- $Id: zlib-thin.adb,v 1.8 2003/12/14 18:27:31 vagul Exp $ + +package body ZLib.Thin is + + ZLIB_VERSION : constant Chars_Ptr := zlibVersion; + + Z_Stream_Size : constant Int := Z_Stream'Size / System.Storage_Unit; + + -------------- + -- Avail_In -- + -------------- + + function Avail_In (Strm : in Z_Stream) return UInt is + begin + return Strm.Avail_In; + end Avail_In; + + --------------- + -- Avail_Out -- + --------------- + + function Avail_Out (Strm : in Z_Stream) return UInt is + begin + return Strm.Avail_Out; + end Avail_Out; + + ------------------ + -- Deflate_Init -- + ------------------ + + function Deflate_Init + (strm : Z_Streamp; + level : Int; + method : Int; + windowBits : Int; + memLevel : Int; + strategy : Int) + return Int is + begin + return deflateInit2 + (strm, + level, + method, + windowBits, + memLevel, + strategy, + ZLIB_VERSION, + Z_Stream_Size); + end Deflate_Init; + + ------------------ + -- Inflate_Init -- + ------------------ + + function Inflate_Init (strm : Z_Streamp; windowBits : Int) return Int is + begin + return inflateInit2 (strm, windowBits, ZLIB_VERSION, Z_Stream_Size); + end Inflate_Init; + + ------------------------ + -- Last_Error_Message -- + ------------------------ + + function Last_Error_Message (Strm : in Z_Stream) return String is + use Interfaces.C.Strings; + begin + if Strm.msg = Null_Ptr then + return ""; + else + return Value (Strm.msg); + end if; + end Last_Error_Message; + + ------------ + -- Set_In -- + ------------ + + procedure Set_In + (Strm : in out Z_Stream; + Buffer : in Voidp; + Size : in UInt) is + begin + Strm.Next_In := Buffer; + Strm.Avail_In := Size; + end Set_In; + + ------------------ + -- Set_Mem_Func -- + ------------------ + + procedure Set_Mem_Func + (Strm : in out Z_Stream; + Opaque : in Voidp; + Alloc : in alloc_func; + Free : in free_func) is + begin + Strm.opaque := Opaque; + Strm.zalloc := Alloc; + Strm.zfree := Free; + end Set_Mem_Func; + + ------------- + -- Set_Out -- + ------------- + + procedure Set_Out + (Strm : in out Z_Stream; + Buffer : in Voidp; + Size : in UInt) is + begin + Strm.Next_Out := 
Buffer; + Strm.Avail_Out := Size; + end Set_Out; + + -------------- + -- Total_In -- + -------------- + + function Total_In (Strm : in Z_Stream) return ULong is + begin + return Strm.Total_In; + end Total_In; + + --------------- + -- Total_Out -- + --------------- + + function Total_Out (Strm : in Z_Stream) return ULong is + begin + return Strm.Total_Out; + end Total_Out; + +end ZLib.Thin; ADDED compat/zlib/contrib/ada/zlib-thin.ads Index: compat/zlib/contrib/ada/zlib-thin.ads ================================================================== --- compat/zlib/contrib/ada/zlib-thin.ads +++ compat/zlib/contrib/ada/zlib-thin.ads @@ -0,0 +1,450 @@ +---------------------------------------------------------------- +-- ZLib for Ada thick binding. -- +-- -- +-- Copyright (C) 2002-2003 Dmitriy Anisimkov -- +-- -- +-- Open source license information is in the zlib.ads file. -- +---------------------------------------------------------------- + +-- $Id: zlib-thin.ads,v 1.11 2004/07/23 06:33:11 vagul Exp $ + +with Interfaces.C.Strings; + +with System; + +private package ZLib.Thin is + + -- From zconf.h + + MAX_MEM_LEVEL : constant := 9; -- zconf.h:105 + -- zconf.h:105 + MAX_WBITS : constant := 15; -- zconf.h:115 + -- 32K LZ77 window + -- zconf.h:115 + SEEK_SET : constant := 8#0000#; -- zconf.h:244 + -- Seek from beginning of file. + -- zconf.h:244 + SEEK_CUR : constant := 1; -- zconf.h:245 + -- Seek from current position. + -- zconf.h:245 + SEEK_END : constant := 2; -- zconf.h:246 + -- Set file pointer to EOF plus "offset" + -- zconf.h:246 + + type Byte is new Interfaces.C.unsigned_char; -- 8 bits + -- zconf.h:214 + type UInt is new Interfaces.C.unsigned; -- 16 bits or more + -- zconf.h:216 + type Int is new Interfaces.C.int; + + type ULong is new Interfaces.C.unsigned_long; -- 32 bits or more + -- zconf.h:217 + subtype Chars_Ptr is Interfaces.C.Strings.chars_ptr; + + type ULong_Access is access ULong; + type Int_Access is access Int; + + subtype Voidp is System.Address; -- zconf.h:232 + + subtype Byte_Access is Voidp; + + Nul : constant Voidp := System.Null_Address; + -- end from zconf + + Z_NO_FLUSH : constant := 8#0000#; -- zlib.h:125 + -- zlib.h:125 + Z_PARTIAL_FLUSH : constant := 1; -- zlib.h:126 + -- will be removed, use + -- Z_SYNC_FLUSH instead + -- zlib.h:126 + Z_SYNC_FLUSH : constant := 2; -- zlib.h:127 + -- zlib.h:127 + Z_FULL_FLUSH : constant := 3; -- zlib.h:128 + -- zlib.h:128 + Z_FINISH : constant := 4; -- zlib.h:129 + -- zlib.h:129 + Z_OK : constant := 8#0000#; -- zlib.h:132 + -- zlib.h:132 + Z_STREAM_END : constant := 1; -- zlib.h:133 + -- zlib.h:133 + Z_NEED_DICT : constant := 2; -- zlib.h:134 + -- zlib.h:134 + Z_ERRNO : constant := -1; -- zlib.h:135 + -- zlib.h:135 + Z_STREAM_ERROR : constant := -2; -- zlib.h:136 + -- zlib.h:136 + Z_DATA_ERROR : constant := -3; -- zlib.h:137 + -- zlib.h:137 + Z_MEM_ERROR : constant := -4; -- zlib.h:138 + -- zlib.h:138 + Z_BUF_ERROR : constant := -5; -- zlib.h:139 + -- zlib.h:139 + Z_VERSION_ERROR : constant := -6; -- zlib.h:140 + -- zlib.h:140 + Z_NO_COMPRESSION : constant := 8#0000#; -- zlib.h:145 + -- zlib.h:145 + Z_BEST_SPEED : constant := 1; -- zlib.h:146 + -- zlib.h:146 + Z_BEST_COMPRESSION : constant := 9; -- zlib.h:147 + -- zlib.h:147 + Z_DEFAULT_COMPRESSION : constant := -1; -- zlib.h:148 + -- zlib.h:148 + Z_FILTERED : constant := 1; -- zlib.h:151 + -- zlib.h:151 + Z_HUFFMAN_ONLY : constant := 2; -- zlib.h:152 + -- zlib.h:152 + Z_DEFAULT_STRATEGY : constant := 8#0000#; -- zlib.h:153 + -- zlib.h:153 + Z_BINARY : constant := 8#0000#; -- 
zlib.h:156 + -- zlib.h:156 + Z_ASCII : constant := 1; -- zlib.h:157 + -- zlib.h:157 + Z_UNKNOWN : constant := 2; -- zlib.h:158 + -- zlib.h:158 + Z_DEFLATED : constant := 8; -- zlib.h:161 + -- zlib.h:161 + Z_NULL : constant := 8#0000#; -- zlib.h:164 + -- for initializing zalloc, zfree, opaque + -- zlib.h:164 + type gzFile is new Voidp; -- zlib.h:646 + + type Z_Stream is private; + + type Z_Streamp is access all Z_Stream; -- zlib.h:89 + + type alloc_func is access function + (Opaque : Voidp; + Items : UInt; + Size : UInt) + return Voidp; -- zlib.h:63 + + type free_func is access procedure (opaque : Voidp; address : Voidp); + + function zlibVersion return Chars_Ptr; + + function Deflate (strm : Z_Streamp; flush : Int) return Int; + + function DeflateEnd (strm : Z_Streamp) return Int; + + function Inflate (strm : Z_Streamp; flush : Int) return Int; + + function InflateEnd (strm : Z_Streamp) return Int; + + function deflateSetDictionary + (strm : Z_Streamp; + dictionary : Byte_Access; + dictLength : UInt) + return Int; + + function deflateCopy (dest : Z_Streamp; source : Z_Streamp) return Int; + -- zlib.h:478 + + function deflateReset (strm : Z_Streamp) return Int; -- zlib.h:495 + + function deflateParams + (strm : Z_Streamp; + level : Int; + strategy : Int) + return Int; -- zlib.h:506 + + function inflateSetDictionary + (strm : Z_Streamp; + dictionary : Byte_Access; + dictLength : UInt) + return Int; -- zlib.h:548 + + function inflateSync (strm : Z_Streamp) return Int; -- zlib.h:565 + + function inflateReset (strm : Z_Streamp) return Int; -- zlib.h:580 + + function compress + (dest : Byte_Access; + destLen : ULong_Access; + source : Byte_Access; + sourceLen : ULong) + return Int; -- zlib.h:601 + + function compress2 + (dest : Byte_Access; + destLen : ULong_Access; + source : Byte_Access; + sourceLen : ULong; + level : Int) + return Int; -- zlib.h:615 + + function uncompress + (dest : Byte_Access; + destLen : ULong_Access; + source : Byte_Access; + sourceLen : ULong) + return Int; + + function gzopen (path : Chars_Ptr; mode : Chars_Ptr) return gzFile; + + function gzdopen (fd : Int; mode : Chars_Ptr) return gzFile; + + function gzsetparams + (file : gzFile; + level : Int; + strategy : Int) + return Int; + + function gzread + (file : gzFile; + buf : Voidp; + len : UInt) + return Int; + + function gzwrite + (file : in gzFile; + buf : in Voidp; + len : in UInt) + return Int; + + function gzprintf (file : in gzFile; format : in Chars_Ptr) return Int; + + function gzputs (file : in gzFile; s : in Chars_Ptr) return Int; + + function gzgets + (file : gzFile; + buf : Chars_Ptr; + len : Int) + return Chars_Ptr; + + function gzputc (file : gzFile; char : Int) return Int; + + function gzgetc (file : gzFile) return Int; + + function gzflush (file : gzFile; flush : Int) return Int; + + function gzseek + (file : gzFile; + offset : Int; + whence : Int) + return Int; + + function gzrewind (file : gzFile) return Int; + + function gztell (file : gzFile) return Int; + + function gzeof (file : gzFile) return Int; + + function gzclose (file : gzFile) return Int; + + function gzerror (file : gzFile; errnum : Int_Access) return Chars_Ptr; + + function adler32 + (adler : ULong; + buf : Byte_Access; + len : UInt) + return ULong; + + function crc32 + (crc : ULong; + buf : Byte_Access; + len : UInt) + return ULong; + + function deflateInit + (strm : Z_Streamp; + level : Int; + version : Chars_Ptr; + stream_size : Int) + return Int; + + function deflateInit2 + (strm : Z_Streamp; + level : Int; + method : Int; + windowBits 
: Int; + memLevel : Int; + strategy : Int; + version : Chars_Ptr; + stream_size : Int) + return Int; + + function Deflate_Init + (strm : Z_Streamp; + level : Int; + method : Int; + windowBits : Int; + memLevel : Int; + strategy : Int) + return Int; + pragma Inline (Deflate_Init); + + function inflateInit + (strm : Z_Streamp; + version : Chars_Ptr; + stream_size : Int) + return Int; + + function inflateInit2 + (strm : in Z_Streamp; + windowBits : in Int; + version : in Chars_Ptr; + stream_size : in Int) + return Int; + + function inflateBackInit + (strm : in Z_Streamp; + windowBits : in Int; + window : in Byte_Access; + version : in Chars_Ptr; + stream_size : in Int) + return Int; + -- Size of window have to be 2**windowBits. + + function Inflate_Init (strm : Z_Streamp; windowBits : Int) return Int; + pragma Inline (Inflate_Init); + + function zError (err : Int) return Chars_Ptr; + + function inflateSyncPoint (z : Z_Streamp) return Int; + + function get_crc_table return ULong_Access; + + -- Interface to the available fields of the z_stream structure. + -- The application must update next_in and avail_in when avail_in has + -- dropped to zero. It must update next_out and avail_out when avail_out + -- has dropped to zero. The application must initialize zalloc, zfree and + -- opaque before calling the init function. + + procedure Set_In + (Strm : in out Z_Stream; + Buffer : in Voidp; + Size : in UInt); + pragma Inline (Set_In); + + procedure Set_Out + (Strm : in out Z_Stream; + Buffer : in Voidp; + Size : in UInt); + pragma Inline (Set_Out); + + procedure Set_Mem_Func + (Strm : in out Z_Stream; + Opaque : in Voidp; + Alloc : in alloc_func; + Free : in free_func); + pragma Inline (Set_Mem_Func); + + function Last_Error_Message (Strm : in Z_Stream) return String; + pragma Inline (Last_Error_Message); + + function Avail_Out (Strm : in Z_Stream) return UInt; + pragma Inline (Avail_Out); + + function Avail_In (Strm : in Z_Stream) return UInt; + pragma Inline (Avail_In); + + function Total_In (Strm : in Z_Stream) return ULong; + pragma Inline (Total_In); + + function Total_Out (Strm : in Z_Stream) return ULong; + pragma Inline (Total_Out); + + function inflateCopy + (dest : in Z_Streamp; + Source : in Z_Streamp) + return Int; + + function compressBound (Source_Len : in ULong) return ULong; + + function deflateBound + (Strm : in Z_Streamp; + Source_Len : in ULong) + return ULong; + + function gzungetc (C : in Int; File : in gzFile) return Int; + + function zlibCompileFlags return ULong; + +private + + type Z_Stream is record -- zlib.h:68 + Next_In : Voidp := Nul; -- next input byte + Avail_In : UInt := 0; -- number of bytes available at next_in + Total_In : ULong := 0; -- total nb of input bytes read so far + Next_Out : Voidp := Nul; -- next output byte should be put there + Avail_Out : UInt := 0; -- remaining free space at next_out + Total_Out : ULong := 0; -- total nb of bytes output so far + msg : Chars_Ptr; -- last error message, NULL if no error + state : Voidp; -- not visible by applications + zalloc : alloc_func := null; -- used to allocate the internal state + zfree : free_func := null; -- used to free the internal state + opaque : Voidp; -- private data object passed to + -- zalloc and zfree + data_type : Int; -- best guess about the data type: + -- ascii or binary + adler : ULong; -- adler32 value of the uncompressed + -- data + reserved : ULong; -- reserved for future use + end record; + + pragma Convention (C, Z_Stream); + + pragma Import (C, zlibVersion, "zlibVersion"); + pragma Import 
(C, Deflate, "deflate"); + pragma Import (C, DeflateEnd, "deflateEnd"); + pragma Import (C, Inflate, "inflate"); + pragma Import (C, InflateEnd, "inflateEnd"); + pragma Import (C, deflateSetDictionary, "deflateSetDictionary"); + pragma Import (C, deflateCopy, "deflateCopy"); + pragma Import (C, deflateReset, "deflateReset"); + pragma Import (C, deflateParams, "deflateParams"); + pragma Import (C, inflateSetDictionary, "inflateSetDictionary"); + pragma Import (C, inflateSync, "inflateSync"); + pragma Import (C, inflateReset, "inflateReset"); + pragma Import (C, compress, "compress"); + pragma Import (C, compress2, "compress2"); + pragma Import (C, uncompress, "uncompress"); + pragma Import (C, gzopen, "gzopen"); + pragma Import (C, gzdopen, "gzdopen"); + pragma Import (C, gzsetparams, "gzsetparams"); + pragma Import (C, gzread, "gzread"); + pragma Import (C, gzwrite, "gzwrite"); + pragma Import (C, gzprintf, "gzprintf"); + pragma Import (C, gzputs, "gzputs"); + pragma Import (C, gzgets, "gzgets"); + pragma Import (C, gzputc, "gzputc"); + pragma Import (C, gzgetc, "gzgetc"); + pragma Import (C, gzflush, "gzflush"); + pragma Import (C, gzseek, "gzseek"); + pragma Import (C, gzrewind, "gzrewind"); + pragma Import (C, gztell, "gztell"); + pragma Import (C, gzeof, "gzeof"); + pragma Import (C, gzclose, "gzclose"); + pragma Import (C, gzerror, "gzerror"); + pragma Import (C, adler32, "adler32"); + pragma Import (C, crc32, "crc32"); + pragma Import (C, deflateInit, "deflateInit_"); + pragma Import (C, inflateInit, "inflateInit_"); + pragma Import (C, deflateInit2, "deflateInit2_"); + pragma Import (C, inflateInit2, "inflateInit2_"); + pragma Import (C, zError, "zError"); + pragma Import (C, inflateSyncPoint, "inflateSyncPoint"); + pragma Import (C, get_crc_table, "get_crc_table"); + + -- since zlib 1.2.0: + + pragma Import (C, inflateCopy, "inflateCopy"); + pragma Import (C, compressBound, "compressBound"); + pragma Import (C, deflateBound, "deflateBound"); + pragma Import (C, gzungetc, "gzungetc"); + pragma Import (C, zlibCompileFlags, "zlibCompileFlags"); + + pragma Import (C, inflateBackInit, "inflateBackInit_"); + + -- I stopped binding the inflateBack routines, becouse realize that + -- it does not support zlib and gzip headers for now, and have no + -- symmetric deflateBack routines. + -- ZLib-Ada is symmetric regarding deflate/inflate data transformation + -- and has a similar generic callback interface for the + -- deflate/inflate transformation based on the regular Deflate/Inflate + -- routines. + + -- pragma Import (C, inflateBack, "inflateBack"); + -- pragma Import (C, inflateBackEnd, "inflateBackEnd"); + +end ZLib.Thin; ADDED compat/zlib/contrib/ada/zlib.adb Index: compat/zlib/contrib/ada/zlib.adb ================================================================== --- compat/zlib/contrib/ada/zlib.adb +++ compat/zlib/contrib/ada/zlib.adb @@ -0,0 +1,701 @@ +---------------------------------------------------------------- +-- ZLib for Ada thick binding. -- +-- -- +-- Copyright (C) 2002-2004 Dmitriy Anisimkov -- +-- -- +-- Open source license information is in the zlib.ads file. 
-- +---------------------------------------------------------------- + +-- $Id: zlib.adb,v 1.31 2004/09/06 06:53:19 vagul Exp $ + +with Ada.Exceptions; +with Ada.Unchecked_Conversion; +with Ada.Unchecked_Deallocation; + +with Interfaces.C.Strings; + +with ZLib.Thin; + +package body ZLib is + + use type Thin.Int; + + type Z_Stream is new Thin.Z_Stream; + + type Return_Code_Enum is + (OK, + STREAM_END, + NEED_DICT, + ERRNO, + STREAM_ERROR, + DATA_ERROR, + MEM_ERROR, + BUF_ERROR, + VERSION_ERROR); + + type Flate_Step_Function is access + function (Strm : in Thin.Z_Streamp; Flush : in Thin.Int) return Thin.Int; + pragma Convention (C, Flate_Step_Function); + + type Flate_End_Function is access + function (Ctrm : in Thin.Z_Streamp) return Thin.Int; + pragma Convention (C, Flate_End_Function); + + type Flate_Type is record + Step : Flate_Step_Function; + Done : Flate_End_Function; + end record; + + subtype Footer_Array is Stream_Element_Array (1 .. 8); + + Simple_GZip_Header : constant Stream_Element_Array (1 .. 10) + := (16#1f#, 16#8b#, -- Magic header + 16#08#, -- Z_DEFLATED + 16#00#, -- Flags + 16#00#, 16#00#, 16#00#, 16#00#, -- Time + 16#00#, -- XFlags + 16#03# -- OS code + ); + -- The simplest gzip header is not for informational, but just for + -- gzip format compatibility. + -- Note that some code below is using assumption + -- Simple_GZip_Header'Last > Footer_Array'Last, so do not make + -- Simple_GZip_Header'Last <= Footer_Array'Last. + + Return_Code : constant array (Thin.Int range <>) of Return_Code_Enum + := (0 => OK, + 1 => STREAM_END, + 2 => NEED_DICT, + -1 => ERRNO, + -2 => STREAM_ERROR, + -3 => DATA_ERROR, + -4 => MEM_ERROR, + -5 => BUF_ERROR, + -6 => VERSION_ERROR); + + Flate : constant array (Boolean) of Flate_Type + := (True => (Step => Thin.Deflate'Access, + Done => Thin.DeflateEnd'Access), + False => (Step => Thin.Inflate'Access, + Done => Thin.InflateEnd'Access)); + + Flush_Finish : constant array (Boolean) of Flush_Mode + := (True => Finish, False => No_Flush); + + procedure Raise_Error (Stream : in Z_Stream); + pragma Inline (Raise_Error); + + procedure Raise_Error (Message : in String); + pragma Inline (Raise_Error); + + procedure Check_Error (Stream : in Z_Stream; Code : in Thin.Int); + + procedure Free is new Ada.Unchecked_Deallocation + (Z_Stream, Z_Stream_Access); + + function To_Thin_Access is new Ada.Unchecked_Conversion + (Z_Stream_Access, Thin.Z_Streamp); + + procedure Translate_GZip + (Filter : in out Filter_Type; + In_Data : in Ada.Streams.Stream_Element_Array; + In_Last : out Ada.Streams.Stream_Element_Offset; + Out_Data : out Ada.Streams.Stream_Element_Array; + Out_Last : out Ada.Streams.Stream_Element_Offset; + Flush : in Flush_Mode); + -- Separate translate routine for make gzip header. + + procedure Translate_Auto + (Filter : in out Filter_Type; + In_Data : in Ada.Streams.Stream_Element_Array; + In_Last : out Ada.Streams.Stream_Element_Offset; + Out_Data : out Ada.Streams.Stream_Element_Array; + Out_Last : out Ada.Streams.Stream_Element_Offset; + Flush : in Flush_Mode); + -- translate routine without additional headers. 
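   --  For reference, a sketch of the gzip member that Translate_GZip below
   --  assembles around the raw deflate data (sizes follow from
   --  Simple_GZip_Header and Footer_Array declared above):
   --
   --     +--------------------+----------------+--------------+--------------+
   --     | Simple_GZip_Header | deflate stream | CRC32        | ISIZE        |
   --     | (10 octets)        |                | (4, LSB 1st) | (4, LSB 1st) |
   --     +--------------------+----------------+--------------+--------------+
   --
   --  CRC32 is computed over the uncompressed input and ISIZE is its total
   --  length; both are emitted least significant byte first by Put_32 in
   --  Translate_GZip.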
+ + ----------------- + -- Check_Error -- + ----------------- + + procedure Check_Error (Stream : in Z_Stream; Code : in Thin.Int) is + use type Thin.Int; + begin + if Code /= Thin.Z_OK then + Raise_Error + (Return_Code_Enum'Image (Return_Code (Code)) + & ": " & Last_Error_Message (Stream)); + end if; + end Check_Error; + + ----------- + -- Close -- + ----------- + + procedure Close + (Filter : in out Filter_Type; + Ignore_Error : in Boolean := False) + is + Code : Thin.Int; + begin + if not Ignore_Error and then not Is_Open (Filter) then + raise Status_Error; + end if; + + Code := Flate (Filter.Compression).Done (To_Thin_Access (Filter.Strm)); + + if Ignore_Error or else Code = Thin.Z_OK then + Free (Filter.Strm); + else + declare + Error_Message : constant String + := Last_Error_Message (Filter.Strm.all); + begin + Free (Filter.Strm); + Ada.Exceptions.Raise_Exception + (ZLib_Error'Identity, + Return_Code_Enum'Image (Return_Code (Code)) + & ": " & Error_Message); + end; + end if; + end Close; + + ----------- + -- CRC32 -- + ----------- + + function CRC32 + (CRC : in Unsigned_32; + Data : in Ada.Streams.Stream_Element_Array) + return Unsigned_32 + is + use Thin; + begin + return Unsigned_32 (crc32 (ULong (CRC), + Data'Address, + Data'Length)); + end CRC32; + + procedure CRC32 + (CRC : in out Unsigned_32; + Data : in Ada.Streams.Stream_Element_Array) is + begin + CRC := CRC32 (CRC, Data); + end CRC32; + + ------------------ + -- Deflate_Init -- + ------------------ + + procedure Deflate_Init + (Filter : in out Filter_Type; + Level : in Compression_Level := Default_Compression; + Strategy : in Strategy_Type := Default_Strategy; + Method : in Compression_Method := Deflated; + Window_Bits : in Window_Bits_Type := Default_Window_Bits; + Memory_Level : in Memory_Level_Type := Default_Memory_Level; + Header : in Header_Type := Default) + is + use type Thin.Int; + Win_Bits : Thin.Int := Thin.Int (Window_Bits); + begin + if Is_Open (Filter) then + raise Status_Error; + end if; + + -- We allow ZLib to make header only in case of default header type. + -- Otherwise we would either do header by ourselfs, or do not do + -- header at all. + + if Header = None or else Header = GZip then + Win_Bits := -Win_Bits; + end if; + + -- For the GZip CRC calculation and make headers. + + if Header = GZip then + Filter.CRC := 0; + Filter.Offset := Simple_GZip_Header'First; + else + Filter.Offset := Simple_GZip_Header'Last + 1; + end if; + + Filter.Strm := new Z_Stream; + Filter.Compression := True; + Filter.Stream_End := False; + Filter.Header := Header; + + if Thin.Deflate_Init + (To_Thin_Access (Filter.Strm), + Level => Thin.Int (Level), + method => Thin.Int (Method), + windowBits => Win_Bits, + memLevel => Thin.Int (Memory_Level), + strategy => Thin.Int (Strategy)) /= Thin.Z_OK + then + Raise_Error (Filter.Strm.all); + end if; + end Deflate_Init; + + ----------- + -- Flush -- + ----------- + + procedure Flush + (Filter : in out Filter_Type; + Out_Data : out Ada.Streams.Stream_Element_Array; + Out_Last : out Ada.Streams.Stream_Element_Offset; + Flush : in Flush_Mode) + is + No_Data : Stream_Element_Array := (1 .. 
0 => 0); + Last : Stream_Element_Offset; + begin + Translate (Filter, No_Data, Last, Out_Data, Out_Last, Flush); + end Flush; + + ----------------------- + -- Generic_Translate -- + ----------------------- + + procedure Generic_Translate + (Filter : in out ZLib.Filter_Type; + In_Buffer_Size : in Integer := Default_Buffer_Size; + Out_Buffer_Size : in Integer := Default_Buffer_Size) + is + In_Buffer : Stream_Element_Array + (1 .. Stream_Element_Offset (In_Buffer_Size)); + Out_Buffer : Stream_Element_Array + (1 .. Stream_Element_Offset (Out_Buffer_Size)); + Last : Stream_Element_Offset; + In_Last : Stream_Element_Offset; + In_First : Stream_Element_Offset; + Out_Last : Stream_Element_Offset; + begin + Main : loop + Data_In (In_Buffer, Last); + + In_First := In_Buffer'First; + + loop + Translate + (Filter => Filter, + In_Data => In_Buffer (In_First .. Last), + In_Last => In_Last, + Out_Data => Out_Buffer, + Out_Last => Out_Last, + Flush => Flush_Finish (Last < In_Buffer'First)); + + if Out_Buffer'First <= Out_Last then + Data_Out (Out_Buffer (Out_Buffer'First .. Out_Last)); + end if; + + exit Main when Stream_End (Filter); + + -- The end of in buffer. + + exit when In_Last = Last; + + In_First := In_Last + 1; + end loop; + end loop Main; + + end Generic_Translate; + + ------------------ + -- Inflate_Init -- + ------------------ + + procedure Inflate_Init + (Filter : in out Filter_Type; + Window_Bits : in Window_Bits_Type := Default_Window_Bits; + Header : in Header_Type := Default) + is + use type Thin.Int; + Win_Bits : Thin.Int := Thin.Int (Window_Bits); + + procedure Check_Version; + -- Check the latest header types compatibility. + + procedure Check_Version is + begin + if Version <= "1.1.4" then + Raise_Error + ("Inflate header type " & Header_Type'Image (Header) + & " incompatible with ZLib version " & Version); + end if; + end Check_Version; + + begin + if Is_Open (Filter) then + raise Status_Error; + end if; + + case Header is + when None => + Check_Version; + + -- Inflate data without headers determined + -- by negative Win_Bits. + + Win_Bits := -Win_Bits; + when GZip => + Check_Version; + + -- Inflate gzip data defined by flag 16. + + Win_Bits := Win_Bits + 16; + when Auto => + Check_Version; + + -- Inflate with automatic detection + -- of gzip or native header defined by flag 32. 
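            --  (To summarise the windowBits convention of the underlying
            --  inflateInit2 call: a negated value requests a raw deflate
            --  stream, adding 16 requests a gzip wrapper only, and adding
            --  32 requests automatic zlib/gzip detection.)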
+ + Win_Bits := Win_Bits + 32; + when Default => null; + end case; + + Filter.Strm := new Z_Stream; + Filter.Compression := False; + Filter.Stream_End := False; + Filter.Header := Header; + + if Thin.Inflate_Init + (To_Thin_Access (Filter.Strm), Win_Bits) /= Thin.Z_OK + then + Raise_Error (Filter.Strm.all); + end if; + end Inflate_Init; + + ------------- + -- Is_Open -- + ------------- + + function Is_Open (Filter : in Filter_Type) return Boolean is + begin + return Filter.Strm /= null; + end Is_Open; + + ----------------- + -- Raise_Error -- + ----------------- + + procedure Raise_Error (Message : in String) is + begin + Ada.Exceptions.Raise_Exception (ZLib_Error'Identity, Message); + end Raise_Error; + + procedure Raise_Error (Stream : in Z_Stream) is + begin + Raise_Error (Last_Error_Message (Stream)); + end Raise_Error; + + ---------- + -- Read -- + ---------- + + procedure Read + (Filter : in out Filter_Type; + Item : out Ada.Streams.Stream_Element_Array; + Last : out Ada.Streams.Stream_Element_Offset; + Flush : in Flush_Mode := No_Flush) + is + In_Last : Stream_Element_Offset; + Item_First : Ada.Streams.Stream_Element_Offset := Item'First; + V_Flush : Flush_Mode := Flush; + + begin + pragma Assert (Rest_First in Buffer'First .. Buffer'Last + 1); + pragma Assert (Rest_Last in Buffer'First - 1 .. Buffer'Last); + + loop + if Rest_Last = Buffer'First - 1 then + V_Flush := Finish; + + elsif Rest_First > Rest_Last then + Read (Buffer, Rest_Last); + Rest_First := Buffer'First; + + if Rest_Last < Buffer'First then + V_Flush := Finish; + end if; + end if; + + Translate + (Filter => Filter, + In_Data => Buffer (Rest_First .. Rest_Last), + In_Last => In_Last, + Out_Data => Item (Item_First .. Item'Last), + Out_Last => Last, + Flush => V_Flush); + + Rest_First := In_Last + 1; + + exit when Stream_End (Filter) + or else Last = Item'Last + or else (Last >= Item'First and then Allow_Read_Some); + + Item_First := Last + 1; + end loop; + end Read; + + ---------------- + -- Stream_End -- + ---------------- + + function Stream_End (Filter : in Filter_Type) return Boolean is + begin + if Filter.Header = GZip and Filter.Compression then + return Filter.Stream_End + and then Filter.Offset = Footer_Array'Last + 1; + else + return Filter.Stream_End; + end if; + end Stream_End; + + -------------- + -- Total_In -- + -------------- + + function Total_In (Filter : in Filter_Type) return Count is + begin + return Count (Thin.Total_In (To_Thin_Access (Filter.Strm).all)); + end Total_In; + + --------------- + -- Total_Out -- + --------------- + + function Total_Out (Filter : in Filter_Type) return Count is + begin + return Count (Thin.Total_Out (To_Thin_Access (Filter.Strm).all)); + end Total_Out; + + --------------- + -- Translate -- + --------------- + + procedure Translate + (Filter : in out Filter_Type; + In_Data : in Ada.Streams.Stream_Element_Array; + In_Last : out Ada.Streams.Stream_Element_Offset; + Out_Data : out Ada.Streams.Stream_Element_Array; + Out_Last : out Ada.Streams.Stream_Element_Offset; + Flush : in Flush_Mode) is + begin + if Filter.Header = GZip and then Filter.Compression then + Translate_GZip + (Filter => Filter, + In_Data => In_Data, + In_Last => In_Last, + Out_Data => Out_Data, + Out_Last => Out_Last, + Flush => Flush); + else + Translate_Auto + (Filter => Filter, + In_Data => In_Data, + In_Last => In_Last, + Out_Data => Out_Data, + Out_Last => Out_Last, + Flush => Flush); + end if; + end Translate; + + -------------------- + -- Translate_Auto -- + -------------------- + + procedure 
Translate_Auto + (Filter : in out Filter_Type; + In_Data : in Ada.Streams.Stream_Element_Array; + In_Last : out Ada.Streams.Stream_Element_Offset; + Out_Data : out Ada.Streams.Stream_Element_Array; + Out_Last : out Ada.Streams.Stream_Element_Offset; + Flush : in Flush_Mode) + is + use type Thin.Int; + Code : Thin.Int; + + begin + if not Is_Open (Filter) then + raise Status_Error; + end if; + + if Out_Data'Length = 0 and then In_Data'Length = 0 then + raise Constraint_Error; + end if; + + Set_Out (Filter.Strm.all, Out_Data'Address, Out_Data'Length); + Set_In (Filter.Strm.all, In_Data'Address, In_Data'Length); + + Code := Flate (Filter.Compression).Step + (To_Thin_Access (Filter.Strm), + Thin.Int (Flush)); + + if Code = Thin.Z_STREAM_END then + Filter.Stream_End := True; + else + Check_Error (Filter.Strm.all, Code); + end if; + + In_Last := In_Data'Last + - Stream_Element_Offset (Avail_In (Filter.Strm.all)); + Out_Last := Out_Data'Last + - Stream_Element_Offset (Avail_Out (Filter.Strm.all)); + end Translate_Auto; + + -------------------- + -- Translate_GZip -- + -------------------- + + procedure Translate_GZip + (Filter : in out Filter_Type; + In_Data : in Ada.Streams.Stream_Element_Array; + In_Last : out Ada.Streams.Stream_Element_Offset; + Out_Data : out Ada.Streams.Stream_Element_Array; + Out_Last : out Ada.Streams.Stream_Element_Offset; + Flush : in Flush_Mode) + is + Out_First : Stream_Element_Offset; + + procedure Add_Data (Data : in Stream_Element_Array); + -- Add data to stream from the Filter.Offset till necessary, + -- used for add gzip headr/footer. + + procedure Put_32 + (Item : in out Stream_Element_Array; + Data : in Unsigned_32); + pragma Inline (Put_32); + + -------------- + -- Add_Data -- + -------------- + + procedure Add_Data (Data : in Stream_Element_Array) is + Data_First : Stream_Element_Offset renames Filter.Offset; + Data_Last : Stream_Element_Offset; + Data_Len : Stream_Element_Offset; -- -1 + Out_Len : Stream_Element_Offset; -- -1 + begin + Out_First := Out_Last + 1; + + if Data_First > Data'Last then + return; + end if; + + Data_Len := Data'Last - Data_First; + Out_Len := Out_Data'Last - Out_First; + + if Data_Len <= Out_Len then + Out_Last := Out_First + Data_Len; + Data_Last := Data'Last; + else + Out_Last := Out_Data'Last; + Data_Last := Data_First + Out_Len; + end if; + + Out_Data (Out_First .. Out_Last) := Data (Data_First .. Data_Last); + + Data_First := Data_Last + 1; + Out_First := Out_Last + 1; + end Add_Data; + + ------------ + -- Put_32 -- + ------------ + + procedure Put_32 + (Item : in out Stream_Element_Array; + Data : in Unsigned_32) + is + D : Unsigned_32 := Data; + begin + for J in Item'First .. Item'First + 3 loop + Item (J) := Stream_Element (D and 16#FF#); + D := Shift_Right (D, 8); + end loop; + end Put_32; + + begin + Out_Last := Out_Data'First - 1; + + if not Filter.Stream_End then + Add_Data (Simple_GZip_Header); + + Translate_Auto + (Filter => Filter, + In_Data => In_Data, + In_Last => In_Last, + Out_Data => Out_Data (Out_First .. Out_Data'Last), + Out_Last => Out_Last, + Flush => Flush); + + CRC32 (Filter.CRC, In_Data (In_Data'First .. In_Last)); + end if; + + if Filter.Stream_End and then Out_Last <= Out_Data'Last then + -- This detection method would work only when + -- Simple_GZip_Header'Last > Footer_Array'Last + + if Filter.Offset = Simple_GZip_Header'Last + 1 then + Filter.Offset := Footer_Array'First; + end if; + + declare + Footer : Footer_Array; + begin + Put_32 (Footer, Filter.CRC); + Put_32 (Footer (Footer'First + 4 .. 
Footer'Last), + Unsigned_32 (Total_In (Filter))); + Add_Data (Footer); + end; + end if; + end Translate_GZip; + + ------------- + -- Version -- + ------------- + + function Version return String is + begin + return Interfaces.C.Strings.Value (Thin.zlibVersion); + end Version; + + ----------- + -- Write -- + ----------- + + procedure Write + (Filter : in out Filter_Type; + Item : in Ada.Streams.Stream_Element_Array; + Flush : in Flush_Mode := No_Flush) + is + Buffer : Stream_Element_Array (1 .. Buffer_Size); + In_Last : Stream_Element_Offset; + Out_Last : Stream_Element_Offset; + In_First : Stream_Element_Offset := Item'First; + begin + if Item'Length = 0 and Flush = No_Flush then + return; + end if; + + loop + Translate + (Filter => Filter, + In_Data => Item (In_First .. Item'Last), + In_Last => In_Last, + Out_Data => Buffer, + Out_Last => Out_Last, + Flush => Flush); + + if Out_Last >= Buffer'First then + Write (Buffer (1 .. Out_Last)); + end if; + + exit when In_Last = Item'Last or Stream_End (Filter); + + In_First := In_Last + 1; + end loop; + end Write; + +end ZLib; ADDED compat/zlib/contrib/ada/zlib.ads Index: compat/zlib/contrib/ada/zlib.ads ================================================================== --- compat/zlib/contrib/ada/zlib.ads +++ compat/zlib/contrib/ada/zlib.ads @@ -0,0 +1,328 @@ +------------------------------------------------------------------------------ +-- ZLib for Ada thick binding. -- +-- -- +-- Copyright (C) 2002-2004 Dmitriy Anisimkov -- +-- -- +-- This library is free software; you can redistribute it and/or modify -- +-- it under the terms of the GNU General Public License as published by -- +-- the Free Software Foundation; either version 2 of the License, or (at -- +-- your option) any later version. -- +-- -- +-- This library is distributed in the hope that it will be useful, but -- +-- WITHOUT ANY WARRANTY; without even the implied warranty of -- +-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -- +-- General Public License for more details. -- +-- -- +-- You should have received a copy of the GNU General Public License -- +-- along with this library; if not, write to the Free Software Foundation, -- +-- Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. -- +-- -- +-- As a special exception, if other files instantiate generics from this -- +-- unit, or you link this unit with other files to produce an executable, -- +-- this unit does not by itself cause the resulting executable to be -- +-- covered by the GNU General Public License. This exception does not -- +-- however invalidate any other reasons why the executable file might be -- +-- covered by the GNU Public License. -- +------------------------------------------------------------------------------ + +-- $Id: zlib.ads,v 1.26 2004/09/06 06:53:19 vagul Exp $ + +with Ada.Streams; + +with Interfaces; + +package ZLib is + + ZLib_Error : exception; + Status_Error : exception; + + type Compression_Level is new Integer range -1 .. 9; + + type Flush_Mode is private; + + type Compression_Method is private; + + type Window_Bits_Type is new Integer range 8 .. 15; + + type Memory_Level_Type is new Integer range 1 .. 9; + + type Unsigned_32 is new Interfaces.Unsigned_32; + + type Strategy_Type is private; + + type Header_Type is (None, Auto, Default, GZip); + -- Header type usage have a some limitation for inflate. + -- See comment for Inflate_Init. 
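   --  As used throughout this binding:
   --     None    - raw deflate data, no zlib or gzip wrapper;
   --     Auto    - wrapper (zlib or gzip) detected automatically, inflate only;
   --     Default - the ordinary zlib wrapper;
   --     GZip    - a gzip wrapper.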
+ + subtype Count is Ada.Streams.Stream_Element_Count; + + Default_Memory_Level : constant Memory_Level_Type := 8; + Default_Window_Bits : constant Window_Bits_Type := 15; + + ---------------------------------- + -- Compression method constants -- + ---------------------------------- + + Deflated : constant Compression_Method; + -- Only one method allowed in this ZLib version + + --------------------------------- + -- Compression level constants -- + --------------------------------- + + No_Compression : constant Compression_Level := 0; + Best_Speed : constant Compression_Level := 1; + Best_Compression : constant Compression_Level := 9; + Default_Compression : constant Compression_Level := -1; + + -------------------------- + -- Flush mode constants -- + -------------------------- + + No_Flush : constant Flush_Mode; + -- Regular way for compression, no flush + + Partial_Flush : constant Flush_Mode; + -- Will be removed, use Z_SYNC_FLUSH instead + + Sync_Flush : constant Flush_Mode; + -- All pending output is flushed to the output buffer and the output + -- is aligned on a byte boundary, so that the decompressor can get all + -- input data available so far. (In particular avail_in is zero after the + -- call if enough output space has been provided before the call.) + -- Flushing may degrade compression for some compression algorithms and so + -- it should be used only when necessary. + + Block_Flush : constant Flush_Mode; + -- Z_BLOCK requests that inflate() stop + -- if and when it get to the next deflate block boundary. When decoding the + -- zlib or gzip format, this will cause inflate() to return immediately + -- after the header and before the first block. When doing a raw inflate, + -- inflate() will go ahead and process the first block, and will return + -- when it gets to the end of that block, or when it runs out of data. + + Full_Flush : constant Flush_Mode; + -- All output is flushed as with SYNC_FLUSH, and the compression state + -- is reset so that decompression can restart from this point if previous + -- compressed data has been damaged or if random access is desired. Using + -- Full_Flush too often can seriously degrade the compression. + + Finish : constant Flush_Mode; + -- Just for tell the compressor that input data is complete. + + ------------------------------------ + -- Compression strategy constants -- + ------------------------------------ + + -- RLE stategy could be used only in version 1.2.0 and later. + + Filtered : constant Strategy_Type; + Huffman_Only : constant Strategy_Type; + RLE : constant Strategy_Type; + Default_Strategy : constant Strategy_Type; + + Default_Buffer_Size : constant := 4096; + + type Filter_Type is tagged limited private; + -- The filter is for compression and for decompression. + -- The usage of the type is depend of its initialization. + + function Version return String; + pragma Inline (Version); + -- Return string representation of the ZLib version. + + procedure Deflate_Init + (Filter : in out Filter_Type; + Level : in Compression_Level := Default_Compression; + Strategy : in Strategy_Type := Default_Strategy; + Method : in Compression_Method := Deflated; + Window_Bits : in Window_Bits_Type := Default_Window_Bits; + Memory_Level : in Memory_Level_Type := Default_Memory_Level; + Header : in Header_Type := Default); + -- Compressor initialization. + -- When Header parameter is Auto or Default, then default zlib header + -- would be provided for compressed data. 
+ -- When Header is GZip, then gzip header would be set instead of + -- default header. + -- When Header is None, no header would be set for compressed data. + + procedure Inflate_Init + (Filter : in out Filter_Type; + Window_Bits : in Window_Bits_Type := Default_Window_Bits; + Header : in Header_Type := Default); + -- Decompressor initialization. + -- Default header type mean that ZLib default header is expecting in the + -- input compressed stream. + -- Header type None mean that no header is expecting in the input stream. + -- GZip header type mean that GZip header is expecting in the + -- input compressed stream. + -- Auto header type mean that header type (GZip or Native) would be + -- detected automatically in the input stream. + -- Note that header types parameter values None, GZip and Auto are + -- supported for inflate routine only in ZLib versions 1.2.0.2 and later. + -- Deflate_Init is supporting all header types. + + function Is_Open (Filter : in Filter_Type) return Boolean; + pragma Inline (Is_Open); + -- Is the filter opened for compression or decompression. + + procedure Close + (Filter : in out Filter_Type; + Ignore_Error : in Boolean := False); + -- Closing the compression or decompressor. + -- If stream is closing before the complete and Ignore_Error is False, + -- The exception would be raised. + + generic + with procedure Data_In + (Item : out Ada.Streams.Stream_Element_Array; + Last : out Ada.Streams.Stream_Element_Offset); + with procedure Data_Out + (Item : in Ada.Streams.Stream_Element_Array); + procedure Generic_Translate + (Filter : in out Filter_Type; + In_Buffer_Size : in Integer := Default_Buffer_Size; + Out_Buffer_Size : in Integer := Default_Buffer_Size); + -- Compress/decompress data fetch from Data_In routine and pass the result + -- to the Data_Out routine. User should provide Data_In and Data_Out + -- for compression/decompression data flow. + -- Compression or decompression depend on Filter initialization. + + function Total_In (Filter : in Filter_Type) return Count; + pragma Inline (Total_In); + -- Returns total number of input bytes read so far + + function Total_Out (Filter : in Filter_Type) return Count; + pragma Inline (Total_Out); + -- Returns total number of bytes output so far + + function CRC32 + (CRC : in Unsigned_32; + Data : in Ada.Streams.Stream_Element_Array) + return Unsigned_32; + pragma Inline (CRC32); + -- Compute CRC32, it could be necessary for make gzip format + + procedure CRC32 + (CRC : in out Unsigned_32; + Data : in Ada.Streams.Stream_Element_Array); + pragma Inline (CRC32); + -- Compute CRC32, it could be necessary for make gzip format + + ------------------------------------------------- + -- Below is more complex low level routines. -- + ------------------------------------------------- + + procedure Translate + (Filter : in out Filter_Type; + In_Data : in Ada.Streams.Stream_Element_Array; + In_Last : out Ada.Streams.Stream_Element_Offset; + Out_Data : out Ada.Streams.Stream_Element_Array; + Out_Last : out Ada.Streams.Stream_Element_Offset; + Flush : in Flush_Mode); + -- Compress/decompress the In_Data buffer and place the result into + -- Out_Data. In_Last is the index of last element from In_Data accepted by + -- the Filter. Out_Last is the last element of the received data from + -- Filter. To tell the filter that incoming data are complete put the + -- Flush parameter to Finish. + + function Stream_End (Filter : in Filter_Type) return Boolean; + pragma Inline (Stream_End); + -- Return the true when the stream is complete. 
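   --  A minimal compression sketch built from the routines above, assuming
   --  Filter, Input, Output, First, In_Last and Out_Last are declared by the
   --  caller and the "consume" step is user code:
   --
   --     Deflate_Init (Filter, Level => Best_Compression, Header => GZip);
   --     loop
   --        Translate
   --          (Filter,
   --           In_Data  => Input (First .. Input'Last),
   --           In_Last  => In_Last,
   --           Out_Data => Output,
   --           Out_Last => Out_Last,
   --           Flush    => Finish);
   --        --  consume Output (Output'First .. Out_Last) here
   --        exit when Stream_End (Filter);
   --        First := In_Last + 1;
   --     end loop;
   --     Close (Filter);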
+ + procedure Flush + (Filter : in out Filter_Type; + Out_Data : out Ada.Streams.Stream_Element_Array; + Out_Last : out Ada.Streams.Stream_Element_Offset; + Flush : in Flush_Mode); + pragma Inline (Flush); + -- Flushing the data from the compressor. + + generic + with procedure Write + (Item : in Ada.Streams.Stream_Element_Array); + -- User should provide this routine for accept + -- compressed/decompressed data. + + Buffer_Size : in Ada.Streams.Stream_Element_Offset + := Default_Buffer_Size; + -- Buffer size for Write user routine. + + procedure Write + (Filter : in out Filter_Type; + Item : in Ada.Streams.Stream_Element_Array; + Flush : in Flush_Mode := No_Flush); + -- Compress/Decompress data from Item to the generic parameter procedure + -- Write. Output buffer size could be set in Buffer_Size generic parameter. + + generic + with procedure Read + (Item : out Ada.Streams.Stream_Element_Array; + Last : out Ada.Streams.Stream_Element_Offset); + -- User should provide data for compression/decompression + -- thru this routine. + + Buffer : in out Ada.Streams.Stream_Element_Array; + -- Buffer for keep remaining data from the previous + -- back read. + + Rest_First, Rest_Last : in out Ada.Streams.Stream_Element_Offset; + -- Rest_First have to be initialized to Buffer'Last + 1 + -- Rest_Last have to be initialized to Buffer'Last + -- before usage. + + Allow_Read_Some : in Boolean := False; + -- Is it allowed to return Last < Item'Last before end of data. + + procedure Read + (Filter : in out Filter_Type; + Item : out Ada.Streams.Stream_Element_Array; + Last : out Ada.Streams.Stream_Element_Offset; + Flush : in Flush_Mode := No_Flush); + -- Compress/Decompress data from generic parameter procedure Read to the + -- Item. User should provide Buffer and initialized Rest_First, Rest_Last + -- indicators. If Allow_Read_Some is True, Read routines could return + -- Last < Item'Last only at end of stream. + +private + + use Ada.Streams; + + pragma Assert (Ada.Streams.Stream_Element'Size = 8); + pragma Assert (Ada.Streams.Stream_Element'Modulus = 2**8); + + type Flush_Mode is new Integer range 0 .. 5; + + type Compression_Method is new Integer range 8 .. 8; + + type Strategy_Type is new Integer range 0 .. 3; + + No_Flush : constant Flush_Mode := 0; + Partial_Flush : constant Flush_Mode := 1; + Sync_Flush : constant Flush_Mode := 2; + Full_Flush : constant Flush_Mode := 3; + Finish : constant Flush_Mode := 4; + Block_Flush : constant Flush_Mode := 5; + + Filtered : constant Strategy_Type := 1; + Huffman_Only : constant Strategy_Type := 2; + RLE : constant Strategy_Type := 3; + Default_Strategy : constant Strategy_Type := 0; + + Deflated : constant Compression_Method := 8; + + type Z_Stream; + + type Z_Stream_Access is access all Z_Stream; + + type Filter_Type is tagged limited record + Strm : Z_Stream_Access; + Compression : Boolean; + Stream_End : Boolean; + Header : Header_Type; + CRC : Unsigned_32; + Offset : Stream_Element_Offset; + -- Offset for gzip header/footer output. 
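      --  (Offset advances through the gzip header while it is being
      --  emitted and through the 8 byte footer afterwards, see
      --  Translate_GZip in the package body; a value one past the last
      --  index means that part is complete.)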
+ end record; + +end ZLib; ADDED compat/zlib/contrib/ada/zlib.gpr Index: compat/zlib/contrib/ada/zlib.gpr ================================================================== --- compat/zlib/contrib/ada/zlib.gpr +++ compat/zlib/contrib/ada/zlib.gpr @@ -0,0 +1,20 @@ +project Zlib is + + for Languages use ("Ada"); + for Source_Dirs use ("."); + for Object_Dir use "."; + for Main use ("test.adb", "mtest.adb", "read.adb", "buffer_demo"); + + package Compiler is + for Default_Switches ("ada") use ("-gnatwcfilopru", "-gnatVcdfimorst", "-gnatyabcefhiklmnoprst"); + end Compiler; + + package Linker is + for Default_Switches ("ada") use ("-lz"); + end Linker; + + package Builder is + for Default_Switches ("ada") use ("-s", "-gnatQ"); + end Builder; + +end Zlib; ADDED compat/zlib/contrib/amd64/amd64-match.S Index: compat/zlib/contrib/amd64/amd64-match.S ================================================================== --- compat/zlib/contrib/amd64/amd64-match.S +++ compat/zlib/contrib/amd64/amd64-match.S @@ -0,0 +1,452 @@ +/* + * match.S -- optimized version of longest_match() + * based on the similar work by Gilles Vollant, and Brian Raiter, written 1998 + * + * This is free software; you can redistribute it and/or modify it + * under the terms of the BSD License. Use by owners of Che Guevarra + * parafernalia is prohibited, where possible, and highly discouraged + * elsewhere. + */ + +#ifndef NO_UNDERLINE +# define match_init _match_init +# define longest_match _longest_match +#endif + +#define scanend ebx +#define scanendw bx +#define chainlenwmask edx /* high word: current chain len low word: s->wmask */ +#define curmatch rsi +#define curmatchd esi +#define windowbestlen r8 +#define scanalign r9 +#define scanalignd r9d +#define window r10 +#define bestlen r11 +#define bestlend r11d +#define scanstart r12d +#define scanstartw r12w +#define scan r13 +#define nicematch r14d +#define limit r15 +#define limitd r15d +#define prev rcx + +/* + * The 258 is a "magic number, not a parameter -- changing it + * breaks the hell loose + */ +#define MAX_MATCH (258) +#define MIN_MATCH (3) +#define MIN_LOOKAHEAD (MAX_MATCH + MIN_MATCH + 1) +#define MAX_MATCH_8 ((MAX_MATCH + 7) & ~7) + +/* stack frame offsets */ +#define LocalVarsSize (112) +#define _chainlenwmask ( 8-LocalVarsSize)(%rsp) +#define _windowbestlen (16-LocalVarsSize)(%rsp) +#define save_r14 (24-LocalVarsSize)(%rsp) +#define save_rsi (32-LocalVarsSize)(%rsp) +#define save_rbx (40-LocalVarsSize)(%rsp) +#define save_r12 (56-LocalVarsSize)(%rsp) +#define save_r13 (64-LocalVarsSize)(%rsp) +#define save_r15 (80-LocalVarsSize)(%rsp) + + +.globl match_init, longest_match + +/* + * On AMD64 the first argument of a function (in our case -- the pointer to + * deflate_state structure) is passed in %rdi, hence our offsets below are + * all off of that. 
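 *
 * (Under the System V AMD64 calling convention assumed throughout this
 * file, the two arguments of longest_match() therefore arrive as the
 * deflate_state pointer in %rdi and cur_match in %rsi.)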
+ */ + +/* you can check the structure offset by running + +#include +#include +#include "deflate.h" + +void print_depl() +{ +deflate_state ds; +deflate_state *s=&ds; +printf("size pointer=%u\n",(int)sizeof(void*)); + +printf("#define dsWSize (%3u)(%%rdi)\n",(int)(((char*)&(s->w_size))-((char*)s))); +printf("#define dsWMask (%3u)(%%rdi)\n",(int)(((char*)&(s->w_mask))-((char*)s))); +printf("#define dsWindow (%3u)(%%rdi)\n",(int)(((char*)&(s->window))-((char*)s))); +printf("#define dsPrev (%3u)(%%rdi)\n",(int)(((char*)&(s->prev))-((char*)s))); +printf("#define dsMatchLen (%3u)(%%rdi)\n",(int)(((char*)&(s->match_length))-((char*)s))); +printf("#define dsPrevMatch (%3u)(%%rdi)\n",(int)(((char*)&(s->prev_match))-((char*)s))); +printf("#define dsStrStart (%3u)(%%rdi)\n",(int)(((char*)&(s->strstart))-((char*)s))); +printf("#define dsMatchStart (%3u)(%%rdi)\n",(int)(((char*)&(s->match_start))-((char*)s))); +printf("#define dsLookahead (%3u)(%%rdi)\n",(int)(((char*)&(s->lookahead))-((char*)s))); +printf("#define dsPrevLen (%3u)(%%rdi)\n",(int)(((char*)&(s->prev_length))-((char*)s))); +printf("#define dsMaxChainLen (%3u)(%%rdi)\n",(int)(((char*)&(s->max_chain_length))-((char*)s))); +printf("#define dsGoodMatch (%3u)(%%rdi)\n",(int)(((char*)&(s->good_match))-((char*)s))); +printf("#define dsNiceMatch (%3u)(%%rdi)\n",(int)(((char*)&(s->nice_match))-((char*)s))); +} + +*/ + + +/* + to compile for XCode 3.2 on MacOSX x86_64 + - run "gcc -g -c -DXCODE_MAC_X64_STRUCTURE amd64-match.S" + */ + + +#ifndef CURRENT_LINX_XCODE_MAC_X64_STRUCTURE +#define dsWSize ( 68)(%rdi) +#define dsWMask ( 76)(%rdi) +#define dsWindow ( 80)(%rdi) +#define dsPrev ( 96)(%rdi) +#define dsMatchLen (144)(%rdi) +#define dsPrevMatch (148)(%rdi) +#define dsStrStart (156)(%rdi) +#define dsMatchStart (160)(%rdi) +#define dsLookahead (164)(%rdi) +#define dsPrevLen (168)(%rdi) +#define dsMaxChainLen (172)(%rdi) +#define dsGoodMatch (188)(%rdi) +#define dsNiceMatch (192)(%rdi) + +#else + +#ifndef STRUCT_OFFSET +# define STRUCT_OFFSET (0) +#endif + + +#define dsWSize ( 56 + STRUCT_OFFSET)(%rdi) +#define dsWMask ( 64 + STRUCT_OFFSET)(%rdi) +#define dsWindow ( 72 + STRUCT_OFFSET)(%rdi) +#define dsPrev ( 88 + STRUCT_OFFSET)(%rdi) +#define dsMatchLen (136 + STRUCT_OFFSET)(%rdi) +#define dsPrevMatch (140 + STRUCT_OFFSET)(%rdi) +#define dsStrStart (148 + STRUCT_OFFSET)(%rdi) +#define dsMatchStart (152 + STRUCT_OFFSET)(%rdi) +#define dsLookahead (156 + STRUCT_OFFSET)(%rdi) +#define dsPrevLen (160 + STRUCT_OFFSET)(%rdi) +#define dsMaxChainLen (164 + STRUCT_OFFSET)(%rdi) +#define dsGoodMatch (180 + STRUCT_OFFSET)(%rdi) +#define dsNiceMatch (184 + STRUCT_OFFSET)(%rdi) + +#endif + + + + +.text + +/* uInt longest_match(deflate_state *deflatestate, IPos curmatch) */ + +longest_match: +/* + * Retrieve the function arguments. %curmatch will hold cur_match + * throughout the entire function (passed via rsi on amd64). 
+ * rdi will hold the pointer to the deflate_state (first arg on amd64) + */ + mov %rsi, save_rsi + mov %rbx, save_rbx + mov %r12, save_r12 + mov %r13, save_r13 + mov %r14, save_r14 + mov %r15, save_r15 + +/* uInt wmask = s->w_mask; */ +/* unsigned chain_length = s->max_chain_length; */ +/* if (s->prev_length >= s->good_match) { */ +/* chain_length >>= 2; */ +/* } */ + + movl dsPrevLen, %eax + movl dsGoodMatch, %ebx + cmpl %ebx, %eax + movl dsWMask, %eax + movl dsMaxChainLen, %chainlenwmask + jl LastMatchGood + shrl $2, %chainlenwmask +LastMatchGood: + +/* chainlen is decremented once beforehand so that the function can */ +/* use the sign flag instead of the zero flag for the exit test. */ +/* It is then shifted into the high word, to make room for the wmask */ +/* value, which it will always accompany. */ + + decl %chainlenwmask + shll $16, %chainlenwmask + orl %eax, %chainlenwmask + +/* if ((uInt)nice_match > s->lookahead) nice_match = s->lookahead; */ + + movl dsNiceMatch, %eax + movl dsLookahead, %ebx + cmpl %eax, %ebx + jl LookaheadLess + movl %eax, %ebx +LookaheadLess: movl %ebx, %nicematch + +/* register Bytef *scan = s->window + s->strstart; */ + + mov dsWindow, %window + movl dsStrStart, %limitd + lea (%limit, %window), %scan + +/* Determine how many bytes the scan ptr is off from being */ +/* dword-aligned. */ + + mov %scan, %scanalign + negl %scanalignd + andl $3, %scanalignd + +/* IPos limit = s->strstart > (IPos)MAX_DIST(s) ? */ +/* s->strstart - (IPos)MAX_DIST(s) : NIL; */ + + movl dsWSize, %eax + subl $MIN_LOOKAHEAD, %eax + xorl %ecx, %ecx + subl %eax, %limitd + cmovng %ecx, %limitd + +/* int best_len = s->prev_length; */ + + movl dsPrevLen, %bestlend + +/* Store the sum of s->window + best_len in %windowbestlen locally, and in memory. */ + + lea (%window, %bestlen), %windowbestlen + mov %windowbestlen, _windowbestlen + +/* register ush scan_start = *(ushf*)scan; */ +/* register ush scan_end = *(ushf*)(scan+best_len-1); */ +/* Posf *prev = s->prev; */ + + movzwl (%scan), %scanstart + movzwl -1(%scan, %bestlen), %scanend + mov dsPrev, %prev + +/* Jump into the main loop. */ + + movl %chainlenwmask, _chainlenwmask + jmp LoopEntry + +.balign 16 + +/* do { + * match = s->window + cur_match; + * if (*(ushf*)(match+best_len-1) != scan_end || + * *(ushf*)match != scan_start) continue; + * [...] + * } while ((cur_match = prev[cur_match & wmask]) > limit + * && --chain_length != 0); + * + * Here is the inner loop of the function. The function will spend the + * majority of its time in this loop, and majority of that time will + * be spent in the first ten instructions. + */ +LookupLoop: + andl %chainlenwmask, %curmatchd + movzwl (%prev, %curmatch, 2), %curmatchd + cmpl %limitd, %curmatchd + jbe LeaveNow + subl $0x00010000, %chainlenwmask + js LeaveNow +LoopEntry: cmpw -1(%windowbestlen, %curmatch), %scanendw + jne LookupLoop + cmpw %scanstartw, (%window, %curmatch) + jne LookupLoop + +/* Store the current value of chainlen. */ + movl %chainlenwmask, _chainlenwmask + +/* %scan is the string under scrutiny, and %prev to the string we */ +/* are hoping to match it up with. In actuality, %esi and %edi are */ +/* both pointed (MAX_MATCH_8 - scanalign) bytes ahead, and %edx is */ +/* initialized to -(MAX_MATCH_8 - scanalign). */ + + mov $(-MAX_MATCH_8), %rdx + lea (%curmatch, %window), %windowbestlen + lea MAX_MATCH_8(%windowbestlen, %scanalign), %windowbestlen + lea MAX_MATCH_8(%scan, %scanalign), %prev + +/* the prefetching below makes very little difference... 
*/ + prefetcht1 (%windowbestlen, %rdx) + prefetcht1 (%prev, %rdx) + +/* + * Test the strings for equality, 8 bytes at a time. At the end, + * adjust %rdx so that it is offset to the exact byte that mismatched. + * + * It should be confessed that this loop usually does not represent + * much of the total running time. Replacing it with a more + * straightforward "rep cmpsb" would not drastically degrade + * performance -- unrolling it, for example, makes no difference. + */ + +#undef USE_SSE /* works, but is 6-7% slower, than non-SSE... */ + +LoopCmps: +#ifdef USE_SSE + /* Preload the SSE registers */ + movdqu (%windowbestlen, %rdx), %xmm1 + movdqu (%prev, %rdx), %xmm2 + pcmpeqb %xmm2, %xmm1 + movdqu 16(%windowbestlen, %rdx), %xmm3 + movdqu 16(%prev, %rdx), %xmm4 + pcmpeqb %xmm4, %xmm3 + movdqu 32(%windowbestlen, %rdx), %xmm5 + movdqu 32(%prev, %rdx), %xmm6 + pcmpeqb %xmm6, %xmm5 + movdqu 48(%windowbestlen, %rdx), %xmm7 + movdqu 48(%prev, %rdx), %xmm8 + pcmpeqb %xmm8, %xmm7 + + /* Check the comparisions' results */ + pmovmskb %xmm1, %rax + notw %ax + bsfw %ax, %ax + jnz LeaveLoopCmps + + /* this is the only iteration of the loop with a possibility of having + incremented rdx by 0x108 (each loop iteration add 16*4 = 0x40 + and (0x40*4)+8=0x108 */ + add $8, %rdx + jz LenMaximum + add $8, %rdx + + + pmovmskb %xmm3, %rax + notw %ax + bsfw %ax, %ax + jnz LeaveLoopCmps + + + add $16, %rdx + + + pmovmskb %xmm5, %rax + notw %ax + bsfw %ax, %ax + jnz LeaveLoopCmps + + add $16, %rdx + + + pmovmskb %xmm7, %rax + notw %ax + bsfw %ax, %ax + jnz LeaveLoopCmps + + add $16, %rdx + + jmp LoopCmps +LeaveLoopCmps: add %rax, %rdx +#else + mov (%windowbestlen, %rdx), %rax + xor (%prev, %rdx), %rax + jnz LeaveLoopCmps + + mov 8(%windowbestlen, %rdx), %rax + xor 8(%prev, %rdx), %rax + jnz LeaveLoopCmps8 + + mov 16(%windowbestlen, %rdx), %rax + xor 16(%prev, %rdx), %rax + jnz LeaveLoopCmps16 + + add $24, %rdx + jnz LoopCmps + jmp LenMaximum +# if 0 +/* + * This three-liner is tantalizingly simple, but bsf is a slow instruction, + * and the complicated alternative down below is quite a bit faster. Sad... + */ + +LeaveLoopCmps: bsf %rax, %rax /* find the first non-zero bit */ + shrl $3, %eax /* divide by 8 to get the byte */ + add %rax, %rdx +# else +LeaveLoopCmps16: + add $8, %rdx +LeaveLoopCmps8: + add $8, %rdx +LeaveLoopCmps: testl $0xFFFFFFFF, %eax /* Check the first 4 bytes */ + jnz Check16 + add $4, %rdx + shr $32, %rax +Check16: testw $0xFFFF, %ax + jnz LenLower + add $2, %rdx + shrl $16, %eax +LenLower: subb $1, %al + adc $0, %rdx +# endif +#endif + +/* Calculate the length of the match. If it is longer than MAX_MATCH, */ +/* then automatically accept it as the best possible match and leave. */ + + lea (%prev, %rdx), %rax + sub %scan, %rax + cmpl $MAX_MATCH, %eax + jge LenMaximum + +/* If the length of the match is not longer than the best match we */ +/* have so far, then forget it and return to the lookup loop. 
*/ + + cmpl %bestlend, %eax + jg LongerMatch + mov _windowbestlen, %windowbestlen + mov dsPrev, %prev + movl _chainlenwmask, %edx + jmp LookupLoop + +/* s->match_start = cur_match; */ +/* best_len = len; */ +/* if (len >= nice_match) break; */ +/* scan_end = *(ushf*)(scan+best_len-1); */ + +LongerMatch: + movl %eax, %bestlend + movl %curmatchd, dsMatchStart + cmpl %nicematch, %eax + jge LeaveNow + + lea (%window, %bestlen), %windowbestlen + mov %windowbestlen, _windowbestlen + + movzwl -1(%scan, %rax), %scanend + mov dsPrev, %prev + movl _chainlenwmask, %chainlenwmask + jmp LookupLoop + +/* Accept the current string, with the maximum possible length. */ + +LenMaximum: + movl $MAX_MATCH, %bestlend + movl %curmatchd, dsMatchStart + +/* if ((uInt)best_len <= s->lookahead) return (uInt)best_len; */ +/* return s->lookahead; */ + +LeaveNow: + movl dsLookahead, %eax + cmpl %eax, %bestlend + cmovngl %bestlend, %eax +LookaheadRet: + +/* Restore the registers and return from whence we came. */ + + mov save_rsi, %rsi + mov save_rbx, %rbx + mov save_r12, %r12 + mov save_r13, %r13 + mov save_r14, %r14 + mov save_r15, %r15 + + ret + +match_init: ret ADDED compat/zlib/contrib/asm686/README.686 Index: compat/zlib/contrib/asm686/README.686 ================================================================== --- compat/zlib/contrib/asm686/README.686 +++ compat/zlib/contrib/asm686/README.686 @@ -0,0 +1,51 @@ +This is a patched version of zlib, modified to use +Pentium-Pro-optimized assembly code in the deflation algorithm. The +files changed/added by this patch are: + +README.686 +match.S + +The speedup that this patch provides varies, depending on whether the +compiler used to build the original version of zlib falls afoul of the +PPro's speed traps. My own tests show a speedup of around 10-20% at +the default compression level, and 20-30% using -9, against a version +compiled using gcc 2.7.2.3. Your mileage may vary. + +Note that this code has been tailored for the PPro/PII in particular, +and will not perform particuarly well on a Pentium. + +If you are using an assembler other than GNU as, you will have to +translate match.S to use your assembler's syntax. (Have fun.) + +Brian Raiter +breadbox@muppetlabs.com +April, 1998 + + +Added for zlib 1.1.3: + +The patches come from +http://www.muppetlabs.com/~breadbox/software/assembly.html + +To compile zlib with this asm file, copy match.S to the zlib directory +then do: + +CFLAGS="-O3 -DASMV" ./configure +make OBJA=match.o + + +Update: + +I've been ignoring these assembly routines for years, believing that +gcc's generated code had caught up with it sometime around gcc 2.95 +and the major rearchitecting of the Pentium 4. However, I recently +learned that, despite what I believed, this code still has some life +in it. On the Pentium 4 and AMD64 chips, it continues to run about 8% +faster than the code produced by gcc 4.1. + +In acknowledgement of its continuing usefulness, I've altered the +license to match that of the rest of zlib. Share and Enjoy! + +Brian Raiter +breadbox@muppetlabs.com +April, 2007 ADDED compat/zlib/contrib/asm686/match.S Index: compat/zlib/contrib/asm686/match.S ================================================================== --- compat/zlib/contrib/asm686/match.S +++ compat/zlib/contrib/asm686/match.S @@ -0,0 +1,357 @@ +/* match.S -- x86 assembly version of the zlib longest_match() function. + * Optimized for the Intel 686 chips (PPro and later). 
+ * + * Copyright (C) 1998, 2007 Brian Raiter + * + * This software is provided 'as-is', without any express or implied + * warranty. In no event will the author be held liable for any damages + * arising from the use of this software. + * + * Permission is granted to anyone to use this software for any purpose, + * including commercial applications, and to alter it and redistribute it + * freely, subject to the following restrictions: + * + * 1. The origin of this software must not be misrepresented; you must not + * claim that you wrote the original software. If you use this software + * in a product, an acknowledgment in the product documentation would be + * appreciated but is not required. + * 2. Altered source versions must be plainly marked as such, and must not be + * misrepresented as being the original software. + * 3. This notice may not be removed or altered from any source distribution. + */ + +#ifndef NO_UNDERLINE +#define match_init _match_init +#define longest_match _longest_match +#endif + +#define MAX_MATCH (258) +#define MIN_MATCH (3) +#define MIN_LOOKAHEAD (MAX_MATCH + MIN_MATCH + 1) +#define MAX_MATCH_8 ((MAX_MATCH + 7) & ~7) + +/* stack frame offsets */ + +#define chainlenwmask 0 /* high word: current chain len */ + /* low word: s->wmask */ +#define window 4 /* local copy of s->window */ +#define windowbestlen 8 /* s->window + bestlen */ +#define scanstart 16 /* first two bytes of string */ +#define scanend 12 /* last two bytes of string */ +#define scanalign 20 /* dword-misalignment of string */ +#define nicematch 24 /* a good enough match size */ +#define bestlen 28 /* size of best match so far */ +#define scan 32 /* ptr to string wanting match */ + +#define LocalVarsSize (36) +/* saved ebx 36 */ +/* saved edi 40 */ +/* saved esi 44 */ +/* saved ebp 48 */ +/* return address 52 */ +#define deflatestate 56 /* the function arguments */ +#define curmatch 60 + +/* All the +zlib1222add offsets are due to the addition of fields + * in zlib in the deflate_state structure since the asm code was first written + * (if you compile with zlib 1.0.4 or older, use "zlib1222add equ (-4)"). + * (if you compile with zlib between 1.0.5 and 1.2.2.1, use "zlib1222add equ 0"). + * if you compile with zlib 1.2.2.2 or later , use "zlib1222add equ 8"). + */ + +#define zlib1222add (8) + +#define dsWSize (36+zlib1222add) +#define dsWMask (44+zlib1222add) +#define dsWindow (48+zlib1222add) +#define dsPrev (56+zlib1222add) +#define dsMatchLen (88+zlib1222add) +#define dsPrevMatch (92+zlib1222add) +#define dsStrStart (100+zlib1222add) +#define dsMatchStart (104+zlib1222add) +#define dsLookahead (108+zlib1222add) +#define dsPrevLen (112+zlib1222add) +#define dsMaxChainLen (116+zlib1222add) +#define dsGoodMatch (132+zlib1222add) +#define dsNiceMatch (136+zlib1222add) + + +.file "match.S" + +.globl match_init, longest_match + +.text + +/* uInt longest_match(deflate_state *deflatestate, IPos curmatch) */ +.cfi_sections .debug_frame + +longest_match: + +.cfi_startproc +/* Save registers that the compiler may be using, and adjust %esp to */ +/* make room for our stack frame. */ + + pushl %ebp + .cfi_def_cfa_offset 8 + .cfi_offset ebp, -8 + pushl %edi + .cfi_def_cfa_offset 12 + pushl %esi + .cfi_def_cfa_offset 16 + pushl %ebx + .cfi_def_cfa_offset 20 + subl $LocalVarsSize, %esp + .cfi_def_cfa_offset LocalVarsSize+20 + +/* Retrieve the function arguments. %ecx will hold cur_match */ +/* throughout the entire function. 
%edx will hold the pointer to the */ +/* deflate_state structure during the function's setup (before */ +/* entering the main loop). */ + + movl deflatestate(%esp), %edx + movl curmatch(%esp), %ecx + +/* uInt wmask = s->w_mask; */ +/* unsigned chain_length = s->max_chain_length; */ +/* if (s->prev_length >= s->good_match) { */ +/* chain_length >>= 2; */ +/* } */ + + movl dsPrevLen(%edx), %eax + movl dsGoodMatch(%edx), %ebx + cmpl %ebx, %eax + movl dsWMask(%edx), %eax + movl dsMaxChainLen(%edx), %ebx + jl LastMatchGood + shrl $2, %ebx +LastMatchGood: + +/* chainlen is decremented once beforehand so that the function can */ +/* use the sign flag instead of the zero flag for the exit test. */ +/* It is then shifted into the high word, to make room for the wmask */ +/* value, which it will always accompany. */ + + decl %ebx + shll $16, %ebx + orl %eax, %ebx + movl %ebx, chainlenwmask(%esp) + +/* if ((uInt)nice_match > s->lookahead) nice_match = s->lookahead; */ + + movl dsNiceMatch(%edx), %eax + movl dsLookahead(%edx), %ebx + cmpl %eax, %ebx + jl LookaheadLess + movl %eax, %ebx +LookaheadLess: movl %ebx, nicematch(%esp) + +/* register Bytef *scan = s->window + s->strstart; */ + + movl dsWindow(%edx), %esi + movl %esi, window(%esp) + movl dsStrStart(%edx), %ebp + lea (%esi,%ebp), %edi + movl %edi, scan(%esp) + +/* Determine how many bytes the scan ptr is off from being */ +/* dword-aligned. */ + + movl %edi, %eax + negl %eax + andl $3, %eax + movl %eax, scanalign(%esp) + +/* IPos limit = s->strstart > (IPos)MAX_DIST(s) ? */ +/* s->strstart - (IPos)MAX_DIST(s) : NIL; */ + + movl dsWSize(%edx), %eax + subl $MIN_LOOKAHEAD, %eax + subl %eax, %ebp + jg LimitPositive + xorl %ebp, %ebp +LimitPositive: + +/* int best_len = s->prev_length; */ + + movl dsPrevLen(%edx), %eax + movl %eax, bestlen(%esp) + +/* Store the sum of s->window + best_len in %esi locally, and in %esi. */ + + addl %eax, %esi + movl %esi, windowbestlen(%esp) + +/* register ush scan_start = *(ushf*)scan; */ +/* register ush scan_end = *(ushf*)(scan+best_len-1); */ +/* Posf *prev = s->prev; */ + + movzwl (%edi), %ebx + movl %ebx, scanstart(%esp) + movzwl -1(%edi,%eax), %ebx + movl %ebx, scanend(%esp) + movl dsPrev(%edx), %edi + +/* Jump into the main loop. */ + + movl chainlenwmask(%esp), %edx + jmp LoopEntry + +.balign 16 + +/* do { + * match = s->window + cur_match; + * if (*(ushf*)(match+best_len-1) != scan_end || + * *(ushf*)match != scan_start) continue; + * [...] + * } while ((cur_match = prev[cur_match & wmask]) > limit + * && --chain_length != 0); + * + * Here is the inner loop of the function. The function will spend the + * majority of its time in this loop, and majority of that time will + * be spent in the first ten instructions. + * + * Within this loop: + * %ebx = scanend + * %ecx = curmatch + * %edx = chainlenwmask - i.e., ((chainlen << 16) | wmask) + * %esi = windowbestlen - i.e., (window + bestlen) + * %edi = prev + * %ebp = limit + */ +LookupLoop: + andl %edx, %ecx + movzwl (%edi,%ecx,2), %ecx + cmpl %ebp, %ecx + jbe LeaveNow + subl $0x00010000, %edx + js LeaveNow +LoopEntry: movzwl -1(%esi,%ecx), %eax + cmpl %ebx, %eax + jnz LookupLoop + movl window(%esp), %eax + movzwl (%eax,%ecx), %eax + cmpl scanstart(%esp), %eax + jnz LookupLoop + +/* Store the current value of chainlen. */ + + movl %edx, chainlenwmask(%esp) + +/* Point %edi to the string under scrutiny, and %esi to the string we */ +/* are hoping to match it up with. 
In actuality, %esi and %edi are */ +/* both pointed (MAX_MATCH_8 - scanalign) bytes ahead, and %edx is */ +/* initialized to -(MAX_MATCH_8 - scanalign). */ + + movl window(%esp), %esi + movl scan(%esp), %edi + addl %ecx, %esi + movl scanalign(%esp), %eax + movl $(-MAX_MATCH_8), %edx + lea MAX_MATCH_8(%edi,%eax), %edi + lea MAX_MATCH_8(%esi,%eax), %esi + +/* Test the strings for equality, 8 bytes at a time. At the end, + * adjust %edx so that it is offset to the exact byte that mismatched. + * + * We already know at this point that the first three bytes of the + * strings match each other, and they can be safely passed over before + * starting the compare loop. So what this code does is skip over 0-3 + * bytes, as much as necessary in order to dword-align the %edi + * pointer. (%esi will still be misaligned three times out of four.) + * + * It should be confessed that this loop usually does not represent + * much of the total running time. Replacing it with a more + * straightforward "rep cmpsb" would not drastically degrade + * performance. + */ +LoopCmps: + movl (%esi,%edx), %eax + xorl (%edi,%edx), %eax + jnz LeaveLoopCmps + movl 4(%esi,%edx), %eax + xorl 4(%edi,%edx), %eax + jnz LeaveLoopCmps4 + addl $8, %edx + jnz LoopCmps + jmp LenMaximum +LeaveLoopCmps4: addl $4, %edx +LeaveLoopCmps: testl $0x0000FFFF, %eax + jnz LenLower + addl $2, %edx + shrl $16, %eax +LenLower: subb $1, %al + adcl $0, %edx + +/* Calculate the length of the match. If it is longer than MAX_MATCH, */ +/* then automatically accept it as the best possible match and leave. */ + + lea (%edi,%edx), %eax + movl scan(%esp), %edi + subl %edi, %eax + cmpl $MAX_MATCH, %eax + jge LenMaximum + +/* If the length of the match is not longer than the best match we */ +/* have so far, then forget it and return to the lookup loop. */ + + movl deflatestate(%esp), %edx + movl bestlen(%esp), %ebx + cmpl %ebx, %eax + jg LongerMatch + movl windowbestlen(%esp), %esi + movl dsPrev(%edx), %edi + movl scanend(%esp), %ebx + movl chainlenwmask(%esp), %edx + jmp LookupLoop + +/* s->match_start = cur_match; */ +/* best_len = len; */ +/* if (len >= nice_match) break; */ +/* scan_end = *(ushf*)(scan+best_len-1); */ + +LongerMatch: movl nicematch(%esp), %ebx + movl %eax, bestlen(%esp) + movl %ecx, dsMatchStart(%edx) + cmpl %ebx, %eax + jge LeaveNow + movl window(%esp), %esi + addl %eax, %esi + movl %esi, windowbestlen(%esp) + movzwl -1(%edi,%eax), %ebx + movl dsPrev(%edx), %edi + movl %ebx, scanend(%esp) + movl chainlenwmask(%esp), %edx + jmp LookupLoop + +/* Accept the current string, with the maximum possible length. */ + +LenMaximum: movl deflatestate(%esp), %edx + movl $MAX_MATCH, bestlen(%esp) + movl %ecx, dsMatchStart(%edx) + +/* if ((uInt)best_len <= s->lookahead) return (uInt)best_len; */ +/* return s->lookahead; */ + +LeaveNow: + movl deflatestate(%esp), %edx + movl bestlen(%esp), %ebx + movl dsLookahead(%edx), %eax + cmpl %eax, %ebx + jg LookaheadRet + movl %ebx, %eax +LookaheadRet: + +/* Restore the stack and return from whence we came. 
*/ + + addl $LocalVarsSize, %esp + .cfi_def_cfa_offset 20 + popl %ebx + .cfi_def_cfa_offset 16 + popl %esi + .cfi_def_cfa_offset 12 + popl %edi + .cfi_def_cfa_offset 8 + popl %ebp + .cfi_def_cfa_offset 4 +.cfi_endproc +match_init: ret ADDED compat/zlib/contrib/blast/Makefile Index: compat/zlib/contrib/blast/Makefile ================================================================== --- compat/zlib/contrib/blast/Makefile +++ compat/zlib/contrib/blast/Makefile @@ -0,0 +1,8 @@ +blast: blast.c blast.h + cc -DTEST -o blast blast.c + +test: blast + blast < test.pk | cmp - test.txt + +clean: + rm -f blast blast.o ADDED compat/zlib/contrib/blast/README Index: compat/zlib/contrib/blast/README ================================================================== --- compat/zlib/contrib/blast/README +++ compat/zlib/contrib/blast/README @@ -0,0 +1,4 @@ +Read blast.h for purpose and usage. + +Mark Adler +madler@alumni.caltech.edu ADDED compat/zlib/contrib/blast/blast.c Index: compat/zlib/contrib/blast/blast.c ================================================================== --- compat/zlib/contrib/blast/blast.c +++ compat/zlib/contrib/blast/blast.c @@ -0,0 +1,446 @@ +/* blast.c + * Copyright (C) 2003, 2012 Mark Adler + * For conditions of distribution and use, see copyright notice in blast.h + * version 1.2, 24 Oct 2012 + * + * blast.c decompresses data compressed by the PKWare Compression Library. + * This function provides functionality similar to the explode() function of + * the PKWare library, hence the name "blast". + * + * This decompressor is based on the excellent format description provided by + * Ben Rudiak-Gould in comp.compression on August 13, 2001. Interestingly, the + * example Ben provided in the post is incorrect. The distance 110001 should + * instead be 111000. When corrected, the example byte stream becomes: + * + * 00 04 82 24 25 8f 80 7f + * + * which decompresses to "AIAIAIAIAIAIA" (without the quotes). + */ + +/* + * Change history: + * + * 1.0 12 Feb 2003 - First version + * 1.1 16 Feb 2003 - Fixed distance check for > 4 GB uncompressed data + * 1.2 24 Oct 2012 - Add note about using binary mode in stdio + * - Fix comparisons of differently signed integers + */ + +#include /* for setjmp(), longjmp(), and jmp_buf */ +#include "blast.h" /* prototype for blast() */ + +#define local static /* for local function definitions */ +#define MAXBITS 13 /* maximum code length */ +#define MAXWIN 4096 /* maximum window size */ + +/* input and output state */ +struct state { + /* input state */ + blast_in infun; /* input function provided by user */ + void *inhow; /* opaque information passed to infun() */ + unsigned char *in; /* next input location */ + unsigned left; /* available input at in */ + int bitbuf; /* bit buffer */ + int bitcnt; /* number of bits in bit buffer */ + + /* input limit error return state for bits() and decode() */ + jmp_buf env; + + /* output state */ + blast_out outfun; /* output function provided by user */ + void *outhow; /* opaque information passed to outfun() */ + unsigned next; /* index of next write location in out[] */ + int first; /* true to check distances (for first 4K) */ + unsigned char out[MAXWIN]; /* output buffer and sliding window */ +}; + +/* + * Return need bits from the input stream. This always leaves less than + * eight bits in the buffer. bits() works properly for need == 0. + * + * Format notes: + * + * - Bits are stored in bytes from the least significant bit to the most + * significant bit. 
Therefore bits are dropped from the bottom of the bit + * buffer, using shift right, and new bytes are appended to the top of the + * bit buffer, using shift left. + */ +local int bits(struct state *s, int need) +{ + int val; /* bit accumulator */ + + /* load at least need bits into val */ + val = s->bitbuf; + while (s->bitcnt < need) { + if (s->left == 0) { + s->left = s->infun(s->inhow, &(s->in)); + if (s->left == 0) longjmp(s->env, 1); /* out of input */ + } + val |= (int)(*(s->in)++) << s->bitcnt; /* load eight bits */ + s->left--; + s->bitcnt += 8; + } + + /* drop need bits and update buffer, always zero to seven bits left */ + s->bitbuf = val >> need; + s->bitcnt -= need; + + /* return need bits, zeroing the bits above that */ + return val & ((1 << need) - 1); +} + +/* + * Huffman code decoding tables. count[1..MAXBITS] is the number of symbols of + * each length, which for a canonical code are stepped through in order. + * symbol[] are the symbol values in canonical order, where the number of + * entries is the sum of the counts in count[]. The decoding process can be + * seen in the function decode() below. + */ +struct huffman { + short *count; /* number of symbols of each length */ + short *symbol; /* canonically ordered symbols */ +}; + +/* + * Decode a code from the stream s using huffman table h. Return the symbol or + * a negative value if there is an error. If all of the lengths are zero, i.e. + * an empty code, or if the code is incomplete and an invalid code is received, + * then -9 is returned after reading MAXBITS bits. + * + * Format notes: + * + * - The codes as stored in the compressed data are bit-reversed relative to + * a simple integer ordering of codes of the same lengths. Hence below the + * bits are pulled from the compressed data one at a time and used to + * build the code value reversed from what is in the stream in order to + * permit simple integer comparisons for decoding. + * + * - The first code for the shortest length is all ones. Subsequent codes of + * the same length are simply integer decrements of the previous code. When + * moving up a length, a one bit is appended to the code. For a complete + * code, the last code of the longest length will be all zeros. To support + * this ordering, the bits pulled during decoding are inverted to apply the + * more "natural" ordering starting with all zeros and incrementing. 
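+ *
+ * - As an illustrative aside (not part of the original format notes): after
+ *   that inversion the decoder sees an ordinary canonical Huffman code. For
+ *   example, code lengths {1, 2, 3, 3} decode to the codes 0, 10, 110 and
+ *   111, which is why the "code < first + count" test below needs nothing
+ *   more than plain integer comparisons.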
+ */ +local int decode(struct state *s, struct huffman *h) +{ + int len; /* current number of bits in code */ + int code; /* len bits being decoded */ + int first; /* first code of length len */ + int count; /* number of codes of length len */ + int index; /* index of first code of length len in symbol table */ + int bitbuf; /* bits from stream */ + int left; /* bits left in next or left to process */ + short *next; /* next number of codes */ + + bitbuf = s->bitbuf; + left = s->bitcnt; + code = first = index = 0; + len = 1; + next = h->count + 1; + while (1) { + while (left--) { + code |= (bitbuf & 1) ^ 1; /* invert code */ + bitbuf >>= 1; + count = *next++; + if (code < first + count) { /* if length len, return symbol */ + s->bitbuf = bitbuf; + s->bitcnt = (s->bitcnt - len) & 7; + return h->symbol[index + (code - first)]; + } + index += count; /* else update for next length */ + first += count; + first <<= 1; + code <<= 1; + len++; + } + left = (MAXBITS+1) - len; + if (left == 0) break; + if (s->left == 0) { + s->left = s->infun(s->inhow, &(s->in)); + if (s->left == 0) longjmp(s->env, 1); /* out of input */ + } + bitbuf = *(s->in)++; + s->left--; + if (left > 8) left = 8; + } + return -9; /* ran out of codes */ +} + +/* + * Given a list of repeated code lengths rep[0..n-1], where each byte is a + * count (high four bits + 1) and a code length (low four bits), generate the + * list of code lengths. This compaction reduces the size of the object code. + * Then given the list of code lengths length[0..n-1] representing a canonical + * Huffman code for n symbols, construct the tables required to decode those + * codes. Those tables are the number of codes of each length, and the symbols + * sorted by length, retaining their original order within each length. The + * return value is zero for a complete code set, negative for an over- + * subscribed code set, and positive for an incomplete code set. The tables + * can be used if the return value is zero or positive, but they cannot be used + * if the return value is negative. If the return value is zero, it is not + * possible for decode() using that table to return an error--any stream of + * enough bits will resolve to a symbol. If the return value is positive, then + * it is possible for decode() using that table to return an error for received + * codes past the end of the incomplete lengths. + */ +local int construct(struct huffman *h, const unsigned char *rep, int n) +{ + int symbol; /* current symbol when stepping through length[] */ + int len; /* current length when stepping through h->count[] */ + int left; /* number of possible codes left of current length */ + short offs[MAXBITS+1]; /* offsets in symbol table for each length */ + short length[256]; /* code lengths */ + + /* convert compact repeat counts into symbol bit length list */ + symbol = 0; + do { + len = *rep++; + left = (len >> 4) + 1; + len &= 15; + do { + length[symbol++] = len; + } while (--left); + } while (--n); + n = symbol; + + /* count number of codes of each length */ + for (len = 0; len <= MAXBITS; len++) + h->count[len] = 0; + for (symbol = 0; symbol < n; symbol++) + (h->count[length[symbol]])++; /* assumes lengths are within bounds */ + if (h->count[0] == n) /* no codes! 
*/ + return 0; /* complete, but decode() will fail */ + + /* check for an over-subscribed or incomplete set of lengths */ + left = 1; /* one possible code of zero length */ + for (len = 1; len <= MAXBITS; len++) { + left <<= 1; /* one more bit, double codes left */ + left -= h->count[len]; /* deduct count from possible codes */ + if (left < 0) return left; /* over-subscribed--return negative */ + } /* left > 0 means incomplete */ + + /* generate offsets into symbol table for each length for sorting */ + offs[1] = 0; + for (len = 1; len < MAXBITS; len++) + offs[len + 1] = offs[len] + h->count[len]; + + /* + * put symbols in table sorted by length, by symbol order within each + * length + */ + for (symbol = 0; symbol < n; symbol++) + if (length[symbol] != 0) + h->symbol[offs[length[symbol]]++] = symbol; + + /* return zero for complete set, positive for incomplete set */ + return left; +} + +/* + * Decode PKWare Compression Library stream. + * + * Format notes: + * + * - First byte is 0 if literals are uncoded or 1 if they are coded. Second + * byte is 4, 5, or 6 for the number of extra bits in the distance code. + * This is the base-2 logarithm of the dictionary size minus six. + * + * - Compressed data is a combination of literals and length/distance pairs + * terminated by an end code. Literals are either Huffman coded or + * uncoded bytes. A length/distance pair is a coded length followed by a + * coded distance to represent a string that occurs earlier in the + * uncompressed data that occurs again at the current location. + * + * - A bit preceding a literal or length/distance pair indicates which comes + * next, 0 for literals, 1 for length/distance. + * + * - If literals are uncoded, then the next eight bits are the literal, in the + * normal bit order in th stream, i.e. no bit-reversal is needed. Similarly, + * no bit reversal is needed for either the length extra bits or the distance + * extra bits. + * + * - Literal bytes are simply written to the output. A length/distance pair is + * an instruction to copy previously uncompressed bytes to the output. The + * copy is from distance bytes back in the output stream, copying for length + * bytes. + * + * - Distances pointing before the beginning of the output data are not + * permitted. + * + * - Overlapped copies, where the length is greater than the distance, are + * allowed and common. For example, a distance of one and a length of 518 + * simply copies the last byte 518 times. A distance of four and a length of + * twelve copies the last four bytes three times. A simple forward copy + * ignoring whether the length is greater than the distance or not implements + * this correctly. 
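+ *
+ * - A minimal sketch of such a forward copy (illustrative only; the loop in
+ *   decomp() below additionally handles window wrap-around and flushing):
+ *
+ *       do {
+ *           *to++ = *from++;
+ *       } while (--len);
+ *
+ *   Each source byte has already been written by the time it is read, so
+ *   overlapped copies need no special casing.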
+ */ +local int decomp(struct state *s) +{ + int lit; /* true if literals are coded */ + int dict; /* log2(dictionary size) - 6 */ + int symbol; /* decoded symbol, extra bits for distance */ + int len; /* length for copy */ + unsigned dist; /* distance for copy */ + int copy; /* copy counter */ + unsigned char *from, *to; /* copy pointers */ + static int virgin = 1; /* build tables once */ + static short litcnt[MAXBITS+1], litsym[256]; /* litcode memory */ + static short lencnt[MAXBITS+1], lensym[16]; /* lencode memory */ + static short distcnt[MAXBITS+1], distsym[64]; /* distcode memory */ + static struct huffman litcode = {litcnt, litsym}; /* length code */ + static struct huffman lencode = {lencnt, lensym}; /* length code */ + static struct huffman distcode = {distcnt, distsym};/* distance code */ + /* bit lengths of literal codes */ + static const unsigned char litlen[] = { + 11, 124, 8, 7, 28, 7, 188, 13, 76, 4, 10, 8, 12, 10, 12, 10, 8, 23, 8, + 9, 7, 6, 7, 8, 7, 6, 55, 8, 23, 24, 12, 11, 7, 9, 11, 12, 6, 7, 22, 5, + 7, 24, 6, 11, 9, 6, 7, 22, 7, 11, 38, 7, 9, 8, 25, 11, 8, 11, 9, 12, + 8, 12, 5, 38, 5, 38, 5, 11, 7, 5, 6, 21, 6, 10, 53, 8, 7, 24, 10, 27, + 44, 253, 253, 253, 252, 252, 252, 13, 12, 45, 12, 45, 12, 61, 12, 45, + 44, 173}; + /* bit lengths of length codes 0..15 */ + static const unsigned char lenlen[] = {2, 35, 36, 53, 38, 23}; + /* bit lengths of distance codes 0..63 */ + static const unsigned char distlen[] = {2, 20, 53, 230, 247, 151, 248}; + static const short base[16] = { /* base for length codes */ + 3, 2, 4, 5, 6, 7, 8, 9, 10, 12, 16, 24, 40, 72, 136, 264}; + static const char extra[16] = { /* extra bits for length codes */ + 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8}; + + /* set up decoding tables (once--might not be thread-safe) */ + if (virgin) { + construct(&litcode, litlen, sizeof(litlen)); + construct(&lencode, lenlen, sizeof(lenlen)); + construct(&distcode, distlen, sizeof(distlen)); + virgin = 0; + } + + /* read header */ + lit = bits(s, 8); + if (lit > 1) return -1; + dict = bits(s, 8); + if (dict < 4 || dict > 6) return -2; + + /* decode literals and length/distance pairs */ + do { + if (bits(s, 1)) { + /* get length */ + symbol = decode(s, &lencode); + len = base[symbol] + bits(s, extra[symbol]); + if (len == 519) break; /* end code */ + + /* get distance */ + symbol = len == 2 ? 2 : dict; + dist = decode(s, &distcode) << symbol; + dist += bits(s, symbol); + dist++; + if (s->first && dist > s->next) + return -3; /* distance too far back */ + + /* copy length bytes from distance bytes back */ + do { + to = s->out + s->next; + from = to - dist; + copy = MAXWIN; + if (s->next < dist) { + from += copy; + copy = dist; + } + copy -= s->next; + if (copy > len) copy = len; + len -= copy; + s->next += copy; + do { + *to++ = *from++; + } while (--copy); + if (s->next == MAXWIN) { + if (s->outfun(s->outhow, s->out, s->next)) return 1; + s->next = 0; + s->first = 0; + } + } while (len != 0); + } + else { + /* get literal and write it */ + symbol = lit ? 
decode(s, &litcode) : bits(s, 8); + s->out[s->next++] = symbol; + if (s->next == MAXWIN) { + if (s->outfun(s->outhow, s->out, s->next)) return 1; + s->next = 0; + s->first = 0; + } + } + } while (1); + return 0; +} + +/* See comments in blast.h */ +int blast(blast_in infun, void *inhow, blast_out outfun, void *outhow) +{ + struct state s; /* input/output state */ + int err; /* return value */ + + /* initialize input state */ + s.infun = infun; + s.inhow = inhow; + s.left = 0; + s.bitbuf = 0; + s.bitcnt = 0; + + /* initialize output state */ + s.outfun = outfun; + s.outhow = outhow; + s.next = 0; + s.first = 1; + + /* return if bits() or decode() tries to read past available input */ + if (setjmp(s.env) != 0) /* if came back here via longjmp(), */ + err = 2; /* then skip decomp(), return error */ + else + err = decomp(&s); /* decompress */ + + /* write any leftover output and update the error code if needed */ + if (err != 1 && s.next && s.outfun(s.outhow, s.out, s.next) && err == 0) + err = 1; + return err; +} + +#ifdef TEST +/* Example of how to use blast() */ +#include +#include + +#define CHUNK 16384 + +local unsigned inf(void *how, unsigned char **buf) +{ + static unsigned char hold[CHUNK]; + + *buf = hold; + return fread(hold, 1, CHUNK, (FILE *)how); +} + +local int outf(void *how, unsigned char *buf, unsigned len) +{ + return fwrite(buf, 1, len, (FILE *)how) != len; +} + +/* Decompress a PKWare Compression Library stream from stdin to stdout */ +int main(void) +{ + int ret, n; + + /* decompress to stdout */ + ret = blast(inf, stdin, outf, stdout); + if (ret != 0) fprintf(stderr, "blast error: %d\n", ret); + + /* see if there are any leftover bytes */ + n = 0; + while (getchar() != EOF) n++; + if (n) fprintf(stderr, "blast warning: %d unused bytes of input\n", n); + + /* return blast() error code */ + return ret; +} +#endif ADDED compat/zlib/contrib/blast/blast.h Index: compat/zlib/contrib/blast/blast.h ================================================================== --- compat/zlib/contrib/blast/blast.h +++ compat/zlib/contrib/blast/blast.h @@ -0,0 +1,75 @@ +/* blast.h -- interface for blast.c + Copyright (C) 2003, 2012 Mark Adler + version 1.2, 24 Oct 2012 + + This software is provided 'as-is', without any express or implied + warranty. In no event will the author be held liable for any damages + arising from the use of this software. + + Permission is granted to anyone to use this software for any purpose, + including commercial applications, and to alter it and redistribute it + freely, subject to the following restrictions: + + 1. The origin of this software must not be misrepresented; you must not + claim that you wrote the original software. If you use this software + in a product, an acknowledgment in the product documentation would be + appreciated but is not required. + 2. Altered source versions must be plainly marked as such, and must not be + misrepresented as being the original software. + 3. This notice may not be removed or altered from any source distribution. + + Mark Adler madler@alumni.caltech.edu + */ + + +/* + * blast() decompresses the PKWare Data Compression Library (DCL) compressed + * format. It provides the same functionality as the explode() function in + * that library. (Note: PKWare overused the "implode" verb, and the format + * used by their library implode() function is completely different and + * incompatible with the implode compression method supported by PKZIP.) 
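+ *
+ * A minimal usage sketch (the callback names and descriptor variables here
+ * are illustrative, not part of this header):
+ *
+ *     unsigned myin(void *how, unsigned char **buf);          /* supply input */
+ *     int myout(void *how, unsigned char *buf, unsigned len); /* consume output */
+ *     ...
+ *     int err = blast(myin, in_desc, myout, out_desc);
+ *
+ * A complete, runnable filter built this way is included at the bottom of
+ * blast.c under #ifdef TEST.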
+ * + * The binary mode for stdio functions should be used to assure that the + * compressed data is not corrupted when read or written. For example: + * fopen(..., "rb") and fopen(..., "wb"). + */ + + +typedef unsigned (*blast_in)(void *how, unsigned char **buf); +typedef int (*blast_out)(void *how, unsigned char *buf, unsigned len); +/* Definitions for input/output functions passed to blast(). See below for + * what the provided functions need to do. + */ + + +int blast(blast_in infun, void *inhow, blast_out outfun, void *outhow); +/* Decompress input to output using the provided infun() and outfun() calls. + * On success, the return value of blast() is zero. If there is an error in + * the source data, i.e. it is not in the proper format, then a negative value + * is returned. If there is not enough input available or there is not enough + * output space, then a positive error is returned. + * + * The input function is invoked: len = infun(how, &buf), where buf is set by + * infun() to point to the input buffer, and infun() returns the number of + * available bytes there. If infun() returns zero, then blast() returns with + * an input error. (blast() only asks for input if it needs it.) inhow is for + * use by the application to pass an input descriptor to infun(), if desired. + * + * The output function is invoked: err = outfun(how, buf, len), where the bytes + * to be written are buf[0..len-1]. If err is not zero, then blast() returns + * with an output error. outfun() is always called with len <= 4096. outhow + * is for use by the application to pass an output descriptor to outfun(), if + * desired. + * + * The return codes are: + * + * 2: ran out of input before completing decompression + * 1: output error before completing decompression + * 0: successful decompression + * -1: literal flag not zero or one + * -2: dictionary size not in 4..6 + * -3: distance is too far back + * + * At the bottom of blast.c is an example program that uses blast() that can be + * compiled to produce a command-line decompression filter by defining TEST. + */ ADDED compat/zlib/contrib/blast/test.pk Index: compat/zlib/contrib/blast/test.pk ================================================================== --- compat/zlib/contrib/blast/test.pk +++ compat/zlib/contrib/blast/test.pk cannot compute difference between binary files ADDED compat/zlib/contrib/blast/test.txt Index: compat/zlib/contrib/blast/test.txt ================================================================== --- compat/zlib/contrib/blast/test.txt +++ compat/zlib/contrib/blast/test.txt @@ -0,0 +1,1 @@ +AIAIAIAIAIAIA ADDED compat/zlib/contrib/delphi/ZLib.pas Index: compat/zlib/contrib/delphi/ZLib.pas ================================================================== --- compat/zlib/contrib/delphi/ZLib.pas +++ compat/zlib/contrib/delphi/ZLib.pas @@ -0,0 +1,557 @@ +{*******************************************************} +{ } +{ Borland Delphi Supplemental Components } +{ ZLIB Data Compression Interface Unit } +{ } +{ Copyright (c) 1997,99 Borland Corporation } +{ } +{*******************************************************} + +{ Updated for zlib 1.2.x by Cosmin Truta } + +unit ZLib; + +interface + +uses SysUtils, Classes; + +type + TAlloc = function (AppData: Pointer; Items, Size: Integer): Pointer; cdecl; + TFree = procedure (AppData, Block: Pointer); cdecl; + + // Internal structure. Ignore. 
+ TZStreamRec = packed record + next_in: PChar; // next input byte + avail_in: Integer; // number of bytes available at next_in + total_in: Longint; // total nb of input bytes read so far + + next_out: PChar; // next output byte should be put here + avail_out: Integer; // remaining free space at next_out + total_out: Longint; // total nb of bytes output so far + + msg: PChar; // last error message, NULL if no error + internal: Pointer; // not visible by applications + + zalloc: TAlloc; // used to allocate the internal state + zfree: TFree; // used to free the internal state + AppData: Pointer; // private data object passed to zalloc and zfree + + data_type: Integer; // best guess about the data type: ascii or binary + adler: Longint; // adler32 value of the uncompressed data + reserved: Longint; // reserved for future use + end; + + // Abstract ancestor class + TCustomZlibStream = class(TStream) + private + FStrm: TStream; + FStrmPos: Integer; + FOnProgress: TNotifyEvent; + FZRec: TZStreamRec; + FBuffer: array [Word] of Char; + protected + procedure Progress(Sender: TObject); dynamic; + property OnProgress: TNotifyEvent read FOnProgress write FOnProgress; + constructor Create(Strm: TStream); + end; + +{ TCompressionStream compresses data on the fly as data is written to it, and + stores the compressed data to another stream. + + TCompressionStream is write-only and strictly sequential. Reading from the + stream will raise an exception. Using Seek to move the stream pointer + will raise an exception. + + Output data is cached internally, written to the output stream only when + the internal output buffer is full. All pending output data is flushed + when the stream is destroyed. + + The Position property returns the number of uncompressed bytes of + data that have been written to the stream so far. + + CompressionRate returns the on-the-fly percentage by which the original + data has been compressed: (1 - (CompressedBytes / UncompressedBytes)) * 100 + If raw data size = 100 and compressed data size = 25, the CompressionRate + is 75% + + The OnProgress event is called each time the output buffer is filled and + written to the output stream. This is useful for updating a progress + indicator when you are writing a large chunk of data to the compression + stream in a single call.} + + + TCompressionLevel = (clNone, clFastest, clDefault, clMax); + + TCompressionStream = class(TCustomZlibStream) + private + function GetCompressionRate: Single; + public + constructor Create(CompressionLevel: TCompressionLevel; Dest: TStream); + destructor Destroy; override; + function Read(var Buffer; Count: Longint): Longint; override; + function Write(const Buffer; Count: Longint): Longint; override; + function Seek(Offset: Longint; Origin: Word): Longint; override; + property CompressionRate: Single read GetCompressionRate; + property OnProgress; + end; + +{ TDecompressionStream decompresses data on the fly as data is read from it. + + Compressed data comes from a separate source stream. TDecompressionStream + is read-only and unidirectional; you can seek forward in the stream, but not + backwards. The special case of setting the stream position to zero is + allowed. Seeking forward decompresses data until the requested position in + the uncompressed data has been reached. Seeking backwards, seeking relative + to the end of the stream, requesting the size of the stream, and writing to + the stream will raise an exception. 
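+
+  A minimal usage sketch (Source, Buf and BytesRead are illustrative names,
+  not declared by this unit):
+
+    Decomp := TDecompressionStream.Create(Source);
+    try
+      BytesRead := Decomp.Read(Buf, SizeOf(Buf));
+    finally
+      Decomp.Free;
+    end;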
+ + The Position property returns the number of bytes of uncompressed data that + have been read from the stream so far. + + The OnProgress event is called each time the internal input buffer of + compressed data is exhausted and the next block is read from the input stream. + This is useful for updating a progress indicator when you are reading a + large chunk of data from the decompression stream in a single call.} + + TDecompressionStream = class(TCustomZlibStream) + public + constructor Create(Source: TStream); + destructor Destroy; override; + function Read(var Buffer; Count: Longint): Longint; override; + function Write(const Buffer; Count: Longint): Longint; override; + function Seek(Offset: Longint; Origin: Word): Longint; override; + property OnProgress; + end; + + + +{ CompressBuf compresses data, buffer to buffer, in one call. + In: InBuf = ptr to compressed data + InBytes = number of bytes in InBuf + Out: OutBuf = ptr to newly allocated buffer containing decompressed data + OutBytes = number of bytes in OutBuf } +procedure CompressBuf(const InBuf: Pointer; InBytes: Integer; + out OutBuf: Pointer; out OutBytes: Integer); + + +{ DecompressBuf decompresses data, buffer to buffer, in one call. + In: InBuf = ptr to compressed data + InBytes = number of bytes in InBuf + OutEstimate = zero, or est. size of the decompressed data + Out: OutBuf = ptr to newly allocated buffer containing decompressed data + OutBytes = number of bytes in OutBuf } +procedure DecompressBuf(const InBuf: Pointer; InBytes: Integer; + OutEstimate: Integer; out OutBuf: Pointer; out OutBytes: Integer); + +{ DecompressToUserBuf decompresses data, buffer to buffer, in one call. + In: InBuf = ptr to compressed data + InBytes = number of bytes in InBuf + Out: OutBuf = ptr to user-allocated buffer to contain decompressed data + BufSize = number of bytes in OutBuf } +procedure DecompressToUserBuf(const InBuf: Pointer; InBytes: Integer; + const OutBuf: Pointer; BufSize: Integer); + +const + zlib_version = '1.2.8'; + +type + EZlibError = class(Exception); + ECompressionError = class(EZlibError); + EDecompressionError = class(EZlibError); + +implementation + +uses ZLibConst; + +const + Z_NO_FLUSH = 0; + Z_PARTIAL_FLUSH = 1; + Z_SYNC_FLUSH = 2; + Z_FULL_FLUSH = 3; + Z_FINISH = 4; + + Z_OK = 0; + Z_STREAM_END = 1; + Z_NEED_DICT = 2; + Z_ERRNO = (-1); + Z_STREAM_ERROR = (-2); + Z_DATA_ERROR = (-3); + Z_MEM_ERROR = (-4); + Z_BUF_ERROR = (-5); + Z_VERSION_ERROR = (-6); + + Z_NO_COMPRESSION = 0; + Z_BEST_SPEED = 1; + Z_BEST_COMPRESSION = 9; + Z_DEFAULT_COMPRESSION = (-1); + + Z_FILTERED = 1; + Z_HUFFMAN_ONLY = 2; + Z_RLE = 3; + Z_DEFAULT_STRATEGY = 0; + + Z_BINARY = 0; + Z_ASCII = 1; + Z_UNKNOWN = 2; + + Z_DEFLATED = 8; + + +{$L adler32.obj} +{$L compress.obj} +{$L crc32.obj} +{$L deflate.obj} +{$L infback.obj} +{$L inffast.obj} +{$L inflate.obj} +{$L inftrees.obj} +{$L trees.obj} +{$L uncompr.obj} +{$L zutil.obj} + +procedure adler32; external; +procedure compressBound; external; +procedure crc32; external; +procedure deflateInit2_; external; +procedure deflateParams; external; + +function _malloc(Size: Integer): Pointer; cdecl; +begin + Result := AllocMem(Size); +end; + +procedure _free(Block: Pointer); cdecl; +begin + FreeMem(Block); +end; + +procedure _memset(P: Pointer; B: Byte; count: Integer); cdecl; +begin + FillChar(P^, count, B); +end; + +procedure _memcpy(dest, source: Pointer; count: Integer); cdecl; +begin + Move(source^, dest^, count); +end; + + + +// deflate compresses data +function deflateInit_(var strm: 
TZStreamRec; level: Integer; version: PChar; + recsize: Integer): Integer; external; +function deflate(var strm: TZStreamRec; flush: Integer): Integer; external; +function deflateEnd(var strm: TZStreamRec): Integer; external; + +// inflate decompresses data +function inflateInit_(var strm: TZStreamRec; version: PChar; + recsize: Integer): Integer; external; +function inflate(var strm: TZStreamRec; flush: Integer): Integer; external; +function inflateEnd(var strm: TZStreamRec): Integer; external; +function inflateReset(var strm: TZStreamRec): Integer; external; + + +function zlibAllocMem(AppData: Pointer; Items, Size: Integer): Pointer; cdecl; +begin +// GetMem(Result, Items*Size); + Result := AllocMem(Items * Size); +end; + +procedure zlibFreeMem(AppData, Block: Pointer); cdecl; +begin + FreeMem(Block); +end; + +{function zlibCheck(code: Integer): Integer; +begin + Result := code; + if code < 0 then + raise EZlibError.Create('error'); //!! +end;} + +function CCheck(code: Integer): Integer; +begin + Result := code; + if code < 0 then + raise ECompressionError.Create('error'); //!! +end; + +function DCheck(code: Integer): Integer; +begin + Result := code; + if code < 0 then + raise EDecompressionError.Create('error'); //!! +end; + +procedure CompressBuf(const InBuf: Pointer; InBytes: Integer; + out OutBuf: Pointer; out OutBytes: Integer); +var + strm: TZStreamRec; + P: Pointer; +begin + FillChar(strm, sizeof(strm), 0); + strm.zalloc := zlibAllocMem; + strm.zfree := zlibFreeMem; + OutBytes := ((InBytes + (InBytes div 10) + 12) + 255) and not 255; + GetMem(OutBuf, OutBytes); + try + strm.next_in := InBuf; + strm.avail_in := InBytes; + strm.next_out := OutBuf; + strm.avail_out := OutBytes; + CCheck(deflateInit_(strm, Z_BEST_COMPRESSION, zlib_version, sizeof(strm))); + try + while CCheck(deflate(strm, Z_FINISH)) <> Z_STREAM_END do + begin + P := OutBuf; + Inc(OutBytes, 256); + ReallocMem(OutBuf, OutBytes); + strm.next_out := PChar(Integer(OutBuf) + (Integer(strm.next_out) - Integer(P))); + strm.avail_out := 256; + end; + finally + CCheck(deflateEnd(strm)); + end; + ReallocMem(OutBuf, strm.total_out); + OutBytes := strm.total_out; + except + FreeMem(OutBuf); + raise + end; +end; + + +procedure DecompressBuf(const InBuf: Pointer; InBytes: Integer; + OutEstimate: Integer; out OutBuf: Pointer; out OutBytes: Integer); +var + strm: TZStreamRec; + P: Pointer; + BufInc: Integer; +begin + FillChar(strm, sizeof(strm), 0); + strm.zalloc := zlibAllocMem; + strm.zfree := zlibFreeMem; + BufInc := (InBytes + 255) and not 255; + if OutEstimate = 0 then + OutBytes := BufInc + else + OutBytes := OutEstimate; + GetMem(OutBuf, OutBytes); + try + strm.next_in := InBuf; + strm.avail_in := InBytes; + strm.next_out := OutBuf; + strm.avail_out := OutBytes; + DCheck(inflateInit_(strm, zlib_version, sizeof(strm))); + try + while DCheck(inflate(strm, Z_NO_FLUSH)) <> Z_STREAM_END do + begin + P := OutBuf; + Inc(OutBytes, BufInc); + ReallocMem(OutBuf, OutBytes); + strm.next_out := PChar(Integer(OutBuf) + (Integer(strm.next_out) - Integer(P))); + strm.avail_out := BufInc; + end; + finally + DCheck(inflateEnd(strm)); + end; + ReallocMem(OutBuf, strm.total_out); + OutBytes := strm.total_out; + except + FreeMem(OutBuf); + raise + end; +end; + +procedure DecompressToUserBuf(const InBuf: Pointer; InBytes: Integer; + const OutBuf: Pointer; BufSize: Integer); +var + strm: TZStreamRec; +begin + FillChar(strm, sizeof(strm), 0); + strm.zalloc := zlibAllocMem; + strm.zfree := zlibFreeMem; + strm.next_in := InBuf; + strm.avail_in := 
InBytes; + strm.next_out := OutBuf; + strm.avail_out := BufSize; + DCheck(inflateInit_(strm, zlib_version, sizeof(strm))); + try + if DCheck(inflate(strm, Z_FINISH)) <> Z_STREAM_END then + raise EZlibError.CreateRes(@sTargetBufferTooSmall); + finally + DCheck(inflateEnd(strm)); + end; +end; + +// TCustomZlibStream + +constructor TCustomZLibStream.Create(Strm: TStream); +begin + inherited Create; + FStrm := Strm; + FStrmPos := Strm.Position; + FZRec.zalloc := zlibAllocMem; + FZRec.zfree := zlibFreeMem; +end; + +procedure TCustomZLibStream.Progress(Sender: TObject); +begin + if Assigned(FOnProgress) then FOnProgress(Sender); +end; + + +// TCompressionStream + +constructor TCompressionStream.Create(CompressionLevel: TCompressionLevel; + Dest: TStream); +const + Levels: array [TCompressionLevel] of ShortInt = + (Z_NO_COMPRESSION, Z_BEST_SPEED, Z_DEFAULT_COMPRESSION, Z_BEST_COMPRESSION); +begin + inherited Create(Dest); + FZRec.next_out := FBuffer; + FZRec.avail_out := sizeof(FBuffer); + CCheck(deflateInit_(FZRec, Levels[CompressionLevel], zlib_version, sizeof(FZRec))); +end; + +destructor TCompressionStream.Destroy; +begin + FZRec.next_in := nil; + FZRec.avail_in := 0; + try + if FStrm.Position <> FStrmPos then FStrm.Position := FStrmPos; + while (CCheck(deflate(FZRec, Z_FINISH)) <> Z_STREAM_END) + and (FZRec.avail_out = 0) do + begin + FStrm.WriteBuffer(FBuffer, sizeof(FBuffer)); + FZRec.next_out := FBuffer; + FZRec.avail_out := sizeof(FBuffer); + end; + if FZRec.avail_out < sizeof(FBuffer) then + FStrm.WriteBuffer(FBuffer, sizeof(FBuffer) - FZRec.avail_out); + finally + deflateEnd(FZRec); + end; + inherited Destroy; +end; + +function TCompressionStream.Read(var Buffer; Count: Longint): Longint; +begin + raise ECompressionError.CreateRes(@sInvalidStreamOp); +end; + +function TCompressionStream.Write(const Buffer; Count: Longint): Longint; +begin + FZRec.next_in := @Buffer; + FZRec.avail_in := Count; + if FStrm.Position <> FStrmPos then FStrm.Position := FStrmPos; + while (FZRec.avail_in > 0) do + begin + CCheck(deflate(FZRec, 0)); + if FZRec.avail_out = 0 then + begin + FStrm.WriteBuffer(FBuffer, sizeof(FBuffer)); + FZRec.next_out := FBuffer; + FZRec.avail_out := sizeof(FBuffer); + FStrmPos := FStrm.Position; + Progress(Self); + end; + end; + Result := Count; +end; + +function TCompressionStream.Seek(Offset: Longint; Origin: Word): Longint; +begin + if (Offset = 0) and (Origin = soFromCurrent) then + Result := FZRec.total_in + else + raise ECompressionError.CreateRes(@sInvalidStreamOp); +end; + +function TCompressionStream.GetCompressionRate: Single; +begin + if FZRec.total_in = 0 then + Result := 0 + else + Result := (1.0 - (FZRec.total_out / FZRec.total_in)) * 100.0; +end; + + +// TDecompressionStream + +constructor TDecompressionStream.Create(Source: TStream); +begin + inherited Create(Source); + FZRec.next_in := FBuffer; + FZRec.avail_in := 0; + DCheck(inflateInit_(FZRec, zlib_version, sizeof(FZRec))); +end; + +destructor TDecompressionStream.Destroy; +begin + FStrm.Seek(-FZRec.avail_in, 1); + inflateEnd(FZRec); + inherited Destroy; +end; + +function TDecompressionStream.Read(var Buffer; Count: Longint): Longint; +begin + FZRec.next_out := @Buffer; + FZRec.avail_out := Count; + if FStrm.Position <> FStrmPos then FStrm.Position := FStrmPos; + while (FZRec.avail_out > 0) do + begin + if FZRec.avail_in = 0 then + begin + FZRec.avail_in := FStrm.Read(FBuffer, sizeof(FBuffer)); + if FZRec.avail_in = 0 then + begin + Result := Count - FZRec.avail_out; + Exit; + end; + FZRec.next_in := FBuffer; 
+ FStrmPos := FStrm.Position; + Progress(Self); + end; + CCheck(inflate(FZRec, 0)); + end; + Result := Count; +end; + +function TDecompressionStream.Write(const Buffer; Count: Longint): Longint; +begin + raise EDecompressionError.CreateRes(@sInvalidStreamOp); +end; + +function TDecompressionStream.Seek(Offset: Longint; Origin: Word): Longint; +var + I: Integer; + Buf: array [0..4095] of Char; +begin + if (Offset = 0) and (Origin = soFromBeginning) then + begin + DCheck(inflateReset(FZRec)); + FZRec.next_in := FBuffer; + FZRec.avail_in := 0; + FStrm.Position := 0; + FStrmPos := 0; + end + else if ( (Offset >= 0) and (Origin = soFromCurrent)) or + ( ((Offset - FZRec.total_out) > 0) and (Origin = soFromBeginning)) then + begin + if Origin = soFromBeginning then Dec(Offset, FZRec.total_out); + if Offset > 0 then + begin + for I := 1 to Offset div sizeof(Buf) do + ReadBuffer(Buf, sizeof(Buf)); + ReadBuffer(Buf, Offset mod sizeof(Buf)); + end; + end + else + raise EDecompressionError.CreateRes(@sInvalidStreamOp); + Result := FZRec.total_out; +end; + + +end. ADDED compat/zlib/contrib/delphi/ZLibConst.pas Index: compat/zlib/contrib/delphi/ZLibConst.pas ================================================================== --- compat/zlib/contrib/delphi/ZLibConst.pas +++ compat/zlib/contrib/delphi/ZLibConst.pas @@ -0,0 +1,11 @@ +unit ZLibConst; + +interface + +resourcestring + sTargetBufferTooSmall = 'ZLib error: target buffer may be too small'; + sInvalidStreamOp = 'Invalid stream operation'; + +implementation + +end. ADDED compat/zlib/contrib/delphi/readme.txt Index: compat/zlib/contrib/delphi/readme.txt ================================================================== --- compat/zlib/contrib/delphi/readme.txt +++ compat/zlib/contrib/delphi/readme.txt @@ -0,0 +1,76 @@ + +Overview +======== + +This directory contains an update to the ZLib interface unit, +distributed by Borland as a Delphi supplemental component. + +The original ZLib unit is Copyright (c) 1997,99 Borland Corp., +and is based on zlib version 1.0.4. There are a series of bugs +and security problems associated with that old zlib version, and +we recommend the users to update their ZLib unit. + + +Summary of modifications +======================== + +- Improved makefile, adapted to zlib version 1.2.1. + +- Some field types from TZStreamRec are changed from Integer to + Longint, for consistency with the zlib.h header, and for 64-bit + readiness. + +- The zlib_version constant is updated. + +- The new Z_RLE strategy has its corresponding symbolic constant. + +- The allocation and deallocation functions and function types + (TAlloc, TFree, zlibAllocMem and zlibFreeMem) are now cdecl, + and _malloc and _free are added as C RTL stubs. As a result, + the original C sources of zlib can be compiled out of the box, + and linked to the ZLib unit. + + +Suggestions for improvements +============================ + +Currently, the ZLib unit provides only a limited wrapper around +the zlib library, and much of the original zlib functionality is +missing. Handling compressed file formats like ZIP/GZIP or PNG +cannot be implemented without having this functionality. +Applications that handle these formats are either using their own, +duplicated code, or not using the ZLib unit at all. + +Here are a few suggestions: + +- Checksum class wrappers around adler32() and crc32(), similar + to the Java classes that implement the java.util.zip.Checksum + interface. 
+ +- The ability to read and write raw deflate streams, without the + zlib stream header and trailer. Raw deflate streams are used + in the ZIP file format. + +- The ability to read and write gzip streams, used in the GZIP + file format, and normally produced by the gzip program. + +- The ability to select a different compression strategy, useful + to PNG and MNG image compression, and to multimedia compression + in general. Besides the compression level + + TCompressionLevel = (clNone, clFastest, clDefault, clMax); + + which, in fact, could have used the 'z' prefix and avoided + TColor-like symbols + + TCompressionLevel = (zcNone, zcFastest, zcDefault, zcMax); + + there could be a compression strategy + + TCompressionStrategy = (zsDefault, zsFiltered, zsHuffmanOnly, zsRle); + +- ZIP and GZIP stream handling via TStreams. + + +-- +Cosmin Truta ADDED compat/zlib/contrib/delphi/zlibd32.mak Index: compat/zlib/contrib/delphi/zlibd32.mak ================================================================== --- compat/zlib/contrib/delphi/zlibd32.mak +++ compat/zlib/contrib/delphi/zlibd32.mak @@ -0,0 +1,99 @@ +# Makefile for zlib +# For use with Delphi and C++ Builder under Win32 +# Updated for zlib 1.2.x by Cosmin Truta + +# ------------ Borland C++ ------------ + +# This project uses the Delphi (fastcall/register) calling convention: +LOC = -DZEXPORT=__fastcall -DZEXPORTVA=__cdecl + +CC = bcc32 +LD = bcc32 +AR = tlib +# do not use "-pr" in CFLAGS +CFLAGS = -a -d -k- -O2 $(LOC) +LDFLAGS = + + +# variables +ZLIB_LIB = zlib.lib + +OBJ1 = adler32.obj compress.obj crc32.obj deflate.obj gzclose.obj gzlib.obj gzread.obj +OBJ2 = gzwrite.obj infback.obj inffast.obj inflate.obj inftrees.obj trees.obj uncompr.obj zutil.obj +OBJP1 = +adler32.obj+compress.obj+crc32.obj+deflate.obj+gzclose.obj+gzlib.obj+gzread.obj +OBJP2 = +gzwrite.obj+infback.obj+inffast.obj+inflate.obj+inftrees.obj+trees.obj+uncompr.obj+zutil.obj + + +# targets +all: $(ZLIB_LIB) example.exe minigzip.exe + +.c.obj: + $(CC) -c $(CFLAGS) $*.c + +adler32.obj: adler32.c zlib.h zconf.h + +compress.obj: compress.c zlib.h zconf.h + +crc32.obj: crc32.c zlib.h zconf.h crc32.h + +deflate.obj: deflate.c deflate.h zutil.h zlib.h zconf.h + +gzclose.obj: gzclose.c zlib.h zconf.h gzguts.h + +gzlib.obj: gzlib.c zlib.h zconf.h gzguts.h + +gzread.obj: gzread.c zlib.h zconf.h gzguts.h + +gzwrite.obj: gzwrite.c zlib.h zconf.h gzguts.h + +infback.obj: infback.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h inffixed.h + +inffast.obj: inffast.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h + +inflate.obj: inflate.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h inffixed.h + +inftrees.obj: inftrees.c zutil.h zlib.h zconf.h inftrees.h + +trees.obj: trees.c zutil.h zlib.h zconf.h deflate.h trees.h + +uncompr.obj: uncompr.c zlib.h zconf.h + +zutil.obj: zutil.c zutil.h zlib.h zconf.h + +example.obj: test/example.c zlib.h zconf.h + +minigzip.obj: test/minigzip.c zlib.h zconf.h + + +# For the sake of the old Borland make, +# the command line is cut to fit in the MS-DOS 128 byte limit: +$(ZLIB_LIB): $(OBJ1) $(OBJ2) + -del $(ZLIB_LIB) + $(AR) $(ZLIB_LIB) $(OBJP1) + $(AR) $(ZLIB_LIB) $(OBJP2) + + +# testing +test: example.exe minigzip.exe + example + echo hello world | minigzip | minigzip -d + +example.exe: example.obj $(ZLIB_LIB) + $(LD) $(LDFLAGS) example.obj $(ZLIB_LIB) + +minigzip.exe: minigzip.obj $(ZLIB_LIB) + $(LD) $(LDFLAGS) minigzip.obj $(ZLIB_LIB) + + +# cleanup +clean: + -del *.obj + -del *.exe + -del *.lib + -del *.tds + -del 
zlib.bak + -del foo.gz + ADDED compat/zlib/contrib/dotzlib/DotZLib.build Index: compat/zlib/contrib/dotzlib/DotZLib.build ================================================================== --- compat/zlib/contrib/dotzlib/DotZLib.build +++ compat/zlib/contrib/dotzlib/DotZLib.build @@ -0,0 +1,33 @@ + + + A .Net wrapper library around ZLib1.dll + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ADDED compat/zlib/contrib/dotzlib/DotZLib.chm Index: compat/zlib/contrib/dotzlib/DotZLib.chm ================================================================== --- compat/zlib/contrib/dotzlib/DotZLib.chm +++ compat/zlib/contrib/dotzlib/DotZLib.chm cannot compute difference between binary files ADDED compat/zlib/contrib/dotzlib/DotZLib.sln Index: compat/zlib/contrib/dotzlib/DotZLib.sln ================================================================== --- compat/zlib/contrib/dotzlib/DotZLib.sln +++ compat/zlib/contrib/dotzlib/DotZLib.sln @@ -0,0 +1,21 @@ +Microsoft Visual Studio Solution File, Format Version 8.00 +Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "DotZLib", "DotZLib\DotZLib.csproj", "{BB1EE0B1-1808-46CB-B786-949D91117FC5}" + ProjectSection(ProjectDependencies) = postProject + EndProjectSection +EndProject +Global + GlobalSection(SolutionConfiguration) = preSolution + Debug = Debug + Release = Release + EndGlobalSection + GlobalSection(ProjectConfiguration) = postSolution + {BB1EE0B1-1808-46CB-B786-949D91117FC5}.Debug.ActiveCfg = Debug|.NET + {BB1EE0B1-1808-46CB-B786-949D91117FC5}.Debug.Build.0 = Debug|.NET + {BB1EE0B1-1808-46CB-B786-949D91117FC5}.Release.ActiveCfg = Release|.NET + {BB1EE0B1-1808-46CB-B786-949D91117FC5}.Release.Build.0 = Release|.NET + EndGlobalSection + GlobalSection(ExtensibilityGlobals) = postSolution + EndGlobalSection + GlobalSection(ExtensibilityAddIns) = postSolution + EndGlobalSection +EndGlobal ADDED compat/zlib/contrib/dotzlib/DotZLib/AssemblyInfo.cs Index: compat/zlib/contrib/dotzlib/DotZLib/AssemblyInfo.cs ================================================================== --- compat/zlib/contrib/dotzlib/DotZLib/AssemblyInfo.cs +++ compat/zlib/contrib/dotzlib/DotZLib/AssemblyInfo.cs @@ -0,0 +1,58 @@ +using System.Reflection; +using System.Runtime.CompilerServices; + +// +// General Information about an assembly is controlled through the following +// set of attributes. Change these attribute values to modify the information +// associated with an assembly. +// +[assembly: AssemblyTitle("DotZLib")] +[assembly: AssemblyDescription(".Net bindings for ZLib compression dll 1.2.x")] +[assembly: AssemblyConfiguration("")] +[assembly: AssemblyCompany("Henrik Ravn")] +[assembly: AssemblyProduct("")] +[assembly: AssemblyCopyright("(c) 2004 by Henrik Ravn")] +[assembly: AssemblyTrademark("")] +[assembly: AssemblyCulture("")] + +// +// Version information for an assembly consists of the following four values: +// +// Major Version +// Minor Version +// Build Number +// Revision +// +// You can specify all the values or you can default the Revision and Build Numbers +// by using the '*' as shown below: + +[assembly: AssemblyVersion("1.0.*")] + +// +// In order to sign your assembly you must specify a key to use. Refer to the +// Microsoft .NET Framework documentation for more information on assembly signing. +// +// Use the attributes below to control which key is used for signing. +// +// Notes: +// (*) If no key is specified, the assembly is not signed. 
+// (*) KeyName refers to a key that has been installed in the Crypto Service +// Provider (CSP) on your machine. KeyFile refers to a file which contains +// a key. +// (*) If the KeyFile and the KeyName values are both specified, the +// following processing occurs: +// (1) If the KeyName can be found in the CSP, that key is used. +// (2) If the KeyName does not exist and the KeyFile does exist, the key +// in the KeyFile is installed into the CSP and used. +// (*) In order to create a KeyFile, you can use the sn.exe (Strong Name) utility. +// When specifying the KeyFile, the location of the KeyFile should be +// relative to the project output directory which is +// %Project Directory%\obj\. For example, if your KeyFile is +// located in the project directory, you would specify the AssemblyKeyFile +// attribute as [assembly: AssemblyKeyFile("..\\..\\mykey.snk")] +// (*) Delay Signing is an advanced option - see the Microsoft .NET Framework +// documentation for more information on this. +// +[assembly: AssemblyDelaySign(false)] +[assembly: AssemblyKeyFile("")] +[assembly: AssemblyKeyName("")] ADDED compat/zlib/contrib/dotzlib/DotZLib/ChecksumImpl.cs Index: compat/zlib/contrib/dotzlib/DotZLib/ChecksumImpl.cs ================================================================== --- compat/zlib/contrib/dotzlib/DotZLib/ChecksumImpl.cs +++ compat/zlib/contrib/dotzlib/DotZLib/ChecksumImpl.cs @@ -0,0 +1,202 @@ +// +// © Copyright Henrik Ravn 2004 +// +// Use, modification and distribution are subject to the Boost Software License, Version 1.0. +// (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) +// + +using System; +using System.Runtime.InteropServices; +using System.Text; + + +namespace DotZLib +{ + #region ChecksumGeneratorBase + /// + /// Implements the common functionality needed for all s + /// + /// + public abstract class ChecksumGeneratorBase : ChecksumGenerator + { + /// + /// The value of the current checksum + /// + protected uint _current; + + /// + /// Initializes a new instance of the checksum generator base - the current checksum is + /// set to zero + /// + public ChecksumGeneratorBase() + { + _current = 0; + } + + /// + /// Initializes a new instance of the checksum generator basewith a specified value + /// + /// The value to set the current checksum to + public ChecksumGeneratorBase(uint initialValue) + { + _current = initialValue; + } + + /// + /// Resets the current checksum to zero + /// + public void Reset() { _current = 0; } + + /// + /// Gets the current checksum value + /// + public uint Value { get { return _current; } } + + /// + /// Updates the current checksum with part of an array of bytes + /// + /// The data to update the checksum with + /// Where in data to start updating + /// The number of bytes from data to use + /// The sum of offset and count is larger than the length of data + /// data is a null reference + /// Offset or count is negative. + /// All the other Update methods are implmeneted in terms of this one. + /// This is therefore the only method a derived class has to implement + public abstract void Update(byte[] data, int offset, int count); + + /// + /// Updates the current checksum with an array of bytes. 
+ /// + /// The data to update the checksum with + public void Update(byte[] data) + { + Update(data, 0, data.Length); + } + + /// + /// Updates the current checksum with the data from a string + /// + /// The string to update the checksum with + /// The characters in the string are converted by the UTF-8 encoding + public void Update(string data) + { + Update(Encoding.UTF8.GetBytes(data)); + } + + /// + /// Updates the current checksum with the data from a string, using a specific encoding + /// + /// The string to update the checksum with + /// The encoding to use + public void Update(string data, Encoding encoding) + { + Update(encoding.GetBytes(data)); + } + + } + #endregion + + #region CRC32 + /// + /// Implements a CRC32 checksum generator + /// + public sealed class CRC32Checksum : ChecksumGeneratorBase + { + #region DLL imports + + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl)] + private static extern uint crc32(uint crc, int data, uint length); + + #endregion + + /// + /// Initializes a new instance of the CRC32 checksum generator + /// + public CRC32Checksum() : base() {} + + /// + /// Initializes a new instance of the CRC32 checksum generator with a specified value + /// + /// The value to set the current checksum to + public CRC32Checksum(uint initialValue) : base(initialValue) {} + + /// + /// Updates the current checksum with part of an array of bytes + /// + /// The data to update the checksum with + /// Where in data to start updating + /// The number of bytes from data to use + /// The sum of offset and count is larger than the length of data + /// data is a null reference + /// Offset or count is negative. + public override void Update(byte[] data, int offset, int count) + { + if (offset < 0 || count < 0) throw new ArgumentOutOfRangeException(); + if ((offset+count) > data.Length) throw new ArgumentException(); + GCHandle hData = GCHandle.Alloc(data, GCHandleType.Pinned); + try + { + _current = crc32(_current, hData.AddrOfPinnedObject().ToInt32()+offset, (uint)count); + } + finally + { + hData.Free(); + } + } + + } + #endregion + + #region Adler + /// + /// Implements a checksum generator that computes the Adler checksum on data + /// + public sealed class AdlerChecksum : ChecksumGeneratorBase + { + #region DLL imports + + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl)] + private static extern uint adler32(uint adler, int data, uint length); + + #endregion + + /// + /// Initializes a new instance of the Adler checksum generator + /// + public AdlerChecksum() : base() {} + + /// + /// Initializes a new instance of the Adler checksum generator with a specified value + /// + /// The value to set the current checksum to + public AdlerChecksum(uint initialValue) : base(initialValue) {} + + /// + /// Updates the current checksum with part of an array of bytes + /// + /// The data to update the checksum with + /// Where in data to start updating + /// The number of bytes from data to use + /// The sum of offset and count is larger than the length of data + /// data is a null reference + /// Offset or count is negative. 
+ public override void Update(byte[] data, int offset, int count) + { + if (offset < 0 || count < 0) throw new ArgumentOutOfRangeException(); + if ((offset+count) > data.Length) throw new ArgumentException(); + GCHandle hData = GCHandle.Alloc(data, GCHandleType.Pinned); + try + { + _current = adler32(_current, hData.AddrOfPinnedObject().ToInt32()+offset, (uint)count); + } + finally + { + hData.Free(); + } + } + + } + #endregion + +} ADDED compat/zlib/contrib/dotzlib/DotZLib/CircularBuffer.cs Index: compat/zlib/contrib/dotzlib/DotZLib/CircularBuffer.cs ================================================================== --- compat/zlib/contrib/dotzlib/DotZLib/CircularBuffer.cs +++ compat/zlib/contrib/dotzlib/DotZLib/CircularBuffer.cs @@ -0,0 +1,83 @@ +// +// © Copyright Henrik Ravn 2004 +// +// Use, modification and distribution are subject to the Boost Software License, Version 1.0. +// (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) +// + +using System; +using System.Diagnostics; + +namespace DotZLib +{ + + /// + /// This class implements a circular buffer + /// + internal class CircularBuffer + { + #region Private data + private int _capacity; + private int _head; + private int _tail; + private int _size; + private byte[] _buffer; + #endregion + + public CircularBuffer(int capacity) + { + Debug.Assert( capacity > 0 ); + _buffer = new byte[capacity]; + _capacity = capacity; + _head = 0; + _tail = 0; + _size = 0; + } + + public int Size { get { return _size; } } + + public int Put(byte[] source, int offset, int count) + { + Debug.Assert( count > 0 ); + int trueCount = Math.Min(count, _capacity - Size); + for (int i = 0; i < trueCount; ++i) + _buffer[(_tail+i) % _capacity] = source[offset+i]; + _tail += trueCount; + _tail %= _capacity; + _size += trueCount; + return trueCount; + } + + public bool Put(byte b) + { + if (Size == _capacity) // no room + return false; + _buffer[_tail++] = b; + _tail %= _capacity; + ++_size; + return true; + } + + public int Get(byte[] destination, int offset, int count) + { + int trueCount = Math.Min(count,Size); + for (int i = 0; i < trueCount; ++i) + destination[offset + i] = _buffer[(_head+i) % _capacity]; + _head += trueCount; + _head %= _capacity; + _size -= trueCount; + return trueCount; + } + + public int Get() + { + if (Size == 0) + return -1; + + int result = (int)_buffer[_head++ % _capacity]; + --_size; + return result; + } + + } +} ADDED compat/zlib/contrib/dotzlib/DotZLib/CodecBase.cs Index: compat/zlib/contrib/dotzlib/DotZLib/CodecBase.cs ================================================================== --- compat/zlib/contrib/dotzlib/DotZLib/CodecBase.cs +++ compat/zlib/contrib/dotzlib/DotZLib/CodecBase.cs @@ -0,0 +1,198 @@ +// +// © Copyright Henrik Ravn 2004 +// +// Use, modification and distribution are subject to the Boost Software License, Version 1.0. 
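An illustrative sketch of driving the checksum generators defined in ChecksumImpl.cs above from application code. It assumes the DotZLib assembly is referenced and that ZLIB1.dll can be resolved at run time; the class and variable names are examples only.

    using System;
    using DotZLib;

    class ChecksumDemo
    {
        static void Main()
        {
            // CRC32Checksum starts at 0; Update(string) encodes the text as UTF-8 first.
            ChecksumGenerator crc = new CRC32Checksum();
            crc.Update("penguin");
            Console.WriteLine("crc32   = {0:x8}", crc.Value);

            // Adler-32 is conventionally seeded with 1.
            ChecksumGenerator adler = new AdlerChecksum(1);
            adler.Update(new byte[] { 1, 2, 3, 4, 5, 6, 7 });
            Console.WriteLine("adler32 = {0:x8}", adler.Value);
        }
    }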
+// (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) +// + +using System; +using System.Runtime.InteropServices; + +namespace DotZLib +{ + /// + /// Implements the common functionality needed for all s + /// + public abstract class CodecBase : Codec, IDisposable + { + + #region Data members + + /// + /// Instance of the internal zlib buffer structure that is + /// passed to all functions in the zlib dll + /// + internal ZStream _ztream = new ZStream(); + + /// + /// True if the object instance has been disposed, false otherwise + /// + protected bool _isDisposed = false; + + /// + /// The size of the internal buffers + /// + protected const int kBufferSize = 16384; + + private byte[] _outBuffer = new byte[kBufferSize]; + private byte[] _inBuffer = new byte[kBufferSize]; + + private GCHandle _hInput; + private GCHandle _hOutput; + + private uint _checksum = 0; + + #endregion + + /// + /// Initializes a new instance of the CodeBase class. + /// + public CodecBase() + { + try + { + _hInput = GCHandle.Alloc(_inBuffer, GCHandleType.Pinned); + _hOutput = GCHandle.Alloc(_outBuffer, GCHandleType.Pinned); + } + catch (Exception) + { + CleanUp(false); + throw; + } + } + + + #region Codec Members + + /// + /// Occurs when more processed data are available. + /// + public event DataAvailableHandler DataAvailable; + + /// + /// Fires the event + /// + protected void OnDataAvailable() + { + if (_ztream.total_out > 0) + { + if (DataAvailable != null) + DataAvailable( _outBuffer, 0, (int)_ztream.total_out); + resetOutput(); + } + } + + /// + /// Adds more data to the codec to be processed. + /// + /// Byte array containing the data to be added to the codec + /// Adding data may, or may not, raise the DataAvailable event + public void Add(byte[] data) + { + Add(data,0,data.Length); + } + + /// + /// Adds more data to the codec to be processed. + /// + /// Byte array containing the data to be added to the codec + /// The index of the first byte to add from data + /// The number of bytes to add + /// Adding data may, or may not, raise the DataAvailable event + /// This must be implemented by a derived class + public abstract void Add(byte[] data, int offset, int count); + + /// + /// Finishes up any pending data that needs to be processed and handled. 
+ /// + /// This must be implemented by a derived class + public abstract void Finish(); + + /// + /// Gets the checksum of the data that has been added so far + /// + public uint Checksum { get { return _checksum; } } + + #endregion + + #region Destructor & IDisposable stuff + + /// + /// Destroys this instance + /// + ~CodecBase() + { + CleanUp(false); + } + + /// + /// Releases any unmanaged resources and calls the method of the derived class + /// + public void Dispose() + { + CleanUp(true); + } + + /// + /// Performs any codec specific cleanup + /// + /// This must be implemented by a derived class + protected abstract void CleanUp(); + + // performs the release of the handles and calls the dereived CleanUp() + private void CleanUp(bool isDisposing) + { + if (!_isDisposed) + { + CleanUp(); + if (_hInput.IsAllocated) + _hInput.Free(); + if (_hOutput.IsAllocated) + _hOutput.Free(); + + _isDisposed = true; + } + } + + + #endregion + + #region Helper methods + + /// + /// Copies a number of bytes to the internal codec buffer - ready for proccesing + /// + /// The byte array that contains the data to copy + /// The index of the first byte to copy + /// The number of bytes to copy from data + protected void copyInput(byte[] data, int startIndex, int count) + { + Array.Copy(data, startIndex, _inBuffer,0, count); + _ztream.next_in = _hInput.AddrOfPinnedObject(); + _ztream.total_in = 0; + _ztream.avail_in = (uint)count; + + } + + /// + /// Resets the internal output buffers to a known state - ready for processing + /// + protected void resetOutput() + { + _ztream.total_out = 0; + _ztream.avail_out = kBufferSize; + _ztream.next_out = _hOutput.AddrOfPinnedObject(); + } + + /// + /// Updates the running checksum property + /// + /// The new checksum value + protected void setChecksum(uint newSum) + { + _checksum = newSum; + } + #endregion + + } +} ADDED compat/zlib/contrib/dotzlib/DotZLib/Deflater.cs Index: compat/zlib/contrib/dotzlib/DotZLib/Deflater.cs ================================================================== --- compat/zlib/contrib/dotzlib/DotZLib/Deflater.cs +++ compat/zlib/contrib/dotzlib/DotZLib/Deflater.cs @@ -0,0 +1,106 @@ +// +// © Copyright Henrik Ravn 2004 +// +// Use, modification and distribution are subject to the Boost Software License, Version 1.0. 
+// (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) +// + +using System; +using System.Diagnostics; +using System.Runtime.InteropServices; + +namespace DotZLib +{ + + /// + /// Implements a data compressor, using the deflate algorithm in the ZLib dll + /// + public sealed class Deflater : CodecBase + { + #region Dll imports + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl, CharSet=CharSet.Ansi)] + private static extern int deflateInit_(ref ZStream sz, int level, string vs, int size); + + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl)] + private static extern int deflate(ref ZStream sz, int flush); + + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl)] + private static extern int deflateReset(ref ZStream sz); + + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl)] + private static extern int deflateEnd(ref ZStream sz); + #endregion + + /// + /// Constructs an new instance of the Deflater + /// + /// The compression level to use for this Deflater + public Deflater(CompressLevel level) : base() + { + int retval = deflateInit_(ref _ztream, (int)level, Info.Version, Marshal.SizeOf(_ztream)); + if (retval != 0) + throw new ZLibException(retval, "Could not initialize deflater"); + + resetOutput(); + } + + /// + /// Adds more data to the codec to be processed. + /// + /// Byte array containing the data to be added to the codec + /// The index of the first byte to add from data + /// The number of bytes to add + /// Adding data may, or may not, raise the DataAvailable event + public override void Add(byte[] data, int offset, int count) + { + if (data == null) throw new ArgumentNullException(); + if (offset < 0 || count < 0) throw new ArgumentOutOfRangeException(); + if ((offset+count) > data.Length) throw new ArgumentException(); + + int total = count; + int inputIndex = offset; + int err = 0; + + while (err >= 0 && inputIndex < total) + { + copyInput(data, inputIndex, Math.Min(total - inputIndex, kBufferSize)); + while (err >= 0 && _ztream.avail_in > 0) + { + err = deflate(ref _ztream, (int)FlushTypes.None); + if (err == 0) + while (_ztream.avail_out == 0) + { + OnDataAvailable(); + err = deflate(ref _ztream, (int)FlushTypes.None); + } + inputIndex += (int)_ztream.total_in; + } + } + setChecksum( _ztream.adler ); + } + + + /// + /// Finishes up any pending data that needs to be processed and handled. + /// + public override void Finish() + { + int err; + do + { + err = deflate(ref _ztream, (int)FlushTypes.Finish); + OnDataAvailable(); + } + while (err == 0); + setChecksum( _ztream.adler ); + deflateReset(ref _ztream); + resetOutput(); + } + + /// + /// Closes the internal zlib deflate stream + /// + protected override void CleanUp() { deflateEnd(ref _ztream); } + + } +} ADDED compat/zlib/contrib/dotzlib/DotZLib/DotZLib.cs Index: compat/zlib/contrib/dotzlib/DotZLib/DotZLib.cs ================================================================== --- compat/zlib/contrib/dotzlib/DotZLib/DotZLib.cs +++ compat/zlib/contrib/dotzlib/DotZLib/DotZLib.cs @@ -0,0 +1,288 @@ +// +// © Copyright Henrik Ravn 2004 +// +// Use, modification and distribution are subject to the Boost Software License, Version 1.0. 
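An illustrative sketch of the Add/Finish/DataAvailable pattern used by the Deflater above (and by every Codec implementation). The MemoryStream sink and all names outside the DotZLib types are examples only, and ZLIB1.dll must be loadable at run time.

    using System;
    using System.IO;
    using DotZLib;

    class DeflateDemo
    {
        static MemoryStream compressed = new MemoryStream();

        // Called by the codec each time a block of compressed output is ready.
        static void OnDataAvailable(byte[] data, int startIndex, int count)
        {
            compressed.Write(data, startIndex, count);
        }

        static void Main()
        {
            byte[] input = new byte[10000];   // all zeros: highly compressible sample data

            using (Deflater def = new Deflater(CompressLevel.Default))
            {
                def.DataAvailable += new DataAvailableHandler(OnDataAvailable);
                def.Add(input);    // may raise DataAvailable zero or more times
                def.Finish();      // flushes pending output and updates Checksum
                Console.WriteLine("in={0} out={1} adler={2:x8}",
                                  input.Length, compressed.Length, def.Checksum);
            }
        }
    }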
+// (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) +// + +using System; +using System.IO; +using System.Runtime.InteropServices; +using System.Text; + + +namespace DotZLib +{ + + #region Internal types + + /// + /// Defines constants for the various flush types used with zlib + /// + internal enum FlushTypes + { + None, Partial, Sync, Full, Finish, Block + } + + #region ZStream structure + // internal mapping of the zlib zstream structure for marshalling + [StructLayoutAttribute(LayoutKind.Sequential, Pack=4, Size=0, CharSet=CharSet.Ansi)] + internal struct ZStream + { + public IntPtr next_in; + public uint avail_in; + public uint total_in; + + public IntPtr next_out; + public uint avail_out; + public uint total_out; + + [MarshalAs(UnmanagedType.LPStr)] + string msg; + uint state; + + uint zalloc; + uint zfree; + uint opaque; + + int data_type; + public uint adler; + uint reserved; + } + + #endregion + + #endregion + + #region Public enums + /// + /// Defines constants for the available compression levels in zlib + /// + public enum CompressLevel : int + { + /// + /// The default compression level with a reasonable compromise between compression and speed + /// + Default = -1, + /// + /// No compression at all. The data are passed straight through. + /// + None = 0, + /// + /// The maximum compression rate available. + /// + Best = 9, + /// + /// The fastest available compression level. + /// + Fastest = 1 + } + #endregion + + #region Exception classes + /// + /// The exception that is thrown when an error occurs on the zlib dll + /// + public class ZLibException : ApplicationException + { + /// + /// Initializes a new instance of the class with a specified + /// error message and error code + /// + /// The zlib error code that caused the exception + /// A message that (hopefully) describes the error + public ZLibException(int errorCode, string msg) : base(String.Format("ZLib error {0} {1}", errorCode, msg)) + { + } + + /// + /// Initializes a new instance of the class with a specified + /// error code + /// + /// The zlib error code that caused the exception + public ZLibException(int errorCode) : base(String.Format("ZLib error {0}", errorCode)) + { + } + } + #endregion + + #region Interfaces + + /// + /// Declares methods and properties that enables a running checksum to be calculated + /// + public interface ChecksumGenerator + { + /// + /// Gets the current value of the checksum + /// + uint Value { get; } + + /// + /// Clears the current checksum to 0 + /// + void Reset(); + + /// + /// Updates the current checksum with an array of bytes + /// + /// The data to update the checksum with + void Update(byte[] data); + + /// + /// Updates the current checksum with part of an array of bytes + /// + /// The data to update the checksum with + /// Where in data to start updating + /// The number of bytes from data to use + /// The sum of offset and count is larger than the length of data + /// data is a null reference + /// Offset or count is negative. 
+ void Update(byte[] data, int offset, int count); + + /// + /// Updates the current checksum with the data from a string + /// + /// The string to update the checksum with + /// The characters in the string are converted by the UTF-8 encoding + void Update(string data); + + /// + /// Updates the current checksum with the data from a string, using a specific encoding + /// + /// The string to update the checksum with + /// The encoding to use + void Update(string data, Encoding encoding); + } + + + /// + /// Represents the method that will be called from a codec when new data + /// are available. + /// + /// The byte array containing the processed data + /// The index of the first processed byte in data + /// The number of processed bytes available + /// On return from this method, the data may be overwritten, so grab it while you can. + /// You cannot assume that startIndex will be zero. + /// + public delegate void DataAvailableHandler(byte[] data, int startIndex, int count); + + /// + /// Declares methods and events for implementing compressors/decompressors + /// + public interface Codec + { + /// + /// Occurs when more processed data are available. + /// + event DataAvailableHandler DataAvailable; + + /// + /// Adds more data to the codec to be processed. + /// + /// Byte array containing the data to be added to the codec + /// Adding data may, or may not, raise the DataAvailable event + void Add(byte[] data); + + /// + /// Adds more data to the codec to be processed. + /// + /// Byte array containing the data to be added to the codec + /// The index of the first byte to add from data + /// The number of bytes to add + /// Adding data may, or may not, raise the DataAvailable event + void Add(byte[] data, int offset, int count); + + /// + /// Finishes up any pending data that needs to be processed and handled. + /// + void Finish(); + + /// + /// Gets the checksum of the data that has been added so far + /// + uint Checksum { get; } + + + } + + #endregion + + #region Classes + /// + /// Encapsulates general information about the ZLib library + /// + public class Info + { + #region DLL imports + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl)] + private static extern uint zlibCompileFlags(); + + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl)] + private static extern string zlibVersion(); + #endregion + + #region Private stuff + private uint _flags; + + // helper function that unpacks a bitsize mask + private static int bitSize(uint bits) + { + switch (bits) + { + case 0: return 16; + case 1: return 32; + case 2: return 64; + } + return -1; + } + #endregion + + /// + /// Constructs an instance of the Info class. 
+ /// + public Info() + { + _flags = zlibCompileFlags(); + } + + /// + /// True if the library is compiled with debug info + /// + public bool HasDebugInfo { get { return 0 != (_flags & 0x100); } } + + /// + /// True if the library is compiled with assembly optimizations + /// + public bool UsesAssemblyCode { get { return 0 != (_flags & 0x200); } } + + /// + /// Gets the size of the unsigned int that was compiled into Zlib + /// + public int SizeOfUInt { get { return bitSize(_flags & 3); } } + + /// + /// Gets the size of the unsigned long that was compiled into Zlib + /// + public int SizeOfULong { get { return bitSize((_flags >> 2) & 3); } } + + /// + /// Gets the size of the pointers that were compiled into Zlib + /// + public int SizeOfPointer { get { return bitSize((_flags >> 4) & 3); } } + + /// + /// Gets the size of the z_off_t type that was compiled into Zlib + /// + public int SizeOfOffset { get { return bitSize((_flags >> 6) & 3); } } + + /// + /// Gets the version of ZLib as a string, e.g. "1.2.1" + /// + public static string Version { get { return zlibVersion(); } } + } + + #endregion + +} ADDED compat/zlib/contrib/dotzlib/DotZLib/DotZLib.csproj Index: compat/zlib/contrib/dotzlib/DotZLib/DotZLib.csproj ================================================================== --- compat/zlib/contrib/dotzlib/DotZLib/DotZLib.csproj +++ compat/zlib/contrib/dotzlib/DotZLib/DotZLib.csproj @@ -0,0 +1,141 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ADDED compat/zlib/contrib/dotzlib/DotZLib/GZipStream.cs Index: compat/zlib/contrib/dotzlib/DotZLib/GZipStream.cs ================================================================== --- compat/zlib/contrib/dotzlib/DotZLib/GZipStream.cs +++ compat/zlib/contrib/dotzlib/DotZLib/GZipStream.cs @@ -0,0 +1,301 @@ +// +// © Copyright Henrik Ravn 2004 +// +// Use, modification and distribution are subject to the Boost Software License, Version 1.0. +// (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) +// + +using System; +using System.IO; +using System.Runtime.InteropServices; + +namespace DotZLib +{ + /// + /// Implements a compressed , in GZip (.gz) format. 
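A small sketch showing how the Info class above surfaces the zlib build parameters decoded from zlibCompileFlags(); the reported values naturally depend on whichever ZLIB1.dll is found at run time.

    using System;
    using DotZLib;

    class InfoDemo
    {
        static void Main()
        {
            Info info = new Info();
            Console.WriteLine("zlib version     : " + Info.Version);
            Console.WriteLine("uInt size (bits) : " + info.SizeOfUInt);
            Console.WriteLine("pointer size     : " + info.SizeOfPointer);
            Console.WriteLine("debug build      : " + info.HasDebugInfo);
        }
    }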
+ /// + public class GZipStream : Stream, IDisposable + { + #region Dll Imports + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl, CharSet=CharSet.Ansi)] + private static extern IntPtr gzopen(string name, string mode); + + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl)] + private static extern int gzclose(IntPtr gzFile); + + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl)] + private static extern int gzwrite(IntPtr gzFile, int data, int length); + + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl)] + private static extern int gzread(IntPtr gzFile, int data, int length); + + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl)] + private static extern int gzgetc(IntPtr gzFile); + + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl)] + private static extern int gzputc(IntPtr gzFile, int c); + + #endregion + + #region Private data + private IntPtr _gzFile; + private bool _isDisposed = false; + private bool _isWriting; + #endregion + + #region Constructors + /// + /// Creates a new file as a writeable GZipStream + /// + /// The name of the compressed file to create + /// The compression level to use when adding data + /// If an error occurred in the internal zlib function + public GZipStream(string fileName, CompressLevel level) + { + _isWriting = true; + _gzFile = gzopen(fileName, String.Format("wb{0}", (int)level)); + if (_gzFile == IntPtr.Zero) + throw new ZLibException(-1, "Could not open " + fileName); + } + + /// + /// Opens an existing file as a readable GZipStream + /// + /// The name of the file to open + /// If an error occurred in the internal zlib function + public GZipStream(string fileName) + { + _isWriting = false; + _gzFile = gzopen(fileName, "rb"); + if (_gzFile == IntPtr.Zero) + throw new ZLibException(-1, "Could not open " + fileName); + + } + #endregion + + #region Access properties + /// + /// Returns true of this stream can be read from, false otherwise + /// + public override bool CanRead + { + get + { + return !_isWriting; + } + } + + + /// + /// Returns false. + /// + public override bool CanSeek + { + get + { + return false; + } + } + + /// + /// Returns true if this tsream is writeable, false otherwise + /// + public override bool CanWrite + { + get + { + return _isWriting; + } + } + #endregion + + #region Destructor & IDispose stuff + + /// + /// Destroys this instance + /// + ~GZipStream() + { + cleanUp(false); + } + + /// + /// Closes the external file handle + /// + public void Dispose() + { + cleanUp(true); + } + + // Does the actual closing of the file handle. + private void cleanUp(bool isDisposing) + { + if (!_isDisposed) + { + gzclose(_gzFile); + _isDisposed = true; + } + } + #endregion + + #region Basic reading and writing + /// + /// Attempts to read a number of bytes from the stream. + /// + /// The destination data buffer + /// The index of the first destination byte in buffer + /// The number of bytes requested + /// The number of bytes read + /// If buffer is null + /// If count or offset are negative + /// If offset + count is > buffer.Length + /// If this stream is not readable. + /// If this stream has been disposed. 
+ public override int Read(byte[] buffer, int offset, int count) + { + if (!CanRead) throw new NotSupportedException(); + if (buffer == null) throw new ArgumentNullException(); + if (offset < 0 || count < 0) throw new ArgumentOutOfRangeException(); + if ((offset+count) > buffer.Length) throw new ArgumentException(); + if (_isDisposed) throw new ObjectDisposedException("GZipStream"); + + GCHandle h = GCHandle.Alloc(buffer, GCHandleType.Pinned); + int result; + try + { + result = gzread(_gzFile, h.AddrOfPinnedObject().ToInt32() + offset, count); + if (result < 0) + throw new IOException(); + } + finally + { + h.Free(); + } + return result; + } + + /// + /// Attempts to read a single byte from the stream. + /// + /// The byte that was read, or -1 in case of error or End-Of-File + public override int ReadByte() + { + if (!CanRead) throw new NotSupportedException(); + if (_isDisposed) throw new ObjectDisposedException("GZipStream"); + return gzgetc(_gzFile); + } + + /// + /// Writes a number of bytes to the stream + /// + /// + /// + /// + /// If buffer is null + /// If count or offset are negative + /// If offset + count is > buffer.Length + /// If this stream is not writeable. + /// If this stream has been disposed. + public override void Write(byte[] buffer, int offset, int count) + { + if (!CanWrite) throw new NotSupportedException(); + if (buffer == null) throw new ArgumentNullException(); + if (offset < 0 || count < 0) throw new ArgumentOutOfRangeException(); + if ((offset+count) > buffer.Length) throw new ArgumentException(); + if (_isDisposed) throw new ObjectDisposedException("GZipStream"); + + GCHandle h = GCHandle.Alloc(buffer, GCHandleType.Pinned); + try + { + int result = gzwrite(_gzFile, h.AddrOfPinnedObject().ToInt32() + offset, count); + if (result < 0) + throw new IOException(); + } + finally + { + h.Free(); + } + } + + /// + /// Writes a single byte to the stream + /// + /// The byte to add to the stream. + /// If this stream is not writeable. + /// If this stream has been disposed. + public override void WriteByte(byte value) + { + if (!CanWrite) throw new NotSupportedException(); + if (_isDisposed) throw new ObjectDisposedException("GZipStream"); + + int result = gzputc(_gzFile, (int)value); + if (result < 0) + throw new IOException(); + } + #endregion + + #region Position & length stuff + /// + /// Not supported. + /// + /// + /// Always thrown + public override void SetLength(long value) + { + throw new NotSupportedException(); + } + + /// + /// Not suppported. + /// + /// + /// + /// + /// Always thrown + public override long Seek(long offset, SeekOrigin origin) + { + throw new NotSupportedException(); + } + + /// + /// Flushes the GZipStream. + /// + /// In this implementation, this method does nothing. This is because excessive + /// flushing may degrade the achievable compression rates. + public override void Flush() + { + // left empty on purpose + } + + /// + /// Gets/sets the current position in the GZipStream. Not suppported. + /// + /// In this implementation this property is not supported + /// Always thrown + public override long Position + { + get + { + throw new NotSupportedException(); + } + set + { + throw new NotSupportedException(); + } + } + + /// + /// Gets the size of the stream. Not suppported. 
+ /// + /// In this implementation this property is not supported + /// Always thrown + public override long Length + { + get + { + throw new NotSupportedException(); + } + } + #endregion + } +} ADDED compat/zlib/contrib/dotzlib/DotZLib/Inflater.cs Index: compat/zlib/contrib/dotzlib/DotZLib/Inflater.cs ================================================================== --- compat/zlib/contrib/dotzlib/DotZLib/Inflater.cs +++ compat/zlib/contrib/dotzlib/DotZLib/Inflater.cs @@ -0,0 +1,105 @@ +// +// © Copyright Henrik Ravn 2004 +// +// Use, modification and distribution are subject to the Boost Software License, Version 1.0. +// (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) +// + +using System; +using System.Diagnostics; +using System.Runtime.InteropServices; + +namespace DotZLib +{ + + /// + /// Implements a data decompressor, using the inflate algorithm in the ZLib dll + /// + public class Inflater : CodecBase + { + #region Dll imports + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl, CharSet=CharSet.Ansi)] + private static extern int inflateInit_(ref ZStream sz, string vs, int size); + + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl)] + private static extern int inflate(ref ZStream sz, int flush); + + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl)] + private static extern int inflateReset(ref ZStream sz); + + [DllImport("ZLIB1.dll", CallingConvention=CallingConvention.Cdecl)] + private static extern int inflateEnd(ref ZStream sz); + #endregion + + /// + /// Constructs an new instance of the Inflater + /// + public Inflater() : base() + { + int retval = inflateInit_(ref _ztream, Info.Version, Marshal.SizeOf(_ztream)); + if (retval != 0) + throw new ZLibException(retval, "Could not initialize inflater"); + + resetOutput(); + } + + + /// + /// Adds more data to the codec to be processed. + /// + /// Byte array containing the data to be added to the codec + /// The index of the first byte to add from data + /// The number of bytes to add + /// Adding data may, or may not, raise the DataAvailable event + public override void Add(byte[] data, int offset, int count) + { + if (data == null) throw new ArgumentNullException(); + if (offset < 0 || count < 0) throw new ArgumentOutOfRangeException(); + if ((offset+count) > data.Length) throw new ArgumentException(); + + int total = count; + int inputIndex = offset; + int err = 0; + + while (err >= 0 && inputIndex < total) + { + copyInput(data, inputIndex, Math.Min(total - inputIndex, kBufferSize)); + err = inflate(ref _ztream, (int)FlushTypes.None); + if (err == 0) + while (_ztream.avail_out == 0) + { + OnDataAvailable(); + err = inflate(ref _ztream, (int)FlushTypes.None); + } + + inputIndex += (int)_ztream.total_in; + } + setChecksum( _ztream.adler ); + } + + + /// + /// Finishes up any pending data that needs to be processed and handled. 
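An illustrative round trip through the GZipStream defined above, writing and re-reading raw bytes (UnitTests.cs later in this check-in exercises the same stream through BinaryWriter/BinaryReader); the file name is an example only.

    using System;
    using DotZLib;

    class GZipDemo
    {
        static void Main()
        {
            byte[] payload = { 10, 20, 30, 40, 50 };

            // The compression level is fixed when the writeable stream is created.
            using (GZipStream gzOut = new GZipStream("demo.gz", CompressLevel.Best))
            {
                gzOut.Write(payload, 0, payload.Length);
            }

            // ReadByte returns -1 on end-of-file or error.
            using (GZipStream gzIn = new GZipStream("demo.gz"))
            {
                int b;
                while ((b = gzIn.ReadByte()) != -1)
                    Console.Write(b + " ");
            }
        }
    }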
+ /// + public override void Finish() + { + int err; + do + { + err = inflate(ref _ztream, (int)FlushTypes.Finish); + OnDataAvailable(); + } + while (err == 0); + setChecksum( _ztream.adler ); + inflateReset(ref _ztream); + resetOutput(); + } + + /// + /// Closes the internal zlib inflate stream + /// + protected override void CleanUp() { inflateEnd(ref _ztream); } + + + } +} ADDED compat/zlib/contrib/dotzlib/DotZLib/UnitTests.cs Index: compat/zlib/contrib/dotzlib/DotZLib/UnitTests.cs ================================================================== --- compat/zlib/contrib/dotzlib/DotZLib/UnitTests.cs +++ compat/zlib/contrib/dotzlib/DotZLib/UnitTests.cs @@ -0,0 +1,274 @@ +// +// © Copyright Henrik Ravn 2004 +// +// Use, modification and distribution are subject to the Boost Software License, Version 1.0. +// (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) +// + +using System; +using System.Collections; +using System.IO; + +// uncomment the define below to include unit tests +//#define nunit +#if nunit +using NUnit.Framework; + +// Unit tests for the DotZLib class library +// ---------------------------------------- +// +// Use this with NUnit 2 from http://www.nunit.org +// + +namespace DotZLibTests +{ + using DotZLib; + + // helper methods + internal class Utils + { + public static bool byteArrEqual( byte[] lhs, byte[] rhs ) + { + if (lhs.Length != rhs.Length) + return false; + for (int i = lhs.Length-1; i >= 0; --i) + if (lhs[i] != rhs[i]) + return false; + return true; + } + + } + + + [TestFixture] + public class CircBufferTests + { + #region Circular buffer tests + [Test] + public void SinglePutGet() + { + CircularBuffer buf = new CircularBuffer(10); + Assert.AreEqual( 0, buf.Size ); + Assert.AreEqual( -1, buf.Get() ); + + Assert.IsTrue(buf.Put( 1 )); + Assert.AreEqual( 1, buf.Size ); + Assert.AreEqual( 1, buf.Get() ); + Assert.AreEqual( 0, buf.Size ); + Assert.AreEqual( -1, buf.Get() ); + } + + [Test] + public void BlockPutGet() + { + CircularBuffer buf = new CircularBuffer(10); + byte[] arr = {1,2,3,4,5,6,7,8,9,10}; + Assert.AreEqual( 10, buf.Put(arr,0,10) ); + Assert.AreEqual( 10, buf.Size ); + Assert.IsFalse( buf.Put(11) ); + Assert.AreEqual( 1, buf.Get() ); + Assert.IsTrue( buf.Put(11) ); + + byte[] arr2 = (byte[])arr.Clone(); + Assert.AreEqual( 9, buf.Get(arr2,1,9) ); + Assert.IsTrue( Utils.byteArrEqual(arr,arr2) ); + } + + #endregion + } + + [TestFixture] + public class ChecksumTests + { + #region CRC32 Tests + [Test] + public void CRC32_Null() + { + CRC32Checksum crc32 = new CRC32Checksum(); + Assert.AreEqual( 0, crc32.Value ); + + crc32 = new CRC32Checksum(1); + Assert.AreEqual( 1, crc32.Value ); + + crc32 = new CRC32Checksum(556); + Assert.AreEqual( 556, crc32.Value ); + } + + [Test] + public void CRC32_Data() + { + CRC32Checksum crc32 = new CRC32Checksum(); + byte[] data = { 1,2,3,4,5,6,7 }; + crc32.Update(data); + Assert.AreEqual( 0x70e46888, crc32.Value ); + + crc32 = new CRC32Checksum(); + crc32.Update("penguin"); + Assert.AreEqual( 0x0e5c1a120, crc32.Value ); + + crc32 = new CRC32Checksum(1); + crc32.Update("penguin"); + Assert.AreEqual(0x43b6aa94, crc32.Value); + + } + #endregion + + #region Adler tests + + [Test] + public void Adler_Null() + { + AdlerChecksum adler = new AdlerChecksum(); + Assert.AreEqual(0, adler.Value); + + adler = new AdlerChecksum(1); + Assert.AreEqual( 1, adler.Value ); + + adler = new AdlerChecksum(556); + Assert.AreEqual( 556, adler.Value ); + } + + [Test] + public void Adler_Data() + { + AdlerChecksum 
adler = new AdlerChecksum(1); + byte[] data = { 1,2,3,4,5,6,7 }; + adler.Update(data); + Assert.AreEqual( 0x5b001d, adler.Value ); + + adler = new AdlerChecksum(); + adler.Update("penguin"); + Assert.AreEqual(0x0bcf02f6, adler.Value ); + + adler = new AdlerChecksum(1); + adler.Update("penguin"); + Assert.AreEqual(0x0bd602f7, adler.Value); + + } + #endregion + } + + [TestFixture] + public class InfoTests + { + #region Info tests + [Test] + public void Info_Version() + { + Info info = new Info(); + Assert.AreEqual("1.2.8", Info.Version); + Assert.AreEqual(32, info.SizeOfUInt); + Assert.AreEqual(32, info.SizeOfULong); + Assert.AreEqual(32, info.SizeOfPointer); + Assert.AreEqual(32, info.SizeOfOffset); + } + #endregion + } + + [TestFixture] + public class DeflateInflateTests + { + #region Deflate tests + [Test] + public void Deflate_Init() + { + using (Deflater def = new Deflater(CompressLevel.Default)) + { + } + } + + private ArrayList compressedData = new ArrayList(); + private uint adler1; + + private ArrayList uncompressedData = new ArrayList(); + private uint adler2; + + public void CDataAvail(byte[] data, int startIndex, int count) + { + for (int i = 0; i < count; ++i) + compressedData.Add(data[i+startIndex]); + } + + [Test] + public void Deflate_Compress() + { + compressedData.Clear(); + + byte[] testData = new byte[35000]; + for (int i = 0; i < testData.Length; ++i) + testData[i] = 5; + + using (Deflater def = new Deflater((CompressLevel)5)) + { + def.DataAvailable += new DataAvailableHandler(CDataAvail); + def.Add(testData); + def.Finish(); + adler1 = def.Checksum; + } + } + #endregion + + #region Inflate tests + [Test] + public void Inflate_Init() + { + using (Inflater inf = new Inflater()) + { + } + } + + private void DDataAvail(byte[] data, int startIndex, int count) + { + for (int i = 0; i < count; ++i) + uncompressedData.Add(data[i+startIndex]); + } + + [Test] + public void Inflate_Expand() + { + uncompressedData.Clear(); + + using (Inflater inf = new Inflater()) + { + inf.DataAvailable += new DataAvailableHandler(DDataAvail); + inf.Add((byte[])compressedData.ToArray(typeof(byte))); + inf.Finish(); + adler2 = inf.Checksum; + } + Assert.AreEqual( adler1, adler2 ); + } + #endregion + } + + [TestFixture] + public class GZipStreamTests + { + #region GZipStream test + [Test] + public void GZipStream_WriteRead() + { + using (GZipStream gzOut = new GZipStream("gzstream.gz", CompressLevel.Best)) + { + BinaryWriter writer = new BinaryWriter(gzOut); + writer.Write("hi there"); + writer.Write(Math.PI); + writer.Write(42); + } + + using (GZipStream gzIn = new GZipStream("gzstream.gz")) + { + BinaryReader reader = new BinaryReader(gzIn); + string s = reader.ReadString(); + Assert.AreEqual("hi there",s); + double d = reader.ReadDouble(); + Assert.AreEqual(Math.PI, d); + int i = reader.ReadInt32(); + Assert.AreEqual(42,i); + } + + } + #endregion + } +} + +#endif ADDED compat/zlib/contrib/dotzlib/LICENSE_1_0.txt Index: compat/zlib/contrib/dotzlib/LICENSE_1_0.txt ================================================================== --- compat/zlib/contrib/dotzlib/LICENSE_1_0.txt +++ compat/zlib/contrib/dotzlib/LICENSE_1_0.txt @@ -0,0 +1,23 @@ +Boost Software License - Version 1.0 - August 17th, 2003 + +Permission is hereby granted, free of charge, to any person or organization +obtaining a copy of the software and accompanying documentation covered by +this license (the "Software") to use, reproduce, display, distribute, +execute, and transmit the Software, and to prepare derivative works of the 
+Software, and to permit third-parties to whom the Software is furnished to +do so, all subject to the following: + +The copyright notices in the Software and this entire statement, including +the above license grant, this restriction and the following disclaimer, +must be included in all copies of the Software, in whole or in part, and +all derivative works of the Software, unless such copies or derivative +works are solely in the form of machine-executable object code generated by +a source language processor. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT +SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE +FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE, +ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER +DEALINGS IN THE SOFTWARE. ADDED compat/zlib/contrib/dotzlib/readme.txt Index: compat/zlib/contrib/dotzlib/readme.txt ================================================================== --- compat/zlib/contrib/dotzlib/readme.txt +++ compat/zlib/contrib/dotzlib/readme.txt @@ -0,0 +1,58 @@ +This directory contains a .Net wrapper class library for the ZLib1.dll + +The wrapper includes support for inflating/deflating memory buffers, +.Net streaming wrappers for the gz streams part of zlib, and wrappers +for the checksum parts of zlib. See DotZLib/UnitTests.cs for examples. + +Directory structure: +-------------------- + +LICENSE_1_0.txt - License file. +readme.txt - This file. +DotZLib.chm - Class library documentation +DotZLib.build - NAnt build file +DotZLib.sln - Microsoft Visual Studio 2003 solution file + +DotZLib\*.cs - Source files for the class library + +Unit tests: +----------- +The file DotZLib/UnitTests.cs contains unit tests for use with NUnit 2.1 or higher. +To include unit tests in the build, define nunit before building. + + +Build instructions: +------------------- + +1. Using Visual Studio.Net 2003: + Open DotZLib.sln in VS.Net and build from there. Output file (DotZLib.dll) + will be found ./DotZLib/bin/release or ./DotZLib/bin/debug, depending on + you are building the release or debug version of the library. Check + DotZLib/UnitTests.cs for instructions on how to include unit tests in the + build. + +2. Using NAnt: + Open a command prompt with access to the build environment and run nant + in the same directory as the DotZLib.build file. + You can define 2 properties on the nant command-line to control the build: + debug={true|false} to toggle between release/debug builds (default=true). + nunit={true|false} to include or esclude unit tests (default=true). + Also the target clean will remove binaries. + Output file (DotZLib.dll) will be found in either ./DotZLib/bin/release + or ./DotZLib/bin/debug, depending on whether you are building the release + or debug version of the library. + + Examples: + nant -D:debug=false -D:nunit=false + will build a release mode version of the library without unit tests. + nant + will build a debug version of the library with unit tests + nant clean + will remove all previously built files. + + +--------------------------------- +Copyright (c) Henrik Ravn 2004 + +Use, modification and distribution are subject to the Boost Software License, Version 1.0. 
+(See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) ADDED compat/zlib/contrib/gcc_gvmat64/gvmat64.S Index: compat/zlib/contrib/gcc_gvmat64/gvmat64.S ================================================================== --- compat/zlib/contrib/gcc_gvmat64/gvmat64.S +++ compat/zlib/contrib/gcc_gvmat64/gvmat64.S @@ -0,0 +1,574 @@ +/* +;uInt longest_match_x64( +; deflate_state *s, +; IPos cur_match); // current match + +; gvmat64.S -- Asm portion of the optimized longest_match for 32 bits x86_64 +; (AMD64 on Athlon 64, Opteron, Phenom +; and Intel EM64T on Pentium 4 with EM64T, Pentium D, Core 2 Duo, Core I5/I7) +; this file is translation from gvmat64.asm to GCC 4.x (for Linux, Mac XCode) +; Copyright (C) 1995-2010 Jean-loup Gailly, Brian Raiter and Gilles Vollant. +; +; File written by Gilles Vollant, by converting to assembly the longest_match +; from Jean-loup Gailly in deflate.c of zLib and infoZip zip. +; and by taking inspiration on asm686 with masm, optimised assembly code +; from Brian Raiter, written 1998 +; +; This software is provided 'as-is', without any express or implied +; warranty. In no event will the authors be held liable for any damages +; arising from the use of this software. +; +; Permission is granted to anyone to use this software for any purpose, +; including commercial applications, and to alter it and redistribute it +; freely, subject to the following restrictions: +; +; 1. The origin of this software must not be misrepresented; you must not +; claim that you wrote the original software. If you use this software +; in a product, an acknowledgment in the product documentation would be +; appreciated but is not required. +; 2. Altered source versions must be plainly marked as such, and must not be +; misrepresented as being the original software +; 3. This notice may not be removed or altered from any source distribution. +; +; http://www.zlib.net +; http://www.winimage.com/zLibDll +; http://www.muppetlabs.com/~breadbox/software/assembly.html +; +; to compile this file for zLib, I use option: +; gcc -c -arch x86_64 gvmat64.S + + +;uInt longest_match(s, cur_match) +; deflate_state *s; +; IPos cur_match; // current match / +; +; with XCode for Mac, I had strange error with some jump on intel syntax +; this is why BEFORE_JMP and AFTER_JMP are used + */ + + +#define BEFORE_JMP .att_syntax +#define AFTER_JMP .intel_syntax noprefix + +#ifndef NO_UNDERLINE +# define match_init _match_init +# define longest_match _longest_match +#endif + +.intel_syntax noprefix + +.globl match_init, longest_match +.text +longest_match: + + + +#define LocalVarsSize 96 +/* +; register used : rax,rbx,rcx,rdx,rsi,rdi,r8,r9,r10,r11,r12 +; free register : r14,r15 +; register can be saved : rsp +*/ + +#define chainlenwmask (rsp + 8 - LocalVarsSize) +#define nicematch (rsp + 16 - LocalVarsSize) + +#define save_rdi (rsp + 24 - LocalVarsSize) +#define save_rsi (rsp + 32 - LocalVarsSize) +#define save_rbx (rsp + 40 - LocalVarsSize) +#define save_rbp (rsp + 48 - LocalVarsSize) +#define save_r12 (rsp + 56 - LocalVarsSize) +#define save_r13 (rsp + 64 - LocalVarsSize) +#define save_r14 (rsp + 72 - LocalVarsSize) +#define save_r15 (rsp + 80 - LocalVarsSize) + + +/* +; all the +4 offsets are due to the addition of pending_buf_size (in zlib +; in the deflate_state structure since the asm code was first written +; (if you compile with zlib 1.0.4 or older, remove the +4). 
+; Note : these value are good with a 8 bytes boundary pack structure +*/ + +#define MAX_MATCH 258 +#define MIN_MATCH 3 +#define MIN_LOOKAHEAD (MAX_MATCH+MIN_MATCH+1) + +/* +;;; Offsets for fields in the deflate_state structure. These numbers +;;; are calculated from the definition of deflate_state, with the +;;; assumption that the compiler will dword-align the fields. (Thus, +;;; changing the definition of deflate_state could easily cause this +;;; program to crash horribly, without so much as a warning at +;;; compile time. Sigh.) + +; all the +zlib1222add offsets are due to the addition of fields +; in zlib in the deflate_state structure since the asm code was first written +; (if you compile with zlib 1.0.4 or older, use "zlib1222add equ (-4)"). +; (if you compile with zlib between 1.0.5 and 1.2.2.1, use "zlib1222add equ 0"). +; if you compile with zlib 1.2.2.2 or later , use "zlib1222add equ 8"). +*/ + + + +/* you can check the structure offset by running + +#include +#include +#include "deflate.h" + +void print_depl() +{ +deflate_state ds; +deflate_state *s=&ds; +printf("size pointer=%u\n",(int)sizeof(void*)); + +printf("#define dsWSize %u\n",(int)(((char*)&(s->w_size))-((char*)s))); +printf("#define dsWMask %u\n",(int)(((char*)&(s->w_mask))-((char*)s))); +printf("#define dsWindow %u\n",(int)(((char*)&(s->window))-((char*)s))); +printf("#define dsPrev %u\n",(int)(((char*)&(s->prev))-((char*)s))); +printf("#define dsMatchLen %u\n",(int)(((char*)&(s->match_length))-((char*)s))); +printf("#define dsPrevMatch %u\n",(int)(((char*)&(s->prev_match))-((char*)s))); +printf("#define dsStrStart %u\n",(int)(((char*)&(s->strstart))-((char*)s))); +printf("#define dsMatchStart %u\n",(int)(((char*)&(s->match_start))-((char*)s))); +printf("#define dsLookahead %u\n",(int)(((char*)&(s->lookahead))-((char*)s))); +printf("#define dsPrevLen %u\n",(int)(((char*)&(s->prev_length))-((char*)s))); +printf("#define dsMaxChainLen %u\n",(int)(((char*)&(s->max_chain_length))-((char*)s))); +printf("#define dsGoodMatch %u\n",(int)(((char*)&(s->good_match))-((char*)s))); +printf("#define dsNiceMatch %u\n",(int)(((char*)&(s->nice_match))-((char*)s))); +} +*/ + +#define dsWSize 68 +#define dsWMask 76 +#define dsWindow 80 +#define dsPrev 96 +#define dsMatchLen 144 +#define dsPrevMatch 148 +#define dsStrStart 156 +#define dsMatchStart 160 +#define dsLookahead 164 +#define dsPrevLen 168 +#define dsMaxChainLen 172 +#define dsGoodMatch 188 +#define dsNiceMatch 192 + +#define window_size [ rcx + dsWSize] +#define WMask [ rcx + dsWMask] +#define window_ad [ rcx + dsWindow] +#define prev_ad [ rcx + dsPrev] +#define strstart [ rcx + dsStrStart] +#define match_start [ rcx + dsMatchStart] +#define Lookahead [ rcx + dsLookahead] //; 0ffffffffh on infozip +#define prev_length [ rcx + dsPrevLen] +#define max_chain_length [ rcx + dsMaxChainLen] +#define good_match [ rcx + dsGoodMatch] +#define nice_match [ rcx + dsNiceMatch] + +/* +; windows: +; parameter 1 in rcx(deflate state s), param 2 in rdx (cur match) + +; see http://weblogs.asp.net/oldnewthing/archive/2004/01/14/58579.aspx and +; http://msdn.microsoft.com/library/en-us/kmarch/hh/kmarch/64bitAMD_8e951dd2-ee77-4728-8702-55ce4b5dd24a.xml.asp +; +; All registers must be preserved across the call, except for +; rax, rcx, rdx, r8, r9, r10, and r11, which are scratch. 
+ +; +; gcc on macosx-linux: +; see http://www.x86-64.org/documentation/abi-0.99.pdf +; param 1 in rdi, param 2 in rsi +; rbx, rsp, rbp, r12 to r15 must be preserved + +;;; Save registers that the compiler may be using, and adjust esp to +;;; make room for our stack frame. + + +;;; Retrieve the function arguments. r8d will hold cur_match +;;; throughout the entire function. edx will hold the pointer to the +;;; deflate_state structure during the function's setup (before +;;; entering the main loop. + +; ms: parameter 1 in rcx (deflate_state* s), param 2 in edx -> r8 (cur match) +; mac: param 1 in rdi, param 2 rsi +; this clear high 32 bits of r8, which can be garbage in both r8 and rdx +*/ + mov [save_rbx],rbx + mov [save_rbp],rbp + + + mov rcx,rdi + + mov r8d,esi + + + mov [save_r12],r12 + mov [save_r13],r13 + mov [save_r14],r14 + mov [save_r15],r15 + + +//;;; uInt wmask = s->w_mask; +//;;; unsigned chain_length = s->max_chain_length; +//;;; if (s->prev_length >= s->good_match) { +//;;; chain_length >>= 2; +//;;; } + + + mov edi, prev_length + mov esi, good_match + mov eax, WMask + mov ebx, max_chain_length + cmp edi, esi + jl LastMatchGood + shr ebx, 2 +LastMatchGood: + +//;;; chainlen is decremented once beforehand so that the function can +//;;; use the sign flag instead of the zero flag for the exit test. +//;;; It is then shifted into the high word, to make room for the wmask +//;;; value, which it will always accompany. + + dec ebx + shl ebx, 16 + or ebx, eax + +//;;; on zlib only +//;;; if ((uInt)nice_match > s->lookahead) nice_match = s->lookahead; + + + + mov eax, nice_match + mov [chainlenwmask], ebx + mov r10d, Lookahead + cmp r10d, eax + cmovnl r10d, eax + mov [nicematch],r10d + + + +//;;; register Bytef *scan = s->window + s->strstart; + mov r10, window_ad + mov ebp, strstart + lea r13, [r10 + rbp] + +//;;; Determine how many bytes the scan ptr is off from being +//;;; dword-aligned. + + mov r9,r13 + neg r13 + and r13,3 + +//;;; IPos limit = s->strstart > (IPos)MAX_DIST(s) ? +//;;; s->strstart - (IPos)MAX_DIST(s) : NIL; + + + mov eax, window_size + sub eax, MIN_LOOKAHEAD + + + xor edi,edi + sub ebp, eax + + mov r11d, prev_length + + cmovng ebp,edi + +//;;; int best_len = s->prev_length; + + +//;;; Store the sum of s->window + best_len in esi locally, and in esi. + + lea rsi,[r10+r11] + +//;;; register ush scan_start = *(ushf*)scan; +//;;; register ush scan_end = *(ushf*)(scan+best_len-1); +//;;; Posf *prev = s->prev; + + movzx r12d,word ptr [r9] + movzx ebx, word ptr [r9 + r11 - 1] + + mov rdi, prev_ad + +//;;; Jump into the main loop. 
+ + mov edx, [chainlenwmask] + + cmp bx,word ptr [rsi + r8 - 1] + jz LookupLoopIsZero + + + +LookupLoop1: + and r8d, edx + + movzx r8d, word ptr [rdi + r8*2] + cmp r8d, ebp + jbe LeaveNow + + + + sub edx, 0x00010000 + BEFORE_JMP + js LeaveNow + AFTER_JMP + +LoopEntry1: + cmp bx,word ptr [rsi + r8 - 1] + BEFORE_JMP + jz LookupLoopIsZero + AFTER_JMP + +LookupLoop2: + and r8d, edx + + movzx r8d, word ptr [rdi + r8*2] + cmp r8d, ebp + BEFORE_JMP + jbe LeaveNow + AFTER_JMP + sub edx, 0x00010000 + BEFORE_JMP + js LeaveNow + AFTER_JMP + +LoopEntry2: + cmp bx,word ptr [rsi + r8 - 1] + BEFORE_JMP + jz LookupLoopIsZero + AFTER_JMP + +LookupLoop4: + and r8d, edx + + movzx r8d, word ptr [rdi + r8*2] + cmp r8d, ebp + BEFORE_JMP + jbe LeaveNow + AFTER_JMP + sub edx, 0x00010000 + BEFORE_JMP + js LeaveNow + AFTER_JMP + +LoopEntry4: + + cmp bx,word ptr [rsi + r8 - 1] + BEFORE_JMP + jnz LookupLoop1 + jmp LookupLoopIsZero + AFTER_JMP +/* +;;; do { +;;; match = s->window + cur_match; +;;; if (*(ushf*)(match+best_len-1) != scan_end || +;;; *(ushf*)match != scan_start) continue; +;;; [...] +;;; } while ((cur_match = prev[cur_match & wmask]) > limit +;;; && --chain_length != 0); +;;; +;;; Here is the inner loop of the function. The function will spend the +;;; majority of its time in this loop, and majority of that time will +;;; be spent in the first ten instructions. +;;; +;;; Within this loop: +;;; ebx = scanend +;;; r8d = curmatch +;;; edx = chainlenwmask - i.e., ((chainlen << 16) | wmask) +;;; esi = windowbestlen - i.e., (window + bestlen) +;;; edi = prev +;;; ebp = limit +*/ +.balign 16 +LookupLoop: + and r8d, edx + + movzx r8d, word ptr [rdi + r8*2] + cmp r8d, ebp + BEFORE_JMP + jbe LeaveNow + AFTER_JMP + sub edx, 0x00010000 + BEFORE_JMP + js LeaveNow + AFTER_JMP + +LoopEntry: + + cmp bx,word ptr [rsi + r8 - 1] + BEFORE_JMP + jnz LookupLoop1 + AFTER_JMP +LookupLoopIsZero: + cmp r12w, word ptr [r10 + r8] + BEFORE_JMP + jnz LookupLoop1 + AFTER_JMP + + +//;;; Store the current value of chainlen. + mov [chainlenwmask], edx +/* +;;; Point edi to the string under scrutiny, and esi to the string we +;;; are hoping to match it up with. In actuality, esi and edi are +;;; both pointed (MAX_MATCH_8 - scanalign) bytes ahead, and edx is +;;; initialized to -(MAX_MATCH_8 - scanalign). +*/ + lea rsi,[r8+r10] + mov rdx, 0xfffffffffffffef8 //; -(MAX_MATCH_8) + lea rsi, [rsi + r13 + 0x0108] //;MAX_MATCH_8] + lea rdi, [r9 + r13 + 0x0108] //;MAX_MATCH_8] + + prefetcht1 [rsi+rdx] + prefetcht1 [rdi+rdx] + +/* +;;; Test the strings for equality, 8 bytes at a time. At the end, +;;; adjust rdx so that it is offset to the exact byte that mismatched. +;;; +;;; We already know at this point that the first three bytes of the +;;; strings match each other, and they can be safely passed over before +;;; starting the compare loop. So what this code does is skip over 0-3 +;;; bytes, as much as necessary in order to dword-align the edi +;;; pointer. (rsi will still be misaligned three times out of four.) +;;; +;;; It should be confessed that this loop usually does not represent +;;; much of the total running time. Replacing it with a more +;;; straightforward "rep cmpsb" would not drastically degrade +;;; performance. 
+*/ + +LoopCmps: + mov rax, [rsi + rdx] + xor rax, [rdi + rdx] + jnz LeaveLoopCmps + + mov rax, [rsi + rdx + 8] + xor rax, [rdi + rdx + 8] + jnz LeaveLoopCmps8 + + + mov rax, [rsi + rdx + 8+8] + xor rax, [rdi + rdx + 8+8] + jnz LeaveLoopCmps16 + + add rdx,8+8+8 + + BEFORE_JMP + jnz LoopCmps + jmp LenMaximum + AFTER_JMP + +LeaveLoopCmps16: add rdx,8 +LeaveLoopCmps8: add rdx,8 +LeaveLoopCmps: + + test eax, 0x0000FFFF + jnz LenLower + + test eax,0xffffffff + + jnz LenLower32 + + add rdx,4 + shr rax,32 + or ax,ax + BEFORE_JMP + jnz LenLower + AFTER_JMP + +LenLower32: + shr eax,16 + add rdx,2 + +LenLower: + sub al, 1 + adc rdx, 0 +//;;; Calculate the length of the match. If it is longer than MAX_MATCH, +//;;; then automatically accept it as the best possible match and leave. + + lea rax, [rdi + rdx] + sub rax, r9 + cmp eax, MAX_MATCH + BEFORE_JMP + jge LenMaximum + AFTER_JMP +/* +;;; If the length of the match is not longer than the best match we +;;; have so far, then forget it and return to the lookup loop. +;/////////////////////////////////// +*/ + cmp eax, r11d + jg LongerMatch + + lea rsi,[r10+r11] + + mov rdi, prev_ad + mov edx, [chainlenwmask] + BEFORE_JMP + jmp LookupLoop + AFTER_JMP +/* +;;; s->match_start = cur_match; +;;; best_len = len; +;;; if (len >= nice_match) break; +;;; scan_end = *(ushf*)(scan+best_len-1); +*/ +LongerMatch: + mov r11d, eax + mov match_start, r8d + cmp eax, [nicematch] + BEFORE_JMP + jge LeaveNow + AFTER_JMP + + lea rsi,[r10+rax] + + movzx ebx, word ptr [r9 + rax - 1] + mov rdi, prev_ad + mov edx, [chainlenwmask] + BEFORE_JMP + jmp LookupLoop + AFTER_JMP + +//;;; Accept the current string, with the maximum possible length. + +LenMaximum: + mov r11d,MAX_MATCH + mov match_start, r8d + +//;;; if ((uInt)best_len <= s->lookahead) return (uInt)best_len; +//;;; return s->lookahead; + +LeaveNow: + mov eax, Lookahead + cmp r11d, eax + cmovng eax, r11d + + + +//;;; Restore the stack and return from whence we came. + + +// mov rsi,[save_rsi] +// mov rdi,[save_rdi] + mov rbx,[save_rbx] + mov rbp,[save_rbp] + mov r12,[save_r12] + mov r13,[save_r13] + mov r14,[save_r14] + mov r15,[save_r15] + + + ret 0 +//; please don't remove this string ! +//; Your can freely use gvmat64 in any free or commercial app +//; but it is far better don't remove the string in the binary! + // db 0dh,0ah,"asm686 with masm, optimised assembly code from Brian Raiter, written 1998, converted to amd 64 by Gilles Vollant 2005",0dh,0ah,0 + + +match_init: + ret 0 + + ADDED compat/zlib/contrib/infback9/README Index: compat/zlib/contrib/infback9/README ================================================================== --- compat/zlib/contrib/infback9/README +++ compat/zlib/contrib/infback9/README @@ -0,0 +1,1 @@ +See infback9.h for what this is and how to use it. ADDED compat/zlib/contrib/infback9/infback9.c Index: compat/zlib/contrib/infback9/infback9.c ================================================================== --- compat/zlib/contrib/infback9/infback9.c +++ compat/zlib/contrib/infback9/infback9.c @@ -0,0 +1,615 @@ +/* infback9.c -- inflate deflate64 data using a call-back interface + * Copyright (C) 1995-2008 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +#include "zutil.h" +#include "infback9.h" +#include "inftree9.h" +#include "inflate9.h" + +#define WSIZE 65536UL + +/* + strm provides memory allocation functions in zalloc and zfree, or + Z_NULL to use the library memory allocation functions. 
+ + window is a user-supplied window and output buffer that is 64K bytes. + */ +int ZEXPORT inflateBack9Init_(strm, window, version, stream_size) +z_stream FAR *strm; +unsigned char FAR *window; +const char *version; +int stream_size; +{ + struct inflate_state FAR *state; + + if (version == Z_NULL || version[0] != ZLIB_VERSION[0] || + stream_size != (int)(sizeof(z_stream))) + return Z_VERSION_ERROR; + if (strm == Z_NULL || window == Z_NULL) + return Z_STREAM_ERROR; + strm->msg = Z_NULL; /* in case we return an error */ + if (strm->zalloc == (alloc_func)0) { + strm->zalloc = zcalloc; + strm->opaque = (voidpf)0; + } + if (strm->zfree == (free_func)0) strm->zfree = zcfree; + state = (struct inflate_state FAR *)ZALLOC(strm, 1, + sizeof(struct inflate_state)); + if (state == Z_NULL) return Z_MEM_ERROR; + Tracev((stderr, "inflate: allocated\n")); + strm->state = (voidpf)state; + state->window = window; + return Z_OK; +} + +/* + Build and output length and distance decoding tables for fixed code + decoding. + */ +#ifdef MAKEFIXED +#include + +void makefixed9(void) +{ + unsigned sym, bits, low, size; + code *next, *lenfix, *distfix; + struct inflate_state state; + code fixed[544]; + + /* literal/length table */ + sym = 0; + while (sym < 144) state.lens[sym++] = 8; + while (sym < 256) state.lens[sym++] = 9; + while (sym < 280) state.lens[sym++] = 7; + while (sym < 288) state.lens[sym++] = 8; + next = fixed; + lenfix = next; + bits = 9; + inflate_table9(LENS, state.lens, 288, &(next), &(bits), state.work); + + /* distance table */ + sym = 0; + while (sym < 32) state.lens[sym++] = 5; + distfix = next; + bits = 5; + inflate_table9(DISTS, state.lens, 32, &(next), &(bits), state.work); + + /* write tables */ + puts(" /* inffix9.h -- table for decoding deflate64 fixed codes"); + puts(" * Generated automatically by makefixed9()."); + puts(" */"); + puts(""); + puts(" /* WARNING: this file should *not* be used by applications."); + puts(" It is part of the implementation of this library and is"); + puts(" subject to change. Applications should only use zlib.h."); + puts(" */"); + puts(""); + size = 1U << 9; + printf(" static const code lenfix[%u] = {", size); + low = 0; + for (;;) { + if ((low % 6) == 0) printf("\n "); + printf("{%u,%u,%d}", lenfix[low].op, lenfix[low].bits, + lenfix[low].val); + if (++low == size) break; + putchar(','); + } + puts("\n };"); + size = 1U << 5; + printf("\n static const code distfix[%u] = {", size); + low = 0; + for (;;) { + if ((low % 5) == 0) printf("\n "); + printf("{%u,%u,%d}", distfix[low].op, distfix[low].bits, + distfix[low].val); + if (++low == size) break; + putchar(','); + } + puts("\n };"); +} +#endif /* MAKEFIXED */ + +/* Macros for inflateBack(): */ + +/* Clear the input bit accumulator */ +#define INITBITS() \ + do { \ + hold = 0; \ + bits = 0; \ + } while (0) + +/* Assure that some input is available. If input is requested, but denied, + then return a Z_BUF_ERROR from inflateBack(). */ +#define PULL() \ + do { \ + if (have == 0) { \ + have = in(in_desc, &next); \ + if (have == 0) { \ + next = Z_NULL; \ + ret = Z_BUF_ERROR; \ + goto inf_leave; \ + } \ + } \ + } while (0) + +/* Get a byte of input into the bit accumulator, or return from inflateBack() + with an error if there is no input available. */ +#define PULLBYTE() \ + do { \ + PULL(); \ + have--; \ + hold += (unsigned long)(*next++) << bits; \ + bits += 8; \ + } while (0) + +/* Assure that there are at least n bits in the bit accumulator. 
If there is + not enough available input to do that, then return from inflateBack() with + an error. */ +#define NEEDBITS(n) \ + do { \ + while (bits < (unsigned)(n)) \ + PULLBYTE(); \ + } while (0) + +/* Return the low n bits of the bit accumulator (n <= 16) */ +#define BITS(n) \ + ((unsigned)hold & ((1U << (n)) - 1)) + +/* Remove n bits from the bit accumulator */ +#define DROPBITS(n) \ + do { \ + hold >>= (n); \ + bits -= (unsigned)(n); \ + } while (0) + +/* Remove zero to seven bits as needed to go to a byte boundary */ +#define BYTEBITS() \ + do { \ + hold >>= bits & 7; \ + bits -= bits & 7; \ + } while (0) + +/* Assure that some output space is available, by writing out the window + if it's full. If the write fails, return from inflateBack() with a + Z_BUF_ERROR. */ +#define ROOM() \ + do { \ + if (left == 0) { \ + put = window; \ + left = WSIZE; \ + wrap = 1; \ + if (out(out_desc, put, (unsigned)left)) { \ + ret = Z_BUF_ERROR; \ + goto inf_leave; \ + } \ + } \ + } while (0) + +/* + strm provides the memory allocation functions and window buffer on input, + and provides information on the unused input on return. For Z_DATA_ERROR + returns, strm will also provide an error message. + + in() and out() are the call-back input and output functions. When + inflateBack() needs more input, it calls in(). When inflateBack() has + filled the window with output, or when it completes with data in the + window, it calls out() to write out the data. The application must not + change the provided input until in() is called again or inflateBack() + returns. The application must not change the window/output buffer until + inflateBack() returns. + + in() and out() are called with a descriptor parameter provided in the + inflateBack() call. This parameter can be a structure that provides the + information required to do the read or write, as well as accumulated + information on the input and output such as totals and check values. + + in() should return zero on failure. out() should return non-zero on + failure. If either in() or out() fails, than inflateBack() returns a + Z_BUF_ERROR. strm->next_in can be checked for Z_NULL to see whether it + was in() or out() that caused in the error. Otherwise, inflateBack() + returns Z_STREAM_END on success, Z_DATA_ERROR for an deflate format + error, or Z_MEM_ERROR if it could not allocate memory for the state. + inflateBack() can also return Z_STREAM_ERROR if the input parameters + are not correct, i.e. strm is Z_NULL or the state was not initialized. 
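
As a point of reference for the in()/out() contract described above, a
minimal caller might look like the following sketch. It is illustrative
only and not part of this file: the src_t type, the pull()/push() helpers
and the inflate_deflate64() wrapper are invented names, and the sketch
assumes the in_func/out_func typedefs from zlib.h.

    #include <stdio.h>
    #include <stdlib.h>
    #include "zlib.h"        /* must be included before infback9.h */
    #include "infback9.h"

    typedef struct {
        z_const unsigned char *buf;   /* whole compressed input */
        unsigned len;
        int used;                     /* already handed to inflateBack9()? */
    } src_t;

    /* in() callback: hand the entire input buffer over on the first call */
    static unsigned pull(void *desc, z_const unsigned char **buf)
    {
        src_t *src = (src_t *)desc;
        if (src->used) return 0;          /* no more input available */
        src->used = 1;
        *buf = src->buf;
        return src->len;
    }

    /* out() callback: write the window contents; non-zero means failure */
    static int push(void *desc, unsigned char *buf, unsigned len)
    {
        return fwrite(buf, 1, len, (FILE *)desc) != len;
    }

    /* inflate a raw deflate64 stream held in memory onto a stdio stream */
    static int inflate_deflate64(z_const unsigned char *data, unsigned len,
                                 FILE *out)
    {
        z_stream strm;
        unsigned char *window = malloc(65536UL);    /* required 64K window */
        src_t src;
        int ret = Z_MEM_ERROR;

        src.buf = data; src.len = len; src.used = 0;
        if (window != NULL) {
            strm.zalloc = Z_NULL;        /* use the library allocators */
            strm.zfree  = Z_NULL;
            strm.opaque = Z_NULL;
            strm.next_in  = Z_NULL;      /* pull() supplies the first input */
            strm.avail_in = 0;
            ret = inflateBack9Init(&strm, window);
            if (ret == Z_OK) {
                ret = inflateBack9(&strm, pull, &src, push, out);
                inflateBack9End(&strm);
            }
            free(window);
        }
        return ret;
    }

A caller would invoke inflate_deflate64(data, n, stdout) and treat
Z_STREAM_END, not Z_OK, as success, exactly as described above.
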
+ */ +int ZEXPORT inflateBack9(strm, in, in_desc, out, out_desc) +z_stream FAR *strm; +in_func in; +void FAR *in_desc; +out_func out; +void FAR *out_desc; +{ + struct inflate_state FAR *state; + z_const unsigned char FAR *next; /* next input */ + unsigned char FAR *put; /* next output */ + unsigned have; /* available input */ + unsigned long left; /* available output */ + inflate_mode mode; /* current inflate mode */ + int lastblock; /* true if processing last block */ + int wrap; /* true if the window has wrapped */ + unsigned char FAR *window; /* allocated sliding window, if needed */ + unsigned long hold; /* bit buffer */ + unsigned bits; /* bits in bit buffer */ + unsigned extra; /* extra bits needed */ + unsigned long length; /* literal or length of data to copy */ + unsigned long offset; /* distance back to copy string from */ + unsigned long copy; /* number of stored or match bytes to copy */ + unsigned char FAR *from; /* where to copy match bytes from */ + code const FAR *lencode; /* starting table for length/literal codes */ + code const FAR *distcode; /* starting table for distance codes */ + unsigned lenbits; /* index bits for lencode */ + unsigned distbits; /* index bits for distcode */ + code here; /* current decoding table entry */ + code last; /* parent table entry */ + unsigned len; /* length to copy for repeats, bits to drop */ + int ret; /* return code */ + static const unsigned short order[19] = /* permutation of code lengths */ + {16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15}; +#include "inffix9.h" + + /* Check that the strm exists and that the state was initialized */ + if (strm == Z_NULL || strm->state == Z_NULL) + return Z_STREAM_ERROR; + state = (struct inflate_state FAR *)strm->state; + + /* Reset the state */ + strm->msg = Z_NULL; + mode = TYPE; + lastblock = 0; + wrap = 0; + window = state->window; + next = strm->next_in; + have = next != Z_NULL ? strm->avail_in : 0; + hold = 0; + bits = 0; + put = window; + left = WSIZE; + lencode = Z_NULL; + distcode = Z_NULL; + + /* Inflate until end of block marked as last */ + for (;;) + switch (mode) { + case TYPE: + /* determine and dispatch block type */ + if (lastblock) { + BYTEBITS(); + mode = DONE; + break; + } + NEEDBITS(3); + lastblock = BITS(1); + DROPBITS(1); + switch (BITS(2)) { + case 0: /* stored block */ + Tracev((stderr, "inflate: stored block%s\n", + lastblock ? " (last)" : "")); + mode = STORED; + break; + case 1: /* fixed block */ + lencode = lenfix; + lenbits = 9; + distcode = distfix; + distbits = 5; + Tracev((stderr, "inflate: fixed codes block%s\n", + lastblock ? " (last)" : "")); + mode = LEN; /* decode codes */ + break; + case 2: /* dynamic block */ + Tracev((stderr, "inflate: dynamic codes block%s\n", + lastblock ? 
" (last)" : "")); + mode = TABLE; + break; + case 3: + strm->msg = (char *)"invalid block type"; + mode = BAD; + } + DROPBITS(2); + break; + + case STORED: + /* get and verify stored block length */ + BYTEBITS(); /* go to byte boundary */ + NEEDBITS(32); + if ((hold & 0xffff) != ((hold >> 16) ^ 0xffff)) { + strm->msg = (char *)"invalid stored block lengths"; + mode = BAD; + break; + } + length = (unsigned)hold & 0xffff; + Tracev((stderr, "inflate: stored length %lu\n", + length)); + INITBITS(); + + /* copy stored block from input to output */ + while (length != 0) { + copy = length; + PULL(); + ROOM(); + if (copy > have) copy = have; + if (copy > left) copy = left; + zmemcpy(put, next, copy); + have -= copy; + next += copy; + left -= copy; + put += copy; + length -= copy; + } + Tracev((stderr, "inflate: stored end\n")); + mode = TYPE; + break; + + case TABLE: + /* get dynamic table entries descriptor */ + NEEDBITS(14); + state->nlen = BITS(5) + 257; + DROPBITS(5); + state->ndist = BITS(5) + 1; + DROPBITS(5); + state->ncode = BITS(4) + 4; + DROPBITS(4); + if (state->nlen > 286) { + strm->msg = (char *)"too many length symbols"; + mode = BAD; + break; + } + Tracev((stderr, "inflate: table sizes ok\n")); + + /* get code length code lengths (not a typo) */ + state->have = 0; + while (state->have < state->ncode) { + NEEDBITS(3); + state->lens[order[state->have++]] = (unsigned short)BITS(3); + DROPBITS(3); + } + while (state->have < 19) + state->lens[order[state->have++]] = 0; + state->next = state->codes; + lencode = (code const FAR *)(state->next); + lenbits = 7; + ret = inflate_table9(CODES, state->lens, 19, &(state->next), + &(lenbits), state->work); + if (ret) { + strm->msg = (char *)"invalid code lengths set"; + mode = BAD; + break; + } + Tracev((stderr, "inflate: code lengths ok\n")); + + /* get length and distance code code lengths */ + state->have = 0; + while (state->have < state->nlen + state->ndist) { + for (;;) { + here = lencode[BITS(lenbits)]; + if ((unsigned)(here.bits) <= bits) break; + PULLBYTE(); + } + if (here.val < 16) { + NEEDBITS(here.bits); + DROPBITS(here.bits); + state->lens[state->have++] = here.val; + } + else { + if (here.val == 16) { + NEEDBITS(here.bits + 2); + DROPBITS(here.bits); + if (state->have == 0) { + strm->msg = (char *)"invalid bit length repeat"; + mode = BAD; + break; + } + len = (unsigned)(state->lens[state->have - 1]); + copy = 3 + BITS(2); + DROPBITS(2); + } + else if (here.val == 17) { + NEEDBITS(here.bits + 3); + DROPBITS(here.bits); + len = 0; + copy = 3 + BITS(3); + DROPBITS(3); + } + else { + NEEDBITS(here.bits + 7); + DROPBITS(here.bits); + len = 0; + copy = 11 + BITS(7); + DROPBITS(7); + } + if (state->have + copy > state->nlen + state->ndist) { + strm->msg = (char *)"invalid bit length repeat"; + mode = BAD; + break; + } + while (copy--) + state->lens[state->have++] = (unsigned short)len; + } + } + + /* handle error breaks in while */ + if (mode == BAD) break; + + /* check for end-of-block code (better have one) */ + if (state->lens[256] == 0) { + strm->msg = (char *)"invalid code -- missing end-of-block"; + mode = BAD; + break; + } + + /* build code tables -- note: do not change the lenbits or distbits + values here (9 and 6) without reading the comments in inftree9.h + concerning the ENOUGH constants, which depend on those values */ + state->next = state->codes; + lencode = (code const FAR *)(state->next); + lenbits = 9; + ret = inflate_table9(LENS, state->lens, state->nlen, + &(state->next), &(lenbits), state->work); + if (ret) { + 
strm->msg = (char *)"invalid literal/lengths set"; + mode = BAD; + break; + } + distcode = (code const FAR *)(state->next); + distbits = 6; + ret = inflate_table9(DISTS, state->lens + state->nlen, + state->ndist, &(state->next), &(distbits), + state->work); + if (ret) { + strm->msg = (char *)"invalid distances set"; + mode = BAD; + break; + } + Tracev((stderr, "inflate: codes ok\n")); + mode = LEN; + + case LEN: + /* get a literal, length, or end-of-block code */ + for (;;) { + here = lencode[BITS(lenbits)]; + if ((unsigned)(here.bits) <= bits) break; + PULLBYTE(); + } + if (here.op && (here.op & 0xf0) == 0) { + last = here; + for (;;) { + here = lencode[last.val + + (BITS(last.bits + last.op) >> last.bits)]; + if ((unsigned)(last.bits + here.bits) <= bits) break; + PULLBYTE(); + } + DROPBITS(last.bits); + } + DROPBITS(here.bits); + length = (unsigned)here.val; + + /* process literal */ + if (here.op == 0) { + Tracevv((stderr, here.val >= 0x20 && here.val < 0x7f ? + "inflate: literal '%c'\n" : + "inflate: literal 0x%02x\n", here.val)); + ROOM(); + *put++ = (unsigned char)(length); + left--; + mode = LEN; + break; + } + + /* process end of block */ + if (here.op & 32) { + Tracevv((stderr, "inflate: end of block\n")); + mode = TYPE; + break; + } + + /* invalid code */ + if (here.op & 64) { + strm->msg = (char *)"invalid literal/length code"; + mode = BAD; + break; + } + + /* length code -- get extra bits, if any */ + extra = (unsigned)(here.op) & 31; + if (extra != 0) { + NEEDBITS(extra); + length += BITS(extra); + DROPBITS(extra); + } + Tracevv((stderr, "inflate: length %lu\n", length)); + + /* get distance code */ + for (;;) { + here = distcode[BITS(distbits)]; + if ((unsigned)(here.bits) <= bits) break; + PULLBYTE(); + } + if ((here.op & 0xf0) == 0) { + last = here; + for (;;) { + here = distcode[last.val + + (BITS(last.bits + last.op) >> last.bits)]; + if ((unsigned)(last.bits + here.bits) <= bits) break; + PULLBYTE(); + } + DROPBITS(last.bits); + } + DROPBITS(here.bits); + if (here.op & 64) { + strm->msg = (char *)"invalid distance code"; + mode = BAD; + break; + } + offset = (unsigned)here.val; + + /* get distance extra bits, if any */ + extra = (unsigned)(here.op) & 15; + if (extra != 0) { + NEEDBITS(extra); + offset += BITS(extra); + DROPBITS(extra); + } + if (offset > WSIZE - (wrap ? 
0: left)) { + strm->msg = (char *)"invalid distance too far back"; + mode = BAD; + break; + } + Tracevv((stderr, "inflate: distance %lu\n", offset)); + + /* copy match from window to output */ + do { + ROOM(); + copy = WSIZE - offset; + if (copy < left) { + from = put + copy; + copy = left - copy; + } + else { + from = put - offset; + copy = left; + } + if (copy > length) copy = length; + length -= copy; + left -= copy; + do { + *put++ = *from++; + } while (--copy); + } while (length != 0); + break; + + case DONE: + /* inflate stream terminated properly -- write leftover output */ + ret = Z_STREAM_END; + if (left < WSIZE) { + if (out(out_desc, window, (unsigned)(WSIZE - left))) + ret = Z_BUF_ERROR; + } + goto inf_leave; + + case BAD: + ret = Z_DATA_ERROR; + goto inf_leave; + + default: /* can't happen, but makes compilers happy */ + ret = Z_STREAM_ERROR; + goto inf_leave; + } + + /* Return unused input */ + inf_leave: + strm->next_in = next; + strm->avail_in = have; + return ret; +} + +int ZEXPORT inflateBack9End(strm) +z_stream FAR *strm; +{ + if (strm == Z_NULL || strm->state == Z_NULL || strm->zfree == (free_func)0) + return Z_STREAM_ERROR; + ZFREE(strm, strm->state); + strm->state = Z_NULL; + Tracev((stderr, "inflate: end\n")); + return Z_OK; +} ADDED compat/zlib/contrib/infback9/infback9.h Index: compat/zlib/contrib/infback9/infback9.h ================================================================== --- compat/zlib/contrib/infback9/infback9.h +++ compat/zlib/contrib/infback9/infback9.h @@ -0,0 +1,37 @@ +/* infback9.h -- header for using inflateBack9 functions + * Copyright (C) 2003 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* + * This header file and associated patches provide a decoder for PKWare's + * undocumented deflate64 compression method (method 9). Use with infback9.c, + * inftree9.h, inftree9.c, and inffix9.h. These patches are not supported. + * This should be compiled with zlib, since it uses zutil.h and zutil.o. + * This code has not yet been tested on 16-bit architectures. See the + * comments in zlib.h for inflateBack() usage. These functions are used + * identically, except that there is no windowBits parameter, and a 64K + * window must be provided. Also if int's are 16 bits, then a zero for + * the third parameter of the "out" function actually means 65536UL. + * zlib.h must be included before this header file. + */ + +#ifdef __cplusplus +extern "C" { +#endif + +ZEXTERN int ZEXPORT inflateBack9 OF((z_stream FAR *strm, + in_func in, void FAR *in_desc, + out_func out, void FAR *out_desc)); +ZEXTERN int ZEXPORT inflateBack9End OF((z_stream FAR *strm)); +ZEXTERN int ZEXPORT inflateBack9Init_ OF((z_stream FAR *strm, + unsigned char FAR *window, + const char *version, + int stream_size)); +#define inflateBack9Init(strm, window) \ + inflateBack9Init_((strm), (window), \ + ZLIB_VERSION, sizeof(z_stream)) + +#ifdef __cplusplus +} +#endif ADDED compat/zlib/contrib/infback9/inffix9.h Index: compat/zlib/contrib/infback9/inffix9.h ================================================================== --- compat/zlib/contrib/infback9/inffix9.h +++ compat/zlib/contrib/infback9/inffix9.h @@ -0,0 +1,107 @@ + /* inffix9.h -- table for decoding deflate64 fixed codes + * Generated automatically by makefixed9(). + */ + + /* WARNING: this file should *not* be used by applications. + It is part of the implementation of this library and is + subject to change. Applications should only use zlib.h. 
+ */ + + static const code lenfix[512] = { + {96,7,0},{0,8,80},{0,8,16},{132,8,115},{130,7,31},{0,8,112}, + {0,8,48},{0,9,192},{128,7,10},{0,8,96},{0,8,32},{0,9,160}, + {0,8,0},{0,8,128},{0,8,64},{0,9,224},{128,7,6},{0,8,88}, + {0,8,24},{0,9,144},{131,7,59},{0,8,120},{0,8,56},{0,9,208}, + {129,7,17},{0,8,104},{0,8,40},{0,9,176},{0,8,8},{0,8,136}, + {0,8,72},{0,9,240},{128,7,4},{0,8,84},{0,8,20},{133,8,227}, + {131,7,43},{0,8,116},{0,8,52},{0,9,200},{129,7,13},{0,8,100}, + {0,8,36},{0,9,168},{0,8,4},{0,8,132},{0,8,68},{0,9,232}, + {128,7,8},{0,8,92},{0,8,28},{0,9,152},{132,7,83},{0,8,124}, + {0,8,60},{0,9,216},{130,7,23},{0,8,108},{0,8,44},{0,9,184}, + {0,8,12},{0,8,140},{0,8,76},{0,9,248},{128,7,3},{0,8,82}, + {0,8,18},{133,8,163},{131,7,35},{0,8,114},{0,8,50},{0,9,196}, + {129,7,11},{0,8,98},{0,8,34},{0,9,164},{0,8,2},{0,8,130}, + {0,8,66},{0,9,228},{128,7,7},{0,8,90},{0,8,26},{0,9,148}, + {132,7,67},{0,8,122},{0,8,58},{0,9,212},{130,7,19},{0,8,106}, + {0,8,42},{0,9,180},{0,8,10},{0,8,138},{0,8,74},{0,9,244}, + {128,7,5},{0,8,86},{0,8,22},{65,8,0},{131,7,51},{0,8,118}, + {0,8,54},{0,9,204},{129,7,15},{0,8,102},{0,8,38},{0,9,172}, + {0,8,6},{0,8,134},{0,8,70},{0,9,236},{128,7,9},{0,8,94}, + {0,8,30},{0,9,156},{132,7,99},{0,8,126},{0,8,62},{0,9,220}, + {130,7,27},{0,8,110},{0,8,46},{0,9,188},{0,8,14},{0,8,142}, + {0,8,78},{0,9,252},{96,7,0},{0,8,81},{0,8,17},{133,8,131}, + {130,7,31},{0,8,113},{0,8,49},{0,9,194},{128,7,10},{0,8,97}, + {0,8,33},{0,9,162},{0,8,1},{0,8,129},{0,8,65},{0,9,226}, + {128,7,6},{0,8,89},{0,8,25},{0,9,146},{131,7,59},{0,8,121}, + {0,8,57},{0,9,210},{129,7,17},{0,8,105},{0,8,41},{0,9,178}, + {0,8,9},{0,8,137},{0,8,73},{0,9,242},{128,7,4},{0,8,85}, + {0,8,21},{144,8,3},{131,7,43},{0,8,117},{0,8,53},{0,9,202}, + {129,7,13},{0,8,101},{0,8,37},{0,9,170},{0,8,5},{0,8,133}, + {0,8,69},{0,9,234},{128,7,8},{0,8,93},{0,8,29},{0,9,154}, + {132,7,83},{0,8,125},{0,8,61},{0,9,218},{130,7,23},{0,8,109}, + {0,8,45},{0,9,186},{0,8,13},{0,8,141},{0,8,77},{0,9,250}, + {128,7,3},{0,8,83},{0,8,19},{133,8,195},{131,7,35},{0,8,115}, + {0,8,51},{0,9,198},{129,7,11},{0,8,99},{0,8,35},{0,9,166}, + {0,8,3},{0,8,131},{0,8,67},{0,9,230},{128,7,7},{0,8,91}, + {0,8,27},{0,9,150},{132,7,67},{0,8,123},{0,8,59},{0,9,214}, + {130,7,19},{0,8,107},{0,8,43},{0,9,182},{0,8,11},{0,8,139}, + {0,8,75},{0,9,246},{128,7,5},{0,8,87},{0,8,23},{77,8,0}, + {131,7,51},{0,8,119},{0,8,55},{0,9,206},{129,7,15},{0,8,103}, + {0,8,39},{0,9,174},{0,8,7},{0,8,135},{0,8,71},{0,9,238}, + {128,7,9},{0,8,95},{0,8,31},{0,9,158},{132,7,99},{0,8,127}, + {0,8,63},{0,9,222},{130,7,27},{0,8,111},{0,8,47},{0,9,190}, + {0,8,15},{0,8,143},{0,8,79},{0,9,254},{96,7,0},{0,8,80}, + {0,8,16},{132,8,115},{130,7,31},{0,8,112},{0,8,48},{0,9,193}, + {128,7,10},{0,8,96},{0,8,32},{0,9,161},{0,8,0},{0,8,128}, + {0,8,64},{0,9,225},{128,7,6},{0,8,88},{0,8,24},{0,9,145}, + {131,7,59},{0,8,120},{0,8,56},{0,9,209},{129,7,17},{0,8,104}, + {0,8,40},{0,9,177},{0,8,8},{0,8,136},{0,8,72},{0,9,241}, + {128,7,4},{0,8,84},{0,8,20},{133,8,227},{131,7,43},{0,8,116}, + {0,8,52},{0,9,201},{129,7,13},{0,8,100},{0,8,36},{0,9,169}, + {0,8,4},{0,8,132},{0,8,68},{0,9,233},{128,7,8},{0,8,92}, + {0,8,28},{0,9,153},{132,7,83},{0,8,124},{0,8,60},{0,9,217}, + {130,7,23},{0,8,108},{0,8,44},{0,9,185},{0,8,12},{0,8,140}, + {0,8,76},{0,9,249},{128,7,3},{0,8,82},{0,8,18},{133,8,163}, + {131,7,35},{0,8,114},{0,8,50},{0,9,197},{129,7,11},{0,8,98}, + {0,8,34},{0,9,165},{0,8,2},{0,8,130},{0,8,66},{0,9,229}, + {128,7,7},{0,8,90},{0,8,26},{0,9,149},{132,7,67},{0,8,122}, + 
{0,8,58},{0,9,213},{130,7,19},{0,8,106},{0,8,42},{0,9,181}, + {0,8,10},{0,8,138},{0,8,74},{0,9,245},{128,7,5},{0,8,86}, + {0,8,22},{65,8,0},{131,7,51},{0,8,118},{0,8,54},{0,9,205}, + {129,7,15},{0,8,102},{0,8,38},{0,9,173},{0,8,6},{0,8,134}, + {0,8,70},{0,9,237},{128,7,9},{0,8,94},{0,8,30},{0,9,157}, + {132,7,99},{0,8,126},{0,8,62},{0,9,221},{130,7,27},{0,8,110}, + {0,8,46},{0,9,189},{0,8,14},{0,8,142},{0,8,78},{0,9,253}, + {96,7,0},{0,8,81},{0,8,17},{133,8,131},{130,7,31},{0,8,113}, + {0,8,49},{0,9,195},{128,7,10},{0,8,97},{0,8,33},{0,9,163}, + {0,8,1},{0,8,129},{0,8,65},{0,9,227},{128,7,6},{0,8,89}, + {0,8,25},{0,9,147},{131,7,59},{0,8,121},{0,8,57},{0,9,211}, + {129,7,17},{0,8,105},{0,8,41},{0,9,179},{0,8,9},{0,8,137}, + {0,8,73},{0,9,243},{128,7,4},{0,8,85},{0,8,21},{144,8,3}, + {131,7,43},{0,8,117},{0,8,53},{0,9,203},{129,7,13},{0,8,101}, + {0,8,37},{0,9,171},{0,8,5},{0,8,133},{0,8,69},{0,9,235}, + {128,7,8},{0,8,93},{0,8,29},{0,9,155},{132,7,83},{0,8,125}, + {0,8,61},{0,9,219},{130,7,23},{0,8,109},{0,8,45},{0,9,187}, + {0,8,13},{0,8,141},{0,8,77},{0,9,251},{128,7,3},{0,8,83}, + {0,8,19},{133,8,195},{131,7,35},{0,8,115},{0,8,51},{0,9,199}, + {129,7,11},{0,8,99},{0,8,35},{0,9,167},{0,8,3},{0,8,131}, + {0,8,67},{0,9,231},{128,7,7},{0,8,91},{0,8,27},{0,9,151}, + {132,7,67},{0,8,123},{0,8,59},{0,9,215},{130,7,19},{0,8,107}, + {0,8,43},{0,9,183},{0,8,11},{0,8,139},{0,8,75},{0,9,247}, + {128,7,5},{0,8,87},{0,8,23},{77,8,0},{131,7,51},{0,8,119}, + {0,8,55},{0,9,207},{129,7,15},{0,8,103},{0,8,39},{0,9,175}, + {0,8,7},{0,8,135},{0,8,71},{0,9,239},{128,7,9},{0,8,95}, + {0,8,31},{0,9,159},{132,7,99},{0,8,127},{0,8,63},{0,9,223}, + {130,7,27},{0,8,111},{0,8,47},{0,9,191},{0,8,15},{0,8,143}, + {0,8,79},{0,9,255} + }; + + static const code distfix[32] = { + {128,5,1},{135,5,257},{131,5,17},{139,5,4097},{129,5,5}, + {137,5,1025},{133,5,65},{141,5,16385},{128,5,3},{136,5,513}, + {132,5,33},{140,5,8193},{130,5,9},{138,5,2049},{134,5,129}, + {142,5,32769},{128,5,2},{135,5,385},{131,5,25},{139,5,6145}, + {129,5,7},{137,5,1537},{133,5,97},{141,5,24577},{128,5,4}, + {136,5,769},{132,5,49},{140,5,12289},{130,5,13},{138,5,3073}, + {134,5,193},{142,5,49153} + }; ADDED compat/zlib/contrib/infback9/inflate9.h Index: compat/zlib/contrib/infback9/inflate9.h ================================================================== --- compat/zlib/contrib/infback9/inflate9.h +++ compat/zlib/contrib/infback9/inflate9.h @@ -0,0 +1,47 @@ +/* inflate9.h -- internal inflate state definition + * Copyright (C) 1995-2003 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* WARNING: this file should *not* be used by applications. It is + part of the implementation of the compression library and is + subject to change. Applications should only use zlib.h. 
+ */ + +/* Possible inflate modes between inflate() calls */ +typedef enum { + TYPE, /* i: waiting for type bits, including last-flag bit */ + STORED, /* i: waiting for stored size (length and complement) */ + TABLE, /* i: waiting for dynamic block table lengths */ + LEN, /* i: waiting for length/lit code */ + DONE, /* finished check, done -- remain here until reset */ + BAD /* got a data error -- remain here until reset */ +} inflate_mode; + +/* + State transitions between above modes - + + (most modes can go to the BAD mode -- not shown for clarity) + + Read deflate blocks: + TYPE -> STORED or TABLE or LEN or DONE + STORED -> TYPE + TABLE -> LENLENS -> CODELENS -> LEN + Read deflate codes: + LEN -> LEN or TYPE + */ + +/* state maintained between inflate() calls. Approximately 7K bytes. */ +struct inflate_state { + /* sliding window */ + unsigned char FAR *window; /* allocated sliding window, if needed */ + /* dynamic table building */ + unsigned ncode; /* number of code length code lengths */ + unsigned nlen; /* number of length code lengths */ + unsigned ndist; /* number of distance code lengths */ + unsigned have; /* number of code lengths in lens[] */ + code FAR *next; /* next available space in codes[] */ + unsigned short lens[320]; /* temporary storage for code lengths */ + unsigned short work[288]; /* work area for code table building */ + code codes[ENOUGH]; /* space for code tables */ +}; ADDED compat/zlib/contrib/infback9/inftree9.c Index: compat/zlib/contrib/infback9/inftree9.c ================================================================== --- compat/zlib/contrib/infback9/inftree9.c +++ compat/zlib/contrib/infback9/inftree9.c @@ -0,0 +1,324 @@ +/* inftree9.c -- generate Huffman trees for efficient decoding + * Copyright (C) 1995-2013 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +#include "zutil.h" +#include "inftree9.h" + +#define MAXBITS 15 + +const char inflate9_copyright[] = + " inflate9 1.2.8 Copyright 1995-2013 Mark Adler "; +/* + If you use the zlib library in a product, an acknowledgment is welcome + in the documentation of your product. If for some reason you cannot + include such an acknowledgment, I would appreciate that you keep this + copyright string in the executable of your product. + */ + +/* + Build a set of tables to decode the provided canonical Huffman code. + The code lengths are lens[0..codes-1]. The result starts at *table, + whose indices are 0..2^bits-1. work is a writable array of at least + lens shorts, which is used as a work area. type is the type of code + to be generated, CODES, LENS, or DISTS. On return, zero is success, + -1 is an invalid code, and +1 means that ENOUGH isn't enough. table + on return points to the next available entry's address. bits is the + requested root table index bits, and on return it is the actual root + table index bits. It will differ if the request is greater than the + longest code or if it is less than the shortest code. 
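
For orientation, the calling convention just described can be sketched as
follows, mirroring the way infback9.c (and its makefixed9() helper) invokes
this routine for a literal/length table. The sketch is illustrative; lens[]
is assumed to already hold the code lengths for up to 286 literal/length
symbols (0 meaning the symbol is unused), and code, LENS, ENOUGH and
inflate_table9() come from inftree9.h.

    code table_space[ENOUGH];   /* room for the generated tables          */
    code *next = table_space;   /* advanced past the entries used         */
    unsigned bits = 9;          /* request a 9-bit root table             */
    unsigned short work[288];   /* scratch space, at least `codes` shorts */
    int ret;

    ret = inflate_table9(LENS, lens, 286, &next, &bits, work);
    if (ret < 0) {
        /* invalid or over-subscribed set of code lengths */
    } else if (ret > 0) {
        /* ENOUGH was not enough table space */
    } else {
        /* table_space[0..] holds the root table, bits is the actual root
           size, and next points just past the last entry consumed */
    }

infback9.c drives this call twice per dynamic block, once with LENS and a
9-bit root and once with DISTS and a 6-bit root.
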
+ */ +int inflate_table9(type, lens, codes, table, bits, work) +codetype type; +unsigned short FAR *lens; +unsigned codes; +code FAR * FAR *table; +unsigned FAR *bits; +unsigned short FAR *work; +{ + unsigned len; /* a code's length in bits */ + unsigned sym; /* index of code symbols */ + unsigned min, max; /* minimum and maximum code lengths */ + unsigned root; /* number of index bits for root table */ + unsigned curr; /* number of index bits for current table */ + unsigned drop; /* code bits to drop for sub-table */ + int left; /* number of prefix codes available */ + unsigned used; /* code entries in table used */ + unsigned huff; /* Huffman code */ + unsigned incr; /* for incrementing code, index */ + unsigned fill; /* index for replicating entries */ + unsigned low; /* low bits for current root entry */ + unsigned mask; /* mask for low root bits */ + code this; /* table entry for duplication */ + code FAR *next; /* next available space in table */ + const unsigned short FAR *base; /* base value table to use */ + const unsigned short FAR *extra; /* extra bits table to use */ + int end; /* use base and extra for symbol > end */ + unsigned short count[MAXBITS+1]; /* number of codes of each length */ + unsigned short offs[MAXBITS+1]; /* offsets in table for each length */ + static const unsigned short lbase[31] = { /* Length codes 257..285 base */ + 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 15, 17, + 19, 23, 27, 31, 35, 43, 51, 59, 67, 83, 99, 115, + 131, 163, 195, 227, 3, 0, 0}; + static const unsigned short lext[31] = { /* Length codes 257..285 extra */ + 128, 128, 128, 128, 128, 128, 128, 128, 129, 129, 129, 129, + 130, 130, 130, 130, 131, 131, 131, 131, 132, 132, 132, 132, + 133, 133, 133, 133, 144, 72, 78}; + static const unsigned short dbase[32] = { /* Distance codes 0..31 base */ + 1, 2, 3, 4, 5, 7, 9, 13, 17, 25, 33, 49, + 65, 97, 129, 193, 257, 385, 513, 769, 1025, 1537, 2049, 3073, + 4097, 6145, 8193, 12289, 16385, 24577, 32769, 49153}; + static const unsigned short dext[32] = { /* Distance codes 0..31 extra */ + 128, 128, 128, 128, 129, 129, 130, 130, 131, 131, 132, 132, + 133, 133, 134, 134, 135, 135, 136, 136, 137, 137, 138, 138, + 139, 139, 140, 140, 141, 141, 142, 142}; + + /* + Process a set of code lengths to create a canonical Huffman code. The + code lengths are lens[0..codes-1]. Each length corresponds to the + symbols 0..codes-1. The Huffman code is generated by first sorting the + symbols by length from short to long, and retaining the symbol order + for codes with equal lengths. Then the code starts with all zero bits + for the first code of the shortest length, and the codes are integer + increments for the same length, and zeros are appended as the length + increases. For the deflate format, these bits are stored backwards + from their more natural integer increment ordering, and so when the + decoding tables are built in the large loop below, the integer codes + are incremented backwards. + + This routine assumes, but does not check, that all of the entries in + lens[] are in the range 0..MAXBITS. The caller must assure this. + 1..MAXBITS is interpreted as that code length. zero means that that + symbol does not occur in this code. + + The codes are sorted by computing a count of codes for each length, + creating from that a table of starting indices for each length in the + sorted table, and then entering the symbols in order in the sorted + table. The sorted table is work[], with that space being provided by + the caller. 
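
To make the canonical construction above concrete, here is the worked
example from RFC 1951 (it is not taken from this file): for eight symbols
A..H with code lengths 3, 3, 3, 3, 3, 2, 4, 4, sorting by length puts F
(length 2) first, then A..E (length 3), then G and H (length 4), and the
resulting codes are

    symbol  length  code
    F       2       00
    A       3       010
    B       3       011
    C       3       100
    D       3       101
    E       3       110
    G       4       1110
    H       4       1111

Within one length each code is the previous code plus one; whenever the
length increases, the incremented code has zero bits appended. The loop
below builds the same codes in huff, except that, as noted, the increment
runs bit-reversed because deflate stores the codes backwards.
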
+ + The length counts are used for other purposes as well, i.e. finding + the minimum and maximum length codes, determining if there are any + codes at all, checking for a valid set of lengths, and looking ahead + at length counts to determine sub-table sizes when building the + decoding tables. + */ + + /* accumulate lengths for codes (assumes lens[] all in 0..MAXBITS) */ + for (len = 0; len <= MAXBITS; len++) + count[len] = 0; + for (sym = 0; sym < codes; sym++) + count[lens[sym]]++; + + /* bound code lengths, force root to be within code lengths */ + root = *bits; + for (max = MAXBITS; max >= 1; max--) + if (count[max] != 0) break; + if (root > max) root = max; + if (max == 0) return -1; /* no codes! */ + for (min = 1; min <= MAXBITS; min++) + if (count[min] != 0) break; + if (root < min) root = min; + + /* check for an over-subscribed or incomplete set of lengths */ + left = 1; + for (len = 1; len <= MAXBITS; len++) { + left <<= 1; + left -= count[len]; + if (left < 0) return -1; /* over-subscribed */ + } + if (left > 0 && (type == CODES || max != 1)) + return -1; /* incomplete set */ + + /* generate offsets into symbol table for each length for sorting */ + offs[1] = 0; + for (len = 1; len < MAXBITS; len++) + offs[len + 1] = offs[len] + count[len]; + + /* sort symbols by length, by symbol order within each length */ + for (sym = 0; sym < codes; sym++) + if (lens[sym] != 0) work[offs[lens[sym]]++] = (unsigned short)sym; + + /* + Create and fill in decoding tables. In this loop, the table being + filled is at next and has curr index bits. The code being used is huff + with length len. That code is converted to an index by dropping drop + bits off of the bottom. For codes where len is less than drop + curr, + those top drop + curr - len bits are incremented through all values to + fill the table with replicated entries. + + root is the number of index bits for the root table. When len exceeds + root, sub-tables are created pointed to by the root entry with an index + of the low root bits of huff. This is saved in low to check for when a + new sub-table should be started. drop is zero when the root table is + being filled, and drop is root when sub-tables are being filled. + + When a new sub-table is needed, it is necessary to look ahead in the + code lengths to determine what size sub-table is needed. The length + counts are used for this, and so count[] is decremented as codes are + entered in the tables. + + used keeps track of how many table entries have been allocated from the + provided *table space. It is checked for LENS and DIST tables against + the constants ENOUGH_LENS and ENOUGH_DISTS to guard against changes in + the initial root table size constants. See the comments in inftree9.h + for more information. + + sym increments through all symbols, and the loop terminates when + all codes of length max, i.e. all codes, have been processed. This + routine permits incomplete codes, so another loop after this one fills + in the rest of the decoding tables with invalid code markers. 
+ */ + + /* set up for code type */ + switch (type) { + case CODES: + base = extra = work; /* dummy value--not used */ + end = 19; + break; + case LENS: + base = lbase; + base -= 257; + extra = lext; + extra -= 257; + end = 256; + break; + default: /* DISTS */ + base = dbase; + extra = dext; + end = -1; + } + + /* initialize state for loop */ + huff = 0; /* starting code */ + sym = 0; /* starting code symbol */ + len = min; /* starting code length */ + next = *table; /* current table to fill in */ + curr = root; /* current table index bits */ + drop = 0; /* current bits to drop from code for index */ + low = (unsigned)(-1); /* trigger new sub-table when len > root */ + used = 1U << root; /* use root table entries */ + mask = used - 1; /* mask for comparing low */ + + /* check available table space */ + if ((type == LENS && used >= ENOUGH_LENS) || + (type == DISTS && used >= ENOUGH_DISTS)) + return 1; + + /* process all codes and make table entries */ + for (;;) { + /* create table entry */ + this.bits = (unsigned char)(len - drop); + if ((int)(work[sym]) < end) { + this.op = (unsigned char)0; + this.val = work[sym]; + } + else if ((int)(work[sym]) > end) { + this.op = (unsigned char)(extra[work[sym]]); + this.val = base[work[sym]]; + } + else { + this.op = (unsigned char)(32 + 64); /* end of block */ + this.val = 0; + } + + /* replicate for those indices with low len bits equal to huff */ + incr = 1U << (len - drop); + fill = 1U << curr; + do { + fill -= incr; + next[(huff >> drop) + fill] = this; + } while (fill != 0); + + /* backwards increment the len-bit code huff */ + incr = 1U << (len - 1); + while (huff & incr) + incr >>= 1; + if (incr != 0) { + huff &= incr - 1; + huff += incr; + } + else + huff = 0; + + /* go to next symbol, update count, len */ + sym++; + if (--(count[len]) == 0) { + if (len == max) break; + len = lens[work[sym]]; + } + + /* create new sub-table if needed */ + if (len > root && (huff & mask) != low) { + /* if first time, transition to sub-tables */ + if (drop == 0) + drop = root; + + /* increment past last table */ + next += 1U << curr; + + /* determine length of next table */ + curr = len - drop; + left = (int)(1 << curr); + while (curr + drop < max) { + left -= count[curr + drop]; + if (left <= 0) break; + curr++; + left <<= 1; + } + + /* check for enough space */ + used += 1U << curr; + if ((type == LENS && used >= ENOUGH_LENS) || + (type == DISTS && used >= ENOUGH_DISTS)) + return 1; + + /* point entry in root table to sub-table */ + low = huff & mask; + (*table)[low].op = (unsigned char)curr; + (*table)[low].bits = (unsigned char)root; + (*table)[low].val = (unsigned short)(next - *table); + } + } + + /* + Fill in rest of table for incomplete codes. This loop is similar to the + loop above in incrementing huff for table indices. It is assumed that + len is equal to curr + drop, so there is no loop needed to increment + through high index bits. When the current sub-table is filled, the loop + drops back to the root table to fill in any remaining entries there. 
+ */ + this.op = (unsigned char)64; /* invalid code marker */ + this.bits = (unsigned char)(len - drop); + this.val = (unsigned short)0; + while (huff != 0) { + /* when done with sub-table, drop back to root table */ + if (drop != 0 && (huff & mask) != low) { + drop = 0; + len = root; + next = *table; + curr = root; + this.bits = (unsigned char)len; + } + + /* put invalid code marker in table */ + next[huff >> drop] = this; + + /* backwards increment the len-bit code huff */ + incr = 1U << (len - 1); + while (huff & incr) + incr >>= 1; + if (incr != 0) { + huff &= incr - 1; + huff += incr; + } + else + huff = 0; + } + + /* set return parameters */ + *table += used; + *bits = root; + return 0; +} ADDED compat/zlib/contrib/infback9/inftree9.h Index: compat/zlib/contrib/infback9/inftree9.h ================================================================== --- compat/zlib/contrib/infback9/inftree9.h +++ compat/zlib/contrib/infback9/inftree9.h @@ -0,0 +1,61 @@ +/* inftree9.h -- header to use inftree9.c + * Copyright (C) 1995-2008 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* WARNING: this file should *not* be used by applications. It is + part of the implementation of the compression library and is + subject to change. Applications should only use zlib.h. + */ + +/* Structure for decoding tables. Each entry provides either the + information needed to do the operation requested by the code that + indexed that table entry, or it provides a pointer to another + table that indexes more bits of the code. op indicates whether + the entry is a pointer to another table, a literal, a length or + distance, an end-of-block, or an invalid code. For a table + pointer, the low four bits of op is the number of index bits of + that table. For a length or distance, the low four bits of op + is the number of extra bits to get after the code. bits is + the number of bits in this code or part of the code to drop off + of the bit buffer. val is the actual byte to output in the case + of a literal, the base length or distance, or the offset from + the current table to the next table. Each entry is four bytes. */ +typedef struct { + unsigned char op; /* operation, extra bits, table bits */ + unsigned char bits; /* bits in this part of the code */ + unsigned short val; /* offset in table or code value */ +} code; + +/* op values as set by inflate_table(): + 00000000 - literal + 0000tttt - table link, tttt != 0 is the number of table index bits + 100eeeee - length or distance, eeee is the number of extra bits + 01100000 - end of block + 01000000 - invalid code + */ + +/* Maximum size of the dynamic table. The maximum number of code structures is + 1446, which is the sum of 852 for literal/length codes and 594 for distance + codes. These values were found by exhaustive searches using the program + examples/enough.c found in the zlib distribtution. The arguments to that + program are the number of symbols, the initial root table size, and the + maximum bit length of a code. "enough 286 9 15" for literal/length codes + returns returns 852, and "enough 32 6 15" for distance codes returns 594. + The initial root table size (9 or 6) is found in the fifth argument of the + inflate_table() calls in infback9.c. If the root table size is changed, + then these maximum sizes would be need to be recalculated and updated. 
*/ +#define ENOUGH_LENS 852 +#define ENOUGH_DISTS 594 +#define ENOUGH (ENOUGH_LENS+ENOUGH_DISTS) + +/* Type of code to build for inflate_table9() */ +typedef enum { + CODES, + LENS, + DISTS +} codetype; + +extern int inflate_table9 OF((codetype type, unsigned short FAR *lens, + unsigned codes, code FAR * FAR *table, + unsigned FAR *bits, unsigned short FAR *work)); ADDED compat/zlib/contrib/inflate86/inffas86.c Index: compat/zlib/contrib/inflate86/inffas86.c ================================================================== --- compat/zlib/contrib/inflate86/inffas86.c +++ compat/zlib/contrib/inflate86/inffas86.c @@ -0,0 +1,1157 @@ +/* inffas86.c is a hand tuned assembler version of + * + * inffast.c -- fast decoding + * Copyright (C) 1995-2003 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + * + * Copyright (C) 2003 Chris Anderson + * Please use the copyright conditions above. + * + * Dec-29-2003 -- I added AMD64 inflate asm support. This version is also + * slightly quicker on x86 systems because, instead of using rep movsb to copy + * data, it uses rep movsw, which moves data in 2-byte chunks instead of single + * bytes. I've tested the AMD64 code on a Fedora Core 1 + the x86_64 updates + * from http://fedora.linux.duke.edu/fc1_x86_64 + * which is running on an Athlon 64 3000+ / Gigabyte GA-K8VT800M system with + * 1GB ram. The 64-bit version is about 4% faster than the 32-bit version, + * when decompressing mozilla-source-1.3.tar.gz. + * + * Mar-13-2003 -- Most of this is derived from inffast.S which is derived from + * the gcc -S output of zlib-1.2.0/inffast.c. Zlib-1.2.0 is in beta release at + * the moment. I have successfully compiled and tested this code with gcc2.96, + * gcc3.2, icc5.0, msvc6.0. It is very close to the speed of inffast.S + * compiled with gcc -DNO_MMX, but inffast.S is still faster on the P3 with MMX + * enabled. I will attempt to merge the MMX code into this version. Newer + * versions of this and inffast.S can be found at + * http://www.eetbeetee.com/zlib/ and http://www.charm.net/~christop/zlib/ + */ + +#include "zutil.h" +#include "inftrees.h" +#include "inflate.h" +#include "inffast.h" + +/* Mark Adler's comments from inffast.c: */ + +/* + Decode literal, length, and distance codes and write out the resulting + literal and match bytes until either not enough input or output is + available, an end-of-block is encountered, or a data error is encountered. + When large enough input and output buffers are supplied to inflate(), for + example, a 16K input buffer and a 64K output buffer, more than 95% of the + inflate execution time is spent in this routine. + + Entry assumptions: + + state->mode == LEN + strm->avail_in >= 6 + strm->avail_out >= 258 + start >= strm->avail_out + state->bits < 8 + + On return, state->mode is one of: + + LEN -- ran out of enough output space or enough available input + TYPE -- reached end of block code, inflate() to interpret next block + BAD -- error in block data + + Notes: + + - The maximum input bits used by a length/distance pair is 15 bits for the + length code, 5 bits for the length extra, 15 bits for the distance code, + and 13 bits for the distance extra. This totals 48 bits, or six bytes. + Therefore if strm->avail_in >= 6, then there is enough input to avoid + checking for available input while decoding. + + - The maximum bytes that a single length/distance pair can output is 258 + bytes, which is the maximum length that can be coded. 
inflate_fast() + requires strm->avail_out >= 258 for each loop to avoid checking for + output space. + */ +void inflate_fast(strm, start) +z_streamp strm; +unsigned start; /* inflate()'s starting value for strm->avail_out */ +{ + struct inflate_state FAR *state; + struct inffast_ar { +/* 64 32 x86 x86_64 */ +/* ar offset register */ +/* 0 0 */ void *esp; /* esp save */ +/* 8 4 */ void *ebp; /* ebp save */ +/* 16 8 */ unsigned char FAR *in; /* esi rsi local strm->next_in */ +/* 24 12 */ unsigned char FAR *last; /* r9 while in < last */ +/* 32 16 */ unsigned char FAR *out; /* edi rdi local strm->next_out */ +/* 40 20 */ unsigned char FAR *beg; /* inflate()'s init next_out */ +/* 48 24 */ unsigned char FAR *end; /* r10 while out < end */ +/* 56 28 */ unsigned char FAR *window;/* size of window, wsize!=0 */ +/* 64 32 */ code const FAR *lcode; /* ebp rbp local strm->lencode */ +/* 72 36 */ code const FAR *dcode; /* r11 local strm->distcode */ +/* 80 40 */ unsigned long hold; /* edx rdx local strm->hold */ +/* 88 44 */ unsigned bits; /* ebx rbx local strm->bits */ +/* 92 48 */ unsigned wsize; /* window size */ +/* 96 52 */ unsigned write; /* window write index */ +/*100 56 */ unsigned lmask; /* r12 mask for lcode */ +/*104 60 */ unsigned dmask; /* r13 mask for dcode */ +/*108 64 */ unsigned len; /* r14 match length */ +/*112 68 */ unsigned dist; /* r15 match distance */ +/*116 72 */ unsigned status; /* set when state chng*/ + } ar; + +#if defined( __GNUC__ ) && defined( __amd64__ ) && ! defined( __i386 ) +#define PAD_AVAIL_IN 6 +#define PAD_AVAIL_OUT 258 +#else +#define PAD_AVAIL_IN 5 +#define PAD_AVAIL_OUT 257 +#endif + + /* copy state to local variables */ + state = (struct inflate_state FAR *)strm->state; + ar.in = strm->next_in; + ar.last = ar.in + (strm->avail_in - PAD_AVAIL_IN); + ar.out = strm->next_out; + ar.beg = ar.out - (start - strm->avail_out); + ar.end = ar.out + (strm->avail_out - PAD_AVAIL_OUT); + ar.wsize = state->wsize; + ar.write = state->wnext; + ar.window = state->window; + ar.hold = state->hold; + ar.bits = state->bits; + ar.lcode = state->lencode; + ar.dcode = state->distcode; + ar.lmask = (1U << state->lenbits) - 1; + ar.dmask = (1U << state->distbits) - 1; + + /* decode literals and length/distances until end-of-block or not enough + input data or output space */ + + /* align in on 1/2 hold size boundary */ + while (((unsigned long)(void *)ar.in & (sizeof(ar.hold) / 2 - 1)) != 0) { + ar.hold += (unsigned long)*ar.in++ << ar.bits; + ar.bits += 8; + } + +#if defined( __GNUC__ ) && defined( __amd64__ ) && ! 
defined( __i386 ) + __asm__ __volatile__ ( +" leaq %0, %%rax\n" +" movq %%rbp, 8(%%rax)\n" /* save regs rbp and rsp */ +" movq %%rsp, (%%rax)\n" +" movq %%rax, %%rsp\n" /* make rsp point to &ar */ +" movq 16(%%rsp), %%rsi\n" /* rsi = in */ +" movq 32(%%rsp), %%rdi\n" /* rdi = out */ +" movq 24(%%rsp), %%r9\n" /* r9 = last */ +" movq 48(%%rsp), %%r10\n" /* r10 = end */ +" movq 64(%%rsp), %%rbp\n" /* rbp = lcode */ +" movq 72(%%rsp), %%r11\n" /* r11 = dcode */ +" movq 80(%%rsp), %%rdx\n" /* rdx = hold */ +" movl 88(%%rsp), %%ebx\n" /* ebx = bits */ +" movl 100(%%rsp), %%r12d\n" /* r12d = lmask */ +" movl 104(%%rsp), %%r13d\n" /* r13d = dmask */ + /* r14d = len */ + /* r15d = dist */ +" cld\n" +" cmpq %%rdi, %%r10\n" +" je .L_one_time\n" /* if only one decode left */ +" cmpq %%rsi, %%r9\n" +" je .L_one_time\n" +" jmp .L_do_loop\n" + +".L_one_time:\n" +" movq %%r12, %%r8\n" /* r8 = lmask */ +" cmpb $32, %%bl\n" +" ja .L_get_length_code_one_time\n" + +" lodsl\n" /* eax = *(uint *)in++ */ +" movb %%bl, %%cl\n" /* cl = bits, needs it for shifting */ +" addb $32, %%bl\n" /* bits += 32 */ +" shlq %%cl, %%rax\n" +" orq %%rax, %%rdx\n" /* hold |= *((uint *)in)++ << bits */ +" jmp .L_get_length_code_one_time\n" + +".align 32,0x90\n" +".L_while_test:\n" +" cmpq %%rdi, %%r10\n" +" jbe .L_break_loop\n" +" cmpq %%rsi, %%r9\n" +" jbe .L_break_loop\n" + +".L_do_loop:\n" +" movq %%r12, %%r8\n" /* r8 = lmask */ +" cmpb $32, %%bl\n" +" ja .L_get_length_code\n" /* if (32 < bits) */ + +" lodsl\n" /* eax = *(uint *)in++ */ +" movb %%bl, %%cl\n" /* cl = bits, needs it for shifting */ +" addb $32, %%bl\n" /* bits += 32 */ +" shlq %%cl, %%rax\n" +" orq %%rax, %%rdx\n" /* hold |= *((uint *)in)++ << bits */ + +".L_get_length_code:\n" +" andq %%rdx, %%r8\n" /* r8 &= hold */ +" movl (%%rbp,%%r8,4), %%eax\n" /* eax = lcode[hold & lmask] */ + +" movb %%ah, %%cl\n" /* cl = this.bits */ +" subb %%ah, %%bl\n" /* bits -= this.bits */ +" shrq %%cl, %%rdx\n" /* hold >>= this.bits */ + +" testb %%al, %%al\n" +" jnz .L_test_for_length_base\n" /* if (op != 0) 45.7% */ + +" movq %%r12, %%r8\n" /* r8 = lmask */ +" shrl $16, %%eax\n" /* output this.val char */ +" stosb\n" + +".L_get_length_code_one_time:\n" +" andq %%rdx, %%r8\n" /* r8 &= hold */ +" movl (%%rbp,%%r8,4), %%eax\n" /* eax = lcode[hold & lmask] */ + +".L_dolen:\n" +" movb %%ah, %%cl\n" /* cl = this.bits */ +" subb %%ah, %%bl\n" /* bits -= this.bits */ +" shrq %%cl, %%rdx\n" /* hold >>= this.bits */ + +" testb %%al, %%al\n" +" jnz .L_test_for_length_base\n" /* if (op != 0) 45.7% */ + +" shrl $16, %%eax\n" /* output this.val char */ +" stosb\n" +" jmp .L_while_test\n" + +".align 32,0x90\n" +".L_test_for_length_base:\n" +" movl %%eax, %%r14d\n" /* len = this */ +" shrl $16, %%r14d\n" /* len = this.val */ +" movb %%al, %%cl\n" + +" testb $16, %%al\n" +" jz .L_test_for_second_level_length\n" /* if ((op & 16) == 0) 8% */ +" andb $15, %%cl\n" /* op &= 15 */ +" jz .L_decode_distance\n" /* if (!op) */ + +".L_add_bits_to_len:\n" +" subb %%cl, %%bl\n" +" xorl %%eax, %%eax\n" +" incl %%eax\n" +" shll %%cl, %%eax\n" +" decl %%eax\n" +" andl %%edx, %%eax\n" /* eax &= hold */ +" shrq %%cl, %%rdx\n" +" addl %%eax, %%r14d\n" /* len += hold & mask[op] */ + +".L_decode_distance:\n" +" movq %%r13, %%r8\n" /* r8 = dmask */ +" cmpb $32, %%bl\n" +" ja .L_get_distance_code\n" /* if (32 < bits) */ + +" lodsl\n" /* eax = *(uint *)in++ */ +" movb %%bl, %%cl\n" /* cl = bits, needs it for shifting */ +" addb $32, %%bl\n" /* bits += 32 */ +" shlq %%cl, %%rax\n" +" orq %%rax, %%rdx\n" /* hold |= 
*((uint *)in)++ << bits */ + +".L_get_distance_code:\n" +" andq %%rdx, %%r8\n" /* r8 &= hold */ +" movl (%%r11,%%r8,4), %%eax\n" /* eax = dcode[hold & dmask] */ + +".L_dodist:\n" +" movl %%eax, %%r15d\n" /* dist = this */ +" shrl $16, %%r15d\n" /* dist = this.val */ +" movb %%ah, %%cl\n" +" subb %%ah, %%bl\n" /* bits -= this.bits */ +" shrq %%cl, %%rdx\n" /* hold >>= this.bits */ +" movb %%al, %%cl\n" /* cl = this.op */ + +" testb $16, %%al\n" /* if ((op & 16) == 0) */ +" jz .L_test_for_second_level_dist\n" +" andb $15, %%cl\n" /* op &= 15 */ +" jz .L_check_dist_one\n" + +".L_add_bits_to_dist:\n" +" subb %%cl, %%bl\n" +" xorl %%eax, %%eax\n" +" incl %%eax\n" +" shll %%cl, %%eax\n" +" decl %%eax\n" /* (1 << op) - 1 */ +" andl %%edx, %%eax\n" /* eax &= hold */ +" shrq %%cl, %%rdx\n" +" addl %%eax, %%r15d\n" /* dist += hold & ((1 << op) - 1) */ + +".L_check_window:\n" +" movq %%rsi, %%r8\n" /* save in so from can use it's reg */ +" movq %%rdi, %%rax\n" +" subq 40(%%rsp), %%rax\n" /* nbytes = out - beg */ + +" cmpl %%r15d, %%eax\n" +" jb .L_clip_window\n" /* if (dist > nbytes) 4.2% */ + +" movl %%r14d, %%ecx\n" /* ecx = len */ +" movq %%rdi, %%rsi\n" +" subq %%r15, %%rsi\n" /* from = out - dist */ + +" sarl %%ecx\n" +" jnc .L_copy_two\n" /* if len % 2 == 0 */ + +" rep movsw\n" +" movb (%%rsi), %%al\n" +" movb %%al, (%%rdi)\n" +" incq %%rdi\n" + +" movq %%r8, %%rsi\n" /* move in back to %rsi, toss from */ +" jmp .L_while_test\n" + +".L_copy_two:\n" +" rep movsw\n" +" movq %%r8, %%rsi\n" /* move in back to %rsi, toss from */ +" jmp .L_while_test\n" + +".align 32,0x90\n" +".L_check_dist_one:\n" +" cmpl $1, %%r15d\n" /* if dist 1, is a memset */ +" jne .L_check_window\n" +" cmpq %%rdi, 40(%%rsp)\n" /* if out == beg, outside window */ +" je .L_check_window\n" + +" movl %%r14d, %%ecx\n" /* ecx = len */ +" movb -1(%%rdi), %%al\n" +" movb %%al, %%ah\n" + +" sarl %%ecx\n" +" jnc .L_set_two\n" +" movb %%al, (%%rdi)\n" +" incq %%rdi\n" + +".L_set_two:\n" +" rep stosw\n" +" jmp .L_while_test\n" + +".align 32,0x90\n" +".L_test_for_second_level_length:\n" +" testb $64, %%al\n" +" jnz .L_test_for_end_of_block\n" /* if ((op & 64) != 0) */ + +" xorl %%eax, %%eax\n" +" incl %%eax\n" +" shll %%cl, %%eax\n" +" decl %%eax\n" +" andl %%edx, %%eax\n" /* eax &= hold */ +" addl %%r14d, %%eax\n" /* eax += len */ +" movl (%%rbp,%%rax,4), %%eax\n" /* eax = lcode[val+(hold&mask[op])]*/ +" jmp .L_dolen\n" + +".align 32,0x90\n" +".L_test_for_second_level_dist:\n" +" testb $64, %%al\n" +" jnz .L_invalid_distance_code\n" /* if ((op & 64) != 0) */ + +" xorl %%eax, %%eax\n" +" incl %%eax\n" +" shll %%cl, %%eax\n" +" decl %%eax\n" +" andl %%edx, %%eax\n" /* eax &= hold */ +" addl %%r15d, %%eax\n" /* eax += dist */ +" movl (%%r11,%%rax,4), %%eax\n" /* eax = dcode[val+(hold&mask[op])]*/ +" jmp .L_dodist\n" + +".align 32,0x90\n" +".L_clip_window:\n" +" movl %%eax, %%ecx\n" /* ecx = nbytes */ +" movl 92(%%rsp), %%eax\n" /* eax = wsize, prepare for dist cmp */ +" negl %%ecx\n" /* nbytes = -nbytes */ + +" cmpl %%r15d, %%eax\n" +" jb .L_invalid_distance_too_far\n" /* if (dist > wsize) */ + +" addl %%r15d, %%ecx\n" /* nbytes = dist - nbytes */ +" cmpl $0, 96(%%rsp)\n" +" jne .L_wrap_around_window\n" /* if (write != 0) */ + +" movq 56(%%rsp), %%rsi\n" /* from = window */ +" subl %%ecx, %%eax\n" /* eax -= nbytes */ +" addq %%rax, %%rsi\n" /* from += wsize - nbytes */ + +" movl %%r14d, %%eax\n" /* eax = len */ +" cmpl %%ecx, %%r14d\n" +" jbe .L_do_copy\n" /* if (nbytes >= len) */ + +" subl %%ecx, %%eax\n" /* eax -= nbytes */ +" rep 
movsb\n" +" movq %%rdi, %%rsi\n" +" subq %%r15, %%rsi\n" /* from = &out[ -dist ] */ +" jmp .L_do_copy\n" + +".align 32,0x90\n" +".L_wrap_around_window:\n" +" movl 96(%%rsp), %%eax\n" /* eax = write */ +" cmpl %%eax, %%ecx\n" +" jbe .L_contiguous_in_window\n" /* if (write >= nbytes) */ + +" movl 92(%%rsp), %%esi\n" /* from = wsize */ +" addq 56(%%rsp), %%rsi\n" /* from += window */ +" addq %%rax, %%rsi\n" /* from += write */ +" subq %%rcx, %%rsi\n" /* from -= nbytes */ +" subl %%eax, %%ecx\n" /* nbytes -= write */ + +" movl %%r14d, %%eax\n" /* eax = len */ +" cmpl %%ecx, %%eax\n" +" jbe .L_do_copy\n" /* if (nbytes >= len) */ + +" subl %%ecx, %%eax\n" /* len -= nbytes */ +" rep movsb\n" +" movq 56(%%rsp), %%rsi\n" /* from = window */ +" movl 96(%%rsp), %%ecx\n" /* nbytes = write */ +" cmpl %%ecx, %%eax\n" +" jbe .L_do_copy\n" /* if (nbytes >= len) */ + +" subl %%ecx, %%eax\n" /* len -= nbytes */ +" rep movsb\n" +" movq %%rdi, %%rsi\n" +" subq %%r15, %%rsi\n" /* from = out - dist */ +" jmp .L_do_copy\n" + +".align 32,0x90\n" +".L_contiguous_in_window:\n" +" movq 56(%%rsp), %%rsi\n" /* rsi = window */ +" addq %%rax, %%rsi\n" +" subq %%rcx, %%rsi\n" /* from += write - nbytes */ + +" movl %%r14d, %%eax\n" /* eax = len */ +" cmpl %%ecx, %%eax\n" +" jbe .L_do_copy\n" /* if (nbytes >= len) */ + +" subl %%ecx, %%eax\n" /* len -= nbytes */ +" rep movsb\n" +" movq %%rdi, %%rsi\n" +" subq %%r15, %%rsi\n" /* from = out - dist */ +" jmp .L_do_copy\n" /* if (nbytes >= len) */ + +".align 32,0x90\n" +".L_do_copy:\n" +" movl %%eax, %%ecx\n" /* ecx = len */ +" rep movsb\n" + +" movq %%r8, %%rsi\n" /* move in back to %esi, toss from */ +" jmp .L_while_test\n" + +".L_test_for_end_of_block:\n" +" testb $32, %%al\n" +" jz .L_invalid_literal_length_code\n" +" movl $1, 116(%%rsp)\n" +" jmp .L_break_loop_with_status\n" + +".L_invalid_literal_length_code:\n" +" movl $2, 116(%%rsp)\n" +" jmp .L_break_loop_with_status\n" + +".L_invalid_distance_code:\n" +" movl $3, 116(%%rsp)\n" +" jmp .L_break_loop_with_status\n" + +".L_invalid_distance_too_far:\n" +" movl $4, 116(%%rsp)\n" +" jmp .L_break_loop_with_status\n" + +".L_break_loop:\n" +" movl $0, 116(%%rsp)\n" + +".L_break_loop_with_status:\n" +/* put in, out, bits, and hold back into ar and pop esp */ +" movq %%rsi, 16(%%rsp)\n" /* in */ +" movq %%rdi, 32(%%rsp)\n" /* out */ +" movl %%ebx, 88(%%rsp)\n" /* bits */ +" movq %%rdx, 80(%%rsp)\n" /* hold */ +" movq (%%rsp), %%rax\n" /* restore rbp and rsp */ +" movq 8(%%rsp), %%rbp\n" +" movq %%rax, %%rsp\n" + : + : "m" (ar) + : "memory", "%rax", "%rbx", "%rcx", "%rdx", "%rsi", "%rdi", + "%r8", "%r9", "%r10", "%r11", "%r12", "%r13", "%r14", "%r15" + ); +#elif ( defined( __GNUC__ ) || defined( __ICC ) ) && defined( __i386 ) + __asm__ __volatile__ ( +" leal %0, %%eax\n" +" movl %%esp, (%%eax)\n" /* save esp, ebp */ +" movl %%ebp, 4(%%eax)\n" +" movl %%eax, %%esp\n" +" movl 8(%%esp), %%esi\n" /* esi = in */ +" movl 16(%%esp), %%edi\n" /* edi = out */ +" movl 40(%%esp), %%edx\n" /* edx = hold */ +" movl 44(%%esp), %%ebx\n" /* ebx = bits */ +" movl 32(%%esp), %%ebp\n" /* ebp = lcode */ + +" cld\n" +" jmp .L_do_loop\n" + +".align 32,0x90\n" +".L_while_test:\n" +" cmpl %%edi, 24(%%esp)\n" /* out < end */ +" jbe .L_break_loop\n" +" cmpl %%esi, 12(%%esp)\n" /* in < last */ +" jbe .L_break_loop\n" + +".L_do_loop:\n" +" cmpb $15, %%bl\n" +" ja .L_get_length_code\n" /* if (15 < bits) */ + +" xorl %%eax, %%eax\n" +" lodsw\n" /* al = *(ushort *)in++ */ +" movb %%bl, %%cl\n" /* cl = bits, needs it for shifting */ +" addb $16, %%bl\n" /* bits 
+= 16 */ +" shll %%cl, %%eax\n" +" orl %%eax, %%edx\n" /* hold |= *((ushort *)in)++ << bits */ + +".L_get_length_code:\n" +" movl 56(%%esp), %%eax\n" /* eax = lmask */ +" andl %%edx, %%eax\n" /* eax &= hold */ +" movl (%%ebp,%%eax,4), %%eax\n" /* eax = lcode[hold & lmask] */ + +".L_dolen:\n" +" movb %%ah, %%cl\n" /* cl = this.bits */ +" subb %%ah, %%bl\n" /* bits -= this.bits */ +" shrl %%cl, %%edx\n" /* hold >>= this.bits */ + +" testb %%al, %%al\n" +" jnz .L_test_for_length_base\n" /* if (op != 0) 45.7% */ + +" shrl $16, %%eax\n" /* output this.val char */ +" stosb\n" +" jmp .L_while_test\n" + +".align 32,0x90\n" +".L_test_for_length_base:\n" +" movl %%eax, %%ecx\n" /* len = this */ +" shrl $16, %%ecx\n" /* len = this.val */ +" movl %%ecx, 64(%%esp)\n" /* save len */ +" movb %%al, %%cl\n" + +" testb $16, %%al\n" +" jz .L_test_for_second_level_length\n" /* if ((op & 16) == 0) 8% */ +" andb $15, %%cl\n" /* op &= 15 */ +" jz .L_decode_distance\n" /* if (!op) */ +" cmpb %%cl, %%bl\n" +" jae .L_add_bits_to_len\n" /* if (op <= bits) */ + +" movb %%cl, %%ch\n" /* stash op in ch, freeing cl */ +" xorl %%eax, %%eax\n" +" lodsw\n" /* al = *(ushort *)in++ */ +" movb %%bl, %%cl\n" /* cl = bits, needs it for shifting */ +" addb $16, %%bl\n" /* bits += 16 */ +" shll %%cl, %%eax\n" +" orl %%eax, %%edx\n" /* hold |= *((ushort *)in)++ << bits */ +" movb %%ch, %%cl\n" /* move op back to ecx */ + +".L_add_bits_to_len:\n" +" subb %%cl, %%bl\n" +" xorl %%eax, %%eax\n" +" incl %%eax\n" +" shll %%cl, %%eax\n" +" decl %%eax\n" +" andl %%edx, %%eax\n" /* eax &= hold */ +" shrl %%cl, %%edx\n" +" addl %%eax, 64(%%esp)\n" /* len += hold & mask[op] */ + +".L_decode_distance:\n" +" cmpb $15, %%bl\n" +" ja .L_get_distance_code\n" /* if (15 < bits) */ + +" xorl %%eax, %%eax\n" +" lodsw\n" /* al = *(ushort *)in++ */ +" movb %%bl, %%cl\n" /* cl = bits, needs it for shifting */ +" addb $16, %%bl\n" /* bits += 16 */ +" shll %%cl, %%eax\n" +" orl %%eax, %%edx\n" /* hold |= *((ushort *)in)++ << bits */ + +".L_get_distance_code:\n" +" movl 60(%%esp), %%eax\n" /* eax = dmask */ +" movl 36(%%esp), %%ecx\n" /* ecx = dcode */ +" andl %%edx, %%eax\n" /* eax &= hold */ +" movl (%%ecx,%%eax,4), %%eax\n"/* eax = dcode[hold & dmask] */ + +".L_dodist:\n" +" movl %%eax, %%ebp\n" /* dist = this */ +" shrl $16, %%ebp\n" /* dist = this.val */ +" movb %%ah, %%cl\n" +" subb %%ah, %%bl\n" /* bits -= this.bits */ +" shrl %%cl, %%edx\n" /* hold >>= this.bits */ +" movb %%al, %%cl\n" /* cl = this.op */ + +" testb $16, %%al\n" /* if ((op & 16) == 0) */ +" jz .L_test_for_second_level_dist\n" +" andb $15, %%cl\n" /* op &= 15 */ +" jz .L_check_dist_one\n" +" cmpb %%cl, %%bl\n" +" jae .L_add_bits_to_dist\n" /* if (op <= bits) 97.6% */ + +" movb %%cl, %%ch\n" /* stash op in ch, freeing cl */ +" xorl %%eax, %%eax\n" +" lodsw\n" /* al = *(ushort *)in++ */ +" movb %%bl, %%cl\n" /* cl = bits, needs it for shifting */ +" addb $16, %%bl\n" /* bits += 16 */ +" shll %%cl, %%eax\n" +" orl %%eax, %%edx\n" /* hold |= *((ushort *)in)++ << bits */ +" movb %%ch, %%cl\n" /* move op back to ecx */ + +".L_add_bits_to_dist:\n" +" subb %%cl, %%bl\n" +" xorl %%eax, %%eax\n" +" incl %%eax\n" +" shll %%cl, %%eax\n" +" decl %%eax\n" /* (1 << op) - 1 */ +" andl %%edx, %%eax\n" /* eax &= hold */ +" shrl %%cl, %%edx\n" +" addl %%eax, %%ebp\n" /* dist += hold & ((1 << op) - 1) */ + +".L_check_window:\n" +" movl %%esi, 8(%%esp)\n" /* save in so from can use it's reg */ +" movl %%edi, %%eax\n" +" subl 20(%%esp), %%eax\n" /* nbytes = out - beg */ + +" cmpl %%ebp, %%eax\n" +" jb 
.L_clip_window\n" /* if (dist > nbytes) 4.2% */ + +" movl 64(%%esp), %%ecx\n" /* ecx = len */ +" movl %%edi, %%esi\n" +" subl %%ebp, %%esi\n" /* from = out - dist */ + +" sarl %%ecx\n" +" jnc .L_copy_two\n" /* if len % 2 == 0 */ + +" rep movsw\n" +" movb (%%esi), %%al\n" +" movb %%al, (%%edi)\n" +" incl %%edi\n" + +" movl 8(%%esp), %%esi\n" /* move in back to %esi, toss from */ +" movl 32(%%esp), %%ebp\n" /* ebp = lcode */ +" jmp .L_while_test\n" + +".L_copy_two:\n" +" rep movsw\n" +" movl 8(%%esp), %%esi\n" /* move in back to %esi, toss from */ +" movl 32(%%esp), %%ebp\n" /* ebp = lcode */ +" jmp .L_while_test\n" + +".align 32,0x90\n" +".L_check_dist_one:\n" +" cmpl $1, %%ebp\n" /* if dist 1, is a memset */ +" jne .L_check_window\n" +" cmpl %%edi, 20(%%esp)\n" +" je .L_check_window\n" /* out == beg, if outside window */ + +" movl 64(%%esp), %%ecx\n" /* ecx = len */ +" movb -1(%%edi), %%al\n" +" movb %%al, %%ah\n" + +" sarl %%ecx\n" +" jnc .L_set_two\n" +" movb %%al, (%%edi)\n" +" incl %%edi\n" + +".L_set_two:\n" +" rep stosw\n" +" movl 32(%%esp), %%ebp\n" /* ebp = lcode */ +" jmp .L_while_test\n" + +".align 32,0x90\n" +".L_test_for_second_level_length:\n" +" testb $64, %%al\n" +" jnz .L_test_for_end_of_block\n" /* if ((op & 64) != 0) */ + +" xorl %%eax, %%eax\n" +" incl %%eax\n" +" shll %%cl, %%eax\n" +" decl %%eax\n" +" andl %%edx, %%eax\n" /* eax &= hold */ +" addl 64(%%esp), %%eax\n" /* eax += len */ +" movl (%%ebp,%%eax,4), %%eax\n" /* eax = lcode[val+(hold&mask[op])]*/ +" jmp .L_dolen\n" + +".align 32,0x90\n" +".L_test_for_second_level_dist:\n" +" testb $64, %%al\n" +" jnz .L_invalid_distance_code\n" /* if ((op & 64) != 0) */ + +" xorl %%eax, %%eax\n" +" incl %%eax\n" +" shll %%cl, %%eax\n" +" decl %%eax\n" +" andl %%edx, %%eax\n" /* eax &= hold */ +" addl %%ebp, %%eax\n" /* eax += dist */ +" movl 36(%%esp), %%ecx\n" /* ecx = dcode */ +" movl (%%ecx,%%eax,4), %%eax\n" /* eax = dcode[val+(hold&mask[op])]*/ +" jmp .L_dodist\n" + +".align 32,0x90\n" +".L_clip_window:\n" +" movl %%eax, %%ecx\n" +" movl 48(%%esp), %%eax\n" /* eax = wsize */ +" negl %%ecx\n" /* nbytes = -nbytes */ +" movl 28(%%esp), %%esi\n" /* from = window */ + +" cmpl %%ebp, %%eax\n" +" jb .L_invalid_distance_too_far\n" /* if (dist > wsize) */ + +" addl %%ebp, %%ecx\n" /* nbytes = dist - nbytes */ +" cmpl $0, 52(%%esp)\n" +" jne .L_wrap_around_window\n" /* if (write != 0) */ + +" subl %%ecx, %%eax\n" +" addl %%eax, %%esi\n" /* from += wsize - nbytes */ + +" movl 64(%%esp), %%eax\n" /* eax = len */ +" cmpl %%ecx, %%eax\n" +" jbe .L_do_copy\n" /* if (nbytes >= len) */ + +" subl %%ecx, %%eax\n" /* len -= nbytes */ +" rep movsb\n" +" movl %%edi, %%esi\n" +" subl %%ebp, %%esi\n" /* from = out - dist */ +" jmp .L_do_copy\n" + +".align 32,0x90\n" +".L_wrap_around_window:\n" +" movl 52(%%esp), %%eax\n" /* eax = write */ +" cmpl %%eax, %%ecx\n" +" jbe .L_contiguous_in_window\n" /* if (write >= nbytes) */ + +" addl 48(%%esp), %%esi\n" /* from += wsize */ +" addl %%eax, %%esi\n" /* from += write */ +" subl %%ecx, %%esi\n" /* from -= nbytes */ +" subl %%eax, %%ecx\n" /* nbytes -= write */ + +" movl 64(%%esp), %%eax\n" /* eax = len */ +" cmpl %%ecx, %%eax\n" +" jbe .L_do_copy\n" /* if (nbytes >= len) */ + +" subl %%ecx, %%eax\n" /* len -= nbytes */ +" rep movsb\n" +" movl 28(%%esp), %%esi\n" /* from = window */ +" movl 52(%%esp), %%ecx\n" /* nbytes = write */ +" cmpl %%ecx, %%eax\n" +" jbe .L_do_copy\n" /* if (nbytes >= len) */ + +" subl %%ecx, %%eax\n" /* len -= nbytes */ +" rep movsb\n" +" movl %%edi, %%esi\n" +" subl %%ebp, 
%%esi\n" /* from = out - dist */ +" jmp .L_do_copy\n" + +".align 32,0x90\n" +".L_contiguous_in_window:\n" +" addl %%eax, %%esi\n" +" subl %%ecx, %%esi\n" /* from += write - nbytes */ + +" movl 64(%%esp), %%eax\n" /* eax = len */ +" cmpl %%ecx, %%eax\n" +" jbe .L_do_copy\n" /* if (nbytes >= len) */ + +" subl %%ecx, %%eax\n" /* len -= nbytes */ +" rep movsb\n" +" movl %%edi, %%esi\n" +" subl %%ebp, %%esi\n" /* from = out - dist */ +" jmp .L_do_copy\n" /* if (nbytes >= len) */ + +".align 32,0x90\n" +".L_do_copy:\n" +" movl %%eax, %%ecx\n" +" rep movsb\n" + +" movl 8(%%esp), %%esi\n" /* move in back to %esi, toss from */ +" movl 32(%%esp), %%ebp\n" /* ebp = lcode */ +" jmp .L_while_test\n" + +".L_test_for_end_of_block:\n" +" testb $32, %%al\n" +" jz .L_invalid_literal_length_code\n" +" movl $1, 72(%%esp)\n" +" jmp .L_break_loop_with_status\n" + +".L_invalid_literal_length_code:\n" +" movl $2, 72(%%esp)\n" +" jmp .L_break_loop_with_status\n" + +".L_invalid_distance_code:\n" +" movl $3, 72(%%esp)\n" +" jmp .L_break_loop_with_status\n" + +".L_invalid_distance_too_far:\n" +" movl 8(%%esp), %%esi\n" +" movl $4, 72(%%esp)\n" +" jmp .L_break_loop_with_status\n" + +".L_break_loop:\n" +" movl $0, 72(%%esp)\n" + +".L_break_loop_with_status:\n" +/* put in, out, bits, and hold back into ar and pop esp */ +" movl %%esi, 8(%%esp)\n" /* save in */ +" movl %%edi, 16(%%esp)\n" /* save out */ +" movl %%ebx, 44(%%esp)\n" /* save bits */ +" movl %%edx, 40(%%esp)\n" /* save hold */ +" movl 4(%%esp), %%ebp\n" /* restore esp, ebp */ +" movl (%%esp), %%esp\n" + : + : "m" (ar) + : "memory", "%eax", "%ebx", "%ecx", "%edx", "%esi", "%edi" + ); +#elif defined( _MSC_VER ) && ! defined( _M_AMD64 ) + __asm { + lea eax, ar + mov [eax], esp /* save esp, ebp */ + mov [eax+4], ebp + mov esp, eax + mov esi, [esp+8] /* esi = in */ + mov edi, [esp+16] /* edi = out */ + mov edx, [esp+40] /* edx = hold */ + mov ebx, [esp+44] /* ebx = bits */ + mov ebp, [esp+32] /* ebp = lcode */ + + cld + jmp L_do_loop + +ALIGN 4 +L_while_test: + cmp [esp+24], edi + jbe L_break_loop + cmp [esp+12], esi + jbe L_break_loop + +L_do_loop: + cmp bl, 15 + ja L_get_length_code /* if (15 < bits) */ + + xor eax, eax + lodsw /* al = *(ushort *)in++ */ + mov cl, bl /* cl = bits, needs it for shifting */ + add bl, 16 /* bits += 16 */ + shl eax, cl + or edx, eax /* hold |= *((ushort *)in)++ << bits */ + +L_get_length_code: + mov eax, [esp+56] /* eax = lmask */ + and eax, edx /* eax &= hold */ + mov eax, [ebp+eax*4] /* eax = lcode[hold & lmask] */ + +L_dolen: + mov cl, ah /* cl = this.bits */ + sub bl, ah /* bits -= this.bits */ + shr edx, cl /* hold >>= this.bits */ + + test al, al + jnz L_test_for_length_base /* if (op != 0) 45.7% */ + + shr eax, 16 /* output this.val char */ + stosb + jmp L_while_test + +ALIGN 4 +L_test_for_length_base: + mov ecx, eax /* len = this */ + shr ecx, 16 /* len = this.val */ + mov [esp+64], ecx /* save len */ + mov cl, al + + test al, 16 + jz L_test_for_second_level_length /* if ((op & 16) == 0) 8% */ + and cl, 15 /* op &= 15 */ + jz L_decode_distance /* if (!op) */ + cmp bl, cl + jae L_add_bits_to_len /* if (op <= bits) */ + + mov ch, cl /* stash op in ch, freeing cl */ + xor eax, eax + lodsw /* al = *(ushort *)in++ */ + mov cl, bl /* cl = bits, needs it for shifting */ + add bl, 16 /* bits += 16 */ + shl eax, cl + or edx, eax /* hold |= *((ushort *)in)++ << bits */ + mov cl, ch /* move op back to ecx */ + +L_add_bits_to_len: + sub bl, cl + xor eax, eax + inc eax + shl eax, cl + dec eax + and eax, edx /* eax &= hold */ + shr edx, 
cl + add [esp+64], eax /* len += hold & mask[op] */ + +L_decode_distance: + cmp bl, 15 + ja L_get_distance_code /* if (15 < bits) */ + + xor eax, eax + lodsw /* al = *(ushort *)in++ */ + mov cl, bl /* cl = bits, needs it for shifting */ + add bl, 16 /* bits += 16 */ + shl eax, cl + or edx, eax /* hold |= *((ushort *)in)++ << bits */ + +L_get_distance_code: + mov eax, [esp+60] /* eax = dmask */ + mov ecx, [esp+36] /* ecx = dcode */ + and eax, edx /* eax &= hold */ + mov eax, [ecx+eax*4]/* eax = dcode[hold & dmask] */ + +L_dodist: + mov ebp, eax /* dist = this */ + shr ebp, 16 /* dist = this.val */ + mov cl, ah + sub bl, ah /* bits -= this.bits */ + shr edx, cl /* hold >>= this.bits */ + mov cl, al /* cl = this.op */ + + test al, 16 /* if ((op & 16) == 0) */ + jz L_test_for_second_level_dist + and cl, 15 /* op &= 15 */ + jz L_check_dist_one + cmp bl, cl + jae L_add_bits_to_dist /* if (op <= bits) 97.6% */ + + mov ch, cl /* stash op in ch, freeing cl */ + xor eax, eax + lodsw /* al = *(ushort *)in++ */ + mov cl, bl /* cl = bits, needs it for shifting */ + add bl, 16 /* bits += 16 */ + shl eax, cl + or edx, eax /* hold |= *((ushort *)in)++ << bits */ + mov cl, ch /* move op back to ecx */ + +L_add_bits_to_dist: + sub bl, cl + xor eax, eax + inc eax + shl eax, cl + dec eax /* (1 << op) - 1 */ + and eax, edx /* eax &= hold */ + shr edx, cl + add ebp, eax /* dist += hold & ((1 << op) - 1) */ + +L_check_window: + mov [esp+8], esi /* save in so from can use it's reg */ + mov eax, edi + sub eax, [esp+20] /* nbytes = out - beg */ + + cmp eax, ebp + jb L_clip_window /* if (dist > nbytes) 4.2% */ + + mov ecx, [esp+64] /* ecx = len */ + mov esi, edi + sub esi, ebp /* from = out - dist */ + + sar ecx, 1 + jnc L_copy_two + + rep movsw + mov al, [esi] + mov [edi], al + inc edi + + mov esi, [esp+8] /* move in back to %esi, toss from */ + mov ebp, [esp+32] /* ebp = lcode */ + jmp L_while_test + +L_copy_two: + rep movsw + mov esi, [esp+8] /* move in back to %esi, toss from */ + mov ebp, [esp+32] /* ebp = lcode */ + jmp L_while_test + +ALIGN 4 +L_check_dist_one: + cmp ebp, 1 /* if dist 1, is a memset */ + jne L_check_window + cmp [esp+20], edi + je L_check_window /* out == beg, if outside window */ + + mov ecx, [esp+64] /* ecx = len */ + mov al, [edi-1] + mov ah, al + + sar ecx, 1 + jnc L_set_two + mov [edi], al /* memset out with from[-1] */ + inc edi + +L_set_two: + rep stosw + mov ebp, [esp+32] /* ebp = lcode */ + jmp L_while_test + +ALIGN 4 +L_test_for_second_level_length: + test al, 64 + jnz L_test_for_end_of_block /* if ((op & 64) != 0) */ + + xor eax, eax + inc eax + shl eax, cl + dec eax + and eax, edx /* eax &= hold */ + add eax, [esp+64] /* eax += len */ + mov eax, [ebp+eax*4] /* eax = lcode[val+(hold&mask[op])]*/ + jmp L_dolen + +ALIGN 4 +L_test_for_second_level_dist: + test al, 64 + jnz L_invalid_distance_code /* if ((op & 64) != 0) */ + + xor eax, eax + inc eax + shl eax, cl + dec eax + and eax, edx /* eax &= hold */ + add eax, ebp /* eax += dist */ + mov ecx, [esp+36] /* ecx = dcode */ + mov eax, [ecx+eax*4] /* eax = dcode[val+(hold&mask[op])]*/ + jmp L_dodist + +ALIGN 4 +L_clip_window: + mov ecx, eax + mov eax, [esp+48] /* eax = wsize */ + neg ecx /* nbytes = -nbytes */ + mov esi, [esp+28] /* from = window */ + + cmp eax, ebp + jb L_invalid_distance_too_far /* if (dist > wsize) */ + + add ecx, ebp /* nbytes = dist - nbytes */ + cmp dword ptr [esp+52], 0 + jne L_wrap_around_window /* if (write != 0) */ + + sub eax, ecx + add esi, eax /* from += wsize - nbytes */ + + mov eax, [esp+64] /* eax = len 
*/ + cmp eax, ecx + jbe L_do_copy /* if (nbytes >= len) */ + + sub eax, ecx /* len -= nbytes */ + rep movsb + mov esi, edi + sub esi, ebp /* from = out - dist */ + jmp L_do_copy + +ALIGN 4 +L_wrap_around_window: + mov eax, [esp+52] /* eax = write */ + cmp ecx, eax + jbe L_contiguous_in_window /* if (write >= nbytes) */ + + add esi, [esp+48] /* from += wsize */ + add esi, eax /* from += write */ + sub esi, ecx /* from -= nbytes */ + sub ecx, eax /* nbytes -= write */ + + mov eax, [esp+64] /* eax = len */ + cmp eax, ecx + jbe L_do_copy /* if (nbytes >= len) */ + + sub eax, ecx /* len -= nbytes */ + rep movsb + mov esi, [esp+28] /* from = window */ + mov ecx, [esp+52] /* nbytes = write */ + cmp eax, ecx + jbe L_do_copy /* if (nbytes >= len) */ + + sub eax, ecx /* len -= nbytes */ + rep movsb + mov esi, edi + sub esi, ebp /* from = out - dist */ + jmp L_do_copy + +ALIGN 4 +L_contiguous_in_window: + add esi, eax + sub esi, ecx /* from += write - nbytes */ + + mov eax, [esp+64] /* eax = len */ + cmp eax, ecx + jbe L_do_copy /* if (nbytes >= len) */ + + sub eax, ecx /* len -= nbytes */ + rep movsb + mov esi, edi + sub esi, ebp /* from = out - dist */ + jmp L_do_copy + +ALIGN 4 +L_do_copy: + mov ecx, eax + rep movsb + + mov esi, [esp+8] /* move in back to %esi, toss from */ + mov ebp, [esp+32] /* ebp = lcode */ + jmp L_while_test + +L_test_for_end_of_block: + test al, 32 + jz L_invalid_literal_length_code + mov dword ptr [esp+72], 1 + jmp L_break_loop_with_status + +L_invalid_literal_length_code: + mov dword ptr [esp+72], 2 + jmp L_break_loop_with_status + +L_invalid_distance_code: + mov dword ptr [esp+72], 3 + jmp L_break_loop_with_status + +L_invalid_distance_too_far: + mov esi, [esp+4] + mov dword ptr [esp+72], 4 + jmp L_break_loop_with_status + +L_break_loop: + mov dword ptr [esp+72], 0 + +L_break_loop_with_status: +/* put in, out, bits, and hold back into ar and pop esp */ + mov [esp+8], esi /* save in */ + mov [esp+16], edi /* save out */ + mov [esp+44], ebx /* save bits */ + mov [esp+40], edx /* save hold */ + mov ebp, [esp+4] /* restore esp, ebp */ + mov esp, [esp] + } +#else +#error "x86 architecture not defined" +#endif + + if (ar.status > 1) { + if (ar.status == 2) + strm->msg = "invalid literal/length code"; + else if (ar.status == 3) + strm->msg = "invalid distance code"; + else + strm->msg = "invalid distance too far back"; + state->mode = BAD; + } + else if ( ar.status == 1 ) { + state->mode = TYPE; + } + + /* return unused bytes (on entry, bits < 8, so in won't go too far back) */ + ar.len = ar.bits >> 3; + ar.in -= ar.len; + ar.bits -= ar.len << 3; + ar.hold &= (1U << ar.bits) - 1; + + /* update state and return */ + strm->next_in = ar.in; + strm->next_out = ar.out; + strm->avail_in = (unsigned)(ar.in < ar.last ? + PAD_AVAIL_IN + (ar.last - ar.in) : + PAD_AVAIL_IN - (ar.in - ar.last)); + strm->avail_out = (unsigned)(ar.out < ar.end ? 
+ PAD_AVAIL_OUT + (ar.end - ar.out) : + PAD_AVAIL_OUT - (ar.out - ar.end)); + state->hold = ar.hold; + state->bits = ar.bits; + return; +} + ADDED compat/zlib/contrib/inflate86/inffast.S Index: compat/zlib/contrib/inflate86/inffast.S ================================================================== --- compat/zlib/contrib/inflate86/inffast.S +++ compat/zlib/contrib/inflate86/inffast.S @@ -0,0 +1,1368 @@ +/* + * inffast.S is a hand tuned assembler version of: + * + * inffast.c -- fast decoding + * Copyright (C) 1995-2003 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + * + * Copyright (C) 2003 Chris Anderson + * Please use the copyright conditions above. + * + * This version (Jan-23-2003) of inflate_fast was coded and tested under + * GNU/Linux on a pentium 3, using the gcc-3.2 compiler distribution. On that + * machine, I found that gzip style archives decompressed about 20% faster than + * the gcc-3.2 -O3 -fomit-frame-pointer compiled version. Your results will + * depend on how large of a buffer is used for z_stream.next_in & next_out + * (8K-32K worked best for my 256K cpu cache) and how much overhead there is in + * stream processing I/O and crc32/adler32. In my case, this routine used + * 70% of the cpu time and crc32 used 20%. + * + * I am confident that this version will work in the general case, but I have + * not tested a wide variety of datasets or a wide variety of platforms. + * + * Jan-24-2003 -- Added -DUSE_MMX define for slightly faster inflating. + * It should be a runtime flag instead of compile time flag... + * + * Jan-26-2003 -- Added runtime check for MMX support with cpuid instruction. + * With -DUSE_MMX, only MMX code is compiled. With -DNO_MMX, only non-MMX code + * is compiled. Without either option, runtime detection is enabled. Runtime + * detection should work on all modern cpus and the recommended algorithm (flip + * ID bit on eflags and then use the cpuid instruction) is used in many + * multimedia applications. Tested under win2k with gcc-2.95 and gas-2.12 + * distributed with cygwin3. Compiling with gcc-2.95 -c inffast.S -o + * inffast.obj generates a COFF object which can then be linked with MSVC++ + * compiled code. Tested under FreeBSD 4.7 with gcc-2.95. + * + * Jan-28-2003 -- Tested Athlon XP... MMX mode is slower than no MMX (and + * slower than compiler generated code). Adjusted cpuid check to use the MMX + * code only for Pentiums < P4 until I have more data on the P4. Speed + * improvement is only about 15% on the Athlon when compared with code generated + * with MSVC++. Not sure yet, but I think the P4 will also be slower using the + * MMX mode because many of its x86 ALU instructions execute in .5 cycles and + * have less latency than MMX ops. Added code to buffer the last 11 bytes of + * the input stream since the MMX code grabs bits in chunks of 32, which + * differs from the inffast.c algorithm. I don't think there would have been + * read overruns where a page boundary was crossed (a segfault), but there + * could have been overruns when next_in ends on unaligned memory (uninitialized + * memory read). + * + * Mar-13-2003 -- P4 MMX is slightly slower than P4 NO_MMX. I created a C + * version of the non-MMX code so that it doesn't depend on zstrm and zstate + * structure offsets which are hard coded in this file.
This was last tested + * with zlib-1.2.0 which is currently in beta testing, newer versions of this + * and inffas86.c can be found at http://www.eetbeetee.com/zlib/ and + * http://www.charm.net/~christop/zlib/ + */ + + +/* + * if you have underscore linking problems (_inflate_fast undefined), try + * using -DGAS_COFF + */ +#if ! defined( GAS_COFF ) && ! defined( GAS_ELF ) + +#if defined( WIN32 ) || defined( __CYGWIN__ ) +#define GAS_COFF /* windows object format */ +#else +#define GAS_ELF +#endif + +#endif /* ! GAS_COFF && ! GAS_ELF */ + + +#if defined( GAS_COFF ) + +/* coff externals have underscores */ +#define inflate_fast _inflate_fast +#define inflate_fast_use_mmx _inflate_fast_use_mmx + +#endif /* GAS_COFF */ + + +.file "inffast.S" + +.globl inflate_fast + +.text +.align 4,0 +.L_invalid_literal_length_code_msg: +.string "invalid literal/length code" + +.align 4,0 +.L_invalid_distance_code_msg: +.string "invalid distance code" + +.align 4,0 +.L_invalid_distance_too_far_msg: +.string "invalid distance too far back" + +#if ! defined( NO_MMX ) +.align 4,0 +.L_mask: /* mask[N] = ( 1 << N ) - 1 */ +.long 0 +.long 1 +.long 3 +.long 7 +.long 15 +.long 31 +.long 63 +.long 127 +.long 255 +.long 511 +.long 1023 +.long 2047 +.long 4095 +.long 8191 +.long 16383 +.long 32767 +.long 65535 +.long 131071 +.long 262143 +.long 524287 +.long 1048575 +.long 2097151 +.long 4194303 +.long 8388607 +.long 16777215 +.long 33554431 +.long 67108863 +.long 134217727 +.long 268435455 +.long 536870911 +.long 1073741823 +.long 2147483647 +.long 4294967295 +#endif /* NO_MMX */ + +.text + +/* + * struct z_stream offsets, in zlib.h + */ +#define next_in_strm 0 /* strm->next_in */ +#define avail_in_strm 4 /* strm->avail_in */ +#define next_out_strm 12 /* strm->next_out */ +#define avail_out_strm 16 /* strm->avail_out */ +#define msg_strm 24 /* strm->msg */ +#define state_strm 28 /* strm->state */ + +/* + * struct inflate_state offsets, in inflate.h + */ +#define mode_state 0 /* state->mode */ +#define wsize_state 32 /* state->wsize */ +#define write_state 40 /* state->write */ +#define window_state 44 /* state->window */ +#define hold_state 48 /* state->hold */ +#define bits_state 52 /* state->bits */ +#define lencode_state 68 /* state->lencode */ +#define distcode_state 72 /* state->distcode */ +#define lenbits_state 76 /* state->lenbits */ +#define distbits_state 80 /* state->distbits */ + +/* + * inflate_fast's activation record + */ +#define local_var_size 64 /* how much local space for vars */ +#define strm_sp 88 /* first arg: z_stream * (local_var_size + 24) */ +#define start_sp 92 /* second arg: unsigned int (local_var_size + 28) */ + +/* + * offsets for local vars on stack + */ +#define out 60 /* unsigned char* */ +#define window 56 /* unsigned char* */ +#define wsize 52 /* unsigned int */ +#define write 48 /* unsigned int */ +#define in 44 /* unsigned char* */ +#define beg 40 /* unsigned char* */ +#define buf 28 /* char[ 12 ] */ +#define len 24 /* unsigned int */ +#define last 20 /* unsigned char* */ +#define end 16 /* unsigned char* */ +#define dcode 12 /* code* */ +#define lcode 8 /* code* */ +#define dmask 4 /* unsigned int */ +#define lmask 0 /* unsigned int */ + +/* + * typedef enum inflate_mode consts, in inflate.h + */ +#define INFLATE_MODE_TYPE 11 /* state->mode flags enum-ed in inflate.h */ +#define INFLATE_MODE_BAD 26 + + +#if ! defined( USE_MMX ) && ! 
defined( NO_MMX ) + +#define RUN_TIME_MMX + +#define CHECK_MMX 1 +#define DO_USE_MMX 2 +#define DONT_USE_MMX 3 + +.globl inflate_fast_use_mmx + +.data + +.align 4,0 +inflate_fast_use_mmx: /* integer flag for run time control 1=check,2=mmx,3=no */ +.long CHECK_MMX + +#if defined( GAS_ELF ) +/* elf info */ +.type inflate_fast_use_mmx,@object +.size inflate_fast_use_mmx,4 +#endif + +#endif /* RUN_TIME_MMX */ + +#if defined( GAS_COFF ) +/* coff info: scl 2 = extern, type 32 = function */ +.def inflate_fast; .scl 2; .type 32; .endef +#endif + +.text + +.align 32,0x90 +inflate_fast: + pushl %edi + pushl %esi + pushl %ebp + pushl %ebx + pushf /* save eflags (strm_sp, state_sp assumes this is 32 bits) */ + subl $local_var_size, %esp + cld + +#define strm_r %esi +#define state_r %edi + + movl strm_sp(%esp), strm_r + movl state_strm(strm_r), state_r + + /* in = strm->next_in; + * out = strm->next_out; + * last = in + strm->avail_in - 11; + * beg = out - (start - strm->avail_out); + * end = out + (strm->avail_out - 257); + */ + movl avail_in_strm(strm_r), %edx + movl next_in_strm(strm_r), %eax + + addl %eax, %edx /* avail_in += next_in */ + subl $11, %edx /* avail_in -= 11 */ + + movl %eax, in(%esp) + movl %edx, last(%esp) + + movl start_sp(%esp), %ebp + movl avail_out_strm(strm_r), %ecx + movl next_out_strm(strm_r), %ebx + + subl %ecx, %ebp /* start -= avail_out */ + negl %ebp /* start = -start */ + addl %ebx, %ebp /* start += next_out */ + + subl $257, %ecx /* avail_out -= 257 */ + addl %ebx, %ecx /* avail_out += out */ + + movl %ebx, out(%esp) + movl %ebp, beg(%esp) + movl %ecx, end(%esp) + + /* wsize = state->wsize; + * write = state->write; + * window = state->window; + * hold = state->hold; + * bits = state->bits; + * lcode = state->lencode; + * dcode = state->distcode; + * lmask = ( 1 << state->lenbits ) - 1; + * dmask = ( 1 << state->distbits ) - 1; + */ + + movl lencode_state(state_r), %eax + movl distcode_state(state_r), %ecx + + movl %eax, lcode(%esp) + movl %ecx, dcode(%esp) + + movl $1, %eax + movl lenbits_state(state_r), %ecx + shll %cl, %eax + decl %eax + movl %eax, lmask(%esp) + + movl $1, %eax + movl distbits_state(state_r), %ecx + shll %cl, %eax + decl %eax + movl %eax, dmask(%esp) + + movl wsize_state(state_r), %eax + movl write_state(state_r), %ecx + movl window_state(state_r), %edx + + movl %eax, wsize(%esp) + movl %ecx, write(%esp) + movl %edx, window(%esp) + + movl hold_state(state_r), %ebp + movl bits_state(state_r), %ebx + +#undef strm_r +#undef state_r + +#define in_r %esi +#define from_r %esi +#define out_r %edi + + movl in(%esp), in_r + movl last(%esp), %ecx + cmpl in_r, %ecx + ja .L_align_long /* if in < last */ + + addl $11, %ecx /* ecx = &in[ avail_in ] */ + subl in_r, %ecx /* ecx = avail_in */ + movl $12, %eax + subl %ecx, %eax /* eax = 12 - avail_in */ + leal buf(%esp), %edi + rep movsb /* memcpy( buf, in, avail_in ) */ + movl %eax, %ecx + xorl %eax, %eax + rep stosb /* memset( &buf[ avail_in ], 0, 12 - avail_in ) */ + leal buf(%esp), in_r /* in = buf */ + movl in_r, last(%esp) /* last = in, do just one iteration */ + jmp .L_is_aligned + + /* align in_r on long boundary */ +.L_align_long: + testl $3, in_r + jz .L_is_aligned + xorl %eax, %eax + movb (in_r), %al + incl in_r + movl %ebx, %ecx + addl $8, %ebx + shll %cl, %eax + orl %eax, %ebp + jmp .L_align_long + +.L_is_aligned: + movl out(%esp), out_r + +#if defined( NO_MMX ) + jmp .L_do_loop +#endif + +#if defined( USE_MMX ) + jmp .L_init_mmx +#endif + +/*** Runtime MMX check ***/ + +#if defined( RUN_TIME_MMX ) 
+.L_check_mmx: + cmpl $DO_USE_MMX, inflate_fast_use_mmx + je .L_init_mmx + ja .L_do_loop /* > 2 */ + + pushl %eax + pushl %ebx + pushl %ecx + pushl %edx + pushf + movl (%esp), %eax /* copy eflags to eax */ + xorl $0x200000, (%esp) /* try toggling ID bit of eflags (bit 21) + * to see if cpu supports cpuid... + * ID bit method not supported by NexGen but + * bios may load a cpuid instruction and + * cpuid may be disabled on Cyrix 5-6x86 */ + popf + pushf + popl %edx /* copy new eflags to edx */ + xorl %eax, %edx /* test if ID bit is flipped */ + jz .L_dont_use_mmx /* not flipped if zero */ + xorl %eax, %eax + cpuid + cmpl $0x756e6547, %ebx /* check for GenuineIntel in ebx,ecx,edx */ + jne .L_dont_use_mmx + cmpl $0x6c65746e, %ecx + jne .L_dont_use_mmx + cmpl $0x49656e69, %edx + jne .L_dont_use_mmx + movl $1, %eax + cpuid /* get cpu features */ + shrl $8, %eax + andl $15, %eax + cmpl $6, %eax /* check for Pentium family, is 0xf for P4 */ + jne .L_dont_use_mmx + testl $0x800000, %edx /* test if MMX feature is set (bit 23) */ + jnz .L_use_mmx + jmp .L_dont_use_mmx +.L_use_mmx: + movl $DO_USE_MMX, inflate_fast_use_mmx + jmp .L_check_mmx_pop +.L_dont_use_mmx: + movl $DONT_USE_MMX, inflate_fast_use_mmx +.L_check_mmx_pop: + popl %edx + popl %ecx + popl %ebx + popl %eax + jmp .L_check_mmx +#endif + + +/*** Non-MMX code ***/ + +#if defined ( NO_MMX ) || defined( RUN_TIME_MMX ) + +#define hold_r %ebp +#define bits_r %bl +#define bitslong_r %ebx + +.align 32,0x90 +.L_while_test: + /* while (in < last && out < end) + */ + cmpl out_r, end(%esp) + jbe .L_break_loop /* if (out >= end) */ + + cmpl in_r, last(%esp) + jbe .L_break_loop + +.L_do_loop: + /* regs: %esi = in, %ebp = hold, %bl = bits, %edi = out + * + * do { + * if (bits < 15) { + * hold |= *((unsigned short *)in)++ << bits; + * bits += 16 + * } + * this = lcode[hold & lmask] + */ + cmpb $15, bits_r + ja .L_get_length_code /* if (15 < bits) */ + + xorl %eax, %eax + lodsw /* al = *(ushort *)in++ */ + movb bits_r, %cl /* cl = bits, needs it for shifting */ + addb $16, bits_r /* bits += 16 */ + shll %cl, %eax + orl %eax, hold_r /* hold |= *((ushort *)in)++ << bits */ + +.L_get_length_code: + movl lmask(%esp), %edx /* edx = lmask */ + movl lcode(%esp), %ecx /* ecx = lcode */ + andl hold_r, %edx /* edx &= hold */ + movl (%ecx,%edx,4), %eax /* eax = lcode[hold & lmask] */ + +.L_dolen: + /* regs: %esi = in, %ebp = hold, %bl = bits, %edi = out + * + * dolen: + * bits -= this.bits; + * hold >>= this.bits + */ + movb %ah, %cl /* cl = this.bits */ + subb %ah, bits_r /* bits -= this.bits */ + shrl %cl, hold_r /* hold >>= this.bits */ + + /* check if op is a literal + * if (op == 0) { + * PUP(out) = this.val; + * } + */ + testb %al, %al + jnz .L_test_for_length_base /* if (op != 0) 45.7% */ + + shrl $16, %eax /* output this.val char */ + stosb + jmp .L_while_test + +.L_test_for_length_base: + /* regs: %esi = in, %ebp = hold, %bl = bits, %edi = out, %edx = len + * + * else if (op & 16) { + * len = this.val + * op &= 15 + * if (op) { + * if (op > bits) { + * hold |= *((unsigned short *)in)++ << bits; + * bits += 16 + * } + * len += hold & mask[op]; + * bits -= op; + * hold >>= op; + * } + */ +#define len_r %edx + movl %eax, len_r /* len = this */ + shrl $16, len_r /* len = this.val */ + movb %al, %cl + + testb $16, %al + jz .L_test_for_second_level_length /* if ((op & 16) == 0) 8% */ + andb $15, %cl /* op &= 15 */ + jz .L_save_len /* if (!op) */ + cmpb %cl, bits_r + jae .L_add_bits_to_len /* if (op <= bits) */ + + movb %cl, %ch /* stash op in ch, freeing cl */ + 
xorl %eax, %eax + lodsw /* al = *(ushort *)in++ */ + movb bits_r, %cl /* cl = bits, needs it for shifting */ + addb $16, bits_r /* bits += 16 */ + shll %cl, %eax + orl %eax, hold_r /* hold |= *((ushort *)in)++ << bits */ + movb %ch, %cl /* move op back to ecx */ + +.L_add_bits_to_len: + movl $1, %eax + shll %cl, %eax + decl %eax + subb %cl, bits_r + andl hold_r, %eax /* eax &= hold */ + shrl %cl, hold_r + addl %eax, len_r /* len += hold & mask[op] */ + +.L_save_len: + movl len_r, len(%esp) /* save len */ +#undef len_r + +.L_decode_distance: + /* regs: %esi = in, %ebp = hold, %bl = bits, %edi = out, %edx = dist + * + * if (bits < 15) { + * hold |= *((unsigned short *)in)++ << bits; + * bits += 16 + * } + * this = dcode[hold & dmask]; + * dodist: + * bits -= this.bits; + * hold >>= this.bits; + * op = this.op; + */ + + cmpb $15, bits_r + ja .L_get_distance_code /* if (15 < bits) */ + + xorl %eax, %eax + lodsw /* al = *(ushort *)in++ */ + movb bits_r, %cl /* cl = bits, needs it for shifting */ + addb $16, bits_r /* bits += 16 */ + shll %cl, %eax + orl %eax, hold_r /* hold |= *((ushort *)in)++ << bits */ + +.L_get_distance_code: + movl dmask(%esp), %edx /* edx = dmask */ + movl dcode(%esp), %ecx /* ecx = dcode */ + andl hold_r, %edx /* edx &= hold */ + movl (%ecx,%edx,4), %eax /* eax = dcode[hold & dmask] */ + +#define dist_r %edx +.L_dodist: + movl %eax, dist_r /* dist = this */ + shrl $16, dist_r /* dist = this.val */ + movb %ah, %cl + subb %ah, bits_r /* bits -= this.bits */ + shrl %cl, hold_r /* hold >>= this.bits */ + + /* if (op & 16) { + * dist = this.val + * op &= 15 + * if (op > bits) { + * hold |= *((unsigned short *)in)++ << bits; + * bits += 16 + * } + * dist += hold & mask[op]; + * bits -= op; + * hold >>= op; + */ + movb %al, %cl /* cl = this.op */ + + testb $16, %al /* if ((op & 16) == 0) */ + jz .L_test_for_second_level_dist + andb $15, %cl /* op &= 15 */ + jz .L_check_dist_one + cmpb %cl, bits_r + jae .L_add_bits_to_dist /* if (op <= bits) 97.6% */ + + movb %cl, %ch /* stash op in ch, freeing cl */ + xorl %eax, %eax + lodsw /* al = *(ushort *)in++ */ + movb bits_r, %cl /* cl = bits, needs it for shifting */ + addb $16, bits_r /* bits += 16 */ + shll %cl, %eax + orl %eax, hold_r /* hold |= *((ushort *)in)++ << bits */ + movb %ch, %cl /* move op back to ecx */ + +.L_add_bits_to_dist: + movl $1, %eax + shll %cl, %eax + decl %eax /* (1 << op) - 1 */ + subb %cl, bits_r + andl hold_r, %eax /* eax &= hold */ + shrl %cl, hold_r + addl %eax, dist_r /* dist += hold & ((1 << op) - 1) */ + jmp .L_check_window + +.L_check_window: + /* regs: %esi = from, %ebp = hold, %bl = bits, %edi = out, %edx = dist + * %ecx = nbytes + * + * nbytes = out - beg; + * if (dist <= nbytes) { + * from = out - dist; + * do { + * PUP(out) = PUP(from); + * } while (--len > 0) { + * } + */ + + movl in_r, in(%esp) /* save in so from can use it's reg */ + movl out_r, %eax + subl beg(%esp), %eax /* nbytes = out - beg */ + + cmpl dist_r, %eax + jb .L_clip_window /* if (dist > nbytes) 4.2% */ + + movl len(%esp), %ecx + movl out_r, from_r + subl dist_r, from_r /* from = out - dist */ + + subl $3, %ecx + movb (from_r), %al + movb %al, (out_r) + movb 1(from_r), %al + movb 2(from_r), %dl + addl $3, from_r + movb %al, 1(out_r) + movb %dl, 2(out_r) + addl $3, out_r + rep movsb + + movl in(%esp), in_r /* move in back to %esi, toss from */ + jmp .L_while_test + +.align 16,0x90 +.L_check_dist_one: + cmpl $1, dist_r + jne .L_check_window + cmpl out_r, beg(%esp) + je .L_check_window + + decl out_r + movl len(%esp), %ecx + movb 
(out_r), %al + subl $3, %ecx + + movb %al, 1(out_r) + movb %al, 2(out_r) + movb %al, 3(out_r) + addl $4, out_r + rep stosb + + jmp .L_while_test + +.align 16,0x90 +.L_test_for_second_level_length: + /* else if ((op & 64) == 0) { + * this = lcode[this.val + (hold & mask[op])]; + * } + */ + testb $64, %al + jnz .L_test_for_end_of_block /* if ((op & 64) != 0) */ + + movl $1, %eax + shll %cl, %eax + decl %eax + andl hold_r, %eax /* eax &= hold */ + addl %edx, %eax /* eax += this.val */ + movl lcode(%esp), %edx /* edx = lcode */ + movl (%edx,%eax,4), %eax /* eax = lcode[val + (hold&mask[op])] */ + jmp .L_dolen + +.align 16,0x90 +.L_test_for_second_level_dist: + /* else if ((op & 64) == 0) { + * this = dcode[this.val + (hold & mask[op])]; + * } + */ + testb $64, %al + jnz .L_invalid_distance_code /* if ((op & 64) != 0) */ + + movl $1, %eax + shll %cl, %eax + decl %eax + andl hold_r, %eax /* eax &= hold */ + addl %edx, %eax /* eax += this.val */ + movl dcode(%esp), %edx /* edx = dcode */ + movl (%edx,%eax,4), %eax /* eax = dcode[val + (hold&mask[op])] */ + jmp .L_dodist + +.align 16,0x90 +.L_clip_window: + /* regs: %esi = from, %ebp = hold, %bl = bits, %edi = out, %edx = dist + * %ecx = nbytes + * + * else { + * if (dist > wsize) { + * invalid distance + * } + * from = window; + * nbytes = dist - nbytes; + * if (write == 0) { + * from += wsize - nbytes; + */ +#define nbytes_r %ecx + movl %eax, nbytes_r + movl wsize(%esp), %eax /* prepare for dist compare */ + negl nbytes_r /* nbytes = -nbytes */ + movl window(%esp), from_r /* from = window */ + + cmpl dist_r, %eax + jb .L_invalid_distance_too_far /* if (dist > wsize) */ + + addl dist_r, nbytes_r /* nbytes = dist - nbytes */ + cmpl $0, write(%esp) + jne .L_wrap_around_window /* if (write != 0) */ + + subl nbytes_r, %eax + addl %eax, from_r /* from += wsize - nbytes */ + + /* regs: %esi = from, %ebp = hold, %bl = bits, %edi = out, %edx = dist + * %ecx = nbytes, %eax = len + * + * if (nbytes < len) { + * len -= nbytes; + * do { + * PUP(out) = PUP(from); + * } while (--nbytes); + * from = out - dist; + * } + * } + */ +#define len_r %eax + movl len(%esp), len_r + cmpl nbytes_r, len_r + jbe .L_do_copy1 /* if (nbytes >= len) */ + + subl nbytes_r, len_r /* len -= nbytes */ + rep movsb + movl out_r, from_r + subl dist_r, from_r /* from = out - dist */ + jmp .L_do_copy1 + + cmpl nbytes_r, len_r + jbe .L_do_copy1 /* if (nbytes >= len) */ + + subl nbytes_r, len_r /* len -= nbytes */ + rep movsb + movl out_r, from_r + subl dist_r, from_r /* from = out - dist */ + jmp .L_do_copy1 + +.L_wrap_around_window: + /* regs: %esi = from, %ebp = hold, %bl = bits, %edi = out, %edx = dist + * %ecx = nbytes, %eax = write, %eax = len + * + * else if (write < nbytes) { + * from += wsize + write - nbytes; + * nbytes -= write; + * if (nbytes < len) { + * len -= nbytes; + * do { + * PUP(out) = PUP(from); + * } while (--nbytes); + * from = window; + * nbytes = write; + * if (nbytes < len) { + * len -= nbytes; + * do { + * PUP(out) = PUP(from); + * } while(--nbytes); + * from = out - dist; + * } + * } + * } + */ +#define write_r %eax + movl write(%esp), write_r + cmpl write_r, nbytes_r + jbe .L_contiguous_in_window /* if (write >= nbytes) */ + + addl wsize(%esp), from_r + addl write_r, from_r + subl nbytes_r, from_r /* from += wsize + write - nbytes */ + subl write_r, nbytes_r /* nbytes -= write */ +#undef write_r + + movl len(%esp), len_r + cmpl nbytes_r, len_r + jbe .L_do_copy1 /* if (nbytes >= len) */ + + subl nbytes_r, len_r /* len -= nbytes */ + rep movsb + movl 
window(%esp), from_r /* from = window */ + movl write(%esp), nbytes_r /* nbytes = write */ + cmpl nbytes_r, len_r + jbe .L_do_copy1 /* if (nbytes >= len) */ + + subl nbytes_r, len_r /* len -= nbytes */ + rep movsb + movl out_r, from_r + subl dist_r, from_r /* from = out - dist */ + jmp .L_do_copy1 + +.L_contiguous_in_window: + /* regs: %esi = from, %ebp = hold, %bl = bits, %edi = out, %edx = dist + * %ecx = nbytes, %eax = write, %eax = len + * + * else { + * from += write - nbytes; + * if (nbytes < len) { + * len -= nbytes; + * do { + * PUP(out) = PUP(from); + * } while (--nbytes); + * from = out - dist; + * } + * } + */ +#define write_r %eax + addl write_r, from_r + subl nbytes_r, from_r /* from += write - nbytes */ +#undef write_r + + movl len(%esp), len_r + cmpl nbytes_r, len_r + jbe .L_do_copy1 /* if (nbytes >= len) */ + + subl nbytes_r, len_r /* len -= nbytes */ + rep movsb + movl out_r, from_r + subl dist_r, from_r /* from = out - dist */ + +.L_do_copy1: + /* regs: %esi = from, %esi = in, %ebp = hold, %bl = bits, %edi = out + * %eax = len + * + * while (len > 0) { + * PUP(out) = PUP(from); + * len--; + * } + * } + * } while (in < last && out < end); + */ +#undef nbytes_r +#define in_r %esi + movl len_r, %ecx + rep movsb + + movl in(%esp), in_r /* move in back to %esi, toss from */ + jmp .L_while_test + +#undef len_r +#undef dist_r + +#endif /* NO_MMX || RUN_TIME_MMX */ + + +/*** MMX code ***/ + +#if defined( USE_MMX ) || defined( RUN_TIME_MMX ) + +.align 32,0x90 +.L_init_mmx: + emms + +#undef bits_r +#undef bitslong_r +#define bitslong_r %ebp +#define hold_mm %mm0 + movd %ebp, hold_mm + movl %ebx, bitslong_r + +#define used_mm %mm1 +#define dmask2_mm %mm2 +#define lmask2_mm %mm3 +#define lmask_mm %mm4 +#define dmask_mm %mm5 +#define tmp_mm %mm6 + + movd lmask(%esp), lmask_mm + movq lmask_mm, lmask2_mm + movd dmask(%esp), dmask_mm + movq dmask_mm, dmask2_mm + pxor used_mm, used_mm + movl lcode(%esp), %ebx /* ebx = lcode */ + jmp .L_do_loop_mmx + +.align 32,0x90 +.L_while_test_mmx: + /* while (in < last && out < end) + */ + cmpl out_r, end(%esp) + jbe .L_break_loop /* if (out >= end) */ + + cmpl in_r, last(%esp) + jbe .L_break_loop + +.L_do_loop_mmx: + psrlq used_mm, hold_mm /* hold_mm >>= last bit length */ + + cmpl $32, bitslong_r + ja .L_get_length_code_mmx /* if (32 < bits) */ + + movd bitslong_r, tmp_mm + movd (in_r), %mm7 + addl $4, in_r + psllq tmp_mm, %mm7 + addl $32, bitslong_r + por %mm7, hold_mm /* hold_mm |= *((uint *)in)++ << bits */ + +.L_get_length_code_mmx: + pand hold_mm, lmask_mm + movd lmask_mm, %eax + movq lmask2_mm, lmask_mm + movl (%ebx,%eax,4), %eax /* eax = lcode[hold & lmask] */ + +.L_dolen_mmx: + movzbl %ah, %ecx /* ecx = this.bits */ + movd %ecx, used_mm + subl %ecx, bitslong_r /* bits -= this.bits */ + + testb %al, %al + jnz .L_test_for_length_base_mmx /* if (op != 0) 45.7% */ + + shrl $16, %eax /* output this.val char */ + stosb + jmp .L_while_test_mmx + +.L_test_for_length_base_mmx: +#define len_r %edx + movl %eax, len_r /* len = this */ + shrl $16, len_r /* len = this.val */ + + testb $16, %al + jz .L_test_for_second_level_length_mmx /* if ((op & 16) == 0) 8% */ + andl $15, %eax /* op &= 15 */ + jz .L_decode_distance_mmx /* if (!op) */ + + psrlq used_mm, hold_mm /* hold_mm >>= last bit length */ + movd %eax, used_mm + movd hold_mm, %ecx + subl %eax, bitslong_r + andl .L_mask(,%eax,4), %ecx + addl %ecx, len_r /* len += hold & mask[op] */ + +.L_decode_distance_mmx: + psrlq used_mm, hold_mm /* hold_mm >>= last bit length */ + + cmpl $32, bitslong_r + ja 
.L_get_dist_code_mmx /* if (32 < bits) */ + + movd bitslong_r, tmp_mm + movd (in_r), %mm7 + addl $4, in_r + psllq tmp_mm, %mm7 + addl $32, bitslong_r + por %mm7, hold_mm /* hold_mm |= *((uint *)in)++ << bits */ + +.L_get_dist_code_mmx: + movl dcode(%esp), %ebx /* ebx = dcode */ + pand hold_mm, dmask_mm + movd dmask_mm, %eax + movq dmask2_mm, dmask_mm + movl (%ebx,%eax,4), %eax /* eax = dcode[hold & lmask] */ + +.L_dodist_mmx: +#define dist_r %ebx + movzbl %ah, %ecx /* ecx = this.bits */ + movl %eax, dist_r + shrl $16, dist_r /* dist = this.val */ + subl %ecx, bitslong_r /* bits -= this.bits */ + movd %ecx, used_mm + + testb $16, %al /* if ((op & 16) == 0) */ + jz .L_test_for_second_level_dist_mmx + andl $15, %eax /* op &= 15 */ + jz .L_check_dist_one_mmx + +.L_add_bits_to_dist_mmx: + psrlq used_mm, hold_mm /* hold_mm >>= last bit length */ + movd %eax, used_mm /* save bit length of current op */ + movd hold_mm, %ecx /* get the next bits on input stream */ + subl %eax, bitslong_r /* bits -= op bits */ + andl .L_mask(,%eax,4), %ecx /* ecx = hold & mask[op] */ + addl %ecx, dist_r /* dist += hold & mask[op] */ + +.L_check_window_mmx: + movl in_r, in(%esp) /* save in so from can use it's reg */ + movl out_r, %eax + subl beg(%esp), %eax /* nbytes = out - beg */ + + cmpl dist_r, %eax + jb .L_clip_window_mmx /* if (dist > nbytes) 4.2% */ + + movl len_r, %ecx + movl out_r, from_r + subl dist_r, from_r /* from = out - dist */ + + subl $3, %ecx + movb (from_r), %al + movb %al, (out_r) + movb 1(from_r), %al + movb 2(from_r), %dl + addl $3, from_r + movb %al, 1(out_r) + movb %dl, 2(out_r) + addl $3, out_r + rep movsb + + movl in(%esp), in_r /* move in back to %esi, toss from */ + movl lcode(%esp), %ebx /* move lcode back to %ebx, toss dist */ + jmp .L_while_test_mmx + +.align 16,0x90 +.L_check_dist_one_mmx: + cmpl $1, dist_r + jne .L_check_window_mmx + cmpl out_r, beg(%esp) + je .L_check_window_mmx + + decl out_r + movl len_r, %ecx + movb (out_r), %al + subl $3, %ecx + + movb %al, 1(out_r) + movb %al, 2(out_r) + movb %al, 3(out_r) + addl $4, out_r + rep stosb + + movl lcode(%esp), %ebx /* move lcode back to %ebx, toss dist */ + jmp .L_while_test_mmx + +.align 16,0x90 +.L_test_for_second_level_length_mmx: + testb $64, %al + jnz .L_test_for_end_of_block /* if ((op & 64) != 0) */ + + andl $15, %eax + psrlq used_mm, hold_mm /* hold_mm >>= last bit length */ + movd hold_mm, %ecx + andl .L_mask(,%eax,4), %ecx + addl len_r, %ecx + movl (%ebx,%ecx,4), %eax /* eax = lcode[hold & lmask] */ + jmp .L_dolen_mmx + +.align 16,0x90 +.L_test_for_second_level_dist_mmx: + testb $64, %al + jnz .L_invalid_distance_code /* if ((op & 64) != 0) */ + + andl $15, %eax + psrlq used_mm, hold_mm /* hold_mm >>= last bit length */ + movd hold_mm, %ecx + andl .L_mask(,%eax,4), %ecx + movl dcode(%esp), %eax /* ecx = dcode */ + addl dist_r, %ecx + movl (%eax,%ecx,4), %eax /* eax = lcode[hold & lmask] */ + jmp .L_dodist_mmx + +.align 16,0x90 +.L_clip_window_mmx: +#define nbytes_r %ecx + movl %eax, nbytes_r + movl wsize(%esp), %eax /* prepare for dist compare */ + negl nbytes_r /* nbytes = -nbytes */ + movl window(%esp), from_r /* from = window */ + + cmpl dist_r, %eax + jb .L_invalid_distance_too_far /* if (dist > wsize) */ + + addl dist_r, nbytes_r /* nbytes = dist - nbytes */ + cmpl $0, write(%esp) + jne .L_wrap_around_window_mmx /* if (write != 0) */ + + subl nbytes_r, %eax + addl %eax, from_r /* from += wsize - nbytes */ + + cmpl nbytes_r, len_r + jbe .L_do_copy1_mmx /* if (nbytes >= len) */ + + subl nbytes_r, len_r /* len -= nbytes 
*/ + rep movsb + movl out_r, from_r + subl dist_r, from_r /* from = out - dist */ + jmp .L_do_copy1_mmx + + cmpl nbytes_r, len_r + jbe .L_do_copy1_mmx /* if (nbytes >= len) */ + + subl nbytes_r, len_r /* len -= nbytes */ + rep movsb + movl out_r, from_r + subl dist_r, from_r /* from = out - dist */ + jmp .L_do_copy1_mmx + +.L_wrap_around_window_mmx: +#define write_r %eax + movl write(%esp), write_r + cmpl write_r, nbytes_r + jbe .L_contiguous_in_window_mmx /* if (write >= nbytes) */ + + addl wsize(%esp), from_r + addl write_r, from_r + subl nbytes_r, from_r /* from += wsize + write - nbytes */ + subl write_r, nbytes_r /* nbytes -= write */ +#undef write_r + + cmpl nbytes_r, len_r + jbe .L_do_copy1_mmx /* if (nbytes >= len) */ + + subl nbytes_r, len_r /* len -= nbytes */ + rep movsb + movl window(%esp), from_r /* from = window */ + movl write(%esp), nbytes_r /* nbytes = write */ + cmpl nbytes_r, len_r + jbe .L_do_copy1_mmx /* if (nbytes >= len) */ + + subl nbytes_r, len_r /* len -= nbytes */ + rep movsb + movl out_r, from_r + subl dist_r, from_r /* from = out - dist */ + jmp .L_do_copy1_mmx + +.L_contiguous_in_window_mmx: +#define write_r %eax + addl write_r, from_r + subl nbytes_r, from_r /* from += write - nbytes */ +#undef write_r + + cmpl nbytes_r, len_r + jbe .L_do_copy1_mmx /* if (nbytes >= len) */ + + subl nbytes_r, len_r /* len -= nbytes */ + rep movsb + movl out_r, from_r + subl dist_r, from_r /* from = out - dist */ + +.L_do_copy1_mmx: +#undef nbytes_r +#define in_r %esi + movl len_r, %ecx + rep movsb + + movl in(%esp), in_r /* move in back to %esi, toss from */ + movl lcode(%esp), %ebx /* move lcode back to %ebx, toss dist */ + jmp .L_while_test_mmx + +#undef hold_r +#undef bitslong_r + +#endif /* USE_MMX || RUN_TIME_MMX */ + + +/*** USE_MMX, NO_MMX, and RUNTIME_MMX from here on ***/ + +.L_invalid_distance_code: + /* else { + * strm->msg = "invalid distance code"; + * state->mode = BAD; + * } + */ + movl $.L_invalid_distance_code_msg, %ecx + movl $INFLATE_MODE_BAD, %edx + jmp .L_update_stream_state + +.L_test_for_end_of_block: + /* else if (op & 32) { + * state->mode = TYPE; + * break; + * } + */ + testb $32, %al + jz .L_invalid_literal_length_code /* if ((op & 32) == 0) */ + + movl $0, %ecx + movl $INFLATE_MODE_TYPE, %edx + jmp .L_update_stream_state + +.L_invalid_literal_length_code: + /* else { + * strm->msg = "invalid literal/length code"; + * state->mode = BAD; + * } + */ + movl $.L_invalid_literal_length_code_msg, %ecx + movl $INFLATE_MODE_BAD, %edx + jmp .L_update_stream_state + +.L_invalid_distance_too_far: + /* strm->msg = "invalid distance too far back"; + * state->mode = BAD; + */ + movl in(%esp), in_r /* from_r has in's reg, put in back */ + movl $.L_invalid_distance_too_far_msg, %ecx + movl $INFLATE_MODE_BAD, %edx + jmp .L_update_stream_state + +.L_update_stream_state: + /* set strm->msg = %ecx, strm->state->mode = %edx */ + movl strm_sp(%esp), %eax + testl %ecx, %ecx /* if (msg != NULL) */ + jz .L_skip_msg + movl %ecx, msg_strm(%eax) /* strm->msg = msg */ +.L_skip_msg: + movl state_strm(%eax), %eax /* state = strm->state */ + movl %edx, mode_state(%eax) /* state->mode = edx (BAD | TYPE) */ + jmp .L_break_loop + +.align 32,0x90 +.L_break_loop: + +/* + * Regs: + * + * bits = %ebp when mmx, and in %ebx when non-mmx + * hold = %hold_mm when mmx, and in %ebp when non-mmx + * in = %esi + * out = %edi + */ + +#if defined( USE_MMX ) || defined( RUN_TIME_MMX ) + +#if defined( RUN_TIME_MMX ) + + cmpl $DO_USE_MMX, inflate_fast_use_mmx + jne .L_update_next_in + +#endif /* 
RUN_TIME_MMX */ + + movl %ebp, %ebx + +.L_update_next_in: + +#endif + +#define strm_r %eax +#define state_r %edx + + /* len = bits >> 3; + * in -= len; + * bits -= len << 3; + * hold &= (1U << bits) - 1; + * state->hold = hold; + * state->bits = bits; + * strm->next_in = in; + * strm->next_out = out; + */ + movl strm_sp(%esp), strm_r + movl %ebx, %ecx + movl state_strm(strm_r), state_r + shrl $3, %ecx + subl %ecx, in_r + shll $3, %ecx + subl %ecx, %ebx + movl out_r, next_out_strm(strm_r) + movl %ebx, bits_state(state_r) + movl %ebx, %ecx + + leal buf(%esp), %ebx + cmpl %ebx, last(%esp) + jne .L_buf_not_used /* if buf != last */ + + subl %ebx, in_r /* in -= buf */ + movl next_in_strm(strm_r), %ebx + movl %ebx, last(%esp) /* last = strm->next_in */ + addl %ebx, in_r /* in += strm->next_in */ + movl avail_in_strm(strm_r), %ebx + subl $11, %ebx + addl %ebx, last(%esp) /* last = &strm->next_in[ avail_in - 11 ] */ + +.L_buf_not_used: + movl in_r, next_in_strm(strm_r) + + movl $1, %ebx + shll %cl, %ebx + decl %ebx + +#if defined( USE_MMX ) || defined( RUN_TIME_MMX ) + +#if defined( RUN_TIME_MMX ) + + cmpl $DO_USE_MMX, inflate_fast_use_mmx + jne .L_update_hold + +#endif /* RUN_TIME_MMX */ + + psrlq used_mm, hold_mm /* hold_mm >>= last bit length */ + movd hold_mm, %ebp + + emms + +.L_update_hold: + +#endif /* USE_MMX || RUN_TIME_MMX */ + + andl %ebx, %ebp + movl %ebp, hold_state(state_r) + +#define last_r %ebx + + /* strm->avail_in = in < last ? 11 + (last - in) : 11 - (in - last) */ + movl last(%esp), last_r + cmpl in_r, last_r + jbe .L_last_is_smaller /* if (in >= last) */ + + subl in_r, last_r /* last -= in */ + addl $11, last_r /* last += 11 */ + movl last_r, avail_in_strm(strm_r) + jmp .L_fixup_out +.L_last_is_smaller: + subl last_r, in_r /* in -= last */ + negl in_r /* in = -in */ + addl $11, in_r /* in += 11 */ + movl in_r, avail_in_strm(strm_r) + +#undef last_r +#define end_r %ebx + +.L_fixup_out: + /* strm->avail_out = out < end ? 257 + (end - out) : 257 - (out - end)*/ + movl end(%esp), end_r + cmpl out_r, end_r + jbe .L_end_is_smaller /* if (out >= end) */ + + subl out_r, end_r /* end -= out */ + addl $257, end_r /* end += 257 */ + movl end_r, avail_out_strm(strm_r) + jmp .L_done +.L_end_is_smaller: + subl end_r, out_r /* out -= end */ + negl out_r /* out = -out */ + addl $257, out_r /* out += 257 */ + movl out_r, avail_out_strm(strm_r) + +#undef end_r +#undef strm_r +#undef state_r + +.L_done: + addl $local_var_size, %esp + popf + popl %ebx + popl %ebp + popl %esi + popl %edi + ret + +#if defined( GAS_ELF ) +/* elf info */ +.type inflate_fast,@function +.size inflate_fast,.-inflate_fast +#endif ADDED compat/zlib/contrib/iostream/test.cpp Index: compat/zlib/contrib/iostream/test.cpp ================================================================== --- compat/zlib/contrib/iostream/test.cpp +++ compat/zlib/contrib/iostream/test.cpp @@ -0,0 +1,24 @@ + +#include "zfstream.h" + +int main() { + + // Construct a stream object with this filebuffer. Anything sent + // to this stream will go to standard out. + gzofstream os( 1, ios::out ); + + // This text is getting compressed and sent to stdout. + // To prove this, run 'test | zcat'. + os << "Hello, Mommy" << endl; + + os << setcompressionlevel( Z_NO_COMPRESSION ); + os << "hello, hello, hi, ho!" 
<< endl; + + setcompressionlevel( os, Z_DEFAULT_COMPRESSION ) + << "I'm compressing again" << endl; + + os.close(); + + return 0; + +} ADDED compat/zlib/contrib/iostream/zfstream.cpp Index: compat/zlib/contrib/iostream/zfstream.cpp ================================================================== --- compat/zlib/contrib/iostream/zfstream.cpp +++ compat/zlib/contrib/iostream/zfstream.cpp @@ -0,0 +1,329 @@ + +#include "zfstream.h" + +gzfilebuf::gzfilebuf() : + file(NULL), + mode(0), + own_file_descriptor(0) +{ } + +gzfilebuf::~gzfilebuf() { + + sync(); + if ( own_file_descriptor ) + close(); + +} + +gzfilebuf *gzfilebuf::open( const char *name, + int io_mode ) { + + if ( is_open() ) + return NULL; + + char char_mode[10]; + char *p = char_mode; + + if ( io_mode & ios::in ) { + mode = ios::in; + *p++ = 'r'; + } else if ( io_mode & ios::app ) { + mode = ios::app; + *p++ = 'a'; + } else { + mode = ios::out; + *p++ = 'w'; + } + + if ( io_mode & ios::binary ) { + mode |= ios::binary; + *p++ = 'b'; + } + + // Hard code the compression level + if ( io_mode & (ios::out|ios::app )) { + *p++ = '9'; + } + + // Put the end-of-string indicator + *p = '\0'; + + if ( (file = gzopen(name, char_mode)) == NULL ) + return NULL; + + own_file_descriptor = 1; + + return this; + +} + +gzfilebuf *gzfilebuf::attach( int file_descriptor, + int io_mode ) { + + if ( is_open() ) + return NULL; + + char char_mode[10]; + char *p = char_mode; + + if ( io_mode & ios::in ) { + mode = ios::in; + *p++ = 'r'; + } else if ( io_mode & ios::app ) { + mode = ios::app; + *p++ = 'a'; + } else { + mode = ios::out; + *p++ = 'w'; + } + + if ( io_mode & ios::binary ) { + mode |= ios::binary; + *p++ = 'b'; + } + + // Hard code the compression level + if ( io_mode & (ios::out|ios::app )) { + *p++ = '9'; + } + + // Put the end-of-string indicator + *p = '\0'; + + if ( (file = gzdopen(file_descriptor, char_mode)) == NULL ) + return NULL; + + own_file_descriptor = 0; + + return this; + +} + +gzfilebuf *gzfilebuf::close() { + + if ( is_open() ) { + + sync(); + gzclose( file ); + file = NULL; + + } + + return this; + +} + +int gzfilebuf::setcompressionlevel( int comp_level ) { + + return gzsetparams(file, comp_level, -2); + +} + +int gzfilebuf::setcompressionstrategy( int comp_strategy ) { + + return gzsetparams(file, -2, comp_strategy); + +} + + +streampos gzfilebuf::seekoff( streamoff off, ios::seek_dir dir, int which ) { + + return streampos(EOF); + +} + +int gzfilebuf::underflow() { + + // If the file hasn't been opened for reading, error. + if ( !is_open() || !(mode & ios::in) ) + return EOF; + + // if a buffer doesn't exists, allocate one. + if ( !base() ) { + + if ( (allocate()) == EOF ) + return EOF; + setp(0,0); + + } else { + + if ( in_avail() ) + return (unsigned char) *gptr(); + + if ( out_waiting() ) { + if ( flushbuf() == EOF ) + return EOF; + } + + } + + // Attempt to fill the buffer. 
+ + int result = fillbuf(); + if ( result == EOF ) { + // disable get area + setg(0,0,0); + return EOF; + } + + return (unsigned char) *gptr(); + +} + +int gzfilebuf::overflow( int c ) { + + if ( !is_open() || !(mode & ios::out) ) + return EOF; + + if ( !base() ) { + if ( allocate() == EOF ) + return EOF; + setg(0,0,0); + } else { + if (in_avail()) { + return EOF; + } + if (out_waiting()) { + if (flushbuf() == EOF) + return EOF; + } + } + + int bl = blen(); + setp( base(), base() + bl); + + if ( c != EOF ) { + + *pptr() = c; + pbump(1); + + } + + return 0; + +} + +int gzfilebuf::sync() { + + if ( !is_open() ) + return EOF; + + if ( out_waiting() ) + return flushbuf(); + + return 0; + +} + +int gzfilebuf::flushbuf() { + + int n; + char *q; + + q = pbase(); + n = pptr() - q; + + if ( gzwrite( file, q, n) < n ) + return EOF; + + setp(0,0); + + return 0; + +} + +int gzfilebuf::fillbuf() { + + int required; + char *p; + + p = base(); + + required = blen(); + + int t = gzread( file, p, required ); + + if ( t <= 0) return EOF; + + setg( base(), base(), base()+t); + + return t; + +} + +gzfilestream_common::gzfilestream_common() : + ios( gzfilestream_common::rdbuf() ) +{ } + +gzfilestream_common::~gzfilestream_common() +{ } + +void gzfilestream_common::attach( int fd, int io_mode ) { + + if ( !buffer.attach( fd, io_mode) ) + clear( ios::failbit | ios::badbit ); + else + clear(); + +} + +void gzfilestream_common::open( const char *name, int io_mode ) { + + if ( !buffer.open( name, io_mode ) ) + clear( ios::failbit | ios::badbit ); + else + clear(); + +} + +void gzfilestream_common::close() { + + if ( !buffer.close() ) + clear( ios::failbit | ios::badbit ); + +} + +gzfilebuf *gzfilestream_common::rdbuf() +{ + return &buffer; +} + +gzifstream::gzifstream() : + ios( gzfilestream_common::rdbuf() ) +{ + clear( ios::badbit ); +} + +gzifstream::gzifstream( const char *name, int io_mode ) : + ios( gzfilestream_common::rdbuf() ) +{ + gzfilestream_common::open( name, io_mode ); +} + +gzifstream::gzifstream( int fd, int io_mode ) : + ios( gzfilestream_common::rdbuf() ) +{ + gzfilestream_common::attach( fd, io_mode ); +} + +gzifstream::~gzifstream() { } + +gzofstream::gzofstream() : + ios( gzfilestream_common::rdbuf() ) +{ + clear( ios::badbit ); +} + +gzofstream::gzofstream( const char *name, int io_mode ) : + ios( gzfilestream_common::rdbuf() ) +{ + gzfilestream_common::open( name, io_mode ); +} + +gzofstream::gzofstream( int fd, int io_mode ) : + ios( gzfilestream_common::rdbuf() ) +{ + gzfilestream_common::attach( fd, io_mode ); +} + +gzofstream::~gzofstream() { } ADDED compat/zlib/contrib/iostream/zfstream.h Index: compat/zlib/contrib/iostream/zfstream.h ================================================================== --- compat/zlib/contrib/iostream/zfstream.h +++ compat/zlib/contrib/iostream/zfstream.h @@ -0,0 +1,128 @@ + +#ifndef zfstream_h +#define zfstream_h + +#include +#include "zlib.h" + +class gzfilebuf : public streambuf { + +public: + + gzfilebuf( ); + virtual ~gzfilebuf(); + + gzfilebuf *open( const char *name, int io_mode ); + gzfilebuf *attach( int file_descriptor, int io_mode ); + gzfilebuf *close(); + + int setcompressionlevel( int comp_level ); + int setcompressionstrategy( int comp_strategy ); + + inline int is_open() const { return (file !=NULL); } + + virtual streampos seekoff( streamoff, ios::seek_dir, int ); + + virtual int sync(); + +protected: + + virtual int underflow(); + virtual int overflow( int = EOF ); + +private: + + gzFile file; + short mode; + short own_file_descriptor; + 
+ int flushbuf(); + int fillbuf(); + +}; + +class gzfilestream_common : virtual public ios { + + friend class gzifstream; + friend class gzofstream; + friend gzofstream &setcompressionlevel( gzofstream &, int ); + friend gzofstream &setcompressionstrategy( gzofstream &, int ); + +public: + virtual ~gzfilestream_common(); + + void attach( int fd, int io_mode ); + void open( const char *name, int io_mode ); + void close(); + +protected: + gzfilestream_common(); + +private: + gzfilebuf *rdbuf(); + + gzfilebuf buffer; + +}; + +class gzifstream : public gzfilestream_common, public istream { + +public: + + gzifstream(); + gzifstream( const char *name, int io_mode = ios::in ); + gzifstream( int fd, int io_mode = ios::in ); + + virtual ~gzifstream(); + +}; + +class gzofstream : public gzfilestream_common, public ostream { + +public: + + gzofstream(); + gzofstream( const char *name, int io_mode = ios::out ); + gzofstream( int fd, int io_mode = ios::out ); + + virtual ~gzofstream(); + +}; + +template<class T> class gzomanip { + friend gzofstream &operator<<(gzofstream &, const gzomanip<T> &); +public: + gzomanip(gzofstream &(*f)(gzofstream &, T), T v) : func(f), val(v) { } +private: + gzofstream &(*func)(gzofstream &, T); + T val; +}; + +template<class T> gzofstream &operator<<(gzofstream &s, const gzomanip<T> &m) +{ + return (*m.func)(s, m.val); +} + +inline gzofstream &setcompressionlevel( gzofstream &s, int l ) +{ + (s.rdbuf())->setcompressionlevel(l); + return s; +} + +inline gzofstream &setcompressionstrategy( gzofstream &s, int l ) +{ + (s.rdbuf())->setcompressionstrategy(l); + return s; +} + +inline gzomanip<int> setcompressionlevel(int l) +{ + return gzomanip<int>(&setcompressionlevel,l); +} + +inline gzomanip<int> setcompressionstrategy(int l) +{ + return gzomanip<int>(&setcompressionstrategy,l); +} + +#endif ADDED compat/zlib/contrib/iostream2/zstream.h Index: compat/zlib/contrib/iostream2/zstream.h ================================================================== --- compat/zlib/contrib/iostream2/zstream.h +++ compat/zlib/contrib/iostream2/zstream.h @@ -0,0 +1,307 @@ +/* + * + * Copyright (c) 1997 + * Christian Michelsen Research AS + * Advanced Computing + * Fantoftvegen 38, 5036 BERGEN, Norway + * http://www.cmr.no + * + * Permission to use, copy, modify, distribute and sell this software + * and its documentation for any purpose is hereby granted without fee, + * provided that the above copyright notice appear in all copies and + * that both that copyright notice and this permission notice appear + * in supporting documentation. Christian Michelsen Research AS makes no + * representations about the suitability of this software for any + * purpose. It is provided "as is" without express or implied warranty.
+ * + */ + +#ifndef ZSTREAM__H +#define ZSTREAM__H + +/* + * zstream.h - C++ interface to the 'zlib' general purpose compression library + * $Id: zstream.h 1.1 1997-06-25 12:00:56+02 tyge Exp tyge $ + */ + +#include +#include +#include +#include "zlib.h" + +#if defined(_WIN32) +# include +# include +# define SET_BINARY_MODE(file) setmode(fileno(file), O_BINARY) +#else +# define SET_BINARY_MODE(file) +#endif + +class zstringlen { +public: + zstringlen(class izstream&); + zstringlen(class ozstream&, const char*); + size_t value() const { return val.word; } +private: + struct Val { unsigned char byte; size_t word; } val; +}; + +// ----------------------------- izstream ----------------------------- + +class izstream +{ + public: + izstream() : m_fp(0) {} + izstream(FILE* fp) : m_fp(0) { open(fp); } + izstream(const char* name) : m_fp(0) { open(name); } + ~izstream() { close(); } + + /* Opens a gzip (.gz) file for reading. + * open() can be used to read a file which is not in gzip format; + * in this case read() will directly read from the file without + * decompression. errno can be checked to distinguish two error + * cases (if errno is zero, the zlib error is Z_MEM_ERROR). + */ + void open(const char* name) { + if (m_fp) close(); + m_fp = ::gzopen(name, "rb"); + } + + void open(FILE* fp) { + SET_BINARY_MODE(fp); + if (m_fp) close(); + m_fp = ::gzdopen(fileno(fp), "rb"); + } + + /* Flushes all pending input if necessary, closes the compressed file + * and deallocates all the (de)compression state. The return value is + * the zlib error number (see function error() below). + */ + int close() { + int r = ::gzclose(m_fp); + m_fp = 0; return r; + } + + /* Binary read the given number of bytes from the compressed file. + */ + int read(void* buf, size_t len) { + return ::gzread(m_fp, buf, len); + } + + /* Returns the error message for the last error which occurred on the + * given compressed file. errnum is set to zlib error number. If an + * error occurred in the file system and not in the compression library, + * errnum is set to Z_ERRNO and the application may consult errno + * to get the exact error code. + */ + const char* error(int* errnum) { + return ::gzerror(m_fp, errnum); + } + + gzFile fp() { return m_fp; } + + private: + gzFile m_fp; +}; + +/* + * Binary read the given (array of) object(s) from the compressed file. + * If the input file was not in gzip format, read() copies the objects number + * of bytes into the buffer. + * returns the number of uncompressed bytes actually read + * (0 for end of file, -1 for error). + */ +template +inline int read(izstream& zs, T* x, Items items) { + return ::gzread(zs.fp(), x, items*sizeof(T)); +} + +/* + * Binary input with the '>' operator. + */ +template +inline izstream& operator>(izstream& zs, T& x) { + ::gzread(zs.fp(), &x, sizeof(T)); + return zs; +} + + +inline zstringlen::zstringlen(izstream& zs) { + zs > val.byte; + if (val.byte == 255) zs > val.word; + else val.word = val.byte; +} + +/* + * Read length of string + the string with the '>' operator. 
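+ * The length is encoded as a single byte, or as 255 followed by a full
+ * size_t word for longer strings (see zstringlen above), so the
+ * destination buffer must have room for len.value()+1 characters.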
+ */ +inline izstream& operator>(izstream& zs, char* x) { + zstringlen len(zs); + ::gzread(zs.fp(), x, len.value()); + x[len.value()] = '\0'; + return zs; +} + +inline char* read_string(izstream& zs) { + zstringlen len(zs); + char* x = new char[len.value()+1]; + ::gzread(zs.fp(), x, len.value()); + x[len.value()] = '\0'; + return x; +} + +// ----------------------------- ozstream ----------------------------- + +class ozstream +{ + public: + ozstream() : m_fp(0), m_os(0) { + } + ozstream(FILE* fp, int level = Z_DEFAULT_COMPRESSION) + : m_fp(0), m_os(0) { + open(fp, level); + } + ozstream(const char* name, int level = Z_DEFAULT_COMPRESSION) + : m_fp(0), m_os(0) { + open(name, level); + } + ~ozstream() { + close(); + } + + /* Opens a gzip (.gz) file for writing. + * The compression level parameter should be in 0..9 + * errno can be checked to distinguish two error cases + * (if errno is zero, the zlib error is Z_MEM_ERROR). + */ + void open(const char* name, int level = Z_DEFAULT_COMPRESSION) { + char mode[4] = "wb\0"; + if (level != Z_DEFAULT_COMPRESSION) mode[2] = '0'+level; + if (m_fp) close(); + m_fp = ::gzopen(name, mode); + } + + /* open from a FILE pointer. + */ + void open(FILE* fp, int level = Z_DEFAULT_COMPRESSION) { + SET_BINARY_MODE(fp); + char mode[4] = "wb\0"; + if (level != Z_DEFAULT_COMPRESSION) mode[2] = '0'+level; + if (m_fp) close(); + m_fp = ::gzdopen(fileno(fp), mode); + } + + /* Flushes all pending output if necessary, closes the compressed file + * and deallocates all the (de)compression state. The return value is + * the zlib error number (see function error() below). + */ + int close() { + if (m_os) { + ::gzwrite(m_fp, m_os->str(), m_os->pcount()); + delete[] m_os->str(); delete m_os; m_os = 0; + } + int r = ::gzclose(m_fp); m_fp = 0; return r; + } + + /* Binary write the given number of bytes into the compressed file. + */ + int write(const void* buf, size_t len) { + return ::gzwrite(m_fp, (voidp) buf, len); + } + + /* Flushes all pending output into the compressed file. The parameter + * _flush is as in the deflate() function. The return value is the zlib + * error number (see function gzerror below). flush() returns Z_OK if + * the flush_ parameter is Z_FINISH and all output could be flushed. + * flush() should be called only when strictly necessary because it can + * degrade compression. + */ + int flush(int _flush) { + os_flush(); + return ::gzflush(m_fp, _flush); + } + + /* Returns the error message for the last error which occurred on the + * given compressed file. errnum is set to zlib error number. If an + * error occurred in the file system and not in the compression library, + * errnum is set to Z_ERRNO and the application may consult errno + * to get the exact error code. + */ + const char* error(int* errnum) { + return ::gzerror(m_fp, errnum); + } + + gzFile fp() { return m_fp; } + + ostream& os() { + if (m_os == 0) m_os = new ostrstream; + return *m_os; + } + + void os_flush() { + if (m_os && m_os->pcount()>0) { + ostrstream* oss = new ostrstream; + oss->fill(m_os->fill()); + oss->flags(m_os->flags()); + oss->precision(m_os->precision()); + oss->width(m_os->width()); + ::gzwrite(m_fp, m_os->str(), m_os->pcount()); + delete[] m_os->str(); delete m_os; m_os = oss; + } + } + + private: + gzFile m_fp; + ostrstream* m_os; +}; + +/* + * Binary write the given (array of) object(s) into the compressed file. + * returns the number of uncompressed bytes actually written + * (0 in case of error). 
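+ * For example, write(zs, samples, 100) compresses the 100 elements of an
+ * array 'samples' into the stream 'zs' (the names are only illustrative).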
+ */ +template +inline int write(ozstream& zs, const T* x, Items items) { + return ::gzwrite(zs.fp(), (voidp) x, items*sizeof(T)); +} + +/* + * Binary output with the '<' operator. + */ +template +inline ozstream& operator<(ozstream& zs, const T& x) { + ::gzwrite(zs.fp(), (voidp) &x, sizeof(T)); + return zs; +} + +inline zstringlen::zstringlen(ozstream& zs, const char* x) { + val.byte = 255; val.word = ::strlen(x); + if (val.word < 255) zs < (val.byte = val.word); + else zs < val; +} + +/* + * Write length of string + the string with the '<' operator. + */ +inline ozstream& operator<(ozstream& zs, const char* x) { + zstringlen len(zs, x); + ::gzwrite(zs.fp(), (voidp) x, len.value()); + return zs; +} + +#ifdef _MSC_VER +inline ozstream& operator<(ozstream& zs, char* const& x) { + return zs < (const char*) x; +} +#endif + +/* + * Ascii write with the << operator; + */ +template +inline ostream& operator<<(ozstream& zs, const T& x) { + zs.os_flush(); + return zs.os() << x; +} + +#endif ADDED compat/zlib/contrib/iostream2/zstream_test.cpp Index: compat/zlib/contrib/iostream2/zstream_test.cpp ================================================================== --- compat/zlib/contrib/iostream2/zstream_test.cpp +++ compat/zlib/contrib/iostream2/zstream_test.cpp @@ -0,0 +1,25 @@ +#include "zstream.h" +#include +#include +#include + +void main() { + char h[256] = "Hello"; + char* g = "Goodbye"; + ozstream out("temp.gz"); + out < "This works well" < h < g; + out.close(); + + izstream in("temp.gz"); // read it back + char *x = read_string(in), *y = new char[256], z[256]; + in > y > z; + in.close(); + cout << x << endl << y << endl << z << endl; + + out.open("temp.gz"); // try ascii output; zcat temp.gz to see the results + out << setw(50) << setfill('#') << setprecision(20) << x << endl << y << endl << z << endl; + out << z << endl << y << endl << x << endl; + out << 1.1234567890123456789 << endl; + + delete[] x; delete[] y; +} ADDED compat/zlib/contrib/iostream3/README Index: compat/zlib/contrib/iostream3/README ================================================================== --- compat/zlib/contrib/iostream3/README +++ compat/zlib/contrib/iostream3/README @@ -0,0 +1,35 @@ +These classes provide a C++ stream interface to the zlib library. It allows you +to do things like: + + gzofstream outf("blah.gz"); + outf << "These go into the gzip file " << 123 << endl; + +It does this by deriving a specialized stream buffer for gzipped files, which is +the way Stroustrup would have done it. :-> + +The gzifstream and gzofstream classes were originally written by Kevin Ruland +and made available in the zlib contrib/iostream directory. The older version still +compiles under gcc 2.xx, but not under gcc 3.xx, which sparked the development of +this version. + +The new classes are as standard-compliant as possible, closely following the +approach of the standard library's fstream classes. It compiles under gcc versions +3.2 and 3.3, but not under gcc 2.xx. This is mainly due to changes in the standard +library naming scheme. 
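+
+Reading the file back is symmetrical; a minimal sketch (same file as above):
+
+   gzifstream inf("blah.gz");
+   char buf[80];
+   inf.getline(buf, 80);   // buf now holds "These go into the gzip file 123"
+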
The new version of gzifstream/gzofstream/gzfilebuf differs +from the previous one in the following respects: +- added showmanyc +- added setbuf, with support for unbuffered output via setbuf(0,0) +- a few bug fixes of stream behavior +- gzipped output file opened with default compression level instead of maximum level +- setcompressionlevel()/strategy() members replaced by single setcompression() + +The code is provided "as is", with the permission to use, copy, modify, distribute +and sell it for any purpose without fee. + +Ludwig Schwardt + + +DSP Lab +Electrical & Electronic Engineering Department +University of Stellenbosch +South Africa ADDED compat/zlib/contrib/iostream3/TODO Index: compat/zlib/contrib/iostream3/TODO ================================================================== --- compat/zlib/contrib/iostream3/TODO +++ compat/zlib/contrib/iostream3/TODO @@ -0,0 +1,17 @@ +Possible upgrades to gzfilebuf: + +- The ability to do putback (e.g. putbackfail) + +- The ability to seek (zlib supports this, but could be slow/tricky) + +- Simultaneous read/write access (does it make sense?) + +- Support for ios_base::ate open mode + +- Locale support? + +- Check public interface to see which calls give problems + (due to dependence on library internals) + +- Override operator<<(ostream&, gzfilebuf*) to allow direct copying + of stream buffer to stream ( i.e. os << is.rdbuf(); ) ADDED compat/zlib/contrib/iostream3/test.cc Index: compat/zlib/contrib/iostream3/test.cc ================================================================== --- compat/zlib/contrib/iostream3/test.cc +++ compat/zlib/contrib/iostream3/test.cc @@ -0,0 +1,50 @@ +/* + * Test program for gzifstream and gzofstream + * + * by Ludwig Schwardt + * original version by Kevin Ruland + */ + +#include "zfstream.h" +#include // for cout + +int main() { + + gzofstream outf; + gzifstream inf; + char buf[80]; + + outf.open("test1.txt.gz"); + outf << "The quick brown fox sidestepped the lazy canine\n" + << 1.3 << "\nPlan " << 9 << std::endl; + outf.close(); + std::cout << "Wrote the following message to 'test1.txt.gz' (check with zcat or zless):\n" + << "The quick brown fox sidestepped the lazy canine\n" + << 1.3 << "\nPlan " << 9 << std::endl; + + std::cout << "\nReading 'test1.txt.gz' (buffered) produces:\n"; + inf.open("test1.txt.gz"); + while (inf.getline(buf,80,'\n')) { + std::cout << buf << "\t(" << inf.rdbuf()->in_avail() << " chars left in buffer)\n"; + } + inf.close(); + + outf.rdbuf()->pubsetbuf(0,0); + outf.open("test2.txt.gz"); + outf << setcompression(Z_NO_COMPRESSION) + << "The quick brown fox sidestepped the lazy canine\n" + << 1.3 << "\nPlan " << 9 << std::endl; + outf.close(); + std::cout << "\nWrote the same message to 'test2.txt.gz' in uncompressed form"; + + std::cout << "\nReading 'test2.txt.gz' (unbuffered) produces:\n"; + inf.rdbuf()->pubsetbuf(0,0); + inf.open("test2.txt.gz"); + while (inf.getline(buf,80,'\n')) { + std::cout << buf << "\t(" << inf.rdbuf()->in_avail() << " chars left in buffer)\n"; + } + inf.close(); + + return 0; + +} ADDED compat/zlib/contrib/iostream3/zfstream.cc Index: compat/zlib/contrib/iostream3/zfstream.cc ================================================================== --- compat/zlib/contrib/iostream3/zfstream.cc +++ compat/zlib/contrib/iostream3/zfstream.cc @@ -0,0 +1,479 @@ +/* + * A C++ I/O streams interface to the zlib gz* functions + * + * by Ludwig Schwardt + * original version by Kevin Ruland + * + * This version is standard-compliant and compatible with gcc 3.x. 
+ */ + +#include "zfstream.h" +#include // for strcpy, strcat, strlen (mode strings) +#include // for BUFSIZ + +// Internal buffer sizes (default and "unbuffered" versions) +#define BIGBUFSIZE BUFSIZ +#define SMALLBUFSIZE 1 + +/*****************************************************************************/ + +// Default constructor +gzfilebuf::gzfilebuf() +: file(NULL), io_mode(std::ios_base::openmode(0)), own_fd(false), + buffer(NULL), buffer_size(BIGBUFSIZE), own_buffer(true) +{ + // No buffers to start with + this->disable_buffer(); +} + +// Destructor +gzfilebuf::~gzfilebuf() +{ + // Sync output buffer and close only if responsible for file + // (i.e. attached streams should be left open at this stage) + this->sync(); + if (own_fd) + this->close(); + // Make sure internal buffer is deallocated + this->disable_buffer(); +} + +// Set compression level and strategy +int +gzfilebuf::setcompression(int comp_level, + int comp_strategy) +{ + return gzsetparams(file, comp_level, comp_strategy); +} + +// Open gzipped file +gzfilebuf* +gzfilebuf::open(const char *name, + std::ios_base::openmode mode) +{ + // Fail if file already open + if (this->is_open()) + return NULL; + // Don't support simultaneous read/write access (yet) + if ((mode & std::ios_base::in) && (mode & std::ios_base::out)) + return NULL; + + // Build mode string for gzopen and check it [27.8.1.3.2] + char char_mode[6] = "\0\0\0\0\0"; + if (!this->open_mode(mode, char_mode)) + return NULL; + + // Attempt to open file + if ((file = gzopen(name, char_mode)) == NULL) + return NULL; + + // On success, allocate internal buffer and set flags + this->enable_buffer(); + io_mode = mode; + own_fd = true; + return this; +} + +// Attach to gzipped file +gzfilebuf* +gzfilebuf::attach(int fd, + std::ios_base::openmode mode) +{ + // Fail if file already open + if (this->is_open()) + return NULL; + // Don't support simultaneous read/write access (yet) + if ((mode & std::ios_base::in) && (mode & std::ios_base::out)) + return NULL; + + // Build mode string for gzdopen and check it [27.8.1.3.2] + char char_mode[6] = "\0\0\0\0\0"; + if (!this->open_mode(mode, char_mode)) + return NULL; + + // Attempt to attach to file + if ((file = gzdopen(fd, char_mode)) == NULL) + return NULL; + + // On success, allocate internal buffer and set flags + this->enable_buffer(); + io_mode = mode; + own_fd = false; + return this; +} + +// Close gzipped file +gzfilebuf* +gzfilebuf::close() +{ + // Fail immediately if no file is open + if (!this->is_open()) + return NULL; + // Assume success + gzfilebuf* retval = this; + // Attempt to sync and close gzipped file + if (this->sync() == -1) + retval = NULL; + if (gzclose(file) < 0) + retval = NULL; + // File is now gone anyway (postcondition [27.8.1.3.8]) + file = NULL; + own_fd = false; + // Destroy internal buffer if it exists + this->disable_buffer(); + return retval; +} + +/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ + +// Convert int open mode to mode string +bool +gzfilebuf::open_mode(std::ios_base::openmode mode, + char* c_mode) const +{ + bool testb = mode & std::ios_base::binary; + bool testi = mode & std::ios_base::in; + bool testo = mode & std::ios_base::out; + bool testt = mode & std::ios_base::trunc; + bool testa = mode & std::ios_base::app; + + // Check for valid flag combinations - see [27.8.1.3.2] (Table 92) + // Original zfstream hardcoded the compression level to maximum here... 
+ // Double the time for less than 1% size improvement seems + // excessive though - keeping it at the default level + // To change back, just append "9" to the next three mode strings + if (!testi && testo && !testt && !testa) + strcpy(c_mode, "w"); + if (!testi && testo && !testt && testa) + strcpy(c_mode, "a"); + if (!testi && testo && testt && !testa) + strcpy(c_mode, "w"); + if (testi && !testo && !testt && !testa) + strcpy(c_mode, "r"); + // No read/write mode yet +// if (testi && testo && !testt && !testa) +// strcpy(c_mode, "r+"); +// if (testi && testo && testt && !testa) +// strcpy(c_mode, "w+"); + + // Mode string should be empty for invalid combination of flags + if (strlen(c_mode) == 0) + return false; + if (testb) + strcat(c_mode, "b"); + return true; +} + +// Determine number of characters in internal get buffer +std::streamsize +gzfilebuf::showmanyc() +{ + // Calls to underflow will fail if file not opened for reading + if (!this->is_open() || !(io_mode & std::ios_base::in)) + return -1; + // Make sure get area is in use + if (this->gptr() && (this->gptr() < this->egptr())) + return std::streamsize(this->egptr() - this->gptr()); + else + return 0; +} + +// Fill get area from gzipped file +gzfilebuf::int_type +gzfilebuf::underflow() +{ + // If something is left in the get area by chance, return it + // (this shouldn't normally happen, as underflow is only supposed + // to be called when gptr >= egptr, but it serves as error check) + if (this->gptr() && (this->gptr() < this->egptr())) + return traits_type::to_int_type(*(this->gptr())); + + // If the file hasn't been opened for reading, produce error + if (!this->is_open() || !(io_mode & std::ios_base::in)) + return traits_type::eof(); + + // Attempt to fill internal buffer from gzipped file + // (buffer must be guaranteed to exist...) 
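+  // gzread() returns the number of uncompressed bytes read, 0 at end of
+  // file, or a negative value on error, so <= 0 covers both cases.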
+ int bytes_read = gzread(file, buffer, buffer_size); + // Indicates error or EOF + if (bytes_read <= 0) + { + // Reset get area + this->setg(buffer, buffer, buffer); + return traits_type::eof(); + } + // Make all bytes read from file available as get area + this->setg(buffer, buffer, buffer + bytes_read); + + // Return next character in get area + return traits_type::to_int_type(*(this->gptr())); +} + +// Write put area to gzipped file +gzfilebuf::int_type +gzfilebuf::overflow(int_type c) +{ + // Determine whether put area is in use + if (this->pbase()) + { + // Double-check pointer range + if (this->pptr() > this->epptr() || this->pptr() < this->pbase()) + return traits_type::eof(); + // Add extra character to buffer if not EOF + if (!traits_type::eq_int_type(c, traits_type::eof())) + { + *(this->pptr()) = traits_type::to_char_type(c); + this->pbump(1); + } + // Number of characters to write to file + int bytes_to_write = this->pptr() - this->pbase(); + // Overflow doesn't fail if nothing is to be written + if (bytes_to_write > 0) + { + // If the file hasn't been opened for writing, produce error + if (!this->is_open() || !(io_mode & std::ios_base::out)) + return traits_type::eof(); + // If gzipped file won't accept all bytes written to it, fail + if (gzwrite(file, this->pbase(), bytes_to_write) != bytes_to_write) + return traits_type::eof(); + // Reset next pointer to point to pbase on success + this->pbump(-bytes_to_write); + } + } + // Write extra character to file if not EOF + else if (!traits_type::eq_int_type(c, traits_type::eof())) + { + // If the file hasn't been opened for writing, produce error + if (!this->is_open() || !(io_mode & std::ios_base::out)) + return traits_type::eof(); + // Impromptu char buffer (allows "unbuffered" output) + char_type last_char = traits_type::to_char_type(c); + // If gzipped file won't accept this character, fail + if (gzwrite(file, &last_char, 1) != 1) + return traits_type::eof(); + } + + // If you got here, you have succeeded (even if c was EOF) + // The return value should therefore be non-EOF + if (traits_type::eq_int_type(c, traits_type::eof())) + return traits_type::not_eof(c); + else + return c; +} + +// Assign new buffer +std::streambuf* +gzfilebuf::setbuf(char_type* p, + std::streamsize n) +{ + // First make sure stuff is sync'ed, for safety + if (this->sync() == -1) + return NULL; + // If buffering is turned off on purpose via setbuf(0,0), still allocate one... + // "Unbuffered" only really refers to put [27.8.1.4.10], while get needs at + // least a buffer of size 1 (very inefficient though, therefore make it bigger?) + // This follows from [27.5.2.4.3]/12 (gptr needs to point at something, it seems) + if (!p || !n) + { + // Replace existing buffer (if any) with small internal buffer + this->disable_buffer(); + buffer = NULL; + buffer_size = 0; + own_buffer = true; + this->enable_buffer(); + } + else + { + // Replace existing buffer (if any) with external buffer + this->disable_buffer(); + buffer = p; + buffer_size = n; + own_buffer = false; + this->enable_buffer(); + } + return this; +} + +// Write put area to gzipped file (i.e. ensures that put area is empty) +int +gzfilebuf::sync() +{ + return traits_type::eq_int_type(this->overflow(), traits_type::eof()) ? -1 : 0; +} + +/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */ + +// Allocate internal buffer +void +gzfilebuf::enable_buffer() +{ + // If internal buffer required, allocate one + if (own_buffer && !buffer) + { + // Check for buffered vs. 
"unbuffered" + if (buffer_size > 0) + { + // Allocate internal buffer + buffer = new char_type[buffer_size]; + // Get area starts empty and will be expanded by underflow as need arises + this->setg(buffer, buffer, buffer); + // Setup entire internal buffer as put area. + // The one-past-end pointer actually points to the last element of the buffer, + // so that overflow(c) can safely add the extra character c to the sequence. + // These pointers remain in place for the duration of the buffer + this->setp(buffer, buffer + buffer_size - 1); + } + else + { + // Even in "unbuffered" case, (small?) get buffer is still required + buffer_size = SMALLBUFSIZE; + buffer = new char_type[buffer_size]; + this->setg(buffer, buffer, buffer); + // "Unbuffered" means no put buffer + this->setp(0, 0); + } + } + else + { + // If buffer already allocated, reset buffer pointers just to make sure no + // stale chars are lying around + this->setg(buffer, buffer, buffer); + this->setp(buffer, buffer + buffer_size - 1); + } +} + +// Destroy internal buffer +void +gzfilebuf::disable_buffer() +{ + // If internal buffer exists, deallocate it + if (own_buffer && buffer) + { + // Preserve unbuffered status by zeroing size + if (!this->pbase()) + buffer_size = 0; + delete[] buffer; + buffer = NULL; + this->setg(0, 0, 0); + this->setp(0, 0); + } + else + { + // Reset buffer pointers to initial state if external buffer exists + this->setg(buffer, buffer, buffer); + if (buffer) + this->setp(buffer, buffer + buffer_size - 1); + else + this->setp(0, 0); + } +} + +/*****************************************************************************/ + +// Default constructor initializes stream buffer +gzifstream::gzifstream() +: std::istream(NULL), sb() +{ this->init(&sb); } + +// Initialize stream buffer and open file +gzifstream::gzifstream(const char* name, + std::ios_base::openmode mode) +: std::istream(NULL), sb() +{ + this->init(&sb); + this->open(name, mode); +} + +// Initialize stream buffer and attach to file +gzifstream::gzifstream(int fd, + std::ios_base::openmode mode) +: std::istream(NULL), sb() +{ + this->init(&sb); + this->attach(fd, mode); +} + +// Open file and go into fail() state if unsuccessful +void +gzifstream::open(const char* name, + std::ios_base::openmode mode) +{ + if (!sb.open(name, mode | std::ios_base::in)) + this->setstate(std::ios_base::failbit); + else + this->clear(); +} + +// Attach to file and go into fail() state if unsuccessful +void +gzifstream::attach(int fd, + std::ios_base::openmode mode) +{ + if (!sb.attach(fd, mode | std::ios_base::in)) + this->setstate(std::ios_base::failbit); + else + this->clear(); +} + +// Close file +void +gzifstream::close() +{ + if (!sb.close()) + this->setstate(std::ios_base::failbit); +} + +/*****************************************************************************/ + +// Default constructor initializes stream buffer +gzofstream::gzofstream() +: std::ostream(NULL), sb() +{ this->init(&sb); } + +// Initialize stream buffer and open file +gzofstream::gzofstream(const char* name, + std::ios_base::openmode mode) +: std::ostream(NULL), sb() +{ + this->init(&sb); + this->open(name, mode); +} + +// Initialize stream buffer and attach to file +gzofstream::gzofstream(int fd, + std::ios_base::openmode mode) +: std::ostream(NULL), sb() +{ + this->init(&sb); + this->attach(fd, mode); +} + +// Open file and go into fail() state if unsuccessful +void +gzofstream::open(const char* name, + std::ios_base::openmode mode) +{ + if (!sb.open(name, mode | std::ios_base::out)) + 
this->setstate(std::ios_base::failbit); + else + this->clear(); +} + +// Attach to file and go into fail() state if unsuccessful +void +gzofstream::attach(int fd, + std::ios_base::openmode mode) +{ + if (!sb.attach(fd, mode | std::ios_base::out)) + this->setstate(std::ios_base::failbit); + else + this->clear(); +} + +// Close file +void +gzofstream::close() +{ + if (!sb.close()) + this->setstate(std::ios_base::failbit); +} ADDED compat/zlib/contrib/iostream3/zfstream.h Index: compat/zlib/contrib/iostream3/zfstream.h ================================================================== --- compat/zlib/contrib/iostream3/zfstream.h +++ compat/zlib/contrib/iostream3/zfstream.h @@ -0,0 +1,466 @@ +/* + * A C++ I/O streams interface to the zlib gz* functions + * + * by Ludwig Schwardt + * original version by Kevin Ruland + * + * This version is standard-compliant and compatible with gcc 3.x. + */ + +#ifndef ZFSTREAM_H +#define ZFSTREAM_H + +#include // not iostream, since we don't need cin/cout +#include +#include "zlib.h" + +/*****************************************************************************/ + +/** + * @brief Gzipped file stream buffer class. + * + * This class implements basic_filebuf for gzipped files. It doesn't yet support + * seeking (allowed by zlib but slow/limited), putback and read/write access + * (tricky). Otherwise, it attempts to be a drop-in replacement for the standard + * file streambuf. +*/ +class gzfilebuf : public std::streambuf +{ +public: + // Default constructor. + gzfilebuf(); + + // Destructor. + virtual + ~gzfilebuf(); + + /** + * @brief Set compression level and strategy on the fly. + * @param comp_level Compression level (see zlib.h for allowed values) + * @param comp_strategy Compression strategy (see zlib.h for allowed values) + * @return Z_OK on success, Z_STREAM_ERROR otherwise. + * + * Unfortunately, these parameters cannot be modified separately, as the + * previous zfstream version assumed. Since the strategy is seldom changed, + * it can default and setcompression(level) then becomes like the old + * setcompressionlevel(level). + */ + int + setcompression(int comp_level, + int comp_strategy = Z_DEFAULT_STRATEGY); + + /** + * @brief Check if file is open. + * @return True if file is open. + */ + bool + is_open() const { return (file != NULL); } + + /** + * @brief Open gzipped file. + * @param name File name. + * @param mode Open mode flags. + * @return @c this on success, NULL on failure. + */ + gzfilebuf* + open(const char* name, + std::ios_base::openmode mode); + + /** + * @brief Attach to already open gzipped file. + * @param fd File descriptor. + * @param mode Open mode flags. + * @return @c this on success, NULL on failure. + */ + gzfilebuf* + attach(int fd, + std::ios_base::openmode mode); + + /** + * @brief Close gzipped file. + * @return @c this on success, NULL on failure. + */ + gzfilebuf* + close(); + +protected: + /** + * @brief Convert ios open mode int to mode string used by zlib. + * @return True if valid mode flag combination. + */ + bool + open_mode(std::ios_base::openmode mode, + char* c_mode) const; + + /** + * @brief Number of characters available in stream buffer. + * @return Number of characters. + * + * This indicates number of characters in get area of stream buffer. + * These characters can be read without accessing the gzipped file. + */ + virtual std::streamsize + showmanyc(); + + /** + * @brief Fill get area from gzipped file. + * @return First character in get area on success, EOF on error. 
+ * + * This actually reads characters from gzipped file to stream + * buffer. Always buffered. + */ + virtual int_type + underflow(); + + /** + * @brief Write put area to gzipped file. + * @param c Extra character to add to buffer contents. + * @return Non-EOF on success, EOF on error. + * + * This actually writes characters in stream buffer to + * gzipped file. With unbuffered output this is done one + * character at a time. + */ + virtual int_type + overflow(int_type c = traits_type::eof()); + + /** + * @brief Installs external stream buffer. + * @param p Pointer to char buffer. + * @param n Size of external buffer. + * @return @c this on success, NULL on failure. + * + * Call setbuf(0,0) to enable unbuffered output. + */ + virtual std::streambuf* + setbuf(char_type* p, + std::streamsize n); + + /** + * @brief Flush stream buffer to file. + * @return 0 on success, -1 on error. + * + * This calls underflow(EOF) to do the job. + */ + virtual int + sync(); + +// +// Some future enhancements +// +// virtual int_type uflow(); +// virtual int_type pbackfail(int_type c = traits_type::eof()); +// virtual pos_type +// seekoff(off_type off, +// std::ios_base::seekdir way, +// std::ios_base::openmode mode = std::ios_base::in|std::ios_base::out); +// virtual pos_type +// seekpos(pos_type sp, +// std::ios_base::openmode mode = std::ios_base::in|std::ios_base::out); + +private: + /** + * @brief Allocate internal buffer. + * + * This function is safe to call multiple times. It will ensure + * that a proper internal buffer exists if it is required. If the + * buffer already exists or is external, the buffer pointers will be + * reset to their original state. + */ + void + enable_buffer(); + + /** + * @brief Destroy internal buffer. + * + * This function is safe to call multiple times. It will ensure + * that the internal buffer is deallocated if it exists. In any + * case, it will also reset the buffer pointers. + */ + void + disable_buffer(); + + /** + * Underlying file pointer. + */ + gzFile file; + + /** + * Mode in which file was opened. + */ + std::ios_base::openmode io_mode; + + /** + * @brief True if this object owns file descriptor. + * + * This makes the class responsible for closing the file + * upon destruction. + */ + bool own_fd; + + /** + * @brief Stream buffer. + * + * For simplicity this remains allocated on the free store for the + * entire life span of the gzfilebuf object, unless replaced by setbuf. + */ + char_type* buffer; + + /** + * @brief Stream buffer size. + * + * Defaults to system default buffer size (typically 8192 bytes). + * Modified by setbuf. + */ + std::streamsize buffer_size; + + /** + * @brief True if this object owns stream buffer. + * + * This makes the class responsible for deleting the buffer + * upon destruction. + */ + bool own_buffer; +}; + +/*****************************************************************************/ + +/** + * @brief Gzipped file input stream class. + * + * This class implements ifstream for gzipped files. Seeking and putback + * is not supported yet. +*/ +class gzifstream : public std::istream +{ +public: + // Default constructor + gzifstream(); + + /** + * @brief Construct stream on gzipped file to be opened. + * @param name File name. + * @param mode Open mode flags (forced to contain ios::in). + */ + explicit + gzifstream(const char* name, + std::ios_base::openmode mode = std::ios_base::in); + + /** + * @brief Construct stream on already open gzipped file. + * @param fd File descriptor. 
+ * @param mode Open mode flags (forced to contain ios::in). + */ + explicit + gzifstream(int fd, + std::ios_base::openmode mode = std::ios_base::in); + + /** + * Obtain underlying stream buffer. + */ + gzfilebuf* + rdbuf() const + { return const_cast(&sb); } + + /** + * @brief Check if file is open. + * @return True if file is open. + */ + bool + is_open() { return sb.is_open(); } + + /** + * @brief Open gzipped file. + * @param name File name. + * @param mode Open mode flags (forced to contain ios::in). + * + * Stream will be in state good() if file opens successfully; + * otherwise in state fail(). This differs from the behavior of + * ifstream, which never sets the state to good() and therefore + * won't allow you to reuse the stream for a second file unless + * you manually clear() the state. The choice is a matter of + * convenience. + */ + void + open(const char* name, + std::ios_base::openmode mode = std::ios_base::in); + + /** + * @brief Attach to already open gzipped file. + * @param fd File descriptor. + * @param mode Open mode flags (forced to contain ios::in). + * + * Stream will be in state good() if attach succeeded; otherwise + * in state fail(). + */ + void + attach(int fd, + std::ios_base::openmode mode = std::ios_base::in); + + /** + * @brief Close gzipped file. + * + * Stream will be in state fail() if close failed. + */ + void + close(); + +private: + /** + * Underlying stream buffer. + */ + gzfilebuf sb; +}; + +/*****************************************************************************/ + +/** + * @brief Gzipped file output stream class. + * + * This class implements ofstream for gzipped files. Seeking and putback + * is not supported yet. +*/ +class gzofstream : public std::ostream +{ +public: + // Default constructor + gzofstream(); + + /** + * @brief Construct stream on gzipped file to be opened. + * @param name File name. + * @param mode Open mode flags (forced to contain ios::out). + */ + explicit + gzofstream(const char* name, + std::ios_base::openmode mode = std::ios_base::out); + + /** + * @brief Construct stream on already open gzipped file. + * @param fd File descriptor. + * @param mode Open mode flags (forced to contain ios::out). + */ + explicit + gzofstream(int fd, + std::ios_base::openmode mode = std::ios_base::out); + + /** + * Obtain underlying stream buffer. + */ + gzfilebuf* + rdbuf() const + { return const_cast(&sb); } + + /** + * @brief Check if file is open. + * @return True if file is open. + */ + bool + is_open() { return sb.is_open(); } + + /** + * @brief Open gzipped file. + * @param name File name. + * @param mode Open mode flags (forced to contain ios::out). + * + * Stream will be in state good() if file opens successfully; + * otherwise in state fail(). This differs from the behavior of + * ofstream, which never sets the state to good() and therefore + * won't allow you to reuse the stream for a second file unless + * you manually clear() the state. The choice is a matter of + * convenience. + */ + void + open(const char* name, + std::ios_base::openmode mode = std::ios_base::out); + + /** + * @brief Attach to already open gzipped file. + * @param fd File descriptor. + * @param mode Open mode flags (forced to contain ios::out). + * + * Stream will be in state good() if attach succeeded; otherwise + * in state fail(). + */ + void + attach(int fd, + std::ios_base::openmode mode = std::ios_base::out); + + /** + * @brief Close gzipped file. + * + * Stream will be in state fail() if close failed. 
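+ * The file is in any case closed when the stream object is destroyed,
+ * provided it was opened with open(); a descriptor passed to attach()
+ * is left open.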
+ */ + void + close(); + +private: + /** + * Underlying stream buffer. + */ + gzfilebuf sb; +}; + +/*****************************************************************************/ + +/** + * @brief Gzipped file output stream manipulator class. + * + * This class defines a two-argument manipulator for gzofstream. It is used + * as base for the setcompression(int,int) manipulator. +*/ +template + class gzomanip2 + { + public: + // Allows insertor to peek at internals + template + friend gzofstream& + operator<<(gzofstream&, + const gzomanip2&); + + // Constructor + gzomanip2(gzofstream& (*f)(gzofstream&, T1, T2), + T1 v1, + T2 v2); + private: + // Underlying manipulator function + gzofstream& + (*func)(gzofstream&, T1, T2); + + // Arguments for manipulator function + T1 val1; + T2 val2; + }; + +/*****************************************************************************/ + +// Manipulator function thunks through to stream buffer +inline gzofstream& +setcompression(gzofstream &gzs, int l, int s = Z_DEFAULT_STRATEGY) +{ + (gzs.rdbuf())->setcompression(l, s); + return gzs; +} + +// Manipulator constructor stores arguments +template + inline + gzomanip2::gzomanip2(gzofstream &(*f)(gzofstream &, T1, T2), + T1 v1, + T2 v2) + : func(f), val1(v1), val2(v2) + { } + +// Insertor applies underlying manipulator function to stream +template + inline gzofstream& + operator<<(gzofstream& s, const gzomanip2& m) + { return (*m.func)(s, m.val1, m.val2); } + +// Insert this onto stream to simplify setting of compression level +inline gzomanip2 +setcompression(int l, int s = Z_DEFAULT_STRATEGY) +{ return gzomanip2(&setcompression, l, s); } + +#endif // ZFSTREAM_H ADDED compat/zlib/contrib/masmx64/bld_ml64.bat Index: compat/zlib/contrib/masmx64/bld_ml64.bat ================================================================== --- compat/zlib/contrib/masmx64/bld_ml64.bat +++ compat/zlib/contrib/masmx64/bld_ml64.bat @@ -0,0 +1,2 @@ +ml64.exe /Flinffasx64 /c /Zi inffasx64.asm +ml64.exe /Flgvmat64 /c /Zi gvmat64.asm ADDED compat/zlib/contrib/masmx64/gvmat64.asm Index: compat/zlib/contrib/masmx64/gvmat64.asm ================================================================== --- compat/zlib/contrib/masmx64/gvmat64.asm +++ compat/zlib/contrib/masmx64/gvmat64.asm @@ -0,0 +1,553 @@ +;uInt longest_match_x64( +; deflate_state *s, +; IPos cur_match); /* current match */ + +; gvmat64.asm -- Asm portion of the optimized longest_match for 32 bits x86_64 +; (AMD64 on Athlon 64, Opteron, Phenom +; and Intel EM64T on Pentium 4 with EM64T, Pentium D, Core 2 Duo, Core I5/I7) +; Copyright (C) 1995-2010 Jean-loup Gailly, Brian Raiter and Gilles Vollant. +; +; File written by Gilles Vollant, by converting to assembly the longest_match +; from Jean-loup Gailly in deflate.c of zLib and infoZip zip. +; +; and by taking inspiration on asm686 with masm, optimised assembly code +; from Brian Raiter, written 1998 +; +; This software is provided 'as-is', without any express or implied +; warranty. In no event will the authors be held liable for any damages +; arising from the use of this software. +; +; Permission is granted to anyone to use this software for any purpose, +; including commercial applications, and to alter it and redistribute it +; freely, subject to the following restrictions: +; +; 1. The origin of this software must not be misrepresented; you must not +; claim that you wrote the original software. 
If you use this software +; in a product, an acknowledgment in the product documentation would be +; appreciated but is not required. +; 2. Altered source versions must be plainly marked as such, and must not be +; misrepresented as being the original software +; 3. This notice may not be removed or altered from any source distribution. +; +; +; +; http://www.zlib.net +; http://www.winimage.com/zLibDll +; http://www.muppetlabs.com/~breadbox/software/assembly.html +; +; to compile this file for infozip Zip, I use option: +; ml64.exe /Flgvmat64 /c /Zi /DINFOZIP gvmat64.asm +; +; to compile this file for zLib, I use option: +; ml64.exe /Flgvmat64 /c /Zi gvmat64.asm +; Be carrefull to adapt zlib1222add below to your version of zLib +; (if you use a version of zLib before 1.0.4 or after 1.2.2.2, change +; value of zlib1222add later) +; +; This file compile with Microsoft Macro Assembler (x64) for AMD64 +; +; ml64.exe is given with Visual Studio 2005/2008/2010 and Windows WDK +; +; (you can get Windows WDK with ml64 for AMD64 from +; http://www.microsoft.com/whdc/Devtools/wdk/default.mspx for low price) +; + + +;uInt longest_match(s, cur_match) +; deflate_state *s; +; IPos cur_match; /* current match */ +.code +longest_match PROC + + +;LocalVarsSize equ 88 + LocalVarsSize equ 72 + +; register used : rax,rbx,rcx,rdx,rsi,rdi,r8,r9,r10,r11,r12 +; free register : r14,r15 +; register can be saved : rsp + + chainlenwmask equ rsp + 8 - LocalVarsSize ; high word: current chain len + ; low word: s->wmask +;window equ rsp + xx - LocalVarsSize ; local copy of s->window ; stored in r10 +;windowbestlen equ rsp + xx - LocalVarsSize ; s->window + bestlen , use r10+r11 +;scanstart equ rsp + xx - LocalVarsSize ; first two bytes of string ; stored in r12w +;scanend equ rsp + xx - LocalVarsSize ; last two bytes of string use ebx +;scanalign equ rsp + xx - LocalVarsSize ; dword-misalignment of string r13 +;bestlen equ rsp + xx - LocalVarsSize ; size of best match so far -> r11d +;scan equ rsp + xx - LocalVarsSize ; ptr to string wanting match -> r9 +IFDEF INFOZIP +ELSE + nicematch equ (rsp + 16 - LocalVarsSize) ; a good enough match size +ENDIF + +save_rdi equ rsp + 24 - LocalVarsSize +save_rsi equ rsp + 32 - LocalVarsSize +save_rbx equ rsp + 40 - LocalVarsSize +save_rbp equ rsp + 48 - LocalVarsSize +save_r12 equ rsp + 56 - LocalVarsSize +save_r13 equ rsp + 64 - LocalVarsSize +;save_r14 equ rsp + 72 - LocalVarsSize +;save_r15 equ rsp + 80 - LocalVarsSize + + +; summary of register usage +; scanend ebx +; scanendw bx +; chainlenwmask edx +; curmatch rsi +; curmatchd esi +; windowbestlen r8 +; scanalign r9 +; scanalignd r9d +; window r10 +; bestlen r11 +; bestlend r11d +; scanstart r12d +; scanstartw r12w +; scan r13 +; nicematch r14d +; limit r15 +; limitd r15d +; prev rcx + +; all the +4 offsets are due to the addition of pending_buf_size (in zlib +; in the deflate_state structure since the asm code was first written +; (if you compile with zlib 1.0.4 or older, remove the +4). +; Note : these value are good with a 8 bytes boundary pack structure + + + MAX_MATCH equ 258 + MIN_MATCH equ 3 + MIN_LOOKAHEAD equ (MAX_MATCH+MIN_MATCH+1) + + +;;; Offsets for fields in the deflate_state structure. These numbers +;;; are calculated from the definition of deflate_state, with the +;;; assumption that the compiler will dword-align the fields. (Thus, +;;; changing the definition of deflate_state could easily cause this +;;; program to crash horribly, without so much as a warning at +;;; compile time. Sigh.) 
+ +; all the +zlib1222add offsets are due to the addition of fields +; in zlib in the deflate_state structure since the asm code was first written +; (if you compile with zlib 1.0.4 or older, use "zlib1222add equ (-4)"). +; (if you compile with zlib between 1.0.5 and 1.2.2.1, use "zlib1222add equ 0"). +; if you compile with zlib 1.2.2.2 or later , use "zlib1222add equ 8"). + + +IFDEF INFOZIP + +_DATA SEGMENT +COMM window_size:DWORD +; WMask ; 7fff +COMM window:BYTE:010040H +COMM prev:WORD:08000H +; MatchLen : unused +; PrevMatch : unused +COMM strstart:DWORD +COMM match_start:DWORD +; Lookahead : ignore +COMM prev_length:DWORD ; PrevLen +COMM max_chain_length:DWORD +COMM good_match:DWORD +COMM nice_match:DWORD +prev_ad equ OFFSET prev +window_ad equ OFFSET window +nicematch equ nice_match +_DATA ENDS +WMask equ 07fffh + +ELSE + + IFNDEF zlib1222add + zlib1222add equ 8 + ENDIF +dsWSize equ 56+zlib1222add+(zlib1222add/2) +dsWMask equ 64+zlib1222add+(zlib1222add/2) +dsWindow equ 72+zlib1222add +dsPrev equ 88+zlib1222add +dsMatchLen equ 128+zlib1222add +dsPrevMatch equ 132+zlib1222add +dsStrStart equ 140+zlib1222add +dsMatchStart equ 144+zlib1222add +dsLookahead equ 148+zlib1222add +dsPrevLen equ 152+zlib1222add +dsMaxChainLen equ 156+zlib1222add +dsGoodMatch equ 172+zlib1222add +dsNiceMatch equ 176+zlib1222add + +window_size equ [ rcx + dsWSize] +WMask equ [ rcx + dsWMask] +window_ad equ [ rcx + dsWindow] +prev_ad equ [ rcx + dsPrev] +strstart equ [ rcx + dsStrStart] +match_start equ [ rcx + dsMatchStart] +Lookahead equ [ rcx + dsLookahead] ; 0ffffffffh on infozip +prev_length equ [ rcx + dsPrevLen] +max_chain_length equ [ rcx + dsMaxChainLen] +good_match equ [ rcx + dsGoodMatch] +nice_match equ [ rcx + dsNiceMatch] +ENDIF + +; parameter 1 in r8(deflate state s), param 2 in rdx (cur match) + +; see http://weblogs.asp.net/oldnewthing/archive/2004/01/14/58579.aspx and +; http://msdn.microsoft.com/library/en-us/kmarch/hh/kmarch/64bitAMD_8e951dd2-ee77-4728-8702-55ce4b5dd24a.xml.asp +; +; All registers must be preserved across the call, except for +; rax, rcx, rdx, r8, r9, r10, and r11, which are scratch. + + + +;;; Save registers that the compiler may be using, and adjust esp to +;;; make room for our stack frame. + + +;;; Retrieve the function arguments. r8d will hold cur_match +;;; throughout the entire function. edx will hold the pointer to the +;;; deflate_state structure during the function's setup (before +;;; entering the main loop. + +; parameter 1 in rcx (deflate_state* s), param 2 in edx -> r8 (cur match) + +; this clear high 32 bits of r8, which can be garbage in both r8 and rdx + + mov [save_rdi],rdi + mov [save_rsi],rsi + mov [save_rbx],rbx + mov [save_rbp],rbp +IFDEF INFOZIP + mov r8d,ecx +ELSE + mov r8d,edx +ENDIF + mov [save_r12],r12 + mov [save_r13],r13 +; mov [save_r14],r14 +; mov [save_r15],r15 + + +;;; uInt wmask = s->w_mask; +;;; unsigned chain_length = s->max_chain_length; +;;; if (s->prev_length >= s->good_match) { +;;; chain_length >>= 2; +;;; } + + mov edi, prev_length + mov esi, good_match + mov eax, WMask + mov ebx, max_chain_length + cmp edi, esi + jl LastMatchGood + shr ebx, 2 +LastMatchGood: + +;;; chainlen is decremented once beforehand so that the function can +;;; use the sign flag instead of the zero flag for the exit test. +;;; It is then shifted into the high word, to make room for the wmask +;;; value, which it will always accompany. 
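+;;; (After the dec/shl/or below, ebx holds ((chain_length - 1) << 16) | wmask,
+;;; the same packed value that is reloaded from [chainlenwmask] inside the loop.)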
+ + dec ebx + shl ebx, 16 + or ebx, eax + +;;; on zlib only +;;; if ((uInt)nice_match > s->lookahead) nice_match = s->lookahead; + +IFDEF INFOZIP + mov [chainlenwmask], ebx +; on infozip nice_match = [nice_match] +ELSE + mov eax, nice_match + mov [chainlenwmask], ebx + mov r10d, Lookahead + cmp r10d, eax + cmovnl r10d, eax + mov [nicematch],r10d +ENDIF + +;;; register Bytef *scan = s->window + s->strstart; + mov r10, window_ad + mov ebp, strstart + lea r13, [r10 + rbp] + +;;; Determine how many bytes the scan ptr is off from being +;;; dword-aligned. + + mov r9,r13 + neg r13 + and r13,3 + +;;; IPos limit = s->strstart > (IPos)MAX_DIST(s) ? +;;; s->strstart - (IPos)MAX_DIST(s) : NIL; +IFDEF INFOZIP + mov eax,07efah ; MAX_DIST = (WSIZE-MIN_LOOKAHEAD) (0x8000-(3+8+1)) +ELSE + mov eax, window_size + sub eax, MIN_LOOKAHEAD +ENDIF + xor edi,edi + sub ebp, eax + + mov r11d, prev_length + + cmovng ebp,edi + +;;; int best_len = s->prev_length; + + +;;; Store the sum of s->window + best_len in esi locally, and in esi. + + lea rsi,[r10+r11] + +;;; register ush scan_start = *(ushf*)scan; +;;; register ush scan_end = *(ushf*)(scan+best_len-1); +;;; Posf *prev = s->prev; + + movzx r12d,word ptr [r9] + movzx ebx, word ptr [r9 + r11 - 1] + + mov rdi, prev_ad + +;;; Jump into the main loop. + + mov edx, [chainlenwmask] + + cmp bx,word ptr [rsi + r8 - 1] + jz LookupLoopIsZero + +LookupLoop1: + and r8d, edx + + movzx r8d, word ptr [rdi + r8*2] + cmp r8d, ebp + jbe LeaveNow + sub edx, 00010000h + js LeaveNow + +LoopEntry1: + cmp bx,word ptr [rsi + r8 - 1] + jz LookupLoopIsZero + +LookupLoop2: + and r8d, edx + + movzx r8d, word ptr [rdi + r8*2] + cmp r8d, ebp + jbe LeaveNow + sub edx, 00010000h + js LeaveNow + +LoopEntry2: + cmp bx,word ptr [rsi + r8 - 1] + jz LookupLoopIsZero + +LookupLoop4: + and r8d, edx + + movzx r8d, word ptr [rdi + r8*2] + cmp r8d, ebp + jbe LeaveNow + sub edx, 00010000h + js LeaveNow + +LoopEntry4: + + cmp bx,word ptr [rsi + r8 - 1] + jnz LookupLoop1 + jmp LookupLoopIsZero + + +;;; do { +;;; match = s->window + cur_match; +;;; if (*(ushf*)(match+best_len-1) != scan_end || +;;; *(ushf*)match != scan_start) continue; +;;; [...] +;;; } while ((cur_match = prev[cur_match & wmask]) > limit +;;; && --chain_length != 0); +;;; +;;; Here is the inner loop of the function. The function will spend the +;;; majority of its time in this loop, and majority of that time will +;;; be spent in the first ten instructions. +;;; +;;; Within this loop: +;;; ebx = scanend +;;; r8d = curmatch +;;; edx = chainlenwmask - i.e., ((chainlen << 16) | wmask) +;;; esi = windowbestlen - i.e., (window + bestlen) +;;; edi = prev +;;; ebp = limit + +LookupLoop: + and r8d, edx + + movzx r8d, word ptr [rdi + r8*2] + cmp r8d, ebp + jbe LeaveNow + sub edx, 00010000h + js LeaveNow + +LoopEntry: + + cmp bx,word ptr [rsi + r8 - 1] + jnz LookupLoop1 +LookupLoopIsZero: + cmp r12w, word ptr [r10 + r8] + jnz LookupLoop1 + + +;;; Store the current value of chainlen. + mov [chainlenwmask], edx + +;;; Point edi to the string under scrutiny, and esi to the string we +;;; are hoping to match it up with. In actuality, esi and edi are +;;; both pointed (MAX_MATCH_8 - scanalign) bytes ahead, and edx is +;;; initialized to -(MAX_MATCH_8 - scanalign). + + lea rsi,[r8+r10] + mov rdx, 0fffffffffffffef8h; -(MAX_MATCH_8) + lea rsi, [rsi + r13 + 0108h] ;MAX_MATCH_8] + lea rdi, [r9 + r13 + 0108h] ;MAX_MATCH_8] + + prefetcht1 [rsi+rdx] + prefetcht1 [rdi+rdx] + + +;;; Test the strings for equality, 8 bytes at a time. 
At the end, +;;; adjust rdx so that it is offset to the exact byte that mismatched. +;;; +;;; We already know at this point that the first three bytes of the +;;; strings match each other, and they can be safely passed over before +;;; starting the compare loop. So what this code does is skip over 0-3 +;;; bytes, as much as necessary in order to dword-align the edi +;;; pointer. (rsi will still be misaligned three times out of four.) +;;; +;;; It should be confessed that this loop usually does not represent +;;; much of the total running time. Replacing it with a more +;;; straightforward "rep cmpsb" would not drastically degrade +;;; performance. + + +LoopCmps: + mov rax, [rsi + rdx] + xor rax, [rdi + rdx] + jnz LeaveLoopCmps + + mov rax, [rsi + rdx + 8] + xor rax, [rdi + rdx + 8] + jnz LeaveLoopCmps8 + + + mov rax, [rsi + rdx + 8+8] + xor rax, [rdi + rdx + 8+8] + jnz LeaveLoopCmps16 + + add rdx,8+8+8 + + jnz short LoopCmps + jmp short LenMaximum +LeaveLoopCmps16: add rdx,8 +LeaveLoopCmps8: add rdx,8 +LeaveLoopCmps: + + test eax, 0000FFFFh + jnz LenLower + + test eax,0ffffffffh + + jnz LenLower32 + + add rdx,4 + shr rax,32 + or ax,ax + jnz LenLower + +LenLower32: + shr eax,16 + add rdx,2 +LenLower: sub al, 1 + adc rdx, 0 +;;; Calculate the length of the match. If it is longer than MAX_MATCH, +;;; then automatically accept it as the best possible match and leave. + + lea rax, [rdi + rdx] + sub rax, r9 + cmp eax, MAX_MATCH + jge LenMaximum + +;;; If the length of the match is not longer than the best match we +;;; have so far, then forget it and return to the lookup loop. +;/////////////////////////////////// + + cmp eax, r11d + jg LongerMatch + + lea rsi,[r10+r11] + + mov rdi, prev_ad + mov edx, [chainlenwmask] + jmp LookupLoop + +;;; s->match_start = cur_match; +;;; best_len = len; +;;; if (len >= nice_match) break; +;;; scan_end = *(ushf*)(scan+best_len-1); + +LongerMatch: + mov r11d, eax + mov match_start, r8d + cmp eax, [nicematch] + jge LeaveNow + + lea rsi,[r10+rax] + + movzx ebx, word ptr [r9 + rax - 1] + mov rdi, prev_ad + mov edx, [chainlenwmask] + jmp LookupLoop + +;;; Accept the current string, with the maximum possible length. + +LenMaximum: + mov r11d,MAX_MATCH + mov match_start, r8d + +;;; if ((uInt)best_len <= s->lookahead) return (uInt)best_len; +;;; return s->lookahead; + +LeaveNow: +IFDEF INFOZIP + mov eax,r11d +ELSE + mov eax, Lookahead + cmp r11d, eax + cmovng eax, r11d +ENDIF + +;;; Restore the stack and return from whence we came. + + + mov rsi,[save_rsi] + mov rdi,[save_rdi] + mov rbx,[save_rbx] + mov rbp,[save_rbp] + mov r12,[save_r12] + mov r13,[save_r13] +; mov r14,[save_r14] +; mov r15,[save_r15] + + + ret 0 +; please don't remove this string ! +; Your can freely use gvmat64 in any free or commercial app +; but it is far better don't remove the string in the binary! 
+ db 0dh,0ah,"asm686 with masm, optimised assembly code from Brian Raiter, written 1998, converted to amd 64 by Gilles Vollant 2005",0dh,0ah,0 +longest_match ENDP + +match_init PROC + ret 0 +match_init ENDP + + +END ADDED compat/zlib/contrib/masmx64/inffas8664.c Index: compat/zlib/contrib/masmx64/inffas8664.c ================================================================== --- compat/zlib/contrib/masmx64/inffas8664.c +++ compat/zlib/contrib/masmx64/inffas8664.c @@ -0,0 +1,186 @@ +/* inffas8664.c is a hand tuned assembler version of inffast.c - fast decoding + * version for AMD64 on Windows using Microsoft C compiler + * + * Copyright (C) 1995-2003 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + * + * Copyright (C) 2003 Chris Anderson + * Please use the copyright conditions above. + * + * 2005 - Adaptation to Microsoft C Compiler for AMD64 by Gilles Vollant + * + * inffas8664.c call function inffas8664fnc in inffasx64.asm + * inffasx64.asm is automatically convert from AMD64 portion of inffas86.c + * + * Dec-29-2003 -- I added AMD64 inflate asm support. This version is also + * slightly quicker on x86 systems because, instead of using rep movsb to copy + * data, it uses rep movsw, which moves data in 2-byte chunks instead of single + * bytes. I've tested the AMD64 code on a Fedora Core 1 + the x86_64 updates + * from http://fedora.linux.duke.edu/fc1_x86_64 + * which is running on an Athlon 64 3000+ / Gigabyte GA-K8VT800M system with + * 1GB ram. The 64-bit version is about 4% faster than the 32-bit version, + * when decompressing mozilla-source-1.3.tar.gz. + * + * Mar-13-2003 -- Most of this is derived from inffast.S which is derived from + * the gcc -S output of zlib-1.2.0/inffast.c. Zlib-1.2.0 is in beta release at + * the moment. I have successfully compiled and tested this code with gcc2.96, + * gcc3.2, icc5.0, msvc6.0. It is very close to the speed of inffast.S + * compiled with gcc -DNO_MMX, but inffast.S is still faster on the P3 with MMX + * enabled. I will attempt to merge the MMX code into this version. Newer + * versions of this and inffast.S can be found at + * http://www.eetbeetee.com/zlib/ and http://www.charm.net/~christop/zlib/ + * + */ + +#include +#include "zutil.h" +#include "inftrees.h" +#include "inflate.h" +#include "inffast.h" + +/* Mark Adler's comments from inffast.c: */ + +/* + Decode literal, length, and distance codes and write out the resulting + literal and match bytes until either not enough input or output is + available, an end-of-block is encountered, or a data error is encountered. + When large enough input and output buffers are supplied to inflate(), for + example, a 16K input buffer and a 64K output buffer, more than 95% of the + inflate execution time is spent in this routine. + + Entry assumptions: + + state->mode == LEN + strm->avail_in >= 6 + strm->avail_out >= 258 + start >= strm->avail_out + state->bits < 8 + + On return, state->mode is one of: + + LEN -- ran out of enough output space or enough available input + TYPE -- reached end of block code, inflate() to interpret next block + BAD -- error in block data + + Notes: + + - The maximum input bits used by a length/distance pair is 15 bits for the + length code, 5 bits for the length extra, 15 bits for the distance code, + and 13 bits for the distance extra. This totals 48 bits, or six bytes. + Therefore if strm->avail_in >= 6, then there is enough input to avoid + checking for available input while decoding. 
+ + - The maximum bytes that a single length/distance pair can output is 258 + bytes, which is the maximum length that can be coded. inflate_fast() + requires strm->avail_out >= 258 for each loop to avoid checking for + output space. + */ + + + + typedef struct inffast_ar { +/* 64 32 x86 x86_64 */ +/* ar offset register */ +/* 0 0 */ void *esp; /* esp save */ +/* 8 4 */ void *ebp; /* ebp save */ +/* 16 8 */ unsigned char FAR *in; /* esi rsi local strm->next_in */ +/* 24 12 */ unsigned char FAR *last; /* r9 while in < last */ +/* 32 16 */ unsigned char FAR *out; /* edi rdi local strm->next_out */ +/* 40 20 */ unsigned char FAR *beg; /* inflate()'s init next_out */ +/* 48 24 */ unsigned char FAR *end; /* r10 while out < end */ +/* 56 28 */ unsigned char FAR *window;/* size of window, wsize!=0 */ +/* 64 32 */ code const FAR *lcode; /* ebp rbp local strm->lencode */ +/* 72 36 */ code const FAR *dcode; /* r11 local strm->distcode */ +/* 80 40 */ size_t /*unsigned long */hold; /* edx rdx local strm->hold */ +/* 88 44 */ unsigned bits; /* ebx rbx local strm->bits */ +/* 92 48 */ unsigned wsize; /* window size */ +/* 96 52 */ unsigned write; /* window write index */ +/*100 56 */ unsigned lmask; /* r12 mask for lcode */ +/*104 60 */ unsigned dmask; /* r13 mask for dcode */ +/*108 64 */ unsigned len; /* r14 match length */ +/*112 68 */ unsigned dist; /* r15 match distance */ +/*116 72 */ unsigned status; /* set when state chng*/ + } type_ar; +#ifdef ASMINF + +void inflate_fast(strm, start) +z_streamp strm; +unsigned start; /* inflate()'s starting value for strm->avail_out */ +{ + struct inflate_state FAR *state; + type_ar ar; + void inffas8664fnc(struct inffast_ar * par); + + + +#if (defined( __GNUC__ ) && defined( __amd64__ ) && ! defined( __i386 )) || (defined(_MSC_VER) && defined(_M_AMD64)) +#define PAD_AVAIL_IN 6 +#define PAD_AVAIL_OUT 258 +#else +#define PAD_AVAIL_IN 5 +#define PAD_AVAIL_OUT 257 +#endif + + /* copy state to local variables */ + state = (struct inflate_state FAR *)strm->state; + + ar.in = strm->next_in; + ar.last = ar.in + (strm->avail_in - PAD_AVAIL_IN); + ar.out = strm->next_out; + ar.beg = ar.out - (start - strm->avail_out); + ar.end = ar.out + (strm->avail_out - PAD_AVAIL_OUT); + ar.wsize = state->wsize; + ar.write = state->wnext; + ar.window = state->window; + ar.hold = state->hold; + ar.bits = state->bits; + ar.lcode = state->lencode; + ar.dcode = state->distcode; + ar.lmask = (1U << state->lenbits) - 1; + ar.dmask = (1U << state->distbits) - 1; + + /* decode literals and length/distances until end-of-block or not enough + input data or output space */ + + /* align in on 1/2 hold size boundary */ + while (((size_t)(void *)ar.in & (sizeof(ar.hold) / 2 - 1)) != 0) { + ar.hold += (unsigned long)*ar.in++ << ar.bits; + ar.bits += 8; + } + + inffas8664fnc(&ar); + + if (ar.status > 1) { + if (ar.status == 2) + strm->msg = "invalid literal/length code"; + else if (ar.status == 3) + strm->msg = "invalid distance code"; + else + strm->msg = "invalid distance too far back"; + state->mode = BAD; + } + else if ( ar.status == 1 ) { + state->mode = TYPE; + } + + /* return unused bytes (on entry, bits < 8, so in won't go too far back) */ + ar.len = ar.bits >> 3; + ar.in -= ar.len; + ar.bits -= ar.len << 3; + ar.hold &= (1U << ar.bits) - 1; + + /* update state and return */ + strm->next_in = ar.in; + strm->next_out = ar.out; + strm->avail_in = (unsigned)(ar.in < ar.last ? 
+ PAD_AVAIL_IN + (ar.last - ar.in) : + PAD_AVAIL_IN - (ar.in - ar.last)); + strm->avail_out = (unsigned)(ar.out < ar.end ? + PAD_AVAIL_OUT + (ar.end - ar.out) : + PAD_AVAIL_OUT - (ar.out - ar.end)); + state->hold = (unsigned long)ar.hold; + state->bits = ar.bits; + return; +} + +#endif ADDED compat/zlib/contrib/masmx64/inffasx64.asm Index: compat/zlib/contrib/masmx64/inffasx64.asm ================================================================== --- compat/zlib/contrib/masmx64/inffasx64.asm +++ compat/zlib/contrib/masmx64/inffasx64.asm @@ -0,0 +1,396 @@ +; inffasx64.asm is a hand tuned assembler version of inffast.c - fast decoding +; version for AMD64 on Windows using Microsoft C compiler +; +; inffasx64.asm is automatically convert from AMD64 portion of inffas86.c +; inffasx64.asm is called by inffas8664.c, which contain more info. + + +; to compile this file, I use option +; ml64.exe /Flinffasx64 /c /Zi inffasx64.asm +; with Microsoft Macro Assembler (x64) for AMD64 +; + +; This file compile with Microsoft Macro Assembler (x64) for AMD64 +; +; ml64.exe is given with Visual Studio 2005/2008/2010 and Windows WDK +; +; (you can get Windows WDK with ml64 for AMD64 from +; http://www.microsoft.com/whdc/Devtools/wdk/default.mspx for low price) +; + + +.code +inffas8664fnc PROC + +; see http://weblogs.asp.net/oldnewthing/archive/2004/01/14/58579.aspx and +; http://msdn.microsoft.com/library/en-us/kmarch/hh/kmarch/64bitAMD_8e951dd2-ee77-4728-8702-55ce4b5dd24a.xml.asp +; +; All registers must be preserved across the call, except for +; rax, rcx, rdx, r8, r-9, r10, and r11, which are scratch. + + + mov [rsp-8],rsi + mov [rsp-16],rdi + mov [rsp-24],r12 + mov [rsp-32],r13 + mov [rsp-40],r14 + mov [rsp-48],r15 + mov [rsp-56],rbx + + mov rax,rcx + + mov [rax+8], rbp ; /* save regs rbp and rsp */ + mov [rax], rsp + + mov rsp, rax ; /* make rsp point to &ar */ + + mov rsi, [rsp+16] ; /* rsi = in */ + mov rdi, [rsp+32] ; /* rdi = out */ + mov r9, [rsp+24] ; /* r9 = last */ + mov r10, [rsp+48] ; /* r10 = end */ + mov rbp, [rsp+64] ; /* rbp = lcode */ + mov r11, [rsp+72] ; /* r11 = dcode */ + mov rdx, [rsp+80] ; /* rdx = hold */ + mov ebx, [rsp+88] ; /* ebx = bits */ + mov r12d, [rsp+100] ; /* r12d = lmask */ + mov r13d, [rsp+104] ; /* r13d = dmask */ + ; /* r14d = len */ + ; /* r15d = dist */ + + + cld + cmp r10, rdi + je L_one_time ; /* if only one decode left */ + cmp r9, rsi + + jne L_do_loop + + +L_one_time: + mov r8, r12 ; /* r8 = lmask */ + cmp bl, 32 + ja L_get_length_code_one_time + + lodsd ; /* eax = *(uint *)in++ */ + mov cl, bl ; /* cl = bits, needs it for shifting */ + add bl, 32 ; /* bits += 32 */ + shl rax, cl + or rdx, rax ; /* hold |= *((uint *)in)++ << bits */ + jmp L_get_length_code_one_time + +ALIGN 4 +L_while_test: + cmp r10, rdi + jbe L_break_loop + cmp r9, rsi + jbe L_break_loop + +L_do_loop: + mov r8, r12 ; /* r8 = lmask */ + cmp bl, 32 + ja L_get_length_code ; /* if (32 < bits) */ + + lodsd ; /* eax = *(uint *)in++ */ + mov cl, bl ; /* cl = bits, needs it for shifting */ + add bl, 32 ; /* bits += 32 */ + shl rax, cl + or rdx, rax ; /* hold |= *((uint *)in)++ << bits */ + +L_get_length_code: + and r8, rdx ; /* r8 &= hold */ + mov eax, [rbp+r8*4] ; /* eax = lcode[hold & lmask] */ + + mov cl, ah ; /* cl = this.bits */ + sub bl, ah ; /* bits -= this.bits */ + shr rdx, cl ; /* hold >>= this.bits */ + + test al, al + jnz L_test_for_length_base ; /* if (op != 0) 45.7% */ + + mov r8, r12 ; /* r8 = lmask */ + shr eax, 16 ; /* output this.val char */ + stosb + +L_get_length_code_one_time: 
+ and r8, rdx ; /* r8 &= hold */ + mov eax, [rbp+r8*4] ; /* eax = lcode[hold & lmask] */ + +L_dolen: + mov cl, ah ; /* cl = this.bits */ + sub bl, ah ; /* bits -= this.bits */ + shr rdx, cl ; /* hold >>= this.bits */ + + test al, al + jnz L_test_for_length_base ; /* if (op != 0) 45.7% */ + + shr eax, 16 ; /* output this.val char */ + stosb + jmp L_while_test + +ALIGN 4 +L_test_for_length_base: + mov r14d, eax ; /* len = this */ + shr r14d, 16 ; /* len = this.val */ + mov cl, al + + test al, 16 + jz L_test_for_second_level_length ; /* if ((op & 16) == 0) 8% */ + and cl, 15 ; /* op &= 15 */ + jz L_decode_distance ; /* if (!op) */ + +L_add_bits_to_len: + sub bl, cl + xor eax, eax + inc eax + shl eax, cl + dec eax + and eax, edx ; /* eax &= hold */ + shr rdx, cl + add r14d, eax ; /* len += hold & mask[op] */ + +L_decode_distance: + mov r8, r13 ; /* r8 = dmask */ + cmp bl, 32 + ja L_get_distance_code ; /* if (32 < bits) */ + + lodsd ; /* eax = *(uint *)in++ */ + mov cl, bl ; /* cl = bits, needs it for shifting */ + add bl, 32 ; /* bits += 32 */ + shl rax, cl + or rdx, rax ; /* hold |= *((uint *)in)++ << bits */ + +L_get_distance_code: + and r8, rdx ; /* r8 &= hold */ + mov eax, [r11+r8*4] ; /* eax = dcode[hold & dmask] */ + +L_dodist: + mov r15d, eax ; /* dist = this */ + shr r15d, 16 ; /* dist = this.val */ + mov cl, ah + sub bl, ah ; /* bits -= this.bits */ + shr rdx, cl ; /* hold >>= this.bits */ + mov cl, al ; /* cl = this.op */ + + test al, 16 ; /* if ((op & 16) == 0) */ + jz L_test_for_second_level_dist + and cl, 15 ; /* op &= 15 */ + jz L_check_dist_one + +L_add_bits_to_dist: + sub bl, cl + xor eax, eax + inc eax + shl eax, cl + dec eax ; /* (1 << op) - 1 */ + and eax, edx ; /* eax &= hold */ + shr rdx, cl + add r15d, eax ; /* dist += hold & ((1 << op) - 1) */ + +L_check_window: + mov r8, rsi ; /* save in so from can use it's reg */ + mov rax, rdi + sub rax, [rsp+40] ; /* nbytes = out - beg */ + + cmp eax, r15d + jb L_clip_window ; /* if (dist > nbytes) 4.2% */ + + mov ecx, r14d ; /* ecx = len */ + mov rsi, rdi + sub rsi, r15 ; /* from = out - dist */ + + sar ecx, 1 + jnc L_copy_two ; /* if len % 2 == 0 */ + + rep movsw + mov al, [rsi] + mov [rdi], al + inc rdi + + mov rsi, r8 ; /* move in back to %rsi, toss from */ + jmp L_while_test + +L_copy_two: + rep movsw + mov rsi, r8 ; /* move in back to %rsi, toss from */ + jmp L_while_test + +ALIGN 4 +L_check_dist_one: + cmp r15d, 1 ; /* if dist 1, is a memset */ + jne L_check_window + cmp [rsp+40], rdi ; /* if out == beg, outside window */ + je L_check_window + + mov ecx, r14d ; /* ecx = len */ + mov al, [rdi-1] + mov ah, al + + sar ecx, 1 + jnc L_set_two + mov [rdi], al + inc rdi + +L_set_two: + rep stosw + jmp L_while_test + +ALIGN 4 +L_test_for_second_level_length: + test al, 64 + jnz L_test_for_end_of_block ; /* if ((op & 64) != 0) */ + + xor eax, eax + inc eax + shl eax, cl + dec eax + and eax, edx ; /* eax &= hold */ + add eax, r14d ; /* eax += len */ + mov eax, [rbp+rax*4] ; /* eax = lcode[val+(hold&mask[op])]*/ + jmp L_dolen + +ALIGN 4 +L_test_for_second_level_dist: + test al, 64 + jnz L_invalid_distance_code ; /* if ((op & 64) != 0) */ + + xor eax, eax + inc eax + shl eax, cl + dec eax + and eax, edx ; /* eax &= hold */ + add eax, r15d ; /* eax += dist */ + mov eax, [r11+rax*4] ; /* eax = dcode[val+(hold&mask[op])]*/ + jmp L_dodist + +ALIGN 4 +L_clip_window: + mov ecx, eax ; /* ecx = nbytes */ + mov eax, [rsp+92] ; /* eax = wsize, prepare for dist cmp */ + neg ecx ; /* nbytes = -nbytes */ + + cmp eax, r15d + jb 
L_invalid_distance_too_far ; /* if (dist > wsize) */ + + add ecx, r15d ; /* nbytes = dist - nbytes */ + cmp dword ptr [rsp+96], 0 + jne L_wrap_around_window ; /* if (write != 0) */ + + mov rsi, [rsp+56] ; /* from = window */ + sub eax, ecx ; /* eax -= nbytes */ + add rsi, rax ; /* from += wsize - nbytes */ + + mov eax, r14d ; /* eax = len */ + cmp r14d, ecx + jbe L_do_copy ; /* if (nbytes >= len) */ + + sub eax, ecx ; /* eax -= nbytes */ + rep movsb + mov rsi, rdi + sub rsi, r15 ; /* from = &out[ -dist ] */ + jmp L_do_copy + +ALIGN 4 +L_wrap_around_window: + mov eax, [rsp+96] ; /* eax = write */ + cmp ecx, eax + jbe L_contiguous_in_window ; /* if (write >= nbytes) */ + + mov esi, [rsp+92] ; /* from = wsize */ + add rsi, [rsp+56] ; /* from += window */ + add rsi, rax ; /* from += write */ + sub rsi, rcx ; /* from -= nbytes */ + sub ecx, eax ; /* nbytes -= write */ + + mov eax, r14d ; /* eax = len */ + cmp eax, ecx + jbe L_do_copy ; /* if (nbytes >= len) */ + + sub eax, ecx ; /* len -= nbytes */ + rep movsb + mov rsi, [rsp+56] ; /* from = window */ + mov ecx, [rsp+96] ; /* nbytes = write */ + cmp eax, ecx + jbe L_do_copy ; /* if (nbytes >= len) */ + + sub eax, ecx ; /* len -= nbytes */ + rep movsb + mov rsi, rdi + sub rsi, r15 ; /* from = out - dist */ + jmp L_do_copy + +ALIGN 4 +L_contiguous_in_window: + mov rsi, [rsp+56] ; /* rsi = window */ + add rsi, rax + sub rsi, rcx ; /* from += write - nbytes */ + + mov eax, r14d ; /* eax = len */ + cmp eax, ecx + jbe L_do_copy ; /* if (nbytes >= len) */ + + sub eax, ecx ; /* len -= nbytes */ + rep movsb + mov rsi, rdi + sub rsi, r15 ; /* from = out - dist */ + jmp L_do_copy ; /* if (nbytes >= len) */ + +ALIGN 4 +L_do_copy: + mov ecx, eax ; /* ecx = len */ + rep movsb + + mov rsi, r8 ; /* move in back to %esi, toss from */ + jmp L_while_test + +L_test_for_end_of_block: + test al, 32 + jz L_invalid_literal_length_code + mov dword ptr [rsp+116], 1 + jmp L_break_loop_with_status + +L_invalid_literal_length_code: + mov dword ptr [rsp+116], 2 + jmp L_break_loop_with_status + +L_invalid_distance_code: + mov dword ptr [rsp+116], 3 + jmp L_break_loop_with_status + +L_invalid_distance_too_far: + mov dword ptr [rsp+116], 4 + jmp L_break_loop_with_status + +L_break_loop: + mov dword ptr [rsp+116], 0 + +L_break_loop_with_status: +; /* put in, out, bits, and hold back into ar and pop esp */ + mov [rsp+16], rsi ; /* in */ + mov [rsp+32], rdi ; /* out */ + mov [rsp+88], ebx ; /* bits */ + mov [rsp+80], rdx ; /* hold */ + + mov rax, [rsp] ; /* restore rbp and rsp */ + mov rbp, [rsp+8] + mov rsp, rax + + + + mov rsi,[rsp-8] + mov rdi,[rsp-16] + mov r12,[rsp-24] + mov r13,[rsp-32] + mov r14,[rsp-40] + mov r15,[rsp-48] + mov rbx,[rsp-56] + + ret 0 +; : +; : "m" (ar) +; : "memory", "%rax", "%rbx", "%rcx", "%rdx", "%rsi", "%rdi", +; "%r8", "%r9", "%r10", "%r11", "%r12", "%r13", "%r14", "%r15" +; ); + +inffas8664fnc ENDP +;_TEXT ENDS +END ADDED compat/zlib/contrib/masmx64/readme.txt Index: compat/zlib/contrib/masmx64/readme.txt ================================================================== --- compat/zlib/contrib/masmx64/readme.txt +++ compat/zlib/contrib/masmx64/readme.txt @@ -0,0 +1,31 @@ +Summary +------- +This directory contains ASM implementations of the functions +longest_match() and inflate_fast(), for 64 bits x86 (both AMD64 and Intel EM64t), +for use with Microsoft Macro Assembler (x64) for AMD64 and Microsoft C++ 64 bits. 
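Illustrative build sketch (not part of the original readme): pulling the use and
build notes below together, a standalone zlib build that picks up these x64
assembly objects could look like the following. The first ml64 line is the one
given in the header of inffasx64.asm; the gvmat64 line and the nmake variables
are assumptions patterned on contrib/masmx86/readme.txt, so adjust them to your
environment.

   ml64.exe /Flinffasx64 /c /Zi inffasx64.asm
   ml64.exe /Flgvmat64 /c /Zi gvmat64.asm
   nmake -f win32/Makefile.msc LOC="-DASMV -DASMINF" OBJA="gvmat64.obj inffasx64.obj inffas8664.obj"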
+ +gvmat64.asm is written by Gilles Vollant (2005), by using Brian Raiter 686/32 bits + assembly optimized version from Jean-loup Gailly original longest_match function + +inffasx64.asm and inffas8664.c were written by Chris Anderson, by optimizing + original function from Mark Adler + +Use instructions +---------------- +Assemble the .asm files using MASM and put the object files into the zlib source +directory. You can also get object files here: + + http://www.winimage.com/zLibDll/zlib124_masm_obj.zip + +define ASMV and ASMINF in your project. Include inffas8664.c in your source tree, +and inffasx64.obj and gvmat64.obj as object to link. + + +Build instructions +------------------ +run bld_64.bat with Microsoft Macro Assembler (x64) for AMD64 (ml64.exe) + +ml64.exe is given with Visual Studio 2005, Windows 2003 server DDK + +You can get Windows 2003 server DDK with ml64 and cl for AMD64 from + http://www.microsoft.com/whdc/devtools/ddk/default.mspx for low price) ADDED compat/zlib/contrib/masmx86/bld_ml32.bat Index: compat/zlib/contrib/masmx86/bld_ml32.bat ================================================================== --- compat/zlib/contrib/masmx86/bld_ml32.bat +++ compat/zlib/contrib/masmx86/bld_ml32.bat @@ -0,0 +1,2 @@ +ml /coff /Zi /c /Flmatch686.lst match686.asm +ml /coff /Zi /c /Flinffas32.lst inffas32.asm ADDED compat/zlib/contrib/masmx86/inffas32.asm Index: compat/zlib/contrib/masmx86/inffas32.asm ================================================================== --- compat/zlib/contrib/masmx86/inffas32.asm +++ compat/zlib/contrib/masmx86/inffas32.asm @@ -0,0 +1,1080 @@ +;/* inffas32.asm is a hand tuned assembler version of inffast.c -- fast decoding +; * +; * inffas32.asm is derivated from inffas86.c, with translation of assembly code +; * +; * Copyright (C) 1995-2003 Mark Adler +; * For conditions of distribution and use, see copyright notice in zlib.h +; * +; * Copyright (C) 2003 Chris Anderson +; * Please use the copyright conditions above. +; * +; * Mar-13-2003 -- Most of this is derived from inffast.S which is derived from +; * the gcc -S output of zlib-1.2.0/inffast.c. Zlib-1.2.0 is in beta release at +; * the moment. I have successfully compiled and tested this code with gcc2.96, +; * gcc3.2, icc5.0, msvc6.0. It is very close to the speed of inffast.S +; * compiled with gcc -DNO_MMX, but inffast.S is still faster on the P3 with MMX +; * enabled. I will attempt to merge the MMX code into this version. 
Newer +; * versions of this and inffast.S can be found at +; * http://www.eetbeetee.com/zlib/ and http://www.charm.net/~christop/zlib/ +; * +; * 2005 : modification by Gilles Vollant +; */ +; For Visual C++ 4.x and higher and ML 6.x and higher +; ml.exe is in directory \MASM611C of Win95 DDK +; ml.exe is also distributed in http://www.masm32.com/masmdl.htm +; and in VC++2003 toolkit at http://msdn.microsoft.com/visualc/vctoolkit2003/ +; +; +; compile with command line option +; ml /coff /Zi /c /Flinffas32.lst inffas32.asm + +; if you define NO_GZIP (see inflate.h), compile with +; ml /coff /Zi /c /Flinffas32.lst /DNO_GUNZIP inffas32.asm + + +; zlib122sup is 0 fort zlib 1.2.2.1 and lower +; zlib122sup is 8 fort zlib 1.2.2.2 and more (with addition of dmax and head +; in inflate_state in inflate.h) +zlib1222sup equ 8 + + +IFDEF GUNZIP + INFLATE_MODE_TYPE equ 11 + INFLATE_MODE_BAD equ 26 +ELSE + IFNDEF NO_GUNZIP + INFLATE_MODE_TYPE equ 11 + INFLATE_MODE_BAD equ 26 + ELSE + INFLATE_MODE_TYPE equ 3 + INFLATE_MODE_BAD equ 17 + ENDIF +ENDIF + + +; 75 "inffast.S" +;FILE "inffast.S" + +;;;GLOBAL _inflate_fast + +;;;SECTION .text + + + + .586p + .mmx + + name inflate_fast_x86 + .MODEL FLAT + +_DATA segment +inflate_fast_use_mmx: + dd 1 + + +_TEXT segment + + + +ALIGN 4 + db 'Fast decoding Code from Chris Anderson' + db 0 + +ALIGN 4 +invalid_literal_length_code_msg: + db 'invalid literal/length code' + db 0 + +ALIGN 4 +invalid_distance_code_msg: + db 'invalid distance code' + db 0 + +ALIGN 4 +invalid_distance_too_far_msg: + db 'invalid distance too far back' + db 0 + + +ALIGN 4 +inflate_fast_mask: +dd 0 +dd 1 +dd 3 +dd 7 +dd 15 +dd 31 +dd 63 +dd 127 +dd 255 +dd 511 +dd 1023 +dd 2047 +dd 4095 +dd 8191 +dd 16383 +dd 32767 +dd 65535 +dd 131071 +dd 262143 +dd 524287 +dd 1048575 +dd 2097151 +dd 4194303 +dd 8388607 +dd 16777215 +dd 33554431 +dd 67108863 +dd 134217727 +dd 268435455 +dd 536870911 +dd 1073741823 +dd 2147483647 +dd 4294967295 + + +mode_state equ 0 ;/* state->mode */ +wsize_state equ (32+zlib1222sup) ;/* state->wsize */ +write_state equ (36+4+zlib1222sup) ;/* state->write */ +window_state equ (40+4+zlib1222sup) ;/* state->window */ +hold_state equ (44+4+zlib1222sup) ;/* state->hold */ +bits_state equ (48+4+zlib1222sup) ;/* state->bits */ +lencode_state equ (64+4+zlib1222sup) ;/* state->lencode */ +distcode_state equ (68+4+zlib1222sup) ;/* state->distcode */ +lenbits_state equ (72+4+zlib1222sup) ;/* state->lenbits */ +distbits_state equ (76+4+zlib1222sup) ;/* state->distbits */ + + +;;SECTION .text +; 205 "inffast.S" +;GLOBAL inflate_fast_use_mmx + +;SECTION .data + + +; GLOBAL inflate_fast_use_mmx:object +;.size inflate_fast_use_mmx, 4 +; 226 "inffast.S" +;SECTION .text + +ALIGN 4 +_inflate_fast proc near +.FPO (16, 4, 0, 0, 1, 0) + push edi + push esi + push ebp + push ebx + pushfd + sub esp,64 + cld + + + + + mov esi, [esp+88] + mov edi, [esi+28] + + + + + + + + mov edx, [esi+4] + mov eax, [esi+0] + + add edx,eax + sub edx,11 + + mov [esp+44],eax + mov [esp+20],edx + + mov ebp, [esp+92] + mov ecx, [esi+16] + mov ebx, [esi+12] + + sub ebp,ecx + neg ebp + add ebp,ebx + + sub ecx,257 + add ecx,ebx + + mov [esp+60],ebx + mov [esp+40],ebp + mov [esp+16],ecx +; 285 "inffast.S" + mov eax, [edi+lencode_state] + mov ecx, [edi+distcode_state] + + mov [esp+8],eax + mov [esp+12],ecx + + mov eax,1 + mov ecx, [edi+lenbits_state] + shl eax,cl + dec eax + mov [esp+0],eax + + mov eax,1 + mov ecx, [edi+distbits_state] + shl eax,cl + dec eax + mov [esp+4],eax + + mov eax, [edi+wsize_state] + mov ecx, 
[edi+write_state] + mov edx, [edi+window_state] + + mov [esp+52],eax + mov [esp+48],ecx + mov [esp+56],edx + + mov ebp, [edi+hold_state] + mov ebx, [edi+bits_state] +; 321 "inffast.S" + mov esi, [esp+44] + mov ecx, [esp+20] + cmp ecx,esi + ja L_align_long + + add ecx,11 + sub ecx,esi + mov eax,12 + sub eax,ecx + lea edi, [esp+28] + rep movsb + mov ecx,eax + xor eax,eax + rep stosb + lea esi, [esp+28] + mov [esp+20],esi + jmp L_is_aligned + + +L_align_long: + test esi,3 + jz L_is_aligned + xor eax,eax + mov al, [esi] + inc esi + mov ecx,ebx + add ebx,8 + shl eax,cl + or ebp,eax + jmp L_align_long + +L_is_aligned: + mov edi, [esp+60] +; 366 "inffast.S" +L_check_mmx: + cmp dword ptr [inflate_fast_use_mmx],2 + je L_init_mmx + ja L_do_loop + + push eax + push ebx + push ecx + push edx + pushfd + mov eax, [esp] + xor dword ptr [esp],0200000h + + + + + popfd + pushfd + pop edx + xor edx,eax + jz L_dont_use_mmx + xor eax,eax + cpuid + cmp ebx,0756e6547h + jne L_dont_use_mmx + cmp ecx,06c65746eh + jne L_dont_use_mmx + cmp edx,049656e69h + jne L_dont_use_mmx + mov eax,1 + cpuid + shr eax,8 + and eax,15 + cmp eax,6 + jne L_dont_use_mmx + test edx,0800000h + jnz L_use_mmx + jmp L_dont_use_mmx +L_use_mmx: + mov dword ptr [inflate_fast_use_mmx],2 + jmp L_check_mmx_pop +L_dont_use_mmx: + mov dword ptr [inflate_fast_use_mmx],3 +L_check_mmx_pop: + pop edx + pop ecx + pop ebx + pop eax + jmp L_check_mmx +; 426 "inffast.S" +ALIGN 4 +L_do_loop: +; 437 "inffast.S" + cmp bl,15 + ja L_get_length_code + + xor eax,eax + lodsw + mov cl,bl + add bl,16 + shl eax,cl + or ebp,eax + +L_get_length_code: + mov edx, [esp+0] + mov ecx, [esp+8] + and edx,ebp + mov eax, [ecx+edx*4] + +L_dolen: + + + + + + + mov cl,ah + sub bl,ah + shr ebp,cl + + + + + + + test al,al + jnz L_test_for_length_base + + shr eax,16 + stosb + +L_while_test: + + + cmp [esp+16],edi + jbe L_break_loop + + cmp [esp+20],esi + ja L_do_loop + jmp L_break_loop + +L_test_for_length_base: +; 502 "inffast.S" + mov edx,eax + shr edx,16 + mov cl,al + + test al,16 + jz L_test_for_second_level_length + and cl,15 + jz L_save_len + cmp bl,cl + jae L_add_bits_to_len + + mov ch,cl + xor eax,eax + lodsw + mov cl,bl + add bl,16 + shl eax,cl + or ebp,eax + mov cl,ch + +L_add_bits_to_len: + mov eax,1 + shl eax,cl + dec eax + sub bl,cl + and eax,ebp + shr ebp,cl + add edx,eax + +L_save_len: + mov [esp+24],edx + + +L_decode_distance: +; 549 "inffast.S" + cmp bl,15 + ja L_get_distance_code + + xor eax,eax + lodsw + mov cl,bl + add bl,16 + shl eax,cl + or ebp,eax + +L_get_distance_code: + mov edx, [esp+4] + mov ecx, [esp+12] + and edx,ebp + mov eax, [ecx+edx*4] + + +L_dodist: + mov edx,eax + shr edx,16 + mov cl,ah + sub bl,ah + shr ebp,cl +; 584 "inffast.S" + mov cl,al + + test al,16 + jz L_test_for_second_level_dist + and cl,15 + jz L_check_dist_one + cmp bl,cl + jae L_add_bits_to_dist + + mov ch,cl + xor eax,eax + lodsw + mov cl,bl + add bl,16 + shl eax,cl + or ebp,eax + mov cl,ch + +L_add_bits_to_dist: + mov eax,1 + shl eax,cl + dec eax + sub bl,cl + and eax,ebp + shr ebp,cl + add edx,eax + jmp L_check_window + +L_check_window: +; 625 "inffast.S" + mov [esp+44],esi + mov eax,edi + sub eax, [esp+40] + + cmp eax,edx + jb L_clip_window + + mov ecx, [esp+24] + mov esi,edi + sub esi,edx + + sub ecx,3 + mov al, [esi] + mov [edi],al + mov al, [esi+1] + mov dl, [esi+2] + add esi,3 + mov [edi+1],al + mov [edi+2],dl + add edi,3 + rep movsb + + mov esi, [esp+44] + jmp L_while_test + +ALIGN 4 +L_check_dist_one: + cmp edx,1 + jne L_check_window + cmp [esp+40],edi + je L_check_window + 
+ dec edi + mov ecx, [esp+24] + mov al, [edi] + sub ecx,3 + + mov [edi+1],al + mov [edi+2],al + mov [edi+3],al + add edi,4 + rep stosb + + jmp L_while_test + +ALIGN 4 +L_test_for_second_level_length: + + + + + test al,64 + jnz L_test_for_end_of_block + + mov eax,1 + shl eax,cl + dec eax + and eax,ebp + add eax,edx + mov edx, [esp+8] + mov eax, [edx+eax*4] + jmp L_dolen + +ALIGN 4 +L_test_for_second_level_dist: + + + + + test al,64 + jnz L_invalid_distance_code + + mov eax,1 + shl eax,cl + dec eax + and eax,ebp + add eax,edx + mov edx, [esp+12] + mov eax, [edx+eax*4] + jmp L_dodist + +ALIGN 4 +L_clip_window: +; 721 "inffast.S" + mov ecx,eax + mov eax, [esp+52] + neg ecx + mov esi, [esp+56] + + cmp eax,edx + jb L_invalid_distance_too_far + + add ecx,edx + cmp dword ptr [esp+48],0 + jne L_wrap_around_window + + sub eax,ecx + add esi,eax +; 749 "inffast.S" + mov eax, [esp+24] + cmp eax,ecx + jbe L_do_copy1 + + sub eax,ecx + rep movsb + mov esi,edi + sub esi,edx + jmp L_do_copy1 + + cmp eax,ecx + jbe L_do_copy1 + + sub eax,ecx + rep movsb + mov esi,edi + sub esi,edx + jmp L_do_copy1 + +L_wrap_around_window: +; 793 "inffast.S" + mov eax, [esp+48] + cmp ecx,eax + jbe L_contiguous_in_window + + add esi, [esp+52] + add esi,eax + sub esi,ecx + sub ecx,eax + + + mov eax, [esp+24] + cmp eax,ecx + jbe L_do_copy1 + + sub eax,ecx + rep movsb + mov esi, [esp+56] + mov ecx, [esp+48] + cmp eax,ecx + jbe L_do_copy1 + + sub eax,ecx + rep movsb + mov esi,edi + sub esi,edx + jmp L_do_copy1 + +L_contiguous_in_window: +; 836 "inffast.S" + add esi,eax + sub esi,ecx + + + mov eax, [esp+24] + cmp eax,ecx + jbe L_do_copy1 + + sub eax,ecx + rep movsb + mov esi,edi + sub esi,edx + +L_do_copy1: +; 862 "inffast.S" + mov ecx,eax + rep movsb + + mov esi, [esp+44] + jmp L_while_test +; 878 "inffast.S" +ALIGN 4 +L_init_mmx: + emms + + + + + + movd mm0,ebp + mov ebp,ebx +; 896 "inffast.S" + movd mm4,dword ptr [esp+0] + movq mm3,mm4 + movd mm5,dword ptr [esp+4] + movq mm2,mm5 + pxor mm1,mm1 + mov ebx, [esp+8] + jmp L_do_loop_mmx + +ALIGN 4 +L_do_loop_mmx: + psrlq mm0,mm1 + + cmp ebp,32 + ja L_get_length_code_mmx + + movd mm6,ebp + movd mm7,dword ptr [esi] + add esi,4 + psllq mm7,mm6 + add ebp,32 + por mm0,mm7 + +L_get_length_code_mmx: + pand mm4,mm0 + movd eax,mm4 + movq mm4,mm3 + mov eax, [ebx+eax*4] + +L_dolen_mmx: + movzx ecx,ah + movd mm1,ecx + sub ebp,ecx + + test al,al + jnz L_test_for_length_base_mmx + + shr eax,16 + stosb + +L_while_test_mmx: + + + cmp [esp+16],edi + jbe L_break_loop + + cmp [esp+20],esi + ja L_do_loop_mmx + jmp L_break_loop + +L_test_for_length_base_mmx: + + mov edx,eax + shr edx,16 + + test al,16 + jz L_test_for_second_level_length_mmx + and eax,15 + jz L_decode_distance_mmx + + psrlq mm0,mm1 + movd mm1,eax + movd ecx,mm0 + sub ebp,eax + and ecx, [inflate_fast_mask+eax*4] + add edx,ecx + +L_decode_distance_mmx: + psrlq mm0,mm1 + + cmp ebp,32 + ja L_get_dist_code_mmx + + movd mm6,ebp + movd mm7,dword ptr [esi] + add esi,4 + psllq mm7,mm6 + add ebp,32 + por mm0,mm7 + +L_get_dist_code_mmx: + mov ebx, [esp+12] + pand mm5,mm0 + movd eax,mm5 + movq mm5,mm2 + mov eax, [ebx+eax*4] + +L_dodist_mmx: + + movzx ecx,ah + mov ebx,eax + shr ebx,16 + sub ebp,ecx + movd mm1,ecx + + test al,16 + jz L_test_for_second_level_dist_mmx + and eax,15 + jz L_check_dist_one_mmx + +L_add_bits_to_dist_mmx: + psrlq mm0,mm1 + movd mm1,eax + movd ecx,mm0 + sub ebp,eax + and ecx, [inflate_fast_mask+eax*4] + add ebx,ecx + +L_check_window_mmx: + mov [esp+44],esi + mov eax,edi + sub eax, [esp+40] + + cmp eax,ebx + jb L_clip_window_mmx 
+ + mov ecx,edx + mov esi,edi + sub esi,ebx + + sub ecx,3 + mov al, [esi] + mov [edi],al + mov al, [esi+1] + mov dl, [esi+2] + add esi,3 + mov [edi+1],al + mov [edi+2],dl + add edi,3 + rep movsb + + mov esi, [esp+44] + mov ebx, [esp+8] + jmp L_while_test_mmx + +ALIGN 4 +L_check_dist_one_mmx: + cmp ebx,1 + jne L_check_window_mmx + cmp [esp+40],edi + je L_check_window_mmx + + dec edi + mov ecx,edx + mov al, [edi] + sub ecx,3 + + mov [edi+1],al + mov [edi+2],al + mov [edi+3],al + add edi,4 + rep stosb + + mov ebx, [esp+8] + jmp L_while_test_mmx + +ALIGN 4 +L_test_for_second_level_length_mmx: + test al,64 + jnz L_test_for_end_of_block + + and eax,15 + psrlq mm0,mm1 + movd ecx,mm0 + and ecx, [inflate_fast_mask+eax*4] + add ecx,edx + mov eax, [ebx+ecx*4] + jmp L_dolen_mmx + +ALIGN 4 +L_test_for_second_level_dist_mmx: + test al,64 + jnz L_invalid_distance_code + + and eax,15 + psrlq mm0,mm1 + movd ecx,mm0 + and ecx, [inflate_fast_mask+eax*4] + mov eax, [esp+12] + add ecx,ebx + mov eax, [eax+ecx*4] + jmp L_dodist_mmx + +ALIGN 4 +L_clip_window_mmx: + + mov ecx,eax + mov eax, [esp+52] + neg ecx + mov esi, [esp+56] + + cmp eax,ebx + jb L_invalid_distance_too_far + + add ecx,ebx + cmp dword ptr [esp+48],0 + jne L_wrap_around_window_mmx + + sub eax,ecx + add esi,eax + + cmp edx,ecx + jbe L_do_copy1_mmx + + sub edx,ecx + rep movsb + mov esi,edi + sub esi,ebx + jmp L_do_copy1_mmx + + cmp edx,ecx + jbe L_do_copy1_mmx + + sub edx,ecx + rep movsb + mov esi,edi + sub esi,ebx + jmp L_do_copy1_mmx + +L_wrap_around_window_mmx: + + mov eax, [esp+48] + cmp ecx,eax + jbe L_contiguous_in_window_mmx + + add esi, [esp+52] + add esi,eax + sub esi,ecx + sub ecx,eax + + + cmp edx,ecx + jbe L_do_copy1_mmx + + sub edx,ecx + rep movsb + mov esi, [esp+56] + mov ecx, [esp+48] + cmp edx,ecx + jbe L_do_copy1_mmx + + sub edx,ecx + rep movsb + mov esi,edi + sub esi,ebx + jmp L_do_copy1_mmx + +L_contiguous_in_window_mmx: + + add esi,eax + sub esi,ecx + + + cmp edx,ecx + jbe L_do_copy1_mmx + + sub edx,ecx + rep movsb + mov esi,edi + sub esi,ebx + +L_do_copy1_mmx: + + + mov ecx,edx + rep movsb + + mov esi, [esp+44] + mov ebx, [esp+8] + jmp L_while_test_mmx +; 1174 "inffast.S" +L_invalid_distance_code: + + + + + + mov ecx, invalid_distance_code_msg + mov edx,INFLATE_MODE_BAD + jmp L_update_stream_state + +L_test_for_end_of_block: + + + + + + test al,32 + jz L_invalid_literal_length_code + + mov ecx,0 + mov edx,INFLATE_MODE_TYPE + jmp L_update_stream_state + +L_invalid_literal_length_code: + + + + + + mov ecx, invalid_literal_length_code_msg + mov edx,INFLATE_MODE_BAD + jmp L_update_stream_state + +L_invalid_distance_too_far: + + + + mov esi, [esp+44] + mov ecx, invalid_distance_too_far_msg + mov edx,INFLATE_MODE_BAD + jmp L_update_stream_state + +L_update_stream_state: + + mov eax, [esp+88] + test ecx,ecx + jz L_skip_msg + mov [eax+24],ecx +L_skip_msg: + mov eax, [eax+28] + mov [eax+mode_state],edx + jmp L_break_loop + +ALIGN 4 +L_break_loop: +; 1243 "inffast.S" + cmp dword ptr [inflate_fast_use_mmx],2 + jne L_update_next_in + + + + mov ebx,ebp + +L_update_next_in: +; 1266 "inffast.S" + mov eax, [esp+88] + mov ecx,ebx + mov edx, [eax+28] + shr ecx,3 + sub esi,ecx + shl ecx,3 + sub ebx,ecx + mov [eax+12],edi + mov [edx+bits_state],ebx + mov ecx,ebx + + lea ebx, [esp+28] + cmp [esp+20],ebx + jne L_buf_not_used + + sub esi,ebx + mov ebx, [eax+0] + mov [esp+20],ebx + add esi,ebx + mov ebx, [eax+4] + sub ebx,11 + add [esp+20],ebx + +L_buf_not_used: + mov [eax+0],esi + + mov ebx,1 + shl ebx,cl + dec ebx + + + + + + cmp dword ptr 
[inflate_fast_use_mmx],2 + jne L_update_hold + + + + psrlq mm0,mm1 + movd ebp,mm0 + + emms + +L_update_hold: + + + + and ebp,ebx + mov [edx+hold_state],ebp + + + + + mov ebx, [esp+20] + cmp ebx,esi + jbe L_last_is_smaller + + sub ebx,esi + add ebx,11 + mov [eax+4],ebx + jmp L_fixup_out +L_last_is_smaller: + sub esi,ebx + neg esi + add esi,11 + mov [eax+4],esi + + + + +L_fixup_out: + + mov ebx, [esp+16] + cmp ebx,edi + jbe L_end_is_smaller + + sub ebx,edi + add ebx,257 + mov [eax+16],ebx + jmp L_done +L_end_is_smaller: + sub edi,ebx + neg edi + add edi,257 + mov [eax+16],edi + + + + + +L_done: + add esp,64 + popfd + pop ebx + pop ebp + pop esi + pop edi + ret +_inflate_fast endp + +_TEXT ends +end ADDED compat/zlib/contrib/masmx86/match686.asm Index: compat/zlib/contrib/masmx86/match686.asm ================================================================== --- compat/zlib/contrib/masmx86/match686.asm +++ compat/zlib/contrib/masmx86/match686.asm @@ -0,0 +1,479 @@ +; match686.asm -- Asm portion of the optimized longest_match for 32 bits x86 +; Copyright (C) 1995-1996 Jean-loup Gailly, Brian Raiter and Gilles Vollant. +; File written by Gilles Vollant, by converting match686.S from Brian Raiter +; for MASM. This is as assembly version of longest_match +; from Jean-loup Gailly in deflate.c +; +; http://www.zlib.net +; http://www.winimage.com/zLibDll +; http://www.muppetlabs.com/~breadbox/software/assembly.html +; +; For Visual C++ 4.x and higher and ML 6.x and higher +; ml.exe is distributed in +; http://www.microsoft.com/downloads/details.aspx?FamilyID=7a1c9da0-0510-44a2-b042-7ef370530c64 +; +; this file contain two implementation of longest_match +; +; this longest_match was written by Brian raiter (1998), optimized for Pentium Pro +; (and the faster known version of match_init on modern Core 2 Duo and AMD Phenom) +; +; for using an assembly version of longest_match, you need define ASMV in project +; +; compile the asm file running +; ml /coff /Zi /c /Flmatch686.lst match686.asm +; and do not include match686.obj in your project +; +; note: contrib of zLib 1.2.3 and earlier contained both a deprecated version for +; Pentium (prior Pentium Pro) and this version for Pentium Pro and modern processor +; with autoselect (with cpu detection code) +; if you want support the old pentium optimization, you can still use these version +; +; this file is not optimized for old pentium, but it compatible with all x86 32 bits +; processor (starting 80386) +; +; +; see below : zlib1222add must be adjuster if you use a zlib version < 1.2.2.2 + +;uInt longest_match(s, cur_match) +; deflate_state *s; +; IPos cur_match; /* current match */ + + NbStack equ 76 + cur_match equ dword ptr[esp+NbStack-0] + str_s equ dword ptr[esp+NbStack-4] +; 5 dword on top (ret,ebp,esi,edi,ebx) + adrret equ dword ptr[esp+NbStack-8] + pushebp equ dword ptr[esp+NbStack-12] + pushedi equ dword ptr[esp+NbStack-16] + pushesi equ dword ptr[esp+NbStack-20] + pushebx equ dword ptr[esp+NbStack-24] + + chain_length equ dword ptr [esp+NbStack-28] + limit equ dword ptr [esp+NbStack-32] + best_len equ dword ptr [esp+NbStack-36] + window equ dword ptr [esp+NbStack-40] + prev equ dword ptr [esp+NbStack-44] + scan_start equ word ptr [esp+NbStack-48] + wmask equ dword ptr [esp+NbStack-52] + match_start_ptr equ dword ptr [esp+NbStack-56] + nice_match equ dword ptr [esp+NbStack-60] + scan equ dword ptr [esp+NbStack-64] + + windowlen equ dword ptr [esp+NbStack-68] + match_start equ dword ptr [esp+NbStack-72] + strend equ dword ptr [esp+NbStack-76] + 
NbStackAdd equ (NbStack-24) + + .386p + + name gvmatch + .MODEL FLAT + + + +; all the +zlib1222add offsets are due to the addition of fields +; in zlib in the deflate_state structure since the asm code was first written +; (if you compile with zlib 1.0.4 or older, use "zlib1222add equ (-4)"). +; (if you compile with zlib between 1.0.5 and 1.2.2.1, use "zlib1222add equ 0"). +; if you compile with zlib 1.2.2.2 or later , use "zlib1222add equ 8"). + + zlib1222add equ 8 + +; Note : these value are good with a 8 bytes boundary pack structure + dep_chain_length equ 74h+zlib1222add + dep_window equ 30h+zlib1222add + dep_strstart equ 64h+zlib1222add + dep_prev_length equ 70h+zlib1222add + dep_nice_match equ 88h+zlib1222add + dep_w_size equ 24h+zlib1222add + dep_prev equ 38h+zlib1222add + dep_w_mask equ 2ch+zlib1222add + dep_good_match equ 84h+zlib1222add + dep_match_start equ 68h+zlib1222add + dep_lookahead equ 6ch+zlib1222add + + +_TEXT segment + +IFDEF NOUNDERLINE + public longest_match + public match_init +ELSE + public _longest_match + public _match_init +ENDIF + + MAX_MATCH equ 258 + MIN_MATCH equ 3 + MIN_LOOKAHEAD equ (MAX_MATCH+MIN_MATCH+1) + + + +MAX_MATCH equ 258 +MIN_MATCH equ 3 +MIN_LOOKAHEAD equ (MAX_MATCH + MIN_MATCH + 1) +MAX_MATCH_8_ equ ((MAX_MATCH + 7) AND 0FFF0h) + + +;;; stack frame offsets + +chainlenwmask equ esp + 0 ; high word: current chain len + ; low word: s->wmask +window equ esp + 4 ; local copy of s->window +windowbestlen equ esp + 8 ; s->window + bestlen +scanstart equ esp + 16 ; first two bytes of string +scanend equ esp + 12 ; last two bytes of string +scanalign equ esp + 20 ; dword-misalignment of string +nicematch equ esp + 24 ; a good enough match size +bestlen equ esp + 28 ; size of best match so far +scan equ esp + 32 ; ptr to string wanting match + +LocalVarsSize equ 36 +; saved ebx byte esp + 36 +; saved edi byte esp + 40 +; saved esi byte esp + 44 +; saved ebp byte esp + 48 +; return address byte esp + 52 +deflatestate equ esp + 56 ; the function arguments +curmatch equ esp + 60 + +;;; Offsets for fields in the deflate_state structure. These numbers +;;; are calculated from the definition of deflate_state, with the +;;; assumption that the compiler will dword-align the fields. (Thus, +;;; changing the definition of deflate_state could easily cause this +;;; program to crash horribly, without so much as a warning at +;;; compile time. Sigh.) + +dsWSize equ 36+zlib1222add +dsWMask equ 44+zlib1222add +dsWindow equ 48+zlib1222add +dsPrev equ 56+zlib1222add +dsMatchLen equ 88+zlib1222add +dsPrevMatch equ 92+zlib1222add +dsStrStart equ 100+zlib1222add +dsMatchStart equ 104+zlib1222add +dsLookahead equ 108+zlib1222add +dsPrevLen equ 112+zlib1222add +dsMaxChainLen equ 116+zlib1222add +dsGoodMatch equ 132+zlib1222add +dsNiceMatch equ 136+zlib1222add + + +;;; match686.asm -- Pentium-Pro-optimized version of longest_match() +;;; Written for zlib 1.1.2 +;;; Copyright (C) 1998 Brian Raiter +;;; You can look at http://www.muppetlabs.com/~breadbox/software/assembly.html +;;; +;; +;; This software is provided 'as-is', without any express or implied +;; warranty. In no event will the authors be held liable for any damages +;; arising from the use of this software. +;; +;; Permission is granted to anyone to use this software for any purpose, +;; including commercial applications, and to alter it and redistribute it +;; freely, subject to the following restrictions: +;; +;; 1. 
The origin of this software must not be misrepresented; you must not +;; claim that you wrote the original software. If you use this software +;; in a product, an acknowledgment in the product documentation would be +;; appreciated but is not required. +;; 2. Altered source versions must be plainly marked as such, and must not be +;; misrepresented as being the original software +;; 3. This notice may not be removed or altered from any source distribution. +;; + +;GLOBAL _longest_match, _match_init + + +;SECTION .text + +;;; uInt longest_match(deflate_state *deflatestate, IPos curmatch) + +;_longest_match: + IFDEF NOUNDERLINE + longest_match proc near + ELSE + _longest_match proc near + ENDIF +.FPO (9, 4, 0, 0, 1, 0) + +;;; Save registers that the compiler may be using, and adjust esp to +;;; make room for our stack frame. + + push ebp + push edi + push esi + push ebx + sub esp, LocalVarsSize + +;;; Retrieve the function arguments. ecx will hold cur_match +;;; throughout the entire function. edx will hold the pointer to the +;;; deflate_state structure during the function's setup (before +;;; entering the main loop. + + mov edx, [deflatestate] + mov ecx, [curmatch] + +;;; uInt wmask = s->w_mask; +;;; unsigned chain_length = s->max_chain_length; +;;; if (s->prev_length >= s->good_match) { +;;; chain_length >>= 2; +;;; } + + mov eax, [edx + dsPrevLen] + mov ebx, [edx + dsGoodMatch] + cmp eax, ebx + mov eax, [edx + dsWMask] + mov ebx, [edx + dsMaxChainLen] + jl LastMatchGood + shr ebx, 2 +LastMatchGood: + +;;; chainlen is decremented once beforehand so that the function can +;;; use the sign flag instead of the zero flag for the exit test. +;;; It is then shifted into the high word, to make room for the wmask +;;; value, which it will always accompany. + + dec ebx + shl ebx, 16 + or ebx, eax + mov [chainlenwmask], ebx + +;;; if ((uInt)nice_match > s->lookahead) nice_match = s->lookahead; + + mov eax, [edx + dsNiceMatch] + mov ebx, [edx + dsLookahead] + cmp ebx, eax + jl LookaheadLess + mov ebx, eax +LookaheadLess: mov [nicematch], ebx + +;;; register Bytef *scan = s->window + s->strstart; + + mov esi, [edx + dsWindow] + mov [window], esi + mov ebp, [edx + dsStrStart] + lea edi, [esi + ebp] + mov [scan], edi + +;;; Determine how many bytes the scan ptr is off from being +;;; dword-aligned. + + mov eax, edi + neg eax + and eax, 3 + mov [scanalign], eax + +;;; IPos limit = s->strstart > (IPos)MAX_DIST(s) ? +;;; s->strstart - (IPos)MAX_DIST(s) : NIL; + + mov eax, [edx + dsWSize] + sub eax, MIN_LOOKAHEAD + sub ebp, eax + jg LimitPositive + xor ebp, ebp +LimitPositive: + +;;; int best_len = s->prev_length; + + mov eax, [edx + dsPrevLen] + mov [bestlen], eax + +;;; Store the sum of s->window + best_len in esi locally, and in esi. + + add esi, eax + mov [windowbestlen], esi + +;;; register ush scan_start = *(ushf*)scan; +;;; register ush scan_end = *(ushf*)(scan+best_len-1); +;;; Posf *prev = s->prev; + + movzx ebx, word ptr [edi] + mov [scanstart], ebx + movzx ebx, word ptr [edi + eax - 1] + mov [scanend], ebx + mov edi, [edx + dsPrev] + +;;; Jump into the main loop. + + mov edx, [chainlenwmask] + jmp short LoopEntry + +align 4 + +;;; do { +;;; match = s->window + cur_match; +;;; if (*(ushf*)(match+best_len-1) != scan_end || +;;; *(ushf*)match != scan_start) continue; +;;; [...] +;;; } while ((cur_match = prev[cur_match & wmask]) > limit +;;; && --chain_length != 0); +;;; +;;; Here is the inner loop of the function. 
The function will spend the +;;; majority of its time in this loop, and majority of that time will +;;; be spent in the first ten instructions. +;;; +;;; Within this loop: +;;; ebx = scanend +;;; ecx = curmatch +;;; edx = chainlenwmask - i.e., ((chainlen << 16) | wmask) +;;; esi = windowbestlen - i.e., (window + bestlen) +;;; edi = prev +;;; ebp = limit + +LookupLoop: + and ecx, edx + movzx ecx, word ptr [edi + ecx*2] + cmp ecx, ebp + jbe LeaveNow + sub edx, 00010000h + js LeaveNow +LoopEntry: movzx eax, word ptr [esi + ecx - 1] + cmp eax, ebx + jnz LookupLoop + mov eax, [window] + movzx eax, word ptr [eax + ecx] + cmp eax, [scanstart] + jnz LookupLoop + +;;; Store the current value of chainlen. + + mov [chainlenwmask], edx + +;;; Point edi to the string under scrutiny, and esi to the string we +;;; are hoping to match it up with. In actuality, esi and edi are +;;; both pointed (MAX_MATCH_8 - scanalign) bytes ahead, and edx is +;;; initialized to -(MAX_MATCH_8 - scanalign). + + mov esi, [window] + mov edi, [scan] + add esi, ecx + mov eax, [scanalign] + mov edx, 0fffffef8h; -(MAX_MATCH_8) + lea edi, [edi + eax + 0108h] ;MAX_MATCH_8] + lea esi, [esi + eax + 0108h] ;MAX_MATCH_8] + +;;; Test the strings for equality, 8 bytes at a time. At the end, +;;; adjust edx so that it is offset to the exact byte that mismatched. +;;; +;;; We already know at this point that the first three bytes of the +;;; strings match each other, and they can be safely passed over before +;;; starting the compare loop. So what this code does is skip over 0-3 +;;; bytes, as much as necessary in order to dword-align the edi +;;; pointer. (esi will still be misaligned three times out of four.) +;;; +;;; It should be confessed that this loop usually does not represent +;;; much of the total running time. Replacing it with a more +;;; straightforward "rep cmpsb" would not drastically degrade +;;; performance. + +LoopCmps: + mov eax, [esi + edx] + xor eax, [edi + edx] + jnz LeaveLoopCmps + mov eax, [esi + edx + 4] + xor eax, [edi + edx + 4] + jnz LeaveLoopCmps4 + add edx, 8 + jnz LoopCmps + jmp short LenMaximum +LeaveLoopCmps4: add edx, 4 +LeaveLoopCmps: test eax, 0000FFFFh + jnz LenLower + add edx, 2 + shr eax, 16 +LenLower: sub al, 1 + adc edx, 0 + +;;; Calculate the length of the match. If it is longer than MAX_MATCH, +;;; then automatically accept it as the best possible match and leave. + + lea eax, [edi + edx] + mov edi, [scan] + sub eax, edi + cmp eax, MAX_MATCH + jge LenMaximum + +;;; If the length of the match is not longer than the best match we +;;; have so far, then forget it and return to the lookup loop. + + mov edx, [deflatestate] + mov ebx, [bestlen] + cmp eax, ebx + jg LongerMatch + mov esi, [windowbestlen] + mov edi, [edx + dsPrev] + mov ebx, [scanend] + mov edx, [chainlenwmask] + jmp LookupLoop + +;;; s->match_start = cur_match; +;;; best_len = len; +;;; if (len >= nice_match) break; +;;; scan_end = *(ushf*)(scan+best_len-1); + +LongerMatch: mov ebx, [nicematch] + mov [bestlen], eax + mov [edx + dsMatchStart], ecx + cmp eax, ebx + jge LeaveNow + mov esi, [window] + add esi, eax + mov [windowbestlen], esi + movzx ebx, word ptr [edi + eax - 1] + mov edi, [edx + dsPrev] + mov [scanend], ebx + mov edx, [chainlenwmask] + jmp LookupLoop + +;;; Accept the current string, with the maximum possible length. 
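;;; (Illustrative sketch, not part of the original source: in C terms, the
;;; lookup loop above is the standard zlib longest_match() search, roughly
;;;
;;;     do {
;;;         match = s->window + cur_match;
;;;         if (*(ushf*)(match + best_len - 1) != scan_end ||
;;;             *(ushf*)match != scan_start) continue;
;;;         /* len = number of matching bytes, compared 8 at a time,
;;;            capped at MAX_MATCH */
;;;         if (len > best_len) {
;;;             s->match_start = cur_match;
;;;             best_len = len;
;;;             if (len >= nice_match) break;
;;;             scan_end = *(ushf*)(scan + best_len - 1);
;;;         }
;;;     } while ((cur_match = prev[cur_match & wmask]) > limit
;;;              && --chain_length != 0);
;;;
;;; The LenMaximum block that follows handles the early-accept case in which
;;; the compare loop reaches MAX_MATCH.)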
+ +LenMaximum: mov edx, [deflatestate] + mov dword ptr [bestlen], MAX_MATCH + mov [edx + dsMatchStart], ecx + +;;; if ((uInt)best_len <= s->lookahead) return (uInt)best_len; +;;; return s->lookahead; + +LeaveNow: + mov edx, [deflatestate] + mov ebx, [bestlen] + mov eax, [edx + dsLookahead] + cmp ebx, eax + jg LookaheadRet + mov eax, ebx +LookaheadRet: + +;;; Restore the stack and return from whence we came. + + add esp, LocalVarsSize + pop ebx + pop esi + pop edi + pop ebp + + ret +; please don't remove this string ! +; Your can freely use match686 in any free or commercial app if you don't remove the string in the binary! + db 0dh,0ah,"asm686 with masm, optimised assembly code from Brian Raiter, written 1998",0dh,0ah + + + IFDEF NOUNDERLINE + longest_match endp + ELSE + _longest_match endp + ENDIF + + IFDEF NOUNDERLINE + match_init proc near + ret + match_init endp + ELSE + _match_init proc near + ret + _match_init endp + ENDIF + + +_TEXT ends +end ADDED compat/zlib/contrib/masmx86/readme.txt Index: compat/zlib/contrib/masmx86/readme.txt ================================================================== --- compat/zlib/contrib/masmx86/readme.txt +++ compat/zlib/contrib/masmx86/readme.txt @@ -0,0 +1,27 @@ + +Summary +------- +This directory contains ASM implementations of the functions +longest_match() and inflate_fast(). + + +Use instructions +---------------- +Assemble using MASM, and copy the object files into the zlib source +directory, then run the appropriate makefile, as suggested below. You can +donwload MASM from here: + + http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=7a1c9da0-0510-44a2-b042-7ef370530c64 + +You can also get objects files here: + + http://www.winimage.com/zLibDll/zlib124_masm_obj.zip + +Build instructions +------------------ +* With Microsoft C and MASM: +nmake -f win32/Makefile.msc LOC="-DASMV -DASMINF" OBJA="match686.obj inffas32.obj" + +* With Borland C and TASM: +make -f win32/Makefile.bor LOCAL_ZLIB="-DASMV -DASMINF" OBJA="match686.obj inffas32.obj" OBJPA="+match686c.obj+match686.obj+inffas32.obj" + ADDED compat/zlib/contrib/minizip/Makefile Index: compat/zlib/contrib/minizip/Makefile ================================================================== --- compat/zlib/contrib/minizip/Makefile +++ compat/zlib/contrib/minizip/Makefile @@ -0,0 +1,25 @@ +CC=cc +CFLAGS=-O -I../.. + +UNZ_OBJS = miniunz.o unzip.o ioapi.o ../../libz.a +ZIP_OBJS = minizip.o zip.o ioapi.o ../../libz.a + +.c.o: + $(CC) -c $(CFLAGS) $*.c + +all: miniunz minizip + +miniunz: $(UNZ_OBJS) + $(CC) $(CFLAGS) -o $@ $(UNZ_OBJS) + +minizip: $(ZIP_OBJS) + $(CC) $(CFLAGS) -o $@ $(ZIP_OBJS) + +test: miniunz minizip + ./minizip test readme.txt + ./miniunz -l test.zip + mv readme.txt readme.old + ./miniunz test.zip + +clean: + /bin/rm -f *.o *~ minizip miniunz ADDED compat/zlib/contrib/minizip/Makefile.am Index: compat/zlib/contrib/minizip/Makefile.am ================================================================== --- compat/zlib/contrib/minizip/Makefile.am +++ compat/zlib/contrib/minizip/Makefile.am @@ -0,0 +1,45 @@ +lib_LTLIBRARIES = libminizip.la + +if COND_DEMOS +bin_PROGRAMS = miniunzip minizip +endif + +zlib_top_srcdir = $(top_srcdir)/../.. +zlib_top_builddir = $(top_builddir)/../.. 
+ +AM_CPPFLAGS = -I$(zlib_top_srcdir) +AM_LDFLAGS = -L$(zlib_top_builddir) + +if WIN32 +iowin32_src = iowin32.c +iowin32_h = iowin32.h +endif + +libminizip_la_SOURCES = \ + ioapi.c \ + mztools.c \ + unzip.c \ + zip.c \ + ${iowin32_src} + +libminizip_la_LDFLAGS = $(AM_LDFLAGS) -version-info 1:0:0 -lz + +minizip_includedir = $(includedir)/minizip +minizip_include_HEADERS = \ + crypt.h \ + ioapi.h \ + mztools.h \ + unzip.h \ + zip.h \ + ${iowin32_h} + +pkgconfigdir = $(libdir)/pkgconfig +pkgconfig_DATA = minizip.pc + +EXTRA_PROGRAMS = miniunzip minizip + +miniunzip_SOURCES = miniunz.c +miniunzip_LDADD = libminizip.la + +minizip_SOURCES = minizip.c +minizip_LDADD = libminizip.la -lz ADDED compat/zlib/contrib/minizip/MiniZip64_Changes.txt Index: compat/zlib/contrib/minizip/MiniZip64_Changes.txt ================================================================== --- compat/zlib/contrib/minizip/MiniZip64_Changes.txt +++ compat/zlib/contrib/minizip/MiniZip64_Changes.txt @@ -0,0 +1,6 @@ + +MiniZip 1.1 was derrived from MiniZip at version 1.01f + +Change in 1.0 (Okt 2009) + - **TODO - Add history** + ADDED compat/zlib/contrib/minizip/MiniZip64_info.txt Index: compat/zlib/contrib/minizip/MiniZip64_info.txt ================================================================== --- compat/zlib/contrib/minizip/MiniZip64_info.txt +++ compat/zlib/contrib/minizip/MiniZip64_info.txt @@ -0,0 +1,74 @@ +MiniZip - Copyright (c) 1998-2010 - by Gilles Vollant - version 1.1 64 bits from Mathias Svensson + +Introduction +--------------------- +MiniZip 1.1 is built from MiniZip 1.0 by Gilles Vollant ( http://www.winimage.com/zLibDll/minizip.html ) + +When adding ZIP64 support into minizip it would result into risk of breaking compatibility with minizip 1.0. +All possible work was done for compatibility. + + +Background +--------------------- +When adding ZIP64 support Mathias Svensson found that Even Rouault have added ZIP64 +support for unzip.c into minizip for a open source project called gdal ( http://www.gdal.org/ ) + +That was used as a starting point. And after that ZIP64 support was added to zip.c +some refactoring and code cleanup was also done. + + +Changed from MiniZip 1.0 to MiniZip 1.1 +--------------------------------------- +* Added ZIP64 support for unzip ( by Even Rouault ) +* Added ZIP64 support for zip ( by Mathias Svensson ) +* Reverted some changed that Even Rouault did. +* Bunch of patches received from Gulles Vollant that he received for MiniZip from various users. +* Added unzip patch for BZIP Compression method (patch create by Daniel Borca) +* Added BZIP Compress method for zip +* Did some refactoring and code cleanup + + +Credits + + Gilles Vollant - Original MiniZip author + Even Rouault - ZIP64 unzip Support + Daniel Borca - BZip Compression method support in unzip + Mathias Svensson - ZIP64 zip support + Mathias Svensson - BZip Compression method support in zip + + Resources + + ZipLayout http://result42.com/projects/ZipFileLayout + Command line tool for Windows that shows the layout and information of the headers in a zip archive. + Used when debugging and validating the creation of zip files using MiniZip64 + + + ZIP App Note http://www.pkware.com/documents/casestudies/APPNOTE.TXT + Zip File specification + + +Notes. + * To be able to use BZip compression method in zip64.c or unzip64.c the BZIP2 lib is needed and HAVE_BZIP2 need to be defined. 
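Illustrative build sketch (not from the original files): with the configure.ac
and Makefile.am added in this directory, one plausible way to build libminizip
plus the demo tools, and to satisfy the HAVE_BZIP2 note above, is

   autoreconf -fi
   ./configure --enable-demos CPPFLAGS="-DHAVE_BZIP2" LIBS="-lbz2"
   make

The --enable-demos option and the HAVE_BZIP2 define come from the files in this
directory; the autoreconf step and the -lbz2 library name are assumptions, not
taken from these files.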
+ +License +---------------------------------------------------------- + Condition of use and distribution are the same than zlib : + + This software is provided 'as-is', without any express or implied + warranty. In no event will the authors be held liable for any damages + arising from the use of this software. + + Permission is granted to anyone to use this software for any purpose, + including commercial applications, and to alter it and redistribute it + freely, subject to the following restrictions: + + 1. The origin of this software must not be misrepresented; you must not + claim that you wrote the original software. If you use this software + in a product, an acknowledgment in the product documentation would be + appreciated but is not required. + 2. Altered source versions must be plainly marked as such, and must not be + misrepresented as being the original software. + 3. This notice may not be removed or altered from any source distribution. + +---------------------------------------------------------- + ADDED compat/zlib/contrib/minizip/configure.ac Index: compat/zlib/contrib/minizip/configure.ac ================================================================== --- compat/zlib/contrib/minizip/configure.ac +++ compat/zlib/contrib/minizip/configure.ac @@ -0,0 +1,32 @@ +# -*- Autoconf -*- +# Process this file with autoconf to produce a configure script. + +AC_INIT([minizip], [1.2.8], [bugzilla.redhat.com]) +AC_CONFIG_SRCDIR([minizip.c]) +AM_INIT_AUTOMAKE([foreign]) +LT_INIT + +AC_MSG_CHECKING([whether to build example programs]) +AC_ARG_ENABLE([demos], AC_HELP_STRING([--enable-demos], [build example programs])) +AM_CONDITIONAL([COND_DEMOS], [test "$enable_demos" = yes]) +if test "$enable_demos" = yes +then + AC_MSG_RESULT([yes]) +else + AC_MSG_RESULT([no]) +fi + +case "${host}" in + *-mingw* | mingw*) + WIN32="yes" + ;; + *) + ;; +esac +AM_CONDITIONAL([WIN32], [test "${WIN32}" = "yes"]) + + +AC_SUBST([HAVE_UNISTD_H], [0]) +AC_CHECK_HEADER([unistd.h], [HAVE_UNISTD_H=1], []) +AC_CONFIG_FILES([Makefile minizip.pc]) +AC_OUTPUT ADDED compat/zlib/contrib/minizip/crypt.h Index: compat/zlib/contrib/minizip/crypt.h ================================================================== --- compat/zlib/contrib/minizip/crypt.h +++ compat/zlib/contrib/minizip/crypt.h @@ -0,0 +1,131 @@ +/* crypt.h -- base code for crypt/uncrypt ZIPfile + + + Version 1.01e, February 12th, 2005 + + Copyright (C) 1998-2005 Gilles Vollant + + This code is a modified version of crypting code in Infozip distribution + + The encryption/decryption parts of this source code (as opposed to the + non-echoing password parts) were originally written in Europe. The + whole source package can be freely distributed, including from the USA. + (Prior to January 2000, re-export from the US was a violation of US law.) + + This encryption code is a direct transcription of the algorithm from + Roger Schlafly, described by Phil Katz in the file appnote.txt. This + file (appnote.txt) is distributed with the PKZIP program (even in the + version without encryption capabilities). + + If you don't need crypting in your application, just define symbols + NOCRYPT and NOUNCRYPT. + + This code support the "Traditional PKWARE Encryption". + + The new AES encryption added on Zip format by Winzip (see the page + http://www.winzip.com/aes_info.htm ) and PKWare PKZip 5.x Strong + Encryption is not supported. 
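   Illustrative sketch (not part of the original header): a caller decrypting
   a buffer with the traditional PKWARE scheme implemented below would
   typically seed the keys from the password and then run every byte through
   zdecode(), much as unzip.c does. get_crc_table() is zlib's CRC-table
   accessor; the buffer and password names are placeholders.

       unsigned long keys[3];
       const z_crc_t *crc_tab = get_crc_table();
       int i;

       init_keys(password, keys, crc_tab);        /* set up the three key words */
       for (i = 0; i < n; i++)                    /* decrypt n bytes in place */
           buf[i] = (unsigned char)zdecode(keys, crc_tab, buf[i]);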
+*/ + +#define CRC32(c, b) ((*(pcrc_32_tab+(((int)(c) ^ (b)) & 0xff))) ^ ((c) >> 8)) + +/*********************************************************************** + * Return the next byte in the pseudo-random sequence + */ +static int decrypt_byte(unsigned long* pkeys, const z_crc_t* pcrc_32_tab) +{ + unsigned temp; /* POTENTIAL BUG: temp*(temp^1) may overflow in an + * unpredictable manner on 16-bit systems; not a problem + * with any known compiler so far, though */ + + temp = ((unsigned)(*(pkeys+2)) & 0xffff) | 2; + return (int)(((temp * (temp ^ 1)) >> 8) & 0xff); +} + +/*********************************************************************** + * Update the encryption keys with the next byte of plain text + */ +static int update_keys(unsigned long* pkeys,const z_crc_t* pcrc_32_tab,int c) +{ + (*(pkeys+0)) = CRC32((*(pkeys+0)), c); + (*(pkeys+1)) += (*(pkeys+0)) & 0xff; + (*(pkeys+1)) = (*(pkeys+1)) * 134775813L + 1; + { + register int keyshift = (int)((*(pkeys+1)) >> 24); + (*(pkeys+2)) = CRC32((*(pkeys+2)), keyshift); + } + return c; +} + + +/*********************************************************************** + * Initialize the encryption keys and the random header according to + * the given password. + */ +static void init_keys(const char* passwd,unsigned long* pkeys,const z_crc_t* pcrc_32_tab) +{ + *(pkeys+0) = 305419896L; + *(pkeys+1) = 591751049L; + *(pkeys+2) = 878082192L; + while (*passwd != '\0') { + update_keys(pkeys,pcrc_32_tab,(int)*passwd); + passwd++; + } +} + +#define zdecode(pkeys,pcrc_32_tab,c) \ + (update_keys(pkeys,pcrc_32_tab,c ^= decrypt_byte(pkeys,pcrc_32_tab))) + +#define zencode(pkeys,pcrc_32_tab,c,t) \ + (t=decrypt_byte(pkeys,pcrc_32_tab), update_keys(pkeys,pcrc_32_tab,c), t^(c)) + +#ifdef INCLUDECRYPTINGCODE_IFCRYPTALLOWED + +#define RAND_HEAD_LEN 12 + /* "last resort" source for second part of crypt seed pattern */ +# ifndef ZCR_SEED2 +# define ZCR_SEED2 3141592654UL /* use PI as default pattern */ +# endif + +static int crypthead(const char* passwd, /* password string */ + unsigned char* buf, /* where to write header */ + int bufSize, + unsigned long* pkeys, + const z_crc_t* pcrc_32_tab, + unsigned long crcForCrypting) +{ + int n; /* index in random header */ + int t; /* temporary */ + int c; /* random byte */ + unsigned char header[RAND_HEAD_LEN-2]; /* random header */ + static unsigned calls = 0; /* ensure different random header each time */ + + if (bufSize> 7) & 0xff; + header[n] = (unsigned char)zencode(pkeys, pcrc_32_tab, c, t); + } + /* Encrypt random header (last two bytes is high word of crc) */ + init_keys(passwd, pkeys, pcrc_32_tab); + for (n = 0; n < RAND_HEAD_LEN-2; n++) + { + buf[n] = (unsigned char)zencode(pkeys, pcrc_32_tab, header[n], t); + } + buf[n++] = (unsigned char)zencode(pkeys, pcrc_32_tab, (int)(crcForCrypting >> 16) & 0xff, t); + buf[n++] = (unsigned char)zencode(pkeys, pcrc_32_tab, (int)(crcForCrypting >> 24) & 0xff, t); + return n; +} + +#endif ADDED compat/zlib/contrib/minizip/ioapi.c Index: compat/zlib/contrib/minizip/ioapi.c ================================================================== --- compat/zlib/contrib/minizip/ioapi.c +++ compat/zlib/contrib/minizip/ioapi.c @@ -0,0 +1,247 @@ +/* ioapi.h -- IO base function header for compress/uncompress .zip + part of the MiniZip project - ( http://www.winimage.com/zLibDll/minizip.html ) + + Copyright (C) 1998-2010 Gilles Vollant (minizip) ( http://www.winimage.com/zLibDll/minizip.html ) + + Modifications for Zip64 support + Copyright (C) 2009-2010 Mathias Svensson ( 
http://result42.com ) + + For more info read MiniZip_info.txt + +*/ + +#if defined(_WIN32) && (!(defined(_CRT_SECURE_NO_WARNINGS))) + #define _CRT_SECURE_NO_WARNINGS +#endif + +#if defined(__APPLE__) || defined(IOAPI_NO_64) +// In darwin and perhaps other BSD variants off_t is a 64 bit value, hence no need for specific 64 bit functions +#define FOPEN_FUNC(filename, mode) fopen(filename, mode) +#define FTELLO_FUNC(stream) ftello(stream) +#define FSEEKO_FUNC(stream, offset, origin) fseeko(stream, offset, origin) +#else +#define FOPEN_FUNC(filename, mode) fopen64(filename, mode) +#define FTELLO_FUNC(stream) ftello64(stream) +#define FSEEKO_FUNC(stream, offset, origin) fseeko64(stream, offset, origin) +#endif + + +#include "ioapi.h" + +voidpf call_zopen64 (const zlib_filefunc64_32_def* pfilefunc,const void*filename,int mode) +{ + if (pfilefunc->zfile_func64.zopen64_file != NULL) + return (*(pfilefunc->zfile_func64.zopen64_file)) (pfilefunc->zfile_func64.opaque,filename,mode); + else + { + return (*(pfilefunc->zopen32_file))(pfilefunc->zfile_func64.opaque,(const char*)filename,mode); + } +} + +long call_zseek64 (const zlib_filefunc64_32_def* pfilefunc,voidpf filestream, ZPOS64_T offset, int origin) +{ + if (pfilefunc->zfile_func64.zseek64_file != NULL) + return (*(pfilefunc->zfile_func64.zseek64_file)) (pfilefunc->zfile_func64.opaque,filestream,offset,origin); + else + { + uLong offsetTruncated = (uLong)offset; + if (offsetTruncated != offset) + return -1; + else + return (*(pfilefunc->zseek32_file))(pfilefunc->zfile_func64.opaque,filestream,offsetTruncated,origin); + } +} + +ZPOS64_T call_ztell64 (const zlib_filefunc64_32_def* pfilefunc,voidpf filestream) +{ + if (pfilefunc->zfile_func64.zseek64_file != NULL) + return (*(pfilefunc->zfile_func64.ztell64_file)) (pfilefunc->zfile_func64.opaque,filestream); + else + { + uLong tell_uLong = (*(pfilefunc->ztell32_file))(pfilefunc->zfile_func64.opaque,filestream); + if ((tell_uLong) == MAXU32) + return (ZPOS64_T)-1; + else + return tell_uLong; + } +} + +void fill_zlib_filefunc64_32_def_from_filefunc32(zlib_filefunc64_32_def* p_filefunc64_32,const zlib_filefunc_def* p_filefunc32) +{ + p_filefunc64_32->zfile_func64.zopen64_file = NULL; + p_filefunc64_32->zopen32_file = p_filefunc32->zopen_file; + p_filefunc64_32->zfile_func64.zerror_file = p_filefunc32->zerror_file; + p_filefunc64_32->zfile_func64.zread_file = p_filefunc32->zread_file; + p_filefunc64_32->zfile_func64.zwrite_file = p_filefunc32->zwrite_file; + p_filefunc64_32->zfile_func64.ztell64_file = NULL; + p_filefunc64_32->zfile_func64.zseek64_file = NULL; + p_filefunc64_32->zfile_func64.zclose_file = p_filefunc32->zclose_file; + p_filefunc64_32->zfile_func64.zerror_file = p_filefunc32->zerror_file; + p_filefunc64_32->zfile_func64.opaque = p_filefunc32->opaque; + p_filefunc64_32->zseek32_file = p_filefunc32->zseek_file; + p_filefunc64_32->ztell32_file = p_filefunc32->ztell_file; +} + + + +static voidpf ZCALLBACK fopen_file_func OF((voidpf opaque, const char* filename, int mode)); +static uLong ZCALLBACK fread_file_func OF((voidpf opaque, voidpf stream, void* buf, uLong size)); +static uLong ZCALLBACK fwrite_file_func OF((voidpf opaque, voidpf stream, const void* buf,uLong size)); +static ZPOS64_T ZCALLBACK ftell64_file_func OF((voidpf opaque, voidpf stream)); +static long ZCALLBACK fseek64_file_func OF((voidpf opaque, voidpf stream, ZPOS64_T offset, int origin)); +static int ZCALLBACK fclose_file_func OF((voidpf opaque, voidpf stream)); +static int ZCALLBACK ferror_file_func OF((voidpf opaque, 
voidpf stream)); + +static voidpf ZCALLBACK fopen_file_func (voidpf opaque, const char* filename, int mode) +{ + FILE* file = NULL; + const char* mode_fopen = NULL; + if ((mode & ZLIB_FILEFUNC_MODE_READWRITEFILTER)==ZLIB_FILEFUNC_MODE_READ) + mode_fopen = "rb"; + else + if (mode & ZLIB_FILEFUNC_MODE_EXISTING) + mode_fopen = "r+b"; + else + if (mode & ZLIB_FILEFUNC_MODE_CREATE) + mode_fopen = "wb"; + + if ((filename!=NULL) && (mode_fopen != NULL)) + file = fopen(filename, mode_fopen); + return file; +} + +static voidpf ZCALLBACK fopen64_file_func (voidpf opaque, const void* filename, int mode) +{ + FILE* file = NULL; + const char* mode_fopen = NULL; + if ((mode & ZLIB_FILEFUNC_MODE_READWRITEFILTER)==ZLIB_FILEFUNC_MODE_READ) + mode_fopen = "rb"; + else + if (mode & ZLIB_FILEFUNC_MODE_EXISTING) + mode_fopen = "r+b"; + else + if (mode & ZLIB_FILEFUNC_MODE_CREATE) + mode_fopen = "wb"; + + if ((filename!=NULL) && (mode_fopen != NULL)) + file = FOPEN_FUNC((const char*)filename, mode_fopen); + return file; +} + + +static uLong ZCALLBACK fread_file_func (voidpf opaque, voidpf stream, void* buf, uLong size) +{ + uLong ret; + ret = (uLong)fread(buf, 1, (size_t)size, (FILE *)stream); + return ret; +} + +static uLong ZCALLBACK fwrite_file_func (voidpf opaque, voidpf stream, const void* buf, uLong size) +{ + uLong ret; + ret = (uLong)fwrite(buf, 1, (size_t)size, (FILE *)stream); + return ret; +} + +static long ZCALLBACK ftell_file_func (voidpf opaque, voidpf stream) +{ + long ret; + ret = ftell((FILE *)stream); + return ret; +} + + +static ZPOS64_T ZCALLBACK ftell64_file_func (voidpf opaque, voidpf stream) +{ + ZPOS64_T ret; + ret = FTELLO_FUNC((FILE *)stream); + return ret; +} + +static long ZCALLBACK fseek_file_func (voidpf opaque, voidpf stream, uLong offset, int origin) +{ + int fseek_origin=0; + long ret; + switch (origin) + { + case ZLIB_FILEFUNC_SEEK_CUR : + fseek_origin = SEEK_CUR; + break; + case ZLIB_FILEFUNC_SEEK_END : + fseek_origin = SEEK_END; + break; + case ZLIB_FILEFUNC_SEEK_SET : + fseek_origin = SEEK_SET; + break; + default: return -1; + } + ret = 0; + if (fseek((FILE *)stream, offset, fseek_origin) != 0) + ret = -1; + return ret; +} + +static long ZCALLBACK fseek64_file_func (voidpf opaque, voidpf stream, ZPOS64_T offset, int origin) +{ + int fseek_origin=0; + long ret; + switch (origin) + { + case ZLIB_FILEFUNC_SEEK_CUR : + fseek_origin = SEEK_CUR; + break; + case ZLIB_FILEFUNC_SEEK_END : + fseek_origin = SEEK_END; + break; + case ZLIB_FILEFUNC_SEEK_SET : + fseek_origin = SEEK_SET; + break; + default: return -1; + } + ret = 0; + + if(FSEEKO_FUNC((FILE *)stream, offset, fseek_origin) != 0) + ret = -1; + + return ret; +} + + +static int ZCALLBACK fclose_file_func (voidpf opaque, voidpf stream) +{ + int ret; + ret = fclose((FILE *)stream); + return ret; +} + +static int ZCALLBACK ferror_file_func (voidpf opaque, voidpf stream) +{ + int ret; + ret = ferror((FILE *)stream); + return ret; +} + +void fill_fopen_filefunc (pzlib_filefunc_def) + zlib_filefunc_def* pzlib_filefunc_def; +{ + pzlib_filefunc_def->zopen_file = fopen_file_func; + pzlib_filefunc_def->zread_file = fread_file_func; + pzlib_filefunc_def->zwrite_file = fwrite_file_func; + pzlib_filefunc_def->ztell_file = ftell_file_func; + pzlib_filefunc_def->zseek_file = fseek_file_func; + pzlib_filefunc_def->zclose_file = fclose_file_func; + pzlib_filefunc_def->zerror_file = ferror_file_func; + pzlib_filefunc_def->opaque = NULL; +} + +void fill_fopen64_filefunc (zlib_filefunc64_def* pzlib_filefunc_def) +{ + 
pzlib_filefunc_def->zopen64_file = fopen64_file_func; + pzlib_filefunc_def->zread_file = fread_file_func; + pzlib_filefunc_def->zwrite_file = fwrite_file_func; + pzlib_filefunc_def->ztell64_file = ftell64_file_func; + pzlib_filefunc_def->zseek64_file = fseek64_file_func; + pzlib_filefunc_def->zclose_file = fclose_file_func; + pzlib_filefunc_def->zerror_file = ferror_file_func; + pzlib_filefunc_def->opaque = NULL; +} ADDED compat/zlib/contrib/minizip/ioapi.h Index: compat/zlib/contrib/minizip/ioapi.h ================================================================== --- compat/zlib/contrib/minizip/ioapi.h +++ compat/zlib/contrib/minizip/ioapi.h @@ -0,0 +1,208 @@ +/* ioapi.h -- IO base function header for compress/uncompress .zip + part of the MiniZip project - ( http://www.winimage.com/zLibDll/minizip.html ) + + Copyright (C) 1998-2010 Gilles Vollant (minizip) ( http://www.winimage.com/zLibDll/minizip.html ) + + Modifications for Zip64 support + Copyright (C) 2009-2010 Mathias Svensson ( http://result42.com ) + + For more info read MiniZip_info.txt + + Changes + + Oct-2009 - Defined ZPOS64_T to fpos_t on windows and u_int64_t on linux. (might need to find a better why for this) + Oct-2009 - Change to fseeko64, ftello64 and fopen64 so large files would work on linux. + More if/def section may be needed to support other platforms + Oct-2009 - Defined fxxxx64 calls to normal fopen/ftell/fseek so they would compile on windows. + (but you should use iowin32.c for windows instead) + +*/ + +#ifndef _ZLIBIOAPI64_H +#define _ZLIBIOAPI64_H + +#if (!defined(_WIN32)) && (!defined(WIN32)) && (!defined(__APPLE__)) + + // Linux needs this to support file operation on files larger then 4+GB + // But might need better if/def to select just the platforms that needs them. 
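+      // These feature-test macros have to be defined before any system header
+      // is pulled in; on glibc they enable the explicit 64-bit file API
+      // (fopen64, fseeko64, ftello64 and a 64-bit off_t) that the default I/O
+      // callbacks in ioapi.c reach through the FOPEN_FUNC / FSEEKO_FUNC /
+      // FTELLO_FUNC wrappers.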
+ + #ifndef __USE_FILE_OFFSET64 + #define __USE_FILE_OFFSET64 + #endif + #ifndef __USE_LARGEFILE64 + #define __USE_LARGEFILE64 + #endif + #ifndef _LARGEFILE64_SOURCE + #define _LARGEFILE64_SOURCE + #endif + #ifndef _FILE_OFFSET_BIT + #define _FILE_OFFSET_BIT 64 + #endif + +#endif + +#include +#include +#include "zlib.h" + +#if defined(USE_FILE32API) +#define fopen64 fopen +#define ftello64 ftell +#define fseeko64 fseek +#else +#ifdef __FreeBSD__ +#define fopen64 fopen +#define ftello64 ftello +#define fseeko64 fseeko +#endif +#ifdef _MSC_VER + #define fopen64 fopen + #if (_MSC_VER >= 1400) && (!(defined(NO_MSCVER_FILE64_FUNC))) + #define ftello64 _ftelli64 + #define fseeko64 _fseeki64 + #else // old MSC + #define ftello64 ftell + #define fseeko64 fseek + #endif +#endif +#endif + +/* +#ifndef ZPOS64_T + #ifdef _WIN32 + #define ZPOS64_T fpos_t + #else + #include + #define ZPOS64_T uint64_t + #endif +#endif +*/ + +#ifdef HAVE_MINIZIP64_CONF_H +#include "mz64conf.h" +#endif + +/* a type choosen by DEFINE */ +#ifdef HAVE_64BIT_INT_CUSTOM +typedef 64BIT_INT_CUSTOM_TYPE ZPOS64_T; +#else +#ifdef HAS_STDINT_H +#include "stdint.h" +typedef uint64_t ZPOS64_T; +#else + +/* Maximum unsigned 32-bit value used as placeholder for zip64 */ +#define MAXU32 0xffffffff + +#if defined(_MSC_VER) || defined(__BORLANDC__) +typedef unsigned __int64 ZPOS64_T; +#else +typedef unsigned long long int ZPOS64_T; +#endif +#endif +#endif + + + +#ifdef __cplusplus +extern "C" { +#endif + + +#define ZLIB_FILEFUNC_SEEK_CUR (1) +#define ZLIB_FILEFUNC_SEEK_END (2) +#define ZLIB_FILEFUNC_SEEK_SET (0) + +#define ZLIB_FILEFUNC_MODE_READ (1) +#define ZLIB_FILEFUNC_MODE_WRITE (2) +#define ZLIB_FILEFUNC_MODE_READWRITEFILTER (3) + +#define ZLIB_FILEFUNC_MODE_EXISTING (4) +#define ZLIB_FILEFUNC_MODE_CREATE (8) + + +#ifndef ZCALLBACK + #if (defined(WIN32) || defined(_WIN32) || defined (WINDOWS) || defined (_WINDOWS)) && defined(CALLBACK) && defined (USEWINDOWS_CALLBACK) + #define ZCALLBACK CALLBACK + #else + #define ZCALLBACK + #endif +#endif + + + + +typedef voidpf (ZCALLBACK *open_file_func) OF((voidpf opaque, const char* filename, int mode)); +typedef uLong (ZCALLBACK *read_file_func) OF((voidpf opaque, voidpf stream, void* buf, uLong size)); +typedef uLong (ZCALLBACK *write_file_func) OF((voidpf opaque, voidpf stream, const void* buf, uLong size)); +typedef int (ZCALLBACK *close_file_func) OF((voidpf opaque, voidpf stream)); +typedef int (ZCALLBACK *testerror_file_func) OF((voidpf opaque, voidpf stream)); + +typedef long (ZCALLBACK *tell_file_func) OF((voidpf opaque, voidpf stream)); +typedef long (ZCALLBACK *seek_file_func) OF((voidpf opaque, voidpf stream, uLong offset, int origin)); + + +/* here is the "old" 32 bits structure structure */ +typedef struct zlib_filefunc_def_s +{ + open_file_func zopen_file; + read_file_func zread_file; + write_file_func zwrite_file; + tell_file_func ztell_file; + seek_file_func zseek_file; + close_file_func zclose_file; + testerror_file_func zerror_file; + voidpf opaque; +} zlib_filefunc_def; + +typedef ZPOS64_T (ZCALLBACK *tell64_file_func) OF((voidpf opaque, voidpf stream)); +typedef long (ZCALLBACK *seek64_file_func) OF((voidpf opaque, voidpf stream, ZPOS64_T offset, int origin)); +typedef voidpf (ZCALLBACK *open64_file_func) OF((voidpf opaque, const void* filename, int mode)); + +typedef struct zlib_filefunc64_def_s +{ + open64_file_func zopen64_file; + read_file_func zread_file; + write_file_func zwrite_file; + tell64_file_func ztell64_file; + seek64_file_func zseek64_file; + close_file_func 
zclose_file; + testerror_file_func zerror_file; + voidpf opaque; +} zlib_filefunc64_def; + +void fill_fopen64_filefunc OF((zlib_filefunc64_def* pzlib_filefunc_def)); +void fill_fopen_filefunc OF((zlib_filefunc_def* pzlib_filefunc_def)); + +/* now internal definition, only for zip.c and unzip.h */ +typedef struct zlib_filefunc64_32_def_s +{ + zlib_filefunc64_def zfile_func64; + open_file_func zopen32_file; + tell_file_func ztell32_file; + seek_file_func zseek32_file; +} zlib_filefunc64_32_def; + + +#define ZREAD64(filefunc,filestream,buf,size) ((*((filefunc).zfile_func64.zread_file)) ((filefunc).zfile_func64.opaque,filestream,buf,size)) +#define ZWRITE64(filefunc,filestream,buf,size) ((*((filefunc).zfile_func64.zwrite_file)) ((filefunc).zfile_func64.opaque,filestream,buf,size)) +//#define ZTELL64(filefunc,filestream) ((*((filefunc).ztell64_file)) ((filefunc).opaque,filestream)) +//#define ZSEEK64(filefunc,filestream,pos,mode) ((*((filefunc).zseek64_file)) ((filefunc).opaque,filestream,pos,mode)) +#define ZCLOSE64(filefunc,filestream) ((*((filefunc).zfile_func64.zclose_file)) ((filefunc).zfile_func64.opaque,filestream)) +#define ZERROR64(filefunc,filestream) ((*((filefunc).zfile_func64.zerror_file)) ((filefunc).zfile_func64.opaque,filestream)) + +voidpf call_zopen64 OF((const zlib_filefunc64_32_def* pfilefunc,const void*filename,int mode)); +long call_zseek64 OF((const zlib_filefunc64_32_def* pfilefunc,voidpf filestream, ZPOS64_T offset, int origin)); +ZPOS64_T call_ztell64 OF((const zlib_filefunc64_32_def* pfilefunc,voidpf filestream)); + +void fill_zlib_filefunc64_32_def_from_filefunc32(zlib_filefunc64_32_def* p_filefunc64_32,const zlib_filefunc_def* p_filefunc32); + +#define ZOPEN64(filefunc,filename,mode) (call_zopen64((&(filefunc)),(filename),(mode))) +#define ZTELL64(filefunc,filestream) (call_ztell64((&(filefunc)),(filestream))) +#define ZSEEK64(filefunc,filestream,pos,mode) (call_zseek64((&(filefunc)),(filestream),(pos),(mode))) + +#ifdef __cplusplus +} +#endif + +#endif ADDED compat/zlib/contrib/minizip/iowin32.c Index: compat/zlib/contrib/minizip/iowin32.c ================================================================== --- compat/zlib/contrib/minizip/iowin32.c +++ compat/zlib/contrib/minizip/iowin32.c @@ -0,0 +1,461 @@ +/* iowin32.c -- IO base function header for compress/uncompress .zip + Version 1.1, February 14h, 2010 + part of the MiniZip project - ( http://www.winimage.com/zLibDll/minizip.html ) + + Copyright (C) 1998-2010 Gilles Vollant (minizip) ( http://www.winimage.com/zLibDll/minizip.html ) + + Modifications for Zip64 support + Copyright (C) 2009-2010 Mathias Svensson ( http://result42.com ) + + For more info read MiniZip_info.txt + +*/ + +#include + +#include "zlib.h" +#include "ioapi.h" +#include "iowin32.h" + +#ifndef INVALID_HANDLE_VALUE +#define INVALID_HANDLE_VALUE (0xFFFFFFFF) +#endif + +#ifndef INVALID_SET_FILE_POINTER +#define INVALID_SET_FILE_POINTER ((DWORD)-1) +#endif + + +#if defined(WINAPI_FAMILY_PARTITION) && (!(defined(IOWIN32_USING_WINRT_API))) +#if WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_APP) +#define IOWIN32_USING_WINRT_API 1 +#endif +#endif + +voidpf ZCALLBACK win32_open_file_func OF((voidpf opaque, const char* filename, int mode)); +uLong ZCALLBACK win32_read_file_func OF((voidpf opaque, voidpf stream, void* buf, uLong size)); +uLong ZCALLBACK win32_write_file_func OF((voidpf opaque, voidpf stream, const void* buf, uLong size)); +ZPOS64_T ZCALLBACK win32_tell64_file_func OF((voidpf opaque, voidpf stream)); +long ZCALLBACK win32_seek64_file_func 
OF((voidpf opaque, voidpf stream, ZPOS64_T offset, int origin)); +int ZCALLBACK win32_close_file_func OF((voidpf opaque, voidpf stream)); +int ZCALLBACK win32_error_file_func OF((voidpf opaque, voidpf stream)); + +typedef struct +{ + HANDLE hf; + int error; +} WIN32FILE_IOWIN; + + +static void win32_translate_open_mode(int mode, + DWORD* lpdwDesiredAccess, + DWORD* lpdwCreationDisposition, + DWORD* lpdwShareMode, + DWORD* lpdwFlagsAndAttributes) +{ + *lpdwDesiredAccess = *lpdwShareMode = *lpdwFlagsAndAttributes = *lpdwCreationDisposition = 0; + + if ((mode & ZLIB_FILEFUNC_MODE_READWRITEFILTER)==ZLIB_FILEFUNC_MODE_READ) + { + *lpdwDesiredAccess = GENERIC_READ; + *lpdwCreationDisposition = OPEN_EXISTING; + *lpdwShareMode = FILE_SHARE_READ; + } + else if (mode & ZLIB_FILEFUNC_MODE_EXISTING) + { + *lpdwDesiredAccess = GENERIC_WRITE | GENERIC_READ; + *lpdwCreationDisposition = OPEN_EXISTING; + } + else if (mode & ZLIB_FILEFUNC_MODE_CREATE) + { + *lpdwDesiredAccess = GENERIC_WRITE | GENERIC_READ; + *lpdwCreationDisposition = CREATE_ALWAYS; + } +} + +static voidpf win32_build_iowin(HANDLE hFile) +{ + voidpf ret=NULL; + + if ((hFile != NULL) && (hFile != INVALID_HANDLE_VALUE)) + { + WIN32FILE_IOWIN w32fiow; + w32fiow.hf = hFile; + w32fiow.error = 0; + ret = malloc(sizeof(WIN32FILE_IOWIN)); + + if (ret==NULL) + CloseHandle(hFile); + else + *((WIN32FILE_IOWIN*)ret) = w32fiow; + } + return ret; +} + +voidpf ZCALLBACK win32_open64_file_func (voidpf opaque,const void* filename,int mode) +{ + const char* mode_fopen = NULL; + DWORD dwDesiredAccess,dwCreationDisposition,dwShareMode,dwFlagsAndAttributes ; + HANDLE hFile = NULL; + + win32_translate_open_mode(mode,&dwDesiredAccess,&dwCreationDisposition,&dwShareMode,&dwFlagsAndAttributes); + +#ifdef IOWIN32_USING_WINRT_API +#ifdef UNICODE + if ((filename!=NULL) && (dwDesiredAccess != 0)) + hFile = CreateFile2((LPCTSTR)filename, dwDesiredAccess, dwShareMode, dwCreationDisposition, NULL); +#else + if ((filename!=NULL) && (dwDesiredAccess != 0)) + { + WCHAR filenameW[FILENAME_MAX + 0x200 + 1]; + MultiByteToWideChar(CP_ACP,0,(const char*)filename,-1,filenameW,FILENAME_MAX + 0x200); + hFile = CreateFile2(filenameW, dwDesiredAccess, dwShareMode, dwCreationDisposition, NULL); + } +#endif +#else + if ((filename!=NULL) && (dwDesiredAccess != 0)) + hFile = CreateFile((LPCTSTR)filename, dwDesiredAccess, dwShareMode, NULL, dwCreationDisposition, dwFlagsAndAttributes, NULL); +#endif + + return win32_build_iowin(hFile); +} + + +voidpf ZCALLBACK win32_open64_file_funcA (voidpf opaque,const void* filename,int mode) +{ + const char* mode_fopen = NULL; + DWORD dwDesiredAccess,dwCreationDisposition,dwShareMode,dwFlagsAndAttributes ; + HANDLE hFile = NULL; + + win32_translate_open_mode(mode,&dwDesiredAccess,&dwCreationDisposition,&dwShareMode,&dwFlagsAndAttributes); + +#ifdef IOWIN32_USING_WINRT_API + if ((filename!=NULL) && (dwDesiredAccess != 0)) + { + WCHAR filenameW[FILENAME_MAX + 0x200 + 1]; + MultiByteToWideChar(CP_ACP,0,(const char*)filename,-1,filenameW,FILENAME_MAX + 0x200); + hFile = CreateFile2(filenameW, dwDesiredAccess, dwShareMode, dwCreationDisposition, NULL); + } +#else + if ((filename!=NULL) && (dwDesiredAccess != 0)) + hFile = CreateFileA((LPCSTR)filename, dwDesiredAccess, dwShareMode, NULL, dwCreationDisposition, dwFlagsAndAttributes, NULL); +#endif + + return win32_build_iowin(hFile); +} + + +voidpf ZCALLBACK win32_open64_file_funcW (voidpf opaque,const void* filename,int mode) +{ + const char* mode_fopen = NULL; + DWORD 
dwDesiredAccess,dwCreationDisposition,dwShareMode,dwFlagsAndAttributes ; + HANDLE hFile = NULL; + + win32_translate_open_mode(mode,&dwDesiredAccess,&dwCreationDisposition,&dwShareMode,&dwFlagsAndAttributes); + +#ifdef IOWIN32_USING_WINRT_API + if ((filename!=NULL) && (dwDesiredAccess != 0)) + hFile = CreateFile2((LPCWSTR)filename, dwDesiredAccess, dwShareMode, dwCreationDisposition,NULL); +#else + if ((filename!=NULL) && (dwDesiredAccess != 0)) + hFile = CreateFileW((LPCWSTR)filename, dwDesiredAccess, dwShareMode, NULL, dwCreationDisposition, dwFlagsAndAttributes, NULL); +#endif + + return win32_build_iowin(hFile); +} + + +voidpf ZCALLBACK win32_open_file_func (voidpf opaque,const char* filename,int mode) +{ + const char* mode_fopen = NULL; + DWORD dwDesiredAccess,dwCreationDisposition,dwShareMode,dwFlagsAndAttributes ; + HANDLE hFile = NULL; + + win32_translate_open_mode(mode,&dwDesiredAccess,&dwCreationDisposition,&dwShareMode,&dwFlagsAndAttributes); + +#ifdef IOWIN32_USING_WINRT_API +#ifdef UNICODE + if ((filename!=NULL) && (dwDesiredAccess != 0)) + hFile = CreateFile2((LPCTSTR)filename, dwDesiredAccess, dwShareMode, dwCreationDisposition, NULL); +#else + if ((filename!=NULL) && (dwDesiredAccess != 0)) + { + WCHAR filenameW[FILENAME_MAX + 0x200 + 1]; + MultiByteToWideChar(CP_ACP,0,(const char*)filename,-1,filenameW,FILENAME_MAX + 0x200); + hFile = CreateFile2(filenameW, dwDesiredAccess, dwShareMode, dwCreationDisposition, NULL); + } +#endif +#else + if ((filename!=NULL) && (dwDesiredAccess != 0)) + hFile = CreateFile((LPCTSTR)filename, dwDesiredAccess, dwShareMode, NULL, dwCreationDisposition, dwFlagsAndAttributes, NULL); +#endif + + return win32_build_iowin(hFile); +} + + +uLong ZCALLBACK win32_read_file_func (voidpf opaque, voidpf stream, void* buf,uLong size) +{ + uLong ret=0; + HANDLE hFile = NULL; + if (stream!=NULL) + hFile = ((WIN32FILE_IOWIN*)stream) -> hf; + + if (hFile != NULL) + { + if (!ReadFile(hFile, buf, size, &ret, NULL)) + { + DWORD dwErr = GetLastError(); + if (dwErr == ERROR_HANDLE_EOF) + dwErr = 0; + ((WIN32FILE_IOWIN*)stream) -> error=(int)dwErr; + } + } + + return ret; +} + + +uLong ZCALLBACK win32_write_file_func (voidpf opaque,voidpf stream,const void* buf,uLong size) +{ + uLong ret=0; + HANDLE hFile = NULL; + if (stream!=NULL) + hFile = ((WIN32FILE_IOWIN*)stream) -> hf; + + if (hFile != NULL) + { + if (!WriteFile(hFile, buf, size, &ret, NULL)) + { + DWORD dwErr = GetLastError(); + if (dwErr == ERROR_HANDLE_EOF) + dwErr = 0; + ((WIN32FILE_IOWIN*)stream) -> error=(int)dwErr; + } + } + + return ret; +} + +static BOOL MySetFilePointerEx(HANDLE hFile, LARGE_INTEGER pos, LARGE_INTEGER *newPos, DWORD dwMoveMethod) +{ +#ifdef IOWIN32_USING_WINRT_API + return SetFilePointerEx(hFile, pos, newPos, dwMoveMethod); +#else + LONG lHigh = pos.HighPart; + DWORD dwNewPos = SetFilePointer(hFile, pos.LowPart, &lHigh, FILE_CURRENT); + BOOL fOk = TRUE; + if (dwNewPos == 0xFFFFFFFF) + if (GetLastError() != NO_ERROR) + fOk = FALSE; + if ((newPos != NULL) && (fOk)) + { + newPos->LowPart = dwNewPos; + newPos->HighPart = lHigh; + } + return fOk; +#endif +} + +long ZCALLBACK win32_tell_file_func (voidpf opaque,voidpf stream) +{ + long ret=-1; + HANDLE hFile = NULL; + if (stream!=NULL) + hFile = ((WIN32FILE_IOWIN*)stream) -> hf; + if (hFile != NULL) + { + LARGE_INTEGER pos; + pos.QuadPart = 0; + + if (!MySetFilePointerEx(hFile, pos, &pos, FILE_CURRENT)) + { + DWORD dwErr = GetLastError(); + ((WIN32FILE_IOWIN*)stream) -> error=(int)dwErr; + ret = -1; + } + else + ret=(long)pos.LowPart; + } + 
return ret; +} + +ZPOS64_T ZCALLBACK win32_tell64_file_func (voidpf opaque, voidpf stream) +{ + ZPOS64_T ret= (ZPOS64_T)-1; + HANDLE hFile = NULL; + if (stream!=NULL) + hFile = ((WIN32FILE_IOWIN*)stream)->hf; + + if (hFile) + { + LARGE_INTEGER pos; + pos.QuadPart = 0; + + if (!MySetFilePointerEx(hFile, pos, &pos, FILE_CURRENT)) + { + DWORD dwErr = GetLastError(); + ((WIN32FILE_IOWIN*)stream) -> error=(int)dwErr; + ret = (ZPOS64_T)-1; + } + else + ret=pos.QuadPart; + } + return ret; +} + + +long ZCALLBACK win32_seek_file_func (voidpf opaque,voidpf stream,uLong offset,int origin) +{ + DWORD dwMoveMethod=0xFFFFFFFF; + HANDLE hFile = NULL; + + long ret=-1; + if (stream!=NULL) + hFile = ((WIN32FILE_IOWIN*)stream) -> hf; + switch (origin) + { + case ZLIB_FILEFUNC_SEEK_CUR : + dwMoveMethod = FILE_CURRENT; + break; + case ZLIB_FILEFUNC_SEEK_END : + dwMoveMethod = FILE_END; + break; + case ZLIB_FILEFUNC_SEEK_SET : + dwMoveMethod = FILE_BEGIN; + break; + default: return -1; + } + + if (hFile != NULL) + { + LARGE_INTEGER pos; + pos.QuadPart = offset; + if (!MySetFilePointerEx(hFile, pos, NULL, dwMoveMethod)) + { + DWORD dwErr = GetLastError(); + ((WIN32FILE_IOWIN*)stream) -> error=(int)dwErr; + ret = -1; + } + else + ret=0; + } + return ret; +} + +long ZCALLBACK win32_seek64_file_func (voidpf opaque, voidpf stream,ZPOS64_T offset,int origin) +{ + DWORD dwMoveMethod=0xFFFFFFFF; + HANDLE hFile = NULL; + long ret=-1; + + if (stream!=NULL) + hFile = ((WIN32FILE_IOWIN*)stream)->hf; + + switch (origin) + { + case ZLIB_FILEFUNC_SEEK_CUR : + dwMoveMethod = FILE_CURRENT; + break; + case ZLIB_FILEFUNC_SEEK_END : + dwMoveMethod = FILE_END; + break; + case ZLIB_FILEFUNC_SEEK_SET : + dwMoveMethod = FILE_BEGIN; + break; + default: return -1; + } + + if (hFile) + { + LARGE_INTEGER pos; + pos.QuadPart = offset; + if (!MySetFilePointerEx(hFile, pos, NULL, FILE_CURRENT)) + { + DWORD dwErr = GetLastError(); + ((WIN32FILE_IOWIN*)stream) -> error=(int)dwErr; + ret = -1; + } + else + ret=0; + } + return ret; +} + +int ZCALLBACK win32_close_file_func (voidpf opaque, voidpf stream) +{ + int ret=-1; + + if (stream!=NULL) + { + HANDLE hFile; + hFile = ((WIN32FILE_IOWIN*)stream) -> hf; + if (hFile != NULL) + { + CloseHandle(hFile); + ret=0; + } + free(stream); + } + return ret; +} + +int ZCALLBACK win32_error_file_func (voidpf opaque,voidpf stream) +{ + int ret=-1; + if (stream!=NULL) + { + ret = ((WIN32FILE_IOWIN*)stream) -> error; + } + return ret; +} + +void fill_win32_filefunc (zlib_filefunc_def* pzlib_filefunc_def) +{ + pzlib_filefunc_def->zopen_file = win32_open_file_func; + pzlib_filefunc_def->zread_file = win32_read_file_func; + pzlib_filefunc_def->zwrite_file = win32_write_file_func; + pzlib_filefunc_def->ztell_file = win32_tell_file_func; + pzlib_filefunc_def->zseek_file = win32_seek_file_func; + pzlib_filefunc_def->zclose_file = win32_close_file_func; + pzlib_filefunc_def->zerror_file = win32_error_file_func; + pzlib_filefunc_def->opaque = NULL; +} + +void fill_win32_filefunc64(zlib_filefunc64_def* pzlib_filefunc_def) +{ + pzlib_filefunc_def->zopen64_file = win32_open64_file_func; + pzlib_filefunc_def->zread_file = win32_read_file_func; + pzlib_filefunc_def->zwrite_file = win32_write_file_func; + pzlib_filefunc_def->ztell64_file = win32_tell64_file_func; + pzlib_filefunc_def->zseek64_file = win32_seek64_file_func; + pzlib_filefunc_def->zclose_file = win32_close_file_func; + pzlib_filefunc_def->zerror_file = win32_error_file_func; + pzlib_filefunc_def->opaque = NULL; +} + + +void 
fill_win32_filefunc64A(zlib_filefunc64_def* pzlib_filefunc_def) +{ + pzlib_filefunc_def->zopen64_file = win32_open64_file_funcA; + pzlib_filefunc_def->zread_file = win32_read_file_func; + pzlib_filefunc_def->zwrite_file = win32_write_file_func; + pzlib_filefunc_def->ztell64_file = win32_tell64_file_func; + pzlib_filefunc_def->zseek64_file = win32_seek64_file_func; + pzlib_filefunc_def->zclose_file = win32_close_file_func; + pzlib_filefunc_def->zerror_file = win32_error_file_func; + pzlib_filefunc_def->opaque = NULL; +} + + +void fill_win32_filefunc64W(zlib_filefunc64_def* pzlib_filefunc_def) +{ + pzlib_filefunc_def->zopen64_file = win32_open64_file_funcW; + pzlib_filefunc_def->zread_file = win32_read_file_func; + pzlib_filefunc_def->zwrite_file = win32_write_file_func; + pzlib_filefunc_def->ztell64_file = win32_tell64_file_func; + pzlib_filefunc_def->zseek64_file = win32_seek64_file_func; + pzlib_filefunc_def->zclose_file = win32_close_file_func; + pzlib_filefunc_def->zerror_file = win32_error_file_func; + pzlib_filefunc_def->opaque = NULL; +} ADDED compat/zlib/contrib/minizip/iowin32.h Index: compat/zlib/contrib/minizip/iowin32.h ================================================================== --- compat/zlib/contrib/minizip/iowin32.h +++ compat/zlib/contrib/minizip/iowin32.h @@ -0,0 +1,28 @@ +/* iowin32.h -- IO base function header for compress/uncompress .zip + Version 1.1, February 14h, 2010 + part of the MiniZip project - ( http://www.winimage.com/zLibDll/minizip.html ) + + Copyright (C) 1998-2010 Gilles Vollant (minizip) ( http://www.winimage.com/zLibDll/minizip.html ) + + Modifications for Zip64 support + Copyright (C) 2009-2010 Mathias Svensson ( http://result42.com ) + + For more info read MiniZip_info.txt + +*/ + +#include + + +#ifdef __cplusplus +extern "C" { +#endif + +void fill_win32_filefunc OF((zlib_filefunc_def* pzlib_filefunc_def)); +void fill_win32_filefunc64 OF((zlib_filefunc64_def* pzlib_filefunc_def)); +void fill_win32_filefunc64A OF((zlib_filefunc64_def* pzlib_filefunc_def)); +void fill_win32_filefunc64W OF((zlib_filefunc64_def* pzlib_filefunc_def)); + +#ifdef __cplusplus +} +#endif ADDED compat/zlib/contrib/minizip/make_vms.com Index: compat/zlib/contrib/minizip/make_vms.com ================================================================== --- compat/zlib/contrib/minizip/make_vms.com +++ compat/zlib/contrib/minizip/make_vms.com @@ -0,0 +1,25 @@ +$ if f$search("ioapi.h_orig") .eqs. 
"" then copy ioapi.h ioapi.h_orig +$ open/write zdef vmsdefs.h +$ copy sys$input: zdef +$ deck +#define unix +#define fill_zlib_filefunc64_32_def_from_filefunc32 fillzffunc64from +#define Write_Zip64EndOfCentralDirectoryLocator Write_Zip64EoDLocator +#define Write_Zip64EndOfCentralDirectoryRecord Write_Zip64EoDRecord +#define Write_EndOfCentralDirectoryRecord Write_EoDRecord +$ eod +$ close zdef +$ copy vmsdefs.h,ioapi.h_orig ioapi.h +$ cc/include=[--]/prefix=all ioapi.c +$ cc/include=[--]/prefix=all miniunz.c +$ cc/include=[--]/prefix=all unzip.c +$ cc/include=[--]/prefix=all minizip.c +$ cc/include=[--]/prefix=all zip.c +$ link miniunz,unzip,ioapi,[--]libz.olb/lib +$ link minizip,zip,ioapi,[--]libz.olb/lib +$ mcr []minizip test minizip_info.txt +$ mcr []miniunz -l test.zip +$ rename minizip_info.txt; minizip_info.txt_old +$ mcr []miniunz test.zip +$ delete test.zip;* +$exit ADDED compat/zlib/contrib/minizip/miniunz.c Index: compat/zlib/contrib/minizip/miniunz.c ================================================================== --- compat/zlib/contrib/minizip/miniunz.c +++ compat/zlib/contrib/minizip/miniunz.c @@ -0,0 +1,660 @@ +/* + miniunz.c + Version 1.1, February 14h, 2010 + sample part of the MiniZip project - ( http://www.winimage.com/zLibDll/minizip.html ) + + Copyright (C) 1998-2010 Gilles Vollant (minizip) ( http://www.winimage.com/zLibDll/minizip.html ) + + Modifications of Unzip for Zip64 + Copyright (C) 2007-2008 Even Rouault + + Modifications for Zip64 support on both zip and unzip + Copyright (C) 2009-2010 Mathias Svensson ( http://result42.com ) +*/ + +#if (!defined(_WIN32)) && (!defined(WIN32)) && (!defined(__APPLE__)) + #ifndef __USE_FILE_OFFSET64 + #define __USE_FILE_OFFSET64 + #endif + #ifndef __USE_LARGEFILE64 + #define __USE_LARGEFILE64 + #endif + #ifndef _LARGEFILE64_SOURCE + #define _LARGEFILE64_SOURCE + #endif + #ifndef _FILE_OFFSET_BIT + #define _FILE_OFFSET_BIT 64 + #endif +#endif + +#ifdef __APPLE__ +// In darwin and perhaps other BSD variants off_t is a 64 bit value, hence no need for specific 64 bit functions +#define FOPEN_FUNC(filename, mode) fopen(filename, mode) +#define FTELLO_FUNC(stream) ftello(stream) +#define FSEEKO_FUNC(stream, offset, origin) fseeko(stream, offset, origin) +#else +#define FOPEN_FUNC(filename, mode) fopen64(filename, mode) +#define FTELLO_FUNC(stream) ftello64(stream) +#define FSEEKO_FUNC(stream, offset, origin) fseeko64(stream, offset, origin) +#endif + + +#include +#include +#include +#include +#include +#include + +#ifdef _WIN32 +# include +# include +#else +# include +# include +#endif + + +#include "unzip.h" + +#define CASESENSITIVITY (0) +#define WRITEBUFFERSIZE (8192) +#define MAXFILENAME (256) + +#ifdef _WIN32 +#define USEWIN32IOAPI +#include "iowin32.h" +#endif +/* + mini unzip, demo of unzip package + + usage : + Usage : miniunz [-exvlo] file.zip [file_to_extract] [-d extractdir] + + list the file in the zipfile, and print the content of FILE_ID.ZIP or README.TXT + if it exists +*/ + + +/* change_file_date : change the date/time of a file + filename : the filename of the file where date/time must be modified + dosdate : the new date at the MSDos format (4 bytes) + tmu_date : the SAME new date at the tm_unz format */ +void change_file_date(filename,dosdate,tmu_date) + const char *filename; + uLong dosdate; + tm_unz tmu_date; +{ +#ifdef _WIN32 + HANDLE hFile; + FILETIME ftm,ftLocal,ftCreate,ftLastAcc,ftLastWrite; + + hFile = CreateFileA(filename,GENERIC_READ | GENERIC_WRITE, + 0,NULL,OPEN_EXISTING,0,NULL); + 
GetFileTime(hFile,&ftCreate,&ftLastAcc,&ftLastWrite); + DosDateTimeToFileTime((WORD)(dosdate>>16),(WORD)dosdate,&ftLocal); + LocalFileTimeToFileTime(&ftLocal,&ftm); + SetFileTime(hFile,&ftm,&ftLastAcc,&ftm); + CloseHandle(hFile); +#else +#ifdef unix || __APPLE__ + struct utimbuf ut; + struct tm newdate; + newdate.tm_sec = tmu_date.tm_sec; + newdate.tm_min=tmu_date.tm_min; + newdate.tm_hour=tmu_date.tm_hour; + newdate.tm_mday=tmu_date.tm_mday; + newdate.tm_mon=tmu_date.tm_mon; + if (tmu_date.tm_year > 1900) + newdate.tm_year=tmu_date.tm_year - 1900; + else + newdate.tm_year=tmu_date.tm_year ; + newdate.tm_isdst=-1; + + ut.actime=ut.modtime=mktime(&newdate); + utime(filename,&ut); +#endif +#endif +} + + +/* mymkdir and change_file_date are not 100 % portable + As I don't know well Unix, I wait feedback for the unix portion */ + +int mymkdir(dirname) + const char* dirname; +{ + int ret=0; +#ifdef _WIN32 + ret = _mkdir(dirname); +#elif unix + ret = mkdir (dirname,0775); +#elif __APPLE__ + ret = mkdir (dirname,0775); +#endif + return ret; +} + +int makedir (newdir) + char *newdir; +{ + char *buffer ; + char *p; + int len = (int)strlen(newdir); + + if (len <= 0) + return 0; + + buffer = (char*)malloc(len+1); + if (buffer==NULL) + { + printf("Error allocating memory\n"); + return UNZ_INTERNALERROR; + } + strcpy(buffer,newdir); + + if (buffer[len-1] == '/') { + buffer[len-1] = '\0'; + } + if (mymkdir(buffer) == 0) + { + free(buffer); + return 1; + } + + p = buffer+1; + while (1) + { + char hold; + + while(*p && *p != '\\' && *p != '/') + p++; + hold = *p; + *p = 0; + if ((mymkdir(buffer) == -1) && (errno == ENOENT)) + { + printf("couldn't create directory %s\n",buffer); + free(buffer); + return 0; + } + if (hold == 0) + break; + *p++ = hold; + } + free(buffer); + return 1; +} + +void do_banner() +{ + printf("MiniUnz 1.01b, demo of zLib + Unz package written by Gilles Vollant\n"); + printf("more info at http://www.winimage.com/zLibDll/unzip.html\n\n"); +} + +void do_help() +{ + printf("Usage : miniunz [-e] [-x] [-v] [-l] [-o] [-p password] file.zip [file_to_extr.] 
[-d extractdir]\n\n" \ + " -e Extract without pathname (junk paths)\n" \ + " -x Extract with pathname\n" \ + " -v list files\n" \ + " -l list files\n" \ + " -d directory to extract into\n" \ + " -o overwrite files without prompting\n" \ + " -p extract crypted file using password\n\n"); +} + +void Display64BitsSize(ZPOS64_T n, int size_char) +{ + /* to avoid compatibility problem , we do here the conversion */ + char number[21]; + int offset=19; + int pos_string = 19; + number[20]=0; + for (;;) { + number[offset]=(char)((n%10)+'0'); + if (number[offset] != '0') + pos_string=offset; + n/=10; + if (offset==0) + break; + offset--; + } + { + int size_display_string = 19-pos_string; + while (size_char > size_display_string) + { + size_char--; + printf(" "); + } + } + + printf("%s",&number[pos_string]); +} + +int do_list(uf) + unzFile uf; +{ + uLong i; + unz_global_info64 gi; + int err; + + err = unzGetGlobalInfo64(uf,&gi); + if (err!=UNZ_OK) + printf("error %d with zipfile in unzGetGlobalInfo \n",err); + printf(" Length Method Size Ratio Date Time CRC-32 Name\n"); + printf(" ------ ------ ---- ----- ---- ---- ------ ----\n"); + for (i=0;i0) + ratio = (uLong)((file_info.compressed_size*100)/file_info.uncompressed_size); + + /* display a '*' if the file is crypted */ + if ((file_info.flag & 1) != 0) + charCrypt='*'; + + if (file_info.compression_method==0) + string_method="Stored"; + else + if (file_info.compression_method==Z_DEFLATED) + { + uInt iLevel=(uInt)((file_info.flag & 0x6)/2); + if (iLevel==0) + string_method="Defl:N"; + else if (iLevel==1) + string_method="Defl:X"; + else if ((iLevel==2) || (iLevel==3)) + string_method="Defl:F"; /* 2:fast , 3 : extra fast*/ + } + else + if (file_info.compression_method==Z_BZIP2ED) + { + string_method="BZip2 "; + } + else + string_method="Unkn. 
"; + + Display64BitsSize(file_info.uncompressed_size,7); + printf(" %6s%c",string_method,charCrypt); + Display64BitsSize(file_info.compressed_size,7); + printf(" %3lu%% %2.2lu-%2.2lu-%2.2lu %2.2lu:%2.2lu %8.8lx %s\n", + ratio, + (uLong)file_info.tmu_date.tm_mon + 1, + (uLong)file_info.tmu_date.tm_mday, + (uLong)file_info.tmu_date.tm_year % 100, + (uLong)file_info.tmu_date.tm_hour,(uLong)file_info.tmu_date.tm_min, + (uLong)file_info.crc,filename_inzip); + if ((i+1)='a') && (rep<='z')) + rep -= 0x20; + } + while ((rep!='Y') && (rep!='N') && (rep!='A')); + } + + if (rep == 'N') + skip = 1; + + if (rep == 'A') + *popt_overwrite=1; + } + + if ((skip==0) && (err==UNZ_OK)) + { + fout=FOPEN_FUNC(write_filename,"wb"); + /* some zipfile don't contain directory alone before file */ + if ((fout==NULL) && ((*popt_extract_without_path)==0) && + (filename_withoutpath!=(char*)filename_inzip)) + { + char c=*(filename_withoutpath-1); + *(filename_withoutpath-1)='\0'; + makedir(write_filename); + *(filename_withoutpath-1)=c; + fout=FOPEN_FUNC(write_filename,"wb"); + } + + if (fout==NULL) + { + printf("error opening %s\n",write_filename); + } + } + + if (fout!=NULL) + { + printf(" extracting: %s\n",write_filename); + + do + { + err = unzReadCurrentFile(uf,buf,size_buf); + if (err<0) + { + printf("error %d with zipfile in unzReadCurrentFile\n",err); + break; + } + if (err>0) + if (fwrite(buf,err,1,fout)!=1) + { + printf("error in writing extracted file\n"); + err=UNZ_ERRNO; + break; + } + } + while (err>0); + if (fout) + fclose(fout); + + if (err==0) + change_file_date(write_filename,file_info.dosDate, + file_info.tmu_date); + } + + if (err==UNZ_OK) + { + err = unzCloseCurrentFile (uf); + if (err!=UNZ_OK) + { + printf("error %d with zipfile in unzCloseCurrentFile\n",err); + } + } + else + unzCloseCurrentFile(uf); /* don't lose the error */ + } + + free(buf); + return err; +} + + +int do_extract(uf,opt_extract_without_path,opt_overwrite,password) + unzFile uf; + int opt_extract_without_path; + int opt_overwrite; + const char* password; +{ + uLong i; + unz_global_info64 gi; + int err; + FILE* fout=NULL; + + err = unzGetGlobalInfo64(uf,&gi); + if (err!=UNZ_OK) + printf("error %d with zipfile in unzGetGlobalInfo \n",err); + + for (i=0;i insert n+1 empty lines +.\" for manpage-specific macros, see man(7) +.SH NAME +miniunzip - uncompress and examine ZIP archives +.SH SYNOPSIS +.B miniunzip +.RI [ -exvlo ] +zipfile [ files_to_extract ] [-d tempdir] +.SH DESCRIPTION +.B minizip +is a simple tool which allows the extraction of compressed file +archives in the ZIP format used by the MS-DOS utility PKZIP. It was +written as a demonstration of the +.IR zlib (3) +library and therefore lack many of the features of the +.IR unzip (1) +program. +.SH OPTIONS +A number of options are supported. With the exception of +.BI \-d\ tempdir +these must be supplied before any +other arguments and are: +.TP +.BI \-l\ ,\ \-\-v +List the files in the archive without extracting them. +.TP +.B \-o +Overwrite files without prompting for confirmation. +.TP +.B \-x +Extract files (default). +.PP +The +.I zipfile +argument is the name of the archive to process. The next argument can be used +to specify a single file to extract from the archive. + +Lastly, the following option can be specified at the end of the command-line: +.TP +.BI \-d\ tempdir +Extract the archive in the directory +.I tempdir +rather than the current directory. +.SH SEE ALSO +.BR minizip (1), +.BR zlib (3), +.BR unzip (1). 
+.SH AUTHOR +This program was written by Gilles Vollant. This manual page was +written by Mark Brown . The -d tempdir option +was added by Dirk Eddelbuettel . ADDED compat/zlib/contrib/minizip/minizip.1 Index: compat/zlib/contrib/minizip/minizip.1 ================================================================== --- compat/zlib/contrib/minizip/minizip.1 +++ compat/zlib/contrib/minizip/minizip.1 @@ -0,0 +1,46 @@ +.\" Hey, EMACS: -*- nroff -*- +.TH minizip 1 "May 2, 2001" +.\" Please adjust this date whenever revising the manpage. +.\" +.\" Some roff macros, for reference: +.\" .nh disable hyphenation +.\" .hy enable hyphenation +.\" .ad l left justify +.\" .ad b justify to both left and right margins +.\" .nf disable filling +.\" .fi enable filling +.\" .br insert line break +.\" .sp insert n+1 empty lines +.\" for manpage-specific macros, see man(7) +.SH NAME +minizip - create ZIP archives +.SH SYNOPSIS +.B minizip +.RI [ -o ] +zipfile [ " files" ... ] +.SH DESCRIPTION +.B minizip +is a simple tool which allows the creation of compressed file archives +in the ZIP format used by the MS-DOS utility PKZIP. It was written as +a demonstration of the +.IR zlib (3) +library and therefore lack many of the features of the +.IR zip (1) +program. +.SH OPTIONS +The first argument supplied is the name of the ZIP archive to create or +.RI -o +in which case it is ignored and the second argument treated as the +name of the ZIP file. If the ZIP file already exists it will be +overwritten. +.PP +Subsequent arguments specify a list of files to place in the ZIP +archive. If none are specified then an empty archive will be created. +.SH SEE ALSO +.BR miniunzip (1), +.BR zlib (3), +.BR zip (1). +.SH AUTHOR +This program was written by Gilles Vollant. This manual page was +written by Mark Brown . 
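The minizip.c sample that follows exercises the writer side of this library (zip.h), just as miniunz.c above exercises the reader side (unzip.h). As a rough orientation before reading the full sample, a minimal sketch of the same write path might look like the code below; it assumes an existing input file named example.txt (both file names are illustrative), reduces error handling to early returns, and leaves out the timestamp, password, and Zip64 handling that minizip.c layers on top:

    /* Sketch only: create example.zip containing a single deflated entry. */
    #include <stdio.h>
    #include "zip.h"

    int main(void)
    {
        zipFile zf = zipOpen64("example.zip", 0);   /* 0 = create a new archive */
        FILE *in;
        char buf[4096];
        size_t n;

        if (zf == NULL) return 1;
        /* A NULL zip_fileinfo leaves the entry's DOS date zeroed; the final
         * argument (zip64) is 0 because the input is assumed to be small. */
        if (zipOpenNewFileInZip64(zf, "example.txt", NULL, NULL, 0, NULL, 0,
                                  NULL, Z_DEFLATED, Z_DEFAULT_COMPRESSION, 0) != ZIP_OK)
            return 1;
        in = fopen("example.txt", "rb");
        while (in != NULL && (n = fread(buf, 1, sizeof(buf), in)) > 0)
            zipWriteInFileInZip(zf, buf, (unsigned)n);
        if (in != NULL) fclose(in);
        zipCloseFileInZip(zf);
        return zipClose(zf, NULL) == ZIP_OK ? 0 : 1;
    }

minizip.c differs from this sketch mainly in that it fills the zip_fileinfo from the source file's modification time, pre-computes the CRC when a password is supplied, and turns the zip64 flag on for inputs that may exceed 4 GB.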
+ ADDED compat/zlib/contrib/minizip/minizip.c Index: compat/zlib/contrib/minizip/minizip.c ================================================================== --- compat/zlib/contrib/minizip/minizip.c +++ compat/zlib/contrib/minizip/minizip.c @@ -0,0 +1,520 @@ +/* + minizip.c + Version 1.1, February 14h, 2010 + sample part of the MiniZip project - ( http://www.winimage.com/zLibDll/minizip.html ) + + Copyright (C) 1998-2010 Gilles Vollant (minizip) ( http://www.winimage.com/zLibDll/minizip.html ) + + Modifications of Unzip for Zip64 + Copyright (C) 2007-2008 Even Rouault + + Modifications for Zip64 support on both zip and unzip + Copyright (C) 2009-2010 Mathias Svensson ( http://result42.com ) +*/ + + +#if (!defined(_WIN32)) && (!defined(WIN32)) && (!defined(__APPLE__)) + #ifndef __USE_FILE_OFFSET64 + #define __USE_FILE_OFFSET64 + #endif + #ifndef __USE_LARGEFILE64 + #define __USE_LARGEFILE64 + #endif + #ifndef _LARGEFILE64_SOURCE + #define _LARGEFILE64_SOURCE + #endif + #ifndef _FILE_OFFSET_BIT + #define _FILE_OFFSET_BIT 64 + #endif +#endif + +#ifdef __APPLE__ +// In darwin and perhaps other BSD variants off_t is a 64 bit value, hence no need for specific 64 bit functions +#define FOPEN_FUNC(filename, mode) fopen(filename, mode) +#define FTELLO_FUNC(stream) ftello(stream) +#define FSEEKO_FUNC(stream, offset, origin) fseeko(stream, offset, origin) +#else +#define FOPEN_FUNC(filename, mode) fopen64(filename, mode) +#define FTELLO_FUNC(stream) ftello64(stream) +#define FSEEKO_FUNC(stream, offset, origin) fseeko64(stream, offset, origin) +#endif + + + +#include +#include +#include +#include +#include +#include + +#ifdef _WIN32 +# include +# include +#else +# include +# include +# include +# include +#endif + +#include "zip.h" + +#ifdef _WIN32 + #define USEWIN32IOAPI + #include "iowin32.h" +#endif + + + +#define WRITEBUFFERSIZE (16384) +#define MAXFILENAME (256) + +#ifdef _WIN32 +uLong filetime(f, tmzip, dt) + char *f; /* name of file to get info on */ + tm_zip *tmzip; /* return value: access, modific. and creation times */ + uLong *dt; /* dostime */ +{ + int ret = 0; + { + FILETIME ftLocal; + HANDLE hFind; + WIN32_FIND_DATAA ff32; + + hFind = FindFirstFileA(f,&ff32); + if (hFind != INVALID_HANDLE_VALUE) + { + FileTimeToLocalFileTime(&(ff32.ftLastWriteTime),&ftLocal); + FileTimeToDosDateTime(&ftLocal,((LPWORD)dt)+1,((LPWORD)dt)+0); + FindClose(hFind); + ret = 1; + } + } + return ret; +} +#else +#ifdef unix || __APPLE__ +uLong filetime(f, tmzip, dt) + char *f; /* name of file to get info on */ + tm_zip *tmzip; /* return value: access, modific. and creation times */ + uLong *dt; /* dostime */ +{ + int ret=0; + struct stat s; /* results of stat() */ + struct tm* filedate; + time_t tm_t=0; + + if (strcmp(f,"-")!=0) + { + char name[MAXFILENAME+1]; + int len = strlen(f); + if (len > MAXFILENAME) + len = MAXFILENAME; + + strncpy(name, f,MAXFILENAME-1); + /* strncpy doesnt append the trailing NULL, of the string is too long. 
*/ + name[ MAXFILENAME ] = '\0'; + + if (name[len - 1] == '/') + name[len - 1] = '\0'; + /* not all systems allow stat'ing a file with / appended */ + if (stat(name,&s)==0) + { + tm_t = s.st_mtime; + ret = 1; + } + } + filedate = localtime(&tm_t); + + tmzip->tm_sec = filedate->tm_sec; + tmzip->tm_min = filedate->tm_min; + tmzip->tm_hour = filedate->tm_hour; + tmzip->tm_mday = filedate->tm_mday; + tmzip->tm_mon = filedate->tm_mon ; + tmzip->tm_year = filedate->tm_year; + + return ret; +} +#else +uLong filetime(f, tmzip, dt) + char *f; /* name of file to get info on */ + tm_zip *tmzip; /* return value: access, modific. and creation times */ + uLong *dt; /* dostime */ +{ + return 0; +} +#endif +#endif + + + + +int check_exist_file(filename) + const char* filename; +{ + FILE* ftestexist; + int ret = 1; + ftestexist = FOPEN_FUNC(filename,"rb"); + if (ftestexist==NULL) + ret = 0; + else + fclose(ftestexist); + return ret; +} + +void do_banner() +{ + printf("MiniZip 1.1, demo of zLib + MiniZip64 package, written by Gilles Vollant\n"); + printf("more info on MiniZip at http://www.winimage.com/zLibDll/minizip.html\n\n"); +} + +void do_help() +{ + printf("Usage : minizip [-o] [-a] [-0 to -9] [-p password] [-j] file.zip [files_to_add]\n\n" \ + " -o Overwrite existing file.zip\n" \ + " -a Append to existing file.zip\n" \ + " -0 Store only\n" \ + " -1 Compress faster\n" \ + " -9 Compress better\n\n" \ + " -j exclude path. store only the file name.\n\n"); +} + +/* calculate the CRC32 of a file, + because to encrypt a file, we need known the CRC32 of the file before */ +int getFileCrc(const char* filenameinzip,void*buf,unsigned long size_buf,unsigned long* result_crc) +{ + unsigned long calculate_crc=0; + int err=ZIP_OK; + FILE * fin = FOPEN_FUNC(filenameinzip,"rb"); + + unsigned long size_read = 0; + unsigned long total_read = 0; + if (fin==NULL) + { + err = ZIP_ERRNO; + } + + if (err == ZIP_OK) + do + { + err = ZIP_OK; + size_read = (int)fread(buf,1,size_buf,fin); + if (size_read < size_buf) + if (feof(fin)==0) + { + printf("error in reading %s\n",filenameinzip); + err = ZIP_ERRNO; + } + + if (size_read>0) + calculate_crc = crc32(calculate_crc,buf,size_read); + total_read += size_read; + + } while ((err == ZIP_OK) && (size_read>0)); + + if (fin) + fclose(fin); + + *result_crc=calculate_crc; + printf("file %s crc %lx\n", filenameinzip, calculate_crc); + return err; +} + +int isLargeFile(const char* filename) +{ + int largeFile = 0; + ZPOS64_T pos = 0; + FILE* pFile = FOPEN_FUNC(filename, "rb"); + + if(pFile != NULL) + { + int n = FSEEKO_FUNC(pFile, 0, SEEK_END); + pos = FTELLO_FUNC(pFile); + + printf("File : %s is %lld bytes\n", filename, pos); + + if(pos >= 0xffffffff) + largeFile = 1; + + fclose(pFile); + } + + return largeFile; +} + +int main(argc,argv) + int argc; + char *argv[]; +{ + int i; + int opt_overwrite=0; + int opt_compress_level=Z_DEFAULT_COMPRESSION; + int opt_exclude_path=0; + int zipfilenamearg = 0; + char filename_try[MAXFILENAME+16]; + int zipok; + int err=0; + int size_buf=0; + void* buf=NULL; + const char* password=NULL; + + + do_banner(); + if (argc==1) + { + do_help(); + return 0; + } + else + { + for (i=1;i='0') && (c<='9')) + opt_compress_level = c-'0'; + if ((c=='j') || (c=='J')) + opt_exclude_path = 1; + + if (((c=='p') || (c=='P')) && (i+1='a') && (rep<='z')) + rep -= 0x20; + } + while ((rep!='Y') && (rep!='N') && (rep!='A')); + if (rep=='N') + zipok = 0; + if (rep=='A') + opt_overwrite = 2; + } + } + + if (zipok==1) + { + zipFile zf; + int errclose; +# ifdef USEWIN32IOAPI + 
zlib_filefunc64_def ffunc; + fill_win32_filefunc64A(&ffunc); + zf = zipOpen2_64(filename_try,(opt_overwrite==2) ? 2 : 0,NULL,&ffunc); +# else + zf = zipOpen64(filename_try,(opt_overwrite==2) ? 2 : 0); +# endif + + if (zf == NULL) + { + printf("error opening %s\n",filename_try); + err= ZIP_ERRNO; + } + else + printf("creating %s\n",filename_try); + + for (i=zipfilenamearg+1;(i='0') || (argv[i][1]<='9'))) && + (strlen(argv[i]) == 2))) + { + FILE * fin; + int size_read; + const char* filenameinzip = argv[i]; + const char *savefilenameinzip; + zip_fileinfo zi; + unsigned long crcFile=0; + int zip64 = 0; + + zi.tmz_date.tm_sec = zi.tmz_date.tm_min = zi.tmz_date.tm_hour = + zi.tmz_date.tm_mday = zi.tmz_date.tm_mon = zi.tmz_date.tm_year = 0; + zi.dosDate = 0; + zi.internal_fa = 0; + zi.external_fa = 0; + filetime(filenameinzip,&zi.tmz_date,&zi.dosDate); + +/* + err = zipOpenNewFileInZip(zf,filenameinzip,&zi, + NULL,0,NULL,0,NULL / * comment * /, + (opt_compress_level != 0) ? Z_DEFLATED : 0, + opt_compress_level); +*/ + if ((password != NULL) && (err==ZIP_OK)) + err = getFileCrc(filenameinzip,buf,size_buf,&crcFile); + + zip64 = isLargeFile(filenameinzip); + + /* The path name saved, should not include a leading slash. */ + /*if it did, windows/xp and dynazip couldn't read the zip file. */ + savefilenameinzip = filenameinzip; + while( savefilenameinzip[0] == '\\' || savefilenameinzip[0] == '/' ) + { + savefilenameinzip++; + } + + /*should the zip file contain any path at all?*/ + if( opt_exclude_path ) + { + const char *tmpptr; + const char *lastslash = 0; + for( tmpptr = savefilenameinzip; *tmpptr; tmpptr++) + { + if( *tmpptr == '\\' || *tmpptr == '/') + { + lastslash = tmpptr; + } + } + if( lastslash != NULL ) + { + savefilenameinzip = lastslash+1; // base filename follows last slash. + } + } + + /**/ + err = zipOpenNewFileInZip3_64(zf,savefilenameinzip,&zi, + NULL,0,NULL,0,NULL /* comment*/, + (opt_compress_level != 0) ? 
Z_DEFLATED : 0, + opt_compress_level,0, + /* -MAX_WBITS, DEF_MEM_LEVEL, Z_DEFAULT_STRATEGY, */ + -MAX_WBITS, DEF_MEM_LEVEL, Z_DEFAULT_STRATEGY, + password,crcFile, zip64); + + if (err != ZIP_OK) + printf("error in opening %s in zipfile\n",filenameinzip); + else + { + fin = FOPEN_FUNC(filenameinzip,"rb"); + if (fin==NULL) + { + err=ZIP_ERRNO; + printf("error in opening %s for reading\n",filenameinzip); + } + } + + if (err == ZIP_OK) + do + { + err = ZIP_OK; + size_read = (int)fread(buf,1,size_buf,fin); + if (size_read < size_buf) + if (feof(fin)==0) + { + printf("error in reading %s\n",filenameinzip); + err = ZIP_ERRNO; + } + + if (size_read>0) + { + err = zipWriteInFileInZip (zf,buf,size_read); + if (err<0) + { + printf("error in writing %s in the zipfile\n", + filenameinzip); + } + + } + } while ((err == ZIP_OK) && (size_read>0)); + + if (fin) + fclose(fin); + + if (err<0) + err=ZIP_ERRNO; + else + { + err = zipCloseFileInZip(zf); + if (err!=ZIP_OK) + printf("error in closing %s in the zipfile\n", + filenameinzip); + } + } + } + errclose = zipClose(zf,NULL); + if (errclose != ZIP_OK) + printf("error in closing %s\n",filename_try); + } + else + { + do_help(); + } + + free(buf); + return 0; +} ADDED compat/zlib/contrib/minizip/minizip.pc.in Index: compat/zlib/contrib/minizip/minizip.pc.in ================================================================== --- compat/zlib/contrib/minizip/minizip.pc.in +++ compat/zlib/contrib/minizip/minizip.pc.in @@ -0,0 +1,12 @@ +prefix=@prefix@ +exec_prefix=@exec_prefix@ +libdir=@libdir@ +includedir=@includedir@/minizip + +Name: minizip +Description: Minizip zip file manipulation library +Requires: +Version: @PACKAGE_VERSION@ +Libs: -L${libdir} -lminizip +Libs.private: -lz +Cflags: -I${includedir} ADDED compat/zlib/contrib/minizip/mztools.c Index: compat/zlib/contrib/minizip/mztools.c ================================================================== --- compat/zlib/contrib/minizip/mztools.c +++ compat/zlib/contrib/minizip/mztools.c @@ -0,0 +1,291 @@ +/* + Additional tools for Minizip + Code: Xavier Roche '2004 + License: Same as ZLIB (www.gzip.org) +*/ + +/* Code */ +#include +#include +#include +#include "zlib.h" +#include "unzip.h" + +#define READ_8(adr) ((unsigned char)*(adr)) +#define READ_16(adr) ( READ_8(adr) | (READ_8(adr+1) << 8) ) +#define READ_32(adr) ( READ_16(adr) | (READ_16((adr)+2) << 16) ) + +#define WRITE_8(buff, n) do { \ + *((unsigned char*)(buff)) = (unsigned char) ((n) & 0xff); \ +} while(0) +#define WRITE_16(buff, n) do { \ + WRITE_8((unsigned char*)(buff), n); \ + WRITE_8(((unsigned char*)(buff)) + 1, (n) >> 8); \ +} while(0) +#define WRITE_32(buff, n) do { \ + WRITE_16((unsigned char*)(buff), (n) & 0xffff); \ + WRITE_16((unsigned char*)(buff) + 2, (n) >> 16); \ +} while(0) + +extern int ZEXPORT unzRepair(file, fileOut, fileOutTmp, nRecovered, bytesRecovered) +const char* file; +const char* fileOut; +const char* fileOutTmp; +uLong* nRecovered; +uLong* bytesRecovered; +{ + int err = Z_OK; + FILE* fpZip = fopen(file, "rb"); + FILE* fpOut = fopen(fileOut, "wb"); + FILE* fpOutCD = fopen(fileOutTmp, "wb"); + if (fpZip != NULL && fpOut != NULL) { + int entries = 0; + uLong totalBytes = 0; + char header[30]; + char filename[1024]; + char extra[1024]; + int offset = 0; + int offsetCD = 0; + while ( fread(header, 1, 30, fpZip) == 30 ) { + int currentOffset = offset; + + /* File entry */ + if (READ_32(header) == 0x04034b50) { + unsigned int version = READ_16(header + 4); + unsigned int gpflag = READ_16(header + 6); + unsigned int method = 
READ_16(header + 8); + unsigned int filetime = READ_16(header + 10); + unsigned int filedate = READ_16(header + 12); + unsigned int crc = READ_32(header + 14); /* crc */ + unsigned int cpsize = READ_32(header + 18); /* compressed size */ + unsigned int uncpsize = READ_32(header + 22); /* uncompressed sz */ + unsigned int fnsize = READ_16(header + 26); /* file name length */ + unsigned int extsize = READ_16(header + 28); /* extra field length */ + filename[0] = extra[0] = '\0'; + + /* Header */ + if (fwrite(header, 1, 30, fpOut) == 30) { + offset += 30; + } else { + err = Z_ERRNO; + break; + } + + /* Filename */ + if (fnsize > 0) { + if (fnsize < sizeof(filename)) { + if (fread(filename, 1, fnsize, fpZip) == fnsize) { + if (fwrite(filename, 1, fnsize, fpOut) == fnsize) { + offset += fnsize; + } else { + err = Z_ERRNO; + break; + } + } else { + err = Z_ERRNO; + break; + } + } else { + err = Z_ERRNO; + break; + } + } else { + err = Z_STREAM_ERROR; + break; + } + + /* Extra field */ + if (extsize > 0) { + if (extsize < sizeof(extra)) { + if (fread(extra, 1, extsize, fpZip) == extsize) { + if (fwrite(extra, 1, extsize, fpOut) == extsize) { + offset += extsize; + } else { + err = Z_ERRNO; + break; + } + } else { + err = Z_ERRNO; + break; + } + } else { + err = Z_ERRNO; + break; + } + } + + /* Data */ + { + int dataSize = cpsize; + if (dataSize == 0) { + dataSize = uncpsize; + } + if (dataSize > 0) { + char* data = malloc(dataSize); + if (data != NULL) { + if ((int)fread(data, 1, dataSize, fpZip) == dataSize) { + if ((int)fwrite(data, 1, dataSize, fpOut) == dataSize) { + offset += dataSize; + totalBytes += dataSize; + } else { + err = Z_ERRNO; + } + } else { + err = Z_ERRNO; + } + free(data); + if (err != Z_OK) { + break; + } + } else { + err = Z_MEM_ERROR; + break; + } + } + } + + /* Central directory entry */ + { + char header[46]; + char* comment = ""; + int comsize = (int) strlen(comment); + WRITE_32(header, 0x02014b50); + WRITE_16(header + 4, version); + WRITE_16(header + 6, version); + WRITE_16(header + 8, gpflag); + WRITE_16(header + 10, method); + WRITE_16(header + 12, filetime); + WRITE_16(header + 14, filedate); + WRITE_32(header + 16, crc); + WRITE_32(header + 20, cpsize); + WRITE_32(header + 24, uncpsize); + WRITE_16(header + 28, fnsize); + WRITE_16(header + 30, extsize); + WRITE_16(header + 32, comsize); + WRITE_16(header + 34, 0); /* disk # */ + WRITE_16(header + 36, 0); /* int attrb */ + WRITE_32(header + 38, 0); /* ext attrb */ + WRITE_32(header + 42, currentOffset); + /* Header */ + if (fwrite(header, 1, 46, fpOutCD) == 46) { + offsetCD += 46; + + /* Filename */ + if (fnsize > 0) { + if (fwrite(filename, 1, fnsize, fpOutCD) == fnsize) { + offsetCD += fnsize; + } else { + err = Z_ERRNO; + break; + } + } else { + err = Z_STREAM_ERROR; + break; + } + + /* Extra field */ + if (extsize > 0) { + if (fwrite(extra, 1, extsize, fpOutCD) == extsize) { + offsetCD += extsize; + } else { + err = Z_ERRNO; + break; + } + } + + /* Comment field */ + if (comsize > 0) { + if ((int)fwrite(comment, 1, comsize, fpOutCD) == comsize) { + offsetCD += comsize; + } else { + err = Z_ERRNO; + break; + } + } + + + } else { + err = Z_ERRNO; + break; + } + } + + /* Success */ + entries++; + + } else { + break; + } + } + + /* Final central directory */ + { + int entriesZip = entries; + char header[22]; + char* comment = ""; // "ZIP File recovered by zlib/minizip/mztools"; + int comsize = (int) strlen(comment); + if (entriesZip > 0xffff) { + entriesZip = 0xffff; + } + WRITE_32(header, 0x06054b50); + 
WRITE_16(header + 4, 0); /* disk # */ + WRITE_16(header + 6, 0); /* disk # */ + WRITE_16(header + 8, entriesZip); /* hack */ + WRITE_16(header + 10, entriesZip); /* hack */ + WRITE_32(header + 12, offsetCD); /* size of CD */ + WRITE_32(header + 16, offset); /* offset to CD */ + WRITE_16(header + 20, comsize); /* comment */ + + /* Header */ + if (fwrite(header, 1, 22, fpOutCD) == 22) { + + /* Comment field */ + if (comsize > 0) { + if ((int)fwrite(comment, 1, comsize, fpOutCD) != comsize) { + err = Z_ERRNO; + } + } + + } else { + err = Z_ERRNO; + } + } + + /* Final merge (file + central directory) */ + fclose(fpOutCD); + if (err == Z_OK) { + fpOutCD = fopen(fileOutTmp, "rb"); + if (fpOutCD != NULL) { + int nRead; + char buffer[8192]; + while ( (nRead = (int)fread(buffer, 1, sizeof(buffer), fpOutCD)) > 0) { + if ((int)fwrite(buffer, 1, nRead, fpOut) != nRead) { + err = Z_ERRNO; + break; + } + } + fclose(fpOutCD); + } + } + + /* Close */ + fclose(fpZip); + fclose(fpOut); + + /* Wipe temporary file */ + (void)remove(fileOutTmp); + + /* Number of recovered entries */ + if (err == Z_OK) { + if (nRecovered != NULL) { + *nRecovered = entries; + } + if (bytesRecovered != NULL) { + *bytesRecovered = totalBytes; + } + } + } else { + err = Z_STREAM_ERROR; + } + return err; +} ADDED compat/zlib/contrib/minizip/mztools.h Index: compat/zlib/contrib/minizip/mztools.h ================================================================== --- compat/zlib/contrib/minizip/mztools.h +++ compat/zlib/contrib/minizip/mztools.h @@ -0,0 +1,37 @@ +/* + Additional tools for Minizip + Code: Xavier Roche '2004 + License: Same as ZLIB (www.gzip.org) +*/ + +#ifndef _zip_tools_H +#define _zip_tools_H + +#ifdef __cplusplus +extern "C" { +#endif + +#ifndef _ZLIB_H +#include "zlib.h" +#endif + +#include "unzip.h" + +/* Repair a ZIP file (missing central directory) + file: file to recover + fileOut: output file after recovery + fileOutTmp: temporary file name used for recovery +*/ +extern int ZEXPORT unzRepair(const char* file, + const char* fileOut, + const char* fileOutTmp, + uLong* nRecovered, + uLong* bytesRecovered); + + +#ifdef __cplusplus +} +#endif + + +#endif ADDED compat/zlib/contrib/minizip/unzip.c Index: compat/zlib/contrib/minizip/unzip.c ================================================================== --- compat/zlib/contrib/minizip/unzip.c +++ compat/zlib/contrib/minizip/unzip.c @@ -0,0 +1,2125 @@ +/* unzip.c -- IO for uncompress .zip files using zlib + Version 1.1, February 14h, 2010 + part of the MiniZip project - ( http://www.winimage.com/zLibDll/minizip.html ) + + Copyright (C) 1998-2010 Gilles Vollant (minizip) ( http://www.winimage.com/zLibDll/minizip.html ) + + Modifications of Unzip for Zip64 + Copyright (C) 2007-2008 Even Rouault + + Modifications for Zip64 support on both zip and unzip + Copyright (C) 2009-2010 Mathias Svensson ( http://result42.com ) + + For more info read MiniZip_info.txt + + + ------------------------------------------------------------------------------------ + Decryption code comes from crypt.c by Info-ZIP but has been greatly reduced in terms of + compatibility with older software. The following is from the original crypt.c. + Code woven in by Terry Thorsen 1/2003. + + Copyright (c) 1990-2000 Info-ZIP. All rights reserved. + + See the accompanying file LICENSE, version 2000-Apr-09 or later + (the contents of which are also included in zip.h) for terms of use. 
+ If, for some reason, all these files are missing, the Info-ZIP license + also may be found at: ftp://ftp.info-zip.org/pub/infozip/license.html + + crypt.c (full version) by Info-ZIP. Last revised: [see crypt.h] + + The encryption/decryption parts of this source code (as opposed to the + non-echoing password parts) were originally written in Europe. The + whole source package can be freely distributed, including from the USA. + (Prior to January 2000, re-export from the US was a violation of US law.) + + This encryption code is a direct transcription of the algorithm from + Roger Schlafly, described by Phil Katz in the file appnote.txt. This + file (appnote.txt) is distributed with the PKZIP program (even in the + version without encryption capabilities). + + ------------------------------------------------------------------------------------ + + Changes in unzip.c + + 2007-2008 - Even Rouault - Addition of cpl_unzGetCurrentFileZStreamPos + 2007-2008 - Even Rouault - Decoration of symbol names unz* -> cpl_unz* + 2007-2008 - Even Rouault - Remove old C style function prototypes + 2007-2008 - Even Rouault - Add unzip support for ZIP64 + + Copyright (C) 2007-2008 Even Rouault + + + Oct-2009 - Mathias Svensson - Removed cpl_* from symbol names (Even Rouault added them but since this is now moved to a new project (minizip64) I renamed them again). + Oct-2009 - Mathias Svensson - Fixed problem if uncompressed size was > 4G and compressed size was <4G + should only read the compressed/uncompressed size from the Zip64 format if + the size from normal header was 0xFFFFFFFF + Oct-2009 - Mathias Svensson - Applied some bug fixes from paches recived from Gilles Vollant + Oct-2009 - Mathias Svensson - Applied support to unzip files with compression mathod BZIP2 (bzip2 lib is required) + Patch created by Daniel Borca + + Jan-2010 - back to unzip and minizip 1.0 name scheme, with compatibility layer + + Copyright (C) 1998 - 2010 Gilles Vollant, Even Rouault, Mathias Svensson + +*/ + + +#include +#include +#include + +#ifndef NOUNCRYPT + #define NOUNCRYPT +#endif + +#include "zlib.h" +#include "unzip.h" + +#ifdef STDC +# include +# include +# include +#endif +#ifdef NO_ERRNO_H + extern int errno; +#else +# include +#endif + + +#ifndef local +# define local static +#endif +/* compile with -Dlocal if your debugger can't find static symbols */ + + +#ifndef CASESENSITIVITYDEFAULT_NO +# if !defined(unix) && !defined(CASESENSITIVITYDEFAULT_YES) +# define CASESENSITIVITYDEFAULT_NO +# endif +#endif + + +#ifndef UNZ_BUFSIZE +#define UNZ_BUFSIZE (16384) +#endif + +#ifndef UNZ_MAXFILENAMEINZIP +#define UNZ_MAXFILENAMEINZIP (256) +#endif + +#ifndef ALLOC +# define ALLOC(size) (malloc(size)) +#endif +#ifndef TRYFREE +# define TRYFREE(p) {if (p) free(p);} +#endif + +#define SIZECENTRALDIRITEM (0x2e) +#define SIZEZIPLOCALHEADER (0x1e) + + +const char unz_copyright[] = + " unzip 1.01 Copyright 1998-2004 Gilles Vollant - http://www.winimage.com/zLibDll"; + +/* unz_file_info_interntal contain internal info about a file in zipfile*/ +typedef struct unz_file_info64_internal_s +{ + ZPOS64_T offset_curfile;/* relative offset of local header 8 bytes */ +} unz_file_info64_internal; + + +/* file_in_zip_read_info_s contain internal information about a file in zipfile, + when reading and decompress it */ +typedef struct +{ + char *read_buffer; /* internal buffer for compressed data */ + z_stream stream; /* zLib stream structure for inflate */ + +#ifdef HAVE_BZIP2 + bz_stream bstream; /* bzLib stream structure for bziped */ +#endif 
+ + ZPOS64_T pos_in_zipfile; /* position in byte on the zipfile, for fseek*/ + uLong stream_initialised; /* flag set if stream structure is initialised*/ + + ZPOS64_T offset_local_extrafield;/* offset of the local extra field */ + uInt size_local_extrafield;/* size of the local extra field */ + ZPOS64_T pos_local_extrafield; /* position in the local extra field in read*/ + ZPOS64_T total_out_64; + + uLong crc32; /* crc32 of all data uncompressed */ + uLong crc32_wait; /* crc32 we must obtain after decompress all */ + ZPOS64_T rest_read_compressed; /* number of byte to be decompressed */ + ZPOS64_T rest_read_uncompressed;/*number of byte to be obtained after decomp*/ + zlib_filefunc64_32_def z_filefunc; + voidpf filestream; /* io structore of the zipfile */ + uLong compression_method; /* compression method (0==store) */ + ZPOS64_T byte_before_the_zipfile;/* byte before the zipfile, (>0 for sfx)*/ + int raw; +} file_in_zip64_read_info_s; + + +/* unz64_s contain internal information about the zipfile +*/ +typedef struct +{ + zlib_filefunc64_32_def z_filefunc; + int is64bitOpenFunction; + voidpf filestream; /* io structore of the zipfile */ + unz_global_info64 gi; /* public global information */ + ZPOS64_T byte_before_the_zipfile;/* byte before the zipfile, (>0 for sfx)*/ + ZPOS64_T num_file; /* number of the current file in the zipfile*/ + ZPOS64_T pos_in_central_dir; /* pos of the current file in the central dir*/ + ZPOS64_T current_file_ok; /* flag about the usability of the current file*/ + ZPOS64_T central_pos; /* position of the beginning of the central dir*/ + + ZPOS64_T size_central_dir; /* size of the central directory */ + ZPOS64_T offset_central_dir; /* offset of start of central directory with + respect to the starting disk number */ + + unz_file_info64 cur_file_info; /* public info about the current file in zip*/ + unz_file_info64_internal cur_file_info_internal; /* private info about it*/ + file_in_zip64_read_info_s* pfile_in_zip_read; /* structure about the current + file if we are decompressing it */ + int encrypted; + + int isZip64; + +# ifndef NOUNCRYPT + unsigned long keys[3]; /* keys defining the pseudo-random sequence */ + const z_crc_t* pcrc_32_tab; +# endif +} unz64_s; + + +#ifndef NOUNCRYPT +#include "crypt.h" +#endif + +/* =========================================================================== + Read a byte from a gz_stream; update next_in and avail_in. Return EOF + for end of file. + IN assertion: the stream s has been sucessfully opened for reading. +*/ + + +local int unz64local_getByte OF(( + const zlib_filefunc64_32_def* pzlib_filefunc_def, + voidpf filestream, + int *pi)); + +local int unz64local_getByte(const zlib_filefunc64_32_def* pzlib_filefunc_def, voidpf filestream, int *pi) +{ + unsigned char c; + int err = (int)ZREAD64(*pzlib_filefunc_def,filestream,&c,1); + if (err==1) + { + *pi = (int)c; + return UNZ_OK; + } + else + { + if (ZERROR64(*pzlib_filefunc_def,filestream)) + return UNZ_ERRNO; + else + return UNZ_EOF; + } +} + + +/* =========================================================================== + Reads a long in LSB order from the given gz_stream. 
Sets +*/ +local int unz64local_getShort OF(( + const zlib_filefunc64_32_def* pzlib_filefunc_def, + voidpf filestream, + uLong *pX)); + +local int unz64local_getShort (const zlib_filefunc64_32_def* pzlib_filefunc_def, + voidpf filestream, + uLong *pX) +{ + uLong x ; + int i = 0; + int err; + + err = unz64local_getByte(pzlib_filefunc_def,filestream,&i); + x = (uLong)i; + + if (err==UNZ_OK) + err = unz64local_getByte(pzlib_filefunc_def,filestream,&i); + x |= ((uLong)i)<<8; + + if (err==UNZ_OK) + *pX = x; + else + *pX = 0; + return err; +} + +local int unz64local_getLong OF(( + const zlib_filefunc64_32_def* pzlib_filefunc_def, + voidpf filestream, + uLong *pX)); + +local int unz64local_getLong (const zlib_filefunc64_32_def* pzlib_filefunc_def, + voidpf filestream, + uLong *pX) +{ + uLong x ; + int i = 0; + int err; + + err = unz64local_getByte(pzlib_filefunc_def,filestream,&i); + x = (uLong)i; + + if (err==UNZ_OK) + err = unz64local_getByte(pzlib_filefunc_def,filestream,&i); + x |= ((uLong)i)<<8; + + if (err==UNZ_OK) + err = unz64local_getByte(pzlib_filefunc_def,filestream,&i); + x |= ((uLong)i)<<16; + + if (err==UNZ_OK) + err = unz64local_getByte(pzlib_filefunc_def,filestream,&i); + x += ((uLong)i)<<24; + + if (err==UNZ_OK) + *pX = x; + else + *pX = 0; + return err; +} + +local int unz64local_getLong64 OF(( + const zlib_filefunc64_32_def* pzlib_filefunc_def, + voidpf filestream, + ZPOS64_T *pX)); + + +local int unz64local_getLong64 (const zlib_filefunc64_32_def* pzlib_filefunc_def, + voidpf filestream, + ZPOS64_T *pX) +{ + ZPOS64_T x ; + int i = 0; + int err; + + err = unz64local_getByte(pzlib_filefunc_def,filestream,&i); + x = (ZPOS64_T)i; + + if (err==UNZ_OK) + err = unz64local_getByte(pzlib_filefunc_def,filestream,&i); + x |= ((ZPOS64_T)i)<<8; + + if (err==UNZ_OK) + err = unz64local_getByte(pzlib_filefunc_def,filestream,&i); + x |= ((ZPOS64_T)i)<<16; + + if (err==UNZ_OK) + err = unz64local_getByte(pzlib_filefunc_def,filestream,&i); + x |= ((ZPOS64_T)i)<<24; + + if (err==UNZ_OK) + err = unz64local_getByte(pzlib_filefunc_def,filestream,&i); + x |= ((ZPOS64_T)i)<<32; + + if (err==UNZ_OK) + err = unz64local_getByte(pzlib_filefunc_def,filestream,&i); + x |= ((ZPOS64_T)i)<<40; + + if (err==UNZ_OK) + err = unz64local_getByte(pzlib_filefunc_def,filestream,&i); + x |= ((ZPOS64_T)i)<<48; + + if (err==UNZ_OK) + err = unz64local_getByte(pzlib_filefunc_def,filestream,&i); + x |= ((ZPOS64_T)i)<<56; + + if (err==UNZ_OK) + *pX = x; + else + *pX = 0; + return err; +} + +/* My own strcmpi / strcasecmp */ +local int strcmpcasenosensitive_internal (const char* fileName1, const char* fileName2) +{ + for (;;) + { + char c1=*(fileName1++); + char c2=*(fileName2++); + if ((c1>='a') && (c1<='z')) + c1 -= 0x20; + if ((c2>='a') && (c2<='z')) + c2 -= 0x20; + if (c1=='\0') + return ((c2=='\0') ? 0 : -1); + if (c2=='\0') + return 1; + if (c1c2) + return 1; + } +} + + +#ifdef CASESENSITIVITYDEFAULT_NO +#define CASESENSITIVITYDEFAULTVALUE 2 +#else +#define CASESENSITIVITYDEFAULTVALUE 1 +#endif + +#ifndef STRCMPCASENOSENTIVEFUNCTION +#define STRCMPCASENOSENTIVEFUNCTION strcmpcasenosensitive_internal +#endif + +/* + Compare two filename (fileName1,fileName2). 
+ If iCaseSenisivity = 1, comparision is case sensitivity (like strcmp) + If iCaseSenisivity = 2, comparision is not case sensitivity (like strcmpi + or strcasecmp) + If iCaseSenisivity = 0, case sensitivity is defaut of your operating system + (like 1 on Unix, 2 on Windows) + +*/ +extern int ZEXPORT unzStringFileNameCompare (const char* fileName1, + const char* fileName2, + int iCaseSensitivity) + +{ + if (iCaseSensitivity==0) + iCaseSensitivity=CASESENSITIVITYDEFAULTVALUE; + + if (iCaseSensitivity==1) + return strcmp(fileName1,fileName2); + + return STRCMPCASENOSENTIVEFUNCTION(fileName1,fileName2); +} + +#ifndef BUFREADCOMMENT +#define BUFREADCOMMENT (0x400) +#endif + +/* + Locate the Central directory of a zipfile (at the end, just before + the global comment) +*/ +local ZPOS64_T unz64local_SearchCentralDir OF((const zlib_filefunc64_32_def* pzlib_filefunc_def, voidpf filestream)); +local ZPOS64_T unz64local_SearchCentralDir(const zlib_filefunc64_32_def* pzlib_filefunc_def, voidpf filestream) +{ + unsigned char* buf; + ZPOS64_T uSizeFile; + ZPOS64_T uBackRead; + ZPOS64_T uMaxBack=0xffff; /* maximum size of global comment */ + ZPOS64_T uPosFound=0; + + if (ZSEEK64(*pzlib_filefunc_def,filestream,0,ZLIB_FILEFUNC_SEEK_END) != 0) + return 0; + + + uSizeFile = ZTELL64(*pzlib_filefunc_def,filestream); + + if (uMaxBack>uSizeFile) + uMaxBack = uSizeFile; + + buf = (unsigned char*)ALLOC(BUFREADCOMMENT+4); + if (buf==NULL) + return 0; + + uBackRead = 4; + while (uBackReaduMaxBack) + uBackRead = uMaxBack; + else + uBackRead+=BUFREADCOMMENT; + uReadPos = uSizeFile-uBackRead ; + + uReadSize = ((BUFREADCOMMENT+4) < (uSizeFile-uReadPos)) ? + (BUFREADCOMMENT+4) : (uLong)(uSizeFile-uReadPos); + if (ZSEEK64(*pzlib_filefunc_def,filestream,uReadPos,ZLIB_FILEFUNC_SEEK_SET)!=0) + break; + + if (ZREAD64(*pzlib_filefunc_def,filestream,buf,uReadSize)!=uReadSize) + break; + + for (i=(int)uReadSize-3; (i--)>0;) + if (((*(buf+i))==0x50) && ((*(buf+i+1))==0x4b) && + ((*(buf+i+2))==0x05) && ((*(buf+i+3))==0x06)) + { + uPosFound = uReadPos+i; + break; + } + + if (uPosFound!=0) + break; + } + TRYFREE(buf); + return uPosFound; +} + + +/* + Locate the Central directory 64 of a zipfile (at the end, just before + the global comment) +*/ +local ZPOS64_T unz64local_SearchCentralDir64 OF(( + const zlib_filefunc64_32_def* pzlib_filefunc_def, + voidpf filestream)); + +local ZPOS64_T unz64local_SearchCentralDir64(const zlib_filefunc64_32_def* pzlib_filefunc_def, + voidpf filestream) +{ + unsigned char* buf; + ZPOS64_T uSizeFile; + ZPOS64_T uBackRead; + ZPOS64_T uMaxBack=0xffff; /* maximum size of global comment */ + ZPOS64_T uPosFound=0; + uLong uL; + ZPOS64_T relativeOffset; + + if (ZSEEK64(*pzlib_filefunc_def,filestream,0,ZLIB_FILEFUNC_SEEK_END) != 0) + return 0; + + + uSizeFile = ZTELL64(*pzlib_filefunc_def,filestream); + + if (uMaxBack>uSizeFile) + uMaxBack = uSizeFile; + + buf = (unsigned char*)ALLOC(BUFREADCOMMENT+4); + if (buf==NULL) + return 0; + + uBackRead = 4; + while (uBackReaduMaxBack) + uBackRead = uMaxBack; + else + uBackRead+=BUFREADCOMMENT; + uReadPos = uSizeFile-uBackRead ; + + uReadSize = ((BUFREADCOMMENT+4) < (uSizeFile-uReadPos)) ? 
+ (BUFREADCOMMENT+4) : (uLong)(uSizeFile-uReadPos); + if (ZSEEK64(*pzlib_filefunc_def,filestream,uReadPos,ZLIB_FILEFUNC_SEEK_SET)!=0) + break; + + if (ZREAD64(*pzlib_filefunc_def,filestream,buf,uReadSize)!=uReadSize) + break; + + for (i=(int)uReadSize-3; (i--)>0;) + if (((*(buf+i))==0x50) && ((*(buf+i+1))==0x4b) && + ((*(buf+i+2))==0x06) && ((*(buf+i+3))==0x07)) + { + uPosFound = uReadPos+i; + break; + } + + if (uPosFound!=0) + break; + } + TRYFREE(buf); + if (uPosFound == 0) + return 0; + + /* Zip64 end of central directory locator */ + if (ZSEEK64(*pzlib_filefunc_def,filestream, uPosFound,ZLIB_FILEFUNC_SEEK_SET)!=0) + return 0; + + /* the signature, already checked */ + if (unz64local_getLong(pzlib_filefunc_def,filestream,&uL)!=UNZ_OK) + return 0; + + /* number of the disk with the start of the zip64 end of central directory */ + if (unz64local_getLong(pzlib_filefunc_def,filestream,&uL)!=UNZ_OK) + return 0; + if (uL != 0) + return 0; + + /* relative offset of the zip64 end of central directory record */ + if (unz64local_getLong64(pzlib_filefunc_def,filestream,&relativeOffset)!=UNZ_OK) + return 0; + + /* total number of disks */ + if (unz64local_getLong(pzlib_filefunc_def,filestream,&uL)!=UNZ_OK) + return 0; + if (uL != 1) + return 0; + + /* Goto end of central directory record */ + if (ZSEEK64(*pzlib_filefunc_def,filestream, relativeOffset,ZLIB_FILEFUNC_SEEK_SET)!=0) + return 0; + + /* the signature */ + if (unz64local_getLong(pzlib_filefunc_def,filestream,&uL)!=UNZ_OK) + return 0; + + if (uL != 0x06064b50) + return 0; + + return relativeOffset; +} + +/* + Open a Zip file. path contain the full pathname (by example, + on a Windows NT computer "c:\\test\\zlib114.zip" or on an Unix computer + "zlib/zlib114.zip". + If the zipfile cannot be opened (file doesn't exist or in not valid), the + return value is NULL. + Else, the return value is a unzFile Handle, usable with other function + of this unzip package. 
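
The open logic described above is reached through the public wrappers unzOpen()/unzOpen64() declared in unzip.h; on success they return a handle usable with the rest of the API. A hedged sketch, not part of the patched sources; show_entry_count() and the printf formatting are placeholders.

    /* Illustrative sketch only; not part of the patched sources. */
    #include <stdio.h>
    #include "unzip.h"

    int show_entry_count(const char *path)
    {
      unzFile uf = unzOpen64(path);          /* NULL if the file is not a valid zip */
      unz_global_info64 gi;
      if (uf == NULL)
        return UNZ_ERRNO;
      if (unzGetGlobalInfo64(uf, &gi) == UNZ_OK)
        printf("%lu entries, %lu byte comment\n",
               (unsigned long)gi.number_entry, (unsigned long)gi.size_comment);
      return unzClose(uf);
    }
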
+*/ +local unzFile unzOpenInternal (const void *path, + zlib_filefunc64_32_def* pzlib_filefunc64_32_def, + int is64bitOpenFunction) +{ + unz64_s us; + unz64_s *s; + ZPOS64_T central_pos; + uLong uL; + + uLong number_disk; /* number of the current dist, used for + spaning ZIP, unsupported, always 0*/ + uLong number_disk_with_CD; /* number the the disk with central dir, used + for spaning ZIP, unsupported, always 0*/ + ZPOS64_T number_entry_CD; /* total number of entries in + the central dir + (same than number_entry on nospan) */ + + int err=UNZ_OK; + + if (unz_copyright[0]!=' ') + return NULL; + + us.z_filefunc.zseek32_file = NULL; + us.z_filefunc.ztell32_file = NULL; + if (pzlib_filefunc64_32_def==NULL) + fill_fopen64_filefunc(&us.z_filefunc.zfile_func64); + else + us.z_filefunc = *pzlib_filefunc64_32_def; + us.is64bitOpenFunction = is64bitOpenFunction; + + + + us.filestream = ZOPEN64(us.z_filefunc, + path, + ZLIB_FILEFUNC_MODE_READ | + ZLIB_FILEFUNC_MODE_EXISTING); + if (us.filestream==NULL) + return NULL; + + central_pos = unz64local_SearchCentralDir64(&us.z_filefunc,us.filestream); + if (central_pos) + { + uLong uS; + ZPOS64_T uL64; + + us.isZip64 = 1; + + if (ZSEEK64(us.z_filefunc, us.filestream, + central_pos,ZLIB_FILEFUNC_SEEK_SET)!=0) + err=UNZ_ERRNO; + + /* the signature, already checked */ + if (unz64local_getLong(&us.z_filefunc, us.filestream,&uL)!=UNZ_OK) + err=UNZ_ERRNO; + + /* size of zip64 end of central directory record */ + if (unz64local_getLong64(&us.z_filefunc, us.filestream,&uL64)!=UNZ_OK) + err=UNZ_ERRNO; + + /* version made by */ + if (unz64local_getShort(&us.z_filefunc, us.filestream,&uS)!=UNZ_OK) + err=UNZ_ERRNO; + + /* version needed to extract */ + if (unz64local_getShort(&us.z_filefunc, us.filestream,&uS)!=UNZ_OK) + err=UNZ_ERRNO; + + /* number of this disk */ + if (unz64local_getLong(&us.z_filefunc, us.filestream,&number_disk)!=UNZ_OK) + err=UNZ_ERRNO; + + /* number of the disk with the start of the central directory */ + if (unz64local_getLong(&us.z_filefunc, us.filestream,&number_disk_with_CD)!=UNZ_OK) + err=UNZ_ERRNO; + + /* total number of entries in the central directory on this disk */ + if (unz64local_getLong64(&us.z_filefunc, us.filestream,&us.gi.number_entry)!=UNZ_OK) + err=UNZ_ERRNO; + + /* total number of entries in the central directory */ + if (unz64local_getLong64(&us.z_filefunc, us.filestream,&number_entry_CD)!=UNZ_OK) + err=UNZ_ERRNO; + + if ((number_entry_CD!=us.gi.number_entry) || + (number_disk_with_CD!=0) || + (number_disk!=0)) + err=UNZ_BADZIPFILE; + + /* size of the central directory */ + if (unz64local_getLong64(&us.z_filefunc, us.filestream,&us.size_central_dir)!=UNZ_OK) + err=UNZ_ERRNO; + + /* offset of start of central directory with respect to the + starting disk number */ + if (unz64local_getLong64(&us.z_filefunc, us.filestream,&us.offset_central_dir)!=UNZ_OK) + err=UNZ_ERRNO; + + us.gi.size_comment = 0; + } + else + { + central_pos = unz64local_SearchCentralDir(&us.z_filefunc,us.filestream); + if (central_pos==0) + err=UNZ_ERRNO; + + us.isZip64 = 0; + + if (ZSEEK64(us.z_filefunc, us.filestream, + central_pos,ZLIB_FILEFUNC_SEEK_SET)!=0) + err=UNZ_ERRNO; + + /* the signature, already checked */ + if (unz64local_getLong(&us.z_filefunc, us.filestream,&uL)!=UNZ_OK) + err=UNZ_ERRNO; + + /* number of this disk */ + if (unz64local_getShort(&us.z_filefunc, us.filestream,&number_disk)!=UNZ_OK) + err=UNZ_ERRNO; + + /* number of the disk with the start of the central directory */ + if (unz64local_getShort(&us.z_filefunc, 
us.filestream,&number_disk_with_CD)!=UNZ_OK) + err=UNZ_ERRNO; + + /* total number of entries in the central dir on this disk */ + if (unz64local_getShort(&us.z_filefunc, us.filestream,&uL)!=UNZ_OK) + err=UNZ_ERRNO; + us.gi.number_entry = uL; + + /* total number of entries in the central dir */ + if (unz64local_getShort(&us.z_filefunc, us.filestream,&uL)!=UNZ_OK) + err=UNZ_ERRNO; + number_entry_CD = uL; + + if ((number_entry_CD!=us.gi.number_entry) || + (number_disk_with_CD!=0) || + (number_disk!=0)) + err=UNZ_BADZIPFILE; + + /* size of the central directory */ + if (unz64local_getLong(&us.z_filefunc, us.filestream,&uL)!=UNZ_OK) + err=UNZ_ERRNO; + us.size_central_dir = uL; + + /* offset of start of central directory with respect to the + starting disk number */ + if (unz64local_getLong(&us.z_filefunc, us.filestream,&uL)!=UNZ_OK) + err=UNZ_ERRNO; + us.offset_central_dir = uL; + + /* zipfile comment length */ + if (unz64local_getShort(&us.z_filefunc, us.filestream,&us.gi.size_comment)!=UNZ_OK) + err=UNZ_ERRNO; + } + + if ((central_pospfile_in_zip_read!=NULL) + unzCloseCurrentFile(file); + + ZCLOSE64(s->z_filefunc, s->filestream); + TRYFREE(s); + return UNZ_OK; +} + + +/* + Write info about the ZipFile in the *pglobal_info structure. + No preparation of the structure is needed + return UNZ_OK if there is no problem. */ +extern int ZEXPORT unzGetGlobalInfo64 (unzFile file, unz_global_info64* pglobal_info) +{ + unz64_s* s; + if (file==NULL) + return UNZ_PARAMERROR; + s=(unz64_s*)file; + *pglobal_info=s->gi; + return UNZ_OK; +} + +extern int ZEXPORT unzGetGlobalInfo (unzFile file, unz_global_info* pglobal_info32) +{ + unz64_s* s; + if (file==NULL) + return UNZ_PARAMERROR; + s=(unz64_s*)file; + /* to do : check if number_entry is not truncated */ + pglobal_info32->number_entry = (uLong)s->gi.number_entry; + pglobal_info32->size_comment = s->gi.size_comment; + return UNZ_OK; +} +/* + Translate date/time from Dos format to tm_unz (readable more easilty) +*/ +local void unz64local_DosDateToTmuDate (ZPOS64_T ulDosDate, tm_unz* ptm) +{ + ZPOS64_T uDate; + uDate = (ZPOS64_T)(ulDosDate>>16); + ptm->tm_mday = (uInt)(uDate&0x1f) ; + ptm->tm_mon = (uInt)((((uDate)&0x1E0)/0x20)-1) ; + ptm->tm_year = (uInt)(((uDate&0x0FE00)/0x0200)+1980) ; + + ptm->tm_hour = (uInt) ((ulDosDate &0xF800)/0x800); + ptm->tm_min = (uInt) ((ulDosDate&0x7E0)/0x20) ; + ptm->tm_sec = (uInt) (2*(ulDosDate&0x1f)) ; +} + +/* + Get Info about the current file in the zipfile, with internal only info +*/ +local int unz64local_GetCurrentFileInfoInternal OF((unzFile file, + unz_file_info64 *pfile_info, + unz_file_info64_internal + *pfile_info_internal, + char *szFileName, + uLong fileNameBufferSize, + void *extraField, + uLong extraFieldBufferSize, + char *szComment, + uLong commentBufferSize)); + +local int unz64local_GetCurrentFileInfoInternal (unzFile file, + unz_file_info64 *pfile_info, + unz_file_info64_internal + *pfile_info_internal, + char *szFileName, + uLong fileNameBufferSize, + void *extraField, + uLong extraFieldBufferSize, + char *szComment, + uLong commentBufferSize) +{ + unz64_s* s; + unz_file_info64 file_info; + unz_file_info64_internal file_info_internal; + int err=UNZ_OK; + uLong uMagic; + long lSeek=0; + uLong uL; + + if (file==NULL) + return UNZ_PARAMERROR; + s=(unz64_s*)file; + if (ZSEEK64(s->z_filefunc, s->filestream, + s->pos_in_central_dir+s->byte_before_the_zipfile, + ZLIB_FILEFUNC_SEEK_SET)!=0) + err=UNZ_ERRNO; + + + /* we check the magic */ + if (err==UNZ_OK) + { + if (unz64local_getLong(&s->z_filefunc, 
s->filestream,&uMagic) != UNZ_OK) + err=UNZ_ERRNO; + else if (uMagic!=0x02014b50) + err=UNZ_BADZIPFILE; + } + + if (unz64local_getShort(&s->z_filefunc, s->filestream,&file_info.version) != UNZ_OK) + err=UNZ_ERRNO; + + if (unz64local_getShort(&s->z_filefunc, s->filestream,&file_info.version_needed) != UNZ_OK) + err=UNZ_ERRNO; + + if (unz64local_getShort(&s->z_filefunc, s->filestream,&file_info.flag) != UNZ_OK) + err=UNZ_ERRNO; + + if (unz64local_getShort(&s->z_filefunc, s->filestream,&file_info.compression_method) != UNZ_OK) + err=UNZ_ERRNO; + + if (unz64local_getLong(&s->z_filefunc, s->filestream,&file_info.dosDate) != UNZ_OK) + err=UNZ_ERRNO; + + unz64local_DosDateToTmuDate(file_info.dosDate,&file_info.tmu_date); + + if (unz64local_getLong(&s->z_filefunc, s->filestream,&file_info.crc) != UNZ_OK) + err=UNZ_ERRNO; + + if (unz64local_getLong(&s->z_filefunc, s->filestream,&uL) != UNZ_OK) + err=UNZ_ERRNO; + file_info.compressed_size = uL; + + if (unz64local_getLong(&s->z_filefunc, s->filestream,&uL) != UNZ_OK) + err=UNZ_ERRNO; + file_info.uncompressed_size = uL; + + if (unz64local_getShort(&s->z_filefunc, s->filestream,&file_info.size_filename) != UNZ_OK) + err=UNZ_ERRNO; + + if (unz64local_getShort(&s->z_filefunc, s->filestream,&file_info.size_file_extra) != UNZ_OK) + err=UNZ_ERRNO; + + if (unz64local_getShort(&s->z_filefunc, s->filestream,&file_info.size_file_comment) != UNZ_OK) + err=UNZ_ERRNO; + + if (unz64local_getShort(&s->z_filefunc, s->filestream,&file_info.disk_num_start) != UNZ_OK) + err=UNZ_ERRNO; + + if (unz64local_getShort(&s->z_filefunc, s->filestream,&file_info.internal_fa) != UNZ_OK) + err=UNZ_ERRNO; + + if (unz64local_getLong(&s->z_filefunc, s->filestream,&file_info.external_fa) != UNZ_OK) + err=UNZ_ERRNO; + + // relative offset of local header + if (unz64local_getLong(&s->z_filefunc, s->filestream,&uL) != UNZ_OK) + err=UNZ_ERRNO; + file_info_internal.offset_curfile = uL; + + lSeek+=file_info.size_filename; + if ((err==UNZ_OK) && (szFileName!=NULL)) + { + uLong uSizeRead ; + if (file_info.size_filename0) && (fileNameBufferSize>0)) + if (ZREAD64(s->z_filefunc, s->filestream,szFileName,uSizeRead)!=uSizeRead) + err=UNZ_ERRNO; + lSeek -= uSizeRead; + } + + // Read extrafield + if ((err==UNZ_OK) && (extraField!=NULL)) + { + ZPOS64_T uSizeRead ; + if (file_info.size_file_extraz_filefunc, s->filestream,lSeek,ZLIB_FILEFUNC_SEEK_CUR)==0) + lSeek=0; + else + err=UNZ_ERRNO; + } + + if ((file_info.size_file_extra>0) && (extraFieldBufferSize>0)) + if (ZREAD64(s->z_filefunc, s->filestream,extraField,(uLong)uSizeRead)!=uSizeRead) + err=UNZ_ERRNO; + + lSeek += file_info.size_file_extra - (uLong)uSizeRead; + } + else + lSeek += file_info.size_file_extra; + + + if ((err==UNZ_OK) && (file_info.size_file_extra != 0)) + { + uLong acc = 0; + + // since lSeek now points to after the extra field we need to move back + lSeek -= file_info.size_file_extra; + + if (lSeek!=0) + { + if (ZSEEK64(s->z_filefunc, s->filestream,lSeek,ZLIB_FILEFUNC_SEEK_CUR)==0) + lSeek=0; + else + err=UNZ_ERRNO; + } + + while(acc < file_info.size_file_extra) + { + uLong headerId; + uLong dataSize; + + if (unz64local_getShort(&s->z_filefunc, s->filestream,&headerId) != UNZ_OK) + err=UNZ_ERRNO; + + if (unz64local_getShort(&s->z_filefunc, s->filestream,&dataSize) != UNZ_OK) + err=UNZ_ERRNO; + + /* ZIP64 extra fields */ + if (headerId == 0x0001) + { + uLong uL; + + if(file_info.uncompressed_size == MAXU32) + { + if (unz64local_getLong64(&s->z_filefunc, s->filestream,&file_info.uncompressed_size) != UNZ_OK) + err=UNZ_ERRNO; + } + + 
if(file_info.compressed_size == MAXU32) + { + if (unz64local_getLong64(&s->z_filefunc, s->filestream,&file_info.compressed_size) != UNZ_OK) + err=UNZ_ERRNO; + } + + if(file_info_internal.offset_curfile == MAXU32) + { + /* Relative Header offset */ + if (unz64local_getLong64(&s->z_filefunc, s->filestream,&file_info_internal.offset_curfile) != UNZ_OK) + err=UNZ_ERRNO; + } + + if(file_info.disk_num_start == MAXU32) + { + /* Disk Start Number */ + if (unz64local_getLong(&s->z_filefunc, s->filestream,&uL) != UNZ_OK) + err=UNZ_ERRNO; + } + + } + else + { + if (ZSEEK64(s->z_filefunc, s->filestream,dataSize,ZLIB_FILEFUNC_SEEK_CUR)!=0) + err=UNZ_ERRNO; + } + + acc += 2 + 2 + dataSize; + } + } + + if ((err==UNZ_OK) && (szComment!=NULL)) + { + uLong uSizeRead ; + if (file_info.size_file_commentz_filefunc, s->filestream,lSeek,ZLIB_FILEFUNC_SEEK_CUR)==0) + lSeek=0; + else + err=UNZ_ERRNO; + } + + if ((file_info.size_file_comment>0) && (commentBufferSize>0)) + if (ZREAD64(s->z_filefunc, s->filestream,szComment,uSizeRead)!=uSizeRead) + err=UNZ_ERRNO; + lSeek+=file_info.size_file_comment - uSizeRead; + } + else + lSeek+=file_info.size_file_comment; + + + if ((err==UNZ_OK) && (pfile_info!=NULL)) + *pfile_info=file_info; + + if ((err==UNZ_OK) && (pfile_info_internal!=NULL)) + *pfile_info_internal=file_info_internal; + + return err; +} + + + +/* + Write info about the ZipFile in the *pglobal_info structure. + No preparation of the structure is needed + return UNZ_OK if there is no problem. +*/ +extern int ZEXPORT unzGetCurrentFileInfo64 (unzFile file, + unz_file_info64 * pfile_info, + char * szFileName, uLong fileNameBufferSize, + void *extraField, uLong extraFieldBufferSize, + char* szComment, uLong commentBufferSize) +{ + return unz64local_GetCurrentFileInfoInternal(file,pfile_info,NULL, + szFileName,fileNameBufferSize, + extraField,extraFieldBufferSize, + szComment,commentBufferSize); +} + +extern int ZEXPORT unzGetCurrentFileInfo (unzFile file, + unz_file_info * pfile_info, + char * szFileName, uLong fileNameBufferSize, + void *extraField, uLong extraFieldBufferSize, + char* szComment, uLong commentBufferSize) +{ + int err; + unz_file_info64 file_info64; + err = unz64local_GetCurrentFileInfoInternal(file,&file_info64,NULL, + szFileName,fileNameBufferSize, + extraField,extraFieldBufferSize, + szComment,commentBufferSize); + if ((err==UNZ_OK) && (pfile_info != NULL)) + { + pfile_info->version = file_info64.version; + pfile_info->version_needed = file_info64.version_needed; + pfile_info->flag = file_info64.flag; + pfile_info->compression_method = file_info64.compression_method; + pfile_info->dosDate = file_info64.dosDate; + pfile_info->crc = file_info64.crc; + + pfile_info->size_filename = file_info64.size_filename; + pfile_info->size_file_extra = file_info64.size_file_extra; + pfile_info->size_file_comment = file_info64.size_file_comment; + + pfile_info->disk_num_start = file_info64.disk_num_start; + pfile_info->internal_fa = file_info64.internal_fa; + pfile_info->external_fa = file_info64.external_fa; + + pfile_info->tmu_date = file_info64.tmu_date, + + + pfile_info->compressed_size = (uLong)file_info64.compressed_size; + pfile_info->uncompressed_size = (uLong)file_info64.uncompressed_size; + + } + return err; +} +/* + Set the current file of the zipfile to the first file. 
+ return UNZ_OK if there is no problem +*/ +extern int ZEXPORT unzGoToFirstFile (unzFile file) +{ + int err=UNZ_OK; + unz64_s* s; + if (file==NULL) + return UNZ_PARAMERROR; + s=(unz64_s*)file; + s->pos_in_central_dir=s->offset_central_dir; + s->num_file=0; + err=unz64local_GetCurrentFileInfoInternal(file,&s->cur_file_info, + &s->cur_file_info_internal, + NULL,0,NULL,0,NULL,0); + s->current_file_ok = (err == UNZ_OK); + return err; +} + +/* + Set the current file of the zipfile to the next file. + return UNZ_OK if there is no problem + return UNZ_END_OF_LIST_OF_FILE if the actual file was the latest. +*/ +extern int ZEXPORT unzGoToNextFile (unzFile file) +{ + unz64_s* s; + int err; + + if (file==NULL) + return UNZ_PARAMERROR; + s=(unz64_s*)file; + if (!s->current_file_ok) + return UNZ_END_OF_LIST_OF_FILE; + if (s->gi.number_entry != 0xffff) /* 2^16 files overflow hack */ + if (s->num_file+1==s->gi.number_entry) + return UNZ_END_OF_LIST_OF_FILE; + + s->pos_in_central_dir += SIZECENTRALDIRITEM + s->cur_file_info.size_filename + + s->cur_file_info.size_file_extra + s->cur_file_info.size_file_comment ; + s->num_file++; + err = unz64local_GetCurrentFileInfoInternal(file,&s->cur_file_info, + &s->cur_file_info_internal, + NULL,0,NULL,0,NULL,0); + s->current_file_ok = (err == UNZ_OK); + return err; +} + + +/* + Try locate the file szFileName in the zipfile. + For the iCaseSensitivity signification, see unzStringFileNameCompare + + return value : + UNZ_OK if the file is found. It becomes the current file. + UNZ_END_OF_LIST_OF_FILE if the file is not found +*/ +extern int ZEXPORT unzLocateFile (unzFile file, const char *szFileName, int iCaseSensitivity) +{ + unz64_s* s; + int err; + + /* We remember the 'current' position in the file so that we can jump + * back there if we fail. + */ + unz_file_info64 cur_file_infoSaved; + unz_file_info64_internal cur_file_info_internalSaved; + ZPOS64_T num_fileSaved; + ZPOS64_T pos_in_central_dirSaved; + + + if (file==NULL) + return UNZ_PARAMERROR; + + if (strlen(szFileName)>=UNZ_MAXFILENAMEINZIP) + return UNZ_PARAMERROR; + + s=(unz64_s*)file; + if (!s->current_file_ok) + return UNZ_END_OF_LIST_OF_FILE; + + /* Save the current state */ + num_fileSaved = s->num_file; + pos_in_central_dirSaved = s->pos_in_central_dir; + cur_file_infoSaved = s->cur_file_info; + cur_file_info_internalSaved = s->cur_file_info_internal; + + err = unzGoToFirstFile(file); + + while (err == UNZ_OK) + { + char szCurrentFileName[UNZ_MAXFILENAMEINZIP+1]; + err = unzGetCurrentFileInfo64(file,NULL, + szCurrentFileName,sizeof(szCurrentFileName)-1, + NULL,0,NULL,0); + if (err == UNZ_OK) + { + if (unzStringFileNameCompare(szCurrentFileName, + szFileName,iCaseSensitivity)==0) + return UNZ_OK; + err = unzGoToNextFile(file); + } + } + + /* We failed, so restore the state of the 'current file' to where we + * were. + */ + s->num_file = num_fileSaved ; + s->pos_in_central_dir = pos_in_central_dirSaved ; + s->cur_file_info = cur_file_infoSaved; + s->cur_file_info_internal = cur_file_info_internalSaved; + return err; +} + + +/* +/////////////////////////////////////////// +// Contributed by Ryan Haksi (mailto://cryogen@infoserve.net) +// I need random access +// +// Further optimization could be realized by adding an ability +// to cache the directory in memory. The goal being a single +// comprehensive file read to put the file I need in a memory. 
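
unzGoToFirstFile()/unzGoToNextFile() above, together with unzGetCurrentFileInfo64(), give the linear walk over the directory; the unz_file_pos helpers defined just below add the random access the comment asks for. A sketch of both, illustrative only and not part of the patched sources (the 256-byte name buffer and list_entries() name are arbitrary choices).

    /* Illustrative sketch only; not part of the minizip sources added here. */
    #include <stdio.h>
    #include "unzip.h"

    int list_entries(unzFile uf)
    {
      int err = unzGoToFirstFile(uf);
      while (err == UNZ_OK)
      {
        char name[256];                      /* placeholder buffer size */
        unz_file_info64 fi;
        unz64_file_pos pos;
        err = unzGetCurrentFileInfo64(uf, &fi, name, sizeof(name), NULL, 0, NULL, 0);
        if (err != UNZ_OK)
          break;
        unzGetFilePos64(uf, &pos);           /* bookmark for a later unzGoToFilePos64() */
        printf("%s (%lu -> %lu bytes)\n", name,
               (unsigned long)fi.compressed_size,
               (unsigned long)fi.uncompressed_size);
        err = unzGoToNextFile(uf);           /* UNZ_END_OF_LIST_OF_FILE when done */
      }
      return (err == UNZ_END_OF_LIST_OF_FILE) ? UNZ_OK : err;
    }
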
+*/ + +/* +typedef struct unz_file_pos_s +{ + ZPOS64_T pos_in_zip_directory; // offset in file + ZPOS64_T num_of_file; // # of file +} unz_file_pos; +*/ + +extern int ZEXPORT unzGetFilePos64(unzFile file, unz64_file_pos* file_pos) +{ + unz64_s* s; + + if (file==NULL || file_pos==NULL) + return UNZ_PARAMERROR; + s=(unz64_s*)file; + if (!s->current_file_ok) + return UNZ_END_OF_LIST_OF_FILE; + + file_pos->pos_in_zip_directory = s->pos_in_central_dir; + file_pos->num_of_file = s->num_file; + + return UNZ_OK; +} + +extern int ZEXPORT unzGetFilePos( + unzFile file, + unz_file_pos* file_pos) +{ + unz64_file_pos file_pos64; + int err = unzGetFilePos64(file,&file_pos64); + if (err==UNZ_OK) + { + file_pos->pos_in_zip_directory = (uLong)file_pos64.pos_in_zip_directory; + file_pos->num_of_file = (uLong)file_pos64.num_of_file; + } + return err; +} + +extern int ZEXPORT unzGoToFilePos64(unzFile file, const unz64_file_pos* file_pos) +{ + unz64_s* s; + int err; + + if (file==NULL || file_pos==NULL) + return UNZ_PARAMERROR; + s=(unz64_s*)file; + + /* jump to the right spot */ + s->pos_in_central_dir = file_pos->pos_in_zip_directory; + s->num_file = file_pos->num_of_file; + + /* set the current file */ + err = unz64local_GetCurrentFileInfoInternal(file,&s->cur_file_info, + &s->cur_file_info_internal, + NULL,0,NULL,0,NULL,0); + /* return results */ + s->current_file_ok = (err == UNZ_OK); + return err; +} + +extern int ZEXPORT unzGoToFilePos( + unzFile file, + unz_file_pos* file_pos) +{ + unz64_file_pos file_pos64; + if (file_pos == NULL) + return UNZ_PARAMERROR; + + file_pos64.pos_in_zip_directory = file_pos->pos_in_zip_directory; + file_pos64.num_of_file = file_pos->num_of_file; + return unzGoToFilePos64(file,&file_pos64); +} + +/* +// Unzip Helper Functions - should be here? 
+/////////////////////////////////////////// +*/ + +/* + Read the local header of the current zipfile + Check the coherency of the local header and info in the end of central + directory about this file + store in *piSizeVar the size of extra info in local header + (filename and size of extra field data) +*/ +local int unz64local_CheckCurrentFileCoherencyHeader (unz64_s* s, uInt* piSizeVar, + ZPOS64_T * poffset_local_extrafield, + uInt * psize_local_extrafield) +{ + uLong uMagic,uData,uFlags; + uLong size_filename; + uLong size_extra_field; + int err=UNZ_OK; + + *piSizeVar = 0; + *poffset_local_extrafield = 0; + *psize_local_extrafield = 0; + + if (ZSEEK64(s->z_filefunc, s->filestream,s->cur_file_info_internal.offset_curfile + + s->byte_before_the_zipfile,ZLIB_FILEFUNC_SEEK_SET)!=0) + return UNZ_ERRNO; + + + if (err==UNZ_OK) + { + if (unz64local_getLong(&s->z_filefunc, s->filestream,&uMagic) != UNZ_OK) + err=UNZ_ERRNO; + else if (uMagic!=0x04034b50) + err=UNZ_BADZIPFILE; + } + + if (unz64local_getShort(&s->z_filefunc, s->filestream,&uData) != UNZ_OK) + err=UNZ_ERRNO; +/* + else if ((err==UNZ_OK) && (uData!=s->cur_file_info.wVersion)) + err=UNZ_BADZIPFILE; +*/ + if (unz64local_getShort(&s->z_filefunc, s->filestream,&uFlags) != UNZ_OK) + err=UNZ_ERRNO; + + if (unz64local_getShort(&s->z_filefunc, s->filestream,&uData) != UNZ_OK) + err=UNZ_ERRNO; + else if ((err==UNZ_OK) && (uData!=s->cur_file_info.compression_method)) + err=UNZ_BADZIPFILE; + + if ((err==UNZ_OK) && (s->cur_file_info.compression_method!=0) && +/* #ifdef HAVE_BZIP2 */ + (s->cur_file_info.compression_method!=Z_BZIP2ED) && +/* #endif */ + (s->cur_file_info.compression_method!=Z_DEFLATED)) + err=UNZ_BADZIPFILE; + + if (unz64local_getLong(&s->z_filefunc, s->filestream,&uData) != UNZ_OK) /* date/time */ + err=UNZ_ERRNO; + + if (unz64local_getLong(&s->z_filefunc, s->filestream,&uData) != UNZ_OK) /* crc */ + err=UNZ_ERRNO; + else if ((err==UNZ_OK) && (uData!=s->cur_file_info.crc) && ((uFlags & 8)==0)) + err=UNZ_BADZIPFILE; + + if (unz64local_getLong(&s->z_filefunc, s->filestream,&uData) != UNZ_OK) /* size compr */ + err=UNZ_ERRNO; + else if (uData != 0xFFFFFFFF && (err==UNZ_OK) && (uData!=s->cur_file_info.compressed_size) && ((uFlags & 8)==0)) + err=UNZ_BADZIPFILE; + + if (unz64local_getLong(&s->z_filefunc, s->filestream,&uData) != UNZ_OK) /* size uncompr */ + err=UNZ_ERRNO; + else if (uData != 0xFFFFFFFF && (err==UNZ_OK) && (uData!=s->cur_file_info.uncompressed_size) && ((uFlags & 8)==0)) + err=UNZ_BADZIPFILE; + + if (unz64local_getShort(&s->z_filefunc, s->filestream,&size_filename) != UNZ_OK) + err=UNZ_ERRNO; + else if ((err==UNZ_OK) && (size_filename!=s->cur_file_info.size_filename)) + err=UNZ_BADZIPFILE; + + *piSizeVar += (uInt)size_filename; + + if (unz64local_getShort(&s->z_filefunc, s->filestream,&size_extra_field) != UNZ_OK) + err=UNZ_ERRNO; + *poffset_local_extrafield= s->cur_file_info_internal.offset_curfile + + SIZEZIPLOCALHEADER + size_filename; + *psize_local_extrafield = (uInt)size_extra_field; + + *piSizeVar += (uInt)size_extra_field; + + return err; +} + +/* + Open for reading data the current file in the zipfile. + If there is no error and the file is opened, the return value is UNZ_OK. 
+*/ +extern int ZEXPORT unzOpenCurrentFile3 (unzFile file, int* method, + int* level, int raw, const char* password) +{ + int err=UNZ_OK; + uInt iSizeVar; + unz64_s* s; + file_in_zip64_read_info_s* pfile_in_zip_read_info; + ZPOS64_T offset_local_extrafield; /* offset of the local extra field */ + uInt size_local_extrafield; /* size of the local extra field */ +# ifndef NOUNCRYPT + char source[12]; +# else + if (password != NULL) + return UNZ_PARAMERROR; +# endif + + if (file==NULL) + return UNZ_PARAMERROR; + s=(unz64_s*)file; + if (!s->current_file_ok) + return UNZ_PARAMERROR; + + if (s->pfile_in_zip_read != NULL) + unzCloseCurrentFile(file); + + if (unz64local_CheckCurrentFileCoherencyHeader(s,&iSizeVar, &offset_local_extrafield,&size_local_extrafield)!=UNZ_OK) + return UNZ_BADZIPFILE; + + pfile_in_zip_read_info = (file_in_zip64_read_info_s*)ALLOC(sizeof(file_in_zip64_read_info_s)); + if (pfile_in_zip_read_info==NULL) + return UNZ_INTERNALERROR; + + pfile_in_zip_read_info->read_buffer=(char*)ALLOC(UNZ_BUFSIZE); + pfile_in_zip_read_info->offset_local_extrafield = offset_local_extrafield; + pfile_in_zip_read_info->size_local_extrafield = size_local_extrafield; + pfile_in_zip_read_info->pos_local_extrafield=0; + pfile_in_zip_read_info->raw=raw; + + if (pfile_in_zip_read_info->read_buffer==NULL) + { + TRYFREE(pfile_in_zip_read_info); + return UNZ_INTERNALERROR; + } + + pfile_in_zip_read_info->stream_initialised=0; + + if (method!=NULL) + *method = (int)s->cur_file_info.compression_method; + + if (level!=NULL) + { + *level = 6; + switch (s->cur_file_info.flag & 0x06) + { + case 6 : *level = 1; break; + case 4 : *level = 2; break; + case 2 : *level = 9; break; + } + } + + if ((s->cur_file_info.compression_method!=0) && +/* #ifdef HAVE_BZIP2 */ + (s->cur_file_info.compression_method!=Z_BZIP2ED) && +/* #endif */ + (s->cur_file_info.compression_method!=Z_DEFLATED)) + + err=UNZ_BADZIPFILE; + + pfile_in_zip_read_info->crc32_wait=s->cur_file_info.crc; + pfile_in_zip_read_info->crc32=0; + pfile_in_zip_read_info->total_out_64=0; + pfile_in_zip_read_info->compression_method = s->cur_file_info.compression_method; + pfile_in_zip_read_info->filestream=s->filestream; + pfile_in_zip_read_info->z_filefunc=s->z_filefunc; + pfile_in_zip_read_info->byte_before_the_zipfile=s->byte_before_the_zipfile; + + pfile_in_zip_read_info->stream.total_out = 0; + + if ((s->cur_file_info.compression_method==Z_BZIP2ED) && (!raw)) + { +#ifdef HAVE_BZIP2 + pfile_in_zip_read_info->bstream.bzalloc = (void *(*) (void *, int, int))0; + pfile_in_zip_read_info->bstream.bzfree = (free_func)0; + pfile_in_zip_read_info->bstream.opaque = (voidpf)0; + pfile_in_zip_read_info->bstream.state = (voidpf)0; + + pfile_in_zip_read_info->stream.zalloc = (alloc_func)0; + pfile_in_zip_read_info->stream.zfree = (free_func)0; + pfile_in_zip_read_info->stream.opaque = (voidpf)0; + pfile_in_zip_read_info->stream.next_in = (voidpf)0; + pfile_in_zip_read_info->stream.avail_in = 0; + + err=BZ2_bzDecompressInit(&pfile_in_zip_read_info->bstream, 0, 0); + if (err == Z_OK) + pfile_in_zip_read_info->stream_initialised=Z_BZIP2ED; + else + { + TRYFREE(pfile_in_zip_read_info); + return err; + } +#else + pfile_in_zip_read_info->raw=1; +#endif + } + else if ((s->cur_file_info.compression_method==Z_DEFLATED) && (!raw)) + { + pfile_in_zip_read_info->stream.zalloc = (alloc_func)0; + pfile_in_zip_read_info->stream.zfree = (free_func)0; + pfile_in_zip_read_info->stream.opaque = (voidpf)0; + pfile_in_zip_read_info->stream.next_in = 0; + 
pfile_in_zip_read_info->stream.avail_in = 0; + + err=inflateInit2(&pfile_in_zip_read_info->stream, -MAX_WBITS); + if (err == Z_OK) + pfile_in_zip_read_info->stream_initialised=Z_DEFLATED; + else + { + TRYFREE(pfile_in_zip_read_info); + return err; + } + /* windowBits is passed < 0 to tell that there is no zlib header. + * Note that in this case inflate *requires* an extra "dummy" byte + * after the compressed stream in order to complete decompression and + * return Z_STREAM_END. + * In unzip, i don't wait absolutely Z_STREAM_END because I known the + * size of both compressed and uncompressed data + */ + } + pfile_in_zip_read_info->rest_read_compressed = + s->cur_file_info.compressed_size ; + pfile_in_zip_read_info->rest_read_uncompressed = + s->cur_file_info.uncompressed_size ; + + + pfile_in_zip_read_info->pos_in_zipfile = + s->cur_file_info_internal.offset_curfile + SIZEZIPLOCALHEADER + + iSizeVar; + + pfile_in_zip_read_info->stream.avail_in = (uInt)0; + + s->pfile_in_zip_read = pfile_in_zip_read_info; + s->encrypted = 0; + +# ifndef NOUNCRYPT + if (password != NULL) + { + int i; + s->pcrc_32_tab = get_crc_table(); + init_keys(password,s->keys,s->pcrc_32_tab); + if (ZSEEK64(s->z_filefunc, s->filestream, + s->pfile_in_zip_read->pos_in_zipfile + + s->pfile_in_zip_read->byte_before_the_zipfile, + SEEK_SET)!=0) + return UNZ_INTERNALERROR; + if(ZREAD64(s->z_filefunc, s->filestream,source, 12)<12) + return UNZ_INTERNALERROR; + + for (i = 0; i<12; i++) + zdecode(s->keys,s->pcrc_32_tab,source[i]); + + s->pfile_in_zip_read->pos_in_zipfile+=12; + s->encrypted=1; + } +# endif + + + return UNZ_OK; +} + +extern int ZEXPORT unzOpenCurrentFile (unzFile file) +{ + return unzOpenCurrentFile3(file, NULL, NULL, 0, NULL); +} + +extern int ZEXPORT unzOpenCurrentFilePassword (unzFile file, const char* password) +{ + return unzOpenCurrentFile3(file, NULL, NULL, 0, password); +} + +extern int ZEXPORT unzOpenCurrentFile2 (unzFile file, int* method, int* level, int raw) +{ + return unzOpenCurrentFile3(file, method, level, raw, NULL); +} + +/** Addition for GDAL : START */ + +extern ZPOS64_T ZEXPORT unzGetCurrentFileZStreamPos64( unzFile file) +{ + unz64_s* s; + file_in_zip64_read_info_s* pfile_in_zip_read_info; + s=(unz64_s*)file; + if (file==NULL) + return 0; //UNZ_PARAMERROR; + pfile_in_zip_read_info=s->pfile_in_zip_read; + if (pfile_in_zip_read_info==NULL) + return 0; //UNZ_PARAMERROR; + return pfile_in_zip_read_info->pos_in_zipfile + + pfile_in_zip_read_info->byte_before_the_zipfile; +} + +/** Addition for GDAL : END */ + +/* + Read bytes from the current file. + buf contain buffer where data must be copied + len the size of buf. 
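
Putting unzOpenCurrentFile(), unzReadCurrentFile() and unzCloseCurrentFile() (defined further below) together, extracting the current entry is a plain read loop. A sketch under stated assumptions, not part of the patched sources: the 8 KB buffer and extract_current() are arbitrary, and the final close is what reports UNZ_CRCERROR.

    /* Illustrative sketch only; not part of the patched sources. */
    #include <stdio.h>
    #include "unzip.h"

    int extract_current(unzFile uf, FILE *out)
    {
      char buf[8192];                        /* arbitrary buffer size for this sketch */
      int n = 0;
      int err = unzOpenCurrentFile(uf);
      if (err != UNZ_OK)
        return err;
      /* unzReadCurrentFile returns >0 bytes read, 0 at end of entry, <0 on error */
      while ((n = unzReadCurrentFile(uf, buf, sizeof(buf))) > 0)
      {
        if (fwrite(buf, 1, (size_t)n, out) != (size_t)n)
        {
          n = UNZ_ERRNO;
          break;
        }
      }
      err = unzCloseCurrentFile(uf);         /* UNZ_CRCERROR if the CRC does not match */
      return (n < 0) ? n : err;
    }
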
+ + return the number of byte copied if somes bytes are copied + return 0 if the end of file was reached + return <0 with error code if there is an error + (UNZ_ERRNO for IO error, or zLib error for uncompress error) +*/ +extern int ZEXPORT unzReadCurrentFile (unzFile file, voidp buf, unsigned len) +{ + int err=UNZ_OK; + uInt iRead = 0; + unz64_s* s; + file_in_zip64_read_info_s* pfile_in_zip_read_info; + if (file==NULL) + return UNZ_PARAMERROR; + s=(unz64_s*)file; + pfile_in_zip_read_info=s->pfile_in_zip_read; + + if (pfile_in_zip_read_info==NULL) + return UNZ_PARAMERROR; + + + if (pfile_in_zip_read_info->read_buffer == NULL) + return UNZ_END_OF_LIST_OF_FILE; + if (len==0) + return 0; + + pfile_in_zip_read_info->stream.next_out = (Bytef*)buf; + + pfile_in_zip_read_info->stream.avail_out = (uInt)len; + + if ((len>pfile_in_zip_read_info->rest_read_uncompressed) && + (!(pfile_in_zip_read_info->raw))) + pfile_in_zip_read_info->stream.avail_out = + (uInt)pfile_in_zip_read_info->rest_read_uncompressed; + + if ((len>pfile_in_zip_read_info->rest_read_compressed+ + pfile_in_zip_read_info->stream.avail_in) && + (pfile_in_zip_read_info->raw)) + pfile_in_zip_read_info->stream.avail_out = + (uInt)pfile_in_zip_read_info->rest_read_compressed+ + pfile_in_zip_read_info->stream.avail_in; + + while (pfile_in_zip_read_info->stream.avail_out>0) + { + if ((pfile_in_zip_read_info->stream.avail_in==0) && + (pfile_in_zip_read_info->rest_read_compressed>0)) + { + uInt uReadThis = UNZ_BUFSIZE; + if (pfile_in_zip_read_info->rest_read_compressedrest_read_compressed; + if (uReadThis == 0) + return UNZ_EOF; + if (ZSEEK64(pfile_in_zip_read_info->z_filefunc, + pfile_in_zip_read_info->filestream, + pfile_in_zip_read_info->pos_in_zipfile + + pfile_in_zip_read_info->byte_before_the_zipfile, + ZLIB_FILEFUNC_SEEK_SET)!=0) + return UNZ_ERRNO; + if (ZREAD64(pfile_in_zip_read_info->z_filefunc, + pfile_in_zip_read_info->filestream, + pfile_in_zip_read_info->read_buffer, + uReadThis)!=uReadThis) + return UNZ_ERRNO; + + +# ifndef NOUNCRYPT + if(s->encrypted) + { + uInt i; + for(i=0;iread_buffer[i] = + zdecode(s->keys,s->pcrc_32_tab, + pfile_in_zip_read_info->read_buffer[i]); + } +# endif + + + pfile_in_zip_read_info->pos_in_zipfile += uReadThis; + + pfile_in_zip_read_info->rest_read_compressed-=uReadThis; + + pfile_in_zip_read_info->stream.next_in = + (Bytef*)pfile_in_zip_read_info->read_buffer; + pfile_in_zip_read_info->stream.avail_in = (uInt)uReadThis; + } + + if ((pfile_in_zip_read_info->compression_method==0) || (pfile_in_zip_read_info->raw)) + { + uInt uDoCopy,i ; + + if ((pfile_in_zip_read_info->stream.avail_in == 0) && + (pfile_in_zip_read_info->rest_read_compressed == 0)) + return (iRead==0) ? 
UNZ_EOF : iRead; + + if (pfile_in_zip_read_info->stream.avail_out < + pfile_in_zip_read_info->stream.avail_in) + uDoCopy = pfile_in_zip_read_info->stream.avail_out ; + else + uDoCopy = pfile_in_zip_read_info->stream.avail_in ; + + for (i=0;istream.next_out+i) = + *(pfile_in_zip_read_info->stream.next_in+i); + + pfile_in_zip_read_info->total_out_64 = pfile_in_zip_read_info->total_out_64 + uDoCopy; + + pfile_in_zip_read_info->crc32 = crc32(pfile_in_zip_read_info->crc32, + pfile_in_zip_read_info->stream.next_out, + uDoCopy); + pfile_in_zip_read_info->rest_read_uncompressed-=uDoCopy; + pfile_in_zip_read_info->stream.avail_in -= uDoCopy; + pfile_in_zip_read_info->stream.avail_out -= uDoCopy; + pfile_in_zip_read_info->stream.next_out += uDoCopy; + pfile_in_zip_read_info->stream.next_in += uDoCopy; + pfile_in_zip_read_info->stream.total_out += uDoCopy; + iRead += uDoCopy; + } + else if (pfile_in_zip_read_info->compression_method==Z_BZIP2ED) + { +#ifdef HAVE_BZIP2 + uLong uTotalOutBefore,uTotalOutAfter; + const Bytef *bufBefore; + uLong uOutThis; + + pfile_in_zip_read_info->bstream.next_in = (char*)pfile_in_zip_read_info->stream.next_in; + pfile_in_zip_read_info->bstream.avail_in = pfile_in_zip_read_info->stream.avail_in; + pfile_in_zip_read_info->bstream.total_in_lo32 = pfile_in_zip_read_info->stream.total_in; + pfile_in_zip_read_info->bstream.total_in_hi32 = 0; + pfile_in_zip_read_info->bstream.next_out = (char*)pfile_in_zip_read_info->stream.next_out; + pfile_in_zip_read_info->bstream.avail_out = pfile_in_zip_read_info->stream.avail_out; + pfile_in_zip_read_info->bstream.total_out_lo32 = pfile_in_zip_read_info->stream.total_out; + pfile_in_zip_read_info->bstream.total_out_hi32 = 0; + + uTotalOutBefore = pfile_in_zip_read_info->bstream.total_out_lo32; + bufBefore = (const Bytef *)pfile_in_zip_read_info->bstream.next_out; + + err=BZ2_bzDecompress(&pfile_in_zip_read_info->bstream); + + uTotalOutAfter = pfile_in_zip_read_info->bstream.total_out_lo32; + uOutThis = uTotalOutAfter-uTotalOutBefore; + + pfile_in_zip_read_info->total_out_64 = pfile_in_zip_read_info->total_out_64 + uOutThis; + + pfile_in_zip_read_info->crc32 = crc32(pfile_in_zip_read_info->crc32,bufBefore, (uInt)(uOutThis)); + pfile_in_zip_read_info->rest_read_uncompressed -= uOutThis; + iRead += (uInt)(uTotalOutAfter - uTotalOutBefore); + + pfile_in_zip_read_info->stream.next_in = (Bytef*)pfile_in_zip_read_info->bstream.next_in; + pfile_in_zip_read_info->stream.avail_in = pfile_in_zip_read_info->bstream.avail_in; + pfile_in_zip_read_info->stream.total_in = pfile_in_zip_read_info->bstream.total_in_lo32; + pfile_in_zip_read_info->stream.next_out = (Bytef*)pfile_in_zip_read_info->bstream.next_out; + pfile_in_zip_read_info->stream.avail_out = pfile_in_zip_read_info->bstream.avail_out; + pfile_in_zip_read_info->stream.total_out = pfile_in_zip_read_info->bstream.total_out_lo32; + + if (err==BZ_STREAM_END) + return (iRead==0) ? 
UNZ_EOF : iRead; + if (err!=BZ_OK) + break; +#endif + } // end Z_BZIP2ED + else + { + ZPOS64_T uTotalOutBefore,uTotalOutAfter; + const Bytef *bufBefore; + ZPOS64_T uOutThis; + int flush=Z_SYNC_FLUSH; + + uTotalOutBefore = pfile_in_zip_read_info->stream.total_out; + bufBefore = pfile_in_zip_read_info->stream.next_out; + + /* + if ((pfile_in_zip_read_info->rest_read_uncompressed == + pfile_in_zip_read_info->stream.avail_out) && + (pfile_in_zip_read_info->rest_read_compressed == 0)) + flush = Z_FINISH; + */ + err=inflate(&pfile_in_zip_read_info->stream,flush); + + if ((err>=0) && (pfile_in_zip_read_info->stream.msg!=NULL)) + err = Z_DATA_ERROR; + + uTotalOutAfter = pfile_in_zip_read_info->stream.total_out; + uOutThis = uTotalOutAfter-uTotalOutBefore; + + pfile_in_zip_read_info->total_out_64 = pfile_in_zip_read_info->total_out_64 + uOutThis; + + pfile_in_zip_read_info->crc32 = + crc32(pfile_in_zip_read_info->crc32,bufBefore, + (uInt)(uOutThis)); + + pfile_in_zip_read_info->rest_read_uncompressed -= + uOutThis; + + iRead += (uInt)(uTotalOutAfter - uTotalOutBefore); + + if (err==Z_STREAM_END) + return (iRead==0) ? UNZ_EOF : iRead; + if (err!=Z_OK) + break; + } + } + + if (err==Z_OK) + return iRead; + return err; +} + + +/* + Give the current position in uncompressed data +*/ +extern z_off_t ZEXPORT unztell (unzFile file) +{ + unz64_s* s; + file_in_zip64_read_info_s* pfile_in_zip_read_info; + if (file==NULL) + return UNZ_PARAMERROR; + s=(unz64_s*)file; + pfile_in_zip_read_info=s->pfile_in_zip_read; + + if (pfile_in_zip_read_info==NULL) + return UNZ_PARAMERROR; + + return (z_off_t)pfile_in_zip_read_info->stream.total_out; +} + +extern ZPOS64_T ZEXPORT unztell64 (unzFile file) +{ + + unz64_s* s; + file_in_zip64_read_info_s* pfile_in_zip_read_info; + if (file==NULL) + return (ZPOS64_T)-1; + s=(unz64_s*)file; + pfile_in_zip_read_info=s->pfile_in_zip_read; + + if (pfile_in_zip_read_info==NULL) + return (ZPOS64_T)-1; + + return pfile_in_zip_read_info->total_out_64; +} + + +/* + return 1 if the end of file was reached, 0 elsewhere +*/ +extern int ZEXPORT unzeof (unzFile file) +{ + unz64_s* s; + file_in_zip64_read_info_s* pfile_in_zip_read_info; + if (file==NULL) + return UNZ_PARAMERROR; + s=(unz64_s*)file; + pfile_in_zip_read_info=s->pfile_in_zip_read; + + if (pfile_in_zip_read_info==NULL) + return UNZ_PARAMERROR; + + if (pfile_in_zip_read_info->rest_read_uncompressed == 0) + return 1; + else + return 0; +} + + + +/* +Read extra field from the current file (opened by unzOpenCurrentFile) +This is the local-header version of the extra field (sometimes, there is +more info in the local-header version than in the central-header) + + if buf==NULL, it return the size of the local extra field that can be read + + if buf!=NULL, len is the size of the buffer, the extra header is copied in + buf. 
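
The two-call pattern described above (size query with buf==NULL, then the actual copy) is how unzGetLocalExtrafield() is meant to be used once the entry has been opened with unzOpenCurrentFile(). A hedged sketch, not part of the patched sources; read_local_extra() is a placeholder helper.

    /* Illustrative sketch only; not part of the patched sources. */
    #include <stdlib.h>
    #include "unzip.h"

    int read_local_extra(unzFile uf, char **out, int *outLen)
    {
      /* First call with buf==NULL only reports how much is left to read. */
      int size = unzGetLocalExtrafield(uf, NULL, 0);
      char *buf;
      *out = NULL;
      *outLen = 0;
      if (size <= 0)
        return size;                         /* 0 = nothing left, <0 = error code */
      buf = (char *)malloc((size_t)size);
      if (buf == NULL)
        return UNZ_INTERNALERROR;
      size = unzGetLocalExtrafield(uf, buf, (unsigned)size);
      if (size < 0)
      {
        free(buf);
        return size;
      }
      *out = buf;
      *outLen = size;
      return UNZ_OK;
    }
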
+ the return value is the number of bytes copied in buf, or (if <0) + the error code +*/ +extern int ZEXPORT unzGetLocalExtrafield (unzFile file, voidp buf, unsigned len) +{ + unz64_s* s; + file_in_zip64_read_info_s* pfile_in_zip_read_info; + uInt read_now; + ZPOS64_T size_to_read; + + if (file==NULL) + return UNZ_PARAMERROR; + s=(unz64_s*)file; + pfile_in_zip_read_info=s->pfile_in_zip_read; + + if (pfile_in_zip_read_info==NULL) + return UNZ_PARAMERROR; + + size_to_read = (pfile_in_zip_read_info->size_local_extrafield - + pfile_in_zip_read_info->pos_local_extrafield); + + if (buf==NULL) + return (int)size_to_read; + + if (len>size_to_read) + read_now = (uInt)size_to_read; + else + read_now = (uInt)len ; + + if (read_now==0) + return 0; + + if (ZSEEK64(pfile_in_zip_read_info->z_filefunc, + pfile_in_zip_read_info->filestream, + pfile_in_zip_read_info->offset_local_extrafield + + pfile_in_zip_read_info->pos_local_extrafield, + ZLIB_FILEFUNC_SEEK_SET)!=0) + return UNZ_ERRNO; + + if (ZREAD64(pfile_in_zip_read_info->z_filefunc, + pfile_in_zip_read_info->filestream, + buf,read_now)!=read_now) + return UNZ_ERRNO; + + return (int)read_now; +} + +/* + Close the file in zip opened with unzOpenCurrentFile + Return UNZ_CRCERROR if all the file was read but the CRC is not good +*/ +extern int ZEXPORT unzCloseCurrentFile (unzFile file) +{ + int err=UNZ_OK; + + unz64_s* s; + file_in_zip64_read_info_s* pfile_in_zip_read_info; + if (file==NULL) + return UNZ_PARAMERROR; + s=(unz64_s*)file; + pfile_in_zip_read_info=s->pfile_in_zip_read; + + if (pfile_in_zip_read_info==NULL) + return UNZ_PARAMERROR; + + + if ((pfile_in_zip_read_info->rest_read_uncompressed == 0) && + (!pfile_in_zip_read_info->raw)) + { + if (pfile_in_zip_read_info->crc32 != pfile_in_zip_read_info->crc32_wait) + err=UNZ_CRCERROR; + } + + + TRYFREE(pfile_in_zip_read_info->read_buffer); + pfile_in_zip_read_info->read_buffer = NULL; + if (pfile_in_zip_read_info->stream_initialised == Z_DEFLATED) + inflateEnd(&pfile_in_zip_read_info->stream); +#ifdef HAVE_BZIP2 + else if (pfile_in_zip_read_info->stream_initialised == Z_BZIP2ED) + BZ2_bzDecompressEnd(&pfile_in_zip_read_info->bstream); +#endif + + + pfile_in_zip_read_info->stream_initialised = 0; + TRYFREE(pfile_in_zip_read_info); + + s->pfile_in_zip_read=NULL; + + return err; +} + + +/* + Get the global comment string of the ZipFile, in the szComment buffer. + uSizeBuf is the size of the szComment buffer. 
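
Since the global comment length is reported by unzGetGlobalInfo64(), a caller can size the buffer before calling unzGetGlobalComment() (implemented just below). A sketch, illustrative only and not part of the patched sources; print_archive_comment() is a placeholder name.

    /* Illustrative sketch only; not part of the patched sources. */
    #include <stdio.h>
    #include <stdlib.h>
    #include "unzip.h"

    int print_archive_comment(unzFile uf)
    {
      unz_global_info64 gi;
      char *comment;
      int n;
      if (unzGetGlobalInfo64(uf, &gi) != UNZ_OK)
        return UNZ_ERRNO;
      comment = (char *)malloc((size_t)gi.size_comment + 1);  /* +1 so the '\0' fits */
      if (comment == NULL)
        return UNZ_INTERNALERROR;
      n = unzGetGlobalComment(uf, comment, gi.size_comment + 1);
      if (n >= 0)
        printf("%.*s\n", n, comment);
      free(comment);
      return (n < 0) ? n : UNZ_OK;
    }
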
+ return the number of byte copied or an error code <0 +*/ +extern int ZEXPORT unzGetGlobalComment (unzFile file, char * szComment, uLong uSizeBuf) +{ + unz64_s* s; + uLong uReadThis ; + if (file==NULL) + return (int)UNZ_PARAMERROR; + s=(unz64_s*)file; + + uReadThis = uSizeBuf; + if (uReadThis>s->gi.size_comment) + uReadThis = s->gi.size_comment; + + if (ZSEEK64(s->z_filefunc,s->filestream,s->central_pos+22,ZLIB_FILEFUNC_SEEK_SET)!=0) + return UNZ_ERRNO; + + if (uReadThis>0) + { + *szComment='\0'; + if (ZREAD64(s->z_filefunc,s->filestream,szComment,uReadThis)!=uReadThis) + return UNZ_ERRNO; + } + + if ((szComment != NULL) && (uSizeBuf > s->gi.size_comment)) + *(szComment+s->gi.size_comment)='\0'; + return (int)uReadThis; +} + +/* Additions by RX '2004 */ +extern ZPOS64_T ZEXPORT unzGetOffset64(unzFile file) +{ + unz64_s* s; + + if (file==NULL) + return 0; //UNZ_PARAMERROR; + s=(unz64_s*)file; + if (!s->current_file_ok) + return 0; + if (s->gi.number_entry != 0 && s->gi.number_entry != 0xffff) + if (s->num_file==s->gi.number_entry) + return 0; + return s->pos_in_central_dir; +} + +extern uLong ZEXPORT unzGetOffset (unzFile file) +{ + ZPOS64_T offset64; + + if (file==NULL) + return 0; //UNZ_PARAMERROR; + offset64 = unzGetOffset64(file); + return (uLong)offset64; +} + +extern int ZEXPORT unzSetOffset64(unzFile file, ZPOS64_T pos) +{ + unz64_s* s; + int err; + + if (file==NULL) + return UNZ_PARAMERROR; + s=(unz64_s*)file; + + s->pos_in_central_dir = pos; + s->num_file = s->gi.number_entry; /* hack */ + err = unz64local_GetCurrentFileInfoInternal(file,&s->cur_file_info, + &s->cur_file_info_internal, + NULL,0,NULL,0,NULL,0); + s->current_file_ok = (err == UNZ_OK); + return err; +} + +extern int ZEXPORT unzSetOffset (unzFile file, uLong pos) +{ + return unzSetOffset64(file,pos); +} ADDED compat/zlib/contrib/minizip/unzip.h Index: compat/zlib/contrib/minizip/unzip.h ================================================================== --- compat/zlib/contrib/minizip/unzip.h +++ compat/zlib/contrib/minizip/unzip.h @@ -0,0 +1,437 @@ +/* unzip.h -- IO for uncompress .zip files using zlib + Version 1.1, February 14h, 2010 + part of the MiniZip project - ( http://www.winimage.com/zLibDll/minizip.html ) + + Copyright (C) 1998-2010 Gilles Vollant (minizip) ( http://www.winimage.com/zLibDll/minizip.html ) + + Modifications of Unzip for Zip64 + Copyright (C) 2007-2008 Even Rouault + + Modifications for Zip64 support on both zip and unzip + Copyright (C) 2009-2010 Mathias Svensson ( http://result42.com ) + + For more info read MiniZip_info.txt + + --------------------------------------------------------------------------------- + + Condition of use and distribution are the same than zlib : + + This software is provided 'as-is', without any express or implied + warranty. In no event will the authors be held liable for any damages + arising from the use of this software. + + Permission is granted to anyone to use this software for any purpose, + including commercial applications, and to alter it and redistribute it + freely, subject to the following restrictions: + + 1. The origin of this software must not be misrepresented; you must not + claim that you wrote the original software. If you use this software + in a product, an acknowledgment in the product documentation would be + appreciated but is not required. + 2. Altered source versions must be plainly marked as such, and must not be + misrepresented as being the original software. + 3. This notice may not be removed or altered from any source distribution. 
+ + --------------------------------------------------------------------------------- + + Changes + + See header of unzip64.c + +*/ + +#ifndef _unz64_H +#define _unz64_H + +#ifdef __cplusplus +extern "C" { +#endif + +#ifndef _ZLIB_H +#include "zlib.h" +#endif + +#ifndef _ZLIBIOAPI_H +#include "ioapi.h" +#endif + +#ifdef HAVE_BZIP2 +#include "bzlib.h" +#endif + +#define Z_BZIP2ED 12 + +#if defined(STRICTUNZIP) || defined(STRICTZIPUNZIP) +/* like the STRICT of WIN32, we define a pointer that cannot be converted + from (void*) without cast */ +typedef struct TagunzFile__ { int unused; } unzFile__; +typedef unzFile__ *unzFile; +#else +typedef voidp unzFile; +#endif + + +#define UNZ_OK (0) +#define UNZ_END_OF_LIST_OF_FILE (-100) +#define UNZ_ERRNO (Z_ERRNO) +#define UNZ_EOF (0) +#define UNZ_PARAMERROR (-102) +#define UNZ_BADZIPFILE (-103) +#define UNZ_INTERNALERROR (-104) +#define UNZ_CRCERROR (-105) + +/* tm_unz contain date/time info */ +typedef struct tm_unz_s +{ + uInt tm_sec; /* seconds after the minute - [0,59] */ + uInt tm_min; /* minutes after the hour - [0,59] */ + uInt tm_hour; /* hours since midnight - [0,23] */ + uInt tm_mday; /* day of the month - [1,31] */ + uInt tm_mon; /* months since January - [0,11] */ + uInt tm_year; /* years - [1980..2044] */ +} tm_unz; + +/* unz_global_info structure contain global data about the ZIPfile + These data comes from the end of central dir */ +typedef struct unz_global_info64_s +{ + ZPOS64_T number_entry; /* total number of entries in + the central dir on this disk */ + uLong size_comment; /* size of the global comment of the zipfile */ +} unz_global_info64; + +typedef struct unz_global_info_s +{ + uLong number_entry; /* total number of entries in + the central dir on this disk */ + uLong size_comment; /* size of the global comment of the zipfile */ +} unz_global_info; + +/* unz_file_info contain information about a file in the zipfile */ +typedef struct unz_file_info64_s +{ + uLong version; /* version made by 2 bytes */ + uLong version_needed; /* version needed to extract 2 bytes */ + uLong flag; /* general purpose bit flag 2 bytes */ + uLong compression_method; /* compression method 2 bytes */ + uLong dosDate; /* last mod file date in Dos fmt 4 bytes */ + uLong crc; /* crc-32 4 bytes */ + ZPOS64_T compressed_size; /* compressed size 8 bytes */ + ZPOS64_T uncompressed_size; /* uncompressed size 8 bytes */ + uLong size_filename; /* filename length 2 bytes */ + uLong size_file_extra; /* extra field length 2 bytes */ + uLong size_file_comment; /* file comment length 2 bytes */ + + uLong disk_num_start; /* disk number start 2 bytes */ + uLong internal_fa; /* internal file attributes 2 bytes */ + uLong external_fa; /* external file attributes 4 bytes */ + + tm_unz tmu_date; +} unz_file_info64; + +typedef struct unz_file_info_s +{ + uLong version; /* version made by 2 bytes */ + uLong version_needed; /* version needed to extract 2 bytes */ + uLong flag; /* general purpose bit flag 2 bytes */ + uLong compression_method; /* compression method 2 bytes */ + uLong dosDate; /* last mod file date in Dos fmt 4 bytes */ + uLong crc; /* crc-32 4 bytes */ + uLong compressed_size; /* compressed size 4 bytes */ + uLong uncompressed_size; /* uncompressed size 4 bytes */ + uLong size_filename; /* filename length 2 bytes */ + uLong size_file_extra; /* extra field length 2 bytes */ + uLong size_file_comment; /* file comment length 2 bytes */ + + uLong disk_num_start; /* disk number start 2 bytes */ + uLong internal_fa; /* internal file attributes 2 bytes */ + uLong 
external_fa; /* external file attributes 4 bytes */ + + tm_unz tmu_date; +} unz_file_info; + +extern int ZEXPORT unzStringFileNameCompare OF ((const char* fileName1, + const char* fileName2, + int iCaseSensitivity)); +/* + Compare two filename (fileName1,fileName2). + If iCaseSenisivity = 1, comparision is case sensitivity (like strcmp) + If iCaseSenisivity = 2, comparision is not case sensitivity (like strcmpi + or strcasecmp) + If iCaseSenisivity = 0, case sensitivity is defaut of your operating system + (like 1 on Unix, 2 on Windows) +*/ + + +extern unzFile ZEXPORT unzOpen OF((const char *path)); +extern unzFile ZEXPORT unzOpen64 OF((const void *path)); +/* + Open a Zip file. path contain the full pathname (by example, + on a Windows XP computer "c:\\zlib\\zlib113.zip" or on an Unix computer + "zlib/zlib113.zip". + If the zipfile cannot be opened (file don't exist or in not valid), the + return value is NULL. + Else, the return value is a unzFile Handle, usable with other function + of this unzip package. + the "64" function take a const void* pointer, because the path is just the + value passed to the open64_file_func callback. + Under Windows, if UNICODE is defined, using fill_fopen64_filefunc, the path + is a pointer to a wide unicode string (LPCTSTR is LPCWSTR), so const char* + does not describe the reality +*/ + + +extern unzFile ZEXPORT unzOpen2 OF((const char *path, + zlib_filefunc_def* pzlib_filefunc_def)); +/* + Open a Zip file, like unzOpen, but provide a set of file low level API + for read/write the zip file (see ioapi.h) +*/ + +extern unzFile ZEXPORT unzOpen2_64 OF((const void *path, + zlib_filefunc64_def* pzlib_filefunc_def)); +/* + Open a Zip file, like unz64Open, but provide a set of file low level API + for read/write the zip file (see ioapi.h) +*/ + +extern int ZEXPORT unzClose OF((unzFile file)); +/* + Close a ZipFile opened with unzOpen. + If there is files inside the .Zip opened with unzOpenCurrentFile (see later), + these files MUST be closed with unzCloseCurrentFile before call unzClose. + return UNZ_OK if there is no problem. */ + +extern int ZEXPORT unzGetGlobalInfo OF((unzFile file, + unz_global_info *pglobal_info)); + +extern int ZEXPORT unzGetGlobalInfo64 OF((unzFile file, + unz_global_info64 *pglobal_info)); +/* + Write info about the ZipFile in the *pglobal_info structure. + No preparation of the structure is needed + return UNZ_OK if there is no problem. */ + + +extern int ZEXPORT unzGetGlobalComment OF((unzFile file, + char *szComment, + uLong uSizeBuf)); +/* + Get the global comment string of the ZipFile, in the szComment buffer. + uSizeBuf is the size of the szComment buffer. + return the number of byte copied or an error code <0 +*/ + + +/***************************************************************************/ +/* Unzip package allow you browse the directory of the zipfile */ + +extern int ZEXPORT unzGoToFirstFile OF((unzFile file)); +/* + Set the current file of the zipfile to the first file. + return UNZ_OK if there is no problem +*/ + +extern int ZEXPORT unzGoToNextFile OF((unzFile file)); +/* + Set the current file of the zipfile to the next file. + return UNZ_OK if there is no problem + return UNZ_END_OF_LIST_OF_FILE if the actual file was the latest. +*/ + +extern int ZEXPORT unzLocateFile OF((unzFile file, + const char *szFileName, + int iCaseSensitivity)); +/* + Try locate the file szFileName in the zipfile. + For the iCaseSensitivity signification, see unzStringFileNameCompare + + return value : + UNZ_OK if the file is found. 
It becomes the current file. + UNZ_END_OF_LIST_OF_FILE if the file is not found +*/ + + +/* ****************************************** */ +/* Ryan supplied functions */ +/* unz_file_info contain information about a file in the zipfile */ +typedef struct unz_file_pos_s +{ + uLong pos_in_zip_directory; /* offset in zip file directory */ + uLong num_of_file; /* # of file */ +} unz_file_pos; + +extern int ZEXPORT unzGetFilePos( + unzFile file, + unz_file_pos* file_pos); + +extern int ZEXPORT unzGoToFilePos( + unzFile file, + unz_file_pos* file_pos); + +typedef struct unz64_file_pos_s +{ + ZPOS64_T pos_in_zip_directory; /* offset in zip file directory */ + ZPOS64_T num_of_file; /* # of file */ +} unz64_file_pos; + +extern int ZEXPORT unzGetFilePos64( + unzFile file, + unz64_file_pos* file_pos); + +extern int ZEXPORT unzGoToFilePos64( + unzFile file, + const unz64_file_pos* file_pos); + +/* ****************************************** */ + +extern int ZEXPORT unzGetCurrentFileInfo64 OF((unzFile file, + unz_file_info64 *pfile_info, + char *szFileName, + uLong fileNameBufferSize, + void *extraField, + uLong extraFieldBufferSize, + char *szComment, + uLong commentBufferSize)); + +extern int ZEXPORT unzGetCurrentFileInfo OF((unzFile file, + unz_file_info *pfile_info, + char *szFileName, + uLong fileNameBufferSize, + void *extraField, + uLong extraFieldBufferSize, + char *szComment, + uLong commentBufferSize)); +/* + Get Info about the current file + if pfile_info!=NULL, the *pfile_info structure will contain somes info about + the current file + if szFileName!=NULL, the filemane string will be copied in szFileName + (fileNameBufferSize is the size of the buffer) + if extraField!=NULL, the extra field information will be copied in extraField + (extraFieldBufferSize is the size of the buffer). + This is the Central-header version of the extra field + if szComment!=NULL, the comment string of the file will be copied in szComment + (commentBufferSize is the size of the buffer) +*/ + + +/** Addition for GDAL : START */ + +extern ZPOS64_T ZEXPORT unzGetCurrentFileZStreamPos64 OF((unzFile file)); + +/** Addition for GDAL : END */ + + +/***************************************************************************/ +/* for reading the content of the current zipfile, you can open it, read data + from it, and close it (you can close it before reading all the file) + */ + +extern int ZEXPORT unzOpenCurrentFile OF((unzFile file)); +/* + Open for reading data the current file in the zipfile. + If there is no error, the return value is UNZ_OK. +*/ + +extern int ZEXPORT unzOpenCurrentFilePassword OF((unzFile file, + const char* password)); +/* + Open for reading data the current file in the zipfile. + password is a crypting password + If there is no error, the return value is UNZ_OK. 
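+
+  A minimal usage sketch, assuming uf is an unzFile returned by unzOpen/unzOpen64,
+  buf a caller-owned buffer and n an int:
+
+      unzLocateFile(uf, "file.txt", 0);
+      unzOpenCurrentFilePassword(uf, password);    (or unzOpenCurrentFile(uf))
+      while ((n = unzReadCurrentFile(uf, buf, sizeof(buf))) > 0) { ... }
+      unzCloseCurrentFile(uf);
+
+  n == 0 signals end of file, n < 0 an error code.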
+*/ + +extern int ZEXPORT unzOpenCurrentFile2 OF((unzFile file, + int* method, + int* level, + int raw)); +/* + Same than unzOpenCurrentFile, but open for read raw the file (not uncompress) + if raw==1 + *method will receive method of compression, *level will receive level of + compression + note : you can set level parameter as NULL (if you did not want known level, + but you CANNOT set method parameter as NULL +*/ + +extern int ZEXPORT unzOpenCurrentFile3 OF((unzFile file, + int* method, + int* level, + int raw, + const char* password)); +/* + Same than unzOpenCurrentFile, but open for read raw the file (not uncompress) + if raw==1 + *method will receive method of compression, *level will receive level of + compression + note : you can set level parameter as NULL (if you did not want known level, + but you CANNOT set method parameter as NULL +*/ + + +extern int ZEXPORT unzCloseCurrentFile OF((unzFile file)); +/* + Close the file in zip opened with unzOpenCurrentFile + Return UNZ_CRCERROR if all the file was read but the CRC is not good +*/ + +extern int ZEXPORT unzReadCurrentFile OF((unzFile file, + voidp buf, + unsigned len)); +/* + Read bytes from the current file (opened by unzOpenCurrentFile) + buf contain buffer where data must be copied + len the size of buf. + + return the number of byte copied if somes bytes are copied + return 0 if the end of file was reached + return <0 with error code if there is an error + (UNZ_ERRNO for IO error, or zLib error for uncompress error) +*/ + +extern z_off_t ZEXPORT unztell OF((unzFile file)); + +extern ZPOS64_T ZEXPORT unztell64 OF((unzFile file)); +/* + Give the current position in uncompressed data +*/ + +extern int ZEXPORT unzeof OF((unzFile file)); +/* + return 1 if the end of file was reached, 0 elsewhere +*/ + +extern int ZEXPORT unzGetLocalExtrafield OF((unzFile file, + voidp buf, + unsigned len)); +/* + Read extra field from the current file (opened by unzOpenCurrentFile) + This is the local-header version of the extra field (sometimes, there is + more info in the local-header version than in the central-header) + + if buf==NULL, it return the size of the local extra field + + if buf!=NULL, len is the size of the buffer, the extra header is copied in + buf. 
+ the return value is the number of bytes copied in buf, or (if <0) + the error code +*/ + +/***************************************************************************/ + +/* Get the current file offset */ +extern ZPOS64_T ZEXPORT unzGetOffset64 (unzFile file); +extern uLong ZEXPORT unzGetOffset (unzFile file); + +/* Set the current file offset */ +extern int ZEXPORT unzSetOffset64 (unzFile file, ZPOS64_T pos); +extern int ZEXPORT unzSetOffset (unzFile file, uLong pos); + + + +#ifdef __cplusplus +} +#endif + +#endif /* _unz64_H */ ADDED compat/zlib/contrib/minizip/zip.c Index: compat/zlib/contrib/minizip/zip.c ================================================================== --- compat/zlib/contrib/minizip/zip.c +++ compat/zlib/contrib/minizip/zip.c @@ -0,0 +1,2007 @@ +/* zip.c -- IO on .zip files using zlib + Version 1.1, February 14h, 2010 + part of the MiniZip project - ( http://www.winimage.com/zLibDll/minizip.html ) + + Copyright (C) 1998-2010 Gilles Vollant (minizip) ( http://www.winimage.com/zLibDll/minizip.html ) + + Modifications for Zip64 support + Copyright (C) 2009-2010 Mathias Svensson ( http://result42.com ) + + For more info read MiniZip_info.txt + + Changes + Oct-2009 - Mathias Svensson - Remove old C style function prototypes + Oct-2009 - Mathias Svensson - Added Zip64 Support when creating new file archives + Oct-2009 - Mathias Svensson - Did some code cleanup and refactoring to get better overview of some functions. + Oct-2009 - Mathias Svensson - Added zipRemoveExtraInfoBlock to strip extra field data from its ZIP64 data + It is used when recreting zip archive with RAW when deleting items from a zip. + ZIP64 data is automaticly added to items that needs it, and existing ZIP64 data need to be removed. + Oct-2009 - Mathias Svensson - Added support for BZIP2 as compression mode (bzip2 lib is required) + Jan-2010 - back to unzip and minizip 1.0 name scheme, with compatibility layer + +*/ + + +#include +#include +#include +#include +#include "zlib.h" +#include "zip.h" + +#ifdef STDC +# include +# include +# include +#endif +#ifdef NO_ERRNO_H + extern int errno; +#else +# include +#endif + + +#ifndef local +# define local static +#endif +/* compile with -Dlocal if your debugger can't find static symbols */ + +#ifndef VERSIONMADEBY +# define VERSIONMADEBY (0x0) /* platform depedent */ +#endif + +#ifndef Z_BUFSIZE +#define Z_BUFSIZE (64*1024) //(16384) +#endif + +#ifndef Z_MAXFILENAMEINZIP +#define Z_MAXFILENAMEINZIP (256) +#endif + +#ifndef ALLOC +# define ALLOC(size) (malloc(size)) +#endif +#ifndef TRYFREE +# define TRYFREE(p) {if (p) free(p);} +#endif + +/* +#define SIZECENTRALDIRITEM (0x2e) +#define SIZEZIPLOCALHEADER (0x1e) +*/ + +/* I've found an old Unix (a SunOS 4.1.3_U1) without all SEEK_* defined.... 
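+   (hence the SEEK_SET / SEEK_CUR / SEEK_END fallbacks defined just below)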
*/ + + +// NOT sure that this work on ALL platform +#define MAKEULONG64(a, b) ((ZPOS64_T)(((unsigned long)(a)) | ((ZPOS64_T)((unsigned long)(b))) << 32)) + +#ifndef SEEK_CUR +#define SEEK_CUR 1 +#endif + +#ifndef SEEK_END +#define SEEK_END 2 +#endif + +#ifndef SEEK_SET +#define SEEK_SET 0 +#endif + +#ifndef DEF_MEM_LEVEL +#if MAX_MEM_LEVEL >= 8 +# define DEF_MEM_LEVEL 8 +#else +# define DEF_MEM_LEVEL MAX_MEM_LEVEL +#endif +#endif +const char zip_copyright[] =" zip 1.01 Copyright 1998-2004 Gilles Vollant - http://www.winimage.com/zLibDll"; + + +#define SIZEDATA_INDATABLOCK (4096-(4*4)) + +#define LOCALHEADERMAGIC (0x04034b50) +#define CENTRALHEADERMAGIC (0x02014b50) +#define ENDHEADERMAGIC (0x06054b50) +#define ZIP64ENDHEADERMAGIC (0x6064b50) +#define ZIP64ENDLOCHEADERMAGIC (0x7064b50) + +#define FLAG_LOCALHEADER_OFFSET (0x06) +#define CRC_LOCALHEADER_OFFSET (0x0e) + +#define SIZECENTRALHEADER (0x2e) /* 46 */ + +typedef struct linkedlist_datablock_internal_s +{ + struct linkedlist_datablock_internal_s* next_datablock; + uLong avail_in_this_block; + uLong filled_in_this_block; + uLong unused; /* for future use and alignement */ + unsigned char data[SIZEDATA_INDATABLOCK]; +} linkedlist_datablock_internal; + +typedef struct linkedlist_data_s +{ + linkedlist_datablock_internal* first_block; + linkedlist_datablock_internal* last_block; +} linkedlist_data; + + +typedef struct +{ + z_stream stream; /* zLib stream structure for inflate */ +#ifdef HAVE_BZIP2 + bz_stream bstream; /* bzLib stream structure for bziped */ +#endif + + int stream_initialised; /* 1 is stream is initialised */ + uInt pos_in_buffered_data; /* last written byte in buffered_data */ + + ZPOS64_T pos_local_header; /* offset of the local header of the file + currenty writing */ + char* central_header; /* central header data for the current file */ + uLong size_centralExtra; + uLong size_centralheader; /* size of the central header for cur file */ + uLong size_centralExtraFree; /* Extra bytes allocated to the centralheader but that are not used */ + uLong flag; /* flag of the file currently writing */ + + int method; /* compression method of file currenty wr.*/ + int raw; /* 1 for directly writing raw data */ + Byte buffered_data[Z_BUFSIZE];/* buffer contain compressed data to be writ*/ + uLong dosDate; + uLong crc32; + int encrypt; + int zip64; /* Add ZIP64 extened information in the extra field */ + ZPOS64_T pos_zip64extrainfo; + ZPOS64_T totalCompressedData; + ZPOS64_T totalUncompressedData; +#ifndef NOCRYPT + unsigned long keys[3]; /* keys defining the pseudo-random sequence */ + const z_crc_t* pcrc_32_tab; + int crypt_header_size; +#endif +} curfile64_info; + +typedef struct +{ + zlib_filefunc64_32_def z_filefunc; + voidpf filestream; /* io structore of the zipfile */ + linkedlist_data central_dir;/* datablock with central dir in construction*/ + int in_opened_file_inzip; /* 1 if a file in the zip is currently writ.*/ + curfile64_info ci; /* info on the file curretly writing */ + + ZPOS64_T begin_pos; /* position of the beginning of the zipfile */ + ZPOS64_T add_position_when_writting_offset; + ZPOS64_T number_entry; + +#ifndef NO_ADDFILEINEXISTINGZIP + char *globalcomment; +#endif + +} zip64_internal; + + +#ifndef NOCRYPT +#define INCLUDECRYPTINGCODE_IFCRYPTALLOWED +#include "crypt.h" +#endif + +local linkedlist_datablock_internal* allocate_new_datablock() +{ + linkedlist_datablock_internal* ldi; + ldi = (linkedlist_datablock_internal*) + ALLOC(sizeof(linkedlist_datablock_internal)); + if (ldi!=NULL) + { + 
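+        /* a freshly allocated block: no successor, nothing filled yet,
+           the full SIZEDATA_INDATABLOCK bytes still available */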
ldi->next_datablock = NULL ; + ldi->filled_in_this_block = 0 ; + ldi->avail_in_this_block = SIZEDATA_INDATABLOCK ; + } + return ldi; +} + +local void free_datablock(linkedlist_datablock_internal* ldi) +{ + while (ldi!=NULL) + { + linkedlist_datablock_internal* ldinext = ldi->next_datablock; + TRYFREE(ldi); + ldi = ldinext; + } +} + +local void init_linkedlist(linkedlist_data* ll) +{ + ll->first_block = ll->last_block = NULL; +} + +local void free_linkedlist(linkedlist_data* ll) +{ + free_datablock(ll->first_block); + ll->first_block = ll->last_block = NULL; +} + + +local int add_data_in_datablock(linkedlist_data* ll, const void* buf, uLong len) +{ + linkedlist_datablock_internal* ldi; + const unsigned char* from_copy; + + if (ll==NULL) + return ZIP_INTERNALERROR; + + if (ll->last_block == NULL) + { + ll->first_block = ll->last_block = allocate_new_datablock(); + if (ll->first_block == NULL) + return ZIP_INTERNALERROR; + } + + ldi = ll->last_block; + from_copy = (unsigned char*)buf; + + while (len>0) + { + uInt copy_this; + uInt i; + unsigned char* to_copy; + + if (ldi->avail_in_this_block==0) + { + ldi->next_datablock = allocate_new_datablock(); + if (ldi->next_datablock == NULL) + return ZIP_INTERNALERROR; + ldi = ldi->next_datablock ; + ll->last_block = ldi; + } + + if (ldi->avail_in_this_block < len) + copy_this = (uInt)ldi->avail_in_this_block; + else + copy_this = (uInt)len; + + to_copy = &(ldi->data[ldi->filled_in_this_block]); + + for (i=0;ifilled_in_this_block += copy_this; + ldi->avail_in_this_block -= copy_this; + from_copy += copy_this ; + len -= copy_this; + } + return ZIP_OK; +} + + + +/****************************************************************************/ + +#ifndef NO_ADDFILEINEXISTINGZIP +/* =========================================================================== + Inputs a long in LSB order to the given file + nbByte == 1, 2 ,4 or 8 (byte, short or long, ZPOS64_T) +*/ + +local int zip64local_putValue OF((const zlib_filefunc64_32_def* pzlib_filefunc_def, voidpf filestream, ZPOS64_T x, int nbByte)); +local int zip64local_putValue (const zlib_filefunc64_32_def* pzlib_filefunc_def, voidpf filestream, ZPOS64_T x, int nbByte) +{ + unsigned char buf[8]; + int n; + for (n = 0; n < nbByte; n++) + { + buf[n] = (unsigned char)(x & 0xff); + x >>= 8; + } + if (x != 0) + { /* data overflow - hack for ZIP64 (X Roche) */ + for (n = 0; n < nbByte; n++) + { + buf[n] = 0xff; + } + } + + if (ZWRITE64(*pzlib_filefunc_def,filestream,buf,nbByte)!=(uLong)nbByte) + return ZIP_ERRNO; + else + return ZIP_OK; +} + +local void zip64local_putValue_inmemory OF((void* dest, ZPOS64_T x, int nbByte)); +local void zip64local_putValue_inmemory (void* dest, ZPOS64_T x, int nbByte) +{ + unsigned char* buf=(unsigned char*)dest; + int n; + for (n = 0; n < nbByte; n++) { + buf[n] = (unsigned char)(x & 0xff); + x >>= 8; + } + + if (x != 0) + { /* data overflow - hack for ZIP64 */ + for (n = 0; n < nbByte; n++) + { + buf[n] = 0xff; + } + } +} + +/****************************************************************************/ + + +local uLong zip64local_TmzDateToDosDate(const tm_zip* ptm) +{ + uLong year = (uLong)ptm->tm_year; + if (year>=1980) + year-=1980; + else if (year>=80) + year-=80; + return + (uLong) (((ptm->tm_mday) + (32 * (ptm->tm_mon+1)) + (512 * year)) << 16) | + ((ptm->tm_sec/2) + (32* ptm->tm_min) + (2048 * (uLong)ptm->tm_hour)); +} + + +/****************************************************************************/ + +local int zip64local_getByte OF((const zlib_filefunc64_32_def* 
pzlib_filefunc_def, voidpf filestream, int *pi)); + +local int zip64local_getByte(const zlib_filefunc64_32_def* pzlib_filefunc_def,voidpf filestream,int* pi) +{ + unsigned char c; + int err = (int)ZREAD64(*pzlib_filefunc_def,filestream,&c,1); + if (err==1) + { + *pi = (int)c; + return ZIP_OK; + } + else + { + if (ZERROR64(*pzlib_filefunc_def,filestream)) + return ZIP_ERRNO; + else + return ZIP_EOF; + } +} + + +/* =========================================================================== + Reads a long in LSB order from the given gz_stream. Sets +*/ +local int zip64local_getShort OF((const zlib_filefunc64_32_def* pzlib_filefunc_def, voidpf filestream, uLong *pX)); + +local int zip64local_getShort (const zlib_filefunc64_32_def* pzlib_filefunc_def, voidpf filestream, uLong* pX) +{ + uLong x ; + int i = 0; + int err; + + err = zip64local_getByte(pzlib_filefunc_def,filestream,&i); + x = (uLong)i; + + if (err==ZIP_OK) + err = zip64local_getByte(pzlib_filefunc_def,filestream,&i); + x += ((uLong)i)<<8; + + if (err==ZIP_OK) + *pX = x; + else + *pX = 0; + return err; +} + +local int zip64local_getLong OF((const zlib_filefunc64_32_def* pzlib_filefunc_def, voidpf filestream, uLong *pX)); + +local int zip64local_getLong (const zlib_filefunc64_32_def* pzlib_filefunc_def, voidpf filestream, uLong* pX) +{ + uLong x ; + int i = 0; + int err; + + err = zip64local_getByte(pzlib_filefunc_def,filestream,&i); + x = (uLong)i; + + if (err==ZIP_OK) + err = zip64local_getByte(pzlib_filefunc_def,filestream,&i); + x += ((uLong)i)<<8; + + if (err==ZIP_OK) + err = zip64local_getByte(pzlib_filefunc_def,filestream,&i); + x += ((uLong)i)<<16; + + if (err==ZIP_OK) + err = zip64local_getByte(pzlib_filefunc_def,filestream,&i); + x += ((uLong)i)<<24; + + if (err==ZIP_OK) + *pX = x; + else + *pX = 0; + return err; +} + +local int zip64local_getLong64 OF((const zlib_filefunc64_32_def* pzlib_filefunc_def, voidpf filestream, ZPOS64_T *pX)); + + +local int zip64local_getLong64 (const zlib_filefunc64_32_def* pzlib_filefunc_def, voidpf filestream, ZPOS64_T *pX) +{ + ZPOS64_T x; + int i = 0; + int err; + + err = zip64local_getByte(pzlib_filefunc_def,filestream,&i); + x = (ZPOS64_T)i; + + if (err==ZIP_OK) + err = zip64local_getByte(pzlib_filefunc_def,filestream,&i); + x += ((ZPOS64_T)i)<<8; + + if (err==ZIP_OK) + err = zip64local_getByte(pzlib_filefunc_def,filestream,&i); + x += ((ZPOS64_T)i)<<16; + + if (err==ZIP_OK) + err = zip64local_getByte(pzlib_filefunc_def,filestream,&i); + x += ((ZPOS64_T)i)<<24; + + if (err==ZIP_OK) + err = zip64local_getByte(pzlib_filefunc_def,filestream,&i); + x += ((ZPOS64_T)i)<<32; + + if (err==ZIP_OK) + err = zip64local_getByte(pzlib_filefunc_def,filestream,&i); + x += ((ZPOS64_T)i)<<40; + + if (err==ZIP_OK) + err = zip64local_getByte(pzlib_filefunc_def,filestream,&i); + x += ((ZPOS64_T)i)<<48; + + if (err==ZIP_OK) + err = zip64local_getByte(pzlib_filefunc_def,filestream,&i); + x += ((ZPOS64_T)i)<<56; + + if (err==ZIP_OK) + *pX = x; + else + *pX = 0; + + return err; +} + +#ifndef BUFREADCOMMENT +#define BUFREADCOMMENT (0x400) +#endif +/* + Locate the Central directory of a zipfile (at the end, just before + the global comment) +*/ +local ZPOS64_T zip64local_SearchCentralDir OF((const zlib_filefunc64_32_def* pzlib_filefunc_def, voidpf filestream)); + +local ZPOS64_T zip64local_SearchCentralDir(const zlib_filefunc64_32_def* pzlib_filefunc_def, voidpf filestream) +{ + unsigned char* buf; + ZPOS64_T uSizeFile; + ZPOS64_T uBackRead; + ZPOS64_T uMaxBack=0xffff; /* maximum size of global comment */ + ZPOS64_T 
uPosFound=0; + + if (ZSEEK64(*pzlib_filefunc_def,filestream,0,ZLIB_FILEFUNC_SEEK_END) != 0) + return 0; + + + uSizeFile = ZTELL64(*pzlib_filefunc_def,filestream); + + if (uMaxBack>uSizeFile) + uMaxBack = uSizeFile; + + buf = (unsigned char*)ALLOC(BUFREADCOMMENT+4); + if (buf==NULL) + return 0; + + uBackRead = 4; + while (uBackReaduMaxBack) + uBackRead = uMaxBack; + else + uBackRead+=BUFREADCOMMENT; + uReadPos = uSizeFile-uBackRead ; + + uReadSize = ((BUFREADCOMMENT+4) < (uSizeFile-uReadPos)) ? + (BUFREADCOMMENT+4) : (uLong)(uSizeFile-uReadPos); + if (ZSEEK64(*pzlib_filefunc_def,filestream,uReadPos,ZLIB_FILEFUNC_SEEK_SET)!=0) + break; + + if (ZREAD64(*pzlib_filefunc_def,filestream,buf,uReadSize)!=uReadSize) + break; + + for (i=(int)uReadSize-3; (i--)>0;) + if (((*(buf+i))==0x50) && ((*(buf+i+1))==0x4b) && + ((*(buf+i+2))==0x05) && ((*(buf+i+3))==0x06)) + { + uPosFound = uReadPos+i; + break; + } + + if (uPosFound!=0) + break; + } + TRYFREE(buf); + return uPosFound; +} + +/* +Locate the End of Zip64 Central directory locator and from there find the CD of a zipfile (at the end, just before +the global comment) +*/ +local ZPOS64_T zip64local_SearchCentralDir64 OF((const zlib_filefunc64_32_def* pzlib_filefunc_def, voidpf filestream)); + +local ZPOS64_T zip64local_SearchCentralDir64(const zlib_filefunc64_32_def* pzlib_filefunc_def, voidpf filestream) +{ + unsigned char* buf; + ZPOS64_T uSizeFile; + ZPOS64_T uBackRead; + ZPOS64_T uMaxBack=0xffff; /* maximum size of global comment */ + ZPOS64_T uPosFound=0; + uLong uL; + ZPOS64_T relativeOffset; + + if (ZSEEK64(*pzlib_filefunc_def,filestream,0,ZLIB_FILEFUNC_SEEK_END) != 0) + return 0; + + uSizeFile = ZTELL64(*pzlib_filefunc_def,filestream); + + if (uMaxBack>uSizeFile) + uMaxBack = uSizeFile; + + buf = (unsigned char*)ALLOC(BUFREADCOMMENT+4); + if (buf==NULL) + return 0; + + uBackRead = 4; + while (uBackReaduMaxBack) + uBackRead = uMaxBack; + else + uBackRead+=BUFREADCOMMENT; + uReadPos = uSizeFile-uBackRead ; + + uReadSize = ((BUFREADCOMMENT+4) < (uSizeFile-uReadPos)) ? 
+ (BUFREADCOMMENT+4) : (uLong)(uSizeFile-uReadPos); + if (ZSEEK64(*pzlib_filefunc_def,filestream,uReadPos,ZLIB_FILEFUNC_SEEK_SET)!=0) + break; + + if (ZREAD64(*pzlib_filefunc_def,filestream,buf,uReadSize)!=uReadSize) + break; + + for (i=(int)uReadSize-3; (i--)>0;) + { + // Signature "0x07064b50" Zip64 end of central directory locater + if (((*(buf+i))==0x50) && ((*(buf+i+1))==0x4b) && ((*(buf+i+2))==0x06) && ((*(buf+i+3))==0x07)) + { + uPosFound = uReadPos+i; + break; + } + } + + if (uPosFound!=0) + break; + } + + TRYFREE(buf); + if (uPosFound == 0) + return 0; + + /* Zip64 end of central directory locator */ + if (ZSEEK64(*pzlib_filefunc_def,filestream, uPosFound,ZLIB_FILEFUNC_SEEK_SET)!=0) + return 0; + + /* the signature, already checked */ + if (zip64local_getLong(pzlib_filefunc_def,filestream,&uL)!=ZIP_OK) + return 0; + + /* number of the disk with the start of the zip64 end of central directory */ + if (zip64local_getLong(pzlib_filefunc_def,filestream,&uL)!=ZIP_OK) + return 0; + if (uL != 0) + return 0; + + /* relative offset of the zip64 end of central directory record */ + if (zip64local_getLong64(pzlib_filefunc_def,filestream,&relativeOffset)!=ZIP_OK) + return 0; + + /* total number of disks */ + if (zip64local_getLong(pzlib_filefunc_def,filestream,&uL)!=ZIP_OK) + return 0; + if (uL != 1) + return 0; + + /* Goto Zip64 end of central directory record */ + if (ZSEEK64(*pzlib_filefunc_def,filestream, relativeOffset,ZLIB_FILEFUNC_SEEK_SET)!=0) + return 0; + + /* the signature */ + if (zip64local_getLong(pzlib_filefunc_def,filestream,&uL)!=ZIP_OK) + return 0; + + if (uL != 0x06064b50) // signature of 'Zip64 end of central directory' + return 0; + + return relativeOffset; +} + +int LoadCentralDirectoryRecord(zip64_internal* pziinit) +{ + int err=ZIP_OK; + ZPOS64_T byte_before_the_zipfile;/* byte before the zipfile, (>0 for sfx)*/ + + ZPOS64_T size_central_dir; /* size of the central directory */ + ZPOS64_T offset_central_dir; /* offset of start of central directory */ + ZPOS64_T central_pos; + uLong uL; + + uLong number_disk; /* number of the current dist, used for + spaning ZIP, unsupported, always 0*/ + uLong number_disk_with_CD; /* number the the disk with central dir, used + for spaning ZIP, unsupported, always 0*/ + ZPOS64_T number_entry; + ZPOS64_T number_entry_CD; /* total number of entries in + the central dir + (same than number_entry on nospan) */ + uLong VersionMadeBy; + uLong VersionNeeded; + uLong size_comment; + + int hasZIP64Record = 0; + + // check first if we find a ZIP64 record + central_pos = zip64local_SearchCentralDir64(&pziinit->z_filefunc,pziinit->filestream); + if(central_pos > 0) + { + hasZIP64Record = 1; + } + else if(central_pos == 0) + { + central_pos = zip64local_SearchCentralDir(&pziinit->z_filefunc,pziinit->filestream); + } + +/* disable to allow appending to empty ZIP archive + if (central_pos==0) + err=ZIP_ERRNO; +*/ + + if(hasZIP64Record) + { + ZPOS64_T sizeEndOfCentralDirectory; + if (ZSEEK64(pziinit->z_filefunc, pziinit->filestream, central_pos, ZLIB_FILEFUNC_SEEK_SET) != 0) + err=ZIP_ERRNO; + + /* the signature, already checked */ + if (zip64local_getLong(&pziinit->z_filefunc, pziinit->filestream,&uL)!=ZIP_OK) + err=ZIP_ERRNO; + + /* size of zip64 end of central directory record */ + if (zip64local_getLong64(&pziinit->z_filefunc, pziinit->filestream, &sizeEndOfCentralDirectory)!=ZIP_OK) + err=ZIP_ERRNO; + + /* version made by */ + if (zip64local_getShort(&pziinit->z_filefunc, pziinit->filestream, &VersionMadeBy)!=ZIP_OK) + err=ZIP_ERRNO; + + /* 
version needed to extract */ + if (zip64local_getShort(&pziinit->z_filefunc, pziinit->filestream, &VersionNeeded)!=ZIP_OK) + err=ZIP_ERRNO; + + /* number of this disk */ + if (zip64local_getLong(&pziinit->z_filefunc, pziinit->filestream,&number_disk)!=ZIP_OK) + err=ZIP_ERRNO; + + /* number of the disk with the start of the central directory */ + if (zip64local_getLong(&pziinit->z_filefunc, pziinit->filestream,&number_disk_with_CD)!=ZIP_OK) + err=ZIP_ERRNO; + + /* total number of entries in the central directory on this disk */ + if (zip64local_getLong64(&pziinit->z_filefunc, pziinit->filestream, &number_entry)!=ZIP_OK) + err=ZIP_ERRNO; + + /* total number of entries in the central directory */ + if (zip64local_getLong64(&pziinit->z_filefunc, pziinit->filestream,&number_entry_CD)!=ZIP_OK) + err=ZIP_ERRNO; + + if ((number_entry_CD!=number_entry) || (number_disk_with_CD!=0) || (number_disk!=0)) + err=ZIP_BADZIPFILE; + + /* size of the central directory */ + if (zip64local_getLong64(&pziinit->z_filefunc, pziinit->filestream,&size_central_dir)!=ZIP_OK) + err=ZIP_ERRNO; + + /* offset of start of central directory with respect to the + starting disk number */ + if (zip64local_getLong64(&pziinit->z_filefunc, pziinit->filestream,&offset_central_dir)!=ZIP_OK) + err=ZIP_ERRNO; + + // TODO.. + // read the comment from the standard central header. + size_comment = 0; + } + else + { + // Read End of central Directory info + if (ZSEEK64(pziinit->z_filefunc, pziinit->filestream, central_pos,ZLIB_FILEFUNC_SEEK_SET)!=0) + err=ZIP_ERRNO; + + /* the signature, already checked */ + if (zip64local_getLong(&pziinit->z_filefunc, pziinit->filestream,&uL)!=ZIP_OK) + err=ZIP_ERRNO; + + /* number of this disk */ + if (zip64local_getShort(&pziinit->z_filefunc, pziinit->filestream,&number_disk)!=ZIP_OK) + err=ZIP_ERRNO; + + /* number of the disk with the start of the central directory */ + if (zip64local_getShort(&pziinit->z_filefunc, pziinit->filestream,&number_disk_with_CD)!=ZIP_OK) + err=ZIP_ERRNO; + + /* total number of entries in the central dir on this disk */ + number_entry = 0; + if (zip64local_getShort(&pziinit->z_filefunc, pziinit->filestream, &uL)!=ZIP_OK) + err=ZIP_ERRNO; + else + number_entry = uL; + + /* total number of entries in the central dir */ + number_entry_CD = 0; + if (zip64local_getShort(&pziinit->z_filefunc, pziinit->filestream, &uL)!=ZIP_OK) + err=ZIP_ERRNO; + else + number_entry_CD = uL; + + if ((number_entry_CD!=number_entry) || (number_disk_with_CD!=0) || (number_disk!=0)) + err=ZIP_BADZIPFILE; + + /* size of the central directory */ + size_central_dir = 0; + if (zip64local_getLong(&pziinit->z_filefunc, pziinit->filestream, &uL)!=ZIP_OK) + err=ZIP_ERRNO; + else + size_central_dir = uL; + + /* offset of start of central directory with respect to the starting disk number */ + offset_central_dir = 0; + if (zip64local_getLong(&pziinit->z_filefunc, pziinit->filestream, &uL)!=ZIP_OK) + err=ZIP_ERRNO; + else + offset_central_dir = uL; + + + /* zipfile global comment length */ + if (zip64local_getShort(&pziinit->z_filefunc, pziinit->filestream, &size_comment)!=ZIP_OK) + err=ZIP_ERRNO; + } + + if ((central_posz_filefunc, pziinit->filestream); + return ZIP_ERRNO; + } + + if (size_comment>0) + { + pziinit->globalcomment = (char*)ALLOC(size_comment+1); + if (pziinit->globalcomment) + { + size_comment = ZREAD64(pziinit->z_filefunc, pziinit->filestream, pziinit->globalcomment,size_comment); + pziinit->globalcomment[size_comment]=0; + } + } + + byte_before_the_zipfile = central_pos - 
(offset_central_dir+size_central_dir); + pziinit->add_position_when_writting_offset = byte_before_the_zipfile; + + { + ZPOS64_T size_central_dir_to_read = size_central_dir; + size_t buf_size = SIZEDATA_INDATABLOCK; + void* buf_read = (void*)ALLOC(buf_size); + if (ZSEEK64(pziinit->z_filefunc, pziinit->filestream, offset_central_dir + byte_before_the_zipfile, ZLIB_FILEFUNC_SEEK_SET) != 0) + err=ZIP_ERRNO; + + while ((size_central_dir_to_read>0) && (err==ZIP_OK)) + { + ZPOS64_T read_this = SIZEDATA_INDATABLOCK; + if (read_this > size_central_dir_to_read) + read_this = size_central_dir_to_read; + + if (ZREAD64(pziinit->z_filefunc, pziinit->filestream,buf_read,(uLong)read_this) != read_this) + err=ZIP_ERRNO; + + if (err==ZIP_OK) + err = add_data_in_datablock(&pziinit->central_dir,buf_read, (uLong)read_this); + + size_central_dir_to_read-=read_this; + } + TRYFREE(buf_read); + } + pziinit->begin_pos = byte_before_the_zipfile; + pziinit->number_entry = number_entry_CD; + + if (ZSEEK64(pziinit->z_filefunc, pziinit->filestream, offset_central_dir+byte_before_the_zipfile,ZLIB_FILEFUNC_SEEK_SET) != 0) + err=ZIP_ERRNO; + + return err; +} + + +#endif /* !NO_ADDFILEINEXISTINGZIP*/ + + +/************************************************************/ +extern zipFile ZEXPORT zipOpen3 (const void *pathname, int append, zipcharpc* globalcomment, zlib_filefunc64_32_def* pzlib_filefunc64_32_def) +{ + zip64_internal ziinit; + zip64_internal* zi; + int err=ZIP_OK; + + ziinit.z_filefunc.zseek32_file = NULL; + ziinit.z_filefunc.ztell32_file = NULL; + if (pzlib_filefunc64_32_def==NULL) + fill_fopen64_filefunc(&ziinit.z_filefunc.zfile_func64); + else + ziinit.z_filefunc = *pzlib_filefunc64_32_def; + + ziinit.filestream = ZOPEN64(ziinit.z_filefunc, + pathname, + (append == APPEND_STATUS_CREATE) ? 
+ (ZLIB_FILEFUNC_MODE_READ | ZLIB_FILEFUNC_MODE_WRITE | ZLIB_FILEFUNC_MODE_CREATE) : + (ZLIB_FILEFUNC_MODE_READ | ZLIB_FILEFUNC_MODE_WRITE | ZLIB_FILEFUNC_MODE_EXISTING)); + + if (ziinit.filestream == NULL) + return NULL; + + if (append == APPEND_STATUS_CREATEAFTER) + ZSEEK64(ziinit.z_filefunc,ziinit.filestream,0,SEEK_END); + + ziinit.begin_pos = ZTELL64(ziinit.z_filefunc,ziinit.filestream); + ziinit.in_opened_file_inzip = 0; + ziinit.ci.stream_initialised = 0; + ziinit.number_entry = 0; + ziinit.add_position_when_writting_offset = 0; + init_linkedlist(&(ziinit.central_dir)); + + + + zi = (zip64_internal*)ALLOC(sizeof(zip64_internal)); + if (zi==NULL) + { + ZCLOSE64(ziinit.z_filefunc,ziinit.filestream); + return NULL; + } + + /* now we add file in a zipfile */ +# ifndef NO_ADDFILEINEXISTINGZIP + ziinit.globalcomment = NULL; + if (append == APPEND_STATUS_ADDINZIP) + { + // Read and Cache Central Directory Records + err = LoadCentralDirectoryRecord(&ziinit); + } + + if (globalcomment) + { + *globalcomment = ziinit.globalcomment; + } +# endif /* !NO_ADDFILEINEXISTINGZIP*/ + + if (err != ZIP_OK) + { +# ifndef NO_ADDFILEINEXISTINGZIP + TRYFREE(ziinit.globalcomment); +# endif /* !NO_ADDFILEINEXISTINGZIP*/ + TRYFREE(zi); + return NULL; + } + else + { + *zi = ziinit; + return (zipFile)zi; + } +} + +extern zipFile ZEXPORT zipOpen2 (const char *pathname, int append, zipcharpc* globalcomment, zlib_filefunc_def* pzlib_filefunc32_def) +{ + if (pzlib_filefunc32_def != NULL) + { + zlib_filefunc64_32_def zlib_filefunc64_32_def_fill; + fill_zlib_filefunc64_32_def_from_filefunc32(&zlib_filefunc64_32_def_fill,pzlib_filefunc32_def); + return zipOpen3(pathname, append, globalcomment, &zlib_filefunc64_32_def_fill); + } + else + return zipOpen3(pathname, append, globalcomment, NULL); +} + +extern zipFile ZEXPORT zipOpen2_64 (const void *pathname, int append, zipcharpc* globalcomment, zlib_filefunc64_def* pzlib_filefunc_def) +{ + if (pzlib_filefunc_def != NULL) + { + zlib_filefunc64_32_def zlib_filefunc64_32_def_fill; + zlib_filefunc64_32_def_fill.zfile_func64 = *pzlib_filefunc_def; + zlib_filefunc64_32_def_fill.ztell32_file = NULL; + zlib_filefunc64_32_def_fill.zseek32_file = NULL; + return zipOpen3(pathname, append, globalcomment, &zlib_filefunc64_32_def_fill); + } + else + return zipOpen3(pathname, append, globalcomment, NULL); +} + + + +extern zipFile ZEXPORT zipOpen (const char* pathname, int append) +{ + return zipOpen3((const void*)pathname,append,NULL,NULL); +} + +extern zipFile ZEXPORT zipOpen64 (const void* pathname, int append) +{ + return zipOpen3(pathname,append,NULL,NULL); +} + +int Write_LocalFileHeader(zip64_internal* zi, const char* filename, uInt size_extrafield_local, const void* extrafield_local) +{ + /* write the local header */ + int err; + uInt size_filename = (uInt)strlen(filename); + uInt size_extrafield = size_extrafield_local; + + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)LOCALHEADERMAGIC, 4); + + if (err==ZIP_OK) + { + if(zi->ci.zip64) + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)45,2);/* version needed to extract */ + else + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)20,2);/* version needed to extract */ + } + + if (err==ZIP_OK) + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)zi->ci.flag,2); + + if (err==ZIP_OK) + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)zi->ci.method,2); + + if (err==ZIP_OK) + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)zi->ci.dosDate,4); + + 
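+  /* fixed fields written so far: signature, version needed, flag, method, dosDate;
+     the next three 4-byte fields are placeholders */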
// CRC / Compressed size / Uncompressed size will be filled in later and rewritten later + if (err==ZIP_OK) + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)0,4); /* crc 32, unknown */ + if (err==ZIP_OK) + { + if(zi->ci.zip64) + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)0xFFFFFFFF,4); /* compressed size, unknown */ + else + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)0,4); /* compressed size, unknown */ + } + if (err==ZIP_OK) + { + if(zi->ci.zip64) + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)0xFFFFFFFF,4); /* uncompressed size, unknown */ + else + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)0,4); /* uncompressed size, unknown */ + } + + if (err==ZIP_OK) + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)size_filename,2); + + if(zi->ci.zip64) + { + size_extrafield += 20; + } + + if (err==ZIP_OK) + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)size_extrafield,2); + + if ((err==ZIP_OK) && (size_filename > 0)) + { + if (ZWRITE64(zi->z_filefunc,zi->filestream,filename,size_filename)!=size_filename) + err = ZIP_ERRNO; + } + + if ((err==ZIP_OK) && (size_extrafield_local > 0)) + { + if (ZWRITE64(zi->z_filefunc, zi->filestream, extrafield_local, size_extrafield_local) != size_extrafield_local) + err = ZIP_ERRNO; + } + + + if ((err==ZIP_OK) && (zi->ci.zip64)) + { + // write the Zip64 extended info + short HeaderID = 1; + short DataSize = 16; + ZPOS64_T CompressedSize = 0; + ZPOS64_T UncompressedSize = 0; + + // Remember position of Zip64 extended info for the local file header. (needed when we update size after done with file) + zi->ci.pos_zip64extrainfo = ZTELL64(zi->z_filefunc,zi->filestream); + + err = zip64local_putValue(&zi->z_filefunc, zi->filestream, (short)HeaderID,2); + err = zip64local_putValue(&zi->z_filefunc, zi->filestream, (short)DataSize,2); + + err = zip64local_putValue(&zi->z_filefunc, zi->filestream, (ZPOS64_T)UncompressedSize,8); + err = zip64local_putValue(&zi->z_filefunc, zi->filestream, (ZPOS64_T)CompressedSize,8); + } + + return err; +} + +/* + NOTE. + When writing RAW the ZIP64 extended information in extrafield_local and extrafield_global needs to be stripped + before calling this function it can be done with zipRemoveExtraInfoBlock + + It is not done here because then we need to realloc a new buffer since parameters are 'const' and I want to minimize + unnecessary allocations. 
+ */ +extern int ZEXPORT zipOpenNewFileInZip4_64 (zipFile file, const char* filename, const zip_fileinfo* zipfi, + const void* extrafield_local, uInt size_extrafield_local, + const void* extrafield_global, uInt size_extrafield_global, + const char* comment, int method, int level, int raw, + int windowBits,int memLevel, int strategy, + const char* password, uLong crcForCrypting, + uLong versionMadeBy, uLong flagBase, int zip64) +{ + zip64_internal* zi; + uInt size_filename; + uInt size_comment; + uInt i; + int err = ZIP_OK; + +# ifdef NOCRYPT + (crcForCrypting); + if (password != NULL) + return ZIP_PARAMERROR; +# endif + + if (file == NULL) + return ZIP_PARAMERROR; + +#ifdef HAVE_BZIP2 + if ((method!=0) && (method!=Z_DEFLATED) && (method!=Z_BZIP2ED)) + return ZIP_PARAMERROR; +#else + if ((method!=0) && (method!=Z_DEFLATED)) + return ZIP_PARAMERROR; +#endif + + zi = (zip64_internal*)file; + + if (zi->in_opened_file_inzip == 1) + { + err = zipCloseFileInZip (file); + if (err != ZIP_OK) + return err; + } + + if (filename==NULL) + filename="-"; + + if (comment==NULL) + size_comment = 0; + else + size_comment = (uInt)strlen(comment); + + size_filename = (uInt)strlen(filename); + + if (zipfi == NULL) + zi->ci.dosDate = 0; + else + { + if (zipfi->dosDate != 0) + zi->ci.dosDate = zipfi->dosDate; + else + zi->ci.dosDate = zip64local_TmzDateToDosDate(&zipfi->tmz_date); + } + + zi->ci.flag = flagBase; + if ((level==8) || (level==9)) + zi->ci.flag |= 2; + if (level==2) + zi->ci.flag |= 4; + if (level==1) + zi->ci.flag |= 6; + if (password != NULL) + zi->ci.flag |= 1; + + zi->ci.crc32 = 0; + zi->ci.method = method; + zi->ci.encrypt = 0; + zi->ci.stream_initialised = 0; + zi->ci.pos_in_buffered_data = 0; + zi->ci.raw = raw; + zi->ci.pos_local_header = ZTELL64(zi->z_filefunc,zi->filestream); + + zi->ci.size_centralheader = SIZECENTRALHEADER + size_filename + size_extrafield_global + size_comment; + zi->ci.size_centralExtraFree = 32; // Extra space we have reserved in case we need to add ZIP64 extra info data + + zi->ci.central_header = (char*)ALLOC((uInt)zi->ci.size_centralheader + zi->ci.size_centralExtraFree); + + zi->ci.size_centralExtra = size_extrafield_global; + zip64local_putValue_inmemory(zi->ci.central_header,(uLong)CENTRALHEADERMAGIC,4); + /* version info */ + zip64local_putValue_inmemory(zi->ci.central_header+4,(uLong)versionMadeBy,2); + zip64local_putValue_inmemory(zi->ci.central_header+6,(uLong)20,2); + zip64local_putValue_inmemory(zi->ci.central_header+8,(uLong)zi->ci.flag,2); + zip64local_putValue_inmemory(zi->ci.central_header+10,(uLong)zi->ci.method,2); + zip64local_putValue_inmemory(zi->ci.central_header+12,(uLong)zi->ci.dosDate,4); + zip64local_putValue_inmemory(zi->ci.central_header+16,(uLong)0,4); /*crc*/ + zip64local_putValue_inmemory(zi->ci.central_header+20,(uLong)0,4); /*compr size*/ + zip64local_putValue_inmemory(zi->ci.central_header+24,(uLong)0,4); /*uncompr size*/ + zip64local_putValue_inmemory(zi->ci.central_header+28,(uLong)size_filename,2); + zip64local_putValue_inmemory(zi->ci.central_header+30,(uLong)size_extrafield_global,2); + zip64local_putValue_inmemory(zi->ci.central_header+32,(uLong)size_comment,2); + zip64local_putValue_inmemory(zi->ci.central_header+34,(uLong)0,2); /*disk nm start*/ + + if (zipfi==NULL) + zip64local_putValue_inmemory(zi->ci.central_header+36,(uLong)0,2); + else + zip64local_putValue_inmemory(zi->ci.central_header+36,(uLong)zipfi->internal_fa,2); + + if (zipfi==NULL) + zip64local_putValue_inmemory(zi->ci.central_header+38,(uLong)0,4); + else + 
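+    /* otherwise take the external file attributes from the caller's zip_fileinfo */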
zip64local_putValue_inmemory(zi->ci.central_header+38,(uLong)zipfi->external_fa,4); + + if(zi->ci.pos_local_header >= 0xffffffff) + zip64local_putValue_inmemory(zi->ci.central_header+42,(uLong)0xffffffff,4); + else + zip64local_putValue_inmemory(zi->ci.central_header+42,(uLong)zi->ci.pos_local_header - zi->add_position_when_writting_offset,4); + + for (i=0;ici.central_header+SIZECENTRALHEADER+i) = *(filename+i); + + for (i=0;ici.central_header+SIZECENTRALHEADER+size_filename+i) = + *(((const char*)extrafield_global)+i); + + for (i=0;ici.central_header+SIZECENTRALHEADER+size_filename+ + size_extrafield_global+i) = *(comment+i); + if (zi->ci.central_header == NULL) + return ZIP_INTERNALERROR; + + zi->ci.zip64 = zip64; + zi->ci.totalCompressedData = 0; + zi->ci.totalUncompressedData = 0; + zi->ci.pos_zip64extrainfo = 0; + + err = Write_LocalFileHeader(zi, filename, size_extrafield_local, extrafield_local); + +#ifdef HAVE_BZIP2 + zi->ci.bstream.avail_in = (uInt)0; + zi->ci.bstream.avail_out = (uInt)Z_BUFSIZE; + zi->ci.bstream.next_out = (char*)zi->ci.buffered_data; + zi->ci.bstream.total_in_hi32 = 0; + zi->ci.bstream.total_in_lo32 = 0; + zi->ci.bstream.total_out_hi32 = 0; + zi->ci.bstream.total_out_lo32 = 0; +#endif + + zi->ci.stream.avail_in = (uInt)0; + zi->ci.stream.avail_out = (uInt)Z_BUFSIZE; + zi->ci.stream.next_out = zi->ci.buffered_data; + zi->ci.stream.total_in = 0; + zi->ci.stream.total_out = 0; + zi->ci.stream.data_type = Z_BINARY; + +#ifdef HAVE_BZIP2 + if ((err==ZIP_OK) && (zi->ci.method == Z_DEFLATED || zi->ci.method == Z_BZIP2ED) && (!zi->ci.raw)) +#else + if ((err==ZIP_OK) && (zi->ci.method == Z_DEFLATED) && (!zi->ci.raw)) +#endif + { + if(zi->ci.method == Z_DEFLATED) + { + zi->ci.stream.zalloc = (alloc_func)0; + zi->ci.stream.zfree = (free_func)0; + zi->ci.stream.opaque = (voidpf)0; + + if (windowBits>0) + windowBits = -windowBits; + + err = deflateInit2(&zi->ci.stream, level, Z_DEFLATED, windowBits, memLevel, strategy); + + if (err==Z_OK) + zi->ci.stream_initialised = Z_DEFLATED; + } + else if(zi->ci.method == Z_BZIP2ED) + { +#ifdef HAVE_BZIP2 + // Init BZip stuff here + zi->ci.bstream.bzalloc = 0; + zi->ci.bstream.bzfree = 0; + zi->ci.bstream.opaque = (voidpf)0; + + err = BZ2_bzCompressInit(&zi->ci.bstream, level, 0,35); + if(err == BZ_OK) + zi->ci.stream_initialised = Z_BZIP2ED; +#endif + } + + } + +# ifndef NOCRYPT + zi->ci.crypt_header_size = 0; + if ((err==Z_OK) && (password != NULL)) + { + unsigned char bufHead[RAND_HEAD_LEN]; + unsigned int sizeHead; + zi->ci.encrypt = 1; + zi->ci.pcrc_32_tab = get_crc_table(); + /*init_keys(password,zi->ci.keys,zi->ci.pcrc_32_tab);*/ + + sizeHead=crypthead(password,bufHead,RAND_HEAD_LEN,zi->ci.keys,zi->ci.pcrc_32_tab,crcForCrypting); + zi->ci.crypt_header_size = sizeHead; + + if (ZWRITE64(zi->z_filefunc,zi->filestream,bufHead,sizeHead) != sizeHead) + err = ZIP_ERRNO; + } +# endif + + if (err==Z_OK) + zi->in_opened_file_inzip = 1; + return err; +} + +extern int ZEXPORT zipOpenNewFileInZip4 (zipFile file, const char* filename, const zip_fileinfo* zipfi, + const void* extrafield_local, uInt size_extrafield_local, + const void* extrafield_global, uInt size_extrafield_global, + const char* comment, int method, int level, int raw, + int windowBits,int memLevel, int strategy, + const char* password, uLong crcForCrypting, + uLong versionMadeBy, uLong flagBase) +{ + return zipOpenNewFileInZip4_64 (file, filename, zipfi, + extrafield_local, size_extrafield_local, + extrafield_global, size_extrafield_global, + comment, method, level, raw, + 
windowBits, memLevel, strategy, + password, crcForCrypting, versionMadeBy, flagBase, 0); +} + +extern int ZEXPORT zipOpenNewFileInZip3 (zipFile file, const char* filename, const zip_fileinfo* zipfi, + const void* extrafield_local, uInt size_extrafield_local, + const void* extrafield_global, uInt size_extrafield_global, + const char* comment, int method, int level, int raw, + int windowBits,int memLevel, int strategy, + const char* password, uLong crcForCrypting) +{ + return zipOpenNewFileInZip4_64 (file, filename, zipfi, + extrafield_local, size_extrafield_local, + extrafield_global, size_extrafield_global, + comment, method, level, raw, + windowBits, memLevel, strategy, + password, crcForCrypting, VERSIONMADEBY, 0, 0); +} + +extern int ZEXPORT zipOpenNewFileInZip3_64(zipFile file, const char* filename, const zip_fileinfo* zipfi, + const void* extrafield_local, uInt size_extrafield_local, + const void* extrafield_global, uInt size_extrafield_global, + const char* comment, int method, int level, int raw, + int windowBits,int memLevel, int strategy, + const char* password, uLong crcForCrypting, int zip64) +{ + return zipOpenNewFileInZip4_64 (file, filename, zipfi, + extrafield_local, size_extrafield_local, + extrafield_global, size_extrafield_global, + comment, method, level, raw, + windowBits, memLevel, strategy, + password, crcForCrypting, VERSIONMADEBY, 0, zip64); +} + +extern int ZEXPORT zipOpenNewFileInZip2(zipFile file, const char* filename, const zip_fileinfo* zipfi, + const void* extrafield_local, uInt size_extrafield_local, + const void* extrafield_global, uInt size_extrafield_global, + const char* comment, int method, int level, int raw) +{ + return zipOpenNewFileInZip4_64 (file, filename, zipfi, + extrafield_local, size_extrafield_local, + extrafield_global, size_extrafield_global, + comment, method, level, raw, + -MAX_WBITS, DEF_MEM_LEVEL, Z_DEFAULT_STRATEGY, + NULL, 0, VERSIONMADEBY, 0, 0); +} + +extern int ZEXPORT zipOpenNewFileInZip2_64(zipFile file, const char* filename, const zip_fileinfo* zipfi, + const void* extrafield_local, uInt size_extrafield_local, + const void* extrafield_global, uInt size_extrafield_global, + const char* comment, int method, int level, int raw, int zip64) +{ + return zipOpenNewFileInZip4_64 (file, filename, zipfi, + extrafield_local, size_extrafield_local, + extrafield_global, size_extrafield_global, + comment, method, level, raw, + -MAX_WBITS, DEF_MEM_LEVEL, Z_DEFAULT_STRATEGY, + NULL, 0, VERSIONMADEBY, 0, zip64); +} + +extern int ZEXPORT zipOpenNewFileInZip64 (zipFile file, const char* filename, const zip_fileinfo* zipfi, + const void* extrafield_local, uInt size_extrafield_local, + const void*extrafield_global, uInt size_extrafield_global, + const char* comment, int method, int level, int zip64) +{ + return zipOpenNewFileInZip4_64 (file, filename, zipfi, + extrafield_local, size_extrafield_local, + extrafield_global, size_extrafield_global, + comment, method, level, 0, + -MAX_WBITS, DEF_MEM_LEVEL, Z_DEFAULT_STRATEGY, + NULL, 0, VERSIONMADEBY, 0, zip64); +} + +extern int ZEXPORT zipOpenNewFileInZip (zipFile file, const char* filename, const zip_fileinfo* zipfi, + const void* extrafield_local, uInt size_extrafield_local, + const void*extrafield_global, uInt size_extrafield_global, + const char* comment, int method, int level) +{ + return zipOpenNewFileInZip4_64 (file, filename, zipfi, + extrafield_local, size_extrafield_local, + extrafield_global, size_extrafield_global, + comment, method, level, 0, + -MAX_WBITS, DEF_MEM_LEVEL, Z_DEFAULT_STRATEGY, 
+ NULL, 0, VERSIONMADEBY, 0, 0); +} + +local int zip64FlushWriteBuffer(zip64_internal* zi) +{ + int err=ZIP_OK; + + if (zi->ci.encrypt != 0) + { +#ifndef NOCRYPT + uInt i; + int t; + for (i=0;ici.pos_in_buffered_data;i++) + zi->ci.buffered_data[i] = zencode(zi->ci.keys, zi->ci.pcrc_32_tab, zi->ci.buffered_data[i],t); +#endif + } + + if (ZWRITE64(zi->z_filefunc,zi->filestream,zi->ci.buffered_data,zi->ci.pos_in_buffered_data) != zi->ci.pos_in_buffered_data) + err = ZIP_ERRNO; + + zi->ci.totalCompressedData += zi->ci.pos_in_buffered_data; + +#ifdef HAVE_BZIP2 + if(zi->ci.method == Z_BZIP2ED) + { + zi->ci.totalUncompressedData += zi->ci.bstream.total_in_lo32; + zi->ci.bstream.total_in_lo32 = 0; + zi->ci.bstream.total_in_hi32 = 0; + } + else +#endif + { + zi->ci.totalUncompressedData += zi->ci.stream.total_in; + zi->ci.stream.total_in = 0; + } + + + zi->ci.pos_in_buffered_data = 0; + + return err; +} + +extern int ZEXPORT zipWriteInFileInZip (zipFile file,const void* buf,unsigned int len) +{ + zip64_internal* zi; + int err=ZIP_OK; + + if (file == NULL) + return ZIP_PARAMERROR; + zi = (zip64_internal*)file; + + if (zi->in_opened_file_inzip == 0) + return ZIP_PARAMERROR; + + zi->ci.crc32 = crc32(zi->ci.crc32,buf,(uInt)len); + +#ifdef HAVE_BZIP2 + if(zi->ci.method == Z_BZIP2ED && (!zi->ci.raw)) + { + zi->ci.bstream.next_in = (void*)buf; + zi->ci.bstream.avail_in = len; + err = BZ_RUN_OK; + + while ((err==BZ_RUN_OK) && (zi->ci.bstream.avail_in>0)) + { + if (zi->ci.bstream.avail_out == 0) + { + if (zip64FlushWriteBuffer(zi) == ZIP_ERRNO) + err = ZIP_ERRNO; + zi->ci.bstream.avail_out = (uInt)Z_BUFSIZE; + zi->ci.bstream.next_out = (char*)zi->ci.buffered_data; + } + + + if(err != BZ_RUN_OK) + break; + + if ((zi->ci.method == Z_BZIP2ED) && (!zi->ci.raw)) + { + uLong uTotalOutBefore_lo = zi->ci.bstream.total_out_lo32; +// uLong uTotalOutBefore_hi = zi->ci.bstream.total_out_hi32; + err=BZ2_bzCompress(&zi->ci.bstream, BZ_RUN); + + zi->ci.pos_in_buffered_data += (uInt)(zi->ci.bstream.total_out_lo32 - uTotalOutBefore_lo) ; + } + } + + if(err == BZ_RUN_OK) + err = ZIP_OK; + } + else +#endif + { + zi->ci.stream.next_in = (Bytef*)buf; + zi->ci.stream.avail_in = len; + + while ((err==ZIP_OK) && (zi->ci.stream.avail_in>0)) + { + if (zi->ci.stream.avail_out == 0) + { + if (zip64FlushWriteBuffer(zi) == ZIP_ERRNO) + err = ZIP_ERRNO; + zi->ci.stream.avail_out = (uInt)Z_BUFSIZE; + zi->ci.stream.next_out = zi->ci.buffered_data; + } + + + if(err != ZIP_OK) + break; + + if ((zi->ci.method == Z_DEFLATED) && (!zi->ci.raw)) + { + uLong uTotalOutBefore = zi->ci.stream.total_out; + err=deflate(&zi->ci.stream, Z_NO_FLUSH); + if(uTotalOutBefore > zi->ci.stream.total_out) + { + int bBreak = 0; + bBreak++; + } + + zi->ci.pos_in_buffered_data += (uInt)(zi->ci.stream.total_out - uTotalOutBefore) ; + } + else + { + uInt copy_this,i; + if (zi->ci.stream.avail_in < zi->ci.stream.avail_out) + copy_this = zi->ci.stream.avail_in; + else + copy_this = zi->ci.stream.avail_out; + + for (i = 0; i < copy_this; i++) + *(((char*)zi->ci.stream.next_out)+i) = + *(((const char*)zi->ci.stream.next_in)+i); + { + zi->ci.stream.avail_in -= copy_this; + zi->ci.stream.avail_out-= copy_this; + zi->ci.stream.next_in+= copy_this; + zi->ci.stream.next_out+= copy_this; + zi->ci.stream.total_in+= copy_this; + zi->ci.stream.total_out+= copy_this; + zi->ci.pos_in_buffered_data += copy_this; + } + } + }// while(...) 
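+    /* the loop exits once all of buf has been consumed or err records a failure */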
+ } + + return err; +} + +extern int ZEXPORT zipCloseFileInZipRaw (zipFile file, uLong uncompressed_size, uLong crc32) +{ + return zipCloseFileInZipRaw64 (file, uncompressed_size, crc32); +} + +extern int ZEXPORT zipCloseFileInZipRaw64 (zipFile file, ZPOS64_T uncompressed_size, uLong crc32) +{ + zip64_internal* zi; + ZPOS64_T compressed_size; + uLong invalidValue = 0xffffffff; + short datasize = 0; + int err=ZIP_OK; + + if (file == NULL) + return ZIP_PARAMERROR; + zi = (zip64_internal*)file; + + if (zi->in_opened_file_inzip == 0) + return ZIP_PARAMERROR; + zi->ci.stream.avail_in = 0; + + if ((zi->ci.method == Z_DEFLATED) && (!zi->ci.raw)) + { + while (err==ZIP_OK) + { + uLong uTotalOutBefore; + if (zi->ci.stream.avail_out == 0) + { + if (zip64FlushWriteBuffer(zi) == ZIP_ERRNO) + err = ZIP_ERRNO; + zi->ci.stream.avail_out = (uInt)Z_BUFSIZE; + zi->ci.stream.next_out = zi->ci.buffered_data; + } + uTotalOutBefore = zi->ci.stream.total_out; + err=deflate(&zi->ci.stream, Z_FINISH); + zi->ci.pos_in_buffered_data += (uInt)(zi->ci.stream.total_out - uTotalOutBefore) ; + } + } + else if ((zi->ci.method == Z_BZIP2ED) && (!zi->ci.raw)) + { +#ifdef HAVE_BZIP2 + err = BZ_FINISH_OK; + while (err==BZ_FINISH_OK) + { + uLong uTotalOutBefore; + if (zi->ci.bstream.avail_out == 0) + { + if (zip64FlushWriteBuffer(zi) == ZIP_ERRNO) + err = ZIP_ERRNO; + zi->ci.bstream.avail_out = (uInt)Z_BUFSIZE; + zi->ci.bstream.next_out = (char*)zi->ci.buffered_data; + } + uTotalOutBefore = zi->ci.bstream.total_out_lo32; + err=BZ2_bzCompress(&zi->ci.bstream, BZ_FINISH); + if(err == BZ_STREAM_END) + err = Z_STREAM_END; + + zi->ci.pos_in_buffered_data += (uInt)(zi->ci.bstream.total_out_lo32 - uTotalOutBefore); + } + + if(err == BZ_FINISH_OK) + err = ZIP_OK; +#endif + } + + if (err==Z_STREAM_END) + err=ZIP_OK; /* this is normal */ + + if ((zi->ci.pos_in_buffered_data>0) && (err==ZIP_OK)) + { + if (zip64FlushWriteBuffer(zi)==ZIP_ERRNO) + err = ZIP_ERRNO; + } + + if ((zi->ci.method == Z_DEFLATED) && (!zi->ci.raw)) + { + int tmp_err = deflateEnd(&zi->ci.stream); + if (err == ZIP_OK) + err = tmp_err; + zi->ci.stream_initialised = 0; + } +#ifdef HAVE_BZIP2 + else if((zi->ci.method == Z_BZIP2ED) && (!zi->ci.raw)) + { + int tmperr = BZ2_bzCompressEnd(&zi->ci.bstream); + if (err==ZIP_OK) + err = tmperr; + zi->ci.stream_initialised = 0; + } +#endif + + if (!zi->ci.raw) + { + crc32 = (uLong)zi->ci.crc32; + uncompressed_size = zi->ci.totalUncompressedData; + } + compressed_size = zi->ci.totalCompressedData; + +# ifndef NOCRYPT + compressed_size += zi->ci.crypt_header_size; +# endif + + // update Current Item crc and sizes, + if(compressed_size >= 0xffffffff || uncompressed_size >= 0xffffffff || zi->ci.pos_local_header >= 0xffffffff) + { + /*version Made by*/ + zip64local_putValue_inmemory(zi->ci.central_header+4,(uLong)45,2); + /*version needed*/ + zip64local_putValue_inmemory(zi->ci.central_header+6,(uLong)45,2); + + } + + zip64local_putValue_inmemory(zi->ci.central_header+16,crc32,4); /*crc*/ + + + if(compressed_size >= 0xffffffff) + zip64local_putValue_inmemory(zi->ci.central_header+20, invalidValue,4); /*compr size*/ + else + zip64local_putValue_inmemory(zi->ci.central_header+20, compressed_size,4); /*compr size*/ + + /// set internal file attributes field + if (zi->ci.stream.data_type == Z_ASCII) + zip64local_putValue_inmemory(zi->ci.central_header+36,(uLong)Z_ASCII,2); + + if(uncompressed_size >= 0xffffffff) + zip64local_putValue_inmemory(zi->ci.central_header+24, invalidValue,4); /*uncompr size*/ + else + 
zip64local_putValue_inmemory(zi->ci.central_header+24, uncompressed_size,4); /*uncompr size*/ + + // Add ZIP64 extra info field for uncompressed size + if(uncompressed_size >= 0xffffffff) + datasize += 8; + + // Add ZIP64 extra info field for compressed size + if(compressed_size >= 0xffffffff) + datasize += 8; + + // Add ZIP64 extra info field for relative offset to local file header of current file + if(zi->ci.pos_local_header >= 0xffffffff) + datasize += 8; + + if(datasize > 0) + { + char* p = NULL; + + if((uLong)(datasize + 4) > zi->ci.size_centralExtraFree) + { + // we can not write more data to the buffer that we have room for. + return ZIP_BADZIPFILE; + } + + p = zi->ci.central_header + zi->ci.size_centralheader; + + // Add Extra Information Header for 'ZIP64 information' + zip64local_putValue_inmemory(p, 0x0001, 2); // HeaderID + p += 2; + zip64local_putValue_inmemory(p, datasize, 2); // DataSize + p += 2; + + if(uncompressed_size >= 0xffffffff) + { + zip64local_putValue_inmemory(p, uncompressed_size, 8); + p += 8; + } + + if(compressed_size >= 0xffffffff) + { + zip64local_putValue_inmemory(p, compressed_size, 8); + p += 8; + } + + if(zi->ci.pos_local_header >= 0xffffffff) + { + zip64local_putValue_inmemory(p, zi->ci.pos_local_header, 8); + p += 8; + } + + // Update how much extra free space we got in the memory buffer + // and increase the centralheader size so the new ZIP64 fields are included + // ( 4 below is the size of HeaderID and DataSize field ) + zi->ci.size_centralExtraFree -= datasize + 4; + zi->ci.size_centralheader += datasize + 4; + + // Update the extra info size field + zi->ci.size_centralExtra += datasize + 4; + zip64local_putValue_inmemory(zi->ci.central_header+30,(uLong)zi->ci.size_centralExtra,2); + } + + if (err==ZIP_OK) + err = add_data_in_datablock(&zi->central_dir, zi->ci.central_header, (uLong)zi->ci.size_centralheader); + + free(zi->ci.central_header); + + if (err==ZIP_OK) + { + // Update the LocalFileHeader with the new values. + + ZPOS64_T cur_pos_inzip = ZTELL64(zi->z_filefunc,zi->filestream); + + if (ZSEEK64(zi->z_filefunc,zi->filestream, zi->ci.pos_local_header + 14,ZLIB_FILEFUNC_SEEK_SET)!=0) + err = ZIP_ERRNO; + + if (err==ZIP_OK) + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,crc32,4); /* crc 32, unknown */ + + if(uncompressed_size >= 0xffffffff || compressed_size >= 0xffffffff ) + { + if(zi->ci.pos_zip64extrainfo > 0) + { + // Update the size in the ZIP64 extended field. 
+ if (ZSEEK64(zi->z_filefunc,zi->filestream, zi->ci.pos_zip64extrainfo + 4,ZLIB_FILEFUNC_SEEK_SET)!=0) + err = ZIP_ERRNO; + + if (err==ZIP_OK) /* compressed size, unknown */ + err = zip64local_putValue(&zi->z_filefunc, zi->filestream, uncompressed_size, 8); + + if (err==ZIP_OK) /* uncompressed size, unknown */ + err = zip64local_putValue(&zi->z_filefunc, zi->filestream, compressed_size, 8); + } + else + err = ZIP_BADZIPFILE; // Caller passed zip64 = 0, so no room for zip64 info -> fatal + } + else + { + if (err==ZIP_OK) /* compressed size, unknown */ + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,compressed_size,4); + + if (err==ZIP_OK) /* uncompressed size, unknown */ + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,uncompressed_size,4); + } + + if (ZSEEK64(zi->z_filefunc,zi->filestream, cur_pos_inzip,ZLIB_FILEFUNC_SEEK_SET)!=0) + err = ZIP_ERRNO; + } + + zi->number_entry ++; + zi->in_opened_file_inzip = 0; + + return err; +} + +extern int ZEXPORT zipCloseFileInZip (zipFile file) +{ + return zipCloseFileInZipRaw (file,0,0); +} + +int Write_Zip64EndOfCentralDirectoryLocator(zip64_internal* zi, ZPOS64_T zip64eocd_pos_inzip) +{ + int err = ZIP_OK; + ZPOS64_T pos = zip64eocd_pos_inzip - zi->add_position_when_writting_offset; + + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)ZIP64ENDLOCHEADERMAGIC,4); + + /*num disks*/ + if (err==ZIP_OK) /* number of the disk with the start of the central directory */ + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)0,4); + + /*relative offset*/ + if (err==ZIP_OK) /* Relative offset to the Zip64EndOfCentralDirectory */ + err = zip64local_putValue(&zi->z_filefunc,zi->filestream, pos,8); + + /*total disks*/ /* Do not support spawning of disk so always say 1 here*/ + if (err==ZIP_OK) /* number of the disk with the start of the central directory */ + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)1,4); + + return err; +} + +int Write_Zip64EndOfCentralDirectoryRecord(zip64_internal* zi, uLong size_centraldir, ZPOS64_T centraldir_pos_inzip) +{ + int err = ZIP_OK; + + uLong Zip64DataSize = 44; + + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)ZIP64ENDHEADERMAGIC,4); + + if (err==ZIP_OK) /* size of this 'zip64 end of central directory' */ + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(ZPOS64_T)Zip64DataSize,8); // why ZPOS64_T of this ? 
+ + if (err==ZIP_OK) /* version made by */ + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)45,2); + + if (err==ZIP_OK) /* version needed */ + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)45,2); + + if (err==ZIP_OK) /* number of this disk */ + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)0,4); + + if (err==ZIP_OK) /* number of the disk with the start of the central directory */ + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)0,4); + + if (err==ZIP_OK) /* total number of entries in the central dir on this disk */ + err = zip64local_putValue(&zi->z_filefunc, zi->filestream, zi->number_entry, 8); + + if (err==ZIP_OK) /* total number of entries in the central dir */ + err = zip64local_putValue(&zi->z_filefunc, zi->filestream, zi->number_entry, 8); + + if (err==ZIP_OK) /* size of the central directory */ + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(ZPOS64_T)size_centraldir,8); + + if (err==ZIP_OK) /* offset of start of central directory with respect to the starting disk number */ + { + ZPOS64_T pos = centraldir_pos_inzip - zi->add_position_when_writting_offset; + err = zip64local_putValue(&zi->z_filefunc,zi->filestream, (ZPOS64_T)pos,8); + } + return err; +} +int Write_EndOfCentralDirectoryRecord(zip64_internal* zi, uLong size_centraldir, ZPOS64_T centraldir_pos_inzip) +{ + int err = ZIP_OK; + + /*signature*/ + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)ENDHEADERMAGIC,4); + + if (err==ZIP_OK) /* number of this disk */ + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)0,2); + + if (err==ZIP_OK) /* number of the disk with the start of the central directory */ + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)0,2); + + if (err==ZIP_OK) /* total number of entries in the central dir on this disk */ + { + { + if(zi->number_entry >= 0xFFFF) + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)0xffff,2); // use value in ZIP64 record + else + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)zi->number_entry,2); + } + } + + if (err==ZIP_OK) /* total number of entries in the central dir */ + { + if(zi->number_entry >= 0xFFFF) + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)0xffff,2); // use value in ZIP64 record + else + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)zi->number_entry,2); + } + + if (err==ZIP_OK) /* size of the central directory */ + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)size_centraldir,4); + + if (err==ZIP_OK) /* offset of start of central directory with respect to the starting disk number */ + { + ZPOS64_T pos = centraldir_pos_inzip - zi->add_position_when_writting_offset; + if(pos >= 0xffffffff) + { + err = zip64local_putValue(&zi->z_filefunc,zi->filestream, (uLong)0xffffffff,4); + } + else + err = zip64local_putValue(&zi->z_filefunc,zi->filestream, (uLong)(centraldir_pos_inzip - zi->add_position_when_writting_offset),4); + } + + return err; +} + +int Write_GlobalComment(zip64_internal* zi, const char* global_comment) +{ + int err = ZIP_OK; + uInt size_global_comment = 0; + + if(global_comment != NULL) + size_global_comment = (uInt)strlen(global_comment); + + err = zip64local_putValue(&zi->z_filefunc,zi->filestream,(uLong)size_global_comment,2); + + if (err == ZIP_OK && size_global_comment > 0) + { + if (ZWRITE64(zi->z_filefunc,zi->filestream, global_comment, size_global_comment) != size_global_comment) + err = ZIP_ERRNO; + } + return err; +} + +extern int 
ZEXPORT zipClose (zipFile file, const char* global_comment) +{ + zip64_internal* zi; + int err = 0; + uLong size_centraldir = 0; + ZPOS64_T centraldir_pos_inzip; + ZPOS64_T pos; + + if (file == NULL) + return ZIP_PARAMERROR; + + zi = (zip64_internal*)file; + + if (zi->in_opened_file_inzip == 1) + { + err = zipCloseFileInZip (file); + } + +#ifndef NO_ADDFILEINEXISTINGZIP + if (global_comment==NULL) + global_comment = zi->globalcomment; +#endif + + centraldir_pos_inzip = ZTELL64(zi->z_filefunc,zi->filestream); + + if (err==ZIP_OK) + { + linkedlist_datablock_internal* ldi = zi->central_dir.first_block; + while (ldi!=NULL) + { + if ((err==ZIP_OK) && (ldi->filled_in_this_block>0)) + { + if (ZWRITE64(zi->z_filefunc,zi->filestream, ldi->data, ldi->filled_in_this_block) != ldi->filled_in_this_block) + err = ZIP_ERRNO; + } + + size_centraldir += ldi->filled_in_this_block; + ldi = ldi->next_datablock; + } + } + free_linkedlist(&(zi->central_dir)); + + pos = centraldir_pos_inzip - zi->add_position_when_writting_offset; + if(pos >= 0xffffffff || zi->number_entry > 0xFFFF) + { + ZPOS64_T Zip64EOCDpos = ZTELL64(zi->z_filefunc,zi->filestream); + Write_Zip64EndOfCentralDirectoryRecord(zi, size_centraldir, centraldir_pos_inzip); + + Write_Zip64EndOfCentralDirectoryLocator(zi, Zip64EOCDpos); + } + + if (err==ZIP_OK) + err = Write_EndOfCentralDirectoryRecord(zi, size_centraldir, centraldir_pos_inzip); + + if(err == ZIP_OK) + err = Write_GlobalComment(zi, global_comment); + + if (ZCLOSE64(zi->z_filefunc,zi->filestream) != 0) + if (err == ZIP_OK) + err = ZIP_ERRNO; + +#ifndef NO_ADDFILEINEXISTINGZIP + TRYFREE(zi->globalcomment); +#endif + TRYFREE(zi); + + return err; +} + +extern int ZEXPORT zipRemoveExtraInfoBlock (char* pData, int* dataLen, short sHeader) +{ + char* p = pData; + int size = 0; + char* pNewHeader; + char* pTmp; + short header; + short dataSize; + + int retVal = ZIP_OK; + + if(pData == NULL || *dataLen < 4) + return ZIP_PARAMERROR; + + pNewHeader = (char*)ALLOC(*dataLen); + pTmp = pNewHeader; + + while(p < (pData + *dataLen)) + { + header = *(short*)p; + dataSize = *(((short*)p)+1); + + if( header == sHeader ) // Header found. + { + p += dataSize + 4; // skip it. do not copy to temp buffer + } + else + { + // Extra Info block should not be removed, So copy it to the temp buffer. + memcpy(pTmp, p, dataSize + 4); + p += dataSize + 4; + size += dataSize + 4; + } + + } + + if(size < *dataLen) + { + // clean old extra info block. 
+ memset(pData,0, *dataLen); + + // copy the new extra info block over the old + if(size > 0) + memcpy(pData, pNewHeader, size); + + // set the new extra info size + *dataLen = size; + + retVal = ZIP_OK; + } + else + retVal = ZIP_ERRNO; + + TRYFREE(pNewHeader); + + return retVal; +} ADDED compat/zlib/contrib/minizip/zip.h Index: compat/zlib/contrib/minizip/zip.h ================================================================== --- compat/zlib/contrib/minizip/zip.h +++ compat/zlib/contrib/minizip/zip.h @@ -0,0 +1,362 @@ +/* zip.h -- IO on .zip files using zlib + Version 1.1, February 14h, 2010 + part of the MiniZip project - ( http://www.winimage.com/zLibDll/minizip.html ) + + Copyright (C) 1998-2010 Gilles Vollant (minizip) ( http://www.winimage.com/zLibDll/minizip.html ) + + Modifications for Zip64 support + Copyright (C) 2009-2010 Mathias Svensson ( http://result42.com ) + + For more info read MiniZip_info.txt + + --------------------------------------------------------------------------- + + Condition of use and distribution are the same than zlib : + + This software is provided 'as-is', without any express or implied + warranty. In no event will the authors be held liable for any damages + arising from the use of this software. + + Permission is granted to anyone to use this software for any purpose, + including commercial applications, and to alter it and redistribute it + freely, subject to the following restrictions: + + 1. The origin of this software must not be misrepresented; you must not + claim that you wrote the original software. If you use this software + in a product, an acknowledgment in the product documentation would be + appreciated but is not required. + 2. Altered source versions must be plainly marked as such, and must not be + misrepresented as being the original software. + 3. This notice may not be removed or altered from any source distribution. 
+ + --------------------------------------------------------------------------- + + Changes + + See header of zip.h + +*/ + +#ifndef _zip12_H +#define _zip12_H + +#ifdef __cplusplus +extern "C" { +#endif + +//#define HAVE_BZIP2 + +#ifndef _ZLIB_H +#include "zlib.h" +#endif + +#ifndef _ZLIBIOAPI_H +#include "ioapi.h" +#endif + +#ifdef HAVE_BZIP2 +#include "bzlib.h" +#endif + +#define Z_BZIP2ED 12 + +#if defined(STRICTZIP) || defined(STRICTZIPUNZIP) +/* like the STRICT of WIN32, we define a pointer that cannot be converted + from (void*) without cast */ +typedef struct TagzipFile__ { int unused; } zipFile__; +typedef zipFile__ *zipFile; +#else +typedef voidp zipFile; +#endif + +#define ZIP_OK (0) +#define ZIP_EOF (0) +#define ZIP_ERRNO (Z_ERRNO) +#define ZIP_PARAMERROR (-102) +#define ZIP_BADZIPFILE (-103) +#define ZIP_INTERNALERROR (-104) + +#ifndef DEF_MEM_LEVEL +# if MAX_MEM_LEVEL >= 8 +# define DEF_MEM_LEVEL 8 +# else +# define DEF_MEM_LEVEL MAX_MEM_LEVEL +# endif +#endif +/* default memLevel */ + +/* tm_zip contain date/time info */ +typedef struct tm_zip_s +{ + uInt tm_sec; /* seconds after the minute - [0,59] */ + uInt tm_min; /* minutes after the hour - [0,59] */ + uInt tm_hour; /* hours since midnight - [0,23] */ + uInt tm_mday; /* day of the month - [1,31] */ + uInt tm_mon; /* months since January - [0,11] */ + uInt tm_year; /* years - [1980..2044] */ +} tm_zip; + +typedef struct +{ + tm_zip tmz_date; /* date in understandable format */ + uLong dosDate; /* if dos_date == 0, tmu_date is used */ +/* uLong flag; */ /* general purpose bit flag 2 bytes */ + + uLong internal_fa; /* internal file attributes 2 bytes */ + uLong external_fa; /* external file attributes 4 bytes */ +} zip_fileinfo; + +typedef const char* zipcharpc; + + +#define APPEND_STATUS_CREATE (0) +#define APPEND_STATUS_CREATEAFTER (1) +#define APPEND_STATUS_ADDINZIP (2) + +extern zipFile ZEXPORT zipOpen OF((const char *pathname, int append)); +extern zipFile ZEXPORT zipOpen64 OF((const void *pathname, int append)); +/* + Create a zipfile. + pathname contain on Windows XP a filename like "c:\\zlib\\zlib113.zip" or on + an Unix computer "zlib/zlib113.zip". + if the file pathname exist and append==APPEND_STATUS_CREATEAFTER, the zip + will be created at the end of the file. + (useful if the file contain a self extractor code) + if the file pathname exist and append==APPEND_STATUS_ADDINZIP, we will + add files in existing zip (be sure you don't add file that doesn't exist) + If the zipfile cannot be opened, the return value is NULL. + Else, the return value is a zipFile Handle, usable with other function + of this zip package. +*/ + +/* Note : there is no delete function into a zipfile. 
+ If you want delete file into a zipfile, you must open a zipfile, and create another + Of couse, you can use RAW reading and writing to copy the file you did not want delte +*/ + +extern zipFile ZEXPORT zipOpen2 OF((const char *pathname, + int append, + zipcharpc* globalcomment, + zlib_filefunc_def* pzlib_filefunc_def)); + +extern zipFile ZEXPORT zipOpen2_64 OF((const void *pathname, + int append, + zipcharpc* globalcomment, + zlib_filefunc64_def* pzlib_filefunc_def)); + +extern int ZEXPORT zipOpenNewFileInZip OF((zipFile file, + const char* filename, + const zip_fileinfo* zipfi, + const void* extrafield_local, + uInt size_extrafield_local, + const void* extrafield_global, + uInt size_extrafield_global, + const char* comment, + int method, + int level)); + +extern int ZEXPORT zipOpenNewFileInZip64 OF((zipFile file, + const char* filename, + const zip_fileinfo* zipfi, + const void* extrafield_local, + uInt size_extrafield_local, + const void* extrafield_global, + uInt size_extrafield_global, + const char* comment, + int method, + int level, + int zip64)); + +/* + Open a file in the ZIP for writing. + filename : the filename in zip (if NULL, '-' without quote will be used + *zipfi contain supplemental information + if extrafield_local!=NULL and size_extrafield_local>0, extrafield_local + contains the extrafield data the the local header + if extrafield_global!=NULL and size_extrafield_global>0, extrafield_global + contains the extrafield data the the local header + if comment != NULL, comment contain the comment string + method contain the compression method (0 for store, Z_DEFLATED for deflate) + level contain the level of compression (can be Z_DEFAULT_COMPRESSION) + zip64 is set to 1 if a zip64 extended information block should be added to the local file header. + this MUST be '1' if the uncompressed size is >= 0xffffffff. 
+ +*/ + + +extern int ZEXPORT zipOpenNewFileInZip2 OF((zipFile file, + const char* filename, + const zip_fileinfo* zipfi, + const void* extrafield_local, + uInt size_extrafield_local, + const void* extrafield_global, + uInt size_extrafield_global, + const char* comment, + int method, + int level, + int raw)); + + +extern int ZEXPORT zipOpenNewFileInZip2_64 OF((zipFile file, + const char* filename, + const zip_fileinfo* zipfi, + const void* extrafield_local, + uInt size_extrafield_local, + const void* extrafield_global, + uInt size_extrafield_global, + const char* comment, + int method, + int level, + int raw, + int zip64)); +/* + Same than zipOpenNewFileInZip, except if raw=1, we write raw file + */ + +extern int ZEXPORT zipOpenNewFileInZip3 OF((zipFile file, + const char* filename, + const zip_fileinfo* zipfi, + const void* extrafield_local, + uInt size_extrafield_local, + const void* extrafield_global, + uInt size_extrafield_global, + const char* comment, + int method, + int level, + int raw, + int windowBits, + int memLevel, + int strategy, + const char* password, + uLong crcForCrypting)); + +extern int ZEXPORT zipOpenNewFileInZip3_64 OF((zipFile file, + const char* filename, + const zip_fileinfo* zipfi, + const void* extrafield_local, + uInt size_extrafield_local, + const void* extrafield_global, + uInt size_extrafield_global, + const char* comment, + int method, + int level, + int raw, + int windowBits, + int memLevel, + int strategy, + const char* password, + uLong crcForCrypting, + int zip64 + )); + +/* + Same than zipOpenNewFileInZip2, except + windowBits,memLevel,,strategy : see parameter strategy in deflateInit2 + password : crypting password (NULL for no crypting) + crcForCrypting : crc of file to compress (needed for crypting) + */ + +extern int ZEXPORT zipOpenNewFileInZip4 OF((zipFile file, + const char* filename, + const zip_fileinfo* zipfi, + const void* extrafield_local, + uInt size_extrafield_local, + const void* extrafield_global, + uInt size_extrafield_global, + const char* comment, + int method, + int level, + int raw, + int windowBits, + int memLevel, + int strategy, + const char* password, + uLong crcForCrypting, + uLong versionMadeBy, + uLong flagBase + )); + + +extern int ZEXPORT zipOpenNewFileInZip4_64 OF((zipFile file, + const char* filename, + const zip_fileinfo* zipfi, + const void* extrafield_local, + uInt size_extrafield_local, + const void* extrafield_global, + uInt size_extrafield_global, + const char* comment, + int method, + int level, + int raw, + int windowBits, + int memLevel, + int strategy, + const char* password, + uLong crcForCrypting, + uLong versionMadeBy, + uLong flagBase, + int zip64 + )); +/* + Same than zipOpenNewFileInZip4, except + versionMadeBy : value for Version made by field + flag : value for flag field (compression level info will be added) + */ + + +extern int ZEXPORT zipWriteInFileInZip OF((zipFile file, + const void* buf, + unsigned len)); +/* + Write data in the zipfile +*/ + +extern int ZEXPORT zipCloseFileInZip OF((zipFile file)); +/* + Close the current file in the zipfile +*/ + +extern int ZEXPORT zipCloseFileInZipRaw OF((zipFile file, + uLong uncompressed_size, + uLong crc32)); + +extern int ZEXPORT zipCloseFileInZipRaw64 OF((zipFile file, + ZPOS64_T uncompressed_size, + uLong crc32)); + +/* + Close the current file in the zipfile, for file opened with + parameter raw=1 in zipOpenNewFileInZip2 + uncompressed_size and crc32 are value for the uncompressed size +*/ + +extern int ZEXPORT zipClose OF((zipFile file, + const char* 
global_comment)); +/* + Close the zipfile +*/ + + +extern int ZEXPORT zipRemoveExtraInfoBlock OF((char* pData, int* dataLen, short sHeader)); +/* + zipRemoveExtraInfoBlock - Added by Mathias Svensson + + Remove extra information block from a extra information data for the local file header or central directory header + + It is needed to remove ZIP64 extra information blocks when before data is written if using RAW mode. + + 0x0001 is the signature header for the ZIP64 extra information blocks + + usage. + Remove ZIP64 Extra information from a central director extra field data + zipRemoveExtraInfoBlock(pCenDirExtraFieldData, &nCenDirExtraFieldDataLen, 0x0001); + + Remove ZIP64 Extra information from a Local File Header extra field data + zipRemoveExtraInfoBlock(pLocalHeaderExtraFieldData, &nLocalHeaderExtraFieldDataLen, 0x0001); +*/ + +#ifdef __cplusplus +} +#endif + +#endif /* _zip64_H */ ADDED compat/zlib/contrib/pascal/example.pas Index: compat/zlib/contrib/pascal/example.pas ================================================================== --- compat/zlib/contrib/pascal/example.pas +++ compat/zlib/contrib/pascal/example.pas @@ -0,0 +1,599 @@ +(* example.c -- usage example of the zlib compression library + * Copyright (C) 1995-2003 Jean-loup Gailly. + * For conditions of distribution and use, see copyright notice in zlib.h + * + * Pascal translation + * Copyright (C) 1998 by Jacques Nomssi Nzali. + * For conditions of distribution and use, see copyright notice in readme.txt + * + * Adaptation to the zlibpas interface + * Copyright (C) 2003 by Cosmin Truta. + * For conditions of distribution and use, see copyright notice in readme.txt + *) + +program example; + +{$DEFINE TEST_COMPRESS} +{DO NOT $DEFINE TEST_GZIO} +{$DEFINE TEST_DEFLATE} +{$DEFINE TEST_INFLATE} +{$DEFINE TEST_FLUSH} +{$DEFINE TEST_SYNC} +{$DEFINE TEST_DICT} + +uses SysUtils, zlibpas; + +const TESTFILE = 'foo.gz'; + +(* "hello world" would be more standard, but the repeated "hello" + * stresses the compression code better, sorry... 
+ *) +const hello: PChar = 'hello, hello!'; + +const dictionary: PChar = 'hello'; + +var dictId: LongInt; (* Adler32 value of the dictionary *) + +procedure CHECK_ERR(err: Integer; msg: String); +begin + if err <> Z_OK then + begin + WriteLn(msg, ' error: ', err); + Halt(1); + end; +end; + +procedure EXIT_ERR(const msg: String); +begin + WriteLn('Error: ', msg); + Halt(1); +end; + +(* =========================================================================== + * Test compress and uncompress + *) +{$IFDEF TEST_COMPRESS} +procedure test_compress(compr: Pointer; comprLen: LongInt; + uncompr: Pointer; uncomprLen: LongInt); +var err: Integer; + len: LongInt; +begin + len := StrLen(hello)+1; + + err := compress(compr, comprLen, hello, len); + CHECK_ERR(err, 'compress'); + + StrCopy(PChar(uncompr), 'garbage'); + + err := uncompress(uncompr, uncomprLen, compr, comprLen); + CHECK_ERR(err, 'uncompress'); + + if StrComp(PChar(uncompr), hello) <> 0 then + EXIT_ERR('bad uncompress') + else + WriteLn('uncompress(): ', PChar(uncompr)); +end; +{$ENDIF} + +(* =========================================================================== + * Test read/write of .gz files + *) +{$IFDEF TEST_GZIO} +procedure test_gzio(const fname: PChar; (* compressed file name *) + uncompr: Pointer; + uncomprLen: LongInt); +var err: Integer; + len: Integer; + zfile: gzFile; + pos: LongInt; +begin + len := StrLen(hello)+1; + + zfile := gzopen(fname, 'wb'); + if zfile = NIL then + begin + WriteLn('gzopen error'); + Halt(1); + end; + gzputc(zfile, 'h'); + if gzputs(zfile, 'ello') <> 4 then + begin + WriteLn('gzputs err: ', gzerror(zfile, err)); + Halt(1); + end; + {$IFDEF GZ_FORMAT_STRING} + if gzprintf(zfile, ', %s!', 'hello') <> 8 then + begin + WriteLn('gzprintf err: ', gzerror(zfile, err)); + Halt(1); + end; + {$ELSE} + if gzputs(zfile, ', hello!') <> 8 then + begin + WriteLn('gzputs err: ', gzerror(zfile, err)); + Halt(1); + end; + {$ENDIF} + gzseek(zfile, 1, SEEK_CUR); (* add one zero byte *) + gzclose(zfile); + + zfile := gzopen(fname, 'rb'); + if zfile = NIL then + begin + WriteLn('gzopen error'); + Halt(1); + end; + + StrCopy(PChar(uncompr), 'garbage'); + + if gzread(zfile, uncompr, uncomprLen) <> len then + begin + WriteLn('gzread err: ', gzerror(zfile, err)); + Halt(1); + end; + if StrComp(PChar(uncompr), hello) <> 0 then + begin + WriteLn('bad gzread: ', PChar(uncompr)); + Halt(1); + end + else + WriteLn('gzread(): ', PChar(uncompr)); + + pos := gzseek(zfile, -8, SEEK_CUR); + if (pos <> 6) or (gztell(zfile) <> pos) then + begin + WriteLn('gzseek error, pos=', pos, ', gztell=', gztell(zfile)); + Halt(1); + end; + + if gzgetc(zfile) <> ' ' then + begin + WriteLn('gzgetc error'); + Halt(1); + end; + + if gzungetc(' ', zfile) <> ' ' then + begin + WriteLn('gzungetc error'); + Halt(1); + end; + + gzgets(zfile, PChar(uncompr), uncomprLen); + uncomprLen := StrLen(PChar(uncompr)); + if uncomprLen <> 7 then (* " hello!" 
*) + begin + WriteLn('gzgets err after gzseek: ', gzerror(zfile, err)); + Halt(1); + end; + if StrComp(PChar(uncompr), hello + 6) <> 0 then + begin + WriteLn('bad gzgets after gzseek'); + Halt(1); + end + else + WriteLn('gzgets() after gzseek: ', PChar(uncompr)); + + gzclose(zfile); +end; +{$ENDIF} + +(* =========================================================================== + * Test deflate with small buffers + *) +{$IFDEF TEST_DEFLATE} +procedure test_deflate(compr: Pointer; comprLen: LongInt); +var c_stream: z_stream; (* compression stream *) + err: Integer; + len: LongInt; +begin + len := StrLen(hello)+1; + + c_stream.zalloc := NIL; + c_stream.zfree := NIL; + c_stream.opaque := NIL; + + err := deflateInit(c_stream, Z_DEFAULT_COMPRESSION); + CHECK_ERR(err, 'deflateInit'); + + c_stream.next_in := hello; + c_stream.next_out := compr; + + while (c_stream.total_in <> len) and + (c_stream.total_out < comprLen) do + begin + c_stream.avail_out := 1; { force small buffers } + c_stream.avail_in := 1; + err := deflate(c_stream, Z_NO_FLUSH); + CHECK_ERR(err, 'deflate'); + end; + + (* Finish the stream, still forcing small buffers: *) + while TRUE do + begin + c_stream.avail_out := 1; + err := deflate(c_stream, Z_FINISH); + if err = Z_STREAM_END then + break; + CHECK_ERR(err, 'deflate'); + end; + + err := deflateEnd(c_stream); + CHECK_ERR(err, 'deflateEnd'); +end; +{$ENDIF} + +(* =========================================================================== + * Test inflate with small buffers + *) +{$IFDEF TEST_INFLATE} +procedure test_inflate(compr: Pointer; comprLen : LongInt; + uncompr: Pointer; uncomprLen : LongInt); +var err: Integer; + d_stream: z_stream; (* decompression stream *) +begin + StrCopy(PChar(uncompr), 'garbage'); + + d_stream.zalloc := NIL; + d_stream.zfree := NIL; + d_stream.opaque := NIL; + + d_stream.next_in := compr; + d_stream.avail_in := 0; + d_stream.next_out := uncompr; + + err := inflateInit(d_stream); + CHECK_ERR(err, 'inflateInit'); + + while (d_stream.total_out < uncomprLen) and + (d_stream.total_in < comprLen) do + begin + d_stream.avail_out := 1; (* force small buffers *) + d_stream.avail_in := 1; + err := inflate(d_stream, Z_NO_FLUSH); + if err = Z_STREAM_END then + break; + CHECK_ERR(err, 'inflate'); + end; + + err := inflateEnd(d_stream); + CHECK_ERR(err, 'inflateEnd'); + + if StrComp(PChar(uncompr), hello) <> 0 then + EXIT_ERR('bad inflate') + else + WriteLn('inflate(): ', PChar(uncompr)); +end; +{$ENDIF} + +(* =========================================================================== + * Test deflate with large buffers and dynamic change of compression level + *) +{$IFDEF TEST_DEFLATE} +procedure test_large_deflate(compr: Pointer; comprLen: LongInt; + uncompr: Pointer; uncomprLen: LongInt); +var c_stream: z_stream; (* compression stream *) + err: Integer; +begin + c_stream.zalloc := NIL; + c_stream.zfree := NIL; + c_stream.opaque := NIL; + + err := deflateInit(c_stream, Z_BEST_SPEED); + CHECK_ERR(err, 'deflateInit'); + + c_stream.next_out := compr; + c_stream.avail_out := Integer(comprLen); + + (* At this point, uncompr is still mostly zeroes, so it should compress + * very well: + *) + c_stream.next_in := uncompr; + c_stream.avail_in := Integer(uncomprLen); + err := deflate(c_stream, Z_NO_FLUSH); + CHECK_ERR(err, 'deflate'); + if c_stream.avail_in <> 0 then + EXIT_ERR('deflate not greedy'); + + (* Feed in already compressed data and switch to no compression: *) + deflateParams(c_stream, Z_NO_COMPRESSION, Z_DEFAULT_STRATEGY); + c_stream.next_in := compr; + 
c_stream.avail_in := Integer(comprLen div 2); + err := deflate(c_stream, Z_NO_FLUSH); + CHECK_ERR(err, 'deflate'); + + (* Switch back to compressing mode: *) + deflateParams(c_stream, Z_BEST_COMPRESSION, Z_FILTERED); + c_stream.next_in := uncompr; + c_stream.avail_in := Integer(uncomprLen); + err := deflate(c_stream, Z_NO_FLUSH); + CHECK_ERR(err, 'deflate'); + + err := deflate(c_stream, Z_FINISH); + if err <> Z_STREAM_END then + EXIT_ERR('deflate should report Z_STREAM_END'); + + err := deflateEnd(c_stream); + CHECK_ERR(err, 'deflateEnd'); +end; +{$ENDIF} + +(* =========================================================================== + * Test inflate with large buffers + *) +{$IFDEF TEST_INFLATE} +procedure test_large_inflate(compr: Pointer; comprLen: LongInt; + uncompr: Pointer; uncomprLen: LongInt); +var err: Integer; + d_stream: z_stream; (* decompression stream *) +begin + StrCopy(PChar(uncompr), 'garbage'); + + d_stream.zalloc := NIL; + d_stream.zfree := NIL; + d_stream.opaque := NIL; + + d_stream.next_in := compr; + d_stream.avail_in := Integer(comprLen); + + err := inflateInit(d_stream); + CHECK_ERR(err, 'inflateInit'); + + while TRUE do + begin + d_stream.next_out := uncompr; (* discard the output *) + d_stream.avail_out := Integer(uncomprLen); + err := inflate(d_stream, Z_NO_FLUSH); + if err = Z_STREAM_END then + break; + CHECK_ERR(err, 'large inflate'); + end; + + err := inflateEnd(d_stream); + CHECK_ERR(err, 'inflateEnd'); + + if d_stream.total_out <> 2 * uncomprLen + comprLen div 2 then + begin + WriteLn('bad large inflate: ', d_stream.total_out); + Halt(1); + end + else + WriteLn('large_inflate(): OK'); +end; +{$ENDIF} + +(* =========================================================================== + * Test deflate with full flush + *) +{$IFDEF TEST_FLUSH} +procedure test_flush(compr: Pointer; var comprLen : LongInt); +var c_stream: z_stream; (* compression stream *) + err: Integer; + len: Integer; +begin + len := StrLen(hello)+1; + + c_stream.zalloc := NIL; + c_stream.zfree := NIL; + c_stream.opaque := NIL; + + err := deflateInit(c_stream, Z_DEFAULT_COMPRESSION); + CHECK_ERR(err, 'deflateInit'); + + c_stream.next_in := hello; + c_stream.next_out := compr; + c_stream.avail_in := 3; + c_stream.avail_out := Integer(comprLen); + err := deflate(c_stream, Z_FULL_FLUSH); + CHECK_ERR(err, 'deflate'); + + Inc(PByteArray(compr)^[3]); (* force an error in first compressed block *) + c_stream.avail_in := len - 3; + + err := deflate(c_stream, Z_FINISH); + if err <> Z_STREAM_END then + CHECK_ERR(err, 'deflate'); + + err := deflateEnd(c_stream); + CHECK_ERR(err, 'deflateEnd'); + + comprLen := c_stream.total_out; +end; +{$ENDIF} + +(* =========================================================================== + * Test inflateSync() + *) +{$IFDEF TEST_SYNC} +procedure test_sync(compr: Pointer; comprLen: LongInt; + uncompr: Pointer; uncomprLen : LongInt); +var err: Integer; + d_stream: z_stream; (* decompression stream *) +begin + StrCopy(PChar(uncompr), 'garbage'); + + d_stream.zalloc := NIL; + d_stream.zfree := NIL; + d_stream.opaque := NIL; + + d_stream.next_in := compr; + d_stream.avail_in := 2; (* just read the zlib header *) + + err := inflateInit(d_stream); + CHECK_ERR(err, 'inflateInit'); + + d_stream.next_out := uncompr; + d_stream.avail_out := Integer(uncomprLen); + + inflate(d_stream, Z_NO_FLUSH); + CHECK_ERR(err, 'inflate'); + + d_stream.avail_in := Integer(comprLen-2); (* read all compressed data *) + err := inflateSync(d_stream); (* but skip the damaged part *) + 
CHECK_ERR(err, 'inflateSync'); + + err := inflate(d_stream, Z_FINISH); + if err <> Z_DATA_ERROR then + EXIT_ERR('inflate should report DATA_ERROR'); + (* Because of incorrect adler32 *) + + err := inflateEnd(d_stream); + CHECK_ERR(err, 'inflateEnd'); + + WriteLn('after inflateSync(): hel', PChar(uncompr)); +end; +{$ENDIF} + +(* =========================================================================== + * Test deflate with preset dictionary + *) +{$IFDEF TEST_DICT} +procedure test_dict_deflate(compr: Pointer; comprLen: LongInt); +var c_stream: z_stream; (* compression stream *) + err: Integer; +begin + c_stream.zalloc := NIL; + c_stream.zfree := NIL; + c_stream.opaque := NIL; + + err := deflateInit(c_stream, Z_BEST_COMPRESSION); + CHECK_ERR(err, 'deflateInit'); + + err := deflateSetDictionary(c_stream, dictionary, StrLen(dictionary)); + CHECK_ERR(err, 'deflateSetDictionary'); + + dictId := c_stream.adler; + c_stream.next_out := compr; + c_stream.avail_out := Integer(comprLen); + + c_stream.next_in := hello; + c_stream.avail_in := StrLen(hello)+1; + + err := deflate(c_stream, Z_FINISH); + if err <> Z_STREAM_END then + EXIT_ERR('deflate should report Z_STREAM_END'); + + err := deflateEnd(c_stream); + CHECK_ERR(err, 'deflateEnd'); +end; +{$ENDIF} + +(* =========================================================================== + * Test inflate with a preset dictionary + *) +{$IFDEF TEST_DICT} +procedure test_dict_inflate(compr: Pointer; comprLen: LongInt; + uncompr: Pointer; uncomprLen: LongInt); +var err: Integer; + d_stream: z_stream; (* decompression stream *) +begin + StrCopy(PChar(uncompr), 'garbage'); + + d_stream.zalloc := NIL; + d_stream.zfree := NIL; + d_stream.opaque := NIL; + + d_stream.next_in := compr; + d_stream.avail_in := Integer(comprLen); + + err := inflateInit(d_stream); + CHECK_ERR(err, 'inflateInit'); + + d_stream.next_out := uncompr; + d_stream.avail_out := Integer(uncomprLen); + + while TRUE do + begin + err := inflate(d_stream, Z_NO_FLUSH); + if err = Z_STREAM_END then + break; + if err = Z_NEED_DICT then + begin + if d_stream.adler <> dictId then + EXIT_ERR('unexpected dictionary'); + err := inflateSetDictionary(d_stream, dictionary, StrLen(dictionary)); + end; + CHECK_ERR(err, 'inflate with dict'); + end; + + err := inflateEnd(d_stream); + CHECK_ERR(err, 'inflateEnd'); + + if StrComp(PChar(uncompr), hello) <> 0 then + EXIT_ERR('bad inflate with dict') + else + WriteLn('inflate with dictionary: ', PChar(uncompr)); +end; +{$ENDIF} + +var compr, uncompr: Pointer; + comprLen, uncomprLen: LongInt; + +begin + if zlibVersion^ <> ZLIB_VERSION[1] then + EXIT_ERR('Incompatible zlib version'); + + WriteLn('zlib version: ', zlibVersion); + WriteLn('zlib compile flags: ', Format('0x%x', [zlibCompileFlags])); + + comprLen := 10000 * SizeOf(Integer); (* don't overflow on MSDOS *) + uncomprLen := comprLen; + GetMem(compr, comprLen); + GetMem(uncompr, uncomprLen); + if (compr = NIL) or (uncompr = NIL) then + EXIT_ERR('Out of memory'); + (* compr and uncompr are cleared to avoid reading uninitialized + * data and to ensure that uncompr compresses well. 
+ *) + FillChar(compr^, comprLen, 0); + FillChar(uncompr^, uncomprLen, 0); + + {$IFDEF TEST_COMPRESS} + WriteLn('** Testing compress'); + test_compress(compr, comprLen, uncompr, uncomprLen); + {$ENDIF} + + {$IFDEF TEST_GZIO} + WriteLn('** Testing gzio'); + if ParamCount >= 1 then + test_gzio(ParamStr(1), uncompr, uncomprLen) + else + test_gzio(TESTFILE, uncompr, uncomprLen); + {$ENDIF} + + {$IFDEF TEST_DEFLATE} + WriteLn('** Testing deflate with small buffers'); + test_deflate(compr, comprLen); + {$ENDIF} + {$IFDEF TEST_INFLATE} + WriteLn('** Testing inflate with small buffers'); + test_inflate(compr, comprLen, uncompr, uncomprLen); + {$ENDIF} + + {$IFDEF TEST_DEFLATE} + WriteLn('** Testing deflate with large buffers'); + test_large_deflate(compr, comprLen, uncompr, uncomprLen); + {$ENDIF} + {$IFDEF TEST_INFLATE} + WriteLn('** Testing inflate with large buffers'); + test_large_inflate(compr, comprLen, uncompr, uncomprLen); + {$ENDIF} + + {$IFDEF TEST_FLUSH} + WriteLn('** Testing deflate with full flush'); + test_flush(compr, comprLen); + {$ENDIF} + {$IFDEF TEST_SYNC} + WriteLn('** Testing inflateSync'); + test_sync(compr, comprLen, uncompr, uncomprLen); + {$ENDIF} + comprLen := uncomprLen; + + {$IFDEF TEST_DICT} + WriteLn('** Testing deflate and inflate with preset dictionary'); + test_dict_deflate(compr, comprLen); + test_dict_inflate(compr, comprLen, uncompr, uncomprLen); + {$ENDIF} + + FreeMem(compr, comprLen); + FreeMem(uncompr, uncomprLen); +end. ADDED compat/zlib/contrib/pascal/readme.txt Index: compat/zlib/contrib/pascal/readme.txt ================================================================== --- compat/zlib/contrib/pascal/readme.txt +++ compat/zlib/contrib/pascal/readme.txt @@ -0,0 +1,76 @@ + +This directory contains a Pascal (Delphi, Kylix) interface to the +zlib data compression library. + + +Directory listing +================= + +zlibd32.mak makefile for Borland C++ +example.pas usage example of zlib +zlibpas.pas the Pascal interface to zlib +readme.txt this file + + +Compatibility notes +=================== + +- Although the name "zlib" would have been more normal for the + zlibpas unit, this name is already taken by Borland's ZLib unit. + This is somehow unfortunate, because that unit is not a genuine + interface to the full-fledged zlib functionality, but a suite of + class wrappers around zlib streams. Other essential features, + such as checksums, are missing. + It would have been more appropriate for that unit to have a name + like "ZStreams", or something similar. + +- The C and zlib-supplied types int, uInt, long, uLong, etc. are + translated directly into Pascal types of similar sizes (Integer, + LongInt, etc.), to avoid namespace pollution. In particular, + there is no conversion of unsigned int into a Pascal unsigned + integer. The Word type is non-portable and has the same size + (16 bits) both in a 16-bit and in a 32-bit environment, unlike + Integer. Even if there is a 32-bit Cardinal type, there is no + real need for unsigned int in zlib under a 32-bit environment. + +- Except for the callbacks, the zlib function interfaces are + assuming the calling convention normally used in Pascal + (__pascal for DOS and Windows16, __fastcall for Windows32). + Since the cdecl keyword is used, the old Turbo Pascal does + not work with this interface. + +- The gz* function interfaces are not translated, to avoid + interfacing problems with the C runtime library. Besides, + gzprintf(gzFile file, const char *format, ...) + cannot be translated into Pascal. 
+ + +Legal issues +============ + +The zlibpas interface is: + Copyright (C) 1995-2003 Jean-loup Gailly and Mark Adler. + Copyright (C) 1998 by Bob Dellaca. + Copyright (C) 2003 by Cosmin Truta. + +The example program is: + Copyright (C) 1995-2003 by Jean-loup Gailly. + Copyright (C) 1998,1999,2000 by Jacques Nomssi Nzali. + Copyright (C) 2003 by Cosmin Truta. + + This software is provided 'as-is', without any express or implied + warranty. In no event will the author be held liable for any damages + arising from the use of this software. + + Permission is granted to anyone to use this software for any purpose, + including commercial applications, and to alter it and redistribute it + freely, subject to the following restrictions: + + 1. The origin of this software must not be misrepresented; you must not + claim that you wrote the original software. If you use this software + in a product, an acknowledgment in the product documentation would be + appreciated but is not required. + 2. Altered source versions must be plainly marked as such, and must not be + misrepresented as being the original software. + 3. This notice may not be removed or altered from any source distribution. + ADDED compat/zlib/contrib/pascal/zlibd32.mak Index: compat/zlib/contrib/pascal/zlibd32.mak ================================================================== --- compat/zlib/contrib/pascal/zlibd32.mak +++ compat/zlib/contrib/pascal/zlibd32.mak @@ -0,0 +1,99 @@ +# Makefile for zlib +# For use with Delphi and C++ Builder under Win32 +# Updated for zlib 1.2.x by Cosmin Truta + +# ------------ Borland C++ ------------ + +# This project uses the Delphi (fastcall/register) calling convention: +LOC = -DZEXPORT=__fastcall -DZEXPORTVA=__cdecl + +CC = bcc32 +LD = bcc32 +AR = tlib +# do not use "-pr" in CFLAGS +CFLAGS = -a -d -k- -O2 $(LOC) +LDFLAGS = + + +# variables +ZLIB_LIB = zlib.lib + +OBJ1 = adler32.obj compress.obj crc32.obj deflate.obj gzclose.obj gzlib.obj gzread.obj +OBJ2 = gzwrite.obj infback.obj inffast.obj inflate.obj inftrees.obj trees.obj uncompr.obj zutil.obj +OBJP1 = +adler32.obj+compress.obj+crc32.obj+deflate.obj+gzclose.obj+gzlib.obj+gzread.obj +OBJP2 = +gzwrite.obj+infback.obj+inffast.obj+inflate.obj+inftrees.obj+trees.obj+uncompr.obj+zutil.obj + + +# targets +all: $(ZLIB_LIB) example.exe minigzip.exe + +.c.obj: + $(CC) -c $(CFLAGS) $*.c + +adler32.obj: adler32.c zlib.h zconf.h + +compress.obj: compress.c zlib.h zconf.h + +crc32.obj: crc32.c zlib.h zconf.h crc32.h + +deflate.obj: deflate.c deflate.h zutil.h zlib.h zconf.h + +gzclose.obj: gzclose.c zlib.h zconf.h gzguts.h + +gzlib.obj: gzlib.c zlib.h zconf.h gzguts.h + +gzread.obj: gzread.c zlib.h zconf.h gzguts.h + +gzwrite.obj: gzwrite.c zlib.h zconf.h gzguts.h + +infback.obj: infback.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h inffixed.h + +inffast.obj: inffast.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h + +inflate.obj: inflate.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h inffixed.h + +inftrees.obj: inftrees.c zutil.h zlib.h zconf.h inftrees.h + +trees.obj: trees.c zutil.h zlib.h zconf.h deflate.h trees.h + +uncompr.obj: uncompr.c zlib.h zconf.h + +zutil.obj: zutil.c zutil.h zlib.h zconf.h + +example.obj: test/example.c zlib.h zconf.h + +minigzip.obj: test/minigzip.c zlib.h zconf.h + + +# For the sake of the old Borland make, +# the command line is cut to fit in the MS-DOS 128 byte limit: +$(ZLIB_LIB): $(OBJ1) $(OBJ2) + -del $(ZLIB_LIB) + $(AR) $(ZLIB_LIB) $(OBJP1) + $(AR) $(ZLIB_LIB) $(OBJP2) + + +# 
testing +test: example.exe minigzip.exe + example + echo hello world | minigzip | minigzip -d + +example.exe: example.obj $(ZLIB_LIB) + $(LD) $(LDFLAGS) example.obj $(ZLIB_LIB) + +minigzip.exe: minigzip.obj $(ZLIB_LIB) + $(LD) $(LDFLAGS) minigzip.obj $(ZLIB_LIB) + + +# cleanup +clean: + -del *.obj + -del *.exe + -del *.lib + -del *.tds + -del zlib.bak + -del foo.gz + ADDED compat/zlib/contrib/pascal/zlibpas.pas Index: compat/zlib/contrib/pascal/zlibpas.pas ================================================================== --- compat/zlib/contrib/pascal/zlibpas.pas +++ compat/zlib/contrib/pascal/zlibpas.pas @@ -0,0 +1,276 @@ +(* zlibpas -- Pascal interface to the zlib data compression library + * + * Copyright (C) 2003 Cosmin Truta. + * Derived from original sources by Bob Dellaca. + * For conditions of distribution and use, see copyright notice in readme.txt + *) + +unit zlibpas; + +interface + +const + ZLIB_VERSION = '1.2.8'; + ZLIB_VERNUM = $1280; + +type + alloc_func = function(opaque: Pointer; items, size: Integer): Pointer; + cdecl; + free_func = procedure(opaque, address: Pointer); + cdecl; + + in_func = function(opaque: Pointer; var buf: PByte): Integer; + cdecl; + out_func = function(opaque: Pointer; buf: PByte; size: Integer): Integer; + cdecl; + + z_streamp = ^z_stream; + z_stream = packed record + next_in: PChar; (* next input byte *) + avail_in: Integer; (* number of bytes available at next_in *) + total_in: LongInt; (* total nb of input bytes read so far *) + + next_out: PChar; (* next output byte should be put there *) + avail_out: Integer; (* remaining free space at next_out *) + total_out: LongInt; (* total nb of bytes output so far *) + + msg: PChar; (* last error message, NULL if no error *) + state: Pointer; (* not visible by applications *) + + zalloc: alloc_func; (* used to allocate the internal state *) + zfree: free_func; (* used to free the internal state *) + opaque: Pointer; (* private data object passed to zalloc and zfree *) + + data_type: Integer; (* best guess about the data type: ascii or binary *) + adler: LongInt; (* adler32 value of the uncompressed data *) + reserved: LongInt; (* reserved for future use *) + end; + + gz_headerp = ^gz_header; + gz_header = packed record + text: Integer; (* true if compressed data believed to be text *) + time: LongInt; (* modification time *) + xflags: Integer; (* extra flags (not used when writing a gzip file) *) + os: Integer; (* operating system *) + extra: PChar; (* pointer to extra field or Z_NULL if none *) + extra_len: Integer; (* extra field length (valid if extra != Z_NULL) *) + extra_max: Integer; (* space at extra (only when reading header) *) + name: PChar; (* pointer to zero-terminated file name or Z_NULL *) + name_max: Integer; (* space at name (only when reading header) *) + comment: PChar; (* pointer to zero-terminated comment or Z_NULL *) + comm_max: Integer; (* space at comment (only when reading header) *) + hcrc: Integer; (* true if there was or will be a header crc *) + done: Integer; (* true when done reading gzip header *) + end; + +(* constants *) +const + Z_NO_FLUSH = 0; + Z_PARTIAL_FLUSH = 1; + Z_SYNC_FLUSH = 2; + Z_FULL_FLUSH = 3; + Z_FINISH = 4; + Z_BLOCK = 5; + Z_TREES = 6; + + Z_OK = 0; + Z_STREAM_END = 1; + Z_NEED_DICT = 2; + Z_ERRNO = -1; + Z_STREAM_ERROR = -2; + Z_DATA_ERROR = -3; + Z_MEM_ERROR = -4; + Z_BUF_ERROR = -5; + Z_VERSION_ERROR = -6; + + Z_NO_COMPRESSION = 0; + Z_BEST_SPEED = 1; + Z_BEST_COMPRESSION = 9; + Z_DEFAULT_COMPRESSION = -1; + + Z_FILTERED = 1; + Z_HUFFMAN_ONLY = 2; + 
Z_RLE = 3; + Z_FIXED = 4; + Z_DEFAULT_STRATEGY = 0; + + Z_BINARY = 0; + Z_TEXT = 1; + Z_ASCII = 1; + Z_UNKNOWN = 2; + + Z_DEFLATED = 8; + +(* basic functions *) +function zlibVersion: PChar; +function deflateInit(var strm: z_stream; level: Integer): Integer; +function deflate(var strm: z_stream; flush: Integer): Integer; +function deflateEnd(var strm: z_stream): Integer; +function inflateInit(var strm: z_stream): Integer; +function inflate(var strm: z_stream; flush: Integer): Integer; +function inflateEnd(var strm: z_stream): Integer; + +(* advanced functions *) +function deflateInit2(var strm: z_stream; level, method, windowBits, + memLevel, strategy: Integer): Integer; +function deflateSetDictionary(var strm: z_stream; const dictionary: PChar; + dictLength: Integer): Integer; +function deflateCopy(var dest, source: z_stream): Integer; +function deflateReset(var strm: z_stream): Integer; +function deflateParams(var strm: z_stream; level, strategy: Integer): Integer; +function deflateTune(var strm: z_stream; good_length, max_lazy, nice_length, max_chain: Integer): Integer; +function deflateBound(var strm: z_stream; sourceLen: LongInt): LongInt; +function deflatePending(var strm: z_stream; var pending: Integer; var bits: Integer): Integer; +function deflatePrime(var strm: z_stream; bits, value: Integer): Integer; +function deflateSetHeader(var strm: z_stream; head: gz_header): Integer; +function inflateInit2(var strm: z_stream; windowBits: Integer): Integer; +function inflateSetDictionary(var strm: z_stream; const dictionary: PChar; + dictLength: Integer): Integer; +function inflateSync(var strm: z_stream): Integer; +function inflateCopy(var dest, source: z_stream): Integer; +function inflateReset(var strm: z_stream): Integer; +function inflateReset2(var strm: z_stream; windowBits: Integer): Integer; +function inflatePrime(var strm: z_stream; bits, value: Integer): Integer; +function inflateMark(var strm: z_stream): LongInt; +function inflateGetHeader(var strm: z_stream; var head: gz_header): Integer; +function inflateBackInit(var strm: z_stream; + windowBits: Integer; window: PChar): Integer; +function inflateBack(var strm: z_stream; in_fn: in_func; in_desc: Pointer; + out_fn: out_func; out_desc: Pointer): Integer; +function inflateBackEnd(var strm: z_stream): Integer; +function zlibCompileFlags: LongInt; + +(* utility functions *) +function compress(dest: PChar; var destLen: LongInt; + const source: PChar; sourceLen: LongInt): Integer; +function compress2(dest: PChar; var destLen: LongInt; + const source: PChar; sourceLen: LongInt; + level: Integer): Integer; +function compressBound(sourceLen: LongInt): LongInt; +function uncompress(dest: PChar; var destLen: LongInt; + const source: PChar; sourceLen: LongInt): Integer; + +(* checksum functions *) +function adler32(adler: LongInt; const buf: PChar; len: Integer): LongInt; +function adler32_combine(adler1, adler2, len2: LongInt): LongInt; +function crc32(crc: LongInt; const buf: PChar; len: Integer): LongInt; +function crc32_combine(crc1, crc2, len2: LongInt): LongInt; + +(* various hacks, don't look :) *) +function deflateInit_(var strm: z_stream; level: Integer; + const version: PChar; stream_size: Integer): Integer; +function inflateInit_(var strm: z_stream; const version: PChar; + stream_size: Integer): Integer; +function deflateInit2_(var strm: z_stream; + level, method, windowBits, memLevel, strategy: Integer; + const version: PChar; stream_size: Integer): Integer; +function inflateInit2_(var strm: z_stream; windowBits: Integer; + 
const version: PChar; stream_size: Integer): Integer; +function inflateBackInit_(var strm: z_stream; + windowBits: Integer; window: PChar; + const version: PChar; stream_size: Integer): Integer; + + +implementation + +{$L adler32.obj} +{$L compress.obj} +{$L crc32.obj} +{$L deflate.obj} +{$L infback.obj} +{$L inffast.obj} +{$L inflate.obj} +{$L inftrees.obj} +{$L trees.obj} +{$L uncompr.obj} +{$L zutil.obj} + +function adler32; external; +function adler32_combine; external; +function compress; external; +function compress2; external; +function compressBound; external; +function crc32; external; +function crc32_combine; external; +function deflate; external; +function deflateBound; external; +function deflateCopy; external; +function deflateEnd; external; +function deflateInit_; external; +function deflateInit2_; external; +function deflateParams; external; +function deflatePending; external; +function deflatePrime; external; +function deflateReset; external; +function deflateSetDictionary; external; +function deflateSetHeader; external; +function deflateTune; external; +function inflate; external; +function inflateBack; external; +function inflateBackEnd; external; +function inflateBackInit_; external; +function inflateCopy; external; +function inflateEnd; external; +function inflateGetHeader; external; +function inflateInit_; external; +function inflateInit2_; external; +function inflateMark; external; +function inflatePrime; external; +function inflateReset; external; +function inflateReset2; external; +function inflateSetDictionary; external; +function inflateSync; external; +function uncompress; external; +function zlibCompileFlags; external; +function zlibVersion; external; + +function deflateInit(var strm: z_stream; level: Integer): Integer; +begin + Result := deflateInit_(strm, level, ZLIB_VERSION, sizeof(z_stream)); +end; + +function deflateInit2(var strm: z_stream; level, method, windowBits, memLevel, + strategy: Integer): Integer; +begin + Result := deflateInit2_(strm, level, method, windowBits, memLevel, strategy, + ZLIB_VERSION, sizeof(z_stream)); +end; + +function inflateInit(var strm: z_stream): Integer; +begin + Result := inflateInit_(strm, ZLIB_VERSION, sizeof(z_stream)); +end; + +function inflateInit2(var strm: z_stream; windowBits: Integer): Integer; +begin + Result := inflateInit2_(strm, windowBits, ZLIB_VERSION, sizeof(z_stream)); +end; + +function inflateBackInit(var strm: z_stream; + windowBits: Integer; window: PChar): Integer; +begin + Result := inflateBackInit_(strm, windowBits, window, + ZLIB_VERSION, sizeof(z_stream)); +end; + +function _malloc(Size: Integer): Pointer; cdecl; +begin + GetMem(Result, Size); +end; + +procedure _free(Block: Pointer); cdecl; +begin + FreeMem(Block); +end; + +procedure _memset(P: Pointer; B: Byte; count: Integer); cdecl; +begin + FillChar(P^, count, B); +end; + +procedure _memcpy(dest, source: Pointer; count: Integer); cdecl; +begin + Move(source^, dest^, count); +end; + +end. 
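The zip-writing API declared earlier in compat/zlib/contrib/minizip/zip.h (zipOpen64, zipOpenNewFileInZip64, zipWriteInFileInZip, zipCloseFileInZip, zipClose) is normally driven in that order. The following minimal C sketch is only an illustration of that call sequence; the archive name, entry name, and payload are made-up examples, and error handling is reduced to returning the first failing status:

    #include <string.h>
    #include "zip.h"

    /* Illustrative sketch only: create demo.zip with a single deflated entry. */
    int write_demo_zip(void)
    {
        const char *text = "hello, zip64";
        zip_fileinfo zfi;
        zipFile zf;
        int err;

        memset(&zfi, 0, sizeof(zfi));   /* date/time and attribute fields left at zero */

        zf = zipOpen64("demo.zip", APPEND_STATUS_CREATE);
        if (zf == NULL)
            return ZIP_ERRNO;

        /* last parameter zip64=1 adds the ZIP64 extra block; per the zip.h
           comments this is required when the uncompressed size may reach
           0xffffffff */
        err = zipOpenNewFileInZip64(zf, "hello.txt", &zfi,
                                    NULL, 0, NULL, 0, NULL,
                                    Z_DEFLATED, Z_DEFAULT_COMPRESSION, 1);
        if (err == ZIP_OK)
            err = zipWriteInFileInZip(zf, text, (unsigned int)strlen(text));
        if (err == ZIP_OK)
            err = zipCloseFileInZip(zf);
        if (zipClose(zf, NULL) != ZIP_OK && err == ZIP_OK)
            err = ZIP_ERRNO;
        return err;
    }

zipClose accepts NULL for the global comment, in which case no comment text is written after the end-of-central-directory record.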
ADDED compat/zlib/contrib/puff/Makefile Index: compat/zlib/contrib/puff/Makefile ================================================================== --- compat/zlib/contrib/puff/Makefile +++ compat/zlib/contrib/puff/Makefile @@ -0,0 +1,42 @@ +CFLAGS=-O + +puff: puff.o pufftest.o + +puff.o: puff.h + +pufftest.o: puff.h + +test: puff + puff zeros.raw + +puft: puff.c puff.h pufftest.o + cc -fprofile-arcs -ftest-coverage -o puft puff.c pufftest.o + +# puff full coverage test (should say 100%) +cov: puft + @rm -f *.gcov *.gcda + @puft -w zeros.raw 2>&1 | cat > /dev/null + @echo '04' | xxd -r -p | puft 2> /dev/null || test $$? -eq 2 + @echo '00' | xxd -r -p | puft 2> /dev/null || test $$? -eq 2 + @echo '00 00 00 00 00' | xxd -r -p | puft 2> /dev/null || test $$? -eq 254 + @echo '00 01 00 fe ff' | xxd -r -p | puft 2> /dev/null || test $$? -eq 2 + @echo '01 01 00 fe ff 0a' | xxd -r -p | puft -f 2>&1 | cat > /dev/null + @echo '02 7e ff ff' | xxd -r -p | puft 2> /dev/null || test $$? -eq 246 + @echo '02' | xxd -r -p | puft 2> /dev/null || test $$? -eq 2 + @echo '04 80 49 92 24 49 92 24 0f b4 ff ff c3 04' | xxd -r -p | puft 2> /dev/null || test $$? -eq 2 + @echo '04 80 49 92 24 49 92 24 71 ff ff 93 11 00' | xxd -r -p | puft 2> /dev/null || test $$? -eq 249 + @echo '04 c0 81 08 00 00 00 00 20 7f eb 0b 00 00' | xxd -r -p | puft 2> /dev/null || test $$? -eq 246 + @echo '0b 00 00' | xxd -r -p | puft -f 2>&1 | cat > /dev/null + @echo '1a 07' | xxd -r -p | puft 2> /dev/null || test $$? -eq 246 + @echo '0c c0 81 00 00 00 00 00 90 ff 6b 04' | xxd -r -p | puft 2> /dev/null || test $$? -eq 245 + @puft -f zeros.raw 2>&1 | cat > /dev/null + @echo 'fc 00 00' | xxd -r -p | puft 2> /dev/null || test $$? -eq 253 + @echo '04 00 fe ff' | xxd -r -p | puft 2> /dev/null || test $$? -eq 252 + @echo '04 00 24 49' | xxd -r -p | puft 2> /dev/null || test $$? -eq 251 + @echo '04 80 49 92 24 49 92 24 0f b4 ff ff c3 84' | xxd -r -p | puft 2> /dev/null || test $$? -eq 248 + @echo '04 00 24 e9 ff ff' | xxd -r -p | puft 2> /dev/null || test $$? -eq 250 + @echo '04 00 24 e9 ff 6d' | xxd -r -p | puft 2> /dev/null || test $$? -eq 247 + @gcov -n puff.c + +clean: + rm -f puff puft *.o *.gc* ADDED compat/zlib/contrib/puff/README Index: compat/zlib/contrib/puff/README ================================================================== --- compat/zlib/contrib/puff/README +++ compat/zlib/contrib/puff/README @@ -0,0 +1,63 @@ +Puff -- A Simple Inflate +3 Mar 2003 +Mark Adler +madler@alumni.caltech.edu + +What this is -- + +puff.c provides the routine puff() to decompress the deflate data format. It +does so more slowly than zlib, but the code is about one-fifth the size of the +inflate code in zlib, and written to be very easy to read. + +Why I wrote this -- + +puff.c was written to document the deflate format unambiguously, by virtue of +being working C code. It is meant to supplement RFC 1951, which formally +describes the deflate format. I have received many questions on details of the +deflate format, and I hope that reading this code will answer those questions. +puff.c is heavily commented with details of the deflate format, especially +those little nooks and cranies of the format that might not be obvious from a +specification. + +puff.c may also be useful in applications where code size or memory usage is a +very limited resource, and speed is not as important. + +How to use it -- + +Well, most likely you should just be reading puff.c and using zlib for actual +applications, but if you must ... 
+ +Include puff.h in your code, which provides this prototype: + +int puff(unsigned char *dest, /* pointer to destination pointer */ + unsigned long *destlen, /* amount of output space */ + unsigned char *source, /* pointer to source data pointer */ + unsigned long *sourcelen); /* amount of input available */ + +Then you can call puff() to decompress a deflate stream that is in memory in +its entirety at source, to a sufficiently sized block of memory for the +decompressed data at dest. puff() is the only external symbol in puff.c The +only C library functions that puff.c needs are setjmp() and longjmp(), which +are used to simplify error checking in the code to improve readabilty. puff.c +does no memory allocation, and uses less than 2K bytes off of the stack. + +If destlen is not enough space for the uncompressed data, then inflate will +return an error without writing more than destlen bytes. Note that this means +that in order to decompress the deflate data successfully, you need to know +the size of the uncompressed data ahead of time. + +If needed, puff() can determine the size of the uncompressed data with no +output space. This is done by passing dest equal to (unsigned char *)0. Then +the initial value of *destlen is ignored and *destlen is set to the length of +the uncompressed data. So if the size of the uncompressed data is not known, +then two passes of puff() can be used--first to determine the size, and second +to do the actual inflation after allocating the appropriate memory. Not +pretty, but it works. (This is one of the reasons you should be using zlib.) + +The deflate format is self-terminating. If the deflate stream does not end +in *sourcelen bytes, puff() will return an error without reading at or past +endsource. + +On return, *sourcelen is updated to the amount of input data consumed, and +*destlen is updated to the size of the uncompressed data. See the comments +in puff.c for the possible return codes for puff(). ADDED compat/zlib/contrib/puff/puff.c Index: compat/zlib/contrib/puff/puff.c ================================================================== --- compat/zlib/contrib/puff/puff.c +++ compat/zlib/contrib/puff/puff.c @@ -0,0 +1,840 @@ +/* + * puff.c + * Copyright (C) 2002-2013 Mark Adler + * For conditions of distribution and use, see copyright notice in puff.h + * version 2.3, 21 Jan 2013 + * + * puff.c is a simple inflate written to be an unambiguous way to specify the + * deflate format. It is not written for speed but rather simplicity. As a + * side benefit, this code might actually be useful when small code is more + * important than speed, such as bootstrap applications. For typical deflate + * data, zlib's inflate() is about four times as fast as puff(). zlib's + * inflate compiles to around 20K on my machine, whereas puff.c compiles to + * around 4K on my machine (a PowerPC using GNU cc). If the faster decode() + * function here is used, then puff() is only twice as slow as zlib's + * inflate(). + * + * All dynamically allocated memory comes from the stack. The stack required + * is less than 2K bytes. This code is compatible with 16-bit int's and + * assumes that long's are at least 32 bits. puff.c uses the short data type, + * assumed to be 16 bits, for arrays in order to to conserve memory. The code + * works whether integers are stored big endian or little endian. + * + * In the comments below are "Format notes" that describe the inflate process + * and document some of the less obvious aspects of the format. 
This source + * code is meant to supplement RFC 1951, which formally describes the deflate + * format: + * + * http://www.zlib.org/rfc-deflate.html + */ + +/* + * Change history: + * + * 1.0 10 Feb 2002 - First version + * 1.1 17 Feb 2002 - Clarifications of some comments and notes + * - Update puff() dest and source pointers on negative + * errors to facilitate debugging deflators + * - Remove longest from struct huffman -- not needed + * - Simplify offs[] index in construct() + * - Add input size and checking, using longjmp() to + * maintain easy readability + * - Use short data type for large arrays + * - Use pointers instead of long to specify source and + * destination sizes to avoid arbitrary 4 GB limits + * 1.2 17 Mar 2002 - Add faster version of decode(), doubles speed (!), + * but leave simple version for readabilty + * - Make sure invalid distances detected if pointers + * are 16 bits + * - Fix fixed codes table error + * - Provide a scanning mode for determining size of + * uncompressed data + * 1.3 20 Mar 2002 - Go back to lengths for puff() parameters [Gailly] + * - Add a puff.h file for the interface + * - Add braces in puff() for else do [Gailly] + * - Use indexes instead of pointers for readability + * 1.4 31 Mar 2002 - Simplify construct() code set check + * - Fix some comments + * - Add FIXLCODES #define + * 1.5 6 Apr 2002 - Minor comment fixes + * 1.6 7 Aug 2002 - Minor format changes + * 1.7 3 Mar 2003 - Added test code for distribution + * - Added zlib-like license + * 1.8 9 Jan 2004 - Added some comments on no distance codes case + * 1.9 21 Feb 2008 - Fix bug on 16-bit integer architectures [Pohland] + * - Catch missing end-of-block symbol error + * 2.0 25 Jul 2008 - Add #define to permit distance too far back + * - Add option in TEST code for puff to write the data + * - Add option in TEST code to skip input bytes + * - Allow TEST code to read from piped stdin + * 2.1 4 Apr 2010 - Avoid variable initialization for happier compilers + * - Avoid unsigned comparisons for even happier compilers + * 2.2 25 Apr 2010 - Fix bug in variable initializations [Oberhumer] + * - Add const where appropriate [Oberhumer] + * - Split if's and ?'s for coverage testing + * - Break out test code to separate file + * - Move NIL to puff.h + * - Allow incomplete code only if single code length is 1 + * - Add full code coverage test to Makefile + * 2.3 21 Jan 2013 - Check for invalid code length codes in dynamic blocks + */ + +#include /* for setjmp(), longjmp(), and jmp_buf */ +#include "puff.h" /* prototype for puff() */ + +#define local static /* for local function definitions */ + +/* + * Maximums for allocations and loops. It is not useful to change these -- + * they are fixed by the deflate format. 
+ */ +#define MAXBITS 15 /* maximum bits in a code */ +#define MAXLCODES 286 /* maximum number of literal/length codes */ +#define MAXDCODES 30 /* maximum number of distance codes */ +#define MAXCODES (MAXLCODES+MAXDCODES) /* maximum codes lengths to read */ +#define FIXLCODES 288 /* number of fixed literal/length codes */ + +/* input and output state */ +struct state { + /* output state */ + unsigned char *out; /* output buffer */ + unsigned long outlen; /* available space at out */ + unsigned long outcnt; /* bytes written to out so far */ + + /* input state */ + const unsigned char *in; /* input buffer */ + unsigned long inlen; /* available input at in */ + unsigned long incnt; /* bytes read so far */ + int bitbuf; /* bit buffer */ + int bitcnt; /* number of bits in bit buffer */ + + /* input limit error return state for bits() and decode() */ + jmp_buf env; +}; + +/* + * Return need bits from the input stream. This always leaves less than + * eight bits in the buffer. bits() works properly for need == 0. + * + * Format notes: + * + * - Bits are stored in bytes from the least significant bit to the most + * significant bit. Therefore bits are dropped from the bottom of the bit + * buffer, using shift right, and new bytes are appended to the top of the + * bit buffer, using shift left. + */ +local int bits(struct state *s, int need) +{ + long val; /* bit accumulator (can use up to 20 bits) */ + + /* load at least need bits into val */ + val = s->bitbuf; + while (s->bitcnt < need) { + if (s->incnt == s->inlen) + longjmp(s->env, 1); /* out of input */ + val |= (long)(s->in[s->incnt++]) << s->bitcnt; /* load eight bits */ + s->bitcnt += 8; + } + + /* drop need bits and update buffer, always zero to seven bits left */ + s->bitbuf = (int)(val >> need); + s->bitcnt -= need; + + /* return need bits, zeroing the bits above that */ + return (int)(val & ((1L << need) - 1)); +} + +/* + * Process a stored block. + * + * Format notes: + * + * - After the two-bit stored block type (00), the stored block length and + * stored bytes are byte-aligned for fast copying. Therefore any leftover + * bits in the byte that has the last bit of the type, as many as seven, are + * discarded. The value of the discarded bits are not defined and should not + * be checked against any expectation. + * + * - The second inverted copy of the stored block length does not have to be + * checked, but it's probably a good idea to do so anyway. + * + * - A stored block can have zero length. This is sometimes used to byte-align + * subsets of the compressed data for random access or partial recovery. + */ +local int stored(struct state *s) +{ + unsigned len; /* length of stored block */ + + /* discard leftover bits from current byte (assumes s->bitcnt < 8) */ + s->bitbuf = 0; + s->bitcnt = 0; + + /* get length and check against its one's complement */ + if (s->incnt + 4 > s->inlen) + return 2; /* not enough input */ + len = s->in[s->incnt++]; + len |= s->in[s->incnt++] << 8; + if (s->in[s->incnt++] != (~len & 0xff) || + s->in[s->incnt++] != ((~len >> 8) & 0xff)) + return -2; /* didn't match complement! 
*/ + + /* copy len bytes from in to out */ + if (s->incnt + len > s->inlen) + return 2; /* not enough input */ + if (s->out != NIL) { + if (s->outcnt + len > s->outlen) + return 1; /* not enough output space */ + while (len--) + s->out[s->outcnt++] = s->in[s->incnt++]; + } + else { /* just scanning */ + s->outcnt += len; + s->incnt += len; + } + + /* done with a valid stored block */ + return 0; +} + +/* + * Huffman code decoding tables. count[1..MAXBITS] is the number of symbols of + * each length, which for a canonical code are stepped through in order. + * symbol[] are the symbol values in canonical order, where the number of + * entries is the sum of the counts in count[]. The decoding process can be + * seen in the function decode() below. + */ +struct huffman { + short *count; /* number of symbols of each length */ + short *symbol; /* canonically ordered symbols */ +}; + +/* + * Decode a code from the stream s using huffman table h. Return the symbol or + * a negative value if there is an error. If all of the lengths are zero, i.e. + * an empty code, or if the code is incomplete and an invalid code is received, + * then -10 is returned after reading MAXBITS bits. + * + * Format notes: + * + * - The codes as stored in the compressed data are bit-reversed relative to + * a simple integer ordering of codes of the same lengths. Hence below the + * bits are pulled from the compressed data one at a time and used to + * build the code value reversed from what is in the stream in order to + * permit simple integer comparisons for decoding. A table-based decoding + * scheme (as used in zlib) does not need to do this reversal. + * + * - The first code for the shortest length is all zeros. Subsequent codes of + * the same length are simply integer increments of the previous code. When + * moving up a length, a zero bit is appended to the code. For a complete + * code, the last code of the longest length will be all ones. + * + * - Incomplete codes are handled by this decoder, since they are permitted + * in the deflate format. See the format notes for fixed() and dynamic(). + */ +#ifdef SLOW +local int decode(struct state *s, const struct huffman *h) +{ + int len; /* current number of bits in code */ + int code; /* len bits being decoded */ + int first; /* first code of length len */ + int count; /* number of codes of length len */ + int index; /* index of first code of length len in symbol table */ + + code = first = index = 0; + for (len = 1; len <= MAXBITS; len++) { + code |= bits(s, 1); /* get next bit */ + count = h->count[len]; + if (code - count < first) /* if length len, return symbol */ + return h->symbol[index + (code - first)]; + index += count; /* else update for next length */ + first += count; + first <<= 1; + code <<= 1; + } + return -10; /* ran out of codes */ +} + +/* + * A faster version of decode() for real applications of this code. It's not + * as readable, but it makes puff() twice as fast. And it only makes the code + * a few percent larger. 
+ */ +#else /* !SLOW */ +local int decode(struct state *s, const struct huffman *h) +{ + int len; /* current number of bits in code */ + int code; /* len bits being decoded */ + int first; /* first code of length len */ + int count; /* number of codes of length len */ + int index; /* index of first code of length len in symbol table */ + int bitbuf; /* bits from stream */ + int left; /* bits left in next or left to process */ + short *next; /* next number of codes */ + + bitbuf = s->bitbuf; + left = s->bitcnt; + code = first = index = 0; + len = 1; + next = h->count + 1; + while (1) { + while (left--) { + code |= bitbuf & 1; + bitbuf >>= 1; + count = *next++; + if (code - count < first) { /* if length len, return symbol */ + s->bitbuf = bitbuf; + s->bitcnt = (s->bitcnt - len) & 7; + return h->symbol[index + (code - first)]; + } + index += count; /* else update for next length */ + first += count; + first <<= 1; + code <<= 1; + len++; + } + left = (MAXBITS+1) - len; + if (left == 0) + break; + if (s->incnt == s->inlen) + longjmp(s->env, 1); /* out of input */ + bitbuf = s->in[s->incnt++]; + if (left > 8) + left = 8; + } + return -10; /* ran out of codes */ +} +#endif /* SLOW */ + +/* + * Given the list of code lengths length[0..n-1] representing a canonical + * Huffman code for n symbols, construct the tables required to decode those + * codes. Those tables are the number of codes of each length, and the symbols + * sorted by length, retaining their original order within each length. The + * return value is zero for a complete code set, negative for an over- + * subscribed code set, and positive for an incomplete code set. The tables + * can be used if the return value is zero or positive, but they cannot be used + * if the return value is negative. If the return value is zero, it is not + * possible for decode() using that table to return an error--any stream of + * enough bits will resolve to a symbol. If the return value is positive, then + * it is possible for decode() using that table to return an error for received + * codes past the end of the incomplete lengths. + * + * Not used by decode(), but used for error checking, h->count[0] is the number + * of the n symbols not in the code. So n - h->count[0] is the number of + * codes. This is useful for checking for incomplete codes that have more than + * one symbol, which is an error in a dynamic block. + * + * Assumption: for all i in 0..n-1, 0 <= length[i] <= MAXBITS + * This is assured by the construction of the length arrays in dynamic() and + * fixed() and is not verified by construct(). + * + * Format notes: + * + * - Permitted and expected examples of incomplete codes are one of the fixed + * codes and any code with a single symbol which in deflate is coded as one + * bit instead of zero bits. See the format notes for fixed() and dynamic(). + * + * - Within a given code length, the symbols are kept in ascending order for + * the code bits definition. 
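For concreteness, here is a small worked illustration (added editorially, using the familiar eight-symbol example from RFC 1951 section 3.2.2) of the tables that construct() builds from a list of code lengths:

    lengths[0..7] = {3, 3, 3, 3, 3, 2, 4, 4}            (symbols 0..7)

    h->count[2] = 1, h->count[3] = 5, h->count[4] = 2    (all other counts 0)
    offs[2] = 0, offs[3] = 1, offs[4] = 6
    h->symbol[] = {5, 0, 1, 2, 3, 4, 6, 7}               (sorted by length, and
                                                          by symbol value within
                                                          each length)

    The return value is 0 (a complete code), and decode() will resolve the
    canonical code values 00 to symbol 5, 010..110 to symbols 0..4, and
    1110..1111 to symbols 6 and 7.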
+ */ +local int construct(struct huffman *h, const short *length, int n) +{ + int symbol; /* current symbol when stepping through length[] */ + int len; /* current length when stepping through h->count[] */ + int left; /* number of possible codes left of current length */ + short offs[MAXBITS+1]; /* offsets in symbol table for each length */ + + /* count number of codes of each length */ + for (len = 0; len <= MAXBITS; len++) + h->count[len] = 0; + for (symbol = 0; symbol < n; symbol++) + (h->count[length[symbol]])++; /* assumes lengths are within bounds */ + if (h->count[0] == n) /* no codes! */ + return 0; /* complete, but decode() will fail */ + + /* check for an over-subscribed or incomplete set of lengths */ + left = 1; /* one possible code of zero length */ + for (len = 1; len <= MAXBITS; len++) { + left <<= 1; /* one more bit, double codes left */ + left -= h->count[len]; /* deduct count from possible codes */ + if (left < 0) + return left; /* over-subscribed--return negative */ + } /* left > 0 means incomplete */ + + /* generate offsets into symbol table for each length for sorting */ + offs[1] = 0; + for (len = 1; len < MAXBITS; len++) + offs[len + 1] = offs[len] + h->count[len]; + + /* + * put symbols in table sorted by length, by symbol order within each + * length + */ + for (symbol = 0; symbol < n; symbol++) + if (length[symbol] != 0) + h->symbol[offs[length[symbol]]++] = symbol; + + /* return zero for complete set, positive for incomplete set */ + return left; +} + +/* + * Decode literal/length and distance codes until an end-of-block code. + * + * Format notes: + * + * - Compressed data that is after the block type if fixed or after the code + * description if dynamic is a combination of literals and length/distance + * pairs terminated by and end-of-block code. Literals are simply Huffman + * coded bytes. A length/distance pair is a coded length followed by a + * coded distance to represent a string that occurs earlier in the + * uncompressed data that occurs again at the current location. + * + * - Literals, lengths, and the end-of-block code are combined into a single + * code of up to 286 symbols. They are 256 literals (0..255), 29 length + * symbols (257..285), and the end-of-block symbol (256). + * + * - There are 256 possible lengths (3..258), and so 29 symbols are not enough + * to represent all of those. Lengths 3..10 and 258 are in fact represented + * by just a length symbol. Lengths 11..257 are represented as a symbol and + * some number of extra bits that are added as an integer to the base length + * of the length symbol. The number of extra bits is determined by the base + * length symbol. These are in the static arrays below, lens[] for the base + * lengths and lext[] for the corresponding number of extra bits. + * + * - The reason that 258 gets its own symbol is that the longest length is used + * often in highly redundant files. Note that 258 can also be coded as the + * base value 227 plus the maximum extra value of 31. While a good deflate + * should never do this, it is not an error, and should be decoded properly. + * + * - If a length is decoded, including its extra bits if any, then it is + * followed a distance code. There are up to 30 distance symbols. Again + * there are many more possible distances (1..32768), so extra bits are added + * to a base value represented by the symbol. The distances 1..4 get their + * own symbol, but the rest require extra bits. 
The base distances and + * corresponding number of extra bits are below in the static arrays dist[] + * and dext[]. + * + * - Literal bytes are simply written to the output. A length/distance pair is + * an instruction to copy previously uncompressed bytes to the output. The + * copy is from distance bytes back in the output stream, copying for length + * bytes. + * + * - Distances pointing before the beginning of the output data are not + * permitted. + * + * - Overlapped copies, where the length is greater than the distance, are + * allowed and common. For example, a distance of one and a length of 258 + * simply copies the last byte 258 times. A distance of four and a length of + * twelve copies the last four bytes three times. A simple forward copy + * ignoring whether the length is greater than the distance or not implements + * this correctly. You should not use memcpy() since its behavior is not + * defined for overlapped arrays. You should not use memmove() or bcopy() + * since though their behavior -is- defined for overlapping arrays, it is + * defined to do the wrong thing in this case. + */ +local int codes(struct state *s, + const struct huffman *lencode, + const struct huffman *distcode) +{ + int symbol; /* decoded symbol */ + int len; /* length for copy */ + unsigned dist; /* distance for copy */ + static const short lens[29] = { /* Size base for length codes 257..285 */ + 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 15, 17, 19, 23, 27, 31, + 35, 43, 51, 59, 67, 83, 99, 115, 131, 163, 195, 227, 258}; + static const short lext[29] = { /* Extra bits for length codes 257..285 */ + 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, + 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 0}; + static const short dists[30] = { /* Offset base for distance codes 0..29 */ + 1, 2, 3, 4, 5, 7, 9, 13, 17, 25, 33, 49, 65, 97, 129, 193, + 257, 385, 513, 769, 1025, 1537, 2049, 3073, 4097, 6145, + 8193, 12289, 16385, 24577}; + static const short dext[30] = { /* Extra bits for distance codes 0..29 */ + 0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, + 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, + 12, 12, 13, 13}; + + /* decode literals and length/distance pairs */ + do { + symbol = decode(s, lencode); + if (symbol < 0) + return symbol; /* invalid symbol */ + if (symbol < 256) { /* literal: symbol is the byte */ + /* write out the literal */ + if (s->out != NIL) { + if (s->outcnt == s->outlen) + return 1; + s->out[s->outcnt] = symbol; + } + s->outcnt++; + } + else if (symbol > 256) { /* length */ + /* get and compute length */ + symbol -= 257; + if (symbol >= 29) + return -10; /* invalid fixed code */ + len = lens[symbol] + bits(s, lext[symbol]); + + /* get and check distance */ + symbol = decode(s, distcode); + if (symbol < 0) + return symbol; /* invalid symbol */ + dist = dists[symbol] + bits(s, dext[symbol]); +#ifndef INFLATE_ALLOW_INVALID_DISTANCE_TOOFAR_ARRR + if (dist > s->outcnt) + return -11; /* distance too far back */ +#endif + + /* copy length bytes from distance bytes back */ + if (s->out != NIL) { + if (s->outcnt + len > s->outlen) + return 1; + while (len--) { + s->out[s->outcnt] = +#ifdef INFLATE_ALLOW_INVALID_DISTANCE_TOOFAR_ARRR + dist > s->outcnt ? + 0 : +#endif + s->out[s->outcnt - dist]; + s->outcnt++; + } + } + else + s->outcnt += len; + } + } while (symbol != 256); /* end of block symbol */ + + /* done with a valid fixed or dynamic block */ + return 0; +} + +/* + * Process a fixed codes block. 
+ * + * Format notes: + * + * - This block type can be useful for compressing small amounts of data for + * which the size of the code descriptions in a dynamic block exceeds the + * benefit of custom codes for that block. For fixed codes, no bits are + * spent on code descriptions. Instead the code lengths for literal/length + * codes and distance codes are fixed. The specific lengths for each symbol + * can be seen in the "for" loops below. + * + * - The literal/length code is complete, but has two symbols that are invalid + * and should result in an error if received. This cannot be implemented + * simply as an incomplete code since those two symbols are in the "middle" + * of the code. They are eight bits long and the longest literal/length\ + * code is nine bits. Therefore the code must be constructed with those + * symbols, and the invalid symbols must be detected after decoding. + * + * - The fixed distance codes also have two invalid symbols that should result + * in an error if received. Since all of the distance codes are the same + * length, this can be implemented as an incomplete code. Then the invalid + * codes are detected while decoding. + */ +local int fixed(struct state *s) +{ + static int virgin = 1; + static short lencnt[MAXBITS+1], lensym[FIXLCODES]; + static short distcnt[MAXBITS+1], distsym[MAXDCODES]; + static struct huffman lencode, distcode; + + /* build fixed huffman tables if first call (may not be thread safe) */ + if (virgin) { + int symbol; + short lengths[FIXLCODES]; + + /* construct lencode and distcode */ + lencode.count = lencnt; + lencode.symbol = lensym; + distcode.count = distcnt; + distcode.symbol = distsym; + + /* literal/length table */ + for (symbol = 0; symbol < 144; symbol++) + lengths[symbol] = 8; + for (; symbol < 256; symbol++) + lengths[symbol] = 9; + for (; symbol < 280; symbol++) + lengths[symbol] = 7; + for (; symbol < FIXLCODES; symbol++) + lengths[symbol] = 8; + construct(&lencode, lengths, FIXLCODES); + + /* distance table */ + for (symbol = 0; symbol < MAXDCODES; symbol++) + lengths[symbol] = 5; + construct(&distcode, lengths, MAXDCODES); + + /* do this just once */ + virgin = 0; + } + + /* decode data until end-of-block code */ + return codes(s, &lencode, &distcode); +} + +/* + * Process a dynamic codes block. + * + * Format notes: + * + * - A dynamic block starts with a description of the literal/length and + * distance codes for that block. New dynamic blocks allow the compressor to + * rapidly adapt to changing data with new codes optimized for that data. + * + * - The codes used by the deflate format are "canonical", which means that + * the actual bits of the codes are generated in an unambiguous way simply + * from the number of bits in each code. Therefore the code descriptions + * are simply a list of code lengths for each symbol. + * + * - The code lengths are stored in order for the symbols, so lengths are + * provided for each of the literal/length symbols, and for each of the + * distance symbols. + * + * - If a symbol is not used in the block, this is represented by a zero as + * as the code length. This does not mean a zero-length code, but rather + * that no code should be created for this symbol. There is no way in the + * deflate format to represent a zero-length code. + * + * - The maximum number of bits in a code is 15, so the possible lengths for + * any code are 1..15. + * + * - The fact that a length of zero is not permitted for a code has an + * interesting consequence. 
Normally if only one symbol is used for a given + * code, then in fact that code could be represented with zero bits. However + * in deflate, that code has to be at least one bit. So for example, if + * only a single distance base symbol appears in a block, then it will be + * represented by a single code of length one, in particular one 0 bit. This + * is an incomplete code, since if a 1 bit is received, it has no meaning, + * and should result in an error. So incomplete distance codes of one symbol + * should be permitted, and the receipt of invalid codes should be handled. + * + * - It is also possible to have a single literal/length code, but that code + * must be the end-of-block code, since every dynamic block has one. This + * is not the most efficient way to create an empty block (an empty fixed + * block is fewer bits), but it is allowed by the format. So incomplete + * literal/length codes of one symbol should also be permitted. + * + * - If there are only literal codes and no lengths, then there are no distance + * codes. This is represented by one distance code with zero bits. + * + * - The list of up to 286 length/literal lengths and up to 30 distance lengths + * are themselves compressed using Huffman codes and run-length encoding. In + * the list of code lengths, a 0 symbol means no code, a 1..15 symbol means + * that length, and the symbols 16, 17, and 18 are run-length instructions. + * Each of 16, 17, and 18 are follwed by extra bits to define the length of + * the run. 16 copies the last length 3 to 6 times. 17 represents 3 to 10 + * zero lengths, and 18 represents 11 to 138 zero lengths. Unused symbols + * are common, hence the special coding for zero lengths. + * + * - The symbols for 0..18 are Huffman coded, and so that code must be + * described first. This is simply a sequence of up to 19 three-bit values + * representing no code (0) or the code length for that symbol (1..7). + * + * - A dynamic block starts with three fixed-size counts from which is computed + * the number of literal/length code lengths, the number of distance code + * lengths, and the number of code length code lengths (ok, you come up with + * a better name!) in the code descriptions. For the literal/length and + * distance codes, lengths after those provided are considered zero, i.e. no + * code. The code length code lengths are received in a permuted order (see + * the order[] array below) to make a short code length code length list more + * likely. As it turns out, very short and very long codes are less likely + * to be seen in a dynamic code description, hence what may appear initially + * to be a peculiar ordering. + * + * - Given the number of literal/length code lengths (nlen) and distance code + * lengths (ndist), then they are treated as one long list of nlen + ndist + * code lengths. Therefore run-length coding can and often does cross the + * boundary between the two sets of lengths. + * + * - So to summarize, the code description at the start of a dynamic block is + * three counts for the number of code lengths for the literal/length codes, + * the distance codes, and the code length codes. This is followed by the + * code length code lengths, three bits each. This is used to construct the + * code length code which is used to read the remainder of the lengths. Then + * the literal/length code lengths and distance lengths are read as a single + * set of lengths using the code length codes. 
Codes are constructed from + * the resulting two sets of lengths, and then finally you can start + * decoding actual compressed data in the block. + * + * - For reference, a "typical" size for the code description in a dynamic + * block is around 80 bytes. + */ +local int dynamic(struct state *s) +{ + int nlen, ndist, ncode; /* number of lengths in descriptor */ + int index; /* index of lengths[] */ + int err; /* construct() return value */ + short lengths[MAXCODES]; /* descriptor code lengths */ + short lencnt[MAXBITS+1], lensym[MAXLCODES]; /* lencode memory */ + short distcnt[MAXBITS+1], distsym[MAXDCODES]; /* distcode memory */ + struct huffman lencode, distcode; /* length and distance codes */ + static const short order[19] = /* permutation of code length codes */ + {16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15}; + + /* construct lencode and distcode */ + lencode.count = lencnt; + lencode.symbol = lensym; + distcode.count = distcnt; + distcode.symbol = distsym; + + /* get number of lengths in each table, check lengths */ + nlen = bits(s, 5) + 257; + ndist = bits(s, 5) + 1; + ncode = bits(s, 4) + 4; + if (nlen > MAXLCODES || ndist > MAXDCODES) + return -3; /* bad counts */ + + /* read code length code lengths (really), missing lengths are zero */ + for (index = 0; index < ncode; index++) + lengths[order[index]] = bits(s, 3); + for (; index < 19; index++) + lengths[order[index]] = 0; + + /* build huffman table for code lengths codes (use lencode temporarily) */ + err = construct(&lencode, lengths, 19); + if (err != 0) /* require complete code set here */ + return -4; + + /* read length/literal and distance code length tables */ + index = 0; + while (index < nlen + ndist) { + int symbol; /* decoded value */ + int len; /* last length to repeat */ + + symbol = decode(s, &lencode); + if (symbol < 0) + return symbol; /* invalid symbol */ + if (symbol < 16) /* length in 0..15 */ + lengths[index++] = symbol; + else { /* repeat instruction */ + len = 0; /* assume repeating zeros */ + if (symbol == 16) { /* repeat last length 3..6 times */ + if (index == 0) + return -5; /* no last length! */ + len = lengths[index - 1]; /* last length */ + symbol = 3 + bits(s, 2); + } + else if (symbol == 17) /* repeat zero 3..10 times */ + symbol = 3 + bits(s, 3); + else /* == 18, repeat zero 11..138 times */ + symbol = 11 + bits(s, 7); + if (index + symbol > nlen + ndist) + return -6; /* too many lengths! */ + while (symbol--) /* repeat last or zero symbol times */ + lengths[index++] = len; + } + } + + /* check for end-of-block code -- there better be one! */ + if (lengths[256] == 0) + return -9; + + /* build huffman table for literal/length codes */ + err = construct(&lencode, lengths, nlen); + if (err && (err < 0 || nlen != lencode.count[0] + lencode.count[1])) + return -7; /* incomplete code ok only for single length 1 code */ + + /* build huffman table for distance codes */ + err = construct(&distcode, lengths + nlen, ndist); + if (err && (err < 0 || ndist != distcode.count[0] + distcode.count[1])) + return -8; /* incomplete code ok only for single length 1 code */ + + /* decode data until end-of-block code */ + return codes(s, &lencode, &distcode); +} + +/* + * Inflate source to dest. On return, destlen and sourcelen are updated to the + * size of the uncompressed data and the size of the deflate data respectively. + * On success, the return value of puff() is zero. If there is an error in the + * source data, i.e. it is not in the deflate format, then a negative value is + * returned. 
If there is not enough input available or there is not enough + * output space, then a positive error is returned. In that case, destlen and + * sourcelen are not updated to facilitate retrying from the beginning with the + * provision of more input data or more output space. In the case of invalid + * inflate data (a negative error), the dest and source pointers are updated to + * facilitate the debugging of deflators. + * + * puff() also has a mode to determine the size of the uncompressed output with + * no output written. For this dest must be (unsigned char *)0. In this case, + * the input value of *destlen is ignored, and on return *destlen is set to the + * size of the uncompressed output. + * + * The return codes are: + * + * 2: available inflate data did not terminate + * 1: output space exhausted before completing inflate + * 0: successful inflate + * -1: invalid block type (type == 3) + * -2: stored block length did not match one's complement + * -3: dynamic block code description: too many length or distance codes + * -4: dynamic block code description: code lengths codes incomplete + * -5: dynamic block code description: repeat lengths with no first length + * -6: dynamic block code description: repeat more than specified lengths + * -7: dynamic block code description: invalid literal/length code lengths + * -8: dynamic block code description: invalid distance code lengths + * -9: dynamic block code description: missing end-of-block code + * -10: invalid literal/length or distance code in fixed or dynamic block + * -11: distance is too far back in fixed or dynamic block + * + * Format notes: + * + * - Three bits are read for each block to determine the kind of block and + * whether or not it is the last block. Then the block is decoded and the + * process repeated if it was not the last block. + * + * - The leftover bits in the last byte of the deflate data after the last + * block (if it was a fixed or dynamic block) are undefined and have no + * expected values to check. + */ +int puff(unsigned char *dest, /* pointer to destination pointer */ + unsigned long *destlen, /* amount of output space */ + const unsigned char *source, /* pointer to source data pointer */ + unsigned long *sourcelen) /* amount of input available */ +{ + struct state s; /* input/output state */ + int last, type; /* block information */ + int err; /* return value */ + + /* initialize output state */ + s.out = dest; + s.outlen = *destlen; /* ignored if dest is NIL */ + s.outcnt = 0; + + /* initialize input state */ + s.in = source; + s.inlen = *sourcelen; + s.incnt = 0; + s.bitbuf = 0; + s.bitcnt = 0; + + /* return if bits() or decode() tries to read past available input */ + if (setjmp(s.env) != 0) /* if came back here via longjmp() */ + err = 2; /* then skip do-loop, return error */ + else { + /* process blocks until last block or error */ + do { + last = bits(&s, 1); /* one if last block */ + type = bits(&s, 2); /* block type 0..3 */ + err = type == 0 ? + stored(&s) : + (type == 1 ? + fixed(&s) : + (type == 2 ? 
+ dynamic(&s) : + -1)); /* type == 3, invalid */ + if (err != 0) + break; /* return with error */ + } while (!last); + } + + /* update the lengths and return */ + if (err <= 0) { + *destlen = s.outcnt; + *sourcelen = s.incnt; + } + return err; +} ADDED compat/zlib/contrib/puff/puff.h Index: compat/zlib/contrib/puff/puff.h ================================================================== --- compat/zlib/contrib/puff/puff.h +++ compat/zlib/contrib/puff/puff.h @@ -0,0 +1,35 @@ +/* puff.h + Copyright (C) 2002-2013 Mark Adler, all rights reserved + version 2.3, 21 Jan 2013 + + This software is provided 'as-is', without any express or implied + warranty. In no event will the author be held liable for any damages + arising from the use of this software. + + Permission is granted to anyone to use this software for any purpose, + including commercial applications, and to alter it and redistribute it + freely, subject to the following restrictions: + + 1. The origin of this software must not be misrepresented; you must not + claim that you wrote the original software. If you use this software + in a product, an acknowledgment in the product documentation would be + appreciated but is not required. + 2. Altered source versions must be plainly marked as such, and must not be + misrepresented as being the original software. + 3. This notice may not be removed or altered from any source distribution. + + Mark Adler madler@alumni.caltech.edu + */ + + +/* + * See puff.c for purpose and usage. + */ +#ifndef NIL +# define NIL ((unsigned char *)0) /* for no output option */ +#endif + +int puff(unsigned char *dest, /* pointer to destination pointer */ + unsigned long *destlen, /* amount of output space */ + const unsigned char *source, /* pointer to source data pointer */ + unsigned long *sourcelen); /* amount of input available */ ADDED compat/zlib/contrib/puff/pufftest.c Index: compat/zlib/contrib/puff/pufftest.c ================================================================== --- compat/zlib/contrib/puff/pufftest.c +++ compat/zlib/contrib/puff/pufftest.c @@ -0,0 +1,165 @@ +/* + * pufftest.c + * Copyright (C) 2002-2013 Mark Adler + * For conditions of distribution and use, see copyright notice in puff.h + * version 2.3, 21 Jan 2013 + */ + +/* Example of how to use puff(). + + Usage: puff [-w] [-f] [-nnn] file + ... | puff [-w] [-f] [-nnn] + + where file is the input file with deflate data, nnn is the number of bytes + of input to skip before inflating (e.g. to skip a zlib or gzip header), and + -w is used to write the decompressed data to stdout. -f is for coverage + testing, and causes pufftest to fail with not enough output space (-f does + a write like -w, so -w is not required). */ + +#include +#include +#include "puff.h" + +#if defined(MSDOS) || defined(OS2) || defined(WIN32) || defined(__CYGWIN__) +# include +# include +# define SET_BINARY_MODE(file) setmode(fileno(file), O_BINARY) +#else +# define SET_BINARY_MODE(file) +#endif + +#define local static + +/* Return size times approximately the cube root of 2, keeping the result as 1, + 3, or 5 times a power of 2 -- the result is always > size, until the result + is the maximum value of an unsigned long, where it remains. This is useful + to keep reallocations less than ~33% over the actual data. */ +local size_t bythirds(size_t size) +{ + int n; + size_t m; + + m = size; + for (n = 0; m; n++) + m >>= 1; + if (n < 3) + return size + 1; + n -= 3; + m = size >> n; + m += m == 6 ? 2 : 1; + m <<= n; + return m > size ? 
m : (size_t)(-1); +} + +/* Read the input file *name, or stdin if name is NULL, into allocated memory. + Reallocate to larger buffers until the entire file is read in. Return a + pointer to the allocated data, or NULL if there was a memory allocation + failure. *len is the number of bytes of data read from the input file (even + if load() returns NULL). If the input file was empty or could not be opened + or read, *len is zero. */ +local void *load(const char *name, size_t *len) +{ + size_t size; + void *buf, *swap; + FILE *in; + + *len = 0; + buf = malloc(size = 4096); + if (buf == NULL) + return NULL; + in = name == NULL ? stdin : fopen(name, "rb"); + if (in != NULL) { + for (;;) { + *len += fread((char *)buf + *len, 1, size - *len, in); + if (*len < size) break; + size = bythirds(size); + if (size == *len || (swap = realloc(buf, size)) == NULL) { + free(buf); + buf = NULL; + break; + } + buf = swap; + } + fclose(in); + } + return buf; +} + +int main(int argc, char **argv) +{ + int ret, put = 0, fail = 0; + unsigned skip = 0; + char *arg, *name = NULL; + unsigned char *source = NULL, *dest; + size_t len = 0; + unsigned long sourcelen, destlen; + + /* process arguments */ + while (arg = *++argv, --argc) + if (arg[0] == '-') { + if (arg[1] == 'w' && arg[2] == 0) + put = 1; + else if (arg[1] == 'f' && arg[2] == 0) + fail = 1, put = 1; + else if (arg[1] >= '0' && arg[1] <= '9') + skip = (unsigned)atoi(arg + 1); + else { + fprintf(stderr, "invalid option %s\n", arg); + return 3; + } + } + else if (name != NULL) { + fprintf(stderr, "only one file name allowed\n"); + return 3; + } + else + name = arg; + source = load(name, &len); + if (source == NULL) { + fprintf(stderr, "memory allocation failure\n"); + return 4; + } + if (len == 0) { + fprintf(stderr, "could not read %s, or it was empty\n", + name == NULL ? 
"" : name); + free(source); + return 3; + } + if (skip >= len) { + fprintf(stderr, "skip request of %d leaves no input\n", skip); + free(source); + return 3; + } + + /* test inflate data with offset skip */ + len -= skip; + sourcelen = (unsigned long)len; + ret = puff(NIL, &destlen, source + skip, &sourcelen); + if (ret) + fprintf(stderr, "puff() failed with return code %d\n", ret); + else { + fprintf(stderr, "puff() succeeded uncompressing %lu bytes\n", destlen); + if (sourcelen < len) fprintf(stderr, "%lu compressed bytes unused\n", + len - sourcelen); + } + + /* if requested, inflate again and write decompressd data to stdout */ + if (put && ret == 0) { + if (fail) + destlen >>= 1; + dest = malloc(destlen); + if (dest == NULL) { + fprintf(stderr, "memory allocation failure\n"); + free(source); + return 4; + } + puff(dest, &destlen, source + skip, &sourcelen); + SET_BINARY_MODE(stdout); + fwrite(dest, 1, destlen, stdout); + free(dest); + } + + /* clean up */ + free(source); + return ret; +} ADDED compat/zlib/contrib/puff/zeros.raw Index: compat/zlib/contrib/puff/zeros.raw ================================================================== --- compat/zlib/contrib/puff/zeros.raw +++ compat/zlib/contrib/puff/zeros.raw cannot compute difference between binary files ADDED compat/zlib/contrib/testzlib/testzlib.c Index: compat/zlib/contrib/testzlib/testzlib.c ================================================================== --- compat/zlib/contrib/testzlib/testzlib.c +++ compat/zlib/contrib/testzlib/testzlib.c @@ -0,0 +1,275 @@ +#include +#include +#include + +#include "zlib.h" + + +void MyDoMinus64(LARGE_INTEGER *R,LARGE_INTEGER A,LARGE_INTEGER B) +{ + R->HighPart = A.HighPart - B.HighPart; + if (A.LowPart >= B.LowPart) + R->LowPart = A.LowPart - B.LowPart; + else + { + R->LowPart = A.LowPart - B.LowPart; + R->HighPart --; + } +} + +#ifdef _M_X64 +// see http://msdn2.microsoft.com/library/twchhe95(en-us,vs.80).aspx for __rdtsc +unsigned __int64 __rdtsc(void); +void BeginCountRdtsc(LARGE_INTEGER * pbeginTime64) +{ + // printf("rdtsc = %I64x\n",__rdtsc()); + pbeginTime64->QuadPart=__rdtsc(); +} + +LARGE_INTEGER GetResRdtsc(LARGE_INTEGER beginTime64,BOOL fComputeTimeQueryPerf) +{ + LARGE_INTEGER LIres; + unsigned _int64 res=__rdtsc()-((unsigned _int64)(beginTime64.QuadPart)); + LIres.QuadPart=res; + // printf("rdtsc = %I64x\n",__rdtsc()); + return LIres; +} +#else +#ifdef _M_IX86 +void myGetRDTSC32(LARGE_INTEGER * pbeginTime64) +{ + DWORD dwEdx,dwEax; + _asm + { + rdtsc + mov dwEax,eax + mov dwEdx,edx + } + pbeginTime64->LowPart=dwEax; + pbeginTime64->HighPart=dwEdx; +} + +void BeginCountRdtsc(LARGE_INTEGER * pbeginTime64) +{ + myGetRDTSC32(pbeginTime64); +} + +LARGE_INTEGER GetResRdtsc(LARGE_INTEGER beginTime64,BOOL fComputeTimeQueryPerf) +{ + LARGE_INTEGER LIres,endTime64; + myGetRDTSC32(&endTime64); + + LIres.LowPart=LIres.HighPart=0; + MyDoMinus64(&LIres,endTime64,beginTime64); + return LIres; +} +#else +void myGetRDTSC32(LARGE_INTEGER * pbeginTime64) +{ +} + +void BeginCountRdtsc(LARGE_INTEGER * pbeginTime64) +{ +} + +LARGE_INTEGER GetResRdtsc(LARGE_INTEGER beginTime64,BOOL fComputeTimeQueryPerf) +{ + LARGE_INTEGER lr; + lr.QuadPart=0; + return lr; +} +#endif +#endif + +void BeginCountPerfCounter(LARGE_INTEGER * pbeginTime64,BOOL fComputeTimeQueryPerf) +{ + if ((!fComputeTimeQueryPerf) || (!QueryPerformanceCounter(pbeginTime64))) + { + pbeginTime64->LowPart = GetTickCount(); + pbeginTime64->HighPart = 0; + } +} + +DWORD GetMsecSincePerfCounter(LARGE_INTEGER beginTime64,BOOL 
fComputeTimeQueryPerf) +{ + LARGE_INTEGER endTime64,ticksPerSecond,ticks; + DWORDLONG ticksShifted,tickSecShifted; + DWORD dwLog=16+0; + DWORD dwRet; + if ((!fComputeTimeQueryPerf) || (!QueryPerformanceCounter(&endTime64))) + dwRet = (GetTickCount() - beginTime64.LowPart)*1; + else + { + MyDoMinus64(&ticks,endTime64,beginTime64); + QueryPerformanceFrequency(&ticksPerSecond); + + + { + ticksShifted = Int64ShrlMod32(*(DWORDLONG*)&ticks,dwLog); + tickSecShifted = Int64ShrlMod32(*(DWORDLONG*)&ticksPerSecond,dwLog); + + } + + dwRet = (DWORD)((((DWORD)ticksShifted)*1000)/(DWORD)(tickSecShifted)); + dwRet *=1; + } + return dwRet; +} + +int ReadFileMemory(const char* filename,long* plFileSize,unsigned char** pFilePtr) +{ + FILE* stream; + unsigned char* ptr; + int retVal=1; + stream=fopen(filename, "rb"); + if (stream==NULL) + return 0; + + fseek(stream,0,SEEK_END); + + *plFileSize=ftell(stream); + fseek(stream,0,SEEK_SET); + ptr=malloc((*plFileSize)+1); + if (ptr==NULL) + retVal=0; + else + { + if (fread(ptr, 1, *plFileSize,stream) != (*plFileSize)) + retVal=0; + } + fclose(stream); + *pFilePtr=ptr; + return retVal; +} + +int main(int argc, char *argv[]) +{ + int BlockSizeCompress=0x8000; + int BlockSizeUncompress=0x8000; + int cprLevel=Z_DEFAULT_COMPRESSION ; + long lFileSize; + unsigned char* FilePtr; + long lBufferSizeCpr; + long lBufferSizeUncpr; + long lCompressedSize=0; + unsigned char* CprPtr; + unsigned char* UncprPtr; + long lSizeCpr,lSizeUncpr; + DWORD dwGetTick,dwMsecQP; + LARGE_INTEGER li_qp,li_rdtsc,dwResRdtsc; + + if (argc<=1) + { + printf("run TestZlib [BlockSizeCompress] [BlockSizeUncompress] [compres. level]\n"); + return 0; + } + + if (ReadFileMemory(argv[1],&lFileSize,&FilePtr)==0) + { + printf("error reading %s\n",argv[1]); + return 1; + } + else printf("file %s read, %u bytes\n",argv[1],lFileSize); + + if (argc>=3) + BlockSizeCompress=atol(argv[2]); + + if (argc>=4) + BlockSizeUncompress=atol(argv[3]); + + if (argc>=5) + cprLevel=(int)atol(argv[4]); + + lBufferSizeCpr = lFileSize + (lFileSize/0x10) + 0x200; + lBufferSizeUncpr = lBufferSizeCpr; + + CprPtr=(unsigned char*)malloc(lBufferSizeCpr + BlockSizeCompress); + + BeginCountPerfCounter(&li_qp,TRUE); + dwGetTick=GetTickCount(); + BeginCountRdtsc(&li_rdtsc); + { + z_stream zcpr; + int ret=Z_OK; + long lOrigToDo = lFileSize; + long lOrigDone = 0; + int step=0; + memset(&zcpr,0,sizeof(z_stream)); + deflateInit(&zcpr,cprLevel); + + zcpr.next_in = FilePtr; + zcpr.next_out = CprPtr; + + + do + { + long all_read_before = zcpr.total_in; + zcpr.avail_in = min(lOrigToDo,BlockSizeCompress); + zcpr.avail_out = BlockSizeCompress; + ret=deflate(&zcpr,(zcpr.avail_in==lOrigToDo) ? 
Z_FINISH : Z_SYNC_FLUSH); + lOrigDone += (zcpr.total_in-all_read_before); + lOrigToDo -= (zcpr.total_in-all_read_before); + step++; + } while (ret==Z_OK); + + lSizeCpr=zcpr.total_out; + deflateEnd(&zcpr); + dwGetTick=GetTickCount()-dwGetTick; + dwMsecQP=GetMsecSincePerfCounter(li_qp,TRUE); + dwResRdtsc=GetResRdtsc(li_rdtsc,TRUE); + printf("total compress size = %u, in %u step\n",lSizeCpr,step); + printf("time = %u msec = %f sec\n",dwGetTick,dwGetTick/(double)1000.); + printf("defcpr time QP = %u msec = %f sec\n",dwMsecQP,dwMsecQP/(double)1000.); + printf("defcpr result rdtsc = %I64x\n\n",dwResRdtsc.QuadPart); + } + + CprPtr=(unsigned char*)realloc(CprPtr,lSizeCpr); + UncprPtr=(unsigned char*)malloc(lBufferSizeUncpr + BlockSizeUncompress); + + BeginCountPerfCounter(&li_qp,TRUE); + dwGetTick=GetTickCount(); + BeginCountRdtsc(&li_rdtsc); + { + z_stream zcpr; + int ret=Z_OK; + long lOrigToDo = lSizeCpr; + long lOrigDone = 0; + int step=0; + memset(&zcpr,0,sizeof(z_stream)); + inflateInit(&zcpr); + + zcpr.next_in = CprPtr; + zcpr.next_out = UncprPtr; + + + do + { + long all_read_before = zcpr.total_in; + zcpr.avail_in = min(lOrigToDo,BlockSizeUncompress); + zcpr.avail_out = BlockSizeUncompress; + ret=inflate(&zcpr,Z_SYNC_FLUSH); + lOrigDone += (zcpr.total_in-all_read_before); + lOrigToDo -= (zcpr.total_in-all_read_before); + step++; + } while (ret==Z_OK); + + lSizeUncpr=zcpr.total_out; + inflateEnd(&zcpr); + dwGetTick=GetTickCount()-dwGetTick; + dwMsecQP=GetMsecSincePerfCounter(li_qp,TRUE); + dwResRdtsc=GetResRdtsc(li_rdtsc,TRUE); + printf("total uncompress size = %u, in %u step\n",lSizeUncpr,step); + printf("time = %u msec = %f sec\n",dwGetTick,dwGetTick/(double)1000.); + printf("uncpr time QP = %u msec = %f sec\n",dwMsecQP,dwMsecQP/(double)1000.); + printf("uncpr result rdtsc = %I64x\n\n",dwResRdtsc.QuadPart); + } + + if (lSizeUncpr==lFileSize) + { + if (memcmp(FilePtr,UncprPtr,lFileSize)==0) + printf("compare ok\n"); + + } + + return 0; +} ADDED compat/zlib/contrib/testzlib/testzlib.txt Index: compat/zlib/contrib/testzlib/testzlib.txt ================================================================== --- compat/zlib/contrib/testzlib/testzlib.txt +++ compat/zlib/contrib/testzlib/testzlib.txt @@ -0,0 +1,10 @@ +To build testzLib with Visual Studio 2005: + +copy to a directory file from : +- root of zLib tree +- contrib/testzlib +- contrib/masmx86 +- contrib/masmx64 +- contrib/vstudio/vc7 + +and open testzlib8.sln ADDED compat/zlib/contrib/untgz/Makefile Index: compat/zlib/contrib/untgz/Makefile ================================================================== --- compat/zlib/contrib/untgz/Makefile +++ compat/zlib/contrib/untgz/Makefile @@ -0,0 +1,14 @@ +CC=cc +CFLAGS=-g + +untgz: untgz.o ../../libz.a + $(CC) $(CFLAGS) -o untgz untgz.o -L../.. -lz + +untgz.o: untgz.c ../../zlib.h + $(CC) $(CFLAGS) -c -I../.. untgz.c + +../../libz.a: + cd ../..; ./configure; make + +clean: + rm -f untgz untgz.o *~ ADDED compat/zlib/contrib/untgz/Makefile.msc Index: compat/zlib/contrib/untgz/Makefile.msc ================================================================== --- compat/zlib/contrib/untgz/Makefile.msc +++ compat/zlib/contrib/untgz/Makefile.msc @@ -0,0 +1,17 @@ +CC=cl +CFLAGS=-MD + +untgz.exe: untgz.obj ..\..\zlib.lib + $(CC) $(CFLAGS) untgz.obj ..\..\zlib.lib + +untgz.obj: untgz.c ..\..\zlib.h + $(CC) $(CFLAGS) -c -I..\.. untgz.c + +..\..\zlib.lib: + cd ..\.. 
+ $(MAKE) -f win32\makefile.msc + cd contrib\untgz + +clean: + -del untgz.obj + -del untgz.exe ADDED compat/zlib/contrib/untgz/untgz.c Index: compat/zlib/contrib/untgz/untgz.c ================================================================== --- compat/zlib/contrib/untgz/untgz.c +++ compat/zlib/contrib/untgz/untgz.c @@ -0,0 +1,674 @@ +/* + * untgz.c -- Display contents and extract files from a gzip'd TAR file + * + * written by Pedro A. Aranda Gutierrez + * adaptation to Unix by Jean-loup Gailly + * various fixes by Cosmin Truta + */ + +#include +#include +#include +#include +#include + +#include "zlib.h" + +#ifdef unix +# include +#else +# include +# include +#endif + +#ifdef WIN32 +#include +# ifndef F_OK +# define F_OK 0 +# endif +# define mkdir(dirname,mode) _mkdir(dirname) +# ifdef _MSC_VER +# define access(path,mode) _access(path,mode) +# define chmod(path,mode) _chmod(path,mode) +# define strdup(str) _strdup(str) +# endif +#else +# include +#endif + + +/* values used in typeflag field */ + +#define REGTYPE '0' /* regular file */ +#define AREGTYPE '\0' /* regular file */ +#define LNKTYPE '1' /* link */ +#define SYMTYPE '2' /* reserved */ +#define CHRTYPE '3' /* character special */ +#define BLKTYPE '4' /* block special */ +#define DIRTYPE '5' /* directory */ +#define FIFOTYPE '6' /* FIFO special */ +#define CONTTYPE '7' /* reserved */ + +/* GNU tar extensions */ + +#define GNUTYPE_DUMPDIR 'D' /* file names from dumped directory */ +#define GNUTYPE_LONGLINK 'K' /* long link name */ +#define GNUTYPE_LONGNAME 'L' /* long file name */ +#define GNUTYPE_MULTIVOL 'M' /* continuation of file from another volume */ +#define GNUTYPE_NAMES 'N' /* file name that does not fit into main hdr */ +#define GNUTYPE_SPARSE 'S' /* sparse file */ +#define GNUTYPE_VOLHDR 'V' /* tape/volume header */ + + +/* tar header */ + +#define BLOCKSIZE 512 +#define SHORTNAMESIZE 100 + +struct tar_header +{ /* byte offset */ + char name[100]; /* 0 */ + char mode[8]; /* 100 */ + char uid[8]; /* 108 */ + char gid[8]; /* 116 */ + char size[12]; /* 124 */ + char mtime[12]; /* 136 */ + char chksum[8]; /* 148 */ + char typeflag; /* 156 */ + char linkname[100]; /* 157 */ + char magic[6]; /* 257 */ + char version[2]; /* 263 */ + char uname[32]; /* 265 */ + char gname[32]; /* 297 */ + char devmajor[8]; /* 329 */ + char devminor[8]; /* 337 */ + char prefix[155]; /* 345 */ + /* 500 */ +}; + +union tar_buffer +{ + char buffer[BLOCKSIZE]; + struct tar_header header; +}; + +struct attr_item +{ + struct attr_item *next; + char *fname; + int mode; + time_t time; +}; + +enum { TGZ_EXTRACT, TGZ_LIST, TGZ_INVALID }; + +char *TGZfname OF((const char *)); +void TGZnotfound OF((const char *)); + +int getoct OF((char *, int)); +char *strtime OF((time_t *)); +int setfiletime OF((char *, time_t)); +void push_attr OF((struct attr_item **, char *, int, time_t)); +void restore_attr OF((struct attr_item **)); + +int ExprMatch OF((char *, char *)); + +int makedir OF((char *)); +int matchname OF((int, int, char **, char *)); + +void error OF((const char *)); +int tar OF((gzFile, int, int, int, char **)); + +void help OF((int)); +int main OF((int, char **)); + +char *prog; + +const char *TGZsuffix[] = { "\0", ".tar", ".tar.gz", ".taz", ".tgz", NULL }; + +/* return the file name of the TGZ archive */ +/* or NULL if it does not exist */ + +char *TGZfname (const char *arcname) +{ + static char buffer[1024]; + int origlen,i; + + strcpy(buffer,arcname); + origlen = strlen(buffer); + + for (i=0; TGZsuffix[i]; i++) + { + 
strcpy(buffer+origlen,TGZsuffix[i]); + if (access(buffer,F_OK) == 0) + return buffer; + } + return NULL; +} + + +/* error message for the filename */ + +void TGZnotfound (const char *arcname) +{ + int i; + + fprintf(stderr,"%s: Couldn't find ",prog); + for (i=0;TGZsuffix[i];i++) + fprintf(stderr,(TGZsuffix[i+1]) ? "%s%s, " : "or %s%s\n", + arcname, + TGZsuffix[i]); + exit(1); +} + + +/* convert octal digits to int */ +/* on error return -1 */ + +int getoct (char *p,int width) +{ + int result = 0; + char c; + + while (width--) + { + c = *p++; + if (c == 0) + break; + if (c == ' ') + continue; + if (c < '0' || c > '7') + return -1; + result = result * 8 + (c - '0'); + } + return result; +} + + +/* convert time_t to string */ +/* use the "YYYY/MM/DD hh:mm:ss" format */ + +char *strtime (time_t *t) +{ + struct tm *local; + static char result[32]; + + local = localtime(t); + sprintf(result,"%4d/%02d/%02d %02d:%02d:%02d", + local->tm_year+1900, local->tm_mon+1, local->tm_mday, + local->tm_hour, local->tm_min, local->tm_sec); + return result; +} + + +/* set file time */ + +int setfiletime (char *fname,time_t ftime) +{ +#ifdef WIN32 + static int isWinNT = -1; + SYSTEMTIME st; + FILETIME locft, modft; + struct tm *loctm; + HANDLE hFile; + int result; + + loctm = localtime(&ftime); + if (loctm == NULL) + return -1; + + st.wYear = (WORD)loctm->tm_year + 1900; + st.wMonth = (WORD)loctm->tm_mon + 1; + st.wDayOfWeek = (WORD)loctm->tm_wday; + st.wDay = (WORD)loctm->tm_mday; + st.wHour = (WORD)loctm->tm_hour; + st.wMinute = (WORD)loctm->tm_min; + st.wSecond = (WORD)loctm->tm_sec; + st.wMilliseconds = 0; + if (!SystemTimeToFileTime(&st, &locft) || + !LocalFileTimeToFileTime(&locft, &modft)) + return -1; + + if (isWinNT < 0) + isWinNT = (GetVersion() < 0x80000000) ? 1 : 0; + hFile = CreateFile(fname, GENERIC_WRITE, 0, NULL, OPEN_EXISTING, + (isWinNT ? FILE_FLAG_BACKUP_SEMANTICS : 0), + NULL); + if (hFile == INVALID_HANDLE_VALUE) + return -1; + result = SetFileTime(hFile, NULL, NULL, &modft) ? 
0 : -1; + CloseHandle(hFile); + return result; +#else + struct utimbuf settime; + + settime.actime = settime.modtime = ftime; + return utime(fname,&settime); +#endif +} + + +/* push file attributes */ + +void push_attr(struct attr_item **list,char *fname,int mode,time_t time) +{ + struct attr_item *item; + + item = (struct attr_item *)malloc(sizeof(struct attr_item)); + if (item == NULL) + error("Out of memory"); + item->fname = strdup(fname); + item->mode = mode; + item->time = time; + item->next = *list; + *list = item; +} + + +/* restore file attributes */ + +void restore_attr(struct attr_item **list) +{ + struct attr_item *item, *prev; + + for (item = *list; item != NULL; ) + { + setfiletime(item->fname,item->time); + chmod(item->fname,item->mode); + prev = item; + item = item->next; + free(prev); + } + *list = NULL; +} + + +/* match regular expression */ + +#define ISSPECIAL(c) (((c) == '*') || ((c) == '/')) + +int ExprMatch (char *string,char *expr) +{ + while (1) + { + if (ISSPECIAL(*expr)) + { + if (*expr == '/') + { + if (*string != '\\' && *string != '/') + return 0; + string ++; expr++; + } + else if (*expr == '*') + { + if (*expr ++ == 0) + return 1; + while (*++string != *expr) + if (*string == 0) + return 0; + } + } + else + { + if (*string != *expr) + return 0; + if (*expr++ == 0) + return 1; + string++; + } + } +} + + +/* recursive mkdir */ +/* abort on ENOENT; ignore other errors like "directory already exists" */ +/* return 1 if OK */ +/* 0 on error */ + +int makedir (char *newdir) +{ + char *buffer = strdup(newdir); + char *p; + int len = strlen(buffer); + + if (len <= 0) { + free(buffer); + return 0; + } + if (buffer[len-1] == '/') { + buffer[len-1] = '\0'; + } + if (mkdir(buffer, 0755) == 0) + { + free(buffer); + return 1; + } + + p = buffer+1; + while (1) + { + char hold; + + while(*p && *p != '\\' && *p != '/') + p++; + hold = *p; + *p = 0; + if ((mkdir(buffer, 0755) == -1) && (errno == ENOENT)) + { + fprintf(stderr,"%s: Couldn't create directory %s\n",prog,buffer); + free(buffer); + return 0; + } + if (hold == 0) + break; + *p++ = hold; + } + free(buffer); + return 1; +} + + +int matchname (int arg,int argc,char **argv,char *fname) +{ + if (arg == argc) /* no arguments given (untgz tgzarchive) */ + return 1; + + while (arg < argc) + if (ExprMatch(fname,argv[arg++])) + return 1; + + return 0; /* ignore this for the moment being */ +} + + +/* tar file list or extract */ + +int tar (gzFile in,int action,int arg,int argc,char **argv) +{ + union tar_buffer buffer; + int len; + int err; + int getheader = 1; + int remaining = 0; + FILE *outfile = NULL; + char fname[BLOCKSIZE]; + int tarmode; + time_t tartime; + struct attr_item *attributes = NULL; + + if (action == TGZ_LIST) + printf(" date time size file\n" + " ---------- -------- --------- -------------------------------------\n"); + while (1) + { + len = gzread(in, &buffer, BLOCKSIZE); + if (len < 0) + error(gzerror(in, &err)); + /* + * Always expect complete blocks to process + * the tar information. 
+ */ + if (len != BLOCKSIZE) + { + action = TGZ_INVALID; /* force error exit */ + remaining = 0; /* force I/O cleanup */ + } + + /* + * If we have to get a tar header + */ + if (getheader >= 1) + { + /* + * if we met the end of the tar + * or the end-of-tar block, + * we are done + */ + if (len == 0 || buffer.header.name[0] == 0) + break; + + tarmode = getoct(buffer.header.mode,8); + tartime = (time_t)getoct(buffer.header.mtime,12); + if (tarmode == -1 || tartime == (time_t)-1) + { + buffer.header.name[0] = 0; + action = TGZ_INVALID; + } + + if (getheader == 1) + { + strncpy(fname,buffer.header.name,SHORTNAMESIZE); + if (fname[SHORTNAMESIZE-1] != 0) + fname[SHORTNAMESIZE] = 0; + } + else + { + /* + * The file name is longer than SHORTNAMESIZE + */ + if (strncmp(fname,buffer.header.name,SHORTNAMESIZE-1) != 0) + error("bad long name"); + getheader = 1; + } + + /* + * Act according to the type flag + */ + switch (buffer.header.typeflag) + { + case DIRTYPE: + if (action == TGZ_LIST) + printf(" %s %s\n",strtime(&tartime),fname); + if (action == TGZ_EXTRACT) + { + makedir(fname); + push_attr(&attributes,fname,tarmode,tartime); + } + break; + case REGTYPE: + case AREGTYPE: + remaining = getoct(buffer.header.size,12); + if (remaining == -1) + { + action = TGZ_INVALID; + break; + } + if (action == TGZ_LIST) + printf(" %s %9d %s\n",strtime(&tartime),remaining,fname); + else if (action == TGZ_EXTRACT) + { + if (matchname(arg,argc,argv,fname)) + { + outfile = fopen(fname,"wb"); + if (outfile == NULL) { + /* try creating directory */ + char *p = strrchr(fname, '/'); + if (p != NULL) { + *p = '\0'; + makedir(fname); + *p = '/'; + outfile = fopen(fname,"wb"); + } + } + if (outfile != NULL) + printf("Extracting %s\n",fname); + else + fprintf(stderr, "%s: Couldn't create %s",prog,fname); + } + else + outfile = NULL; + } + getheader = 0; + break; + case GNUTYPE_LONGLINK: + case GNUTYPE_LONGNAME: + remaining = getoct(buffer.header.size,12); + if (remaining < 0 || remaining >= BLOCKSIZE) + { + action = TGZ_INVALID; + break; + } + len = gzread(in, fname, BLOCKSIZE); + if (len < 0) + error(gzerror(in, &err)); + if (fname[BLOCKSIZE-1] != 0 || (int)strlen(fname) > remaining) + { + action = TGZ_INVALID; + break; + } + getheader = 2; + break; + default: + if (action == TGZ_LIST) + printf(" %s <---> %s\n",strtime(&tartime),fname); + break; + } + } + else + { + unsigned int bytes = (remaining > BLOCKSIZE) ? BLOCKSIZE : remaining; + + if (outfile != NULL) + { + if (fwrite(&buffer,sizeof(char),bytes,outfile) != bytes) + { + fprintf(stderr, + "%s: Error writing %s -- skipping\n",prog,fname); + fclose(outfile); + outfile = NULL; + remove(fname); + } + } + remaining -= bytes; + } + + if (remaining == 0) + { + getheader = 1; + if (outfile != NULL) + { + fclose(outfile); + outfile = NULL; + if (action != TGZ_INVALID) + push_attr(&attributes,fname,tarmode,tartime); + } + } + + /* + * Abandon if errors are found + */ + if (action == TGZ_INVALID) + { + error("broken archive"); + break; + } + } + + /* + * Restore file modes and time stamps + */ + restore_attr(&attributes); + + if (gzclose(in) != Z_OK) + error("failed gzclose"); + + return 0; +} + + +/* ============================================================ */ + +void help(int exitval) +{ + printf("untgz version 0.2.1\n" + " using zlib version %s\n\n", + zlibVersion()); + printf("Usage: untgz file.tgz extract all files\n" + " untgz file.tgz fname ... 
extract selected files\n" + " untgz -l file.tgz list archive contents\n" + " untgz -h display this help\n"); + exit(exitval); +} + +void error(const char *msg) +{ + fprintf(stderr, "%s: %s\n", prog, msg); + exit(1); +} + + +/* ============================================================ */ + +#if defined(WIN32) && defined(__GNUC__) +int _CRT_glob = 0; /* disable argument globbing in MinGW */ +#endif + +int main(int argc,char **argv) +{ + int action = TGZ_EXTRACT; + int arg = 1; + char *TGZfile; + gzFile *f; + + prog = strrchr(argv[0],'\\'); + if (prog == NULL) + { + prog = strrchr(argv[0],'/'); + if (prog == NULL) + { + prog = strrchr(argv[0],':'); + if (prog == NULL) + prog = argv[0]; + else + prog++; + } + else + prog++; + } + else + prog++; + + if (argc == 1) + help(0); + + if (strcmp(argv[arg],"-l") == 0) + { + action = TGZ_LIST; + if (argc == ++arg) + help(0); + } + else if (strcmp(argv[arg],"-h") == 0) + { + help(0); + } + + if ((TGZfile = TGZfname(argv[arg])) == NULL) + TGZnotfound(argv[arg]); + + ++arg; + if ((action == TGZ_LIST) && (arg != argc)) + help(1); + +/* + * Process the TGZ file + */ + switch(action) + { + case TGZ_LIST: + case TGZ_EXTRACT: + f = gzopen(TGZfile,"rb"); + if (f == NULL) + { + fprintf(stderr,"%s: Couldn't gzopen %s\n",prog,TGZfile); + return 1; + } + exit(tar(f, action, arg, argc, argv)); + break; + + default: + error("Unknown option"); + exit(1); + } + + return 0; +} ADDED compat/zlib/contrib/vstudio/readme.txt Index: compat/zlib/contrib/vstudio/readme.txt ================================================================== --- compat/zlib/contrib/vstudio/readme.txt +++ compat/zlib/contrib/vstudio/readme.txt @@ -0,0 +1,65 @@ +Building instructions for the DLL versions of Zlib 1.2.8 +======================================================== + +This directory contains projects that build zlib and minizip using +Microsoft Visual C++ 9.0/10.0. + +You don't need to build these projects yourself. You can download the +binaries from: + http://www.winimage.com/zLibDll + +More information can be found at this site. + + + + + +Build instructions for Visual Studio 2008 (32 bits or 64 bits) +-------------------------------------------------------------- +- Uncompress current zlib, including all contrib/* files +- Compile assembly code (with Visual Studio Command Prompt) by running: + bld_ml64.bat (in contrib\masmx64) + bld_ml32.bat (in contrib\masmx86) +- Open contrib\vstudio\vc9\zlibvc.sln with Microsoft Visual C++ 2008 +- Or run: vcbuild /rebuild contrib\vstudio\vc9\zlibvc.sln "Release|Win32" + +Build instructions for Visual Studio 2010 (32 bits or 64 bits) +-------------------------------------------------------------- +- Uncompress current zlib, including all contrib/* files +- Open contrib\vstudio\vc10\zlibvc.sln with Microsoft Visual C++ 2010 + +Build instructions for Visual Studio 2012 (32 bits or 64 bits) +-------------------------------------------------------------- +- Uncompress current zlib, including all contrib/* files +- Open contrib\vstudio\vc11\zlibvc.sln with Microsoft Visual C++ 2012 + + +Important +--------- +- To use zlibwapi.dll in your application, you must define the + macro ZLIB_WINAPI when compiling your application's source files. 
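+
+  As a minimal sketch (the file name is only an example), that can be as
+  simple as:
+
+    /* myapp.c -- define ZLIB_WINAPI before including zlib.h so the
+       declarations use the WINAPI calling convention exported by
+       zlibwapi.dll; it can also be passed on the compiler command
+       line as /DZLIB_WINAPI. */
+    #define ZLIB_WINAPI
+    #include "zlib.h"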
+ + +Additional notes +---------------- +- This DLL, named zlibwapi.dll, is compatible to the old zlib.dll built + by Gilles Vollant from the zlib 1.1.x sources, and distributed at + http://www.winimage.com/zLibDll + It uses the WINAPI calling convention for the exported functions, and + includes the minizip functionality. If your application needs that + particular build of zlib.dll, you can rename zlibwapi.dll to zlib.dll. + +- The new DLL was renamed because there exist several incompatible + versions of zlib.dll on the Internet. + +- There is also an official DLL build of zlib, named zlib1.dll. This one + is exporting the functions using the CDECL convention. See the file + win32\DLL_FAQ.txt found in this zlib distribution. + +- There used to be a ZLIB_DLL macro in zlib 1.1.x, but now this symbol + has a slightly different effect. To avoid compatibility problems, do + not define it here. + + +Gilles Vollant +info@winimage.com ADDED compat/zlib/contrib/vstudio/vc10/miniunz.vcxproj Index: compat/zlib/contrib/vstudio/vc10/miniunz.vcxproj ================================================================== --- compat/zlib/contrib/vstudio/vc10/miniunz.vcxproj +++ compat/zlib/contrib/vstudio/vc10/miniunz.vcxproj @@ -0,0 +1,310 @@ + + + + + Debug + Itanium + + + Debug + Win32 + + + Debug + x64 + + + Release + Itanium + + + Release + Win32 + + + Release + x64 + + + + {C52F9E7B-498A-42BE-8DB4-85A15694382A} + Win32Proj + + + + Application + MultiByte + + + Application + MultiByte + + + Application + MultiByte + + + Application + MultiByte + + + Application + MultiByte + + + Application + MultiByte + + + + + + + + + + + + + + + + + + + + + + + + + <_ProjectFileVersion>10.0.30128.1 + x86\MiniUnzip$(Configuration)\ + x86\MiniUnzip$(Configuration)\Tmp\ + true + false + x86\MiniUnzip$(Configuration)\ + x86\MiniUnzip$(Configuration)\Tmp\ + false + false + x64\MiniUnzip$(Configuration)\ + x64\MiniUnzip$(Configuration)\Tmp\ + true + false + ia64\MiniUnzip$(Configuration)\ + ia64\MiniUnzip$(Configuration)\Tmp\ + true + false + x64\MiniUnzip$(Configuration)\ + x64\MiniUnzip$(Configuration)\Tmp\ + false + false + ia64\MiniUnzip$(Configuration)\ + ia64\MiniUnzip$(Configuration)\Tmp\ + false + false + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebug + false + + + $(IntDir) + Level3 + EditAndContinue + + + x86\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)miniunz.exe + true + $(OutDir)miniunz.pdb + Console + false + + + MachineX86 + + + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) + true + Default + MultiThreaded + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + x86\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)miniunz.exe + true + Console + true + true + false + + + MachineX86 + + + + + X64 + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + $(IntDir) + Level3 + ProgramDatabase + + + 
x64\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)miniunz.exe + true + $(OutDir)miniunz.pdb + Console + MachineX64 + + + + + Itanium + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + $(IntDir) + Level3 + ProgramDatabase + + + ia64\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)miniunz.exe + true + $(OutDir)miniunz.pdb + Console + MachineIA64 + + + + + X64 + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + x64\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)miniunz.exe + true + Console + true + true + MachineX64 + + + + + Itanium + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + ia64\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)miniunz.exe + true + Console + true + true + MachineIA64 + + + + + + + + {8fd826f8-3739-44e6-8cc8-997122e53b8d} + + + + + + ADDED compat/zlib/contrib/vstudio/vc10/miniunz.vcxproj.filters Index: compat/zlib/contrib/vstudio/vc10/miniunz.vcxproj.filters ================================================================== --- compat/zlib/contrib/vstudio/vc10/miniunz.vcxproj.filters +++ compat/zlib/contrib/vstudio/vc10/miniunz.vcxproj.filters @@ -0,0 +1,22 @@ + + + + + {048af943-022b-4db6-beeb-a54c34774ee2} + cpp;c;cxx;def;odl;idl;hpj;bat;asm + + + {c1d600d2-888f-4aea-b73e-8b0dd9befa0c} + h;hpp;hxx;hm;inl;inc + + + {0844199a-966b-4f19-81db-1e0125e141b9} + rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe + + + + + Source Files + + + ADDED compat/zlib/contrib/vstudio/vc10/miniunz.vcxproj.user Index: compat/zlib/contrib/vstudio/vc10/miniunz.vcxproj.user ================================================================== --- compat/zlib/contrib/vstudio/vc10/miniunz.vcxproj.user +++ compat/zlib/contrib/vstudio/vc10/miniunz.vcxproj.user @@ -0,0 +1,3 @@ + + + ADDED compat/zlib/contrib/vstudio/vc10/minizip.vcxproj Index: compat/zlib/contrib/vstudio/vc10/minizip.vcxproj ================================================================== --- compat/zlib/contrib/vstudio/vc10/minizip.vcxproj +++ compat/zlib/contrib/vstudio/vc10/minizip.vcxproj @@ -0,0 +1,307 @@ + + + + + Debug + Itanium + + + Debug + Win32 + + + Debug + x64 + + + Release + Itanium + + + Release + Win32 + + + Release + x64 + + + + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B} + Win32Proj + + + + Application + MultiByte + + + Application + MultiByte + + + Application + MultiByte + + + Application + MultiByte + + + Application + MultiByte + + + Application + MultiByte + + + + + + + + + + + + + + + + + + + + + + + + + <_ProjectFileVersion>10.0.30128.1 + x86\MiniZip$(Configuration)\ + x86\MiniZip$(Configuration)\Tmp\ + true + false + x86\MiniZip$(Configuration)\ + x86\MiniZip$(Configuration)\Tmp\ + false + x64\$(Configuration)\ + x64\$(Configuration)\ + true + false + ia64\$(Configuration)\ + ia64\$(Configuration)\ + true + false + x64\$(Configuration)\ + 
x64\$(Configuration)\ + false + ia64\$(Configuration)\ + ia64\$(Configuration)\ + false + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebug + false + + + $(IntDir) + Level3 + EditAndContinue + + + x86\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)minizip.exe + true + $(OutDir)minizip.pdb + Console + false + + + MachineX86 + + + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) + true + Default + MultiThreaded + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + x86\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)minizip.exe + true + Console + true + true + false + + + MachineX86 + + + + + X64 + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + $(IntDir) + Level3 + ProgramDatabase + + + x64\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)minizip.exe + true + $(OutDir)minizip.pdb + Console + MachineX64 + + + + + Itanium + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + $(IntDir) + Level3 + ProgramDatabase + + + ia64\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)minizip.exe + true + $(OutDir)minizip.pdb + Console + MachineIA64 + + + + + X64 + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + x64\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)minizip.exe + true + Console + true + true + MachineX64 + + + + + Itanium + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + ia64\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)minizip.exe + true + Console + true + true + MachineIA64 + + + + + + + + {8fd826f8-3739-44e6-8cc8-997122e53b8d} + + + + + + ADDED compat/zlib/contrib/vstudio/vc10/minizip.vcxproj.filters Index: compat/zlib/contrib/vstudio/vc10/minizip.vcxproj.filters ================================================================== --- compat/zlib/contrib/vstudio/vc10/minizip.vcxproj.filters +++ compat/zlib/contrib/vstudio/vc10/minizip.vcxproj.filters @@ -0,0 +1,22 @@ + + + + + {c0419b40-bf50-40da-b153-ff74215b79de} + cpp;c;cxx;def;odl;idl;hpj;bat;asm + + + {bb87b070-735b-478e-92ce-7383abb2f36c} + h;hpp;hxx;hm;inl;inc + + + {f46ab6a6-548f-43cb-ae96-681abb5bd5db} + rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe + + + + + Source 
Files + + + ADDED compat/zlib/contrib/vstudio/vc10/minizip.vcxproj.user Index: compat/zlib/contrib/vstudio/vc10/minizip.vcxproj.user ================================================================== --- compat/zlib/contrib/vstudio/vc10/minizip.vcxproj.user +++ compat/zlib/contrib/vstudio/vc10/minizip.vcxproj.user @@ -0,0 +1,3 @@ + + + ADDED compat/zlib/contrib/vstudio/vc10/testzlib.vcxproj Index: compat/zlib/contrib/vstudio/vc10/testzlib.vcxproj ================================================================== --- compat/zlib/contrib/vstudio/vc10/testzlib.vcxproj +++ compat/zlib/contrib/vstudio/vc10/testzlib.vcxproj @@ -0,0 +1,420 @@ + + + + + Debug + Itanium + + + Debug + Win32 + + + Debug + x64 + + + ReleaseWithoutAsm + Itanium + + + ReleaseWithoutAsm + Win32 + + + ReleaseWithoutAsm + x64 + + + Release + Itanium + + + Release + Win32 + + + Release + x64 + + + + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B} + testzlib + Win32Proj + + + + Application + MultiByte + true + + + Application + MultiByte + true + + + Application + MultiByte + + + Application + MultiByte + true + + + Application + MultiByte + true + + + Application + MultiByte + + + Application + true + + + Application + true + + + Application + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + <_ProjectFileVersion>10.0.30128.1 + x86\TestZlib$(Configuration)\ + x86\TestZlib$(Configuration)\Tmp\ + true + false + x86\TestZlib$(Configuration)\ + x86\TestZlib$(Configuration)\Tmp\ + false + false + x86\TestZlib$(Configuration)\ + x86\TestZlib$(Configuration)\Tmp\ + false + false + x64\TestZlib$(Configuration)\ + x64\TestZlib$(Configuration)\Tmp\ + false + ia64\TestZlib$(Configuration)\ + ia64\TestZlib$(Configuration)\Tmp\ + true + false + x64\TestZlib$(Configuration)\ + x64\TestZlib$(Configuration)\Tmp\ + false + ia64\TestZlib$(Configuration)\ + ia64\TestZlib$(Configuration)\Tmp\ + false + false + x64\TestZlib$(Configuration)\ + x64\TestZlib$(Configuration)\Tmp\ + false + ia64\TestZlib$(Configuration)\ + ia64\TestZlib$(Configuration)\Tmp\ + false + false + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + + + + Disabled + ..\..\..;%(AdditionalIncludeDirectories) + ASMV;ASMINF;WIN32;ZLIB_WINAPI;_DEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebug + false + + + AssemblyAndSourceCode + $(IntDir) + Level3 + EditAndContinue + + + ..\..\masmx86\match686.obj;..\..\masmx86\inffas32.obj;%(AdditionalDependencies) + $(OutDir)testzlib.exe + true + $(OutDir)testzlib.pdb + Console + false + + + MachineX86 + + + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;%(AdditionalIncludeDirectories) + WIN32;ZLIB_WINAPI;NDEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + true + Default + MultiThreaded + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + $(OutDir)testzlib.exe + true + Console + true + true + false + + + MachineX86 + + + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;%(AdditionalIncludeDirectories) + ASMV;ASMINF;WIN32;ZLIB_WINAPI;NDEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + true + Default + MultiThreaded + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + 
..\..\masmx86\match686.obj;..\..\masmx86\inffas32.obj;%(AdditionalDependencies) + $(OutDir)testzlib.exe + true + Console + true + true + false + + + MachineX86 + + + + + ..\..\..;%(AdditionalIncludeDirectories) + ASMV;ASMINF;WIN32;ZLIB_WINAPI;_DEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + Default + MultiThreadedDebugDLL + false + $(IntDir) + + + ..\..\masmx64\gvmat64.obj;..\..\masmx64\inffasx64.obj;%(AdditionalDependencies) + + + + + Itanium + + + Disabled + ..\..\..;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;_DEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + AssemblyAndSourceCode + $(IntDir) + Level3 + ProgramDatabase + + + $(OutDir)testzlib.exe + true + $(OutDir)testzlib.pdb + Console + MachineIA64 + + + + + ..\..\..;%(AdditionalIncludeDirectories) + WIN32;ZLIB_WINAPI;NDEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + Default + MultiThreadedDLL + false + $(IntDir) + + + %(AdditionalDependencies) + + + + + Itanium + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;NDEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + $(OutDir)testzlib.exe + true + Console + true + true + MachineIA64 + + + + + ..\..\..;%(AdditionalIncludeDirectories) + ASMV;ASMINF;WIN32;ZLIB_WINAPI;NDEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + Default + MultiThreadedDLL + false + $(IntDir) + + + ..\..\masmx64\gvmat64.obj;..\..\masmx64\inffasx64.obj;%(AdditionalDependencies) + + + + + Itanium + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;NDEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + $(OutDir)testzlib.exe + true + Console + true + true + MachineIA64 + + + + + + + + + + true + true + true + true + true + true + + + + + + + + + + + + + ADDED compat/zlib/contrib/vstudio/vc10/testzlib.vcxproj.filters Index: compat/zlib/contrib/vstudio/vc10/testzlib.vcxproj.filters ================================================================== --- compat/zlib/contrib/vstudio/vc10/testzlib.vcxproj.filters +++ compat/zlib/contrib/vstudio/vc10/testzlib.vcxproj.filters @@ -0,0 +1,58 @@ + + + + + {c1f6a2e3-5da5-4955-8653-310d3efe05a9} + cpp;c;cxx;def;odl;idl;hpj;bat;asm + + + {c2aaffdc-2c95-4d6f-8466-4bec5890af2c} + h;hpp;hxx;hm;inl;inc + + + {c274fe07-05f2-461c-964b-f6341e4e7eb5} + rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe + + + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + ADDED compat/zlib/contrib/vstudio/vc10/testzlib.vcxproj.user Index: compat/zlib/contrib/vstudio/vc10/testzlib.vcxproj.user ================================================================== --- compat/zlib/contrib/vstudio/vc10/testzlib.vcxproj.user +++ 
compat/zlib/contrib/vstudio/vc10/testzlib.vcxproj.user @@ -0,0 +1,3 @@ + + + ADDED compat/zlib/contrib/vstudio/vc10/testzlibdll.vcxproj Index: compat/zlib/contrib/vstudio/vc10/testzlibdll.vcxproj ================================================================== --- compat/zlib/contrib/vstudio/vc10/testzlibdll.vcxproj +++ compat/zlib/contrib/vstudio/vc10/testzlibdll.vcxproj @@ -0,0 +1,310 @@ + + + + + Debug + Itanium + + + Debug + Win32 + + + Debug + x64 + + + Release + Itanium + + + Release + Win32 + + + Release + x64 + + + + {C52F9E7B-498A-42BE-8DB4-85A15694366A} + Win32Proj + + + + Application + MultiByte + + + Application + MultiByte + + + Application + MultiByte + + + Application + MultiByte + + + Application + MultiByte + + + Application + MultiByte + + + + + + + + + + + + + + + + + + + + + + + + + <_ProjectFileVersion>10.0.30128.1 + x86\TestZlibDll$(Configuration)\ + x86\TestZlibDll$(Configuration)\Tmp\ + true + false + x86\TestZlibDll$(Configuration)\ + x86\TestZlibDll$(Configuration)\Tmp\ + false + false + x64\TestZlibDll$(Configuration)\ + x64\TestZlibDll$(Configuration)\Tmp\ + true + false + ia64\TestZlibDll$(Configuration)\ + ia64\TestZlibDll$(Configuration)\Tmp\ + true + false + x64\TestZlibDll$(Configuration)\ + x64\TestZlibDll$(Configuration)\Tmp\ + false + false + ia64\TestZlibDll$(Configuration)\ + ia64\TestZlibDll$(Configuration)\Tmp\ + false + false + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebug + false + + + $(IntDir) + Level3 + EditAndContinue + + + x86\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)testzlibdll.exe + true + $(OutDir)testzlib.pdb + Console + false + + + MachineX86 + + + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) + true + Default + MultiThreaded + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + x86\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)testzlibdll.exe + true + Console + true + true + false + + + MachineX86 + + + + + X64 + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + $(IntDir) + Level3 + ProgramDatabase + + + x64\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)testzlibdll.exe + true + $(OutDir)testzlib.pdb + Console + MachineX64 + + + + + Itanium + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + $(IntDir) + Level3 + ProgramDatabase + + + ia64\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)testzlibdll.exe + true + $(OutDir)testzlib.pdb + Console + MachineIA64 + + + + + X64 + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + 
MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + x64\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)testzlibdll.exe + true + Console + true + true + MachineX64 + + + + + Itanium + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + ia64\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)testzlibdll.exe + true + Console + true + true + MachineIA64 + + + + + + + + {8fd826f8-3739-44e6-8cc8-997122e53b8d} + + + + + + ADDED compat/zlib/contrib/vstudio/vc10/testzlibdll.vcxproj.filters Index: compat/zlib/contrib/vstudio/vc10/testzlibdll.vcxproj.filters ================================================================== --- compat/zlib/contrib/vstudio/vc10/testzlibdll.vcxproj.filters +++ compat/zlib/contrib/vstudio/vc10/testzlibdll.vcxproj.filters @@ -0,0 +1,22 @@ + + + + + {fa61a89f-93fc-4c89-b29e-36224b7592f4} + cpp;c;cxx;def;odl;idl;hpj;bat;asm + + + {d4b85da0-2ba2-4934-b57f-e2584e3848ee} + h;hpp;hxx;hm;inl;inc + + + {e573e075-00bd-4a7d-bd67-a8cc9bfc5aca} + rc;ico;cur;bmp;dlg;rc2;rct;bin;rgs;gif;jpg;jpeg;jpe + + + + + Source Files + + + ADDED compat/zlib/contrib/vstudio/vc10/testzlibdll.vcxproj.user Index: compat/zlib/contrib/vstudio/vc10/testzlibdll.vcxproj.user ================================================================== --- compat/zlib/contrib/vstudio/vc10/testzlibdll.vcxproj.user +++ compat/zlib/contrib/vstudio/vc10/testzlibdll.vcxproj.user @@ -0,0 +1,3 @@ + + + ADDED compat/zlib/contrib/vstudio/vc10/zlib.rc Index: compat/zlib/contrib/vstudio/vc10/zlib.rc ================================================================== --- compat/zlib/contrib/vstudio/vc10/zlib.rc +++ compat/zlib/contrib/vstudio/vc10/zlib.rc @@ -0,0 +1,32 @@ +#include + +#define IDR_VERSION1 1 +IDR_VERSION1 VERSIONINFO MOVEABLE IMPURE LOADONCALL DISCARDABLE + FILEVERSION 1,2,8,0 + PRODUCTVERSION 1,2,8,0 + FILEFLAGSMASK VS_FFI_FILEFLAGSMASK + FILEFLAGS 0 + FILEOS VOS_DOS_WINDOWS32 + FILETYPE VFT_DLL + FILESUBTYPE 0 // not used +BEGIN + BLOCK "StringFileInfo" + BEGIN + BLOCK "040904E4" + //language ID = U.S. 
English, char set = Windows, Multilingual + + BEGIN + VALUE "FileDescription", "zlib data compression and ZIP file I/O library\0" + VALUE "FileVersion", "1.2.8\0" + VALUE "InternalName", "zlib\0" + VALUE "OriginalFilename", "zlibwapi.dll\0" + VALUE "ProductName", "ZLib.DLL\0" + VALUE "Comments","DLL support by Alessandro Iacopetti & Gilles Vollant\0" + VALUE "LegalCopyright", "(C) 1995-2013 Jean-loup Gailly & Mark Adler\0" + END + END + BLOCK "VarFileInfo" + BEGIN + VALUE "Translation", 0x0409, 1252 + END +END ADDED compat/zlib/contrib/vstudio/vc10/zlibstat.vcxproj Index: compat/zlib/contrib/vstudio/vc10/zlibstat.vcxproj ================================================================== --- compat/zlib/contrib/vstudio/vc10/zlibstat.vcxproj +++ compat/zlib/contrib/vstudio/vc10/zlibstat.vcxproj @@ -0,0 +1,473 @@ + + + + + Debug + Itanium + + + Debug + Win32 + + + Debug + x64 + + + ReleaseWithoutAsm + Itanium + + + ReleaseWithoutAsm + Win32 + + + ReleaseWithoutAsm + x64 + + + Release + Itanium + + + Release + Win32 + + + Release + x64 + + + + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8} + + + + StaticLibrary + false + + + StaticLibrary + false + + + StaticLibrary + false + + + StaticLibrary + false + + + StaticLibrary + false + + + StaticLibrary + false + + + StaticLibrary + false + + + StaticLibrary + false + + + StaticLibrary + false + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + <_ProjectFileVersion>10.0.30128.1 + x86\ZlibStat$(Configuration)\ + x86\ZlibStat$(Configuration)\Tmp\ + x86\ZlibStat$(Configuration)\ + x86\ZlibStat$(Configuration)\Tmp\ + x86\ZlibStat$(Configuration)\ + x86\ZlibStat$(Configuration)\Tmp\ + x64\ZlibStat$(Configuration)\ + x64\ZlibStat$(Configuration)\Tmp\ + ia64\ZlibStat$(Configuration)\ + ia64\ZlibStat$(Configuration)\Tmp\ + x64\ZlibStat$(Configuration)\ + x64\ZlibStat$(Configuration)\Tmp\ + ia64\ZlibStat$(Configuration)\ + ia64\ZlibStat$(Configuration)\Tmp\ + x64\ZlibStat$(Configuration)\ + x64\ZlibStat$(Configuration)\Tmp\ + ia64\ZlibStat$(Configuration)\ + ia64\ZlibStat$(Configuration)\Tmp\ + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + + + + Disabled + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + + + MultiThreadedDebug + false + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + OldStyle + + + 0x040c + + + /MACHINE:X86 /NODEFAULTLIB %(AdditionalOptions) + $(OutDir)zlibstat.lib + true + + + cd ..\..\masmx86 +bld_ml32.bat + + + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ASMV;ASMINF;%(PreprocessorDefinitions) + true + + + MultiThreaded + false + true + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + + + 0x040c + + + /MACHINE:X86 /NODEFAULTLIB %(AdditionalOptions) + ..\..\masmx86\match686.obj;..\..\masmx86\inffas32.obj;%(AdditionalDependencies) + $(OutDir)zlibstat.lib + true + + + cd ..\..\masmx86 +bld_ml32.bat + + + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + true + + + MultiThreaded + false + true + 
$(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + + + 0x040c + + + /MACHINE:X86 /NODEFAULTLIB %(AdditionalOptions) + $(OutDir)zlibstat.lib + true + + + + + X64 + + + Disabled + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + + + MultiThreadedDebugDLL + false + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + OldStyle + + + 0x040c + + + /MACHINE:AMD64 /NODEFAULTLIB %(AdditionalOptions) + $(OutDir)zlibstat.lib + true + + + cd ..\..\masmx64 +bld_ml64.bat + + + + + Itanium + + + Disabled + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + + + MultiThreadedDebugDLL + false + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + OldStyle + + + 0x040c + + + /MACHINE:IA64 /NODEFAULTLIB %(AdditionalOptions) + $(OutDir)zlibstat.lib + true + + + + + X64 + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ASMV;ASMINF;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + + + 0x040c + + + /MACHINE:AMD64 /NODEFAULTLIB %(AdditionalOptions) + ..\..\masmx64\gvmat64.obj;..\..\masmx64\inffasx64.obj;%(AdditionalDependencies) + $(OutDir)zlibstat.lib + true + + + cd ..\..\masmx64 +bld_ml64.bat + + + + + Itanium + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + + + 0x040c + + + /MACHINE:IA64 /NODEFAULTLIB %(AdditionalOptions) + $(OutDir)zlibstat.lib + true + + + + + X64 + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + + + 0x040c + + + /MACHINE:AMD64 /NODEFAULTLIB %(AdditionalOptions) + $(OutDir)zlibstat.lib + true + + + + + Itanium + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + + + 0x040c + + + /MACHINE:IA64 /NODEFAULTLIB %(AdditionalOptions) + $(OutDir)zlibstat.lib + true + + + + + + + + + + + + + + true + true + true + true + true + true + + + + + + + + + + + + + + + + + + + + + ADDED compat/zlib/contrib/vstudio/vc10/zlibstat.vcxproj.filters Index: compat/zlib/contrib/vstudio/vc10/zlibstat.vcxproj.filters ================================================================== --- compat/zlib/contrib/vstudio/vc10/zlibstat.vcxproj.filters +++ compat/zlib/contrib/vstudio/vc10/zlibstat.vcxproj.filters @@ -0,0 +1,77 @@ + + + + + {174213f6-7f66-4ae8-a3a8-a1e0a1e6ffdd} + + + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + 
Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + + + Source Files + + + + + Source Files + + + ADDED compat/zlib/contrib/vstudio/vc10/zlibstat.vcxproj.user Index: compat/zlib/contrib/vstudio/vc10/zlibstat.vcxproj.user ================================================================== --- compat/zlib/contrib/vstudio/vc10/zlibstat.vcxproj.user +++ compat/zlib/contrib/vstudio/vc10/zlibstat.vcxproj.user @@ -0,0 +1,3 @@ + + + ADDED compat/zlib/contrib/vstudio/vc10/zlibvc.def Index: compat/zlib/contrib/vstudio/vc10/zlibvc.def ================================================================== --- compat/zlib/contrib/vstudio/vc10/zlibvc.def +++ compat/zlib/contrib/vstudio/vc10/zlibvc.def @@ -0,0 +1,143 @@ +LIBRARY +; zlib data compression and ZIP file I/O library + +VERSION 1.2.8 + +EXPORTS + adler32 @1 + compress @2 + crc32 @3 + deflate @4 + deflateCopy @5 + deflateEnd @6 + deflateInit2_ @7 + deflateInit_ @8 + deflateParams @9 + deflateReset @10 + deflateSetDictionary @11 + gzclose @12 + gzdopen @13 + gzerror @14 + gzflush @15 + gzopen @16 + gzread @17 + gzwrite @18 + inflate @19 + inflateEnd @20 + inflateInit2_ @21 + inflateInit_ @22 + inflateReset @23 + inflateSetDictionary @24 + inflateSync @25 + uncompress @26 + zlibVersion @27 + gzprintf @28 + gzputc @29 + gzgetc @30 + gzseek @31 + gzrewind @32 + gztell @33 + gzeof @34 + gzsetparams @35 + zError @36 + inflateSyncPoint @37 + get_crc_table @38 + compress2 @39 + gzputs @40 + gzgets @41 + inflateCopy @42 + inflateBackInit_ @43 + inflateBack @44 + inflateBackEnd @45 + compressBound @46 + deflateBound @47 + gzclearerr @48 + gzungetc @49 + zlibCompileFlags @50 + deflatePrime @51 + deflatePending @52 + + unzOpen @61 + unzClose @62 + unzGetGlobalInfo @63 + unzGetCurrentFileInfo @64 + unzGoToFirstFile @65 + unzGoToNextFile @66 + unzOpenCurrentFile @67 + unzReadCurrentFile @68 + unzOpenCurrentFile3 @69 + unztell @70 + unzeof @71 + unzCloseCurrentFile @72 + unzGetGlobalComment @73 + unzStringFileNameCompare @74 + unzLocateFile @75 + unzGetLocalExtrafield @76 + unzOpen2 @77 + unzOpenCurrentFile2 @78 + unzOpenCurrentFilePassword @79 + + zipOpen @80 + zipOpenNewFileInZip @81 + zipWriteInFileInZip @82 + zipCloseFileInZip @83 + zipClose @84 + zipOpenNewFileInZip2 @86 + zipCloseFileInZipRaw @87 + zipOpen2 @88 + zipOpenNewFileInZip3 @89 + + unzGetFilePos @100 + unzGoToFilePos @101 + + fill_win32_filefunc @110 + +; zlibwapi v1.2.4 added: + fill_win32_filefunc64 @111 + fill_win32_filefunc64A @112 + fill_win32_filefunc64W @113 + + unzOpen64 @120 + unzOpen2_64 @121 + unzGetGlobalInfo64 @122 + unzGetCurrentFileInfo64 @124 + unzGetCurrentFileZStreamPos64 @125 + unztell64 @126 + unzGetFilePos64 @127 + unzGoToFilePos64 @128 + + zipOpen64 @130 + zipOpen2_64 @131 + zipOpenNewFileInZip64 @132 + zipOpenNewFileInZip2_64 @133 + zipOpenNewFileInZip3_64 @134 + zipOpenNewFileInZip4_64 @135 + zipCloseFileInZipRaw64 @136 + +; zlib1 v1.2.4 added: + adler32_combine @140 + crc32_combine @142 + deflateSetHeader @144 + deflateTune @145 + gzbuffer @146 + gzclose_r @147 + gzclose_w @148 + gzdirect @149 + gzoffset @150 + inflateGetHeader @156 + inflateMark @157 + inflatePrime @158 + inflateReset2 @159 + inflateUndermine @160 + +; zlib1 v1.2.6 added: + gzgetc_ @161 + inflateResetKeep @163 + deflateResetKeep @164 + +; zlib1 v1.2.7 added: + gzopen_w @165 + 
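+; (the ordinal numbers are assigned explicitly so that applications that
+;  link to zlibwapi.dll by ordinal stay binary compatible as new exports
+;  are appended)
+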
+; zlib1 v1.2.8 added: + inflateGetDictionary @166 + gzvprintf @167 ADDED compat/zlib/contrib/vstudio/vc10/zlibvc.sln Index: compat/zlib/contrib/vstudio/vc10/zlibvc.sln ================================================================== --- compat/zlib/contrib/vstudio/vc10/zlibvc.sln +++ compat/zlib/contrib/vstudio/vc10/zlibvc.sln @@ -0,0 +1,135 @@ + +Microsoft Visual Studio Solution File, Format Version 11.00 +# Visual Studio 2010 +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "zlibvc", "zlibvc.vcxproj", "{8FD826F8-3739-44E6-8CC8-997122E53B8D}" +EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "zlibstat", "zlibstat.vcxproj", "{745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}" +EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "testzlib", "testzlib.vcxproj", "{AA6666AA-E09F-4135-9C0C-4FE50C3C654B}" +EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "testzlibdll", "testzlibdll.vcxproj", "{C52F9E7B-498A-42BE-8DB4-85A15694366A}" +EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "minizip", "minizip.vcxproj", "{48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}" +EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "miniunz", "miniunz.vcxproj", "{C52F9E7B-498A-42BE-8DB4-85A15694382A}" +EndProject +Global + GlobalSection(SolutionConfigurationPlatforms) = preSolution + Debug|Itanium = Debug|Itanium + Debug|Win32 = Debug|Win32 + Debug|x64 = Debug|x64 + Release|Itanium = Release|Itanium + Release|Win32 = Release|Win32 + Release|x64 = Release|x64 + ReleaseWithoutAsm|Itanium = ReleaseWithoutAsm|Itanium + ReleaseWithoutAsm|Win32 = ReleaseWithoutAsm|Win32 + ReleaseWithoutAsm|x64 = ReleaseWithoutAsm|x64 + EndGlobalSection + GlobalSection(ProjectConfigurationPlatforms) = postSolution + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|Itanium.ActiveCfg = Debug|Itanium + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|Itanium.Build.0 = Debug|Itanium + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|Win32.ActiveCfg = Debug|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|Win32.Build.0 = Debug|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|x64.ActiveCfg = Debug|x64 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|x64.Build.0 = Debug|x64 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|Itanium.ActiveCfg = Release|Itanium + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|Itanium.Build.0 = Release|Itanium + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|Win32.ActiveCfg = Release|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|Win32.Build.0 = Release|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|x64.ActiveCfg = Release|x64 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|x64.Build.0 = Release|x64 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|Itanium.ActiveCfg = ReleaseWithoutAsm|Itanium + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|Itanium.Build.0 = ReleaseWithoutAsm|Itanium + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|Win32.ActiveCfg = ReleaseWithoutAsm|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|Win32.Build.0 = ReleaseWithoutAsm|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|x64.ActiveCfg = ReleaseWithoutAsm|x64 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|x64.Build.0 = ReleaseWithoutAsm|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|Itanium.ActiveCfg = Debug|Itanium + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|Itanium.Build.0 = Debug|Itanium + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|Win32.ActiveCfg = Debug|Win32 + 
{745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|Win32.Build.0 = Debug|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|x64.ActiveCfg = Debug|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|x64.Build.0 = Debug|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|Itanium.ActiveCfg = Release|Itanium + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|Itanium.Build.0 = Release|Itanium + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|Win32.ActiveCfg = Release|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|Win32.Build.0 = Release|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|x64.ActiveCfg = Release|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|x64.Build.0 = Release|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|Itanium.ActiveCfg = ReleaseWithoutAsm|Itanium + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|Itanium.Build.0 = ReleaseWithoutAsm|Itanium + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|Win32.ActiveCfg = ReleaseWithoutAsm|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|Win32.Build.0 = ReleaseWithoutAsm|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|x64.ActiveCfg = ReleaseWithoutAsm|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|x64.Build.0 = ReleaseWithoutAsm|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|Itanium.ActiveCfg = Debug|Itanium + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|Itanium.Build.0 = Debug|Itanium + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|Win32.ActiveCfg = Debug|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|Win32.Build.0 = Debug|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|x64.ActiveCfg = Debug|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|x64.Build.0 = Debug|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|Itanium.ActiveCfg = Release|Itanium + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|Itanium.Build.0 = Release|Itanium + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|Win32.ActiveCfg = Release|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|Win32.Build.0 = Release|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|x64.ActiveCfg = Release|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|x64.Build.0 = Release|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Itanium.ActiveCfg = ReleaseWithoutAsm|Itanium + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Itanium.Build.0 = ReleaseWithoutAsm|Itanium + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Win32.ActiveCfg = ReleaseWithoutAsm|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Win32.Build.0 = ReleaseWithoutAsm|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|x64.ActiveCfg = ReleaseWithoutAsm|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|x64.Build.0 = ReleaseWithoutAsm|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|Itanium.ActiveCfg = Debug|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|Itanium.Build.0 = Debug|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|Win32.ActiveCfg = Debug|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|Win32.Build.0 = Debug|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|x64.ActiveCfg = Debug|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|x64.Build.0 = Debug|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|Itanium.ActiveCfg = Release|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|Itanium.Build.0 = Release|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|Win32.ActiveCfg = Release|Win32 + 
{C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|Win32.Build.0 = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|x64.ActiveCfg = Release|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|x64.Build.0 = Release|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.ReleaseWithoutAsm|Itanium.ActiveCfg = Release|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.ReleaseWithoutAsm|Itanium.Build.0 = Release|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.ReleaseWithoutAsm|Win32.ActiveCfg = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.ReleaseWithoutAsm|x64.ActiveCfg = Release|x64 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|Itanium.ActiveCfg = Debug|Itanium + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|Itanium.Build.0 = Debug|Itanium + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|Win32.ActiveCfg = Debug|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|Win32.Build.0 = Debug|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|x64.ActiveCfg = Debug|x64 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|x64.Build.0 = Debug|x64 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|Itanium.ActiveCfg = Release|Itanium + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|Itanium.Build.0 = Release|Itanium + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|Win32.ActiveCfg = Release|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|Win32.Build.0 = Release|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|x64.ActiveCfg = Release|x64 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|x64.Build.0 = Release|x64 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Itanium.ActiveCfg = Release|Itanium + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Itanium.Build.0 = Release|Itanium + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Win32.ActiveCfg = Release|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|x64.ActiveCfg = Release|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|Itanium.ActiveCfg = Debug|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|Itanium.Build.0 = Debug|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|Win32.ActiveCfg = Debug|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|Win32.Build.0 = Debug|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|x64.ActiveCfg = Debug|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|x64.Build.0 = Debug|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|Itanium.ActiveCfg = Release|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|Itanium.Build.0 = Release|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|Win32.ActiveCfg = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|Win32.Build.0 = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|x64.ActiveCfg = Release|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|x64.Build.0 = Release|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.ReleaseWithoutAsm|Itanium.ActiveCfg = Release|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.ReleaseWithoutAsm|Itanium.Build.0 = Release|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.ReleaseWithoutAsm|Win32.ActiveCfg = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.ReleaseWithoutAsm|x64.ActiveCfg = Release|x64 + EndGlobalSection + GlobalSection(SolutionProperties) = preSolution + HideSolutionNode = FALSE + EndGlobalSection +EndGlobal ADDED compat/zlib/contrib/vstudio/vc10/zlibvc.vcxproj Index: compat/zlib/contrib/vstudio/vc10/zlibvc.vcxproj ================================================================== --- compat/zlib/contrib/vstudio/vc10/zlibvc.vcxproj 
+++ compat/zlib/contrib/vstudio/vc10/zlibvc.vcxproj @@ -0,0 +1,657 @@ + + + + + Debug + Itanium + + + Debug + Win32 + + + Debug + x64 + + + ReleaseWithoutAsm + Itanium + + + ReleaseWithoutAsm + Win32 + + + ReleaseWithoutAsm + x64 + + + Release + Itanium + + + Release + Win32 + + + Release + x64 + + + + {8FD826F8-3739-44E6-8CC8-997122E53B8D} + + + + DynamicLibrary + false + true + + + DynamicLibrary + false + true + + + DynamicLibrary + false + + + DynamicLibrary + false + true + + + DynamicLibrary + false + true + + + DynamicLibrary + false + + + DynamicLibrary + false + true + + + DynamicLibrary + false + true + + + DynamicLibrary + false + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + <_ProjectFileVersion>10.0.30128.1 + x86\ZlibDll$(Configuration)\ + x86\ZlibDll$(Configuration)\Tmp\ + true + false + x86\ZlibDll$(Configuration)\ + x86\ZlibDll$(Configuration)\Tmp\ + false + false + x86\ZlibDll$(Configuration)\ + x86\ZlibDll$(Configuration)\Tmp\ + false + false + x64\ZlibDll$(Configuration)\ + x64\ZlibDll$(Configuration)\Tmp\ + true + false + ia64\ZlibDll$(Configuration)\ + ia64\ZlibDll$(Configuration)\Tmp\ + true + false + x64\ZlibDll$(Configuration)\ + x64\ZlibDll$(Configuration)\Tmp\ + false + false + ia64\ZlibDll$(Configuration)\ + ia64\ZlibDll$(Configuration)\Tmp\ + false + false + x64\ZlibDll$(Configuration)\ + x64\ZlibDll$(Configuration)\Tmp\ + false + false + ia64\ZlibDll$(Configuration)\ + ia64\ZlibDll$(Configuration)\Tmp\ + false + false + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + zlibwapid + zlibwapi + zlibwapi + zlibwapid + zlibwapi + zlibwapi + + + + _DEBUG;%(PreprocessorDefinitions) + true + true + Win32 + $(OutDir)zlibvc.tlb + + + Disabled + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;ASMV;ASMINF;%(PreprocessorDefinitions) + + + MultiThreadedDebug + false + $(IntDir)zlibvc.pch + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + EditAndContinue + + + _DEBUG;%(PreprocessorDefinitions) + 0x040c + + + /MACHINE:I386 %(AdditionalOptions) + ..\..\masmx86\match686.obj;..\..\masmx86\inffas32.obj;%(AdditionalDependencies) + true + .\zlibvc.def + true + true + Windows + false + + + + + cd ..\..\masmx86 +bld_ml32.bat + + + + + NDEBUG;%(PreprocessorDefinitions) + true + true + Win32 + $(OutDir)zlibvc.tlb + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibvc.pch + All + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + + + NDEBUG;%(PreprocessorDefinitions) + 0x040c + + + /MACHINE:I386 %(AdditionalOptions) + true + false + .\zlibvc.def + true + Windows + false + + + + + + + NDEBUG;%(PreprocessorDefinitions) + true + true + Win32 + $(OutDir)zlibvc.tlb + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;ASMV;ASMINF;%(PreprocessorDefinitions) + true + + + MultiThreaded + false + true + $(IntDir)zlibvc.pch + All + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + + + NDEBUG;%(PreprocessorDefinitions) + 0x040c + + + /MACHINE:I386 %(AdditionalOptions) + 
..\..\masmx86\match686.obj;..\..\masmx86\inffas32.obj;%(AdditionalDependencies) + true + false + .\zlibvc.def + true + Windows + false + + + + + cd ..\..\masmx86 +bld_ml32.bat + + + + + _DEBUG;%(PreprocessorDefinitions) + true + true + X64 + $(OutDir)zlibvc.tlb + + + Disabled + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;ASMV;ASMINF;WIN64;%(PreprocessorDefinitions) + + + MultiThreadedDebugDLL + false + $(IntDir)zlibvc.pch + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + ProgramDatabase + + + _DEBUG;%(PreprocessorDefinitions) + 0x040c + + + ..\..\masmx64\gvmat64.obj;..\..\masmx64\inffasx64.obj;%(AdditionalDependencies) + true + .\zlibvc.def + true + true + Windows + MachineX64 + + + cd ..\..\masmx64 +bld_ml64.bat + + + + + _DEBUG;%(PreprocessorDefinitions) + true + true + Itanium + $(OutDir)zlibvc.tlb + + + Disabled + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;WIN64;%(PreprocessorDefinitions) + + + MultiThreadedDebugDLL + false + $(IntDir)zlibvc.pch + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + ProgramDatabase + + + _DEBUG;%(PreprocessorDefinitions) + 0x040c + + + $(OutDir)zlibwapi.dll + true + .\zlibvc.def + true + $(OutDir)zlibwapi.pdb + true + $(OutDir)zlibwapi.map + Windows + $(OutDir)zlibwapi.lib + MachineIA64 + + + + + NDEBUG;%(PreprocessorDefinitions) + true + true + X64 + $(OutDir)zlibvc.tlb + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibvc.pch + All + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + + + NDEBUG;%(PreprocessorDefinitions) + 0x040c + + + true + false + .\zlibvc.def + true + Windows + MachineX64 + + + + + NDEBUG;%(PreprocessorDefinitions) + true + true + Itanium + $(OutDir)zlibvc.tlb + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibvc.pch + All + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + + + NDEBUG;%(PreprocessorDefinitions) + 0x040c + + + $(OutDir)zlibwapi.dll + true + false + .\zlibvc.def + $(OutDir)zlibwapi.pdb + true + $(OutDir)zlibwapi.map + Windows + $(OutDir)zlibwapi.lib + MachineIA64 + + + + + NDEBUG;%(PreprocessorDefinitions) + true + true + X64 + $(OutDir)zlibvc.tlb + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;ASMV;ASMINF;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibvc.pch + All + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + + + NDEBUG;%(PreprocessorDefinitions) + 0x040c + + + ..\..\masmx64\gvmat64.obj;..\..\masmx64\inffasx64.obj;%(AdditionalDependencies) + true + false + .\zlibvc.def + true + Windows + MachineX64 + + + cd ..\..\masmx64 +bld_ml64.bat + + + + + NDEBUG;%(PreprocessorDefinitions) + true + true + Itanium + $(OutDir)zlibvc.tlb + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + 
_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibvc.pch + All + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + + + NDEBUG;%(PreprocessorDefinitions) + 0x040c + + + $(OutDir)zlibwapi.dll + true + false + .\zlibvc.def + $(OutDir)zlibwapi.pdb + true + $(OutDir)zlibwapi.map + Windows + $(OutDir)zlibwapi.lib + MachineIA64 + + + + + + + + + + + + + + true + true + true + true + true + true + + + + + + + + + + %(AdditionalIncludeDirectories) + ZLIB_INTERNAL;%(PreprocessorDefinitions) + %(AdditionalIncludeDirectories) + ZLIB_INTERNAL;%(PreprocessorDefinitions) + %(AdditionalIncludeDirectories) + ZLIB_INTERNAL;%(PreprocessorDefinitions) + + + %(AdditionalIncludeDirectories) + ZLIB_INTERNAL;%(PreprocessorDefinitions) + %(AdditionalIncludeDirectories) + ZLIB_INTERNAL;%(PreprocessorDefinitions) + %(AdditionalIncludeDirectories) + ZLIB_INTERNAL;%(PreprocessorDefinitions) + + + + + + + + + + + + + + + + + + + + + + + + ADDED compat/zlib/contrib/vstudio/vc10/zlibvc.vcxproj.filters Index: compat/zlib/contrib/vstudio/vc10/zlibvc.vcxproj.filters ================================================================== --- compat/zlib/contrib/vstudio/vc10/zlibvc.vcxproj.filters +++ compat/zlib/contrib/vstudio/vc10/zlibvc.vcxproj.filters @@ -0,0 +1,118 @@ + + + + + {07934a85-8b61-443d-a0ee-b2eedb74f3cd} + cpp;c;cxx;rc;def;r;odl;hpj;bat;for;f90 + + + {1d99675b-433d-4a21-9e50-ed4ab8b19762} + h;hpp;hxx;hm;inl;fi;fd + + + {431c0958-fa71-44d0-9084-2d19d100c0cc} + ico;cur;bmp;dlg;rc2;rct;bin;cnt;rtf;gif;jpg;jpeg;jpe + + + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + Source Files + + + + + Source Files + + + + + Source Files + + + + + Header Files + + + Header Files + + + Header Files + + + Header Files + + + Header Files + + + Header Files + + + Header Files + + + Header Files + + + Header Files + + + ADDED compat/zlib/contrib/vstudio/vc10/zlibvc.vcxproj.user Index: compat/zlib/contrib/vstudio/vc10/zlibvc.vcxproj.user ================================================================== --- compat/zlib/contrib/vstudio/vc10/zlibvc.vcxproj.user +++ compat/zlib/contrib/vstudio/vc10/zlibvc.vcxproj.user @@ -0,0 +1,3 @@ + + + ADDED compat/zlib/contrib/vstudio/vc11/miniunz.vcxproj Index: compat/zlib/contrib/vstudio/vc11/miniunz.vcxproj ================================================================== --- compat/zlib/contrib/vstudio/vc11/miniunz.vcxproj +++ compat/zlib/contrib/vstudio/vc11/miniunz.vcxproj @@ -0,0 +1,314 @@ + + + + + Debug + Itanium + + + Debug + Win32 + + + Debug + x64 + + + Release + Itanium + + + Release + Win32 + + + Release + x64 + + + + {C52F9E7B-498A-42BE-8DB4-85A15694382A} + Win32Proj + + + + Application + MultiByte + v110 + + + Application + Unicode + v110 + + + Application + MultiByte + + + Application + MultiByte + + + Application + MultiByte + v110 + + + Application + MultiByte + v110 + + + + + + + + + + + + + + + + + + + + + + + + + <_ProjectFileVersion>10.0.30128.1 + x86\MiniUnzip$(Configuration)\ + x86\MiniUnzip$(Configuration)\Tmp\ + true + false + x86\MiniUnzip$(Configuration)\ + x86\MiniUnzip$(Configuration)\Tmp\ + false + false + 
x64\MiniUnzip$(Configuration)\ + x64\MiniUnzip$(Configuration)\Tmp\ + true + false + ia64\MiniUnzip$(Configuration)\ + ia64\MiniUnzip$(Configuration)\Tmp\ + true + false + x64\MiniUnzip$(Configuration)\ + x64\MiniUnzip$(Configuration)\Tmp\ + false + false + ia64\MiniUnzip$(Configuration)\ + ia64\MiniUnzip$(Configuration)\Tmp\ + false + false + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + $(IntDir) + Level3 + ProgramDatabase + + + x86\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)miniunz.exe + true + $(OutDir)miniunz.pdb + Console + false + + + MachineX86 + + + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) + true + Default + MultiThreaded + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + x86\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)miniunz.exe + true + Console + true + true + false + + + MachineX86 + + + + + X64 + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + $(IntDir) + Level3 + ProgramDatabase + + + x64\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)miniunz.exe + true + $(OutDir)miniunz.pdb + Console + MachineX64 + + + + + Itanium + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + $(IntDir) + Level3 + ProgramDatabase + + + ia64\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)miniunz.exe + true + $(OutDir)miniunz.pdb + Console + MachineIA64 + + + + + X64 + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + x64\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)miniunz.exe + true + Console + true + true + MachineX64 + + + + + Itanium + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + ia64\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)miniunz.exe + true + Console + true + true + MachineIA64 + + + + + + + + {8fd826f8-3739-44e6-8cc8-997122e53b8d} + + + + + + ADDED compat/zlib/contrib/vstudio/vc11/minizip.vcxproj Index: compat/zlib/contrib/vstudio/vc11/minizip.vcxproj ================================================================== --- compat/zlib/contrib/vstudio/vc11/minizip.vcxproj +++ compat/zlib/contrib/vstudio/vc11/minizip.vcxproj @@ -0,0 +1,311 @@ + + + + + Debug + Itanium + + + Debug + 
Win32 + + + Debug + x64 + + + Release + Itanium + + + Release + Win32 + + + Release + x64 + + + + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B} + Win32Proj + + + + Application + MultiByte + v110 + + + Application + Unicode + v110 + + + Application + MultiByte + + + Application + MultiByte + + + Application + MultiByte + v110 + + + Application + MultiByte + v110 + + + + + + + + + + + + + + + + + + + + + + + + + <_ProjectFileVersion>10.0.30128.1 + x86\MiniZip$(Configuration)\ + x86\MiniZip$(Configuration)\Tmp\ + true + false + x86\MiniZip$(Configuration)\ + x86\MiniZip$(Configuration)\Tmp\ + false + x64\$(Configuration)\ + x64\$(Configuration)\ + true + false + ia64\$(Configuration)\ + ia64\$(Configuration)\ + true + false + x64\$(Configuration)\ + x64\$(Configuration)\ + false + ia64\$(Configuration)\ + ia64\$(Configuration)\ + false + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + $(IntDir) + Level3 + ProgramDatabase + + + x86\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)minizip.exe + true + $(OutDir)minizip.pdb + Console + false + + + MachineX86 + + + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) + true + Default + MultiThreaded + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + x86\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)minizip.exe + true + Console + true + true + false + + + MachineX86 + + + + + X64 + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + $(IntDir) + Level3 + ProgramDatabase + + + x64\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)minizip.exe + true + $(OutDir)minizip.pdb + Console + MachineX64 + + + + + Itanium + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + $(IntDir) + Level3 + ProgramDatabase + + + ia64\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)minizip.exe + true + $(OutDir)minizip.pdb + Console + MachineIA64 + + + + + X64 + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + x64\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)minizip.exe + true + Console + true + true + MachineX64 + + + + + Itanium + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + 
ia64\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)minizip.exe + true + Console + true + true + MachineIA64 + + + + + + + + {8fd826f8-3739-44e6-8cc8-997122e53b8d} + + + + + + ADDED compat/zlib/contrib/vstudio/vc11/testzlib.vcxproj Index: compat/zlib/contrib/vstudio/vc11/testzlib.vcxproj ================================================================== --- compat/zlib/contrib/vstudio/vc11/testzlib.vcxproj +++ compat/zlib/contrib/vstudio/vc11/testzlib.vcxproj @@ -0,0 +1,426 @@ + + + + + Debug + Itanium + + + Debug + Win32 + + + Debug + x64 + + + ReleaseWithoutAsm + Itanium + + + ReleaseWithoutAsm + Win32 + + + ReleaseWithoutAsm + x64 + + + Release + Itanium + + + Release + Win32 + + + Release + x64 + + + + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B} + testzlib + Win32Proj + + + + Application + MultiByte + true + v110 + + + Application + MultiByte + true + v110 + + + Application + Unicode + v110 + + + Application + MultiByte + true + + + Application + MultiByte + true + + + Application + MultiByte + + + Application + true + v110 + + + Application + true + v110 + + + Application + v110 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + <_ProjectFileVersion>10.0.30128.1 + x86\TestZlib$(Configuration)\ + x86\TestZlib$(Configuration)\Tmp\ + true + false + x86\TestZlib$(Configuration)\ + x86\TestZlib$(Configuration)\Tmp\ + false + false + x86\TestZlib$(Configuration)\ + x86\TestZlib$(Configuration)\Tmp\ + false + false + x64\TestZlib$(Configuration)\ + x64\TestZlib$(Configuration)\Tmp\ + false + ia64\TestZlib$(Configuration)\ + ia64\TestZlib$(Configuration)\Tmp\ + true + false + x64\TestZlib$(Configuration)\ + x64\TestZlib$(Configuration)\Tmp\ + false + ia64\TestZlib$(Configuration)\ + ia64\TestZlib$(Configuration)\Tmp\ + false + false + x64\TestZlib$(Configuration)\ + x64\TestZlib$(Configuration)\Tmp\ + false + ia64\TestZlib$(Configuration)\ + ia64\TestZlib$(Configuration)\Tmp\ + false + false + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + + + + Disabled + ..\..\..;%(AdditionalIncludeDirectories) + ASMV;ASMINF;WIN32;ZLIB_WINAPI;_DEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + AssemblyAndSourceCode + $(IntDir) + Level3 + ProgramDatabase + + + ..\..\masmx86\match686.obj;..\..\masmx86\inffas32.obj;%(AdditionalDependencies) + $(OutDir)testzlib.exe + true + $(OutDir)testzlib.pdb + Console + false + + + MachineX86 + + + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;%(AdditionalIncludeDirectories) + WIN32;ZLIB_WINAPI;NDEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + true + Default + MultiThreaded + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + $(OutDir)testzlib.exe + true + Console + true + true + false + + + MachineX86 + + + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;%(AdditionalIncludeDirectories) + ASMV;ASMINF;WIN32;ZLIB_WINAPI;NDEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + true + Default + MultiThreaded + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + ..\..\masmx86\match686.obj;..\..\masmx86\inffas32.obj;%(AdditionalDependencies) + $(OutDir)testzlib.exe + true + Console + true + true + false + + + 
MachineX86 + + + + + ..\..\..;%(AdditionalIncludeDirectories) + ASMV;ASMINF;WIN32;ZLIB_WINAPI;_DEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + Default + MultiThreadedDebugDLL + false + $(IntDir) + + + ..\..\masmx64\gvmat64.obj;..\..\masmx64\inffasx64.obj;%(AdditionalDependencies) + + + + + Itanium + + + Disabled + ..\..\..;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;_DEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + AssemblyAndSourceCode + $(IntDir) + Level3 + ProgramDatabase + + + $(OutDir)testzlib.exe + true + $(OutDir)testzlib.pdb + Console + MachineIA64 + + + + + ..\..\..;%(AdditionalIncludeDirectories) + WIN32;ZLIB_WINAPI;NDEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + Default + MultiThreadedDLL + false + $(IntDir) + + + %(AdditionalDependencies) + + + + + Itanium + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;NDEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + $(OutDir)testzlib.exe + true + Console + true + true + MachineIA64 + + + + + ..\..\..;%(AdditionalIncludeDirectories) + ASMV;ASMINF;WIN32;ZLIB_WINAPI;NDEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + Default + MultiThreadedDLL + false + $(IntDir) + + + ..\..\masmx64\gvmat64.obj;..\..\masmx64\inffasx64.obj;%(AdditionalDependencies) + + + + + Itanium + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;NDEBUG;_CONSOLE;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + $(OutDir)testzlib.exe + true + Console + true + true + MachineIA64 + + + + + + + + + + true + true + true + true + true + true + + + + + + + + + + + + + ADDED compat/zlib/contrib/vstudio/vc11/testzlibdll.vcxproj Index: compat/zlib/contrib/vstudio/vc11/testzlibdll.vcxproj ================================================================== --- compat/zlib/contrib/vstudio/vc11/testzlibdll.vcxproj +++ compat/zlib/contrib/vstudio/vc11/testzlibdll.vcxproj @@ -0,0 +1,314 @@ + + + + + Debug + Itanium + + + Debug + Win32 + + + Debug + x64 + + + Release + Itanium + + + Release + Win32 + + + Release + x64 + + + + {C52F9E7B-498A-42BE-8DB4-85A15694366A} + Win32Proj + + + + Application + MultiByte + v110 + + + Application + Unicode + v110 + + + Application + MultiByte + + + Application + MultiByte + + + Application + MultiByte + v110 + + + Application + MultiByte + v110 + + + + + + + + + + + + + + + + + + + + + + + + + <_ProjectFileVersion>10.0.30128.1 + x86\TestZlibDll$(Configuration)\ + x86\TestZlibDll$(Configuration)\Tmp\ + true + false + x86\TestZlibDll$(Configuration)\ + x86\TestZlibDll$(Configuration)\Tmp\ + false + false + x64\TestZlibDll$(Configuration)\ + x64\TestZlibDll$(Configuration)\Tmp\ + true + false + ia64\TestZlibDll$(Configuration)\ + ia64\TestZlibDll$(Configuration)\Tmp\ + true + false + x64\TestZlibDll$(Configuration)\ + x64\TestZlibDll$(Configuration)\Tmp\ + false + false + 
ia64\TestZlibDll$(Configuration)\ + ia64\TestZlibDll$(Configuration)\Tmp\ + false + false + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + $(IntDir) + Level3 + ProgramDatabase + + + x86\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)testzlibdll.exe + true + $(OutDir)testzlib.pdb + Console + false + + + MachineX86 + + + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;%(PreprocessorDefinitions) + true + Default + MultiThreaded + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + x86\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)testzlibdll.exe + true + Console + true + true + false + + + MachineX86 + + + + + X64 + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + $(IntDir) + Level3 + ProgramDatabase + + + x64\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)testzlibdll.exe + true + $(OutDir)testzlib.pdb + Console + MachineX64 + + + + + Itanium + + + Disabled + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;_DEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDebugDLL + false + + + $(IntDir) + Level3 + ProgramDatabase + + + ia64\ZlibDllDebug\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)testzlibdll.exe + true + $(OutDir)testzlib.pdb + Console + MachineIA64 + + + + + X64 + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + x64\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)testzlibdll.exe + true + Console + true + true + MachineX64 + + + + + Itanium + + + MaxSpeed + OnlyExplicitInline + true + ..\..\..;..\..\minizip;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;ZLIB_WINAPI;NDEBUG;_CONSOLE;WIN64;%(PreprocessorDefinitions) + true + Default + MultiThreadedDLL + false + true + + + $(IntDir) + Level3 + ProgramDatabase + + + ia64\ZlibDllRelease\zlibwapi.lib;%(AdditionalDependencies) + $(OutDir)testzlibdll.exe + true + Console + true + true + MachineIA64 + + + + + + + + {8fd826f8-3739-44e6-8cc8-997122e53b8d} + + + + + + ADDED compat/zlib/contrib/vstudio/vc11/zlib.rc Index: compat/zlib/contrib/vstudio/vc11/zlib.rc ================================================================== --- compat/zlib/contrib/vstudio/vc11/zlib.rc +++ compat/zlib/contrib/vstudio/vc11/zlib.rc @@ -0,0 +1,32 @@ +#include + +#define IDR_VERSION1 1 +IDR_VERSION1 VERSIONINFO MOVEABLE IMPURE LOADONCALL DISCARDABLE + FILEVERSION 1,2,8,0 + PRODUCTVERSION 1,2,8,0 + FILEFLAGSMASK VS_FFI_FILEFLAGSMASK + FILEFLAGS 0 + FILEOS VOS_DOS_WINDOWS32 + FILETYPE VFT_DLL + FILESUBTYPE 0 // not used +BEGIN + BLOCK 
"StringFileInfo" + BEGIN + BLOCK "040904E4" + //language ID = U.S. English, char set = Windows, Multilingual + + BEGIN + VALUE "FileDescription", "zlib data compression and ZIP file I/O library\0" + VALUE "FileVersion", "1.2.8\0" + VALUE "InternalName", "zlib\0" + VALUE "OriginalFilename", "zlibwapi.dll\0" + VALUE "ProductName", "ZLib.DLL\0" + VALUE "Comments","DLL support by Alessandro Iacopetti & Gilles Vollant\0" + VALUE "LegalCopyright", "(C) 1995-2013 Jean-loup Gailly & Mark Adler\0" + END + END + BLOCK "VarFileInfo" + BEGIN + VALUE "Translation", 0x0409, 1252 + END +END ADDED compat/zlib/contrib/vstudio/vc11/zlibstat.vcxproj Index: compat/zlib/contrib/vstudio/vc11/zlibstat.vcxproj ================================================================== --- compat/zlib/contrib/vstudio/vc11/zlibstat.vcxproj +++ compat/zlib/contrib/vstudio/vc11/zlibstat.vcxproj @@ -0,0 +1,464 @@ + + + + + Debug + Itanium + + + Debug + Win32 + + + Debug + x64 + + + ReleaseWithoutAsm + Itanium + + + ReleaseWithoutAsm + Win32 + + + ReleaseWithoutAsm + x64 + + + Release + Itanium + + + Release + Win32 + + + Release + x64 + + + + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8} + + + + StaticLibrary + false + v110 + + + StaticLibrary + false + v110 + + + StaticLibrary + false + v110 + Unicode + + + StaticLibrary + false + + + StaticLibrary + false + + + StaticLibrary + false + + + StaticLibrary + false + v110 + + + StaticLibrary + false + v110 + + + StaticLibrary + false + v110 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + <_ProjectFileVersion>10.0.30128.1 + x86\ZlibStat$(Configuration)\ + x86\ZlibStat$(Configuration)\Tmp\ + x86\ZlibStat$(Configuration)\ + x86\ZlibStat$(Configuration)\Tmp\ + x86\ZlibStat$(Configuration)\ + x86\ZlibStat$(Configuration)\Tmp\ + x64\ZlibStat$(Configuration)\ + x64\ZlibStat$(Configuration)\Tmp\ + ia64\ZlibStat$(Configuration)\ + ia64\ZlibStat$(Configuration)\Tmp\ + x64\ZlibStat$(Configuration)\ + x64\ZlibStat$(Configuration)\Tmp\ + ia64\ZlibStat$(Configuration)\ + ia64\ZlibStat$(Configuration)\Tmp\ + x64\ZlibStat$(Configuration)\ + x64\ZlibStat$(Configuration)\Tmp\ + ia64\ZlibStat$(Configuration)\ + ia64\ZlibStat$(Configuration)\Tmp\ + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + + + + Disabled + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + + + MultiThreadedDebugDLL + false + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + OldStyle + + + 0x040c + + + /MACHINE:X86 /NODEFAULTLIB %(AdditionalOptions) + $(OutDir)zlibstat.lib + true + + + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ASMV;ASMINF;%(PreprocessorDefinitions) + true + + + MultiThreaded + false + true + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + + + 0x040c + + + /MACHINE:X86 /NODEFAULTLIB %(AdditionalOptions) + ..\..\masmx86\match686.obj;..\..\masmx86\inffas32.obj;%(AdditionalDependencies) + $(OutDir)zlibstat.lib + true + + + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;%(PreprocessorDefinitions) + true + + + 
MultiThreaded + false + true + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + + + 0x040c + + + /MACHINE:X86 /NODEFAULTLIB %(AdditionalOptions) + $(OutDir)zlibstat.lib + true + + + + + X64 + + + Disabled + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + + + MultiThreadedDebugDLL + false + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + OldStyle + + + 0x040c + + + /MACHINE:AMD64 /NODEFAULTLIB %(AdditionalOptions) + $(OutDir)zlibstat.lib + true + + + + + Itanium + + + Disabled + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + + + MultiThreadedDebugDLL + false + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + OldStyle + + + 0x040c + + + /MACHINE:IA64 /NODEFAULTLIB %(AdditionalOptions) + $(OutDir)zlibstat.lib + true + + + + + X64 + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ASMV;ASMINF;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + + + 0x040c + + + /MACHINE:AMD64 /NODEFAULTLIB %(AdditionalOptions) + ..\..\masmx64\gvmat64.obj;..\..\masmx64\inffasx64.obj;%(AdditionalDependencies) + $(OutDir)zlibstat.lib + true + + + + + Itanium + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + + + 0x040c + + + /MACHINE:IA64 /NODEFAULTLIB %(AdditionalOptions) + $(OutDir)zlibstat.lib + true + + + + + X64 + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + + + 0x040c + + + /MACHINE:AMD64 /NODEFAULTLIB %(AdditionalOptions) + $(OutDir)zlibstat.lib + true + + + + + Itanium + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + ZLIB_WINAPI;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibstat.pch + $(IntDir) + $(IntDir) + $(OutDir) + Level3 + true + + + 0x040c + + + /MACHINE:IA64 /NODEFAULTLIB %(AdditionalOptions) + $(OutDir)zlibstat.lib + true + + + + + + + + + + + + + + true + true + true + true + true + true + + + + + + + + + + + + + + + + + + + + + ADDED compat/zlib/contrib/vstudio/vc11/zlibvc.def Index: compat/zlib/contrib/vstudio/vc11/zlibvc.def ================================================================== --- compat/zlib/contrib/vstudio/vc11/zlibvc.def +++ compat/zlib/contrib/vstudio/vc11/zlibvc.def @@ -0,0 +1,143 @@ +LIBRARY +; zlib data compression and ZIP file I/O library + +VERSION 1.2.8 + +EXPORTS + adler32 @1 + compress @2 + crc32 @3 + deflate @4 + deflateCopy @5 + deflateEnd @6 + deflateInit2_ @7 + deflateInit_ @8 + deflateParams @9 + 
deflateReset @10 + deflateSetDictionary @11 + gzclose @12 + gzdopen @13 + gzerror @14 + gzflush @15 + gzopen @16 + gzread @17 + gzwrite @18 + inflate @19 + inflateEnd @20 + inflateInit2_ @21 + inflateInit_ @22 + inflateReset @23 + inflateSetDictionary @24 + inflateSync @25 + uncompress @26 + zlibVersion @27 + gzprintf @28 + gzputc @29 + gzgetc @30 + gzseek @31 + gzrewind @32 + gztell @33 + gzeof @34 + gzsetparams @35 + zError @36 + inflateSyncPoint @37 + get_crc_table @38 + compress2 @39 + gzputs @40 + gzgets @41 + inflateCopy @42 + inflateBackInit_ @43 + inflateBack @44 + inflateBackEnd @45 + compressBound @46 + deflateBound @47 + gzclearerr @48 + gzungetc @49 + zlibCompileFlags @50 + deflatePrime @51 + deflatePending @52 + + unzOpen @61 + unzClose @62 + unzGetGlobalInfo @63 + unzGetCurrentFileInfo @64 + unzGoToFirstFile @65 + unzGoToNextFile @66 + unzOpenCurrentFile @67 + unzReadCurrentFile @68 + unzOpenCurrentFile3 @69 + unztell @70 + unzeof @71 + unzCloseCurrentFile @72 + unzGetGlobalComment @73 + unzStringFileNameCompare @74 + unzLocateFile @75 + unzGetLocalExtrafield @76 + unzOpen2 @77 + unzOpenCurrentFile2 @78 + unzOpenCurrentFilePassword @79 + + zipOpen @80 + zipOpenNewFileInZip @81 + zipWriteInFileInZip @82 + zipCloseFileInZip @83 + zipClose @84 + zipOpenNewFileInZip2 @86 + zipCloseFileInZipRaw @87 + zipOpen2 @88 + zipOpenNewFileInZip3 @89 + + unzGetFilePos @100 + unzGoToFilePos @101 + + fill_win32_filefunc @110 + +; zlibwapi v1.2.4 added: + fill_win32_filefunc64 @111 + fill_win32_filefunc64A @112 + fill_win32_filefunc64W @113 + + unzOpen64 @120 + unzOpen2_64 @121 + unzGetGlobalInfo64 @122 + unzGetCurrentFileInfo64 @124 + unzGetCurrentFileZStreamPos64 @125 + unztell64 @126 + unzGetFilePos64 @127 + unzGoToFilePos64 @128 + + zipOpen64 @130 + zipOpen2_64 @131 + zipOpenNewFileInZip64 @132 + zipOpenNewFileInZip2_64 @133 + zipOpenNewFileInZip3_64 @134 + zipOpenNewFileInZip4_64 @135 + zipCloseFileInZipRaw64 @136 + +; zlib1 v1.2.4 added: + adler32_combine @140 + crc32_combine @142 + deflateSetHeader @144 + deflateTune @145 + gzbuffer @146 + gzclose_r @147 + gzclose_w @148 + gzdirect @149 + gzoffset @150 + inflateGetHeader @156 + inflateMark @157 + inflatePrime @158 + inflateReset2 @159 + inflateUndermine @160 + +; zlib1 v1.2.6 added: + gzgetc_ @161 + inflateResetKeep @163 + deflateResetKeep @164 + +; zlib1 v1.2.7 added: + gzopen_w @165 + +; zlib1 v1.2.8 added: + inflateGetDictionary @166 + gzvprintf @167 ADDED compat/zlib/contrib/vstudio/vc11/zlibvc.sln Index: compat/zlib/contrib/vstudio/vc11/zlibvc.sln ================================================================== --- compat/zlib/contrib/vstudio/vc11/zlibvc.sln +++ compat/zlib/contrib/vstudio/vc11/zlibvc.sln @@ -0,0 +1,117 @@ + +Microsoft Visual Studio Solution File, Format Version 12.00 +# Visual Studio 2012 +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "zlibvc", "zlibvc.vcxproj", "{8FD826F8-3739-44E6-8CC8-997122E53B8D}" +EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "zlibstat", "zlibstat.vcxproj", "{745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}" +EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "testzlib", "testzlib.vcxproj", "{AA6666AA-E09F-4135-9C0C-4FE50C3C654B}" +EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "testzlibdll", "testzlibdll.vcxproj", "{C52F9E7B-498A-42BE-8DB4-85A15694366A}" +EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "minizip", "minizip.vcxproj", "{48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}" +EndProject 
+Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "miniunz", "miniunz.vcxproj", "{C52F9E7B-498A-42BE-8DB4-85A15694382A}" +EndProject +Global + GlobalSection(SolutionConfigurationPlatforms) = preSolution + Debug|Itanium = Debug|Itanium + Debug|Win32 = Debug|Win32 + Debug|x64 = Debug|x64 + Release|Itanium = Release|Itanium + Release|Win32 = Release|Win32 + Release|x64 = Release|x64 + ReleaseWithoutAsm|Itanium = ReleaseWithoutAsm|Itanium + ReleaseWithoutAsm|Win32 = ReleaseWithoutAsm|Win32 + ReleaseWithoutAsm|x64 = ReleaseWithoutAsm|x64 + EndGlobalSection + GlobalSection(ProjectConfigurationPlatforms) = postSolution + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|Itanium.ActiveCfg = Debug|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|Win32.ActiveCfg = Debug|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|Win32.Build.0 = Debug|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|x64.ActiveCfg = Debug|x64 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|x64.Build.0 = Debug|x64 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|Itanium.ActiveCfg = Release|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|Win32.ActiveCfg = Release|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|Win32.Build.0 = Release|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|x64.ActiveCfg = Release|x64 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|x64.Build.0 = Release|x64 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|Itanium.ActiveCfg = ReleaseWithoutAsm|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|Win32.ActiveCfg = ReleaseWithoutAsm|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|Win32.Build.0 = ReleaseWithoutAsm|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|x64.ActiveCfg = ReleaseWithoutAsm|x64 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|x64.Build.0 = ReleaseWithoutAsm|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|Itanium.ActiveCfg = Debug|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|Win32.ActiveCfg = Debug|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|Win32.Build.0 = Debug|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|x64.ActiveCfg = Debug|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|x64.Build.0 = Debug|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|Itanium.ActiveCfg = Release|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|Win32.ActiveCfg = Release|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|Win32.Build.0 = Release|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|x64.ActiveCfg = Release|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|x64.Build.0 = Release|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|Itanium.ActiveCfg = ReleaseWithoutAsm|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|Win32.ActiveCfg = ReleaseWithoutAsm|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|Win32.Build.0 = ReleaseWithoutAsm|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|x64.ActiveCfg = ReleaseWithoutAsm|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|x64.Build.0 = ReleaseWithoutAsm|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|Itanium.ActiveCfg = Debug|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|Win32.ActiveCfg = Debug|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|Win32.Build.0 = Debug|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|x64.ActiveCfg = Debug|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|x64.Build.0 = Debug|x64 + 
{AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|Itanium.ActiveCfg = Release|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|Win32.ActiveCfg = Release|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|Win32.Build.0 = Release|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|x64.ActiveCfg = Release|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|x64.Build.0 = Release|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Itanium.ActiveCfg = ReleaseWithoutAsm|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Win32.ActiveCfg = ReleaseWithoutAsm|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Win32.Build.0 = ReleaseWithoutAsm|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|x64.ActiveCfg = ReleaseWithoutAsm|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|x64.Build.0 = ReleaseWithoutAsm|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|Itanium.ActiveCfg = Debug|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|Win32.ActiveCfg = Debug|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|Win32.Build.0 = Debug|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|x64.ActiveCfg = Debug|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|x64.Build.0 = Debug|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|Itanium.ActiveCfg = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|Win32.ActiveCfg = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|Win32.Build.0 = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|x64.ActiveCfg = Release|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|x64.Build.0 = Release|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.ReleaseWithoutAsm|Itanium.ActiveCfg = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.ReleaseWithoutAsm|Win32.ActiveCfg = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.ReleaseWithoutAsm|x64.ActiveCfg = Release|x64 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|Itanium.ActiveCfg = Debug|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|Win32.ActiveCfg = Debug|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|Win32.Build.0 = Debug|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|x64.ActiveCfg = Debug|x64 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|x64.Build.0 = Debug|x64 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|Itanium.ActiveCfg = Release|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|Win32.ActiveCfg = Release|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|Win32.Build.0 = Release|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|x64.ActiveCfg = Release|x64 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|x64.Build.0 = Release|x64 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Itanium.ActiveCfg = Release|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Win32.ActiveCfg = Release|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|x64.ActiveCfg = Release|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|Itanium.ActiveCfg = Debug|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|Win32.ActiveCfg = Debug|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|Win32.Build.0 = Debug|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|x64.ActiveCfg = Debug|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|x64.Build.0 = Debug|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|Itanium.ActiveCfg = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|Win32.ActiveCfg = Release|Win32 + 
{C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|Win32.Build.0 = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|x64.ActiveCfg = Release|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|x64.Build.0 = Release|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.ReleaseWithoutAsm|Itanium.ActiveCfg = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.ReleaseWithoutAsm|Win32.ActiveCfg = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.ReleaseWithoutAsm|x64.ActiveCfg = Release|x64 + EndGlobalSection + GlobalSection(SolutionProperties) = preSolution + HideSolutionNode = FALSE + EndGlobalSection +EndGlobal ADDED compat/zlib/contrib/vstudio/vc11/zlibvc.vcxproj Index: compat/zlib/contrib/vstudio/vc11/zlibvc.vcxproj ================================================================== --- compat/zlib/contrib/vstudio/vc11/zlibvc.vcxproj +++ compat/zlib/contrib/vstudio/vc11/zlibvc.vcxproj @@ -0,0 +1,688 @@ + + + + + Debug + Itanium + + + Debug + Win32 + + + Debug + x64 + + + ReleaseWithoutAsm + Itanium + + + ReleaseWithoutAsm + Win32 + + + ReleaseWithoutAsm + x64 + + + Release + Itanium + + + Release + Win32 + + + Release + x64 + + + + {8FD826F8-3739-44E6-8CC8-997122E53B8D} + + + + DynamicLibrary + false + true + v110 + + + DynamicLibrary + false + true + v110 + + + DynamicLibrary + false + v110 + Unicode + + + DynamicLibrary + false + true + + + DynamicLibrary + false + true + + + DynamicLibrary + false + + + DynamicLibrary + false + true + v110 + + + DynamicLibrary + false + true + v110 + + + DynamicLibrary + false + v110 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + <_ProjectFileVersion>10.0.30128.1 + x86\ZlibDll$(Configuration)\ + x86\ZlibDll$(Configuration)\Tmp\ + true + false + x86\ZlibDll$(Configuration)\ + x86\ZlibDll$(Configuration)\Tmp\ + false + false + x86\ZlibDll$(Configuration)\ + x86\ZlibDll$(Configuration)\Tmp\ + false + false + x64\ZlibDll$(Configuration)\ + x64\ZlibDll$(Configuration)\Tmp\ + true + false + ia64\ZlibDll$(Configuration)\ + ia64\ZlibDll$(Configuration)\Tmp\ + true + false + x64\ZlibDll$(Configuration)\ + x64\ZlibDll$(Configuration)\Tmp\ + false + false + ia64\ZlibDll$(Configuration)\ + ia64\ZlibDll$(Configuration)\Tmp\ + false + false + x64\ZlibDll$(Configuration)\ + x64\ZlibDll$(Configuration)\Tmp\ + false + false + ia64\ZlibDll$(Configuration)\ + ia64\ZlibDll$(Configuration)\Tmp\ + false + false + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + AllRules.ruleset + + + zlibwapi + zlibwapi + zlibwapi + zlibwapi + zlibwapi + zlibwapi + + + + _DEBUG;%(PreprocessorDefinitions) + true + true + Win32 + $(OutDir)zlibvc.tlb + + + Disabled + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;ASMV;ASMINF;%(PreprocessorDefinitions) + + + MultiThreadedDebugDLL + false + $(IntDir)zlibvc.pch + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + ProgramDatabase + + + _DEBUG;%(PreprocessorDefinitions) + 0x040c + + + /MACHINE:I386 %(AdditionalOptions) + ..\..\masmx86\match686.obj;..\..\masmx86\inffas32.obj;%(AdditionalDependencies) + $(OutDir)zlibwapi.dll + true + .\zlibvc.def + true + $(OutDir)zlibwapi.pdb + true + $(OutDir)zlibwapi.map + Windows + false + + + $(OutDir)zlibwapi.lib + + + cd ..\..\masmx86 +bld_ml32.bat + + + + + NDEBUG;%(PreprocessorDefinitions) + true + true + Win32 + $(OutDir)zlibvc.tlb + + + 
OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibvc.pch + All + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + + + NDEBUG;%(PreprocessorDefinitions) + 0x040c + + + /MACHINE:I386 %(AdditionalOptions) + $(OutDir)zlibwapi.dll + true + false + .\zlibvc.def + $(OutDir)zlibwapi.pdb + true + $(OutDir)zlibwapi.map + Windows + false + + + $(OutDir)zlibwapi.lib + + + + + NDEBUG;%(PreprocessorDefinitions) + true + true + Win32 + $(OutDir)zlibvc.tlb + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;ASMV;ASMINF;%(PreprocessorDefinitions) + true + + + MultiThreaded + false + true + $(IntDir)zlibvc.pch + All + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + + + NDEBUG;%(PreprocessorDefinitions) + 0x040c + + + /MACHINE:I386 %(AdditionalOptions) + ..\..\masmx86\match686.obj;..\..\masmx86\inffas32.obj;%(AdditionalDependencies) + $(OutDir)zlibwapi.dll + true + false + .\zlibvc.def + $(OutDir)zlibwapi.pdb + true + $(OutDir)zlibwapi.map + Windows + false + + + $(OutDir)zlibwapi.lib + + + cd ..\..\masmx86 +bld_ml32.bat + + + + + _DEBUG;%(PreprocessorDefinitions) + true + true + X64 + $(OutDir)zlibvc.tlb + + + Disabled + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;ASMV;ASMINF;WIN64;%(PreprocessorDefinitions) + + + MultiThreadedDebugDLL + false + $(IntDir)zlibvc.pch + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + ProgramDatabase + + + _DEBUG;%(PreprocessorDefinitions) + 0x040c + + + ..\..\masmx64\gvmat64.obj;..\..\masmx64\inffasx64.obj;%(AdditionalDependencies) + $(OutDir)zlibwapi.dll + true + .\zlibvc.def + true + $(OutDir)zlibwapi.pdb + true + $(OutDir)zlibwapi.map + Windows + $(OutDir)zlibwapi.lib + MachineX64 + + + cd ..\..\contrib\masmx64 +bld_ml64.bat + + + + + _DEBUG;%(PreprocessorDefinitions) + true + true + Itanium + $(OutDir)zlibvc.tlb + + + Disabled + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;WIN64;%(PreprocessorDefinitions) + + + MultiThreadedDebugDLL + false + $(IntDir)zlibvc.pch + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + ProgramDatabase + + + _DEBUG;%(PreprocessorDefinitions) + 0x040c + + + $(OutDir)zlibwapi.dll + true + .\zlibvc.def + true + $(OutDir)zlibwapi.pdb + true + $(OutDir)zlibwapi.map + Windows + $(OutDir)zlibwapi.lib + MachineIA64 + + + + + NDEBUG;%(PreprocessorDefinitions) + true + true + X64 + $(OutDir)zlibvc.tlb + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibvc.pch + All + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + + + NDEBUG;%(PreprocessorDefinitions) + 0x040c + + + $(OutDir)zlibwapi.dll + true + false + .\zlibvc.def + $(OutDir)zlibwapi.pdb + true + $(OutDir)zlibwapi.map + Windows + $(OutDir)zlibwapi.lib + MachineX64 + + + + + NDEBUG;%(PreprocessorDefinitions) + true + true + Itanium + $(OutDir)zlibvc.tlb + + + OnlyExplicitInline + 
..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + WIN32;_CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibvc.pch + All + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + + + NDEBUG;%(PreprocessorDefinitions) + 0x040c + + + $(OutDir)zlibwapi.dll + true + false + .\zlibvc.def + $(OutDir)zlibwapi.pdb + true + $(OutDir)zlibwapi.map + Windows + $(OutDir)zlibwapi.lib + MachineIA64 + + + + + NDEBUG;%(PreprocessorDefinitions) + true + true + X64 + $(OutDir)zlibvc.tlb + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;ASMV;ASMINF;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibvc.pch + All + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + + + NDEBUG;%(PreprocessorDefinitions) + 0x040c + + + ..\..\masmx64\gvmat64.obj;..\..\masmx64\inffasx64.obj;%(AdditionalDependencies) + $(OutDir)zlibwapi.dll + true + false + .\zlibvc.def + $(OutDir)zlibwapi.pdb + true + $(OutDir)zlibwapi.map + Windows + $(OutDir)zlibwapi.lib + MachineX64 + + + cd ..\..\masmx64 +bld_ml64.bat + + + + + NDEBUG;%(PreprocessorDefinitions) + true + true + Itanium + $(OutDir)zlibvc.tlb + + + OnlyExplicitInline + ..\..\..;..\..\masmx86;%(AdditionalIncludeDirectories) + _CRT_NONSTDC_NO_DEPRECATE;_CRT_SECURE_NO_DEPRECATE;_CRT_NONSTDC_NO_WARNINGS;ZLIB_WINAPI;WIN64;%(PreprocessorDefinitions) + true + + + MultiThreadedDLL + false + true + $(IntDir)zlibvc.pch + All + $(IntDir) + $(IntDir) + $(OutDir) + + + Level3 + true + + + NDEBUG;%(PreprocessorDefinitions) + 0x040c + + + $(OutDir)zlibwapi.dll + true + false + .\zlibvc.def + $(OutDir)zlibwapi.pdb + true + $(OutDir)zlibwapi.map + Windows + $(OutDir)zlibwapi.lib + MachineIA64 + + + + + + + + + + + + + + true + true + true + true + true + true + + + + + + + + + + %(AdditionalIncludeDirectories) + ZLIB_INTERNAL;%(PreprocessorDefinitions) + %(AdditionalIncludeDirectories) + ZLIB_INTERNAL;%(PreprocessorDefinitions) + %(AdditionalIncludeDirectories) + ZLIB_INTERNAL;%(PreprocessorDefinitions) + + + %(AdditionalIncludeDirectories) + ZLIB_INTERNAL;%(PreprocessorDefinitions) + %(AdditionalIncludeDirectories) + ZLIB_INTERNAL;%(PreprocessorDefinitions) + %(AdditionalIncludeDirectories) + ZLIB_INTERNAL;%(PreprocessorDefinitions) + + + + + + + + + + + + + + + + + + + + + + + + ADDED compat/zlib/contrib/vstudio/vc9/miniunz.vcproj Index: compat/zlib/contrib/vstudio/vc9/miniunz.vcproj ================================================================== --- compat/zlib/contrib/vstudio/vc9/miniunz.vcproj +++ compat/zlib/contrib/vstudio/vc9/miniunz.vcproj @@ -0,0 +1,565 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ADDED compat/zlib/contrib/vstudio/vc9/minizip.vcproj Index: compat/zlib/contrib/vstudio/vc9/minizip.vcproj ================================================================== --- compat/zlib/contrib/vstudio/vc9/minizip.vcproj +++ compat/zlib/contrib/vstudio/vc9/minizip.vcproj @@ -0,0 +1,562 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
ADDED compat/zlib/contrib/vstudio/vc9/testzlib.vcproj
Index: compat/zlib/contrib/vstudio/vc9/testzlib.vcproj
==================================================================
--- compat/zlib/contrib/vstudio/vc9/testzlib.vcproj
+++ compat/zlib/contrib/vstudio/vc9/testzlib.vcproj
@@ -0,0 +1,852 @@

ADDED compat/zlib/contrib/vstudio/vc9/testzlibdll.vcproj
Index: compat/zlib/contrib/vstudio/vc9/testzlibdll.vcproj
==================================================================
--- compat/zlib/contrib/vstudio/vc9/testzlibdll.vcproj
+++ compat/zlib/contrib/vstudio/vc9/testzlibdll.vcproj
@@ -0,0 +1,565 @@

ADDED compat/zlib/contrib/vstudio/vc9/zlib.rc
Index: compat/zlib/contrib/vstudio/vc9/zlib.rc
==================================================================
--- compat/zlib/contrib/vstudio/vc9/zlib.rc
+++ compat/zlib/contrib/vstudio/vc9/zlib.rc
@@ -0,0 +1,32 @@
+#include <windows.h>
+
+#define IDR_VERSION1 1
+IDR_VERSION1 VERSIONINFO MOVEABLE IMPURE LOADONCALL DISCARDABLE
+ FILEVERSION 1,2,8,0
+ PRODUCTVERSION 1,2,8,0
+ FILEFLAGSMASK VS_FFI_FILEFLAGSMASK
+ FILEFLAGS 0
+ FILEOS VOS_DOS_WINDOWS32
+ FILETYPE VFT_DLL
+ FILESUBTYPE 0 // not used
+BEGIN
+ BLOCK "StringFileInfo"
+ BEGIN
+ BLOCK "040904E4"
+ //language ID = U.S.
English, char set = Windows, Multilingual + + BEGIN + VALUE "FileDescription", "zlib data compression and ZIP file I/O library\0" + VALUE "FileVersion", "1.2.8\0" + VALUE "InternalName", "zlib\0" + VALUE "OriginalFilename", "zlibwapi.dll\0" + VALUE "ProductName", "ZLib.DLL\0" + VALUE "Comments","DLL support by Alessandro Iacopetti & Gilles Vollant\0" + VALUE "LegalCopyright", "(C) 1995-2013 Jean-loup Gailly & Mark Adler\0" + END + END + BLOCK "VarFileInfo" + BEGIN + VALUE "Translation", 0x0409, 1252 + END +END ADDED compat/zlib/contrib/vstudio/vc9/zlibstat.vcproj Index: compat/zlib/contrib/vstudio/vc9/zlibstat.vcproj ================================================================== --- compat/zlib/contrib/vstudio/vc9/zlibstat.vcproj +++ compat/zlib/contrib/vstudio/vc9/zlibstat.vcproj @@ -0,0 +1,835 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ADDED compat/zlib/contrib/vstudio/vc9/zlibvc.def Index: compat/zlib/contrib/vstudio/vc9/zlibvc.def ================================================================== --- compat/zlib/contrib/vstudio/vc9/zlibvc.def +++ compat/zlib/contrib/vstudio/vc9/zlibvc.def @@ -0,0 +1,143 @@ +LIBRARY +; zlib data compression and ZIP file I/O library + +VERSION 1.2.8 + +EXPORTS + adler32 @1 + compress @2 + crc32 @3 + deflate @4 + deflateCopy @5 + deflateEnd @6 + deflateInit2_ @7 + deflateInit_ @8 + deflateParams @9 + deflateReset @10 + deflateSetDictionary @11 + gzclose @12 + gzdopen @13 + gzerror @14 + gzflush @15 + gzopen @16 + gzread @17 + gzwrite @18 + inflate @19 + inflateEnd @20 + inflateInit2_ @21 + inflateInit_ @22 + inflateReset @23 + inflateSetDictionary @24 + inflateSync @25 + uncompress @26 + zlibVersion @27 + gzprintf @28 + gzputc @29 + gzgetc @30 + gzseek @31 + gzrewind @32 + gztell @33 + gzeof @34 + gzsetparams @35 + zError @36 + inflateSyncPoint @37 + get_crc_table @38 + compress2 @39 + gzputs @40 + gzgets @41 + inflateCopy @42 + inflateBackInit_ @43 + inflateBack @44 + inflateBackEnd @45 + compressBound @46 + deflateBound @47 + gzclearerr @48 + gzungetc @49 + zlibCompileFlags @50 + deflatePrime @51 + deflatePending @52 + + unzOpen @61 + unzClose @62 + unzGetGlobalInfo @63 + unzGetCurrentFileInfo @64 + unzGoToFirstFile @65 + unzGoToNextFile @66 + unzOpenCurrentFile @67 + unzReadCurrentFile @68 + unzOpenCurrentFile3 @69 + unztell @70 + unzeof @71 + unzCloseCurrentFile @72 + unzGetGlobalComment @73 + unzStringFileNameCompare @74 + unzLocateFile @75 + unzGetLocalExtrafield @76 + unzOpen2 @77 + unzOpenCurrentFile2 @78 + unzOpenCurrentFilePassword @79 + + zipOpen @80 + zipOpenNewFileInZip @81 + zipWriteInFileInZip @82 + zipCloseFileInZip @83 + zipClose @84 + zipOpenNewFileInZip2 @86 + zipCloseFileInZipRaw @87 + zipOpen2 @88 + zipOpenNewFileInZip3 @89 + + unzGetFilePos @100 + unzGoToFilePos @101 + + fill_win32_filefunc @110 + +; zlibwapi v1.2.4 added: + fill_win32_filefunc64 @111 + fill_win32_filefunc64A @112 + fill_win32_filefunc64W @113 + + unzOpen64 @120 + unzOpen2_64 @121 + unzGetGlobalInfo64 @122 + unzGetCurrentFileInfo64 @124 + unzGetCurrentFileZStreamPos64 @125 + unztell64 @126 + unzGetFilePos64 @127 + unzGoToFilePos64 @128 + + 
zipOpen64 @130 + zipOpen2_64 @131 + zipOpenNewFileInZip64 @132 + zipOpenNewFileInZip2_64 @133 + zipOpenNewFileInZip3_64 @134 + zipOpenNewFileInZip4_64 @135 + zipCloseFileInZipRaw64 @136 + +; zlib1 v1.2.4 added: + adler32_combine @140 + crc32_combine @142 + deflateSetHeader @144 + deflateTune @145 + gzbuffer @146 + gzclose_r @147 + gzclose_w @148 + gzdirect @149 + gzoffset @150 + inflateGetHeader @156 + inflateMark @157 + inflatePrime @158 + inflateReset2 @159 + inflateUndermine @160 + +; zlib1 v1.2.6 added: + gzgetc_ @161 + inflateResetKeep @163 + deflateResetKeep @164 + +; zlib1 v1.2.7 added: + gzopen_w @165 + +; zlib1 v1.2.8 added: + inflateGetDictionary @166 + gzvprintf @167 ADDED compat/zlib/contrib/vstudio/vc9/zlibvc.sln Index: compat/zlib/contrib/vstudio/vc9/zlibvc.sln ================================================================== --- compat/zlib/contrib/vstudio/vc9/zlibvc.sln +++ compat/zlib/contrib/vstudio/vc9/zlibvc.sln @@ -0,0 +1,144 @@ + +Microsoft Visual Studio Solution File, Format Version 10.00 +# Visual Studio 2008 +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "zlibvc", "zlibvc.vcproj", "{8FD826F8-3739-44E6-8CC8-997122E53B8D}" +EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "zlibstat", "zlibstat.vcproj", "{745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}" +EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "testzlib", "testzlib.vcproj", "{AA6666AA-E09F-4135-9C0C-4FE50C3C654B}" +EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "TestZlibDll", "testzlibdll.vcproj", "{C52F9E7B-498A-42BE-8DB4-85A15694366A}" + ProjectSection(ProjectDependencies) = postProject + {8FD826F8-3739-44E6-8CC8-997122E53B8D} = {8FD826F8-3739-44E6-8CC8-997122E53B8D} + EndProjectSection +EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "minizip", "minizip.vcproj", "{48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}" + ProjectSection(ProjectDependencies) = postProject + {8FD826F8-3739-44E6-8CC8-997122E53B8D} = {8FD826F8-3739-44E6-8CC8-997122E53B8D} + EndProjectSection +EndProject +Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "miniunz", "miniunz.vcproj", "{C52F9E7B-498A-42BE-8DB4-85A15694382A}" + ProjectSection(ProjectDependencies) = postProject + {8FD826F8-3739-44E6-8CC8-997122E53B8D} = {8FD826F8-3739-44E6-8CC8-997122E53B8D} + EndProjectSection +EndProject +Global + GlobalSection(SolutionConfigurationPlatforms) = preSolution + Debug|Itanium = Debug|Itanium + Debug|Win32 = Debug|Win32 + Debug|x64 = Debug|x64 + Release|Itanium = Release|Itanium + Release|Win32 = Release|Win32 + Release|x64 = Release|x64 + ReleaseWithoutAsm|Itanium = ReleaseWithoutAsm|Itanium + ReleaseWithoutAsm|Win32 = ReleaseWithoutAsm|Win32 + ReleaseWithoutAsm|x64 = ReleaseWithoutAsm|x64 + EndGlobalSection + GlobalSection(ProjectConfigurationPlatforms) = postSolution + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|Itanium.ActiveCfg = Debug|Itanium + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|Itanium.Build.0 = Debug|Itanium + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|Win32.ActiveCfg = Debug|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|Win32.Build.0 = Debug|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|x64.ActiveCfg = Debug|x64 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Debug|x64.Build.0 = Debug|x64 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|Itanium.ActiveCfg = Release|Itanium + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|Itanium.Build.0 = Release|Itanium + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|Win32.ActiveCfg = Release|Win32 + 
{8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|Win32.Build.0 = Release|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|x64.ActiveCfg = Release|x64 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.Release|x64.Build.0 = Release|x64 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|Itanium.ActiveCfg = ReleaseWithoutAsm|Itanium + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|Itanium.Build.0 = ReleaseWithoutAsm|Itanium + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|Win32.ActiveCfg = ReleaseWithoutAsm|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|Win32.Build.0 = ReleaseWithoutAsm|Win32 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|x64.ActiveCfg = ReleaseWithoutAsm|x64 + {8FD826F8-3739-44E6-8CC8-997122E53B8D}.ReleaseWithoutAsm|x64.Build.0 = ReleaseWithoutAsm|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|Itanium.ActiveCfg = Debug|Itanium + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|Itanium.Build.0 = Debug|Itanium + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|Win32.ActiveCfg = Debug|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|Win32.Build.0 = Debug|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|x64.ActiveCfg = Debug|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Debug|x64.Build.0 = Debug|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|Itanium.ActiveCfg = Release|Itanium + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|Itanium.Build.0 = Release|Itanium + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|Win32.ActiveCfg = Release|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|Win32.Build.0 = Release|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|x64.ActiveCfg = Release|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.Release|x64.Build.0 = Release|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|Itanium.ActiveCfg = ReleaseWithoutAsm|Itanium + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|Itanium.Build.0 = ReleaseWithoutAsm|Itanium + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|Win32.ActiveCfg = ReleaseWithoutAsm|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|Win32.Build.0 = ReleaseWithoutAsm|Win32 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|x64.ActiveCfg = ReleaseWithoutAsm|x64 + {745DEC58-EBB3-47A9-A9B8-4C6627C01BF8}.ReleaseWithoutAsm|x64.Build.0 = ReleaseWithoutAsm|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|Itanium.ActiveCfg = Debug|Itanium + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|Itanium.Build.0 = Debug|Itanium + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|Win32.ActiveCfg = Debug|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|Win32.Build.0 = Debug|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|x64.ActiveCfg = Debug|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Debug|x64.Build.0 = Debug|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|Itanium.ActiveCfg = Release|Itanium + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|Itanium.Build.0 = Release|Itanium + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|Win32.ActiveCfg = Release|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|Win32.Build.0 = Release|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|x64.ActiveCfg = Release|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.Release|x64.Build.0 = Release|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Itanium.ActiveCfg = ReleaseWithoutAsm|Itanium + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Itanium.Build.0 = ReleaseWithoutAsm|Itanium + 
{AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Win32.ActiveCfg = ReleaseWithoutAsm|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Win32.Build.0 = ReleaseWithoutAsm|Win32 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|x64.ActiveCfg = ReleaseWithoutAsm|x64 + {AA6666AA-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|x64.Build.0 = ReleaseWithoutAsm|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|Itanium.ActiveCfg = Debug|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|Itanium.Build.0 = Debug|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|Win32.ActiveCfg = Debug|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|Win32.Build.0 = Debug|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|x64.ActiveCfg = Debug|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Debug|x64.Build.0 = Debug|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|Itanium.ActiveCfg = Release|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|Itanium.Build.0 = Release|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|Win32.ActiveCfg = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|Win32.Build.0 = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|x64.ActiveCfg = Release|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.Release|x64.Build.0 = Release|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.ReleaseWithoutAsm|Itanium.ActiveCfg = Release|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.ReleaseWithoutAsm|Itanium.Build.0 = Release|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.ReleaseWithoutAsm|Win32.ActiveCfg = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694366A}.ReleaseWithoutAsm|x64.ActiveCfg = Release|x64 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|Itanium.ActiveCfg = Debug|Itanium + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|Itanium.Build.0 = Debug|Itanium + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|Win32.ActiveCfg = Debug|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|Win32.Build.0 = Debug|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|x64.ActiveCfg = Debug|x64 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Debug|x64.Build.0 = Debug|x64 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|Itanium.ActiveCfg = Release|Itanium + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|Itanium.Build.0 = Release|Itanium + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|Win32.ActiveCfg = Release|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|Win32.Build.0 = Release|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|x64.ActiveCfg = Release|x64 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.Release|x64.Build.0 = Release|x64 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Itanium.ActiveCfg = Release|Itanium + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Itanium.Build.0 = Release|Itanium + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|Win32.ActiveCfg = Release|Win32 + {48CDD9DC-E09F-4135-9C0C-4FE50C3C654B}.ReleaseWithoutAsm|x64.ActiveCfg = Release|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|Itanium.ActiveCfg = Debug|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|Itanium.Build.0 = Debug|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|Win32.ActiveCfg = Debug|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|Win32.Build.0 = Debug|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|x64.ActiveCfg = Debug|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Debug|x64.Build.0 = Debug|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|Itanium.ActiveCfg = Release|Itanium + 
{C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|Itanium.Build.0 = Release|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|Win32.ActiveCfg = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|Win32.Build.0 = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|x64.ActiveCfg = Release|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.Release|x64.Build.0 = Release|x64 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.ReleaseWithoutAsm|Itanium.ActiveCfg = Release|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.ReleaseWithoutAsm|Itanium.Build.0 = Release|Itanium + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.ReleaseWithoutAsm|Win32.ActiveCfg = Release|Win32 + {C52F9E7B-498A-42BE-8DB4-85A15694382A}.ReleaseWithoutAsm|x64.ActiveCfg = Release|x64 + EndGlobalSection + GlobalSection(SolutionProperties) = preSolution + HideSolutionNode = FALSE + EndGlobalSection +EndGlobal ADDED compat/zlib/contrib/vstudio/vc9/zlibvc.vcproj Index: compat/zlib/contrib/vstudio/vc9/zlibvc.vcproj ================================================================== --- compat/zlib/contrib/vstudio/vc9/zlibvc.vcproj +++ compat/zlib/contrib/vstudio/vc9/zlibvc.vcproj @@ -0,0 +1,1156 @@ [1,156 lines of Visual Studio 2008 project XML omitted] ADDED compat/zlib/crc32.c Index: compat/zlib/crc32.c ================================================================== --- compat/zlib/crc32.c +++ compat/zlib/crc32.c @@ -0,0 +1,425 @@ +/* crc32.c -- compute the CRC-32 of a data stream + * Copyright (C) 1995-2006, 2010, 2011, 2012 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + * + * Thanks to Rodney Brown for his contribution of faster + * CRC methods: exclusive-oring 32 bits of data at a time, and pre-computing + * tables for updating the shift register in one step with three exclusive-ors + * instead of four steps with four exclusive-ors. This results in about a + * factor of two increase in speed on a Power PC G4 (PPC7455) using gcc -O3. + */ + +/* @(#) $Id$ */ + +/* + Note on the use of DYNAMIC_CRC_TABLE: there is no mutex or semaphore + protection on the static variables used to control the first-use generation + of the crc tables. Therefore, if you #define DYNAMIC_CRC_TABLE, you should + first call get_crc_table() to initialize the tables before allowing more than + one thread to use crc32(). + + DYNAMIC_CRC_TABLE and MAKECRCH can be #defined to write out crc32.h. + */ + +#ifdef MAKECRCH +# include <stdio.h> +# ifndef DYNAMIC_CRC_TABLE +# define DYNAMIC_CRC_TABLE +# endif /* !DYNAMIC_CRC_TABLE */ +#endif /* MAKECRCH */ + +#include "zutil.h" /* for STDC and FAR definitions */ + +#define local static + +/* Definitions for doing the crc four data bytes at a time.
*/ +#if !defined(NOBYFOUR) && defined(Z_U4) +# define BYFOUR +#endif +#ifdef BYFOUR + local unsigned long crc32_little OF((unsigned long, + const unsigned char FAR *, unsigned)); + local unsigned long crc32_big OF((unsigned long, + const unsigned char FAR *, unsigned)); +# define TBLS 8 +#else +# define TBLS 1 +#endif /* BYFOUR */ + +/* Local functions for crc concatenation */ +local unsigned long gf2_matrix_times OF((unsigned long *mat, + unsigned long vec)); +local void gf2_matrix_square OF((unsigned long *square, unsigned long *mat)); +local uLong crc32_combine_ OF((uLong crc1, uLong crc2, z_off64_t len2)); + + +#ifdef DYNAMIC_CRC_TABLE + +local volatile int crc_table_empty = 1; +local z_crc_t FAR crc_table[TBLS][256]; +local void make_crc_table OF((void)); +#ifdef MAKECRCH + local void write_table OF((FILE *, const z_crc_t FAR *)); +#endif /* MAKECRCH */ +/* + Generate tables for a byte-wise 32-bit CRC calculation on the polynomial: + x^32+x^26+x^23+x^22+x^16+x^12+x^11+x^10+x^8+x^7+x^5+x^4+x^2+x+1. + + Polynomials over GF(2) are represented in binary, one bit per coefficient, + with the lowest powers in the most significant bit. Then adding polynomials + is just exclusive-or, and multiplying a polynomial by x is a right shift by + one. If we call the above polynomial p, and represent a byte as the + polynomial q, also with the lowest power in the most significant bit (so the + byte 0xb1 is the polynomial x^7+x^3+x+1), then the CRC is (q*x^32) mod p, + where a mod b means the remainder after dividing a by b. + + This calculation is done using the shift-register method of multiplying and + taking the remainder. The register is initialized to zero, and for each + incoming bit, x^32 is added mod p to the register if the bit is a one (where + x^32 mod p is p+x^32 = x^26+...+1), and the register is multiplied mod p by + x (which is shifting right by one and adding x^32 mod p if the bit shifted + out is a one). We start with the highest power (least significant bit) of + q and repeat for all eight bits of q. + + The first table is simply the CRC of all possible eight bit values. This is + all the information needed to generate CRCs on data a byte at a time for all + combinations of CRC register values and incoming bytes. The remaining tables + allow for word-at-a-time CRC calculation for both big-endian and little- + endian machines, where a word is four bytes. +*/ +local void make_crc_table() +{ + z_crc_t c; + int n, k; + z_crc_t poly; /* polynomial exclusive-or pattern */ + /* terms of polynomial defining this crc (except x^32): */ + static volatile int first = 1; /* flag to limit concurrent making */ + static const unsigned char p[] = {0,1,2,4,5,7,8,10,11,12,16,22,23,26}; + + /* See if another task is already doing this (not thread-safe, but better + than nothing -- significantly reduces duration of vulnerability in + case the advice about DYNAMIC_CRC_TABLE is ignored) */ + if (first) { + first = 0; + + /* make exclusive-or pattern from polynomial (0xedb88320UL) */ + poly = 0; + for (n = 0; n < (int)(sizeof(p)/sizeof(unsigned char)); n++) + poly |= (z_crc_t)1 << (31 - p[n]); + + /* generate a crc for every 8-bit value */ + for (n = 0; n < 256; n++) { + c = (z_crc_t)n; + for (k = 0; k < 8; k++) + c = c & 1 ? 
poly ^ (c >> 1) : c >> 1; + crc_table[0][n] = c; + } + +#ifdef BYFOUR + /* generate crc for each value followed by one, two, and three zeros, + and then the byte reversal of those as well as the first table */ + for (n = 0; n < 256; n++) { + c = crc_table[0][n]; + crc_table[4][n] = ZSWAP32(c); + for (k = 1; k < 4; k++) { + c = crc_table[0][c & 0xff] ^ (c >> 8); + crc_table[k][n] = c; + crc_table[k + 4][n] = ZSWAP32(c); + } + } +#endif /* BYFOUR */ + + crc_table_empty = 0; + } + else { /* not first */ + /* wait for the other guy to finish (not efficient, but rare) */ + while (crc_table_empty) + ; + } + +#ifdef MAKECRCH + /* write out CRC tables to crc32.h */ + { + FILE *out; + + out = fopen("crc32.h", "w"); + if (out == NULL) return; + fprintf(out, "/* crc32.h -- tables for rapid CRC calculation\n"); + fprintf(out, " * Generated automatically by crc32.c\n */\n\n"); + fprintf(out, "local const z_crc_t FAR "); + fprintf(out, "crc_table[TBLS][256] =\n{\n {\n"); + write_table(out, crc_table[0]); +# ifdef BYFOUR + fprintf(out, "#ifdef BYFOUR\n"); + for (k = 1; k < 8; k++) { + fprintf(out, " },\n {\n"); + write_table(out, crc_table[k]); + } + fprintf(out, "#endif\n"); +# endif /* BYFOUR */ + fprintf(out, " }\n};\n"); + fclose(out); + } +#endif /* MAKECRCH */ +} + +#ifdef MAKECRCH +local void write_table(out, table) + FILE *out; + const z_crc_t FAR *table; +{ + int n; + + for (n = 0; n < 256; n++) + fprintf(out, "%s0x%08lxUL%s", n % 5 ? "" : " ", + (unsigned long)(table[n]), + n == 255 ? "\n" : (n % 5 == 4 ? ",\n" : ", ")); +} +#endif /* MAKECRCH */ + +#else /* !DYNAMIC_CRC_TABLE */ +/* ======================================================================== + * Tables of CRC-32s of all single-byte values, made by make_crc_table(). + */ +#include "crc32.h" +#endif /* DYNAMIC_CRC_TABLE */ + +/* ========================================================================= + * This function can be used by asm versions of crc32() + */ +const z_crc_t FAR * ZEXPORT get_crc_table() +{ +#ifdef DYNAMIC_CRC_TABLE + if (crc_table_empty) + make_crc_table(); +#endif /* DYNAMIC_CRC_TABLE */ + return (const z_crc_t FAR *)crc_table; +} + +/* ========================================================================= */ +#define DO1 crc = crc_table[0][((int)crc ^ (*buf++)) & 0xff] ^ (crc >> 8) +#define DO8 DO1; DO1; DO1; DO1; DO1; DO1; DO1; DO1 + +/* ========================================================================= */ +unsigned long ZEXPORT crc32(crc, buf, len) + unsigned long crc; + const unsigned char FAR *buf; + uInt len; +{ + if (buf == Z_NULL) return 0UL; + +#ifdef DYNAMIC_CRC_TABLE + if (crc_table_empty) + make_crc_table(); +#endif /* DYNAMIC_CRC_TABLE */ + +#ifdef BYFOUR + if (sizeof(void *) == sizeof(ptrdiff_t)) { + z_crc_t endian; + + endian = 1; + if (*((unsigned char *)(&endian))) + return crc32_little(crc, buf, len); + else + return crc32_big(crc, buf, len); + } +#endif /* BYFOUR */ + crc = crc ^ 0xffffffffUL; + while (len >= 8) { + DO8; + len -= 8; + } + if (len) do { + DO1; + } while (--len); + return crc ^ 0xffffffffUL; +} + +#ifdef BYFOUR + +/* ========================================================================= */ +#define DOLIT4 c ^= *buf4++; \ + c = crc_table[3][c & 0xff] ^ crc_table[2][(c >> 8) & 0xff] ^ \ + crc_table[1][(c >> 16) & 0xff] ^ crc_table[0][c >> 24] +#define DOLIT32 DOLIT4; DOLIT4; DOLIT4; DOLIT4; DOLIT4; DOLIT4; DOLIT4; DOLIT4 + +/* ========================================================================= */ +local unsigned long crc32_little(crc, buf, len) + unsigned long 
crc; + const unsigned char FAR *buf; + unsigned len; +{ + register z_crc_t c; + register const z_crc_t FAR *buf4; + + c = (z_crc_t)crc; + c = ~c; + while (len && ((ptrdiff_t)buf & 3)) { + c = crc_table[0][(c ^ *buf++) & 0xff] ^ (c >> 8); + len--; + } + + buf4 = (const z_crc_t FAR *)(const void FAR *)buf; + while (len >= 32) { + DOLIT32; + len -= 32; + } + while (len >= 4) { + DOLIT4; + len -= 4; + } + buf = (const unsigned char FAR *)buf4; + + if (len) do { + c = crc_table[0][(c ^ *buf++) & 0xff] ^ (c >> 8); + } while (--len); + c = ~c; + return (unsigned long)c; +} + +/* ========================================================================= */ +#define DOBIG4 c ^= *++buf4; \ + c = crc_table[4][c & 0xff] ^ crc_table[5][(c >> 8) & 0xff] ^ \ + crc_table[6][(c >> 16) & 0xff] ^ crc_table[7][c >> 24] +#define DOBIG32 DOBIG4; DOBIG4; DOBIG4; DOBIG4; DOBIG4; DOBIG4; DOBIG4; DOBIG4 + +/* ========================================================================= */ +local unsigned long crc32_big(crc, buf, len) + unsigned long crc; + const unsigned char FAR *buf; + unsigned len; +{ + register z_crc_t c; + register const z_crc_t FAR *buf4; + + c = ZSWAP32((z_crc_t)crc); + c = ~c; + while (len && ((ptrdiff_t)buf & 3)) { + c = crc_table[4][(c >> 24) ^ *buf++] ^ (c << 8); + len--; + } + + buf4 = (const z_crc_t FAR *)(const void FAR *)buf; + buf4--; + while (len >= 32) { + DOBIG32; + len -= 32; + } + while (len >= 4) { + DOBIG4; + len -= 4; + } + buf4++; + buf = (const unsigned char FAR *)buf4; + + if (len) do { + c = crc_table[4][(c >> 24) ^ *buf++] ^ (c << 8); + } while (--len); + c = ~c; + return (unsigned long)(ZSWAP32(c)); +} + +#endif /* BYFOUR */ + +#define GF2_DIM 32 /* dimension of GF(2) vectors (length of CRC) */ + +/* ========================================================================= */ +local unsigned long gf2_matrix_times(mat, vec) + unsigned long *mat; + unsigned long vec; +{ + unsigned long sum; + + sum = 0; + while (vec) { + if (vec & 1) + sum ^= *mat; + vec >>= 1; + mat++; + } + return sum; +} + +/* ========================================================================= */ +local void gf2_matrix_square(square, mat) + unsigned long *square; + unsigned long *mat; +{ + int n; + + for (n = 0; n < GF2_DIM; n++) + square[n] = gf2_matrix_times(mat, mat[n]); +} + +/* ========================================================================= */ +local uLong crc32_combine_(crc1, crc2, len2) + uLong crc1; + uLong crc2; + z_off64_t len2; +{ + int n; + unsigned long row; + unsigned long even[GF2_DIM]; /* even-power-of-two zeros operator */ + unsigned long odd[GF2_DIM]; /* odd-power-of-two zeros operator */ + + /* degenerate case (also disallow negative lengths) */ + if (len2 <= 0) + return crc1; + + /* put operator for one zero bit in odd */ + odd[0] = 0xedb88320UL; /* CRC-32 polynomial */ + row = 1; + for (n = 1; n < GF2_DIM; n++) { + odd[n] = row; + row <<= 1; + } + + /* put operator for two zero bits in even */ + gf2_matrix_square(even, odd); + + /* put operator for four zero bits in odd */ + gf2_matrix_square(odd, even); + + /* apply len2 zeros to crc1 (first square will put the operator for one + zero byte, eight zero bits, in even) */ + do { + /* apply zeros operator for this bit of len2 */ + gf2_matrix_square(even, odd); + if (len2 & 1) + crc1 = gf2_matrix_times(even, crc1); + len2 >>= 1; + + /* if no more bits set, then done */ + if (len2 == 0) + break; + + /* another iteration of the loop with odd and even swapped */ + gf2_matrix_square(odd, even); + if (len2 & 1) + crc1 = 
gf2_matrix_times(odd, crc1); + len2 >>= 1; + + /* if no more bits set, then done */ + } while (len2 != 0); + + /* return combined crc */ + crc1 ^= crc2; + return crc1; +} + +/* ========================================================================= */ +uLong ZEXPORT crc32_combine(crc1, crc2, len2) + uLong crc1; + uLong crc2; + z_off_t len2; +{ + return crc32_combine_(crc1, crc2, len2); +} + +uLong ZEXPORT crc32_combine64(crc1, crc2, len2) + uLong crc1; + uLong crc2; + z_off64_t len2; +{ + return crc32_combine_(crc1, crc2, len2); +} ADDED compat/zlib/crc32.h Index: compat/zlib/crc32.h ================================================================== --- compat/zlib/crc32.h +++ compat/zlib/crc32.h @@ -0,0 +1,441 @@ +/* crc32.h -- tables for rapid CRC calculation + * Generated automatically by crc32.c + */ + +local const z_crc_t FAR crc_table[TBLS][256] = +{ + { + 0x00000000UL, 0x77073096UL, 0xee0e612cUL, 0x990951baUL, 0x076dc419UL, + 0x706af48fUL, 0xe963a535UL, 0x9e6495a3UL, 0x0edb8832UL, 0x79dcb8a4UL, + 0xe0d5e91eUL, 0x97d2d988UL, 0x09b64c2bUL, 0x7eb17cbdUL, 0xe7b82d07UL, + 0x90bf1d91UL, 0x1db71064UL, 0x6ab020f2UL, 0xf3b97148UL, 0x84be41deUL, + 0x1adad47dUL, 0x6ddde4ebUL, 0xf4d4b551UL, 0x83d385c7UL, 0x136c9856UL, + 0x646ba8c0UL, 0xfd62f97aUL, 0x8a65c9ecUL, 0x14015c4fUL, 0x63066cd9UL, + 0xfa0f3d63UL, 0x8d080df5UL, 0x3b6e20c8UL, 0x4c69105eUL, 0xd56041e4UL, + 0xa2677172UL, 0x3c03e4d1UL, 0x4b04d447UL, 0xd20d85fdUL, 0xa50ab56bUL, + 0x35b5a8faUL, 0x42b2986cUL, 0xdbbbc9d6UL, 0xacbcf940UL, 0x32d86ce3UL, + 0x45df5c75UL, 0xdcd60dcfUL, 0xabd13d59UL, 0x26d930acUL, 0x51de003aUL, + 0xc8d75180UL, 0xbfd06116UL, 0x21b4f4b5UL, 0x56b3c423UL, 0xcfba9599UL, + 0xb8bda50fUL, 0x2802b89eUL, 0x5f058808UL, 0xc60cd9b2UL, 0xb10be924UL, + 0x2f6f7c87UL, 0x58684c11UL, 0xc1611dabUL, 0xb6662d3dUL, 0x76dc4190UL, + 0x01db7106UL, 0x98d220bcUL, 0xefd5102aUL, 0x71b18589UL, 0x06b6b51fUL, + 0x9fbfe4a5UL, 0xe8b8d433UL, 0x7807c9a2UL, 0x0f00f934UL, 0x9609a88eUL, + 0xe10e9818UL, 0x7f6a0dbbUL, 0x086d3d2dUL, 0x91646c97UL, 0xe6635c01UL, + 0x6b6b51f4UL, 0x1c6c6162UL, 0x856530d8UL, 0xf262004eUL, 0x6c0695edUL, + 0x1b01a57bUL, 0x8208f4c1UL, 0xf50fc457UL, 0x65b0d9c6UL, 0x12b7e950UL, + 0x8bbeb8eaUL, 0xfcb9887cUL, 0x62dd1ddfUL, 0x15da2d49UL, 0x8cd37cf3UL, + 0xfbd44c65UL, 0x4db26158UL, 0x3ab551ceUL, 0xa3bc0074UL, 0xd4bb30e2UL, + 0x4adfa541UL, 0x3dd895d7UL, 0xa4d1c46dUL, 0xd3d6f4fbUL, 0x4369e96aUL, + 0x346ed9fcUL, 0xad678846UL, 0xda60b8d0UL, 0x44042d73UL, 0x33031de5UL, + 0xaa0a4c5fUL, 0xdd0d7cc9UL, 0x5005713cUL, 0x270241aaUL, 0xbe0b1010UL, + 0xc90c2086UL, 0x5768b525UL, 0x206f85b3UL, 0xb966d409UL, 0xce61e49fUL, + 0x5edef90eUL, 0x29d9c998UL, 0xb0d09822UL, 0xc7d7a8b4UL, 0x59b33d17UL, + 0x2eb40d81UL, 0xb7bd5c3bUL, 0xc0ba6cadUL, 0xedb88320UL, 0x9abfb3b6UL, + 0x03b6e20cUL, 0x74b1d29aUL, 0xead54739UL, 0x9dd277afUL, 0x04db2615UL, + 0x73dc1683UL, 0xe3630b12UL, 0x94643b84UL, 0x0d6d6a3eUL, 0x7a6a5aa8UL, + 0xe40ecf0bUL, 0x9309ff9dUL, 0x0a00ae27UL, 0x7d079eb1UL, 0xf00f9344UL, + 0x8708a3d2UL, 0x1e01f268UL, 0x6906c2feUL, 0xf762575dUL, 0x806567cbUL, + 0x196c3671UL, 0x6e6b06e7UL, 0xfed41b76UL, 0x89d32be0UL, 0x10da7a5aUL, + 0x67dd4accUL, 0xf9b9df6fUL, 0x8ebeeff9UL, 0x17b7be43UL, 0x60b08ed5UL, + 0xd6d6a3e8UL, 0xa1d1937eUL, 0x38d8c2c4UL, 0x4fdff252UL, 0xd1bb67f1UL, + 0xa6bc5767UL, 0x3fb506ddUL, 0x48b2364bUL, 0xd80d2bdaUL, 0xaf0a1b4cUL, + 0x36034af6UL, 0x41047a60UL, 0xdf60efc3UL, 0xa867df55UL, 0x316e8eefUL, + 0x4669be79UL, 0xcb61b38cUL, 0xbc66831aUL, 0x256fd2a0UL, 0x5268e236UL, + 0xcc0c7795UL, 0xbb0b4703UL, 0x220216b9UL, 0x5505262fUL, 0xc5ba3bbeUL, + 0xb2bd0b28UL, 
0x2bb45a92UL, 0x5cb36a04UL, 0xc2d7ffa7UL, 0xb5d0cf31UL, + 0x2cd99e8bUL, 0x5bdeae1dUL, 0x9b64c2b0UL, 0xec63f226UL, 0x756aa39cUL, + 0x026d930aUL, 0x9c0906a9UL, 0xeb0e363fUL, 0x72076785UL, 0x05005713UL, + 0x95bf4a82UL, 0xe2b87a14UL, 0x7bb12baeUL, 0x0cb61b38UL, 0x92d28e9bUL, + 0xe5d5be0dUL, 0x7cdcefb7UL, 0x0bdbdf21UL, 0x86d3d2d4UL, 0xf1d4e242UL, + 0x68ddb3f8UL, 0x1fda836eUL, 0x81be16cdUL, 0xf6b9265bUL, 0x6fb077e1UL, + 0x18b74777UL, 0x88085ae6UL, 0xff0f6a70UL, 0x66063bcaUL, 0x11010b5cUL, + 0x8f659effUL, 0xf862ae69UL, 0x616bffd3UL, 0x166ccf45UL, 0xa00ae278UL, + 0xd70dd2eeUL, 0x4e048354UL, 0x3903b3c2UL, 0xa7672661UL, 0xd06016f7UL, + 0x4969474dUL, 0x3e6e77dbUL, 0xaed16a4aUL, 0xd9d65adcUL, 0x40df0b66UL, + 0x37d83bf0UL, 0xa9bcae53UL, 0xdebb9ec5UL, 0x47b2cf7fUL, 0x30b5ffe9UL, + 0xbdbdf21cUL, 0xcabac28aUL, 0x53b39330UL, 0x24b4a3a6UL, 0xbad03605UL, + 0xcdd70693UL, 0x54de5729UL, 0x23d967bfUL, 0xb3667a2eUL, 0xc4614ab8UL, + 0x5d681b02UL, 0x2a6f2b94UL, 0xb40bbe37UL, 0xc30c8ea1UL, 0x5a05df1bUL, + 0x2d02ef8dUL +#ifdef BYFOUR + }, + { + 0x00000000UL, 0x191b3141UL, 0x32366282UL, 0x2b2d53c3UL, 0x646cc504UL, + 0x7d77f445UL, 0x565aa786UL, 0x4f4196c7UL, 0xc8d98a08UL, 0xd1c2bb49UL, + 0xfaefe88aUL, 0xe3f4d9cbUL, 0xacb54f0cUL, 0xb5ae7e4dUL, 0x9e832d8eUL, + 0x87981ccfUL, 0x4ac21251UL, 0x53d92310UL, 0x78f470d3UL, 0x61ef4192UL, + 0x2eaed755UL, 0x37b5e614UL, 0x1c98b5d7UL, 0x05838496UL, 0x821b9859UL, + 0x9b00a918UL, 0xb02dfadbUL, 0xa936cb9aUL, 0xe6775d5dUL, 0xff6c6c1cUL, + 0xd4413fdfUL, 0xcd5a0e9eUL, 0x958424a2UL, 0x8c9f15e3UL, 0xa7b24620UL, + 0xbea97761UL, 0xf1e8e1a6UL, 0xe8f3d0e7UL, 0xc3de8324UL, 0xdac5b265UL, + 0x5d5daeaaUL, 0x44469febUL, 0x6f6bcc28UL, 0x7670fd69UL, 0x39316baeUL, + 0x202a5aefUL, 0x0b07092cUL, 0x121c386dUL, 0xdf4636f3UL, 0xc65d07b2UL, + 0xed705471UL, 0xf46b6530UL, 0xbb2af3f7UL, 0xa231c2b6UL, 0x891c9175UL, + 0x9007a034UL, 0x179fbcfbUL, 0x0e848dbaUL, 0x25a9de79UL, 0x3cb2ef38UL, + 0x73f379ffUL, 0x6ae848beUL, 0x41c51b7dUL, 0x58de2a3cUL, 0xf0794f05UL, + 0xe9627e44UL, 0xc24f2d87UL, 0xdb541cc6UL, 0x94158a01UL, 0x8d0ebb40UL, + 0xa623e883UL, 0xbf38d9c2UL, 0x38a0c50dUL, 0x21bbf44cUL, 0x0a96a78fUL, + 0x138d96ceUL, 0x5ccc0009UL, 0x45d73148UL, 0x6efa628bUL, 0x77e153caUL, + 0xbabb5d54UL, 0xa3a06c15UL, 0x888d3fd6UL, 0x91960e97UL, 0xded79850UL, + 0xc7cca911UL, 0xece1fad2UL, 0xf5facb93UL, 0x7262d75cUL, 0x6b79e61dUL, + 0x4054b5deUL, 0x594f849fUL, 0x160e1258UL, 0x0f152319UL, 0x243870daUL, + 0x3d23419bUL, 0x65fd6ba7UL, 0x7ce65ae6UL, 0x57cb0925UL, 0x4ed03864UL, + 0x0191aea3UL, 0x188a9fe2UL, 0x33a7cc21UL, 0x2abcfd60UL, 0xad24e1afUL, + 0xb43fd0eeUL, 0x9f12832dUL, 0x8609b26cUL, 0xc94824abUL, 0xd05315eaUL, + 0xfb7e4629UL, 0xe2657768UL, 0x2f3f79f6UL, 0x362448b7UL, 0x1d091b74UL, + 0x04122a35UL, 0x4b53bcf2UL, 0x52488db3UL, 0x7965de70UL, 0x607eef31UL, + 0xe7e6f3feUL, 0xfefdc2bfUL, 0xd5d0917cUL, 0xcccba03dUL, 0x838a36faUL, + 0x9a9107bbUL, 0xb1bc5478UL, 0xa8a76539UL, 0x3b83984bUL, 0x2298a90aUL, + 0x09b5fac9UL, 0x10aecb88UL, 0x5fef5d4fUL, 0x46f46c0eUL, 0x6dd93fcdUL, + 0x74c20e8cUL, 0xf35a1243UL, 0xea412302UL, 0xc16c70c1UL, 0xd8774180UL, + 0x9736d747UL, 0x8e2de606UL, 0xa500b5c5UL, 0xbc1b8484UL, 0x71418a1aUL, + 0x685abb5bUL, 0x4377e898UL, 0x5a6cd9d9UL, 0x152d4f1eUL, 0x0c367e5fUL, + 0x271b2d9cUL, 0x3e001cddUL, 0xb9980012UL, 0xa0833153UL, 0x8bae6290UL, + 0x92b553d1UL, 0xddf4c516UL, 0xc4eff457UL, 0xefc2a794UL, 0xf6d996d5UL, + 0xae07bce9UL, 0xb71c8da8UL, 0x9c31de6bUL, 0x852aef2aUL, 0xca6b79edUL, + 0xd37048acUL, 0xf85d1b6fUL, 0xe1462a2eUL, 0x66de36e1UL, 0x7fc507a0UL, + 0x54e85463UL, 0x4df36522UL, 0x02b2f3e5UL, 0x1ba9c2a4UL, 0x30849167UL, + 
0x299fa026UL, 0xe4c5aeb8UL, 0xfdde9ff9UL, 0xd6f3cc3aUL, 0xcfe8fd7bUL, + 0x80a96bbcUL, 0x99b25afdUL, 0xb29f093eUL, 0xab84387fUL, 0x2c1c24b0UL, + 0x350715f1UL, 0x1e2a4632UL, 0x07317773UL, 0x4870e1b4UL, 0x516bd0f5UL, + 0x7a468336UL, 0x635db277UL, 0xcbfad74eUL, 0xd2e1e60fUL, 0xf9ccb5ccUL, + 0xe0d7848dUL, 0xaf96124aUL, 0xb68d230bUL, 0x9da070c8UL, 0x84bb4189UL, + 0x03235d46UL, 0x1a386c07UL, 0x31153fc4UL, 0x280e0e85UL, 0x674f9842UL, + 0x7e54a903UL, 0x5579fac0UL, 0x4c62cb81UL, 0x8138c51fUL, 0x9823f45eUL, + 0xb30ea79dUL, 0xaa1596dcUL, 0xe554001bUL, 0xfc4f315aUL, 0xd7626299UL, + 0xce7953d8UL, 0x49e14f17UL, 0x50fa7e56UL, 0x7bd72d95UL, 0x62cc1cd4UL, + 0x2d8d8a13UL, 0x3496bb52UL, 0x1fbbe891UL, 0x06a0d9d0UL, 0x5e7ef3ecUL, + 0x4765c2adUL, 0x6c48916eUL, 0x7553a02fUL, 0x3a1236e8UL, 0x230907a9UL, + 0x0824546aUL, 0x113f652bUL, 0x96a779e4UL, 0x8fbc48a5UL, 0xa4911b66UL, + 0xbd8a2a27UL, 0xf2cbbce0UL, 0xebd08da1UL, 0xc0fdde62UL, 0xd9e6ef23UL, + 0x14bce1bdUL, 0x0da7d0fcUL, 0x268a833fUL, 0x3f91b27eUL, 0x70d024b9UL, + 0x69cb15f8UL, 0x42e6463bUL, 0x5bfd777aUL, 0xdc656bb5UL, 0xc57e5af4UL, + 0xee530937UL, 0xf7483876UL, 0xb809aeb1UL, 0xa1129ff0UL, 0x8a3fcc33UL, + 0x9324fd72UL + }, + { + 0x00000000UL, 0x01c26a37UL, 0x0384d46eUL, 0x0246be59UL, 0x0709a8dcUL, + 0x06cbc2ebUL, 0x048d7cb2UL, 0x054f1685UL, 0x0e1351b8UL, 0x0fd13b8fUL, + 0x0d9785d6UL, 0x0c55efe1UL, 0x091af964UL, 0x08d89353UL, 0x0a9e2d0aUL, + 0x0b5c473dUL, 0x1c26a370UL, 0x1de4c947UL, 0x1fa2771eUL, 0x1e601d29UL, + 0x1b2f0bacUL, 0x1aed619bUL, 0x18abdfc2UL, 0x1969b5f5UL, 0x1235f2c8UL, + 0x13f798ffUL, 0x11b126a6UL, 0x10734c91UL, 0x153c5a14UL, 0x14fe3023UL, + 0x16b88e7aUL, 0x177ae44dUL, 0x384d46e0UL, 0x398f2cd7UL, 0x3bc9928eUL, + 0x3a0bf8b9UL, 0x3f44ee3cUL, 0x3e86840bUL, 0x3cc03a52UL, 0x3d025065UL, + 0x365e1758UL, 0x379c7d6fUL, 0x35dac336UL, 0x3418a901UL, 0x3157bf84UL, + 0x3095d5b3UL, 0x32d36beaUL, 0x331101ddUL, 0x246be590UL, 0x25a98fa7UL, + 0x27ef31feUL, 0x262d5bc9UL, 0x23624d4cUL, 0x22a0277bUL, 0x20e69922UL, + 0x2124f315UL, 0x2a78b428UL, 0x2bbade1fUL, 0x29fc6046UL, 0x283e0a71UL, + 0x2d711cf4UL, 0x2cb376c3UL, 0x2ef5c89aUL, 0x2f37a2adUL, 0x709a8dc0UL, + 0x7158e7f7UL, 0x731e59aeUL, 0x72dc3399UL, 0x7793251cUL, 0x76514f2bUL, + 0x7417f172UL, 0x75d59b45UL, 0x7e89dc78UL, 0x7f4bb64fUL, 0x7d0d0816UL, + 0x7ccf6221UL, 0x798074a4UL, 0x78421e93UL, 0x7a04a0caUL, 0x7bc6cafdUL, + 0x6cbc2eb0UL, 0x6d7e4487UL, 0x6f38fadeUL, 0x6efa90e9UL, 0x6bb5866cUL, + 0x6a77ec5bUL, 0x68315202UL, 0x69f33835UL, 0x62af7f08UL, 0x636d153fUL, + 0x612bab66UL, 0x60e9c151UL, 0x65a6d7d4UL, 0x6464bde3UL, 0x662203baUL, + 0x67e0698dUL, 0x48d7cb20UL, 0x4915a117UL, 0x4b531f4eUL, 0x4a917579UL, + 0x4fde63fcUL, 0x4e1c09cbUL, 0x4c5ab792UL, 0x4d98dda5UL, 0x46c49a98UL, + 0x4706f0afUL, 0x45404ef6UL, 0x448224c1UL, 0x41cd3244UL, 0x400f5873UL, + 0x4249e62aUL, 0x438b8c1dUL, 0x54f16850UL, 0x55330267UL, 0x5775bc3eUL, + 0x56b7d609UL, 0x53f8c08cUL, 0x523aaabbUL, 0x507c14e2UL, 0x51be7ed5UL, + 0x5ae239e8UL, 0x5b2053dfUL, 0x5966ed86UL, 0x58a487b1UL, 0x5deb9134UL, + 0x5c29fb03UL, 0x5e6f455aUL, 0x5fad2f6dUL, 0xe1351b80UL, 0xe0f771b7UL, + 0xe2b1cfeeUL, 0xe373a5d9UL, 0xe63cb35cUL, 0xe7fed96bUL, 0xe5b86732UL, + 0xe47a0d05UL, 0xef264a38UL, 0xeee4200fUL, 0xeca29e56UL, 0xed60f461UL, + 0xe82fe2e4UL, 0xe9ed88d3UL, 0xebab368aUL, 0xea695cbdUL, 0xfd13b8f0UL, + 0xfcd1d2c7UL, 0xfe976c9eUL, 0xff5506a9UL, 0xfa1a102cUL, 0xfbd87a1bUL, + 0xf99ec442UL, 0xf85cae75UL, 0xf300e948UL, 0xf2c2837fUL, 0xf0843d26UL, + 0xf1465711UL, 0xf4094194UL, 0xf5cb2ba3UL, 0xf78d95faUL, 0xf64fffcdUL, + 0xd9785d60UL, 0xd8ba3757UL, 0xdafc890eUL, 0xdb3ee339UL, 0xde71f5bcUL, + 
0xdfb39f8bUL, 0xddf521d2UL, 0xdc374be5UL, 0xd76b0cd8UL, 0xd6a966efUL, + 0xd4efd8b6UL, 0xd52db281UL, 0xd062a404UL, 0xd1a0ce33UL, 0xd3e6706aUL, + 0xd2241a5dUL, 0xc55efe10UL, 0xc49c9427UL, 0xc6da2a7eUL, 0xc7184049UL, + 0xc25756ccUL, 0xc3953cfbUL, 0xc1d382a2UL, 0xc011e895UL, 0xcb4dafa8UL, + 0xca8fc59fUL, 0xc8c97bc6UL, 0xc90b11f1UL, 0xcc440774UL, 0xcd866d43UL, + 0xcfc0d31aUL, 0xce02b92dUL, 0x91af9640UL, 0x906dfc77UL, 0x922b422eUL, + 0x93e92819UL, 0x96a63e9cUL, 0x976454abUL, 0x9522eaf2UL, 0x94e080c5UL, + 0x9fbcc7f8UL, 0x9e7eadcfUL, 0x9c381396UL, 0x9dfa79a1UL, 0x98b56f24UL, + 0x99770513UL, 0x9b31bb4aUL, 0x9af3d17dUL, 0x8d893530UL, 0x8c4b5f07UL, + 0x8e0de15eUL, 0x8fcf8b69UL, 0x8a809decUL, 0x8b42f7dbUL, 0x89044982UL, + 0x88c623b5UL, 0x839a6488UL, 0x82580ebfUL, 0x801eb0e6UL, 0x81dcdad1UL, + 0x8493cc54UL, 0x8551a663UL, 0x8717183aUL, 0x86d5720dUL, 0xa9e2d0a0UL, + 0xa820ba97UL, 0xaa6604ceUL, 0xaba46ef9UL, 0xaeeb787cUL, 0xaf29124bUL, + 0xad6fac12UL, 0xacadc625UL, 0xa7f18118UL, 0xa633eb2fUL, 0xa4755576UL, + 0xa5b73f41UL, 0xa0f829c4UL, 0xa13a43f3UL, 0xa37cfdaaUL, 0xa2be979dUL, + 0xb5c473d0UL, 0xb40619e7UL, 0xb640a7beUL, 0xb782cd89UL, 0xb2cddb0cUL, + 0xb30fb13bUL, 0xb1490f62UL, 0xb08b6555UL, 0xbbd72268UL, 0xba15485fUL, + 0xb853f606UL, 0xb9919c31UL, 0xbcde8ab4UL, 0xbd1ce083UL, 0xbf5a5edaUL, + 0xbe9834edUL + }, + { + 0x00000000UL, 0xb8bc6765UL, 0xaa09c88bUL, 0x12b5afeeUL, 0x8f629757UL, + 0x37def032UL, 0x256b5fdcUL, 0x9dd738b9UL, 0xc5b428efUL, 0x7d084f8aUL, + 0x6fbde064UL, 0xd7018701UL, 0x4ad6bfb8UL, 0xf26ad8ddUL, 0xe0df7733UL, + 0x58631056UL, 0x5019579fUL, 0xe8a530faUL, 0xfa109f14UL, 0x42acf871UL, + 0xdf7bc0c8UL, 0x67c7a7adUL, 0x75720843UL, 0xcdce6f26UL, 0x95ad7f70UL, + 0x2d111815UL, 0x3fa4b7fbUL, 0x8718d09eUL, 0x1acfe827UL, 0xa2738f42UL, + 0xb0c620acUL, 0x087a47c9UL, 0xa032af3eUL, 0x188ec85bUL, 0x0a3b67b5UL, + 0xb28700d0UL, 0x2f503869UL, 0x97ec5f0cUL, 0x8559f0e2UL, 0x3de59787UL, + 0x658687d1UL, 0xdd3ae0b4UL, 0xcf8f4f5aUL, 0x7733283fUL, 0xeae41086UL, + 0x525877e3UL, 0x40edd80dUL, 0xf851bf68UL, 0xf02bf8a1UL, 0x48979fc4UL, + 0x5a22302aUL, 0xe29e574fUL, 0x7f496ff6UL, 0xc7f50893UL, 0xd540a77dUL, + 0x6dfcc018UL, 0x359fd04eUL, 0x8d23b72bUL, 0x9f9618c5UL, 0x272a7fa0UL, + 0xbafd4719UL, 0x0241207cUL, 0x10f48f92UL, 0xa848e8f7UL, 0x9b14583dUL, + 0x23a83f58UL, 0x311d90b6UL, 0x89a1f7d3UL, 0x1476cf6aUL, 0xaccaa80fUL, + 0xbe7f07e1UL, 0x06c36084UL, 0x5ea070d2UL, 0xe61c17b7UL, 0xf4a9b859UL, + 0x4c15df3cUL, 0xd1c2e785UL, 0x697e80e0UL, 0x7bcb2f0eUL, 0xc377486bUL, + 0xcb0d0fa2UL, 0x73b168c7UL, 0x6104c729UL, 0xd9b8a04cUL, 0x446f98f5UL, + 0xfcd3ff90UL, 0xee66507eUL, 0x56da371bUL, 0x0eb9274dUL, 0xb6054028UL, + 0xa4b0efc6UL, 0x1c0c88a3UL, 0x81dbb01aUL, 0x3967d77fUL, 0x2bd27891UL, + 0x936e1ff4UL, 0x3b26f703UL, 0x839a9066UL, 0x912f3f88UL, 0x299358edUL, + 0xb4446054UL, 0x0cf80731UL, 0x1e4da8dfUL, 0xa6f1cfbaUL, 0xfe92dfecUL, + 0x462eb889UL, 0x549b1767UL, 0xec277002UL, 0x71f048bbUL, 0xc94c2fdeUL, + 0xdbf98030UL, 0x6345e755UL, 0x6b3fa09cUL, 0xd383c7f9UL, 0xc1366817UL, + 0x798a0f72UL, 0xe45d37cbUL, 0x5ce150aeUL, 0x4e54ff40UL, 0xf6e89825UL, + 0xae8b8873UL, 0x1637ef16UL, 0x048240f8UL, 0xbc3e279dUL, 0x21e91f24UL, + 0x99557841UL, 0x8be0d7afUL, 0x335cb0caUL, 0xed59b63bUL, 0x55e5d15eUL, + 0x47507eb0UL, 0xffec19d5UL, 0x623b216cUL, 0xda874609UL, 0xc832e9e7UL, + 0x708e8e82UL, 0x28ed9ed4UL, 0x9051f9b1UL, 0x82e4565fUL, 0x3a58313aUL, + 0xa78f0983UL, 0x1f336ee6UL, 0x0d86c108UL, 0xb53aa66dUL, 0xbd40e1a4UL, + 0x05fc86c1UL, 0x1749292fUL, 0xaff54e4aUL, 0x322276f3UL, 0x8a9e1196UL, + 0x982bbe78UL, 0x2097d91dUL, 0x78f4c94bUL, 0xc048ae2eUL, 0xd2fd01c0UL, + 
0x6a4166a5UL, 0xf7965e1cUL, 0x4f2a3979UL, 0x5d9f9697UL, 0xe523f1f2UL, + 0x4d6b1905UL, 0xf5d77e60UL, 0xe762d18eUL, 0x5fdeb6ebUL, 0xc2098e52UL, + 0x7ab5e937UL, 0x680046d9UL, 0xd0bc21bcUL, 0x88df31eaUL, 0x3063568fUL, + 0x22d6f961UL, 0x9a6a9e04UL, 0x07bda6bdUL, 0xbf01c1d8UL, 0xadb46e36UL, + 0x15080953UL, 0x1d724e9aUL, 0xa5ce29ffUL, 0xb77b8611UL, 0x0fc7e174UL, + 0x9210d9cdUL, 0x2aacbea8UL, 0x38191146UL, 0x80a57623UL, 0xd8c66675UL, + 0x607a0110UL, 0x72cfaefeUL, 0xca73c99bUL, 0x57a4f122UL, 0xef189647UL, + 0xfdad39a9UL, 0x45115eccUL, 0x764dee06UL, 0xcef18963UL, 0xdc44268dUL, + 0x64f841e8UL, 0xf92f7951UL, 0x41931e34UL, 0x5326b1daUL, 0xeb9ad6bfUL, + 0xb3f9c6e9UL, 0x0b45a18cUL, 0x19f00e62UL, 0xa14c6907UL, 0x3c9b51beUL, + 0x842736dbUL, 0x96929935UL, 0x2e2efe50UL, 0x2654b999UL, 0x9ee8defcUL, + 0x8c5d7112UL, 0x34e11677UL, 0xa9362eceUL, 0x118a49abUL, 0x033fe645UL, + 0xbb838120UL, 0xe3e09176UL, 0x5b5cf613UL, 0x49e959fdUL, 0xf1553e98UL, + 0x6c820621UL, 0xd43e6144UL, 0xc68bceaaUL, 0x7e37a9cfUL, 0xd67f4138UL, + 0x6ec3265dUL, 0x7c7689b3UL, 0xc4caeed6UL, 0x591dd66fUL, 0xe1a1b10aUL, + 0xf3141ee4UL, 0x4ba87981UL, 0x13cb69d7UL, 0xab770eb2UL, 0xb9c2a15cUL, + 0x017ec639UL, 0x9ca9fe80UL, 0x241599e5UL, 0x36a0360bUL, 0x8e1c516eUL, + 0x866616a7UL, 0x3eda71c2UL, 0x2c6fde2cUL, 0x94d3b949UL, 0x090481f0UL, + 0xb1b8e695UL, 0xa30d497bUL, 0x1bb12e1eUL, 0x43d23e48UL, 0xfb6e592dUL, + 0xe9dbf6c3UL, 0x516791a6UL, 0xccb0a91fUL, 0x740cce7aUL, 0x66b96194UL, + 0xde0506f1UL + }, + { + 0x00000000UL, 0x96300777UL, 0x2c610eeeUL, 0xba510999UL, 0x19c46d07UL, + 0x8ff46a70UL, 0x35a563e9UL, 0xa395649eUL, 0x3288db0eUL, 0xa4b8dc79UL, + 0x1ee9d5e0UL, 0x88d9d297UL, 0x2b4cb609UL, 0xbd7cb17eUL, 0x072db8e7UL, + 0x911dbf90UL, 0x6410b71dUL, 0xf220b06aUL, 0x4871b9f3UL, 0xde41be84UL, + 0x7dd4da1aUL, 0xebe4dd6dUL, 0x51b5d4f4UL, 0xc785d383UL, 0x56986c13UL, + 0xc0a86b64UL, 0x7af962fdUL, 0xecc9658aUL, 0x4f5c0114UL, 0xd96c0663UL, + 0x633d0ffaUL, 0xf50d088dUL, 0xc8206e3bUL, 0x5e10694cUL, 0xe44160d5UL, + 0x727167a2UL, 0xd1e4033cUL, 0x47d4044bUL, 0xfd850dd2UL, 0x6bb50aa5UL, + 0xfaa8b535UL, 0x6c98b242UL, 0xd6c9bbdbUL, 0x40f9bcacUL, 0xe36cd832UL, + 0x755cdf45UL, 0xcf0dd6dcUL, 0x593dd1abUL, 0xac30d926UL, 0x3a00de51UL, + 0x8051d7c8UL, 0x1661d0bfUL, 0xb5f4b421UL, 0x23c4b356UL, 0x9995bacfUL, + 0x0fa5bdb8UL, 0x9eb80228UL, 0x0888055fUL, 0xb2d90cc6UL, 0x24e90bb1UL, + 0x877c6f2fUL, 0x114c6858UL, 0xab1d61c1UL, 0x3d2d66b6UL, 0x9041dc76UL, + 0x0671db01UL, 0xbc20d298UL, 0x2a10d5efUL, 0x8985b171UL, 0x1fb5b606UL, + 0xa5e4bf9fUL, 0x33d4b8e8UL, 0xa2c90778UL, 0x34f9000fUL, 0x8ea80996UL, + 0x18980ee1UL, 0xbb0d6a7fUL, 0x2d3d6d08UL, 0x976c6491UL, 0x015c63e6UL, + 0xf4516b6bUL, 0x62616c1cUL, 0xd8306585UL, 0x4e0062f2UL, 0xed95066cUL, + 0x7ba5011bUL, 0xc1f40882UL, 0x57c40ff5UL, 0xc6d9b065UL, 0x50e9b712UL, + 0xeab8be8bUL, 0x7c88b9fcUL, 0xdf1ddd62UL, 0x492dda15UL, 0xf37cd38cUL, + 0x654cd4fbUL, 0x5861b24dUL, 0xce51b53aUL, 0x7400bca3UL, 0xe230bbd4UL, + 0x41a5df4aUL, 0xd795d83dUL, 0x6dc4d1a4UL, 0xfbf4d6d3UL, 0x6ae96943UL, + 0xfcd96e34UL, 0x468867adUL, 0xd0b860daUL, 0x732d0444UL, 0xe51d0333UL, + 0x5f4c0aaaUL, 0xc97c0dddUL, 0x3c710550UL, 0xaa410227UL, 0x10100bbeUL, + 0x86200cc9UL, 0x25b56857UL, 0xb3856f20UL, 0x09d466b9UL, 0x9fe461ceUL, + 0x0ef9de5eUL, 0x98c9d929UL, 0x2298d0b0UL, 0xb4a8d7c7UL, 0x173db359UL, + 0x810db42eUL, 0x3b5cbdb7UL, 0xad6cbac0UL, 0x2083b8edUL, 0xb6b3bf9aUL, + 0x0ce2b603UL, 0x9ad2b174UL, 0x3947d5eaUL, 0xaf77d29dUL, 0x1526db04UL, + 0x8316dc73UL, 0x120b63e3UL, 0x843b6494UL, 0x3e6a6d0dUL, 0xa85a6a7aUL, + 0x0bcf0ee4UL, 0x9dff0993UL, 0x27ae000aUL, 0xb19e077dUL, 0x44930ff0UL, + 
0xd2a30887UL, 0x68f2011eUL, 0xfec20669UL, 0x5d5762f7UL, 0xcb676580UL, + 0x71366c19UL, 0xe7066b6eUL, 0x761bd4feUL, 0xe02bd389UL, 0x5a7ada10UL, + 0xcc4add67UL, 0x6fdfb9f9UL, 0xf9efbe8eUL, 0x43beb717UL, 0xd58eb060UL, + 0xe8a3d6d6UL, 0x7e93d1a1UL, 0xc4c2d838UL, 0x52f2df4fUL, 0xf167bbd1UL, + 0x6757bca6UL, 0xdd06b53fUL, 0x4b36b248UL, 0xda2b0dd8UL, 0x4c1b0aafUL, + 0xf64a0336UL, 0x607a0441UL, 0xc3ef60dfUL, 0x55df67a8UL, 0xef8e6e31UL, + 0x79be6946UL, 0x8cb361cbUL, 0x1a8366bcUL, 0xa0d26f25UL, 0x36e26852UL, + 0x95770cccUL, 0x03470bbbUL, 0xb9160222UL, 0x2f260555UL, 0xbe3bbac5UL, + 0x280bbdb2UL, 0x925ab42bUL, 0x046ab35cUL, 0xa7ffd7c2UL, 0x31cfd0b5UL, + 0x8b9ed92cUL, 0x1daede5bUL, 0xb0c2649bUL, 0x26f263ecUL, 0x9ca36a75UL, + 0x0a936d02UL, 0xa906099cUL, 0x3f360eebUL, 0x85670772UL, 0x13570005UL, + 0x824abf95UL, 0x147ab8e2UL, 0xae2bb17bUL, 0x381bb60cUL, 0x9b8ed292UL, + 0x0dbed5e5UL, 0xb7efdc7cUL, 0x21dfdb0bUL, 0xd4d2d386UL, 0x42e2d4f1UL, + 0xf8b3dd68UL, 0x6e83da1fUL, 0xcd16be81UL, 0x5b26b9f6UL, 0xe177b06fUL, + 0x7747b718UL, 0xe65a0888UL, 0x706a0fffUL, 0xca3b0666UL, 0x5c0b0111UL, + 0xff9e658fUL, 0x69ae62f8UL, 0xd3ff6b61UL, 0x45cf6c16UL, 0x78e20aa0UL, + 0xeed20dd7UL, 0x5483044eUL, 0xc2b30339UL, 0x612667a7UL, 0xf71660d0UL, + 0x4d476949UL, 0xdb776e3eUL, 0x4a6ad1aeUL, 0xdc5ad6d9UL, 0x660bdf40UL, + 0xf03bd837UL, 0x53aebca9UL, 0xc59ebbdeUL, 0x7fcfb247UL, 0xe9ffb530UL, + 0x1cf2bdbdUL, 0x8ac2bacaUL, 0x3093b353UL, 0xa6a3b424UL, 0x0536d0baUL, + 0x9306d7cdUL, 0x2957de54UL, 0xbf67d923UL, 0x2e7a66b3UL, 0xb84a61c4UL, + 0x021b685dUL, 0x942b6f2aUL, 0x37be0bb4UL, 0xa18e0cc3UL, 0x1bdf055aUL, + 0x8def022dUL + }, + { + 0x00000000UL, 0x41311b19UL, 0x82623632UL, 0xc3532d2bUL, 0x04c56c64UL, + 0x45f4777dUL, 0x86a75a56UL, 0xc796414fUL, 0x088ad9c8UL, 0x49bbc2d1UL, + 0x8ae8effaUL, 0xcbd9f4e3UL, 0x0c4fb5acUL, 0x4d7eaeb5UL, 0x8e2d839eUL, + 0xcf1c9887UL, 0x5112c24aUL, 0x1023d953UL, 0xd370f478UL, 0x9241ef61UL, + 0x55d7ae2eUL, 0x14e6b537UL, 0xd7b5981cUL, 0x96848305UL, 0x59981b82UL, + 0x18a9009bUL, 0xdbfa2db0UL, 0x9acb36a9UL, 0x5d5d77e6UL, 0x1c6c6cffUL, + 0xdf3f41d4UL, 0x9e0e5acdUL, 0xa2248495UL, 0xe3159f8cUL, 0x2046b2a7UL, + 0x6177a9beUL, 0xa6e1e8f1UL, 0xe7d0f3e8UL, 0x2483dec3UL, 0x65b2c5daUL, + 0xaaae5d5dUL, 0xeb9f4644UL, 0x28cc6b6fUL, 0x69fd7076UL, 0xae6b3139UL, + 0xef5a2a20UL, 0x2c09070bUL, 0x6d381c12UL, 0xf33646dfUL, 0xb2075dc6UL, + 0x715470edUL, 0x30656bf4UL, 0xf7f32abbUL, 0xb6c231a2UL, 0x75911c89UL, + 0x34a00790UL, 0xfbbc9f17UL, 0xba8d840eUL, 0x79dea925UL, 0x38efb23cUL, + 0xff79f373UL, 0xbe48e86aUL, 0x7d1bc541UL, 0x3c2ade58UL, 0x054f79f0UL, + 0x447e62e9UL, 0x872d4fc2UL, 0xc61c54dbUL, 0x018a1594UL, 0x40bb0e8dUL, + 0x83e823a6UL, 0xc2d938bfUL, 0x0dc5a038UL, 0x4cf4bb21UL, 0x8fa7960aUL, + 0xce968d13UL, 0x0900cc5cUL, 0x4831d745UL, 0x8b62fa6eUL, 0xca53e177UL, + 0x545dbbbaUL, 0x156ca0a3UL, 0xd63f8d88UL, 0x970e9691UL, 0x5098d7deUL, + 0x11a9ccc7UL, 0xd2fae1ecUL, 0x93cbfaf5UL, 0x5cd76272UL, 0x1de6796bUL, + 0xdeb55440UL, 0x9f844f59UL, 0x58120e16UL, 0x1923150fUL, 0xda703824UL, + 0x9b41233dUL, 0xa76bfd65UL, 0xe65ae67cUL, 0x2509cb57UL, 0x6438d04eUL, + 0xa3ae9101UL, 0xe29f8a18UL, 0x21cca733UL, 0x60fdbc2aUL, 0xafe124adUL, + 0xeed03fb4UL, 0x2d83129fUL, 0x6cb20986UL, 0xab2448c9UL, 0xea1553d0UL, + 0x29467efbUL, 0x687765e2UL, 0xf6793f2fUL, 0xb7482436UL, 0x741b091dUL, + 0x352a1204UL, 0xf2bc534bUL, 0xb38d4852UL, 0x70de6579UL, 0x31ef7e60UL, + 0xfef3e6e7UL, 0xbfc2fdfeUL, 0x7c91d0d5UL, 0x3da0cbccUL, 0xfa368a83UL, + 0xbb07919aUL, 0x7854bcb1UL, 0x3965a7a8UL, 0x4b98833bUL, 0x0aa99822UL, + 0xc9fab509UL, 0x88cbae10UL, 0x4f5def5fUL, 0x0e6cf446UL, 0xcd3fd96dUL, + 
0x8c0ec274UL, 0x43125af3UL, 0x022341eaUL, 0xc1706cc1UL, 0x804177d8UL, + 0x47d73697UL, 0x06e62d8eUL, 0xc5b500a5UL, 0x84841bbcUL, 0x1a8a4171UL, + 0x5bbb5a68UL, 0x98e87743UL, 0xd9d96c5aUL, 0x1e4f2d15UL, 0x5f7e360cUL, + 0x9c2d1b27UL, 0xdd1c003eUL, 0x120098b9UL, 0x533183a0UL, 0x9062ae8bUL, + 0xd153b592UL, 0x16c5f4ddUL, 0x57f4efc4UL, 0x94a7c2efUL, 0xd596d9f6UL, + 0xe9bc07aeUL, 0xa88d1cb7UL, 0x6bde319cUL, 0x2aef2a85UL, 0xed796bcaUL, + 0xac4870d3UL, 0x6f1b5df8UL, 0x2e2a46e1UL, 0xe136de66UL, 0xa007c57fUL, + 0x6354e854UL, 0x2265f34dUL, 0xe5f3b202UL, 0xa4c2a91bUL, 0x67918430UL, + 0x26a09f29UL, 0xb8aec5e4UL, 0xf99fdefdUL, 0x3accf3d6UL, 0x7bfde8cfUL, + 0xbc6ba980UL, 0xfd5ab299UL, 0x3e099fb2UL, 0x7f3884abUL, 0xb0241c2cUL, + 0xf1150735UL, 0x32462a1eUL, 0x73773107UL, 0xb4e17048UL, 0xf5d06b51UL, + 0x3683467aUL, 0x77b25d63UL, 0x4ed7facbUL, 0x0fe6e1d2UL, 0xccb5ccf9UL, + 0x8d84d7e0UL, 0x4a1296afUL, 0x0b238db6UL, 0xc870a09dUL, 0x8941bb84UL, + 0x465d2303UL, 0x076c381aUL, 0xc43f1531UL, 0x850e0e28UL, 0x42984f67UL, + 0x03a9547eUL, 0xc0fa7955UL, 0x81cb624cUL, 0x1fc53881UL, 0x5ef42398UL, + 0x9da70eb3UL, 0xdc9615aaUL, 0x1b0054e5UL, 0x5a314ffcUL, 0x996262d7UL, + 0xd85379ceUL, 0x174fe149UL, 0x567efa50UL, 0x952dd77bUL, 0xd41ccc62UL, + 0x138a8d2dUL, 0x52bb9634UL, 0x91e8bb1fUL, 0xd0d9a006UL, 0xecf37e5eUL, + 0xadc26547UL, 0x6e91486cUL, 0x2fa05375UL, 0xe836123aUL, 0xa9070923UL, + 0x6a542408UL, 0x2b653f11UL, 0xe479a796UL, 0xa548bc8fUL, 0x661b91a4UL, + 0x272a8abdUL, 0xe0bccbf2UL, 0xa18dd0ebUL, 0x62defdc0UL, 0x23efe6d9UL, + 0xbde1bc14UL, 0xfcd0a70dUL, 0x3f838a26UL, 0x7eb2913fUL, 0xb924d070UL, + 0xf815cb69UL, 0x3b46e642UL, 0x7a77fd5bUL, 0xb56b65dcUL, 0xf45a7ec5UL, + 0x370953eeUL, 0x763848f7UL, 0xb1ae09b8UL, 0xf09f12a1UL, 0x33cc3f8aUL, + 0x72fd2493UL + }, + { + 0x00000000UL, 0x376ac201UL, 0x6ed48403UL, 0x59be4602UL, 0xdca80907UL, + 0xebc2cb06UL, 0xb27c8d04UL, 0x85164f05UL, 0xb851130eUL, 0x8f3bd10fUL, + 0xd685970dUL, 0xe1ef550cUL, 0x64f91a09UL, 0x5393d808UL, 0x0a2d9e0aUL, + 0x3d475c0bUL, 0x70a3261cUL, 0x47c9e41dUL, 0x1e77a21fUL, 0x291d601eUL, + 0xac0b2f1bUL, 0x9b61ed1aUL, 0xc2dfab18UL, 0xf5b56919UL, 0xc8f23512UL, + 0xff98f713UL, 0xa626b111UL, 0x914c7310UL, 0x145a3c15UL, 0x2330fe14UL, + 0x7a8eb816UL, 0x4de47a17UL, 0xe0464d38UL, 0xd72c8f39UL, 0x8e92c93bUL, + 0xb9f80b3aUL, 0x3cee443fUL, 0x0b84863eUL, 0x523ac03cUL, 0x6550023dUL, + 0x58175e36UL, 0x6f7d9c37UL, 0x36c3da35UL, 0x01a91834UL, 0x84bf5731UL, + 0xb3d59530UL, 0xea6bd332UL, 0xdd011133UL, 0x90e56b24UL, 0xa78fa925UL, + 0xfe31ef27UL, 0xc95b2d26UL, 0x4c4d6223UL, 0x7b27a022UL, 0x2299e620UL, + 0x15f32421UL, 0x28b4782aUL, 0x1fdeba2bUL, 0x4660fc29UL, 0x710a3e28UL, + 0xf41c712dUL, 0xc376b32cUL, 0x9ac8f52eUL, 0xada2372fUL, 0xc08d9a70UL, + 0xf7e75871UL, 0xae591e73UL, 0x9933dc72UL, 0x1c259377UL, 0x2b4f5176UL, + 0x72f11774UL, 0x459bd575UL, 0x78dc897eUL, 0x4fb64b7fUL, 0x16080d7dUL, + 0x2162cf7cUL, 0xa4748079UL, 0x931e4278UL, 0xcaa0047aUL, 0xfdcac67bUL, + 0xb02ebc6cUL, 0x87447e6dUL, 0xdefa386fUL, 0xe990fa6eUL, 0x6c86b56bUL, + 0x5bec776aUL, 0x02523168UL, 0x3538f369UL, 0x087faf62UL, 0x3f156d63UL, + 0x66ab2b61UL, 0x51c1e960UL, 0xd4d7a665UL, 0xe3bd6464UL, 0xba032266UL, + 0x8d69e067UL, 0x20cbd748UL, 0x17a11549UL, 0x4e1f534bUL, 0x7975914aUL, + 0xfc63de4fUL, 0xcb091c4eUL, 0x92b75a4cUL, 0xa5dd984dUL, 0x989ac446UL, + 0xaff00647UL, 0xf64e4045UL, 0xc1248244UL, 0x4432cd41UL, 0x73580f40UL, + 0x2ae64942UL, 0x1d8c8b43UL, 0x5068f154UL, 0x67023355UL, 0x3ebc7557UL, + 0x09d6b756UL, 0x8cc0f853UL, 0xbbaa3a52UL, 0xe2147c50UL, 0xd57ebe51UL, + 0xe839e25aUL, 0xdf53205bUL, 0x86ed6659UL, 0xb187a458UL, 0x3491eb5dUL, + 
0x03fb295cUL, 0x5a456f5eUL, 0x6d2fad5fUL, 0x801b35e1UL, 0xb771f7e0UL, + 0xeecfb1e2UL, 0xd9a573e3UL, 0x5cb33ce6UL, 0x6bd9fee7UL, 0x3267b8e5UL, + 0x050d7ae4UL, 0x384a26efUL, 0x0f20e4eeUL, 0x569ea2ecUL, 0x61f460edUL, + 0xe4e22fe8UL, 0xd388ede9UL, 0x8a36abebUL, 0xbd5c69eaUL, 0xf0b813fdUL, + 0xc7d2d1fcUL, 0x9e6c97feUL, 0xa90655ffUL, 0x2c101afaUL, 0x1b7ad8fbUL, + 0x42c49ef9UL, 0x75ae5cf8UL, 0x48e900f3UL, 0x7f83c2f2UL, 0x263d84f0UL, + 0x115746f1UL, 0x944109f4UL, 0xa32bcbf5UL, 0xfa958df7UL, 0xcdff4ff6UL, + 0x605d78d9UL, 0x5737bad8UL, 0x0e89fcdaUL, 0x39e33edbUL, 0xbcf571deUL, + 0x8b9fb3dfUL, 0xd221f5ddUL, 0xe54b37dcUL, 0xd80c6bd7UL, 0xef66a9d6UL, + 0xb6d8efd4UL, 0x81b22dd5UL, 0x04a462d0UL, 0x33cea0d1UL, 0x6a70e6d3UL, + 0x5d1a24d2UL, 0x10fe5ec5UL, 0x27949cc4UL, 0x7e2adac6UL, 0x494018c7UL, + 0xcc5657c2UL, 0xfb3c95c3UL, 0xa282d3c1UL, 0x95e811c0UL, 0xa8af4dcbUL, + 0x9fc58fcaUL, 0xc67bc9c8UL, 0xf1110bc9UL, 0x740744ccUL, 0x436d86cdUL, + 0x1ad3c0cfUL, 0x2db902ceUL, 0x4096af91UL, 0x77fc6d90UL, 0x2e422b92UL, + 0x1928e993UL, 0x9c3ea696UL, 0xab546497UL, 0xf2ea2295UL, 0xc580e094UL, + 0xf8c7bc9fUL, 0xcfad7e9eUL, 0x9613389cUL, 0xa179fa9dUL, 0x246fb598UL, + 0x13057799UL, 0x4abb319bUL, 0x7dd1f39aUL, 0x3035898dUL, 0x075f4b8cUL, + 0x5ee10d8eUL, 0x698bcf8fUL, 0xec9d808aUL, 0xdbf7428bUL, 0x82490489UL, + 0xb523c688UL, 0x88649a83UL, 0xbf0e5882UL, 0xe6b01e80UL, 0xd1dadc81UL, + 0x54cc9384UL, 0x63a65185UL, 0x3a181787UL, 0x0d72d586UL, 0xa0d0e2a9UL, + 0x97ba20a8UL, 0xce0466aaUL, 0xf96ea4abUL, 0x7c78ebaeUL, 0x4b1229afUL, + 0x12ac6fadUL, 0x25c6adacUL, 0x1881f1a7UL, 0x2feb33a6UL, 0x765575a4UL, + 0x413fb7a5UL, 0xc429f8a0UL, 0xf3433aa1UL, 0xaafd7ca3UL, 0x9d97bea2UL, + 0xd073c4b5UL, 0xe71906b4UL, 0xbea740b6UL, 0x89cd82b7UL, 0x0cdbcdb2UL, + 0x3bb10fb3UL, 0x620f49b1UL, 0x55658bb0UL, 0x6822d7bbUL, 0x5f4815baUL, + 0x06f653b8UL, 0x319c91b9UL, 0xb48adebcUL, 0x83e01cbdUL, 0xda5e5abfUL, + 0xed3498beUL + }, + { + 0x00000000UL, 0x6567bcb8UL, 0x8bc809aaUL, 0xeeafb512UL, 0x5797628fUL, + 0x32f0de37UL, 0xdc5f6b25UL, 0xb938d79dUL, 0xef28b4c5UL, 0x8a4f087dUL, + 0x64e0bd6fUL, 0x018701d7UL, 0xb8bfd64aUL, 0xddd86af2UL, 0x3377dfe0UL, + 0x56106358UL, 0x9f571950UL, 0xfa30a5e8UL, 0x149f10faUL, 0x71f8ac42UL, + 0xc8c07bdfUL, 0xada7c767UL, 0x43087275UL, 0x266fcecdUL, 0x707fad95UL, + 0x1518112dUL, 0xfbb7a43fUL, 0x9ed01887UL, 0x27e8cf1aUL, 0x428f73a2UL, + 0xac20c6b0UL, 0xc9477a08UL, 0x3eaf32a0UL, 0x5bc88e18UL, 0xb5673b0aUL, + 0xd00087b2UL, 0x6938502fUL, 0x0c5fec97UL, 0xe2f05985UL, 0x8797e53dUL, + 0xd1878665UL, 0xb4e03addUL, 0x5a4f8fcfUL, 0x3f283377UL, 0x8610e4eaUL, + 0xe3775852UL, 0x0dd8ed40UL, 0x68bf51f8UL, 0xa1f82bf0UL, 0xc49f9748UL, + 0x2a30225aUL, 0x4f579ee2UL, 0xf66f497fUL, 0x9308f5c7UL, 0x7da740d5UL, + 0x18c0fc6dUL, 0x4ed09f35UL, 0x2bb7238dUL, 0xc518969fUL, 0xa07f2a27UL, + 0x1947fdbaUL, 0x7c204102UL, 0x928ff410UL, 0xf7e848a8UL, 0x3d58149bUL, + 0x583fa823UL, 0xb6901d31UL, 0xd3f7a189UL, 0x6acf7614UL, 0x0fa8caacUL, + 0xe1077fbeUL, 0x8460c306UL, 0xd270a05eUL, 0xb7171ce6UL, 0x59b8a9f4UL, + 0x3cdf154cUL, 0x85e7c2d1UL, 0xe0807e69UL, 0x0e2fcb7bUL, 0x6b4877c3UL, + 0xa20f0dcbUL, 0xc768b173UL, 0x29c70461UL, 0x4ca0b8d9UL, 0xf5986f44UL, + 0x90ffd3fcUL, 0x7e5066eeUL, 0x1b37da56UL, 0x4d27b90eUL, 0x284005b6UL, + 0xc6efb0a4UL, 0xa3880c1cUL, 0x1ab0db81UL, 0x7fd76739UL, 0x9178d22bUL, + 0xf41f6e93UL, 0x03f7263bUL, 0x66909a83UL, 0x883f2f91UL, 0xed589329UL, + 0x546044b4UL, 0x3107f80cUL, 0xdfa84d1eUL, 0xbacff1a6UL, 0xecdf92feUL, + 0x89b82e46UL, 0x67179b54UL, 0x027027ecUL, 0xbb48f071UL, 0xde2f4cc9UL, + 0x3080f9dbUL, 0x55e74563UL, 0x9ca03f6bUL, 0xf9c783d3UL, 0x176836c1UL, + 
0x720f8a79UL, 0xcb375de4UL, 0xae50e15cUL, 0x40ff544eUL, 0x2598e8f6UL, + 0x73888baeUL, 0x16ef3716UL, 0xf8408204UL, 0x9d273ebcUL, 0x241fe921UL, + 0x41785599UL, 0xafd7e08bUL, 0xcab05c33UL, 0x3bb659edUL, 0x5ed1e555UL, + 0xb07e5047UL, 0xd519ecffUL, 0x6c213b62UL, 0x094687daUL, 0xe7e932c8UL, + 0x828e8e70UL, 0xd49eed28UL, 0xb1f95190UL, 0x5f56e482UL, 0x3a31583aUL, + 0x83098fa7UL, 0xe66e331fUL, 0x08c1860dUL, 0x6da63ab5UL, 0xa4e140bdUL, + 0xc186fc05UL, 0x2f294917UL, 0x4a4ef5afUL, 0xf3762232UL, 0x96119e8aUL, + 0x78be2b98UL, 0x1dd99720UL, 0x4bc9f478UL, 0x2eae48c0UL, 0xc001fdd2UL, + 0xa566416aUL, 0x1c5e96f7UL, 0x79392a4fUL, 0x97969f5dUL, 0xf2f123e5UL, + 0x05196b4dUL, 0x607ed7f5UL, 0x8ed162e7UL, 0xebb6de5fUL, 0x528e09c2UL, + 0x37e9b57aUL, 0xd9460068UL, 0xbc21bcd0UL, 0xea31df88UL, 0x8f566330UL, + 0x61f9d622UL, 0x049e6a9aUL, 0xbda6bd07UL, 0xd8c101bfUL, 0x366eb4adUL, + 0x53090815UL, 0x9a4e721dUL, 0xff29cea5UL, 0x11867bb7UL, 0x74e1c70fUL, + 0xcdd91092UL, 0xa8beac2aUL, 0x46111938UL, 0x2376a580UL, 0x7566c6d8UL, + 0x10017a60UL, 0xfeaecf72UL, 0x9bc973caUL, 0x22f1a457UL, 0x479618efUL, + 0xa939adfdUL, 0xcc5e1145UL, 0x06ee4d76UL, 0x6389f1ceUL, 0x8d2644dcUL, + 0xe841f864UL, 0x51792ff9UL, 0x341e9341UL, 0xdab12653UL, 0xbfd69aebUL, + 0xe9c6f9b3UL, 0x8ca1450bUL, 0x620ef019UL, 0x07694ca1UL, 0xbe519b3cUL, + 0xdb362784UL, 0x35999296UL, 0x50fe2e2eUL, 0x99b95426UL, 0xfcdee89eUL, + 0x12715d8cUL, 0x7716e134UL, 0xce2e36a9UL, 0xab498a11UL, 0x45e63f03UL, + 0x208183bbUL, 0x7691e0e3UL, 0x13f65c5bUL, 0xfd59e949UL, 0x983e55f1UL, + 0x2106826cUL, 0x44613ed4UL, 0xaace8bc6UL, 0xcfa9377eUL, 0x38417fd6UL, + 0x5d26c36eUL, 0xb389767cUL, 0xd6eecac4UL, 0x6fd61d59UL, 0x0ab1a1e1UL, + 0xe41e14f3UL, 0x8179a84bUL, 0xd769cb13UL, 0xb20e77abUL, 0x5ca1c2b9UL, + 0x39c67e01UL, 0x80fea99cUL, 0xe5991524UL, 0x0b36a036UL, 0x6e511c8eUL, + 0xa7166686UL, 0xc271da3eUL, 0x2cde6f2cUL, 0x49b9d394UL, 0xf0810409UL, + 0x95e6b8b1UL, 0x7b490da3UL, 0x1e2eb11bUL, 0x483ed243UL, 0x2d596efbUL, + 0xc3f6dbe9UL, 0xa6916751UL, 0x1fa9b0ccUL, 0x7ace0c74UL, 0x9461b966UL, + 0xf10605deUL +#endif + } +}; ADDED compat/zlib/deflate.c Index: compat/zlib/deflate.c ================================================================== --- compat/zlib/deflate.c +++ compat/zlib/deflate.c @@ -0,0 +1,1967 @@ +/* deflate.c -- compress data using the deflation algorithm + * Copyright (C) 1995-2013 Jean-loup Gailly and Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* + * ALGORITHM + * + * The "deflation" process depends on being able to identify portions + * of the input text which are identical to earlier input (within a + * sliding window trailing behind the input currently being processed). + * + * The most straightforward technique turns out to be the fastest for + * most input files: try all possible matches and select the longest. + * The key feature of this algorithm is that insertions into the string + * dictionary are very simple and thus fast, and deletions are avoided + * completely. Insertions are performed at each input character, whereas + * string matches are performed only when the previous match ends. So it + * is preferable to spend more time in matches to allow very fast string + * insertions and avoid deletions. The matching algorithm for small + * strings is inspired from that of Rabin & Karp. A brute force approach + * is used to find longer strings when a small match has been found. + * A similar algorithm is used in comic (by Jan-Mark Wams) and freeze + * (by Leonid Broukhis). 
+ * A previous version of this file used a more sophisticated algorithm + * (by Fiala and Greene) which is guaranteed to run in linear amortized + * time, but has a larger average cost, uses more memory and is patented. + * However the F&G algorithm may be faster for some highly redundant + * files if the parameter max_chain_length (described below) is too large. + * + * ACKNOWLEDGEMENTS + * + * The idea of lazy evaluation of matches is due to Jan-Mark Wams, and + * I found it in 'freeze' written by Leonid Broukhis. + * Thanks to many people for bug reports and testing. + * + * REFERENCES + * + * Deutsch, L.P.,"DEFLATE Compressed Data Format Specification". + * Available in http://tools.ietf.org/html/rfc1951 + * + * A description of the Rabin and Karp algorithm is given in the book + * "Algorithms" by R. Sedgewick, Addison-Wesley, p252. + * + * Fiala,E.R., and Greene,D.H. + * Data Compression with Finite Windows, Comm.ACM, 32,4 (1989) 490-595 + * + */ + +/* @(#) $Id$ */ + +#include "deflate.h" + +const char deflate_copyright[] = + " deflate 1.2.8 Copyright 1995-2013 Jean-loup Gailly and Mark Adler "; +/* + If you use the zlib library in a product, an acknowledgment is welcome + in the documentation of your product. If for some reason you cannot + include such an acknowledgment, I would appreciate that you keep this + copyright string in the executable of your product. + */ + +/* =========================================================================== + * Function prototypes. + */ +typedef enum { + need_more, /* block not completed, need more input or more output */ + block_done, /* block flush performed */ + finish_started, /* finish started, need only more output at next deflate */ + finish_done /* finish done, accept no more input or output */ +} block_state; + +typedef block_state (*compress_func) OF((deflate_state *s, int flush)); +/* Compression function. Returns the block state after the call. */ + +local void fill_window OF((deflate_state *s)); +local block_state deflate_stored OF((deflate_state *s, int flush)); +local block_state deflate_fast OF((deflate_state *s, int flush)); +#ifndef FASTEST +local block_state deflate_slow OF((deflate_state *s, int flush)); +#endif +local block_state deflate_rle OF((deflate_state *s, int flush)); +local block_state deflate_huff OF((deflate_state *s, int flush)); +local void lm_init OF((deflate_state *s)); +local void putShortMSB OF((deflate_state *s, uInt b)); +local void flush_pending OF((z_streamp strm)); +local int read_buf OF((z_streamp strm, Bytef *buf, unsigned size)); +#ifdef ASMV + void match_init OF((void)); /* asm code initialization */ + uInt longest_match OF((deflate_state *s, IPos cur_match)); +#else +local uInt longest_match OF((deflate_state *s, IPos cur_match)); +#endif + +#ifdef DEBUG +local void check_match OF((deflate_state *s, IPos start, IPos match, + int length)); +#endif + +/* =========================================================================== + * Local data + */ + +#define NIL 0 +/* Tail of hash chains */ + +#ifndef TOO_FAR +# define TOO_FAR 4096 +#endif +/* Matches of length 3 are discarded if their distance exceeds TOO_FAR */ + +/* Values for max_lazy_match, good_match and max_chain_length, depending on + * the desired pack level (0..9). The values given below have been tuned to + * exclude worst case performance for pathological files. Better values may be + * found for specific files. 
+ */ +typedef struct config_s { + ush good_length; /* reduce lazy search above this match length */ + ush max_lazy; /* do not perform lazy search above this match length */ + ush nice_length; /* quit search above this match length */ + ush max_chain; + compress_func func; +} config; + +#ifdef FASTEST +local const config configuration_table[2] = { +/* good lazy nice chain */ +/* 0 */ {0, 0, 0, 0, deflate_stored}, /* store only */ +/* 1 */ {4, 4, 8, 4, deflate_fast}}; /* max speed, no lazy matches */ +#else +local const config configuration_table[10] = { +/* good lazy nice chain */ +/* 0 */ {0, 0, 0, 0, deflate_stored}, /* store only */ +/* 1 */ {4, 4, 8, 4, deflate_fast}, /* max speed, no lazy matches */ +/* 2 */ {4, 5, 16, 8, deflate_fast}, +/* 3 */ {4, 6, 32, 32, deflate_fast}, + +/* 4 */ {4, 4, 16, 16, deflate_slow}, /* lazy matches */ +/* 5 */ {8, 16, 32, 32, deflate_slow}, +/* 6 */ {8, 16, 128, 128, deflate_slow}, +/* 7 */ {8, 32, 128, 256, deflate_slow}, +/* 8 */ {32, 128, 258, 1024, deflate_slow}, +/* 9 */ {32, 258, 258, 4096, deflate_slow}}; /* max compression */ +#endif + +/* Note: the deflate() code requires max_lazy >= MIN_MATCH and max_chain >= 4 + * For deflate_fast() (levels <= 3) good is ignored and lazy has a different + * meaning. + */ + +#define EQUAL 0 +/* result of memcmp for equal strings */ + +#ifndef NO_DUMMY_DECL +struct static_tree_desc_s {int dummy;}; /* for buggy compilers */ +#endif + +/* rank Z_BLOCK between Z_NO_FLUSH and Z_PARTIAL_FLUSH */ +#define RANK(f) (((f) << 1) - ((f) > 4 ? 9 : 0)) + +/* =========================================================================== + * Update a hash value with the given input byte + * IN assertion: all calls to UPDATE_HASH are made with consecutive + * input characters, so that a running hash key can be computed from the + * previous key instead of complete recalculation each time. + */ +#define UPDATE_HASH(s,h,c) (h = (((h)<<s->hash_shift) ^ (c)) & s->hash_mask) + + +/* =========================================================================== + * Insert string str in the dictionary and set match_head to the previous head + * of the hash chain (the most recent string with same hash key). Return + * the previous length of the hash chain. + * If this file is compiled with -DFASTEST, the compression level is forced + * to 1, and no hash chains are maintained. + * IN assertion: all calls to INSERT_STRING are made with consecutive + * input characters and the first MIN_MATCH bytes of str are valid + * (except for the last MIN_MATCH-1 bytes of the input file). + */ +#ifdef FASTEST +#define INSERT_STRING(s, str, match_head) \ + (UPDATE_HASH(s, s->ins_h, s->window[(str) + (MIN_MATCH-1)]), \ + match_head = s->head[s->ins_h], \ + s->head[s->ins_h] = (Pos)(str)) +#else +#define INSERT_STRING(s, str, match_head) \ + (UPDATE_HASH(s, s->ins_h, s->window[(str) + (MIN_MATCH-1)]), \ + match_head = s->prev[(str) & s->w_mask] = s->head[s->ins_h], \ + s->head[s->ins_h] = (Pos)(str)) +#endif + +/* =========================================================================== + * Initialize the hash table (avoiding 64K overflow for 16 bit systems). + * prev[] will be initialized on the fly.
+ */ +#define CLEAR_HASH(s) \ + s->head[s->hash_size-1] = NIL; \ + zmemzero((Bytef *)s->head, (unsigned)(s->hash_size-1)*sizeof(*s->head)); + +/* ========================================================================= */ +int ZEXPORT deflateInit_(strm, level, version, stream_size) + z_streamp strm; + int level; + const char *version; + int stream_size; +{ + return deflateInit2_(strm, level, Z_DEFLATED, MAX_WBITS, DEF_MEM_LEVEL, + Z_DEFAULT_STRATEGY, version, stream_size); + /* To do: ignore strm->next_in if we use it as window */ +} + +/* ========================================================================= */ +int ZEXPORT deflateInit2_(strm, level, method, windowBits, memLevel, strategy, + version, stream_size) + z_streamp strm; + int level; + int method; + int windowBits; + int memLevel; + int strategy; + const char *version; + int stream_size; +{ + deflate_state *s; + int wrap = 1; + static const char my_version[] = ZLIB_VERSION; + + ushf *overlay; + /* We overlay pending_buf and d_buf+l_buf. This works since the average + * output size for (length,distance) codes is <= 24 bits. + */ + + if (version == Z_NULL || version[0] != my_version[0] || + stream_size != sizeof(z_stream)) { + return Z_VERSION_ERROR; + } + if (strm == Z_NULL) return Z_STREAM_ERROR; + + strm->msg = Z_NULL; + if (strm->zalloc == (alloc_func)0) { +#ifdef Z_SOLO + return Z_STREAM_ERROR; +#else + strm->zalloc = zcalloc; + strm->opaque = (voidpf)0; +#endif + } + if (strm->zfree == (free_func)0) +#ifdef Z_SOLO + return Z_STREAM_ERROR; +#else + strm->zfree = zcfree; +#endif + +#ifdef FASTEST + if (level != 0) level = 1; +#else + if (level == Z_DEFAULT_COMPRESSION) level = 6; +#endif + + if (windowBits < 0) { /* suppress zlib wrapper */ + wrap = 0; + windowBits = -windowBits; + } +#ifdef GZIP + else if (windowBits > 15) { + wrap = 2; /* write gzip wrapper instead */ + windowBits -= 16; + } +#endif + if (memLevel < 1 || memLevel > MAX_MEM_LEVEL || method != Z_DEFLATED || + windowBits < 8 || windowBits > 15 || level < 0 || level > 9 || + strategy < 0 || strategy > Z_FIXED) { + return Z_STREAM_ERROR; + } + if (windowBits == 8) windowBits = 9; /* until 256-byte window bug fixed */ + s = (deflate_state *) ZALLOC(strm, 1, sizeof(deflate_state)); + if (s == Z_NULL) return Z_MEM_ERROR; + strm->state = (struct internal_state FAR *)s; + s->strm = strm; + + s->wrap = wrap; + s->gzhead = Z_NULL; + s->w_bits = windowBits; + s->w_size = 1 << s->w_bits; + s->w_mask = s->w_size - 1; + + s->hash_bits = memLevel + 7; + s->hash_size = 1 << s->hash_bits; + s->hash_mask = s->hash_size - 1; + s->hash_shift = ((s->hash_bits+MIN_MATCH-1)/MIN_MATCH); + + s->window = (Bytef *) ZALLOC(strm, s->w_size, 2*sizeof(Byte)); + s->prev = (Posf *) ZALLOC(strm, s->w_size, sizeof(Pos)); + s->head = (Posf *) ZALLOC(strm, s->hash_size, sizeof(Pos)); + + s->high_water = 0; /* nothing written to s->window yet */ + + s->lit_bufsize = 1 << (memLevel + 6); /* 16K elements by default */ + + overlay = (ushf *) ZALLOC(strm, s->lit_bufsize, sizeof(ush)+2); + s->pending_buf = (uchf *) overlay; + s->pending_buf_size = (ulg)s->lit_bufsize * (sizeof(ush)+2L); + + if (s->window == Z_NULL || s->prev == Z_NULL || s->head == Z_NULL || + s->pending_buf == Z_NULL) { + s->status = FINISH_STATE; + strm->msg = ERR_MSG(Z_MEM_ERROR); + deflateEnd (strm); + return Z_MEM_ERROR; + } + s->d_buf = overlay + s->lit_bufsize/sizeof(ush); + s->l_buf = s->pending_buf + (1+sizeof(ush))*s->lit_bufsize; + + s->level = level; + s->strategy = strategy; + s->method = (Byte)method; + + return 
deflateReset(strm); +} + +/* ========================================================================= */ +int ZEXPORT deflateSetDictionary (strm, dictionary, dictLength) + z_streamp strm; + const Bytef *dictionary; + uInt dictLength; +{ + deflate_state *s; + uInt str, n; + int wrap; + unsigned avail; + z_const unsigned char *next; + + if (strm == Z_NULL || strm->state == Z_NULL || dictionary == Z_NULL) + return Z_STREAM_ERROR; + s = strm->state; + wrap = s->wrap; + if (wrap == 2 || (wrap == 1 && s->status != INIT_STATE) || s->lookahead) + return Z_STREAM_ERROR; + + /* when using zlib wrappers, compute Adler-32 for provided dictionary */ + if (wrap == 1) + strm->adler = adler32(strm->adler, dictionary, dictLength); + s->wrap = 0; /* avoid computing Adler-32 in read_buf */ + + /* if dictionary would fill window, just replace the history */ + if (dictLength >= s->w_size) { + if (wrap == 0) { /* already empty otherwise */ + CLEAR_HASH(s); + s->strstart = 0; + s->block_start = 0L; + s->insert = 0; + } + dictionary += dictLength - s->w_size; /* use the tail */ + dictLength = s->w_size; + } + + /* insert dictionary into window and hash */ + avail = strm->avail_in; + next = strm->next_in; + strm->avail_in = dictLength; + strm->next_in = (z_const Bytef *)dictionary; + fill_window(s); + while (s->lookahead >= MIN_MATCH) { + str = s->strstart; + n = s->lookahead - (MIN_MATCH-1); + do { + UPDATE_HASH(s, s->ins_h, s->window[str + MIN_MATCH-1]); +#ifndef FASTEST + s->prev[str & s->w_mask] = s->head[s->ins_h]; +#endif + s->head[s->ins_h] = (Pos)str; + str++; + } while (--n); + s->strstart = str; + s->lookahead = MIN_MATCH-1; + fill_window(s); + } + s->strstart += s->lookahead; + s->block_start = (long)s->strstart; + s->insert = s->lookahead; + s->lookahead = 0; + s->match_length = s->prev_length = MIN_MATCH-1; + s->match_available = 0; + strm->next_in = next; + strm->avail_in = avail; + s->wrap = wrap; + return Z_OK; +} + +/* ========================================================================= */ +int ZEXPORT deflateResetKeep (strm) + z_streamp strm; +{ + deflate_state *s; + + if (strm == Z_NULL || strm->state == Z_NULL || + strm->zalloc == (alloc_func)0 || strm->zfree == (free_func)0) { + return Z_STREAM_ERROR; + } + + strm->total_in = strm->total_out = 0; + strm->msg = Z_NULL; /* use zfree if we ever allocate msg dynamically */ + strm->data_type = Z_UNKNOWN; + + s = (deflate_state *)strm->state; + s->pending = 0; + s->pending_out = s->pending_buf; + + if (s->wrap < 0) { + s->wrap = -s->wrap; /* was made negative by deflate(..., Z_FINISH); */ + } + s->status = s->wrap ? INIT_STATE : BUSY_STATE; + strm->adler = +#ifdef GZIP + s->wrap == 2 ? 
crc32(0L, Z_NULL, 0) : +#endif + adler32(0L, Z_NULL, 0); + s->last_flush = Z_NO_FLUSH; + + _tr_init(s); + + return Z_OK; +} + +/* ========================================================================= */ +int ZEXPORT deflateReset (strm) + z_streamp strm; +{ + int ret; + + ret = deflateResetKeep(strm); + if (ret == Z_OK) + lm_init(strm->state); + return ret; +} + +/* ========================================================================= */ +int ZEXPORT deflateSetHeader (strm, head) + z_streamp strm; + gz_headerp head; +{ + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + if (strm->state->wrap != 2) return Z_STREAM_ERROR; + strm->state->gzhead = head; + return Z_OK; +} + +/* ========================================================================= */ +int ZEXPORT deflatePending (strm, pending, bits) + unsigned *pending; + int *bits; + z_streamp strm; +{ + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + if (pending != Z_NULL) + *pending = strm->state->pending; + if (bits != Z_NULL) + *bits = strm->state->bi_valid; + return Z_OK; +} + +/* ========================================================================= */ +int ZEXPORT deflatePrime (strm, bits, value) + z_streamp strm; + int bits; + int value; +{ + deflate_state *s; + int put; + + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + s = strm->state; + if ((Bytef *)(s->d_buf) < s->pending_out + ((Buf_size + 7) >> 3)) + return Z_BUF_ERROR; + do { + put = Buf_size - s->bi_valid; + if (put > bits) + put = bits; + s->bi_buf |= (ush)((value & ((1 << put) - 1)) << s->bi_valid); + s->bi_valid += put; + _tr_flush_bits(s); + value >>= put; + bits -= put; + } while (bits); + return Z_OK; +} + +/* ========================================================================= */ +int ZEXPORT deflateParams(strm, level, strategy) + z_streamp strm; + int level; + int strategy; +{ + deflate_state *s; + compress_func func; + int err = Z_OK; + + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + s = strm->state; + +#ifdef FASTEST + if (level != 0) level = 1; +#else + if (level == Z_DEFAULT_COMPRESSION) level = 6; +#endif + if (level < 0 || level > 9 || strategy < 0 || strategy > Z_FIXED) { + return Z_STREAM_ERROR; + } + func = configuration_table[s->level].func; + + if ((strategy != s->strategy || func != configuration_table[level].func) && + strm->total_in != 0) { + /* Flush the last buffer: */ + err = deflate(strm, Z_BLOCK); + if (err == Z_BUF_ERROR && s->pending == 0) + err = Z_OK; + } + if (s->level != level) { + s->level = level; + s->max_lazy_match = configuration_table[level].max_lazy; + s->good_match = configuration_table[level].good_length; + s->nice_match = configuration_table[level].nice_length; + s->max_chain_length = configuration_table[level].max_chain; + } + s->strategy = strategy; + return err; +} + +/* ========================================================================= */ +int ZEXPORT deflateTune(strm, good_length, max_lazy, nice_length, max_chain) + z_streamp strm; + int good_length; + int max_lazy; + int nice_length; + int max_chain; +{ + deflate_state *s; + + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + s = strm->state; + s->good_match = good_length; + s->max_lazy_match = max_lazy; + s->nice_match = nice_length; + s->max_chain_length = max_chain; + return Z_OK; +} + +/* ========================================================================= + * For the default windowBits of 15 and memLevel of 8, this function returns + * a 
close to exact, as well as small, upper bound on the compressed size. + * They are coded as constants here for a reason--if the #define's are + * changed, then this function needs to be changed as well. The return + * value for 15 and 8 only works for those exact settings. + * + * For any setting other than those defaults for windowBits and memLevel, + * the value returned is a conservative worst case for the maximum expansion + * resulting from using fixed blocks instead of stored blocks, which deflate + * can emit on compressed data for some combinations of the parameters. + * + * This function could be more sophisticated to provide closer upper bounds for + * every combination of windowBits and memLevel. But even the conservative + * upper bound of about 14% expansion does not seem onerous for output buffer + * allocation. + */ +uLong ZEXPORT deflateBound(strm, sourceLen) + z_streamp strm; + uLong sourceLen; +{ + deflate_state *s; + uLong complen, wraplen; + Bytef *str; + + /* conservative upper bound for compressed data */ + complen = sourceLen + + ((sourceLen + 7) >> 3) + ((sourceLen + 63) >> 6) + 5; + + /* if can't get parameters, return conservative bound plus zlib wrapper */ + if (strm == Z_NULL || strm->state == Z_NULL) + return complen + 6; + + /* compute wrapper length */ + s = strm->state; + switch (s->wrap) { + case 0: /* raw deflate */ + wraplen = 0; + break; + case 1: /* zlib wrapper */ + wraplen = 6 + (s->strstart ? 4 : 0); + break; + case 2: /* gzip wrapper */ + wraplen = 18; + if (s->gzhead != Z_NULL) { /* user-supplied gzip header */ + if (s->gzhead->extra != Z_NULL) + wraplen += 2 + s->gzhead->extra_len; + str = s->gzhead->name; + if (str != Z_NULL) + do { + wraplen++; + } while (*str++); + str = s->gzhead->comment; + if (str != Z_NULL) + do { + wraplen++; + } while (*str++); + if (s->gzhead->hcrc) + wraplen += 2; + } + break; + default: /* for compiler happiness */ + wraplen = 6; + } + + /* if not default parameters, return conservative bound */ + if (s->w_bits != 15 || s->hash_bits != 8 + 7) + return complen + wraplen; + + /* default settings: return tight bound for that case */ + return sourceLen + (sourceLen >> 12) + (sourceLen >> 14) + + (sourceLen >> 25) + 13 - 6 + wraplen; +} + +/* ========================================================================= + * Put a short in the pending buffer. The 16-bit value is put in MSB order. + * IN assertion: the stream state is correct and there is enough room in + * pending_buf. + */ +local void putShortMSB (s, b) + deflate_state *s; + uInt b; +{ + put_byte(s, (Byte)(b >> 8)); + put_byte(s, (Byte)(b & 0xff)); +} + +/* ========================================================================= + * Flush as much pending output as possible. All deflate() output goes + * through this function so some applications may wish to modify it + * to avoid allocating a large strm->next_out buffer and copying into it. + * (See also read_buf()). 
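+ * Note that _tr_flush_bits() is called first, so any bits still buffered in
+ * bi_buf are moved into pending_buf before anything is copied to next_out.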
+ */ +local void flush_pending(strm) + z_streamp strm; +{ + unsigned len; + deflate_state *s = strm->state; + + _tr_flush_bits(s); + len = s->pending; + if (len > strm->avail_out) len = strm->avail_out; + if (len == 0) return; + + zmemcpy(strm->next_out, s->pending_out, len); + strm->next_out += len; + s->pending_out += len; + strm->total_out += len; + strm->avail_out -= len; + s->pending -= len; + if (s->pending == 0) { + s->pending_out = s->pending_buf; + } +} + +/* ========================================================================= */ +int ZEXPORT deflate (strm, flush) + z_streamp strm; + int flush; +{ + int old_flush; /* value of flush param for previous deflate call */ + deflate_state *s; + + if (strm == Z_NULL || strm->state == Z_NULL || + flush > Z_BLOCK || flush < 0) { + return Z_STREAM_ERROR; + } + s = strm->state; + + if (strm->next_out == Z_NULL || + (strm->next_in == Z_NULL && strm->avail_in != 0) || + (s->status == FINISH_STATE && flush != Z_FINISH)) { + ERR_RETURN(strm, Z_STREAM_ERROR); + } + if (strm->avail_out == 0) ERR_RETURN(strm, Z_BUF_ERROR); + + s->strm = strm; /* just in case */ + old_flush = s->last_flush; + s->last_flush = flush; + + /* Write the header */ + if (s->status == INIT_STATE) { +#ifdef GZIP + if (s->wrap == 2) { + strm->adler = crc32(0L, Z_NULL, 0); + put_byte(s, 31); + put_byte(s, 139); + put_byte(s, 8); + if (s->gzhead == Z_NULL) { + put_byte(s, 0); + put_byte(s, 0); + put_byte(s, 0); + put_byte(s, 0); + put_byte(s, 0); + put_byte(s, s->level == 9 ? 2 : + (s->strategy >= Z_HUFFMAN_ONLY || s->level < 2 ? + 4 : 0)); + put_byte(s, OS_CODE); + s->status = BUSY_STATE; + } + else { + put_byte(s, (s->gzhead->text ? 1 : 0) + + (s->gzhead->hcrc ? 2 : 0) + + (s->gzhead->extra == Z_NULL ? 0 : 4) + + (s->gzhead->name == Z_NULL ? 0 : 8) + + (s->gzhead->comment == Z_NULL ? 0 : 16) + ); + put_byte(s, (Byte)(s->gzhead->time & 0xff)); + put_byte(s, (Byte)((s->gzhead->time >> 8) & 0xff)); + put_byte(s, (Byte)((s->gzhead->time >> 16) & 0xff)); + put_byte(s, (Byte)((s->gzhead->time >> 24) & 0xff)); + put_byte(s, s->level == 9 ? 2 : + (s->strategy >= Z_HUFFMAN_ONLY || s->level < 2 ? 
+ 4 : 0)); + put_byte(s, s->gzhead->os & 0xff); + if (s->gzhead->extra != Z_NULL) { + put_byte(s, s->gzhead->extra_len & 0xff); + put_byte(s, (s->gzhead->extra_len >> 8) & 0xff); + } + if (s->gzhead->hcrc) + strm->adler = crc32(strm->adler, s->pending_buf, + s->pending); + s->gzindex = 0; + s->status = EXTRA_STATE; + } + } + else +#endif + { + uInt header = (Z_DEFLATED + ((s->w_bits-8)<<4)) << 8; + uInt level_flags; + + if (s->strategy >= Z_HUFFMAN_ONLY || s->level < 2) + level_flags = 0; + else if (s->level < 6) + level_flags = 1; + else if (s->level == 6) + level_flags = 2; + else + level_flags = 3; + header |= (level_flags << 6); + if (s->strstart != 0) header |= PRESET_DICT; + header += 31 - (header % 31); + + s->status = BUSY_STATE; + putShortMSB(s, header); + + /* Save the adler32 of the preset dictionary: */ + if (s->strstart != 0) { + putShortMSB(s, (uInt)(strm->adler >> 16)); + putShortMSB(s, (uInt)(strm->adler & 0xffff)); + } + strm->adler = adler32(0L, Z_NULL, 0); + } + } +#ifdef GZIP + if (s->status == EXTRA_STATE) { + if (s->gzhead->extra != Z_NULL) { + uInt beg = s->pending; /* start of bytes to update crc */ + + while (s->gzindex < (s->gzhead->extra_len & 0xffff)) { + if (s->pending == s->pending_buf_size) { + if (s->gzhead->hcrc && s->pending > beg) + strm->adler = crc32(strm->adler, s->pending_buf + beg, + s->pending - beg); + flush_pending(strm); + beg = s->pending; + if (s->pending == s->pending_buf_size) + break; + } + put_byte(s, s->gzhead->extra[s->gzindex]); + s->gzindex++; + } + if (s->gzhead->hcrc && s->pending > beg) + strm->adler = crc32(strm->adler, s->pending_buf + beg, + s->pending - beg); + if (s->gzindex == s->gzhead->extra_len) { + s->gzindex = 0; + s->status = NAME_STATE; + } + } + else + s->status = NAME_STATE; + } + if (s->status == NAME_STATE) { + if (s->gzhead->name != Z_NULL) { + uInt beg = s->pending; /* start of bytes to update crc */ + int val; + + do { + if (s->pending == s->pending_buf_size) { + if (s->gzhead->hcrc && s->pending > beg) + strm->adler = crc32(strm->adler, s->pending_buf + beg, + s->pending - beg); + flush_pending(strm); + beg = s->pending; + if (s->pending == s->pending_buf_size) { + val = 1; + break; + } + } + val = s->gzhead->name[s->gzindex++]; + put_byte(s, val); + } while (val != 0); + if (s->gzhead->hcrc && s->pending > beg) + strm->adler = crc32(strm->adler, s->pending_buf + beg, + s->pending - beg); + if (val == 0) { + s->gzindex = 0; + s->status = COMMENT_STATE; + } + } + else + s->status = COMMENT_STATE; + } + if (s->status == COMMENT_STATE) { + if (s->gzhead->comment != Z_NULL) { + uInt beg = s->pending; /* start of bytes to update crc */ + int val; + + do { + if (s->pending == s->pending_buf_size) { + if (s->gzhead->hcrc && s->pending > beg) + strm->adler = crc32(strm->adler, s->pending_buf + beg, + s->pending - beg); + flush_pending(strm); + beg = s->pending; + if (s->pending == s->pending_buf_size) { + val = 1; + break; + } + } + val = s->gzhead->comment[s->gzindex++]; + put_byte(s, val); + } while (val != 0); + if (s->gzhead->hcrc && s->pending > beg) + strm->adler = crc32(strm->adler, s->pending_buf + beg, + s->pending - beg); + if (val == 0) + s->status = HCRC_STATE; + } + else + s->status = HCRC_STATE; + } + if (s->status == HCRC_STATE) { + if (s->gzhead->hcrc) { + if (s->pending + 2 > s->pending_buf_size) + flush_pending(strm); + if (s->pending + 2 <= s->pending_buf_size) { + put_byte(s, (Byte)(strm->adler & 0xff)); + put_byte(s, (Byte)((strm->adler >> 8) & 0xff)); + strm->adler = crc32(0L, Z_NULL, 0); + 
s->status = BUSY_STATE; + } + } + else + s->status = BUSY_STATE; + } +#endif + + /* Flush as much pending output as possible */ + if (s->pending != 0) { + flush_pending(strm); + if (strm->avail_out == 0) { + /* Since avail_out is 0, deflate will be called again with + * more output space, but possibly with both pending and + * avail_in equal to zero. There won't be anything to do, + * but this is not an error situation so make sure we + * return OK instead of BUF_ERROR at next call of deflate: + */ + s->last_flush = -1; + return Z_OK; + } + + /* Make sure there is something to do and avoid duplicate consecutive + * flushes. For repeated and useless calls with Z_FINISH, we keep + * returning Z_STREAM_END instead of Z_BUF_ERROR. + */ + } else if (strm->avail_in == 0 && RANK(flush) <= RANK(old_flush) && + flush != Z_FINISH) { + ERR_RETURN(strm, Z_BUF_ERROR); + } + + /* User must not provide more input after the first FINISH: */ + if (s->status == FINISH_STATE && strm->avail_in != 0) { + ERR_RETURN(strm, Z_BUF_ERROR); + } + + /* Start a new block or continue the current one. + */ + if (strm->avail_in != 0 || s->lookahead != 0 || + (flush != Z_NO_FLUSH && s->status != FINISH_STATE)) { + block_state bstate; + + bstate = s->strategy == Z_HUFFMAN_ONLY ? deflate_huff(s, flush) : + (s->strategy == Z_RLE ? deflate_rle(s, flush) : + (*(configuration_table[s->level].func))(s, flush)); + + if (bstate == finish_started || bstate == finish_done) { + s->status = FINISH_STATE; + } + if (bstate == need_more || bstate == finish_started) { + if (strm->avail_out == 0) { + s->last_flush = -1; /* avoid BUF_ERROR next call, see above */ + } + return Z_OK; + /* If flush != Z_NO_FLUSH && avail_out == 0, the next call + * of deflate should use the same flush parameter to make sure + * that the flush is complete. So we don't have to output an + * empty block here, this will be done at next call. This also + * ensures that for a very small output buffer, we emit at most + * one empty block. + */ + } + if (bstate == block_done) { + if (flush == Z_PARTIAL_FLUSH) { + _tr_align(s); + } else if (flush != Z_BLOCK) { /* FULL_FLUSH or SYNC_FLUSH */ + _tr_stored_block(s, (char*)0, 0L, 0); + /* For a full flush, this empty block will be recognized + * as a special marker by inflate_sync(). + */ + if (flush == Z_FULL_FLUSH) { + CLEAR_HASH(s); /* forget history */ + if (s->lookahead == 0) { + s->strstart = 0; + s->block_start = 0L; + s->insert = 0; + } + } + } + flush_pending(strm); + if (strm->avail_out == 0) { + s->last_flush = -1; /* avoid BUF_ERROR at next call, see above */ + return Z_OK; + } + } + } + Assert(strm->avail_out > 0, "bug2"); + + if (flush != Z_FINISH) return Z_OK; + if (s->wrap <= 0) return Z_STREAM_END; + + /* Write the trailer */ +#ifdef GZIP + if (s->wrap == 2) { + put_byte(s, (Byte)(strm->adler & 0xff)); + put_byte(s, (Byte)((strm->adler >> 8) & 0xff)); + put_byte(s, (Byte)((strm->adler >> 16) & 0xff)); + put_byte(s, (Byte)((strm->adler >> 24) & 0xff)); + put_byte(s, (Byte)(strm->total_in & 0xff)); + put_byte(s, (Byte)((strm->total_in >> 8) & 0xff)); + put_byte(s, (Byte)((strm->total_in >> 16) & 0xff)); + put_byte(s, (Byte)((strm->total_in >> 24) & 0xff)); + } + else +#endif + { + putShortMSB(s, (uInt)(strm->adler >> 16)); + putShortMSB(s, (uInt)(strm->adler & 0xffff)); + } + flush_pending(strm); + /* If avail_out is zero, the application will call deflate again + * to flush the rest. + */ + if (s->wrap > 0) s->wrap = -s->wrap; /* write the trailer only once! */ + return s->pending != 0 ? 
Z_OK : Z_STREAM_END; +} + +/* ========================================================================= */ +int ZEXPORT deflateEnd (strm) + z_streamp strm; +{ + int status; + + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + + status = strm->state->status; + if (status != INIT_STATE && + status != EXTRA_STATE && + status != NAME_STATE && + status != COMMENT_STATE && + status != HCRC_STATE && + status != BUSY_STATE && + status != FINISH_STATE) { + return Z_STREAM_ERROR; + } + + /* Deallocate in reverse order of allocations: */ + TRY_FREE(strm, strm->state->pending_buf); + TRY_FREE(strm, strm->state->head); + TRY_FREE(strm, strm->state->prev); + TRY_FREE(strm, strm->state->window); + + ZFREE(strm, strm->state); + strm->state = Z_NULL; + + return status == BUSY_STATE ? Z_DATA_ERROR : Z_OK; +} + +/* ========================================================================= + * Copy the source state to the destination state. + * To simplify the source, this is not supported for 16-bit MSDOS (which + * doesn't have enough memory anyway to duplicate compression states). + */ +int ZEXPORT deflateCopy (dest, source) + z_streamp dest; + z_streamp source; +{ +#ifdef MAXSEG_64K + return Z_STREAM_ERROR; +#else + deflate_state *ds; + deflate_state *ss; + ushf *overlay; + + + if (source == Z_NULL || dest == Z_NULL || source->state == Z_NULL) { + return Z_STREAM_ERROR; + } + + ss = source->state; + + zmemcpy((voidpf)dest, (voidpf)source, sizeof(z_stream)); + + ds = (deflate_state *) ZALLOC(dest, 1, sizeof(deflate_state)); + if (ds == Z_NULL) return Z_MEM_ERROR; + dest->state = (struct internal_state FAR *) ds; + zmemcpy((voidpf)ds, (voidpf)ss, sizeof(deflate_state)); + ds->strm = dest; + + ds->window = (Bytef *) ZALLOC(dest, ds->w_size, 2*sizeof(Byte)); + ds->prev = (Posf *) ZALLOC(dest, ds->w_size, sizeof(Pos)); + ds->head = (Posf *) ZALLOC(dest, ds->hash_size, sizeof(Pos)); + overlay = (ushf *) ZALLOC(dest, ds->lit_bufsize, sizeof(ush)+2); + ds->pending_buf = (uchf *) overlay; + + if (ds->window == Z_NULL || ds->prev == Z_NULL || ds->head == Z_NULL || + ds->pending_buf == Z_NULL) { + deflateEnd (dest); + return Z_MEM_ERROR; + } + /* following zmemcpy do not work for 16-bit MSDOS */ + zmemcpy(ds->window, ss->window, ds->w_size * 2 * sizeof(Byte)); + zmemcpy((voidpf)ds->prev, (voidpf)ss->prev, ds->w_size * sizeof(Pos)); + zmemcpy((voidpf)ds->head, (voidpf)ss->head, ds->hash_size * sizeof(Pos)); + zmemcpy(ds->pending_buf, ss->pending_buf, (uInt)ds->pending_buf_size); + + ds->pending_out = ds->pending_buf + (ss->pending_out - ss->pending_buf); + ds->d_buf = overlay + ds->lit_bufsize/sizeof(ush); + ds->l_buf = ds->pending_buf + (1+sizeof(ush))*ds->lit_bufsize; + + ds->l_desc.dyn_tree = ds->dyn_ltree; + ds->d_desc.dyn_tree = ds->dyn_dtree; + ds->bl_desc.dyn_tree = ds->bl_tree; + + return Z_OK; +#endif /* MAXSEG_64K */ +} + +/* =========================================================================== + * Read a new buffer from the current input stream, update the adler32 + * and total number of bytes read. All deflate() input goes through + * this function so some applications may wish to modify it to avoid + * allocating a large strm->next_in buffer and copying from it. + * (See also flush_pending()). 
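+ * The value returned is the number of bytes actually copied into buf; it may
+ * be less than size, and is zero once strm->avail_in is exhausted.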
+ */ +local int read_buf(strm, buf, size) + z_streamp strm; + Bytef *buf; + unsigned size; +{ + unsigned len = strm->avail_in; + + if (len > size) len = size; + if (len == 0) return 0; + + strm->avail_in -= len; + + zmemcpy(buf, strm->next_in, len); + if (strm->state->wrap == 1) { + strm->adler = adler32(strm->adler, buf, len); + } +#ifdef GZIP + else if (strm->state->wrap == 2) { + strm->adler = crc32(strm->adler, buf, len); + } +#endif + strm->next_in += len; + strm->total_in += len; + + return (int)len; +} + +/* =========================================================================== + * Initialize the "longest match" routines for a new zlib stream + */ +local void lm_init (s) + deflate_state *s; +{ + s->window_size = (ulg)2L*s->w_size; + + CLEAR_HASH(s); + + /* Set the default configuration parameters: + */ + s->max_lazy_match = configuration_table[s->level].max_lazy; + s->good_match = configuration_table[s->level].good_length; + s->nice_match = configuration_table[s->level].nice_length; + s->max_chain_length = configuration_table[s->level].max_chain; + + s->strstart = 0; + s->block_start = 0L; + s->lookahead = 0; + s->insert = 0; + s->match_length = s->prev_length = MIN_MATCH-1; + s->match_available = 0; + s->ins_h = 0; +#ifndef FASTEST +#ifdef ASMV + match_init(); /* initialize the asm code */ +#endif +#endif +} + +#ifndef FASTEST +/* =========================================================================== + * Set match_start to the longest match starting at the given string and + * return its length. Matches shorter or equal to prev_length are discarded, + * in which case the result is equal to prev_length and match_start is + * garbage. + * IN assertions: cur_match is the head of the hash chain for the current + * string (strstart) and its distance is <= MAX_DIST, and prev_length >= 1 + * OUT assertion: the match length is not greater than s->lookahead. + */ +#ifndef ASMV +/* For 80x86 and 680x0, an optimized version will be provided in match.asm or + * match.S. The code will be functionally equivalent. + */ +local uInt longest_match(s, cur_match) + deflate_state *s; + IPos cur_match; /* current match */ +{ + unsigned chain_length = s->max_chain_length;/* max hash chain length */ + register Bytef *scan = s->window + s->strstart; /* current string */ + register Bytef *match; /* matched string */ + register int len; /* length of current match */ + int best_len = s->prev_length; /* best match length so far */ + int nice_match = s->nice_match; /* stop if match long enough */ + IPos limit = s->strstart > (IPos)MAX_DIST(s) ? + s->strstart - (IPos)MAX_DIST(s) : NIL; + /* Stop when cur_match becomes <= limit. To simplify the code, + * we prevent matches with the string of window index 0. + */ + Posf *prev = s->prev; + uInt wmask = s->w_mask; + +#ifdef UNALIGNED_OK + /* Compare two bytes at a time. Note: this is not always beneficial. + * Try with and without -DUNALIGNED_OK to check. + */ + register Bytef *strend = s->window + s->strstart + MAX_MATCH - 1; + register ush scan_start = *(ushf*)scan; + register ush scan_end = *(ushf*)(scan+best_len-1); +#else + register Bytef *strend = s->window + s->strstart + MAX_MATCH; + register Byte scan_end1 = scan[best_len-1]; + register Byte scan_end = scan[best_len]; +#endif + + /* The code is optimized for HASH_BITS >= 8 and MAX_MATCH-2 multiple of 16. + * It is easy to get rid of this optimization if necessary. 
+ */ + Assert(s->hash_bits >= 8 && MAX_MATCH == 258, "Code too clever"); + + /* Do not waste too much time if we already have a good match: */ + if (s->prev_length >= s->good_match) { + chain_length >>= 2; + } + /* Do not look for matches beyond the end of the input. This is necessary + * to make deflate deterministic. + */ + if ((uInt)nice_match > s->lookahead) nice_match = s->lookahead; + + Assert((ulg)s->strstart <= s->window_size-MIN_LOOKAHEAD, "need lookahead"); + + do { + Assert(cur_match < s->strstart, "no future"); + match = s->window + cur_match; + + /* Skip to next match if the match length cannot increase + * or if the match length is less than 2. Note that the checks below + * for insufficient lookahead only occur occasionally for performance + * reasons. Therefore uninitialized memory will be accessed, and + * conditional jumps will be made that depend on those values. + * However the length of the match is limited to the lookahead, so + * the output of deflate is not affected by the uninitialized values. + */ +#if (defined(UNALIGNED_OK) && MAX_MATCH == 258) + /* This code assumes sizeof(unsigned short) == 2. Do not use + * UNALIGNED_OK if your compiler uses a different size. + */ + if (*(ushf*)(match+best_len-1) != scan_end || + *(ushf*)match != scan_start) continue; + + /* It is not necessary to compare scan[2] and match[2] since they are + * always equal when the other bytes match, given that the hash keys + * are equal and that HASH_BITS >= 8. Compare 2 bytes at a time at + * strstart+3, +5, ... up to strstart+257. We check for insufficient + * lookahead only every 4th comparison; the 128th check will be made + * at strstart+257. If MAX_MATCH-2 is not a multiple of 8, it is + * necessary to put more guard bytes at the end of the window, or + * to check more often for insufficient lookahead. + */ + Assert(scan[2] == match[2], "scan[2]?"); + scan++, match++; + do { + } while (*(ushf*)(scan+=2) == *(ushf*)(match+=2) && + *(ushf*)(scan+=2) == *(ushf*)(match+=2) && + *(ushf*)(scan+=2) == *(ushf*)(match+=2) && + *(ushf*)(scan+=2) == *(ushf*)(match+=2) && + scan < strend); + /* The funny "do {}" generates better code on most compilers */ + + /* Here, scan <= window+strstart+257 */ + Assert(scan <= s->window+(unsigned)(s->window_size-1), "wild scan"); + if (*scan == *match) scan++; + + len = (MAX_MATCH - 1) - (int)(strend-scan); + scan = strend - (MAX_MATCH-1); + +#else /* UNALIGNED_OK */ + + if (match[best_len] != scan_end || + match[best_len-1] != scan_end1 || + *match != *scan || + *++match != scan[1]) continue; + + /* The check at best_len-1 can be removed because it will be made + * again later. (This heuristic is not always a win.) + * It is not necessary to compare scan[2] and match[2] since they + * are always equal when the other bytes match, given that + * the hash keys are equal and that HASH_BITS >= 8. + */ + scan += 2, match++; + Assert(*scan == *match, "match[2]?"); + + /* We check for insufficient lookahead only every 8th comparison; + * the 256th check will be made at strstart+258. 
+ */ + do { + } while (*++scan == *++match && *++scan == *++match && + *++scan == *++match && *++scan == *++match && + *++scan == *++match && *++scan == *++match && + *++scan == *++match && *++scan == *++match && + scan < strend); + + Assert(scan <= s->window+(unsigned)(s->window_size-1), "wild scan"); + + len = MAX_MATCH - (int)(strend - scan); + scan = strend - MAX_MATCH; + +#endif /* UNALIGNED_OK */ + + if (len > best_len) { + s->match_start = cur_match; + best_len = len; + if (len >= nice_match) break; +#ifdef UNALIGNED_OK + scan_end = *(ushf*)(scan+best_len-1); +#else + scan_end1 = scan[best_len-1]; + scan_end = scan[best_len]; +#endif + } + } while ((cur_match = prev[cur_match & wmask]) > limit + && --chain_length != 0); + + if ((uInt)best_len <= s->lookahead) return (uInt)best_len; + return s->lookahead; +} +#endif /* ASMV */ + +#else /* FASTEST */ + +/* --------------------------------------------------------------------------- + * Optimized version for FASTEST only + */ +local uInt longest_match(s, cur_match) + deflate_state *s; + IPos cur_match; /* current match */ +{ + register Bytef *scan = s->window + s->strstart; /* current string */ + register Bytef *match; /* matched string */ + register int len; /* length of current match */ + register Bytef *strend = s->window + s->strstart + MAX_MATCH; + + /* The code is optimized for HASH_BITS >= 8 and MAX_MATCH-2 multiple of 16. + * It is easy to get rid of this optimization if necessary. + */ + Assert(s->hash_bits >= 8 && MAX_MATCH == 258, "Code too clever"); + + Assert((ulg)s->strstart <= s->window_size-MIN_LOOKAHEAD, "need lookahead"); + + Assert(cur_match < s->strstart, "no future"); + + match = s->window + cur_match; + + /* Return failure if the match length is less than 2: + */ + if (match[0] != scan[0] || match[1] != scan[1]) return MIN_MATCH-1; + + /* The check at best_len-1 can be removed because it will be made + * again later. (This heuristic is not always a win.) + * It is not necessary to compare scan[2] and match[2] since they + * are always equal when the other bytes match, given that + * the hash keys are equal and that HASH_BITS >= 8. + */ + scan += 2, match += 2; + Assert(*scan == *match, "match[2]?"); + + /* We check for insufficient lookahead only every 8th comparison; + * the 256th check will be made at strstart+258. + */ + do { + } while (*++scan == *++match && *++scan == *++match && + *++scan == *++match && *++scan == *++match && + *++scan == *++match && *++scan == *++match && + *++scan == *++match && *++scan == *++match && + scan < strend); + + Assert(scan <= s->window+(unsigned)(s->window_size-1), "wild scan"); + + len = MAX_MATCH - (int)(strend - scan); + + if (len < MIN_MATCH) return MIN_MATCH - 1; + + s->match_start = cur_match; + return (uInt)len <= s->lookahead ? (uInt)len : s->lookahead; +} + +#endif /* FASTEST */ + +#ifdef DEBUG +/* =========================================================================== + * Check that the match at match_start is indeed a match. 
+ */ +local void check_match(s, start, match, length) + deflate_state *s; + IPos start, match; + int length; +{ + /* check that the match is indeed a match */ + if (zmemcmp(s->window + match, + s->window + start, length) != EQUAL) { + fprintf(stderr, " start %u, match %u, length %d\n", + start, match, length); + do { + fprintf(stderr, "%c%c", s->window[match++], s->window[start++]); + } while (--length != 0); + z_error("invalid match"); + } + if (z_verbose > 1) { + fprintf(stderr,"\\[%d,%d]", start-match, length); + do { putc(s->window[start++], stderr); } while (--length != 0); + } +} +#else +# define check_match(s, start, match, length) +#endif /* DEBUG */ + +/* =========================================================================== + * Fill the window when the lookahead becomes insufficient. + * Updates strstart and lookahead. + * + * IN assertion: lookahead < MIN_LOOKAHEAD + * OUT assertions: strstart <= window_size-MIN_LOOKAHEAD + * At least one byte has been read, or avail_in == 0; reads are + * performed for at least two bytes (required for the zip translate_eol + * option -- not supported here). + */ +local void fill_window(s) + deflate_state *s; +{ + register unsigned n, m; + register Posf *p; + unsigned more; /* Amount of free space at the end of the window. */ + uInt wsize = s->w_size; + + Assert(s->lookahead < MIN_LOOKAHEAD, "already enough lookahead"); + + do { + more = (unsigned)(s->window_size -(ulg)s->lookahead -(ulg)s->strstart); + + /* Deal with !@#$% 64K limit: */ + if (sizeof(int) <= 2) { + if (more == 0 && s->strstart == 0 && s->lookahead == 0) { + more = wsize; + + } else if (more == (unsigned)(-1)) { + /* Very unlikely, but possible on 16 bit machine if + * strstart == 0 && lookahead == 1 (input done a byte at time) + */ + more--; + } + } + + /* If the window is almost full and there is insufficient lookahead, + * move the upper half to the lower one to make room in the upper half. + */ + if (s->strstart >= wsize+MAX_DIST(s)) { + + zmemcpy(s->window, s->window+wsize, (unsigned)wsize); + s->match_start -= wsize; + s->strstart -= wsize; /* we now have strstart >= MAX_DIST */ + s->block_start -= (long) wsize; + + /* Slide the hash table (could be avoided with 32 bit values + at the expense of memory usage). We slide even when level == 0 + to keep the hash table consistent if we switch back to level > 0 + later. (Using level 0 permanently is not an optimal usage of + zlib, so we don't care about this pathological case.) + */ + n = s->hash_size; + p = &s->head[n]; + do { + m = *--p; + *p = (Pos)(m >= wsize ? m-wsize : NIL); + } while (--n); + + n = wsize; +#ifndef FASTEST + p = &s->prev[n]; + do { + m = *--p; + *p = (Pos)(m >= wsize ? m-wsize : NIL); + /* If n is not on any hash chain, prev[n] is garbage but + * its value will never be used. + */ + } while (--n); +#endif + more += wsize; + } + if (s->strm->avail_in == 0) break; + + /* If there was no sliding: + * strstart <= WSIZE+MAX_DIST-1 && lookahead <= MIN_LOOKAHEAD - 1 && + * more == window_size - lookahead - strstart + * => more >= window_size - (MIN_LOOKAHEAD-1 + WSIZE + MAX_DIST-1) + * => more >= window_size - 2*WSIZE + 2 + * In the BIG_MEM or MMAP case (not yet supported), + * window_size == input_size + MIN_LOOKAHEAD && + * strstart + s->lookahead <= input_size => more >= MIN_LOOKAHEAD. + * Otherwise, window_size == 2*WSIZE so more >= 2. + * If there was sliding, more >= WSIZE. So in all cases, more >= 2. 
+ */ + Assert(more >= 2, "more < 2"); + + n = read_buf(s->strm, s->window + s->strstart + s->lookahead, more); + s->lookahead += n; + + /* Initialize the hash value now that we have some input: */ + if (s->lookahead + s->insert >= MIN_MATCH) { + uInt str = s->strstart - s->insert; + s->ins_h = s->window[str]; + UPDATE_HASH(s, s->ins_h, s->window[str + 1]); +#if MIN_MATCH != 3 + Call UPDATE_HASH() MIN_MATCH-3 more times +#endif + while (s->insert) { + UPDATE_HASH(s, s->ins_h, s->window[str + MIN_MATCH-1]); +#ifndef FASTEST + s->prev[str & s->w_mask] = s->head[s->ins_h]; +#endif + s->head[s->ins_h] = (Pos)str; + str++; + s->insert--; + if (s->lookahead + s->insert < MIN_MATCH) + break; + } + } + /* If the whole input has less than MIN_MATCH bytes, ins_h is garbage, + * but this is not important since only literal bytes will be emitted. + */ + + } while (s->lookahead < MIN_LOOKAHEAD && s->strm->avail_in != 0); + + /* If the WIN_INIT bytes after the end of the current data have never been + * written, then zero those bytes in order to avoid memory check reports of + * the use of uninitialized (or uninitialised as Julian writes) bytes by + * the longest match routines. Update the high water mark for the next + * time through here. WIN_INIT is set to MAX_MATCH since the longest match + * routines allow scanning to strstart + MAX_MATCH, ignoring lookahead. + */ + if (s->high_water < s->window_size) { + ulg curr = s->strstart + (ulg)(s->lookahead); + ulg init; + + if (s->high_water < curr) { + /* Previous high water mark below current data -- zero WIN_INIT + * bytes or up to end of window, whichever is less. + */ + init = s->window_size - curr; + if (init > WIN_INIT) + init = WIN_INIT; + zmemzero(s->window + curr, (unsigned)init); + s->high_water = curr + init; + } + else if (s->high_water < (ulg)curr + WIN_INIT) { + /* High water mark at or above current data, but below current data + * plus WIN_INIT -- zero out to current data plus WIN_INIT, or up + * to end of window, whichever is less. + */ + init = (ulg)curr + WIN_INIT - s->high_water; + if (init > s->window_size - s->high_water) + init = s->window_size - s->high_water; + zmemzero(s->window + s->high_water, (unsigned)init); + s->high_water += init; + } + } + + Assert((ulg)s->strstart <= s->window_size - MIN_LOOKAHEAD, + "not enough room for search"); +} + +/* =========================================================================== + * Flush the current block, with given end-of-file flag. + * IN assertion: strstart is set to the end of the current match. + */ +#define FLUSH_BLOCK_ONLY(s, last) { \ + _tr_flush_block(s, (s->block_start >= 0L ? \ + (charf *)&s->window[(unsigned)s->block_start] : \ + (charf *)Z_NULL), \ + (ulg)((long)s->strstart - s->block_start), \ + (last)); \ + s->block_start = s->strstart; \ + flush_pending(s->strm); \ + Tracev((stderr,"[FLUSH]")); \ +} + +/* Same but force premature exit if necessary. */ +#define FLUSH_BLOCK(s, last) { \ + FLUSH_BLOCK_ONLY(s, last); \ + if (s->strm->avail_out == 0) return (last) ? finish_started : need_more; \ +} + +/* =========================================================================== + * Copy without compression as much as possible from the input stream, return + * the current block state. + * This function does not insert new strings in the dictionary since + * uncompressible data is probably not useful. This function is used + * only for the level=0 compression option. + * NOTE: this function should be optimized to avoid extra copying from + * window to pending_buf. 
+ */ +local block_state deflate_stored(s, flush) + deflate_state *s; + int flush; +{ + /* Stored blocks are limited to 0xffff bytes, pending_buf is limited + * to pending_buf_size, and each stored block has a 5 byte header: + */ + ulg max_block_size = 0xffff; + ulg max_start; + + if (max_block_size > s->pending_buf_size - 5) { + max_block_size = s->pending_buf_size - 5; + } + + /* Copy as much as possible from input to output: */ + for (;;) { + /* Fill the window as much as possible: */ + if (s->lookahead <= 1) { + + Assert(s->strstart < s->w_size+MAX_DIST(s) || + s->block_start >= (long)s->w_size, "slide too late"); + + fill_window(s); + if (s->lookahead == 0 && flush == Z_NO_FLUSH) return need_more; + + if (s->lookahead == 0) break; /* flush the current block */ + } + Assert(s->block_start >= 0L, "block gone"); + + s->strstart += s->lookahead; + s->lookahead = 0; + + /* Emit a stored block if pending_buf will be full: */ + max_start = s->block_start + max_block_size; + if (s->strstart == 0 || (ulg)s->strstart >= max_start) { + /* strstart == 0 is possible when wraparound on 16-bit machine */ + s->lookahead = (uInt)(s->strstart - max_start); + s->strstart = (uInt)max_start; + FLUSH_BLOCK(s, 0); + } + /* Flush if we may have to slide, otherwise block_start may become + * negative and the data will be gone: + */ + if (s->strstart - (uInt)s->block_start >= MAX_DIST(s)) { + FLUSH_BLOCK(s, 0); + } + } + s->insert = 0; + if (flush == Z_FINISH) { + FLUSH_BLOCK(s, 1); + return finish_done; + } + if ((long)s->strstart > s->block_start) + FLUSH_BLOCK(s, 0); + return block_done; +} + +/* =========================================================================== + * Compress as much as possible from the input stream, return the current + * block state. + * This function does not perform lazy evaluation of matches and inserts + * new strings in the dictionary only for unmatched strings or for short + * matches. It is used only for the fast compression options. + */ +local block_state deflate_fast(s, flush) + deflate_state *s; + int flush; +{ + IPos hash_head; /* head of the hash chain */ + int bflush; /* set if current block must be flushed */ + + for (;;) { + /* Make sure that we always have enough lookahead, except + * at the end of the input file. We need MAX_MATCH bytes + * for the next match, plus MIN_MATCH bytes to insert the + * string following the next match. + */ + if (s->lookahead < MIN_LOOKAHEAD) { + fill_window(s); + if (s->lookahead < MIN_LOOKAHEAD && flush == Z_NO_FLUSH) { + return need_more; + } + if (s->lookahead == 0) break; /* flush the current block */ + } + + /* Insert the string window[strstart .. strstart+2] in the + * dictionary, and set hash_head to the head of the hash chain: + */ + hash_head = NIL; + if (s->lookahead >= MIN_MATCH) { + INSERT_STRING(s, s->strstart, hash_head); + } + + /* Find the longest match, discarding those <= prev_length. + * At this point we have always match_length < MIN_MATCH + */ + if (hash_head != NIL && s->strstart - hash_head <= MAX_DIST(s)) { + /* To simplify the code, we prevent matches with the string + * of window index 0 (in particular we have to avoid a match + * of the string with itself at the start of the input file). 
+ */ + s->match_length = longest_match (s, hash_head); + /* longest_match() sets match_start */ + } + if (s->match_length >= MIN_MATCH) { + check_match(s, s->strstart, s->match_start, s->match_length); + + _tr_tally_dist(s, s->strstart - s->match_start, + s->match_length - MIN_MATCH, bflush); + + s->lookahead -= s->match_length; + + /* Insert new strings in the hash table only if the match length + * is not too large. This saves time but degrades compression. + */ +#ifndef FASTEST + if (s->match_length <= s->max_insert_length && + s->lookahead >= MIN_MATCH) { + s->match_length--; /* string at strstart already in table */ + do { + s->strstart++; + INSERT_STRING(s, s->strstart, hash_head); + /* strstart never exceeds WSIZE-MAX_MATCH, so there are + * always MIN_MATCH bytes ahead. + */ + } while (--s->match_length != 0); + s->strstart++; + } else +#endif + { + s->strstart += s->match_length; + s->match_length = 0; + s->ins_h = s->window[s->strstart]; + UPDATE_HASH(s, s->ins_h, s->window[s->strstart+1]); +#if MIN_MATCH != 3 + Call UPDATE_HASH() MIN_MATCH-3 more times +#endif + /* If lookahead < MIN_MATCH, ins_h is garbage, but it does not + * matter since it will be recomputed at next deflate call. + */ + } + } else { + /* No match, output a literal byte */ + Tracevv((stderr,"%c", s->window[s->strstart])); + _tr_tally_lit (s, s->window[s->strstart], bflush); + s->lookahead--; + s->strstart++; + } + if (bflush) FLUSH_BLOCK(s, 0); + } + s->insert = s->strstart < MIN_MATCH-1 ? s->strstart : MIN_MATCH-1; + if (flush == Z_FINISH) { + FLUSH_BLOCK(s, 1); + return finish_done; + } + if (s->last_lit) + FLUSH_BLOCK(s, 0); + return block_done; +} + +#ifndef FASTEST +/* =========================================================================== + * Same as above, but achieves better compression. We use a lazy + * evaluation for matches: a match is finally adopted only if there is + * no better match at the next window position. + */ +local block_state deflate_slow(s, flush) + deflate_state *s; + int flush; +{ + IPos hash_head; /* head of hash chain */ + int bflush; /* set if current block must be flushed */ + + /* Process the input block. */ + for (;;) { + /* Make sure that we always have enough lookahead, except + * at the end of the input file. We need MAX_MATCH bytes + * for the next match, plus MIN_MATCH bytes to insert the + * string following the next match. + */ + if (s->lookahead < MIN_LOOKAHEAD) { + fill_window(s); + if (s->lookahead < MIN_LOOKAHEAD && flush == Z_NO_FLUSH) { + return need_more; + } + if (s->lookahead == 0) break; /* flush the current block */ + } + + /* Insert the string window[strstart .. strstart+2] in the + * dictionary, and set hash_head to the head of the hash chain: + */ + hash_head = NIL; + if (s->lookahead >= MIN_MATCH) { + INSERT_STRING(s, s->strstart, hash_head); + } + + /* Find the longest match, discarding those <= prev_length. + */ + s->prev_length = s->match_length, s->prev_match = s->match_start; + s->match_length = MIN_MATCH-1; + + if (hash_head != NIL && s->prev_length < s->max_lazy_match && + s->strstart - hash_head <= MAX_DIST(s)) { + /* To simplify the code, we prevent matches with the string + * of window index 0 (in particular we have to avoid a match + * of the string with itself at the start of the input file). 
+ */ + s->match_length = longest_match (s, hash_head); + /* longest_match() sets match_start */ + + if (s->match_length <= 5 && (s->strategy == Z_FILTERED +#if TOO_FAR <= 32767 + || (s->match_length == MIN_MATCH && + s->strstart - s->match_start > TOO_FAR) +#endif + )) { + + /* If prev_match is also MIN_MATCH, match_start is garbage + * but we will ignore the current match anyway. + */ + s->match_length = MIN_MATCH-1; + } + } + /* If there was a match at the previous step and the current + * match is not better, output the previous match: + */ + if (s->prev_length >= MIN_MATCH && s->match_length <= s->prev_length) { + uInt max_insert = s->strstart + s->lookahead - MIN_MATCH; + /* Do not insert strings in hash table beyond this. */ + + check_match(s, s->strstart-1, s->prev_match, s->prev_length); + + _tr_tally_dist(s, s->strstart -1 - s->prev_match, + s->prev_length - MIN_MATCH, bflush); + + /* Insert in hash table all strings up to the end of the match. + * strstart-1 and strstart are already inserted. If there is not + * enough lookahead, the last two strings are not inserted in + * the hash table. + */ + s->lookahead -= s->prev_length-1; + s->prev_length -= 2; + do { + if (++s->strstart <= max_insert) { + INSERT_STRING(s, s->strstart, hash_head); + } + } while (--s->prev_length != 0); + s->match_available = 0; + s->match_length = MIN_MATCH-1; + s->strstart++; + + if (bflush) FLUSH_BLOCK(s, 0); + + } else if (s->match_available) { + /* If there was no match at the previous position, output a + * single literal. If there was a match but the current match + * is longer, truncate the previous match to a single literal. + */ + Tracevv((stderr,"%c", s->window[s->strstart-1])); + _tr_tally_lit(s, s->window[s->strstart-1], bflush); + if (bflush) { + FLUSH_BLOCK_ONLY(s, 0); + } + s->strstart++; + s->lookahead--; + if (s->strm->avail_out == 0) return need_more; + } else { + /* There is no previous match to compare with, wait for + * the next step to decide. + */ + s->match_available = 1; + s->strstart++; + s->lookahead--; + } + } + Assert (flush != Z_NO_FLUSH, "no flush?"); + if (s->match_available) { + Tracevv((stderr,"%c", s->window[s->strstart-1])); + _tr_tally_lit(s, s->window[s->strstart-1], bflush); + s->match_available = 0; + } + s->insert = s->strstart < MIN_MATCH-1 ? s->strstart : MIN_MATCH-1; + if (flush == Z_FINISH) { + FLUSH_BLOCK(s, 1); + return finish_done; + } + if (s->last_lit) + FLUSH_BLOCK(s, 0); + return block_done; +} +#endif /* FASTEST */ + +/* =========================================================================== + * For Z_RLE, simply look for runs of bytes, generate matches only of distance + * one. Do not maintain a hash table. (It will be regenerated if this run of + * deflate switches away from Z_RLE.) + */ +local block_state deflate_rle(s, flush) + deflate_state *s; + int flush; +{ + int bflush; /* set if current block must be flushed */ + uInt prev; /* byte at distance one to match */ + Bytef *scan, *strend; /* scan goes up to strend for length of run */ + + for (;;) { + /* Make sure that we always have enough lookahead, except + * at the end of the input file. We need MAX_MATCH bytes + * for the longest run, plus one for the unrolled loop. 
+ */ + if (s->lookahead <= MAX_MATCH) { + fill_window(s); + if (s->lookahead <= MAX_MATCH && flush == Z_NO_FLUSH) { + return need_more; + } + if (s->lookahead == 0) break; /* flush the current block */ + } + + /* See how many times the previous byte repeats */ + s->match_length = 0; + if (s->lookahead >= MIN_MATCH && s->strstart > 0) { + scan = s->window + s->strstart - 1; + prev = *scan; + if (prev == *++scan && prev == *++scan && prev == *++scan) { + strend = s->window + s->strstart + MAX_MATCH; + do { + } while (prev == *++scan && prev == *++scan && + prev == *++scan && prev == *++scan && + prev == *++scan && prev == *++scan && + prev == *++scan && prev == *++scan && + scan < strend); + s->match_length = MAX_MATCH - (int)(strend - scan); + if (s->match_length > s->lookahead) + s->match_length = s->lookahead; + } + Assert(scan <= s->window+(uInt)(s->window_size-1), "wild scan"); + } + + /* Emit match if have run of MIN_MATCH or longer, else emit literal */ + if (s->match_length >= MIN_MATCH) { + check_match(s, s->strstart, s->strstart - 1, s->match_length); + + _tr_tally_dist(s, 1, s->match_length - MIN_MATCH, bflush); + + s->lookahead -= s->match_length; + s->strstart += s->match_length; + s->match_length = 0; + } else { + /* No match, output a literal byte */ + Tracevv((stderr,"%c", s->window[s->strstart])); + _tr_tally_lit (s, s->window[s->strstart], bflush); + s->lookahead--; + s->strstart++; + } + if (bflush) FLUSH_BLOCK(s, 0); + } + s->insert = 0; + if (flush == Z_FINISH) { + FLUSH_BLOCK(s, 1); + return finish_done; + } + if (s->last_lit) + FLUSH_BLOCK(s, 0); + return block_done; +} + +/* =========================================================================== + * For Z_HUFFMAN_ONLY, do not look for matches. Do not maintain a hash table. + * (It will be regenerated if this run of deflate switches away from Huffman.) + */ +local block_state deflate_huff(s, flush) + deflate_state *s; + int flush; +{ + int bflush; /* set if current block must be flushed */ + + for (;;) { + /* Make sure that we have a literal to write. */ + if (s->lookahead == 0) { + fill_window(s); + if (s->lookahead == 0) { + if (flush == Z_NO_FLUSH) + return need_more; + break; /* flush the current block */ + } + } + + /* Output a literal byte */ + s->match_length = 0; + Tracevv((stderr,"%c", s->window[s->strstart])); + _tr_tally_lit (s, s->window[s->strstart], bflush); + s->lookahead--; + s->strstart++; + if (bflush) FLUSH_BLOCK(s, 0); + } + s->insert = 0; + if (flush == Z_FINISH) { + FLUSH_BLOCK(s, 1); + return finish_done; + } + if (s->last_lit) + FLUSH_BLOCK(s, 0); + return block_done; +} ADDED compat/zlib/deflate.h Index: compat/zlib/deflate.h ================================================================== --- compat/zlib/deflate.h +++ compat/zlib/deflate.h @@ -0,0 +1,346 @@ +/* deflate.h -- internal compression state + * Copyright (C) 1995-2012 Jean-loup Gailly + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* WARNING: this file should *not* be used by applications. It is + part of the implementation of the compression library and is + subject to change. Applications should only use zlib.h. + */ + +/* @(#) $Id$ */ + +#ifndef DEFLATE_H +#define DEFLATE_H + +#include "zutil.h" + +/* define NO_GZIP when compiling if you want to disable gzip header and + trailer creation by deflate(). NO_GZIP would be used to avoid linking in + the crc code when it is not needed. For shared libraries, gzip encoding + should be left enabled. 
*/ +#ifndef NO_GZIP +# define GZIP +#endif + +/* =========================================================================== + * Internal compression state. + */ + +#define LENGTH_CODES 29 +/* number of length codes, not counting the special END_BLOCK code */ + +#define LITERALS 256 +/* number of literal bytes 0..255 */ + +#define L_CODES (LITERALS+1+LENGTH_CODES) +/* number of Literal or Length codes, including the END_BLOCK code */ + +#define D_CODES 30 +/* number of distance codes */ + +#define BL_CODES 19 +/* number of codes used to transfer the bit lengths */ + +#define HEAP_SIZE (2*L_CODES+1) +/* maximum heap size */ + +#define MAX_BITS 15 +/* All codes must not exceed MAX_BITS bits */ + +#define Buf_size 16 +/* size of bit buffer in bi_buf */ + +#define INIT_STATE 42 +#define EXTRA_STATE 69 +#define NAME_STATE 73 +#define COMMENT_STATE 91 +#define HCRC_STATE 103 +#define BUSY_STATE 113 +#define FINISH_STATE 666 +/* Stream status */ + + +/* Data structure describing a single value and its code string. */ +typedef struct ct_data_s { + union { + ush freq; /* frequency count */ + ush code; /* bit string */ + } fc; + union { + ush dad; /* father node in Huffman tree */ + ush len; /* length of bit string */ + } dl; +} FAR ct_data; + +#define Freq fc.freq +#define Code fc.code +#define Dad dl.dad +#define Len dl.len + +typedef struct static_tree_desc_s static_tree_desc; + +typedef struct tree_desc_s { + ct_data *dyn_tree; /* the dynamic tree */ + int max_code; /* largest code with non zero frequency */ + static_tree_desc *stat_desc; /* the corresponding static tree */ +} FAR tree_desc; + +typedef ush Pos; +typedef Pos FAR Posf; +typedef unsigned IPos; + +/* A Pos is an index in the character window. We use short instead of int to + * save space in the various tables. IPos is used only for parameter passing. + */ + +typedef struct internal_state { + z_streamp strm; /* pointer back to this zlib stream */ + int status; /* as the name implies */ + Bytef *pending_buf; /* output still pending */ + ulg pending_buf_size; /* size of pending_buf */ + Bytef *pending_out; /* next pending byte to output to the stream */ + uInt pending; /* nb of bytes in the pending buffer */ + int wrap; /* bit 0 true for zlib, bit 1 true for gzip */ + gz_headerp gzhead; /* gzip header information to write */ + uInt gzindex; /* where in extra, name, or comment */ + Byte method; /* can only be DEFLATED */ + int last_flush; /* value of flush param for previous deflate call */ + + /* used by deflate.c: */ + + uInt w_size; /* LZ77 window size (32K by default) */ + uInt w_bits; /* log2(w_size) (8..16) */ + uInt w_mask; /* w_size - 1 */ + + Bytef *window; + /* Sliding window. Input bytes are read into the second half of the window, + * and move to the first half later to keep a dictionary of at least wSize + * bytes. With this organization, matches are limited to a distance of + * wSize-MAX_MATCH bytes, but this ensures that IO is always + * performed with a length multiple of the block size. Also, it limits + * the window size to 64K, which is quite useful on MSDOS. + * To do: use the user input buffer as sliding window. + */ + + ulg window_size; + /* Actual size of window: 2*wSize, except when the user input buffer + * is directly used as sliding window. + */ + + Posf *prev; + /* Link to older string with same hash index. To limit the size of this + * array to 64K, this link is maintained only for the last 32K strings. + * An index in this array is thus a window index modulo 32K. 
+ */ + + Posf *head; /* Heads of the hash chains or NIL. */ + + uInt ins_h; /* hash index of string to be inserted */ + uInt hash_size; /* number of elements in hash table */ + uInt hash_bits; /* log2(hash_size) */ + uInt hash_mask; /* hash_size-1 */ + + uInt hash_shift; + /* Number of bits by which ins_h must be shifted at each input + * step. It must be such that after MIN_MATCH steps, the oldest + * byte no longer takes part in the hash key, that is: + * hash_shift * MIN_MATCH >= hash_bits + */ + + long block_start; + /* Window position at the beginning of the current output block. Gets + * negative when the window is moved backwards. + */ + + uInt match_length; /* length of best match */ + IPos prev_match; /* previous match */ + int match_available; /* set if previous match exists */ + uInt strstart; /* start of string to insert */ + uInt match_start; /* start of matching string */ + uInt lookahead; /* number of valid bytes ahead in window */ + + uInt prev_length; + /* Length of the best match at previous step. Matches not greater than this + * are discarded. This is used in the lazy match evaluation. + */ + + uInt max_chain_length; + /* To speed up deflation, hash chains are never searched beyond this + * length. A higher limit improves compression ratio but degrades the + * speed. + */ + + uInt max_lazy_match; + /* Attempt to find a better match only when the current match is strictly + * smaller than this value. This mechanism is used only for compression + * levels >= 4. + */ +# define max_insert_length max_lazy_match + /* Insert new strings in the hash table only if the match length is not + * greater than this length. This saves time but degrades compression. + * max_insert_length is used only for compression levels <= 3. + */ + + int level; /* compression level (1..9) */ + int strategy; /* favor or force Huffman coding*/ + + uInt good_match; + /* Use a faster search when the previous match is longer than this */ + + int nice_match; /* Stop searching when current match exceeds this */ + + /* used by trees.c: */ + /* Didn't use ct_data typedef below to suppress compiler warning */ + struct ct_data_s dyn_ltree[HEAP_SIZE]; /* literal and length tree */ + struct ct_data_s dyn_dtree[2*D_CODES+1]; /* distance tree */ + struct ct_data_s bl_tree[2*BL_CODES+1]; /* Huffman tree for bit lengths */ + + struct tree_desc_s l_desc; /* desc. for literal tree */ + struct tree_desc_s d_desc; /* desc. for distance tree */ + struct tree_desc_s bl_desc; /* desc. for bit length tree */ + + ush bl_count[MAX_BITS+1]; + /* number of codes at each bit length for an optimal tree */ + + int heap[2*L_CODES+1]; /* heap used to build the Huffman trees */ + int heap_len; /* number of elements in the heap */ + int heap_max; /* element of largest frequency */ + /* The sons of heap[n] are heap[2*n] and heap[2*n+1]. heap[0] is not used. + * The same heap array is used to build all trees. + */ + + uch depth[2*L_CODES+1]; + /* Depth of each subtree used as tie breaker for trees of equal frequency + */ + + uchf *l_buf; /* buffer for literals or lengths */ + + uInt lit_bufsize; + /* Size of match buffer for literals/lengths. There are 4 reasons for + * limiting lit_bufsize to 64K: + * - frequencies can be kept in 16 bit counters + * - if compression is not successful for the first block, all input + * data is still in the window so we can still emit a stored block even + * when input comes from standard input. (This can also be done for + * all blocks if lit_bufsize is not greater than 32K.) 
+ * - if compression is not successful for a file smaller than 64K, we can + * even emit a stored file instead of a stored block (saving 5 bytes). + * This is applicable only for zip (not gzip or zlib). + * - creating new Huffman trees less frequently may not provide fast + * adaptation to changes in the input data statistics. (Take for + * example a binary file with poorly compressible code followed by + * a highly compressible string table.) Smaller buffer sizes give + * fast adaptation but have of course the overhead of transmitting + * trees more frequently. + * - I can't count above 4 + */ + + uInt last_lit; /* running index in l_buf */ + + ushf *d_buf; + /* Buffer for distances. To simplify the code, d_buf and l_buf have + * the same number of elements. To use different lengths, an extra flag + * array would be necessary. + */ + + ulg opt_len; /* bit length of current block with optimal trees */ + ulg static_len; /* bit length of current block with static trees */ + uInt matches; /* number of string matches in current block */ + uInt insert; /* bytes at end of window left to insert */ + +#ifdef DEBUG + ulg compressed_len; /* total bit length of compressed file mod 2^32 */ + ulg bits_sent; /* bit length of compressed data sent mod 2^32 */ +#endif + + ush bi_buf; + /* Output buffer. bits are inserted starting at the bottom (least + * significant bits). + */ + int bi_valid; + /* Number of valid bits in bi_buf. All bits above the last valid bit + * are always zero. + */ + + ulg high_water; + /* High water mark offset in window for initialized bytes -- bytes above + * this are set to zero in order to avoid memory check warnings when + * longest match routines access bytes past the input. This is then + * updated to the new high water mark. + */ + +} FAR deflate_state; + +/* Output a byte on the stream. + * IN assertion: there is enough room in pending_buf. + */ +#define put_byte(s, c) {s->pending_buf[s->pending++] = (c);} + + +#define MIN_LOOKAHEAD (MAX_MATCH+MIN_MATCH+1) +/* Minimum amount of lookahead, except at the end of the input file. + * See deflate.c for comments about the MIN_MATCH+1. + */ + +#define MAX_DIST(s) ((s)->w_size-MIN_LOOKAHEAD) +/* In order to simplify the code, particularly on 16 bit machines, match + * distances are limited to MAX_DIST instead of WSIZE. + */ + +#define WIN_INIT MAX_MATCH +/* Number of bytes after end of data in window to initialize in order to avoid + memory checker errors from longest match routines */ + + /* in trees.c */ +void ZLIB_INTERNAL _tr_init OF((deflate_state *s)); +int ZLIB_INTERNAL _tr_tally OF((deflate_state *s, unsigned dist, unsigned lc)); +void ZLIB_INTERNAL _tr_flush_block OF((deflate_state *s, charf *buf, + ulg stored_len, int last)); +void ZLIB_INTERNAL _tr_flush_bits OF((deflate_state *s)); +void ZLIB_INTERNAL _tr_align OF((deflate_state *s)); +void ZLIB_INTERNAL _tr_stored_block OF((deflate_state *s, charf *buf, + ulg stored_len, int last)); + +#define d_code(dist) \ + ((dist) < 256 ? _dist_code[dist] : _dist_code[256+((dist)>>7)]) +/* Mapping from a distance to a distance code. dist is the distance - 1 and + * must not have side effects. _dist_code[256] and _dist_code[257] are never + * used. 
+ */ + +#ifndef DEBUG +/* Inline versions of _tr_tally for speed: */ + +#if defined(GEN_TREES_H) || !defined(STDC) + extern uch ZLIB_INTERNAL _length_code[]; + extern uch ZLIB_INTERNAL _dist_code[]; +#else + extern const uch ZLIB_INTERNAL _length_code[]; + extern const uch ZLIB_INTERNAL _dist_code[]; +#endif + +# define _tr_tally_lit(s, c, flush) \ + { uch cc = (c); \ + s->d_buf[s->last_lit] = 0; \ + s->l_buf[s->last_lit++] = cc; \ + s->dyn_ltree[cc].Freq++; \ + flush = (s->last_lit == s->lit_bufsize-1); \ + } +# define _tr_tally_dist(s, distance, length, flush) \ + { uch len = (length); \ + ush dist = (distance); \ + s->d_buf[s->last_lit] = dist; \ + s->l_buf[s->last_lit++] = len; \ + dist--; \ + s->dyn_ltree[_length_code[len]+LITERALS+1].Freq++; \ + s->dyn_dtree[d_code(dist)].Freq++; \ + flush = (s->last_lit == s->lit_bufsize-1); \ + } +#else +# define _tr_tally_lit(s, c, flush) flush = _tr_tally(s, 0, c) +# define _tr_tally_dist(s, distance, length, flush) \ + flush = _tr_tally(s, distance, length) +#endif + +#endif /* DEFLATE_H */ ADDED compat/zlib/doc/algorithm.txt Index: compat/zlib/doc/algorithm.txt ================================================================== --- compat/zlib/doc/algorithm.txt +++ compat/zlib/doc/algorithm.txt @@ -0,0 +1,209 @@ +1. Compression algorithm (deflate) + +The deflation algorithm used by gzip (also zip and zlib) is a variation of +LZ77 (Lempel-Ziv 1977, see reference below). It finds duplicated strings in +the input data. The second occurrence of a string is replaced by a +pointer to the previous string, in the form of a pair (distance, +length). Distances are limited to 32K bytes, and lengths are limited +to 258 bytes. When a string does not occur anywhere in the previous +32K bytes, it is emitted as a sequence of literal bytes. (In this +description, `string' must be taken as an arbitrary sequence of bytes, +and is not restricted to printable characters.) + +Literals or match lengths are compressed with one Huffman tree, and +match distances are compressed with another tree. The trees are stored +in a compact form at the start of each block. The blocks can have any +size (except that the compressed data for one block must fit in +available memory). A block is terminated when deflate() determines that +it would be useful to start another block with fresh trees. (This is +somewhat similar to the behavior of LZW-based _compress_.) + +Duplicated strings are found using a hash table. All input strings of +length 3 are inserted in the hash table. A hash index is computed for +the next 3 bytes. If the hash chain for this index is not empty, all +strings in the chain are compared with the current input string, and +the longest match is selected. + +The hash chains are searched starting with the most recent strings, to +favor small distances and thus take advantage of the Huffman encoding. +The hash chains are singly linked. There are no deletions from the +hash chains, the algorithm simply discards matches that are too old. + +To avoid a worst-case situation, very long hash chains are arbitrarily +truncated at a certain length, determined by a runtime option (level +parameter of deflateInit). So deflate() does not always find the longest +possible match but generally finds a match which is long enough. + +deflate() also defers the selection of matches with a lazy evaluation +mechanism. After a match of length N has been found, deflate() searches for +a longer match at the next input byte. 
If a longer match is found, the +previous match is truncated to a length of one (thus producing a single +literal byte) and the process of lazy evaluation begins again. Otherwise, +the original match is kept, and the next match search is attempted only N +steps later. + +The lazy match evaluation is also subject to a runtime parameter. If +the current match is long enough, deflate() reduces the search for a longer +match, thus speeding up the whole process. If compression ratio is more +important than speed, deflate() attempts a complete second search even if +the first match is already long enough. + +The lazy match evaluation is not performed for the fastest compression +modes (level parameter 1 to 3). For these fast modes, new strings +are inserted in the hash table only when no match was found, or +when the match is not too long. This degrades the compression ratio +but saves time since there are both fewer insertions and fewer searches. + + +2. Decompression algorithm (inflate) + +2.1 Introduction + +The key question is how to represent a Huffman code (or any prefix code) so +that you can decode fast. The most important characteristic is that shorter +codes are much more common than longer codes, so pay attention to decoding the +short codes fast, and let the long codes take longer to decode. + +inflate() sets up a first level table that covers some number of bits of +input less than the length of longest code. It gets that many bits from the +stream, and looks it up in the table. The table will tell if the next +code is that many bits or less and how many, and if it is, it will tell +the value, else it will point to the next level table for which inflate() +grabs more bits and tries to decode a longer code. + +How many bits to make the first lookup is a tradeoff between the time it +takes to decode and the time it takes to build the table. If building the +table took no time (and if you had infinite memory), then there would only +be a first level table to cover all the way to the longest code. However, +building the table ends up taking a lot longer for more bits since short +codes are replicated many times in such a table. What inflate() does is +simply to make the number of bits in the first table a variable, and then +to set that variable for the maximum speed. + +For inflate, which has 286 possible codes for the literal/length tree, the size +of the first table is nine bits. Also the distance trees have 30 possible +values, and the size of the first table is six bits. Note that for each of +those cases, the table ended up one bit longer than the ``average'' code +length, i.e. the code length of an approximately flat code which would be a +little more than eight bits for 286 symbols and a little less than five bits +for 30 symbols. + + +2.2 More details on the inflate table lookup + +Ok, you want to know what this cleverly obfuscated inflate tree actually +looks like. You are correct that it's not a Huffman tree. It is simply a +lookup table for the first, let's say, nine bits of a Huffman symbol. The +symbol could be as short as one bit or as long as 15 bits. If a particular +symbol is shorter than nine bits, then that symbol's translation is duplicated +in all those entries that start with that symbol's bits. For example, if the +symbol is four bits, then it's duplicated 32 times in a nine-bit table. If a +symbol is nine bits long, it appears in the table once. 
+ +If the symbol is longer than nine bits, then that entry in the table points +to another similar table for the remaining bits. Again, there are duplicated +entries as needed. The idea is that most of the time the symbol will be short +and there will only be one table look up. (That's whole idea behind data +compression in the first place.) For the less frequent long symbols, there +will be two lookups. If you had a compression method with really long +symbols, you could have as many levels of lookups as is efficient. For +inflate, two is enough. + +So a table entry either points to another table (in which case nine bits in +the above example are gobbled), or it contains the translation for the symbol +and the number of bits to gobble. Then you start again with the next +ungobbled bit. + +You may wonder: why not just have one lookup table for how ever many bits the +longest symbol is? The reason is that if you do that, you end up spending +more time filling in duplicate symbol entries than you do actually decoding. +At least for deflate's output that generates new trees every several 10's of +kbytes. You can imagine that filling in a 2^15 entry table for a 15-bit code +would take too long if you're only decoding several thousand symbols. At the +other extreme, you could make a new table for every bit in the code. In fact, +that's essentially a Huffman tree. But then you spend too much time +traversing the tree while decoding, even for short symbols. + +So the number of bits for the first lookup table is a trade of the time to +fill out the table vs. the time spent looking at the second level and above of +the table. + +Here is an example, scaled down: + +The code being decoded, with 10 symbols, from 1 to 6 bits long: + +A: 0 +B: 10 +C: 1100 +D: 11010 +E: 11011 +F: 11100 +G: 11101 +H: 11110 +I: 111110 +J: 111111 + +Let's make the first table three bits long (eight entries): + +000: A,1 +001: A,1 +010: A,1 +011: A,1 +100: B,2 +101: B,2 +110: -> table X (gobble 3 bits) +111: -> table Y (gobble 3 bits) + +Each entry is what the bits decode as and how many bits that is, i.e. how +many bits to gobble. Or the entry points to another table, with the number of +bits to gobble implicit in the size of the table. + +Table X is two bits long since the longest code starting with 110 is five bits +long: + +00: C,1 +01: C,1 +10: D,2 +11: E,2 + +Table Y is three bits long since the longest code starting with 111 is six +bits long: + +000: F,2 +001: F,2 +010: G,2 +011: G,2 +100: H,2 +101: H,2 +110: I,3 +111: J,3 + +So what we have here are three tables with a total of 20 entries that had to +be constructed. That's compared to 64 entries for a single table. Or +compared to 16 entries for a Huffman tree (six two entry tables and one four +entry table). Assuming that the code ideally represents the probability of +the symbols, it takes on the average 1.25 lookups per symbol. That's compared +to one lookup for the single table, or 1.66 lookups per symbol for the +Huffman tree. + +There, I think that gives you a picture of what's going on. For inflate, the +meaning of a particular symbol is often more than just a letter. It can be a +byte (a "literal"), or it can be either a length or a distance which +indicates a base value and a number of bits to fetch after the code that is +added to the base value. Or it might be the special end-of-block code. The +data structures created in inftrees.c try to encode all that information +compactly in the tables. 
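+
+Below is a minimal, illustrative C sketch of the scaled-down example above:
+a three-bit first-level table whose entries either decode a symbol outright
+or point to a second-level table for the codes starting with 110 or 111.
+The entry layout, the bit-source helper and all names here are hypothetical
+and far simpler than the packed tables that inftrees.c actually builds; in
+particular, real deflate streams store Huffman codes bit-reversed within
+bytes, a detail this sketch ignores.
+
+    #include <stdio.h>
+
+    /* A table entry: either a directly decoded symbol, or (when sub is
+     * non-NULL) a pointer to a second-level table that is subbits wide. */
+    struct entry {
+        char sym;                 /* decoded symbol, if sub == NULL        */
+        int  bits;                /* bits to consume: whole code (level 1)
+                                     or bits beyond the first 3 (level 2)  */
+        const struct entry *sub;  /* second-level table, or NULL           */
+        int  subbits;             /* width of the second-level table       */
+    };
+
+    /* Second-level tables for codes beginning with 110 and 111. */
+    static const struct entry tab_x[4] = {            /* 2 bits wide */
+        {'C', 1, NULL, 0}, {'C', 1, NULL, 0}, {'D', 2, NULL, 0}, {'E', 2, NULL, 0}
+    };
+    static const struct entry tab_y[8] = {            /* 3 bits wide */
+        {'F', 2, NULL, 0}, {'F', 2, NULL, 0}, {'G', 2, NULL, 0}, {'G', 2, NULL, 0},
+        {'H', 2, NULL, 0}, {'H', 2, NULL, 0}, {'I', 3, NULL, 0}, {'J', 3, NULL, 0}
+    };
+
+    /* First-level table, indexed by the next three input bits. */
+    static const struct entry tab1[8] = {
+        {'A', 1, NULL, 0}, {'A', 1, NULL, 0}, {'A', 1, NULL, 0}, {'A', 1, NULL, 0},
+        {'B', 2, NULL, 0}, {'B', 2, NULL, 0},
+        { 0,  3, tab_x, 2}, { 0,  3, tab_y, 3}
+    };
+
+    /* Toy bit source: a string of '0'/'1' characters, the first code bit
+     * becoming the most-significant bit of the peeked value. */
+    struct bitsrc { const char *bits; size_t pos; };
+
+    static unsigned peek(const struct bitsrc *s, int n)
+    {
+        unsigned v = 0;
+        size_t p = s->pos;
+        int i;
+        for (i = 0; i < n; i++) {
+            char c = s->bits[p];
+            if (c != '\0') p++;           /* pad with zero bits past the end */
+            v = (v << 1) | (c == '1' ? 1u : 0u);
+        }
+        return v;
+    }
+
+    static void drop(struct bitsrc *s, int n)
+    {
+        while (n-- > 0 && s->bits[s->pos] != '\0') s->pos++;
+    }
+
+    /* Decode one symbol: usually a single lookup, two for the long codes. */
+    static char decode_one(struct bitsrc *s)
+    {
+        const struct entry *e = &tab1[peek(s, 3)];
+        const struct entry *e2;
+        if (e->sub == NULL) {             /* short code: one lookup suffices */
+            drop(s, e->bits);
+            return e->sym;
+        }
+        drop(s, 3);                       /* gobble the three first-level bits */
+        e2 = &e->sub[peek(s, e->subbits)];
+        drop(s, e2->bits);                /* gobble the remaining code bits */
+        return e2->sym;
+    }
+
+    int main(void)
+    {
+        /* "0101100111111" is A (0), B (10), C (1100), J (111111). */
+        struct bitsrc s = { "0101100111111", 0 };
+        while (s.bits[s.pos] != '\0')
+            putchar(decode_one(&s));      /* prints ABCJ */
+        putchar('\n');
+        return 0;
+    }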
+ + +Jean-loup Gailly Mark Adler +jloup@gzip.org madler@alumni.caltech.edu + + +References: + +[LZ77] Ziv J., Lempel A., ``A Universal Algorithm for Sequential Data +Compression,'' IEEE Transactions on Information Theory, Vol. 23, No. 3, +pp. 337-343. + +``DEFLATE Compressed Data Format Specification'' available in +http://tools.ietf.org/html/rfc1951 ADDED compat/zlib/doc/rfc1950.txt Index: compat/zlib/doc/rfc1950.txt ================================================================== --- compat/zlib/doc/rfc1950.txt +++ compat/zlib/doc/rfc1950.txt @@ -0,0 +1,619 @@ + + + + + + +Network Working Group P. Deutsch +Request for Comments: 1950 Aladdin Enterprises +Category: Informational J-L. Gailly + Info-ZIP + May 1996 + + + ZLIB Compressed Data Format Specification version 3.3 + +Status of This Memo + + This memo provides information for the Internet community. This memo + does not specify an Internet standard of any kind. Distribution of + this memo is unlimited. + +IESG Note: + + The IESG takes no position on the validity of any Intellectual + Property Rights statements contained in this document. + +Notices + + Copyright (c) 1996 L. Peter Deutsch and Jean-Loup Gailly + + Permission is granted to copy and distribute this document for any + purpose and without charge, including translations into other + languages and incorporation into compilations, provided that the + copyright notice and this notice are preserved, and that any + substantive changes or deletions from the original are clearly + marked. + + A pointer to the latest version of this and related documentation in + HTML format can be found at the URL + . + +Abstract + + This specification defines a lossless compressed data format. The + data can be produced or consumed, even for an arbitrarily long + sequentially presented input data stream, using only an a priori + bounded amount of intermediate storage. The format presently uses + the DEFLATE compression method but can be easily extended to use + other compression methods. It can be implemented readily in a manner + not covered by patents. This specification also defines the ADLER-32 + checksum (an extension and improvement of the Fletcher checksum), + used for detection of data corruption, and provides an algorithm for + computing it. + + + + +Deutsch & Gailly Informational [Page 1] + +RFC 1950 ZLIB Compressed Data Format Specification May 1996 + + +Table of Contents + + 1. Introduction ................................................... 2 + 1.1. Purpose ................................................... 2 + 1.2. Intended audience ......................................... 3 + 1.3. Scope ..................................................... 3 + 1.4. Compliance ................................................ 3 + 1.5. Definitions of terms and conventions used ................ 3 + 1.6. Changes from previous versions ............................ 3 + 2. Detailed specification ......................................... 3 + 2.1. Overall conventions ....................................... 3 + 2.2. Data format ............................................... 4 + 2.3. Compliance ................................................ 7 + 3. References ..................................................... 7 + 4. Source code .................................................... 8 + 5. Security Considerations ........................................ 8 + 6. Acknowledgements ............................................... 8 + 7. Authors' Addresses ............................................. 8 + 8. 
Appendix: Rationale ............................................ 9 + 9. Appendix: Sample code ..........................................10 + +1. Introduction + + 1.1. Purpose + + The purpose of this specification is to define a lossless + compressed data format that: + + * Is independent of CPU type, operating system, file system, + and character set, and hence can be used for interchange; + + * Can be produced or consumed, even for an arbitrarily long + sequentially presented input data stream, using only an a + priori bounded amount of intermediate storage, and hence can + be used in data communications or similar structures such as + Unix filters; + + * Can use a number of different compression methods; + + * Can be implemented readily in a manner not covered by + patents, and hence can be practiced freely. + + The data format defined by this specification does not attempt to + allow random access to compressed data. + + + + + + + +Deutsch & Gailly Informational [Page 2] + +RFC 1950 ZLIB Compressed Data Format Specification May 1996 + + + 1.2. Intended audience + + This specification is intended for use by implementors of software + to compress data into zlib format and/or decompress data from zlib + format. + + The text of the specification assumes a basic background in + programming at the level of bits and other primitive data + representations. + + 1.3. Scope + + The specification specifies a compressed data format that can be + used for in-memory compression of a sequence of arbitrary bytes. + + 1.4. Compliance + + Unless otherwise indicated below, a compliant decompressor must be + able to accept and decompress any data set that conforms to all + the specifications presented here; a compliant compressor must + produce data sets that conform to all the specifications presented + here. + + 1.5. Definitions of terms and conventions used + + byte: 8 bits stored or transmitted as a unit (same as an octet). + (For this specification, a byte is exactly 8 bits, even on + machines which store a character on a number of bits different + from 8.) See below, for the numbering of bits within a byte. + + 1.6. Changes from previous versions + + Version 3.1 was the first public release of this specification. + In version 3.2, some terminology was changed and the Adler-32 + sample code was rewritten for clarity. In version 3.3, the + support for a preset dictionary was introduced, and the + specification was converted to RFC style. + +2. Detailed specification + + 2.1. Overall conventions + + In the diagrams below, a box like this: + + +---+ + | | <-- the vertical bars might be missing + +---+ + + + + +Deutsch & Gailly Informational [Page 3] + +RFC 1950 ZLIB Compressed Data Format Specification May 1996 + + + represents one byte; a box like this: + + +==============+ + | | + +==============+ + + represents a variable number of bytes. + + Bytes stored within a computer do not have a "bit order", since + they are always treated as a unit. However, a byte considered as + an integer between 0 and 255 does have a most- and least- + significant bit, and since we write numbers with the most- + significant digit on the left, we also write bytes with the most- + significant bit on the left. In the diagrams below, we number the + bits of a byte so that bit 0 is the least-significant bit, i.e., + the bits are numbered: + + +--------+ + |76543210| + +--------+ + + Within a computer, a number may occupy multiple bytes. 
All + multi-byte numbers in the format described here are stored with + the MOST-significant byte first (at the lower memory address). + For example, the decimal number 520 is stored as: + + 0 1 + +--------+--------+ + |00000010|00001000| + +--------+--------+ + ^ ^ + | | + | + less significant byte = 8 + + more significant byte = 2 x 256 + + 2.2. Data format + + A zlib stream has the following structure: + + 0 1 + +---+---+ + |CMF|FLG| (more-->) + +---+---+ + + + + + + + + +Deutsch & Gailly Informational [Page 4] + +RFC 1950 ZLIB Compressed Data Format Specification May 1996 + + + (if FLG.FDICT set) + + 0 1 2 3 + +---+---+---+---+ + | DICTID | (more-->) + +---+---+---+---+ + + +=====================+---+---+---+---+ + |...compressed data...| ADLER32 | + +=====================+---+---+---+---+ + + Any data which may appear after ADLER32 are not part of the zlib + stream. + + CMF (Compression Method and flags) + This byte is divided into a 4-bit compression method and a 4- + bit information field depending on the compression method. + + bits 0 to 3 CM Compression method + bits 4 to 7 CINFO Compression info + + CM (Compression method) + This identifies the compression method used in the file. CM = 8 + denotes the "deflate" compression method with a window size up + to 32K. This is the method used by gzip and PNG (see + references [1] and [2] in Chapter 3, below, for the reference + documents). CM = 15 is reserved. It might be used in a future + version of this specification to indicate the presence of an + extra field before the compressed data. + + CINFO (Compression info) + For CM = 8, CINFO is the base-2 logarithm of the LZ77 window + size, minus eight (CINFO=7 indicates a 32K window size). Values + of CINFO above 7 are not allowed in this version of the + specification. CINFO is not defined in this specification for + CM not equal to 8. + + FLG (FLaGs) + This flag byte is divided as follows: + + bits 0 to 4 FCHECK (check bits for CMF and FLG) + bit 5 FDICT (preset dictionary) + bits 6 to 7 FLEVEL (compression level) + + The FCHECK value must be such that CMF and FLG, when viewed as + a 16-bit unsigned integer stored in MSB order (CMF*256 + FLG), + is a multiple of 31. + + + + +Deutsch & Gailly Informational [Page 5] + +RFC 1950 ZLIB Compressed Data Format Specification May 1996 + + + FDICT (Preset dictionary) + If FDICT is set, a DICT dictionary identifier is present + immediately after the FLG byte. The dictionary is a sequence of + bytes which are initially fed to the compressor without + producing any compressed output. DICT is the Adler-32 checksum + of this sequence of bytes (see the definition of ADLER32 + below). The decompressor can use this identifier to determine + which dictionary has been used by the compressor. + + FLEVEL (Compression level) + These flags are available for use by specific compression + methods. The "deflate" method (CM = 8) sets these flags as + follows: + + 0 - compressor used fastest algorithm + 1 - compressor used fast algorithm + 2 - compressor used default algorithm + 3 - compressor used maximum compression, slowest algorithm + + The information in FLEVEL is not needed for decompression; it + is there to indicate if recompression might be worthwhile. + + compressed data + For compression method 8, the compressed data is stored in the + deflate compressed data format as described in the document + "DEFLATE Compressed Data Format Specification" by L. Peter + Deutsch. 
(See reference [3] in Chapter 3, below) + + Other compressed data formats are not specified in this version + of the zlib specification. + + ADLER32 (Adler-32 checksum) + This contains a checksum value of the uncompressed data + (excluding any dictionary data) computed according to Adler-32 + algorithm. This algorithm is a 32-bit extension and improvement + of the Fletcher algorithm, used in the ITU-T X.224 / ISO 8073 + standard. See references [4] and [5] in Chapter 3, below) + + Adler-32 is composed of two sums accumulated per byte: s1 is + the sum of all bytes, s2 is the sum of all s1 values. Both sums + are done modulo 65521. s1 is initialized to 1, s2 to zero. The + Adler-32 checksum is stored as s2*65536 + s1 in most- + significant-byte first (network) order. + + + + + + + + +Deutsch & Gailly Informational [Page 6] + +RFC 1950 ZLIB Compressed Data Format Specification May 1996 + + + 2.3. Compliance + + A compliant compressor must produce streams with correct CMF, FLG + and ADLER32, but need not support preset dictionaries. When the + zlib data format is used as part of another standard data format, + the compressor may use only preset dictionaries that are specified + by this other data format. If this other format does not use the + preset dictionary feature, the compressor must not set the FDICT + flag. + + A compliant decompressor must check CMF, FLG, and ADLER32, and + provide an error indication if any of these have incorrect values. + A compliant decompressor must give an error indication if CM is + not one of the values defined in this specification (only the + value 8 is permitted in this version), since another value could + indicate the presence of new features that would cause subsequent + data to be interpreted incorrectly. A compliant decompressor must + give an error indication if FDICT is set and DICTID is not the + identifier of a known preset dictionary. A decompressor may + ignore FLEVEL and still be compliant. When the zlib data format + is being used as a part of another standard format, a compliant + decompressor must support all the preset dictionaries specified by + the other format. When the other format does not use the preset + dictionary feature, a compliant decompressor must reject any + stream in which the FDICT flag is set. + +3. References + + [1] Deutsch, L.P.,"GZIP Compressed Data Format Specification", + available in ftp://ftp.uu.net/pub/archiving/zip/doc/ + + [2] Thomas Boutell, "PNG (Portable Network Graphics) specification", + available in ftp://ftp.uu.net/graphics/png/documents/ + + [3] Deutsch, L.P.,"DEFLATE Compressed Data Format Specification", + available in ftp://ftp.uu.net/pub/archiving/zip/doc/ + + [4] Fletcher, J. G., "An Arithmetic Checksum for Serial + Transmissions," IEEE Transactions on Communications, Vol. COM-30, + No. 1, January 1982, pp. 247-252. + + [5] ITU-T Recommendation X.224, Annex D, "Checksum Algorithms," + November, 1993, pp. 144, 145. (Available from + gopher://info.itu.ch). ITU-T X.244 is also the same as ISO 8073. + + + + + + + +Deutsch & Gailly Informational [Page 7] + +RFC 1950 ZLIB Compressed Data Format Specification May 1996 + + +4. Source code + + Source code for a C language implementation of a "zlib" compliant + library is available at ftp://ftp.uu.net/pub/archiving/zip/zlib/. + +5. Security Considerations + + A decoder that fails to check the ADLER32 checksum value may be + subject to undetected data corruption. + +6. 
Acknowledgements + + Trademarks cited in this document are the property of their + respective owners. + + Jean-Loup Gailly and Mark Adler designed the zlib format and wrote + the related software described in this specification. Glenn + Randers-Pehrson converted this document to RFC and HTML format. + +7. Authors' Addresses + + L. Peter Deutsch + Aladdin Enterprises + 203 Santa Margarita Ave. + Menlo Park, CA 94025 + + Phone: (415) 322-0103 (AM only) + FAX: (415) 322-1734 + EMail: + + + Jean-Loup Gailly + + EMail: + + Questions about the technical content of this specification can be + sent by email to + + Jean-Loup Gailly and + Mark Adler + + Editorial comments on this specification can be sent by email to + + L. Peter Deutsch and + Glenn Randers-Pehrson + + + + + + +Deutsch & Gailly Informational [Page 8] + +RFC 1950 ZLIB Compressed Data Format Specification May 1996 + + +8. Appendix: Rationale + + 8.1. Preset dictionaries + + A preset dictionary is specially useful to compress short input + sequences. The compressor can take advantage of the dictionary + context to encode the input in a more compact manner. The + decompressor can be initialized with the appropriate context by + virtually decompressing a compressed version of the dictionary + without producing any output. However for certain compression + algorithms such as the deflate algorithm this operation can be + achieved without actually performing any decompression. + + The compressor and the decompressor must use exactly the same + dictionary. The dictionary may be fixed or may be chosen among a + certain number of predefined dictionaries, according to the kind + of input data. The decompressor can determine which dictionary has + been chosen by the compressor by checking the dictionary + identifier. This document does not specify the contents of + predefined dictionaries, since the optimal dictionaries are + application specific. Standard data formats using this feature of + the zlib specification must precisely define the allowed + dictionaries. + + 8.2. The Adler-32 algorithm + + The Adler-32 algorithm is much faster than the CRC32 algorithm yet + still provides an extremely low probability of undetected errors. + + The modulo on unsigned long accumulators can be delayed for 5552 + bytes, so the modulo operation time is negligible. If the bytes + are a, b, c, the second sum is 3a + 2b + c + 3, and so is position + and order sensitive, unlike the first sum, which is just a + checksum. That 65521 is prime is important to avoid a possible + large class of two-byte errors that leave the check unchanged. + (The Fletcher checksum uses 255, which is not prime and which also + makes the Fletcher check insensitive to single byte changes 0 <-> + 255.) + + The sum s1 is initialized to 1 instead of zero to make the length + of the sequence part of s2, so that the length does not have to be + checked separately. (Any sequence of zeroes has a Fletcher + checksum of zero.) + + + + + + + + +Deutsch & Gailly Informational [Page 9] + +RFC 1950 ZLIB Compressed Data Format Specification May 1996 + + +9. Appendix: Sample code + + The following C code computes the Adler-32 checksum of a data buffer. + It is written for clarity, not for speed. The sample code is in the + ANSI C programming language. Non C users may find it easier to read + with these hints: + + & Bitwise AND operator. + >> Bitwise right shift operator. When applied to an + unsigned quantity, as here, right shift inserts zero bit(s) + at the left. + << Bitwise left shift operator. 
Left shift inserts zero + bit(s) at the right. + ++ "n++" increments the variable n. + % modulo operator: a % b is the remainder of a divided by b. + + #define BASE 65521 /* largest prime smaller than 65536 */ + + /* + Update a running Adler-32 checksum with the bytes buf[0..len-1] + and return the updated checksum. The Adler-32 checksum should be + initialized to 1. + + Usage example: + + unsigned long adler = 1L; + + while (read_buffer(buffer, length) != EOF) { + adler = update_adler32(adler, buffer, length); + } + if (adler != original_adler) error(); + */ + unsigned long update_adler32(unsigned long adler, + unsigned char *buf, int len) + { + unsigned long s1 = adler & 0xffff; + unsigned long s2 = (adler >> 16) & 0xffff; + int n; + + for (n = 0; n < len; n++) { + s1 = (s1 + buf[n]) % BASE; + s2 = (s2 + s1) % BASE; + } + return (s2 << 16) + s1; + } + + /* Return the adler32 of the bytes buf[0..len-1] */ + + + + +Deutsch & Gailly Informational [Page 10] + +RFC 1950 ZLIB Compressed Data Format Specification May 1996 + + + unsigned long adler32(unsigned char *buf, int len) + { + return update_adler32(1L, buf, len); + } + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +Deutsch & Gailly Informational [Page 11] + ADDED compat/zlib/doc/rfc1951.txt Index: compat/zlib/doc/rfc1951.txt ================================================================== --- compat/zlib/doc/rfc1951.txt +++ compat/zlib/doc/rfc1951.txt @@ -0,0 +1,955 @@ + + + + + + +Network Working Group P. Deutsch +Request for Comments: 1951 Aladdin Enterprises +Category: Informational May 1996 + + + DEFLATE Compressed Data Format Specification version 1.3 + +Status of This Memo + + This memo provides information for the Internet community. This memo + does not specify an Internet standard of any kind. Distribution of + this memo is unlimited. + +IESG Note: + + The IESG takes no position on the validity of any Intellectual + Property Rights statements contained in this document. + +Notices + + Copyright (c) 1996 L. Peter Deutsch + + Permission is granted to copy and distribute this document for any + purpose and without charge, including translations into other + languages and incorporation into compilations, provided that the + copyright notice and this notice are preserved, and that any + substantive changes or deletions from the original are clearly + marked. + + A pointer to the latest version of this and related documentation in + HTML format can be found at the URL + . + +Abstract + + This specification defines a lossless compressed data format that + compresses data using a combination of the LZ77 algorithm and Huffman + coding, with efficiency comparable to the best currently available + general-purpose compression methods. The data can be produced or + consumed, even for an arbitrarily long sequentially presented input + data stream, using only an a priori bounded amount of intermediate + storage. The format can be implemented readily in a manner not + covered by patents. + + + + + + + + +Deutsch Informational [Page 1] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + +Table of Contents + + 1. Introduction ................................................... 2 + 1.1. Purpose ................................................... 2 + 1.2. Intended audience ......................................... 3 + 1.3. Scope ..................................................... 3 + 1.4. Compliance ................................................ 3 + 1.5. 
Definitions of terms and conventions used ................ 3 + 1.6. Changes from previous versions ............................ 4 + 2. Compressed representation overview ............................. 4 + 3. Detailed specification ......................................... 5 + 3.1. Overall conventions ....................................... 5 + 3.1.1. Packing into bytes .................................. 5 + 3.2. Compressed block format ................................... 6 + 3.2.1. Synopsis of prefix and Huffman coding ............... 6 + 3.2.2. Use of Huffman coding in the "deflate" format ....... 7 + 3.2.3. Details of block format ............................. 9 + 3.2.4. Non-compressed blocks (BTYPE=00) ................... 11 + 3.2.5. Compressed blocks (length and distance codes) ...... 11 + 3.2.6. Compression with fixed Huffman codes (BTYPE=01) .... 12 + 3.2.7. Compression with dynamic Huffman codes (BTYPE=10) .. 13 + 3.3. Compliance ............................................... 14 + 4. Compression algorithm details ................................. 14 + 5. References .................................................... 16 + 6. Security Considerations ....................................... 16 + 7. Source code ................................................... 16 + 8. Acknowledgements .............................................. 16 + 9. Author's Address .............................................. 17 + +1. Introduction + + 1.1. Purpose + + The purpose of this specification is to define a lossless + compressed data format that: + * Is independent of CPU type, operating system, file system, + and character set, and hence can be used for interchange; + * Can be produced or consumed, even for an arbitrarily long + sequentially presented input data stream, using only an a + priori bounded amount of intermediate storage, and hence + can be used in data communications or similar structures + such as Unix filters; + * Compresses data with efficiency comparable to the best + currently available general-purpose compression methods, + and in particular considerably better than the "compress" + program; + * Can be implemented readily in a manner not covered by + patents, and hence can be practiced freely; + + + +Deutsch Informational [Page 2] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + + * Is compatible with the file format produced by the current + widely used gzip utility, in that conforming decompressors + will be able to read data produced by the existing gzip + compressor. + + The data format defined by this specification does not attempt to: + + * Allow random access to compressed data; + * Compress specialized data (e.g., raster graphics) as well + as the best currently available specialized algorithms. + + A simple counting argument shows that no lossless compression + algorithm can compress every possible input data set. For the + format defined here, the worst case expansion is 5 bytes per 32K- + byte block, i.e., a size increase of 0.015% for large data sets. + English text usually compresses by a factor of 2.5 to 3; + executable files usually compress somewhat less; graphical data + such as raster images may compress much more. + + 1.2. Intended audience + + This specification is intended for use by implementors of software + to compress data into "deflate" format and/or decompress data from + "deflate" format. + + The text of the specification assumes a basic background in + programming at the level of bits and other primitive data + representations. 
Familiarity with the technique of Huffman coding + is helpful but not required. + + 1.3. Scope + + The specification specifies a method for representing a sequence + of bytes as a (usually shorter) sequence of bits, and a method for + packing the latter bit sequence into bytes. + + 1.4. Compliance + + Unless otherwise indicated below, a compliant decompressor must be + able to accept and decompress any data set that conforms to all + the specifications presented here; a compliant compressor must + produce data sets that conform to all the specifications presented + here. + + 1.5. Definitions of terms and conventions used + + Byte: 8 bits stored or transmitted as a unit (same as an octet). + For this specification, a byte is exactly 8 bits, even on machines + + + +Deutsch Informational [Page 3] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + + which store a character on a number of bits different from eight. + See below, for the numbering of bits within a byte. + + String: a sequence of arbitrary bytes. + + 1.6. Changes from previous versions + + There have been no technical changes to the deflate format since + version 1.1 of this specification. In version 1.2, some + terminology was changed. Version 1.3 is a conversion of the + specification to RFC style. + +2. Compressed representation overview + + A compressed data set consists of a series of blocks, corresponding + to successive blocks of input data. The block sizes are arbitrary, + except that non-compressible blocks are limited to 65,535 bytes. + + Each block is compressed using a combination of the LZ77 algorithm + and Huffman coding. The Huffman trees for each block are independent + of those for previous or subsequent blocks; the LZ77 algorithm may + use a reference to a duplicated string occurring in a previous block, + up to 32K input bytes before. + + Each block consists of two parts: a pair of Huffman code trees that + describe the representation of the compressed data part, and a + compressed data part. (The Huffman trees themselves are compressed + using Huffman encoding.) The compressed data consists of a series of + elements of two types: literal bytes (of strings that have not been + detected as duplicated within the previous 32K input bytes), and + pointers to duplicated strings, where a pointer is represented as a + pair . The representation used in the + "deflate" format limits distances to 32K bytes and lengths to 258 + bytes, but does not limit the size of a block, except for + uncompressible blocks, which are limited as noted above. + + Each type of value (literals, distances, and lengths) in the + compressed data is represented using a Huffman code, using one code + tree for literals and lengths and a separate code tree for distances. + The code trees for each block appear in a compact form just before + the compressed data for that block. + + + + + + + + + + +Deutsch Informational [Page 4] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + +3. Detailed specification + + 3.1. Overall conventions In the diagrams below, a box like this: + + +---+ + | | <-- the vertical bars might be missing + +---+ + + represents one byte; a box like this: + + +==============+ + | | + +==============+ + + represents a variable number of bytes. + + Bytes stored within a computer do not have a "bit order", since + they are always treated as a unit. 
However, a byte considered as + an integer between 0 and 255 does have a most- and least- + significant bit, and since we write numbers with the most- + significant digit on the left, we also write bytes with the most- + significant bit on the left. In the diagrams below, we number the + bits of a byte so that bit 0 is the least-significant bit, i.e., + the bits are numbered: + + +--------+ + |76543210| + +--------+ + + Within a computer, a number may occupy multiple bytes. All + multi-byte numbers in the format described here are stored with + the least-significant byte first (at the lower memory address). + For example, the decimal number 520 is stored as: + + 0 1 + +--------+--------+ + |00001000|00000010| + +--------+--------+ + ^ ^ + | | + | + more significant byte = 2 x 256 + + less significant byte = 8 + + 3.1.1. Packing into bytes + + This document does not address the issue of the order in which + bits of a byte are transmitted on a bit-sequential medium, + since the final data format described here is byte- rather than + + + +Deutsch Informational [Page 5] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + + bit-oriented. However, we describe the compressed block format + in below, as a sequence of data elements of various bit + lengths, not a sequence of bytes. We must therefore specify + how to pack these data elements into bytes to form the final + compressed byte sequence: + + * Data elements are packed into bytes in order of + increasing bit number within the byte, i.e., starting + with the least-significant bit of the byte. + * Data elements other than Huffman codes are packed + starting with the least-significant bit of the data + element. + * Huffman codes are packed starting with the most- + significant bit of the code. + + In other words, if one were to print out the compressed data as + a sequence of bytes, starting with the first byte at the + *right* margin and proceeding to the *left*, with the most- + significant bit of each byte on the left as usual, one would be + able to parse the result from right to left, with fixed-width + elements in the correct MSB-to-LSB order and Huffman codes in + bit-reversed order (i.e., with the first bit of the code in the + relative LSB position). + + 3.2. Compressed block format + + 3.2.1. Synopsis of prefix and Huffman coding + + Prefix coding represents symbols from an a priori known + alphabet by bit sequences (codes), one code for each symbol, in + a manner such that different symbols may be represented by bit + sequences of different lengths, but a parser can always parse + an encoded string unambiguously symbol-by-symbol. + + We define a prefix code in terms of a binary tree in which the + two edges descending from each non-leaf node are labeled 0 and + 1 and in which the leaf nodes correspond one-for-one with (are + labeled with) the symbols of the alphabet; then the code for a + symbol is the sequence of 0's and 1's on the edges leading from + the root to the leaf labeled with that symbol. For example: + + + + + + + + + + + +Deutsch Informational [Page 6] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + + /\ Symbol Code + 0 1 ------ ---- + / \ A 00 + /\ B B 1 + 0 1 C 011 + / \ D 010 + A /\ + 0 1 + / \ + D C + + A parser can decode the next symbol from an encoded input + stream by walking down the tree from the root, at each step + choosing the edge corresponding to the next input bit. 
+ + Given an alphabet with known symbol frequencies, the Huffman + algorithm allows the construction of an optimal prefix code + (one which represents strings with those symbol frequencies + using the fewest bits of any possible prefix codes for that + alphabet). Such a code is called a Huffman code. (See + reference [1] in Chapter 5, references for additional + information on Huffman codes.) + + Note that in the "deflate" format, the Huffman codes for the + various alphabets must not exceed certain maximum code lengths. + This constraint complicates the algorithm for computing code + lengths from symbol frequencies. Again, see Chapter 5, + references for details. + + 3.2.2. Use of Huffman coding in the "deflate" format + + The Huffman codes used for each alphabet in the "deflate" + format have two additional rules: + + * All codes of a given bit length have lexicographically + consecutive values, in the same order as the symbols + they represent; + + * Shorter codes lexicographically precede longer codes. + + + + + + + + + + + + +Deutsch Informational [Page 7] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + + We could recode the example above to follow this rule as + follows, assuming that the order of the alphabet is ABCD: + + Symbol Code + ------ ---- + A 10 + B 0 + C 110 + D 111 + + I.e., 0 precedes 10 which precedes 11x, and 110 and 111 are + lexicographically consecutive. + + Given this rule, we can define the Huffman code for an alphabet + just by giving the bit lengths of the codes for each symbol of + the alphabet in order; this is sufficient to determine the + actual codes. In our example, the code is completely defined + by the sequence of bit lengths (2, 1, 3, 3). The following + algorithm generates the codes as integers, intended to be read + from most- to least-significant bit. The code lengths are + initially in tree[I].Len; the codes are produced in + tree[I].Code. + + 1) Count the number of codes for each code length. Let + bl_count[N] be the number of codes of length N, N >= 1. + + 2) Find the numerical value of the smallest code for each + code length: + + code = 0; + bl_count[0] = 0; + for (bits = 1; bits <= MAX_BITS; bits++) { + code = (code + bl_count[bits-1]) << 1; + next_code[bits] = code; + } + + 3) Assign numerical values to all codes, using consecutive + values for all codes of the same length with the base + values determined at step 2. Codes that are never used + (which have a bit length of zero) must not be assigned a + value. + + for (n = 0; n <= max_code; n++) { + len = tree[n].Len; + if (len != 0) { + tree[n].Code = next_code[len]; + next_code[len]++; + } + + + +Deutsch Informational [Page 8] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + + } + + Example: + + Consider the alphabet ABCDEFGH, with bit lengths (3, 3, 3, 3, + 3, 2, 4, 4). After step 1, we have: + + N bl_count[N] + - ----------- + 2 1 + 3 5 + 4 2 + + Step 2 computes the following next_code values: + + N next_code[N] + - ------------ + 1 0 + 2 0 + 3 2 + 4 14 + + Step 3 produces the following code values: + + Symbol Length Code + ------ ------ ---- + A 3 010 + B 3 011 + C 3 100 + D 3 101 + E 3 110 + F 2 00 + G 4 1110 + H 4 1111 + + 3.2.3. Details of block format + + Each block of compressed data begins with 3 header bits + containing the following data: + + first bit BFINAL + next 2 bits BTYPE + + Note that the header bits do not necessarily begin on a byte + boundary, since a block does not necessarily occupy an integral + number of bytes. 
+ + + + + +Deutsch Informational [Page 9] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + + BFINAL is set if and only if this is the last block of the data + set. + + BTYPE specifies how the data are compressed, as follows: + + 00 - no compression + 01 - compressed with fixed Huffman codes + 10 - compressed with dynamic Huffman codes + 11 - reserved (error) + + The only difference between the two compressed cases is how the + Huffman codes for the literal/length and distance alphabets are + defined. + + In all cases, the decoding algorithm for the actual data is as + follows: + + do + read block header from input stream. + if stored with no compression + skip any remaining bits in current partially + processed byte + read LEN and NLEN (see next section) + copy LEN bytes of data to output + otherwise + if compressed with dynamic Huffman codes + read representation of code trees (see + subsection below) + loop (until end of block code recognized) + decode literal/length value from input stream + if value < 256 + copy value (literal byte) to output stream + otherwise + if value = end of block (256) + break from loop + otherwise (value = 257..285) + decode distance from input stream + + move backwards distance bytes in the output + stream, and copy length bytes from this + position to the output stream. + end loop + while not last block + + Note that a duplicated string reference may refer to a string + in a previous block; i.e., the backward distance may cross one + or more block boundaries. However a distance cannot refer past + the beginning of the output stream. (An application using a + + + +Deutsch Informational [Page 10] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + + preset dictionary might discard part of the output stream; a + distance can refer to that part of the output stream anyway) + Note also that the referenced string may overlap the current + position; for example, if the last 2 bytes decoded have values + X and Y, a string reference with + adds X,Y,X,Y,X to the output stream. + + We now specify each compression method in turn. + + 3.2.4. Non-compressed blocks (BTYPE=00) + + Any bits of input up to the next byte boundary are ignored. + The rest of the block consists of the following information: + + 0 1 2 3 4... + +---+---+---+---+================================+ + | LEN | NLEN |... LEN bytes of literal data...| + +---+---+---+---+================================+ + + LEN is the number of data bytes in the block. NLEN is the + one's complement of LEN. + + 3.2.5. Compressed blocks (length and distance codes) + + As noted above, encoded data blocks in the "deflate" format + consist of sequences of symbols drawn from three conceptually + distinct alphabets: either literal bytes, from the alphabet of + byte values (0..255), or pairs, + where the length is drawn from (3..258) and the distance is + drawn from (1..32,768). 
In fact, the literal and length + alphabets are merged into a single alphabet (0..285), where + values 0..255 represent literal bytes, the value 256 indicates + end-of-block, and values 257..285 represent length codes + (possibly in conjunction with extra bits following the symbol + code) as follows: + + + + + + + + + + + + + + + + +Deutsch Informational [Page 11] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + + Extra Extra Extra + Code Bits Length(s) Code Bits Lengths Code Bits Length(s) + ---- ---- ------ ---- ---- ------- ---- ---- ------- + 257 0 3 267 1 15,16 277 4 67-82 + 258 0 4 268 1 17,18 278 4 83-98 + 259 0 5 269 2 19-22 279 4 99-114 + 260 0 6 270 2 23-26 280 4 115-130 + 261 0 7 271 2 27-30 281 5 131-162 + 262 0 8 272 2 31-34 282 5 163-194 + 263 0 9 273 3 35-42 283 5 195-226 + 264 0 10 274 3 43-50 284 5 227-257 + 265 1 11,12 275 3 51-58 285 0 258 + 266 1 13,14 276 3 59-66 + + The extra bits should be interpreted as a machine integer + stored with the most-significant bit first, e.g., bits 1110 + represent the value 14. + + Extra Extra Extra + Code Bits Dist Code Bits Dist Code Bits Distance + ---- ---- ---- ---- ---- ------ ---- ---- -------- + 0 0 1 10 4 33-48 20 9 1025-1536 + 1 0 2 11 4 49-64 21 9 1537-2048 + 2 0 3 12 5 65-96 22 10 2049-3072 + 3 0 4 13 5 97-128 23 10 3073-4096 + 4 1 5,6 14 6 129-192 24 11 4097-6144 + 5 1 7,8 15 6 193-256 25 11 6145-8192 + 6 2 9-12 16 7 257-384 26 12 8193-12288 + 7 2 13-16 17 7 385-512 27 12 12289-16384 + 8 3 17-24 18 8 513-768 28 13 16385-24576 + 9 3 25-32 19 8 769-1024 29 13 24577-32768 + + 3.2.6. Compression with fixed Huffman codes (BTYPE=01) + + The Huffman codes for the two alphabets are fixed, and are not + represented explicitly in the data. The Huffman code lengths + for the literal/length alphabet are: + + Lit Value Bits Codes + --------- ---- ----- + 0 - 143 8 00110000 through + 10111111 + 144 - 255 9 110010000 through + 111111111 + 256 - 279 7 0000000 through + 0010111 + 280 - 287 8 11000000 through + 11000111 + + + +Deutsch Informational [Page 12] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + + The code lengths are sufficient to generate the actual codes, + as described above; we show the codes in the table for added + clarity. Literal/length values 286-287 will never actually + occur in the compressed data, but participate in the code + construction. + + Distance codes 0-31 are represented by (fixed-length) 5-bit + codes, with possible additional bits as shown in the table + shown in Paragraph 3.2.5, above. Note that distance codes 30- + 31 will never actually occur in the compressed data. + + 3.2.7. Compression with dynamic Huffman codes (BTYPE=10) + + The Huffman codes for the two alphabets appear in the block + immediately after the header bits and before the actual + compressed data, first the literal/length code and then the + distance code. Each code is defined by a sequence of code + lengths, as discussed in Paragraph 3.2.2, above. For even + greater compactness, the code length sequences themselves are + compressed using a Huffman code. The alphabet for code lengths + is as follows: + + 0 - 15: Represent code lengths of 0 - 15 + 16: Copy the previous code length 3 - 6 times. + The next 2 bits indicate repeat length + (0 = 3, ... , 3 = 6) + Example: Codes 8, 16 (+2 bits 11), + 16 (+2 bits 10) will expand to + 12 code lengths of 8 (1 + 6 + 5) + 17: Repeat a code length of 0 for 3 - 10 times. 
+ (3 bits of length) + 18: Repeat a code length of 0 for 11 - 138 times + (7 bits of length) + + A code length of 0 indicates that the corresponding symbol in + the literal/length or distance alphabet will not occur in the + block, and should not participate in the Huffman code + construction algorithm given earlier. If only one distance + code is used, it is encoded using one bit, not zero bits; in + this case there is a single code length of one, with one unused + code. One distance code of zero bits means that there are no + distance codes used at all (the data is all literals). + + We can now define the format of the block: + + 5 Bits: HLIT, # of Literal/Length codes - 257 (257 - 286) + 5 Bits: HDIST, # of Distance codes - 1 (1 - 32) + 4 Bits: HCLEN, # of Code Length codes - 4 (4 - 19) + + + +Deutsch Informational [Page 13] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + + (HCLEN + 4) x 3 bits: code lengths for the code length + alphabet given just above, in the order: 16, 17, 18, + 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15 + + These code lengths are interpreted as 3-bit integers + (0-7); as above, a code length of 0 means the + corresponding symbol (literal/length or distance code + length) is not used. + + HLIT + 257 code lengths for the literal/length alphabet, + encoded using the code length Huffman code + + HDIST + 1 code lengths for the distance alphabet, + encoded using the code length Huffman code + + The actual compressed data of the block, + encoded using the literal/length and distance Huffman + codes + + The literal/length symbol 256 (end of data), + encoded using the literal/length Huffman code + + The code length repeat codes can cross from HLIT + 257 to the + HDIST + 1 code lengths. In other words, all code lengths form + a single sequence of HLIT + HDIST + 258 values. + + 3.3. Compliance + + A compressor may limit further the ranges of values specified in + the previous section and still be compliant; for example, it may + limit the range of backward pointers to some value smaller than + 32K. Similarly, a compressor may limit the size of blocks so that + a compressible block fits in memory. + + A compliant decompressor must accept the full range of possible + values defined in the previous section, and must accept blocks of + arbitrary size. + +4. Compression algorithm details + + While it is the intent of this document to define the "deflate" + compressed data format without reference to any particular + compression algorithm, the format is related to the compressed + formats produced by LZ77 (Lempel-Ziv 1977, see reference [2] below); + since many variations of LZ77 are patented, it is strongly + recommended that the implementor of a compressor follow the general + algorithm presented here, which is known not to be patented per se. + The material in this section is not part of the definition of the + + + +Deutsch Informational [Page 14] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + + specification per se, and a compressor need not follow it in order to + be compliant. + + The compressor terminates a block when it determines that starting a + new block with fresh trees would be useful, or when the block size + fills up the compressor's block buffer. + + The compressor uses a chained hash table to find duplicated strings, + using a hash function that operates on 3-byte sequences. At any + given point during compression, let XYZ be the next 3 input bytes to + be examined (not necessarily all different, of course). 
First, the + compressor examines the hash chain for XYZ. If the chain is empty, + the compressor simply writes out X as a literal byte and advances one + byte in the input. If the hash chain is not empty, indicating that + the sequence XYZ (or, if we are unlucky, some other 3 bytes with the + same hash function value) has occurred recently, the compressor + compares all strings on the XYZ hash chain with the actual input data + sequence starting at the current point, and selects the longest + match. + + The compressor searches the hash chains starting with the most recent + strings, to favor small distances and thus take advantage of the + Huffman encoding. The hash chains are singly linked. There are no + deletions from the hash chains; the algorithm simply discards matches + that are too old. To avoid a worst-case situation, very long hash + chains are arbitrarily truncated at a certain length, determined by a + run-time parameter. + + To improve overall compression, the compressor optionally defers the + selection of matches ("lazy matching"): after a match of length N has + been found, the compressor searches for a longer match starting at + the next input byte. If it finds a longer match, it truncates the + previous match to a length of one (thus producing a single literal + byte) and then emits the longer match. Otherwise, it emits the + original match, and, as described above, advances N bytes before + continuing. + + Run-time parameters also control this "lazy match" procedure. If + compression ratio is most important, the compressor attempts a + complete second search regardless of the length of the first match. + In the normal case, if the current match is "long enough", the + compressor reduces the search for a longer match, thus speeding up + the process. If speed is most important, the compressor inserts new + strings in the hash table only when no match was found, or when the + match is not "too long". This degrades the compression ratio but + saves time since there are both fewer insertions and fewer searches. + + + + + +Deutsch Informational [Page 15] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + +5. References + + [1] Huffman, D. A., "A Method for the Construction of Minimum + Redundancy Codes", Proceedings of the Institute of Radio + Engineers, September 1952, Volume 40, Number 9, pp. 1098-1101. + + [2] Ziv J., Lempel A., "A Universal Algorithm for Sequential Data + Compression", IEEE Transactions on Information Theory, Vol. 23, + No. 3, pp. 337-343. + + [3] Gailly, J.-L., and Adler, M., ZLIB documentation and sources, + available in ftp://ftp.uu.net/pub/archiving/zip/doc/ + + [4] Gailly, J.-L., and Adler, M., GZIP documentation and sources, + available as gzip-*.tar in ftp://prep.ai.mit.edu/pub/gnu/ + + [5] Schwartz, E. S., and Kallick, B. "Generating a canonical prefix + encoding." Comm. ACM, 7,3 (Mar. 1964), pp. 166-169. + + [6] Hirschberg and Lelewer, "Efficient decoding of prefix codes," + Comm. ACM, 33,4, April 1990, pp. 449-459. + +6. Security Considerations + + Any data compression method involves the reduction of redundancy in + the data. Consequently, any corruption of the data is likely to have + severe effects and be difficult to correct. Uncompressed text, on + the other hand, will probably still be readable despite the presence + of some corrupted bytes. + + It is recommended that systems using this data format provide some + means of validating the integrity of the compressed data. See + reference [3], for example. + +7. 
Source code + + Source code for a C language implementation of a "deflate" compliant + compressor and decompressor is available within the zlib package at + ftp://ftp.uu.net/pub/archiving/zip/zlib/. + +8. Acknowledgements + + Trademarks cited in this document are the property of their + respective owners. + + Phil Katz designed the deflate format. Jean-Loup Gailly and Mark + Adler wrote the related software described in this specification. + Glenn Randers-Pehrson converted this document to RFC and HTML format. + + + +Deutsch Informational [Page 16] + +RFC 1951 DEFLATE Compressed Data Format Specification May 1996 + + +9. Author's Address + + L. Peter Deutsch + Aladdin Enterprises + 203 Santa Margarita Ave. + Menlo Park, CA 94025 + + Phone: (415) 322-0103 (AM only) + FAX: (415) 322-1734 + EMail: + + Questions about the technical content of this specification can be + sent by email to: + + Jean-Loup Gailly and + Mark Adler + + Editorial comments on this specification can be sent by email to: + + L. Peter Deutsch and + Glenn Randers-Pehrson + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +Deutsch Informational [Page 17] + ADDED compat/zlib/doc/rfc1952.txt Index: compat/zlib/doc/rfc1952.txt ================================================================== --- compat/zlib/doc/rfc1952.txt +++ compat/zlib/doc/rfc1952.txt @@ -0,0 +1,675 @@ + + + + + + +Network Working Group P. Deutsch +Request for Comments: 1952 Aladdin Enterprises +Category: Informational May 1996 + + + GZIP file format specification version 4.3 + +Status of This Memo + + This memo provides information for the Internet community. This memo + does not specify an Internet standard of any kind. Distribution of + this memo is unlimited. + +IESG Note: + + The IESG takes no position on the validity of any Intellectual + Property Rights statements contained in this document. + +Notices + + Copyright (c) 1996 L. Peter Deutsch + + Permission is granted to copy and distribute this document for any + purpose and without charge, including translations into other + languages and incorporation into compilations, provided that the + copyright notice and this notice are preserved, and that any + substantive changes or deletions from the original are clearly + marked. + + A pointer to the latest version of this and related documentation in + HTML format can be found at the URL + . + +Abstract + + This specification defines a lossless compressed data format that is + compatible with the widely used GZIP utility. The format includes a + cyclic redundancy check value for detecting data corruption. The + format presently uses the DEFLATE method of compression but can be + easily extended to use other compression methods. The format can be + implemented readily in a manner not covered by patents. + + + + + + + + + + +Deutsch Informational [Page 1] + +RFC 1952 GZIP File Format Specification May 1996 + + +Table of Contents + + 1. Introduction ................................................... 2 + 1.1. Purpose ................................................... 2 + 1.2. Intended audience ......................................... 3 + 1.3. Scope ..................................................... 3 + 1.4. Compliance ................................................ 3 + 1.5. Definitions of terms and conventions used ................. 3 + 1.6. Changes from previous versions ............................ 3 + 2. Detailed specification ......................................... 4 + 2.1. Overall conventions ....................................... 
4 + 2.2. File format ............................................... 5 + 2.3. Member format ............................................. 5 + 2.3.1. Member header and trailer ........................... 6 + 2.3.1.1. Extra field ................................... 8 + 2.3.1.2. Compliance .................................... 9 + 3. References .................................................. 9 + 4. Security Considerations .................................... 10 + 5. Acknowledgements ........................................... 10 + 6. Author's Address ........................................... 10 + 7. Appendix: Jean-Loup Gailly's gzip utility .................. 11 + 8. Appendix: Sample CRC Code .................................. 11 + +1. Introduction + + 1.1. Purpose + + The purpose of this specification is to define a lossless + compressed data format that: + + * Is independent of CPU type, operating system, file system, + and character set, and hence can be used for interchange; + * Can compress or decompress a data stream (as opposed to a + randomly accessible file) to produce another data stream, + using only an a priori bounded amount of intermediate + storage, and hence can be used in data communications or + similar structures such as Unix filters; + * Compresses data with efficiency comparable to the best + currently available general-purpose compression methods, + and in particular considerably better than the "compress" + program; + * Can be implemented readily in a manner not covered by + patents, and hence can be practiced freely; + * Is compatible with the file format produced by the current + widely used gzip utility, in that conforming decompressors + will be able to read data produced by the existing gzip + compressor. + + + + +Deutsch Informational [Page 2] + +RFC 1952 GZIP File Format Specification May 1996 + + + The data format defined by this specification does not attempt to: + + * Provide random access to compressed data; + * Compress specialized data (e.g., raster graphics) as well as + the best currently available specialized algorithms. + + 1.2. Intended audience + + This specification is intended for use by implementors of software + to compress data into gzip format and/or decompress data from gzip + format. + + The text of the specification assumes a basic background in + programming at the level of bits and other primitive data + representations. + + 1.3. Scope + + The specification specifies a compression method and a file format + (the latter assuming only that a file can store a sequence of + arbitrary bytes). It does not specify any particular interface to + a file system or anything about character sets or encodings + (except for file names and comments, which are optional). + + 1.4. Compliance + + Unless otherwise indicated below, a compliant decompressor must be + able to accept and decompress any file that conforms to all the + specifications presented here; a compliant compressor must produce + files that conform to all the specifications presented here. The + material in the appendices is not part of the specification per se + and is not relevant to compliance. + + 1.5. Definitions of terms and conventions used + + byte: 8 bits stored or transmitted as a unit (same as an octet). + (For this specification, a byte is exactly 8 bits, even on + machines which store a character on a number of bits different + from 8.) See below for the numbering of bits within a byte. + + 1.6. 
Changes from previous versions + + There have been no technical changes to the gzip format since + version 4.1 of this specification. In version 4.2, some + terminology was changed, and the sample CRC code was rewritten for + clarity and to eliminate the requirement for the caller to do pre- + and post-conditioning. Version 4.3 is a conversion of the + specification to RFC style. + + + +Deutsch Informational [Page 3] + +RFC 1952 GZIP File Format Specification May 1996 + + +2. Detailed specification + + 2.1. Overall conventions + + In the diagrams below, a box like this: + + +---+ + | | <-- the vertical bars might be missing + +---+ + + represents one byte; a box like this: + + +==============+ + | | + +==============+ + + represents a variable number of bytes. + + Bytes stored within a computer do not have a "bit order", since + they are always treated as a unit. However, a byte considered as + an integer between 0 and 255 does have a most- and least- + significant bit, and since we write numbers with the most- + significant digit on the left, we also write bytes with the most- + significant bit on the left. In the diagrams below, we number the + bits of a byte so that bit 0 is the least-significant bit, i.e., + the bits are numbered: + + +--------+ + |76543210| + +--------+ + + This document does not address the issue of the order in which + bits of a byte are transmitted on a bit-sequential medium, since + the data format described here is byte- rather than bit-oriented. + + Within a computer, a number may occupy multiple bytes. All + multi-byte numbers in the format described here are stored with + the least-significant byte first (at the lower memory address). + For example, the decimal number 520 is stored as: + + 0 1 + +--------+--------+ + |00001000|00000010| + +--------+--------+ + ^ ^ + | | + | + more significant byte = 2 x 256 + + less significant byte = 8 + + + +Deutsch Informational [Page 4] + +RFC 1952 GZIP File Format Specification May 1996 + + + 2.2. File format + + A gzip file consists of a series of "members" (compressed data + sets). The format of each member is specified in the following + section. The members simply appear one after another in the file, + with no additional information before, between, or after them. + + 2.3. Member format + + Each member has the following structure: + + +---+---+---+---+---+---+---+---+---+---+ + |ID1|ID2|CM |FLG| MTIME |XFL|OS | (more-->) + +---+---+---+---+---+---+---+---+---+---+ + + (if FLG.FEXTRA set) + + +---+---+=================================+ + | XLEN |...XLEN bytes of "extra field"...| (more-->) + +---+---+=================================+ + + (if FLG.FNAME set) + + +=========================================+ + |...original file name, zero-terminated...| (more-->) + +=========================================+ + + (if FLG.FCOMMENT set) + + +===================================+ + |...file comment, zero-terminated...| (more-->) + +===================================+ + + (if FLG.FHCRC set) + + +---+---+ + | CRC16 | + +---+---+ + + +=======================+ + |...compressed blocks...| (more-->) + +=======================+ + + 0 1 2 3 4 5 6 7 + +---+---+---+---+---+---+---+---+ + | CRC32 | ISIZE | + +---+---+---+---+---+---+---+---+ + + + + +Deutsch Informational [Page 5] + +RFC 1952 GZIP File Format Specification May 1996 + + + 2.3.1. 
Member header and trailer + + ID1 (IDentification 1) + ID2 (IDentification 2) + These have the fixed values ID1 = 31 (0x1f, \037), ID2 = 139 + (0x8b, \213), to identify the file as being in gzip format. + + CM (Compression Method) + This identifies the compression method used in the file. CM + = 0-7 are reserved. CM = 8 denotes the "deflate" + compression method, which is the one customarily used by + gzip and which is documented elsewhere. + + FLG (FLaGs) + This flag byte is divided into individual bits as follows: + + bit 0 FTEXT + bit 1 FHCRC + bit 2 FEXTRA + bit 3 FNAME + bit 4 FCOMMENT + bit 5 reserved + bit 6 reserved + bit 7 reserved + + If FTEXT is set, the file is probably ASCII text. This is + an optional indication, which the compressor may set by + checking a small amount of the input data to see whether any + non-ASCII characters are present. In case of doubt, FTEXT + is cleared, indicating binary data. For systems which have + different file formats for ascii text and binary data, the + decompressor can use FTEXT to choose the appropriate format. + We deliberately do not specify the algorithm used to set + this bit, since a compressor always has the option of + leaving it cleared and a decompressor always has the option + of ignoring it and letting some other program handle issues + of data conversion. + + If FHCRC is set, a CRC16 for the gzip header is present, + immediately before the compressed data. The CRC16 consists + of the two least significant bytes of the CRC32 for all + bytes of the gzip header up to and not including the CRC16. + [The FHCRC bit was never set by versions of gzip up to + 1.2.4, even though it was documented with a different + meaning in gzip 1.2.4.] + + If FEXTRA is set, optional extra fields are present, as + described in a following section. + + + +Deutsch Informational [Page 6] + +RFC 1952 GZIP File Format Specification May 1996 + + + If FNAME is set, an original file name is present, + terminated by a zero byte. The name must consist of ISO + 8859-1 (LATIN-1) characters; on operating systems using + EBCDIC or any other character set for file names, the name + must be translated to the ISO LATIN-1 character set. This + is the original name of the file being compressed, with any + directory components removed, and, if the file being + compressed is on a file system with case insensitive names, + forced to lower case. There is no original file name if the + data was compressed from a source other than a named file; + for example, if the source was stdin on a Unix system, there + is no file name. + + If FCOMMENT is set, a zero-terminated file comment is + present. This comment is not interpreted; it is only + intended for human consumption. The comment must consist of + ISO 8859-1 (LATIN-1) characters. Line breaks should be + denoted by a single line feed character (10 decimal). + + Reserved FLG bits must be zero. + + MTIME (Modification TIME) + This gives the most recent modification time of the original + file being compressed. The time is in Unix format, i.e., + seconds since 00:00:00 GMT, Jan. 1, 1970. (Note that this + may cause problems for MS-DOS and other systems that use + local rather than Universal time.) If the compressed data + did not come from a file, MTIME is set to the time at which + compression started. MTIME = 0 means no time stamp is + available. + + XFL (eXtra FLags) + These flags are available for use by specific compression + methods. 
The "deflate" method (CM = 8) sets these flags as + follows: + + XFL = 2 - compressor used maximum compression, + slowest algorithm + XFL = 4 - compressor used fastest algorithm + + OS (Operating System) + This identifies the type of file system on which compression + took place. This may be useful in determining end-of-line + convention for text files. The currently defined values are + as follows: + + + + + + +Deutsch Informational [Page 7] + +RFC 1952 GZIP File Format Specification May 1996 + + + 0 - FAT filesystem (MS-DOS, OS/2, NT/Win32) + 1 - Amiga + 2 - VMS (or OpenVMS) + 3 - Unix + 4 - VM/CMS + 5 - Atari TOS + 6 - HPFS filesystem (OS/2, NT) + 7 - Macintosh + 8 - Z-System + 9 - CP/M + 10 - TOPS-20 + 11 - NTFS filesystem (NT) + 12 - QDOS + 13 - Acorn RISCOS + 255 - unknown + + XLEN (eXtra LENgth) + If FLG.FEXTRA is set, this gives the length of the optional + extra field. See below for details. + + CRC32 (CRC-32) + This contains a Cyclic Redundancy Check value of the + uncompressed data computed according to CRC-32 algorithm + used in the ISO 3309 standard and in section 8.1.1.6.2 of + ITU-T recommendation V.42. (See http://www.iso.ch for + ordering ISO documents. See gopher://info.itu.ch for an + online version of ITU-T V.42.) + + ISIZE (Input SIZE) + This contains the size of the original (uncompressed) input + data modulo 2^32. + + 2.3.1.1. Extra field + + If the FLG.FEXTRA bit is set, an "extra field" is present in + the header, with total length XLEN bytes. It consists of a + series of subfields, each of the form: + + +---+---+---+---+==================================+ + |SI1|SI2| LEN |... LEN bytes of subfield data ...| + +---+---+---+---+==================================+ + + SI1 and SI2 provide a subfield ID, typically two ASCII letters + with some mnemonic value. Jean-Loup Gailly + is maintaining a registry of subfield + IDs; please send him any subfield ID you wish to use. Subfield + IDs with SI2 = 0 are reserved for future use. The following + IDs are currently defined: + + + +Deutsch Informational [Page 8] + +RFC 1952 GZIP File Format Specification May 1996 + + + SI1 SI2 Data + ---------- ---------- ---- + 0x41 ('A') 0x70 ('P') Apollo file type information + + LEN gives the length of the subfield data, excluding the 4 + initial bytes. + + 2.3.1.2. Compliance + + A compliant compressor must produce files with correct ID1, + ID2, CM, CRC32, and ISIZE, but may set all the other fields in + the fixed-length part of the header to default values (255 for + OS, 0 for all others). The compressor must set all reserved + bits to zero. + + A compliant decompressor must check ID1, ID2, and CM, and + provide an error indication if any of these have incorrect + values. It must examine FEXTRA/XLEN, FNAME, FCOMMENT and FHCRC + at least so it can skip over the optional fields if they are + present. It need not examine any other part of the header or + trailer; in particular, a decompressor may ignore FTEXT and OS + and always produce binary output, and still be compliant. A + compliant decompressor must give an error indication if any + reserved bit is non-zero, since such a bit could indicate the + presence of a new field that would cause subsequent data to be + interpreted incorrectly. + +3. References + + [1] "Information Processing - 8-bit single-byte coded graphic + character sets - Part 1: Latin alphabet No.1" (ISO 8859-1:1987). + The ISO 8859-1 (Latin-1) character set is a superset of 7-bit + ASCII. 
Files defining this character set are available as + iso_8859-1.* in ftp://ftp.uu.net/graphics/png/documents/ + + [2] ISO 3309 + + [3] ITU-T recommendation V.42 + + [4] Deutsch, L.P.,"DEFLATE Compressed Data Format Specification", + available in ftp://ftp.uu.net/pub/archiving/zip/doc/ + + [5] Gailly, J.-L., GZIP documentation, available as gzip-*.tar in + ftp://prep.ai.mit.edu/pub/gnu/ + + [6] Sarwate, D.V., "Computation of Cyclic Redundancy Checks via Table + Look-Up", Communications of the ACM, 31(8), pp.1008-1013. + + + + +Deutsch Informational [Page 9] + +RFC 1952 GZIP File Format Specification May 1996 + + + [7] Schwaderer, W.D., "CRC Calculation", April 85 PC Tech Journal, + pp.118-133. + + [8] ftp://ftp.adelaide.edu.au/pub/rocksoft/papers/crc_v3.txt, + describing the CRC concept. + +4. Security Considerations + + Any data compression method involves the reduction of redundancy in + the data. Consequently, any corruption of the data is likely to have + severe effects and be difficult to correct. Uncompressed text, on + the other hand, will probably still be readable despite the presence + of some corrupted bytes. + + It is recommended that systems using this data format provide some + means of validating the integrity of the compressed data, such as by + setting and checking the CRC-32 check value. + +5. Acknowledgements + + Trademarks cited in this document are the property of their + respective owners. + + Jean-Loup Gailly designed the gzip format and wrote, with Mark Adler, + the related software described in this specification. Glenn + Randers-Pehrson converted this document to RFC and HTML format. + +6. Author's Address + + L. Peter Deutsch + Aladdin Enterprises + 203 Santa Margarita Ave. + Menlo Park, CA 94025 + + Phone: (415) 322-0103 (AM only) + FAX: (415) 322-1734 + EMail: + + Questions about the technical content of this specification can be + sent by email to: + + Jean-Loup Gailly and + Mark Adler + + Editorial comments on this specification can be sent by email to: + + L. Peter Deutsch and + Glenn Randers-Pehrson + + + +Deutsch Informational [Page 10] + +RFC 1952 GZIP File Format Specification May 1996 + + +7. Appendix: Jean-Loup Gailly's gzip utility + + The most widely used implementation of gzip compression, and the + original documentation on which this specification is based, were + created by Jean-Loup Gailly . Since this + implementation is a de facto standard, we mention some more of its + features here. Again, the material in this section is not part of + the specification per se, and implementations need not follow it to + be compliant. + + When compressing or decompressing a file, gzip preserves the + protection, ownership, and modification time attributes on the local + file system, since there is no provision for representing protection + attributes in the gzip file format itself. Since the file format + includes a modification time, the gzip decompressor provides a + command line switch that assigns the modification time from the file, + rather than the local modification time of the compressed input, to + the decompressed output. + +8. Appendix: Sample CRC Code + + The following sample code represents a practical implementation of + the CRC (Cyclic Redundancy Check). (See also ISO 3309 and ITU-T V.42 + for a formal specification.) + + The sample code is in the ANSI C programming language. Non C users + may find it easier to read with these hints: + + & Bitwise AND operator. + ^ Bitwise exclusive-OR operator. + >> Bitwise right shift operator. 
When applied to an + unsigned quantity, as here, right shift inserts zero + bit(s) at the left. + ! Logical NOT operator. + ++ "n++" increments the variable n. + 0xNNN 0x introduces a hexadecimal (base 16) constant. + Suffix L indicates a long value (at least 32 bits). + + /* Table of CRCs of all 8-bit messages. */ + unsigned long crc_table[256]; + + /* Flag: has the table been computed? Initially false. */ + int crc_table_computed = 0; + + /* Make the table for a fast CRC. */ + void make_crc_table(void) + { + unsigned long c; + + + +Deutsch Informational [Page 11] + +RFC 1952 GZIP File Format Specification May 1996 + + + int n, k; + for (n = 0; n < 256; n++) { + c = (unsigned long) n; + for (k = 0; k < 8; k++) { + if (c & 1) { + c = 0xedb88320L ^ (c >> 1); + } else { + c = c >> 1; + } + } + crc_table[n] = c; + } + crc_table_computed = 1; + } + + /* + Update a running crc with the bytes buf[0..len-1] and return + the updated crc. The crc should be initialized to zero. Pre- and + post-conditioning (one's complement) is performed within this + function so it shouldn't be done by the caller. Usage example: + + unsigned long crc = 0L; + + while (read_buffer(buffer, length) != EOF) { + crc = update_crc(crc, buffer, length); + } + if (crc != original_crc) error(); + */ + unsigned long update_crc(unsigned long crc, + unsigned char *buf, int len) + { + unsigned long c = crc ^ 0xffffffffL; + int n; + + if (!crc_table_computed) + make_crc_table(); + for (n = 0; n < len; n++) { + c = crc_table[(c ^ buf[n]) & 0xff] ^ (c >> 8); + } + return c ^ 0xffffffffL; + } + + /* Return the CRC of the bytes buf[0..len-1]. */ + unsigned long crc(unsigned char *buf, int len) + { + return update_crc(0L, buf, len); + } + + + + +Deutsch Informational [Page 12] + ADDED compat/zlib/doc/txtvsbin.txt Index: compat/zlib/doc/txtvsbin.txt ================================================================== --- compat/zlib/doc/txtvsbin.txt +++ compat/zlib/doc/txtvsbin.txt @@ -0,0 +1,107 @@ +A Fast Method for Identifying Plain Text Files +============================================== + + +Introduction +------------ + +Given a file coming from an unknown source, it is sometimes desirable +to find out whether the format of that file is plain text. Although +this may appear like a simple task, a fully accurate detection of the +file type requires heavy-duty semantic analysis on the file contents. +It is, however, possible to obtain satisfactory results by employing +various heuristics. + +Previous versions of PKZip and other zip-compatible compression tools +were using a crude detection scheme: if more than 80% (4/5) of the bytes +found in a certain buffer are within the range [7..127], the file is +labeled as plain text, otherwise it is labeled as binary. A prominent +limitation of this scheme is the restriction to Latin-based alphabets. +Other alphabets, like Greek, Cyrillic or Asian, make extensive use of +the bytes within the range [128..255], and texts using these alphabets +are most often misidentified by this scheme; in other words, the rate +of false negatives is sometimes too high, which means that the recall +is low. Another weakness of this scheme is a reduced precision, due to +the false positives that may occur when binary files containing large +amounts of textual characters are misidentified as plain text. + +In this article we propose a new, simple detection scheme that features +a much increased precision and a near-100% recall. 
This scheme is +designed to work on ASCII, Unicode and other ASCII-derived alphabets, +and it handles single-byte encodings (ISO-8859, MacRoman, KOI8, etc.) +and variable-sized encodings (ISO-2022, UTF-8, etc.). Wider encodings +(UCS-2/UTF-16 and UCS-4/UTF-32) are not handled, however. + + +The Algorithm +------------- + +The algorithm works by dividing the set of bytecodes [0..255] into three +categories: +- The white list of textual bytecodes: + 9 (TAB), 10 (LF), 13 (CR), 32 (SPACE) to 255. +- The gray list of tolerated bytecodes: + 7 (BEL), 8 (BS), 11 (VT), 12 (FF), 26 (SUB), 27 (ESC). +- The black list of undesired, non-textual bytecodes: + 0 (NUL) to 6, 14 to 31. + +If a file contains at least one byte that belongs to the white list and +no byte that belongs to the black list, then the file is categorized as +plain text; otherwise, it is categorized as binary. (The boundary case, +when the file is empty, automatically falls into the latter category.) + + +Rationale +--------- + +The idea behind this algorithm relies on two observations. + +The first observation is that, although the full range of 7-bit codes +[0..127] is properly specified by the ASCII standard, most control +characters in the range [0..31] are not used in practice. The only +widely-used, almost universally-portable control codes are 9 (TAB), +10 (LF) and 13 (CR). There are a few more control codes that are +recognized on a reduced range of platforms and text viewers/editors: +7 (BEL), 8 (BS), 11 (VT), 12 (FF), 26 (SUB) and 27 (ESC); but these +codes are rarely (if ever) used alone, without being accompanied by +some printable text. Even the newer, portable text formats such as +XML avoid using control characters outside the list mentioned here. + +The second observation is that most of the binary files tend to contain +control characters, especially 0 (NUL). Even though the older text +detection schemes observe the presence of non-ASCII codes from the range +[128..255], the precision rarely has to suffer if this upper range is +labeled as textual, because the files that are genuinely binary tend to +contain both control characters and codes from the upper range. On the +other hand, the upper range needs to be labeled as textual, because it +is used by virtually all ASCII extensions. In particular, this range is +used for encoding non-Latin scripts. + +Since there is no counting involved, other than simply observing the +presence or the absence of some byte values, the algorithm produces +consistent results, regardless what alphabet encoding is being used. +(If counting were involved, it could be possible to obtain different +results on a text encoded, say, using ISO-8859-16 versus UTF-8.) + +There is an extra category of plain text files that are "polluted" with +one or more black-listed codes, either by mistake or by peculiar design +considerations. In such cases, a scheme that tolerates a small fraction +of black-listed codes would provide an increased recall (i.e. more true +positives). This, however, incurs a reduced precision overall, since +false positives are more likely to appear in binary files that contain +large chunks of textual data. Furthermore, "polluted" plain text should +be regarded as binary by general-purpose text detection schemes, because +general-purpose text processing algorithms might not be applicable. +Under this premise, it is safe to say that our detection method provides +a near-100% recall. + +Experiments have been run on many files coming from various platforms +and applications. 
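A minimal sketch, not part of the original article and not the zlib
sources, of the white-list/black-list rule described above; the
function names are illustrative.  Gray-listed codes (7, 8, 11, 12, 26,
27) are treated as tolerated and take precedence over the 14..31
black-list range:

    #include <stddef.h>

    /* 9 (TAB), 10 (LF), 13 (CR) and 32 (SPACE) through 255 are textual. */
    static int is_white(unsigned char c)
    {
        return c >= 32 || c == 9 || c == 10 || c == 13;
    }

    /* 7 (BEL), 8 (BS), 11 (VT), 12 (FF), 26 (SUB), 27 (ESC) are tolerated. */
    static int is_gray(unsigned char c)
    {
        return c == 7 || c == 8 || c == 11 || c == 12 || c == 26 || c == 27;
    }

    /* Return 1 if buf[0..len-1] is categorized as plain text, 0 if binary.
       An empty buffer falls into the binary category, as specified above. */
    int looks_like_text(const unsigned char *buf, size_t len)
    {
        int saw_white = 0;
        size_t i;

        for (i = 0; i < len; i++) {
            if (is_white(buf[i]))
                saw_white = 1;
            else if (!is_gray(buf[i]))
                return 0;              /* black-listed code: binary */
        }
        return saw_white;              /* need at least one white byte */
    }

Because the result depends only on which byte values occur, not on how
often, the classification is the same under any ASCII-compatible
encoding, which is the consistency property argued for above.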
We tried plain text files, system logs, source code, +formatted office documents, compiled object code, etc. The results +confirm the optimistic assumptions about the capabilities of this +algorithm. + + +-- +Cosmin Truta +Last updated: 2006-May-28 ADDED compat/zlib/examples/README.examples Index: compat/zlib/examples/README.examples ================================================================== --- compat/zlib/examples/README.examples +++ compat/zlib/examples/README.examples @@ -0,0 +1,49 @@ +This directory contains examples of the use of zlib and other relevant +programs and documentation. + +enough.c + calculation and justification of ENOUGH parameter in inftrees.h + - calculates the maximum table space used in inflate tree + construction over all possible Huffman codes + +fitblk.c + compress just enough input to nearly fill a requested output size + - zlib isn't designed to do this, but fitblk does it anyway + +gun.c + uncompress a gzip file + - illustrates the use of inflateBack() for high speed file-to-file + decompression using call-back functions + - is approximately twice as fast as gzip -d + - also provides Unix uncompress functionality, again twice as fast + +gzappend.c + append to a gzip file + - illustrates the use of the Z_BLOCK flush parameter for inflate() + - illustrates the use of deflatePrime() to start at any bit + +gzjoin.c + join gzip files without recalculating the crc or recompressing + - illustrates the use of the Z_BLOCK flush parameter for inflate() + - illustrates the use of crc32_combine() + +gzlog.c +gzlog.h + efficiently and robustly maintain a message log file in gzip format + - illustrates use of raw deflate, Z_PARTIAL_FLUSH, deflatePrime(), + and deflateSetDictionary() + - illustrates use of a gzip header extra field + +zlib_how.html + painfully comprehensive description of zpipe.c (see below) + - describes in excruciating detail the use of deflate() and inflate() + +zpipe.c + reads and writes zlib streams from stdin to stdout + - illustrates the proper use of deflate() and inflate() + - deeply commented in zlib_how.html (see above) + +zran.c + index a zlib or gzip stream and randomly access it + - illustrates the use of Z_BLOCK, inflatePrime(), and + inflateSetDictionary() to provide random access ADDED compat/zlib/examples/enough.c Index: compat/zlib/examples/enough.c ================================================================== --- compat/zlib/examples/enough.c +++ compat/zlib/examples/enough.c @@ -0,0 +1,572 @@ +/* enough.c -- determine the maximum size of inflate's Huffman code tables over + * all possible valid and complete Huffman codes, subject to a length limit. + * Copyright (C) 2007, 2008, 2012 Mark Adler + * Version 1.4 18 August 2012 Mark Adler + */ + +/* Version history: + 1.0 3 Jan 2007 First version (derived from codecount.c version 1.4) + 1.1 4 Jan 2007 Use faster incremental table usage computation + Prune examine() search on previously visited states + 1.2 5 Jan 2007 Comments clean up + As inflate does, decrease root for short codes + Refuse cases where inflate would increase root + 1.3 17 Feb 2008 Add argument for initial root table size + Fix bug for initial root table size == max - 1 + Use a macro to compute the history index + 1.4 18 Aug 2012 Avoid shifts more than bits in type (caused endless loop!) 
+ Clean up comparisons of different types + Clean up code indentation + */ + +/* + Examine all possible Huffman codes for a given number of symbols and a + maximum code length in bits to determine the maximum table size for zilb's + inflate. Only complete Huffman codes are counted. + + Two codes are considered distinct if the vectors of the number of codes per + length are not identical. So permutations of the symbol assignments result + in the same code for the counting, as do permutations of the assignments of + the bit values to the codes (i.e. only canonical codes are counted). + + We build a code from shorter to longer lengths, determining how many symbols + are coded at each length. At each step, we have how many symbols remain to + be coded, what the last code length used was, and how many bit patterns of + that length remain unused. Then we add one to the code length and double the + number of unused patterns to graduate to the next code length. We then + assign all portions of the remaining symbols to that code length that + preserve the properties of a correct and eventually complete code. Those + properties are: we cannot use more bit patterns than are available; and when + all the symbols are used, there are exactly zero possible bit patterns + remaining. + + The inflate Huffman decoding algorithm uses two-level lookup tables for + speed. There is a single first-level table to decode codes up to root bits + in length (root == 9 in the current inflate implementation). The table + has 1 << root entries and is indexed by the next root bits of input. Codes + shorter than root bits have replicated table entries, so that the correct + entry is pointed to regardless of the bits that follow the short code. If + the code is longer than root bits, then the table entry points to a second- + level table. The size of that table is determined by the longest code with + that root-bit prefix. If that longest code has length len, then the table + has size 1 << (len - root), to index the remaining bits in that set of + codes. Each subsequent root-bit prefix then has its own sub-table. The + total number of table entries required by the code is calculated + incrementally as the number of codes at each bit length is populated. When + all of the codes are shorter than root bits, then root is reduced to the + longest code length, resulting in a single, smaller, one-level table. + + The inflate algorithm also provides for small values of root (relative to + the log2 of the number of symbols), where the shortest code has more bits + than root. In that case, root is increased to the length of the shortest + code. This program, by design, does not handle that case, so it is verified + that the number of symbols is less than 2^(root + 1). + + In order to speed up the examination (by about ten orders of magnitude for + the default arguments), the intermediate states in the build-up of a code + are remembered and previously visited branches are pruned. The memory + required for this will increase rapidly with the total number of symbols and + the maximum code length in bits. However this is a very small price to pay + for the vast speedup. + + First, all of the possible Huffman codes are counted, and reachable + intermediate states are noted by a non-zero count in a saved-results array. + Second, the intermediate states that lead to (root + 1) bit or longer codes + are used to look at all sub-codes from those junctures for their inflate + memory usage. 
(The amount of memory used is not affected by the number of + codes of root bits or less in length.) Third, the visited states in the + construction of those sub-codes and the associated calculation of the table + size is recalled in order to avoid recalculating from the same juncture. + Beginning the code examination at (root + 1) bit codes, which is enabled by + identifying the reachable nodes, accounts for about six of the orders of + magnitude of improvement for the default arguments. About another four + orders of magnitude come from not revisiting previous states. Out of + approximately 2x10^16 possible Huffman codes, only about 2x10^6 sub-codes + need to be examined to cover all of the possible table memory usage cases + for the default arguments of 286 symbols limited to 15-bit codes. + + Note that an unsigned long long type is used for counting. It is quite easy + to exceed the capacity of an eight-byte integer with a large number of + symbols and a large maximum code length, so multiple-precision arithmetic + would need to replace the unsigned long long arithmetic in that case. This + program will abort if an overflow occurs. The big_t type identifies where + the counting takes place. + + An unsigned long long type is also used for calculating the number of + possible codes remaining at the maximum length. This limits the maximum + code length to the number of bits in a long long minus the number of bits + needed to represent the symbols in a flat code. The code_t type identifies + where the bit pattern counting takes place. + */ + +#include +#include +#include +#include + +#define local static + +/* special data types */ +typedef unsigned long long big_t; /* type for code counting */ +typedef unsigned long long code_t; /* type for bit pattern counting */ +struct tab { /* type for been here check */ + size_t len; /* length of bit vector in char's */ + char *vec; /* allocated bit vector */ +}; + +/* The array for saving results, num[], is indexed with this triplet: + + syms: number of symbols remaining to code + left: number of available bit patterns at length len + len: number of bits in the codes currently being assigned + + Those indices are constrained thusly when saving results: + + syms: 3..totsym (totsym == total symbols to code) + left: 2..syms - 1, but only the evens (so syms == 8 -> 2, 4, 6) + len: 1..max - 1 (max == maximum code length in bits) + + syms == 2 is not saved since that immediately leads to a single code. left + must be even, since it represents the number of available bit patterns at + the current length, which is double the number at the previous length. + left ends at syms-1 since left == syms immediately results in a single code. + (left > sym is not allowed since that would result in an incomplete code.) + len is less than max, since the code completes immediately when len == max. + + The offset into the array is calculated for the three indices with the + first one (syms) being outermost, and the last one (len) being innermost. + We build the array with length max-1 lists for the len index, with syms-3 + of those for each symbol. There are totsym-2 of those, with each one + varying in length as a function of sym. See the calculation of index in + count() for the index, and the calculation of size in main() for the size + of the array. + + For the deflate example of 286 symbols limited to 15-bit codes, the array + has 284,284 entries, taking up 2.17 MB for an 8-byte big_t. 
More than + half of the space allocated for saved results is actually used -- not all + possible triplets are reached in the generation of valid Huffman codes. + */ + +/* The array for tracking visited states, done[], is itself indexed identically + to the num[] array as described above for the (syms, left, len) triplet. + Each element in the array is further indexed by the (mem, rem) doublet, + where mem is the amount of inflate table space used so far, and rem is the + remaining unused entries in the current inflate sub-table. Each indexed + element is simply one bit indicating whether the state has been visited or + not. Since the ranges for mem and rem are not known a priori, each bit + vector is of a variable size, and grows as needed to accommodate the visited + states. mem and rem are used to calculate a single index in a triangular + array. Since the range of mem is expected in the default case to be about + ten times larger than the range of rem, the array is skewed to reduce the + memory usage, with eight times the range for mem than for rem. See the + calculations for offset and bit in beenhere() for the details. + + For the deflate example of 286 symbols limited to 15-bit codes, the bit + vectors grow to total approximately 21 MB, in addition to the 4.3 MB done[] + array itself. + */ + +/* Globals to avoid propagating constants or constant pointers recursively */ +local int max; /* maximum allowed bit length for the codes */ +local int root; /* size of base code table in bits */ +local int large; /* largest code table so far */ +local size_t size; /* number of elements in num and done */ +local int *code; /* number of symbols assigned to each bit length */ +local big_t *num; /* saved results array for code counting */ +local struct tab *done; /* states already evaluated array */ + +/* Index function for num[] and done[] */ +#define INDEX(i,j,k) (((size_t)((i-1)>>1)*((i-2)>>1)+(j>>1)-1)*(max-1)+k-1) + +/* Free allocated space. Uses globals code, num, and done. */ +local void cleanup(void) +{ + size_t n; + + if (done != NULL) { + for (n = 0; n < size; n++) + if (done[n].len) + free(done[n].vec); + free(done); + } + if (num != NULL) + free(num); + if (code != NULL) + free(code); +} + +/* Return the number of possible Huffman codes using bit patterns of lengths + len through max inclusive, coding syms symbols, with left bit patterns of + length len unused -- return -1 if there is an overflow in the counting. + Keep a record of previous results in num to prevent repeating the same + calculation. Uses the globals max and num. 
*/ +local big_t count(int syms, int len, int left) +{ + big_t sum; /* number of possible codes from this juncture */ + big_t got; /* value returned from count() */ + int least; /* least number of syms to use at this juncture */ + int most; /* most number of syms to use at this juncture */ + int use; /* number of bit patterns to use in next call */ + size_t index; /* index of this case in *num */ + + /* see if only one possible code */ + if (syms == left) + return 1; + + /* note and verify the expected state */ + assert(syms > left && left > 0 && len < max); + + /* see if we've done this one already */ + index = INDEX(syms, left, len); + got = num[index]; + if (got) + return got; /* we have -- return the saved result */ + + /* we need to use at least this many bit patterns so that the code won't be + incomplete at the next length (more bit patterns than symbols) */ + least = (left << 1) - syms; + if (least < 0) + least = 0; + + /* we can use at most this many bit patterns, lest there not be enough + available for the remaining symbols at the maximum length (if there were + no limit to the code length, this would become: most = left - 1) */ + most = (((code_t)left << (max - len)) - syms) / + (((code_t)1 << (max - len)) - 1); + + /* count all possible codes from this juncture and add them up */ + sum = 0; + for (use = least; use <= most; use++) { + got = count(syms - use, len + 1, (left - use) << 1); + sum += got; + if (got == (big_t)0 - 1 || sum < got) /* overflow */ + return (big_t)0 - 1; + } + + /* verify that all recursive calls are productive */ + assert(sum != 0); + + /* save the result and return it */ + num[index] = sum; + return sum; +} + +/* Return true if we've been here before, set to true if not. Set a bit in a + bit vector to indicate visiting this state. Each (syms,len,left) state + has a variable size bit vector indexed by (mem,rem). The bit vector is + lengthened if needed to allow setting the (mem,rem) bit. */ +local int beenhere(int syms, int len, int left, int mem, int rem) +{ + size_t index; /* index for this state's bit vector */ + size_t offset; /* offset in this state's bit vector */ + int bit; /* mask for this state's bit */ + size_t length; /* length of the bit vector in bytes */ + char *vector; /* new or enlarged bit vector */ + + /* point to vector for (syms,left,len), bit in vector for (mem,rem) */ + index = INDEX(syms, left, len); + mem -= 1 << root; + offset = (mem >> 3) + rem; + offset = ((offset * (offset + 1)) >> 1) + rem; + bit = 1 << (mem & 7); + + /* see if we've been here */ + length = done[index].len; + if (offset < length && (done[index].vec[offset] & bit) != 0) + return 1; /* done this! 
*/ + + /* we haven't been here before -- set the bit to show we have now */ + + /* see if we need to lengthen the vector in order to set the bit */ + if (length <= offset) { + /* if we have one already, enlarge it, zero out the appended space */ + if (length) { + do { + length <<= 1; + } while (length <= offset); + vector = realloc(done[index].vec, length); + if (vector != NULL) + memset(vector + done[index].len, 0, length - done[index].len); + } + + /* otherwise we need to make a new vector and zero it out */ + else { + length = 1 << (len - root); + while (length <= offset) + length <<= 1; + vector = calloc(length, sizeof(char)); + } + + /* in either case, bail if we can't get the memory */ + if (vector == NULL) { + fputs("abort: unable to allocate enough memory\n", stderr); + cleanup(); + exit(1); + } + + /* install the new vector */ + done[index].len = length; + done[index].vec = vector; + } + + /* set the bit */ + done[index].vec[offset] |= bit; + return 0; +} + +/* Examine all possible codes from the given node (syms, len, left). Compute + the amount of memory required to build inflate's decoding tables, where the + number of code structures used so far is mem, and the number remaining in + the current sub-table is rem. Uses the globals max, code, root, large, and + done. */ +local void examine(int syms, int len, int left, int mem, int rem) +{ + int least; /* least number of syms to use at this juncture */ + int most; /* most number of syms to use at this juncture */ + int use; /* number of bit patterns to use in next call */ + + /* see if we have a complete code */ + if (syms == left) { + /* set the last code entry */ + code[len] = left; + + /* complete computation of memory used by this code */ + while (rem < left) { + left -= rem; + rem = 1 << (len - root); + mem += rem; + } + assert(rem == left); + + /* if this is a new maximum, show the entries used and the sub-code */ + if (mem > large) { + large = mem; + printf("max %d: ", mem); + for (use = root + 1; use <= max; use++) + if (code[use]) + printf("%d[%d] ", code[use], use); + putchar('\n'); + fflush(stdout); + } + + /* remove entries as we drop back down in the recursion */ + code[len] = 0; + return; + } + + /* prune the tree if we can */ + if (beenhere(syms, len, left, mem, rem)) + return; + + /* we need to use at least this many bit patterns so that the code won't be + incomplete at the next length (more bit patterns than symbols) */ + least = (left << 1) - syms; + if (least < 0) + least = 0; + + /* we can use at most this many bit patterns, lest there not be enough + available for the remaining symbols at the maximum length (if there were + no limit to the code length, this would become: most = left - 1) */ + most = (((code_t)left << (max - len)) - syms) / + (((code_t)1 << (max - len)) - 1); + + /* occupy least table spaces, creating new sub-tables as needed */ + use = least; + while (rem < use) { + use -= rem; + rem = 1 << (len - root); + mem += rem; + } + rem -= use; + + /* examine codes from here, updating table space as we go */ + for (use = least; use <= most; use++) { + code[len] = use; + examine(syms - use, len + 1, (left - use) << 1, + mem + (rem ? 1 << (len - root) : 0), rem << 1); + if (rem == 0) { + rem = 1 << (len - root); + mem += rem; + } + rem--; + } + + /* remove entries as we drop back down in the recursion */ + code[len] = 0; +} + +/* Look at all sub-codes starting with root + 1 bits. Look at only the valid + intermediate code states (syms, left, len). 
For each completed code, + calculate the amount of memory required by inflate to build the decoding + tables. Find the maximum amount of memory required and show the code that + requires that maximum. Uses the globals max, root, and num. */ +local void enough(int syms) +{ + int n; /* number of remaing symbols for this node */ + int left; /* number of unused bit patterns at this length */ + size_t index; /* index of this case in *num */ + + /* clear code */ + for (n = 0; n <= max; n++) + code[n] = 0; + + /* look at all (root + 1) bit and longer codes */ + large = 1 << root; /* base table */ + if (root < max) /* otherwise, there's only a base table */ + for (n = 3; n <= syms; n++) + for (left = 2; left < n; left += 2) + { + /* look at all reachable (root + 1) bit nodes, and the + resulting codes (complete at root + 2 or more) */ + index = INDEX(n, left, root + 1); + if (root + 1 < max && num[index]) /* reachable node */ + examine(n, root + 1, left, 1 << root, 0); + + /* also look at root bit codes with completions at root + 1 + bits (not saved in num, since complete), just in case */ + if (num[index - 1] && n <= left << 1) + examine((n - left) << 1, root + 1, (n - left) << 1, + 1 << root, 0); + } + + /* done */ + printf("done: maximum of %d table entries\n", large); +} + +/* + Examine and show the total number of possible Huffman codes for a given + maximum number of symbols, initial root table size, and maximum code length + in bits -- those are the command arguments in that order. The default + values are 286, 9, and 15 respectively, for the deflate literal/length code. + The possible codes are counted for each number of coded symbols from two to + the maximum. The counts for each of those and the total number of codes are + shown. The maximum number of inflate table entires is then calculated + across all possible codes. Each new maximum number of table entries and the + associated sub-code (starting at root + 1 == 10 bits) is shown. + + To count and examine Huffman codes that are not length-limited, provide a + maximum length equal to the number of symbols minus one. + + For the deflate literal/length code, use "enough". For the deflate distance + code, use "enough 30 6". + + This uses the %llu printf format to print big_t numbers, which assumes that + big_t is an unsigned long long. If the big_t type is changed (for example + to a multiple precision type), the method of printing will also need to be + updated. 
+ */ +int main(int argc, char **argv) +{ + int syms; /* total number of symbols to code */ + int n; /* number of symbols to code for this run */ + big_t got; /* return value of count() */ + big_t sum; /* accumulated number of codes over n */ + code_t word; /* for counting bits in code_t */ + + /* set up globals for cleanup() */ + code = NULL; + num = NULL; + done = NULL; + + /* get arguments -- default to the deflate literal/length code */ + syms = 286; + root = 9; + max = 15; + if (argc > 1) { + syms = atoi(argv[1]); + if (argc > 2) { + root = atoi(argv[2]); + if (argc > 3) + max = atoi(argv[3]); + } + } + if (argc > 4 || syms < 2 || root < 1 || max < 1) { + fputs("invalid arguments, need: [sym >= 2 [root >= 1 [max >= 1]]]\n", + stderr); + return 1; + } + + /* if not restricting the code length, the longest is syms - 1 */ + if (max > syms - 1) + max = syms - 1; + + /* determine the number of bits in a code_t */ + for (n = 0, word = 1; word; n++, word <<= 1) + ; + + /* make sure that the calculation of most will not overflow */ + if (max > n || (code_t)(syms - 2) >= (((code_t)0 - 1) >> (max - 1))) { + fputs("abort: code length too long for internal types\n", stderr); + return 1; + } + + /* reject impossible code requests */ + if ((code_t)(syms - 1) > ((code_t)1 << max) - 1) { + fprintf(stderr, "%d symbols cannot be coded in %d bits\n", + syms, max); + return 1; + } + + /* allocate code vector */ + code = calloc(max + 1, sizeof(int)); + if (code == NULL) { + fputs("abort: unable to allocate enough memory\n", stderr); + return 1; + } + + /* determine size of saved results array, checking for overflows, + allocate and clear the array (set all to zero with calloc()) */ + if (syms == 2) /* iff max == 1 */ + num = NULL; /* won't be saving any results */ + else { + size = syms >> 1; + if (size > ((size_t)0 - 1) / (n = (syms - 1) >> 1) || + (size *= n, size > ((size_t)0 - 1) / (n = max - 1)) || + (size *= n, size > ((size_t)0 - 1) / sizeof(big_t)) || + (num = calloc(size, sizeof(big_t))) == NULL) { + fputs("abort: unable to allocate enough memory\n", stderr); + cleanup(); + return 1; + } + } + + /* count possible codes for all numbers of symbols, add up counts */ + sum = 0; + for (n = 2; n <= syms; n++) { + got = count(n, 1, 2); + sum += got; + if (got == (big_t)0 - 1 || sum < got) { /* overflow */ + fputs("abort: can't count that high!\n", stderr); + cleanup(); + return 1; + } + printf("%llu %d-codes\n", got, n); + } + printf("%llu total codes for 2 to %d symbols", sum, syms); + if (max < syms - 1) + printf(" (%d-bit length limit)\n", max); + else + puts(" (no length limit)"); + + /* allocate and clear done array for beenhere() */ + if (syms == 2) + done = NULL; + else if (size > ((size_t)0 - 1) / sizeof(struct tab) || + (done = calloc(size, sizeof(struct tab))) == NULL) { + fputs("abort: unable to allocate enough memory\n", stderr); + cleanup(); + return 1; + } + + /* find and show maximum inflate table usage */ + if (root > max) /* reduce root to max length */ + root = max; + if ((code_t)syms < ((code_t)1 << (root + 1))) + enough(syms); + else + puts("cannot handle minimum code lengths > root"); + + /* done */ + cleanup(); + return 0; +} ADDED compat/zlib/examples/fitblk.c Index: compat/zlib/examples/fitblk.c ================================================================== --- compat/zlib/examples/fitblk.c +++ compat/zlib/examples/fitblk.c @@ -0,0 +1,233 @@ +/* fitblk.c: example of fitting compressed output to a specified size + Not copyrighted -- provided to the public domain + Version 1.1 
25 November 2004 Mark Adler */ + +/* Version history: + 1.0 24 Nov 2004 First version + 1.1 25 Nov 2004 Change deflateInit2() to deflateInit() + Use fixed-size, stack-allocated raw buffers + Simplify code moving compression to subroutines + Use assert() for internal errors + Add detailed description of approach + */ + +/* Approach to just fitting a requested compressed size: + + fitblk performs three compression passes on a portion of the input + data in order to determine how much of that input will compress to + nearly the requested output block size. The first pass generates + enough deflate blocks to produce output to fill the requested + output size plus a specfied excess amount (see the EXCESS define + below). The last deflate block may go quite a bit past that, but + is discarded. The second pass decompresses and recompresses just + the compressed data that fit in the requested plus excess sized + buffer. The deflate process is terminated after that amount of + input, which is less than the amount consumed on the first pass. + The last deflate block of the result will be of a comparable size + to the final product, so that the header for that deflate block and + the compression ratio for that block will be about the same as in + the final product. The third compression pass decompresses the + result of the second step, but only the compressed data up to the + requested size minus an amount to allow the compressed stream to + complete (see the MARGIN define below). That will result in a + final compressed stream whose length is less than or equal to the + requested size. Assuming sufficient input and a requested size + greater than a few hundred bytes, the shortfall will typically be + less than ten bytes. + + If the input is short enough that the first compression completes + before filling the requested output size, then that compressed + stream is return with no recompression. + + EXCESS is chosen to be just greater than the shortfall seen in a + two pass approach similar to the above. That shortfall is due to + the last deflate block compressing more efficiently with a smaller + header on the second pass. EXCESS is set to be large enough so + that there is enough uncompressed data for the second pass to fill + out the requested size, and small enough so that the final deflate + block of the second pass will be close in size to the final deflate + block of the third and final pass. MARGIN is chosen to be just + large enough to assure that the final compression has enough room + to complete in all cases. 
+ */ + +#include +#include +#include +#include "zlib.h" + +#define local static + +/* print nastygram and leave */ +local void quit(char *why) +{ + fprintf(stderr, "fitblk abort: %s\n", why); + exit(1); +} + +#define RAWLEN 4096 /* intermediate uncompressed buffer size */ + +/* compress from file to def until provided buffer is full or end of + input reached; return last deflate() return value, or Z_ERRNO if + there was read error on the file */ +local int partcompress(FILE *in, z_streamp def) +{ + int ret, flush; + unsigned char raw[RAWLEN]; + + flush = Z_NO_FLUSH; + do { + def->avail_in = fread(raw, 1, RAWLEN, in); + if (ferror(in)) + return Z_ERRNO; + def->next_in = raw; + if (feof(in)) + flush = Z_FINISH; + ret = deflate(def, flush); + assert(ret != Z_STREAM_ERROR); + } while (def->avail_out != 0 && flush == Z_NO_FLUSH); + return ret; +} + +/* recompress from inf's input to def's output; the input for inf and + the output for def are set in those structures before calling; + return last deflate() return value, or Z_MEM_ERROR if inflate() + was not able to allocate enough memory when it needed to */ +local int recompress(z_streamp inf, z_streamp def) +{ + int ret, flush; + unsigned char raw[RAWLEN]; + + flush = Z_NO_FLUSH; + do { + /* decompress */ + inf->avail_out = RAWLEN; + inf->next_out = raw; + ret = inflate(inf, Z_NO_FLUSH); + assert(ret != Z_STREAM_ERROR && ret != Z_DATA_ERROR && + ret != Z_NEED_DICT); + if (ret == Z_MEM_ERROR) + return ret; + + /* compress what was decompresed until done or no room */ + def->avail_in = RAWLEN - inf->avail_out; + def->next_in = raw; + if (inf->avail_out != 0) + flush = Z_FINISH; + ret = deflate(def, flush); + assert(ret != Z_STREAM_ERROR); + } while (ret != Z_STREAM_END && def->avail_out != 0); + return ret; +} + +#define EXCESS 256 /* empirically determined stream overage */ +#define MARGIN 8 /* amount to back off for completion */ + +/* compress from stdin to fixed-size block on stdout */ +int main(int argc, char **argv) +{ + int ret; /* return code */ + unsigned size; /* requested fixed output block size */ + unsigned have; /* bytes written by deflate() call */ + unsigned char *blk; /* intermediate and final stream */ + unsigned char *tmp; /* close to desired size stream */ + z_stream def, inf; /* zlib deflate and inflate states */ + + /* get requested output size */ + if (argc != 2) + quit("need one argument: size of output block"); + ret = strtol(argv[1], argv + 1, 10); + if (argv[1][0] != 0) + quit("argument must be a number"); + if (ret < 8) /* 8 is minimum zlib stream size */ + quit("need positive size of 8 or greater"); + size = (unsigned)ret; + + /* allocate memory for buffers and compression engine */ + blk = malloc(size + EXCESS); + def.zalloc = Z_NULL; + def.zfree = Z_NULL; + def.opaque = Z_NULL; + ret = deflateInit(&def, Z_DEFAULT_COMPRESSION); + if (ret != Z_OK || blk == NULL) + quit("out of memory"); + + /* compress from stdin until output full, or no more input */ + def.avail_out = size + EXCESS; + def.next_out = blk; + ret = partcompress(stdin, &def); + if (ret == Z_ERRNO) + quit("error reading input"); + + /* if it all fit, then size was undersubscribed -- done! 
*/ + if (ret == Z_STREAM_END && def.avail_out >= EXCESS) { + /* write block to stdout */ + have = size + EXCESS - def.avail_out; + if (fwrite(blk, 1, have, stdout) != have || ferror(stdout)) + quit("error writing output"); + + /* clean up and print results to stderr */ + ret = deflateEnd(&def); + assert(ret != Z_STREAM_ERROR); + free(blk); + fprintf(stderr, + "%u bytes unused out of %u requested (all input)\n", + size - have, size); + return 0; + } + + /* it didn't all fit -- set up for recompression */ + inf.zalloc = Z_NULL; + inf.zfree = Z_NULL; + inf.opaque = Z_NULL; + inf.avail_in = 0; + inf.next_in = Z_NULL; + ret = inflateInit(&inf); + tmp = malloc(size + EXCESS); + if (ret != Z_OK || tmp == NULL) + quit("out of memory"); + ret = deflateReset(&def); + assert(ret != Z_STREAM_ERROR); + + /* do first recompression close to the right amount */ + inf.avail_in = size + EXCESS; + inf.next_in = blk; + def.avail_out = size + EXCESS; + def.next_out = tmp; + ret = recompress(&inf, &def); + if (ret == Z_MEM_ERROR) + quit("out of memory"); + + /* set up for next reocmpression */ + ret = inflateReset(&inf); + assert(ret != Z_STREAM_ERROR); + ret = deflateReset(&def); + assert(ret != Z_STREAM_ERROR); + + /* do second and final recompression (third compression) */ + inf.avail_in = size - MARGIN; /* assure stream will complete */ + inf.next_in = tmp; + def.avail_out = size; + def.next_out = blk; + ret = recompress(&inf, &def); + if (ret == Z_MEM_ERROR) + quit("out of memory"); + assert(ret == Z_STREAM_END); /* otherwise MARGIN too small */ + + /* done -- write block to stdout */ + have = size - def.avail_out; + if (fwrite(blk, 1, have, stdout) != have || ferror(stdout)) + quit("error writing output"); + + /* clean up and print results to stderr */ + free(tmp); + ret = inflateEnd(&inf); + assert(ret != Z_STREAM_ERROR); + ret = deflateEnd(&def); + assert(ret != Z_STREAM_ERROR); + free(blk); + fprintf(stderr, + "%u bytes unused out of %u requested (%lu input)\n", + size - have, size, def.total_in); + return 0; +} ADDED compat/zlib/examples/gun.c Index: compat/zlib/examples/gun.c ================================================================== --- compat/zlib/examples/gun.c +++ compat/zlib/examples/gun.c @@ -0,0 +1,702 @@ +/* gun.c -- simple gunzip to give an example of the use of inflateBack() + * Copyright (C) 2003, 2005, 2008, 2010, 2012 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + Version 1.7 12 August 2012 Mark Adler */ + +/* Version history: + 1.0 16 Feb 2003 First version for testing of inflateBack() + 1.1 21 Feb 2005 Decompress concatenated gzip streams + Remove use of "this" variable (C++ keyword) + Fix return value for in() + Improve allocation failure checking + Add typecasting for void * structures + Add -h option for command version and usage + Add a bunch of comments + 1.2 20 Mar 2005 Add Unix compress (LZW) decompression + Copy file attributes from input file to output file + 1.3 12 Jun 2005 Add casts for error messages [Oberhumer] + 1.4 8 Dec 2006 LZW decompression speed improvements + 1.5 9 Feb 2008 Avoid warning in latest version of gcc + 1.6 17 Jan 2010 Avoid signed/unsigned comparison warnings + 1.7 12 Aug 2012 Update for z_const usage in zlib 1.2.8 + */ + +/* + gun [ -t ] [ name ... ] + + decompresses the data in the named gzip files. If no arguments are given, + gun will decompress from stdin to stdout. The names must end in .gz, -gz, + .z, -z, _z, or .Z. The uncompressed data will be written to a file name + with the suffix stripped. 
On success, the original file is deleted. On + failure, the output file is deleted. For most failures, the command will + continue to process the remaining names on the command line. A memory + allocation failure will abort the command. If -t is specified, then the + listed files or stdin will be tested as gzip files for integrity (without + checking for a proper suffix), no output will be written, and no files + will be deleted. + + Like gzip, gun allows concatenated gzip streams and will decompress them, + writing all of the uncompressed data to the output. Unlike gzip, gun allows + an empty file on input, and will produce no error writing an empty output + file. + + gun will also decompress files made by Unix compress, which uses LZW + compression. These files are automatically detected by virtue of their + magic header bytes. Since the end of Unix compress stream is marked by the + end-of-file, they cannot be concantenated. If a Unix compress stream is + encountered in an input file, it is the last stream in that file. + + Like gunzip and uncompress, the file attributes of the orignal compressed + file are maintained in the final uncompressed file, to the extent that the + user permissions allow it. + + On my Mac OS X PowerPC G4, gun is almost twice as fast as gunzip (version + 1.2.4) is on the same file, when gun is linked with zlib 1.2.2. Also the + LZW decompression provided by gun is about twice as fast as the standard + Unix uncompress command. + */ + +/* external functions and related types and constants */ +#include /* fprintf() */ +#include /* malloc(), free() */ +#include /* strerror(), strcmp(), strlen(), memcpy() */ +#include /* errno */ +#include /* open() */ +#include /* read(), write(), close(), chown(), unlink() */ +#include +#include /* stat(), chmod() */ +#include /* utime() */ +#include "zlib.h" /* inflateBackInit(), inflateBack(), */ + /* inflateBackEnd(), crc32() */ + +/* function declaration */ +#define local static + +/* buffer constants */ +#define SIZE 32768U /* input and output buffer sizes */ +#define PIECE 16384 /* limits i/o chunks for 16-bit int case */ + +/* structure for infback() to pass to input function in() -- it maintains the + input file and a buffer of size SIZE */ +struct ind { + int infile; + unsigned char *inbuf; +}; + +/* Load input buffer, assumed to be empty, and return bytes loaded and a + pointer to them. read() is called until the buffer is full, or until it + returns end-of-file or error. Return 0 on error. */ +local unsigned in(void *in_desc, z_const unsigned char **buf) +{ + int ret; + unsigned len; + unsigned char *next; + struct ind *me = (struct ind *)in_desc; + + next = me->inbuf; + *buf = next; + len = 0; + do { + ret = PIECE; + if ((unsigned)ret > SIZE - len) + ret = (int)(SIZE - len); + ret = (int)read(me->infile, next, ret); + if (ret == -1) { + len = 0; + break; + } + next += ret; + len += ret; + } while (ret != 0 && len < SIZE); + return len; +} + +/* structure for infback() to pass to output function out() -- it maintains the + output file, a running CRC-32 check on the output and the total number of + bytes output, both for checking against the gzip trailer. (The length in + the gzip trailer is stored modulo 2^32, so it's ok if a long is 32 bits and + the output is greater than 4 GB.) */ +struct outd { + int outfile; + int check; /* true if checking crc and total */ + unsigned long crc; + unsigned long total; +}; + +/* Write output buffer and update the CRC-32 and total bytes written. 
write() + is called until all of the output is written or an error is encountered. + On success out() returns 0. For a write failure, out() returns 1. If the + output file descriptor is -1, then nothing is written. + */ +local int out(void *out_desc, unsigned char *buf, unsigned len) +{ + int ret; + struct outd *me = (struct outd *)out_desc; + + if (me->check) { + me->crc = crc32(me->crc, buf, len); + me->total += len; + } + if (me->outfile != -1) + do { + ret = PIECE; + if ((unsigned)ret > len) + ret = (int)len; + ret = (int)write(me->outfile, buf, ret); + if (ret == -1) + return 1; + buf += ret; + len -= ret; + } while (len != 0); + return 0; +} + +/* next input byte macro for use inside lunpipe() and gunpipe() */ +#define NEXT() (have ? 0 : (have = in(indp, &next)), \ + last = have ? (have--, (int)(*next++)) : -1) + +/* memory for gunpipe() and lunpipe() -- + the first 256 entries of prefix[] and suffix[] are never used, could + have offset the index, but it's faster to waste the memory */ +unsigned char inbuf[SIZE]; /* input buffer */ +unsigned char outbuf[SIZE]; /* output buffer */ +unsigned short prefix[65536]; /* index to LZW prefix string */ +unsigned char suffix[65536]; /* one-character LZW suffix */ +unsigned char match[65280 + 2]; /* buffer for reversed match or gzip + 32K sliding window */ + +/* throw out what's left in the current bits byte buffer (this is a vestigial + aspect of the compressed data format derived from an implementation that + made use of a special VAX machine instruction!) */ +#define FLUSHCODE() \ + do { \ + left = 0; \ + rem = 0; \ + if (chunk > have) { \ + chunk -= have; \ + have = 0; \ + if (NEXT() == -1) \ + break; \ + chunk--; \ + if (chunk > have) { \ + chunk = have = 0; \ + break; \ + } \ + } \ + have -= chunk; \ + next += chunk; \ + chunk = 0; \ + } while (0) + +/* Decompress a compress (LZW) file from indp to outfile. The compress magic + header (two bytes) has already been read and verified. There are have bytes + of buffered input at next. strm is used for passing error information back + to gunpipe(). + + lunpipe() will return Z_OK on success, Z_BUF_ERROR for an unexpected end of + file, read error, or write error (a write error indicated by strm->next_in + not equal to Z_NULL), or Z_DATA_ERROR for invalid input. 
+ */ +local int lunpipe(unsigned have, z_const unsigned char *next, struct ind *indp, + int outfile, z_stream *strm) +{ + int last; /* last byte read by NEXT(), or -1 if EOF */ + unsigned chunk; /* bytes left in current chunk */ + int left; /* bits left in rem */ + unsigned rem; /* unused bits from input */ + int bits; /* current bits per code */ + unsigned code; /* code, table traversal index */ + unsigned mask; /* mask for current bits codes */ + int max; /* maximum bits per code for this stream */ + unsigned flags; /* compress flags, then block compress flag */ + unsigned end; /* last valid entry in prefix/suffix tables */ + unsigned temp; /* current code */ + unsigned prev; /* previous code */ + unsigned final; /* last character written for previous code */ + unsigned stack; /* next position for reversed string */ + unsigned outcnt; /* bytes in output buffer */ + struct outd outd; /* output structure */ + unsigned char *p; + + /* set up output */ + outd.outfile = outfile; + outd.check = 0; + + /* process remainder of compress header -- a flags byte */ + flags = NEXT(); + if (last == -1) + return Z_BUF_ERROR; + if (flags & 0x60) { + strm->msg = (char *)"unknown lzw flags set"; + return Z_DATA_ERROR; + } + max = flags & 0x1f; + if (max < 9 || max > 16) { + strm->msg = (char *)"lzw bits out of range"; + return Z_DATA_ERROR; + } + if (max == 9) /* 9 doesn't really mean 9 */ + max = 10; + flags &= 0x80; /* true if block compress */ + + /* clear table */ + bits = 9; + mask = 0x1ff; + end = flags ? 256 : 255; + + /* set up: get first 9-bit code, which is the first decompressed byte, but + don't create a table entry until the next code */ + if (NEXT() == -1) /* no compressed data is ok */ + return Z_OK; + final = prev = (unsigned)last; /* low 8 bits of code */ + if (NEXT() == -1) /* missing a bit */ + return Z_BUF_ERROR; + if (last & 1) { /* code must be < 256 */ + strm->msg = (char *)"invalid lzw code"; + return Z_DATA_ERROR; + } + rem = (unsigned)last >> 1; /* remaining 7 bits */ + left = 7; + chunk = bits - 2; /* 7 bytes left in this chunk */ + outbuf[0] = (unsigned char)final; /* write first decompressed byte */ + outcnt = 1; + + /* decode codes */ + stack = 0; + for (;;) { + /* if the table will be full after this, increment the code size */ + if (end >= mask && bits < max) { + FLUSHCODE(); + bits++; + mask <<= 1; + mask++; + } + + /* get a code of length bits */ + if (chunk == 0) /* decrement chunk modulo bits */ + chunk = bits; + code = rem; /* low bits of code */ + if (NEXT() == -1) { /* EOF is end of compressed data */ + /* write remaining buffered output */ + if (outcnt && out(&outd, outbuf, outcnt)) { + strm->next_in = outbuf; /* signal write error */ + return Z_BUF_ERROR; + } + return Z_OK; + } + code += (unsigned)last << left; /* middle (or high) bits of code */ + left += 8; + chunk--; + if (bits > left) { /* need more bits */ + if (NEXT() == -1) /* can't end in middle of code */ + return Z_BUF_ERROR; + code += (unsigned)last << left; /* high bits of code */ + left += 8; + chunk--; + } + code &= mask; /* mask to current code length */ + left -= bits; /* number of unused bits */ + rem = (unsigned)last >> (8 - left); /* unused bits from last byte */ + + /* process clear code (256) */ + if (code == 256 && flags) { + FLUSHCODE(); + bits = 9; /* initialize bits and mask */ + mask = 0x1ff; + end = 255; /* empty table */ + continue; /* get next code */ + } + + /* special code to reuse last match */ + temp = code; /* save the current code */ + if (code > end) { + /* Be picky on the 
allowed code here, and make sure that the code + we drop through (prev) will be a valid index so that random + input does not cause an exception. The code != end + 1 check is + empirically derived, and not checked in the original uncompress + code. If this ever causes a problem, that check could be safely + removed. Leaving this check in greatly improves gun's ability + to detect random or corrupted input after a compress header. + In any case, the prev > end check must be retained. */ + if (code != end + 1 || prev > end) { + strm->msg = (char *)"invalid lzw code"; + return Z_DATA_ERROR; + } + match[stack++] = (unsigned char)final; + code = prev; + } + + /* walk through linked list to generate output in reverse order */ + p = match + stack; + while (code >= 256) { + *p++ = suffix[code]; + code = prefix[code]; + } + stack = p - match; + match[stack++] = (unsigned char)code; + final = code; + + /* link new table entry */ + if (end < mask) { + end++; + prefix[end] = (unsigned short)prev; + suffix[end] = (unsigned char)final; + } + + /* set previous code for next iteration */ + prev = temp; + + /* write output in forward order */ + while (stack > SIZE - outcnt) { + while (outcnt < SIZE) + outbuf[outcnt++] = match[--stack]; + if (out(&outd, outbuf, outcnt)) { + strm->next_in = outbuf; /* signal write error */ + return Z_BUF_ERROR; + } + outcnt = 0; + } + p = match + stack; + do { + outbuf[outcnt++] = *--p; + } while (p > match); + stack = 0; + + /* loop for next code with final and prev as the last match, rem and + left provide the first 0..7 bits of the next code, end is the last + valid table entry */ + } +} + +/* Decompress a gzip file from infile to outfile. strm is assumed to have been + successfully initialized with inflateBackInit(). The input file may consist + of a series of gzip streams, in which case all of them will be decompressed + to the output file. If outfile is -1, then the gzip stream(s) integrity is + checked and nothing is written. + + The return value is a zlib error code: Z_MEM_ERROR if out of memory, + Z_DATA_ERROR if the header or the compressed data is invalid, or if the + trailer CRC-32 check or length doesn't match, Z_BUF_ERROR if the input ends + prematurely or a write error occurs, or Z_ERRNO if junk (not a another gzip + stream) follows a valid gzip stream. + */ +local int gunpipe(z_stream *strm, int infile, int outfile) +{ + int ret, first, last; + unsigned have, flags, len; + z_const unsigned char *next = NULL; + struct ind ind, *indp; + struct outd outd; + + /* setup input buffer */ + ind.infile = infile; + ind.inbuf = inbuf; + indp = &ind; + + /* decompress concatenated gzip streams */ + have = 0; /* no input data read in yet */ + first = 1; /* looking for first gzip header */ + strm->next_in = Z_NULL; /* so Z_BUF_ERROR means EOF */ + for (;;) { + /* look for the two magic header bytes for a gzip stream */ + if (NEXT() == -1) { + ret = Z_OK; + break; /* empty gzip stream is ok */ + } + if (last != 31 || (NEXT() != 139 && last != 157)) { + strm->msg = (char *)"incorrect header check"; + ret = first ? 
Z_DATA_ERROR : Z_ERRNO; + break; /* not a gzip or compress header */ + } + first = 0; /* next non-header is junk */ + + /* process a compress (LZW) file -- can't be concatenated after this */ + if (last == 157) { + ret = lunpipe(have, next, indp, outfile, strm); + break; + } + + /* process remainder of gzip header */ + ret = Z_BUF_ERROR; + if (NEXT() != 8) { /* only deflate method allowed */ + if (last == -1) break; + strm->msg = (char *)"unknown compression method"; + ret = Z_DATA_ERROR; + break; + } + flags = NEXT(); /* header flags */ + NEXT(); /* discard mod time, xflgs, os */ + NEXT(); + NEXT(); + NEXT(); + NEXT(); + NEXT(); + if (last == -1) break; + if (flags & 0xe0) { + strm->msg = (char *)"unknown header flags set"; + ret = Z_DATA_ERROR; + break; + } + if (flags & 4) { /* extra field */ + len = NEXT(); + len += (unsigned)(NEXT()) << 8; + if (last == -1) break; + while (len > have) { + len -= have; + have = 0; + if (NEXT() == -1) break; + len--; + } + if (last == -1) break; + have -= len; + next += len; + } + if (flags & 8) /* file name */ + while (NEXT() != 0 && last != -1) + ; + if (flags & 16) /* comment */ + while (NEXT() != 0 && last != -1) + ; + if (flags & 2) { /* header crc */ + NEXT(); + NEXT(); + } + if (last == -1) break; + + /* set up output */ + outd.outfile = outfile; + outd.check = 1; + outd.crc = crc32(0L, Z_NULL, 0); + outd.total = 0; + + /* decompress data to output */ + strm->next_in = next; + strm->avail_in = have; + ret = inflateBack(strm, in, indp, out, &outd); + if (ret != Z_STREAM_END) break; + next = strm->next_in; + have = strm->avail_in; + strm->next_in = Z_NULL; /* so Z_BUF_ERROR means EOF */ + + /* check trailer */ + ret = Z_BUF_ERROR; + if (NEXT() != (int)(outd.crc & 0xff) || + NEXT() != (int)((outd.crc >> 8) & 0xff) || + NEXT() != (int)((outd.crc >> 16) & 0xff) || + NEXT() != (int)((outd.crc >> 24) & 0xff)) { + /* crc error */ + if (last != -1) { + strm->msg = (char *)"incorrect data check"; + ret = Z_DATA_ERROR; + } + break; + } + if (NEXT() != (int)(outd.total & 0xff) || + NEXT() != (int)((outd.total >> 8) & 0xff) || + NEXT() != (int)((outd.total >> 16) & 0xff) || + NEXT() != (int)((outd.total >> 24) & 0xff)) { + /* length error */ + if (last != -1) { + strm->msg = (char *)"incorrect length check"; + ret = Z_DATA_ERROR; + } + break; + } + + /* go back and look for another gzip stream */ + } + + /* clean up and return */ + return ret; +} + +/* Copy file attributes, from -> to, as best we can. This is best effort, so + no errors are reported. The mode bits, including suid, sgid, and the sticky + bit are copied (if allowed), the owner's user id and group id are copied + (again if allowed), and the access and modify times are copied. */ +local void copymeta(char *from, char *to) +{ + struct stat was; + struct utimbuf when; + + /* get all of from's Unix meta data, return if not a regular file */ + if (stat(from, &was) != 0 || (was.st_mode & S_IFMT) != S_IFREG) + return; + + /* set to's mode bits, ignore errors */ + (void)chmod(to, was.st_mode & 07777); + + /* copy owner's user and group, ignore errors */ + (void)chown(to, was.st_uid, was.st_gid); + + /* copy access and modify times, ignore errors */ + when.actime = was.st_atime; + when.modtime = was.st_mtime; + (void)utime(to, &when); +} + +/* Decompress the file inname to the file outnname, of if test is true, just + decompress without writing and check the gzip trailer for integrity. If + inname is NULL or an empty string, read from stdin. If outname is NULL or + an empty string, write to stdout. 
strm is a pre-initialized inflateBack + structure. When appropriate, copy the file attributes from inname to + outname. + + gunzip() returns 1 if there is an out-of-memory error or an unexpected + return code from gunpipe(). Otherwise it returns 0. + */ +local int gunzip(z_stream *strm, char *inname, char *outname, int test) +{ + int ret; + int infile, outfile; + + /* open files */ + if (inname == NULL || *inname == 0) { + inname = "-"; + infile = 0; /* stdin */ + } + else { + infile = open(inname, O_RDONLY, 0); + if (infile == -1) { + fprintf(stderr, "gun cannot open %s\n", inname); + return 0; + } + } + if (test) + outfile = -1; + else if (outname == NULL || *outname == 0) { + outname = "-"; + outfile = 1; /* stdout */ + } + else { + outfile = open(outname, O_CREAT | O_TRUNC | O_WRONLY, 0666); + if (outfile == -1) { + close(infile); + fprintf(stderr, "gun cannot create %s\n", outname); + return 0; + } + } + errno = 0; + + /* decompress */ + ret = gunpipe(strm, infile, outfile); + if (outfile > 2) close(outfile); + if (infile > 2) close(infile); + + /* interpret result */ + switch (ret) { + case Z_OK: + case Z_ERRNO: + if (infile > 2 && outfile > 2) { + copymeta(inname, outname); /* copy attributes */ + unlink(inname); + } + if (ret == Z_ERRNO) + fprintf(stderr, "gun warning: trailing garbage ignored in %s\n", + inname); + break; + case Z_DATA_ERROR: + if (outfile > 2) unlink(outname); + fprintf(stderr, "gun data error on %s: %s\n", inname, strm->msg); + break; + case Z_MEM_ERROR: + if (outfile > 2) unlink(outname); + fprintf(stderr, "gun out of memory error--aborting\n"); + return 1; + case Z_BUF_ERROR: + if (outfile > 2) unlink(outname); + if (strm->next_in != Z_NULL) { + fprintf(stderr, "gun write error on %s: %s\n", + outname, strerror(errno)); + } + else if (errno) { + fprintf(stderr, "gun read error on %s: %s\n", + inname, strerror(errno)); + } + else { + fprintf(stderr, "gun unexpected end of file on %s\n", + inname); + } + break; + default: + if (outfile > 2) unlink(outname); + fprintf(stderr, "gun internal error--aborting\n"); + return 1; + } + return 0; +} + +/* Process the gun command line arguments. See the command syntax near the + beginning of this source file. 
*/ +int main(int argc, char **argv) +{ + int ret, len, test; + char *outname; + unsigned char *window; + z_stream strm; + + /* initialize inflateBack state for repeated use */ + window = match; /* reuse LZW match buffer */ + strm.zalloc = Z_NULL; + strm.zfree = Z_NULL; + strm.opaque = Z_NULL; + ret = inflateBackInit(&strm, 15, window); + if (ret != Z_OK) { + fprintf(stderr, "gun out of memory error--aborting\n"); + return 1; + } + + /* decompress each file to the same name with the suffix removed */ + argc--; + argv++; + test = 0; + if (argc && strcmp(*argv, "-h") == 0) { + fprintf(stderr, "gun 1.6 (17 Jan 2010)\n"); + fprintf(stderr, "Copyright (C) 2003-2010 Mark Adler\n"); + fprintf(stderr, "usage: gun [-t] [file1.gz [file2.Z ...]]\n"); + return 0; + } + if (argc && strcmp(*argv, "-t") == 0) { + test = 1; + argc--; + argv++; + } + if (argc) + do { + if (test) + outname = NULL; + else { + len = (int)strlen(*argv); + if (strcmp(*argv + len - 3, ".gz") == 0 || + strcmp(*argv + len - 3, "-gz") == 0) + len -= 3; + else if (strcmp(*argv + len - 2, ".z") == 0 || + strcmp(*argv + len - 2, "-z") == 0 || + strcmp(*argv + len - 2, "_z") == 0 || + strcmp(*argv + len - 2, ".Z") == 0) + len -= 2; + else { + fprintf(stderr, "gun error: no gz type on %s--skipping\n", + *argv); + continue; + } + outname = malloc(len + 1); + if (outname == NULL) { + fprintf(stderr, "gun out of memory error--aborting\n"); + ret = 1; + break; + } + memcpy(outname, *argv, len); + outname[len] = 0; + } + ret = gunzip(&strm, *argv, outname, test); + if (outname != NULL) free(outname); + if (ret) break; + } while (argv++, --argc); + else + ret = gunzip(&strm, NULL, NULL, test); + + /* clean up */ + inflateBackEnd(&strm); + return ret; +} ADDED compat/zlib/examples/gzappend.c Index: compat/zlib/examples/gzappend.c ================================================================== --- compat/zlib/examples/gzappend.c +++ compat/zlib/examples/gzappend.c @@ -0,0 +1,504 @@ +/* gzappend -- command to append to a gzip file + + Copyright (C) 2003, 2012 Mark Adler, all rights reserved + version 1.2, 11 Oct 2012 + + This software is provided 'as-is', without any express or implied + warranty. In no event will the author be held liable for any damages + arising from the use of this software. + + Permission is granted to anyone to use this software for any purpose, + including commercial applications, and to alter it and redistribute it + freely, subject to the following restrictions: + + 1. The origin of this software must not be misrepresented; you must not + claim that you wrote the original software. If you use this software + in a product, an acknowledgment in the product documentation would be + appreciated but is not required. + 2. Altered source versions must be plainly marked as such, and must not be + misrepresented as being the original software. + 3. This notice may not be removed or altered from any source distribution. 
+ + Mark Adler madler@alumni.caltech.edu + */ + +/* + * Change history: + * + * 1.0 19 Oct 2003 - First version + * 1.1 4 Nov 2003 - Expand and clarify some comments and notes + * - Add version and copyright to help + * - Send help to stdout instead of stderr + * - Add some preemptive typecasts + * - Add L to constants in lseek() calls + * - Remove some debugging information in error messages + * - Use new data_type definition for zlib 1.2.1 + * - Simplfy and unify file operations + * - Finish off gzip file in gztack() + * - Use deflatePrime() instead of adding empty blocks + * - Keep gzip file clean on appended file read errors + * - Use in-place rotate instead of auxiliary buffer + * (Why you ask? Because it was fun to write!) + * 1.2 11 Oct 2012 - Fix for proper z_const usage + * - Check for input buffer malloc failure + */ + +/* + gzappend takes a gzip file and appends to it, compressing files from the + command line or data from stdin. The gzip file is written to directly, to + avoid copying that file, in case it's large. Note that this results in the + unfriendly behavior that if gzappend fails, the gzip file is corrupted. + + This program was written to illustrate the use of the new Z_BLOCK option of + zlib 1.2.x's inflate() function. This option returns from inflate() at each + block boundary to facilitate locating and modifying the last block bit at + the start of the final deflate block. Also whether using Z_BLOCK or not, + another required feature of zlib 1.2.x is that inflate() now provides the + number of unusued bits in the last input byte used. gzappend will not work + with versions of zlib earlier than 1.2.1. + + gzappend first decompresses the gzip file internally, discarding all but + the last 32K of uncompressed data, and noting the location of the last block + bit and the number of unused bits in the last byte of the compressed data. + The gzip trailer containing the CRC-32 and length of the uncompressed data + is verified. This trailer will be later overwritten. + + Then the last block bit is cleared by seeking back in the file and rewriting + the byte that contains it. Seeking forward, the last byte of the compressed + data is saved along with the number of unused bits to initialize deflate. + + A deflate process is initialized, using the last 32K of the uncompressed + data from the gzip file to initialize the dictionary. If the total + uncompressed data was less than 32K, then all of it is used to initialize + the dictionary. The deflate output bit buffer is also initialized with the + last bits from the original deflate stream. From here on, the data to + append is simply compressed using deflate, and written to the gzip file. + When that is complete, the new CRC-32 and uncompressed length are written + as the trailer of the gzip file. 
+ */ + +#include +#include +#include +#include +#include +#include "zlib.h" + +#define local static +#define LGCHUNK 14 +#define CHUNK (1U << LGCHUNK) +#define DSIZE 32768U + +/* print an error message and terminate with extreme prejudice */ +local void bye(char *msg1, char *msg2) +{ + fprintf(stderr, "gzappend error: %s%s\n", msg1, msg2); + exit(1); +} + +/* return the greatest common divisor of a and b using Euclid's algorithm, + modified to be fast when one argument much greater than the other, and + coded to avoid unnecessary swapping */ +local unsigned gcd(unsigned a, unsigned b) +{ + unsigned c; + + while (a && b) + if (a > b) { + c = b; + while (a - c >= c) + c <<= 1; + a -= c; + } + else { + c = a; + while (b - c >= c) + c <<= 1; + b -= c; + } + return a + b; +} + +/* rotate list[0..len-1] left by rot positions, in place */ +local void rotate(unsigned char *list, unsigned len, unsigned rot) +{ + unsigned char tmp; + unsigned cycles; + unsigned char *start, *last, *to, *from; + + /* normalize rot and handle degenerate cases */ + if (len < 2) return; + if (rot >= len) rot %= len; + if (rot == 0) return; + + /* pointer to last entry in list */ + last = list + (len - 1); + + /* do simple left shift by one */ + if (rot == 1) { + tmp = *list; + memcpy(list, list + 1, len - 1); + *last = tmp; + return; + } + + /* do simple right shift by one */ + if (rot == len - 1) { + tmp = *last; + memmove(list + 1, list, len - 1); + *list = tmp; + return; + } + + /* otherwise do rotate as a set of cycles in place */ + cycles = gcd(len, rot); /* number of cycles */ + do { + start = from = list + cycles; /* start index is arbitrary */ + tmp = *from; /* save entry to be overwritten */ + for (;;) { + to = from; /* next step in cycle */ + from += rot; /* go right rot positions */ + if (from > last) from -= len; /* (pointer better not wrap) */ + if (from == start) break; /* all but one shifted */ + *to = *from; /* shift left */ + } + *to = tmp; /* complete the circle */ + } while (--cycles); +} + +/* structure for gzip file read operations */ +typedef struct { + int fd; /* file descriptor */ + int size; /* 1 << size is bytes in buf */ + unsigned left; /* bytes available at next */ + unsigned char *buf; /* buffer */ + z_const unsigned char *next; /* next byte in buffer */ + char *name; /* file name for error messages */ +} file; + +/* reload buffer */ +local int readin(file *in) +{ + int len; + + len = read(in->fd, in->buf, 1 << in->size); + if (len == -1) bye("error reading ", in->name); + in->left = (unsigned)len; + in->next = in->buf; + return len; +} + +/* read from file in, exit if end-of-file */ +local int readmore(file *in) +{ + if (readin(in) == 0) bye("unexpected end of ", in->name); + return 0; +} + +#define read1(in) (in->left == 0 ? 
readmore(in) : 0, \ + in->left--, *(in->next)++) + +/* skip over n bytes of in */ +local void skip(file *in, unsigned n) +{ + unsigned bypass; + + if (n > in->left) { + n -= in->left; + bypass = n & ~((1U << in->size) - 1); + if (bypass) { + if (lseek(in->fd, (off_t)bypass, SEEK_CUR) == -1) + bye("seeking ", in->name); + n -= bypass; + } + readmore(in); + if (n > in->left) + bye("unexpected end of ", in->name); + } + in->left -= n; + in->next += n; +} + +/* read a four-byte unsigned integer, little-endian, from in */ +unsigned long read4(file *in) +{ + unsigned long val; + + val = read1(in); + val += (unsigned)read1(in) << 8; + val += (unsigned long)read1(in) << 16; + val += (unsigned long)read1(in) << 24; + return val; +} + +/* skip over gzip header */ +local void gzheader(file *in) +{ + int flags; + unsigned n; + + if (read1(in) != 31 || read1(in) != 139) bye(in->name, " not a gzip file"); + if (read1(in) != 8) bye("unknown compression method in", in->name); + flags = read1(in); + if (flags & 0xe0) bye("unknown header flags set in", in->name); + skip(in, 6); + if (flags & 4) { + n = read1(in); + n += (unsigned)(read1(in)) << 8; + skip(in, n); + } + if (flags & 8) while (read1(in) != 0) ; + if (flags & 16) while (read1(in) != 0) ; + if (flags & 2) skip(in, 2); +} + +/* decompress gzip file "name", return strm with a deflate stream ready to + continue compression of the data in the gzip file, and return a file + descriptor pointing to where to write the compressed data -- the deflate + stream is initialized to compress using level "level" */ +local int gzscan(char *name, z_stream *strm, int level) +{ + int ret, lastbit, left, full; + unsigned have; + unsigned long crc, tot; + unsigned char *window; + off_t lastoff, end; + file gz; + + /* open gzip file */ + gz.name = name; + gz.fd = open(name, O_RDWR, 0); + if (gz.fd == -1) bye("cannot open ", name); + gz.buf = malloc(CHUNK); + if (gz.buf == NULL) bye("out of memory", ""); + gz.size = LGCHUNK; + gz.left = 0; + + /* skip gzip header */ + gzheader(&gz); + + /* prepare to decompress */ + window = malloc(DSIZE); + if (window == NULL) bye("out of memory", ""); + strm->zalloc = Z_NULL; + strm->zfree = Z_NULL; + strm->opaque = Z_NULL; + ret = inflateInit2(strm, -15); + if (ret != Z_OK) bye("out of memory", " or library mismatch"); + + /* decompress the deflate stream, saving append information */ + lastbit = 0; + lastoff = lseek(gz.fd, 0L, SEEK_CUR) - gz.left; + left = 0; + strm->avail_in = gz.left; + strm->next_in = gz.next; + crc = crc32(0L, Z_NULL, 0); + have = full = 0; + do { + /* if needed, get more input */ + if (strm->avail_in == 0) { + readmore(&gz); + strm->avail_in = gz.left; + strm->next_in = gz.next; + } + + /* set up output to next available section of sliding window */ + strm->avail_out = DSIZE - have; + strm->next_out = window + have; + + /* inflate and check for errors */ + ret = inflate(strm, Z_BLOCK); + if (ret == Z_STREAM_ERROR) bye("internal stream error!", ""); + if (ret == Z_MEM_ERROR) bye("out of memory", ""); + if (ret == Z_DATA_ERROR) + bye("invalid compressed data--format violated in", name); + + /* update crc and sliding window pointer */ + crc = crc32(crc, window + have, DSIZE - have - strm->avail_out); + if (strm->avail_out) + have = DSIZE - strm->avail_out; + else { + have = 0; + full = 1; + } + + /* process end of block */ + if (strm->data_type & 128) { + if (strm->data_type & 64) + left = strm->data_type & 0x1f; + else { + lastbit = strm->data_type & 0x1f; + lastoff = lseek(gz.fd, 0L, SEEK_CUR) - strm->avail_in; + 
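/* not the last block: remember this boundary -- the final values of lastbit and lastoff locate the last-block bit that is cleared below (descriptive note, not part of the original source) */ +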
} + } + } while (ret != Z_STREAM_END); + inflateEnd(strm); + gz.left = strm->avail_in; + gz.next = strm->next_in; + + /* save the location of the end of the compressed data */ + end = lseek(gz.fd, 0L, SEEK_CUR) - gz.left; + + /* check gzip trailer and save total for deflate */ + if (crc != read4(&gz)) + bye("invalid compressed data--crc mismatch in ", name); + tot = strm->total_out; + if ((tot & 0xffffffffUL) != read4(&gz)) + bye("invalid compressed data--length mismatch in", name); + + /* if not at end of file, warn */ + if (gz.left || readin(&gz)) + fprintf(stderr, + "gzappend warning: junk at end of gzip file overwritten\n"); + + /* clear last block bit */ + lseek(gz.fd, lastoff - (lastbit != 0), SEEK_SET); + if (read(gz.fd, gz.buf, 1) != 1) bye("reading after seek on ", name); + *gz.buf = (unsigned char)(*gz.buf ^ (1 << ((8 - lastbit) & 7))); + lseek(gz.fd, -1L, SEEK_CUR); + if (write(gz.fd, gz.buf, 1) != 1) bye("writing after seek to ", name); + + /* if window wrapped, build dictionary from window by rotating */ + if (full) { + rotate(window, DSIZE, have); + have = DSIZE; + } + + /* set up deflate stream with window, crc, total_in, and leftover bits */ + ret = deflateInit2(strm, level, Z_DEFLATED, -15, 8, Z_DEFAULT_STRATEGY); + if (ret != Z_OK) bye("out of memory", ""); + deflateSetDictionary(strm, window, have); + strm->adler = crc; + strm->total_in = tot; + if (left) { + lseek(gz.fd, --end, SEEK_SET); + if (read(gz.fd, gz.buf, 1) != 1) bye("reading after seek on ", name); + deflatePrime(strm, 8 - left, *gz.buf); + } + lseek(gz.fd, end, SEEK_SET); + + /* clean up and return */ + free(window); + free(gz.buf); + return gz.fd; +} + +/* append file "name" to gzip file gd using deflate stream strm -- if last + is true, then finish off the deflate stream at the end */ +local void gztack(char *name, int gd, z_stream *strm, int last) +{ + int fd, len, ret; + unsigned left; + unsigned char *in, *out; + + /* open file to compress and append */ + fd = 0; + if (name != NULL) { + fd = open(name, O_RDONLY, 0); + if (fd == -1) + fprintf(stderr, "gzappend warning: %s not found, skipping ...\n", + name); + } + + /* allocate buffers */ + in = malloc(CHUNK); + out = malloc(CHUNK); + if (in == NULL || out == NULL) bye("out of memory", ""); + + /* compress input file and append to gzip file */ + do { + /* get more input */ + len = read(fd, in, CHUNK); + if (len == -1) { + fprintf(stderr, + "gzappend warning: error reading %s, skipping rest ...\n", + name); + len = 0; + } + strm->avail_in = (unsigned)len; + strm->next_in = in; + if (len) strm->adler = crc32(strm->adler, in, (unsigned)len); + + /* compress and write all available output */ + do { + strm->avail_out = CHUNK; + strm->next_out = out; + ret = deflate(strm, last && len == 0 ? 
Z_FINISH : Z_NO_FLUSH); + left = CHUNK - strm->avail_out; + while (left) { + len = write(gd, out + CHUNK - strm->avail_out - left, left); + if (len == -1) bye("writing gzip file", ""); + left -= (unsigned)len; + } + } while (strm->avail_out == 0 && ret != Z_STREAM_END); + } while (len != 0); + + /* write trailer after last entry */ + if (last) { + deflateEnd(strm); + out[0] = (unsigned char)(strm->adler); + out[1] = (unsigned char)(strm->adler >> 8); + out[2] = (unsigned char)(strm->adler >> 16); + out[3] = (unsigned char)(strm->adler >> 24); + out[4] = (unsigned char)(strm->total_in); + out[5] = (unsigned char)(strm->total_in >> 8); + out[6] = (unsigned char)(strm->total_in >> 16); + out[7] = (unsigned char)(strm->total_in >> 24); + len = 8; + do { + ret = write(gd, out + 8 - len, len); + if (ret == -1) bye("writing gzip file", ""); + len -= ret; + } while (len); + close(gd); + } + + /* clean up and return */ + free(out); + free(in); + if (fd > 0) close(fd); +} + +/* process the compression level option if present, scan the gzip file, and + append the specified files, or append the data from stdin if no other file + names are provided on the command line -- the gzip file must be writable + and seekable */ +int main(int argc, char **argv) +{ + int gd, level; + z_stream strm; + + /* ignore command name */ + argc--; argv++; + + /* provide usage if no arguments */ + if (*argv == NULL) { + printf( + "gzappend 1.2 (11 Oct 2012) Copyright (C) 2003, 2012 Mark Adler\n" + ); + printf( + "usage: gzappend [-level] file.gz [ addthis [ andthis ... ]]\n"); + return 0; + } + + /* set compression level */ + level = Z_DEFAULT_COMPRESSION; + if (argv[0][0] == '-') { + if (argv[0][1] < '0' || argv[0][1] > '9' || argv[0][2] != 0) + bye("invalid compression level", ""); + level = argv[0][1] - '0'; + if (*++argv == NULL) bye("no gzip file name after options", ""); + } + + /* prepare to append to gzip file */ + gd = gzscan(*argv++, &strm, level); + + /* append files on command line, or from stdin if none */ + if (*argv == NULL) + gztack(NULL, gd, &strm, 1); + else + do { + gztack(*argv, gd, &strm, argv[1] == NULL); + } while (*++argv != NULL); + return 0; +} ADDED compat/zlib/examples/gzjoin.c Index: compat/zlib/examples/gzjoin.c ================================================================== --- compat/zlib/examples/gzjoin.c +++ compat/zlib/examples/gzjoin.c @@ -0,0 +1,449 @@ +/* gzjoin -- command to join gzip files into one gzip file + + Copyright (C) 2004, 2005, 2012 Mark Adler, all rights reserved + version 1.2, 14 Aug 2012 + + This software is provided 'as-is', without any express or implied + warranty. In no event will the author be held liable for any damages + arising from the use of this software. + + Permission is granted to anyone to use this software for any purpose, + including commercial applications, and to alter it and redistribute it + freely, subject to the following restrictions: + + 1. The origin of this software must not be misrepresented; you must not + claim that you wrote the original software. If you use this software + in a product, an acknowledgment in the product documentation would be + appreciated but is not required. + 2. Altered source versions must be plainly marked as such, and must not be + misrepresented as being the original software. + 3. This notice may not be removed or altered from any source distribution. 
+ + Mark Adler madler@alumni.caltech.edu + */ + +/* + * Change history: + * + * 1.0 11 Dec 2004 - First version + * 1.1 12 Jun 2005 - Changed ssize_t to long for portability + * 1.2 14 Aug 2012 - Clean up for z_const usage + */ + +/* + gzjoin takes one or more gzip files on the command line and writes out a + single gzip file that will uncompress to the concatenation of the + uncompressed data from the individual gzip files. gzjoin does this without + having to recompress any of the data and without having to calculate a new + crc32 for the concatenated uncompressed data. gzjoin does however have to + decompress all of the input data in order to find the bits in the compressed + data that need to be modified to concatenate the streams. + + gzjoin does not do an integrity check on the input gzip files other than + checking the gzip header and decompressing the compressed data. They are + otherwise assumed to be complete and correct. + + Each joint between gzip files removes at least 18 bytes of previous trailer + and subsequent header, and inserts an average of about three bytes to the + compressed data in order to connect the streams. The output gzip file + has a minimal ten-byte gzip header with no file name or modification time. + + This program was written to illustrate the use of the Z_BLOCK option of + inflate() and the crc32_combine() function. gzjoin will not compile with + versions of zlib earlier than 1.2.3. + */ + +#include /* fputs(), fprintf(), fwrite(), putc() */ +#include /* exit(), malloc(), free() */ +#include /* open() */ +#include /* close(), read(), lseek() */ +#include "zlib.h" + /* crc32(), crc32_combine(), inflateInit2(), inflate(), inflateEnd() */ + +#define local static + +/* exit with an error (return a value to allow use in an expression) */ +local int bail(char *why1, char *why2) +{ + fprintf(stderr, "gzjoin error: %s%s, output incomplete\n", why1, why2); + exit(1); + return 0; +} + +/* -- simple buffered file input with access to the buffer -- */ + +#define CHUNK 32768 /* must be a power of two and fit in unsigned */ + +/* bin buffered input file type */ +typedef struct { + char *name; /* name of file for error messages */ + int fd; /* file descriptor */ + unsigned left; /* bytes remaining at next */ + unsigned char *next; /* next byte to read */ + unsigned char *buf; /* allocated buffer of length CHUNK */ +} bin; + +/* close a buffered file and free allocated memory */ +local void bclose(bin *in) +{ + if (in != NULL) { + if (in->fd != -1) + close(in->fd); + if (in->buf != NULL) + free(in->buf); + free(in); + } +} + +/* open a buffered file for input, return a pointer to type bin, or NULL on + failure */ +local bin *bopen(char *name) +{ + bin *in; + + in = malloc(sizeof(bin)); + if (in == NULL) + return NULL; + in->buf = malloc(CHUNK); + in->fd = open(name, O_RDONLY, 0); + if (in->buf == NULL || in->fd == -1) { + bclose(in); + return NULL; + } + in->left = 0; + in->next = in->buf; + in->name = name; + return in; +} + +/* load buffer from file, return -1 on read error, 0 or 1 on success, with + 1 indicating that end-of-file was reached */ +local int bload(bin *in) +{ + long len; + + if (in == NULL) + return -1; + if (in->left != 0) + return 0; + in->next = in->buf; + do { + len = (long)read(in->fd, in->buf + in->left, CHUNK - in->left); + if (len < 0) + return -1; + in->left += (unsigned)len; + } while (len != 0 && in->left < CHUNK); + return len == 0 ? 1 : 0; +} + +/* get a byte from the file, bail if end of file */ +#define bget(in) (in->left ? 
0 : bload(in), \ + in->left ? (in->left--, *(in->next)++) : \ + bail("unexpected end of file on ", in->name)) + +/* get a four-byte little-endian unsigned integer from file */ +local unsigned long bget4(bin *in) +{ + unsigned long val; + + val = bget(in); + val += (unsigned long)(bget(in)) << 8; + val += (unsigned long)(bget(in)) << 16; + val += (unsigned long)(bget(in)) << 24; + return val; +} + +/* skip bytes in file */ +local void bskip(bin *in, unsigned skip) +{ + /* check pointer */ + if (in == NULL) + return; + + /* easy case -- skip bytes in buffer */ + if (skip <= in->left) { + in->left -= skip; + in->next += skip; + return; + } + + /* skip what's in buffer, discard buffer contents */ + skip -= in->left; + in->left = 0; + + /* seek past multiples of CHUNK bytes */ + if (skip > CHUNK) { + unsigned left; + + left = skip & (CHUNK - 1); + if (left == 0) { + /* exact number of chunks: seek all the way minus one byte to check + for end-of-file with a read */ + lseek(in->fd, skip - 1, SEEK_CUR); + if (read(in->fd, in->buf, 1) != 1) + bail("unexpected end of file on ", in->name); + return; + } + + /* skip the integral chunks, update skip with remainder */ + lseek(in->fd, skip - left, SEEK_CUR); + skip = left; + } + + /* read more input and skip remainder */ + bload(in); + if (skip > in->left) + bail("unexpected end of file on ", in->name); + in->left -= skip; + in->next += skip; +} + +/* -- end of buffered input functions -- */ + +/* skip the gzip header from file in */ +local void gzhead(bin *in) +{ + int flags; + + /* verify gzip magic header and compression method */ + if (bget(in) != 0x1f || bget(in) != 0x8b || bget(in) != 8) + bail(in->name, " is not a valid gzip file"); + + /* get and verify flags */ + flags = bget(in); + if ((flags & 0xe0) != 0) + bail("unknown reserved bits set in ", in->name); + + /* skip modification time, extra flags, and os */ + bskip(in, 6); + + /* skip extra field if present */ + if (flags & 4) { + unsigned len; + + len = bget(in); + len += (unsigned)(bget(in)) << 8; + bskip(in, len); + } + + /* skip file name if present */ + if (flags & 8) + while (bget(in) != 0) + ; + + /* skip comment if present */ + if (flags & 16) + while (bget(in) != 0) + ; + + /* skip header crc if present */ + if (flags & 2) + bskip(in, 2); +} + +/* write a four-byte little-endian unsigned integer to out */ +local void put4(unsigned long val, FILE *out) +{ + putc(val & 0xff, out); + putc((val >> 8) & 0xff, out); + putc((val >> 16) & 0xff, out); + putc((val >> 24) & 0xff, out); +} + +/* Load up zlib stream from buffered input, bail if end of file */ +local void zpull(z_streamp strm, bin *in) +{ + if (in->left == 0) + bload(in); + if (in->left == 0) + bail("unexpected end of file on ", in->name); + strm->avail_in = in->left; + strm->next_in = in->next; +} + +/* Write header for gzip file to out and initialize trailer. */ +local void gzinit(unsigned long *crc, unsigned long *tot, FILE *out) +{ + fwrite("\x1f\x8b\x08\0\0\0\0\0\0\xff", 1, 10, out); + *crc = crc32(0L, Z_NULL, 0); + *tot = 0; +} + +/* Copy the compressed data from name, zeroing the last block bit of the last + block if clr is true, and adding empty blocks as needed to get to a byte + boundary. If clr is false, then the last block becomes the last block of + the output, and the gzip trailer is written. crc and tot maintains the + crc and length (modulo 2^32) of the output for the trailer. The resulting + gzip file is written to out. 
gzinit() must be called before the first call + of gzcopy() to write the gzip header and to initialize crc and tot. */ +local void gzcopy(char *name, int clr, unsigned long *crc, unsigned long *tot, + FILE *out) +{ + int ret; /* return value from zlib functions */ + int pos; /* where the "last block" bit is in byte */ + int last; /* true if processing the last block */ + bin *in; /* buffered input file */ + unsigned char *start; /* start of compressed data in buffer */ + unsigned char *junk; /* buffer for uncompressed data -- discarded */ + z_off_t len; /* length of uncompressed data (support > 4 GB) */ + z_stream strm; /* zlib inflate stream */ + + /* open gzip file and skip header */ + in = bopen(name); + if (in == NULL) + bail("could not open ", name); + gzhead(in); + + /* allocate buffer for uncompressed data and initialize raw inflate + stream */ + junk = malloc(CHUNK); + strm.zalloc = Z_NULL; + strm.zfree = Z_NULL; + strm.opaque = Z_NULL; + strm.avail_in = 0; + strm.next_in = Z_NULL; + ret = inflateInit2(&strm, -15); + if (junk == NULL || ret != Z_OK) + bail("out of memory", ""); + + /* inflate and copy compressed data, clear last-block bit if requested */ + len = 0; + zpull(&strm, in); + start = in->next; + last = start[0] & 1; + if (last && clr) + start[0] &= ~1; + strm.avail_out = 0; + for (;;) { + /* if input used and output done, write used input and get more */ + if (strm.avail_in == 0 && strm.avail_out != 0) { + fwrite(start, 1, strm.next_in - start, out); + start = in->buf; + in->left = 0; + zpull(&strm, in); + } + + /* decompress -- return early when end-of-block reached */ + strm.avail_out = CHUNK; + strm.next_out = junk; + ret = inflate(&strm, Z_BLOCK); + switch (ret) { + case Z_MEM_ERROR: + bail("out of memory", ""); + case Z_DATA_ERROR: + bail("invalid compressed data in ", in->name); + } + + /* update length of uncompressed data */ + len += CHUNK - strm.avail_out; + + /* check for block boundary (only get this when block copied out) */ + if (strm.data_type & 128) { + /* if that was the last block, then done */ + if (last) + break; + + /* number of unused bits in last byte */ + pos = strm.data_type & 7; + + /* find the next last-block bit */ + if (pos != 0) { + /* next last-block bit is in last used byte */ + pos = 0x100 >> pos; + last = strm.next_in[-1] & pos; + if (last && clr) + in->buf[strm.next_in - in->buf - 1] &= ~pos; + } + else { + /* next last-block bit is in next unused byte */ + if (strm.avail_in == 0) { + /* don't have that byte yet -- get it */ + fwrite(start, 1, strm.next_in - start, out); + start = in->buf; + in->left = 0; + zpull(&strm, in); + } + last = strm.next_in[0] & 1; + if (last && clr) + in->buf[strm.next_in - in->buf] &= ~1; + } + } + } + + /* update buffer with unused input */ + in->left = strm.avail_in; + in->next = in->buf + (strm.next_in - in->buf); + + /* copy used input, write empty blocks to get to byte boundary */ + pos = strm.data_type & 7; + fwrite(start, 1, in->next - start - 1, out); + last = in->next[-1]; + if (pos == 0 || !clr) + /* already at byte boundary, or last file: write last byte */ + putc(last, out); + else { + /* append empty blocks to last byte */ + last &= ((0x100 >> pos) - 1); /* assure unused bits are zero */ + if (pos & 1) { + /* odd -- append an empty stored block */ + putc(last, out); + if (pos == 1) + putc(0, out); /* two more bits in block header */ + fwrite("\0\0\xff\xff", 1, 4, out); + } + else { + /* even -- append 1, 2, or 3 empty fixed blocks */ + switch (pos) { + case 6: + putc(last | 8, out); + last = 0; + 
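/* fall through */ +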
case 4: + putc(last | 0x20, out); + last = 0; + case 2: + putc(last | 0x80, out); + putc(0, out); + } + } + } + + /* update crc and tot */ + *crc = crc32_combine(*crc, bget4(in), len); + *tot += (unsigned long)len; + + /* clean up */ + inflateEnd(&strm); + free(junk); + bclose(in); + + /* write trailer if this is the last gzip file */ + if (!clr) { + put4(*crc, out); + put4(*tot, out); + } +} + +/* join the gzip files on the command line, write result to stdout */ +int main(int argc, char **argv) +{ + unsigned long crc, tot; /* running crc and total uncompressed length */ + + /* skip command name */ + argc--; + argv++; + + /* show usage if no arguments */ + if (argc == 0) { + fputs("gzjoin usage: gzjoin f1.gz [f2.gz [f3.gz ...]] > fjoin.gz\n", + stderr); + return 0; + } + + /* join gzip files on command line and write to stdout */ + gzinit(&crc, &tot, stdout); + while (argc--) + gzcopy(*argv++, argc, &crc, &tot, stdout); + + /* done */ + return 0; +} ADDED compat/zlib/examples/gzlog.c Index: compat/zlib/examples/gzlog.c ================================================================== --- compat/zlib/examples/gzlog.c +++ compat/zlib/examples/gzlog.c @@ -0,0 +1,1059 @@ +/* + * gzlog.c + * Copyright (C) 2004, 2008, 2012 Mark Adler, all rights reserved + * For conditions of distribution and use, see copyright notice in gzlog.h + * version 2.2, 14 Aug 2012 + */ + +/* + gzlog provides a mechanism for frequently appending short strings to a gzip + file that is efficient both in execution time and compression ratio. The + strategy is to write the short strings in an uncompressed form to the end of + the gzip file, only compressing when the amount of uncompressed data has + reached a given threshold. + + gzlog also provides protection against interruptions in the process due to + system crashes. The status of the operation is recorded in an extra field + in the gzip file, and is only updated once the gzip file is brought to a + valid state. The last data to be appended or compressed is saved in an + auxiliary file, so that if the operation is interrupted, it can be completed + the next time an append operation is attempted. + + gzlog maintains another auxiliary file with the last 32K of data from the + compressed portion, which is preloaded for the compression of the subsequent + data. This minimizes the impact to the compression ratio of appending. + */ + +/* + Operations Concept: + + Files (log name "foo"): + foo.gz -- gzip file with the complete log + foo.add -- last message to append or last data to compress + foo.dict -- dictionary of the last 32K of data for next compression + foo.temp -- temporary dictionary file for compression after this one + foo.lock -- lock file for reading and writing the other files + foo.repairs -- log file for log file recovery operations (not compressed) + + gzip file structure: + - fixed-length (no file name) header with extra field (see below) + - compressed data ending initially with empty stored block + - uncompressed data filling out originally empty stored block and + subsequent stored blocks as needed (16K max each) + - gzip trailer + - no junk at end (no other gzip streams) + + When appending data, the information in the first three items above plus the + foo.add file are sufficient to recover an interrupted append operation. The + extra field has the necessary information to restore the start of the last + stored block and determine where to append the data in the foo.add file, as + well as the crc and length of the gzip data before the append operation. 
+ + The foo.add file is created before the gzip file is marked for append, and + deleted after the gzip file is marked as complete. So if the append + operation is interrupted, the data to add will still be there. If due to + some external force, the foo.add file gets deleted between when the append + operation was interrupted and when recovery is attempted, the gzip file will + still be restored, but without the appended data. + + When compressing data, the information in the first two items above plus the + foo.add file are sufficient to recover an interrupted compress operation. + The extra field has the necessary information to find the end of the + compressed data, and contains both the crc and length of just the compressed + data and of the complete set of data including the contents of the foo.add + file. + + Again, the foo.add file is maintained during the compress operation in case + of an interruption. If in the unlikely event the foo.add file with the data + to be compressed is missing due to some external force, a gzip file with + just the previous compressed data will be reconstructed. In this case, all + of the data that was to be compressed is lost (approximately one megabyte). + This will not occur if all that happened was an interruption of the compress + operation. + + The third state that is marked is the replacement of the old dictionary with + the new dictionary after a compress operation. Once compression is + complete, the gzip file is marked as being in the replace state. This + completes the gzip file, so an interrupt after being so marked does not + result in recompression. Then the dictionary file is replaced, and the gzip + file is marked as completed. This state prevents the possibility of + restarting compression with the wrong dictionary file. + + All three operations are wrapped by a lock/unlock procedure. In order to + gain exclusive access to the log files, first a foo.lock file must be + exclusively created. When all operations are complete, the lock is + released by deleting the foo.lock file. If when attempting to create the + lock file, it already exists and the modify time of the lock file is more + than five minutes old (set by the PATIENCE define below), then the old + lock file is considered stale and deleted, and the exclusive creation of + the lock file is retried. To assure that there are no false assessments + of the staleness of the lock file, the operations periodically touch the + lock file to update the modified date. + + Following is the definition of the extra field with all of the information + required to enable the above append and compress operations and their + recovery if interrupted. Multi-byte values are stored little endian + (consistent with the gzip format). File pointers are eight bytes long. + The crc's and lengths for the gzip trailer are four bytes long. (Note that + the length at the end of a gzip file is used for error checking only, and + for large files is actually the length modulo 2^32.) The stored block + length is two bytes long. The gzip extra field two-byte identification is + "ap" for append. It is assumed that writing the extra field to the file is + an "atomic" operation. That is, either all of the extra field is written + to the file, or none of it is, if the operation is interrupted right at the + point of updating the extra field. 
This is a reasonable assumption, since + the extra field is within the first 52 bytes of the file, which is smaller + than any expected block size for a mass storage device (usually 512 bytes or + larger). + + Extra field (35 bytes): + - Pointer to first stored block length -- this points to the two-byte length + of the first stored block, which is followed by the two-byte, one's + complement of that length. The stored block length is preceded by the + three-bit header of the stored block, which is the actual start of the + stored block in the deflate format. See the bit offset field below. + - Pointer to the last stored block length. This is the same as above, but + for the last stored block of the uncompressed data in the gzip file. + Initially this is the same as the first stored block length pointer. + When the stored block gets to 16K (see the MAX_STORE define), then a new + stored block as added, at which point the last stored block length pointer + is different from the first stored block length pointer. When they are + different, the first bit of the last stored block header is eight bits, or + one byte back from the block length. + - Compressed data crc and length. This is the crc and length of the data + that is in the compressed portion of the deflate stream. These are used + only in the event that the foo.add file containing the data to compress is + lost after a compress operation is interrupted. + - Total data crc and length. This is the crc and length of all of the data + stored in the gzip file, compressed and uncompressed. It is used to + reconstruct the gzip trailer when compressing, as well as when recovering + interrupted operations. + - Final stored block length. This is used to quickly find where to append, + and allows the restoration of the original final stored block state when + an append operation is interrupted. + - First stored block start as the number of bits back from the final stored + block first length byte. This value is in the range of 3..10, and is + stored as the low three bits of the final byte of the extra field after + subtracting three (0..7). This allows the last-block bit of the stored + block header to be updated when a new stored block is added, for the case + when the first stored block and the last stored block are the same. (When + they are different, the numbers of bits back is known to be eight.) This + also allows for new compressed data to be appended to the old compressed + data in the compress operation, overwriting the previous first stored + block, or for the compressed data to be terminated and a valid gzip file + reconstructed on the off chance that a compression operation was + interrupted and the data to compress in the foo.add file was deleted. + - The operation in process. This is the next two bits in the last byte (the + bits under the mask 0x18). The are interpreted as 0: nothing in process, + 1: append in process, 2: compress in process, 3: replace in process. + - The top three bits of the last byte in the extra field are reserved and + are currently set to zero. + + Main procedure: + - Exclusively create the foo.lock file using the O_CREAT and O_EXCL modes of + the system open() call. If the modify time of an existing lock file is + more than PATIENCE seconds old, then the lock file is deleted and the + exclusive create is retried. + - Load the extra field from the foo.gz file, and see if an operation was in + progress but not completed. If so, apply the recovery procedure below. 
+ - Perform the append procedure with the provided data. + - If the uncompressed data in the foo.gz file is 1MB or more, apply the + compress procedure. + - Delete the foo.lock file. + + Append procedure: + - Put what to append in the foo.add file so that the operation can be + restarted if this procedure is interrupted. + - Mark the foo.gz extra field with the append operation in progress. + + Restore the original last-block bit and stored block length of the last + stored block from the information in the extra field, in case a previous + append operation was interrupted. + - Append the provided data to the last stored block, creating new stored + blocks as needed and updating the stored blocks last-block bits and + lengths. + - Update the crc and length with the new data, and write the gzip trailer. + - Write over the extra field (with a single write operation) with the new + pointers, lengths, and crc's, and mark the gzip file as not in process. + Though there is still a foo.add file, it will be ignored since nothing + is in process. If a foo.add file is leftover from a previously + completed operation, it is truncated when writing new data to it. + - Delete the foo.add file. + + Compress and replace procedures: + - Read all of the uncompressed data in the stored blocks in foo.gz and write + it to foo.add. Also write foo.temp with the last 32K of that data to + provide a dictionary for the next invocation of this procedure. + - Rewrite the extra field marking foo.gz with a compression in process. + * If there is no data provided to compress (due to a missing foo.add file + when recovering), reconstruct and truncate the foo.gz file to contain + only the previous compressed data and proceed to the step after the next + one. Otherwise ... + - Compress the data with the dictionary in foo.dict, and write to the + foo.gz file starting at the bit immediately following the last previously + compressed block. If there is no foo.dict, proceed anyway with the + compression at slightly reduced efficiency. (For the foo.dict file to be + missing requires some external failure beyond simply the interruption of + a compress operation.) During this process, the foo.lock file is + periodically touched to assure that that file is not considered stale by + another process before we're done. The deflation is terminated with a + non-last empty static block (10 bits long), that is then located and + written over by a last-bit-set empty stored block. + - Append the crc and length of the data in the gzip file (previously + calculated during the append operations). + - Write over the extra field with the updated stored block offsets, bits + back, crc's, and lengths, and mark foo.gz as in process for a replacement + of the dictionary. + @ Delete the foo.add file. + - Replace foo.dict with foo.temp. + - Write over the extra field, marking foo.gz as complete. + + Recovery procedure: + - If not a replace recovery, read in the foo.add file, and provide that data + to the appropriate recovery below. If there is no foo.add file, provide + a zero data length to the recovery. In that case, the append recovery + restores the foo.gz to the previous compressed + uncompressed data state. + For the the compress recovery, a missing foo.add file results in foo.gz + being restored to the previous compressed-only data state. 
+   - Append recovery:
+     - Pick up append at + step above
+   - Compress recovery:
+     - Pick up compress at * step above
+   - Replace recovery:
+     - Pick up compress at @ step above
+   - Log the repair with a date stamp in foo.repairs
+ */
+
+#include <sys/types.h>
+#include <stdio.h>      /* rename, fopen, fprintf, fclose */
+#include <stdlib.h>     /* malloc, free */
+#include <string.h>     /* strlen, strrchr, strcpy, strncpy, strcmp */
+#include <fcntl.h>      /* open */
+#include <unistd.h>     /* lseek, read, write, close, unlink, sleep, */
+                        /* ftruncate, fsync */
+#include <errno.h>      /* errno */
+#include <time.h>       /* time, ctime */
+#include <sys/stat.h>   /* stat */
+#include <sys/time.h>   /* utimes */
+#include "zlib.h"       /* crc32 */
+
+#include "gzlog.h"      /* header for external access */
+
+#define local static
+typedef unsigned int uint;
+typedef unsigned long ulong;
+
+/* Macro for debugging to deterministically force recovery operations */
+#ifdef DEBUG
+  #include <setjmp.h>           /* longjmp */
+  jmp_buf gzlog_jump;           /* where to go back to */
+  int gzlog_bail = 0;           /* which point to bail at (1..8) */
+  int gzlog_count = -1;         /* number of times through to wait */
+# define BAIL(n) do { if (n == gzlog_bail && gzlog_count-- == 0) \
+                      longjmp(gzlog_jump, gzlog_bail); } while (0)
+#else
+# define BAIL(n)
+#endif
+
+/* how old the lock file can be in seconds before considering it stale */
+#define PATIENCE 300
+
+/* maximum stored block size in Kbytes -- must be in 1..63 */
+#define MAX_STORE 16
+
+/* number of stored Kbytes to trigger compression (must be >= 32 to allow
+   dictionary construction, and <= 204 * MAX_STORE, in order for >> 10 to
+   discard the stored block headers contribution of five bytes each) */
+#define TRIGGER 1024
+
+/* size of a deflate dictionary (this cannot be changed) */
+#define DICT 32768U
+
+/* values for the operation (2 bits) */
+#define NO_OP 0
+#define APPEND_OP 1
+#define COMPRESS_OP 2
+#define REPLACE_OP 3
+
+/* macros to extract little-endian integers from an unsigned byte buffer */
+#define PULL2(p) ((p)[0]+((uint)((p)[1])<<8))
+#define PULL4(p) (PULL2(p)+((ulong)PULL2(p+2)<<16))
+#define PULL8(p) (PULL4(p)+((off_t)PULL4(p+4)<<32))
+
+/* macros to store integers into a byte buffer in little-endian order */
+#define PUT2(p,a) do {(p)[0]=a;(p)[1]=(a)>>8;} while(0)
+#define PUT4(p,a) do {PUT2(p,a);PUT2(p+2,a>>16);} while(0)
+#define PUT8(p,a) do {PUT4(p,a);PUT4(p+4,a>>32);} while(0)
+
+/* internal structure for log information */
+#define LOGID "\106\035\172"    /* should be three non-zero characters */
+struct log {
+    char id[4];     /* contains LOGID to detect inadvertent overwrites */
+    int fd;         /* file descriptor for .gz file, opened read/write */
+    char *path;     /* allocated path, e.g.
"/var/log/foo" or "foo" */ + char *end; /* end of path, for appending suffices such as ".gz" */ + off_t first; /* offset of first stored block first length byte */ + int back; /* location of first block id in bits back from first */ + uint stored; /* bytes currently in last stored block */ + off_t last; /* offset of last stored block first length byte */ + ulong ccrc; /* crc of compressed data */ + ulong clen; /* length (modulo 2^32) of compressed data */ + ulong tcrc; /* crc of total data */ + ulong tlen; /* length (modulo 2^32) of total data */ + time_t lock; /* last modify time of our lock file */ +}; + +/* gzip header for gzlog */ +local unsigned char log_gzhead[] = { + 0x1f, 0x8b, /* magic gzip id */ + 8, /* compression method is deflate */ + 4, /* there is an extra field (no file name) */ + 0, 0, 0, 0, /* no modification time provided */ + 0, 0xff, /* no extra flags, no OS specified */ + 39, 0, 'a', 'p', 35, 0 /* extra field with "ap" subfield */ + /* 35 is EXTRA, 39 is EXTRA + 4 */ +}; + +#define HEAD sizeof(log_gzhead) /* should be 16 */ + +/* initial gzip extra field content (52 == HEAD + EXTRA + 1) */ +local unsigned char log_gzext[] = { + 52, 0, 0, 0, 0, 0, 0, 0, /* offset of first stored block length */ + 52, 0, 0, 0, 0, 0, 0, 0, /* offset of last stored block length */ + 0, 0, 0, 0, 0, 0, 0, 0, /* compressed data crc and length */ + 0, 0, 0, 0, 0, 0, 0, 0, /* total data crc and length */ + 0, 0, /* final stored block data length */ + 5 /* op is NO_OP, last bit 8 bits back */ +}; + +#define EXTRA sizeof(log_gzext) /* should be 35 */ + +/* initial gzip data and trailer */ +local unsigned char log_gzbody[] = { + 1, 0, 0, 0xff, 0xff, /* empty stored block (last) */ + 0, 0, 0, 0, /* crc */ + 0, 0, 0, 0 /* uncompressed length */ +}; + +#define BODY sizeof(log_gzbody) + +/* Exclusively create foo.lock in order to negotiate exclusive access to the + foo.* files. If the modify time of an existing lock file is greater than + PATIENCE seconds in the past, then consider the lock file to have been + abandoned, delete it, and try the exclusive create again. Save the lock + file modify time for verification of ownership. Return 0 on success, or -1 + on failure, usually due to an access restriction or invalid path. Note that + if stat() or unlink() fails, it may be due to another process noticing the + abandoned lock file a smidge sooner and deleting it, so those are not + flagged as an error. */ +local int log_lock(struct log *log) +{ + int fd; + struct stat st; + + strcpy(log->end, ".lock"); + while ((fd = open(log->path, O_CREAT | O_EXCL, 0644)) < 0) { + if (errno != EEXIST) + return -1; + if (stat(log->path, &st) == 0 && time(NULL) - st.st_mtime > PATIENCE) { + unlink(log->path); + continue; + } + sleep(2); /* relinquish the CPU for two seconds while waiting */ + } + close(fd); + if (stat(log->path, &st) == 0) + log->lock = st.st_mtime; + return 0; +} + +/* Update the modify time of the lock file to now, in order to prevent another + task from thinking that the lock is stale. Save the lock file modify time + for verification of ownership. */ +local void log_touch(struct log *log) +{ + struct stat st; + + strcpy(log->end, ".lock"); + utimes(log->path, NULL); + if (stat(log->path, &st) == 0) + log->lock = st.st_mtime; +} + +/* Check the log file modify time against what is expected. Return true if + this is not our lock. If it is our lock, touch it to keep it. 
*/ +local int log_check(struct log *log) +{ + struct stat st; + + strcpy(log->end, ".lock"); + if (stat(log->path, &st) || st.st_mtime != log->lock) + return 1; + log_touch(log); + return 0; +} + +/* Unlock a previously acquired lock, but only if it's ours. */ +local void log_unlock(struct log *log) +{ + if (log_check(log)) + return; + strcpy(log->end, ".lock"); + unlink(log->path); + log->lock = 0; +} + +/* Check the gzip header and read in the extra field, filling in the values in + the log structure. Return op on success or -1 if the gzip header was not as + expected. op is the current operation in progress last written to the extra + field. This assumes that the gzip file has already been opened, with the + file descriptor log->fd. */ +local int log_head(struct log *log) +{ + int op; + unsigned char buf[HEAD + EXTRA]; + + if (lseek(log->fd, 0, SEEK_SET) < 0 || + read(log->fd, buf, HEAD + EXTRA) != HEAD + EXTRA || + memcmp(buf, log_gzhead, HEAD)) { + return -1; + } + log->first = PULL8(buf + HEAD); + log->last = PULL8(buf + HEAD + 8); + log->ccrc = PULL4(buf + HEAD + 16); + log->clen = PULL4(buf + HEAD + 20); + log->tcrc = PULL4(buf + HEAD + 24); + log->tlen = PULL4(buf + HEAD + 28); + log->stored = PULL2(buf + HEAD + 32); + log->back = 3 + (buf[HEAD + 34] & 7); + op = (buf[HEAD + 34] >> 3) & 3; + return op; +} + +/* Write over the extra field contents, marking the operation as op. Use fsync + to assure that the device is written to, and in the requested order. This + operation, and only this operation, is assumed to be atomic in order to + assure that the log is recoverable in the event of an interruption at any + point in the process. Return -1 if the write to foo.gz failed. */ +local int log_mark(struct log *log, int op) +{ + int ret; + unsigned char ext[EXTRA]; + + PUT8(ext, log->first); + PUT8(ext + 8, log->last); + PUT4(ext + 16, log->ccrc); + PUT4(ext + 20, log->clen); + PUT4(ext + 24, log->tcrc); + PUT4(ext + 28, log->tlen); + PUT2(ext + 32, log->stored); + ext[34] = log->back - 3 + (op << 3); + fsync(log->fd); + ret = lseek(log->fd, HEAD, SEEK_SET) < 0 || + write(log->fd, ext, EXTRA) != EXTRA ? -1 : 0; + fsync(log->fd); + return ret; +} + +/* Rewrite the last block header bits and subsequent zero bits to get to a byte + boundary, setting the last block bit if last is true, and then write the + remainder of the stored block header (length and one's complement). Leave + the file pointer after the end of the last stored block data. Return -1 if + there is a read or write failure on the foo.gz file */ +local int log_last(struct log *log, int last) +{ + int back, len, mask; + unsigned char buf[6]; + + /* determine the locations of the bytes and bits to modify */ + back = log->last == log->first ? log->back : 8; + len = back > 8 ? 2 : 1; /* bytes back from log->last */ + mask = 0x80 >> ((back - 1) & 7); /* mask for block last-bit */ + + /* get the byte to modify (one or two back) into buf[0] -- don't need to + read the byte if the last-bit is eight bits back, since in that case + the entire byte will be modified */ + buf[0] = 0; + if (back != 8 && (lseek(log->fd, log->last - len, SEEK_SET) < 0 || + read(log->fd, buf, 1) != 1)) + return -1; + + /* change the last-bit of the last stored block as requested -- note + that all bits above the last-bit are set to zero, per the type bits + of a stored block being 00 and per the convention that the bits to + bring the stream to a byte boundary are also zeros */ + buf[1] = 0; + buf[2 - len] = (*buf & (mask - 1)) + (last ? 
mask : 0); + + /* write the modified stored block header and lengths, move the file + pointer to after the last stored block data */ + PUT2(buf + 2, log->stored); + PUT2(buf + 4, log->stored ^ 0xffff); + return lseek(log->fd, log->last - len, SEEK_SET) < 0 || + write(log->fd, buf + 2 - len, len + 4) != len + 4 || + lseek(log->fd, log->stored, SEEK_CUR) < 0 ? -1 : 0; +} + +/* Append len bytes from data to the locked and open log file. len may be zero + if recovering and no .add file was found. In that case, the previous state + of the foo.gz file is restored. The data is appended uncompressed in + deflate stored blocks. Return -1 if there was an error reading or writing + the foo.gz file. */ +local int log_append(struct log *log, unsigned char *data, size_t len) +{ + uint put; + off_t end; + unsigned char buf[8]; + + /* set the last block last-bit and length, in case recovering an + interrupted append, then position the file pointer to append to the + block */ + if (log_last(log, 1)) + return -1; + + /* append, adding stored blocks and updating the offset of the last stored + block as needed, and update the total crc and length */ + while (len) { + /* append as much as we can to the last block */ + put = (MAX_STORE << 10) - log->stored; + if (put > len) + put = (uint)len; + if (put) { + if (write(log->fd, data, put) != put) + return -1; + BAIL(1); + log->tcrc = crc32(log->tcrc, data, put); + log->tlen += put; + log->stored += put; + data += put; + len -= put; + } + + /* if we need to, add a new empty stored block */ + if (len) { + /* mark current block as not last */ + if (log_last(log, 0)) + return -1; + + /* point to new, empty stored block */ + log->last += 4 + log->stored + 1; + log->stored = 0; + } + + /* mark last block as last, update its length */ + if (log_last(log, 1)) + return -1; + BAIL(2); + } + + /* write the new crc and length trailer, and truncate just in case (could + be recovering from partial append with a missing foo.add file) */ + PUT4(buf, log->tcrc); + PUT4(buf + 4, log->tlen); + if (write(log->fd, buf, 8) != 8 || + (end = lseek(log->fd, 0, SEEK_CUR)) < 0 || ftruncate(log->fd, end)) + return -1; + + /* write the extra field, marking the log file as done, delete .add file */ + if (log_mark(log, NO_OP)) + return -1; + strcpy(log->end, ".add"); + unlink(log->path); /* ignore error, since may not exist */ + return 0; +} + +/* Replace the foo.dict file with the foo.temp file. Also delete the foo.add + file, since the compress operation may have been interrupted before that was + done. Returns 1 if memory could not be allocated, or -1 if reading or + writing foo.gz fails, or if the rename fails for some reason other than + foo.temp not existing. foo.temp not existing is a permitted error, since + the replace operation may have been interrupted after the rename is done, + but before foo.gz is marked as complete. 
*/ +local int log_replace(struct log *log) +{ + int ret; + char *dest; + + /* delete foo.add file */ + strcpy(log->end, ".add"); + unlink(log->path); /* ignore error, since may not exist */ + BAIL(3); + + /* rename foo.name to foo.dict, replacing foo.dict if it exists */ + strcpy(log->end, ".dict"); + dest = malloc(strlen(log->path) + 1); + if (dest == NULL) + return -2; + strcpy(dest, log->path); + strcpy(log->end, ".temp"); + ret = rename(log->path, dest); + free(dest); + if (ret && errno != ENOENT) + return -1; + BAIL(4); + + /* mark the foo.gz file as done */ + return log_mark(log, NO_OP); +} + +/* Compress the len bytes at data and append the compressed data to the + foo.gz deflate data immediately after the previous compressed data. This + overwrites the previous uncompressed data, which was stored in foo.add + and is the data provided in data[0..len-1]. If this operation is + interrupted, it picks up at the start of this routine, with the foo.add + file read in again. If there is no data to compress (len == 0), then we + simply terminate the foo.gz file after the previously compressed data, + appending a final empty stored block and the gzip trailer. Return -1 if + reading or writing the log.gz file failed, or -2 if there was a memory + allocation failure. */ +local int log_compress(struct log *log, unsigned char *data, size_t len) +{ + int fd; + uint got, max; + ssize_t dict; + off_t end; + z_stream strm; + unsigned char buf[DICT]; + + /* compress and append compressed data */ + if (len) { + /* set up for deflate, allocating memory */ + strm.zalloc = Z_NULL; + strm.zfree = Z_NULL; + strm.opaque = Z_NULL; + if (deflateInit2(&strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED, -15, 8, + Z_DEFAULT_STRATEGY) != Z_OK) + return -2; + + /* read in dictionary (last 32K of data that was compressed) */ + strcpy(log->end, ".dict"); + fd = open(log->path, O_RDONLY, 0); + if (fd >= 0) { + dict = read(fd, buf, DICT); + close(fd); + if (dict < 0) { + deflateEnd(&strm); + return -1; + } + if (dict) + deflateSetDictionary(&strm, buf, (uint)dict); + } + log_touch(log); + + /* prime deflate with last bits of previous block, position write + pointer to write those bits and overwrite what follows */ + if (lseek(log->fd, log->first - (log->back > 8 ? 2 : 1), + SEEK_SET) < 0 || + read(log->fd, buf, 1) != 1 || lseek(log->fd, -1, SEEK_CUR) < 0) { + deflateEnd(&strm); + return -1; + } + deflatePrime(&strm, (8 - log->back) & 7, *buf); + + /* compress, finishing with a partial non-last empty static block */ + strm.next_in = data; + max = (((uint)0 - 1) >> 1) + 1; /* in case int smaller than size_t */ + do { + strm.avail_in = len > max ? max : (uint)len; + len -= strm.avail_in; + do { + strm.avail_out = DICT; + strm.next_out = buf; + deflate(&strm, len ? 
Z_NO_FLUSH : Z_PARTIAL_FLUSH); + got = DICT - strm.avail_out; + if (got && write(log->fd, buf, got) != got) { + deflateEnd(&strm); + return -1; + } + log_touch(log); + } while (strm.avail_out == 0); + } while (len); + deflateEnd(&strm); + BAIL(5); + + /* find start of empty static block -- scanning backwards the first one + bit is the second bit of the block, if the last byte is zero, then + we know the byte before that has a one in the top bit, since an + empty static block is ten bits long */ + if ((log->first = lseek(log->fd, -1, SEEK_CUR)) < 0 || + read(log->fd, buf, 1) != 1) + return -1; + log->first++; + if (*buf) { + log->back = 1; + while ((*buf & ((uint)1 << (8 - log->back++))) == 0) + ; /* guaranteed to terminate, since *buf != 0 */ + } + else + log->back = 10; + + /* update compressed crc and length */ + log->ccrc = log->tcrc; + log->clen = log->tlen; + } + else { + /* no data to compress -- fix up existing gzip stream */ + log->tcrc = log->ccrc; + log->tlen = log->clen; + } + + /* complete and truncate gzip stream */ + log->last = log->first; + log->stored = 0; + PUT4(buf, log->tcrc); + PUT4(buf + 4, log->tlen); + if (log_last(log, 1) || write(log->fd, buf, 8) != 8 || + (end = lseek(log->fd, 0, SEEK_CUR)) < 0 || ftruncate(log->fd, end)) + return -1; + BAIL(6); + + /* mark as being in the replace operation */ + if (log_mark(log, REPLACE_OP)) + return -1; + + /* execute the replace operation and mark the file as done */ + return log_replace(log); +} + +/* log a repair record to the .repairs file */ +local void log_log(struct log *log, int op, char *record) +{ + time_t now; + FILE *rec; + + now = time(NULL); + strcpy(log->end, ".repairs"); + rec = fopen(log->path, "a"); + if (rec == NULL) + return; + fprintf(rec, "%.24s %s recovery: %s\n", ctime(&now), op == APPEND_OP ? + "append" : (op == COMPRESS_OP ? "compress" : "replace"), record); + fclose(rec); + return; +} + +/* Recover the interrupted operation op. First read foo.add for recovering an + append or compress operation. Return -1 if there was an error reading or + writing foo.gz or reading an existing foo.add, or -2 if there was a memory + allocation failure. */ +local int log_recover(struct log *log, int op) +{ + int fd, ret = 0; + unsigned char *data = NULL; + size_t len = 0; + struct stat st; + + /* log recovery */ + log_log(log, op, "start"); + + /* load foo.add file if expected and present */ + if (op == APPEND_OP || op == COMPRESS_OP) { + strcpy(log->end, ".add"); + if (stat(log->path, &st) == 0 && st.st_size) { + len = (size_t)(st.st_size); + if ((off_t)len != st.st_size || + (data = malloc(st.st_size)) == NULL) { + log_log(log, op, "allocation failure"); + return -2; + } + if ((fd = open(log->path, O_RDONLY, 0)) < 0) { + log_log(log, op, ".add file read failure"); + return -1; + } + ret = (size_t)read(fd, data, len) != len; + close(fd); + if (ret) { + log_log(log, op, ".add file read failure"); + return -1; + } + log_log(log, op, "loaded .add file"); + } + else + log_log(log, op, "missing .add file!"); + } + + /* recover the interrupted operation */ + switch (op) { + case APPEND_OP: + ret = log_append(log, data, len); + break; + case COMPRESS_OP: + ret = log_compress(log, data, len); + break; + case REPLACE_OP: + ret = log_replace(log); + } + + /* log status */ + log_log(log, op, ret ? "failure" : "complete"); + + /* clean up */ + if (data != NULL) + free(data); + return ret; +} + +/* Close the foo.gz file (if open) and release the lock. 
*/ +local void log_close(struct log *log) +{ + if (log->fd >= 0) + close(log->fd); + log->fd = -1; + log_unlock(log); +} + +/* Open foo.gz, verify the header, and load the extra field contents, after + first creating the foo.lock file to gain exclusive access to the foo.* + files. If foo.gz does not exist or is empty, then write the initial header, + extra, and body content of an empty foo.gz log file. If there is an error + creating the lock file due to access restrictions, or an error reading or + writing the foo.gz file, or if the foo.gz file is not a proper log file for + this object (e.g. not a gzip file or does not contain the expected extra + field), then return true. If there is an error, the lock is released. + Otherwise, the lock is left in place. */ +local int log_open(struct log *log) +{ + int op; + + /* release open file resource if left over -- can occur if lock lost + between gzlog_open() and gzlog_write() */ + if (log->fd >= 0) + close(log->fd); + log->fd = -1; + + /* negotiate exclusive access */ + if (log_lock(log) < 0) + return -1; + + /* open the log file, foo.gz */ + strcpy(log->end, ".gz"); + log->fd = open(log->path, O_RDWR | O_CREAT, 0644); + if (log->fd < 0) { + log_close(log); + return -1; + } + + /* if new, initialize foo.gz with an empty log, delete old dictionary */ + if (lseek(log->fd, 0, SEEK_END) == 0) { + if (write(log->fd, log_gzhead, HEAD) != HEAD || + write(log->fd, log_gzext, EXTRA) != EXTRA || + write(log->fd, log_gzbody, BODY) != BODY) { + log_close(log); + return -1; + } + strcpy(log->end, ".dict"); + unlink(log->path); + } + + /* verify log file and load extra field information */ + if ((op = log_head(log)) < 0) { + log_close(log); + return -1; + } + + /* check for interrupted process and if so, recover */ + if (op != NO_OP && log_recover(log, op)) { + log_close(log); + return -1; + } + + /* touch the lock file to prevent another process from grabbing it */ + log_touch(log); + return 0; +} + +/* See gzlog.h for the description of the external methods below */ +gzlog *gzlog_open(char *path) +{ + size_t n; + struct log *log; + + /* check arguments */ + if (path == NULL || *path == 0) + return NULL; + + /* allocate and initialize log structure */ + log = malloc(sizeof(struct log)); + if (log == NULL) + return NULL; + strcpy(log->id, LOGID); + log->fd = -1; + + /* save path and end of path for name construction */ + n = strlen(path); + log->path = malloc(n + 9); /* allow for ".repairs" */ + if (log->path == NULL) { + free(log); + return NULL; + } + strcpy(log->path, path); + log->end = log->path + n; + + /* gain exclusive access and verify log file -- may perform a + recovery operation if needed */ + if (log_open(log)) { + free(log->path); + free(log); + return NULL; + } + + /* return pointer to log structure */ + return log; +} + +/* gzlog_compress() return values: + 0: all good + -1: file i/o error (usually access issue) + -2: memory allocation failure + -3: invalid log pointer argument */ +int gzlog_compress(gzlog *logd) +{ + int fd, ret; + uint block; + size_t len, next; + unsigned char *data, buf[5]; + struct log *log = logd; + + /* check arguments */ + if (log == NULL || strcmp(log->id, LOGID)) + return -3; + + /* see if we lost the lock -- if so get it again and reload the extra + field information (it probably changed), recover last operation if + necessary */ + if (log_check(log) && log_open(log)) + return -1; + + /* create space for uncompressed data */ + len = ((size_t)(log->last - log->first) & ~(((size_t)1 << 10) - 1)) + + log->stored; + if 
((data = malloc(len)) == NULL) + return -2; + + /* do statement here is just a cheap trick for error handling */ + do { + /* read in the uncompressed data */ + if (lseek(log->fd, log->first - 1, SEEK_SET) < 0) + break; + next = 0; + while (next < len) { + if (read(log->fd, buf, 5) != 5) + break; + block = PULL2(buf + 1); + if (next + block > len || + read(log->fd, (char *)data + next, block) != block) + break; + next += block; + } + if (lseek(log->fd, 0, SEEK_CUR) != log->last + 4 + log->stored) + break; + log_touch(log); + + /* write the uncompressed data to the .add file */ + strcpy(log->end, ".add"); + fd = open(log->path, O_WRONLY | O_CREAT | O_TRUNC, 0644); + if (fd < 0) + break; + ret = (size_t)write(fd, data, len) != len; + if (ret | close(fd)) + break; + log_touch(log); + + /* write the dictionary for the next compress to the .temp file */ + strcpy(log->end, ".temp"); + fd = open(log->path, O_WRONLY | O_CREAT | O_TRUNC, 0644); + if (fd < 0) + break; + next = DICT > len ? len : DICT; + ret = (size_t)write(fd, (char *)data + len - next, next) != next; + if (ret | close(fd)) + break; + log_touch(log); + + /* roll back to compressed data, mark the compress in progress */ + log->last = log->first; + log->stored = 0; + if (log_mark(log, COMPRESS_OP)) + break; + BAIL(7); + + /* compress and append the data (clears mark) */ + ret = log_compress(log, data, len); + free(data); + return ret; + } while (0); + + /* broke out of do above on i/o error */ + free(data); + return -1; +} + +/* gzlog_write() return values: + 0: all good + -1: file i/o error (usually access issue) + -2: memory allocation failure + -3: invalid log pointer argument */ +int gzlog_write(gzlog *logd, void *data, size_t len) +{ + int fd, ret; + struct log *log = logd; + + /* check arguments */ + if (log == NULL || strcmp(log->id, LOGID)) + return -3; + if (data == NULL || len <= 0) + return 0; + + /* see if we lost the lock -- if so get it again and reload the extra + field information (it probably changed), recover last operation if + necessary */ + if (log_check(log) && log_open(log)) + return -1; + + /* create and write .add file */ + strcpy(log->end, ".add"); + fd = open(log->path, O_WRONLY | O_CREAT | O_TRUNC, 0644); + if (fd < 0) + return -1; + ret = (size_t)write(fd, data, len) != len; + if (ret | close(fd)) + return -1; + log_touch(log); + + /* mark log file with append in progress */ + if (log_mark(log, APPEND_OP)) + return -1; + BAIL(8); + + /* append data (clears mark) */ + if (log_append(log, data, len)) + return -1; + + /* check to see if it's time to compress -- if not, then done */ + if (((log->last - log->first) >> 10) + (log->stored >> 10) < TRIGGER) + return 0; + + /* time to compress */ + return gzlog_compress(log); +} + +/* gzlog_close() return values: + 0: ok + -3: invalid log pointer argument */ +int gzlog_close(gzlog *logd) +{ + struct log *log = logd; + + /* check arguments */ + if (log == NULL || strcmp(log->id, LOGID)) + return -3; + + /* close the log file and release the lock */ + log_close(log); + + /* free structure and return */ + if (log->path != NULL) + free(log->path); + strcpy(log->id, "bad"); + free(log); + return 0; +} ADDED compat/zlib/examples/gzlog.h Index: compat/zlib/examples/gzlog.h ================================================================== --- compat/zlib/examples/gzlog.h +++ compat/zlib/examples/gzlog.h @@ -0,0 +1,91 @@ +/* gzlog.h + Copyright (C) 2004, 2008, 2012 Mark Adler, all rights reserved + version 2.2, 14 Aug 2012 + + This software is provided 'as-is', without any 
express or implied + warranty. In no event will the author be held liable for any damages + arising from the use of this software. + + Permission is granted to anyone to use this software for any purpose, + including commercial applications, and to alter it and redistribute it + freely, subject to the following restrictions: + + 1. The origin of this software must not be misrepresented; you must not + claim that you wrote the original software. If you use this software + in a product, an acknowledgment in the product documentation would be + appreciated but is not required. + 2. Altered source versions must be plainly marked as such, and must not be + misrepresented as being the original software. + 3. This notice may not be removed or altered from any source distribution. + + Mark Adler madler@alumni.caltech.edu + */ + +/* Version History: + 1.0 26 Nov 2004 First version + 2.0 25 Apr 2008 Complete redesign for recovery of interrupted operations + Interface changed slightly in that now path is a prefix + Compression now occurs as needed during gzlog_write() + gzlog_write() now always leaves the log file as valid gzip + 2.1 8 Jul 2012 Fix argument checks in gzlog_compress() and gzlog_write() + 2.2 14 Aug 2012 Clean up signed comparisons + */ + +/* + The gzlog object allows writing short messages to a gzipped log file, + opening the log file locked for small bursts, and then closing it. The log + object works by appending stored (uncompressed) data to the gzip file until + 1 MB has been accumulated. At that time, the stored data is compressed, and + replaces the uncompressed data in the file. The log file is truncated to + its new size at that time. After each write operation, the log file is a + valid gzip file that can decompressed to recover what was written. + + The gzlog operations can be interupted at any point due to an application or + system crash, and the log file will be recovered the next time the log is + opened with gzlog_open(). + */ + +#ifndef GZLOG_H +#define GZLOG_H + +/* gzlog object type */ +typedef void gzlog; + +/* Open a gzlog object, creating the log file if it does not exist. Return + NULL on error. Note that gzlog_open() could take a while to complete if it + has to wait to verify that a lock is stale (possibly for five minutes), or + if there is significant contention with other instantiations of this object + when locking the resource. path is the prefix of the file names created by + this object. If path is "foo", then the log file will be "foo.gz", and + other auxiliary files will be created and destroyed during the process: + "foo.dict" for a compression dictionary, "foo.temp" for a temporary (next) + dictionary, "foo.add" for data being added or compressed, "foo.lock" for the + lock file, and "foo.repairs" to log recovery operations performed due to + interrupted gzlog operations. A gzlog_open() followed by a gzlog_close() + will recover a previously interrupted operation, if any. */ +gzlog *gzlog_open(char *path); + +/* Write to a gzlog object. Return zero on success, -1 if there is a file i/o + error on any of the gzlog files (this should not happen if gzlog_open() + succeeded, unless the device has run out of space or leftover auxiliary + files have permissions or ownership that prevent their use), -2 if there is + a memory allocation failure, or -3 if the log argument is invalid (e.g. if + it was not created by gzlog_open()). 
This function will write data to the + file uncompressed, until 1 MB has been accumulated, at which time that data + will be compressed. The log file will be a valid gzip file upon successful + return. */ +int gzlog_write(gzlog *log, void *data, size_t len); + +/* Force compression of any uncompressed data in the log. This should be used + sparingly, if at all. The main application would be when a log file will + not be appended to again. If this is used to compress frequently while + appending, it will both significantly increase the execution time and + reduce the compression ratio. The return codes are the same as for + gzlog_write(). */ +int gzlog_compress(gzlog *log); + +/* Close a gzlog object. Return zero on success, -3 if the log argument is + invalid. The log object is freed, and so cannot be referenced again. */ +int gzlog_close(gzlog *log); + +#endif ADDED compat/zlib/examples/zlib_how.html Index: compat/zlib/examples/zlib_how.html ================================================================== --- compat/zlib/examples/zlib_how.html +++ compat/zlib/examples/zlib_how.html @@ -0,0 +1,545 @@ + + + + +zlib Usage Example + + + +
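A minimal sketch of how the gzlog interface described above might be driven, assuming the example is compiled together with gzlog.c and linked against zlib; the path prefix "mylog" and the message text are made up for illustration:

    #include <stdio.h>
    #include <string.h>
    #include "gzlog.h"

    int main(void)
    {
        char path[] = "mylog";                /* hypothetical path prefix */
        char msg[] = "something happened\n";  /* hypothetical log entry */
        gzlog *log;

        log = gzlog_open(path);     /* creates/locks mylog.gz, mylog.lock, ... */
        if (log == NULL) {
            fputs("could not open or lock the mylog.* files\n", stderr);
            return 1;
        }
        if (gzlog_write(log, msg, strlen(msg)) != 0)  /* append, compress if due */
            fputs("appending to mylog.gz failed\n", stderr);
        gzlog_close(log);           /* release the lock */
        return 0;
    }

Per the description above, mylog.gz remains a valid gzip file after each successful gzlog_write(), so it can be read back with ordinary gzip tools at any time.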

zlib Usage Example

+We often get questions about how the deflate() and inflate() functions should be used. +Users wonder when they should provide more input, when they should use more output, +what to do with a Z_BUF_ERROR, how to make sure the process terminates properly, and +so on. So for those who have read zlib.h (a few times), and +would like further edification, below is an annotated example in C of simple routines to compress and decompress +from an input file to an output file using deflate() and inflate() respectively. The +annotations are interspersed between lines of the code. So please read between the lines. +We hope this helps explain some of the intricacies of zlib. +

+Without further ado, here is the program zpipe.c:


+/* zpipe.c: example of proper use of zlib's inflate() and deflate()
+   Not copyrighted -- provided to the public domain
+   Version 1.4  11 December 2005  Mark Adler */
+
+/* Version history:
+   1.0  30 Oct 2004  First version
+   1.1   8 Nov 2004  Add void casting for unused return values
+                     Use switch statement for inflate() return values
+   1.2   9 Nov 2004  Add assertions to document zlib guarantees
+   1.3   6 Apr 2005  Remove incorrect assertion in inf()
+   1.4  11 Dec 2005  Add hack to avoid MSDOS end-of-line conversions
+                     Avoid some compiler warnings for input and output buffers
+ */
+
+We now include the header files for the required definitions. From +stdio.h we use fopen(), fread(), fwrite(), +feof(), ferror(), and fclose() for file i/o, and +fputs() for error messages. From string.h we use +strcmp() for command line argument processing. +From assert.h we use the assert() macro. +From zlib.h +we use the basic compression functions deflateInit(), +deflate(), and deflateEnd(), and the basic decompression +functions inflateInit(), inflate(), and +inflateEnd(). +

+#include <stdio.h>
+#include <string.h>
+#include <assert.h>
+#include "zlib.h"
+
+This is an ugly hack required to avoid corruption of the input and output data on +Windows/MS-DOS systems. Without this, those systems would assume that the input and output +files are text, and try to convert the end-of-line characters from one standard to +another. That would corrupt binary data, and in particular would render the compressed data unusable. +This sets the input and output to binary which suppresses the end-of-line conversions. +SET_BINARY_MODE() will be used later on stdin and stdout, at the beginning of main(). +

+#if defined(MSDOS) || defined(OS2) || defined(WIN32) || defined(__CYGWIN__)
+#  include <fcntl.h>
+#  include <io.h>
+#  define SET_BINARY_MODE(file) setmode(fileno(file), O_BINARY)
+#else
+#  define SET_BINARY_MODE(file)
+#endif
+
+CHUNK is simply the buffer size for feeding data to and pulling data
+from the zlib routines. Larger buffer sizes would be more efficient,
+especially for inflate(). If the memory is available, buffer sizes
+on the order of 128K or 256K bytes should be used.

+#define CHUNK 16384
+
+The def() routine compresses data from an input file to an output file. The output data +will be in the zlib format, which is different from the gzip or zip +formats. The zlib format has a very small header of only two bytes to identify it as +a zlib stream and to provide decoding information, and a four-byte trailer with a fast +check value to verify the integrity of the uncompressed data after decoding. +
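As a small worked illustration of how light that wrapper is (this example is mine, not part of zpipe.c): compressing zero bytes of input at the default settings yields the eight-byte zlib stream

    78 9c           two-byte zlib header (deflate, 32K window, default level)
    03 00           the deflate representation of no data at all
    00 00 00 01     four-byte trailer: the Adler-32 check of the (empty) input

so only six bytes of header and trailer surround the compressed data itself.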

+/* Compress from file source to file dest until EOF on source.
+   def() returns Z_OK on success, Z_MEM_ERROR if memory could not be
+   allocated for processing, Z_STREAM_ERROR if an invalid compression
+   level is supplied, Z_VERSION_ERROR if the version of zlib.h and the
+   version of the library linked do not match, or Z_ERRNO if there is
+   an error reading or writing the files. */
+int def(FILE *source, FILE *dest, int level)
+{
+
+Here are the local variables for def(). ret will be used for zlib +return codes. flush will keep track of the current flushing state for deflate(), +which is either no flushing, or flush to completion after the end of the input file is reached. +have is the amount of data returned from deflate(). The strm structure +is used to pass information to and from the zlib routines, and to maintain the +deflate() state. in and out are the input and output buffers for +deflate(). +

+    int ret, flush;
+    unsigned have;
+    z_stream strm;
+    unsigned char in[CHUNK];
+    unsigned char out[CHUNK];
+
+The first thing we do is to initialize the zlib state for compression using +deflateInit(). This must be done before the first use of deflate(). +The zalloc, zfree, and opaque fields in the strm +structure must be initialized before calling deflateInit(). Here they are +set to the zlib constant Z_NULL to request that zlib use +the default memory allocation routines. An application may also choose to provide +custom memory allocation routines here. deflateInit() will allocate on the +order of 256K bytes for the internal state. +(See zlib Technical Details.) +
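Custom routines would have to match zlib's alloc_func and free_func signatures. A minimal sketch (the names my_alloc and my_free are invented here) could look like this:

    #include <stdlib.h>
    #include "zlib.h"

    /* allocate items*size bytes, zero-initialized; opaque is whatever the
       application stored in strm.opaque (unused here) */
    static voidpf my_alloc(voidpf opaque, uInt items, uInt size)
    {
        (void)opaque;
        return calloc(items, size);
    }

    /* release memory obtained from my_alloc() */
    static void my_free(voidpf opaque, voidpf address)
    {
        (void)opaque;
        free(address);
    }

These would then be assigned to strm.zalloc and strm.zfree in place of the Z_NULL assignments shown below.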

+deflateInit() is called with a pointer to the structure to be initialized and +the compression level, which is an integer in the range of -1 to 9. Lower compression +levels result in faster execution, but less compression. Higher levels result in +greater compression, but slower execution. The zlib constant Z_DEFAULT_COMPRESSION, +equal to -1, +provides a good compromise between compression and speed and is equivalent to level 6. +Level 0 actually does no compression at all, and in fact expands the data slightly to produce +the zlib format (it is not a byte-for-byte copy of the input). +More advanced applications of zlib +may use deflateInit2() here instead. Such an application may want to reduce how +much memory will be used, at some price in compression. Or it may need to request a +gzip header and trailer instead of a zlib header and trailer, or raw +encoding with no header or trailer at all. +
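As a sketch of that more advanced initialization, the deflateInit() call in def() below could be replaced as follows. A windowBits value of 15 selects the full 32K window, adding 16 to it requests a gzip header and trailer instead of zlib's, and a negative value such as -15 requests raw deflate output with no wrapper at all:

    /* like deflateInit(&strm, level), but requesting a gzip wrapper;
       Z_DEFLATED and Z_DEFAULT_STRATEGY are the normal choices, and the
       memLevel argument (8 is the default) can be lowered to reduce memory
       use at some cost in compression */
    ret = deflateInit2(&strm, level, Z_DEFLATED, 15 + 16, 8,
                       Z_DEFAULT_STRATEGY);
    if (ret != Z_OK)
        return ret;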

+We must check the return value of deflateInit() against the zlib constant +Z_OK to make sure that it was able to +allocate memory for the internal state, and that the provided arguments were valid. +deflateInit() will also check that the version of zlib that the zlib.h +file came from matches the version of zlib actually linked with the program. This +is especially important for environments in which zlib is a shared library. +

+Note that an application can initialize multiple, independent zlib streams, which can +operate in parallel. The state information maintained in the structure allows the zlib +routines to be reentrant. +


+    /* allocate deflate state */
+    strm.zalloc = Z_NULL;
+    strm.zfree = Z_NULL;
+    strm.opaque = Z_NULL;
+    ret = deflateInit(&strm, level);
+    if (ret != Z_OK)
+        return ret;
+
+With the pleasantries out of the way, now we can get down to business. The outer do-loop +reads all of the input file and exits at the bottom of the loop once end-of-file is reached. +This loop contains the only call of deflate(). So we must make sure that all of the +input data has been processed and that all of the output data has been generated and consumed +before we fall out of the loop at the bottom. +

+    /* compress until end of file */
+    do {
+
+We start off by reading data from the input file. The number of bytes read is put directly +into avail_in, and a pointer to those bytes is put into next_in. We also +check to see if end-of-file on the input has been reached. If we are at the end of file, then flush is set to the +zlib constant Z_FINISH, which is later passed to deflate() to +indicate that this is the last chunk of input data to compress. We need to use feof() +to check for end-of-file as opposed to seeing if fewer than CHUNK bytes have been read. The +reason is that if the input file length is an exact multiple of CHUNK, we will miss +the fact that we got to the end-of-file, and not know to tell deflate() to finish +up the compressed stream. If we are not yet at the end of the input, then the zlib +constant Z_NO_FLUSH will be passed to deflate to indicate that we are still +in the middle of the uncompressed data. +

+If there is an error in reading from the input file, the process is aborted with +deflateEnd() being called to free the allocated zlib state before returning +the error. We wouldn't want a memory leak, now would we? deflateEnd() can be called +at any time after the state has been initialized. Once that's done, deflateInit() (or +deflateInit2()) would have to be called to start a new compression process. There is +no point here in checking the deflateEnd() return code. The deallocation can't fail. +


+        strm.avail_in = fread(in, 1, CHUNK, source);
+        if (ferror(source)) {
+            (void)deflateEnd(&strm);
+            return Z_ERRNO;
+        }
+        flush = feof(source) ? Z_FINISH : Z_NO_FLUSH;
+        strm.next_in = in;
+
+The inner do-loop passes our chunk of input data to deflate(), and then +keeps calling deflate() until it is done producing output. Once there is no more +new output, deflate() is guaranteed to have consumed all of the input, i.e., +avail_in will be zero. +

+        /* run deflate() on input until output buffer not full, finish
+           compression if all of source has been read in */
+        do {
+
+Output space is provided to deflate() by setting avail_out to the number +of available output bytes and next_out to a pointer to that space. +

+            strm.avail_out = CHUNK;
+            strm.next_out = out;
+
+Now we call the compression engine itself, deflate(). It takes as many of the +avail_in bytes at next_in as it can process, and writes as many as +avail_out bytes to next_out. Those counters and pointers are then +updated past the input data consumed and the output data written. It is the amount of +output space available that may limit how much input is consumed. +Hence the inner loop to make sure that +all of the input is consumed by providing more output space each time. Since avail_in +and next_in are updated by deflate(), we don't have to mess with those +between deflate() calls until it's all used up. +

+The parameters to deflate() are a pointer to the strm structure containing +the input and output information and the internal compression engine state, and a parameter +indicating whether and how to flush data to the output. Normally deflate will consume +several K bytes of input data before producing any output (except for the header), in order +to accumulate statistics on the data for optimum compression. It will then put out a burst of +compressed data, and proceed to consume more input before the next burst. Eventually, +deflate() +must be told to terminate the stream, complete the compression with provided input data, and +write out the trailer check value. deflate() will continue to compress normally as long +as the flush parameter is Z_NO_FLUSH. Once the Z_FINISH parameter is provided, +deflate() will begin to complete the compressed output stream. However depending on how +much output space is provided, deflate() may have to be called several times until it +has provided the complete compressed stream, even after it has consumed all of the input. The flush +parameter must continue to be Z_FINISH for those subsequent calls. +

+There are other values of the flush parameter that are used in more advanced applications. You can +force deflate() to produce a burst of output that encodes all of the input data provided +so far, even if it wouldn't have otherwise, for example to control data latency on a link with +compressed data. You can also ask that deflate() do that as well as erase any history up to +that point so that what follows can be decompressed independently, for example for random access +applications. Both requests will degrade compression by an amount depending on how often such +requests are made. +
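Those two requests correspond to the Z_SYNC_FLUSH and Z_FULL_FLUSH values of the flush parameter. A sketch, reusing the strm, out, and CHUNK of def() above:

    /* emit everything provided so far, aligned to a byte boundary, so the
       receiver on a link can decompress all of the input sent up to this
       point (as in the inner loop above, repeat if out fills up) */
    strm.avail_out = CHUNK;
    strm.next_out = out;
    ret = deflate(&strm, Z_SYNC_FLUSH);

    /* same, but also erase the history, so that decompression can restart
       at the next byte independently of earlier data (hurts compression
       if done often) */
    strm.avail_out = CHUNK;
    strm.next_out = out;
    ret = deflate(&strm, Z_FULL_FLUSH);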

+deflate() has a return value that can indicate errors, yet we do not check it here. Why +not? Well, it turns out that deflate() can do no wrong here. Let's go through +deflate()'s return values and dispense with them one by one. The possible values are +Z_OK, Z_STREAM_END, Z_STREAM_ERROR, or Z_BUF_ERROR. Z_OK +is, well, ok. Z_STREAM_END is also ok and will be returned for the last call of +deflate(). This is already guaranteed by calling deflate() with Z_FINISH +until it has no more output. Z_STREAM_ERROR is only possible if the stream is not +initialized properly, but we did initialize it properly. There is no harm in checking for +Z_STREAM_ERROR here, for example to check for the possibility that some +other part of the application inadvertently clobbered the memory containing the zlib state. +Z_BUF_ERROR will be explained further below, but +suffice it to say that this is simply an indication that deflate() could not consume +more input or produce more output. deflate() can be called again with more output space +or more available input, which it will be in this code. +


+            ret = deflate(&strm, flush);    /* no bad return value */
+            assert(ret != Z_STREAM_ERROR);  /* state not clobbered */
+
+Now we compute how much output deflate() provided on the last call, which is the +difference between how much space was provided before the call, and how much output space +is still available after the call. Then that data, if any, is written to the output file. +We can then reuse the output buffer for the next call of deflate(). Again if there +is a file i/o error, we call deflateEnd() before returning to avoid a memory leak. +

+            have = CHUNK - strm.avail_out;
+            if (fwrite(out, 1, have, dest) != have || ferror(dest)) {
+                (void)deflateEnd(&strm);
+                return Z_ERRNO;
+            }
+
+The inner do-loop is repeated until the last deflate() call fails to fill the +provided output buffer. Then we know that deflate() has done as much as it can with +the provided input, and that all of that input has been consumed. We can then fall out of this +loop and reuse the input buffer. +

+The way we tell that deflate() has no more output is by seeing that it did not fill +the output buffer, leaving avail_out greater than zero. However suppose that +deflate() has no more output, but just so happened to exactly fill the output buffer! +avail_out is zero, and we can't tell that deflate() has done all it can. +As far as we know, deflate() +has more output for us. So we call it again. But now deflate() produces no output +at all, and avail_out remains unchanged as CHUNK. That deflate() call +wasn't able to do anything, either consume input or produce output, and so it returns +Z_BUF_ERROR. (See, I told you I'd cover this later.) However this is not a problem at +all. Now we finally have the desired indication that deflate() is really done, +and so we drop out of the inner loop to provide more input to deflate(). +

+With flush set to Z_FINISH, this final set of deflate() calls will +complete the output stream. Once that is done, subsequent calls of deflate() would return +Z_STREAM_ERROR if the flush parameter is not Z_FINISH, and do no more processing +until the state is reinitialized. +

+Some applications of zlib have two loops that call deflate()
+instead of the single inner loop we have here. The first loop would call
+deflate() without flushing, feeding it all of the input data. The second
+loop would then call deflate() with no more data and the Z_FINISH parameter
+to complete the process. As you can see from this
+example, that can be avoided by simply keeping track of the current flush state.
+A sketch of that two-loop structure follows.
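This sketch is not part of zpipe.c; it assumes def()'s declarations and omits the error handling shown above:

    /* first loop: feed all of the input to deflate(), never flushing */
    while ((strm.avail_in = fread(in, 1, CHUNK, source)) > 0) {
        strm.next_in = in;
        do {
            strm.avail_out = CHUNK;
            strm.next_out = out;
            (void)deflate(&strm, Z_NO_FLUSH);
            fwrite(out, 1, CHUNK - strm.avail_out, dest);
        } while (strm.avail_out == 0);
    }

    /* second loop: no more input -- ask deflate() to finish the stream */
    do {
        strm.avail_out = CHUNK;
        strm.next_out = out;
        ret = deflate(&strm, Z_FINISH);
        fwrite(out, 1, CHUNK - strm.avail_out, dest);
    } while (ret != Z_STREAM_END);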


+        } while (strm.avail_out == 0);
+        assert(strm.avail_in == 0);     /* all input will be used */
+
+Now we check to see if we have already processed all of the input file. That information was +saved in the flush variable, so we see if that was set to Z_FINISH. If so, +then we're done and we fall out of the outer loop. We're guaranteed to get Z_STREAM_END +from the last deflate() call, since we ran it until the last chunk of input was +consumed and all of the output was generated. +

+        /* done when last data in file processed */
+    } while (flush != Z_FINISH);
+    assert(ret == Z_STREAM_END);        /* stream will be complete */
+
+The process is complete, but we still need to deallocate the state to avoid a memory
+leak (or rather more like a memory hemorrhage if you don't do this). Then, finally,
+we can return with a happy return value.
+

+    /* clean up and return */
+    (void)deflateEnd(&strm);
+    return Z_OK;
+}
+
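
+For comparison, here is a minimal sketch of the two-loop arrangement mentioned above.
+This is not how zpipe works, and the error checking on fread(), fwrite(), and
+deflate() that def() performs is omitted; it only illustrates the pattern that
+keeping track of the flush state avoids:
+

+    /* first loop: feed all of the input to deflate() with no flushing */
+    while ((strm.avail_in = fread(in, 1, CHUNK, source)) > 0) {
+        strm.next_in = in;
+        do {
+            strm.avail_out = CHUNK;
+            strm.next_out = out;
+            (void)deflate(&strm, Z_NO_FLUSH);
+            fwrite(out, 1, CHUNK - strm.avail_out, dest);
+        } while (strm.avail_out == 0);
+    }
+
+    /* second loop: no more input, so finish the stream */
+    do {
+        strm.avail_out = CHUNK;
+        strm.next_out = out;
+        ret = deflate(&strm, Z_FINISH);
+        fwrite(out, 1, CHUNK - strm.avail_out, dest);
+    } while (ret != Z_STREAM_END);
+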
+Now we do the same thing for decompression in the inf() routine. inf() +decompresses what is hopefully a valid zlib stream from the input file and writes the +uncompressed data to the output file. Much of the discussion above for def() +applies to inf() as well, so the discussion here will focus on the differences between +the two. +

+/* Decompress from file source to file dest until stream ends or EOF.
+   inf() returns Z_OK on success, Z_MEM_ERROR if memory could not be
+   allocated for processing, Z_DATA_ERROR if the deflate data is
+   invalid or incomplete, Z_VERSION_ERROR if the version of zlib.h and
+   the version of the library linked do not match, or Z_ERRNO if there
+   is an error reading or writing the files. */
+int inf(FILE *source, FILE *dest)
+{
+
+The local variables have the same functionality as they do for def(). The +only difference is that there is no flush variable, since inflate() +can tell from the zlib stream itself when the stream is complete. +

+    int ret;
+    unsigned have;
+    z_stream strm;
+    unsigned char in[CHUNK];
+    unsigned char out[CHUNK];
+
+The initialization of the state is the same, except that there is no compression level, +of course, and two more elements of the structure are initialized. avail_in +and next_in must be initialized before calling inflateInit(). This +is because the application has the option to provide the start of the zlib stream in +order for inflateInit() to have access to information about the compression +method to aid in memory allocation. In the current implementation of zlib +(up through versions 1.2.x), the method-dependent memory allocations are deferred to the first call of +inflate() anyway. However those fields must be initialized since later versions +of zlib that provide more compression methods may take advantage of this interface. +In any case, no decompression is performed by inflateInit(), so the +avail_out and next_out fields do not need to be initialized before calling. +

+Here avail_in is set to zero and next_in is set to Z_NULL to +indicate that no input data is being provided. +


+    /* allocate inflate state */
+    strm.zalloc = Z_NULL;
+    strm.zfree = Z_NULL;
+    strm.opaque = Z_NULL;
+    strm.avail_in = 0;
+    strm.next_in = Z_NULL;
+    ret = inflateInit(&strm);
+    if (ret != Z_OK)
+        return ret;
+
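
+As mentioned above, an application could instead hand inflateInit() the start of the
+zlib stream by filling avail_in and next_in before the call. zpipe does not do this,
+but a hypothetical sketch would be (the loop below would then need to account for
+input that is already available):
+

+    /* hypothetical alternative: provide the first chunk of input up front so
+       that inflateInit() can see the zlib header */
+    strm.zalloc = Z_NULL;
+    strm.zfree = Z_NULL;
+    strm.opaque = Z_NULL;
+    strm.avail_in = fread(in, 1, CHUNK, source);
+    strm.next_in = in;
+    ret = inflateInit(&strm);
+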
+The outer do-loop decompresses input until inflate() indicates +that it has reached the end of the compressed data and has produced all of the uncompressed +output. This is in contrast to def() which processes all of the input file. +If end-of-file is reached before the compressed data self-terminates, then the compressed +data is incomplete and an error is returned. +

+    /* decompress until deflate stream ends or end of file */
+    do {
+
+We read input data and set the strm structure accordingly. If we've reached the
+end of the input file, then we leave the outer loop and report an error, since the
+compressed data is incomplete. Note that we may read more data than is eventually
+consumed by inflate(), if the input file continues past the zlib stream.
+For applications where zlib streams are embedded in other data, this routine would
+need to be modified to return the unused data, or at least indicate how much of the
+input data was not used, so the application would know where to pick up after the
+zlib stream. (A sketch of such a change appears after inf() below.)
+

+        strm.avail_in = fread(in, 1, CHUNK, source);
+        if (ferror(source)) {
+            (void)inflateEnd(&strm);
+            return Z_ERRNO;
+        }
+        if (strm.avail_in == 0)
+            break;
+        strm.next_in = in;
+
+The inner do-loop has the same function it did in def(), which is to
+keep calling inflate() until it has generated all of the output it can with the
+provided input.
+

+        /* run inflate() on input until output buffer not full */
+        do {
+
+Just like in def(), the same output space is provided for each call of inflate(). +

+            strm.avail_out = CHUNK;
+            strm.next_out = out;
+
+Now we run the decompression engine itself. There is no need to adjust the flush parameter, since +the zlib format is self-terminating. The main difference here is that there are +return values that we need to pay attention to. Z_DATA_ERROR +indicates that inflate() detected an error in the zlib compressed data format, +which means that either the data is not a zlib stream to begin with, or that the data was +corrupted somewhere along the way since it was compressed. The other error to be processed is +Z_MEM_ERROR, which can occur since memory allocation is deferred until inflate() +needs it, unlike deflate(), whose memory is allocated at the start by deflateInit(). +

+Advanced applications may use +deflateSetDictionary() to prime deflate() with a set of likely data to improve the +first 32K or so of compression. This is noted in the zlib header, so inflate() +requests that that dictionary be provided before it can start to decompress. Without the dictionary, +correct decompression is not possible. For this routine, we have no idea what the dictionary is, +so the Z_NEED_DICT indication is converted to a Z_DATA_ERROR. +

+inflate() can also return Z_STREAM_ERROR, which should not be possible here, +but could be checked for as noted above for def(). Z_BUF_ERROR does not need to be +checked for here, for the same reasons noted for def(). Z_STREAM_END will be +checked for later. +


+            ret = inflate(&strm, Z_NO_FLUSH);
+            assert(ret != Z_STREAM_ERROR);  /* state not clobbered */
+            switch (ret) {
+            case Z_NEED_DICT:
+                ret = Z_DATA_ERROR;     /* and fall through */
+            case Z_DATA_ERROR:
+            case Z_MEM_ERROR:
+                (void)inflateEnd(&strm);
+                return ret;
+            }
+
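
+If the application did know the dictionary that the compressor had been primed with,
+a hedged sketch of handling Z_NEED_DICT, instead of converting it to a data error as
+above, could replace that case; dict and dictLength are hypothetical names for that
+dictionary and its length, not part of zpipe:
+

+            if (ret == Z_NEED_DICT) {
+                /* hypothetical: supply the same dictionary that was given to
+                   deflateSetDictionary() on the compression side */
+                if (inflateSetDictionary(&strm, dict, dictLength) != Z_OK) {
+                    (void)inflateEnd(&strm);
+                    return Z_DATA_ERROR;    /* wrong or corrupted dictionary */
+                }
+                ret = inflate(&strm, Z_NO_FLUSH);   /* try again with it set */
+            }
+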
+The output of inflate() is handled identically to that of deflate(). +

+            have = CHUNK - strm.avail_out;
+            if (fwrite(out, 1, have, dest) != have || ferror(dest)) {
+                (void)inflateEnd(&strm);
+                return Z_ERRNO;
+            }
+
+The inner do-loop ends when inflate() has no more output as indicated +by not filling the output buffer, just as for deflate(). In this case, we cannot +assert that strm.avail_in will be zero, since the deflate stream may end before the file +does. +

+        } while (strm.avail_out == 0);
+
+The outer do-loop ends when inflate() reports that it has reached the +end of the input zlib stream, has completed the decompression and integrity +check, and has provided all of the output. This is indicated by the inflate() +return value Z_STREAM_END. The inner loop is guaranteed to leave ret +equal to Z_STREAM_END if the last chunk of the input file read contained the end +of the zlib stream. So if the return value is not Z_STREAM_END, the +loop continues to read more input. +

+        /* done when inflate() says it's done */
+    } while (ret != Z_STREAM_END);
+
+At this point, decompression either completed successfully, or we broke out of the
+loop because no more data was available from the input file. If the last inflate()
+return value is not Z_STREAM_END, then the zlib stream was incomplete and a data
+error is returned. Otherwise, we return with a happy return value. Of course,
+inflateEnd() is called first to avoid a memory leak.
+

+    /* clean up and return */
+    (void)inflateEnd(&strm);
+    return ret == Z_STREAM_END ? Z_OK : Z_DATA_ERROR;
+}
+
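
+Returning to the embedded-stream caveat noted before the fread() call in inf(): one
+minimal, hypothetical change would be to give inf() an extra unsigned *unused
+parameter and, just before the clean-up, report how much of the last buffer read was
+not consumed by the zlib stream (anything further along in the file was never read,
+so it is untouched as well):
+

+    /* hypothetical addition just before inflateEnd(): whatever remains in the
+       input buffer after Z_STREAM_END belongs to the data after the stream */
+    if (ret == Z_STREAM_END && unused != NULL)
+        *unused = strm.avail_in;
+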
+That ends the routines that directly use zlib. The following routines make this +a command-line program by running data through the above routines from stdin to +stdout, and handling any errors reported by def() or inf(). +

+zerr() is used to interpret the possible error codes from def() +and inf(), as detailed in their comments above, and print out an error message. +Note that these are only a subset of the possible return values from deflate() +and inflate(). +


+/* report a zlib or i/o error */
+void zerr(int ret)
+{
+    fputs("zpipe: ", stderr);
+    switch (ret) {
+    case Z_ERRNO:
+        if (ferror(stdin))
+            fputs("error reading stdin\n", stderr);
+        if (ferror(stdout))
+            fputs("error writing stdout\n", stderr);
+        break;
+    case Z_STREAM_ERROR:
+        fputs("invalid compression level\n", stderr);
+        break;
+    case Z_DATA_ERROR:
+        fputs("invalid or incomplete deflate data\n", stderr);
+        break;
+    case Z_MEM_ERROR:
+        fputs("out of memory\n", stderr);
+        break;
+    case Z_VERSION_ERROR:
+        fputs("zlib version mismatch!\n", stderr);
+    }
+}
+
+Here is the main() routine used to test def() and inf(). The
+zpipe command is simply a compression pipe from stdin to stdout if
+no arguments are given, or a decompression pipe if zpipe -d is used. If any other
+arguments are provided, no compression or decompression is performed. Instead, a
+usage message is displayed. Examples are zpipe < foo.txt > foo.txt.z to compress,
+and zpipe -d < foo.txt.z > foo.txt to decompress.
+

+/* compress or decompress from stdin to stdout */
+int main(int argc, char **argv)
+{
+    int ret;
+
+    /* avoid end-of-line conversions */
+    SET_BINARY_MODE(stdin);
+    SET_BINARY_MODE(stdout);
+
+    /* do compression if no arguments */
+    if (argc == 1) {
+        ret = def(stdin, stdout, Z_DEFAULT_COMPRESSION);
+        if (ret != Z_OK)
+            zerr(ret);
+        return ret;
+    }
+
+    /* do decompression if -d specified */
+    else if (argc == 2 && strcmp(argv[1], "-d") == 0) {
+        ret = inf(stdin, stdout);
+        if (ret != Z_OK)
+            zerr(ret);
+        return ret;
+    }
+
+    /* otherwise, report usage */
+    else {
+        fputs("zpipe usage: zpipe [-d] < source > dest\n", stderr);
+        return 1;
+    }
+}
+
+
+Copyright (c) 2004, 2005 by Mark Adler
Last modified 11 December 2005
+ + ADDED compat/zlib/examples/zpipe.c Index: compat/zlib/examples/zpipe.c ================================================================== --- compat/zlib/examples/zpipe.c +++ compat/zlib/examples/zpipe.c @@ -0,0 +1,205 @@ +/* zpipe.c: example of proper use of zlib's inflate() and deflate() + Not copyrighted -- provided to the public domain + Version 1.4 11 December 2005 Mark Adler */ + +/* Version history: + 1.0 30 Oct 2004 First version + 1.1 8 Nov 2004 Add void casting for unused return values + Use switch statement for inflate() return values + 1.2 9 Nov 2004 Add assertions to document zlib guarantees + 1.3 6 Apr 2005 Remove incorrect assertion in inf() + 1.4 11 Dec 2005 Add hack to avoid MSDOS end-of-line conversions + Avoid some compiler warnings for input and output buffers + */ + +#include +#include +#include +#include "zlib.h" + +#if defined(MSDOS) || defined(OS2) || defined(WIN32) || defined(__CYGWIN__) +# include +# include +# define SET_BINARY_MODE(file) setmode(fileno(file), O_BINARY) +#else +# define SET_BINARY_MODE(file) +#endif + +#define CHUNK 16384 + +/* Compress from file source to file dest until EOF on source. + def() returns Z_OK on success, Z_MEM_ERROR if memory could not be + allocated for processing, Z_STREAM_ERROR if an invalid compression + level is supplied, Z_VERSION_ERROR if the version of zlib.h and the + version of the library linked do not match, or Z_ERRNO if there is + an error reading or writing the files. */ +int def(FILE *source, FILE *dest, int level) +{ + int ret, flush; + unsigned have; + z_stream strm; + unsigned char in[CHUNK]; + unsigned char out[CHUNK]; + + /* allocate deflate state */ + strm.zalloc = Z_NULL; + strm.zfree = Z_NULL; + strm.opaque = Z_NULL; + ret = deflateInit(&strm, level); + if (ret != Z_OK) + return ret; + + /* compress until end of file */ + do { + strm.avail_in = fread(in, 1, CHUNK, source); + if (ferror(source)) { + (void)deflateEnd(&strm); + return Z_ERRNO; + } + flush = feof(source) ? Z_FINISH : Z_NO_FLUSH; + strm.next_in = in; + + /* run deflate() on input until output buffer not full, finish + compression if all of source has been read in */ + do { + strm.avail_out = CHUNK; + strm.next_out = out; + ret = deflate(&strm, flush); /* no bad return value */ + assert(ret != Z_STREAM_ERROR); /* state not clobbered */ + have = CHUNK - strm.avail_out; + if (fwrite(out, 1, have, dest) != have || ferror(dest)) { + (void)deflateEnd(&strm); + return Z_ERRNO; + } + } while (strm.avail_out == 0); + assert(strm.avail_in == 0); /* all input will be used */ + + /* done when last data in file processed */ + } while (flush != Z_FINISH); + assert(ret == Z_STREAM_END); /* stream will be complete */ + + /* clean up and return */ + (void)deflateEnd(&strm); + return Z_OK; +} + +/* Decompress from file source to file dest until stream ends or EOF. + inf() returns Z_OK on success, Z_MEM_ERROR if memory could not be + allocated for processing, Z_DATA_ERROR if the deflate data is + invalid or incomplete, Z_VERSION_ERROR if the version of zlib.h and + the version of the library linked do not match, or Z_ERRNO if there + is an error reading or writing the files. 
*/ +int inf(FILE *source, FILE *dest) +{ + int ret; + unsigned have; + z_stream strm; + unsigned char in[CHUNK]; + unsigned char out[CHUNK]; + + /* allocate inflate state */ + strm.zalloc = Z_NULL; + strm.zfree = Z_NULL; + strm.opaque = Z_NULL; + strm.avail_in = 0; + strm.next_in = Z_NULL; + ret = inflateInit(&strm); + if (ret != Z_OK) + return ret; + + /* decompress until deflate stream ends or end of file */ + do { + strm.avail_in = fread(in, 1, CHUNK, source); + if (ferror(source)) { + (void)inflateEnd(&strm); + return Z_ERRNO; + } + if (strm.avail_in == 0) + break; + strm.next_in = in; + + /* run inflate() on input until output buffer not full */ + do { + strm.avail_out = CHUNK; + strm.next_out = out; + ret = inflate(&strm, Z_NO_FLUSH); + assert(ret != Z_STREAM_ERROR); /* state not clobbered */ + switch (ret) { + case Z_NEED_DICT: + ret = Z_DATA_ERROR; /* and fall through */ + case Z_DATA_ERROR: + case Z_MEM_ERROR: + (void)inflateEnd(&strm); + return ret; + } + have = CHUNK - strm.avail_out; + if (fwrite(out, 1, have, dest) != have || ferror(dest)) { + (void)inflateEnd(&strm); + return Z_ERRNO; + } + } while (strm.avail_out == 0); + + /* done when inflate() says it's done */ + } while (ret != Z_STREAM_END); + + /* clean up and return */ + (void)inflateEnd(&strm); + return ret == Z_STREAM_END ? Z_OK : Z_DATA_ERROR; +} + +/* report a zlib or i/o error */ +void zerr(int ret) +{ + fputs("zpipe: ", stderr); + switch (ret) { + case Z_ERRNO: + if (ferror(stdin)) + fputs("error reading stdin\n", stderr); + if (ferror(stdout)) + fputs("error writing stdout\n", stderr); + break; + case Z_STREAM_ERROR: + fputs("invalid compression level\n", stderr); + break; + case Z_DATA_ERROR: + fputs("invalid or incomplete deflate data\n", stderr); + break; + case Z_MEM_ERROR: + fputs("out of memory\n", stderr); + break; + case Z_VERSION_ERROR: + fputs("zlib version mismatch!\n", stderr); + } +} + +/* compress or decompress from stdin to stdout */ +int main(int argc, char **argv) +{ + int ret; + + /* avoid end-of-line conversions */ + SET_BINARY_MODE(stdin); + SET_BINARY_MODE(stdout); + + /* do compression if no arguments */ + if (argc == 1) { + ret = def(stdin, stdout, Z_DEFAULT_COMPRESSION); + if (ret != Z_OK) + zerr(ret); + return ret; + } + + /* do decompression if -d specified */ + else if (argc == 2 && strcmp(argv[1], "-d") == 0) { + ret = inf(stdin, stdout); + if (ret != Z_OK) + zerr(ret); + return ret; + } + + /* otherwise, report usage */ + else { + fputs("zpipe usage: zpipe [-d] < source > dest\n", stderr); + return 1; + } +} ADDED compat/zlib/examples/zran.c Index: compat/zlib/examples/zran.c ================================================================== --- compat/zlib/examples/zran.c +++ compat/zlib/examples/zran.c @@ -0,0 +1,409 @@ +/* zran.c -- example of zlib/gzip stream indexing and random access + * Copyright (C) 2005, 2012 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + Version 1.1 29 Sep 2012 Mark Adler */ + +/* Version History: + 1.0 29 May 2005 First version + 1.1 29 Sep 2012 Fix memory reallocation error + */ + +/* Illustrate the use of Z_BLOCK, inflatePrime(), and inflateSetDictionary() + for random access of a compressed file. A file containing a zlib or gzip + stream is provided on the command line. The compressed stream is decoded in + its entirety, and an index built with access points about every SPAN bytes + in the uncompressed output. 
The compressed file is left open, and can then + be read randomly, having to decompress on the average SPAN/2 uncompressed + bytes before getting to the desired block of data. + + An access point can be created at the start of any deflate block, by saving + the starting file offset and bit of that block, and the 32K bytes of + uncompressed data that precede that block. Also the uncompressed offset of + that block is saved to provide a referece for locating a desired starting + point in the uncompressed stream. build_index() works by decompressing the + input zlib or gzip stream a block at a time, and at the end of each block + deciding if enough uncompressed data has gone by to justify the creation of + a new access point. If so, that point is saved in a data structure that + grows as needed to accommodate the points. + + To use the index, an offset in the uncompressed data is provided, for which + the latest accees point at or preceding that offset is located in the index. + The input file is positioned to the specified location in the index, and if + necessary the first few bits of the compressed data is read from the file. + inflate is initialized with those bits and the 32K of uncompressed data, and + the decompression then proceeds until the desired offset in the file is + reached. Then the decompression continues to read the desired uncompressed + data from the file. + + Another approach would be to generate the index on demand. In that case, + requests for random access reads from the compressed data would try to use + the index, but if a read far enough past the end of the index is required, + then further index entries would be generated and added. + + There is some fair bit of overhead to starting inflation for the random + access, mainly copying the 32K byte dictionary. So if small pieces of the + file are being accessed, it would make sense to implement a cache to hold + some lookahead and avoid many calls to extract() for small lengths. + + Another way to build an index would be to use inflateCopy(). That would + not be constrained to have access points at block boundaries, but requires + more memory per access point, and also cannot be saved to file due to the + use of pointers in the state. The approach here allows for storage of the + index in a file. + */ + +#include +#include +#include +#include "zlib.h" + +#define local static + +#define SPAN 1048576L /* desired distance between access points */ +#define WINSIZE 32768U /* sliding window size */ +#define CHUNK 16384 /* file input buffer size */ + +/* access point entry */ +struct point { + off_t out; /* corresponding offset in uncompressed data */ + off_t in; /* offset in input file of first full byte */ + int bits; /* number of bits (1-7) from byte at in - 1, or 0 */ + unsigned char window[WINSIZE]; /* preceding 32K of uncompressed data */ +}; + +/* access point list */ +struct access { + int have; /* number of list entries filled in */ + int size; /* number of list entries allocated */ + struct point *list; /* allocated list */ +}; + +/* Deallocate an index built by build_index() */ +local void free_index(struct access *index) +{ + if (index != NULL) { + free(index->list); + free(index); + } +} + +/* Add an entry to the access point list. If out of memory, deallocate the + existing list and return NULL. 
*/ +local struct access *addpoint(struct access *index, int bits, + off_t in, off_t out, unsigned left, unsigned char *window) +{ + struct point *next; + + /* if list is empty, create it (start with eight points) */ + if (index == NULL) { + index = malloc(sizeof(struct access)); + if (index == NULL) return NULL; + index->list = malloc(sizeof(struct point) << 3); + if (index->list == NULL) { + free(index); + return NULL; + } + index->size = 8; + index->have = 0; + } + + /* if list is full, make it bigger */ + else if (index->have == index->size) { + index->size <<= 1; + next = realloc(index->list, sizeof(struct point) * index->size); + if (next == NULL) { + free_index(index); + return NULL; + } + index->list = next; + } + + /* fill in entry and increment how many we have */ + next = index->list + index->have; + next->bits = bits; + next->in = in; + next->out = out; + if (left) + memcpy(next->window, window + WINSIZE - left, left); + if (left < WINSIZE) + memcpy(next->window + left, window, WINSIZE - left); + index->have++; + + /* return list, possibly reallocated */ + return index; +} + +/* Make one entire pass through the compressed stream and build an index, with + access points about every span bytes of uncompressed output -- span is + chosen to balance the speed of random access against the memory requirements + of the list, about 32K bytes per access point. Note that data after the end + of the first zlib or gzip stream in the file is ignored. build_index() + returns the number of access points on success (>= 1), Z_MEM_ERROR for out + of memory, Z_DATA_ERROR for an error in the input file, or Z_ERRNO for a + file read error. On success, *built points to the resulting index. */ +local int build_index(FILE *in, off_t span, struct access **built) +{ + int ret; + off_t totin, totout; /* our own total counters to avoid 4GB limit */ + off_t last; /* totout value of last access point */ + struct access *index; /* access points being generated */ + z_stream strm; + unsigned char input[CHUNK]; + unsigned char window[WINSIZE]; + + /* initialize inflate */ + strm.zalloc = Z_NULL; + strm.zfree = Z_NULL; + strm.opaque = Z_NULL; + strm.avail_in = 0; + strm.next_in = Z_NULL; + ret = inflateInit2(&strm, 47); /* automatic zlib or gzip decoding */ + if (ret != Z_OK) + return ret; + + /* inflate the input, maintain a sliding window, and build an index -- this + also validates the integrity of the compressed data using the check + information at the end of the gzip or zlib stream */ + totin = totout = last = 0; + index = NULL; /* will be allocated by first addpoint() */ + strm.avail_out = 0; + do { + /* get some compressed data from input file */ + strm.avail_in = fread(input, 1, CHUNK, in); + if (ferror(in)) { + ret = Z_ERRNO; + goto build_index_error; + } + if (strm.avail_in == 0) { + ret = Z_DATA_ERROR; + goto build_index_error; + } + strm.next_in = input; + + /* process all of that, or until end of stream */ + do { + /* reset sliding window if necessary */ + if (strm.avail_out == 0) { + strm.avail_out = WINSIZE; + strm.next_out = window; + } + + /* inflate until out of input, output, or at end of block -- + update the total input and output counters */ + totin += strm.avail_in; + totout += strm.avail_out; + ret = inflate(&strm, Z_BLOCK); /* return at end of block */ + totin -= strm.avail_in; + totout -= strm.avail_out; + if (ret == Z_NEED_DICT) + ret = Z_DATA_ERROR; + if (ret == Z_MEM_ERROR || ret == Z_DATA_ERROR) + goto build_index_error; + if (ret == Z_STREAM_END) + break; + + /* if at end of block, 
consider adding an index entry (note that if + data_type indicates an end-of-block, then all of the + uncompressed data from that block has been delivered, and none + of the compressed data after that block has been consumed, + except for up to seven bits) -- the totout == 0 provides an + entry point after the zlib or gzip header, and assures that the + index always has at least one access point; we avoid creating an + access point after the last block by checking bit 6 of data_type + */ + if ((strm.data_type & 128) && !(strm.data_type & 64) && + (totout == 0 || totout - last > span)) { + index = addpoint(index, strm.data_type & 7, totin, + totout, strm.avail_out, window); + if (index == NULL) { + ret = Z_MEM_ERROR; + goto build_index_error; + } + last = totout; + } + } while (strm.avail_in != 0); + } while (ret != Z_STREAM_END); + + /* clean up and return index (release unused entries in list) */ + (void)inflateEnd(&strm); + index->list = realloc(index->list, sizeof(struct point) * index->have); + index->size = index->have; + *built = index; + return index->size; + + /* return error */ + build_index_error: + (void)inflateEnd(&strm); + if (index != NULL) + free_index(index); + return ret; +} + +/* Use the index to read len bytes from offset into buf, return bytes read or + negative for error (Z_DATA_ERROR or Z_MEM_ERROR). If data is requested past + the end of the uncompressed data, then extract() will return a value less + than len, indicating how much as actually read into buf. This function + should not return a data error unless the file was modified since the index + was generated. extract() may also return Z_ERRNO if there is an error on + reading or seeking the input file. */ +local int extract(FILE *in, struct access *index, off_t offset, + unsigned char *buf, int len) +{ + int ret, skip; + z_stream strm; + struct point *here; + unsigned char input[CHUNK]; + unsigned char discard[WINSIZE]; + + /* proceed only if something reasonable to do */ + if (len < 0) + return 0; + + /* find where in stream to start */ + here = index->list; + ret = index->have; + while (--ret && here[1].out <= offset) + here++; + + /* initialize file and inflate state to start there */ + strm.zalloc = Z_NULL; + strm.zfree = Z_NULL; + strm.opaque = Z_NULL; + strm.avail_in = 0; + strm.next_in = Z_NULL; + ret = inflateInit2(&strm, -15); /* raw inflate */ + if (ret != Z_OK) + return ret; + ret = fseeko(in, here->in - (here->bits ? 1 : 0), SEEK_SET); + if (ret == -1) + goto extract_ret; + if (here->bits) { + ret = getc(in); + if (ret == -1) { + ret = ferror(in) ? 
Z_ERRNO : Z_DATA_ERROR; + goto extract_ret; + } + (void)inflatePrime(&strm, here->bits, ret >> (8 - here->bits)); + } + (void)inflateSetDictionary(&strm, here->window, WINSIZE); + + /* skip uncompressed bytes until offset reached, then satisfy request */ + offset -= here->out; + strm.avail_in = 0; + skip = 1; /* while skipping to offset */ + do { + /* define where to put uncompressed data, and how much */ + if (offset == 0 && skip) { /* at offset now */ + strm.avail_out = len; + strm.next_out = buf; + skip = 0; /* only do this once */ + } + if (offset > WINSIZE) { /* skip WINSIZE bytes */ + strm.avail_out = WINSIZE; + strm.next_out = discard; + offset -= WINSIZE; + } + else if (offset != 0) { /* last skip */ + strm.avail_out = (unsigned)offset; + strm.next_out = discard; + offset = 0; + } + + /* uncompress until avail_out filled, or end of stream */ + do { + if (strm.avail_in == 0) { + strm.avail_in = fread(input, 1, CHUNK, in); + if (ferror(in)) { + ret = Z_ERRNO; + goto extract_ret; + } + if (strm.avail_in == 0) { + ret = Z_DATA_ERROR; + goto extract_ret; + } + strm.next_in = input; + } + ret = inflate(&strm, Z_NO_FLUSH); /* normal inflate */ + if (ret == Z_NEED_DICT) + ret = Z_DATA_ERROR; + if (ret == Z_MEM_ERROR || ret == Z_DATA_ERROR) + goto extract_ret; + if (ret == Z_STREAM_END) + break; + } while (strm.avail_out != 0); + + /* if reach end of stream, then don't keep trying to get more */ + if (ret == Z_STREAM_END) + break; + + /* do until offset reached and requested data read, or stream ends */ + } while (skip); + + /* compute number of uncompressed bytes read after offset */ + ret = skip ? 0 : len - strm.avail_out; + + /* clean up and return bytes read or error */ + extract_ret: + (void)inflateEnd(&strm); + return ret; +} + +/* Demonstrate the use of build_index() and extract() by processing the file + provided on the command line, and the extracting 16K from about 2/3rds of + the way through the uncompressed output, and writing that to stdout. */ +int main(int argc, char **argv) +{ + int len; + off_t offset; + FILE *in; + struct access *index = NULL; + unsigned char buf[CHUNK]; + + /* open input file */ + if (argc != 2) { + fprintf(stderr, "usage: zran file.gz\n"); + return 1; + } + in = fopen(argv[1], "rb"); + if (in == NULL) { + fprintf(stderr, "zran: could not open %s for reading\n", argv[1]); + return 1; + } + + /* build index */ + len = build_index(in, SPAN, &index); + if (len < 0) { + fclose(in); + switch (len) { + case Z_MEM_ERROR: + fprintf(stderr, "zran: out of memory\n"); + break; + case Z_DATA_ERROR: + fprintf(stderr, "zran: compressed data error in %s\n", argv[1]); + break; + case Z_ERRNO: + fprintf(stderr, "zran: read error on %s\n", argv[1]); + break; + default: + fprintf(stderr, "zran: error %d while building index\n", len); + } + return 1; + } + fprintf(stderr, "zran: built index with %d access points\n", len); + + /* use index by reading some bytes from an arbitrary offset */ + offset = (index->list[index->have - 1].out << 1) / 3; + len = extract(in, index, offset, buf, CHUNK); + if (len < 0) + fprintf(stderr, "zran: extraction failed: %s error\n", + len == Z_MEM_ERROR ? 
"out of memory" : "input corrupted"); + else { + fwrite(buf, 1, len, stdout); + fprintf(stderr, "zran: extracted %d bytes at %llu\n", len, offset); + } + + /* clean up and exit */ + free_index(index); + fclose(in); + return 0; +} ADDED compat/zlib/gzclose.c Index: compat/zlib/gzclose.c ================================================================== --- compat/zlib/gzclose.c +++ compat/zlib/gzclose.c @@ -0,0 +1,25 @@ +/* gzclose.c -- zlib gzclose() function + * Copyright (C) 2004, 2010 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +#include "gzguts.h" + +/* gzclose() is in a separate file so that it is linked in only if it is used. + That way the other gzclose functions can be used instead to avoid linking in + unneeded compression or decompression routines. */ +int ZEXPORT gzclose(file) + gzFile file; +{ +#ifndef NO_GZCOMPRESS + gz_statep state; + + if (file == NULL) + return Z_STREAM_ERROR; + state = (gz_statep)file; + + return state->mode == GZ_READ ? gzclose_r(file) : gzclose_w(file); +#else + return gzclose_r(file); +#endif +} ADDED compat/zlib/gzguts.h Index: compat/zlib/gzguts.h ================================================================== --- compat/zlib/gzguts.h +++ compat/zlib/gzguts.h @@ -0,0 +1,209 @@ +/* gzguts.h -- zlib internal header definitions for gz* operations + * Copyright (C) 2004, 2005, 2010, 2011, 2012, 2013 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +#ifdef _LARGEFILE64_SOURCE +# ifndef _LARGEFILE_SOURCE +# define _LARGEFILE_SOURCE 1 +# endif +# ifdef _FILE_OFFSET_BITS +# undef _FILE_OFFSET_BITS +# endif +#endif + +#ifdef HAVE_HIDDEN +# define ZLIB_INTERNAL __attribute__((visibility ("hidden"))) +#else +# define ZLIB_INTERNAL +#endif + +#include +#include "zlib.h" +#ifdef STDC +# include +# include +# include +#endif +#include + +#ifdef _WIN32 +# include +#endif + +#if defined(__TURBOC__) || defined(_MSC_VER) || defined(_WIN32) +# include +#endif + +#ifdef WINAPI_FAMILY +# define open _open +# define read _read +# define write _write +# define close _close +#endif + +#ifdef NO_DEFLATE /* for compatibility with old definition */ +# define NO_GZCOMPRESS +#endif + +#if defined(STDC99) || (defined(__TURBOC__) && __TURBOC__ >= 0x550) +# ifndef HAVE_VSNPRINTF +# define HAVE_VSNPRINTF +# endif +#endif + +#if defined(__CYGWIN__) +# ifndef HAVE_VSNPRINTF +# define HAVE_VSNPRINTF +# endif +#endif + +#if defined(MSDOS) && defined(__BORLANDC__) && (BORLANDC > 0x410) +# ifndef HAVE_VSNPRINTF +# define HAVE_VSNPRINTF +# endif +#endif + +#ifndef HAVE_VSNPRINTF +# ifdef MSDOS +/* vsnprintf may exist on some MS-DOS compilers (DJGPP?), + but for now we just assume it doesn't. */ +# define NO_vsnprintf +# endif +# ifdef __TURBOC__ +# define NO_vsnprintf +# endif +# ifdef WIN32 +/* In Win32, vsnprintf is available as the "non-ANSI" _vsnprintf. 
*/ +# if !defined(vsnprintf) && !defined(NO_vsnprintf) +# if !defined(_MSC_VER) || ( defined(_MSC_VER) && _MSC_VER < 1500 ) +# define vsnprintf _vsnprintf +# endif +# endif +# endif +# ifdef __SASC +# define NO_vsnprintf +# endif +# ifdef VMS +# define NO_vsnprintf +# endif +# ifdef __OS400__ +# define NO_vsnprintf +# endif +# ifdef __MVS__ +# define NO_vsnprintf +# endif +#endif + +/* unlike snprintf (which is required in C99, yet still not supported by + Microsoft more than a decade later!), _snprintf does not guarantee null + termination of the result -- however this is only used in gzlib.c where + the result is assured to fit in the space provided */ +#ifdef _MSC_VER +# define snprintf _snprintf +#endif + +#ifndef local +# define local static +#endif +/* compile with -Dlocal if your debugger can't find static symbols */ + +/* gz* functions always use library allocation functions */ +#ifndef STDC + extern voidp malloc OF((uInt size)); + extern void free OF((voidpf ptr)); +#endif + +/* get errno and strerror definition */ +#if defined UNDER_CE +# include +# define zstrerror() gz_strwinerror((DWORD)GetLastError()) +#else +# ifndef NO_STRERROR +# include +# define zstrerror() strerror(errno) +# else +# define zstrerror() "stdio error (consult errno)" +# endif +#endif + +/* provide prototypes for these when building zlib without LFS */ +#if !defined(_LARGEFILE64_SOURCE) || _LFS64_LARGEFILE-0 == 0 + ZEXTERN gzFile ZEXPORT gzopen64 OF((const char *, const char *)); + ZEXTERN z_off64_t ZEXPORT gzseek64 OF((gzFile, z_off64_t, int)); + ZEXTERN z_off64_t ZEXPORT gztell64 OF((gzFile)); + ZEXTERN z_off64_t ZEXPORT gzoffset64 OF((gzFile)); +#endif + +/* default memLevel */ +#if MAX_MEM_LEVEL >= 8 +# define DEF_MEM_LEVEL 8 +#else +# define DEF_MEM_LEVEL MAX_MEM_LEVEL +#endif + +/* default i/o buffer size -- double this for output when reading (this and + twice this must be able to fit in an unsigned type) */ +#define GZBUFSIZE 8192 + +/* gzip modes, also provide a little integrity check on the passed structure */ +#define GZ_NONE 0 +#define GZ_READ 7247 +#define GZ_WRITE 31153 +#define GZ_APPEND 1 /* mode set to GZ_WRITE after the file is opened */ + +/* values for gz_state how */ +#define LOOK 0 /* look for a gzip header */ +#define COPY 1 /* copy input directly */ +#define GZIP 2 /* decompress a gzip stream */ + +/* internal gzip file state data structure */ +typedef struct { + /* exposed contents for gzgetc() macro */ + struct gzFile_s x; /* "x" for exposed */ + /* x.have: number of bytes available at x.next */ + /* x.next: next output data to deliver or write */ + /* x.pos: current position in uncompressed data */ + /* used for both reading and writing */ + int mode; /* see gzip modes above */ + int fd; /* file descriptor */ + char *path; /* path or fd for error messages */ + unsigned size; /* buffer size, zero if not allocated yet */ + unsigned want; /* requested buffer size, default is GZBUFSIZE */ + unsigned char *in; /* input buffer */ + unsigned char *out; /* output buffer (double-sized when reading) */ + int direct; /* 0 if processing gzip, 1 if transparent */ + /* just for reading */ + int how; /* 0: get header, 1: copy, 2: decompress */ + z_off64_t start; /* where the gzip data started, for rewinding */ + int eof; /* true if end of input file reached */ + int past; /* true if read requested past end */ + /* just for writing */ + int level; /* compression level */ + int strategy; /* compression strategy */ + /* seek request */ + z_off64_t skip; /* amount to skip (already rewound if 
backwards) */ + int seek; /* true if seek request pending */ + /* error information */ + int err; /* error code */ + char *msg; /* error message */ + /* zlib inflate or deflate stream */ + z_stream strm; /* stream structure in-place (not a pointer) */ +} gz_state; +typedef gz_state FAR *gz_statep; + +/* shared functions */ +void ZLIB_INTERNAL gz_error OF((gz_statep, int, const char *)); +#if defined UNDER_CE +char ZLIB_INTERNAL *gz_strwinerror OF((DWORD error)); +#endif + +/* GT_OFF(x), where x is an unsigned value, is true if x > maximum z_off64_t + value -- needed when comparing unsigned to z_off64_t, which is signed + (possible z_off64_t types off_t, off64_t, and long are all signed) */ +#ifdef INT_MAX +# define GT_OFF(x) (sizeof(int) == sizeof(z_off64_t) && (x) > INT_MAX) +#else +unsigned ZLIB_INTERNAL gz_intmax OF((void)); +# define GT_OFF(x) (sizeof(int) == sizeof(z_off64_t) && (x) > gz_intmax()) +#endif ADDED compat/zlib/gzlib.c Index: compat/zlib/gzlib.c ================================================================== --- compat/zlib/gzlib.c +++ compat/zlib/gzlib.c @@ -0,0 +1,634 @@ +/* gzlib.c -- zlib functions common to reading and writing gzip files + * Copyright (C) 2004, 2010, 2011, 2012, 2013 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +#include "gzguts.h" + +#if defined(_WIN32) && !defined(__BORLANDC__) +# define LSEEK _lseeki64 +#else +#if defined(_LARGEFILE64_SOURCE) && _LFS64_LARGEFILE-0 +# define LSEEK lseek64 +#else +# define LSEEK lseek +#endif +#endif + +/* Local functions */ +local void gz_reset OF((gz_statep)); +local gzFile gz_open OF((const void *, int, const char *)); + +#if defined UNDER_CE + +/* Map the Windows error number in ERROR to a locale-dependent error message + string and return a pointer to it. Typically, the values for ERROR come + from GetLastError. + + The string pointed to shall not be modified by the application, but may be + overwritten by a subsequent call to gz_strwinerror + + The gz_strwinerror function does not change the current setting of + GetLastError. */ +char ZLIB_INTERNAL *gz_strwinerror (error) + DWORD error; +{ + static char buf[1024]; + + wchar_t *msgbuf; + DWORD lasterr = GetLastError(); + DWORD chars = FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM + | FORMAT_MESSAGE_ALLOCATE_BUFFER, + NULL, + error, + 0, /* Default language */ + (LPVOID)&msgbuf, + 0, + NULL); + if (chars != 0) { + /* If there is an \r\n appended, zap it. */ + if (chars >= 2 + && msgbuf[chars - 2] == '\r' && msgbuf[chars - 1] == '\n') { + chars -= 2; + msgbuf[chars] = 0; + } + + if (chars > sizeof (buf) - 1) { + chars = sizeof (buf) - 1; + msgbuf[chars] = 0; + } + + wcstombs(buf, msgbuf, chars + 1); + LocalFree(msgbuf); + } + else { + sprintf(buf, "unknown win32 error (%ld)", error); + } + + SetLastError(lasterr); + return buf; +} + +#endif /* UNDER_CE */ + +/* Reset gzip file state */ +local void gz_reset(state) + gz_statep state; +{ + state->x.have = 0; /* no output data available */ + if (state->mode == GZ_READ) { /* for reading ... */ + state->eof = 0; /* not at end of file */ + state->past = 0; /* have not read past end yet */ + state->how = LOOK; /* look for gzip header */ + } + state->seek = 0; /* no seek request pending */ + gz_error(state, Z_OK, NULL); /* clear error */ + state->x.pos = 0; /* no uncompressed data yet */ + state->strm.avail_in = 0; /* no input data yet */ +} + +/* Open a gzip file either by name or file descriptor. 
*/ +local gzFile gz_open(path, fd, mode) + const void *path; + int fd; + const char *mode; +{ + gz_statep state; + size_t len; + int oflag; +#ifdef O_CLOEXEC + int cloexec = 0; +#endif +#ifdef O_EXCL + int exclusive = 0; +#endif + + /* check input */ + if (path == NULL) + return NULL; + + /* allocate gzFile structure to return */ + state = (gz_statep)malloc(sizeof(gz_state)); + if (state == NULL) + return NULL; + state->size = 0; /* no buffers allocated yet */ + state->want = GZBUFSIZE; /* requested buffer size */ + state->msg = NULL; /* no error message yet */ + + /* interpret mode */ + state->mode = GZ_NONE; + state->level = Z_DEFAULT_COMPRESSION; + state->strategy = Z_DEFAULT_STRATEGY; + state->direct = 0; + while (*mode) { + if (*mode >= '0' && *mode <= '9') + state->level = *mode - '0'; + else + switch (*mode) { + case 'r': + state->mode = GZ_READ; + break; +#ifndef NO_GZCOMPRESS + case 'w': + state->mode = GZ_WRITE; + break; + case 'a': + state->mode = GZ_APPEND; + break; +#endif + case '+': /* can't read and write at the same time */ + free(state); + return NULL; + case 'b': /* ignore -- will request binary anyway */ + break; +#ifdef O_CLOEXEC + case 'e': + cloexec = 1; + break; +#endif +#ifdef O_EXCL + case 'x': + exclusive = 1; + break; +#endif + case 'f': + state->strategy = Z_FILTERED; + break; + case 'h': + state->strategy = Z_HUFFMAN_ONLY; + break; + case 'R': + state->strategy = Z_RLE; + break; + case 'F': + state->strategy = Z_FIXED; + break; + case 'T': + state->direct = 1; + break; + default: /* could consider as an error, but just ignore */ + ; + } + mode++; + } + + /* must provide an "r", "w", or "a" */ + if (state->mode == GZ_NONE) { + free(state); + return NULL; + } + + /* can't force transparent read */ + if (state->mode == GZ_READ) { + if (state->direct) { + free(state); + return NULL; + } + state->direct = 1; /* for empty file */ + } + + /* save the path name for error messages */ +#ifdef _WIN32 + if (fd == -2) { + len = wcstombs(NULL, path, 0); + if (len == (size_t)-1) + len = 0; + } + else +#endif + len = strlen((const char *)path); + state->path = (char *)malloc(len + 1); + if (state->path == NULL) { + free(state); + return NULL; + } +#ifdef _WIN32 + if (fd == -2) + if (len) + wcstombs(state->path, path, len + 1); + else + *(state->path) = 0; + else +#endif +#if !defined(NO_snprintf) && !defined(NO_vsnprintf) + snprintf(state->path, len + 1, "%s", (const char *)path); +#else + strcpy(state->path, path); +#endif + + /* compute the flags for open() */ + oflag = +#ifdef O_LARGEFILE + O_LARGEFILE | +#endif +#ifdef O_BINARY + O_BINARY | +#endif +#ifdef O_CLOEXEC + (cloexec ? O_CLOEXEC : 0) | +#endif + (state->mode == GZ_READ ? + O_RDONLY : + (O_WRONLY | O_CREAT | +#ifdef O_EXCL + (exclusive ? O_EXCL : 0) | +#endif + (state->mode == GZ_WRITE ? + O_TRUNC : + O_APPEND))); + + /* open the file with the appropriate flags (or just use fd) */ + state->fd = fd > -1 ? fd : ( +#ifdef _WIN32 + fd == -2 ? 
_wopen(path, oflag, 0666) : +#endif + open((const char *)path, oflag, 0666)); + if (state->fd == -1) { + free(state->path); + free(state); + return NULL; + } + if (state->mode == GZ_APPEND) + state->mode = GZ_WRITE; /* simplify later checks */ + + /* save the current position for rewinding (only if reading) */ + if (state->mode == GZ_READ) { + state->start = LSEEK(state->fd, 0, SEEK_CUR); + if (state->start == -1) state->start = 0; + } + + /* initialize stream */ + gz_reset(state); + + /* return stream */ + return (gzFile)state; +} + +/* -- see zlib.h -- */ +gzFile ZEXPORT gzopen(path, mode) + const char *path; + const char *mode; +{ + return gz_open(path, -1, mode); +} + +/* -- see zlib.h -- */ +gzFile ZEXPORT gzopen64(path, mode) + const char *path; + const char *mode; +{ + return gz_open(path, -1, mode); +} + +/* -- see zlib.h -- */ +gzFile ZEXPORT gzdopen(fd, mode) + int fd; + const char *mode; +{ + char *path; /* identifier for error messages */ + gzFile gz; + + if (fd == -1 || (path = (char *)malloc(7 + 3 * sizeof(int))) == NULL) + return NULL; +#if !defined(NO_snprintf) && !defined(NO_vsnprintf) + snprintf(path, 7 + 3 * sizeof(int), "", fd); /* for debugging */ +#else + sprintf(path, "", fd); /* for debugging */ +#endif + gz = gz_open(path, fd, mode); + free(path); + return gz; +} + +/* -- see zlib.h -- */ +#ifdef _WIN32 +gzFile ZEXPORT gzopen_w(path, mode) + const wchar_t *path; + const char *mode; +{ + return gz_open(path, -2, mode); +} +#endif + +/* -- see zlib.h -- */ +int ZEXPORT gzbuffer(file, size) + gzFile file; + unsigned size; +{ + gz_statep state; + + /* get internal structure and check integrity */ + if (file == NULL) + return -1; + state = (gz_statep)file; + if (state->mode != GZ_READ && state->mode != GZ_WRITE) + return -1; + + /* make sure we haven't already allocated memory */ + if (state->size != 0) + return -1; + + /* check and set requested size */ + if (size < 2) + size = 2; /* need two bytes to check magic header */ + state->want = size; + return 0; +} + +/* -- see zlib.h -- */ +int ZEXPORT gzrewind(file) + gzFile file; +{ + gz_statep state; + + /* get internal structure */ + if (file == NULL) + return -1; + state = (gz_statep)file; + + /* check that we're reading and that there's no error */ + if (state->mode != GZ_READ || + (state->err != Z_OK && state->err != Z_BUF_ERROR)) + return -1; + + /* back up and start over */ + if (LSEEK(state->fd, state->start, SEEK_SET) == -1) + return -1; + gz_reset(state); + return 0; +} + +/* -- see zlib.h -- */ +z_off64_t ZEXPORT gzseek64(file, offset, whence) + gzFile file; + z_off64_t offset; + int whence; +{ + unsigned n; + z_off64_t ret; + gz_statep state; + + /* get internal structure and check integrity */ + if (file == NULL) + return -1; + state = (gz_statep)file; + if (state->mode != GZ_READ && state->mode != GZ_WRITE) + return -1; + + /* check that there's no error */ + if (state->err != Z_OK && state->err != Z_BUF_ERROR) + return -1; + + /* can only seek from start or relative to current position */ + if (whence != SEEK_SET && whence != SEEK_CUR) + return -1; + + /* normalize offset to a SEEK_CUR specification */ + if (whence == SEEK_SET) + offset -= state->x.pos; + else if (state->seek) + offset += state->skip; + state->seek = 0; + + /* if within raw area while reading, just go there */ + if (state->mode == GZ_READ && state->how == COPY && + state->x.pos + offset >= 0) { + ret = LSEEK(state->fd, offset - state->x.have, SEEK_CUR); + if (ret == -1) + return -1; + state->x.have = 0; + state->eof = 0; + state->past = 0; 
+ state->seek = 0; + gz_error(state, Z_OK, NULL); + state->strm.avail_in = 0; + state->x.pos += offset; + return state->x.pos; + } + + /* calculate skip amount, rewinding if needed for back seek when reading */ + if (offset < 0) { + if (state->mode != GZ_READ) /* writing -- can't go backwards */ + return -1; + offset += state->x.pos; + if (offset < 0) /* before start of file! */ + return -1; + if (gzrewind(file) == -1) /* rewind, then skip to offset */ + return -1; + } + + /* if reading, skip what's in output buffer (one less gzgetc() check) */ + if (state->mode == GZ_READ) { + n = GT_OFF(state->x.have) || (z_off64_t)state->x.have > offset ? + (unsigned)offset : state->x.have; + state->x.have -= n; + state->x.next += n; + state->x.pos += n; + offset -= n; + } + + /* request skip (if not zero) */ + if (offset) { + state->seek = 1; + state->skip = offset; + } + return state->x.pos + offset; +} + +/* -- see zlib.h -- */ +z_off_t ZEXPORT gzseek(file, offset, whence) + gzFile file; + z_off_t offset; + int whence; +{ + z_off64_t ret; + + ret = gzseek64(file, (z_off64_t)offset, whence); + return ret == (z_off_t)ret ? (z_off_t)ret : -1; +} + +/* -- see zlib.h -- */ +z_off64_t ZEXPORT gztell64(file) + gzFile file; +{ + gz_statep state; + + /* get internal structure and check integrity */ + if (file == NULL) + return -1; + state = (gz_statep)file; + if (state->mode != GZ_READ && state->mode != GZ_WRITE) + return -1; + + /* return position */ + return state->x.pos + (state->seek ? state->skip : 0); +} + +/* -- see zlib.h -- */ +z_off_t ZEXPORT gztell(file) + gzFile file; +{ + z_off64_t ret; + + ret = gztell64(file); + return ret == (z_off_t)ret ? (z_off_t)ret : -1; +} + +/* -- see zlib.h -- */ +z_off64_t ZEXPORT gzoffset64(file) + gzFile file; +{ + z_off64_t offset; + gz_statep state; + + /* get internal structure and check integrity */ + if (file == NULL) + return -1; + state = (gz_statep)file; + if (state->mode != GZ_READ && state->mode != GZ_WRITE) + return -1; + + /* compute and return effective offset in file */ + offset = LSEEK(state->fd, 0, SEEK_CUR); + if (offset == -1) + return -1; + if (state->mode == GZ_READ) /* reading */ + offset -= state->strm.avail_in; /* don't count buffered input */ + return offset; +} + +/* -- see zlib.h -- */ +z_off_t ZEXPORT gzoffset(file) + gzFile file; +{ + z_off64_t ret; + + ret = gzoffset64(file); + return ret == (z_off_t)ret ? (z_off_t)ret : -1; +} + +/* -- see zlib.h -- */ +int ZEXPORT gzeof(file) + gzFile file; +{ + gz_statep state; + + /* get internal structure and check integrity */ + if (file == NULL) + return 0; + state = (gz_statep)file; + if (state->mode != GZ_READ && state->mode != GZ_WRITE) + return 0; + + /* return end-of-file state */ + return state->mode == GZ_READ ? state->past : 0; +} + +/* -- see zlib.h -- */ +const char * ZEXPORT gzerror(file, errnum) + gzFile file; + int *errnum; +{ + gz_statep state; + + /* get internal structure and check integrity */ + if (file == NULL) + return NULL; + state = (gz_statep)file; + if (state->mode != GZ_READ && state->mode != GZ_WRITE) + return NULL; + + /* return error information */ + if (errnum != NULL) + *errnum = state->err; + return state->err == Z_MEM_ERROR ? "out of memory" : + (state->msg == NULL ? 
"" : state->msg); +} + +/* -- see zlib.h -- */ +void ZEXPORT gzclearerr(file) + gzFile file; +{ + gz_statep state; + + /* get internal structure and check integrity */ + if (file == NULL) + return; + state = (gz_statep)file; + if (state->mode != GZ_READ && state->mode != GZ_WRITE) + return; + + /* clear error and end-of-file */ + if (state->mode == GZ_READ) { + state->eof = 0; + state->past = 0; + } + gz_error(state, Z_OK, NULL); +} + +/* Create an error message in allocated memory and set state->err and + state->msg accordingly. Free any previous error message already there. Do + not try to free or allocate space if the error is Z_MEM_ERROR (out of + memory). Simply save the error message as a static string. If there is an + allocation failure constructing the error message, then convert the error to + out of memory. */ +void ZLIB_INTERNAL gz_error(state, err, msg) + gz_statep state; + int err; + const char *msg; +{ + /* free previously allocated message and clear */ + if (state->msg != NULL) { + if (state->err != Z_MEM_ERROR) + free(state->msg); + state->msg = NULL; + } + + /* if fatal, set state->x.have to 0 so that the gzgetc() macro fails */ + if (err != Z_OK && err != Z_BUF_ERROR) + state->x.have = 0; + + /* set error code, and if no message, then done */ + state->err = err; + if (msg == NULL) + return; + + /* for an out of memory error, return literal string when requested */ + if (err == Z_MEM_ERROR) + return; + + /* construct error message with path */ + if ((state->msg = (char *)malloc(strlen(state->path) + strlen(msg) + 3)) == + NULL) { + state->err = Z_MEM_ERROR; + return; + } +#if !defined(NO_snprintf) && !defined(NO_vsnprintf) + snprintf(state->msg, strlen(state->path) + strlen(msg) + 3, + "%s%s%s", state->path, ": ", msg); +#else + strcpy(state->msg, state->path); + strcat(state->msg, ": "); + strcat(state->msg, msg); +#endif + return; +} + +#ifndef INT_MAX +/* portably return maximum value for an int (when limits.h presumed not + available) -- we need to do this to cover cases where 2's complement not + used, since C standard permits 1's complement and sign-bit representations, + otherwise we could just use ((unsigned)-1) >> 1 */ +unsigned ZLIB_INTERNAL gz_intmax() +{ + unsigned p, q; + + p = 1; + do { + q = p; + p <<= 1; + p++; + } while (p > q); + return q >> 1; +} +#endif ADDED compat/zlib/gzread.c Index: compat/zlib/gzread.c ================================================================== --- compat/zlib/gzread.c +++ compat/zlib/gzread.c @@ -0,0 +1,594 @@ +/* gzread.c -- zlib functions for reading gzip files + * Copyright (C) 2004, 2005, 2010, 2011, 2012, 2013 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +#include "gzguts.h" + +/* Local functions */ +local int gz_load OF((gz_statep, unsigned char *, unsigned, unsigned *)); +local int gz_avail OF((gz_statep)); +local int gz_look OF((gz_statep)); +local int gz_decomp OF((gz_statep)); +local int gz_fetch OF((gz_statep)); +local int gz_skip OF((gz_statep, z_off64_t)); + +/* Use read() to load a buffer -- return -1 on error, otherwise 0. Read from + state->fd, and update state->eof, state->err, and state->msg as appropriate. + This function needs to loop on read(), since read() is not guaranteed to + read the number of bytes requested, depending on the type of descriptor. 
*/ +local int gz_load(state, buf, len, have) + gz_statep state; + unsigned char *buf; + unsigned len; + unsigned *have; +{ + int ret; + + *have = 0; + do { + ret = read(state->fd, buf + *have, len - *have); + if (ret <= 0) + break; + *have += ret; + } while (*have < len); + if (ret < 0) { + gz_error(state, Z_ERRNO, zstrerror()); + return -1; + } + if (ret == 0) + state->eof = 1; + return 0; +} + +/* Load up input buffer and set eof flag if last data loaded -- return -1 on + error, 0 otherwise. Note that the eof flag is set when the end of the input + file is reached, even though there may be unused data in the buffer. Once + that data has been used, no more attempts will be made to read the file. + If strm->avail_in != 0, then the current data is moved to the beginning of + the input buffer, and then the remainder of the buffer is loaded with the + available data from the input file. */ +local int gz_avail(state) + gz_statep state; +{ + unsigned got; + z_streamp strm = &(state->strm); + + if (state->err != Z_OK && state->err != Z_BUF_ERROR) + return -1; + if (state->eof == 0) { + if (strm->avail_in) { /* copy what's there to the start */ + unsigned char *p = state->in; + unsigned const char *q = strm->next_in; + unsigned n = strm->avail_in; + do { + *p++ = *q++; + } while (--n); + } + if (gz_load(state, state->in + strm->avail_in, + state->size - strm->avail_in, &got) == -1) + return -1; + strm->avail_in += got; + strm->next_in = state->in; + } + return 0; +} + +/* Look for gzip header, set up for inflate or copy. state->x.have must be 0. + If this is the first time in, allocate required memory. state->how will be + left unchanged if there is no more input data available, will be set to COPY + if there is no gzip header and direct copying will be performed, or it will + be set to GZIP for decompression. If direct copying, then leftover input + data from the input buffer will be copied to the output buffer. In that + case, all further file reads will be directly to either the output buffer or + a user buffer. If decompressing, the inflate state will be initialized. + gz_look() will return 0 on success or -1 on failure. 
*/ +local int gz_look(state) + gz_statep state; +{ + z_streamp strm = &(state->strm); + + /* allocate read buffers and inflate memory */ + if (state->size == 0) { + /* allocate buffers */ + state->in = (unsigned char *)malloc(state->want); + state->out = (unsigned char *)malloc(state->want << 1); + if (state->in == NULL || state->out == NULL) { + if (state->out != NULL) + free(state->out); + if (state->in != NULL) + free(state->in); + gz_error(state, Z_MEM_ERROR, "out of memory"); + return -1; + } + state->size = state->want; + + /* allocate inflate memory */ + state->strm.zalloc = Z_NULL; + state->strm.zfree = Z_NULL; + state->strm.opaque = Z_NULL; + state->strm.avail_in = 0; + state->strm.next_in = Z_NULL; + if (inflateInit2(&(state->strm), 15 + 16) != Z_OK) { /* gunzip */ + free(state->out); + free(state->in); + state->size = 0; + gz_error(state, Z_MEM_ERROR, "out of memory"); + return -1; + } + } + + /* get at least the magic bytes in the input buffer */ + if (strm->avail_in < 2) { + if (gz_avail(state) == -1) + return -1; + if (strm->avail_in == 0) + return 0; + } + + /* look for gzip magic bytes -- if there, do gzip decoding (note: there is + a logical dilemma here when considering the case of a partially written + gzip file, to wit, if a single 31 byte is written, then we cannot tell + whether this is a single-byte file, or just a partially written gzip + file -- for here we assume that if a gzip file is being written, then + the header will be written in a single operation, so that reading a + single byte is sufficient indication that it is not a gzip file) */ + if (strm->avail_in > 1 && + strm->next_in[0] == 31 && strm->next_in[1] == 139) { + inflateReset(strm); + state->how = GZIP; + state->direct = 0; + return 0; + } + + /* no gzip header -- if we were decoding gzip before, then this is trailing + garbage. Ignore the trailing garbage and finish. */ + if (state->direct == 0) { + strm->avail_in = 0; + state->eof = 1; + state->x.have = 0; + return 0; + } + + /* doing raw i/o, copy any leftover input to output -- this assumes that + the output buffer is larger than the input buffer, which also assures + space for gzungetc() */ + state->x.next = state->out; + if (strm->avail_in) { + memcpy(state->x.next, strm->next_in, strm->avail_in); + state->x.have = strm->avail_in; + strm->avail_in = 0; + } + state->how = COPY; + state->direct = 1; + return 0; +} + +/* Decompress from input to the provided next_out and avail_out in the state. + On return, state->x.have and state->x.next point to the just decompressed + data. If the gzip stream completes, state->how is reset to LOOK to look for + the next gzip stream or raw data, once state->x.have is depleted. Returns 0 + on success, -1 on failure. 
*/ +local int gz_decomp(state) + gz_statep state; +{ + int ret = Z_OK; + unsigned had; + z_streamp strm = &(state->strm); + + /* fill output buffer up to end of deflate stream */ + had = strm->avail_out; + do { + /* get more input for inflate() */ + if (strm->avail_in == 0 && gz_avail(state) == -1) + return -1; + if (strm->avail_in == 0) { + gz_error(state, Z_BUF_ERROR, "unexpected end of file"); + break; + } + + /* decompress and handle errors */ + ret = inflate(strm, Z_NO_FLUSH); + if (ret == Z_STREAM_ERROR || ret == Z_NEED_DICT) { + gz_error(state, Z_STREAM_ERROR, + "internal error: inflate stream corrupt"); + return -1; + } + if (ret == Z_MEM_ERROR) { + gz_error(state, Z_MEM_ERROR, "out of memory"); + return -1; + } + if (ret == Z_DATA_ERROR) { /* deflate stream invalid */ + gz_error(state, Z_DATA_ERROR, + strm->msg == NULL ? "compressed data error" : strm->msg); + return -1; + } + } while (strm->avail_out && ret != Z_STREAM_END); + + /* update available output */ + state->x.have = had - strm->avail_out; + state->x.next = strm->next_out - state->x.have; + + /* if the gzip stream completed successfully, look for another */ + if (ret == Z_STREAM_END) + state->how = LOOK; + + /* good decompression */ + return 0; +} + +/* Fetch data and put it in the output buffer. Assumes state->x.have is 0. + Data is either copied from the input file or decompressed from the input + file depending on state->how. If state->how is LOOK, then a gzip header is + looked for to determine whether to copy or decompress. Returns -1 on error, + otherwise 0. gz_fetch() will leave state->how as COPY or GZIP unless the + end of the input file has been reached and all data has been processed. */ +local int gz_fetch(state) + gz_statep state; +{ + z_streamp strm = &(state->strm); + + do { + switch(state->how) { + case LOOK: /* -> LOOK, COPY (only if never GZIP), or GZIP */ + if (gz_look(state) == -1) + return -1; + if (state->how == LOOK) + return 0; + break; + case COPY: /* -> COPY */ + if (gz_load(state, state->out, state->size << 1, &(state->x.have)) + == -1) + return -1; + state->x.next = state->out; + return 0; + case GZIP: /* -> GZIP or LOOK (if end of gzip stream) */ + strm->avail_out = state->size << 1; + strm->next_out = state->out; + if (gz_decomp(state) == -1) + return -1; + } + } while (state->x.have == 0 && (!state->eof || strm->avail_in)); + return 0; +} + +/* Skip len uncompressed bytes of output. Return -1 on error, 0 on success. */ +local int gz_skip(state, len) + gz_statep state; + z_off64_t len; +{ + unsigned n; + + /* skip over len bytes or reach end-of-file, whichever comes first */ + while (len) + /* skip over whatever is in output buffer */ + if (state->x.have) { + n = GT_OFF(state->x.have) || (z_off64_t)state->x.have > len ? 
+ (unsigned)len : state->x.have; + state->x.have -= n; + state->x.next += n; + state->x.pos += n; + len -= n; + } + + /* output buffer empty -- return if we're at the end of the input */ + else if (state->eof && state->strm.avail_in == 0) + break; + + /* need more data to skip -- load up output buffer */ + else { + /* get more output, looking for header if required */ + if (gz_fetch(state) == -1) + return -1; + } + return 0; +} + +/* -- see zlib.h -- */ +int ZEXPORT gzread(file, buf, len) + gzFile file; + voidp buf; + unsigned len; +{ + unsigned got, n; + gz_statep state; + z_streamp strm; + + /* get internal structure */ + if (file == NULL) + return -1; + state = (gz_statep)file; + strm = &(state->strm); + + /* check that we're reading and that there's no (serious) error */ + if (state->mode != GZ_READ || + (state->err != Z_OK && state->err != Z_BUF_ERROR)) + return -1; + + /* since an int is returned, make sure len fits in one, otherwise return + with an error (this avoids the flaw in the interface) */ + if ((int)len < 0) { + gz_error(state, Z_DATA_ERROR, "requested length does not fit in int"); + return -1; + } + + /* if len is zero, avoid unnecessary operations */ + if (len == 0) + return 0; + + /* process a skip request */ + if (state->seek) { + state->seek = 0; + if (gz_skip(state, state->skip) == -1) + return -1; + } + + /* get len bytes to buf, or less than len if at the end */ + got = 0; + do { + /* first just try copying data from the output buffer */ + if (state->x.have) { + n = state->x.have > len ? len : state->x.have; + memcpy(buf, state->x.next, n); + state->x.next += n; + state->x.have -= n; + } + + /* output buffer empty -- return if we're at the end of the input */ + else if (state->eof && strm->avail_in == 0) { + state->past = 1; /* tried to read past end */ + break; + } + + /* need output data -- for small len or new stream load up our output + buffer */ + else if (state->how == LOOK || len < (state->size << 1)) { + /* get more output, looking for header if required */ + if (gz_fetch(state) == -1) + return -1; + continue; /* no progress yet -- go back to copy above */ + /* the copy above assures that we will leave with space in the + output buffer, allowing at least one gzungetc() to succeed */ + } + + /* large len -- read directly into user buffer */ + else if (state->how == COPY) { /* read directly */ + if (gz_load(state, (unsigned char *)buf, len, &n) == -1) + return -1; + } + + /* large len -- decompress directly into user buffer */ + else { /* state->how == GZIP */ + strm->avail_out = len; + strm->next_out = (unsigned char *)buf; + if (gz_decomp(state) == -1) + return -1; + n = state->x.have; + state->x.have = 0; + } + + /* update progress */ + len -= n; + buf = (char *)buf + n; + got += n; + state->x.pos += n; + } while (len); + + /* return number of bytes read into user buffer (will fit in int) */ + return (int)got; +} + +/* -- see zlib.h -- */ +#ifdef Z_PREFIX_SET +# undef z_gzgetc +#else +# undef gzgetc +#endif +int ZEXPORT gzgetc(file) + gzFile file; +{ + int ret; + unsigned char buf[1]; + gz_statep state; + + /* get internal structure */ + if (file == NULL) + return -1; + state = (gz_statep)file; + + /* check that we're reading and that there's no (serious) error */ + if (state->mode != GZ_READ || + (state->err != Z_OK && state->err != Z_BUF_ERROR)) + return -1; + + /* try output buffer (no need to check for skip request) */ + if (state->x.have) { + state->x.have--; + state->x.pos++; + return *(state->x.next)++; + } + + /* nothing there -- try gzread() */ + 
ret = gzread(file, buf, 1); + return ret < 1 ? -1 : buf[0]; +} + +int ZEXPORT gzgetc_(file) +gzFile file; +{ + return gzgetc(file); +} + +/* -- see zlib.h -- */ +int ZEXPORT gzungetc(c, file) + int c; + gzFile file; +{ + gz_statep state; + + /* get internal structure */ + if (file == NULL) + return -1; + state = (gz_statep)file; + + /* check that we're reading and that there's no (serious) error */ + if (state->mode != GZ_READ || + (state->err != Z_OK && state->err != Z_BUF_ERROR)) + return -1; + + /* process a skip request */ + if (state->seek) { + state->seek = 0; + if (gz_skip(state, state->skip) == -1) + return -1; + } + + /* can't push EOF */ + if (c < 0) + return -1; + + /* if output buffer empty, put byte at end (allows more pushing) */ + if (state->x.have == 0) { + state->x.have = 1; + state->x.next = state->out + (state->size << 1) - 1; + state->x.next[0] = c; + state->x.pos--; + state->past = 0; + return c; + } + + /* if no room, give up (must have already done a gzungetc()) */ + if (state->x.have == (state->size << 1)) { + gz_error(state, Z_DATA_ERROR, "out of room to push characters"); + return -1; + } + + /* slide output data if needed and insert byte before existing data */ + if (state->x.next == state->out) { + unsigned char *src = state->out + state->x.have; + unsigned char *dest = state->out + (state->size << 1); + while (src > state->out) + *--dest = *--src; + state->x.next = dest; + } + state->x.have++; + state->x.next--; + state->x.next[0] = c; + state->x.pos--; + state->past = 0; + return c; +} + +/* -- see zlib.h -- */ +char * ZEXPORT gzgets(file, buf, len) + gzFile file; + char *buf; + int len; +{ + unsigned left, n; + char *str; + unsigned char *eol; + gz_statep state; + + /* check parameters and get internal structure */ + if (file == NULL || buf == NULL || len < 1) + return NULL; + state = (gz_statep)file; + + /* check that we're reading and that there's no (serious) error */ + if (state->mode != GZ_READ || + (state->err != Z_OK && state->err != Z_BUF_ERROR)) + return NULL; + + /* process a skip request */ + if (state->seek) { + state->seek = 0; + if (gz_skip(state, state->skip) == -1) + return NULL; + } + + /* copy output bytes up to new line or len - 1, whichever comes first -- + append a terminating zero to the string (we don't check for a zero in + the contents, let the user worry about that) */ + str = buf; + left = (unsigned)len - 1; + if (left) do { + /* assure that something is in the output buffer */ + if (state->x.have == 0 && gz_fetch(state) == -1) + return NULL; /* error */ + if (state->x.have == 0) { /* end of file */ + state->past = 1; /* read past end */ + break; /* return what we have */ + } + + /* look for end-of-line in current output buffer */ + n = state->x.have > left ? 
left : state->x.have; + eol = (unsigned char *)memchr(state->x.next, '\n', n); + if (eol != NULL) + n = (unsigned)(eol - state->x.next) + 1; + + /* copy through end-of-line, or remainder if not found */ + memcpy(buf, state->x.next, n); + state->x.have -= n; + state->x.next += n; + state->x.pos += n; + left -= n; + buf += n; + } while (left && eol == NULL); + + /* return terminated string, or if nothing, end of file */ + if (buf == str) + return NULL; + buf[0] = 0; + return str; +} + +/* -- see zlib.h -- */ +int ZEXPORT gzdirect(file) + gzFile file; +{ + gz_statep state; + + /* get internal structure */ + if (file == NULL) + return 0; + state = (gz_statep)file; + + /* if the state is not known, but we can find out, then do so (this is + mainly for right after a gzopen() or gzdopen()) */ + if (state->mode == GZ_READ && state->how == LOOK && state->x.have == 0) + (void)gz_look(state); + + /* return 1 if transparent, 0 if processing a gzip stream */ + return state->direct; +} + +/* -- see zlib.h -- */ +int ZEXPORT gzclose_r(file) + gzFile file; +{ + int ret, err; + gz_statep state; + + /* get internal structure */ + if (file == NULL) + return Z_STREAM_ERROR; + state = (gz_statep)file; + + /* check that we're reading */ + if (state->mode != GZ_READ) + return Z_STREAM_ERROR; + + /* free memory and close file */ + if (state->size) { + inflateEnd(&(state->strm)); + free(state->out); + free(state->in); + } + err = state->err == Z_BUF_ERROR ? Z_BUF_ERROR : Z_OK; + gz_error(state, Z_OK, NULL); + free(state->path); + ret = close(state->fd); + free(state); + return ret ? Z_ERRNO : err; +} ADDED compat/zlib/gzwrite.c Index: compat/zlib/gzwrite.c ================================================================== --- compat/zlib/gzwrite.c +++ compat/zlib/gzwrite.c @@ -0,0 +1,577 @@ +/* gzwrite.c -- zlib functions for writing gzip files + * Copyright (C) 2004, 2005, 2010, 2011, 2012, 2013 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +#include "gzguts.h" + +/* Local functions */ +local int gz_init OF((gz_statep)); +local int gz_comp OF((gz_statep, int)); +local int gz_zero OF((gz_statep, z_off64_t)); + +/* Initialize state for writing a gzip file. Mark initialization by setting + state->size to non-zero. Return -1 on failure or 0 on success. 
*/ +local int gz_init(state) + gz_statep state; +{ + int ret; + z_streamp strm = &(state->strm); + + /* allocate input buffer */ + state->in = (unsigned char *)malloc(state->want); + if (state->in == NULL) { + gz_error(state, Z_MEM_ERROR, "out of memory"); + return -1; + } + + /* only need output buffer and deflate state if compressing */ + if (!state->direct) { + /* allocate output buffer */ + state->out = (unsigned char *)malloc(state->want); + if (state->out == NULL) { + free(state->in); + gz_error(state, Z_MEM_ERROR, "out of memory"); + return -1; + } + + /* allocate deflate memory, set up for gzip compression */ + strm->zalloc = Z_NULL; + strm->zfree = Z_NULL; + strm->opaque = Z_NULL; + ret = deflateInit2(strm, state->level, Z_DEFLATED, + MAX_WBITS + 16, DEF_MEM_LEVEL, state->strategy); + if (ret != Z_OK) { + free(state->out); + free(state->in); + gz_error(state, Z_MEM_ERROR, "out of memory"); + return -1; + } + } + + /* mark state as initialized */ + state->size = state->want; + + /* initialize write buffer if compressing */ + if (!state->direct) { + strm->avail_out = state->size; + strm->next_out = state->out; + state->x.next = strm->next_out; + } + return 0; +} + +/* Compress whatever is at avail_in and next_in and write to the output file. + Return -1 if there is an error writing to the output file, otherwise 0. + flush is assumed to be a valid deflate() flush value. If flush is Z_FINISH, + then the deflate() state is reset to start a new gzip stream. If gz->direct + is true, then simply write to the output file without compressing, and + ignore flush. */ +local int gz_comp(state, flush) + gz_statep state; + int flush; +{ + int ret, got; + unsigned have; + z_streamp strm = &(state->strm); + + /* allocate memory if this is the first time through */ + if (state->size == 0 && gz_init(state) == -1) + return -1; + + /* write directly if requested */ + if (state->direct) { + got = write(state->fd, strm->next_in, strm->avail_in); + if (got < 0 || (unsigned)got != strm->avail_in) { + gz_error(state, Z_ERRNO, zstrerror()); + return -1; + } + strm->avail_in = 0; + return 0; + } + + /* run deflate() on provided input until it produces no more output */ + ret = Z_OK; + do { + /* write out current buffer contents if full, or if flushing, but if + doing Z_FINISH then don't write until we get to Z_STREAM_END */ + if (strm->avail_out == 0 || (flush != Z_NO_FLUSH && + (flush != Z_FINISH || ret == Z_STREAM_END))) { + have = (unsigned)(strm->next_out - state->x.next); + if (have && ((got = write(state->fd, state->x.next, have)) < 0 || + (unsigned)got != have)) { + gz_error(state, Z_ERRNO, zstrerror()); + return -1; + } + if (strm->avail_out == 0) { + strm->avail_out = state->size; + strm->next_out = state->out; + } + state->x.next = strm->next_out; + } + + /* compress */ + have = strm->avail_out; + ret = deflate(strm, flush); + if (ret == Z_STREAM_ERROR) { + gz_error(state, Z_STREAM_ERROR, + "internal error: deflate stream corrupt"); + return -1; + } + have -= strm->avail_out; + } while (have); + + /* if that completed a deflate stream, allow another to start */ + if (flush == Z_FINISH) + deflateReset(strm); + + /* all done, no errors */ + return 0; +} + +/* Compress len zeros to output. Return -1 on error, 0 on success. 
*/ +local int gz_zero(state, len) + gz_statep state; + z_off64_t len; +{ + int first; + unsigned n; + z_streamp strm = &(state->strm); + + /* consume whatever's left in the input buffer */ + if (strm->avail_in && gz_comp(state, Z_NO_FLUSH) == -1) + return -1; + + /* compress len zeros (len guaranteed > 0) */ + first = 1; + while (len) { + n = GT_OFF(state->size) || (z_off64_t)state->size > len ? + (unsigned)len : state->size; + if (first) { + memset(state->in, 0, n); + first = 0; + } + strm->avail_in = n; + strm->next_in = state->in; + state->x.pos += n; + if (gz_comp(state, Z_NO_FLUSH) == -1) + return -1; + len -= n; + } + return 0; +} + +/* -- see zlib.h -- */ +int ZEXPORT gzwrite(file, buf, len) + gzFile file; + voidpc buf; + unsigned len; +{ + unsigned put = len; + gz_statep state; + z_streamp strm; + + /* get internal structure */ + if (file == NULL) + return 0; + state = (gz_statep)file; + strm = &(state->strm); + + /* check that we're writing and that there's no error */ + if (state->mode != GZ_WRITE || state->err != Z_OK) + return 0; + + /* since an int is returned, make sure len fits in one, otherwise return + with an error (this avoids the flaw in the interface) */ + if ((int)len < 0) { + gz_error(state, Z_DATA_ERROR, "requested length does not fit in int"); + return 0; + } + + /* if len is zero, avoid unnecessary operations */ + if (len == 0) + return 0; + + /* allocate memory if this is the first time through */ + if (state->size == 0 && gz_init(state) == -1) + return 0; + + /* check for seek request */ + if (state->seek) { + state->seek = 0; + if (gz_zero(state, state->skip) == -1) + return 0; + } + + /* for small len, copy to input buffer, otherwise compress directly */ + if (len < state->size) { + /* copy to input buffer, compress when full */ + do { + unsigned have, copy; + + if (strm->avail_in == 0) + strm->next_in = state->in; + have = (unsigned)((strm->next_in + strm->avail_in) - state->in); + copy = state->size - have; + if (copy > len) + copy = len; + memcpy(state->in + have, buf, copy); + strm->avail_in += copy; + state->x.pos += copy; + buf = (const char *)buf + copy; + len -= copy; + if (len && gz_comp(state, Z_NO_FLUSH) == -1) + return 0; + } while (len); + } + else { + /* consume whatever's left in the input buffer */ + if (strm->avail_in && gz_comp(state, Z_NO_FLUSH) == -1) + return 0; + + /* directly compress user buffer to file */ + strm->avail_in = len; + strm->next_in = (z_const Bytef *)buf; + state->x.pos += len; + if (gz_comp(state, Z_NO_FLUSH) == -1) + return 0; + } + + /* input was all buffered or compressed (put will fit in int) */ + return (int)put; +} + +/* -- see zlib.h -- */ +int ZEXPORT gzputc(file, c) + gzFile file; + int c; +{ + unsigned have; + unsigned char buf[1]; + gz_statep state; + z_streamp strm; + + /* get internal structure */ + if (file == NULL) + return -1; + state = (gz_statep)file; + strm = &(state->strm); + + /* check that we're writing and that there's no error */ + if (state->mode != GZ_WRITE || state->err != Z_OK) + return -1; + + /* check for seek request */ + if (state->seek) { + state->seek = 0; + if (gz_zero(state, state->skip) == -1) + return -1; + } + + /* try writing to input buffer for speed (state->size == 0 if buffer not + initialized) */ + if (state->size) { + if (strm->avail_in == 0) + strm->next_in = state->in; + have = (unsigned)((strm->next_in + strm->avail_in) - state->in); + if (have < state->size) { + state->in[have] = c; + strm->avail_in++; + state->x.pos++; + return c & 0xff; + } + } + + /* no room in buffer or 
not initialized, use gz_write() */ + buf[0] = c; + if (gzwrite(file, buf, 1) != 1) + return -1; + return c & 0xff; +} + +/* -- see zlib.h -- */ +int ZEXPORT gzputs(file, str) + gzFile file; + const char *str; +{ + int ret; + unsigned len; + + /* write string */ + len = (unsigned)strlen(str); + ret = gzwrite(file, str, len); + return ret == 0 && len != 0 ? -1 : ret; +} + +#if defined(STDC) || defined(Z_HAVE_STDARG_H) +#include + +/* -- see zlib.h -- */ +int ZEXPORTVA gzvprintf(gzFile file, const char *format, va_list va) +{ + int size, len; + gz_statep state; + z_streamp strm; + + /* get internal structure */ + if (file == NULL) + return -1; + state = (gz_statep)file; + strm = &(state->strm); + + /* check that we're writing and that there's no error */ + if (state->mode != GZ_WRITE || state->err != Z_OK) + return 0; + + /* make sure we have some buffer space */ + if (state->size == 0 && gz_init(state) == -1) + return 0; + + /* check for seek request */ + if (state->seek) { + state->seek = 0; + if (gz_zero(state, state->skip) == -1) + return 0; + } + + /* consume whatever's left in the input buffer */ + if (strm->avail_in && gz_comp(state, Z_NO_FLUSH) == -1) + return 0; + + /* do the printf() into the input buffer, put length in len */ + size = (int)(state->size); + state->in[size - 1] = 0; +#ifdef NO_vsnprintf +# ifdef HAS_vsprintf_void + (void)vsprintf((char *)(state->in), format, va); + for (len = 0; len < size; len++) + if (state->in[len] == 0) break; +# else + len = vsprintf((char *)(state->in), format, va); +# endif +#else +# ifdef HAS_vsnprintf_void + (void)vsnprintf((char *)(state->in), size, format, va); + len = strlen((char *)(state->in)); +# else + len = vsnprintf((char *)(state->in), size, format, va); +# endif +#endif + + /* check that printf() results fit in buffer */ + if (len <= 0 || len >= (int)size || state->in[size - 1] != 0) + return 0; + + /* update buffer and position, defer compression until needed */ + strm->avail_in = (unsigned)len; + strm->next_in = state->in; + state->x.pos += len; + return len; +} + +int ZEXPORTVA gzprintf(gzFile file, const char *format, ...) 
+{ + va_list va; + int ret; + + va_start(va, format); + ret = gzvprintf(file, format, va); + va_end(va); + return ret; +} + +#else /* !STDC && !Z_HAVE_STDARG_H */ + +/* -- see zlib.h -- */ +int ZEXPORTVA gzprintf (file, format, a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, + a11, a12, a13, a14, a15, a16, a17, a18, a19, a20) + gzFile file; + const char *format; + int a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, + a11, a12, a13, a14, a15, a16, a17, a18, a19, a20; +{ + int size, len; + gz_statep state; + z_streamp strm; + + /* get internal structure */ + if (file == NULL) + return -1; + state = (gz_statep)file; + strm = &(state->strm); + + /* check that can really pass pointer in ints */ + if (sizeof(int) != sizeof(void *)) + return 0; + + /* check that we're writing and that there's no error */ + if (state->mode != GZ_WRITE || state->err != Z_OK) + return 0; + + /* make sure we have some buffer space */ + if (state->size == 0 && gz_init(state) == -1) + return 0; + + /* check for seek request */ + if (state->seek) { + state->seek = 0; + if (gz_zero(state, state->skip) == -1) + return 0; + } + + /* consume whatever's left in the input buffer */ + if (strm->avail_in && gz_comp(state, Z_NO_FLUSH) == -1) + return 0; + + /* do the printf() into the input buffer, put length in len */ + size = (int)(state->size); + state->in[size - 1] = 0; +#ifdef NO_snprintf +# ifdef HAS_sprintf_void + sprintf((char *)(state->in), format, a1, a2, a3, a4, a5, a6, a7, a8, + a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20); + for (len = 0; len < size; len++) + if (state->in[len] == 0) break; +# else + len = sprintf((char *)(state->in), format, a1, a2, a3, a4, a5, a6, a7, a8, + a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20); +# endif +#else +# ifdef HAS_snprintf_void + snprintf((char *)(state->in), size, format, a1, a2, a3, a4, a5, a6, a7, a8, + a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20); + len = strlen((char *)(state->in)); +# else + len = snprintf((char *)(state->in), size, format, a1, a2, a3, a4, a5, a6, + a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, + a19, a20); +# endif +#endif + + /* check that printf() results fit in buffer */ + if (len <= 0 || len >= (int)size || state->in[size - 1] != 0) + return 0; + + /* update buffer and position, defer compression until needed */ + strm->avail_in = (unsigned)len; + strm->next_in = state->in; + state->x.pos += len; + return len; +} + +#endif + +/* -- see zlib.h -- */ +int ZEXPORT gzflush(file, flush) + gzFile file; + int flush; +{ + gz_statep state; + + /* get internal structure */ + if (file == NULL) + return -1; + state = (gz_statep)file; + + /* check that we're writing and that there's no error */ + if (state->mode != GZ_WRITE || state->err != Z_OK) + return Z_STREAM_ERROR; + + /* check flush parameter */ + if (flush < 0 || flush > Z_FINISH) + return Z_STREAM_ERROR; + + /* check for seek request */ + if (state->seek) { + state->seek = 0; + if (gz_zero(state, state->skip) == -1) + return -1; + } + + /* compress remaining data with requested flush */ + gz_comp(state, flush); + return state->err; +} + +/* -- see zlib.h -- */ +int ZEXPORT gzsetparams(file, level, strategy) + gzFile file; + int level; + int strategy; +{ + gz_statep state; + z_streamp strm; + + /* get internal structure */ + if (file == NULL) + return Z_STREAM_ERROR; + state = (gz_statep)file; + strm = &(state->strm); + + /* check that we're writing and that there's no error */ + if (state->mode != GZ_WRITE || state->err != Z_OK) + return Z_STREAM_ERROR; + + /* if no 
change is requested, then do nothing */ + if (level == state->level && strategy == state->strategy) + return Z_OK; + + /* check for seek request */ + if (state->seek) { + state->seek = 0; + if (gz_zero(state, state->skip) == -1) + return -1; + } + + /* change compression parameters for subsequent input */ + if (state->size) { + /* flush previous input with previous parameters before changing */ + if (strm->avail_in && gz_comp(state, Z_PARTIAL_FLUSH) == -1) + return state->err; + deflateParams(strm, level, strategy); + } + state->level = level; + state->strategy = strategy; + return Z_OK; +} + +/* -- see zlib.h -- */ +int ZEXPORT gzclose_w(file) + gzFile file; +{ + int ret = Z_OK; + gz_statep state; + + /* get internal structure */ + if (file == NULL) + return Z_STREAM_ERROR; + state = (gz_statep)file; + + /* check that we're writing */ + if (state->mode != GZ_WRITE) + return Z_STREAM_ERROR; + + /* check for seek request */ + if (state->seek) { + state->seek = 0; + if (gz_zero(state, state->skip) == -1) + ret = state->err; + } + + /* flush, free memory, and close file */ + if (gz_comp(state, Z_FINISH) == -1) + ret = state->err; + if (state->size) { + if (!state->direct) { + (void)deflateEnd(&(state->strm)); + free(state->out); + } + free(state->in); + } + gz_error(state, Z_OK, NULL); + free(state->path); + if (close(state->fd) == -1) + ret = Z_ERRNO; + free(state); + return ret; +} ADDED compat/zlib/infback.c Index: compat/zlib/infback.c ================================================================== --- compat/zlib/infback.c +++ compat/zlib/infback.c @@ -0,0 +1,640 @@ +/* infback.c -- inflate using a call-back interface + * Copyright (C) 1995-2011 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* + This code is largely copied from inflate.c. Normally either infback.o or + inflate.o would be linked into an application--not both. The interface + with inffast.c is retained so that optimized assembler-coded versions of + inflate_fast() can be used with either inflate.c or infback.c. + */ + +#include "zutil.h" +#include "inftrees.h" +#include "inflate.h" +#include "inffast.h" + +/* function prototypes */ +local void fixedtables OF((struct inflate_state FAR *state)); + +/* + strm provides memory allocation functions in zalloc and zfree, or + Z_NULL to use the library memory allocation functions. + + windowBits is in the range 8..15, and window is a user-supplied + window and output buffer that is 2**windowBits bytes. 
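+
+   [Editor's note -- illustrative sketch, not part of upstream zlib: a
+   minimal initialization consistent with the two requirements stated
+   above (windowBits in the range 8..15 and a caller-supplied buffer of
+   2**windowBits bytes); inflateBackInit() is assumed here to be the
+   zlib.h convenience macro that expands to inflateBackInit_():
+
+      z_stream strm;
+      static unsigned char window[1 << 15];   /* 32K, matching windowBits 15 */
+
+      strm.zalloc = Z_NULL;                   /* use zlib's own allocator */
+      strm.zfree = Z_NULL;
+      strm.opaque = Z_NULL;
+      if (inflateBackInit(&strm, 15, window) != Z_OK)
+          ... handle Z_MEM_ERROR, Z_STREAM_ERROR, or Z_VERSION_ERROR ...
+   ]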
+ */ +int ZEXPORT inflateBackInit_(strm, windowBits, window, version, stream_size) +z_streamp strm; +int windowBits; +unsigned char FAR *window; +const char *version; +int stream_size; +{ + struct inflate_state FAR *state; + + if (version == Z_NULL || version[0] != ZLIB_VERSION[0] || + stream_size != (int)(sizeof(z_stream))) + return Z_VERSION_ERROR; + if (strm == Z_NULL || window == Z_NULL || + windowBits < 8 || windowBits > 15) + return Z_STREAM_ERROR; + strm->msg = Z_NULL; /* in case we return an error */ + if (strm->zalloc == (alloc_func)0) { +#ifdef Z_SOLO + return Z_STREAM_ERROR; +#else + strm->zalloc = zcalloc; + strm->opaque = (voidpf)0; +#endif + } + if (strm->zfree == (free_func)0) +#ifdef Z_SOLO + return Z_STREAM_ERROR; +#else + strm->zfree = zcfree; +#endif + state = (struct inflate_state FAR *)ZALLOC(strm, 1, + sizeof(struct inflate_state)); + if (state == Z_NULL) return Z_MEM_ERROR; + Tracev((stderr, "inflate: allocated\n")); + strm->state = (struct internal_state FAR *)state; + state->dmax = 32768U; + state->wbits = windowBits; + state->wsize = 1U << windowBits; + state->window = window; + state->wnext = 0; + state->whave = 0; + return Z_OK; +} + +/* + Return state with length and distance decoding tables and index sizes set to + fixed code decoding. Normally this returns fixed tables from inffixed.h. + If BUILDFIXED is defined, then instead this routine builds the tables the + first time it's called, and returns those tables the first time and + thereafter. This reduces the size of the code by about 2K bytes, in + exchange for a little execution time. However, BUILDFIXED should not be + used for threaded applications, since the rewriting of the tables and virgin + may not be thread-safe. + */ +local void fixedtables(state) +struct inflate_state FAR *state; +{ +#ifdef BUILDFIXED + static int virgin = 1; + static code *lenfix, *distfix; + static code fixed[544]; + + /* build fixed huffman tables if first call (may not be thread safe) */ + if (virgin) { + unsigned sym, bits; + static code *next; + + /* literal/length table */ + sym = 0; + while (sym < 144) state->lens[sym++] = 8; + while (sym < 256) state->lens[sym++] = 9; + while (sym < 280) state->lens[sym++] = 7; + while (sym < 288) state->lens[sym++] = 8; + next = fixed; + lenfix = next; + bits = 9; + inflate_table(LENS, state->lens, 288, &(next), &(bits), state->work); + + /* distance table */ + sym = 0; + while (sym < 32) state->lens[sym++] = 5; + distfix = next; + bits = 5; + inflate_table(DISTS, state->lens, 32, &(next), &(bits), state->work); + + /* do this just once */ + virgin = 0; + } +#else /* !BUILDFIXED */ +# include "inffixed.h" +#endif /* BUILDFIXED */ + state->lencode = lenfix; + state->lenbits = 9; + state->distcode = distfix; + state->distbits = 5; +} + +/* Macros for inflateBack(): */ + +/* Load returned state from inflate_fast() */ +#define LOAD() \ + do { \ + put = strm->next_out; \ + left = strm->avail_out; \ + next = strm->next_in; \ + have = strm->avail_in; \ + hold = state->hold; \ + bits = state->bits; \ + } while (0) + +/* Set state from registers for inflate_fast() */ +#define RESTORE() \ + do { \ + strm->next_out = put; \ + strm->avail_out = left; \ + strm->next_in = next; \ + strm->avail_in = have; \ + state->hold = hold; \ + state->bits = bits; \ + } while (0) + +/* Clear the input bit accumulator */ +#define INITBITS() \ + do { \ + hold = 0; \ + bits = 0; \ + } while (0) + +/* Assure that some input is available. 
If input is requested, but denied, + then return a Z_BUF_ERROR from inflateBack(). */ +#define PULL() \ + do { \ + if (have == 0) { \ + have = in(in_desc, &next); \ + if (have == 0) { \ + next = Z_NULL; \ + ret = Z_BUF_ERROR; \ + goto inf_leave; \ + } \ + } \ + } while (0) + +/* Get a byte of input into the bit accumulator, or return from inflateBack() + with an error if there is no input available. */ +#define PULLBYTE() \ + do { \ + PULL(); \ + have--; \ + hold += (unsigned long)(*next++) << bits; \ + bits += 8; \ + } while (0) + +/* Assure that there are at least n bits in the bit accumulator. If there is + not enough available input to do that, then return from inflateBack() with + an error. */ +#define NEEDBITS(n) \ + do { \ + while (bits < (unsigned)(n)) \ + PULLBYTE(); \ + } while (0) + +/* Return the low n bits of the bit accumulator (n < 16) */ +#define BITS(n) \ + ((unsigned)hold & ((1U << (n)) - 1)) + +/* Remove n bits from the bit accumulator */ +#define DROPBITS(n) \ + do { \ + hold >>= (n); \ + bits -= (unsigned)(n); \ + } while (0) + +/* Remove zero to seven bits as needed to go to a byte boundary */ +#define BYTEBITS() \ + do { \ + hold >>= bits & 7; \ + bits -= bits & 7; \ + } while (0) + +/* Assure that some output space is available, by writing out the window + if it's full. If the write fails, return from inflateBack() with a + Z_BUF_ERROR. */ +#define ROOM() \ + do { \ + if (left == 0) { \ + put = state->window; \ + left = state->wsize; \ + state->whave = left; \ + if (out(out_desc, put, left)) { \ + ret = Z_BUF_ERROR; \ + goto inf_leave; \ + } \ + } \ + } while (0) + +/* + strm provides the memory allocation functions and window buffer on input, + and provides information on the unused input on return. For Z_DATA_ERROR + returns, strm will also provide an error message. + + in() and out() are the call-back input and output functions. When + inflateBack() needs more input, it calls in(). When inflateBack() has + filled the window with output, or when it completes with data in the + window, it calls out() to write out the data. The application must not + change the provided input until in() is called again or inflateBack() + returns. The application must not change the window/output buffer until + inflateBack() returns. + + in() and out() are called with a descriptor parameter provided in the + inflateBack() call. This parameter can be a structure that provides the + information required to do the read or write, as well as accumulated + information on the input and output such as totals and check values. + + in() should return zero on failure. out() should return non-zero on + failure. If either in() or out() fails, than inflateBack() returns a + Z_BUF_ERROR. strm->next_in can be checked for Z_NULL to see whether it + was in() or out() that caused in the error. Otherwise, inflateBack() + returns Z_STREAM_END on success, Z_DATA_ERROR for an deflate format + error, or Z_MEM_ERROR if it could not allocate memory for the state. + inflateBack() can also return Z_STREAM_ERROR if the input parameters + are not correct, i.e. strm is Z_NULL or the state was not initialized. 
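+
+   [Editor's note -- illustrative sketch, not part of upstream zlib: one
+   plausible way to satisfy the callback contract described above, pumping
+   a raw deflate stream from stdin to stdout after the inflateBackInit()
+   call sketched earlier; the names read_cb, write_cb, and CHUNK are
+   hypothetical and <stdio.h> is assumed to be included:
+
+      #define CHUNK 16384
+      static unsigned char hold[CHUNK];
+
+      static unsigned read_cb(void *desc, z_const unsigned char **buf)
+      {
+          *buf = hold;             /* hand inflateBack() the next input chunk */
+          return (unsigned)fread(hold, 1, CHUNK, (FILE *)desc);  /* 0 == no more input */
+      }
+
+      static int write_cb(void *desc, unsigned char *buf, unsigned len)
+      {
+          /* a non-zero return reports a write failure to inflateBack() */
+          return fwrite(buf, 1, len, (FILE *)desc) != len;
+      }
+
+      ...
+      strm.next_in = Z_NULL;       /* no initial input provided up front */
+      strm.avail_in = 0;
+      ret = inflateBack(&strm, read_cb, stdin, write_cb, stdout);
+      (void)inflateBackEnd(&strm);
+   ]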
+ */ +int ZEXPORT inflateBack(strm, in, in_desc, out, out_desc) +z_streamp strm; +in_func in; +void FAR *in_desc; +out_func out; +void FAR *out_desc; +{ + struct inflate_state FAR *state; + z_const unsigned char FAR *next; /* next input */ + unsigned char FAR *put; /* next output */ + unsigned have, left; /* available input and output */ + unsigned long hold; /* bit buffer */ + unsigned bits; /* bits in bit buffer */ + unsigned copy; /* number of stored or match bytes to copy */ + unsigned char FAR *from; /* where to copy match bytes from */ + code here; /* current decoding table entry */ + code last; /* parent table entry */ + unsigned len; /* length to copy for repeats, bits to drop */ + int ret; /* return code */ + static const unsigned short order[19] = /* permutation of code lengths */ + {16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15}; + + /* Check that the strm exists and that the state was initialized */ + if (strm == Z_NULL || strm->state == Z_NULL) + return Z_STREAM_ERROR; + state = (struct inflate_state FAR *)strm->state; + + /* Reset the state */ + strm->msg = Z_NULL; + state->mode = TYPE; + state->last = 0; + state->whave = 0; + next = strm->next_in; + have = next != Z_NULL ? strm->avail_in : 0; + hold = 0; + bits = 0; + put = state->window; + left = state->wsize; + + /* Inflate until end of block marked as last */ + for (;;) + switch (state->mode) { + case TYPE: + /* determine and dispatch block type */ + if (state->last) { + BYTEBITS(); + state->mode = DONE; + break; + } + NEEDBITS(3); + state->last = BITS(1); + DROPBITS(1); + switch (BITS(2)) { + case 0: /* stored block */ + Tracev((stderr, "inflate: stored block%s\n", + state->last ? " (last)" : "")); + state->mode = STORED; + break; + case 1: /* fixed block */ + fixedtables(state); + Tracev((stderr, "inflate: fixed codes block%s\n", + state->last ? " (last)" : "")); + state->mode = LEN; /* decode codes */ + break; + case 2: /* dynamic block */ + Tracev((stderr, "inflate: dynamic codes block%s\n", + state->last ? 
" (last)" : "")); + state->mode = TABLE; + break; + case 3: + strm->msg = (char *)"invalid block type"; + state->mode = BAD; + } + DROPBITS(2); + break; + + case STORED: + /* get and verify stored block length */ + BYTEBITS(); /* go to byte boundary */ + NEEDBITS(32); + if ((hold & 0xffff) != ((hold >> 16) ^ 0xffff)) { + strm->msg = (char *)"invalid stored block lengths"; + state->mode = BAD; + break; + } + state->length = (unsigned)hold & 0xffff; + Tracev((stderr, "inflate: stored length %u\n", + state->length)); + INITBITS(); + + /* copy stored block from input to output */ + while (state->length != 0) { + copy = state->length; + PULL(); + ROOM(); + if (copy > have) copy = have; + if (copy > left) copy = left; + zmemcpy(put, next, copy); + have -= copy; + next += copy; + left -= copy; + put += copy; + state->length -= copy; + } + Tracev((stderr, "inflate: stored end\n")); + state->mode = TYPE; + break; + + case TABLE: + /* get dynamic table entries descriptor */ + NEEDBITS(14); + state->nlen = BITS(5) + 257; + DROPBITS(5); + state->ndist = BITS(5) + 1; + DROPBITS(5); + state->ncode = BITS(4) + 4; + DROPBITS(4); +#ifndef PKZIP_BUG_WORKAROUND + if (state->nlen > 286 || state->ndist > 30) { + strm->msg = (char *)"too many length or distance symbols"; + state->mode = BAD; + break; + } +#endif + Tracev((stderr, "inflate: table sizes ok\n")); + + /* get code length code lengths (not a typo) */ + state->have = 0; + while (state->have < state->ncode) { + NEEDBITS(3); + state->lens[order[state->have++]] = (unsigned short)BITS(3); + DROPBITS(3); + } + while (state->have < 19) + state->lens[order[state->have++]] = 0; + state->next = state->codes; + state->lencode = (code const FAR *)(state->next); + state->lenbits = 7; + ret = inflate_table(CODES, state->lens, 19, &(state->next), + &(state->lenbits), state->work); + if (ret) { + strm->msg = (char *)"invalid code lengths set"; + state->mode = BAD; + break; + } + Tracev((stderr, "inflate: code lengths ok\n")); + + /* get length and distance code code lengths */ + state->have = 0; + while (state->have < state->nlen + state->ndist) { + for (;;) { + here = state->lencode[BITS(state->lenbits)]; + if ((unsigned)(here.bits) <= bits) break; + PULLBYTE(); + } + if (here.val < 16) { + DROPBITS(here.bits); + state->lens[state->have++] = here.val; + } + else { + if (here.val == 16) { + NEEDBITS(here.bits + 2); + DROPBITS(here.bits); + if (state->have == 0) { + strm->msg = (char *)"invalid bit length repeat"; + state->mode = BAD; + break; + } + len = (unsigned)(state->lens[state->have - 1]); + copy = 3 + BITS(2); + DROPBITS(2); + } + else if (here.val == 17) { + NEEDBITS(here.bits + 3); + DROPBITS(here.bits); + len = 0; + copy = 3 + BITS(3); + DROPBITS(3); + } + else { + NEEDBITS(here.bits + 7); + DROPBITS(here.bits); + len = 0; + copy = 11 + BITS(7); + DROPBITS(7); + } + if (state->have + copy > state->nlen + state->ndist) { + strm->msg = (char *)"invalid bit length repeat"; + state->mode = BAD; + break; + } + while (copy--) + state->lens[state->have++] = (unsigned short)len; + } + } + + /* handle error breaks in while */ + if (state->mode == BAD) break; + + /* check for end-of-block code (better have one) */ + if (state->lens[256] == 0) { + strm->msg = (char *)"invalid code -- missing end-of-block"; + state->mode = BAD; + break; + } + + /* build code tables -- note: do not change the lenbits or distbits + values here (9 and 6) without reading the comments in inftrees.h + concerning the ENOUGH constants, which depend on those values */ + state->next = 
state->codes; + state->lencode = (code const FAR *)(state->next); + state->lenbits = 9; + ret = inflate_table(LENS, state->lens, state->nlen, &(state->next), + &(state->lenbits), state->work); + if (ret) { + strm->msg = (char *)"invalid literal/lengths set"; + state->mode = BAD; + break; + } + state->distcode = (code const FAR *)(state->next); + state->distbits = 6; + ret = inflate_table(DISTS, state->lens + state->nlen, state->ndist, + &(state->next), &(state->distbits), state->work); + if (ret) { + strm->msg = (char *)"invalid distances set"; + state->mode = BAD; + break; + } + Tracev((stderr, "inflate: codes ok\n")); + state->mode = LEN; + + case LEN: + /* use inflate_fast() if we have enough input and output */ + if (have >= 6 && left >= 258) { + RESTORE(); + if (state->whave < state->wsize) + state->whave = state->wsize - left; + inflate_fast(strm, state->wsize); + LOAD(); + break; + } + + /* get a literal, length, or end-of-block code */ + for (;;) { + here = state->lencode[BITS(state->lenbits)]; + if ((unsigned)(here.bits) <= bits) break; + PULLBYTE(); + } + if (here.op && (here.op & 0xf0) == 0) { + last = here; + for (;;) { + here = state->lencode[last.val + + (BITS(last.bits + last.op) >> last.bits)]; + if ((unsigned)(last.bits + here.bits) <= bits) break; + PULLBYTE(); + } + DROPBITS(last.bits); + } + DROPBITS(here.bits); + state->length = (unsigned)here.val; + + /* process literal */ + if (here.op == 0) { + Tracevv((stderr, here.val >= 0x20 && here.val < 0x7f ? + "inflate: literal '%c'\n" : + "inflate: literal 0x%02x\n", here.val)); + ROOM(); + *put++ = (unsigned char)(state->length); + left--; + state->mode = LEN; + break; + } + + /* process end of block */ + if (here.op & 32) { + Tracevv((stderr, "inflate: end of block\n")); + state->mode = TYPE; + break; + } + + /* invalid code */ + if (here.op & 64) { + strm->msg = (char *)"invalid literal/length code"; + state->mode = BAD; + break; + } + + /* length code -- get extra bits, if any */ + state->extra = (unsigned)(here.op) & 15; + if (state->extra != 0) { + NEEDBITS(state->extra); + state->length += BITS(state->extra); + DROPBITS(state->extra); + } + Tracevv((stderr, "inflate: length %u\n", state->length)); + + /* get distance code */ + for (;;) { + here = state->distcode[BITS(state->distbits)]; + if ((unsigned)(here.bits) <= bits) break; + PULLBYTE(); + } + if ((here.op & 0xf0) == 0) { + last = here; + for (;;) { + here = state->distcode[last.val + + (BITS(last.bits + last.op) >> last.bits)]; + if ((unsigned)(last.bits + here.bits) <= bits) break; + PULLBYTE(); + } + DROPBITS(last.bits); + } + DROPBITS(here.bits); + if (here.op & 64) { + strm->msg = (char *)"invalid distance code"; + state->mode = BAD; + break; + } + state->offset = (unsigned)here.val; + + /* get distance extra bits, if any */ + state->extra = (unsigned)(here.op) & 15; + if (state->extra != 0) { + NEEDBITS(state->extra); + state->offset += BITS(state->extra); + DROPBITS(state->extra); + } + if (state->offset > state->wsize - (state->whave < state->wsize ? 
+ left : 0)) { + strm->msg = (char *)"invalid distance too far back"; + state->mode = BAD; + break; + } + Tracevv((stderr, "inflate: distance %u\n", state->offset)); + + /* copy match from window to output */ + do { + ROOM(); + copy = state->wsize - state->offset; + if (copy < left) { + from = put + copy; + copy = left - copy; + } + else { + from = put - state->offset; + copy = left; + } + if (copy > state->length) copy = state->length; + state->length -= copy; + left -= copy; + do { + *put++ = *from++; + } while (--copy); + } while (state->length != 0); + break; + + case DONE: + /* inflate stream terminated properly -- write leftover output */ + ret = Z_STREAM_END; + if (left < state->wsize) { + if (out(out_desc, state->window, state->wsize - left)) + ret = Z_BUF_ERROR; + } + goto inf_leave; + + case BAD: + ret = Z_DATA_ERROR; + goto inf_leave; + + default: /* can't happen, but makes compilers happy */ + ret = Z_STREAM_ERROR; + goto inf_leave; + } + + /* Return unused input */ + inf_leave: + strm->next_in = next; + strm->avail_in = have; + return ret; +} + +int ZEXPORT inflateBackEnd(strm) +z_streamp strm; +{ + if (strm == Z_NULL || strm->state == Z_NULL || strm->zfree == (free_func)0) + return Z_STREAM_ERROR; + ZFREE(strm, strm->state); + strm->state = Z_NULL; + Tracev((stderr, "inflate: end\n")); + return Z_OK; +} ADDED compat/zlib/inffast.c Index: compat/zlib/inffast.c ================================================================== --- compat/zlib/inffast.c +++ compat/zlib/inffast.c @@ -0,0 +1,340 @@ +/* inffast.c -- fast decoding + * Copyright (C) 1995-2008, 2010, 2013 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +#include "zutil.h" +#include "inftrees.h" +#include "inflate.h" +#include "inffast.h" + +#ifndef ASMINF + +/* Allow machine dependent optimization for post-increment or pre-increment. + Based on testing to date, + Pre-increment preferred for: + - PowerPC G3 (Adler) + - MIPS R5000 (Randers-Pehrson) + Post-increment preferred for: + - none + No measurable difference: + - Pentium III (Anderson) + - M68060 (Nikl) + */ +#ifdef POSTINC +# define OFF 0 +# define PUP(a) *(a)++ +#else +# define OFF 1 +# define PUP(a) *++(a) +#endif + +/* + Decode literal, length, and distance codes and write out the resulting + literal and match bytes until either not enough input or output is + available, an end-of-block is encountered, or a data error is encountered. + When large enough input and output buffers are supplied to inflate(), for + example, a 16K input buffer and a 64K output buffer, more than 95% of the + inflate execution time is spent in this routine. + + Entry assumptions: + + state->mode == LEN + strm->avail_in >= 6 + strm->avail_out >= 258 + start >= strm->avail_out + state->bits < 8 + + On return, state->mode is one of: + + LEN -- ran out of enough output space or enough available input + TYPE -- reached end of block code, inflate() to interpret next block + BAD -- error in block data + + Notes: + + - The maximum input bits used by a length/distance pair is 15 bits for the + length code, 5 bits for the length extra, 15 bits for the distance code, + and 13 bits for the distance extra. This totals 48 bits, or six bytes. + Therefore if strm->avail_in >= 6, then there is enough input to avoid + checking for available input while decoding. + + - The maximum bytes that a single length/distance pair can output is 258 + bytes, which is the maximum length that can be coded. 
inflate_fast() + requires strm->avail_out >= 258 for each loop to avoid checking for + output space. + */ +void ZLIB_INTERNAL inflate_fast(strm, start) +z_streamp strm; +unsigned start; /* inflate()'s starting value for strm->avail_out */ +{ + struct inflate_state FAR *state; + z_const unsigned char FAR *in; /* local strm->next_in */ + z_const unsigned char FAR *last; /* have enough input while in < last */ + unsigned char FAR *out; /* local strm->next_out */ + unsigned char FAR *beg; /* inflate()'s initial strm->next_out */ + unsigned char FAR *end; /* while out < end, enough space available */ +#ifdef INFLATE_STRICT + unsigned dmax; /* maximum distance from zlib header */ +#endif + unsigned wsize; /* window size or zero if not using window */ + unsigned whave; /* valid bytes in the window */ + unsigned wnext; /* window write index */ + unsigned char FAR *window; /* allocated sliding window, if wsize != 0 */ + unsigned long hold; /* local strm->hold */ + unsigned bits; /* local strm->bits */ + code const FAR *lcode; /* local strm->lencode */ + code const FAR *dcode; /* local strm->distcode */ + unsigned lmask; /* mask for first level of length codes */ + unsigned dmask; /* mask for first level of distance codes */ + code here; /* retrieved table entry */ + unsigned op; /* code bits, operation, extra bits, or */ + /* window position, window bytes to copy */ + unsigned len; /* match length, unused bytes */ + unsigned dist; /* match distance */ + unsigned char FAR *from; /* where to copy match from */ + + /* copy state to local variables */ + state = (struct inflate_state FAR *)strm->state; + in = strm->next_in - OFF; + last = in + (strm->avail_in - 5); + out = strm->next_out - OFF; + beg = out - (start - strm->avail_out); + end = out + (strm->avail_out - 257); +#ifdef INFLATE_STRICT + dmax = state->dmax; +#endif + wsize = state->wsize; + whave = state->whave; + wnext = state->wnext; + window = state->window; + hold = state->hold; + bits = state->bits; + lcode = state->lencode; + dcode = state->distcode; + lmask = (1U << state->lenbits) - 1; + dmask = (1U << state->distbits) - 1; + + /* decode literals and length/distances until end-of-block or not enough + input data or output space */ + do { + if (bits < 15) { + hold += (unsigned long)(PUP(in)) << bits; + bits += 8; + hold += (unsigned long)(PUP(in)) << bits; + bits += 8; + } + here = lcode[hold & lmask]; + dolen: + op = (unsigned)(here.bits); + hold >>= op; + bits -= op; + op = (unsigned)(here.op); + if (op == 0) { /* literal */ + Tracevv((stderr, here.val >= 0x20 && here.val < 0x7f ? 
+ "inflate: literal '%c'\n" : + "inflate: literal 0x%02x\n", here.val)); + PUP(out) = (unsigned char)(here.val); + } + else if (op & 16) { /* length base */ + len = (unsigned)(here.val); + op &= 15; /* number of extra bits */ + if (op) { + if (bits < op) { + hold += (unsigned long)(PUP(in)) << bits; + bits += 8; + } + len += (unsigned)hold & ((1U << op) - 1); + hold >>= op; + bits -= op; + } + Tracevv((stderr, "inflate: length %u\n", len)); + if (bits < 15) { + hold += (unsigned long)(PUP(in)) << bits; + bits += 8; + hold += (unsigned long)(PUP(in)) << bits; + bits += 8; + } + here = dcode[hold & dmask]; + dodist: + op = (unsigned)(here.bits); + hold >>= op; + bits -= op; + op = (unsigned)(here.op); + if (op & 16) { /* distance base */ + dist = (unsigned)(here.val); + op &= 15; /* number of extra bits */ + if (bits < op) { + hold += (unsigned long)(PUP(in)) << bits; + bits += 8; + if (bits < op) { + hold += (unsigned long)(PUP(in)) << bits; + bits += 8; + } + } + dist += (unsigned)hold & ((1U << op) - 1); +#ifdef INFLATE_STRICT + if (dist > dmax) { + strm->msg = (char *)"invalid distance too far back"; + state->mode = BAD; + break; + } +#endif + hold >>= op; + bits -= op; + Tracevv((stderr, "inflate: distance %u\n", dist)); + op = (unsigned)(out - beg); /* max distance in output */ + if (dist > op) { /* see if copy from window */ + op = dist - op; /* distance back in window */ + if (op > whave) { + if (state->sane) { + strm->msg = + (char *)"invalid distance too far back"; + state->mode = BAD; + break; + } +#ifdef INFLATE_ALLOW_INVALID_DISTANCE_TOOFAR_ARRR + if (len <= op - whave) { + do { + PUP(out) = 0; + } while (--len); + continue; + } + len -= op - whave; + do { + PUP(out) = 0; + } while (--op > whave); + if (op == 0) { + from = out - dist; + do { + PUP(out) = PUP(from); + } while (--len); + continue; + } +#endif + } + from = window - OFF; + if (wnext == 0) { /* very common case */ + from += wsize - op; + if (op < len) { /* some from window */ + len -= op; + do { + PUP(out) = PUP(from); + } while (--op); + from = out - dist; /* rest from output */ + } + } + else if (wnext < op) { /* wrap around window */ + from += wsize + wnext - op; + op -= wnext; + if (op < len) { /* some from end of window */ + len -= op; + do { + PUP(out) = PUP(from); + } while (--op); + from = window - OFF; + if (wnext < len) { /* some from start of window */ + op = wnext; + len -= op; + do { + PUP(out) = PUP(from); + } while (--op); + from = out - dist; /* rest from output */ + } + } + } + else { /* contiguous in window */ + from += wnext - op; + if (op < len) { /* some from window */ + len -= op; + do { + PUP(out) = PUP(from); + } while (--op); + from = out - dist; /* rest from output */ + } + } + while (len > 2) { + PUP(out) = PUP(from); + PUP(out) = PUP(from); + PUP(out) = PUP(from); + len -= 3; + } + if (len) { + PUP(out) = PUP(from); + if (len > 1) + PUP(out) = PUP(from); + } + } + else { + from = out - dist; /* copy direct from output */ + do { /* minimum length is three */ + PUP(out) = PUP(from); + PUP(out) = PUP(from); + PUP(out) = PUP(from); + len -= 3; + } while (len > 2); + if (len) { + PUP(out) = PUP(from); + if (len > 1) + PUP(out) = PUP(from); + } + } + } + else if ((op & 64) == 0) { /* 2nd level distance code */ + here = dcode[here.val + (hold & ((1U << op) - 1))]; + goto dodist; + } + else { + strm->msg = (char *)"invalid distance code"; + state->mode = BAD; + break; + } + } + else if ((op & 64) == 0) { /* 2nd level length code */ + here = lcode[here.val + (hold & ((1U << op) - 1))]; + goto 
dolen; + } + else if (op & 32) { /* end-of-block */ + Tracevv((stderr, "inflate: end of block\n")); + state->mode = TYPE; + break; + } + else { + strm->msg = (char *)"invalid literal/length code"; + state->mode = BAD; + break; + } + } while (in < last && out < end); + + /* return unused bytes (on entry, bits < 8, so in won't go too far back) */ + len = bits >> 3; + in -= len; + bits -= len << 3; + hold &= (1U << bits) - 1; + + /* update state and return */ + strm->next_in = in + OFF; + strm->next_out = out + OFF; + strm->avail_in = (unsigned)(in < last ? 5 + (last - in) : 5 - (in - last)); + strm->avail_out = (unsigned)(out < end ? + 257 + (end - out) : 257 - (out - end)); + state->hold = hold; + state->bits = bits; + return; +} + +/* + inflate_fast() speedups that turned out slower (on a PowerPC G3 750CXe): + - Using bit fields for code structure + - Different op definition to avoid & for extra bits (do & for table bits) + - Three separate decoding do-loops for direct, window, and wnext == 0 + - Special case for distance > 1 copies to do overlapped load and store copy + - Explicit branch predictions (based on measured branch probabilities) + - Deferring match copy and interspersed it with decoding subsequent codes + - Swapping literal/length else + - Swapping window/direct else + - Larger unrolled copy loops (three is about right) + - Moving len -= 3 statement into middle of loop + */ + +#endif /* !ASMINF */ ADDED compat/zlib/inffast.h Index: compat/zlib/inffast.h ================================================================== --- compat/zlib/inffast.h +++ compat/zlib/inffast.h @@ -0,0 +1,11 @@ +/* inffast.h -- header to use inffast.c + * Copyright (C) 1995-2003, 2010 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* WARNING: this file should *not* be used by applications. It is + part of the implementation of the compression library and is + subject to change. Applications should only use zlib.h. + */ + +void ZLIB_INTERNAL inflate_fast OF((z_streamp strm, unsigned start)); ADDED compat/zlib/inffixed.h Index: compat/zlib/inffixed.h ================================================================== --- compat/zlib/inffixed.h +++ compat/zlib/inffixed.h @@ -0,0 +1,94 @@ + /* inffixed.h -- table for decoding fixed codes + * Generated automatically by makefixed(). + */ + + /* WARNING: this file should *not* be used by applications. + It is part of the implementation of this library and is + subject to change. Applications should only use zlib.h. 
+ */ + + static const code lenfix[512] = { + {96,7,0},{0,8,80},{0,8,16},{20,8,115},{18,7,31},{0,8,112},{0,8,48}, + {0,9,192},{16,7,10},{0,8,96},{0,8,32},{0,9,160},{0,8,0},{0,8,128}, + {0,8,64},{0,9,224},{16,7,6},{0,8,88},{0,8,24},{0,9,144},{19,7,59}, + {0,8,120},{0,8,56},{0,9,208},{17,7,17},{0,8,104},{0,8,40},{0,9,176}, + {0,8,8},{0,8,136},{0,8,72},{0,9,240},{16,7,4},{0,8,84},{0,8,20}, + {21,8,227},{19,7,43},{0,8,116},{0,8,52},{0,9,200},{17,7,13},{0,8,100}, + {0,8,36},{0,9,168},{0,8,4},{0,8,132},{0,8,68},{0,9,232},{16,7,8}, + {0,8,92},{0,8,28},{0,9,152},{20,7,83},{0,8,124},{0,8,60},{0,9,216}, + {18,7,23},{0,8,108},{0,8,44},{0,9,184},{0,8,12},{0,8,140},{0,8,76}, + {0,9,248},{16,7,3},{0,8,82},{0,8,18},{21,8,163},{19,7,35},{0,8,114}, + {0,8,50},{0,9,196},{17,7,11},{0,8,98},{0,8,34},{0,9,164},{0,8,2}, + {0,8,130},{0,8,66},{0,9,228},{16,7,7},{0,8,90},{0,8,26},{0,9,148}, + {20,7,67},{0,8,122},{0,8,58},{0,9,212},{18,7,19},{0,8,106},{0,8,42}, + {0,9,180},{0,8,10},{0,8,138},{0,8,74},{0,9,244},{16,7,5},{0,8,86}, + {0,8,22},{64,8,0},{19,7,51},{0,8,118},{0,8,54},{0,9,204},{17,7,15}, + {0,8,102},{0,8,38},{0,9,172},{0,8,6},{0,8,134},{0,8,70},{0,9,236}, + {16,7,9},{0,8,94},{0,8,30},{0,9,156},{20,7,99},{0,8,126},{0,8,62}, + {0,9,220},{18,7,27},{0,8,110},{0,8,46},{0,9,188},{0,8,14},{0,8,142}, + {0,8,78},{0,9,252},{96,7,0},{0,8,81},{0,8,17},{21,8,131},{18,7,31}, + {0,8,113},{0,8,49},{0,9,194},{16,7,10},{0,8,97},{0,8,33},{0,9,162}, + {0,8,1},{0,8,129},{0,8,65},{0,9,226},{16,7,6},{0,8,89},{0,8,25}, + {0,9,146},{19,7,59},{0,8,121},{0,8,57},{0,9,210},{17,7,17},{0,8,105}, + {0,8,41},{0,9,178},{0,8,9},{0,8,137},{0,8,73},{0,9,242},{16,7,4}, + {0,8,85},{0,8,21},{16,8,258},{19,7,43},{0,8,117},{0,8,53},{0,9,202}, + {17,7,13},{0,8,101},{0,8,37},{0,9,170},{0,8,5},{0,8,133},{0,8,69}, + {0,9,234},{16,7,8},{0,8,93},{0,8,29},{0,9,154},{20,7,83},{0,8,125}, + {0,8,61},{0,9,218},{18,7,23},{0,8,109},{0,8,45},{0,9,186},{0,8,13}, + {0,8,141},{0,8,77},{0,9,250},{16,7,3},{0,8,83},{0,8,19},{21,8,195}, + {19,7,35},{0,8,115},{0,8,51},{0,9,198},{17,7,11},{0,8,99},{0,8,35}, + {0,9,166},{0,8,3},{0,8,131},{0,8,67},{0,9,230},{16,7,7},{0,8,91}, + {0,8,27},{0,9,150},{20,7,67},{0,8,123},{0,8,59},{0,9,214},{18,7,19}, + {0,8,107},{0,8,43},{0,9,182},{0,8,11},{0,8,139},{0,8,75},{0,9,246}, + {16,7,5},{0,8,87},{0,8,23},{64,8,0},{19,7,51},{0,8,119},{0,8,55}, + {0,9,206},{17,7,15},{0,8,103},{0,8,39},{0,9,174},{0,8,7},{0,8,135}, + {0,8,71},{0,9,238},{16,7,9},{0,8,95},{0,8,31},{0,9,158},{20,7,99}, + {0,8,127},{0,8,63},{0,9,222},{18,7,27},{0,8,111},{0,8,47},{0,9,190}, + {0,8,15},{0,8,143},{0,8,79},{0,9,254},{96,7,0},{0,8,80},{0,8,16}, + {20,8,115},{18,7,31},{0,8,112},{0,8,48},{0,9,193},{16,7,10},{0,8,96}, + {0,8,32},{0,9,161},{0,8,0},{0,8,128},{0,8,64},{0,9,225},{16,7,6}, + {0,8,88},{0,8,24},{0,9,145},{19,7,59},{0,8,120},{0,8,56},{0,9,209}, + {17,7,17},{0,8,104},{0,8,40},{0,9,177},{0,8,8},{0,8,136},{0,8,72}, + {0,9,241},{16,7,4},{0,8,84},{0,8,20},{21,8,227},{19,7,43},{0,8,116}, + {0,8,52},{0,9,201},{17,7,13},{0,8,100},{0,8,36},{0,9,169},{0,8,4}, + {0,8,132},{0,8,68},{0,9,233},{16,7,8},{0,8,92},{0,8,28},{0,9,153}, + {20,7,83},{0,8,124},{0,8,60},{0,9,217},{18,7,23},{0,8,108},{0,8,44}, + {0,9,185},{0,8,12},{0,8,140},{0,8,76},{0,9,249},{16,7,3},{0,8,82}, + {0,8,18},{21,8,163},{19,7,35},{0,8,114},{0,8,50},{0,9,197},{17,7,11}, + {0,8,98},{0,8,34},{0,9,165},{0,8,2},{0,8,130},{0,8,66},{0,9,229}, + {16,7,7},{0,8,90},{0,8,26},{0,9,149},{20,7,67},{0,8,122},{0,8,58}, + {0,9,213},{18,7,19},{0,8,106},{0,8,42},{0,9,181},{0,8,10},{0,8,138}, + 
{0,8,74},{0,9,245},{16,7,5},{0,8,86},{0,8,22},{64,8,0},{19,7,51}, + {0,8,118},{0,8,54},{0,9,205},{17,7,15},{0,8,102},{0,8,38},{0,9,173}, + {0,8,6},{0,8,134},{0,8,70},{0,9,237},{16,7,9},{0,8,94},{0,8,30}, + {0,9,157},{20,7,99},{0,8,126},{0,8,62},{0,9,221},{18,7,27},{0,8,110}, + {0,8,46},{0,9,189},{0,8,14},{0,8,142},{0,8,78},{0,9,253},{96,7,0}, + {0,8,81},{0,8,17},{21,8,131},{18,7,31},{0,8,113},{0,8,49},{0,9,195}, + {16,7,10},{0,8,97},{0,8,33},{0,9,163},{0,8,1},{0,8,129},{0,8,65}, + {0,9,227},{16,7,6},{0,8,89},{0,8,25},{0,9,147},{19,7,59},{0,8,121}, + {0,8,57},{0,9,211},{17,7,17},{0,8,105},{0,8,41},{0,9,179},{0,8,9}, + {0,8,137},{0,8,73},{0,9,243},{16,7,4},{0,8,85},{0,8,21},{16,8,258}, + {19,7,43},{0,8,117},{0,8,53},{0,9,203},{17,7,13},{0,8,101},{0,8,37}, + {0,9,171},{0,8,5},{0,8,133},{0,8,69},{0,9,235},{16,7,8},{0,8,93}, + {0,8,29},{0,9,155},{20,7,83},{0,8,125},{0,8,61},{0,9,219},{18,7,23}, + {0,8,109},{0,8,45},{0,9,187},{0,8,13},{0,8,141},{0,8,77},{0,9,251}, + {16,7,3},{0,8,83},{0,8,19},{21,8,195},{19,7,35},{0,8,115},{0,8,51}, + {0,9,199},{17,7,11},{0,8,99},{0,8,35},{0,9,167},{0,8,3},{0,8,131}, + {0,8,67},{0,9,231},{16,7,7},{0,8,91},{0,8,27},{0,9,151},{20,7,67}, + {0,8,123},{0,8,59},{0,9,215},{18,7,19},{0,8,107},{0,8,43},{0,9,183}, + {0,8,11},{0,8,139},{0,8,75},{0,9,247},{16,7,5},{0,8,87},{0,8,23}, + {64,8,0},{19,7,51},{0,8,119},{0,8,55},{0,9,207},{17,7,15},{0,8,103}, + {0,8,39},{0,9,175},{0,8,7},{0,8,135},{0,8,71},{0,9,239},{16,7,9}, + {0,8,95},{0,8,31},{0,9,159},{20,7,99},{0,8,127},{0,8,63},{0,9,223}, + {18,7,27},{0,8,111},{0,8,47},{0,9,191},{0,8,15},{0,8,143},{0,8,79}, + {0,9,255} + }; + + static const code distfix[32] = { + {16,5,1},{23,5,257},{19,5,17},{27,5,4097},{17,5,5},{25,5,1025}, + {21,5,65},{29,5,16385},{16,5,3},{24,5,513},{20,5,33},{28,5,8193}, + {18,5,9},{26,5,2049},{22,5,129},{64,5,0},{16,5,2},{23,5,385}, + {19,5,25},{27,5,6145},{17,5,7},{25,5,1537},{21,5,97},{29,5,24577}, + {16,5,4},{24,5,769},{20,5,49},{28,5,12289},{18,5,13},{26,5,3073}, + {22,5,193},{64,5,0} + }; ADDED compat/zlib/inflate.c Index: compat/zlib/inflate.c ================================================================== --- compat/zlib/inflate.c +++ compat/zlib/inflate.c @@ -0,0 +1,1512 @@ +/* inflate.c -- zlib decompression + * Copyright (C) 1995-2012 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* + * Change history: + * + * 1.2.beta0 24 Nov 2002 + * - First version -- complete rewrite of inflate to simplify code, avoid + * creation of window when not needed, minimize use of window when it is + * needed, make inffast.c even faster, implement gzip decoding, and to + * improve code readability and style over the previous zlib inflate code + * + * 1.2.beta1 25 Nov 2002 + * - Use pointers for available input and output checking in inffast.c + * - Remove input and output counters in inffast.c + * - Change inffast.c entry and loop from avail_in >= 7 to >= 6 + * - Remove unnecessary second byte pull from length extra in inffast.c + * - Unroll direct copy to three copies per loop in inffast.c + * + * 1.2.beta2 4 Dec 2002 + * - Change external routine names to reduce potential conflicts + * - Correct filename to inffixed.h for fixed tables in inflate.c + * - Make hbuf[] unsigned char to match parameter type in inflate.c + * - Change strm->next_out[-state->offset] to *(strm->next_out - state->offset) + * to avoid negation problem on Alphas (64 bit) in inflate.c + * + * 1.2.beta3 22 Dec 2002 + * - Add comments on state->bits assertion in inffast.c + * - Add comments 
on op field in inftrees.h + * - Fix bug in reuse of allocated window after inflateReset() + * - Remove bit fields--back to byte structure for speed + * - Remove distance extra == 0 check in inflate_fast()--only helps for lengths + * - Change post-increments to pre-increments in inflate_fast(), PPC biased? + * - Add compile time option, POSTINC, to use post-increments instead (Intel?) + * - Make MATCH copy in inflate() much faster for when inflate_fast() not used + * - Use local copies of stream next and avail values, as well as local bit + * buffer and bit count in inflate()--for speed when inflate_fast() not used + * + * 1.2.beta4 1 Jan 2003 + * - Split ptr - 257 statements in inflate_table() to avoid compiler warnings + * - Move a comment on output buffer sizes from inffast.c to inflate.c + * - Add comments in inffast.c to introduce the inflate_fast() routine + * - Rearrange window copies in inflate_fast() for speed and simplification + * - Unroll last copy for window match in inflate_fast() + * - Use local copies of window variables in inflate_fast() for speed + * - Pull out common wnext == 0 case for speed in inflate_fast() + * - Make op and len in inflate_fast() unsigned for consistency + * - Add FAR to lcode and dcode declarations in inflate_fast() + * - Simplified bad distance check in inflate_fast() + * - Added inflateBackInit(), inflateBack(), and inflateBackEnd() in new + * source file infback.c to provide a call-back interface to inflate for + * programs like gzip and unzip -- uses window as output buffer to avoid + * window copying + * + * 1.2.beta5 1 Jan 2003 + * - Improved inflateBack() interface to allow the caller to provide initial + * input in strm. + * - Fixed stored blocks bug in inflateBack() + * + * 1.2.beta6 4 Jan 2003 + * - Added comments in inffast.c on effectiveness of POSTINC + * - Typecasting all around to reduce compiler warnings + * - Changed loops from while (1) or do {} while (1) to for (;;), again to + * make compilers happy + * - Changed type of window in inflateBackInit() to unsigned char * + * + * 1.2.beta7 27 Jan 2003 + * - Changed many types to unsigned or unsigned short to avoid warnings + * - Added inflateCopy() function + * + * 1.2.0 9 Mar 2003 + * - Changed inflateBack() interface to provide separate opaque descriptors + * for the in() and out() functions + * - Changed inflateBack() argument and in_func typedef to swap the length + * and buffer address return values for the input function + * - Check next_in and next_out for Z_NULL on entry to inflate() + * + * The history for versions after 1.2.0 are in ChangeLog in zlib distribution. 
+ */ + +#include "zutil.h" +#include "inftrees.h" +#include "inflate.h" +#include "inffast.h" + +#ifdef MAKEFIXED +# ifndef BUILDFIXED +# define BUILDFIXED +# endif +#endif + +/* function prototypes */ +local void fixedtables OF((struct inflate_state FAR *state)); +local int updatewindow OF((z_streamp strm, const unsigned char FAR *end, + unsigned copy)); +#ifdef BUILDFIXED + void makefixed OF((void)); +#endif +local unsigned syncsearch OF((unsigned FAR *have, const unsigned char FAR *buf, + unsigned len)); + +int ZEXPORT inflateResetKeep(strm) +z_streamp strm; +{ + struct inflate_state FAR *state; + + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + state = (struct inflate_state FAR *)strm->state; + strm->total_in = strm->total_out = state->total = 0; + strm->msg = Z_NULL; + if (state->wrap) /* to support ill-conceived Java test suite */ + strm->adler = state->wrap & 1; + state->mode = HEAD; + state->last = 0; + state->havedict = 0; + state->dmax = 32768U; + state->head = Z_NULL; + state->hold = 0; + state->bits = 0; + state->lencode = state->distcode = state->next = state->codes; + state->sane = 1; + state->back = -1; + Tracev((stderr, "inflate: reset\n")); + return Z_OK; +} + +int ZEXPORT inflateReset(strm) +z_streamp strm; +{ + struct inflate_state FAR *state; + + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + state = (struct inflate_state FAR *)strm->state; + state->wsize = 0; + state->whave = 0; + state->wnext = 0; + return inflateResetKeep(strm); +} + +int ZEXPORT inflateReset2(strm, windowBits) +z_streamp strm; +int windowBits; +{ + int wrap; + struct inflate_state FAR *state; + + /* get the state */ + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + state = (struct inflate_state FAR *)strm->state; + + /* extract wrap request from windowBits parameter */ + if (windowBits < 0) { + wrap = 0; + windowBits = -windowBits; + } + else { + wrap = (windowBits >> 4) + 1; +#ifdef GUNZIP + if (windowBits < 48) + windowBits &= 15; +#endif + } + + /* set number of window bits, free window if different */ + if (windowBits && (windowBits < 8 || windowBits > 15)) + return Z_STREAM_ERROR; + if (state->window != Z_NULL && state->wbits != (unsigned)windowBits) { + ZFREE(strm, state->window); + state->window = Z_NULL; + } + + /* update state and reset the rest of it */ + state->wrap = wrap; + state->wbits = (unsigned)windowBits; + return inflateReset(strm); +} + +int ZEXPORT inflateInit2_(strm, windowBits, version, stream_size) +z_streamp strm; +int windowBits; +const char *version; +int stream_size; +{ + int ret; + struct inflate_state FAR *state; + + if (version == Z_NULL || version[0] != ZLIB_VERSION[0] || + stream_size != (int)(sizeof(z_stream))) + return Z_VERSION_ERROR; + if (strm == Z_NULL) return Z_STREAM_ERROR; + strm->msg = Z_NULL; /* in case we return an error */ + if (strm->zalloc == (alloc_func)0) { +#ifdef Z_SOLO + return Z_STREAM_ERROR; +#else + strm->zalloc = zcalloc; + strm->opaque = (voidpf)0; +#endif + } + if (strm->zfree == (free_func)0) +#ifdef Z_SOLO + return Z_STREAM_ERROR; +#else + strm->zfree = zcfree; +#endif + state = (struct inflate_state FAR *) + ZALLOC(strm, 1, sizeof(struct inflate_state)); + if (state == Z_NULL) return Z_MEM_ERROR; + Tracev((stderr, "inflate: allocated\n")); + strm->state = (struct internal_state FAR *)state; + state->window = Z_NULL; + ret = inflateReset2(strm, windowBits); + if (ret != Z_OK) { + ZFREE(strm, state); + strm->state = Z_NULL; + } + return ret; +} + +int ZEXPORT 
inflateInit_(strm, version, stream_size) +z_streamp strm; +const char *version; +int stream_size; +{ + return inflateInit2_(strm, DEF_WBITS, version, stream_size); +} + +int ZEXPORT inflatePrime(strm, bits, value) +z_streamp strm; +int bits; +int value; +{ + struct inflate_state FAR *state; + + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + state = (struct inflate_state FAR *)strm->state; + if (bits < 0) { + state->hold = 0; + state->bits = 0; + return Z_OK; + } + if (bits > 16 || state->bits + bits > 32) return Z_STREAM_ERROR; + value &= (1L << bits) - 1; + state->hold += value << state->bits; + state->bits += bits; + return Z_OK; +} + +/* + Return state with length and distance decoding tables and index sizes set to + fixed code decoding. Normally this returns fixed tables from inffixed.h. + If BUILDFIXED is defined, then instead this routine builds the tables the + first time it's called, and returns those tables the first time and + thereafter. This reduces the size of the code by about 2K bytes, in + exchange for a little execution time. However, BUILDFIXED should not be + used for threaded applications, since the rewriting of the tables and virgin + may not be thread-safe. + */ +local void fixedtables(state) +struct inflate_state FAR *state; +{ +#ifdef BUILDFIXED + static int virgin = 1; + static code *lenfix, *distfix; + static code fixed[544]; + + /* build fixed huffman tables if first call (may not be thread safe) */ + if (virgin) { + unsigned sym, bits; + static code *next; + + /* literal/length table */ + sym = 0; + while (sym < 144) state->lens[sym++] = 8; + while (sym < 256) state->lens[sym++] = 9; + while (sym < 280) state->lens[sym++] = 7; + while (sym < 288) state->lens[sym++] = 8; + next = fixed; + lenfix = next; + bits = 9; + inflate_table(LENS, state->lens, 288, &(next), &(bits), state->work); + + /* distance table */ + sym = 0; + while (sym < 32) state->lens[sym++] = 5; + distfix = next; + bits = 5; + inflate_table(DISTS, state->lens, 32, &(next), &(bits), state->work); + + /* do this just once */ + virgin = 0; + } +#else /* !BUILDFIXED */ +# include "inffixed.h" +#endif /* BUILDFIXED */ + state->lencode = lenfix; + state->lenbits = 9; + state->distcode = distfix; + state->distbits = 5; +} + +#ifdef MAKEFIXED +#include + +/* + Write out the inffixed.h that is #include'd above. Defining MAKEFIXED also + defines BUILDFIXED, so the tables are built on the fly. makefixed() writes + those tables to stdout, which would be piped to inffixed.h. A small program + can simply call makefixed to do this: + + void makefixed(void); + + int main(void) + { + makefixed(); + return 0; + } + + Then that can be linked with zlib built with MAKEFIXED defined and run: + + a.out > inffixed.h + */ +void makefixed() +{ + unsigned low, size; + struct inflate_state state; + + fixedtables(&state); + puts(" /* inffixed.h -- table for decoding fixed codes"); + puts(" * Generated automatically by makefixed()."); + puts(" */"); + puts(""); + puts(" /* WARNING: this file should *not* be used by applications."); + puts(" It is part of the implementation of this library and is"); + puts(" subject to change. Applications should only use zlib.h."); + puts(" */"); + puts(""); + size = 1U << 9; + printf(" static const code lenfix[%u] = {", size); + low = 0; + for (;;) { + if ((low % 7) == 0) printf("\n "); + printf("{%u,%u,%d}", (low & 127) == 99 ? 
64 : state.lencode[low].op, + state.lencode[low].bits, state.lencode[low].val); + if (++low == size) break; + putchar(','); + } + puts("\n };"); + size = 1U << 5; + printf("\n static const code distfix[%u] = {", size); + low = 0; + for (;;) { + if ((low % 6) == 0) printf("\n "); + printf("{%u,%u,%d}", state.distcode[low].op, state.distcode[low].bits, + state.distcode[low].val); + if (++low == size) break; + putchar(','); + } + puts("\n };"); +} +#endif /* MAKEFIXED */ + +/* + Update the window with the last wsize (normally 32K) bytes written before + returning. If window does not exist yet, create it. This is only called + when a window is already in use, or when output has been written during this + inflate call, but the end of the deflate stream has not been reached yet. + It is also called to create a window for dictionary data when a dictionary + is loaded. + + Providing output buffers larger than 32K to inflate() should provide a speed + advantage, since only the last 32K of output is copied to the sliding window + upon return from inflate(), and since all distances after the first 32K of + output will fall in the output data, making match copies simpler and faster. + The advantage may be dependent on the size of the processor's data caches. + */ +local int updatewindow(strm, end, copy) +z_streamp strm; +const Bytef *end; +unsigned copy; +{ + struct inflate_state FAR *state; + unsigned dist; + + state = (struct inflate_state FAR *)strm->state; + + /* if it hasn't been done already, allocate space for the window */ + if (state->window == Z_NULL) { + state->window = (unsigned char FAR *) + ZALLOC(strm, 1U << state->wbits, + sizeof(unsigned char)); + if (state->window == Z_NULL) return 1; + } + + /* if window not in use yet, initialize */ + if (state->wsize == 0) { + state->wsize = 1U << state->wbits; + state->wnext = 0; + state->whave = 0; + } + + /* copy state->wsize or less output bytes into the circular window */ + if (copy >= state->wsize) { + zmemcpy(state->window, end - state->wsize, state->wsize); + state->wnext = 0; + state->whave = state->wsize; + } + else { + dist = state->wsize - state->wnext; + if (dist > copy) dist = copy; + zmemcpy(state->window + state->wnext, end - copy, dist); + copy -= dist; + if (copy) { + zmemcpy(state->window, end - copy, copy); + state->wnext = copy; + state->whave = state->wsize; + } + else { + state->wnext += dist; + if (state->wnext == state->wsize) state->wnext = 0; + if (state->whave < state->wsize) state->whave += dist; + } + } + return 0; +} + +/* Macros for inflate(): */ + +/* check function to use adler32() for zlib or crc32() for gzip */ +#ifdef GUNZIP +# define UPDATE(check, buf, len) \ + (state->flags ? 
crc32(check, buf, len) : adler32(check, buf, len)) +#else +# define UPDATE(check, buf, len) adler32(check, buf, len) +#endif + +/* check macros for header crc */ +#ifdef GUNZIP +# define CRC2(check, word) \ + do { \ + hbuf[0] = (unsigned char)(word); \ + hbuf[1] = (unsigned char)((word) >> 8); \ + check = crc32(check, hbuf, 2); \ + } while (0) + +# define CRC4(check, word) \ + do { \ + hbuf[0] = (unsigned char)(word); \ + hbuf[1] = (unsigned char)((word) >> 8); \ + hbuf[2] = (unsigned char)((word) >> 16); \ + hbuf[3] = (unsigned char)((word) >> 24); \ + check = crc32(check, hbuf, 4); \ + } while (0) +#endif + +/* Load registers with state in inflate() for speed */ +#define LOAD() \ + do { \ + put = strm->next_out; \ + left = strm->avail_out; \ + next = strm->next_in; \ + have = strm->avail_in; \ + hold = state->hold; \ + bits = state->bits; \ + } while (0) + +/* Restore state from registers in inflate() */ +#define RESTORE() \ + do { \ + strm->next_out = put; \ + strm->avail_out = left; \ + strm->next_in = next; \ + strm->avail_in = have; \ + state->hold = hold; \ + state->bits = bits; \ + } while (0) + +/* Clear the input bit accumulator */ +#define INITBITS() \ + do { \ + hold = 0; \ + bits = 0; \ + } while (0) + +/* Get a byte of input into the bit accumulator, or return from inflate() + if there is no input available. */ +#define PULLBYTE() \ + do { \ + if (have == 0) goto inf_leave; \ + have--; \ + hold += (unsigned long)(*next++) << bits; \ + bits += 8; \ + } while (0) + +/* Assure that there are at least n bits in the bit accumulator. If there is + not enough available input to do that, then return from inflate(). */ +#define NEEDBITS(n) \ + do { \ + while (bits < (unsigned)(n)) \ + PULLBYTE(); \ + } while (0) + +/* Return the low n bits of the bit accumulator (n < 16) */ +#define BITS(n) \ + ((unsigned)hold & ((1U << (n)) - 1)) + +/* Remove n bits from the bit accumulator */ +#define DROPBITS(n) \ + do { \ + hold >>= (n); \ + bits -= (unsigned)(n); \ + } while (0) + +/* Remove zero to seven bits as needed to go to a byte boundary */ +#define BYTEBITS() \ + do { \ + hold >>= bits & 7; \ + bits -= bits & 7; \ + } while (0) + +/* + inflate() uses a state machine to process as much input data and generate as + much output data as possible before returning. The state machine is + structured roughly as follows: + + for (;;) switch (state) { + ... + case STATEn: + if (not enough input data or output space to make progress) + return; + ... make progress ... + state = STATEm; + break; + ... + } + + so when inflate() is called again, the same case is attempted again, and + if the appropriate resources are provided, the machine proceeds to the + next state. The NEEDBITS() macro is usually the way the state evaluates + whether it can proceed or should return. NEEDBITS() does the return if + the requested bits are not available. The typical use of the BITS macros + is: + + NEEDBITS(n); + ... do something with BITS(n) ... + DROPBITS(n); + + where NEEDBITS(n) either returns from inflate() if there isn't enough + input left to load n bits into the accumulator, or it continues. BITS(n) + gives the low n bits in the accumulator. When done, DROPBITS(n) drops + the low n bits off the accumulator. INITBITS() clears the accumulator + and sets the number of available bits to zero. BYTEBITS() discards just + enough bits to put the accumulator on a byte boundary. After BYTEBITS() + and a NEEDBITS(8), then BITS(8) would return the next byte in the stream. 
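For a concrete instance of this pattern, the three-bit header of every deflate block is consumed exactly this way in the TYPEDO state further down in this file (condensed here; the full switch on the block type follows in the code):

    NEEDBITS(3);              /* ensure three bits are in the accumulator */
    state->last = BITS(1);    /* low bit: final-block flag */
    DROPBITS(1);
    switch (BITS(2)) {        /* next two bits select the block type */
        /* ... stored, fixed, dynamic, or invalid ... */
    }
    DROPBITS(2);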
+ + NEEDBITS(n) uses PULLBYTE() to get an available byte of input, or to return + if there is no input available. The decoding of variable length codes uses + PULLBYTE() directly in order to pull just enough bytes to decode the next + code, and no more. + + Some states loop until they get enough input, making sure that enough + state information is maintained to continue the loop where it left off + if NEEDBITS() returns in the loop. For example, want, need, and keep + would all have to actually be part of the saved state in case NEEDBITS() + returns: + + case STATEw: + while (want < need) { + NEEDBITS(n); + keep[want++] = BITS(n); + DROPBITS(n); + } + state = STATEx; + case STATEx: + + As shown above, if the next state is also the next case, then the break + is omitted. + + A state may also return if there is not enough output space available to + complete that state. Those states are copying stored data, writing a + literal byte, and copying a matching string. + + When returning, a "goto inf_leave" is used to update the total counters, + update the check value, and determine whether any progress has been made + during that inflate() call in order to return the proper return code. + Progress is defined as a change in either strm->avail_in or strm->avail_out. + When there is a window, goto inf_leave will update the window with the last + output written. If a goto inf_leave occurs in the middle of decompression + and there is no window currently, goto inf_leave will create one and copy + output to the window for the next call of inflate(). + + In this implementation, the flush parameter of inflate() only affects the + return code (per zlib.h). inflate() always writes as much as possible to + strm->next_out, given the space available and the provided input--the effect + documented in zlib.h of Z_SYNC_FLUSH. Furthermore, inflate() always defers + the allocation of and copying into a sliding window until necessary, which + provides the effect documented in zlib.h for Z_FINISH when the entire input + stream available. So the only thing the flush parameter actually does is: + when flush is set to Z_FINISH, inflate() cannot return Z_OK. Instead it + will return Z_BUF_ERROR if it has not reached the end of the stream. 
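To make that contract concrete, a minimal caller could drive inflate() as in the sketch below. This is a hypothetical helper written for illustration only (the name inflate_all, the parameters src and srclen, and the 16K scratch buffer are all invented); it feeds one in-memory zlib stream through and discards the output.

    #include "zlib.h"

    static int inflate_all(unsigned char *src, unsigned long srclen)
    {
        z_stream strm;
        unsigned char out[16384];    /* scratch output buffer */
        int ret;

        strm.zalloc = Z_NULL;
        strm.zfree = Z_NULL;
        strm.opaque = Z_NULL;
        strm.next_in = src;
        strm.avail_in = (uInt)srclen;
        ret = inflateInit(&strm);
        if (ret != Z_OK) return ret;
        do {
            strm.next_out = out;
            strm.avail_out = sizeof(out);
            ret = inflate(&strm, Z_NO_FLUSH);  /* writes as much as it can */
            if (ret != Z_OK && ret != Z_STREAM_END) break;
            /* (sizeof(out) - strm.avail_out) bytes of output are now in out[] */
        } while (ret != Z_STREAM_END);
        inflateEnd(&strm);
        return ret == Z_STREAM_END ? Z_OK : ret;
    }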
+ */ + +int ZEXPORT inflate(strm, flush) +z_streamp strm; +int flush; +{ + struct inflate_state FAR *state; + z_const unsigned char FAR *next; /* next input */ + unsigned char FAR *put; /* next output */ + unsigned have, left; /* available input and output */ + unsigned long hold; /* bit buffer */ + unsigned bits; /* bits in bit buffer */ + unsigned in, out; /* save starting available input and output */ + unsigned copy; /* number of stored or match bytes to copy */ + unsigned char FAR *from; /* where to copy match bytes from */ + code here; /* current decoding table entry */ + code last; /* parent table entry */ + unsigned len; /* length to copy for repeats, bits to drop */ + int ret; /* return code */ +#ifdef GUNZIP + unsigned char hbuf[4]; /* buffer for gzip header crc calculation */ +#endif + static const unsigned short order[19] = /* permutation of code lengths */ + {16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15}; + + if (strm == Z_NULL || strm->state == Z_NULL || strm->next_out == Z_NULL || + (strm->next_in == Z_NULL && strm->avail_in != 0)) + return Z_STREAM_ERROR; + + state = (struct inflate_state FAR *)strm->state; + if (state->mode == TYPE) state->mode = TYPEDO; /* skip check */ + LOAD(); + in = have; + out = left; + ret = Z_OK; + for (;;) + switch (state->mode) { + case HEAD: + if (state->wrap == 0) { + state->mode = TYPEDO; + break; + } + NEEDBITS(16); +#ifdef GUNZIP + if ((state->wrap & 2) && hold == 0x8b1f) { /* gzip header */ + state->check = crc32(0L, Z_NULL, 0); + CRC2(state->check, hold); + INITBITS(); + state->mode = FLAGS; + break; + } + state->flags = 0; /* expect zlib header */ + if (state->head != Z_NULL) + state->head->done = -1; + if (!(state->wrap & 1) || /* check if zlib header allowed */ +#else + if ( +#endif + ((BITS(8) << 8) + (hold >> 8)) % 31) { + strm->msg = (char *)"incorrect header check"; + state->mode = BAD; + break; + } + if (BITS(4) != Z_DEFLATED) { + strm->msg = (char *)"unknown compression method"; + state->mode = BAD; + break; + } + DROPBITS(4); + len = BITS(4) + 8; + if (state->wbits == 0) + state->wbits = len; + else if (len > state->wbits) { + strm->msg = (char *)"invalid window size"; + state->mode = BAD; + break; + } + state->dmax = 1U << len; + Tracev((stderr, "inflate: zlib header ok\n")); + strm->adler = state->check = adler32(0L, Z_NULL, 0); + state->mode = hold & 0x200 ? 
DICTID : TYPE; + INITBITS(); + break; +#ifdef GUNZIP + case FLAGS: + NEEDBITS(16); + state->flags = (int)(hold); + if ((state->flags & 0xff) != Z_DEFLATED) { + strm->msg = (char *)"unknown compression method"; + state->mode = BAD; + break; + } + if (state->flags & 0xe000) { + strm->msg = (char *)"unknown header flags set"; + state->mode = BAD; + break; + } + if (state->head != Z_NULL) + state->head->text = (int)((hold >> 8) & 1); + if (state->flags & 0x0200) CRC2(state->check, hold); + INITBITS(); + state->mode = TIME; + case TIME: + NEEDBITS(32); + if (state->head != Z_NULL) + state->head->time = hold; + if (state->flags & 0x0200) CRC4(state->check, hold); + INITBITS(); + state->mode = OS; + case OS: + NEEDBITS(16); + if (state->head != Z_NULL) { + state->head->xflags = (int)(hold & 0xff); + state->head->os = (int)(hold >> 8); + } + if (state->flags & 0x0200) CRC2(state->check, hold); + INITBITS(); + state->mode = EXLEN; + case EXLEN: + if (state->flags & 0x0400) { + NEEDBITS(16); + state->length = (unsigned)(hold); + if (state->head != Z_NULL) + state->head->extra_len = (unsigned)hold; + if (state->flags & 0x0200) CRC2(state->check, hold); + INITBITS(); + } + else if (state->head != Z_NULL) + state->head->extra = Z_NULL; + state->mode = EXTRA; + case EXTRA: + if (state->flags & 0x0400) { + copy = state->length; + if (copy > have) copy = have; + if (copy) { + if (state->head != Z_NULL && + state->head->extra != Z_NULL) { + len = state->head->extra_len - state->length; + zmemcpy(state->head->extra + len, next, + len + copy > state->head->extra_max ? + state->head->extra_max - len : copy); + } + if (state->flags & 0x0200) + state->check = crc32(state->check, next, copy); + have -= copy; + next += copy; + state->length -= copy; + } + if (state->length) goto inf_leave; + } + state->length = 0; + state->mode = NAME; + case NAME: + if (state->flags & 0x0800) { + if (have == 0) goto inf_leave; + copy = 0; + do { + len = (unsigned)(next[copy++]); + if (state->head != Z_NULL && + state->head->name != Z_NULL && + state->length < state->head->name_max) + state->head->name[state->length++] = len; + } while (len && copy < have); + if (state->flags & 0x0200) + state->check = crc32(state->check, next, copy); + have -= copy; + next += copy; + if (len) goto inf_leave; + } + else if (state->head != Z_NULL) + state->head->name = Z_NULL; + state->length = 0; + state->mode = COMMENT; + case COMMENT: + if (state->flags & 0x1000) { + if (have == 0) goto inf_leave; + copy = 0; + do { + len = (unsigned)(next[copy++]); + if (state->head != Z_NULL && + state->head->comment != Z_NULL && + state->length < state->head->comm_max) + state->head->comment[state->length++] = len; + } while (len && copy < have); + if (state->flags & 0x0200) + state->check = crc32(state->check, next, copy); + have -= copy; + next += copy; + if (len) goto inf_leave; + } + else if (state->head != Z_NULL) + state->head->comment = Z_NULL; + state->mode = HCRC; + case HCRC: + if (state->flags & 0x0200) { + NEEDBITS(16); + if (hold != (state->check & 0xffff)) { + strm->msg = (char *)"header crc mismatch"; + state->mode = BAD; + break; + } + INITBITS(); + } + if (state->head != Z_NULL) { + state->head->hcrc = (int)((state->flags >> 9) & 1); + state->head->done = 1; + } + strm->adler = state->check = crc32(0L, Z_NULL, 0); + state->mode = TYPE; + break; +#endif + case DICTID: + NEEDBITS(32); + strm->adler = state->check = ZSWAP32(hold); + INITBITS(); + state->mode = DICT; + case DICT: + if (state->havedict == 0) { + RESTORE(); + return Z_NEED_DICT; + 
} + strm->adler = state->check = adler32(0L, Z_NULL, 0); + state->mode = TYPE; + case TYPE: + if (flush == Z_BLOCK || flush == Z_TREES) goto inf_leave; + case TYPEDO: + if (state->last) { + BYTEBITS(); + state->mode = CHECK; + break; + } + NEEDBITS(3); + state->last = BITS(1); + DROPBITS(1); + switch (BITS(2)) { + case 0: /* stored block */ + Tracev((stderr, "inflate: stored block%s\n", + state->last ? " (last)" : "")); + state->mode = STORED; + break; + case 1: /* fixed block */ + fixedtables(state); + Tracev((stderr, "inflate: fixed codes block%s\n", + state->last ? " (last)" : "")); + state->mode = LEN_; /* decode codes */ + if (flush == Z_TREES) { + DROPBITS(2); + goto inf_leave; + } + break; + case 2: /* dynamic block */ + Tracev((stderr, "inflate: dynamic codes block%s\n", + state->last ? " (last)" : "")); + state->mode = TABLE; + break; + case 3: + strm->msg = (char *)"invalid block type"; + state->mode = BAD; + } + DROPBITS(2); + break; + case STORED: + BYTEBITS(); /* go to byte boundary */ + NEEDBITS(32); + if ((hold & 0xffff) != ((hold >> 16) ^ 0xffff)) { + strm->msg = (char *)"invalid stored block lengths"; + state->mode = BAD; + break; + } + state->length = (unsigned)hold & 0xffff; + Tracev((stderr, "inflate: stored length %u\n", + state->length)); + INITBITS(); + state->mode = COPY_; + if (flush == Z_TREES) goto inf_leave; + case COPY_: + state->mode = COPY; + case COPY: + copy = state->length; + if (copy) { + if (copy > have) copy = have; + if (copy > left) copy = left; + if (copy == 0) goto inf_leave; + zmemcpy(put, next, copy); + have -= copy; + next += copy; + left -= copy; + put += copy; + state->length -= copy; + break; + } + Tracev((stderr, "inflate: stored end\n")); + state->mode = TYPE; + break; + case TABLE: + NEEDBITS(14); + state->nlen = BITS(5) + 257; + DROPBITS(5); + state->ndist = BITS(5) + 1; + DROPBITS(5); + state->ncode = BITS(4) + 4; + DROPBITS(4); +#ifndef PKZIP_BUG_WORKAROUND + if (state->nlen > 286 || state->ndist > 30) { + strm->msg = (char *)"too many length or distance symbols"; + state->mode = BAD; + break; + } +#endif + Tracev((stderr, "inflate: table sizes ok\n")); + state->have = 0; + state->mode = LENLENS; + case LENLENS: + while (state->have < state->ncode) { + NEEDBITS(3); + state->lens[order[state->have++]] = (unsigned short)BITS(3); + DROPBITS(3); + } + while (state->have < 19) + state->lens[order[state->have++]] = 0; + state->next = state->codes; + state->lencode = (const code FAR *)(state->next); + state->lenbits = 7; + ret = inflate_table(CODES, state->lens, 19, &(state->next), + &(state->lenbits), state->work); + if (ret) { + strm->msg = (char *)"invalid code lengths set"; + state->mode = BAD; + break; + } + Tracev((stderr, "inflate: code lengths ok\n")); + state->have = 0; + state->mode = CODELENS; + case CODELENS: + while (state->have < state->nlen + state->ndist) { + for (;;) { + here = state->lencode[BITS(state->lenbits)]; + if ((unsigned)(here.bits) <= bits) break; + PULLBYTE(); + } + if (here.val < 16) { + DROPBITS(here.bits); + state->lens[state->have++] = here.val; + } + else { + if (here.val == 16) { + NEEDBITS(here.bits + 2); + DROPBITS(here.bits); + if (state->have == 0) { + strm->msg = (char *)"invalid bit length repeat"; + state->mode = BAD; + break; + } + len = state->lens[state->have - 1]; + copy = 3 + BITS(2); + DROPBITS(2); + } + else if (here.val == 17) { + NEEDBITS(here.bits + 3); + DROPBITS(here.bits); + len = 0; + copy = 3 + BITS(3); + DROPBITS(3); + } + else { + NEEDBITS(here.bits + 7); + DROPBITS(here.bits); + len = 
0; + copy = 11 + BITS(7); + DROPBITS(7); + } + if (state->have + copy > state->nlen + state->ndist) { + strm->msg = (char *)"invalid bit length repeat"; + state->mode = BAD; + break; + } + while (copy--) + state->lens[state->have++] = (unsigned short)len; + } + } + + /* handle error breaks in while */ + if (state->mode == BAD) break; + + /* check for end-of-block code (better have one) */ + if (state->lens[256] == 0) { + strm->msg = (char *)"invalid code -- missing end-of-block"; + state->mode = BAD; + break; + } + + /* build code tables -- note: do not change the lenbits or distbits + values here (9 and 6) without reading the comments in inftrees.h + concerning the ENOUGH constants, which depend on those values */ + state->next = state->codes; + state->lencode = (const code FAR *)(state->next); + state->lenbits = 9; + ret = inflate_table(LENS, state->lens, state->nlen, &(state->next), + &(state->lenbits), state->work); + if (ret) { + strm->msg = (char *)"invalid literal/lengths set"; + state->mode = BAD; + break; + } + state->distcode = (const code FAR *)(state->next); + state->distbits = 6; + ret = inflate_table(DISTS, state->lens + state->nlen, state->ndist, + &(state->next), &(state->distbits), state->work); + if (ret) { + strm->msg = (char *)"invalid distances set"; + state->mode = BAD; + break; + } + Tracev((stderr, "inflate: codes ok\n")); + state->mode = LEN_; + if (flush == Z_TREES) goto inf_leave; + case LEN_: + state->mode = LEN; + case LEN: + if (have >= 6 && left >= 258) { + RESTORE(); + inflate_fast(strm, out); + LOAD(); + if (state->mode == TYPE) + state->back = -1; + break; + } + state->back = 0; + for (;;) { + here = state->lencode[BITS(state->lenbits)]; + if ((unsigned)(here.bits) <= bits) break; + PULLBYTE(); + } + if (here.op && (here.op & 0xf0) == 0) { + last = here; + for (;;) { + here = state->lencode[last.val + + (BITS(last.bits + last.op) >> last.bits)]; + if ((unsigned)(last.bits + here.bits) <= bits) break; + PULLBYTE(); + } + DROPBITS(last.bits); + state->back += last.bits; + } + DROPBITS(here.bits); + state->back += here.bits; + state->length = (unsigned)here.val; + if ((int)(here.op) == 0) { + Tracevv((stderr, here.val >= 0x20 && here.val < 0x7f ? 
+ "inflate: literal '%c'\n" : + "inflate: literal 0x%02x\n", here.val)); + state->mode = LIT; + break; + } + if (here.op & 32) { + Tracevv((stderr, "inflate: end of block\n")); + state->back = -1; + state->mode = TYPE; + break; + } + if (here.op & 64) { + strm->msg = (char *)"invalid literal/length code"; + state->mode = BAD; + break; + } + state->extra = (unsigned)(here.op) & 15; + state->mode = LENEXT; + case LENEXT: + if (state->extra) { + NEEDBITS(state->extra); + state->length += BITS(state->extra); + DROPBITS(state->extra); + state->back += state->extra; + } + Tracevv((stderr, "inflate: length %u\n", state->length)); + state->was = state->length; + state->mode = DIST; + case DIST: + for (;;) { + here = state->distcode[BITS(state->distbits)]; + if ((unsigned)(here.bits) <= bits) break; + PULLBYTE(); + } + if ((here.op & 0xf0) == 0) { + last = here; + for (;;) { + here = state->distcode[last.val + + (BITS(last.bits + last.op) >> last.bits)]; + if ((unsigned)(last.bits + here.bits) <= bits) break; + PULLBYTE(); + } + DROPBITS(last.bits); + state->back += last.bits; + } + DROPBITS(here.bits); + state->back += here.bits; + if (here.op & 64) { + strm->msg = (char *)"invalid distance code"; + state->mode = BAD; + break; + } + state->offset = (unsigned)here.val; + state->extra = (unsigned)(here.op) & 15; + state->mode = DISTEXT; + case DISTEXT: + if (state->extra) { + NEEDBITS(state->extra); + state->offset += BITS(state->extra); + DROPBITS(state->extra); + state->back += state->extra; + } +#ifdef INFLATE_STRICT + if (state->offset > state->dmax) { + strm->msg = (char *)"invalid distance too far back"; + state->mode = BAD; + break; + } +#endif + Tracevv((stderr, "inflate: distance %u\n", state->offset)); + state->mode = MATCH; + case MATCH: + if (left == 0) goto inf_leave; + copy = out - left; + if (state->offset > copy) { /* copy from window */ + copy = state->offset - copy; + if (copy > state->whave) { + if (state->sane) { + strm->msg = (char *)"invalid distance too far back"; + state->mode = BAD; + break; + } +#ifdef INFLATE_ALLOW_INVALID_DISTANCE_TOOFAR_ARRR + Trace((stderr, "inflate.c too far\n")); + copy -= state->whave; + if (copy > state->length) copy = state->length; + if (copy > left) copy = left; + left -= copy; + state->length -= copy; + do { + *put++ = 0; + } while (--copy); + if (state->length == 0) state->mode = LEN; + break; +#endif + } + if (copy > state->wnext) { + copy -= state->wnext; + from = state->window + (state->wsize - copy); + } + else + from = state->window + (state->wnext - copy); + if (copy > state->length) copy = state->length; + } + else { /* copy from output */ + from = put - state->offset; + copy = state->length; + } + if (copy > left) copy = left; + left -= copy; + state->length -= copy; + do { + *put++ = *from++; + } while (--copy); + if (state->length == 0) state->mode = LEN; + break; + case LIT: + if (left == 0) goto inf_leave; + *put++ = (unsigned char)(state->length); + left--; + state->mode = LEN; + break; + case CHECK: + if (state->wrap) { + NEEDBITS(32); + out -= left; + strm->total_out += out; + state->total += out; + if (out) + strm->adler = state->check = + UPDATE(state->check, put - out, out); + out = left; + if (( +#ifdef GUNZIP + state->flags ? 
hold : +#endif + ZSWAP32(hold)) != state->check) { + strm->msg = (char *)"incorrect data check"; + state->mode = BAD; + break; + } + INITBITS(); + Tracev((stderr, "inflate: check matches trailer\n")); + } +#ifdef GUNZIP + state->mode = LENGTH; + case LENGTH: + if (state->wrap && state->flags) { + NEEDBITS(32); + if (hold != (state->total & 0xffffffffUL)) { + strm->msg = (char *)"incorrect length check"; + state->mode = BAD; + break; + } + INITBITS(); + Tracev((stderr, "inflate: length matches trailer\n")); + } +#endif + state->mode = DONE; + case DONE: + ret = Z_STREAM_END; + goto inf_leave; + case BAD: + ret = Z_DATA_ERROR; + goto inf_leave; + case MEM: + return Z_MEM_ERROR; + case SYNC: + default: + return Z_STREAM_ERROR; + } + + /* + Return from inflate(), updating the total counts and the check value. + If there was no progress during the inflate() call, return a buffer + error. Call updatewindow() to create and/or update the window state. + Note: a memory error from inflate() is non-recoverable. + */ + inf_leave: + RESTORE(); + if (state->wsize || (out != strm->avail_out && state->mode < BAD && + (state->mode < CHECK || flush != Z_FINISH))) + if (updatewindow(strm, strm->next_out, out - strm->avail_out)) { + state->mode = MEM; + return Z_MEM_ERROR; + } + in -= strm->avail_in; + out -= strm->avail_out; + strm->total_in += in; + strm->total_out += out; + state->total += out; + if (state->wrap && out) + strm->adler = state->check = + UPDATE(state->check, strm->next_out - out, out); + strm->data_type = state->bits + (state->last ? 64 : 0) + + (state->mode == TYPE ? 128 : 0) + + (state->mode == LEN_ || state->mode == COPY_ ? 256 : 0); + if (((in == 0 && out == 0) || flush == Z_FINISH) && ret == Z_OK) + ret = Z_BUF_ERROR; + return ret; +} + +int ZEXPORT inflateEnd(strm) +z_streamp strm; +{ + struct inflate_state FAR *state; + if (strm == Z_NULL || strm->state == Z_NULL || strm->zfree == (free_func)0) + return Z_STREAM_ERROR; + state = (struct inflate_state FAR *)strm->state; + if (state->window != Z_NULL) ZFREE(strm, state->window); + ZFREE(strm, strm->state); + strm->state = Z_NULL; + Tracev((stderr, "inflate: end\n")); + return Z_OK; +} + +int ZEXPORT inflateGetDictionary(strm, dictionary, dictLength) +z_streamp strm; +Bytef *dictionary; +uInt *dictLength; +{ + struct inflate_state FAR *state; + + /* check state */ + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + state = (struct inflate_state FAR *)strm->state; + + /* copy dictionary */ + if (state->whave && dictionary != Z_NULL) { + zmemcpy(dictionary, state->window + state->wnext, + state->whave - state->wnext); + zmemcpy(dictionary + state->whave - state->wnext, + state->window, state->wnext); + } + if (dictLength != Z_NULL) + *dictLength = state->whave; + return Z_OK; +} + +int ZEXPORT inflateSetDictionary(strm, dictionary, dictLength) +z_streamp strm; +const Bytef *dictionary; +uInt dictLength; +{ + struct inflate_state FAR *state; + unsigned long dictid; + int ret; + + /* check state */ + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + state = (struct inflate_state FAR *)strm->state; + if (state->wrap != 0 && state->mode != DICT) + return Z_STREAM_ERROR; + + /* check for correct dictionary identifier */ + if (state->mode == DICT) { + dictid = adler32(0L, Z_NULL, 0); + dictid = adler32(dictid, dictionary, dictLength); + if (dictid != state->check) + return Z_DATA_ERROR; + } + + /* copy dictionary to window using updatewindow(), which will amend the + existing dictionary if appropriate */ + 
ret = updatewindow(strm, dictionary + dictLength, dictLength); + if (ret) { + state->mode = MEM; + return Z_MEM_ERROR; + } + state->havedict = 1; + Tracev((stderr, "inflate: dictionary set\n")); + return Z_OK; +} + +int ZEXPORT inflateGetHeader(strm, head) +z_streamp strm; +gz_headerp head; +{ + struct inflate_state FAR *state; + + /* check state */ + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + state = (struct inflate_state FAR *)strm->state; + if ((state->wrap & 2) == 0) return Z_STREAM_ERROR; + + /* save header structure */ + state->head = head; + head->done = 0; + return Z_OK; +} + +/* + Search buf[0..len-1] for the pattern: 0, 0, 0xff, 0xff. Return when found + or when out of input. When called, *have is the number of pattern bytes + found in order so far, in 0..3. On return *have is updated to the new + state. If on return *have equals four, then the pattern was found and the + return value is how many bytes were read including the last byte of the + pattern. If *have is less than four, then the pattern has not been found + yet and the return value is len. In the latter case, syncsearch() can be + called again with more data and the *have state. *have is initialized to + zero for the first call. + */ +local unsigned syncsearch(have, buf, len) +unsigned FAR *have; +const unsigned char FAR *buf; +unsigned len; +{ + unsigned got; + unsigned next; + + got = *have; + next = 0; + while (next < len && got < 4) { + if ((int)(buf[next]) == (got < 2 ? 0 : 0xff)) + got++; + else if (buf[next]) + got = 0; + else + got = 4 - got; + next++; + } + *have = got; + return next; +} + +int ZEXPORT inflateSync(strm) +z_streamp strm; +{ + unsigned len; /* number of bytes to look at or looked at */ + unsigned long in, out; /* temporary to save total_in and total_out */ + unsigned char buf[4]; /* to restore bit buffer to byte string */ + struct inflate_state FAR *state; + + /* check parameters */ + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + state = (struct inflate_state FAR *)strm->state; + if (strm->avail_in == 0 && state->bits < 8) return Z_BUF_ERROR; + + /* if first time, start search in bit buffer */ + if (state->mode != SYNC) { + state->mode = SYNC; + state->hold <<= state->bits & 7; + state->bits -= state->bits & 7; + len = 0; + while (state->bits >= 8) { + buf[len++] = (unsigned char)(state->hold); + state->hold >>= 8; + state->bits -= 8; + } + state->have = 0; + syncsearch(&(state->have), buf, len); + } + + /* search available input */ + len = syncsearch(&(state->have), strm->next_in, strm->avail_in); + strm->avail_in -= len; + strm->next_in += len; + strm->total_in += len; + + /* return no joy or set up to restart inflate() on a new block */ + if (state->have != 4) return Z_DATA_ERROR; + in = strm->total_in; out = strm->total_out; + inflateReset(strm); + strm->total_in = in; strm->total_out = out; + state->mode = TYPE; + return Z_OK; +} + +/* + Returns true if inflate is currently at the end of a block generated by + Z_SYNC_FLUSH or Z_FULL_FLUSH. This function is used by one PPP + implementation to provide an additional safety check. PPP uses + Z_SYNC_FLUSH but removes the length bytes of the resulting empty stored + block. When decompressing, PPP checks that at the end of input packet, + inflate is waiting for these length bytes. 
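Relatedly, a caller that receives Z_DATA_ERROR from inflate() on a damaged stream can use inflateSync() above to skip forward to the next occurrence of the 0, 0, 0xff, 0xff pattern that syncsearch() looks for and then resume, accepting that the data in between is lost. Schematically (a hypothetical fragment, assuming strm is the caller's z_stream):

    ret = inflate(&strm, Z_NO_FLUSH);
    if (ret == Z_DATA_ERROR && inflateSync(&strm) == Z_OK) {
        /* a flush point was found; decoding can continue with the next
           deflate block on subsequent inflate() calls */
        ret = Z_OK;
    }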
+ */ +int ZEXPORT inflateSyncPoint(strm) +z_streamp strm; +{ + struct inflate_state FAR *state; + + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + state = (struct inflate_state FAR *)strm->state; + return state->mode == STORED && state->bits == 0; +} + +int ZEXPORT inflateCopy(dest, source) +z_streamp dest; +z_streamp source; +{ + struct inflate_state FAR *state; + struct inflate_state FAR *copy; + unsigned char FAR *window; + unsigned wsize; + + /* check input */ + if (dest == Z_NULL || source == Z_NULL || source->state == Z_NULL || + source->zalloc == (alloc_func)0 || source->zfree == (free_func)0) + return Z_STREAM_ERROR; + state = (struct inflate_state FAR *)source->state; + + /* allocate space */ + copy = (struct inflate_state FAR *) + ZALLOC(source, 1, sizeof(struct inflate_state)); + if (copy == Z_NULL) return Z_MEM_ERROR; + window = Z_NULL; + if (state->window != Z_NULL) { + window = (unsigned char FAR *) + ZALLOC(source, 1U << state->wbits, sizeof(unsigned char)); + if (window == Z_NULL) { + ZFREE(source, copy); + return Z_MEM_ERROR; + } + } + + /* copy state */ + zmemcpy((voidpf)dest, (voidpf)source, sizeof(z_stream)); + zmemcpy((voidpf)copy, (voidpf)state, sizeof(struct inflate_state)); + if (state->lencode >= state->codes && + state->lencode <= state->codes + ENOUGH - 1) { + copy->lencode = copy->codes + (state->lencode - state->codes); + copy->distcode = copy->codes + (state->distcode - state->codes); + } + copy->next = copy->codes + (state->next - state->codes); + if (window != Z_NULL) { + wsize = 1U << state->wbits; + zmemcpy(window, state->window, wsize); + } + copy->window = window; + dest->state = (struct internal_state FAR *)copy; + return Z_OK; +} + +int ZEXPORT inflateUndermine(strm, subvert) +z_streamp strm; +int subvert; +{ + struct inflate_state FAR *state; + + if (strm == Z_NULL || strm->state == Z_NULL) return Z_STREAM_ERROR; + state = (struct inflate_state FAR *)strm->state; + state->sane = !subvert; +#ifdef INFLATE_ALLOW_INVALID_DISTANCE_TOOFAR_ARRR + return Z_OK; +#else + state->sane = 1; + return Z_DATA_ERROR; +#endif +} + +long ZEXPORT inflateMark(strm) +z_streamp strm; +{ + struct inflate_state FAR *state; + + if (strm == Z_NULL || strm->state == Z_NULL) return -1L << 16; + state = (struct inflate_state FAR *)strm->state; + return ((long)(state->back) << 16) + + (state->mode == COPY ? state->length : + (state->mode == MATCH ? state->was - state->length : 0)); +} ADDED compat/zlib/inflate.h Index: compat/zlib/inflate.h ================================================================== --- compat/zlib/inflate.h +++ compat/zlib/inflate.h @@ -0,0 +1,122 @@ +/* inflate.h -- internal inflate state definition + * Copyright (C) 1995-2009 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* WARNING: this file should *not* be used by applications. It is + part of the implementation of the compression library and is + subject to change. Applications should only use zlib.h. + */ + +/* define NO_GZIP when compiling if you want to disable gzip header and + trailer decoding by inflate(). NO_GZIP would be used to avoid linking in + the crc code when it is not needed. For shared libraries, gzip decoding + should be left enabled. 
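For example, a build that only ever sees raw zlib (RFC 1950) streams could define the macro on the compiler command line, along the lines of (an illustrative invocation, not one taken from this tree's makefiles):

    cc -DNO_GZIP -c inflate.c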
*/ +#ifndef NO_GZIP +# define GUNZIP +#endif + +/* Possible inflate modes between inflate() calls */ +typedef enum { + HEAD, /* i: waiting for magic header */ + FLAGS, /* i: waiting for method and flags (gzip) */ + TIME, /* i: waiting for modification time (gzip) */ + OS, /* i: waiting for extra flags and operating system (gzip) */ + EXLEN, /* i: waiting for extra length (gzip) */ + EXTRA, /* i: waiting for extra bytes (gzip) */ + NAME, /* i: waiting for end of file name (gzip) */ + COMMENT, /* i: waiting for end of comment (gzip) */ + HCRC, /* i: waiting for header crc (gzip) */ + DICTID, /* i: waiting for dictionary check value */ + DICT, /* waiting for inflateSetDictionary() call */ + TYPE, /* i: waiting for type bits, including last-flag bit */ + TYPEDO, /* i: same, but skip check to exit inflate on new block */ + STORED, /* i: waiting for stored size (length and complement) */ + COPY_, /* i/o: same as COPY below, but only first time in */ + COPY, /* i/o: waiting for input or output to copy stored block */ + TABLE, /* i: waiting for dynamic block table lengths */ + LENLENS, /* i: waiting for code length code lengths */ + CODELENS, /* i: waiting for length/lit and distance code lengths */ + LEN_, /* i: same as LEN below, but only first time in */ + LEN, /* i: waiting for length/lit/eob code */ + LENEXT, /* i: waiting for length extra bits */ + DIST, /* i: waiting for distance code */ + DISTEXT, /* i: waiting for distance extra bits */ + MATCH, /* o: waiting for output space to copy string */ + LIT, /* o: waiting for output space to write literal */ + CHECK, /* i: waiting for 32-bit check value */ + LENGTH, /* i: waiting for 32-bit length (gzip) */ + DONE, /* finished check, done -- remain here until reset */ + BAD, /* got a data error -- remain here until reset */ + MEM, /* got an inflate() memory error -- remain here until reset */ + SYNC /* looking for synchronization bytes to restart inflate() */ +} inflate_mode; + +/* + State transitions between above modes - + + (most modes can go to BAD or MEM on error -- not shown for clarity) + + Process header: + HEAD -> (gzip) or (zlib) or (raw) + (gzip) -> FLAGS -> TIME -> OS -> EXLEN -> EXTRA -> NAME -> COMMENT -> + HCRC -> TYPE + (zlib) -> DICTID or TYPE + DICTID -> DICT -> TYPE + (raw) -> TYPEDO + Read deflate blocks: + TYPE -> TYPEDO -> STORED or TABLE or LEN_ or CHECK + STORED -> COPY_ -> COPY -> TYPE + TABLE -> LENLENS -> CODELENS -> LEN_ + LEN_ -> LEN + Read deflate codes in fixed or dynamic block: + LEN -> LENEXT or LIT or TYPE + LENEXT -> DIST -> DISTEXT -> MATCH -> LEN + LIT -> LEN + Process trailer: + CHECK -> LENGTH -> DONE + */ + +/* state maintained between inflate() calls. Approximately 10K bytes. 
*/ +struct inflate_state { + inflate_mode mode; /* current inflate mode */ + int last; /* true if processing last block */ + int wrap; /* bit 0 true for zlib, bit 1 true for gzip */ + int havedict; /* true if dictionary provided */ + int flags; /* gzip header method and flags (0 if zlib) */ + unsigned dmax; /* zlib header max distance (INFLATE_STRICT) */ + unsigned long check; /* protected copy of check value */ + unsigned long total; /* protected copy of output count */ + gz_headerp head; /* where to save gzip header information */ + /* sliding window */ + unsigned wbits; /* log base 2 of requested window size */ + unsigned wsize; /* window size or zero if not using window */ + unsigned whave; /* valid bytes in the window */ + unsigned wnext; /* window write index */ + unsigned char FAR *window; /* allocated sliding window, if needed */ + /* bit accumulator */ + unsigned long hold; /* input bit accumulator */ + unsigned bits; /* number of bits in "in" */ + /* for string and stored block copying */ + unsigned length; /* literal or length of data to copy */ + unsigned offset; /* distance back to copy string from */ + /* for table and code decoding */ + unsigned extra; /* extra bits needed */ + /* fixed and dynamic code tables */ + code const FAR *lencode; /* starting table for length/literal codes */ + code const FAR *distcode; /* starting table for distance codes */ + unsigned lenbits; /* index bits for lencode */ + unsigned distbits; /* index bits for distcode */ + /* dynamic table building */ + unsigned ncode; /* number of code length code lengths */ + unsigned nlen; /* number of length code lengths */ + unsigned ndist; /* number of distance code lengths */ + unsigned have; /* number of code lengths in lens[] */ + code FAR *next; /* next available space in codes[] */ + unsigned short lens[320]; /* temporary storage for code lengths */ + unsigned short work[288]; /* work area for code table building */ + code codes[ENOUGH]; /* space for code tables */ + int sane; /* if false, allow invalid distance too far */ + int back; /* bits back of last unprocessed length/lit */ + unsigned was; /* initial length of match */ +}; ADDED compat/zlib/inftrees.c Index: compat/zlib/inftrees.c ================================================================== --- compat/zlib/inftrees.c +++ compat/zlib/inftrees.c @@ -0,0 +1,306 @@ +/* inftrees.c -- generate Huffman trees for efficient decoding + * Copyright (C) 1995-2013 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +#include "zutil.h" +#include "inftrees.h" + +#define MAXBITS 15 + +const char inflate_copyright[] = + " inflate 1.2.8 Copyright 1995-2013 Mark Adler "; +/* + If you use the zlib library in a product, an acknowledgment is welcome + in the documentation of your product. If for some reason you cannot + include such an acknowledgment, I would appreciate that you keep this + copyright string in the executable of your product. + */ + +/* + Build a set of tables to decode the provided canonical Huffman code. + The code lengths are lens[0..codes-1]. The result starts at *table, + whose indices are 0..2^bits-1. work is a writable array of at least + lens shorts, which is used as a work area. type is the type of code + to be generated, CODES, LENS, or DISTS. On return, zero is success, + -1 is an invalid code, and +1 means that ENOUGH isn't enough. table + on return points to the next available entry's address. 
bits is the + requested root table index bits, and on return it is the actual root + table index bits. It will differ if the request is greater than the + longest code or if it is less than the shortest code. + */ +int ZLIB_INTERNAL inflate_table(type, lens, codes, table, bits, work) +codetype type; +unsigned short FAR *lens; +unsigned codes; +code FAR * FAR *table; +unsigned FAR *bits; +unsigned short FAR *work; +{ + unsigned len; /* a code's length in bits */ + unsigned sym; /* index of code symbols */ + unsigned min, max; /* minimum and maximum code lengths */ + unsigned root; /* number of index bits for root table */ + unsigned curr; /* number of index bits for current table */ + unsigned drop; /* code bits to drop for sub-table */ + int left; /* number of prefix codes available */ + unsigned used; /* code entries in table used */ + unsigned huff; /* Huffman code */ + unsigned incr; /* for incrementing code, index */ + unsigned fill; /* index for replicating entries */ + unsigned low; /* low bits for current root entry */ + unsigned mask; /* mask for low root bits */ + code here; /* table entry for duplication */ + code FAR *next; /* next available space in table */ + const unsigned short FAR *base; /* base value table to use */ + const unsigned short FAR *extra; /* extra bits table to use */ + int end; /* use base and extra for symbol > end */ + unsigned short count[MAXBITS+1]; /* number of codes of each length */ + unsigned short offs[MAXBITS+1]; /* offsets in table for each length */ + static const unsigned short lbase[31] = { /* Length codes 257..285 base */ + 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 15, 17, 19, 23, 27, 31, + 35, 43, 51, 59, 67, 83, 99, 115, 131, 163, 195, 227, 258, 0, 0}; + static const unsigned short lext[31] = { /* Length codes 257..285 extra */ + 16, 16, 16, 16, 16, 16, 16, 16, 17, 17, 17, 17, 18, 18, 18, 18, + 19, 19, 19, 19, 20, 20, 20, 20, 21, 21, 21, 21, 16, 72, 78}; + static const unsigned short dbase[32] = { /* Distance codes 0..29 base */ + 1, 2, 3, 4, 5, 7, 9, 13, 17, 25, 33, 49, 65, 97, 129, 193, + 257, 385, 513, 769, 1025, 1537, 2049, 3073, 4097, 6145, + 8193, 12289, 16385, 24577, 0, 0}; + static const unsigned short dext[32] = { /* Distance codes 0..29 extra */ + 16, 16, 16, 16, 17, 17, 18, 18, 19, 19, 20, 20, 21, 21, 22, 22, + 23, 23, 24, 24, 25, 25, 26, 26, 27, 27, + 28, 28, 29, 29, 64, 64}; + + /* + Process a set of code lengths to create a canonical Huffman code. The + code lengths are lens[0..codes-1]. Each length corresponds to the + symbols 0..codes-1. The Huffman code is generated by first sorting the + symbols by length from short to long, and retaining the symbol order + for codes with equal lengths. Then the code starts with all zero bits + for the first code of the shortest length, and the codes are integer + increments for the same length, and zeros are appended as the length + increases. For the deflate format, these bits are stored backwards + from their more natural integer increment ordering, and so when the + decoding tables are built in the large loop below, the integer codes + are incremented backwards. + + This routine assumes, but does not check, that all of the entries in + lens[] are in the range 0..MAXBITS. The caller must assure this. + 1..MAXBITS is interpreted as that code length. zero means that that + symbol does not occur in this code. 
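As a worked illustration of that construction, using the example from the deflate specification (RFC 1951, section 3.2.2): for the alphabet A..H with code lengths (3, 3, 3, 3, 3, 2, 4, 4), sorting by length and handing out incrementing codes gives

    F = 00
    A = 010   B = 011   C = 100   D = 101   E = 110
    G = 1110  H = 1111

written most-significant bit first. Since deflate transmits each code starting from the other end, the table-building loop below steps the code value "backwards", which is what the backwards increment of huff refers to.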
+ + The codes are sorted by computing a count of codes for each length, + creating from that a table of starting indices for each length in the + sorted table, and then entering the symbols in order in the sorted + table. The sorted table is work[], with that space being provided by + the caller. + + The length counts are used for other purposes as well, i.e. finding + the minimum and maximum length codes, determining if there are any + codes at all, checking for a valid set of lengths, and looking ahead + at length counts to determine sub-table sizes when building the + decoding tables. + */ + + /* accumulate lengths for codes (assumes lens[] all in 0..MAXBITS) */ + for (len = 0; len <= MAXBITS; len++) + count[len] = 0; + for (sym = 0; sym < codes; sym++) + count[lens[sym]]++; + + /* bound code lengths, force root to be within code lengths */ + root = *bits; + for (max = MAXBITS; max >= 1; max--) + if (count[max] != 0) break; + if (root > max) root = max; + if (max == 0) { /* no symbols to code at all */ + here.op = (unsigned char)64; /* invalid code marker */ + here.bits = (unsigned char)1; + here.val = (unsigned short)0; + *(*table)++ = here; /* make a table to force an error */ + *(*table)++ = here; + *bits = 1; + return 0; /* no symbols, but wait for decoding to report error */ + } + for (min = 1; min < max; min++) + if (count[min] != 0) break; + if (root < min) root = min; + + /* check for an over-subscribed or incomplete set of lengths */ + left = 1; + for (len = 1; len <= MAXBITS; len++) { + left <<= 1; + left -= count[len]; + if (left < 0) return -1; /* over-subscribed */ + } + if (left > 0 && (type == CODES || max != 1)) + return -1; /* incomplete set */ + + /* generate offsets into symbol table for each length for sorting */ + offs[1] = 0; + for (len = 1; len < MAXBITS; len++) + offs[len + 1] = offs[len] + count[len]; + + /* sort symbols by length, by symbol order within each length */ + for (sym = 0; sym < codes; sym++) + if (lens[sym] != 0) work[offs[lens[sym]]++] = (unsigned short)sym; + + /* + Create and fill in decoding tables. In this loop, the table being + filled is at next and has curr index bits. The code being used is huff + with length len. That code is converted to an index by dropping drop + bits off of the bottom. For codes where len is less than drop + curr, + those top drop + curr - len bits are incremented through all values to + fill the table with replicated entries. + + root is the number of index bits for the root table. When len exceeds + root, sub-tables are created pointed to by the root entry with an index + of the low root bits of huff. This is saved in low to check for when a + new sub-table should be started. drop is zero when the root table is + being filled, and drop is root when sub-tables are being filled. + + When a new sub-table is needed, it is necessary to look ahead in the + code lengths to determine what size sub-table is needed. The length + counts are used for this, and so count[] is decremented as codes are + entered in the tables. + + used keeps track of how many table entries have been allocated from the + provided *table space. It is checked for LENS and DIST tables against + the constants ENOUGH_LENS and ENOUGH_DISTS to guard against changes in + the initial root table size constants. See the comments in inftrees.h + for more information. + + sym increments through all symbols, and the loop terminates when + all codes of length max, i.e. all codes, have been processed. 
This + routine permits incomplete codes, so another loop after this one fills + in the rest of the decoding tables with invalid code markers. + */ + + /* set up for code type */ + switch (type) { + case CODES: + base = extra = work; /* dummy value--not used */ + end = 19; + break; + case LENS: + base = lbase; + base -= 257; + extra = lext; + extra -= 257; + end = 256; + break; + default: /* DISTS */ + base = dbase; + extra = dext; + end = -1; + } + + /* initialize state for loop */ + huff = 0; /* starting code */ + sym = 0; /* starting code symbol */ + len = min; /* starting code length */ + next = *table; /* current table to fill in */ + curr = root; /* current table index bits */ + drop = 0; /* current bits to drop from code for index */ + low = (unsigned)(-1); /* trigger new sub-table when len > root */ + used = 1U << root; /* use root table entries */ + mask = used - 1; /* mask for comparing low */ + + /* check available table space */ + if ((type == LENS && used > ENOUGH_LENS) || + (type == DISTS && used > ENOUGH_DISTS)) + return 1; + + /* process all codes and make table entries */ + for (;;) { + /* create table entry */ + here.bits = (unsigned char)(len - drop); + if ((int)(work[sym]) < end) { + here.op = (unsigned char)0; + here.val = work[sym]; + } + else if ((int)(work[sym]) > end) { + here.op = (unsigned char)(extra[work[sym]]); + here.val = base[work[sym]]; + } + else { + here.op = (unsigned char)(32 + 64); /* end of block */ + here.val = 0; + } + + /* replicate for those indices with low len bits equal to huff */ + incr = 1U << (len - drop); + fill = 1U << curr; + min = fill; /* save offset to next table */ + do { + fill -= incr; + next[(huff >> drop) + fill] = here; + } while (fill != 0); + + /* backwards increment the len-bit code huff */ + incr = 1U << (len - 1); + while (huff & incr) + incr >>= 1; + if (incr != 0) { + huff &= incr - 1; + huff += incr; + } + else + huff = 0; + + /* go to next symbol, update count, len */ + sym++; + if (--(count[len]) == 0) { + if (len == max) break; + len = lens[work[sym]]; + } + + /* create new sub-table if needed */ + if (len > root && (huff & mask) != low) { + /* if first time, transition to sub-tables */ + if (drop == 0) + drop = root; + + /* increment past last table */ + next += min; /* here min is 1 << curr */ + + /* determine length of next table */ + curr = len - drop; + left = (int)(1 << curr); + while (curr + drop < max) { + left -= count[curr + drop]; + if (left <= 0) break; + curr++; + left <<= 1; + } + + /* check for enough space */ + used += 1U << curr; + if ((type == LENS && used > ENOUGH_LENS) || + (type == DISTS && used > ENOUGH_DISTS)) + return 1; + + /* point entry in root table to sub-table */ + low = huff & mask; + (*table)[low].op = (unsigned char)curr; + (*table)[low].bits = (unsigned char)root; + (*table)[low].val = (unsigned short)(next - *table); + } + } + + /* fill in remaining table entry if code is incomplete (guaranteed to have + at most one remaining entry, since if the code is incomplete, the + maximum code length that was allowed to get this far is one bit) */ + if (huff != 0) { + here.op = (unsigned char)64; /* invalid code marker */ + here.bits = (unsigned char)(len - drop); + here.val = (unsigned short)0; + next[huff] = here; + } + + /* set return parameters */ + *table += used; + *bits = root; + return 0; +} ADDED compat/zlib/inftrees.h Index: compat/zlib/inftrees.h ================================================================== --- compat/zlib/inftrees.h +++ compat/zlib/inftrees.h @@ -0,0 +1,62 @@ 
+/* inftrees.h -- header to use inftrees.c + * Copyright (C) 1995-2005, 2010 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* WARNING: this file should *not* be used by applications. It is + part of the implementation of the compression library and is + subject to change. Applications should only use zlib.h. + */ + +/* Structure for decoding tables. Each entry provides either the + information needed to do the operation requested by the code that + indexed that table entry, or it provides a pointer to another + table that indexes more bits of the code. op indicates whether + the entry is a pointer to another table, a literal, a length or + distance, an end-of-block, or an invalid code. For a table + pointer, the low four bits of op is the number of index bits of + that table. For a length or distance, the low four bits of op + is the number of extra bits to get after the code. bits is + the number of bits in this code or part of the code to drop off + of the bit buffer. val is the actual byte to output in the case + of a literal, the base length or distance, or the offset from + the current table to the next table. Each entry is four bytes. */ +typedef struct { + unsigned char op; /* operation, extra bits, table bits */ + unsigned char bits; /* bits in this part of the code */ + unsigned short val; /* offset in table or code value */ +} code; + +/* op values as set by inflate_table(): + 00000000 - literal + 0000tttt - table link, tttt != 0 is the number of table index bits + 0001eeee - length or distance, eeee is the number of extra bits + 01100000 - end of block + 01000000 - invalid code + */ + +/* Maximum size of the dynamic table. The maximum number of code structures is + 1444, which is the sum of 852 for literal/length codes and 592 for distance + codes. These values were found by exhaustive searches using the program + examples/enough.c found in the zlib distribtution. The arguments to that + program are the number of symbols, the initial root table size, and the + maximum bit length of a code. "enough 286 9 15" for literal/length codes + returns returns 852, and "enough 30 6 15" for distance codes returns 592. + The initial root table size (9 or 6) is found in the fifth argument of the + inflate_table() calls in inflate.c and infback.c. If the root table size is + changed, then these maximum sizes would be need to be recalculated and + updated. */ +#define ENOUGH_LENS 852 +#define ENOUGH_DISTS 592 +#define ENOUGH (ENOUGH_LENS+ENOUGH_DISTS) + +/* Type of code to build for inflate_table() */ +typedef enum { + CODES, + LENS, + DISTS +} codetype; + +int ZLIB_INTERNAL inflate_table OF((codetype type, unsigned short FAR *lens, + unsigned codes, code FAR * FAR *table, + unsigned FAR *bits, unsigned short FAR *work)); ADDED compat/zlib/make_vms.com Index: compat/zlib/make_vms.com ================================================================== --- compat/zlib/make_vms.com +++ compat/zlib/make_vms.com @@ -0,0 +1,867 @@ +$! make libz under VMS written by +$! Martin P.J. Zinser +$! +$! In case of problems with the install you might contact me at +$! zinser@zinser.no-ip.info(preferred) or +$! martin.zinser@eurexchange.com (work) +$! +$! Make procedure history for Zlib +$! +$!------------------------------------------------------------------------------ +$! Version history +$! 0.01 20060120 First version to receive a number +$! 0.02 20061008 Adapt to new Makefile.in +$! 0.03 20091224 Add support for large file check +$! 
0.04 20100110 Add new gzclose, gzlib, gzread, gzwrite +$! 0.05 20100221 Exchange zlibdefs.h by zconf.h.in +$! 0.06 20120111 Fix missing amiss_err, update zconf_h.in, fix new exmples +$! subdir path, update module search in makefile.in +$! 0.07 20120115 Triggered by work done by Alexey Chupahin completly redesigned +$! shared image creation +$! 0.08 20120219 Make it work on VAX again, pre-load missing symbols to shared +$! image +$! 0.09 20120305 SMS. P1 sets builder ("MMK", "MMS", " " (built-in)). +$! "" -> automatic, preference: MMK, MMS, built-in. +$! +$ on error then goto err_exit +$! +$ true = 1 +$ false = 0 +$ tmpnam = "temp_" + f$getjpi("","pid") +$ tt = tmpnam + ".txt" +$ tc = tmpnam + ".c" +$ th = tmpnam + ".h" +$ define/nolog tconfig 'th' +$ its_decc = false +$ its_vaxc = false +$ its_gnuc = false +$ s_case = False +$! +$! Setup variables holding "config" information +$! +$ Make = "''p1'" +$ name = "Zlib" +$ version = "?.?.?" +$ v_string = "ZLIB_VERSION" +$ v_file = "zlib.h" +$ ccopt = "/include = []" +$ lopts = "" +$ dnsrl = "" +$ aconf_in_file = "zconf.h.in#zconf.h_in#zconf_h.in" +$ conf_check_string = "" +$ linkonly = false +$ optfile = name + ".opt" +$ mapfile = name + ".map" +$ libdefs = "" +$ vax = f$getsyi("HW_MODEL").lt.1024 +$ axp = f$getsyi("HW_MODEL").ge.1024 .and. f$getsyi("HW_MODEL").lt.4096 +$ ia64 = f$getsyi("HW_MODEL").ge.4096 +$! +$! 2012-03-05 SMS. +$! Why is this needed? And if it is needed, why not simply ".not. vax"? +$! +$!!! if axp .or. ia64 then set proc/parse=extended +$! +$ whoami = f$parse(f$environment("Procedure"),,,,"NO_CONCEAL") +$ mydef = F$parse(whoami,,,"DEVICE") +$ mydir = f$parse(whoami,,,"DIRECTORY") - "][" +$ myproc = f$parse(whoami,,,"Name") + f$parse(whoami,,,"type") +$! +$! Check for MMK/MMS +$! +$ if (Make .eqs. "") +$ then +$ If F$Search ("Sys$System:MMS.EXE") .nes. "" Then Make = "MMS" +$ If F$Type (MMK) .eqs. "STRING" Then Make = "MMK" +$ else +$ Make = f$edit( Make, "trim") +$ endif +$! +$ gosub find_version +$! +$ open/write topt tmp.opt +$ open/write optf 'optfile' +$! +$ gosub check_opts +$! +$! Look for the compiler used +$! +$ gosub check_compiler +$ close topt +$ close optf +$! +$ if its_decc +$ then +$ ccopt = "/prefix=all" + ccopt +$ if f$trnlnm("SYS") .eqs. "" +$ then +$ if axp +$ then +$ define sys sys$library: +$ else +$ ccopt = "/decc" + ccopt +$ define sys decc$library_include: +$ endif +$ endif +$! +$! 2012-03-05 SMS. +$! Why /NAMES = AS_IS? Why not simply ".not. vax"? And why not on VAX? +$! +$ if axp .or. ia64 +$ then +$ ccopt = ccopt + "/name=as_is/opt=(inline=speed)" +$ s_case = true +$ endif +$ endif +$ if its_vaxc .or. its_gnuc +$ then +$ if f$trnlnm("SYS").eqs."" then define sys sys$library: +$ endif +$! +$! Build a fake configure input header +$! +$ open/write conf_hin config.hin +$ write conf_hin "#undef _LARGEFILE64_SOURCE" +$ close conf_hin +$! +$! +$ i = 0 +$FIND_ACONF: +$ fname = f$element(i,"#",aconf_in_file) +$ if fname .eqs. "#" then goto AMISS_ERR +$ if f$search(fname) .eqs. "" +$ then +$ i = i + 1 +$ goto find_aconf +$ endif +$ open/read/err=aconf_err aconf_in 'fname' +$ open/write aconf zconf.h +$ACONF_LOOP: +$ read/end_of_file=aconf_exit aconf_in line +$ work = f$edit(line, "compress,trim") +$ if f$extract(0,6,work) .nes. "#undef" +$ then +$ if f$extract(0,12,work) .nes. 
"#cmakedefine" +$ then +$ write aconf line +$ endif +$ else +$ cdef = f$element(1," ",work) +$ gosub check_config +$ endif +$ goto aconf_loop +$ACONF_EXIT: +$ write aconf "" +$ write aconf "/* VMS specifics added by make_vms.com: */" +$ write aconf "#define VMS 1" +$ write aconf "#include " +$ write aconf "#include " +$ write aconf "#ifdef _LARGEFILE" +$ write aconf "# define off64_t __off64_t" +$ write aconf "# define fopen64 fopen" +$ write aconf "# define fseeko64 fseeko" +$ write aconf "# define lseek64 lseek" +$ write aconf "# define ftello64 ftell" +$ write aconf "#endif" +$ write aconf "#if !defined( __VAX) && (__CRTL_VER >= 70312000)" +$ write aconf "# define HAVE_VSNPRINTF" +$ write aconf "#endif" +$ close aconf_in +$ close aconf +$ if f$search("''th'") .nes. "" then delete 'th';* +$! Build the thing plain or with mms +$! +$ write sys$output "Compiling Zlib sources ..." +$ if make.eqs."" +$ then +$ if (f$search( "example.obj;*") .nes. "") then delete example.obj;* +$ if (f$search( "minigzip.obj;*") .nes. "") then delete minigzip.obj;* +$ CALL MAKE adler32.OBJ "CC ''CCOPT' adler32" - + adler32.c zlib.h zconf.h +$ CALL MAKE compress.OBJ "CC ''CCOPT' compress" - + compress.c zlib.h zconf.h +$ CALL MAKE crc32.OBJ "CC ''CCOPT' crc32" - + crc32.c zlib.h zconf.h +$ CALL MAKE deflate.OBJ "CC ''CCOPT' deflate" - + deflate.c deflate.h zutil.h zlib.h zconf.h +$ CALL MAKE gzclose.OBJ "CC ''CCOPT' gzclose" - + gzclose.c zutil.h zlib.h zconf.h +$ CALL MAKE gzlib.OBJ "CC ''CCOPT' gzlib" - + gzlib.c zutil.h zlib.h zconf.h +$ CALL MAKE gzread.OBJ "CC ''CCOPT' gzread" - + gzread.c zutil.h zlib.h zconf.h +$ CALL MAKE gzwrite.OBJ "CC ''CCOPT' gzwrite" - + gzwrite.c zutil.h zlib.h zconf.h +$ CALL MAKE infback.OBJ "CC ''CCOPT' infback" - + infback.c zutil.h inftrees.h inflate.h inffast.h inffixed.h +$ CALL MAKE inffast.OBJ "CC ''CCOPT' inffast" - + inffast.c zutil.h zlib.h zconf.h inffast.h +$ CALL MAKE inflate.OBJ "CC ''CCOPT' inflate" - + inflate.c zutil.h zlib.h zconf.h infblock.h +$ CALL MAKE inftrees.OBJ "CC ''CCOPT' inftrees" - + inftrees.c zutil.h zlib.h zconf.h inftrees.h +$ CALL MAKE trees.OBJ "CC ''CCOPT' trees" - + trees.c deflate.h zutil.h zlib.h zconf.h +$ CALL MAKE uncompr.OBJ "CC ''CCOPT' uncompr" - + uncompr.c zlib.h zconf.h +$ CALL MAKE zutil.OBJ "CC ''CCOPT' zutil" - + zutil.c zutil.h zlib.h zconf.h +$ write sys$output "Building Zlib ..." +$ CALL MAKE libz.OLB "lib/crea libz.olb *.obj" *.OBJ +$ write sys$output "Building example..." +$ CALL MAKE example.OBJ "CC ''CCOPT' [.test]example" - + [.test]example.c zlib.h zconf.h +$ call make example.exe "LINK example,libz.olb/lib" example.obj libz.olb +$ write sys$output "Building minigzip..." +$ CALL MAKE minigzip.OBJ "CC ''CCOPT' [.test]minigzip" - + [.test]minigzip.c zlib.h zconf.h +$ call make minigzip.exe - + "LINK minigzip,libz.olb/lib" - + minigzip.obj libz.olb +$ else +$ gosub crea_mms +$ write sys$output "Make ''name' ''version' with ''Make' " +$ 'make' +$ endif +$! +$! Create shareable image +$! +$ gosub crea_olist +$ write sys$output "Creating libzshr.exe" +$ call map_2_shopt 'mapfile' 'optfile' +$ LINK_'lopts'/SHARE=libzshr.exe modules.opt/opt,'optfile'/opt +$ write sys$output "Zlib build completed" +$ delete/nolog tmp.opt;* +$ exit +$AMISS_ERR: +$ write sys$output "No source for config.hin found." 
+$ write sys$output "Tried any of ''aconf_in_file'" +$ goto err_exit +$CC_ERR: +$ write sys$output "C compiler required to build ''name'" +$ goto err_exit +$ERR_EXIT: +$ set message/facil/ident/sever/text +$ close/nolog optf +$ close/nolog topt +$ close/nolog aconf_in +$ close/nolog aconf +$ close/nolog out +$ close/nolog min +$ close/nolog mod +$ close/nolog h_in +$ write sys$output "Exiting..." +$ exit 2 +$! +$! +$MAKE: SUBROUTINE !SUBROUTINE TO CHECK DEPENDENCIES +$ V = 'F$Verify(0) +$! P1 = What we are trying to make +$! P2 = Command to make it +$! P3 - P8 What it depends on +$ +$ If F$Search(P1) .Eqs. "" Then Goto Makeit +$ Time = F$CvTime(F$File(P1,"RDT")) +$arg=3 +$Loop: +$ Argument = P'arg +$ If Argument .Eqs. "" Then Goto Exit +$ El=0 +$Loop2: +$ File = F$Element(El," ",Argument) +$ If File .Eqs. " " Then Goto Endl +$ AFile = "" +$Loop3: +$ OFile = AFile +$ AFile = F$Search(File) +$ If AFile .Eqs. "" .Or. AFile .Eqs. OFile Then Goto NextEl +$ If F$CvTime(F$File(AFile,"RDT")) .Ges. Time Then Goto Makeit +$ Goto Loop3 +$NextEL: +$ El = El + 1 +$ Goto Loop2 +$EndL: +$ arg=arg+1 +$ If arg .Le. 8 Then Goto Loop +$ Goto Exit +$ +$Makeit: +$ VV=F$VERIFY(0) +$ write sys$output P2 +$ 'P2 +$ VV='F$Verify(VV) +$Exit: +$ If V Then Set Verify +$ENDSUBROUTINE +$!------------------------------------------------------------------------------ +$! +$! Check command line options and set symbols accordingly +$! +$!------------------------------------------------------------------------------ +$! Version history +$! 0.01 20041206 First version to receive a number +$! 0.02 20060126 Add new "HELP" target +$ CHECK_OPTS: +$ i = 1 +$ OPT_LOOP: +$ if i .lt. 9 +$ then +$ cparm = f$edit(p'i',"upcase") +$! +$! Check if parameter actually contains something +$! +$ if f$edit(cparm,"trim") .nes. "" +$ then +$ if cparm .eqs. "DEBUG" +$ then +$ ccopt = ccopt + "/noopt/deb" +$ lopts = lopts + "/deb" +$ endif +$ if f$locate("CCOPT=",cparm) .lt. f$length(cparm) +$ then +$ start = f$locate("=",cparm) + 1 +$ len = f$length(cparm) - start +$ ccopt = ccopt + f$extract(start,len,cparm) +$ if f$locate("AS_IS",f$edit(ccopt,"UPCASE")) .lt. f$length(ccopt) - + then s_case = true +$ endif +$ if cparm .eqs. "LINK" then linkonly = true +$ if f$locate("LOPTS=",cparm) .lt. f$length(cparm) +$ then +$ start = f$locate("=",cparm) + 1 +$ len = f$length(cparm) - start +$ lopts = lopts + f$extract(start,len,cparm) +$ endif +$ if f$locate("CC=",cparm) .lt. f$length(cparm) +$ then +$ start = f$locate("=",cparm) + 1 +$ len = f$length(cparm) - start +$ cc_com = f$extract(start,len,cparm) + if (cc_com .nes. "DECC") .and. - + (cc_com .nes. "VAXC") .and. - + (cc_com .nes. "GNUC") +$ then +$ write sys$output "Unsupported compiler choice ''cc_com' ignored" +$ write sys$output "Use DECC, VAXC, or GNUC instead" +$ else +$ if cc_com .eqs. "DECC" then its_decc = true +$ if cc_com .eqs. "VAXC" then its_vaxc = true +$ if cc_com .eqs. "GNUC" then its_gnuc = true +$ endif +$ endif +$ if f$locate("MAKE=",cparm) .lt. f$length(cparm) +$ then +$ start = f$locate("=",cparm) + 1 +$ len = f$length(cparm) - start +$ mmks = f$extract(start,len,cparm) +$ if (mmks .eqs. "MMK") .or. (mmks .eqs. "MMS") +$ then +$ make = mmks +$ else +$ write sys$output "Unsupported make choice ''mmks' ignored" +$ write sys$output "Use MMK or MMS instead" +$ endif +$ endif +$ if cparm .eqs. "HELP" then gosub bhelp +$ endif +$ i = i + 1 +$ goto opt_loop +$ endif +$ return +$!------------------------------------------------------------------------------ +$! +$! 
Look for the compiler used +$! +$! Version history +$! 0.01 20040223 First version to receive a number +$! 0.02 20040229 Save/set value of decc$no_rooted_search_lists +$! 0.03 20060202 Extend handling of GNU C +$! 0.04 20090402 Compaq -> hp +$CHECK_COMPILER: +$ if (.not. (its_decc .or. its_vaxc .or. its_gnuc)) +$ then +$ its_decc = (f$search("SYS$SYSTEM:DECC$COMPILER.EXE") .nes. "") +$ its_vaxc = .not. its_decc .and. (F$Search("SYS$System:VAXC.Exe") .nes. "") +$ its_gnuc = .not. (its_decc .or. its_vaxc) .and. (f$trnlnm("gnu_cc") .nes. "") +$ endif +$! +$! Exit if no compiler available +$! +$ if (.not. (its_decc .or. its_vaxc .or. its_gnuc)) +$ then goto CC_ERR +$ else +$ if its_decc +$ then +$ write sys$output "CC compiler check ... hp C" +$ if f$trnlnm("decc$no_rooted_search_lists") .nes. "" +$ then +$ dnrsl = f$trnlnm("decc$no_rooted_search_lists") +$ endif +$ define/nolog decc$no_rooted_search_lists 1 +$ else +$ if its_vaxc then write sys$output "CC compiler check ... VAX C" +$ if its_gnuc +$ then +$ write sys$output "CC compiler check ... GNU C" +$ if f$trnlnm(topt) then write topt "gnu_cc:[000000]gcclib.olb/lib" +$ if f$trnlnm(optf) then write optf "gnu_cc:[000000]gcclib.olb/lib" +$ cc = "gcc" +$ endif +$ if f$trnlnm(topt) then write topt "sys$share:vaxcrtl.exe/share" +$ if f$trnlnm(optf) then write optf "sys$share:vaxcrtl.exe/share" +$ endif +$ endif +$ return +$!------------------------------------------------------------------------------ +$! +$! If MMS/MMK are available dump out the descrip.mms if required +$! +$CREA_MMS: +$ write sys$output "Creating descrip.mms..." +$ create descrip.mms +$ open/append out descrip.mms +$ copy sys$input: out +$ deck +# descrip.mms: MMS description file for building zlib on VMS +# written by Martin P.J. Zinser +# + +OBJS = adler32.obj, compress.obj, crc32.obj, gzclose.obj, gzlib.obj\ + gzread.obj, gzwrite.obj, uncompr.obj, infback.obj\ + deflate.obj, trees.obj, zutil.obj, inflate.obj, \ + inftrees.obj, inffast.obj + +$ eod +$ write out "CFLAGS=", ccopt +$ write out "LOPTS=", lopts +$ write out "all : example.exe minigzip.exe libz.olb" +$ copy sys$input: out +$ deck + @ write sys$output " Example applications available" + +libz.olb : libz.olb($(OBJS)) + @ write sys$output " libz available" + +example.exe : example.obj libz.olb + link $(LOPTS) example,libz.olb/lib + +minigzip.exe : minigzip.obj libz.olb + link $(LOPTS) minigzip,libz.olb/lib + +clean : + delete *.obj;*,libz.olb;*,*.opt;*,*.exe;* + + +# Other dependencies. +adler32.obj : adler32.c zutil.h zlib.h zconf.h +compress.obj : compress.c zlib.h zconf.h +crc32.obj : crc32.c zutil.h zlib.h zconf.h +deflate.obj : deflate.c deflate.h zutil.h zlib.h zconf.h +example.obj : [.test]example.c zlib.h zconf.h +gzclose.obj : gzclose.c zutil.h zlib.h zconf.h +gzlib.obj : gzlib.c zutil.h zlib.h zconf.h +gzread.obj : gzread.c zutil.h zlib.h zconf.h +gzwrite.obj : gzwrite.c zutil.h zlib.h zconf.h +inffast.obj : inffast.c zutil.h zlib.h zconf.h inftrees.h inffast.h +inflate.obj : inflate.c zutil.h zlib.h zconf.h +inftrees.obj : inftrees.c zutil.h zlib.h zconf.h inftrees.h +minigzip.obj : [.test]minigzip.c zlib.h zconf.h +trees.obj : trees.c deflate.h zutil.h zlib.h zconf.h +uncompr.obj : uncompr.c zlib.h zconf.h +zutil.obj : zutil.c zutil.h zlib.h zconf.h +infback.obj : infback.c zutil.h inftrees.h inflate.h inffast.h inffixed.h +$ eod +$ close out +$ return +$!------------------------------------------------------------------------------ +$! +$! 
Read list of core library sources from makefile.in and create options +$! needed to build shareable image +$! +$CREA_OLIST: +$ open/read min makefile.in +$ open/write mod modules.opt +$ src_check_list = "OBJZ =#OBJG =" +$MRLOOP: +$ read/end=mrdone min rec +$ i = 0 +$SRC_CHECK_LOOP: +$ src_check = f$element(i, "#", src_check_list) +$ i = i+1 +$ if src_check .eqs. "#" then goto mrloop +$ if (f$extract(0,6,rec) .nes. src_check) then goto src_check_loop +$ rec = rec - src_check +$ gosub extra_filnam +$ if (f$element(1,"\",rec) .eqs. "\") then goto mrloop +$MRSLOOP: +$ read/end=mrdone min rec +$ gosub extra_filnam +$ if (f$element(1,"\",rec) .nes. "\") then goto mrsloop +$MRDONE: +$ close min +$ close mod +$ return +$!------------------------------------------------------------------------------ +$! +$! Take record extracted in crea_olist and split it into single filenames +$! +$EXTRA_FILNAM: +$ myrec = f$edit(rec - "\", "trim,compress") +$ i = 0 +$FELOOP: +$ srcfil = f$element(i," ", myrec) +$ if (srcfil .nes. " ") +$ then +$ write mod f$parse(srcfil,,,"NAME"), ".obj" +$ i = i + 1 +$ goto feloop +$ endif +$ return +$!------------------------------------------------------------------------------ +$! +$! Find current Zlib version number +$! +$FIND_VERSION: +$ open/read h_in 'v_file' +$hloop: +$ read/end=hdone h_in rec +$ rec = f$edit(rec,"TRIM") +$ if (f$extract(0,1,rec) .nes. "#") then goto hloop +$ rec = f$edit(rec - "#", "TRIM") +$ if f$element(0," ",rec) .nes. "define" then goto hloop +$ if f$element(1," ",rec) .eqs. v_string +$ then +$ version = 'f$element(2," ",rec)' +$ goto hdone +$ endif +$ goto hloop +$hdone: +$ close h_in +$ return +$!------------------------------------------------------------------------------ +$! +$CHECK_CONFIG: +$! +$ in_ldef = f$locate(cdef,libdefs) +$ if (in_ldef .lt. f$length(libdefs)) +$ then +$ write aconf "#define ''cdef' 1" +$ libdefs = f$extract(0,in_ldef,libdefs) + - + f$extract(in_ldef + f$length(cdef) + 1, - + f$length(libdefs) - in_ldef - f$length(cdef) - 1, - + libdefs) +$ else +$ if (f$type('cdef') .eqs. "INTEGER") +$ then +$ write aconf "#define ''cdef' ", 'cdef' +$ else +$ if (f$type('cdef') .eqs. "STRING") +$ then +$ write aconf "#define ''cdef' ", """", '''cdef'', """" +$ else +$ gosub check_cc_def +$ endif +$ endif +$ endif +$ return +$!------------------------------------------------------------------------------ +$! +$! Check if this is a define relating to the properties of the C/C++ +$! compiler +$! +$ CHECK_CC_DEF: +$ if (cdef .eqs. "_LARGEFILE64_SOURCE") +$ then +$ copy sys$input: 'tc' +$ deck +#include "tconfig" +#define _LARGEFILE +#include + +int main(){ +FILE *fp; + fp = fopen("temp.txt","r"); + fseeko(fp,1,SEEK_SET); + fclose(fp); +} + +$ eod +$ test_inv = false +$ comm_h = false +$ gosub cc_prop_check +$ return +$ endif +$ write aconf "/* ", line, " */" +$ return +$!------------------------------------------------------------------------------ +$! +$! Check for properties of C/C++ compiler +$! +$! Version history +$! 0.01 20031020 First version to receive a number +$! 0.02 20031022 Added logic for defines with value +$! 0.03 20040309 Make sure local config file gets not deleted +$! 0.04 20041230 Also write include for configure run +$! 0.05 20050103 Add processing of "comment defines" +$CC_PROP_CHECK: +$ cc_prop = true +$ is_need = false +$ is_need = (f$extract(0,4,cdef) .eqs. "NEED") .or. (test_inv .eq. true) +$ if f$search(th) .eqs. 
"" then create 'th' +$ set message/nofac/noident/nosever/notext +$ on error then continue +$ cc 'tmpnam' +$ if .not. ($status) then cc_prop = false +$ on error then continue +$! The headers might lie about the capabilities of the RTL +$ link 'tmpnam',tmp.opt/opt +$ if .not. ($status) then cc_prop = false +$ set message/fac/ident/sever/text +$ on error then goto err_exit +$ delete/nolog 'tmpnam'.*;*/exclude='th' +$ if (cc_prop .and. .not. is_need) .or. - + (.not. cc_prop .and. is_need) +$ then +$ write sys$output "Checking for ''cdef'... yes" +$ if f$type('cdef_val'_yes) .nes. "" +$ then +$ if f$type('cdef_val'_yes) .eqs. "INTEGER" - + then call write_config f$fao("#define !AS !UL",cdef,'cdef_val'_yes) +$ if f$type('cdef_val'_yes) .eqs. "STRING" - + then call write_config f$fao("#define !AS !AS",cdef,'cdef_val'_yes) +$ else +$ call write_config f$fao("#define !AS 1",cdef) +$ endif +$ if (cdef .eqs. "HAVE_FSEEKO") .or. (cdef .eqs. "_LARGE_FILES") .or. - + (cdef .eqs. "_LARGEFILE64_SOURCE") then - + call write_config f$string("#define _LARGEFILE 1") +$ else +$ write sys$output "Checking for ''cdef'... no" +$ if (comm_h) +$ then + call write_config f$fao("/* !AS */",line) +$ else +$ if f$type('cdef_val'_no) .nes. "" +$ then +$ if f$type('cdef_val'_no) .eqs. "INTEGER" - + then call write_config f$fao("#define !AS !UL",cdef,'cdef_val'_no) +$ if f$type('cdef_val'_no) .eqs. "STRING" - + then call write_config f$fao("#define !AS !AS",cdef,'cdef_val'_no) +$ else +$ call write_config f$fao("#undef !AS",cdef) +$ endif +$ endif +$ endif +$ return +$!------------------------------------------------------------------------------ +$! +$! Check for properties of C/C++ compiler with multiple result values +$! +$! Version history +$! 0.01 20040127 First version +$! 0.02 20050103 Reconcile changes from cc_prop up to version 0.05 +$CC_MPROP_CHECK: +$ cc_prop = true +$ i = 1 +$ idel = 1 +$ MT_LOOP: +$ if f$type(result_'i') .eqs. "STRING" +$ then +$ set message/nofac/noident/nosever/notext +$ on error then continue +$ cc 'tmpnam'_'i' +$ if .not. ($status) then cc_prop = false +$ on error then continue +$! The headers might lie about the capabilities of the RTL +$ link 'tmpnam'_'i',tmp.opt/opt +$ if .not. ($status) then cc_prop = false +$ set message/fac/ident/sever/text +$ on error then goto err_exit +$ delete/nolog 'tmpnam'_'i'.*;* +$ if (cc_prop) +$ then +$ write sys$output "Checking for ''cdef'... ", mdef_'i' +$ if f$type(mdef_'i') .eqs. "INTEGER" - + then call write_config f$fao("#define !AS !UL",cdef,mdef_'i') +$ if f$type('cdef_val'_yes) .eqs. "STRING" - + then call write_config f$fao("#define !AS !AS",cdef,mdef_'i') +$ goto msym_clean +$ else +$ i = i + 1 +$ goto mt_loop +$ endif +$ endif +$ write sys$output "Checking for ''cdef'... no" +$ call write_config f$fao("#undef !AS",cdef) +$ MSYM_CLEAN: +$ if (idel .le. msym_max) +$ then +$ delete/sym mdef_'idel' +$ idel = idel + 1 +$ goto msym_clean +$ endif +$ return +$!------------------------------------------------------------------------------ +$! +$! Write configuration to both permanent and temporary config file +$! +$! Version history +$! 0.01 20031029 First version to receive a number +$! +$WRITE_CONFIG: SUBROUTINE +$ write aconf 'p1' +$ open/append confh 'th' +$ write confh 'p1' +$ close confh +$ENDSUBROUTINE +$!------------------------------------------------------------------------------ +$! +$! Analyze the project map file and create the symbol vector for a shareable +$! image from it +$! +$! Version history +$! 0.01 20120128 First version +$! 
0.02 20120226 Add pre-load logic +$! +$ MAP_2_SHOPT: Subroutine +$! +$ SAY := "WRITE_ SYS$OUTPUT" +$! +$ IF F$SEARCH("''P1'") .EQS. "" +$ THEN +$ SAY "MAP_2_SHOPT-E-NOSUCHFILE: Error, inputfile ''p1' not available" +$ goto exit_m2s +$ ENDIF +$ IF "''P2'" .EQS. "" +$ THEN +$ SAY "MAP_2_SHOPT: Error, no output file provided" +$ goto exit_m2s +$ ENDIF +$! +$ module1 = "deflate#deflateEnd#deflateInit_#deflateParams#deflateSetDictionary" +$ module2 = "gzclose#gzerror#gzgetc#gzgets#gzopen#gzprintf#gzputc#gzputs#gzread" +$ module3 = "gzseek#gztell#inflate#inflateEnd#inflateInit_#inflateSetDictionary" +$ module4 = "inflateSync#uncompress#zlibVersion#compress" +$ open/read map 'p1 +$ if axp .or. ia64 +$ then +$ open/write aopt a.opt +$ open/write bopt b.opt +$ write aopt " CASE_SENSITIVE=YES" +$ write bopt "SYMBOL_VECTOR= (-" +$ mod_sym_num = 1 +$ MOD_SYM_LOOP: +$ if f$type(module'mod_sym_num') .nes. "" +$ then +$ mod_in = 0 +$ MOD_SYM_IN: +$ shared_proc = f$element(mod_in, "#", module'mod_sym_num') +$ if shared_proc .nes. "#" +$ then +$ write aopt f$fao(" symbol_vector=(!AS/!AS=PROCEDURE)",- + f$edit(shared_proc,"upcase"),shared_proc) +$ write bopt f$fao("!AS=PROCEDURE,-",shared_proc) +$ mod_in = mod_in + 1 +$ goto mod_sym_in +$ endif +$ mod_sym_num = mod_sym_num + 1 +$ goto mod_sym_loop +$ endif +$MAP_LOOP: +$ read/end=map_end map line +$ if (f$locate("{",line).lt. f$length(line)) .or. - + (f$locate("global:", line) .lt. f$length(line)) +$ then +$ proc = true +$ goto map_loop +$ endif +$ if f$locate("}",line).lt. f$length(line) then proc = false +$ if f$locate("local:", line) .lt. f$length(line) then proc = false +$ if proc +$ then +$ shared_proc = f$edit(line,"collapse") +$ chop_semi = f$locate(";", shared_proc) +$ if chop_semi .lt. f$length(shared_proc) then - + shared_proc = f$extract(0, chop_semi, shared_proc) +$ write aopt f$fao(" symbol_vector=(!AS/!AS=PROCEDURE)",- + f$edit(shared_proc,"upcase"),shared_proc) +$ write bopt f$fao("!AS=PROCEDURE,-",shared_proc) +$ endif +$ goto map_loop +$MAP_END: +$ close/nolog aopt +$ close/nolog bopt +$ open/append libopt 'p2' +$ open/read aopt a.opt +$ open/read bopt b.opt +$ALOOP: +$ read/end=aloop_end aopt line +$ write libopt line +$ goto aloop +$ALOOP_END: +$ close/nolog aopt +$ sv = "" +$BLOOP: +$ read/end=bloop_end bopt svn +$ if (svn.nes."") +$ then +$ if (sv.nes."") then write libopt sv +$ sv = svn +$ endif +$ goto bloop +$BLOOP_END: +$ write libopt f$extract(0,f$length(sv)-2,sv), "-" +$ write libopt ")" +$ close/nolog bopt +$ delete/nolog/noconf a.opt;*,b.opt;* +$ else +$ if vax +$ then +$ open/append libopt 'p2' +$ mod_sym_num = 1 +$ VMOD_SYM_LOOP: +$ if f$type(module'mod_sym_num') .nes. "" +$ then +$ mod_in = 0 +$ VMOD_SYM_IN: +$ shared_proc = f$element(mod_in, "#", module'mod_sym_num') +$ if shared_proc .nes. "#" +$ then +$ write libopt f$fao("UNIVERSAL=!AS",- + f$edit(shared_proc,"upcase")) +$ mod_in = mod_in + 1 +$ goto vmod_sym_in +$ endif +$ mod_sym_num = mod_sym_num + 1 +$ goto vmod_sym_loop +$ endif +$VMAP_LOOP: +$ read/end=vmap_end map line +$ if (f$locate("{",line).lt. f$length(line)) .or. - + (f$locate("global:", line) .lt. f$length(line)) +$ then +$ proc = true +$ goto vmap_loop +$ endif +$ if f$locate("}",line).lt. f$length(line) then proc = false +$ if f$locate("local:", line) .lt. f$length(line) then proc = false +$ if proc +$ then +$ shared_proc = f$edit(line,"collapse") +$ chop_semi = f$locate(";", shared_proc) +$ if chop_semi .lt. 
f$length(shared_proc) then - + shared_proc = f$extract(0, chop_semi, shared_proc) +$ write libopt f$fao("UNIVERSAL=!AS",- + f$edit(shared_proc,"upcase")) +$ endif +$ goto vmap_loop +$VMAP_END: +$ else +$ write sys$output "Unknown Architecture (Not VAX, AXP, or IA64)" +$ write sys$output "No options file created" +$ endif +$ endif +$ EXIT_M2S: +$ close/nolog map +$ close/nolog libopt +$ endsubroutine ADDED compat/zlib/msdos/Makefile.bor Index: compat/zlib/msdos/Makefile.bor ================================================================== --- compat/zlib/msdos/Makefile.bor +++ compat/zlib/msdos/Makefile.bor @@ -0,0 +1,115 @@ +# Makefile for zlib +# Borland C++ +# Last updated: 15-Mar-2003 + +# To use, do "make -fmakefile.bor" +# To compile in small model, set below: MODEL=s + +# WARNING: the small model is supported but only for small values of +# MAX_WBITS and MAX_MEM_LEVEL. For example: +# -DMAX_WBITS=11 -DDEF_WBITS=11 -DMAX_MEM_LEVEL=3 +# If you wish to reduce the memory requirements (default 256K for big +# objects plus a few K), you can add to the LOC macro below: +# -DMAX_MEM_LEVEL=7 -DMAX_WBITS=14 +# See zconf.h for details about the memory requirements. + +# ------------ Turbo C++, Borland C++ ------------ + +# Optional nonstandard preprocessor flags (e.g. -DMAX_MEM_LEVEL=7) +# should be added to the environment via "set LOCAL_ZLIB=-DFOO" or added +# to the declaration of LOC here: +LOC = $(LOCAL_ZLIB) + +# type for CPU required: 0: 8086, 1: 80186, 2: 80286, 3: 80386, etc. +CPU_TYP = 0 + +# memory model: one of s, m, c, l (small, medium, compact, large) +MODEL=l + +# replace bcc with tcc for Turbo C++ 1.0, with bcc32 for the 32 bit version +CC=bcc +LD=bcc +AR=tlib + +# compiler flags +# replace "-O2" by "-O -G -a -d" for Turbo C++ 1.0 +CFLAGS=-O2 -Z -m$(MODEL) $(LOC) + +LDFLAGS=-m$(MODEL) -f- + + +# variables +ZLIB_LIB = zlib_$(MODEL).lib + +OBJ1 = adler32.obj compress.obj crc32.obj deflate.obj gzclose.obj gzlib.obj gzread.obj +OBJ2 = gzwrite.obj infback.obj inffast.obj inflate.obj inftrees.obj trees.obj uncompr.obj zutil.obj +OBJP1 = +adler32.obj+compress.obj+crc32.obj+deflate.obj+gzclose.obj+gzlib.obj+gzread.obj +OBJP2 = +gzwrite.obj+infback.obj+inffast.obj+inflate.obj+inftrees.obj+trees.obj+uncompr.obj+zutil.obj + + +# targets +all: $(ZLIB_LIB) example.exe minigzip.exe + +.c.obj: + $(CC) -c $(CFLAGS) $*.c + +adler32.obj: adler32.c zlib.h zconf.h + +compress.obj: compress.c zlib.h zconf.h + +crc32.obj: crc32.c zlib.h zconf.h crc32.h + +deflate.obj: deflate.c deflate.h zutil.h zlib.h zconf.h + +gzclose.obj: gzclose.c zlib.h zconf.h gzguts.h + +gzlib.obj: gzlib.c zlib.h zconf.h gzguts.h + +gzread.obj: gzread.c zlib.h zconf.h gzguts.h + +gzwrite.obj: gzwrite.c zlib.h zconf.h gzguts.h + +infback.obj: infback.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h inffixed.h + +inffast.obj: inffast.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h + +inflate.obj: inflate.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h inffixed.h + +inftrees.obj: inftrees.c zutil.h zlib.h zconf.h inftrees.h + +trees.obj: trees.c zutil.h zlib.h zconf.h deflate.h trees.h + +uncompr.obj: uncompr.c zlib.h zconf.h + +zutil.obj: zutil.c zutil.h zlib.h zconf.h + +example.obj: test/example.c zlib.h zconf.h + +minigzip.obj: test/minigzip.c zlib.h zconf.h + + +# the command line is cut to fit in the MS-DOS 128 byte limit: +$(ZLIB_LIB): $(OBJ1) $(OBJ2) + -del $(ZLIB_LIB) + $(AR) $(ZLIB_LIB) $(OBJP1) + $(AR) $(ZLIB_LIB) $(OBJP2) + +example.exe: example.obj $(ZLIB_LIB) + $(LD) $(LDFLAGS) 
example.obj $(ZLIB_LIB) + +minigzip.exe: minigzip.obj $(ZLIB_LIB) + $(LD) $(LDFLAGS) minigzip.obj $(ZLIB_LIB) + +test: example.exe minigzip.exe + example + echo hello world | minigzip | minigzip -d + +clean: + -del *.obj + -del *.lib + -del *.exe + -del zlib_*.bak + -del foo.gz ADDED compat/zlib/msdos/Makefile.dj2 Index: compat/zlib/msdos/Makefile.dj2 ================================================================== --- compat/zlib/msdos/Makefile.dj2 +++ compat/zlib/msdos/Makefile.dj2 @@ -0,0 +1,104 @@ +# Makefile for zlib. Modified for djgpp v2.0 by F. J. Donahoe, 3/15/96. +# Copyright (C) 1995-1998 Jean-loup Gailly. +# For conditions of distribution and use, see copyright notice in zlib.h + +# To compile, or to compile and test, type: +# +# make -fmakefile.dj2; make test -fmakefile.dj2 +# +# To install libz.a, zconf.h and zlib.h in the djgpp directories, type: +# +# make install -fmakefile.dj2 +# +# after first defining LIBRARY_PATH and INCLUDE_PATH in djgpp.env as +# in the sample below if the pattern of the DJGPP distribution is to +# be followed. Remember that, while 'es around <=> are ignored in +# makefiles, they are *not* in batch files or in djgpp.env. +# - - - - - +# [make] +# INCLUDE_PATH=%\>;INCLUDE_PATH%%\DJDIR%\include +# LIBRARY_PATH=%\>;LIBRARY_PATH%%\DJDIR%\lib +# BUTT=-m486 +# - - - - - +# Alternately, these variables may be defined below, overriding the values +# in djgpp.env, as +# INCLUDE_PATH=c:\usr\include +# LIBRARY_PATH=c:\usr\lib + +CC=gcc + +#CFLAGS=-MMD -O +#CFLAGS=-O -DMAX_WBITS=14 -DMAX_MEM_LEVEL=7 +#CFLAGS=-MMD -g -DDEBUG +CFLAGS=-MMD -O3 $(BUTT) -Wall -Wwrite-strings -Wpointer-arith -Wconversion \ + -Wstrict-prototypes -Wmissing-prototypes + +# If cp.exe is available, replace "copy /Y" with "cp -fp" . +CP=copy /Y +# If gnu install.exe is available, replace $(CP) with ginstall. +INSTALL=$(CP) +# The default value of RM is "rm -f." If "rm.exe" is found, comment out: +RM=del +LDLIBS=-L. -lz +LD=$(CC) -s -o +LDSHARED=$(CC) + +INCL=zlib.h zconf.h +LIBS=libz.a + +AR=ar rcs + +prefix=/usr/local +exec_prefix = $(prefix) + +OBJS = adler32.o compress.o crc32.o gzclose.o gzlib.o gzread.o gzwrite.o \ + uncompr.o deflate.o trees.o zutil.o inflate.o infback.o inftrees.o inffast.o + +OBJA = +# to use the asm code: make OBJA=match.o + +TEST_OBJS = example.o minigzip.o + +all: example.exe minigzip.exe + +check: test +test: all + ./example + echo hello world | .\minigzip | .\minigzip -d + +%.o : %.c + $(CC) $(CFLAGS) -c $< -o $@ + +libz.a: $(OBJS) $(OBJA) + $(AR) $@ $(OBJS) $(OBJA) + +%.exe : %.o $(LIBS) + $(LD) $@ $< $(LDLIBS) + +# INCLUDE_PATH and LIBRARY_PATH were set for [make] in djgpp.env . + +.PHONY : uninstall clean + +install: $(INCL) $(LIBS) + -@if not exist $(INCLUDE_PATH)\nul mkdir $(INCLUDE_PATH) + -@if not exist $(LIBRARY_PATH)\nul mkdir $(LIBRARY_PATH) + $(INSTALL) zlib.h $(INCLUDE_PATH) + $(INSTALL) zconf.h $(INCLUDE_PATH) + $(INSTALL) libz.a $(LIBRARY_PATH) + +uninstall: + $(RM) $(INCLUDE_PATH)\zlib.h + $(RM) $(INCLUDE_PATH)\zconf.h + $(RM) $(LIBRARY_PATH)\libz.a + +clean: + $(RM) *.d + $(RM) *.o + $(RM) *.exe + $(RM) libz.a + $(RM) foo.gz + +DEPS := $(wildcard *.d) +ifneq ($(DEPS),) +include $(DEPS) +endif ADDED compat/zlib/msdos/Makefile.emx Index: compat/zlib/msdos/Makefile.emx ================================================================== --- compat/zlib/msdos/Makefile.emx +++ compat/zlib/msdos/Makefile.emx @@ -0,0 +1,69 @@ +# Makefile for zlib. Modified for emx 0.9c by Chr. Spieler, 6/17/98. +# Copyright (C) 1995-1998 Jean-loup Gailly. 
+# For conditions of distribution and use, see copyright notice in zlib.h + +# To compile, or to compile and test, type: +# +# make -fmakefile.emx; make test -fmakefile.emx +# + +CC=gcc + +#CFLAGS=-MMD -O +#CFLAGS=-O -DMAX_WBITS=14 -DMAX_MEM_LEVEL=7 +#CFLAGS=-MMD -g -DDEBUG +CFLAGS=-MMD -O3 $(BUTT) -Wall -Wwrite-strings -Wpointer-arith -Wconversion \ + -Wstrict-prototypes -Wmissing-prototypes + +# If cp.exe is available, replace "copy /Y" with "cp -fp" . +CP=copy /Y +# If gnu install.exe is available, replace $(CP) with ginstall. +INSTALL=$(CP) +# The default value of RM is "rm -f." If "rm.exe" is found, comment out: +RM=del +LDLIBS=-L. -lzlib +LD=$(CC) -s -o +LDSHARED=$(CC) + +INCL=zlib.h zconf.h +LIBS=zlib.a + +AR=ar rcs + +prefix=/usr/local +exec_prefix = $(prefix) + +OBJS = adler32.o compress.o crc32.o gzclose.o gzlib.o gzread.o gzwrite.o \ + uncompr.o deflate.o trees.o zutil.o inflate.o infback.o inftrees.o inffast.o + +TEST_OBJS = example.o minigzip.o + +all: example.exe minigzip.exe + +test: all + ./example + echo hello world | .\minigzip | .\minigzip -d + +%.o : %.c + $(CC) $(CFLAGS) -c $< -o $@ + +zlib.a: $(OBJS) + $(AR) $@ $(OBJS) + +%.exe : %.o $(LIBS) + $(LD) $@ $< $(LDLIBS) + + +.PHONY : clean + +clean: + $(RM) *.d + $(RM) *.o + $(RM) *.exe + $(RM) zlib.a + $(RM) foo.gz + +DEPS := $(wildcard *.d) +ifneq ($(DEPS),) +include $(DEPS) +endif ADDED compat/zlib/msdos/Makefile.msc Index: compat/zlib/msdos/Makefile.msc ================================================================== --- compat/zlib/msdos/Makefile.msc +++ compat/zlib/msdos/Makefile.msc @@ -0,0 +1,112 @@ +# Makefile for zlib +# Microsoft C 5.1 or later +# Last updated: 19-Mar-2003 + +# To use, do "make makefile.msc" +# To compile in small model, set below: MODEL=S + +# If you wish to reduce the memory requirements (default 256K for big +# objects plus a few K), you can add to the LOC macro below: +# -DMAX_MEM_LEVEL=7 -DMAX_WBITS=14 +# See zconf.h for details about the memory requirements. + +# ------------- Microsoft C 5.1 and later ------------- + +# Optional nonstandard preprocessor flags (e.g. -DMAX_MEM_LEVEL=7) +# should be added to the environment via "set LOCAL_ZLIB=-DFOO" or added +# to the declaration of LOC here: +LOC = $(LOCAL_ZLIB) + +# Type for CPU required: 0: 8086, 1: 80186, 2: 80286, 3: 80386, etc. +CPU_TYP = 0 + +# Memory model: one of S, M, C, L (small, medium, compact, large) +MODEL=L + +CC=cl +CFLAGS=-nologo -A$(MODEL) -G$(CPU_TYP) -W3 -Oait -Gs $(LOC) +#-Ox generates bad code with MSC 5.1 +LIB_CFLAGS=-Zl $(CFLAGS) + +LD=link +LDFLAGS=/noi/e/st:0x1500/noe/farcall/packcode +# "/farcall/packcode" are only useful for `large code' memory models +# but should be a "no-op" for small code models. 
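
One practical consequence of building the library with reduced limits is that
applications must not request more at run time. The following is a minimal sketch
(not part of this makefile; it only assumes zlib.h and the built library are
available) of asking deflate for windowBits=14 and memLevel=7 explicitly, which
stays within a -DMAX_WBITS=14 -DMAX_MEM_LEVEL=7 build:

    #include <stdio.h>
    #include "zlib.h"

    int main(void)
    {
        z_stream strm;
        int ret;

        strm.zalloc = Z_NULL;
        strm.zfree  = Z_NULL;
        strm.opaque = Z_NULL;

        /* 14-bit window and memLevel 7 instead of the 15/8 defaults */
        ret = deflateInit2(&strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                           14, 7, Z_DEFAULT_STRATEGY);
        if (ret != Z_OK) {
            fprintf(stderr, "deflateInit2 failed: %d\n", ret);
            return 1;
        }

        /* ... deflate() calls would go here ... */

        deflateEnd(&strm);
        return 0;
    }
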
+ + +# variables +ZLIB_LIB = zlib_$(MODEL).lib + +OBJ1 = adler32.obj compress.obj crc32.obj deflate.obj gzclose.obj gzlib.obj gzread.obj +OBJ2 = gzwrite.obj infback.obj inffast.obj inflate.obj inftrees.obj trees.obj uncompr.obj zutil.obj + + +# targets +all: $(ZLIB_LIB) example.exe minigzip.exe + +.c.obj: + $(CC) -c $(LIB_CFLAGS) $*.c + +adler32.obj: adler32.c zlib.h zconf.h + +compress.obj: compress.c zlib.h zconf.h + +crc32.obj: crc32.c zlib.h zconf.h crc32.h + +deflate.obj: deflate.c deflate.h zutil.h zlib.h zconf.h + +gzclose.obj: gzclose.c zlib.h zconf.h gzguts.h + +gzlib.obj: gzlib.c zlib.h zconf.h gzguts.h + +gzread.obj: gzread.c zlib.h zconf.h gzguts.h + +gzwrite.obj: gzwrite.c zlib.h zconf.h gzguts.h + +infback.obj: infback.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h inffixed.h + +inffast.obj: inffast.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h + +inflate.obj: inflate.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h inffixed.h + +inftrees.obj: inftrees.c zutil.h zlib.h zconf.h inftrees.h + +trees.obj: trees.c zutil.h zlib.h zconf.h deflate.h trees.h + +uncompr.obj: uncompr.c zlib.h zconf.h + +zutil.obj: zutil.c zutil.h zlib.h zconf.h + +example.obj: test/example.c zlib.h zconf.h + $(CC) -c $(CFLAGS) $*.c + +minigzip.obj: test/minigzip.c zlib.h zconf.h + $(CC) -c $(CFLAGS) $*.c + + +# the command line is cut to fit in the MS-DOS 128 byte limit: +$(ZLIB_LIB): $(OBJ1) $(OBJ2) + if exist $(ZLIB_LIB) del $(ZLIB_LIB) + lib $(ZLIB_LIB) $(OBJ1); + lib $(ZLIB_LIB) $(OBJ2); + +example.exe: example.obj $(ZLIB_LIB) + $(LD) $(LDFLAGS) example.obj,,,$(ZLIB_LIB); + +minigzip.exe: minigzip.obj $(ZLIB_LIB) + $(LD) $(LDFLAGS) minigzip.obj,,,$(ZLIB_LIB); + +test: example.exe minigzip.exe + example + echo hello world | minigzip | minigzip -d + +clean: + -del *.obj + -del *.lib + -del *.exe + -del *.map + -del zlib_*.bak + -del foo.gz ADDED compat/zlib/msdos/Makefile.tc Index: compat/zlib/msdos/Makefile.tc ================================================================== --- compat/zlib/msdos/Makefile.tc +++ compat/zlib/msdos/Makefile.tc @@ -0,0 +1,100 @@ +# Makefile for zlib +# Turbo C 2.01, Turbo C++ 1.01 +# Last updated: 15-Mar-2003 + +# To use, do "make -fmakefile.tc" +# To compile in small model, set below: MODEL=s + +# WARNING: the small model is supported but only for small values of +# MAX_WBITS and MAX_MEM_LEVEL. For example: +# -DMAX_WBITS=11 -DMAX_MEM_LEVEL=3 +# If you wish to reduce the memory requirements (default 256K for big +# objects plus a few K), you can add to CFLAGS below: +# -DMAX_MEM_LEVEL=7 -DMAX_WBITS=14 +# See zconf.h for details about the memory requirements. 
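
To put rough numbers on those options: zconf.h estimates deflate's big objects at
(1 << (windowBits+2)) + (1 << (memLevel+9)) bytes and inflate's window at
1 << windowBits bytes. So the defaults (windowBits=15, memLevel=8) need about
128K + 128K = 256K for deflate, a -DMAX_WBITS=14 -DMAX_MEM_LEVEL=7 build needs
about 64K + 64K = 128K, and the small-model values -DMAX_WBITS=11
-DMAX_MEM_LEVEL=3 need only about 8K + 4K = 12K, with roughly 2K for inflate's
window (plus a few kilobytes of small objects in every case).
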
+ +# ------------ Turbo C 2.01, Turbo C++ 1.01 ------------ +MODEL=l +CC=tcc +LD=tcc +AR=tlib +# CFLAGS=-O2 -G -Z -m$(MODEL) -DMAX_WBITS=11 -DMAX_MEM_LEVEL=3 +CFLAGS=-O2 -G -Z -m$(MODEL) +LDFLAGS=-m$(MODEL) -f- + + +# variables +ZLIB_LIB = zlib_$(MODEL).lib + +OBJ1 = adler32.obj compress.obj crc32.obj deflate.obj gzclose.obj gzlib.obj gzread.obj +OBJ2 = gzwrite.obj infback.obj inffast.obj inflate.obj inftrees.obj trees.obj uncompr.obj zutil.obj +OBJP1 = +adler32.obj+compress.obj+crc32.obj+deflate.obj+gzclose.obj+gzlib.obj+gzread.obj +OBJP2 = +gzwrite.obj+infback.obj+inffast.obj+inflate.obj+inftrees.obj+trees.obj+uncompr.obj+zutil.obj + + +# targets +all: $(ZLIB_LIB) example.exe minigzip.exe + +.c.obj: + $(CC) -c $(CFLAGS) $*.c + +adler32.obj: adler32.c zlib.h zconf.h + +compress.obj: compress.c zlib.h zconf.h + +crc32.obj: crc32.c zlib.h zconf.h crc32.h + +deflate.obj: deflate.c deflate.h zutil.h zlib.h zconf.h + +gzclose.obj: gzclose.c zlib.h zconf.h gzguts.h + +gzlib.obj: gzlib.c zlib.h zconf.h gzguts.h + +gzread.obj: gzread.c zlib.h zconf.h gzguts.h + +gzwrite.obj: gzwrite.c zlib.h zconf.h gzguts.h + +infback.obj: infback.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h inffixed.h + +inffast.obj: inffast.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h + +inflate.obj: inflate.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h inffixed.h + +inftrees.obj: inftrees.c zutil.h zlib.h zconf.h inftrees.h + +trees.obj: trees.c zutil.h zlib.h zconf.h deflate.h trees.h + +uncompr.obj: uncompr.c zlib.h zconf.h + +zutil.obj: zutil.c zutil.h zlib.h zconf.h + +example.obj: test/example.c zlib.h zconf.h + +minigzip.obj: test/minigzip.c zlib.h zconf.h + + +# the command line is cut to fit in the MS-DOS 128 byte limit: +$(ZLIB_LIB): $(OBJ1) $(OBJ2) + -del $(ZLIB_LIB) + $(AR) $(ZLIB_LIB) $(OBJP1) + $(AR) $(ZLIB_LIB) $(OBJP2) + +example.exe: example.obj $(ZLIB_LIB) + $(LD) $(LDFLAGS) example.obj $(ZLIB_LIB) + +minigzip.exe: minigzip.obj $(ZLIB_LIB) + $(LD) $(LDFLAGS) minigzip.obj $(ZLIB_LIB) + +test: example.exe minigzip.exe + example + echo hello world | minigzip | minigzip -d + +clean: + -del *.obj + -del *.lib + -del *.exe + -del zlib_*.bak + -del foo.gz ADDED compat/zlib/nintendods/Makefile Index: compat/zlib/nintendods/Makefile ================================================================== --- compat/zlib/nintendods/Makefile +++ compat/zlib/nintendods/Makefile @@ -0,0 +1,126 @@ +#--------------------------------------------------------------------------------- +.SUFFIXES: +#--------------------------------------------------------------------------------- + +ifeq ($(strip $(DEVKITARM)),) +$(error "Please set DEVKITARM in your environment. 
export DEVKITARM=devkitARM") +endif + +include $(DEVKITARM)/ds_rules + +#--------------------------------------------------------------------------------- +# TARGET is the name of the output +# BUILD is the directory where object files & intermediate files will be placed +# SOURCES is a list of directories containing source code +# DATA is a list of directories containing data files +# INCLUDES is a list of directories containing header files +#--------------------------------------------------------------------------------- +TARGET := $(shell basename $(CURDIR)) +BUILD := build +SOURCES := ../../ +DATA := data +INCLUDES := include + +#--------------------------------------------------------------------------------- +# options for code generation +#--------------------------------------------------------------------------------- +ARCH := -mthumb -mthumb-interwork + +CFLAGS := -Wall -O2\ + -march=armv5te -mtune=arm946e-s \ + -fomit-frame-pointer -ffast-math \ + $(ARCH) + +CFLAGS += $(INCLUDE) -DARM9 +CXXFLAGS := $(CFLAGS) -fno-rtti -fno-exceptions + +ASFLAGS := $(ARCH) -march=armv5te -mtune=arm946e-s +LDFLAGS = -specs=ds_arm9.specs -g $(ARCH) -Wl,-Map,$(notdir $*.map) + +#--------------------------------------------------------------------------------- +# list of directories containing libraries, this must be the top level containing +# include and lib +#--------------------------------------------------------------------------------- +LIBDIRS := $(LIBNDS) + +#--------------------------------------------------------------------------------- +# no real need to edit anything past this point unless you need to add additional +# rules for different file extensions +#--------------------------------------------------------------------------------- +ifneq ($(BUILD),$(notdir $(CURDIR))) +#--------------------------------------------------------------------------------- + +export OUTPUT := $(CURDIR)/lib/libz.a + +export VPATH := $(foreach dir,$(SOURCES),$(CURDIR)/$(dir)) \ + $(foreach dir,$(DATA),$(CURDIR)/$(dir)) + +export DEPSDIR := $(CURDIR)/$(BUILD) + +CFILES := $(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.c))) +CPPFILES := $(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.cpp))) +SFILES := $(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.s))) +BINFILES := $(foreach dir,$(DATA),$(notdir $(wildcard $(dir)/*.*))) + +#--------------------------------------------------------------------------------- +# use CXX for linking C++ projects, CC for standard C +#--------------------------------------------------------------------------------- +ifeq ($(strip $(CPPFILES)),) +#--------------------------------------------------------------------------------- + export LD := $(CC) +#--------------------------------------------------------------------------------- +else +#--------------------------------------------------------------------------------- + export LD := $(CXX) +#--------------------------------------------------------------------------------- +endif +#--------------------------------------------------------------------------------- + +export OFILES := $(addsuffix .o,$(BINFILES)) \ + $(CPPFILES:.cpp=.o) $(CFILES:.c=.o) $(SFILES:.s=.o) + +export INCLUDE := $(foreach dir,$(INCLUDES),-I$(CURDIR)/$(dir)) \ + $(foreach dir,$(LIBDIRS),-I$(dir)/include) \ + -I$(CURDIR)/$(BUILD) + +.PHONY: $(BUILD) clean all + +#--------------------------------------------------------------------------------- +all: $(BUILD) + @[ -d $@ ] || mkdir -p include + @cp ../../*.h include + +lib: + @[ -d $@ ] || 
mkdir -p $@ + +$(BUILD): lib + @[ -d $@ ] || mkdir -p $@ + @$(MAKE) --no-print-directory -C $(BUILD) -f $(CURDIR)/Makefile + +#--------------------------------------------------------------------------------- +clean: + @echo clean ... + @rm -fr $(BUILD) lib + +#--------------------------------------------------------------------------------- +else + +DEPENDS := $(OFILES:.o=.d) + +#--------------------------------------------------------------------------------- +# main targets +#--------------------------------------------------------------------------------- +$(OUTPUT) : $(OFILES) + +#--------------------------------------------------------------------------------- +%.bin.o : %.bin +#--------------------------------------------------------------------------------- + @echo $(notdir $<) + @$(bin2o) + + +-include $(DEPENDS) + +#--------------------------------------------------------------------------------------- +endif +#--------------------------------------------------------------------------------------- ADDED compat/zlib/nintendods/README Index: compat/zlib/nintendods/README ================================================================== --- compat/zlib/nintendods/README +++ compat/zlib/nintendods/README @@ -0,0 +1,5 @@ +This Makefile requires devkitARM (http://www.devkitpro.org/category/devkitarm/) and works inside "contrib/nds". It is based on a devkitARM template. + +Eduardo Costa +January 3, 2009 + ADDED compat/zlib/old/Makefile.emx Index: compat/zlib/old/Makefile.emx ================================================================== --- compat/zlib/old/Makefile.emx +++ compat/zlib/old/Makefile.emx @@ -0,0 +1,69 @@ +# Makefile for zlib. Modified for emx/rsxnt by Chr. Spieler, 6/16/98. +# Copyright (C) 1995-1998 Jean-loup Gailly. +# For conditions of distribution and use, see copyright notice in zlib.h + +# To compile, or to compile and test, type: +# +# make -fmakefile.emx; make test -fmakefile.emx +# + +CC=gcc -Zwin32 + +#CFLAGS=-MMD -O +#CFLAGS=-O -DMAX_WBITS=14 -DMAX_MEM_LEVEL=7 +#CFLAGS=-MMD -g -DDEBUG +CFLAGS=-MMD -O3 $(BUTT) -Wall -Wwrite-strings -Wpointer-arith -Wconversion \ + -Wstrict-prototypes -Wmissing-prototypes + +# If cp.exe is available, replace "copy /Y" with "cp -fp" . +CP=copy /Y +# If gnu install.exe is available, replace $(CP) with ginstall. +INSTALL=$(CP) +# The default value of RM is "rm -f." If "rm.exe" is found, comment out: +RM=del +LDLIBS=-L. 
-lzlib +LD=$(CC) -s -o +LDSHARED=$(CC) + +INCL=zlib.h zconf.h +LIBS=zlib.a + +AR=ar rcs + +prefix=/usr/local +exec_prefix = $(prefix) + +OBJS = adler32.o compress.o crc32.o deflate.o gzclose.o gzlib.o gzread.o \ + gzwrite.o infback.o inffast.o inflate.o inftrees.o trees.o uncompr.o zutil.o + +TEST_OBJS = example.o minigzip.o + +all: example.exe minigzip.exe + +test: all + ./example + echo hello world | .\minigzip | .\minigzip -d + +%.o : %.c + $(CC) $(CFLAGS) -c $< -o $@ + +zlib.a: $(OBJS) + $(AR) $@ $(OBJS) + +%.exe : %.o $(LIBS) + $(LD) $@ $< $(LDLIBS) + + +.PHONY : clean + +clean: + $(RM) *.d + $(RM) *.o + $(RM) *.exe + $(RM) zlib.a + $(RM) foo.gz + +DEPS := $(wildcard *.d) +ifneq ($(DEPS),) +include $(DEPS) +endif ADDED compat/zlib/old/Makefile.riscos Index: compat/zlib/old/Makefile.riscos ================================================================== --- compat/zlib/old/Makefile.riscos +++ compat/zlib/old/Makefile.riscos @@ -0,0 +1,151 @@ +# Project: zlib_1_03 +# Patched for zlib 1.1.2 rw@shadow.org.uk 19980430 +# test works out-of-the-box, installs `somewhere' on demand + +# Toolflags: +CCflags = -c -depend !Depend -IC: -g -throwback -DRISCOS -fah +C++flags = -c -depend !Depend -IC: -throwback +Linkflags = -aif -c++ -o $@ +ObjAsmflags = -throwback -NoCache -depend !Depend +CMHGflags = +LibFileflags = -c -l -o $@ +Squeezeflags = -o $@ + +# change the line below to where _you_ want the library installed. +libdest = lib:zlib + +# Final targets: +@.lib: @.o.adler32 @.o.compress @.o.crc32 @.o.deflate @.o.gzio \ + @.o.infblock @.o.infcodes @.o.inffast @.o.inflate @.o.inftrees @.o.infutil @.o.trees \ + @.o.uncompr @.o.zutil + LibFile $(LibFileflags) @.o.adler32 @.o.compress @.o.crc32 @.o.deflate \ + @.o.gzio @.o.infblock @.o.infcodes @.o.inffast @.o.inflate @.o.inftrees @.o.infutil \ + @.o.trees @.o.uncompr @.o.zutil +test: @.minigzip @.example @.lib + @copy @.lib @.libc A~C~DF~L~N~P~Q~RS~TV + @echo running tests: hang on. + @/@.minigzip -f -9 libc + @/@.minigzip -d libc-gz + @/@.minigzip -f -1 libc + @/@.minigzip -d libc-gz + @/@.minigzip -h -9 libc + @/@.minigzip -d libc-gz + @/@.minigzip -h -1 libc + @/@.minigzip -d libc-gz + @/@.minigzip -9 libc + @/@.minigzip -d libc-gz + @/@.minigzip -1 libc + @/@.minigzip -d libc-gz + @diff @.lib @.libc + @echo that should have reported '@.lib and @.libc identical' if you have diff. + @/@.example @.fred @.fred + @echo that will have given lots of hello!'s. 
+ +@.minigzip: @.o.minigzip @.lib C:o.Stubs + Link $(Linkflags) @.o.minigzip @.lib C:o.Stubs +@.example: @.o.example @.lib C:o.Stubs + Link $(Linkflags) @.o.example @.lib C:o.Stubs + +install: @.lib + cdir $(libdest) + cdir $(libdest).h + @copy @.h.zlib $(libdest).h.zlib A~C~DF~L~N~P~Q~RS~TV + @copy @.h.zconf $(libdest).h.zconf A~C~DF~L~N~P~Q~RS~TV + @copy @.lib $(libdest).lib A~C~DF~L~N~P~Q~RS~TV + @echo okay, installed zlib in $(libdest) + +clean:; remove @.minigzip + remove @.example + remove @.libc + -wipe @.o.* F~r~cV + remove @.fred + +# User-editable dependencies: +.c.o: + cc $(ccflags) -o $@ $< + +# Static dependencies: + +# Dynamic dependencies: +o.example: c.example +o.example: h.zlib +o.example: h.zconf +o.minigzip: c.minigzip +o.minigzip: h.zlib +o.minigzip: h.zconf +o.adler32: c.adler32 +o.adler32: h.zlib +o.adler32: h.zconf +o.compress: c.compress +o.compress: h.zlib +o.compress: h.zconf +o.crc32: c.crc32 +o.crc32: h.zlib +o.crc32: h.zconf +o.deflate: c.deflate +o.deflate: h.deflate +o.deflate: h.zutil +o.deflate: h.zlib +o.deflate: h.zconf +o.gzio: c.gzio +o.gzio: h.zutil +o.gzio: h.zlib +o.gzio: h.zconf +o.infblock: c.infblock +o.infblock: h.zutil +o.infblock: h.zlib +o.infblock: h.zconf +o.infblock: h.infblock +o.infblock: h.inftrees +o.infblock: h.infcodes +o.infblock: h.infutil +o.infcodes: c.infcodes +o.infcodes: h.zutil +o.infcodes: h.zlib +o.infcodes: h.zconf +o.infcodes: h.inftrees +o.infcodes: h.infblock +o.infcodes: h.infcodes +o.infcodes: h.infutil +o.infcodes: h.inffast +o.inffast: c.inffast +o.inffast: h.zutil +o.inffast: h.zlib +o.inffast: h.zconf +o.inffast: h.inftrees +o.inffast: h.infblock +o.inffast: h.infcodes +o.inffast: h.infutil +o.inffast: h.inffast +o.inflate: c.inflate +o.inflate: h.zutil +o.inflate: h.zlib +o.inflate: h.zconf +o.inflate: h.infblock +o.inftrees: c.inftrees +o.inftrees: h.zutil +o.inftrees: h.zlib +o.inftrees: h.zconf +o.inftrees: h.inftrees +o.inftrees: h.inffixed +o.infutil: c.infutil +o.infutil: h.zutil +o.infutil: h.zlib +o.infutil: h.zconf +o.infutil: h.infblock +o.infutil: h.inftrees +o.infutil: h.infcodes +o.infutil: h.infutil +o.trees: c.trees +o.trees: h.deflate +o.trees: h.zutil +o.trees: h.zlib +o.trees: h.zconf +o.trees: h.trees +o.uncompr: c.uncompr +o.uncompr: h.zlib +o.uncompr: h.zconf +o.zutil: c.zutil +o.zutil: h.zutil +o.zutil: h.zlib +o.zutil: h.zconf ADDED compat/zlib/old/README Index: compat/zlib/old/README ================================================================== --- compat/zlib/old/README +++ compat/zlib/old/README @@ -0,0 +1,3 @@ +This directory contains files that have not been updated for zlib 1.2.x + +(Volunteers are encouraged to help clean this up. Thanks.) ADDED compat/zlib/old/descrip.mms Index: compat/zlib/old/descrip.mms ================================================================== --- compat/zlib/old/descrip.mms +++ compat/zlib/old/descrip.mms @@ -0,0 +1,48 @@ +# descrip.mms: MMS description file for building zlib on VMS +# written by Martin P.J. 
Zinser + +cc_defs = +c_deb = + +.ifdef __DECC__ +pref = /prefix=all +.endif + +OBJS = adler32.obj, compress.obj, crc32.obj, gzio.obj, uncompr.obj,\ + deflate.obj, trees.obj, zutil.obj, inflate.obj, infblock.obj,\ + inftrees.obj, infcodes.obj, infutil.obj, inffast.obj + +CFLAGS= $(C_DEB) $(CC_DEFS) $(PREF) + +all : example.exe minigzip.exe + @ write sys$output " Example applications available" +libz.olb : libz.olb($(OBJS)) + @ write sys$output " libz available" + +example.exe : example.obj libz.olb + link example,libz.olb/lib + +minigzip.exe : minigzip.obj libz.olb + link minigzip,libz.olb/lib,x11vms:xvmsutils.olb/lib + +clean : + delete *.obj;*,libz.olb;* + + +# Other dependencies. +adler32.obj : zutil.h zlib.h zconf.h +compress.obj : zlib.h zconf.h +crc32.obj : zutil.h zlib.h zconf.h +deflate.obj : deflate.h zutil.h zlib.h zconf.h +example.obj : zlib.h zconf.h +gzio.obj : zutil.h zlib.h zconf.h +infblock.obj : zutil.h zlib.h zconf.h infblock.h inftrees.h infcodes.h infutil.h +infcodes.obj : zutil.h zlib.h zconf.h inftrees.h infutil.h infcodes.h inffast.h +inffast.obj : zutil.h zlib.h zconf.h inftrees.h infutil.h inffast.h +inflate.obj : zutil.h zlib.h zconf.h infblock.h +inftrees.obj : zutil.h zlib.h zconf.h inftrees.h +infutil.obj : zutil.h zlib.h zconf.h inftrees.h infutil.h +minigzip.obj : zlib.h zconf.h +trees.obj : deflate.h zutil.h zlib.h zconf.h +uncompr.obj : zlib.h zconf.h +zutil.obj : zutil.h zlib.h zconf.h ADDED compat/zlib/old/os2/Makefile.os2 Index: compat/zlib/old/os2/Makefile.os2 ================================================================== --- compat/zlib/old/os2/Makefile.os2 +++ compat/zlib/old/os2/Makefile.os2 @@ -0,0 +1,136 @@ +# Makefile for zlib under OS/2 using GCC (PGCC) +# For conditions of distribution and use, see copyright notice in zlib.h + +# To compile and test, type: +# cp Makefile.os2 .. +# cd .. +# make -f Makefile.os2 test + +# This makefile will build a static library z.lib, a shared library +# z.dll and a import library zdll.lib. You can use either z.lib or +# zdll.lib by specifying either -lz or -lzdll on gcc's command line + +CC=gcc -Zomf -s + +CFLAGS=-O6 -Wall +#CFLAGS=-O -DMAX_WBITS=14 -DMAX_MEM_LEVEL=7 +#CFLAGS=-g -DDEBUG +#CFLAGS=-O3 -Wall -Wwrite-strings -Wpointer-arith -Wconversion \ +# -Wstrict-prototypes -Wmissing-prototypes + +#################### BUG WARNING: ##################### +## infcodes.c hits a bug in pgcc-1.0, so you have to use either +## -O# where # <= 4 or one of (-fno-ommit-frame-pointer or -fno-force-mem) +## This bug is reportedly fixed in pgcc >1.0, but this was not tested +CFLAGS+=-fno-force-mem + +LDFLAGS=-s -L. -lzdll -Zcrtdll +LDSHARED=$(CC) -s -Zomf -Zdll -Zcrtdll + +VER=1.1.0 +ZLIB=z.lib +SHAREDLIB=z.dll +SHAREDLIBIMP=zdll.lib +LIBS=$(ZLIB) $(SHAREDLIB) $(SHAREDLIBIMP) + +AR=emxomfar cr +IMPLIB=emximp +RANLIB=echo +TAR=tar +SHELL=bash + +prefix=/usr/local +exec_prefix = $(prefix) + +OBJS = adler32.o compress.o crc32.o gzio.o uncompr.o deflate.o trees.o \ + zutil.o inflate.o infblock.o inftrees.o infcodes.o infutil.o inffast.o + +TEST_OBJS = example.o minigzip.o + +DISTFILES = README INDEX ChangeLog configure Make*[a-z0-9] *.[ch] descrip.mms \ + algorithm.txt zlib.3 msdos/Make*[a-z0-9] msdos/zlib.def msdos/zlib.rc \ + nt/Makefile.nt nt/zlib.dnt contrib/README.contrib contrib/*.txt \ + contrib/asm386/*.asm contrib/asm386/*.c \ + contrib/asm386/*.bat contrib/asm386/zlibvc.d?? 
contrib/iostream/*.cpp \ + contrib/iostream/*.h contrib/iostream2/*.h contrib/iostream2/*.cpp \ + contrib/untgz/Makefile contrib/untgz/*.c contrib/untgz/*.w32 + +all: example.exe minigzip.exe + +test: all + @LD_LIBRARY_PATH=.:$(LD_LIBRARY_PATH) ; export LD_LIBRARY_PATH; \ + echo hello world | ./minigzip | ./minigzip -d || \ + echo ' *** minigzip test FAILED ***' ; \ + if ./example; then \ + echo ' *** zlib test OK ***'; \ + else \ + echo ' *** zlib test FAILED ***'; \ + fi + +$(ZLIB): $(OBJS) + $(AR) $@ $(OBJS) + -@ ($(RANLIB) $@ || true) >/dev/null 2>&1 + +$(SHAREDLIB): $(OBJS) os2/z.def + $(LDSHARED) -o $@ $^ + +$(SHAREDLIBIMP): os2/z.def + $(IMPLIB) -o $@ $^ + +example.exe: example.o $(LIBS) + $(CC) $(CFLAGS) -o $@ example.o $(LDFLAGS) + +minigzip.exe: minigzip.o $(LIBS) + $(CC) $(CFLAGS) -o $@ minigzip.o $(LDFLAGS) + +clean: + rm -f *.o *~ example minigzip libz.a libz.so* foo.gz + +distclean: clean + +zip: + mv Makefile Makefile~; cp -p Makefile.in Makefile + rm -f test.c ztest*.c + v=`sed -n -e 's/\.//g' -e '/VERSION "/s/.*"\(.*\)".*/\1/p' < zlib.h`;\ + zip -ul9 zlib$$v $(DISTFILES) + mv Makefile~ Makefile + +dist: + mv Makefile Makefile~; cp -p Makefile.in Makefile + rm -f test.c ztest*.c + d=zlib-`sed -n '/VERSION "/s/.*"\(.*\)".*/\1/p' < zlib.h`;\ + rm -f $$d.tar.gz; \ + if test ! -d ../$$d; then rm -f ../$$d; ln -s `pwd` ../$$d; fi; \ + files=""; \ + for f in $(DISTFILES); do files="$$files $$d/$$f"; done; \ + cd ..; \ + GZIP=-9 $(TAR) chofz $$d/$$d.tar.gz $$files; \ + if test ! -d $$d; then rm -f $$d; fi + mv Makefile~ Makefile + +tags: + etags *.[ch] + +depend: + makedepend -- $(CFLAGS) -- *.[ch] + +# DO NOT DELETE THIS LINE -- make depend depends on it. + +adler32.o: zlib.h zconf.h +compress.o: zlib.h zconf.h +crc32.o: zlib.h zconf.h +deflate.o: deflate.h zutil.h zlib.h zconf.h +example.o: zlib.h zconf.h +gzio.o: zutil.h zlib.h zconf.h +infblock.o: infblock.h inftrees.h infcodes.h infutil.h zutil.h zlib.h zconf.h +infcodes.o: zutil.h zlib.h zconf.h +infcodes.o: inftrees.h infblock.h infcodes.h infutil.h inffast.h +inffast.o: zutil.h zlib.h zconf.h inftrees.h +inffast.o: infblock.h infcodes.h infutil.h inffast.h +inflate.o: zutil.h zlib.h zconf.h infblock.h +inftrees.o: zutil.h zlib.h zconf.h inftrees.h +infutil.o: zutil.h zlib.h zconf.h infblock.h inftrees.h infcodes.h infutil.h +minigzip.o: zlib.h zconf.h +trees.o: deflate.h zutil.h zlib.h zconf.h trees.h +uncompr.o: zlib.h zconf.h +zutil.o: zutil.h zlib.h zconf.h ADDED compat/zlib/old/os2/zlib.def Index: compat/zlib/old/os2/zlib.def ================================================================== --- compat/zlib/old/os2/zlib.def +++ compat/zlib/old/os2/zlib.def @@ -0,0 +1,51 @@ +; +; Slightly modified version of ../nt/zlib.dnt :-) +; + +LIBRARY Z +DESCRIPTION "Zlib compression library for OS/2" +CODE PRELOAD MOVEABLE DISCARDABLE +DATA PRELOAD MOVEABLE MULTIPLE + +EXPORTS + adler32 + compress + crc32 + deflate + deflateCopy + deflateEnd + deflateInit2_ + deflateInit_ + deflateParams + deflateReset + deflateSetDictionary + gzclose + gzdopen + gzerror + gzflush + gzopen + gzread + gzwrite + inflate + inflateEnd + inflateInit2_ + inflateInit_ + inflateReset + inflateSetDictionary + inflateSync + uncompress + zlibVersion + gzprintf + gzputc + gzgetc + gzseek + gzrewind + gztell + gzeof + gzsetparams + zError + inflateSyncPoint + get_crc_table + compress2 + gzputs + gzgets ADDED compat/zlib/old/visual-basic.txt Index: compat/zlib/old/visual-basic.txt ================================================================== --- 
compat/zlib/old/visual-basic.txt +++ compat/zlib/old/visual-basic.txt @@ -0,0 +1,160 @@ +See below some functions declarations for Visual Basic. + +Frequently Asked Question: + +Q: Each time I use the compress function I get the -5 error (not enough + room in the output buffer). + +A: Make sure that the length of the compressed buffer is passed by + reference ("as any"), not by value ("as long"). Also check that + before the call of compress this length is equal to the total size of + the compressed buffer and not zero. + + +From: "Jon Caruana" +Subject: Re: How to port zlib declares to vb? +Date: Mon, 28 Oct 1996 18:33:03 -0600 + +Got the answer! (I haven't had time to check this but it's what I got, and +looks correct): + +He has the following routines working: + compress + uncompress + gzopen + gzwrite + gzread + gzclose + +Declares follow: (Quoted from Carlos Rios , in Vb4 form) + +#If Win16 Then 'Use Win16 calls. +Declare Function compress Lib "ZLIB.DLL" (ByVal compr As + String, comprLen As Any, ByVal buf As String, ByVal buflen + As Long) As Integer +Declare Function uncompress Lib "ZLIB.DLL" (ByVal uncompr + As String, uncomprLen As Any, ByVal compr As String, ByVal + lcompr As Long) As Integer +Declare Function gzopen Lib "ZLIB.DLL" (ByVal filePath As + String, ByVal mode As String) As Long +Declare Function gzread Lib "ZLIB.DLL" (ByVal file As + Long, ByVal uncompr As String, ByVal uncomprLen As Integer) + As Integer +Declare Function gzwrite Lib "ZLIB.DLL" (ByVal file As + Long, ByVal uncompr As String, ByVal uncomprLen As Integer) + As Integer +Declare Function gzclose Lib "ZLIB.DLL" (ByVal file As + Long) As Integer +#Else +Declare Function compress Lib "ZLIB32.DLL" + (ByVal compr As String, comprLen As Any, ByVal buf As + String, ByVal buflen As Long) As Integer +Declare Function uncompress Lib "ZLIB32.DLL" + (ByVal uncompr As String, uncomprLen As Any, ByVal compr As + String, ByVal lcompr As Long) As Long +Declare Function gzopen Lib "ZLIB32.DLL" + (ByVal file As String, ByVal mode As String) As Long +Declare Function gzread Lib "ZLIB32.DLL" + (ByVal file As Long, ByVal uncompr As String, ByVal + uncomprLen As Long) As Long +Declare Function gzwrite Lib "ZLIB32.DLL" + (ByVal file As Long, ByVal uncompr As String, ByVal + uncomprLen As Long) As Long +Declare Function gzclose Lib "ZLIB32.DLL" + (ByVal file As Long) As Long +#End If + +-Jon Caruana +jon-net@usa.net +Microsoft Sitebuilder Network Level 1 Member - HTML Writer's Guild Member + + +Here is another example from Michael that he +says conforms to the VB guidelines, and that solves the problem of not +knowing the uncompressed size by storing it at the end of the file: + +'Calling the functions: +'bracket meaning: [optional] {Range of possible values} +'Call subCompressFile( [, , [level of compression {1..9}]]) +'Call subUncompressFile() + +Option Explicit +Private lngpvtPcnSml As Long 'Stores value for 'lngPercentSmaller' +Private Const SUCCESS As Long = 0 +Private Const strFilExt As String = ".cpr" +Private Declare Function lngfncCpr Lib "zlib.dll" Alias "compress2" (ByRef +dest As Any, ByRef destLen As Any, ByRef src As Any, ByVal srcLen As Long, +ByVal level As Integer) As Long +Private Declare Function lngfncUcp Lib "zlib.dll" Alias "uncompress" (ByRef +dest As Any, ByRef destLen As Any, ByRef src As Any, ByVal srcLen As Long) +As Long + +Public Sub subCompressFile(ByVal strargOriFilPth As String, Optional ByVal +strargCprFilPth As String, Optional ByVal intLvl As Integer = 9) + Dim strCprPth As String + Dim 
lngOriSiz As Long + Dim lngCprSiz As Long + Dim bytaryOri() As Byte + Dim bytaryCpr() As Byte + lngOriSiz = FileLen(strargOriFilPth) + ReDim bytaryOri(lngOriSiz - 1) + Open strargOriFilPth For Binary Access Read As #1 + Get #1, , bytaryOri() + Close #1 + strCprPth = IIf(strargCprFilPth = "", strargOriFilPth, strargCprFilPth) +'Select file path and name + strCprPth = strCprPth & IIf(Right(strCprPth, Len(strFilExt)) = +strFilExt, "", strFilExt) 'Add file extension if not exists + lngCprSiz = (lngOriSiz * 1.01) + 12 'Compression needs temporary a bit +more space then original file size + ReDim bytaryCpr(lngCprSiz - 1) + If lngfncCpr(bytaryCpr(0), lngCprSiz, bytaryOri(0), lngOriSiz, intLvl) = +SUCCESS Then + lngpvtPcnSml = (1# - (lngCprSiz / lngOriSiz)) * 100 + ReDim Preserve bytaryCpr(lngCprSiz - 1) + Open strCprPth For Binary Access Write As #1 + Put #1, , bytaryCpr() + Put #1, , lngOriSiz 'Add the the original size value to the end +(last 4 bytes) + Close #1 + Else + MsgBox "Compression error" + End If + Erase bytaryCpr + Erase bytaryOri +End Sub + +Public Sub subUncompressFile(ByVal strargFilPth As String) + Dim bytaryCpr() As Byte + Dim bytaryOri() As Byte + Dim lngOriSiz As Long + Dim lngCprSiz As Long + Dim strOriPth As String + lngCprSiz = FileLen(strargFilPth) + ReDim bytaryCpr(lngCprSiz - 1) + Open strargFilPth For Binary Access Read As #1 + Get #1, , bytaryCpr() + Close #1 + 'Read the original file size value: + lngOriSiz = bytaryCpr(lngCprSiz - 1) * (2 ^ 24) _ + + bytaryCpr(lngCprSiz - 2) * (2 ^ 16) _ + + bytaryCpr(lngCprSiz - 3) * (2 ^ 8) _ + + bytaryCpr(lngCprSiz - 4) + ReDim Preserve bytaryCpr(lngCprSiz - 5) 'Cut of the original size value + ReDim bytaryOri(lngOriSiz - 1) + If lngfncUcp(bytaryOri(0), lngOriSiz, bytaryCpr(0), lngCprSiz) = SUCCESS +Then + strOriPth = Left(strargFilPth, Len(strargFilPth) - Len(strFilExt)) + Open strOriPth For Binary Access Write As #1 + Put #1, , bytaryOri() + Close #1 + Else + MsgBox "Uncompression error" + End If + Erase bytaryCpr + Erase bytaryOri +End Sub +Public Property Get lngPercentSmaller() As Long + lngPercentSmaller = lngpvtPcnSml +End Property ADDED compat/zlib/qnx/package.qpg Index: compat/zlib/qnx/package.qpg ================================================================== --- compat/zlib/qnx/package.qpg +++ compat/zlib/qnx/package.qpg @@ -0,0 +1,141 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Library + + Medium + + 2.0 + + + + zlib + zlib + alain.bonnefoy@icbt.com + Public + public + www.gzip.org/zlib + + + Jean-Loup Gailly,Mark Adler + www.gzip.org/zlib + + zlib@gzip.org + + + A massively spiffy yet delicately unobtrusive compression library. + zlib is designed to be a free, general-purpose, legally unencumbered, lossless data compression library for use on virtually any computer hardware and operating system. 
+ http://www.gzip.org/zlib + + + + + 1.2.8 + Medium + Stable + + + + + + + No License + + + + Software Development/Libraries and Extensions/C Libraries + zlib,compression + qnx6 + qnx6 + None + Developer + + + + + + + + + + + + + + Install + Post + No + Ignore + + No + Optional + + + + + + + + + + + + + InstallOver + zlib + + + + + + + + + + + + + InstallOver + zlib-dev + + + + + + + + + ADDED compat/zlib/test/example.c Index: compat/zlib/test/example.c ================================================================== --- compat/zlib/test/example.c +++ compat/zlib/test/example.c @@ -0,0 +1,601 @@ +/* example.c -- usage example of the zlib compression library + * Copyright (C) 1995-2006, 2011 Jean-loup Gailly. + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* @(#) $Id$ */ + +#include "zlib.h" +#include + +#ifdef STDC +# include +# include +#endif + +#if defined(VMS) || defined(RISCOS) +# define TESTFILE "foo-gz" +#else +# define TESTFILE "foo.gz" +#endif + +#define CHECK_ERR(err, msg) { \ + if (err != Z_OK) { \ + fprintf(stderr, "%s error: %d\n", msg, err); \ + exit(1); \ + } \ +} + +z_const char hello[] = "hello, hello!"; +/* "hello world" would be more standard, but the repeated "hello" + * stresses the compression code better, sorry... + */ + +const char dictionary[] = "hello"; +uLong dictId; /* Adler32 value of the dictionary */ + +void test_deflate OF((Byte *compr, uLong comprLen)); +void test_inflate OF((Byte *compr, uLong comprLen, + Byte *uncompr, uLong uncomprLen)); +void test_large_deflate OF((Byte *compr, uLong comprLen, + Byte *uncompr, uLong uncomprLen)); +void test_large_inflate OF((Byte *compr, uLong comprLen, + Byte *uncompr, uLong uncomprLen)); +void test_flush OF((Byte *compr, uLong *comprLen)); +void test_sync OF((Byte *compr, uLong comprLen, + Byte *uncompr, uLong uncomprLen)); +void test_dict_deflate OF((Byte *compr, uLong comprLen)); +void test_dict_inflate OF((Byte *compr, uLong comprLen, + Byte *uncompr, uLong uncomprLen)); +int main OF((int argc, char *argv[])); + + +#ifdef Z_SOLO + +void *myalloc OF((void *, unsigned, unsigned)); +void myfree OF((void *, void *)); + +void *myalloc(q, n, m) + void *q; + unsigned n, m; +{ + q = Z_NULL; + return calloc(n, m); +} + +void myfree(void *q, void *p) +{ + q = Z_NULL; + free(p); +} + +static alloc_func zalloc = myalloc; +static free_func zfree = myfree; + +#else /* !Z_SOLO */ + +static alloc_func zalloc = (alloc_func)0; +static free_func zfree = (free_func)0; + +void test_compress OF((Byte *compr, uLong comprLen, + Byte *uncompr, uLong uncomprLen)); +void test_gzio OF((const char *fname, + Byte *uncompr, uLong uncomprLen)); + +/* =========================================================================== + * Test compress() and uncompress() + */ +void test_compress(compr, comprLen, uncompr, uncomprLen) + Byte *compr, *uncompr; + uLong comprLen, uncomprLen; +{ + int err; + uLong len = (uLong)strlen(hello)+1; + + err = compress(compr, &comprLen, (const Bytef*)hello, len); + CHECK_ERR(err, "compress"); + + strcpy((char*)uncompr, "garbage"); + + err = uncompress(uncompr, &uncomprLen, compr, comprLen); + CHECK_ERR(err, "uncompress"); + + if (strcmp((char*)uncompr, hello)) { + fprintf(stderr, "bad uncompress\n"); + exit(1); + } else { + printf("uncompress(): %s\n", (char *)uncompr); + } +} + +/* =========================================================================== + * Test read/write of .gz files + */ +void test_gzio(fname, uncompr, uncomprLen) + const char *fname; /* compressed file name 
*/ + Byte *uncompr; + uLong uncomprLen; +{ +#ifdef NO_GZCOMPRESS + fprintf(stderr, "NO_GZCOMPRESS -- gz* functions cannot compress\n"); +#else + int err; + int len = (int)strlen(hello)+1; + gzFile file; + z_off_t pos; + + file = gzopen(fname, "wb"); + if (file == NULL) { + fprintf(stderr, "gzopen error\n"); + exit(1); + } + gzputc(file, 'h'); + if (gzputs(file, "ello") != 4) { + fprintf(stderr, "gzputs err: %s\n", gzerror(file, &err)); + exit(1); + } + if (gzprintf(file, ", %s!", "hello") != 8) { + fprintf(stderr, "gzprintf err: %s\n", gzerror(file, &err)); + exit(1); + } + gzseek(file, 1L, SEEK_CUR); /* add one zero byte */ + gzclose(file); + + file = gzopen(fname, "rb"); + if (file == NULL) { + fprintf(stderr, "gzopen error\n"); + exit(1); + } + strcpy((char*)uncompr, "garbage"); + + if (gzread(file, uncompr, (unsigned)uncomprLen) != len) { + fprintf(stderr, "gzread err: %s\n", gzerror(file, &err)); + exit(1); + } + if (strcmp((char*)uncompr, hello)) { + fprintf(stderr, "bad gzread: %s\n", (char*)uncompr); + exit(1); + } else { + printf("gzread(): %s\n", (char*)uncompr); + } + + pos = gzseek(file, -8L, SEEK_CUR); + if (pos != 6 || gztell(file) != pos) { + fprintf(stderr, "gzseek error, pos=%ld, gztell=%ld\n", + (long)pos, (long)gztell(file)); + exit(1); + } + + if (gzgetc(file) != ' ') { + fprintf(stderr, "gzgetc error\n"); + exit(1); + } + + if (gzungetc(' ', file) != ' ') { + fprintf(stderr, "gzungetc error\n"); + exit(1); + } + + gzgets(file, (char*)uncompr, (int)uncomprLen); + if (strlen((char*)uncompr) != 7) { /* " hello!" */ + fprintf(stderr, "gzgets err after gzseek: %s\n", gzerror(file, &err)); + exit(1); + } + if (strcmp((char*)uncompr, hello + 6)) { + fprintf(stderr, "bad gzgets after gzseek\n"); + exit(1); + } else { + printf("gzgets() after gzseek: %s\n", (char*)uncompr); + } + + gzclose(file); +#endif +} + +#endif /* Z_SOLO */ + +/* =========================================================================== + * Test deflate() with small buffers + */ +void test_deflate(compr, comprLen) + Byte *compr; + uLong comprLen; +{ + z_stream c_stream; /* compression stream */ + int err; + uLong len = (uLong)strlen(hello)+1; + + c_stream.zalloc = zalloc; + c_stream.zfree = zfree; + c_stream.opaque = (voidpf)0; + + err = deflateInit(&c_stream, Z_DEFAULT_COMPRESSION); + CHECK_ERR(err, "deflateInit"); + + c_stream.next_in = (z_const unsigned char *)hello; + c_stream.next_out = compr; + + while (c_stream.total_in != len && c_stream.total_out < comprLen) { + c_stream.avail_in = c_stream.avail_out = 1; /* force small buffers */ + err = deflate(&c_stream, Z_NO_FLUSH); + CHECK_ERR(err, "deflate"); + } + /* Finish the stream, still forcing small buffers: */ + for (;;) { + c_stream.avail_out = 1; + err = deflate(&c_stream, Z_FINISH); + if (err == Z_STREAM_END) break; + CHECK_ERR(err, "deflate"); + } + + err = deflateEnd(&c_stream); + CHECK_ERR(err, "deflateEnd"); +} + +/* =========================================================================== + * Test inflate() with small buffers + */ +void test_inflate(compr, comprLen, uncompr, uncomprLen) + Byte *compr, *uncompr; + uLong comprLen, uncomprLen; +{ + int err; + z_stream d_stream; /* decompression stream */ + + strcpy((char*)uncompr, "garbage"); + + d_stream.zalloc = zalloc; + d_stream.zfree = zfree; + d_stream.opaque = (voidpf)0; + + d_stream.next_in = compr; + d_stream.avail_in = 0; + d_stream.next_out = uncompr; + + err = inflateInit(&d_stream); + CHECK_ERR(err, "inflateInit"); + + while (d_stream.total_out < uncomprLen && d_stream.total_in 
< comprLen) { + d_stream.avail_in = d_stream.avail_out = 1; /* force small buffers */ + err = inflate(&d_stream, Z_NO_FLUSH); + if (err == Z_STREAM_END) break; + CHECK_ERR(err, "inflate"); + } + + err = inflateEnd(&d_stream); + CHECK_ERR(err, "inflateEnd"); + + if (strcmp((char*)uncompr, hello)) { + fprintf(stderr, "bad inflate\n"); + exit(1); + } else { + printf("inflate(): %s\n", (char *)uncompr); + } +} + +/* =========================================================================== + * Test deflate() with large buffers and dynamic change of compression level + */ +void test_large_deflate(compr, comprLen, uncompr, uncomprLen) + Byte *compr, *uncompr; + uLong comprLen, uncomprLen; +{ + z_stream c_stream; /* compression stream */ + int err; + + c_stream.zalloc = zalloc; + c_stream.zfree = zfree; + c_stream.opaque = (voidpf)0; + + err = deflateInit(&c_stream, Z_BEST_SPEED); + CHECK_ERR(err, "deflateInit"); + + c_stream.next_out = compr; + c_stream.avail_out = (uInt)comprLen; + + /* At this point, uncompr is still mostly zeroes, so it should compress + * very well: + */ + c_stream.next_in = uncompr; + c_stream.avail_in = (uInt)uncomprLen; + err = deflate(&c_stream, Z_NO_FLUSH); + CHECK_ERR(err, "deflate"); + if (c_stream.avail_in != 0) { + fprintf(stderr, "deflate not greedy\n"); + exit(1); + } + + /* Feed in already compressed data and switch to no compression: */ + deflateParams(&c_stream, Z_NO_COMPRESSION, Z_DEFAULT_STRATEGY); + c_stream.next_in = compr; + c_stream.avail_in = (uInt)comprLen/2; + err = deflate(&c_stream, Z_NO_FLUSH); + CHECK_ERR(err, "deflate"); + + /* Switch back to compressing mode: */ + deflateParams(&c_stream, Z_BEST_COMPRESSION, Z_FILTERED); + c_stream.next_in = uncompr; + c_stream.avail_in = (uInt)uncomprLen; + err = deflate(&c_stream, Z_NO_FLUSH); + CHECK_ERR(err, "deflate"); + + err = deflate(&c_stream, Z_FINISH); + if (err != Z_STREAM_END) { + fprintf(stderr, "deflate should report Z_STREAM_END\n"); + exit(1); + } + err = deflateEnd(&c_stream); + CHECK_ERR(err, "deflateEnd"); +} + +/* =========================================================================== + * Test inflate() with large buffers + */ +void test_large_inflate(compr, comprLen, uncompr, uncomprLen) + Byte *compr, *uncompr; + uLong comprLen, uncomprLen; +{ + int err; + z_stream d_stream; /* decompression stream */ + + strcpy((char*)uncompr, "garbage"); + + d_stream.zalloc = zalloc; + d_stream.zfree = zfree; + d_stream.opaque = (voidpf)0; + + d_stream.next_in = compr; + d_stream.avail_in = (uInt)comprLen; + + err = inflateInit(&d_stream); + CHECK_ERR(err, "inflateInit"); + + for (;;) { + d_stream.next_out = uncompr; /* discard the output */ + d_stream.avail_out = (uInt)uncomprLen; + err = inflate(&d_stream, Z_NO_FLUSH); + if (err == Z_STREAM_END) break; + CHECK_ERR(err, "large inflate"); + } + + err = inflateEnd(&d_stream); + CHECK_ERR(err, "inflateEnd"); + + if (d_stream.total_out != 2*uncomprLen + comprLen/2) { + fprintf(stderr, "bad large inflate: %ld\n", d_stream.total_out); + exit(1); + } else { + printf("large_inflate(): OK\n"); + } +} + +/* =========================================================================== + * Test deflate() with full flush + */ +void test_flush(compr, comprLen) + Byte *compr; + uLong *comprLen; +{ + z_stream c_stream; /* compression stream */ + int err; + uInt len = (uInt)strlen(hello)+1; + + c_stream.zalloc = zalloc; + c_stream.zfree = zfree; + c_stream.opaque = (voidpf)0; + + err = deflateInit(&c_stream, Z_DEFAULT_COMPRESSION); + CHECK_ERR(err, "deflateInit"); + + 
c_stream.next_in = (z_const unsigned char *)hello; + c_stream.next_out = compr; + c_stream.avail_in = 3; + c_stream.avail_out = (uInt)*comprLen; + err = deflate(&c_stream, Z_FULL_FLUSH); + CHECK_ERR(err, "deflate"); + + compr[3]++; /* force an error in first compressed block */ + c_stream.avail_in = len - 3; + + err = deflate(&c_stream, Z_FINISH); + if (err != Z_STREAM_END) { + CHECK_ERR(err, "deflate"); + } + err = deflateEnd(&c_stream); + CHECK_ERR(err, "deflateEnd"); + + *comprLen = c_stream.total_out; +} + +/* =========================================================================== + * Test inflateSync() + */ +void test_sync(compr, comprLen, uncompr, uncomprLen) + Byte *compr, *uncompr; + uLong comprLen, uncomprLen; +{ + int err; + z_stream d_stream; /* decompression stream */ + + strcpy((char*)uncompr, "garbage"); + + d_stream.zalloc = zalloc; + d_stream.zfree = zfree; + d_stream.opaque = (voidpf)0; + + d_stream.next_in = compr; + d_stream.avail_in = 2; /* just read the zlib header */ + + err = inflateInit(&d_stream); + CHECK_ERR(err, "inflateInit"); + + d_stream.next_out = uncompr; + d_stream.avail_out = (uInt)uncomprLen; + + inflate(&d_stream, Z_NO_FLUSH); + CHECK_ERR(err, "inflate"); + + d_stream.avail_in = (uInt)comprLen-2; /* read all compressed data */ + err = inflateSync(&d_stream); /* but skip the damaged part */ + CHECK_ERR(err, "inflateSync"); + + err = inflate(&d_stream, Z_FINISH); + if (err != Z_DATA_ERROR) { + fprintf(stderr, "inflate should report DATA_ERROR\n"); + /* Because of incorrect adler32 */ + exit(1); + } + err = inflateEnd(&d_stream); + CHECK_ERR(err, "inflateEnd"); + + printf("after inflateSync(): hel%s\n", (char *)uncompr); +} + +/* =========================================================================== + * Test deflate() with preset dictionary + */ +void test_dict_deflate(compr, comprLen) + Byte *compr; + uLong comprLen; +{ + z_stream c_stream; /* compression stream */ + int err; + + c_stream.zalloc = zalloc; + c_stream.zfree = zfree; + c_stream.opaque = (voidpf)0; + + err = deflateInit(&c_stream, Z_BEST_COMPRESSION); + CHECK_ERR(err, "deflateInit"); + + err = deflateSetDictionary(&c_stream, + (const Bytef*)dictionary, (int)sizeof(dictionary)); + CHECK_ERR(err, "deflateSetDictionary"); + + dictId = c_stream.adler; + c_stream.next_out = compr; + c_stream.avail_out = (uInt)comprLen; + + c_stream.next_in = (z_const unsigned char *)hello; + c_stream.avail_in = (uInt)strlen(hello)+1; + + err = deflate(&c_stream, Z_FINISH); + if (err != Z_STREAM_END) { + fprintf(stderr, "deflate should report Z_STREAM_END\n"); + exit(1); + } + err = deflateEnd(&c_stream); + CHECK_ERR(err, "deflateEnd"); +} + +/* =========================================================================== + * Test inflate() with a preset dictionary + */ +void test_dict_inflate(compr, comprLen, uncompr, uncomprLen) + Byte *compr, *uncompr; + uLong comprLen, uncomprLen; +{ + int err; + z_stream d_stream; /* decompression stream */ + + strcpy((char*)uncompr, "garbage"); + + d_stream.zalloc = zalloc; + d_stream.zfree = zfree; + d_stream.opaque = (voidpf)0; + + d_stream.next_in = compr; + d_stream.avail_in = (uInt)comprLen; + + err = inflateInit(&d_stream); + CHECK_ERR(err, "inflateInit"); + + d_stream.next_out = uncompr; + d_stream.avail_out = (uInt)uncomprLen; + + for (;;) { + err = inflate(&d_stream, Z_NO_FLUSH); + if (err == Z_STREAM_END) break; + if (err == Z_NEED_DICT) { + if (d_stream.adler != dictId) { + fprintf(stderr, "unexpected dictionary"); + exit(1); + } + err = 
inflateSetDictionary(&d_stream, (const Bytef*)dictionary, + (int)sizeof(dictionary)); + } + CHECK_ERR(err, "inflate with dict"); + } + + err = inflateEnd(&d_stream); + CHECK_ERR(err, "inflateEnd"); + + if (strcmp((char*)uncompr, hello)) { + fprintf(stderr, "bad inflate with dict\n"); + exit(1); + } else { + printf("inflate with dictionary: %s\n", (char *)uncompr); + } +} + +/* =========================================================================== + * Usage: example [output.gz [input.gz]] + */ + +int main(argc, argv) + int argc; + char *argv[]; +{ + Byte *compr, *uncompr; + uLong comprLen = 10000*sizeof(int); /* don't overflow on MSDOS */ + uLong uncomprLen = comprLen; + static const char* myVersion = ZLIB_VERSION; + + if (zlibVersion()[0] != myVersion[0]) { + fprintf(stderr, "incompatible zlib version\n"); + exit(1); + + } else if (strcmp(zlibVersion(), ZLIB_VERSION) != 0) { + fprintf(stderr, "warning: different zlib version\n"); + } + + printf("zlib version %s = 0x%04x, compile flags = 0x%lx\n", + ZLIB_VERSION, ZLIB_VERNUM, zlibCompileFlags()); + + compr = (Byte*)calloc((uInt)comprLen, 1); + uncompr = (Byte*)calloc((uInt)uncomprLen, 1); + /* compr and uncompr are cleared to avoid reading uninitialized + * data and to ensure that uncompr compresses well. + */ + if (compr == Z_NULL || uncompr == Z_NULL) { + printf("out of memory\n"); + exit(1); + } + +#ifdef Z_SOLO + argc = strlen(argv[0]); +#else + test_compress(compr, comprLen, uncompr, uncomprLen); + + test_gzio((argc > 1 ? argv[1] : TESTFILE), + uncompr, uncomprLen); +#endif + + test_deflate(compr, comprLen); + test_inflate(compr, comprLen, uncompr, uncomprLen); + + test_large_deflate(compr, comprLen, uncompr, uncomprLen); + test_large_inflate(compr, comprLen, uncompr, uncomprLen); + + test_flush(compr, &comprLen); + test_sync(compr, comprLen, uncompr, uncomprLen); + comprLen = uncomprLen; + + test_dict_deflate(compr, comprLen); + test_dict_inflate(compr, comprLen, uncompr, uncomprLen); + + free(compr); + free(uncompr); + + return 0; +} ADDED compat/zlib/test/infcover.c Index: compat/zlib/test/infcover.c ================================================================== --- compat/zlib/test/infcover.c +++ compat/zlib/test/infcover.c @@ -0,0 +1,671 @@ +/* infcover.c -- test zlib's inflate routines with full code coverage + * Copyright (C) 2011 Mark Adler + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* to use, do: ./configure --cover && make cover */ + +#include +#include +#include +#include +#include "zlib.h" + +/* get definition of internal structure so we can mess with it (see pull()), + and so we can call inflate_trees() (see cover5()) */ +#define ZLIB_INTERNAL +#include "inftrees.h" +#include "inflate.h" + +#define local static + +/* -- memory tracking routines -- */ + +/* + These memory tracking routines are provided to zlib and track all of zlib's + allocations and deallocations, check for LIFO operations, keep a current + and high water mark of total bytes requested, optionally set a limit on the + total memory that can be allocated, and when done check for memory leaks. 
+ + They are used as follows: + + z_stream strm; + mem_setup(&strm) initializes the memory tracking and sets the + zalloc, zfree, and opaque members of strm to use + memory tracking for all zlib operations on strm + mem_limit(&strm, limit) sets a limit on the total bytes requested -- a + request that exceeds this limit will result in an + allocation failure (returns NULL) -- setting the + limit to zero means no limit, which is the default + after mem_setup() + mem_used(&strm, "msg") prints to stderr "msg" and the total bytes used + mem_high(&strm, "msg") prints to stderr "msg" and the high water mark + mem_done(&strm, "msg") ends memory tracking, releases all allocations + for the tracking as well as leaked zlib blocks, if + any. If there was anything unusual, such as leaked + blocks, non-FIFO frees, or frees of addresses not + allocated, then "msg" and information about the + problem is printed to stderr. If everything is + normal, nothing is printed. mem_done resets the + strm members to Z_NULL to use the default memory + allocation routines on the next zlib initialization + using strm. + */ + +/* these items are strung together in a linked list, one for each allocation */ +struct mem_item { + void *ptr; /* pointer to allocated memory */ + size_t size; /* requested size of allocation */ + struct mem_item *next; /* pointer to next item in list, or NULL */ +}; + +/* this structure is at the root of the linked list, and tracks statistics */ +struct mem_zone { + struct mem_item *first; /* pointer to first item in list, or NULL */ + size_t total, highwater; /* total allocations, and largest total */ + size_t limit; /* memory allocation limit, or 0 if no limit */ + int notlifo, rogue; /* counts of non-LIFO frees and rogue frees */ +}; + +/* memory allocation routine to pass to zlib */ +local void *mem_alloc(void *mem, unsigned count, unsigned size) +{ + void *ptr; + struct mem_item *item; + struct mem_zone *zone = mem; + size_t len = count * (size_t)size; + + /* induced allocation failure */ + if (zone == NULL || (zone->limit && zone->total + len > zone->limit)) + return NULL; + + /* perform allocation using the standard library, fill memory with a + non-zero value to make sure that the code isn't depending on zeros */ + ptr = malloc(len); + if (ptr == NULL) + return NULL; + memset(ptr, 0xa5, len); + + /* create a new item for the list */ + item = malloc(sizeof(struct mem_item)); + if (item == NULL) { + free(ptr); + return NULL; + } + item->ptr = ptr; + item->size = len; + + /* insert item at the beginning of the list */ + item->next = zone->first; + zone->first = item; + + /* update the statistics */ + zone->total += item->size; + if (zone->total > zone->highwater) + zone->highwater = zone->total; + + /* return the allocated memory */ + return ptr; +} + +/* memory free routine to pass to zlib */ +local void mem_free(void *mem, void *ptr) +{ + struct mem_item *item, *next; + struct mem_zone *zone = mem; + + /* if no zone, just do a free */ + if (zone == NULL) { + free(ptr); + return; + } + + /* point next to the item that matches ptr, or NULL if not found -- remove + the item from the linked list if found */ + next = zone->first; + if (next) { + if (next->ptr == ptr) + zone->first = next->next; /* first one is it, remove from list */ + else { + do { /* search the linked list */ + item = next; + next = item->next; + } while (next != NULL && next->ptr != ptr); + if (next) { /* if found, remove from linked list */ + item->next = next->next; + zone->notlifo++; /* not a LIFO free */ + } + + } + } + 
+ /* if found, update the statistics and free the item */ + if (next) { + zone->total -= next->size; + free(next); + } + + /* if not found, update the rogue count */ + else + zone->rogue++; + + /* in any case, do the requested free with the standard library function */ + free(ptr); +} + +/* set up a controlled memory allocation space for monitoring, set the stream + parameters to the controlled routines, with opaque pointing to the space */ +local void mem_setup(z_stream *strm) +{ + struct mem_zone *zone; + + zone = malloc(sizeof(struct mem_zone)); + assert(zone != NULL); + zone->first = NULL; + zone->total = 0; + zone->highwater = 0; + zone->limit = 0; + zone->notlifo = 0; + zone->rogue = 0; + strm->opaque = zone; + strm->zalloc = mem_alloc; + strm->zfree = mem_free; +} + +/* set a limit on the total memory allocation, or 0 to remove the limit */ +local void mem_limit(z_stream *strm, size_t limit) +{ + struct mem_zone *zone = strm->opaque; + + zone->limit = limit; +} + +/* show the current total requested allocations in bytes */ +local void mem_used(z_stream *strm, char *prefix) +{ + struct mem_zone *zone = strm->opaque; + + fprintf(stderr, "%s: %lu allocated\n", prefix, zone->total); +} + +/* show the high water allocation in bytes */ +local void mem_high(z_stream *strm, char *prefix) +{ + struct mem_zone *zone = strm->opaque; + + fprintf(stderr, "%s: %lu high water mark\n", prefix, zone->highwater); +} + +/* release the memory allocation zone -- if there are any surprises, notify */ +local void mem_done(z_stream *strm, char *prefix) +{ + int count = 0; + struct mem_item *item, *next; + struct mem_zone *zone = strm->opaque; + + /* show high water mark */ + mem_high(strm, prefix); + + /* free leftover allocations and item structures, if any */ + item = zone->first; + while (item != NULL) { + free(item->ptr); + next = item->next; + free(item); + item = next; + count++; + } + + /* issue alerts about anything unexpected */ + if (count || zone->total) + fprintf(stderr, "** %s: %lu bytes in %d blocks not freed\n", + prefix, zone->total, count); + if (zone->notlifo) + fprintf(stderr, "** %s: %d frees not LIFO\n", prefix, zone->notlifo); + if (zone->rogue) + fprintf(stderr, "** %s: %d frees not recognized\n", + prefix, zone->rogue); + + /* free the zone and delete from the stream */ + free(zone); + strm->opaque = Z_NULL; + strm->zalloc = Z_NULL; + strm->zfree = Z_NULL; +} + +/* -- inflate test routines -- */ + +/* Decode a hexadecimal string, set *len to length, in[] to the bytes. This + decodes liberally, in that hex digits can be adjacent, in which case two in + a row writes a byte. Or they can delimited by any non-hex character, where + the delimiters are ignored except when a single hex digit is followed by a + delimiter in which case that single digit writes a byte. The returned + data is allocated and must eventually be freed. NULL is returned if out of + memory. If the length is not needed, then len can be NULL. 
*/ +local unsigned char *h2b(const char *hex, unsigned *len) +{ + unsigned char *in; + unsigned next, val; + + in = malloc((strlen(hex) + 1) >> 1); + if (in == NULL) + return NULL; + next = 0; + val = 1; + do { + if (*hex >= '0' && *hex <= '9') + val = (val << 4) + *hex - '0'; + else if (*hex >= 'A' && *hex <= 'F') + val = (val << 4) + *hex - 'A' + 10; + else if (*hex >= 'a' && *hex <= 'f') + val = (val << 4) + *hex - 'a' + 10; + else if (val != 1 && val < 32) /* one digit followed by delimiter */ + val += 240; /* make it look like two digits */ + if (val > 255) { /* have two digits */ + in[next++] = val & 0xff; /* save the decoded byte */ + val = 1; /* start over */ + } + } while (*hex++); /* go through the loop with the terminating null */ + if (len != NULL) + *len = next; + in = reallocf(in, next); + return in; +} + +/* generic inflate() run, where hex is the hexadecimal input data, what is the + text to include in an error message, step is how much input data to feed + inflate() on each call, or zero to feed it all, win is the window bits + parameter to inflateInit2(), len is the size of the output buffer, and err + is the error code expected from the first inflate() call (the second + inflate() call is expected to return Z_STREAM_END). If win is 47, then + header information is collected with inflateGetHeader(). If a zlib stream + is looking for a dictionary, then an empty dictionary is provided. + inflate() is run until all of the input data is consumed. */ +local void inf(char *hex, char *what, unsigned step, int win, unsigned len, + int err) +{ + int ret; + unsigned have; + unsigned char *in, *out; + z_stream strm, copy; + gz_header head; + + mem_setup(&strm); + strm.avail_in = 0; + strm.next_in = Z_NULL; + ret = inflateInit2(&strm, win); + if (ret != Z_OK) { + mem_done(&strm, what); + return; + } + out = malloc(len); assert(out != NULL); + if (win == 47) { + head.extra = out; + head.extra_max = len; + head.name = out; + head.name_max = len; + head.comment = out; + head.comm_max = len; + ret = inflateGetHeader(&strm, &head); assert(ret == Z_OK); + } + in = h2b(hex, &have); assert(in != NULL); + if (step == 0 || step > have) + step = have; + strm.avail_in = step; + have -= step; + strm.next_in = in; + do { + strm.avail_out = len; + strm.next_out = out; + ret = inflate(&strm, Z_NO_FLUSH); assert(err == 9 || ret == err); + if (ret != Z_OK && ret != Z_BUF_ERROR && ret != Z_NEED_DICT) + break; + if (ret == Z_NEED_DICT) { + ret = inflateSetDictionary(&strm, in, 1); + assert(ret == Z_DATA_ERROR); + mem_limit(&strm, 1); + ret = inflateSetDictionary(&strm, out, 0); + assert(ret == Z_MEM_ERROR); + mem_limit(&strm, 0); + ((struct inflate_state *)strm.state)->mode = DICT; + ret = inflateSetDictionary(&strm, out, 0); + assert(ret == Z_OK); + ret = inflate(&strm, Z_NO_FLUSH); assert(ret == Z_BUF_ERROR); + } + ret = inflateCopy(©, &strm); assert(ret == Z_OK); + ret = inflateEnd(©); assert(ret == Z_OK); + err = 9; /* don't care next time around */ + have += strm.avail_in; + strm.avail_in = step > have ? 
have : step; + have -= strm.avail_in; + } while (strm.avail_in); + free(in); + free(out); + ret = inflateReset2(&strm, -8); assert(ret == Z_OK); + ret = inflateEnd(&strm); assert(ret == Z_OK); + mem_done(&strm, what); +} + +/* cover all of the lines in inflate.c up to inflate() */ +local void cover_support(void) +{ + int ret; + z_stream strm; + + mem_setup(&strm); + strm.avail_in = 0; + strm.next_in = Z_NULL; + ret = inflateInit(&strm); assert(ret == Z_OK); + mem_used(&strm, "inflate init"); + ret = inflatePrime(&strm, 5, 31); assert(ret == Z_OK); + ret = inflatePrime(&strm, -1, 0); assert(ret == Z_OK); + ret = inflateSetDictionary(&strm, Z_NULL, 0); + assert(ret == Z_STREAM_ERROR); + ret = inflateEnd(&strm); assert(ret == Z_OK); + mem_done(&strm, "prime"); + + inf("63 0", "force window allocation", 0, -15, 1, Z_OK); + inf("63 18 5", "force window replacement", 0, -8, 259, Z_OK); + inf("63 18 68 30 d0 0 0", "force split window update", 4, -8, 259, Z_OK); + inf("3 0", "use fixed blocks", 0, -15, 1, Z_STREAM_END); + inf("", "bad window size", 0, 1, 0, Z_STREAM_ERROR); + + mem_setup(&strm); + strm.avail_in = 0; + strm.next_in = Z_NULL; + ret = inflateInit_(&strm, ZLIB_VERSION - 1, (int)sizeof(z_stream)); + assert(ret == Z_VERSION_ERROR); + mem_done(&strm, "wrong version"); + + strm.avail_in = 0; + strm.next_in = Z_NULL; + ret = inflateInit(&strm); assert(ret == Z_OK); + ret = inflateEnd(&strm); assert(ret == Z_OK); + fputs("inflate built-in memory routines\n", stderr); +} + +/* cover all inflate() header and trailer cases and code after inflate() */ +local void cover_wrap(void) +{ + int ret; + z_stream strm, copy; + unsigned char dict[257]; + + ret = inflate(Z_NULL, 0); assert(ret == Z_STREAM_ERROR); + ret = inflateEnd(Z_NULL); assert(ret == Z_STREAM_ERROR); + ret = inflateCopy(Z_NULL, Z_NULL); assert(ret == Z_STREAM_ERROR); + fputs("inflate bad parameters\n", stderr); + + inf("1f 8b 0 0", "bad gzip method", 0, 31, 0, Z_DATA_ERROR); + inf("1f 8b 8 80", "bad gzip flags", 0, 31, 0, Z_DATA_ERROR); + inf("77 85", "bad zlib method", 0, 15, 0, Z_DATA_ERROR); + inf("8 99", "set window size from header", 0, 0, 0, Z_OK); + inf("78 9c", "bad zlib window size", 0, 8, 0, Z_DATA_ERROR); + inf("78 9c 63 0 0 0 1 0 1", "check adler32", 0, 15, 1, Z_STREAM_END); + inf("1f 8b 8 1e 0 0 0 0 0 0 1 0 0 0 0 0 0", "bad header crc", 0, 47, 1, + Z_DATA_ERROR); + inf("1f 8b 8 2 0 0 0 0 0 0 1d 26 3 0 0 0 0 0 0 0 0 0", "check gzip length", + 0, 47, 0, Z_STREAM_END); + inf("78 90", "bad zlib header check", 0, 47, 0, Z_DATA_ERROR); + inf("8 b8 0 0 0 1", "need dictionary", 0, 8, 0, Z_NEED_DICT); + inf("78 9c 63 0", "compute adler32", 0, 15, 1, Z_OK); + + mem_setup(&strm); + strm.avail_in = 0; + strm.next_in = Z_NULL; + ret = inflateInit2(&strm, -8); + strm.avail_in = 2; + strm.next_in = (void *)"\x63"; + strm.avail_out = 1; + strm.next_out = (void *)&ret; + mem_limit(&strm, 1); + ret = inflate(&strm, Z_NO_FLUSH); assert(ret == Z_MEM_ERROR); + ret = inflate(&strm, Z_NO_FLUSH); assert(ret == Z_MEM_ERROR); + mem_limit(&strm, 0); + memset(dict, 0, 257); + ret = inflateSetDictionary(&strm, dict, 257); + assert(ret == Z_OK); + mem_limit(&strm, (sizeof(struct inflate_state) << 1) + 256); + ret = inflatePrime(&strm, 16, 0); assert(ret == Z_OK); + strm.avail_in = 2; + strm.next_in = (void *)"\x80"; + ret = inflateSync(&strm); assert(ret == Z_DATA_ERROR); + ret = inflate(&strm, Z_NO_FLUSH); assert(ret == Z_STREAM_ERROR); + strm.avail_in = 4; + strm.next_in = (void *)"\0\0\xff\xff"; + ret = inflateSync(&strm); assert(ret == Z_OK); + 
(void)inflateSyncPoint(&strm); + ret = inflateCopy(©, &strm); assert(ret == Z_MEM_ERROR); + mem_limit(&strm, 0); + ret = inflateUndermine(&strm, 1); assert(ret == Z_DATA_ERROR); + (void)inflateMark(&strm); + ret = inflateEnd(&strm); assert(ret == Z_OK); + mem_done(&strm, "miscellaneous, force memory errors"); +} + +/* input and output functions for inflateBack() */ +local unsigned pull(void *desc, unsigned char **buf) +{ + static unsigned int next = 0; + static unsigned char dat[] = {0x63, 0, 2, 0}; + struct inflate_state *state; + + if (desc == Z_NULL) { + next = 0; + return 0; /* no input (already provided at next_in) */ + } + state = (void *)((z_stream *)desc)->state; + if (state != Z_NULL) + state->mode = SYNC; /* force an otherwise impossible situation */ + return next < sizeof(dat) ? (*buf = dat + next++, 1) : 0; +} + +local int push(void *desc, unsigned char *buf, unsigned len) +{ + buf += len; + return desc != Z_NULL; /* force error if desc not null */ +} + +/* cover inflateBack() up to common deflate data cases and after those */ +local void cover_back(void) +{ + int ret; + z_stream strm; + unsigned char win[32768]; + + ret = inflateBackInit_(Z_NULL, 0, win, 0, 0); + assert(ret == Z_VERSION_ERROR); + ret = inflateBackInit(Z_NULL, 0, win); assert(ret == Z_STREAM_ERROR); + ret = inflateBack(Z_NULL, Z_NULL, Z_NULL, Z_NULL, Z_NULL); + assert(ret == Z_STREAM_ERROR); + ret = inflateBackEnd(Z_NULL); assert(ret == Z_STREAM_ERROR); + fputs("inflateBack bad parameters\n", stderr); + + mem_setup(&strm); + ret = inflateBackInit(&strm, 15, win); assert(ret == Z_OK); + strm.avail_in = 2; + strm.next_in = (void *)"\x03"; + ret = inflateBack(&strm, pull, Z_NULL, push, Z_NULL); + assert(ret == Z_STREAM_END); + /* force output error */ + strm.avail_in = 3; + strm.next_in = (void *)"\x63\x00"; + ret = inflateBack(&strm, pull, Z_NULL, push, &strm); + assert(ret == Z_BUF_ERROR); + /* force mode error by mucking with state */ + ret = inflateBack(&strm, pull, &strm, push, Z_NULL); + assert(ret == Z_STREAM_ERROR); + ret = inflateBackEnd(&strm); assert(ret == Z_OK); + mem_done(&strm, "inflateBack bad state"); + + ret = inflateBackInit(&strm, 15, win); assert(ret == Z_OK); + ret = inflateBackEnd(&strm); assert(ret == Z_OK); + fputs("inflateBack built-in memory routines\n", stderr); +} + +/* do a raw inflate of data in hexadecimal with both inflate and inflateBack */ +local int try(char *hex, char *id, int err) +{ + int ret; + unsigned len, size; + unsigned char *in, *out, *win; + char *prefix; + z_stream strm; + + /* convert to hex */ + in = h2b(hex, &len); + assert(in != NULL); + + /* allocate work areas */ + size = len << 3; + out = malloc(size); + assert(out != NULL); + win = malloc(32768); + assert(win != NULL); + prefix = malloc(strlen(id) + 6); + assert(prefix != NULL); + + /* first with inflate */ + strcpy(prefix, id); + strcat(prefix, "-late"); + mem_setup(&strm); + strm.avail_in = 0; + strm.next_in = Z_NULL; + ret = inflateInit2(&strm, err < 0 ? 
47 : -15); + assert(ret == Z_OK); + strm.avail_in = len; + strm.next_in = in; + do { + strm.avail_out = size; + strm.next_out = out; + ret = inflate(&strm, Z_TREES); + assert(ret != Z_STREAM_ERROR && ret != Z_MEM_ERROR); + if (ret == Z_DATA_ERROR || ret == Z_NEED_DICT) + break; + } while (strm.avail_in || strm.avail_out == 0); + if (err) { + assert(ret == Z_DATA_ERROR); + assert(strcmp(id, strm.msg) == 0); + } + inflateEnd(&strm); + mem_done(&strm, prefix); + + /* then with inflateBack */ + if (err >= 0) { + strcpy(prefix, id); + strcat(prefix, "-back"); + mem_setup(&strm); + ret = inflateBackInit(&strm, 15, win); + assert(ret == Z_OK); + strm.avail_in = len; + strm.next_in = in; + ret = inflateBack(&strm, pull, Z_NULL, push, Z_NULL); + assert(ret != Z_STREAM_ERROR); + if (err) { + assert(ret == Z_DATA_ERROR); + assert(strcmp(id, strm.msg) == 0); + } + inflateBackEnd(&strm); + mem_done(&strm, prefix); + } + + /* clean up */ + free(prefix); + free(win); + free(out); + free(in); + return ret; +} + +/* cover deflate data cases in both inflate() and inflateBack() */ +local void cover_inflate(void) +{ + try("0 0 0 0 0", "invalid stored block lengths", 1); + try("3 0", "fixed", 0); + try("6", "invalid block type", 1); + try("1 1 0 fe ff 0", "stored", 0); + try("fc 0 0", "too many length or distance symbols", 1); + try("4 0 fe ff", "invalid code lengths set", 1); + try("4 0 24 49 0", "invalid bit length repeat", 1); + try("4 0 24 e9 ff ff", "invalid bit length repeat", 1); + try("4 0 24 e9 ff 6d", "invalid code -- missing end-of-block", 1); + try("4 80 49 92 24 49 92 24 71 ff ff 93 11 0", + "invalid literal/lengths set", 1); + try("4 80 49 92 24 49 92 24 f b4 ff ff c3 84", "invalid distances set", 1); + try("4 c0 81 8 0 0 0 0 20 7f eb b 0 0", "invalid literal/length code", 1); + try("2 7e ff ff", "invalid distance code", 1); + try("c c0 81 0 0 0 0 0 90 ff 6b 4 0", "invalid distance too far back", 1); + + /* also trailer mismatch just in inflate() */ + try("1f 8b 8 0 0 0 0 0 0 0 3 0 0 0 0 1", "incorrect data check", -1); + try("1f 8b 8 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 1", + "incorrect length check", -1); + try("5 c0 21 d 0 0 0 80 b0 fe 6d 2f 91 6c", "pull 17", 0); + try("5 e0 81 91 24 cb b2 2c 49 e2 f 2e 8b 9a 47 56 9f fb fe ec d2 ff 1f", + "long code", 0); + try("ed c0 1 1 0 0 0 40 20 ff 57 1b 42 2c 4f", "length extra", 0); + try("ed cf c1 b1 2c 47 10 c4 30 fa 6f 35 1d 1 82 59 3d fb be 2e 2a fc f c", + "long distance and extra", 0); + try("ed c0 81 0 0 0 0 80 a0 fd a9 17 a9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 " + "0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6", "window end", 0); + inf("2 8 20 80 0 3 0", "inflate_fast TYPE return", 0, -15, 258, + Z_STREAM_END); + inf("63 18 5 40 c 0", "window wrap", 3, -8, 300, Z_OK); +} + +/* cover remaining lines in inftrees.c */ +local void cover_trees(void) +{ + int ret; + unsigned bits; + unsigned short lens[16], work[16]; + code *next, table[ENOUGH_DISTS]; + + /* we need to call inflate_table() directly in order to manifest not- + enough errors, since zlib insures that enough is always enough */ + for (bits = 0; bits < 15; bits++) + lens[bits] = (unsigned short)(bits + 1); + lens[15] = 15; + next = table; + bits = 15; + ret = inflate_table(DISTS, lens, 16, &next, &bits, work); + assert(ret == 1); + next = table; + bits = 1; + ret = inflate_table(DISTS, lens, 16, &next, &bits, work); + assert(ret == 1); + fputs("inflate_table not enough errors\n", stderr); +} + +/* cover remaining inffast.c decoding and window copying */ +local void cover_fast(void) +{ + inf("e5 e0 81 ad 6d 
cb b2 2c c9 01 1e 59 63 ae 7d ee fb 4d fd b5 35 41 68" + " ff 7f 0f 0 0 0", "fast length extra bits", 0, -8, 258, Z_DATA_ERROR); + inf("25 fd 81 b5 6d 59 b6 6a 49 ea af 35 6 34 eb 8c b9 f6 b9 1e ef 67 49" + " 50 fe ff ff 3f 0 0", "fast distance extra bits", 0, -8, 258, + Z_DATA_ERROR); + inf("3 7e 0 0 0 0 0", "fast invalid distance code", 0, -8, 258, + Z_DATA_ERROR); + inf("1b 7 0 0 0 0 0", "fast invalid literal/length code", 0, -8, 258, + Z_DATA_ERROR); + inf("d c7 1 ae eb 38 c 4 41 a0 87 72 de df fb 1f b8 36 b1 38 5d ff ff 0", + "fast 2nd level codes and too far back", 0, -8, 258, Z_DATA_ERROR); + inf("63 18 5 8c 10 8 0 0 0 0", "very common case", 0, -8, 259, Z_OK); + inf("63 60 60 18 c9 0 8 18 18 18 26 c0 28 0 29 0 0 0", + "contiguous and wrap around window", 6, -8, 259, Z_OK); + inf("63 0 3 0 0 0 0 0", "copy direct from output", 0, -8, 259, + Z_STREAM_END); +} + +int main(void) +{ + fprintf(stderr, "%s\n", zlibVersion()); + cover_support(); + cover_wrap(); + cover_back(); + cover_inflate(); + cover_trees(); + cover_fast(); + return 0; +} ADDED compat/zlib/test/minigzip.c Index: compat/zlib/test/minigzip.c ================================================================== --- compat/zlib/test/minigzip.c +++ compat/zlib/test/minigzip.c @@ -0,0 +1,651 @@ +/* minigzip.c -- simulate gzip using the zlib compression library + * Copyright (C) 1995-2006, 2010, 2011 Jean-loup Gailly. + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* + * minigzip is a minimal implementation of the gzip utility. This is + * only an example of using zlib and isn't meant to replace the + * full-featured gzip. No attempt is made to deal with file systems + * limiting names to 14 or 8+3 characters, etc... Error checking is + * very limited. So use minigzip only for testing; use gzip for the + * real thing. On MSDOS, use only on file names without extension + * or in pipe mode. + */ + +/* @(#) $Id$ */ + +#include "zlib.h" +#include + +#ifdef STDC +# include +# include +#endif + +#ifdef USE_MMAP +# include +# include +# include +#endif + +#if defined(MSDOS) || defined(OS2) || defined(WIN32) || defined(__CYGWIN__) +# include +# include +# ifdef UNDER_CE +# include +# endif +# define SET_BINARY_MODE(file) setmode(fileno(file), O_BINARY) +#else +# define SET_BINARY_MODE(file) +#endif + +#ifdef _MSC_VER +# define snprintf _snprintf +#endif + +#ifdef VMS +# define unlink delete +# define GZ_SUFFIX "-gz" +#endif +#ifdef RISCOS +# define unlink remove +# define GZ_SUFFIX "-gz" +# define fileno(file) file->__file +#endif +#if defined(__MWERKS__) && __dest_os != __be_os && __dest_os != __win32_os +# include /* for fileno */ +#endif + +#if !defined(Z_HAVE_UNISTD_H) && !defined(_LARGEFILE64_SOURCE) +#ifndef WIN32 /* unlink already in stdio.h for WIN32 */ + extern int unlink OF((const char *)); +#endif +#endif + +#if defined(UNDER_CE) +# include +# define perror(s) pwinerror(s) + +/* Map the Windows error number in ERROR to a locale-dependent error + message string and return a pointer to it. Typically, the values + for ERROR come from GetLastError. + + The string pointed to shall not be modified by the application, + but may be overwritten by a subsequent call to strwinerror + + The strwinerror function does not change the current setting + of GetLastError. 
*/ + +static char *strwinerror (error) + DWORD error; +{ + static char buf[1024]; + + wchar_t *msgbuf; + DWORD lasterr = GetLastError(); + DWORD chars = FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM + | FORMAT_MESSAGE_ALLOCATE_BUFFER, + NULL, + error, + 0, /* Default language */ + (LPVOID)&msgbuf, + 0, + NULL); + if (chars != 0) { + /* If there is an \r\n appended, zap it. */ + if (chars >= 2 + && msgbuf[chars - 2] == '\r' && msgbuf[chars - 1] == '\n') { + chars -= 2; + msgbuf[chars] = 0; + } + + if (chars > sizeof (buf) - 1) { + chars = sizeof (buf) - 1; + msgbuf[chars] = 0; + } + + wcstombs(buf, msgbuf, chars + 1); + LocalFree(msgbuf); + } + else { + sprintf(buf, "unknown win32 error (%ld)", error); + } + + SetLastError(lasterr); + return buf; +} + +static void pwinerror (s) + const char *s; +{ + if (s && *s) + fprintf(stderr, "%s: %s\n", s, strwinerror(GetLastError ())); + else + fprintf(stderr, "%s\n", strwinerror(GetLastError ())); +} + +#endif /* UNDER_CE */ + +#ifndef GZ_SUFFIX +# define GZ_SUFFIX ".gz" +#endif +#define SUFFIX_LEN (sizeof(GZ_SUFFIX)-1) + +#define BUFLEN 16384 +#define MAX_NAME_LEN 1024 + +#ifdef MAXSEG_64K +# define local static + /* Needed for systems with limitation on stack size. */ +#else +# define local +#endif + +#ifdef Z_SOLO +/* for Z_SOLO, create simplified gz* functions using deflate and inflate */ + +#if defined(Z_HAVE_UNISTD_H) || defined(Z_LARGE) +# include /* for unlink() */ +#endif + +void *myalloc OF((void *, unsigned, unsigned)); +void myfree OF((void *, void *)); + +void *myalloc(q, n, m) + void *q; + unsigned n, m; +{ + q = Z_NULL; + return calloc(n, m); +} + +void myfree(q, p) + void *q, *p; +{ + q = Z_NULL; + free(p); +} + +typedef struct gzFile_s { + FILE *file; + int write; + int err; + char *msg; + z_stream strm; +} *gzFile; + +gzFile gzopen OF((const char *, const char *)); +gzFile gzdopen OF((int, const char *)); +gzFile gz_open OF((const char *, int, const char *)); + +gzFile gzopen(path, mode) +const char *path; +const char *mode; +{ + return gz_open(path, -1, mode); +} + +gzFile gzdopen(fd, mode) +int fd; +const char *mode; +{ + return gz_open(NULL, fd, mode); +} + +gzFile gz_open(path, fd, mode) + const char *path; + int fd; + const char *mode; +{ + gzFile gz; + int ret; + + gz = malloc(sizeof(struct gzFile_s)); + if (gz == NULL) + return NULL; + gz->write = strchr(mode, 'w') != NULL; + gz->strm.zalloc = myalloc; + gz->strm.zfree = myfree; + gz->strm.opaque = Z_NULL; + if (gz->write) + ret = deflateInit2(&(gz->strm), -1, 8, 15 + 16, 8, 0); + else { + gz->strm.next_in = 0; + gz->strm.avail_in = Z_NULL; + ret = inflateInit2(&(gz->strm), 15 + 16); + } + if (ret != Z_OK) { + free(gz); + return NULL; + } + gz->file = path == NULL ? fdopen(fd, gz->write ? "wb" : "rb") : + fopen(path, gz->write ? "wb" : "rb"); + if (gz->file == NULL) { + gz->write ? 
deflateEnd(&(gz->strm)) : inflateEnd(&(gz->strm)); + free(gz); + return NULL; + } + gz->err = 0; + gz->msg = ""; + return gz; +} + +int gzwrite OF((gzFile, const void *, unsigned)); + +int gzwrite(gz, buf, len) + gzFile gz; + const void *buf; + unsigned len; +{ + z_stream *strm; + unsigned char out[BUFLEN]; + + if (gz == NULL || !gz->write) + return 0; + strm = &(gz->strm); + strm->next_in = (void *)buf; + strm->avail_in = len; + do { + strm->next_out = out; + strm->avail_out = BUFLEN; + (void)deflate(strm, Z_NO_FLUSH); + fwrite(out, 1, BUFLEN - strm->avail_out, gz->file); + } while (strm->avail_out == 0); + return len; +} + +int gzread OF((gzFile, void *, unsigned)); + +int gzread(gz, buf, len) + gzFile gz; + void *buf; + unsigned len; +{ + int ret; + unsigned got; + unsigned char in[1]; + z_stream *strm; + + if (gz == NULL || gz->write) + return 0; + if (gz->err) + return 0; + strm = &(gz->strm); + strm->next_out = (void *)buf; + strm->avail_out = len; + do { + got = fread(in, 1, 1, gz->file); + if (got == 0) + break; + strm->next_in = in; + strm->avail_in = 1; + ret = inflate(strm, Z_NO_FLUSH); + if (ret == Z_DATA_ERROR) { + gz->err = Z_DATA_ERROR; + gz->msg = strm->msg; + return 0; + } + if (ret == Z_STREAM_END) + inflateReset(strm); + } while (strm->avail_out); + return len - strm->avail_out; +} + +int gzclose OF((gzFile)); + +int gzclose(gz) + gzFile gz; +{ + z_stream *strm; + unsigned char out[BUFLEN]; + + if (gz == NULL) + return Z_STREAM_ERROR; + strm = &(gz->strm); + if (gz->write) { + strm->next_in = Z_NULL; + strm->avail_in = 0; + do { + strm->next_out = out; + strm->avail_out = BUFLEN; + (void)deflate(strm, Z_FINISH); + fwrite(out, 1, BUFLEN - strm->avail_out, gz->file); + } while (strm->avail_out == 0); + deflateEnd(strm); + } + else + inflateEnd(strm); + fclose(gz->file); + free(gz); + return Z_OK; +} + +const char *gzerror OF((gzFile, int *)); + +const char *gzerror(gz, err) + gzFile gz; + int *err; +{ + *err = gz->err; + return gz->msg; +} + +#endif + +char *prog; + +void error OF((const char *msg)); +void gz_compress OF((FILE *in, gzFile out)); +#ifdef USE_MMAP +int gz_compress_mmap OF((FILE *in, gzFile out)); +#endif +void gz_uncompress OF((gzFile in, FILE *out)); +void file_compress OF((char *file, char *mode)); +void file_uncompress OF((char *file)); +int main OF((int argc, char *argv[])); + +/* =========================================================================== + * Display error message and exit + */ +void error(msg) + const char *msg; +{ + fprintf(stderr, "%s: %s\n", prog, msg); + exit(1); +} + +/* =========================================================================== + * Compress input to output then close both files. + */ + +void gz_compress(in, out) + FILE *in; + gzFile out; +{ + local char buf[BUFLEN]; + int len; + int err; + +#ifdef USE_MMAP + /* Try first compressing with mmap. If mmap fails (minigzip used in a + * pipe), use the normal fread loop. + */ + if (gz_compress_mmap(in, out) == Z_OK) return; +#endif + for (;;) { + len = (int)fread(buf, 1, sizeof(buf), in); + if (ferror(in)) { + perror("fread"); + exit(1); + } + if (len == 0) break; + + if (gzwrite(out, buf, (unsigned)len) != len) error(gzerror(out, &err)); + } + fclose(in); + if (gzclose(out) != Z_OK) error("failed gzclose"); +} + +#ifdef USE_MMAP /* MMAP version, Miguel Albrecht */ + +/* Try compressing the input file at once using mmap. Return Z_OK if + * if success, Z_ERRNO otherwise. 
+ */ +int gz_compress_mmap(in, out) + FILE *in; + gzFile out; +{ + int len; + int err; + int ifd = fileno(in); + caddr_t buf; /* mmap'ed buffer for the entire input file */ + off_t buf_len; /* length of the input file */ + struct stat sb; + + /* Determine the size of the file, needed for mmap: */ + if (fstat(ifd, &sb) < 0) return Z_ERRNO; + buf_len = sb.st_size; + if (buf_len <= 0) return Z_ERRNO; + + /* Now do the actual mmap: */ + buf = mmap((caddr_t) 0, buf_len, PROT_READ, MAP_SHARED, ifd, (off_t)0); + if (buf == (caddr_t)(-1)) return Z_ERRNO; + + /* Compress the whole file at once: */ + len = gzwrite(out, (char *)buf, (unsigned)buf_len); + + if (len != (int)buf_len) error(gzerror(out, &err)); + + munmap(buf, buf_len); + fclose(in); + if (gzclose(out) != Z_OK) error("failed gzclose"); + return Z_OK; +} +#endif /* USE_MMAP */ + +/* =========================================================================== + * Uncompress input to output then close both files. + */ +void gz_uncompress(in, out) + gzFile in; + FILE *out; +{ + local char buf[BUFLEN]; + int len; + int err; + + for (;;) { + len = gzread(in, buf, sizeof(buf)); + if (len < 0) error (gzerror(in, &err)); + if (len == 0) break; + + if ((int)fwrite(buf, 1, (unsigned)len, out) != len) { + error("failed fwrite"); + } + } + if (fclose(out)) error("failed fclose"); + + if (gzclose(in) != Z_OK) error("failed gzclose"); +} + + +/* =========================================================================== + * Compress the given file: create a corresponding .gz file and remove the + * original. + */ +void file_compress(file, mode) + char *file; + char *mode; +{ + local char outfile[MAX_NAME_LEN]; + FILE *in; + gzFile out; + + if (strlen(file) + strlen(GZ_SUFFIX) >= sizeof(outfile)) { + fprintf(stderr, "%s: filename too long\n", prog); + exit(1); + } + +#if !defined(NO_snprintf) && !defined(NO_vsnprintf) + snprintf(outfile, sizeof(outfile), "%s%s", file, GZ_SUFFIX); +#else + strcpy(outfile, file); + strcat(outfile, GZ_SUFFIX); +#endif + + in = fopen(file, "rb"); + if (in == NULL) { + perror(file); + exit(1); + } + out = gzopen(outfile, mode); + if (out == NULL) { + fprintf(stderr, "%s: can't gzopen %s\n", prog, outfile); + exit(1); + } + gz_compress(in, out); + + unlink(file); +} + + +/* =========================================================================== + * Uncompress the given file and remove the original. 
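+ * For example (an illustrative name): file_uncompress("foo.gz") inflates the + * data into "foo" and then unlinks "foo.gz"; a name given without the suffix + * is handled by appending GZ_SUFFIX to build the input file name.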
+ */ +void file_uncompress(file) + char *file; +{ + local char buf[MAX_NAME_LEN]; + char *infile, *outfile; + FILE *out; + gzFile in; + size_t len = strlen(file); + + if (len + strlen(GZ_SUFFIX) >= sizeof(buf)) { + fprintf(stderr, "%s: filename too long\n", prog); + exit(1); + } + +#if !defined(NO_snprintf) && !defined(NO_vsnprintf) + snprintf(buf, sizeof(buf), "%s", file); +#else + strcpy(buf, file); +#endif + + if (len > SUFFIX_LEN && strcmp(file+len-SUFFIX_LEN, GZ_SUFFIX) == 0) { + infile = file; + outfile = buf; + outfile[len-3] = '\0'; + } else { + outfile = file; + infile = buf; +#if !defined(NO_snprintf) && !defined(NO_vsnprintf) + snprintf(buf + len, sizeof(buf) - len, "%s", GZ_SUFFIX); +#else + strcat(infile, GZ_SUFFIX); +#endif + } + in = gzopen(infile, "rb"); + if (in == NULL) { + fprintf(stderr, "%s: can't gzopen %s\n", prog, infile); + exit(1); + } + out = fopen(outfile, "wb"); + if (out == NULL) { + perror(file); + exit(1); + } + + gz_uncompress(in, out); + + unlink(infile); +} + + +/* =========================================================================== + * Usage: minigzip [-c] [-d] [-f] [-h] [-r] [-1 to -9] [files...] + * -c : write to standard output + * -d : decompress + * -f : compress with Z_FILTERED + * -h : compress with Z_HUFFMAN_ONLY + * -r : compress with Z_RLE + * -1 to -9 : compression level + */ + +int main(argc, argv) + int argc; + char *argv[]; +{ + int copyout = 0; + int uncompr = 0; + gzFile file; + char *bname, outmode[20]; + +#if !defined(NO_snprintf) && !defined(NO_vsnprintf) + snprintf(outmode, sizeof(outmode), "%s", "wb6 "); +#else + strcpy(outmode, "wb6 "); +#endif + + prog = argv[0]; + bname = strrchr(argv[0], '/'); + if (bname) + bname++; + else + bname = argv[0]; + argc--, argv++; + + if (!strcmp(bname, "gunzip")) + uncompr = 1; + else if (!strcmp(bname, "zcat")) + copyout = uncompr = 1; + + while (argc > 0) { + if (strcmp(*argv, "-c") == 0) + copyout = 1; + else if (strcmp(*argv, "-d") == 0) + uncompr = 1; + else if (strcmp(*argv, "-f") == 0) + outmode[3] = 'f'; + else if (strcmp(*argv, "-h") == 0) + outmode[3] = 'h'; + else if (strcmp(*argv, "-r") == 0) + outmode[3] = 'R'; + else if ((*argv)[0] == '-' && (*argv)[1] >= '1' && (*argv)[1] <= '9' && + (*argv)[2] == 0) + outmode[2] = (*argv)[1]; + else + break; + argc--, argv++; + } + if (outmode[3] == ' ') + outmode[3] = 0; + if (argc == 0) { + SET_BINARY_MODE(stdin); + SET_BINARY_MODE(stdout); + if (uncompr) { + file = gzdopen(fileno(stdin), "rb"); + if (file == NULL) error("can't gzdopen stdin"); + gz_uncompress(file, stdout); + } else { + file = gzdopen(fileno(stdout), outmode); + if (file == NULL) error("can't gzdopen stdout"); + gz_compress(stdin, file); + } + } else { + if (copyout) { + SET_BINARY_MODE(stdout); + } + do { + if (uncompr) { + if (copyout) { + file = gzopen(*argv, "rb"); + if (file == NULL) + fprintf(stderr, "%s: can't gzopen %s\n", prog, *argv); + else + gz_uncompress(file, stdout); + } else { + file_uncompress(*argv); + } + } else { + if (copyout) { + FILE * in = fopen(*argv, "rb"); + + if (in == NULL) { + perror(*argv); + } else { + file = gzdopen(fileno(stdout), outmode); + if (file == NULL) error("can't gzdopen stdout"); + + gz_compress(in, file); + } + + } else { + file_compress(*argv, outmode); + } + } + } while (argv++, --argc); + } + return 0; +} ADDED compat/zlib/treebuild.xml Index: compat/zlib/treebuild.xml ================================================================== --- compat/zlib/treebuild.xml +++ compat/zlib/treebuild.xml @@ -0,0 +1,116 @@ + + + + zip 
compression library + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + ADDED compat/zlib/trees.c Index: compat/zlib/trees.c ================================================================== --- compat/zlib/trees.c +++ compat/zlib/trees.c @@ -0,0 +1,1226 @@ +/* trees.c -- output deflated data using Huffman coding + * Copyright (C) 1995-2012 Jean-loup Gailly + * detect_data_type() function provided freely by Cosmin Truta, 2006 + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* + * ALGORITHM + * + * The "deflation" process uses several Huffman trees. The more + * common source values are represented by shorter bit sequences. + * + * Each code tree is stored in a compressed form which is itself + * a Huffman encoding of the lengths of all the code strings (in + * ascending order by source values). The actual code strings are + * reconstructed from the lengths in the inflate process, as described + * in the deflate specification. + * + * REFERENCES + * + * Deutsch, L.P.,"'Deflate' Compressed Data Format Specification". + * Available in ftp.uu.net:/pub/archiving/zip/doc/deflate-1.1.doc + * + * Storer, James A. + * Data Compression: Methods and Theory, pp. 49-50. + * Computer Science Press, 1988. ISBN 0-7167-8156-5. + * + * Sedgewick, R. + * Algorithms, p290. + * Addison-Wesley, 1983. ISBN 0-201-06672-6. + */ + +/* @(#) $Id$ */ + +/* #define GEN_TREES_H */ + +#include "deflate.h" + +#ifdef DEBUG +# include +#endif + +/* =========================================================================== + * Constants + */ + +#define MAX_BL_BITS 7 +/* Bit length codes must not exceed MAX_BL_BITS bits */ + +#define END_BLOCK 256 +/* end of block literal code */ + +#define REP_3_6 16 +/* repeat previous bit length 3-6 times (2 bits of repeat count) */ + +#define REPZ_3_10 17 +/* repeat a zero length 3-10 times (3 bits of repeat count) */ + +#define REPZ_11_138 18 +/* repeat a zero length 11-138 times (7 bits of repeat count) */ + +local const int extra_lbits[LENGTH_CODES] /* extra bits for each length code */ + = {0,0,0,0,0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,0}; + +local const int extra_dbits[D_CODES] /* extra bits for each distance code */ + = {0,0,0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,12,12,13,13}; + +local const int extra_blbits[BL_CODES]/* extra bits for each bit length code */ + = {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,3,7}; + +local const uch bl_order[BL_CODES] + = {16,17,18,0,8,7,9,6,10,5,11,4,12,3,13,2,14,1,15}; +/* The lengths of the bit length codes are sent in order of decreasing + * probability, to avoid transmitting the lengths for unused bit length codes. + */ + +/* =========================================================================== + * Local data. These are initialized only once. + */ + +#define DIST_CODE_LEN 512 /* see definition of array dist_code below */ + +#if defined(GEN_TREES_H) || !defined(STDC) +/* non ANSI compilers may not accept trees.h */ + +local ct_data static_ltree[L_CODES+2]; +/* The static literal tree. Since the bit lengths are imposed, there is no + * need for the L_CODES extra codes used during heap construction. However + * The codes 286 and 287 are needed to build a canonical tree (see _tr_init + * below). + */ + +local ct_data static_dtree[D_CODES]; +/* The static distance tree. (Actually a trivial tree since all codes use + * 5 bits.) 
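+ * For example, distance code n simply gets length 5 and code bi_reverse(n, 5) + * in tr_static_init(), so no heap construction is ever needed for it.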
+ */ + +uch _dist_code[DIST_CODE_LEN]; +/* Distance codes. The first 256 values correspond to the distances + * 3 .. 258, the last 256 values correspond to the top 8 bits of + * the 15 bit distances. + */ + +uch _length_code[MAX_MATCH-MIN_MATCH+1]; +/* length code for each normalized match length (0 == MIN_MATCH) */ + +local int base_length[LENGTH_CODES]; +/* First normalized length for each code (0 = MIN_MATCH) */ + +local int base_dist[D_CODES]; +/* First normalized distance for each code (0 = distance of 1) */ + +#else +# include "trees.h" +#endif /* GEN_TREES_H */ + +struct static_tree_desc_s { + const ct_data *static_tree; /* static tree or NULL */ + const intf *extra_bits; /* extra bits for each code or NULL */ + int extra_base; /* base index for extra_bits */ + int elems; /* max number of elements in the tree */ + int max_length; /* max bit length for the codes */ +}; + +local static_tree_desc static_l_desc = +{static_ltree, extra_lbits, LITERALS+1, L_CODES, MAX_BITS}; + +local static_tree_desc static_d_desc = +{static_dtree, extra_dbits, 0, D_CODES, MAX_BITS}; + +local static_tree_desc static_bl_desc = +{(const ct_data *)0, extra_blbits, 0, BL_CODES, MAX_BL_BITS}; + +/* =========================================================================== + * Local (static) routines in this file. + */ + +local void tr_static_init OF((void)); +local void init_block OF((deflate_state *s)); +local void pqdownheap OF((deflate_state *s, ct_data *tree, int k)); +local void gen_bitlen OF((deflate_state *s, tree_desc *desc)); +local void gen_codes OF((ct_data *tree, int max_code, ushf *bl_count)); +local void build_tree OF((deflate_state *s, tree_desc *desc)); +local void scan_tree OF((deflate_state *s, ct_data *tree, int max_code)); +local void send_tree OF((deflate_state *s, ct_data *tree, int max_code)); +local int build_bl_tree OF((deflate_state *s)); +local void send_all_trees OF((deflate_state *s, int lcodes, int dcodes, + int blcodes)); +local void compress_block OF((deflate_state *s, const ct_data *ltree, + const ct_data *dtree)); +local int detect_data_type OF((deflate_state *s)); +local unsigned bi_reverse OF((unsigned value, int length)); +local void bi_windup OF((deflate_state *s)); +local void bi_flush OF((deflate_state *s)); +local void copy_block OF((deflate_state *s, charf *buf, unsigned len, + int header)); + +#ifdef GEN_TREES_H +local void gen_trees_header OF((void)); +#endif + +#ifndef DEBUG +# define send_code(s, c, tree) send_bits(s, tree[c].Code, tree[c].Len) + /* Send a code of the given tree. c and tree must not have side effects */ + +#else /* DEBUG */ +# define send_code(s, c, tree) \ + { if (z_verbose>2) fprintf(stderr,"\ncd %3d ",(c)); \ + send_bits(s, tree[c].Code, tree[c].Len); } +#endif + +/* =========================================================================== + * Output a short LSB first on the stream. + * IN assertion: there is enough room in pendingBuf. + */ +#define put_short(s, w) { \ + put_byte(s, (uch)((w) & 0xff)); \ + put_byte(s, (uch)((ush)(w) >> 8)); \ +} + +/* =========================================================================== + * Send a value on a given number of bits. + * IN assertion: length <= 16 and value fits in length bits. 
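+ * For example (illustrative values): with Buf_size == 16, bi_valid == 14 and + * length == 5, the low 2 bits of value fill bi_buf, the full 16-bit buffer is + * flushed with put_short(), and the remaining 3 bits stay in bi_buf with + * bi_valid becoming 3.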
+ */ +#ifdef DEBUG +local void send_bits OF((deflate_state *s, int value, int length)); + +local void send_bits(s, value, length) + deflate_state *s; + int value; /* value to send */ + int length; /* number of bits */ +{ + Tracevv((stderr," l %2d v %4x ", length, value)); + Assert(length > 0 && length <= 15, "invalid length"); + s->bits_sent += (ulg)length; + + /* If not enough room in bi_buf, use (valid) bits from bi_buf and + * (16 - bi_valid) bits from value, leaving (width - (16-bi_valid)) + * unused bits in value. + */ + if (s->bi_valid > (int)Buf_size - length) { + s->bi_buf |= (ush)value << s->bi_valid; + put_short(s, s->bi_buf); + s->bi_buf = (ush)value >> (Buf_size - s->bi_valid); + s->bi_valid += length - Buf_size; + } else { + s->bi_buf |= (ush)value << s->bi_valid; + s->bi_valid += length; + } +} +#else /* !DEBUG */ + +#define send_bits(s, value, length) \ +{ int len = length;\ + if (s->bi_valid > (int)Buf_size - len) {\ + int val = value;\ + s->bi_buf |= (ush)val << s->bi_valid;\ + put_short(s, s->bi_buf);\ + s->bi_buf = (ush)val >> (Buf_size - s->bi_valid);\ + s->bi_valid += len - Buf_size;\ + } else {\ + s->bi_buf |= (ush)(value) << s->bi_valid;\ + s->bi_valid += len;\ + }\ +} +#endif /* DEBUG */ + + +/* the arguments must not have side effects */ + +/* =========================================================================== + * Initialize the various 'constant' tables. + */ +local void tr_static_init() +{ +#if defined(GEN_TREES_H) || !defined(STDC) + static int static_init_done = 0; + int n; /* iterates over tree elements */ + int bits; /* bit counter */ + int length; /* length value */ + int code; /* code value */ + int dist; /* distance index */ + ush bl_count[MAX_BITS+1]; + /* number of codes at each bit length for an optimal tree */ + + if (static_init_done) return; + + /* For some embedded targets, global variables are not initialized: */ +#ifdef NO_INIT_GLOBAL_POINTERS + static_l_desc.static_tree = static_ltree; + static_l_desc.extra_bits = extra_lbits; + static_d_desc.static_tree = static_dtree; + static_d_desc.extra_bits = extra_dbits; + static_bl_desc.extra_bits = extra_blbits; +#endif + + /* Initialize the mapping length (0..255) -> length code (0..28) */ + length = 0; + for (code = 0; code < LENGTH_CODES-1; code++) { + base_length[code] = length; + for (n = 0; n < (1<<extra_lbits[code]); n++) { + _length_code[length++] = (uch)code; + } + } + Assert (length == 256, "tr_static_init: length != 256"); + /* Note that the length 255 (match length 258) can be represented + * in two different ways: code 284 + 5 bits or code 285, so we + * overwrite length_code[255] to use the best encoding: + */ + _length_code[length-1] = (uch)code; + + /* Initialize the mapping dist (0..32K) -> dist code (0..29) */ + dist = 0; + for (code = 0 ; code < 16; code++) { + base_dist[code] = dist; + for (n = 0; n < (1<<extra_dbits[code]); n++) { + _dist_code[dist++] = (uch)code; + } + } + Assert (dist == 256, "tr_static_init: dist != 256"); + dist >>= 7; /* from now on, all distances are divided by 128 */ + for ( ; code < D_CODES; code++) { + base_dist[code] = dist << 7; + for (n = 0; n < (1<<(extra_dbits[code]-7)); n++) { + _dist_code[256 + dist++] = (uch)code; + } + } + Assert (dist == 256, "tr_static_init: 256+dist != 512"); + + /* Construct the codes of the static literal tree */ + for (bits = 0; bits <= MAX_BITS; bits++) bl_count[bits] = 0; + n = 0; + while (n <= 143) static_ltree[n++].Len = 8, bl_count[8]++; + while (n <= 255) static_ltree[n++].Len = 9, bl_count[9]++; + while (n <= 279) static_ltree[n++].Len = 7, bl_count[7]++; + while (n <= 287) static_ltree[n++].Len = 8, bl_count[8]++; + /* Codes 286 and 287 do not exist, but we must include them in the + * tree construction to get a canonical Huffman tree (longest code + * all ones) + */ + gen_codes((ct_data *)static_ltree, L_CODES+1, bl_count); + + /* The static distance tree is trivial: */ + for (n = 0; n < D_CODES; n++) { + static_dtree[n].Len = 5; + static_dtree[n].Code = bi_reverse((unsigned)n, 5); + } + static_init_done = 1; + +# ifdef GEN_TREES_H +
gen_trees_header(); +# endif +#endif /* defined(GEN_TREES_H) || !defined(STDC) */ +} + +/* =========================================================================== + * Genererate the file trees.h describing the static trees. + */ +#ifdef GEN_TREES_H +# ifndef DEBUG +# include +# endif + +# define SEPARATOR(i, last, width) \ + ((i) == (last)? "\n};\n\n" : \ + ((i) % (width) == (width)-1 ? ",\n" : ", ")) + +void gen_trees_header() +{ + FILE *header = fopen("trees.h", "w"); + int i; + + Assert (header != NULL, "Can't open trees.h"); + fprintf(header, + "/* header created automatically with -DGEN_TREES_H */\n\n"); + + fprintf(header, "local const ct_data static_ltree[L_CODES+2] = {\n"); + for (i = 0; i < L_CODES+2; i++) { + fprintf(header, "{{%3u},{%3u}}%s", static_ltree[i].Code, + static_ltree[i].Len, SEPARATOR(i, L_CODES+1, 5)); + } + + fprintf(header, "local const ct_data static_dtree[D_CODES] = {\n"); + for (i = 0; i < D_CODES; i++) { + fprintf(header, "{{%2u},{%2u}}%s", static_dtree[i].Code, + static_dtree[i].Len, SEPARATOR(i, D_CODES-1, 5)); + } + + fprintf(header, "const uch ZLIB_INTERNAL _dist_code[DIST_CODE_LEN] = {\n"); + for (i = 0; i < DIST_CODE_LEN; i++) { + fprintf(header, "%2u%s", _dist_code[i], + SEPARATOR(i, DIST_CODE_LEN-1, 20)); + } + + fprintf(header, + "const uch ZLIB_INTERNAL _length_code[MAX_MATCH-MIN_MATCH+1]= {\n"); + for (i = 0; i < MAX_MATCH-MIN_MATCH+1; i++) { + fprintf(header, "%2u%s", _length_code[i], + SEPARATOR(i, MAX_MATCH-MIN_MATCH, 20)); + } + + fprintf(header, "local const int base_length[LENGTH_CODES] = {\n"); + for (i = 0; i < LENGTH_CODES; i++) { + fprintf(header, "%1u%s", base_length[i], + SEPARATOR(i, LENGTH_CODES-1, 20)); + } + + fprintf(header, "local const int base_dist[D_CODES] = {\n"); + for (i = 0; i < D_CODES; i++) { + fprintf(header, "%5u%s", base_dist[i], + SEPARATOR(i, D_CODES-1, 10)); + } + + fclose(header); +} +#endif /* GEN_TREES_H */ + +/* =========================================================================== + * Initialize the tree data structures for a new zlib stream. + */ +void ZLIB_INTERNAL _tr_init(s) + deflate_state *s; +{ + tr_static_init(); + + s->l_desc.dyn_tree = s->dyn_ltree; + s->l_desc.stat_desc = &static_l_desc; + + s->d_desc.dyn_tree = s->dyn_dtree; + s->d_desc.stat_desc = &static_d_desc; + + s->bl_desc.dyn_tree = s->bl_tree; + s->bl_desc.stat_desc = &static_bl_desc; + + s->bi_buf = 0; + s->bi_valid = 0; +#ifdef DEBUG + s->compressed_len = 0L; + s->bits_sent = 0L; +#endif + + /* Initialize the first block of the first file: */ + init_block(s); +} + +/* =========================================================================== + * Initialize a new block. + */ +local void init_block(s) + deflate_state *s; +{ + int n; /* iterates over tree elements */ + + /* Initialize the trees. */ + for (n = 0; n < L_CODES; n++) s->dyn_ltree[n].Freq = 0; + for (n = 0; n < D_CODES; n++) s->dyn_dtree[n].Freq = 0; + for (n = 0; n < BL_CODES; n++) s->bl_tree[n].Freq = 0; + + s->dyn_ltree[END_BLOCK].Freq = 1; + s->opt_len = s->static_len = 0L; + s->last_lit = s->matches = 0; +} + +#define SMALLEST 1 +/* Index within the heap array of least frequent node in the Huffman tree */ + + +/* =========================================================================== + * Remove the smallest element from the heap and recreate the heap with + * one less element. Updates heap and heap_len. 
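+ * For example (illustrative frequencies): with nodes of frequency 2, 5 and 9 + * in the heap, pqremove() returns the frequency-2 node in 'top', moves the + * frequency-9 node to the root, and pqdownheap() then sinks it below the + * frequency-5 node, leaving heap_len one smaller.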
+ */ +#define pqremove(s, tree, top) \ +{\ + top = s->heap[SMALLEST]; \ + s->heap[SMALLEST] = s->heap[s->heap_len--]; \ + pqdownheap(s, tree, SMALLEST); \ +} + +/* =========================================================================== + * Compares to subtrees, using the tree depth as tie breaker when + * the subtrees have equal frequency. This minimizes the worst case length. + */ +#define smaller(tree, n, m, depth) \ + (tree[n].Freq < tree[m].Freq || \ + (tree[n].Freq == tree[m].Freq && depth[n] <= depth[m])) + +/* =========================================================================== + * Restore the heap property by moving down the tree starting at node k, + * exchanging a node with the smallest of its two sons if necessary, stopping + * when the heap property is re-established (each father smaller than its + * two sons). + */ +local void pqdownheap(s, tree, k) + deflate_state *s; + ct_data *tree; /* the tree to restore */ + int k; /* node to move down */ +{ + int v = s->heap[k]; + int j = k << 1; /* left son of k */ + while (j <= s->heap_len) { + /* Set j to the smallest of the two sons: */ + if (j < s->heap_len && + smaller(tree, s->heap[j+1], s->heap[j], s->depth)) { + j++; + } + /* Exit if v is smaller than both sons */ + if (smaller(tree, v, s->heap[j], s->depth)) break; + + /* Exchange v with the smallest son */ + s->heap[k] = s->heap[j]; k = j; + + /* And continue down the tree, setting j to the left son of k */ + j <<= 1; + } + s->heap[k] = v; +} + +/* =========================================================================== + * Compute the optimal bit lengths for a tree and update the total bit length + * for the current block. + * IN assertion: the fields freq and dad are set, heap[heap_max] and + * above are the tree nodes sorted by increasing frequency. + * OUT assertions: the field len is set to the optimal bit length, the + * array bl_count contains the frequencies for each bit length. + * The length opt_len is updated; static_len is also updated if stree is + * not null. + */ +local void gen_bitlen(s, desc) + deflate_state *s; + tree_desc *desc; /* the tree descriptor */ +{ + ct_data *tree = desc->dyn_tree; + int max_code = desc->max_code; + const ct_data *stree = desc->stat_desc->static_tree; + const intf *extra = desc->stat_desc->extra_bits; + int base = desc->stat_desc->extra_base; + int max_length = desc->stat_desc->max_length; + int h; /* heap index */ + int n, m; /* iterate over the tree elements */ + int bits; /* bit length */ + int xbits; /* extra bits */ + ush f; /* frequency */ + int overflow = 0; /* number of elements with bit length too large */ + + for (bits = 0; bits <= MAX_BITS; bits++) s->bl_count[bits] = 0; + + /* In a first pass, compute the optimal bit lengths (which may + * overflow in the case of the bit length tree). 
+ */ + tree[s->heap[s->heap_max]].Len = 0; /* root of the heap */ + + for (h = s->heap_max+1; h < HEAP_SIZE; h++) { + n = s->heap[h]; + bits = tree[tree[n].Dad].Len + 1; + if (bits > max_length) bits = max_length, overflow++; + tree[n].Len = (ush)bits; + /* We overwrite tree[n].Dad which is no longer needed */ + + if (n > max_code) continue; /* not a leaf node */ + + s->bl_count[bits]++; + xbits = 0; + if (n >= base) xbits = extra[n-base]; + f = tree[n].Freq; + s->opt_len += (ulg)f * (bits + xbits); + if (stree) s->static_len += (ulg)f * (stree[n].Len + xbits); + } + if (overflow == 0) return; + + Trace((stderr,"\nbit length overflow\n")); + /* This happens for example on obj2 and pic of the Calgary corpus */ + + /* Find the first bit length which could increase: */ + do { + bits = max_length-1; + while (s->bl_count[bits] == 0) bits--; + s->bl_count[bits]--; /* move one leaf down the tree */ + s->bl_count[bits+1] += 2; /* move one overflow item as its brother */ + s->bl_count[max_length]--; + /* The brother of the overflow item also moves one step up, + * but this does not affect bl_count[max_length] + */ + overflow -= 2; + } while (overflow > 0); + + /* Now recompute all bit lengths, scanning in increasing frequency. + * h is still equal to HEAP_SIZE. (It is simpler to reconstruct all + * lengths instead of fixing only the wrong ones. This idea is taken + * from 'ar' written by Haruhiko Okumura.) + */ + for (bits = max_length; bits != 0; bits--) { + n = s->bl_count[bits]; + while (n != 0) { + m = s->heap[--h]; + if (m > max_code) continue; + if ((unsigned) tree[m].Len != (unsigned) bits) { + Trace((stderr,"code %d bits %d->%d\n", m, tree[m].Len, bits)); + s->opt_len += ((long)bits - (long)tree[m].Len) + *(long)tree[m].Freq; + tree[m].Len = (ush)bits; + } + n--; + } + } +} + +/* =========================================================================== + * Generate the codes for a given tree and bit counts (which need not be + * optimal). + * IN assertion: the array bl_count contains the bit length statistics for + * the given tree and the field len is set for all tree elements. + * OUT assertion: the field code is set for all tree elements of non + * zero code length. + */ +local void gen_codes (tree, max_code, bl_count) + ct_data *tree; /* the tree to decorate */ + int max_code; /* largest code with non zero frequency */ + ushf *bl_count; /* number of codes at each bit length */ +{ + ush next_code[MAX_BITS+1]; /* next code value for each bit length */ + ush code = 0; /* running code value */ + int bits; /* bit index */ + int n; /* code index */ + + /* The distribution counts are first used to generate the code values + * without bit reversal. + */ + for (bits = 1; bits <= MAX_BITS; bits++) { + next_code[bits] = code = (code + bl_count[bits-1]) << 1; + } + /* Check that the bit counts in bl_count are consistent. The last code + * must be all ones. + */ + Assert (code + bl_count[MAX_BITS]-1 == (1<<MAX_BITS)-1, + "inconsistent bit counts"); + Tracev((stderr,"\ngen_codes: max_code %d ", max_code)); + + for (n = 0; n <= max_code; n++) { + int len = tree[n].Len; + if (len == 0) continue; + /* Now reverse the bits */ + tree[n].Code = bi_reverse(next_code[len]++, len); + + Tracecv(tree != static_ltree, (stderr,"\nn %3d %c l %2d c %4x (%x) ", + n, (isgraph(n) ? n : ' '), len, tree[n].Code, next_code[len]-1)); + } +} + +/* =========================================================================== + * Construct one Huffman tree and assigns the code bit strings and lengths. + * Update the total bit length for the current block. + * IN assertion: the field freq is set for all tree elements. + * OUT assertions: the fields len and code are set to the optimal bit length + * and corresponding code. The length opt_len is updated; static_len is + * also updated if stree is not null. The field max_code is set. + */ +local void build_tree(s, desc) + deflate_state *s; + tree_desc *desc; /* the tree descriptor */ +{ + ct_data *tree = desc->dyn_tree; + const ct_data *stree = desc->stat_desc->static_tree; + int elems = desc->stat_desc->elems; + int n, m; /* iterate over heap elements */ + int max_code = -1; /* largest code with non zero frequency */ + int node; /* new node being created */ + + /* Construct the initial heap, with least frequent element in + * heap[SMALLEST]. The sons of heap[n] are heap[2*n] and heap[2*n+1]. + * heap[0] is not used.
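+ * For example, the sons of heap[2] are heap[4] and heap[5], and its father is + * heap[1]; heap[SMALLEST] (SMALLEST == 1) always holds the least frequent node.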
+ */ + s->heap_len = 0, s->heap_max = HEAP_SIZE; + + for (n = 0; n < elems; n++) { + if (tree[n].Freq != 0) { + s->heap[++(s->heap_len)] = max_code = n; + s->depth[n] = 0; + } else { + tree[n].Len = 0; + } + } + + /* The pkzip format requires that at least one distance code exists, + * and that at least one bit should be sent even if there is only one + * possible code. So to avoid special checks later on we force at least + * two codes of non zero frequency. + */ + while (s->heap_len < 2) { + node = s->heap[++(s->heap_len)] = (max_code < 2 ? ++max_code : 0); + tree[node].Freq = 1; + s->depth[node] = 0; + s->opt_len--; if (stree) s->static_len -= stree[node].Len; + /* node is 0 or 1 so it does not have extra bits */ + } + desc->max_code = max_code; + + /* The elements heap[heap_len/2+1 .. heap_len] are leaves of the tree, + * establish sub-heaps of increasing lengths: + */ + for (n = s->heap_len/2; n >= 1; n--) pqdownheap(s, tree, n); + + /* Construct the Huffman tree by repeatedly combining the least two + * frequent nodes. + */ + node = elems; /* next internal node of the tree */ + do { + pqremove(s, tree, n); /* n = node of least frequency */ + m = s->heap[SMALLEST]; /* m = node of next least frequency */ + + s->heap[--(s->heap_max)] = n; /* keep the nodes sorted by frequency */ + s->heap[--(s->heap_max)] = m; + + /* Create a new node father of n and m */ + tree[node].Freq = tree[n].Freq + tree[m].Freq; + s->depth[node] = (uch)((s->depth[n] >= s->depth[m] ? + s->depth[n] : s->depth[m]) + 1); + tree[n].Dad = tree[m].Dad = (ush)node; +#ifdef DUMP_BL_TREE + if (tree == s->bl_tree) { + fprintf(stderr,"\nnode %d(%d), sons %d(%d) %d(%d)", + node, tree[node].Freq, n, tree[n].Freq, m, tree[m].Freq); + } +#endif + /* and insert the new node in the heap */ + s->heap[SMALLEST] = node++; + pqdownheap(s, tree, SMALLEST); + + } while (s->heap_len >= 2); + + s->heap[--(s->heap_max)] = s->heap[SMALLEST]; + + /* At this point, the fields freq and dad are set. We can now + * generate the bit lengths. + */ + gen_bitlen(s, (tree_desc *)desc); + + /* The field len is now set, we can generate the bit codes */ + gen_codes ((ct_data *)tree, max_code, s->bl_count); +} + +/* =========================================================================== + * Scan a literal or distance tree to determine the frequencies of the codes + * in the bit length tree. 
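+ * For example (illustrative): a run of three zero lengths is tallied as a + * single REPZ_3_10 code and a run of 138 zeros as a single REPZ_11_138 code, + * rather than as individual length entries.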
+ */ +local void scan_tree (s, tree, max_code) + deflate_state *s; + ct_data *tree; /* the tree to be scanned */ + int max_code; /* and its largest code of non zero frequency */ +{ + int n; /* iterates over all tree elements */ + int prevlen = -1; /* last emitted length */ + int curlen; /* length of current code */ + int nextlen = tree[0].Len; /* length of next code */ + int count = 0; /* repeat count of the current code */ + int max_count = 7; /* max repeat count */ + int min_count = 4; /* min repeat count */ + + if (nextlen == 0) max_count = 138, min_count = 3; + tree[max_code+1].Len = (ush)0xffff; /* guard */ + + for (n = 0; n <= max_code; n++) { + curlen = nextlen; nextlen = tree[n+1].Len; + if (++count < max_count && curlen == nextlen) { + continue; + } else if (count < min_count) { + s->bl_tree[curlen].Freq += count; + } else if (curlen != 0) { + if (curlen != prevlen) s->bl_tree[curlen].Freq++; + s->bl_tree[REP_3_6].Freq++; + } else if (count <= 10) { + s->bl_tree[REPZ_3_10].Freq++; + } else { + s->bl_tree[REPZ_11_138].Freq++; + } + count = 0; prevlen = curlen; + if (nextlen == 0) { + max_count = 138, min_count = 3; + } else if (curlen == nextlen) { + max_count = 6, min_count = 3; + } else { + max_count = 7, min_count = 4; + } + } +} + +/* =========================================================================== + * Send a literal or distance tree in compressed form, using the codes in + * bl_tree. + */ +local void send_tree (s, tree, max_code) + deflate_state *s; + ct_data *tree; /* the tree to be scanned */ + int max_code; /* and its largest code of non zero frequency */ +{ + int n; /* iterates over all tree elements */ + int prevlen = -1; /* last emitted length */ + int curlen; /* length of current code */ + int nextlen = tree[0].Len; /* length of next code */ + int count = 0; /* repeat count of the current code */ + int max_count = 7; /* max repeat count */ + int min_count = 4; /* min repeat count */ + + /* tree[max_code+1].Len = -1; */ /* guard already set */ + if (nextlen == 0) max_count = 138, min_count = 3; + + for (n = 0; n <= max_code; n++) { + curlen = nextlen; nextlen = tree[n+1].Len; + if (++count < max_count && curlen == nextlen) { + continue; + } else if (count < min_count) { + do { send_code(s, curlen, s->bl_tree); } while (--count != 0); + + } else if (curlen != 0) { + if (curlen != prevlen) { + send_code(s, curlen, s->bl_tree); count--; + } + Assert(count >= 3 && count <= 6, " 3_6?"); + send_code(s, REP_3_6, s->bl_tree); send_bits(s, count-3, 2); + + } else if (count <= 10) { + send_code(s, REPZ_3_10, s->bl_tree); send_bits(s, count-3, 3); + + } else { + send_code(s, REPZ_11_138, s->bl_tree); send_bits(s, count-11, 7); + } + count = 0; prevlen = curlen; + if (nextlen == 0) { + max_count = 138, min_count = 3; + } else if (curlen == nextlen) { + max_count = 6, min_count = 3; + } else { + max_count = 7, min_count = 4; + } + } +} + +/* =========================================================================== + * Construct the Huffman tree for the bit lengths and return the index in + * bl_order of the last bit length code to send. 
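+ * For example (illustrative): if only the bit length codes 16, 17, 18, 0, 8 + * and 7 occur, they are the first six entries of bl_order, max_blindex comes + * back as 5, and only six of the nineteen 3-bit length fields are sent.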
+ */ +local int build_bl_tree(s) + deflate_state *s; +{ + int max_blindex; /* index of last bit length code of non zero freq */ + + /* Determine the bit length frequencies for literal and distance trees */ + scan_tree(s, (ct_data *)s->dyn_ltree, s->l_desc.max_code); + scan_tree(s, (ct_data *)s->dyn_dtree, s->d_desc.max_code); + + /* Build the bit length tree: */ + build_tree(s, (tree_desc *)(&(s->bl_desc))); + /* opt_len now includes the length of the tree representations, except + * the lengths of the bit lengths codes and the 5+5+4 bits for the counts. + */ + + /* Determine the number of bit length codes to send. The pkzip format + * requires that at least 4 bit length codes be sent. (appnote.txt says + * 3 but the actual value used is 4.) + */ + for (max_blindex = BL_CODES-1; max_blindex >= 3; max_blindex--) { + if (s->bl_tree[bl_order[max_blindex]].Len != 0) break; + } + /* Update opt_len to include the bit length tree and counts */ + s->opt_len += 3*(max_blindex+1) + 5+5+4; + Tracev((stderr, "\ndyn trees: dyn %ld, stat %ld", + s->opt_len, s->static_len)); + + return max_blindex; +} + +/* =========================================================================== + * Send the header for a block using dynamic Huffman trees: the counts, the + * lengths of the bit length codes, the literal tree and the distance tree. + * IN assertion: lcodes >= 257, dcodes >= 1, blcodes >= 4. + */ +local void send_all_trees(s, lcodes, dcodes, blcodes) + deflate_state *s; + int lcodes, dcodes, blcodes; /* number of codes for each tree */ +{ + int rank; /* index in bl_order */ + + Assert (lcodes >= 257 && dcodes >= 1 && blcodes >= 4, "not enough codes"); + Assert (lcodes <= L_CODES && dcodes <= D_CODES && blcodes <= BL_CODES, + "too many codes"); + Tracev((stderr, "\nbl counts: ")); + send_bits(s, lcodes-257, 5); /* not +255 as stated in appnote.txt */ + send_bits(s, dcodes-1, 5); + send_bits(s, blcodes-4, 4); /* not -3 as stated in appnote.txt */ + for (rank = 0; rank < blcodes; rank++) { + Tracev((stderr, "\nbl code %2d ", bl_order[rank])); + send_bits(s, s->bl_tree[bl_order[rank]].Len, 3); + } + Tracev((stderr, "\nbl tree: sent %ld", s->bits_sent)); + + send_tree(s, (ct_data *)s->dyn_ltree, lcodes-1); /* literal tree */ + Tracev((stderr, "\nlit tree: sent %ld", s->bits_sent)); + + send_tree(s, (ct_data *)s->dyn_dtree, dcodes-1); /* distance tree */ + Tracev((stderr, "\ndist tree: sent %ld", s->bits_sent)); +} + +/* =========================================================================== + * Send a stored block + */ +void ZLIB_INTERNAL _tr_stored_block(s, buf, stored_len, last) + deflate_state *s; + charf *buf; /* input block */ + ulg stored_len; /* length of input block */ + int last; /* one if this is the last block for a file */ +{ + send_bits(s, (STORED_BLOCK<<1)+last, 3); /* send block type */ +#ifdef DEBUG + s->compressed_len = (s->compressed_len + 3 + 7) & (ulg)~7L; + s->compressed_len += (stored_len + 4) << 3; +#endif + copy_block(s, buf, (unsigned)stored_len, 1); /* with header */ +} + +/* =========================================================================== + * Flush the bits in the bit buffer to pending output (leaves at most 7 bits) + */ +void ZLIB_INTERNAL _tr_flush_bits(s) + deflate_state *s; +{ + bi_flush(s); +} + +/* =========================================================================== + * Send one empty static block to give enough lookahead for inflate. + * This takes 10 bits, of which 7 may remain in the bit buffer. 
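+ * For example: the 10 bits are the 3-bit block header STATIC_TREES<<1 (with + * the 'last block' bit clear) followed by the 7-bit static code for END_BLOCK; + * bi_flush() then leaves at most 7 of them pending in the bit buffer.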
+ */ +void ZLIB_INTERNAL _tr_align(s) + deflate_state *s; +{ + send_bits(s, STATIC_TREES<<1, 3); + send_code(s, END_BLOCK, static_ltree); +#ifdef DEBUG + s->compressed_len += 10L; /* 3 for block type, 7 for EOB */ +#endif + bi_flush(s); +} + +/* =========================================================================== + * Determine the best encoding for the current block: dynamic trees, static + * trees or store, and output the encoded block to the zip file. + */ +void ZLIB_INTERNAL _tr_flush_block(s, buf, stored_len, last) + deflate_state *s; + charf *buf; /* input block, or NULL if too old */ + ulg stored_len; /* length of input block */ + int last; /* one if this is the last block for a file */ +{ + ulg opt_lenb, static_lenb; /* opt_len and static_len in bytes */ + int max_blindex = 0; /* index of last bit length code of non zero freq */ + + /* Build the Huffman trees unless a stored block is forced */ + if (s->level > 0) { + + /* Check if the file is binary or text */ + if (s->strm->data_type == Z_UNKNOWN) + s->strm->data_type = detect_data_type(s); + + /* Construct the literal and distance trees */ + build_tree(s, (tree_desc *)(&(s->l_desc))); + Tracev((stderr, "\nlit data: dyn %ld, stat %ld", s->opt_len, + s->static_len)); + + build_tree(s, (tree_desc *)(&(s->d_desc))); + Tracev((stderr, "\ndist data: dyn %ld, stat %ld", s->opt_len, + s->static_len)); + /* At this point, opt_len and static_len are the total bit lengths of + * the compressed block data, excluding the tree representations. + */ + + /* Build the bit length tree for the above two trees, and get the index + * in bl_order of the last bit length code to send. + */ + max_blindex = build_bl_tree(s); + + /* Determine the best encoding. Compute the block lengths in bytes. */ + opt_lenb = (s->opt_len+3+7)>>3; + static_lenb = (s->static_len+3+7)>>3; + + Tracev((stderr, "\nopt %lu(%lu) stat %lu(%lu) stored %lu lit %u ", + opt_lenb, s->opt_len, static_lenb, s->static_len, stored_len, + s->last_lit)); + + if (static_lenb <= opt_lenb) opt_lenb = static_lenb; + + } else { + Assert(buf != (char*)0, "lost buf"); + opt_lenb = static_lenb = stored_len + 5; /* force a stored block */ + } + +#ifdef FORCE_STORED + if (buf != (char*)0) { /* force stored block */ +#else + if (stored_len+4 <= opt_lenb && buf != (char*)0) { + /* 4: two words for the lengths */ +#endif + /* The test buf != NULL is only necessary if LIT_BUFSIZE > WSIZE. + * Otherwise we can't have processed more than WSIZE input bytes since + * the last block flush, because compression would have been + * successful. If LIT_BUFSIZE <= WSIZE, it is never too late to + * transform a block into a stored block. + */ + _tr_stored_block(s, buf, stored_len, last); + +#ifdef FORCE_STATIC + } else if (static_lenb >= 0) { /* force static trees */ +#else + } else if (s->strategy == Z_FIXED || static_lenb == opt_lenb) { +#endif + send_bits(s, (STATIC_TREES<<1)+last, 3); + compress_block(s, (const ct_data *)static_ltree, + (const ct_data *)static_dtree); +#ifdef DEBUG + s->compressed_len += 3 + s->static_len; +#endif + } else { + send_bits(s, (DYN_TREES<<1)+last, 3); + send_all_trees(s, s->l_desc.max_code+1, s->d_desc.max_code+1, + max_blindex+1); + compress_block(s, (const ct_data *)s->dyn_ltree, + (const ct_data *)s->dyn_dtree); +#ifdef DEBUG + s->compressed_len += 3 + s->opt_len; +#endif + } + Assert (s->compressed_len == s->bits_sent, "bad compressed size"); + /* The above check is made mod 2^32, for files larger than 512 MB + * and uLong implemented on 32 bits. 
+ */ + init_block(s); + + if (last) { + bi_windup(s); +#ifdef DEBUG + s->compressed_len += 7; /* align on byte boundary */ +#endif + } + Tracev((stderr,"\ncomprlen %lu(%lu) ", s->compressed_len>>3, + s->compressed_len-7*last)); +} + +/* =========================================================================== + * Save the match info and tally the frequency counts. Return true if + * the current block must be flushed. + */ +int ZLIB_INTERNAL _tr_tally (s, dist, lc) + deflate_state *s; + unsigned dist; /* distance of matched string */ + unsigned lc; /* match length-MIN_MATCH or unmatched char (if dist==0) */ +{ + s->d_buf[s->last_lit] = (ush)dist; + s->l_buf[s->last_lit++] = (uch)lc; + if (dist == 0) { + /* lc is the unmatched char */ + s->dyn_ltree[lc].Freq++; + } else { + s->matches++; + /* Here, lc is the match length - MIN_MATCH */ + dist--; /* dist = match distance - 1 */ + Assert((ush)dist < (ush)MAX_DIST(s) && + (ush)lc <= (ush)(MAX_MATCH-MIN_MATCH) && + (ush)d_code(dist) < (ush)D_CODES, "_tr_tally: bad match"); + + s->dyn_ltree[_length_code[lc]+LITERALS+1].Freq++; + s->dyn_dtree[d_code(dist)].Freq++; + } + +#ifdef TRUNCATE_BLOCK + /* Try to guess if it is profitable to stop the current block here */ + if ((s->last_lit & 0x1fff) == 0 && s->level > 2) { + /* Compute an upper bound for the compressed length */ + ulg out_length = (ulg)s->last_lit*8L; + ulg in_length = (ulg)((long)s->strstart - s->block_start); + int dcode; + for (dcode = 0; dcode < D_CODES; dcode++) { + out_length += (ulg)s->dyn_dtree[dcode].Freq * + (5L+extra_dbits[dcode]); + } + out_length >>= 3; + Tracev((stderr,"\nlast_lit %u, in %ld, out ~%ld(%ld%%) ", + s->last_lit, in_length, out_length, + 100L - out_length*100L/in_length)); + if (s->matches < s->last_lit/2 && out_length < in_length/2) return 1; + } +#endif + return (s->last_lit == s->lit_bufsize-1); + /* We avoid equality with lit_bufsize because of wraparound at 64K + * on 16 bit machines and because stored blocks are restricted to + * 64K-1 bytes. + */ +} + +/* =========================================================================== + * Send the block data compressed using the given Huffman trees + */ +local void compress_block(s, ltree, dtree) + deflate_state *s; + const ct_data *ltree; /* literal tree */ + const ct_data *dtree; /* distance tree */ +{ + unsigned dist; /* distance of matched string */ + int lc; /* match length or unmatched char (if dist == 0) */ + unsigned lx = 0; /* running index in l_buf */ + unsigned code; /* the code to send */ + int extra; /* number of extra bits to send */ + + if (s->last_lit != 0) do { + dist = s->d_buf[lx]; + lc = s->l_buf[lx++]; + if (dist == 0) { + send_code(s, lc, ltree); /* send a literal byte */ + Tracecv(isgraph(lc), (stderr," '%c' ", lc)); + } else { + /* Here, lc is the match length - MIN_MATCH */ + code = _length_code[lc]; + send_code(s, code+LITERALS+1, ltree); /* send the length code */ + extra = extra_lbits[code]; + if (extra != 0) { + lc -= base_length[code]; + send_bits(s, lc, extra); /* send the extra length bits */ + } + dist--; /* dist is now the match distance - 1 */ + code = d_code(dist); + Assert (code < D_CODES, "bad d_code"); + + send_code(s, code, dtree); /* send the distance code */ + extra = extra_dbits[code]; + if (extra != 0) { + dist -= base_dist[code]; + send_bits(s, dist, extra); /* send the extra distance bits */ + } + } /* literal or match pair ? 
*/ + + /* Check that the overlay between pending_buf and d_buf+l_buf is ok: */ + Assert((uInt)(s->pending) < s->lit_bufsize + 2*lx, + "pendingBuf overflow"); + + } while (lx < s->last_lit); + + send_code(s, END_BLOCK, ltree); +} + +/* =========================================================================== + * Check if the data type is TEXT or BINARY, using the following algorithm: + * - TEXT if the two conditions below are satisfied: + * a) There are no non-portable control characters belonging to the + * "black list" (0..6, 14..25, 28..31). + * b) There is at least one printable character belonging to the + * "white list" (9 {TAB}, 10 {LF}, 13 {CR}, 32..255). + * - BINARY otherwise. + * - The following partially-portable control characters form a + * "gray list" that is ignored in this detection algorithm: + * (7 {BEL}, 8 {BS}, 11 {VT}, 12 {FF}, 26 {SUB}, 27 {ESC}). + * IN assertion: the fields Freq of dyn_ltree are set. + */ +local int detect_data_type(s) + deflate_state *s; +{ + /* black_mask is the bit mask of black-listed bytes + * set bits 0..6, 14..25, and 28..31 + * 0xf3ffc07f = binary 11110011111111111100000001111111 + */ + unsigned long black_mask = 0xf3ffc07fUL; + int n; + + /* Check for non-textual ("black-listed") bytes. */ + for (n = 0; n <= 31; n++, black_mask >>= 1) + if ((black_mask & 1) && (s->dyn_ltree[n].Freq != 0)) + return Z_BINARY; + + /* Check for textual ("white-listed") bytes. */ + if (s->dyn_ltree[9].Freq != 0 || s->dyn_ltree[10].Freq != 0 + || s->dyn_ltree[13].Freq != 0) + return Z_TEXT; + for (n = 32; n < LITERALS; n++) + if (s->dyn_ltree[n].Freq != 0) + return Z_TEXT; + + /* There are no "black-listed" or "white-listed" bytes: + * this stream either is empty or has tolerated ("gray-listed") bytes only. + */ + return Z_BINARY; +} + +/* =========================================================================== + * Reverse the first len bits of a code, using straightforward code (a faster + * method would use a table) + * IN assertion: 1 <= len <= 15 + */ +local unsigned bi_reverse(code, len) + unsigned code; /* the value to invert */ + int len; /* its bit length */ +{ + register unsigned res = 0; + do { + res |= code & 1; + code >>= 1, res <<= 1; + } while (--len > 0); + return res >> 1; +} + +/* =========================================================================== + * Flush the bit buffer, keeping at most 7 bits in it. + */ +local void bi_flush(s) + deflate_state *s; +{ + if (s->bi_valid == 16) { + put_short(s, s->bi_buf); + s->bi_buf = 0; + s->bi_valid = 0; + } else if (s->bi_valid >= 8) { + put_byte(s, (Byte)s->bi_buf); + s->bi_buf >>= 8; + s->bi_valid -= 8; + } +} + +/* =========================================================================== + * Flush the bit buffer and align the output on a byte boundary + */ +local void bi_windup(s) + deflate_state *s; +{ + if (s->bi_valid > 8) { + put_short(s, s->bi_buf); + } else if (s->bi_valid > 0) { + put_byte(s, (Byte)s->bi_buf); + } + s->bi_buf = 0; + s->bi_valid = 0; +#ifdef DEBUG + s->bits_sent = (s->bits_sent+7) & ~7; +#endif +} + +/* =========================================================================== + * Copy a stored block, storing first the length and its + * one's complement if requested. 
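+ * For example (illustrative): a 16-byte stored block is emitted, after byte + * alignment, as the little-endian 16-bit length 0x0010, then its one's + * complement 0xffef, and then the 16 raw bytes.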
+ */ +local void copy_block(s, buf, len, header) + deflate_state *s; + charf *buf; /* the input data */ + unsigned len; /* its length */ + int header; /* true if block header must be written */ +{ + bi_windup(s); /* align on byte boundary */ + + if (header) { + put_short(s, (ush)len); + put_short(s, (ush)~len); +#ifdef DEBUG + s->bits_sent += 2*16; +#endif + } +#ifdef DEBUG + s->bits_sent += (ulg)len<<3; +#endif + while (len--) { + put_byte(s, *buf++); + } +} ADDED compat/zlib/trees.h Index: compat/zlib/trees.h ================================================================== --- compat/zlib/trees.h +++ compat/zlib/trees.h @@ -0,0 +1,128 @@ +/* header created automatically with -DGEN_TREES_H */ + +local const ct_data static_ltree[L_CODES+2] = { +{{ 12},{ 8}}, {{140},{ 8}}, {{ 76},{ 8}}, {{204},{ 8}}, {{ 44},{ 8}}, +{{172},{ 8}}, {{108},{ 8}}, {{236},{ 8}}, {{ 28},{ 8}}, {{156},{ 8}}, +{{ 92},{ 8}}, {{220},{ 8}}, {{ 60},{ 8}}, {{188},{ 8}}, {{124},{ 8}}, +{{252},{ 8}}, {{ 2},{ 8}}, {{130},{ 8}}, {{ 66},{ 8}}, {{194},{ 8}}, +{{ 34},{ 8}}, {{162},{ 8}}, {{ 98},{ 8}}, {{226},{ 8}}, {{ 18},{ 8}}, +{{146},{ 8}}, {{ 82},{ 8}}, {{210},{ 8}}, {{ 50},{ 8}}, {{178},{ 8}}, +{{114},{ 8}}, {{242},{ 8}}, {{ 10},{ 8}}, {{138},{ 8}}, {{ 74},{ 8}}, +{{202},{ 8}}, {{ 42},{ 8}}, {{170},{ 8}}, {{106},{ 8}}, {{234},{ 8}}, +{{ 26},{ 8}}, {{154},{ 8}}, {{ 90},{ 8}}, {{218},{ 8}}, {{ 58},{ 8}}, +{{186},{ 8}}, {{122},{ 8}}, {{250},{ 8}}, {{ 6},{ 8}}, {{134},{ 8}}, +{{ 70},{ 8}}, {{198},{ 8}}, {{ 38},{ 8}}, {{166},{ 8}}, {{102},{ 8}}, +{{230},{ 8}}, {{ 22},{ 8}}, {{150},{ 8}}, {{ 86},{ 8}}, {{214},{ 8}}, +{{ 54},{ 8}}, {{182},{ 8}}, {{118},{ 8}}, {{246},{ 8}}, {{ 14},{ 8}}, +{{142},{ 8}}, {{ 78},{ 8}}, {{206},{ 8}}, {{ 46},{ 8}}, {{174},{ 8}}, +{{110},{ 8}}, {{238},{ 8}}, {{ 30},{ 8}}, {{158},{ 8}}, {{ 94},{ 8}}, +{{222},{ 8}}, {{ 62},{ 8}}, {{190},{ 8}}, {{126},{ 8}}, {{254},{ 8}}, +{{ 1},{ 8}}, {{129},{ 8}}, {{ 65},{ 8}}, {{193},{ 8}}, {{ 33},{ 8}}, +{{161},{ 8}}, {{ 97},{ 8}}, {{225},{ 8}}, {{ 17},{ 8}}, {{145},{ 8}}, +{{ 81},{ 8}}, {{209},{ 8}}, {{ 49},{ 8}}, {{177},{ 8}}, {{113},{ 8}}, +{{241},{ 8}}, {{ 9},{ 8}}, {{137},{ 8}}, {{ 73},{ 8}}, {{201},{ 8}}, +{{ 41},{ 8}}, {{169},{ 8}}, {{105},{ 8}}, {{233},{ 8}}, {{ 25},{ 8}}, +{{153},{ 8}}, {{ 89},{ 8}}, {{217},{ 8}}, {{ 57},{ 8}}, {{185},{ 8}}, +{{121},{ 8}}, {{249},{ 8}}, {{ 5},{ 8}}, {{133},{ 8}}, {{ 69},{ 8}}, +{{197},{ 8}}, {{ 37},{ 8}}, {{165},{ 8}}, {{101},{ 8}}, {{229},{ 8}}, +{{ 21},{ 8}}, {{149},{ 8}}, {{ 85},{ 8}}, {{213},{ 8}}, {{ 53},{ 8}}, +{{181},{ 8}}, {{117},{ 8}}, {{245},{ 8}}, {{ 13},{ 8}}, {{141},{ 8}}, +{{ 77},{ 8}}, {{205},{ 8}}, {{ 45},{ 8}}, {{173},{ 8}}, {{109},{ 8}}, +{{237},{ 8}}, {{ 29},{ 8}}, {{157},{ 8}}, {{ 93},{ 8}}, {{221},{ 8}}, +{{ 61},{ 8}}, {{189},{ 8}}, {{125},{ 8}}, {{253},{ 8}}, {{ 19},{ 9}}, +{{275},{ 9}}, {{147},{ 9}}, {{403},{ 9}}, {{ 83},{ 9}}, {{339},{ 9}}, +{{211},{ 9}}, {{467},{ 9}}, {{ 51},{ 9}}, {{307},{ 9}}, {{179},{ 9}}, +{{435},{ 9}}, {{115},{ 9}}, {{371},{ 9}}, {{243},{ 9}}, {{499},{ 9}}, +{{ 11},{ 9}}, {{267},{ 9}}, {{139},{ 9}}, {{395},{ 9}}, {{ 75},{ 9}}, +{{331},{ 9}}, {{203},{ 9}}, {{459},{ 9}}, {{ 43},{ 9}}, {{299},{ 9}}, +{{171},{ 9}}, {{427},{ 9}}, {{107},{ 9}}, {{363},{ 9}}, {{235},{ 9}}, +{{491},{ 9}}, {{ 27},{ 9}}, {{283},{ 9}}, {{155},{ 9}}, {{411},{ 9}}, +{{ 91},{ 9}}, {{347},{ 9}}, {{219},{ 9}}, {{475},{ 9}}, {{ 59},{ 9}}, +{{315},{ 9}}, {{187},{ 9}}, {{443},{ 9}}, {{123},{ 9}}, {{379},{ 9}}, +{{251},{ 9}}, {{507},{ 9}}, {{ 7},{ 9}}, {{263},{ 9}}, {{135},{ 9}}, +{{391},{ 9}}, {{ 71},{ 9}}, 
{{327},{ 9}}, {{199},{ 9}}, {{455},{ 9}}, +{{ 39},{ 9}}, {{295},{ 9}}, {{167},{ 9}}, {{423},{ 9}}, {{103},{ 9}}, +{{359},{ 9}}, {{231},{ 9}}, {{487},{ 9}}, {{ 23},{ 9}}, {{279},{ 9}}, +{{151},{ 9}}, {{407},{ 9}}, {{ 87},{ 9}}, {{343},{ 9}}, {{215},{ 9}}, +{{471},{ 9}}, {{ 55},{ 9}}, {{311},{ 9}}, {{183},{ 9}}, {{439},{ 9}}, +{{119},{ 9}}, {{375},{ 9}}, {{247},{ 9}}, {{503},{ 9}}, {{ 15},{ 9}}, +{{271},{ 9}}, {{143},{ 9}}, {{399},{ 9}}, {{ 79},{ 9}}, {{335},{ 9}}, +{{207},{ 9}}, {{463},{ 9}}, {{ 47},{ 9}}, {{303},{ 9}}, {{175},{ 9}}, +{{431},{ 9}}, {{111},{ 9}}, {{367},{ 9}}, {{239},{ 9}}, {{495},{ 9}}, +{{ 31},{ 9}}, {{287},{ 9}}, {{159},{ 9}}, {{415},{ 9}}, {{ 95},{ 9}}, +{{351},{ 9}}, {{223},{ 9}}, {{479},{ 9}}, {{ 63},{ 9}}, {{319},{ 9}}, +{{191},{ 9}}, {{447},{ 9}}, {{127},{ 9}}, {{383},{ 9}}, {{255},{ 9}}, +{{511},{ 9}}, {{ 0},{ 7}}, {{ 64},{ 7}}, {{ 32},{ 7}}, {{ 96},{ 7}}, +{{ 16},{ 7}}, {{ 80},{ 7}}, {{ 48},{ 7}}, {{112},{ 7}}, {{ 8},{ 7}}, +{{ 72},{ 7}}, {{ 40},{ 7}}, {{104},{ 7}}, {{ 24},{ 7}}, {{ 88},{ 7}}, +{{ 56},{ 7}}, {{120},{ 7}}, {{ 4},{ 7}}, {{ 68},{ 7}}, {{ 36},{ 7}}, +{{100},{ 7}}, {{ 20},{ 7}}, {{ 84},{ 7}}, {{ 52},{ 7}}, {{116},{ 7}}, +{{ 3},{ 8}}, {{131},{ 8}}, {{ 67},{ 8}}, {{195},{ 8}}, {{ 35},{ 8}}, +{{163},{ 8}}, {{ 99},{ 8}}, {{227},{ 8}} +}; + +local const ct_data static_dtree[D_CODES] = { +{{ 0},{ 5}}, {{16},{ 5}}, {{ 8},{ 5}}, {{24},{ 5}}, {{ 4},{ 5}}, +{{20},{ 5}}, {{12},{ 5}}, {{28},{ 5}}, {{ 2},{ 5}}, {{18},{ 5}}, +{{10},{ 5}}, {{26},{ 5}}, {{ 6},{ 5}}, {{22},{ 5}}, {{14},{ 5}}, +{{30},{ 5}}, {{ 1},{ 5}}, {{17},{ 5}}, {{ 9},{ 5}}, {{25},{ 5}}, +{{ 5},{ 5}}, {{21},{ 5}}, {{13},{ 5}}, {{29},{ 5}}, {{ 3},{ 5}}, +{{19},{ 5}}, {{11},{ 5}}, {{27},{ 5}}, {{ 7},{ 5}}, {{23},{ 5}} +}; + +const uch ZLIB_INTERNAL _dist_code[DIST_CODE_LEN] = { + 0, 1, 2, 3, 4, 4, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, + 8, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 10, 10, +10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, +11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, +12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, +13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, +13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, +14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, +14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, +14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 14, 15, 15, 15, 15, 15, 15, 15, 15, +15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, +15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, +15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 0, 0, 16, 17, +18, 18, 19, 19, 20, 20, 20, 20, 21, 21, 21, 21, 22, 22, 22, 22, 22, 22, 22, 22, +23, 23, 23, 23, 23, 23, 23, 23, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, +24, 24, 24, 24, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, +26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, +26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 27, 27, 27, 27, 27, 27, 27, 27, +27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, +27, 27, 27, 27, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, +28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, +28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, 28, +28, 28, 
28, 28, 28, 28, 28, 28, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, +29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, +29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, +29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29, 29 +}; + +const uch ZLIB_INTERNAL _length_code[MAX_MATCH-MIN_MATCH+1]= { + 0, 1, 2, 3, 4, 5, 6, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 12, 12, +13, 13, 13, 13, 14, 14, 14, 14, 15, 15, 15, 15, 16, 16, 16, 16, 16, 16, 16, 16, +17, 17, 17, 17, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18, 18, 18, 19, 19, 19, 19, +19, 19, 19, 19, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, 20, +21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 22, 22, 22, 22, +22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 22, 23, 23, 23, 23, 23, 23, 23, 23, +23, 23, 23, 23, 23, 23, 23, 23, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, +24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, 24, +25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, +25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 25, 26, 26, 26, 26, 26, 26, 26, 26, +26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, 26, +26, 26, 26, 26, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, +27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 28 +}; + +local const int base_length[LENGTH_CODES] = { +0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 20, 24, 28, 32, 40, 48, 56, +64, 80, 96, 112, 128, 160, 192, 224, 0 +}; + +local const int base_dist[D_CODES] = { + 0, 1, 2, 3, 4, 6, 8, 12, 16, 24, + 32, 48, 64, 96, 128, 192, 256, 384, 512, 768, + 1024, 1536, 2048, 3072, 4096, 6144, 8192, 12288, 16384, 24576 +}; + ADDED compat/zlib/uncompr.c Index: compat/zlib/uncompr.c ================================================================== --- compat/zlib/uncompr.c +++ compat/zlib/uncompr.c @@ -0,0 +1,59 @@ +/* uncompr.c -- decompress a memory buffer + * Copyright (C) 1995-2003, 2010 Jean-loup Gailly. + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* @(#) $Id$ */ + +#define ZLIB_INTERNAL +#include "zlib.h" + +/* =========================================================================== + Decompresses the source buffer into the destination buffer. sourceLen is + the byte length of the source buffer. Upon entry, destLen is the total + size of the destination buffer, which must be large enough to hold the + entire uncompressed data. (The size of the uncompressed data must have + been saved previously by the compressor and transmitted to the decompressor + by some mechanism outside the scope of this compression library.) + Upon exit, destLen is the actual size of the compressed buffer. + + uncompress returns Z_OK if success, Z_MEM_ERROR if there was not + enough memory, Z_BUF_ERROR if there was not enough room in the output + buffer, or Z_DATA_ERROR if the input data was corrupted. 
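+ + An illustrative call (out, outLen, src and srcLen are placeholder names; + the caller must already know the uncompressed size): + + uLongf outLen = (uLongf)sizeof(out); + int rc = uncompress(out, &outLen, src, srcLen); + + On success rc is Z_OK and outLen holds the number of bytes written to out.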
+*/ +int ZEXPORT uncompress (dest, destLen, source, sourceLen) + Bytef *dest; + uLongf *destLen; + const Bytef *source; + uLong sourceLen; +{ + z_stream stream; + int err; + + stream.next_in = (z_const Bytef *)source; + stream.avail_in = (uInt)sourceLen; + /* Check for source > 64K on 16-bit machine: */ + if ((uLong)stream.avail_in != sourceLen) return Z_BUF_ERROR; + + stream.next_out = dest; + stream.avail_out = (uInt)*destLen; + if ((uLong)stream.avail_out != *destLen) return Z_BUF_ERROR; + + stream.zalloc = (alloc_func)0; + stream.zfree = (free_func)0; + + err = inflateInit(&stream); + if (err != Z_OK) return err; + + err = inflate(&stream, Z_FINISH); + if (err != Z_STREAM_END) { + inflateEnd(&stream); + if (err == Z_NEED_DICT || (err == Z_BUF_ERROR && stream.avail_in == 0)) + return Z_DATA_ERROR; + return err; + } + *destLen = stream.total_out; + + err = inflateEnd(&stream); + return err; +} ADDED compat/zlib/watcom/watcom_f.mak Index: compat/zlib/watcom/watcom_f.mak ================================================================== --- compat/zlib/watcom/watcom_f.mak +++ compat/zlib/watcom/watcom_f.mak @@ -0,0 +1,43 @@ +# Makefile for zlib +# OpenWatcom flat model +# Last updated: 28-Dec-2005 + +# To use, do "wmake -f watcom_f.mak" + +C_SOURCE = adler32.c compress.c crc32.c deflate.c & + gzclose.c gzlib.c gzread.c gzwrite.c & + infback.c inffast.c inflate.c inftrees.c & + trees.c uncompr.c zutil.c + +OBJS = adler32.obj compress.obj crc32.obj deflate.obj & + gzclose.obj gzlib.obj gzread.obj gzwrite.obj & + infback.obj inffast.obj inflate.obj inftrees.obj & + trees.obj uncompr.obj zutil.obj + +CC = wcc386 +LINKER = wcl386 +CFLAGS = -zq -mf -3r -fp3 -s -bt=dos -oilrtfm -fr=nul -wx +ZLIB_LIB = zlib_f.lib + +.C.OBJ: + $(CC) $(CFLAGS) $[@ + +all: $(ZLIB_LIB) example.exe minigzip.exe + +$(ZLIB_LIB): $(OBJS) + wlib -b -c $(ZLIB_LIB) -+adler32.obj -+compress.obj -+crc32.obj + wlib -b -c $(ZLIB_LIB) -+gzclose.obj -+gzlib.obj -+gzread.obj -+gzwrite.obj + wlib -b -c $(ZLIB_LIB) -+deflate.obj -+infback.obj + wlib -b -c $(ZLIB_LIB) -+inffast.obj -+inflate.obj -+inftrees.obj + wlib -b -c $(ZLIB_LIB) -+trees.obj -+uncompr.obj -+zutil.obj + +example.exe: $(ZLIB_LIB) example.obj + $(LINKER) -ldos32a -fe=example.exe example.obj $(ZLIB_LIB) + +minigzip.exe: $(ZLIB_LIB) minigzip.obj + $(LINKER) -ldos32a -fe=minigzip.exe minigzip.obj $(ZLIB_LIB) + +clean: .SYMBOLIC + del *.obj + del $(ZLIB_LIB) + @echo Cleaning done ADDED compat/zlib/watcom/watcom_l.mak Index: compat/zlib/watcom/watcom_l.mak ================================================================== --- compat/zlib/watcom/watcom_l.mak +++ compat/zlib/watcom/watcom_l.mak @@ -0,0 +1,43 @@ +# Makefile for zlib +# OpenWatcom large model +# Last updated: 28-Dec-2005 + +# To use, do "wmake -f watcom_l.mak" + +C_SOURCE = adler32.c compress.c crc32.c deflate.c & + gzclose.c gzlib.c gzread.c gzwrite.c & + infback.c inffast.c inflate.c inftrees.c & + trees.c uncompr.c zutil.c + +OBJS = adler32.obj compress.obj crc32.obj deflate.obj & + gzclose.obj gzlib.obj gzread.obj gzwrite.obj & + infback.obj inffast.obj inflate.obj inftrees.obj & + trees.obj uncompr.obj zutil.obj + +CC = wcc +LINKER = wcl +CFLAGS = -zq -ml -s -bt=dos -oilrtfm -fr=nul -wx +ZLIB_LIB = zlib_l.lib + +.C.OBJ: + $(CC) $(CFLAGS) $[@ + +all: $(ZLIB_LIB) example.exe minigzip.exe + +$(ZLIB_LIB): $(OBJS) + wlib -b -c $(ZLIB_LIB) -+adler32.obj -+compress.obj -+crc32.obj + wlib -b -c $(ZLIB_LIB) -+gzclose.obj -+gzlib.obj -+gzread.obj -+gzwrite.obj + wlib -b -c $(ZLIB_LIB) -+deflate.obj -+infback.obj 
+ wlib -b -c $(ZLIB_LIB) -+inffast.obj -+inflate.obj -+inftrees.obj + wlib -b -c $(ZLIB_LIB) -+trees.obj -+uncompr.obj -+zutil.obj + +example.exe: $(ZLIB_LIB) example.obj + $(LINKER) -fe=example.exe example.obj $(ZLIB_LIB) + +minigzip.exe: $(ZLIB_LIB) minigzip.obj + $(LINKER) -fe=minigzip.exe minigzip.obj $(ZLIB_LIB) + +clean: .SYMBOLIC + del *.obj + del $(ZLIB_LIB) + @echo Cleaning done ADDED compat/zlib/win32/DLL_FAQ.txt Index: compat/zlib/win32/DLL_FAQ.txt ================================================================== --- compat/zlib/win32/DLL_FAQ.txt +++ compat/zlib/win32/DLL_FAQ.txt @@ -0,0 +1,397 @@ + + Frequently Asked Questions about ZLIB1.DLL + + +This document describes the design, the rationale, and the usage +of the official DLL build of zlib, named ZLIB1.DLL. If you have +general questions about zlib, you should see the file "FAQ" found +in the zlib distribution, or at the following location: + http://www.gzip.org/zlib/zlib_faq.html + + + 1. What is ZLIB1.DLL, and how can I get it? + + - ZLIB1.DLL is the official build of zlib as a DLL. + (Please remark the character '1' in the name.) + + Pointers to a precompiled ZLIB1.DLL can be found in the zlib + web site at: + http://www.zlib.net/ + + Applications that link to ZLIB1.DLL can rely on the following + specification: + + * The exported symbols are exclusively defined in the source + files "zlib.h" and "zlib.def", found in an official zlib + source distribution. + * The symbols are exported by name, not by ordinal. + * The exported names are undecorated. + * The calling convention of functions is "C" (CDECL). + * The ZLIB1.DLL binary is linked to MSVCRT.DLL. + + The archive in which ZLIB1.DLL is bundled contains compiled + test programs that must run with a valid build of ZLIB1.DLL. + It is recommended to download the prebuilt DLL from the zlib + web site, instead of building it yourself, to avoid potential + incompatibilities that could be introduced by your compiler + and build settings. If you do build the DLL yourself, please + make sure that it complies with all the above requirements, + and it runs with the precompiled test programs, bundled with + the original ZLIB1.DLL distribution. + + If, for any reason, you need to build an incompatible DLL, + please use a different file name. + + + 2. Why did you change the name of the DLL to ZLIB1.DLL? + What happened to the old ZLIB.DLL? + + - The old ZLIB.DLL, built from zlib-1.1.4 or earlier, required + compilation settings that were incompatible to those used by + a static build. The DLL settings were supposed to be enabled + by defining the macro ZLIB_DLL, before including "zlib.h". + Incorrect handling of this macro was silently accepted at + build time, resulting in two major problems: + + * ZLIB_DLL was missing from the old makefile. When building + the DLL, not all people added it to the build options. In + consequence, incompatible incarnations of ZLIB.DLL started + to circulate around the net. + + * When switching from using the static library to using the + DLL, applications had to define the ZLIB_DLL macro and + to recompile all the sources that contained calls to zlib + functions. Failure to do so resulted in creating binaries + that were unable to run with the official ZLIB.DLL build. + + The only possible solution that we could foresee was to make + a binary-incompatible change in the DLL interface, in order to + remove the dependency on the ZLIB_DLL macro, and to release + the new DLL under a different name. 
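+
+    To make the ZLIB_DLL discussion above concrete, here is a minimal
+    sketch (an editorial illustration, not part of the upstream FAQ or
+    the zlib sources; the file name "sketch.c" is hypothetical) of how
+    an application that links against the DLL import library might opt
+    into the macro.  With ZLIB1.DLL the macro is optional and only
+    affects how the prototypes are declared:
+
+        /* sketch.c -- hypothetical example, not shipped with zlib */
+        #define ZLIB_DLL          /* optional with ZLIB1.DLL; see zconf.h */
+        #include <stdio.h>
+        #include "zlib.h"
+
+        int main(void)
+        {
+            /* zlibVersion() is exported by name from ZLIB1.DLL */
+            printf("using zlib %s\n", zlibVersion());
+            return 0;
+        }
+
+    A plausible MSVC build line would be something like
+    "cl -MD sketch.c zdll.lib" (zdll.lib being the import library for
+    ZLIB1.DLL); the exact command depends on your toolchain.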
+ + We chose the name ZLIB1.DLL, where '1' indicates the major + zlib version number. We hope that we will not have to break + the binary compatibility again, at least not as long as the + zlib-1.x series will last. + + There is still a ZLIB_DLL macro, that can trigger a more + efficient build and use of the DLL, but compatibility no + longer dependents on it. + + + 3. Can I build ZLIB.DLL from the new zlib sources, and replace + an old ZLIB.DLL, that was built from zlib-1.1.4 or earlier? + + - In principle, you can do it by assigning calling convention + keywords to the macros ZEXPORT and ZEXPORTVA. In practice, + it depends on what you mean by "an old ZLIB.DLL", because the + old DLL exists in several mutually-incompatible versions. + You have to find out first what kind of calling convention is + being used in your particular ZLIB.DLL build, and to use the + same one in the new build. If you don't know what this is all + about, you might be better off if you would just leave the old + DLL intact. + + + 4. Can I compile my application using the new zlib interface, and + link it to an old ZLIB.DLL, that was built from zlib-1.1.4 or + earlier? + + - The official answer is "no"; the real answer depends again on + what kind of ZLIB.DLL you have. Even if you are lucky, this + course of action is unreliable. + + If you rebuild your application and you intend to use a newer + version of zlib (post- 1.1.4), it is strongly recommended to + link it to the new ZLIB1.DLL. + + + 5. Why are the zlib symbols exported by name, and not by ordinal? + + - Although exporting symbols by ordinal is a little faster, it + is risky. Any single glitch in the maintenance or use of the + DEF file that contains the ordinals can result in incompatible + builds and frustrating crashes. Simply put, the benefits of + exporting symbols by ordinal do not justify the risks. + + Technically, it should be possible to maintain ordinals in + the DEF file, and still export the symbols by name. Ordinals + exist in every DLL, and even if the dynamic linking performed + at the DLL startup is searching for names, ordinals serve as + hints, for a faster name lookup. However, if the DEF file + contains ordinals, the Microsoft linker automatically builds + an implib that will cause the executables linked to it to use + those ordinals, and not the names. It is interesting to + notice that the GNU linker for Win32 does not suffer from this + problem. + + It is possible to avoid the DEF file if the exported symbols + are accompanied by a "__declspec(dllexport)" attribute in the + source files. You can do this in zlib by predefining the + ZLIB_DLL macro. + + + 6. I see that the ZLIB1.DLL functions use the "C" (CDECL) calling + convention. Why not use the STDCALL convention? + STDCALL is the standard convention in Win32, and I need it in + my Visual Basic project! + + (For readability, we use CDECL to refer to the convention + triggered by the "__cdecl" keyword, STDCALL to refer to + the convention triggered by "__stdcall", and FASTCALL to + refer to the convention triggered by "__fastcall".) + + - Most of the native Windows API functions (without varargs) use + indeed the WINAPI convention (which translates to STDCALL in + Win32), but the standard C functions use CDECL. If a user + application is intrinsically tied to the Windows API (e.g. + it calls native Windows API functions such as CreateFile()), + sometimes it makes sense to decorate its own functions with + WINAPI. But if ANSI C or POSIX portability is a goal (e.g. 
+ it calls standard C functions such as fopen()), it is not a + sound decision to request the inclusion of , or to + use non-ANSI constructs, for the sole purpose to make the user + functions STDCALL-able. + + The functionality offered by zlib is not in the category of + "Windows functionality", but is more like "C functionality". + + Technically, STDCALL is not bad; in fact, it is slightly + faster than CDECL, and it works with variable-argument + functions, just like CDECL. It is unfortunate that, in spite + of using STDCALL in the Windows API, it is not the default + convention used by the C compilers that run under Windows. + The roots of the problem reside deep inside the unsafety of + the K&R-style function prototypes, where the argument types + are not specified; but that is another story for another day. + + The remaining fact is that CDECL is the default convention. + Even if an explicit convention is hard-coded into the function + prototypes inside C headers, problems may appear. The + necessity to expose the convention in users' callbacks is one + of these problems. + + The calling convention issues are also important when using + zlib in other programming languages. Some of them, like Ada + (GNAT) and Fortran (GNU G77), have C bindings implemented + initially on Unix, and relying on the C calling convention. + On the other hand, the pre- .NET versions of Microsoft Visual + Basic require STDCALL, while Borland Delphi prefers, although + it does not require, FASTCALL. + + In fairness to all possible uses of zlib outside the C + programming language, we choose the default "C" convention. + Anyone interested in different bindings or conventions is + encouraged to maintain specialized projects. The "contrib/" + directory from the zlib distribution already holds a couple + of foreign bindings, such as Ada, C++, and Delphi. + + + 7. I need a DLL for my Visual Basic project. What can I do? + + - Define the ZLIB_WINAPI macro before including "zlib.h", when + building both the DLL and the user application (except that + you don't need to define anything when using the DLL in Visual + Basic). The ZLIB_WINAPI macro will switch on the WINAPI + (STDCALL) convention. The name of this DLL must be different + than the official ZLIB1.DLL. + + Gilles Vollant has contributed a build named ZLIBWAPI.DLL, + with the ZLIB_WINAPI macro turned on, and with the minizip + functionality built in. For more information, please read + the notes inside "contrib/vstudio/readme.txt", found in the + zlib distribution. + + + 8. I need to use zlib in my Microsoft .NET project. What can I + do? + + - Henrik Ravn has contributed a .NET wrapper around zlib. Look + into contrib/dotzlib/, inside the zlib distribution. + + + 9. If my application uses ZLIB1.DLL, should I link it to + MSVCRT.DLL? Why? + + - It is not required, but it is recommended to link your + application to MSVCRT.DLL, if it uses ZLIB1.DLL. + + The executables (.EXE, .DLL, etc.) that are involved in the + same process and are using the C run-time library (i.e. they + are calling standard C functions), must link to the same + library. There are several libraries in the Win32 system: + CRTDLL.DLL, MSVCRT.DLL, the static C libraries, etc. + Since ZLIB1.DLL is linked to MSVCRT.DLL, the executables that + depend on it should also be linked to MSVCRT.DLL. + + +10. Why are you saying that ZLIB1.DLL and my application should + be linked to the same C run-time (CRT) library? I linked my + application and my DLLs to different C libraries (e.g. 
my + application to a static library, and my DLLs to MSVCRT.DLL), + and everything works fine. + + - If a user library invokes only pure Win32 API (accessible via + <windows.h> and the related headers), its DLL build will work + in any context. But if this library invokes standard C API, + things get more complicated. + + There is a single Win32 library in a Win32 system. Every + function in this library resides in a single DLL module, that + is safe to call from anywhere. On the other hand, there are + multiple versions of the C library, and each of them has its + own separate internal state. Standalone executables and user + DLLs that call standard C functions must link to a C run-time + (CRT) library, be it static or shared (DLL). Intermixing + occurs when an executable (not necessarily standalone) and a + DLL are linked to different CRTs, and both are running in the + same process. + + Intermixing multiple CRTs is possible, as long as their + internal states are kept intact. The Microsoft Knowledge Base + articles KB94248 "HOWTO: Use the C Run-Time" and KB140584 + "HOWTO: Link with the Correct C Run-Time (CRT) Library" + mention the potential problems raised by intermixing. + + If intermixing works for you, it's because your application + and DLLs are avoiding the corruption of each of the CRTs' + internal states, maybe by careful design, or maybe by fortune. + + Also note that linking ZLIB1.DLL to non-Microsoft CRTs, such + as those provided by Borland, raises similar problems. + + +11. Why are you linking ZLIB1.DLL to MSVCRT.DLL? + + - MSVCRT.DLL exists on every Windows 95 with a new service pack + installed, or with Microsoft Internet Explorer 4 or later, and + on all other Windows 4.x or later (Windows 98, Windows NT 4, + or later). It is freely distributable; if not present in the + system, it can be downloaded from Microsoft or from other + software provider for free. + + The fact that MSVCRT.DLL does not exist on a virgin Windows 95 + is not so problematic. Windows 95 is scarcely found nowadays, + Microsoft ended its support a long time ago, and many recent + applications from various vendors, including Microsoft, do not + even run on it. Furthermore, no serious user should run + Windows 95 without a proper update installed. + + +12. Why are you not linking ZLIB1.DLL to + <<my favorite C run-time library>> ? + + - We considered and abandoned the following alternatives: + + * Linking ZLIB1.DLL to a static C library (LIBC.LIB, or + LIBCMT.LIB) is not a good option. People are using the DLL + mainly to save disk space. If you are linking your program + to a static C library, you may as well consider linking zlib + in statically, too. + + * Linking ZLIB1.DLL to CRTDLL.DLL looks appealing, because + CRTDLL.DLL is present on every Win32 installation. + Unfortunately, it has a series of problems: it does not + work properly with Microsoft's C++ libraries, it does not + provide support for 64-bit file offsets, (and so on...), + and Microsoft discontinued its support a long time ago. + + * Linking ZLIB1.DLL to MSVCR70.DLL or MSVCR71.DLL, supplied + with the Microsoft .NET platform, and Visual C++ 7.0/7.1, + raises problems related to the status of ZLIB1.DLL as a + system component. According to the Microsoft Knowledge Base + article KB326922 "INFO: Redistribution of the Shared C + Runtime Component in Visual C++ .NET", MSVCR70.DLL and + MSVCR71.DLL are not supposed to function as system DLLs, + because they may clash with MSVCRT.DLL.
Instead, the + application's installer is supposed to put these DLLs + (if needed) in the application's private directory. + If ZLIB1.DLL depends on a non-system runtime, it cannot + function as a redistributable system component. + + * Linking ZLIB1.DLL to non-Microsoft runtimes, such as + Borland's, or Cygwin's, raises problems related to the + reliable presence of these runtimes on Win32 systems. + It's easier to let the DLL build of zlib up to the people + who distribute these runtimes, and who may proceed as + explained in the answer to Question 14. + + +13. If ZLIB1.DLL cannot be linked to MSVCR70.DLL or MSVCR71.DLL, + how can I build/use ZLIB1.DLL in Microsoft Visual C++ 7.0 + (Visual Studio .NET) or newer? + + - Due to the problems explained in the Microsoft Knowledge Base + article KB326922 (see the previous answer), the C runtime that + comes with the VC7 environment is no longer considered a + system component. That is, it should not be assumed that this + runtime exists, or may be installed in a system directory. + Since ZLIB1.DLL is supposed to be a system component, it may + not depend on a non-system component. + + In order to link ZLIB1.DLL and your application to MSVCRT.DLL + in VC7, you need the library of Visual C++ 6.0 or older. If + you don't have this library at hand, it's probably best not to + use ZLIB1.DLL. + + We are hoping that, in the future, Microsoft will provide a + way to build applications linked to a proper system runtime, + from the Visual C++ environment. Until then, you have a + couple of alternatives, such as linking zlib in statically. + If your application requires dynamic linking, you may proceed + as explained in the answer to Question 14. + + +14. I need to link my own DLL build to a CRT different than + MSVCRT.DLL. What can I do? + + - Feel free to rebuild the DLL from the zlib sources, and link + it the way you want. You should, however, clearly state that + your build is unofficial. You should give it a different file + name, and/or install it in a private directory that can be + accessed by your application only, and is not visible to the + others (i.e. it's neither in the PATH, nor in the SYSTEM or + SYSTEM32 directories). Otherwise, your build may clash with + applications that link to the official build. + + For example, in Cygwin, zlib is linked to the Cygwin runtime + CYGWIN1.DLL, and it is distributed under the name CYGZ.DLL. + + +15. May I include additional pieces of code that I find useful, + link them in ZLIB1.DLL, and export them? + + - No. A legitimate build of ZLIB1.DLL must not include code + that does not originate from the official zlib source code. + But you can make your own private DLL build, under a different + file name, as suggested in the previous answer. + + For example, zlib is a part of the VCL library, distributed + with Borland Delphi and C++ Builder. The DLL build of VCL + is a redistributable file, named VCLxx.DLL. + + +16. May I remove some functionality out of ZLIB1.DLL, by enabling + macros like NO_GZCOMPRESS or NO_GZIP at compile time? + + - No. A legitimate build of ZLIB1.DLL must provide the complete + zlib functionality, as implemented in the official zlib source + code. But you can make your own private DLL build, under a + different file name, as suggested in the previous answer. + + +17. I made my own ZLIB1.DLL build. Can I test it for compliance? + + - We prefer that you download the official DLL from the zlib + web site. 
If you need something peculiar from this DLL, you + can send your suggestion to the zlib mailing list. + + However, in case you do rebuild the DLL yourself, you can run + it with the test programs found in the DLL distribution. + Running these test programs is not a guarantee of compliance, + but a failure can imply a detected problem. + +** + +This document is written and maintained by +Cosmin Truta ADDED compat/zlib/win32/Makefile.bor Index: compat/zlib/win32/Makefile.bor ================================================================== --- compat/zlib/win32/Makefile.bor +++ compat/zlib/win32/Makefile.bor @@ -0,0 +1,110 @@ +# Makefile for zlib +# Borland C++ for Win32 +# +# Usage: +# make -f win32/Makefile.bor +# make -f win32/Makefile.bor LOCAL_ZLIB=-DASMV OBJA=match.obj OBJPA=+match.obj + +# ------------ Borland C++ ------------ + +# Optional nonstandard preprocessor flags (e.g. -DMAX_MEM_LEVEL=7) +# should be added to the environment via "set LOCAL_ZLIB=-DFOO" or +# added to the declaration of LOC here: +LOC = $(LOCAL_ZLIB) + +CC = bcc32 +AS = bcc32 +LD = bcc32 +AR = tlib +CFLAGS = -a -d -k- -O2 $(LOC) +ASFLAGS = $(LOC) +LDFLAGS = $(LOC) + + +# variables +ZLIB_LIB = zlib.lib + +OBJ1 = adler32.obj compress.obj crc32.obj deflate.obj gzclose.obj gzlib.obj gzread.obj +OBJ2 = gzwrite.obj infback.obj inffast.obj inflate.obj inftrees.obj trees.obj uncompr.obj zutil.obj +#OBJA = +OBJP1 = +adler32.obj+compress.obj+crc32.obj+deflate.obj+gzclose.obj+gzlib.obj+gzread.obj +OBJP2 = +gzwrite.obj+infback.obj+inffast.obj+inflate.obj+inftrees.obj+trees.obj+uncompr.obj+zutil.obj +#OBJPA= + + +# targets +all: $(ZLIB_LIB) example.exe minigzip.exe + +.c.obj: + $(CC) -c $(CFLAGS) $< + +.asm.obj: + $(AS) -c $(ASFLAGS) $< + +adler32.obj: adler32.c zlib.h zconf.h + +compress.obj: compress.c zlib.h zconf.h + +crc32.obj: crc32.c zlib.h zconf.h crc32.h + +deflate.obj: deflate.c deflate.h zutil.h zlib.h zconf.h + +gzclose.obj: gzclose.c zlib.h zconf.h gzguts.h + +gzlib.obj: gzlib.c zlib.h zconf.h gzguts.h + +gzread.obj: gzread.c zlib.h zconf.h gzguts.h + +gzwrite.obj: gzwrite.c zlib.h zconf.h gzguts.h + +infback.obj: infback.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h inffixed.h + +inffast.obj: inffast.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h + +inflate.obj: inflate.c zutil.h zlib.h zconf.h inftrees.h inflate.h \ + inffast.h inffixed.h + +inftrees.obj: inftrees.c zutil.h zlib.h zconf.h inftrees.h + +trees.obj: trees.c zutil.h zlib.h zconf.h deflate.h trees.h + +uncompr.obj: uncompr.c zlib.h zconf.h + +zutil.obj: zutil.c zutil.h zlib.h zconf.h + +example.obj: test/example.c zlib.h zconf.h + +minigzip.obj: test/minigzip.c zlib.h zconf.h + + +# For the sake of the old Borland make, +# the command line is cut to fit in the MS-DOS 128 byte limit: +$(ZLIB_LIB): $(OBJ1) $(OBJ2) $(OBJA) + -del $(ZLIB_LIB) + $(AR) $(ZLIB_LIB) $(OBJP1) + $(AR) $(ZLIB_LIB) $(OBJP2) + $(AR) $(ZLIB_LIB) $(OBJPA) + + +# testing +test: example.exe minigzip.exe + example + echo hello world | minigzip | minigzip -d + +example.exe: example.obj $(ZLIB_LIB) + $(LD) $(LDFLAGS) example.obj $(ZLIB_LIB) + +minigzip.exe: minigzip.obj $(ZLIB_LIB) + $(LD) $(LDFLAGS) minigzip.obj $(ZLIB_LIB) + + +# cleanup +clean: + -del $(ZLIB_LIB) + -del *.obj + -del *.exe + -del *.tds + -del zlib.bak + -del foo.gz ADDED compat/zlib/win32/Makefile.gcc Index: compat/zlib/win32/Makefile.gcc ================================================================== --- compat/zlib/win32/Makefile.gcc +++ compat/zlib/win32/Makefile.gcc @@ -0,0 +1,182 
@@ +# Makefile for zlib, derived from Makefile.dj2. +# Modified for mingw32 by C. Spieler, 6/16/98. +# Updated for zlib 1.2.x by Christian Spieler and Cosmin Truta, Mar-2003. +# Last updated: Mar 2012. +# Tested under Cygwin and MinGW. + +# Copyright (C) 1995-2003 Jean-loup Gailly. +# For conditions of distribution and use, see copyright notice in zlib.h + +# To compile, or to compile and test, type from the top level zlib directory: +# +# make -fwin32/Makefile.gcc; make test testdll -fwin32/Makefile.gcc +# +# To use the asm code, type: +# cp contrib/asm?86/match.S ./match.S +# make LOC=-DASMV OBJA=match.o -fwin32/Makefile.gcc +# +# To install libz.a, zconf.h and zlib.h in the system directories, type: +# +# make install -fwin32/Makefile.gcc +# +# BINARY_PATH, INCLUDE_PATH and LIBRARY_PATH must be set. +# +# To install the shared lib, append SHARED_MODE=1 to the make command : +# +# make install -fwin32/Makefile.gcc SHARED_MODE=1 + +# Note: +# If the platform is *not* MinGW (e.g. it is Cygwin or UWIN), +# the DLL name should be changed from "zlib1.dll". + +STATICLIB = libz.a +SHAREDLIB = zlib1.dll +IMPLIB = libz.dll.a + +# +# Set to 1 if shared object needs to be installed +# +SHARED_MODE=0 + +#LOC = -DASMV +#LOC = -DDEBUG -g + +PREFIX = +CC = $(PREFIX)gcc +CFLAGS = $(LOC) -O3 -Wall + +AS = $(CC) +ASFLAGS = $(LOC) -Wall + +LD = $(CC) +LDFLAGS = $(LOC) + +AR = $(PREFIX)ar +ARFLAGS = rcs + +RC = $(PREFIX)windres +RCFLAGS = --define GCC_WINDRES + +STRIP = $(PREFIX)strip + +CP = cp -fp +# If GNU install is available, replace $(CP) with install. +INSTALL = $(CP) +RM = rm -f + +prefix ?= /usr/local +exec_prefix = $(prefix) + +OBJS = adler32.o compress.o crc32.o deflate.o gzclose.o gzlib.o gzread.o \ + gzwrite.o infback.o inffast.o inflate.o inftrees.o trees.o uncompr.o zutil.o +OBJA = + +all: $(STATICLIB) $(SHAREDLIB) $(IMPLIB) example.exe minigzip.exe example_d.exe minigzip_d.exe + +test: example.exe minigzip.exe + ./example + echo hello world | ./minigzip | ./minigzip -d + +testdll: example_d.exe minigzip_d.exe + ./example_d + echo hello world | ./minigzip_d | ./minigzip_d -d + +.c.o: + $(CC) $(CFLAGS) -c -o $@ $< + +.S.o: + $(AS) $(ASFLAGS) -c -o $@ $< + +$(STATICLIB): $(OBJS) $(OBJA) + $(AR) $(ARFLAGS) $@ $(OBJS) $(OBJA) + +$(IMPLIB): $(SHAREDLIB) + +$(SHAREDLIB): win32/zlib.def $(OBJS) $(OBJA) zlibrc.o + $(CC) -shared -Wl,--out-implib,$(IMPLIB) $(LDFLAGS) \ + -o $@ win32/zlib.def $(OBJS) $(OBJA) zlibrc.o + $(STRIP) $@ + +example.exe: example.o $(STATICLIB) + $(LD) $(LDFLAGS) -o $@ example.o $(STATICLIB) + $(STRIP) $@ + +minigzip.exe: minigzip.o $(STATICLIB) + $(LD) $(LDFLAGS) -o $@ minigzip.o $(STATICLIB) + $(STRIP) $@ + +example_d.exe: example.o $(IMPLIB) + $(LD) $(LDFLAGS) -o $@ example.o $(IMPLIB) + $(STRIP) $@ + +minigzip_d.exe: minigzip.o $(IMPLIB) + $(LD) $(LDFLAGS) -o $@ minigzip.o $(IMPLIB) + $(STRIP) $@ + +example.o: test/example.c zlib.h zconf.h + $(CC) $(CFLAGS) -I. -c -o $@ test/example.c + +minigzip.o: test/minigzip.c zlib.h zconf.h + $(CC) $(CFLAGS) -I. 
-c -o $@ test/minigzip.c + +zlibrc.o: win32/zlib1.rc + $(RC) $(RCFLAGS) -o $@ win32/zlib1.rc + +.PHONY: install uninstall clean + +install: zlib.h zconf.h $(STATICLIB) $(IMPLIB) + @if test -z "$(DESTDIR)$(INCLUDE_PATH)" -o -z "$(DESTDIR)$(LIBRARY_PATH)" -o -z "$(DESTDIR)$(BINARY_PATH)"; then \ + echo INCLUDE_PATH, LIBRARY_PATH, and BINARY_PATH must be specified; \ + exit 1; \ + fi + -@mkdir -p '$(DESTDIR)$(INCLUDE_PATH)' + -@mkdir -p '$(DESTDIR)$(LIBRARY_PATH)' '$(DESTDIR)$(LIBRARY_PATH)'/pkgconfig + -if [ "$(SHARED_MODE)" = "1" ]; then \ + mkdir -p '$(DESTDIR)$(BINARY_PATH)'; \ + $(INSTALL) $(SHAREDLIB) '$(DESTDIR)$(BINARY_PATH)'; \ + $(INSTALL) $(IMPLIB) '$(DESTDIR)$(LIBRARY_PATH)'; \ + fi + -$(INSTALL) zlib.h '$(DESTDIR)$(INCLUDE_PATH)' + -$(INSTALL) zconf.h '$(DESTDIR)$(INCLUDE_PATH)' + -$(INSTALL) $(STATICLIB) '$(DESTDIR)$(LIBRARY_PATH)' + sed \ + -e 's|@prefix@|${prefix}|g' \ + -e 's|@exec_prefix@|${exec_prefix}|g' \ + -e 's|@libdir@|$(LIBRARY_PATH)|g' \ + -e 's|@sharedlibdir@|$(LIBRARY_PATH)|g' \ + -e 's|@includedir@|$(INCLUDE_PATH)|g' \ + -e 's|@VERSION@|'`sed -n -e '/VERSION "/s/.*"\(.*\)".*/\1/p' zlib.h`'|g' \ + zlib.pc.in > '$(DESTDIR)$(LIBRARY_PATH)'/pkgconfig/zlib.pc + +uninstall: + -if [ "$(SHARED_MODE)" = "1" ]; then \ + $(RM) '$(DESTDIR)$(BINARY_PATH)'/$(SHAREDLIB); \ + $(RM) '$(DESTDIR)$(LIBRARY_PATH)'/$(IMPLIB); \ + fi + -$(RM) '$(DESTDIR)$(INCLUDE_PATH)'/zlib.h + -$(RM) '$(DESTDIR)$(INCLUDE_PATH)'/zconf.h + -$(RM) '$(DESTDIR)$(LIBRARY_PATH)'/$(STATICLIB) + +clean: + -$(RM) $(STATICLIB) + -$(RM) $(SHAREDLIB) + -$(RM) $(IMPLIB) + -$(RM) *.o + -$(RM) *.exe + -$(RM) foo.gz + +adler32.o: zlib.h zconf.h +compress.o: zlib.h zconf.h +crc32.o: crc32.h zlib.h zconf.h +deflate.o: deflate.h zutil.h zlib.h zconf.h +gzclose.o: zlib.h zconf.h gzguts.h +gzlib.o: zlib.h zconf.h gzguts.h +gzread.o: zlib.h zconf.h gzguts.h +gzwrite.o: zlib.h zconf.h gzguts.h +inffast.o: zutil.h zlib.h zconf.h inftrees.h inflate.h inffast.h +inflate.o: zutil.h zlib.h zconf.h inftrees.h inflate.h inffast.h +infback.o: zutil.h zlib.h zconf.h inftrees.h inflate.h inffast.h +inftrees.o: zutil.h zlib.h zconf.h inftrees.h +trees.o: deflate.h zutil.h zlib.h zconf.h trees.h +uncompr.o: zlib.h zconf.h +zutil.o: zutil.h zlib.h zconf.h ADDED compat/zlib/win32/Makefile.msc Index: compat/zlib/win32/Makefile.msc ================================================================== --- compat/zlib/win32/Makefile.msc +++ compat/zlib/win32/Makefile.msc @@ -0,0 +1,163 @@ +# Makefile for zlib using Microsoft (Visual) C +# zlib is copyright (C) 1995-2006 Jean-loup Gailly and Mark Adler +# +# Usage: +# nmake -f win32/Makefile.msc (standard build) +# nmake -f win32/Makefile.msc LOC=-DFOO (nonstandard build) +# nmake -f win32/Makefile.msc LOC="-DASMV -DASMINF" \ +# OBJA="inffas32.obj match686.obj" (use ASM code, x86) +# nmake -f win32/Makefile.msc AS=ml64 LOC="-DASMV -DASMINF -I." \ +# OBJA="inffasx64.obj gvmat64.obj inffas8664.obj" (use ASM code, x64) + +# The toplevel directory of the source tree. +# +TOP = . 
+ +# optional build flags +LOC = + +# variables +STATICLIB = zlib.lib +SHAREDLIB = zlib1.dll +IMPLIB = zdll.lib + +CC = cl +AS = ml +LD = link +AR = lib +RC = rc +CFLAGS = -nologo -MD -W3 -O2 -Oy- -Zi -Fd"zlib" $(LOC) +WFLAGS = -D_CRT_SECURE_NO_DEPRECATE -D_CRT_NONSTDC_NO_DEPRECATE +ASFLAGS = -coff -Zi $(LOC) +LDFLAGS = -nologo -debug -incremental:no -opt:ref +ARFLAGS = -nologo +RCFLAGS = /dWIN32 /r + +OBJS = adler32.obj compress.obj crc32.obj deflate.obj gzclose.obj gzlib.obj gzread.obj \ + gzwrite.obj infback.obj inflate.obj inftrees.obj inffast.obj trees.obj uncompr.obj zutil.obj +OBJA = + + +# targets +all: $(STATICLIB) $(SHAREDLIB) $(IMPLIB) \ + example.exe minigzip.exe example_d.exe minigzip_d.exe + +$(STATICLIB): $(OBJS) $(OBJA) + $(AR) $(ARFLAGS) -out:$@ $(OBJS) $(OBJA) + +$(IMPLIB): $(SHAREDLIB) + +$(SHAREDLIB): $(TOP)/win32/zlib.def $(OBJS) $(OBJA) zlib1.res + $(LD) $(LDFLAGS) -def:$(TOP)/win32/zlib.def -dll -implib:$(IMPLIB) \ + -out:$@ -base:0x5A4C0000 $(OBJS) $(OBJA) zlib1.res + if exist $@.manifest \ + mt -nologo -manifest $@.manifest -outputresource:$@;2 + +example.exe: example.obj $(STATICLIB) + $(LD) $(LDFLAGS) example.obj $(STATICLIB) + if exist $@.manifest \ + mt -nologo -manifest $@.manifest -outputresource:$@;1 + +minigzip.exe: minigzip.obj $(STATICLIB) + $(LD) $(LDFLAGS) minigzip.obj $(STATICLIB) + if exist $@.manifest \ + mt -nologo -manifest $@.manifest -outputresource:$@;1 + +example_d.exe: example.obj $(IMPLIB) + $(LD) $(LDFLAGS) -out:$@ example.obj $(IMPLIB) + if exist $@.manifest \ + mt -nologo -manifest $@.manifest -outputresource:$@;1 + +minigzip_d.exe: minigzip.obj $(IMPLIB) + $(LD) $(LDFLAGS) -out:$@ minigzip.obj $(IMPLIB) + if exist $@.manifest \ + mt -nologo -manifest $@.manifest -outputresource:$@;1 + +{$(TOP)}.c.obj: + $(CC) -c $(WFLAGS) $(CFLAGS) $< + +{$(TOP)/test}.c.obj: + $(CC) -c -I$(TOP) $(WFLAGS) $(CFLAGS) $< + +{$(TOP)/contrib/masmx64}.c.obj: + $(CC) -c $(WFLAGS) $(CFLAGS) $< + +{$(TOP)/contrib/masmx64}.asm.obj: + $(AS) -c $(ASFLAGS) $< + +{$(TOP)/contrib/masmx86}.asm.obj: + $(AS) -c $(ASFLAGS) $< + +adler32.obj: $(TOP)/adler32.c $(TOP)/zlib.h $(TOP)/zconf.h + +compress.obj: $(TOP)/compress.c $(TOP)/zlib.h $(TOP)/zconf.h + +crc32.obj: $(TOP)/crc32.c $(TOP)/zlib.h $(TOP)/zconf.h $(TOP)/crc32.h + +deflate.obj: $(TOP)/deflate.c $(TOP)/deflate.h $(TOP)/zutil.h $(TOP)/zlib.h $(TOP)/zconf.h + +gzclose.obj: $(TOP)/gzclose.c $(TOP)/zlib.h $(TOP)/zconf.h $(TOP)/gzguts.h + +gzlib.obj: $(TOP)/gzlib.c $(TOP)/zlib.h $(TOP)/zconf.h $(TOP)/gzguts.h + +gzread.obj: $(TOP)/gzread.c $(TOP)/zlib.h $(TOP)/zconf.h $(TOP)/gzguts.h + +gzwrite.obj: $(TOP)/gzwrite.c $(TOP)/zlib.h $(TOP)/zconf.h $(TOP)/gzguts.h + +infback.obj: $(TOP)/infback.c $(TOP)/zutil.h $(TOP)/zlib.h $(TOP)/zconf.h $(TOP)/inftrees.h $(TOP)/inflate.h \ + $(TOP)/inffast.h $(TOP)/inffixed.h + +inffast.obj: $(TOP)/inffast.c $(TOP)/zutil.h $(TOP)/zlib.h $(TOP)/zconf.h $(TOP)/inftrees.h $(TOP)/inflate.h \ + $(TOP)/inffast.h + +inflate.obj: $(TOP)/inflate.c $(TOP)/zutil.h $(TOP)/zlib.h $(TOP)/zconf.h $(TOP)/inftrees.h $(TOP)/inflate.h \ + $(TOP)/inffast.h $(TOP)/inffixed.h + +inftrees.obj: $(TOP)/inftrees.c $(TOP)/zutil.h $(TOP)/zlib.h $(TOP)/zconf.h $(TOP)/inftrees.h + +trees.obj: $(TOP)/trees.c $(TOP)/zutil.h $(TOP)/zlib.h $(TOP)/zconf.h $(TOP)/deflate.h $(TOP)/trees.h + +uncompr.obj: $(TOP)/uncompr.c $(TOP)/zlib.h $(TOP)/zconf.h + +zutil.obj: $(TOP)/zutil.c $(TOP)/zutil.h $(TOP)/zlib.h $(TOP)/zconf.h + +gvmat64.obj: $(TOP)/contrib\masmx64\gvmat64.asm + +inffasx64.obj: $(TOP)/contrib\masmx64\inffasx64.asm 
+ +inffas8664.obj: $(TOP)/contrib\masmx64\inffas8664.c $(TOP)/zutil.h $(TOP)/zlib.h $(TOP)/zconf.h \ + $(TOP)/inftrees.h $(TOP)/inflate.h $(TOP)/inffast.h + +inffas32.obj: $(TOP)/contrib\masmx86\inffas32.asm + +match686.obj: $(TOP)/contrib\masmx86\match686.asm + +example.obj: $(TOP)/test/example.c $(TOP)/zlib.h $(TOP)/zconf.h + +minigzip.obj: $(TOP)/test/minigzip.c $(TOP)/zlib.h $(TOP)/zconf.h + +zlib1.res: $(TOP)/win32/zlib1.rc + $(RC) $(RCFLAGS) /fo$@ $(TOP)/win32/zlib1.rc + +# testing +test: example.exe minigzip.exe + example + echo hello world | minigzip | minigzip -d + +testdll: example_d.exe minigzip_d.exe + example_d + echo hello world | minigzip_d | minigzip_d -d + + +# cleanup +clean: + -del $(STATICLIB) + -del $(SHAREDLIB) + -del $(IMPLIB) + -del *.obj + -del *.res + -del *.exp + -del *.exe + -del *.pdb + -del *.manifest + -del foo.gz ADDED compat/zlib/win32/README-WIN32.txt Index: compat/zlib/win32/README-WIN32.txt ================================================================== --- compat/zlib/win32/README-WIN32.txt +++ compat/zlib/win32/README-WIN32.txt @@ -0,0 +1,103 @@ +ZLIB DATA COMPRESSION LIBRARY + +zlib 1.2.8 is a general purpose data compression library. All the code is +thread safe. The data format used by the zlib library is described by RFCs +(Request for Comments) 1950 to 1952 in the files +http://www.ietf.org/rfc/rfc1950.txt (zlib format), rfc1951.txt (deflate format) +and rfc1952.txt (gzip format). + +All functions of the compression library are documented in the file zlib.h +(volunteer to write man pages welcome, contact zlib@gzip.org). Two compiled +examples are distributed in this package, example and minigzip. The example_d +and minigzip_d flavors validate that the zlib1.dll file is working correctly. + +Questions about zlib should be sent to <zlib@gzip.org>. The zlib home page +is http://zlib.net/ . Before reporting a problem, please check this site to +verify that you have the latest version of zlib; otherwise get the latest +version and check whether the problem still exists or not. + +PLEASE read DLL_FAQ.txt, and the zlib FAQ http://zlib.net/zlib_faq.html +before asking for help.
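+
+Before digging into the package contents, a minimal round-trip through the
+library is often the quickest sanity check.  The following sketch is an
+editorial illustration only (it is not one of the shipped example programs,
+and the file name "zcheck.c" is hypothetical); it uses only documented zlib
+calls and can be linked against either zlib.lib or zdll.lib:
+
+    /* zcheck.c -- hypothetical sanity check, not shipped with zlib */
+    #include <stdio.h>
+    #include <string.h>
+    #include "zlib.h"
+
+    int main(void)
+    {
+        const char *text = "hello, hello, hello, hello, world";
+        uLong  srcLen   = (uLong)strlen(text) + 1;   /* include the NUL */
+        Bytef  comp[256], plain[256];
+        uLongf compLen  = sizeof(comp);
+        uLongf plainLen = sizeof(plain);
+
+        if (compress(comp, &compLen, (const Bytef *)text, srcLen) != Z_OK) {
+            fprintf(stderr, "compress() failed\n");
+            return 1;
+        }
+        if (uncompress(plain, &plainLen, comp, compLen) != Z_OK
+            || strcmp((const char *)plain, text) != 0) {
+            fprintf(stderr, "uncompress() failed\n");
+            return 1;
+        }
+        printf("zlib %s: %lu -> %lu -> %lu bytes\n",
+               zlibVersion(), srcLen, compLen, plainLen);
+        return 0;
+    }
+
+A plausible MSVC build line would be along the lines of
+"cl -MD zcheck.c zdll.lib" (name and flags are illustrative); the shipped
+example.exe and example_d.exe perform a much more thorough test.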
+ + +Manifest: + +The package zlib-1.2.8-win32-x86.zip will contain the following files: + + README-WIN32.txt This document + ChangeLog Changes since previous zlib packages + DLL_FAQ.txt Frequently asked questions about zlib1.dll + zlib.3.pdf Documentation of this library in Adobe Acrobat format + + example.exe A statically-bound example (using zlib.lib, not the dll) + example.pdb Symbolic information for debugging example.exe + + example_d.exe A zlib1.dll bound example (using zdll.lib) + example_d.pdb Symbolic information for debugging example_d.exe + + minigzip.exe A statically-bound test program (using zlib.lib, not the dll) + minigzip.pdb Symbolic information for debugging minigzip.exe + + minigzip_d.exe A zlib1.dll bound test program (using zdll.lib) + minigzip_d.pdb Symbolic information for debugging minigzip_d.exe + + zlib.h Install these files into the compilers' INCLUDE path to + zconf.h compile programs which use zlib.lib or zdll.lib + + zdll.lib Install these files into the compilers' LIB path if linking + zdll.exp a compiled program to the zlib1.dll binary + + zlib.lib Install these files into the compilers' LIB path to link zlib + zlib.pdb into compiled programs, without zlib1.dll runtime dependency + (zlib.pdb provides debugging info to the compile time linker) + + zlib1.dll Install this binary shared library into the system PATH, or + the program's runtime directory (where the .exe resides) + zlib1.pdb Install in the same directory as zlib1.dll, in order to debug + an application crash using WinDbg or similar tools. + +All .pdb files above are entirely optional, but are very useful to a developer +attempting to diagnose program misbehavior or a crash. Many additional +important files for developers can be found in the zlib127.zip source package +available from http://zlib.net/ - review that package's README file for details. + + +Acknowledgments: + +The deflate format used by zlib was defined by Phil Katz. The deflate and +zlib specifications were written by L. Peter Deutsch. Thanks to all the +people who reported problems and suggested various improvements in zlib; they +are too numerous to cite here. + + +Copyright notice: + + (C) 1995-2012 Jean-loup Gailly and Mark Adler + + This software is provided 'as-is', without any express or implied + warranty. In no event will the authors be held liable for any damages + arising from the use of this software. + + Permission is granted to anyone to use this software for any purpose, + including commercial applications, and to alter it and redistribute it + freely, subject to the following restrictions: + + 1. The origin of this software must not be misrepresented; you must not + claim that you wrote the original software. If you use this software + in a product, an acknowledgment in the product documentation would be + appreciated but is not required. + 2. Altered source versions must be plainly marked as such, and must not be + misrepresented as being the original software. + 3. This notice may not be removed or altered from any source distribution. + + Jean-loup Gailly Mark Adler + jloup@gzip.org madler@alumni.caltech.edu + +If you use the zlib library in a product, we would appreciate *not* receiving +lengthy legal documents to sign. The sources are provided for free but without +warranty of any kind. The library has been entirely written by Jean-loup +Gailly and Mark Adler; it does not include third-party code. 
+ +If you redistribute modified sources, we would appreciate that you include in +the file ChangeLog history information documenting your changes. Please read +the FAQ for more information on the distribution of modified source versions. ADDED compat/zlib/win32/VisualC.txt Index: compat/zlib/win32/VisualC.txt ================================================================== --- compat/zlib/win32/VisualC.txt +++ compat/zlib/win32/VisualC.txt @@ -0,0 +1,3 @@ + +To build zlib using the Microsoft Visual C++ environment, +use the appropriate project from the projects/ directory. ADDED compat/zlib/win32/zlib.def Index: compat/zlib/win32/zlib.def ================================================================== --- compat/zlib/win32/zlib.def +++ compat/zlib/win32/zlib.def @@ -0,0 +1,86 @@ +; zlib data compression library +EXPORTS +; basic functions + zlibVersion + deflate + deflateEnd + inflate + inflateEnd +; advanced functions + deflateSetDictionary + deflateCopy + deflateReset + deflateParams + deflateTune + deflateBound + deflatePending + deflatePrime + deflateSetHeader + inflateSetDictionary + inflateGetDictionary + inflateSync + inflateCopy + inflateReset + inflateReset2 + inflatePrime + inflateMark + inflateGetHeader + inflateBack + inflateBackEnd + zlibCompileFlags +; utility functions + compress + compress2 + compressBound + uncompress + gzopen + gzdopen + gzbuffer + gzsetparams + gzread + gzwrite + gzprintf + gzvprintf + gzputs + gzgets + gzputc + gzgetc + gzungetc + gzflush + gzseek + gzrewind + gztell + gzoffset + gzeof + gzdirect + gzclose + gzclose_r + gzclose_w + gzerror + gzclearerr +; large file functions + gzopen64 + gzseek64 + gztell64 + gzoffset64 + adler32_combine64 + crc32_combine64 +; checksum functions + adler32 + crc32 + adler32_combine + crc32_combine +; various hacks, don't look :) + deflateInit_ + deflateInit2_ + inflateInit_ + inflateInit2_ + inflateBackInit_ + gzgetc_ + zError + inflateSyncPoint + get_crc_table + inflateUndermine + inflateResetKeep + deflateResetKeep + gzopen_w ADDED compat/zlib/win32/zlib1.rc Index: compat/zlib/win32/zlib1.rc ================================================================== --- compat/zlib/win32/zlib1.rc +++ compat/zlib/win32/zlib1.rc @@ -0,0 +1,40 @@ +#include +#include "../zlib.h" + +#ifdef GCC_WINDRES +VS_VERSION_INFO VERSIONINFO +#else +VS_VERSION_INFO VERSIONINFO MOVEABLE IMPURE LOADONCALL DISCARDABLE +#endif + FILEVERSION ZLIB_VER_MAJOR,ZLIB_VER_MINOR,ZLIB_VER_REVISION,0 + PRODUCTVERSION ZLIB_VER_MAJOR,ZLIB_VER_MINOR,ZLIB_VER_REVISION,0 + FILEFLAGSMASK VS_FFI_FILEFLAGSMASK +#ifdef _DEBUG + FILEFLAGS 1 +#else + FILEFLAGS 0 +#endif + FILEOS VOS__WINDOWS32 + FILETYPE VFT_DLL + FILESUBTYPE 0 // not used +BEGIN + BLOCK "StringFileInfo" + BEGIN + BLOCK "040904E4" + //language ID = U.S. 
English, char set = Windows, Multilingual + BEGIN + VALUE "FileDescription", "zlib data compression library\0" + VALUE "FileVersion", ZLIB_VERSION "\0" + VALUE "InternalName", "zlib1.dll\0" + VALUE "LegalCopyright", "(C) 1995-2013 Jean-loup Gailly & Mark Adler\0" + VALUE "OriginalFilename", "zlib1.dll\0" + VALUE "ProductName", "zlib\0" + VALUE "ProductVersion", ZLIB_VERSION "\0" + VALUE "Comments", "For more information visit http://www.zlib.net/\0" + END + END + BLOCK "VarFileInfo" + BEGIN + VALUE "Translation", 0x0409, 1252 + END +END ADDED compat/zlib/zconf.h Index: compat/zlib/zconf.h ================================================================== --- compat/zlib/zconf.h +++ compat/zlib/zconf.h @@ -0,0 +1,511 @@ +/* zconf.h -- configuration of the zlib compression library + * Copyright (C) 1995-2013 Jean-loup Gailly. + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* @(#) $Id$ */ + +#ifndef ZCONF_H +#define ZCONF_H + +/* + * If you *really* need a unique prefix for all types and library functions, + * compile with -DZ_PREFIX. The "standard" zlib should be compiled without it. + * Even better than compiling with -DZ_PREFIX would be to use configure to set + * this permanently in zconf.h using "./configure --zprefix". + */ +#ifdef Z_PREFIX /* may be set to #if 1 by ./configure */ +# define Z_PREFIX_SET + +/* all linked symbols */ +# define _dist_code z__dist_code +# define _length_code z__length_code +# define _tr_align z__tr_align +# define _tr_flush_bits z__tr_flush_bits +# define _tr_flush_block z__tr_flush_block +# define _tr_init z__tr_init +# define _tr_stored_block z__tr_stored_block +# define _tr_tally z__tr_tally +# define adler32 z_adler32 +# define adler32_combine z_adler32_combine +# define adler32_combine64 z_adler32_combine64 +# ifndef Z_SOLO +# define compress z_compress +# define compress2 z_compress2 +# define compressBound z_compressBound +# endif +# define crc32 z_crc32 +# define crc32_combine z_crc32_combine +# define crc32_combine64 z_crc32_combine64 +# define deflate z_deflate +# define deflateBound z_deflateBound +# define deflateCopy z_deflateCopy +# define deflateEnd z_deflateEnd +# define deflateInit2_ z_deflateInit2_ +# define deflateInit_ z_deflateInit_ +# define deflateParams z_deflateParams +# define deflatePending z_deflatePending +# define deflatePrime z_deflatePrime +# define deflateReset z_deflateReset +# define deflateResetKeep z_deflateResetKeep +# define deflateSetDictionary z_deflateSetDictionary +# define deflateSetHeader z_deflateSetHeader +# define deflateTune z_deflateTune +# define deflate_copyright z_deflate_copyright +# define get_crc_table z_get_crc_table +# ifndef Z_SOLO +# define gz_error z_gz_error +# define gz_intmax z_gz_intmax +# define gz_strwinerror z_gz_strwinerror +# define gzbuffer z_gzbuffer +# define gzclearerr z_gzclearerr +# define gzclose z_gzclose +# define gzclose_r z_gzclose_r +# define gzclose_w z_gzclose_w +# define gzdirect z_gzdirect +# define gzdopen z_gzdopen +# define gzeof z_gzeof +# define gzerror z_gzerror +# define gzflush z_gzflush +# define gzgetc z_gzgetc +# define gzgetc_ z_gzgetc_ +# define gzgets z_gzgets +# define gzoffset z_gzoffset +# define gzoffset64 z_gzoffset64 +# define gzopen z_gzopen +# define gzopen64 z_gzopen64 +# ifdef _WIN32 +# define gzopen_w z_gzopen_w +# endif +# define gzprintf z_gzprintf +# define gzvprintf z_gzvprintf +# define gzputc z_gzputc +# define gzputs z_gzputs +# define gzread z_gzread +# define gzrewind z_gzrewind +# define gzseek z_gzseek 
+# define gzseek64 z_gzseek64 +# define gzsetparams z_gzsetparams +# define gztell z_gztell +# define gztell64 z_gztell64 +# define gzungetc z_gzungetc +# define gzwrite z_gzwrite +# endif +# define inflate z_inflate +# define inflateBack z_inflateBack +# define inflateBackEnd z_inflateBackEnd +# define inflateBackInit_ z_inflateBackInit_ +# define inflateCopy z_inflateCopy +# define inflateEnd z_inflateEnd +# define inflateGetHeader z_inflateGetHeader +# define inflateInit2_ z_inflateInit2_ +# define inflateInit_ z_inflateInit_ +# define inflateMark z_inflateMark +# define inflatePrime z_inflatePrime +# define inflateReset z_inflateReset +# define inflateReset2 z_inflateReset2 +# define inflateSetDictionary z_inflateSetDictionary +# define inflateGetDictionary z_inflateGetDictionary +# define inflateSync z_inflateSync +# define inflateSyncPoint z_inflateSyncPoint +# define inflateUndermine z_inflateUndermine +# define inflateResetKeep z_inflateResetKeep +# define inflate_copyright z_inflate_copyright +# define inflate_fast z_inflate_fast +# define inflate_table z_inflate_table +# ifndef Z_SOLO +# define uncompress z_uncompress +# endif +# define zError z_zError +# ifndef Z_SOLO +# define zcalloc z_zcalloc +# define zcfree z_zcfree +# endif +# define zlibCompileFlags z_zlibCompileFlags +# define zlibVersion z_zlibVersion + +/* all zlib typedefs in zlib.h and zconf.h */ +# define Byte z_Byte +# define Bytef z_Bytef +# define alloc_func z_alloc_func +# define charf z_charf +# define free_func z_free_func +# ifndef Z_SOLO +# define gzFile z_gzFile +# endif +# define gz_header z_gz_header +# define gz_headerp z_gz_headerp +# define in_func z_in_func +# define intf z_intf +# define out_func z_out_func +# define uInt z_uInt +# define uIntf z_uIntf +# define uLong z_uLong +# define uLongf z_uLongf +# define voidp z_voidp +# define voidpc z_voidpc +# define voidpf z_voidpf + +/* all zlib structs in zlib.h and zconf.h */ +# define gz_header_s z_gz_header_s +# define internal_state z_internal_state + +#endif + +#if defined(__MSDOS__) && !defined(MSDOS) +# define MSDOS +#endif +#if (defined(OS_2) || defined(__OS2__)) && !defined(OS2) +# define OS2 +#endif +#if defined(_WINDOWS) && !defined(WINDOWS) +# define WINDOWS +#endif +#if defined(_WIN32) || defined(_WIN32_WCE) || defined(__WIN32__) +# ifndef WIN32 +# define WIN32 +# endif +#endif +#if (defined(MSDOS) || defined(OS2) || defined(WINDOWS)) && !defined(WIN32) +# if !defined(__GNUC__) && !defined(__FLAT__) && !defined(__386__) +# ifndef SYS16BIT +# define SYS16BIT +# endif +# endif +#endif + +/* + * Compile with -DMAXSEG_64K if the alloc function cannot allocate more + * than 64k bytes at a time (needed on systems with 16-bit int). + */ +#ifdef SYS16BIT +# define MAXSEG_64K +#endif +#ifdef MSDOS +# define UNALIGNED_OK +#endif + +#ifdef __STDC_VERSION__ +# ifndef STDC +# define STDC +# endif +# if __STDC_VERSION__ >= 199901L +# ifndef STDC99 +# define STDC99 +# endif +# endif +#endif +#if !defined(STDC) && (defined(__STDC__) || defined(__cplusplus)) +# define STDC +#endif +#if !defined(STDC) && (defined(__GNUC__) || defined(__BORLANDC__)) +# define STDC +#endif +#if !defined(STDC) && (defined(MSDOS) || defined(WINDOWS) || defined(WIN32)) +# define STDC +#endif +#if !defined(STDC) && (defined(OS2) || defined(__HOS_AIX__)) +# define STDC +#endif + +#if defined(__OS400__) && !defined(STDC) /* iSeries (formerly AS/400). 
*/ +# define STDC +#endif + +#ifndef STDC +# ifndef const /* cannot use !defined(STDC) && !defined(const) on Mac */ +# define const /* note: need a more gentle solution here */ +# endif +#endif + +#if defined(ZLIB_CONST) && !defined(z_const) +# define z_const const +#else +# define z_const +#endif + +/* Some Mac compilers merge all .h files incorrectly: */ +#if defined(__MWERKS__)||defined(applec)||defined(THINK_C)||defined(__SC__) +# define NO_DUMMY_DECL +#endif + +/* Maximum value for memLevel in deflateInit2 */ +#ifndef MAX_MEM_LEVEL +# ifdef MAXSEG_64K +# define MAX_MEM_LEVEL 8 +# else +# define MAX_MEM_LEVEL 9 +# endif +#endif + +/* Maximum value for windowBits in deflateInit2 and inflateInit2. + * WARNING: reducing MAX_WBITS makes minigzip unable to extract .gz files + * created by gzip. (Files created by minigzip can still be extracted by + * gzip.) + */ +#ifndef MAX_WBITS +# define MAX_WBITS 15 /* 32K LZ77 window */ +#endif + +/* The memory requirements for deflate are (in bytes): + (1 << (windowBits+2)) + (1 << (memLevel+9)) + that is: 128K for windowBits=15 + 128K for memLevel = 8 (default values) + plus a few kilobytes for small objects. For example, if you want to reduce + the default memory requirements from 256K to 128K, compile with + make CFLAGS="-O -DMAX_WBITS=14 -DMAX_MEM_LEVEL=7" + Of course this will generally degrade compression (there's no free lunch). + + The memory requirements for inflate are (in bytes) 1 << windowBits + that is, 32K for windowBits=15 (default value) plus a few kilobytes + for small objects. +*/ + + /* Type declarations */ + +#ifndef OF /* function prototypes */ +# ifdef STDC +# define OF(args) args +# else +# define OF(args) () +# endif +#endif + +#ifndef Z_ARG /* function prototypes for stdarg */ +# if defined(STDC) || defined(Z_HAVE_STDARG_H) +# define Z_ARG(args) args +# else +# define Z_ARG(args) () +# endif +#endif + +/* The following definitions for FAR are needed only for MSDOS mixed + * model programming (small or medium model with some far allocations). + * This was tested only with MSC; for other MSDOS compilers you may have + * to define NO_MEMCPY in zutil.h. If you don't need the mixed model, + * just define FAR to be empty. + */ +#ifdef SYS16BIT +# if defined(M_I86SM) || defined(M_I86MM) + /* MSC small or medium model */ +# define SMALL_MEDIUM +# ifdef _MSC_VER +# define FAR _far +# else +# define FAR far +# endif +# endif +# if (defined(__SMALL__) || defined(__MEDIUM__)) + /* Turbo C small or medium model */ +# define SMALL_MEDIUM +# ifdef __BORLANDC__ +# define FAR _far +# else +# define FAR far +# endif +# endif +#endif + +#if defined(WINDOWS) || defined(WIN32) + /* If building or using zlib as a DLL, define ZLIB_DLL. + * This is not mandatory, but it offers a little performance increase. + */ +# ifdef ZLIB_DLL +# if defined(WIN32) && (!defined(__BORLANDC__) || (__BORLANDC__ >= 0x500)) +# ifdef ZLIB_INTERNAL +# define ZEXTERN extern __declspec(dllexport) +# else +# define ZEXTERN extern __declspec(dllimport) +# endif +# endif +# endif /* ZLIB_DLL */ + /* If building or using zlib with the WINAPI/WINAPIV calling convention, + * define ZLIB_WINAPI. + * Caution: the standard ZLIB1.DLL is NOT compiled using ZLIB_WINAPI. + */ +# ifdef ZLIB_WINAPI +# ifdef FAR +# undef FAR +# endif +# include + /* No need for _export, use ZLIB.DEF instead. */ + /* For complete Windows compatibility, use WINAPI, not __stdcall. 
*/ +# define ZEXPORT WINAPI +# ifdef WIN32 +# define ZEXPORTVA WINAPIV +# else +# define ZEXPORTVA FAR CDECL +# endif +# endif +#endif + +#if defined (__BEOS__) +# ifdef ZLIB_DLL +# ifdef ZLIB_INTERNAL +# define ZEXPORT __declspec(dllexport) +# define ZEXPORTVA __declspec(dllexport) +# else +# define ZEXPORT __declspec(dllimport) +# define ZEXPORTVA __declspec(dllimport) +# endif +# endif +#endif + +#ifndef ZEXTERN +# define ZEXTERN extern +#endif +#ifndef ZEXPORT +# define ZEXPORT +#endif +#ifndef ZEXPORTVA +# define ZEXPORTVA +#endif + +#ifndef FAR +# define FAR +#endif + +#if !defined(__MACTYPES__) +typedef unsigned char Byte; /* 8 bits */ +#endif +typedef unsigned int uInt; /* 16 bits or more */ +typedef unsigned long uLong; /* 32 bits or more */ + +#ifdef SMALL_MEDIUM + /* Borland C/C++ and some old MSC versions ignore FAR inside typedef */ +# define Bytef Byte FAR +#else + typedef Byte FAR Bytef; +#endif +typedef char FAR charf; +typedef int FAR intf; +typedef uInt FAR uIntf; +typedef uLong FAR uLongf; + +#ifdef STDC + typedef void const *voidpc; + typedef void FAR *voidpf; + typedef void *voidp; +#else + typedef Byte const *voidpc; + typedef Byte FAR *voidpf; + typedef Byte *voidp; +#endif + +#if !defined(Z_U4) && !defined(Z_SOLO) && defined(STDC) +# include <limits.h> +# if (UINT_MAX == 0xffffffffUL) +# define Z_U4 unsigned +# elif (ULONG_MAX == 0xffffffffUL) +# define Z_U4 unsigned long +# elif (USHRT_MAX == 0xffffffffUL) +# define Z_U4 unsigned short +# endif +#endif + +#ifdef Z_U4 + typedef Z_U4 z_crc_t; +#else + typedef unsigned long z_crc_t; +#endif + +#ifdef HAVE_UNISTD_H /* may be set to #if 1 by ./configure */ +# define Z_HAVE_UNISTD_H +#endif + +#ifdef HAVE_STDARG_H /* may be set to #if 1 by ./configure */ +# define Z_HAVE_STDARG_H +#endif + +#ifdef STDC +# ifndef Z_SOLO +# include <sys/types.h> /* for off_t */ +# endif +#endif + +#if defined(STDC) || defined(Z_HAVE_STDARG_H) +# ifndef Z_SOLO +# include <stdarg.h> /* for va_list */ +# endif +#endif + +#ifdef _WIN32 +# ifndef Z_SOLO +# include <stddef.h> /* for wchar_t */ +# endif +#endif + +/* a little trick to accommodate both "#define _LARGEFILE64_SOURCE" and + * "#define _LARGEFILE64_SOURCE 1" as requesting 64-bit operations, (even + * though the former does not conform to the LFS document), but considering + * both "#undef _LARGEFILE64_SOURCE" and "#define _LARGEFILE64_SOURCE 0" as + * equivalently requesting no 64-bit operations + */ +#if defined(_LARGEFILE64_SOURCE) && -_LARGEFILE64_SOURCE - -1 == 1 +# undef _LARGEFILE64_SOURCE +#endif + +#if defined(__WATCOMC__) && !defined(Z_HAVE_UNISTD_H) +# define Z_HAVE_UNISTD_H +#endif +#ifndef Z_SOLO +# if defined(Z_HAVE_UNISTD_H) || defined(_LARGEFILE64_SOURCE) +# include <unistd.h> /* for SEEK_*, off_t, and _LFS64_LARGEFILE */ +# ifdef VMS +# include <unixio.h> /* for off_t */ +# endif +# ifndef z_off_t +# define z_off_t off_t +# endif +# endif +#endif + +#if defined(_LFS64_LARGEFILE) && _LFS64_LARGEFILE-0 +# define Z_LFS64 +#endif + +#if defined(_LARGEFILE64_SOURCE) && defined(Z_LFS64) +# define Z_LARGE64 +#endif + +#if defined(_FILE_OFFSET_BITS) && _FILE_OFFSET_BITS-0 == 64 && defined(Z_LFS64) +# define Z_WANT64 +#endif + +#if !defined(SEEK_SET) && !defined(Z_SOLO) +# define SEEK_SET 0 /* Seek from beginning of file. */ +# define SEEK_CUR 1 /* Seek from current position.
*/ +# define SEEK_END 2 /* Set file pointer to EOF plus "offset" */ +#endif + +#ifndef z_off_t +# define z_off_t long +#endif + +#if !defined(_WIN32) && defined(Z_LARGE64) +# define z_off64_t off64_t +#else +# if defined(_WIN32) && !defined(__GNUC__) && !defined(Z_SOLO) +# define z_off64_t __int64 +# else +# define z_off64_t z_off_t +# endif +#endif + +/* MVS linker does not support external names larger than 8 bytes */ +#if defined(__MVS__) + #pragma map(deflateInit_,"DEIN") + #pragma map(deflateInit2_,"DEIN2") + #pragma map(deflateEnd,"DEEND") + #pragma map(deflateBound,"DEBND") + #pragma map(inflateInit_,"ININ") + #pragma map(inflateInit2_,"ININ2") + #pragma map(inflateEnd,"INEND") + #pragma map(inflateSync,"INSY") + #pragma map(inflateSetDictionary,"INSEDI") + #pragma map(compressBound,"CMBND") + #pragma map(inflate_table,"INTABL") + #pragma map(inflate_fast,"INFA") + #pragma map(inflate_copyright,"INCOPY") +#endif + +#endif /* ZCONF_H */ ADDED compat/zlib/zconf.h.cmakein Index: compat/zlib/zconf.h.cmakein ================================================================== --- compat/zlib/zconf.h.cmakein +++ compat/zlib/zconf.h.cmakein @@ -0,0 +1,513 @@ +/* zconf.h -- configuration of the zlib compression library + * Copyright (C) 1995-2013 Jean-loup Gailly. + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* @(#) $Id$ */ + +#ifndef ZCONF_H +#define ZCONF_H +#cmakedefine Z_PREFIX +#cmakedefine Z_HAVE_UNISTD_H + +/* + * If you *really* need a unique prefix for all types and library functions, + * compile with -DZ_PREFIX. The "standard" zlib should be compiled without it. + * Even better than compiling with -DZ_PREFIX would be to use configure to set + * this permanently in zconf.h using "./configure --zprefix". + */ +#ifdef Z_PREFIX /* may be set to #if 1 by ./configure */ +# define Z_PREFIX_SET + +/* all linked symbols */ +# define _dist_code z__dist_code +# define _length_code z__length_code +# define _tr_align z__tr_align +# define _tr_flush_bits z__tr_flush_bits +# define _tr_flush_block z__tr_flush_block +# define _tr_init z__tr_init +# define _tr_stored_block z__tr_stored_block +# define _tr_tally z__tr_tally +# define adler32 z_adler32 +# define adler32_combine z_adler32_combine +# define adler32_combine64 z_adler32_combine64 +# ifndef Z_SOLO +# define compress z_compress +# define compress2 z_compress2 +# define compressBound z_compressBound +# endif +# define crc32 z_crc32 +# define crc32_combine z_crc32_combine +# define crc32_combine64 z_crc32_combine64 +# define deflate z_deflate +# define deflateBound z_deflateBound +# define deflateCopy z_deflateCopy +# define deflateEnd z_deflateEnd +# define deflateInit2_ z_deflateInit2_ +# define deflateInit_ z_deflateInit_ +# define deflateParams z_deflateParams +# define deflatePending z_deflatePending +# define deflatePrime z_deflatePrime +# define deflateReset z_deflateReset +# define deflateResetKeep z_deflateResetKeep +# define deflateSetDictionary z_deflateSetDictionary +# define deflateSetHeader z_deflateSetHeader +# define deflateTune z_deflateTune +# define deflate_copyright z_deflate_copyright +# define get_crc_table z_get_crc_table +# ifndef Z_SOLO +# define gz_error z_gz_error +# define gz_intmax z_gz_intmax +# define gz_strwinerror z_gz_strwinerror +# define gzbuffer z_gzbuffer +# define gzclearerr z_gzclearerr +# define gzclose z_gzclose +# define gzclose_r z_gzclose_r +# define gzclose_w z_gzclose_w +# define gzdirect z_gzdirect +# define gzdopen z_gzdopen +# define gzeof z_gzeof +# 
define gzerror z_gzerror +# define gzflush z_gzflush +# define gzgetc z_gzgetc +# define gzgetc_ z_gzgetc_ +# define gzgets z_gzgets +# define gzoffset z_gzoffset +# define gzoffset64 z_gzoffset64 +# define gzopen z_gzopen +# define gzopen64 z_gzopen64 +# ifdef _WIN32 +# define gzopen_w z_gzopen_w +# endif +# define gzprintf z_gzprintf +# define gzvprintf z_gzvprintf +# define gzputc z_gzputc +# define gzputs z_gzputs +# define gzread z_gzread +# define gzrewind z_gzrewind +# define gzseek z_gzseek +# define gzseek64 z_gzseek64 +# define gzsetparams z_gzsetparams +# define gztell z_gztell +# define gztell64 z_gztell64 +# define gzungetc z_gzungetc +# define gzwrite z_gzwrite +# endif +# define inflate z_inflate +# define inflateBack z_inflateBack +# define inflateBackEnd z_inflateBackEnd +# define inflateBackInit_ z_inflateBackInit_ +# define inflateCopy z_inflateCopy +# define inflateEnd z_inflateEnd +# define inflateGetHeader z_inflateGetHeader +# define inflateInit2_ z_inflateInit2_ +# define inflateInit_ z_inflateInit_ +# define inflateMark z_inflateMark +# define inflatePrime z_inflatePrime +# define inflateReset z_inflateReset +# define inflateReset2 z_inflateReset2 +# define inflateSetDictionary z_inflateSetDictionary +# define inflateGetDictionary z_inflateGetDictionary +# define inflateSync z_inflateSync +# define inflateSyncPoint z_inflateSyncPoint +# define inflateUndermine z_inflateUndermine +# define inflateResetKeep z_inflateResetKeep +# define inflate_copyright z_inflate_copyright +# define inflate_fast z_inflate_fast +# define inflate_table z_inflate_table +# ifndef Z_SOLO +# define uncompress z_uncompress +# endif +# define zError z_zError +# ifndef Z_SOLO +# define zcalloc z_zcalloc +# define zcfree z_zcfree +# endif +# define zlibCompileFlags z_zlibCompileFlags +# define zlibVersion z_zlibVersion + +/* all zlib typedefs in zlib.h and zconf.h */ +# define Byte z_Byte +# define Bytef z_Bytef +# define alloc_func z_alloc_func +# define charf z_charf +# define free_func z_free_func +# ifndef Z_SOLO +# define gzFile z_gzFile +# endif +# define gz_header z_gz_header +# define gz_headerp z_gz_headerp +# define in_func z_in_func +# define intf z_intf +# define out_func z_out_func +# define uInt z_uInt +# define uIntf z_uIntf +# define uLong z_uLong +# define uLongf z_uLongf +# define voidp z_voidp +# define voidpc z_voidpc +# define voidpf z_voidpf + +/* all zlib structs in zlib.h and zconf.h */ +# define gz_header_s z_gz_header_s +# define internal_state z_internal_state + +#endif + +#if defined(__MSDOS__) && !defined(MSDOS) +# define MSDOS +#endif +#if (defined(OS_2) || defined(__OS2__)) && !defined(OS2) +# define OS2 +#endif +#if defined(_WINDOWS) && !defined(WINDOWS) +# define WINDOWS +#endif +#if defined(_WIN32) || defined(_WIN32_WCE) || defined(__WIN32__) +# ifndef WIN32 +# define WIN32 +# endif +#endif +#if (defined(MSDOS) || defined(OS2) || defined(WINDOWS)) && !defined(WIN32) +# if !defined(__GNUC__) && !defined(__FLAT__) && !defined(__386__) +# ifndef SYS16BIT +# define SYS16BIT +# endif +# endif +#endif + +/* + * Compile with -DMAXSEG_64K if the alloc function cannot allocate more + * than 64k bytes at a time (needed on systems with 16-bit int). 
+ */ +#ifdef SYS16BIT +# define MAXSEG_64K +#endif +#ifdef MSDOS +# define UNALIGNED_OK +#endif + +#ifdef __STDC_VERSION__ +# ifndef STDC +# define STDC +# endif +# if __STDC_VERSION__ >= 199901L +# ifndef STDC99 +# define STDC99 +# endif +# endif +#endif +#if !defined(STDC) && (defined(__STDC__) || defined(__cplusplus)) +# define STDC +#endif +#if !defined(STDC) && (defined(__GNUC__) || defined(__BORLANDC__)) +# define STDC +#endif +#if !defined(STDC) && (defined(MSDOS) || defined(WINDOWS) || defined(WIN32)) +# define STDC +#endif +#if !defined(STDC) && (defined(OS2) || defined(__HOS_AIX__)) +# define STDC +#endif + +#if defined(__OS400__) && !defined(STDC) /* iSeries (formerly AS/400). */ +# define STDC +#endif + +#ifndef STDC +# ifndef const /* cannot use !defined(STDC) && !defined(const) on Mac */ +# define const /* note: need a more gentle solution here */ +# endif +#endif + +#if defined(ZLIB_CONST) && !defined(z_const) +# define z_const const +#else +# define z_const +#endif + +/* Some Mac compilers merge all .h files incorrectly: */ +#if defined(__MWERKS__)||defined(applec)||defined(THINK_C)||defined(__SC__) +# define NO_DUMMY_DECL +#endif + +/* Maximum value for memLevel in deflateInit2 */ +#ifndef MAX_MEM_LEVEL +# ifdef MAXSEG_64K +# define MAX_MEM_LEVEL 8 +# else +# define MAX_MEM_LEVEL 9 +# endif +#endif + +/* Maximum value for windowBits in deflateInit2 and inflateInit2. + * WARNING: reducing MAX_WBITS makes minigzip unable to extract .gz files + * created by gzip. (Files created by minigzip can still be extracted by + * gzip.) + */ +#ifndef MAX_WBITS +# define MAX_WBITS 15 /* 32K LZ77 window */ +#endif + +/* The memory requirements for deflate are (in bytes): + (1 << (windowBits+2)) + (1 << (memLevel+9)) + that is: 128K for windowBits=15 + 128K for memLevel = 8 (default values) + plus a few kilobytes for small objects. For example, if you want to reduce + the default memory requirements from 256K to 128K, compile with + make CFLAGS="-O -DMAX_WBITS=14 -DMAX_MEM_LEVEL=7" + Of course this will generally degrade compression (there's no free lunch). + + The memory requirements for inflate are (in bytes) 1 << windowBits + that is, 32K for windowBits=15 (default value) plus a few kilobytes + for small objects. +*/ + + /* Type declarations */ + +#ifndef OF /* function prototypes */ +# ifdef STDC +# define OF(args) args +# else +# define OF(args) () +# endif +#endif + +#ifndef Z_ARG /* function prototypes for stdarg */ +# if defined(STDC) || defined(Z_HAVE_STDARG_H) +# define Z_ARG(args) args +# else +# define Z_ARG(args) () +# endif +#endif + +/* The following definitions for FAR are needed only for MSDOS mixed + * model programming (small or medium model with some far allocations). + * This was tested only with MSC; for other MSDOS compilers you may have + * to define NO_MEMCPY in zutil.h. If you don't need the mixed model, + * just define FAR to be empty. + */ +#ifdef SYS16BIT +# if defined(M_I86SM) || defined(M_I86MM) + /* MSC small or medium model */ +# define SMALL_MEDIUM +# ifdef _MSC_VER +# define FAR _far +# else +# define FAR far +# endif +# endif +# if (defined(__SMALL__) || defined(__MEDIUM__)) + /* Turbo C small or medium model */ +# define SMALL_MEDIUM +# ifdef __BORLANDC__ +# define FAR _far +# else +# define FAR far +# endif +# endif +#endif + +#if defined(WINDOWS) || defined(WIN32) + /* If building or using zlib as a DLL, define ZLIB_DLL. + * This is not mandatory, but it offers a little performance increase. 
+ */
+# ifdef ZLIB_DLL
+# if defined(WIN32) && (!defined(__BORLANDC__) || (__BORLANDC__ >= 0x500))
+# ifdef ZLIB_INTERNAL
+# define ZEXTERN extern __declspec(dllexport)
+# else
+# define ZEXTERN extern __declspec(dllimport)
+# endif
+# endif
+# endif /* ZLIB_DLL */
+ /* If building or using zlib with the WINAPI/WINAPIV calling convention,
+ * define ZLIB_WINAPI.
+ * Caution: the standard ZLIB1.DLL is NOT compiled using ZLIB_WINAPI.
+ */
+# ifdef ZLIB_WINAPI
+# ifdef FAR
+# undef FAR
+# endif
+# include <windef.h>
+ /* No need for _export, use ZLIB.DEF instead. */
+ /* For complete Windows compatibility, use WINAPI, not __stdcall. */
+# define ZEXPORT WINAPI
+# ifdef WIN32
+# define ZEXPORTVA WINAPIV
+# else
+# define ZEXPORTVA FAR CDECL
+# endif
+# endif
+#endif
+
+#if defined (__BEOS__)
+# ifdef ZLIB_DLL
+# ifdef ZLIB_INTERNAL
+# define ZEXPORT __declspec(dllexport)
+# define ZEXPORTVA __declspec(dllexport)
+# else
+# define ZEXPORT __declspec(dllimport)
+# define ZEXPORTVA __declspec(dllimport)
+# endif
+# endif
+#endif
+
+#ifndef ZEXTERN
+# define ZEXTERN extern
+#endif
+#ifndef ZEXPORT
+# define ZEXPORT
+#endif
+#ifndef ZEXPORTVA
+# define ZEXPORTVA
+#endif
+
+#ifndef FAR
+# define FAR
+#endif
+
+#if !defined(__MACTYPES__)
+typedef unsigned char Byte; /* 8 bits */
+#endif
+typedef unsigned int uInt; /* 16 bits or more */
+typedef unsigned long uLong; /* 32 bits or more */
+
+#ifdef SMALL_MEDIUM
+ /* Borland C/C++ and some old MSC versions ignore FAR inside typedef */
+# define Bytef Byte FAR
+#else
+ typedef Byte FAR Bytef;
+#endif
+typedef char FAR charf;
+typedef int FAR intf;
+typedef uInt FAR uIntf;
+typedef uLong FAR uLongf;
+
+#ifdef STDC
+ typedef void const *voidpc;
+ typedef void FAR *voidpf;
+ typedef void *voidp;
+#else
+ typedef Byte const *voidpc;
+ typedef Byte FAR *voidpf;
+ typedef Byte *voidp;
+#endif
+
+#if !defined(Z_U4) && !defined(Z_SOLO) && defined(STDC)
+# include <limits.h>
+# if (UINT_MAX == 0xffffffffUL)
+# define Z_U4 unsigned
+# elif (ULONG_MAX == 0xffffffffUL)
+# define Z_U4 unsigned long
+# elif (USHRT_MAX == 0xffffffffUL)
+# define Z_U4 unsigned short
+# endif
+#endif
+
+#ifdef Z_U4
+ typedef Z_U4 z_crc_t;
+#else
+ typedef unsigned long z_crc_t;
+#endif
+
+#ifdef HAVE_UNISTD_H /* may be set to #if 1 by ./configure */
+# define Z_HAVE_UNISTD_H
+#endif
+
+#ifdef HAVE_STDARG_H /* may be set to #if 1 by ./configure */
+# define Z_HAVE_STDARG_H
+#endif
+
+#ifdef STDC
+# ifndef Z_SOLO
+# include <sys/types.h> /* for off_t */
+# endif
+#endif
+
+#if defined(STDC) || defined(Z_HAVE_STDARG_H)
+# ifndef Z_SOLO
+# include <stdarg.h> /* for va_list */
+# endif
+#endif
+
+#ifdef _WIN32
+# ifndef Z_SOLO
+# include <stddef.h> /* for wchar_t */
+# endif
+#endif
+
+/* a little trick to accommodate both "#define _LARGEFILE64_SOURCE" and
+ * "#define _LARGEFILE64_SOURCE 1" as requesting 64-bit operations, (even
+ * though the former does not conform to the LFS document), but considering
+ * both "#undef _LARGEFILE64_SOURCE" and "#define _LARGEFILE64_SOURCE 0" as
+ * equivalently requesting no 64-bit operations
+ */
+#if defined(_LARGEFILE64_SOURCE) && -_LARGEFILE64_SOURCE - -1 == 1
+# undef _LARGEFILE64_SOURCE
+#endif
+
+#if defined(__WATCOMC__) && !defined(Z_HAVE_UNISTD_H)
+# define Z_HAVE_UNISTD_H
+#endif
+#ifndef Z_SOLO
+# if defined(Z_HAVE_UNISTD_H) || defined(_LARGEFILE64_SOURCE)
+# include <unistd.h> /* for SEEK_*, off_t, and _LFS64_LARGEFILE */
+# ifdef VMS
+# include <unixio.h> /* for off_t */
+# endif
+# ifndef z_off_t
+# define z_off_t off_t
+# endif
+# endif
+#endif
+
+#if defined(_LFS64_LARGEFILE) && _LFS64_LARGEFILE-0
+# define Z_LFS64
+#endif + +#if defined(_LARGEFILE64_SOURCE) && defined(Z_LFS64) +# define Z_LARGE64 +#endif + +#if defined(_FILE_OFFSET_BITS) && _FILE_OFFSET_BITS-0 == 64 && defined(Z_LFS64) +# define Z_WANT64 +#endif + +#if !defined(SEEK_SET) && !defined(Z_SOLO) +# define SEEK_SET 0 /* Seek from beginning of file. */ +# define SEEK_CUR 1 /* Seek from current position. */ +# define SEEK_END 2 /* Set file pointer to EOF plus "offset" */ +#endif + +#ifndef z_off_t +# define z_off_t long +#endif + +#if !defined(_WIN32) && defined(Z_LARGE64) +# define z_off64_t off64_t +#else +# if defined(_WIN32) && !defined(__GNUC__) && !defined(Z_SOLO) +# define z_off64_t __int64 +# else +# define z_off64_t z_off_t +# endif +#endif + +/* MVS linker does not support external names larger than 8 bytes */ +#if defined(__MVS__) + #pragma map(deflateInit_,"DEIN") + #pragma map(deflateInit2_,"DEIN2") + #pragma map(deflateEnd,"DEEND") + #pragma map(deflateBound,"DEBND") + #pragma map(inflateInit_,"ININ") + #pragma map(inflateInit2_,"ININ2") + #pragma map(inflateEnd,"INEND") + #pragma map(inflateSync,"INSY") + #pragma map(inflateSetDictionary,"INSEDI") + #pragma map(compressBound,"CMBND") + #pragma map(inflate_table,"INTABL") + #pragma map(inflate_fast,"INFA") + #pragma map(inflate_copyright,"INCOPY") +#endif + +#endif /* ZCONF_H */ ADDED compat/zlib/zconf.h.in Index: compat/zlib/zconf.h.in ================================================================== --- compat/zlib/zconf.h.in +++ compat/zlib/zconf.h.in @@ -0,0 +1,511 @@ +/* zconf.h -- configuration of the zlib compression library + * Copyright (C) 1995-2013 Jean-loup Gailly. + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* @(#) $Id$ */ + +#ifndef ZCONF_H +#define ZCONF_H + +/* + * If you *really* need a unique prefix for all types and library functions, + * compile with -DZ_PREFIX. The "standard" zlib should be compiled without it. + * Even better than compiling with -DZ_PREFIX would be to use configure to set + * this permanently in zconf.h using "./configure --zprefix". 
+ */ +#ifdef Z_PREFIX /* may be set to #if 1 by ./configure */ +# define Z_PREFIX_SET + +/* all linked symbols */ +# define _dist_code z__dist_code +# define _length_code z__length_code +# define _tr_align z__tr_align +# define _tr_flush_bits z__tr_flush_bits +# define _tr_flush_block z__tr_flush_block +# define _tr_init z__tr_init +# define _tr_stored_block z__tr_stored_block +# define _tr_tally z__tr_tally +# define adler32 z_adler32 +# define adler32_combine z_adler32_combine +# define adler32_combine64 z_adler32_combine64 +# ifndef Z_SOLO +# define compress z_compress +# define compress2 z_compress2 +# define compressBound z_compressBound +# endif +# define crc32 z_crc32 +# define crc32_combine z_crc32_combine +# define crc32_combine64 z_crc32_combine64 +# define deflate z_deflate +# define deflateBound z_deflateBound +# define deflateCopy z_deflateCopy +# define deflateEnd z_deflateEnd +# define deflateInit2_ z_deflateInit2_ +# define deflateInit_ z_deflateInit_ +# define deflateParams z_deflateParams +# define deflatePending z_deflatePending +# define deflatePrime z_deflatePrime +# define deflateReset z_deflateReset +# define deflateResetKeep z_deflateResetKeep +# define deflateSetDictionary z_deflateSetDictionary +# define deflateSetHeader z_deflateSetHeader +# define deflateTune z_deflateTune +# define deflate_copyright z_deflate_copyright +# define get_crc_table z_get_crc_table +# ifndef Z_SOLO +# define gz_error z_gz_error +# define gz_intmax z_gz_intmax +# define gz_strwinerror z_gz_strwinerror +# define gzbuffer z_gzbuffer +# define gzclearerr z_gzclearerr +# define gzclose z_gzclose +# define gzclose_r z_gzclose_r +# define gzclose_w z_gzclose_w +# define gzdirect z_gzdirect +# define gzdopen z_gzdopen +# define gzeof z_gzeof +# define gzerror z_gzerror +# define gzflush z_gzflush +# define gzgetc z_gzgetc +# define gzgetc_ z_gzgetc_ +# define gzgets z_gzgets +# define gzoffset z_gzoffset +# define gzoffset64 z_gzoffset64 +# define gzopen z_gzopen +# define gzopen64 z_gzopen64 +# ifdef _WIN32 +# define gzopen_w z_gzopen_w +# endif +# define gzprintf z_gzprintf +# define gzvprintf z_gzvprintf +# define gzputc z_gzputc +# define gzputs z_gzputs +# define gzread z_gzread +# define gzrewind z_gzrewind +# define gzseek z_gzseek +# define gzseek64 z_gzseek64 +# define gzsetparams z_gzsetparams +# define gztell z_gztell +# define gztell64 z_gztell64 +# define gzungetc z_gzungetc +# define gzwrite z_gzwrite +# endif +# define inflate z_inflate +# define inflateBack z_inflateBack +# define inflateBackEnd z_inflateBackEnd +# define inflateBackInit_ z_inflateBackInit_ +# define inflateCopy z_inflateCopy +# define inflateEnd z_inflateEnd +# define inflateGetHeader z_inflateGetHeader +# define inflateInit2_ z_inflateInit2_ +# define inflateInit_ z_inflateInit_ +# define inflateMark z_inflateMark +# define inflatePrime z_inflatePrime +# define inflateReset z_inflateReset +# define inflateReset2 z_inflateReset2 +# define inflateSetDictionary z_inflateSetDictionary +# define inflateGetDictionary z_inflateGetDictionary +# define inflateSync z_inflateSync +# define inflateSyncPoint z_inflateSyncPoint +# define inflateUndermine z_inflateUndermine +# define inflateResetKeep z_inflateResetKeep +# define inflate_copyright z_inflate_copyright +# define inflate_fast z_inflate_fast +# define inflate_table z_inflate_table +# ifndef Z_SOLO +# define uncompress z_uncompress +# endif +# define zError z_zError +# ifndef Z_SOLO +# define zcalloc z_zcalloc +# define zcfree z_zcfree +# endif +# define 
zlibCompileFlags z_zlibCompileFlags +# define zlibVersion z_zlibVersion + +/* all zlib typedefs in zlib.h and zconf.h */ +# define Byte z_Byte +# define Bytef z_Bytef +# define alloc_func z_alloc_func +# define charf z_charf +# define free_func z_free_func +# ifndef Z_SOLO +# define gzFile z_gzFile +# endif +# define gz_header z_gz_header +# define gz_headerp z_gz_headerp +# define in_func z_in_func +# define intf z_intf +# define out_func z_out_func +# define uInt z_uInt +# define uIntf z_uIntf +# define uLong z_uLong +# define uLongf z_uLongf +# define voidp z_voidp +# define voidpc z_voidpc +# define voidpf z_voidpf + +/* all zlib structs in zlib.h and zconf.h */ +# define gz_header_s z_gz_header_s +# define internal_state z_internal_state + +#endif + +#if defined(__MSDOS__) && !defined(MSDOS) +# define MSDOS +#endif +#if (defined(OS_2) || defined(__OS2__)) && !defined(OS2) +# define OS2 +#endif +#if defined(_WINDOWS) && !defined(WINDOWS) +# define WINDOWS +#endif +#if defined(_WIN32) || defined(_WIN32_WCE) || defined(__WIN32__) +# ifndef WIN32 +# define WIN32 +# endif +#endif +#if (defined(MSDOS) || defined(OS2) || defined(WINDOWS)) && !defined(WIN32) +# if !defined(__GNUC__) && !defined(__FLAT__) && !defined(__386__) +# ifndef SYS16BIT +# define SYS16BIT +# endif +# endif +#endif + +/* + * Compile with -DMAXSEG_64K if the alloc function cannot allocate more + * than 64k bytes at a time (needed on systems with 16-bit int). + */ +#ifdef SYS16BIT +# define MAXSEG_64K +#endif +#ifdef MSDOS +# define UNALIGNED_OK +#endif + +#ifdef __STDC_VERSION__ +# ifndef STDC +# define STDC +# endif +# if __STDC_VERSION__ >= 199901L +# ifndef STDC99 +# define STDC99 +# endif +# endif +#endif +#if !defined(STDC) && (defined(__STDC__) || defined(__cplusplus)) +# define STDC +#endif +#if !defined(STDC) && (defined(__GNUC__) || defined(__BORLANDC__)) +# define STDC +#endif +#if !defined(STDC) && (defined(MSDOS) || defined(WINDOWS) || defined(WIN32)) +# define STDC +#endif +#if !defined(STDC) && (defined(OS2) || defined(__HOS_AIX__)) +# define STDC +#endif + +#if defined(__OS400__) && !defined(STDC) /* iSeries (formerly AS/400). */ +# define STDC +#endif + +#ifndef STDC +# ifndef const /* cannot use !defined(STDC) && !defined(const) on Mac */ +# define const /* note: need a more gentle solution here */ +# endif +#endif + +#if defined(ZLIB_CONST) && !defined(z_const) +# define z_const const +#else +# define z_const +#endif + +/* Some Mac compilers merge all .h files incorrectly: */ +#if defined(__MWERKS__)||defined(applec)||defined(THINK_C)||defined(__SC__) +# define NO_DUMMY_DECL +#endif + +/* Maximum value for memLevel in deflateInit2 */ +#ifndef MAX_MEM_LEVEL +# ifdef MAXSEG_64K +# define MAX_MEM_LEVEL 8 +# else +# define MAX_MEM_LEVEL 9 +# endif +#endif + +/* Maximum value for windowBits in deflateInit2 and inflateInit2. + * WARNING: reducing MAX_WBITS makes minigzip unable to extract .gz files + * created by gzip. (Files created by minigzip can still be extracted by + * gzip.) + */ +#ifndef MAX_WBITS +# define MAX_WBITS 15 /* 32K LZ77 window */ +#endif + +/* The memory requirements for deflate are (in bytes): + (1 << (windowBits+2)) + (1 << (memLevel+9)) + that is: 128K for windowBits=15 + 128K for memLevel = 8 (default values) + plus a few kilobytes for small objects. For example, if you want to reduce + the default memory requirements from 256K to 128K, compile with + make CFLAGS="-O -DMAX_WBITS=14 -DMAX_MEM_LEVEL=7" + Of course this will generally degrade compression (there's no free lunch). 
+
+ The memory requirements for inflate are (in bytes) 1 << windowBits
+ that is, 32K for windowBits=15 (default value) plus a few kilobytes
+ for small objects.
+*/
+
+ /* Type declarations */
+
+#ifndef OF /* function prototypes */
+# ifdef STDC
+# define OF(args) args
+# else
+# define OF(args) ()
+# endif
+#endif
+
+#ifndef Z_ARG /* function prototypes for stdarg */
+# if defined(STDC) || defined(Z_HAVE_STDARG_H)
+# define Z_ARG(args) args
+# else
+# define Z_ARG(args) ()
+# endif
+#endif
+
+/* The following definitions for FAR are needed only for MSDOS mixed
+ * model programming (small or medium model with some far allocations).
+ * This was tested only with MSC; for other MSDOS compilers you may have
+ * to define NO_MEMCPY in zutil.h. If you don't need the mixed model,
+ * just define FAR to be empty.
+ */
+#ifdef SYS16BIT
+# if defined(M_I86SM) || defined(M_I86MM)
+ /* MSC small or medium model */
+# define SMALL_MEDIUM
+# ifdef _MSC_VER
+# define FAR _far
+# else
+# define FAR far
+# endif
+# endif
+# if (defined(__SMALL__) || defined(__MEDIUM__))
+ /* Turbo C small or medium model */
+# define SMALL_MEDIUM
+# ifdef __BORLANDC__
+# define FAR _far
+# else
+# define FAR far
+# endif
+# endif
+#endif
+
+#if defined(WINDOWS) || defined(WIN32)
+ /* If building or using zlib as a DLL, define ZLIB_DLL.
+ * This is not mandatory, but it offers a little performance increase.
+ */
+# ifdef ZLIB_DLL
+# if defined(WIN32) && (!defined(__BORLANDC__) || (__BORLANDC__ >= 0x500))
+# ifdef ZLIB_INTERNAL
+# define ZEXTERN extern __declspec(dllexport)
+# else
+# define ZEXTERN extern __declspec(dllimport)
+# endif
+# endif
+# endif /* ZLIB_DLL */
+ /* If building or using zlib with the WINAPI/WINAPIV calling convention,
+ * define ZLIB_WINAPI.
+ * Caution: the standard ZLIB1.DLL is NOT compiled using ZLIB_WINAPI.
+ */
+# ifdef ZLIB_WINAPI
+# ifdef FAR
+# undef FAR
+# endif
+# include <windef.h>
+ /* No need for _export, use ZLIB.DEF instead. */
+ /* For complete Windows compatibility, use WINAPI, not __stdcall. */
+# define ZEXPORT WINAPI
+# ifdef WIN32
+# define ZEXPORTVA WINAPIV
+# else
+# define ZEXPORTVA FAR CDECL
+# endif
+# endif
+#endif
+
+#if defined (__BEOS__)
+# ifdef ZLIB_DLL
+# ifdef ZLIB_INTERNAL
+# define ZEXPORT __declspec(dllexport)
+# define ZEXPORTVA __declspec(dllexport)
+# else
+# define ZEXPORT __declspec(dllimport)
+# define ZEXPORTVA __declspec(dllimport)
+# endif
+# endif
+#endif
+
+#ifndef ZEXTERN
+# define ZEXTERN extern
+#endif
+#ifndef ZEXPORT
+# define ZEXPORT
+#endif
+#ifndef ZEXPORTVA
+# define ZEXPORTVA
+#endif
+
+#ifndef FAR
+# define FAR
+#endif
+
+#if !defined(__MACTYPES__)
+typedef unsigned char Byte; /* 8 bits */
+#endif
+typedef unsigned int uInt; /* 16 bits or more */
+typedef unsigned long uLong; /* 32 bits or more */
+
+#ifdef SMALL_MEDIUM
+ /* Borland C/C++ and some old MSC versions ignore FAR inside typedef */
+# define Bytef Byte FAR
+#else
+ typedef Byte FAR Bytef;
+#endif
+typedef char FAR charf;
+typedef int FAR intf;
+typedef uInt FAR uIntf;
+typedef uLong FAR uLongf;
+
+#ifdef STDC
+ typedef void const *voidpc;
+ typedef void FAR *voidpf;
+ typedef void *voidp;
+#else
+ typedef Byte const *voidpc;
+ typedef Byte FAR *voidpf;
+ typedef Byte *voidp;
+#endif
+
+#if !defined(Z_U4) && !defined(Z_SOLO) && defined(STDC)
+# include <limits.h>
+# if (UINT_MAX == 0xffffffffUL)
+# define Z_U4 unsigned
+# elif (ULONG_MAX == 0xffffffffUL)
+# define Z_U4 unsigned long
+# elif (USHRT_MAX == 0xffffffffUL)
+# define Z_U4 unsigned short
+# endif
+#endif
+
+#ifdef Z_U4
+ typedef Z_U4 z_crc_t;
+#else
+ typedef unsigned long z_crc_t;
+#endif
+
+#ifdef HAVE_UNISTD_H /* may be set to #if 1 by ./configure */
+# define Z_HAVE_UNISTD_H
+#endif
+
+#ifdef HAVE_STDARG_H /* may be set to #if 1 by ./configure */
+# define Z_HAVE_STDARG_H
+#endif
+
+#ifdef STDC
+# ifndef Z_SOLO
+# include <sys/types.h> /* for off_t */
+# endif
+#endif
+
+#if defined(STDC) || defined(Z_HAVE_STDARG_H)
+# ifndef Z_SOLO
+# include <stdarg.h> /* for va_list */
+# endif
+#endif
+
+#ifdef _WIN32
+# ifndef Z_SOLO
+# include <stddef.h> /* for wchar_t */
+# endif
+#endif
+
+/* a little trick to accommodate both "#define _LARGEFILE64_SOURCE" and
+ * "#define _LARGEFILE64_SOURCE 1" as requesting 64-bit operations, (even
+ * though the former does not conform to the LFS document), but considering
+ * both "#undef _LARGEFILE64_SOURCE" and "#define _LARGEFILE64_SOURCE 0" as
+ * equivalently requesting no 64-bit operations
+ */
+#if defined(_LARGEFILE64_SOURCE) && -_LARGEFILE64_SOURCE - -1 == 1
+# undef _LARGEFILE64_SOURCE
+#endif
+
+#if defined(__WATCOMC__) && !defined(Z_HAVE_UNISTD_H)
+# define Z_HAVE_UNISTD_H
+#endif
+#ifndef Z_SOLO
+# if defined(Z_HAVE_UNISTD_H) || defined(_LARGEFILE64_SOURCE)
+# include <unistd.h> /* for SEEK_*, off_t, and _LFS64_LARGEFILE */
+# ifdef VMS
+# include <unixio.h> /* for off_t */
+# endif
+# ifndef z_off_t
+# define z_off_t off_t
+# endif
+# endif
+#endif
+
+#if defined(_LFS64_LARGEFILE) && _LFS64_LARGEFILE-0
+# define Z_LFS64
+#endif
+
+#if defined(_LARGEFILE64_SOURCE) && defined(Z_LFS64)
+# define Z_LARGE64
+#endif
+
+#if defined(_FILE_OFFSET_BITS) && _FILE_OFFSET_BITS-0 == 64 && defined(Z_LFS64)
+# define Z_WANT64
+#endif
+
+#if !defined(SEEK_SET) && !defined(Z_SOLO)
+# define SEEK_SET 0 /* Seek from beginning of file. */
+# define SEEK_CUR 1 /* Seek from current position.
*/ +# define SEEK_END 2 /* Set file pointer to EOF plus "offset" */ +#endif + +#ifndef z_off_t +# define z_off_t long +#endif + +#if !defined(_WIN32) && defined(Z_LARGE64) +# define z_off64_t off64_t +#else +# if defined(_WIN32) && !defined(__GNUC__) && !defined(Z_SOLO) +# define z_off64_t __int64 +# else +# define z_off64_t z_off_t +# endif +#endif + +/* MVS linker does not support external names larger than 8 bytes */ +#if defined(__MVS__) + #pragma map(deflateInit_,"DEIN") + #pragma map(deflateInit2_,"DEIN2") + #pragma map(deflateEnd,"DEEND") + #pragma map(deflateBound,"DEBND") + #pragma map(inflateInit_,"ININ") + #pragma map(inflateInit2_,"ININ2") + #pragma map(inflateEnd,"INEND") + #pragma map(inflateSync,"INSY") + #pragma map(inflateSetDictionary,"INSEDI") + #pragma map(compressBound,"CMBND") + #pragma map(inflate_table,"INTABL") + #pragma map(inflate_fast,"INFA") + #pragma map(inflate_copyright,"INCOPY") +#endif + +#endif /* ZCONF_H */ ADDED compat/zlib/zlib.3 Index: compat/zlib/zlib.3 ================================================================== --- compat/zlib/zlib.3 +++ compat/zlib/zlib.3 @@ -0,0 +1,151 @@ +.TH ZLIB 3 "28 Apr 2013" +.SH NAME +zlib \- compression/decompression library +.SH SYNOPSIS +[see +.I zlib.h +for full description] +.SH DESCRIPTION +The +.I zlib +library is a general purpose data compression library. +The code is thread safe, assuming that the standard library functions +used are thread safe, such as memory allocation routines. +It provides in-memory compression and decompression functions, +including integrity checks of the uncompressed data. +This version of the library supports only one compression method (deflation) +but other algorithms may be added later +with the same stream interface. +.LP +Compression can be done in a single step if the buffers are large enough +or can be done by repeated calls of the compression function. +In the latter case, +the application must provide more input and/or consume the output +(providing more output space) before each call. +.LP +The library also supports reading and writing files in +.IR gzip (1) +(.gz) format +with an interface similar to that of stdio. +.LP +The library does not install any signal handler. +The decoder checks the consistency of the compressed data, +so the library should never crash even in the case of corrupted input. +.LP +All functions of the compression library are documented in the file +.IR zlib.h . +The distribution source includes examples of use of the library +in the files +.I test/example.c +and +.IR test/minigzip.c, +as well as other examples in the +.IR examples/ +directory. +.LP +Changes to this version are documented in the file +.I ChangeLog +that accompanies the source. +.LP +.I zlib +is available in Java using the java.util.zip package: +.IP +http://java.sun.com/developer/technicalArticles/Programming/compression/ +.LP +A Perl interface to +.IR zlib , +written by Paul Marquess (pmqs@cpan.org), +is available at CPAN (Comprehensive Perl Archive Network) sites, +including: +.IP +http://search.cpan.org/~pmqs/IO-Compress-Zlib/ +.LP +A Python interface to +.IR zlib , +written by A.M. 
Kuchling (amk@magnet.com), +is available in Python 1.5 and later versions: +.IP +http://docs.python.org/library/zlib.html +.LP +.I zlib +is built into +.IR tcl: +.IP +http://wiki.tcl.tk/4610 +.LP +An experimental package to read and write files in .zip format, +written on top of +.I zlib +by Gilles Vollant (info@winimage.com), +is available at: +.IP +http://www.winimage.com/zLibDll/minizip.html +and also in the +.I contrib/minizip +directory of the main +.I zlib +source distribution. +.SH "SEE ALSO" +The +.I zlib +web site can be found at: +.IP +http://zlib.net/ +.LP +The data format used by the zlib library is described by RFC +(Request for Comments) 1950 to 1952 in the files: +.IP +http://tools.ietf.org/html/rfc1950 (for the zlib header and trailer format) +.br +http://tools.ietf.org/html/rfc1951 (for the deflate compressed data format) +.br +http://tools.ietf.org/html/rfc1952 (for the gzip header and trailer format) +.LP +Mark Nelson wrote an article about +.I zlib +for the Jan. 1997 issue of Dr. Dobb's Journal; +a copy of the article is available at: +.IP +http://marknelson.us/1997/01/01/zlib-engine/ +.SH "REPORTING PROBLEMS" +Before reporting a problem, +please check the +.I zlib +web site to verify that you have the latest version of +.IR zlib ; +otherwise, +obtain the latest version and see if the problem still exists. +Please read the +.I zlib +FAQ at: +.IP +http://zlib.net/zlib_faq.html +.LP +before asking for help. +Send questions and/or comments to zlib@gzip.org, +or (for the Windows DLL version) to Gilles Vollant (info@winimage.com). +.SH AUTHORS +Version 1.2.8 +Copyright (C) 1995-2013 Jean-loup Gailly (jloup@gzip.org) +and Mark Adler (madler@alumni.caltech.edu). +.LP +This software is provided "as-is," +without any express or implied warranty. +In no event will the authors be held liable for any damages +arising from the use of this software. +See the distribution directory with respect to requirements +governing redistribution. +The deflate format used by +.I zlib +was defined by Phil Katz. +The deflate and +.I zlib +specifications were written by L. Peter Deutsch. +Thanks to all the people who reported problems and suggested various +improvements in +.IR zlib ; +who are too numerous to cite here. +.LP +UNIX manual page by R. P. C. Rodgers, +U.S. National Library of Medicine (rodgers@nlm.nih.gov). +.\" end of man page ADDED compat/zlib/zlib.3.pdf Index: compat/zlib/zlib.3.pdf ================================================================== --- compat/zlib/zlib.3.pdf +++ compat/zlib/zlib.3.pdf cannot compute difference between binary files ADDED compat/zlib/zlib.h Index: compat/zlib/zlib.h ================================================================== --- compat/zlib/zlib.h +++ compat/zlib/zlib.h @@ -0,0 +1,1768 @@ +/* zlib.h -- interface of the 'zlib' general purpose compression library + version 1.2.8, April 28th, 2013 + + Copyright (C) 1995-2013 Jean-loup Gailly and Mark Adler + + This software is provided 'as-is', without any express or implied + warranty. In no event will the authors be held liable for any damages + arising from the use of this software. + + Permission is granted to anyone to use this software for any purpose, + including commercial applications, and to alter it and redistribute it + freely, subject to the following restrictions: + + 1. The origin of this software must not be misrepresented; you must not + claim that you wrote the original software. 
If you use this software + in a product, an acknowledgment in the product documentation would be + appreciated but is not required. + 2. Altered source versions must be plainly marked as such, and must not be + misrepresented as being the original software. + 3. This notice may not be removed or altered from any source distribution. + + Jean-loup Gailly Mark Adler + jloup@gzip.org madler@alumni.caltech.edu + + + The data format used by the zlib library is described by RFCs (Request for + Comments) 1950 to 1952 in the files http://tools.ietf.org/html/rfc1950 + (zlib format), rfc1951 (deflate format) and rfc1952 (gzip format). +*/ + +#ifndef ZLIB_H +#define ZLIB_H + +#include "zconf.h" + +#ifdef __cplusplus +extern "C" { +#endif + +#define ZLIB_VERSION "1.2.8" +#define ZLIB_VERNUM 0x1280 +#define ZLIB_VER_MAJOR 1 +#define ZLIB_VER_MINOR 2 +#define ZLIB_VER_REVISION 8 +#define ZLIB_VER_SUBREVISION 0 + +/* + The 'zlib' compression library provides in-memory compression and + decompression functions, including integrity checks of the uncompressed data. + This version of the library supports only one compression method (deflation) + but other algorithms will be added later and will have the same stream + interface. + + Compression can be done in a single step if the buffers are large enough, + or can be done by repeated calls of the compression function. In the latter + case, the application must provide more input and/or consume the output + (providing more output space) before each call. + + The compressed data format used by default by the in-memory functions is + the zlib format, which is a zlib wrapper documented in RFC 1950, wrapped + around a deflate stream, which is itself documented in RFC 1951. + + The library also supports reading and writing files in gzip (.gz) format + with an interface similar to that of stdio using the functions that start + with "gz". The gzip format is different from the zlib format. gzip is a + gzip wrapper, documented in RFC 1952, wrapped around a deflate stream. + + This library can optionally read and write gzip streams in memory as well. + + The zlib format was designed to be compact and fast for use in memory + and on communications channels. The gzip format was designed for single- + file compression on file systems, has a larger header than zlib to maintain + directory information, and uses a different, slower check method than zlib. + + The library does not install any signal handler. The decoder checks + the consistency of the compressed data, so the library should never crash + even in case of corrupted input. 
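
     As an illustrative aside (an editorial sketch, not part of zlib.h itself),
     the stdio-like gz* interface mentioned above in its simplest form; the
     file name "example.gz", the message text, and the minimal error handling
     are placeholders only:

        #include "zlib.h"

        int gz_roundtrip(void)
        {
            char buf[64];
            int n;
            gzFile f = gzopen("example.gz", "wb");   // write side: deflates transparently
            if (f == NULL) return -1;
            gzprintf(f, "hello, %s\n", "zlib");
            gzclose(f);

            f = gzopen("example.gz", "rb");          // read side: inflates transparently
            if (f == NULL) return -1;
            n = gzread(f, buf, (unsigned)sizeof(buf) - 1);
            gzclose(f);
            return n < 0 ? -1 : 0;
        }
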
+*/ + +typedef voidpf (*alloc_func) OF((voidpf opaque, uInt items, uInt size)); +typedef void (*free_func) OF((voidpf opaque, voidpf address)); + +struct internal_state; + +typedef struct z_stream_s { + z_const Bytef *next_in; /* next input byte */ + uInt avail_in; /* number of bytes available at next_in */ + uLong total_in; /* total number of input bytes read so far */ + + Bytef *next_out; /* next output byte should be put there */ + uInt avail_out; /* remaining free space at next_out */ + uLong total_out; /* total number of bytes output so far */ + + z_const char *msg; /* last error message, NULL if no error */ + struct internal_state FAR *state; /* not visible by applications */ + + alloc_func zalloc; /* used to allocate the internal state */ + free_func zfree; /* used to free the internal state */ + voidpf opaque; /* private data object passed to zalloc and zfree */ + + int data_type; /* best guess about the data type: binary or text */ + uLong adler; /* adler32 value of the uncompressed data */ + uLong reserved; /* reserved for future use */ +} z_stream; + +typedef z_stream FAR *z_streamp; + +/* + gzip header information passed to and from zlib routines. See RFC 1952 + for more details on the meanings of these fields. +*/ +typedef struct gz_header_s { + int text; /* true if compressed data believed to be text */ + uLong time; /* modification time */ + int xflags; /* extra flags (not used when writing a gzip file) */ + int os; /* operating system */ + Bytef *extra; /* pointer to extra field or Z_NULL if none */ + uInt extra_len; /* extra field length (valid if extra != Z_NULL) */ + uInt extra_max; /* space at extra (only when reading header) */ + Bytef *name; /* pointer to zero-terminated file name or Z_NULL */ + uInt name_max; /* space at name (only when reading header) */ + Bytef *comment; /* pointer to zero-terminated comment or Z_NULL */ + uInt comm_max; /* space at comment (only when reading header) */ + int hcrc; /* true if there was or will be a header crc */ + int done; /* true when done reading gzip header (not used + when writing a gzip file) */ +} gz_header; + +typedef gz_header FAR *gz_headerp; + +/* + The application must update next_in and avail_in when avail_in has dropped + to zero. It must update next_out and avail_out when avail_out has dropped + to zero. The application must initialize zalloc, zfree and opaque before + calling the init function. All other fields are set by the compression + library and must not be updated by the application. + + The opaque value provided by the application will be passed as the first + parameter for calls of zalloc and zfree. This can be useful for custom + memory management. The compression library attaches no meaning to the + opaque value. + + zalloc must return Z_NULL if there is not enough memory for the object. + If zlib is used in a multi-threaded application, zalloc and zfree must be + thread safe. + + On 16-bit systems, the functions zalloc and zfree must be able to allocate + exactly 65536 bytes, but will not be required to allocate more than this if + the symbol MAXSEG_64K is defined (see zconf.h). WARNING: On MSDOS, pointers + returned by zalloc for objects of exactly 65536 bytes *must* have their + offset normalized to zero. The default allocation function provided by this + library ensures this (see zutil.c). To reduce memory requirements and avoid + any allocation of 64K objects, at the expense of compression ratio, compile + the library with -DMAX_WBITS=14 (see zconf.h). 
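
     An editorial sketch (not part of zlib.h) of the initialization rule above,
     using hypothetical application-supplied allocators my_alloc()/my_free()
     and an application pool handle my_pool; calloc and free assume <stdlib.h>:

        voidpf my_alloc(voidpf opaque, uInt items, uInt size)
        {
            (void)opaque;                     // or treat opaque as a pool handle
            return (voidpf)calloc(items, size);
        }

        void my_free(voidpf opaque, voidpf address)
        {
            (void)opaque;
            free(address);
        }

        z_stream strm;
        strm.zalloc = my_alloc;               // Z_NULL here would select the defaults
        strm.zfree  = my_free;
        strm.opaque = (voidpf)my_pool;        // handed back to my_alloc/my_free
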
+ + The fields total_in and total_out can be used for statistics or progress + reports. After compression, total_in holds the total size of the + uncompressed data and may be saved for use in the decompressor (particularly + if the decompressor wants to decompress everything in a single step). +*/ + + /* constants */ + +#define Z_NO_FLUSH 0 +#define Z_PARTIAL_FLUSH 1 +#define Z_SYNC_FLUSH 2 +#define Z_FULL_FLUSH 3 +#define Z_FINISH 4 +#define Z_BLOCK 5 +#define Z_TREES 6 +/* Allowed flush values; see deflate() and inflate() below for details */ + +#define Z_OK 0 +#define Z_STREAM_END 1 +#define Z_NEED_DICT 2 +#define Z_ERRNO (-1) +#define Z_STREAM_ERROR (-2) +#define Z_DATA_ERROR (-3) +#define Z_MEM_ERROR (-4) +#define Z_BUF_ERROR (-5) +#define Z_VERSION_ERROR (-6) +/* Return codes for the compression/decompression functions. Negative values + * are errors, positive values are used for special but normal events. + */ + +#define Z_NO_COMPRESSION 0 +#define Z_BEST_SPEED 1 +#define Z_BEST_COMPRESSION 9 +#define Z_DEFAULT_COMPRESSION (-1) +/* compression levels */ + +#define Z_FILTERED 1 +#define Z_HUFFMAN_ONLY 2 +#define Z_RLE 3 +#define Z_FIXED 4 +#define Z_DEFAULT_STRATEGY 0 +/* compression strategy; see deflateInit2() below for details */ + +#define Z_BINARY 0 +#define Z_TEXT 1 +#define Z_ASCII Z_TEXT /* for compatibility with 1.2.2 and earlier */ +#define Z_UNKNOWN 2 +/* Possible values of the data_type field (though see inflate()) */ + +#define Z_DEFLATED 8 +/* The deflate compression method (the only one supported in this version) */ + +#define Z_NULL 0 /* for initializing zalloc, zfree, opaque */ + +#define zlib_version zlibVersion() +/* for compatibility with versions < 1.0.2 */ + + + /* basic functions */ + +ZEXTERN const char * ZEXPORT zlibVersion OF((void)); +/* The application can compare zlibVersion and ZLIB_VERSION for consistency. + If the first character differs, the library code actually used is not + compatible with the zlib.h header file used by the application. This check + is automatically made by deflateInit and inflateInit. + */ + +/* +ZEXTERN int ZEXPORT deflateInit OF((z_streamp strm, int level)); + + Initializes the internal stream state for compression. The fields + zalloc, zfree and opaque must be initialized before by the caller. If + zalloc and zfree are set to Z_NULL, deflateInit updates them to use default + allocation functions. + + The compression level must be Z_DEFAULT_COMPRESSION, or between 0 and 9: + 1 gives best speed, 9 gives best compression, 0 gives no compression at all + (the input data is simply copied a block at a time). Z_DEFAULT_COMPRESSION + requests a default compromise between speed and compression (currently + equivalent to level 6). + + deflateInit returns Z_OK if success, Z_MEM_ERROR if there was not enough + memory, Z_STREAM_ERROR if level is not a valid compression level, or + Z_VERSION_ERROR if the zlib library version (zlib_version) is incompatible + with the version assumed by the caller (ZLIB_VERSION). msg is set to null + if there is no error message. deflateInit does not perform any compression: + this will be done by deflate(). +*/ + + +ZEXTERN int ZEXPORT deflate OF((z_streamp strm, int flush)); +/* + deflate compresses as much data as possible, and stops when the input + buffer becomes empty or the output buffer becomes full. It may introduce + some output latency (reading input without producing any output) except when + forced to flush. + + The detailed semantics are as follows. 
deflate performs one or both of the + following actions: + + - Compress more input starting at next_in and update next_in and avail_in + accordingly. If not all input can be processed (because there is not + enough room in the output buffer), next_in and avail_in are updated and + processing will resume at this point for the next call of deflate(). + + - Provide more output starting at next_out and update next_out and avail_out + accordingly. This action is forced if the parameter flush is non zero. + Forcing flush frequently degrades the compression ratio, so this parameter + should be set only when necessary (in interactive applications). Some + output may be provided even if flush is not set. + + Before the call of deflate(), the application should ensure that at least + one of the actions is possible, by providing more input and/or consuming more + output, and updating avail_in or avail_out accordingly; avail_out should + never be zero before the call. The application can consume the compressed + output when it wants, for example when the output buffer is full (avail_out + == 0), or after each call of deflate(). If deflate returns Z_OK and with + zero avail_out, it must be called again after making room in the output + buffer because there might be more output pending. + + Normally the parameter flush is set to Z_NO_FLUSH, which allows deflate to + decide how much data to accumulate before producing output, in order to + maximize compression. + + If the parameter flush is set to Z_SYNC_FLUSH, all pending output is + flushed to the output buffer and the output is aligned on a byte boundary, so + that the decompressor can get all input data available so far. (In + particular avail_in is zero after the call if enough output space has been + provided before the call.) Flushing may degrade compression for some + compression algorithms and so it should be used only when necessary. This + completes the current deflate block and follows it with an empty stored block + that is three bits plus filler bits to the next byte, followed by four bytes + (00 00 ff ff). + + If flush is set to Z_PARTIAL_FLUSH, all pending output is flushed to the + output buffer, but the output is not aligned to a byte boundary. All of the + input data so far will be available to the decompressor, as for Z_SYNC_FLUSH. + This completes the current deflate block and follows it with an empty fixed + codes block that is 10 bits long. This assures that enough bytes are output + in order for the decompressor to finish the block before the empty fixed code + block. + + If flush is set to Z_BLOCK, a deflate block is completed and emitted, as + for Z_SYNC_FLUSH, but the output is not aligned on a byte boundary, and up to + seven bits of the current block are held to be written as the next byte after + the next deflate block is completed. In this case, the decompressor may not + be provided enough bits at this point in order to complete decompression of + the data provided so far to the compressor. It may need to wait for the next + block to be emitted. This is for advanced applications that need to control + the emission of deflate blocks. + + If flush is set to Z_FULL_FLUSH, all output is flushed as with + Z_SYNC_FLUSH, and the compression state is reset so that decompression can + restart from this point if previous compressed data has been damaged or if + random access is desired. Using Z_FULL_FLUSH too often can seriously degrade + compression. 
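
     A condensed editorial sketch (not part of zlib.h) of the usual chunked
     compression loop built on these flush rules, in the spirit of zlib's
     examples/zpipe.c; source and dest are assumed to be open FILE pointers
     (<stdio.h>) and CHUNK is an arbitrary buffer size:

        #define CHUNK 16384
        unsigned char in[CHUNK], out[CHUNK];
        z_stream strm;
        int ret, flush;

        strm.zalloc = Z_NULL;  strm.zfree = Z_NULL;  strm.opaque = Z_NULL;
        if (deflateInit(&strm, Z_DEFAULT_COMPRESSION) != Z_OK) return -1;
        do {                                       // one pass per input chunk
            strm.avail_in = (uInt)fread(in, 1, CHUNK, source);
            flush = feof(source) ? Z_FINISH : Z_NO_FLUSH;
            strm.next_in = in;
            do {                                   // run deflate() until output is drained
                strm.avail_out = CHUNK;
                strm.next_out = out;
                ret = deflate(&strm, flush);
                fwrite(out, 1, CHUNK - strm.avail_out, dest);
            } while (strm.avail_out == 0);
        } while (flush != Z_FINISH);               // ret is Z_STREAM_END at this point
        deflateEnd(&strm);
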
+ + If deflate returns with avail_out == 0, this function must be called again + with the same value of the flush parameter and more output space (updated + avail_out), until the flush is complete (deflate returns with non-zero + avail_out). In the case of a Z_FULL_FLUSH or Z_SYNC_FLUSH, make sure that + avail_out is greater than six to avoid repeated flush markers due to + avail_out == 0 on return. + + If the parameter flush is set to Z_FINISH, pending input is processed, + pending output is flushed and deflate returns with Z_STREAM_END if there was + enough output space; if deflate returns with Z_OK, this function must be + called again with Z_FINISH and more output space (updated avail_out) but no + more input data, until it returns with Z_STREAM_END or an error. After + deflate has returned Z_STREAM_END, the only possible operations on the stream + are deflateReset or deflateEnd. + + Z_FINISH can be used immediately after deflateInit if all the compression + is to be done in a single step. In this case, avail_out must be at least the + value returned by deflateBound (see below). Then deflate is guaranteed to + return Z_STREAM_END. If not enough output space is provided, deflate will + not return Z_STREAM_END, and it must be called again as described above. + + deflate() sets strm->adler to the adler32 checksum of all input read + so far (that is, total_in bytes). + + deflate() may update strm->data_type if it can make a good guess about + the input data type (Z_BINARY or Z_TEXT). In doubt, the data is considered + binary. This field is only for information purposes and does not affect the + compression algorithm in any manner. + + deflate() returns Z_OK if some progress has been made (more input + processed or more output produced), Z_STREAM_END if all input has been + consumed and all output has been produced (only when flush is set to + Z_FINISH), Z_STREAM_ERROR if the stream state was inconsistent (for example + if next_in or next_out was Z_NULL), Z_BUF_ERROR if no progress is possible + (for example avail_in or avail_out was zero). Note that Z_BUF_ERROR is not + fatal, and deflate() can be called again with more input and more output + space to continue compressing. +*/ + + +ZEXTERN int ZEXPORT deflateEnd OF((z_streamp strm)); +/* + All dynamically allocated data structures for this stream are freed. + This function discards any unprocessed input and does not flush any pending + output. + + deflateEnd returns Z_OK if success, Z_STREAM_ERROR if the + stream state was inconsistent, Z_DATA_ERROR if the stream was freed + prematurely (some input or output was discarded). In the error case, msg + may be set but then points to a static string (which must not be + deallocated). +*/ + + +/* +ZEXTERN int ZEXPORT inflateInit OF((z_streamp strm)); + + Initializes the internal stream state for decompression. The fields + next_in, avail_in, zalloc, zfree and opaque must be initialized before by + the caller. If next_in is not Z_NULL and avail_in is large enough (the + exact value depends on the compression method), inflateInit determines the + compression method from the zlib header and allocates all data structures + accordingly; otherwise the allocation will be deferred to the first call of + inflate. If zalloc and zfree are set to Z_NULL, inflateInit updates them to + use default allocation functions. 
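
     A minimal editorial sketch (not part of zlib.h) of the set-up implied by
     the paragraph above; compressed_buf and compressed_len are hypothetical
     caller-supplied values:

        z_stream strm;
        strm.zalloc   = Z_NULL;                // use the default allocators
        strm.zfree    = Z_NULL;
        strm.opaque   = Z_NULL;
        strm.next_in  = compressed_buf;        // may also start out as Z_NULL with 0
        strm.avail_in = (uInt)compressed_len;
        if (inflateInit(&strm) != Z_OK)
            return -1;                         // see the return codes below
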
+ + inflateInit returns Z_OK if success, Z_MEM_ERROR if there was not enough + memory, Z_VERSION_ERROR if the zlib library version is incompatible with the + version assumed by the caller, or Z_STREAM_ERROR if the parameters are + invalid, such as a null pointer to the structure. msg is set to null if + there is no error message. inflateInit does not perform any decompression + apart from possibly reading the zlib header if present: actual decompression + will be done by inflate(). (So next_in and avail_in may be modified, but + next_out and avail_out are unused and unchanged.) The current implementation + of inflateInit() does not process any header information -- that is deferred + until inflate() is called. +*/ + + +ZEXTERN int ZEXPORT inflate OF((z_streamp strm, int flush)); +/* + inflate decompresses as much data as possible, and stops when the input + buffer becomes empty or the output buffer becomes full. It may introduce + some output latency (reading input without producing any output) except when + forced to flush. + + The detailed semantics are as follows. inflate performs one or both of the + following actions: + + - Decompress more input starting at next_in and update next_in and avail_in + accordingly. If not all input can be processed (because there is not + enough room in the output buffer), next_in is updated and processing will + resume at this point for the next call of inflate(). + + - Provide more output starting at next_out and update next_out and avail_out + accordingly. inflate() provides as much output as possible, until there is + no more input data or no more space in the output buffer (see below about + the flush parameter). + + Before the call of inflate(), the application should ensure that at least + one of the actions is possible, by providing more input and/or consuming more + output, and updating the next_* and avail_* values accordingly. The + application can consume the uncompressed output when it wants, for example + when the output buffer is full (avail_out == 0), or after each call of + inflate(). If inflate returns Z_OK and with zero avail_out, it must be + called again after making room in the output buffer because there might be + more output pending. + + The flush parameter of inflate() can be Z_NO_FLUSH, Z_SYNC_FLUSH, Z_FINISH, + Z_BLOCK, or Z_TREES. Z_SYNC_FLUSH requests that inflate() flush as much + output as possible to the output buffer. Z_BLOCK requests that inflate() + stop if and when it gets to the next deflate block boundary. When decoding + the zlib or gzip format, this will cause inflate() to return immediately + after the header and before the first block. When doing a raw inflate, + inflate() will go ahead and process the first block, and will return when it + gets to the end of that block, or when it runs out of data. + + The Z_BLOCK option assists in appending to or combining deflate streams. + Also to assist in this, on return inflate() will set strm->data_type to the + number of unused bits in the last byte taken from strm->next_in, plus 64 if + inflate() is currently decoding the last block in the deflate stream, plus + 128 if inflate() returned immediately after decoding an end-of-block code or + decoding the complete header up to just before the first byte of the deflate + stream. The end-of-block will not be indicated until all of the uncompressed + data from that block has been written to strm->next_out. 
The number of + unused bits may in general be greater than seven, except when bit 7 of + data_type is set, in which case the number of unused bits will be less than + eight. data_type is set as noted here every time inflate() returns for all + flush options, and so can be used to determine the amount of currently + consumed input in bits. + + The Z_TREES option behaves as Z_BLOCK does, but it also returns when the + end of each deflate block header is reached, before any actual data in that + block is decoded. This allows the caller to determine the length of the + deflate block header for later use in random access within a deflate block. + 256 is added to the value of strm->data_type when inflate() returns + immediately after reaching the end of the deflate block header. + + inflate() should normally be called until it returns Z_STREAM_END or an + error. However if all decompression is to be performed in a single step (a + single call of inflate), the parameter flush should be set to Z_FINISH. In + this case all pending input is processed and all pending output is flushed; + avail_out must be large enough to hold all of the uncompressed data for the + operation to complete. (The size of the uncompressed data may have been + saved by the compressor for this purpose.) The use of Z_FINISH is not + required to perform an inflation in one step. However it may be used to + inform inflate that a faster approach can be used for the single inflate() + call. Z_FINISH also informs inflate to not maintain a sliding window if the + stream completes, which reduces inflate's memory footprint. If the stream + does not complete, either because not all of the stream is provided or not + enough output space is provided, then a sliding window will be allocated and + inflate() can be called again to continue the operation as if Z_NO_FLUSH had + been used. + + In this implementation, inflate() always flushes as much output as + possible to the output buffer, and always uses the faster approach on the + first call. So the effects of the flush parameter in this implementation are + on the return value of inflate() as noted below, when inflate() returns early + when Z_BLOCK or Z_TREES is used, and when inflate() avoids the allocation of + memory for a sliding window when Z_FINISH is used. + + If a preset dictionary is needed after this call (see inflateSetDictionary + below), inflate sets strm->adler to the Adler-32 checksum of the dictionary + chosen by the compressor and returns Z_NEED_DICT; otherwise it sets + strm->adler to the Adler-32 checksum of all output produced so far (that is, + total_out bytes) and returns Z_OK, Z_STREAM_END or an error code as described + below. At the end of the stream, inflate() checks that its computed adler32 + checksum is equal to that saved by the compressor and returns Z_STREAM_END + only if the checksum is correct. + + inflate() can decompress and check either zlib-wrapped or gzip-wrapped + deflate data. The header type is detected automatically, if requested when + initializing with inflateInit2(). Any information contained in the gzip + header is not retained, so applications that need that information should + instead use raw inflate, see inflateInit2() below, or inflateBack() and + perform their own processing of the gzip header and trailer. When processing + gzip-wrapped deflate data, strm->adler32 is set to the CRC-32 of the output + producted so far. The CRC-32 is checked against the gzip trailer. 
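
     A matching editorial sketch (not part of zlib.h) of a decompression loop,
     assuming inflateInit() has already succeeded, the whole compressed input
     is in memory at src/src_len, and consume() is a hypothetical sink for the
     uncompressed bytes; CHUNK, out, strm, and ret are as in the earlier sketch:

        strm.next_in  = src;
        strm.avail_in = (uInt)src_len;
        do {
            strm.avail_out = CHUNK;
            strm.next_out  = out;
            ret = inflate(&strm, Z_NO_FLUSH);
            if (ret != Z_OK && ret != Z_STREAM_END)
                break;                         // Z_NEED_DICT, Z_DATA_ERROR, ... (see below)
            consume(out, CHUNK - strm.avail_out);
        } while (ret != Z_STREAM_END);
        inflateEnd(&strm);
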
+ + inflate() returns Z_OK if some progress has been made (more input processed + or more output produced), Z_STREAM_END if the end of the compressed data has + been reached and all uncompressed output has been produced, Z_NEED_DICT if a + preset dictionary is needed at this point, Z_DATA_ERROR if the input data was + corrupted (input stream not conforming to the zlib format or incorrect check + value), Z_STREAM_ERROR if the stream structure was inconsistent (for example + next_in or next_out was Z_NULL), Z_MEM_ERROR if there was not enough memory, + Z_BUF_ERROR if no progress is possible or if there was not enough room in the + output buffer when Z_FINISH is used. Note that Z_BUF_ERROR is not fatal, and + inflate() can be called again with more input and more output space to + continue decompressing. If Z_DATA_ERROR is returned, the application may + then call inflateSync() to look for a good compression block if a partial + recovery of the data is desired. +*/ + + +ZEXTERN int ZEXPORT inflateEnd OF((z_streamp strm)); +/* + All dynamically allocated data structures for this stream are freed. + This function discards any unprocessed input and does not flush any pending + output. + + inflateEnd returns Z_OK if success, Z_STREAM_ERROR if the stream state + was inconsistent. In the error case, msg may be set but then points to a + static string (which must not be deallocated). +*/ + + + /* Advanced functions */ + +/* + The following functions are needed only in some special applications. +*/ + +/* +ZEXTERN int ZEXPORT deflateInit2 OF((z_streamp strm, + int level, + int method, + int windowBits, + int memLevel, + int strategy)); + + This is another version of deflateInit with more compression options. The + fields next_in, zalloc, zfree and opaque must be initialized before by the + caller. + + The method parameter is the compression method. It must be Z_DEFLATED in + this version of the library. + + The windowBits parameter is the base two logarithm of the window size + (the size of the history buffer). It should be in the range 8..15 for this + version of the library. Larger values of this parameter result in better + compression at the expense of memory usage. The default value is 15 if + deflateInit is used instead. + + windowBits can also be -8..-15 for raw deflate. In this case, -windowBits + determines the window size. deflate() will then generate raw deflate data + with no zlib header or trailer, and will not compute an adler32 check value. + + windowBits can also be greater than 15 for optional gzip encoding. Add + 16 to windowBits to write a simple gzip header and trailer around the + compressed data instead of a zlib wrapper. The gzip header will have no + file name, no extra data, no comment, no modification time (set to zero), no + header crc, and the operating system will be set to 255 (unknown). If a + gzip stream is being written, strm->adler is a crc32 instead of an adler32. + + The memLevel parameter specifies how much memory should be allocated + for the internal compression state. memLevel=1 uses minimum memory but is + slow and reduces compression ratio; memLevel=9 uses maximum memory for + optimal speed. The default value is 8. See zconf.h for total memory usage + as a function of windowBits and memLevel. + + The strategy parameter is used to tune the compression algorithm. 
Use the + value Z_DEFAULT_STRATEGY for normal data, Z_FILTERED for data produced by a + filter (or predictor), Z_HUFFMAN_ONLY to force Huffman encoding only (no + string match), or Z_RLE to limit match distances to one (run-length + encoding). Filtered data consists mostly of small values with a somewhat + random distribution. In this case, the compression algorithm is tuned to + compress them better. The effect of Z_FILTERED is to force more Huffman + coding and less string matching; it is somewhat intermediate between + Z_DEFAULT_STRATEGY and Z_HUFFMAN_ONLY. Z_RLE is designed to be almost as + fast as Z_HUFFMAN_ONLY, but give better compression for PNG image data. The + strategy parameter only affects the compression ratio but not the + correctness of the compressed output even if it is not set appropriately. + Z_FIXED prevents the use of dynamic Huffman codes, allowing for a simpler + decoder for special applications. + + deflateInit2 returns Z_OK if success, Z_MEM_ERROR if there was not enough + memory, Z_STREAM_ERROR if any parameter is invalid (such as an invalid + method), or Z_VERSION_ERROR if the zlib library version (zlib_version) is + incompatible with the version assumed by the caller (ZLIB_VERSION). msg is + set to null if there is no error message. deflateInit2 does not perform any + compression: this will be done by deflate(). +*/ + +ZEXTERN int ZEXPORT deflateSetDictionary OF((z_streamp strm, + const Bytef *dictionary, + uInt dictLength)); +/* + Initializes the compression dictionary from the given byte sequence + without producing any compressed output. When using the zlib format, this + function must be called immediately after deflateInit, deflateInit2 or + deflateReset, and before any call of deflate. When doing raw deflate, this + function must be called either before any call of deflate, or immediately + after the completion of a deflate block, i.e. after all input has been + consumed and all output has been delivered when using any of the flush + options Z_BLOCK, Z_PARTIAL_FLUSH, Z_SYNC_FLUSH, or Z_FULL_FLUSH. The + compressor and decompressor must use exactly the same dictionary (see + inflateSetDictionary). + + The dictionary should consist of strings (byte sequences) that are likely + to be encountered later in the data to be compressed, with the most commonly + used strings preferably put towards the end of the dictionary. Using a + dictionary is most useful when the data to be compressed is short and can be + predicted with good accuracy; the data can then be compressed better than + with the default empty dictionary. + + Depending on the size of the compression data structures selected by + deflateInit or deflateInit2, a part of the dictionary may in effect be + discarded, for example if the dictionary is larger than the window size + provided in deflateInit or deflateInit2. Thus the strings most likely to be + useful should be put at the end of the dictionary, not at the front. In + addition, the current implementation of deflate will use at most the window + size minus 262 bytes of the provided dictionary. + + Upon return of this function, strm->adler is set to the adler32 value + of the dictionary; the decompressor may later use this value to determine + which dictionary has been used by the compressor. (The adler32 value + applies to the whole dictionary even if only a subset of the dictionary is + actually used by the compressor.) If a raw deflate was requested, then the + adler32 value is not computed and strm->adler is not set. 
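
     A short editorial sketch (not part of zlib.h) of the call order described
     above; the dictionary contents are arbitrary, and the identical bytes must
     later be handed to inflateSetDictionary() on the decompression side:

        static const Bytef dict[] = "strings likely to recur go last";
        z_stream strm;
        strm.zalloc = Z_NULL;  strm.zfree = Z_NULL;  strm.opaque = Z_NULL;
        if (deflateInit(&strm, Z_BEST_COMPRESSION) != Z_OK) return -1;
        if (deflateSetDictionary(&strm, dict, (uInt)(sizeof(dict) - 1)) != Z_OK)
            return -1;
        // ... normal deflate() calls follow; strm.adler now identifies the dictionary
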
+ + deflateSetDictionary returns Z_OK if success, or Z_STREAM_ERROR if a + parameter is invalid (e.g. dictionary being Z_NULL) or the stream state is + inconsistent (for example if deflate has already been called for this stream + or if not at a block boundary for raw deflate). deflateSetDictionary does + not perform any compression: this will be done by deflate(). +*/ + +ZEXTERN int ZEXPORT deflateCopy OF((z_streamp dest, + z_streamp source)); +/* + Sets the destination stream as a complete copy of the source stream. + + This function can be useful when several compression strategies will be + tried, for example when there are several ways of pre-processing the input + data with a filter. The streams that will be discarded should then be freed + by calling deflateEnd. Note that deflateCopy duplicates the internal + compression state which can be quite large, so this strategy is slow and can + consume lots of memory. + + deflateCopy returns Z_OK if success, Z_MEM_ERROR if there was not + enough memory, Z_STREAM_ERROR if the source stream state was inconsistent + (such as zalloc being Z_NULL). msg is left unchanged in both source and + destination. +*/ + +ZEXTERN int ZEXPORT deflateReset OF((z_streamp strm)); +/* + This function is equivalent to deflateEnd followed by deflateInit, + but does not free and reallocate all the internal compression state. The + stream will keep the same compression level and any other attributes that + may have been set by deflateInit2. + + deflateReset returns Z_OK if success, or Z_STREAM_ERROR if the source + stream state was inconsistent (such as zalloc or state being Z_NULL). +*/ + +ZEXTERN int ZEXPORT deflateParams OF((z_streamp strm, + int level, + int strategy)); +/* + Dynamically update the compression level and compression strategy. The + interpretation of level and strategy is as in deflateInit2. This can be + used to switch between compression and straight copy of the input data, or + to switch to a different kind of input data requiring a different strategy. + If the compression level is changed, the input available so far is + compressed with the old level (and may be flushed); the new level will take + effect only at the next call of deflate(). + + Before the call of deflateParams, the stream state must be set as for + a call of deflate(), since the currently available input may have to be + compressed and flushed. In particular, strm->avail_out must be non-zero. + + deflateParams returns Z_OK if success, Z_STREAM_ERROR if the source + stream state was inconsistent or if a parameter was invalid, Z_BUF_ERROR if + strm->avail_out was zero. +*/ + +ZEXTERN int ZEXPORT deflateTune OF((z_streamp strm, + int good_length, + int max_lazy, + int nice_length, + int max_chain)); +/* + Fine tune deflate's internal compression parameters. This should only be + used by someone who understands the algorithm used by zlib's deflate for + searching for the best matching string, and even then only by the most + fanatic optimizer trying to squeeze out the last compressed bit for their + specific input data. Read the deflate.c source code for the meaning of the + max_lazy, good_length, nice_length, and max_chain parameters. + + deflateTune() can be called after deflateInit() or deflateInit2(), and + returns Z_OK on success, or Z_STREAM_ERROR for an invalid deflate stream. + */ + +ZEXTERN uLong ZEXPORT deflateBound OF((z_streamp strm, + uLong sourceLen)); +/* + deflateBound() returns an upper bound on the compressed size after + deflation of sourceLen bytes. 
It must be called after deflateInit() or + deflateInit2(), and after deflateSetHeader(), if used. This would be used + to allocate an output buffer for deflation in a single pass, and so would be + called before deflate(). If that first deflate() call is provided the + sourceLen input bytes, an output buffer allocated to the size returned by + deflateBound(), and the flush value Z_FINISH, then deflate() is guaranteed + to return Z_STREAM_END. Note that it is possible for the compressed size to + be larger than the value returned by deflateBound() if flush options other + than Z_FINISH or Z_NO_FLUSH are used. +*/ + +ZEXTERN int ZEXPORT deflatePending OF((z_streamp strm, + unsigned *pending, + int *bits)); +/* + deflatePending() returns the number of bytes and bits of output that have + been generated, but not yet provided in the available output. The bytes not + provided would be due to the available output space having being consumed. + The number of bits of output not provided are between 0 and 7, where they + await more bits to join them in order to fill out a full byte. If pending + or bits are Z_NULL, then those values are not set. + + deflatePending returns Z_OK if success, or Z_STREAM_ERROR if the source + stream state was inconsistent. + */ + +ZEXTERN int ZEXPORT deflatePrime OF((z_streamp strm, + int bits, + int value)); +/* + deflatePrime() inserts bits in the deflate output stream. The intent + is that this function is used to start off the deflate output with the bits + leftover from a previous deflate stream when appending to it. As such, this + function can only be used for raw deflate, and must be used before the first + deflate() call after a deflateInit2() or deflateReset(). bits must be less + than or equal to 16, and that many of the least significant bits of value + will be inserted in the output. + + deflatePrime returns Z_OK if success, Z_BUF_ERROR if there was not enough + room in the internal buffer to insert the bits, or Z_STREAM_ERROR if the + source stream state was inconsistent. +*/ + +ZEXTERN int ZEXPORT deflateSetHeader OF((z_streamp strm, + gz_headerp head)); +/* + deflateSetHeader() provides gzip header information for when a gzip + stream is requested by deflateInit2(). deflateSetHeader() may be called + after deflateInit2() or deflateReset() and before the first call of + deflate(). The text, time, os, extra field, name, and comment information + in the provided gz_header structure are written to the gzip header (xflag is + ignored -- the extra flags are set according to the compression level). The + caller must assure that, if not Z_NULL, name and comment are terminated with + a zero byte, and that if extra is not Z_NULL, that extra_len bytes are + available there. If hcrc is true, a gzip header crc is included. Note that + the current versions of the command-line version of gzip (up through version + 1.3.x) do not support header crc's, and will report that it is a "multi-part + gzip file" and give up. + + If deflateSetHeader is not used, the default gzip header has text false, + the time set to zero, and os set to 255, with no extra, name, or comment + fields. The gzip header is returned to the default state by deflateReset(). + + deflateSetHeader returns Z_OK if success, or Z_STREAM_ERROR if the source + stream state was inconsistent. +*/ + +/* +ZEXTERN int ZEXPORT inflateInit2 OF((z_streamp strm, + int windowBits)); + + This is another version of inflateInit with an extra parameter. 
The + fields next_in, avail_in, zalloc, zfree and opaque must be initialized + before by the caller. + + The windowBits parameter is the base two logarithm of the maximum window + size (the size of the history buffer). It should be in the range 8..15 for + this version of the library. The default value is 15 if inflateInit is used + instead. windowBits must be greater than or equal to the windowBits value + provided to deflateInit2() while compressing, or it must be equal to 15 if + deflateInit2() was not used. If a compressed stream with a larger window + size is given as input, inflate() will return with the error code + Z_DATA_ERROR instead of trying to allocate a larger window. + + windowBits can also be zero to request that inflate use the window size in + the zlib header of the compressed stream. + + windowBits can also be -8..-15 for raw inflate. In this case, -windowBits + determines the window size. inflate() will then process raw deflate data, + not looking for a zlib or gzip header, not generating a check value, and not + looking for any check values for comparison at the end of the stream. This + is for use with other formats that use the deflate compressed data format + such as zip. Those formats provide their own check values. If a custom + format is developed using the raw deflate format for compressed data, it is + recommended that a check value such as an adler32 or a crc32 be applied to + the uncompressed data as is done in the zlib, gzip, and zip formats. For + most applications, the zlib format should be used as is. Note that comments + above on the use in deflateInit2() applies to the magnitude of windowBits. + + windowBits can also be greater than 15 for optional gzip decoding. Add + 32 to windowBits to enable zlib and gzip decoding with automatic header + detection, or add 16 to decode only the gzip format (the zlib format will + return a Z_DATA_ERROR). If a gzip stream is being decoded, strm->adler is a + crc32 instead of an adler32. + + inflateInit2 returns Z_OK if success, Z_MEM_ERROR if there was not enough + memory, Z_VERSION_ERROR if the zlib library version is incompatible with the + version assumed by the caller, or Z_STREAM_ERROR if the parameters are + invalid, such as a null pointer to the structure. msg is set to null if + there is no error message. inflateInit2 does not perform any decompression + apart from possibly reading the zlib header if present: actual decompression + will be done by inflate(). (So next_in and avail_in may be modified, but + next_out and avail_out are unused and unchanged.) The current implementation + of inflateInit2() does not process any header information -- that is + deferred until inflate() is called. +*/ + +ZEXTERN int ZEXPORT inflateSetDictionary OF((z_streamp strm, + const Bytef *dictionary, + uInt dictLength)); +/* + Initializes the decompression dictionary from the given uncompressed byte + sequence. This function must be called immediately after a call of inflate, + if that call returned Z_NEED_DICT. The dictionary chosen by the compressor + can be determined from the adler32 value returned by that call of inflate. + The compressor and decompressor must use exactly the same dictionary (see + deflateSetDictionary). For raw inflate, this function can be called at any + time to set the dictionary. If the provided dictionary is smaller than the + window and there is already data in the window, then the provided dictionary + will amend what's there. 
The application must insure that the dictionary + that was used for compression is provided. + + inflateSetDictionary returns Z_OK if success, Z_STREAM_ERROR if a + parameter is invalid (e.g. dictionary being Z_NULL) or the stream state is + inconsistent, Z_DATA_ERROR if the given dictionary doesn't match the + expected one (incorrect adler32 value). inflateSetDictionary does not + perform any decompression: this will be done by subsequent calls of + inflate(). +*/ + +ZEXTERN int ZEXPORT inflateGetDictionary OF((z_streamp strm, + Bytef *dictionary, + uInt *dictLength)); +/* + Returns the sliding dictionary being maintained by inflate. dictLength is + set to the number of bytes in the dictionary, and that many bytes are copied + to dictionary. dictionary must have enough space, where 32768 bytes is + always enough. If inflateGetDictionary() is called with dictionary equal to + Z_NULL, then only the dictionary length is returned, and nothing is copied. + Similary, if dictLength is Z_NULL, then it is not set. + + inflateGetDictionary returns Z_OK on success, or Z_STREAM_ERROR if the + stream state is inconsistent. +*/ + +ZEXTERN int ZEXPORT inflateSync OF((z_streamp strm)); +/* + Skips invalid compressed data until a possible full flush point (see above + for the description of deflate with Z_FULL_FLUSH) can be found, or until all + available input is skipped. No output is provided. + + inflateSync searches for a 00 00 FF FF pattern in the compressed data. + All full flush points have this pattern, but not all occurrences of this + pattern are full flush points. + + inflateSync returns Z_OK if a possible full flush point has been found, + Z_BUF_ERROR if no more input was provided, Z_DATA_ERROR if no flush point + has been found, or Z_STREAM_ERROR if the stream structure was inconsistent. + In the success case, the application may save the current current value of + total_in which indicates where valid compressed data was found. In the + error case, the application may repeatedly call inflateSync, providing more + input each time, until success or end of the input data. +*/ + +ZEXTERN int ZEXPORT inflateCopy OF((z_streamp dest, + z_streamp source)); +/* + Sets the destination stream as a complete copy of the source stream. + + This function can be useful when randomly accessing a large stream. The + first pass through the stream can periodically record the inflate state, + allowing restarting inflate at those points when randomly accessing the + stream. + + inflateCopy returns Z_OK if success, Z_MEM_ERROR if there was not + enough memory, Z_STREAM_ERROR if the source stream state was inconsistent + (such as zalloc being Z_NULL). msg is left unchanged in both source and + destination. +*/ + +ZEXTERN int ZEXPORT inflateReset OF((z_streamp strm)); +/* + This function is equivalent to inflateEnd followed by inflateInit, + but does not free and reallocate all the internal decompression state. The + stream will keep attributes that may have been set by inflateInit2. + + inflateReset returns Z_OK if success, or Z_STREAM_ERROR if the source + stream state was inconsistent (such as zalloc or state being Z_NULL). +*/ + +ZEXTERN int ZEXPORT inflateReset2 OF((z_streamp strm, + int windowBits)); +/* + This function is the same as inflateReset, but it also permits changing + the wrap and window size requests. The windowBits parameter is interpreted + the same as it is for inflateInit2. 
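As an illustration, a stream that was already set up with inflateInit2() can be retargeted for the next input without a full reinitialization; the input buffer variables below are hypothetical:

      /* strm was previously initialized with inflateInit2() */
      if (inflateReset2(&strm, 15 + 32) != Z_OK)   /* adding 32 requests zlib/gzip auto-detection */
          error();
      strm.next_in  = next_input;
      strm.avail_in = next_input_len;
      /* ...continue with inflate() calls as usual... */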
+ + inflateReset2 returns Z_OK if success, or Z_STREAM_ERROR if the source + stream state was inconsistent (such as zalloc or state being Z_NULL), or if + the windowBits parameter is invalid. +*/ + +ZEXTERN int ZEXPORT inflatePrime OF((z_streamp strm, + int bits, + int value)); +/* + This function inserts bits in the inflate input stream. The intent is + that this function is used to start inflating at a bit position in the + middle of a byte. The provided bits will be used before any bytes are used + from next_in. This function should only be used with raw inflate, and + should be used before the first inflate() call after inflateInit2() or + inflateReset(). bits must be less than or equal to 16, and that many of the + least significant bits of value will be inserted in the input. + + If bits is negative, then the input stream bit buffer is emptied. Then + inflatePrime() can be called again to put bits in the buffer. This is used + to clear out bits leftover after feeding inflate a block description prior + to feeding inflate codes. + + inflatePrime returns Z_OK if success, or Z_STREAM_ERROR if the source + stream state was inconsistent. +*/ + +ZEXTERN long ZEXPORT inflateMark OF((z_streamp strm)); +/* + This function returns two values, one in the lower 16 bits of the return + value, and the other in the remaining upper bits, obtained by shifting the + return value down 16 bits. If the upper value is -1 and the lower value is + zero, then inflate() is currently decoding information outside of a block. + If the upper value is -1 and the lower value is non-zero, then inflate is in + the middle of a stored block, with the lower value equaling the number of + bytes from the input remaining to copy. If the upper value is not -1, then + it is the number of bits back from the current bit position in the input of + the code (literal or length/distance pair) currently being processed. In + that case the lower value is the number of bytes already emitted for that + code. + + A code is being processed if inflate is waiting for more input to complete + decoding of the code, or if it has completed decoding but is waiting for + more output space to write the literal or match data. + + inflateMark() is used to mark locations in the input data for random + access, which may be at bit positions, and to note those cases where the + output of a code may span boundaries of random access blocks. The current + location in the input stream can be determined from avail_in and data_type + as noted in the description for the Z_BLOCK flush parameter for inflate. + + inflateMark returns the value noted above or -1 << 16 if the provided + source stream state was inconsistent. +*/ + +ZEXTERN int ZEXPORT inflateGetHeader OF((z_streamp strm, + gz_headerp head)); +/* + inflateGetHeader() requests that gzip header information be stored in the + provided gz_header structure. inflateGetHeader() may be called after + inflateInit2() or inflateReset(), and before the first call of inflate(). + As inflate() processes the gzip stream, head->done is zero until the header + is completed, at which time head->done is set to one. If a zlib stream is + being decoded, then head->done is set to -1 to indicate that there will be + no gzip header information forthcoming. Note that Z_BLOCK or Z_TREES can be + used to force inflate() to return immediately after header processing is + complete and before any actual data is decompressed. + + The text, time, xflags, and os fields are filled in with the gzip header + contents. 
hcrc is set to true if there is a header CRC. (The header CRC + was valid if done is set to one.) If extra is not Z_NULL, then extra_max + contains the maximum number of bytes to write to extra. Once done is true, + extra_len contains the actual extra field length, and extra contains the + extra field, or that field truncated if extra_max is less than extra_len. + If name is not Z_NULL, then up to name_max characters are written there, + terminated with a zero unless the length is greater than name_max. If + comment is not Z_NULL, then up to comm_max characters are written there, + terminated with a zero unless the length is greater than comm_max. When any + of extra, name, or comment are not Z_NULL and the respective field is not + present in the header, then that field is set to Z_NULL to signal its + absence. This allows the use of deflateSetHeader() with the returned + structure to duplicate the header. However if those fields are set to + allocated memory, then the application will need to save those pointers + elsewhere so that they can be eventually freed. + + If inflateGetHeader is not used, then the header information is simply + discarded. The header is always checked for validity, including the header + CRC if present. inflateReset() will reset the process to discard the header + information. The application would need to call inflateGetHeader() again to + retrieve the header from the next gzip stream. + + inflateGetHeader returns Z_OK if success, or Z_STREAM_ERROR if the source + stream state was inconsistent. +*/ + +/* +ZEXTERN int ZEXPORT inflateBackInit OF((z_streamp strm, int windowBits, + unsigned char FAR *window)); + + Initialize the internal stream state for decompression using inflateBack() + calls. The fields zalloc, zfree and opaque in strm must be initialized + before the call. If zalloc and zfree are Z_NULL, then the default library- + derived memory allocation routines are used. windowBits is the base two + logarithm of the window size, in the range 8..15. window is a caller + supplied buffer of that size. Except for special applications where it is + assured that deflate was used with small window sizes, windowBits must be 15 + and a 32K byte window must be supplied to be able to decompress general + deflate streams. + + See inflateBack() for the usage of these routines. + + inflateBackInit will return Z_OK on success, Z_STREAM_ERROR if any of + the parameters are invalid, Z_MEM_ERROR if the internal state could not be + allocated, or Z_VERSION_ERROR if the version of the library does not match + the version of the header file. +*/ + +typedef unsigned (*in_func) OF((void FAR *, + z_const unsigned char FAR * FAR *)); +typedef int (*out_func) OF((void FAR *, unsigned char FAR *, unsigned)); + +ZEXTERN int ZEXPORT inflateBack OF((z_streamp strm, + in_func in, void FAR *in_desc, + out_func out, void FAR *out_desc)); +/* + inflateBack() does a raw inflate with a single call using a call-back + interface for input and output. This is potentially more efficient than + inflate() for file i/o applications, in that it avoids copying between the + output and the sliding window by simply making the window itself the output + buffer. inflate() can be faster on modern CPUs when used with large + buffers. inflateBack() trusts the application to not change the output + buffer passed by the output function, at least until inflateBack() returns. 
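The overall call sequence, sketched with hypothetical stdio-based callbacks (in() and out() are specified in detail below; this sketch assumes <stdio.h> and is illustrative only):

      static unsigned pull(void *desc, z_const unsigned char **buf)
      {
          static unsigned char in[16384];
          *buf = in;                /* hand the buffer to inflateBack() */
          return (unsigned)fread(in, 1, sizeof(in), (FILE *)desc);
      }

      static int push(void *desc, unsigned char *buf, unsigned len)
      {
          return fwrite(buf, 1, len, (FILE *)desc) != len;   /* non-zero on failure */
      }

      int decompress_raw(FILE *src, FILE *dst)
      {
          unsigned char window[32768];   /* 32K window handles general deflate streams */
          z_stream strm;
          int ret;

          strm.zalloc = Z_NULL;  strm.zfree = Z_NULL;  strm.opaque = Z_NULL;
          strm.next_in = Z_NULL; strm.avail_in = 0;    /* in() supplies all input */
          ret = inflateBackInit(&strm, 15, window);
          if (ret != Z_OK)
              return ret;
          ret = inflateBack(&strm, pull, src, push, dst);
          inflateBackEnd(&strm);
          return ret;                    /* Z_STREAM_END on success */
      }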
+ + inflateBackInit() must be called first to allocate the internal state + and to initialize the state with the user-provided window buffer. + inflateBack() may then be used multiple times to inflate a complete, raw + deflate stream with each call. inflateBackEnd() is then called to free the + allocated state. + + A raw deflate stream is one with no zlib or gzip header or trailer. + This routine would normally be used in a utility that reads zip or gzip + files and writes out uncompressed files. The utility would decode the + header and process the trailer on its own, hence this routine expects only + the raw deflate stream to decompress. This is different from the normal + behavior of inflate(), which expects either a zlib or gzip header and + trailer around the deflate stream. + + inflateBack() uses two subroutines supplied by the caller that are then + called by inflateBack() for input and output. inflateBack() calls those + routines until it reads a complete deflate stream and writes out all of the + uncompressed data, or until it encounters an error. The function's + parameters and return types are defined above in the in_func and out_func + typedefs. inflateBack() will call in(in_desc, &buf) which should return the + number of bytes of provided input, and a pointer to that input in buf. If + there is no input available, in() must return zero--buf is ignored in that + case--and inflateBack() will return a buffer error. inflateBack() will call + out(out_desc, buf, len) to write the uncompressed data buf[0..len-1]. out() + should return zero on success, or non-zero on failure. If out() returns + non-zero, inflateBack() will return with an error. Neither in() nor out() + are permitted to change the contents of the window provided to + inflateBackInit(), which is also the buffer that out() uses to write from. + The length written by out() will be at most the window size. Any non-zero + amount of input may be provided by in(). + + For convenience, inflateBack() can be provided input on the first call by + setting strm->next_in and strm->avail_in. If that input is exhausted, then + in() will be called. Therefore strm->next_in must be initialized before + calling inflateBack(). If strm->next_in is Z_NULL, then in() will be called + immediately for input. If strm->next_in is not Z_NULL, then strm->avail_in + must also be initialized, and then if strm->avail_in is not zero, input will + initially be taken from strm->next_in[0 .. strm->avail_in - 1]. + + The in_desc and out_desc parameters of inflateBack() is passed as the + first parameter of in() and out() respectively when they are called. These + descriptors can be optionally used to pass any information that the caller- + supplied in() and out() functions need to do their job. + + On return, inflateBack() will set strm->next_in and strm->avail_in to + pass back any unused input that was provided by the last in() call. The + return values of inflateBack() can be Z_STREAM_END on success, Z_BUF_ERROR + if in() or out() returned an error, Z_DATA_ERROR if there was a format error + in the deflate stream (in which case strm->msg is set to indicate the nature + of the error), or Z_STREAM_ERROR if the stream was not properly initialized. + In the case of Z_BUF_ERROR, an input or output error can be distinguished + using strm->next_in which will be Z_NULL only if in() returned an error. If + strm->next_in is not Z_NULL, then the Z_BUF_ERROR was due to out() returning + non-zero. 
(in() will always be called before out(), so strm->next_in is + assured to be defined if out() returns non-zero.) Note that inflateBack() + cannot return Z_OK. +*/ + +ZEXTERN int ZEXPORT inflateBackEnd OF((z_streamp strm)); +/* + All memory allocated by inflateBackInit() is freed. + + inflateBackEnd() returns Z_OK on success, or Z_STREAM_ERROR if the stream + state was inconsistent. +*/ + +ZEXTERN uLong ZEXPORT zlibCompileFlags OF((void)); +/* Return flags indicating compile-time options. + + Type sizes, two bits each, 00 = 16 bits, 01 = 32, 10 = 64, 11 = other: + 1.0: size of uInt + 3.2: size of uLong + 5.4: size of voidpf (pointer) + 7.6: size of z_off_t + + Compiler, assembler, and debug options: + 8: DEBUG + 9: ASMV or ASMINF -- use ASM code + 10: ZLIB_WINAPI -- exported functions use the WINAPI calling convention + 11: 0 (reserved) + + One-time table building (smaller code, but not thread-safe if true): + 12: BUILDFIXED -- build static block decoding tables when needed + 13: DYNAMIC_CRC_TABLE -- build CRC calculation tables when needed + 14,15: 0 (reserved) + + Library content (indicates missing functionality): + 16: NO_GZCOMPRESS -- gz* functions cannot compress (to avoid linking + deflate code when not needed) + 17: NO_GZIP -- deflate can't write gzip streams, and inflate can't detect + and decode gzip streams (to avoid linking crc code) + 18-19: 0 (reserved) + + Operation variations (changes in library functionality): + 20: PKZIP_BUG_WORKAROUND -- slightly more permissive inflate + 21: FASTEST -- deflate algorithm with only one, lowest compression level + 22,23: 0 (reserved) + + The sprintf variant used by gzprintf (zero is best): + 24: 0 = vs*, 1 = s* -- 1 means limited to 20 arguments after the format + 25: 0 = *nprintf, 1 = *printf -- 1 means gzprintf() not secure! + 26: 0 = returns value, 1 = void -- 1 means inferred string length returned + + Remainder: + 27-31: 0 (reserved) + */ + +#ifndef Z_SOLO + + /* utility functions */ + +/* + The following utility functions are implemented on top of the basic + stream-oriented functions. To simplify the interface, some default options + are assumed (compression level and memory usage, standard memory allocation + functions). The source code of these utility functions can be modified if + you need special options. +*/ + +ZEXTERN int ZEXPORT compress OF((Bytef *dest, uLongf *destLen, + const Bytef *source, uLong sourceLen)); +/* + Compresses the source buffer into the destination buffer. sourceLen is + the byte length of the source buffer. Upon entry, destLen is the total size + of the destination buffer, which must be at least the value returned by + compressBound(sourceLen). Upon exit, destLen is the actual size of the + compressed buffer. + + compress returns Z_OK if success, Z_MEM_ERROR if there was not + enough memory, Z_BUF_ERROR if there was not enough room in the output + buffer. +*/ + +ZEXTERN int ZEXPORT compress2 OF((Bytef *dest, uLongf *destLen, + const Bytef *source, uLong sourceLen, + int level)); +/* + Compresses the source buffer into the destination buffer. The level + parameter has the same meaning as in deflateInit. sourceLen is the byte + length of the source buffer. Upon entry, destLen is the total size of the + destination buffer, which must be at least the value returned by + compressBound(sourceLen). Upon exit, destLen is the actual size of the + compressed buffer. 
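A minimal one-shot sketch, sized with compressBound() (assumes <stdlib.h>; the source buffer and the error() handler are placeholders):

      uLong bound = compressBound(sourceLen);
      Bytef *dest = (Bytef *)malloc(bound);
      uLongf destLen = bound;

      if (dest == Z_NULL) error();
      if (compress2(dest, &destLen, source, sourceLen, Z_BEST_COMPRESSION) != Z_OK)
          error();
      /* destLen is now the actual compressed size; dest[0..destLen-1] holds the data */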
+ + compress2 returns Z_OK if success, Z_MEM_ERROR if there was not enough + memory, Z_BUF_ERROR if there was not enough room in the output buffer, + Z_STREAM_ERROR if the level parameter is invalid. +*/ + +ZEXTERN uLong ZEXPORT compressBound OF((uLong sourceLen)); +/* + compressBound() returns an upper bound on the compressed size after + compress() or compress2() on sourceLen bytes. It would be used before a + compress() or compress2() call to allocate the destination buffer. +*/ + +ZEXTERN int ZEXPORT uncompress OF((Bytef *dest, uLongf *destLen, + const Bytef *source, uLong sourceLen)); +/* + Decompresses the source buffer into the destination buffer. sourceLen is + the byte length of the source buffer. Upon entry, destLen is the total size + of the destination buffer, which must be large enough to hold the entire + uncompressed data. (The size of the uncompressed data must have been saved + previously by the compressor and transmitted to the decompressor by some + mechanism outside the scope of this compression library.) Upon exit, destLen + is the actual size of the uncompressed buffer. + + uncompress returns Z_OK if success, Z_MEM_ERROR if there was not + enough memory, Z_BUF_ERROR if there was not enough room in the output + buffer, or Z_DATA_ERROR if the input data was corrupted or incomplete. In + the case where there is not enough room, uncompress() will fill the output + buffer with the uncompressed data up to that point. +*/ + + /* gzip file access functions */ + +/* + This library supports reading and writing files in gzip (.gz) format with + an interface similar to that of stdio, using the functions that start with + "gz". The gzip format is different from the zlib format. gzip is a gzip + wrapper, documented in RFC 1952, wrapped around a deflate stream. +*/ + +typedef struct gzFile_s *gzFile; /* semi-opaque gzip file descriptor */ + +/* +ZEXTERN gzFile ZEXPORT gzopen OF((const char *path, const char *mode)); + + Opens a gzip (.gz) file for reading or writing. The mode parameter is as + in fopen ("rb" or "wb") but can also include a compression level ("wb9") or + a strategy: 'f' for filtered data as in "wb6f", 'h' for Huffman-only + compression as in "wb1h", 'R' for run-length encoding as in "wb1R", or 'F' + for fixed code compression as in "wb9F". (See the description of + deflateInit2 for more information about the strategy parameter.) 'T' will + request transparent writing or appending with no compression and not using + the gzip format. + + "a" can be used instead of "w" to request that the gzip stream that will + be written be appended to the file. "+" will result in an error, since + reading and writing to the same gzip file is not supported. The addition of + "x" when writing will create the file exclusively, which fails if the file + already exists. On systems that support it, the addition of "e" when + reading or writing will set the flag to close the file on an execve() call. + + These functions, as well as gzip, will read and decode a sequence of gzip + streams in a file. The append function of gzopen() can be used to create + such a file. (Also see gzflush() for another way to do this.) When + appending, gzopen does not test whether the file begins with a gzip stream, + nor does it look for the end of the gzip streams to begin appending. gzopen + will simply append a gzip stream to the existing file. + + gzopen can be used to read a file which is not in gzip format; in this + case gzread will directly read from the file without decompression. 
When + reading, this will be detected automatically by looking for the magic two- + byte gzip header. + + gzopen returns NULL if the file could not be opened, if there was + insufficient memory to allocate the gzFile state, or if an invalid mode was + specified (an 'r', 'w', or 'a' was not provided, or '+' was provided). + errno can be checked to determine if the reason gzopen failed was that the + file could not be opened. +*/ + +ZEXTERN gzFile ZEXPORT gzdopen OF((int fd, const char *mode)); +/* + gzdopen associates a gzFile with the file descriptor fd. File descriptors + are obtained from calls like open, dup, creat, pipe or fileno (if the file + has been previously opened with fopen). The mode parameter is as in gzopen. + + The next call of gzclose on the returned gzFile will also close the file + descriptor fd, just like fclose(fdopen(fd, mode)) closes the file descriptor + fd. If you want to keep fd open, use fd = dup(fd_keep); gz = gzdopen(fd, + mode);. The duplicated descriptor should be saved to avoid a leak, since + gzdopen does not close fd if it fails. If you are using fileno() to get the + file descriptor from a FILE *, then you will have to use dup() to avoid + double-close()ing the file descriptor. Both gzclose() and fclose() will + close the associated file descriptor, so they need to have different file + descriptors. + + gzdopen returns NULL if there was insufficient memory to allocate the + gzFile state, if an invalid mode was specified (an 'r', 'w', or 'a' was not + provided, or '+' was provided), or if fd is -1. The file descriptor is not + used until the next gz* read, write, seek, or close operation, so gzdopen + will not detect if fd is invalid (unless fd is -1). +*/ + +ZEXTERN int ZEXPORT gzbuffer OF((gzFile file, unsigned size)); +/* + Set the internal buffer size used by this library's functions. The + default buffer size is 8192 bytes. This function must be called after + gzopen() or gzdopen(), and before any other calls that read or write the + file. The buffer memory allocation is always deferred to the first read or + write. Two buffers are allocated, either both of the specified size when + writing, or one of the specified size and the other twice that size when + reading. A larger buffer size of, for example, 64K or 128K bytes will + noticeably increase the speed of decompression (reading). + + The new buffer size also affects the maximum length for gzprintf(). + + gzbuffer() returns 0 on success, or -1 on failure, such as being called + too late. +*/ + +ZEXTERN int ZEXPORT gzsetparams OF((gzFile file, int level, int strategy)); +/* + Dynamically update the compression level or strategy. See the description + of deflateInit2 for the meaning of these parameters. + + gzsetparams returns Z_OK if success, or Z_STREAM_ERROR if the file was not + opened for writing. +*/ + +ZEXTERN int ZEXPORT gzread OF((gzFile file, voidp buf, unsigned len)); +/* + Reads the given number of uncompressed bytes from the compressed file. If + the input file is not in gzip format, gzread copies the given number of + bytes into the buffer directly from the file. + + After reaching the end of a gzip stream in the input, gzread will continue + to read, looking for another gzip stream. Any number of gzip streams may be + concatenated in the input file, and will all be decompressed by gzread(). + If something other than a gzip stream is encountered after a gzip stream, + that remaining trailing garbage is ignored (and no error is returned). 
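A typical read loop, for illustration (the file name and the consume() sink are hypothetical):

      gzFile in = gzopen("example.txt.gz", "rb");
      unsigned char buf[8192];
      int n;

      if (in == NULL) error();
      while ((n = gzread(in, buf, (unsigned)sizeof(buf))) > 0)
          consume(buf, n);          /* hypothetical handler for the uncompressed bytes */
      if (n < 0) error();           /* -1 indicates a read error */
      gzclose(in);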
+ + gzread can be used to read a gzip file that is being concurrently written. + Upon reaching the end of the input, gzread will return with the available + data. If the error code returned by gzerror is Z_OK or Z_BUF_ERROR, then + gzclearerr can be used to clear the end of file indicator in order to permit + gzread to be tried again. Z_OK indicates that a gzip stream was completed + on the last gzread. Z_BUF_ERROR indicates that the input file ended in the + middle of a gzip stream. Note that gzread does not return -1 in the event + of an incomplete gzip stream. This error is deferred until gzclose(), which + will return Z_BUF_ERROR if the last gzread ended in the middle of a gzip + stream. Alternatively, gzerror can be used before gzclose to detect this + case. + + gzread returns the number of uncompressed bytes actually read, less than + len for end of file, or -1 for error. +*/ + +ZEXTERN int ZEXPORT gzwrite OF((gzFile file, + voidpc buf, unsigned len)); +/* + Writes the given number of uncompressed bytes into the compressed file. + gzwrite returns the number of uncompressed bytes written or 0 in case of + error. +*/ + +ZEXTERN int ZEXPORTVA gzprintf Z_ARG((gzFile file, const char *format, ...)); +/* + Converts, formats, and writes the arguments to the compressed file under + control of the format string, as in fprintf. gzprintf returns the number of + uncompressed bytes actually written, or 0 in case of error. The number of + uncompressed bytes written is limited to 8191, or one less than the buffer + size given to gzbuffer(). The caller should assure that this limit is not + exceeded. If it is exceeded, then gzprintf() will return an error (0) with + nothing written. In this case, there may also be a buffer overflow with + unpredictable consequences, which is possible only if zlib was compiled with + the insecure functions sprintf() or vsprintf() because the secure snprintf() + or vsnprintf() functions were not available. This can be determined using + zlibCompileFlags(). +*/ + +ZEXTERN int ZEXPORT gzputs OF((gzFile file, const char *s)); +/* + Writes the given null-terminated string to the compressed file, excluding + the terminating null character. + + gzputs returns the number of characters written, or -1 in case of error. +*/ + +ZEXTERN char * ZEXPORT gzgets OF((gzFile file, char *buf, int len)); +/* + Reads bytes from the compressed file until len-1 characters are read, or a + newline character is read and transferred to buf, or an end-of-file + condition is encountered. If any characters are read or if len == 1, the + string is terminated with a null character. If no characters are read due + to an end-of-file or len < 1, then the buffer is left untouched. + + gzgets returns buf which is a null-terminated string, or it returns NULL + for end-of-file or in case of error. If there was an error, the contents at + buf are indeterminate. +*/ + +ZEXTERN int ZEXPORT gzputc OF((gzFile file, int c)); +/* + Writes c, converted to an unsigned char, into the compressed file. gzputc + returns the value that was written, or -1 in case of error. +*/ + +ZEXTERN int ZEXPORT gzgetc OF((gzFile file)); +/* + Reads one byte from the compressed file. gzgetc returns this byte or -1 + in case of end of file or error. This is implemented as a macro for speed. + As such, it does not do all of the checking the other functions do. I.e. + it does not check to see if file is NULL, nor whether the structure file + points to has been clobbered or not. 
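A byte-at-a-time sketch (file is an open gzFile; process_byte() is a placeholder):

      int c;

      while ((c = gzgetc(file)) != -1)
          process_byte(c);
      if (!gzeof(file)) error();    /* -1 without end-of-file means a read error */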
+*/ + +ZEXTERN int ZEXPORT gzungetc OF((int c, gzFile file)); +/* + Push one character back onto the stream to be read as the first character + on the next read. At least one character of push-back is allowed. + gzungetc() returns the character pushed, or -1 on failure. gzungetc() will + fail if c is -1, and may fail if a character has been pushed but not read + yet. If gzungetc is used immediately after gzopen or gzdopen, at least the + output buffer size of pushed characters is allowed. (See gzbuffer above.) + The pushed character will be discarded if the stream is repositioned with + gzseek() or gzrewind(). +*/ + +ZEXTERN int ZEXPORT gzflush OF((gzFile file, int flush)); +/* + Flushes all pending output into the compressed file. The parameter flush + is as in the deflate() function. The return value is the zlib error number + (see function gzerror below). gzflush is only permitted when writing. + + If the flush parameter is Z_FINISH, the remaining data is written and the + gzip stream is completed in the output. If gzwrite() is called again, a new + gzip stream will be started in the output. gzread() is able to read such + concatented gzip streams. + + gzflush should be called only when strictly necessary because it will + degrade compression if called too often. +*/ + +/* +ZEXTERN z_off_t ZEXPORT gzseek OF((gzFile file, + z_off_t offset, int whence)); + + Sets the starting position for the next gzread or gzwrite on the given + compressed file. The offset represents a number of bytes in the + uncompressed data stream. The whence parameter is defined as in lseek(2); + the value SEEK_END is not supported. + + If the file is opened for reading, this function is emulated but can be + extremely slow. If the file is opened for writing, only forward seeks are + supported; gzseek then compresses a sequence of zeroes up to the new + starting position. + + gzseek returns the resulting offset location as measured in bytes from + the beginning of the uncompressed stream, or -1 in case of error, in + particular if the file is opened for writing and the new starting position + would be before the current position. +*/ + +ZEXTERN int ZEXPORT gzrewind OF((gzFile file)); +/* + Rewinds the given file. This function is supported only for reading. + + gzrewind(file) is equivalent to (int)gzseek(file, 0L, SEEK_SET) +*/ + +/* +ZEXTERN z_off_t ZEXPORT gztell OF((gzFile file)); + + Returns the starting position for the next gzread or gzwrite on the given + compressed file. This position represents a number of bytes in the + uncompressed data stream, and is zero when starting, even if appending or + reading a gzip stream from the middle of a file using gzdopen(). + + gztell(file) is equivalent to gzseek(file, 0L, SEEK_CUR) +*/ + +/* +ZEXTERN z_off_t ZEXPORT gzoffset OF((gzFile file)); + + Returns the current offset in the file being read or written. This offset + includes the count of bytes that precede the gzip stream, for example when + appending or when using gzdopen() for reading. When reading, the offset + does not include as yet unused buffered input. This information can be used + for a progress indicator. On error, gzoffset() returns -1. +*/ + +ZEXTERN int ZEXPORT gzeof OF((gzFile file)); +/* + Returns true (1) if the end-of-file indicator has been set while reading, + false (0) otherwise. Note that the end-of-file indicator is set only if the + read tried to go past the end of the input, but came up short. 
Therefore, + just like feof(), gzeof() may return false even if there is no more data to + read, in the event that the last read request was for the exact number of + bytes remaining in the input file. This will happen if the input file size + is an exact multiple of the buffer size. + + If gzeof() returns true, then the read functions will return no more data, + unless the end-of-file indicator is reset by gzclearerr() and the input file + has grown since the previous end of file was detected. +*/ + +ZEXTERN int ZEXPORT gzdirect OF((gzFile file)); +/* + Returns true (1) if file is being copied directly while reading, or false + (0) if file is a gzip stream being decompressed. + + If the input file is empty, gzdirect() will return true, since the input + does not contain a gzip stream. + + If gzdirect() is used immediately after gzopen() or gzdopen() it will + cause buffers to be allocated to allow reading the file to determine if it + is a gzip file. Therefore if gzbuffer() is used, it should be called before + gzdirect(). + + When writing, gzdirect() returns true (1) if transparent writing was + requested ("wT" for the gzopen() mode), or false (0) otherwise. (Note: + gzdirect() is not needed when writing. Transparent writing must be + explicitly requested, so the application already knows the answer. When + linking statically, using gzdirect() will include all of the zlib code for + gzip file reading and decompression, which may not be desired.) +*/ + +ZEXTERN int ZEXPORT gzclose OF((gzFile file)); +/* + Flushes all pending output if necessary, closes the compressed file and + deallocates the (de)compression state. Note that once file is closed, you + cannot call gzerror with file, since its structures have been deallocated. + gzclose must not be called more than once on the same file, just as free + must not be called more than once on the same allocation. + + gzclose will return Z_STREAM_ERROR if file is not valid, Z_ERRNO on a + file operation error, Z_MEM_ERROR if out of memory, Z_BUF_ERROR if the + last read ended in the middle of a gzip stream, or Z_OK on success. +*/ + +ZEXTERN int ZEXPORT gzclose_r OF((gzFile file)); +ZEXTERN int ZEXPORT gzclose_w OF((gzFile file)); +/* + Same as gzclose(), but gzclose_r() is only for use when reading, and + gzclose_w() is only for use when writing or appending. The advantage to + using these instead of gzclose() is that they avoid linking in zlib + compression or decompression code that is not used when only reading or only + writing respectively. If gzclose() is used, then both compression and + decompression code will be included the application when linking to a static + zlib library. +*/ + +ZEXTERN const char * ZEXPORT gzerror OF((gzFile file, int *errnum)); +/* + Returns the error message for the last error which occurred on the given + compressed file. errnum is set to zlib error number. If an error occurred + in the file system and not in the compression library, errnum is set to + Z_ERRNO and the application may consult errno to get the exact error code. + + The application must not modify the returned string. Future calls to + this function may invalidate the previously returned string. If file is + closed, then the string previously returned by gzerror will no longer be + available. + + gzerror() should be used to distinguish errors from end-of-file for those + functions above that do not distinguish those cases in their return values. 
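For example, after a short or failed read the two error sources can be told apart roughly like this (assumes <stdio.h>; the messages are placeholders):

      int errnum;
      const char *msg = gzerror(file, &errnum);

      if (errnum == Z_ERRNO)
          perror("gz file");                   /* operating system error; consult errno */
      else if (errnum != Z_OK)
          fprintf(stderr, "zlib: %s\n", msg);  /* compression library error */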
+*/ + +ZEXTERN void ZEXPORT gzclearerr OF((gzFile file)); +/* + Clears the error and end-of-file flags for file. This is analogous to the + clearerr() function in stdio. This is useful for continuing to read a gzip + file that is being written concurrently. +*/ + +#endif /* !Z_SOLO */ + + /* checksum functions */ + +/* + These functions are not related to compression but are exported + anyway because they might be useful in applications using the compression + library. +*/ + +ZEXTERN uLong ZEXPORT adler32 OF((uLong adler, const Bytef *buf, uInt len)); +/* + Update a running Adler-32 checksum with the bytes buf[0..len-1] and + return the updated checksum. If buf is Z_NULL, this function returns the + required initial value for the checksum. + + An Adler-32 checksum is almost as reliable as a CRC32 but can be computed + much faster. + + Usage example: + + uLong adler = adler32(0L, Z_NULL, 0); + + while (read_buffer(buffer, length) != EOF) { + adler = adler32(adler, buffer, length); + } + if (adler != original_adler) error(); +*/ + +/* +ZEXTERN uLong ZEXPORT adler32_combine OF((uLong adler1, uLong adler2, + z_off_t len2)); + + Combine two Adler-32 checksums into one. For two sequences of bytes, seq1 + and seq2 with lengths len1 and len2, Adler-32 checksums were calculated for + each, adler1 and adler2. adler32_combine() returns the Adler-32 checksum of + seq1 and seq2 concatenated, requiring only adler1, adler2, and len2. Note + that the z_off_t type (like off_t) is a signed integer. If len2 is + negative, the result has no meaning or utility. +*/ + +ZEXTERN uLong ZEXPORT crc32 OF((uLong crc, const Bytef *buf, uInt len)); +/* + Update a running CRC-32 with the bytes buf[0..len-1] and return the + updated CRC-32. If buf is Z_NULL, this function returns the required + initial value for the crc. Pre- and post-conditioning (one's complement) is + performed within this function so it shouldn't be done by the application. + + Usage example: + + uLong crc = crc32(0L, Z_NULL, 0); + + while (read_buffer(buffer, length) != EOF) { + crc = crc32(crc, buffer, length); + } + if (crc != original_crc) error(); +*/ + +/* +ZEXTERN uLong ZEXPORT crc32_combine OF((uLong crc1, uLong crc2, z_off_t len2)); + + Combine two CRC-32 check values into one. For two sequences of bytes, + seq1 and seq2 with lengths len1 and len2, CRC-32 check values were + calculated for each, crc1 and crc2. crc32_combine() returns the CRC-32 + check value of seq1 and seq2 concatenated, requiring only crc1, crc2, and + len2. 
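For illustration, combining check values computed over two pieces gives the same result as one pass over the concatenation (part1/part2 and their lengths are placeholders):

      uLong crc1 = crc32(0L, Z_NULL, 0);
      uLong crc2 = crc32(0L, Z_NULL, 0);
      uLong whole;

      crc1  = crc32(crc1, part1, len1);
      crc2  = crc32(crc2, part2, len2);
      whole = crc32_combine(crc1, crc2, (z_off_t)len2);
      /* whole equals crc32 computed over part1 followed by part2 */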
+*/ + + + /* various hacks, don't look :) */ + +/* deflateInit and inflateInit are macros to allow checking the zlib version + * and the compiler's view of z_stream: + */ +ZEXTERN int ZEXPORT deflateInit_ OF((z_streamp strm, int level, + const char *version, int stream_size)); +ZEXTERN int ZEXPORT inflateInit_ OF((z_streamp strm, + const char *version, int stream_size)); +ZEXTERN int ZEXPORT deflateInit2_ OF((z_streamp strm, int level, int method, + int windowBits, int memLevel, + int strategy, const char *version, + int stream_size)); +ZEXTERN int ZEXPORT inflateInit2_ OF((z_streamp strm, int windowBits, + const char *version, int stream_size)); +ZEXTERN int ZEXPORT inflateBackInit_ OF((z_streamp strm, int windowBits, + unsigned char FAR *window, + const char *version, + int stream_size)); +#define deflateInit(strm, level) \ + deflateInit_((strm), (level), ZLIB_VERSION, (int)sizeof(z_stream)) +#define inflateInit(strm) \ + inflateInit_((strm), ZLIB_VERSION, (int)sizeof(z_stream)) +#define deflateInit2(strm, level, method, windowBits, memLevel, strategy) \ + deflateInit2_((strm),(level),(method),(windowBits),(memLevel),\ + (strategy), ZLIB_VERSION, (int)sizeof(z_stream)) +#define inflateInit2(strm, windowBits) \ + inflateInit2_((strm), (windowBits), ZLIB_VERSION, \ + (int)sizeof(z_stream)) +#define inflateBackInit(strm, windowBits, window) \ + inflateBackInit_((strm), (windowBits), (window), \ + ZLIB_VERSION, (int)sizeof(z_stream)) + +#ifndef Z_SOLO + +/* gzgetc() macro and its supporting function and exposed data structure. Note + * that the real internal state is much larger than the exposed structure. + * This abbreviated structure exposes just enough for the gzgetc() macro. The + * user should not mess with these exposed elements, since their names or + * behavior could change in the future, perhaps even capriciously. They can + * only be used by the gzgetc() macro. You have been warned. + */ +struct gzFile_s { + unsigned have; + unsigned char *next; + z_off64_t pos; +}; +ZEXTERN int ZEXPORT gzgetc_ OF((gzFile file)); /* backward compatibility */ +#ifdef Z_PREFIX_SET +# undef z_gzgetc +# define z_gzgetc(g) \ + ((g)->have ? ((g)->have--, (g)->pos++, *((g)->next)++) : gzgetc(g)) +#else +# define gzgetc(g) \ + ((g)->have ? 
((g)->have--, (g)->pos++, *((g)->next)++) : gzgetc(g)) +#endif + +/* provide 64-bit offset functions if _LARGEFILE64_SOURCE defined, and/or + * change the regular functions to 64 bits if _FILE_OFFSET_BITS is 64 (if + * both are true, the application gets the *64 functions, and the regular + * functions are changed to 64 bits) -- in case these are set on systems + * without large file support, _LFS64_LARGEFILE must also be true + */ +#ifdef Z_LARGE64 + ZEXTERN gzFile ZEXPORT gzopen64 OF((const char *, const char *)); + ZEXTERN z_off64_t ZEXPORT gzseek64 OF((gzFile, z_off64_t, int)); + ZEXTERN z_off64_t ZEXPORT gztell64 OF((gzFile)); + ZEXTERN z_off64_t ZEXPORT gzoffset64 OF((gzFile)); + ZEXTERN uLong ZEXPORT adler32_combine64 OF((uLong, uLong, z_off64_t)); + ZEXTERN uLong ZEXPORT crc32_combine64 OF((uLong, uLong, z_off64_t)); +#endif + +#if !defined(ZLIB_INTERNAL) && defined(Z_WANT64) +# ifdef Z_PREFIX_SET +# define z_gzopen z_gzopen64 +# define z_gzseek z_gzseek64 +# define z_gztell z_gztell64 +# define z_gzoffset z_gzoffset64 +# define z_adler32_combine z_adler32_combine64 +# define z_crc32_combine z_crc32_combine64 +# else +# define gzopen gzopen64 +# define gzseek gzseek64 +# define gztell gztell64 +# define gzoffset gzoffset64 +# define adler32_combine adler32_combine64 +# define crc32_combine crc32_combine64 +# endif +# ifndef Z_LARGE64 + ZEXTERN gzFile ZEXPORT gzopen64 OF((const char *, const char *)); + ZEXTERN z_off_t ZEXPORT gzseek64 OF((gzFile, z_off_t, int)); + ZEXTERN z_off_t ZEXPORT gztell64 OF((gzFile)); + ZEXTERN z_off_t ZEXPORT gzoffset64 OF((gzFile)); + ZEXTERN uLong ZEXPORT adler32_combine64 OF((uLong, uLong, z_off_t)); + ZEXTERN uLong ZEXPORT crc32_combine64 OF((uLong, uLong, z_off_t)); +# endif +#else + ZEXTERN gzFile ZEXPORT gzopen OF((const char *, const char *)); + ZEXTERN z_off_t ZEXPORT gzseek OF((gzFile, z_off_t, int)); + ZEXTERN z_off_t ZEXPORT gztell OF((gzFile)); + ZEXTERN z_off_t ZEXPORT gzoffset OF((gzFile)); + ZEXTERN uLong ZEXPORT adler32_combine OF((uLong, uLong, z_off_t)); + ZEXTERN uLong ZEXPORT crc32_combine OF((uLong, uLong, z_off_t)); +#endif + +#else /* Z_SOLO */ + + ZEXTERN uLong ZEXPORT adler32_combine OF((uLong, uLong, z_off_t)); + ZEXTERN uLong ZEXPORT crc32_combine OF((uLong, uLong, z_off_t)); + +#endif /* !Z_SOLO */ + +/* hack for buggy compilers */ +#if !defined(ZUTIL_H) && !defined(NO_DUMMY_DECL) + struct internal_state {int dummy;}; +#endif + +/* undocumented functions */ +ZEXTERN const char * ZEXPORT zError OF((int)); +ZEXTERN int ZEXPORT inflateSyncPoint OF((z_streamp)); +ZEXTERN const z_crc_t FAR * ZEXPORT get_crc_table OF((void)); +ZEXTERN int ZEXPORT inflateUndermine OF((z_streamp, int)); +ZEXTERN int ZEXPORT inflateResetKeep OF((z_streamp)); +ZEXTERN int ZEXPORT deflateResetKeep OF((z_streamp)); +#if defined(_WIN32) && !defined(Z_SOLO) +ZEXTERN gzFile ZEXPORT gzopen_w OF((const wchar_t *path, + const char *mode)); +#endif +#if defined(STDC) || defined(Z_HAVE_STDARG_H) +# ifndef Z_SOLO +ZEXTERN int ZEXPORTVA gzvprintf Z_ARG((gzFile file, + const char *format, + va_list va)); +# endif +#endif + +#ifdef __cplusplus +} +#endif + +#endif /* ZLIB_H */ ADDED compat/zlib/zlib.map Index: compat/zlib/zlib.map ================================================================== --- compat/zlib/zlib.map +++ compat/zlib/zlib.map @@ -0,0 +1,83 @@ +ZLIB_1.2.0 { + global: + compressBound; + deflateBound; + inflateBack; + inflateBackEnd; + inflateBackInit_; + inflateCopy; + local: + deflate_copyright; + inflate_copyright; + inflate_fast; + inflate_table; 
+ zcalloc; + zcfree; + z_errmsg; + gz_error; + gz_intmax; + _*; +}; + +ZLIB_1.2.0.2 { + gzclearerr; + gzungetc; + zlibCompileFlags; +} ZLIB_1.2.0; + +ZLIB_1.2.0.8 { + deflatePrime; +} ZLIB_1.2.0.2; + +ZLIB_1.2.2 { + adler32_combine; + crc32_combine; + deflateSetHeader; + inflateGetHeader; +} ZLIB_1.2.0.8; + +ZLIB_1.2.2.3 { + deflateTune; + gzdirect; +} ZLIB_1.2.2; + +ZLIB_1.2.2.4 { + inflatePrime; +} ZLIB_1.2.2.3; + +ZLIB_1.2.3.3 { + adler32_combine64; + crc32_combine64; + gzopen64; + gzseek64; + gztell64; + inflateUndermine; +} ZLIB_1.2.2.4; + +ZLIB_1.2.3.4 { + inflateReset2; + inflateMark; +} ZLIB_1.2.3.3; + +ZLIB_1.2.3.5 { + gzbuffer; + gzoffset; + gzoffset64; + gzclose_r; + gzclose_w; +} ZLIB_1.2.3.4; + +ZLIB_1.2.5.1 { + deflatePending; +} ZLIB_1.2.3.5; + +ZLIB_1.2.5.2 { + deflateResetKeep; + gzgetc_; + inflateResetKeep; +} ZLIB_1.2.5.1; + +ZLIB_1.2.7.1 { + inflateGetDictionary; + gzvprintf; +} ZLIB_1.2.5.2; ADDED compat/zlib/zlib.pc.cmakein Index: compat/zlib/zlib.pc.cmakein ================================================================== --- compat/zlib/zlib.pc.cmakein +++ compat/zlib/zlib.pc.cmakein @@ -0,0 +1,13 @@ +prefix=@CMAKE_INSTALL_PREFIX@ +exec_prefix=@CMAKE_INSTALL_PREFIX@ +libdir=@INSTALL_LIB_DIR@ +sharedlibdir=@INSTALL_LIB_DIR@ +includedir=@INSTALL_INC_DIR@ + +Name: zlib +Description: zlib compression library +Version: @VERSION@ + +Requires: +Libs: -L${libdir} -L${sharedlibdir} -lz +Cflags: -I${includedir} ADDED compat/zlib/zlib.pc.in Index: compat/zlib/zlib.pc.in ================================================================== --- compat/zlib/zlib.pc.in +++ compat/zlib/zlib.pc.in @@ -0,0 +1,13 @@ +prefix=@prefix@ +exec_prefix=@exec_prefix@ +libdir=@libdir@ +sharedlibdir=@sharedlibdir@ +includedir=@includedir@ + +Name: zlib +Description: zlib compression library +Version: @VERSION@ + +Requires: +Libs: -L${libdir} -L${sharedlibdir} -lz +Cflags: -I${includedir} ADDED compat/zlib/zlib2ansi Index: compat/zlib/zlib2ansi ================================================================== --- compat/zlib/zlib2ansi +++ compat/zlib/zlib2ansi @@ -0,0 +1,152 @@ +#!/usr/bin/perl + +# Transform K&R C function definitions into ANSI equivalent. +# +# Author: Paul Marquess +# Version: 1.0 +# Date: 3 October 2006 + +# TODO +# +# Asumes no function pointer parameters. unless they are typedefed. +# Assumes no literal strings that look like function definitions +# Assumes functions start at the beginning of a line + +use strict; +use warnings; + +local $/; +$_ = <>; + +my $sp = qr{ \s* (?: /\* .*? \*/ )? \s* }x; # assume no nested comments + +my $d1 = qr{ $sp (?: [\w\*\s]+ $sp)* $sp \w+ $sp [\[\]\s]* $sp }x ; +my $decl = qr{ $sp (?: \w+ $sp )+ $d1 }xo ; +my $dList = qr{ $sp $decl (?: $sp , $d1 )* $sp ; $sp }xo ; + + +while (s/^ + ( # Start $1 + ( # Start $2 + .*? # Minimal eat content + ( ^ \w [\w\s\*]+ ) # $3 -- function name + \s* # optional whitespace + ) # $2 - Matched up to before parameter list + + \( \s* # Literal "(" + optional whitespace + ( [^\)]+ ) # $4 - one or more anythings except ")" + \s* \) # optional whitespace surrounding a Literal ")" + + ( (?: $dList )+ ) # $5 + + $sp ^ { # literal "{" at start of line + ) # Remember to $1 + //xsom + ) +{ + my $all = $1 ; + my $prefix = $2; + my $param_list = $4 ; + my $params = $5; + + StripComments($params); + StripComments($param_list); + $param_list =~ s/^\s+//; + $param_list =~ s/\s+$//; + + my $i = 0 ; + my %pList = map { $_ => $i++ } + split /\s*,\s*/, $param_list; + my $pMatch = '(\b' . join('|', keys %pList) . 
'\b)\W*$' ; + + my @params = split /\s*;\s*/, $params; + my @outParams = (); + foreach my $p (@params) + { + if ($p =~ /,/) + { + my @bits = split /\s*,\s*/, $p; + my $first = shift @bits; + $first =~ s/^\s*//; + push @outParams, $first; + $first =~ /^(\w+\s*)/; + my $type = $1 ; + push @outParams, map { $type . $_ } @bits; + } + else + { + $p =~ s/^\s+//; + push @outParams, $p; + } + } + + + my %tmp = map { /$pMatch/; $_ => $pList{$1} } + @outParams ; + + @outParams = map { " $_" } + sort { $tmp{$a} <=> $tmp{$b} } + @outParams ; + + print $prefix ; + print "(\n" . join(",\n", @outParams) . ")\n"; + print "{" ; + +} + +# Output any trailing code. +print ; +exit 0; + + +sub StripComments +{ + + no warnings; + + # Strip C & C++ coments + # From the perlfaq + $_[0] =~ + + s{ + /\* ## Start of /* ... */ comment + [^*]*\*+ ## Non-* followed by 1-or-more *'s + ( + [^/*][^*]*\*+ + )* ## 0-or-more things which don't start with / + ## but do end with '*' + / ## End of /* ... */ comment + + | ## OR C++ Comment + // ## Start of C++ comment // + [^\n]* ## followed by 0-or-more non end of line characters + + | ## OR various things which aren't comments: + + ( + " ## Start of " ... " string + ( + \\. ## Escaped char + | ## OR + [^"\\] ## Non "\ + )* + " ## End of " ... " string + + | ## OR + + ' ## Start of ' ... ' string + ( + \\. ## Escaped char + | ## OR + [^'\\] ## Non '\ + )* + ' ## End of ' ... ' string + + | ## OR + + . ## Anything other char + [^/"'\\]* ## Chars which doesn't start a comment, string or escape + ) + }{$2}gxs; + +} ADDED compat/zlib/zutil.c Index: compat/zlib/zutil.c ================================================================== --- compat/zlib/zutil.c +++ compat/zlib/zutil.c @@ -0,0 +1,324 @@ +/* zutil.c -- target dependent utility functions for the compression library + * Copyright (C) 1995-2005, 2010, 2011, 2012 Jean-loup Gailly. 
+ * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* @(#) $Id$ */ + +#include "zutil.h" +#ifndef Z_SOLO +# include "gzguts.h" +#endif + +#ifndef NO_DUMMY_DECL +struct internal_state {int dummy;}; /* for buggy compilers */ +#endif + +z_const char * const z_errmsg[10] = { +"need dictionary", /* Z_NEED_DICT 2 */ +"stream end", /* Z_STREAM_END 1 */ +"", /* Z_OK 0 */ +"file error", /* Z_ERRNO (-1) */ +"stream error", /* Z_STREAM_ERROR (-2) */ +"data error", /* Z_DATA_ERROR (-3) */ +"insufficient memory", /* Z_MEM_ERROR (-4) */ +"buffer error", /* Z_BUF_ERROR (-5) */ +"incompatible version",/* Z_VERSION_ERROR (-6) */ +""}; + + +const char * ZEXPORT zlibVersion() +{ + return ZLIB_VERSION; +} + +uLong ZEXPORT zlibCompileFlags() +{ + uLong flags; + + flags = 0; + switch ((int)(sizeof(uInt))) { + case 2: break; + case 4: flags += 1; break; + case 8: flags += 2; break; + default: flags += 3; + } + switch ((int)(sizeof(uLong))) { + case 2: break; + case 4: flags += 1 << 2; break; + case 8: flags += 2 << 2; break; + default: flags += 3 << 2; + } + switch ((int)(sizeof(voidpf))) { + case 2: break; + case 4: flags += 1 << 4; break; + case 8: flags += 2 << 4; break; + default: flags += 3 << 4; + } + switch ((int)(sizeof(z_off_t))) { + case 2: break; + case 4: flags += 1 << 6; break; + case 8: flags += 2 << 6; break; + default: flags += 3 << 6; + } +#ifdef DEBUG + flags += 1 << 8; +#endif +#if defined(ASMV) || defined(ASMINF) + flags += 1 << 9; +#endif +#ifdef ZLIB_WINAPI + flags += 1 << 10; +#endif +#ifdef BUILDFIXED + flags += 1 << 12; +#endif +#ifdef DYNAMIC_CRC_TABLE + flags += 1 << 13; +#endif +#ifdef NO_GZCOMPRESS + flags += 1L << 16; +#endif +#ifdef NO_GZIP + flags += 1L << 17; +#endif +#ifdef PKZIP_BUG_WORKAROUND + flags += 1L << 20; +#endif +#ifdef FASTEST + flags += 1L << 21; +#endif +#if defined(STDC) || defined(Z_HAVE_STDARG_H) +# ifdef NO_vsnprintf + flags += 1L << 25; +# ifdef HAS_vsprintf_void + flags += 1L << 26; +# endif +# else +# ifdef HAS_vsnprintf_void + flags += 1L << 26; +# endif +# endif +#else + flags += 1L << 24; +# ifdef NO_snprintf + flags += 1L << 25; +# ifdef HAS_sprintf_void + flags += 1L << 26; +# endif +# else +# ifdef HAS_snprintf_void + flags += 1L << 26; +# endif +# endif +#endif + return flags; +} + +#ifdef DEBUG + +# ifndef verbose +# define verbose 0 +# endif +int ZLIB_INTERNAL z_verbose = verbose; + +void ZLIB_INTERNAL z_error (m) + char *m; +{ + fprintf(stderr, "%s\n", m); + exit(1); +} +#endif + +/* exported to allow conversion of error code to string for compress() and + * uncompress() + */ +const char * ZEXPORT zError(err) + int err; +{ + return ERR_MSG(err); +} + +#if defined(_WIN32_WCE) + /* The Microsoft C Run-Time Library for Windows CE doesn't have + * errno. We define it as a global variable to simplify porting. + * Its value is always 0 and should not be used. + */ + int errno = 0; +#endif + +#ifndef HAVE_MEMCPY + +void ZLIB_INTERNAL zmemcpy(dest, source, len) + Bytef* dest; + const Bytef* source; + uInt len; +{ + if (len == 0) return; + do { + *dest++ = *source++; /* ??? to be unrolled */ + } while (--len != 0); +} + +int ZLIB_INTERNAL zmemcmp(s1, s2, len) + const Bytef* s1; + const Bytef* s2; + uInt len; +{ + uInt j; + + for (j = 0; j < len; j++) { + if (s1[j] != s2[j]) return 2*(s1[j] > s2[j])-1; + } + return 0; +} + +void ZLIB_INTERNAL zmemzero(dest, len) + Bytef* dest; + uInt len; +{ + if (len == 0) return; + do { + *dest++ = 0; /* ??? 
to be unrolled */ + } while (--len != 0); +} +#endif + +#ifndef Z_SOLO + +#ifdef SYS16BIT + +#ifdef __TURBOC__ +/* Turbo C in 16-bit mode */ + +# define MY_ZCALLOC + +/* Turbo C malloc() does not allow dynamic allocation of 64K bytes + * and farmalloc(64K) returns a pointer with an offset of 8, so we + * must fix the pointer. Warning: the pointer must be put back to its + * original form in order to free it, use zcfree(). + */ + +#define MAX_PTR 10 +/* 10*64K = 640K */ + +local int next_ptr = 0; + +typedef struct ptr_table_s { + voidpf org_ptr; + voidpf new_ptr; +} ptr_table; + +local ptr_table table[MAX_PTR]; +/* This table is used to remember the original form of pointers + * to large buffers (64K). Such pointers are normalized with a zero offset. + * Since MSDOS is not a preemptive multitasking OS, this table is not + * protected from concurrent access. This hack doesn't work anyway on + * a protected system like OS/2. Use Microsoft C instead. + */ + +voidpf ZLIB_INTERNAL zcalloc (voidpf opaque, unsigned items, unsigned size) +{ + voidpf buf = opaque; /* just to make some compilers happy */ + ulg bsize = (ulg)items*size; + + /* If we allocate less than 65520 bytes, we assume that farmalloc + * will return a usable pointer which doesn't have to be normalized. + */ + if (bsize < 65520L) { + buf = farmalloc(bsize); + if (*(ush*)&buf != 0) return buf; + } else { + buf = farmalloc(bsize + 16L); + } + if (buf == NULL || next_ptr >= MAX_PTR) return NULL; + table[next_ptr].org_ptr = buf; + + /* Normalize the pointer to seg:0 */ + *((ush*)&buf+1) += ((ush)((uch*)buf-0) + 15) >> 4; + *(ush*)&buf = 0; + table[next_ptr++].new_ptr = buf; + return buf; +} + +void ZLIB_INTERNAL zcfree (voidpf opaque, voidpf ptr) +{ + int n; + if (*(ush*)&ptr != 0) { /* object < 64K */ + farfree(ptr); + return; + } + /* Find the original pointer */ + for (n = 0; n < next_ptr; n++) { + if (ptr != table[n].new_ptr) continue; + + farfree(table[n].org_ptr); + while (++n < next_ptr) { + table[n-1] = table[n]; + } + next_ptr--; + return; + } + ptr = opaque; /* just to make some compilers happy */ + Assert(0, "zcfree: ptr not found"); +} + +#endif /* __TURBOC__ */ + + +#ifdef M_I86 +/* Microsoft C in 16-bit mode */ + +# define MY_ZCALLOC + +#if (!defined(_MSC_VER) || (_MSC_VER <= 600)) +# define _halloc halloc +# define _hfree hfree +#endif + +voidpf ZLIB_INTERNAL zcalloc (voidpf opaque, uInt items, uInt size) +{ + if (opaque) opaque = 0; /* to make compiler happy */ + return _halloc((long)items, size); +} + +void ZLIB_INTERNAL zcfree (voidpf opaque, voidpf ptr) +{ + if (opaque) opaque = 0; /* to make compiler happy */ + _hfree(ptr); +} + +#endif /* M_I86 */ + +#endif /* SYS16BIT */ + + +#ifndef MY_ZCALLOC /* Any system without a special alloc function */ + +#ifndef STDC +extern voidp malloc OF((uInt size)); +extern voidp calloc OF((uInt items, uInt size)); +extern void free OF((voidpf ptr)); +#endif + +voidpf ZLIB_INTERNAL zcalloc (opaque, items, size) + voidpf opaque; + unsigned items; + unsigned size; +{ + if (opaque) items += size - size; /* make compiler happy */ + return sizeof(uInt) > 2 ? 
(voidpf)malloc(items * size) : + (voidpf)calloc(items, size); +} + +void ZLIB_INTERNAL zcfree (opaque, ptr) + voidpf opaque; + voidpf ptr; +{ + free(ptr); + if (opaque) return; /* make compiler happy */ +} + +#endif /* MY_ZCALLOC */ + +#endif /* !Z_SOLO */ ADDED compat/zlib/zutil.h Index: compat/zlib/zutil.h ================================================================== --- compat/zlib/zutil.h +++ compat/zlib/zutil.h @@ -0,0 +1,253 @@ +/* zutil.h -- internal interface and configuration of the compression library + * Copyright (C) 1995-2013 Jean-loup Gailly. + * For conditions of distribution and use, see copyright notice in zlib.h + */ + +/* WARNING: this file should *not* be used by applications. It is + part of the implementation of the compression library and is + subject to change. Applications should only use zlib.h. + */ + +/* @(#) $Id$ */ + +#ifndef ZUTIL_H +#define ZUTIL_H + +#ifdef HAVE_HIDDEN +# define ZLIB_INTERNAL __attribute__((visibility ("hidden"))) +#else +# define ZLIB_INTERNAL +#endif + +#include "zlib.h" + +#if defined(STDC) && !defined(Z_SOLO) +# if !(defined(_WIN32_WCE) && defined(_MSC_VER)) +# include +# endif +# include +# include +#endif + +#ifdef Z_SOLO + typedef long ptrdiff_t; /* guess -- will be caught if guess is wrong */ +#endif + +#ifndef local +# define local static +#endif +/* compile with -Dlocal if your debugger can't find static symbols */ + +typedef unsigned char uch; +typedef uch FAR uchf; +typedef unsigned short ush; +typedef ush FAR ushf; +typedef unsigned long ulg; + +extern z_const char * const z_errmsg[10]; /* indexed by 2-zlib_error */ +/* (size given to avoid silly warnings with Visual C++) */ + +#define ERR_MSG(err) z_errmsg[Z_NEED_DICT-(err)] + +#define ERR_RETURN(strm,err) \ + return (strm->msg = ERR_MSG(err), (err)) +/* To be used only when the state is known to be valid */ + + /* common constants */ + +#ifndef DEF_WBITS +# define DEF_WBITS MAX_WBITS +#endif +/* default windowBits for decompression. 
MAX_WBITS is for compression only */ + +#if MAX_MEM_LEVEL >= 8 +# define DEF_MEM_LEVEL 8 +#else +# define DEF_MEM_LEVEL MAX_MEM_LEVEL +#endif +/* default memLevel */ + +#define STORED_BLOCK 0 +#define STATIC_TREES 1 +#define DYN_TREES 2 +/* The three kinds of block type */ + +#define MIN_MATCH 3 +#define MAX_MATCH 258 +/* The minimum and maximum match lengths */ + +#define PRESET_DICT 0x20 /* preset dictionary flag in zlib header */ + + /* target dependencies */ + +#if defined(MSDOS) || (defined(WINDOWS) && !defined(WIN32)) +# define OS_CODE 0x00 +# ifndef Z_SOLO +# if defined(__TURBOC__) || defined(__BORLANDC__) +# if (__STDC__ == 1) && (defined(__LARGE__) || defined(__COMPACT__)) + /* Allow compilation with ANSI keywords only enabled */ + void _Cdecl farfree( void *block ); + void *_Cdecl farmalloc( unsigned long nbytes ); +# else +# include +# endif +# else /* MSC or DJGPP */ +# include +# endif +# endif +#endif + +#ifdef AMIGA +# define OS_CODE 0x01 +#endif + +#if defined(VAXC) || defined(VMS) +# define OS_CODE 0x02 +# define F_OPEN(name, mode) \ + fopen((name), (mode), "mbc=60", "ctx=stm", "rfm=fix", "mrs=512") +#endif + +#if defined(ATARI) || defined(atarist) +# define OS_CODE 0x05 +#endif + +#ifdef OS2 +# define OS_CODE 0x06 +# if defined(M_I86) && !defined(Z_SOLO) +# include +# endif +#endif + +#if defined(MACOS) || defined(TARGET_OS_MAC) +# define OS_CODE 0x07 +# ifndef Z_SOLO +# if defined(__MWERKS__) && __dest_os != __be_os && __dest_os != __win32_os +# include /* for fdopen */ +# else +# ifndef fdopen +# define fdopen(fd,mode) NULL /* No fdopen() */ +# endif +# endif +# endif +#endif + +#ifdef TOPS20 +# define OS_CODE 0x0a +#endif + +#ifdef WIN32 +# ifndef __CYGWIN__ /* Cygwin is Unix, not Win32 */ +# define OS_CODE 0x0b +# endif +#endif + +#ifdef __50SERIES /* Prime/PRIMOS */ +# define OS_CODE 0x0f +#endif + +#if defined(_BEOS_) || defined(RISCOS) +# define fdopen(fd,mode) NULL /* No fdopen() */ +#endif + +#if (defined(_MSC_VER) && (_MSC_VER > 600)) && !defined __INTERIX +# if defined(_WIN32_WCE) +# define fdopen(fd,mode) NULL /* No fdopen() */ +# ifndef _PTRDIFF_T_DEFINED + typedef int ptrdiff_t; +# define _PTRDIFF_T_DEFINED +# endif +# else +# define fdopen(fd,type) _fdopen(fd,type) +# endif +#endif + +#if defined(__BORLANDC__) && !defined(MSDOS) + #pragma warn -8004 + #pragma warn -8008 + #pragma warn -8066 +#endif + +/* provide prototypes for these when building zlib without LFS */ +#if !defined(_WIN32) && \ + (!defined(_LARGEFILE64_SOURCE) || _LFS64_LARGEFILE-0 == 0) + ZEXTERN uLong ZEXPORT adler32_combine64 OF((uLong, uLong, z_off_t)); + ZEXTERN uLong ZEXPORT crc32_combine64 OF((uLong, uLong, z_off_t)); +#endif + + /* common defaults */ + +#ifndef OS_CODE +# define OS_CODE 0x03 /* assume Unix */ +#endif + +#ifndef F_OPEN +# define F_OPEN(name, mode) fopen((name), (mode)) +#endif + + /* functions */ + +#if defined(pyr) || defined(Z_SOLO) +# define NO_MEMCPY +#endif +#if defined(SMALL_MEDIUM) && !defined(_MSC_VER) && !defined(__SC__) + /* Use our own functions for small and medium model with MSC <= 5.0. + * You may have to use the same strategy for Borland C (untested). + * The __SC__ check is for Symantec. 
+ */ +# define NO_MEMCPY +#endif +#if defined(STDC) && !defined(HAVE_MEMCPY) && !defined(NO_MEMCPY) +# define HAVE_MEMCPY +#endif +#ifdef HAVE_MEMCPY +# ifdef SMALL_MEDIUM /* MSDOS small or medium model */ +# define zmemcpy _fmemcpy +# define zmemcmp _fmemcmp +# define zmemzero(dest, len) _fmemset(dest, 0, len) +# else +# define zmemcpy memcpy +# define zmemcmp memcmp +# define zmemzero(dest, len) memset(dest, 0, len) +# endif +#else + void ZLIB_INTERNAL zmemcpy OF((Bytef* dest, const Bytef* source, uInt len)); + int ZLIB_INTERNAL zmemcmp OF((const Bytef* s1, const Bytef* s2, uInt len)); + void ZLIB_INTERNAL zmemzero OF((Bytef* dest, uInt len)); +#endif + +/* Diagnostic functions */ +#ifdef DEBUG +# include + extern int ZLIB_INTERNAL z_verbose; + extern void ZLIB_INTERNAL z_error OF((char *m)); +# define Assert(cond,msg) {if(!(cond)) z_error(msg);} +# define Trace(x) {if (z_verbose>=0) fprintf x ;} +# define Tracev(x) {if (z_verbose>0) fprintf x ;} +# define Tracevv(x) {if (z_verbose>1) fprintf x ;} +# define Tracec(c,x) {if (z_verbose>0 && (c)) fprintf x ;} +# define Tracecv(c,x) {if (z_verbose>1 && (c)) fprintf x ;} +#else +# define Assert(cond,msg) +# define Trace(x) +# define Tracev(x) +# define Tracevv(x) +# define Tracec(c,x) +# define Tracecv(c,x) +#endif + +#ifndef Z_SOLO + voidpf ZLIB_INTERNAL zcalloc OF((voidpf opaque, unsigned items, + unsigned size)); + void ZLIB_INTERNAL zcfree OF((voidpf opaque, voidpf ptr)); +#endif + +#define ZALLOC(strm, items, size) \ + (*((strm)->zalloc))((strm)->opaque, (items), (size)) +#define ZFREE(strm, addr) (*((strm)->zfree))((strm)->opaque, (voidpf)(addr)) +#define TRY_FREE(s, p) {if (p) ZFREE(s, p);} + +/* Reverse the bytes in a 32-bit value */ +#define ZSWAP32(q) ((((q) >> 24) & 0xff) + (((q) >> 8) & 0xff00) + \ + (((q) & 0xff00) << 8) + (((q) & 0xff) << 24)) + +#endif /* ZUTIL_H */ ADDED configure Index: configure ================================================================== --- configure +++ configure @@ -0,0 +1,3 @@ +#!/bin/sh +dir="`dirname "$0"`/autosetup" +WRAPPER="$0"; export WRAPPER; exec "`$dir/find-tclsh`" "$dir/autosetup" "$@" DELETED cvs2fossil.txt Index: cvs2fossil.txt ================================================================== --- cvs2fossil.txt +++ cvs2fossil.txt @@ -1,66 +0,0 @@ - -Known problems and areas to work on -=================================== - -* Not yet able to handle the specification of multiple projects - for one CVS repository. I.e. I can, for example, import all of - tcllib, or a single subproject of tcllib, like tklib, but not - multiple sub-projects in one go. - -* Consider to rework the breaker- and sort-passes so that they - do not need all changesets as objects in memory. - - Current memory consumption after all changesets are loaded: - - bwidget 6971627 6.6 - cvs-memchan 4634049 4.4 - cvs-sqlite 45674501 43.6 - cvs-trf 8781289 8.4 - faqs 2835116 2.7 - libtommath 4405066 4.2 - mclistbox 3350190 3.2 - newclock 5020460 4.8 - oocore 4064574 3.9 - sampleextension 4729932 4.5 - tclapps 8482135 8.1 - tclbench 4116887 3.9 - tcl_bignum 2545192 2.4 - tclconfig 4105042 3.9 - tcllib 31707688 30.2 - tcltutorial 3512048 3.3 - tcl 109926382 104.8 - thread 8953139 8.5 - tklib 13935220 13.3 - tk 66149870 63.1 - widget 2625609 2.5 - -* Look at the dependencies on external packages and consider - which of them can be moved into the importer, either as a - simple utility command, or wholesale. - - struct::list - assign, map, reverse, filter - - Very few and self-contained commands. 
- - struct::set - size, empty, contains, add, include, exclude, - intersect, subsetof - - Most of the core commands. - - fileutil - cat, appendToFile, writeFile, - tempfile, stripPath, test - - fileutil::traverse - In toto - - struct::graph - In toto - - snit - In toto - - sqlite3 - In toto Index: debian/makedeb.sh ================================================================== --- debian/makedeb.sh +++ debian/makedeb.sh @@ -1,21 +1,19 @@ #!/bin/bash # A quick hack to generate a Debian package of fossil. i took most of this # from Martin Krafft's "The Debian System" book. DEB_REV=${1-1} # .deb package build/revision number. -PACKAGE_DEBNAME=fossil-scm +PACKAGE_DEBNAME=fossil THISDIR=${PWD} if uname -a | grep -i nexenta &>/dev/null; then # Assume NexentaOS/GnuSolaris: - DEB_PLATFORM=nexenta DEB_ARCH_NAME=solaris-i386 DEB_ARCH_PKGDEPENDS="sunwcsl" # for -lsocket else - DEB_PLATFORM=${DEB_PLATFORM-ubuntu-gutsy} - DEB_ARCH_NAME=i386 + DEB_ARCH_NAME=$(dpkg --print-architecture) fi SRCDIR=$(cd ..; pwd) test -e ${SRCDIR}/fossil || { echo "This script must be run from a BUILT copy of the source tree." @@ -41,11 +39,11 @@ rm -fr DEBIAN mkdir DEBIAN PACKAGE_VERSION=$(date +%Y.%m.%d) PACKAGE_DEB_VERSION=${PACKAGE_VERSION}-${DEB_REV} -DEBFILE=${THISDIR}/${PACKAGE_DEBNAME}-${PACKAGE_DEB_VERSION}-dev-${DEB_ARCH_NAME}-${DEB_PLATFORM}.deb +DEBFILE=${THISDIR}/${PACKAGE_DEBNAME}-${PACKAGE_DEB_VERSION}-dev-${DEB_ARCH_NAME}.deb PACKAGE_TIME=$(/bin/date) rm -f ${DEBFILE} echo "Creating .deb package [${DEBFILE}]..." @@ -54,11 +52,11 @@ true && { echo "Generating Debian-specific files..." COPYRIGHT=${DEBLOCALPREFIX}/share/doc/${PACKAGE_DEBNAME}/copyright cat < ${COPYRIGHT} -This package was created by stephan beal +This package was created by fossil-scm on ${PACKAGE_TIME}. The original sources for fossil can be downloaded for free from: http://www.fossil-scm.org/ @@ -74,11 +72,11 @@ ${PACKAGE_DEBNAME} ${PACKAGE_DEB_VERSION}; urgency=low This release has no changes over the core source distribution. It has simply been Debianized. -Packaged by stephan beal on +Packaged by fossil-dev on ${PACKAGE_TIME}. EOF } @@ -87,15 +85,15 @@ true && { CONTROL=DEBIAN/control echo "Generating ${CONTROL}..." cat < ${CONTROL} Package: ${PACKAGE_DEBNAME} -Section: devel +Section: vcs Priority: optional -Maintainer: stephan beal +Maintainer: fossil-dev Architecture: ${DEB_ARCH_NAME} -Depends: libc6-dev ${DEB_ARCH_PKGDEPENDS+, }${DEB_ARCH_PKGDEPENDS} +Depends: libc6 ${DEB_ARCH_PKGDEPENDS+, }${DEB_ARCH_PKGDEPENDS} Version: ${PACKAGE_DEB_VERSION} Description: Fossil is a unique SCM (Software Configuration Management) system. This package contains the Fossil binary for *buntu/Debian systems. Fossil is a unique SCM program which supports distributed source control management using local repositories, access over HTTP CGI, or using the ADDED fossil.1 Index: fossil.1 ================================================================== --- fossil.1 +++ fossil.1 @@ -0,0 +1,100 @@ +.TH FOSSIL "1" "February 2015" "http://fossil-scm.org" "User Commands" +.SH NAME +fossil \- Distributed Version Control System +.SH SYNOPSIS +.B fossil +\fIhelp\fR +.br +.B fossil +\fIhelp COMMAND\fR +.br +.B fossil +\fICOMMAND [OPTIONS]\fR +.SH DESCRIPTION +Fossil is a distributed version control system (DVCS) with built-in +wiki, ticket tracker, CGI/http interface, and http server. 
+ +.SH Common COMMANDs: + +add clean import pull stash +.br +addremove clone info purge status +.br +all commit init push sync +.br +annotate diff json rebuild tag +.br +bisect export ls remote-url timeline +.br +blame extras merge revert ui +.br +branch finfo mv rm undo +.br +bundle fusefs open rss unpublish +.br +cat gdiff praise settings update +.br +changes help publish sqlite3 version + +.SH FEATURES + +Features as described on the fossil home page. + +.HP +1. +.B Integrated Bug Tracking, Wiki, & Technotes +- In addition to doing distributed version control like Git and +Mercurial, Fossil also supports bug tracking, wiki, and technotes. + +.HP +2. +.B Built-in Web Interface +- Fossil has a built-in and intuitive web interface that promotes +project situational awareness. Type "fossil ui" and Fossil automatically +opens a web browser to a page that shows detailed graphical history and +status information on that project. + +.HP +3. +.B Self-Contained +- Fossil is a single self-contained stand-alone executable. To install, +simply download a precompiled binary for Linux, Mac, OpenBSD, or Windows +and put it on your $PATH. Easy-to-compile source code is available for +users on other platforms. + +.HP +4. +.B Simple Networking +- No custom protocols or TCP ports. Fossil uses plain old HTTP (or HTTPS +or SSH) for all network communications, so it works fine from behind +restrictive firewalls, including proxies. The protocol is bandwidth +efficient to the point that Fossil can be used comfortably over dial-up. + +.HP +5. +.B CGI/SCGI Enabled +- No server is required, but if you want to set one up, Fossil supports +four simple server configurations. + +.HP +6. +.B Autosync +- Fossil supports "autosync" mode which helps to keep projects moving +forward by reducing the amount of needless forking and merging often +associated with distributed projects. + +.HP +7. +.B Robust & Reliable +- Fossil stores content using an enduring file format in an SQLite +database so that transactions are atomic even if interrupted by a +power loss or system crash. Automatic self-checks verify that all +aspects of the repository are consistent prior to each commit. In +over seven years of operation, no work has ever been lost after +having been committed to a Fossil repository. + +.SH DOCUMENTATION +http://www.fossil-scm.org/ +.br +.B fossil +\fIui\fR DELETED fossil.nsi Index: fossil.nsi ================================================================== --- fossil.nsi +++ fossil.nsi @@ -1,59 +0,0 @@ -; example2.nsi -; -; This script is based on example1.nsi, but adds uninstall support -; and (optionally) start menu shortcuts. -; -; It will install notepad.exe into a directory that the user selects, -; - -; The name of the installer -Name "Fossil" - -; The file to write -OutFile "fossil-setup-7c0bd3ee08.exe" - -; The default installation directory -InstallDir $PROGRAMFILES\Fossil -; Registry key to check for directory (so if you install again, it will -; overwrite the old one automatically) -InstallDirRegKey HKLM SOFTWARE\Fossil "Install_Dir" - -; The text to prompt the user to enter a directory -ComponentText "This will install fossil on your computer." -; The text to prompt the user to enter a directory -DirText "Choose a directory to install in to:" - -; The stuff to install -Section "Fossil (required)" - ; Set output path to the installation directory. 
- SetOutPath $INSTDIR - ; Put file there - File ".\build\fossil.exe" - ; Write the installation path into the registry - WriteRegStr HKLM SOFTWARE\Fossil "Install_Dir" "$INSTDIR" - ; Write the uninstall keys for Windows - WriteRegStr HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\Fossil" "DisplayName" "Fossil (remove only)" - WriteRegStr HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\Fossil" "UninstallString" '"$INSTDIR\uninstall.exe"' - WriteUninstaller "uninstall.exe" -SectionEnd - - -; uninstall stuff - -UninstallText "This will uninstall fossil. Hit next to continue." - -; special uninstall section. -Section "Uninstall" - ; remove registry keys - DeleteRegKey HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\Fossil" - DeleteRegKey HKLM SOFTWARE\Fossil - ; remove files - Delete $INSTDIR\fossil.exe - ; MUST REMOVE UNINSTALLER, too - Delete $INSTDIR\uninstall.exe - ; remove shortcuts, if any. - RMDir "$SMPROGRAMS\Fossil" - RMDir "$INSTDIR" -SectionEnd - -; eof DELETED kktodo.wiki Index: kktodo.wiki ================================================================== --- kktodo.wiki +++ kktodo.wiki @@ -1,69 +0,0 @@ -

kkinnell

.plan -- Fossil, the DG Bwahahahaha! The cover art could be an homo
erectus skull lying on some COBOL code...

 1. Command line interface reference docs
      * Finish initial pages.
      * Start on tech-spec (serious, not "chatty") reference pages.
      * Edit, edit, edit.

 2. Support docs
      * Basic explanation of Distributed SCM.
      * Tutorial
          * Silly source. Start with existing dir struct.
          * Repository. Creating and populating.
          * Where? Local, Intranet, Internet.
          * Who? Project size defined by size of code base versus
            number of developers. Size matters.
          * How?
              * Open, close, commit, checkout, update, merge.
          * Hmmm. Experimenting.
              * The road less travelled, or where'd that fork come from?
          * Oops! Going back in time.
              * Versions
                  * What is a version?
                  * Is it a "version" or a "tag"?
                  * DSCM redux: Revisionist versioning.
      * Basic explanation of merge.
          1. Leaves, branches and baselines: We want a shrubbery!
          2. update merges vs. merge merges. All merges are equal, but
             some are more equal than others.

 3. Configuration

 42. General
      * Co-ordinate style and tone with drh, other devs. (documentation
        standard? yuck.)

 * Tips & tricks.

 * Fossil and Sqlite
      * Get a word in for Mrinal Kant's SQLite Manager XUL app. Great tool.
- - * Th (code groveling [and code groveling {and code groveling ... } ... ] ... ) DELETED rse-notes.txt Index: rse-notes.txt ================================================================== --- rse-notes.txt +++ rse-notes.txt @@ -1,198 +0,0 @@ -From: "Ralf S. Engelschall" -Date: October 18, 2008 1:40:53 PM EDT -To: drh@hwaci.com -Subject: Some Fossil Feedback - -I've today looked at your Fossil project (latest version from your -repository). That's a _really_ interesting and very promising DVCS. -While I looked around and tested it, I stumbled over some issues I want -to share with you: - -o No tarball - - You currently still do not provide any tarballs of the Fossil sources. - Sure, you are still hacking wild on the code and one can on-the-fly - fetch a ZIP archive, but that's a manual process. For packagers (like - me in the OpenPKG project) this is very nasty and just means that - Fossil will usually still not be packaged. As a result it will be not - spreaded as much as possible and this way you still do not get as much - as possible feedback. Hence, I recommend that you let daily snapshot - tarballs (or ZIP files) be rolled. This would be great. - -o UUID - - Under http://www.fossil-scm.org/index.html/doc/tip/www/concepts.wiki - you describe the concepts and you clearly name the artifact ID a - "UUID" and even say that this is a "Universally Unique Identifier". - Unfortunately, a UUID is a 128-bit entity standardized by the ISO - (under ISO/IEC 11578:1996) and IETF (under RFC-4122) and hence it is - *VERY MUCH* confusing and unclean that your 160-bit SHA1-based ID is - called "UUID" in Fossil. - - I *STRONGLY* recommend you to either use real UUIDs (128-bit - entities, where a UUID of version 5 even is SHA1-based!) or you name - your 160-bit SHA1 entities differently (perhaps AID for "Artifact - Identifier"?). 
- -o "fossil cgi + ADDED skins/enhanced1/css.txt Index: skins/enhanced1/css.txt ================================================================== --- skins/enhanced1/css.txt +++ skins/enhanced1/css.txt @@ -0,0 +1,148 @@ +/* General settings for the entire page */ +body { + margin: 0ex 1ex; + padding: 0px; + background-color: white; + font-family: sans-serif; + -moz-text-size-adjust: none; + -webkit-text-size-adjust: none; + -mx-text-size-adjust: none; +} + +/* The project logo in the upper left-hand corner of each page */ +div.logo { + display: table-cell; + text-align: center; + vertical-align: bottom; + font-weight: bold; + color: #558195; + min-width: 200px; + white-space: nowrap; +} + +/* The page title centered at the top of each page */ +div.title { + display: table-cell; + font-size: 2em; + font-weight: bold; + text-align: center; + padding: 0 0 0 1em; + color: #558195; + vertical-align: bottom; + width: 100%; +} + +/* The login status message in the top right-hand corner */ +div.status { + display: table-cell; + text-align: right; + vertical-align: bottom; + color: #558195; + font-size: 0.8em; + font-weight: bold; + min-width: 200px; + white-space: nowrap; +} + +/* The header across the top of the page */ +div.header { + display: table; + width: 100%; +} + +/* The main menu bar that appears at the top of the page beneath +** the header */ +div.mainmenu { + padding: 5px 10px 5px 10px; + font-size: 0.9em; + font-weight: bold; + text-align: center; + letter-spacing: 1px; + background-color: #558195; + border-top-left-radius: 8px; + border-top-right-radius: 8px; + color: white; +} + +/* The submenu bar that *sometimes* appears below the main menu */ +div.submenu, div.sectionmenu { + padding: 3px 10px 3px 0px; + font-size: 0.9em; + text-align: center; + background-color: #456878; + color: white; +} +div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, +div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited { + padding: 3px 10px 3px 10px; + color: white; + text-decoration: none; +} +div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover { + color: #558195; + background-color: white; +} + +/* All page content from the bottom of the menu or submenu down to +** the footer */ +div.content { + padding: 0ex 1ex 1ex 1ex; + border: solid #aaa; + border-width: 1px; +} + +/* Some pages have section dividers */ +div.section { + margin-bottom: 0px; + margin-top: 1em; + padding: 1px 1px 1px 1px; + font-size: 1.2em; + font-weight: bold; + background-color: #558195; + color: white; + white-space: nowrap; +} + +/* The "Date" that occurs on the left hand side of timelines */ +div.divider { + background: #a1c4d4; + border: 2px #558195 solid; + font-size: 1em; font-weight: normal; + padding: .25em; + margin: .2em 0 .2em 0; + float: left; + clear: left; + white-space: nowrap; +} + +/* The footer at the very bottom of the page */ +div.footer { + clear: both; + font-size: 0.8em; + padding: 5px 10px 5px 10px; + text-align: right; + background-color: #558195; + border-bottom-left-radius: 8px; + border-bottom-right-radius: 8px; + color: white; +} + +/* Hyperlink colors in the footer */ +div.footer a { color: white; } +div.footer a:link { color: white; } +div.footer a:visited { color: white; } +div.footer a:hover { background-color: white; color: #558195; } + +/* verbatim blocks */ +pre.verbatim { + background-color: #f5f5f5; + padding: 0.5em; + white-space: pre-wrap; +} + +/* The label/value pairs on (for example) the ci page */ +table.label-value th { + 
vertical-align: top; + text-align: right; + padding: 0.2ex 2ex; +} ADDED skins/enhanced1/details.txt Index: skins/enhanced1/details.txt ================================================================== --- skins/enhanced1/details.txt +++ skins/enhanced1/details.txt @@ -0,0 +1,4 @@ +timeline-arrowheads: 1 +timeline-circle-nodes: 0 +timeline-color-graph-lines: 0 +white-foreground: 0 ADDED skins/enhanced1/footer.txt Index: skins/enhanced1/footer.txt ================================================================== --- skins/enhanced1/footer.txt +++ skins/enhanced1/footer.txt @@ -0,0 +1,25 @@ + + ADDED skins/enhanced1/header.txt Index: skins/enhanced1/header.txt ================================================================== --- skins/enhanced1/header.txt +++ skins/enhanced1/header.txt @@ -0,0 +1,138 @@ + + + +$<project_name>: $<title> + + + + +
+ +
$</div> + <div class="status"><th1> + if {[info exists login]} { + puts "Logged in as $login" + } else { + puts "Not logged in" + } + </th1></nobr><small><div id="clock"></div></small></div> +</div> +<script> +function updateClock(){ + var e = document.getElementById("clock"); + if(e){ + var d = new Date(); + function f(n) { + return n < 10 ? '0' + n : n; + } + e.innerHTML = d.getUTCFullYear()+ '-' + + f(d.getUTCMonth() + 1) + '-' + + f(d.getUTCDate()) + ' ' + + f(d.getUTCHours()) + ':' + + f(d.getUTCMinutes()); + setTimeout("updateClock();",(60-d.getUTCSeconds())*1000); + } +} +updateClock(); +</script> +<div class="mainmenu"> +<th1> +proc menulink {url name} { + upvar home home + html "<a href='$home$url'>$name</a>\n" +} +menulink $index_page Home +menulink /help Help +if {[anycap jor]} { + menulink /timeline Timeline +} +if {[anoncap oh]} { + menulink /dir?ci=tip Files +} +if {[anoncap o]} { + menulink /brlist Branches + menulink /taglist Tags +} +if {[anoncap r]} { + menulink /ticket Tickets +} +if {[anoncap j]} { + menulink /wiki Wiki +} +if {[hascap s]} { + menulink /setup Admin +} elseif {[hascap a]} { + menulink /setup_ulist Users +} +if {[info exists login]} { + menulink /login Logout +} else { + menulink /login Login +} +</th1></div> ADDED skins/khaki/css.txt Index: skins/khaki/css.txt ================================================================== --- skins/khaki/css.txt +++ skins/khaki/css.txt @@ -0,0 +1,146 @@ +/* General settings for the entire page */ +body { + margin: 0ex 0ex; + padding: 0px; + background-color: #fef3bc; + font-family: sans-serif; + -moz-text-size-adjust: none; + -webkit-text-size-adjust: none; + -mx-text-size-adjust: none; +} + +/* The project logo in the upper left-hand corner of each page */ +div.logo { + display: inline; + text-align: center; + vertical-align: bottom; + font-weight: bold; + font-size: 2.5em; + color: #a09048; + white-space: nowrap; +} + +/* The page title centered at the top of each page */ +div.title { + display: table-cell; + font-size: 2em; + font-weight: bold; + text-align: left; + padding: 0 0 0 5px; + color: #a09048; + vertical-align: bottom; + width: 100%; +} + +/* The login status message in the top right-hand corner */ +div.status { + display: table-cell; + text-align: right; + vertical-align: bottom; + color: #a09048; + padding: 5px 5px 0 0; + font-size: 0.8em; + font-weight: bold; + white-space: nowrap; +} + +/* The header across the top of the page */ +div.header { + display: table; + width: 100%; +} + +/* The main menu bar that appears at the top of the page beneath +** the header */ +div.mainmenu { + padding: 5px 10px 5px 10px; + font-size: 0.9em; + font-weight: bold; + text-align: center; + letter-spacing: 1px; + background-color: #a09048; + color: black; +} + +/* The submenu bar that *sometimes* appears below the main menu */ +div.submenu, div.sectionmenu { + padding: 3px 10px 3px 0px; + font-size: 0.9em; + text-align: center; + background-color: #c0af58; + color: white; +} +div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, +div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited { + padding: 3px 10px 3px 10px; + color: white; + text-decoration: none; +} +div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover { + color: #a09048; + background-color: white; +} + +/* All page content from the bottom of the menu or submenu down to +** the footer */ +div.content { + padding: 1ex 5px; +} +div.content a { color: #706532; } +div.content a:link { color: #706532; } 
+div.content a:visited { color: #704032; } +div.content a:hover { background-color: white; color: #706532; } + +/* Some pages have section dividers */ +div.section { + margin-bottom: 0px; + margin-top: 1em; + padding: 3px 3px 0 3px; + font-size: 1.2em; + font-weight: bold; + background-color: #a09048; + color: white; + white-space: nowrap; +} + +/* The "Date" that occurs on the left hand side of timelines */ +div.divider { + background: #e1d498; + border: 2px #a09048 solid; + font-size: 1em; font-weight: normal; + padding: .25em; + margin: .2em 0 .2em 0; + float: left; + clear: left; + white-space: nowrap; +} + +/* The footer at the very bottom of the page */ +div.footer { + font-size: 0.8em; + margin-top: 12px; + padding: 5px 10px 5px 10px; + text-align: right; + background-color: #a09048; + color: white; +} + +/* Hyperlink colors */ +div.footer a { color: white; } +div.footer a:link { color: white; } +div.footer a:visited { color: white; } +div.footer a:hover { background-color: white; color: #558195; } + +/* <verbatim> blocks */ +pre.verbatim { + background-color: #f5f5f5; + padding: 0.5em; + white-space: pre-wrap; +} + +/* The label/value pairs on (for example) the ci page */ +table.label-value th { + vertical-align: top; + text-align: right; + padding: 0.2ex 2ex; +} ADDED skins/khaki/details.txt Index: skins/khaki/details.txt ================================================================== --- skins/khaki/details.txt +++ skins/khaki/details.txt @@ -0,0 +1,4 @@ +timeline-arrowheads: 1 +timeline-circle-nodes: 0 +timeline-color-graph-lines: 0 +white-foreground: 0 ADDED skins/khaki/footer.txt Index: skins/khaki/footer.txt ================================================================== --- skins/khaki/footer.txt +++ skins/khaki/footer.txt @@ -0,0 +1,4 @@ +<div class="footer"> +Fossil $release_version $manifest_version $manifest_date +</div> +</body></html> ADDED skins/khaki/header.txt Index: skins/khaki/header.txt ================================================================== --- skins/khaki/header.txt +++ skins/khaki/header.txt @@ -0,0 +1,52 @@ +<html> +<head> +<base href="$baseurl/$current_page" /> +<title>$<project_name>: $<title> + + + + +
+
$</div> + <div class="status"> + <div class="logo">$<project_name></div><br/> + <th1> + if {[info exists login]} { + puts "Logged in as $login" + } else { + puts "Not logged in" + } + </th1></div> +</div> +<div class="mainmenu"> +<th1> +html "<a href='$home$index_page'>Home</a>\n" +if {[anycap jor]} { + html "<a href='$home/timeline'>Timeline</a>\n" +} +if {[anoncap oh]} { + html "<a href='$home/tree?ci=tip'>Files</a>\n" +} +if {[anoncap o]} { + html "<a href='$home/brlist'>Branches</a>\n" + html "<a href='$home/taglist'>Tags</a>\n" +} +if {[anoncap r]} { + html "<a href='$home/ticket'>Tickets</a>\n" +} +if {[anoncap j]} { + html "<a href='$home/wiki'>Wiki</a>\n" +} +if {[hascap s]} { + html "<a href='$home/setup'>Admin</a>\n" +} elseif {[hascap a]} { + html "<a href='$home/setup_ulist'>Users</a>\n" +} +if {[info exists login]} { + html "<a href='$home/login'>Logout</a>\n" +} else { + html "<a href='$home/login'>Login</a>\n" +} +</th1></div> ADDED skins/original/css.txt Index: skins/original/css.txt ================================================================== --- skins/original/css.txt +++ skins/original/css.txt @@ -0,0 +1,148 @@ +/* General settings for the entire page */ +body { + margin: 0ex 1ex; + padding: 0px; + background-color: white; + font-family: sans-serif; + -moz-text-size-adjust: none; + -webkit-text-size-adjust: none; + -mx-text-size-adjust: none; +} + +/* The project logo in the upper left-hand corner of each page */ +div.logo { + display: table-cell; + text-align: center; + vertical-align: bottom; + font-weight: bold; + color: #558195; + min-width: 200px; + white-space: nowrap; +} + +/* The page title centered at the top of each page */ +div.title { + display: table-cell; + font-size: 2em; + font-weight: bold; + text-align: center; + padding: 0 0 0 1em; + color: #558195; + vertical-align: bottom; + width: 100%; +} + +/* The login status message in the top right-hand corner */ +div.status { + display: table-cell; + text-align: right; + vertical-align: bottom; + color: #558195; + font-size: 0.8em; + font-weight: bold; + min-width: 200px; + white-space: nowrap; +} + +/* The header across the top of the page */ +div.header { + display: table; + width: 100%; +} + +/* The main menu bar that appears at the top of the page beneath +** the header */ +div.mainmenu { + padding: 5px 10px 5px 10px; + font-size: 0.9em; + font-weight: bold; + text-align: center; + letter-spacing: 1px; + background-color: #558195; + border-top-left-radius: 8px; + border-top-right-radius: 8px; + color: white; +} + +/* The submenu bar that *sometimes* appears below the main menu */ +div.submenu, div.sectionmenu { + padding: 3px 10px 3px 0px; + font-size: 0.9em; + text-align: center; + background-color: #456878; + color: white; +} +div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, +div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited { + padding: 3px 10px 3px 10px; + color: white; + text-decoration: none; +} +div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover { + color: #558195; + background-color: white; +} + +/* All page content from the bottom of the menu or submenu down to +** the footer */ +div.content { + padding: 0ex 1ex 1ex 1ex; + border: solid #aaa; + border-width: 1px; +} + +/* Some pages have section dividers */ +div.section { + margin-bottom: 0px; + margin-top: 1em; + padding: 1px 1px 1px 1px; + font-size: 1.2em; + font-weight: bold; + background-color: #558195; + color: white; + white-space: nowrap; +} + +/* The "Date" that 
occurs on the left hand side of timelines */ +div.divider { + background: #a1c4d4; + border: 2px #558195 solid; + font-size: 1em; font-weight: normal; + padding: .25em; + margin: .2em 0 .2em 0; + float: left; + clear: left; + white-space: nowrap; +} + +/* The footer at the very bottom of the page */ +div.footer { + clear: both; + font-size: 0.8em; + padding: 5px 10px 5px 10px; + text-align: right; + background-color: #558195; + border-bottom-left-radius: 8px; + border-bottom-right-radius: 8px; + color: white; +} + +/* Hyperlink colors in the footer */ +div.footer a { color: white; } +div.footer a:link { color: white; } +div.footer a:visited { color: white; } +div.footer a:hover { background-color: white; color: #558195; } + +/* verbatim blocks */ +pre.verbatim { + background-color: #f5f5f5; + padding: 0.5em; + white-space: pre-wrap; +} + +/* The label/value pairs on (for example) the ci page */ +table.label-value th { + vertical-align: top; + text-align: right; + padding: 0.2ex 2ex; +} ADDED skins/original/details.txt Index: skins/original/details.txt ================================================================== --- skins/original/details.txt +++ skins/original/details.txt @@ -0,0 +1,4 @@ +timeline-arrowheads: 1 +timeline-circle-nodes: 0 +timeline-color-graph-lines: 0 +white-foreground: 0 ADDED skins/original/footer.txt Index: skins/original/footer.txt ================================================================== --- skins/original/footer.txt +++ skins/original/footer.txt @@ -0,0 +1,6 @@ +<div class="footer"> +This page was generated in about +<th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s by +Fossil $release_version $manifest_version $manifest_date +</div> +</body></html> ADDED skins/original/header.txt Index: skins/original/header.txt ================================================================== --- skins/original/header.txt +++ skins/original/header.txt @@ -0,0 +1,53 @@ +<html> +<head> +<base href="$baseurl/$current_page" /> +<title>$<project_name>: $<title> + + + + +
+ +
$
$</div> + <div class="status"><th1> + if {[info exists login]} { + puts "Logged in as $login" + } else { + puts "Not logged in" + } + </th1></div> +</div> +<div class="mainmenu"> +<th1> +html "<a href='$home$index_page'>Home</a>\n" +if {[anycap jor]} { + html "<a href='$home/timeline'>Timeline</a>\n" +} +if {[anoncap oh]} { + html "<a href='$home/tree?ci=tip'>Files</a>\n" +} +if {[anoncap o]} { + html "<a href='$home/brlist'>Branches</a>\n" + html "<a href='$home/taglist'>Tags</a>\n" +} +if {[anoncap r]} { + html "<a href='$home/ticket'>Tickets</a>\n" +} +if {[anoncap j]} { + html "<a href='$home/wiki'>Wiki</a>\n" +} +if {[hascap s]} { + html "<a href='$home/setup'>Admin</a>\n" +} elseif {[hascap a]} { + html "<a href='$home/setup_ulist'>Users</a>\n" +} +if {[info exists login]} { + html "<a href='$home/login'>Logout</a>\n" +} else { + html "<a href='$home/login'>Login</a>\n" +} +</th1></div> ADDED skins/plain_gray/css.txt Index: skins/plain_gray/css.txt ================================================================== --- skins/plain_gray/css.txt +++ skins/plain_gray/css.txt @@ -0,0 +1,142 @@ +/* General settings for the entire page */ +body { + margin: 0ex 1ex; + padding: 0px; + background-color: white; + font-family: sans-serif; + -moz-text-size-adjust: none; + -webkit-text-size-adjust: none; + -mx-text-size-adjust: none; +} + +/* The project logo in the upper left-hand corner of each page */ +div.logo { + display: table-row; + text-align: center; + /* vertical-align: bottom;*/ + font-size: 2em; + font-weight: bold; + background-color: #707070; + color: #ffffff; + min-width: 200px; + white-space: nowrap; +} + +/* The page title centered at the top of each page */ +div.title { + display: table-cell; + font-size: 1.5em; + font-weight: bold; + text-align: center; + padding: 0 0 0 10px; + color: #404040; + vertical-align: bottom; + width: 100%; +} + +/* The login status message in the top right-hand corner */ +div.status { + display: table-cell; + text-align: right; + vertical-align: bottom; + color: #404040; + font-size: 0.8em; + font-weight: bold; + min-width: 200px; + white-space: nowrap; +} + +/* The header across the top of the page */ +div.header { + display: table; + width: 100%; +} + +/* The main menu bar that appears at the top of the page beneath +** the header */ +div.mainmenu { + padding: 5px 10px 5px 10px; + font-size: 0.9em; + font-weight: bold; + text-align: center; + letter-spacing: 1px; + background-color: #404040; + color: white; +} + +/* The submenu bar that *sometimes* appears below the main menu */ +div.submenu, div.sectionmenu { + padding: 3px 10px 3px 0px; + font-size: 0.9em; + text-align: center; + background-color: #606060; + color: white; +} +div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited, +div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited { + padding: 3px 10px 3px 10px; + color: white; + text-decoration: none; +} +div.mainmenu a:hover, div.submenu a:hover, div.sectionmenu>a.button:hover { + color: #404040; + background-color: white; +} + +/* All page content from the bottom of the menu or submenu down to +** the footer */ +div.content { + padding: 0ex 0ex 0ex 0ex; +} +/* Hyperlink colors */ +div.content a { color: #604000; } +div.content a:link { color: #604000;} +div.content a:visited { color: #600000; } + +/* <verbatim> blocks */ +pre.verbatim { + background-color: #ffffff; + padding: 0.5em; + white-space: pre-wrap; +} + +/* Some pages have section dividers */ +div.section { + margin-bottom: 0px; + margin-top: 1em; + 
padding: 1px 1px 1px 1px; + font-size: 1.2em; + font-weight: bold; + background-color: #404040; + color: white; + white-space: nowrap; +} + +/* The "Date" that occurs on the left hand side of timelines */ +div.divider { + background: #a0a0a0; + border: 2px #505050 solid; + font-size: 1em; font-weight: normal; + padding: .25em; + margin: .2em 0 .2em 0; + float: left; + clear: left; + white-space: nowrap; +} + +/* The footer at the very bottom of the page */ +div.footer { + font-size: 0.8em; + margin-top: 12px; + padding: 5px 10px 5px 10px; + text-align: right; + background-color: #404040; + color: white; +} + +/* The label/value pairs on (for example) the vinfo page */ +table.label-value th { + vertical-align: top; + text-align: right; + padding: 0.2ex 2ex; +} ADDED skins/plain_gray/details.txt Index: skins/plain_gray/details.txt ================================================================== --- skins/plain_gray/details.txt +++ skins/plain_gray/details.txt @@ -0,0 +1,4 @@ +timeline-arrowheads: 1 +timeline-circle-nodes: 0 +timeline-color-graph-lines: 0 +white-foreground: 0 ADDED skins/plain_gray/footer.txt Index: skins/plain_gray/footer.txt ================================================================== --- skins/plain_gray/footer.txt +++ skins/plain_gray/footer.txt @@ -0,0 +1,4 @@ +<div class="footer"> +Fossil $release_version $manifest_version $manifest_date +</div> +</body></html> ADDED skins/plain_gray/header.txt Index: skins/plain_gray/header.txt ================================================================== --- skins/plain_gray/header.txt +++ skins/plain_gray/header.txt @@ -0,0 +1,50 @@ +<html> +<head> +<base href="$baseurl/$current_page" /> +<title>$<project_name>: $<title> + + + + +
+
$
$</div> + <div class="status"><th1> + if {[info exists login]} { + puts "Logged in as $login" + } else { + puts "Not logged in" + } + </th1></div> +</div> +<div class="mainmenu"> +<th1> +html "<a href='$home$index_page'>Home</a>\n" +if {[anycap jor]} { + html "<a href='$home/timeline'>Timeline</a>\n" +} +if {[anoncap oh]} { + html "<a href='$home/tree?ci=tip'>Files</a>\n" +} +if {[anoncap o]} { + html "<a href='$home/brlist'>Branches</a>\n" + html "<a href='$home/taglist'>Tags</a>\n" +} +if {[anoncap r]} { + html "<a href='$home/ticket'>Tickets</a>\n" +} +if {[anoncap j]} { + html "<a href='$home/wiki'>Wiki</a>\n" +} +if {[hascap s]} { + html "<a href='$home/setup'>Admin</a>\n" +} elseif {[hascap a]} { + html "<a href='$home/setup_ulist'>Users</a>\n" +} +if {[info exists login]} { + html "<a href='$home/login'>Logout</a>\n" +} else { + html "<a href='$home/login'>Login</a>\n" +} +</th1></div> ADDED skins/rounded1/css.txt Index: skins/rounded1/css.txt ================================================================== --- skins/rounded1/css.txt +++ skins/rounded1/css.txt @@ -0,0 +1,197 @@ +/* General settings for the entire page */ +html { + min-height: 100%; +} +body { + margin: 0ex 1ex; + padding: 0px; + background-color: white; + color: #333; + font-family: Verdana, sans-serif; + font-size: 0.8em; + -moz-text-size-adjust: none; + -webkit-text-size-adjust: none; + -mx-text-size-adjust: none; +} + +/* The project logo in the upper left-hand corner of each page */ +div.logo { + display: table-cell; + text-align: right; + vertical-align: bottom; + font-weight: normal; + white-space: nowrap; +} + +/* Widths */ +div.header, div.mainmenu, div.submenu, div.content, div.footer { + max-width: 900px; + margin: auto; + padding: 3px 20px 3px 20px; + clear: both; +} + +/* The page title at the top of each page */ +div.title { + display: table-cell; + padding-left: 10px; + font-size: 2em; + margin: 10px 0 10px -20px; + vertical-align: bottom; + text-align: left; + width: 80%; + font-family: Verdana, sans-serif; + font-weight: bold; + color: #558195; + text-shadow: 0px 2px 2px #999999; +} + +/* The login status message in the top right-hand corner */ +div.status { + display: table-cell; + text-align: right; + vertical-align: bottom; + color: #333; + margin-right: -20px; + white-space: nowrap; +} + +/* The main menu bar that appears at the top of the page beneath + ** the header */ +div.mainmenu { + text-align: center; + color: white; + border-top-left-radius: 5px; + border-top-right-radius: 5px; + vertical-align: middle; + padding-top: 8px; + padding-bottom: 8px; + background-color: #446979; + box-shadow: 0px 3px 4px #333333; +} + +/* The submenu bar that *sometimes* appears below the main menu */ +div.submenu { + padding-top:10px; + padding-bottom:0; + text-align: right; + color: #000; + background-color: #fff; + height: 1.5em; + vertical-align:middle; + box-shadow: 0px 3px 4px #999; +} +div.mainmenu a, div.mainmenu a:visited { + padding: 3px 10px 3px 10px; + color: white; + text-decoration: none; +} +div.submenu a, div.submenu a:visited, a.button, +div.sectionmenu>a.button:link, div.sectionmenu>a.button:visited { + padding: 2px 8px; + color: #000; + font-family: Arial; + text-decoration: none; + margin:auto; + border-radius: 5px; + background-color: #e0e0e0; + text-shadow: 0px -1px 0px #eee; + border: 1px solid #000; +} + +div.mainmenu a:hover { + color: #000; + background-color: white; +} + +div.submenu a:hover, div.sectionmenu>a.button:hover { + background-color: #c0c0c0; +} + +/* All page content 
from the bottom of the menu or submenu down to + ** the footer */ +div.content { + background-color: #fff; + box-shadow: 0px 3px 4px #999; + border-bottom-right-radius: 5px; + border-bottom-left-radius: 5px; + padding-bottom: 1em; + min-height:40%; +} + + +/* Some pages have section dividers */ +div.section { + margin-bottom: 0.5em; + margin-top: 1em; + margin-right: auto; + padding: 1px 1px 1px 1px; + font-size: 1.2em; + font-weight: bold; + text-align: center; + color: white; + border-radius: 5px; + background-color: #446979; + box-shadow: 0px 3px 4px #333333; + white-space: nowrap; +} + +/* The "Date" that occurs on the left hand side of timelines */ +div.divider { + font-size: 1.2em; + font-family: Georgia, serif; + font-weight: bold; + margin-top: 1em; + white-space: nowrap; +} + +/* The footer at the very bottom of the page */ +div.footer { + font-size: 0.9em; + text-align: right; + margin-bottom: 1em; + color: #666; +} + +/* Hyperlink colors in the footer */ +div.footer a { color: white; } +div.footer a:link { color: white; } +div.footer a:visited { color: white; } +div.footer a:hover { background-color: white; color: #558195; } + +/* <verbatim> blocks */ +pre.verbatim, blockquote pre { + font-family: Dejavu Sans Mono, Monaco, Lucida Console, monospace; + background-color: #f3f3f3; + padding: 0.5em; + white-space: pre-wrap; +} + +blockquote pre { + border: 1px #000 dashed; +} + +/* The label/value pairs on (for example) the ci page */ +table.label-value th { + vertical-align: top; + text-align: right; + padding: 0.2ex 2ex; +} + +table.report tr th { + padding: 3px 5px; + text-transform: capitalize; + cursor: pointer; +} + +table.report tr td { + padding: 3px 5px; +} + +textarea { + font-size: 1em; +} + +.fullsize-text { + font-size: 1.25em; +} ADDED skins/rounded1/details.txt Index: skins/rounded1/details.txt ================================================================== --- skins/rounded1/details.txt +++ skins/rounded1/details.txt @@ -0,0 +1,4 @@ +timeline-arrowheads: 1 +timeline-circle-nodes: 0 +timeline-color-graph-lines: 0 +white-foreground: 0 ADDED skins/rounded1/footer.txt Index: skins/rounded1/footer.txt ================================================================== --- skins/rounded1/footer.txt +++ skins/rounded1/footer.txt @@ -0,0 +1,4 @@ +<div class="footer"> +Fossil $release_version $manifest_version $manifest_date +</div> +</body></html> ADDED skins/rounded1/header.txt Index: skins/rounded1/header.txt ================================================================== --- skins/rounded1/header.txt +++ skins/rounded1/header.txt @@ -0,0 +1,54 @@ +<html> +<head> +<base href="$baseurl/$current_page" /> +<title>$<project_name>: $<title> + + + + +
+ +
$</div> + <div class="status"><th1> + if {[info exists login]} { + puts "Logged in as $login" + } else { + puts "Not logged in" + } + </th1></div> +</div> +<div class="mainmenu"> +<th1> +html "<a href='$home$index_page'>Home</a>\n" +if {[anycap jor]} { + html "<a href='$home/timeline'>Timeline</a>\n" +} +if {[anoncap oh]} { + html "<a href='$home/tree?ci=tip'>Files</a>\n" +} +if {[anoncap o]} { + html "<a href='$home/brlist'>Branches</a>\n" + html "<a href='$home/taglist'>Tags</a>\n" +} +if {[anoncap r]} { + html "<a href='$home/ticket'>Tickets</a>\n" +} +if {[anoncap j]} { + html "<a href='$home/wiki'>Wiki</a>\n" +} +if {[hascap s]} { + html "<a href='$home/setup'>Admin</a>\n" +} elseif {[hascap a]} { + html "<a href='$home/setup_ulist'>Users</a>\n" +} +if {[info exists login]} { + html "<a href='$home/login'>Logout</a>\n" +} else { + html "<a href='$home/login'>Login</a>\n" +} +</th1></div> ADDED skins/xekri/README.md Index: skins/xekri/README.md ================================================================== --- skins/xekri/README.md +++ skins/xekri/README.md @@ -0,0 +1,2 @@ +"xekri" is a Lojban word that means "extermely dark-colored". +This skin was contributed by Andrew Moore. ADDED skins/xekri/css.txt Index: skins/xekri/css.txt ================================================================== --- skins/xekri/css.txt +++ skins/xekri/css.txt @@ -0,0 +1,1033 @@ +/****************************************************************************** + * Xekri + * + * To adjust the width of the contents for this skin, look for the "max-width" + * property and change its value. (It's in the "Main Area" section) The value + * determines how much of the browser window to use. Some like 100%, so that + * the entire window is used. Others prefer 80%, which makes the contents + * easier to read for them. 
+ */ + + +/************************************** + * General HTML + */ + +html { + background-color: #333; + color: #eee; + font-family: Monospace; + font-size: 1em; + min-height: 100%; +} + +body { + margin: 0; + padding: 0; + -moz-text-size-adjust: none; + -ms-text-size-adjust: none; + -webkit-text-size-adjust: none; +} + +a { + color: #07e; +} + +a:hover { + font-weight: bold; +} + +blockquote pre { + border: 1px dashed #ee0; +} + +blockquote pre, pre.verbatim { + background-color: #000; + border-radius: 0.75rem; + padding: 0.5rem; + white-space: pre-wrap; +} + +input[type="password"], input[type="text"], textarea { + background-color: #111; + color: #fff; + font-size: 1rem; +} + +h1 { + font-size: 2rem; +} + +h2 { + font-size: 1.5rem; +} + +h3 { + font-size: 1.25rem; +} + +span[style^=background-color] { + color: #000; +} + +td[style^=background-color] { + color: #000; +} + + +/************************************** + * Main Area + */ + +div.header, div.mainmenu, div.submenu, div.content, div.footer { + clear: both; + margin: 0 auto; + max-width: 90%; + padding: 0.25rem 1rem; +} + + +/************************************** + * Main Area: Header + */ + +div.header { + margin: 0.5rem auto 0 auto; +} + +div.logo img { + float: left; + padding: 0; + box-shadow: 3px 3px 1px #000; + margin: 0 6px 6px 0; +} + +div.logo br { + display: none; +} + +div.logo nobr { + color: #eee; + font-size: 1.2rem; + font-weight: bold; + padding: 0; + text-shadow: 3px 3px 1px #000; + vertical-align: top; + white-space: nowrap; +} + +div.title { + color: #07e; + font-family: Verdana, sans-serif; + font-weight: bold; + font-size: 2.5rem; + padding: 0.5rem; + text-align: center; + text-shadow: 3px 3px 1px #000; +} + +div.status { + color: #ee0; + font-size: 1rem; + padding: 0.25rem; + text-align: right; + text-shadow: 2px 2px 1px #000; +} + + +/************************************** + * Main Area: Global Menu + */ + +div.mainmenu, div.submenu { + background-color: #080; + border-radius: 1rem 1rem 0 0; + box-shadow: 3px 4px 1px #000; + color: #000; + font-weight: bold; + font-size: 1.1rem; + text-align: center; +} + +div.mainmenu { + padding-top: 0.33rem; + padding-bottom: 0.25rem; +} + +div.submenu { + border-top: 1px solid #0a0; + border-radius: 0; + display: block; +} + +div.mainmenu a, div.submenu a { + color: #000; + padding: 0 0.75rem; + text-decoration: none; +} + +div.mainmenu a:hover, div.submenu a:hover { + color: #fff; + text-shadow: 0px 0px 6px #0f0; +} + +div.submenu * { + margin: 0 0.5rem; + vertical-align: middle; +} + +div.submenu select, div.submenu input { + background-color: #222; + border: 1px inset #080; + color: #eee; + cursor: pointer; + font-size: 0.9rem; +} + +div.submenu select { + height: 1.75rem; +} + +/************************************** + * Main Area: Content + */ + +div.content { + background-color: #222; + border-radius: 0 0 1rem 1rem; + box-shadow: 3px 3px 1px #000; + min-height:40%; + padding-bottom: 1rem; + padding-top: 0.5rem; +} + +div.content table[bgcolor="white"] { + color: #000; +} + +.piechartLabel { + fill: white; +} +.piechartLine { + stroke: white; +} + +/************************************** + * Main Area: Footer + */ + +div.footer { + color: #ee0; + font-size: 0.75rem; + padding: 0; + text-align: right; + width: 75%; +} + + +div.footer div { + background-color: #222; + box-shadow: 3px 3px 1px #000; + border-radius: 0 0 1rem 1rem; + margin: 0 0 10px 0; + padding: 0.5rem 0.75rem; +} + +div.footer div.page-time { + float: left; +} + +div.footer div.fossil-info { + 
float: right; +} + +div.footer a, div.footer a:link, div.footer a:visited { + color: #ee0; +} + +div.footer a:hover { + color: #fff; + text-shadow: 0px 0px 6px #ee0; +} + + +/************************************** + * Check-in + */ + +table.label-value th { + vertical-align: top; + text-align: right; + padding: 0.1rem 1rem; +} + + +/************************************** + * Diffs + */ + +/* Code Added */ +span.diffadd { + background-color: #7f7; + color: #000; +} + +/* Code Changed */ +span.diffchng { + background-color: #77f; + color: #000; +} + +/* Code Deleted */ +span.diffrm { + background-color: #f77; + color: #000; +} + + +/************************************** + * Diffs : Side-By-Side + */ + +/* display (column-based) */ +table.sbsdiffcols { + border-spacing: 0; + font-size: 0.85rem; + width: 90%; +} + +table.sbsdiffcols pre { + border: 0; + margin: 0; + padding: 0; +} + +table.sbsdiffcols td { + padding: 0; + vertical-align: top; +} + +/* line number column */ +div.difflncol { + color: #ee0; + padding-right: 0.75em; + text-align: right; +} + +/* diff text column */ +div.difftxtcol { + background-color: #111; + overflow-x: auto; + width: 45em; +} + +/* suppressed lines */ +span.diffhr { + display: inline-block; + margin-bottom: 0.75em; + color: #ff0; +} + +/* diff marker column */ +div.diffmkrcol { + padding: 0 0.5em; +} + + +/************************************** + * Diffs : Unified + */ + +pre.udiff { + background-color: #111; +} + +/* line numbers */ +span.diffln { + background-color: #222; + color: #ee0; +} + + +/************************************** + * File List : Flat + */ + +table.browser { + width: 100%; + border: 0; +} + +td.browser { + width: 24%; + vertical-align: top; +} + +ul.browser { + margin: 0.5rem; + padding: 0.5rem; + white-space: nowrap; +} + +ul.browser li.dir { + font-style: italic +} + + +/************************************** + * File List : Age + */ + +.fileage tr:hover td { + background-color: #225; +} + + +/************************************** + * File List : Tree + */ + +.filetree { + line-height: 1.5; + margin: 1rem 0; +} + +/* list */ +.filetree ul { + list-style: none; + margin: 0; + padding: 0; +} + +/* collapsed list */ +.filetree ul.collapsed { + display: none; +} + +/* lists below the root */ +.filetree ul ul { + margin: 0 0 0 21px; + position: relative; +} + +/* lists items */ +.filetree li { + margin: 0; + padding: 0; + position: relative; +} + +/* node lines */ +.filetree li li:before { + border-bottom: 2px solid #000; + border-left: 2px solid #000; + content: ''; + height: 1.5rem; + left: -14px; + position: absolute; + top: -0.8rem; + width: 14px; +} + +/* directory lines */ +.filetree li > ul:before { + border-left: 2px solid #000; + bottom: 0; + content: ''; + left: -35px; + position: absolute; + top: -1.5rem; +} + +/* hide lines for last-child directories */ +.filetree li.last > ul:before { + display: none; +} + +.filetree a { + background-image: url(data:image/gif;base64,R0lGODlhEAAQAJEAAP\/\/\/yEhIf\/\/\/wAAACH5BAEHAAIALAAAAAAQABAAAAIvlIKpxqcfmgOUvoaqDSCxrEEfF14GqFXImJZsu73wepJzVMNxrtNTj3NATMKhpwAAOw==); + background-position: center left; + background-repeat: no-repeat; + display: inline-block; + min-height: 16px; + padding-left: 21px; + position: relative; + z-index: 1; +} + +.filetree .dir > a { + background-image: url(data:image/gif;base64,R0lGODlhEAAQAJEAAP/WVCIiIv\/\/\/wAAACH5BAEHAAIALAAAAAAQABAAAAInlI9pwa3XYniCgQtkrAFfLXkiFo1jaXpo+jUs6b5Z/K4siDu5RPUFADs=); + font-style: italic +} + +.filetreeline:hover { + color: #000; + 
font-weight: bold; +} + +.filetreeline .filetreeage { + padding-right: 0.5rem; +} + +/************************************** + * Logout + */ + +span.loginError { + color: #f00; +} + +table.login_out { + margin: 10px; + text-align: left; +} + +td.login_out_label { + text-align: center; +} + +div.captcha { + padding: 1rem; + text-align: center; +} + +table.captcha { + background-color: #111; + border-color: #111; + border-style: inset; + border-width: 2px; + margin: auto; + padding: 0.5rem; +} + +table.captcha pre { + color: #ee0; +} + + +/************************************** + * Statistics Reports + */ + +.statistics-report-graph-line { + background-color: #22e; +} + +.statistics-report-table-events th { + padding: 0 1rem; +} + +.statistics-report-table-events td { + padding: 0.1rem 1rem; +} + +.statistics-report-row-year { + color: #ee0; + text-align: left; +} + +.statistics-report-week-number-label { + font-size: 0.8rem; + text-align: right; +} + +.statistics-report-week-of-year-list { + font-size: 0.8rem; +} + + +/************************************** + * Search + */ + +.searchResult .snippet mark { + color: #ee0; +} + + +/************************************** + * Sections + */ + +div.section, div.sectionmenu { + color: #2ee; + background-color: #22c; + border-radius: 0 3rem; + box-shadow: 2px 2px #000; + display: flex; + font-size: 1.1rem; + font-weight: bold; + justify-content: space-around; + margin: 1.2rem auto 0.75rem auto; + padding: 0.2rem; + text-align: center; +} + +div.sectionmenu { + border-radius: 0 0 3rem 3rem; + margin-top: -0.75rem; + width: 75%; +} + +div.sectionmenu > a:link, div.sectionmenu > a:visited { + color: #000; + text-decoration: none; +} + +div.sectionmenu > a:hover { + color: #eee; + text-shadow: 0px 0px 6px #eee; +} + + +/************************************** + * Sidebox + */ + +div.sidebox { + background-color: #333; + border-radius: 0.5rem; + box-shadow: 3px 3px 1px #000; + float: right; + margin: 1rem 0.5rem; + padding: 0.5rem; +} + +div.sidebox ol { + margin: 0 0 0.5rem 2.5rem; + padding: 0 0; +} + +div.sidebox ol li { + margin-top: 0.75rem; +} + +div.sideboxTitle { + background-color: #ee0; + border-radius: 0.5rem 0.5rem 0 0; + color: #000; + font-weight: bold; + margin: -0.5rem -0.5rem 0 -0.5rem; + padding: 0.25rem; + text-align: center; +} + +div.sideboxDescribed { + display: inline; +} + +/* --- Untested : Begin --- */ +/* The defined element in sideboxes for branches,.. */ +span.disabled { + color: #f00; +} +/* --- Untested : End --- */ + + +/************************************** + * Tag + */ + +/* --- Untested : Begin --- */ +/* the format for the tag links */ +a.tagLink { +} +/* the format for the tag display(no history permission!) */ +span.tagDsp { + font-weight: bold; +} +/* the format for fixed/canceled tags,.. 
*/ +span.infoTagCancelled { + font-weight: bold; + text-decoration: line-through; +} +/* --- Untested : End --- */ + + +/************************************** + * Ticket + */ + +table.report { + color: #000; + border: 1px solid #999; + border-collapse: collapse; + margin: 1rem 0; +} + +table.report tr th { + color: #eee; + padding: 3px 5px; + text-transform : capitalize; +} + +table.report tr td { + padding: 3px 5px; +} + +/* example ticket colors */ +table.rpteditex { + border-collapse: collapse; + border-spacing: 0; + color: #000; + float: right; + margin: 0; + padding: 0; + text-align: center; + width: 125px; +} + +td.rpteditex { + border-color: #000; + border-style: solid; + border-width: thin; +} + +#reportTable { +} + +/* format for labels on ticket display page */ +td.tktDspLabel { + text-align: right; +} + +/* format for values on ticket display page */ +td.tktDspValue { + background-color: #111; + text-align: left; + vertical-align: top; +} + +/* format for ticket error messages */ +span.tktError { + color: #f00; + font-weight: bold; +} + + +/************************************** + * Timeline + */ + +div.divider { + color: #ee0; + font-size: 1.2rem; + font-weight: bold; + margin-top: 1rem; + white-space: nowrap; +} + +/* The suppressed duplicates lines in timeline, .. */ +span.timelineDisabled { + font-size: 0.5rem; + font-style: italic; +} + +/* the format for the timeline data table */ +table.timelineTable { + border: 0; +} + +/* The row in the timeline table that contains the entry of interest */ +tr.timelineSelected { + border: 1px solid #eee; + border-radius: 1rem; +} + +tr.timelineSelected td.timelineTime +, tr.timelineSelected td.timelineTableCell { + background-color: #333; + box-shadow: 2px 2px 1px #000; + padding: 0.5rem; +} + +tr.timelineSelected td.timelineTime { + border-radius: 1rem 0 0 1rem; +} + +tr.timelineSelected td.timelineTableCell { + border-radius: 0 1rem 1rem 0; +} + +/* the format for the timeline data cells */ +td.timelineTableCell { + padding: 0.3rem; + text-align: left; + vertical-align: top; +} + +td.timelineTableCell[style] { + color: #000; +} + +/* the format for the timeline data cell of the current checkout */ +tr.timelineCurrent td.timelineTableCell { + border: 0; + border-radius: 1em 0em; +} + +/* the format for the timeline leaf marks */ +span.timelineLeaf { + font-weight: bold; +} + +/* the format for the timeline version links */ +a.timelineHistLink { +} + +/* the format for the timeline version display(no history permission!) 
*/ +span.timelineHistDsp { + font-weight: bold; +} + +/* the format for the timeline time display */ +td.timelineTime { + text-align: right; + vertical-align: top; + white-space: nowrap; +} + +/* the format for the grap placeholder cells in timelines */ +td.timelineGraph { + text-align: left; + vertical-align: top; + width: 20px; +} + + +/************************************** + * User Edit + */ + +/* layout definition for the capabilities box on the user edit detail page */ +div.ueditCapBox { + float: left; + margin: 0 20px 20px 0; +} + +/* format of the label cells in the detailed user edit page */ +td.usetupEditLabel { + text-align: right; + vertical-align: top; + white-space: nowrap; +} + +/* color for capabilities, inherited by nobody */ +span.ueditInheritNobody { + color: #0f0; +} + +/* color for capabilities, inherited by developer */ +span.ueditInheritDeveloper { + color: #f00; +} + +/* color for capabilities, inherited by reader */ +span.ueditInheritReader { + color: black; +} + +/* color for capabilities, inherited by anonymous */ +span.ueditInheritAnonymous { + color: #00f; +} + +/* format for capabilities */ +span.capability { + font-weight: bold; +} + +/* format for different user types */ +span.usertype { + font-weight: bold; +} + +span.usertype:before { + content:"'"; +} + +span.usertype:after { + content:"'"; +} + + +/************************************** + * User List + */ + +table.usetupLayoutTable { + margin: 0.5rem; + outline-style: none; + padding: 0; +} + +td.usetupColumnLayout { + vertical-align: top +} + +td.usetupColumnLayout ol th { + padding: 0 0.75rem 0.5rem 0; +} + +span.note { + color: #ee0; + font-weight: bold; +} + +table.usetupUserList { + margin: 0.5rem; +} + +.usetupListUser { + padding-right: 20px; + text-align: right; +} + +.usetupListCap { + padding-right: 15px; + text-align: center; +} + +.usetupListCon { + text-align: left; +} + + +/************************************** + * Wiki + */ + +span.wikiError { + font-weight: bold; + color: #f00; +} + +/* the format for fixed/cancelled tags */ +span.wikiTagCancelled { + text-decoration: line-through; +} + + +/************************************** + * Did not encounter these + */ + +/* selected lines of text within a linenumbered artifact display */ +div.selectedText { + font-weight: bold; + color: #00f; + background-color: #d5d5ff; + border: 1px #00f solid; +} + +/* format for missing privileges note on user setup page */ +p.missingPriv { + color: #00f; +} + +/* format for leading text in wikirules definitions */ +span.wikiruleHead { + font-weight: bold; +} + + +/* format for user color input on checkin edit page */ +input.checkinUserColor { + /* no special definitions, class defined, to enable color pickers, + * f.e.: + * ** add the color picker found at http:jscolor.com as java script + * include + * ** to the header and configure the java script file with + * ** 1. use as bindClass :checkinUserColor + * ** 2. change the default hash adding behaviour to ON + * ** or change the class defition of element identified by + * id="clrcust" + * ** to a standard jscolor definition with java script in the footer. + * */ +} + +/* format for end of content area, to be used to clear page flow. 
*/ +div.endContent { + clear: both; +} + +/* format for general errors */ +p.generalError { + color: #f00; +} + +/* format for tktsetup errors */ +p.tktsetupError { + color: #f00; + font-weight: bold; +} +/* format for xfersetup errors */ +p.xfersetupError { + color: #f00; + font-weight: bold; +} +/* format for th script errors */ +p.thmainError { + color: #f00; + font-weight: bold; +} +/* format for th script trace messages */ +span.thTrace { + color: #f00; +} +/* format for report configuration errors */ +p.reportError { + color: #f00; + font-weight: bold; +} +/* format for report configuration errors */ +blockquote.reportError { + color: #f00; + font-weight: bold; +} +/* format for artifact lines, no longer shunned */ +p.noMoreShun { + color: #00f; +} +/* format for artifact lines beeing shunned */ +p.shunned { + color: #00f; +} +/* a broken hyperlink */ +span.brokenlink { + color: #f00; +} +/* List of files in a timeline */ +ul.filelist { + margin-top: 3px; + line-height: 100%; +} +/* Moderation Pending message on timeline */ +span.modpending { + color: #b30; + font-style: italic; +} +/* format for textarea labels */ +span.textareaLabel { + font-weight: bold; +} +/* format for th1 script results */ +pre.th1result { + white-space: pre-wrap; + word-wrap: break-word; +} +/* format for th1 script errors */ +pre.th1error { + white-space: pre-wrap; + word-wrap: break-word; + color: #f00; +} + +/* even table row color */ +tr.row0 { + /* use default */ +} +/* odd table row color */ +tr.row1 { + /* Use default */ +} ADDED skins/xekri/details.txt Index: skins/xekri/details.txt ================================================================== --- skins/xekri/details.txt +++ skins/xekri/details.txt @@ -0,0 +1,4 @@ +timeline-arrowheads: 1 +timeline-circle-nodes: 0 +timeline-color-graph-lines: 0 +white-foreground: 0 ADDED skins/xekri/footer.txt Index: skins/xekri/footer.txt ================================================================== --- skins/xekri/footer.txt +++ skins/xekri/footer.txt @@ -0,0 +1,11 @@ +</div> +<div class="footer"> +<div class="page-time"> +Generated in <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s +</div> +<div class="fossil-info"> +Fossil v$release_version $manifest_version +</div> +</div> +</body> +</html> ADDED skins/xekri/header.txt Index: skins/xekri/header.txt ================================================================== --- skins/xekri/header.txt +++ skins/xekri/header.txt @@ -0,0 +1,143 @@ +<html> +<head> +<base href="$baseurl/$current_page" /> +<title>$<project_name>: $<title> + + + + +
+ +
$</div> + <div class="status"><nobr><th1> + if {[info exists login]} { + puts "Logged in as $login" + } else { + puts "Not logged in" + } + </th1></nobr><small><div id="clock"></div></small></div> +</div> +<script> +function updateClock(){ + var e = document.getElementById("clock"); + if(e){ + var d = new Date(); + function f(n) { + return n < 10 ? '0' + n : n; + } + e.innerHTML = d.getUTCFullYear()+ '-' + + f(d.getUTCMonth() + 1) + '-' + + f(d.getUTCDate()) + ' ' + + f(d.getUTCHours()) + ':' + + f(d.getUTCMinutes()); + setTimeout("updateClock();",(60-d.getUTCSeconds())*1000); + } +} +updateClock(); +</script> +<div class="mainmenu"> +<th1> +proc menulink {url name} { + upvar current_page current + upvar home home + if {[string range $url 0 [string length $current]] eq "/$current"} { + html "<a href='$home$url' class='active'>$name</a>\n" + } else { + html "<a href='$home$url'>$name</a>\n" + } +} +menulink $index_page Home +if {[anycap jor]} { + menulink /timeline Timeline +} +if {[anoncap oh]} { + menulink /dir?ci=tip Files +} +if {[anoncap o]} { + menulink /brlist Branches + menulink /taglist Tags +} +if {[anoncap r]} { + menulink /ticket Tickets +} +if {[anoncap j]} { + menulink /wiki Wiki +} + menulink /sitemap More... +if {[hascap s]} { + menulink /setup Admin +} elseif {[hascap a]} { + menulink /setup_ulist Users +} +if {[info exists login]} { + menulink /login Logout +} else { + menulink /login Login +} +</th1></div> ADDED src/Makefile Index: src/Makefile ================================================================== --- src/Makefile +++ src/Makefile @@ -0,0 +1,3 @@ +all clean: + $(MAKE) -C .. $(MAKECMDGOALS) + Index: src/add.c ================================================================== --- src/add.c +++ src/add.c @@ -20,316 +20,876 @@ */ #include "config.h" #include "add.h" #include <assert.h> #include <dirent.h> - -/* -** Set to true if files whose names begin with "." should be -** included when processing a recursive "add" command. -*/ -static int includeDotFiles = 0; - -/* -** Add a single file -*/ -static void add_one_file(const char *zName, int vid, Blob *pOmit){ - Blob pathname; - const char *zPath; - - file_tree_name(zName, &pathname, 1); - zPath = blob_str(&pathname); - if( strcmp(zPath, "manifest")==0 - || strcmp(zPath, "_FOSSIL_")==0 - || strcmp(zPath, "_FOSSIL_-journal")==0 - || strcmp(zPath, ".fos")==0 - || strcmp(zPath, ".fos-journal")==0 - || strcmp(zPath, "manifest.uuid")==0 - || blob_compare(&pathname, pOmit)==0 - ){ - fossil_warning("cannot add %s", zPath); - }else{ - if( !file_is_simple_pathname(zPath) ){ - fossil_fatal("filename contains illegal characters: %s", zPath); - } -#ifdef __MINGW32__ - if( db_exists("SELECT 1 FROM vfile" - " WHERE pathname=%Q COLLATE nocase", zPath) ){ - db_multi_exec("UPDATE vfile SET deleted=0" - " WHERE pathname=%Q COLLATE nocase", zPath); - } -#else - if( db_exists("SELECT 1 FROM vfile WHERE pathname=%Q", zPath) ){ - db_multi_exec("UPDATE vfile SET deleted=0 WHERE pathname=%Q", zPath); - } +#include "cygsup.h" + +/* +** WARNING: For Fossil version 1.x this value was always zero. For Fossil +** 2.x, it will probably always be one. When this value is zero, +** files in the checkout will not be moved by the "mv" command and +** files in the checkout will not be removed by the "rm" command. +** +** If the FOSSIL_ENABLE_LEGACY_MV_RM compile-time option is used, +** the "mv-rm-files" setting will be consulted instead of using +** this value. 
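+**
+** (Illustrative note, not part of the original comment: being a compile-time
+** option, FOSSIL_ENABLE_LEGACY_MV_RM would typically be handed to the C
+** compiler, e.g. as -DFOSSIL_ENABLE_LEGACY_MV_RM=1 in CFLAGS.)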
+** +** To retain the Fossil version 1.x behavior when using Fossil 2.x, +** the FOSSIL_ENABLE_LEGACY_MV_RM compile-time option must be used +** -AND- the "mv-rm-files" setting must be set to zero. +*/ +#ifndef FOSSIL_MV_RM_FILE +#define FOSSIL_MV_RM_FILE (0) #endif - else{ - db_multi_exec( - "INSERT INTO vfile(vid,deleted,rid,mrid,pathname)" - "VALUES(%d,0,0,0,%Q)", vid, zPath); - } - printf("ADDED %s\n", zPath); - } - blob_reset(&pathname); -} - -/* -** All content of the zDir directory to the SFILE table. -*/ -void add_directory_content(const char *zDir){ - DIR *d; - int origSize; - struct dirent *pEntry; - Blob path; - - blob_zero(&path); - blob_append(&path, zDir, -1); - origSize = blob_size(&path); - d = opendir(zDir); - if( d ){ - while( (pEntry=readdir(d))!=0 ){ - char *zPath; - if( pEntry->d_name[0]=='.' ){ - if( !includeDotFiles ) continue; - if( pEntry->d_name[1]==0 ) continue; - if( pEntry->d_name[1]=='.' && pEntry->d_name[2]==0 ) continue; - } - blob_appendf(&path, "/%s", pEntry->d_name); - zPath = blob_str(&path); - if( file_isdir(zPath)==1 ){ - add_directory_content(zPath); - }else if( file_isfile(zPath) ){ - db_multi_exec("INSERT INTO sfile VALUES(%Q)", zPath); - } - blob_resize(&path, origSize); - } - } - closedir(d); - blob_reset(&path); -} - -/* -** Add all content of a directory. -*/ -void add_directory(const char *zDir, int vid, Blob *pOmit){ - Stmt q; - add_directory_content(zDir); - db_prepare(&q, "SELECT x FROM sfile ORDER BY x"); - while( db_step(&q)==SQLITE_ROW ){ - const char *zName = db_column_text(&q, 0); - add_one_file(zName, vid, pOmit); - } - db_finalize(&q); - db_multi_exec("DELETE FROM sfile"); + +/* +** This routine returns the names of files in a working checkout that +** are created by Fossil itself, and hence should not be added, deleted, +** or merge, and should be omitted from "clean" and "extras" lists. +** +** Return the N-th name. The first name has N==0. When all names have +** been used, return 0. +*/ +const char *fossil_reserved_name(int N, int omitRepo){ + /* Possible names of the local per-checkout database file and + ** its associated journals + */ + static const char *const azName[] = { + "_FOSSIL_", + "_FOSSIL_-journal", + "_FOSSIL_-wal", + "_FOSSIL_-shm", + ".fslckout", + ".fslckout-journal", + ".fslckout-wal", + ".fslckout-shm", + + /* The use of ".fos" as the name of the checkout database is + ** deprecated. Use ".fslckout" instead. At some point, the following + ** entries should be removed. 2012-02-04 */ + ".fos", + ".fos-journal", + ".fos-wal", + ".fos-shm", + }; + + /* Names of auxiliary files generated by SQLite when the "manifest" + ** property is enabled + */ + static const char *const azManifest[] = { + "manifest", + "manifest.uuid", + }; + + /* + ** Names of repository files, if they exist in the checkout. 
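+  ** For a repository stored at the checkout root as "repo.fossil" (an
+  ** illustrative name, not taken from the sources), the list below would
+  ** hold "repo.fossil", "repo.fossil-journal", "repo.fossil-wal" and
+  ** "repo.fossil-shm".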
+ */ + static const char *azRepo[4] = { 0, 0, 0, 0 }; + + /* Cached setting "manifest" */ + static int cachedManifest = -1; + + if( cachedManifest == -1 ){ + Blob repo; + cachedManifest = db_get_boolean("manifest",0); + blob_zero(&repo); + if( file_tree_name(g.zRepositoryName, &repo, 0, 0) ){ + const char *zRepo = blob_str(&repo); + azRepo[0] = zRepo; + azRepo[1] = mprintf("%s-journal", zRepo); + azRepo[2] = mprintf("%s-wal", zRepo); + azRepo[3] = mprintf("%s-shm", zRepo); + } + } + + if( N<0 ) return 0; + if( N<count(azName) ) return azName[N]; + N -= count(azName); + if( cachedManifest ){ + if( N<count(azManifest) ) return azManifest[N]; + N -= count(azManifest); + } + if( !omitRepo && N<count(azRepo) ) return azRepo[N]; + return 0; +} + +/* +** Return a list of all reserved filenames as an SQL list. +*/ +const char *fossil_all_reserved_names(int omitRepo){ + static char *zAll = 0; + if( zAll==0 ){ + Blob x; + int i; + const char *z; + blob_zero(&x); + for(i=0; (z = fossil_reserved_name(i, omitRepo))!=0; i++){ + if( i>0 ) blob_append(&x, ",", 1); + blob_appendf(&x, "'%q'", z); + } + zAll = blob_str(&x); + } + return zAll; +} + +/* +** COMMAND: test-reserved-names +** +** Usage: %fossil test-reserved-names [-omitrepo] +** +** Show all reserved filenames for the current check-out. +*/ +void test_reserved_names(void){ + int i; + const char *z; + int omitRepo = find_option("omitrepo",0,0)!=0; + + /* We should be done with options.. */ + verify_all_options(); + + db_must_be_within_tree(); + for(i=0; (z = fossil_reserved_name(i, omitRepo))!=0; i++){ + fossil_print("%3d: %s\n", i, z); + } + fossil_print("ALL: (%s)\n", fossil_all_reserved_names(omitRepo)); +} + +/* +** Add a single file named zName to the VFILE table with vid. +** +** Omit any file whose name is pOmit. +*/ +static int add_one_file( + const char *zPath, /* Tree-name of file to add. */ + int vid /* Add to this VFILE */ +){ + if( !file_is_simple_pathname(zPath, 1) ){ + fossil_warning("filename contains illegal characters: %s", zPath); + return 0; + } + if( db_exists("SELECT 1 FROM vfile" + " WHERE pathname=%Q %s", zPath, filename_collation()) ){ + db_multi_exec("UPDATE vfile SET deleted=0" + " WHERE pathname=%Q %s AND deleted", + zPath, filename_collation()); + }else{ + char *zFullname = mprintf("%s%s", g.zLocalRoot, zPath); + int isExe = file_wd_isexe(zFullname); + db_multi_exec( + "INSERT INTO vfile(vid,deleted,rid,mrid,pathname,isexe,islink)" + "VALUES(%d,0,0,0,%Q,%d,%d)", + vid, zPath, isExe, file_wd_islink(0)); + fossil_free(zFullname); + } + if( db_changes() ){ + fossil_print("ADDED %s\n", zPath); + return 1; + }else{ + fossil_print("SKIP %s\n", zPath); + return 0; + } +} + +/* +** Add all files in the sfile temp table. +** +** Automatically exclude the repository file. 
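+**
+** Every name returned by fossil_reserved_name() is skipped as well, using a
+** case-sensitive or case-insensitive comparison as appropriate for the
+** checkout.  The return value is the number of files actually added.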
+*/ +static int add_files_in_sfile(int vid){ + const char *zRepo; /* Name of the repository database file */ + int nAdd = 0; /* Number of files added */ + int i; /* Loop counter */ + const char *zReserved; /* Name of a reserved file */ + Blob repoName; /* Treename of the repository */ + Stmt loop; /* SQL to loop over all files to add */ + int (*xCmp)(const char*,const char*); + + if( !file_tree_name(g.zRepositoryName, &repoName, 0, 0) ){ + blob_zero(&repoName); + zRepo = ""; + }else{ + zRepo = blob_str(&repoName); + } + if( filenames_are_case_sensitive() ){ + xCmp = fossil_strcmp; + }else{ + xCmp = fossil_stricmp; + } + db_prepare(&loop, "SELECT x FROM sfile ORDER BY x"); + while( db_step(&loop)==SQLITE_ROW ){ + const char *zToAdd = db_column_text(&loop, 0); + if( fossil_strcmp(zToAdd, zRepo)==0 ) continue; + for(i=0; (zReserved = fossil_reserved_name(i, 0))!=0; i++){ + if( xCmp(zToAdd, zReserved)==0 ) break; + } + if( zReserved ) continue; + nAdd += add_one_file(zToAdd, vid); + } + db_finalize(&loop); + blob_reset(&repoName); + return nAdd; } /* ** COMMAND: add ** -** Usage: %fossil add FILE... -** -** Make arrangements to add one or more files to the current checkout -** at the next commit. -** -** When adding files recursively, filenames that begin with "." are -** excluded by default. To include such files, add the "--dotfiles" -** option to the command-line. +** Usage: %fossil add ?OPTIONS? FILE1 ?FILE2 ...? +** +** Make arrangements to add one or more files or directories to the +** current checkout at the next commit. +** +** When adding files or directories recursively, filenames that begin +** with "." are excluded by default. To include such files, add +** the "--dotfiles" option to the command-line. +** +** The --ignore and --clean options are comma-separate lists of glob patterns +** for files to be excluded. Example: '*.o,*.obj,*.exe' If the --ignore +** option does not appear on the command line then the "ignore-glob" setting +** is used. If the --clean option does not appear on the command line then +** the "clean-glob" setting is used. +** +** If files are attempted to be added explicitly on the command line which +** match "ignore-glob", a confirmation is asked first. This can be prevented +** using the -f|--force option. +** +** The --case-sensitive option determines whether or not filenames should +** be treated case sensitive or not. If the option is not given, the default +** depends on the global setting, or the operating system default, if not set. +** +** Options: +** +** --case-sensitive <BOOL> Override the case-sensitive setting. +** --dotfiles include files beginning with a dot (".") +** -f|--force Add files without prompting +** --ignore <CSG> Ignore files matching patterns from the +** comma separated list of glob patterns. +** --clean <CSG> Also ignore files matching patterns from +** the comma separated list of glob patterns. 
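+**
+** Example (illustrative, not part of the original help text):
+**
+**    fossil add --ignore '*.o,*.obj' src
+**
+** adds everything under "src" except files matching the two glob patterns.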
+** +** See also: addremove, rm */ void add_cmd(void){ - int i; - int vid; - Blob repo; + int i; /* Loop counter */ + int vid; /* Currently checked out version */ + int nRoot; /* Full path characters in g.zLocalRoot */ + const char *zCleanFlag; /* The --clean option or clean-glob setting */ + const char *zIgnoreFlag; /* The --ignore option or ignore-glob setting */ + Glob *pIgnore, *pClean; /* Ignore everything matching the glob patterns */ + unsigned scanFlags = 0; /* Flags passed to vfile_scan() */ + int forceFlag; + + zCleanFlag = find_option("clean",0,1); + zIgnoreFlag = find_option("ignore",0,1); + forceFlag = find_option("force","f",0)!=0; + if( find_option("dotfiles",0,0)!=0 ) scanFlags |= SCAN_ALL; - includeDotFiles = find_option("dotfiles",0,0)!=0; + /* We should be done with options.. */ + verify_all_options(); + db_must_be_within_tree(); - vid = db_lget_int("checkout",0); - if( vid==0 ){ - fossil_panic("no checkout to add to"); + if( zCleanFlag==0 ){ + zCleanFlag = db_get("clean-glob", 0); + } + if( zIgnoreFlag==0 ){ + zIgnoreFlag = db_get("ignore-glob", 0); } + if( db_get_boolean("dotfiles", 0) ) scanFlags |= SCAN_ALL; + vid = db_lget_int("checkout",0); db_begin_transaction(); - if( !file_tree_name(g.zRepositoryName, &repo, 0) ){ - blob_zero(&repo); - } - db_multi_exec("CREATE TEMP TABLE sfile(x TEXT PRIMARY KEY)"); -#ifdef __MINGW32__ - db_multi_exec( - "CREATE INDEX IF NOT EXISTS vfile_pathname " - " ON vfile(pathname COLLATE nocase)" - ); -#endif + db_multi_exec("CREATE TEMP TABLE sfile(x TEXT PRIMARY KEY %s)", + filename_collation()); + pClean = glob_create(zCleanFlag); + pIgnore = glob_create(zIgnoreFlag); + nRoot = strlen(g.zLocalRoot); + + /* Load the names of all files that are to be added into sfile temp table */ for(i=2; i<g.argc; i++){ char *zName; int isDir; + Blob fullName; + + /* file_tree_name() throws a fatal error if g.argv[i] is outside of the + ** checkout. */ + file_tree_name(g.argv[i], &fullName, 0, 1); + blob_reset(&fullName); - zName = mprintf("%/", g.argv[i]); - isDir = file_isdir(zName); + file_canonical_name(g.argv[i], &fullName, 0); + zName = blob_str(&fullName); + isDir = file_wd_isdir(zName); if( isDir==1 ){ - add_directory(zName, vid, &repo); + vfile_scan(&fullName, nRoot-1, scanFlags, pClean, pIgnore); }else if( isDir==0 ){ - fossil_fatal("not found: %s", zName); - }else if( access(zName, R_OK) ){ - fossil_fatal("cannot open %s", zName); + fossil_warning("not found: %s", zName); }else{ - add_one_file(zName, vid, &repo); + char *zTreeName = &zName[nRoot]; + if( !forceFlag && glob_match(pIgnore, zTreeName) ){ + Blob ans; + char cReply; + char *prompt = mprintf("file \"%s\" matches \"ignore-glob\". " + "Add it (a=all/y/N)? 
", zTreeName); + prompt_user(prompt, &ans); + cReply = blob_str(&ans)[0]; + blob_reset(&ans); + if( cReply=='a' || cReply=='A' ){ + forceFlag = 1; + }else if( cReply!='y' && cReply!='Y' ){ + blob_reset(&fullName); + continue; + } + } + db_multi_exec( + "INSERT OR IGNORE INTO sfile(x) VALUES(%Q)", + zTreeName + ); } - free(zName); + blob_reset(&fullName); } + glob_free(pIgnore); + glob_free(pClean); + + add_files_in_sfile(vid); db_end_transaction(0); } /* -** Remove all contents of zDir -*/ -void del_directory_content(const char *zDir){ - DIR *d; - int origSize; - struct dirent *pEntry; - Blob path; - - blob_zero(&path); - blob_append(&path, zDir, -1); - origSize = blob_size(&path); - d = opendir(zDir); - if( d ){ - while( (pEntry=readdir(d))!=0 ){ - char *zPath; - if( pEntry->d_name[0]=='.'){ - if( !includeDotFiles ) continue; - if( pEntry->d_name[1]==0 ) continue; - if( pEntry->d_name[1]=='.' && pEntry->d_name[2]==0 ) continue; - } - blob_appendf(&path, "/%s", pEntry->d_name); - zPath = blob_str(&path); - if( file_isdir(zPath)==1 ){ - del_directory_content(zPath); - }else if( file_isfile(zPath) ){ - char *zFilePath; - Blob pathname; - file_tree_name(zPath, &pathname, 1); - zFilePath = blob_str(&pathname); - if( !db_exists( - "SELECT 1 FROM vfile WHERE pathname=%Q AND NOT deleted", zFilePath) - ){ - printf("SKIPPED %s\n", zPath); - }else{ - db_multi_exec("UPDATE vfile SET deleted=1 WHERE pathname=%Q", zPath); - printf("DELETED %s\n", zPath); - } - blob_reset(&pathname); - } - blob_resize(&path, origSize); - } - } - closedir(d); - blob_reset(&path); +** This function adds a file to list of files to delete from disk after +** the other actions required for the parent operation have completed +** successfully. The first time it is called for the current process, +** it creates a temporary table named "fremove", to keep track of these +** files. +*/ +static void add_file_to_remove( + const char *zOldName /* The old name of the file on disk. */ +){ + static int tableCreated = 0; + Blob fullOldName; + if( !tableCreated ){ + db_multi_exec("CREATE TEMP TABLE fremove(x TEXT PRIMARY KEY %s)", + filename_collation()); + tableCreated = 1; + } + file_tree_name(zOldName, &fullOldName, 1, 1); + db_multi_exec("INSERT INTO fremove VALUES('%q');", blob_str(&fullOldName)); + blob_reset(&fullOldName); +} + +/* +** This function deletes files from the checkout, using the file names +** contained in the temporary table "fremove". The temporary table is +** created on demand by the add_file_to_remove() function. +** +** If dryRunFlag is non-zero, no files will be removed; however, their +** names will still be output. +** +** The temporary table "fremove" is dropped after being processed. +*/ +static void process_files_to_remove( + int dryRunFlag /* Zero to actually operate on the file-system. */ +){ + Stmt remove; + if( db_table_exists(db_name("temp"), "fremove") ){ + db_prepare(&remove, "SELECT x FROM fremove ORDER BY x;"); + while( db_step(&remove)==SQLITE_ROW ){ + const char *zOldName = db_column_text(&remove, 0); + if( !dryRunFlag ){ + file_delete(zOldName); + } + fossil_print("DELETED_FILE %s\n", zOldName); + } + db_finalize(&remove); + db_multi_exec("DROP TABLE fremove;"); + } } /* ** COMMAND: rm -** COMMAND: del -** -** Usage: %fossil rm FILE... -** or: %fossil del FILE... -** -** Remove one or more files from the tree. -** -** This command does not remove the files from disk. It just marks the -** files as no longer being part of the project. 
In other words, future -** changes to the named files will not be versioned. +** COMMAND: delete +** COMMAND: forget* +** +** Usage: %fossil rm|delete|forget FILE1 ?FILE2 ...? +** +** Remove one or more files or directories from the repository. +** +** The 'rm' and 'delete' commands do NOT normally remove the files from +** disk. They just mark the files as no longer being part of the project. +** In other words, future changes to the named files will not be versioned. +** However, the default behavior of this command may be overridden via the +** command line options listed below and/or the 'mv-rm-files' setting. +** +** The 'forget' command never removes files from disk, even when the command +** line options and/or the 'mv-rm-files' setting would otherwise require it +** to do so. +** +** WARNING: If the "--hard" option is specified -OR- the "mv-rm-files" +** setting is non-zero, files WILL BE removed from disk as well. +** This does NOT apply to the 'forget' command. +** +** Options: +** --soft Skip removing files from the checkout. +** This supersedes the --hard option. +** --hard Remove files from the checkout. +** --case-sensitive <BOOL> Override the case-sensitive setting. +** -n|--dry-run If given, display instead of run actions. +** +** See also: addremove, add */ -void del_cmd(void){ +void delete_cmd(void){ int i; - int vid; + int removeFiles; + int dryRunFlag; + int softFlag; + int hardFlag; + Stmt loop; + + dryRunFlag = find_option("dry-run","n",0)!=0; + softFlag = find_option("soft",0,0)!=0; + hardFlag = find_option("hard",0,0)!=0; + + /* We should be done with options.. */ + verify_all_options(); db_must_be_within_tree(); - vid = db_lget_int("checkout", 0); - if( vid==0 ){ - fossil_panic("no checkout to remove from"); - } db_begin_transaction(); + if( g.argv[1][0]=='f' ){ /* i.e. 
"forget" */ + removeFiles = 0; + }else if( softFlag ){ + removeFiles = 0; + }else if( hardFlag ){ + removeFiles = 1; + }else{ +#if FOSSIL_ENABLE_LEGACY_MV_RM + removeFiles = db_get_boolean("mv-rm-files",0); +#else + removeFiles = FOSSIL_MV_RM_FILE; +#endif + } + db_multi_exec("CREATE TEMP TABLE sfile(x TEXT PRIMARY KEY %s)", + filename_collation()); for(i=2; i<g.argc; i++){ - char *zName; - - zName = mprintf("%/", g.argv[i]); - if( file_isdir(zName) == 1 ){ - del_directory_content(zName); - } else { - char *zPath; - Blob pathname; - file_tree_name(zName, &pathname, 1); - zPath = blob_str(&pathname); - if( !db_exists( - "SELECT 1 FROM vfile WHERE pathname=%Q AND NOT deleted", zPath) ){ - fossil_fatal("not in the repository: %s", zName); - }else{ - db_multi_exec("UPDATE vfile SET deleted=1 WHERE pathname=%Q", zPath); - printf("DELETED %s\n", zPath); - } - blob_reset(&pathname); - } - free(zName); - } - db_multi_exec("DELETE FROM vfile WHERE deleted AND rid=0"); + Blob treeName; + char *zTreeName; + + file_tree_name(g.argv[i], &treeName, 0, 1); + zTreeName = blob_str(&treeName); + db_multi_exec( + "INSERT OR IGNORE INTO sfile" + " SELECT pathname FROM vfile" + " WHERE (pathname=%Q %s" + " OR (pathname>'%q/' %s AND pathname<'%q0' %s))" + " AND NOT deleted", + zTreeName, filename_collation(), zTreeName, + filename_collation(), zTreeName, filename_collation() + ); + blob_reset(&treeName); + } + + db_prepare(&loop, "SELECT x FROM sfile"); + while( db_step(&loop)==SQLITE_ROW ){ + fossil_print("DELETED %s\n", db_column_text(&loop, 0)); + if( removeFiles ) add_file_to_remove(db_column_text(&loop, 0)); + } + db_finalize(&loop); + if( !dryRunFlag ){ + db_multi_exec( + "UPDATE vfile SET deleted=1 WHERE pathname IN sfile;" + "DELETE FROM vfile WHERE rid=0 AND deleted;" + ); + } db_end_transaction(0); + if( removeFiles ) process_files_to_remove(dryRunFlag); +} + +/* +** Capture the command-line --case-sensitive option. +*/ +static const char *zCaseSensitive = 0; +void capture_case_sensitive_option(void){ + if( zCaseSensitive==0 ){ + zCaseSensitive = find_option("case-sensitive",0,1); + } +} + +/* +** This routine determines if files should be case-sensitive or not. +** In other words, this routine determines if two filenames that +** differ only in case should be considered the same name or not. +** +** The case-sensitive setting determines the default value. If +** the case-sensitive setting is undefined, then case sensitivity +** defaults off for Cygwin, Mac and Windows and on for all other unix. +** If case-sensitivity is enabled in the windows kernel, the Cygwin port +** of fossil.exe can detect that, and modifies the default to 'on'. +** +** The --case-sensitive <BOOL> command-line option overrides any +** setting. +*/ +int filenames_are_case_sensitive(void){ + static int caseSensitive; + static int once = 1; + + if( once ){ + once = 0; + if( zCaseSensitive ){ + caseSensitive = is_truth(zCaseSensitive); + }else{ +#if defined(_WIN32) || defined(__DARWIN__) || defined(__APPLE__) + caseSensitive = 0; /* Mac and Windows */ +#elif defined(__CYGWIN__) + /* Cygwin can be configured to be case-sensitive, check this. 
*/ + void *hKey; + int value = 1, length = sizeof(int); + caseSensitive = 0; /* Cygwin default */ + if( (RegOpenKeyExW((void *)0x80000002, L"SYSTEM\\CurrentControlSet\\" + "Control\\Session Manager\\kernel", 0, 1, (void *)&hKey) + == 0) && (RegQueryValueExW(hKey, L"obcaseinsensitive", + 0, NULL, (void *)&value, (void *)&length) == 0) && !value ){ + caseSensitive = 1; + } +#else + caseSensitive = 1; /* Unix */ +#endif + caseSensitive = db_get_boolean("case-sensitive",caseSensitive); + } + if( !caseSensitive && g.localOpen ){ + db_multi_exec( + "CREATE INDEX IF NOT EXISTS %s.vfile_nocase" + " ON vfile(pathname COLLATE nocase)", + db_name("localdb") + ); + } + } + return caseSensitive; +} + +/* +** Return one of two things: +** +** "" (empty string) if filenames are case sensitive +** +** "COLLATE nocase" if filenames are not case sensitive. +*/ +const char *filename_collation(void){ + return filenames_are_case_sensitive() ? "" : "COLLATE nocase"; +} + +/* +** COMMAND: addremove +** +** Usage: %fossil addremove ?OPTIONS? +** +** Do all necessary "add" and "rm" commands to synchronize the repository +** with the content of the working checkout: +** +** * All files in the checkout but not in the repository (that is, +** all files displayed using the "extras" command) are added as +** if by the "add" command. +** +** * All files in the repository but missing from the checkout (that is, +** all files that show as MISSING with the "status" command) are +** removed as if by the "rm" command. +** +** The command does not "commit". You must run the "commit" separately +** as a separate step. +** +** Files and directories whose names begin with "." are ignored unless +** the --dotfiles option is used. +** +** The --ignore option overrides the "ignore-glob" setting, as do the +** --case-sensitive option with the "case-sensitive" setting and the +** --clean option with the "clean-glob" setting. See the documentation +** on the "settings" command for further information. +** +** The -n|--dry-run option shows what would happen without actually doing +** anything. +** +** This command can be used to track third party software. +** +** Options: +** --case-sensitive <BOOL> Override the case-sensitive setting. +** --dotfiles Include files beginning with a dot (".") +** --ignore <CSG> Ignore files matching patterns from the +** comma separated list of glob patterns. +** --clean <CSG> Also ignore files matching patterns from +** the comma separated list of glob patterns. +** -n|--dry-run If given, display instead of run actions. +** +** See also: add, rm +*/ +void addremove_cmd(void){ + Blob path; + const char *zCleanFlag = find_option("clean",0,1); + const char *zIgnoreFlag = find_option("ignore",0,1); + unsigned scanFlags = find_option("dotfiles",0,0)!=0 ? SCAN_ALL : 0; + int dryRunFlag = find_option("dry-run","n",0)!=0; + int n; + Stmt q; + int vid; + int nAdd = 0; + int nDelete = 0; + Glob *pIgnore, *pClean; + + if( !dryRunFlag ){ + dryRunFlag = find_option("test",0,0)!=0; /* deprecated */ + } + + /* We should be done with options.. */ + verify_all_options(); + + /* Fail if unprocessed arguments are present, in case user expect the + ** addremove command to accept a list of file or directory. 
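+  ** For example (illustrative), "fossil addremove src" is rejected by the
+  ** fossil_fatal() call below; the command always operates on the whole
+  ** checkout.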
+ */ + if( g.argc>2 ){ + fossil_fatal( + "%s: Can only work on the entire checkout, no arguments supported.", + g.argv[1]); + } + db_must_be_within_tree(); + if( zCleanFlag==0 ){ + zCleanFlag = db_get("clean-glob", 0); + } + if( zIgnoreFlag==0 ){ + zIgnoreFlag = db_get("ignore-glob", 0); + } + if( db_get_boolean("dotfiles", 0) ) scanFlags |= SCAN_ALL; + vid = db_lget_int("checkout",0); + db_begin_transaction(); + + /* step 1: + ** Populate the temp table "sfile" with the names of all unmanaged + ** files currently in the check-out, except for files that match the + ** --ignore or ignore-glob patterns and dot-files. Then add all of + ** the files in the sfile temp table to the set of managed files. + */ + db_multi_exec("CREATE TEMP TABLE sfile(x TEXT PRIMARY KEY %s)", + filename_collation()); + n = strlen(g.zLocalRoot); + blob_init(&path, g.zLocalRoot, n-1); + /* now we read the complete file structure into a temp table */ + pClean = glob_create(zCleanFlag); + pIgnore = glob_create(zIgnoreFlag); + vfile_scan(&path, blob_size(&path), scanFlags, pClean, pIgnore); + glob_free(pIgnore); + glob_free(pClean); + nAdd = add_files_in_sfile(vid); + + /* step 2: search for missing files */ + db_prepare(&q, + "SELECT pathname, %Q || pathname, deleted FROM vfile" + " WHERE NOT deleted" + " ORDER BY 1", + g.zLocalRoot + ); + while( db_step(&q)==SQLITE_ROW ){ + const char *zFile; + const char *zPath; + + zFile = db_column_text(&q, 0); + zPath = db_column_text(&q, 1); + if( !file_wd_isfile_or_link(zPath) ){ + if( !dryRunFlag ){ + db_multi_exec("UPDATE vfile SET deleted=1 WHERE pathname=%Q", zFile); + } + fossil_print("DELETED %s\n", zFile); + nDelete++; + } + } + db_finalize(&q); + /* show command summary */ + fossil_print("added %d files, deleted %d files\n", nAdd, nDelete); + + db_end_transaction(dryRunFlag); } + /* ** Rename a single file. ** ** The original name of the file is zOrig. The new filename is zNew. */ -static void mv_one_file(int vid, const char *zOrig, const char *zNew){ - printf("RENAME %s %s\n", zOrig, zNew); - db_multi_exec( - "UPDATE vfile SET pathname='%s' WHERE pathname='%s' AND vid=%d", - zNew, zOrig, vid - ); +static void mv_one_file( + int vid, + const char *zOrig, + const char *zNew, + int dryRunFlag +){ + int x = db_int(-1, "SELECT deleted FROM vfile WHERE pathname=%Q %s", + zNew, filename_collation()); + if( x>=0 ){ + if( x==0 ){ + if( !filenames_are_case_sensitive() && fossil_stricmp(zOrig,zNew)==0 ){ + /* Case change only */ + }else{ + fossil_fatal("cannot rename '%s' to '%s' since another file named '%s'" + " is currently under management", zOrig, zNew, zNew); + } + }else{ + fossil_fatal("cannot rename '%s' to '%s' since the delete of '%s' has " + "not yet been committed", zOrig, zNew, zNew); + } + } + fossil_print("RENAME %s %s\n", zOrig, zNew); + if( !dryRunFlag ){ + db_multi_exec( + "UPDATE vfile SET pathname='%q' WHERE pathname='%q' %s AND vid=%d", + zNew, zOrig, filename_collation(), vid + ); + } +} + +/* +** This function adds a file to list of files to move on disk after the +** other actions required for the parent operation have completed +** successfully. The first time it is called for the current process, +** it creates a temporary table named "fmove", to keep track of these +** files. +*/ +static void add_file_to_move( + const char *zOldName, /* The old name of the file on disk. */ + const char *zNewName /* The new name of the file on disk. 
*/ +){ + static int tableCreated = 0; + Blob fullOldName; + Blob fullNewName; + char *zOld, *zNew; + if( !tableCreated ){ + db_multi_exec("CREATE TEMP TABLE fmove(x TEXT PRIMARY KEY %s, y TEXT %s)", + filename_collation(), filename_collation()); + tableCreated = 1; + } + file_tree_name(zOldName, &fullOldName, 1, 1); + zOld = blob_str(&fullOldName); + file_tree_name(zNewName, &fullNewName, 1, 1); + zNew = blob_str(&fullNewName); + if( filenames_are_case_sensitive() || fossil_stricmp(zOld,zNew)!=0 ){ + db_multi_exec("INSERT INTO fmove VALUES('%q','%q');", zOld, zNew); + } + blob_reset(&fullNewName); + blob_reset(&fullOldName); +} + +/* +** This function moves files within the checkout, using the file names +** contained in the temporary table "fmove". The temporary table is +** created on demand by the add_file_to_move() function. +** +** If dryRunFlag is non-zero, no files will be moved; however, their +** names will still be output. +** +** The temporary table "fmove" is dropped after being processed. +*/ +static void process_files_to_move( + int dryRunFlag /* Zero to actually operate on the file-system. */ +){ + Stmt move; + if( db_table_exists(db_name("temp"), "fmove") ){ + db_prepare(&move, "SELECT x, y FROM fmove ORDER BY x;"); + while( db_step(&move)==SQLITE_ROW ){ + const char *zOldName = db_column_text(&move, 0); + const char *zNewName = db_column_text(&move, 1); + if( !dryRunFlag ){ + if( file_wd_islink(zOldName) ){ + symlink_copy(zOldName, zNewName); + }else{ + file_copy(zOldName, zNewName); + } + file_delete(zOldName); + } + fossil_print("MOVED_FILE %s\n", zOldName); + } + db_finalize(&move); + db_multi_exec("DROP TABLE fmove;"); + } } /* ** COMMAND: mv -** COMMAND: rename +** COMMAND: rename* ** ** Usage: %fossil mv|rename OLDNAME NEWNAME ** or: %fossil mv|rename OLDNAME... DIR ** -** Move or rename one or more files within the tree +** Move or rename one or more files or directories within the repository tree. +** You can either rename a file or directory or move it to another subdirectory. +** +** The 'mv' command does NOT normally rename or move the files on disk. +** This command merely records the fact that file names have changed so +** that appropriate notations can be made at the next commit/check-in. +** However, the default behavior of this command may be overridden via +** command line options listed below and/or the 'mv-rm-files' setting. +** +** The 'rename' command never renames or moves files on disk, even when the +** command line options and/or the 'mv-rm-files' setting would otherwise +** require it to do so. +** +** WARNING: If the "--hard" option is specified -OR- the "mv-rm-files" +** setting is non-zero, files WILL BE renamed or moved on disk +** as well. This does NOT apply to the 'rename' command. +** +** Options: +** --soft Skip moving files within the checkout. +** This supersedes the --hard option. +** --hard Move files within the checkout. +** --case-sensitive <BOOL> Override the case-sensitive setting. +** -n|--dry-run If given, display instead of run actions. ** -** This command does not rename the files on disk. This command merely -** records the fact that filenames have changed so that appropriate notations -** can be made at the next commit/checkin. 
+** See also: changes, status */ void mv_cmd(void){ int i; int vid; + int moveFiles; + int dryRunFlag; + int softFlag; + int hardFlag; char *zDest; Blob dest; Stmt q; db_must_be_within_tree(); + dryRunFlag = find_option("dry-run","n",0)!=0; + softFlag = find_option("soft",0,0)!=0; + hardFlag = find_option("hard",0,0)!=0; + + /* We should be done with options.. */ + verify_all_options(); + vid = db_lget_int("checkout", 0); if( vid==0 ){ - fossil_panic("no checkout rename files in"); + fossil_fatal("no checkout rename files in"); } if( g.argc<4 ){ usage("OLDNAME NEWNAME"); } zDest = g.argv[g.argc-1]; db_begin_transaction(); - file_tree_name(zDest, &dest, 1); + if( g.argv[1][0]=='r' ){ /* i.e. "rename" */ + moveFiles = 0; + }else if( softFlag ){ + moveFiles = 0; + }else if( hardFlag ){ + moveFiles = 1; + }else{ +#if FOSSIL_ENABLE_LEGACY_MV_RM + moveFiles = db_get_boolean("mv-rm-files",0); +#else + moveFiles = FOSSIL_MV_RM_FILE; +#endif + } + file_tree_name(zDest, &dest, 0, 1); db_multi_exec( "UPDATE vfile SET origname=pathname WHERE origname IS NULL;" ); db_multi_exec( "CREATE TEMP TABLE mv(f TEXT UNIQUE ON CONFLICT IGNORE, t TEXT);" ); - if( file_isdir(zDest)!=1 ){ + if( file_wd_isdir(zDest)!=1 ){ Blob orig; if( g.argc!=4 ){ usage("OLDNAME NEWNAME"); } - file_tree_name(g.argv[2], &orig, 1); + file_tree_name(g.argv[2], &orig, 0, 1); db_multi_exec( "INSERT INTO mv VALUES(%B,%B)", &orig, &dest ); }else{ if( blob_eq(&dest, ".") ){ @@ -339,19 +899,20 @@ } for(i=2; i<g.argc-1; i++){ Blob orig; char *zOrig; int nOrig; - file_tree_name(g.argv[i], &orig, 1); + file_tree_name(g.argv[i], &orig, 0, 1); zOrig = blob_str(&orig); nOrig = blob_size(&orig); db_prepare(&q, "SELECT pathname FROM vfile" " WHERE vid=%d" - " AND (pathname='%s' OR pathname GLOB '%s/*')" + " AND (pathname='%q' %s OR (pathname>'%q/' %s AND pathname<'%q0' %s))" " ORDER BY 1", - vid, zOrig, zOrig + vid, zOrig, filename_collation(), zOrig, filename_collation(), + zOrig, filename_collation() ); while( db_step(&q)==SQLITE_ROW ){ const char *zPath = db_column_text(&q, 0); int nPath = db_column_bytes(&q, 0); const char *zTail; @@ -359,11 +920,11 @@ zTail = file_tail(zPath); }else{ zTail = &zPath[nOrig+1]; } db_multi_exec( - "INSERT INTO mv VALUES('%s','%s%s')", + "INSERT INTO mv VALUES('%q','%q%q')", zPath, blob_str(&dest), zTail ); } db_finalize(&q); } @@ -370,10 +931,20 @@ } db_prepare(&q, "SELECT f, t FROM mv ORDER BY f"); while( db_step(&q)==SQLITE_ROW ){ const char *zFrom = db_column_text(&q, 0); const char *zTo = db_column_text(&q, 1); - mv_one_file(vid, zFrom, zTo); + mv_one_file(vid, zFrom, zTo, dryRunFlag); + if( moveFiles ) add_file_to_move(zFrom, zTo); } db_finalize(&q); db_end_transaction(0); + if( moveFiles ) process_files_to_move(dryRunFlag); +} + +/* +** Function for stash_apply to be able to restore a file and indicate +** newly ADDED state. 
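+** It is a thin exported wrapper around the static add_files_in_sfile()
+** helper above and returns the number of files it added.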
+*/ +int stash_add_files_in_sfile(int vid){ + return add_files_in_sfile(vid); } Index: src/allrepo.c ================================================================== --- src/allrepo.c +++ src/allrepo.c @@ -34,11 +34,11 @@ static char *quoteFilename(const char *zFilename){ int i, c; int needQuote = 0; for(i=0; (c = zFilename[i])!=0; i++){ if( c=='"' ) return 0; - if( isspace(c) ) needQuote = 1; + if( fossil_isspace(c) ) needQuote = 1; if( c=='\\' && zFilename[i+1]==0 ) return 0; if( c=='$' ) return 0; } if( needQuote ){ return mprintf("\"%s\"", zFilename); @@ -45,114 +45,388 @@ }else{ return mprintf("%s", zFilename); } } +/* +** Build a string that contains all of the command-line options +** specified as arguments. If the option name begins with "+" then +** it takes an argument. Without the "+" it does not. +*/ +static void collect_argument(Blob *pExtra, const char *zArg, const char *zShort){ + if( find_option(zArg, zShort, 0)!=0 ){ + blob_appendf(pExtra, " --%s", zArg); + } +} +static void collect_argument_value(Blob *pExtra, const char *zArg){ + const char *zValue = find_option(zArg, 0, 1); + if( zValue ){ + if( zValue[0] ){ + blob_appendf(pExtra, " --%s %s", zArg, zValue); + }else{ + blob_appendf(pExtra, " --%s \"\"", zArg); + } + } +} +static void collect_argv(Blob *pExtra, int iStart){ + int i; + for(i=iStart; i<g.argc; i++){ + blob_appendf(pExtra, " %s", g.argv[i]); + } +} + /* ** COMMAND: all ** -** Usage: %fossil all (list|ls|pull|push|rebuild|sync) +** Usage: %fossil all SUBCOMMAND ... ** ** The ~/.fossil file records the location of all repositories for a ** user. This command performs certain operations on all repositories -** that can be useful before or after a period of disconnection operation. +** that can be useful before or after a period of disconnected operation. +** +** On Win32 systems, the file is named "_fossil" and is located in +** %LOCALAPPDATA%, %APPDATA% or %HOMEPATH%. +** ** Available operations are: ** -** list | ls Display the location of all repositories -** -** pull Run a "pull" operation on all repositories -** -** push Run a "push" on all repositories -** -** rebuild Rebuild on all repositories -** -** sync Run a "sync" on all repositories -** -** Respositories are automatically added to the set of known repositories -** when one of the following commands against the repository: clone, info, -** pull, push, or sync +** cache Mangages the cache used for potentially expensive web +** pages. Any additional arguments are passed on verbatim +** to the cache command. +** +** changes Shows all local checkouts that have uncommitted changes. +** This operation has no additional options. +** +** clean Delete all "extra" files in all local checkouts. Extreme +** caution should be exercised with this command because its +** effects cannot be undone. Use of the --dry-run option to +** carefully review the local checkouts to be operated upon +** and the --whatif option to carefully review the files to +** be deleted beforehand is highly recommended. The command +** line options supported by the clean command itself, if any +** are present, are passed along verbatim. +** +** config Only the "config pull AREA" command works. +** +** dbstat Run the "dbstat" command on all repositories. +** +** extras Shows "extra" files from all local checkouts. The command +** line options supported by the extra command itself, if any +** are present, are passed along verbatim. +** +** fts-config Run the "fts-config" command on all repositories. 
+** +** info Run the "info" command on all repositories. +** +** pull Run a "pull" operation on all repositories. Only the +** --verbose option is supported. +** +** push Run a "push" on all repositories. Only the --verbose +** option is supported. +** +** rebuild Rebuild on all repositories. The command line options +** supported by the rebuild command itself, if any are +** present, are passed along verbatim. The --force and +** --randomize options are not supported. +** +** sync Run a "sync" on all repositories. Only the --verbose +** option is supported. +** +** setting Run the "setting", "set", or "unset" commands on all +** set repositories. These command are particularly useful in +** unset conjunction with the "max-loadavg" setting which cannot +** otherwise be set globally. +** +** In addition, the following maintenance operations are supported: +** +** add Add all the repositories named to the set of repositories +** tracked by Fossil. Normally Fossil is able to keep up with +** this list by itself, but sometime it can benefit from this +** hint if you rename repositories. +** +** ignore Arguments are repositories that should be ignored by +** subsequent clean, extras, list, pull, push, rebuild, and +** sync operations. The -c|--ckout option causes the listed +** local checkouts to be ignored instead. +** +** list | ls Display the location of all repositories. The -c|--ckout +** option causes all local checkouts to be listed instead. +** +** Repositories are automatically added to the set of known repositories +** when one of the following commands are run against the repository: +** clone, info, pull, push, or sync. Even previously ignored repositories +** are added back to the list of repositories by these commands. +** +** Options: +** --showfile Show the repository or checkout being operated upon. +** --dontstop Continue with other repositories even after an error. +** --dry-run If given, display instead of run actions. 
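
For orientation, the dispatch below builds one child "fossil" command per tracked repository or checkout; the list itself is kept in the user's configuration database as rows of the global_config table named "repo:PATH" or "ckout:PATH". The following stand-alone sketch reads that list with plain SQLite calls. It is an illustration only, not part of this change; the configuration-database path argument and the helper name are assumptions.

    #include <stdio.h>
    #include <sqlite3.h>

    /* Illustration only: print the repositories that "fossil all" would
    ** iterate over by reading global_config directly.  Fossil itself
    ** resolves the configuration database location per platform as
    ** described above. */
    static void list_tracked_repos(const char *zConfigDb){
      sqlite3 *db;
      sqlite3_stmt *pStmt;
      if( sqlite3_open(zConfigDb, &db)!=SQLITE_OK ) return;
      if( sqlite3_prepare_v2(db,
            "SELECT DISTINCT substr(name,6) FROM global_config"
            " WHERE substr(name,1,5)='repo:' ORDER BY 1",
            -1, &pStmt, 0)==SQLITE_OK ){
        while( sqlite3_step(pStmt)==SQLITE_ROW ){
          printf("%s\n", (const char*)sqlite3_column_text(pStmt, 0));
        }
        sqlite3_finalize(pStmt);
      }
      sqlite3_close(db);
    }
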
*/ void all_cmd(void){ int n; Stmt q; const char *zCmd; char *zSyscmd; char *zFossil; char *zQFilename; - int nMissing; - + Blob extra; + int useCheckouts = 0; + int quiet = 0; + int dryRunFlag = 0; + int showFile = find_option("showfile",0,0)!=0; + int stopOnError = find_option("dontstop",0,0)==0; + int rc; + int nToDel = 0; + int showLabel = 0; + + dryRunFlag = find_option("dry-run","n",0)!=0; + if( !dryRunFlag ){ + dryRunFlag = find_option("test",0,0)!=0; /* deprecated */ + } + if( g.argc<3 ){ - usage("list|ls|pull|push|rebuild|sync"); + usage("SUBCOMMAND ..."); } n = strlen(g.argv[2]); db_open_config(1); + blob_zero(&extra); zCmd = g.argv[2]; + if( !login_is_nobody() ) blob_appendf(&extra, " -U %s", g.zLogin); if( strncmp(zCmd, "list", n)==0 || strncmp(zCmd,"ls",n)==0 ){ zCmd = "list"; + useCheckouts = find_option("ckout","c",0)!=0; + }else if( strncmp(zCmd, "clean", n)==0 ){ + zCmd = "clean --chdir"; + collect_argument(&extra, "allckouts",0); + collect_argument_value(&extra, "case-sensitive"); + collect_argument_value(&extra, "clean"); + collect_argument(&extra, "dirsonly",0); + collect_argument(&extra, "disable-undo",0); + collect_argument(&extra, "dotfiles",0); + collect_argument(&extra, "emptydirs",0); + collect_argument(&extra, "force","f"); + collect_argument_value(&extra, "ignore"); + collect_argument_value(&extra, "keep"); + collect_argument(&extra, "no-prompt",0); + collect_argument(&extra, "temp",0); + collect_argument(&extra, "verbose","v"); + collect_argument(&extra, "whatif",0); + useCheckouts = 1; + }else if( strncmp(zCmd, "config", n)==0 ){ + zCmd = "config -R"; + collect_argv(&extra, 3); + (void)find_option("legacy",0,0); + (void)find_option("overwrite",0,0); + verify_all_options(); + if( g.argc!=5 || fossil_strcmp(g.argv[3],"pull")!=0 ){ + usage("configure pull AREA ?OPTIONS?"); + } + }else if( strncmp(zCmd, "dbstat", n)==0 ){ + zCmd = "dbstat --omit-version-info -R"; + showLabel = 1; + quiet = 1; + collect_argument(&extra, "brief", "b"); + collect_argument(&extra, "db-check", 0); + }else if( strncmp(zCmd, "extras", n)==0 ){ + if( showFile ){ + zCmd = "extras --chdir"; + }else{ + zCmd = "extras --header --chdir"; + } + collect_argument(&extra, "abs-paths",0); + collect_argument_value(&extra, "case-sensitive"); + collect_argument(&extra, "dotfiles",0); + collect_argument_value(&extra, "ignore"); + collect_argument(&extra, "rel-paths",0); + useCheckouts = 1; + stopOnError = 0; + quiet = 1; }else if( strncmp(zCmd, "push", n)==0 ){ zCmd = "push -autourl -R"; + collect_argument(&extra, "verbose","v"); }else if( strncmp(zCmd, "pull", n)==0 ){ zCmd = "pull -autourl -R"; + collect_argument(&extra, "verbose","v"); }else if( strncmp(zCmd, "rebuild", n)==0 ){ zCmd = "rebuild"; + collect_argument(&extra, "cluster",0); + collect_argument(&extra, "compress",0); + collect_argument(&extra, "compress-only",0); + collect_argument(&extra, "noverify",0); + collect_argument_value(&extra, "pagesize"); + collect_argument(&extra, "vacuum",0); + collect_argument(&extra, "deanalyze",0); + collect_argument(&extra, "analyze",0); + collect_argument(&extra, "wal",0); + collect_argument(&extra, "stats",0); + collect_argument(&extra, "index",0); + collect_argument(&extra, "noindex",0); + collect_argument(&extra, "ifneeded", 0); + }else if( strncmp(zCmd, "setting", n)==0 ){ + zCmd = "setting -R"; + collect_argv(&extra, 3); + }else if( strncmp(zCmd, "unset", n)==0 ){ + zCmd = "unset -R"; + collect_argv(&extra, 3); + }else if( strncmp(zCmd, "fts-config", n)==0 ){ + zCmd = "fts-config -R"; + 
collect_argv(&extra, 3); }else if( strncmp(zCmd, "sync", n)==0 ){ zCmd = "sync -autourl -R"; + collect_argument(&extra, "verbose","v"); + }else if( strncmp(zCmd, "test-integrity", n)==0 ){ + collect_argument(&extra, "parse", 0); + zCmd = "test-integrity"; + }else if( strncmp(zCmd, "test-orphans", n)==0 ){ + zCmd = "test-orphans -R"; + }else if( strncmp(zCmd, "test-missing", n)==0 ){ + zCmd = "test-missing -q -R"; + collect_argument(&extra, "notshunned",0); + }else if( strncmp(zCmd, "changes", n)==0 ){ + zCmd = "changes --quiet --header --chdir"; + useCheckouts = 1; + stopOnError = 0; + quiet = 1; + }else if( strncmp(zCmd, "ignore", n)==0 ){ + int j; + Blob fn = BLOB_INITIALIZER; + Blob sql = BLOB_INITIALIZER; + useCheckouts = find_option("ckout","c",0)!=0; + verify_all_options(); + db_begin_transaction(); + for(j=3; j<g.argc; j++, blob_reset(&sql), blob_reset(&fn)){ + file_canonical_name(g.argv[j], &fn, 0); + blob_append_sql(&sql, + "DELETE FROM global_config WHERE name GLOB '%s:%q'", + useCheckouts?"ckout":"repo", blob_str(&fn) + ); + if( dryRunFlag ){ + fossil_print("%s\n", blob_sql_text(&sql)); + }else{ + db_multi_exec("%s", blob_sql_text(&sql)); + } + } + db_end_transaction(0); + blob_reset(&sql); + blob_reset(&fn); + blob_reset(&extra); + return; + }else if( strncmp(zCmd, "add", n)==0 ){ + int j; + Blob fn = BLOB_INITIALIZER; + Blob sql = BLOB_INITIALIZER; + verify_all_options(); + db_begin_transaction(); + for(j=3; j<g.argc; j++, blob_reset(&fn), blob_reset(&sql)){ + sqlite3 *db; + int rc; + const char *z; + file_canonical_name(g.argv[j], &fn, 0); + z = blob_str(&fn); + if( !file_isfile(z) ) continue; + rc = sqlite3_open(z, &db); + if( rc!=SQLITE_OK ){ sqlite3_close(db); continue; } + rc = sqlite3_exec(db, "SELECT rcvid FROM blob, delta LIMIT 1", 0, 0, 0); + sqlite3_close(db); + if( rc!=SQLITE_OK ) continue; + blob_append_sql(&sql, + "INSERT INTO global_config(name,value)VALUES('repo:%q',1)", z + ); + if( dryRunFlag ){ + fossil_print("%s\n", blob_sql_text(&sql)); + }else{ + db_multi_exec("%s", blob_sql_text(&sql)); + } + } + db_end_transaction(0); + blob_reset(&sql); + blob_reset(&fn); + blob_reset(&extra); + return; + }else if( strncmp(zCmd, "info", n)==0 ){ + zCmd = "info"; + showLabel = 1; + quiet = 1; + }else if( strncmp(zCmd, "cache", n)==0 ){ + zCmd = "cache -R"; + showLabel = 1; + collect_argv(&extra, 3); }else{ fossil_fatal("\"all\" subcommand should be one of: " - "list ls push pull rebuild sync"); - } - zFossil = quoteFilename(g.argv[0]); - nMissing = 0; - db_prepare(&q, - "SELECT DISTINCT substr(name, 6) COLLATE nocase" - " FROM global_config" - " WHERE substr(name, 1, 5)=='repo:' ORDER BY 1" - ); + "add cache changes clean dbstat extras fts-config ignore " + "info list ls pull push rebuild setting sync unset"); + } + verify_all_options(); + zFossil = quoteFilename(g.nameOfExe); + db_multi_exec("CREATE TEMP TABLE repolist(name,tag);"); + if( useCheckouts ){ + db_multi_exec( + "INSERT INTO repolist " + "SELECT DISTINCT substr(name, 7), name COLLATE nocase" + " FROM global_config" + " WHERE substr(name, 1, 6)=='ckout:'" + " ORDER BY 1" + ); + }else{ + db_multi_exec( + "INSERT INTO repolist " + "SELECT DISTINCT substr(name, 6), name COLLATE nocase" + " FROM global_config" + " WHERE substr(name, 1, 5)=='repo:'" + " ORDER BY 1" + ); + } + db_multi_exec("CREATE TEMP TABLE todel(x TEXT)"); + db_prepare(&q, "SELECT name, tag FROM repolist ORDER BY 1"); while( db_step(&q)==SQLITE_ROW ){ const char *zFilename = db_column_text(&q, 0); - if( access(zFilename, 0) ){ - nMissing++; + if( 
file_access(zFilename, F_OK) + || !file_is_canonical(zFilename) + || (useCheckouts && file_isdir(zFilename)!=1) + ){ + db_multi_exec("INSERT INTO todel VALUES(%Q)", db_column_text(&q, 1)); + nToDel++; continue; } - if( !file_is_canonical(zFilename) ) nMissing++; if( zCmd[0]=='l' ){ - printf("%s\n", zFilename); + fossil_print("%s\n", zFilename); continue; + }else if( showFile ){ + fossil_print("%s: %s\n", useCheckouts ? "checkout" : "repository", + zFilename); } zQFilename = quoteFilename(zFilename); - zSyscmd = mprintf("%s %s %s", zFossil, zCmd, zQFilename); - printf("%s\n", zSyscmd); - fflush(stdout); - portable_system(zSyscmd); + zSyscmd = mprintf("%s %s %s%s", + zFossil, zCmd, zQFilename, blob_str(&extra)); + if( showLabel ){ + int len = (int)strlen(zFilename); + int nStar = 80 - (len + 15); + if( nStar<2 ) nStar = 1; + fossil_print("%.13c %s %.*c\n", '*', zFilename, nStar, '*'); + } + if( !quiet || dryRunFlag ){ + fossil_print("%s\n", zSyscmd); + fflush(stdout); + } + rc = dryRunFlag ? 0 : fossil_system(zSyscmd); free(zSyscmd); free(zQFilename); + if( stopOnError && rc ){ + break; + } } - + db_finalize(&q); + + blob_reset(&extra); + /* If any repositories whose names appear in the ~/.fossil file could not ** be found, remove those names from the ~/.fossil file. */ - if( nMissing ){ - db_begin_transaction(); - db_reset(&q); - while( db_step(&q)==SQLITE_ROW ){ - const char *zFilename = db_column_text(&q, 0); - if( access(zFilename, 0) ){ - char *zRepo = mprintf("repo:%s", zFilename); - db_unset(zRepo, 1); - free(zRepo); - }else if( !file_is_canonical(zFilename) ){ - Blob cname; - char *zRepo = mprintf("repo:%s", zFilename); - db_unset(zRepo, 1); - free(zRepo); - file_canonical_name(zFilename, &cname); - zRepo = mprintf("repo:%s", blob_str(&cname)); - db_set(zRepo, "1", 1); - free(zRepo); - } - } - db_reset(&q); - db_end_transaction(0); - } - db_finalize(&q); + if( nToDel>0 ){ + const char *zSql = "DELETE FROM global_config WHERE name IN toDel"; + if( dryRunFlag ){ + fossil_print("%s\n", zSql); + }else{ + db_multi_exec("%s", zSql /*safe-for-%s*/ ); + } + } } Index: src/attach.c ================================================================== --- src/attach.c +++ src/attach.c @@ -21,136 +21,172 @@ #include "attach.h" #include <assert.h> /* ** WEBPAGE: attachlist +** List attachments. ** ** tkt=TICKETUUID ** page=WIKIPAGE ** -** List attachments. -** Either one of tkt= or page= are supplied or neither. If neither -** are given, all attachments are listed. If one is given, only -** attachments for the designated ticket or wiki page are shown. -** TICKETUUID must be complete +** At most one of technote=, tkt= or page= are supplied. +** If none is given, all attachments are listed. If one is given, +** only attachments for the designated technote, ticket or wiki page +** are shown. TECHNOTEUUID and TICKETUUID may be just a prefix of the +** relevant tech note or ticket, in which case all attachments of all +** tech notes or tickets with the prefix will be listed. 
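
The listing query below distinguishes the three kinds of attachment targets by probing the tag table: a target is treated as a ticket when 'tkt-'||target names a tag, as a tech note when 'event-'||target does, and as a wiki page otherwise. The same test can be written on its own as in this sketch; the helper name is hypothetical and db stands for an open repository database handle.

    #include <sqlite3.h>

    /* Sketch: classify an attachment target the same way the attachlist
    ** query does.  Returns 1 for a ticket, 2 for a tech note, and 0 for
    ** a wiki page (the fallback). */
    static int classify_attach_target(sqlite3 *db, const char *zTarget){
      int type = 0;
      sqlite3_stmt *pStmt;
      if( sqlite3_prepare_v2(db,
            "SELECT CASE"
            " WHEN 'tkt-'||?1 IN (SELECT tagname FROM tag) THEN 1"
            " WHEN 'event-'||?1 IN (SELECT tagname FROM tag) THEN 2"
            " ELSE 0 END", -1, &pStmt, 0)==SQLITE_OK ){
        sqlite3_bind_text(pStmt, 1, zTarget, -1, SQLITE_TRANSIENT);
        if( sqlite3_step(pStmt)==SQLITE_ROW ){
          type = sqlite3_column_int(pStmt, 0);
        }
        sqlite3_finalize(pStmt);
      }
      return type;
    }
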
*/ void attachlist_page(void){ const char *zPage = P("page"); const char *zTkt = P("tkt"); + const char *zTechNote = P("technote"); Blob sql; Stmt q; if( zPage && zTkt ) zTkt = 0; login_check_credentials(); blob_zero(&sql); - blob_append(&sql, - "SELECT datetime(mtime,'localtime'), src, target, filename, comment, user" - " FROM attachment", - -1 + blob_append_sql(&sql, + "SELECT datetime(mtime,toLocal()), src, target, filename," + " comment, user," + " (SELECT uuid FROM blob WHERE rid=attachid), attachid," + " (CASE WHEN 'tkt-'||target IN (SELECT tagname FROM tag)" + " THEN 1" + " WHEN 'event-'||target IN (SELECT tagname FROM tag)" + " THEN 2" + " ELSE 0 END)" + " FROM attachment" ); if( zPage ){ - if( g.okRdWiki==0 ) login_needed(); + if( g.perm.RdWiki==0 ){ login_needed(g.anon.RdWiki); return; } style_header("Attachments To %h", zPage); - blob_appendf(&sql, " WHERE target=%Q", zPage); + blob_append_sql(&sql, " WHERE target=%Q", zPage); }else if( zTkt ){ - if( g.okRdTkt==0 ) login_needed(); - style_header("Attachments To Ticket %.10s", zTkt); - blob_appendf(&sql, " WHERE target GLOB '%q*'", zTkt); + if( g.perm.RdTkt==0 ){ login_needed(g.anon.RdTkt); return; } + style_header("Attachments To Ticket %S", zTkt); + blob_append_sql(&sql, " WHERE target GLOB '%q*'", zTkt); + }else if( zTechNote ){ + if( g.perm.RdWiki==0 ){ login_needed(g.anon.RdWiki); return; } + style_header("Attachments to Tech Note %S", zTechNote); + blob_append_sql(&sql, " WHERE target GLOB '%q*'", + zTechNote); }else{ - if( g.okRdTkt==0 && g.okRdWiki==0 ) login_needed(); + if( g.perm.RdTkt==0 && g.perm.RdWiki==0 ){ + login_needed(g.anon.RdTkt || g.anon.RdWiki); + return; + } style_header("All Attachments"); } - blob_appendf(&sql, " ORDER BY mtime DESC"); - db_prepare(&q, "%s", blob_str(&sql)); + blob_append_sql(&sql, " ORDER BY mtime DESC"); + db_prepare(&q, "%s", blob_sql_text(&sql)); + @ <ol> while( db_step(&q)==SQLITE_ROW ){ const char *zDate = db_column_text(&q, 0); const char *zSrc = db_column_text(&q, 1); const char *zTarget = db_column_text(&q, 2); const char *zFilename = db_column_text(&q, 3); const char *zComment = db_column_text(&q, 4); const char *zUser = db_column_text(&q, 5); + const char *zUuid = db_column_text(&q, 6); + int attachid = db_column_int(&q, 7); + // type 0 is a wiki page, 1 is a ticket, 2 is a tech note + int type = db_column_int(&q, 8); + const char *zDispUser = zUser && zUser[0] ? 
zUser : "anonymous"; int i; char *zUrlTail; for(i=0; zFilename[i]; i++){ - if( zFilename[i]=='/' && zFilename[i+1]!=0 ){ + if( zFilename[i]=='/' && zFilename[i+1]!=0 ){ zFilename = &zFilename[i+1]; i = -1; } } - if( strlen(zTarget)==UUID_SIZE && validate16(zTarget,UUID_SIZE) ){ + if( type==1 ){ zUrlTail = mprintf("tkt=%s&file=%t", zTarget, zFilename); + }else if( type==2 ){ + zUrlTail = mprintf("technote=%s&file=%t", zTarget, zFilename); }else{ zUrlTail = mprintf("page=%t&file=%t", zTarget, zFilename); } - @ - @ <p><a href="/attachview?%s(zUrlTail)">%h(zFilename)</a> - @ [<a href="/attachdownload/%t(zFilename)?%s(zUrlTail)">download</a>]<br> - if( zComment ) while( isspace(zComment[0]) ) zComment++; + @ <li><p> + @ Attachment %z(href("%R/ainfo/%!S",zUuid))%S(zUuid)</a> + if( moderation_pending(attachid) ){ + @ <span class="modpending">*** Awaiting Moderator Approval ***</span> + } + @ <br><a href="%R/attachview?%s(zUrlTail)">%h(zFilename)</a> + @ [<a href="%R/attachdownload/%t(zFilename)?%s(zUrlTail)">download</a>]<br /> + if( zComment ) while( fossil_isspace(zComment[0]) ) zComment++; if( zComment && zComment[0] ){ - @ %w(zComment)<br> + @ %!W(zComment)<br /> } - if( zPage==0 && zTkt==0 ){ + if( zPage==0 && zTkt==0 && zTechNote==0 ){ if( zSrc==0 || zSrc[0]==0 ){ zSrc = "Deleted from"; }else { zSrc = "Added to"; } - if( strlen(zTarget)==UUID_SIZE && validate16(zTarget, UUID_SIZE) ){ - char zShort[20]; - memcpy(zShort, zTarget, 10); - zShort[10] = 0; - @ %s(zSrc) ticket <a href="%s(g.zTop)/tktview?name=%s(zTarget)"> - @ %s(zShort)</a> + if( type==1 ){ + @ %s(zSrc) ticket <a href="%R/tktview?name=%s(zTarget)"> + @ %S(zTarget)</a> + }else if( type==2 ){ + @ %s(zSrc) tech note <a href="%R/technote/%s(zTarget)"> + @ %S(zTarget)</a> }else{ - @ %s(zSrc) wiki page <a href="%s(g.zTop)/wiki?name=%t(zTarget)"> + @ %s(zSrc) wiki page <a href="%R/wiki?name=%t(zTarget)"> @ %h(zTarget)</a> } }else{ if( zSrc==0 || zSrc[0]==0 ){ @ Deleted }else { @ Added } } - @ by %h(zUser) on + @ by %h(zDispUser) on hyperlink_to_date(zDate, "."); free(zUrlTail); } db_finalize(&q); + @ </ol> style_footer(); return; } /* ** WEBPAGE: attachdownload ** WEBPAGE: attachimage ** WEBPAGE: attachview ** +** Download or display an attachment. +** Query parameters: +** ** tkt=TICKETUUID ** page=WIKIPAGE +** technote=TECHNOTEUUID ** file=FILENAME ** attachid=ID ** -** List attachments. 
*/ void attachview_page(void){ const char *zPage = P("page"); const char *zTkt = P("tkt"); + const char *zTechNote = P("technote"); const char *zFile = P("file"); const char *zTarget = 0; int attachid = atoi(PD("attachid","0")); char *zUUID; - if( zPage && zTkt ) zTkt = 0; if( zFile==0 ) fossil_redirect_home(); login_check_credentials(); if( zPage ){ - if( g.okRdWiki==0 ) login_needed(); + if( g.perm.RdWiki==0 ){ login_needed(g.anon.RdWiki); return; } zTarget = zPage; }else if( zTkt ){ - if( g.okRdTkt==0 ) login_needed(); + if( g.perm.RdTkt==0 ){ login_needed(g.anon.RdTkt); return; } zTarget = zTkt; + }else if( zTechNote ){ + if( g.perm.RdWiki==0 ){ login_needed(g.anon.RdWiki); return; } + zTarget = zTechNote; }else{ fossil_redirect_home(); } if( attachid>0 ){ zUUID = db_text(0, @@ -174,197 +210,463 @@ }else if( zUUID[0]=='x' ){ style_header("Missing"); @ Attachment has been deleted style_footer(); return; + }else{ + g.perm.Read = 1; + cgi_replace_parameter("name",zUUID); + if( fossil_strcmp(g.zPath,"attachview")==0 ){ + artifact_page(); + }else{ + cgi_replace_parameter("m", mimetype_from_name(zFile)); + rawartifact_page(); + } } - g.okRead = 1; - cgi_replace_parameter("name",zUUID); - if( strcmp(g.zPath,"attachview")==0 ){ - artifact_page(); +} + +/* +** Save an attachment control artifact into the repository +*/ +static void attach_put( + Blob *pAttach, /* Text of the Attachment record */ + int attachRid, /* RID for the file that is being attached */ + int needMod /* True if the attachment is subject to moderation */ +){ + int rid; + if( needMod ){ + rid = content_put_ex(pAttach, 0, 0, 0, 1); + moderation_table_create(); + db_multi_exec( + "INSERT INTO modreq(objid,attachRid) VALUES(%d,%d);", + rid, attachRid + ); }else{ - cgi_replace_parameter("m", mimetype_from_name(zFile)); - rawartifact_page(); + rid = content_put(pAttach); + db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d);", rid); + db_multi_exec("INSERT OR IGNORE INTO unclustered VALUES(%d);", rid); } + manifest_crosslink(rid, pAttach, MC_NONE); } /* ** WEBPAGE: attachadd +** Add a new attachment. ** ** tkt=TICKETUUID ** page=WIKIPAGE +** technote=TECHNOTEUUID ** from=URL ** -** Add a new attachment. 
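
The handler below records an attachment by writing a small control artifact: an A card naming the file, the target page, ticket, or tech note, and the hash of the attached content, followed by C (comment), D (date), U (user), and Z (checksum) cards in the order the blob_appendf() calls emit them; a two-field A card with no hash, as in the deletion path further down, marks the attachment as removed. The sketch that follows shows the layout only; the values are placeholders, field escaping is glossed over, and the MD5 step is elided.

    #include <stdio.h>

    /* Illustration only: the shape of the attachment control artifact
    ** assembled by attachadd_page().  Real artifacts escape the fields
    ** and end with a genuine MD5 checksum in the Z card. */
    static void show_attachment_manifest(void){
      printf("A %s %s %s\n", "diagram.png", "WikiPageName",
             "0123456789abcdef0123456789abcdef01234567");
      printf("C %s\n", "example-comment");
      printf("D %s\n", "2015-01-01T12:00:00");
      printf("U %s\n", "drh");
      printf("Z %s\n", "(md5 of the lines above)");
    }
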
*/ void attachadd_page(void){ const char *zPage = P("page"); const char *zTkt = P("tkt"); + const char *zTechNote = P("technote"); const char *zFrom = P("from"); const char *aContent = P("f"); const char *zName = PD("f:filename","unknown"); const char *zTarget; - const char *zTargetType; + char *zTargetType; int szContent = atoi(PD("f:bytes","0")); + int goodCaptcha = 1; if( P("cancel") ) cgi_redirect(zFrom); - if( zPage && zTkt ) fossil_redirect_home(); - if( zPage==0 && zTkt==0 ) fossil_redirect_home(); + if( (zPage && zTkt) + || (zPage && zTechNote) + || (zTkt && zTechNote) + ){ + fossil_redirect_home(); + } + if( zPage==0 && zTkt==0 && zTechNote==0) fossil_redirect_home(); login_check_credentials(); if( zPage ){ - if( g.okApndWiki==0 || g.okAttach==0 ) login_needed(); + if( g.perm.ApndWiki==0 || g.perm.Attach==0 ){ + login_needed(g.anon.ApndWiki && g.anon.Attach); + return; + } if( !db_exists("SELECT 1 FROM tag WHERE tagname='wiki-%q'", zPage) ){ fossil_redirect_home(); } zTarget = zPage; - zTargetType = mprintf("Wiki Page <a href=\"%s/wiki?name=%h\">%h</a>", - g.zTop, zPage, zPage); + zTargetType = mprintf("Wiki Page <a href=\"%R/wiki?name=%h\">%h</a>", + zPage, zPage); + }else if ( zTechNote ){ + if( g.perm.Write==0 || g.perm.ApndWiki==0 || g.perm.Attach==0 ){ + login_needed(g.anon.Write && g.anon.ApndWiki && g.anon.Attach); + return; + } + if( !db_exists("SELECT 1 FROM tag WHERE tagname='event-%q'", zTechNote) ){ + zTechNote = db_text(0, "SELECT substr(tagname,7) FROM tag" + " WHERE tagname GLOB 'event-%q*'", zTechNote); + if( zTechNote==0) fossil_redirect_home(); + } + zTarget = zTechNote; + zTargetType = mprintf("Tech Note <a href=\"%R/technote/%h\">%h</a>", + zTechNote, zTechNote); + }else{ - if( g.okApndTkt==0 || g.okAttach==0 ) login_needed(); + if( g.perm.ApndTkt==0 || g.perm.Attach==0 ){ + login_needed(g.anon.ApndTkt && g.anon.Attach); + return; + } if( !db_exists("SELECT 1 FROM tag WHERE tagname='tkt-%q'", zTkt) ){ - fossil_redirect_home(); + zTkt = db_text(0, "SELECT substr(tagname,5) FROM tag" + " WHERE tagname GLOB 'tkt-%q*'", zTkt); + if( zTkt==0 ) fossil_redirect_home(); } zTarget = zTkt; - zTargetType = mprintf("Ticket <a href=\"%s/tktview?name=%.10s\">%.10s</a>", - g.zTop, zTkt, zTkt); + zTargetType = mprintf("Ticket <a href=\"%R/tktview/%s\">%S</a>", + zTkt, zTkt); } if( zFrom==0 ) zFrom = mprintf("%s/home", g.zTop); if( P("cancel") ){ cgi_redirect(zFrom); } - if( P("ok") && szContent>0 ){ + if( P("ok") && szContent>0 && (goodCaptcha = captcha_is_correct()) ){ Blob content; Blob manifest; Blob cksum; char *zUUID; const char *zComment; char *zDate; int rid; int i, n; + int addCompress = 0; + Manifest *pManifest; + int needModerator; db_begin_transaction(); blob_init(&content, aContent, szContent); - rid = content_put(&content, 0, 0); + pManifest = manifest_parse(&content, 0, 0); + manifest_destroy(pManifest); + blob_init(&content, aContent, szContent); + if( pManifest ){ + blob_compress(&content, &content); + addCompress = 1; + } + needModerator = + (zTkt!=0 && ticket_need_moderation(0)) || + (zPage!=0 && wiki_need_moderation(0)); + rid = content_put_ex(&content, 0, 0, 0, needModerator); zUUID = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); blob_zero(&manifest); for(i=n=0; zName[i]; i++){ if( zName[i]=='/' || zName[i]=='\\' ) n = i; } zName += n; if( zName[0]==0 ) zName = "unknown"; - blob_appendf(&manifest, "A %F %F %s\n", zName, zTarget, zUUID); + blob_appendf(&manifest, "A %F%s %F %s\n", + zName, addCompress ? 
".gz" : "", zTarget, zUUID); zComment = PD("comment", ""); - while( isspace(zComment[0]) ) zComment++; + while( fossil_isspace(zComment[0]) ) zComment++; n = strlen(zComment); - while( n>0 && isspace(zComment[n-1]) ){ n--; } + while( n>0 && fossil_isspace(zComment[n-1]) ){ n--; } if( n>0 ){ - blob_appendf(&manifest, "C %F\n", zComment); + blob_appendf(&manifest, "C %#F\n", n, zComment); } - zDate = db_text(0, "SELECT datetime('now')"); - zDate[10] = 'T'; + zDate = date_in_standard_format("now"); blob_appendf(&manifest, "D %s\n", zDate); - blob_appendf(&manifest, "U %F\n", g.zLogin ? g.zLogin : "nobody"); + blob_appendf(&manifest, "U %F\n", login_name()); md5sum_blob(&manifest, &cksum); blob_appendf(&manifest, "Z %b\n", &cksum); - rid = content_put(&manifest, 0, 0); - manifest_crosslink(rid, &manifest); + attach_put(&manifest, rid, needModerator); + assert( blob_is_reset(&manifest) ); db_end_transaction(0); cgi_redirect(zFrom); } style_header("Add Attachment"); + if( !goodCaptcha ){ + @ <p class="generalError">Error: Incorrect security code.</p> + } @ <h2>Add Attachment To %s(zTargetType)</h2> - @ <form action="%s(g.zBaseURL)/attachadd" method="POST" - @ enctype="multipart/form-data"> + form_begin("enctype='multipart/form-data'", "%R/attachadd"); + @ <div> @ File to Attach: - @ <input type="file" name="f" size="60"><br> - @ Description:<br> - @ <textarea name="comment" cols=80 rows=5 wrap="virtual"></textarea><br> + @ <input type="file" name="f" size="60" /><br /> + @ Description:<br /> + @ <textarea name="comment" cols="80" rows="5" wrap="virtual"></textarea><br /> if( zTkt ){ - @ <input type="hidden" name="tkt" value="%h(zTkt)"> + @ <input type="hidden" name="tkt" value="%h(zTkt)" /> + }else if( zTechNote ){ + @ <input type="hidden" name="technote" value="%h(zTechNote)" /> }else{ - @ <input type="hidden" name="page" value="%h(zPage)"> + @ <input type="hidden" name="page" value="%h(zPage)" /> } - @ <input type="hidden" name="from" value="%h(zFrom)"> - @ <input type="submit" name="ok" value="Add Attachment"> - @ <input type="submit" name="cancel" value="Cancel"> + @ <input type="hidden" name="from" value="%h(zFrom)" /> + @ <input type="submit" name="ok" value="Add Attachment" /> + @ <input type="submit" name="cancel" value="Cancel" /> + @ </div> + captcha_generate(0); @ </form> style_footer(); + fossil_free(zTargetType); } - /* -** WEBPAGE: attachdelete -** -** tkt=TICKETUUID -** page=WIKIPAGE -** file=FILENAME -** -** "Delete" an attachment. Because objects in Fossil are immutable -** the attachment isn't really deleted. Instead, we change the content -** of the attachment to NULL, which the system understands as being -** deleted. Historical values of the attachment are preserved. +** WEBPAGE: ainfo +** URL: /ainfo?name=ARTIFACTID +** +** Show the details of an attachment artifact. 
*/ -void attachdel_page(void){ - const char *zPage = P("page"); - const char *zTkt = P("tkt"); - const char *zFile = P("file"); - const char *zFrom = P("from"); - const char *zTarget; - - if( zPage && zTkt ) fossil_redirect_home(); - if( zPage==0 && zTkt==0 ) fossil_redirect_home(); - if( zFile==0 ) fossil_redirect_home(); +void ainfo_page(void){ + int rid; /* RID for the control artifact */ + int ridSrc; /* RID for the attached file */ + char *zDate; /* Date attached */ + const char *zUuid; /* UUID of the control artifact */ + Manifest *pAttach; /* Parse of the control artifact */ + const char *zTarget; /* Wiki, ticket or tech note attached to */ + const char *zSrc; /* UUID of the attached file */ + const char *zName; /* Name of the attached file */ + const char *zDesc; /* Description of the attached file */ + const char *zWikiName = 0; /* Wiki page name when attached to Wiki */ + const char *zTNUuid = 0; /* Tech Note ID when attached to tech note */ + const char *zTktUuid = 0; /* Ticket ID when attached to a ticket */ + int modPending; /* True if awaiting moderation */ + const char *zModAction; /* Moderation action or NULL */ + int isModerator; /* TRUE if user is the moderator */ + const char *zMime; /* MIME Type */ + Blob attach; /* Content of the attachment */ + int fShowContent = 0; + const char *zLn = P("ln"); + login_check_credentials(); - if( zPage ){ - if( g.okWrWiki==0 || g.okAttach==0 ) login_needed(); - zTarget = zPage; - }else{ - if( g.okWrTkt==0 || g.okAttach==0 ) login_needed(); - zTarget = zTkt; - } - if( zFrom==0 ) zFrom = mprintf("%s/home", g.zTop); - if( P("cancel") ){ - cgi_redirect(zFrom); - } - if( P("confirm") ){ + if( !g.perm.RdTkt && !g.perm.RdWiki ){ + login_needed(g.anon.RdTkt || g.anon.RdWiki); + return; + } + rid = name_to_rid_www("name"); + if( rid==0 ){ fossil_redirect_home(); } + zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); +#if 0 + /* Shunning here needs to get both the attachment control artifact and + ** the object that is attached. */ + if( g.perm.Admin ){ + if( db_exists("SELECT 1 FROM shun WHERE uuid='%q'", zUuid) ){ + style_submenu_element("Unshun","Unshun", "%s/shun?uuid=%s&sub=1", + g.zTop, zUuid); + }else{ + style_submenu_element("Shun","Shun", "%s/shun?shun=%s#addshun", + g.zTop, zUuid); + } + } +#endif + pAttach = manifest_get(rid, CFTYPE_ATTACHMENT, 0); + if( pAttach==0 ) fossil_redirect_home(); + zTarget = pAttach->zAttachTarget; + zSrc = pAttach->zAttachSrc; + ridSrc = db_int(0,"SELECT rid FROM blob WHERE uuid='%q'", zSrc); + zName = pAttach->zAttachName; + zDesc = pAttach->zComment; + zMime = mimetype_from_name(zName); + fShowContent = zMime ? 
strncmp(zMime,"text/", 5)==0 : 0; + if( validate16(zTarget, strlen(zTarget)) + && db_exists("SELECT 1 FROM ticket WHERE tkt_uuid='%q'", zTarget) + ){ + zTktUuid = zTarget; + if( !g.perm.RdTkt ){ login_needed(g.anon.RdTkt); return; } + if( g.perm.WrTkt ){ + style_submenu_element("Delete","Delete","%R/ainfo/%s?del", zUuid); + } + }else if( db_exists("SELECT 1 FROM tag WHERE tagname='wiki-%q'",zTarget) ){ + zWikiName = zTarget; + if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } + if( g.perm.WrWiki ){ + style_submenu_element("Delete","Delete","%R/ainfo/%s?del", zUuid); + } + }else if( db_exists("SELECT 1 FROM tag WHERE tagname='event-%q'",zTarget) ){ + zTNUuid = zTarget; + if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } + if( g.perm.Write && g.perm.WrWiki ){ + style_submenu_element("Delete","Delete","%R/ainfo/%s?del", zUuid); + } + } + zDate = db_text(0, "SELECT datetime(%.12f)", pAttach->rDate); + + if( P("confirm") + && ((zTktUuid && g.perm.WrTkt) || + (zWikiName && g.perm.WrWiki) || + (zTNUuid && g.perm.Write && g.perm.WrWiki)) + ){ int i, n, rid; char *zDate; Blob manifest; Blob cksum; + const char *zFile = zName; db_begin_transaction(); blob_zero(&manifest); for(i=n=0; zFile[i]; i++){ if( zFile[i]=='/' || zFile[i]=='\\' ) n = i; } zFile += n; if( zFile[0]==0 ) zFile = "unknown"; blob_appendf(&manifest, "A %F %F\n", zFile, zTarget); - zDate = db_text(0, "SELECT datetime('now')"); - zDate[10] = 'T'; + zDate = date_in_standard_format("now"); blob_appendf(&manifest, "D %s\n", zDate); - blob_appendf(&manifest, "U %F\n", g.zLogin ? g.zLogin : "nobody"); + blob_appendf(&manifest, "U %F\n", login_name()); md5sum_blob(&manifest, &cksum); blob_appendf(&manifest, "Z %b\n", &cksum); - rid = content_put(&manifest, 0, 0); - manifest_crosslink(rid, &manifest); + rid = content_put(&manifest); + manifest_crosslink(rid, &manifest, MC_NONE); db_end_transaction(0); - cgi_redirect(zFrom); - } - style_header("Delete Attachment"); - @ <form action="%s(g.zBaseURL)/attachdelete" method="POST"> - @ <p>Confirm that you want to delete the attachment named - @ "%h(zFile)" on %s(zTkt?"ticket":"wiki page") %h(zTarget):<br> - if( zTkt ){ - @ <input type="hidden" name="tkt" value="%h(zTkt)"> + @ <p>The attachment below has been deleted.</p> + } + + if( P("del") + && ((zTktUuid && g.perm.WrTkt) || + (zWikiName && g.perm.WrWiki) || + (zTNUuid && g.perm.Write && g.perm.WrWiki)) + ){ + form_begin(0, "%R/ainfo/%!S", zUuid); + @ <p>Confirm you want to delete the attachment shown below. + @ <input type="submit" name="confirm" value="Confirm"> + @ </form> + } + + isModerator = g.perm.Admin || + (zTktUuid && g.perm.ModTkt) || + (zWikiName && g.perm.ModWiki); + if( isModerator && (zModAction = P("modaction"))!=0 ){ + if( strcmp(zModAction,"delete")==0 ){ + moderation_disapprove(rid); + if( zTktUuid ){ + cgi_redirectf("%R/tktview/%!S", zTktUuid); + }else{ + cgi_redirectf("%R/wiki?name=%t", zWikiName); + } + return; + } + if( strcmp(zModAction,"approve")==0 ){ + moderation_approve(rid); + } + } + style_header("Attachment Details"); + style_submenu_element("Raw", "Raw", "%R/artifact/%s", zUuid); + if(fShowContent){ + style_submenu_element("Line Numbers", "Line Numbers", + "%R/ainfo/%s%s",zUuid, + ((zLn&&*zLn) ? 
"" : "?ln=0")); + } + + @ <div class="section">Overview</div> + @ <p><table class="label-value"> + @ <tr><th>Artifact ID:</th> + @ <td>%z(href("%R/artifact/%!S",zUuid))%s(zUuid)</a> + if( g.perm.Setup ){ + @ (%d(rid)) + } + modPending = moderation_pending(rid); + if( modPending ){ + @ <span class="modpending">*** Awaiting Moderator Approval ***</span> + } + if( zTktUuid ){ + @ <tr><th>Ticket:</th> + @ <td>%z(href("%R/tktview/%s",zTktUuid))%s(zTktUuid)</a></td></tr> + } + if( zTNUuid ){ + @ <tr><th>Tech Note:</th> + @ <td>%z(href("%R/technote/%s",zTNUuid))%s(zTNUuid)</a></td></tr> + } + if( zWikiName ){ + @ <tr><th>Wiki Page:</th> + @ <td>%z(href("%R/wiki?name=%t",zWikiName))%h(zWikiName)</a></td></tr> + } + @ <tr><th>Date:</th><td> + hyperlink_to_date(zDate, "</td></tr>"); + @ <tr><th>User:</th><td> + hyperlink_to_user(pAttach->zUser, zDate, "</td></tr>"); + @ <tr><th>Artifact Attached:</th> + @ <td>%z(href("%R/artifact/%s",zSrc))%s(zSrc)</a> + if( g.perm.Setup ){ + @ (%d(ridSrc)) + } + @ <tr><th>Filename:</th><td>%h(zName)</td></tr> + if( g.perm.Setup ){ + @ <tr><th>MIME-Type:</th><td>%h(zMime)</td></tr> + } + @ <tr><th valign="top">Description:</th><td valign="top">%h(zDesc)</td></tr> + @ </table> + + if( isModerator && modPending ){ + @ <div class="section">Moderation</div> + @ <blockquote> + form_begin(0, "%R/ainfo/%s", zUuid); + @ <label><input type="radio" name="modaction" value="delete"> + @ Delete this change</label><br /> + @ <label><input type="radio" name="modaction" value="approve"> + @ Approve this change</label><br /> + @ <input type="submit" value="Submit"> + @ </form> + @ </blockquote> + } + + @ <div class="section">Content Appended</div> + @ <blockquote> + blob_zero(&attach); + if( fShowContent ){ + const char *z; + content_get(ridSrc, &attach); + blob_to_utf8_no_bom(&attach, 0); + z = blob_str(&attach); + if( zLn ){ + output_text_with_line_numbers(z, zLn); + }else{ + @ <pre> + @ %h(z) + @ </pre> + } + }else if( strncmp(zMime, "image/", 6)==0 ){ + @ <img src="%R/raw/%s(zSrc)?m=%s(zMime)"></img> + style_submenu_element("Image", "Image", "%R/raw/%s?m=%s", zSrc, zMime); }else{ - @ <input type="hidden" name="page" value="%h(zPage)"> + int sz = db_int(0, "SELECT size FROM blob WHERE rid=%d", ridSrc); + @ <i>(file is %d(sz) bytes of binary data)</i> } - @ <input type="hidden" name="file" value="%h(zFile)"> - @ <input type="hidden" name="from" value="%h(zFrom)"> - @ <input type="submit" name="confirm" value="Delete"> - @ <input type="submit" name="cancel" value="Cancel"> - @ </form> + @ </blockquote> + manifest_destroy(pAttach); + blob_reset(&attach); style_footer(); +} + +/* +** Output HTML to show a list of attachments. +*/ +void attachment_list( + const char *zTarget, /* Object that things are attached to */ + const char *zHeader /* Header to display with attachments */ +){ + int cnt = 0; + Stmt q; + db_prepare(&q, + "SELECT datetime(mtime,toLocal()), filename, user," + " (SELECT uuid FROM blob WHERE rid=attachid), src" + " FROM attachment" + " WHERE isLatest AND src!='' AND target=%Q" + " ORDER BY mtime DESC", + zTarget + ); + while( db_step(&q)==SQLITE_ROW ){ + const char *zDate = db_column_text(&q, 0); + const char *zFile = db_column_text(&q, 1); + const char *zUser = db_column_text(&q, 2); + const char *zUuid = db_column_text(&q, 3); + const char *zSrc = db_column_text(&q, 4); + const char *zDispUser = zUser && zUser[0] ? 
zUser : "anonymous"; + if( cnt==0 ){ + @ %s(zHeader) + } + cnt++; + @ <li> + @ %z(href("%R/artifact/%!S",zSrc))%h(zFile)</a> + @ added by %h(zDispUser) on + hyperlink_to_date(zDate, "."); + @ [%z(href("%R/ainfo/%!S",zUuid))details</a>] + @ </li> + } + if( cnt ){ + @ </ul> + } + db_finalize(&q); } Index: src/bag.c ================================================================== --- src/bag.c +++ src/bag.c @@ -75,11 +75,11 @@ /* ** Change the size of the hash table on a bag so that ** it contains N slots ** ** Completely reconstruct the hash table from scratch. Deleted -** entries (indicated by a -1) are removed. When finished, it +** entries (indicated by a -1) are removed. When finished, it ** should be the case that p->cnt==p->used. */ static void bag_resize(Bag *p, int newSize){ int i; Bag old; @@ -86,11 +86,11 @@ int nDel = 0; /* Number of deleted entries */ int nLive = 0; /* Number of live entries */ old = *p; assert( newSize>old.cnt ); - p->a = malloc( sizeof(p->a[0])*newSize ); + p->a = fossil_malloc( sizeof(p->a[0])*newSize ); p->sz = newSize; memset(p->a, 0, sizeof(p->a[0])*newSize ); for(i=0; i<old.sz; i++){ int e = old.a[i]; if( e>0 ){ ADDED src/bisect.c Index: src/bisect.c ================================================================== --- src/bisect.c +++ src/bisect.c @@ -0,0 +1,462 @@ +/* +** Copyright (c) 2010 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@sqlite.org +** +******************************************************************************* +** +** This file contains code used to implement the "bisect" command. +** +** This file also contains logic used to compute the closure of filename +** changes that have occurred across multiple check-ins. +*/ +#include "config.h" +#include "bisect.h" +#include <assert.h> + +/* +** Local variables for this module +*/ +static struct { + int bad; /* The bad version */ + int good; /* The good version */ +} bisect; + +/* +** Find the shortest path between bad and good. +*/ +void bisect_path(void){ + PathNode *p; + bisect.bad = db_lget_int("bisect-bad", 0); + bisect.good = db_lget_int("bisect-good", 0); + if( bisect.good>0 && bisect.bad==0 ){ + path_shortest(bisect.good, bisect.good, 0, 0); + }else if( bisect.bad>0 && bisect.good==0 ){ + path_shortest(bisect.bad, bisect.bad, 0, 0); + }else if( bisect.bad==0 && bisect.good==0 ){ + fossil_fatal("neither \"good\" nor \"bad\" versions have been identified"); + }else{ + p = path_shortest(bisect.good, bisect.bad, bisect_option("direct-only"), 0); + if( p==0 ){ + char *zBad = db_text(0,"SELECT uuid FROM blob WHERE rid=%d",bisect.bad); + char *zGood = db_text(0,"SELECT uuid FROM blob WHERE rid=%d",bisect.good); + fossil_fatal("no path from good ([%S]) to bad ([%S]) or back", + zGood, zBad); + } + } +} + +/* +** The set of all bisect options. 
+*/ +static const struct { + const char *zName; + const char *zDefault; + const char *zDesc; +} aBisectOption[] = { + { "auto-next", "on", "Automatically run \"bisect next\" after each " + "\"bisect good\" or \"bisect bad\"" }, + { "direct-only", "on", "Follow only primary parent-child links, not " + "merges\n" }, + { "display", "chart", "Command to run after \"next\". \"chart\", " + "\"log\", \"status\", or \"none\"" }, +}; + +/* +** Return the value of a boolean bisect option. +*/ +int bisect_option(const char *zName){ + unsigned int i; + int r = -1; + for(i=0; i<sizeof(aBisectOption)/sizeof(aBisectOption[0]); i++){ + if( fossil_strcmp(zName, aBisectOption[i].zName)==0 ){ + char *zLabel = mprintf("bisect-%s", zName); + char *z = db_lget(zLabel, (char*)aBisectOption[i].zDefault); + if( is_truth(z) ) r = 1; + if( is_false(z) ) r = 0; + if( r<0 ) r = is_truth(aBisectOption[i].zDefault); + free(zLabel); + break; + } + } + assert( r>=0 ); + return r; +} + +/* +** List a bisect path. +*/ +static void bisect_list(int abbreviated){ + PathNode *p; + int vid = db_lget_int("checkout", 0); + int n; + Stmt s; + int nStep; + int nHidden = 0; + bisect_path(); + db_prepare(&s, "SELECT blob.uuid, datetime(event.mtime) " + " FROM blob, event" + " WHERE blob.rid=:rid AND event.objid=:rid" + " AND event.type='ci'"); + nStep = path_length(); + if( abbreviated ){ + for(p=path_last(); p; p=p->pFrom) p->isHidden = 1; + for(p=path_last(), n=0; p; p=p->pFrom, n++){ + if( p->rid==bisect.good + || p->rid==bisect.bad + || p->rid==vid + || (nStep>1 && n==nStep/2) + ){ + p->isHidden = 0; + if( p->pFrom ) p->pFrom->isHidden = 0; + } + } + for(p=path_last(); p; p=p->pFrom){ + if( p->pFrom && p->pFrom->isHidden==0 ) p->isHidden = 0; + } + } + for(p=path_last(), n=0; p; p=p->pFrom, n++){ + if( p->isHidden && (nHidden || (p->pFrom && p->pFrom->isHidden)) ){ + nHidden++; + continue; + }else if( nHidden ){ + fossil_print(" ... %d other check-ins omitted\n", nHidden); + nHidden = 0; + } + db_bind_int(&s, ":rid", p->rid); + if( db_step(&s)==SQLITE_ROW ){ + const char *zUuid = db_column_text(&s, 0); + const char *zDate = db_column_text(&s, 1); + fossil_print("%s %S", zDate, zUuid); + if( p->rid==bisect.good ) fossil_print(" GOOD"); + if( p->rid==bisect.bad ) fossil_print(" BAD"); + if( p->rid==vid ) fossil_print(" CURRENT"); + if( nStep>1 && n==nStep/2 ) fossil_print(" NEXT"); + fossil_print("\n"); + } + db_reset(&s); + } + db_finalize(&s); +} + +/* +** Append a new entry to the bisect log. Update the bisect-good or +** bisect-bad values as appropriate. +** +** The bisect-log consists of a list of token. Each token is an +** integer RID of a check-in. The RID is negative for "bad" check-ins +** and positive for "good" check-ins. +*/ +static void bisect_append_log(int rid){ + if( rid<0 ){ + if( db_lget_int("bisect-bad",0)==(-rid) ) return; + db_lset_int("bisect-bad", -rid); + }else{ + if( db_lget_int("bisect-good",0)==rid ) return; + db_lset_int("bisect-good", rid); + } + db_multi_exec( + "REPLACE INTO vvar(name,value) VALUES('bisect-log'," + "COALESCE((SELECT value||' ' FROM vvar WHERE name='bisect-log'),'')" + " || '%d')", rid); +} + +/* +** Create a TEMP table named "bilog" that contains the complete history +** of the current bisect. 
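
The bisect history consulted here lives in the local vvar table as a single space-separated string of signed check-in RIDs: a positive entry records a "good" verdict, a negative entry a "bad" one. The function below loads that string into the temporary bilog table; a minimal stand-alone parser for the same encoding looks roughly like this, with the printf reporting standing in for the real INSERTs.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch: walk a bisect-log string such as "17 -42 25" and report
    ** each step in order.  Positive RIDs were marked GOOD, negative BAD. */
    static void walk_bisect_log(const char *zLog){
      char *zCopy = strdup(zLog);        /* strtok() needs a writable copy */
      char *zTok;
      int seq = 0;
      for(zTok = strtok(zCopy, " "); zTok; zTok = strtok(0, " ")){
        int rid = atoi(zTok);
        printf("%3d %-4s rid=%d\n", ++seq,
               rid>0 ? "GOOD" : "BAD", rid>0 ? rid : -rid);
      }
      free(zCopy);
    }
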
+*/ +void bisect_create_bilog_table(int iCurrent){ + char *zLog = db_lget("bisect-log",""); + Blob log, id; + Stmt q; + int cnt = 0; + blob_init(&log, zLog, -1); + db_multi_exec( + "CREATE TEMP TABLE bilog(" + " seq INTEGER PRIMARY KEY," /* Sequence of events */ + " stat TEXT," /* Type of occurrence */ + " rid INTEGER UNIQUE" /* Check-in number */ + ");" + ); + db_prepare(&q, "INSERT OR IGNORE INTO bilog(seq,stat,rid)" + " VALUES(:seq,:stat,:rid)"); + while( blob_token(&log, &id) ){ + int rid = atoi(blob_str(&id)); + db_bind_int(&q, ":seq", ++cnt); + db_bind_text(&q, ":stat", rid>0 ? "GOOD" : "BAD"); + db_bind_int(&q, ":rid", rid>=0 ? rid : -rid); + db_step(&q); + db_reset(&q); + } + if( iCurrent>0 ){ + db_bind_int(&q, ":seq", ++cnt); + db_bind_text(&q, ":stat", "CURRENT"); + db_bind_int(&q, ":rid", iCurrent); + db_step(&q); + } + db_finalize(&q); +} + +/* +** Show a chart of bisect "good" and "bad" versions. The chart can be +** sorted either chronologically by bisect time, or by check-in time. +*/ +static void bisect_chart(int sortByCkinTime){ + Stmt q; + int iCurrent = db_lget_int("checkout",0); + bisect_create_bilog_table(iCurrent); + db_prepare(&q, + "SELECT bilog.seq, bilog.stat," + " substr(blob.uuid,1,16), datetime(event.mtime)," + " blob.rid==%d" + " FROM bilog, blob, event" + " WHERE blob.rid=bilog.rid AND event.objid=bilog.rid" + " AND event.type='ci'" + " ORDER BY %s bilog.rowid ASC", + iCurrent, (sortByCkinTime ? "event.mtime DESC, " : "") + ); + while( db_step(&q)==SQLITE_ROW ){ + const char *zGoodBad = db_column_text(&q, 1); + fossil_print("%3d %-7s %s %s%s\n", + db_column_int(&q, 0), + zGoodBad, + db_column_text(&q, 3), + db_column_text(&q, 2), + (db_column_int(&q, 4) && zGoodBad[0]!='C') ? " CURRENT" : ""); + } + db_finalize(&q); +} + +/* +** COMMAND: bisect +** +** Usage: %fossil bisect SUBCOMMAND ... +** +** Run various subcommands useful for searching for bugs. +** +** fossil bisect bad ?VERSION? +** +** Identify version VERSION as non-working. If VERSION is omitted, +** the current checkout is marked as non-working. +** +** fossil bisect good ?VERSION? +** +** Identify version VERSION as working. If VERSION is omitted, +** the current checkout is marked as working. +** +** fossil bisect log +** fossil bisect chart +** +** Show a log of "good" and "bad" versions. "bisect log" shows the +** events in the order that they were tested. "bisect chart" shows +** them in order of check-in. +** +** fossil bisect next +** +** Update to the next version that is halfway between the working and +** non-working versions. +** +** fossil bisect options ?NAME? ?VALUE? +** +** List all bisect options, or the value of a single option, or set the +** value of a bisect option. +** +** fossil bisect reset +** +** Reinitialize a bisect session. This cancels prior bisect history +** and allows a bisect session to start over from the beginning. +** +** fossil bisect vlist|ls|status ?-a|--all? +** +** List the versions in between "bad" and "good". +** +** fossil bisect ui +** +** Like "fossil ui" except start on a timeline that shows only the +** check-ins that are part of the current bisect. +** +** fossil bisect undo +** +** Undo the most recent "good" or "bad" command. +** +** Summary: +** +** fossil bisect bad ?VERSION? +** fossil bisect good ?VERSION? 
+** fossil bisect log +** fossil bisect chart +** fossil bisect next +** fossil bisect options +** fossil bisect reset +** fossil bisect status +** fossil bisect ui +** fossil bisect undo +*/ +void bisect_cmd(void){ + int n; + const char *zCmd; + int foundCmd = 0; + db_must_be_within_tree(); + if( g.argc<3 ){ + usage("bad|good|log|next|options|reset|status|undo"); + } + zCmd = g.argv[2]; + n = strlen(zCmd); + if( n==0 ) zCmd = "-"; + if( strncmp(zCmd, "bad", n)==0 ){ + int ridBad; + foundCmd = 1; + if( g.argc==3 ){ + ridBad = db_lget_int("checkout",0); + }else{ + ridBad = name_to_typed_rid(g.argv[3], "ci"); + } + if( ridBad>0 ){ + bisect_append_log(-ridBad); + if( bisect_option("auto-next") && db_lget_int("bisect-good",0)>0 ){ + zCmd = "next"; + n = 4; + } + } + }else if( strncmp(zCmd, "good", n)==0 ){ + int ridGood; + foundCmd = 1; + if( g.argc==3 ){ + ridGood = db_lget_int("checkout",0); + }else{ + ridGood = name_to_typed_rid(g.argv[3], "ci"); + } + if( ridGood>0 ){ + bisect_append_log(ridGood); + if( bisect_option("auto-next") && db_lget_int("bisect-bad",0)>0 ){ + zCmd = "next"; + n = 4; + } + } + }else if( strncmp(zCmd, "undo", n)==0 ){ + char *zLog; + Blob log, id; + int ridBad = 0; + int ridGood = 0; + int cnt = 0, i; + foundCmd = 1; + db_begin_transaction(); + zLog = db_lget("bisect-log",""); + blob_init(&log, zLog, -1); + while( blob_token(&log, &id) ){ cnt++; } + if( cnt==0 ){ + fossil_fatal("no previous bisect steps to undo"); + } + blob_rewind(&log); + for(i=0; i<cnt-1; i++){ + int rid; + blob_token(&log, &id); + rid = atoi(blob_str(&id)); + if( rid<0 ) ridBad = -rid; + else ridGood = rid; + } + db_multi_exec( + "UPDATE vvar SET value=substr(value,1,%d) WHERE name='bisect-log'", + log.iCursor-1 + ); + db_lset_int("bisect-bad", ridBad); + db_lset_int("bisect-good", ridGood); + db_end_transaction(0); + if( ridBad && ridGood ){ + zCmd = "next"; + n = 4; + } + } + /* No else here so that the above commands can morph themselves into + ** a "next" command */ + if( strncmp(zCmd, "next", n)==0 ){ + PathNode *pMid; + char *zDisplay = db_lget("bisect-display","chart"); + int m = (int)strlen(zDisplay); + bisect_path(); + pMid = path_midpoint(); + if( pMid==0 ){ + fossil_print("bisect complete\n"); + }else{ + g.argv[1] = "update"; + g.argv[2] = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", pMid->rid); + g.argc = 3; + g.fNoSync = 1; + update_cmd(); + } + + if( strncmp(zDisplay,"chart",m)==0 ){ + bisect_chart(1); + }else if( strncmp(zDisplay, "log", m)==0 ){ + bisect_chart(0); + }else if( strncmp(zDisplay, "status", m)==0 ){ + bisect_list(1); + } + }else if( strncmp(zCmd, "log", n)==0 ){ + bisect_chart(0); + }else if( strncmp(zCmd, "chart", n)==0 ){ + bisect_chart(1); + }else if( strncmp(zCmd, "options", n)==0 ){ + if( g.argc==3 ){ + unsigned int i; + for(i=0; i<sizeof(aBisectOption)/sizeof(aBisectOption[0]); i++){ + char *z = mprintf("bisect-%s", aBisectOption[i].zName); + fossil_print(" %-15s %-6s ", aBisectOption[i].zName, + db_lget(z, (char*)aBisectOption[i].zDefault)); + fossil_free(z); + comment_print(aBisectOption[i].zDesc, 0, 27, -1, g.comFmtFlags); + } + }else if( g.argc==4 || g.argc==5 ){ + unsigned int i; + n = strlen(g.argv[3]); + for(i=0; i<sizeof(aBisectOption)/sizeof(aBisectOption[0]); i++){ + if( strncmp(g.argv[3], aBisectOption[i].zName, n)==0 ){ + char *z = mprintf("bisect-%s", aBisectOption[i].zName); + if( g.argc==5 ){ + db_lset(z, g.argv[4]); + } + fossil_print("%s\n", db_lget(z, (char*)aBisectOption[i].zDefault)); + fossil_free(z); + break; + } + } + if( 
i>=sizeof(aBisectOption)/sizeof(aBisectOption[0]) ){ + fossil_fatal("no such bisect option: %s", g.argv[3]); + } + }else{ + usage("bisect option ?NAME? ?VALUE?"); + } + }else if( strncmp(zCmd, "reset", n)==0 ){ + db_multi_exec( + "DELETE FROM vvar WHERE name IN " + " ('bisect-good', 'bisect-bad', 'bisect-log')" + ); + }else if( strcmp(zCmd, "ui")==0 ){ + char *newArgv[8]; + newArgv[0] = g.argv[0]; + newArgv[1] = "ui"; + newArgv[2] = "--page"; + newArgv[3] = "timeline?bisect"; + newArgv[4] = 0; + g.argv = newArgv; + g.argc = 4; + cmd_webserver(); + }else if( strncmp(zCmd, "vlist", n)==0 + || strncmp(zCmd, "ls", n)==0 + || strncmp(zCmd, "status", n)==0 + ){ + int fAll = find_option("all", "a", 0)!=0; + bisect_list(!fAll); + }else if( !foundCmd ){ + usage("bad|good|log|next|options|reset|status|ui|undo"); + } +} Index: src/blob.c ================================================================== --- src/blob.c +++ src/blob.c @@ -17,12 +17,21 @@ ** ** A Blob is a variable-length containers for arbitrary string ** or binary data. */ #include "config.h" -#include <zlib.h> +#if defined(FOSSIL_ENABLE_MINIZ) +# define MINIZ_HEADER_FILE_ONLY +# include "miniz.c" +#else +# include <zlib.h> +#endif #include "blob.h" +#if defined(_WIN32) +#include <fcntl.h> +#include <io.h> +#endif #if INTERFACE /* ** A Blob can hold a string or a binary object of arbitrary size. The ** size changes as necessary. @@ -29,14 +38,20 @@ */ struct Blob { unsigned int nUsed; /* Number of bytes used in aData[] */ unsigned int nAlloc; /* Number of bytes allocated for aData[] */ unsigned int iCursor; /* Next character of input to parse */ + unsigned int blobFlags; /* One or more BLOBFLAG_* bits */ char *aData; /* Where the information is stored */ void (*xRealloc)(Blob*, unsigned int); /* Function to reallocate the buffer */ }; +/* +** Allowed values for Blob.blobFlags +*/ +#define BLOBFLAG_NotSQL 0x0001 /* Non-SQL text */ + /* ** The current size of a Blob */ #define blob_size(X) ((X)->nUsed) @@ -60,56 +75,84 @@ #define blob_is_init(x) \ assert((x)->xRealloc==blobReallocMalloc || (x)->xRealloc==blobReallocStatic) /* ** Make sure a blob does not contain malloced memory. +** +** This might fail if we are unlucky and x is uninitialized. For that +** reason it should only be used locally for debugging. Leave it turned +** off for production. */ #if 0 /* Enable for debugging only */ -#define blob_is_reset(x) \ - assert((x)->xRealloc!=blobReallocMalloc || (x)->nAlloc==0) +#define assert_blob_is_reset(x) assert(blob_is_reset(x)) #else -#define blob_is_reset(x) +#define assert_blob_is_reset(x) #endif + + /* ** We find that the built-in isspace() function does not work for ** some international character sets. So here is a substitute. */ -static int blob_isspace(char c){ +int fossil_isspace(char c){ return c==' ' || (c<='\r' && c>='\t'); } + +/* +** Other replacements for ctype.h functions. +*/ +int fossil_islower(char c){ return c>='a' && c<='z'; } +int fossil_isupper(char c){ return c>='A' && c<='Z'; } +int fossil_isdigit(char c){ return c>='0' && c<='9'; } +int fossil_tolower(char c){ + return fossil_isupper(c) ? c - 'A' + 'a' : c; +} +int fossil_toupper(char c){ + return fossil_islower(c) ? 
c - 'a' + 'A' : c; +} +int fossil_isalpha(char c){ + return (c>='a' && c<='z') || (c>='A' && c<='Z'); +} +int fossil_isalnum(char c){ + return (c>='a' && c<='z') || (c>='A' && c<='Z') || (c>='0' && c<='9'); +} + /* ** COMMAND: test-isspace +** +** Verify that the fossil_isspace() routine is working correctly but +** testing it on all possible inputs. */ void isspace_cmd(void){ int i; for(i=0; i<=255; i++){ if( i==' ' || i=='\n' || i=='\t' || i=='\v' || i=='\f' || i=='\r' ){ - assert( blob_isspace((char)i) ); + assert( fossil_isspace((char)i) ); }else{ - assert( !blob_isspace((char)i) ); + assert( !fossil_isspace((char)i) ); } } - printf("All 256 characters OK\n"); + fossil_print("All 256 characters OK\n"); } /* ** This routine is called if a blob operation fails because we ** have run out of memory. */ static void blob_panic(void){ static const char zErrMsg[] = "out of memory\n"; - write(2, zErrMsg, sizeof(zErrMsg)-1); - exit(1); + fputs(zErrMsg, stderr); + fossil_exit(1); } /* ** A reallocation function that assumes that aData came from malloc(). ** This function attempts to resize the buffer of the blob to hold -** newSize bytes. +** newSize bytes. ** ** No attempt is made to recover from an out-of-memory error. ** If an OOM error occurs, an error message is printed on stderr ** and the program exits. */ @@ -118,13 +161,13 @@ free(pBlob->aData); pBlob->aData = 0; pBlob->nAlloc = 0; pBlob->nUsed = 0; pBlob->iCursor = 0; + pBlob->blobFlags = 0; }else if( newSize>pBlob->nAlloc || newSize<pBlob->nAlloc-4000 ){ - char *pNew = realloc(pBlob->aData, newSize); - if( pNew==0 ) blob_panic(); + char *pNew = fossil_realloc(pBlob->aData, newSize); pBlob->aData = pNew; pBlob->nAlloc = newSize; if( pBlob->nUsed>pBlob->nAlloc ){ pBlob->nUsed = pBlob->nAlloc; } @@ -133,11 +176,11 @@ /* ** An initializer for Blobs */ #if INTERFACE -#define BLOB_INITIALIZER {0,0,0,0,blobReallocMalloc} +#define BLOB_INITIALIZER {0,0,0,0,0,blobReallocMalloc} #endif const Blob empty_blob = BLOB_INITIALIZER; /* ** A reallocation function for when the initial string is in unmanaged @@ -145,12 +188,11 @@ */ static void blobReallocStatic(Blob *pBlob, unsigned int newSize){ if( newSize==0 ){ *pBlob = empty_blob; }else{ - char *pNew = malloc( newSize ); - if( pNew==0 ) blob_panic(); + char *pNew = fossil_malloc( newSize ); if( pBlob->nUsed>newSize ) pBlob->nUsed = newSize; memcpy(pNew, pBlob->aData, pBlob->nUsed); pBlob->aData = pNew; pBlob->xRealloc = blobReallocMalloc; pBlob->nAlloc = newSize; @@ -163,23 +205,37 @@ void blob_reset(Blob *pBlob){ blob_is_init(pBlob); pBlob->xRealloc(pBlob, 0); } + +/* +** Return true if the blob has been zeroed - in other words if it contains +** no malloced memory. This only works reliably if the blob has been +** initialized - it can return a false negative on an uninitialized blob. +*/ +int blob_is_reset(Blob *pBlob){ + if( pBlob==0 ) return 1; + if( pBlob->nUsed ) return 0; + if( pBlob->xRealloc==blobReallocMalloc && pBlob->nAlloc ) return 0; + return 1; +} + /* ** Initialize a blob to a string or byte-array constant of a specified length. ** Any prior data in the blob is discarded. */ void blob_init(Blob *pBlob, const char *zData, int size){ - blob_is_reset(pBlob); + assert_blob_is_reset(pBlob); if( zData==0 ){ *pBlob = empty_blob; }else{ if( size<=0 ) size = strlen(zData); pBlob->nUsed = pBlob->nAlloc = size; pBlob->aData = (char*)zData; pBlob->iCursor = 0; + pBlob->blobFlags = 0; pBlob->xRealloc = blobReallocStatic; } } /* @@ -187,28 +243,39 @@ ** Any prior data in the blob is discarded. 
*/ void blob_set(Blob *pBlob, const char *zStr){ blob_init(pBlob, zStr, -1); } + +/* +** Initialize a blob to a nul-terminated string obtained from fossil_malloc(). +** The blob will take responsibility for freeing the string. +*/ +void blob_set_dynamic(Blob *pBlob, char *zStr){ + blob_init(pBlob, zStr, -1); + pBlob->xRealloc = blobReallocMalloc; +} /* ** Initialize a blob to an empty string. */ void blob_zero(Blob *pBlob){ static const char zEmpty[] = ""; - blob_is_reset(pBlob); + assert_blob_is_reset(pBlob); pBlob->nUsed = 0; pBlob->nAlloc = 1; pBlob->aData = (char*)zEmpty; pBlob->iCursor = 0; + pBlob->blobFlags = 0; pBlob->xRealloc = blobReallocStatic; } /* ** Append text or data to the end of a blob. */ void blob_append(Blob *pBlob, const char *aData, int nData){ + assert( aData!=0 || nData==0 ); blob_is_init(pBlob); if( nData<0 ) nData = strlen(aData); if( nData==0 ) return; if( pBlob->nUsed + nData >= pBlob->nAlloc ){ pBlob->xRealloc(pBlob, pBlob->nUsed + nData + pBlob->nAlloc + 100); @@ -234,18 +301,32 @@ ** Return a pointer to a null-terminated string for a blob. */ char *blob_str(Blob *p){ blob_is_init(p); if( p->nUsed==0 ){ - blob_append(p, "", 1); + blob_append(p, "", 1); /* NOTE: Changes nUsed. */ p->nUsed = 0; } if( p->aData[p->nUsed]!=0 ){ blob_materialize(p); } return p->aData; } + +/* +** Return a pointer to a null-terminated string for a blob that has +** been created using blob_append_sql() and not blob_appendf(). If +** text was ever added using blob_appendf() then throw an error. +*/ +char *blob_sql_text(Blob *p){ + blob_is_init(p); + if( (p->blobFlags & BLOBFLAG_NotSQL) ){ + fossil_fatal("Internal error: Use of blob_appendf() to construct SQL text"); + } + return blob_str(p); +} + /* ** Return a pointer to a null-terminated string for a blob. ** ** WARNING: If the blob is ephemeral, it might cause a '\000' @@ -264,11 +345,12 @@ p->aData[p->nUsed] = 0; return p->aData; } /* -** Compare two blobs. +** Compare two blobs. Return negative, zero, or positive if the first +** blob is less then, equal to, or greater than the second. */ int blob_compare(Blob *pA, Blob *pB){ int szA, szB, sz, rc; blob_is_init(pA); blob_is_init(pB); @@ -277,10 +359,36 @@ sz = szA<szB ? szA : szB; rc = memcmp(blob_buffer(pA), blob_buffer(pB), sz); if( rc==0 ){ rc = szA - szB; } + return rc; +} + +/* +** Compare two blobs in constant time and return zero if they are equal. +** Constant time comparison only applies for blobs of the same length. +** If lengths are different, immediately returns 1. +*/ +int blob_constant_time_cmp(Blob *pA, Blob *pB){ + int szA, szB, i; + unsigned char *buf1, *buf2; + unsigned char rc = 0; + + blob_is_init(pA); + blob_is_init(pB); + szA = blob_size(pA); + szB = blob_size(pB); + if( szA!=szB || szA==0 ) return 1; + + buf1 = (unsigned char*)blob_buffer(pA); + buf2 = (unsigned char*)blob_buffer(pB); + + for( i=0; i<szA; i++ ){ + rc = rc | (buf1[i] ^ buf2[i]); + } + return rc; } /* ** Compare a blob to a string. Return TRUE if they are equal. @@ -305,11 +413,11 @@ ((B)->nUsed==sizeof(S)-1 && memcmp((B)->aData,S,sizeof(S)-1)==0) #endif /* -** Attempt to resize a blob so that its internal buffer is +** Attempt to resize a blob so that its internal buffer is ** nByte in size. The blob is truncated if necessary. */ void blob_resize(Blob *pBlob, unsigned int newSize){ pBlob->xRealloc(pBlob, newSize+1); pBlob->nUsed = newSize; @@ -316,11 +424,11 @@ pBlob->aData[newSize] = 0; } /* ** Make sure a blob is nul-terminated and is not a pointer to unmanaged -** space. 
Return a pointer to the +** space. Return a pointer to the data. */ char *blob_materialize(Blob *pBlob){ blob_resize(pBlob, pBlob->nUsed); return pBlob->aData; } @@ -341,11 +449,11 @@ ** ** After this call completes, pTo will be an ephemeral blob. */ int blob_extract(Blob *pFrom, int N, Blob *pTo){ blob_is_init(pFrom); - blob_is_reset(pTo); + assert_blob_is_reset(pTo); if( pFrom->iCursor + N > pFrom->nUsed ){ N = pFrom->nUsed - pFrom->iCursor; if( N<=0 ){ blob_zero(pTo); return 0; @@ -376,13 +484,10 @@ }else if( whence==BLOB_SEEK_CUR ){ p->iCursor += offset; }else if( whence==BLOB_SEEK_END ){ p->iCursor = p->nUsed + offset - 1; } - if( p->iCursor<0 ){ - p->iCursor = 0; - } if( p->iCursor>p->nUsed ){ p->iCursor = p->nUsed; } return p->iCursor; } @@ -393,11 +498,11 @@ int blob_tell(Blob *p){ return p->iCursor; } /* -** Extract a single line of text from pFrom beginning at the current +** Extract a single line of text from pFrom beginning at the current ** cursor location and use that line of text to initialize pTo. ** pTo will include the terminating \n. Return the number of bytes ** in the line including the \n at the end. 0 is returned at ** end-of-file. ** @@ -429,11 +534,11 @@ ** not insert a new zero terminator. */ int blob_trim(Blob *p){ char *z = p->aData; int n = p->nUsed; - while( n>0 && blob_isspace(z[n-1]) ){ n--; } + while( n>0 && fossil_isspace(z[n-1]) ){ n--; } p->nUsed = n; return n; } /* @@ -452,15 +557,53 @@ */ int blob_token(Blob *pFrom, Blob *pTo){ char *aData = pFrom->aData; int n = pFrom->nUsed; int i = pFrom->iCursor; - while( i<n && blob_isspace(aData[i]) ){ i++; } + while( i<n && fossil_isspace(aData[i]) ){ i++; } + pFrom->iCursor = i; + while( i<n && !fossil_isspace(aData[i]) ){ i++; } + blob_extract(pFrom, i-pFrom->iCursor, pTo); + while( i<n && fossil_isspace(aData[i]) ){ i++; } + pFrom->iCursor = i; + return pTo->nUsed; +} + +/* +** Extract a single SQL token from pFrom and use it to initialize pTo. +** Return the number of bytes in the token. If no token is found, +** return 0. +** +** An SQL token consists of one or more non-space characters. If the +** first character is ' then the token is terminated by a matching ' +** (ignoring double '') or by the end of the string +** +** The cursor of pFrom is left pointing at the first character past +** the end of the token. +** +** pTo will be an ephermeral blob. If pFrom changes, it might alter +** pTo as well. +*/ +int blob_sqltoken(Blob *pFrom, Blob *pTo){ + char *aData = pFrom->aData; + int n = pFrom->nUsed; + int i = pFrom->iCursor; + while( i<n && fossil_isspace(aData[i]) ){ i++; } pFrom->iCursor = i; - while( i<n && !blob_isspace(aData[i]) ){ i++; } + if( aData[i]=='\'' ){ + i++; + while( i<n ){ + if( aData[i]=='\'' ){ + if( aData[++i]!='\'' ) break; + } + i++; + } + }else{ + while( i<n && !fossil_isspace(aData[i]) ){ i++; } + } blob_extract(pFrom, i-pFrom->iCursor, pTo); - while( i<n && blob_isspace(aData[i]) ){ i++; } + while( i<n && fossil_isspace(aData[i]) ){ i++; } pFrom->iCursor = i; return pTo->nUsed; } /* @@ -523,11 +666,11 @@ int blob_is_int(Blob *pBlob, int *pValue){ const char *z = blob_buffer(pBlob); int i, n, c, v; n = blob_size(pBlob); v = 0; - for(i=0; i<n && (c = z[i])!=0 && isdigit(c); i++){ + for(i=0; i<n && (c = z[i])!=0 && c>='0' && c<='9'; i++){ v = v*10 + c - '0'; } if( i==n ){ *pValue = v; return 1; @@ -559,23 +702,37 @@ return i; } /* ** Do printf-style string rendering and append the results to a blob. 
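**
** For example (illustrative only; nChange and zBranch are hypothetical
** local variables):
**
**      Blob out = BLOB_INITIALIZER;
**      blob_appendf(&out, "%d check-ins on branch %s\n", nChange, zBranch);
**      fossil_print("%s", blob_str(&out));
**      blob_reset(&out);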
+**
+** The blob_appendf() version sets the BLOBFLAG_NotSQL bit in Blob.blobFlags
+** whereas blob_append_sql() does not.
*/
void blob_appendf(Blob *pBlob, const char *zFormat, ...){
-  va_list ap;
-  va_start(ap, zFormat);
-  vxprintf(pBlob, zFormat, ap);
-  va_end(ap);
+  if( pBlob ){
+    va_list ap;
+    va_start(ap, zFormat);
+    vxprintf(pBlob, zFormat, ap);
+    va_end(ap);
+    pBlob->blobFlags |= BLOBFLAG_NotSQL;
+  }
+}
+void blob_append_sql(Blob *pBlob, const char *zFormat, ...){
+  if( pBlob ){
+    va_list ap;
+    va_start(ap, zFormat);
+    vxprintf(pBlob, zFormat, ap);
+    va_end(ap);
+  }
}
void blob_vappendf(Blob *pBlob, const char *zFormat, va_list ap){
-  vxprintf(pBlob, zFormat, ap);
+  if( pBlob ) vxprintf(pBlob, zFormat, ap);
}

/*
-** Initalize a blob to the data on an input channel. Return
+** Initialize a blob to the data on an input channel.  Return
** the number of bytes read into the blob. Any prior content
** of the blob is discarded, not freed.
*/
int blob_read_from_channel(Blob *pBlob, FILE *in, int nToRead){
  size_t n;
@@ -600,39 +757,66 @@
** Initialize a blob to be the content of a file. If the filename
** is blank or "-" then read from standard input.
**
** Any prior content of the blob is discarded, not freed.
**
-** Return the number of bytes read. Return -1 for an error.
+** Return the number of bytes read. Calls fossil_fatal() on error (i.e.
+** it exit()s and does not return).
*/
int blob_read_from_file(Blob *pBlob, const char *zFilename){
  int size, got;
  FILE *in;
  if( zFilename==0 || zFilename[0]==0
        || (zFilename[0]=='-' && zFilename[1]==0) ){
    return blob_read_from_channel(pBlob, stdin, -1);
  }
-  size = file_size(zFilename);
+  size = file_wd_size(zFilename);
  blob_zero(pBlob);
  if( size<0 ){
-    fossil_panic("no such file: %s", zFilename);
+    fossil_fatal("no such file: %s", zFilename);
  }
  if( size==0 ){
    return 0;
  }
  blob_resize(pBlob, size);
-  in = fopen(zFilename, "rb");
+  in = fossil_fopen(zFilename, "rb");
  if( in==0 ){
-    fossil_panic("cannot open %s for reading", zFilename);
+    fossil_fatal("cannot open %s for reading", zFilename);
  }
  got = fread(blob_buffer(pBlob), 1, size, in);
  fclose(in);
  if( got<size ){
    blob_resize(pBlob, got);
  }
  return got;
}
+
+/*
+** Reads the destination path of a symbolic link and puts it into the blob.
+** Any prior content of the blob is discarded, not freed.
+**
+** Returns the length of the destination path.
+**
+** On Windows, zeros the blob and returns 0.
+*/
+int blob_read_link(Blob *pBlob, const char *zFilename){
+#if !defined(_WIN32)
+  char zBuf[1024];
+  ssize_t len = readlink(zFilename, zBuf, 1023);
+  if( len < 0 ){
+    fossil_fatal("cannot read symbolic link %s", zFilename);
+  }
+  zBuf[len] = 0;   /* null-terminate */
+  blob_zero(pBlob);
+  blob_appendf(pBlob, "%s", zBuf);
+  return len;
+#else
+  blob_zero(pBlob);
+  return 0;
+#endif
+}
+
/*
** Write the content of a blob into a file.
**
** If the filename is blank or "-" then write to standard output.
@@ -639,71 +823,49 @@
**
** Return the number of bytes written.
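**
** (As implemented below: when writing to standard output on Windows, the
** blob is first offered to the console as UTF-8 text via
** fossil_utf8_to_console(), and only falls back to a raw binary write on
** stdout if that fails.)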
*/ int blob_write_to_file(Blob *pBlob, const char *zFilename){ FILE *out; - int needToClose; - int wrote; + int nWrote; if( zFilename[0]==0 || (zFilename[0]=='-' && zFilename[1]==0) ){ - out = stdout; - needToClose = 0; - }else{ - int i, nName; - char *zName, zBuf[1000]; - - nName = strlen(zFilename); - if( nName>=sizeof(zBuf) ){ - zName = mprintf("%s", zFilename); - }else{ - zName = zBuf; - strcpy(zName, zFilename); - } - nName = file_simplify_name(zName, nName); - for(i=1; i<nName; i++){ - if( zName[i]=='/' ){ - zName[i] = 0; -#ifdef __MINGW32__ - /* - ** On Windows, local path looks like: C:/develop/project/file.txt - ** The if stops us from trying to create a directory of a drive letter - ** C: in this example. - */ - if( !(i==2 && zName[1]==':') ){ -#endif - if( file_mkdir(zName, 1) ){ - fossil_fatal_recursive("unable to create directory %s", zName); - return 0; - } -#ifdef __MINGW32__ - } -#endif - zName[i] = '/'; - } - } - out = fopen(zName, "wb"); - if( out==0 ){ - fossil_fatal_recursive("unable to open file \"%s\" for writing", zName); - return 0; - } - needToClose = 1; - if( zName!=zBuf ) free(zName); - } - blob_is_init(pBlob); - wrote = fwrite(blob_buffer(pBlob), 1, blob_size(pBlob), out); - if( needToClose ) fclose(out); - if( wrote!=blob_size(pBlob) ){ - fossil_fatal_recursive("short write: %d of %d bytes to %s", wrote, - blob_size(pBlob), zFilename); - } - return wrote; + nWrote = blob_size(pBlob); +#if defined(_WIN32) + if( fossil_utf8_to_console(blob_buffer(pBlob), nWrote, 0) >= 0 ){ + return nWrote; + } + fflush(stdout); + _setmode(_fileno(stdout), _O_BINARY); +#endif + fwrite(blob_buffer(pBlob), 1, nWrote, stdout); +#if defined(_WIN32) + fflush(stdout); + _setmode(_fileno(stdout), _O_TEXT); +#endif + }else{ + file_mkfolder(zFilename, 1, 0); + out = fossil_fopen(zFilename, "wb"); + if( out==0 ){ + fossil_fatal_recursive("unable to open file \"%s\" for writing", + zFilename); + return 0; + } + blob_is_init(pBlob); + nWrote = fwrite(blob_buffer(pBlob), 1, blob_size(pBlob), out); + fclose(out); + if( nWrote!=blob_size(pBlob) ){ + fossil_fatal_recursive("short write: %d of %d bytes to %s", nWrote, + blob_size(pBlob), zFilename); + } + } + return nWrote; } /* ** Compress a blob pIn. Store the result in pOut. It is ok for pIn and -** pOut to be the same blob. -** +** pOut to be the same blob. +** ** pOut must either be the same as pIn or else uninitialized. */ void blob_compress(Blob *pIn, Blob *pOut){ unsigned int nIn = blob_size(pIn); unsigned int nOut = 13 + nIn + (nIn+999)/1000; @@ -719,17 +881,23 @@ outBuf[3] = nIn & 0xff; nOut2 = (long int)nOut; compress(&outBuf[4], &nOut2, (unsigned char*)blob_buffer(pIn), blob_size(pIn)); if( pOut==pIn ) blob_reset(pOut); - blob_is_reset(pOut); + assert_blob_is_reset(pOut); *pOut = temp; blob_resize(pOut, nOut2+4); } /* ** COMMAND: test-compress +** +** Usage: %fossil test-compress INPUTFILE OUTPUTFILE +** +** Run compression on INPUTFILE and write the result into OUTPUTFILE. +** +** This is used to test and debug the blob_compress() routine. */ void compress_cmd(void){ Blob f; if( g.argc!=4 ) usage("INPUTFILE OUTPUTFILE"); blob_read_from_file(&f, g.argv[2]); @@ -736,13 +904,13 @@ blob_compress(&f, &f); blob_write_to_file(&f, g.argv[3]); } /* -** Compress the concatenation of a blobs pIn1 and pIn2. Store the result -** in pOut. -** +** Compress the concatenation of a blobs pIn1 and pIn2. Store the result +** in pOut. +** ** pOut must be either uninitialized or must be the same as either pIn1 or ** pIn2. 
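**
** (As with blob_compress(), the result begins with a four-byte big-endian
** count of the uncompressed size, followed by the zlib-compressed data;
** blob_uncompress() below relies on that header.)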
*/
void blob_compress2(Blob *pIn1, Blob *pIn2, Blob *pOut){
  unsigned int nIn = blob_size(pIn1) + blob_size(pIn2);
@@ -772,16 +940,23 @@
  deflate(&stream, Z_FINISH);
  blob_resize(&temp, stream.total_out + 4);
  deflateEnd(&stream);
  if( pOut==pIn1 ) blob_reset(pOut);
  if( pOut==pIn2 ) blob_reset(pOut);
-  blob_is_reset(pOut);
+  assert_blob_is_reset(pOut);
  *pOut = temp;
}

/*
** COMMAND: test-compress-2
+**
+** Usage: %fossil test-compress-2 IN1 IN2 OUT
+**
+** Read files IN1 and IN2, concatenate the content, compress the
+** content, then write results into OUT.
+**
+** This is used to test and debug the blob_compress2() routine.
*/
void compress2_cmd(void){
  Blob f1, f2;
  if( g.argc!=5 ) usage("INPUTFILE1 INPUTFILE2 OUTPUTFILE");
  blob_read_from_file(&f1, g.argv[2]);
@@ -809,25 +984,31 @@
  inBuf = (unsigned char*)blob_buffer(pIn);
  nOut = (inBuf[0]<<24) + (inBuf[1]<<16) + (inBuf[2]<<8) + inBuf[3];
  blob_zero(&temp);
  blob_resize(&temp, nOut+1);
  nOut2 = (long int)nOut;
-  rc = uncompress((unsigned char*)blob_buffer(&temp), &nOut2,
-                  &inBuf[4], blob_size(pIn));
+  rc = uncompress((unsigned char*)blob_buffer(&temp), &nOut2,
+                  &inBuf[4], nIn - 4);
  if( rc!=Z_OK ){
    blob_reset(&temp);
    return 1;
  }
  blob_resize(&temp, nOut2);
  if( pOut==pIn ) blob_reset(pOut);
-  blob_is_reset(pOut);
+  assert_blob_is_reset(pOut);
  *pOut = temp;
  return 0;
}

/*
** COMMAND: test-uncompress
+**
+** Usage: %fossil test-uncompress IN OUT
+**
+** Read the content of file IN, uncompress that content, and write the
+** result into OUT.  This command is intended for testing of the
+** blob_compress() function.
*/
void uncompress_cmd(void){
  Blob f;
  if( g.argc!=4 ) usage("INPUTFILE OUTPUTFILE");
  blob_read_from_file(&f, g.argv[2]);
@@ -847,20 +1028,20 @@
  for(i=2; i<g.argc; i++){
    blob_read_from_file(&b1, g.argv[i]);
    blob_compress(&b1, &b2);
    blob_uncompress(&b2, &b3);
    if( blob_compare(&b1, &b3) ){
-      fossil_panic("compress/uncompress cycle failed for %s", g.argv[i]);
+      fossil_fatal("compress/uncompress cycle failed for %s", g.argv[i]);
    }
    blob_reset(&b1);
    blob_reset(&b2);
    blob_reset(&b3);
  }
-  printf("ok\n");
+  fossil_print("ok\n");
}

-#ifdef __MINGW32__
+#if defined(_WIN32) || defined(__CYGWIN__)
/*
** Convert every \n character in the given blob into \r\n.
*/
void blob_add_cr(Blob *p){
  char *z = p->aData;
@@ -883,17 +1064,51 @@
  }
}
#endif

/*
-** Remove every \r character from the given blob.
+** Remove every \r character from the given blob, replacing each one with
+** a \n character if it was not already part of a \r\n pair.
*/
-void blob_remove_cr(Blob *p){
+void blob_to_lf_only(Blob *p){
  int i, j;
-  char *z;
-  blob_materialize(p);
-  z = p->aData;
+  char *z = blob_materialize(p);
  for(i=j=0; z[i]; i++){
    if( z[i]!='\r' ) z[j++] = z[i];
+    else if( z[i+1]!='\n' ) z[j++] = '\n';
  }
  z[j] = 0;
  p->nUsed = j;
+}
+
+/*
+** Convert blob from cp1252 to UTF-8. As cp1252 is a superset
+** of iso8859-1, this is useful on UNIX as well.
+**
+** This table contains the character translations for 0x80..0xA0.
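+**
+** For example, the cp1252 byte 0x93 (left double quotation mark) maps to
+** U+201C, which blob_cp1252_to_utf8() below emits as the three UTF-8 bytes
+** 0xE2 0x80 0x9C.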
+*/ + +static const unsigned short cp1252[32] = { + 0x20ac, 0x81, 0x201A, 0x0192, 0x201E, 0x2026, 0x2020, 0x2021, + 0x02C6, 0x2030, 0x0160, 0x2039, 0x0152, 0x8D, 0x017D, 0x8F, + 0x90, 0x2018, 0x2019, 0x201C, 0x201D, 0x2022, 0x2013, 0x2014, + 0x2DC, 0x2122, 0x0161, 0x203A, 0x0153, 0x9D, 0x017E, 0x0178 +}; + +void blob_cp1252_to_utf8(Blob *p){ + unsigned char *z = (unsigned char *)p->aData; + int j = p->nUsed; + int i, n; + for(i=n=0; i<j; i++){ + if( z[i]>=0x80 ){ + if( (z[i]<0xa0) && (cp1252[z[i]&0x1f]>=0x800) ){ + n++; + } + n++; + } + } + j += n; + if( j>=p->nAlloc ){ + blob_resize(p, j); + z = (unsigned char *)p->aData; + } + p->nUsed = j; @@ -900,1 +1115,127 @@ + z[j] = 0; + while( j>i ){ + if( z[--i]>=0x80 ){ + if( z[i]<0xa0 ){ + unsigned short sym = cp1252[z[i]&0x1f]; + if( sym>=0x800 ){ + z[--j] = 0x80 | (sym&0x3f); + z[--j] = 0x80 | ((sym>>6)&0x3f); + z[--j] = 0xe0 | (sym>>12); + }else{ + z[--j] = 0x80 | (sym&0x3f); + z[--j] = 0xc0 | (sym>>6); + } + }else{ + z[--j] = 0x80 | (z[i]&0x3f); + z[--j] = 0xC0 | (z[i]>>6); + } + }else{ + z[--j] = z[i]; + } + } +} + +/* +** Shell-escape the given string. Append the result to a blob. +*/ +void shell_escape(Blob *pBlob, const char *zIn){ + int n = blob_size(pBlob); + int k = strlen(zIn); + int i, c; + char *z; + for(i=0; (c = zIn[i])!=0; i++){ + if( fossil_isspace(c) || c=='"' || (c=='\\' && zIn[i+1]!=0) ){ + blob_appendf(pBlob, "\"%s\"", zIn); + z = blob_buffer(pBlob); + for(i=n+1; i<=n+k; i++){ + if( z[i]=='"' ) z[i] = '_'; + } + return; + } + } + blob_append(pBlob, zIn, -1); +} + +/* +** A read(2)-like impl for the Blob class. Reads (copies) up to nLen +** bytes from pIn, starting at position pIn->iCursor, and copies them +** to pDest (which must be valid memory at least nLen bytes long). +** +** Returns the number of bytes read/copied, which may be less than +** nLen (if end-of-blob is encountered). +** +** Updates pIn's cursor. +** +** Returns 0 if pIn contains no data. +*/ +unsigned int blob_read(Blob *pIn, void * pDest, unsigned int nLen ){ + if( !pIn->aData || (pIn->iCursor >= pIn->nUsed) ){ + return 0; + } else if( (pIn->iCursor + nLen) > (unsigned int)pIn->nUsed ){ + nLen = (unsigned int) (pIn->nUsed - pIn->iCursor); + } + assert( pIn->nUsed > pIn->iCursor ); + assert( (pIn->iCursor+nLen) <= pIn->nUsed ); + if( nLen ){ + memcpy( pDest, pIn->aData, nLen ); + pIn->iCursor += nLen; + } + return nLen; +} + +/* +** Swaps the contents of the given blobs. Results +** are unspecified if either value is NULL or both +** point to the same blob. +*/ +void blob_swap( Blob *pLeft, Blob *pRight ){ + Blob swap = *pLeft; + *pLeft = *pRight; + *pRight = swap; +} + +/* +** Strip a possible byte-order-mark (BOM) from the blob. On Windows, if there +** is either no BOM at all or an (le/be) UTF-16 BOM, a conversion to UTF-8 is +** done. If useMbcs is false and there is no BOM, the input string is assumed +** to be UTF-8 already, so no conversion is done. 
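+**
+** (If useMbcs is true and the text is not valid UTF-8, the Windows and
+** Cygwin builds convert from the current MBCS code page, while other
+** platforms fall back to blob_cp1252_to_utf8() above.)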
+*/ +void blob_to_utf8_no_bom(Blob *pBlob, int useMbcs){ + char *zUtf8; + int bomSize = 0; + int bomReverse = 0; + if( starts_with_utf8_bom(pBlob, &bomSize) ){ + struct Blob temp; + zUtf8 = blob_str(pBlob) + bomSize; + blob_zero(&temp); + blob_append(&temp, zUtf8, -1); + blob_swap(pBlob, &temp); + blob_reset(&temp); + }else if( starts_with_utf16_bom(pBlob, &bomSize, &bomReverse) ){ + zUtf8 = blob_buffer(pBlob); + if( bomReverse ){ + /* Found BOM, but with reversed bytes */ + unsigned int i = blob_size(pBlob); + while( i>0 ){ + /* swap bytes of unicode representation */ + char zTemp = zUtf8[--i]; + zUtf8[i] = zUtf8[i-1]; + zUtf8[--i] = zTemp; + } + } + /* Make sure the blob contains two terminating 0-bytes */ + blob_append(pBlob, "", 1); + zUtf8 = blob_str(pBlob) + bomSize; + zUtf8 = fossil_unicode_to_utf8(zUtf8); + blob_set_dynamic(pBlob, zUtf8); + }else if( useMbcs && invalid_utf8(pBlob) ){ +#if defined(_WIN32) || defined(__CYGWIN__) + zUtf8 = fossil_mbcs_to_utf8(blob_str(pBlob)); + blob_reset(pBlob); + blob_append(pBlob, zUtf8, -1); + fossil_mbcs_free(zUtf8); +#else + blob_cp1252_to_utf8(pBlob); +#endif /* _WIN32 */ + } } Index: src/branch.c ================================================================== --- src/branch.c +++ src/branch.c @@ -20,12 +20,12 @@ #include "config.h" #include "branch.h" #include <assert.h> /* -** fossil branch new BRANCH-NAME ?ORIGIN-CHECK-IN? ?-bgcolor COLOR? -** argv0 argv1 argv2 argv3 argv4 +** fossil branch new NAME BASIS ?OPTIONS? +** argv0 argv1 argv2 argv3 argv4 */ void branch_new(void){ int rootid; /* RID of the root check-in - what we branch off of */ int brid; /* RID of the branch check-in */ int noSign; /* True if the branch is unsigned */ @@ -35,81 +35,95 @@ const char *zBranch; /* Name of the new branch */ char *zDate; /* Date that branch was created */ char *zComment; /* Check-in comment for the new branch */ const char *zColor; /* Color of the new branch */ Blob branch; /* manifest for the new branch */ - Blob parent; /* root check-in manifest */ - Manifest mParent; /* Parsed parent manifest */ + Manifest *pParent; /* Parsed parent manifest */ Blob mcksum; /* Self-checksum on the manifest */ - + const char *zDateOvrd; /* Override date string */ + const char *zUserOvrd; /* Override user name */ + int isPrivate = 0; /* True if the branch should be private */ + noSign = find_option("nosign","",0)!=0; zColor = find_option("bgcolor","c",1); + isPrivate = find_option("private",0,0)!=0; + zDateOvrd = find_option("date-override",0,1); + zUserOvrd = find_option("user-override",0,1); verify_all_options(); if( g.argc<5 ){ - usage("new BRANCH-NAME CHECK-IN ?-bgcolor COLOR?"); + usage("new BRANCH-NAME BASIS ?OPTIONS?"); } - db_find_and_open_repository(1); - noSign = db_get_int("omitsign", 0)|noSign; - + db_find_and_open_repository(0, 0); + noSign = db_get_boolean("omitsign", 0)|noSign; + if( db_get_boolean("clearsign", 0)==0 ){ noSign = 1; } + /* fossil branch new name */ zBranch = g.argv[3]; if( zBranch==0 || zBranch[0]==0 ){ - fossil_panic("branch name cannot be empty"); + fossil_fatal("branch name cannot be empty"); } if( db_exists( "SELECT 1 FROM tagxref" " WHERE tagtype>0" - " AND tagid=(SELECT tagid FROM tag WHERE tagname='sym-%s')", + " AND tagid=(SELECT tagid FROM tag WHERE tagname='sym-%q')", zBranch)!=0 ){ fossil_fatal("branch \"%s\" already exists", zBranch); } user_select(); db_begin_transaction(); - rootid = name_to_rid(g.argv[4]); + rootid = name_to_typed_rid(g.argv[4], "ci"); if( rootid==0 ){ fossil_fatal("unable to locate check-in off of 
which to branch"); } + + pParent = manifest_get(rootid, CFTYPE_MANIFEST, 0); + if( pParent==0 ){ + fossil_fatal("%s is not a valid check-in", g.argv[4]); + } /* Create a manifest for the new branch */ blob_zero(&branch); + if( pParent->zBaseline ){ + blob_appendf(&branch, "B %s\n", pParent->zBaseline); + } zComment = mprintf("Create new branch named \"%h\"", zBranch); blob_appendf(&branch, "C %F\n", zComment); - zDate = db_text(0, "SELECT datetime('now')"); - zDate[10] = 'T'; + zDate = date_in_standard_format(zDateOvrd ? zDateOvrd : "now"); blob_appendf(&branch, "D %s\n", zDate); /* Copy all of the content from the parent into the branch */ - content_get(rootid, &parent); - manifest_parse(&mParent, &parent); - if( mParent.type!=CFTYPE_MANIFEST ){ - fossil_fatal("%s is not a valid check-in", g.argv[4]); - } - for(i=0; i<mParent.nFile; ++i){ - if( mParent.aFile[i].zPerm[0] ){ - blob_appendf(&branch, "F %F %s %s\n", - mParent.aFile[i].zName, - mParent.aFile[i].zUuid, - mParent.aFile[i].zPerm); - }else{ - blob_appendf(&branch, "F %F %s\n", - mParent.aFile[i].zName, - mParent.aFile[i].zUuid); - } + for(i=0; i<pParent->nFile; ++i){ + blob_appendf(&branch, "F %F", pParent->aFile[i].zName); + if( pParent->aFile[i].zUuid ){ + blob_appendf(&branch, " %s", pParent->aFile[i].zUuid); + if( pParent->aFile[i].zPerm && pParent->aFile[i].zPerm[0] ){ + blob_appendf(&branch, " %s", pParent->aFile[i].zPerm); + } + } + blob_append(&branch, "\n", 1); } zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rootid); blob_appendf(&branch, "P %s\n", zUuid); - blob_appendf(&branch, "R %s\n", mParent.zRepoCksum); - manifest_clear(&mParent); + if( pParent->zRepoCksum ){ + blob_appendf(&branch, "R %s\n", pParent->zRepoCksum); + } + manifest_destroy(pParent); /* Add the symbolic branch name and the "branch" tag to identify ** this as a new branch */ + if( content_is_private(rootid) ) isPrivate = 1; + if( isPrivate && zColor==0 ) zColor = "#fec084"; if( zColor!=0 ){ blob_appendf(&branch, "T *bgcolor * %F\n", zColor); } blob_appendf(&branch, "T *branch * %F\n", zBranch); blob_appendf(&branch, "T *sym-%F *\n", zBranch); + if( isPrivate ){ + blob_appendf(&branch, "T +private *\n"); + noSign = 1; + } /* Cancel all other symbolic tags */ db_prepare(&q, "SELECT tagname FROM tagxref, tag" " WHERE tagxref.rid=%d AND tagxref.tagid=tag.tagid" @@ -119,37 +133,39 @@ while( db_step(&q)==SQLITE_ROW ){ const char *zTag = db_column_text(&q, 0); blob_appendf(&branch, "T -%F *\n", zTag); } db_finalize(&q); - - blob_appendf(&branch, "U %F\n", g.zLogin); + + blob_appendf(&branch, "U %F\n", zUserOvrd ? zUserOvrd : login_name()); md5sum_blob(&branch, &mcksum); blob_appendf(&branch, "Z %b\n", &mcksum); if( !noSign && clearsign(&branch, &branch) ){ Blob ans; - blob_zero(&ans); + char cReply; prompt_user("unable to sign manifest. continue (y/N)? 
", &ans); - if( blob_str(&ans)[0]!='y' ){ + cReply = blob_str(&ans)[0]; + if( cReply!='y' && cReply!='Y'){ db_end_transaction(1); - exit(1); + fossil_exit(1); } } - brid = content_put(&branch, 0, 0); + brid = content_put_ex(&branch, 0, 0, 0, isPrivate); if( brid==0 ){ - fossil_panic("trouble committing manifest: %s", g.zErrMsg); + fossil_fatal("trouble committing manifest: %s", g.zErrMsg); } db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", brid); - if( manifest_crosslink(brid, &branch)==0 ){ - fossil_panic("unable to install new manifest"); + if( manifest_crosslink(brid, &branch, MC_PERMIT_HOOKS)==0 ){ + fossil_fatal("%s\n", g.zErrMsg); } + assert( blob_is_reset(&branch) ); content_deltify(rootid, brid, 0); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", brid); - printf("New branch: %s\n", zUuid); + fossil_print("New branch: %s\n", zUuid); if( g.argc==3 ){ - printf( + fossil_print( "\n" "Note: the local check-out has not been updated to the new\n" " branch. To begin working on the new branch, do this:\n" "\n" " %s update %s\n", @@ -158,149 +174,321 @@ } /* Commit */ db_end_transaction(0); - + /* Do an autosync push, if requested */ - autosync(AUTOSYNC_PUSH); + if( !isPrivate ) autosync_loop(SYNC_PUSH, db_get_int("autosync-tries", 1)); +} + +#if INTERFACE +/* +** Allows bits in the mBplqFlags parameter to branch_prepare_list_query(). +*/ +#define BRL_CLOSED_ONLY 0x001 /* Show only closed branches */ +#define BRL_OPEN_ONLY 0x002 /* Show only open branches */ +#define BRL_BOTH 0x003 /* Show both open and closed branches */ +#define BRL_OPEN_CLOSED_MASK 0x003 +#define BRL_MTIME 0x004 /* Include lastest check-in time */ +#dfeine BRL_ORDERBY_MTIME 0x008 /* Sort by MTIME. (otherwise sort by name)*/ + +#endif /* INTERFACE */ + +/* +** Prepare a query that will list branches. +** +** If (which<0) then the query pulls only closed branches. If +** (which>0) then the query pulls all (closed and opened) +** branches. Else the query pulls currently-opened branches. +*/ +void branch_prepare_list_query(Stmt *pQuery, int brFlags){ + switch( brFlags & BRL_OPEN_CLOSED_MASK ){ + case BRL_CLOSED_ONLY: { + db_prepare(pQuery, + "SELECT value FROM tagxref" + " WHERE tagid=%d AND value NOT NULL " + "EXCEPT " + "SELECT value FROM tagxref" + " WHERE tagid=%d" + " AND rid IN leaf" + " AND NOT %z" + " ORDER BY value COLLATE nocase /*sort*/", + TAG_BRANCH, TAG_BRANCH, leaf_is_closed_sql("tagxref.rid") + ); + break; + } + case BRL_BOTH: { + db_prepare(pQuery, + "SELECT DISTINCT value FROM tagxref" + " WHERE tagid=%d AND value NOT NULL" + " AND rid IN leaf" + " ORDER BY value COLLATE nocase /*sort*/", + TAG_BRANCH + ); + break; + } + case BRL_OPEN_ONLY: { + db_prepare(pQuery, + "SELECT DISTINCT value FROM tagxref" + " WHERE tagid=%d AND value NOT NULL" + " AND rid IN leaf" + " AND NOT %z" + " ORDER BY value COLLATE nocase /*sort*/", + TAG_BRANCH, leaf_is_closed_sql("tagxref.rid") + ); + break; + } + } } + /* ** COMMAND: branch ** -** Usage: %fossil branch SUBCOMMAND ... ?-R|--repository FILE? +** Usage: %fossil branch SUBCOMMAND ... ?OPTIONS? ** ** Run various subcommands to manage branches of the open repository or ** of the repository identified by the -R or --repository option. ** -** %fossil branch new BRANCH-NAME BASIS ?-bgcolor COLOR? +** %fossil branch new BRANCH-NAME BASIS ?OPTIONS? ** ** Create a new branch BRANCH-NAME off of check-in BASIS. -** You can optionally give the branch a default color. 
+** Supported options for this subcommand include: +** --private branch is private (i.e., remains local) +** --bgcolor COLOR use COLOR instead of automatic background +** --nosign do not sign contents on this branch +** --date-override DATE DATE to use instead of 'now' +** --user-override USER USER to use instead of the current default +** +** %fossil branch list ?-a|--all|-c|--closed? +** %fossil branch ls ?-a|--all|-c|--closed? ** -** %fossil branch list +** List all branches. Use -a or --all to list all branches and +** -c or --closed to list all closed branches. The default is to +** show only open branches. ** -** List all branches -** +** Options: +** -R|--repository FILE Run commands on repository FILE */ void branch_cmd(void){ int n; - db_find_and_open_repository(1); - if( g.argc<3 ){ - usage("new|list ..."); - } - n = strlen(g.argv[2]); - if( n>=2 && strncmp(g.argv[2],"new",n)==0 ){ + const char *zCmd = "list"; + db_find_and_open_repository(0, 0); + if( g.argc>=3 ) zCmd = g.argv[2]; + n = strlen(zCmd); + if( strncmp(zCmd,"new",n)==0 ){ branch_new(); - }else if( n>=2 && strncmp(g.argv[2],"list",n)==0 ){ + }else if( (strncmp(zCmd,"list",n)==0)||(strncmp(zCmd, "ls", n)==0) ){ Stmt q; - db_prepare(&q, - "%s" - " AND blob.rid IN (SELECT rid FROM tagxref" - " WHERE tagid=%d AND tagtype==2 AND srcid!=0)" - " ORDER BY event.mtime DESC", - timeline_query_for_tty(), TAG_BRANCH - ); - print_timeline(&q, 2000); + int vid; + char *zCurrent = 0; + int brFlags = BRL_OPEN_ONLY; + if( find_option("all","a",0)!=0 ) brFlags = BRL_BOTH; + if( find_option("closed","c",0)!=0 ) brFlags = BRL_CLOSED_ONLY; + + if( g.localOpen ){ + vid = db_lget_int("checkout", 0); + zCurrent = db_text(0, "SELECT value FROM tagxref" + " WHERE rid=%d AND tagid=%d", vid, TAG_BRANCH); + } + branch_prepare_list_query(&q, brFlags); + while( db_step(&q)==SQLITE_ROW ){ + const char *zBr = db_column_text(&q, 0); + int isCur = zCurrent!=0 && fossil_strcmp(zCurrent,zBr)==0; + fossil_print("%s%s\n", (isCur ? "* " : " "), zBr); + } db_finalize(&q); }else{ - fossil_panic("branch subcommand should be one of: " - "new list"); + fossil_fatal("branch subcommand should be one of: " + "new list ls"); + } +} + +static const char brlistQuery[] = +@ SELECT +@ tagxref.value, +@ max(event.mtime), +@ EXISTS(SELECT 1 FROM tagxref AS tx +@ WHERE tx.rid=tagxref.rid +@ AND tx.tagid=(SELECT tagid FROM tag WHERE tagname='closed') +@ AND tx.tagtype>0), +@ (SELECT tagxref.value +@ FROM plink CROSS JOIN tagxref +@ WHERE plink.pid=event.objid +@ AND tagxref.rid=plink.cid +@ AND tagxref.tagid=(SELECT tagid FROM tag WHERE tagname='branch') +@ AND tagtype>0), +@ count(*), +@ (SELECT uuid FROM blob WHERE rid=tagxref.rid) +@ FROM tagxref, tag, event +@ WHERE tagxref.tagid=tag.tagid +@ AND tagxref.tagtype>0 +@ AND tag.tagname='branch' +@ AND event.objid=tagxref.rid +@ GROUP BY 1 +@ ORDER BY 2 DESC; +; + +/* +** This is the new-style branch-list page that shows the branch names +** together with their ages (time of last check-in) and whether or not +** they are closed or merged to another branch. +** +** Control jumps to this routine from brlist_page() (the /brlist handler) +** if there are no query parameters. 
+*/ +static void new_brlist_page(void){ + Stmt q; + double rNow; + login_check_credentials(); + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } + style_header("Branches"); + style_adunit_config(ADUNIT_RIGHT_OK); + login_anonymous_available(); + + db_prepare(&q, brlistQuery/*works-like:""*/); + rNow = db_double(0.0, "SELECT julianday('now')"); + @ <div class="brlist"><table id="branchlisttable"> + @ <thead><tr> + @ <th>Branch Name</th> + @ <th>Age</th> + @ <th>Check-ins</th> + @ <th>Status</th> + @ <th>Resolution</th> + @ </tr></thead><tbody> + while( db_step(&q)==SQLITE_ROW ){ + const char *zBranch = db_column_text(&q, 0); + double rMtime = db_column_double(&q, 1); + int isClosed = db_column_int(&q, 2); + const char *zMergeTo = db_column_text(&q, 3); + int nCkin = db_column_int(&q, 4); + const char *zLastCkin = db_column_text(&q, 5); + char *zAge = human_readable_age(rNow - rMtime); + sqlite3_int64 iMtime = (sqlite3_int64)(rMtime*86400.0); + if( zMergeTo && zMergeTo[0]==0 ) zMergeTo = 0; + @ <tr> + @ <td>%z(href("%R/timeline?n=100&r=%T",zBranch))%h(zBranch)</a></td> + @ <td data-sortkey="%016llx(-iMtime)">%s(zAge)</td> + @ <td>%d(nCkin)</td> + fossil_free(zAge); + @ <td>%s(isClosed?"closed":"")</td> + if( zMergeTo ){ + @ <td>merged into + @ %z(href("%R/timeline?f=%!S",zLastCkin))%h(zMergeTo)</a></td> + }else{ + @ <td></td> + } + @ </tr> } + @ </tbody></table></div> + db_finalize(&q); + output_table_sorting_javascript("branchlisttable","tkNtt",2); + style_footer(); } /* ** WEBPAGE: brlist -** -** Show a timeline of all branches -*/ -void brlist_page(void){ - Stmt q; - int cnt; - - login_check_credentials(); - if( !g.okRead ){ login_needed(); return; } - - style_header("Branches"); - style_submenu_element("Timeline", "Timeline", "brtimeline"); - login_anonymous_available(); - compute_leaves(0, 1); - style_sidebox_begin("Nomenclature:", "33%"); - @ <ol> - @ <li> An <b>open branch</b> is a branch that has one or - @ more <a href="leaves">open leaves.</a> - @ The presence of open leaves presumably means - @ that the branch is still being extended with new check-ins.</li> - @ <li> A <b>closed branch</b> is a branch with only - @ <a href="leaves?closed">closed leaves</a>. - @ Closed branches are fixed and do not change (unless they are first - @ reopened)</li> - @ </ol> - style_sidebox_end(); - - db_prepare(&q, - "SELECT DISTINCT value FROM tagxref" - " WHERE tagid=%d AND value NOT NULL" - " AND rid IN leaves" - " ORDER BY value /*sort*/", - TAG_BRANCH - ); - cnt = 0; - while( db_step(&q)==SQLITE_ROW ){ - const char *zBr = db_column_text(&q, 0); - if( cnt==0 ){ - @ <h2>Open Branches:</h2> - @ <ul> - cnt++; - } - if( g.okHistory ){ - @ <li><a href="%s(g.zBaseURL)/timeline?t=%T(zBr)">%h(zBr)</a></li> - }else{ - @ <li><b>%h(zBr)</b></li> - } - } - db_finalize(&q); - if( cnt ){ - @ </ul> - } - cnt = 0; - db_prepare(&q, - "SELECT value FROM tagxref" - " WHERE tagid=%d AND value NOT NULL" - " EXCEPT " - "SELECT value FROM tagxref" - " WHERE tagid=%d AND value NOT NULL" - " AND rid IN leaves" - " ORDER BY value /*sort*/", - TAG_BRANCH, TAG_BRANCH - ); - while( db_step(&q)==SQLITE_ROW ){ - const char *zBr = db_column_text(&q, 0); - if( cnt==0 ){ - @ <h2>Closed Branches:</h2> - @ <ul> - cnt++; - } - if( g.okHistory ){ - @ <li><a href="%s(g.zBaseURL)/timeline?t=%T(zBr)">%h(zBr)</a></li> - }else{ - @ <li><b>%h(zBr)</b></li> +** Show a list of branches. With no query parameters, a sortable table +** is used to show all branches. If query parameters are present a +** fixed bullet list is shown. 
+** +** Query parameters: +** +** all Show all branches +** closed Show only closed branches +** open Show only open branches (default behavior) +** colortest Show all branches with automatic color +*/ +void brlist_page(void){ + Stmt q; + int cnt; + int showClosed = P("closed")!=0; + int showAll = P("all")!=0; + int showOpen = P("open")!=0; + int colorTest = P("colortest")!=0; + int brFlags = BRL_OPEN_ONLY; + + if( showClosed==0 && showAll==0 && showOpen==0 && colorTest==0 ){ + new_brlist_page(); + return; + } + login_check_credentials(); + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } + if( colorTest ){ + showClosed = 0; + showAll = 1; + } + if( showAll ) brFlags = BRL_BOTH; + if( showClosed ) brFlags = BRL_CLOSED_ONLY; + + style_header("%s", showClosed ? "Closed Branches" : + showAll ? "All Branches" : "Open Branches"); + style_submenu_element("Timeline", "Timeline", "brtimeline"); + if( showClosed ){ + style_submenu_element("All", "All", "brlist?all"); + style_submenu_element("Open","Open","brlist?open"); + }else if( showAll ){ + style_submenu_element("Closed", "Closed", "brlist?closed"); + style_submenu_element("Open","Open","brlist"); + }else{ + style_submenu_element("All", "All", "brlist?all"); + style_submenu_element("Closed","Closed","brlist?closed"); + } + if( !colorTest ){ + style_submenu_element("Color-Test", "Color-Test", "brlist?colortest"); + }else{ + style_submenu_element("All", "All", "brlist?all"); + } + login_anonymous_available(); +#if 0 + style_sidebox_begin("Nomenclature:", "33%"); + @ <ol> + @ <li> An <div class="sideboxDescribed">%z(href("brlist")) + @ open branch</a></div> is a branch that has one or more + @ <div class="sideboxDescribed">%z(href("leaves"))open leaves.</a></div> + @ The presence of open leaves presumably means + @ that the branch is still being extended with new check-ins.</li> + @ <li> A <div class="sideboxDescribed">%z(href("brlist?closed")) + @ closed branch</a></div> is a branch with only + @ <div class="sideboxDescribed">%z(href("leaves?closed")) + @ closed leaves</a></div>. + @ Closed branches are fixed and do not change (unless they are first + @ reopened).</li> + @ </ol> + style_sidebox_end(); +#endif + + branch_prepare_list_query(&q, brFlags); + cnt = 0; + while( db_step(&q)==SQLITE_ROW ){ + const char *zBr = db_column_text(&q, 0); + if( cnt==0 ){ + if( colorTest ){ + @ <h2>Default background colors for all branches:</h2> + }else if( showClosed ){ + @ <h2>Closed Branches:</h2> + }else if( showAll ){ + @ <h2>All Branches:</h2> + }else{ + @ <h2>Open Branches:</h2> + } + @ <ul> + cnt++; + } + if( colorTest ){ + const char *zColor = hash_color(zBr); + @ <li><span style="background-color: %s(zColor)"> + @ %h(zBr) → %s(zColor)</span></li> + }else{ + @ <li>%z(href("%R/timeline?r=%T&n=200",zBr))%h(zBr)</a></li> } } if( cnt ){ @ </ul> } db_finalize(&q); - @ </ul> - @ <br clear="both"> - @ <script> - @ function xin(id){ - @ } - @ function xout(id){ - @ } - @ </script> style_footer(); } /* ** This routine is called while for each check-in that is rendered by @@ -307,22 +495,22 @@ ** the timeline of a "brlist" page. Add some additional hyperlinks ** to the end of the line. 
*/ static void brtimeline_extra(int rid){ Stmt q; - if( !g.okHistory ) return; - db_prepare(&q, + if( !g.perm.Hyperlink ) return; + db_prepare(&q, "SELECT substr(tagname,5) FROM tagxref, tag" " WHERE tagxref.rid=%d" " AND tagxref.tagid=tag.tagid" " AND tagxref.tagtype>0" " AND tag.tagname GLOB 'sym-*'", rid ); while( db_step(&q)==SQLITE_ROW ){ const char *zTagName = db_column_text(&q, 0); - @ <a href="%s(g.zBaseURL)/timeline?t=%T(zTagName)">[timeline]</a> + @ %z(href("%R/timeline?r=%T&n=200",zTagName))[timeline]</a> } db_finalize(&q); } /* @@ -332,11 +520,11 @@ */ void brtimeline_page(void){ Stmt q; login_check_credentials(); - if( !g.okRead ){ login_needed(); return; } + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Branches"); style_submenu_element("List", "List", "brlist"); login_anonymous_available(); @ <h2>The initial check-in for each branch:</h2> @@ -344,16 +532,9 @@ "%s AND blob.rid IN (SELECT rid FROM tagxref" " WHERE tagtype>0 AND tagid=%d AND srcid!=0)" " ORDER BY event.mtime DESC", timeline_query_for_www(), TAG_BRANCH ); - www_print_timeline(&q, 0, brtimeline_extra); + www_print_timeline(&q, 0, 0, 0, 0, brtimeline_extra); db_finalize(&q); - @ <br clear="both"> - @ <script> - @ function xin(id){ - @ } - @ function xout(id){ - @ } - @ </script> style_footer(); } Index: src/browse.c ================================================================== --- src/browse.c +++ src/browse.c @@ -20,11 +20,11 @@ #include "config.h" #include "browse.h" #include <assert.h> /* -** This is the implemention of the "pathelement(X,N)" SQL function. +** This is the implementation of the "pathelement(X,N)" SQL function. ** ** If X is a unix-like pathname (with "/" separators) and N is an ** integer, then skip the initial N characters of X and return the ** name of the path component that begins on the N+1th character ** (numbered from 0). If the path component is a directory (if @@ -34,11 +34,11 @@ ** ** pathelement('abc/pqr/xyz', 4) -> '/pqr' ** pathelement('abc/pqr', 4) -> 'pqr' ** pathelement('abc/pqr/xyz', 0) -> '/abc' */ -static void pathelementFunc( +void pathelementFunc( sqlite3_context *context, int argc, sqlite3_value **argv ){ const unsigned char *z; @@ -71,19 +71,32 @@ ** There is no hyperlink on the file element of the path. ** ** The computed string is appended to the pOut blob. pOut should ** have already been initialized. */ -void hyperlinked_path(const char *zPath, Blob *pOut){ +void hyperlinked_path( + const char *zPath, /* Path to render */ + Blob *pOut, /* Write into this blob */ + const char *zCI, /* check-in name, or NULL */ + const char *zURI, /* "dir" or "tree" */ + const char *zREx /* Extra query parameters */ +){ int i, j; char *zSep = ""; for(i=0; zPath[i]; i=j){ for(j=i; zPath[j] && zPath[j]!='/'; j++){} - if( zPath[j] && g.okHistory ){ - blob_appendf(pOut, "%s<a href=\"%s/dir?name=%#T\">%#h</a>", - zSep, g.zBaseURL, j, zPath, j-i, &zPath[i]); + if( zPath[j] && g.perm.Hyperlink ){ + if( zCI ){ + char *zLink = href("%R/%s?name=%#T%s&ci=%!S", zURI, j, zPath, zREx,zCI); + blob_appendf(pOut, "%s%z%#h</a>", + zSep, zLink, j-i, &zPath[i]); + }else{ + char *zLink = href("%R/%s?name=%#T%s", zURI, j, zPath, zREx); + blob_appendf(pOut, "%s%z%#h</a>", + zSep, zLink, j-i, &zPath[i]); + } }else{ blob_appendf(pOut, "%s%#h", zSep, j-i, &zPath[i]); } zSep = "/"; while( zPath[j]=='/' ){ j++; } @@ -92,153 +105,1012 @@ /* ** WEBPAGE: dir ** +** Show the files and subdirectories within a single directory of the +** source tree. 
Only files for a single check-in are shown if the ci= +** query parameter is present. If ci= is missing, the union of files +** across all check-ins is shown. +** ** Query parameters: ** -** name=PATH Directory to display. Required. +** name=PATH Directory to display. Optional. Top-level if missing ** ci=LABEL Show only files in this check-in. Optional. +** type=TYPE TYPE=flat: use this display +** TYPE=tree: use the /tree display instead */ void page_dir(void){ - const char *zD = P("name"); + char *zD = fossil_strdup(P("name")); + int nD = zD ? strlen(zD)+1 : 0; int mxLen; int nCol, nRow; int cnt, i; char *zPrefix; Stmt q; const char *zCI = P("ci"); int rid = 0; - Blob content; + char *zUuid = 0; Blob dirname; - Manifest m; + Manifest *pM = 0; const char *zSubdirLink; + int linkTrunk = 1; + int linkTip = 1; + HQuery sURI; + if( strcmp(PD("type","flat"),"tree")==0 ){ page_tree(); return; } login_check_credentials(); - if( !g.okHistory ){ login_needed(); return; } + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } + while( nD>1 && zD[nD-2]=='/' ){ zD[(--nD)-1] = 0; } style_header("File List"); + style_adunit_config(ADUNIT_RIGHT_OK); sqlite3_create_function(g.db, "pathelement", 2, SQLITE_UTF8, 0, pathelementFunc, 0, 0); + url_initialize(&sURI, "dir"); + cgi_query_parameters_to_url(&sURI); /* If the name= parameter is an empty string, make it a NULL pointer */ if( zD && strlen(zD)==0 ){ zD = 0; } - /* If a specific check-in is requested, fetch and parse it. */ - if( zCI && (rid = name_to_rid(zCI))!=0 && content_get(rid, &content) ){ - if( !manifest_parse(&m, &content) || m.type!=CFTYPE_MANIFEST ){ + /* If a specific check-in is requested, fetch and parse it. If the + ** specific check-in does not exist, clear zCI. zCI==0 will cause all + ** files from all check-ins to be displayed. 
+ */ + if( zCI ){ + pM = manifest_get_by_name(zCI, &rid); + if( pM ){ + int trunkRid = symbolic_name_to_rid("tag:trunk", "ci"); + linkTrunk = trunkRid && rid != trunkRid; + linkTip = rid != symbolic_name_to_rid("tip", "ci"); + zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); + }else{ zCI = 0; } } - /* Compute the title of the page */ + /* Compute the title of the page */ blob_zero(&dirname); if( zD ){ blob_append(&dirname, "in directory ", -1); - hyperlinked_path(zD, &dirname); - zPrefix = mprintf("%h/", zD); + hyperlinked_path(zD, &dirname, zCI, "dir", ""); + zPrefix = mprintf("%s/", zD); + style_submenu_element("Top-Level", "Top-Level", "%s", + url_render(&sURI, "name", 0, 0, 0)); }else{ blob_append(&dirname, "in the top-level directory", -1); zPrefix = ""; } + if( linkTrunk ){ + style_submenu_element("Trunk", "Trunk", "%s", + url_render(&sURI, "ci", "trunk", 0, 0)); + } + if( linkTip ){ + style_submenu_element("Tip", "Tip", "%s", + url_render(&sURI, "ci", "tip", 0, 0)); + } if( zCI ){ - char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); - char zShort[20]; - memcpy(zShort, zUuid, 10); - zShort[10] = 0; - @ <h2>Files of check-in [<a href="vinfo?name=%T(zUuid)">%s(zShort)</a>] + @ <h2>Files of check-in [%z(href("vinfo?name=%!S",zUuid))%S(zUuid)</a>] @ %s(blob_str(&dirname))</h2> - zSubdirLink = mprintf("%s/dir?ci=%S&name=%T", g.zTop, zUuid, zPrefix); - if( zD ){ - style_submenu_element("Top", "Top", "%s/dir?ci=%S", g.zTop, zUuid); - style_submenu_element("All", "All", "%s/dir?name=%t", g.zTop, zD); - }else{ - style_submenu_element("All", "All", "%s/dir", g.zBaseURL); + zSubdirLink = mprintf("%R/dir?ci=%!S&name=%T", zUuid, zPrefix); + if( nD==0 ){ + style_submenu_element("File Ages", "File Ages", "%R/fileage?name=%!S", + zUuid); } }else{ @ <h2>The union of all files from all check-ins @ %s(blob_str(&dirname))</h2> - zSubdirLink = mprintf("%s/dir?name=%T", g.zBaseURL, zPrefix); - if( zD ){ - style_submenu_element("Top", "Top", "%s/dir", g.zBaseURL); - style_submenu_element("Tip", "Tip", "%s/dir?name=%t&ci=tip", - g.zBaseURL, zD); - style_submenu_element("Trunk", "Trunk", "%s/dir?name=%t&ci=trunk", - g.zBaseURL,zD); - }else{ - style_submenu_element("Tip", "Tip", "%s/dir?ci=tip", g.zBaseURL); - style_submenu_element("Trunk", "Trunk", "%s/dir?ci=trunk", g.zBaseURL); - } - } + zSubdirLink = mprintf("%R/dir?name=%T", zPrefix); + } + style_submenu_element("All", "All", "%s", + url_render(&sURI, "ci", 0, 0, 0)); + style_submenu_element("Tree-View", "Tree-View", "%s", + url_render(&sURI, "type", "tree", 0, 0)); /* Compute the temporary table "localfiles" containing the names - ** of all files and subdirectories in the zD[] directory. + ** of all files and subdirectories in the zD[] directory. ** ** Subdirectory names begin with "/". This causes them to sort ** first and it also gives us an easy way to distinguish files ** from directories in the loop that follows. 
*/ db_multi_exec( "CREATE TEMP TABLE localfiles(x UNIQUE NOT NULL, u);" - "CREATE TEMP TABLE allfiles(x UNIQUE NOT NULL, u);" ); if( zCI ){ Stmt ins; - int i; - db_prepare(&ins, "INSERT INTO allfiles VALUES(:x, :u)"); - for(i=0; i<m.nFile; i++){ - db_bind_text(&ins, ":x", m.aFile[i].zName); - db_bind_text(&ins, ":u", m.aFile[i].zUuid); - db_step(&ins); - db_reset(&ins); - } - db_finalize(&ins); - }else{ - db_multi_exec( - "INSERT INTO allfiles SELECT name, NULL FROM filename" - ); - } - if( zD ){ - db_multi_exec( - "INSERT OR IGNORE INTO localfiles " - " SELECT pathelement(x,%d), u FROM allfiles" - " WHERE x GLOB '%q/*'", - strlen(zD)+1, zD - ); - }else{ - db_multi_exec( - "INSERT OR IGNORE INTO localfiles " - " SELECT pathelement(x,0), u FROM allfiles" + ManifestFile *pFile; + ManifestFile *pPrev = 0; + int nPrev = 0; + int c; + + db_prepare(&ins, + "INSERT OR IGNORE INTO localfiles VALUES(pathelement(:x,0), :u)" + ); + manifest_file_rewind(pM); + while( (pFile = manifest_file_next(pM,0))!=0 ){ + if( nD>0 + && (fossil_strncmp(pFile->zName, zD, nD-1)!=0 + || pFile->zName[nD-1]!='/') + ){ + continue; + } + if( pPrev + && fossil_strncmp(&pFile->zName[nD],&pPrev->zName[nD],nPrev)==0 + && (pFile->zName[nD+nPrev]==0 || pFile->zName[nD+nPrev]=='/') + ){ + continue; + } + db_bind_text(&ins, ":x", &pFile->zName[nD]); + db_bind_text(&ins, ":u", pFile->zUuid); + db_step(&ins); + db_reset(&ins); + pPrev = pFile; + for(nPrev=0; (c=pPrev->zName[nD+nPrev]) && c!='/'; nPrev++){} + if( c=='/' ) nPrev++; + } + db_finalize(&ins); + }else if( zD ){ + db_multi_exec( + "INSERT OR IGNORE INTO localfiles" + " SELECT pathelement(name,%d), NULL FROM filename" + " WHERE name GLOB '%q/*'", + nD, zD + ); + }else{ + db_multi_exec( + "INSERT OR IGNORE INTO localfiles" + " SELECT pathelement(name,0), NULL FROM filename" ); } /* Generate a multi-column table listing the contents of zD[] ** directory. */ mxLen = db_int(12, "SELECT max(length(x)) FROM localfiles /*scan*/"); cnt = db_int(0, "SELECT count(*) FROM localfiles /*scan*/"); - nCol = 4; + if( mxLen<12 ) mxLen = 12; + nCol = 100/mxLen; + if( nCol<1 ) nCol = 1; + if( nCol>5 ) nCol = 5; nRow = (cnt+nCol-1)/nCol; db_prepare(&q, "SELECT x, u FROM localfiles ORDER BY x /*scan*/"); - @ <table border="0" width="100%%"><tr><td valign="top" width="25%%"> + @ <table class="browser"><tr><td class="browser"><ul class="browser"> i = 0; while( db_step(&q)==SQLITE_ROW ){ const char *zFN; if( i==nRow ){ - @ </td><td valign="top" width="25%%"> + @ </ul></td><td class="browser"><ul class="browser"> i = 0; } i++; zFN = db_column_text(&q, 0); if( zFN[0]=='/' ){ zFN++; - @ <li><a href="%s(zSubdirLink)%T(zFN)">%h(zFN)/</a></li> - }else if( zCI ){ - const char *zUuid = db_column_text(&q, 1); - @ <li><a href="%s(g.zBaseURL)/artifact?name=%s(zUuid)">%h(zFN)</a> + @ <li class="dir">%z(href("%s%T",zSubdirLink,zFN))%h(zFN)</a></li> + }else{ + const char *zLink; + if( zCI ){ + const char *zUuid = db_column_text(&q, 1); + zLink = href("%R/artifact/%!S",zUuid); + }else{ + zLink = href("%R/finfo?name=%T%T",zPrefix,zFN); + } + @ <li class="%z(fileext_class(zFN))">%z(zLink)%h(zFN)</a></li> + } + } + db_finalize(&q); + manifest_destroy(pM); + @ </ul></td></tr></table> + style_footer(); +} + +/* +** Objects used by the "tree" webpage. 
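+**
+** A FileTreeNode represents one file or directory.  A FileTree strings the
+** nodes together, in the order they are rendered, through their pNext
+** pointers.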
+*/ +typedef struct FileTreeNode FileTreeNode; +typedef struct FileTree FileTree; + +/* +** A single line of the file hierarchy +*/ +struct FileTreeNode { + FileTreeNode *pNext; /* Next entry in an ordered list of them all */ + FileTreeNode *pParent; /* Directory containing this entry */ + FileTreeNode *pSibling; /* Next element in the same subdirectory */ + FileTreeNode *pChild; /* List of child nodes */ + FileTreeNode *pLastChild; /* Last child on the pChild list */ + char *zName; /* Name of this entry. The "tail" */ + char *zFullName; /* Full pathname of this entry */ + char *zUuid; /* SHA1 hash of this file. May be NULL. */ + double mtime; /* Modification time for this entry */ + unsigned nFullName; /* Length of zFullName */ + unsigned iLevel; /* Levels of parent directories */ +}; + +/* +** A complete file hierarchy +*/ +struct FileTree { + FileTreeNode *pFirst; /* First line of the list */ + FileTreeNode *pLast; /* Last line of the list */ + FileTreeNode *pLastTop; /* Last top-level node */ +}; + +/* +** Add one or more new FileTreeNodes to the FileTree object so that the +** leaf object zPathname is at the end of the node list. +** +** The caller invokes this routine once for each leaf node (each file +** as opposed to each directory). This routine fills in any missing +** intermediate nodes automatically. +** +** When constructing a list of FileTreeNodes, all entries that have +** a common directory prefix must be added consecutively in order for +** the tree to be constructed properly. +*/ +static void tree_add_node( + FileTree *pTree, /* Tree into which nodes are added */ + const char *zPath, /* The full pathname of file to add */ + const char *zUuid, /* UUID of the file. Might be NULL. */ + double mtime /* Modification time for this entry */ +){ + int i; + FileTreeNode *pParent; /* Parent (directory) of the next node to insert */ + + /* Make pParent point to the most recent ancestor of zPath, or + ** NULL if there are no prior entires that are a container for zPath. + */ + pParent = pTree->pLast; + while( pParent!=0 && + ( strncmp(pParent->zFullName, zPath, pParent->nFullName)!=0 + || zPath[pParent->nFullName]!='/' ) + ){ + pParent = pParent->pParent; + } + i = pParent ? pParent->nFullName+1 : 0; + while( zPath[i] ){ + FileTreeNode *pNew; + int iStart = i; + int nByte; + while( zPath[i] && zPath[i]!='/' ){ i++; } + nByte = sizeof(*pNew) + i + 1; + if( zUuid!=0 && zPath[i]==0 ) nByte += UUID_SIZE+1; + pNew = fossil_malloc( nByte ); + memset(pNew, 0, sizeof(*pNew)); + pNew->zFullName = (char*)&pNew[1]; + memcpy(pNew->zFullName, zPath, i); + pNew->zFullName[i] = 0; + pNew->nFullName = i; + if( zUuid!=0 && zPath[i]==0 ){ + pNew->zUuid = pNew->zFullName + i + 1; + memcpy(pNew->zUuid, zUuid, UUID_SIZE+1); + } + pNew->zName = pNew->zFullName + iStart; + if( pTree->pLast ){ + pTree->pLast->pNext = pNew; + }else{ + pTree->pFirst = pNew; + } + pTree->pLast = pNew; + pNew->pParent = pParent; + if( pParent ){ + if( pParent->pChild ){ + pParent->pLastChild->pSibling = pNew; + }else{ + pParent->pChild = pNew; + } + pNew->iLevel = pParent->iLevel + 1; + pParent->pLastChild = pNew; + }else{ + if( pTree->pLastTop ) pTree->pLastTop->pSibling = pNew; + pTree->pLastTop = pNew; + } + pNew->mtime = mtime; + while( zPath[i]=='/' ){ i++; } + pParent = pNew; + } + while( pParent && pParent->pParent ){ + if( pParent->pParent->mtime < pParent->mtime ){ + pParent->pParent->mtime = pParent->mtime; + } + pParent = pParent->pParent; + } +} + +/* Comparison function for two FileTreeNode objects. 
Sort first by +** mtime (larger numbers first) and then by zName (smaller names first). +** +** Return negative if pLeft<pRight. +** Return positive if pLeft>pRight. +** Return zero if pLeft==pRight. +*/ +static int compareNodes(FileTreeNode *pLeft, FileTreeNode *pRight){ + if( pLeft->mtime>pRight->mtime ) return -1; + if( pLeft->mtime<pRight->mtime ) return +1; + return fossil_stricmp(pLeft->zName, pRight->zName); +} + +/* Merge together two sorted lists of FileTreeNode objects */ +static FileTreeNode *mergeNodes(FileTreeNode *pLeft, FileTreeNode *pRight){ + FileTreeNode *pEnd; + FileTreeNode base; + pEnd = &base; + while( pLeft && pRight ){ + if( compareNodes(pLeft,pRight)<=0 ){ + pEnd = pEnd->pSibling = pLeft; + pLeft = pLeft->pSibling; + }else{ + pEnd = pEnd->pSibling = pRight; + pRight = pRight->pSibling; + } + } + if( pLeft ){ + pEnd->pSibling = pLeft; + }else{ + pEnd->pSibling = pRight; + } + return base.pSibling; +} + +/* Sort a list of FileTreeNode objects in mtime order. */ +static FileTreeNode *sortNodesByMtime(FileTreeNode *p){ + FileTreeNode *a[30]; + FileTreeNode *pX; + int i; + + memset(a, 0, sizeof(a)); + while( p ){ + pX = p; + p = pX->pSibling; + pX->pSibling = 0; + for(i=0; i<count(a)-1 && a[i]!=0; i++){ + pX = mergeNodes(a[i], pX); + a[i] = 0; + } + a[i] = mergeNodes(a[i], pX); + } + pX = 0; + for(i=0; i<count(a); i++){ + pX = mergeNodes(a[i], pX); + } + return pX; +} + +/* Sort an entire FileTreeNode tree by mtime +** +** This routine invalidates the following fields: +** +** FileTreeNode.pLastChild +** FileTreeNode.pNext +** +** Use relinkTree to reconnect the pNext pointers. +*/ +static FileTreeNode *sortTreeByMtime(FileTreeNode *p){ + FileTreeNode *pX; + for(pX=p; pX; pX=pX->pSibling){ + if( pX->pChild ) pX->pChild = sortTreeByMtime(pX->pChild); + } + return sortNodesByMtime(p); +} + +/* Reconstruct the FileTree by reconnecting the FileTreeNode.pNext +** fields in sequential order. +*/ +static void relinkTree(FileTree *pTree, FileTreeNode *pRoot){ + while( pRoot ){ + if( pTree->pLast ){ + pTree->pLast->pNext = pRoot; + }else{ + pTree->pFirst = pRoot; + } + pTree->pLast = pRoot; + if( pRoot->pChild ) relinkTree(pTree, pRoot->pChild); + pRoot = pRoot->pSibling; + } + if( pTree->pLast ) pTree->pLast->pNext = 0; +} + + +/* +** WEBPAGE: tree +** +** Show the files using a tree-view. If the ci= query parameter is present +** then show only the files for the check-in identified. If ci= is omitted, +** then show the union of files over all check-ins. +** +** The type=tree query parameter is required or else the /dir format is +** used. +** +** Query parameters: +** +** type=tree Required to prevent use of /dir format +** name=PATH Directory to display. Optional +** ci=LABEL Show only files in this check-in. Optional. +** re=REGEXP Show only files matching REGEXP. Optional. +** expand Begin with the tree fully expanded. +** nofiles Show directories (folders) only. Omit files. +** mtime Order directory elements by decreasing mtime +*/ +void page_tree(void){ + char *zD = fossil_strdup(P("name")); + int nD = zD ? 
strlen(zD)+1 : 0; + const char *zCI = P("ci"); + int rid = 0; + char *zUuid = 0; + Blob dirname; + Manifest *pM = 0; + double rNow = 0; + char *zNow = 0; + int useMtime = atoi(PD("mtime","0")); + int nFile = 0; /* Number of files (or folders with "nofiles") */ + int linkTrunk = 1; /* include link to "trunk" */ + int linkTip = 1; /* include link to "tip" */ + const char *zRE; /* the value for the re=REGEXP query parameter */ + const char *zObjType; /* "files" by default or "folders" for "nofiles" */ + char *zREx = ""; /* Extra parameters for path hyperlinks */ + ReCompiled *pRE = 0; /* Compiled regular expression */ + FileTreeNode *p; /* One line of the tree */ + FileTree sTree; /* The complete tree of files */ + HQuery sURI; /* Hyperlink */ + int startExpanded; /* True to start out with the tree expanded */ + int showDirOnly; /* Show directories only. Omit files */ + int nDir = 0; /* Number of directories. Used for ID attributes */ + char *zProjectName = db_get("project-name", 0); + + if( strcmp(PD("type","flat"),"flat")==0 ){ page_dir(); return; } + memset(&sTree, 0, sizeof(sTree)); + login_check_credentials(); + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } + while( nD>1 && zD[nD-2]=='/' ){ zD[(--nD)-1] = 0; } + sqlite3_create_function(g.db, "pathelement", 2, SQLITE_UTF8, 0, + pathelementFunc, 0, 0); + url_initialize(&sURI, "tree"); + cgi_query_parameters_to_url(&sURI); + if( PB("nofiles") ){ + showDirOnly = 1; + style_header("Folder Hierarchy"); + }else{ + showDirOnly = 0; + style_header("File Tree"); + } + style_adunit_config(ADUNIT_RIGHT_OK); + if( PB("expand") ){ + startExpanded = 1; + }else{ + startExpanded = 0; + } + + /* If a regular expression is specified, compile it */ + zRE = P("re"); + if( zRE ){ + re_compile(&pRE, zRE, 0); + zREx = mprintf("&re=%T", zRE); + } + + /* If the name= parameter is an empty string, make it a NULL pointer */ + if( zD && strlen(zD)==0 ){ zD = 0; } + + /* If a specific check-in is requested, fetch and parse it. If the + ** specific check-in does not exist, clear zCI. zCI==0 will cause all + ** files from all check-ins to be displayed. 
+ */ + if( zCI ){ + pM = manifest_get_by_name(zCI, &rid); + if( pM ){ + int trunkRid = symbolic_name_to_rid("tag:trunk", "ci"); + linkTrunk = trunkRid && rid != trunkRid; + linkTip = rid != symbolic_name_to_rid("tip", "ci"); + zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); + rNow = db_double(0.0, "SELECT mtime FROM event WHERE objid=%d", rid); + zNow = db_text("", "SELECT datetime(mtime,toLocal())" + " FROM event WHERE objid=%d", rid); + }else{ + zCI = 0; + } + } + if( zCI==0 ){ + rNow = db_double(0.0, "SELECT max(mtime) FROM event"); + zNow = db_text("", "SELECT datetime(max(mtime),toLocal()) FROM event"); + } + + /* Compute the title of the page */ + blob_zero(&dirname); + if( zD ){ + blob_append(&dirname, "within directory ", -1); + hyperlinked_path(zD, &dirname, zCI, "tree", zREx); + if( zRE ) blob_appendf(&dirname, " matching \"%s\"", zRE); + style_submenu_element("Top-Level", "Top-Level", "%s", + url_render(&sURI, "name", 0, 0, 0)); + }else{ + if( zRE ){ + blob_appendf(&dirname, "matching \"%s\"", zRE); + } + } + style_submenu_binary("mtime","Sort By Time","Sort By Filename", 0); + if( zCI ){ + style_submenu_element("All", "All", "%s", + url_render(&sURI, "ci", 0, 0, 0)); + if( nD==0 && !showDirOnly ){ + style_submenu_element("File Ages", "File Ages", "%R/fileage?name=%s", + zUuid); + } + } + if( linkTrunk ){ + style_submenu_element("Trunk", "Trunk", "%s", + url_render(&sURI, "ci", "trunk", 0, 0)); + } + if( linkTip ){ + style_submenu_element("Tip", "Tip", "%s", + url_render(&sURI, "ci", "tip", 0, 0)); + } + style_submenu_element("Flat-View", "Flat-View", "%s", + url_render(&sURI, "type", "flat", 0, 0)); + + /* Compute the file hierarchy. + */ + if( zCI ){ + Stmt q; + compute_fileage(rid, 0); + db_prepare(&q, + "SELECT filename.name, blob.uuid, fileage.mtime\n" + " FROM fileage, filename, blob\n" + " WHERE filename.fnid=fileage.fnid\n" + " AND blob.rid=fileage.fid\n" + " ORDER BY filename.name COLLATE nocase;" + ); + while( db_step(&q)==SQLITE_ROW ){ + const char *zFile = db_column_text(&q,0); + const char *zUuid = db_column_text(&q,1); + double mtime = db_column_double(&q,2); + if( nD>0 && (fossil_strncmp(zFile, zD, nD-1)!=0 || zFile[nD-1]!='/') ){ + continue; + } + if( pRE && re_match(pRE, (const unsigned char*)zFile, -1)==0 ) continue; + tree_add_node(&sTree, zFile, zUuid, mtime); + nFile++; + } + db_finalize(&q); + }else{ + Stmt q; + db_prepare(&q, + "SELECT filename.name, blob.uuid, max(event.mtime)\n" + " FROM filename, mlink, blob, event\n" + " WHERE mlink.fnid=filename.fnid\n" + " AND event.objid=mlink.mid\n" + " AND blob.rid=mlink.fid\n" + " GROUP BY 1 ORDER BY 1 COLLATE nocase"); + while( db_step(&q)==SQLITE_ROW ){ + const char *zName = db_column_text(&q, 0); + const char *zUuid = db_column_text(&q,1); + double mtime = db_column_double(&q,2); + if( nD>0 && (fossil_strncmp(zName, zD, nD-1)!=0 || zName[nD-1]!='/') ){ + continue; + } + if( pRE && re_match(pRE, (const u8*)zName, -1)==0 ) continue; + tree_add_node(&sTree, zName, zUuid, mtime); + nFile++; + } + db_finalize(&q); + } + + if( showDirOnly ){ + for(nFile=0, p=sTree.pFirst; p; p=p->pNext){ + if( p->pChild!=0 && p->nFullName>nD ) nFile++; + } + zObjType = "Folders"; + style_submenu_element("Files","Files","%s", + url_render(&sURI,"nofiles",0,0,0)); + }else{ + zObjType = "Files"; + style_submenu_element("Folders","Folders","%s", + url_render(&sURI,"nofiles","1",0,0)); + } + + if( zCI ){ + @ <h2>%s(zObjType) from + if( sqlite3_strnicmp(zCI, zUuid, (int)strlen(zCI))!=0 ){ + @ "%h(zCI)" + } + @ 
[%z(href("vinfo?name=%!S",zUuid))%S(zUuid)</a>] %s(blob_str(&dirname)) + }else{ + int n = db_int(0, "SELECT count(*) FROM plink"); + @ <h2>%s(zObjType) from all %d(n) check-ins %s(blob_str(&dirname)) + } + if( useMtime ){ + @ sorted by modification time</h2> + }else{ + @ sorted by filename</h2> + } + + + /* Generate tree of lists. + ** + ** Each file and directory is a list element: <li>. Files have class=file + ** and if the filename as the suffix "xyz" the file also has class=file-xyz. + ** Directories have class=dir. The directory specfied by the name= query + ** parameter (or the top-level directory if there is no name= query parameter) + ** adds class=subdir. + ** + ** The <li> element for directories also contains a sublist <ul> + ** for the contents of that directory. + */ + @ <div class="filetree"><ul> + if( nD ){ + @ <li class="dir last"> + }else{ + @ <li class="dir subdir last"> + } + @ <div class="filetreeline"> + @ %z(href("%s",url_render(&sURI,"name",0,0,0)))%h(zProjectName)</a> + if( zNow ){ + @ <div class="filetreeage">%s(zNow)</div> + } + @ </div> + @ <ul> + if( useMtime ){ + p = sortTreeByMtime(sTree.pFirst); + memset(&sTree, 0, sizeof(sTree)); + relinkTree(&sTree, p); + } + for(p=sTree.pFirst, nDir=0; p; p=p->pNext){ + const char *zLastClass = p->pSibling==0 ? " last" : ""; + if( p->pChild ){ + const char *zSubdirClass = p->nFullName==nD-1 ? " subdir" : ""; + @ <li class="dir%s(zSubdirClass)%s(zLastClass)"><div class="filetreeline"> + @ %z(href("%s",url_render(&sURI,"name",p->zFullName,0,0)))%h(p->zName)</a> + if( p->mtime>0.0 ){ + char *zAge = human_readable_age(rNow - p->mtime); + @ <div class="filetreeage">%s(zAge)</div> + } + @ </div> + if( startExpanded || p->nFullName<=nD ){ + @ <ul id="dir%d(nDir)"> + }else{ + @ <ul id="dir%d(nDir)" class="collapsed"> + } + nDir++; + }else if( !showDirOnly ){ + const char *zFileClass = fileext_class(p->zName); + char *zLink; + if( zCI ){ + zLink = href("%R/artifact/%!S",p->zUuid); + }else{ + zLink = href("%R/finfo?name=%T",p->zFullName); + } + @ <li class="%z(zFileClass)%s(zLastClass)"><div class="filetreeline"> + @ %z(zLink)%h(p->zName)</a> + if( p->mtime>0 ){ + char *zAge = human_readable_age(rNow - p->mtime); + @ <div class="filetreeage">%s(zAge)</div> + } + @ </div> + } + if( p->pSibling==0 ){ + int nClose = p->iLevel - (p->pNext ? p->pNext->iLevel : 0); + while( nClose-- > 0 ){ + @ </ul> + } + } + } + @ </ul> + @ </ul></div> + @ <script>(function(){ + @ function isExpanded(ul){ + @ return ul.className==''; + @ } + @ + @ function toggleDir(ul, useInitValue){ + @ if( !useInitValue ){ + @ expandMap[ul.id] = !isExpanded(ul); + @ history.replaceState(expandMap, ''); + @ } + @ ul.className = expandMap[ul.id] ? '' : 'collapsed'; + @ } + @ + @ function toggleAll(tree, useInitValue){ + @ var lists = tree.querySelectorAll('.subdir > ul > li ul'); + @ if( !useInitValue ){ + @ var expand = true; /* Default action: make all sublists visible */ + @ for( var i=0; lists[i]; i++ ){ + @ if( isExpanded(lists[i]) ){ + @ expand = false; /* Any already visible - make them all hidden */ + @ break; + @ } + @ } + @ expandMap = {'*': expand}; + @ history.replaceState(expandMap, ''); + @ } + @ var className = expandMap['*'] ? 
'' : 'collapsed'; + @ for( var i=0; lists[i]; i++ ){ + @ lists[i].className = className; + @ } + @ } + @ + @ function checkState(){ + @ expandMap = history.state || {}; + @ if( '*' in expandMap ) toggleAll(outer_ul, true); + @ for( var id in expandMap ){ + @ if( id!=='*' ) toggleDir(gebi(id), true); + @ } + @ } + @ + @ function belowSubdir(node){ + @ do{ + @ node = node.parentNode; + @ if( node==subdir ) return true; + @ } while( node && node!=outer_ul ); + @ return false; + @ } + @ + @ var history = window.history || {}; + @ if( !history.replaceState ) history.replaceState = function(){}; + @ var outer_ul = document.querySelector('.filetree > ul'); + @ var subdir = outer_ul.querySelector('.subdir'); + @ var expandMap = {}; + @ checkState(); + @ outer_ul.onclick = function(e){ + @ e = e || window.event; + @ var a = e.target || e.srcElement; + @ if( a.nodeName!='A' ) return true; + @ if( a.parentNode.parentNode==subdir ){ + @ toggleAll(outer_ul); + @ return false; + @ } + @ if( !belowSubdir(a) ) return true; + @ var ul = a.parentNode.nextSibling; + @ while( ul && ul.nodeName!='UL' ) ul = ul.nextSibling; + @ if( !ul ) return true; /* This is a file link, not a directory */ + @ toggleDir(ul); + @ return false; + @ } + @ }())</script> + style_footer(); + + /* We could free memory used by sTree here if we needed to. But + ** the process is about to exit, so doing so would not really accomplish + ** anything useful. */ +} + +/* +** Return a CSS class name based on the given filename's extension. +** Result must be freed by the caller. +**/ +const char *fileext_class(const char *zFilename){ + char *zClass; + const char *zExt = strrchr(zFilename, '.'); + int isExt = zExt && zExt!=zFilename && zExt[1]; + int i; + for( i=1; isExt && zExt[i]; i++ ) isExt &= fossil_isalnum(zExt[i]); + if( isExt ){ + zClass = mprintf("file file-%s", zExt+1); + for( i=5; zClass[i]; i++ ) zClass[i] = fossil_tolower(zClass[i]); + }else{ + zClass = mprintf("file"); + } + return zClass; +} + +/* +** SQL used to compute the age of all files in check-in :ckin whose +** names match :glob +*/ +static const char zComputeFileAgeSetup[] = +@ CREATE TABLE IF NOT EXISTS temp.fileage( +@ fnid INTEGER PRIMARY KEY, +@ fid INTEGER, +@ mid INTEGER, +@ mtime DATETIME, +@ pathname TEXT +@ ); +@ CREATE VIRTUAL TABLE IF NOT EXISTS temp.foci USING files_of_checkin; +; + +static const char zComputeFileAgeRun[] = +@ WITH RECURSIVE +@ ckin(x,m) AS (SELECT objid, mtime FROM event WHERE objid=:ckin +@ UNION +@ SELECT plink.pid, event.mtime +@ FROM ckin, plink, event +@ WHERE plink.cid=ckin.x AND event.objid=plink.pid +@ ORDER BY 2 DESC) +@ INSERT OR IGNORE INTO fileage(fnid, fid, mid, mtime, pathname) +@ SELECT filename.fnid, mlink.fid, mlink.mid, event.mtime, filename.name +@ FROM foci, filename, blob, mlink, event +@ WHERE foci.checkinID=:ckin +@ AND foci.filename GLOB :glob +@ AND filename.name=foci.filename +@ AND blob.uuid=foci.uuid +@ AND mlink.fid=blob.rid +@ AND mlink.fid!=mlink.pid +@ AND mlink.mid IN (SELECT x FROM ckin) +@ AND event.objid=mlink.mid +@ ORDER BY event.mtime ASC; +; + +/* +** Look at all file containing in the version "vid". Construct a +** temporary table named "fileage" that contains the file-id for each +** files, the pathname, the check-in where the file was added, and the +** mtime on that check-in. If zGlob and *zGlob then only files matching +** the given glob are computed. 
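+**
+** Rough usage sketch (mirroring test_fileage_cmd() and fileage_page()
+** below): populate the table, then query it like any other temp table.
+** Passing 0 or "" for zGlob selects all files:
+**
+**     compute_fileage(rid, 0);
+**     db_prepare(&q, "SELECT pathname, mtime FROM fileage ORDER BY mtime");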
+*/ +int compute_fileage(int vid, const char* zGlob){ + Stmt q; + db_multi_exec(zComputeFileAgeSetup /*works-like:"constant"*/); + db_prepare(&q, zComputeFileAgeRun /*works-like:"constant"*/); + db_bind_int(&q, ":ckin", vid); + db_bind_text(&q, ":glob", zGlob && zGlob[0] ? zGlob : "*"); + db_exec(&q); + db_finalize(&q); + return 0; +} + +/* +** Render the number of days in rAge as a more human-readable time span. +** Different units (seconds, minutes, hours, days, months, years) are +** selected depending on the magnitude of rAge. +** +** The string returned is obtained from fossil_malloc() and should be +** freed by the caller. +*/ +char *human_readable_age(double rAge){ + if( rAge*86400.0<120 ){ + if( rAge*86400.0<1.0 ){ + return mprintf("current"); }else{ - @ <li><a href="%s(g.zBaseURL)/finfo?name=%T(zPrefix)%T(zFN)">%h(zFN)</a> + return mprintf("%d seconds", (int)(rAge*86400.0)); } + }else if( rAge*1440.0<90 ){ + return mprintf("%.1f minutes", rAge*1440.0); + }else if( rAge*24.0<36 ){ + return mprintf("%.1f hours", rAge*24.0); + }else if( rAge<365.0 ){ + return mprintf("%.1f days", rAge); + }else{ + return mprintf("%.2f years", rAge/365.0); + } +} + +/* +** COMMAND: test-fileage +** +** Usage: %fossil test-fileage CHECKIN +*/ +void test_fileage_cmd(void){ + int mid; + Stmt q; + const char *zGlob = find_option("glob",0,1); + db_find_and_open_repository(0,0); + verify_all_options(); + if( g.argc!=3 ) usage("test-fileage CHECKIN"); + mid = name_to_typed_rid(g.argv[2],"ci"); + compute_fileage(mid, zGlob); + db_prepare(&q, + "SELECT fid, mid, julianday('now') - mtime, pathname" + " FROM fileage" + ); + while( db_step(&q)==SQLITE_ROW ){ + char *zAge = human_readable_age(db_column_double(&q,2)); + fossil_print("%8d %8d %16s %s\n", + db_column_int(&q,0), + db_column_int(&q,1), + zAge, + db_column_text(&q,3)); + fossil_free(zAge); } db_finalize(&q); - @ </td></tr></table> +} + +/* +** WEBPAGE: fileage +** +** Show all files in a single check-in (identified by the name= query +** parameter) in order of increasing age. +** +** Parameters: +** name=VERSION Selects the check-in version (default=tip). +** glob=STRING Only shows files matching this glob pattern +** (e.g. *.c or *.txt). 
+** showid Show RID values for debugging +*/ +void fileage_page(void){ + int rid; + const char *zName; + const char *zGlob; + const char *zUuid; + const char *zNow; /* Time of check-in */ + int showId = PB("showid"); + Stmt q1, q2; + double baseTime; + login_check_credentials(); + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } + zName = P("name"); + if( zName==0 ) zName = "tip"; + rid = symbolic_name_to_rid(zName, "ci"); + if( rid==0 ){ + fossil_fatal("not a valid check-in: %s", zName); + } + zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); + baseTime = db_double(0.0,"SELECT mtime FROM event WHERE objid=%d", rid); + zNow = db_text("", "SELECT datetime(mtime,toLocal()) FROM event" + " WHERE objid=%d", rid); + style_submenu_element("Tree-View", "Tree-View", + "%R/tree?ci=%T&mtime=1&type=tree", + zName); + style_header("File Ages"); + zGlob = P("glob"); + compute_fileage(rid,zGlob); + db_multi_exec("CREATE INDEX fileage_ix1 ON fileage(mid,pathname);"); + + @ <h2>Files in + @ %z(href("%R/info/%!S",zUuid))[%S(zUuid)]</a> + if( zGlob && zGlob[0] ){ + @ that match "%h(zGlob)" and + } + @ ordered by check-in time</h2> + @ + @ <p>Times are relative to the check-in time for + @ %z(href("%R/ci/%!S",zUuid))[%S(zUuid)]</a> which is + @ %z(href("%R/timeline?c=%t",zNow))%s(zNow)</a>.</p> + @ + @ <div class='fileage'><table> + @ <tr><th>Time</th><th>Files</th><th>Check-in</th></tr> + db_prepare(&q1, + "SELECT event.mtime, event.objid, blob.uuid,\n" + " coalesce(event.ecomment,event.comment),\n" + " coalesce(event.euser,event.user),\n" + " coalesce((SELECT value FROM tagxref\n" + " WHERE tagtype>0 AND tagid=%d\n" + " AND rid=event.objid),'trunk')\n" + " FROM event, blob\n" + " WHERE event.objid IN (SELECT mid FROM fileage)\n" + " AND blob.rid=event.objid\n" + " ORDER BY event.mtime DESC;", + TAG_BRANCH + ); + db_prepare(&q2, + "SELECT blob.uuid, filename.name, fileage.fid\n" + " FROM fileage, blob, filename\n" + " WHERE fileage.mid=:mid AND filename.fnid=fileage.fnid" + " AND blob.rid=fileage.fid;" + ); + while( db_step(&q1)==SQLITE_ROW ){ + double age = baseTime - db_column_double(&q1, 0); + int mid = db_column_int(&q1, 1); + const char *zUuid = db_column_text(&q1, 2); + const char *zComment = db_column_text(&q1, 3); + const char *zUser = db_column_text(&q1, 4); + const char *zBranch = db_column_text(&q1, 5); + char *zAge = human_readable_age(age); + @ <tr><td>%s(zAge)</td> + @ <td> + db_bind_int(&q2, ":mid", mid); + while( db_step(&q2)==SQLITE_ROW ){ + const char *zFUuid = db_column_text(&q2,0); + const char *zFile = db_column_text(&q2,1); + int fid = db_column_int(&q2,2); + if( showId ){ + @ %z(href("%R/artifact/%!S",zFUuid))%h(zFile)</a> (%d(fid))<br> + }else{ + @ %z(href("%R/artifact/%!S",zFUuid))%h(zFile)</a><br> + } + } + db_reset(&q2); + @ </td> + @ <td> + @ %z(href("%R/info/%!S",zUuid))[%S(zUuid)]</a> + if( showId ){ + @ (%d(mid)) + } + @ %W(zComment) (user: + @ %z(href("%R/timeline?u=%t&c=%!S&nd&n=200",zUser,zUuid))%h(zUser)</a>, + @ branch: + @ %z(href("%R/timeline?r=%t&c=%!S&nd&n=200",zBranch,zUuid))%h(zBranch)</a>) + @ </td></tr> + @ + fossil_free(zAge); + } + @ </table></div> + db_finalize(&q1); + db_finalize(&q2); style_footer(); } ADDED src/builtin.c Index: src/builtin.c ================================================================== --- src/builtin.c +++ src/builtin.c @@ -0,0 +1,89 @@ +/* +** Copyright (c) 2014 D. 
Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains built-in string and BLOB resources packaged as +** byte arrays. +*/ +#include "config.h" +#include "builtin.h" +#include <assert.h> + +/* +** The resources provided by this file are packaged by the "mkbuiltin.c" +** utility program during the built process and stored in the +** builtin_data.h file. Include that information here: +*/ +#include "builtin_data.h" + +/* +** Return a pointer to built-in content +*/ +const unsigned char *builtin_file(const char *zFilename, int *piSize){ + int lwr, upr, i, c; + lwr = 0; + upr = sizeof(aBuiltinFiles)/sizeof(aBuiltinFiles[0]) - 1; + while( upr>=lwr ){ + i = (upr+lwr)/2; + c = strcmp(aBuiltinFiles[i].zName,zFilename); + if( c<0 ){ + lwr = i+1; + }else if( c>0 ){ + upr = i-1; + }else{ + if( piSize ) *piSize = aBuiltinFiles[i].nByte; + return aBuiltinFiles[i].pData; + } + } + if( piSize ) *piSize = 0; + return 0; +} +const char *builtin_text(const char *zFilename){ + return (char*)builtin_file(zFilename, 0); +} + +/* +** COMMAND: test-builtin-list +** +** List the names and sizes of all built-in resources +*/ +void test_builtin_list(void){ + int i; + for(i=0; i<sizeof(aBuiltinFiles)/sizeof(aBuiltinFiles[0]); i++){ + fossil_print("%-30s %6d\n", aBuiltinFiles[i].zName,aBuiltinFiles[i].nByte); + } +} + +/* +** COMMAND: test-builtin-get +** +** Usage: %fossil test-builtin-get NAME ?OUTPUT-FILE? +*/ +void test_builtin_get(void){ + const unsigned char *pData; + int nByte; + Blob x; + if( g.argc!=3 && g.argc!=4 ){ + usage("NAME ?OUTPUT-FILE?"); + } + pData = builtin_file(g.argv[2], &nByte); + if( pData==0 ){ + fossil_fatal("no such built-in file: [%s]", g.argv[2]); + } + blob_init(&x, (const char*)pData, nByte); + blob_write_to_file(&x, g.argc==4 ? g.argv[3] : "-"); + blob_reset(&x); +} ADDED src/bundle.c Index: src/bundle.c ================================================================== --- src/bundle.c +++ src/bundle.c @@ -0,0 +1,818 @@ +/* +** Copyright (c) 2014 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code used to implement and manage a "bundle" file. +*/ +#include "config.h" +#include "bundle.h" +#include <assert.h> + +/* +** SQL code used to initialize the schema of a bundle. +** +** The bblob.delta field can be an integer, a text string, or NULL. +** If an integer, then the corresponding blobid is the delta basis. 
+** If a text string, then that string is a SHA1 hash for the delta +** basis, which is presumably in the master repository. If NULL, then +** data contains contain without delta compression. +*/ +static const char zBundleInit[] = +@ CREATE TABLE IF NOT EXISTS "%w".bconfig( +@ bcname TEXT, +@ bcvalue ANY +@ ); +@ CREATE TABLE IF NOT EXISTS "%w".bblob( +@ blobid INTEGER PRIMARY KEY, -- Blob ID +@ uuid TEXT NOT NULL, -- SHA1 hash of expanded blob +@ sz INT NOT NULL, -- Size of blob after expansion +@ delta ANY, -- Delta compression basis, or NULL +@ notes TEXT, -- Description of content +@ data BLOB -- compressed content +@ ); +; + +/* +** Attach a bundle file to the current database connection using the +** attachment name zBName. +*/ +static void bundle_attach_file( + const char *zFile, /* Name of the file that contains the bundle */ + const char *zBName, /* Attachment name */ + int doInit /* Initialize a new bundle, if true */ +){ + int rc; + char *zErrMsg = 0; + char *zSql; + if( !doInit && file_size(zFile)<0 ){ + fossil_fatal("no such file: %s", zFile); + } + assert( g.db ); + zSql = sqlite3_mprintf("ATTACH %Q AS %Q", zFile, zBName); + if( zSql==0 ) fossil_fatal("out of memory"); + rc = sqlite3_exec(g.db, zSql, 0, 0, &zErrMsg); + sqlite3_free(zSql); + if( rc!=SQLITE_OK || zErrMsg ){ + if( zErrMsg==0 ) zErrMsg = (char*)sqlite3_errmsg(g.db); + fossil_fatal("not a valid bundle: %s", zFile); + } + if( doInit ){ + db_multi_exec(zBundleInit /*works-like:"%w%w"*/, zBName, zBName); + }else{ + sqlite3_stmt *pStmt; + zSql = sqlite3_mprintf("SELECT bcname, bcvalue" + " FROM \"%w\".bconfig", zBName); + if( zSql==0 ) fossil_fatal("out of memory"); + rc = sqlite3_prepare(g.db, zSql, -1, &pStmt, 0); + if( rc ) fossil_fatal("not a valid bundle: %s", zFile); + sqlite3_free(zSql); + sqlite3_finalize(pStmt); + zSql = sqlite3_mprintf("SELECT blobid, uuid, sz, delta, notes, data" + " FROM \"%w\".bblob", zBName); + if( zSql==0 ) fossil_fatal("out of memory"); + rc = sqlite3_prepare(g.db, zSql, -1, &pStmt, 0); + if( rc ) fossil_fatal("not a valid bundle: %s", zFile); + sqlite3_free(zSql); + sqlite3_finalize(pStmt); + } +} + +/* +** fossil bundle ls BUNDLE ?OPTIONS? +** +** Display the content of a bundle in human-readable form. 
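+**
+** Options:
+**    -l|--details       Also show each artifact's blobid, its delta basis,
+**                       and its compressed and uncompressed sizes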
+*/ +static void bundle_ls_cmd(void){ + Stmt q; + sqlite3_int64 sumSz = 0; + sqlite3_int64 sumLen = 0; + int bDetails = find_option("details","l",0)!=0; + verify_all_options(); + if( g.argc!=4 ) usage("ls BUNDLE ?OPTIONS?"); + bundle_attach_file(g.argv[3], "b1", 0); + db_prepare(&q, + "SELECT bcname, bcvalue FROM bconfig" + " WHERE typeof(bcvalue)='text'" + " AND bcvalue NOT GLOB char(0x2a,0x0a,0x2a);" + ); + while( db_step(&q)==SQLITE_ROW ){ + fossil_print("%s: %s\n", db_column_text(&q,0), db_column_text(&q,1)); + } + db_finalize(&q); + fossil_print("%.78c\n",'-'); + if( bDetails ){ + db_prepare(&q, + "SELECT blobid, substr(uuid,1,10), coalesce(substr(delta,1,10),'')," + " sz, length(data), notes" + " FROM bblob" + ); + while( db_step(&q)==SQLITE_ROW ){ + fossil_print("%4d %10s %10s %8d %8d %s\n", + db_column_int(&q,0), + db_column_text(&q,1), + db_column_text(&q,2), + db_column_int(&q,3), + db_column_int(&q,4), + db_column_text(&q,5)); + sumSz += db_column_int(&q,3); + sumLen += db_column_int(&q,4); + } + db_finalize(&q); + fossil_print("%27s %8lld %8lld\n", "Total:", sumSz, sumLen); + }else{ + db_prepare(&q, + "SELECT substr(uuid,1,16), notes FROM bblob" + ); + while( db_step(&q)==SQLITE_ROW ){ + fossil_print("%16s %s\n", + db_column_text(&q,0), + db_column_text(&q,1)); + } + db_finalize(&q); + } +} + +/* +** Implement the "fossil bundle append BUNDLE FILE..." command. Add +** the named files into the BUNDLE. Create the BUNDLE if it does not +** alraedy exist. +*/ +static void bundle_append_cmd(void){ + Blob content, hash; + int i; + Stmt q; + + verify_all_options(); + bundle_attach_file(g.argv[3], "b1", 1); + db_prepare(&q, + "INSERT INTO bblob(blobid, uuid, sz, delta, data, notes) " + "VALUES(NULL, $uuid, $sz, NULL, $data, $filename)"); + db_begin_transaction(); + for(i=4; i<g.argc; i++){ + int sz; + blob_read_from_file(&content, g.argv[i]); + sz = blob_size(&content); + sha1sum_blob(&content, &hash); + blob_compress(&content, &content); + db_bind_text(&q, "$uuid", blob_str(&hash)); + db_bind_int(&q, "$sz", sz); + db_bind_blob(&q, "$data", &content); + db_bind_text(&q, "$filename", g.argv[i]); + db_step(&q); + db_reset(&q); + blob_reset(&content); + blob_reset(&hash); + } + db_end_transaction(0); + db_finalize(&q); +} + +/* +** Identify a subsection of the check-in tree using command-line switches. +** There must be one of the following switch available: +** +** --branch BRANCHNAME All check-ins on the most recent +** instance of BRANCHNAME +** --from TAG1 [--to TAG2] Check-in TAG1 and all primary descendants +** up to and including TAG2 +** --checkin TAG Check-in TAG only +** +** Store the RIDs for all applicable check-ins in the zTab table that +** should already exist. Invoke fossil_fatal() if any kind of error is +** seen. +*/ +void subtree_from_arguments(const char *zTab){ + const char *zBr; + const char *zFrom; + const char *zTo; + const char *zCkin; + int rid = 0, endRid; + + zBr = find_option("branch",0,1); + zFrom = find_option("from",0,1); + zTo = find_option("to",0,1); + zCkin = find_option("checkin",0,1); + if( zCkin ){ + if( zFrom ) fossil_fatal("cannot use both --checkin and --from"); + if( zBr ) fossil_fatal("cannot use both --checkin and --branch"); + rid = symbolic_name_to_rid(zCkin, "ci"); + endRid = rid; + }else{ + endRid = zTo ? 
name_to_typed_rid(zTo, "ci") : 0; + } + if( zFrom ){ + rid = name_to_typed_rid(zFrom, "ci"); + }else if( zBr ){ + rid = name_to_typed_rid(zBr, "br"); + }else if( zCkin==0 ){ + fossil_fatal("need one of: --branch, --from, --checkin"); + } + db_multi_exec("INSERT OR IGNORE INTO \"%w\" VALUES(%d)", zTab, rid); + if( rid!=endRid ){ + Blob sql; + blob_zero(&sql); + blob_appendf(&sql, + "WITH RECURSIVE child(rid) AS (VALUES(%d) UNION ALL " + " SELECT cid FROM plink, child" + " WHERE plink.pid=child.rid" + " AND plink.isPrim", rid); + if( endRid>0 ){ + double endTime = db_double(0.0, "SELECT mtime FROM event WHERE objid=%d", + endRid); + blob_appendf(&sql, + " AND child.rid!=%d" + " AND (SELECT mtime FROM event WHERE objid=plink.cid)<=%.17g", + endRid, endTime + ); + } + if( zBr ){ + blob_appendf(&sql, + " AND EXISTS(SELECT 1 FROM tagxref" + " WHERE tagid=%d AND tagtype>0" + " AND value=%Q and rid=plink.cid)", + TAG_BRANCH, zBr); + } + blob_appendf(&sql, ") INSERT OR IGNORE INTO \"%w\" SELECT rid FROM child;", + zTab); + db_multi_exec("%s", blob_str(&sql)/*safe-for-%s*/); + } +} + +/* +** COMMAND: test-subtree +** +** Usage: %fossil test-subtree ?OPTIONS? +** +** Show the subset of check-ins that match the supplied options. This +** command is used to test the subtree_from_options() subroutine in the +** implementation and does not really have any other practical use that +** we know of. +** +** Options: +** --branch BRANCH Include only check-ins on BRANCH +** --from TAG Start the subtree at TAG +** --to TAG End the subtree at TAG +** --checkin TAG The subtree is the single check-in TAG +** --all Include FILE and TAG artifacts +** --exclusive Include FILES exclusively on check-ins +*/ +void test_subtree_cmd(void){ + int bAll = find_option("all",0,0)!=0; + int bExcl = find_option("exclusive",0,0)!=0; + db_find_and_open_repository(0,0); + db_begin_transaction(); + db_multi_exec("CREATE TEMP TABLE tobundle(rid INTEGER PRIMARY KEY);"); + subtree_from_arguments("tobundle"); + verify_all_options(); + if( bAll ) find_checkin_associates("tobundle",bExcl); + describe_artifacts_to_stdout("IN tobundle", 0); + db_end_transaction(1); +} + +/* fossil bundle export BUNDLE ?OPTIONS? +** +** OPTIONS: +** --branch BRANCH --from TAG --to TAG +** --checkin TAG +** --standalone +*/ +static void bundle_export_cmd(void){ + int bStandalone = find_option("standalone",0,0)!=0; + int mnToBundle; /* Minimum RID in the bundle */ + Stmt q; + + /* Decode the arguments (like --branch) that specify which artifacts + ** should be in the bundle */ + db_multi_exec("CREATE TEMP TABLE tobundle(rid INTEGER PRIMARY KEY);"); + subtree_from_arguments("tobundle"); + find_checkin_associates("tobundle", 0); + verify_all_options(); + describe_artifacts("IN tobundle"); + + if( g.argc!=4 ) usage("export BUNDLE ?OPTIONS?"); + /* Create the new bundle */ + bundle_attach_file(g.argv[3], "b1", 1); + db_begin_transaction(); + + /* Add 'mtime' and 'project-code' entries to the bconfig table */ + db_multi_exec( + "INSERT INTO bconfig(bcname,bcvalue)" + " VALUES('mtime',datetime('now'));" + ); + db_multi_exec( + "INSERT INTO bconfig(bcname,bcvalue)" + " SELECT name, value FROM config" + " WHERE name IN ('project-code');" + ); + + /* Directly copy content from the repository into the bundle as long + ** as the repository content is a delta from some other artifact that + ** is also in the bundle. 
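+  ** In that case the stored delta encoding can be reused verbatim, with
+  ** delta.srcid carried over as the basis; artifacts whose basis lies
+  ** outside the bundle are re-encoded by the loop further below.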
+ */ + db_multi_exec( + "REPLACE INTO bblob(blobid,uuid,sz,delta,data,notes) " + " SELECT" + " tobundle.rid," + " blob.uuid," + " blob.size," + " delta.srcid," + " blob.content," + " (SELECT summary FROM description WHERE rid=blob.rid)" + " FROM tobundle, blob, delta" + " WHERE blob.rid=tobundle.rid" + " AND delta.rid=tobundle.rid" + " AND delta.srcid IN tobundle;" + ); + + /* For all the remaining artifacts, we need to construct their deltas + ** manually. + */ + mnToBundle = db_int(0,"SELECT min(rid) FROM tobundle"); + db_prepare(&q, + "SELECT rid FROM tobundle" + " WHERE rid NOT IN (SELECT blobid FROM bblob)" + " ORDER BY +rid;" + ); + while( db_step(&q)==SQLITE_ROW ){ + Blob content; + int rid = db_column_int(&q,0); + int deltaFrom = 0; + + /* Get the raw, uncompressed content of the artifact into content */ + content_get(rid, &content); + + /* Try to find another artifact, not within the bundle, that is a + ** plausible candidate for being a delta basis for the content. Set + ** deltaFrom to the RID of that other artifact. Leave deltaFrom set + ** to zero if the content should not be delta-compressed + */ + if( !bStandalone ){ + if( db_exists("SELECT 1 FROM plink WHERE cid=%d",rid) ){ + deltaFrom = db_int(0, + "SELECT max(cid) FROM plink" + " WHERE cid<%d", mnToBundle); + }else{ + deltaFrom = db_int(0, + "SELECT max(fid) FROM mlink" + " WHERE fnid=(SELECT fnid FROM mlink WHERE fid=%d)" + " AND fid<%d", rid, mnToBundle); + } + } + + /* Try to insert the insert the artifact as a delta + */ + if( deltaFrom ){ + Blob basis, delta; + content_get(deltaFrom, &basis); + blob_delta_create(&basis, &content, &delta); + if( blob_size(&delta)>0.9*blob_size(&content) ){ + deltaFrom = 0; + }else{ + Stmt ins; + blob_compress(&delta, &delta); + db_prepare(&ins, + "REPLACE INTO bblob(blobid,uuid,sz,delta,data,notes)" + " SELECT %d, uuid, size, (SELECT uuid FROM blob WHERE rid=%d)," + " :delta, (SELECT summary FROM description WHERE rid=blob.rid)" + " FROM blob WHERE rid=%d", rid, deltaFrom, rid); + db_bind_blob(&ins, ":delta", &delta); + db_step(&ins); + db_finalize(&ins); + } + blob_reset(&basis); + blob_reset(&delta); + } + + /* If unable to insert the artifact as a delta, insert full-text */ + if( deltaFrom==0 ){ + Stmt ins; + blob_compress(&content, &content); + db_prepare(&ins, + "REPLACE INTO bblob(blobid,uuid,sz,delta,data,notes)" + " SELECT rid, uuid, size, NULL, :content," + " (SELECT summary FROM description WHERE rid=blob.rid)" + " FROM blob WHERE rid=%d", rid); + db_bind_blob(&ins, ":content", &content); + db_step(&ins); + db_finalize(&ins); + } + blob_reset(&content); + } + db_finalize(&q); + + db_end_transaction(0); +} + + +/* +** There is a TEMP table bix(blobid,delta) containing a set of purgeitems +** that need to be transferred to the BLOB table. This routine does +** all items that have srcid=iSrc. The pBasis blob holds the content +** of the source document if iSrc>0. 
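+**
+** The routine recurses: once an artifact has been reconstructed it serves
+** as the basis for any bundle entries that are deltas against it.  The
+** top-level call, made from bundle_import_cmd() below, is simply:
+**
+**     bundle_import_elements(0, 0, isPriv);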
+*/ +static void bundle_import_elements(int iSrc, Blob *pBasis, int isPriv){ + Stmt q; + static Bag busy; + assert( pBasis!=0 || iSrc==0 ); + if( iSrc>0 ){ + if( bag_find(&busy, iSrc) ){ + fossil_fatal("delta loop while uncompressing bundle artifacts"); + } + bag_insert(&busy, iSrc); + } + db_prepare(&q, + "SELECT uuid, data, bblob.delta, bix.blobid" + " FROM bix, bblob" + " WHERE bix.delta=%d" + " AND bix.blobid=bblob.blobid;", + iSrc + ); + while( db_step(&q)==SQLITE_ROW ){ + Blob h1, h2, c1, c2; + int rid; + blob_zero(&h1); + db_column_blob(&q, 0, &h1); + blob_zero(&c1); + db_column_blob(&q, 1, &c1); + blob_uncompress(&c1, &c1); + blob_zero(&c2); + if( db_column_type(&q,2)==SQLITE_TEXT && db_column_bytes(&q,2)==40 ){ + Blob basis; + rid = db_int(0,"SELECT rid FROM blob WHERE uuid=%Q", + db_column_text(&q,2)); + content_get(rid, &basis); + blob_delta_apply(&basis, &c1, &c2); + blob_reset(&basis); + blob_reset(&c1); + }else if( pBasis ){ + blob_delta_apply(pBasis, &c1, &c2); + blob_reset(&c1); + }else{ + c2 = c1; + } + sha1sum_blob(&c2, &h2); + if( blob_compare(&h1, &h2)!=0 ){ + fossil_fatal("SHA1 hash mismatch - wanted %s, got %s", + blob_str(&h1), blob_str(&h2)); + } + blob_reset(&h2); + rid = content_put_ex(&c2, blob_str(&h1), 0, 0, isPriv); + if( rid==0 ){ + fossil_fatal("%s", g.zErrMsg); + }else{ + if( !isPriv ) content_make_public(rid); + content_get(rid, &c1); + manifest_crosslink(rid, &c1, MC_NO_ERRORS); + db_multi_exec("INSERT INTO got(rid) VALUES(%d)",rid); + } + bundle_import_elements(db_column_int(&q,3), &c2, isPriv); + blob_reset(&c2); + } + db_finalize(&q); + if( iSrc>0 ) bag_remove(&busy, iSrc); +} + +/* +** Extract an item from content from the bundle +*/ +static void bundle_extract_item( + int blobid, /* ID of the item to extract */ + Blob *pOut /* Write the content into this blob */ +){ + Stmt q; + Blob x, basis, h1, h2; + static Bag busy; + + db_prepare(&q, "SELECT uuid, delta, data FROM bblob" + " WHERE blobid=%d", blobid); + if( db_step(&q)!=SQLITE_ROW ){ + db_finalize(&q); + fossil_fatal("no such item: %d", blobid); + } + if( bag_find(&busy, blobid) ) fossil_fatal("delta loop"); + blob_zero(&x); + db_column_blob(&q, 2, &x); + blob_uncompress(&x, &x); + if( db_column_type(&q,1)==SQLITE_INTEGER ){ + bundle_extract_item(db_column_int(&q,1), &basis); + blob_delta_apply(&basis, &x, pOut); + blob_reset(&basis); + blob_reset(&x); + }else if( db_column_type(&q,1)==SQLITE_TEXT ){ + int rid = db_int(0, "SELECT rid FROM blob WHERE uuid=%Q", + db_column_text(&q,1)); + if( rid==0 ){ + fossil_fatal("cannot find delta basis %s", db_column_text(&q,1)); + } + content_get(rid, &basis); + db_column_blob(&q, 2, &x); + blob_delta_apply(&basis, &x, pOut); + blob_reset(&basis); + blob_reset(&x); + }else{ + *pOut = x; + } + blob_zero(&h1); + db_column_blob(&q, 0, &h1); + sha1sum_blob(pOut, &h2); + if( blob_compare(&h1, &h2)!=0 ){ + fossil_fatal("SHA1 hash mismatch - wanted %s, got %s", + blob_str(&h1), blob_str(&h2)); + } + blob_reset(&h1); + blob_reset(&h2); + bag_remove(&busy, blobid); + db_finalize(&q); +} + +/* fossil bundle cat BUNDLE UUID... 
+** +** Write elements of a bundle on standard output +*/ +static void bundle_cat_cmd(void){ + int i; + Blob x; + verify_all_options(); + if( g.argc<5 ) usage("cat BUNDLE UUID..."); + bundle_attach_file(g.argv[3], "b1", 1); + blob_zero(&x); + for(i=4; i<g.argc; i++){ + int blobid = db_int(0,"SELECT blobid FROM bblob WHERE uuid LIKE '%q%%'", + g.argv[i]); + if( blobid==0 ){ + fossil_fatal("no such artifact in bundle: %s", g.argv[i]); + } + bundle_extract_item(blobid, &x); + blob_write_to_file(&x, "-"); + blob_reset(&x); + } +} + + +/* fossil bundle import BUNDLE ?OPTIONS? +** +** Attempt to import the changes contained in BUNDLE. Make the change +** private so that they do not sync. +** +** OPTIONS: +** --force Import even if the project-code does not match +** --publish Imported changes are not private +*/ +static void bundle_import_cmd(void){ + int forceFlag = find_option("force","f",0)!=0; + int isPriv = find_option("publish",0,0)==0; + char *zMissingDeltas; + verify_all_options(); + if ( g.argc!=4 ) usage("import BUNDLE ?OPTIONS?"); + bundle_attach_file(g.argv[3], "b1", 1); + + /* Only import a bundle that was generated from a repo with the same + ** project code, unless the --force flag is true */ + if( !forceFlag ){ + if( !db_exists("SELECT 1 FROM config, bconfig" + " WHERE config.name='project-code'" + " AND bconfig.bcname='project-code'" + " AND config.value=bconfig.bcvalue;") + ){ + fossil_fatal("project-code in the bundle does not match the " + "repository project code. (override with --force)."); + } + } + + /* If the bundle contains deltas with a basis that is external to the + ** bundle and those external basis files are missing from the local + ** repo, then the delta encodings cannot be decoded and the bundle cannot + ** be extracted. */ + zMissingDeltas = db_text(0, + "SELECT group_concat(substr(delta,1,10),' ')" + " FROM bblob" + " WHERE typeof(delta)='text' AND length(delta)=40" + " AND NOT EXISTS(SELECT 1 FROM blob WHERE uuid=bblob.delta)"); + if( zMissingDeltas && zMissingDeltas[0] ){ + fossil_fatal("delta basis artifacts not found in repository: %s", + zMissingDeltas); + } + + db_begin_transaction(); + db_multi_exec( + "CREATE TEMP TABLE bix(" + " blobid INTEGER PRIMARY KEY," + " delta INTEGER" + ");" + "CREATE INDEX bixdelta ON bix(delta);" + "INSERT INTO bix(blobid,delta)" + " SELECT blobid," + " CASE WHEN typeof(delta)=='integer'" + " THEN delta ELSE 0 END" + " FROM bblob" + " WHERE NOT EXISTS(SELECT 1 FROM blob WHERE uuid=bblob.uuid AND size>=0);" + "CREATE TEMP TABLE got(rid INTEGER PRIMARY KEY ON CONFLICT IGNORE);" + ); + manifest_crosslink_begin(); + bundle_import_elements(0, 0, isPriv); + manifest_crosslink_end(0); + describe_artifacts_to_stdout("IN got", "Imported content:"); + db_end_transaction(0); +} + +/* fossil bundle purge BUNDLE +** +** Try to undo a prior "bundle import BUNDLE". +** +** If the --force option is omitted, then this will only work if +** there have been no check-ins or tags added that use the import. +** +** This routine never removes content that is not already in the bundle +** so the bundle serves as a backup. The purge can be undone using +** "fossil bundle import BUNDLE". 
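+**
+** A typical round trip is therefore:
+**
+**     fossil bundle import BUNDLE
+**     ...decide the imported content is not wanted after all...
+**     fossil bundle purge BUNDLE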
+*/ +static void bundle_purge_cmd(void){ + int bForce = find_option("force",0,0)!=0; + int bTest = find_option("test",0,0)!=0; /* Undocumented --test option */ + const char *zFile = g.argv[3]; + verify_all_options(); + if ( g.argc!=4 ) usage("purge BUNDLE ?OPTIONS?"); + bundle_attach_file(zFile, "b1", 0); + db_begin_transaction(); + + /* Find all check-ins of the bundle */ + db_multi_exec( + "CREATE TEMP TABLE ok(rid INTEGER PRIMARY KEY);" + "INSERT OR IGNORE INTO ok SELECT blob.rid FROM bblob, blob, plink" + " WHERE bblob.uuid=blob.uuid" + " AND plink.cid=blob.rid;" + ); + + /* Check to see if new check-ins have been committed to check-ins in + ** the bundle. Do not allow the purge if that is true and if --force + ** is omitted. + */ + if( !bForce ){ + Stmt q; + int n = 0; + db_prepare(&q, + "SELECT cid FROM plink WHERE pid IN ok AND cid NOT IN ok" + ); + while( db_step(&q)==SQLITE_ROW ){ + whatis_rid(db_column_int(&q,0),0); + fossil_print("%.78c\n", '-'); + n++; + } + db_finalize(&q); + if( n>0 ){ + fossil_fatal("check-ins above are derived from check-ins in the bundle."); + } + } + + /* Find all files associated with those check-ins that are used + ** nowhere else. */ + find_checkin_associates("ok", 1); + + /* Check to see if any associated files are not in the bundle. Issue + ** an error if there are any, unless --force is used. + */ + if( !bForce ){ + db_multi_exec( + "CREATE TEMP TABLE err1(rid INTEGER PRIMARY KEY);" + "INSERT INTO err1 " + " SELECT blob.rid FROM ok CROSS JOIN blob" + " WHERE blob.rid=ok.rid" + " AND blob.uuid NOT IN (SELECT uuid FROM bblob);" + ); + if( db_changes() ){ + describe_artifacts_to_stdout("IN err1", 0); + fossil_fatal("artifacts above associated with bundle check-ins " + " are not in the bundle"); + }else{ + db_multi_exec("DROP TABLE err1;"); + } + } + + if( bTest ){ + describe_artifacts_to_stdout( + "IN (SELECT blob.rid FROM ok, blob, bblob" + " WHERE blob.rid=ok.rid AND blob.uuid=bblob.uuid)", + "Purged artifacts found in the bundle:"); + describe_artifacts_to_stdout( + "IN (SELECT blob.rid FROM ok, blob" + " WHERE blob.rid=ok.rid " + " AND blob.uuid NOT IN (SELECT uuid FROM bblob))", + "Purged artifacts NOT in the bundle:"); + describe_artifacts_to_stdout( + "IN (SELECT blob.rid FROM bblob, blob" + " WHERE blob.uuid=bblob.uuid " + " AND blob.rid NOT IN ok)", + "Artifacts in the bundle but not purged:"); + }else{ + purge_artifact_list("ok",0,0); + } + db_end_transaction(0); +} + +/* +** COMMAND: bundle +** +** Usage: %fossil bundle SUBCOMMAND ARGS... +** +** fossil bundle append BUNDLE FILE... +** +** Add files named on the command line to BUNDLE. This subcommand has +** little practical use and is mostly intended for testing. +** +** fossil bundle cat BUNDLE UUID... +** +** Extract one or more artifacts from the bundle and write them +** consecutively on standard output. This subcommand was designed +** for testing and introspection of bundles and is not something +** commonly used. +** +** fossil bundle export BUNDLE ?OPTIONS? +** +** Generate a new bundle, in the file named BUNDLE, that contains a +** subset of the check-ins in the repository (usually a single branch) +** described by the --branch, --from, --to, and/or --checkin options, +** at least one of which is required. If BUNDLE already exists, the +** specified content is added to the bundle. +** +** --branch BRANCH Package all check-ins on BRANCH. +** --from TAG1 --to TAG2 Package check-ins between TAG1 and TAG2. 
+** --checkin TAG Package the single check-in TAG +** --standalone Do no use delta-encoding against +** artifacts not in the bundle +** +** fossil bundle extend BUNDLE +** +** The BUNDLE must already exist. This subcommand adds to the bundle +** any check-ins that are descendants of check-ins already in the bundle, +** and any tags that apply to artifacts in the bundle. +** +** fossil bundle import BUNDLE ?--publish? +** +** Import all content from BUNDLE into the repository. By default, the +** imported files are private and will not sync. Use the --publish +** option makes the import public. +** +** fossil bundle ls BUNDLE +** +** List the contents of BUNDLE on standard output +** +** fossil bundle purge BUNDLE +** +** Remove from the repository all files that are used exclusively +** by check-ins in BUNDLE. This has the effect of undoing a +** "fossil bundle import". +** +** SUMMARY: +** fossil bundle append BUNDLE FILE... Add files to BUNDLE +** fossil bundle cat BUNDLE UUID... Extract file from BUNDLE +** fossil bundle export BUNDLE ?OPTIONS? Create a new BUNDLE +** --branch BRANCH --from TAG1 --to TAG2 Check-ins to include +** --checkin TAG Use only check-in TAG +** --standalone Omit dependencies +** fossil bundle extend BUNDLE Update with newer content +** fossil bundle import BUNDLE ?OPTIONS? Import a bundle +** --publish Publish the import +** --force Cross-repo import +** fossil bundle ls BUNDLE List content of a bundle +** fossil bundle purge BUNDLE Undo an import +** +** See also: publish +*/ +void bundle_cmd(void){ + const char *zSubcmd; + int n; + if( g.argc<4 ) usage("SUBCOMMAND BUNDLE ?OPTIONS?"); + zSubcmd = g.argv[2]; + db_find_and_open_repository(0,0); + n = (int)strlen(zSubcmd); + if( strncmp(zSubcmd, "append", n)==0 ){ + bundle_append_cmd(); + }else if( strncmp(zSubcmd, "cat", n)==0 ){ + bundle_cat_cmd(); + }else if( strncmp(zSubcmd, "export", n)==0 ){ + bundle_export_cmd(); + }else if( strncmp(zSubcmd, "extend", n)==0 ){ + fossil_fatal("not yet implemented"); + }else if( strncmp(zSubcmd, "import", n)==0 ){ + bundle_import_cmd(); + }else if( strncmp(zSubcmd, "ls", n)==0 ){ + bundle_ls_cmd(); + }else if( strncmp(zSubcmd, "purge", n)==0 ){ + bundle_purge_cmd(); + }else{ + fossil_fatal("unknown subcommand for bundle: %s", zSubcmd); + } +} ADDED src/cache.c Index: src/cache.c ================================================================== --- src/cache.c +++ src/cache.c @@ -0,0 +1,403 @@ +/* +** Copyright (c) 2014 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@sqlite.org +** +******************************************************************************* +** +** This file implements a cache for expense operations such as +** /zip and /tarball. +*/ +#include "config.h" +#include <sqlite3.h> +#include "cache.h" + +/* +** Construct the name of the repository cache file +*/ +static char *cacheName(void){ + int i; + int n; + + if( g.zRepositoryName==0 ) return 0; + n = (int)strlen(g.zRepositoryName); + for(i=n-1; i>=0; i--){ + if( g.zRepositoryName[i]=='/' ){ i = n; break; } + if( g.zRepositoryName[i]=='.' 
) break; + } + if( i<0 ) i = n; + return mprintf("%.*s.cache", i, g.zRepositoryName); +} + +/* +** Attempt to open the cache database, if such a database exists. +** Make sure the cache table exists within that database. +*/ +static sqlite3 *cacheOpen(int bForce){ + char *zDbName; + sqlite3 *db = 0; + int rc; + i64 sz; + + zDbName = cacheName(); + if( zDbName==0 ) return 0; + if( bForce==0 ){ + sz = file_size(zDbName); + if( sz<=0 ){ + fossil_free(zDbName); + return 0; + } + } + rc = sqlite3_open(zDbName, &db); + fossil_free(zDbName); + if( rc ){ + sqlite3_close(db); + return 0; + } + rc = sqlite3_exec(db, + "PRAGMA page_size=8192;" + "CREATE TABLE IF NOT EXISTS blob(id INTEGER PRIMARY KEY, data BLOB);" + "CREATE TABLE IF NOT EXISTS cache(" + "key TEXT PRIMARY KEY," /* Key used to access the cache */ + "id INT REFERENCES blob," /* The cache content */ + "sz INT," /* Size of content in bytes */ + "tm INT," /* Last access time (unix timestampe) */ + "nref INT" /* Number of uses */ + ");" + "CREATE TRIGGER IF NOT EXISTS cacheDel AFTER DELETE ON cache BEGIN" + " DELETE FROM blob WHERE id=OLD.id;" + "END;", + 0, 0, 0); + if( rc!=SQLITE_OK ){ + sqlite3_close(db); + return 0; + } + return db; +} + +/* +** Attempt to construct a prepared statement for the cache database. +*/ +static sqlite3_stmt *cacheStmt(sqlite3 *db, const char *zSql){ + sqlite3_stmt *pStmt = 0; + int rc; + + rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0); + if( rc ){ + sqlite3_finalize(pStmt); + pStmt = 0; + } + return pStmt; +} + +/* +** This routine implements an SQL function that renders a large integer +** compactly: ex: 12.3MB +*/ +static void cache_sizename( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + char zBuf[30]; + double v, x; + assert( argc==1 ); + v = sqlite3_value_double(argv[0]); + x = v<0.0 ? -v : v; + if( x>=1e9 ){ + sqlite3_snprintf(sizeof(zBuf), zBuf, "%.1fGB", v/1e9); + }else if( x>=1e6 ){ + sqlite3_snprintf(sizeof(zBuf), zBuf, "%.1fMB", v/1e6); + }else if( x>=1e3 ){ + sqlite3_snprintf(sizeof(zBuf), zBuf, "%.1fKB", v/1e3); + }else{ + sqlite3_snprintf(sizeof(zBuf), zBuf, "%gB", v); + } + sqlite3_result_text(context, zBuf, -1, SQLITE_TRANSIENT); +} + +/* +** Register the sizename() SQL function with the SQLite database +** connection. +*/ +static void cache_register_sizename(sqlite3 *db){ + sqlite3_create_function(db, "sizename", 1, SQLITE_UTF8, 0, + cache_sizename, 0, 0); +} + +/* +** Attempt to write pContent into the cache. If the cache file does +** not exist, then this routine is a no-op. Older cache entries might +** be deleted. 
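+**
+** Expensive page generators are expected to pair this with cache_read()
+** in a read-through pattern, roughly (sketch only; the real /zip and
+** /tarball handlers add more detail):
+**
+**     if( !cache_read(&content, zKey) ){
+**       ...generate content the expensive way...
+**       cache_write(&content, zKey);
+**     }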
+*/ +void cache_write(Blob *pContent, const char *zKey){ + sqlite3 *db; + sqlite3_stmt *pStmt; + int rc = 0; + int nKeep; + + db = cacheOpen(0); + if( db==0 ) return; + sqlite3_busy_timeout(db, 10000); + sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0); + pStmt = cacheStmt(db, "INSERT INTO blob(data) VALUES(?1)"); + if( pStmt==0 ) goto cache_write_end; + sqlite3_bind_blob(pStmt, 1, blob_buffer(pContent), blob_size(pContent), + SQLITE_STATIC); + if( sqlite3_step(pStmt)!=SQLITE_DONE ) goto cache_write_end; + sqlite3_finalize(pStmt); + pStmt = cacheStmt(db, + "INSERT OR IGNORE INTO cache(key,sz,tm,nref,id)" + "VALUES(?1,?2,strftime('%s','now'),1,?3)" + ); + if( pStmt==0 ) goto cache_write_end; + sqlite3_bind_text(pStmt, 1, zKey, -1, SQLITE_STATIC); + sqlite3_bind_int(pStmt, 2, blob_size(pContent)); + sqlite3_bind_int(pStmt, 3, sqlite3_last_insert_rowid(db)); + if( sqlite3_step(pStmt)!=SQLITE_DONE) goto cache_write_end; + rc = sqlite3_changes(db); + + /* If the write was successful, truncate the cache to keep at most + ** max-cache-entry entries in the cache */ + if( rc ){ + nKeep = db_get_int("max-cache-entry",10); + sqlite3_finalize(pStmt); + pStmt = cacheStmt(db, + "DELETE FROM cache WHERE rowid IN (" + "SELECT rowid FROM cache ORDER BY tm DESC" + " LIMIT -1 OFFSET ?1)"); + if( pStmt ){ + sqlite3_bind_int(pStmt, 1, nKeep); + sqlite3_step(pStmt); + } + } + +cache_write_end: + sqlite3_finalize(pStmt); + sqlite3_exec(db, rc ? "COMMIT" : "ROLLBACK", 0, 0, 0); + sqlite3_close(db); +} + +/* +** Attempt to read content out of the cache with the given zKey. Return +** non-zero on success and zero if unable to locate the content. +** +** Possible reasons for returning zero: +** (1) This server does not implement a cache +** (2) The requested element is not in the cache +*/ +int cache_read(Blob *pContent, const char *zKey){ + sqlite3 *db; + sqlite3_stmt *pStmt; + int rc = 0; + + db = cacheOpen(0); + if( db==0 ) return 0; + sqlite3_busy_timeout(db, 10000); + sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0); + pStmt = cacheStmt(db, + "SELECT blob.data FROM cache, blob" + " WHERE cache.key=?1 AND cache.id=blob.id"); + if( pStmt==0 ) goto cache_read_done; + sqlite3_bind_text(pStmt, 1, zKey, -1, SQLITE_STATIC); + if( sqlite3_step(pStmt)==SQLITE_ROW ){ + blob_append(pContent, sqlite3_column_blob(pStmt, 0), + sqlite3_column_bytes(pStmt, 0)); + rc = 1; + sqlite3_reset(pStmt); + pStmt = cacheStmt(db, + "UPDATE cache SET nref=nref+1, tm=strftime('%s','now')" + " WHERE key=?1"); + if( pStmt ){ + sqlite3_bind_text(pStmt, 1, zKey, -1, SQLITE_STATIC); + sqlite3_step(pStmt); + } + } + sqlite3_finalize(pStmt); +cache_read_done: + sqlite3_exec(db, "COMMIT", 0, 0, 0); + sqlite3_close(db); + return rc; +} + +/* +** Create a cache database for the current repository if no such +** database already exists. +*/ +void cache_initialize(void){ + sqlite3_close(cacheOpen(1)); +} + +/* +** COMMAND: cache* +** Usage: %fossil cache SUBCOMMAND +** +** Manage the cache used for potentially expensive web pages such as +** /zip and /tarball. SUBCOMMAND can be: +** +** clear Remove all entries from the cache. +** +** init Create the cache file if it does not already exists. +** +** list|ls List the keys and content sizes and other stats for +** all entries currently in the cache +** +** status Show a summary of cache status. +** +** The cache is stored in a file that is distinct from the repository +** but that is held in the same directory as the repository. The cache +** file can be deleted in order to completely disable the cache. 
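+**
+** The cache file is named after the repository: for a repository file
+** named REPO.fossil the cache is kept in REPO.cache in the same
+** directory (see cacheName() above).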
+*/ +void cache_cmd(void){ + const char *zCmd; + int nCmd; + sqlite3 *db; + sqlite3_stmt *pStmt; + + db_find_and_open_repository(0,0); + zCmd = g.argc>=3 ? g.argv[2] : ""; + nCmd = (int)strlen(zCmd); + if( nCmd<=1 ){ + fossil_fatal("Usage: %s cache SUBCOMMAND", g.argv[0]); + } + if( strncmp(zCmd, "init", nCmd)==0 ){ + db = cacheOpen(0); + sqlite3_close(db); + if( db ){ + fossil_print("cache already exists in file %z\n", cacheName()); + }else{ + db = cacheOpen(1); + sqlite3_close(db); + if( db ){ + fossil_print("cache created in file %z\n", cacheName()); + }else{ + fossil_fatal("unable to create cache file %z", cacheName()); + } + } + }else if( strncmp(zCmd, "clear", nCmd)==0 ){ + db = cacheOpen(0); + if( db ){ + sqlite3_exec(db, "DELETE FROM cache; DELETE FROM blob; VACUUM;",0,0,0); + sqlite3_close(db); + fossil_print("cache cleared\n"); + }else{ + fossil_print("nothing to clear; cache does not exist\n"); + } + }else if(( strncmp(zCmd, "list", nCmd)==0 ) || ( strncmp(zCmd, "ls", nCmd)==0 )){ + db = cacheOpen(0); + if( db==0 ){ + fossil_print("cache does not exist\n"); + }else{ + int nEntry = 0; + char *zDbName = cacheName(); + cache_register_sizename(db); + pStmt = cacheStmt(db, + "SELECT key, sizename(sz), nRef, datetime(tm,'unixepoch')" + " FROM cache" + " ORDER BY tm DESC" + ); + if( pStmt ){ + while( sqlite3_step(pStmt)==SQLITE_ROW ){ + fossil_print("%s %4d %8s %s\n", + sqlite3_column_text(pStmt, 3), + sqlite3_column_int(pStmt, 2), + sqlite3_column_text(pStmt, 1), + sqlite3_column_text(pStmt, 0)); + nEntry++; + } + sqlite3_finalize(pStmt); + } + sqlite3_close(db); + fossil_print("Entries: %d Cache-file Size: %lld\n", + nEntry, file_size(zDbName)); + fossil_free(zDbName); + } + }else if( strncmp(zCmd, "status", nCmd)==0 ){ + fossil_print("TBD...\n"); + }else{ + fossil_fatal("Unknown subcommand \"%s\"." + " Should be one of: clear init list status", zCmd); + } +} + +/* +** WEBPAGE: cachestat +** +** Show information about the webpage cache. Requires Admin privilege. +*/ +void cache_page(void){ + sqlite3 *db; + sqlite3_stmt *pStmt; + char zBuf[100]; + + login_check_credentials(); + if( !g.perm.Setup ){ login_needed(0); return; } + style_header("Web Cache Status"); + db = cacheOpen(0); + if( db==0 ){ + @ The web-page cache is disabled for this repository + }else{ + char *zDbName = cacheName(); + cache_register_sizename(db); + pStmt = cacheStmt(db, + "SELECT key, sizename(sz), nRef, datetime(tm,'unixepoch')" + " FROM cache" + " ORDER BY tm DESC" + ); + if( pStmt ){ + @ <ol> + while( sqlite3_step(pStmt)==SQLITE_ROW ){ + const unsigned char *zName = sqlite3_column_text(pStmt,0); + @ <li><p>%z(href("%R/cacheget?key=%T",zName))%h(zName)</a><br> + @ size: %s(sqlite3_column_text(pStmt,1)) + @ hit-count: %d(sqlite3_column_int(pStmt,2)) + @ last-access: %s(sqlite3_column_text(pStmt,3))</p></li> + } + sqlite3_finalize(pStmt); + @ </ol> + } + zDbName = cacheName(); + bigSizeName(sizeof(zBuf), zBuf, file_size(zDbName)); + @ <p>cache-file name: %h(zDbName)</p> + @ <p>cache-file size: %s(zBuf)</p> + fossil_free(zDbName); + sqlite3_close(db); + } + style_footer(); +} + +/* +** WEBPAGE: cacheget +** +** Usage: /cacheget?key=KEY +** +** Download a single entry for the cache, identified by KEY. +** This page is normally a hyperlink from the /cachestat page. +** Requires Admin privilege. 
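+** The entry is returned with a Content-Type of "application/x-compressed".
+** If no cache entry matches KEY, an error page is shown instead.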
+*/ +void cache_getpage(void){ + const char *zKey; + Blob content; + + login_check_credentials(); + if( !g.perm.Setup ){ login_needed(0); return; } + zKey = PD("key",""); + blob_zero(&content); + if( cache_read(&content, zKey)==0 ){ + style_header("Cache Download Error"); + @ The cache does not contain any entry with this key: "%h(zKey)" + style_footer(); + return; + } + cgi_set_content(&content); + cgi_set_content_type("application/x-compressed"); +} Index: src/captcha.c ================================================================== --- src/captcha.c +++ src/captcha.c @@ -13,26 +13,26 @@ ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** -** This file contains code to a simple text-based CAPTCHA. Though eaily +** This file contains code to a simple text-based CAPTCHA. Though easily ** defeated by a sophisticated attacker, this CAPTCHA does at least make ** scripting attacks more difficult. */ -#include <assert.h> #include "config.h" +#include <assert.h> #include "captcha.h" #if INTERFACE #define CAPTCHA 3 /* Which captcha rendering to use */ #endif /* ** Convert a hex digit into a value between 0 and 15 */ -static int hexValue(char c){ +int hex_digit_value(char c){ if( c>='0' && c<='9' ){ return c - '0'; }else if( c>='a' && c<='f' ){ return c - 'a' + 10; }else if( c>='A' && c<='F' ){ @@ -69,17 +69,17 @@ ** Render an 8-character hexadecimal string as ascii art. ** Space to hold the result is obtained from malloc() and should be freed ** by the caller. */ char *captcha_render(const char *zPw){ - char *z = malloc( 500 ); + char *z = fossil_malloc( 9*6*strlen(zPw) + 7 ); int i, j, k, m; k = 0; for(i=0; i<6; i++){ - for(j=0; j<8; j++){ - unsigned char v = hexValue(zPw[j]); + for(j=0; zPw[j]; j++){ + unsigned char v = hex_digit_value(zPw[j]); v = (aFont1[v] >> ((5-i)*4)) & 0xf; for(m=8; m>=1; m = m>>1){ if( v & m ){ z[k++] = 'X'; z[k++] = 'X'; @@ -92,17 +92,17 @@ z[k++] = ' '; } z[k++] = '\n'; } z[k] = 0; - return z; + return z; } #endif /* CAPTCHA==1 */ #if CAPTCHA==2 -static const char *azFont2[] = { +static const char *const azFont2[] = { /* 0 */ " __ ", " / \\ ", "| () |", " \\__/ ", @@ -148,11 +148,11 @@ "|__ |", " / / ", " /_/ ", /* 8 */ - " ___ ", + " ___ ", "( _ )", "/ _ \\", "\\___/", /* 9 */ @@ -202,152 +202,152 @@ ** Render an 8-digit hexadecimal string as ascii arg. ** Space to hold the result is obtained from malloc() and should be freed ** by the caller. 
*/ char *captcha_render(const char *zPw){ - char *z = malloc( 300 ); + char *z = fossil_malloc( 7*4*strlen(zPw) + 5 ); int i, j, k, m; const char *zChar; k = 0; for(i=0; i<4; i++){ - for(j=0; j<8; j++){ - unsigned char v = hexValue(zPw[j]); + for(j=0; zPw[j]; j++){ + unsigned char v = hex_digit_value(zPw[j]); zChar = azFont2[4*v + i]; for(m=0; zChar[m]; m++){ z[k++] = zChar[m]; } } z[k++] = '\n'; } z[k] = 0; - return z; + return z; } #endif /* CAPTCHA==2 */ #if CAPTCHA==3 -static const char *azFont3[] = { +static const char *const azFont3[] = { /* 0 */ " ___ ", " / _ \\ ", "| | | |", "| | | |", "| |_| |", " \\___/ ", - + /* 1 */ " __ ", "/_ |", " | |", " | |", " | |", " |_|", - + /* 2 */ " ___ ", "|__ \\ ", " ) |", " / / ", " / /_ ", "|____|", - + /* 3 */ " ____ ", "|___ \\ ", " __) |", " |__ < ", " ___) |", "|____/ ", - + /* 4 */ " _ _ ", "| || | ", "| || |_ ", "|__ _|", " | | ", " |_| ", - + /* 5 */ " _____ ", "| ____|", "| |__ ", "|___ \\ ", " ___) |", "|____/ ", - + /* 6 */ " __ ", " / / ", " / /_ ", "| '_ \\ ", "| (_) |", " \\___/ ", - + /* 7 */ " ______ ", "|____ |", " / / ", " / / ", " / / ", " /_/ ", - + /* 8 */ " ___ ", " / _ \\ ", "| (_) |", " > _ < ", "| (_) |", " \\___/ ", - + /* 9 */ " ___ ", " / _ \\ ", "| (_) |", " \\__, |", " / / ", " /_/ ", - + /* A */ " ", " /\\ ", " / \\ ", " / /\\ \\ ", " / ____ \\ ", "/_/ \\_\\", - + /* B */ " ____ ", "| _ \\ ", "| |_) |", "| _ < ", "| |_) |", "|____/ ", - + /* C */ " _____ ", " / ____|", "| | ", "| | ", "| |____ ", " \\_____|", - + /* D */ " _____ ", "| __ \\ ", "| | | |", "| | | |", "| |__| |", "|_____/ ", - + /* E */ " ______ ", "| ____|", "| |__ ", "| __| ", "| |____ ", "|______|", - + /* F */ " ______ ", "| ____|", "| |__ ", "| __| ", @@ -359,51 +359,86 @@ ** Render an 8-digit hexadecimal string as ascii arg. ** Space to hold the result is obtained from malloc() and should be freed ** by the caller. */ char *captcha_render(const char *zPw){ - char *z = malloc( 600 ); + char *z = fossil_malloc( 10*6*strlen(zPw) + 7 ); int i, j, k, m; const char *zChar; + unsigned char x; + int y; k = 0; for(i=0; i<6; i++){ - for(j=0; j<8; j++){ - unsigned char v = hexValue(zPw[j]); + x = 0; + for(j=0; zPw[j]; j++){ + unsigned char v = hex_digit_value(zPw[j]); + x = (x<<4) + v; + switch( x ){ + case 0x7a: + case 0xfa: + y = 3; + break; + case 0x47: + y = 2; + break; + case 0xf6: + case 0xa9: + case 0xa4: + case 0xa1: + case 0x9a: + case 0x76: + case 0x61: + case 0x67: + case 0x69: + case 0x41: + case 0x42: + case 0x43: + case 0x4a: + y = 1; + break; + default: + y = 0; + break; + } zChar = azFont3[6*v + i]; + while( y && zChar[0]==' ' ){ y--; zChar++; } + while( y && z[k-1]==' ' ){ y--; k--; } for(m=0; zChar[m]; m++){ z[k++] = zChar[m]; } } z[k++] = '\n'; } z[k] = 0; - return z; + return z; } #endif /* CAPTCHA==3 */ /* ** COMMAND: test-captcha +** +** Render an ASCII-art captcha for numbers given on the command line. */ void test_captcha(void){ int i; unsigned int v; char *z; for(i=2; i<g.argc; i++){ char zHex[30]; v = (unsigned int)atoi(g.argv[i]); - sprintf(zHex, "%x", v); + sqlite3_snprintf(sizeof(zHex), zHex, "%x", v); z = captcha_render(zHex); - printf("%s:\n%s", zHex, z); + fossil_print("%s:\n%s", zHex, z); free(z); } } /* ** Compute a seed value for a captcha. The seed is public and is sent -** has a hidden parameter with the page that contains the captcha. Knowledge +** as a hidden parameter with the page that contains the captcha. 
Knowledge ** of the seed is insufficient for determining the captcha without additional ** information held only on the server and never revealed. */ unsigned int captcha_seed(void){ unsigned int x; @@ -412,12 +447,14 @@ return x; } /* ** Translate a captcha seed value into the captcha password string. +** The returned string is static and overwritten on each call to +** this function. */ -char *captcha_decode(unsigned int seed){ +const char *captcha_decode(unsigned int seed){ const char *zSecret; const char *z; Blob b; static char zRes[20]; @@ -436,5 +473,140 @@ z = blob_buffer(&b); memcpy(zRes, z, 8); zRes[8] = 0; return zRes; } + +/* +** Return true if a CAPTCHA is required for editing wiki or tickets or for +** adding attachments. +** +** A CAPTCHA is required in those cases if the user is not logged in (if they +** are user "nobody") and if the "require-captcha" setting is true. The +** "require-captcha" setting is controlled on the Admin/Access page. It +** defaults to true. +*/ +int captcha_needed(void){ + return login_is_nobody() && db_get_boolean("require-captcha", 1); +} + +/* +** If a captcha is required but the correct captcha code is not supplied +** in the query parameters, then return false (0). +** +** If no captcha is required or if the correct captcha is supplied, return +** true (non-zero). +** +** The query parameters examined are "captchaseed" for the seed value and +** "captcha" for text that the user types in response to the captcha prompt. +*/ +int captcha_is_correct(void){ + const char *zSeed; + const char *zEntered; + const char *zDecode; + char z[30]; + int i; + if( !captcha_needed() ){ + return 1; /* No captcha needed */ + } + zSeed = P("captchaseed"); + if( zSeed==0 ) return 0; + zEntered = P("captcha"); + if( zEntered==0 || strlen(zEntered)!=8 ) return 0; + zDecode = captcha_decode((unsigned int)atoi(zSeed)); + assert( strlen(zDecode)==8 ); + if( strlen(zEntered)!=8 ) return 0; + for(i=0; i<8; i++){ + char c = zEntered[i]; + if( c>='A' && c<='F' ) c += 'a' - 'A'; + if( c=='O' ) c = '0'; + z[i] = c; + } + if( strncmp(zDecode,z,8)!=0 ) return 0; + return 1; +} + +/* +** Generate a captcha display together with the necessary hidden parameter +** for the seed and the entry box into which the user will type the text of +** the captcha. This is typically done at the very bottom of a form. +** +** This routine is a no-op if no captcha is required. +*/ +void captcha_generate(int showButton){ + unsigned int uSeed; + const char *zDecoded; + char *zCaptcha; + + if( !captcha_needed() ) return; + uSeed = captcha_seed(); + zDecoded = captcha_decode(uSeed); + zCaptcha = captcha_render(zDecoded); + @ <div class="captcha"><table class="captcha"><tr><td><pre> + @ %h(zCaptcha) + @ </pre> + @ Enter security code shown above: + @ <input type="hidden" name="captchaseed" value="%u(uSeed)" /> + @ <input type="text" name="captcha" size=8 /> + if( showButton ){ + @ <input type="submit" value="Submit"> + } + @ </td></tr></table></div> +} + +/* +** WEBPAGE: test-captcha +** Test the captcha-generator by rendering the value of the name= query +** parameter using ascii-art. If name= is omitted, show a random 16-digit +** hexadecimal number. 
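+** For example, /test-captcha?name=fa3e19c2 renders those eight hex digits
+** as ASCII art via captcha_render().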
+*/ +void captcha_test(void){ + const char *zPw = P("name"); + if( zPw==0 || zPw[0]==0 ){ + u64 x; + sqlite3_randomness(sizeof(x), &x); + zPw = mprintf("%016llx", x); + } + style_header("Captcha Test"); + @ <pre> + @ %s(captcha_render(zPw)) + @ </pre> + style_footer(); +} + +/* +** Check to see if the current request is coming from an agent that might +** be a spider. If the agent is not a spider, then return 0 without doing +** anything. But if the user agent appears to be a spider, offer +** a captcha challenge to allow the user agent to prove that it is human +** and return non-zero. +*/ +int exclude_spiders(const char *zPage){ + const char *zCookieValue; + char *zCookieName; + if( g.isHuman ) return 0; +#if 0 + { + const char *zReferer = P("HTTP_REFERER"); + if( zReferer && strncmp(g.zBaseURL, zReferer, strlen(g.zBaseURL))==0 ){ + return 0; + } + } +#endif + zCookieName = mprintf("fossil-cc-%.10s", db_get("project-code","x")); + zCookieValue = P(zCookieName); + if( zCookieValue && atoi(zCookieValue)==1 ) return 0; + if( captcha_is_correct() ){ + cgi_set_cookie(zCookieName, "1", login_cookie_path(), 8*3600); + return 0; + } + + /* This appears to be a spider. Offer the captcha */ + style_header("Verification"); + form_begin(0, "%s", zPage); + cgi_query_parameters_to_hidden(); + @ <p>Please demonstrate that you are human, not a spider or robot</p> + captcha_generate(1); + @ </form> + style_footer(); + return 1; +} Index: src/cgi.c ================================================================== --- src/cgi.c +++ src/cgi.c @@ -2,11 +2,11 @@ ** Copyright (c) 2006 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) - +** ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: @@ -20,15 +20,13 @@ ** dispensing QUERY_STRING parameters and cookies, the "mprintf()" ** formatting function and its cousins, and routines to encode and ** decode strings in HTML or HTTP. */ #include "config.h" -#ifdef __MINGW32__ -# include <windows.h> /* for Sleep once server works again */ -# include <winsock2.h> /* socket operations */ -# define sleep Sleep /* windows does not have sleep, but Sleep */ -# include <ws2tcpip.h> +#ifdef _WIN32 +# include <winsock2.h> +# include <ws2tcpip.h> #else # include <sys/socket.h> # include <netinet/in.h> # include <arpa/inet.h> # include <sys/times.h> @@ -42,40 +40,49 @@ #include <time.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include "cgi.h" +#include "cygsup.h" #if INTERFACE /* ** Shortcuts for cgi_parameter. P("x") returns the value of query parameter ** or cookie "x", or NULL if there is no such parameter or cookie. PD("x","y") ** does the same except "y" is returned in place of NULL if there is not match. */ #define P(x) cgi_parameter((x),0) #define PD(x,y) cgi_parameter((x),(y)) -#define QP(x) quotable_string(cgi_parameter((x),0)) -#define QPD(x,y) quotable_string(cgi_parameter((x),(y))) +#define PT(x) cgi_parameter_trimmed((x),0) +#define PDT(x,y) cgi_parameter_trimmed((x),(y)) +#define PB(x) cgi_parameter_boolean(x) /* ** Destinations for output text. 
*/ #define CGI_HEADER 0 #define CGI_BODY 1 +/* +** Flags for SSH HTTP clients +*/ +#define CGI_SSH_CLIENT 0x0001 /* Client is SSH */ +#define CGI_SSH_COMPAT 0x0002 /* Compat for old SSH transport */ +#define CGI_SSH_FOSSIL 0x0004 /* Use new Fossil SSH transport */ + #endif /* INTERFACE */ /* ** The HTTP reply is generated in two pieces: the header and the body. -** These pieces are generated separately because they are not necessary +** These pieces are generated separately because they are not necessarily ** produced in order. Parts of the header might be built after all or ** part of the body. The header and body are accumulated in separate ** Blob structures then output sequentially once everything has been ** built. ** -** The cgi_destination() interface switch between the buffers. +** The cgi_destination() interface switches between the buffers. */ static Blob cgiContent[2] = { BLOB_INITIALIZER, BLOB_INITIALIZER }; static Blob *pContent = &cgiContent[0]; /* @@ -94,10 +101,21 @@ default: { cgi_panic("bad destination"); } } } + +/* +** Check to see if the header contains the zNeedle string. Return true +** if it does and false if it does not. +*/ +int cgi_header_contains(const char *zNeedle){ + return strstr(blob_str(&cgiContent[0]), zNeedle)!=0; +} +int cgi_body_contains(const char *zNeedle){ + return strstr(blob_str(&cgiContent[1]), zNeedle)!=0; +} /* ** Append reply content to what already exists. */ void cgi_append_content(const char *zData, int nAmt){ @@ -131,11 +149,11 @@ } /* ** Return a pointer to the HTTP reply text. */ -char *cgi_extract_content(int *pnAmt){ +char *cgi_extract_content(void){ cgi_combine_header_and_body(); return blob_buffer(&cgiContent[0]); } /* @@ -187,20 +205,24 @@ const char *zName, /* Name of the cookie */ const char *zValue, /* Value of the cookie. Automatically escaped */ const char *zPath, /* Path cookie applies to. NULL means "/" */ int lifetime /* Expiration of the cookie in seconds from now */ ){ + char *zSecure = ""; if( zPath==0 ) zPath = g.zTop; + if( g.zBaseURL!=0 && strncmp(g.zBaseURL, "https:", 6)==0 ){ + zSecure = " secure;"; + } if( lifetime>0 ){ lifetime += (int)time(0); blob_appendf(&extraHeader, - "Set-Cookie: %s=%t; Path=%s; expires=%z; Version=1\r\n", - zName, zValue, zPath, cgi_rfc822_datestamp(lifetime)); + "Set-Cookie: %s=%t; Path=%s; expires=%z; HttpOnly;%s Version=1\r\n", + zName, zValue, zPath, cgi_rfc822_datestamp(lifetime), zSecure); }else{ blob_appendf(&extraHeader, - "Set-Cookie: %s=%t; Path=%s; Version=1\r\n", - zName, zValue, zPath); + "Set-Cookie: %s=%t; Path=%s; HttpOnly;%s Version=1\r\n", + zName, zValue, zPath, zSecure); } } #if 0 /* @@ -217,11 +239,11 @@ MD5Final(digest,&ctx); for(j=i=0; i<16; i++,j+=2){ bprintf(&zETag[j],sizeof(zETag)-j,"%02x",(int)digest[i]); } blob_appendf(&extraHeader, "ETag: %s\r\n", zETag); - return strdup(zETag); + return fossil_strdup(zETag); } /* ** Do some cache control stuff. First, we generate an ETag and include it in ** the response headers. Second, we do whatever is necessary to determine if @@ -237,25 +259,35 @@ /* FIXME: there's some gotchas wth cookies and some headers. 
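To make the new cookie attributes concrete, a call like the following (cookie name and value hypothetical) would, when g.zBaseURL begins with "https:", append a header of roughly this shape:

    cgi_set_cookie("fossil-example", "abc123", 0, 8*3600);
    /* Appends approximately:
    **   Set-Cookie: fossil-example=abc123; Path=<g.zTop>;
    **     expires=<RFC822 date 8 hours from now>; HttpOnly; secure; Version=1
    ** Over plain http the " secure;" attribute is omitted, and with a
    ** lifetime of 0 no expires= attribute is emitted (a session cookie).
    */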
*/ char *zETag = cgi_add_etag(blob_buffer(&cgiContent),blob_size(&cgiContent)); char *zMatch = P("HTTP_IF_NONE_MATCH"); if( zETag!=0 && zMatch!=0 ) { - char *zBuf = strdup(zMatch); + char *zBuf = fossil_strdup(zMatch); if( zBuf!=0 ){ char *zTok = 0; char *zPos; for( zTok = strtok_r(zBuf, ",\"",&zPos); - zTok && strcasecmp(zTok,zETag); + zTok && fossil_stricmp(zTok,zETag); zTok = strtok_r(0, ",\"",&zPos)){} - free(zBuf); + fossil_free(zBuf); if(zTok) return 1; } } - + return 0; } #endif + +/* +** Return true if the response should be sent with Content-Encoding: gzip. +*/ +static int is_gzippable(void){ + if( strstr(PD("HTTP_ACCEPT_ENCODING", ""), "gzip")==0 ) return 0; + return strncmp(zContentType, "text/", 5)==0 + || sqlite3_strglob("application/*xml", zContentType)==0 + || sqlite3_strglob("application/*javascript", zContentType)==0; +} /* ** Do a normal HTTP reply */ void cgi_reply(void){ @@ -278,17 +310,36 @@ if( g.fullHttpReply ){ fprintf(g.httpOut, "HTTP/1.0 %d %s\r\n", iReplyStatus, zReplyStatus); fprintf(g.httpOut, "Date: %s\r\n", cgi_rfc822_datestamp(time(0))); fprintf(g.httpOut, "Connection: close\r\n"); + fprintf(g.httpOut, "X-UA-Compatible: IE=edge\r\n"); }else{ fprintf(g.httpOut, "Status: %d %s\r\n", iReplyStatus, zReplyStatus); } if( blob_size(&extraHeader)>0 ){ fprintf(g.httpOut, "%s", blob_buffer(&extraHeader)); } + + /* Add headers to turn on useful security options in browsers. */ + fprintf(g.httpOut, "X-Frame-Options: SAMEORIGIN\r\n"); + /* This stops fossil pages appearing in frames or iframes, preventing + ** click-jacking attacks on supporting browsers. + ** + ** Other good headers would be + ** Strict-Transport-Security: max-age=62208000 + ** if we're using https. However, this would break sites which serve different + ** content on http and https protocols. Also, + ** X-Content-Security-Policy: allow 'self' + ** would help mitigate some XSS and data injection attacks, but will break + ** deliberate inclusion of external resources, such as JavaScript syntax + ** highlighter scripts. + ** + ** These headers are probably best added by the web server hosting fossil as + ** a CGI script. + */ if( g.isConst ){ /* constant means that the input URL will _never_ generate anything ** else. In the case of attachments, the contents won't change because ** an attempt to change them generates a new attachment number. In the @@ -295,27 +346,37 @@ ** case of most /getfile calls for specific versions, the only way the ** content changes is if someone breaks the SCM. And if that happens, a ** stale cache is the least of the problem. So we provide an Expires ** header set to a reasonable period (default: one week). */ - /*time_t expires = time(0) + atoi(db_config("constant_expires","604800"));*/ - time_t expires = time(0) + 604800; - fprintf(g.httpOut, "Expires: %s\r\n", cgi_rfc822_datestamp(expires)); + fprintf(g.httpOut, "Cache-control: max-age=28800\r\n"); }else{ - fprintf(g.httpOut, "Cache-control: no-cache, no-store\r\n"); + fprintf(g.httpOut, "Cache-control: no-cache\r\n"); } /* Content intended for logged in users should only be cached in ** the browser, not some shared location. 
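As a concrete illustration of the negotiation done by is_gzippable() together with the reply code below (header values are examples only):

    /* Request header:      Accept-Encoding: gzip, deflate
    ** Reply content type:  text/html; charset=utf-8
    **
    ** is_gzippable() returns true, so cgi_reply() feeds the header and body
    ** blobs through gzip_begin()/gzip_step()/gzip_finish() and adds
    **   Content-Encoding: gzip
    **   Vary: Accept-Encoding
    ** to the response.  application/x-fossil replies never match the
    ** content-type test; they are compressed separately with blob_compress().
    */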
*/ fprintf(g.httpOut, "Content-Type: %s; charset=utf-8\r\n", zContentType); - if( strcmp(zContentType,"application/x-fossil")==0 ){ + if( fossil_strcmp(zContentType,"application/x-fossil")==0 ){ cgi_combine_header_and_body(); blob_compress(&cgiContent[0], &cgiContent[0]); } if( iReplyStatus != 304 ) { + if( is_gzippable() ){ + int i; + gzip_begin(0); + for( i=0; i<2; i++ ){ + int size = blob_size(&cgiContent[i]); + if( size>0 ) gzip_step(blob_buffer(&cgiContent[i]), size); + blob_reset(&cgiContent[i]); + } + gzip_finish(&cgiContent[0]); + fprintf(g.httpOut, "Content-Encoding: gzip\r\n"); + fprintf(g.httpOut, "Vary: Accept-Encoding\r\n"); + } total_size = blob_size(&cgiContent[0]) + blob_size(&cgiContent[1]); fprintf(g.httpOut, "Content-Length: %d\r\n", total_size); }else{ total_size = 0; } @@ -327,35 +388,41 @@ if( size>0 ){ fwrite(blob_buffer(&cgiContent[i]), 1, size, g.httpOut); } } } + fflush(g.httpOut); CGIDEBUG(("DONE\n")); } /* ** Do a redirect request to the URL given in the argument. ** ** The URL must be relative to the base of the fossil server. */ -void cgi_redirect(const char *zURL){ +NORETURN void cgi_redirect(const char *zURL){ char *zLocation; CGIDEBUG(("redirect to %s\n", zURL)); - if( strncmp(zURL,"http:",5)==0 || strncmp(zURL,"https:",6)==0 || *zURL=='/' ){ + if( strncmp(zURL,"http:",5)==0 || strncmp(zURL,"https:",6)==0 ){ zLocation = mprintf("Location: %s\r\n", zURL); + }else if( *zURL=='/' ){ + int n1 = (int)strlen(g.zBaseURL); + int n2 = (int)strlen(g.zTop); + if( g.zBaseURL[n1-1]=='/' ) zURL++; + zLocation = mprintf("Location: %.*s%s\r\n", n1-n2, g.zBaseURL, zURL); }else{ zLocation = mprintf("Location: %s/%s\r\n", g.zBaseURL, zURL); } cgi_append_header(zLocation); cgi_reset_content(); - cgi_printf("<html>\n<p>Redirect to %h</p>\n</html>\n", zURL); + cgi_printf("<html>\n<p>Redirect to %h</p>\n</html>\n", zLocation); cgi_set_status(302, "Moved Temporarily"); free(zLocation); cgi_reply(); - exit(0); + fossil_exit(0); } -void cgi_redirectf(const char *zFormat, ...){ +NORETURN void cgi_redirectf(const char *zFormat, ...){ va_list ap; va_start(ap, zFormat); cgi_redirect(vmprintf(zFormat, ap)); va_end(ap); } @@ -370,10 +437,12 @@ static int seqQP = 0; /* Sequence numbers */ static struct QParam { /* One entry for each query parameter or cookie */ const char *zName; /* Parameter or cookie name */ const char *zValue; /* Value of the query parameter or cookie */ int seq; /* Order of insertion */ + char isQP; /* True for query parameters */ + char cTag; /* Tag on query parameters */ } *aParamQP; /* An array of all parameters and cookies */ /* ** Add another query parameter or cookie to the parameter set. ** zName is the name of the query parameter or cookie and zValue @@ -380,22 +449,27 @@ ** is its fully decoded value. ** ** zName and zValue are not copied and must not change or be ** deallocated after this routine returns. 
*/ -void cgi_set_parameter_nocopy(const char *zName, const char *zValue){ +void cgi_set_parameter_nocopy(const char *zName, const char *zValue, int isQP){ if( nAllocQP<=nUsedQP ){ nAllocQP = nAllocQP*2 + 10; - aParamQP = realloc( aParamQP, nAllocQP*sizeof(aParamQP[0]) ); - if( aParamQP==0 ) exit(1); + if( nAllocQP>1000 ){ + /* Prevent a DOS service attack against the framework */ + fossil_fatal("Too many query parameters"); + } + aParamQP = fossil_realloc( aParamQP, nAllocQP*sizeof(aParamQP[0]) ); } aParamQP[nUsedQP].zName = zName; aParamQP[nUsedQP].zValue = zValue; if( g.fHttpTrace ){ fprintf(stderr, "# cgi: %s = [%s]\n", zName, zValue); } aParamQP[nUsedQP].seq = seqQP++; + aParamQP[nUsedQP].isQP = isQP; + aParamQP[nUsedQP].cTag = 0; nUsedQP++; sortQP = 1; } /* @@ -404,35 +478,46 @@ ** is its fully decoded value. ** ** Copies are made of both the zName and zValue parameters. */ void cgi_set_parameter(const char *zName, const char *zValue){ - cgi_set_parameter_nocopy(mprintf("%s",zName), mprintf("%s",zValue)); + cgi_set_parameter_nocopy(mprintf("%s",zName), mprintf("%s",zValue), 0); } /* ** Replace a parameter with a new value. */ void cgi_replace_parameter(const char *zName, const char *zValue){ int i; for(i=0; i<nUsedQP; i++){ - if( strcmp(aParamQP[i].zName,zName)==0 ){ + if( fossil_strcmp(aParamQP[i].zName,zName)==0 ){ + aParamQP[i].zValue = zValue; + return; + } + } + cgi_set_parameter_nocopy(zName, zValue, 0); +} +void cgi_replace_query_parameter(const char *zName, const char *zValue){ + int i; + for(i=0; i<nUsedQP; i++){ + if( fossil_strcmp(aParamQP[i].zName,zName)==0 ){ aParamQP[i].zValue = zValue; + assert( aParamQP[i].isQP ); return; } } - cgi_set_parameter_nocopy(zName, zValue); + cgi_set_parameter_nocopy(zName, zValue, 1); } /* ** Add a query parameter. The zName portion is fixed but a copy ** must be made of zValue. */ void cgi_setenv(const char *zName, const char *zValue){ - cgi_set_parameter_nocopy(zName, mprintf("%s",zValue)); + cgi_set_parameter_nocopy(zName, mprintf("%s",zValue), 0); } - + /* ** Add a list of query parameters or cookies to the parameter set. ** ** Each parameter is of the form NAME=VALUE. Both the NAME and the @@ -456,14 +541,15 @@ ** The input string "z" is modified but no copies is made. "z" ** should not be deallocated or changed again after this routine ** returns or it will corrupt the parameter table. */ static void add_param_list(char *z, int terminator){ + int isQP = terminator=='&'; while( *z ){ char *zName; char *zValue; - while( isspace(*z) ){ z++; } + while( fossil_isspace(*z) ){ z++; } zName = z; while( *z && *z!='=' && *z!=terminator ){ z++; } if( *z=='=' ){ *z = 0; z++; @@ -476,13 +562,16 @@ dehttpize(zValue); }else{ if( *z ){ *z++ = 0; } zValue = ""; } - if( islower(zName[0]) ){ - cgi_set_parameter_nocopy(zName, zValue); + if( fossil_islower(zName[0]) ){ + cgi_set_parameter_nocopy(zName, zValue, isQP); } +#ifdef FOSSIL_ENABLE_JSON + json_setenv( zName, cson_value_new_string(zValue,strlen(zValue)) ); +#endif /* FOSSIL_ENABLE_JSON */ } } /* ** *pz is a string that consists of multiple lines of text. This @@ -542,11 +631,11 @@ break; } } *pz = &z[i]; get_line_from_string(pz, pLen); - return z; + return z; } /* ** Tokenize a line of text into as many as nArg tokens. Make ** azArg[] point to the start of each token. @@ -571,11 +660,11 @@ ** in the example above. 
*/ static int tokenize_line(char *z, int mxArg, char **azArg){ int i = 0; while( *z ){ - while( isspace(*z) || *z==';' ){ z++; } + while( fossil_isspace(*z) || *z==';' ){ z++; } if( *z=='"' && z[1] ){ *z = 0; z++; if( i<mxArg-1 ){ azArg[i++] = z; } while( *z && *z!='"' ){ z++; } @@ -582,11 +671,11 @@ if( *z==0 ) break; *z = 0; z++; }else{ if( i<mxArg-1 ){ azArg[i++] = z; } - while( *z && !isspace(*z) && *z!=';' && *z!='"' ){ z++; } + while( *z && !fossil_isspace(*z) && *z!=';' && *z!='"' ){ z++; } if( *z && *z!='"' ){ *z = 0; z++; } } @@ -617,92 +706,285 @@ if( zBoundry==0 ) return; while( (zLine = get_line_from_string(&z, &len))!=0 ){ if( zLine[0]==0 ){ int nContent = 0; zValue = get_bounded_content(&z, &len, zBoundry, &nContent); - if( zName && zValue && islower(zName[0]) ){ - cgi_set_parameter_nocopy(zName, zValue); + if( zName && zValue && fossil_islower(zName[0]) ){ + cgi_set_parameter_nocopy(zName, zValue, 1); if( showBytes ){ cgi_set_parameter_nocopy(mprintf("%s:bytes", zName), - mprintf("%d",nContent)); + mprintf("%d",nContent), 1); } } zName = 0; showBytes = 0; }else{ nArg = tokenize_line(zLine, sizeof(azArg)/sizeof(azArg[0]), azArg); for(i=0; i<nArg; i++){ - int c = tolower(azArg[i][0]); + int c = fossil_tolower(azArg[i][0]); int n = strlen(azArg[i]); if( c=='c' && sqlite3_strnicmp(azArg[i],"content-disposition:",n)==0 ){ i++; }else if( c=='n' && sqlite3_strnicmp(azArg[i],"name=",n)==0 ){ zName = azArg[++i]; }else if( c=='f' && sqlite3_strnicmp(azArg[i],"filename=",n)==0 ){ char *z = azArg[++i]; - if( zName && z && islower(zName[0]) ){ - cgi_set_parameter_nocopy(mprintf("%s:filename",zName), z); + if( zName && z && fossil_islower(zName[0]) ){ + cgi_set_parameter_nocopy(mprintf("%s:filename",zName), z, 1); } showBytes = 1; }else if( c=='c' && sqlite3_strnicmp(azArg[i],"content-type:",n)==0 ){ char *z = azArg[++i]; - if( zName && z && islower(zName[0]) ){ - cgi_set_parameter_nocopy(mprintf("%s:mimetype",zName), z); + if( zName && z && fossil_islower(zName[0]) ){ + cgi_set_parameter_nocopy(mprintf("%s:mimetype",zName), z, 1); } } } } - } + } +} + + +#ifdef FOSSIL_ENABLE_JSON +/* +** Internal helper for cson_data_source_FILE_n(). +*/ +typedef struct CgiPostReadState_ { + FILE * fh; + unsigned int len; + unsigned int pos; +} CgiPostReadState; + +/* +** cson_data_source_f() impl which reads only up to +** a specified amount of data from its input FILE. +** state MUST be a full populated (CgiPostReadState*). +*/ +static int cson_data_source_FILE_n( void * state, + void * dest, + unsigned int * n ){ + if( ! state || !dest || !n ) return cson_rc.ArgError; + else { + CgiPostReadState * st = (CgiPostReadState *)state; + if( st->pos >= st->len ){ + *n = 0; + return 0; + }else if( !*n || ((st->pos + *n) > st->len) ){ + return cson_rc.RangeError; + }else{ + unsigned int rsz = (unsigned int)fread( dest, 1, *n, st->fh ); + if( ! rsz ){ + *n = rsz; + return feof(st->fh) ? 0 : cson_rc.IOError; + }else{ + *n = rsz; + st->pos += *n; + return 0; + } + } + } +} + +/* +** Reads a JSON object from the first contentLen bytes of zIn. On +** g.json.post is updated to hold the content. On error a +** FSL_JSON_E_INVALID_REQUEST response is output and fossil_exit() is +** called (in HTTP mode exit code 0 is used). +** +** If contentLen is 0 then the whole file is read. 
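A sketch of the envelope constraint described above, with hypothetical request bodies (the key names shown are invented, not a statement of the JSON API's schema):

    /*   {"command":"timeline","payload":{"limit":5}}   -- accepted: the
    **        top-level value is a JSON Object
    **   [1,2,3]                                         -- rejected with
    **        FSL_JSON_E_INVALID_REQUEST: Arrays are not supported
    **
    ** Parsing stops after the first contentLen bytes (CONTENT_LENGTH) via
    ** cson_data_source_FILE_n(), and nesting is limited to 15 levels.
    */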
+*/ +void cgi_parse_POST_JSON( FILE * zIn, unsigned int contentLen ){ + cson_value * jv = NULL; + int rc; + CgiPostReadState state; + cson_parse_opt popt = cson_parse_opt_empty; + cson_parse_info pinfo = cson_parse_info_empty; + assert(g.json.gc.a && "json_main_bootstrap() was not called!"); + popt.maxDepth = 15; + state.fh = zIn; + state.len = contentLen; + state.pos = 0; + rc = cson_parse( &jv, + contentLen ? cson_data_source_FILE_n : cson_data_source_FILE, + contentLen ? (void *)&state : (void *)zIn, &popt, &pinfo ); + if(rc){ + goto invalidRequest; + }else{ + json_gc_add( "POST.JSON", jv ); + g.json.post.v = jv; + g.json.post.o = cson_value_get_object( jv ); + if( !g.json.post.o ){ /* we don't support non-Object (Array) requests */ + goto invalidRequest; + } + } + return; + invalidRequest: + cgi_set_content_type(json_guess_content_type()); + if(0 != pinfo.errorCode){ /* fancy error message */ + char * msg = mprintf("JSON parse error at line %u, column %u, " + "byte offset %u: %s", + pinfo.line, pinfo.col, pinfo.length, + cson_rc_string(pinfo.errorCode)); + json_err( FSL_JSON_E_INVALID_REQUEST, msg, 1 ); + free(msg); + }else if(jv && !g.json.post.o){ + json_err( FSL_JSON_E_INVALID_REQUEST, + "Request envelope must be a JSON Object (not array).", 1 ); + }else{ /* generic error message */ + json_err( FSL_JSON_E_INVALID_REQUEST, NULL, 1 ); + } + fossil_exit( g.isHTTP ? 0 : 1); +} +#endif /* FOSSIL_ENABLE_JSON */ + +/* +** Log HTTP traffic to a file. Begin the log on first use. Close the log +** when the argument is NULL. +*/ +void cgi_trace(const char *z){ + static FILE *pLog = 0; + if( g.fHttpTrace==0 ) return; + if( z==0 ){ + if( pLog ) fclose(pLog); + pLog = 0; + return; + } + if( pLog==0 ){ + char zFile[50]; + unsigned r; + sqlite3_randomness(sizeof(r), &r); + sqlite3_snprintf(sizeof(zFile), zFile, "httplog-%08x.txt", r); + pLog = fossil_fopen(zFile, "wb"); + if( pLog ){ + fprintf(stderr, "# open log on %s\n", zFile); + }else{ + fprintf(stderr, "# failed to open %s\n", zFile); + return; + } + } + fputs(z, pLog); } +/* Forward declaration */ +static NORETURN void malformed_request(const char *zMsg); + /* ** Initialize the query parameter database. Information is pulled from ** the QUERY_STRING environment variable (if it exists), from standard ** input if there is POST data, and from HTTP_COOKIE. +** +** REQUEST_URI, PATH_INFO, and SCRIPT_NAME are related as follows: +** +** REQUEST_URI == SCRIPT_NAME + PATH_INFO +** +** Where "+" means concatenate. Fossil requires SCRIPT_NAME. If +** REQUEST_URI is provided but PATH_INFO is not, then PATH_INFO is +** computed from REQUEST_URI and SCRIPT_NAME. If PATH_INFO is provided +** but REQUEST_URI is not, then compute REQUEST_URI from PATH_INFO and +** SCRIPT_NAME. If neither REQUEST_URI nor PATH_INFO are provided, then +** assume that PATH_INFO is an empty string and set REQUEST_URI equal +** to PATH_INFO. +** +** SCGI typically omits PATH_INFO. CGI sometimes omits REQUEST_URI and +** PATH_INFO when it is empty. 
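A concrete (hypothetical) example of the relationship described above:

    /*   SCRIPT_NAME = "/cgi-bin/fossil"
    **   PATH_INFO   = "/timeline"
    **   REQUEST_URI = "/cgi-bin/fossil/timeline"
    **
    ** If PATH_INFO is absent (common with SCGI) it is recovered by stripping
    ** the SCRIPT_NAME prefix and any "?query" suffix from REQUEST_URI; if
    ** REQUEST_URI is absent it is rebuilt by concatenating SCRIPT_NAME and
    ** PATH_INFO.
    */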
*/ void cgi_init(void){ char *z; const char *zType; int len; + const char *zRequestUri = cgi_parameter("REQUEST_URI",0); + const char *zScriptName = cgi_parameter("SCRIPT_NAME",0); + const char *zPathInfo = cgi_parameter("PATH_INFO",0); + +#ifdef FOSSIL_ENABLE_JSON + json_main_bootstrap(); +#endif + g.isHTTP = 1; cgi_destination(CGI_BODY); - z = (char*)P("QUERY_STRING"); - if( z ){ - z = mprintf("%s",z); - add_param_list(z, '&'); - } - - z = (char*)P("REMOTE_ADDR"); - if( z ) g.zIpAddr = mprintf("%s", z); - - len = atoi(PD("CONTENT_LENGTH", "0")); - g.zContentType = zType = P("CONTENT_TYPE"); - if( len>0 && zType ){ - blob_zero(&g.cgiIn); - if( strcmp(zType,"application/x-www-form-urlencoded")==0 - || strncmp(zType,"multipart/form-data",19)==0 ){ - z = malloc( len+1 ); - if( z==0 ) exit(1); - len = fread(z, 1, len, g.httpIn); - z[len] = 0; - if( zType[0]=='a' ){ - add_param_list(z, '&'); - }else{ - process_multipart_form_data(z, len); - } - }else if( strcmp(zType, "application/x-fossil")==0 ){ - blob_read_from_channel(&g.cgiIn, g.httpIn, len); - blob_uncompress(&g.cgiIn, &g.cgiIn); - }else if( strcmp(zType, "application/x-fossil-debug")==0 ){ - blob_read_from_channel(&g.cgiIn, g.httpIn, len); - } + if( zScriptName==0 ) malformed_request("missing SCRIPT_NAME"); + if( zRequestUri==0 ){ + const char *z = zPathInfo; + if( zPathInfo==0 ){ + malformed_request("missing PATH_INFO and/or REQUEST_URI"); + } + if( z[0]=='/' ) z++; + zRequestUri = mprintf("%s/%s", zScriptName, z); + cgi_set_parameter("REQUEST_URI", zRequestUri); + } + if( zPathInfo==0 ){ + int i, j; + for(i=0; zRequestUri[i]==zScriptName[i] && zRequestUri[i]; i++){} + for(j=i; zRequestUri[j] && zRequestUri[j]!='?'; j++){} + cgi_set_parameter("PATH_INFO", mprintf("%.*s", j-i, zRequestUri+i)); } z = (char*)P("HTTP_COOKIE"); if( z ){ z = mprintf("%s",z); add_param_list(z, ';'); } + + z = (char*)P("QUERY_STRING"); + if( z ){ + z = mprintf("%s",z); + add_param_list(z, '&'); + } + + z = (char*)P("REMOTE_ADDR"); + if( z ){ + g.zIpAddr = mprintf("%s", z); + } + + len = atoi(PD("CONTENT_LENGTH", "0")); + g.zContentType = zType = P("CONTENT_TYPE"); + blob_zero(&g.cgiIn); + if( len>0 && zType ){ + if( fossil_strcmp(zType,"application/x-www-form-urlencoded")==0 + || strncmp(zType,"multipart/form-data",19)==0 ){ + z = fossil_malloc( len+1 ); + len = fread(z, 1, len, g.httpIn); + z[len] = 0; + cgi_trace(z); + if( zType[0]=='a' ){ + add_param_list(z, '&'); + }else{ + process_multipart_form_data(z, len); + } + }else if( fossil_strcmp(zType, "application/x-fossil")==0 ){ + blob_read_from_channel(&g.cgiIn, g.httpIn, len); + blob_uncompress(&g.cgiIn, &g.cgiIn); + }else if( fossil_strcmp(zType, "application/x-fossil-debug")==0 ){ + blob_read_from_channel(&g.cgiIn, g.httpIn, len); + }else if( fossil_strcmp(zType, "application/x-fossil-uncompressed")==0 ){ + blob_read_from_channel(&g.cgiIn, g.httpIn, len); + } +#ifdef FOSSIL_ENABLE_JSON + else if( fossil_strcmp(zType, "application/json") + || fossil_strcmp(zType,"text/plain")/*assume this MIGHT be JSON*/ + || fossil_strcmp(zType,"application/javascript")){ + g.json.isJsonMode = 1; + cgi_parse_POST_JSON(g.httpIn, (unsigned int)len); + /* FIXMEs: + + - See if fossil really needs g.cgiIn to be set for this purpose + (i don't think it does). If it does then fill g.cgiIn and + refactor to parse the JSON from there. + + - After parsing POST JSON, copy the "first layer" of keys/values + to cgi_setenv(), honoring the upper-case distinction used + in add_param_list(). However... 
+ + - If we do that then we might get a disconnect in precedence of + GET/POST arguments. i prefer for GET entries to take precedence + over like-named POST entries, but in order for that to happen we + need to process QUERY_STRING _after_ reading the POST data. + */ + cgi_set_content_type(json_guess_content_type()); + } +#endif /* FOSSIL_ENABLE_JSON */ + } + } /* ** This is the comparison function used to sort the aParamQP[] array of ** query parameters and cookies. @@ -709,11 +991,11 @@ */ static int qparam_compare(const void *a, const void *b){ struct QParam *pA = (struct QParam*)a; struct QParam *pB = (struct QParam*)b; int c; - c = strcmp(pA->zName, pB->zName); + c = fossil_strcmp(pA->zName, pB->zName); if( c==0 ){ c = pA->seq - pB->seq; } return c; } @@ -738,11 +1020,11 @@ /* After sorting, remove duplicate parameters. The secondary sort ** key is aParamQP[].seq and we keep the first entry. That means ** with duplicate calls to cgi_set_parameter() the second and ** subsequent calls are effectively no-ops. */ for(i=j=1; i<nUsedQP; i++){ - if( strcmp(aParamQP[i].zName,aParamQP[i-1].zName)==0 ){ + if( fossil_strcmp(aParamQP[i].zName,aParamQP[i-1].zName)==0 ){ continue; } if( j<i ){ memcpy(&aParamQP[j], &aParamQP[i], sizeof(aParamQP[j])); } @@ -754,11 +1036,11 @@ /* Do a binary search for a matching query parameter */ lo = 0; hi = nUsedQP-1; while( lo<=hi ){ mid = (lo+hi)/2; - c = strcmp(aParamQP[mid].zName, zName); + c = fossil_strcmp(aParamQP[mid].zName, zName); if( c==0 ){ CGIDEBUG(("mem-match [%s] = [%s]\n", zName, aParamQP[mid].zValue)); return aParamQP[mid].zValue; }else if( c>0 ){ hi = mid-1; @@ -769,25 +1051,52 @@ /* If no match is found and the name begins with an upper-case ** letter, then check to see if there is an environment variable ** with the given name. */ - if( isupper(zName[0]) ){ - const char *zValue = getenv(zName); + if( zName && fossil_isupper(zName[0]) ){ + const char *zValue = fossil_getenv(zName); if( zValue ){ - cgi_set_parameter_nocopy(zName, zValue); + cgi_set_parameter_nocopy(zName, zValue, 0); CGIDEBUG(("env-match [%s] = [%s]\n", zName, zValue)); return zValue; } } CGIDEBUG(("no-match [%s]\n", zName)); return zDefault; } + +/* +** Return the value of a CGI parameter with leading and trailing +** spaces removed. +*/ +char *cgi_parameter_trimmed(const char *zName, const char *zDefault){ + const char *zIn; + char *zOut; + int i; + zIn = cgi_parameter(zName, 0); + if( zIn==0 ) zIn = zDefault; + while( fossil_isspace(zIn[0]) ) zIn++; + zOut = fossil_strdup(zIn); + for(i=0; zOut[i]; i++){} + while( i>0 && fossil_isspace(zOut[i-1]) ) zOut[--i] = 0; + return zOut; +} + +/* +** Return true if the CGI parameter zName exists and is not equal to 0, +** or "no" or "off". +*/ +int cgi_parameter_boolean(const char *zName){ + const char *zIn = cgi_parameter(zName, 0); + if( zIn==0 ) return 0; + return zIn[0]==0 || is_truth(zIn); +} /* ** Return the name of the i-th CGI parameter. Return NULL if there -** are fewer than i registered CGI parmaeters. +** are fewer than i registered CGI parameters. */ const char *cgi_parameter_name(int i){ if( i>=0 && i<nUsedQP ){ return aParamQP[i].zName; }else{ @@ -841,153 +1150,67 @@ } /* ** Print all query parameters on standard output. Format the ** parameters as HTML. This is used for testing and debugging. +** +** Omit the values of the cookies unless showAll is true. 
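For illustration, a hypothetical handler using the parameter shortcuts (P, PD, PDT, PB) declared in the INTERFACE section above; the parameter names "u", "title", and "all" are invented:

    void example_handler(void){
      const char *zUser = PD("u", "anonymous"); /* value, or default if absent */
      char *zTitle      = PDT("title", "");     /* leading/trailing spaces removed */
      int showAll       = PB("all");            /* true when present and not
                                                ** "0", "no", or "off" */
      /* ... use zUser, zTitle, and showAll ... */
    }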
*/ -void cgi_print_all(void){ +void cgi_print_all(int showAll){ int i; cgi_parameter("",""); /* Force the parameters into sorted order */ for(i=0; i<nUsedQP; i++){ - cgi_printf("%s = %s <br />\n", - htmlize(aParamQP[i].zName, -1), htmlize(aParamQP[i].zValue, -1)); - } -} - -/* -** Write HTML text for an option menu to standard output. zParam -** is the query parameter that the option menu sets. zDflt is the -** initial value of the option menu. Addition arguments are name/value -** pairs that define values on the menu. The list is terminated with -** a single NULL argument. -*/ -void cgi_optionmenu(int in, const char *zP, const char *zD, ...){ - va_list ap; - char *zName, *zVal; - int dfltSeen = 0; - cgi_printf("%*s<select size=1 name=\"%s\">\n", in, "", zP); - va_start(ap, zD); - while( (zName = va_arg(ap, char*))!=0 && (zVal = va_arg(ap, char*))!=0 ){ - if( strcmp(zVal,zD)==0 ){ dfltSeen = 1; break; } - } - va_end(ap); - if( !dfltSeen ){ - if( zD[0] ){ - cgi_printf("%*s<option value=\"%h\" selected>%h</option>\n", - in+2, "", zD, zD); - }else{ - cgi_printf("%*s<option value=\"\" selected> </option>\n", in+2, ""); - } - } - va_start(ap, zD); - while( (zName = va_arg(ap, char*))!=0 && (zVal = va_arg(ap, char*))!=0 ){ - if( zName[0] ){ - cgi_printf("%*s<option value=\"%h\"%s>%h</option>\n", - in+2, "", - zVal, - strcmp(zVal, zD) ? "" : " selected", - zName - ); - }else{ - cgi_printf("%*s<option value=\"\"%s> </option>\n", - in+2, "", - strcmp(zVal, zD) ? "" : " selected" - ); - } - } - va_end(ap); - cgi_printf("%*s</select>\n", in, ""); -} - -/* -** This routine works a lot like cgi_optionmenu() except that the list of -** values is contained in an array. Also, the values are just values, not -** name/value pairs as in cgi_optionmenu. -*/ -void cgi_v_optionmenu( - int in, /* Indent by this amount */ - const char *zP, /* The query parameter name */ - const char *zD, /* Default value */ - const char **az /* NULL-terminated list of allowed values */ -){ - const char *zVal; - int i; - cgi_printf("%*s<select size=1 name=\"%s\">\n", in, "", zP); - for(i=0; az[i]; i++){ - if( strcmp(az[i],zD)==0 ) break; - } - if( az[i]==0 ){ - if( zD[0]==0 ){ - cgi_printf("%*s<option value=\"\" selected> </option>\n", - in+2, ""); - }else{ - cgi_printf("%*s<option value=\"%h\" selected>%h</option>\n", - in+2, "", zD, zD); - } - } - while( (zVal = *(az++))!=0 ){ - if( zVal[0] ){ - cgi_printf("%*s<option value=\"%h\"%s>%h</option>\n", - in+2, "", - zVal, - strcmp(zVal, zD) ? "" : " selected", - zVal - ); - }else{ - cgi_printf("%*s<option value=\"\"%s> </option>\n", - in+2, "", - strcmp(zVal, zD) ? "" : " selected" - ); - } - } - cgi_printf("%*s</select>\n", in, ""); -} - -/* -** This routine works a lot like cgi_v_optionmenu() except that the list -** is a list of pairs. The first element of each pair is the value used -** internally and the second element is the value displayed to the user. 
-*/ -void cgi_v_optionmenu2( - int in, /* Indent by this amount */ - const char *zP, /* The query parameter name */ - const char *zD, /* Default value */ - const char **az /* NULL-terminated list of allowed values */ -){ - const char *zVal; - int i; - cgi_printf("%*s<select size=1 name=\"%s\">\n", in, "", zP); - for(i=0; az[i]; i+=2){ - if( strcmp(az[i],zD)==0 ) break; - } - if( az[i]==0 ){ - if( zD[0]==0 ){ - cgi_printf("%*s<option value=\"\" selected> </option>\n", - in+2, ""); - }else{ - cgi_printf("%*s<option value=\"%h\" selected>%h</option>\n", - in+2, "", zD, zD); - } - } - while( (zVal = *(az++))!=0 ){ - const char *zName = *(az++); - if( zName[0] ){ - cgi_printf("%*s<option value=\"%h\"%s>%h</option>\n", - in+2, "", - zVal, - strcmp(zVal, zD) ? "" : " selected", - zName - ); - }else{ - cgi_printf("%*s<option value=\"%h\"%s> </option>\n", - in+2, "", - zVal, - strcmp(zVal, zD) ? "" : " selected" - ); - } - } - cgi_printf("%*s</select>\n", in, ""); + const char *zName = aParamQP[i].zName; + if( !showAll ){ + if( fossil_stricmp("HTTP_COOKIE",zName)==0 ) continue; + if( fossil_strnicmp("fossil-",zName,7)==0 ) continue; + } + cgi_printf("%h = %h <br />\n", zName, aParamQP[i].zValue); + } +} + +/* +** Export all untagged query parameters (but not cookies or environment +** variables) as hidden values of a form. +*/ +void cgi_query_parameters_to_hidden(void){ + int i; + const char *zN, *zV; + for(i=0; i<nUsedQP; i++){ + if( aParamQP[i].isQP==0 || aParamQP[i].cTag ) continue; + zN = aParamQP[i].zName; + zV = aParamQP[i].zValue; + @ <input type="hidden" name="%h(zN)" value="%h(zV)"> + } +} + +/* +** Export all untagged query parameters (but not cookies or environment +** variables) to the HQuery object. +*/ +void cgi_query_parameters_to_url(HQuery *p){ + int i; + for(i=0; i<nUsedQP; i++){ + if( aParamQP[i].isQP==0 || aParamQP[i].cTag ) continue; + url_add_parameter(p, aParamQP[i].zName, aParamQP[i].zValue); + } +} + +/* +** Tag query parameter zName so that it is not exported by +** cgi_query_parameters_to_hidden(). Or if zName==0, then +** untag all query parameters. +*/ +void cgi_tag_query_parameter(const char *zName){ + int i; + if( zName==0 ){ + for(i=0; i<nUsedQP; i++) aParamQP[i].cTag = 0; + }else{ + for(i=0; i<nUsedQP; i++){ + if( strcmp(zName,aParamQP[i].zName)==0 ) aParamQP[i].cTag = 1; + } + } } /* ** This routine works like "printf" except that it has the ** extra formatting capabilities such as %h and %t. @@ -1009,35 +1232,62 @@ /* ** Send a reply indicating that the HTTP request was malformed */ -static void malformed_request(void){ +static NORETURN void malformed_request(const char *zMsg){ cgi_set_status(501, "Not Implemented"); cgi_printf( - "<html><body>Unrecognized HTTP Request</body></html>\n" + "<html><body><p>Bad Request: %s</p></body></html>\n", zMsg ); cgi_reply(); - exit(0); + fossil_exit(0); } /* ** Panic and die while processing a webpage. */ -void cgi_panic(const char *zFormat, ...){ +NORETURN void cgi_panic(const char *zFormat, ...){ va_list ap; cgi_reset_content(); - cgi_set_status(500, "Internal Server Error"); - cgi_printf( - "<html><body><h1>Internal Server Error</h1>\n" - "<plaintext>" - ); - va_start(ap, zFormat); - vxprintf(pContent,zFormat,ap); - va_end(ap); - cgi_reply(); - exit(1); +#ifdef FOSSIL_ENABLE_JSON + if( g.json.isJsonMode ){ + char * zMsg; + va_start(ap, zFormat); + zMsg = vmprintf(zFormat,ap); + va_end(ap); + json_err( FSL_JSON_E_PANIC, zMsg, 1 ); + free(zMsg); + fossil_exit( g.isHTTP ? 
0 : 1 ); + }else +#endif /* FOSSIL_ENABLE_JSON */ + { + cgi_set_status(500, "Internal Server Error"); + cgi_printf( + "<html><body><h1>Internal Server Error</h1>\n" + "<plaintext>" + ); + va_start(ap, zFormat); + vxprintf(pContent,zFormat,ap); + va_end(ap); + cgi_reply(); + fossil_exit(1); + } +} + +/* z[] is the value of an X-FORWARDED-FOR: line in an HTTP header. +** Return a pointer to a string containing the real IP address, or a +** NULL pointer to stick with the IP address previously computed and +** loaded into g.zIpAddr. +*/ +static const char *cgi_accept_forwarded_for(const char *z){ + int i; + if( fossil_strcmp(g.zIpAddr, "127.0.0.1")!=0 ) return 0; + + i = strlen(z)-1; + while( i>=0 && z[i]!=',' && !fossil_isspace(z[i]) ) i--; + return &z[++i]; } /* ** Remove the first space-delimited token from a string and return ** a pointer to it. Add a NULL to the string to terminate the token. @@ -1047,107 +1297,399 @@ char *zResult = 0; if( zInput==0 ){ if( zLeftOver ) *zLeftOver = 0; return 0; } - while( isspace(*zInput) ){ zInput++; } + while( fossil_isspace(*zInput) ){ zInput++; } zResult = zInput; - while( *zInput && !isspace(*zInput) ){ zInput++; } + while( *zInput && !fossil_isspace(*zInput) ){ zInput++; } if( *zInput ){ *zInput = 0; zInput++; - while( isspace(*zInput) ){ zInput++; } + while( fossil_isspace(*zInput) ){ zInput++; } } if( zLeftOver ){ *zLeftOver = zInput; } return zResult; } /* ** This routine handles a single HTTP request which is coming in on -** standard input and which replies on standard output. +** g.httpIn and which replies on g.httpOut ** -** The HTTP request is read from standard input and is used to initialize -** environment variables as per CGI. The cgi_init() routine to complete +** The HTTP request is read from g.httpIn and is used to initialize +** entries in the cgi_parameter() hash, as if those entries were +** environment variables. A call to cgi_init() completes ** the setup. Once all the setup is finished, this procedure returns ** and subsequent code handles the actual generation of the webpage. */ void cgi_handle_http_request(const char *zIpAddr){ char *z, *zToken; int i; struct sockaddr_in remoteName; - size_t size = sizeof(struct sockaddr_in); + socklen_t size = sizeof(struct sockaddr_in); char zLine[2000]; /* A single line of input. 
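A worked (hypothetical) example of the X-Forwarded-For handling above:

    /*   X-Forwarded-For: 203.0.113.7, 198.51.100.23
    **
    ** cgi_accept_forwarded_for() scans backward from the end of the value,
    ** so it returns "198.51.100.23", the entry added by the nearest proxy,
    ** and it does so only when the connection itself arrived from 127.0.0.1;
    ** otherwise the previously computed g.zIpAddr is kept.
    */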
*/ - g.fullHttpReply = 1; if( fgets(zLine, sizeof(zLine),g.httpIn)==0 ){ - malformed_request(); + malformed_request("missing HTTP header"); } + blob_append(&g.httpHeader, zLine, -1); + cgi_trace(zLine); zToken = extract_token(zLine, &z); if( zToken==0 ){ - malformed_request(); + malformed_request("malformed HTTP header"); } - if( strcmp(zToken,"GET")!=0 && strcmp(zToken,"POST")!=0 - && strcmp(zToken,"HEAD")!=0 ){ - malformed_request(); + if( fossil_strcmp(zToken,"GET")!=0 && fossil_strcmp(zToken,"POST")!=0 + && fossil_strcmp(zToken,"HEAD")!=0 ){ + malformed_request("unsupported HTTP method"); } cgi_setenv("GATEWAY_INTERFACE","CGI/1.0"); cgi_setenv("REQUEST_METHOD",zToken); zToken = extract_token(z, &z); if( zToken==0 ){ - malformed_request(); + malformed_request("malformed URL in HTTP header"); } cgi_setenv("REQUEST_URI", zToken); + cgi_setenv("SCRIPT_NAME", ""); for(i=0; zToken[i] && zToken[i]!='?'; i++){} if( zToken[i] ) zToken[i++] = 0; cgi_setenv("PATH_INFO", zToken); cgi_setenv("QUERY_STRING", &zToken[i]); if( zIpAddr==0 && - getpeername(fileno(g.httpIn), (struct sockaddr*)&remoteName, - (socklen_t*)&size)>=0 + getpeername(fileno(g.httpIn), (struct sockaddr*)&remoteName, + &size)>=0 ){ zIpAddr = inet_ntoa(remoteName.sin_addr); } - if( zIpAddr ){ + if( zIpAddr ){ cgi_setenv("REMOTE_ADDR", zIpAddr); g.zIpAddr = mprintf("%s", zIpAddr); } - + + /* Get all the optional fields that follow the first line. + */ + while( fgets(zLine,sizeof(zLine),g.httpIn) ){ + char *zFieldName; + char *zVal; + + cgi_trace(zLine); + blob_append(&g.httpHeader, zLine, -1); + zFieldName = extract_token(zLine,&zVal); + if( zFieldName==0 || *zFieldName==0 ) break; + while( fossil_isspace(*zVal) ){ zVal++; } + i = strlen(zVal); + while( i>0 && fossil_isspace(zVal[i-1]) ){ i--; } + zVal[i] = 0; + for(i=0; zFieldName[i]; i++){ + zFieldName[i] = fossil_tolower(zFieldName[i]); + } + if( fossil_strcmp(zFieldName,"accept-encoding:")==0 ){ + cgi_setenv("HTTP_ACCEPT_ENCODING", zVal); + }else if( fossil_strcmp(zFieldName,"content-length:")==0 ){ + cgi_setenv("CONTENT_LENGTH", zVal); + }else if( fossil_strcmp(zFieldName,"content-type:")==0 ){ + cgi_setenv("CONTENT_TYPE", zVal); + }else if( fossil_strcmp(zFieldName,"cookie:")==0 ){ + cgi_setenv("HTTP_COOKIE", zVal); + }else if( fossil_strcmp(zFieldName,"https:")==0 ){ + cgi_setenv("HTTPS", zVal); + }else if( fossil_strcmp(zFieldName,"host:")==0 ){ + cgi_setenv("HTTP_HOST", zVal); + }else if( fossil_strcmp(zFieldName,"if-none-match:")==0 ){ + cgi_setenv("HTTP_IF_NONE_MATCH", zVal); + }else if( fossil_strcmp(zFieldName,"if-modified-since:")==0 ){ + cgi_setenv("HTTP_IF_MODIFIED_SINCE", zVal); + }else if( fossil_strcmp(zFieldName,"referer:")==0 ){ + cgi_setenv("HTTP_REFERER", zVal); + }else if( fossil_strcmp(zFieldName,"user-agent:")==0 ){ + cgi_setenv("HTTP_USER_AGENT", zVal); + }else if( fossil_strcmp(zFieldName,"x-forwarded-for:")==0 ){ + const char *zIpAddr = cgi_accept_forwarded_for(zVal); + if( zIpAddr!=0 ){ + g.zIpAddr = mprintf("%s", zIpAddr); + cgi_replace_parameter("REMOTE_ADDR", g.zIpAddr); + } + } + } + cgi_init(); + cgi_trace(0); +} + +/* +** This routine handles a single HTTP request from an SSH client which is +** coming in on g.httpIn and which replies on g.httpOut +** +** Once all the setup is finished, this procedure returns +** and subsequent code handles the actual generation of the webpage. 
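For illustration, how a (hypothetical) raw request line is mapped onto CGI-style parameters by cgi_handle_http_request():

    /*   GET /timeline?n=20 HTTP/1.0
    **
    **   REQUEST_METHOD = "GET"
    **   REQUEST_URI    = "/timeline?n=20"
    **   SCRIPT_NAME    = ""
    **   PATH_INFO      = "/timeline"
    **   QUERY_STRING   = "n=20"
    **
    ** The "Name: value" header lines that follow are folded into the
    ** corresponding HTTP_*/CONTENT_* parameters, and cgi_init() completes
    ** the setup.
    */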
+** +** It is called in a loop so some variables will need to be replaced +*/ +void cgi_handle_ssh_http_request(const char *zIpAddr){ + static int nCycles = 0; + static char *zCmd = 0; + char *z, *zToken; + const char *zType = 0; + int i, content_length = 0; + char zLine[2000]; /* A single line of input. */ + + if( zIpAddr ){ + if( nCycles==0 ){ + cgi_setenv("REMOTE_ADDR", zIpAddr); + g.zIpAddr = mprintf("%s", zIpAddr); + } + }else{ + fossil_panic("missing SSH IP address"); + } + if( fgets(zLine, sizeof(zLine),g.httpIn)==0 ){ + malformed_request("missing HTTP header"); + } + cgi_trace(zLine); + zToken = extract_token(zLine, &z); + if( zToken==0 ){ + malformed_request("malformed HTTP header"); + } + + if( fossil_strcmp(zToken, "echo")==0 ){ + /* start looking for probes to complete transport_open */ + zCmd = cgi_handle_ssh_probes(zLine, sizeof(zLine), z, zToken); + if( fgets(zLine, sizeof(zLine),g.httpIn)==0 ){ + malformed_request("missing HTTP header"); + } + cgi_trace(zLine); + zToken = extract_token(zLine, &z); + if( zToken==0 ){ + malformed_request("malformed HTTP header"); + } + }else if( zToken && strlen(zToken)==0 && zCmd ){ + /* transport_flip request and continued transport_open */ + cgi_handle_ssh_transport(zCmd); + if( fgets(zLine, sizeof(zLine),g.httpIn)==0 ){ + malformed_request("missing HTTP header"); + } + cgi_trace(zLine); + zToken = extract_token(zLine, &z); + if( zToken==0 ){ + malformed_request("malformed HTTP header"); + } + } + + if( fossil_strcmp(zToken,"GET")!=0 && fossil_strcmp(zToken,"POST")!=0 + && fossil_strcmp(zToken,"HEAD")!=0 ){ + malformed_request("unsupported HTTP method"); + } + + if( nCycles==0 ){ + cgi_setenv("GATEWAY_INTERFACE","CGI/1.0"); + cgi_setenv("REQUEST_METHOD",zToken); + } + + zToken = extract_token(z, &z); + if( zToken==0 ){ + malformed_request("malformed URL in HTTP header"); + } + if( nCycles==0 ){ + cgi_setenv("REQUEST_URI", zToken); + cgi_setenv("SCRIPT_NAME", ""); + } + + for(i=0; zToken[i] && zToken[i]!='?'; i++){} + if( zToken[i] ) zToken[i++] = 0; + if( nCycles==0 ){ + cgi_setenv("PATH_INFO", zToken); + }else{ + cgi_replace_parameter("PATH_INFO", mprintf("%s",zToken)); + } + /* Get all the optional fields that follow the first line. 
*/ while( fgets(zLine,sizeof(zLine),g.httpIn) ){ char *zFieldName; char *zVal; + cgi_trace(zLine); zFieldName = extract_token(zLine,&zVal); if( zFieldName==0 || *zFieldName==0 ) break; - while( isspace(*zVal) ){ zVal++; } + while( fossil_isspace(*zVal) ){ zVal++; } i = strlen(zVal); - while( i>0 && isspace(zVal[i-1]) ){ i--; } + while( i>0 && fossil_isspace(zVal[i-1]) ){ i--; } zVal[i] = 0; - for(i=0; zFieldName[i]; i++){ zFieldName[i] = tolower(zFieldName[i]); } - if( strcmp(zFieldName,"user-agent:")==0 ){ - cgi_setenv("HTTP_USER_AGENT", zVal); - }else if( strcmp(zFieldName,"content-length:")==0 ){ - cgi_setenv("CONTENT_LENGTH", zVal); - }else if( strcmp(zFieldName,"referer:")==0 ){ - cgi_setenv("HTTP_REFERER", zVal); - }else if( strcmp(zFieldName,"host:")==0 ){ - cgi_setenv("HTTP_HOST", zVal); - }else if( strcmp(zFieldName,"content-type:")==0 ){ - cgi_setenv("CONTENT_TYPE", zVal); - }else if( strcmp(zFieldName,"cookie:")==0 ){ - cgi_setenv("HTTP_COOKIE", zVal); - }else if( strcmp(zFieldName,"if-none-match:")==0 ){ - cgi_setenv("HTTP_IF_NONE_MATCH", zVal); - }else if( strcmp(zFieldName,"if-modified-since:")==0 ){ - cgi_setenv("HTTP_IF_MODIFIED_SINCE", zVal); - } - } - + for(i=0; zFieldName[i]; i++){ + zFieldName[i] = fossil_tolower(zFieldName[i]); + } + if( fossil_strcmp(zFieldName,"content-length:")==0 ){ + content_length = atoi(zVal); + }else if( fossil_strcmp(zFieldName,"content-type:")==0 ){ + g.zContentType = zType = mprintf("%s", zVal); + }else if( fossil_strcmp(zFieldName,"host:")==0 ){ + if( nCycles==0 ){ + cgi_setenv("HTTP_HOST", zVal); + } + }else if( fossil_strcmp(zFieldName,"user-agent:")==0 ){ + if( nCycles==0 ){ + cgi_setenv("HTTP_USER_AGENT", zVal); + } + }else if( fossil_strcmp(zFieldName,"x-fossil-transport:")==0 ){ + if( fossil_strnicmp(zVal, "ssh", 3)==0 ){ + if( nCycles==0 ){ + g.fSshClient |= CGI_SSH_FOSSIL; + g.fullHttpReply = 0; + } + } + } + } + + if( nCycles==0 ){ + if( ! 
( g.fSshClient & CGI_SSH_FOSSIL ) ){ + /* did not find new fossil ssh transport */ + g.fSshClient &= ~CGI_SSH_CLIENT; + g.fullHttpReply = 1; + cgi_replace_parameter("REMOTE_ADDR", "127.0.0.1"); + } + } + + cgi_reset_content(); + cgi_destination(CGI_BODY); + + if( content_length>0 && zType ){ + blob_zero(&g.cgiIn); + if( fossil_strcmp(zType, "application/x-fossil")==0 ){ + blob_read_from_channel(&g.cgiIn, g.httpIn, content_length); + blob_uncompress(&g.cgiIn, &g.cgiIn); + }else if( fossil_strcmp(zType, "application/x-fossil-debug")==0 ){ + blob_read_from_channel(&g.cgiIn, g.httpIn, content_length); + }else if( fossil_strcmp(zType, "application/x-fossil-uncompressed")==0 ){ + blob_read_from_channel(&g.cgiIn, g.httpIn, content_length); + } + } + cgi_trace(0); + nCycles++; +} + +/* +** This routine handles the old fossil SSH probes +*/ +char *cgi_handle_ssh_probes(char *zLine, int zSize, char *z, char *zToken){ + /* Start looking for probes */ + while( fossil_strcmp(zToken, "echo")==0 ){ + zToken = extract_token(z, &z); + if( zToken==0 ){ + malformed_request("malformed probe"); + } + if( fossil_strncmp(zToken, "test", 4)==0 || + fossil_strncmp(zToken, "probe-", 6)==0 ){ + fprintf(g.httpOut, "%s\n", zToken); + fflush(g.httpOut); + }else{ + malformed_request("malformed probe"); + } + if( fgets(zLine, zSize, g.httpIn)==0 ){ + malformed_request("malformed probe"); + } + cgi_trace(zLine); + zToken = extract_token(zLine, &z); + if( zToken==0 ){ + malformed_request("malformed probe"); + } + } + + /* Got all probes now first transport_open is completed + ** so return the command that was requested + */ + g.fSshClient |= CGI_SSH_COMPAT; + return mprintf("%s", zToken); +} + +/* +** This routine handles the old fossil SSH transport_flip +** and transport_open communications if detected. +*/ +void cgi_handle_ssh_transport(const char *zCmd){ + char *z, *zToken; + char zLine[2000]; /* A single line of input. */ + + /* look for second newline of transport_flip */ + if( fgets(zLine, sizeof(zLine),g.httpIn)==0 ){ + malformed_request("incorrect transport_flip"); + } + cgi_trace(zLine); + zToken = extract_token(zLine, &z); + if( zToken && strlen(zToken)==0 ){ + /* look for path to fossil */ + if( fgets(zLine, sizeof(zLine),g.httpIn)==0 ){ + if( zCmd==0 ){ + malformed_request("missing fossil command"); + }else{ + /* no new command so exit */ + fossil_exit(0); + } + } + cgi_trace(zLine); + zToken = extract_token(zLine, &z); + if( zToken==0 ){ + malformed_request("malformed fossil command"); + } + /* see if we've seen the command */ + if( zCmd && zCmd[0] && fossil_strcmp(zToken, zCmd)==0 ){ + return; + }else{ + malformed_request("transport_open failed"); + } + }else{ + malformed_request("transport_flip failed"); + } +} + +/* +** This routine handles a single SCGI request which is coming in on +** g.httpIn and which replies on g.httpOut +** +** The SCGI request is read from g.httpIn and is used to initialize +** entries in the cgi_parameter() hash, as if those entries were +** environment variables. A call to cgi_init() completes +** the setup. Once all the setup is finished, this procedure returns +** and subsequent code handles the actual generation of the webpage. 
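A sketch of the SCGI framing consumed by cgi_handle_scgi_request() below; the values are taken loosely from the SCGI specification's example, "\0" marks NUL bytes, and the spaces are for readability only:

    /*   <len>:CONTENT_LENGTH\0 27\0 SCGI\0 1\0 REQUEST_METHOD\0 POST\0
    **         REQUEST_URI\0 /deepdir/test\0 ,<27 bytes of request body>
    **
    ** The decimal digits before ":" give the exact byte count of the
    ** name/value block, each NUL-terminated name/value pair is loaded with
    ** cgi_set_parameter(), and the trailing "," is skipped before the
    ** request body is read.
    */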
+*/ +void cgi_handle_scgi_request(void){ + char *zHdr; + char *zToFree; + int nHdr = 0; + int nRead; + int c, n, m; + + while( (c = fgetc(g.httpIn))!=EOF && fossil_isdigit((char)c) ){ + nHdr = nHdr*10 + (char)c - '0'; + } + if( nHdr<16 ) malformed_request("SCGI header too short"); + zToFree = zHdr = fossil_malloc(nHdr); + nRead = (int)fread(zHdr, 1, nHdr, g.httpIn); + if( nRead<nHdr ) malformed_request("cannot read entire SCGI header"); + nHdr = nRead; + while( nHdr ){ + for(n=0; n<nHdr && zHdr[n]; n++){} + for(m=n+1; m<nHdr && zHdr[m]; m++){} + if( m>=nHdr ) malformed_request("SCGI header formatting error"); + cgi_set_parameter(zHdr, zHdr+n+1); + zHdr += m+1; + nHdr -= m+1; + } + fossil_free(zToFree); + fgetc(g.httpIn); /* Read past the "," separating header from content */ cgi_init(); } + +#if INTERFACE +/* +** Bitmap values for the flags parameter to cgi_http_server(). +*/ +#define HTTP_SERVER_LOCALHOST 0x0001 /* Bind to 127.0.0.1 only */ +#define HTTP_SERVER_SCGI 0x0002 /* SCGI instead of HTTP */ +#define HTTP_SERVER_HAD_REPOSITORY 0x0004 /* Was the repository open? */ +#define HTTP_SERVER_HAD_CHECKOUT 0x0008 /* Was a checkout open? */ +#define HTTP_SERVER_REPOLIST 0x0010 /* Allow repo listing */ + +#endif /* INTERFACE */ + /* ** Maximum number of child processes that we can have running ** at one time before we start slowing things down. */ #define MAX_PARALLEL 2 @@ -1160,19 +1702,24 @@ ** The parent never returns from this procedure. ** ** Return 0 to each child as it runs. If unable to establish a ** listening socket, return non-zero. */ -int cgi_http_server(int mnPort, int mxPort, char *zBrowser){ -#ifdef __MINGW32__ +int cgi_http_server( + int mnPort, int mxPort, /* Range of TCP ports to try */ + const char *zBrowser, /* Run this browser, if not NULL */ + const char *zIpAddr, /* Bind to this IP address, if not null */ + int flags /* HTTP_SERVER_* flags */ +){ +#if defined(_WIN32) /* Use win32_http_server() instead */ - exit(1); + fossil_exit(1); #else int listener = -1; /* The server socket */ int connection; /* A socket for each individual connection */ fd_set readfds; /* Set of file descriptors for select() */ - size_t lenaddr; /* Length of the inaddr structure */ + socklen_t lenaddr; /* Length of the inaddr structure */ int child; /* PID of the child process */ int nchildren = 0; /* Number of child processes */ struct timeval delay; /* How long to wait inside select() */ struct sockaddr_in inaddr; /* The socket address */ int opt = 1; /* setsockopt flag */ @@ -1179,11 +1726,20 @@ int iPort = mnPort; while( iPort<=mxPort ){ memset(&inaddr, 0, sizeof(inaddr)); inaddr.sin_family = AF_INET; - inaddr.sin_addr.s_addr = INADDR_ANY; + if( zIpAddr ){ + inaddr.sin_addr.s_addr = inet_addr(zIpAddr); + if( inaddr.sin_addr.s_addr == (-1) ){ + fossil_fatal("not a valid IP address: %s", zIpAddr); + } + }else if( flags & HTTP_SERVER_LOCALHOST ){ + inaddr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); + }else{ + inaddr.sin_addr.s_addr = htonl(INADDR_ANY); + } inaddr.sin_port = htons(iPort); listener = socket(AF_INET, SOCK_STREAM, 0); if( listener<0 ){ iPort++; continue; @@ -1207,67 +1763,86 @@ " port in the range %d..%d", mnPort, mxPort); } } if( iPort>mxPort ) return 1; listen(listener,10); - if( iPort>mnPort ){ - printf("Listening for HTTP requests on TCP port %d\n", iPort); - fflush(stdout); - } + fossil_print("Listening for %s requests on TCP port %d\n", + (flags & HTTP_SERVER_SCGI)!=0?"SCGI":"HTTP", iPort); + fflush(stdout); if( zBrowser ){ - zBrowser = mprintf(zBrowser, iPort); - 
system(zBrowser); + assert( strstr(zBrowser,"%d")!=0 ); + zBrowser = mprintf(zBrowser /*works-like:"%d"*/, iPort); +#if defined(__CYGWIN__) + /* On Cygwin, we can do better than "echo" */ + if( strncmp(zBrowser, "echo ", 5)==0 ){ + wchar_t *wUrl = fossil_utf8_to_unicode(zBrowser+5); + wUrl[wcslen(wUrl)-2] = 0; /* Strip terminating " &" */ + if( (size_t)ShellExecuteW(0, L"open", wUrl, 0, 0, 1)<33 ){ + fossil_warning("cannot start browser\n"); + } + }else +#endif + if( system(zBrowser)<0 ){ + fossil_warning("cannot start browser: %s\n", zBrowser); + } } while( 1 ){ if( nchildren>MAX_PARALLEL ){ /* Slow down if connections are arriving too fast */ sleep( nchildren-MAX_PARALLEL ); } delay.tv_sec = 60; delay.tv_usec = 0; FD_ZERO(&readfds); + assert( listener>=0 ); FD_SET( listener, &readfds); - if( select( listener+1, &readfds, 0, 0, &delay) ){ + select( listener+1, &readfds, 0, 0, &delay); + if( FD_ISSET(listener, &readfds) ){ lenaddr = sizeof(inaddr); - connection = accept(listener, (struct sockaddr*)&inaddr, - (socklen_t*) &lenaddr); + connection = accept(listener, (struct sockaddr*)&inaddr, &lenaddr); if( connection>=0 ){ child = fork(); if( child!=0 ){ if( child>0 ) nchildren++; close(connection); }else{ + int nErr = 0, fd; close(0); - dup(connection); + fd = dup(connection); + if( fd!=0 ) nErr++; close(1); - dup(connection); - if( !g.fHttpTrace && !g.fSqlTrace ){ + fd = dup(connection); + if( fd!=1 ) nErr++; + if( !g.fAnyTrace ){ close(2); - dup(connection); + fd = dup(connection); + if( fd!=2 ) nErr++; } close(connection); - return 0; + return nErr; } } } /* Bury dead children */ while( waitpid(0, 0, WNOHANG)>0 ){ nchildren--; } } - /* NOT REACHED */ - exit(1); + /* NOT REACHED */ + fossil_exit(1); #endif + /* NOT REACHED */ + return 0; } /* ** Name of days and months. 
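The day and month names below feed the RFC822-style HTTP date handling (Date:, Expires:, If-Modified-Since:). As a worked example:

    /*   "Thu, 01 Jan 1970 00:00:00 GMT"
    **
    ** cgi_rfc822_parsedate() reads this as tm_mday=1, tm_mon=0 (Jan),
    ** tm_year=70, and mkgmtime() then yields 0, the Unix epoch; the
    ** year/leap-year arithmetic in mkgmtime() counts whole days since
    ** 1970-01-01 before adding in the time of day.
    */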
*/ -static const char *azDays[] = +static const char *const azDays[] = {"Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", 0}; -static const char *azMonths[] = +static const char *const azMonths[] = {"Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec", 0}; /* @@ -1308,11 +1883,11 @@ &t.tm_mday, zMonth, &t.tm_year, &t.tm_hour, &t.tm_min, &t.tm_sec)){ if( t.tm_year > 1900 ) t.tm_year -= 1900; for(t.tm_mon=0; azMonths[t.tm_mon]; t.tm_mon++){ - if( !strncasecmp( azMonths[t.tm_mon], zMonth, 3 )){ + if( !fossil_strnicmp( azMonths[t.tm_mon], zMonth, 3 )){ return mkgmtime(&t); } } } @@ -1338,11 +1913,11 @@ p->tm_mon %= 12; } isLeapYr = p->tm_year%4==0 && (p->tm_year%100!=0 || (p->tm_year+300)%400==0); p->tm_yday = priorDays[p->tm_mon] + p->tm_mday - 1; if( isLeapYr && p->tm_mon>1 ) p->tm_yday++; - nDay = (p->tm_year-70)*365 + (p->tm_year-69)/4 -p->tm_year/100 + + nDay = (p->tm_year-70)*365 + (p->tm_year-69)/4 -p->tm_year/100 + (p->tm_year+300)/400 + p->tm_yday; t = ((nDay*24 + p->tm_hour)*60 + p->tm_min)*60 + p->tm_sec; return t; } @@ -1356,7 +1931,25 @@ if( zIf==0 ) return; if( objectTime > cgi_rfc822_parsedate(zIf) ) return; cgi_set_status(304,"Not Modified"); cgi_reset_content(); cgi_reply(); - exit(0); + fossil_exit(0); +} + +/* +** Check to see if the remote client is SSH and return +** its IP or return default +*/ +const char *cgi_ssh_remote_addr(const char *zDefault){ + char *zIndex; + const char *zSshConn = fossil_getenv("SSH_CONNECTION"); + + if( zSshConn && zSshConn[0] ){ + char *zSshClient = mprintf("%s",zSshConn); + if( (zIndex = strchr(zSshClient,' '))!=0 ){ + zSshClient[zIndex-zSshClient] = '\0'; + return zSshClient; + } + } + return zDefault; } Index: src/checkin.c ================================================================== --- src/checkin.c +++ src/checkin.c @@ -2,11 +2,11 @@ ** Copyright (c) 2007 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) - +** ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: @@ -32,315 +32,971 @@ ** are not true files results in a fatal error. */ static void status_report( Blob *report, /* Append the status report here */ const char *zPrefix, /* Prefix on each line of the report */ - int missingIsFatal /* MISSING and NOT_A_FILE are fatal errors */ + int missingIsFatal, /* MISSING and NOT_A_FILE are fatal errors */ + int cwdRelative /* Report relative to the current working dir */ ){ Stmt q; int nPrefix = strlen(zPrefix); int nErr = 0; - db_prepare(&q, - "SELECT pathname, deleted, chnged, rid, coalesce(origname!=pathname,0)" + Blob rewrittenPathname; + Blob where; + const char *zName; + int i; + + blob_zero(&where); + for(i=2; i<g.argc; i++){ + Blob fname; + file_tree_name(g.argv[i], &fname, 0, 1); + zName = blob_str(&fname); + if( fossil_strcmp(zName, ".")==0 ){ + blob_reset(&where); + break; + } + blob_append_sql(&where, + " %s (pathname=%Q %s) " + "OR (pathname>'%q/' %s AND pathname<'%q0' %s)", + (blob_size(&where)>0) ? 
"OR" : "AND", zName, + filename_collation(), zName, filename_collation(), + zName, filename_collation() + ); + } + + db_prepare(&q, + "SELECT pathname, deleted, chnged," + " rid, coalesce(origname!=pathname,0), islink" " FROM vfile " - " WHERE file_is_selected(id)" - " AND (chnged OR deleted OR rid=0 OR pathname!=origname) ORDER BY 1" + " WHERE is_selected(id) %s" + " AND (chnged OR deleted OR rid=0 OR pathname!=origname)" + " ORDER BY 1 /*scan*/", + blob_sql_text(&where) ); + blob_zero(&rewrittenPathname); while( db_step(&q)==SQLITE_ROW ){ const char *zPathname = db_column_text(&q,0); + const char *zDisplayName = zPathname; int isDeleted = db_column_int(&q, 1); int isChnged = db_column_int(&q,2); int isNew = db_column_int(&q,3)==0; int isRenamed = db_column_int(&q,4); - char *zFullName = mprintf("%s/%s", g.zLocalRoot, zPathname); + int isLink = db_column_int(&q,5); + char *zFullName = mprintf("%s%s", g.zLocalRoot, zPathname); + if( cwdRelative ){ + file_relative_name(zFullName, &rewrittenPathname, 0); + zDisplayName = blob_str(&rewrittenPathname); + if( zDisplayName[0]=='.' && zDisplayName[1]=='/' ){ + zDisplayName += 2; /* no unnecessary ./ prefix */ + } + } blob_append(report, zPrefix, nPrefix); if( isDeleted ){ - blob_appendf(report, "DELETED %s\n", zPathname); - }else if( !file_isfile(zFullName) ){ - if( access(zFullName, 0)==0 ){ - blob_appendf(report, "NOT_A_FILE %s\n", zPathname); + blob_appendf(report, "DELETED %s\n", zDisplayName); + }else if( !file_wd_isfile_or_link(zFullName) ){ + if( file_access(zFullName, F_OK)==0 ){ + blob_appendf(report, "NOT_A_FILE %s\n", zDisplayName); if( missingIsFatal ){ - fossil_warning("not a file: %s", zPathname); + fossil_warning("not a file: %s", zDisplayName); nErr++; } }else{ - blob_appendf(report, "MISSING %s\n", zPathname); + blob_appendf(report, "MISSING %s\n", zDisplayName); if( missingIsFatal ){ - fossil_warning("missing file: %s", zPathname); + fossil_warning("missing file: %s", zDisplayName); nErr++; } } }else if( isNew ){ - blob_appendf(report, "ADDED %s\n", zPathname); - }else if( isDeleted ){ - blob_appendf(report, "DELETED %s\n", zPathname); - }else if( isChnged==2 ){ - blob_appendf(report, "UPDATED_BY_MERGE %s\n", zPathname); - }else if( isChnged==3 ){ - blob_appendf(report, "ADDED_BY_MERGE %s\n", zPathname); - }else if( isChnged==1 ){ - blob_appendf(report, "EDITED %s\n", zPathname); + blob_appendf(report, "ADDED %s\n", zDisplayName); + }else if( isChnged ){ + if( isChnged==2 ){ + blob_appendf(report, "UPDATED_BY_MERGE %s\n", zDisplayName); + }else if( isChnged==3 ){ + blob_appendf(report, "ADDED_BY_MERGE %s\n", zDisplayName); + }else if( isChnged==4 ){ + blob_appendf(report, "UPDATED_BY_INTEGRATE %s\n", zDisplayName); + }else if( isChnged==5 ){ + blob_appendf(report, "ADDED_BY_INTEGRATE %s\n", zDisplayName); + }else if( isChnged==6 ){ + blob_appendf(report, "EXECUTABLE %s\n", zDisplayName); + }else if( isChnged==7 ){ + blob_appendf(report, "SYMLINK %s\n", zDisplayName); + }else if( isChnged==8 ){ + blob_appendf(report, "UNEXEC %s\n", zDisplayName); + }else if( isChnged==9 ){ + blob_appendf(report, "UNLINK %s\n", zDisplayName); + }else if( !isLink && file_contains_merge_marker(zFullName) ){ + blob_appendf(report, "CONFLICT %s\n", zDisplayName); + }else{ + blob_appendf(report, "EDITED %s\n", zDisplayName); + } }else if( isRenamed ){ - blob_appendf(report, "RENAMED %s\n", zPathname); + blob_appendf(report, "RENAMED %s\n", zDisplayName); + }else{ + report->nUsed -= nPrefix; } free(zFullName); } + blob_reset(&rewrittenPathname); 
db_finalize(&q); - db_prepare(&q, "SELECT uuid FROM vmerge JOIN blob ON merge=rid" - " WHERE id=0"); + db_prepare(&q, "SELECT uuid, id FROM vmerge JOIN blob ON merge=rid" + " WHERE id<=0"); while( db_step(&q)==SQLITE_ROW ){ + const char *zLabel = "MERGED_WITH "; + switch( db_column_int(&q, 1) ){ + case -1: zLabel = "CHERRYPICK "; break; + case -2: zLabel = "BACKOUT "; break; + case -4: zLabel = "INTEGRATE "; break; + } blob_append(report, zPrefix, nPrefix); - blob_appendf(report, "MERGED_WITH %s\n", db_column_text(&q, 0)); + blob_appendf(report, "%s%s\n", zLabel, db_column_text(&q, 0)); } db_finalize(&q); if( nErr ){ fossil_fatal("aborting due to prior errors"); } } /* -** COMMAND: changes -** -** Usage: %fossil changes -** -** Report on the edit status of all files in the current checkout. -** See also the "status" and "extra" commands. +** Use the "relative-paths" setting and the --abs-paths and +** --rel-paths command line options to determine whether the +** status report should be shown relative to the current +** working directory. */ -void changes_cmd(void){ +static int determine_cwd_relative_option() +{ + int relativePaths = db_get_boolean("relative-paths", 1); + int absPathOption = find_option("abs-paths", 0, 0)!=0; + int relPathOption = find_option("rel-paths", 0, 0)!=0; + if( absPathOption ){ relativePaths = 0; } + if( relPathOption ){ relativePaths = 1; } + return relativePaths; +} + +void print_changes( + int useSha1sum, /* Verify file status using SHA1 hashing rather + than relying on file mtimes. */ + int showHdr, /* Identify the repository if there are changes */ + int verboseFlag, /* Say "(none)" if there are no changes */ + int cwdRelative /* Report relative to the current working dir */ +){ Blob report; int vid; - db_must_be_within_tree(); blob_zero(&report); + vid = db_lget_int("checkout", 0); - vfile_check_signature(vid, 0); - status_report(&report, "", 0); + vfile_check_signature(vid, useSha1sum ? CKSIG_SHA1 : 0); + status_report(&report, "", 0, cwdRelative); + if( verboseFlag && blob_size(&report)==0 ){ + blob_append(&report, " (none)\n", -1); + } + if( showHdr && blob_size(&report)>0 ){ + fossil_print("Changes for %s at %s:\n", db_get("project-name","???"), + g.zLocalRoot); + } blob_write_to_file(&report, "-"); + blob_reset(&report); +} + +/* +** COMMAND: changes +** +** Usage: %fossil changes ?OPTIONS? +** +** Report on the edit status of all files in the current checkout. +** +** Pathnames are displayed according to the "relative-paths" setting, +** unless overridden by the --abs-paths or --rel-paths options. +** +** Options: +** --abs-paths Display absolute pathnames. +** --rel-paths Display pathnames relative to the current working +** directory. +** --sha1sum Verify file status using SHA1 hashing rather +** than relying on file mtimes. +** --header Identify the repository if there are changes +** -v|--verbose Say "(none)" if there are no changes +** +** See also: extras, ls, status +*/ +void changes_cmd(void){ + int useSha1sum = find_option("sha1sum", 0, 0)!=0; + int showHdr = find_option("header",0,0)!=0; + int verboseFlag = find_option("verbose","v",0)!=0; + int cwdRelative = 0; + db_must_be_within_tree(); + cwdRelative = determine_cwd_relative_option(); + + /* We should be done with options.. */ + verify_all_options(); + + print_changes(useSha1sum, showHdr, verboseFlag, cwdRelative); } /* ** COMMAND: status ** -** Usage: %fossil status +** Usage: %fossil status ?OPTIONS? ** ** Report on the status of the current checkout. 
+** +** Pathnames are displayed according to the "relative-paths" setting, +** unless overridden by the --abs-paths or --rel-paths options. +** +** Options: +** +** --abs-paths Display absolute pathnames. +** --rel-paths Display pathnames relative to the current working +** directory. +** --sha1sum Verify file status using SHA1 hashing rather +** than relying on file mtimes. +** +** See also: changes, extras, ls */ void status_cmd(void){ int vid; + int useSha1sum = find_option("sha1sum", 0, 0)!=0; + int showHdr = find_option("header",0,0)!=0; + int verboseFlag = find_option("verbose","v",0)!=0; + int cwdRelative = 0; db_must_be_within_tree(); /* 012345678901234 */ - printf("repository: %s\n", db_lget("repository","")); - printf("local-root: %s\n", g.zLocalRoot); - printf("server-code: %s\n", db_get("server-code", "")); + cwdRelative = determine_cwd_relative_option(); + + /* We should be done with options.. */ + verify_all_options(); + + fossil_print("repository: %s\n", db_repository_filename()); + fossil_print("local-root: %s\n", g.zLocalRoot); + if( g.zConfigDbName ){ + fossil_print("config-db: %s\n", g.zConfigDbName); + } vid = db_lget_int("checkout", 0); if( vid ){ - show_common_info(vid, "checkout:", 0); + show_common_info(vid, "checkout:", 1, 1); } - changes_cmd(); + db_record_repository_filename(0); + print_changes(useSha1sum, showHdr, verboseFlag, cwdRelative); + leaf_ambiguity_warning(vid, vid); +} + +/* +** Take care of -r version of ls command +*/ +static void ls_cmd_rev( + const char *zRev, /* Revision string given */ + int verboseFlag, /* Verbose flag given */ + int showAge, /* Age flag given */ + int timeOrder /* Order by time flag given */ +){ + Stmt q; + char *zOrderBy = "pathname COLLATE nocase"; + char *zName; + Blob where; + int rid; + int i; + + /* Handle given file names */ + blob_zero(&where); + for(i=2; i<g.argc; i++){ + Blob fname; + file_tree_name(g.argv[i], &fname, 0, 1); + zName = blob_str(&fname); + if( fossil_strcmp(zName, ".")==0 ){ + blob_reset(&where); + break; + } + blob_append_sql(&where, + " %s (pathname=%Q %s) " + "OR (pathname>'%q/' %s AND pathname<'%q0' %s)", + (blob_size(&where)>0) ? "OR" : "AND (", zName, + filename_collation(), zName, filename_collation(), + zName, filename_collation() + ); + } + if( blob_size(&where)>0 ){ + blob_append_sql(&where, ")"); + } + + rid = symbolic_name_to_rid(zRev, "ci"); + if( rid==0 ){ + fossil_fatal("not a valid check-in: %s", zRev); + } + + if( timeOrder ){ + zOrderBy = "mtime DESC"; + } + + compute_fileage(rid,0); + db_prepare(&q, + "SELECT datetime(fileage.mtime, toLocal()), fileage.pathname,\n" + " blob.size\n" + " FROM fileage, blob\n" + " WHERE blob.rid=fileage.fid %s\n" + " ORDER BY %s;", blob_sql_text(&where), zOrderBy /*safe-for-%s*/ + ); + blob_reset(&where); + + while( db_step(&q)==SQLITE_ROW ){ + const char *zTime = db_column_text(&q,0); + const char *zFile = db_column_text(&q,1); + int size = db_column_int(&q,2); + if( verboseFlag ){ + fossil_print("%s %7d %s\n", zTime, size, zFile); + }else if( showAge ){ + fossil_print("%s %s\n", zTime, zFile); + }else{ + fossil_print("%s\n", zFile); + } + } + db_finalize(&q); } /* ** COMMAND: ls ** -** Usage: %fossil ls [-l] +** Usage: %fossil ls ?OPTIONS? ?FILENAMES? +** +** Show the names of all files in the current checkout. The -v provides +** extra information about each file. If FILENAMES are included, only +** the files listed (or their children if they are directories) are shown. +** +** If -r is given a specific check-in is listed. 
In this case -R can be +** given to query another repository. +** +** Options: +** --age Show when each file was committed +** -v|--verbose Provide extra information about each file. +** -t Sort output in time order. +** -r VERSION The specific check-in to list +** -R|--repository FILE Extract info from repository FILE ** -** Show the names of all files in the current checkout. The -l provides -** extra information about each file. +** See also: changes, extras, status */ void ls_cmd(void){ int vid; Stmt q; - int isBrief; + int verboseFlag; + int showAge; + int timeOrder; + char *zOrderBy = "pathname"; + Blob where; + int i; + const char *zName; + const char *zRev; - isBrief = find_option("l","l", 0)==0; + verboseFlag = find_option("verbose","v", 0)!=0; + if( !verboseFlag ){ + verboseFlag = find_option("l","l", 0)!=0; /* deprecated */ + } + showAge = find_option("age",0,0)!=0; + zRev = find_option("r","r",1); + timeOrder = find_option("t","t",0)!=0; + + if( zRev!=0 ){ + db_find_and_open_repository(0, 0); + verify_all_options(); + ls_cmd_rev(zRev,verboseFlag,showAge,timeOrder); + return; + }else if( find_option("R",0,1)!=0 ){ + fossil_fatal("the -r is required in addition to -R"); + } + db_must_be_within_tree(); vid = db_lget_int("checkout", 0); + if( timeOrder ){ + if( showAge ){ + zOrderBy = mprintf("checkin_mtime(%d,rid) DESC", vid); + }else{ + zOrderBy = "mtime DESC"; + } + } + verify_all_options(); + blob_zero(&where); + for(i=2; i<g.argc; i++){ + Blob fname; + file_tree_name(g.argv[i], &fname, 0, 1); + zName = blob_str(&fname); + if( fossil_strcmp(zName, ".")==0 ){ + blob_reset(&where); + break; + } + blob_append_sql(&where, + " %s (pathname=%Q %s) " + "OR (pathname>'%q/' %s AND pathname<'%q0' %s)", + (blob_size(&where)>0) ? "OR" : "WHERE", zName, + filename_collation(), zName, filename_collation(), + zName, filename_collation() + ); + } vfile_check_signature(vid, 0); - db_prepare(&q, - "SELECT pathname, deleted, rid, chnged, coalesce(origname!=pathname,0)" - " FROM vfile" - " ORDER BY 1" - ); + if( showAge ){ + db_prepare(&q, + "SELECT pathname, deleted, rid, chnged, coalesce(origname!=pathname,0)," + " datetime(checkin_mtime(%d,rid),'unixepoch',toLocal())" + " FROM vfile %s" + " ORDER BY %s", + vid, blob_sql_text(&where), zOrderBy /*safe-for-%s*/ + ); + }else{ + db_prepare(&q, + "SELECT pathname, deleted, rid, chnged," + " coalesce(origname!=pathname,0), islink" + " FROM vfile %s" + " ORDER BY %s", blob_sql_text(&where), zOrderBy /*safe-for-%s*/ + ); + } + blob_reset(&where); while( db_step(&q)==SQLITE_ROW ){ const char *zPathname = db_column_text(&q,0); int isDeleted = db_column_int(&q, 1); int isNew = db_column_int(&q,2)==0; int chnged = db_column_int(&q,3); int renamed = db_column_int(&q,4); - char *zFullName = mprintf("%s/%s", g.zLocalRoot, zPathname); - if( isBrief ){ - printf("%s\n", zPathname); - }else if( isNew ){ - printf("ADDED %s\n", zPathname); - }else if( !file_isfile(zFullName) ){ - if( access(zFullName, 0)==0 ){ - printf("NOT_A_FILE %s\n", zPathname); - }else{ - printf("MISSING %s\n", zPathname); - } - }else if( isDeleted ){ - printf("DELETED %s\n", zPathname); - }else if( chnged ){ - printf("EDITED %s\n", zPathname); - }else if( renamed ){ - printf("RENAMED %s\n", zPathname); - }else{ - printf("UNCHANGED %s\n", zPathname); + int isLink = db_column_int(&q,5); + char *zFullName = mprintf("%s%s", g.zLocalRoot, zPathname); + const char *type = ""; + if( verboseFlag ){ + if( isNew ){ + type = "ADDED "; + }else if( isDeleted ){ + type = "DELETED "; + }else if( 
!file_wd_isfile_or_link(zFullName) ){ + if( file_access(zFullName, F_OK)==0 ){ + type = "NOT_A_FILE "; + }else{ + type = "MISSING "; + } + }else if( chnged ){ + if( chnged==2 ){ + type = "UPDATED_BY_MERGE "; + }else if( chnged==3 ){ + type = "ADDED_BY_MERGE "; + }else if( chnged==4 ){ + type = "UPDATED_BY_INTEGRATE "; + }else if( chnged==5 ){ + type = "ADDED_BY_INTEGRATE "; + }else if( !isLink && file_contains_merge_marker(zFullName) ){ + type = "CONFLICT "; + }else{ + type = "EDITED "; + } + }else if( renamed ){ + type = "RENAMED "; + }else{ + type = "UNCHANGED "; + } + } + if( showAge ){ + fossil_print("%s%s %s\n", type, db_column_text(&q, 5), zPathname); + }else{ + fossil_print("%s%s\n", type, zPathname); } free(zFullName); } db_finalize(&q); } /* -** Construct and return a string which is an SQL expression that will -** be TRUE if value zVal matches any of the GLOB expressions in the list -** zGlobList. For example: -** -** zVal: "x" -** zGlobList: "*.o,*.obj" -** -** Result: "(x GLOB '*.o' OR x GLOB '*.obj')" -** -** Each element of the GLOB list may optionally be enclosed in either '...' -** or "...". This allows commas in the expression. Whitespace at the -** beginning and end of each GLOB pattern is ignored, except when enclosed -** within '...' or "...". -** -** This routine makes no effort to free the memory space it uses. +** Create a TEMP table named SFILE and add all unmanaged files named on +** the command-line to that table. If directories are named, then add +** all unmanaged files contained underneath those directories. If there +** are no files or directories named on the command-line, then add all +** unmanaged files anywhere in the checkout. */ -char *glob_expr(const char *zVal, const char *zGlobList){ - Blob expr; - char *zSep = "("; - int nTerm = 0; - int i; - int cTerm; - - if( zGlobList==0 || zGlobList[0]==0 ) return "0"; - blob_zero(&expr); - while( zGlobList[0] ){ - while( isspace(zGlobList[0]) || zGlobList[0]==',' ) zGlobList++; - if( zGlobList[0]==0 ) break; - if( zGlobList[0]=='\'' || zGlobList[0]=='"' ){ - cTerm = zGlobList[0]; - zGlobList++; - }else{ - cTerm = ','; - } - for(i=0; zGlobList[i] && zGlobList[i]!=cTerm; i++){} - if( cTerm==',' ){ - while( i>0 && isspace(zGlobList[i-1]) ){ i--; } - } - blob_appendf(&expr, "%s%s GLOB '%.*q'", zSep, zVal, i, zGlobList); - zSep = " OR "; - if( cTerm!=',' && zGlobList[i] ) i++; - zGlobList += i; - if( zGlobList[0] ) zGlobList++; - nTerm++; - } - if( nTerm ){ - blob_appendf(&expr, ")"); - return blob_str(&expr); - }else{ - return "0"; +static void locate_unmanaged_files( + int argc, /* Number of command-line arguments to examine */ + char **argv, /* values of command-line arguments */ + unsigned scanFlags, /* Zero or more SCAN_xxx flags */ + Glob *pIgnore1, /* Do not add files that match this GLOB */ + Glob *pIgnore2 /* Omit files matching this GLOB too */ +){ + Blob name; /* Name of a candidate file or directory */ + char *zName; /* Name of a candidate file or directory */ + int isDir; /* 1 for a directory, 0 if doesn't exist, 2 for anything else */ + int i; /* Loop counter */ + int nRoot; /* length of g.zLocalRoot */ + + db_multi_exec("CREATE TEMP TABLE sfile(x TEXT PRIMARY KEY %s)", + filename_collation()); + nRoot = (int)strlen(g.zLocalRoot); + if( argc==0 ){ + blob_init(&name, g.zLocalRoot, nRoot - 1); + vfile_scan(&name, blob_size(&name), scanFlags, pIgnore1, pIgnore2); + blob_reset(&name); + }else{ + for(i=0; i<argc; i++){ + file_canonical_name(argv[i], &name, 0); + zName = blob_str(&name); + isDir = 
file_wd_isdir(zName); + if( isDir==1 ){ + vfile_scan(&name, nRoot-1, scanFlags, pIgnore1, pIgnore2); + }else if( isDir==0 ){ + fossil_warning("not found: %s", &zName[nRoot]); + }else if( file_access(zName, R_OK) ){ + fossil_fatal("cannot open %s", &zName[nRoot]); + }else{ + db_multi_exec( + "INSERT OR IGNORE INTO sfile(x) VALUES(%Q)", + &zName[nRoot] + ); + } + blob_reset(&name); + } } } /* ** COMMAND: extras -** Usage: %fossil extras ?--dotfiles? ?--ignore GLOBPATTERN? +** Usage: %fossil extras ?OPTIONS? ?PATH1 ...? ** ** Print a list of all files in the source tree that are not part of -** the current checkout. See also the "clean" command. +** the current checkout. See also the "clean" command. If paths are +** specified, only files in the given directories will be listed. ** ** Files and subdirectories whose names begin with "." are normally ** ignored but can be included by adding the --dotfiles option. +** +** The GLOBPATTERN is a comma-separated list of GLOB expressions for +** files that are ignored. The GLOBPATTERN specified by the "ignore-glob" +** is used if the --ignore option is omitted. +** +** Pathnames are displayed according to the "relative-paths" setting, +** unless overridden by the --abs-paths or --rel-paths options. +** +** Options: +** --abs-paths Display absolute pathnames. +** --case-sensitive <BOOL> override case-sensitive setting +** --dotfiles include files beginning with a dot (".") +** --header Identify the repository if there are extras +** --ignore <CSG> ignore files matching patterns from the argument +** --rel-paths Display pathnames relative to the current working +** directory. +** +** See also: changes, clean, status */ -void extra_cmd(void){ - Blob path; - Blob repo; +void extras_cmd(void){ Stmt q; - int n; const char *zIgnoreFlag = find_option("ignore",0,1); - int allFlag = find_option("dotfiles",0,0)!=0; + unsigned scanFlags = find_option("dotfiles",0,0)!=0 ? SCAN_ALL : 0; + int showHdr = find_option("header",0,0)!=0; + int cwdRelative = 0; + Glob *pIgnore; + Blob rewrittenPathname; + const char *zPathname, *zDisplayName; + if( find_option("temp",0,0)!=0 ) scanFlags |= SCAN_TEMP; db_must_be_within_tree(); - db_multi_exec("CREATE TEMP TABLE sfile(x TEXT PRIMARY KEY)"); - n = strlen(g.zLocalRoot); - blob_init(&path, g.zLocalRoot, n-1); + cwdRelative = determine_cwd_relative_option(); + + if( db_get_boolean("dotfiles", 0) ) scanFlags |= SCAN_ALL; + + /* We should be done with options.. 
*/ + verify_all_options(); + if( zIgnoreFlag==0 ){ zIgnoreFlag = db_get("ignore-glob", 0); } - vfile_scan(0, &path, blob_size(&path), allFlag); - db_prepare(&q, + pIgnore = glob_create(zIgnoreFlag); + locate_unmanaged_files(g.argc-2, g.argv+2, scanFlags, pIgnore, 0); + glob_free(pIgnore); + db_prepare(&q, "SELECT x FROM sfile" - " WHERE x NOT IN ('manifest','manifest.uuid','_FOSSIL_'," - "'_FOSSIL_-journal','.fos','.fos-journal')" - " AND NOT %s" + " WHERE x NOT IN (%s)" " ORDER BY 1", - glob_expr("x", zIgnoreFlag) + fossil_all_reserved_names(0) ); - if( file_tree_name(g.zRepositoryName, &repo, 0) ){ - db_multi_exec("DELETE FROM sfile WHERE x=%B", &repo); - } + db_multi_exec("DELETE FROM sfile WHERE x IN (SELECT pathname FROM vfile)"); + blob_zero(&rewrittenPathname); + g.allowSymlinks = 1; /* Report on symbolic links */ while( db_step(&q)==SQLITE_ROW ){ - printf("%s\n", db_column_text(&q, 0)); + zDisplayName = zPathname = db_column_text(&q, 0); + if( cwdRelative ){ + char *zFullName = mprintf("%s%s", g.zLocalRoot, zPathname); + file_relative_name(zFullName, &rewrittenPathname, 0); + free(zFullName); + zDisplayName = blob_str(&rewrittenPathname); + if( zDisplayName[0]=='.' && zDisplayName[1]=='/' ){ + zDisplayName += 2; /* no unnecessary ./ prefix */ + } + } + if( showHdr ){ + showHdr = 0; + fossil_print("Extras for %s at %s:\n", db_get("project-name","???"), + g.zLocalRoot); + } + fossil_print("%s\n", zDisplayName); } + blob_reset(&rewrittenPathname); db_finalize(&q); } /* ** COMMAND: clean -** Usage: %fossil clean ?--force? ?--dotfiles? +** Usage: %fossil clean ?OPTIONS? ?PATH ...? ** ** Delete all "extra" files in the source tree. "Extra" files are -** files that are not officially part of the checkout. See also -** the "extra" command. This operation cannot be undone. -** -** You will be prompted before removing each file. If you are -** sure you wish to remove all "extra" files you can specify the -** optional --force flag and no prompts will be issued. -** -** Files and subdirectories whose names begin with "." are -** normally ignored. They are included if the "--dotfiles" option -** is used. +** files that are not officially part of the checkout. This operation +** cannot be undone. If one or more PATH arguments appear, then only +** the files named, or files contained with directories named, will be +** removed. +** +** Prompted are issued to confirm the removal of each file, unless +** the --force flag is used or unless the file matches glob pattern +** specified by the --clean option. No file that matches glob patterns +** specified by --ignore or --keep will ever be deleted. The default +** values for --clean, --ignore, and --keep are determined by the +** (versionable) clean-glob, ignore-glob, and keep-glob settings. +** Files and subdirectories whose names begin with "." are automatically +** ignored unless the --dotfiles option is used. +** +** The --verily option ignores the keep-glob and ignore-glob settings +** and turns on --force, --dotfiles, and --emptydirs. Use the --verily +** option when you really want to clean up everything. Extreme care +** should be exercised when using the --verily option. +** +** Options: +** --allckouts Check for empty directories within any checkouts +** that may be nested within the current one. This +** option should be used with great care because the +** empty-dirs setting (and other applicable settings) +** belonging to the other repositories, if any, will +** not be checked. 
+** --case-sensitive <BOOL> override case-sensitive setting +** --dirsonly Only remove empty directories. No files will +** be removed. Using this option will automatically +** enable the --emptydirs option as well. +** --disable-undo WARNING: This option disables use of the undo +** mechanism for this clean operation and should be +** used with extreme caution. +** --dotfiles Include files beginning with a dot ("."). +** --emptydirs Remove any empty directories that are not +** explicitly exempted via the empty-dirs setting +** or another applicable setting or command line +** argument. Matching files, if any, are removed +** prior to checking for any empty directories; +** therefore, directories that contain only files +** that were removed will be removed as well. +** -f|--force Remove files without prompting. +** -i|--prompt Prompt before removing each file. +** -x|--verily WARNING: Removes everything that is not a managed +** file or the repository itself. This option +** implies the --force, --emptydirs, --dotfiles, and +** --disable-undo options. Furthermore, it completely +** disregards the keep-glob and ignore-glob settings. +** However, it does honor the --ignore and --keep +** options. +** --clean <CSG> WARNING: Never prompt to delete any files matching +** this comma separated list of glob patterns. Also, +** deletions of any files matching this pattern list +** cannot be undone. +** --ignore <CSG> Ignore files matching patterns from the +** comma separated list of glob patterns. +** --keep <CSG> Keep files matching this comma separated +** list of glob patterns. +** -n|--dry-run Delete nothing, but display what would have been +** deleted. +** --no-prompt This option disables prompting the user for input +** and assumes an answer of 'No' for every question. +** --temp Remove only Fossil-generated temporary files. +** -v|--verbose Show all files as they are removed. +** +** See also: addremove, extras, status */ void clean_cmd(void){ - int allFlag; - int dotfilesFlag; - Blob path, repo; - Stmt q; - int n; - allFlag = find_option("force","f",0)!=0; - dotfilesFlag = find_option("dotfiles",0,0)!=0; + int allFileFlag, allDirFlag, dryRunFlag, verboseFlag; + int emptyDirsFlag, dirsOnlyFlag; + int disableUndo, noPrompt; + int alwaysPrompt = 0; + unsigned scanFlags = 0; + int verilyFlag = 0; + const char *zIgnoreFlag, *zKeepFlag, *zCleanFlag; + Glob *pIgnore, *pKeep, *pClean; + int nRoot; + +#ifndef UNDO_SIZE_LIMIT /* TODO: Setting? 
*/ +#define UNDO_SIZE_LIMIT (10*1024*1024) /* 10MiB */ +#endif + + undo_capture_command_line(); + dryRunFlag = find_option("dry-run","n",0)!=0; + if( !dryRunFlag ){ + dryRunFlag = find_option("test",0,0)!=0; /* deprecated */ + } + if( !dryRunFlag ){ + dryRunFlag = find_option("whatif",0,0)!=0; + } + disableUndo = find_option("disable-undo",0,0)!=0; + noPrompt = find_option("no-prompt",0,0)!=0; + alwaysPrompt = find_option("prompt","i",0)!=0; + allFileFlag = allDirFlag = find_option("force","f",0)!=0; + dirsOnlyFlag = find_option("dirsonly",0,0)!=0; + emptyDirsFlag = find_option("emptydirs","d",0)!=0 || dirsOnlyFlag; + if( find_option("dotfiles",0,0)!=0 ) scanFlags |= SCAN_ALL; + if( find_option("temp",0,0)!=0 ) scanFlags |= SCAN_TEMP; + if( find_option("allckouts",0,0)!=0 ) scanFlags |= SCAN_NESTED; + zIgnoreFlag = find_option("ignore",0,1); + verboseFlag = find_option("verbose","v",0)!=0; + zKeepFlag = find_option("keep",0,1); + zCleanFlag = find_option("clean",0,1); db_must_be_within_tree(); - db_multi_exec("CREATE TEMP TABLE sfile(x TEXT PRIMARY KEY)"); - n = strlen(g.zLocalRoot); - blob_init(&path, g.zLocalRoot, n-1); - vfile_scan(0, &path, blob_size(&path), dotfilesFlag); - db_prepare(&q, - "SELECT %Q || x FROM sfile" - " WHERE x NOT IN ('manifest','manifest.uuid','_FOSSIL_'," - "'_FOSSIL_-journal','.fos','.fos-journal')" - " ORDER BY 1", g.zLocalRoot); - if( file_tree_name(g.zRepositoryName, &repo, 0) ){ - db_multi_exec("DELETE FROM sfile WHERE x=%B", &repo); - } - while( db_step(&q)==SQLITE_ROW ){ - if( allFlag ){ - unlink(db_column_text(&q, 0)); - }else{ - Blob ans; - char *prompt = mprintf("remove unmanaged file \"%s\" (y/N)? ", - db_column_text(&q, 0)); - blob_zero(&ans); - prompt_user(prompt, &ans); - if( blob_str(&ans)[0]=='y' ){ - unlink(db_column_text(&q, 0)); - } - } - } - db_finalize(&q); + if( find_option("verily","x",0)!=0 ){ + verilyFlag = allFileFlag = allDirFlag = 1; + emptyDirsFlag = 1; + disableUndo = 1; + scanFlags |= SCAN_ALL; + zCleanFlag = 0; + } + if( zIgnoreFlag==0 && !verilyFlag ){ + zIgnoreFlag = db_get("ignore-glob", 0); + } + if( zKeepFlag==0 && !verilyFlag ){ + zKeepFlag = db_get("keep-glob", 0); + } + if( zCleanFlag==0 && !verilyFlag ){ + zCleanFlag = db_get("clean-glob", 0); + } + if( db_get_boolean("dotfiles", 0) ) scanFlags |= SCAN_ALL; + verify_all_options(); + pIgnore = glob_create(zIgnoreFlag); + pKeep = glob_create(zKeepFlag); + pClean = glob_create(zCleanFlag); + nRoot = (int)strlen(g.zLocalRoot); + g.allowSymlinks = 1; /* Find symlinks too */ + if( !dirsOnlyFlag ){ + Stmt q; + Blob repo; + if( !dryRunFlag && !disableUndo ) undo_begin(); + locate_unmanaged_files(g.argc-2, g.argv+2, scanFlags, pIgnore, 0); + db_prepare(&q, + "SELECT %Q || x FROM sfile" + " WHERE x NOT IN (%s)" + " ORDER BY 1", + g.zLocalRoot, fossil_all_reserved_names(0) + ); + if( file_tree_name(g.zRepositoryName, &repo, 0, 0) ){ + db_multi_exec("DELETE FROM sfile WHERE x=%B", &repo); + } + db_multi_exec("DELETE FROM sfile WHERE x IN (SELECT pathname FROM vfile)"); + while( db_step(&q)==SQLITE_ROW ){ + const char *zName = db_column_text(&q, 0); + if( glob_match(pKeep, zName+nRoot) ){ + if( verboseFlag ){ + fossil_print("KEPT file \"%s\" not removed (due to --keep" + " or \"keep-glob\")\n", zName+nRoot); + } + continue; + } + if( !dryRunFlag && !glob_match(pClean, zName+nRoot) ){ + char *zPrompt = 0; + char cReply; + Blob ans = empty_blob; + int undoRc = UNDO_NONE; + if( alwaysPrompt ){ + zPrompt = mprintf("Remove unmanaged file \"%s\" (a=all/y/N)? 
", + zName+nRoot); + prompt_user(zPrompt, &ans); + fossil_free(zPrompt); + cReply = fossil_toupper(blob_str(&ans)[0]); + blob_reset(&ans); + if( cReply=='N' ) continue; + if( cReply=='A' ){ + allFileFlag = 1; + alwaysPrompt = 0; + }else{ + undoRc = UNDO_SAVED_OK; + } + }else if( !disableUndo ){ + undoRc = undo_maybe_save(zName+nRoot, UNDO_SIZE_LIMIT); + } + if( undoRc!=UNDO_SAVED_OK ){ + if( allFileFlag ){ + cReply = 'Y'; + }else if( !noPrompt ){ + Blob ans; + zPrompt = mprintf("\nWARNING: Deletion of this file will " + "not be undoable via the 'undo'\n" + " command because %s.\n\n" + "Remove unmanaged file \"%s\" (a=all/y/N)? ", + undo_save_message(undoRc), zName+nRoot); + prompt_user(zPrompt, &ans); + fossil_free(zPrompt); + cReply = blob_str(&ans)[0]; + blob_reset(&ans); + }else{ + cReply = 'N'; + } + if( cReply=='a' || cReply=='A' ){ + allFileFlag = 1; + }else if( cReply!='y' && cReply!='Y' ){ + continue; + } + } + } + if( dryRunFlag || file_delete(zName)==0 ){ + if( verboseFlag || dryRunFlag ){ + fossil_print("Removed unmanaged file: %s\n", zName+nRoot); + } + }else{ + fossil_print("Could not remove file: %s\n", zName+nRoot); + } + } + db_finalize(&q); + if( !dryRunFlag && !disableUndo ) undo_finish(); + } + if( emptyDirsFlag ){ + Glob *pEmptyDirs = glob_create(db_get("empty-dirs", 0)); + Stmt q; + Blob root; + blob_init(&root, g.zLocalRoot, nRoot - 1); + vfile_dir_scan(&root, blob_size(&root), scanFlags, pIgnore, + pEmptyDirs); + blob_reset(&root); + db_prepare(&q, + "SELECT %Q || x FROM dscan_temp" + " WHERE x NOT IN (%s) AND y = 0" + " ORDER BY 1 DESC", + g.zLocalRoot, fossil_all_reserved_names(0) + ); + while( db_step(&q)==SQLITE_ROW ){ + const char *zName = db_column_text(&q, 0); + if( glob_match(pKeep, zName+nRoot) ){ + if( verboseFlag ){ + fossil_print("KEPT directory \"%s\" not removed (due to --keep" + " or \"keep-glob\")\n", zName+nRoot); + } + continue; + } + if( !allDirFlag && !dryRunFlag && !glob_match(pClean, zName+nRoot) ){ + char cReply; + if( !noPrompt ){ + Blob ans; + char *prompt = mprintf("Remove empty directory \"%s\" (a=all/y/N)? ", + zName+nRoot); + prompt_user(prompt, &ans); + cReply = blob_str(&ans)[0]; + fossil_free(prompt); + blob_reset(&ans); + }else{ + cReply = 'N'; + } + if( cReply=='a' || cReply=='A' ){ + allDirFlag = 1; + }else if( cReply!='y' && cReply!='Y' ){ + continue; + } + } + if( dryRunFlag || file_rmdir(zName)==0 ){ + if( verboseFlag || dryRunFlag ){ + fossil_print("Removed unmanaged directory: %s\n", zName+nRoot); + } + }else if( verboseFlag ){ + fossil_print("Could not remove directory: %s\n", zName+nRoot); + } + } + db_finalize(&q); + glob_free(pEmptyDirs); + } + glob_free(pClean); + glob_free(pKeep); + glob_free(pIgnore); +} + +/* +** Prompt the user for a check-in or stash comment (given in pPrompt), +** gather the response, then return the response in pComment. +** +** Lines of the prompt that begin with # are discarded. Excess whitespace +** is removed from the reply. +** +** Appropriate encoding translations are made on windows. 
+*/ +void prompt_for_user_comment(Blob *pComment, Blob *pPrompt){ + const char *zEditor; + char *zCmd; + char *zFile; + Blob reply, line; + char *zComment; + int i; + + zEditor = db_get("editor", 0); + if( zEditor==0 ){ + zEditor = fossil_getenv("VISUAL"); + } + if( zEditor==0 ){ + zEditor = fossil_getenv("EDITOR"); + } +#if defined(_WIN32) || defined(__CYGWIN__) + if( zEditor==0 ){ + zEditor = mprintf("%s\\notepad.exe", fossil_getenv("SYSTEMROOT")); +#if defined(__CYGWIN__) + zEditor = fossil_utf8_to_path(zEditor, 0); + blob_add_cr(pPrompt); +#endif + } +#endif + if( zEditor==0 ){ + blob_append(pPrompt, + "#\n" + "# Since no default text editor is set using EDITOR or VISUAL\n" + "# environment variables or the \"fossil set editor\" command,\n" + "# and because no comment was specified using the \"-m\" or \"-M\"\n" + "# command-line options, you will need to enter the comment below.\n" + "# Type \".\" on a line by itself when you are done:\n", -1); + zFile = mprintf("-"); + }else{ + Blob fname; + blob_zero(&fname); + file_relative_name(g.zLocalRoot, &fname, 1); + zFile = db_text(0, "SELECT '%qci-comment-' || hex(randomblob(6)) || '.txt'", + blob_str(&fname)); + blob_reset(&fname); + } +#if defined(_WIN32) + blob_add_cr(pPrompt); +#endif + blob_write_to_file(pPrompt, zFile); + if( zEditor ){ + zCmd = mprintf("%s \"%s\"", zEditor, zFile); + fossil_print("%s\n", zCmd); + if( fossil_system(zCmd) ){ + fossil_fatal("editor aborted: \"%s\"", zCmd); + } + + blob_read_from_file(&reply, zFile); + }else{ + char zIn[300]; + blob_zero(&reply); + while( fgets(zIn, sizeof(zIn), stdin)!=0 ){ + if( zIn[0]=='.' && (zIn[1]==0 || zIn[1]=='\r' || zIn[1]=='\n') ){ + break; + } + blob_append(&reply, zIn, -1); + } + } + blob_to_utf8_no_bom(&reply, 1); + blob_to_lf_only(&reply); + file_delete(zFile); + free(zFile); + blob_zero(pComment); + while( blob_line(&reply, &line) ){ + int i, n; + char *z; + n = blob_size(&line); + z = blob_buffer(&line); + for(i=0; i<n && fossil_isspace(z[i]); i++){} + if( i<n && z[i]=='#' ) continue; + if( i<n || blob_size(pComment)>0 ){ + blob_appendf(pComment, "%b", &line); + } + } + blob_reset(&reply); + zComment = blob_str(pComment); + i = strlen(zComment); + while( i>0 && fossil_isspace(zComment[i-1]) ){ i--; } + blob_resize(pComment, i); } /* ** Prepare a commit comment. Let the user modify it using the ** editor specified in the global_config table or either @@ -359,87 +1015,68 @@ ** parent_rid is the recordid of the parent check-in. */ static void prepare_commit_comment( Blob *pComment, char *zInit, - const char *zBranch, + CheckinInfo *p, int parent_rid ){ - const char *zEditor; - char *zCmd; - char *zFile; - Blob text, line; - char *zComment; - int i; - blob_init(&text, zInit, -1); - blob_append(&text, + Blob prompt; +#if defined(_WIN32) || defined(__CYGWIN__) + int bomSize; + const unsigned char *bom = get_utf8_bom(&bomSize); + blob_init(&prompt, (const char *) bom, bomSize); + if( zInit && zInit[0]){ + blob_append(&prompt, zInit, -1); + } +#else + blob_init(&prompt, zInit, -1); +#endif + blob_append(&prompt, "\n" - "# Enter comments on this check-in. Lines beginning with # are ignored.\n" - "# The check-in comment follows wiki formatting rules.\n" + "# Enter a commit message for this check-in." + " Lines beginning with # are ignored.\n" "#\n", -1 ); - if( zBranch && zBranch[0] ){ - blob_appendf(&text, "# tags: %s\n#\n", zBranch); + blob_appendf(&prompt, "# user: %s\n", + p->zUserOvrd ? 
p->zUserOvrd : login_name()); + if( p->zBranch && p->zBranch[0] ){ + blob_appendf(&prompt, "# tags: %s\n#\n", p->zBranch); }else{ char *zTags = info_tags_of_checkin(parent_rid, 1); - if( zTags ) blob_appendf(&text, "# tags: %z\n#\n", zTags); + if( zTags || p->azTag ){ + blob_append(&prompt, "# tags: ", 8); + if(zTags){ + blob_appendf(&prompt, "%z%s", zTags, p->azTag ? ", " : ""); + } + if(p->azTag){ + int i = 0; + for( ; p->azTag[i]; ++i ){ + blob_appendf(&prompt, "%s%s", p->azTag[i], + p->azTag[i+1] ? ", " : ""); + } + } + blob_appendf(&prompt, "\n#\n"); + } } + status_report(&prompt, "# ", 1, 0); if( g.markPrivate ){ - blob_append(&text, + blob_append(&prompt, "# PRIVATE BRANCH: This check-in will be private and will not sync to\n" "# repositories.\n" "#\n", -1 ); } - status_report(&text, "# ", 1); - zEditor = db_get("editor", 0); - if( zEditor==0 ){ - zEditor = getenv("VISUAL"); - } - if( zEditor==0 ){ - zEditor = getenv("EDITOR"); - } - if( zEditor==0 ){ -#ifdef __MINGW32__ - zEditor = "notepad"; -#else - zEditor = "ed"; -#endif - } - zFile = db_text(0, "SELECT '%qci-comment-' || hex(randomblob(6)) || '.txt'", - g.zLocalRoot); -#ifdef __MINGW32__ - blob_add_cr(&text); -#endif - blob_write_to_file(&text, zFile); - zCmd = mprintf("%s \"%s\"", zEditor, zFile); - printf("%s\n", zCmd); - if( portable_system(zCmd) ){ - fossil_panic("editor aborted"); - } - blob_reset(&text); - blob_read_from_file(&text, zFile); - blob_remove_cr(&text); - unlink(zFile); - free(zFile); - blob_zero(pComment); - while( blob_line(&text, &line) ){ - int i, n; - char *z; - n = blob_size(&line); - z = blob_buffer(&line); - for(i=0; i<n && isspace(z[i]); i++){} - if( i<n && z[i]=='#' ) continue; - if( i<n || blob_size(pComment)>0 ){ - blob_appendf(pComment, "%b", &line); - } - } - blob_reset(&text); - zComment = blob_str(pComment); - i = strlen(zComment); - while( i>0 && isspace(zComment[i-1]) ){ i--; } - blob_resize(pComment, i); + if( p->integrateFlag ){ + blob_append(&prompt, + "#\n" + "# All merged-in branches will be closed due to the --integrate flag\n" + "#\n", -1 + ); + } + prompt_for_user_comment(pComment, &prompt); + blob_reset(&prompt); } /* ** Populate the Global.aCommitFile[] based on the command line arguments ** to a [commit] command. Global.aCommitFile is an array of integers @@ -451,48 +1088,56 @@ ** of the array. ** ** If there were no arguments passed to [commit], aCommitFile is not ** allocated and remains NULL. Other parts of the code interpret this ** to mean "all files". +** +** Returns 1 if there was a warning, 0 otherwise. */ -void select_commit_files(void){ +int select_commit_files(void){ + int result = 0; + assert( g.aCommitFile==0 ); if( g.argc>2 ){ - int ii; - Blob b; - blob_zero(&b); - g.aCommitFile = malloc(sizeof(int)*(g.argc-1)); + int ii, jj=0; + Blob fname; + Stmt q; + Bag toCommit; + blob_zero(&fname); + bag_init(&toCommit); for(ii=2; ii<g.argc; ii++){ - int iId; - file_tree_name(g.argv[ii], &b, 1); - iId = db_int(-1, "SELECT id FROM vfile WHERE pathname=%Q", blob_str(&b)); - if( iId<0 ){ - fossil_fatal("fossil knows nothing about: %s", g.argv[ii]); - } - g.aCommitFile[ii-2] = iId; - blob_reset(&b); - } - g.aCommitFile[ii-2] = 0; - } -} - -/* -** Return true if the check-in with RID=rid is a leaf. -** A leaf has no children in the same branch. 
-*/ -int is_a_leaf(int rid){ - int rc; - static const char zSql[] = - @ SELECT 1 FROM plink - @ WHERE pid=%d - @ AND coalesce((SELECT value FROM tagxref - @ WHERE tagid=%d AND rid=plink.pid), 'trunk') - @ =coalesce((SELECT value FROM tagxref - @ WHERE tagid=%d AND rid=plink.cid), 'trunk') - ; - rc = db_int(0, zSql, rid, TAG_BRANCH, TAG_BRANCH); - return rc==0; + int cnt = 0; + file_tree_name(g.argv[ii], &fname, 0, 1); + if( fossil_strcmp(blob_str(&fname),".")==0 ){ + bag_clear(&toCommit); + return result; + } + db_prepare(&q, + "SELECT id FROM vfile WHERE pathname=%Q %s" + " OR (pathname>'%q/' %s AND pathname<'%q0' %s)", + blob_str(&fname), filename_collation(), blob_str(&fname), + filename_collation(), blob_str(&fname), filename_collation()); + while( db_step(&q)==SQLITE_ROW ){ + cnt++; + bag_insert(&toCommit, db_column_int(&q, 0)); + } + db_finalize(&q); + if( cnt==0 ){ + fossil_warning("fossil knows nothing about: %s", g.argv[ii]); + result = 1; + } + blob_reset(&fname); + } + g.aCommitFile = fossil_malloc( (bag_count(&toCommit)+1) * + sizeof(g.aCommitFile[0]) ); + for(ii=bag_first(&toCommit); ii>0; ii=bag_next(&toCommit, ii)){ + g.aCommitFile[jj++] = ii; + } + g.aCommitFile[jj] = 0; + bag_clear(&toCommit); + } + return result; } /* ** Make sure the current check-in with timestamp zDate is younger than its ** ancestor identified rid and zUuid. Throw a fatal error if not. @@ -509,110 +1154,676 @@ " WHERE datetime(mtime)>=%Q" " AND type='ci' AND objid=%d", zDate, rid ); if( b ){ - fossil_fatal("ancestor check-in [%.10s] (%s) is younger (clock skew?)", - zUuid, zDate); + fossil_fatal("ancestor check-in [%S] (%s) is not older (clock skew?)" + " Use --allow-older to override.", zUuid, zDate); + } +#endif +} + +/* +** zDate should be a valid date string. Convert this string into the +** format YYYY-MM-DDTHH:MM:SS. If the string is not a valid date, +** print a fatal error and quit. +*/ +char *date_in_standard_format(const char *zInputDate){ + char *zDate; + if( g.perm.Setup && fossil_strcmp(zInputDate,"now")==0 ){ + zInputDate = PD("date_override","now"); + } + zDate = db_text(0, "SELECT strftime('%%Y-%%m-%%dT%%H:%%M:%%f',%Q)", + zInputDate); + if( zDate[0]==0 ){ + fossil_fatal( + "unrecognized date format (%s): use \"YYYY-MM-DD HH:MM:SS.SSS\"", + zInputDate + ); + } + return zDate; +} + +/* +** COMMAND: test-date-format +** +** Usage: %fossil test-date-format DATE-STRING... +** +** Convert the DATE-STRING into the standard format used in artifacts +** and display the result. +*/ +void test_date_format(void){ + int i; + db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); + for(i=2; i<g.argc; i++){ + fossil_print("%s -> %s\n", g.argv[i], date_in_standard_format(g.argv[i])); + } +} + +#if INTERFACE +/* +** The following structure holds some of the information needed to construct a +** check-in manifest. +*/ +struct CheckinInfo { + Blob *pComment; /* Check-in comment text */ + const char *zMimetype; /* Mimetype of check-in command. May be NULL */ + int verifyDate; /* Verify that child is younger */ + int closeFlag; /* Close the branch being committed */ + int integrateFlag; /* Close merged-in branches */ + Blob *pCksum; /* Repository checksum. May be 0 */ + const char *zDateOvrd; /* Date override. If 0 then use 'now' */ + const char *zUserOvrd; /* User override. If 0 then use login_name() */ + const char *zBranch; /* Branch name. May be 0 */ + const char *zColor; /* One-time background color. May be 0 */ + const char *zBrClr; /* Persistent branch color. 
May be 0 */ + const char **azTag; /* Tags to apply to this check-in */ +}; +#endif /* INTERFACE */ + +/* +** Create a manifest. +*/ +static void create_manifest( + Blob *pOut, /* Write the manifest here */ + const char *zBaselineUuid, /* UUID of baseline, or zero */ + Manifest *pBaseline, /* Make it a delta manifest if not zero */ + int vid, /* BLOB.id for the parent check-in */ + CheckinInfo *p, /* Information about the check-in */ + int *pnFBcard /* OUT: Number of generated B- and F-cards */ +){ + char *zDate; /* Date of the check-in */ + char *zParentUuid = 0; /* UUID of parent check-in */ + Blob filename; /* A single filename */ + int nBasename; /* Size of base filename */ + Stmt q; /* Various queries */ + Blob mcksum; /* Manifest checksum */ + ManifestFile *pFile; /* File from the baseline */ + int nFBcard = 0; /* Number of B-cards and F-cards */ + int i; /* Loop counter */ + const char *zColor; /* Modified value of p->zColor */ + + assert( pBaseline==0 || pBaseline->zBaseline==0 ); + assert( pBaseline==0 || zBaselineUuid!=0 ); + blob_zero(pOut); + if( vid ){ + zParentUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d AND " + "EXISTS(SELECT 1 FROM event WHERE event.type='ci' and event.objid=%d)", + vid, vid); + if( !zParentUuid ){ + fossil_fatal("Could not find a valid check-in for RID %d. " + "Possible checkout/repo mismatch.", vid); + } + } + if( pBaseline ){ + blob_appendf(pOut, "B %s\n", zBaselineUuid); + manifest_file_rewind(pBaseline); + pFile = manifest_file_next(pBaseline, 0); + nFBcard++; + }else{ + pFile = 0; + } + if( blob_size(p->pComment)!=0 ){ + blob_appendf(pOut, "C %F\n", blob_str(p->pComment)); + }else{ + blob_append(pOut, "C (no\\scomment)\n", 16); } + zDate = date_in_standard_format(p->zDateOvrd ? p->zDateOvrd : "now"); + blob_appendf(pOut, "D %s\n", zDate); + zDate[10] = ' '; + db_prepare(&q, + "SELECT pathname, uuid, origname, blob.rid, isexe, islink," + " is_selected(vfile.id)" + " FROM vfile JOIN blob ON vfile.mrid=blob.rid" + " WHERE (NOT deleted OR NOT is_selected(vfile.id))" + " AND vfile.vid=%d" + " ORDER BY if_selected(vfile.id, pathname, origname)", + vid); + blob_zero(&filename); + blob_appendf(&filename, "%s", g.zLocalRoot); + nBasename = blob_size(&filename); + while( db_step(&q)==SQLITE_ROW ){ + const char *zName = db_column_text(&q, 0); + const char *zUuid = db_column_text(&q, 1); + const char *zOrig = db_column_text(&q, 2); + int frid = db_column_int(&q, 3); + int isExe = db_column_int(&q, 4); + int isLink = db_column_int(&q, 5); + int isSelected = db_column_int(&q, 6); + const char *zPerm; + int cmp; + + blob_resize(&filename, nBasename); + blob_append(&filename, zName, -1); + +#if !defined(_WIN32) + /* For unix, extract the "executable" and "symlink" permissions + ** directly from the filesystem. On windows, permissions are + ** unchanged from the original. However, only do this if the file + ** itself is actually selected to be part of this check-in. 
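+  **
+  ** The result determines the permission letter on the F-card written
+  ** further below: " x" marks an executable file, " l" marks a symlink,
+  ** and " w" is used only as a placeholder when a renamed file carries
+  ** no special permission.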
+ */ + if( isSelected ){ + int mPerm; + + mPerm = file_wd_perm(blob_str(&filename)); + isExe = ( mPerm==PERM_EXE ); + isLink = ( mPerm==PERM_LNK ); + } #endif + if( isExe ){ + zPerm = " x"; + }else if( isLink ){ + zPerm = " l"; /* note: symlinks don't have executable bit on unix */ + }else{ + zPerm = ""; + } + if( !g.markPrivate ) content_make_public(frid); + while( pFile && fossil_strcmp(pFile->zName,zName)<0 ){ + blob_appendf(pOut, "F %F\n", pFile->zName); + pFile = manifest_file_next(pBaseline, 0); + nFBcard++; + } + cmp = 1; + if( pFile==0 + || (cmp = fossil_strcmp(pFile->zName,zName))!=0 + || fossil_strcmp(pFile->zUuid, zUuid)!=0 + ){ + if( zOrig && !isSelected ){ zName = zOrig; zOrig = 0; } + if( zOrig==0 || fossil_strcmp(zOrig,zName)==0 ){ + blob_appendf(pOut, "F %F %s%s\n", zName, zUuid, zPerm); + }else{ + if( zPerm[0]==0 ){ zPerm = " w"; } + blob_appendf(pOut, "F %F %s%s %F\n", zName, zUuid, zPerm, zOrig); + } + nFBcard++; + } + if( cmp==0 ) pFile = manifest_file_next(pBaseline,0); + } + blob_reset(&filename); + db_finalize(&q); + while( pFile ){ + blob_appendf(pOut, "F %F\n", pFile->zName); + pFile = manifest_file_next(pBaseline, 0); + nFBcard++; + } + if( p->zMimetype && p->zMimetype[0] ){ + blob_appendf(pOut, "N %F\n", p->zMimetype); + } + if( vid ){ + blob_appendf(pOut, "P %s", zParentUuid); + if( p->verifyDate ) checkin_verify_younger(vid, zParentUuid, zDate); + free(zParentUuid); + db_prepare(&q, "SELECT merge FROM vmerge WHERE id=0 OR id<-2"); + while( db_step(&q)==SQLITE_ROW ){ + char *zMergeUuid; + int mid = db_column_int(&q, 0); + if( (!g.markPrivate && content_is_private(mid)) || (mid == vid) ){ + continue; + } + zMergeUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", mid); + if( zMergeUuid ){ + blob_appendf(pOut, " %s", zMergeUuid); + if( p->verifyDate ) checkin_verify_younger(mid, zMergeUuid, zDate); + free(zMergeUuid); + } + } + db_finalize(&q); + blob_appendf(pOut, "\n"); + } + free(zDate); + + db_prepare(&q, + "SELECT CASE vmerge.id WHEN -1 THEN '+' ELSE '-' END || blob.uuid, merge" + " FROM vmerge, blob" + " WHERE (vmerge.id=-1 OR vmerge.id=-2)" + " AND blob.rid=vmerge.merge" + " ORDER BY 1"); + while( db_step(&q)==SQLITE_ROW ){ + const char *zCherrypickUuid = db_column_text(&q, 0); + int mid = db_column_int(&q, 1); + if( mid != vid ){ + blob_appendf(pOut, "Q %s\n", zCherrypickUuid); + } + } + db_finalize(&q); + + if( p->pCksum ) blob_appendf(pOut, "R %b\n", p->pCksum); + zColor = p->zColor; + if( p->zBranch && p->zBranch[0] ){ + /* Set tags for the new branch */ + if( p->zBrClr && p->zBrClr[0] ){ + zColor = 0; + blob_appendf(pOut, "T *bgcolor * %F\n", p->zBrClr); + } + blob_appendf(pOut, "T *branch * %F\n", p->zBranch); + blob_appendf(pOut, "T *sym-%F *\n", p->zBranch); + } + if( zColor && zColor[0] ){ + /* One-time background color */ + blob_appendf(pOut, "T +bgcolor * %F\n", zColor); + } + if( p->closeFlag ){ + blob_appendf(pOut, "T +closed *\n"); + } + db_prepare(&q, "SELECT uuid,merge FROM vmerge JOIN blob ON merge=rid" + " WHERE id %s ORDER BY 1", + p->integrateFlag ? "IN(0,-4)" : "=(-4)"); + while( db_step(&q)==SQLITE_ROW ){ + const char *zIntegrateUuid = db_column_text(&q, 0); + int rid = db_column_int(&q, 1); + if( is_a_leaf(rid) && !db_exists("SELECT 1 FROM tagxref " + " WHERE tagid=%d AND rid=%d AND tagtype>0", TAG_CLOSED, rid)){ + blob_appendf(pOut, "T +closed %s\n", zIntegrateUuid); + } + } + db_finalize(&q); + + if( p->azTag ){ + for(i=0; p->azTag[i]; i++){ + /* Add a symbolic tag to this check-in. 
The tag names have already + ** been sorted and converted using the %F format */ + assert( i==0 || strcmp(p->azTag[i-1], p->azTag[i])<=0 ); + blob_appendf(pOut, "T +sym-%s *\n", p->azTag[i]); + } + } + if( p->zBranch && p->zBranch[0] ){ + /* For a new branch, cancel all prior propagating tags */ + db_prepare(&q, + "SELECT tagname FROM tagxref, tag" + " WHERE tagxref.rid=%d AND tagxref.tagid=tag.tagid" + " AND tagtype==2 AND tagname GLOB 'sym-*'" + " AND tagname!='sym-'||%Q" + " ORDER BY tagname", + vid, p->zBranch); + while( db_step(&q)==SQLITE_ROW ){ + const char *zBrTag = db_column_text(&q, 0); + blob_appendf(pOut, "T -%F *\n", zBrTag); + } + db_finalize(&q); + } + blob_appendf(pOut, "U %F\n", p->zUserOvrd ? p->zUserOvrd : login_name()); + md5sum_blob(pOut, &mcksum); + blob_appendf(pOut, "Z %b\n", &mcksum); + if( pnFBcard ) *pnFBcard = nFBcard; +} + +/* +** Issue a warning and give the user an opportunity to abandon out +** if a Unicode (UTF-16) byte-order-mark (BOM) or a \r\n line ending +** is seen in a text file. +** +** Return 1 if the user pressed 'c'. In that case, the file will have +** been converted to UTF-8 (if it was UTF-16) with NL line-endings, +** and the original file will have been renamed to "<filename>-original". +*/ +static int commit_warning( + Blob *p, /* The content of the file being committed. */ + int crnlOk, /* Non-zero if CR/NL warnings should be disabled. */ + int binOk, /* Non-zero if binary warnings should be disabled. */ + int encodingOk, /* Non-zero if encoding warnings should be disabled. */ + const char *zFilename /* The full name of the file being committed. */ +){ + int bReverse; /* UTF-16 byte order is reversed? */ + int fUnicode; /* return value of could_be_utf16() */ + int fBinary; /* does the blob content appear to be binary? */ + int lookFlags; /* output flags from looks_like_utf8/utf16() */ + int fHasAnyCr; /* the blob contains one or more CR chars */ + int fHasLoneCrOnly; /* all detected line endings are CR only */ + int fHasCrLfOnly; /* all detected line endings are CR/LF pairs */ + int fHasInvalidUtf8 = 0;/* contains invalid UTF-8 */ + char *zMsg; /* Warning message */ + Blob fname; /* Relative pathname of the file */ + static int allOk = 0; /* Set to true to disable this routine */ + + if( allOk ) return 0; + fUnicode = could_be_utf16(p, &bReverse); + if( fUnicode ){ + lookFlags = looks_like_utf16(p, bReverse, LOOK_NUL); + }else{ + lookFlags = looks_like_utf8(p, LOOK_NUL); + if( !(lookFlags & LOOK_BINARY) && invalid_utf8(p) ){ + fHasInvalidUtf8 = 1; + } + } + fHasAnyCr = (lookFlags & LOOK_CR); + fBinary = (lookFlags & LOOK_BINARY); + fHasLoneCrOnly = ((lookFlags & LOOK_EOL) == LOOK_LONE_CR); + fHasCrLfOnly = ((lookFlags & LOOK_EOL) == LOOK_CRLF); + if( fUnicode || fHasAnyCr || fBinary || fHasInvalidUtf8){ + const char *zWarning; + const char *zDisable; + const char *zConvert = "c=convert/"; + Blob ans; + char cReply; + + if( fBinary ){ + int fHasNul = (lookFlags & LOOK_NUL); /* contains NUL chars? */ + int fHasLong = (lookFlags & LOOK_LONG); /* overly long line? */ + if( binOk ){ + return 0; /* We don't want binary warnings for this file. */ + } + if( !fHasNul && fHasLong ){ + zWarning = "long lines"; + zConvert = ""; /* We cannot convert binary files. */ + }else{ + zWarning = "binary data"; + zConvert = ""; /* We cannot convert binary files. */ + } + zDisable = "\"binary-glob\" setting"; + }else if( fUnicode && fHasAnyCr ){ + if( crnlOk && encodingOk ){ + return 0; /* We don't want CR/NL and Unicode warnings for this file. 
*/ + } + if( fHasLoneCrOnly ){ + zWarning = "CR line endings and Unicode"; + }else if( fHasCrLfOnly ){ + zWarning = "CR/NL line endings and Unicode"; + }else{ + zWarning = "mixed line endings and Unicode"; + } + zDisable = "\"crnl-glob\" and \"encoding-glob\" settings"; + }else if( fHasInvalidUtf8 ){ + if( encodingOk ){ + return 0; /* We don't want encoding warnings for this file. */ + } + zWarning = "invalid UTF-8"; + zDisable = "\"encoding-glob\" setting"; + }else if( fHasAnyCr ){ + if( crnlOk ){ + return 0; /* We don't want CR/NL warnings for this file. */ + } + if( fHasLoneCrOnly ){ + zWarning = "CR line endings"; + }else if( fHasCrLfOnly ){ + zWarning = "CR/NL line endings"; + }else{ + zWarning = "mixed line endings"; + } + zDisable = "\"crnl-glob\" setting"; + }else{ + if( encodingOk ){ + return 0; /* We don't want encoding warnings for this file. */ + } + zWarning = "Unicode"; + zDisable = "\"encoding-glob\" setting"; + } + file_relative_name(zFilename, &fname, 0); + zMsg = mprintf( + "%s contains %s. Use --no-warnings or the %s to" + " disable this warning.\n" + "Commit anyhow (a=all/%sy/N)? ", + blob_str(&fname), zWarning, zDisable, zConvert); + prompt_user(zMsg, &ans); + fossil_free(zMsg); + cReply = blob_str(&ans)[0]; + if( cReply=='a' || cReply=='A' ){ + allOk = 1; + }else if( *zConvert && (cReply=='c' || cReply=='C') ){ + char *zOrig = file_newname(zFilename, "original", 1); + FILE *f; + blob_write_to_file(p, zOrig); + fossil_free(zOrig); + f = fossil_fopen(zFilename, "wb"); + if( f==0 ){ + fossil_warning("cannot open %s for writing", zFilename); + }else{ + if( fUnicode ){ + int bomSize; + const unsigned char *bom = get_utf8_bom(&bomSize); + fwrite(bom, 1, bomSize, f); + blob_to_utf8_no_bom(p, 0); + }else if( fHasInvalidUtf8 ){ + blob_cp1252_to_utf8(p); + } + if( fHasAnyCr ){ + blob_to_lf_only(p); + } + fwrite(blob_buffer(p), 1, blob_size(p), f); + fclose(f); + } + return 1; + }else if( cReply!='y' && cReply!='Y' ){ + fossil_fatal("Abandoning commit due to %s in %s", + zWarning, blob_str(&fname)); + } + blob_reset(&ans); + blob_reset(&fname); + } + return 0; +} + +/* +** qsort() comparison routine for an array of pointers to strings. +*/ +static int tagCmp(const void *a, const void *b){ + char **pA = (char**)a; + char **pB = (char**)b; + return fossil_strcmp(pA[0], pB[0]); } /* -** COMMAND: ci +** COMMAND: ci* ** COMMAND: commit ** ** Usage: %fossil commit ?OPTIONS? ?FILE...? ** ** Create a new version containing all of the changes in the current ** checkout. You will be prompted to enter a check-in comment unless -** the comment has been specified on the command-line using "-m". -** The editor defined in the "editor" fossil option (see %fossil help set) -** will be used, or from the "VISUAL" or "EDITOR" environment variables -** (in that order) if no editor is set. -** -** You will be prompted for your GPG passphrase in order to sign the -** new manifest unless the "--nosign" options is used. All files that -** have changed will be committed unless some subset of files is -** specified on the command line. -** -** The --branch option followed by a branch name cases the new check-in -** to be placed in the named branch. The --bgcolor option can be followed -** by a color name (ex: '#ffc0c0') to specify the background color of -** entries in the new branch when shown in the web timeline interface. -** -** A check-in is not permitted to fork unless the --force or -f -** option appears. A check-in is not allowed against a closed check-in. 
+** the comment has been specified on the command-line using "-m" or a +** file containing the comment using -M. The editor defined in the +** "editor" fossil option (see %fossil help set) will be used, or from +** the "VISUAL" or "EDITOR" environment variables (in that order) if +** no editor is set. +** +** All files that have changed will be committed unless some subset of +** files is specified on the command line. +** +** The --branch option followed by a branch name causes the new +** check-in to be placed in a newly-created branch with the name +** passed to the --branch option. +** +** Use the --branchcolor option followed by a color name (ex: +** '#ffc0c0') to specify the background color of entries in the new +** branch when shown in the web timeline interface. The use of +** the --branchcolor option is not recommended. Instead, let Fossil +** choose the branch color automatically. +** +** The --bgcolor option works like --branchcolor but only sets the +** background color for a single check-in. Subsequent check-ins revert +** to the default color. +** +** A check-in is not permitted to fork unless the --allow-fork option +** appears. An empty check-in (i.e. with nothing changed) is not +** allowed unless the --allow-empty option appears. A check-in may not +** be older than its ancestor unless the --allow-older option appears. +** If any of files in the check-in appear to contain unresolved merge +** conflicts, the check-in will not be allowed unless the +** --allow-conflict option is present. In addition, the entire +** check-in process may be aborted if a file contains content that +** appears to be binary, Unicode text, or text with CR/NL line endings +** unless the interactive user chooses to proceed. If there is no +** interactive user or these warnings should be skipped for some other +** reason, the --no-warnings option may be used. A check-in is not +** allowed against a closed leaf. +** +** If a commit message is blank, you will be prompted: +** ("continue (y/N)?") to confirm you really want to commit with a +** blank commit message. The default value is "N", do not commit. ** ** The --private option creates a private check-in that is never synced. ** Children of private check-ins are automatically private. ** +** The --tag option applies the symbolic tag name to the check-in. +** +** The --sha1sum option detects edited files by computing each file's +** SHA1 hash rather than just checking for changes to its size or mtime. 
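+**
+** For example (the file and branch names here are only illustrative):
+**
+**    fossil commit -m "Fix typo" --branch typo-fix www/faq.wiki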
+** ** Options: -** -** --comment|-m COMMENT-TEXT -** --branch NEW-BRANCH-NAME -** --bgcolor COLOR -** --nosign -** --force|-f -** --private -** +** --allow-conflict allow unresolved merge conflicts +** --allow-empty allow a commit with no changes +** --allow-fork allow the commit to fork +** --allow-older allow a commit older than its ancestor +** --baseline use a baseline manifest in the commit process +** --bgcolor COLOR apply COLOR to this one check-in only +** --branch NEW-BRANCH-NAME check in to this new branch +** --branchcolor COLOR apply given COLOR to the branch +** --close close the branch being committed +** --delta use a delta manifest in the commit process +** --integrate close all merged-in branches +** -m|--comment COMMENT-TEXT use COMMENT-TEXT as commit comment +** -M|--message-file FILE read the commit comment from given file +** --mimetype MIMETYPE mimetype of check-in comment +** -n|--dry-run If given, display instead of run actions +** --no-warnings omit all warnings about file contents +** --nosign do not attempt to sign this commit with gpg +** --private do not sync changes and their descendants +** --sha1sum verify file status using SHA1 hashing rather +** than relying on file mtimes +** --tag TAG-NAME assign given tag TAG-NAME to the check-in +** --date-override DATE DATE to use instead of 'now' +** --user-override USER USER to use instead of the current default +** +** See also: branch, changes, checkout, extras, sync */ void commit_cmd(void){ - int rc; - int vid, nrid, nvid; - Blob comment; - const char *zComment; - Stmt q; - Stmt q2; - char *zUuid, *zDate; + int hasChanges; /* True if unsaved changes exist */ + int vid; /* blob-id of parent version */ + int nrid; /* blob-id of a modified file */ + int nvid; /* Blob-id of the new check-in */ + Blob comment; /* Check-in comment */ + const char *zComment; /* Check-in comment */ + Stmt q; /* Various queries */ + char *zUuid; /* UUID of the new check-in */ + int useSha1sum = 0; /* True to verify file status using SHA1 hashing */ int noSign = 0; /* True to omit signing the manifest using GPG */ int isAMerge = 0; /* True if checking in a merge */ - int forceFlag = 0; /* Force a fork */ + int noWarningFlag = 0; /* True if skipping all warnings */ + int forceFlag = 0; /* Undocumented: Disables all checks */ + int forceDelta = 0; /* Force a delta-manifest */ + int forceBaseline = 0; /* Force a baseline-manifest */ + int allowConflict = 0; /* Allow unresolve merge conflicts */ + int allowEmpty = 0; /* Allow a commit with no changes */ + int allowFork = 0; /* Allow the commit to fork */ + int allowOlder = 0; /* Allow a commit older than its ancestor */ char *zManifestFile; /* Name of the manifest file */ - int nBasename; /* Length of "g.zLocalRoot/" */ - const char *zBranch; /* Create a new branch with this name */ - const char *zBgColor; /* Set background color when branching */ - const char *zDateOvrd; /* Override date string */ - const char *zUserOvrd; /* Override user name */ + int useCksum; /* True if checksums should be computed and verified */ + int outputManifest; /* True to output "manifest" and "manifest.uuid" */ + int dryRunFlag; /* True for a test run. 
Debugging only */ + CheckinInfo sCiInfo; /* Information about this check-in */ const char *zComFile; /* Read commit message from this file */ - Blob filename; /* complete filename */ - Blob manifest; + int nTag = 0; /* Number of --tag arguments */ + const char *zTag; /* A single --tag argument */ + ManifestFile *pFile; /* File structure in the manifest */ + Manifest *pManifest; /* Manifest structure */ + Blob manifest; /* Manifest in baseline form */ Blob muuid; /* Manifest uuid */ - Blob mcksum; /* Self-checksum on the manifest */ Blob cksum1, cksum2; /* Before and after commit checksums */ Blob cksum1b; /* Checksum recorded in the manifest */ - + int szD; /* Size of the delta manifest */ + int szB; /* Size of the baseline manifest */ + int nConflict = 0; /* Number of unresolved merge conflicts */ + int abortCommit = 0; + Blob ans; + char cReply; + + memset(&sCiInfo, 0, sizeof(sCiInfo)); url_proxy_options(); + useSha1sum = find_option("sha1sum", 0, 0)!=0; noSign = find_option("nosign",0,0)!=0; + forceDelta = find_option("delta",0,0)!=0; + forceBaseline = find_option("baseline",0,0)!=0; + if( forceDelta && forceBaseline ){ + fossil_fatal("cannot use --delta and --baseline together"); + } + dryRunFlag = find_option("dry-run","n",0)!=0; + if( !dryRunFlag ){ + dryRunFlag = find_option("test",0,0)!=0; /* deprecated */ + } zComment = find_option("comment","m",1); forceFlag = find_option("force", "f", 0)!=0; - zBranch = find_option("branch","b",1); - zBgColor = find_option("bgcolor",0,1); + allowConflict = find_option("allow-conflict",0,0)!=0; + allowEmpty = find_option("allow-empty",0,0)!=0; + allowFork = find_option("allow-fork",0,0)!=0; + allowOlder = find_option("allow-older",0,0)!=0; + noWarningFlag = find_option("no-warnings", 0, 0)!=0; + sCiInfo.zBranch = find_option("branch","b",1); + sCiInfo.zColor = find_option("bgcolor",0,1); + sCiInfo.zBrClr = find_option("branchcolor",0,1); + sCiInfo.closeFlag = find_option("close",0,0)!=0; + sCiInfo.integrateFlag = find_option("integrate",0,0)!=0; + sCiInfo.zMimetype = find_option("mimetype",0,1); + while( (zTag = find_option("tag",0,1))!=0 ){ + if( zTag[0]==0 ) continue; + sCiInfo.azTag = fossil_realloc((void*)sCiInfo.azTag, + sizeof(char*)*(nTag+2)); + sCiInfo.azTag[nTag++] = zTag; + sCiInfo.azTag[nTag] = 0; + } zComFile = find_option("message-file", "M", 1); if( find_option("private",0,0) ){ g.markPrivate = 1; - if( zBranch==0 ) zBranch = "private"; - if( zBgColor==0 ) zBgColor = "#fec084"; /* Orange */ + if( sCiInfo.zBranch==0 ) sCiInfo.zBranch = "private"; + if( sCiInfo.zBrClr==0 && sCiInfo.zColor==0 ){ + sCiInfo.zBrClr = "#fec084"; /* Orange */ + } } - zDateOvrd = find_option("date-override",0,1); - zUserOvrd = find_option("user-override",0,1); + sCiInfo.zDateOvrd = find_option("date-override",0,1); + sCiInfo.zUserOvrd = find_option("user-override",0,1); db_must_be_within_tree(); noSign = db_get_boolean("omitsign", 0)|noSign; if( db_get_boolean("clearsign", 0)==0 ){ noSign = 1; } + useCksum = db_get_boolean("repo-cksum", 1); + outputManifest = db_get_boolean("manifest", 0); verify_all_options(); + + /* Escape special characters in tags and put all tags in sorted order */ + if( nTag ){ + int i; + for(i=0; i<nTag; i++) sCiInfo.azTag[i] = mprintf("%F", sCiInfo.azTag[i]); + qsort((void*)sCiInfo.azTag, nTag, sizeof(sCiInfo.azTag[0]), tagCmp); + } + + /* So that older versions of Fossil (that do not understand delta- + ** manifest) can continue to use this repository, do not create a new + ** delta-manifest unless this repository already contains 
one or more + ** delta-manifests, or unless the delta-manifest is explicitly requested + ** by the --delta option. + */ + if( !forceDelta && !db_get_boolean("seen-delta-manifest",0) ){ + forceBaseline = 1; + } /* Get the ID of the parent manifest artifact */ vid = db_lget_int("checkout", 0); - if( content_is_private(vid) ){ + if( vid==0 ){ + useCksum = 1; + }else if( content_is_private(vid) ){ g.markPrivate = 1; } /* ** Autosync if autosync is enabled and this is not a private check-in. */ if( !g.markPrivate ){ - autosync(AUTOSYNC_PULL); + if( autosync_loop(SYNC_PULL, db_get_int("autosync-tries", 1)) ){ + prompt_user("continue in spite of sync failure (y/N)? ", &ans); + cReply = blob_str(&ans)[0]; + if( cReply!='y' && cReply!='Y' ){ + fossil_exit(1); + } + } + } + + /* Require confirmation to continue with the check-in if there is + ** clock skew + */ + if( g.clockSkewSeen ){ + prompt_user("continue in spite of time skew (y/N)? ", &ans); + cReply = blob_str(&ans)[0]; + if( cReply!='y' && cReply!='Y' ){ + fossil_exit(1); + } } /* There are two ways this command may be executed. If there are ** no arguments following the word "commit", then all modified files ** in the checked out directory are committed. If one or more arguments @@ -621,289 +1832,403 @@ ** After the following function call has returned, the Global.aCommitFile[] ** array is allocated to contain the "id" field from the vfile table ** for each file to be committed. Or, if aCommitFile is NULL, all files ** should be committed. */ - select_commit_files(); - isAMerge = db_exists("SELECT 1 FROM vmerge"); + if( select_commit_files() ){ + prompt_user("continue (y/N)? ", &ans); + cReply = blob_str(&ans)[0]; + if( cReply!='y' && cReply!='Y' ) fossil_exit(1); + } + isAMerge = db_exists("SELECT 1 FROM vmerge WHERE id=0 OR id<-2"); if( g.aCommitFile && isAMerge ){ fossil_fatal("cannot do a partial commit of a merge"); } + + /* Doing "fossil mv fileA fileB; fossil add fileA; fossil commit fileA" + ** will generate a manifest that has two fileA entries, which is illegal. + ** When you think about it, the sequence above makes no sense. So detect + ** it and disallow it. Ticket [0ff64b0a5fc8]. + */ + if( g.aCommitFile ){ + db_prepare(&q, + "SELECT v1.pathname, v2.pathname" + " FROM vfile AS v1, vfile AS v2" + " WHERE is_selected(v1.id)" + " AND v2.origname IS NOT NULL" + " AND v2.origname=v1.pathname" + " AND NOT is_selected(v2.id)"); + if( db_step(&q)==SQLITE_ROW ){ + const char *zFrom = db_column_text(&q, 0); + const char *zTo = db_column_text(&q, 1); + fossil_fatal("cannot do a partial commit of '%s' without '%s' because " + "'%s' was renamed to '%s'", zFrom, zTo, zFrom, zTo); + } + db_finalize(&q); + } user_select(); /* ** Check that the user exists. */ if( !db_exists("SELECT 1 FROM user WHERE login=%Q", g.zLogin) ){ fossil_fatal("no such user: %s", g.zLogin); } - - db_begin_transaction(); - db_record_repository_filename(0); - rc = unsaved_changes(); - if( rc==0 && !isAMerge && !forceFlag ){ - fossil_panic("nothing has changed"); - } - - /* If one or more files that were named on the command line have not - ** been modified, bail out now. 
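(Illustration of the partial-commit rename check added above; the file names
are hypothetical and the message text is taken from the fossil_fatal() call:

    fossil mv fileA fileB
    fossil add fileA
    fossil commit fileA
    => cannot do a partial commit of 'fileA' without 'fileB' because
       'fileA' was renamed to 'fileB'

Committing both files together, or committing with no file arguments at all,
avoids the problem.)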
- */ - if( g.aCommitFile ){ - Blob unmodified; - memset(&unmodified, 0, sizeof(Blob)); - blob_init(&unmodified, 0, 0); - db_blob(&unmodified, - "SELECT pathname FROM vfile WHERE chnged = 0 AND file_is_selected(id)" - ); - if( strlen(blob_str(&unmodified)) ){ - fossil_panic("file %s has not changed", blob_str(&unmodified)); - } - } - - /* - ** Do not allow a commit that will cause a fork unless the --force flag - ** is used or unless this is a private check-in. - */ - if( zBranch==0 && forceFlag==0 && g.markPrivate==0 && !is_a_leaf(vid) ){ - fossil_fatal("would fork. \"update\" first or use -f or --force."); - } - - /* - ** Do not allow a commit against a closed leaf - */ - if( db_exists("SELECT 1 FROM tagxref" - " WHERE tagid=%d AND rid=%d AND tagtype>0", - TAG_CLOSED, vid) ){ - fossil_fatal("cannot commit against a closed leaf"); - } - - vfile_aggregate_checksum_disk(vid, &cksum1); + + hasChanges = unsaved_changes(useSha1sum ? CKSIG_SHA1 : 0); + db_begin_transaction(); + db_record_repository_filename(0); + if( hasChanges==0 && !isAMerge && !allowEmpty && !forceFlag ){ + fossil_fatal("nothing has changed; use --allow-empty to override"); + } + + /* If none of the files that were named on the command line have + ** been modified, bail out now unless the --allow-empty or --force + ** flags is used. + */ + if( g.aCommitFile + && !allowEmpty + && !forceFlag + && !db_exists( + "SELECT 1 FROM vfile " + " WHERE is_selected(id)" + " AND (chnged OR deleted OR rid=0 OR pathname!=origname)") + ){ + fossil_fatal("none of the selected files have changed; use " + "--allow-empty to override."); + } + + /* + ** Do not allow a commit that will cause a fork unless the --allow-fork + ** or --force flags is used, or unless this is a private check-in. + ** The initial commit MUST have tags "trunk" and "sym-trunk". + */ + if( !vid ){ + if( sCiInfo.zBranch==0 ){ + if( allowFork==0 && forceFlag==0 && g.markPrivate==0 + && db_exists("SELECT 1 from event where type='ci'") ){ + fossil_fatal("would fork. \"update\" first or use --allow-fork."); + } + sCiInfo.zBranch = db_get("main-branch", "trunk"); + } + }else if( sCiInfo.zBranch==0 && allowFork==0 && forceFlag==0 + && g.markPrivate==0 && !is_a_leaf(vid) + ){ + fossil_fatal("would fork. \"update\" first or use --allow-fork."); + } + + /* + ** Do not allow a commit against a closed leaf unless the commit + ** ends up on a different branch. + */ + if( + /* parent check-in has the "closed" tag... */ + db_exists("SELECT 1 FROM tagxref" + " WHERE tagid=%d AND rid=%d AND tagtype>0", + TAG_CLOSED, vid) + /* ... and the new check-in has no --branch option or the --branch + ** option does not actually change the branch */ + && (sCiInfo.zBranch==0 + || db_exists("SELECT 1 FROM tagxref" + " WHERE tagid=%d AND rid=%d AND tagtype>0" + " AND value=%Q", TAG_BRANCH, vid, sCiInfo.zBranch)) + ){ + fossil_fatal("cannot commit against a closed leaf"); + } + if( zComment ){ blob_zero(&comment); blob_append(&comment, zComment, -1); }else if( zComFile ){ blob_zero(&comment); blob_read_from_file(&comment, zComFile); + blob_to_utf8_no_bom(&comment, 1); + }else if(dryRunFlag){ + blob_zero(&comment); }else{ char *zInit = db_text(0, "SELECT value FROM vvar WHERE name='ci-comment'"); - prepare_commit_comment(&comment, zInit, zBranch, vid); + prepare_commit_comment(&comment, zInit, &sCiInfo, vid); + if( zInit && zInit[0] && fossil_strcmp(zInit, blob_str(&comment))==0 ){ + prompt_user("unchanged check-in comment. continue (y/N)? 
", &ans); + cReply = blob_str(&ans)[0]; + if( cReply!='y' && cReply!='Y' ) fossil_exit(1); + } free(zInit); } if( blob_size(&comment)==0 ){ - Blob ans; - blob_zero(&ans); - prompt_user("empty check-in comment. continue (y/N)? ", &ans); - if( blob_str(&ans)[0]!='y' ){ - db_end_transaction(1); - exit(1); + if( !dryRunFlag ){ + prompt_user("empty check-in comment. continue (y/N)? ", &ans); + cReply = blob_str(&ans)[0]; + if( cReply!='y' && cReply!='Y' ){ + fossil_exit(1); + } } }else{ db_multi_exec("REPLACE INTO vvar VALUES('ci-comment',%B)", &comment); db_end_transaction(0); db_begin_transaction(); } - /* Step 1: Insert records for all modified files into the blob + /* + ** Step 1: Compute an aggregate MD5 checksum over the disk image + ** of every file in vid. The file names are part of the checksum. + ** The resulting checksum is the same as is expected on the R-card + ** of a manifest. + */ + if( useCksum ) vfile_aggregate_checksum_disk(vid, &cksum1); + + /* Step 2: Insert records for all modified files into the blob ** table. If there were arguments passed to this command, only - ** the identified fils are inserted (if they have been modified). + ** the identified files are inserted (if they have been modified). */ db_prepare(&q, - "SELECT id, %Q || pathname, mrid FROM vfile " - "WHERE chnged==1 AND NOT deleted AND file_is_selected(id)" - , g.zLocalRoot + "SELECT id, %Q || pathname, mrid, %s, %s, %s FROM vfile " + "WHERE chnged==1 AND NOT deleted AND is_selected(id)", + g.zLocalRoot, + glob_expr("pathname", db_get("crnl-glob","")), + glob_expr("pathname", db_get("binary-glob","")), + glob_expr("pathname", db_get("encoding-glob","")) ); while( db_step(&q)==SQLITE_ROW ){ int id, rid; const char *zFullname; Blob content; + int crnlOk, binOk, encodingOk; id = db_column_int(&q, 0); zFullname = db_column_text(&q, 1); rid = db_column_int(&q, 2); + crnlOk = db_column_int(&q, 3); + binOk = db_column_int(&q, 4); + encodingOk = db_column_int(&q, 5); blob_zero(&content); - blob_read_from_file(&content, zFullname); - nrid = content_put(&content, 0, 0); + if( file_wd_islink(zFullname) ){ + /* Instead of file content, put link destination path */ + blob_read_link(&content, zFullname); + }else{ + blob_read_from_file(&content, zFullname); + } + /* Do not emit any warnings when they are disabled. */ + if( !noWarningFlag ){ + abortCommit |= commit_warning(&content, crnlOk, binOk, + encodingOk, zFullname); + } + if( contains_merge_marker(&content) ){ + Blob fname; /* Relative pathname of the file */ + + nConflict++; + file_relative_name(zFullname, &fname, 0); + fossil_print("possible unresolved merge conflict in %s\n", + blob_str(&fname)); + blob_reset(&fname); + } + nrid = content_put(&content); blob_reset(&content); if( rid>0 ){ content_deltify(rid, nrid, 0); } db_multi_exec("UPDATE vfile SET mrid=%d, rid=%d WHERE id=%d", nrid,nrid,id); db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nrid); } db_finalize(&q); - - /* Create the manifest */ - blob_zero(&manifest); - if( blob_size(&comment)==0 ){ - blob_append(&comment, "(no comment)", -1); - } - blob_appendf(&manifest, "C %F\n", blob_str(&comment)); - zDate = db_text(0, "SELECT datetime('%q')", zDateOvrd ? 
zDateOvrd : "now"); - zDate[10] = 'T'; - blob_appendf(&manifest, "D %s\n", zDate); - zDate[10] = ' '; - db_prepare(&q, - "SELECT pathname, uuid, origname, blob.rid, isexe" - " FROM vfile JOIN blob ON vfile.mrid=blob.rid" - " WHERE NOT deleted AND vfile.vid=%d" - " ORDER BY 1", vid); - blob_zero(&filename); - blob_appendf(&filename, "%s", g.zLocalRoot); - nBasename = blob_size(&filename); - while( db_step(&q)==SQLITE_ROW ){ - const char *zName = db_column_text(&q, 0); - const char *zUuid = db_column_text(&q, 1); - const char *zOrig = db_column_text(&q, 2); - int frid = db_column_int(&q, 3); - int isexe = db_column_int(&q, 4); - const char *zPerm; - blob_append(&filename, zName, -1); -#ifndef __MINGW32__ - /* For unix, extract the "executable" permission bit directly from - ** the filesystem. On windows, the "executable" bit is retained - ** unchanged from the original. */ - isexe = file_isexe(blob_str(&filename)); -#endif - if( isexe ){ - zPerm = " x"; - }else{ - zPerm = ""; - } - blob_resize(&filename, nBasename); - if( zOrig==0 || strcmp(zOrig,zName)==0 ){ - blob_appendf(&manifest, "F %F %s%s\n", zName, zUuid, zPerm); - }else{ - if( zPerm[0]==0 ){ zPerm = " w"; } - blob_appendf(&manifest, "F %F %s%s %F\n", zName, zUuid, zPerm, zOrig); - } - if( !g.markPrivate ) content_make_public(frid); - } - blob_reset(&filename); - db_finalize(&q); - zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", vid); - blob_appendf(&manifest, "P %s", zUuid); - checkin_verify_younger(vid, zUuid, zDate); - - db_prepare(&q2, "SELECT merge FROM vmerge WHERE id=:id"); - db_bind_int(&q2, ":id", 0); - while( db_step(&q2)==SQLITE_ROW ){ - int mid = db_column_int(&q2, 0); - if( !g.markPrivate && content_is_private(mid) ) continue; - zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", mid); - if( zUuid ){ - blob_appendf(&manifest, " %s", zUuid); - checkin_verify_younger(mid, zUuid, zDate); - free(zUuid); - } - } - db_reset(&q2); - - blob_appendf(&manifest, "\n"); - blob_appendf(&manifest, "R %b\n", &cksum1); - if( zBranch && zBranch[0] ){ - Stmt q; - if( zBgColor && zBgColor[0] ){ - blob_appendf(&manifest, "T *bgcolor * %F\n", zBgColor); - } - blob_appendf(&manifest, "T *branch * %F\n", zBranch); - blob_appendf(&manifest, "T *sym-%F *\n", zBranch); - - /* Cancel all other symbolic tags */ - db_prepare(&q, - "SELECT tagname FROM tagxref, tag" - " WHERE tagxref.rid=%d AND tagxref.tagid=tag.tagid" - " AND tagtype>0 AND tagname GLOB 'sym-*'" - " AND tagname!='sym-'||%Q" - " ORDER BY tagname", - vid, zBranch); - while( db_step(&q)==SQLITE_ROW ){ - const char *zTag = db_column_text(&q, 0); - blob_appendf(&manifest, "T -%F *\n", zTag); - } - db_finalize(&q); - } - blob_appendf(&manifest, "U %F\n", zUserOvrd ? zUserOvrd : g.zLogin); - md5sum_blob(&manifest, &mcksum); - blob_appendf(&manifest, "Z %b\n", &mcksum); - zManifestFile = mprintf("%smanifest", g.zLocalRoot); - if( !noSign && !g.markPrivate && clearsign(&manifest, &manifest) ){ - Blob ans; - blob_zero(&ans); - prompt_user("unable to sign manifest. continue (y/N)? 
", &ans); - if( blob_str(&ans)[0]!='y' ){ - db_end_transaction(1); - exit(1); - } - } - blob_write_to_file(&manifest, zManifestFile); - blob_reset(&manifest); - blob_read_from_file(&manifest, zManifestFile); - free(zManifestFile); - nvid = content_put(&manifest, 0, 0); - if( nvid==0 ){ - fossil_panic("trouble committing manifest: %s", g.zErrMsg); - } - db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nvid); - manifest_crosslink(nvid, &manifest); - content_deltify(vid, nvid, 0); - zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", nvid); - printf("New_Version: %s\n", zUuid); - zManifestFile = mprintf("%smanifest.uuid", g.zLocalRoot); - blob_zero(&muuid); - blob_appendf(&muuid, "%s\n", zUuid); - blob_write_to_file(&muuid, zManifestFile); - free(zManifestFile); - blob_reset(&muuid); - - - /* Update the vfile and vmerge tables */ - db_multi_exec( - "DELETE FROM vfile WHERE (vid!=%d OR deleted) AND file_is_selected(id);" - "DELETE FROM vmerge WHERE file_is_selected(id) OR id=0;" - "UPDATE vfile SET vid=%d;" - "UPDATE vfile SET rid=mrid, chnged=0, deleted=0, origname=NULL" - " WHERE file_is_selected(id);" + if( nConflict && !allowConflict ){ + fossil_fatal("abort due to unresolved merge conflicts; " + "use --allow-conflict to override"); + }else if( abortCommit ){ + fossil_fatal("one or more files were converted on your request; " + "please re-test before committing"); + } + + /* Create the new manifest */ + sCiInfo.pComment = &comment; + sCiInfo.pCksum = useCksum ? &cksum1 : 0; + sCiInfo.verifyDate = !allowOlder && !forceFlag; + if( forceDelta ){ + blob_zero(&manifest); + }else{ + create_manifest(&manifest, 0, 0, vid, &sCiInfo, &szB); + } + + /* See if a delta-manifest would be more appropriate */ + if( !forceBaseline ){ + const char *zBaselineUuid; + Manifest *pParent; + Manifest *pBaseline; + pParent = manifest_get(vid, CFTYPE_MANIFEST, 0); + if( pParent && pParent->zBaseline ){ + zBaselineUuid = pParent->zBaseline; + pBaseline = manifest_get_by_name(zBaselineUuid, 0); + }else{ + zBaselineUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", vid); + pBaseline = pParent; + } + if( pBaseline ){ + Blob delta; + create_manifest(&delta, zBaselineUuid, pBaseline, vid, &sCiInfo, &szD); + /* + ** At this point, two manifests have been constructed, either of + ** which would work for this check-in. The first manifest (held + ** in the "manifest" variable) is a baseline manifest and the second + ** (held in variable named "delta") is a delta manifest. The + ** question now is: which manifest should we use? + ** + ** Let B be the number of F-cards in the baseline manifest and + ** let D be the number of F-cards in the delta manifest, plus one for + ** the B-card. (B is held in the szB variable and D is held in the + ** szD variable.) Assume that all delta manifests adds X new F-cards. + ** Then to minimize the total number of F- and B-cards in the repository, + ** we should use the delta manifest if and only if: + ** + ** D*D < B*X - X*X + ** + ** X is an unknown here, but for most repositories, we will not be + ** far wrong if we assume X=3. + */ + if( forceDelta || (szD*szD)<(szB*3-9) ){ + blob_reset(&manifest); + manifest = delta; + }else{ + blob_reset(&delta); + } + }else if( forceDelta ){ + fossil_fatal("unable to find a baseline-manifest for the delta"); + } + } + if( !noSign && !g.markPrivate && clearsign(&manifest, &manifest) ){ + prompt_user("unable to sign manifest. continue (y/N)? 
", &ans); + cReply = blob_str(&ans)[0]; + if( cReply!='y' && cReply!='Y' ){ + fossil_exit(1); + } + } + + /* If the -n|--dry-run option is specified, output the manifest file + ** and rollback the transaction. + */ + if( dryRunFlag ){ + blob_write_to_file(&manifest, ""); + } + if( outputManifest ){ + zManifestFile = mprintf("%smanifest", g.zLocalRoot); + blob_write_to_file(&manifest, zManifestFile); + blob_reset(&manifest); + blob_read_from_file(&manifest, zManifestFile); + free(zManifestFile); + } + + nvid = content_put(&manifest); + if( nvid==0 ){ + fossil_fatal("trouble committing manifest: %s", g.zErrMsg); + } + db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nvid); + if( manifest_crosslink(nvid, &manifest, + dryRunFlag ? MC_NONE : MC_PERMIT_HOOKS)==0 ){ + fossil_fatal("%s\n", g.zErrMsg); + } + assert( blob_is_reset(&manifest) ); + content_deltify(vid, nvid, 0); + zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", nvid); + + db_prepare(&q, "SELECT uuid,merge FROM vmerge JOIN blob ON merge=rid" + " WHERE id=-4"); + while( db_step(&q)==SQLITE_ROW ){ + const char *zIntegrateUuid = db_column_text(&q, 0); + if( is_a_leaf(db_column_int(&q, 1)) ){ + fossil_print("Closed: %s\n", zIntegrateUuid); + }else{ + fossil_print("Not_Closed: %s (not a leaf any more)\n", zIntegrateUuid); + } + } + db_finalize(&q); + + fossil_print("New_Version: %s\n", zUuid); + if( outputManifest ){ + zManifestFile = mprintf("%smanifest.uuid", g.zLocalRoot); + blob_zero(&muuid); + blob_appendf(&muuid, "%s\n", zUuid); + blob_write_to_file(&muuid, zManifestFile); + free(zManifestFile); + blob_reset(&muuid); + } + + /* Update the vfile and vmerge tables */ + db_multi_exec( + "DELETE FROM vfile WHERE (vid!=%d OR deleted) AND is_selected(id);" + "DELETE FROM vmerge;" + "UPDATE vfile SET vid=%d;" + "UPDATE vfile SET rid=mrid, chnged=0, deleted=0, origname=NULL" + " WHERE is_selected(id);" , vid, nvid ); db_lset_int("checkout", nvid); - /* Verify that the repository checksum matches the expected checksum - ** calculated before the checkin started (and stored as the R record - ** of the manifest file). - */ - vfile_aggregate_checksum_repository(nvid, &cksum2); - if( blob_compare(&cksum1, &cksum2) ){ - fossil_panic("tree checksum does not match repository after commit"); - } - - /* Verify that the manifest checksum matches the expected checksum */ - vfile_aggregate_checksum_manifest(nvid, &cksum2, &cksum1b); - if( blob_compare(&cksum1, &cksum1b) ){ - fossil_panic("manifest checksum does not agree with manifest: " - "%b versus %b", &cksum1, &cksum1b); - } - if( blob_compare(&cksum1, &cksum2) ){ - fossil_panic("tree checksum does not match manifest after commit: " - "%b versus %b", &cksum1, &cksum2); - } - - /* Verify that the commit did not modify any disk images. 
*/ - vfile_aggregate_checksum_disk(nvid, &cksum2); - if( blob_compare(&cksum1, &cksum2) ){ - fossil_panic("tree checksums before and after commit do not match"); + /* Update the isexe and islink columns of the vfile table */ + db_prepare(&q, + "UPDATE vfile SET isexe=:exec, islink=:link" + " WHERE vid=:vid AND pathname=:path AND (isexe!=:exec OR islink!=:link)" + ); + db_bind_int(&q, ":vid", nvid); + pManifest = manifest_get(nvid, CFTYPE_MANIFEST, 0); + manifest_file_rewind(pManifest); + while( (pFile = manifest_file_next(pManifest, 0)) ){ + db_bind_int(&q, ":exec", pFile->zPerm && strstr(pFile->zPerm, "x")); + db_bind_int(&q, ":link", pFile->zPerm && strstr(pFile->zPerm, "l")); + db_bind_text(&q, ":path", pFile->zName); + db_step(&q); + db_reset(&q); + } + db_finalize(&q); + manifest_destroy(pManifest); + + if( useCksum ){ + /* Verify that the repository checksum matches the expected checksum + ** calculated before the check-in started (and stored as the R record + ** of the manifest file). + */ + vfile_aggregate_checksum_repository(nvid, &cksum2); + if( blob_compare(&cksum1, &cksum2) ){ + vfile_compare_repository_to_disk(nvid); + fossil_fatal("working checkout does not match what would have ended " + "up in the repository: %b versus %b", + &cksum1, &cksum2); + } + + /* Verify that the manifest checksum matches the expected checksum */ + vfile_aggregate_checksum_manifest(nvid, &cksum2, &cksum1b); + if( blob_compare(&cksum1, &cksum1b) ){ + fossil_fatal("manifest checksum self-test failed: " + "%b versus %b", &cksum1, &cksum1b); + } + if( blob_compare(&cksum1, &cksum2) ){ + fossil_fatal( + "working checkout does not match manifest after commit: " + "%b versus %b", &cksum1, &cksum2); + } + + /* Verify that the commit did not modify any disk images. */ + vfile_aggregate_checksum_disk(nvid, &cksum2); + if( blob_compare(&cksum1, &cksum2) ){ + fossil_fatal("working checkout before and after commit does not match"); + } } /* Clear the undo/redo stack */ undo_reset(); /* Commit */ db_multi_exec("DELETE FROM vvar WHERE name='ci-comment'"); + db_multi_exec("PRAGMA %s.application_id=252006673;", db_name("repository")); + db_multi_exec("PRAGMA %s.application_id=252006674;", db_name("localdb")); + if( dryRunFlag ){ + db_end_transaction(1); + exit(1); + } db_end_transaction(0); if( !g.markPrivate ){ - autosync(AUTOSYNC_PUSH); + autosync_loop(SYNC_PUSH|SYNC_PULL, db_get_int("autosync-tries", 1)); } if( count_nonbranch_children(vid)>1 ){ - printf("**** warning: a fork has occurred *****\n"); + fossil_print("**** warning: a fork has occurred *****\n"); } } Index: src/checkout.c ================================================================== --- src/checkout.c +++ src/checkout.c @@ -26,29 +26,28 @@ ** Check to see if there is an existing checkout that has been ** modified. Return values: ** ** 0: There is an existing checkout but it is unmodified ** 1: There is a modified checkout - there are unsaved changes -** 2: There is no existing checkout */ -int unsaved_changes(void){ +int unsaved_changes(unsigned int cksigFlags){ int vid; db_must_be_within_tree(); vid = db_lget_int("checkout",0); - if( vid==0 ) return 2; - vfile_check_signature(vid, 1); + vfile_check_signature(vid, cksigFlags|CKSIG_ENOTFILE); return db_exists("SELECT 1 FROM vfile WHERE chnged" " OR coalesce(origname!=pathname,0)"); } /* ** Undo the current check-out. Unlink all files from the disk. ** Clear the VFILE table. 
*/ void uncheckout(int vid){ - if( vid==0 ) return; - vfile_unlink(vid); + if( vid>0 ){ + vfile_unlink(vid); + } db_multi_exec("DELETE FROM vfile WHERE vid=%d", vid); } /* @@ -55,96 +54,124 @@ ** Given the abbreviated UUID name of a version, load the content of that ** version in the VFILE table. Return the VID for the version. ** ** If anything goes wrong, panic. */ -int load_vfile(const char *zName){ +int load_vfile(const char *zName, int forceMissingFlag){ Blob uuid; int vid; blob_init(&uuid, zName, -1); - if( name_to_uuid(&uuid, 1) ){ - fossil_panic(g.zErrMsg); + if( name_to_uuid(&uuid, 1, "ci") ){ + fossil_fatal("%s", g.zErrMsg); } vid = db_int(0, "SELECT rid FROM blob WHERE uuid=%B", &uuid); if( vid==0 ){ fossil_fatal("no such check-in: %s", g.argv[2]); } if( !is_a_version(vid) ){ - fossil_fatal("object [%.10s] is not a check-in", blob_str(&uuid)); - } - load_vfile_from_rid(vid); - return vid; -} - -/* -** Load a vfile from a record ID. -*/ -void load_vfile_from_rid(int vid){ - Blob manifest; - - if( db_exists("SELECT 1 FROM vfile WHERE vid=%d", vid) ){ - return; - } - content_get(vid, &manifest); - vfile_build(vid, &manifest); - blob_reset(&manifest); + fossil_fatal("object [%S] is not a check-in", blob_str(&uuid)); + } + if( load_vfile_from_rid(vid) && !forceMissingFlag ){ + fossil_fatal("missing content, unable to checkout"); + }; + return vid; } /* ** Set or clear the vfile.isexe flag for a file. */ static void set_or_clear_isexe(const char *zFilename, int vid, int onoff){ - db_multi_exec("UPDATE vfile SET isexe=%d WHERE vid=%d and pathname=%Q", - onoff, vid, zFilename); + static Stmt s; + db_static_prepare(&s, + "UPDATE vfile SET isexe=:isexe" + " WHERE vid=:vid AND pathname=:path AND isexe!=:isexe" + ); + db_bind_int(&s, ":isexe", onoff); + db_bind_int(&s, ":vid", vid); + db_bind_text(&s, ":path", zFilename); + db_step(&s); + db_reset(&s); +} + +/* +** Set or clear the execute permission bit (as appropriate) for all +** files in the current check-out, and replace files that have +** symlink bit with actual symlinks. +*/ +void checkout_set_all_exe(int vid){ + Blob filename; + int baseLen; + Manifest *pManifest; + ManifestFile *pFile; + + /* Check the EXE permission status of all files + */ + pManifest = manifest_get(vid, CFTYPE_MANIFEST, 0); + if( pManifest==0 ) return; + blob_zero(&filename); + blob_appendf(&filename, "%s", g.zLocalRoot); + baseLen = blob_size(&filename); + manifest_file_rewind(pManifest); + while( (pFile = manifest_file_next(pManifest, 0))!=0 ){ + int isExe; + blob_append(&filename, pFile->zName, -1); + isExe = pFile->zPerm && strstr(pFile->zPerm, "x"); + file_wd_setexe(blob_str(&filename), isExe); + set_or_clear_isexe(pFile->zName, vid, isExe); + blob_resize(&filename, baseLen); + } + blob_reset(&filename); + manifest_destroy(pManifest); } + /* -** Read the manifest file given by vid out of the repository -** and store it in the root of the local check-out. +** If the "manifest" setting is true, then automatically generate +** files named "manifest" and "manifest.uuid" containing, respectively, +** the text of the manifest and the artifact ID of the manifest. 
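(Sketch of the behavior described above, assuming the standard settings
command; the session is hypothetical:

    fossil settings manifest on
    fossil checkout trunk
    cat manifest.uuid          # artifact ID of the checked-out manifest

With the setting off, the else-branch below deletes any leftover manifest
and manifest.uuid files that are not themselves under version control.)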
*/ void manifest_to_disk(int vid){ char *zManFile; Blob manifest; Blob hash; - Blob filename; - int baseLen; - int i; - Manifest m; - - blob_zero(&manifest); - zManFile = mprintf("%smanifest", g.zLocalRoot); - content_get(vid, &manifest); - blob_write_to_file(&manifest, zManFile); - free(zManFile); - blob_zero(&hash); - sha1sum_blob(&manifest, &hash); - zManFile = mprintf("%smanifest.uuid", g.zLocalRoot); - blob_append(&hash, "\n", 1); - blob_write_to_file(&hash, zManFile); - free(zManFile); - blob_reset(&hash); - manifest_parse(&m, &manifest); - blob_zero(&filename); - blob_appendf(&filename, "%s/", g.zLocalRoot); - baseLen = blob_size(&filename); - for(i=0; i<m.nFile; i++){ - int isExe; - blob_append(&filename, m.aFile[i].zName, -1); - isExe = m.aFile[i].zPerm && strstr(m.aFile[i].zPerm, "x"); - file_setexe(blob_str(&filename), isExe); - set_or_clear_isexe(m.aFile[i].zName, vid, isExe); - blob_resize(&filename, baseLen); - } - blob_reset(&filename); - manifest_clear(&m); + + if( db_get_boolean("manifest",0) ){ + blob_zero(&manifest); + content_get(vid, &manifest); + zManFile = mprintf("%smanifest", g.zLocalRoot); + blob_zero(&hash); + sha1sum_blob(&manifest, &hash); + sterilize_manifest(&manifest); + blob_write_to_file(&manifest, zManFile); + free(zManFile); + zManFile = mprintf("%smanifest.uuid", g.zLocalRoot); + blob_append(&hash, "\n", 1); + blob_write_to_file(&hash, zManFile); + free(zManFile); + blob_reset(&hash); + }else{ + if( !db_exists("SELECT 1 FROM vfile WHERE pathname='manifest'") ){ + zManFile = mprintf("%smanifest", g.zLocalRoot); + file_delete(zManFile); + free(zManFile); + } + if( !db_exists("SELECT 1 FROM vfile WHERE pathname='manifest.uuid'") ){ + zManFile = mprintf("%smanifest.uuid", g.zLocalRoot); + file_delete(zManFile); + free(zManFile); + } + } + } /* -** COMMAND: checkout +** COMMAND: checkout* +** COMMAND: co* ** -** Usage: %fossil checkout VERSION ?-f|--force? ?--keep? +** Usage: %fossil checkout ?VERSION | --latest? ?OPTIONS? +** or: %fossil co ?VERSION | --latest? ?OPTIONS? ** ** Check out a version specified on the command-line. This command ** will abort if there are edited files in the current checkout unless ** the --force option appears on the command-line. The --keep option ** leaves files on disk unchanged, except the manifest and manifest.uuid @@ -151,29 +178,42 @@ ** files. ** ** The --latest flag can be used in place of VERSION to checkout the ** latest version in the repository. ** -** See also the "update" command. +** Options: +** --force Ignore edited files in the current checkout +** --keep Only update the manifest and manifest.uuid files +** --force-missing Force checkout even if content is missing +** +** See also: update */ void checkout_cmd(void){ int forceFlag; /* Force checkout even if edits exist */ + int forceMissingFlag; /* Force checkout even if missing content */ int keepFlag; /* Do not change any files on disk */ int latestFlag; /* Checkout the latest version */ char *zVers; /* Version to checkout */ + int promptFlag; /* True to prompt before overwriting */ int vid, prior; Blob cksum1, cksum1b, cksum2; - + db_must_be_within_tree(); db_begin_transaction(); forceFlag = find_option("force","f",0)!=0; + forceMissingFlag = find_option("force-missing",0,0)!=0; keepFlag = find_option("keep",0,0)!=0; latestFlag = find_option("latest",0,0)!=0; + promptFlag = find_option("prompt",0,0)!=0 || forceFlag==0; + + /* We should be done with options.. 
*/ + verify_all_options(); + if( (latestFlag!=0 && g.argc!=2) || (latestFlag==0 && g.argc!=3) ){ usage("VERSION|--latest ?--force? ?--keep?"); } - if( !forceFlag && unsaved_changes()==1 ){ + if( !forceFlag && unsaved_changes(0) ){ fossil_fatal("there are unsaved changes in the current checkout"); } if( forceFlag ){ db_multi_exec("DELETE FROM vfile"); prior = 0; @@ -189,74 +229,95 @@ zVers = db_text(0, "SELECT uuid FROM event, blob" " WHERE event.objid=blob.rid AND event.type='ci'" " ORDER BY event.mtime DESC"); } if( zVers==0 ){ - fossil_fatal("cannot locate \"latest\" checkout"); + return; } }else{ zVers = g.argv[2]; } - vid = load_vfile(zVers); + vid = load_vfile(zVers, forceMissingFlag); if( prior==vid ){ return; } if( !keepFlag ){ uncheckout(prior); } db_multi_exec("DELETE FROM vfile WHERE vid!=%d", vid); if( !keepFlag ){ - vfile_to_disk(vid, 0, 1); + vfile_to_disk(vid, 0, 1, promptFlag); } + checkout_set_all_exe(vid); manifest_to_disk(vid); + ensure_empty_dirs_created(); db_lset_int("checkout", vid); undo_reset(); db_multi_exec("DELETE FROM vmerge"); - if( !keepFlag ){ + if( !keepFlag && db_get_boolean("repo-cksum",1) ){ vfile_aggregate_checksum_manifest(vid, &cksum1, &cksum1b); vfile_aggregate_checksum_disk(vid, &cksum2); if( blob_compare(&cksum1, &cksum2) ){ - printf("WARNING: manifest checksum does not agree with disk\n"); + fossil_print("WARNING: manifest checksum does not agree with disk\n"); } - if( blob_compare(&cksum1, &cksum1b) ){ - printf("WARNING: manifest checksum does not agree with manifest\n"); + if( blob_size(&cksum1b) && blob_compare(&cksum1, &cksum1b) ){ + fossil_print("WARNING: manifest checksum does not agree with manifest\n"); } } db_end_transaction(0); } /* ** Unlink the local database file */ -void unlink_local_database(void){ - static const char *azFile[] = { - "%s_FOSSIL_", - "%s_FOSSIL_-journal", - "%s.fos", - "%s.fos-journal", - }; +static void unlink_local_database(int manifestOnly){ + const char *zReserved; int i; - for(i=0; i<sizeof(azFile)/sizeof(azFile[0]); i++){ - char *z = mprintf(azFile[i], g.zLocalRoot); - unlink(z); - free(z); + for(i=0; (zReserved = fossil_reserved_name(i, 1))!=0; i++){ + if( manifestOnly==0 || zReserved[0]=='m' ){ + char *z; + z = mprintf("%s%s", g.zLocalRoot, zReserved); + file_delete(z); + free(z); + } } } /* -** COMMAND: close +** COMMAND: close* ** -** Usage: %fossil close ?-f|--force? +** Usage: %fossil close ?OPTIONS? ** ** The opposite of "open". Close the current database connection. ** Require a -f or --force flag if there are unsaved changed in the -** current check-out. +** current check-out or if there is non-empty stash. +** +** Options: +** --force|-f necessary to close a check out with uncommitted changes +** +** See also: open */ void close_cmd(void){ int forceFlag = find_option("force","f",0)!=0; db_must_be_within_tree(); - if( !forceFlag && unsaved_changes()==1 ){ + + /* We should be done with options.. 
*/ + verify_all_options(); + + if( !forceFlag && unsaved_changes(0) ){ fossil_fatal("there are unsaved changes in the current checkout"); } - db_close(); - unlink_local_database(); + if( !forceFlag + && db_table_exists("localdb","stash") + && db_exists("SELECT 1 FROM %s.stash", db_name("localdb")) + ){ + fossil_fatal("closing the checkout will delete your stash"); + } + if( db_is_writeable("repository") ){ + char *zUnset = mprintf("ckout:%q", g.zLocalRoot); + db_unset(zUnset, 1); + fossil_free(zUnset); + } + unlink_local_database(1); + db_close(1); + unlink_local_database(0); } Index: src/clearsign.c ================================================================== --- src/clearsign.c +++ src/clearsign.c @@ -39,11 +39,11 @@ zRand = db_text(0, "SELECT hex(randomblob(10))"); zOut = mprintf("out-%s", zRand); zIn = mprintf("in-%z", zRand); blob_write_to_file(pIn, zOut); zCmd = mprintf("%s %s %s", zBase, zIn, zOut); - rc = portable_system(zCmd); + rc = fossil_system(zCmd); free(zCmd); if( rc==0 ){ if( pOut==pIn ){ blob_reset(pIn); } @@ -52,11 +52,11 @@ }else{ if( pOut!=pIn ){ blob_copy(pOut, pIn); } } - unlink(zOut); - unlink(zIn); + file_delete(zOut); + file_delete(zIn); free(zOut); free(zIn); return rc; } Index: src/clone.c ================================================================== --- src/clone.c +++ src/clone.c @@ -19,95 +19,276 @@ */ #include "config.h" #include "clone.h" #include <assert.h> +/* +** If there are public BLOBs that deltas from private BLOBs, then +** undeltify the public BLOBs so that the private BLOBs may be safely +** deleted. +*/ +void fix_private_blob_dependencies(int showWarning){ + Bag toUndelta; + Stmt q; + int rid; + + /* Careful: We are about to delete all BLOB entries that are private. + ** So make sure that any no public BLOBs are deltas from a private BLOB. + ** Otherwise after the deletion, we won't be able to recreate the public + ** BLOBs. + */ + db_prepare(&q, + "SELECT " + " rid, (SELECT uuid FROM blob WHERE rid=delta.rid)," + " srcid, (SELECT uuid FROM blob WHERE rid=delta.srcid)" + " FROM delta" + " WHERE srcid in private AND rid NOT IN private" + ); + bag_init(&toUndelta); + while( db_step(&q)==SQLITE_ROW ){ + int rid = db_column_int(&q, 0); + const char *zId = db_column_text(&q, 1); + int srcid = db_column_int(&q, 2); + const char *zSrc = db_column_text(&q, 3); + if( showWarning ){ + fossil_warning( + "public artifact %S (%d) is a delta from private artifact %S (%d)", + zId, rid, zSrc, srcid + ); + } + bag_insert(&toUndelta, rid); + } + db_finalize(&q); + while( (rid = bag_first(&toUndelta))>0 ){ + content_undelta(rid); + bag_remove(&toUndelta, rid); + } + bag_clear(&toUndelta); +} + +/* +** Delete all private content from a repository. +*/ +void delete_private_content(void){ + fix_private_blob_dependencies(1); + db_multi_exec( + "DELETE FROM blob WHERE rid IN private;" + "DELETE FROM delta WHERE rid IN private;" + "DELETE FROM private;" + "DROP TABLE IF EXISTS modreq;" + ); +} /* ** COMMAND: clone ** -** Usage: %fossil clone ?OPTIONS? URL FILENAME +** Usage: %fossil clone ?OPTIONS? URI FILENAME +** +** Make a clone of a repository specified by URI in the local +** file named FILENAME. +** +** URI may be one of the following form: ([...] 
mean optional) +** HTTP/HTTPS protocol: +** http[s]://[userid[:password]@]host[:port][/path] +** +** SSH protocol: +** ssh://[userid@]host[:port]/path/to/repo.fossil\\ +** [?fossil=path/to/fossil.exe] +** +** Filesystem: +** [file://]path/to/repo.fossil +** +** Note 1: For ssh and filesystem, path must have an extra leading +** '/' to use an absolute path. ** -** Make a clone of a repository specified by URL in the local -** file named FILENAME. +** Note 2: Use %HH escapes for special characters in the userid and +** password. For example "%40" in place of "@", "%2f" in place +** of "/", and "%3a" in place of ":". ** ** By default, your current login name is used to create the default ** admin user. This can be overridden using the -A|--admin-user ** parameter. ** ** Options: -** -** --admin-user|-A USERNAME +** --admin-user|-A USERNAME Make USERNAME the administrator +** --once Don't remember the URI. +** --private Also clone private branches +** --ssl-identity FILENAME Use the SSL identity if requested by the server +** --ssh-command|-c SSH Use SSH as the "ssh" command +** --httpauth|-B USER:PASS Add HTTP Basic Authorization to requests +** --verbose Show more statistics in output ** +** See also: init */ void clone_cmd(void){ char *zPassword; const char *zDefaultUser; /* Optional name of the default user */ + const char *zHttpAuth; /* HTTP Authorization user:pass information */ + int nErr = 0; + int urlFlags = URL_PROMPT_PW | URL_REMEMBER; + int syncFlags = SYNC_CLONE; + /* Also clone private branches */ + if( find_option("private",0,0)!=0 ) syncFlags |= SYNC_PRIVATE; + if( find_option("once",0,0)!=0) urlFlags &= ~URL_REMEMBER; + if( find_option("verbose",0,0)!=0) syncFlags |= SYNC_VERBOSE; + zHttpAuth = find_option("httpauth","B",1); + zDefaultUser = find_option("admin-user","A",1); + clone_ssh_find_options(); url_proxy_options(); + + /* We should be done with options.. */ + verify_all_options(); + if( g.argc < 4 ){ usage("?OPTIONS? 
FILE-OR-URL NEW-REPOSITORY"); } db_open_config(0); - if( file_size(g.argv[3])>0 ){ - fossil_panic("file already exists: %s", g.argv[3]); + if( -1 != file_size(g.argv[3]) ){ + fossil_fatal("file already exists: %s", g.argv[3]); } - zDefaultUser = find_option("admin-user","A",1); - - url_parse(g.argv[2]); - if( g.urlIsFile ){ - file_copy(g.urlName, g.argv[3]); - db_close(); + url_parse(g.argv[2], urlFlags); + if( zDefaultUser==0 && g.url.user!=0 ) zDefaultUser = g.url.user; + if( g.url.isFile ){ + file_copy(g.url.name, g.argv[3]); + db_close(1); db_open_repository(g.argv[3]); db_record_repository_filename(g.argv[3]); - db_multi_exec( - "REPLACE INTO config(name,value)" - " VALUES('server-code', lower(hex(randomblob(20))));" - "REPLACE INTO config(name,value)" - " VALUES('last-sync-url', '%q');", - g.urlCanonical - ); - db_multi_exec( - "DELETE FROM blob WHERE rid IN private;" - "DELETE FROM delta wHERE rid IN private;" - "DELETE FROM private;" - ); + url_remember(); + if( !(syncFlags & SYNC_PRIVATE) ) delete_private_content(); shun_artifacts(); - g.zLogin = db_text(0, "SELECT login FROM user WHERE cap LIKE '%%s%%'"); - if( g.zLogin==0 ){ - db_create_default_users(1,zDefaultUser); + db_create_default_users(1, zDefaultUser); + if( zDefaultUser ){ + g.zLogin = zDefaultUser; + }else{ + g.zLogin = db_text(0, "SELECT login FROM user WHERE cap LIKE '%%s%%'"); } - printf("Repository cloned into %s\n", g.argv[3]); + fossil_print("Repository cloned into %s\n", g.argv[3]); }else{ db_create_repository(g.argv[3]); db_open_repository(g.argv[3]); db_begin_transaction(); db_record_repository_filename(g.argv[3]); - db_initial_setup(0, zDefaultUser, 0); + db_initial_setup(0, 0, zDefaultUser); user_select(); db_set("content-schema", CONTENT_SCHEMA, 0); - db_set("aux-schema", AUX_SCHEMA, 0); - db_set("last-sync-url", g.argv[2], 0); + db_set("aux-schema", AUX_SCHEMA_MAX, 0); + db_set("rebuilt", get_version(), 0); + remember_or_get_http_auth(zHttpAuth, urlFlags & URL_REMEMBER, g.argv[2]); + url_remember(); + if( g.zSSLIdentity!=0 ){ + /* If the --ssl-identity option was specified, store it as a setting */ + Blob fn; + blob_zero(&fn); + file_canonical_name(g.zSSLIdentity, &fn, 0); + db_set("ssl-identity", blob_str(&fn), 0); + blob_reset(&fn); + } db_multi_exec( - "REPLACE INTO config(name,value)" - " VALUES('server-code', lower(hex(randomblob(20))));" + "REPLACE INTO config(name,value,mtime)" + " VALUES('server-code', lower(hex(randomblob(20))), now());" + "DELETE FROM config WHERE name='project-code';" ); url_enable_proxy(0); + clone_ssh_db_set_options(); + url_get_password_if_needed(); g.xlinkClusterOnly = 1; - client_sync(0,0,1,CONFIGSET_ALL,0); + nErr = client_sync(syncFlags,CONFIGSET_ALL,0); g.xlinkClusterOnly = 0; verify_cancel(); db_end_transaction(0); - db_close(); + db_close(1); + if( nErr ){ + file_delete(g.argv[3]); + fossil_fatal("server returned an error - clone aborted"); + } db_open_repository(g.argv[3]); } db_begin_transaction(); - printf("Rebuilding repository meta-data...\n"); - rebuild_db(0, 1); - printf("project-id: %s\n", db_get("project-code", 0)); - printf("server-id: %s\n", db_get("server-code", 0)); - zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin); - printf("admin-user: %s (password is \"%s\")\n", g.zLogin, zPassword); + fossil_print("Rebuilding repository meta-data...\n"); + rebuild_db(0, 1, 0); + fossil_print("Extra delta compression... "); fflush(stdout); + extra_deltification(); db_end_transaction(0); + fossil_print("\nVacuuming the database... 
"); fflush(stdout); + if( db_int(0, "PRAGMA page_count")>1000 + && db_int(0, "PRAGMA page_size")<8192 ){ + db_multi_exec("PRAGMA page_size=8192;"); + } + db_multi_exec("VACUUM"); + fossil_print("\nproject-id: %s\n", db_get("project-code", 0)); + fossil_print("server-id: %s\n", db_get("server-code", 0)); + zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin); + fossil_print("admin-user: %s (password is \"%s\")\n", g.zLogin, zPassword); +} + +/* +** If user chooses to use HTTP Authentication over unencrypted HTTP, +** remember decision. Otherwise, if the URL is being changed and no +** preference has been indicated, err on the safe side and revert the +** decision. Set the global preference if the URL is not being changed. +*/ +void remember_or_get_http_auth( + const char *zHttpAuth, /* Credentials in the form "user:password" */ + int fRemember, /* True to remember credentials for later reuse */ + const char *zUrl /* URL for which these credentials apply */ +){ + char *zKey = mprintf("http-auth:%s", g.url.canonical); + if( zHttpAuth && zHttpAuth[0] ){ + g.zHttpAuth = mprintf("%s", zHttpAuth); + } + if( fRemember ){ + if( g.zHttpAuth && g.zHttpAuth[0] ){ + set_httpauth(g.zHttpAuth); + }else if( zUrl && zUrl[0] ){ + db_unset(zKey, 0); + }else{ + g.zHttpAuth = get_httpauth(); + } + }else if( g.zHttpAuth==0 && zUrl==0 ){ + g.zHttpAuth = get_httpauth(); + } + free(zKey); +} + +/* +** Get the HTTP Authorization preference from db. +*/ +char *get_httpauth(void){ + char *zKey = mprintf("http-auth:%s", g.url.canonical); + char * rc = unobscure(db_get(zKey, 0)); + free(zKey); + return rc; +} + +/* +** Set the HTTP Authorization preference in db. +*/ +void set_httpauth(const char *zHttpAuth){ + char *zKey = mprintf("http-auth:%s", g.url.canonical); + db_set(zKey, obscure(zHttpAuth), 0); + free(zKey); +} + +/* +** Look for SSH clone command line options and setup in globals. +*/ +void clone_ssh_find_options(void){ + const char *zSshCmd; /* SSH command string */ + + zSshCmd = find_option("ssh-command","c",1); + if( zSshCmd && zSshCmd[0] ){ + g.zSshCmd = mprintf("%s", zSshCmd); + } +} + +/* +** Set SSH options discovered in global variables (set from command line +** options). +*/ +void clone_ssh_db_set_options(void){ + if( g.zSshCmd && g.zSshCmd[0] ){ + db_set("ssh-command", g.zSshCmd, 0); + } } ADDED src/codecheck1.c Index: src/codecheck1.c ================================================================== --- src/codecheck1.c +++ src/codecheck1.c @@ -0,0 +1,563 @@ +/* +** Copyright (c) 2014 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This program reads Fossil source code files and tries to verify that +** printf-style format strings are correct. +** +** This program implements a compile-time validation step on the Fossil +** source code. Running this program is entirely optional. Its role is +** similar to the -Wall compiler switch on gcc, or the scan-build utility +** of clang, or other static analyzers. 
The purpose is to try to identify +** problems in the source code at compile-time. The difference is that this +** static checker is specifically designed for the particular printf formatter +** implementation used by Fossil. +** +** Checks include: +** +** * Verify that vararg formatting routines like blob_printf() or +** db_multi_exec() have the correct number of arguments for their +** format string. +** +** * For routines designed to generate SQL, warn about the use of %s +** which might allow SQL injection. +*/ +#include <stdio.h> +#include <stdlib.h> +#include <ctype.h> +#include <string.h> +#include <assert.h> + +/* +** Malloc, aborting if it fails. +*/ +void *safe_malloc(int nByte){ + void *x = malloc(nByte); + if( x==0 ){ + fprintf(stderr, "failed to allocate %d bytes\n", nByte); + exit(1); + } + return x; +} +void *safe_realloc(void *pOld, int nByte){ + void *x = realloc(pOld, nByte); + if( x==0 ){ + fprintf(stderr, "failed to allocate %d bytes\n", nByte); + exit(1); + } + return x; +} + +/* +** Read the entire content of the file named zFilename into memory obtained +** from malloc(). Add a zero-terminator to the end. +** Return a pointer to that memory. +*/ +static char *read_file(const char *zFilename){ + FILE *in; + char *z; + int nByte; + int got; + in = fopen(zFilename, "rb"); + if( in==0 ){ + return 0; + } + fseek(in, 0, SEEK_END); + nByte = ftell(in); + fseek(in, 0, SEEK_SET); + z = safe_malloc( nByte+1 ); + got = fread(z, 1, nByte, in); + z[got] = 0; + fclose(in); + return z; +} + +/* +** When parsing the input file, the following token types are recognized. +*/ +#define TK_SPACE 1 /* Whitespace or comments */ +#define TK_ID 2 /* An identifier */ +#define TK_STR 3 /* A string literal in double-quotes */ +#define TK_OTHER 4 /* Any other token */ +#define TK_EOF 99 /* End of file */ + +/* +** Determine the length and type of the token beginning at z[0] +*/ +static int token_length(const char *z, int *pType, int *pLN){ + int i; + if( z[0]==0 ){ + *pType = TK_EOF; + return 0; + } + if( z[0]=='"' || z[0]=='\'' ){ + for(i=1; z[i] && z[i]!=z[0]; i++){ + if( z[i]=='\\' && z[i+1]!=0 ){ + if( z[i+1]=='\n' ) (*pLN)++; + i++; + } + } + if( z[i]!=0 ) i++; + *pType = z[0]=='"' ? TK_STR : TK_OTHER; + return i; + } + if( isalnum(z[0]) || z[0]=='_' ){ + for(i=1; isalnum(z[i]) || z[i]=='_'; i++){} + *pType = isalpha(z[0]) || z[0]=='_' ? TK_ID : TK_OTHER; + return i; + } + if( isspace(z[0]) ){ + if( z[0]=='\n' ) (*pLN)++; + for(i=1; isspace(z[i]); i++){ + if( z[i]=='\n' ) (*pLN)++; + } + *pType = TK_SPACE; + return i; + } + if( z[0]=='/' && z[1]=='*' ){ + for(i=2; z[i] && (z[i]!='*' || z[i+1]!='/'); i++){ + if( z[i]=='\n' ) (*pLN)++; + } + if( z[i] ) i += 2; + *pType = TK_SPACE; + return i; + } + if( z[0]=='/' && z[1]=='/' ){ + for(i=2; z[i] && z[i]!='\n'; i++){} + if( z[i] ){ + (*pLN)++; + i++; + } + *pType = TK_SPACE; + return i; + } + *pType = TK_OTHER; + return 1; +} + +/* +** Return the next non-whitespace token +*/ +const char *next_non_whitespace(const char *z, int *pLen, int *pType){ + int len; + int eType; + int ln = 0; + while( (len = token_length(z, &eType, &ln))>0 && eType==TK_SPACE ){ + z += len; + } + *pLen = len; + *pType = eType; + return z; +} + +/* +** Return index into z[] for the first balanced TK_OTHER token with +** value cValue. 
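(Worked illustration of the routine below, using the tokenizer above; the
argument text is hypothetical: for the input

    zName, mprintf("%d,%d", a, b), zTail

distance_to(z, ',') returns the offset of the comma right after zName. The
comma inside "%d,%d" is consumed as part of a single TK_STR token by
token_length(), and the commas inside mprintf(...) are ignored because nNest
is greater than zero between the parentheses.)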
+*/ +static int distance_to(const char *z, char cVal){ + int len; + int dist = 0; + int eType; + int nNest = 0; + int ln = 0; + while( z[0] && (len = token_length(z, &eType, &ln))>0 ){ + if( eType==TK_OTHER ){ + if( z[0]==cVal && nNest==0 ){ + break; + }else if( z[0]=='(' ){ + nNest++; + }else if( z[0]==')' ){ + nNest--; + } + } + dist += len; + z += len; + } + return dist; +} + +/* +** Return the first non-whitespace characters in z[] +*/ +static const char *skip_space(const char *z){ + while( isspace(z[0]) ){ z++; } + return z; +} + +/* +** Return true if the input is a string literal. +*/ +static int is_string_lit(const char *z){ + int nu1, nu2; + z = next_non_whitespace(z, &nu1, &nu2); + return z[0]=='"'; +} + +/* +** Return true if the input is an expression of string literals: +** +** EXPR ? "..." : "..." +*/ +static int is_string_expr(const char *z){ + int len = 0, eType; + const char *zOrig = z; + len = distance_to(z, '?'); + if( z[len]==0 && skip_space(z)[0]=='(' ){ + z = skip_space(z) + 1; + len = distance_to(z, '?'); + } + z += len; + if( z[0]=='?' ){ + z++; + z = next_non_whitespace(z, &len, &eType); + if( eType==TK_STR ){ + z += len; + z = next_non_whitespace(z, &len, &eType); + if( eType==TK_OTHER && z[0]==':' ){ + z += len; + z = next_non_whitespace(z, &len, &eType); + if( eType==TK_STR ){ + z += len; + z = next_non_whitespace(z, &len, &eType); + if( eType==TK_EOF ) return 1; + if( eType==TK_OTHER && z[0]==')' && skip_space(zOrig)[0]=='(' ){ + z += len; + z = next_non_whitespace(z, &len, &eType); + if( eType==TK_EOF ) return 1; + } + } + } + } + } + return 0; +} + +/* +** A list of functions that return strings that are safe to insert into +** SQL using %s. +*/ +static const char *azSafeFunc[] = { + "filename_collation", + "db_name", + "leaf_is_closed_sql", + "timeline_query_for_www", + "timeline_query_for_tty", + "blob_sql_text", + "glob_expr", + "fossil_all_reserved_names", + "configure_inop_rhs", + "db_setting_inop_rhs", +}; + +/* +** Return true if the input is an argument that is safe to use with %s +** while building an SQL statement. +*/ +static int is_s_safe(const char *z){ + int len, eType; + int i; + + /* A string literal is safe for use with %s */ + if( is_string_lit(z) ) return 1; + + /* Certain functions are guaranteed to return a string that is safe + ** for use with %s */ + z = next_non_whitespace(z, &len, &eType); + for(i=0; i<sizeof(azSafeFunc)/sizeof(azSafeFunc[0]); i++){ + if( eType==TK_ID + && strncmp(z, azSafeFunc[i], len)==0 + && strlen(azSafeFunc[i])==len + ){ + return 1; + } + } + + /* Expressions of the form: EXPR ? "..." : "...." can count as + ** a string literal. */ + if( is_string_expr(z) ) return 1; + + /* If the "safe-for-%s" comment appears in the argument, then + ** let it through */ + if( strstr(z, "/*safe-for-%s*/")!=0 ) return 1; + + return 0; +} + +/* +** Processing flags +*/ +#define FMT_NO_S 0x00001 /* Do not allow %s substitutions */ + +/* +** A list of internal Fossil interfaces that take a printf-style format +** string. +*/ +struct { + const char *zFName; /* Name of the function */ + int iFmtArg; /* Index of format argument. Leftmost is 1. 
*/ + unsigned fmtFlags; /* Processing flags */ +} aFmtFunc[] = { + { "admin_log", 1, 0 }, + { "blob_append_sql", 2, FMT_NO_S }, + { "blob_appendf", 2, 0 }, + { "cgi_panic", 1, 0 }, + { "cgi_redirectf", 1, 0 }, + { "db_blob", 2, FMT_NO_S }, + { "db_double", 2, FMT_NO_S }, + { "db_err", 1, 0 }, + { "db_exists", 1, FMT_NO_S }, + { "db_int", 2, FMT_NO_S }, + { "db_int64", 2, FMT_NO_S }, + { "db_multi_exec", 1, FMT_NO_S }, + { "db_optional_sql", 2, FMT_NO_S }, + { "db_prepare", 2, FMT_NO_S }, + { "db_prepare_ignore_error", 2, FMT_NO_S }, + { "db_static_prepare", 2, FMT_NO_S }, + { "db_text", 2, FMT_NO_S }, + { "form_begin", 2, 0 }, + { "fossil_error", 2, 0 }, + { "fossil_errorlog", 1, 0 }, + { "fossil_fatal", 1, 0 }, + { "fossil_fatal_recursive", 1, 0 }, + { "fossil_panic", 1, 0 }, + { "fossil_print", 1, 0 }, + { "fossil_trace", 1, 0 }, + { "fossil_warning", 1, 0 }, + { "href", 1, 0 }, + { "json_new_string_f", 1, 0 }, + { "mprintf", 1, 0 }, + { "socket_set_errmsg", 1, 0 }, + { "ssl_set_errmsg", 1, 0 }, + { "style_header", 1, 0 }, + { "style_set_current_page", 1, 0 }, + { "webpage_error", 1, 0 }, + { "xhref", 2, 0 }, +}; + +/* +** Determine if the indentifier zIdent of length nIndent is a Fossil +** internal interface that uses a printf-style argument. Return zero if not. +** Return the index of the format string if true with the left-most +** argument having an index of 1. +*/ +static int isFormatFunc(const char *zIdent, int nIdent, unsigned *pFlags){ + int upr, lwr; + lwr = 0; + upr = sizeof(aFmtFunc)/sizeof(aFmtFunc[0]) - 1; + while( lwr<=upr ){ + unsigned x = (lwr + upr)/2; + int c = strncmp(zIdent, aFmtFunc[x].zFName, nIdent); + if( c==0 ){ + if( aFmtFunc[x].zFName[nIdent]==0 ){ + *pFlags = aFmtFunc[x].fmtFlags; + return aFmtFunc[x].iFmtArg; + } + c = -1; + } + if( c<0 ){ + upr = x - 1; + }else{ + lwr = x + 1; + } + } + *pFlags = 0; + return 0; +} + +/* +** Return the expected number of arguments for the format string. +** Return -1 if the value cannot be computed. +** +** For each argument less than nType, store the conversion character +** for that argument in cType[i]. +*/ +static int formatArgCount(const char *z, int nType, char *cType){ + int nArg = 0; + int i, k; + int len; + int eType; + int ln = 0; + while( z[0] ){ + len = token_length(z, &eType, &ln); + if( eType==TK_STR ){ + for(i=1; i<len-1; i++){ + if( z[i]!='%' ) continue; + if( z[i+1]=='%' ){ i++; continue; } + for(k=i+1; k<len && !isalpha(z[k]); k++){ + if( z[k]=='*' || z[k]=='#' ){ + if( nArg<nType ) cType[nArg] = z[k]; + nArg++; + } + } + if( z[k]!='R' ){ + if( nArg<nType ) cType[nArg] = z[k]; + nArg++; + } + } + } + z += len; + } + return nArg; +} + +/* +** The function call that begins at zFCall[0] (which is on line lnFCall of the +** original file) is a function that uses a printf-style format string +** on argument number fmtArg. It has processings flags fmtFlags. Do +** compile-time checking on this function, output any errors, and return +** the number of errors. 
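+**
+** For example, a (hypothetical) call such as
+**
+**       db_multi_exec("DELETE FROM tag WHERE tagname=%s", zName);
+**
+** is reported as not safe for SQL, because db_multi_exec() carries the
+** FMT_NO_S flag and zName is neither a string literal, a call to one of
+** the azSafeFunc[] routines, nor tagged with the special "safe-for-%s"
+** comment.  Rewriting the format to use %Q quotes the value and
+** silences the warning.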
+*/ +static int checkFormatFunc( + const char *zFilename, /* Name of the file being processed */ + const char *zFCall, /* Pointer to start of function call */ + int lnFCall, /* Line number that holds z[0] */ + int fmtArg, /* Format string should be this argument */ + int fmtFlags /* Extra processing flags */ +){ + int szFName; + int eToken; + int ln = lnFCall; + int len; + const char *zStart; + char *z; + char *zCopy; + int nArg = 0; + const char **azArg = 0; + int i, k; + int nErr = 0; + char *acType; + + szFName = token_length(zFCall, &eToken, &ln); + zStart = next_non_whitespace(zFCall+szFName, &len, &eToken); + assert( zStart[0]=='(' && len==1 ); + len = distance_to(zStart+1, ')'); + zCopy = safe_malloc( len + 1 ); + memcpy(zCopy, zStart+1, len); + zCopy[len] = 0; + azArg = 0; + nArg = 0; + z = zCopy; + while( z[0] ){ + len = distance_to(z, ','); + azArg = safe_realloc((char*)azArg, (sizeof(azArg[0])+1)*(nArg+1)); + azArg[nArg++] = skip_space(z); + if( z[len]==0 ) break; + z[len] = 0; + for(i=len-1; i>0 && isspace(z[i]); i--){ z[i] = 0; } + z += len + 1; + } + acType = (char*)&azArg[nArg]; + if( fmtArg>nArg ){ + printf("%s:%d: too few arguments to %.*s()\n", + zFilename, lnFCall, szFName, zFCall); + nErr++; + }else{ + const char *zFmt = azArg[fmtArg-1]; + const char *zOverride = strstr(zFmt, "/*works-like:"); + if( zOverride ) zFmt = zOverride + sizeof("/*works-like:")-1; + if( !is_string_lit(zFmt) ){ + printf("%s:%d: %.*s() has non-constant format string\n", + zFilename, lnFCall, szFName, zFCall); + nErr++; + }else if( (k = formatArgCount(zFmt, nArg, acType))>=0 + && nArg!=fmtArg+k ){ + printf("%s:%d: too %s arguments to %.*s() " + "- got %d and expected %d\n", + zFilename, lnFCall, (nArg<fmtArg+k ? "few" : "many"), + szFName, zFCall, nArg, fmtArg+k); + nErr++; + }else if( fmtFlags & FMT_NO_S ){ + for(i=0; i<nArg && i<k; i++){ + if( (acType[i]=='s' || acType[i]=='z' || acType[i]=='b') + && !is_s_safe(azArg[fmtArg+i]) + ){ + printf("%s:%d: Argument %d to %.*s() not safe for SQL\n", + zFilename, lnFCall, i+fmtArg, szFName, zFCall); + nErr++; + } + } + } + } + if( nErr ){ + for(i=0; i<nArg; i++){ + printf(" arg[%d]: %s\n", i, azArg[i]); + } + } + + free((char*)azArg); + free(zCopy); + return nErr; +} + + +/* +** Do a design-rule check of format strings for the file named zName +** with content zContent. Write errors on standard output. Return +** the number of errors. +*/ +static int scan_file(const char *zName, const char *zContent){ + const char *z; + int ln = 0; + int szToken; + int eToken; + const char *zPrev; + int ePrev; + int szPrev; + int lnPrev; + int nCurly = 0; + int x; + unsigned fmtFlags = 0; + int nErr = 0; + + if( zContent==0 ){ + printf("cannot read file: %s\n", zName); + return 1; + } + for(z=zContent; z[0]; z += szToken){ + szToken = token_length(z, &eToken, &ln); + if( eToken==TK_SPACE ) continue; + if( eToken==TK_OTHER ){ + if( z[0]=='{' ){ + nCurly++; + }else if( z[0]=='}' ){ + nCurly--; + }else if( nCurly>0 && z[0]=='(' && ePrev==TK_ID + && (x = isFormatFunc(zPrev,szPrev,&fmtFlags))>0 ){ + nErr += checkFormatFunc(zName, zPrev, lnPrev, x, fmtFlags); + } + } + zPrev = z; + ePrev = eToken; + szPrev = szToken; + lnPrev = ln; + } + return nErr; +} + +/* +** Check for format-string design rule violations on all files listed +** on the command-line. 
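+**
+** This checker is intended to be compiled as a small stand-alone
+** program and run over the C sources, along the lines of (the program
+** name here is illustrative only):
+**
+**       cc -o codecheck codecheck.c
+**       ./codecheck *.c
+**
+** The exit status is the number of violations found, so a zero status
+** means every file passed.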
+*/ +int main(int argc, char **argv){ + int i; + int nErr = 0; + for(i=1; i<argc; i++){ + char *zFile = read_file(argv[i]); + nErr += scan_file(argv[i], zFile); + free(zFile); + } + return nErr; +} Index: src/comformat.c ================================================================== --- src/comformat.c +++ src/comformat.c @@ -19,82 +19,499 @@ ** text on a TTY. */ #include "config.h" #include "comformat.h" #include <assert.h> +#ifdef _WIN32 +# include <windows.h> +#else +# include <termios.h> +# include <sys/ioctl.h> +#endif + +#if INTERFACE +#define COMMENT_PRINT_NONE ((u32)0x00000000) /* No flags. */ +#define COMMENT_PRINT_LEGACY ((u32)0x00000001) /* Use legacy algorithm. */ +#define COMMENT_PRINT_TRIM_CRLF ((u32)0x00000002) /* Trim leading CR/LF. */ +#define COMMENT_PRINT_TRIM_SPACE ((u32)0x00000004) /* Trim leading/trailing. */ +#define COMMENT_PRINT_WORD_BREAK ((u32)0x00000008) /* Break lines on words. */ +#define COMMENT_PRINT_ORIG_BREAK ((u32)0x00000010) /* Break before original. */ +#define COMMENT_PRINT_DEFAULT (COMMENT_PRINT_LEGACY) /* Defaults. */ +#endif + +/* +** This is the previous value used by most external callers when they +** needed to specify a default maximum line length to be used with the +** comment_print() function. +*/ +#ifndef COMMENT_LEGACY_LINE_LENGTH +# define COMMENT_LEGACY_LINE_LENGTH (78) +#endif + +/* +** This is the number of spaces to print when a tab character is seen. +*/ +#ifndef COMMENT_TAB_WIDTH +# define COMMENT_TAB_WIDTH (8) +#endif + +/* +** This function sets the maximum number of characters to print per line +** based on the detected terminal line width, if available; otherwise, it +** uses the legacy default terminal line width minus the amount to indent. +** +** Zero is returned to indicate any failure. One is returned to indicate +** the successful detection of the terminal line width. Negative one is +** returned to indicate the terminal line width is using the hard-coded +** legacy default value. +*/ +static int comment_set_maxchars( + int indent, + int *pMaxChars +){ +#if defined(_WIN32) + CONSOLE_SCREEN_BUFFER_INFO csbi; + memset(&csbi, 0, sizeof(CONSOLE_SCREEN_BUFFER_INFO)); + if( GetConsoleScreenBufferInfo(GetStdHandle(STD_OUTPUT_HANDLE), &csbi) ){ + *pMaxChars = csbi.srWindow.Right - csbi.srWindow.Left - indent; + return 1; + } + return 0; +#elif defined(TIOCGWINSZ) + struct winsize w; + memset(&w, 0, sizeof(struct winsize)); + if( ioctl(0, TIOCGWINSZ, &w)!=-1 ){ + *pMaxChars = w.ws_col - indent; + return 1; + } + return 0; +#else + /* + ** Fallback to using more-or-less the "legacy semantics" of hard-coding + ** the maximum line length to a value reasonable for the vast majority + ** of supported systems. + */ + *pMaxChars = COMMENT_LEGACY_LINE_LENGTH - indent; + return -1; +#endif +} + +/* +** This function checks the current line being printed against the original +** comment text. Upon matching, it emits a new line and updates the provided +** character and line counts, if applicable. +*/ +static int comment_check_orig( + const char *zOrigText, /* [in] Original comment text ONLY, may be NULL. */ + const char *zLine, /* [in] The comment line to print. */ + int *pCharCnt, /* [in/out] Pointer to the line character count. */ + int *pLineCnt /* [in/out] Pointer to the total line count. 
*/ +){ + if( zOrigText && fossil_strcmp(zLine, zOrigText)==0 ){ + fossil_print("\n"); + if( pCharCnt ) *pCharCnt = 0; + if( pLineCnt ) (*pLineCnt)++; + return 1; + } + return 0; +} + +/* +** This function scans the specified comment line starting just after the +** initial index and returns the index of the next spacing character -OR- +** zero if such a character cannot be found. For the purposes of this +** algorithm, the NUL character is treated the same as a spacing character. +*/ +static int comment_next_space( + const char *zLine, /* [in] The comment line being printed. */ + int index /* [in] The current character index being handled. */ +){ + int nextIndex = index + 1; + for(;;){ + char c = zLine[nextIndex]; + if( c==0 || fossil_isspace(c) ){ + return nextIndex; + } + nextIndex++; + } + return 0; /* NOT REACHED */ +} + +/* +** This function is called when printing a logical comment line to perform +** the necessary indenting. +*/ +static void comment_print_indent( + const char *zLine, /* [in] The comment line being printed. */ + int indent, /* [in] Number of spaces to indent, zero for none. */ + int trimCrLf, /* [in] Non-zero to trim leading/trailing CR/LF. */ + int trimSpace, /* [in] Non-zero to trim leading/trailing spaces. */ + int *piIndex /* [in/out] Pointer to first non-space character. */ +){ + if( indent>0 ){ + fossil_print("%*s", indent, ""); + } + if( zLine && piIndex ){ + int index = *piIndex; + if( trimCrLf ){ + while( zLine[index]=='\r' || zLine[index]=='\n' ){ index++; } + } + if( trimSpace ){ + while( fossil_isspace(zLine[index]) ){ index++; } + } + *piIndex = index; + } +} + +/* +** This function prints one logical line of a comment, stopping when it hits +** a new line -OR- runs out of space on the logical line. +*/ +static void comment_print_line( + const char *zOrigText, /* [in] Original comment text ONLY, may be NULL. */ + const char *zLine, /* [in] The comment line to print. */ + int origIndent, /* [in] Number of spaces to indent before the original + ** comment. */ + int indent, /* [in] Number of spaces to indent, before the line + ** to print. */ + int lineChars, /* [in] Maximum number of characters to print. */ + int trimCrLf, /* [in] Non-zero to trim leading/trailing CR/LF. */ + int trimSpace, /* [in] Non-zero to trim leading/trailing spaces. */ + int wordBreak, /* [in] Non-zero to try breaking on word boundaries. */ + int origBreak, /* [in] Non-zero to break before original comment. */ + int *pLineCnt, /* [in/out] Pointer to the total line count. */ + const char **pzLine /* [out] Pointer to the end of the logical line. 
*/ +){ + int index = 0, charCnt = 0, lineCnt = 0, maxChars; + if( !zLine ) return; + if( lineChars<=0 ) return; + comment_print_indent(zLine, indent, trimCrLf, trimSpace, &index); + maxChars = lineChars; + for(;;){ + int useChars = 1; + char c = zLine[index]; + if( c==0 ){ + break; + }else{ + if( origBreak && index>0 ){ + const char *zCurrent = &zLine[index]; + if( comment_check_orig(zOrigText, zCurrent, &charCnt, &lineCnt) ){ + comment_print_indent(zCurrent, origIndent, trimCrLf, trimSpace, + &index); + maxChars = lineChars; + } + } + index++; + } + if( c=='\n' ){ + lineCnt++; + charCnt = 0; + useChars = 0; + }else if( c=='\t' ){ + int nextIndex = comment_next_space(zLine, index); + if( nextIndex<=0 || (nextIndex-index)>maxChars ){ + break; + } + charCnt++; + useChars = COMMENT_TAB_WIDTH; + if( maxChars<useChars ){ + fossil_print(" "); + break; + } + }else if( wordBreak && fossil_isspace(c) ){ + int nextIndex = comment_next_space(zLine, index); + if( nextIndex<=0 || (nextIndex-index)>maxChars ){ + break; + } + charCnt++; + }else{ + charCnt++; + } + assert( c!='\n' || charCnt==0 ); + fossil_print("%c", c); + maxChars -= useChars; + if( maxChars==0 ) break; + assert( maxChars>0 ); + if( c=='\n' ) break; + } + if( charCnt>0 ){ + fossil_print("\n"); + lineCnt++; + } + if( pLineCnt ){ + *pLineCnt += lineCnt; + } + if( pzLine ){ + *pzLine = zLine + index; + } +} /* -** Given a comment string zText, format that string for printing -** on a TTY. Assume that the output cursors is indent spaces from -** the left margin and that a single line can contain no more than -** lineLength characters. Indent all subsequent lines by indent. -** -** lineLength must be less than 400. -** -** Return the number of newlines that are output. +** This is the legacy comment printing algorithm. It is being retained +** for backward compatibility. +** +** Given a comment string, format that string for printing on a TTY. +** Assume that the output cursors is indent spaces from the left margin +** and that a single line can contain no more than 'width' characters. +** Indent all subsequent lines by 'indent'. +** +** Returns the number of new lines emitted. */ -int comment_print(const char *zText, int indent, int lineLength){ - int tlen = lineLength - indent; +static int comment_print_legacy( + const char *zText, /* The comment text to be printed. */ + int indent, /* Number of spaces to indent each non-initial line. */ + int width /* Maximum number of characters per line. 
*/ +){ + int maxChars = width - indent; int si, sk, i, k; int doIndent = 0; - char zBuf[400]; - int lineCnt = 0; + char *zBuf; + char zBuffer[400]; + int lineCnt = 0; + if( width<0 ){ + comment_set_maxchars(indent, &maxChars); + } + if( zText==0 ) zText = "(NULL)"; + if( maxChars<=0 ){ + maxChars = strlen(zText); + } + if( maxChars >= (sizeof(zBuffer)) ){ + zBuf = fossil_malloc(maxChars+1); + }else{ + zBuf = zBuffer; + } for(;;){ - while( isspace(zText[0]) ){ zText++; } + while( fossil_isspace(zText[0]) ){ zText++; } if( zText[0]==0 ){ if( doIndent==0 ){ - printf("\n"); + fossil_print("\n"); lineCnt = 1; } + if( zBuf!=zBuffer) fossil_free(zBuf); return lineCnt; } - for(sk=si=i=k=0; zText[i] && k<tlen; i++){ + for(sk=si=i=k=0; zText[i] && k<maxChars; i++){ char c = zText[i]; - if( isspace(c) ){ + if( fossil_isspace(c) ){ si = i; sk = k; if( k==0 || zBuf[k-1]!=' ' ){ zBuf[k++] = ' '; } }else{ zBuf[k] = c; - if( c=='-' && k>0 && isalpha(zBuf[k-1]) ){ + if( c=='-' && k>0 && fossil_isalpha(zBuf[k-1]) ){ si = i+1; sk = k+1; } k++; } } if( doIndent ){ - printf("%*s", indent, ""); + fossil_print("%*s", indent, ""); } doIndent = 1; if( sk>0 && zText[i] ){ zText += si; - zBuf[sk++] = '\n'; zBuf[sk] = 0; - printf("%s", zBuf); }else{ zText += i; - zBuf[k++] = '\n'; zBuf[k] = 0; - printf("%s", zBuf); } + fossil_print("%s\n", zBuf); + lineCnt++; + } +} + +/* +** This is the comment printing function. The comment printing algorithm +** contained within it attempts to preserve the formatting present within +** the comment string itself while honoring line width limitations. There +** are several flags that modify the default behavior of this function: +** +** COMMENT_PRINT_LEGACY: Forces use of the legacy comment printing +** algorithm. For backward compatibility, +** this is the default. +** +** COMMENT_PRINT_TRIM_CRLF: Trims leading and trailing carriage-returns +** and line-feeds where they do not materially +** impact pre-existing formatting (i.e. at the +** start of the comment string -AND- right +** before line indentation). This flag does +** not apply to the legacy comment printing +** algorithm. This flag may be combined with +** COMMENT_PRINT_TRIM_SPACE. +** +** COMMENT_PRINT_TRIM_SPACE: Trims leading and trailing spaces where they +** do not materially impact the pre-existing +** formatting (i.e. at the start of the comment +** string -AND- right before line indentation). +** This flag does not apply to the legacy +** comment printing algorithm. This flag may +** be combined with COMMENT_PRINT_TRIM_CRLF. +** +** COMMENT_PRINT_WORD_BREAK: Attempts to break lines on word boundaries +** while honoring the logical line length. +** If this flag is not specified, honoring the +** logical line length may result in breaking +** lines in the middle of words. This flag +** does not apply to the legacy comment +** printing algorithm. +** +** COMMENT_PRINT_ORIG_BREAK: Looks for the original comment text within +** the text being printed. Upon matching, a +** new line will be emitted, thus preserving +** more of the pre-existing formatting. +** +** Given a comment string, format that string for printing on a TTY. +** Assume that the output cursors is indent spaces from the left margin +** and that a single line can contain no more than 'width' characters. +** Indent all subsequent lines by 'indent'. +** +** Returns the number of new lines emitted. +*/ +int comment_print( + const char *zText, /* The comment text to be printed. */ + const char *zOrigText, /* Original comment text ONLY, may be NULL. 
*/ + int indent, /* Spaces to indent each non-initial line. */ + int width, /* Maximum number of characters per line. */ + int flags /* Zero or more "COMMENT_PRINT_*" flags. */ +){ + int maxChars = width - indent; + int legacy = flags & COMMENT_PRINT_LEGACY; + int trimCrLf = flags & COMMENT_PRINT_TRIM_CRLF; + int trimSpace = flags & COMMENT_PRINT_TRIM_SPACE; + int wordBreak = flags & COMMENT_PRINT_WORD_BREAK; + int origBreak = flags & COMMENT_PRINT_ORIG_BREAK; + int lineCnt = 0; + const char *zLine; + + if( legacy ){ + return comment_print_legacy(zText, indent, width); + } + if( width<0 ){ + comment_set_maxchars(indent, &maxChars); + } + if( zText==0 ) zText = "(NULL)"; + if( maxChars<=0 ){ + maxChars = strlen(zText); + } + if( trimSpace ){ + while( fossil_isspace(zText[0]) ){ zText++; } + } + if( zText[0]==0 ){ + fossil_print("\n"); lineCnt++; + return lineCnt; + } + zLine = zText; + for(;;){ + comment_print_line(zOrigText, zLine, indent, zLine>zText ? indent : 0, + maxChars, trimCrLf, trimSpace, wordBreak, origBreak, + &lineCnt, &zLine); + if( !zLine || !zLine[0] ) break; } + return lineCnt; } /* -** Test the comment printing ** ** COMMAND: test-comment-format +** +** Usage: %fossil test-comment-format ?OPTIONS? PREFIX TEXT ?ORIGTEXT? +** +** Test comment formatting and printing. Use for testing only. +** +** Options: +** --file The comment text is really just a file name to +** read it from. +** --decode Decode the text using the same method used when +** handling the value of a C-card from a manifest. +** --legacy Use the legacy comment printing algorithm. +** --trimcrlf Enable trimming of leading/trailing CR/LF. +** --trimspace Enable trimming of leading/trailing spaces. +** --wordbreak Attempt to break lines on word boundaries. +** --origbreak Attempt to break when the original comment text +** is detected. +** --indent Number of spaces to indent (default (-1) is to +** auto-detect). Zero means no indent. +** -W|--width <num> Width of lines (default (-1) is to auto-detect). +** Zero means no limit. */ void test_comment_format(void){ - int indent; - if( g.argc!=4 ){ - usage("PREFIX TEXT"); + const char *zWidth; + const char *zIndent; + const char *zPrefix; + char *zText; + char *zOrigText; + int indent, width; + int fromFile = find_option("file", 0, 0)!=0; + int decode = find_option("decode", 0, 0)!=0; + int flags = COMMENT_PRINT_NONE; + if( find_option("legacy", 0, 0) ){ + flags |= COMMENT_PRINT_LEGACY; + } + if( find_option("trimcrlf", 0, 0) ){ + flags |= COMMENT_PRINT_TRIM_CRLF; + } + if( find_option("trimspace", 0, 0) ){ + flags |= COMMENT_PRINT_TRIM_SPACE; + } + if( find_option("wordbreak", 0, 0) ){ + flags |= COMMENT_PRINT_WORD_BREAK; + } + if( find_option("origbreak", 0, 0) ){ + flags |= COMMENT_PRINT_ORIG_BREAK; + } + zWidth = find_option("width","W",1); + if( zWidth ){ + width = atoi(zWidth); + }else{ + width = -1; /* automatic */ + } + zIndent = find_option("indent",0,1); + if( zIndent ){ + indent = atoi(zIndent); + }else{ + indent = -1; /* automatic */ + } + if( g.argc!=4 && g.argc!=5 ){ + usage("?OPTIONS? 
PREFIX TEXT ?ORIGTEXT?"); + } + zPrefix = g.argv[2]; + zText = g.argv[3]; + if( g.argc==5 ){ + zOrigText = g.argv[4]; + }else{ + zOrigText = 0; + } + if( fromFile ){ + Blob fileData; + blob_read_from_file(&fileData, zText); + zText = mprintf("%s", blob_str(&fileData)); + blob_reset(&fileData); + if( zOrigText ){ + blob_read_from_file(&fileData, zOrigText); + zOrigText = mprintf("%s", blob_str(&fileData)); + blob_reset(&fileData); + } + } + if( decode ){ + zText = mprintf(fromFile?"%z":"%s" /*works-like:"%s"*/, zText); + defossilize(zText); + if( zOrigText ){ + zOrigText = mprintf(fromFile?"%z":"%s" /*works-like:"%s"*/, zOrigText); + defossilize(zOrigText); + } + } + if( indent<0 ){ + indent = strlen(zPrefix); + } + if( zPrefix && *zPrefix ){ + fossil_print("%s", zPrefix); } - indent = strlen(g.argv[2]) + 1; - printf("%s ", g.argv[2]); - printf("(%d lines output)\n", comment_print(g.argv[3], indent, 79)); + fossil_print("(%d lines output)\n", + comment_print(zText, zOrigText, indent, width, flags)); + if( zOrigText && zOrigText!=g.argv[4] ) fossil_free(zOrigText); + if( zText && zText!=g.argv[3] ) fossil_free(zText); } Index: src/config.h ================================================================== --- src/config.h +++ src/config.h @@ -16,96 +16,206 @@ ******************************************************************************* ** ** A common header file used by all modules. */ +/* The following macros are necessary for large-file support under +** some linux distributions, and possibly other unixes as well. +*/ +#define _LARGE_FILE 1 +#ifndef _FILE_OFFSET_BITS +# define _FILE_OFFSET_BITS 64 +#endif +#define _LARGEFILE_SOURCE 1 + +/* Needed for various definitions... */ +#ifndef _GNU_SOURCE +# define _GNU_SOURCE +#endif + +/* Make sure that in Win32 MinGW builds, _USE_32BIT_TIME_T is always defined. */ +#if defined(_WIN32) && !defined(_WIN64) && !defined(_MSC_VER) && !defined(_USE_32BIT_TIME_T) +# define _USE_32BIT_TIME_T +#endif + +#ifdef HAVE_AUTOCONFIG_H +#include "autoconfig.h" +#endif + +#ifndef _RC_COMPILE_ + /* ** System header files used by all modules */ #include <unistd.h> #include <stdio.h> #include <stdlib.h> -#include <ctype.h> +/* #include <ctype.h> // do not use - causes problems */ #include <string.h> #include <stdarg.h> #include <assert.h> -#ifdef __MINGW32__ -# include <windows.h> + +#endif + +#if defined( __MINGW32__) || defined(__DMC__) || defined(_MSC_VER) || defined(__POCC__) +# if defined(__DMC__) || defined(_MSC_VER) || defined(__POCC__) + typedef int socklen_t; +# endif +# ifndef _WIN32 +# define _WIN32 +# endif #else +# include <sys/types.h> +# include <signal.h> # include <pwd.h> #endif +/* +** Utility macro to wrap an argument with double quotes. 
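+**
+** Two macro levels are used so that the argument is macro-expanded
+** before it is stringized.  For example, with a compiler that defines
+** _MSC_VER as 1900,
+**
+**       COMPILER_STRINGIFY(_MSC_VER)
+**
+** expands to the string "1900" rather than the literal text "_MSC_VER".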
+*/ +#if !defined(COMPILER_STRINGIFY) +# define COMPILER_STRINGIFY(x) COMPILER_STRINGIFY1(x) +# define COMPILER_STRINGIFY1(x) #x +#endif + +/* +** Define the compiler variant, used to compile the project +*/ +#if !defined(COMPILER_NAME) +# if defined(__DMC__) +# if defined(COMPILER_VERSION) && !defined(NO_COMPILER_VERSION) +# define COMPILER_NAME "dmc-" COMPILER_VERSION +# else +# define COMPILER_NAME "dmc" +# endif +# elif defined(__POCC__) +# if defined(_M_X64) +# if defined(COMPILER_VERSION) && !defined(NO_COMPILER_VERSION) +# define COMPILER_NAME "pellesc64-" COMPILER_VERSION +# else +# define COMPILER_NAME "pellesc64" +# endif +# else +# if defined(COMPILER_VERSION) && !defined(NO_COMPILER_VERSION) +# define COMPILER_NAME "pellesc32-" COMPILER_VERSION +# else +# define COMPILER_NAME "pellesc32" +# endif +# endif +# elif defined(_MSC_VER) +# if !defined(COMPILER_VERSION) +# define COMPILER_VERSION COMPILER_STRINGIFY(_MSC_VER) +# endif +# if defined(COMPILER_VERSION) && !defined(NO_COMPILER_VERSION) +# define COMPILER_NAME "msc-" COMPILER_VERSION +# else +# define COMPILER_NAME "msc" +# endif +# elif defined(__MINGW32__) +# if !defined(COMPILER_VERSION) +# if defined(__MINGW_VERSION) +# if defined(__GNUC__) +# if defined(__VERSION__) +# define COMPILER_VERSION COMPILER_STRINGIFY(__MINGW_VERSION) "-gcc-" __VERSION__ +# else +# define COMPILER_VERSION COMPILER_STRINGIFY(__MINGW_VERSION) "-gcc" +# endif +# else +# define COMPILER_VERSION COMPILER_STRINGIFY(__MINGW_VERSION) +# endif +# elif defined(__MINGW32_VERSION) +# if defined(__GNUC__) +# if defined(__VERSION__) +# define COMPILER_VERSION COMPILER_STRINGIFY(__MINGW32_VERSION) "-gcc-" __VERSION__ +# else +# define COMPILER_VERSION COMPILER_STRINGIFY(__MINGW32_VERSION) "-gcc" +# endif +# else +# define COMPILER_VERSION COMPILER_STRINGIFY(__MINGW32_VERSION) +# endif +# endif +# endif +# if defined(COMPILER_VERSION) && !defined(NO_COMPILER_VERSION) +# define COMPILER_NAME "mingw32-" COMPILER_VERSION +# else +# define COMPILER_NAME "mingw32" +# endif +# elif defined(_WIN32) +# define COMPILER_NAME "win32" +# elif defined(__GNUC__) +# if !defined(COMPILER_VERSION) +# if defined(__VERSION__) +# define COMPILER_VERSION __VERSION__ +# endif +# endif +# if defined(COMPILER_VERSION) && !defined(NO_COMPILER_VERSION) +# define COMPILER_NAME "gcc-" COMPILER_VERSION +# else +# define COMPILER_NAME "gcc" +# endif +# else +# define COMPILER_NAME "unknown" +# endif +#endif + +#if !defined(_RC_COMPILE_) && !defined(SQLITE_AMALGAMATION) + #include "sqlite3.h" /* -** Typedef for a 64-bit integer -*/ -typedef sqlite_int64 i64; -typedef sqlite_uint64 u64; - -/* -** Unsigned character type -*/ -typedef unsigned char u8; - -/* -** Standard colors. These colors can also be changed using a stylesheet. -*/ - -/* A blue border and background. Used for the title bar and for dates -** in a timeline. -*/ -#define BORDER1 "#a0b5f4" /* Stylesheet class: border1 */ -#define BG1 "#d0d9f4" /* Stylesheet class: bkgnd1 */ - -/* A red border and background. Use for releases in the timeline. -*/ -#define BORDER2 "#ec9898" /* Stylesheet class: border2 */ -#define BG2 "#f7c0c0" /* Stylesheet class: bkgnd2 */ - -/* A gray background. Used for column headers in the Wiki Table of Contents -** and to highlight ticket properties. -*/ -#define BG3 "#d0d0d0" /* Stylesheet class: bkgnd3 */ - -/* A light-gray background. Used for title bar, menus, and rlog alternation -*/ -#define BG4 "#f0f0f0" /* Stylesheet class: bkgnd4 */ - -/* A deeper gray background. 
Used for branches -*/ -#define BG5 "#dddddd" /* Stylesheet class: bkgnd5 */ - -/* Default HTML page header */ -#define HEADER "<html>\n" \ - "<head>\n" \ - "<link rel=\"alternate\" type=\"application/rss+xml\"\n" \ - " title=\"%N Timeline Feed\" href=\"%B/timeline.rss\">\n" \ - "<title>%N: %T\n\n" \ - "" - -/* Default HTML page footer */ -#define FOOTER "
\n" \ - "Fossil version %V\n" \ - "
\n" \ - "\n" +** On Solaris, getpass() will only return up to 8 characters. getpassphrase() returns up to 257. +*/ +#if HAVE_GETPASSPHRASE + #define getpass getpassphrase +#endif + +/* +** Typedef for a 64-bit integer +*/ +typedef sqlite3_int64 i64; +typedef sqlite3_uint64 u64; + +/* +** 8-bit types +*/ +typedef unsigned char u8; +typedef signed char i8; /* In the timeline, check-in messages are truncated at the first space ** that is more than MX_CKIN_MSG from the beginning, or at the first ** paragraph break that is more than MN_CKIN_MSG from the beginning. */ #define MN_CKIN_MSG 100 #define MX_CKIN_MSG 300 -/* Unset the following to disable internationalization code. */ -#ifndef FOSSIL_I18N -# define FOSSIL_I18N 1 +/* +** The following macros are used to cast pointers to integers and +** integers to pointers. The way you do this varies from one compiler +** to the next, so we have developed the following set of #if statements +** to generate appropriate macros for a wide range of compilers. +** +** The correct "ANSI" way to do this is to use the intptr_t type. +** Unfortunately, that typedef is not available on all compilers, or +** if it is available, it requires an #include of specific headers +** that vary from one machine to the next. +*/ +#if defined(__PTRDIFF_TYPE__) /* This case should work for GCC */ +# define FOSSIL_INT_TO_PTR(X) ((void*)(__PTRDIFF_TYPE__)(X)) +# define FOSSIL_PTR_TO_INT(X) ((int)(__PTRDIFF_TYPE__)(X)) +#elif !defined(__GNUC__) /* Works for compilers other than LLVM */ +# define FOSSIL_INT_TO_PTR(X) ((void*)&((char*)0)[X]) +# define FOSSIL_PTR_TO_INT(X) ((int)(((char*)X)-(char*)0)) +#else /* Generates a warning - but it always works */ +# define FOSSIL_INT_TO_PTR(X) ((void*)(X)) +# define FOSSIL_PTR_TO_INT(X) ((int)(X)) +#endif + +/* +** A marker for functions that never return. +*/ +#if defined(__GNUC__) || defined(__clang__) +# define NORETURN __attribute__((__noreturn__)) +#else +# define NORETURN #endif -#if FOSSIL_I18N -# include -# include -#endif -#ifndef CODESET -# undef FOSSIL_I18N -# define FOSSIL_I18N 0 -#endif +#endif /* _RC_COMPILE_ */ Index: src/configure.c ================================================================== --- src/configure.c +++ src/configure.c @@ -2,11 +2,11 @@ ** Copyright (c) 2008 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) - +** ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: @@ -14,11 +14,12 @@ ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This file contains code used to manage repository configurations. -** By "responsitory configure" we mean the local state of a repository +** +** By "repository configure" we mean the local state of a repository ** distinct from the versioned files. */ #include "config.h" #include "configure.h" #include @@ -26,18 +27,30 @@ #if INTERFACE /* ** Configuration transfers occur in groups. 
These are the allowed ** groupings: */ -#define CONFIGSET_SKIN 0x000001 /* WWW interface appearance */ -#define CONFIGSET_TKT 0x000002 /* Ticket configuration */ -#define CONFIGSET_PROJ 0x000004 /* Project name */ -#define CONFIGSET_SHUN 0x000008 /* Shun settings */ -#define CONFIGSET_USER 0x000010 /* The USER table */ -#define CONFIGSET_ADDR 0x000020 /* The CONCEALED table */ - -#define CONFIGSET_ALL 0xffffff /* Everything */ +#define CONFIGSET_CSS 0x000001 /* Style sheet only */ +#define CONFIGSET_SKIN 0x000002 /* WWW interface appearance */ +#define CONFIGSET_TKT 0x000004 /* Ticket configuration */ +#define CONFIGSET_PROJ 0x000008 /* Project name */ +#define CONFIGSET_SHUN 0x000010 /* Shun settings */ +#define CONFIGSET_USER 0x000020 /* The USER table */ +#define CONFIGSET_ADDR 0x000040 /* The CONCEALED table */ +#define CONFIGSET_XFER 0x000080 /* Transfer configuration */ + +#define CONFIGSET_ALL 0x0000ff /* Everything */ + +#define CONFIGSET_OVERWRITE 0x100000 /* Causes overwrite instead of merge */ +#define CONFIGSET_OLDFORMAT 0x200000 /* Use the legacy format */ + +/* +** This mask is used for the common TH1 configuration settings (i.e. those +** that are not specific to one particular subsystem, such as the transfer +** subsystem). +*/ +#define CONFIGSET_TH1 (CONFIGSET_SKIN|CONFIGSET_TKT|CONFIGSET_XFER) #endif /* INTERFACE */ /* ** Names of the configuration sets @@ -45,56 +58,107 @@ static struct { const char *zName; /* Name of the configuration set */ int groupMask; /* Mask for that configuration set */ const char *zHelp; /* What it does */ } aGroupName[] = { - { "email", CONFIGSET_ADDR, "Concealed email addresses in tickets" }, - { "project", CONFIGSET_PROJ, "Project name and description" }, - { "skin", CONFIGSET_SKIN, "Web interface apparance settings" }, - { "shun", CONFIGSET_SHUN, "List of shunned artifacts" }, - { "ticket", CONFIGSET_TKT, "Ticket setup", }, - { "user", CONFIGSET_USER, "Users and privilege settings" }, - { "all", CONFIGSET_ALL, "All of the above" }, + { "/email", CONFIGSET_ADDR, "Concealed email addresses in tickets" }, + { "/project", CONFIGSET_PROJ, "Project name and description" }, + { "/skin", CONFIGSET_SKIN | CONFIGSET_CSS, + "Web interface appearance settings" }, + { "/css", CONFIGSET_CSS, "Style sheet" }, + { "/shun", CONFIGSET_SHUN, "List of shunned artifacts" }, + { "/ticket", CONFIGSET_TKT, "Ticket setup", }, + { "/user", CONFIGSET_USER, "Users and privilege settings" }, + { "/xfer", CONFIGSET_XFER, "Transfer setup", }, + { "/all", CONFIGSET_ALL, "All of the above" }, }; /* ** The following is a list of settings that we are willing to -** transfer. +** transfer. ** ** Setting names that begin with an alphabetic characters refer to ** single entries in the CONFIG table. Setting names that begin with ** "@" are for special processing. 
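**
** For example, "@user" and "@concealed" stand for the entire USER and
** CONCEALED tables, while an ordinary name such as "ticket-table" is a
** single row of the CONFIG table.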
*/ static struct { const char *zName; /* Name of the configuration parameter */ int groupMask; /* Which config groups is it part of */ } aConfig[] = { - { "css", CONFIGSET_SKIN }, + { "css", CONFIGSET_CSS }, { "header", CONFIGSET_SKIN }, { "footer", CONFIGSET_SKIN }, + { "details", CONFIGSET_SKIN }, { "logo-mimetype", CONFIGSET_SKIN }, { "logo-image", CONFIGSET_SKIN }, - { "project-name", CONFIGSET_PROJ }, - { "project-description", CONFIGSET_PROJ }, - { "index-page", CONFIGSET_SKIN }, + { "background-mimetype", CONFIGSET_SKIN }, + { "background-image", CONFIGSET_SKIN }, { "timeline-block-markup", CONFIGSET_SKIN }, { "timeline-max-comment", CONFIGSET_SKIN }, + { "timeline-plaintext", CONFIGSET_SKIN }, + { "adunit", CONFIGSET_SKIN }, + { "adunit-omit-if-admin", CONFIGSET_SKIN }, + { "adunit-omit-if-user", CONFIGSET_SKIN }, + +#ifdef FOSSIL_ENABLE_TH1_DOCS + { "th1-docs", CONFIGSET_TH1 }, +#endif +#ifdef FOSSIL_ENABLE_TH1_HOOKS + { "th1-hooks", CONFIGSET_TH1 }, +#endif + { "th1-setup", CONFIGSET_TH1 }, + { "th1-uri-regexp", CONFIGSET_TH1 }, + +#ifdef FOSSIL_ENABLE_TCL + { "tcl", CONFIGSET_TH1 }, + { "tcl-setup", CONFIGSET_TH1 }, +#endif + + { "project-name", CONFIGSET_PROJ }, + { "short-project-name", CONFIGSET_PROJ }, + { "project-description", CONFIGSET_PROJ }, + { "index-page", CONFIGSET_PROJ }, + { "manifest", CONFIGSET_PROJ }, + { "binary-glob", CONFIGSET_PROJ }, + { "clean-glob", CONFIGSET_PROJ }, + { "ignore-glob", CONFIGSET_PROJ }, + { "keep-glob", CONFIGSET_PROJ }, + { "crnl-glob", CONFIGSET_PROJ }, + { "encoding-glob", CONFIGSET_PROJ }, + { "empty-dirs", CONFIGSET_PROJ }, + { "allow-symlinks", CONFIGSET_PROJ }, + { "dotfiles", CONFIGSET_PROJ }, + +#ifdef FOSSIL_ENABLE_LEGACY_MV_RM + { "mv-rm-files", CONFIGSET_PROJ }, +#endif + { "ticket-table", CONFIGSET_TKT }, { "ticket-common", CONFIGSET_TKT }, + { "ticket-change", CONFIGSET_TKT }, { "ticket-newpage", CONFIGSET_TKT }, { "ticket-viewpage", CONFIGSET_TKT }, { "ticket-editpage", CONFIGSET_TKT }, { "ticket-reportlist", CONFIGSET_TKT }, { "ticket-report-template", CONFIGSET_TKT }, { "ticket-key-template", CONFIGSET_TKT }, { "ticket-title-expr", CONFIGSET_TKT }, { "ticket-closed-expr", CONFIGSET_TKT }, { "@reportfmt", CONFIGSET_TKT }, + { "@user", CONFIGSET_USER }, + { "@concealed", CONFIGSET_ADDR }, + { "@shun", CONFIGSET_SHUN }, + + { "xfer-common-script", CONFIGSET_XFER }, + { "xfer-push-script", CONFIGSET_XFER }, + { "xfer-commit-script", CONFIGSET_XFER }, + { "xfer-ticket-script", CONFIGSET_XFER }, + }; static int iConfig = 0; /* ** Return name of first configuration property matching the given mask. 
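**
** The two routines are intended to be used together to walk the list,
** roughly as in:
**
**     for(z=configure_first_name(m); z; z=configure_next_name(m)){ ... }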
@@ -102,33 +166,79 @@ const char *configure_first_name(int iMask){ iConfig = 0; return configure_next_name(iMask); } const char *configure_next_name(int iMask){ - while( iConfig2 && zName[0]=='\'' && zName[n-1]=='\'' ){ + zName++; + n -= 2; + } for(i=0; i=count(aType) ) return; + while( blob_token(pContent, &name) && blob_sqltoken(pContent, &value) ){ + char *z = blob_terminate(&name); + if( !safeSql(z) ) return; + if( nToken>0 ){ + for(jj=0; jj=aType[ii].nField ) continue; + }else{ + if( !safeInt(z) ) return; + } + azToken[nToken++] = z; + azToken[nToken++] = z = blob_terminate(&value); + if( !safeSql(z) ) return; + if( nToken>=count(azToken) ) break; + } + if( nToken<2 ) return; + if( aType[ii].zName[0]=='/' ){ + thisMask = configure_is_exportable(azToken[1]); + }else{ + thisMask = configure_is_exportable(aType[ii].zName); + } + if( (thisMask & groupMask)==0 ) return; + + blob_zero(&sql); + if( groupMask & CONFIGSET_OVERWRITE ){ + if( (thisMask & configHasBeenReset)==0 && aType[ii].zName[0]!='/' ){ + db_multi_exec("DELETE FROM \"%w\"", &aType[ii].zName[1]); + configHasBeenReset |= thisMask; + } + blob_append_sql(&sql, "REPLACE INTO "); + }else{ + blob_append_sql(&sql, "INSERT OR IGNORE INTO "); + } + blob_append_sql(&sql, "\"%w\"(\"%w\", mtime", &zName[1], aType[ii].zPrimKey); + for(jj=2; jj=%lld", iStart); + while( db_step(&q)==SQLITE_ROW ){ + blob_appendf(&rec,"%s %s scom %s", + db_column_text(&q, 0), + db_column_text(&q, 1), + db_column_text(&q, 2) + ); + blob_appendf(pOut, "config /shun %d\n%s\n", + blob_size(&rec), blob_str(&rec)); + nCard++; + blob_reset(&rec); + } + db_finalize(&q); + } + if( groupMask & CONFIGSET_USER ){ + db_prepare(&q, "SELECT mtime, quote(login), quote(pw), quote(cap)," + " quote(info), quote(photo) FROM user" + " WHERE mtime>=%lld", iStart); + while( db_step(&q)==SQLITE_ROW ){ + blob_appendf(&rec,"%s %s pw %s cap %s info %s photo %s", + db_column_text(&q, 0), + db_column_text(&q, 1), + db_column_text(&q, 2), + db_column_text(&q, 3), + db_column_text(&q, 4), + db_column_text(&q, 5) + ); + blob_appendf(pOut, "config /user %d\n%s\n", + blob_size(&rec), blob_str(&rec)); + nCard++; + blob_reset(&rec); + } + db_finalize(&q); + } + if( groupMask & CONFIGSET_TKT ){ + db_prepare(&q, "SELECT mtime, quote(title), quote(owner), quote(cols)," + " quote(sqlcode) FROM reportfmt" + " WHERE mtime>=%lld", iStart); + while( db_step(&q)==SQLITE_ROW ){ + blob_appendf(&rec,"%s %s owner %s cols %s sqlcode %s", + db_column_text(&q, 0), + db_column_text(&q, 1), + db_column_text(&q, 2), + db_column_text(&q, 3), + db_column_text(&q, 4) + ); + blob_appendf(pOut, "config /reportfmt %d\n%s\n", + blob_size(&rec), blob_str(&rec)); + nCard++; + blob_reset(&rec); + } + db_finalize(&q); + } + if( groupMask & CONFIGSET_ADDR ){ + db_prepare(&q, "SELECT mtime, quote(hash), quote(content) FROM concealed" + " WHERE mtime>=%lld", iStart); + while( db_step(&q)==SQLITE_ROW ){ + blob_appendf(&rec,"%s %s content %s", + db_column_text(&q, 0), + db_column_text(&q, 1), + db_column_text(&q, 2) + ); + blob_appendf(pOut, "config /concealed %d\n%s\n", + blob_size(&rec), blob_str(&rec)); + nCard++; + blob_reset(&rec); + } + db_finalize(&q); + } + db_prepare(&q, "SELECT mtime, quote(name), quote(value) FROM config" + " WHERE name=:name AND mtime>=%lld", iStart); + for(ii=0; ii=3 ){ + zPattern = g.argv[2]; + } + blob_init(&sql,0,0); + blob_appendf(&sql, "SELECT name, value, datetime(mtime,'unixepoch')" + " FROM config"); + if( zPattern ){ + blob_appendf(&sql, " WHERE name GLOB %Q", zPattern); + } + if( showMtime 
){ + blob_appendf(&sql, " ORDER BY mtime, name"); + }else{ + blob_appendf(&sql, " ORDER BY name"); + } + db_prepare(&q, "%s", blob_str(&sql)/*safe-for-%s*/); + blob_reset(&sql); +#define MX_VAL 40 +#define MX_NM 28 +#define MX_LONGNM 60 + while( db_step(&q)==SQLITE_ROW ){ + const char *zName = db_column_text(&q,0); + int nName = db_column_bytes(&q,0); + const char *zValue = db_column_text(&q,1); + int szValue = db_column_bytes(&q,1); + const char *zMTime = db_column_text(&q,2); + for(i=j=0; j=' ' && c<='~' ){ + zTrans[j++] = c; + }else{ + zTrans[j++] = '\\'; + if( c=='\n' ){ + zTrans[j++] = 'n'; + }else if( c=='\r' ){ + zTrans[j++] = 'r'; + }else if( c=='\t' ){ + zTrans[j++] = 't'; + }else{ + zTrans[j++] = '0' + ((c>>6)&7); + zTrans[j++] = '0' + ((c>>3)&7); + zTrans[j++] = '0' + (c&7); + } + } + } + zTrans[j] = 0; + if( i=4 ? g.argv[3] : "-"; + n = db_int(0, "SELECT count(*) FROM config WHERE name GLOB %Q", zVar); + if( n==0 ){ + fossil_fatal("no match for %Q", zVar); + } + if( n>1 ){ + fossil_fatal("multiple matches: %s", + db_text(0, "SELECT group_concat(quote(name),', ') FROM (" + " SELECT name FROM config WHERE name GLOB %Q ORDER BY 1)", + zVar)); + } + blob_init(&x,0,0); + db_blob(&x, "SELECT value FROM config WHERE name GLOB %Q", zVar); + blob_write_to_file(&x, zFile); +} + +/* +** COMMAND: test-var-set +** +** Usage: %fossil test-var-set VAR ?VALUE? ?--file FILE? +** +** Store VALUE or the content of FILE (exactly one of which must be +** supplied) into variable VAR. Use a FILE of "-" to read from +** standard input. +** +** WARNING: changing the value of a variable can interfere with the +** operation of Fossil. Be sure you know what you are doing. +** +** Use "--blob FILE" instead of "--file FILE" to load a binary blob +** such as a GIF. +*/ +void test_var_set_cmd(void){ + const char *zVar; + const char *zFile; + const char *zBlob; + Blob x; + Stmt ins; + zFile = find_option("file",0,1); + zBlob = find_option("blob",0,1); + db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); + verify_all_options(); + if( g.argc<3 || (zFile==0 && zBlob==0 && g.argc<4) ){ + usage("VAR ?VALUE? 
?--file FILE?"); + } + zVar = g.argv[2]; + if( zFile ){ + if( zBlob ) fossil_fatal("cannot do both --file or --blob"); + blob_read_from_file(&x, zFile); + }else if( zBlob ){ + blob_read_from_file(&x, zBlob); + }else{ + blob_init(&x,g.argv[3],-1); + } + db_prepare(&ins, + "REPLACE INTO config(name,value,mtime)" + "VALUES(%Q,:val,now())", zVar); + if( zBlob ){ + db_bind_blob(&ins, ":val", &x); + }else{ + db_bind_text(&ins, ":val", blob_str(&x)); + } + db_step(&ins); + db_finalize(&ins); + blob_reset(&x); } Index: src/content.c ================================================================== --- src/content.c +++ src/content.c @@ -20,46 +20,87 @@ #include "config.h" #include "content.h" #include /* -** Macros for debugging -*/ -#if 0 -# define CONTENT_TRACE(X) printf X; -#else -# define CONTENT_TRACE(X) -#endif - -/* -** The artifact retrival cache -*/ -#define MX_CACHE_CNT 50 /* Maximum number of positive cache entries */ -#define EXPELL_INTERVAL 5 /* How often to expell from a full cache */ +** The artifact retrieval cache +*/ static struct { - int n; /* Current number of positive cache entries */ + i64 szTotal; /* Total size of all entries in the cache */ + int n; /* Current number of cache entries */ + int nAlloc; /* Number of slots allocated in a[] */ int nextAge; /* Age counter for implementing LRU */ - int skipCnt; /* Used to limit entries expelled from cache */ - struct { /* One instance of this for each cache entry */ + struct cacheLine { /* One instance of this for each cache entry */ int rid; /* Artifact id */ int age; /* Age. Newer is larger */ Blob content; /* Content of the artifact */ - } a[MX_CACHE_CNT]; /* The positive cache */ + } *a; /* The positive cache */ + Bag inCache; /* Set of artifacts currently in cache */ /* ** The missing artifact cache. ** ** Artifacts whose record ID are in missingCache cannot be retrieved ** either because they are phantoms or because they are a delta that ** depends on a phantom. Artifacts whose content we are certain is ** available are in availableCache. If an artifact is in neither cache - ** then its current availablity is unknown. + ** then its current availability is unknown. */ Bag missing; /* Cache of artifacts that are incomplete */ Bag available; /* Cache of artifacts that are complete */ } contentCache; +/* +** Remove the oldest element from the content cache +*/ +static void content_cache_expire_oldest(void){ + int i; + int mnAge = contentCache.nextAge; + int mn = -1; + for(i=0; i=0 ){ + bag_remove(&contentCache.inCache, contentCache.a[mn].rid); + contentCache.szTotal -= blob_size(&contentCache.a[mn].content); + blob_reset(&contentCache.a[mn].content); + contentCache.n--; + contentCache.a[mn] = contentCache.a[contentCache.n]; + } +} + +/* +** Add an entry to the content cache. +** +** This routines hands responsibility for the artifact over to the cache. +** The cache will deallocate memory when it has finished with it. 
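+**
+** The cache is bounded: once it holds more than 500 entries or more
+** than about 50MB of content, the oldest entries are expired before
+** the new one is added.  A typical use, as content_get() does when
+** reconstructing a long delta chain, is to insert every few artifacts
+** along the chain so that later requests nearby start from a cached
+** full text.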
+*/ +void content_cache_insert(int rid, Blob *pBlob){ + struct cacheLine *p; + if( contentCache.n>500 || contentCache.szTotal>50000000 ){ + i64 szBefore; + do{ + szBefore = contentCache.szTotal; + content_cache_expire_oldest(); + }while( contentCache.szTotal>50000000 && contentCache.szTotal=contentCache.nAlloc ){ + contentCache.nAlloc = contentCache.nAlloc*2 + 10; + contentCache.a = fossil_realloc(contentCache.a, + contentCache.nAlloc*sizeof(contentCache.a[0])); + } + p = &contentCache.a[contentCache.n++]; + p->rid = rid; + p->age = contentCache.nextAge++; + contentCache.szTotal += blob_size(pBlob); + p->content = *pBlob; + blob_zero(pBlob); + bag_insert(&contentCache.inCache, rid); +} /* ** Clear the content cache. */ void content_clear_cache(void){ @@ -67,15 +108,17 @@ for(i=0; i0 ){ + n++; + if( n>=nAlloc ){ + if( n>db_int(0, "SELECT max(rid) FROM blob") ){ + fossil_panic("infinite loop in DELTA table"); + } + nAlloc = nAlloc*2 + 10; + a = fossil_realloc(a, nAlloc*sizeof(a[0])); + } + a[n] = nextRid; + } + mx = n; + rc = content_get(a[n], pBlob); + n--; + while( rc && n>=0 ){ + rc = content_of_blob(a[n], &delta); + if( rc ){ + blob_delta_apply(pBlob, &delta, &next); blob_reset(&delta); - rc = 1; - } - - /* Save the srcid artifact in the cache */ - if( contentCache.n=0 ){ - contentCache.a[i].content = src; - contentCache.a[i].age = contentCache.nextAge++; - contentCache.a[i].rid = srcid; - CONTENT_TRACE(("%*sadd %d to cache\n", - bag_count(&inProcess), "", srcid)) - }else{ - blob_reset(&src); - } - } - bag_remove(&inProcess, srcid); - }else{ - /* No delta required. Read content directly from the database */ - if( content_of_blob(rid, pBlob) ){ - rc = 1; - } + if( (mx-n)%8==0 ){ + content_cache_insert(a[n+1], pBlob); + }else{ + blob_reset(pBlob); + } + *pBlob = next; + } + n--; + } + free(a); + if( !rc ) blob_reset(pBlob); } if( rc==0 ){ bag_insert(&contentCache.missing, rid); }else{ bag_insert(&contentCache.available, rid); @@ -275,26 +305,34 @@ } return rc; } /* -** COMMAND: artifact +** COMMAND: artifact* ** -** Usage: %fossil artifact ARTIFACT-ID ?OUTPUT-FILENAME? +** Usage: %fossil artifact ARTIFACT-ID ?OUTPUT-FILENAME? ?OPTIONS? ** ** Extract an artifact by its SHA1 hash and write the results on ** standard output, or if the optional 4th argument is given, in ** the named output file. +** +** Options: +** -R|--repository FILE Extract artifacts from repository FILE +** +** See also: finfo */ void artifact_cmd(void){ int rid; Blob content; const char *zFile; - if( g.argc!=4 && g.argc!=3 ) usage("RECORDID ?FILENAME?"); + db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); + if( g.argc!=4 && g.argc!=3 ) usage("ARTIFACT-ID ?FILENAME? ?OPTIONS?"); zFile = g.argc==4 ? g.argv[3] : "-"; - db_must_be_within_tree(); rid = name_to_rid(g.argv[2]); + if( rid==0 ){ + fossil_fatal("%s",g.zErrMsg); + } content_get(rid, &content); blob_write_to_file(&content, zFile); } /* @@ -315,64 +353,161 @@ db_blob(&content, "SELECT content FROM blob WHERE rid=%d", rid); blob_uncompress(&content, &content); blob_write_to_file(&content, zFile); } +/* +** The following flag is set to disable the automatic calls to +** manifest_crosslink() when a record is dephantomized. This +** flag can be set (for example) when doing a clone when we know +** that rebuild will be run over all records at the conclusion +** of the operation. 
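+**
+** The flag is toggled through content_enable_dephantomize() below:
+** passing 0 suppresses the crosslink calls and passing 1 restores the
+** normal behavior.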
+*/ +static int ignoreDephantomizations = 0; + /* ** When a record is converted from a phantom to a real record, ** if that record has other records that are derived by delta, ** then call manifest_crosslink() on those other records. +** +** If the formerly phantom record or any of the other records +** derived by delta from the former phantom are a baseline manifest, +** then also invoke manifest_crosslink() on the delta-manifests +** associated with that baseline. +** +** Tail recursion is used to minimize stack depth. */ void after_dephantomize(int rid, int linkFlag){ Stmt q; - db_prepare(&q, "SELECT rid FROM delta WHERE srcid=%d", rid); - while( db_step(&q)==SQLITE_ROW ){ - int tid = db_column_int(&q, 0); - after_dephantomize(tid, 1); - } - db_finalize(&q); - if( linkFlag ){ - Blob content; - content_get(rid, &content); - manifest_crosslink(rid, &content); - blob_reset(&content); - } + int nChildAlloc = 0; + int *aChild = 0; + Blob content; + + if( ignoreDephantomizations ) return; + while( rid ){ + int nChildUsed = 0; + int i; + + /* Parse the object rid itself */ + if( linkFlag ){ + content_get(rid, &content); + manifest_crosslink(rid, &content, MC_NONE); + assert( blob_is_reset(&content) ); + } + + /* Parse all delta-manifests that depend on baseline-manifest rid */ + db_prepare(&q, "SELECT rid FROM orphan WHERE baseline=%d", rid); + while( db_step(&q)==SQLITE_ROW ){ + int child = db_column_int(&q, 0); + if( nChildUsed>=nChildAlloc ){ + nChildAlloc = nChildAlloc*2 + 10; + aChild = fossil_realloc(aChild, nChildAlloc*sizeof(aChild)); + } + aChild[nChildUsed++] = child; + } + db_finalize(&q); + for(i=0; i=nChildAlloc ){ + nChildAlloc = nChildAlloc*2 + 10; + aChild = fossil_realloc(aChild, nChildAlloc*sizeof(aChild)); + } + aChild[nChildUsed++] = child; + } + db_finalize(&q); + for(i=1; i0 ? aChild[0] : 0; + linkFlag = 1; + } + free(aChild); +} + +/* +** Turn dephantomization processing on or off. +*/ +void content_enable_dephantomize(int onoff){ + ignoreDephantomizations = !onoff; } /* ** Write content into the database. Return the record ID. If the ** content is already in the database, just return the record ID. ** ** If srcId is specified, then pBlob is delta content from ** the srcId record. srcId might be a phantom. +** +** pBlob is normally uncompressed text. But if nBlob>0 then the +** pBlob value has already been compressed and nBlob is its uncompressed +** size. If nBlob>0 then zUuid must be valid. ** ** zUuid is the UUID of the artifact, if it is specified. When srcId is ** specified then zUuid must always be specified. If srcId is zero, ** and zUuid is zero then the correct zUuid is computed from pBlob. ** ** If the record already exists but is a phantom, the pBlob content ** is inserted and the phatom becomes a real record. +** +** The original content of pBlob is not disturbed. The caller continues +** to be responsible for pBlob. This routine does *not* take over +** responsibility for freeing pBlob. */ -int content_put(Blob *pBlob, const char *zUuid, int srcId){ +int content_put_ex( + Blob *pBlob, /* Content to add to the repository */ + const char *zUuid, /* SHA1 hash of reconstructed pBlob */ + int srcId, /* pBlob is a delta from this entry */ + int nBlob, /* pBlob is compressed. 
Original size is this */ + int isPrivate /* The content should be marked private */ +){ int size; int rid; Stmt s1; Blob cmpr; Blob hash; int markAsUnclustered = 0; int isDephantomize = 0; - + assert( g.repositoryOpen ); assert( pBlob!=0 ); assert( srcId==0 || zUuid!=0 ); if( zUuid==0 ){ - assert( pBlob!=0 ); + assert( nBlob==0 ); sha1sum_blob(pBlob, &hash); }else{ blob_init(&hash, zUuid, -1); } - size = blob_size(pBlob); + if( nBlob ){ + size = nBlob; + }else{ + size = blob_size(pBlob); + if( srcId ){ + size = delta_output_size(blob_buffer(pBlob), size); + } + } db_begin_transaction(); /* Check to see if the entry already exists and if it does whether ** or not the entry is a phantom */ @@ -401,11 +536,15 @@ g.userUid, g.zNonce, g.zIpAddr ); g.rcvid = db_last_insert_rowid(); } - blob_compress(pBlob, &cmpr); + if( nBlob ){ + cmpr = pBlob[0]; + }else{ + blob_compress(pBlob, &cmpr); + } if( rid>0 ){ /* We are just adding data to a phantom */ db_prepare(&s1, "UPDATE blob SET rcvid=%d, size=%d, content=:data WHERE rid=%d", g.rcvid, size, rid @@ -419,40 +558,40 @@ } }else{ /* We are creating a new entry */ db_prepare(&s1, "INSERT INTO blob(rcvid,size,uuid,content)" - "VALUES(%d,%d,'%b',:data)", - g.rcvid, size, &hash + "VALUES(%d,%d,'%q',:data)", + g.rcvid, size, blob_str(&hash) ); db_bind_blob(&s1, ":data", &cmpr); db_exec(&s1); rid = db_last_insert_rowid(); if( !pBlob ){ db_multi_exec("INSERT OR IGNORE INTO phantom VALUES(%d)", rid); } - if( g.markPrivate ){ - db_multi_exec("INSERT INTO private VALUES(%d)", rid); - markAsUnclustered = 0; - } + } + if( g.markPrivate || isPrivate ){ + db_multi_exec("INSERT INTO private VALUES(%d)", rid); + markAsUnclustered = 0; } - blob_reset(&cmpr); + if( nBlob==0 ) blob_reset(&cmpr); /* If the srcId is specified, then the data we just added is ** really a delta. Record this fact in the delta table. */ if( srcId ){ db_multi_exec("REPLACE INTO delta(rid,srcid) VALUES(%d,%d)", rid, srcId); } - if( !isDephantomize && bag_find(&contentCache.missing, rid) && + if( !isDephantomize && bag_find(&contentCache.missing, rid) && (srcId==0 || content_is_available(srcId)) ){ content_mark_available(rid); } if( isDephantomize ){ after_dephantomize(rid, 0); } - + /* Add the element to the unclustered table if has never been ** previously seen. */ if( markAsUnclustered ){ db_multi_exec("INSERT OR IGNORE INTO unclustered VALUES(%d)", rid); @@ -466,18 +605,34 @@ /* Make arrangements to verify that the data can be recovered ** before we commit */ verify_before_commit(rid); return rid; } + +/* +** This is the simple common case for inserting content into the +** repository. pBlob is the content to be inserted. +** +** pBlob is uncompressed and is not deltaed. It is exactly the content +** to be inserted. +** +** The original content of pBlob is not disturbed. The caller continues +** to be responsible for pBlob. This routine does *not* take over +** responsiblity for freeing pBlob. +*/ +int content_put(Blob *pBlob){ + return content_put_ex(pBlob, 0, 0, 0, 0); +} + /* ** Create a new phantom with the given UUID and return its artifact ID. 
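**
** A phantom is a placeholder row in the BLOB table that records the
** UUID of an artifact whose content has not yet been received; the row
** is filled in later by content_put_ex() once the real content arrives.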
*/ -int content_new(const char *zUuid){ +int content_new(const char *zUuid, int isPrivate){ int rid; static Stmt s1, s2, s3; - + assert( g.repositoryOpen ); db_begin_transaction(); if( uuid_is_shunned(zUuid) ){ db_end_transaction(0); return 0; @@ -492,11 +647,11 @@ db_static_prepare(&s2, "INSERT INTO phantom VALUES(:rid)" ); db_bind_int(&s2, ":rid", rid); db_exec(&s2); - if( g.markPrivate ){ + if( g.markPrivate || isPrivate ){ db_multi_exec("INSERT INTO private VALUES(%d)", rid); }else{ db_static_prepare(&s3, "INSERT INTO unclustered VALUES(:rid)" ); @@ -510,21 +665,24 @@ /* ** COMMAND: test-content-put ** -** Extract a blob from the database and write it into a file. +** Usage: %fossil test-content-put FILE +** +** Read the content of FILE and add it to the Blob table as a new +** artifact using a direct call to content_put(). */ void test_content_put_cmd(void){ int rid; Blob content; if( g.argc!=3 ) usage("FILENAME"); db_must_be_within_tree(); user_select(); blob_read_from_file(&content, g.argv[2]); - rid = content_put(&content, 0, 0); - printf("inserted as record %d\n", rid); + rid = content_put(&content); + fossil_print("inserted as record %d\n", rid); } /* ** Make sure the content at rid is the original content and is not a ** delta. @@ -551,11 +709,11 @@ ** ** Make sure the content at RECORDID is not a delta */ void test_content_undelta_cmd(void){ int rid; - if( g.argc!=2 ) usage("RECORDID"); + if( g.argc!=3 ) usage("RECORDID"); db_must_be_within_tree(); rid = atoi(g.argv[2]); content_undelta(rid); } @@ -569,15 +727,15 @@ "SELECT 1 FROM private WHERE rid=:rid" ); db_bind_int(&s1, ":rid", rid); rc = db_step(&s1); db_reset(&s1); - return rc==SQLITE_ROW; + return rc==SQLITE_ROW; } /* -** Make sure an artifact is public. +** Make sure an artifact is public. */ void content_make_public(int rid){ static Stmt s1; db_static_prepare(&s1, "DELETE FROM private WHERE rid=:rid" @@ -600,21 +758,25 @@ ** ** If srcid is a delta that depends on rid, then srcid is ** converted to undeltaed text. ** ** If either rid or srcid contain less than 50 bytes, or if the -** resulting delta does not achieve a compression of at least 25% on -** its own the rid is left untouched. +** resulting delta does not achieve a compression of at least 25% +** the rid is left untouched. +** +** Return 1 if a delta is made and 0 if no delta occurs. 
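+**
+** For example, if the full text of rid is 10000 bytes, the delta is
+** kept only when it is 7500 bytes or smaller; otherwise rid remains
+** stored as full (non-delta) content.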
*/ -void content_deltify(int rid, int srcid, int force){ +int content_deltify(int rid, int srcid, int force){ int s; Blob data, src, delta; Stmt s1, s2; - if( srcid==rid ) return; - if( !force && findSrcid(rid)>0 ) return; + int rc = 0; + + if( srcid==rid ) return 0; + if( !force && findSrcid(rid)>0 ) return 0; if( content_is_private(srcid) && !content_is_private(rid) ){ - return; + return 0; } s = srcid; while( (s = findSrcid(s))>0 ){ if( s==rid ){ content_undelta(srcid); @@ -622,20 +784,20 @@ } } content_get(srcid, &src); if( blob_size(&src)<50 ){ blob_reset(&src); - return; + return 0; } content_get(rid, &data); if( blob_size(&data)<50 ){ blob_reset(&src); blob_reset(&data); - return; + return 0; } blob_delta_create(&src, &data, &delta); - if( blob_size(&delta) < blob_size(&data)*0.75 ){ + if( blob_size(&delta) <= blob_size(&data)*0.75 ){ blob_compress(&delta, &delta); db_prepare(&s1, "UPDATE blob SET content=:data WHERE rid=%d", rid); db_prepare(&s2, "REPLACE INTO delta(rid,srcid)VALUES(%d,%d)", rid, srcid); db_bind_blob(&s1, ":data", &delta); db_begin_transaction(); @@ -643,14 +805,16 @@ db_exec(&s2); db_end_transaction(0); db_finalize(&s1); db_finalize(&s2); verify_before_commit(rid); + rc = 1; } blob_reset(&src); blob_reset(&data); blob_reset(&delta); + return rc; } /* ** COMMAND: test-content-deltify ** @@ -659,5 +823,355 @@ void test_content_deltify_cmd(void){ if( g.argc!=5 ) usage("RID SRCID FORCE"); db_must_be_within_tree(); content_deltify(atoi(g.argv[2]), atoi(g.argv[3]), atoi(g.argv[4])); } + +/* +** Return true if Blob p looks like it might be a parsable control artifact. +*/ +static int looks_like_control_artifact(Blob *p){ + const char *z = blob_buffer(p); + int n = blob_size(p); + if( n<10 ) return 0; + if( strncmp(z, "-----BEGIN PGP SIGNED MESSAGE-----", 34)==0 ) return 1; + if( z[0]<'A' || z[0]>'Z' || z[1]!=' ' || z[0]=='I' ) return 0; + if( z[n-1]!='\n' ) return 0; + return 1; +} + +/* +** COMMAND: test-integrity ?OPTIONS? +** +** Verify that all content can be extracted from the BLOB table correctly. +** If the BLOB table is correct, then the repository can always be +** successfully reconstructed using "fossil rebuild". +** +** Options: +** +** --parse Parse all manifests, wikis, tickets, events, and +** so forth, reporting any errors found. 
+*/ +void test_integrity(void){ + Stmt q; + Blob content; + Blob cksum; + int n1 = 0; + int n2 = 0; + int nErr = 0; + int total; + int nCA = 0; + int anCA[10]; + int bParse = find_option("parse",0,0)!=0; + db_find_and_open_repository(OPEN_ANY_SCHEMA, 2); + memset(anCA, 0, sizeof(anCA)); + + /* Make sure no public artifact is a delta from a private artifact */ + db_prepare(&q, + "SELECT " + " rid, (SELECT uuid FROM blob WHERE rid=delta.rid)," + " srcid, (SELECT uuid FROM blob WHERE rid=delta.srcid)" + " FROM delta" + " WHERE srcid in private AND rid NOT IN private" + ); + while( db_step(&q)==SQLITE_ROW ){ + int rid = db_column_int(&q, 0); + const char *zId = db_column_text(&q, 1); + int srcid = db_column_int(&q, 2); + const char *zSrc = db_column_text(&q, 3); + fossil_print( + "public artifact %S (%d) is a delta from private artifact %S (%d)\n", + zId, rid, zSrc, srcid + ); + nErr++; + } + db_finalize(&q); + + db_prepare(&q, "SELECT rid, uuid, size FROM blob ORDER BY rid"); + total = db_int(0, "SELECT max(rid) FROM blob"); + while( db_step(&q)==SQLITE_ROW ){ + int rid = db_column_int(&q, 0); + const char *zUuid = db_column_text(&q, 1); + int size = db_column_int(&q, 2); + n1++; + fossil_print(" %d/%d\r", n1, total); + fflush(stdout); + if( size<0 ){ + fossil_print("skip phantom %d %s\n", rid, zUuid); + continue; /* Ignore phantoms */ + } + content_get(rid, &content); + if( blob_size(&content)!=size ){ + fossil_print("size mismatch on artifact %d: wanted %d but got %d\n", + rid, size, blob_size(&content)); + nErr++; + } + sha1sum_blob(&content, &cksum); + if( fossil_strcmp(blob_str(&cksum), zUuid)!=0 ){ + fossil_print("wrong hash on artifact %d: wanted %s but got %s\n", + rid, zUuid, blob_str(&cksum)); + nErr++; + } + if( bParse && looks_like_control_artifact(&content) ){ + Blob err; + int i, n; + char *z; + Manifest *p; + char zFirstLine[400]; + blob_zero(&err); + + z = blob_buffer(&content); + n = blob_size(&content); + for(i=0; itype]++; + manifest_destroy(p); + nCA++; + } + blob_reset(&err); + }else{ + blob_reset(&content); + } + blob_reset(&cksum); + n2++; + } + db_finalize(&q); + fossil_print("%d non-phantom blobs (out of %d total) checked: %d errors\n", + n2, n1, nErr); + if( bParse ){ + static const char *const azType[] = { 0, "manifest", "cluster", + "control", "wiki", "ticket", "attachment", "event" }; + int i; + fossil_print("%d total control artifacts\n", nCA); + for(i=1; i0;" /* Tags */ + "INSERT INTO used SELECT rid FROM tagxref;" /* Wiki & tickets */ + "INSERT INTO used SELECT rid FROM attachment JOIN blob ON src=uuid;" + "INSERT INTO used SELECT attachid FROM attachment;" + "INSERT INTO used SELECT objid FROM event;" + ); + db_prepare(&q, "SELECT rid, uuid, size FROM blob WHERE rid NOT IN used"); + while( db_step(&q)==SQLITE_ROW ){ + fossil_print("%7d %s size: %d\n", + db_column_int(&q, 0), + db_column_text(&q, 1), + db_column_int(&q,2)); + cnt++; + } + db_finalize(&q); + fossil_print("%d orphans\n", cnt); +} + +/* Allowed flags for check_exists */ +#define MISSING_SHUNNED 0x0001 /* Do not report shunned artifacts */ + +/* This is a helper routine for test-artifacts. +** +** Check to see that artifact zUuid exists in the repository. If it does, +** return 0. If it does not, generate an error message and return 1. 
+*/ +static int check_exists( + const char *zUuid, /* The artifact we are checking for */ + unsigned flags, /* Flags */ + Manifest *p, /* The control artifact that references zUuid */ + const char *zRole, /* Role of zUuid in p */ + const char *zDetail /* Additional information, such as a filename */ +){ + static Stmt q; + int rc = 0; + + db_static_prepare(&q, "SELECT size FROM blob WHERE uuid=:uuid"); + if( zUuid==0 || zUuid[0]==0 ) return 0; + db_bind_text(&q, ":uuid", zUuid); + if( db_step(&q)==SQLITE_ROW ){ + int size = db_column_int(&q, 0); + if( size<0 ) rc = 2; + }else{ + rc = 1; + } + db_reset(&q); + if( rc ){ + const char *zCFType = "control artifact"; + char *zSrc; + char *zDate; + const char *zErrType = "MISSING"; + if( db_exists("SELECT 1 FROM shun WHERE uuid=%Q", zUuid) ){ + if( flags & MISSING_SHUNNED ) return 0; + zErrType = "SHUNNED"; + } + switch( p->type ){ + case CFTYPE_MANIFEST: zCFType = "check-in"; break; + case CFTYPE_CLUSTER: zCFType = "cluster"; break; + case CFTYPE_CONTROL: zCFType = "tag"; break; + case CFTYPE_WIKI: zCFType = "wiki"; break; + case CFTYPE_TICKET: zCFType = "ticket"; break; + case CFTYPE_ATTACHMENT: zCFType = "attachment"; break; + case CFTYPE_EVENT: zCFType = "event"; break; + } + zSrc = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", p->rid); + if( p->rDate>0.0 ){ + zDate = db_text(0, "SELECT datetime(%.17g)", p->rDate); + }else{ + zDate = db_text(0, + "SELECT datetime(rcvfrom.mtime)" + " FROM blob, rcvfrom" + " WHERE blob.rcvid=rcvfrom.rcvid" + " AND blob.rid=%d", p->rid); + } + fossil_print("%s: %s\n %s %s %S (%d) %s\n", + zErrType, zUuid, zRole, zCFType, zSrc, p->rid, zDate); + if( zDetail && zDetail[0] ){ + fossil_print(" %s\n", zDetail); + } + fossil_free(zSrc); + fossil_free(zDate); + rc = 1; + } + return rc; +} + +/* +** COMMAND: test-missing +** +** Usage: %fossil test-missing +** +** Look at every artifact in the repository and verify that +** all references are satisfied. Report any referenced artifacts +** that are missing or shunned. 
+** +** Options: +** +** --notshunned Do not report shunned artifacts +** --quiet Only show output if there are errors +*/ +void test_missing(void){ + Stmt q; + Blob content; + int nErr = 0; + int nArtifact = 0; + int i; + Manifest *p; + unsigned flags = 0; + int quietFlag; + + if( find_option("notshunned", 0, 0)!=0 ) flags |= MISSING_SHUNNED; + quietFlag = find_option("quiet","q",0)!=0; + db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); + db_prepare(&q, + "SELECT mid FROM mlink UNION " + "SELECT srcid FROM tagxref WHERE srcid>0 UNION " + "SELECT rid FROM tagxref UNION " + "SELECT rid FROM attachment JOIN blob ON src=uuid UNION " + "SELECT objid FROM event"); + while( db_step(&q)==SQLITE_ROW ){ + int rid = db_column_int(&q, 0); + content_get(rid, &content); + p = manifest_parse(&content, rid, 0); + if( p ){ + nArtifact++; + nErr += check_exists(p->zBaseline, flags, p, "baseline of", 0); + nErr += check_exists(p->zAttachSrc, flags, p, "file of", 0); + for(i=0; inFile; i++){ + nErr += check_exists(p->aFile[i].zUuid, flags, p, "file of", + p->aFile[i].zName); + } + for(i=0; inParent; i++){ + nErr += check_exists(p->azParent[i], flags, p, "parent of", 0); + } + for(i=0; inCherrypick; i++){ + nErr += check_exists(p->aCherrypick[i].zCPTarget+1, flags, p, + "cherry-pick target of", 0); + nErr += check_exists(p->aCherrypick[i].zCPBase, flags, p, + "cherry-pick baseline of", 0); + } + for(i=0; inCChild; i++){ + nErr += check_exists(p->azCChild[i], flags, p, "in", 0); + } + for(i=0; inTag; i++){ + nErr += check_exists(p->aTag[i].zUuid, flags, p, "target of", 0); + } + manifest_destroy(p); + } + } + db_finalize(&q); + if( nErr>0 || quietFlag==0 ){ + fossil_print("%d missing or shunned references in %d control artifacts\n", + nErr, nArtifact); + } +} + +/* +** COMMAND: test-content-erase +** +** Usage: %fossil test-content-erase RID .... +** +** Remove all traces of one or more artifacts from the local repository. +** +** WARNING: This command destroys data and can cause you to lose work. +** Make sure you have a backup copy before using this command! +** +** WARNING: You must run "fossil rebuild" after this command to rebuild +** the metadata. +** +** Note that the arguments are the integer raw RID values from the BLOB table, +** not SHA1 hashs or labels. +*/ +void test_content_erase(void){ + int i; + Blob x; + char c; + Stmt q; + prompt_user("This command erases information from the repository and\n" + "might irrecoverably damage the repository. Make sure you\n" + "have a backup copy!\n" + "Continue? (y/N)? 
", &x); + c = blob_str(&x)[0]; + blob_reset(&x); + if( c!='y' && c!='Y' ) return; + db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); + db_begin_transaction(); + db_prepare(&q, "SELECT rid FROM delta WHERE srcid=:rid"); + for(i=2; i + +/* Windows DLL stuff */ +#ifdef JSON_PARSER_DLL +# ifdef _MSC_VER +# ifdef JSON_PARSER_DLL_EXPORTS +# define JSON_PARSER_DLL_API __declspec(dllexport) +# else +# define JSON_PARSER_DLL_API __declspec(dllimport) +# endif +# else +# define JSON_PARSER_DLL_API +# endif +#else +# define JSON_PARSER_DLL_API +#endif + +/* Determine the integer type use to parse non-floating point numbers */ +#ifdef _WIN32 +typedef __int64 JSON_int_t; +#define JSON_PARSER_INTEGER_SSCANF_TOKEN "%I64d" +#define JSON_PARSER_INTEGER_SPRINTF_TOKEN "%I64d" +#elif (__STDC_VERSION__ >= 199901L) || (HAVE_LONG_LONG == 1) +typedef long long JSON_int_t; +#define JSON_PARSER_INTEGER_SSCANF_TOKEN "%lld" +#define JSON_PARSER_INTEGER_SPRINTF_TOKEN "%lld" +#else +typedef long JSON_int_t; +#define JSON_PARSER_INTEGER_SSCANF_TOKEN "%ld" +#define JSON_PARSER_INTEGER_SPRINTF_TOKEN "%ld" +#endif + + +#ifdef __cplusplus +extern "C" { +#endif + +typedef enum +{ + JSON_E_NONE = 0, + JSON_E_INVALID_CHAR, + JSON_E_INVALID_KEYWORD, + JSON_E_INVALID_ESCAPE_SEQUENCE, + JSON_E_INVALID_UNICODE_SEQUENCE, + JSON_E_INVALID_NUMBER, + JSON_E_NESTING_DEPTH_REACHED, + JSON_E_UNBALANCED_COLLECTION, + JSON_E_EXPECTED_KEY, + JSON_E_EXPECTED_COLON, + JSON_E_OUT_OF_MEMORY +} JSON_error; + +typedef enum +{ + JSON_T_NONE = 0, + JSON_T_ARRAY_BEGIN, + JSON_T_ARRAY_END, + JSON_T_OBJECT_BEGIN, + JSON_T_OBJECT_END, + JSON_T_INTEGER, + JSON_T_FLOAT, + JSON_T_NULL, + JSON_T_TRUE, + JSON_T_FALSE, + JSON_T_STRING, + JSON_T_KEY, + JSON_T_MAX +} JSON_type; + +typedef struct JSON_value_struct { + union { + JSON_int_t integer_value; + + double float_value; + + struct { + const char* value; + size_t length; + } str; + } vu; +} JSON_value; + +typedef struct JSON_parser_struct* JSON_parser; + +/*! \brief JSON parser callback + + \param ctx The pointer passed to new_JSON_parser. + \param type An element of JSON_type but not JSON_T_NONE. + \param value A representation of the parsed value. This parameter is NULL for + JSON_T_ARRAY_BEGIN, JSON_T_ARRAY_END, JSON_T_OBJECT_BEGIN, JSON_T_OBJECT_END, + JSON_T_NULL, JSON_T_TRUE, and JSON_T_FALSE. String values are always returned + as zero-terminated C strings. + + \return Non-zero if parsing should continue, else zero. +*/ +typedef int (*JSON_parser_callback)(void* ctx, int type, const JSON_value* value); + + +/** + A typedef for allocator functions semantically compatible with malloc(). +*/ +typedef void* (*JSON_malloc_t)(size_t n); +/** + A typedef for deallocator functions semantically compatible with free(). +*/ +typedef void (*JSON_free_t)(void* mem); + +/*! \brief The structure used to configure a JSON parser object +*/ +typedef struct { + /** Pointer to a callback, called when the parser has something to tell + the user. This parameter may be NULL. In this case the input is + merely checked for validity. + */ + JSON_parser_callback callback; + /** + Callback context - client-specified data to pass to the + callback function. This parameter may be NULL. + */ + void* callback_ctx; + /** Specifies the levels of nested JSON to allow. Negative numbers yield unlimited nesting. + If negative, the parser can parse arbitrary levels of JSON, otherwise + the depth is the limit. + */ + int depth; + /** + To allow C style comments in JSON, set to non-zero. 
+ */ + int allow_comments; + /** + To decode floating point numbers manually set this parameter to + non-zero. + */ + int handle_floats_manually; + /** + The memory allocation routine, which must be semantically + compatible with malloc(3). If set to NULL, malloc(3) is used. + + If this is set to a non-NULL value then the 'free' member MUST be + set to the proper deallocation counterpart for this function. + Failure to do so results in undefined behaviour at deallocation + time. + */ + JSON_malloc_t malloc; + /** + The memory deallocation routine, which must be semantically + compatible with free(3). If set to NULL, free(3) is used. + + If this is set to a non-NULL value then the 'alloc' member MUST be + set to the proper allocation counterpart for this function. + Failure to do so results in undefined behaviour at deallocation + time. + */ + JSON_free_t free; +} JSON_config; + +/*! \brief Initializes the JSON parser configuration structure to default values. + + The default configuration is + - 127 levels of nested JSON (depends on JSON_PARSER_STACK_SIZE, see json_parser.c) + - no parsing, just checking for JSON syntax + - no comments + - Uses realloc() for memory de/allocation. + + \param config. Used to configure the parser. +*/ +JSON_PARSER_DLL_API void init_JSON_config(JSON_config * config); + +/*! \brief Create a JSON parser object + + \param config. Used to configure the parser. Set to NULL to use + the default configuration. See init_JSON_config. Its contents are + copied by this function, so it need not outlive the returned + object. + + \return The parser object, which is owned by the caller and must eventually + be freed by calling delete_JSON_parser(). +*/ +JSON_PARSER_DLL_API JSON_parser new_JSON_parser(JSON_config const* config); + +/*! \brief Destroy a previously created JSON parser object. */ +JSON_PARSER_DLL_API void delete_JSON_parser(JSON_parser jc); + +/*! \brief Parse a character. + + \return Non-zero, if all characters passed to this function are part of are valid JSON. +*/ +JSON_PARSER_DLL_API int JSON_parser_char(JSON_parser jc, int next_char); + +/*! \brief Finalize parsing. + + Call this method once after all input characters have been consumed. + + \return Non-zero, if all parsed characters are valid JSON, zero otherwise. +*/ +JSON_PARSER_DLL_API int JSON_parser_done(JSON_parser jc); + +/*! \brief Determine if a given string is valid JSON white space + + \return Non-zero if the string is valid, zero otherwise. +*/ +JSON_PARSER_DLL_API int JSON_parser_is_legal_white_space_string(const char* s); + +/*! \brief Gets the last error that occurred during the use of JSON_parser. + + \return A value from the JSON_error enum. +*/ +JSON_PARSER_DLL_API int JSON_parser_get_last_error(JSON_parser jc); + +/*! \brief Re-sets the parser to prepare it for another parse run. + + \return True (non-zero) on success, 0 on error (e.g. !jc). 
+*/ +JSON_PARSER_DLL_API int JSON_parser_reset(JSON_parser jc); + + +#ifdef __cplusplus +} +#endif + + +#endif /* JSON_PARSER_H */ +/* end file parser/JSON_parser.h */ +/* begin file parser/JSON_parser.c */ +/* +Copyright (c) 2007-2013 Jean Gressmann (jean@0x42.de) + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. +*/ + +/* + Changelog: + 2013-09-08 + Updated license to to be compatible with Debian license requirements. + + 2012-06-06 + Fix for invalid UTF16 characters and some comment fixex (thomas.h.moog@intel.com). + + 2010-11-25 + Support for custom memory allocation (sgbeal@googlemail.com). + + 2010-05-07 + Added error handling for memory allocation failure (sgbeal@googlemail.com). + Added diagnosis errors for invalid JSON. + + 2010-03-25 + Fixed buffer overrun in grow_parse_buffer & cleaned up code. + + 2009-10-19 + Replaced long double in JSON_value_struct with double after reports + of strtold being broken on some platforms (charles@transmissionbt.com). + + 2009-05-17 + Incorporated benrudiak@googlemail.com fix for UTF16 decoding. + + 2009-05-14 + Fixed float parsing bug related to a locale being set that didn't + use '.' as decimal point character (charles@transmissionbt.com). + + 2008-10-14 + Renamed states.IN to states.IT to avoid name clash which IN macro + defined in windef.h (alexey.pelykh@gmail.com) + + 2008-07-19 + Removed some duplicate code & debugging variable (charles@transmissionbt.com) + + 2008-05-28 + Made JSON_value structure ansi C compliant. This bug was report by + trisk@acm.jhu.edu + + 2008-05-20 + Fixed bug reported by charles@transmissionbt.com where the switching + from static to dynamic parse buffer did not copy the static parse + buffer's content. 
+*/ + + + +#include +#include +#include +#include +#include +#include +#include +#include + + +#ifdef _MSC_VER +# if _MSC_VER >= 1400 /* Visual Studio 2005 and up */ +# pragma warning(disable:4996) /* unsecure sscanf */ +# pragma warning(disable:4127) /* conditional expression is constant */ +# endif +#endif + + +#define true 1 +#define false 0 +#define __ -1 /* the universal error code */ + +/* values chosen so that the object size is approx equal to one page (4K) */ +#ifndef JSON_PARSER_STACK_SIZE +# define JSON_PARSER_STACK_SIZE 128 +#endif + +#ifndef JSON_PARSER_PARSE_BUFFER_SIZE +# define JSON_PARSER_PARSE_BUFFER_SIZE 3500 +#endif + +typedef void* (*JSON_debug_malloc_t)(size_t bytes, const char* reason); + +#ifdef JSON_PARSER_DEBUG_MALLOC +# define JSON_parser_malloc(func, bytes, reason) ((JSON_debug_malloc_t)func)(bytes, reason) +#else +# define JSON_parser_malloc(func, bytes, reason) func(bytes) +#endif + +typedef unsigned short UTF16; + +struct JSON_parser_struct { + JSON_parser_callback callback; + void* ctx; + signed char state, before_comment_state, type, escaped, comment, allow_comments, handle_floats_manually, error; + char decimal_point; + UTF16 utf16_high_surrogate; + int current_char; + int depth; + int top; + int stack_capacity; + signed char* stack; + char* parse_buffer; + size_t parse_buffer_capacity; + size_t parse_buffer_count; + signed char static_stack[JSON_PARSER_STACK_SIZE]; + char static_parse_buffer[JSON_PARSER_PARSE_BUFFER_SIZE]; + JSON_malloc_t malloc; + JSON_free_t free; +}; + +#define COUNTOF(x) (sizeof(x)/sizeof(x[0])) + +/* + Characters are mapped into these character classes. This allows for + a significant reduction in the size of the state transition table. +*/ + + + +enum classes { + C_SPACE, /* space */ + C_WHITE, /* other whitespace */ + C_LCURB, /* { */ + C_RCURB, /* } */ + C_LSQRB, /* [ */ + C_RSQRB, /* ] */ + C_COLON, /* : */ + C_COMMA, /* , */ + C_QUOTE, /* " */ + C_BACKS, /* \ */ + C_SLASH, /* / */ + C_PLUS, /* + */ + C_MINUS, /* - */ + C_POINT, /* . */ + C_ZERO , /* 0 */ + C_DIGIT, /* 123456789 */ + C_LOW_A, /* a */ + C_LOW_B, /* b */ + C_LOW_C, /* c */ + C_LOW_D, /* d */ + C_LOW_E, /* e */ + C_LOW_F, /* f */ + C_LOW_L, /* l */ + C_LOW_N, /* n */ + C_LOW_R, /* r */ + C_LOW_S, /* s */ + C_LOW_T, /* t */ + C_LOW_U, /* u */ + C_ABCDF, /* ABCDF */ + C_E, /* E */ + C_ETC, /* everything else */ + C_STAR, /* * */ + NR_CLASSES +}; + +static const signed char ascii_class[128] = { +/* + This array maps the 128 ASCII characters into character classes. + The remaining Unicode characters should be mapped to C_ETC. + Non-whitespace control characters are errors. 
+*/ + __, __, __, __, __, __, __, __, + __, C_WHITE, C_WHITE, __, __, C_WHITE, __, __, + __, __, __, __, __, __, __, __, + __, __, __, __, __, __, __, __, + + C_SPACE, C_ETC, C_QUOTE, C_ETC, C_ETC, C_ETC, C_ETC, C_ETC, + C_ETC, C_ETC, C_STAR, C_PLUS, C_COMMA, C_MINUS, C_POINT, C_SLASH, + C_ZERO, C_DIGIT, C_DIGIT, C_DIGIT, C_DIGIT, C_DIGIT, C_DIGIT, C_DIGIT, + C_DIGIT, C_DIGIT, C_COLON, C_ETC, C_ETC, C_ETC, C_ETC, C_ETC, + + C_ETC, C_ABCDF, C_ABCDF, C_ABCDF, C_ABCDF, C_E, C_ABCDF, C_ETC, + C_ETC, C_ETC, C_ETC, C_ETC, C_ETC, C_ETC, C_ETC, C_ETC, + C_ETC, C_ETC, C_ETC, C_ETC, C_ETC, C_ETC, C_ETC, C_ETC, + C_ETC, C_ETC, C_ETC, C_LSQRB, C_BACKS, C_RSQRB, C_ETC, C_ETC, + + C_ETC, C_LOW_A, C_LOW_B, C_LOW_C, C_LOW_D, C_LOW_E, C_LOW_F, C_ETC, + C_ETC, C_ETC, C_ETC, C_ETC, C_LOW_L, C_ETC, C_LOW_N, C_ETC, + C_ETC, C_ETC, C_LOW_R, C_LOW_S, C_LOW_T, C_LOW_U, C_ETC, C_ETC, + C_ETC, C_ETC, C_ETC, C_LCURB, C_ETC, C_RCURB, C_ETC, C_ETC +}; + + +/* + The state codes. +*/ +enum states { + GO, /* start */ + OK, /* ok */ + OB, /* object */ + KE, /* key */ + CO, /* colon */ + VA, /* value */ + AR, /* array */ + ST, /* string */ + ES, /* escape */ + U1, /* u1 */ + U2, /* u2 */ + U3, /* u3 */ + U4, /* u4 */ + MI, /* minus */ + ZE, /* zero */ + IT, /* integer */ + FR, /* fraction */ + E1, /* e */ + E2, /* ex */ + E3, /* exp */ + T1, /* tr */ + T2, /* tru */ + T3, /* true */ + F1, /* fa */ + F2, /* fal */ + F3, /* fals */ + F4, /* false */ + N1, /* nu */ + N2, /* nul */ + N3, /* null */ + C1, /* / */ + C2, /* / * */ + C3, /* * */ + FX, /* *.* *eE* */ + D1, /* second UTF-16 character decoding started by \ */ + D2, /* second UTF-16 character proceeded by u */ + NR_STATES +}; + +enum actions +{ + CB = -10, /* comment begin */ + CE = -11, /* comment end */ + FA = -12, /* false */ + TR = -13, /* false */ + NU = -14, /* null */ + DE = -15, /* double detected by exponent e E */ + DF = -16, /* double detected by fraction . */ + SB = -17, /* string begin */ + MX = -18, /* integer detected by minus */ + ZX = -19, /* integer detected by zero */ + IX = -20, /* integer detected by 1-9 */ + EX = -21, /* next char is escaped */ + UC = -22 /* Unicode character read */ +}; + + +static const signed char state_transition_table[NR_STATES][NR_CLASSES] = { +/* + The state transition table takes the current state and the current symbol, + and returns either a new state or an action. An action is represented as a + negative number. A JSON text is accepted if at the end of the text the + state is OK and if the mode is MODE_DONE. + + white 1-9 ABCDF etc + space | { } [ ] : , " \ / + - . 
0 | a b c d e f l n r s t u | E | * */ +/*start GO*/ {GO,GO,-6,__,-5,__,__,__,__,__,CB,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__}, +/*ok OK*/ {OK,OK,__,-8,__,-7,__,-3,__,__,CB,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__}, +/*object OB*/ {OB,OB,__,-9,__,__,__,__,SB,__,CB,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__}, +/*key KE*/ {KE,KE,__,__,__,__,__,__,SB,__,CB,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__}, +/*colon CO*/ {CO,CO,__,__,__,__,-2,__,__,__,CB,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__}, +/*value VA*/ {VA,VA,-6,__,-5,__,__,__,SB,__,CB,__,MX,__,ZX,IX,__,__,__,__,__,FA,__,NU,__,__,TR,__,__,__,__,__}, +/*array AR*/ {AR,AR,-6,__,-5,-7,__,__,SB,__,CB,__,MX,__,ZX,IX,__,__,__,__,__,FA,__,NU,__,__,TR,__,__,__,__,__}, +/*string ST*/ {ST,__,ST,ST,ST,ST,ST,ST,-4,EX,ST,ST,ST,ST,ST,ST,ST,ST,ST,ST,ST,ST,ST,ST,ST,ST,ST,ST,ST,ST,ST,ST}, +/*escape ES*/ {__,__,__,__,__,__,__,__,ST,ST,ST,__,__,__,__,__,__,ST,__,__,__,ST,__,ST,ST,__,ST,U1,__,__,__,__}, +/*u1 U1*/ {__,__,__,__,__,__,__,__,__,__,__,__,__,__,U2,U2,U2,U2,U2,U2,U2,U2,__,__,__,__,__,__,U2,U2,__,__}, +/*u2 U2*/ {__,__,__,__,__,__,__,__,__,__,__,__,__,__,U3,U3,U3,U3,U3,U3,U3,U3,__,__,__,__,__,__,U3,U3,__,__}, +/*u3 U3*/ {__,__,__,__,__,__,__,__,__,__,__,__,__,__,U4,U4,U4,U4,U4,U4,U4,U4,__,__,__,__,__,__,U4,U4,__,__}, +/*u4 U4*/ {__,__,__,__,__,__,__,__,__,__,__,__,__,__,UC,UC,UC,UC,UC,UC,UC,UC,__,__,__,__,__,__,UC,UC,__,__}, +/*minus MI*/ {__,__,__,__,__,__,__,__,__,__,__,__,__,__,ZE,IT,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__}, +/*zero ZE*/ {OK,OK,__,-8,__,-7,__,-3,__,__,CB,__,__,DF,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__}, +/*int IT*/ {OK,OK,__,-8,__,-7,__,-3,__,__,CB,__,__,DF,IT,IT,__,__,__,__,DE,__,__,__,__,__,__,__,__,DE,__,__}, +/*frac FR*/ {OK,OK,__,-8,__,-7,__,-3,__,__,CB,__,__,__,FR,FR,__,__,__,__,E1,__,__,__,__,__,__,__,__,E1,__,__}, +/*e E1*/ {__,__,__,__,__,__,__,__,__,__,__,E2,E2,__,E3,E3,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__}, +/*ex E2*/ {__,__,__,__,__,__,__,__,__,__,__,__,__,__,E3,E3,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__}, +/*exp E3*/ {OK,OK,__,-8,__,-7,__,-3,__,__,__,__,__,__,E3,E3,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__}, +/*tr T1*/ {__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,T2,__,__,__,__,__,__,__}, +/*tru T2*/ {__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,T3,__,__,__,__}, +/*true T3*/ {__,__,__,__,__,__,__,__,__,__,CB,__,__,__,__,__,__,__,__,__,OK,__,__,__,__,__,__,__,__,__,__,__}, +/*fa F1*/ {__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,F2,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__}, +/*fal F2*/ {__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,F3,__,__,__,__,__,__,__,__,__}, +/*fals F3*/ {__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,F4,__,__,__,__,__,__}, +/*false F4*/ {__,__,__,__,__,__,__,__,__,__,CB,__,__,__,__,__,__,__,__,__,OK,__,__,__,__,__,__,__,__,__,__,__}, +/*nu N1*/ {__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,N2,__,__,__,__}, +/*nul N2*/ {__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,N3,__,__,__,__,__,__,__,__,__}, +/*null N3*/ {__,__,__,__,__,__,__,__,__,__,CB,__,__,__,__,__,__,__,__,__,__,__,OK,__,__,__,__,__,__,__,__,__}, +/*/ C1*/ {__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,C2}, +/*/star C2*/ 
{C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C3}, +/** C3*/ {C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,CE,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C2,C3}, +/*_. FX*/ {OK,OK,__,-8,__,-7,__,-3,__,__,__,__,__,__,FR,FR,__,__,__,__,E1,__,__,__,__,__,__,__,__,E1,__,__}, +/*\ D1*/ {__,__,__,__,__,__,__,__,__,D2,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__}, +/*\ D2*/ {__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,__,U1,__,__,__,__}, +}; + + +/* + These modes can be pushed on the stack. +*/ +enum modes { + MODE_ARRAY = 1, + MODE_DONE = 2, + MODE_KEY = 3, + MODE_OBJECT = 4 +}; + +static void set_error(JSON_parser jc) +{ + switch (jc->state) { + case GO: + switch (jc->current_char) { + case '{': case '}': case '[': case ']': + jc->error = JSON_E_UNBALANCED_COLLECTION; + break; + default: + jc->error = JSON_E_INVALID_CHAR; + break; + } + break; + case OB: + jc->error = JSON_E_EXPECTED_KEY; + break; + case AR: + jc->error = JSON_E_UNBALANCED_COLLECTION; + break; + case CO: + jc->error = JSON_E_EXPECTED_COLON; + break; + case KE: + jc->error = JSON_E_EXPECTED_KEY; + break; + /* \uXXXX\uYYYY */ + case U1: case U2: case U3: case U4: case D1: case D2: + jc->error = JSON_E_INVALID_UNICODE_SEQUENCE; + break; + /* true, false, null */ + case T1: case T2: case T3: case F1: case F2: case F3: case F4: case N1: case N2: case N3: + jc->error = JSON_E_INVALID_KEYWORD; + break; + /* minus, integer, fraction, exponent */ + case MI: case ZE: case IT: case FR: case E1: case E2: case E3: + jc->error = JSON_E_INVALID_NUMBER; + break; + default: + jc->error = JSON_E_INVALID_CHAR; + break; + } +} + +static int +push(JSON_parser jc, int mode) +{ +/* + Push a mode onto the stack. Return false if there is overflow. +*/ + assert(jc->top <= jc->stack_capacity); + + if (jc->depth < 0) { + if (jc->top == jc->stack_capacity) { + const size_t bytes_to_copy = jc->stack_capacity * sizeof(jc->stack[0]); + const size_t new_capacity = jc->stack_capacity * 2; + const size_t bytes_to_allocate = new_capacity * sizeof(jc->stack[0]); + void* mem = JSON_parser_malloc(jc->malloc, bytes_to_allocate, "stack"); + if (!mem) { + jc->error = JSON_E_OUT_OF_MEMORY; + return false; + } + jc->stack_capacity = (int)new_capacity; + memcpy(mem, jc->stack, bytes_to_copy); + if (jc->stack != &jc->static_stack[0]) { + jc->free(jc->stack); + } + jc->stack = (signed char*)mem; + } + } else { + if (jc->top == jc->depth) { + jc->error = JSON_E_NESTING_DEPTH_REACHED; + return false; + } + } + jc->stack[++jc->top] = (signed char)mode; + return true; +} + + +static int +pop(JSON_parser jc, int mode) +{ +/* + Pop the stack, assuring that the current mode matches the expectation. + Return false if there is underflow or if the modes mismatch. 
+*/ + if (jc->top < 0 || jc->stack[jc->top] != mode) { + return false; + } + jc->top -= 1; + return true; +} + + +#define parse_buffer_clear(jc) \ + do {\ + jc->parse_buffer_count = 0;\ + jc->parse_buffer[0] = 0;\ + } while (0) + +#define parse_buffer_pop_back_char(jc)\ + do {\ + assert(jc->parse_buffer_count >= 1);\ + --jc->parse_buffer_count;\ + jc->parse_buffer[jc->parse_buffer_count] = 0;\ + } while (0) + + + +void delete_JSON_parser(JSON_parser jc) +{ + if (jc) { + if (jc->stack != &jc->static_stack[0]) { + jc->free((void*)jc->stack); + } + if (jc->parse_buffer != &jc->static_parse_buffer[0]) { + jc->free((void*)jc->parse_buffer); + } + jc->free((void*)jc); + } +} + +int JSON_parser_reset(JSON_parser jc) +{ + if (NULL == jc) { + return false; + } + + jc->state = GO; + jc->top = -1; + + /* parser has been used previously? */ + if (NULL == jc->parse_buffer) { + + /* Do we want non-bound stack? */ + if (jc->depth > 0) { + jc->stack_capacity = jc->depth; + if (jc->depth <= (int)COUNTOF(jc->static_stack)) { + jc->stack = &jc->static_stack[0]; + } else { + const size_t bytes_to_alloc = jc->stack_capacity * sizeof(jc->stack[0]); + jc->stack = (signed char*)JSON_parser_malloc(jc->malloc, bytes_to_alloc, "stack"); + if (jc->stack == NULL) { + return false; + } + } + } else { + jc->stack_capacity = (int)COUNTOF(jc->static_stack); + jc->depth = -1; + jc->stack = &jc->static_stack[0]; + } + + /* set up the parse buffer */ + jc->parse_buffer = &jc->static_parse_buffer[0]; + jc->parse_buffer_capacity = COUNTOF(jc->static_parse_buffer); + } + + /* set parser to start */ + push(jc, MODE_DONE); + parse_buffer_clear(jc); + + return true; +} + +JSON_parser +new_JSON_parser(JSON_config const * config) +{ +/* + new_JSON_parser starts the checking process by constructing a JSON_parser + object. It takes a depth parameter that restricts the level of maximum + nesting. + + To continue the process, call JSON_parser_char for each character in the + JSON text, and then call JSON_parser_done to obtain the final result. + These functions are fully reentrant. +*/ + + int use_std_malloc = false; + JSON_config default_config; + JSON_parser jc; + JSON_malloc_t alloc; + + /* set to default configuration if none was provided */ + if (NULL == config) { + /* initialize configuration */ + init_JSON_config(&default_config); + config = &default_config; + } + + /* use std malloc if either the allocator or deallocator function isn't set */ + use_std_malloc = NULL == config->malloc || NULL == config->free; + + alloc = use_std_malloc ? malloc : config->malloc; + + jc = (JSON_parser)JSON_parser_malloc(alloc, sizeof(*jc), "parser"); + + if (NULL == jc) { + return NULL; + } + + /* configure the parser */ + memset(jc, 0, sizeof(*jc)); + jc->malloc = alloc; + jc->free = use_std_malloc ? free : config->free; + jc->callback = config->callback; + jc->ctx = config->callback_ctx; + jc->allow_comments = (signed char)(config->allow_comments != 0); + jc->handle_floats_manually = (signed char)(config->handle_floats_manually != 0); + jc->decimal_point = *localeconv()->decimal_point; + /* We need to be able to push at least one object */ + jc->depth = config->depth == 0 ? 
1 : config->depth; + + /* reset the parser */ + if (!JSON_parser_reset(jc)) { + jc->free(jc); + return NULL; + } + + return jc; +} + +static int parse_buffer_grow(JSON_parser jc) +{ + const size_t bytes_to_copy = jc->parse_buffer_count * sizeof(jc->parse_buffer[0]); + const size_t new_capacity = jc->parse_buffer_capacity * 2; + const size_t bytes_to_allocate = new_capacity * sizeof(jc->parse_buffer[0]); + void* mem = JSON_parser_malloc(jc->malloc, bytes_to_allocate, "parse buffer"); + + if (mem == NULL) { + jc->error = JSON_E_OUT_OF_MEMORY; + return false; + } + + assert(new_capacity > 0); + memcpy(mem, jc->parse_buffer, bytes_to_copy); + + if (jc->parse_buffer != &jc->static_parse_buffer[0]) { + jc->free(jc->parse_buffer); + } + + jc->parse_buffer = (char*)mem; + jc->parse_buffer_capacity = new_capacity; + + return true; +} + +static int parse_buffer_reserve_for(JSON_parser jc, unsigned chars) +{ + while (jc->parse_buffer_count + chars + 1 > jc->parse_buffer_capacity) { + if (!parse_buffer_grow(jc)) { + assert(jc->error == JSON_E_OUT_OF_MEMORY); + return false; + } + } + + return true; +} + +#define parse_buffer_has_space_for(jc, count) \ + (jc->parse_buffer_count + (count) + 1 <= jc->parse_buffer_capacity) + +#define parse_buffer_push_back_char(jc, c)\ + do {\ + assert(parse_buffer_has_space_for(jc, 1)); \ + jc->parse_buffer[jc->parse_buffer_count++] = c;\ + jc->parse_buffer[jc->parse_buffer_count] = 0;\ + } while (0) + +#define assert_is_non_container_type(jc) \ + assert( \ + jc->type == JSON_T_NULL || \ + jc->type == JSON_T_FALSE || \ + jc->type == JSON_T_TRUE || \ + jc->type == JSON_T_FLOAT || \ + jc->type == JSON_T_INTEGER || \ + jc->type == JSON_T_STRING) + + +static int parse_parse_buffer(JSON_parser jc) +{ + if (jc->callback) { + JSON_value value, *arg = NULL; + + if (jc->type != JSON_T_NONE) { + assert_is_non_container_type(jc); + + switch(jc->type) { + case JSON_T_FLOAT: + arg = &value; + if (jc->handle_floats_manually) { + value.vu.str.value = jc->parse_buffer; + value.vu.str.length = jc->parse_buffer_count; + } else { + /* not checking with end pointer b/c there may be trailing ws */ + value.vu.float_value = strtod(jc->parse_buffer, NULL); + } + break; + case JSON_T_INTEGER: + arg = &value; + sscanf(jc->parse_buffer, JSON_PARSER_INTEGER_SSCANF_TOKEN, &value.vu.integer_value); + break; + case JSON_T_STRING: + arg = &value; + value.vu.str.value = jc->parse_buffer; + value.vu.str.length = jc->parse_buffer_count; + break; + } + + if (!(*jc->callback)(jc->ctx, jc->type, arg)) { + return false; + } + } + } + + parse_buffer_clear(jc); + + return true; +} + +#define IS_HIGH_SURROGATE(uc) (((uc) & 0xFC00) == 0xD800) +#define IS_LOW_SURROGATE(uc) (((uc) & 0xFC00) == 0xDC00) +#define DECODE_SURROGATE_PAIR(hi,lo) ((((hi) & 0x3FF) << 10) + ((lo) & 0x3FF) + 0x10000) +static const unsigned char utf8_lead_bits[4] = { 0x00, 0xC0, 0xE0, 0xF0 }; + +static int decode_unicode_char(JSON_parser jc) +{ + int i; + unsigned uc = 0; + char* p; + int trail_bytes; + + assert(jc->parse_buffer_count >= 6); + + p = &jc->parse_buffer[jc->parse_buffer_count - 4]; + + for (i = 12; i >= 0; i -= 4, ++p) { + unsigned x = *p; + + if (x >= 'a') { + x -= ('a' - 10); + } else if (x >= 'A') { + x -= ('A' - 10); + } else { + x &= ~0x30u; + } + + assert(x < 16); + + uc |= x << i; + } + + /* clear UTF-16 char from buffer */ + jc->parse_buffer_count -= 6; + jc->parse_buffer[jc->parse_buffer_count] = 0; + + if (uc == 0xffff || uc == 0xfffe) { + return false; + } + + /* attempt decoding ... 
*/ + if (jc->utf16_high_surrogate) { + if (IS_LOW_SURROGATE(uc)) { + uc = DECODE_SURROGATE_PAIR(jc->utf16_high_surrogate, uc); + trail_bytes = 3; + jc->utf16_high_surrogate = 0; + } else { + /* high surrogate without a following low surrogate */ + return false; + } + } else { + if (uc < 0x80) { + trail_bytes = 0; + } else if (uc < 0x800) { + trail_bytes = 1; + } else if (IS_HIGH_SURROGATE(uc)) { + /* save the high surrogate and wait for the low surrogate */ + jc->utf16_high_surrogate = (UTF16)uc; + return true; + } else if (IS_LOW_SURROGATE(uc)) { + /* low surrogate without a preceding high surrogate */ + return false; + } else { + trail_bytes = 2; + } + } + + jc->parse_buffer[jc->parse_buffer_count++] = (char) ((uc >> (trail_bytes * 6)) | utf8_lead_bits[trail_bytes]); + + for (i = trail_bytes * 6 - 6; i >= 0; i -= 6) { + jc->parse_buffer[jc->parse_buffer_count++] = (char) (((uc >> i) & 0x3F) | 0x80); + } + + jc->parse_buffer[jc->parse_buffer_count] = 0; + + return true; +} + +static int add_escaped_char_to_parse_buffer(JSON_parser jc, int next_char) +{ + assert(parse_buffer_has_space_for(jc, 1)); + + jc->escaped = 0; + /* remove the backslash */ + parse_buffer_pop_back_char(jc); + switch(next_char) { + case 'b': + parse_buffer_push_back_char(jc, '\b'); + break; + case 'f': + parse_buffer_push_back_char(jc, '\f'); + break; + case 'n': + parse_buffer_push_back_char(jc, '\n'); + break; + case 'r': + parse_buffer_push_back_char(jc, '\r'); + break; + case 't': + parse_buffer_push_back_char(jc, '\t'); + break; + case '"': + parse_buffer_push_back_char(jc, '"'); + break; + case '\\': + parse_buffer_push_back_char(jc, '\\'); + break; + case '/': + parse_buffer_push_back_char(jc, '/'); + break; + case 'u': + parse_buffer_push_back_char(jc, '\\'); + parse_buffer_push_back_char(jc, 'u'); + break; + default: + return false; + } + + return true; +} + +static int add_char_to_parse_buffer(JSON_parser jc, int next_char, int next_class) +{ + if (!parse_buffer_reserve_for(jc, 1)) { + assert(JSON_E_OUT_OF_MEMORY == jc->error); + return false; + } + + if (jc->escaped) { + if (!add_escaped_char_to_parse_buffer(jc, next_char)) { + jc->error = JSON_E_INVALID_ESCAPE_SEQUENCE; + return false; + } + } else if (!jc->comment) { + if ((jc->type != JSON_T_NONE) | !((next_class == C_SPACE) | (next_class == C_WHITE)) /* non-white-space */) { + parse_buffer_push_back_char(jc, (char)next_char); + } + } + + return true; +} + +#define assert_type_isnt_string_null_or_bool(jc) \ + assert(jc->type != JSON_T_FALSE); \ + assert(jc->type != JSON_T_TRUE); \ + assert(jc->type != JSON_T_NULL); \ + assert(jc->type != JSON_T_STRING) + + +int +JSON_parser_char(JSON_parser jc, int next_char) +{ +/* + After calling new_JSON_parser, call this function for each character (or + partial character) in your JSON text. It can accept UTF-8, UTF-16, or + UTF-32. It returns true if things are looking ok so far. If it rejects the + text, it returns false. +*/ + int next_class, next_state; + +/* + Store the current char for error handling +*/ + jc->current_char = next_char; + +/* + Determine the character's class. +*/ + if (next_char < 0) { + jc->error = JSON_E_INVALID_CHAR; + return false; + } + if (next_char >= 128) { + next_class = C_ETC; + } else { + next_class = ascii_class[next_char]; + if (next_class <= __) { + set_error(jc); + return false; + } + } + + if (!add_char_to_parse_buffer(jc, next_char, next_class)) { + return false; + } + +/* + Get the next state from the state transition table. 
+*/ + next_state = state_transition_table[jc->state][next_class]; + if (next_state >= 0) { +/* + Change the state. +*/ + jc->state = (signed char)next_state; + } else { +/* + Or perform one of the actions. +*/ + switch (next_state) { +/* Unicode character */ + case UC: + if(!decode_unicode_char(jc)) { + jc->error = JSON_E_INVALID_UNICODE_SEQUENCE; + return false; + } + /* check if we need to read a second UTF-16 char */ + if (jc->utf16_high_surrogate) { + jc->state = D1; + } else { + jc->state = ST; + } + break; +/* escaped char */ + case EX: + jc->escaped = 1; + jc->state = ES; + break; +/* integer detected by minus */ + case MX: + jc->type = JSON_T_INTEGER; + jc->state = MI; + break; +/* integer detected by zero */ + case ZX: + jc->type = JSON_T_INTEGER; + jc->state = ZE; + break; +/* integer detected by 1-9 */ + case IX: + jc->type = JSON_T_INTEGER; + jc->state = IT; + break; + +/* floating point number detected by exponent*/ + case DE: + assert_type_isnt_string_null_or_bool(jc); + jc->type = JSON_T_FLOAT; + jc->state = E1; + break; + +/* floating point number detected by fraction */ + case DF: + assert_type_isnt_string_null_or_bool(jc); + if (!jc->handle_floats_manually) { +/* + Some versions of strtod (which underlies sscanf) don't support converting + C-locale formated floating point values. +*/ + assert(jc->parse_buffer[jc->parse_buffer_count-1] == '.'); + jc->parse_buffer[jc->parse_buffer_count-1] = jc->decimal_point; + } + jc->type = JSON_T_FLOAT; + jc->state = FX; + break; +/* string begin " */ + case SB: + parse_buffer_clear(jc); + assert(jc->type == JSON_T_NONE); + jc->type = JSON_T_STRING; + jc->state = ST; + break; + +/* n */ + case NU: + assert(jc->type == JSON_T_NONE); + jc->type = JSON_T_NULL; + jc->state = N1; + break; +/* f */ + case FA: + assert(jc->type == JSON_T_NONE); + jc->type = JSON_T_FALSE; + jc->state = F1; + break; +/* t */ + case TR: + assert(jc->type == JSON_T_NONE); + jc->type = JSON_T_TRUE; + jc->state = T1; + break; + +/* closing comment */ + case CE: + jc->comment = 0; + assert(jc->parse_buffer_count == 0); + assert(jc->type == JSON_T_NONE); + jc->state = jc->before_comment_state; + break; + +/* opening comment */ + case CB: + if (!jc->allow_comments) { + return false; + } + parse_buffer_pop_back_char(jc); + if (!parse_parse_buffer(jc)) { + return false; + } + assert(jc->parse_buffer_count == 0); + assert(jc->type != JSON_T_STRING); + switch (jc->stack[jc->top]) { + case MODE_ARRAY: + case MODE_OBJECT: + switch(jc->state) { + case VA: + case AR: + jc->before_comment_state = jc->state; + break; + default: + jc->before_comment_state = OK; + break; + } + break; + default: + jc->before_comment_state = jc->state; + break; + } + jc->type = JSON_T_NONE; + jc->state = C1; + jc->comment = 1; + break; +/* empty } */ + case -9: + parse_buffer_clear(jc); + if (jc->callback && !(*jc->callback)(jc->ctx, JSON_T_OBJECT_END, NULL)) { + return false; + } + if (!pop(jc, MODE_KEY)) { + return false; + } + jc->state = OK; + break; + +/* } */ case -8: + parse_buffer_pop_back_char(jc); + if (!parse_parse_buffer(jc)) { + return false; + } + if (jc->callback && !(*jc->callback)(jc->ctx, JSON_T_OBJECT_END, NULL)) { + return false; + } + if (!pop(jc, MODE_OBJECT)) { + jc->error = JSON_E_UNBALANCED_COLLECTION; + return false; + } + jc->type = JSON_T_NONE; + jc->state = OK; + break; + +/* ] */ case -7: + parse_buffer_pop_back_char(jc); + if (!parse_parse_buffer(jc)) { + return false; + } + if (jc->callback && !(*jc->callback)(jc->ctx, JSON_T_ARRAY_END, NULL)) { + return false; + } + 
if (!pop(jc, MODE_ARRAY)) { + jc->error = JSON_E_UNBALANCED_COLLECTION; + return false; + } + + jc->type = JSON_T_NONE; + jc->state = OK; + break; + +/* { */ case -6: + parse_buffer_pop_back_char(jc); + if (jc->callback && !(*jc->callback)(jc->ctx, JSON_T_OBJECT_BEGIN, NULL)) { + return false; + } + if (!push(jc, MODE_KEY)) { + return false; + } + assert(jc->type == JSON_T_NONE); + jc->state = OB; + break; + +/* [ */ case -5: + parse_buffer_pop_back_char(jc); + if (jc->callback && !(*jc->callback)(jc->ctx, JSON_T_ARRAY_BEGIN, NULL)) { + return false; + } + if (!push(jc, MODE_ARRAY)) { + return false; + } + assert(jc->type == JSON_T_NONE); + jc->state = AR; + break; + +/* string end " */ case -4: + parse_buffer_pop_back_char(jc); + switch (jc->stack[jc->top]) { + case MODE_KEY: + assert(jc->type == JSON_T_STRING); + jc->type = JSON_T_NONE; + jc->state = CO; + + if (jc->callback) { + JSON_value value; + value.vu.str.value = jc->parse_buffer; + value.vu.str.length = jc->parse_buffer_count; + if (!(*jc->callback)(jc->ctx, JSON_T_KEY, &value)) { + return false; + } + } + parse_buffer_clear(jc); + break; + case MODE_ARRAY: + case MODE_OBJECT: + assert(jc->type == JSON_T_STRING); + if (!parse_parse_buffer(jc)) { + return false; + } + jc->type = JSON_T_NONE; + jc->state = OK; + break; + default: + return false; + } + break; + +/* , */ case -3: + parse_buffer_pop_back_char(jc); + if (!parse_parse_buffer(jc)) { + return false; + } + switch (jc->stack[jc->top]) { + case MODE_OBJECT: +/* + A comma causes a flip from object mode to key mode. +*/ + if (!pop(jc, MODE_OBJECT) || !push(jc, MODE_KEY)) { + return false; + } + assert(jc->type != JSON_T_STRING); + jc->type = JSON_T_NONE; + jc->state = KE; + break; + case MODE_ARRAY: + assert(jc->type != JSON_T_STRING); + jc->type = JSON_T_NONE; + jc->state = VA; + break; + default: + return false; + } + break; + +/* : */ case -2: +/* + A colon causes a flip from key mode to object mode. +*/ + parse_buffer_pop_back_char(jc); + if (!pop(jc, MODE_KEY) || !push(jc, MODE_OBJECT)) { + return false; + } + assert(jc->type == JSON_T_NONE); + jc->state = VA; + break; +/* + Bad action. 
+*/ + default: + set_error(jc); + return false; + } + } + return true; +} + +int +JSON_parser_done(JSON_parser jc) +{ + if ((jc->state == OK || jc->state == GO) && pop(jc, MODE_DONE)) + { + return true; + } + + jc->error = JSON_E_UNBALANCED_COLLECTION; + return false; +} + + +int JSON_parser_is_legal_white_space_string(const char* s) +{ + int c, char_class; + + if (s == NULL) { + return false; + } + + for (; *s; ++s) { + c = *s; + + if (c < 0 || c >= 128) { + return false; + } + + char_class = ascii_class[c]; + + if (char_class != C_SPACE && char_class != C_WHITE) { + return false; + } + } + + return true; +} + +int JSON_parser_get_last_error(JSON_parser jc) +{ + return jc->error; +} + + +void init_JSON_config(JSON_config* config) +{ + if (config) { + memset(config, 0, sizeof(*config)); + + config->depth = JSON_PARSER_STACK_SIZE - 1; + config->malloc = malloc; + config->free = free; + } +} + +/* end file parser/JSON_parser.c */ +/* begin file ./cson.c */ +#include +#include /* malloc()/free() */ +#include +#include + +#ifdef _MSC_VER +# if _MSC_VER >= 1400 /* Visual Studio 2005 and up */ +# pragma warning( push ) +# pragma warning(disable:4996) /* unsecure sscanf (but snscanf() isn't in c89) */ +# pragma warning(disable:4244) /* complaining about data loss due + to integer precision in the + sqlite3 utf decoding routines */ +# endif +#endif + +#if 1 +#include +#define MARKER if(1) printf("MARKER: %s:%d:%s():\t",__FILE__,__LINE__,__func__); if(1) printf +#else +static void noop_printf(char const * fmt, ...) {} +#define MARKER if(0) printf +#endif + +#if defined(__cplusplus) +extern "C" { +#endif + + + +/** + This type holds the "vtbl" for type-specific operations when + working with cson_value objects. + + All cson_values of a given logical type share a pointer to a single + library-internal instance of this class. +*/ +struct cson_value_api +{ + /** + The logical JavaScript/JSON type associated with + this object. + */ + const cson_type_id typeID; + /** + Must free any memory associated with self, + but not free self. If self is NULL then + this function must do nothing. + */ + void (*cleanup)( cson_value * self ); + /** + POSSIBLE TODOs: + + // Deep copy. + int (*clone)( cson_value const * self, cson_value ** tgt ); + + // Using JS semantics for true/value + char (*bool_value)( cson_value const * self ); + + // memcmp() return value semantics + int (*compare)( cson_value const * self, cson_value const * other ); + */ +}; + +typedef struct cson_value_api cson_value_api; + +/** + Empty-initialized cson_value_api object. +*/ +#define cson_value_api_empty_m { \ + CSON_TYPE_UNDEF/*typeID*/, \ + NULL/*cleanup*/\ + } +/** + Empty-initialized cson_value_api object. +*/ +/*static const cson_value_api cson_value_api_empty = cson_value_api_empty_m;*/ + + +typedef unsigned int cson_counter_t; +struct cson_value +{ + /** The "vtbl" of type-specific operations. All instances + of a given logical value type share a single api instance. + + Results are undefined if this value is NULL. + */ + cson_value_api const * api; + + /** The raw value. Its interpretation depends on the value of the + api member. Some value types require dynamically-allocated + memory, so one must always call cson_value_free() to destroy a + value when it is no longer needed. For stack-allocated values + (which client could SHOULD NOT USE unless they are intimately + familiar with the memory management rules and don't mind an + occasional leak or crash), use cson_value_clean() instead of + cson_value_free(). 
+ */ + void * value; + + /** + We use this to allow us to store cson_value instances in + multiple containers or multiple times within a single container + (provided no cycles are introduced). + + Notes about the rc implementation: + + - The refcount is for the cson_value instance itself, not its + value pointer. + + - Instances start out with a refcount of 0 (not 1). Adding them + to a container will increase the refcount. Cleaning up the container + will decrement the count. + + - cson_value_free() decrements the refcount (if it is not already + 0) and cleans/frees the value only when the refcount is 0. + + - Some places in the internals add an "extra" reference to + objects to avoid a premature deletion. Don't try this at home. + */ + cson_counter_t refcount; +}; + + +/** + Empty-initialized cson_value object. +*/ +const cson_parse_opt cson_parse_opt_empty = cson_parse_opt_empty_m; +const cson_output_opt cson_output_opt_empty = cson_output_opt_empty_m; +const cson_object_iterator cson_object_iterator_empty = cson_object_iterator_empty_m; +const cson_buffer cson_buffer_empty = cson_buffer_empty_m; +const cson_parse_info cson_parse_info_empty = cson_parse_info_empty_m; + +static void cson_value_destroy_zero_it( cson_value * self ); +static void cson_value_destroy_object( cson_value * self ); +/** + If self is-a array then this function destroys its contents, + else this function does nothing. +*/ +static void cson_value_destroy_array( cson_value * self ); + +static const cson_value_api cson_value_api_null = { CSON_TYPE_NULL, cson_value_destroy_zero_it }; +static const cson_value_api cson_value_api_undef = { CSON_TYPE_UNDEF, cson_value_destroy_zero_it }; +static const cson_value_api cson_value_api_bool = { CSON_TYPE_BOOL, cson_value_destroy_zero_it }; +static const cson_value_api cson_value_api_integer = { CSON_TYPE_INTEGER, cson_value_destroy_zero_it }; +static const cson_value_api cson_value_api_double = { CSON_TYPE_DOUBLE, cson_value_destroy_zero_it }; +static const cson_value_api cson_value_api_string = { CSON_TYPE_STRING, cson_value_destroy_zero_it }; +static const cson_value_api cson_value_api_array = { CSON_TYPE_ARRAY, cson_value_destroy_array }; +static const cson_value_api cson_value_api_object = { CSON_TYPE_OBJECT, cson_value_destroy_object }; + +static const cson_value cson_value_undef = { &cson_value_api_undef, NULL, 0 }; +static const cson_value cson_value_integer_empty = { &cson_value_api_integer, NULL, 0 }; +static const cson_value cson_value_double_empty = { &cson_value_api_double, NULL, 0 }; +static const cson_value cson_value_string_empty = { &cson_value_api_string, NULL, 0 }; +static const cson_value cson_value_array_empty = { &cson_value_api_array, NULL, 0 }; +static const cson_value cson_value_object_empty = { &cson_value_api_object, NULL, 0 }; + +/** + Strings are allocated as an instances of this class with N+1 + trailing bytes, where N is the length of the string being + allocated. To convert a cson_string to c-string we simply increment + the cson_string pointer. To do the opposite we use (cstr - + sizeof(cson_string)). Zero-length strings are a special case + handled by a couple of the cson_string functions. +*/ +struct cson_string +{ + unsigned int length; +}; +#define cson_string_empty_m {0/*length*/} +static const cson_string cson_string_empty = cson_string_empty_m; + + +/** + Assumes V is a (cson_value*) ans V->value is a (T*). Returns + V->value cast to a (T*). 
+*/ +#define CSON_CAST(T,V) ((T*)((V)->value)) +/** + Assumes V is a pointer to memory which is allocated as part of a + cson_value instance (the bytes immediately after that part). + Returns a pointer a a cson_value by subtracting sizeof(cson_value) + from that address and casting it to a (cson_value*) +*/ +#define CSON_VCAST(V) ((cson_value *)(((unsigned char *)(V))-sizeof(cson_value))) + +/** + CSON_INT(V) assumes that V is a (cson_value*) of type + CSON_TYPE_INTEGER. This macro returns a (cson_int_t*) representing + its value (how that is stored depends on whether we are running in + 32- or 64-bit mode). + */ +#if CSON_VOID_PTR_IS_BIG +# define CSON_INT(V) ((cson_int_t*)(&((V)->value))) +#else +# define CSON_INT(V) ((cson_int_t*)(V)->value) +#endif + +#define CSON_DBL(V) CSON_CAST(cson_double_t,(V)) +#define CSON_STR(V) CSON_CAST(cson_string,(V)) +#define CSON_OBJ(V) CSON_CAST(cson_object,(V)) +#define CSON_ARRAY(V) CSON_CAST(cson_array,(V)) + +/** + Holds special shared "constant" (though they are non-const) + values. +*/ +static struct CSON_EMPTY_HOLDER_ +{ + char trueValue; + cson_string stringValue; +} CSON_EMPTY_HOLDER = { + 1/*trueValue*/, + cson_string_empty_m +}; + +/** + Indexes into the CSON_SPECIAL_VALUES array. + + If this enum changes in any way, + makes damned sure that CSON_SPECIAL_VALUES is updated + to match!!! +*/ +enum CSON_INTERNAL_VALUES { + + CSON_VAL_UNDEF = 0, + CSON_VAL_NULL = 1, + CSON_VAL_TRUE = 2, + CSON_VAL_FALSE = 3, + CSON_VAL_INT_0 = 4, + CSON_VAL_DBL_0 = 5, + CSON_VAL_STR_EMPTY = 6, + CSON_INTERNAL_VALUES_LENGTH +}; + +/** + Some "special" shared cson_value instances. + + These values MUST be initialized in the order specified + by the CSON_INTERNAL_VALUES enum. + + Note that they are not const because they are used as + shared-allocation objects in non-const contexts. However, the + public API provides no way to modifying them, and clients who + modify values directly are subject to The Wrath of Undefined + Behaviour. +*/ +static cson_value CSON_SPECIAL_VALUES[] = { +{ &cson_value_api_undef, NULL, 0 }, /* UNDEF */ +{ &cson_value_api_null, NULL, 0 }, /* NULL */ +{ &cson_value_api_bool, &CSON_EMPTY_HOLDER.trueValue, 0 }, /* TRUE */ +{ &cson_value_api_bool, NULL, 0 }, /* FALSE */ +{ &cson_value_api_integer, NULL, 0 }, /* INT_0 */ +{ &cson_value_api_double, NULL, 0 }, /* DBL_0 */ +{ &cson_value_api_string, &CSON_EMPTY_HOLDER.stringValue, 0 }, /* STR_EMPTY */ +{ NULL, NULL, 0 } +}; + + +/** + Returns non-0 (true) if m is one of our special + "built-in" values, e.g. from CSON_SPECIAL_VALUES and some + "empty" values. + + If this returns true, m MUST NOT be free()d! + */ +static char cson_value_is_builtin( void const * m ) +{ + if((m >= (void const *)&CSON_EMPTY_HOLDER) + && ( m < (void const *)(&CSON_EMPTY_HOLDER+1))) + return 1; + else return + ((m >= (void const *)&CSON_SPECIAL_VALUES[0]) + && ( m < (void const *)&CSON_SPECIAL_VALUES[CSON_INTERNAL_VALUES_LENGTH]) ) + ? 
1 + : 0; +} + +char const * cson_rc_string(int rc) +{ + if(0 == rc) return "OK"; +#define CHECK(N) else if(cson_rc.N == rc ) return #N + CHECK(OK); + CHECK(ArgError); + CHECK(RangeError); + CHECK(TypeError); + CHECK(IOError); + CHECK(AllocError); + CHECK(NYIError); + CHECK(InternalError); + CHECK(UnsupportedError); + CHECK(NotFoundError); + CHECK(UnknownError); + CHECK(Parse_INVALID_CHAR); + CHECK(Parse_INVALID_KEYWORD); + CHECK(Parse_INVALID_ESCAPE_SEQUENCE); + CHECK(Parse_INVALID_UNICODE_SEQUENCE); + CHECK(Parse_INVALID_NUMBER); + CHECK(Parse_NESTING_DEPTH_REACHED); + CHECK(Parse_UNBALANCED_COLLECTION); + CHECK(Parse_EXPECTED_KEY); + CHECK(Parse_EXPECTED_COLON); + else return "UnknownError"; +#undef CHECK +} + +/** + If CSON_LOG_ALLOC is true then the cson_malloc/realloc/free() routines + will log a message to stderr. +*/ +#define CSON_LOG_ALLOC 0 + + +/** + CSON_FOSSIL_MODE is only for use in the Fossil + source tree, so that we can plug in to its allocators. + We can't do this by, e.g., defining macros for the + malloc/free funcs because fossil's lack of header files + means we would have to #include "main.c" here to + get the declarations. + */ +#if defined(CSON_FOSSIL_MODE) +extern void *fossil_malloc(size_t n); +extern void fossil_free(void *p); +extern void *fossil_realloc(void *p, size_t n); +# define CSON_MALLOC_IMPL fossil_malloc +# define CSON_FREE_IMPL fossil_free +# define CSON_REALLOC_IMPL fossil_realloc +#endif + +#if !defined CSON_MALLOC_IMPL +# define CSON_MALLOC_IMPL malloc +#endif +#if !defined CSON_FREE_IMPL +# define CSON_FREE_IMPL free +#endif +#if !defined CSON_REALLOC_IMPL +# define CSON_REALLOC_IMPL realloc +#endif + +/** + A test/debug macro for simulating an OOM after the given number of + bytes have been allocated. +*/ +#define CSON_SIMULATE_OOM 0 +#if CSON_SIMULATE_OOM +static unsigned int cson_totalAlloced = 0; +#endif + +/** Simple proxy for malloc(). descr is a description of the allocation. */ +static void * cson_malloc( size_t n, char const * descr ) +{ +#if CSON_LOG_ALLOC + fprintf(stderr, "Allocating %u bytes [%s].\n", (unsigned int)n, descr); +#endif +#if CSON_SIMULATE_OOM + cson_totalAlloced += n; + if( cson_totalAlloced > CSON_SIMULATE_OOM ) + { + return NULL; + } +#endif + return CSON_MALLOC_IMPL(n); +} + +/** Simple proxy for free(). descr is a description of the memory being freed. */ +static void cson_free( void * p, char const * descr ) +{ +#if CSON_LOG_ALLOC + fprintf(stderr, "Freeing @%p [%s].\n", p, descr); +#endif + if( !cson_value_is_builtin(p) ) + { + CSON_FREE_IMPL( p ); + } +} +/** Simple proxy for realloc(). descr is a description of the (re)allocation. */ +static void * cson_realloc( void * hint, size_t n, char const * descr ) +{ +#if CSON_LOG_ALLOC + fprintf(stderr, "%sllocating %u bytes [%s].\n", + hint ? "Rea" : "A", + (unsigned int)n, descr); +#endif +#if CSON_SIMULATE_OOM + cson_totalAlloced += n; + if( cson_totalAlloced > CSON_SIMULATE_OOM ) + { + return NULL; + } +#endif + if( 0==n ) + { + cson_free(hint, descr); + return NULL; + } + else + { + return CSON_REALLOC_IMPL( hint, n ); + } +} + + +#undef CSON_LOG_ALLOC +#undef CSON_SIMULATE_OOM + + + +/** + CLIENTS CODE SHOULD NEVER USE THIS because it opens up doors to + memory leaks if it is not used in very controlled circumstances. + Users must be very aware of how the underlying memory management + works. + + Frees any resources owned by val, but does not free val itself + (which may be stack-allocated). If !val or val->api or + val->api->cleanup are NULL then this is a no-op. 
+ + If v is a container type (object or array) its children are also + cleaned up, recursively. + + After calling this, val will have the special "undefined" type. +*/ +static void cson_value_clean( cson_value * val ); + +/** + Increments cv's reference count by 1. As a special case, values + for which cson_value_is_builtin() returns true are not + modified. assert()s if (NULL==cv). +*/ +static void cson_refcount_incr( cson_value * cv ) +{ + assert( NULL != cv ); + if( cson_value_is_builtin( cv ) ) + { /* do nothing: we do not want to modify the shared + instances. + */ + return; + } + else + { + ++cv->refcount; + } +} + +#if 0 +int cson_value_refcount_set( cson_value * cv, unsigned short rc ) +{ + if( NULL == cv ) return cson_rc.ArgError; + else + { + cv->refcount = rc; + return 0; + } +} +#endif + +int cson_value_add_reference( cson_value * cv ) +{ + if( NULL == cv ) return cson_rc.ArgError; + else if( (cv->refcount+1) < cv->refcount ) + { + return cson_rc.RangeError; + } + else + { + cson_refcount_incr( cv ); + return 0; + } +} + +/** + If cv is NULL or cson_value_is_builtin(cv) returns true then this + function does nothing and returns 0, otherwise... If + cv->refcount is 0 or 1 then cson_value_clean(cv) is called, cv is + freed, and 0 is returned. If cv->refcount is any other value then + it is decremented and the new value is returned. +*/ +static cson_counter_t cson_refcount_decr( cson_value * cv ) +{ + if( (NULL == cv) || cson_value_is_builtin(cv) ) return 0; + else if( (0 == cv->refcount) || (0 == --cv->refcount) ) + { + cson_value_clean(cv); + cson_free(cv,"cson_value::refcount=0"); + return 0; + } + else return cv->refcount; +} + +unsigned int cson_string_length_bytes( cson_string const * str ) +{ + return str ? str->length : 0; +} + + +/** + Fetches v's string value as a non-const string. + + cson_strings are intended to be immutable, but this form provides + access to the immutable bits, which are v->length bytes long. A + length-0 string is returned as NULL from here, as opposed to + "". (This is a side-effect of the string allocation mechanism.) + Returns NULL if !v or if v is the internal empty-string singleton. +*/ +static char * cson_string_str(cson_string *v) +{ + /* + See http://groups.google.com/group/comp.lang.c.moderated/browse_thread/thread/2e0c0df5e8a0cd6a + */ +#if 1 + if( !v || (&CSON_EMPTY_HOLDER.stringValue == v) ) return NULL; + else return (char *)((unsigned char *)( v+1 )); +#else + static char empty[2] = {0,0}; + return ( NULL == v ) + ? NULL + : (v->length + ? (char *) (((unsigned char *)v) + sizeof(cson_string)) + : empty) + ; +#endif +} + +/** + Fetches v's string value as a const string. +*/ +char const * cson_string_cstr(cson_string const *v) +{ + /* + See http://groups.google.com/group/comp.lang.c.moderated/browse_thread/thread/2e0c0df5e8a0cd6a + */ +#if 1 + if( ! v ) return NULL; + else if( v == &CSON_EMPTY_HOLDER.stringValue ) return ""; + else { + assert((0 < v->length) && "How do we have a non-singleton empty string?"); + return (char const *)((unsigned char const *)(v+1)); + } +#else + return (NULL == v) + ? NULL + : (v->length + ? (char const *) ((unsigned char const *)(v+1)) + : ""); +#endif +} + + +#if 0 +/** + Just like strndup(3), in that neither are C89/C99-standard and both + are documented in detail in strndup(3). +*/ +static char * cson_strdup( char const * src, size_t n ) +{ + char * rc = (char *)cson_malloc(n+1, "cson_strdup"); + if( ! 
rc ) return NULL; + memset( rc, 0, n+1 ); + rc[n] = 0; + return strncpy( rc, src, n ); +} +#endif + +int cson_string_cmp_cstr_n( cson_string const * str, char const * other, unsigned int otherLen ) +{ + if( ! other && !str ) return 0; + else if( other && !str ) return 1; + else if( str && !other ) return -1; + else if( !otherLen ) return str->length ? 1 : 0; + else if( !str->length ) return otherLen ? -1 : 0; + else + { + unsigned const int max = (otherLen > str->length) ? otherLen : str->length; + int const rc = strncmp( cson_string_cstr(str), other, max ); + return ( (0 == rc) && (otherLen != str->length) ) + ? (str->length < otherLen) ? -1 : 1 + : rc; + } +} + +int cson_string_cmp_cstr( cson_string const * lhs, char const * rhs ) +{ + return cson_string_cmp_cstr_n( lhs, rhs, (rhs&&*rhs) ? strlen(rhs) : 0 ); +} +int cson_string_cmp( cson_string const * lhs, cson_string const * rhs ) +{ + return cson_string_cmp_cstr_n( lhs, cson_string_cstr(rhs), rhs ? rhs->length : 0 ); +} + + +/** + If self is not NULL, *self is overwritten to have the undefined + type. self is not cleaned up or freed. +*/ +void cson_value_destroy_zero_it( cson_value * self ) +{ + if( self ) + { + *self = cson_value_undef; + } +} + +/** + A key/value pair collection. + + Each of these objects owns its key/value pointers, and they + are cleaned up by cson_kvp_clean(). +*/ +struct cson_kvp +{ + cson_value * key; + cson_value * value; +}; +#define cson_kvp_empty_m {NULL,NULL} +static const cson_kvp cson_kvp_empty = cson_kvp_empty_m; + +/** @def CSON_OBJECT_PROPS_SORT + + Don't use this - it has not been updated to account for internal + changes in cson_object. + + If CSON_OBJECT_PROPS_SORT is set to a true value then + qsort() and bsearch() are used to sort (upon insertion) + and search cson_object::kvp property lists. This costs us + a re-sort on each insertion but searching is O(log n) + average/worst case (and O(1) best-case). + + i'm not yet convinced that the overhead of the qsort() justifies + the potentially decreased search times - it has not been + measured. Object property lists tend to be relatively short in + JSON, and a linear search which uses the cson_string::length + property as a quick check is quite fast when one compares it with + the sort overhead required by the bsearch() approach. +*/ +#define CSON_OBJECT_PROPS_SORT 0 + +/** @def CSON_OBJECT_PROPS_SORT_USE_LENGTH + + Don't use this - i'm not sure that it works how i'd like. + + If CSON_OBJECT_PROPS_SORT_USE_LENGTH is true then + we use string lengths as quick checks when sorting + property keys. This leads to a non-intuitive sorting + order but "should" be faster. + + This is ignored if CSON_OBJECT_PROPS_SORT is false. + +*/ +#define CSON_OBJECT_PROPS_SORT_USE_LENGTH 0 + +#if CSON_OBJECT_PROPS_SORT + +/** + cson_kvp comparator for use with qsort(). ALMOST compares with + strcmp() semantics, but it uses the strings' lengths as a quicker + approach. This might give non-intuitive results, but it's faster. 
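+
+   For example (a hypothetical illustration, assuming
+   CSON_OBJECT_PROPS_SORT_USE_LENGTH is enabled): the key "zz" sorts
+   before "aaa" because its length (2) is smaller, even though plain
+   strcmp() would order "aaa" first.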
+ */ +static int cson_kvp_cmp( void const * lhs, void const * rhs ) +{ + cson_kvp const * lk = *((cson_kvp const * const*)lhs); + cson_kvp const * rk = *((cson_kvp const * const*)rhs); + cson_string const * l = cson_string_value(lk->key); + cson_string const * r = cson_string_value(rk->key); +#if CSON_OBJECT_PROPS_SORT_USE_LENGTH + if( l->length < r->length ) return -1; + else if( l->length > r->length ) return 1; + else return strcmp( cson_string_cstr( l ), cson_string_cstr( r ) ); +#else + return strcmp( cson_string_cstr( l ), + cson_string_cstr( r ) ); +#endif /*CSON_OBJECT_PROPS_SORT_USE_LENGTH*/ +} +#endif /*CSON_OBJECT_PROPS_SORT*/ + + +#if CSON_OBJECT_PROPS_SORT +#error "Need to rework this for cson_string-to-cson_value refactoring" +/** + A bsearch() comparison function which requires that lhs be a (char + const *) and rhs be-a (cson_kvp const * const *). It compares lhs + to rhs->key's value, using strcmp() semantics. + */ +static int cson_kvp_cmp_vs_cstr( void const * lhs, void const * rhs ) +{ + char const * lk = (char const *)lhs; + cson_kvp const * rk = + *((cson_kvp const * const*)rhs) + ; +#if CSON_OBJECT_PROPS_SORT_USE_LENGTH + unsigned int llen = strlen(lk); + if( llen < rk->key->length ) return -1; + else if( llen > rk->key->length ) return 1; + else return strcmp( lk, cson_string_cstr( rk->key ) ); +#else + return strcmp( lk, cson_string_cstr( rk->key ) ); +#endif /*CSON_OBJECT_PROPS_SORT_USE_LENGTH*/ +} +#endif /*CSON_OBJECT_PROPS_SORT*/ + + +struct cson_kvp_list +{ + cson_kvp ** list; + unsigned int count; + unsigned int alloced; +}; +typedef struct cson_kvp_list cson_kvp_list; +#define cson_kvp_list_empty_m {NULL/*list*/,0/*count*/,0/*alloced*/} +/*static const cson_kvp_list cson_kvp_list_empty = cson_kvp_list_empty_m;*/ + +struct cson_object +{ + cson_kvp_list kvp; +}; +/*typedef struct cson_object cson_object;*/ +#define cson_object_empty_m { cson_kvp_list_empty_m/*kvp*/ } +static const cson_object cson_object_empty = cson_object_empty_m; + +struct cson_value_list +{ + cson_value ** list; + unsigned int count; + unsigned int alloced; +}; +typedef struct cson_value_list cson_value_list; +#define cson_value_list_empty_m {NULL/*list*/,0/*count*/,0/*alloced*/} +static const cson_value_list cson_value_list_empty = cson_value_list_empty_m; + +struct cson_array +{ + cson_value_list list; +}; +/*typedef struct cson_array cson_array;*/ +#define cson_array_empty_m { cson_value_list_empty_m/*list*/ } +static const cson_array cson_array_empty = cson_array_empty_m; + + +struct cson_parser +{ + JSON_parser p; + cson_value * root; + cson_value * node; + cson_array stack; + cson_string * ckey; + int errNo; + unsigned int totalKeyCount; + unsigned int totalValueCount; +}; +typedef struct cson_parser cson_parser; +static const cson_parser cson_parser_empty = { +NULL/*p*/, +NULL/*root*/, +NULL/*node*/, +cson_array_empty_m/*stack*/, +NULL/*ckey*/, +0/*errNo*/, +0/*totalKeyCount*/, +0/*totalValueCount*/ +}; + +#if 1 +/* The following funcs are declared in generated code (cson_lists.h), + but we need early access to their decls for the Amalgamation build. 
+*/ +static unsigned int cson_value_list_reserve( cson_value_list * self, unsigned int n ); +static unsigned int cson_kvp_list_reserve( cson_kvp_list * self, unsigned int n ); +static int cson_kvp_list_append( cson_kvp_list * self, cson_kvp * cp ); +static void cson_kvp_list_clean( cson_kvp_list * self, + void (*cleaner)(cson_kvp * obj) ); +#if 0 +static int cson_value_list_append( cson_value_list * self, cson_value * cp ); +static void cson_value_list_clean( cson_value_list * self, void (*cleaner)(cson_value * obj)); +static int cson_kvp_list_visit( cson_kvp_list * self, + int (*visitor)(cson_kvp * obj, void * visitorState ), + void * visitorState ); +static int cson_value_list_visit( cson_value_list * self, + int (*visitor)(cson_value * obj, void * visitorState ), + void * visitorState ); +#endif +#endif + +#if 0 +# define LIST_T cson_value_list +# define VALUE_T cson_value * +# define VALUE_T_IS_PTR 1 +# define LIST_T cson_kvp_list +# define VALUE_T cson_kvp * +# define VALUE_T_IS_PTR 1 +#else +#endif + +/** + Allocates a new value of the specified type. Ownership is + transfered to the caller, who must eventually free it by passing it + to cson_value_free() or transfering ownership to a container. + + extra is only valid for type CSON_TYPE_STRING, and must be the length + of the string to allocate + 1 byte (for the NUL). + + The returned value->api member will be set appropriately and + val->value will be set to point to the memory allocated to hold the + native value type. Use the internal CSON_CAST() family of macros to + convert the cson_values to their corresponding native + representation. + + Returns NULL on allocation error. + + @see cson_value_new_array() + @see cson_value_new_object() + @see cson_value_new_string() + @see cson_value_new_integer() + @see cson_value_new_double() + @see cson_value_new_bool() + @see cson_value_free() +*/ +static cson_value * cson_value_new(cson_type_id t, size_t extra) +{ + static const size_t vsz = sizeof(cson_value); + const size_t sz = vsz + extra; + size_t tx = 0; + cson_value def = cson_value_undef; + cson_value * v = NULL; + char const * reason = "cson_value_new"; + switch(t) + { + case CSON_TYPE_ARRAY: + assert( 0 == extra ); + def = cson_value_array_empty; + tx = sizeof(cson_array); + reason = "cson_value:array"; + break; + case CSON_TYPE_DOUBLE: + assert( 0 == extra ); + def = cson_value_double_empty; + tx = sizeof(cson_double_t); + reason = "cson_value:double"; + break; + case CSON_TYPE_INTEGER: + assert( 0 == extra ); + def = cson_value_integer_empty; +#if !CSON_VOID_PTR_IS_BIG + tx = sizeof(cson_int_t); +#endif + reason = "cson_value:int"; + break; + case CSON_TYPE_STRING: + assert( 0 != extra ); + def = cson_value_string_empty; + tx = sizeof(cson_string); + reason = "cson_value:string"; + break; + case CSON_TYPE_OBJECT: + assert( 0 == extra ); + def = cson_value_object_empty; + tx = sizeof(cson_object); + reason = "cson_value:object"; + break; + default: + assert(0 && "Unhandled type in cson_value_new()!"); + return NULL; + } + assert( def.api->typeID != CSON_TYPE_UNDEF ); + v = (cson_value *)cson_malloc(sz+tx, reason); + if( v ) { + *v = def; + if(tx || extra){ + memset(v+1, 0, tx + extra); + v->value = (void *)(v+1); + } + } + return v; +} + +void cson_value_free(cson_value *v) +{ + cson_refcount_decr( v ); +} + +#if 0 /* we might actually want this later on. */ +/** Returns true if v is not NULL and has the given type ID. 
*/ +static char cson_value_is_a( cson_value const * v, cson_type_id is ) +{ + return (v && v->api && (v->api->typeID == is)) ? 1 : 0; +} +#endif + +cson_type_id cson_value_type_id( cson_value const * v ) +{ + return (v && v->api) ? v->api->typeID : CSON_TYPE_UNDEF; +} + +char cson_value_is_undef( cson_value const * v ) +{ + return ( !v || !v->api || (v->api==&cson_value_api_undef)) + ? 1 : 0; +} +#define ISA(T,TID) char cson_value_is_##T( cson_value const * v ) { \ + /*return (v && v->api) ? cson_value_is_a(v,CSON_TYPE_##TID) : 0;*/ \ + return (v && (v->api == &cson_value_api_##T)) ? 1 : 0; \ + } extern char bogusPlaceHolderForEmacsIndention##TID +ISA(null,NULL); +ISA(bool,BOOL); +ISA(integer,INTEGER); +ISA(double,DOUBLE); +ISA(string,STRING); +ISA(array,ARRAY); +ISA(object,OBJECT); +#undef ISA +char cson_value_is_number( cson_value const * v ) +{ + return cson_value_is_integer(v) || cson_value_is_double(v); +} + + +void cson_value_clean( cson_value * val ) +{ + if( val && val->api && val->api->cleanup ) + { + if( ! cson_value_is_builtin( val ) ) + { + cson_counter_t const rc = val->refcount; + val->api->cleanup(val); + *val = cson_value_undef; + val->refcount = rc; + } + } +} + +static cson_value * cson_value_array_alloc() +{ + cson_value * v = cson_value_new(CSON_TYPE_ARRAY,0); + if( NULL != v ) + { + cson_array * ar = CSON_ARRAY(v); + assert(NULL != ar); + *ar = cson_array_empty; + } + return v; +} + +static cson_value * cson_value_object_alloc() +{ + cson_value * v = cson_value_new(CSON_TYPE_OBJECT,0); + if( NULL != v ) + { + cson_object * obj = CSON_OBJ(v); + assert(NULL != obj); + *obj = cson_object_empty; + } + return v; +} + +cson_value * cson_value_new_object() +{ + return cson_value_object_alloc(); +} + +cson_object * cson_new_object() +{ + + return cson_value_get_object( cson_value_new_object() ); +} + +cson_value * cson_value_new_array() +{ + return cson_value_array_alloc(); +} + + +cson_array * cson_new_array() +{ + return cson_value_get_array( cson_value_new_array() ); +} + +/** + Frees kvp->key and kvp->value and sets them to NULL, but does not free + kvp. If !kvp then this is a no-op. +*/ +static void cson_kvp_clean( cson_kvp * kvp ) +{ + if( kvp ) + { + if(kvp->key) + { + cson_value_free(kvp->key); + kvp->key = NULL; + } + if(kvp->value) + { + cson_value_free( kvp->value ); + kvp->value = NULL; + } + } +} + +cson_string * cson_kvp_key( cson_kvp const * kvp ) +{ + return kvp ? cson_value_get_string(kvp->key) : NULL; +} +cson_value * cson_kvp_value( cson_kvp const * kvp ) +{ + return kvp ? kvp->value : NULL; +} + + +/** + Calls cson_kvp_clean(kvp) and then frees kvp. +*/ +static void cson_kvp_free( cson_kvp * kvp ) +{ + if( kvp ) + { + cson_kvp_clean(kvp); + cson_free(kvp,"cson_kvp"); + } +} + + +/** + cson_value_api::destroy_value() impl for Object + values. Cleans up self-owned memory and overwrites + self to have the undefined value, but does not + free self. +*/ +static void cson_value_destroy_object( cson_value * self ) +{ + if(self && self->value) { + cson_object * obj = (cson_object *)self->value; + assert( self->value == obj ); + cson_kvp_list_clean( &obj->kvp, cson_kvp_free ); + *self = cson_value_undef; + } +} + +/** + Cleans up the contents of ar->list, but does not free ar. + + After calling this, ar will have a length of 0. + + If properlyCleanValues is 1 then cson_value_free() is called on + each non-NULL item, otherwise the outer list is destroyed but the + individual items are assumed to be owned by someone else and are + not freed. 
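+
+   (For context, a minimal sketch of how a value typically comes to be
+   owned by an array, assuming the usual create-then-append pattern:
+
+     cson_value * v = cson_value_new_integer(7);   (refcount 0)
+     cson_array_append( ar, v );                   (refcount 1: the array owns it)
+
+   With properlyCleanValues==1 this routine releases that reference,
+   freeing v unless some other code also holds a reference.)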
+*/ +static void cson_array_clean( cson_array * ar, char properlyCleanValues ) +{ + if( ar ) + { + unsigned int i = 0; + cson_value * val = NULL; + for( ; i < ar->list.count; ++i ) + { + val = ar->list.list[i]; + if(val) + { + ar->list.list[i] = NULL; + if( properlyCleanValues ) + { + cson_value_free( val ); + } + } + } + cson_value_list_reserve(&ar->list,0); + ar->list = cson_value_list_empty + /* Pedantic note: reserve(0) already clears the list-specific + fields, but we do this just in case we ever add new fields + to cson_value_list which are not used in the reserve() impl. + */ + ; + } +} + +/** + cson_value_api::destroy_value() impl for Array + values. Cleans up self-owned memory and overwrites + self to have the undefined value, but does not + free self. +*/ +static void cson_value_destroy_array( cson_value * self ) +{ + cson_array * ar = cson_value_get_array(self); + if(ar) { + assert( self->value == ar ); + cson_array_clean( ar, 1 ); + *self = cson_value_undef; + } +} + +int cson_buffer_fill_from( cson_buffer * dest, cson_data_source_f src, void * state ) +{ + int rc; + enum { BufSize = 1024 * 4 }; + char rbuf[BufSize]; + size_t total = 0; + unsigned int rlen = 0; + if( ! dest || ! src ) return cson_rc.ArgError; + dest->used = 0; + while(1) + { + rlen = BufSize; + rc = src( state, rbuf, &rlen ); + if( rc ) break; + total += rlen; + if( dest->capacity < (total+1) ) + { + rc = cson_buffer_reserve( dest, total + 1); + if( 0 != rc ) break; + } + memcpy( dest->mem + dest->used, rbuf, rlen ); + dest->used += rlen; + if( rlen < BufSize ) break; + } + if( !rc && dest->used ) + { + assert( dest->used < dest->capacity ); + dest->mem[dest->used] = 0; + } + return rc; +} + +int cson_data_source_FILE( void * state, void * dest, unsigned int * n ) +{ + FILE * f = (FILE*) state; + if( ! state || ! n || !dest ) return cson_rc.ArgError; + else if( !*n ) return cson_rc.RangeError; + *n = (unsigned int)fread( dest, 1, *n, f ); + if( !*n ) + { + return feof(f) ? 0 : cson_rc.IOError; + } + return 0; +} + +int cson_parse_FILE( cson_value ** tgt, FILE * src, + cson_parse_opt const * opt, cson_parse_info * err ) +{ + return cson_parse( tgt, cson_data_source_FILE, src, opt, err ); +} + + +int cson_value_fetch_bool( cson_value const * val, char * v ) +{ + /** + FIXME: move the to-bool operation into cson_value_api, like we + do in the C++ API. + */ + if( ! val || !val->api ) return cson_rc.ArgError; + else + { + int rc = 0; + char b = 0; + switch( val->api->typeID ) + { + case CSON_TYPE_ARRAY: + case CSON_TYPE_OBJECT: + b = 1; + break; + case CSON_TYPE_STRING: { + char const * str = cson_string_cstr(cson_value_get_string(val)); + b = (str && *str) ? 1 : 0; + break; + } + case CSON_TYPE_UNDEF: + case CSON_TYPE_NULL: + break; + case CSON_TYPE_BOOL: + b = (NULL==val->value) ? 0 : 1; + break; + case CSON_TYPE_INTEGER: { + cson_int_t i = 0; + cson_value_fetch_integer( val, &i ); + b = i ? 1 : 0; + break; + } + case CSON_TYPE_DOUBLE: { + cson_double_t d = 0.0; + cson_value_fetch_double( val, &d ); + b = (0.0==d) ? 0 : 1; + break; + } + default: + rc = cson_rc.TypeError; + break; + } + if( v ) *v = b; + return rc; + } +} + +char cson_value_get_bool( cson_value const * val ) +{ + char i = 0; + cson_value_fetch_bool( val, &i ); + return i; +} + +int cson_value_fetch_integer( cson_value const * val, cson_int_t * v ) +{ + if( ! 
val || !val->api ) return cson_rc.ArgError; + else + { + cson_int_t i = 0; + int rc = 0; + switch(val->api->typeID) + { + case CSON_TYPE_UNDEF: + case CSON_TYPE_NULL: + i = 0; + break; + case CSON_TYPE_BOOL: { + char b = 0; + cson_value_fetch_bool( val, &b ); + i = b; + break; + } + case CSON_TYPE_INTEGER: { + cson_int_t const * x = CSON_INT(val); + if(!x) + { + assert( val == &CSON_SPECIAL_VALUES[CSON_VAL_INT_0] ); + } + i = x ? *x : 0; + break; + } + case CSON_TYPE_DOUBLE: { + cson_double_t d = 0.0; + cson_value_fetch_double( val, &d ); + i = (cson_int_t)d; + break; + } + case CSON_TYPE_STRING: + case CSON_TYPE_ARRAY: + case CSON_TYPE_OBJECT: + default: + rc = cson_rc.TypeError; + break; + } + if(!rc && v) *v = i; + return rc; + } +} + +cson_int_t cson_value_get_integer( cson_value const * val ) +{ + cson_int_t i = 0; + cson_value_fetch_integer( val, &i ); + return i; +} + +int cson_value_fetch_double( cson_value const * val, cson_double_t * v ) +{ + if( ! val || !val->api ) return cson_rc.ArgError; + else + { + cson_double_t d = 0.0; + int rc = 0; + switch(val->api->typeID) + { + case CSON_TYPE_UNDEF: + case CSON_TYPE_NULL: + d = 0; + break; + case CSON_TYPE_BOOL: { + char b = 0; + cson_value_fetch_bool( val, &b ); + d = b ? 1.0 : 0.0; + break; + } + case CSON_TYPE_INTEGER: { + cson_int_t i = 0; + cson_value_fetch_integer( val, &i ); + d = i; + break; + } + case CSON_TYPE_DOUBLE: { + cson_double_t const* dv = CSON_DBL(val); + d = dv ? *dv : 0.0; + break; + } + default: + rc = cson_rc.TypeError; + break; + } + if(v) *v = d; + return rc; + } +} + +cson_double_t cson_value_get_double( cson_value const * val ) +{ + cson_double_t i = 0.0; + cson_value_fetch_double( val, &i ); + return i; +} + +int cson_value_fetch_string( cson_value const * val, cson_string ** dest ) +{ + if( ! val || ! dest ) return cson_rc.ArgError; + else if( ! cson_value_is_string(val) ) return cson_rc.TypeError; + else + { + if( dest ) *dest = CSON_STR(val); + return 0; + } +} + +cson_string * cson_value_get_string( cson_value const * val ) +{ + cson_string * rc = NULL; + cson_value_fetch_string( val, &rc ); + return rc; +} + +char const * cson_value_get_cstr( cson_value const * val ) +{ + return cson_string_cstr( cson_value_get_string(val) ); +} + +int cson_value_fetch_object( cson_value const * val, cson_object ** obj ) +{ + if( ! val ) return cson_rc.ArgError; + else if( ! cson_value_is_object(val) ) return cson_rc.TypeError; + else + { + if(obj) *obj = CSON_OBJ(val); + return 0; + } +} +cson_object * cson_value_get_object( cson_value const * v ) +{ + cson_object * obj = NULL; + cson_value_fetch_object( v, &obj ); + return obj; +} + +int cson_value_fetch_array( cson_value const * val, cson_array ** ar) +{ + if( ! val ) return cson_rc.ArgError; + else if( !cson_value_is_array(val) ) return cson_rc.TypeError; + else + { + if(ar) *ar = CSON_ARRAY(val); + return 0; + } +} + +cson_array * cson_value_get_array( cson_value const * v ) +{ + cson_array * ar = NULL; + cson_value_fetch_array( v, &ar ); + return ar; +} + +cson_kvp * cson_kvp_alloc() +{ + cson_kvp * kvp = (cson_kvp*)cson_malloc(sizeof(cson_kvp),"cson_kvp"); + if( kvp ) + { + *kvp = cson_kvp_empty; + } + return kvp; +} + + + +int cson_array_append( cson_array * ar, cson_value * v ) +{ + if( !ar || !v ) return cson_rc.ArgError; + else if( (ar->list.count+1) < ar->list.count ) return cson_rc.RangeError; + else + { + if( !ar->list.alloced || (ar->list.count == ar->list.alloced-1)) + { + unsigned int const n = ar->list.count ? 
(ar->list.count*2) : 7; + if( n > cson_value_list_reserve( &ar->list, n ) ) + { + return cson_rc.AllocError; + } + } + return cson_array_set( ar, ar->list.count, v ); + } +} + +#if 0 +/** + Removes and returns the last value from the given array, + shrinking its size by 1. Returns NULL if ar is NULL, + ar->list.count is 0, or the element at that index is NULL. + + + If removeRef is true then cson_value_free() is called to remove + ar's reference count for the value. In that case NULL is returned, + even if the object still has live references. If removeRef is false + then the caller takes over ownership of that reference count point. + + If removeRef is false then the caller takes over ownership + of the return value, otherwise ownership is effectively + determined by any remaining references for the returned + value. +*/ +static cson_value * cson_array_pop_back( cson_array * ar, + char removeRef ) +{ + if( !ar ) return NULL; + else if( ! ar->list.count ) return NULL; + else + { + unsigned int const ndx = --ar->list.count; + cson_value * v = ar->list.list[ndx]; + ar->list.list[ndx] = NULL; + if( removeRef ) + { + cson_value_free( v ); + v = NULL; + } + return v; + } +} +#endif + +cson_value * cson_value_new_bool( char v ) +{ + return v ? &CSON_SPECIAL_VALUES[CSON_VAL_TRUE] : &CSON_SPECIAL_VALUES[CSON_VAL_FALSE]; +} + +cson_value * cson_value_true() +{ + return &CSON_SPECIAL_VALUES[CSON_VAL_TRUE]; +} +cson_value * cson_value_false() +{ + return &CSON_SPECIAL_VALUES[CSON_VAL_FALSE]; +} + +cson_value * cson_value_null() +{ + return &CSON_SPECIAL_VALUES[CSON_VAL_NULL]; +} + +cson_value * cson_new_int( cson_int_t v ) +{ + return cson_value_new_integer(v); +} + +cson_value * cson_value_new_integer( cson_int_t v ) +{ + if( 0 == v ) return &CSON_SPECIAL_VALUES[CSON_VAL_INT_0]; + else + { + cson_value * c = cson_value_new(CSON_TYPE_INTEGER,0); +#if !defined(NDEBUG) && CSON_VOID_PTR_IS_BIG + assert( sizeof(cson_int_t) <= sizeof(void *) ); +#endif + if( c ) + { + *CSON_INT(c) = v; + } + return c; + } +} + +cson_value * cson_new_double( cson_double_t v ) +{ + return cson_value_new_double(v); +} + +cson_value * cson_value_new_double( cson_double_t v ) +{ + if( 0.0 == v ) return &CSON_SPECIAL_VALUES[CSON_VAL_DBL_0]; + else + { + cson_value * c = cson_value_new(CSON_TYPE_DOUBLE,0); + if( c ) + { + *CSON_DBL(c) = v; + } + return c; + } +} + +cson_string * cson_new_string(char const * str, unsigned int len) +{ + if( !str || !*str || !len ) return &CSON_EMPTY_HOLDER.stringValue; + else + { + cson_value * c = cson_value_new(CSON_TYPE_STRING, len + 1/*NUL byte*/); + cson_string * s = NULL; + if( c ) + { + char * dest = NULL; + s = CSON_STR(c); + *s = cson_string_empty; + assert( NULL != s ); + s->length = len; + dest = cson_string_str(s); + assert( NULL != dest ); + memcpy( dest, str, len ); + dest[len] = 0; + } + return s; + } +} + +cson_value * cson_value_new_string( char const * str, unsigned int len ) +{ + return cson_string_value( cson_new_string(str, len) ); +} + +int cson_array_value_fetch( cson_array const * ar, unsigned int pos, cson_value ** v ) +{ + if( !ar) return cson_rc.ArgError; + if( pos >= ar->list.count ) return cson_rc.RangeError; + else + { + if(v) *v = ar->list.list[pos]; + return 0; + } +} + +cson_value * cson_array_get( cson_array const * ar, unsigned int pos ) +{ + cson_value *v = NULL; + cson_array_value_fetch(ar, pos, &v); + return v; +} + +int cson_array_length_fetch( cson_array const * ar, unsigned int * v ) +{ + if( ! 
ar || !v ) return cson_rc.ArgError; + else + { + if(v) *v = ar->list.count; + return 0; + } +} + +unsigned int cson_array_length_get( cson_array const * ar ) +{ + unsigned int i = 0; + cson_array_length_fetch(ar, &i); + return i; +} + +int cson_array_reserve( cson_array * ar, unsigned int size ) +{ + if( ! ar ) return cson_rc.ArgError; + else if( size <= ar->list.alloced ) + { + /* We don't want to introduce a can of worms by trying to + handle the cleanup from here. + */ + return 0; + } + else + { + return (ar->list.alloced > cson_value_list_reserve( &ar->list, size )) + ? cson_rc.AllocError + : 0 + ; + } +} + +int cson_array_set( cson_array * ar, unsigned int ndx, cson_value * v ) +{ + if( !ar || !v ) return cson_rc.ArgError; + else if( (ndx+1) < ndx) /* overflow */return cson_rc.RangeError; + else + { + unsigned const int len = cson_value_list_reserve( &ar->list, ndx+1 ); + if( len <= ndx ) return cson_rc.AllocError; + else + { + cson_value * old = ar->list.list[ndx]; + if( old ) + { + if(old == v) return 0; + else cson_value_free(old); + } + cson_refcount_incr( v ); + ar->list.list[ndx] = v; + if( ndx >= ar->list.count ) + { + ar->list.count = ndx+1; + } + return 0; + } + } +} + +/** @internal + + Searchs for the given key in the given object. + + Returns the found item on success, NULL on error. If ndx is not + NULL, it is set to the index (in obj->kvp.list) of the found + item. *ndx is not modified if no entry is found. +*/ +static cson_kvp * cson_object_search_impl( cson_object const * obj, char const * key, unsigned int * ndx ) +{ + if( obj && key && *key && obj->kvp.count) + { +#if CSON_OBJECT_PROPS_SORT + cson_kvp ** s = (cson_kvp**) + bsearch( key, obj->kvp.list, + obj->kvp.count, sizeof(cson_kvp*), + cson_kvp_cmp_vs_cstr ); + if( ndx && s ) + { /* index of found record is required by + cson_object_unset(). Calculate the offset based on s...*/ +#if 0 + *ndx = (((unsigned char const *)s - ((unsigned char const *)obj->kvp.list)) + / sizeof(cson_kvp*)); +#else + *ndx = s - obj->kvp.list; +#endif + } + return s ? *s : NULL; +#else + cson_kvp_list const * li = &obj->kvp; + unsigned int i = 0; + cson_kvp * kvp; + const unsigned int klen = strlen(key); + for( ; i < li->count; ++i ) + { + cson_string const * sKey; + kvp = li->list[i]; + assert( kvp && kvp->key ); + sKey = cson_value_get_string(kvp->key); + assert(sKey); + if( sKey->length != klen ) continue; + else if(0==strcmp(key,cson_string_cstr(sKey))) + { + if(ndx) *ndx = i; + return kvp; + } + } +#endif + } + return NULL; +} + +cson_value * cson_object_get( cson_object const * obj, char const * key ) +{ + cson_kvp * kvp = cson_object_search_impl( obj, key, NULL ); + return kvp ? kvp->value : NULL; +} + +cson_value * cson_object_get_s( cson_object const * obj, cson_string const *key ) +{ + cson_kvp * kvp = cson_object_search_impl( obj, cson_string_cstr(key), NULL ); + return kvp ? kvp->value : NULL; +} + + +#if CSON_OBJECT_PROPS_SORT +static void cson_object_sort_props( cson_object * obj ) +{ + assert( NULL != obj ); + if( obj->kvp.count ) + { + qsort( obj->kvp.list, obj->kvp.count, sizeof(cson_kvp*), + cson_kvp_cmp ); + } + +} +#endif + +int cson_object_unset( cson_object * obj, char const * key ) +{ + if( ! obj || !key || !*key ) return cson_rc.ArgError; + else + { + unsigned int ndx = 0; + cson_kvp * kvp = cson_object_search_impl( obj, key, &ndx ); + if( ! 
kvp ) + { + return cson_rc.NotFoundError; + } + assert( obj->kvp.count > 0 ); + assert( obj->kvp.list[ndx] == kvp ); + cson_kvp_free( kvp ); + obj->kvp.list[ndx] = NULL; + { /* if my brain were bigger i'd use memmove(). */ + unsigned int i = ndx; + for( ; i < obj->kvp.count; ++i ) + { + obj->kvp.list[i] = + (i < (obj->kvp.alloced-1)) + ? obj->kvp.list[i+1] + : NULL; + } + } + obj->kvp.list[--obj->kvp.count] = NULL; +#if CSON_OBJECT_PROPS_SORT + cson_object_sort_props( obj ); +#endif + return 0; + } +} + +int cson_object_set_s( cson_object * obj, cson_string * key, cson_value * v ) +{ + if( !obj || !key ) return cson_rc.ArgError; + else if( NULL == v ) return cson_object_unset( obj, cson_string_cstr(key) ); + else + { + char const * cKey; + cson_value * vKey; + cson_kvp * kvp; + vKey = cson_string_value(key); + assert(vKey && (key==CSON_STR(vKey))); + if( vKey == CSON_VCAST(obj) ){ + return cson_rc.ArgError; + } + cKey = cson_string_cstr(key); + kvp = cson_object_search_impl( obj, cKey, NULL ); + if( kvp ) + { /* "I told 'em we've already got one!" */ + if( kvp->key != vKey ){ + cson_value_free( kvp->key ); + cson_refcount_incr(vKey); + kvp->key = vKey; + } + if(kvp->value != v){ + cson_value_free( kvp->value ); + cson_refcount_incr( v ); + kvp->value = v; + } + return 0; + } + if( !obj->kvp.alloced || (obj->kvp.count == obj->kvp.alloced-1)) + { /* reserve space */ + unsigned int const n = obj->kvp.count ? (obj->kvp.count*2) : 6; + if( n > cson_kvp_list_reserve( &obj->kvp, n ) ) + { + return cson_rc.AllocError; + } + } + { /* insert new item... */ + int rc = 0; + kvp = cson_kvp_alloc(); + if( ! kvp ) + { + return cson_rc.AllocError; + } + rc = cson_kvp_list_append( &obj->kvp, kvp ); + if( 0 != rc ) + { + cson_kvp_free(kvp); + } + else + { + cson_refcount_incr(vKey); + cson_refcount_incr(v); + kvp->key = vKey; + kvp->value = v; +#if CSON_OBJECT_PROPS_SORT + cson_object_sort_props( obj ); +#endif + } + return rc; + } + } + +} +int cson_object_set( cson_object * obj, char const * key, cson_value * v ) +{ + if( ! obj || !key || !*key ) return cson_rc.ArgError; + else if( NULL == v ) + { + return cson_object_unset( obj, key ); + } + else + { + cson_string * cs = cson_new_string(key,strlen(key)); + if(!cs) return cson_rc.AllocError; + else + { + int const rc = cson_object_set_s(obj, cs, v); + if(rc) cson_value_free(cson_string_value(cs)); + return rc; + } + } +} + +cson_value * cson_object_take( cson_object * obj, char const * key ) +{ + if( ! obj || !key || !*key ) return NULL; + else + { + /* FIXME: this is 90% identical to cson_object_unset(), + only with different refcount handling. + Consolidate them. + */ + unsigned int ndx = 0; + cson_kvp * kvp = cson_object_search_impl( obj, key, &ndx ); + cson_value * rc = NULL; + if( ! kvp ) + { + return NULL; + } + assert( obj->kvp.count > 0 ); + assert( obj->kvp.list[ndx] == kvp ); + rc = kvp->value; + assert( rc ); + kvp->value = NULL; + cson_kvp_free( kvp ); + assert( rc->refcount > 0 ); + --rc->refcount; + obj->kvp.list[ndx] = NULL; + { /* if my brain were bigger i'd use memmove(). */ + unsigned int i = ndx; + for( ; i < obj->kvp.count; ++i ) + { + obj->kvp.list[i] = + (i < (obj->kvp.alloced-1)) + ? obj->kvp.list[i+1] + : NULL; + } + } + obj->kvp.list[--obj->kvp.count] = NULL; +#if CSON_OBJECT_PROPS_SORT + cson_object_sort_props( obj ); +#endif + return rc; + } +} +/** @internal + + If p->node is-a Object then value is inserted into the object + using p->key. In any other case cson_rc.InternalError is returned. 
+ + Returns cson_rc.AllocError if an allocation fails. + + Returns 0 on success. On error, parsing must be ceased immediately. + + Ownership of val is ALWAYS TRANSFERED to this function. If this + function fails, val will be cleaned up and destroyed. (This + simplifies error handling in the core parser.) +*/ +static int cson_parser_set_key( cson_parser * p, cson_value * val ) +{ + assert( p && val ); + + if( p->ckey && cson_value_is_object(p->node) ) + { + int rc; + cson_object * obj = cson_value_get_object(p->node); + cson_kvp * kvp = NULL; + assert( obj && (p->node->value == obj) ); + /** + FIXME? Use cson_object_set() instead of our custom + finagling with the object? We do it this way to avoid an + extra alloc/strcpy of the key data. + */ + if( !obj->kvp.alloced || (obj->kvp.count == obj->kvp.alloced-1)) + { + if( obj->kvp.alloced > cson_kvp_list_reserve( &obj->kvp, obj->kvp.count ? (obj->kvp.count*2) : 5 ) ) + { + cson_value_free(val); + return cson_rc.AllocError; + } + } + kvp = cson_kvp_alloc(); + if( ! kvp ) + { + cson_value_free(val); + return cson_rc.AllocError; + } + kvp->key = cson_string_value(p->ckey)/*transfer ownership*/; + assert(0 == kvp->key->refcount); + cson_refcount_incr(kvp->key); + p->ckey = NULL; + kvp->value = val; + cson_refcount_incr( val ); + rc = cson_kvp_list_append( &obj->kvp, kvp ); + if( 0 != rc ) + { + cson_kvp_free( kvp ); + } + else + { + ++p->totalValueCount; + } + return rc; + } + else + { + if(val) cson_value_free(val); + return p->errNo = cson_rc.InternalError; + } + +} + +/** @internal + + Pushes val into the current object/array parent node, depending on the + internal state of the parser. + + Ownership of val is always transfered to this function, regardless of + success or failure. + + Returns 0 on success. On error, parsing must be ceased immediately. +*/ +static int cson_parser_push_value( cson_parser * p, cson_value * val ) +{ + if( p->ckey ) + { /* we're in Object mode */ + assert( cson_value_is_object( p->node ) ); + return cson_parser_set_key( p, val ); + } + else if( cson_value_is_array( p->node ) ) + { /* we're in Array mode */ + cson_array * ar = cson_value_get_array( p->node ); + int rc; + assert( ar && (ar == p->node->value) ); + rc = cson_array_append( ar, val ); + if( 0 != rc ) + { + cson_value_free(val); + } + else + { + ++p->totalValueCount; + } + return rc; + } + else + { /* WTF? */ + assert( 0 && "Internal error in cson_parser code" ); + return p->errNo = cson_rc.InternalError; + } +} + +/** + Callback for JSON_parser API. Reminder: it returns 0 (meaning false) + on error! +*/ +static int cson_parse_callback( void * cx, int type, JSON_value const * value ) +{ + cson_parser * p = (cson_parser *)cx; + int rc = 0; +#define ALLOC_V(T,V) cson_value * v = cson_value_new_##T(V); if( ! v ) { rc = cson_rc.AllocError; break; } + switch(type) { + case JSON_T_ARRAY_BEGIN: + case JSON_T_OBJECT_BEGIN: { + cson_value * obja = (JSON_T_ARRAY_BEGIN == type) + ? cson_value_new_array() + : cson_value_new_object(); + if( ! obja ) + { + p->errNo = cson_rc.AllocError; + break; + } + if( 0 != rc ) break; + if( ! p->root ) + { + p->root = p->node = obja; + rc = cson_array_append( &p->stack, obja ); + if( 0 != rc ) + { /* work around a (potential) corner case in the cleanup code. */ + cson_value_free( p->root ); + p->root = NULL; + } + else + { + cson_refcount_incr( p->root ) + /* simplifies cleanup later on. 
*/ + ; + ++p->totalValueCount; + } + } + else + { + rc = cson_array_append( &p->stack, obja ); + if(rc) cson_value_free( obja ); + else + { + rc = cson_parser_push_value( p, obja ); + if( 0 == rc ) p->node = obja; + } + } + break; + } + case JSON_T_ARRAY_END: + case JSON_T_OBJECT_END: { + if( 0 == p->stack.list.count ) + { + rc = cson_rc.RangeError; + break; + } +#if CSON_OBJECT_PROPS_SORT + if( cson_value_is_object(p->node) ) + {/* kludge: the parser uses custom cson_object property + insertion as a malloc/strcpy-reduction optimization. + Because of that, we have to sort the property list + ourselves... + */ + cson_object * obj = cson_value_get_object(p->node); + assert( NULL != obj ); + cson_object_sort_props( obj ); + } +#endif + +#if 1 + /* Reminder: do not use cson_array_pop_back( &p->stack ) + because that will clean up the object, and we don't want + that. We just want to forget this reference + to it. The object is either the root or was pushed into + an object/array in the parse tree (and is owned by that + object/array). + */ + --p->stack.list.count; + assert( p->node == p->stack.list.list[p->stack.list.count] ); + cson_refcount_decr( p->node ) + /* p->node might be owned by an outer object but we + need to remove the list's reference. For the + root node we manually add a reference to + avoid a special case here. Thus when we close + the root node, its refcount is still 1. + */; + p->stack.list.list[p->stack.list.count] = NULL; + if( p->stack.list.count ) + { + p->node = p->stack.list.list[p->stack.list.count-1]; + } + else + { + p->node = p->root; + } +#else + /* + Causing a leak? + */ + cson_array_pop_back( &p->stack, 1 ); + if( p->stack.list.count ) + { + p->node = p->stack.list.list[p->stack.list.count-1]; + } + else + { + p->node = p->root; + } + assert( p->node && (1==p->node->refcount) ); +#endif + break; + } + case JSON_T_INTEGER: { + ALLOC_V(integer, value->vu.integer_value ); + rc = cson_parser_push_value( p, v ); + break; + } + case JSON_T_FLOAT: { + ALLOC_V(double, value->vu.float_value ); + rc = cson_parser_push_value( p, v ); + break; + } + case JSON_T_NULL: { + rc = cson_parser_push_value( p, cson_value_null() ); + break; + } + case JSON_T_TRUE: { + rc = cson_parser_push_value( p, cson_value_true() ); + break; + } + case JSON_T_FALSE: { + rc = cson_parser_push_value( p, cson_value_false() ); + break; + } + case JSON_T_KEY: { + assert(!p->ckey); + p->ckey = cson_new_string( value->vu.str.value, value->vu.str.length ); + if( ! p->ckey ) + { + rc = cson_rc.AllocError; + break; + } + ++p->totalKeyCount; + break; + } + case JSON_T_STRING: { + cson_value * v = cson_value_new_string( value->vu.str.value, value->vu.str.length ); + rc = ( NULL == v ) + ? cson_rc.AllocError + : cson_parser_push_value( p, v ); + break; + } + default: + assert(0); + rc = cson_rc.InternalError; + break; + } +#undef ALLOC_V + return ((p->errNo = rc)) ? 0 : 1; +} + + +/** + Converts a JSON_error code to one of the cson_rc values. 
+*/ +static int cson_json_err_to_rc( JSON_error jrc ) +{ + switch(jrc) + { + case JSON_E_NONE: return 0; + case JSON_E_INVALID_CHAR: return cson_rc.Parse_INVALID_CHAR; + case JSON_E_INVALID_KEYWORD: return cson_rc.Parse_INVALID_KEYWORD; + case JSON_E_INVALID_ESCAPE_SEQUENCE: return cson_rc.Parse_INVALID_ESCAPE_SEQUENCE; + case JSON_E_INVALID_UNICODE_SEQUENCE: return cson_rc.Parse_INVALID_UNICODE_SEQUENCE; + case JSON_E_INVALID_NUMBER: return cson_rc.Parse_INVALID_NUMBER; + case JSON_E_NESTING_DEPTH_REACHED: return cson_rc.Parse_NESTING_DEPTH_REACHED; + case JSON_E_UNBALANCED_COLLECTION: return cson_rc.Parse_UNBALANCED_COLLECTION; + case JSON_E_EXPECTED_KEY: return cson_rc.Parse_EXPECTED_KEY; + case JSON_E_EXPECTED_COLON: return cson_rc.Parse_EXPECTED_COLON; + case JSON_E_OUT_OF_MEMORY: return cson_rc.AllocError; + default: + return cson_rc.InternalError; + } +} + +/** @internal + + Cleans up all contents of p but does not free p. + + To properly take over ownership of the parser's root node on a + successful parse: + + - Copy p->root's pointer and set p->root to NULL. + - Eventually free up p->root with cson_value_free(). + + If you do not set p->root to NULL, p->root will be freed along with + any other items inserted into it (or under it) during the parsing + process. +*/ +static int cson_parser_clean( cson_parser * p ) +{ + if( ! p ) return cson_rc.ArgError; + else + { + if( p->p ) + { + delete_JSON_parser(p->p); + p->p = NULL; + } + if( p->ckey ){ + cson_value_free(cson_string_value(p->ckey)); + } + cson_array_clean( &p->stack, 1 ); + if( p->root ) + { + cson_value_free( p->root ); + } + *p = cson_parser_empty; + return 0; + } +} + + +int cson_parse( cson_value ** tgt, cson_data_source_f src, void * state, + cson_parse_opt const * opt_, cson_parse_info * info_ ) +{ + unsigned char ch[2] = {0,0}; + cson_parse_opt const opt = opt_ ? *opt_ : cson_parse_opt_empty; + int rc = 0; + unsigned int len = 1; + cson_parse_info info = info_ ? *info_ : cson_parse_info_empty; + cson_parser p = cson_parser_empty; + if( ! tgt || ! src ) return cson_rc.ArgError; + + { + JSON_config jopt = {0}; + init_JSON_config( &jopt ); + jopt.allow_comments = opt.allowComments; + jopt.depth = opt.maxDepth; + jopt.callback_ctx = &p; + jopt.handle_floats_manually = 0; + jopt.callback = cson_parse_callback; + p.p = new_JSON_parser(&jopt); + if( ! p.p ) + { + return cson_rc.AllocError; + } + } + + do + { /* FIXME: buffer the input in multi-kb chunks. */ + len = 1; + ch[0] = 0; + rc = src( state, ch, &len ); + if( 0 != rc ) break; + else if( !len /* EOF */ ) break; + ++info.length; + if('\n' == ch[0]) + { + ++info.line; + info.col = 0; + } + if( ! JSON_parser_char(p.p, ch[0]) ) + { + rc = cson_json_err_to_rc( JSON_parser_get_last_error(p.p) ); + if(0==rc) rc = p.errNo; + if(0==rc) rc = cson_rc.InternalError; + info.errorCode = rc; + break; + } + if( '\n' != ch[0]) ++info.col; + } while(1); + if( info_ ) + { + info.totalKeyCount = p.totalKeyCount; + info.totalValueCount = p.totalValueCount; + *info_ = info; + } + if( 0 != rc ) + { + cson_parser_clean(&p); + return rc; + } + if( ! JSON_parser_done(p.p) ) + { + rc = cson_json_err_to_rc( JSON_parser_get_last_error(p.p) ); + cson_parser_clean(&p); + if(0==rc) rc = p.errNo; + if(0==rc) rc = cson_rc.InternalError; + } + else + { + cson_value * root = p.root; + p.root = NULL; + cson_parser_clean(&p); + if( root ) + { + assert( (1 == root->refcount) && "Detected memory mismanagement in the parser." ); + root->refcount = 0 + /* HUGE KLUDGE! 
Avoids having one too many references + in some client code, leading to a leak. Here we're + accommodating a memory management workaround in the + parser code which manually adds a reference to the + root node to keep it from being cleaned up + prematurely. + */; + *tgt = root; + } + else + { /* then can happen on empty input. */ + rc = cson_rc.UnknownError; + } + } + return rc; +} + +/** + The UTF code was originally taken from sqlite3's public-domain + source code (http://sqlite.org), modified only slightly for use + here. This code generates some "possible data loss" warnings on + MSVC, but if this code is good enough for sqlite3 then it's damned + well good enough for me, so we disable that warning for Windows + builds. +*/ + +/* +** This lookup table is used to help decode the first byte of +** a multi-byte UTF8 character. +*/ +static const unsigned char cson_utfTrans1[] = { + 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, + 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, + 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, + 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, + 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, + 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, + 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, + 0x00, 0x01, 0x02, 0x03, 0x00, 0x01, 0x00, 0x00 +}; + + +/* +** Translate a single UTF-8 character. Return the unicode value. +** +** During translation, assume that the byte that zTerm points +** is a 0x00. +** +** Write a pointer to the next unread byte back into *pzNext. +** +** Notes On Invalid UTF-8: +** +** * This routine never allows a 7-bit character (0x00 through 0x7f) to +** be encoded as a multi-byte character. Any multi-byte character that +** attempts to encode a value between 0x00 and 0x7f is rendered as 0xfffd. +** +** * This routine never allows a UTF16 surrogate value to be encoded. +** If a multi-byte character attempts to encode a value between +** 0xd800 and 0xe000 then it is rendered as 0xfffd. +** +** * Bytes in the range of 0x80 through 0xbf which occur as the first +** byte of a character are interpreted as single-byte characters +** and rendered as themselves even though they are technically +** invalid characters. +** +** * This routine accepts an infinite number of different UTF8 encodings +** for unicode values 0x80 and greater. It do not change over-length +** encodings to 0xfffd as some systems recommend. +*/ +#define READ_UTF8(zIn, zTerm, c) \ + c = *(zIn++); \ + if( c>=0xc0 ){ \ + c = cson_utfTrans1[c-0xc0]; \ + while( zIn!=zTerm && (*zIn & 0xc0)==0x80 ){ \ + c = (c<<6) + (0x3f & *(zIn++)); \ + } \ + if( c<0x80 \ + || (c&0xFFFFF800)==0xD800 \ + || (c&0xFFFFFFFE)==0xFFFE ){ c = 0xFFFD; } \ + } +static int cson_utf8Read( + const unsigned char *z, /* First byte of UTF-8 character */ + const unsigned char *zTerm, /* Pretend this byte is 0x00 */ + const unsigned char **pzNext /* Write first byte past UTF-8 char here */ +){ + int c; + READ_UTF8(z, zTerm, c); + *pzNext = z; + return c; +} +#undef READ_UTF8 + +#ifdef _MSC_VER +# if _MSC_VER >= 1400 /* Visual Studio 2005 and up */ +# pragma warning( pop ) +# endif +#endif + +unsigned int cson_string_length_utf8( cson_string const * str ) +{ + if( ! 
str ) return 0; + else + { + char unsigned const * pos = (char unsigned const *)cson_string_cstr(str); + char unsigned const * end = pos + str->length; + unsigned int rc = 0; + for( ; (pos < end) && cson_utf8Read(pos, end, &pos); + ++rc ) + { + }; + return rc; + } +} + +/** + Escapes the first len bytes of the given string as JSON and sends + it to the given output function (which will be called often - once + for each logical character). The output is also surrounded by + double-quotes. + + A NULL str will be escaped as an empty string, though we should + arguably export it as "null" (without quotes). We do this because + in JavaScript (typeof null === "object"), and by outputing null + here we would effectively change the data type from string to + object. +*/ +static int cson_str_to_json( char const * str, unsigned int len, + char escapeFwdSlash, + cson_data_dest_f f, void * state ) +{ + if( NULL == f ) return cson_rc.ArgError; + else if( !str || !*str || (0 == len) ) + { /* special case for 0-length strings. */ + return f( state, "\"\"", 2 ); + } + else + { + unsigned char const * pos = (unsigned char const *)str; + unsigned char const * end = (unsigned char const *)(str ? (str + len) : NULL); + unsigned char const * next = NULL; + int ch; + unsigned char clen = 0; + char escChar[3] = {'\\',0,0}; + enum { UBLen = 13 }; + char ubuf[UBLen]; + int rc = 0; + rc = f(state, "\"", 1 ); + for( ; (pos < end) && (0 == rc); pos += clen ) + { + ch = cson_utf8Read(pos, end, &next); + if( 0 == ch ) break; + assert( next > pos ); + clen = next - pos; + assert( clen ); + if( 1 == clen ) + { /* ASCII */ +#if defined(CSON_FOSSIL_MODE) + /* Workaround for fossil repo artifact + f460839cff85d4e4f1360b366bb2858cef1411ea, + which has what appears to be latin1-encoded + text. file(1) thinks it's a FORTRAN program. + */ + if(0xfffd==ch){ + assert(*pos != ch); + /* MARKER("ch=%04x, *pos=%04x\n", ch, *pos); */ + ch = *pos + /* We should arguably translate to '?', and + will if this problem ever comes up with a + non-latin1 encoding. For latin1 this + workaround incidentally corrects the output + to proper UTF8-escaped characters, and only + for that reason is it being kept around. + */; + goto assume_latin1; + } +#endif + assert( (*pos == ch) && "Invalid UTF8" ); + escChar[1] = 0; + switch(ch) + { + case '\t': escChar[1] = 't'; break; + case '\r': escChar[1] = 'r'; break; + case '\n': escChar[1] = 'n'; break; + case '\f': escChar[1] = 'f'; break; + case '\b': escChar[1] = 'b'; break; + case '/': + /* + Regarding escaping of forward-slashes. See the main exchange below... + + -------------- + From: Douglas Crockford + To: Stephan Beal + Subject: Re: Is escaping of forward slashes required? + + It is allowed, not required. It is allowed so that JSON can be safely + embedded in HTML, which can freak out when seeing strings containing + " Hello, Jsonites, + > + > i'm a bit confused on a small grammatic detail of JSON: + > + > if i'm reading the grammar chart on http://www.json.org/ correctly, + > forward slashes (/) are supposed to be escaped in JSON. However, the + > JSON class provided with my browsers (Chrome and FF, both of which i + > assume are fairly standards/RFC-compliant) do not escape such characters. + > + > Is backslash-escaping forward slashes required? If so, what is the + > justification for it? (i ask because i find it unnecessary and hard to + > look at.) 
+ -------------- + */ + if( escapeFwdSlash ) escChar[1] = '/'; + break; + case '\\': escChar[1] = '\\'; break; + case '"': escChar[1] = '"'; break; + default: break; + } + if( escChar[1]) + { + rc = f(state, escChar, 2); + } + else + { + rc = f(state, (char const *)pos, clen); + } + continue; + } + else + { /* UTF: transform it to \uXXXX */ +#if defined(CSON_FOSSIL_MODE) + assume_latin1: +#endif + memset(ubuf,0,UBLen); + if(ch <= 0xFFFF){ + rc = sprintf(ubuf, "\\u%04x",ch); + if( rc != 6 ) + { + rc = cson_rc.RangeError; + break; + } + rc = f( state, ubuf, 6 ); + }else{ /* encode as a UTF16 surrogate pair */ + /* http://unicodebook.readthedocs.org/en/latest/unicode_encodings.html#surrogates */ + ch -= 0x10000; + rc = sprintf(ubuf, "\\u%04x\\u%04x", + (0xd800 | (ch>>10)), + (0xdc00 | (ch & 0x3ff))); + if( rc != 12 ) + { + rc = cson_rc.RangeError; + break; + } + rc = f( state, ubuf, 12 ); + } + continue; + } + } + if( 0 == rc ) + { + rc = f(state, "\"", 1 ); + } + return rc; + } +} + +int cson_object_iter_init( cson_object const * obj, cson_object_iterator * iter ) +{ + if( ! obj || !iter ) return cson_rc.ArgError; + else + { + iter->obj = obj; + iter->pos = 0; + return 0; + } +} + +cson_kvp * cson_object_iter_next( cson_object_iterator * iter ) +{ + if( ! iter || !iter->obj ) return NULL; + else if( iter->pos >= iter->obj->kvp.count ) return NULL; + else + { + cson_kvp * rc = iter->obj->kvp.list[iter->pos++]; + while( (NULL==rc) && (iter->pos < iter->obj->kvp.count)) + { + rc = iter->obj->kvp.list[iter->pos++]; + } + return rc; + } +} + +static int cson_output_null( cson_data_dest_f f, void * state ) +{ + if( !f ) return cson_rc.ArgError; + else + { + return f(state, "null", 4); + } +} + +static int cson_output_bool( cson_value const * src, cson_data_dest_f f, void * state ) +{ + if( !f ) return cson_rc.ArgError; + else + { + char const v = cson_value_get_bool(src); + return f(state, v ? "true" : "false", v ? 4 : 5); + } +} + +static int cson_output_integer( cson_value const * src, cson_data_dest_f f, void * state ) +{ + if( !f ) return cson_rc.ArgError; + else if( !cson_value_is_integer(src) ) return cson_rc.TypeError; + else + { + enum { BufLen = 100 }; + char b[BufLen]; + int rc; + memset( b, 0, BufLen ); + rc = sprintf( b, "%"CSON_INT_T_PFMT, cson_value_get_integer(src) ) + /* Reminder: snprintf() is C99 */ + ; + return ( rc<=0 ) + ? cson_rc.RangeError + : f( state, b, (unsigned int)rc ) + ; + } +} + +static int cson_output_double( cson_value const * src, cson_data_dest_f f, void * state ) +{ + if( !f ) return cson_rc.ArgError; + else if( !cson_value_is_double(src) ) return cson_rc.TypeError; + else + { + enum { BufLen = 128 /* this must be relatively large or huge + doubles can cause us to overrun here, + resulting in stack-smashing errors. + */}; + char b[BufLen]; + int rc; + memset( b, 0, BufLen ); + rc = sprintf( b, "%"CSON_DOUBLE_T_PFMT, cson_value_get_double(src) ) + /* Reminder: snprintf() is C99 */ + ; + if( rc<=0 ) return cson_rc.RangeError; + else if(1) + { /* Strip trailing zeroes before passing it on... */ + unsigned int urc = (unsigned int)rc; + char * pos = b + urc - 1; + for( ; ('0' == *pos) && urc && (*(pos-1) != '.'); --pos, --urc ) + { + *pos = 0; + } + assert(urc && *pos); + return f( state, b, urc ); + } + else + { + unsigned int urc = (unsigned int)rc; + return f( state, b, urc ); + } + return 0; + } +} + +static int cson_output_string( cson_value const * src, char escapeFwdSlash, cson_data_dest_f f, void * state ) +{ + if( !f ) return cson_rc.ArgError; + else if( ! 
cson_value_is_string(src) ) return cson_rc.TypeError; + else + { + cson_string const * str = cson_value_get_string(src); + assert( NULL != str ); + return cson_str_to_json(cson_string_cstr(str), str->length, escapeFwdSlash, f, state); + } +} + + +/** + Outputs indention spacing to f(). + + blanks: (0)=no indentation, (1)=1 TAB per/level, (>1)=n spaces/level + + depth is the current depth of the output tree, and determines how much + indentation to generate. + + If blanks is 0 this is a no-op. Returns non-0 on error, and the + error code will always come from f(). +*/ +static int cson_output_indent( cson_data_dest_f f, void * state, + unsigned char blanks, unsigned int depth ) +{ + if( 0 == blanks ) return 0; + else + { +#if 0 + /* FIXME: stuff the indention into the buffer and make a single + call to f(). + */ + enum { BufLen = 200 }; + char buf[BufLen]; +#endif + unsigned int i; + unsigned int x; + char const ch = (1==blanks) ? '\t' : ' '; + int rc = f(state, "\n", 1 ); + for( i = 0; (i < depth) && (0 == rc); ++i ) + { + for( x = 0; (x < blanks) && (0 == rc); ++x ) + { + rc = f(state, &ch, 1); + } + } + return rc; + } +} + +static int cson_output_array( cson_value const * src, cson_data_dest_f f, void * state, + cson_output_opt const * fmt, unsigned int level ); +static int cson_output_object( cson_value const * src, cson_data_dest_f f, void * state, + cson_output_opt const * fmt, unsigned int level ); +/** + Main cson_output() implementation. Dispatches to a different impl depending + on src->api->typeID. + + Returns 0 on success. +*/ +static int cson_output_impl( cson_value const * src, cson_data_dest_f f, void * state, + cson_output_opt const * fmt, unsigned int level ) +{ + if( ! src || !f || !src->api ) return cson_rc.ArgError; + else + { + int rc = 0; + assert(fmt); + switch( src->api->typeID ) + { + case CSON_TYPE_UNDEF: + case CSON_TYPE_NULL: + rc = cson_output_null(f, state); + break; + case CSON_TYPE_BOOL: + rc = cson_output_bool(src, f, state); + break; + case CSON_TYPE_INTEGER: + rc = cson_output_integer(src, f, state); + break; + case CSON_TYPE_DOUBLE: + rc = cson_output_double(src, f, state); + break; + case CSON_TYPE_STRING: + rc = cson_output_string(src, fmt->escapeForwardSlashes, f, state); + break; + case CSON_TYPE_ARRAY: + rc = cson_output_array( src, f, state, fmt, level ); + break; + case CSON_TYPE_OBJECT: + rc = cson_output_object( src, f, state, fmt, level ); + break; + default: + rc = cson_rc.TypeError; + break; + } + return rc; + } +} + + +static int cson_output_array( cson_value const * src, cson_data_dest_f f, void * state, + cson_output_opt const * fmt, unsigned int level ) +{ + if( !src || !f || !fmt ) return cson_rc.ArgError; + else if( ! cson_value_is_array(src) ) return cson_rc.TypeError; + else if( level > fmt->maxDepth ) return cson_rc.RangeError; + else + { + int rc; + unsigned int i; + cson_value const * v; + char doIndent = fmt->indentation ? 
1 : 0; + cson_array const * ar = cson_value_get_array(src); + assert( NULL != ar ); + if( 0 == ar->list.count ) + { + return f(state, "[]", 2 ); + } + else if( (1 == ar->list.count) && !fmt->indentSingleMemberValues ) doIndent = 0; + rc = f(state, "[", 1); + ++level; + if( doIndent ) + { + rc = cson_output_indent( f, state, fmt->indentation, level ); + } + for( i = 0; (i < ar->list.count) && (0 == rc); ++i ) + { + v = ar->list.list[i]; + if( v ) + { + rc = cson_output_impl( v, f, state, fmt, level ); + } + else + { + rc = cson_output_null( f, state ); + } + if( 0 == rc ) + { + if(i < (ar->list.count-1)) + { + rc = f(state, ",", 1); + if( 0 == rc ) + { + rc = doIndent + ? cson_output_indent( f, state, fmt->indentation, level ) + : 0 /*f( state, " ", 1 )*/; + } + } + } + } + --level; + if( doIndent && (0 == rc) ) + { + rc = cson_output_indent( f, state, fmt->indentation, level ); + } + return (0 == rc) + ? f(state, "]", 1) + : rc; + } +} + +static int cson_output_object( cson_value const * src, cson_data_dest_f f, void * state, + cson_output_opt const * fmt, unsigned int level ) +{ + if( !src || !f || !fmt ) return cson_rc.ArgError; + else if( ! cson_value_is_object(src) ) return cson_rc.TypeError; + else if( level > fmt->maxDepth ) return cson_rc.RangeError; + else + { + int rc; + unsigned int i; + cson_kvp const * kvp; + char doIndent = fmt->indentation ? 1 : 0; + cson_object const * obj = cson_value_get_object(src); + assert( (NULL != obj) && (NULL != fmt)); + if( 0 == obj->kvp.count ) + { + return f(state, "{}", 2 ); + } + else if( (1 == obj->kvp.count) && !fmt->indentSingleMemberValues ) doIndent = 0; + rc = f(state, "{", 1); + ++level; + if( doIndent ) + { + rc = cson_output_indent( f, state, fmt->indentation, level ); + } + for( i = 0; (i < obj->kvp.count) && (0 == rc); ++i ) + { + kvp = obj->kvp.list[i]; + if( kvp && kvp->key ) + { + cson_string const * sKey = cson_value_get_string(kvp->key); + char const * cKey = cson_string_cstr(sKey); + rc = cson_str_to_json(cKey, sKey->length, + fmt->escapeForwardSlashes, f, state); + if( 0 == rc ) + { + rc = fmt->addSpaceAfterColon + ? f(state, ": ", 2 ) + : f(state, ":", 1 ) + ; + } + if( 0 == rc) + { + rc = ( kvp->value ) + ? cson_output_impl( kvp->value, f, state, fmt, level ) + : cson_output_null( f, state ); + } + } + else + { + assert( 0 && "Possible internal error." ); + continue /* internal error? */; + } + if( 0 == rc ) + { + if(i < (obj->kvp.count-1)) + { + rc = f(state, ",", 1); + if( 0 == rc ) + { + rc = doIndent + ? cson_output_indent( f, state, fmt->indentation, level ) + : 0 /*f( state, " ", 1 )*/; + } + } + } + } + --level; + if( doIndent && (0 == rc) ) + { + rc = cson_output_indent( f, state, fmt->indentation, level ); + } + return (0 == rc) + ? f(state, "}", 1) + : rc; + } +} + +int cson_output( cson_value const * src, cson_data_dest_f f, + void * state, cson_output_opt const * fmt ) +{ + int rc; + if(! fmt ) fmt = &cson_output_opt_empty; + rc = cson_output_impl(src, f, state, fmt, 0 ); + if( (0 == rc) && fmt->addNewline ) + { + rc = f(state, "\n", 1); + } + return rc; +} + +int cson_data_dest_FILE( void * state, void const * src, unsigned int n ) +{ + if( ! state ) return cson_rc.ArgError; + else if( !src || !n ) return 0; + else + { + return ( 1 == fwrite( src, n, 1, (FILE*) state ) ) + ? 
0 + : cson_rc.IOError; + } +} + +int cson_output_FILE( cson_value const * src, FILE * dest, cson_output_opt const * fmt ) +{ + int rc = 0; + if( fmt ) + { + rc = cson_output( src, cson_data_dest_FILE, dest, fmt ); + } + else + { + /* We normally want a newline on FILE output. */ + cson_output_opt opt = cson_output_opt_empty; + opt.addNewline = 1; + rc = cson_output( src, cson_data_dest_FILE, dest, &opt ); + } + if( 0 == rc ) + { + fflush( dest ); + } + return rc; +} + +int cson_output_filename( cson_value const * src, char const * dest, cson_output_opt const * fmt ) +{ + if( !src || !dest ) return cson_rc.ArgError; + else + { + FILE * f = fopen(dest,"wb"); + if( !f ) return cson_rc.IOError; + else + { + int const rc = cson_output_FILE( src, f, fmt ); + fclose(f); + return rc; + } + } +} + +int cson_parse_filename( cson_value ** tgt, char const * src, + cson_parse_opt const * opt, cson_parse_info * err ) +{ + if( !src || !tgt ) return cson_rc.ArgError; + else + { + FILE * f = fopen(src, "r"); + if( !f ) return cson_rc.IOError; + else + { + int const rc = cson_parse_FILE( tgt, f, opt, err ); + fclose(f); + return rc; + } + } +} + +/** Internal type to hold state for a JSON input string. + */ +typedef struct cson_data_source_StringSource_ +{ + /** Start of input string. */ + char const * str; + /** Current iteration position. Must initially be == str. */ + char const * pos; + /** Logical EOF, one-past-the-end of str. */ + char const * end; +} cson_data_source_StringSource_t; + +/** + A cson_data_source_f() implementation which requires the state argument + to be a properly populated (cson_data_source_StringSource_t*). +*/ +static int cson_data_source_StringSource( void * state, void * dest, unsigned int * n ) +{ + if( !state || !n || !dest ) return cson_rc.ArgError; + else if( !*n ) return 0 /* ignore this */; + else + { + unsigned int i; + cson_data_source_StringSource_t * ss = (cson_data_source_StringSource_t*) state; + unsigned char * tgt = (unsigned char *)dest; + for( i = 0; (i < *n) && (ss->pos < ss->end); ++i, ++ss->pos, ++tgt ) + { + *tgt = *ss->pos; + } + *n = i; + return 0; + } +} + +int cson_parse_string( cson_value ** tgt, char const * src, unsigned int len, + cson_parse_opt const * opt, cson_parse_info * err ) +{ + if( ! tgt || !src ) return cson_rc.ArgError; + else if( !*src || (len<2/*2==len of {} and []*/) ) return cson_rc.RangeError; + else + { + cson_data_source_StringSource_t ss; + ss.str = ss.pos = src; + ss.end = src + len; + return cson_parse( tgt, cson_data_source_StringSource, &ss, opt, err ); + } + +} + +int cson_parse_buffer( cson_value ** tgt, + cson_buffer const * buf, + cson_parse_opt const * opt, + cson_parse_info * err ) +{ + return ( !tgt || !buf || !buf->mem || !buf->used ) + ? cson_rc.ArgError + : cson_parse_string( tgt, (char const *)buf->mem, + buf->used, opt, err ); +} + +int cson_buffer_reserve( cson_buffer * buf, cson_size_t n ) +{ + if( ! buf ) return cson_rc.ArgError; + else if( 0 == n ) + { + cson_free(buf->mem, "cson_buffer::mem"); + *buf = cson_buffer_empty; + return 0; + } + else if( buf->capacity >= n ) + { + return 0; + } + else + { + unsigned char * x = (unsigned char *)cson_realloc( buf->mem, n, "cson_buffer::mem" ); + if( ! 
x ) return cson_rc.AllocError; + memset( x + buf->used, 0, n - buf->used ); + buf->mem = x; + buf->capacity = n; + ++buf->timesExpanded; + return 0; + } +} + +cson_size_t cson_buffer_fill( cson_buffer * buf, char c ) +{ + if( !buf || !buf->capacity || !buf->mem ) return 0; + else + { + memset( buf->mem, c, buf->capacity ); + return buf->capacity; + } +} + +/** + cson_data_dest_f() implementation, used by cson_output_buffer(). + + arg MUST be a (cson_buffer*). This function appends n bytes at + position arg->used, expanding the buffer as necessary. +*/ +static int cson_data_dest_cson_buffer( void * arg, void const * data_, unsigned int n ) +{ + if( !arg ) return cson_rc.ArgError; + else if( ! n ) return 0; + else + { + cson_buffer * sb = (cson_buffer*)arg; + char const * data = (char const *)data_; + cson_size_t npos = sb->used + n; + unsigned int i; + if( npos >= sb->capacity ) + { + const cson_size_t oldCap = sb->capacity; + const cson_size_t asz = npos * 2; + if( asz < npos ) return cson_rc.ArgError; /* overflow */ + else if( 0 != cson_buffer_reserve( sb, asz ) ) return cson_rc.AllocError; + assert( (sb->capacity > oldCap) && "Internal error in memory buffer management!" ); + /* make sure it gets NUL terminated. */ + memset( sb->mem + oldCap, 0, (sb->capacity - oldCap) ); + } + for( i = 0; i < n; ++i, ++sb->used ) + { + sb->mem[sb->used] = data[i]; + } + return 0; + } +} + + +int cson_output_buffer( cson_value const * v, cson_buffer * buf, + cson_output_opt const * opt ) +{ + int rc = cson_output( v, cson_data_dest_cson_buffer, buf, opt ); + if( 0 == rc ) + { /* Ensure that the buffer is null-terminated. */ + rc = cson_buffer_reserve( buf, buf->used + 1 ); + if( 0 == rc ) + { + buf->mem[buf->used] = 0; + } + } + return rc; +} + +/** @internal + +Tokenizes an input string on a given separator. Inputs are: + +- (inp) = is a pointer to the pointer to the start of the input. + +- (separator) = the separator character + +- (end) = a pointer to NULL. i.e. (*end == NULL) + +This function scans *inp for the given separator char or a NUL char. +Successive separators at the start of *inp are skipped. The effect is +that, when this function is called in a loop, all neighboring +separators are ignored. e.g. the string "aa.bb...cc" will tokenize to +the list (aa,bb,cc) if the separator is '.' and to (aa.,...cc) if the +separator is 'b'. + +Returns 0 (false) if it finds no token, else non-0 (true). + +Output: + +- (*inp) will be set to the first character of the next token. + +- (*end) will point to the one-past-the-end point of the token. + +If (*inp == *end) then the end of the string has been reached +without finding a token. + +Post-conditions: + +- (*end == *inp) if no token is found. + +- (*end > *inp) if a token is found. + +It is intolerant of NULL values for (inp, end), and will assert() in +debug builds if passed NULL as either parameter. +*/ +static char cson_next_token( char const ** inp, char separator, char const ** end ) +{ + char const * pos = NULL; + assert( inp && end && *inp ); + if( *inp == *end ) return 0; + pos = *inp; + if( !*pos ) + { + *end = pos; + return 0; + } + for( ; *pos && (*pos == separator); ++pos) { /* skip preceeding splitters */ } + *inp = pos; + for( ; *pos && (*pos != separator); ++pos) { /* find next splitter */ } + *end = pos; + return (pos > *inp) ? 1 : 0; +} + +int cson_object_fetch_sub2( cson_object const * obj, cson_value ** tgt, char const * path ) +{ + if( ! 
obj || !path ) return cson_rc.ArgError; + else if( !*path || !*(1+path) ) return cson_rc.RangeError; + else return cson_object_fetch_sub(obj, tgt, path+1, *path); +} + +int cson_object_fetch_sub( cson_object const * obj, cson_value ** tgt, char const * path, char sep ) +{ + if( ! obj || !path ) return cson_rc.ArgError; + else if( !*path || !sep ) return cson_rc.RangeError; + else + { + char const * beg = path; + char const * end = NULL; + int rc; + unsigned int i, len; + unsigned int tokenCount = 0; + cson_value * cv = NULL; + cson_object const * curObj = obj; + enum { BufSize = 128 }; + char buf[BufSize]; + memset( buf, 0, BufSize ); + + while( cson_next_token( &beg, sep, &end ) ) + { + if( beg == end ) break; + else + { + ++tokenCount; + beg = end; + end = NULL; + } + } + if( 0 == tokenCount ) return cson_rc.RangeError; + beg = path; + end = NULL; + for( i = 0; i < tokenCount; ++i, beg=end, end=NULL ) + { + rc = cson_next_token( &beg, sep, &end ); + assert( 1 == rc ); + assert( beg != end ); + assert( end > beg ); + len = end - beg; + if( len > (BufSize-1) ) return cson_rc.RangeError; + memset( buf, 0, len + 1 ); + memcpy( buf, beg, len ); + buf[len] = 0; + cv = cson_object_get( curObj, buf ); + if( NULL == cv ) return cson_rc.NotFoundError; + else if( i == (tokenCount-1) ) + { + if(tgt) *tgt = cv; + return 0; + } + else if( cson_value_is_object(cv) ) + { + curObj = cson_value_get_object(cv); + assert((NULL != curObj) && "Detected mis-management of internal memory!"); + } + /* TODO: arrays. Requires numeric parsing for the index. */ + else + { + return cson_rc.NotFoundError; + } + } + assert( i == tokenCount ); + return cson_rc.NotFoundError; + } +} + +cson_value * cson_object_get_sub( cson_object const * obj, char const * path, char sep ) +{ + cson_value * v = NULL; + cson_object_fetch_sub( obj, &v, path, sep ); + return v; +} + +cson_value * cson_object_get_sub2( cson_object const * obj, char const * path ) +{ + cson_value * v = NULL; + cson_object_fetch_sub2( obj, &v, path ); + return v; +} + + +/** + If v is-a Object or Array then this function returns a deep + clone, otherwise it returns v. In either case, the refcount + of the returned value is increased by 1 by this call. 
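+
+ For example (an illustrative sketch; obj is assumed to be an existing
+ cson_object*): after the following, the clone's "i" member refers to
+ the very same integer value instance as obj's, whereas any nested
+ Object/Array members would have been copied recursively:
+
+ @code
+ cson_value * iV = cson_value_new_integer( 42 );
+ cson_object_set( obj, "i", iV ); // obj takes over a reference
+ cson_value * copy = cson_value_clone( cson_object_value( obj ) );
+ @endcode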
+*/ +static cson_value * cson_value_clone_ref( cson_value * v ) +{ + cson_value * rc = NULL; +#define TRY_SHARING 1 +#if TRY_SHARING + if(!v ) return rc; + else if( cson_value_is_object(v) + || cson_value_is_array(v)) + { + rc = cson_value_clone( v ); + } + else + { + rc = v; + } +#else + rc = cson_value_clone(v); +#endif +#undef TRY_SHARING + cson_value_add_reference(rc); + return rc; +} + +static cson_value * cson_value_clone_array( cson_value const * orig ) +{ + unsigned int i = 0; + cson_array const * asrc = cson_value_get_array( orig ); + unsigned int alen = cson_array_length_get( asrc ); + cson_value * destV = NULL; + cson_array * destA = NULL; + assert( orig && asrc ); + destV = cson_value_new_array(); + if( NULL == destV ) return NULL; + destA = cson_value_get_array( destV ); + assert( destA ); + if( 0 != cson_array_reserve( destA, alen ) ) + { + cson_value_free( destV ); + return NULL; + } + for( ; i < alen; ++i ) + { + cson_value * ch = cson_array_get( asrc, i ); + if( NULL != ch ) + { + cson_value * cl = cson_value_clone_ref( ch ); + if( NULL == cl ) + { + cson_value_free( destV ); + return NULL; + } + if( 0 != cson_array_set( destA, i, cl ) ) + { + cson_value_free( cl ); + cson_value_free( destV ); + return NULL; + } + cson_value_free(cl)/*remove our artificial reference */; + } + } + return destV; +} + +static cson_value * cson_value_clone_object( cson_value const * orig ) +{ + cson_object const * src = cson_value_get_object( orig ); + cson_value * destV = NULL; + cson_object * dest = NULL; + cson_kvp const * kvp = NULL; + cson_object_iterator iter = cson_object_iterator_empty; + assert( orig && src ); + if( 0 != cson_object_iter_init( src, &iter ) ) + { + return NULL; + } + destV = cson_value_new_object(); + if( NULL == destV ) return NULL; + dest = cson_value_get_object( destV ); + assert( dest ); + if( src->kvp.count > cson_kvp_list_reserve( &dest->kvp, src->kvp.count ) ){ + cson_value_free( destV ); + return NULL; + } + while( (kvp = cson_object_iter_next( &iter )) ) + { + cson_value * key = NULL; + cson_value * val = NULL; + assert( kvp->key && (kvp->key->refcount>0) ); + key = cson_value_clone_ref(kvp->key); + val = key ? cson_value_clone_ref(kvp->value) : NULL; + if( ! key || !val ){ + goto error; + } + assert( CSON_STR(key) ); + if( 0 != cson_object_set_s( dest, CSON_STR(key), val ) ) + { + goto error; + } + /* remove our references */ + cson_value_free(key); + cson_value_free(val); + continue; + error: + cson_value_free(key); + cson_value_free(val); + cson_value_free(destV); + destV = NULL; + break; + } + return destV; +} + +cson_value * cson_value_clone( cson_value const * orig ) +{ + if( NULL == orig ) return NULL; + else + { + switch( orig->api->typeID ) + { + case CSON_TYPE_UNDEF: + assert(0 && "This should never happen."); + return NULL; + case CSON_TYPE_NULL: + return cson_value_null(); + case CSON_TYPE_BOOL: + return cson_value_new_bool( cson_value_get_bool( orig ) ); + case CSON_TYPE_INTEGER: + return cson_value_new_integer( cson_value_get_integer( orig ) ); + break; + case CSON_TYPE_DOUBLE: + return cson_value_new_double( cson_value_get_double( orig ) ); + break; + case CSON_TYPE_STRING: { + cson_string const * str = cson_value_get_string( orig ); + return cson_value_new_string( cson_string_cstr( str ), + cson_string_length_bytes( str ) ); + } + case CSON_TYPE_ARRAY: + return cson_value_clone_array( orig ); + case CSON_TYPE_OBJECT: + return cson_value_clone_object( orig ); + } + assert( 0 && "We can't get this far." 
); + return NULL; + } +} + +cson_value * cson_string_value(cson_string const * s) +{ +#define MT CSON_SPECIAL_VALUES[CSON_VAL_STR_EMPTY] + return s + ? ((s==MT.value) ? &MT : CSON_VCAST(s)) + : NULL; +#undef MT +} + +cson_value * cson_object_value(cson_object const * s) +{ + return s + ? CSON_VCAST(s) + : NULL; +} + + +cson_value * cson_array_value(cson_array const * s) +{ + return s + ? CSON_VCAST(s) + : NULL; +} + +void cson_free_object(cson_object *x) +{ + if(x) cson_value_free(cson_object_value(x)); +} +void cson_free_array(cson_array *x) +{ + if(x) cson_value_free(cson_array_value(x)); +} + +void cson_free_string(cson_string *x) +{ + if(x) cson_value_free(cson_string_value(x)); +} +void cson_free_value(cson_value *x) +{ + if(x) cson_value_free(x); +} + + +#if 0 +/* i'm not happy with this... */ +char * cson_pod_to_string( cson_value const * orig ) +{ + if( ! orig ) return NULL; + else + { + enum { BufSize = 64 }; + char * v = NULL; + switch( orig->api->typeID ) + { + case CSON_TYPE_BOOL: { + char const bv = cson_value_get_bool(orig); + v = cson_strdup( bv ? "true" : "false", + bv ? 4 : 5 ); + break; + } + case CSON_TYPE_UNDEF: + case CSON_TYPE_NULL: { + v = cson_strdup( "null", 4 ); + break; + } + case CSON_TYPE_STRING: { + cson_string const * jstr = cson_value_get_string(orig); + unsigned const int slen = cson_string_length_bytes( jstr ); + assert( NULL != jstr ); + v = cson_strdup( cson_string_cstr( jstr ), slen ); + break; + } + case CSON_TYPE_INTEGER: { + char buf[BufSize] = {0}; + if( 0 < sprintf( v, "%"CSON_INT_T_PFMT, cson_value_get_integer(orig)) ) + { + v = cson_strdup( buf, strlen(buf) ); + } + break; + } + case CSON_TYPE_DOUBLE: { + char buf[BufSize] = {0}; + if( 0 < sprintf( v, "%"CSON_DOUBLE_T_PFMT, cson_value_get_double(orig)) ) + { + v = cson_strdup( buf, strlen(buf) ); + } + break; + } + default: + break; + } + return v; + } +} +#endif + +#if 0 +/* i'm not happy with this... */ +char * cson_pod_to_string( cson_value const * orig ) +{ + if( ! orig ) return NULL; + else + { + enum { BufSize = 64 }; + char * v = NULL; + switch( orig->api->typeID ) + { + case CSON_TYPE_BOOL: { + char const bv = cson_value_get_bool(orig); + v = cson_strdup( bv ? "true" : "false", + bv ? 
4 : 5 ); + break; + } + case CSON_TYPE_UNDEF: + case CSON_TYPE_NULL: { + v = cson_strdup( "null", 4 ); + break; + } + case CSON_TYPE_STRING: { + cson_string const * jstr = cson_value_get_string(orig); + unsigned const int slen = cson_string_length_bytes( jstr ); + assert( NULL != jstr ); + v = cson_strdup( cson_string_cstr( jstr ), slen ); + break; + } + case CSON_TYPE_INTEGER: { + char buf[BufSize] = {0}; + if( 0 < sprintf( v, "%"CSON_INT_T_PFMT, cson_value_get_integer(orig)) ) + { + v = cson_strdup( buf, strlen(buf) ); + } + break; + } + case CSON_TYPE_DOUBLE: { + char buf[BufSize] = {0}; + if( 0 < sprintf( v, "%"CSON_DOUBLE_T_PFMT, cson_value_get_double(orig)) ) + { + v = cson_strdup( buf, strlen(buf) ); + } + break; + } + default: + break; + } + return v; + } +} +#endif + +unsigned int cson_value_msize(cson_value const * v) +{ + if(!v) return 0; + else if( cson_value_is_builtin(v) ) return 0; + else { + unsigned int rc = sizeof(cson_value); + assert(NULL != v->api); + switch(v->api->typeID){ + case CSON_TYPE_INTEGER: + assert( v != &CSON_SPECIAL_VALUES[CSON_VAL_INT_0]); + rc += sizeof(cson_int_t); + break; + case CSON_TYPE_DOUBLE: + assert( v != &CSON_SPECIAL_VALUES[CSON_VAL_DBL_0]); + rc += sizeof(cson_double_t); + break; + case CSON_TYPE_STRING: + rc += sizeof(cson_string) + + CSON_STR(v)->length + 1/*NUL*/; + break; + case CSON_TYPE_ARRAY:{ + cson_array const * ar = CSON_ARRAY(v); + cson_value_list const * li; + unsigned int i = 0; + assert( NULL != ar ); + li = &ar->list; + rc += sizeof(cson_array) + + (li->alloced * sizeof(cson_value *)); + for( ; i < li->count; ++i ){ + cson_value const * e = ar->list.list[i]; + if( e ) rc += cson_value_msize( e ); + } + break; + } + case CSON_TYPE_OBJECT:{ + cson_object const * obj = CSON_OBJ(v); + unsigned int i = 0; + cson_kvp_list const * kl; + assert(NULL != obj); + kl = &obj->kvp; + rc += sizeof(cson_object) + + (kl->alloced * sizeof(cson_kvp*)); + for( ; i < kl->count; ++i ){ + cson_kvp const * kvp = kl->list[i]; + assert(NULL != kvp); + rc += cson_value_msize(kvp->key); + rc += cson_value_msize(kvp->value); + } + break; + } + case CSON_TYPE_UNDEF: + case CSON_TYPE_NULL: + case CSON_TYPE_BOOL: + assert( 0 && "Should have been caught by is-builtin check!" 
); + break; + default: + assert(0 && "Invalid typeID!"); + return 0; +#undef RCCHECK + } + return rc; + } +} + +int cson_object_merge( cson_object * dest, cson_object const * src, int flags ){ + cson_object_iterator iter = cson_object_iterator_empty; + int rc; + char const replace = (flags & CSON_MERGE_REPLACE); + char const recurse = !(flags & CSON_MERGE_NO_RECURSE); + cson_kvp const * kvp; + if((!dest || !src) || (dest==src)) return cson_rc.ArgError; + rc = cson_object_iter_init( src, &iter ); + if(rc) return rc; + while( (kvp = cson_object_iter_next(&iter) ) ) + { + cson_string * key = cson_kvp_key(kvp); + cson_value * val = cson_kvp_value(kvp); + cson_value * check = cson_object_get_s( dest, key ); + if(!check){ + cson_object_set_s( dest, key, val ); + continue; + } + else if(!replace && !recurse) continue; + else if(replace && !recurse){ + cson_object_set_s( dest, key, val ); + continue; + } + else if( recurse ){ + if( cson_value_is_object(check) && + cson_value_is_object(val) ){ + rc = cson_object_merge( cson_value_get_object(check), + cson_value_get_object(val), + flags ); + if(rc) return rc; + else continue; + } + else continue; + } + else continue; + } + return 0; +} + +static cson_value * cson_guess_arg_type(char const *arg){ + char * end = NULL; + if(!arg || !*arg) return cson_value_null(); + else if(('0'>*arg) || ('9'<*arg)){ + goto do_string; + } + else{ /* try numbers... */ + long const val = strtol(arg, &end, 10); + if(!*end){ + return cson_value_new_integer( (cson_int_t)val); + } + else if( '.' != *end ) { + goto do_string; + } + else { + double const val = strtod(arg, &end); + if(!*end){ + return cson_value_new_double(val); + } + } + } + do_string: + return cson_value_new_string(arg, strlen(arg)); +} + + +int cson_parse_argv_flags( int argc, char const * const * argv, + cson_object ** tgt, unsigned int * count ){ + cson_object * o = NULL; + int rc = 0; + int i = 0; + if(argc<1 || !argc || !tgt) return cson_rc.ArgError; + o = *tgt ? *tgt : cson_new_object(); + if(count) *count = 0; + for( i = 0; i < argc; ++i ){ + char const * arg = argv[i]; + char const * key = arg; + char const * pos; + cson_string * k = NULL; + cson_value * v = NULL; + if('-' != *arg) continue; + while('-'==*key) ++key; + if(!*key) continue; + pos = key; + while( *pos && ('=' != *pos)) ++pos; + k = cson_new_string(key, pos-key); + if(!k){ + rc = cson_rc.AllocError; + break; + } + if(!*pos){ /** --key */ + v = cson_value_true(); + }else{ /** --key=...*/ + assert('=' == *pos); + ++pos /*skip '='*/; + v = cson_guess_arg_type(pos); + } + if(0 != (rc=cson_object_set_s(o, k, v))){ + cson_free_string(k); + cson_value_free(v); + break; + } + else if(count) ++*count; + } + if(o != *tgt){ + if(rc) cson_free_object(o); + else *tgt = o; + } + return rc; +} + +#if defined(__cplusplus) +} /*extern "C"*/ +#endif + +#undef MARKER +#undef CSON_OBJECT_PROPS_SORT +#undef CSON_OBJECT_PROPS_SORT_USE_LENGTH +#undef CSON_CAST +#undef CSON_INT +#undef CSON_DBL +#undef CSON_STR +#undef CSON_OBJ +#undef CSON_ARRAY +#undef CSON_VCAST +#undef CSON_MALLOC_IMPL +#undef CSON_FREE_IMPL +#undef CSON_REALLOC_IMPL +/* end file ./cson.c */ +/* begin file ./cson_lists.h */ +/* Auto-generated from cson_list.h. Edit at your own risk! 
*/ +unsigned int cson_value_list_reserve( cson_value_list * self, unsigned int n ) +{ + if( !self ) return 0; + else if(0 == n) + { + if(0 == self->alloced) return 0; + cson_free(self->list, "cson_value_list_reserve"); + self->list = NULL; + self->alloced = self->count = 0; + return 0; + } + else if( self->alloced >= n ) + { + return self->alloced; + } + else + { + size_t const sz = sizeof(cson_value *) * n; + cson_value * * m = (cson_value **)cson_realloc( self->list, sz, "cson_value_list_reserve" ); + if( ! m ) return self->alloced; + + memset( m + self->alloced, 0, (sizeof(cson_value *)*(n-self->alloced))); + self->alloced = n; + self->list = m; + return n; + } +} +int cson_value_list_append( cson_value_list * self, cson_value * cp ) +{ + if( !self || !cp ) return cson_rc.ArgError; + else if( self->alloced > cson_value_list_reserve(self, self->count+1) ) + { + return cson_rc.AllocError; + } + else + { + self->list[self->count++] = cp; + return 0; + } +} +int cson_value_list_visit( cson_value_list * self, + + int (*visitor)(cson_value * obj, void * visitorState ), + + + + void * visitorState ) +{ + int rc = cson_rc.ArgError; + if( self && visitor ) + { + unsigned int i = 0; + for( rc = 0; (i < self->count) && (0 == rc); ++i ) + { + + cson_value * obj = self->list[i]; + + + + if(obj) rc = visitor( obj, visitorState ); + } + } + return rc; +} +void cson_value_list_clean( cson_value_list * self, + + void (*cleaner)(cson_value * obj) + + + + ) +{ + if( self && cleaner && self->count ) + { + unsigned int i = 0; + for( ; i < self->count; ++i ) + { + + cson_value * obj = self->list[i]; + + + + if(obj) cleaner(obj); + } + } + cson_value_list_reserve(self,0); +} +unsigned int cson_kvp_list_reserve( cson_kvp_list * self, unsigned int n ) +{ + if( !self ) return 0; + else if(0 == n) + { + if(0 == self->alloced) return 0; + cson_free(self->list, "cson_kvp_list_reserve"); + self->list = NULL; + self->alloced = self->count = 0; + return 0; + } + else if( self->alloced >= n ) + { + return self->alloced; + } + else + { + size_t const sz = sizeof(cson_kvp *) * n; + cson_kvp * * m = (cson_kvp **)cson_realloc( self->list, sz, "cson_kvp_list_reserve" ); + if( ! m ) return self->alloced; + + memset( m + self->alloced, 0, (sizeof(cson_kvp *)*(n-self->alloced))); + self->alloced = n; + self->list = m; + return n; + } +} +int cson_kvp_list_append( cson_kvp_list * self, cson_kvp * cp ) +{ + if( !self || !cp ) return cson_rc.ArgError; + else if( self->alloced > cson_kvp_list_reserve(self, self->count+1) ) + { + return cson_rc.AllocError; + } + else + { + self->list[self->count++] = cp; + return 0; + } +} +int cson_kvp_list_visit( cson_kvp_list * self, + + int (*visitor)(cson_kvp * obj, void * visitorState ), + + + + void * visitorState ) +{ + int rc = cson_rc.ArgError; + if( self && visitor ) + { + unsigned int i = 0; + for( rc = 0; (i < self->count) && (0 == rc); ++i ) + { + + cson_kvp * obj = self->list[i]; + + + + if(obj) rc = visitor( obj, visitorState ); + } + } + return rc; +} +void cson_kvp_list_clean( cson_kvp_list * self, + + void (*cleaner)(cson_kvp * obj) + + + + ) +{ + if( self && cleaner && self->count ) + { + unsigned int i = 0; + for( ; i < self->count; ++i ) + { + + cson_kvp * obj = self->list[i]; + + + + if(obj) cleaner(obj); + } + } + cson_kvp_list_reserve(self,0); +} +/* end file ./cson_lists.h */ +/* begin file ./cson_sqlite3.c */ +/** @file cson_sqlite3.c + +This file contains the implementation code for the cson +sqlite3-to-JSON API. + +License: the same as the cson core library. 
+ +Author: Stephan Beal (http://wanderinghorse.net/home/stephan) +*/ +#if CSON_ENABLE_SQLITE3 /* we do this here for the sake of the amalgamation build */ +#include +#include /* strlen() */ + +#if 0 +#include +#define MARKER if(1) printf("MARKER: %s:%d:%s():\t",__FILE__,__LINE__,__func__); if(1) printf +#else +#define MARKER if(0) printf +#endif + +#if defined(__cplusplus) +extern "C" { +#endif + +cson_value * cson_sqlite3_column_to_value( sqlite3_stmt * st, int col ) +{ + if( ! st ) return NULL; + else + { +#if 0 + sqlite3_value * val = sqlite3_column_type(st,col); + int const vtype = val ? sqlite3_value_type(val) : -1; + if( ! val ) return cson_value_null(); +#else + int const vtype = sqlite3_column_type(st,col); +#endif + switch( vtype ) + { + case SQLITE_NULL: + return cson_value_null(); + case SQLITE_INTEGER: + /* FIXME: for large integers fall back to Double instead. */ + return cson_value_new_integer( (cson_int_t) sqlite3_column_int64(st, col) ); + case SQLITE_FLOAT: + return cson_value_new_double( sqlite3_column_double(st, col) ); + case SQLITE_BLOB: /* arguably fall through... */ + case SQLITE_TEXT: { + char const * str = (char const *)sqlite3_column_text(st,col); + return cson_value_new_string(str, str ? strlen(str) : 0); + } + default: + return NULL; + } + } +} + +cson_value * cson_sqlite3_column_names( sqlite3_stmt * st ) +{ + cson_value * aryV = NULL; + cson_array * ary = NULL; + char const * colName = NULL; + int i = 0; + int rc = 0; + int colCount = 0; + assert(st); + colCount = sqlite3_column_count(st); + if( colCount <= 0 ) return NULL; + + aryV = cson_value_new_array(); + if( ! aryV ) return NULL; + ary = cson_value_get_array(aryV); + assert(ary); + for( i = 0; (0 ==rc) && (i < colCount); ++i ) + { + colName = sqlite3_column_name( st, i ); + if( ! colName ) rc = cson_rc.AllocError; + else + { + rc = cson_array_set( ary, (unsigned int)i, + cson_value_new_string(colName, strlen(colName)) ); + } + } + if( 0 == rc ) return aryV; + else + { + cson_value_free(aryV); + return NULL; + } +} + + +cson_value * cson_sqlite3_row_to_object2( sqlite3_stmt * st, + cson_array * colNames ) +{ + cson_value * rootV = NULL; + cson_object * root = NULL; + cson_string * colName = NULL; + int i = 0; + int rc = 0; + cson_value * currentValue = NULL; + int const colCount = sqlite3_column_count(st); + if( !colCount || (colCount>cson_array_length_get(colNames)) ) { + return NULL; + } + rootV = cson_value_new_object(); + if(!rootV) return NULL; + root = cson_value_get_object(rootV); + for( i = 0; i < colCount; ++i ) + { + colName = cson_value_get_string( cson_array_get( colNames, i ) ); + if( ! colName ) goto error; + currentValue = cson_sqlite3_column_to_value(st,i); + if( ! 
currentValue ) currentValue = cson_value_null(); + rc = cson_object_set_s( root, colName, currentValue ); + if( 0 != rc ) + { + cson_value_free( currentValue ); + goto error; + } + } + goto end; + error: + cson_value_free( rootV ); + rootV = NULL; + end: + return rootV; +} + + +cson_value * cson_sqlite3_row_to_object( sqlite3_stmt * st ) +{ +#if 0 + cson_value * arV = cson_sqlite3_column_names(st); + cson_array * ar = NULL; + cson_value * rc = NULL; + if(!arV) return NULL; + ar = cson_value_get_array(arV); + assert( NULL != ar ); + rc = cson_sqlite3_row_to_object2(st, ar); + cson_value_free(arV); + return rc; +#else + cson_value * rootV = NULL; + cson_object * root = NULL; + char const * colName = NULL; + int i = 0; + int rc = 0; + cson_value * currentValue = NULL; + int const colCount = sqlite3_column_count(st); + if( !colCount ) return NULL; + rootV = cson_value_new_object(); + if(!rootV) return NULL; + root = cson_value_get_object(rootV); + for( i = 0; i < colCount; ++i ) + { + colName = sqlite3_column_name( st, i ); + if( ! colName ) goto error; + currentValue = cson_sqlite3_column_to_value(st,i); + if( ! currentValue ) currentValue = cson_value_null(); + rc = cson_object_set( root, colName, currentValue ); + if( 0 != rc ) + { + cson_value_free( currentValue ); + goto error; + } + } + goto end; + error: + cson_value_free( rootV ); + rootV = NULL; + end: + return rootV; +#endif +} + +cson_value * cson_sqlite3_row_to_array( sqlite3_stmt * st ) +{ + cson_value * aryV = NULL; + cson_array * ary = NULL; + int i = 0; + int rc = 0; + int const colCount = sqlite3_column_count(st); + if( ! colCount ) return NULL; + aryV = cson_value_new_array(); + if( ! aryV ) return NULL; + ary = cson_value_get_array(aryV); + rc = cson_array_reserve(ary, (unsigned int) colCount ); + if( 0 != rc ) goto error; + + for( i = 0; i < colCount; ++i ){ + cson_value * elem = cson_sqlite3_column_to_value(st,i); + if( ! elem ) goto error; + rc = cson_array_append(ary,elem); + if(0!=rc) + { + cson_value_free( elem ); + goto end; + } + } + goto end; + error: + cson_value_free(aryV); + aryV = NULL; + end: + return aryV; +} + + +/** + Internal impl of cson_sqlite3_stmt_to_json() when the 'fat' + parameter is non-0. +*/ +static int cson_sqlite3_stmt_to_json_fat( sqlite3_stmt * st, cson_value ** tgt ) +{ +#define RETURN(RC) { if(rootV) cson_value_free(rootV); return RC; } + if( ! tgt || !st ) return cson_rc.ArgError; + else + { + cson_value * rootV = NULL; + cson_object * root = NULL; + cson_value * colsV = NULL; + cson_array * cols = NULL; + cson_value * rowsV = NULL; + cson_array * rows = NULL; + cson_value * objV = NULL; + int rc = 0; + int const colCount = sqlite3_column_count(st); + if( colCount <= 0 ) return cson_rc.ArgError; + rootV = cson_value_new_object(); + if( ! rootV ) return cson_rc.AllocError; + colsV = cson_sqlite3_column_names(st); + if( ! colsV ) + { + cson_value_free( rootV ); + RETURN(cson_rc.AllocError); + } + cols = cson_value_get_array(colsV); + assert(NULL != cols); + root = cson_value_get_object(rootV); + rc = cson_object_set( root, "columns", colsV ); + if( rc ) + { + cson_value_free( colsV ); + RETURN(rc); + } + rowsV = cson_value_new_array(); + if( ! rowsV ) RETURN(cson_rc.AllocError); + rc = cson_object_set( root, "rows", rowsV ); + if( rc ) + { + cson_value_free( rowsV ); + RETURN(rc); + } + rows = cson_value_get_array(rowsV); + assert(rows); + while( SQLITE_ROW == sqlite3_step(st) ) + { + objV = cson_sqlite3_row_to_object2(st, cols); + if( ! 
objV ) RETURN(cson_rc.UnknownError); + rc = cson_array_append( rows, objV ); + if( rc ) + { + cson_value_free( objV ); + RETURN(rc); + } + } + *tgt = rootV; + return 0; + } +#undef RETURN +} + +/** + Internal impl of cson_sqlite3_stmt_to_json() when the 'fat' + parameter is 0. +*/ +static int cson_sqlite3_stmt_to_json_slim( sqlite3_stmt * st, cson_value ** tgt ) +{ +#define RETURN(RC) { if(rootV) cson_value_free(rootV); return RC; } + if( ! tgt || !st ) return cson_rc.ArgError; + else + { + cson_value * rootV = NULL; + cson_object * root = NULL; + cson_value * aryV = NULL; + cson_value * rowsV = NULL; + cson_array * rows = NULL; + int rc = 0; + int const colCount = sqlite3_column_count(st); + if( colCount <= 0 ) return cson_rc.ArgError; + rootV = cson_value_new_object(); + if( ! rootV ) return cson_rc.AllocError; + aryV = cson_sqlite3_column_names(st); + if( ! aryV ) + { + cson_value_free( rootV ); + RETURN(cson_rc.AllocError); + } + root = cson_value_get_object(rootV); + rc = cson_object_set( root, "columns", aryV ); + if( rc ) + { + cson_value_free( aryV ); + RETURN(rc); + } + aryV = NULL; + rowsV = cson_value_new_array(); + if( ! rowsV ) RETURN(cson_rc.AllocError); + rc = cson_object_set( root, "rows", rowsV ); + if( 0 != rc ) + { + cson_value_free( rowsV ); + RETURN(rc); + } + rows = cson_value_get_array(rowsV); + assert(rows); + while( SQLITE_ROW == sqlite3_step(st) ) + { + aryV = cson_sqlite3_row_to_array(st); + if( ! aryV ) RETURN(cson_rc.UnknownError); + rc = cson_array_append( rows, aryV ); + if( 0 != rc ) + { + cson_value_free( aryV ); + RETURN(rc); + } + } + *tgt = rootV; + return 0; + } +#undef RETURN +} + +int cson_sqlite3_stmt_to_json( sqlite3_stmt * st, cson_value ** tgt, char fat ) +{ + return fat + ? cson_sqlite3_stmt_to_json_fat(st,tgt) + : cson_sqlite3_stmt_to_json_slim(st,tgt) + ; +} + +int cson_sqlite3_sql_to_json( sqlite3 * db, cson_value ** tgt, char const * sql, char fat ) +{ + if( !db || !tgt || !sql || !*sql ) return cson_rc.ArgError; + else + { + sqlite3_stmt * st = NULL; + int rc = sqlite3_prepare_v2( db, sql, -1, &st, NULL ); + if( 0 != rc ) return cson_rc.IOError /* FIXME: Better error code? */; + rc = cson_sqlite3_stmt_to_json( st, tgt, fat ); + sqlite3_finalize( st ); + return rc; + } +} + +int cson_sqlite3_bind_value( sqlite3_stmt * st, int ndx, cson_value const * v ) +{ + int rc = 0; + char convertErr = 0; + if(!st) return cson_rc.ArgError; + else if( ndx < 1 ) { + rc = cson_rc.RangeError; + } + else if( cson_value_is_array(v) ){ + cson_array * ar = cson_value_get_array(v); + unsigned int len = cson_array_length_get(ar); + unsigned int i; + assert(NULL != ar); + for( i = 0; !rc && (i < len); ++i ){ + rc = cson_sqlite3_bind_value( st, (int)i+ndx, + cson_array_get(ar, i)); + } + } + else if(!v || cson_value_is_null(v)){ + rc = sqlite3_bind_null(st,ndx); + convertErr = 1; + } + else if( cson_value_is_double(v) ){ + rc = sqlite3_bind_double( st, ndx, cson_value_get_double(v) ); + convertErr = 1; + } + else if( cson_value_is_bool(v) ){ + rc = sqlite3_bind_int( st, ndx, cson_value_get_bool(v) ? 
1 : 0 ); + convertErr = 1; + } + else if( cson_value_is_integer(v) ){ + rc = sqlite3_bind_int64( st, ndx, cson_value_get_integer(v) ); + convertErr = 1; + } + else if( cson_value_is_string(v) ){ + cson_string const * s = cson_value_get_string(v); + rc = sqlite3_bind_text( st, ndx, + cson_string_cstr(s), + cson_string_length_bytes(s), + SQLITE_TRANSIENT); + convertErr = 1; + } + else { + rc = cson_rc.TypeError; + } + if(convertErr && rc) switch(rc){ + case SQLITE_TOOBIG: + case SQLITE_RANGE: rc = cson_rc.RangeError; break; + case SQLITE_NOMEM: rc = cson_rc.AllocError; break; + case SQLITE_IOERR: rc = cson_rc.IOError; break; + default: rc = cson_rc.UnknownError; break; + }; + return rc; +} + + +#if defined(__cplusplus) +} /*extern "C"*/ +#endif +#undef MARKER +#endif /* CSON_ENABLE_SQLITE3 */ +/* end file ./cson_sqlite3.c */ +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/cson_amalgamation.h Index: src/cson_amalgamation.h ================================================================== --- src/cson_amalgamation.h +++ src/cson_amalgamation.h @@ -0,0 +1,2616 @@ +#ifdef FOSSIL_ENABLE_JSON +#ifndef CSON_FOSSIL_MODE +#define CSON_FOSSIL_MODE +#endif +/* auto-generated! Do not edit! */ +/* begin file include/wh/cson/cson.h */ +#if !defined(WANDERINGHORSE_NET_CSON_H_INCLUDED) +#define WANDERINGHORSE_NET_CSON_H_INCLUDED 1 + +/*#include C99: fixed-size int types. */ +#include /* FILE decl */ + +/** @page page_cson cson JSON API + +cson (pronounced "season") is an object-oriented C API for generating +and consuming JSON (http://www.json.org) data. + +Its main claim to fame is that it can parse JSON from, and output it +to, damned near anywhere. The i/o routines use a callback function to +fetch/emit JSON data, allowing clients to easily plug in their own +implementations. Implementations are provided for string- and +FILE-based i/o. + +Project home page: http://fossil.wanderinghorse.net/repos/cson + +Author: Stephan Beal (http://www.wanderinghorse.net/home/stephan/) + +License: Dual Public Domain/MIT + +The full license text is at the bottom of the main header file +(cson.h). + +Examples of how to use the library are scattered throughout +the API documentation, in the test.c file in the source repo, +and in the wiki on the project's home page. + + +*/ + +#if defined(__cplusplus) +extern "C" { +#endif + +#if defined(_WIN32) || defined(_WIN64) +# define CSON_ENABLE_UNIX 0 +#else +# define CSON_ENABLE_UNIX 1 +#endif + + +/** @typedef some_long_int_type cson_int_t + +Typedef for JSON-like integer types. This is (long long) where feasible, +otherwise (long). +*/ +#ifdef _WIN32 +typedef __int64 cson_int_t; +#define CSON_INT_T_SFMT "I64d" +#define CSON_INT_T_PFMT "I64d" +#elif (__STDC_VERSION__ >= 199901L) || (HAVE_LONG_LONG == 1) +typedef long long cson_int_t; +#define CSON_INT_T_SFMT "lld" +#define CSON_INT_T_PFMT "lld" +#else +typedef long cson_int_t; +#define CSON_INT_T_SFMT "ld" +#define CSON_INT_T_PFMT "ld" +#endif + +/** @typedef double_or_long_double cson_double_t + + This is the type of double value used by the library. + It is only lightly tested with long double, and when using + long double the memory requirements for such values goes + up. + + Note that by default cson uses C-API defaults for numeric + precision. To use a custom precision throughout the library, one + needs to define the macros CSON_DOUBLE_T_SFMT and/or + CSON_DOUBLE_T_PFMT macros to include their desired precision, and + must build BOTH cson AND the client using these same values. 
For + example: + + @code + #define CSON_DOUBLE_T_PFMT ".8Lf" // for Modified Julian Day values + #define HAVE_LONG_DOUBLE + @endcode + + (Only CSON_DOUBLE_T_PFTM should be needed for most + purposes.) +*/ + +#if defined(HAVE_LONG_DOUBLE) + typedef long double cson_double_t; +# ifndef CSON_DOUBLE_T_SFMT +# define CSON_DOUBLE_T_SFMT "Lf" +# endif +# ifndef CSON_DOUBLE_T_PFMT +# define CSON_DOUBLE_T_PFMT "Lf" +# endif +#else + typedef double cson_double_t; +# ifndef CSON_DOUBLE_T_SFMT +# define CSON_DOUBLE_T_SFMT "f" +# endif +# ifndef CSON_DOUBLE_T_PFMT +# define CSON_DOUBLE_T_PFMT "f" +# endif +#endif + +/** @def CSON_VOID_PTR_IS_BIG + +ONLY define this to a true value if you know that + +(sizeof(cson_int_t) <= sizeof(void*)) + +If that is the case, cson does not need to dynamically +allocate integers. However, enabling this may cause +compilation warnings in 32-bit builds even though the code +being warned about cannot ever be called. To get around such +warnings, when building on a 64-bit environment you can define +this to 1 to get "big" integer support. HOWEVER, all clients must +also use the same value for this macro. If i knew a halfway reliable +way to determine this automatically at preprocessor-time, i would +automate this. We might be able to do halfway reliably by looking +for a large INT_MAX value? +*/ +#if !defined(CSON_VOID_PTR_IS_BIG) + +/* Largely taken from http://predef.sourceforge.net/prearch.html + +See also: http://poshlib.hookatooka.com/poshlib/trac.cgi/browser/posh.h +*/ +# if defined(_WIN64) || defined(__LP64__)/*gcc*/ \ + || defined(_M_X64) || defined(__amd64__) || defined(__amd64) \ + || defined(__x86_64__) || defined(__x86_64) \ + || defined(__ia64__) || defined(__ia64) || defined(_IA64) || defined(__IA64__) \ + || defined(_M_IA64) \ + || defined(__sparc_v9__) || defined(__sparcv9) || defined(_ADDR64) \ + || defined(__64BIT__) +# define CSON_VOID_PTR_IS_BIG 1 +# else +# define CSON_VOID_PTR_IS_BIG 0 +# endif +#endif + +/** @def CSON_INT_T_SFMT + +scanf()-compatible format token for cson_int_t. +*/ + +/** @def CSON_INT_T_PFMT + +printf()-compatible format token for cson_int_t. +*/ + + +/** @def CSON_DOUBLE_T_SFMT + +scanf()-compatible format token for cson_double_t. +*/ + +/** @def CSON_DOUBLE_T_PFMT + +printf()-compatible format token for cson_double_t. +*/ + +/** + Type IDs corresponding to JavaScript/JSON types. + + These are only in the public API to allow O(1) client-side + dispatching based on cson_value types. +*/ +enum cson_type_id { + /** + The special "undefined" value constant. + + Its value must be 0 for internal reasons. + */ + CSON_TYPE_UNDEF = 0, + /** + The special "null" value constant. + */ + CSON_TYPE_NULL = 1, + /** + The bool value type. + */ + CSON_TYPE_BOOL = 2, + /** + The integer value type, represented in this library + by cson_int_t. + */ + CSON_TYPE_INTEGER = 3, + /** + The double value type, represented in this library + by cson_double_t. + */ + CSON_TYPE_DOUBLE = 4, + /** The immutable string type. This library stores strings + as immutable UTF8. + */ + CSON_TYPE_STRING = 5, + /** The "Array" type. */ + CSON_TYPE_ARRAY = 6, + /** The "Object" type. */ + CSON_TYPE_OBJECT = 7 +}; +/** + Convenience typedef. +*/ +typedef enum cson_type_id cson_type_id; + + +/** + Convenience typedef. +*/ +typedef struct cson_value cson_value; + +/** @struct cson_value + + The core value type of this API. It is opaque to clients, and + only the cson public API should be used for setting or + inspecting their values. 
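+
+   A minimal lifecycle sketch (error checking elided):
+
+   @code
+   cson_value * v = cson_value_new_integer( 42 );
+   // ... use v, or insert it into a container ...
+   cson_value_free( v );
+   @endcode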
+ + This class is opaque because stack-based usage can easily cause + leaks if one does not intimately understand the underlying + internal memory management (which sometimes changes). + + It is (as of 20110323) legal to insert a given value instance into + multiple containers (they will share ownership using reference + counting) as long as those insertions do not cause cycles. However, + be very aware that such value re-use uses a reference to the + original copy, meaning that if its value is changed once, it is + changed everywhere. Also beware that multi-threaded write + operations on such references leads to undefined behaviour. + + PLEASE read the ACHTUNGEN below... + + ACHTUNG #1: + + cson_values MUST NOT form cycles (e.g. via object or array + entries). + + Not abiding th Holy Law Of No Cycles will lead to double-frees and + the like (i.e. undefined behaviour, likely crashes due to infinite + recursion or stepping on invalid (freed) pointers). + + ACHTUNG #2: + + ALL cson_values returned as non-const cson_value pointers from any + public functions in the cson API are to be treated as if they are + heap-allocated, and MUST be freed by client by doing ONE of: + + - Passing it to cson_value_free(). + + - Adding it to an Object or Array, in which case the object/array + takes over ownership. As of 20110323, a value may be inserted into + a single container multiple times, or into multiple containers, + in which case they all share ownership (via reference counting) + of the original value (meaning any changes to it are visible in + all references to it). + + Each call to cson_value_new_xxx() MUST eventually be followed up + by one of those options. + + Some cson_value_new_XXX() implementations do not actually allocate + memory, but this is an internal implementation detail. Client code + MUST NOT rely on this behaviour and MUST treat each object + returned by such a function as if it was a freshly-allocated copy + (even if their pointer addresses are the same). + + ACHTUNG #3: + + Note that ACHTUNG #2 tells us that we must always free (or transfer + ownership of) all pointers returned bycson_value_new_xxx(), but + that two calls to (e.g.) cson_value_new_bool(1) will (or might) + return the same address. The client must not rely on the + "non-allocation" policy of such special cases, and must pass each + returned value to cson_value_free(), even if two of them have the + same address. Some special values (e.g. null, true, false, integer + 0, double 0.0, and empty strings) use shared copies and in other + places reference counting is used internally to figure out when it + is safe to destroy an object. + + + @see cson_value_new_array() + @see cson_value_new_object() + @see cson_value_new_string() + @see cson_value_new_integer() + @see cson_value_new_double() + @see cson_value_new_bool() + @see cson_value_true() + @see cson_value_false() + @see cson_value_null() + @see cson_value_free() + @see cson_value_type_id() +*/ + +/** @var cson_rc + + This object defines the error codes used by cson. + + Library routines which return int values almost always return a + value from this structure. None of the members in this struct have + published values except for the OK member, which has the value 0. + All other values might be incidentally defined where clients + can see them, but the numbers might change from release to + release, so clients should only use the symbolic names. 
+ + Client code is expected to access these values via the shared + cson_rc object, and use them as demonstrated here: + + @code + int rc = cson_some_func(...); + if( 0 == rc ) {...success...} + else if( cson_rc.ArgError == rc ) { ... some argument was wrong ... } + else if( cson_rc.AllocError == rc ) { ... allocation error ... } + ... + @endcode + + The entries named Parse_XXX are generally only returned by + cson_parse() and friends. +*/ + +/** @struct cson_rc_ + See \ref cson_rc for details. +*/ +static const struct cson_rc_ +{ + /** The generic success value. Guaranteed to be 0. */ + const int OK; + /** Signifies an error in one or more arguments (e.g. NULL where it is not allowed). */ + const int ArgError; + /** Signifies that some argument is not in a valid range. */ + const int RangeError; + /** Signifies that some argument is not of the correct logical cson type. */ + const int TypeError; + /** Signifies an input/ouput error. */ + const int IOError; + /** Signifies an out-of-memory error. */ + const int AllocError; + /** Signifies that the called code is "NYI" (Not Yet Implemented). */ + const int NYIError; + /** Signifies that an internal error was triggered. If it happens, please report this as a bug! */ + const int InternalError; + /** Signifies that the called operation is not supported in the + current environment. e.g. missing support from 3rd-party or + platform-specific code. + */ + const int UnsupportedError; + /** + Signifies that the request resource could not be found. + */ + const int NotFoundError; + /** + Signifies an unknown error, possibly because an underlying + 3rd-party API produced an error and we have no other reasonable + error code to convert it to. + */ + const int UnknownError; + /** + Signifies that the parser found an unexpected character. + */ + const int Parse_INVALID_CHAR; + /** + Signifies that the parser found an invalid keyword (possibly + an unquoted string). + */ + const int Parse_INVALID_KEYWORD; + /** + Signifies that the parser found an invalid escape sequence. + */ + const int Parse_INVALID_ESCAPE_SEQUENCE; + /** + Signifies that the parser found an invalid Unicode character + sequence. + */ + const int Parse_INVALID_UNICODE_SEQUENCE; + /** + Signifies that the parser found an invalid numeric token. + */ + const int Parse_INVALID_NUMBER; + /** + Signifies that the parser reached its maximum defined + parsing depth before finishing the input. + */ + const int Parse_NESTING_DEPTH_REACHED; + /** + Signifies that the parser found an unclosed object or array. + */ + const int Parse_UNBALANCED_COLLECTION; + /** + Signifies that the parser found an key in an unexpected place. + */ + const int Parse_EXPECTED_KEY; + /** + Signifies that the parser expected to find a colon but + found none (e.g. between keys and values in an object). + */ + const int Parse_EXPECTED_COLON; +} cson_rc = { +0/*OK*/, +1/*ArgError*/, +2/*RangeError*/, +3/*TypeError*/, +4/*IOError*/, +5/*AllocError*/, +6/*NYIError*/, +7/*InternalError*/, +8/*UnsupportedError*/, +9/*NotFoundError*/, +10/*UnknownError*/, +11/*Parse_INVALID_CHAR*/, +12/*Parse_INVALID_KEYWORD*/, +13/*Parse_INVALID_ESCAPE_SEQUENCE*/, +14/*Parse_INVALID_UNICODE_SEQUENCE*/, +15/*Parse_INVALID_NUMBER*/, +16/*Parse_NESTING_DEPTH_REACHED*/, +17/*Parse_UNBALANCED_COLLECTION*/, +18/*Parse_EXPECTED_KEY*/, +19/*Parse_EXPECTED_COLON*/ +}; + +/** + Returns the string form of the cson_rc code corresponding to rc, or + some unspecified, non-NULL string if it is an unknown code. 
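+
+   For example, an error path might be reported like this (an
+   illustrative sketch; fprintf()/stderr come from the standard C
+   stdio facilities and myValue is a placeholder):
+
+   @code
+   int rc = cson_output_FILE( myValue, stdout, NULL );
+   if( 0 != rc ) {
+       fprintf( stderr, "JSON output failed: %s\n", cson_rc_string(rc) );
+   }
+   @endcode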
+ + The returned bytes are static and do not changing during the + lifetime of the application. +*/ +char const * cson_rc_string(int rc); + +/** @struct cson_parse_opt + Client-configurable options for the cson_parse() family of + functions. +*/ +struct cson_parse_opt +{ + /** + Maximum object/array depth to traverse. + */ + unsigned short maxDepth; + /** + Whether or not to allow C-style comments. Do not rely on this + option being available. If the underlying parser is replaced, + this option might no longer be supported. + */ + char allowComments; +}; +typedef struct cson_parse_opt cson_parse_opt; + +/** + Empty-initialized cson_parse_opt object. +*/ +#define cson_parse_opt_empty_m { 25/*maxDepth*/, 0/*allowComments*/} + + +/** + A class for holding JSON parser information. It is primarily + intended for finding the position of a parse error. +*/ +struct cson_parse_info +{ + /** + 1-based line number. + */ + unsigned int line; + /** + 0-based column number. + */ + unsigned int col; + + /** + Length, in bytes. + */ + unsigned int length; + + /** + Error code of the parse run (0 for no error). + */ + int errorCode; + + /** + The total number of object keys successfully processed by the + parser. + */ + unsigned int totalKeyCount; + + /** + The total number of object/array values successfully processed + by the parser, including the root node. + */ + unsigned int totalValueCount; +}; +typedef struct cson_parse_info cson_parse_info; + +/** + Empty-initialized cson_parse_info object. +*/ +#define cson_parse_info_empty_m {1/*line*/,\ + 0/*col*/, \ + 0/*length*/, \ + 0/*errorCode*/, \ + 0/*totalKeyCount*/, \ + 0/*totalValueCount*/ \ + } +/** + Empty-initialized cson_parse_info object. +*/ +extern const cson_parse_info cson_parse_info_empty; + +/** + Empty-initialized cson_parse_opt object. +*/ +extern const cson_parse_opt cson_parse_opt_empty; + +/** + Client-configurable options for the cson_output() family of + functions. +*/ +struct cson_output_opt +{ + /** + Specifies how to indent (or not) output. The values + are: + + (0) == no extra indentation. + + (1) == 1 TAB character for each level. + + (>1) == that number of SPACES for each level. + */ + unsigned char indentation; + + /** + Maximum object/array depth to traverse. Traversing deeply can + be indicative of cycles in the object/array tree, and this + value is used to figure out when to abort the traversal. + */ + unsigned short maxDepth; + + /** + If true, a newline will be added to generated output, + else not. + */ + char addNewline; + + /** + If true, a space will be added after the colon operator + in objects' key/value pairs. + */ + char addSpaceAfterColon; + + /** + If set to 1 then objects/arrays containing only a single value + will not indent an extra level for that value (but will indent + on subsequent levels if that value contains multiple values). + */ + char indentSingleMemberValues; + + /** + The JSON format allows, but does not require, JSON generators + to backslash-escape forward slashes. This option enables/disables + that feature. According to JSON's inventor, Douglas Crockford: + + + It is allowed, not required. It is allowed so that JSON can be + safely embedded in HTML, which can freak out when seeing + strings containing " + + (from an email on 2011-04-08) + + The default value is 0 (because it's just damned ugly). + */ + char escapeForwardSlashes; +}; +typedef struct cson_output_opt cson_output_opt; + +/** + Empty-initialized cson_output_opt object. 
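+
+   Clients typically copy an empty-initialized object and then adjust
+   only the fields they care about, e.g. (sketch; myValue is a
+   placeholder):
+
+   @code
+   cson_output_opt opt = cson_output_opt_empty;
+   opt.indentation = 2; // two spaces per level
+   opt.addNewline = 1;
+   rc = cson_output_FILE( myValue, stdout, &opt );
+   @endcode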
+*/ +#define cson_output_opt_empty_m { 0/*indentation*/,\ + 25/*maxDepth*/, \ + 0/*addNewline*/, \ + 0/*addSpaceAfterColon*/, \ + 0/*indentSingleMemberValues*/, \ + 0/*escapeForwardSlashes*/ \ + } + +/** + Empty-initialized cson_output_opt object. +*/ +extern const cson_output_opt cson_output_opt_empty; + +/** + Typedef for functions which act as an input source for + the cson JSON parser. + + The arguments are: + + - state: implementation-specific state needed by the function. + + - n: when called, *n will be the number of bytes the function + should read and copy to dest. The function MUST NOT copy more than + *n bytes to dest. Before returning, *n must be set to the number of + bytes actually copied to dest. If that number is smaller than the + original *n value, the input is assumed to be completed (thus this + is not useful with non-blocking readers). + + - dest: the destination memory to copy the data do. + + Must return 0 on success, non-0 on error (preferably a value from + cson_rc). + + The parser allows this routine to return a partial character from a + UTF multi-byte character. The input routine does not need to + concern itself with character boundaries. +*/ +typedef int (*cson_data_source_f)( void * state, void * dest, unsigned int * n ); + +/** + Typedef for functions which act as an output destination for + generated JSON. + + The arguments are: + + - state: implementation-specific state needed by the function. + + - n: the length, in bytes, of src. + + - src: the source bytes which the output function should consume. + The src pointer will be invalidated shortly after this function + returns, so the implementation must copy or ignore the data, but not + hold a copy of the src pointer. + + Must return 0 on success, non-0 on error (preferably a value from + cson_rc). + + These functions are called relatively often during the JSON-output + process, and should try to be fast. +*/ +typedef int (*cson_data_dest_f)( void * state, void const * src, unsigned int n ); + +/** + Reads JSON-formatted string data (in ASCII, UTF8, or UTF16), using the + src function to fetch all input. This function fetches each input character + from the source function, which is calls like src(srcState, buffer, bufferSize), + and processes them. If anything is not JSON-kosher then this function + fails and returns one of the non-0 cson_rc codes. + + This function is only intended to read root nodes of a JSON tree, either + a single object or a single array, containing any number of child elements. + + On success, *tgt is assigned the value of the root node of the + JSON input, and the caller takes over ownership of that memory. + On error, *tgt is not modified and the caller need not do any + special cleanup, except possibly for the input source. + + + The opt argument may point to an initialized cson_parse_opt object + which contains any settings the caller wants. If it is NULL then + default settings (the values defined in cson_parse_opt_empty) are + used. + + The info argument may be NULL. If it is not NULL then the parser + populates it with information which is useful in error + reporting. Namely, it contains the line/column of parse errors. + + The srcState argument is ignored by this function but is passed on to src, + so any output-destination-specific state can be stored there and accessed + via the src callback. 
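+
+   Most clients will use one of the convenience wrappers rather than
+   calling this function directly. A minimal sketch (json is assumed to
+   be a NUL-terminated string holding a top-level JSON object or array):
+
+   @code
+   cson_value * root = NULL;
+   int rc = cson_parse_string( &root, json, strlen(json), NULL, NULL );
+   if( 0 == rc ) {
+       // ... use root ...
+       cson_value_free( root );
+   }
+   @endcode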
+ + Non-parse error conditions include: + + - (!tgt) or !src: cson_rc.ArgError + - cson_rc.AllocError can happen at any time during the input phase + + Here's a complete example of using a custom input source: + + @code + // Internal type to hold state for a JSON input string. + typedef struct + { + char const * str; // start of input string + char const * pos; // current internal cursor position + char const * end; // logical EOF (one-past-the-end) + } StringSource; + + // cson_data_source_f() impl which uses StringSource. + static int cson_data_source_StringSource( void * state, void * dest, + unsigned int * n ) + { + StringSource * ss = (StringSource*) state; + unsigned int i; + unsigned char * tgt = (unsigned char *)dest; + if( ! ss || ! n || !dest ) return cson_rc.ArgError; + else if( !*n ) return cson_rc.RangeError; + for( i = 0; + (i < *n) && (ss->pos < ss->end); + ++i, ++ss->pos, ++tgt ) + { + *tgt = *ss->pos; + } + *n = i; + return 0; + } + + ... + // Now use StringSource together with cson_parse() + StringSource ss; + cson_value * root = NULL; + char const * json = "{\"k1\":123}"; + ss.str = ss.pos = json; + ss.end = json + strlen(json); + int rc = cson_parse( &root, cson_data_source_StringSource, &ss, NULL, NULL ); + @endcode + + It is recommended that clients wrap such utility code into + type-safe wrapper functions which also initialize the internal + state object and check the user-provided parameters for legality + before passing them on to cson_parse(). For examples of this, see + cson_parse_FILE() or cson_parse_string(). + + TODOs: + + - Buffer the input in larger chunks. We currently read + byte-by-byte, but i'm too tired to write/test the looping code for + the buffering. + + @see cson_parse_FILE() + @see cson_parse_string() +*/ +int cson_parse( cson_value ** tgt, cson_data_source_f src, void * srcState, + cson_parse_opt const * opt, cson_parse_info * info ); +/** + A cson_data_source_f() implementation which requires the state argument + to be a readable (FILE*) handle. +*/ +int cson_data_source_FILE( void * state, void * dest, unsigned int * n ); + +/** + Equivalent to cson_parse( tgt, cson_data_source_FILE, src, opt ). + + @see cson_parse_filename() +*/ +int cson_parse_FILE( cson_value ** tgt, FILE * src, + cson_parse_opt const * opt, cson_parse_info * info ); + +/** + Convenience wrapper around cson_parse_FILE() which opens the given filename. + + Returns cson_rc.IOError if the file cannot be opened. + + @see cson_parse_FILE() +*/ +int cson_parse_filename( cson_value ** tgt, char const * src, + cson_parse_opt const * opt, cson_parse_info * info ); + +/** + Uses an internal helper class to pass src through cson_parse(). + See that function for the return value and argument semantics. + + src must be a string containing JSON code, at least len bytes long, + and the parser will attempt to parse exactly len bytes from src. + + If len is less than 2 (the minimum length of a legal top-node JSON + object) then cson_rc.RangeError is returned. +*/ +int cson_parse_string( cson_value ** tgt, char const * src, unsigned int len, + cson_parse_opt const * opt, cson_parse_info * info ); + + + +/** + Outputs the given value as a JSON-formatted string, sending all + output to the given callback function. It is intended for top-level + objects or arrays, but can be used with any cson_value. + + If opt is NULL then default options (the values defined in + cson_output_opt_empty) are used. + + If opt->maxDepth is exceeded while traversing the value tree, + cson_rc.RangeError is returned. 
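+
+   To collect the generated JSON in memory instead of streaming it, the
+   cson_buffer-based wrapper can be used (sketch; see cson_output_buffer()):
+
+   @code
+   cson_buffer buf = cson_buffer_empty;
+   rc = cson_output_buffer( myValue, &buf, NULL );
+   // on success buf.mem holds buf.used bytes of NUL-terminated JSON
+   cson_buffer_reserve( &buf, 0 ); // frees buf.mem
+   @endcode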
+ + The destState parameter is ignored by this function and is passed + on to the dest function. + + Returns 0 on success. On error, any amount of output might have been + generated before the error was triggered. + + Example: + + @code + int rc = cson_output( myValue, cson_data_dest_FILE, stdout, NULL ); + // basically equivalent to: cson_output_FILE( myValue, stdout, NULL ); + // but note that cson_output_FILE() actually uses different defaults + // for the output options. + @endcode +*/ +int cson_output( cson_value const * src, cson_data_dest_f dest, void * destState, cson_output_opt const * opt ); + + +/** + A cson_data_dest_f() implementation which requires the state argument + to be a writable (FILE*) handle. +*/ +int cson_data_dest_FILE( void * state, void const * src, unsigned int n ); + +/** + Almost equivalent to cson_output( src, cson_data_dest_FILE, dest, opt ), + with one minor difference: if opt is NULL then the default options + always include the addNewline option, since that is normally desired + for FILE output. + + @see cson_output_filename() +*/ +int cson_output_FILE( cson_value const * src, FILE * dest, cson_output_opt const * opt ); +/** + Convenience wrapper around cson_output_FILE() which writes to the given filename, destroying + any existing contents. Returns cson_rc.IOError if the file cannot be opened. + + @see cson_output_FILE() +*/ +int cson_output_filename( cson_value const * src, char const * dest, cson_output_opt const * fmt ); + +/** + Returns the virtual type of v, or CSON_TYPE_UNDEF if !v. +*/ +cson_type_id cson_value_type_id( cson_value const * v ); + + +/** Returns true if v is null, v->api is NULL, or v holds the special undefined value. */ +char cson_value_is_undef( cson_value const * v ); +/** Returns true if v contains a null value. */ +char cson_value_is_null( cson_value const * v ); +/** Returns true if v contains a bool value. */ +char cson_value_is_bool( cson_value const * v ); +/** Returns true if v contains an integer value. */ +char cson_value_is_integer( cson_value const * v ); +/** Returns true if v contains a double value. */ +char cson_value_is_double( cson_value const * v ); +/** Returns true if v contains a number (double, integer) value. */ +char cson_value_is_number( cson_value const * v ); +/** Returns true if v contains a string value. */ +char cson_value_is_string( cson_value const * v ); +/** Returns true if v contains an array value. */ +char cson_value_is_array( cson_value const * v ); +/** Returns true if v contains an object value. */ +char cson_value_is_object( cson_value const * v ); + +/** @struct cson_object + + cson_object is an opaque handle to an Object value. + + They are used like: + + @code + cson_object * obj = cson_value_get_object(myValue); + ... + @endcode + + They can be created like: + + @code + cson_value * objV = cson_value_new_object(); + cson_object * obj = cson_value_get_object(objV); + // obj is owned by objV and objV must eventually be freed + // using cson_value_free() or added to a container + // object/array (which transfers ownership to that container). + @endcode + + @see cson_value_new_object() + @see cson_value_get_object() + @see cson_value_free() +*/ + +typedef struct cson_object cson_object; + +/** @struct cson_array + + cson_array is an opaque handle to an Array value. + + They are used like: + + @code + cson_array * obj = cson_value_get_array(myValue); + ... 
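+   // For example (an illustrative sketch using only routines
+   // declared in this file): check the length and fetch the
+   // first element, if any.
+   unsigned int len = cson_array_length_get( obj );
+   cson_value * first = (len > 0) ? cson_array_get( obj, 0 ) : NULL;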
+ @endcode + + They can be created like: + + @code + cson_value * arV = cson_value_new_array(); + cson_array * ar = cson_value_get_array(arV); + // ar is owned by arV and arV must eventually be freed + // using cson_value_free() or added to a container + // object/array (which transfers ownership to that container). + @endcode + + @see cson_value_new_array() + @see cson_value_get_array() + @see cson_value_free() + +*/ +typedef struct cson_array cson_array; + +/** @struct cson_string + + cson-internal string type, opaque to client code. Strings in cson + are immutable and allocated only by library internals, never + directly by client code. + + The actual string bytes are to be allocated together in the same + memory chunk as the cson_string object, which saves us 1 malloc() + and 1 pointer member in this type (because we no longer have a + direct pointer to the memory). + + Potential TODOs: + + @see cson_string_cstr() +*/ +typedef struct cson_string cson_string; + +/** + Converts the given value to a boolean, using JavaScript semantics depending + on the concrete type of val: + + undef or null: false + + boolean: same + + integer, double: 0 or 0.0 == false, else true + + object, array: true + + string: length-0 string is false, else true. + + Returns 0 on success and assigns *v (if v is not NULL) to either 0 or 1. + On error (val is NULL) then v is not modified. +*/ +int cson_value_fetch_bool( cson_value const * val, char * v ); + +/** + Similar to cson_value_fetch_bool(), but fetches an integer value. + + The conversion, if any, depends on the concrete type of val: + + NULL, null, undefined: *v is set to 0 and 0 is returned. + + string, object, array: *v is set to 0 and + cson_rc.TypeError is returned. The error may normally be safely + ignored, but it is provided for those wanted to know whether a direct + conversion was possible. + + integer: *v is set to the int value and 0 is returned. + + double: *v is set to the value truncated to int and 0 is returned. +*/ +int cson_value_fetch_integer( cson_value const * val, cson_int_t * v ); + +/** + The same conversions and return values as + cson_value_fetch_integer(), except that the roles of int/double are + swapped. +*/ +int cson_value_fetch_double( cson_value const * val, cson_double_t * v ); + +/** + If cson_value_is_string(val) then this function assigns *str to the + contents of the string. str may be NULL, in which case this function + functions like cson_value_is_string() but returns 0 on success. + + Returns 0 if val is-a string, else non-0, in which case *str is not + modified. + + The bytes are owned by the given value and may be invalidated in any of + the following ways: + + - The value is cleaned up or freed. + + - An array or object containing the value peforms a re-allocation + (it shrinks or grows). + + And thus the bytes should be consumed before any further operations + on val or any container which holds it. + + Note that this routine does not convert non-String values to their + string representations. (Adding that ability would add more + overhead to every cson_value instance.) +*/ +int cson_value_fetch_string( cson_value const * val, cson_string ** str ); + +/** + If cson_value_is_object(val) then this function assigns *obj to the underlying + object value and returns 0, otherwise non-0 is returned and *obj is not modified. + + obj may be NULL, in which case this function works like cson_value_is_object() + but with inverse return value semantics (0==success) (and it's a few + CPU cycles slower). 
+ + The *obj pointer is owned by val, and will be invalidated when val + is cleaned up. + + Achtung: for best results, ALWAYS pass a pointer to NULL as the + second argument, e.g.: + + @code + cson_object * obj = NULL; + int rc = cson_value_fetch_object( val, &obj ); + + // Or, more simply: + obj = cson_value_get_object( val ); + @endcode + + @see cson_value_get_object() +*/ +int cson_value_fetch_object( cson_value const * val, cson_object ** obj ); + +/** + Identical to cson_value_fetch_object(), but works on array values. + + @see cson_value_get_array() +*/ +int cson_value_fetch_array( cson_value const * val, cson_array ** tgt ); + +/** + Simplified form of cson_value_fetch_bool(). Returns 0 if val + is NULL. +*/ +char cson_value_get_bool( cson_value const * val ); + +/** + Simplified form of cson_value_fetch_integer(). Returns 0 if val + is NULL. +*/ +cson_int_t cson_value_get_integer( cson_value const * val ); + +/** + Simplified form of cson_value_fetch_double(). Returns 0.0 if val + is NULL. +*/ +cson_double_t cson_value_get_double( cson_value const * val ); + +/** + Simplified form of cson_value_fetch_string(). Returns NULL if val + is-not-a string value. +*/ +cson_string * cson_value_get_string( cson_value const * val ); + +/** + Returns a pointer to the NULL-terminated string bytes of str. + The bytes are owned by string and will be invalided when it + is cleaned up. + + If str is NULL then NULL is returned. If the string has a length + of 0 then "" is returned. + + @see cson_string_length_bytes() + @see cson_value_get_string() +*/ +char const * cson_string_cstr( cson_string const * str ); + +/** + Convenience function which returns the string bytes of + the given value if it is-a string, otherwise it returns + NULL. Note that this does no conversion of non-string types + to strings. + + Equivalent to cson_string_cstr(cson_value_get_string(val)). +*/ +char const * cson_value_get_cstr( cson_value const * val ); + +/** + Equivalent to cson_string_cmp_cstr_n(lhs, cson_string_cstr(rhs), cson_string_length_bytes(rhs)). +*/ +int cson_string_cmp( cson_string const * lhs, cson_string const * rhs ); + +/** + Compares lhs to rhs using memcmp()/strcmp() semantics. Generically + speaking it returns a negative number if lhs is less-than rhs, 0 if + they are equivalent, or a positive number if lhs is greater-than + rhs. It has the following rules for equivalence: + + - The maximum number of bytes compared is the lesser of rhsLen and + the length of lhs. If the strings do not match, but compare equal + up to the just-described comparison length, the shorter string is + considered to be less-than the longer one. + + - If lhs and rhs are both NULL, or both have a length of 0 then they will + compare equal. + + - If lhs is null/length-0 but rhs is not then lhs is considered to be less-than + rhs. + + - If rhs is null/length-0 but lhs is not then rhs is considered to be less-than + rhs. + + - i have no clue if the results are exactly correct for UTF strings. + +*/ +int cson_string_cmp_cstr_n( cson_string const * lhs, char const * rhs, unsigned int rhsLen ); + +/** + Equivalent to cson_string_cmp_cstr_n( lhs, rhs, (rhs&&*rhs)?strlen(rhs):0 ). +*/ +int cson_string_cmp_cstr( cson_string const * lhs, char const * rhs ); + +/** + Returns the length, in bytes, of str, or 0 if str is NULL. This is + an O(1) operation. + + TODO: add cson_string_length_chars() (is O(N) unless we add another + member to store the char length). 
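+
+   As an illustrative sketch combining this routine with the string
+   accessors documented above (myValue is a hypothetical placeholder):
+
+   @code
+   cson_string const * s = cson_value_get_string( myValue );
+   if( s ) {
+     char const * cstr = cson_string_cstr( s );
+     unsigned int len = cson_string_length_bytes( s );
+     // cstr is NUL-terminated and owned by myValue; copy it
+     // (e.g. via cson_value_get_string_copy()) if it must
+     // outlive myValue.
+   }
+   @endcode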
+ + @see cson_string_cstr() +*/ +unsigned int cson_string_length_bytes( cson_string const * str ); + +/** + Returns the number of UTF8 characters in str. This value will + be at most as long as cson_string_length_bytes() for the + same string, and less if it has multi-byte characters. + + Returns 0 if str is NULL. +*/ +unsigned int cson_string_length_utf8( cson_string const * str ); + +/** + Like cson_value_get_string(), but returns a copy of the underying + string bytes, which the caller owns and must eventually free + using free(). +*/ +char * cson_value_get_string_copy( cson_value const * val ); + +/** + Simplified form of cson_value_fetch_object(). Returns NULL if val + is-not-a object value. +*/ +cson_object * cson_value_get_object( cson_value const * val ); + +/** + Simplified form of cson_value_fetch_array(). Returns NULL if val + is-not-a array value. +*/ +cson_array * cson_value_get_array( cson_value const * val ); + +/** + Const-correct form of cson_value_get_array(). +*/ +cson_array const * cson_value_get_array_c( cson_value const * val ); + +/** + If ar is-a array and is at least (pos+1) entries long then *v (if v is not NULL) + is assigned to the value at that position (which may be NULL). + + Ownership of the *v return value is unchanged by this call. (The + containing array may share ownership of the value with other + containers.) + + If pos is out of range, non-0 is returned and *v is not modified. + + If v is NULL then this function returns 0 if pos is in bounds, but does not + otherwise return a value to the caller. +*/ +int cson_array_value_fetch( cson_array const * ar, unsigned int pos, cson_value ** v ); + +/** + Simplified form of cson_array_value_fetch() which returns NULL if + ar is NULL, pos is out of bounds or if ar has no element at that + position. +*/ +cson_value * cson_array_get( cson_array const * ar, unsigned int pos ); + +/** + Ensures that ar has allocated space for at least the given + number of entries. This never shrinks the array and never + changes its logical size, but may pre-allocate space in the + array for storing new (as-yet-unassigned) values. + + Returns 0 on success, or non-zero on error: + + - If ar is NULL: cson_rc.ArgError + + - If allocation fails: cson_rc.AllocError +*/ +int cson_array_reserve( cson_array * ar, unsigned int size ); + +/** + If ar is not NULL, sets *v (if v is not NULL) to the length of the array + and returns 0. Returns cson_rc.ArgError if ar is NULL. +*/ +int cson_array_length_fetch( cson_array const * ar, unsigned int * v ); + +/** + Simplified form of cson_array_length_fetch() which returns 0 if ar + is NULL. +*/ +unsigned int cson_array_length_get( cson_array const * ar ); + +/** + Sets the given index of the given array to the given value. + + If ar already has an item at that index then it is cleaned up and + freed before inserting the new item. + + ar is expanded, if needed, to be able to hold at least (ndx+1) + items, and any new entries created by that expansion are empty + (NULL values). + + On success, 0 is returned and ownership of v is transfered to ar. + + On error ownership of v is NOT modified, and the caller may still + need to clean it up. For example, the following code will introduce + a leak if this function fails: + + @code + cson_array_append( myArray, cson_value_new_integer(42) ); + @endcode + + Because the value created by cson_value_new_integer() has no owner + and is not cleaned up. 
The "more correct" way to do this is: + + @code + cson_value * v = cson_value_new_integer(42); + int rc = cson_array_append( myArray, v ); + if( 0 != rc ) { + cson_value_free( v ); + ... handle error ... + } + @endcode + +*/ +int cson_array_set( cson_array * ar, unsigned int ndx, cson_value * v ); + +/** + Appends the given value to the given array, transfering ownership of + v to ar. On error, ownership of v is not modified. Ownership of ar + is never changed by this function. + + This is functionally equivalent to + cson_array_set(ar,cson_array_length_get(ar),v), but this + implementation has slightly different array-preallocation policy + (it grows more eagerly). + + Returns 0 on success, non-zero on error. Error cases include: + + - ar or v are NULL: cson_rc.ArgError + + - Array cannot be expanded to hold enough elements: cson_rc.AllocError. + + - Appending would cause a numeric overlow in the array's size: + cson_rc.RangeError. (However, you'll get an AllocError long before + that happens!) + + On error ownership of v is NOT modified, and the caller may still + need to clean it up. See cson_array_set() for the details. + +*/ +int cson_array_append( cson_array * ar, cson_value * v ); + + +/** + Creates a new cson_value from the given boolean value. + + Ownership of the new value is passed to the caller, who must + eventually either free the value using cson_value_free() or + inserting it into a container (array or object), which transfers + ownership to the container. See the cson_value class documentation + for more details. + + Semantically speaking this function Returns NULL on allocation + error, but the implementation never actually allocates for this + case. Nonetheless, it must be treated as if it were an allocated + value. +*/ +cson_value * cson_value_new_bool( char v ); + + +/** + Alias for cson_value_new_bool(v). +*/ +cson_value * cson_new_bool(char v); + +/** + Returns the special JSON "null" value. When outputing JSON, + its string representation is "null" (without the quotes). + + See cson_value_new_bool() for notes regarding the returned + value's memory. +*/ +cson_value * cson_value_null(); + +/** + Equivalent to cson_value_new_bool(1). +*/ +cson_value * cson_value_true(); + +/** + Equivalent to cson_value_new_bool(0). +*/ +cson_value * cson_value_false(); + +/** + Semantically the same as cson_value_new_bool(), but for integers. +*/ +cson_value * cson_value_new_integer( cson_int_t v ); + +/** + Alias for cson_value_new_integer(v). +*/ +cson_value * cson_new_int(cson_int_t v); + +/** + Semantically the same as cson_value_new_bool(), but for doubles. +*/ +cson_value * cson_value_new_double( cson_double_t v ); + +/** + Alias for cson_value_new_double(v). +*/ +cson_value * cson_new_double(cson_double_t v); + +/** + Semantically the same as cson_value_new_bool(), but for strings. + This creates a JSON value which copies the first n bytes of str. + The string will automatically be NUL-terminated. + + Note that if str is NULL or n is 0, this function still + returns non-NULL value representing that empty string. + + Returns NULL on allocation error. + + See cson_value_new_bool() for important information about the + returned memory. +*/ +cson_value * cson_value_new_string( char const * str, unsigned int n ); + +/** + Allocates a new "object" value and transfers ownership of it to the + caller. It must eventually be destroyed, by the caller or its + owning container, by passing it to cson_value_free(). + + Returns NULL on allocation error. 
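+
+   An illustrative sketch (the key "answer" is a hypothetical
+   placeholder, and NULL checks are elided for brevity) which
+   builds, outputs, and frees an object using only routines
+   declared in this file:
+
+   @code
+   cson_value * objV = cson_value_new_object();
+   cson_object * obj = cson_value_get_object( objV );
+   cson_value * num = cson_value_new_integer( 42 );
+   if( 0 != cson_object_set( obj, "answer", num ) ) {
+     cson_value_free( num ); // ownership was not transferred
+   }
+   cson_output_FILE( objV, stdout, NULL );
+   cson_value_free( objV ); // also frees any values it contains
+   @endcode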
+ + Post-conditions: cson_value_is_object(value) will return true. + + @see cson_value_new_array() + @see cson_value_free() +*/ +cson_value * cson_value_new_object(); + +/** + This works like cson_value_new_object() but returns an Object + handle directly. + + The value handle for the returned object can be fetched with + cson_object_value(theObject). + + Ownership is transfered to the caller, who must eventually free it + by passing the Value handle (NOT the Object handle) to + cson_value_free() or passing ownership to a parent container. + + Returns NULL on error (out of memory). +*/ +cson_object * cson_new_object(); + +/** + Identical to cson_new_object() except that it creates + an Array. +*/ +cson_array * cson_new_array(); + +/** + Identical to cson_new_object() except that it creates + a String. +*/ +cson_string * cson_new_string(char const * val, unsigned int len); + +/** + Equivalent to cson_value_free(cson_object_value(x)). +*/ +void cson_free_object(cson_object *x); + +/** + Equivalent to cson_value_free(cson_array_value(x)). +*/ +void cson_free_array(cson_array *x); + +/** + Equivalent to cson_value_free(cson_string_value(x)). +*/ +void cson_free_string(cson_string *x); + + +/** + Allocates a new "array" value and transfers ownership of it to the + caller. It must eventually be destroyed, by the caller or its + owning container, by passing it to cson_value_free(). + + Returns NULL on allocation error. + + Post-conditions: cson_value_is_array(value) will return true. + + @see cson_value_new_object() + @see cson_value_free() +*/ +cson_value * cson_value_new_array(); + +/** + Frees any resources owned by v, then frees v. If v is a container + type (object or array) its children are also freed (recursively). + + If v is NULL, this is a no-op. + + This function decrements a reference count and only destroys the + value if its reference count drops to 0. Reference counts are + increased by either inserting the value into a container or via + cson_value_add_reference(). Even if this function does not + immediately destroy the value, the value must be considered, from + the perspective of that client code, to have been + destroyed/invalidated by this call. + + + @see cson_value_new_object() + @see cson_value_new_array() + @see cson_value_add_reference() +*/ +void cson_value_free(cson_value * v); + +/** + Alias for cson_value_free(). +*/ +void cson_free_value(cson_value * v); + + +/** + Functionally similar to cson_array_set(), but uses a string key + as an index. Like arrays, if a value already exists for the given key, + it is destroyed by this function before inserting the new value. + + If v is NULL then this call is equivalent to + cson_object_unset(obj,key). Note that (v==NULL) is treated + differently from v having the special null value. In the latter + case, the key is set to the special null value. + + The key may be encoded as ASCII or UTF8. Results are undefined + with other encodings, and the errors won't show up here, but may + show up later, e.g. during output. + + Returns 0 on success, non-0 on error. It has the following error + cases: + + - cson_rc.ArgError: obj or key are NULL or strlen(key) is 0. + + - cson_rc.AllocError: an out-of-memory error + + On error ownership of v is NOT modified, and the caller may still + need to clean it up. 
For example, the following code will introduce a leak if this function fails:
+
+   @code
+   cson_object_set( myObj, "foo", cson_value_new_integer(42) );
+   @endcode
+
+   Because the value created by cson_value_new_integer() has no owner
+   and is not cleaned up. The "more correct" way to do this is:
+
+   @code
+   cson_value * v = cson_value_new_integer(42);
+   int rc = cson_object_set( myObj, "foo", v );
+   if( 0 != rc ) {
+     cson_value_free( v );
+     ... handle error ...
+   }
+   @endcode
+
+   Potential TODOs:
+
+   - Add an overload which takes a cson_value key instead. To get
+   any value out of that we first need to be able to convert arbitrary
+   value types to strings. We could simply to-JSON them and use those
+   as keys.
+*/
+int cson_object_set( cson_object * obj, char const * key, cson_value * v );
+
+/**
+   Functionally equivalent to cson_object_set(), but takes a
+   cson_string() as its KEY type. The string will be reference-counted
+   like any other value, and the key may legally be used within this
+   same container (as a value) or others (as a key or value) at the
+   same time.
+
+   Returns 0 on success. On error, ownership (i.e. refcounts) of key
+   and value are not modified. On success key and value will get
+   increased refcounts unless they are replacing themselves (which is
+   a harmless no-op).
+*/
+int cson_object_set_s( cson_object * obj, cson_string * key, cson_value * v );
+
+/**
+   Removes a property from an object.
+
+   If obj contains the given key, it is removed and 0 is returned. If
+   it is not found, cson_rc.NotFoundError is returned (which can
+   normally be ignored by client code).
+
+   cson_rc.ArgError is returned if obj or key are NULL or key has
+   a length of 0.
+
+   Returns 0 if the given key is found and removed.
+
+   This is functionally equivalent to calling
+   cson_object_set(obj,key,NULL).
+*/
+int cson_object_unset( cson_object * obj, char const * key );
+
+/**
+   Searches the given object for a property with the given key. If found,
+   it is returned. If no match is found, or any arguments are NULL, NULL is
+   returned. The returned object is owned by obj, and may be invalidated
+   by ANY operations which change obj's property list (i.e. add or remove
+   properties).
+
+   FIXME: allocate the key/value pairs like we do for cson_array,
+   to improve the lifetimes of fetched values.
+
+   @see cson_object_fetch_sub()
+   @see cson_object_get_sub()
+*/
+cson_value * cson_object_get( cson_object const * obj, char const * key );
+
+/**
+   Equivalent to cson_object_get() but takes a cson_string argument
+   instead of a C-style string.
+*/
+cson_value * cson_object_get_s( cson_object const * obj, cson_string const *key );
+
+/**
+   Similar to cson_object_get(), but removes the value from the parent
+   object's ownership. If no item is found then NULL is returned, else
+   the object (now owned by the caller or possibly shared with other
+   containers) is returned.
+
+   Returns NULL if either obj or key are NULL or key has a length
+   of 0.
+
+   This function reduces the returned value's reference count but has
+   the specific property that it does not treat refcounts 0 and 1
+   identically, meaning that the returned object may have a refcount
+   of 0. This behaviour works around a corner case where we want to
+   extract a child element from its parent and then destroy the parent
+   (which would otherwise leave us in a (normally) undesirable
+   reference count state).
+*/
+cson_value * cson_object_take( cson_object * obj, char const * key );
+
+/**
+   Fetches a property from a child (or [great-]*grand-child) object.
+ + obj is the object to search. + + path is a delimited string, where the delimiter is the given + separator character. + + This function searches for the given path, starting at the given object + and traversing its properties as the path specifies. If a given part of the + path is not found, then this function fails with cson_rc.NotFoundError. + + If it finds the given path, it returns the value by assiging *tgt + to it. If tgt is NULL then this function has no side-effects but + will return 0 if the given path is found within the object, so it can be used + to test for existence without fetching it. + + Returns 0 if it finds an entry, cson_rc.NotFoundError if it finds + no item, and any other non-zero error code on a "real" error. Errors include: + + - obj or path are NULL: cson_rc.ArgError + + - separator is 0, or path is an empty string or contains only + separator characters: cson_rc.RangeError + + - There is an upper limit on how long a single path component may + be (some "reasonable" internal size), and cson_rc.RangeError is + returned if that length is violated. + + + Limitations: + + - It has no way to fetch data from arrays this way. i could + imagine, e.g., a path of "subobj.subArray.0" for + subobj.subArray[0], or "0.3.1" for [0][3][1]. But i'm too + lazy/tired to add this. + + Example usage: + + + Assume we have a JSON structure which abstractly looks like: + + @code + {"subobj":{"subsubobj":{"myValue":[1,2,3]}}} + @endcode + + Out goal is to get the value of myValue. We can do that with: + + @code + cson_value * v = NULL; + int rc = cson_object_fetch_sub( object, &v, "subobj.subsubobj.myValue", '.' ); + @endcode + + Note that because keys in JSON may legally contain a '.', the + separator must be specified by the caller. e.g. the path + "subobj/subsubobj/myValue" with separator='/' is equivalent the + path "subobj.subsubobj.myValue" with separator='.'. The value of 0 + is not legal as a separator character because we cannot + distinguish that use from the real end-of-string without requiring + the caller to also pass in the length of the string. + + Multiple successive separators in the list are collapsed into a + single separator for parsing purposes. e.g. the path "a...b...c" + (separator='.') is equivalent to "a.b.c". + + @see cson_object_get_sub() + @see cson_object_get_sub2() +*/ +int cson_object_fetch_sub( cson_object const * obj, cson_value ** tgt, char const * path, char separator ); + +/** + Similar to cson_object_fetch_sub(), but derives the path separator + character from the first byte of the path argument. e.g. the + following arg equivalent: + + @code + cson_object_fetch_sub( obj, &tgt, "foo.bar.baz", '.' ); + cson_object_fetch_sub2( obj, &tgt, ".foo.bar.baz" ); + @endcode +*/ +int cson_object_fetch_sub2( cson_object const * obj, cson_value ** tgt, char const * path ); + +/** + Convenience form of cson_object_fetch_sub() which returns NULL if the given + item is not found. +*/ +cson_value * cson_object_get_sub( cson_object const * obj, char const * path, char sep ); + +/** + Convenience form of cson_object_fetch_sub2() which returns NULL if the given + item is not found. +*/ +cson_value * cson_object_get_sub2( cson_object const * obj, char const * path ); + +/** @enum CSON_MERGE_FLAGS + + Flags for cson_object_merge(). +*/ +enum CSON_MERGE_FLAGS { + CSON_MERGE_DEFAULT = 0, + CSON_MERGE_REPLACE = 0x01, + CSON_MERGE_NO_RECURSE = 0x02 +}; + +/** + "Merges" the src object's properties into dest. 
Each property in + src is copied (using reference counting, not cloning) into dest. If + dest already has the given property then behaviour depends on the + flags argument: + + If flag has the CSON_MERGE_REPLACE bit set then this function will + by default replace non-object properties with the src property. If + src and dest both have the property AND it is an Object then this + function operates recursively on those objects. If + CSON_MERGE_NO_RECURSE is set then objects are not recursed in this + manner, and will be completely replaced if CSON_MERGE_REPLACE is + set. + + Array properties in dest are NOT recursed for merging - they are + either replaced or left as-is, depending on whether flags contains + he CSON_MERGE_REPLACE bit. + + Returns 0 on success. The error conditions are: + + - dest or src are NULL or (dest==src) returns cson_rc.ArgError. + + - dest or src contain cyclic references - this will likely cause a + crash due to endless recursion. + + Potential TODOs: + + - Add a flag to copy clones, not the original values. +*/ +int cson_object_merge( cson_object * dest, cson_object const * src, int flags ); + + +/** + An iterator type for traversing object properties. + + Its values must be considered private, not to be touched by client + code. + + @see cson_object_iter_init() + @see cson_object_iter_next() +*/ +struct cson_object_iterator +{ + + /** @internal + The underlying object. + */ + cson_object const * obj; + /** @internal + Current position in the property list. + */ + unsigned int pos; +}; +typedef struct cson_object_iterator cson_object_iterator; + +/** + Empty-initialized cson_object_iterator object. +*/ +#define cson_object_iterator_empty_m {NULL/*obj*/,0/*pos*/} + +/** + Empty-initialized cson_object_iterator object. +*/ +extern const cson_object_iterator cson_object_iterator_empty; + +/** + Initializes the given iterator to point at the start of obj's + properties. Returns 0 on success or cson_rc.ArgError if !obj + or !iter. + + obj must outlive iter, or results are undefined. Results are also + undefined if obj is modified while the iterator is active. + + @see cson_object_iter_next() +*/ +int cson_object_iter_init( cson_object const * obj, cson_object_iterator * iter ); + +/** @struct cson_kvp + +This class represents a key/value pair and is used for storing +object properties. It is opaque to client code, and the public +API only uses this type for purposes of iterating over cson_object +properties using the cson_object_iterator interfaces. +*/ + +typedef struct cson_kvp cson_kvp; + +/** + Returns the next property from the given iterator's object, or NULL + if the end of the property list as been reached. + + Note that the order of object properties is undefined by the API, + and may change from version to version. + + The returned memory belongs to the underlying object and may be + invalidated by any changes to that object. + + Example usage: + + @code + cson_object_iterator it; + cson_object_iter_init( myObject, &it ); // only fails if either arg is 0 + cson_kvp * kvp; + cson_string const * key; + cson_value const * val; + while( (kvp = cson_object_iter_next(&it) ) ) + { + key = cson_kvp_key(kvp); + val = cson_kvp_value(kvp); + ... + } + @endcode + + There is no need to clean up an iterator, as it holds no dynamic resources. + + @see cson_kvp_key() + @see cson_kvp_value() +*/ +cson_kvp * cson_object_iter_next( cson_object_iterator * iter ); + + +/** + Returns the key associated with the given key/value pair, + or NULL if !kvp. 
The memory is owned by the object which contains + the key/value pair, and may be invalidated by any modifications + to that object. +*/ +cson_string * cson_kvp_key( cson_kvp const * kvp ); + +/** + Returns the value associated with the given key/value pair, + or NULL if !kvp. The memory is owned by the object which contains + the key/value pair, and may be invalidated by any modifications + to that object. +*/ +cson_value * cson_kvp_value( cson_kvp const * kvp ); + +/** @typedef some unsigned int type cson_size_t + +*/ +typedef unsigned int cson_size_t; + +/** + A generic buffer class. + + They can be used like this: + + @code + cson_buffer b = cson_buffer_empty; + int rc = cson_buffer_reserve( &buf, 100 ); + if( 0 != rc ) { ... allocation error ... } + ... use buf.mem ... + ... then free it up ... + cson_buffer_reserve( &buf, 0 ); + @endcode + + To take over ownership of a buffer's memory: + + @code + void * mem = b.mem; + // mem is b.capacity bytes long, but only b.used + // bytes of it has been "used" by the API. + b = cson_buffer_empty; + @endcode + + The memory now belongs to the caller and must eventually be + free()d. +*/ +struct cson_buffer +{ + /** + The number of bytes allocated for this object. + Use cson_buffer_reserve() to change its value. + */ + cson_size_t capacity; + /** + The number of bytes "used" by this object. It is not needed for + all use cases, and management of this value (if needed) is up + to the client. The cson_buffer public API does not use this + member. The intention is that this can be used to track the + length of strings which are allocated via cson_buffer, since + they need an explicit length and/or null terminator. + */ + cson_size_t used; + + /** + This is a debugging/metric-counting value + intended to help certain malloc()-conscious + clients tweak their memory reservation sizes. + Each time cson_buffer_reserve() expands the + buffer, it increments this value by 1. + */ + cson_size_t timesExpanded; + + /** + The memory allocated for and owned by this buffer. + Use cson_buffer_reserve() to change its size or + free it. To take over ownership, do: + + @code + void * myptr = buf.mem; + buf = cson_buffer_empty; + @endcode + + (You might also need to store buf.used and buf.capacity, + depending on what you want to do with the memory.) + + When doing so, the memory must eventually be passed to free() + to deallocate it. + */ + unsigned char * mem; +}; +/** Convenience typedef. */ +typedef struct cson_buffer cson_buffer; + +/** An empty-initialized cson_buffer object. */ +#define cson_buffer_empty_m {0/*capacity*/,0/*used*/,0/*timesExpanded*/,NULL/*mem*/} +/** An empty-initialized cson_buffer object. */ +extern const cson_buffer cson_buffer_empty; + +/** + Uses cson_output() to append all JSON output to the given buffer + object. The semantics for the (v, opt) parameters, and the return + value, are as documented for cson_output(). buf must be a non-NULL + pointer to a properly initialized buffer (see example below). + + Ownership of buf is not changed by calling this. + + On success 0 is returned and the contents of buf.mem are guaranteed + to be NULL-terminated. On error the buffer might contain partial + contents, and it should not be used except to free its contents. + + On error non-zero is returned. 
Errors include: + + - Invalid arguments: cson_rc.ArgError + + - Buffer cannot be expanded (runs out of memory): cson_rc.AllocError + + Example usage: + + @code + cson_buffer buf = cson_buffer_empty; + // optional: cson_buffer_reserve(&buf, 1024 * 10); + int rc = cson_output_buffer( myValue, &buf, NULL ); + if( 0 != rc ) { + ... error! ... + } + else { + ... use buffer ... + puts((char const*)buf.mem); + } + // In both cases, we eventually need to clean up the buffer: + cson_buffer_reserve( &buf, 0 ); + // Or take over ownership of its memory: + { + char * mem = (char *)buf.mem; + buf = cson_buffer_empty; + ... + free(mem); + } + @endcode + + @see cson_output() + +*/ +int cson_output_buffer( cson_value const * v, cson_buffer * buf, + cson_output_opt const * opt ); + +/** + This works identically to cson_parse_string(), but takes a + cson_buffer object as its input. buf->used bytes of buf->mem are + assumed to be valid JSON input, but it need not be NUL-terminated + (we only read up to buf->used bytes). The value of buf->used is + assumed to be the "string length" of buf->mem, i.e. not including + the NUL terminator. + + Returns 0 on success, non-0 on error. + + See cson_parse() for the semantics of the tgt, opt, and err + parameters. +*/ +int cson_parse_buffer( cson_value ** tgt, cson_buffer const * buf, + cson_parse_opt const * opt, cson_parse_info * err ); + + +/** + Reserves the given amount of memory for the given buffer object. + + If n is 0 then buf->mem is freed and its state is set to + NULL/0 values. + + If buf->capacity is less than or equal to n then 0 is returned and + buf is not modified. + + If n is larger than buf->capacity then buf->mem is (re)allocated + and buf->capacity contains the new length. Newly-allocated bytes + are filled with zeroes. + + On success 0 is returned. On error non-0 is returned and buf is not + modified. + + buf->mem is owned by buf and must eventually be freed by passing an + n value of 0 to this function. + + buf->used is never modified by this function unless n is 0, in which case + it is reset. +*/ +int cson_buffer_reserve( cson_buffer * buf, cson_size_t n ); + +/** + Fills all bytes of the given buffer with the given character. + Returns the number of bytes set (buf->capacity), or 0 if + !buf or buf has no memory allocated to it. +*/ +cson_size_t cson_buffer_fill( cson_buffer * buf, char c ); + +/** + Uses a cson_data_source_f() function to buffer input into a + cson_buffer. + + dest must be a non-NULL, initialized (though possibly empty) + cson_buffer object. Its contents, if any, will be overwritten by + this function, and any memory it holds might be re-used. + + The src function is called, and passed the state parameter, to + fetch the input. If it returns non-0, this function returns that + error code. src() is called, possibly repeatedly, until it reports + that there is no more data. + + Whether or not this function succeeds, dest still owns any memory + pointed to by dest->mem, and the client must eventually free it by + calling cson_buffer_reserve(dest,0). + + dest->mem might (and possibly will) be (re)allocated by this + function, so any pointers to it held from before this call might be + invalidated by this call. + + On error non-0 is returned and dest has almost certainly been + modified but its state must be considered incomplete. 
+ + Errors include: + + - dest or src are NULL (cson_rc.ArgError) + + - Allocation error (cson_rc.AllocError) + + - src() returns an error code + + Whether or not the state parameter may be NULL depends on + the src implementation requirements. + + On success dest will contain the contents read from the input + source. dest->used will be the length of the read-in data, and + dest->mem will point to the memory. dest->mem is automatically + NUL-terminated if this function succeeds, but dest->used does not + count that terminator. On error the state of dest->mem must be + considered incomplete, and is not guaranteed to be NUL-terminated. + + Example usage: + + @code + cson_buffer buf = cson_buffer_empty; + int rc = cson_buffer_fill_from( &buf, + cson_data_source_FILE, + stdin ); + if( rc ) + { + fprintf(stderr,"Error %d (%s) while filling buffer.\n", + rc, cson_rc_string(rc)); + cson_buffer_reserve( &buf, 0 ); + return ...; + } + ... use the buf->mem ... + ... clean up the buffer ... + cson_buffer_reserve( &buf, 0 ); + @endcode + + To take over ownership of the buffer's memory, do: + + @code + void * mem = buf.mem; + buf = cson_buffer_empty; + @endcode + + In which case the memory must eventually be passed to free() to + free it. +*/ +int cson_buffer_fill_from( cson_buffer * dest, cson_data_source_f src, void * state ); + + +/** + Increments the reference count for the given value. This is a + low-level operation and should not normally be used by client code + without understanding exactly what side-effects it introduces. + Mis-use can lead to premature destruction or cause a value instance + to never be properly destructed (i.e. a memory leak). + + This function is probably only useful for the following cases: + + - You want to hold a reference to a value which is itself contained + in one or more containers, and you need to be sure that your + reference outlives the container(s) and/or that you can free your + copy of the reference without invaliding any references to the same + value held in containers. + + - You want to implement "value sharing" behaviour without using an + object or array to contain the shared value. This can be used to + ensure the lifetime of the shared value instance. Each sharing + point adds a reference and simply passed the value to + cson_value_free() when they're done. The object will be kept alive + for other sharing points which added a reference. + + Normally any such value handles would be invalidated when the + parent container(s) is/are cleaned up, but this function can be + used to effectively delay the cleanup. + + This function, at its lowest level, increments the value's + reference count by 1. + + To decrement the reference count, pass the value to + cson_value_free(), after which the value must be considered, from + the perspective of that client code, to be destroyed (though it + will not be if there are still other live references to + it). cson_value_free() will not _actually_ destroy the value until + its reference count drops to 0. + + Returns 0 on success. The only error conditions are if v is NULL + (cson_rc.ArgError) or if the reference increment would overflow + (cson_rc.RangeError). In theory a client would get allocation + errors long before the reference count could overflow (assuming + those reference counts come from container insertions, as opposed + to via this function). 
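+
+   As an illustrative sketch of the first use case listed above
+   (the names myObj, containerValue, and "child" are hypothetical
+   placeholders):
+
+   @code
+   // Keep a child value alive after its parent container is freed:
+   cson_value * child = cson_object_get( myObj, "child" );
+   if( child ) {
+     cson_value_add_reference( child ); // our own reference
+     cson_value_free( containerValue ); // may destroy myObj...
+     ... use child ...
+     cson_value_free( child ); // release our reference
+   }
+   @endcode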
+
+   Insider notes which clients really need to know:
+
+   For shared/constant value instances, such as those returned by
+   cson_value_true() and cson_value_null(), this function has no side
+   effects - it does not actually modify the reference count because
+   (A) those instances are shared across all client code and (B) those
+   objects are static and never get cleaned up. However, that is an
+   implementation detail which client code should not rely on. In
+   other words, if you call cson_value_add_reference() 3 times using
+   the value returned by cson_value_true() (which is incidentally a
+   shared cson_value instance), you must eventually call
+   cson_value_free() 3 times to (semantically) remove those
+   references. However, internally the reference count for that
+   specific cson_value instance will not be modified and those
+   objects will never be freed (they're stack-allocated).
+
+   It might be interesting to note that newly-created objects
+   have a reference count of 0 instead of 1. This is partly because
+   if the initial reference were counted then it would make ownership
+   problematic when inserting values into containers. e.g. consider the
+   following code:
+
+   @code
+   // ACHTUNG: this code is hypothetical and does not reflect
+   // what actually happens!
+   cson_value * v =
+     cson_value_new_integer( 42 ); // v's refcount = 1
+   cson_array_append( myArray, v ); // v's refcount = 2
+   @endcode
+
+   If that were the case, the client would be forced to free his own
+   reference after inserting it into the container (which is a bit
+   counter-intuitive as well as intrusive). It would look a bit like
+   the following and would have to be done after every create/insert
+   operation:
+
+   @code
+   // ACHTUNG: this code is hypothetical and does not reflect
+   // what actually happens!
+   cson_array_append( myArray, v ); // v's refcount = 2
+   cson_value_free( v ); // v's refcount = 1
+   @endcode
+
+   (As i said: it's counter-intuitive and intrusive.)
+
+   Instead, values start with a refcount of 0 and it is only increased
+   when the value is added to an object/array container or when this
+   function is used to manually increment it. cson_value_free() treats
+   a refcount of 0 or 1 equivalently, destroying the value
+   instance. The only semantic difference between 0 and 1, for
+   purposes of cleaning up, is that a value with a non-0 refcount has
+   had its refcount adjusted at some point, whereas a 0 refcount
+   indicates a fresh, "unowned" reference.
+*/
+int cson_value_add_reference( cson_value * v );
+
+#if 0
+/**
+   DO NOT use this unless you know EXACTLY what you're doing.
+   It is only in the public API to work around a couple corner
+   cases involving extracting child elements and discarding
+   their parents.
+
+   This function sets v's reference count to the given value.
+   It does not clean up the object if rc is 0.
+
+   Returns 0 on success, non-0 on error.
+*/
+int cson_value_refcount_set( cson_value * v, unsigned short rc );
+#endif
+
+/**
+   Deeply copies a JSON value, be it an object/array or a "plain"
+   value (e.g. number/string/boolean). If orig is not NULL then this
+   function makes a deep clone of it and returns that clone. Ownership
+   of the clone is transferred to the caller, who must eventually
+   free the value using cson_value_free() or add it to a container
+   object/array to transfer ownership to the container. The
+   returned object will be of the same logical type as orig.
+
+   ACHTUNG: if orig contains any cyclic references at any depth level
+   this function will endlessly recurse. (Having _any_ cyclic
+   references violates this library's requirements.)
+
+   Returns NULL if orig is NULL or if cloning fails. Assuming that
+   orig is in a valid state, the only "likely" error case is that an
+   allocation fails while constructing the clone. In other words, if
+   cloning fails due to something other than an allocation error then
+   either orig is in an invalid state or there is a bug.
+
+   When this function clones Objects or Arrays it shares any immutable
+   values (including object keys) between the parent and the
+   clone. Mutable values (Objects and Arrays) are copied, however.
+   For example, if we clone:
+
+   @code
+   { a: 1, b: 2, c:["hi"] }
+   @endcode
+
+   The cloned object and the array "c" would be new Object/Array
+   instances, but the object keys (a,b,c) and the values of (a,b), as
+   well as the string value within the "c" array, would be shared
+   between the original and the clone. The "c" array itself would be
+   deeply cloned, such that future changes to the clone are not
+   visible to the parent, and vice versa, but immutable values within
+   the array are shared (in this case the string "hi"). The
+   justification for this heuristic is that immutable values can never
+   be changed, so there is no harm in sharing them across
+   clones. Additionally, such types can never contribute to cycles in
+   a JSON tree, so they are safe to share this way. Objects and
+   Arrays, on the other hand, can be modified later and can contribute
+   to cycles, and thus the clone needs to be an independent instance.
+   Note, however, that if this function is passed a non-Object/Array
+   directly, that value is deeply cloned. The sharing behaviour only
+   applies when traversing Objects/Arrays.
+*/
+cson_value * cson_value_clone( cson_value const * orig );
+
+/**
+   Returns the value handle associated with s. The handle itself owns
+   s, and ownership of the handle is not changed by calling this
+   function. If the returned handle is part of a container, calling
+   cson_value_free() on the returned handle invokes undefined
+   behaviour (quite possibly downstream when the container tries to
+   use it).
+
+   This function only returns NULL if s is NULL. The length of the
+   returned string is that reported by cson_string_length_bytes().
+*/
+cson_value * cson_string_value(cson_string const * s);
+/**
+   The Object form of cson_string_value(). See that function
+   for full details.
+*/
+cson_value * cson_object_value(cson_object const * s);
+
+/**
+   The Array form of cson_string_value(). See that function
+   for full details.
+*/
+cson_value * cson_array_value(cson_array const * s);
+
+
+/**
+   Calculates the approximate in-memory-allocated size of v,
+   recursively if it is a container type, with the following caveats
+   and limitations:
+
+   If a given value is reference counted and is referenced multiple
+   times within a traversed container, each reference is counted at
+   full cost. We have no way of knowing if a given reference has been
+   visited already and whether it should or should not be counted, so
+   we pessimistically count them even though they _might_ not really
+   count for the given object tree (it depends on where the other open
+   references live).
+
+   This function returns 0 if any of the following are true:
+
+   - v is NULL
+
+   - v is one of the special singleton values (null, bools, empty
+   string, int 0, double 0.0)
+
+   All other values require an allocation, and this will return their
+   total memory cost, including the cson-specific internals and the
+   native value(s).
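+
+   An illustrative sketch (the name "original" is a hypothetical
+   placeholder, error checking elided) which also touches on the
+   cloning caveat described below:
+
+   @code
+   cson_value * copy = cson_value_clone( original );
+   unsigned int const szOrig = cson_value_msize( original );
+   unsigned int const szCopy = cson_value_msize( copy );
+   // szCopy may legitimately be smaller than szOrig (see below).
+   cson_value_free( copy );
+   @endcode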
+ + Note that because arrays and objects might have more internal slots + allocated than used, the alloced size of a container does not + necessarily increase when a new item is inserted into it. An interesting + side-effect of this is that when cson_clone()ing an array or object, the + size of the clone can actually be less than the original. +*/ +unsigned int cson_value_msize(cson_value const * v); + +/** + Parses command-line-style arguments into a JSON object. + + It expects arguments to be in any of these forms, and any number + of leading dashes are treated identically: + + --key : Treats key as a boolean with a true value. + + --key=VAL : Treats VAL as either a double, integer, or string. + + --key= : Treats key as a JSON null (not literal NULL) value. + + Arguments not starting with a dash are skipped. + + Each key/value pair is inserted into an object. If a given key + appears more than once then only the final entry is actually + stored. + + argc and argv are expected to be values from main() (or similar, + possibly adjusted to remove argv[0]). + + tgt must be either a pointer to NULL or a pointer to a + client-provided Object. If (NULL==*tgt) then this function + allocates a new object and on success it stores the new object in + *tgt (it is owned by the caller). If (NULL!=*tgt) then it is + assumed to be a properly allocated object. DO NOT pass a pointer to + an unitialized pointer, as that will fool this function into + thinking it is a valid object and Undefined Behaviour will ensue. + + If count is not NULL then the number of arugments parsed by this + function are assigned to it. On error, count will be the number of + options successfully parsed before the error was encountered. + + On success: + + - 0 is returned. + + - If (*tgt==NULL) then *tgt is assigned to a newly-allocated + object, owned by the caller. Note that even if no arguments are + parsed, the object is still created. + + On error: + + - non-0 is returned + + - If (*tgt==NULL) then it is not modified. + + - If (*tgt!=NULL) (i.e., the caller provides his own object) then + it might contain partial results. +*/ +int cson_parse_argv_flags( int argc, char const * const * argv, + cson_object ** tgt, unsigned int * count ); + + +/* LICENSE + +This software's source code, including accompanying documentation and +demonstration applications, are licensed under the following +conditions... + +Certain files are imported from external projects and have their own +licensing terms. Namely, the JSON_parser.* files. See their files for +their official licenses, but the summary is "do what you want [with +them] but leave the license text and copyright in place." + +The author (Stephan G. Beal [http://wanderinghorse.net/home/stephan/]) +explicitly disclaims copyright in all jurisdictions which recognize +such a disclaimer. In such jurisdictions, this software is released +into the Public Domain. + +In jurisdictions which do not recognize Public Domain property +(e.g. Germany as of 2011), this software is Copyright (c) 2011 by +Stephan G. Beal, and is released under the terms of the MIT License +(see below). + +In jurisdictions which recognize Public Domain property, the user of +this software may choose to accept it either as 1) Public Domain, 2) +under the conditions of the MIT License (see below), or 3) under the +terms of dual Public Domain/MIT License conditions described here, as +they choose. 
+ +The MIT License is about as close to Public Domain as a license can +get, and is described in clear, concise terms at: + + http://en.wikipedia.org/wiki/MIT_License + +The full text of the MIT License follows: + +-- +Copyright (c) 2011 Stephan G. Beal (http://wanderinghorse.net/home/stephan/) + +Permission is hereby granted, free of charge, to any person +obtaining a copy of this software and associated documentation +files (the "Software"), to deal in the Software without +restriction, including without limitation the rights to use, +copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the +Software is furnished to do so, subject to the following +conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES +OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT +HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, +WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING +FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR +OTHER DEALINGS IN THE SOFTWARE. + +--END OF MIT LICENSE-- + +For purposes of the above license, the term "Software" includes +documentation and demonstration source code which accompanies +this software. ("Accompanies" = is contained in the Software's +primary public source code repository.) + +*/ + +#if defined(__cplusplus) +} /*extern "C"*/ +#endif + +#endif /* WANDERINGHORSE_NET_CSON_H_INCLUDED */ +/* end file include/wh/cson/cson.h */ +/* begin file include/wh/cson/cson_sqlite3.h */ +/** @file cson_sqlite3.h + +This file contains cson's public sqlite3-to-JSON API declarations +and API documentation. If CSON_ENABLE_SQLITE3 is not defined, +or is defined to 0, then including this file will have no side-effects +other than defining CSON_ENABLE_SQLITE3 (if it was not defined) to 0 +and defining a few include guard macros. i.e. if CSON_ENABLE_SQLITE3 +is not set to a true value then the API is not visible. + +This API requires that be in the INCLUDES path and that +the client eventually link to (or directly embed) the sqlite3 library. +*/ +#if !defined(WANDERINGHORSE_NET_CSON_SQLITE3_H_INCLUDED) +#define WANDERINGHORSE_NET_CSON_SQLITE3_H_INCLUDED 1 +#if !defined(CSON_ENABLE_SQLITE3) +# if defined(DOXYGEN) +#define CSON_ENABLE_SQLITE3 1 +# else +#define CSON_ENABLE_SQLITE3 1 +# endif +#endif + +#if CSON_ENABLE_SQLITE3 /* we do this here for the sake of the amalgamation build */ +#include + +#if defined(__cplusplus) +extern "C" { +#endif + +/** + Converts a single value from a single 0-based column index to its JSON + equivalent. + + On success it returns a new JSON value, which will have a different concrete + type depending on the field type reported by sqlite3_column_type(st,col): + + Integer, double, null, or string (TEXT and BLOB data, though not + all blob data is legal for a JSON string). + + st must be a sqlite3_step()'d row and col must be a 0-based column + index within that result row. + */ +cson_value * cson_sqlite3_column_to_value( sqlite3_stmt * st, int col ); + +/** + Creates a JSON Array object containing the names of all columns + of the given prepared statement handle. + + Returns a new array value on success, which the caller owns. 
Its elements + are in the same order as in the underlying query. + + On error NULL is returned. + + st is not traversed or freed by this function - only the column + count and names are read. +*/ +cson_value * cson_sqlite3_column_names( sqlite3_stmt * st ); + +/** + Creates a JSON Object containing key/value pairs corresponding + to the result columns in the current row of the given statement + handle. st must be a sqlite3_step()'d row result. + + On success a new Object is returned which is owned by the + caller. On error NULL is returned. + + cson_sqlite3_column_to_value() is used to convert each column to a + JSON value, and the column names are taken from + sqlite3_column_name(). +*/ +cson_value * cson_sqlite3_row_to_object( sqlite3_stmt * st ); +/** + Functionally almost identical to cson_sqlite3_row_to_object(), the + only difference being how the result objects gets its column names. + st must be a freshly-step()'d handle holding a result row. + colNames must be an Array with at least the same number of columns + as st. If it has fewer, NULL is returned and this function has + no side-effects. + + For each column in the result set, the colNames entry at the same + index is used for the column key. If a given entry is-not-a String + then conversion will fail and NULL will be returned. + + The one reason to prefer this over cson_sqlite3_row_to_object() is + that this one can share the keys across multiple rows (or even + other JSON containers), whereas the former makes fresh copies of + the column names for each row. + +*/ +cson_value * cson_sqlite3_row_to_object2( sqlite3_stmt * st, + cson_array * colNames ); + +/** + Similar to cson_sqlite3_row_to_object(), but creates an Array + value which contains the JSON-form values of the given result + set row. +*/ +cson_value * cson_sqlite3_row_to_array( sqlite3_stmt * st ); +/** + Converts the results of an sqlite3 SELECT statement to JSON, + in the form of a cson_value object tree. + + st must be a prepared, but not yet traversed, SELECT query. + tgt must be a pointer to NULL (see the example below). If + either of those arguments are NULL, cson_rc.ArgError is returned. + + This walks the query results and returns a JSON object which + has a different structure depending on the value of the 'fat' + argument. + + + If 'fat' is 0 then the structure is: + + @code + { + "columns":["colName1",..."colNameN"], + "rows":[ + [colVal0, ... colValN], + [colVal0, ... colValN], + ... + ] + } + @endcode + + In the "non-fat" format the order of the columns and row values is + guaranteed to be the same as that of the underlying query. + + If 'fat' is not 0 then the structure is: + + @code + { + "columns":["colName1",..."colNameN"], + "rows":[ + {"colName1":value1,..."colNameN":valueN}, + {"colName1":value1,..."colNameN":valueN}, + ... + ] + } + @endcode + + In the "fat" format, the order of the "columns" entries is guaranteed + to be the same as the underlying query fields, but the order + of the keys in the "rows" might be different and might in fact + change when passed through different JSON implementations, + depending on how they implement object key/value pairs. + + On success it returns 0 and assigns *tgt to a newly-allocated + JSON object tree (using the above structure), which the caller owns. + If the query returns no rows, the "rows" value will be an empty + array, as opposed to null. + + On error non-0 is returned and *tgt is not modified. 
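+
+   An illustrative end-to-end sketch showing the prepare/convert/
+   finalize steps around this function (the SQL text is a
+   hypothetical placeholder; see also the shorter example below):
+
+   @code
+   sqlite3_stmt * st = NULL;
+   cson_value * json = NULL;
+   int rc = sqlite3_prepare_v2( db, "SELECT uid, login FROM user",
+                                -1, &st, NULL );
+   if( SQLITE_OK == rc ) {
+     rc = cson_sqlite3_stmt_to_json( st, &json, 0 );
+     sqlite3_finalize( st );
+   }
+   if( (0 == rc) && json ) {
+     cson_output_FILE( json, stdout, NULL );
+     cson_value_free( json );
+   }
+   @endcode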
+ + The error code cson_rc.IOError is used to indicate a db-level + error, and cson_rc.TypeError is returned if sqlite3_column_count(st) + returns 0 or less (indicating an invalid or non-SELECT statement). + + The JSON data types are determined by the column type as reported + by sqlite3_column_type(): + + SQLITE_INTEGER: integer + + SQLITE_FLOAT: double + + SQLITE_TEXT or SQLITE_BLOB: string, and this will only work if + the data is UTF8 compatible. + + If the db returns a literal or SQL NULL for a value it is converted + to a JSON null. If it somehow finds a column type it cannot handle, + the value is also converted to a NULL in the output. + + Example + + @code + cson_value * json = NULL; + int rc = cson_sqlite3_stmt_to_json( myStatement, &json, 1 ); + if( 0 != rc ) { ... error ... } + else { + cson_output_FILE( json, stdout, NULL ); + cson_value_free( json ); + } + @endcode +*/ +int cson_sqlite3_stmt_to_json( sqlite3_stmt * st, cson_value ** tgt, char fat ); + +/** + A convenience wrapper around cson_sqlite3_stmt_to_json(), which + takes SQL instead of a sqlite3_stmt object. It has the same + return value and argument semantics as that function. +*/ +int cson_sqlite3_sql_to_json( sqlite3 * db, cson_value ** tgt, char const * sql, char fat ); + +/** + Binds a JSON value to a 1-based parameter index in a prepared SQL + statement. v must be NULL or one of one of the types (null, string, + integer, double, boolean, array). Booleans are bound as integer 0 + or 1. NULL or null are bound as SQL NULL. Integers are bound as + 64-bit ints. Strings are bound using sqlite3_bind_text() (as + opposed to text16), but we could/should arguably bind them as + blobs. + + If v is an Array then ndx is is used as a starting position + (1-based) and each item in the array is bound to the next parameter + position (starting and ndx, though the array uses 0-based offsets). + + TODO: add Object support for named parameters. + + Returns 0 on success, non-0 on error. + */ +int cson_sqlite3_bind_value( sqlite3_stmt * st, int ndx, cson_value const * v ); + +#if defined(__cplusplus) +} /*extern "C"*/ +#endif + +#endif /* CSON_ENABLE_SQLITE3 */ +#endif /* WANDERINGHORSE_NET_CSON_SQLITE3_H_INCLUDED */ +/* end file include/wh/cson/cson_sqlite3.h */ +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/cygsup.h Index: src/cygsup.h ================================================================== --- src/cygsup.h +++ src/cygsup.h @@ -0,0 +1,124 @@ +/* +** Copyright (c) 2007 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains preprocessor directives used to help integrate with the +** Cygwin runtime and build environment. The intent of this file is to keep +** the Cygwin-specific preprocessor directives together. +*/ + +#if defined(__CYGWIN__) && !defined(CYGSUP_H) +#define CYGSUP_H + +/* +******************************************************************************* +** Include any Cygwin-specific headers here. 
** +******************************************************************************* +*/ + +#include +#include + +/* +******************************************************************************* +** Define any Cygwin-specific preprocessor macros here. All macros defined in +** this section should be wrapped with #ifndef, in order to allow them to be +** externally overridden. +******************************************************************************* +*/ + +#ifndef CP_UTF8 +# define CP_UTF8 65001 +#endif + +#ifndef WINBASEAPI +# define WINBASEAPI __declspec(dllimport) +#endif + +#ifndef WINADVAPI +# define WINADVAPI __declspec(dllimport) +#endif + +#ifndef SHSTDAPI +# define SHSTDAPI __declspec(dllimport) +#endif + +#ifndef STDAPI +# define STDAPI __stdcall +#endif + +#ifndef WINAPI +# define WINAPI __stdcall +#endif + +/* +******************************************************************************* +** Declare any Cygwin-specific Win32 or other APIs here. Functions declared in +** this section should use the built-in ANSI C types in order to make sure this +** header file continues to work as a self-contained unit. +** +** On Cygwin64, "long" is 64-bit but in Win64 it's 32-bit. That's why in the +** signatures below "long" should not be used. They now use "int" instead. +******************************************************************************* +*/ + +WINADVAPI extern WINAPI int RegOpenKeyExW( + void *, /* HKEY */ + const wchar_t *, /* LPCWSTR */ + unsigned int, /* DWORD */ + unsigned int, /* REGSAM */ + void * /* PHKEY */ + ); + +WINADVAPI extern WINAPI int RegQueryValueExW( + void *, /* HKEY */ + const wchar_t *, /* LPCWSTR */ + unsigned int *, /* LPDWORD */ + unsigned int *, /* LPDWORD */ + unsigned char *, /* LPBYTE */ + unsigned int * /* LPDWORD */ + ); + +SHSTDAPI extern STDAPI void *ShellExecuteW( + void *, /* HWND */ + const wchar_t *, /* LPCWSTR */ + const wchar_t *, /* LPCWSTR */ + const wchar_t *, /* LPCWSTR */ + const wchar_t *, /* LPCWSTR */ + int /* INT */ + ); + +WINBASEAPI extern WINAPI int WideCharToMultiByte( + unsigned int, /* UINT */ + unsigned int, /* DWORD */ + const wchar_t *, /* LPCWSTR */ + int, /* int */ + char *, /* LPSTR */ + int, /* int */ + const char *, /* LPCSTR */ + int * /* LPBOOL */ + ); + +WINBASEAPI extern WINAPI int MultiByteToWideChar( + unsigned int, /* UINT */ + unsigned int, /* DWORD */ + const char *, /* LPCSTR */ + int, /* int */ + wchar_t *, /* LPWSTR */ + int /* int */ + ); + +#endif /* defined(__CYGWIN__) && !defined(CYGSUP_H) */ Index: src/db.c ================================================================== --- src/db.c +++ src/db.c @@ -22,73 +22,110 @@ ** ** (1) The "user" database in ~/.fossil ** ** (2) The "repository" database ** -** (3) A local checkout database named "_FOSSIL_" or ".fos" +** (3) A local checkout database named "_FOSSIL_" or ".fslckout" ** and located at the root of the local copy of the source tree. ** */ #include "config.h" +#if ! defined(_WIN32) +# include +#endif #include #include #include #include +#include #include "db.h" #if INTERFACE /* ** An single SQL statement is represented as an instance of the following ** structure. */ struct Stmt { Blob sql; /* The SQL for this statement */ - sqlite3_stmt *pStmt; /* The results of sqlite3_prepare() */ + sqlite3_stmt *pStmt; /* The results of sqlite3_prepare_v2() */ Stmt *pNext, *pPrev; /* List of all unfinalized statements */ int nStep; /* Number of sqlite3_step() calls */ }; + +/* +** Copy this to initialize a Stmt object to a clean/empty state. 
This +** is useful to help avoid assertions when performing cleanup in some +** error handling cases. +*/ +#define empty_Stmt_m {BLOB_INITIALIZER,NULL, NULL, NULL, 0} #endif /* INTERFACE */ +const struct Stmt empty_Stmt = empty_Stmt_m; /* ** Call this routine when a database error occurs. */ static void db_err(const char *zFormat, ...){ va_list ap; char *z; - static const char zRebuildMsg[] = - "If you have recently updated your fossil executable, you might\n" - "need to run \"fossil all rebuild\" to bring the repository\n" - "schemas up to date.\n"; + int rc = 1; va_start(ap, zFormat); z = vmprintf(zFormat, ap); va_end(ap); +#ifdef FOSSIL_ENABLE_JSON + if( g.json.isJsonMode ){ + json_err( 0, z, 1 ); + if( g.isHTTP ){ + rc = 0 /* avoid HTTP 500 */; + } + } + else +#endif /* FOSSIL_ENABLE_JSON */ if( g.xferPanic ){ cgi_reset_content(); @ error Database\serror:\s%F(z) - cgi_reply(); + cgi_reply(); } - if( g.cgiOutput ){ + else if( g.cgiOutput ){ g.cgiOutput = 0; - cgi_printf("
<h1>Database Error</h1>\n" - "<pre>%h</pre><p>%s</p>", z, zRebuildMsg); + cgi_printf("<h1>Database Error</h1>\n<pre>%h</pre>
\n", z); cgi_reply(); }else{ - fprintf(stderr, "%s: %s\n\n%s", g.argv[0], z, zRebuildMsg); + fprintf(stderr, "%s: %s\n", g.argv[0], z); } + free(z); db_force_rollback(); - exit(1); + fossil_exit(rc); } -static int nBegin = 0; /* Nesting depth of BEGIN */ -static int isNewRepo = 0; /* True if the repository is newly created */ -static int doRollback = 0; /* True to force a rollback */ -static int nCommitHook = 0; /* Number of commit hooks */ -static struct sCommitHook { - int (*xHook)(void); /* Functions to call at db_end_transaction() */ - int sequence; /* Call functions in sequence order */ -} aHook[5]; -static Stmt *pAllStmt = 0; /* List of all unfinalized statements */ +/* +** All static variable that a used by only this file are gathered into +** the following structure. +*/ +static struct DbLocalData { + int nBegin; /* Nesting depth of BEGIN */ + int doRollback; /* True to force a rollback */ + int nCommitHook; /* Number of commit hooks */ + Stmt *pAllStmt; /* List of all unfinalized statements */ + int nPrepare; /* Number of calls to sqlite3_prepare_v2() */ + int nDeleteOnFail; /* Number of entries in azDeleteOnFail[] */ + struct sCommitHook { + int (*xHook)(void); /* Functions to call at db_end_transaction() */ + int sequence; /* Call functions in sequence order */ + } aHook[5]; + char *azDeleteOnFail[3]; /* Files to delete on a failure */ + char *azBeforeCommit[5]; /* Commands to run prior to COMMIT */ + int nBeforeCommit; /* Number of entries in azBeforeCommit */ + int nPriorChanges; /* sqlite3_total_changes() at transaction start */ +} db = {0, 0, 0, 0, 0, 0, }; + +/* +** Arrange for the given file to be deleted on a failure. +*/ +void db_delete_on_failure(const char *zFilename){ + assert( db.nDeleteOnFailsequence ){ + assert( db.nCommitHook < count(db.aHook) ); + for(i=0; isequence ){ int s = sequence; int (*xS)(void) = x; - sequence = aHook[i].sequence; - x = aHook[i].xHook; - aHook[i].sequence = s; - aHook[i].xHook = xS; + sequence = db.aHook[i].sequence; + x = db.aHook[i].xHook; + db.aHook[i].sequence = s; + db.aHook[i].xHook = xS; } } - aHook[nCommitHook].sequence = sequence; - aHook[nCommitHook].xHook = x; - nCommitHook++; + db.aHook[db.nCommitHook].sequence = sequence; + db.aHook[db.nCommitHook].xHook = x; + db.nCommitHook++; } /* ** Prepare a Stmt. Assume that the Stmt is previously uninitialized. ** If the input string contains multiple SQL statements, only the first ** one is processed. All statements beyond the first are silently ignored. 
*/ -int db_vprepare(Stmt *pStmt, const char *zFormat, va_list ap){ +int db_vprepare(Stmt *pStmt, int errOk, const char *zFormat, va_list ap){ + int rc; char *zSql; blob_zero(&pStmt->sql); blob_vappendf(&pStmt->sql, zFormat, ap); va_end(ap); zSql = blob_str(&pStmt->sql); - if( sqlite3_prepare_v2(g.db, zSql, -1, &pStmt->pStmt, 0)!=0 ){ + db.nPrepare++; + rc = sqlite3_prepare_v2(g.db, zSql, -1, &pStmt->pStmt, 0); + if( rc!=0 && !errOk ){ db_err("%s\n%s", sqlite3_errmsg(g.db), zSql); } pStmt->pNext = pStmt->pPrev = 0; pStmt->nStep = 0; - return 0; + return rc; } int db_prepare(Stmt *pStmt, const char *zFormat, ...){ int rc; va_list ap; va_start(ap, zFormat); - rc = db_vprepare(pStmt, zFormat, ap); + rc = db_vprepare(pStmt, 0, zFormat, ap); + va_end(ap); + return rc; +} +int db_prepare_ignore_error(Stmt *pStmt, const char *zFormat, ...){ + int rc; + va_list ap; + va_start(ap, zFormat); + rc = db_vprepare(pStmt, 1, zFormat, ap); va_end(ap); return rc; } int db_static_prepare(Stmt *pStmt, const char *zFormat, ...){ int rc = SQLITE_OK; if( blob_size(&pStmt->sql)==0 ){ va_list ap; va_start(ap, zFormat); - rc = db_vprepare(pStmt, zFormat, ap); - pStmt->pNext = pAllStmt; + rc = db_vprepare(pStmt, 0, zFormat, ap); + pStmt->pNext = db.pAllStmt; pStmt->pPrev = 0; - if( pAllStmt ) pAllStmt->pPrev = pStmt; - pAllStmt = pStmt; + if( db.pAllStmt ) db.pAllStmt->pPrev = pStmt; + db.pAllStmt = pStmt; va_end(ap); } return rc; } @@ -237,10 +312,14 @@ return sqlite3_bind_double(pStmt->pStmt, paramIdx(pStmt, zParamName), rValue); } int db_bind_text(Stmt *pStmt, const char *zParamName, const char *zValue){ return sqlite3_bind_text(pStmt->pStmt, paramIdx(pStmt, zParamName), zValue, -1, SQLITE_STATIC); +} +int db_bind_text16(Stmt *pStmt, const char *zParamName, const char *zValue){ + return sqlite3_bind_text16(pStmt->pStmt, paramIdx(pStmt, zParamName), zValue, + -1, SQLITE_STATIC); } int db_bind_null(Stmt *pStmt, const char *zParamName){ return sqlite3_bind_null(pStmt->pStmt, paramIdx(pStmt, zParamName)); } int db_bind_blob(Stmt *pStmt, const char *zParamName, Blob *pBlob){ @@ -247,11 +326,11 @@ return sqlite3_bind_blob(pStmt->pStmt, paramIdx(pStmt, zParamName), blob_buffer(pBlob), blob_size(pBlob), SQLITE_STATIC); } /* bind_str() treats a Blob object like a TEXT string and binds it -** to the SQL variable. Constrast this to bind_blob() which treats +** to the SQL variable. Contrast this to bind_blob() which treats ** the Blob object like an SQL BLOB. */ int db_bind_str(Stmt *pStmt, const char *zParamName, Blob *pBlob){ return sqlite3_bind_text(pStmt->pStmt, paramIdx(pStmt, zParamName), blob_buffer(pBlob), blob_size(pBlob), SQLITE_STATIC); @@ -310,23 +389,27 @@ if( pStmt->pNext ){ pStmt->pNext->pPrev = pStmt->pPrev; } if( pStmt->pPrev ){ pStmt->pPrev->pNext = pStmt->pNext; - }else if( pAllStmt==pStmt ){ - pAllStmt = pStmt->pNext; + }else if( db.pAllStmt==pStmt ){ + db.pAllStmt = pStmt->pNext; } pStmt->pNext = 0; pStmt->pPrev = 0; return rc; } /* ** Return the rowid of the most recent insert */ -i64 db_last_insert_rowid(void){ - return sqlite3_last_insert_rowid(g.db); +int db_last_insert_rowid(void){ + i64 x = sqlite3_last_insert_rowid(g.db); + if( x<0 || x>(i64)2147483647 ){ + fossil_fatal("rowid out of range (0..2147483647)"); + } + return (int)x; } /* ** Return the number of rows that were changed by the most recent ** INSERT, UPDATE, or DELETE. Auxiliary changes caused by triggers @@ -338,10 +421,13 @@ /* ** Extract text, integer, or blob values from the N-th column of the ** current row. 
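A typical caller of these wrappers follows a prepare/bind/step/finalize pattern. A minimal sketch (the query text and the zUuid variable are hypothetical):

   Stmt q;
   db_prepare(&q, "SELECT rid FROM blob WHERE uuid=:uuid");
   db_bind_text(&q, ":uuid", zUuid);
   while( db_step(&q)==SQLITE_ROW ){
     fossil_print("%d\n", db_column_int(&q, 0));
   }
   db_finalize(&q);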
*/ +int db_column_type(Stmt *pStmt, int N){ + return sqlite3_column_type(pStmt->pStmt, N); +} int db_column_bytes(Stmt *pStmt, int N){ return sqlite3_column_bytes(pStmt->pStmt, N); } int db_column_int(Stmt *pStmt, int N){ return sqlite3_column_int(pStmt->pStmt, N); @@ -353,10 +439,13 @@ return sqlite3_column_double(pStmt->pStmt, N); } const char *db_column_text(Stmt *pStmt, int N){ return (char*)sqlite3_column_text(pStmt->pStmt, N); } +const char *db_column_raw(Stmt *pStmt, int N){ + return (const char*)sqlite3_column_blob(pStmt->pStmt, N); +} const char *db_column_name(Stmt *pStmt, int N){ return (char*)sqlite3_column_name(pStmt->pStmt, N); } int db_column_count(Stmt *pStmt){ return sqlite3_column_count(pStmt->pStmt); @@ -368,11 +457,11 @@ blob_append(pBlob, sqlite3_column_blob(pStmt->pStmt, N), sqlite3_column_bytes(pStmt->pStmt, N)); } /* -** Initialize a blob to an ephermeral copy of the content of a +** Initialize a blob to an ephemeral copy of the content of a ** column in the current row. The data in the blob will become ** invalid when the statement is stepped or reset. */ void db_ephemeral_blob(Stmt *pStmt, int N, Blob *pBlob){ blob_init(pBlob, sqlite3_column_blob(pStmt->pStmt, N), @@ -397,40 +486,105 @@ while( (rc = db_step(pStmt))==SQLITE_ROW ){} rc = db_reset(pStmt); db_check_result(rc); return rc; } + +/* +** Print the output of one or more SQL queries on standard output. +** This routine is used for debugging purposes only. +*/ +int db_debug(const char *zSql, ...){ + Blob sql; + int rc = SQLITE_OK; + va_list ap; + const char *z, *zEnd; + sqlite3_stmt *pStmt; + blob_init(&sql, 0, 0); + va_start(ap, zSql); + blob_vappendf(&sql, zSql, ap); + va_end(ap); + z = blob_str(&sql); + while( rc==SQLITE_OK && z[0] ){ + pStmt = 0; + rc = sqlite3_prepare_v2(g.db, z, -1, &pStmt, &zEnd); + if( rc!=SQLITE_OK ) break; + if( pStmt ){ + int nRow = 0; + db.nPrepare++; + while( sqlite3_step(pStmt)==SQLITE_ROW ){ + int i, n; + if( nRow++ > 0 ) fossil_print("\n"); + n = sqlite3_column_count(pStmt); + for(i=0; i0 ){ - if( access(zPwd, W_OK) ) break; - for(i=0; i1 && zPwd[n-1]=='/' ){ + while( n>0 && zPwd[n-1]=='/' ){ n--; zPwd[n] = 0; } g.zLocalRoot = mprintf("%s/", zPwd); + g.localOpen = 1; + db_open_config(0); + db_open_repository(zDbName); return 1; } } n--; - while( n>0 && zPwd[n]!='/' ){ n--; } - while( n>0 && zPwd[n-1]=='/' ){ n--; } + while( n>1 && zPwd[n]!='/' ){ n--; } + while( n>1 && zPwd[n-1]=='/' ){ n--; } zPwd[n] = 0; } /* A checkout database file could not be found */ return 0; } + +/* +** Get the full pathname to the repository database file. The +** local database (the _FOSSIL_ or .fslckout database) must have already +** been opened before this routine is called. +*/ +const char *db_repository_filename(void){ + static char *zRepo = 0; + assert( g.localOpen ); + assert( g.zLocalRoot ); + if( zRepo==0 ){ + zRepo = db_lget("repository", 0); + if( zRepo && !file_is_absolute_path(zRepo) ){ + zRepo = mprintf("%s%s", g.zLocalRoot, zRepo); + } + } + return zRepo; +} + +/* +** Returns non-zero if the default value for the "allow-symlinks" setting +** is "on". +*/ +int db_allow_symlinks_by_default(void){ +#if defined(_WIN32) + return 0; +#else + return 1; +#endif +} /* ** Open the repository database given by zDbName. If zDbName==NULL then ** get the name from the already open local database. 
*/ void db_open_repository(const char *zDbName){ if( g.repositoryOpen ) return; if( zDbName==0 ){ if( g.localOpen ){ - zDbName = db_lget("repository", 0); + zDbName = db_repository_filename(); } if( zDbName==0 ){ db_err("unable to find the name of a repository database"); } } - if( access(zDbName, R_OK) || file_size(zDbName)<1024 ){ - if( access(zDbName, 0) ){ + if( file_access(zDbName, R_OK) || file_size(zDbName)<1024 ){ + if( file_access(zDbName, F_OK) ){ +#ifdef FOSSIL_ENABLE_JSON + g.json.resultCode = FSL_JSON_E_DB_NOT_FOUND; +#endif fossil_panic("repository does not exist or" " is in an unreadable directory: %s", zDbName); - }else if( access(zDbName, R_OK) ){ + }else if( file_access(zDbName, R_OK) ){ +#ifdef FOSSIL_ENABLE_JSON + g.json.resultCode = FSL_JSON_E_DENIED; +#endif fossil_panic("read permission denied for repository %s", zDbName); }else{ +#ifdef FOSSIL_ENABLE_JSON + g.json.resultCode = FSL_JSON_E_DB_NOT_VALID; +#endif fossil_panic("not a valid repository: %s", zDbName); } } - db_open_or_attach(zDbName, "repository"); + g.zRepositoryName = mprintf("%s", zDbName); + db_open_or_attach(g.zRepositoryName, "repository", 0); g.repositoryOpen = 1; - g.zRepositoryName = mprintf("%s", zDbName); + /* Cache "allow-symlinks" option, because we'll need it on every stat call */ + g.allowSymlinks = db_get_boolean("allow-symlinks", + db_allow_symlinks_by_default()); + g.zAuxSchema = db_get("aux-schema",""); + + /* Verify that the PLINK table has a new column added by the + ** 2014-11-28 schema change. Create it if necessary. This code + ** can be removed in the future, once all users have upgraded to the + ** 2014-11-28 or later schema. + */ + if( !db_table_has_column("repository","plink","baseid") ){ + db_multi_exec( + "ALTER TABLE %s.plink ADD COLUMN baseid;", db_name("repository") + ); + } + + /* Verify that the MLINK table has the newer columns added by the + ** 2015-01-24 schema change. Create them if necessary. This code + ** can be removed in the future, once all users have upgraded to the + ** 2015-01-24 or later schema. + */ + if( !db_table_has_column("repository","mlink","isaux") ){ + db_begin_transaction(); + db_multi_exec( + "ALTER TABLE %s.mlink ADD COLUMN pmid INTEGER DEFAULT 0;" + "ALTER TABLE %s.mlink ADD COLUMN isaux BOOLEAN DEFAULT 0;", + db_name("repository"), db_name("repository") + ); + db_end_transaction(0); + } } +/* +** Flags for the db_find_and_open_repository() function. +*/ +#if INTERFACE +#define OPEN_OK_NOT_FOUND 0x001 /* Do not error out if not found */ +#define OPEN_ANY_SCHEMA 0x002 /* Do not error if schema is wrong */ +#endif + /* ** Try to find the repository and open it. Use the -R or --repository ** option to locate the repository. If no such option is available, then ** use the repository of the open checkout if there is one. ** ** Error out if the repository cannot be opened. 
*/ -void db_find_and_open_repository(int errIfNotFound){ - const char *zRep = find_option("repository", "R", 1); +void db_find_and_open_repository(int bFlags, int nArgUsed){ + const char *zRep = find_repository_option(); + if( zRep && file_isdir(zRep)==1 ){ + goto rep_not_found; + } + if( zRep==0 && nArgUsed && g.argc==nArgUsed+1 ){ + zRep = g.argv[nArgUsed]; + } if( zRep==0 ){ - if( db_open_local()==0 ){ + if( db_open_local(0)==0 ){ goto rep_not_found; } - zRep = db_lget("repository", 0); + zRep = db_repository_filename(); if( zRep==0 ){ goto rep_not_found; } } db_open_repository(zRep); if( g.repositoryOpen ){ + if( (bFlags & OPEN_ANY_SCHEMA)==0 ) db_verify_schema(); return; } rep_not_found: - if( errIfNotFound ){ - fossil_fatal("use --repository or -R to specify the repository database"); + if( (bFlags & OPEN_OK_NOT_FOUND)==0 ){ +#ifdef FOSSIL_ENABLE_JSON + g.json.resultCode = FSL_JSON_E_DB_NOT_FOUND; +#endif + if( nArgUsed==0 ){ + fossil_fatal("use --repository or -R to specify the repository database"); + }else{ + fossil_fatal("specify the repository name as a command-line argument"); + } } } + +/* +** Return the name of the database "localdb", "configdb", or "repository". +*/ +const char *db_name(const char *zDb){ + assert( fossil_strcmp(zDb,"localdb")==0 + || fossil_strcmp(zDb,"configdb")==0 + || fossil_strcmp(zDb,"repository")==0 + || fossil_strcmp(zDb,"temp")==0 ); + if( fossil_strcmp(zDb, g.zMainDbType)==0 ) zDb = "main"; + return zDb; +} + +/* +** Return TRUE if the schema is out-of-date +*/ +int db_schema_is_outofdate(void){ + return strcmp(g.zAuxSchema,AUX_SCHEMA_MIN)<0 + || strcmp(g.zAuxSchema,AUX_SCHEMA_MAX)>0; +} + +/* +** Return true if the database is writeable +*/ +int db_is_writeable(const char *zName){ + return g.db!=0 && !sqlite3_db_readonly(g.db, db_name(zName)); +} + +/* +** Verify that the repository schema is correct. If it is not correct, +** issue a fatal error and die. +*/ +void db_verify_schema(void){ + if( db_schema_is_outofdate() ){ +#ifdef FOSSIL_ENABLE_JSON + g.json.resultCode = FSL_JSON_E_DB_NEEDS_REBUILD; +#endif + fossil_warning("incorrect repository schema version: " + "current repository schema version is \"%s\" " + "but need versions between \"%s\" and \"%s\".", + g.zAuxSchema, AUX_SCHEMA_MIN, AUX_SCHEMA_MAX); + fossil_fatal("run \"fossil rebuild\" to fix this problem"); + } +} + + +/* +** COMMAND: test-move-repository +** +** Usage: %fossil test-move-repository PATHNAME +** +** Change the location of the repository database on a local check-out. +** Use this command to avoid having to close and reopen a checkout +** when relocating the repository database. +*/ +void move_repo_cmd(void){ + Blob repo; + char *zRepo; + if( g.argc!=3 ){ + usage("PATHNAME"); + } + file_canonical_name(g.argv[2], &repo, 0); + zRepo = blob_str(&repo); + if( file_access(zRepo, F_OK) ){ + fossil_fatal("no such file: %s", zRepo); + } + if( db_open_local(zRepo)==0 ){ + fossil_fatal("not in a local checkout"); + return; + } + db_open_or_attach(zRepo, "test_repo", 0); + db_lset("repository", blob_str(&repo)); + db_record_repository_filename(blob_str(&repo)); + db_close(1); +} @@ -850,29 +1407,89 @@ + /* ** Open the local database. If unable, exit with an error. */ void db_must_be_within_tree(void){ - if( db_open_local()==0 ){ - fossil_fatal("not within an open checkout"); + if( db_open_local(0)==0 ){ + fossil_fatal("current directory is not within an open checkout"); } db_open_repository(0); + db_verify_schema(); } /* ** Close the database connection. 
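As a sketch, a command that can tolerate a missing repository might combine the flags above with a check of g.repositoryOpen (the command name and body are hypothetical):

   void example_test_cmd(void){
     db_find_and_open_repository(OPEN_OK_NOT_FOUND, 0);
     if( !g.repositoryOpen ){
       fossil_print("no repository found\n");
       return;
     }
     /* ... queries against the open repository go here ... */
   }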
+** +** Check for unfinalized statements and report errors if the reportErrors +** argument is true. Ignore unfinalized statements when false. */ -void db_close(void){ +void db_close(int reportErrors){ + sqlite3_stmt *pStmt; if( g.db==0 ) return; - while( pAllStmt ){ - db_finalize(pAllStmt); + if( g.fSqlStats ){ + int cur, hiwtr; + sqlite3_db_status(g.db, SQLITE_DBSTATUS_LOOKASIDE_USED, &cur, &hiwtr, 0); + fprintf(stderr, "-- LOOKASIDE_USED %10d %10d\n", cur, hiwtr); + sqlite3_db_status(g.db, SQLITE_DBSTATUS_LOOKASIDE_HIT, &cur, &hiwtr, 0); + fprintf(stderr, "-- LOOKASIDE_HIT %10d\n", hiwtr); + sqlite3_db_status(g.db, SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE, &cur,&hiwtr,0); + fprintf(stderr, "-- LOOKASIDE_MISS_SIZE %10d\n", hiwtr); + sqlite3_db_status(g.db, SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL, &cur,&hiwtr,0); + fprintf(stderr, "-- LOOKASIDE_MISS_FULL %10d\n", hiwtr); + sqlite3_db_status(g.db, SQLITE_DBSTATUS_CACHE_USED, &cur, &hiwtr, 0); + fprintf(stderr, "-- CACHE_USED %10d\n", cur); + sqlite3_db_status(g.db, SQLITE_DBSTATUS_SCHEMA_USED, &cur, &hiwtr, 0); + fprintf(stderr, "-- SCHEMA_USED %10d\n", cur); + sqlite3_db_status(g.db, SQLITE_DBSTATUS_STMT_USED, &cur, &hiwtr, 0); + fprintf(stderr, "-- STMT_USED %10d\n", cur); + sqlite3_status(SQLITE_STATUS_MEMORY_USED, &cur, &hiwtr, 0); + fprintf(stderr, "-- MEMORY_USED %10d %10d\n", cur, hiwtr); + sqlite3_status(SQLITE_STATUS_MALLOC_SIZE, &cur, &hiwtr, 0); + fprintf(stderr, "-- MALLOC_SIZE %10d\n", hiwtr); + sqlite3_status(SQLITE_STATUS_MALLOC_COUNT, &cur, &hiwtr, 0); + fprintf(stderr, "-- MALLOC_COUNT %10d %10d\n", cur, hiwtr); + sqlite3_status(SQLITE_STATUS_PAGECACHE_OVERFLOW, &cur, &hiwtr, 0); + fprintf(stderr, "-- PCACHE_OVFLOW %10d %10d\n", cur, hiwtr); + fprintf(stderr, "-- prepared statements %10d\n", db.nPrepare); + } + while( db.pAllStmt ){ + db_finalize(db.pAllStmt); + } + db_end_transaction(1); + pStmt = 0; + db_close_config(); + + /* If the localdb (the check-out database) is open and if it has + ** a lot of unused free space, then VACUUM it as we shut down. + */ + if( g.localOpen && strcmp(db_name("localdb"),"main")==0 ){ + int nFree = db_int(0, "PRAGMA main.freelist_count"); + int nTotal = db_int(0, "PRAGMA main.page_count"); + if( nFree>nTotal/4 ){ + db_multi_exec("VACUUM;"); + } + } + + if( g.db ){ + int rc; + sqlite3_wal_checkpoint(g.db, 0); + rc = sqlite3_close(g.db); + if( rc==SQLITE_BUSY && reportErrors ){ + while( (pStmt = sqlite3_next_stmt(g.db, pStmt))!=0 ){ + fossil_warning("unfinalized SQL statement: [%s]", sqlite3_sql(pStmt)); + } + } + g.db = 0; + g.zMainDbType = 0; } g.repositoryOpen = 0; g.localOpen = 0; - g.configOpen = 0; - sqlite3_close(g.db); - g.db = 0; + assert( g.dbConfig==0 ); + assert( g.useAttach==0 ); + assert( g.zConfigDbName==0 ); + assert( g.zConfigDbType==0 ); } /* ** Create a new empty repository database with the given name. @@ -883,180 +1500,283 @@ */ void db_create_repository(const char *zFilename){ db_init_database( zFilename, zRepositorySchema1, + zRepositorySchemaDefaultReports, zRepositorySchema2, (char*)0 ); - isNewRepo = 1; + db_delete_on_failure(zFilename); } /* ** Create the default user accounts in the USER table. 
*/ void db_create_default_users(int setupUserOnly, const char *zDefaultUser){ - const char *zUser; - zUser = db_get("default-user", 0); + const char *zUser = zDefaultUser; + if( zUser==0 ){ + zUser = db_get("default-user", 0); + } if( zUser==0 ){ - zUser = zDefaultUser; + zUser = fossil_getenv("FOSSIL_USER"); } if( zUser==0 ){ -#ifdef __MINGW32__ - zUser = getenv("USERNAME"); +#if defined(_WIN32) + zUser = fossil_getenv("USERNAME"); #else - zUser = getenv("USER"); + zUser = fossil_getenv("USER"); + if( zUser==0 ){ + zUser = fossil_getenv("LOGNAME"); + } #endif } if( zUser==0 ){ zUser = "root"; } db_multi_exec( - "INSERT INTO user(login, pw, cap, info)" - "VALUES(%Q,lower(hex(randomblob(3))),'s','')", zUser + "INSERT OR IGNORE INTO user(login, info) VALUES(%Q,'')", zUser + ); + db_multi_exec( + "UPDATE user SET cap='s', pw=lower(hex(randomblob(3)))" + " WHERE login=%Q", zUser ); if( !setupUserOnly ){ db_multi_exec( - "INSERT INTO user(login,pw,cap,info)" - " VALUES('anonymous',hex(randomblob(8)),'ghmncz','Anon');" - "INSERT INTO user(login,pw,cap,info)" - " VALUES('nobody','','jor','Nobody');" - "INSERT INTO user(login,pw,cap,info)" + "INSERT OR IGNORE INTO user(login,pw,cap,info)" + " VALUES('anonymous',hex(randomblob(8)),'hmnc','Anon');" + "INSERT OR IGNORE INTO user(login,pw,cap,info)" + " VALUES('nobody','','gjorz','Nobody');" + "INSERT OR IGNORE INTO user(login,pw,cap,info)" " VALUES('developer','','dei','Dev');" - "INSERT INTO user(login,pw,cap,info)" + "INSERT OR IGNORE INTO user(login,pw,cap,info)" " VALUES('reader','','kptw','Reader');" ); } } + +/* +** Return a pointer to a string that contains the RHS of an IN operator +** that will select CONFIG table names that are in the list of control +** settings. +*/ +const char *db_setting_inop_rhs(){ + Blob x; + int i; + const char *zSep = ""; + + blob_zero(&x); + blob_append_sql(&x, "("); + for(i=0; aSetting[i].name; i++){ + blob_append_sql(&x, "%s%Q", zSep/*safe-for-%s*/, aSetting[i].name); + zSep = ","; + } + blob_append_sql(&x, ")"); + return blob_sql_text(&x); +} /* ** Fill an empty repository database with the basic information for a ** repository. This function is shared between 'create_repository_cmd' ** ('new') and 'reconstruct_cmd' ('reconstruct'), both of which create ** new repositories. +** +** The zTemplate parameter determines if the settings for the repository +** should be copied from another repository. If zTemplate is 0 then the +** settings will have their normal default values. If zTemplate is +** non-zero, it is assumed that the caller of this function has already +** attached a database using the label "settingSrc". If not, the call to +** this function will fail. ** ** The zInitialDate parameter determines the date of the initial check-in ** that is automatically created. If zInitialDate is 0 then no initial ** check-in is created. The makeServerCodes flag determines whether or ** not server and project codes are invented for this repository. */ void db_initial_setup( + const char *zTemplate, /* Repository from which to copy settings. */ const char *zInitialDate, /* Initial date of repository. 
(ex: "now") */ - const char *zDefaultUser, /* Default user for the repository */ - int makeServerCodes /* True to make new server & project codes */ + const char *zDefaultUser /* Default user for the repository */ ){ char *zDate; Blob hash; Blob manifest; db_set("content-schema", CONTENT_SCHEMA, 0); - db_set("aux-schema", AUX_SCHEMA, 0); - if( makeServerCodes ){ - db_multi_exec( - "INSERT INTO config(name,value)" - " VALUES('server-code', lower(hex(randomblob(20))));" - "INSERT INTO config(name,value)" - " VALUES('project-code', lower(hex(randomblob(20))));" - ); - } + db_set("aux-schema", AUX_SCHEMA_MAX, 0); + db_set("rebuilt", get_version(), 0); + db_multi_exec( + "INSERT INTO config(name,value,mtime)" + " VALUES('server-code', lower(hex(randomblob(20))),now());" + "INSERT INTO config(name,value,mtime)" + " VALUES('project-code', lower(hex(randomblob(20))),now());" + ); if( !db_is_global("autosync") ) db_set_int("autosync", 1, 0); if( !db_is_global("localauth") ) db_set_int("localauth", 0, 0); + if( !db_is_global("timeline-plaintext") ){ + db_set_int("timeline-plaintext", 1, 0); + } db_create_default_users(0, zDefaultUser); + if( zDefaultUser ) g.zLogin = zDefaultUser; user_select(); + + if( zTemplate ){ + /* + ** Copy all settings from the supplied template repository. + */ + db_multi_exec( + "INSERT OR REPLACE INTO config" + " SELECT name,value,mtime FROM settingSrc.config" + " WHERE (name IN %s OR name IN %s)" + " AND name NOT GLOB 'project-*'" + " AND name NOT GLOB 'short-project-*';", + configure_inop_rhs(CONFIGSET_ALL), + db_setting_inop_rhs() + ); + db_multi_exec( + "REPLACE INTO reportfmt SELECT * FROM settingSrc.reportfmt;" + ); + + /* + ** Copy the user permissions, contact information, last modified + ** time, and photo for all the "system" users from the supplied + ** template repository into the one being setup. The other columns + ** are not copied because they contain security information or other + ** data specific to the other repository. The list of columns copied + ** by this SQL statement may need to be revised in the future. + */ + db_multi_exec("UPDATE user SET" + " cap = (SELECT u2.cap FROM settingSrc.user u2" + " WHERE u2.login = user.login)," + " info = (SELECT u2.info FROM settingSrc.user u2" + " WHERE u2.login = user.login)," + " mtime = (SELECT u2.mtime FROM settingSrc.user u2" + " WHERE u2.login = user.login)," + " photo = (SELECT u2.photo FROM settingSrc.user u2" + " WHERE u2.login = user.login)" + " WHERE user.login IN ('anonymous','nobody','developer','reader');" + ); + } if( zInitialDate ){ int rid; blob_zero(&manifest); blob_appendf(&manifest, "C initial\\sempty\\scheck-in\n"); - zDate = db_text(0, "SELECT datetime(%Q)", zInitialDate); - zDate[10]='T'; + zDate = date_in_standard_format(zInitialDate); blob_appendf(&manifest, "D %s\n", zDate); - blob_appendf(&manifest, "P\n"); md5sum_init(); + /* The R-card is necessary here because without it + * fossil versions earlier than versions 1.27 would + * interpret this artifact as a "control". 
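For illustration, the initial check-in manifest assembled by this code has roughly the following shape; the date and user name are made-up values, the R card carries the MD5 of the (empty) content, and the Z card is the MD5 of the card lines above it:

   C initial\sempty\scheck-in
   D 2015-01-01T00:00:00
   R d41d8cd98f00b204e9800998ecf8427e
   T *branch * trunk
   T *sym-trunk *
   U alice
   Z <md5 of the preceding card lines>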
*/ blob_appendf(&manifest, "R %s\n", md5sum_finish(0)); blob_appendf(&manifest, "T *branch * trunk\n"); blob_appendf(&manifest, "T *sym-trunk *\n"); blob_appendf(&manifest, "U %F\n", g.zLogin); md5sum_blob(&manifest, &hash); blob_appendf(&manifest, "Z %b\n", &hash); blob_reset(&hash); - rid = content_put(&manifest, 0, 0); - manifest_crosslink(rid, &manifest); + rid = content_put(&manifest); + manifest_crosslink(rid, &manifest, MC_NONE); } } /* -** COMMAND: new +** COMMAND: new* +** COMMAND: init ** ** Usage: %fossil new ?OPTIONS? FILENAME +** Or: %fossil init ?OPTIONS? FILENAME ** ** Create a repository for a new project in the file named FILENAME. ** This command is distinct from "clone". The "clone" command makes ** a copy of an existing project. This command starts a new project. ** ** By default, your current login name is used to create the default ** admin user. This can be overridden using the -A|--admin-user ** parameter. ** +** By default, all settings will be initialized to their default values. +** This can be overridden using the --template parameter to specify a +** repository file from which to copy the initial settings. When a template +** repository is used, almost all of the settings accessible from the setup +** page, either directly or indirectly, will be copied. Normal users and +** their associated permissions will not be copied; however, the system +** default users "anonymous", "nobody", "reader", "developer", and their +** associated permissions will be copied. +** ** Options: +** --template FILE copy settings from repository file +** --admin-user|-A USERNAME select given USERNAME as admin user +** --date-override DATETIME use DATETIME as time of the initial check-in ** -** --admin-user|-A USERNAME -** --date-override DATETIME -** +** See also: clone */ void create_repository_cmd(void){ char *zPassword; + const char *zTemplate; /* Repository from which to copy settings */ const char *zDate; /* Date of the initial check-in */ const char *zDefaultUser; /* Optional name of the default user */ + zTemplate = find_option("template",0,1); zDate = find_option("date-override",0,1); zDefaultUser = find_option("admin-user","A",1); - if( zDate==0 ) zDate = "now"; + /* We should be done with options.. */ + verify_all_options(); + if( g.argc!=3 ){ usage("REPOSITORY-NAME"); } + + if( -1 != file_size(g.argv[2]) ){ + fossil_fatal("file already exists: %s", g.argv[2]); + } + db_create_repository(g.argv[2]); db_open_repository(g.argv[2]); db_open_config(0); + if( zTemplate ) db_attach(zTemplate, "settingSrc"); db_begin_transaction(); - db_initial_setup(zDate, zDefaultUser, 1); + if( zDate==0 ) zDate = "now"; + db_initial_setup(zTemplate, zDate, zDefaultUser); db_end_transaction(0); - printf("project-id: %s\n", db_get("project-code", 0)); - printf("server-id: %s\n", db_get("server-code", 0)); + if( zTemplate ) db_detach("settingSrc"); + fossil_print("project-id: %s\n", db_get("project-code", 0)); + fossil_print("server-id: %s\n", db_get("server-code", 0)); zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin); - printf("admin-user: %s (initial password is \"%s\")\n", g.zLogin, zPassword); + fossil_print("admin-user: %s (initial password is \"%s\")\n", + g.zLogin, zPassword); } /* ** SQL functions for debugging. ** ** The print() function writes its arguments on stdout, but only ** if the -sqlprint command-line option is turned on. 
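A command-line sketch of creating a new repository with the options described above (file and user names are placeholders):

   fossil init --admin-user alice --template settings.fossil project.fossil

which prints the project-id, server-id, and initial admin password, as shown in create_repository_cmd().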
*/ -static void db_sql_print( +LOCAL void db_sql_print( sqlite3_context *context, int argc, sqlite3_value **argv ){ int i; if( g.fSqlPrint ){ for(i=0; i0 && zSql[n-1]==';') ? "" : ";"); + fossil_trace("%s%s\n", zSql, (n>0 && zSql[n-1]==';') ? "" : ";"); } /* ** Implement the user() SQL function. user() takes no arguments and ** returns the user ID of the current user. */ -static void db_sql_user( +LOCAL void db_sql_user( sqlite3_context *context, int argc, sqlite3_value **argv ){ if( g.zLogin!=0 ){ @@ -1063,16 +1783,16 @@ sqlite3_result_text(context, g.zLogin, -1, SQLITE_STATIC); } } /* -** Implement the cgi() SQL function. cgi() takes a an argument which is -** a name of CGI query parameter. The value of that parameter is returned, -** if available. optional second argument will be returned if the first +** Implement the cgi() SQL function. cgi() takes an argument which is +** a name of CGI query parameter. The value of that parameter is returned, +** if available. Optional second argument will be returned if the first ** doesn't exist as a CGI parameter. */ -static void db_sql_cgi(sqlite3_context *context, int argc, sqlite3_value **argv){ +LOCAL void db_sql_cgi(sqlite3_context *context, int argc, sqlite3_value **argv){ const char* zP; if( argc!=1 && argc!=2 ) return; zP = P((const char*)sqlite3_value_text(argv[0])); if( zP ){ sqlite3_result_text(context, zP, -1, SQLITE_STATIC); @@ -1081,37 +1801,55 @@ if( zP ) sqlite3_result_text(context, zP, -1, SQLITE_TRANSIENT); } } /* -** This is used by the [commit] command. -** -** Return true if either: -** -** a) Global.aCommitFile is NULL, or -** b) Global.aCommitFile contains the integer passed as an argument. -** -** Otherwise return false. +** SQL function: +** +** is_selected(id) +** if_selected(id, X, Y) +** +** On the commit command, when filenames are specified (in order to do +** a partial commit) the vfile.id values for the named files are loaded +** into the g.aCommitFile[] array. This function looks at that array +** to see if a file is named on the command-line. +** +** In the first form (1 argument) return TRUE if either no files are +** named on the command line (g.aCommitFile is NULL meaning that all +** changes are to be committed) or if id is found in g.aCommitFile[] +** (meaning that id was named on the command-line). +** +** In the second form (3 arguments) return argument X if true and Y +** if false. Except if Y is NULL then always return X. */ -static void file_is_selected( +LOCAL void file_is_selected( sqlite3_context *context, int argc, sqlite3_value **argv ){ - assert(argc==1); + int rc = 0; + + assert(argc==1 || argc==3); if( g.aCommitFile ){ int iId = sqlite3_value_int(argv[0]); int ii; for(ii=0; g.aCommitFile[ii]; ii++){ if( iId==g.aCommitFile[ii] ){ - sqlite3_result_int(context, 1); - return; + rc = 1; + break; } } - sqlite3_result_int(context, 0); + }else{ + rc = 1; + } + if( argc==1 ){ + sqlite3_result_int(context, rc); }else{ - sqlite3_result_int(context, 1); + assert( argc==3 ); + assert( rc==0 || rc==1 ); + if( sqlite3_value_type(argv[2-rc])==SQLITE_NULL ) rc = 1-rc; + sqlite3_result_value(context, argv[2-rc]); } } /* ** Convert the input string into an SHA1. 
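A sketch of calling these SQL helpers from a query (illustrative; vfile is the table whose id values the commit code loads, as described above):

   SELECT user();                                  -- login of the current user
   SELECT cgi('name', '(missing)');                -- CGI parameter "name", or the fallback
   SELECT id FROM vfile WHERE is_selected(id);
   SELECT if_selected(id, 'commit', 'skip') FROM vfile;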
Make a notation in the @@ -1136,14 +1874,15 @@ memcpy(zHash, zContent, n); zHash[n] = 0; }else{ sha1sum_step_text(zContent, n); sha1sum_finish(&out); - strcpy(zHash, blob_str(&out)); + sqlite3_snprintf(sizeof(zHash), zHash, "%s", blob_str(&out)); blob_reset(&out); db_multi_exec( - "INSERT OR IGNORE INTO concealed VALUES(%Q,%#Q)", + "INSERT OR IGNORE INTO concealed(hash,content,mtime)" + " VALUES(%Q,%#Q,now())", zHash, n, zContent ); } return zHash; } @@ -1157,11 +1896,11 @@ ** In either case, the string returned is stored in space obtained ** from malloc and should be freed by the calling function. */ char *db_reveal(const char *zKey){ char *zOut; - if( g.okRdAddr ){ + if( g.perm.RdAddr ){ zOut = db_text(0, "SELECT content FROM concealed WHERE hash=%Q", zKey); }else{ zOut = 0; } if( zOut==0 ){ @@ -1168,82 +1907,206 @@ zOut = mprintf("%s", zKey); } return zOut; } -/* -** This function registers auxiliary functions when the SQLite -** database connection is first established. -*/ -LOCAL void db_connection_init(void){ - sqlite3_exec(g.db, "PRAGMA foreign_keys=OFF;", 0, 0, 0); - sqlite3_create_function(g.db, "user", 0, SQLITE_ANY, 0, db_sql_user, 0, 0); - sqlite3_create_function(g.db, "cgi", 1, SQLITE_ANY, 0, db_sql_cgi, 0, 0); - sqlite3_create_function(g.db, "cgi", 2, SQLITE_ANY, 0, db_sql_cgi, 0, 0); - sqlite3_create_function(g.db, "print", -1, SQLITE_UTF8, 0,db_sql_print,0,0); - sqlite3_create_function( - g.db, "file_is_selected", 1, SQLITE_UTF8, 0, file_is_selected,0,0 - ); - if( g.fSqlTrace ){ - sqlite3_trace(g.db, db_sql_trace, 0); - } -} - /* ** Return true if the string zVal represents "true" (or "false"). */ int is_truth(const char *zVal){ - static const char *azOn[] = { "on", "yes", "true", "1" }; + static const char *const azOn[] = { "on", "yes", "true", "1" }; int i; - for(i=0; i. +** +** Return the text of the string if it is found. Return NULL if not +** found. +** +** If the zNonVersionedSetting parameter is not NULL then it holds the +** non-versioned value for this setting. If both a versioned and a +** non-versioned value exist and are not equal, then a warning message +** might be generated. +*/ +char *db_get_versioned(const char *zName, char *zNonVersionedSetting){ + char *zVersionedSetting = 0; + int noWarn = 0; + int found = 0; + struct _cacheEntry { + struct _cacheEntry *next; + const char *zName, *zValue; + } *cacheEntry = 0; + static struct _cacheEntry *cache = 0; + + if( !g.localOpen && g.zOpenRevision==0 ) return zNonVersionedSetting; + /* Look up name in cache */ + cacheEntry = cache; + while( cacheEntry!=0 ){ + if( fossil_strcmp(cacheEntry->zName, zName)==0 ){ + zVersionedSetting = fossil_strdup(cacheEntry->zValue); + break; + } + cacheEntry = cacheEntry->next; + } + /* Attempt to read value from file in checkout if there wasn't a cache hit. */ + if( cacheEntry==0 ){ + Blob versionedPathname; + Blob setting; + blob_zero(&versionedPathname); + blob_zero(&setting); + blob_appendf(&versionedPathname, "%s.fossil-settings/%s", + g.zLocalRoot, zName); + if( !g.localOpen ){ + /* Repository is in the process of being opened, but files have not been + * written to disk. Load from the database. 
*/ + Blob noWarnFile; + if( historical_version_of_file(g.zOpenRevision, + blob_str(&versionedPathname), + &setting, 0, 0, 0, 2)!=2 ){ + found = 1; + } + /* See if there's a no-warn flag */ + blob_append(&versionedPathname, ".no-warn", -1); + blob_zero(&noWarnFile); + if( historical_version_of_file(g.zOpenRevision, + blob_str(&versionedPathname), + &noWarnFile, 0, 0, 0, 2)!=2 ){ + noWarn = 1; + } + blob_reset(&noWarnFile); + }else if( file_size(blob_str(&versionedPathname))>=0 ){ + /* File exists, and contains the value for this setting. Load from + ** the file. */ + if( blob_read_from_file(&setting, blob_str(&versionedPathname))>=0 ){ + found = 1; + } + /* See if there's a no-warn flag */ + blob_append(&versionedPathname, ".no-warn", -1); + if( file_size(blob_str(&versionedPathname))>=0 ){ + noWarn = 1; + } + } + blob_reset(&versionedPathname); + if( found ){ + blob_trim(&setting); /* Avoid non-obvious problems with line endings + ** on boolean properties */ + zVersionedSetting = fossil_strdup(blob_str(&setting)); + } + blob_reset(&setting); + /* Store result in cache, which can be the value or 0 if not found */ + cacheEntry = (struct _cacheEntry*)fossil_malloc(sizeof(struct _cacheEntry)); + cacheEntry->next = cache; + cacheEntry->zName = zName; + cacheEntry->zValue = fossil_strdup(zVersionedSetting); + cache = cacheEntry; + } + /* Display a warning? */ + if( zVersionedSetting!=0 && zNonVersionedSetting!=0 + && zNonVersionedSetting[0]!='\0' && !noWarn + ){ + /* There's a versioned setting, and a non-versioned setting. Tell + ** the user about the conflict */ + fossil_warning( + "setting %s has both versioned and non-versioned values: using " + "versioned value from file .fossil-settings/%s (to silence this " + "warning, either create an empty file named " + ".fossil-settings/%s.no-warn in the check-out root, " + "or delete the non-versioned setting " + "with \"fossil unset %s\")", zName, zName, zName, zName + ); } + /* Prefer the versioned setting */ + return ( zVersionedSetting!=0 ) ? zVersionedSetting : zNonVersionedSetting; } + /* ** Get and set values from the CONFIG, GLOBAL_CONFIG and VVAR table in the ** repository and local databases. +** +** If no such variable exists, return zDefault. Or, if zName is the name +** of a setting, then the zDefault is ignored and the default value of the +** setting is returned instead. If zName is a versioned setting, then +** versioned value takes priority. 
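For example, a versioned value for the versionable ignore-glob setting is read from a file in the check-out root, and an empty companion file suppresses the conflict warning (the contents shown are placeholders):

   .fossil-settings/ignore-glob           contains the glob list, e.g. "*.o" and "*.a"
   .fossil-settings/ignore-glob.no-warn   empty; silences the versioned/non-versioned warning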
*/ -char *db_get(const char *zName, char *zDefault){ +char *db_get(const char *zName, const char *zDefault){ char *z = 0; + const Setting *pSetting = db_find_setting(zName, 0); if( g.repositoryOpen ){ z = db_text(0, "SELECT value FROM config WHERE name=%Q", zName); } - if( z==0 && g.configOpen ){ + if( z==0 && g.zConfigDbName ){ db_swap_connections(); z = db_text(0, "SELECT value FROM global_config WHERE name=%Q", zName); db_swap_connections(); } + if( pSetting!=0 && pSetting->versionable ){ + /* This is a versionable setting, try and get the info from a + ** checked out file */ + z = db_get_versioned(zName, z); + } + if( z==0 ){ + if( zDefault==0 && pSetting && pSetting->def[0] ){ + z = fossil_strdup(pSetting->def); + }else{ + z = fossil_strdup(zDefault); + } + } + return z; +} +char *db_get_mtime(const char *zName, const char *zFormat, const char *zDefault){ + char *z = 0; + if( g.repositoryOpen ){ + z = db_text(0, "SELECT mtime FROM config WHERE name=%Q", zName); + } if( z==0 ){ - z = zDefault; + z = fossil_strdup(zDefault); + }else if( zFormat!=0 ){ + z = db_text(0, "SELECT strftime(%Q,%Q,'unixepoch');", zFormat, z); } return z; } void db_set(const char *zName, const char *zValue, int globalFlag){ db_begin_transaction(); @@ -1251,11 +2114,11 @@ db_swap_connections(); db_multi_exec("REPLACE INTO global_config(name,value) VALUES(%Q,%Q)", zName, zValue); db_swap_connections(); }else{ - db_multi_exec("REPLACE INTO config(name,value) VALUES(%Q,%Q)", + db_multi_exec("REPLACE INTO config(name,value,mtime) VALUES(%Q,%Q,now())", zName, zValue); } if( globalFlag && g.repositoryOpen ){ db_multi_exec("DELETE FROM config WHERE name=%Q", zName); } @@ -1275,11 +2138,11 @@ } db_end_transaction(0); } int db_is_global(const char *zName){ int rc = 0; - if( g.configOpen ){ + if( g.zConfigDbName ){ db_swap_connections(); rc = db_exists("SELECT 1 FROM global_config WHERE name=%Q", zName); db_swap_connections(); } return rc; @@ -1296,11 +2159,11 @@ } db_finalize(&q); }else{ rc = SQLITE_DONE; } - if( rc==SQLITE_DONE && g.configOpen ){ + if( rc==SQLITE_DONE && g.zConfigDbName ){ db_swap_connections(); v = db_int(dflt, "SELECT value FROM global_config WHERE name=%Q", zName); db_swap_connections(); } return v; @@ -1310,11 +2173,11 @@ db_swap_connections(); db_multi_exec("REPLACE INTO global_config(name,value) VALUES(%Q,%d)", zName, value); db_swap_connections(); }else{ - db_multi_exec("REPLACE INTO config(name,value) VALUES(%Q,%d)", + db_multi_exec("REPLACE INTO config(name,value,mtime) VALUES(%Q,%d,now())", zName, value); } if( globalFlag && g.repositoryOpen ){ db_multi_exec("DELETE FROM config WHERE name=%Q", zName); } @@ -1323,12 +2186,19 @@ char *zVal = db_get(zName, dflt ? "on" : "off"); if( is_truth(zVal) ) return 1; if( is_false(zVal) ) return 0; return dflt; } -char *db_lget(const char *zName, char *zDefault){ - return db_text((char*)zDefault, +int db_get_versioned_boolean(const char *zName, int dflt){ + char *zVal = db_get_versioned(zName, 0); + if( zVal==0 ) return dflt; + if( is_truth(zVal) ) return 1; + if( is_false(zVal) ) return 0; + return dflt; +} +char *db_lget(const char *zName, const char *zDefault){ + return db_text(zDefault, "SELECT value FROM vvar WHERE name=%Q", zName); } void db_lset(const char *zName, const char *zValue){ db_multi_exec("REPLACE INTO vvar(name,value) VALUES(%Q,%Q)", zName, zValue); } @@ -1339,229 +2209,841 @@ db_multi_exec("REPLACE INTO vvar(name,value) VALUES(%Q,%d)", zName, value); } /* ** Record the name of a local repository in the global_config() database. 
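A short sketch of the get/set helpers ("editor" is a real setting name, "repository" is the vvar entry used elsewhere in this file, and the values are arbitrary):

   char *zEditor = db_get("editor", 0);     /* repository, else global, else built-in default */
   db_set("editor", "vi", 0);               /* store in this repository's config table */
   db_set("editor", "vi", 1);               /* store in the global config instead */
   char *zRepo = db_lget("repository", 0);  /* per-checkout value from the vvar table */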
-** The repostiroy filename %s is recorded as an entry with a "name" field +** The repository filename %s is recorded as an entry with a "name" field ** of the following form: ** ** repo:%s ** ** The value field is set to 1. +** +** If running from a local checkout, also record the root of the checkout +** as follows: +** +** ckout:%s +** +** Where %s is the checkout root. The value is the repository file. */ void db_record_repository_filename(const char *zName){ + char *zRepoSetting; + char *zCkoutSetting; Blob full; if( zName==0 ){ if( !g.localOpen ) return; - zName = db_lget("repository", 0); + zName = db_repository_filename(); } - file_canonical_name(zName, &full); + file_canonical_name(zName, &full, 0); + (void)filename_collation(); /* Initialize before connection swap */ db_swap_connections(); + zRepoSetting = mprintf("repo:%q", blob_str(&full)); + db_multi_exec( + "DELETE FROM global_config WHERE name %s = %Q;", + filename_collation(), zRepoSetting + ); db_multi_exec( "INSERT OR IGNORE INTO global_config(name,value)" - "VALUES('repo:%q',1)", - blob_str(&full) + "VALUES(%Q,1);", + zRepoSetting ); - db_swap_connections(); + fossil_free(zRepoSetting); + if( g.localOpen && g.zLocalRoot && g.zLocalRoot[0] ){ + Blob localRoot; + file_canonical_name(g.zLocalRoot, &localRoot, 1); + zCkoutSetting = mprintf("ckout:%q", blob_str(&localRoot)); + db_multi_exec( + "DELETE FROM global_config WHERE name %s = %Q;", + filename_collation(), zCkoutSetting + ); + db_multi_exec( + "REPLACE INTO global_config(name, value)" + "VALUES(%Q,%Q);", + zCkoutSetting, blob_str(&full) + ); + db_swap_connections(); + db_optional_sql("repository", + "DELETE FROM config WHERE name %s = %Q;", + filename_collation(), zCkoutSetting + ); + db_optional_sql("repository", + "REPLACE INTO config(name,value,mtime)" + "VALUES(%Q,1,now());", + zCkoutSetting + ); + fossil_free(zCkoutSetting); + blob_reset(&localRoot); + }else{ + db_swap_connections(); + } blob_reset(&full); } /* ** COMMAND: open ** -** Usage: %fossil open FILENAME ?VERSION? ?--keep? +** Usage: %fossil open FILENAME ?VERSION? ?OPTIONS? ** ** Open a connection to the local repository in FILENAME. A checkout ** for the repository is created with its root at the working directory. ** If VERSION is specified then that version is checked out. Otherwise ** the latest version is checked out. No files other than "manifest" ** and "manifest.uuid" are modified if the --keep option is present. ** -** See also the "close" command. +** Options: +** --empty Initialize checkout as being empty, but still connected +** with the local repository. If you commit this checkout, +** it will become a new "initial" commit in the repository. +** --keep Only modify the manifest and manifest.uuid files +** --nested Allow opening a repository inside an opened checkout +** --force-missing Force opening a repository with missing content +** +** See also: close */ void cmd_open(void){ - Blob path; - int vid; + int emptyFlag; int keepFlag; - static char *azNewArgv[] = { 0, "checkout", "--latest", 0, 0, 0 }; + int forceMissingFlag; + int allowNested; + int allowSymlinks; + static char *azNewArgv[] = { 0, "checkout", "--prompt", 0, 0, 0, 0 }; + url_proxy_options(); + emptyFlag = find_option("empty",0,0)!=0; keepFlag = find_option("keep",0,0)!=0; + forceMissingFlag = find_option("force-missing",0,0)!=0; + allowNested = find_option("nested",0,0)!=0; + + /* We should be done with options.. 
*/ + verify_all_options(); + if( g.argc!=3 && g.argc!=4 ){ usage("REPOSITORY-FILENAME ?VERSION?"); } - if( db_open_local() ){ - fossil_panic("already within an open tree rooted at %s", g.zLocalRoot); - } - file_canonical_name(g.argv[2], &path); - db_open_repository(blob_str(&path)); - db_init_database("./_FOSSIL_", zLocalSchema, (char*)0); - db_open_local(); - db_lset("repository", blob_str(&path)); - db_record_repository_filename(blob_str(&path)); - vid = db_int(0, "SELECT pid FROM plink y" - " WHERE NOT EXISTS(SELECT 1 FROM plink x WHERE x.cid=y.pid)"); - if( vid==0 ){ - db_lset_int("checkout", 1); + if( !allowNested && db_open_local(0) ){ + fossil_fatal("already within an open tree rooted at %s", g.zLocalRoot); + } + db_open_repository(g.argv[2]); + + /* Figure out which revision to open. */ + if( !emptyFlag ){ + if( g.argc==4 ){ + g.zOpenRevision = g.argv[3]; + }else if( db_exists("SELECT 1 FROM event WHERE type='ci'") ){ + g.zOpenRevision = db_get("main-branch", "trunk"); + } + } + + if( g.zOpenRevision ){ + /* Since the repository is open and we know the revision now, + ** refresh the allow-symlinks flag. Since neither the local + ** checkout nor the configuration database are open at this + ** point, this should always return the versioned setting, + ** if any, or the default value, which is negative one. The + ** value negative one, in this context, means that the code + ** below should fallback to using the setting value from the + ** repository or global configuration databases only. */ + allowSymlinks = db_get_versioned_boolean("allow-symlinks", -1); + }else{ + allowSymlinks = -1; /* Use non-versioned settings only. */ + } + +#if defined(_WIN32) || defined(__CYGWIN__) +# define LOCALDB_NAME "./_FOSSIL_" +#else +# define LOCALDB_NAME "./.fslckout" +#endif + db_init_database(LOCALDB_NAME, zLocalSchema, +#ifdef FOSSIL_LOCAL_WAL + "COMMIT; PRAGMA journal_mode=WAL; BEGIN;", +#endif + (char*)0); + db_delete_on_failure(LOCALDB_NAME); + db_open_local(0); + if( allowSymlinks>=0 ){ + /* Use the value from the versioned setting, which was read + ** prior to opening the local checkout (i.e. which is most + ** likely empty and does not actually contain any versioned + ** setting files yet). Normally, this value would be given + ** first priority within db_get_boolean(); however, this is + ** a special case because we know the on-disk files may not + ** exist yet. */ + g.allowSymlinks = allowSymlinks; }else{ - char **oldArgv = g.argv; - int oldArgc = g.argc; - db_lset_int("checkout", vid); - azNewArgv[0] = g.argv[0]; - g.argv = azNewArgv; + /* Since the local checkout may not have any files at this + ** point, this will probably be the setting value from the + ** repository or global configuration databases. */ + g.allowSymlinks = db_get_boolean("allow-symlinks", + db_allow_symlinks_by_default()); + } + db_lset("repository", g.argv[2]); + db_record_repository_filename(g.argv[2]); + db_lset_int("checkout", 0); + azNewArgv[0] = g.argv[0]; + g.argv = azNewArgv; + if( !emptyFlag ){ g.argc = 3; - if( oldArgc==4 ){ - azNewArgv[g.argc-1] = oldArgv[3]; + if( g.zOpenRevision ){ + azNewArgv[g.argc-1] = g.zOpenRevision; + }else{ + azNewArgv[g.argc-1] = "--latest"; } if( keepFlag ){ azNewArgv[g.argc++] = "--keep"; } + if( forceMissingFlag ){ + azNewArgv[g.argc++] = "--force-missing"; + } checkout_cmd(); - g.argc = 2; - info_cmd(); } + g.argc = 2; + info_cmd(); } /* -** Print the value of a setting named zName +** Print the current value of a setting identified by the pSetting +** pointer. 
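A command-line sketch of the open command described above (paths and the version name are placeholders):

   mkdir project
   cd project
   fossil open ~/repos/project.fossil trunk --keep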
*/ -static void print_setting(const char *zName){ +static void print_setting(const Setting *pSetting){ Stmt q; if( g.repositoryOpen ){ db_prepare(&q, "SELECT '(local)', value FROM config WHERE name=%Q" " UNION ALL " "SELECT '(global)', value FROM global_config WHERE name=%Q", - zName, zName + pSetting->name, pSetting->name ); }else{ db_prepare(&q, "SELECT '(global)', value FROM global_config WHERE name=%Q", - zName + pSetting->name ); } if( db_step(&q)==SQLITE_ROW ){ - printf("%-20s %-8s %s\n", zName, db_column_text(&q, 0), + fossil_print("%-20s %-8s %s\n", pSetting->name, db_column_text(&q, 0), db_column_text(&q, 1)); }else{ - printf("%-20s\n", zName); + fossil_print("%-20s\n", pSetting->name); + } + if( pSetting->versionable && g.localOpen ){ + /* Check to see if this is overridden by a versionable settings file */ + Blob versionedPathname; + blob_zero(&versionedPathname); + blob_appendf(&versionedPathname, "%s.fossil-settings/%s", + g.zLocalRoot, pSetting->name); + if( file_size(blob_str(&versionedPathname))>=0 ){ + fossil_print(" (overridden by contents of file .fossil-settings/%s)\n", + pSetting->name); + } } db_finalize(&q); } + +#if INTERFACE +/* +** Define all settings, which can be controlled via the set/unset +** command. +** +** var is the name of the internal configuration name for db_(un)set. +** If var is 0, the settings name is used. +** +** width is the length for the edit field on the behavior page, 0 +** is used for on/off checkboxes. +** +** The behaviour page doesn't use a special layout. It lists all +** set-commands and displays the 'set'-help as info. +*/ +struct Setting { + const char *name; /* Name of the setting */ + const char *var; /* Internal variable name used by db_set() */ + int width; /* Width of display. 0 for boolean values. */ + int versionable; /* Is this setting versionable? */ + int forceTextArea; /* Force using a text area for display? 
*/ + const char *def; /* Default value */ +}; +#endif /* INTERFACE */ + +const Setting aSetting[] = { + { "access-log", 0, 0, 0, 0, "off" }, + { "admin-log", 0, 0, 0, 0, "off" }, +#if defined(_WIN32) + { "allow-symlinks", 0, 0, 1, 0, "off" }, +#else + { "allow-symlinks", 0, 0, 1, 0, "on" }, +#endif + { "auto-captcha", "autocaptcha", 0, 0, 0, "on" }, + { "auto-hyperlink", 0, 0, 0, 0, "on", }, + { "auto-shun", 0, 0, 0, 0, "on" }, + { "autosync", 0, 0, 0, 0, "on" }, + { "autosync-tries", 0, 16, 0, 0, "1" }, + { "binary-glob", 0, 40, 1, 0, "" }, +#if defined(_WIN32) || defined(__CYGWIN__) || defined(__DARWIN__) || \ + defined(__APPLE__) + { "case-sensitive", 0, 0, 0, 0, "off" }, +#else + { "case-sensitive", 0, 0, 0, 0, "on" }, +#endif + { "clean-glob", 0, 40, 1, 0, "" }, + { "clearsign", 0, 0, 0, 0, "off" }, + { "crnl-glob", 0, 40, 1, 0, "" }, + { "default-perms", 0, 16, 0, 0, "u" }, + { "diff-binary", 0, 0, 0, 0, "on" }, + { "diff-command", 0, 40, 0, 0, "" }, + { "dont-push", 0, 0, 0, 0, "off" }, + { "dotfiles", 0, 0, 1, 0, "off" }, + { "editor", 0, 32, 0, 0, "" }, + { "empty-dirs", 0, 40, 1, 0, "" }, + { "encoding-glob", 0, 40, 1, 0, "" }, +#if defined(FOSSIL_ENABLE_EXEC_REL_PATHS) + { "exec-rel-paths", 0, 0, 0, 0, "on" }, +#else + { "exec-rel-paths", 0, 0, 0, 0, "off" }, +#endif + { "gdiff-command", 0, 40, 0, 0, "gdiff" }, + { "gmerge-command", 0, 40, 0, 0, "" }, + { "hash-digits", 0, 5, 0, 0, "10" }, + { "http-port", 0, 16, 0, 0, "8080" }, + { "https-login", 0, 0, 0, 0, "off" }, + { "ignore-glob", 0, 40, 1, 0, "" }, + { "keep-glob", 0, 40, 1, 0, "" }, + { "localauth", 0, 0, 0, 0, "off" }, + { "main-branch", 0, 40, 0, 0, "trunk" }, + { "manifest", 0, 0, 1, 0, "off" }, + { "max-loadavg", 0, 25, 0, 0, "0.0" }, + { "max-upload", 0, 25, 0, 0, "250000" }, + { "mtime-changes", 0, 0, 0, 0, "on" }, +#if FOSSIL_ENABLE_LEGACY_MV_RM + { "mv-rm-files", 0, 0, 0, 0, "off" }, +#endif + { "pgp-command", 0, 40, 0, 0, "gpg --clearsign -o " }, + { "proxy", 0, 32, 0, 0, "off" }, + { "relative-paths", 0, 0, 0, 0, "on" }, + { "repo-cksum", 0, 0, 0, 0, "on" }, + { "self-register", 0, 0, 0, 0, "off" }, + { "ssh-command", 0, 40, 0, 0, "" }, + { "ssl-ca-location", 0, 40, 0, 0, "" }, + { "ssl-identity", 0, 40, 0, 0, "" }, +#ifdef FOSSIL_ENABLE_TCL + { "tcl", 0, 0, 0, 0, "off" }, + { "tcl-setup", 0, 40, 1, 1, "" }, +#endif +#ifdef FOSSIL_ENABLE_TH1_DOCS + { "th1-docs", 0, 0, 0, 0, "off" }, +#endif +#ifdef FOSSIL_ENABLE_TH1_HOOKS + { "th1-hooks", 0, 0, 0, 0, "off" }, +#endif + { "th1-setup", 0, 40, 1, 1, "" }, + { "th1-uri-regexp", 0, 40, 1, 0, "" }, + { "web-browser", 0, 32, 0, 0, "" }, + { 0,0,0,0,0,0 } +}; + +/* +** Look up a control setting by its name. Return a pointer to the Setting +** object, or NULL if there is no such setting. +** +** If allowPrefix is true, then the Setting returned is the first one for +** which zName is a prefix of the Setting name. +*/ +const Setting *db_find_setting(const char *zName, int allowPrefix){ + int lwr, mid, upr, c; + int n = (int)strlen(zName) + !allowPrefix; + lwr = 0; + upr = ArraySize(aSetting)-2; + while( upr>=lwr ){ + mid = (upr+lwr)/2; + c = fossil_strncmp(zName, aSetting[mid].name, n); + if( c<0 ){ + upr = mid - 1; + }else if( c>0 ){ + lwr = mid + 1; + }else{ + if( allowPrefix ){ + while( mid>lwr && fossil_strncmp(zName, aSetting[mid-1].name, n)==0 ){ + mid--; + } + } + return &aSetting[mid]; + } + } + return 0; +} /* ** COMMAND: settings -** COMMAND: unset -** %fossil setting ?PROPERTY? ?VALUE? ?-global? -** %fossil unset PROPERTY ?-global? 
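As a rough illustration of how the prefix lookup above behaves for exact and abbreviated names (the entries referenced are from the aSetting[] table; the snippet itself is not part of the patch):

    const Setting *pExact  = db_find_setting("ignore-glob", 0); /* exact name   */
    const Setting *pPrefix = db_find_setting("ign", 1);         /* prefix match */
    /* Both return the "ignore-glob" entry; because it is marked versionable,
    ** its value may also come from .fossil-settings/ignore-glob in a checkout.
    ** When printing, every setting matching a prefix is listed; when assigning
    ** a value, an ambiguous prefix is rejected later in setting_cmd(). */
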
+** COMMAND: unset* +** +** %fossil settings ?PROPERTY? ?VALUE? ?OPTIONS? +** %fossil unset PROPERTY ?OPTIONS? ** -** The "setting" command with no arguments lists all properties and their +** The "settings" command with no arguments lists all properties and their ** values. With just a property name it shows the value of that property. ** With a value argument it changes the property for the current repository. +** +** Settings marked as versionable are overridden by the contents of the +** file named .fossil-settings/PROPERTY in the check-out root, if that +** file exists. ** ** The "unset" command clears a property setting. ** +** +** access-log If enabled, record successful and failed login attempts +** in the "accesslog" table. Default: off +** +** admin-log If enabled, record configuration changes in the +** "admin_log" table. Default: off +** +** allow-symlinks If enabled, don't follow symlinks, and instead treat +** (versionable) them as symlinks on Unix. Has no effect on Windows +** (existing links in repository created on Unix become +** plain-text files with link destination path inside). +** Default: off +** +** auto-captcha If enabled, the Login page provides a button to +** fill in the captcha password. Default: on +** +** auto-hyperlink Use javascript to enable hyperlinks on web pages +** for all users (regardless of the "h" privilege) if the +** User-Agent string in the HTTP header look like it came +** from real person, not a spider or bot. Default: on +** +** auto-shun If enabled, automatically pull the shunning list +** from a server to which the client autosyncs. +** Default: on ** ** autosync If enabled, automatically pull prior to commit ** or update and automatically push after commit or -** tag or branch creation. If the the value is "pullonly" +** tag or branch creation. If the value is "pullonly" ** then only pull operations occur automatically. +** Default: on +** +** autosync-tries If autosync is enabled setting this to a value greater +** than zero will cause autosync to try no more than this +** number of attempts if there is a sync failure. +** Default: 1 +** +** binary-glob The VALUE is a comma or newline-separated list of +** (versionable) GLOB patterns that should be treated as binary files +** for committing and merging purposes. Example: *.jpg +** +** case-sensitive If TRUE, the files whose names differ only in case +** are considered distinct. If FALSE files whose names +** differ only in case are the same file. Defaults to +** TRUE for unix and FALSE for Cygwin, Mac and Windows. ** -** binary-glob The VALUE is a comma-separated list of GLOB patterns -** that should be treated as binary files for merging -** purposes. Example: *.xml +** clean-glob The VALUE is a comma or newline-separated list of GLOB +** (versionable) patterns specifying files that the "clean" command will +** delete without prompting even when the -force flag has +** not been used. Example: *.a *.lib *.o ** ** clearsign When enabled, fossil will attempt to sign all commits ** with gpg. When disabled (the default), commits will -** be unsigned. +** be unsigned. Default: off +** +** crnl-glob A comma or newline-separated list of GLOB patterns for +** (versionable) text files in which it is ok to have CR, CR+NL or mixed +** line endings. Set to "*" to disable CR+NL checking. +** +** default-perms Permissions given automatically to new users. For more +** information on permissions see Users page in Server +** Administration of the HTTP UI. Default: u. 
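The versionable settings described above are ordinary files under .fossil-settings/ in the check-out root, and when such a file exists its contents win over the repository and global databases. A minimal stand-alone sketch of that resolution order (hypothetical helper for illustration only; Fossil's own code goes through db_get_versioned_boolean() and friends):

    #include <stdio.h>

    /* Return 1 and copy the first line of .fossil-settings/NAME into zBuf if a
    ** versioned override exists; otherwise return 0 so the caller falls back
    ** to the repository or global configuration database. */
    static int versioned_setting(const char *zRoot, const char *zName,
                                 char *zBuf, int nBuf){
      char zPath[1024];
      FILE *in;
      snprintf(zPath, sizeof(zPath), "%s/.fossil-settings/%s", zRoot, zName);
      in = fopen(zPath, "rb");
      if( in==0 ) return 0;
      if( fgets(zBuf, nBuf, in)==0 ) zBuf[0] = 0;
      fclose(in);
      return 1;
    }
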
+** +** diff-binary If TRUE (the default), permit files that may be binary +** or that match the "binary-glob" setting to be used with +** external diff programs. If FALSE, skip these files. ** ** diff-command External command to run when performing a diff. ** If undefined, the internal text diff will be used. ** ** dont-push Prevent this repository from pushing from client to ** server. Useful when setting up a private branch. +** +** dotfiles Include --dotfiles option for all compatible commands. +** (versionable) ** ** editor Text editor command used for check-in comments. +** +** empty-dirs A comma or newline-separated list of pathnames. On +** (versionable) update and checkout commands, if no file or directory +** exists with that name, an empty directory will be +** created. +** +** encoding-glob The VALUE is a comma or newline-separated list of GLOB +** (versionable) patterns specifying files that the "commit" command will +** ignore when issuing warnings about text files that may +** use another encoding than ASCII or UTF-8. Set to "*" +** to disable encoding checking. +** +** exec-rel-paths When executing certain external commands (e.g. diff and +** gdiff), use relative paths. ** ** gdiff-command External command to run when performing a graphical ** diff. If undefined, text diff will be used. +** +** gmerge-command A graphical merge conflict resolver command operating +** on four files. +** Ex: kdiff3 "%baseline" "%original" "%merge" -o "%output" +** Ex: xxdiff "%original" "%baseline" "%merge" -M "%output" +** Ex: meld "%baseline" "%original" "%merge" "%output" +** +** hash-digits The number of hexadecimal digits of the SHA1 hash to +** display. (Default: 10; Minimum: 6) ** ** http-port The TCP/IP port number to use by the "server" ** and "ui" commands. Default: 8080 ** -** ignore-glob The VALUE is a comma-separated list of GLOB patterns -** specifying files that the "extra" command will ignore. -** Example: *.o,*.obj,*.exe +** https-login Send login credentials using HTTPS instead of HTTP +** even if the login page request came via HTTP. +** +** ignore-glob The VALUE is a comma or newline-separated list of GLOB +** (versionable) patterns specifying files that the "add", "addremove", +** "clean", and "extra" commands will ignore. +** Example: *.log customCode.c notes.txt +** +** keep-glob The VALUE is a comma or newline-separated list of GLOB +** (versionable) patterns specifying files that the "clean" command will +** keep. ** ** localauth If enabled, require that HTTP connections from ** 127.0.0.1 be authenticated by password. If ** false, all HTTP requests from localhost have ** unrestricted access to the repository. +** +** main-branch The primary branch for the project. Default: trunk +** +** manifest If enabled, automatically create files "manifest" and +** (versionable) "manifest.uuid" in every checkout. The SQLite and +** Fossil repositories both require this. Default: off. +** +** max-loadavg Some CPU-intensive web pages (ex: /zip, /tarball, /blame) +** are disallowed if the system load average goes above this +** value. "0.0" means no limit. This only works on unix. +** Only local settings of this value make a difference since +** when running as a web-server, Fossil does not open the +** global configuration database. +** +** max-upload A limit on the size of uplink HTTP requests. The +** default is 250000 bytes. ** ** mtime-changes Use file modification times (mtimes) to detect when ** files have been modified. (Default "on".) 
+** +** mv-rm-files If enabled (and Fossil was compiled with legacy "mv/rm" +** support), the "mv" and "rename" commands will also move +** the associated files within the checkout -AND- the "rm" +** and "delete" commands will also remove the associated +** files from within the checkout. Default: off. ** ** pgp-command Command used to clear-sign manifests at check-in. ** The default is "gpg --clearsign -o ". ** ** proxy URL of the HTTP proxy. If undefined or "off" then ** the "http_proxy" environment variable is consulted. ** If the http_proxy environment variable is undefined ** then a direct HTTP connection is used. +** +** relative-paths When showing changes and extras, report paths relative +** to the current working directory. Default: "on" +** +** repo-cksum Compute checksums over all files in each checkout +** as a double-check of correctness. Defaults to "on". +** Disable on large repositories for a performance +** improvement. +** +** self-register Allow users to register themselves through the HTTP UI. +** This is useful if you want to see other names than +** "Anonymous" in e.g. ticketing system. On the other hand +** users can not be deleted. Default: off. +** +** ssh-command Command used to talk to a remote machine with +** the "ssh://" protocol. +** +** ssl-ca-location The full pathname to a file containing PEM encoded +** CA root certificates, or a directory of certificates +** with filenames formed from the certificate hashes as +** required by OpenSSL. +** If set, this will override the OS default list of +** OpenSSL CAs. If unset, the default list will be used. +** Some platforms may add additional certificates. +** Check your platform behaviour is as required if the +** exact contents of the CA root is critical for your +** application. +** +** ssl-identity The full pathname to a file containing a certificate +** and private key in PEM format. Create by concatenating +** the certificate and private key files. +** This identity will be presented to SSL servers to +** authenticate this client, in addition to the normal +** password authentication. +** +** tcl If enabled (and Fossil was compiled with Tcl support), +** Tcl integration commands will be added to the TH1 +** interpreter, allowing arbitrary Tcl expressions and +** scripts to be evaluated from TH1. Additionally, the Tcl +** interpreter will be able to evaluate arbitrary TH1 +** expressions and scripts. Default: off. +** +** tcl-setup This is the setup script to be evaluated after creating +** (versionable) and initializing the Tcl interpreter. By default, this +** is empty and no extra setup is performed. +** +** th1-docs WARNING: If enabled (and Fossil was compiled with TH1 +** support for embedded documentation files), this allows +** embedded documentation files to contain arbitrary TH1 +** scripts that are evaluated on the server. If native +** Tcl integration is also enabled, this setting has the +** potential to allow anybody with check-in privileges to +** do almost anything that the associated operating system +** user account could do. Extreme caution should be used +** when enabling this setting. Default: off. +** +** th1-hooks If enabled (and Fossil was compiled with support for TH1 +** hooks), special TH1 commands will be called before and +** after any Fossil command or web page. Default: off. +** +** th1-setup This is the setup script to be evaluated after creating +** (versionable) and initializing the TH1 interpreter. By default, this +** is empty and no extra setup is performed. 
+** +** th1-uri-regexp Specify which URI's are allowed in HTTP requests from +** (versionable) TH1 scripts. If empty, no HTTP requests are allowed +** whatsoever. The default is an empty string. ** ** web-browser A shell command used to launch your preferred ** web browser when given a URL as an argument. ** Defaults to "start" on windows, "open" on Mac, ** and "firefox" on Unix. +** +** Options: +** --global set or unset the given property globally instead of +** setting or unsetting it for the open repository only. +** +** See also: configuration */ void setting_cmd(void){ - static const char *azName[] = { - "autosync", - "binary-glob", - "clearsign", - "diff-command", - "dont-push", - "editor", - "gdiff-command", - "ignore-glob", - "http-port", - "localauth", - "mtime-changes", - "pgp-command", - "proxy", - "web-browser", - }; int i; int globalFlag = find_option("global","g",0)!=0; int unsetFlag = g.argv[1][0]=='u'; db_open_config(1); - db_find_and_open_repository(0); + if( !globalFlag ){ + db_find_and_open_repository(OPEN_ANY_SCHEMA | OPEN_OK_NOT_FOUND, 0); + } if( !g.repositoryOpen ){ globalFlag = 1; } if( unsetFlag && g.argc!=3 ){ usage("PROPERTY ?-global?"); } + + /* Verify that the aSetting[] entries are in sorted order. This is + ** necessary for the binary search in db_find_setting() to work correctly. + */ + for(i=1; aSetting[i].name; i++){ + if( fossil_strcmp(aSetting[i-1].name, aSetting[i].name)>=0 ){ + fossil_panic("Internal Error: aSetting[] entries for \"%s\"" + " and \"%s\" are out of order.", + aSetting[i-1].name, aSetting[i].name); + } + } + if( g.argc==2 ){ - for(i=0; i=sizeof(azName)/sizeof(azName[0]) ){ + int n = (int)strlen(zName); + const Setting *pSetting = db_find_setting(zName, 1); + if( pSetting==0 ){ fossil_fatal("no such setting: %s", zName); } - if( unsetFlag ){ - db_unset(azName[i], globalFlag); - }else if( g.argc==4 ){ - db_set(azName[i], g.argv[3], globalFlag); + if( globalFlag && fossil_strcmp(pSetting->name, "manifest")==0 ){ + fossil_fatal("cannot set 'manifest' globally"); + } + if( unsetFlag || g.argc==4 ){ + int isManifest = fossil_strcmp(pSetting->name, "manifest")==0; + if( n!=strlen(pSetting[0].name) && pSetting[1].name && + fossil_strncmp(pSetting[1].name, zName, n)==0 ){ + Blob x; + int i; + blob_init(&x,0,0); + for(i=0; pSetting[i].name; i++){ + if( fossil_strncmp(pSetting[i].name,zName,n)!=0 ) break; + blob_appendf(&x, " %s", pSetting[i].name); + } + fossil_fatal("ambiguous setting \"%s\" - might be:%s", + zName, blob_str(&x)); + } + if( globalFlag && isManifest ){ + fossil_fatal("cannot set 'manifest' globally"); + } + if( unsetFlag ){ + db_unset(pSetting->name, globalFlag); + }else{ + db_set(pSetting->name, g.argv[3], globalFlag); + } + if( isManifest && g.localOpen ){ + manifest_to_disk(db_lget_int("checkout", 0)); + } }else{ - print_setting(azName[i]); + while( pSetting->name && fossil_strncmp(pSetting->name,zName,n)==0 ){ + print_setting(pSetting); + pSetting++; + } } }else{ - usage("?PROPERTY? ?VALUE?"); + usage("?PROPERTY? ?VALUE? ?-global?"); + } +} + +/* +** The input in a timespan measured in days. Return a string which +** describes that timespan in units of seconds, minutes, hours, days, +** or years, depending on its duration. 
+*/ +char *db_timespan_name(double rSpan){ + if( rSpan<0 ) rSpan = -rSpan; + rSpan *= 24.0*3600.0; /* Convert units to seconds */ + if( rSpan<120.0 ){ + return sqlite3_mprintf("%.1f seconds", rSpan); + } + rSpan /= 60.0; /* Convert units to minutes */ + if( rSpan<90.0 ){ + return sqlite3_mprintf("%.1f minutes", rSpan); + } + rSpan /= 60.0; /* Convert units to hours */ + if( rSpan<=48.0 ){ + return sqlite3_mprintf("%.1f hours", rSpan); + } + rSpan /= 24.0; /* Convert units to days */ + if( rSpan<=365.0 ){ + return sqlite3_mprintf("%.1f days", rSpan); + } + rSpan /= 356.24; /* Convert units to years */ + return sqlite3_mprintf("%.1f years", rSpan); +} + +/* +** COMMAND: test-timespan +** %fossil test-timespan TIMESTAMP +** +** Print the approximate span of time from now to TIMESTAMP. +*/ +void test_timespan_cmd(void){ + double rDiff; + if( g.argc!=3 ) usage("TIMESTAMP"); + sqlite3_open(":memory:", &g.db); + rDiff = db_double(0.0, "SELECT julianday('now') - julianday(%Q)", g.argv[2]); + fossil_print("Time differences: %s\n", db_timespan_name(rDiff)); + sqlite3_close(g.db); + g.db = 0; +} + +/* +** COMMAND: test-without-rowid +** %fossil test-without-rowid FILENAME... +** +** Change the Fossil repository FILENAME to make use of the WITHOUT ROWID +** optimization. FILENAME can also be the ~/.fossil file or a local +** .fslckout or _FOSSIL_ file. +** +** The purpose of this command is for testing the WITHOUT ROWID capabilities +** of SQLite. There is no big advantage to using WITHOUT ROWID in Fossil. +** +** Options: +** --dryrun | -n No changes. Just print what would happen. +*/ +void test_without_rowid(void){ + int i, j; + Stmt q; + Blob allSql; + int dryRun = find_option("dry-run", "n", 0)!=0; + for(i=2; i #include #include #include #include "delta.h" @@ -64,11 +65,11 @@ ** The "u32" type must be an unsigned 32-bit integer. Adjust this */ typedef unsigned int u32; /* -** Must be a 16-bit value +** Must be a 16-bit value */ typedef short int s16; typedef unsigned short int u16; #endif /* INTERFACE */ @@ -81,11 +82,11 @@ /* ** The current state of the rolling hash. ** ** z[] holds the values that have been hashed. z[] is a circular buffer. -** z[i] is the first entry and z[(i+NHASH-1)%NHASH] is the last entry of +** z[i] is the first entry and z[(i+NHASH-1)%NHASH] is the last entry of ** the window. ** ** Hash.a is the sum of all elements of hash.z[]. Hash.b is a weighted ** sum. Hash.b is z[i]*NHASH + z[i+1]*(NHASH-1) + ... + z[i+NHASH-1]*1. ** (Each index for z[] should be module NHASH, of course. The %NHASH operator @@ -101,16 +102,16 @@ /* ** Initialize the rolling hash using the first NHASH characters of z[] */ static void hash_init(hash *pHash, const char *z){ u16 a, b, i; - a = b = 0; - for(i=0; iz[i] = z[i]; + b += a; } + memcpy(pHash->z, z, NHASH); pHash->a = a & 0xffff; pHash->b = b & 0xffff; pHash->i = 0; } @@ -129,16 +130,34 @@ ** Return a 32-bit hash value */ static u32 hash_32bit(hash *pHash){ return (pHash->a & 0xffff) | (((u32)(pHash->b & 0xffff))<<16); } + +/* +** Compute a hash on NHASH bytes. +** +** This routine is intended to be equivalent to: +** hash h; +** hash_init(&h, zInput); +** return hash_32bit(&h); +*/ +static u32 hash_once(const char *z){ + u16 a, b, i; + a = b = z[0]; + for(i=1; i=x; i++, x <<= 6){} return i; } +#ifdef __GNUC__ +# define GCC_VERSION (__GNUC__*1000000+__GNUC_MINOR__*1000+__GNUC_PATCHLEVEL__) +#else +# define GCC_VERSION 0 +#endif + /* -** Compute a 32-bit checksum on the N-byte buffer. Return the result. 
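For reference, the unit boundaries chosen by db_timespan_name() above work out roughly as follows (values computed by hand from the code, not captured from a build):

    db_timespan_name(0.0007);  /* "60.5 seconds"  (under two minutes)         */
    db_timespan_name(0.25);    /* "6.0 hours"     (90 minutes up to 48 hours) */
    db_timespan_name(3.5);     /* "3.5 days"      (up to a year)              */
    db_timespan_name(400.0);   /* "1.1 years"                                 */
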
+** Compute a 32-bit big-endian checksum on the N-byte buffer. If the +** buffer is not a multiple of 4 bytes length, compute the sum that would +** have occurred if the buffer was padded with zeros to the next multiple +** of four bytes. */ static unsigned int checksum(const char *zIn, size_t N){ + static const int byteOrderTest = 1; const unsigned char *z = (const unsigned char *)zIn; + const unsigned char *zEnd = (const unsigned char*)&zIn[N&~3]; unsigned sum = 0; - while(N >= 16){ - sum += ((unsigned)z[0] + z[4] + z[8] + z[12]) << 24; - sum += ((unsigned)z[1] + z[5] + z[9] + z[13]) << 16; - sum += ((unsigned)z[2] + z[6] + z[10]+ z[14]) << 8; - sum += ((unsigned)z[3] + z[7] + z[11]+ z[15]); - z += 16; - N -= 16; - } - while(N >= 4){ - sum += (z[0]<<24) | (z[1]<<16) | (z[2]<<8) | z[3]; - z += 4; - N -= 4; - } - switch(N){ + assert( (z - (const unsigned char*)0)%4==0 ); /* Four-byte alignment */ + if( 0==*(char*)&byteOrderTest ){ + /* This is a big-endian machine */ + while( z=4003000 + while( z=1300 + while( z= 16){ + sum0 += ((unsigned)z[0] + z[4] + z[8] + z[12]); + sum1 += ((unsigned)z[1] + z[5] + z[9] + z[13]); + sum2 += ((unsigned)z[2] + z[6] + z[10]+ z[14]); + sum += ((unsigned)z[3] + z[7] + z[11]+ z[15]); + z += 16; + N -= 16; + } + while(N >= 4){ + sum0 += z[0]; + sum1 += z[1]; + sum2 += z[2]; + sum += z[3]; + z += 4; + N -= 4; + } + sum += (sum2 << 8) + (sum1 << 16) + (sum0 << 24); +#endif + } + switch(N&3){ case 3: sum += (z[2] << 8); case 2: sum += (z[1] << 16); case 1: sum += (z[0] << 24); default: ; } @@ -221,11 +280,11 @@ } /* ** Create a new delta. ** -** The delta is written into a preallocated buffer, zDelta, which +** The delta is written into a preallocated buffer, zDelta, which ** should be at least 60 bytes longer than the target file, zOut. ** The delta string will be NUL-terminated, but it might also contain ** embedded NUL characters if either the zSrc or zOut files are ** binary. This function returns the length of the delta string ** in bytes, excluding the final NUL terminator character. @@ -237,11 +296,11 @@ ** delta file z, a program can compute the size of the output file ** simply by reading the first line and decoding the base-64 number ** found there. The delta_output_size() routine does exactly this. ** ** After the initial size number, the delta consists of a series of -** literal text segments and commands to copy from the SOURCE file. +** literal text segments and commands to copy from the SOURCE file. ** A copy command looks like this: ** ** NNN@MMM, ** ** where NNN is the number of bytes to be copied and MMM is the offset @@ -275,11 +334,11 @@ ** search for a matching section in the source file. When a match ** is found, a copy command is added to the delta. An effort is ** made to extend the matching section to regions that come before ** and after the 16-byte hash window. A copy command is only issued ** if the result would use less space that just quoting the text -** literally. Literal text is added to the delta for sections that +** literally. Literal text is added to the delta for sections that ** do not match or which can not be encoded efficiently using copy ** commands. */ int delta_create( const char *zSrc, /* The source or pattern file */ @@ -317,19 +376,16 @@ /* Compute the hash table used to locate matching sections in the ** source file. 
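Stated another way, the checksum above is the 32-bit sum of the buffer taken as big-endian words with the tail zero-padded, whichever of the three implementations gets compiled in. For example (illustrative, not part of the patch):

    checksum("abcd", 4);    /* 0x61626364                            */
    checksum("abcde", 5);   /* 0x61626364 + 0x65000000, modulo 2^32  */
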
*/ nHash = lenSrc/NHASH; - collide = malloc( nHash*2*sizeof(int) ); - if( collide==0 ) return -1; + collide = fossil_malloc( nHash*2*sizeof(int) ); landmark = &collide[nHash]; memset(landmark, -1, nHash*sizeof(int)); memset(collide, -1, nHash*sizeof(int)); for(i=0; i=0 && (limit--)>0 ){ /* - ** The hash window has identified a potential match against + ** The hash window has identified a potential match against ** landmark block iBlock. But we need to investigate further. - ** + ** ** Look for a region in zOut that matches zSrc. Anchor the search ** at zSrc[iSrc] and zOut[base+i]. Do not include anything prior to ** zOut[base] or after zOut[outLen] nor anything after zSrc[srcLen]. ** ** Set cnt equal to the length of the match and set ofst so that @@ -366,18 +422,21 @@ ** copy command is less than the amount of literal text to be copied. */ int cnt, ofst, litsz; int j, k, x, y; int sz; + int limitX; /* Beginning at iSrc, match forwards as far as we can. j counts ** the number of characters that match */ iSrc = iBlock*NHASH; - for(j=0, x=iSrc, y=base+i; xlenOut ){ + if( base+i+NHASH>=lenOut ){ /* We have reached the end of the file and have not found any ** matches. Do an "insert" for everything that does not match */ putInt(lenOut-base, &zDelta); *(zDelta++) = ':'; memcpy(zDelta, &zOut[base], lenOut-base); @@ -460,17 +519,17 @@ zDelta += lenOut-base; } /* Output the final checksum record. */ putInt(checksum(zOut, lenOut), &zDelta); *(zDelta++) = ';'; - free(collide); - return zDelta - zOrigDelta; + fossil_free(collide); + return zDelta - zOrigDelta; } /* ** Return the size (in bytes) of the output from applying -** a delta. +** a delta. ** ** This routine is provided so that an procedure that is able ** to call delta_apply() can learn how much space is required ** for the output and hence allocate nor more space that is really ** needed. @@ -513,11 +572,13 @@ int lenDelta, /* Length of the delta */ char *zOut /* Write the output into this preallocated buffer */ ){ unsigned int limit; unsigned int total = 0; +#ifndef FOSSIL_OMIT_DELTA_CKSUM_TEST char *zOrigOut = zOut; +#endif limit = getInt(&zDelta, &lenDelta); if( *zDelta!='\n' ){ /* ERROR: size integer not terminated by "\n" */ return -1; @@ -528,11 +589,11 @@ cnt = getInt(&zDelta, &lenDelta); switch( zDelta[0] ){ case '@': { zDelta++; lenDelta--; ofst = getInt(&zDelta, &lenDelta); - if( zDelta[0]!=',' ){ + if( lenDelta>0 && zDelta[0]!=',' ){ /* ERROR: copy command not terminated by ',' */ return -1; } zDelta++; lenDelta--; DEBUG1( printf("COPY %d from %d\n", cnt, ofst); ) @@ -568,24 +629,87 @@ break; } case ';': { zDelta++; lenDelta--; zOut[0] = 0; +#ifndef FOSSIL_OMIT_DELTA_CKSUM_TEST if( cnt!=checksum(zOrigOut, total) ){ /* ERROR: bad checksum */ return -1; } +#endif if( total!=limit ){ /* ERROR: generated size does not match predicted size */ return -1; } return total; } + default: { + /* ERROR: unknown delta operator */ + return -1; + } + } + } + /* ERROR: unterminated delta */ + return -1; +} + +/* +** Analyze a delta. Figure out the total number of bytes copied from +** source to target, and the total number of bytes inserted by the delta, +** and return both numbers. 
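A minimal round trip through the routines above, sized per the "60 bytes longer than the target" rule (strings and buffer sizes are illustrative only):

    const char *zSrc = "The quick brown fox jumps over the lazy dog.";
    const char *zTrg = "The quick brown fox jumps over the lazy frog.";
    char zDelta[200], zCheck[200];
    int nDelta = delta_create(zSrc, strlen(zSrc), zTrg, strlen(zTrg), zDelta);
    int nOut   = delta_apply(zSrc, strlen(zSrc), zDelta, nDelta, zCheck);
    /* On success nOut==strlen(zTrg), memcmp(zCheck, zTrg, nOut)==0, and
    ** delta_output_size(zDelta, nDelta) also returns strlen(zTrg). */
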
+*/ +int delta_analyze( + const char *zDelta, /* Delta to apply to the pattern */ + int lenDelta, /* Length of the delta */ + int *pnCopy, /* OUT: Number of bytes copied */ + int *pnInsert /* OUT: Number of bytes inserted */ +){ + unsigned int nInsert = 0; + unsigned int nCopy = 0; + + (void)getInt(&zDelta, &lenDelta); + if( *zDelta!='\n' ){ + /* ERROR: size integer not terminated by "\n" */ + return -1; + } + zDelta++; lenDelta--; + while( *zDelta && lenDelta>0 ){ + unsigned int cnt; + cnt = getInt(&zDelta, &lenDelta); + switch( zDelta[0] ){ + case '@': { + zDelta++; lenDelta--; + (void)getInt(&zDelta, &lenDelta); + if( lenDelta>0 && zDelta[0]!=',' ){ + /* ERROR: copy command not terminated by ',' */ + return -1; + } + zDelta++; lenDelta--; + nCopy += cnt; + break; + } + case ':': { + zDelta++; lenDelta--; + nInsert += cnt; + if( cnt>lenDelta ){ + /* ERROR: insert count exceeds size of delta */ + return -1; + } + zDelta += cnt; + lenDelta -= cnt; + break; + } + case ';': { + *pnCopy = nCopy; + *pnInsert = nInsert; + return 0; + } default: { /* ERROR: unknown delta operator */ return -1; } } } /* ERROR: unterminated delta */ return -1; } Index: src/deltacmd.c ================================================================== --- src/deltacmd.c +++ src/deltacmd.c @@ -17,11 +17,11 @@ ** ** This module implements the interface to the delta generator. */ #include "config.h" #include "deltacmd.h" - + /* ** Create a delta that describes the change from pOriginal to pTarget ** and put that delta in pDelta. The pDelta blob is assumed to be ** uninitialized. */ @@ -43,35 +43,72 @@ } /* ** COMMAND: test-delta-create ** -** Given two input files, create and output a delta that carries -** the first file into the second. +** Usage: %fossil test-delta-create FILE1 FILE2 DELTA +** +** Create and output a delta that carries FILE1 into FILE2. +** Store the result in DELTA. */ void delta_create_cmd(void){ Blob orig, target, delta; if( g.argc!=5 ){ - fprintf(stderr,"Usage: %s %s ORIGIN TARGET DELTA\n", g.argv[0], g.argv[1]); - exit(1); + usage("ORIGIN TARGET DELTA"); } if( blob_read_from_file(&orig, g.argv[2])<0 ){ - fprintf(stderr,"cannot read %s\n", g.argv[2]); - exit(1); + fossil_fatal("cannot read %s\n", g.argv[2]); } if( blob_read_from_file(&target, g.argv[3])<0 ){ - fprintf(stderr,"cannot read %s\n", g.argv[3]); - exit(1); + fossil_fatal("cannot read %s\n", g.argv[3]); } blob_delta_create(&orig, &target, &delta); if( blob_write_to_file(&delta, g.argv[4]) @@ -24,15 +24,17 @@ /* ** Create a temporary table named "leaves" if it does not ** already exist. Load this table with the RID of all -** check-ins that are leaves which are decended from -** check-in iBase. If iBase==0, find all leaves within the -** entire check-in hierarchy. +** check-ins that are leaves which are descended from +** check-in iBase. ** ** A "leaf" is a check-in that has no children in the same branch. +** There is a separate permanent table LEAF that contains all leaves +** in the tree. This routine is used to compute a subset of that +** table consisting of leaves that are descended from a single check-in. ** ** The closeMode flag determines behavior associated with the "closed" ** tag: ** ** closeMode==0 Show all leaves regardless of the "closed" tag. @@ -55,26 +57,11 @@ " rid INTEGER PRIMARY KEY" ");" "DELETE FROM leaves;" ); - /* We are checking all descendants of iBase. If iBase==0, then - ** use a short-cut to find all leaves anywhere in the hierarchy. 
- */ - if( iBase<=0 ){ - db_multi_exec( - "INSERT OR IGNORE INTO leaves" - " SELECT cid FROM plink" - " EXCEPT" - " SELECT pid FROM plink" - " WHERE coalesce((SELECT value FROM tagxref" - " WHERE tagid=%d AND rid=plink.pid),'trunk')" - " == coalesce((SELECT value FROM tagxref" - " WHERE tagid=%d AND rid=plink.cid),'trunk');", - TAG_BRANCH, TAG_BRANCH - ); - }else{ + if( iBase>0 ){ Bag seen; /* Descendants seen */ Bag pending; /* Unpropagated descendants */ Stmt q1; /* Query to find children of a check-in */ Stmt isBr; /* Query to check to see if a check-in starts a new branch */ Stmt ins; /* INSERT statement for a new record */ @@ -84,11 +71,11 @@ bag_init(&pending); bag_insert(&pending, iBase); /* This query returns all non-branch-merge children of check-in :rid. ** - ** If a a child is a merge of a fork within the same branch, it is + ** If a child is a merge of a fork within the same branch, it is ** returned. Only merge children in different branches are excluded. */ db_prepare(&q1, "SELECT cid FROM plink" " WHERE pid=:rid" @@ -97,25 +84,25 @@ " WHERE tagid=%d AND rid=plink.pid), 'trunk')" "=coalesce((SELECT value FROM tagxref" " WHERE tagid=%d AND rid=plink.cid), 'trunk'))", TAG_BRANCH, TAG_BRANCH ); - + /* This query returns a single row if check-in :rid is the first ** check-in of a new branch. */ - db_prepare(&isBr, + db_prepare(&isBr, "SELECT 1 FROM tagxref" " WHERE rid=:rid AND tagid=%d AND tagtype=2" " AND srcid>0", TAG_BRANCH ); - + /* This statement inserts check-in :rid into the LEAVES table. */ db_prepare(&ins, "INSERT OR IGNORE INTO leaves VALUES(:rid)"); - + while( bag_count(&pending) ){ int rid = bag_first(&pending); int cnt = 0; bag_remove(&pending, rid); db_bind_int(&q1, ":rid", rid); @@ -143,10 +130,15 @@ db_finalize(&ins); db_finalize(&isBr); db_finalize(&q1); bag_clear(&pending); bag_clear(&seen); + }else{ + db_multi_exec( + "INSERT INTO leaves" + " SELECT leaf.rid FROM leaf" + ); } if( closeMode==1 ){ db_multi_exec( "DELETE FROM leaves WHERE rid IN" " (SELECT leaves.rid FROM leaves, tagxref" @@ -169,144 +161,302 @@ /* ** Load the record ID rid and up to N-1 closest ancestors into ** the "ok" table. */ -void compute_ancestors(int rid, int N){ - Bag seen; - PQueue queue; - bag_init(&seen); - pqueue_init(&queue); - bag_insert(&seen, rid); - pqueue_insert(&queue, rid, 0.0); - while( (N--)>0 && (rid = pqueue_extract(&queue))!=0 ){ - Stmt q; - db_multi_exec("INSERT OR IGNORE INTO ok VALUES(%d)", rid); - db_prepare(&q, - "SELECT a.pid, b.mtime FROM plink a LEFT JOIN plink b ON b.cid=a.pid" - " WHERE a.cid=%d", rid - ); - while( db_step(&q)==SQLITE_ROW ){ - int pid = db_column_int(&q, 0); - double mtime = db_column_double(&q, 1); - if( bag_insert(&seen, pid) ){ - pqueue_insert(&queue, pid, -mtime); - } - } - db_finalize(&q); - } - bag_clear(&seen); - pqueue_clear(&queue); +void compute_ancestors(int rid, int N, int directOnly){ + db_multi_exec( + "WITH RECURSIVE " + " ancestor(rid, mtime) AS (" + " SELECT %d, mtime FROM event WHERE objid=%d " + " UNION " + " SELECT plink.pid, event.mtime" + " FROM ancestor, plink, event" + " WHERE plink.cid=ancestor.rid" + " AND event.objid=plink.pid %s" + " ORDER BY mtime DESC LIMIT %d" + " )" + "INSERT INTO ok" + " SELECT rid FROM ancestor;", + rid, rid, directOnly ? "AND plink.isPrim" : "", N + ); +} + +/* +** Compute all direct ancestors (merge ancestors do not count) +** for the check-in rid and put them in a table named "ancestor". 
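Callers of the rewritten helper above are expected to create the temporary "ok" table themselves and then read the ancestor set back out of it, roughly like this (illustrative; it mirrors the call made by mtime_of_manifest_file() just below):

    db_multi_exec("DROP TABLE IF EXISTS temp.ok;"
                  "CREATE TEMP TABLE ok(x INTEGER PRIMARY KEY);");
    compute_ancestors(vid, 20, 1);   /* the 20 nearest direct ancestors of vid */
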
+** Label each generation with consecutive integers going backwards +** in time such that rid has the smallest generation number and the oldest +** direct ancestor as the largest generation number. +*/ +void compute_direct_ancestors(int rid){ + db_multi_exec( + "CREATE TEMP TABLE IF NOT EXISTS ancestor(rid INTEGER UNIQUE NOT NULL," + " generation INTEGER PRIMARY KEY);" + "DELETE FROM ancestor;" + "WITH RECURSIVE g(x,i) AS (" + " VALUES(%d,1)" + " UNION ALL" + " SELECT plink.pid, g.i+1 FROM plink, g" + " WHERE plink.cid=g.x AND plink.isprim)" + "INSERT INTO ancestor(rid,generation) SELECT x,i FROM g;", + rid + ); +} + +/* +** Compute the "mtime" of the file given whose blob.rid is "fid" that +** is part of check-in "vid". The mtime will be the mtime on vid or +** some ancestor of vid where fid first appears. +*/ +int mtime_of_manifest_file( + int vid, /* The check-in that contains fid */ + int fid, /* The id of the file whose check-in time is sought */ + i64 *pMTime /* Write result here */ +){ + static int prevVid = -1; + static Stmt q; + + if( prevVid!=vid ){ + prevVid = vid; + db_multi_exec("DROP TABLE IF EXISTS temp.ok;" + "CREATE TEMP TABLE ok(x INTEGER PRIMARY KEY);"); + compute_ancestors(vid, 100000000, 1); + } + db_static_prepare(&q, + "SELECT (max(event.mtime)-2440587.5)*86400 FROM mlink, event" + " WHERE mlink.mid=event.objid" + " AND +mlink.mid IN ok" + " AND mlink.fid=:fid"); + db_bind_int(&q, ":fid", fid); + if( db_step(&q)!=SQLITE_ROW ){ + db_reset(&q); + return 1; + } + *pMTime = db_column_int64(&q, 0); + db_reset(&q); + return 0; } /* ** Load the record ID rid and up to N-1 closest descendants into ** the "ok" table. */ void compute_descendants(int rid, int N){ - Bag seen; - PQueue queue; - bag_init(&seen); - pqueue_init(&queue); - bag_insert(&seen, rid); - pqueue_insert(&queue, rid, 0.0); - while( (N--)>0 && (rid = pqueue_extract(&queue))!=0 ){ - Stmt q; - db_multi_exec("INSERT OR IGNORE INTO ok VALUES(%d)", rid); - db_prepare(&q,"SELECT cid, mtime FROM plink WHERE pid=%d", rid); - while( db_step(&q)==SQLITE_ROW ){ - int pid = db_column_int(&q, 0); - double mtime = db_column_double(&q, 1); - if( bag_insert(&seen, pid) ){ - pqueue_insert(&queue, pid, mtime); - } - } - db_finalize(&q); - } - bag_clear(&seen); - pqueue_clear(&queue); + db_multi_exec( + "WITH RECURSIVE" + " dx(rid,mtime) AS (" + " SELECT %d, 0" + " UNION" + " SELECT plink.cid, plink.mtime FROM dx, plink" + " WHERE plink.pid=dx.rid" + " ORDER BY 2" + " )" + "INSERT OR IGNORE INTO ok SELECT rid FROM dx LIMIT %d", + rid, N + ); } /* -** COMMAND: descendants +** COMMAND: descendants* +** +** Usage: %fossil descendants ?CHECKIN? ?OPTIONS? +** +** Find all leaf descendants of the check-in specified or if the argument +** is omitted, of the check-in currently checked out. ** -** Usage: %fossil descendants ?BASELINE-ID? +** Options: +** -R|--repository FILE Extract info from repository FILE +** -W|--width Width of lines (default is to auto-detect). +** Must be >20 or 0 (= no limit, resulting in a +** single line per entry). ** -** Find all leaf descendants of the baseline specified or if the argument -** is omitted, of the baseline currently checked out. 
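Once compute_direct_ancestors() above has filled the "ancestor" table, the generation column answers "how many primary-parent steps back is this check-in". A small sketch (variable names are illustrative):

    compute_direct_ancestors(rid);
    int nBack = db_int(0,
        "SELECT generation FROM ancestor WHERE rid=%d", ridAncestor);
    /* nBack==1 means ridAncestor is rid itself; 2 is its primary parent. */
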
+** See also: finfo, info, leaves */ void descendants_cmd(void){ Stmt q; - int base; + int base, width; + const char *zWidth; + + db_find_and_open_repository(0,0); + zWidth = find_option("width","W",1); + if( zWidth ){ + width = atoi(zWidth); + if( (width!=0) && (width<=20) ){ + fossil_fatal("-W|--width value must be >20 or 0"); + } + }else{ + width = -1; + } - db_must_be_within_tree(); + /* We should be done with options.. */ + verify_all_options(); + if( g.argc==2 ){ base = db_lget_int("checkout", 0); }else{ - base = name_to_rid(g.argv[2]); + base = name_to_typed_rid(g.argv[2], "ci"); } if( base==0 ) return; compute_leaves(base, 0); db_prepare(&q, "%s" " AND event.objid IN (SELECT rid FROM leaves)" " ORDER BY event.mtime DESC", timeline_query_for_tty() ); - print_timeline(&q, 20); - db_finalize(&q); -} - -/* -** COMMAND: leaves -** -** Usage: %fossil leaves ?--all? ?--closed? -** -** Find leaves of all branches. By default show only open leaves. -** The --all flag causes all leaves (closed and open) to be shown. -** The --closed flag shows only closed leaves. -*/ -void leaves_cmd(void){ - Stmt q; - int showAll = find_option("all", 0, 0)!=0; - int showClosed = find_option("closed", 0, 0)!=0; - - db_must_be_within_tree(); - compute_leaves(0, showAll ? 0 : showClosed ? 2 : 1); - db_prepare(&q, - "%s" - " AND blob.rid IN leaves" - " ORDER BY event.mtime DESC", - timeline_query_for_tty() - ); - print_timeline(&q, 2000); - db_finalize(&q); -} - -/* -** This routine is called while for each check-in that is rendered by -** the "leaves" page. Add some additional hyperlink to show the -** ancestors of the leaf. -*/ -static void leaves_extra(int rid){ - if( g.okHistory ){ - @ [timeline] - } + print_timeline(&q, 0, width, 0); + db_finalize(&q); +} + +/* +** COMMAND: leaves* +** +** Usage: %fossil leaves ?OPTIONS? +** +** Find leaves of all branches. By default show only open leaves. +** The -a|--all flag causes all leaves (closed and open) to be shown. +** The -c|--closed flag shows only closed leaves. +** +** The --recompute flag causes the content of the "leaf" table in the +** repository database to be recomputed. +** +** Options: +** -a|--all show ALL leaves +** --bybranch order output by branch name +** -c|--closed show only closed leaves +** -m|--multiple show only cases with multiple leaves on a single branch +** --recompute recompute the "leaf" table in the repository DB +** -W|--width Width of lines (default is to auto-detect). Must be +** >39 or 0 (= no limit, resulting in a single line per +** entry). +** +** See also: descendants, finfo, info, branch +*/ +void leaves_cmd(void){ + Stmt q; + Blob sql; + int showAll = find_option("all", "a", 0)!=0; + int showClosed = find_option("closed", "c", 0)!=0; + int recomputeFlag = find_option("recompute",0,0)!=0; + int byBranch = find_option("bybranch",0,0)!=0; + int multipleFlag = find_option("multiple","m",0)!=0; + const char *zWidth = find_option("width","W",1); + char *zLastBr = 0; + int n, width; + char zLineNo[10]; + + if( multipleFlag ) byBranch = 1; + if( zWidth ){ + width = atoi(zWidth); + if( (width!=0) && (width<=39) ){ + fossil_fatal("-W|--width value must be >39 or 0"); + } + }else{ + width = -1; + } + db_find_and_open_repository(0,0); + + /* We should be done with options.. 
*/ + verify_all_options(); + + if( recomputeFlag ) leaf_rebuild(); + blob_zero(&sql); + blob_append(&sql, timeline_query_for_tty(), -1); + if( !multipleFlag ){ + /* The usual case - show all leaves */ + blob_append_sql(&sql, " AND blob.rid IN leaf"); + }else{ + /* Show only leaves where two are more occur in the same branch */ + db_multi_exec( + "CREATE TEMP TABLE openLeaf(rid INTEGER PRIMARY KEY);" + "INSERT INTO openLeaf(rid)" + " SELECT rid FROM leaf" + " WHERE NOT EXISTS(" + " SELECT 1 FROM tagxref" + " WHERE tagid=%d AND tagtype>0 AND rid=leaf.rid);", + TAG_CLOSED + ); + db_multi_exec( + "CREATE TEMP TABLE ambiguousBranch(brname TEXT);" + "INSERT INTO ambiguousBranch(brname)" + " SELECT (SELECT value FROM tagxref WHERE tagid=%d AND rid=openLeaf.rid)" + " FROM openLeaf" + " GROUP BY 1 HAVING count(*)>1;", + TAG_BRANCH + ); + db_multi_exec( + "CREATE TEMP TABLE ambiguousLeaf(rid INTEGER PRIMARY KEY);\n" + "INSERT INTO ambiguousLeaf(rid)\n" + " SELECT rid FROM openLeaf\n" + " WHERE (SELECT value FROM tagxref WHERE tagid=%d AND rid=openLeaf.rid)" + " IN (SELECT brname FROM ambiguousBranch);", + TAG_BRANCH + ); + blob_append_sql(&sql, " AND blob.rid IN ambiguousLeaf"); + } + if( showClosed ){ + blob_append_sql(&sql," AND %z", leaf_is_closed_sql("blob.rid")); + }else if( !showAll ){ + blob_append_sql(&sql," AND NOT %z", leaf_is_closed_sql("blob.rid")); + } + if( byBranch ){ + db_prepare(&q, "%s ORDER BY nullif(branch,'trunk') COLLATE nocase," + " event.mtime DESC", + blob_sql_text(&sql)); + }else{ + db_prepare(&q, "%s ORDER BY event.mtime DESC", blob_sql_text(&sql)); + } + blob_reset(&sql); + n = 0; + while( db_step(&q)==SQLITE_ROW ){ + const char *zId = db_column_text(&q, 1); + const char *zDate = db_column_text(&q, 2); + const char *zCom = db_column_text(&q, 3); + const char *zBr = db_column_text(&q, 7); + char *z; + + if( byBranch && fossil_strcmp(zBr, zLastBr)!=0 ){ + fossil_print("*** %s ***\n", zBr); + fossil_free(zLastBr); + zLastBr = fossil_strdup(zBr); + if( multipleFlag ) n = 0; + } + n++; + sqlite3_snprintf(sizeof(zLineNo), zLineNo, "(%d)", n); + fossil_print("%6s ", zLineNo); + z = mprintf("%s [%S] %s", zDate, zId, zCom); + comment_print(z, zCom, 7, width, g.comFmtFlags); + fossil_free(z); + } + fossil_free(zLastBr); + db_finalize(&q); } /* ** WEBPAGE: leaves ** -** Find leaves of all branches. +** Show leaf check-ins in a timeline. By default only open leaves +** are listed. +** +** A "leaf" is a check-in with no children in the same branch. A +** "closed leaf" is a leaf that has a "closed" tag. An "open leaf" +** is a leaf without a "closed" tag. +** +** Query parameters: +** +** all Show all leaves +** closed Show only closed leaves */ void leaves_page(void){ + Blob sql; Stmt q; int showAll = P("all")!=0; int showClosed = P("closed")!=0; login_check_credentials(); - if( !g.okRead ){ login_needed(); return; } + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } if( !showAll ){ style_submenu_element("All", "All", "leaves?all"); } if( !showClosed ){ @@ -315,40 +465,107 @@ if( showClosed || showAll ){ style_submenu_element("Open", "Open", "leaves"); } style_header("Leaves"); login_anonymous_available(); - compute_leaves(0, showAll ? 0 : showClosed ? 2 : 1); +#if 0 style_sidebox_begin("Nomenclature:", "33%"); @
<ol>
- @ <li> A leaf is a check-in with no descendants.
- @ <li> An open leaf is a leaf that does not have a "closed" tag
+ @ <li> A <div class="leaf">leaf</div>
+ @ is a check-in with no descendants in the same branch.
+ @ <li> An <div class="leaf">open leaf</div>
+ @ is a leaf that does not have a "closed" tag
  @ and is thus assumed to still be in use.
- @ <li> A closed leaf has a "closed" tag and is thus assumed to
+ @ <li> A <div class="leaf">closed leaf</div>
+ @ has a "closed" tag and is thus assumed to
  @ be historical and no longer in active use.
  @ </ol>
style_sidebox_end();
+#endif
  if( showAll ){
    @ <h1>All leaves, both open and closed:</h1>
  }else if( showClosed ){
    @ <h1>Closed leaves:</h1>
  }else{
    @ <h1>Open leaves:</h1>
} - db_prepare(&q, - "%s" - " AND blob.rid IN leaves" - " ORDER BY event.mtime DESC", - timeline_query_for_www() - ); - www_print_timeline(&q, TIMELINE_LEAFONLY, leaves_extra); + blob_zero(&sql); + blob_append(&sql, timeline_query_for_www(), -1); + blob_append_sql(&sql, " AND blob.rid IN leaf"); + if( showClosed ){ + blob_append_sql(&sql," AND %z", leaf_is_closed_sql("blob.rid")); + }else if( !showAll ){ + blob_append_sql(&sql," AND NOT %z", leaf_is_closed_sql("blob.rid")); + } + db_prepare(&q, "%s ORDER BY event.mtime DESC", blob_sql_text(&sql)); + blob_reset(&sql); + www_print_timeline(&q, TIMELINE_LEAFONLY, 0, 0, 0, 0); db_finalize(&q); - @
- @ + @
style_footer(); } + +#if INTERFACE +/* Flag parameters to compute_uses_file() */ +#define USESFILE_DELETE 0x01 /* Include the check-ins where file deleted */ + +#endif + + +/* +** Add to table zTab the record ID (rid) of every check-in that contains +** the file fid. +*/ +void compute_uses_file(const char *zTab, int fid, int usesFlags){ + Bag seen; + Bag pending; + Stmt ins; + Stmt q; + int rid; + + bag_init(&seen); + bag_init(&pending); + db_prepare(&ins, "INSERT OR IGNORE INTO \"%w\" VALUES(:rid)", zTab); + db_prepare(&q, "SELECT mid FROM mlink WHERE fid=%d", fid); + while( db_step(&q)==SQLITE_ROW ){ + int mid = db_column_int(&q, 0); + bag_insert(&pending, mid); + bag_insert(&seen, mid); + db_bind_int(&ins, ":rid", mid); + db_step(&ins); + db_reset(&ins); + } + db_finalize(&q); + + db_prepare(&q, "SELECT mid FROM mlink WHERE pid=%d", fid); + while( db_step(&q)==SQLITE_ROW ){ + int mid = db_column_int(&q, 0); + bag_insert(&seen, mid); + if( usesFlags & USESFILE_DELETE ){ + db_bind_int(&ins, ":rid", mid); + db_step(&ins); + db_reset(&ins); + } + } + db_finalize(&q); + db_prepare(&q, "SELECT cid FROM plink WHERE pid=:rid AND isprim"); + + while( (rid = bag_first(&pending))!=0 ){ + bag_remove(&pending, rid); + db_bind_int(&q, ":rid", rid); + while( db_step(&q)==SQLITE_ROW ){ + int mid = db_column_int(&q, 0); + if( bag_find(&seen, mid) ) continue; + bag_insert(&seen, mid); + bag_insert(&pending, mid); + db_bind_int(&ins, ":rid", mid); + db_step(&ins); + db_reset(&ins); + } + db_reset(&q); + } + db_finalize(&q); + db_finalize(&ins); + bag_clear(&seen); + bag_clear(&pending); +} Index: src/diff.c ================================================================== --- src/diff.c +++ src/diff.c @@ -21,15 +21,53 @@ #include "config.h" #include "diff.h" #include +#if INTERFACE +/* +** Flag parameters to the text_diff() routine used to control the formatting +** of the diff output. +*/ +#define DIFF_CONTEXT_MASK ((u64)0x0000ffff) /* Lines of context. Default if 0 */ +#define DIFF_WIDTH_MASK ((u64)0x00ff0000) /* side-by-side column width */ +#define DIFF_IGNORE_EOLWS ((u64)0x01000000) /* Ignore end-of-line whitespace */ +#define DIFF_IGNORE_ALLWS ((u64)0x03000000) /* Ignore all whitespace */ +#define DIFF_SIDEBYSIDE ((u64)0x04000000) /* Generate a side-by-side diff */ +#define DIFF_VERBOSE ((u64)0x08000000) /* Missing shown as empty files */ +#define DIFF_BRIEF ((u64)0x10000000) /* Show filenames only */ +#define DIFF_HTML ((u64)0x20000000) /* Render for HTML */ +#define DIFF_LINENO ((u64)0x40000000) /* Show line numbers */ +#define DIFF_NOOPT (((u64)0x01)<<32) /* Suppress optimizations (debug) */ +#define DIFF_INVERT (((u64)0x02)<<32) /* Invert the diff (debug) */ +#define DIFF_CONTEXT_EX (((u64)0x04)<<32) /* Use context even if zero */ +#define DIFF_NOTTOOBIG (((u64)0x08)<<32) /* Only display if not too big */ +#define DIFF_STRIP_EOLCR (((u64)0x10)<<32) /* Strip trailing CR */ + +/* +** These error messages are shared in multiple locations. They are defined +** here for consistency. +*/ +#define DIFF_CANNOT_COMPUTE_BINARY \ + "cannot compute difference between binary files\n" + +#define DIFF_CANNOT_COMPUTE_SYMLINK \ + "cannot compute difference between symlink and regular file\n" + +#define DIFF_TOO_MANY_CHANGES \ + "more than 10,000 changes\n" + +#define DIFF_WHITESPACE_ONLY \ + "whitespace changes only\n" + /* -** Maximum length of a line in a text file. (8192) +** Maximum length of a line in a text file, in bytes. 
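The diff flag word above packs small integers alongside the booleans: the low 16 bits carry the context-line count and bits 16..23 the side-by-side column width. One plausible combination (illustrative; the real word is normally assembled by the command-line option parser, and the exact width encoding is an assumption based on DIFF_WIDTH_MASK):

    u64 diffFlags = DIFF_SIDEBYSIDE | DIFF_HTML | DIFF_LINENO
                  | (5 & DIFF_CONTEXT_MASK)            /* 5 lines of context */
                  | (((u64)80<<16) & DIFF_WIDTH_MASK); /* 80-column SBS text */
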
(2**13 = 8192 bytes) */ #define LENGTH_MASK_SZ 13 #define LENGTH_MASK ((1<n) + +/* +** A context for running a raw diff. +** +** The aEdit[] array describes the raw diff. Each triple of integers in +** aEdit[] means: +** +** (1) COPY: Number of lines aFrom and aTo have in common +** (2) DELETE: Number of lines found only in aFrom +** (3) INSERT: Number of lines found only in aTo +** +** The triples repeat until all lines of both aFrom and aTo are accounted +** for. */ typedef struct DContext DContext; struct DContext { int *aEdit; /* Array of copy/delete/insert triples */ int nEdit; /* Number of integers (3x num of triples) in aEdit[] */ @@ -58,24 +113,30 @@ int nEditAlloc; /* Space allocated for aEdit[] */ DLine *aFrom; /* File on left side of the diff */ int nFrom; /* Number of lines in aFrom[] */ DLine *aTo; /* File on right side of the diff */ int nTo; /* Number of lines in aTo[] */ + int (*same_fn)(const DLine *, const DLine *); /* Function to be used for comparing */ }; /* ** Return an array of DLine objects containing a pointer to the -** start of each line and a hash of that line. The lower +** start of each line and a hash of that line. The lower ** bits of the hash store the length of each line. ** -** Trailing whitespace is removed from each line. +** Trailing whitespace is removed from each line. 2010-08-20: Not any +** more. If trailing whitespace is ignored, the "patch" command gets +** confused by the diff output. Ticket [a9f7b23c2e376af5b0e5b] ** ** Return 0 if the file is binary or contains a line that is ** too long. +** +** Profiling show that in most cases this routine consumes the bulk of +** the CPU time on a diff. */ -static DLine *break_into_lines(const char *z, int n, int *pnLine){ - int nLine, i, j, k, x; +static DLine *break_into_lines(const char *z, int n, int *pnLine, u64 diffFlags){ + int nLine, i, j, k, s, x; unsigned int h, h2; DLine *a; /* Count the number of lines. Allocate space to hold ** the returned array. @@ -94,23 +155,48 @@ } } if( j>LENGTH_MASK ){ return 0; } - a = malloc( nLine*sizeof(a[0]) ); - if( a==0 ) fossil_panic("out of memory"); + a = fossil_malloc( nLine*sizeof(a[0]) ); memset(a, 0, nLine*sizeof(a[0]) ); + if( n==0 ){ + *pnLine = 0; + return a; + } /* Fill in the array */ for(i=0; i0 && isspace(z[k-1]); k--){} - for(h=0, x=0; x0 && z[k-1]=='\r' ){ k--; } + } + a[i].n = k; + s = 0; + if( diffFlags & DIFF_IGNORE_EOLWS ){ + while( k>0 && fossil_isspace(z[k-1]) ){ k--; } + } + if( (diffFlags & DIFF_IGNORE_ALLWS)==DIFF_IGNORE_ALLWS ){ + int numws = 0; + while( sh==pB->h && memcmp(pA->z,pB->z,pA->h & LENGTH_MASK)==0; +static int same_dline(const DLine *pA, const DLine *pB){ + return pA->h==pB->h && memcmp(pA->z,pB->z, pA->h&LENGTH_MASK)==0; +} + +/* +** Return true if two DLine elements are identical, ignoring +** all whitespace. The indent field of pA/pB already points +** to the first non-space character in the string. 
+*/ + +static int same_dline_ignore_allws(const DLine *pA, const DLine *pB){ + int a = pA->indent, b = pB->indent; + if( pA->h==pB->h ){ + while( an || bn ){ + if( an && bn && pA->z[a++] != pB->z[b++] ) return 0; + while( an && fossil_isspace(pA->z[a])) ++a; + while( bn && fossil_isspace(pB->z[b])) ++b; + } + return pA->n-a == pB->n-b; + } + return 0; +} + +/* +** Return true if the regular expression *pRe matches any of the +** N dlines +*/ +static int re_dline_match( + ReCompiled *pRe, /* The regular expression to be matched */ + DLine *aDLine, /* First of N DLines to compare against */ + int N /* Number of DLines to check */ +){ + while( N-- ){ + if( re_match(pRe, (const unsigned char *)aDLine->z, LENGTH(aDLine)) ){ + return 1; + } + aDLine++; + } + return 0; } /* -** Append a single line of "diff" output to pOut. +** Append a single line of context-diff output to pOut. */ -static void appendDiffLine(Blob *pOut, char *zPrefix, DLine *pLine){ - blob_append(pOut, zPrefix, 1); - blob_append(pOut, pLine->z, pLine->h & LENGTH_MASK); +static void appendDiffLine( + Blob *pOut, /* Where to write the line of output */ + char cPrefix, /* One of " ", "+", or "-" */ + DLine *pLine, /* The line to be output */ + int html, /* True if generating HTML. False for plain text */ + ReCompiled *pRe /* Colorize only if line matches this Regex */ +){ + blob_append(pOut, &cPrefix, 1); + if( html ){ + if( pRe && re_dline_match(pRe, pLine, 1)==0 ){ + cPrefix = ' '; + }else if( cPrefix=='+' ){ + blob_append(pOut, "", -1); + }else if( cPrefix=='-' ){ + blob_append(pOut, "", -1); + } + htmlize_to_blob(pOut, pLine->z, pLine->n); + if( cPrefix!=' ' ){ + blob_append(pOut, "", -1); + } + }else{ + blob_append(pOut, pLine->z, pLine->n); + } blob_append(pOut, "\n", 1); } /* -** Expand the size of aEdit[] array to hold nEdit elements. -*/ -static void expandEdit(DContext *p, int nEdit){ - int *a; - a = realloc(p->aEdit, nEdit*sizeof(int)); - if( a==0 ){ - free( p->aEdit ); - p->nEdit = 0; - nEdit = 0; - } - p->aEdit = a; - p->nEditAlloc = nEdit; -} - -/* -** Append a new COPY/DELETE/INSERT triple. -*/ -static void appendTriple(DContext *p, int nCopy, int nDel, int nIns){ - /* printf("APPEND %d/%d/%d\n", nCopy, nDel, nIns); */ - if( p->nEdit>=3 ){ - if( p->aEdit[p->nEdit-1]==0 ){ - if( p->aEdit[p->nEdit-2]==0 ){ - p->aEdit[p->nEdit-3] += nCopy; - p->aEdit[p->nEdit-2] += nDel; - p->aEdit[p->nEdit-1] += nIns; - return; - } - if( nCopy==0 ){ - p->aEdit[p->nEdit-2] += nDel; - p->aEdit[p->nEdit-1] += nIns; - return; - } - } - if( nCopy==0 && nDel==0 ){ - p->aEdit[p->nEdit-1] += nIns; - return; - } - } - if( p->nEdit+3>p->nEditAlloc ){ - expandEdit(p, p->nEdit*2 + 15); - if( p->aEdit==0 ) return; - } - p->aEdit[p->nEdit++] = nCopy; - p->aEdit[p->nEdit++] = nDel; - p->aEdit[p->nEdit++] = nIns; -} - - -/* -** Given a diff context in which the aEdit[] array has been filled +** Add two line numbers to the beginning of an output line for a context +** diff. One or the other of the two numbers might be zero, which means +** to leave that number field blank. The "html" parameter means to format +** the output for HTML. 
+*/ +static void appendDiffLineno(Blob *pOut, int lnA, int lnB, int html){ + if( html ) blob_append(pOut, "", -1); + if( lnA>0 ){ + blob_appendf(pOut, "%6d ", lnA); + }else{ + blob_append(pOut, " ", 7); + } + if( lnB>0 ){ + blob_appendf(pOut, "%6d ", lnB); + }else{ + blob_append(pOut, " ", 8); + } + if( html ) blob_append(pOut, "", -1); +} + +/* +** Given a raw diff p[] in which the p->aEdit[] array has been filled ** in, compute a context diff into pOut. */ -static void contextDiff(DContext *p, Blob *pOut, int nContext){ +static void contextDiff( + DContext *p, /* The difference */ + Blob *pOut, /* Output a context diff to here */ + ReCompiled *pRe, /* Only show changes that match this regex */ + u64 diffFlags /* Flags controlling the diff format */ +){ DLine *A; /* Left side of the diff */ - DLine *B; /* Right side of the diff */ + DLine *B; /* Right side of the diff */ int a = 0; /* Index of next line in A[] */ int b = 0; /* Index of next line in B[] */ int *R; /* Array of COPY/DELETE/INSERT triples */ int r; /* Index into R[] */ int nr; /* Number of COPY/DELETE/INSERT triples to process */ @@ -200,20 +320,911 @@ int mxr; /* Maximum value for r */ int na, nb; /* Number of lines shown from A and B */ int i, j; /* Loop counters */ int m; /* Number of lines to output */ int skip; /* Number of lines to skip */ + static int nChunk = 0; /* Number of diff chunks seen so far */ + int nContext; /* Number of lines of context */ + int showLn; /* Show line numbers */ + int html; /* Render as HTML */ + int showDivider = 0; /* True to show the divider between diff blocks */ + nContext = diff_context_lines(diffFlags); + showLn = (diffFlags & DIFF_LINENO)!=0; + html = (diffFlags & DIFF_HTML)!=0; + A = p->aFrom; + B = p->aTo; + R = p->aEdit; + mxr = p->nEdit; + while( mxr>2 && R[mxr-1]==0 && R[mxr-2]==0 ){ mxr -= 3; } + for(r=0; r0 && R[r+nr*3]nContext ){ + na = nb = nContext; + skip = R[r] - nContext; + }else{ + na = nb = R[r]; + skip = 0; + } + for(i=0; inContext ){ + na += nContext; + nb += nContext; + }else{ + na += R[r+nr*3]; + nb += R[r+nr*3]; + } + for(i=1; i%.80c\n", '.'); + }else{ + blob_appendf(pOut, "%.80c\n", '.'); + } + if( html ) blob_appendf(pOut, "", nChunk); + }else{ + if( html ) blob_appendf(pOut, ""); + /* + * If the patch changes an empty file or results in an empty file, + * the block header must use 0,0 as position indicator and not 1,0. + * Otherwise, patch would be confused and may reject the diff. + */ + blob_appendf(pOut,"@@ -%d,%d +%d,%d @@", + na ? a+skip+1 : 0, na, + nb ? b+skip+1 : 0, nb); + if( html ) blob_appendf(pOut, ""); + blob_append(pOut, "\n", 1); + } + + /* Show the initial common area */ + a += skip; + b += skip; + m = R[r] - skip; + for(j=0; jnContext ) m = nContext; + for(j=0; j tag */ + int iEnd; /* Write prior to character iEnd */ + int iStart2; /* Write zStart2 prior to character iStart2 */ + const char *zStart2; /* A tag */ + int iEnd2; /* Write prior to character iEnd2 */ + ReCompiled *pRe; /* Only colorize matching lines, if not NULL */ +}; + +/* +** Column indices for SbsLine.apCols[] +*/ +#define SBS_LNA 0 /* Left line number */ +#define SBS_TXTA 1 /* Left text */ +#define SBS_MKR 2 /* Middle separator column */ +#define SBS_LNB 3 /* Right line number */ +#define SBS_TXTB 4 /* Right text */ + +/* +** Append newlines to all columns. +*/ +static void sbsWriteNewlines(SbsLine *p){ + int i; + for( i=p->escHtml ? SBS_LNA : SBS_TXTB; i<=SBS_TXTB; i++ ){ + blob_append(p->apCols[i], "\n", 1); + } +} + +/* +** Append n spaces to the column. 
+*/ +static void sbsWriteSpace(SbsLine *p, int n, int col){ + blob_appendf(p->apCols[col], "%*s", n, ""); +} + +/* +** Write the text of pLine into column iCol of p. +** +** If outputting HTML, write the full line. Otherwise, only write the +** width characters. Translate tabs into spaces. Add newlines if col +** is SBS_TXTB. Translate HTML characters if escHtml is true. Pad the +** rendering to width bytes if col is SBS_TXTA and escHtml is false. +** +** This comment contains multibyte unicode characters (ü, Æ, ð) in order +** to test the ability of the diff code to handle such characters. +*/ +static void sbsWriteText(SbsLine *p, DLine *pLine, int col){ + Blob *pCol = p->apCols[col]; + int n = pLine->n; + int i; /* Number of input characters consumed */ + int k; /* Cursor position */ + int needEndSpan = 0; + const char *zIn = pLine->z; + int w = p->width; + int colorize = p->escHtml; + if( colorize && p->pRe && re_dline_match(p->pRe, pLine, 1)==0 ){ + colorize = 0; + } + for(i=k=0; (p->escHtml || kiStart ){ + int x = strlen(p->zStart); + blob_append(pCol, p->zStart, x); + needEndSpan = 1; + if( p->iStart2 ){ + p->iStart = p->iStart2; + p->zStart = p->zStart2; + p->iStart2 = 0; + } + }else if( i==p->iEnd ){ + blob_append(pCol, "", 7); + needEndSpan = 0; + if( p->iEnd2 ){ + p->iEnd = p->iEnd2; + p->iEnd2 = 0; + } + } + } + if( c=='\t' && !p->escHtml ){ + blob_append(pCol, " ", 1); + while( (k&7)!=7 && (p->escHtml || kescHtml ){ + blob_append(pCol, "<", 4); + }else if( c=='&' && p->escHtml ){ + blob_append(pCol, "&", 5); + }else if( c=='>' && p->escHtml ){ + blob_append(pCol, ">", 4); + }else if( c=='"' && p->escHtml ){ + blob_append(pCol, """, 6); + }else{ + blob_append(pCol, &zIn[i], 1); + if( (c&0xc0)==0x80 ) k--; + } + } + if( needEndSpan ){ + blob_append(pCol, "", 7); + } + if( col==SBS_TXTB ){ + sbsWriteNewlines(p); + }else if( !p->escHtml ){ + sbsWriteSpace(p, w-k, SBS_TXTA); + } +} + +/* +** Append a column to the final output blob. +*/ +static void sbsWriteColumn(Blob *pOut, Blob *pCol, int col){ + blob_appendf(pOut, + "
\n" + "
\n"
+    "%s"
+    "
\n" + "
\n", + (col % 3) ? (col == SBS_MKR ? "mkr" : "txt") : "ln", + blob_str(pCol) + ); +} + +/* +** Append a separator line to column iCol +*/ +static void sbsWriteSep(SbsLine *p, int len, int col){ + char ch = '.'; + if( len<1 ){ + len = 1; + ch = ' '; + } + blob_appendf(p->apCols[col], "%.*c\n", len, ch); +} + +/* +** Append the appropriate marker into the center column of the diff. +*/ +static void sbsWriteMarker(SbsLine *p, const char *zTxt, const char *zHtml){ + blob_append(p->apCols[SBS_MKR], p->escHtml ? zHtml : zTxt, -1); +} + +/* +** Append a line number to the column. +*/ +static void sbsWriteLineno(SbsLine *p, int ln, int col){ + if( p->escHtml ){ + blob_appendf(p->apCols[col], "%d", ln+1); + }else{ + char zLn[7]; + sqlite3_snprintf(7, zLn, "%5d ", ln+1); + blob_appendf(p->apCols[col], "%s ", zLn); + } +} + +/* +** The two text segments zLeft and zRight are known to be different on +** both ends, but they might have a common segment in the middle. If +** they do not have a common segment, return 0. If they do have a large +** common segment, return 1 and before doing so set: +** +** aLCS[0] = start of the common segment in zLeft +** aLCS[1] = end of the common segment in zLeft +** aLCS[2] = start of the common segment in zLeft +** aLCS[3] = end of the common segment in zLeft +** +** This computation is for display purposes only and does not have to be +** optimal or exact. +*/ +static int textLCS( + const char *zLeft, int nA, /* String on the left */ + const char *zRight, int nB, /* String on the right */ + int *aLCS /* Identify bounds of LCS here */ +){ + const unsigned char *zA = (const unsigned char*)zLeft; /* left string */ + const unsigned char *zB = (const unsigned char*)zRight; /* right string */ + int nt; /* Number of target points */ + int ti[3]; /* Index for start of each 4-byte target */ + unsigned int target[3]; /* 4-byte alignment targets */ + unsigned int probe; /* probe to compare against target */ + int iAS, iAE, iBS, iBE; /* Range of common segment */ + int i, j; /* Loop counters */ + int rc = 0; /* Result code. 1 for success */ + + if( nA<6 || nB<6 ) return 0; + memset(aLCS, 0, sizeof(int)*4); + ti[0] = i = nB/2-2; + target[0] = (zB[i]<<24) | (zB[i+1]<<16) | (zB[i+2]<<8) | zB[i+3]; + probe = 0; + if( nB<16 ){ + nt = 1; + }else{ + ti[1] = i = nB/4-2; + target[1] = (zB[i]<<24) | (zB[i+1]<<16) | (zB[i+2]<<8) | zB[i+3]; + ti[2] = i = (nB*3)/4-2; + target[2] = (zB[i]<<24) | (zB[i+1]<<16) | (zB[i+2]<<8) | zB[i+3]; + nt = 3; + } + probe = (zA[0]<<16) | (zA[1]<<8) | zA[2]; + for(i=3; i0 && iBS>0 && zA[iAS-1]==zB[iBS-1] ){ iAS--; iBS--; } + if( iAE-iAS > aLCS[1] - aLCS[0] ){ + aLCS[0] = iAS; + aLCS[1] = iAE; + aLCS[2] = iBS; + aLCS[3] = iBE; + rc = 1; + } + } + } + } + return rc; +} + +/* +** Try to shift iStart as far as possible to the left. +*/ +static void sbsShiftLeft(SbsLine *p, const char *z){ + int i, j; + while( (i=p->iStart)>0 && z[i-1]==z[i] ){ + for(j=i+1; jiEnd && z[j-1]==z[j]; j++){} + if( jiEnd ) break; + p->iStart--; + p->iEnd--; + } +} + +/* +** Simplify iStart and iStart2: +** +** * If iStart is a null-change then move iStart2 into iStart +** * Make sure any null-changes are in canonoical form. +** * Make sure all changes are at character boundaries for +** multi-byte characters. 
+*/ +static void sbsSimplifyLine(SbsLine *p, const char *z){ + if( p->iStart2==p->iEnd2 ){ + p->iStart2 = p->iEnd2 = 0; + }else if( p->iStart2 ){ + while( p->iStart2>0 && (z[p->iStart2]&0xc0)==0x80 ) p->iStart2--; + while( (z[p->iEnd2]&0xc0)==0x80 ) p->iEnd2++; + } + if( p->iStart==p->iEnd ){ + p->iStart = p->iStart2; + p->iEnd = p->iEnd2; + p->zStart = p->zStart2; + p->iStart2 = 0; + p->iEnd2 = 0; + } + if( p->iStart==p->iEnd ){ + p->iStart = p->iEnd = -1; + }else if( p->iStart>0 ){ + while( p->iStart>0 && (z[p->iStart]&0xc0)==0x80 ) p->iStart--; + while( (z[p->iEnd]&0xc0)==0x80 ) p->iEnd++; + } +} + +/* +** Write out lines that have been edited. Adjust the highlight to cover +** only those parts of the line that actually changed. +*/ +static void sbsWriteLineChange( + SbsLine *p, /* The SBS output line */ + DLine *pLeft, /* Left line of the change */ + int lnLeft, /* Line number for the left line */ + DLine *pRight, /* Right line of the change */ + int lnRight /* Line number of the right line */ +){ + int nLeft; /* Length of left line in bytes */ + int nRight; /* Length of right line in bytes */ + int nShort; /* Shortest of left and right */ + int nPrefix; /* Length of common prefix */ + int nSuffix; /* Length of common suffix */ + const char *zLeft; /* Text of the left line */ + const char *zRight; /* Text of the right line */ + int nLeftDiff; /* nLeft - nPrefix - nSuffix */ + int nRightDiff; /* nRight - nPrefix - nSuffix */ + int aLCS[4]; /* Bounds of common middle segment */ + static const char zClassRm[] = ""; + static const char zClassAdd[] = ""; + static const char zClassChng[] = ""; + + nLeft = pLeft->n; + zLeft = pLeft->z; + nRight = pRight->n; + zRight = pRight->z; + nShort = nLeft0 && (zLeft[nPrefix]&0xc0)==0x80 ) nPrefix--; + } + nSuffix = 0; + if( nPrefix0 && (zLeft[nLeft-nSuffix]&0xc0)==0x80 ) nSuffix--; + } + if( nSuffix==nLeft || nSuffix==nRight ) nPrefix = 0; + } + + /* If the prefix and suffix overlap, that means that we are dealing with + ** a pure insertion or deletion of text that can have multiple alignments. + ** Try to find an alignment to begins and ends on whitespace, or on + ** punctuation, rather than in the middle of a name or number. + */ + if( nPrefix+nSuffix > nShort ){ + int iBest = -1; + int iBestVal = -1; + int i; + int nLong = nLeftiBestVal ){ + iBestVal = iVal; + iBest = i; + } + } + nPrefix = iBest; + nSuffix = nShort - nPrefix; + } + + /* A single chunk of text inserted on the right */ + if( nPrefix+nSuffix==nLeft ){ + sbsWriteLineno(p, lnLeft, SBS_LNA); + p->iStart2 = p->iEnd2 = 0; + p->iStart = p->iEnd = -1; + sbsWriteText(p, pLeft, SBS_TXTA); + if( nLeft==nRight && zLeft[nLeft]==zRight[nRight] ){ + sbsWriteMarker(p, " ", ""); + }else{ + sbsWriteMarker(p, " | ", "|"); + } + sbsWriteLineno(p, lnRight, SBS_LNB); + p->iStart = nPrefix; + p->iEnd = nRight - nSuffix; + p->zStart = zClassAdd; + sbsWriteText(p, pRight, SBS_TXTB); + return; + } + + /* A single chunk of text deleted from the left */ + if( nPrefix+nSuffix==nRight ){ + /* Text deleted from the left */ + sbsWriteLineno(p, lnLeft, SBS_LNA); + p->iStart2 = p->iEnd2 = 0; + p->iStart = nPrefix; + p->iEnd = nLeft - nSuffix; + p->zStart = zClassRm; + sbsWriteText(p, pLeft, SBS_TXTA); + sbsWriteMarker(p, " | ", "|"); + sbsWriteLineno(p, lnRight, SBS_LNB); + p->iStart = p->iEnd = -1; + sbsWriteText(p, pRight, SBS_TXTB); + return; + } + + /* At this point we know that there is a chunk of text that has + ** changed between the left and the right. 
Check to see if there + ** is a large unchanged section in the middle of that changed block. + */ + nLeftDiff = nLeft - nSuffix - nPrefix; + nRightDiff = nRight - nSuffix - nPrefix; + if( p->escHtml + && nLeftDiff >= 6 + && nRightDiff >= 6 + && textLCS(&zLeft[nPrefix], nLeftDiff, &zRight[nPrefix], nRightDiff, aLCS) + ){ + sbsWriteLineno(p, lnLeft, SBS_LNA); + p->iStart = nPrefix; + p->iEnd = nPrefix + aLCS[0]; + if( aLCS[2]==0 ){ + sbsShiftLeft(p, pLeft->z); + p->zStart = zClassRm; + }else{ + p->zStart = zClassChng; + } + p->iStart2 = nPrefix + aLCS[1]; + p->iEnd2 = nLeft - nSuffix; + p->zStart2 = aLCS[3]==nRightDiff ? zClassRm : zClassChng; + sbsSimplifyLine(p, zLeft); + sbsWriteText(p, pLeft, SBS_TXTA); + sbsWriteMarker(p, " | ", "|"); + sbsWriteLineno(p, lnRight, SBS_LNB); + p->iStart = nPrefix; + p->iEnd = nPrefix + aLCS[2]; + if( aLCS[0]==0 ){ + sbsShiftLeft(p, pRight->z); + p->zStart = zClassAdd; + }else{ + p->zStart = zClassChng; + } + p->iStart2 = nPrefix + aLCS[3]; + p->iEnd2 = nRight - nSuffix; + p->zStart2 = aLCS[1]==nLeftDiff ? zClassAdd : zClassChng; + sbsSimplifyLine(p, zRight); + sbsWriteText(p, pRight, SBS_TXTB); + return; + } + + /* If all else fails, show a single big change between left and right */ + sbsWriteLineno(p, lnLeft, SBS_LNA); + p->iStart2 = p->iEnd2 = 0; + p->iStart = nPrefix; + p->iEnd = nLeft - nSuffix; + p->zStart = zClassChng; + sbsWriteText(p, pLeft, SBS_TXTA); + sbsWriteMarker(p, " | ", "|"); + sbsWriteLineno(p, lnRight, SBS_LNB); + p->iEnd = nRight - nSuffix; + sbsWriteText(p, pRight, SBS_TXTB); +} + +/* +** Minimum of two values +*/ +static int minInt(int a, int b){ return az; + zB = pB->z; + nA = pA->n; + nB = pB->n; + while( nA>0 && fossil_isspace(zA[0]) ){ nA--; zA++; } + while( nA>0 && fossil_isspace(zA[nA-1]) ){ nA--; } + while( nB>0 && fossil_isspace(zB[0]) ){ nB--; zB++; } + while( nB>0 && fossil_isspace(zB[nB-1]) ){ nB--; } + if( nA>250 ) nA = 250; + if( nB>250 ) nB = 250; + avg = (nA+nB)/2; + if( avg==0 ) return 0; + if( nA==nB && memcmp(zA, zB, nA)==0 ) return 0; + memset(aFirst, 0, sizeof(aFirst)); + zA--; zB--; /* Make both zA[] and zB[] 1-indexed */ + for(i=nB; i>0; i--){ + c = (unsigned char)zB[i]; + aNext[i] = aFirst[c]; + aFirst[c] = i; + } + best = 0; + for(i=1; i<=nA-best; i++){ + c = (unsigned char)zA[i]; + for(j=aFirst[c]; j>0 && jbest ) best = k; + } + } + score = (best>avg) ? 0 : (avg - best)*100/avg; + +#if 0 + fprintf(stderr, "A: [%.*s]\nB: [%.*s]\nbest=%d avg=%d score=%d\n", + nA, zA+1, nB, zB+1, best, avg, score); +#endif + + /* Return the result */ + return score; +} + +/* +** There is a change block in which nLeft lines of text on the left are +** converted into nRight lines of text on the right. This routine computes +** how the lines on the left line up with the lines on the right. +** +** The return value is a buffer of unsigned characters, obtained from +** fossil_malloc(). (The caller needs to free the return value using +** fossil_free().) Entries in the returned array have values as follows: +** +** 1. Delete the next line of pLeft. +** 2. Insert the next line of pRight. +** 3. The next line of pLeft changes into the next line of pRight. +** 4. Delete one line from pLeft and add one line to pRight. +** +** Values larger than three indicate better matches. +** +** The length of the returned array will be just large enough to cause +** all elements of pLeft and pRight to be consumed. 
+** +** Algorithm: Wagner's minimum edit-distance algorithm, modified by +** adding a cost to each match based on how well the two rows match +** each other. Insertion and deletion costs are 50. Match costs +** are between 0 and 100 where 0 is a perfect match 100 is a complete +** mismatch. +*/ +static unsigned char *sbsAlignment( + DLine *aLeft, int nLeft, /* Text on the left */ + DLine *aRight, int nRight /* Text on the right */ +){ + int i, j, k; /* Loop counters */ + int *a; /* One row of the Wagner matrix */ + int *pToFree; /* Space that needs to be freed */ + unsigned char *aM; /* Wagner result matrix */ + int nMatch, iMatch; /* Number of matching lines and match score */ + int mnLen; /* MIN(nLeft, nRight) */ + int mxLen; /* MAX(nLeft, nRight) */ + int aBuf[100]; /* Stack space for a[] if nRight not to big */ + + aM = fossil_malloc( (nLeft+1)*(nRight+1) ); + if( nLeft==0 ){ + memset(aM, 2, nRight); + return aM; + } + if( nRight==0 ){ + memset(aM, 1, nLeft); + return aM; + } + + /* This algorithm is O(N**2). So if N is too big, bail out with a + ** simple (but stupid and ugly) result that doesn't take too long. */ + mnLen = nLeft100000 ){ + memset(aM, 4, mnLen); + if( nLeft>mnLen ) memset(aM+mnLen, 1, nLeft-mnLen); + if( nRight>mnLen ) memset(aM+mnLen, 2, nRight-mnLen); + return aM; + } + + if( nRight < (sizeof(aBuf)/sizeof(aBuf[0]))-1 ){ + pToFree = 0; + a = aBuf; + }else{ + a = pToFree = fossil_malloc( sizeof(a[0])*(nRight+1) ); + } + + /* Compute the best alignment */ + for(i=0; i<=nRight; i++){ + aM[i] = 2; + a[i] = i*50; + } + aM[0] = 0; + for(j=1; j<=nLeft; j++){ + int p = a[0]; + a[0] = p+50; + aM[j*(nRight+1)] = 1; + for(i=1; i<=nRight; i++){ + int m = a[i-1]+50; + int d = 2; + if( m>a[i]+50 ){ + m = a[i]+50; + d = 1; + } + if( m>p ){ + int score = match_dline(&aLeft[j-1], &aRight[i-1]); + if( (score<=63 || (ij-1)) && m>p+score ){ + m = p+score; + d = 3 | score*4; + } + } + p = a[i]; + a[i] = m; + aM[j*(nRight+1)+i] = d; + } + } + + /* Compute the lowest-cost path back through the matrix */ + i = nRight; + j = nLeft; + k = (nRight+1)*(nLeft+1)-1; + nMatch = iMatch = 0; + while( i+j>0 ){ + unsigned char c = aM[k]; + if( c>=3 ){ + assert( i>0 && j>0 ); + i--; + j--; + nMatch++; + iMatch += (c>>2); + aM[k] = 3; + }else if( c==2 ){ + assert( i>0 ); + i--; + }else{ + assert( j>0 ); + j--; + } + k--; + aM[k] = aM[j*(nRight+1)+i]; + } + k++; + i = (nRight+1)*(nLeft+1) - k; + memmove(aM, &aM[k], i); + + /* If: + ** (1) the alignment is more than 25% longer than the longest side, and + ** (2) the average match cost exceeds 15 + ** Then this is probably an alignment that will be difficult for humans + ** to read. So instead, just show all of the right side inserted followed + ** by all of the left side deleted. + ** + ** The coefficients for conditions (1) and (2) above are determined by + ** experimentation. + */ + mxLen = nLeft>nRight ? nLeft : nRight; + if( i*4>mxLen*5 && (nMatch==0 || iMatch/nMatch>15) ){ + memset(aM, 4, mnLen); + if( nLeft>mnLen ) memset(aM+mnLen, 1, nLeft-mnLen); + if( nRight>mnLen ) memset(aM+mnLen, 2, nRight-mnLen); + } + + /* Return the result */ + fossil_free(pToFree); + return aM; +} + +/* +** R[] is an array of six integer, two COPY/DELETE/INSERT triples for a +** pair of adjacent differences. Return true if the gap between these +** two differences is so small that they should be rendered as a single +** edit. 
+*/ +static int smallGap(int *R){ + return R[3]<=2 || R[3]<=(R[1]+R[2]+R[4]+R[5])/8; +} + +/* +** Given a diff context in which the aEdit[] array has been filled +** in, compute a side-by-side diff into pOut. +*/ +static void sbsDiff( + DContext *p, /* The computed diff */ + Blob *pOut, /* Write the results here */ + ReCompiled *pRe, /* Only show changes that match this regex */ + u64 diffFlags /* Flags controlling the diff */ +){ + DLine *A; /* Left side of the diff */ + DLine *B; /* Right side of the diff */ + int a = 0; /* Index of next line in A[] */ + int b = 0; /* Index of next line in B[] */ + int *R; /* Array of COPY/DELETE/INSERT triples */ + int r; /* Index into R[] */ + int nr; /* Number of COPY/DELETE/INSERT triples to process */ + int mxr; /* Maximum value for r */ + int na, nb; /* Number of lines shown from A and B */ + int i, j; /* Loop counters */ + int m, ma, mb;/* Number of lines to output */ + int skip; /* Number of lines to skip */ + static int nChunk = 0; /* Number of chunks of diff output seen so far */ + SbsLine s; /* Output line buffer */ + int nContext; /* Lines of context above and below each change */ + int showDivider = 0; /* True to show the divider */ + Blob aCols[5]; /* Array of column blobs */ + + memset(&s, 0, sizeof(s)); + s.width = diff_width(diffFlags); + nContext = diff_context_lines(diffFlags); + s.escHtml = (diffFlags & DIFF_HTML)!=0; + if( s.escHtml ){ + for(i=SBS_LNA; i<=SBS_TXTB; i++){ + blob_zero(&aCols[i]); + s.apCols[i] = &aCols[i]; + } + }else{ + for(i=SBS_LNA; i<=SBS_TXTB; i++){ + s.apCols[i] = pOut; + } + } + s.pRe = pRe; + s.iStart = -1; + s.iStart2 = 0; + s.iEnd = -1; A = p->aFrom; B = p->aTo; R = p->aEdit; mxr = p->nEdit; while( mxr>2 && R[mxr-1]==0 && R[mxr-2]==0 ){ mxr -= 3; } + for(r=0; r0 && R[r+nr*3]nContext ){ @@ -236,38 +1247,128 @@ } for(i=1; i", nChunk); + } /* Show the initial common area */ a += skip; b += skip; m = R[r] - skip; for(j=0; j0; j++){ + if( alignment[j]==1 ){ + /* Delete one line from the left */ + sbsWriteLineno(&s, a, SBS_LNA); + s.iStart = 0; + s.zStart = ""; + s.iEnd = LENGTH(&A[a]); + sbsWriteText(&s, &A[a], SBS_TXTA); + sbsWriteMarker(&s, " <", "<"); + sbsWriteNewlines(&s); + assert( ma>0 ); + ma--; + a++; + }else if( alignment[j]==3 ){ + /* The left line is changed into the right line */ + sbsWriteLineChange(&s, &A[a], a, &B[b], b); + assert( ma>0 && mb>0 ); + ma--; + mb--; + a++; + b++; + }else if( alignment[j]==2 ){ + /* Insert one line on the right */ + if( !s.escHtml ){ + sbsWriteSpace(&s, s.width + 7, SBS_TXTA); + } + sbsWriteMarker(&s, " > ", ">"); + sbsWriteLineno(&s, b, SBS_LNB); + s.iStart = 0; + s.zStart = ""; + s.iEnd = LENGTH(&B[b]); + sbsWriteText(&s, &B[b], SBS_TXTB); + assert( mb>0 ); + mb--; + b++; + }else{ + /* Delete from the left and insert on the right */ + sbsWriteLineno(&s, a, SBS_LNA); + s.iStart = 0; + s.zStart = ""; + s.iEnd = LENGTH(&A[a]); + sbsWriteText(&s, &A[a], SBS_TXTA); + sbsWriteMarker(&s, " | ", "|"); + sbsWriteLineno(&s, b, SBS_LNB); + s.iStart = 0; + s.zStart = ""; + s.iEnd = LENGTH(&B[b]); + sbsWriteText(&s, &B[b], SBS_TXTB); + ma--; + mb--; + a++; + b++; + } + } + fossil_free(alignment); if( inContext ) m = nContext; for(j=0; j0 ){ + blob_append(pOut, "\n", -1); + for(i=SBS_LNA; i<=SBS_TXTB; i++){ + sbsWriteColumn(pOut, s.apCols[i], i); + blob_reset(s.apCols[i]); + } + blob_append(pOut, "
\n", -1); + } +} + +/* +** Compute the optimal longest common subsequence (LCS) using an +** exhaustive search. This version of the LCS is only used for +** shorter input strings since runtime is O(N*N) where N is the +** input string length. +*/ +static void optimalLCS( + DContext *p, /* Two files being compared */ + int iS1, int iE1, /* Range of lines in p->aFrom[] */ + int iS2, int iE2, /* Range of lines in p->aTo[] */ + int *piSX, int *piEX, /* Write p->aFrom[] common segment here */ + int *piSY, int *piEY /* Write p->aTo[] common segment here */ +){ + int mxLength = 0; /* Length of longest common subsequence */ + int i, j; /* Loop counters */ + int k; /* Length of a candidate subsequence */ + int iSXb = iS1; /* Best match so far */ + int iSYb = iS2; /* Best match so far */ + + for(i=iS1; isame_fn(&p->aFrom[i], &p->aTo[j]) ) continue; + if( mxLength && !p->same_fn(&p->aFrom[i+mxLength], &p->aTo[j+mxLength]) ){ + continue; + } + k = 1; + while( i+ksame_fn(&p->aFrom[i+k],&p->aTo[j+k]) ){ + k++; + } + if( k>mxLength ){ + iSXb = i; + iSYb = j; + mxLength = k; + } } } + *piSX = iSXb; + *piEX = iSXb + mxLength; + *piSY = iSYb; + *piEY = iSYb + mxLength; } /* ** Compare two blocks of text on lines iS1 through iE1-1 of the aFrom[] ** file and lines iS2 through iE2-1 of the aTo[] file. Locate a sequence ** of lines in these two blocks that are exactly the same. Return ** the bounds of the matching sequence. +** +** If there are two or more possible answers of the same length, the +** returned sequence should be the one closest to the center of the +** input range. +** +** Ideally, the common sequence should be the longest possible common +** sequence. However, an exact computation of LCS is O(N*N) which is +** way too slow for larger files. So this routine uses an O(N) +** heuristic approximation based on hashing that usually works about +** as well. But if the O(N) algorithm doesn't get a good solution +** and N is not too large, we fall back to an exact solution by +** calling optimalLCS(). 
*/ static void longestCommonSequence( DContext *p, /* Two files being compared */ int iS1, int iE1, /* Range of lines in p->aFrom[] */ int iS2, int iE2, /* Range of lines in p->aTo[] */ int *piSX, int *piEX, /* Write p->aFrom[] common segment here */ int *piSY, int *piEY /* Write p->aTo[] common segment here */ ){ - double bestScore = -1e30; /* Best score seen so far */ - int i, j; /* Loop counters */ + int i, j, k; /* Loop counters */ + int n; /* Loop limit */ + DLine *pA, *pB; /* Pointers to lines */ int iSX, iSY, iEX, iEY; /* Current match */ - double score; /* Current score */ - int skew; /* How lopsided is the match */ - int dist; /* Distance of match from center */ + int skew = 0; /* How lopsided is the match */ + int dist = 0; /* Distance of match from center */ int mid; /* Center of the span */ int iSXb, iSYb, iEXb, iEYb; /* Best match so far */ int iSXp, iSYp, iEXp, iEYp; /* Previous match */ + sqlite3_int64 bestScore; /* Best score so far */ + sqlite3_int64 score; /* Score for current candidate LCS */ + int span; /* combined width of the input sequences */ + span = (iE1 - iS1) + (iE2 - iS2); + bestScore = -10000; + score = 0; iSXb = iSXp = iS1; iEXb = iEXp = iS1; iSYb = iSYp = iS2; iEYb = iEYp = iS2; mid = (iE1 + iS1)/2; for(i=iS1; iaTo[p->aFrom[i].h % p->nTo].iHash; - while( j>0 - && (j-1=iE2 || !same_dline(&p->aFrom[i], &p->aTo[j-1])) + while( j>0 + && (j-1=iE2 || !p->same_fn(&p->aFrom[i], &p->aTo[j-1])) ){ if( limit++ > 10 ){ j = 0; break; } @@ -326,44 +1501,93 @@ assert( i>=iSXb && i>=iSXp ); if( i=iSYb && j=iSYp && jiS1 && iSY>iS2 && same_dline(&p->aFrom[iSX-1],&p->aTo[iSY-1]) ){ - iSX--; - iSY--; - } + pA = &p->aFrom[iSX-1]; + pB = &p->aTo[iSY-1]; + n = minInt(iSX-iS1, iSY-iS2); + for(k=0; ksame_fn(pA,pB); k++, pA--, pB--){} + iSX -= k; + iSY -= k; iEX = i+1; iEY = j; - while( iEXaFrom[iEX],&p->aTo[iEY]) ){ - iEX++; - iEY++; - } + pA = &p->aFrom[iEX]; + pB = &p->aTo[iEY]; + n = minInt(iE1-iEX, iE2-iEY); + for(k=0; ksame_fn(pA,pB); k++, pA++, pB++){} + iEX += k; + iEY += k; skew = (iSX-iS1) - (iSY-iS2); if( skew<0 ) skew = -skew; dist = (iSX+iEX)/2 - mid; if( dist<0 ) dist = -dist; - score = (iEX - iSX) - 0.05*skew - 0.05*dist; + score = (iEX - iSX)*(sqlite3_int64)span - (skew + dist); if( score>bestScore ){ bestScore = score; iSXb = iSX; iSYb = iSY; iEXb = iEX; iEYb = iEY; - }else{ + }else if( iEX>iEXp ){ iSXp = iSX; iSYp = iSY; iEXp = iEX; iEYp = iEY; } } - *piSX = iSXb; - *piSY = iSYb; - *piEX = iEXb; - *piEY = iEYb; - /* printf("LCS(%d..%d/%d..%d) = %d..%d/%d..%d\n", - iS1, iE1, iS2, iE2, *piSX, *piEX, *piSY, *piEY); */ + if( iSXb==iEXb && (iE1-iS1)*(iE2-iS2)<400 ){ + /* If no common sequence is found using the hashing heuristic and + ** the input is not too big, use the expensive exact solution */ + optimalLCS(p, iS1, iE1, iS2, iE2, piSX, piEX, piSY, piEY); + }else{ + *piSX = iSXb; + *piSY = iSYb; + *piEX = iEXb; + *piEY = iEYb; + } +} + +/* +** Expand the size of aEdit[] array to hold at least nEdit elements. +*/ +static void expandEdit(DContext *p, int nEdit){ + p->aEdit = fossil_realloc(p->aEdit, nEdit*sizeof(int)); + p->nEditAlloc = nEdit; +} + +/* +** Append a new COPY/DELETE/INSERT triple. 
+*/ +static void appendTriple(DContext *p, int nCopy, int nDel, int nIns){ + /* printf("APPEND %d/%d/%d\n", nCopy, nDel, nIns); */ + if( p->nEdit>=3 ){ + if( p->aEdit[p->nEdit-1]==0 ){ + if( p->aEdit[p->nEdit-2]==0 ){ + p->aEdit[p->nEdit-3] += nCopy; + p->aEdit[p->nEdit-2] += nDel; + p->aEdit[p->nEdit-1] += nIns; + return; + } + if( nCopy==0 ){ + p->aEdit[p->nEdit-2] += nDel; + p->aEdit[p->nEdit-1] += nIns; + return; + } + } + if( nCopy==0 && nDel==0 ){ + p->aEdit[p->nEdit-1] += nIns; + return; + } + } + if( p->nEdit+3>p->nEditAlloc ){ + expandEdit(p, p->nEdit*2 + 15); + if( p->aEdit==0 ) return; + } + p->aEdit[p->nEdit++] = nCopy; + p->aEdit[p->nEdit++] = nDel; + p->aEdit[p->nEdit++] = nIns; } /* ** Do a single step in the difference. Compute a sequence of ** copy/delete/insert steps that will convert lines iS1 through iE1-1 of @@ -394,11 +1618,11 @@ /* Find the longest matching segment between the two sequences */ longestCommonSequence(p, iS1, iE1, iS2, iE2, &iSX, &iEX, &iSY, &iEY); if( iEX>iSX ){ - /* A common segement has been found. + /* A common segment has been found. ** Recursively diff either side of the matching segment */ diff_step(p, iS1, iSX, iS2, iSY); if( iEX>iSX ){ appendTriple(p, iEX - iSX, 0, 0); } @@ -429,16 +1653,16 @@ int mnE, iS, iE1, iE2; /* Carve off the common header and footer */ iE1 = p->nFrom; iE2 = p->nTo; - while( iE1>0 && iE2>0 && same_dline(&p->aFrom[iE1-1], &p->aTo[iE2-1]) ){ + while( iE1>0 && iE2>0 && p->same_fn(&p->aFrom[iE1-1], &p->aTo[iE2-1]) ){ iE1--; iE2--; } mnE = iE1aFrom[iS],&p->aTo[iS]); iS++){} + for(iS=0; iSsame_fn(&p->aFrom[iS],&p->aTo[iS]); iS++){} /* do the difference */ if( iS>0 ){ appendTriple(p, iS, 0, 0); } @@ -453,16 +1677,149 @@ p->aEdit[p->nEdit++] = 0; p->aEdit[p->nEdit++] = 0; p->aEdit[p->nEdit++] = 0; } } + +/* +** Attempt to shift insertion or deletion blocks so that they begin and +** end on lines that are pure whitespace. 
In other words, try to transform +** this: +** +** int func1(int x){ +** return x*10; +** +} +** + +** +int func2(int x){ +** + return x*20; +** } +** +** int func3(int x){ +** return x/5; +** } +** +** Into one of these: +** +** int func1(int x){ int func1(int x){ +** return x*10; return x*10; +** } } +** + +** +int func2(int x){ +int func2(int x){ +** + return x*20; + return x*20; +** +} +} +** + +** int func3(int x){ int func3(int x){ +** return x/5; return x/5; +** } } +*/ +static void diff_optimize(DContext *p){ + int r; /* Index of current triple */ + int lnFrom; /* Line number in p->aFrom */ + int lnTo; /* Line number in p->aTo */ + int cpy, del, ins; + + lnFrom = lnTo = 0; + for(r=0; rnEdit; r += 3){ + cpy = p->aEdit[r]; + del = p->aEdit[r+1]; + ins = p->aEdit[r+2]; + lnFrom += cpy; + lnTo += cpy; + + /* Shift insertions toward the beginning of the file */ + while( cpy>0 && del==0 && ins>0 ){ + DLine *pTop = &p->aFrom[lnFrom-1]; /* Line before start of insert */ + DLine *pBtm = &p->aTo[lnTo+ins-1]; /* Last line inserted */ + if( p->same_fn(pTop, pBtm)==0 ) break; + if( LENGTH(pTop+1)+LENGTH(pBtm)<=LENGTH(pTop)+LENGTH(pBtm-1) ) break; + lnFrom--; + lnTo--; + p->aEdit[r]--; + p->aEdit[r+3]++; + cpy--; + } + + /* Shift insertions toward the end of the file */ + while( r+3nEdit && p->aEdit[r+3]>0 && del==0 && ins>0 ){ + DLine *pTop = &p->aTo[lnTo]; /* First line inserted */ + DLine *pBtm = &p->aTo[lnTo+ins]; /* First line past end of insert */ + if( p->same_fn(pTop, pBtm)==0 ) break; + if( LENGTH(pTop)+LENGTH(pBtm-1)<=LENGTH(pTop+1)+LENGTH(pBtm) ) break; + lnFrom++; + lnTo++; + p->aEdit[r]++; + p->aEdit[r+3]--; + cpy++; + } + + /* Shift deletions toward the beginning of the file */ + while( cpy>0 && del>0 && ins==0 ){ + DLine *pTop = &p->aFrom[lnFrom-1]; /* Line before start of delete */ + DLine *pBtm = &p->aFrom[lnFrom+del-1]; /* Last line deleted */ + if( p->same_fn(pTop, pBtm)==0 ) break; + if( LENGTH(pTop+1)+LENGTH(pBtm)<=LENGTH(pTop)+LENGTH(pBtm-1) ) break; + lnFrom--; + lnTo--; + p->aEdit[r]--; + p->aEdit[r+3]++; + cpy--; + } + + /* Shift deletions toward the end of the file */ + while( r+3nEdit && p->aEdit[r+3]>0 && del>0 && ins==0 ){ + DLine *pTop = &p->aFrom[lnFrom]; /* First line deleted */ + DLine *pBtm = &p->aFrom[lnFrom+del]; /* First line past end of delete */ + if( p->same_fn(pTop, pBtm)==0 ) break; + if( LENGTH(pTop)+LENGTH(pBtm-1)<=LENGTH(pTop)+LENGTH(pBtm) ) break; + lnFrom++; + lnTo++; + p->aEdit[r]++; + p->aEdit[r+3]--; + cpy++; + } + + lnFrom += del; + lnTo += ins; + } +} + +/* +** Extract the number of lines of context from diffFlags. Supply an +** appropriate default if no context width is specified. +*/ +int diff_context_lines(u64 diffFlags){ + int n = diffFlags & DIFF_CONTEXT_MASK; + if( n==0 && (diffFlags & DIFF_CONTEXT_EX)==0 ) n = 5; + return n; +} + +/* +** Extract the width of columns for side-by-side diff. Supply an +** appropriate default if no width is given. +*/ +int diff_width(u64 diffFlags){ + int w = (diffFlags & DIFF_WIDTH_MASK)/(DIFF_CONTEXT_MASK+1); + if( w==0 ) w = 80; + return w; +} + +/* +** Append the error message to pOut. +*/ +void diff_errmsg(Blob *pOut, const char *msg, int diffFlags){ + if( diffFlags & DIFF_HTML ){ + blob_appendf(pOut, "

%s

", msg); + }else{ + blob_append(pOut, msg, -1); + } +} /* ** Generate a report of the differences between files pA and pB. ** If pOut is not NULL then a unified diff is appended there. It ** is assumed that pOut has already been initialized. If pOut is -** NULL, then a pointer to an array of integers is returned. +** NULL, then a pointer to an array of integers is returned. ** The integers come in triples. For each triple, ** the elements are the number of lines copied, the number of ** lines deleted, and the number of lines inserted. The vector ** is terminated by a triple of all zeros. ** @@ -471,37 +1828,82 @@ ** text "cannot compute difference between binary files". */ int *text_diff( Blob *pA_Blob, /* FROM file */ Blob *pB_Blob, /* TO file */ - Blob *pOut, /* Write unified diff here if not NULL */ - int nContext /* Amount of context to unified diff */ + Blob *pOut, /* Write diff here if not NULL */ + ReCompiled *pRe, /* Only output changes where this Regexp matches */ + u64 diffFlags /* DIFF_* flags defined above */ ){ + int ignoreWs; /* Ignore whitespace */ DContext c; - + + if( diffFlags & DIFF_INVERT ){ + Blob *pTemp = pA_Blob; + pA_Blob = pB_Blob; + pB_Blob = pTemp; + } + ignoreWs = (diffFlags & DIFF_IGNORE_ALLWS)!=0; + blob_to_utf8_no_bom(pA_Blob, 0); + blob_to_utf8_no_bom(pB_Blob, 0); + /* Prepare the input files */ memset(&c, 0, sizeof(c)); - c.aFrom = break_into_lines(blob_str(pA_Blob), blob_size(pA_Blob), &c.nFrom); - c.aTo = break_into_lines(blob_str(pB_Blob), blob_size(pB_Blob), &c.nTo); + if( (diffFlags & DIFF_IGNORE_ALLWS)==DIFF_IGNORE_ALLWS ){ + c.same_fn = same_dline_ignore_allws; + }else{ + c.same_fn = same_dline; + } + c.aFrom = break_into_lines(blob_str(pA_Blob), blob_size(pA_Blob), + &c.nFrom, diffFlags); + c.aTo = break_into_lines(blob_str(pB_Blob), blob_size(pB_Blob), + &c.nTo, diffFlags); if( c.aFrom==0 || c.aTo==0 ){ - free(c.aFrom); - free(c.aTo); + fossil_free(c.aFrom); + fossil_free(c.aTo); if( pOut ){ - blob_appendf(pOut, "cannot compute difference between binary files\n"); + diff_errmsg(pOut, DIFF_CANNOT_COMPUTE_BINARY, diffFlags); } return 0; } /* Compute the difference */ diff_all(&c); + if( ignoreWs && c.nEdit==6 && c.aEdit[1]==0 && c.aEdit[2]==0 ){ + fossil_free(c.aFrom); + fossil_free(c.aTo); + fossil_free(c.aEdit); + if( pOut ) diff_errmsg(pOut, DIFF_WHITESPACE_ONLY, diffFlags); + return 0; + } + if( (diffFlags & DIFF_NOTTOOBIG)!=0 ){ + int i, m, n; + int *a = c.aEdit; + int mx = c.nEdit; + for(i=m=n=0; i10000 ){ + fossil_free(c.aFrom); + fossil_free(c.aTo); + fossil_free(c.aEdit); + if( pOut ) diff_errmsg(pOut, DIFF_TOO_MANY_CHANGES, diffFlags); + return 0; + } + } + if( (diffFlags & DIFF_NOOPT)==0 ){ + diff_optimize(&c); + } if( pOut ){ - /* Compute a context diff if requested */ - contextDiff(&c, pOut, nContext); - free(c.aFrom); - free(c.aTo); - free(c.aEdit); + /* Compute a context or side-by-side diff into pOut */ + if( diffFlags & DIFF_SIDEBYSIDE ){ + sbsDiff(&c, pOut, pRe, diffFlags); + }else{ + contextDiff(&c, pOut, pRe, diffFlags); + } + fossil_free(c.aFrom); + fossil_free(c.aTo); + fossil_free(c.aEdit); return 0; }else{ /* If a context diff is not requested, then return the ** array of COPY/DELETE/INSERT triples. */ @@ -508,44 +1910,123 @@ free(c.aFrom); free(c.aTo); return c.aEdit; } } + +/* +** Process diff-related command-line options and return an appropriate +** "diffFlags" integer. +** +** --brief Show filenames only DIFF_BRIEF +** -c|--context N N lines of context. 
DIFF_CONTEXT_MASK +** --html Format for HTML DIFF_HTML +** --invert Invert the diff DIFF_INVERT +** -n|--linenum Show line numbers DIFF_LINENO +** --noopt Disable optimization DIFF_NOOPT +** --strip-trailing-cr Strip trailing CR DIFF_STRIP_EOLCR +** --unified Unified diff. ~DIFF_SIDEBYSIDE +** -w|--ignore-all-space Ignore all whitespaces DIFF_IGNORE_ALLWS +** -W|--width N N character lines. DIFF_WIDTH_MASK +** -y|--side-by-side Side-by-side diff. DIFF_SIDEBYSIDE +** -Z|--ignore-trailing-space Ignore eol-whitespaces DIFF_IGNORE_EOLWS +*/ +u64 diff_options(void){ + u64 diffFlags = 0; + const char *z; + int f; + if( find_option("ignore-trailing-space","Z",0)!=0 ){ + diffFlags = DIFF_IGNORE_EOLWS; + } + if( find_option("ignore-all-space","w",0)!=0 ){ + diffFlags = DIFF_IGNORE_ALLWS; /* stronger than DIFF_IGNORE_EOLWS */ + } + if( find_option("strip-trailing-cr",0,0)!=0 ){ + diffFlags |= DIFF_STRIP_EOLCR; + } + if( find_option("side-by-side","y",0)!=0 ) diffFlags |= DIFF_SIDEBYSIDE; + if( find_option("unified",0,0)!=0 ) diffFlags &= ~DIFF_SIDEBYSIDE; + if( (z = find_option("context","c",1))!=0 && (f = atoi(z))>=0 ){ + if( f > DIFF_CONTEXT_MASK ) f = DIFF_CONTEXT_MASK; + diffFlags |= f + DIFF_CONTEXT_EX; + } + if( (z = find_option("width","W",1))!=0 && (f = atoi(z))>0 ){ + f *= DIFF_CONTEXT_MASK+1; + if( f > DIFF_WIDTH_MASK ) f = DIFF_CONTEXT_MASK; + diffFlags |= f; + } + if( find_option("html",0,0)!=0 ) diffFlags |= DIFF_HTML; + if( find_option("linenum","n",0)!=0 ) diffFlags |= DIFF_LINENO; + if( find_option("noopt",0,0)!=0 ) diffFlags |= DIFF_NOOPT; + if( find_option("invert",0,0)!=0 ) diffFlags |= DIFF_INVERT; + if( find_option("brief",0,0)!=0 ) diffFlags |= DIFF_BRIEF; + return diffFlags; +} /* ** COMMAND: test-rawdiff +** +** Usage: %fossil test-rawdiff FILE1 FILE2 +** +** Show a minimal sequence of Copy/Delete/Insert operations needed to convert +** FILE1 into FILE2. This command is intended for use in testing and debugging +** the built-in difference engine of Fossil. */ void test_rawdiff_cmd(void){ Blob a, b; int r; int i; int *R; + u64 diffFlags = diff_options(); if( g.argc<4 ) usage("FILE1 FILE2 ..."); blob_read_from_file(&a, g.argv[2]); for(i=3; i3 ) printf("-------------------------------\n"); + if( i>3 ) fossil_print("-------------------------------\n"); blob_read_from_file(&b, g.argv[i]); - R = text_diff(&a, &b, 0, 0); + R = text_diff(&a, &b, 0, 0, diffFlags); for(r=0; R[r] || R[r+1] || R[r+2]; r += 3){ - printf(" copy %4d delete %4d insert %4d\n", R[r], R[r+1], R[r+2]); + fossil_print(" copy %4d delete %4d insert %4d\n", R[r], R[r+1], R[r+2]); } /* free(R); */ blob_reset(&b); } } /* -** COMMAND: test-udiff +** COMMAND: test-diff +** +** Usage: %fossil [options] FILE1 FILE2 +** +** Print the difference between two files. The usual diff options apply. 
*/ -void test_udiff_cmd(void){ +void test_diff_cmd(void){ Blob a, b, out; + u64 diffFlag; + const char *zRe; /* Regex filter for diff output */ + ReCompiled *pRe = 0; /* Regex filter for diff output */ + + if( find_option("tk",0,0)!=0 ){ + diff_tk("test-diff", 2); + return; + } + find_option("i",0,0); + find_option("v",0,0); + zRe = find_option("regexp","e",1); + if( zRe ){ + const char *zErr = re_compile(&pRe, zRe, 0); + if( zErr ) fossil_fatal("regex error: %s", zErr); + } + diffFlag = diff_options(); + verify_all_options(); if( g.argc!=4 ) usage("FILE1 FILE2"); + diff_print_filenames(g.argv[2], g.argv[3], diffFlag); blob_read_from_file(&a, g.argv[2]); blob_read_from_file(&b, g.argv[3]); blob_zero(&out); - text_diff(&a, &b, &out, 3); + text_diff(&a, &b, &out, pRe, diffFlag); blob_write_to_file(&out, "-"); + re_free(pRe); } /************************************************************************** ** The basic difference engine is above. What follows is the annotation ** engine. Both are in the same file since they share many components. @@ -556,240 +2037,513 @@ ** of the following structure. */ typedef struct Annotator Annotator; struct Annotator { DContext c; /* The diff-engine context */ - struct { /* Lines of the original files... */ + struct AnnLine { /* Lines of the original files... */ const char *z; /* The text of the line */ - int n; /* Number of bytes (omitting trailing space and \n) */ - const char *zSrc; /* Tag showing origin of this line */ + short int n; /* Number of bytes (omitting trailing \n) */ + short int iVers; /* Level at which tag was set */ } *aOrig; int nOrig; /* Number of elements in aOrig[] */ - int nNoSrc; /* Number of entries where aOrig[].zSrc==NULL */ + int nVers; /* Number of versions analyzed */ + int bLimit; /* True if the iLimit was reached */ + struct AnnVers { + const char *zFUuid; /* File being analyzed */ + const char *zMUuid; /* Check-in containing the file */ + const char *zDate; /* Date of the check-in */ + const char *zBgColor; /* Suggested background color */ + const char *zUser; /* Name of user who did the check-in */ + unsigned cnt; /* Number of lines contributed by this check-in */ + } *aVers; /* For each check-in analyzed */ + char **azVers; /* Names of versions analyzed */ }; /* ** Initialize the annotation process by specifying the file that is ** to be annotated. The annotator takes control of the input Blob and ** will release it when it is finished with it. */ -static int annotation_start(Annotator *p, Blob *pInput){ +static int annotation_start(Annotator *p, Blob *pInput, u64 diffFlags){ int i; memset(p, 0, sizeof(*p)); - p->c.aTo = break_into_lines(blob_str(pInput), blob_size(pInput), &p->c.nTo); + if( (diffFlags & DIFF_IGNORE_ALLWS)==DIFF_IGNORE_ALLWS ){ + p->c.same_fn = same_dline_ignore_allws; + }else{ + p->c.same_fn = same_dline; + } + p->c.aTo = break_into_lines(blob_str(pInput), blob_size(pInput),&p->c.nTo, + diffFlags); if( p->c.aTo==0 ){ return 1; } - p->aOrig = malloc( sizeof(p->aOrig[0])*p->c.nTo ); - if( p->aOrig==0 ) fossil_panic("out of memory"); + p->aOrig = fossil_malloc( sizeof(p->aOrig[0])*p->c.nTo ); for(i=0; ic.nTo; i++){ p->aOrig[i].z = p->c.aTo[i].z; - p->aOrig[i].n = p->c.aTo[i].h & LENGTH_MASK; - p->aOrig[i].zSrc = 0; + p->aOrig[i].n = p->c.aTo[i].n; + p->aOrig[i].iVers = -1; } p->nOrig = p->c.nTo; return 0; } /* ** The input pParent is the next most recent ancestor of the file ** being annotated. Do another step of the annotation. Return true -** if additional annotation is required. 
zPName is the tag to insert -** on each line of the file being annotated that was contributed by -** pParent. Memory to hold zPName is leaked. +** if additional annotation is required. */ -static int annotation_step(Annotator *p, Blob *pParent, char *zPName){ +static int annotation_step(Annotator *p, Blob *pParent, int iVers, u64 diffFlags){ int i, j; int lnTo; /* Prepare the parent file to be diffed */ p->c.aFrom = break_into_lines(blob_str(pParent), blob_size(pParent), - &p->c.nFrom); + &p->c.nFrom, diffFlags); if( p->c.aFrom==0 ){ return 1; } /* Compute the differences going from pParent to the file being ** annotated. */ diff_all(&p->c); /* Where new lines are inserted on this difference, record the - ** zPName as the source of the new line. + ** iVers as the source of the new line. */ for(i=lnTo=0; ic.nEdit; i+=3){ - for(j=0; jc.aEdit[i]; j++, lnTo++){ - p->aOrig[lnTo].zSrc = zPName; + int nCopy = p->c.aEdit[i]; + int nIns = p->c.aEdit[i+2]; + lnTo += nCopy; + for(j=0; jaOrig[lnTo].iVers<0 ){ + p->aOrig[lnTo].iVers = iVers; + } } - lnTo += p->c.aEdit[i+2]; } /* Clear out the diff results */ - free(p->c.aEdit); + fossil_free(p->c.aEdit); p->c.aEdit = 0; p->c.nEdit = 0; p->c.nEditAlloc = 0; /* Clear out the from file */ - free(p->c.aFrom); - blob_zero(pParent); + free(p->c.aFrom); /* Return no errors */ return 0; } -/* -** COMMAND: test-annotate-step -*/ -void test_annotate_step_cmd(void){ - Blob orig, b; - Annotator x; - int i; - - if( g.argc<4 ) usage("RID1 RID2 ..."); - db_must_be_within_tree(); - blob_zero(&b); - content_get(name_to_rid(g.argv[2]), &orig); - if( annotation_start(&x, &orig) ){ - fossil_fatal("binary file"); - } - for(i=3; i%.10s %s %9.9s", - g.zBaseURL, zUuid, zUuid, zDate, zUser); - }else{ - zLabel = mprintf("%.10s %s %9.9s", zUuid, zDate, zUser); - } - content_get(pid, &step); - annotation_step(p, &step, zLabel); - blob_reset(&step); - } - db_finalize(&q); + + db_bind_int(&q, ":rid", rid); + if( iLimit==0 ) iLimit = 1000000000; + while( rid && iLimit>cnt && db_step(&q)==SQLITE_ROW ){ + int prevId = db_column_int(&q, 4); + p->aVers = fossil_realloc(p->aVers, (p->nVers+1)*sizeof(p->aVers[0])); + p->aVers[p->nVers].zFUuid = fossil_strdup(db_column_text(&q, 0)); + p->aVers[p->nVers].zMUuid = fossil_strdup(db_column_text(&q, 1)); + p->aVers[p->nVers].zDate = fossil_strdup(db_column_text(&q, 2)); + p->aVers[p->nVers].zUser = fossil_strdup(db_column_text(&q, 3)); + if( p->nVers ){ + content_get(rid, &step); + blob_to_utf8_no_bom(&step, 0); + annotation_step(p, &step, p->nVers-1, annFlags); + blob_reset(&step); + } + p->nVers++; + db_bind_int(&ins, ":rid", rid); + db_step(&ins); + db_reset(&ins); + db_reset(&q); + rid = prevId; + db_bind_int(&q, ":rid", prevId); + cnt++; + } + p->bLimit = iLimit==cnt; + db_finalize(&q); + db_finalize(&ins); + db_end_transaction(0); +} + +/* +** Return a color from a gradient. 
+*/ +unsigned gradient_color(unsigned c1, unsigned c2, int n, int i){ + unsigned c; /* Result color */ + unsigned x1, x2; + if( i==0 || n==0 ) return c1; + else if(i>=n) return c2; + x1 = (c1>>16)&0xff; + x2 = (c2>>16)&0xff; + c = (x1*(n-i) + x2*i)/n<<16 & 0xff0000; + x1 = (c1>>8)&0xff; + x2 = (c2>>8)&0xff; + c |= (x1*(n-i) + x2*i)/n<<8 & 0xff00; + x1 = c1&0xff; + x2 = c2&0xff; + c |= (x1*(n-i) + x2*i)/n & 0xff; + return c; } /* ** WEBPAGE: annotate +** WEBPAGE: blame +** WEBPAGE: praise +** +** URL: /annotate?checkin=ID&filename=FILENAME +** URL: /blame?checkin=ID&filename=FILENAME +** URL: /praise?checkin=ID&filename=FILENAME +** +** Show the most recent change to each line of a text file. /annotate shows +** the date of the changes and the check-in SHA1 hash (with a link to the +** check-in). /blame and /praise also show the user who made the check-in. ** ** Query parameters: ** ** checkin=ID The manifest ID at which to start the annotation ** filename=FILENAME The filename. +** filevers Show file versions rather than check-in versions +** limit=N Limit the search depth to N ancestors +** log=BOOLEAN Show a log of versions analyzed +** w Ignore whitespace +** */ void annotation_page(void){ int mid; int fnid; int i; + int iLimit; /* Depth limit */ + u64 annFlags = (ANN_FILE_ANCEST|DIFF_STRIP_EOLCR); + int showLog = 0; /* True to display the log */ + int ignoreWs = 0; /* Ignore whitespace */ + const char *zFilename; /* Name of file to annotate */ + const char *zCI; /* The check-in containing zFilename */ Annotator ann; + HQuery url; + struct AnnVers *p; + unsigned clr1, clr2, clr; + int bBlame = g.zPath[0]!='a';/* True for BLAME output. False for ANNOTATE. */ + /* Gather query parameters */ + showLog = atoi(PD("log","1")); login_check_credentials(); - if( !g.okRead ){ login_needed(); return; } - mid = name_to_rid(PD("checkin","0")); - fnid = db_int(0, "SELECT fnid FROM filename WHERE name=%Q", P("filename")); + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } + if( exclude_spiders("annotate") ) return; + load_control(); + mid = name_to_typed_rid(PD("checkin","0"),"ci"); + zFilename = P("filename"); + fnid = db_int(0, "SELECT fnid FROM filename WHERE name=%Q", zFilename); if( mid==0 || fnid==0 ){ fossil_redirect_home(); } + iLimit = atoi(PD("limit","20")); + if( P("filevers") ) annFlags |= ANN_FILE_VERS; + ignoreWs = P("w")!=0; + if( ignoreWs ) annFlags |= DIFF_IGNORE_ALLWS; if( !db_exists("SELECT 1 FROM mlink WHERE mid=%d AND fnid=%d",mid,fnid) ){ fossil_redirect_home(); } - style_header("File Annotation"); - annotate_file(&ann, fnid, mid, g.okHistory); + + /* compute the annotation */ + compute_direct_ancestors(mid); + annotate_file(&ann, fnid, mid, iLimit, annFlags); + zCI = ann.aVers[0].zMUuid; + + /* generate the web page */ + style_header("Annotation For %h", zFilename); + if( bBlame ){ + url_initialize(&url, "blame"); + }else{ + url_initialize(&url, "annotate"); + } + url_add_parameter(&url, "checkin", P("checkin")); + url_add_parameter(&url, "filename", zFilename); + if( iLimit!=20 ){ + url_add_parameter(&url, "limit", sqlite3_mprintf("%d", iLimit)); + } + url_add_parameter(&url, "log", showLog ? 
"1" : "0"); + if( ignoreWs ){ + url_add_parameter(&url, "w", ""); + style_submenu_element("Show Whitespace Changes", "Show Whitespace Changes", + "%s", url_render(&url, "w", 0, 0, 0)); + }else{ + style_submenu_element("Ignore Whitespace", "Ignore Whitespace", + "%s", url_render(&url, "w", "", 0, 0)); + } + if( showLog ){ + style_submenu_element("Hide Log", "Hide Log", + "%s", url_render(&url, "log", "0", 0, 0)); + }else{ + style_submenu_element("Show Log", "Show Log", + "%s", url_render(&url, "log", "1", 0, 0)); + } + if( ann.bLimit ){ + char *z1, *z2; + style_submenu_element("All Ancestors", "All Ancestors", + "%s", url_render(&url, "limit", "-1", 0, 0)); + z1 = sqlite3_mprintf("%d Ancestors", iLimit+20); + z2 = sqlite3_mprintf("%d", iLimit+20); + style_submenu_element(z1, z1, "%s", url_render(&url, "limit", z2, 0, 0)); + } + if( iLimit>20 ){ + style_submenu_element("20 Ancestors", "20 Ancestors", + "%s", url_render(&url, "limit", "20", 0, 0)); + } + if( skin_detail_boolean("white-foreground") ){ + clr1 = 0xa04040; + clr2 = 0x4059a0; + }else{ + clr1 = 0xffb5b5; /* Recent changes: red (hot) */ + clr2 = 0xb5e0ff; /* Older changes: blue (cold) */ + } + for(p=ann.aVers, i=0; iAncestors of %z(zLink)%h(zFilename) analyzed: + @
    + for(p=ann.aVers, i=0; i%s(p->zDate) + @ check-in %z(href("%R/info/%!S",p->zMUuid))%S(p->zMUuid) + @ artifact %z(href("%R/artifact/%!S",p->zFUuid))%S(p->zFUuid) + @ +#if 0 + if( i>0 ){ + char *zLink = xhref("target='infowindow'", + "%R/fdiff?v1=%S&v2=%S&sbs=1", + p->zFUuid,ann.aVers[0].zFUuid); + @ %z(zLink)[diff-to-top] + if( i>1 ){ + zLink = xhref("target='infowindow'", + "%R/fdiff?v1=%S&v2=%S&sbs=1", + p->zFUuid,p[-1].zFUuid); + @ %z(zLink)[diff-to-previous] + } + } +#endif + } + @
+ @
+ } + if( !ann.bLimit ){ + @

Origin for each line in + @ %z(href("%R/finfo?name=%h&ci=%!S", zFilename, zCI))%h(zFilename) + @ from check-in %z(href("%R/info/%!S",zCI))%S(zCI):

+ iLimit = ann.nVers+10; + }else{ + @

Lines added by the %d(iLimit) most recent ancestors of + @ %z(href("%R/finfo?name=%h&ci=%!S", zFilename, zCI))%h(zFilename) + @ from check-in %z(href("%R/info/%!S",zCI))%S(zCI):

+ } @
   for(i=0; i<ann.nOrig; i++){
+    int iVers = ann.aOrig[i].iVers;
+    char *z = (char*)ann.aOrig[i].z;
+    int n = ann.aOrig[i].n;
+    char zPrefix[300];
+    z[n] = 0;
+    if( iLimit>ann.nVers && iVers<0 ) iVers = ann.nVers-1;
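+    /* iVers is still -1 for lines that were already present in the oldest
+    ** version analyzed.  When the complete history has been examined
+    ** (ann.bLimit==0), iLimit was raised above ann.nVers, so those lines
+    ** are charged to the oldest analyzed check-in; otherwise they are
+    ** rendered below with a blank prefix. */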
+
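+    /* Build the line prefix.  /blame and /praise (bBlame!=0) show the
+    ** committing user; /annotate shows a line number instead.  Roughly
+    ** (with made-up values):
+    **
+    **    blame:     0123456789 2014-06-08           drh:
+    **    annotate:  0123456789 2014-06-08    7:
+    */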
+    if( bBlame ){
+      if( iVers>=0 ){
+        struct AnnVers *p = ann.aVers+iVers;
+        char *zLink = xhref("target='infowindow'", "%R/info/%!S", p->zMUuid);
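+        /* zLink is the opening anchor markup for a hyperlink to the /info
+        ** page of the check-in; it is spliced into zPrefix via %s below
+        ** and released once the prefix has been formatted. */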
+        sqlite3_snprintf(sizeof(zPrefix), zPrefix,
+             ""
+             "%s%.10s %s %13.13s:",
+             p->zBgColor, zLink, p->zMUuid, p->zDate, p->zUser);
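+        /* zPrefix now carries the first 10 hex digits of the check-in
+        ** hash as a link, followed by the check-in date and the user
+        ** name in a 13-character field, shown on the background color
+        ** (p->zBgColor) picked for this version's age. */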
+        fossil_free(zLink);
+      }else{
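+        /* No known origin: pad to 36 characters, the width of a populated
+        ** prefix (10 hash + 1 + 10 date + 1 + 13 user + ':'), so the text
+        ** keeps its column. */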
+        sqlite3_snprintf(sizeof(zPrefix), zPrefix, "%36s", "");
+      }
+    }else{
+      if( iVers>=0 ){
+        struct AnnVers *p = ann.aVers+iVers;
+        char *zLink = xhref("target='infowindow'", "%R/info/%!S", p->zMUuid);
+        sqlite3_snprintf(sizeof(zPrefix), zPrefix,
+             ""
+             "%s%.10s %s %4d:",
+             p->zBgColor, zLink, p->zMUuid, p->zDate, i+1);
+        fossil_free(zLink);
+      }else{
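+        /* No known origin: 22 blanks stand in for the hash and date
+        ** columns (10+1+10+1) while the line number is still printed. */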
+        sqlite3_snprintf(sizeof(zPrefix), zPrefix, "%22s%4d:", "", i+1);
+      }
+    }
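+    /* Emit one line of output: zPrefix goes out verbatim (it may contain
+    ** markup) while the file text itself is HTML-escaped by %h. */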
+    @ %s(zPrefix) %h(z)
+
   }
   @ </pre>
style_footer(); } /* ** COMMAND: annotate +** COMMAND: blame +** COMMAND: praise ** -** %fossil annotate FILENAME +** %fossil (annotate|blame|praise) ?OPTIONS? FILENAME ** ** Output the text of a file with markings to show when each line of -** the file was introduced. +** the file was last modified. The "annotate" command shows line numbers +** and omits the username. The "blame" and "praise" commands show the user +** who made each check-in and omits the line number. +** +** Options: +** --filevers Show file version numbers rather than check-in versions +** -l|--log List all versions analyzed +** -n|--limit N Only look backwards in time by N versions +** -w|--ignore-all-space Ignore white space when comparing lines +** -Z|--ignore-trailing-space Ignore whitespace at line end +** +** See also: info, finfo, timeline */ void annotate_cmd(void){ int fnid; /* Filename ID */ int fid; /* File instance ID */ int mid; /* Manifest where file was checked in */ + int cid; /* Checkout ID */ Blob treename; /* FILENAME translated to canonical form */ - char *zFilename; /* Cannonical filename */ + char *zFilename; /* Canonical filename */ Annotator ann; /* The annotation of the file */ int i; /* Loop counter */ + const char *zLimit; /* The value to the -n|--limit option */ + int iLimit; /* How far back in time to look */ + int showLog; /* True to show the log */ + int fileVers; /* Show file version instead of check-in versions */ + u64 annFlags = 0; /* Flags to control annotation properties */ + int bBlame = 0; /* True for BLAME output. False for ANNOTATE. */ + bBlame = g.argv[1][0]!='a'; + zLimit = find_option("limit","n",1); + if( zLimit==0 || zLimit[0]==0 ) zLimit = "-1"; + iLimit = atoi(zLimit); + showLog = find_option("log","l",0)!=0; + if( find_option("ignore-trailing-space","Z",0)!=0 ){ + annFlags = DIFF_IGNORE_EOLWS; + } + if( find_option("ignore-all-space","w",0)!=0 ){ + annFlags = DIFF_IGNORE_ALLWS; /* stronger than DIFF_IGNORE_EOLWS */ + } + fileVers = find_option("filevers",0,0)!=0; db_must_be_within_tree(); - if (g.argc<3) { + + /* We should be done with options.. 
*/ + verify_all_options(); + + if( g.argc<3 ) { usage("FILENAME"); } - file_tree_name(g.argv[2], &treename, 1); + file_tree_name(g.argv[2], &treename, 0, 1); zFilename = blob_str(&treename); fnid = db_int(0, "SELECT fnid FROM filename WHERE name=%Q", zFilename); if( fnid==0 ){ fossil_fatal("no such file: %s", zFilename); } fid = db_int(0, "SELECT rid FROM vfile WHERE pathname=%Q", zFilename); if( fid==0 ){ fossil_fatal("not part of current checkout: %s", zFilename); } - mid = db_int(0, "SELECT mid FROM mlink WHERE fid=%d AND fnid=%d", fid, fnid); + cid = db_lget_int("checkout", 0); + if( cid == 0 ){ + fossil_fatal("Not in a checkout"); + } + if( iLimit<=0 ) iLimit = 1000000000; + compute_direct_ancestors(cid); + mid = db_int(0, "SELECT mlink.mid FROM mlink, ancestor " + " WHERE mlink.fid=%d AND mlink.fnid=%d AND mlink.mid=ancestor.rid" + " ORDER BY ancestor.generation ASC LIMIT 1", + fid, fnid); if( mid==0 ){ - fossil_panic("unable to find manifest"); + fossil_fatal("unable to find manifest"); } - annotate_file(&ann, fnid, mid, 0); + annFlags |= (ANN_FILE_ANCEST|DIFF_STRIP_EOLCR); + annotate_file(&ann, fnid, mid, iLimit, annFlags); + if( showLog ){ + struct AnnVers *p; + for(p=ann.aVers, i=0; izDate, p->zMUuid, p->zFUuid); + } + fossil_print("---------------------------------------------------\n"); + } for(i=0; iann.nVers && iVers<0 ) iVers = ann.nVers-1; + p = ann.aVers + iVers; + if( bBlame ){ + if( iVers>=0 ){ + fossil_print("%S %s %13.13s: %.*s\n", + fileVers ? p->zFUuid : p->zMUuid, p->zDate, p->zUser, n, z); + }else{ + fossil_print("%35s %.*s\n", "", n, z); + } + }else{ + if( iVers>=0 ){ + fossil_print("%S %s %5d: %.*s\n", + fileVers ? p->zFUuid : p->zMUuid, p->zDate, i+1, n, z); + }else{ + fossil_print("%21s %5d: %.*s\n", + "", i+1, n, z); + } + } } } ADDED src/diff.tcl Index: src/diff.tcl ================================================================== --- src/diff.tcl +++ src/diff.tcl @@ -0,0 +1,397 @@ +# The "diff --tk" command outputs prepends a "set fossilcmd {...}" line +# to this file, then runs this file using "tclsh" in order to display the +# graphical diff in a separate window. A typical "set fossilcmd" line +# looks like this: +# +# set fossilcmd {| "./fossil" diff --html -y -i -v} +# +# This header comment is stripped off by the "mkbuiltin.c" program. 
+# +set prog { +package require Tk + +array set CFG { + TITLE {Fossil Diff} + LN_COL_BG #dddddd + LN_COL_FG #444444 + TXT_COL_BG #ffffff + TXT_COL_FG #000000 + MKR_COL_BG #444444 + MKR_COL_FG #dddddd + CHNG_BG #d0d0ff + ADD_BG #c0ffc0 + RM_BG #ffc0c0 + HR_FG #888888 + HR_PAD_TOP 4 + HR_PAD_BTM 8 + FN_BG #444444 + FN_FG #ffffff + FN_PAD 5 + ERR_FG #ee0000 + PADX 5 + WIDTH 80 + HEIGHT 45 + LB_HEIGHT 25 +} + +if {![namespace exists ttk]} { + interp alias {} ::ttk::scrollbar {} ::scrollbar + interp alias {} ::ttk::menubutton {} ::menubutton +} + +proc dehtml {x} { + set x [regsub -all {<[^>]*>} $x {}] + return [string map {& & < < > > ' ' " \"} $x] +} + +proc cols {} { + return [list .lnA .txtA .mkr .lnB .txtB] +} + +proc colType {c} { + regexp {[a-z]+} $c type + return $type +} + +proc getLine {difftxt N iivar} { + upvar $iivar ii + if {$ii>=$N} {return -1} + set x [lindex $difftxt $ii] + incr ii + return $x +} + +proc readDiffs {fossilcmd} { + global difftxt + if {![info exists difftxt]} { + set in [open $fossilcmd r] + fconfigure $in -encoding utf-8 + set difftxt [split [read $in] \n] + close $in + } + set N [llength $difftxt] + set ii 0 + set nDiffs 0 + array set widths {txt 0 ln 0 mkr 0} + while {[set line [getLine $difftxt $N ii]] != -1} { + set fn2 {} + if {![regexp {^=+ (.*?) =+ versus =+ (.*?) =+$} $line all fn fn2] + && ![regexp {^=+ (.*?) =+$} $line all fn] + } { + continue + } + set errMsg "" + set line [getLine $difftxt $N ii] + if {[string compare -length 6 $line "]*>(.+)} $line - errMsg]} { + continue + } + incr nDiffs + set idx [expr {$nDiffs > 1 ? [.txtA index end] : "1.0"}] + .wfiles.lb insert end $fn + + foreach c [cols] { + if {$nDiffs > 1} { + $c insert end \n - + } + if {[colType $c] eq "txt"} { + $c insert end $fn\n fn + if {$fn2!=""} {set fn $fn2} + } else { + $c insert end \n fn + } + $c insert end \n - + + if {$errMsg ne ""} continue + while {[getLine $difftxt $N ii] ne "
"} continue
+      set type [colType $c]
+      set str {}
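+      # Collect the column body: read until the closing pre tag, measuring
+      # each de-HTML-ized line; the widest value is later used to size the
+      # line-number and marker columns.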
+      while {[set line [getLine $difftxt $N ii]] ne "</pre>
"} { + set len [string length [dehtml $line]] + if {$len > $widths($type)} { + set widths($type) $len + } + append str $line\n + } + + set re {([^<]*)} + # Use \r as separator since it can't appear in the diff output (it gets + # converted to a space). + set str [regsub -all $re $str "\r\\1\r\\2\r"] + foreach {pre class mid} [split $str \r] { + if {$class ne ""} { + $c insert end [dehtml $pre] - [dehtml $mid] [list $class -] + } else { + $c insert end [dehtml $pre] - + } + } + } + + if {$errMsg ne ""} { + foreach c {.txtA .txtB} {$c insert end [string trim $errMsg] err} + foreach c [cols] {$c insert end \n -} + } + } + + foreach c [cols] { + set type [colType $c] + if {$type ne "txt"} { + $c config -width $widths($type) + } + $c config -state disabled + } + if {$nDiffs <= [.wfiles.lb cget -height]} { + .wfiles.lb config -height $nDiffs + grid remove .wfiles.sb + } + + return $nDiffs +} + +proc viewDiff {idx} { + .txtA yview $idx + .txtA xview moveto 0 +} + +proc cycleDiffs {{reverse 0}} { + if {$reverse} { + set range [.txtA tag prevrange fn @0,0 1.0] + if {$range eq ""} { + viewDiff {fn.last -1c} + } else { + viewDiff [lindex $range 0] + } + } else { + set range [.txtA tag nextrange fn {@0,0 +1c} end] + if {$range eq "" || [lindex [.txtA yview] 1] == 1} { + viewDiff fn.first + } else { + viewDiff [lindex $range 0] + } + } +} + +proc xvis {col} { + set view [$col xview] + return [expr {[lindex $view 1]-[lindex $view 0]}] +} + +proc scroll-x {args} { + set c .txt[expr {[xvis .txtA] < [xvis .txtB] ? "A" : "B"}] + eval $c xview $args +} + +interp alias {} scroll-y {} .txtA yview + +proc noop {args} {} + +proc enableSync {axis} { + update idletasks + interp alias {} sync-$axis {} + rename _sync-$axis sync-$axis +} + +proc disableSync {axis} { + rename sync-$axis _sync-$axis + interp alias {} sync-$axis {} noop +} + +proc sync-x {col first last} { + disableSync x + $col xview moveto [expr {$first*[xvis $col]/($last-$first)}] + foreach side {A B} { + set sb .sbx$side + set xview [.txt$side xview] + if {[lindex $xview 0] > 0 || [lindex $xview 1] < 1} { + grid $sb + eval $sb set $xview + } else { + grid remove $sb + } + } + enableSync x +} + +proc sync-y {first last} { + disableSync y + foreach c [cols] { + $c yview moveto $first + } + if {$first > 0 || $last < 1} { + grid .sby + .sby set $first $last + } else { + grid remove .sby + } + enableSync y +} + +wm withdraw . +wm title . $CFG(TITLE) +wm iconname . $CFG(TITLE) +bind . exit +bind . {after 0 exit} +bind . {cycleDiffs; break} +bind . <> {cycleDiffs 1; break} +bind . { + event generate .bb.files <1> + event generate .bb.files + break +} +foreach {key axis args} { + Up y {scroll -5 units} + k y {scroll -5 units} + Down y {scroll 5 units} + j y {scroll 5 units} + Left x {scroll -5 units} + h x {scroll -5 units} + Right x {scroll 5 units} + l x {scroll 5 units} + Prior y {scroll -1 page} + b y {scroll -1 page} + Next y {scroll 1 page} + space y {scroll 1 page} + Home y {moveto 0} + g y {moveto 0} + End y {moveto 1} +} { + bind . <$key> "scroll-$axis $args; break" + bind . continue +} + +frame .bb +::ttk::menubutton .bb.files -text "Files" +toplevel .wfiles +wm withdraw .wfiles +update idletasks +wm transient .wfiles . 
+wm overrideredirect .wfiles 1 +listbox .wfiles.lb -width 0 -height $CFG(LB_HEIGHT) -activestyle none \ + -yscroll {.wfiles.sb set} +::ttk::scrollbar .wfiles.sb -command {.wfiles.lb yview} +grid .wfiles.lb .wfiles.sb -sticky ns +bind .bb.files <1> { + set x [winfo rootx %W] + set y [expr {[winfo rooty %W]+[winfo height %W]}] + wm geometry .wfiles +$x+$y + wm deiconify .wfiles + focus .wfiles.lb +} +bind .wfiles {wm withdraw .wfiles} +bind .wfiles {focus .} +foreach evt {1 Return} { + bind .wfiles.lb <$evt> { + catch { + set idx [lindex [.txtA tag ranges fn] [expr {[%W curselection]*2}]] + viewDiff $idx + } + focus . + break + } +} +bind .wfiles.lb { + %W selection clear 0 end + %W selection set @%x,%y +} + +foreach {side syncCol} {A .txtB B .txtA} { + set ln .ln$side + text $ln + $ln tag config - -justify right + + set txt .txt$side + text $txt -width $CFG(WIDTH) -height $CFG(HEIGHT) -wrap none \ + -xscroll "sync-x $syncCol" + catch {$txt config -tabstyle wordprocessor} ;# Required for Tk>=8.5 + foreach tag {add rm chng} { + $txt tag config $tag -background $CFG([string toupper $tag]_BG) + $txt tag lower $tag + } + $txt tag config fn -background $CFG(FN_BG) -foreground $CFG(FN_FG) \ + -justify center + $txt tag config err -foreground $CFG(ERR_FG) +} +text .mkr + +foreach c [cols] { + set keyPrefix [string toupper [colType $c]]_COL_ + if {[tk windowingsystem] eq "win32"} {$c config -font {courier 9}} + $c config -bg $CFG(${keyPrefix}BG) -fg $CFG(${keyPrefix}FG) -borderwidth 0 \ + -padx $CFG(PADX) -yscroll sync-y + $c tag config hr -spacing1 $CFG(HR_PAD_TOP) -spacing3 $CFG(HR_PAD_BTM) \ + -foreground $CFG(HR_FG) + $c tag config fn -spacing1 $CFG(FN_PAD) -spacing3 $CFG(FN_PAD) + bindtags $c ". $c Text all" + bind $c <1> {focus %W} +} + +::ttk::scrollbar .sby -command {.txtA yview} -orient vertical +::ttk::scrollbar .sbxA -command {.txtA xview} -orient horizontal +::ttk::scrollbar .sbxB -command {.txtB xview} -orient horizontal +frame .spacer + +if {[readDiffs $fossilcmd] == 0} { + tk_messageBox -type ok -title $CFG(TITLE) -message "No changes" + exit +} +update idletasks + +proc saveDiff {} { + set fn [tk_getSaveFile] + if {$fn==""} return + set out [open $fn wb] + puts $out "#!/usr/bin/tclsh\n#\n# Run this script using 'tclsh' or 'wish'" + puts $out "# to see the graphical diff.\n#" + puts $out "set fossilcmd {}" + puts $out "set prog [list $::prog]" + puts $out "set difftxt \173" + foreach e $::difftxt {puts $out [list $e]} + puts $out "\175" + puts $out "eval \$prog" + close $out +} +proc invertDiff {} { + global CFG + array set x [grid info .txtA] + if {$x(-column)==1} { + grid config .lnB -column 0 + grid config .txtB -column 1 + .txtB tag config add -background $CFG(RM_BG) + grid config .lnA -column 3 + grid config .txtA -column 4 + .txtA tag config rm -background $CFG(ADD_BG) + } else { + grid config .lnA -column 0 + grid config .txtA -column 1 + .txtA tag config rm -background $CFG(RM_BG) + grid config .lnB -column 3 + grid config .txtB -column 4 + .txtB tag config add -background $CFG(ADD_BG) + } + .mkr config -state normal + set clt [.mkr search -all < 1.0 end] + set cgt [.mkr search -all > 1.0 end] + foreach c $clt {.mkr replace $c "$c +1 chars" >} + foreach c $cgt {.mkr replace $c "$c +1 chars" <} + .mkr config -state disabled +} +::ttk::button .bb.quit -text {Quit} -command exit +::ttk::button .bb.invert -text {Invert} -command invertDiff +::ttk::button .bb.save -text {Save As...} -command saveDiff +pack .bb.quit .bb.invert -side left +if {$fossilcmd!=""} {pack .bb.save -side 
left} +pack .bb.files -side left +grid rowconfigure . 1 -weight 1 +grid columnconfigure . 1 -weight 1 +grid columnconfigure . 4 -weight 1 +grid .bb -row 0 -columnspan 6 +eval grid [cols] -row 1 -sticky nsew +grid .sby -row 1 -column 5 -sticky ns +grid .sbxA -row 2 -columnspan 2 -sticky ew +grid .spacer -row 2 -column 2 +grid .sbxB -row 2 -column 3 -columnspan 2 -sticky ew + +.spacer config -height [winfo height .sbxA] +wm deiconify . +} +eval $prog Index: src/diffcmd.c ================================================================== --- src/diffcmd.c +++ src/diffcmd.c @@ -20,48 +20,134 @@ #include "config.h" #include "diffcmd.h" #include /* -** Shell-escape the given string. Append the result to a blob. -*/ -static void shell_escape(Blob *pBlob, const char *zIn){ - int n = blob_size(pBlob); - int k = strlen(zIn); - int i, c; - char *z; - for(i=0; (c = zIn[i])!=0; i++){ - if( isspace(c) || c=='"' || (c=='\\' && zIn[i+1]!=0) ){ - blob_appendf(pBlob, "\"%s\"", zIn); - z = blob_buffer(pBlob); - for(i=n+1; i<=n+k; i++){ - if( z[i]=='"' ) z[i] = '_'; - } - return; - } - } - blob_append(pBlob, zIn, -1); -} - -/* -** This function implements a cross-platform "system()" interface. -*/ -int portable_system(const char *zOrigCmd){ - int rc; -#ifdef __MINGW32__ - /* On windows, we have to put double-quotes around the entire command. - ** Who knows why - this is just the way windows works. - */ - char *zNewCmd = mprintf("\"%s\"", zOrigCmd); - rc = system(zNewCmd); - free(zNewCmd); +** Use the right null device for the platform. +*/ +#if defined(_WIN32) +# define NULL_DEVICE "NUL" +#else +# define NULL_DEVICE "/dev/null" +#endif + +/* +** Used when the name for the diff is unknown. +*/ +#define DIFF_NO_NAME "(unknown)" + +/* +** Use the "exec-rel-paths" setting and the --exec-abs-paths and +** --exec-rel-paths command line options to determine whether +** certain external commands are executed using relative paths. +*/ +static int determine_exec_relative_option(int force){ + static int relativePaths = -1; + if( force || relativePaths==-1 ){ + int relPathOption = find_option("exec-rel-paths", 0, 0)!=0; + int absPathOption = find_option("exec-abs-paths", 0, 0)!=0; +#if defined(FOSSIL_ENABLE_EXEC_REL_PATHS) + relativePaths = db_get_boolean("exec-rel-paths", 1); #else - /* On unix, evaluate the command directly. - */ - rc = system(zOrigCmd); -#endif - return rc; + relativePaths = db_get_boolean("exec-rel-paths", 0); +#endif + if( relPathOption ){ relativePaths = 1; } + if( absPathOption ){ relativePaths = 0; } + } + return relativePaths; +} + +#if INTERFACE +/* +** An array of FileDirList objects describe the files and directories listed +** on the command line of a "diff" command. Only those objects listed are +** actually diffed. +*/ +struct FileDirList { + int nUsed; /* Number of times each entry is used */ + int nName; /* Length of the entry */ + char *zName; /* Text of the entry */ +}; +#endif + +/* +** Return true if zFile is a file named on the azInclude[] list or is +** a file in a directory named on the azInclude[] list. +** +** if azInclude is NULL, then always include zFile. 
+*/ +static int file_dir_match(FileDirList *p, const char *zFile){ + if( p==0 || strcmp(p->zName,".")==0 ) return 1; + if( filenames_are_case_sensitive() ){ + while( p->zName ){ + if( strcmp(zFile, p->zName)==0 + || (strncmp(zFile, p->zName, p->nName)==0 + && zFile[p->nName]=='/') + ){ + break; + } + p++; + } + }else{ + while( p->zName ){ + if( fossil_stricmp(zFile, p->zName)==0 + || (fossil_strnicmp(zFile, p->zName, p->nName)==0 + && zFile[p->nName]=='/') + ){ + break; + } + p++; + } + } + if( p->zName ){ + p->nUsed++; + return 1; + } + return 0; +} + +/* +** Print the "Index:" message that patches wants to see at the top of a diff. +*/ +void diff_print_index(const char *zFile, u64 diffFlags){ + if( (diffFlags & (DIFF_SIDEBYSIDE|DIFF_BRIEF))==0 ){ + char *z = mprintf("Index: %s\n%.66c\n", zFile, '='); + fossil_print("%s", z); + fossil_free(z); + } +} + +/* +** Print the +++/--- filename lines for a diff operation. +*/ +void diff_print_filenames(const char *zLeft, const char *zRight, u64 diffFlags){ + char *z = 0; + if( diffFlags & DIFF_BRIEF ){ + /* no-op */ + }else if( diffFlags & DIFF_SIDEBYSIDE ){ + int w = diff_width(diffFlags); + int n1 = strlen(zLeft); + int n2 = strlen(zRight); + int x; + if( n1==n2 && fossil_strcmp(zLeft,zRight)==0 ){ + if( n1>w*2 ) n1 = w*2; + x = w*2+17 - (n1+2); + z = mprintf("%.*c %.*s %.*c\n", + x/2, '=', n1, zLeft, (x+1)/2, '='); + }else{ + if( w<20 ) w = 20; + if( n1>w-10 ) n1 = w - 10; + if( n2>w-10 ) n2 = w - 10; + z = mprintf("%.*c %.*s %.*c versus %.*c %.*s %.*c\n", + (w-n1+10)/2, '=', n1, zLeft, (w-n1+1)/2, '=', + (w-n2)/2, '=', n2, zRight, (w-n2+1)/2, '='); + } + }else{ + z = mprintf("--- %s\n+++ %s\n", zLeft, zRight); + } + fossil_print("%s", z); + fossil_free(z); } /* ** Show the difference between two files, one in memory and one on disk. ** @@ -68,46 +154,103 @@ ** The difference is the set of edits needed to transform pFile1 into ** zFile2. The content of pFile1 is in memory. zFile2 exists on disk. ** ** Use the internal diff logic if zDiffCmd is NULL. Otherwise call the ** command zDiffCmd to do the diffing. +** +** When using an external diff program, zBinGlob contains the GLOB patterns +** for file names to treat as binary. If fIncludeBinary is zero, these files +** will be skipped in addition to files that may contain binary content. 
*/ -static void diff_file( +void diff_file( Blob *pFile1, /* In memory content to compare from */ + int isBin1, /* Does the 'from' content appear to be binary */ const char *zFile2, /* On disk content to compare to */ const char *zName, /* Display name of the file */ - const char *zDiffCmd /* Command for comparison */ + const char *zDiffCmd, /* Command for comparison */ + const char *zBinGlob, /* Treat file names matching this as binary */ + int fIncludeBinary, /* Include binary files for external diff */ + u64 diffFlags /* Flags to control the diff */ ){ if( zDiffCmd==0 ){ - Blob out; /* Diff output text */ - Blob file2; /* Content of zFile2 */ + Blob out; /* Diff output text */ + Blob file2; /* Content of zFile2 */ + const char *zName2; /* Name of zFile2 for display */ /* Read content of zFile2 into memory */ blob_zero(&file2); - blob_read_from_file(&file2, zFile2); + if( file_wd_size(zFile2)<0 ){ + zName2 = NULL_DEVICE; + }else{ + if( file_wd_islink(0) ){ + blob_read_link(&file2, zFile2); + }else{ + blob_read_from_file(&file2, zFile2); + } + zName2 = zName; + } /* Compute and output the differences */ - blob_zero(&out); - text_diff(pFile1, &file2, &out, 5); - printf("--- %s\n+++ %s\n", zName, zName); - printf("%s\n", blob_str(&out)); + if( diffFlags & DIFF_BRIEF ){ + if( blob_compare(pFile1, &file2) ){ + fossil_print("CHANGED %s\n", zName); + } + }else{ + blob_zero(&out); + text_diff(pFile1, &file2, &out, 0, diffFlags); + if( blob_size(&out) ){ + diff_print_filenames(zName, zName2, diffFlags); + fossil_print("%s\n", blob_str(&out)); + } + blob_reset(&out); + } /* Release memory resources */ blob_reset(&file2); - blob_reset(&out); }else{ int cnt = 0; Blob nameFile1; /* Name of temporary file to old pFile1 content */ Blob cmd; /* Text of command to run */ + + if( !fIncludeBinary ){ + Blob file2; + if( isBin1 ){ + fossil_print("%s",DIFF_CANNOT_COMPUTE_BINARY); + return; + } + if( zBinGlob ){ + Glob *pBinary = glob_create(zBinGlob); + if( glob_match(pBinary, zName) ){ + fossil_print("%s",DIFF_CANNOT_COMPUTE_BINARY); + glob_free(pBinary); + return; + } + glob_free(pBinary); + } + blob_zero(&file2); + if( file_wd_size(zFile2)>=0 ){ + if( file_wd_islink(0) ){ + blob_read_link(&file2, zFile2); + }else{ + blob_read_from_file(&file2, zFile2); + } + } + if( looks_like_binary(&file2) ){ + fossil_print("%s",DIFF_CANNOT_COMPUTE_BINARY); + blob_reset(&file2); + return; + } + blob_reset(&file2); + } /* Construct a temporary file to hold pFile1 based on the name of ** zFile2 */ blob_zero(&nameFile1); do{ blob_reset(&nameFile1); blob_appendf(&nameFile1, "%s~%d", zFile2, cnt++); - }while( access(blob_str(&nameFile1),0)==0 ); + }while( file_access(blob_str(&nameFile1),F_OK)==0 ); blob_write_to_file(pFile1, blob_str(&nameFile1)); /* Construct the external diff command */ blob_zero(&cmd); blob_appendf(&cmd, "%s ", zDiffCmd); @@ -114,14 +257,14 @@ shell_escape(&cmd, blob_str(&nameFile1)); blob_append(&cmd, " ", 1); shell_escape(&cmd, zFile2); /* Run the external diff command */ - portable_system(blob_str(&cmd)); + fossil_system(blob_str(&cmd)); /* Delete the temporary file and clean up memory used */ - unlink(blob_str(&nameFile1)); + file_delete(blob_str(&nameFile1)); blob_reset(&nameFile1); blob_reset(&cmd); } } @@ -131,31 +274,57 @@ ** The difference is the set of edits needed to transform pFile1 into ** pFile2. ** ** Use the internal diff logic if zDiffCmd is NULL. Otherwise call the ** command zDiffCmd to do the diffing. 
+** +** When using an external diff program, zBinGlob contains the GLOB patterns +** for file names to treat as binary. If fIncludeBinary is zero, these files +** will be skipped in addition to files that may contain binary content. */ -static void diff_file_mem( +void diff_file_mem( Blob *pFile1, /* In memory content to compare from */ Blob *pFile2, /* In memory content to compare to */ + int isBin1, /* Does the 'from' content appear to be binary */ + int isBin2, /* Does the 'to' content appear to be binary */ const char *zName, /* Display name of the file */ - const char *zDiffCmd /* Command for comparison */ + const char *zDiffCmd, /* Command for comparison */ + const char *zBinGlob, /* Treat file names matching this as binary */ + int fIncludeBinary, /* Include binary files for external diff */ + u64 diffFlags /* Diff flags */ ){ + if( diffFlags & DIFF_BRIEF ) return; if( zDiffCmd==0 ){ Blob out; /* Diff output text */ blob_zero(&out); - text_diff(pFile1, pFile2, &out, 5); - printf("--- %s\n+++ %s\n", zName, zName); - printf("%s\n", blob_str(&out)); + text_diff(pFile1, pFile2, &out, 0, diffFlags); + diff_print_filenames(zName, zName, diffFlags); + fossil_print("%s\n", blob_str(&out)); /* Release memory resources */ blob_reset(&out); }else{ Blob cmd; char zTemp1[300]; char zTemp2[300]; + + if( !fIncludeBinary ){ + if( isBin1 || isBin2 ){ + fossil_print("%s",DIFF_CANNOT_COMPUTE_BINARY); + return; + } + if( zBinGlob ){ + Glob *pBinary = glob_create(zBinGlob); + if( glob_match(pBinary, zName) ){ + fossil_print("%s",DIFF_CANNOT_COMPUTE_BINARY); + glob_free(pBinary); + return; + } + glob_free(pBinary); + } + } /* Construct a temporary file names */ file_tempname(sizeof(zTemp1), zTemp1); file_tempname(sizeof(zTemp2), zTemp2); blob_write_to_file(pFile1, zTemp1); @@ -167,163 +336,441 @@ shell_escape(&cmd, zTemp1); blob_append(&cmd, " ", 1); shell_escape(&cmd, zTemp2); /* Run the external diff command */ - portable_system(blob_str(&cmd)); + fossil_system(blob_str(&cmd)); /* Delete the temporary file and clean up memory used */ - unlink(zTemp1); - unlink(zTemp2); + file_delete(zTemp1); + file_delete(zTemp2); blob_reset(&cmd); } } -/* -** Do a diff against a single file named in g.argv[2] from version zFrom -** against the same file on disk. -*/ -static void diff_one_against_disk(const char *zFrom, const char *zDiffCmd){ - Blob fname; - Blob content; - file_tree_name(g.argv[2], &fname, 1); - historical_version_of_file(zFrom, blob_str(&fname), &content, 0); - diff_file(&content, g.argv[2], g.argv[2], zDiffCmd); - blob_reset(&content); - blob_reset(&fname); -} - /* ** Run a diff between the version zFrom and files on disk. zFrom might ** be NULL which means to simply show the difference between the edited ** files on disk and the check-out on which they are based. +** +** Use the internal diff logic if zDiffCmd is NULL. Otherwise call the +** command zDiffCmd to do the diffing. +** +** When using an external diff program, zBinGlob contains the GLOB patterns +** for file names to treat as binary. If fIncludeBinary is zero, these files +** will be skipped in addition to files that may contain binary content. */ -static void diff_all_against_disk(const char *zFrom, const char *zDiffCmd){ +static void diff_against_disk( + const char *zFrom, /* Version to difference from */ + const char *zDiffCmd, /* Use this diff command. 
NULL for built-in */ + const char *zBinGlob, /* Treat file names matching this as binary */ + int fIncludeBinary, /* Treat file names matching this as binary */ + u64 diffFlags, /* Flags controlling diff output */ + FileDirList *pFileDir /* Which files to diff */ +){ int vid; Blob sql; Stmt q; + int asNewFile; /* Treat non-existant files as empty files */ + asNewFile = (diffFlags & DIFF_VERBOSE)!=0; vid = db_lget_int("checkout", 0); - vfile_check_signature(vid, 1); + vfile_check_signature(vid, CKSIG_ENOTFILE); blob_zero(&sql); db_begin_transaction(); if( zFrom ){ - int rid = name_to_rid(zFrom); + int rid = name_to_typed_rid(zFrom, "ci"); if( !is_a_version(rid) ){ fossil_fatal("no such check-in: %s", zFrom); } load_vfile_from_rid(rid); - blob_appendf(&sql, - "SELECT v2.pathname, v2.deleted, v2.chnged, v2.rid==0, v1.rid" + blob_append_sql(&sql, + "SELECT v2.pathname, v2.deleted, v2.chnged, v2.rid==0, v1.rid, v1.islink" " FROM vfile v1, vfile v2 " " WHERE v1.pathname=v2.pathname AND v1.vid=%d AND v2.vid=%d" - " AND (v2.deleted OR v2.chnged OR v1.rid!=v2.rid)" + " AND (v2.deleted OR v2.chnged OR v1.mrid!=v2.rid)" "UNION " - "SELECT pathname, 1, 0, 0, 0" + "SELECT pathname, 1, 0, 0, 0, islink" " FROM vfile v1" " WHERE v1.vid=%d" " AND NOT EXISTS(SELECT 1 FROM vfile v2" " WHERE v2.vid=%d AND v2.pathname=v1.pathname)" "UNION " - "SELECT pathname, 0, 0, 1, 0" + "SELECT pathname, 0, 0, 1, 0, islink" " FROM vfile v2" " WHERE v2.vid=%d" " AND NOT EXISTS(SELECT 1 FROM vfile v1" " WHERE v1.vid=%d AND v1.pathname=v2.pathname)" - " ORDER BY 1", + " ORDER BY 1 /*scan*/", rid, vid, rid, vid, vid, rid ); }else{ - blob_appendf(&sql, - "SELECT pathname, deleted, chnged , rid==0, rid" + blob_append_sql(&sql, + "SELECT pathname, deleted, chnged , rid==0, rid, islink" " FROM vfile" " WHERE vid=%d" " AND (deleted OR chnged OR rid==0)" - " ORDER BY pathname", + " ORDER BY pathname /*scan*/", vid ); } - db_prepare(&q, blob_str(&sql)); + db_prepare(&q, "%s", blob_sql_text(&sql)); while( db_step(&q)==SQLITE_ROW ){ const char *zPathname = db_column_text(&q,0); int isDeleted = db_column_int(&q, 1); int isChnged = db_column_int(&q,2); int isNew = db_column_int(&q,3); - char *zFullName = mprintf("%s%s", g.zLocalRoot, zPathname); - if( isDeleted ){ - printf("DELETED %s\n", zPathname); - }else if( access(zFullName, 0) ){ - printf("MISSING %s\n", zPathname); - }else if( isNew ){ - printf("ADDED %s\n", zPathname); - }else if( isDeleted ){ - printf("DELETED %s\n", zPathname); - }else if( isChnged==3 ){ - printf("ADDED_BY_MERGE %s\n", zPathname); + int srcid = db_column_int(&q, 4); + int isLink = db_column_int(&q, 5); + const char *zFullName; + int showDiff = 1; + Blob fname; + + if( !file_dir_match(pFileDir, zPathname) ) continue; + if( determine_exec_relative_option(0) ){ + blob_zero(&fname); + file_relative_name(zPathname, &fname, 1); }else{ - int srcid = db_column_int(&q, 4); + blob_set(&fname, g.zLocalRoot); + blob_append(&fname, zPathname, -1); + } + zFullName = blob_str(&fname); + if( isDeleted ){ + fossil_print("DELETED %s\n", zPathname); + if( !asNewFile ){ showDiff = 0; zFullName = NULL_DEVICE; } + }else if( file_access(zFullName, F_OK) ){ + fossil_print("MISSING %s\n", zPathname); + if( !asNewFile ){ showDiff = 0; } + }else if( isNew ){ + fossil_print("ADDED %s\n", zPathname); + srcid = 0; + if( !asNewFile ){ showDiff = 0; } + }else if( isChnged==3 ){ + fossil_print("ADDED_BY_MERGE %s\n", zPathname); + srcid = 0; + if( !asNewFile ){ showDiff = 0; } + }else if( isChnged==5 ){ + fossil_print("ADDED_BY_INTEGRATE %s\n", 
zPathname); + srcid = 0; + if( !asNewFile ){ showDiff = 0; } + } + if( showDiff ){ Blob content; - content_get(srcid, &content); - printf("Index: %s\n=======================================" - "============================\n", - zPathname - ); - diff_file(&content, zFullName, zPathname, zDiffCmd); + int isBin; + if( !isLink != !file_wd_islink(zFullName) ){ + diff_print_index(zPathname, diffFlags); + diff_print_filenames(zPathname, zPathname, diffFlags); + fossil_print("%s",DIFF_CANNOT_COMPUTE_SYMLINK); + continue; + } + if( srcid>0 ){ + content_get(srcid, &content); + }else{ + blob_zero(&content); + } + isBin = fIncludeBinary ? 0 : looks_like_binary(&content); + diff_print_index(zPathname, diffFlags); + diff_file(&content, isBin, zFullName, zPathname, zDiffCmd, + zBinGlob, fIncludeBinary, diffFlags); blob_reset(&content); } - free(zFullName); + blob_reset(&fname); + } + db_finalize(&q); + db_end_transaction(1); /* ROLLBACK */ +} + +/* +** Run a diff between the undo buffer and files on disk. +** +** Use the internal diff logic if zDiffCmd is NULL. Otherwise call the +** command zDiffCmd to do the diffing. +** +** When using an external diff program, zBinGlob contains the GLOB patterns +** for file names to treat as binary. If fIncludeBinary is zero, these files +** will be skipped in addition to files that may contain binary content. +*/ +static void diff_against_undo( + const char *zDiffCmd, /* Use this diff command. NULL for built-in */ + const char *zBinGlob, /* Treat file names matching this as binary */ + int fIncludeBinary, /* Treat file names matching this as binary */ + u64 diffFlags, /* Flags controlling diff output */ + FileDirList *pFileDir /* List of files and directories to diff */ +){ + Stmt q; + Blob content; + db_prepare(&q, "SELECT pathname, content FROM undo"); + blob_init(&content, 0, 0); + while( db_step(&q)==SQLITE_ROW ){ + char *zFullName; + const char *zFile = (const char*)db_column_text(&q, 0); + if( !file_dir_match(pFileDir, zFile) ) continue; + zFullName = mprintf("%s%s", g.zLocalRoot, zFile); + db_column_blob(&q, 1, &content); + diff_file(&content, 0, zFullName, zFile, + zDiffCmd, zBinGlob, fIncludeBinary, diffFlags); + fossil_free(zFullName); + blob_reset(&content); } db_finalize(&q); - db_end_transaction(1); } /* -** Output the differences between two versions of a single file. -** zFrom and zTo are the check-ins containing the two file versions. -** The filename is contained in g.argv[2]. +** Show the difference between two files identified by ManifestFile +** entries. +** +** Use the internal diff logic if zDiffCmd is NULL. Otherwise call the +** command zDiffCmd to do the diffing. +** +** When using an external diff program, zBinGlob contains the GLOB patterns +** for file names to treat as binary. If fIncludeBinary is zero, these files +** will be skipped in addition to files that may contain binary content. 
*/ -static void diff_one_two_versions( - const char *zFrom, - const char *zTo, - const char *zDiffCmd +static void diff_manifest_entry( + struct ManifestFile *pFrom, + struct ManifestFile *pTo, + const char *zDiffCmd, + const char *zBinGlob, + int fIncludeBinary, + u64 diffFlags ){ - char *zName; - Blob fname; - Blob v1, v2; - file_tree_name(g.argv[2], &fname, 1); - zName = blob_str(&fname); - historical_version_of_file(zFrom, zName, &v1, 0); - historical_version_of_file(zTo, zName, &v2, 0); - diff_file_mem(&v1, &v2, zName, zDiffCmd); - blob_reset(&v1); - blob_reset(&v2); - blob_reset(&fname); + Blob f1, f2; + int isBin1, isBin2; + int rid; + const char *zName; + if( pFrom ){ + zName = pFrom->zName; + }else if( pTo ){ + zName = pTo->zName; + }else{ + zName = DIFF_NO_NAME; + } + if( diffFlags & DIFF_BRIEF ) return; + diff_print_index(zName, diffFlags); + if( pFrom ){ + rid = uuid_to_rid(pFrom->zUuid, 0); + content_get(rid, &f1); + }else{ + blob_zero(&f1); + } + if( pTo ){ + rid = uuid_to_rid(pTo->zUuid, 0); + content_get(rid, &f2); + }else{ + blob_zero(&f2); + } + isBin1 = fIncludeBinary ? 0 : looks_like_binary(&f1); + isBin2 = fIncludeBinary ? 0 : looks_like_binary(&f2); + diff_file_mem(&f1, &f2, isBin1, isBin2, zName, zDiffCmd, + zBinGlob, fIncludeBinary, diffFlags); + blob_reset(&f1); + blob_reset(&f2); } /* ** Output the differences between two check-ins. +** +** Use the internal diff logic if zDiffCmd is NULL. Otherwise call the +** command zDiffCmd to do the diffing. +** +** When using an external diff program, zBinGlob contains the GLOB patterns +** for file names to treat as binary. If fIncludeBinary is zero, these files +** will be skipped in addition to files that may contain binary content. */ -static void diff_all_two_versions( +static void diff_two_versions( const char *zFrom, const char *zTo, - const char *zDiffCmd + const char *zDiffCmd, + const char *zBinGlob, + int fIncludeBinary, + u64 diffFlags, + FileDirList *pFileDir ){ + Manifest *pFrom, *pTo; + ManifestFile *pFromFile, *pToFile; + int asNewFlag = (diffFlags & DIFF_VERBOSE)!=0 ? 
1 : 0; + + pFrom = manifest_get_by_name(zFrom, 0); + manifest_file_rewind(pFrom); + pFromFile = manifest_file_next(pFrom,0); + pTo = manifest_get_by_name(zTo, 0); + manifest_file_rewind(pTo); + pToFile = manifest_file_next(pTo,0); + + while( pFromFile || pToFile ){ + int cmp; + if( pFromFile==0 ){ + cmp = +1; + }else if( pToFile==0 ){ + cmp = -1; + }else{ + cmp = fossil_strcmp(pFromFile->zName, pToFile->zName); + } + if( cmp<0 ){ + if( file_dir_match(pFileDir, pFromFile->zName) ){ + fossil_print("DELETED %s\n", pFromFile->zName); + if( asNewFlag ){ + diff_manifest_entry(pFromFile, 0, zDiffCmd, zBinGlob, + fIncludeBinary, diffFlags); + } + } + pFromFile = manifest_file_next(pFrom,0); + }else if( cmp>0 ){ + if( file_dir_match(pFileDir, pToFile->zName) ){ + fossil_print("ADDED %s\n", pToFile->zName); + if( asNewFlag ){ + diff_manifest_entry(0, pToFile, zDiffCmd, zBinGlob, + fIncludeBinary, diffFlags); + } + } + pToFile = manifest_file_next(pTo,0); + }else if( fossil_strcmp(pFromFile->zUuid, pToFile->zUuid)==0 ){ + /* No changes */ + (void)file_dir_match(pFileDir, pFromFile->zName); /* Record name usage */ + pFromFile = manifest_file_next(pFrom,0); + pToFile = manifest_file_next(pTo,0); + }else{ + if( file_dir_match(pFileDir, pToFile->zName) ){ + if( diffFlags & DIFF_BRIEF ){ + fossil_print("CHANGED %s\n", pFromFile->zName); + }else{ + diff_manifest_entry(pFromFile, pToFile, zDiffCmd, zBinGlob, + fIncludeBinary, diffFlags); + } + } + pFromFile = manifest_file_next(pFrom,0); + pToFile = manifest_file_next(pTo,0); + } + } + manifest_destroy(pFrom); + manifest_destroy(pTo); +} + +/* +** Return the name of the external diff command, or return NULL if +** no external diff command is defined. +*/ +const char *diff_command_external(int guiDiff){ + const char *zDefault; + const char *zName; + + if( guiDiff ){ +#if defined(_WIN32) + zDefault = "WinDiff.exe"; +#else + zDefault = 0; +#endif + zName = "gdiff-command"; + }else{ + zDefault = 0; + zName = "diff-command"; + } + return db_get(zName, zDefault); +} + +/* +** Show diff output in a Tcl/Tk window, in response to the --tk option +** to the diff command. +** +** If fossil has direct access to a Tcl interpreter (either loaded +** dynamically through stubs or linked in statically), we can use it +** directly. Otherwise: +** (1) Write the Tcl/Tk script used for rendering into a temp file. +** (2) Invoke "tclsh" on the temp file using fossil_system(). +** (3) Delete the temp file. +*/ +void diff_tk(const char *zSubCmd, int firstArg){ + int i; + Blob script; + const char *zTempFile = 0; + char *zCmd; + blob_zero(&script); + blob_appendf(&script, "set fossilcmd {| \"%/\" %s --html -y -i -v", + g.nameOfExe, zSubCmd); + find_option("html",0,0); + find_option("side-by-side","y",0); + find_option("internal","i",0); + find_option("verbose","v",0); + /* The undocumented --script FILENAME option causes the Tk script to + ** be written into the FILENAME instead of being run. This is used + ** for testing and debugging. */ + zTempFile = find_option("script",0,1); + for(i=firstArg; i Width of lines in side-by-side diff +** -Z|--ignore-trailing-space Ignore changes to end-of-line whitespace */ void diff_cmd(void){ int isGDiff; /* True for gdiff. False for normal diff */ int isInternDiff; /* True for internal diff */ + int verboseFlag; /* True if -v or --verbose flag is used */ const char *zFrom; /* Source version number */ const char *zTo; /* Target version number */ + const char *zBranch; /* Branch to diff */ const char *zDiffCmd = 0; /* External diff command. 
NULL for internal diff */ + const char *zBinGlob = 0; /* Treat file names matching this as binary */ + int fIncludeBinary = 0; /* Include binary files for external diff */ + int againstUndo = 0; /* Diff against files in the undo buffer */ + u64 diffFlags = 0; /* Flags to control the DIFF */ + FileDirList *pFileDir = 0; /* Restrict the diff to these files */ + if( find_option("tk",0,0)!=0 ){ + diff_tk("diff", 2); + return; + } isGDiff = g.argv[1][0]=='g'; isInternDiff = find_option("internal","i",0)!=0; zFrom = find_option("from", "r", 1); zTo = find_option("to", 0, 1); - - if( zTo==0 ){ - db_must_be_within_tree(); - verify_all_options(); - if( !isInternDiff && g.argc==3 ){ - zDiffCmd = db_get(isGDiff ? "gdiff-command" : "diff-command", 0); - } - if( g.argc==3 ){ - diff_one_against_disk(zFrom, zDiffCmd); - }else{ - diff_all_against_disk(zFrom, zDiffCmd); - } + zBranch = find_option("branch", 0, 1); + againstUndo = find_option("undo",0,0)!=0; + diffFlags = diff_options(); + verboseFlag = find_option("verbose","v",0)!=0; + if( !verboseFlag ){ + verboseFlag = find_option("new-file","N",0)!=0; /* deprecated */ + } + if( verboseFlag ) diffFlags |= DIFF_VERBOSE; + if( againstUndo && (zFrom!=0 || zTo!=0 || zBranch!=0) ){ + fossil_fatal("cannot use --undo together with --from or --to or --branch"); + } + if( zBranch ){ + if( zTo || zFrom ){ + fossil_fatal("cannot use --from or --to with --branch"); + } + zTo = zBranch; + zFrom = mprintf("root:%s", zBranch); + } + if( zTo==0 || againstUndo ){ + db_must_be_within_tree(); }else if( zFrom==0 ){ fossil_fatal("must use --from if --to is present"); }else{ - db_find_and_open_repository(1); - verify_all_options(); - if( !isInternDiff && g.argc==3 ){ - zDiffCmd = db_get(isGDiff ? "gdiff-command" : "diff-command", 0); - } - if( g.argc==3 ){ - diff_one_two_versions(zFrom, zTo, zDiffCmd); - }else{ - fossil_fatal("--to on complete check-ins not yet implemented"); - diff_all_two_versions(zFrom, zTo, zDiffCmd); - } - } + db_find_and_open_repository(0, 0); + } + if( !isInternDiff ){ + zDiffCmd = diff_command_external(isGDiff); + } + zBinGlob = diff_get_binary_glob(); + fIncludeBinary = diff_include_binary_files(); + determine_exec_relative_option(1); + verify_all_options(); + if( g.argc>=3 ){ + int i; + Blob fname; + pFileDir = fossil_malloc( sizeof(*pFileDir) * (g.argc-1) ); + memset(pFileDir, 0, sizeof(*pFileDir) * (g.argc-1)); + for(i=2; i=n ){ - return 0; /* Plain text */ - } - for(i=0; i=aMime[i].size && memcmp(x, aMime[i].zPrefix, aMime[i].size)==0 ){ return aMime[i].zMimetype; } } return "unknown/unknown"; } + +/* A table of mimetypes based on file suffixes. 
+** Suffixes must be in sorted order so that we can do a binary +** search to find the mime-type +*/ +static const struct { + const char *zSuffix; /* The file suffix */ + int size; /* Length of the suffix */ + const char *zMimetype; /* The corresponding mimetype */ +} aMime[] = { + { "ai", 2, "application/postscript" }, + { "aif", 3, "audio/x-aiff" }, + { "aifc", 4, "audio/x-aiff" }, + { "aiff", 4, "audio/x-aiff" }, + { "arj", 3, "application/x-arj-compressed" }, + { "asc", 3, "text/plain" }, + { "asf", 3, "video/x-ms-asf" }, + { "asx", 3, "video/x-ms-asx" }, + { "au", 2, "audio/ulaw" }, + { "avi", 3, "video/x-msvideo" }, + { "bat", 3, "application/x-msdos-program" }, + { "bcpio", 5, "application/x-bcpio" }, + { "bin", 3, "application/octet-stream" }, + { "c", 1, "text/plain" }, + { "cc", 2, "text/plain" }, + { "ccad", 4, "application/clariscad" }, + { "cdf", 3, "application/x-netcdf" }, + { "class", 5, "application/octet-stream" }, + { "cod", 3, "application/vnd.rim.cod" }, + { "com", 3, "application/x-msdos-program" }, + { "cpio", 4, "application/x-cpio" }, + { "cpt", 3, "application/mac-compactpro" }, + { "cs", 2, "text/plain" }, + { "csh", 3, "application/x-csh" }, + { "css", 3, "text/css" }, + { "csv", 3, "text/csv" }, + { "dcr", 3, "application/x-director" }, + { "deb", 3, "application/x-debian-package" }, + { "dir", 3, "application/x-director" }, + { "dl", 2, "video/dl" }, + { "dms", 3, "application/octet-stream" }, + { "doc", 3, "application/msword" }, + { "docx", 4, "application/vnd.openxmlformats-" + "officedocument.wordprocessingml.document"}, + { "dot", 3, "application/msword" }, + { "dotx", 4, "application/vnd.openxmlformats-" + "officedocument.wordprocessingml.template"}, + { "drw", 3, "application/drafting" }, + { "dvi", 3, "application/x-dvi" }, + { "dwg", 3, "application/acad" }, + { "dxf", 3, "application/dxf" }, + { "dxr", 3, "application/x-director" }, + { "eps", 3, "application/postscript" }, + { "etx", 3, "text/x-setext" }, + { "exe", 3, "application/octet-stream" }, + { "ez", 2, "application/andrew-inset" }, + { "f", 1, "text/plain" }, + { "f90", 3, "text/plain" }, + { "fli", 3, "video/fli" }, + { "flv", 3, "video/flv" }, + { "gif", 3, "image/gif" }, + { "gl", 2, "video/gl" }, + { "gtar", 4, "application/x-gtar" }, + { "gz", 2, "application/x-gzip" }, + { "h", 1, "text/plain" }, + { "hdf", 3, "application/x-hdf" }, + { "hh", 2, "text/plain" }, + { "hqx", 3, "application/mac-binhex40" }, + { "htm", 3, "text/html" }, + { "html", 4, "text/html" }, + { "ice", 3, "x-conference/x-cooltalk" }, + { "ief", 3, "image/ief" }, + { "iges", 4, "model/iges" }, + { "igs", 3, "model/iges" }, + { "ips", 3, "application/x-ipscript" }, + { "ipx", 3, "application/x-ipix" }, + { "jad", 3, "text/vnd.sun.j2me.app-descriptor" }, + { "jar", 3, "application/java-archive" }, + { "jpe", 3, "image/jpeg" }, + { "jpeg", 4, "image/jpeg" }, + { "jpg", 3, "image/jpeg" }, + { "js", 2, "application/x-javascript" }, + { "kar", 3, "audio/midi" }, + { "latex", 5, "application/x-latex" }, + { "lha", 3, "application/octet-stream" }, + { "lsp", 3, "application/x-lisp" }, + { "lzh", 3, "application/octet-stream" }, + { "m", 1, "text/plain" }, + { "m3u", 3, "audio/x-mpegurl" }, + { "man", 3, "text/plain" }, + { "markdown", 8, "text/x-markdown" }, + { "md", 2, "text/x-markdown" }, + { "me", 2, "application/x-troff-me" }, + { "mesh", 4, "model/mesh" }, + { "mid", 3, "audio/midi" }, + { "midi", 4, "audio/midi" }, + { "mif", 3, "application/x-mif" }, + { "mime", 4, "www/mime" }, + { "mkd", 3, "text/x-markdown" }, + 
{ "mov", 3, "video/quicktime" }, + { "movie", 5, "video/x-sgi-movie" }, + { "mp2", 3, "audio/mpeg" }, + { "mp3", 3, "audio/mpeg" }, + { "mp4", 3, "video/mp4" }, + { "mpe", 3, "video/mpeg" }, + { "mpeg", 4, "video/mpeg" }, + { "mpg", 3, "video/mpeg" }, + { "mpga", 4, "audio/mpeg" }, + { "ms", 2, "application/x-troff-ms" }, + { "msh", 3, "model/mesh" }, + { "n", 1, "text/plain" }, + { "nc", 2, "application/x-netcdf" }, + { "oda", 3, "application/oda" }, + { "odp", 3, "application/vnd.oasis.opendocument.presentation" }, + { "ods", 3, "application/vnd.oasis.opendocument.spreadsheet" }, + { "odt", 3, "application/vnd.oasis.opendocument.text" }, + { "ogg", 3, "application/ogg" }, + { "ogm", 3, "application/ogg" }, + { "pbm", 3, "image/x-portable-bitmap" }, + { "pdb", 3, "chemical/x-pdb" }, + { "pdf", 3, "application/pdf" }, + { "pgm", 3, "image/x-portable-graymap" }, + { "pgn", 3, "application/x-chess-pgn" }, + { "pgp", 3, "application/pgp" }, + { "pl", 2, "application/x-perl" }, + { "pm", 2, "application/x-perl" }, + { "png", 3, "image/png" }, + { "pnm", 3, "image/x-portable-anymap" }, + { "pot", 3, "application/mspowerpoint" }, + { "potx", 4, "application/vnd.openxmlformats-" + "officedocument.presentationml.template"}, + { "ppm", 3, "image/x-portable-pixmap" }, + { "pps", 3, "application/mspowerpoint" }, + { "ppsx", 4, "application/vnd.openxmlformats-" + "officedocument.presentationml.slideshow"}, + { "ppt", 3, "application/mspowerpoint" }, + { "pptx", 4, "application/vnd.openxmlformats-" + "officedocument.presentationml.presentation"}, + { "ppz", 3, "application/mspowerpoint" }, + { "pre", 3, "application/x-freelance" }, + { "prt", 3, "application/pro_eng" }, + { "ps", 2, "application/postscript" }, + { "qt", 2, "video/quicktime" }, + { "ra", 2, "audio/x-realaudio" }, + { "ram", 3, "audio/x-pn-realaudio" }, + { "rar", 3, "application/x-rar-compressed" }, + { "ras", 3, "image/cmu-raster" }, + { "rgb", 3, "image/x-rgb" }, + { "rm", 2, "audio/x-pn-realaudio" }, + { "roff", 4, "application/x-troff" }, + { "rpm", 3, "audio/x-pn-realaudio-plugin" }, + { "rtf", 3, "text/rtf" }, + { "rtx", 3, "text/richtext" }, + { "scm", 3, "application/x-lotusscreencam" }, + { "set", 3, "application/set" }, + { "sgm", 3, "text/sgml" }, + { "sgml", 4, "text/sgml" }, + { "sh", 2, "application/x-sh" }, + { "shar", 4, "application/x-shar" }, + { "silo", 4, "model/mesh" }, + { "sit", 3, "application/x-stuffit" }, + { "skd", 3, "application/x-koan" }, + { "skm", 3, "application/x-koan" }, + { "skp", 3, "application/x-koan" }, + { "skt", 3, "application/x-koan" }, + { "smi", 3, "application/smil" }, + { "smil", 4, "application/smil" }, + { "snd", 3, "audio/basic" }, + { "sol", 3, "application/solids" }, + { "spl", 3, "application/x-futuresplash" }, + { "src", 3, "application/x-wais-source" }, + { "step", 4, "application/STEP" }, + { "stl", 3, "application/SLA" }, + { "stp", 3, "application/STEP" }, + { "sv4cpio", 7, "application/x-sv4cpio" }, + { "sv4crc", 6, "application/x-sv4crc" }, + { "svg", 3, "image/svg+xml" }, + { "swf", 3, "application/x-shockwave-flash" }, + { "t", 1, "application/x-troff" }, + { "tar", 3, "application/x-tar" }, + { "tcl", 3, "application/x-tcl" }, + { "tex", 3, "application/x-tex" }, + { "texi", 4, "application/x-texinfo" }, + { "texinfo", 7, "application/x-texinfo" }, + { "tgz", 3, "application/x-tar-gz" }, + { "th1", 3, "application/x-th1" }, + { "tif", 3, "image/tiff" }, + { "tiff", 4, "image/tiff" }, + { "tr", 2, "application/x-troff" }, + { "tsi", 3, "audio/TSP-audio" }, + { "tsp", 3, 
"application/dsptype" }, + { "tsv", 3, "text/tab-separated-values" }, + { "txt", 3, "text/plain" }, + { "unv", 3, "application/i-deas" }, + { "ustar", 5, "application/x-ustar" }, + { "vb", 2, "text/plain" }, + { "vcd", 3, "application/x-cdlink" }, + { "vda", 3, "application/vda" }, + { "viv", 3, "video/vnd.vivo" }, + { "vivo", 4, "video/vnd.vivo" }, + { "vrml", 4, "model/vrml" }, + { "wav", 3, "audio/x-wav" }, + { "wax", 3, "audio/x-ms-wax" }, + { "wiki", 4, "text/x-fossil-wiki" }, + { "wma", 3, "audio/x-ms-wma" }, + { "wmv", 3, "video/x-ms-wmv" }, + { "wmx", 3, "video/x-ms-wmx" }, + { "wrl", 3, "model/vrml" }, + { "wvx", 3, "video/x-ms-wvx" }, + { "xbm", 3, "image/x-xbitmap" }, + { "xlc", 3, "application/vnd.ms-excel" }, + { "xll", 3, "application/vnd.ms-excel" }, + { "xlm", 3, "application/vnd.ms-excel" }, + { "xls", 3, "application/vnd.ms-excel" }, + { "xlsx", 4, "application/vnd.openxmlformats-" + "officedocument.spreadsheetml.sheet"}, + { "xlw", 3, "application/vnd.ms-excel" }, + { "xml", 3, "text/xml" }, + { "xpm", 3, "image/x-xpixmap" }, + { "xwd", 3, "image/x-xwindowdump" }, + { "xyz", 3, "chemical/x-pdb" }, + { "zip", 3, "application/zip" }, +}; + +/* +** Verify that all entries in the aMime[] table are in sorted order. +** Abort with a fatal error if any is out-of-order. +*/ +static void mimetype_verify(void){ + int i; + for(i=1; i=0 ){ + fossil_fatal("mimetypes out of sequence: %s before %s", + aMime[i-1].zSuffix, aMime[i].zSuffix); + } + } +} /* ** Guess the mime-type of a document based on its name. */ const char *mimetype_from_name(const char *zName){ @@ -82,224 +309,35 @@ int i; int first, last; int len; char zSuffix[20]; - /* A table of mimetypes based on file suffixes. - ** Suffixes must be in sorted order so that we can do a binary - ** search to find the mime-type + +#ifdef FOSSIL_DEBUG + /* This is test code to make sure the table above is in the correct + ** order */ - static const struct { - const char *zSuffix; /* The file suffix */ - int size; /* Length of the suffix */ - const char *zMimetype; /* The corresponding mimetype */ - } aMime[] = { - { "ai", 2, "application/postscript" }, - { "aif", 3, "audio/x-aiff" }, - { "aifc", 4, "audio/x-aiff" }, - { "aiff", 4, "audio/x-aiff" }, - { "arj", 3, "application/x-arj-compressed" }, - { "asc", 3, "text/plain" }, - { "asf", 3, "video/x-ms-asf" }, - { "asx", 3, "video/x-ms-asx" }, - { "au", 2, "audio/ulaw" }, - { "avi", 3, "video/x-msvideo" }, - { "bat", 3, "application/x-msdos-program" }, - { "bcpio", 5, "application/x-bcpio" }, - { "bin", 3, "application/octet-stream" }, - { "c", 1, "text/plain" }, - { "cc", 2, "text/plain" }, - { "ccad", 4, "application/clariscad" }, - { "cdf", 3, "application/x-netcdf" }, - { "class", 5, "application/octet-stream" }, - { "cod", 3, "application/vnd.rim.cod" }, - { "com", 3, "application/x-msdos-program" }, - { "cpio", 4, "application/x-cpio" }, - { "cpt", 3, "application/mac-compactpro" }, - { "csh", 3, "application/x-csh" }, - { "css", 3, "text/css" }, - { "dcr", 3, "application/x-director" }, - { "deb", 3, "application/x-debian-package" }, - { "dir", 3, "application/x-director" }, - { "dl", 2, "video/dl" }, - { "dms", 3, "application/octet-stream" }, - { "doc", 3, "application/msword" }, - { "drw", 3, "application/drafting" }, - { "dvi", 3, "application/x-dvi" }, - { "dwg", 3, "application/acad" }, - { "dxf", 3, "application/dxf" }, - { "dxr", 3, "application/x-director" }, - { "eps", 3, "application/postscript" }, - { "etx", 3, "text/x-setext" }, - { "exe", 3, 
"application/octet-stream" }, - { "ez", 2, "application/andrew-inset" }, - { "f", 1, "text/plain" }, - { "f90", 3, "text/plain" }, - { "fli", 3, "video/fli" }, - { "flv", 3, "video/flv" }, - { "gif", 3, "image/gif" }, - { "gl", 2, "video/gl" }, - { "gtar", 4, "application/x-gtar" }, - { "gz", 2, "application/x-gzip" }, - { "hdf", 3, "application/x-hdf" }, - { "hh", 2, "text/plain" }, - { "hqx", 3, "application/mac-binhex40" }, - { "h", 1, "text/plain" }, - { "htm", 3, "text/html" }, - { "html", 4, "text/html" }, - { "ice", 3, "x-conference/x-cooltalk" }, - { "ief", 3, "image/ief" }, - { "iges", 4, "model/iges" }, - { "igs", 3, "model/iges" }, - { "ips", 3, "application/x-ipscript" }, - { "ipx", 3, "application/x-ipix" }, - { "jad", 3, "text/vnd.sun.j2me.app-descriptor" }, - { "jar", 3, "application/java-archive" }, - { "jpeg", 4, "image/jpeg" }, - { "jpe", 3, "image/jpeg" }, - { "jpg", 3, "image/jpeg" }, - { "js", 2, "application/x-javascript" }, - { "kar", 3, "audio/midi" }, - { "latex", 5, "application/x-latex" }, - { "lha", 3, "application/octet-stream" }, - { "lsp", 3, "application/x-lisp" }, - { "lzh", 3, "application/octet-stream" }, - { "m", 1, "text/plain" }, - { "m3u", 3, "audio/x-mpegurl" }, - { "man", 3, "application/x-troff-man" }, - { "me", 2, "application/x-troff-me" }, - { "mesh", 4, "model/mesh" }, - { "mid", 3, "audio/midi" }, - { "midi", 4, "audio/midi" }, - { "mif", 3, "application/x-mif" }, - { "mime", 4, "www/mime" }, - { "movie", 5, "video/x-sgi-movie" }, - { "mov", 3, "video/quicktime" }, - { "mp2", 3, "audio/mpeg" }, - { "mp2", 3, "video/mpeg" }, - { "mp3", 3, "audio/mpeg" }, - { "mpeg", 4, "video/mpeg" }, - { "mpe", 3, "video/mpeg" }, - { "mpga", 4, "audio/mpeg" }, - { "mpg", 3, "video/mpeg" }, - { "ms", 2, "application/x-troff-ms" }, - { "msh", 3, "model/mesh" }, - { "nc", 2, "application/x-netcdf" }, - { "oda", 3, "application/oda" }, - { "ogg", 3, "application/ogg" }, - { "ogm", 3, "application/ogg" }, - { "pbm", 3, "image/x-portable-bitmap" }, - { "pdb", 3, "chemical/x-pdb" }, - { "pdf", 3, "application/pdf" }, - { "pgm", 3, "image/x-portable-graymap" }, - { "pgn", 3, "application/x-chess-pgn" }, - { "pgp", 3, "application/pgp" }, - { "pl", 2, "application/x-perl" }, - { "pm", 2, "application/x-perl" }, - { "png", 3, "image/png" }, - { "pnm", 3, "image/x-portable-anymap" }, - { "pot", 3, "application/mspowerpoint" }, - { "ppm", 3, "image/x-portable-pixmap" }, - { "pps", 3, "application/mspowerpoint" }, - { "ppt", 3, "application/mspowerpoint" }, - { "ppz", 3, "application/mspowerpoint" }, - { "pre", 3, "application/x-freelance" }, - { "prt", 3, "application/pro_eng" }, - { "ps", 2, "application/postscript" }, - { "qt", 2, "video/quicktime" }, - { "ra", 2, "audio/x-realaudio" }, - { "ram", 3, "audio/x-pn-realaudio" }, - { "rar", 3, "application/x-rar-compressed" }, - { "ras", 3, "image/cmu-raster" }, - { "ras", 3, "image/x-cmu-raster" }, - { "rgb", 3, "image/x-rgb" }, - { "rm", 2, "audio/x-pn-realaudio" }, - { "roff", 4, "application/x-troff" }, - { "rpm", 3, "audio/x-pn-realaudio-plugin" }, - { "rtf", 3, "application/rtf" }, - { "rtf", 3, "text/rtf" }, - { "rtx", 3, "text/richtext" }, - { "scm", 3, "application/x-lotusscreencam" }, - { "set", 3, "application/set" }, - { "sgml", 4, "text/sgml" }, - { "sgm", 3, "text/sgml" }, - { "sh", 2, "application/x-sh" }, - { "shar", 4, "application/x-shar" }, - { "silo", 4, "model/mesh" }, - { "sit", 3, "application/x-stuffit" }, - { "skd", 3, "application/x-koan" }, - { "skm", 3, "application/x-koan" }, - { "skp", 3, 
"application/x-koan" }, - { "skt", 3, "application/x-koan" }, - { "smi", 3, "application/smil" }, - { "smil", 4, "application/smil" }, - { "snd", 3, "audio/basic" }, - { "sol", 3, "application/solids" }, - { "spl", 3, "application/x-futuresplash" }, - { "src", 3, "application/x-wais-source" }, - { "step", 4, "application/STEP" }, - { "stl", 3, "application/SLA" }, - { "stp", 3, "application/STEP" }, - { "sv4cpio", 7, "application/x-sv4cpio" }, - { "sv4crc", 6, "application/x-sv4crc" }, - { "swf", 3, "application/x-shockwave-flash" }, - { "t", 1, "application/x-troff" }, - { "tar", 3, "application/x-tar" }, - { "tcl", 3, "application/x-tcl" }, - { "tex", 3, "application/x-tex" }, - { "texi", 4, "application/x-texinfo" }, - { "texinfo", 7, "application/x-texinfo" }, - { "tgz", 3, "application/x-tar-gz" }, - { "tiff", 4, "image/tiff" }, - { "tif", 3, "image/tiff" }, - { "tr", 2, "application/x-troff" }, - { "tsi", 3, "audio/TSP-audio" }, - { "tsp", 3, "application/dsptype" }, - { "tsv", 3, "text/tab-separated-values" }, - { "txt", 3, "text/plain" }, - { "unv", 3, "application/i-deas" }, - { "ustar", 5, "application/x-ustar" }, - { "vcd", 3, "application/x-cdlink" }, - { "vda", 3, "application/vda" }, - { "viv", 3, "video/vnd.vivo" }, - { "vivo", 4, "video/vnd.vivo" }, - { "vrml", 4, "model/vrml" }, - { "wav", 3, "audio/x-wav" }, - { "wax", 3, "audio/x-ms-wax" }, - { "wiki", 4, "application/x-fossil-wiki" }, - { "wma", 3, "audio/x-ms-wma" }, - { "wmv", 3, "video/x-ms-wmv" }, - { "wmx", 3, "video/x-ms-wmx" }, - { "wrl", 3, "model/vrml" }, - { "wvx", 3, "video/x-ms-wvx" }, - { "xbm", 3, "image/x-xbitmap" }, - { "xlc", 3, "application/vnd.ms-excel" }, - { "xll", 3, "application/vnd.ms-excel" }, - { "xlm", 3, "application/vnd.ms-excel" }, - { "xls", 3, "application/vnd.ms-excel" }, - { "xlw", 3, "application/vnd.ms-excel" }, - { "xml", 3, "text/xml" }, - { "xpm", 3, "image/x-xpixmap" }, - { "xwd", 3, "image/x-xwindowdump" }, - { "xyz", 3, "chemical/x-pdb" }, - { "zip", 3, "application/zip" }, - }; + if( fossil_strcmp(zName, "mimetype-test")==0 ){ + mimetype_verify(); + return "ok"; + } +#endif z = zName; for(i=0; zName[i]; i++){ if( zName[i]=='.' ) z = &zName[i+1]; } len = strlen(z); if( len %s\n", g.argv[i], mimetype_from_name(g.argv[i])); + } +} + +/* +** WEBPAGE: mimetype_list +** +** Show the built-in table used to guess embedded document mimetypes +** from file suffixes. +*/ +void mimetype_list_page(void){ + int i; + mimetype_verify(); + style_header("Mimetype List"); + @

<p>The Fossil /doc page uses filename
+  @ suffixes and the following table to guess at the appropriate mimetype
+  @ for each document.</p>
+  @ <table id='mimeTable'>
+  @ <thead>
+  @ <tr><th>Suffix<th>Mimetype
+  @ </thead>
+  @ <tbody>
+  for(i=0; i<ArraySize(aMime); i++){
+    @ <tr><td>%h(aMime[i].zSuffix)<td>%h(aMime[i].zMimetype)
+  }
+  @ </tbody></table>
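  /* Editor's note: output_table_sorting_javascript() below presumably emits
  ** the client-side JavaScript that lets the #mimeTable table be re-sorted
  ** from its column headers; "tt" appears to describe the two columns as
  ** text, and 1 the column used for the initial sort. */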
+ output_table_sorting_javascript("mimeTable","tt",1); + style_footer(); +} + +/* +** Check to see if the file in the pContent blob is "embedded HTML". Return +** true if it is, and fill pTitle with the document title. +** +** An "embedded HTML" file is HTML that lacks a header and a footer. The +** standard Fossil header is prepended and the standard Fossil footer is +** appended. Otherwise, the file is displayed without change. +** +** Embedded HTML must be contained in a
<div class='fossil-doc'> element.
+** If that <div>
also contains a data-title attribute, then the +** value of that attribute is extracted into pTitle and becomes the title +** of the document. +*/ +int doc_is_embedded_html(Blob *pContent, Blob *pTitle){ + const char *zIn = blob_str(pContent); + const char *zAttr; + const char *zValue; + int nAttr, nValue; + int seenClass = 0; + int seenTitle = 0; + + while( fossil_isspace(zIn[0]) ) zIn++; + if( fossil_strnicmp(zIn,"' ) return 0; + zAttr = zIn; + while( fossil_isalnum(zIn[0]) || zIn[0]=='-' ) zIn++; + nAttr = (int)(zIn - zAttr); + while( fossil_isspace(zIn[0]) ) zIn++; + if( zIn[0]!='=' ) continue; + zIn++; + while( fossil_isspace(zIn[0]) ) zIn++; + if( zIn[0]=='"' || zIn[0]=='\'' ){ + char cDelim = zIn[0]; + zIn++; + zValue = zIn; + while( zIn[0] && zIn[0]!=cDelim ) zIn++; + if( zIn[0]==0 ) return 0; + nValue = (int)(zIn - zValue); + zIn++; + }else{ + zValue = zIn; + while( zIn[0]!=0 && zIn[0]!='>' && zIn[0]!='/' + && !fossil_isspace(zIn[0]) ) zIn++; + if( zIn[0]==0 ) return 0; + nValue = (int)(zIn - zValue); + } + if( nAttr==5 && fossil_strnicmp(zAttr,"class",5)==0 ){ + if( nValue!=10 || fossil_strnicmp(zValue,"fossil-doc",10)!=0 ) return 0; + seenClass = 1; + if( seenTitle ) return 1; + } + if( nAttr==10 && fossil_strnicmp(zAttr,"data-title",10)==0 ){ + blob_append(pTitle, zValue, nValue); + seenTitle = 1; + if( seenClass ) return 1; + } + } + return seenClass; +} + +/* +** Look for a file named zName in the check-in with RID=vid. Load the content +** of that file into pContent and return the RID for the file. Or return 0 +** if the file is not found or could not be loaded. +*/ +int doc_load_content(int vid, const char *zName, Blob *pContent){ + int rid; /* The RID of the file being loaded */ + if( !db_table_exists("repository","vcache") ){ + db_multi_exec( + "CREATE TABLE IF NOT EXISTS vcache(\n" + " vid INTEGER, -- check-in ID\n" + " fname TEXT, -- filename\n" + " rid INTEGER, -- artifact ID\n" + " PRIMARY KEY(vid,fname)\n" + ") WITHOUT ROWID" + ); + } + if( !db_exists("SELECT 1 FROM vcache WHERE vid=%d", vid) ){ + db_multi_exec( + "DELETE FROM vcache;\n" + "CREATE VIRTUAL TABLE IF NOT EXISTS temp.foci USING files_of_checkin;\n" + "INSERT INTO vcache(vid,fname,rid)" + " SELECT checkinID, filename, blob.rid FROM foci, blob" + " WHERE blob.uuid=foci.uuid" + " AND foci.checkinID=%d;", + vid + ); + } + rid = db_int(0, "SELECT rid FROM vcache" + " WHERE vid=%d AND fname=%Q", vid, zName); + if( rid && content_get(rid, pContent)==0 ){ + rid = 0; + } + return rid; +} + +/* +** Transfer content to the output. During the transfer, when text of +** the followign form is seen: +** +** href="$ROOT/ +** action="$ROOT/ +** +** Convert $ROOT to the root URI of the repository. Allow ' in place of " +** and any case for href. +*/ +static void convert_href_and_output(Blob *pIn){ + int i, base; + int n = blob_size(pIn); + char *z = blob_buffer(pIn); + for(base=0, i=7; i=9 + && (fossil_strnicmp(&z[i-7]," href=", 6)==0 || + fossil_strnicmp(&z[i-9]," action=", 8)==0) + ){ + blob_append(cgi_output_blob(), &z[base], i-base); + blob_appendf(cgi_output_blob(), "%R"); + base = i+5; + } + } + blob_append(cgi_output_blob(), &z[base], i-base); +} /* ** WEBPAGE: doc -** URL: /doc?name=BASELINE/PATH -** URL: /doc/BASELINE/PATH -** -** BASELINE can be either a baseline uuid prefix or magic words "tip" -** to me the most recently checked in baseline or "ckout" to mean the -** content of the local checkout, if any. PATH is the relative pathname -** of some file. This method returns the file content. 
-** -** If PATH matches the patterns *.wiki or *.txt then formatting content -** is added before returning the file. For all other names, the content -** is returned straight without any interpretation or processing. +** URL: /doc?name=CHECKIN/FILE +** URL: /doc/CHECKIN/FILE +** +** CHECKIN can be either tag or SHA1 hash or timestamp identifying a +** particular check, or the name of a branch (meaning the most recent +** check-in on that branch) or one of various magic words: +** +** "tip" means the most recent check-in +** +** "ckout" means the current check-out, if the server is run from +** within a check-out, otherwise it is the same as "tip" +** +** FILE is the name of a file to delivered up as a webpage. FILE is relative +** to the root of the source tree of the repository. The FILE must +** be a part of CHECKIN, except when CHECKIN=="ckout" when FILE is read +** directly from disk and need not be a managed file. +** +** The "ckout" CHECKIN is intended for development - to provide a mechanism +** for looking at what a file will look like using the /doc webpage after +** it gets checked in. +** +** The file extension is used to decide how to render the file. +** +** If FILE ends in "/" then the names "FILE/index.html", "FILE/index.wiki", +** and "FILE/index.md" are tried in that order. If the binary was compiled +** with TH1 embedded documentation support and the "th1-docs" setting is +** enabled, the name "FILE/index.th1" is also tried. If none of those are +** found, then FILE is completely replaced by "404.md" and tried. If that +** is not found, then a default 404 screen is generated. +** +** Headers and footers are added for text/x-fossil-wiki and text/md +** If the document has mimetype text/html then headers and footers are +** usually not added. However, a text/html document begins with the +** following div: +** +**
+** +** then headers and footers are supplied. The optional data-title field +** specifies the title of the document in that case. +** +** For fossil-doc documents and for markdown documents, text of the +** form: "href='$ROOT/" or "action='$ROOT" has the $ROOT name expanded +** to the top-level of the repository. */ void doc_page(void){ const char *zName; /* Argument to the /doc page */ + const char *zOrigName = "?"; /* Original document name */ const char *zMime; /* Document MIME type */ - int vid = 0; /* Artifact of baseline */ + char *zCheckin = "tip"; /* The check-in holding the document */ + int vid = 0; /* Artifact of check-in */ int rid = 0; /* Artifact of file */ int i; /* Loop counter */ Blob filebody; /* Content of the documentation file */ - char zBaseline[UUID_SIZE+1]; /* Baseline UUID */ - - login_check_credentials(); - if( !g.okRead ){ login_needed(); return; } - zName = PD("name", "tip/index.wiki"); - for(i=0; zName[i] && zName[i]!='/'; i++){} - if( zName[i]==0 || i>UUID_SIZE ){ - goto doc_not_found; - } - memcpy(zBaseline, zName, i); - zBaseline[i] = 0; - zName += i; - while( zName[0]=='/' ){ zName++; } - if( !file_is_simple_pathname(zName) ){ - goto doc_not_found; - } - if( strcmp(zBaseline,"ckout")==0 && db_open_local()==0 ){ - strcpy(zBaseline,"tip"); - } - if( strcmp(zBaseline,"ckout")==0 ){ - /* Read from the local checkout */ - char *zFullpath; - db_must_be_within_tree(); - zFullpath = mprintf("%s/%s", g.zLocalRoot, zName); - if( !file_isfile(zFullpath) ){ - goto doc_not_found; - } - if( blob_read_from_file(&filebody, zFullpath)<0 ){ - goto doc_not_found; - } - }else{ - db_begin_transaction(); - if( strcmp(zBaseline,"tip")==0 ){ - vid = db_int(0, "SELECT objid FROM event WHERE type='ci'" - " ORDER BY mtime DESC LIMIT 1"); - }else{ - vid = name_to_rid(zBaseline); - } - - /* Create the baseline cache if it does not already exist */ - db_multi_exec( - "CREATE TABLE IF NOT EXISTS vcache(\n" - " vid INTEGER, -- baseline ID\n" - " fname TEXT, -- filename\n" - " rid INTEGER, -- artifact ID\n" - " UNIQUE(vid,fname,rid)\n" - ")" - ); - - /* Check to see if the documentation file artifact ID is contained - ** in the baseline cache */ - rid = db_int(0, "SELECT rid FROM vcache" - " WHERE vid=%d AND fname=%Q", vid, zName); - if( rid==0 && db_exists("SELECT 1 FROM vcache WHERE vid=%d", vid) ){ - goto doc_not_found; - } - - if( rid==0 ){ - Stmt s; - Blob baseline; - Manifest m; - - /* Add the vid baseline to the cache */ - if( db_int(0, "SELECT count(*) FROM vcache")>10000 ){ - db_multi_exec("DELETE FROM vcache"); - } - if( content_get(vid, &baseline)==0 ){ - goto doc_not_found; - } - if( manifest_parse(&m, &baseline)==0 || m.type!=CFTYPE_MANIFEST ){ - goto doc_not_found; - } - db_prepare(&s, - "INSERT INTO vcache(vid,fname,rid)" - " SELECT %d, :fname, rid FROM blob" - " WHERE uuid=:uuid", - vid - ); - for(i=0; i=0 && nMiss=0 && nMiss0 ){ + rid = 1; /* Fake RID just to get the loop to end */ + } + fossil_free(zFullpath); + }else{ + vid = name_to_typed_rid(zCheckin, "ci"); + rid = doc_load_content(vid, zName, &filebody); + } + } + if( rid==0 ) goto doc_not_found; + blob_to_utf8_no_bom(&filebody, 0); + + /* The file is now contained in the filebody blob. Deliver the + ** file to the user + */ + zMime = nMiss==0 ? 
P("mimetype") : 0; if( zMime==0 ){ zMime = mimetype_from_name(zName); } - if( strcmp(zMime, "application/x-fossil-wiki")==0 ){ - Blob title, tail; + Th_Store("doc_name", zName); + Th_Store("doc_version", db_text(0, "SELECT '[' || substr(uuid,1,10) || ']'" + " FROM blob WHERE rid=%d", vid)); + Th_Store("doc_date", db_text(0, "SELECT datetime(mtime) FROM event" + " WHERE objid=%d AND type='ci'", vid)); + if( fossil_strcmp(zMime, "text/x-fossil-wiki")==0 ){ + Blob tail; + style_adunit_config(ADUNIT_RIGHT_OK); if( wiki_find_title(&filebody, &title, &tail) ){ - style_header(blob_str(&title)); - wiki_convert(&tail, 0, 0); + style_header("%s", blob_str(&title)); + wiki_convert(&tail, 0, WIKI_BUTTONS); }else{ style_header("Documentation"); - wiki_convert(&filebody, 0, 0); + wiki_convert(&filebody, 0, WIKI_BUTTONS); + } + style_footer(); + }else if( fossil_strcmp(zMime, "text/x-markdown")==0 ){ + Blob tail = BLOB_INITIALIZER; + markdown_to_html(&filebody, &title, &tail); + if( blob_size(&title)>0 ){ + style_header("%s", blob_str(&title)); + }else{ + style_header("%s", nMiss>=ArraySize(azSuffix)? + "Not Found" : "Documentation"); } + convert_href_and_output(&tail); style_footer(); - }else if( strcmp(zMime, "text/plain")==0 ){ + }else if( fossil_strcmp(zMime, "text/plain")==0 ){ style_header("Documentation"); @
     @ %h(blob_str(&filebody))
     @ 
style_footer(); + }else if( fossil_strcmp(zMime, "text/html")==0 + && doc_is_embedded_html(&filebody, &title) ){ + if( blob_size(&title)==0 ) blob_append(&title,zName,-1); + style_header("%s", blob_str(&title)); + convert_href_and_output(&filebody); + style_footer(); +#ifdef FOSSIL_ENABLE_TH1_DOCS + }else if( Th_AreDocsEnabled() && + fossil_strcmp(zMime, "application/x-th1")==0 ){ + style_header("%h", zName); + Th_Render(blob_str(&filebody)); + style_footer(); +#endif }else{ cgi_set_content_type(zMime); cgi_set_content(&filebody); } + if( nMiss>=ArraySize(azSuffix) ) cgi_set_status(404, "Not Found"); + db_end_transaction(0); return; -doc_not_found: /* Jump here when unable to locate the document */ +doc_not_found: db_end_transaction(0); - style_header("Document Not Found"); - @

No such document: %h(PD("name","tip/index.wiki"))

+  cgi_set_status(404, "Not Found");
+  style_header("Not Found");
+  @

Document %h(zOrigName) not found + if( fossil_strcmp(zCheckin,"ckout")!=0 ){ + @ in %z(href("%R/tree?ci=%T",zCheckin))%h(zCheckin) + } style_footer(); - return; + db_end_transaction(0); + return; } /* ** The default logo. */ static const unsigned char aLogo[] = { - 71, 73, 70, 56, 55, 97, 62, 0, 71, 0, 244, 0, 0, 85, - 129, 149, 95, 136, 155, 99, 139, 157, 106, 144, 162, 113, 150, 166, - 116, 152, 168, 127, 160, 175, 138, 168, 182, 148, 176, 188, 159, 184, - 195, 170, 192, 202, 180, 199, 208, 184, 202, 210, 191, 207, 215, 201, - 215, 221, 212, 223, 228, 223, 231, 235, 226, 227, 226, 226, 234, 237, - 233, 239, 241, 240, 244, 246, 244, 247, 248, 255, 255, 255, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 44, 0, 0, - 0, 0, 62, 0, 71, 0, 0, 5, 255, 96, 100, 141, 100, 105, - 158, 168, 37, 41, 132, 192, 164, 112, 44, 207, 102, 99, 0, 56, - 16, 84, 116, 239, 199, 141, 65, 110, 232, 248, 25, 141, 193, 161, - 82, 113, 108, 202, 32, 55, 229, 210, 73, 61, 41, 164, 88, 102, - 181, 10, 41, 96, 179, 91, 106, 35, 240, 5, 135, 143, 137, 242, - 87, 123, 246, 33, 190, 81, 108, 163, 237, 198, 14, 30, 113, 233, - 131, 78, 115, 72, 11, 115, 87, 101, 19, 124, 51, 66, 74, 8, - 19, 16, 67, 100, 74, 133, 50, 15, 101, 135, 56, 11, 74, 6, - 143, 49, 126, 106, 56, 8, 145, 67, 9, 152, 48, 139, 155, 5, - 22, 13, 74, 115, 161, 41, 147, 101, 13, 130, 57, 132, 170, 40, - 167, 155, 0, 94, 57, 3, 178, 48, 183, 181, 57, 160, 186, 40, - 19, 141, 189, 0, 69, 192, 40, 16, 195, 155, 185, 199, 41, 201, - 189, 191, 205, 193, 188, 131, 210, 49, 175, 88, 209, 214, 38, 19, - 3, 11, 19, 111, 127, 60, 219, 39, 55, 204, 19, 11, 6, 100, - 5, 10, 227, 228, 37, 163, 0, 239, 117, 56, 238, 243, 49, 195, - 177, 247, 48, 158, 56, 251, 50, 216, 254, 197, 56, 128, 107, 158, - 2, 125, 171, 114, 92, 218, 246, 96, 66, 3, 4, 50, 134, 176, - 145, 6, 97, 64, 144, 24, 19, 136, 108, 91, 177, 160, 0, 194, - 19, 253, 0, 216, 107, 214, 224, 192, 129, 5, 16, 83, 255, 244, - 43, 213, 195, 24, 159, 27, 169, 64, 230, 88, 208, 227, 129, 182, - 54, 4, 89, 158, 24, 181, 163, 199, 1, 155, 52, 233, 8, 130, - 176, 83, 24, 128, 137, 50, 18, 32, 48, 48, 114, 11, 173, 137, - 19, 110, 4, 64, 105, 1, 194, 30, 140, 68, 15, 24, 24, 224, - 50, 76, 70, 0, 11, 171, 54, 26, 160, 181, 194, 149, 148, 40, - 174, 148, 122, 64, 180, 208, 161, 17, 207, 112, 164, 1, 128, 96, - 148, 78, 18, 21, 194, 33, 229, 51, 247, 65, 133, 97, 5, 250, - 69, 229, 100, 34, 220, 128, 166, 116, 190, 62, 8, 167, 195, 170, - 47, 163, 0, 130, 90, 152, 11, 160, 173, 170, 27, 154, 26, 91, - 232, 151, 171, 18, 14, 162, 253, 98, 170, 18, 70, 171, 64, 219, - 10, 67, 136, 134, 187, 116, 75, 180, 46, 179, 174, 135, 4, 189, - 229, 231, 78, 40, 10, 62, 226, 164, 172, 64, 240, 167, 170, 10, - 18, 124, 188, 10, 107, 65, 193, 94, 11, 93, 171, 28, 248, 17, - 239, 46, 140, 78, 97, 34, 25, 153, 36, 99, 65, 130, 7, 203, - 183, 168, 51, 34, 136, 25, 140, 10, 6, 16, 28, 255, 145, 241, - 230, 140, 10, 66, 178, 167, 112, 48, 192, 128, 129, 9, 31, 141, - 84, 138, 63, 163, 162, 2, 203, 206, 240, 56, 55, 98, 192, 188, - 15, 185, 50, 160, 6, 0, 125, 62, 33, 214, 195, 33, 5, 24, - 184, 25, 231, 14, 201, 245, 144, 23, 126, 104, 228, 0, 145, 2, - 13, 140, 244, 212, 17, 21, 20, 176, 159, 17, 95, 225, 160, 128, - 16, 1, 32, 224, 142, 32, 227, 125, 87, 64, 0, 16, 54, 129, - 205, 2, 141, 76, 53, 130, 103, 37, 166, 64, 144, 107, 78, 196, - 5, 192, 0, 54, 50, 229, 9, 141, 49, 84, 194, 35, 12, 196, - 153, 48, 192, 137, 57, 84, 24, 7, 87, 159, 249, 240, 215, 143, - 105, 241, 118, 149, 9, 139, 4, 64, 
203, 141, 35, 140, 129, 131, - 16, 222, 125, 231, 128, 2, 238, 17, 152, 66, 3, 5, 56, 224, - 159, 103, 16, 76, 25, 75, 5, 11, 164, 215, 96, 9, 14, 16, - 36, 225, 15, 11, 40, 144, 192, 156, 41, 10, 178, 199, 3, 66, - 64, 80, 193, 3, 124, 90, 48, 129, 129, 102, 177, 18, 192, 154, - 49, 84, 240, 208, 92, 22, 149, 96, 39, 9, 31, 74, 17, 94, - 3, 8, 177, 199, 72, 59, 85, 76, 25, 216, 8, 139, 194, 197, - 138, 163, 69, 96, 115, 0, 147, 72, 72, 84, 28, 14, 79, 86, - 233, 230, 23, 113, 26, 160, 128, 3, 10, 58, 129, 103, 14, 159, - 214, 163, 146, 117, 238, 213, 154, 128, 151, 109, 84, 64, 217, 13, - 27, 10, 228, 39, 2, 235, 164, 168, 74, 8, 0, 59, + 71, 73, 70, 56, 55, 97, 62, 0, 71, 0, 244, 0, 0, 85, + 129, 149, 95, 136, 155, 99, 139, 157, 106, 144, 162, 113, 150, 166, + 116, 152, 168, 127, 160, 175, 138, 168, 182, 148, 176, 188, 159, 184, + 195, 170, 192, 202, 180, 199, 208, 184, 202, 210, 191, 207, 215, 201, + 215, 221, 212, 223, 228, 223, 231, 235, 226, 227, 226, 226, 234, 237, + 233, 239, 241, 240, 244, 246, 244, 247, 248, 255, 255, 255, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 44, 0, 0, + 0, 0, 62, 0, 71, 0, 0, 5, 255, 96, 100, 141, 100, 105, + 158, 168, 37, 41, 132, 192, 164, 112, 44, 207, 102, 99, 0, 56, + 16, 84, 116, 239, 199, 141, 65, 110, 232, 248, 25, 141, 193, 161, + 82, 113, 108, 202, 32, 55, 229, 210, 73, 61, 41, 164, 88, 102, + 181, 10, 41, 96, 179, 91, 106, 35, 240, 5, 135, 143, 137, 242, + 87, 123, 246, 33, 190, 81, 108, 163, 237, 198, 14, 30, 113, 233, + 131, 78, 115, 72, 11, 115, 87, 101, 19, 124, 51, 66, 74, 8, + 19, 16, 67, 100, 74, 133, 50, 15, 101, 135, 56, 11, 74, 6, + 143, 49, 126, 106, 56, 8, 145, 67, 9, 152, 48, 139, 155, 5, + 22, 13, 74, 115, 161, 41, 147, 101, 13, 130, 57, 132, 170, 40, + 167, 155, 0, 94, 57, 3, 178, 48, 183, 181, 57, 160, 186, 40, + 19, 141, 189, 0, 69, 192, 40, 16, 195, 155, 185, 199, 41, 201, + 189, 191, 205, 193, 188, 131, 210, 49, 175, 88, 209, 214, 38, 19, + 3, 11, 19, 111, 127, 60, 219, 39, 55, 204, 19, 11, 6, 100, + 5, 10, 227, 228, 37, 163, 0, 239, 117, 56, 238, 243, 49, 195, + 177, 247, 48, 158, 56, 251, 50, 216, 254, 197, 56, 128, 107, 158, + 2, 125, 171, 114, 92, 218, 246, 96, 66, 3, 4, 50, 134, 176, + 145, 6, 97, 64, 144, 24, 19, 136, 108, 91, 177, 160, 0, 194, + 19, 253, 0, 216, 107, 214, 224, 192, 129, 5, 16, 83, 255, 244, + 43, 213, 195, 24, 159, 27, 169, 64, 230, 88, 208, 227, 129, 182, + 54, 4, 89, 158, 24, 181, 163, 199, 1, 155, 52, 233, 8, 130, + 176, 83, 24, 128, 137, 50, 18, 32, 48, 48, 114, 11, 173, 137, + 19, 110, 4, 64, 105, 1, 194, 30, 140, 68, 15, 24, 24, 224, + 50, 76, 70, 0, 11, 171, 54, 26, 160, 181, 194, 149, 148, 40, + 174, 148, 122, 64, 180, 208, 161, 17, 207, 112, 164, 1, 128, 96, + 148, 78, 18, 21, 194, 33, 229, 51, 247, 65, 133, 97, 5, 250, + 69, 229, 100, 34, 220, 128, 166, 116, 190, 62, 8, 167, 195, 170, + 47, 163, 0, 130, 90, 152, 11, 160, 173, 170, 27, 154, 26, 91, + 232, 151, 171, 18, 14, 162, 253, 98, 170, 18, 70, 171, 64, 219, + 10, 67, 136, 134, 187, 116, 75, 180, 46, 179, 174, 135, 4, 189, + 229, 231, 78, 40, 10, 62, 226, 164, 172, 64, 240, 167, 170, 10, + 18, 124, 188, 10, 107, 65, 193, 94, 11, 93, 171, 28, 248, 17, + 239, 46, 140, 78, 97, 34, 25, 153, 36, 99, 65, 130, 7, 203, + 183, 168, 51, 34, 136, 25, 140, 10, 6, 16, 28, 255, 145, 241, + 230, 140, 10, 66, 178, 167, 112, 48, 192, 128, 129, 9, 31, 141, + 84, 138, 63, 163, 162, 2, 203, 206, 240, 56, 55, 98, 192, 188, + 15, 185, 50, 160, 6, 0, 125, 62, 33, 214, 195, 33, 5, 24, + 184, 25, 231, 14, 201, 245, 144, 
23, 126, 104, 228, 0, 145, 2, + 13, 140, 244, 212, 17, 21, 20, 176, 159, 17, 95, 225, 160, 128, + 16, 1, 32, 224, 142, 32, 227, 125, 87, 64, 0, 16, 54, 129, + 205, 2, 141, 76, 53, 130, 103, 37, 166, 64, 144, 107, 78, 196, + 5, 192, 0, 54, 50, 229, 9, 141, 49, 84, 194, 35, 12, 196, + 153, 48, 192, 137, 57, 84, 24, 7, 87, 159, 249, 240, 215, 143, + 105, 241, 118, 149, 9, 139, 4, 64, 203, 141, 35, 140, 129, 131, + 16, 222, 125, 231, 128, 2, 238, 17, 152, 66, 3, 5, 56, 224, + 159, 103, 16, 76, 25, 75, 5, 11, 164, 215, 96, 9, 14, 16, + 36, 225, 15, 11, 40, 144, 192, 156, 41, 10, 178, 199, 3, 66, + 64, 80, 193, 3, 124, 90, 48, 129, 129, 102, 177, 18, 192, 154, + 49, 84, 240, 208, 92, 22, 149, 96, 39, 9, 31, 74, 17, 94, + 3, 8, 177, 199, 72, 59, 85, 76, 25, 216, 8, 139, 194, 197, + 138, 163, 69, 96, 115, 0, 147, 72, 72, 84, 28, 14, 79, 86, + 233, 230, 23, 113, 26, 160, 128, 3, 10, 58, 129, 103, 14, 159, + 214, 163, 146, 117, 238, 213, 154, 128, 151, 109, 84, 64, 217, 13, + 27, 10, 228, 39, 2, 235, 164, 168, 74, 8, 0, 59, }; /* ** WEBPAGE: logo ** @@ -557,5 +799,58 @@ } cgi_set_content_type(zMime); cgi_set_content(&logo); g.isConst = 1; } + +/* +** The default background image: a 16x16 white GIF +*/ +static const unsigned char aBackground[] = { + 71, 73, 70, 56, 57, 97, 16, 0, 16, 0, + 240, 0, 0, 255, 255, 255, 0, 0, 0, 33, + 254, 4, 119, 105, 115, 104, 0, 44, 0, 0, + 0, 0, 16, 0, 16, 0, 0, 2, 14, 132, + 143, 169, 203, 237, 15, 163, 156, 180, 218, 139, + 179, 62, 5, 0, 59, +}; + + +/* +** WEBPAGE: background +** +** Return the background image. If no background image is defined, a +** built-in 16x16 pixel white GIF is returned. +*/ +void background_page(void){ + Blob bgimg; + char *zMime; + + zMime = db_get("background-mimetype", "image/gif"); + blob_zero(&bgimg); + db_blob(&bgimg, "SELECT value FROM config WHERE name='background-image'"); + if( blob_size(&bgimg)==0 ){ + blob_init(&bgimg, (char*)aBackground, sizeof(aBackground)); + } + cgi_set_content_type(zMime); + cgi_set_content(&bgimg); + g.isConst = 1; +} + + +/* +** WEBPAGE: docsrch +** +** Search for documents that match a user-supplied full-text search pattern. +** If no pattern is specified (by the s= query parameter) then the user +** is prompted to enter a search string. +** +** Query parameters: +** +** s=PATTERN Search for PATTERN +*/ +void doc_search_page(void){ + login_check_credentials(); + style_header("Document Search"); + search_screen(SRCH_DOC, 0); + style_footer(); +} Index: src/encode.c ================================================================== --- src/encode.c +++ src/encode.c @@ -44,34 +44,33 @@ default: count++; break; } i++; } i = 0; - zOut = malloc( count+1 ); - if( zOut==0 ) return 0; + zOut = fossil_malloc( count+1 ); while( n-->0 && (c = *zIn)!=0 ){ switch( c ){ - case '<': + case '<': zOut[i++] = '&'; zOut[i++] = 'l'; zOut[i++] = 't'; zOut[i++] = ';'; break; - case '>': + case '>': zOut[i++] = '&'; zOut[i++] = 'g'; zOut[i++] = 't'; zOut[i++] = ';'; break; - case '&': + case '&': zOut[i++] = '&'; zOut[i++] = 'a'; zOut[i++] = 'm'; zOut[i++] = 'p'; zOut[i++] = ';'; break; - case '"': + case '"': zOut[i++] = '&'; zOut[i++] = 'q'; zOut[i++] = 'u'; zOut[i++] = 'o'; zOut[i++] = 't'; @@ -85,10 +84,44 @@ } zOut[i] = 0; return zOut; } +/* +** Append HTML-escaped text to a Blob. 
+*/ +void htmlize_to_blob(Blob *p, const char *zIn, int n){ + int c, i, j; + if( n<0 ) n = strlen(zIn); + for(i=j=0; i': + if( j0 && (c = *zIn)!=0 ){ if( IsSafeChar(c) ){ zOut[i++] = c; }else if( c==' ' ){ zOut[i++] = '+'; @@ -129,17 +161,18 @@ zOut[i++] = "0123456789ABCDEF"[c&0xf]; } zIn++; } zOut[i] = 0; +#undef IsSafeChar return zOut; } /* ** Convert the input string into a form that is suitable for use as ** a token in the HTTP protocol. Spaces are encoded as '+' and special -** characters are encoded as "%HH" where HH is a two-digit hexidecimal +** characters are encoded as "%HH" where HH is a two-digit hexadecimal ** representation of the character. The "/" character is encoded ** as "%2F". */ char *httpize(const char *z, int n){ return EncodeHttp(z, n, 1); @@ -148,11 +181,11 @@ /* ** Convert the input string into a form that is suitable for use as ** a token in the HTTP protocol. Spaces are encoded as '+' and special ** characters are encoded as "%HH" where HH is a two-digit hexidecimal ** representation of the character. The "/" character is not encoded -** by this routine. +** by this routine. */ char *urlize(const char *z, int n){ return EncodeHttp(z, n, 0); } @@ -233,21 +266,21 @@ c = zIn[i]; if( c==0 || c==' ' || c=='\n' || c=='\t' || c=='\r' || c=='\f' || c=='\v' || c=='\\' ) n++; } n += nIn; - zOut = malloc( n+1 ); + zOut = fossil_malloc( n+1 ); if( zOut ){ for(i=j=0; idecode64 function. @@ -309,11 +343,11 @@ int i, n; if( nData<=0 ){ nData = strlen(zData); } - z64 = malloc( (nData*4)/3 + 8 ); + z64 = fossil_malloc( (nData*4)/3 + 8 ); for(i=n=0; i+2>2) & 0x3f ]; z64[n++] = zBase[ ((zData[i]<<4) & 0x30) | ((zData[i+1]>>4) & 0x0f) ]; z64[n++] = zBase[ ((zData[i+1]<<2) & 0x3c) | ((zData[i+2]>>6) & 0x03) ]; z64[n++] = zBase[ zData[i+2] & 0x3f ]; @@ -332,19 +366,19 @@ z64[n] = 0; return z64; } /* -** COMMAND: test-encode64 +** COMMAND: test-encode64 ** Usage: %fossil test-encode64 STRING */ void test_encode64_cmd(void){ char *z; int i; for(i=2; i0 && z64[n64-1]=='=' ) n64--; - zData = malloc( (n64*3)/4 + 4 ); + zData = fossil_malloc( (n64*3)/4 + 4 ); for(i=j=0; i+3 %s (%s)\n", g.argv[i], z, z2); + free(z); + free(z2); + z = unobscure(g.argv[i]); + fossil_print("UNOBSCURE: %s -> %s\n", g.argv[i], z); + free(z); + } +} ADDED src/event.c Index: src/event.c ================================================================== --- src/event.c +++ src/event.c @@ -0,0 +1,595 @@ +/* +** Copyright (c) 2010 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code to do formatting of event messages: +** +** Technical Notes +** Milestones +** Blog posts +** New articles +** Process checkpoints +** Announcements +** +** Do not confuse "event" artifacts with the "event" table in the +** repository database. An "event" artifact is a technical-note: a +** wiki- or blog-like essay that appears on the timeline. The "event" +** table records all entries on the timeline, including tech-notes. +** +** (2015-02-14): Changing the name to "tech-note" most everywhere. 
+*/ +#include "config.h" +#include +#include +#include "event.h" + +/* +** Output a hyperlink to an technote given its tagid. +*/ +void hyperlink_to_event_tagid(int tagid){ + char *zId; + zId = db_text(0, "SELECT substr(tagname, 7) FROM tag WHERE tagid=%d", + tagid); + @ [%z(href("%R/technote/%s",zId))%S(zId)] + free(zId); +} + +/* +** WEBPAGE: technote +** WEBPAGE: event +** +** Display a "technical note" or "tech-note" (formerly called an "event"). +** +** PARAMETERS: +** +** name=ID // Identify the tech-note to display. ID must be complete +** aid=ARTIFACTID // Which specific version of the tech-note. Optional. +** v=BOOLEAN // Show details if TRUE. Default is FALSE. Optional. +** +** Display an existing event identified by EVENTID +*/ +void event_page(void){ + int rid = 0; /* rid of the event artifact */ + char *zUuid; /* UUID corresponding to rid */ + const char *zId; /* Event identifier */ + const char *zVerbose; /* Value of verbose option */ + char *zETime; /* Time of the tech-note */ + char *zATime; /* Time the artifact was created */ + int specRid; /* rid specified by aid= parameter */ + int prevRid, nextRid; /* Previous or next edits of this tech-note */ + Manifest *pTNote; /* Parsed technote artifact */ + Blob fullbody; /* Complete content of the technote body */ + Blob title; /* Title extracted from the technote body */ + Blob tail; /* Event body that comes after the title */ + Stmt q1; /* Query to search for the technote */ + int verboseFlag; /* True to show details */ + const char *zMimetype = 0; /* Mimetype of the document */ + const char *zFullId; /* Full event identifier */ + + + /* wiki-read privilege is needed in order to read tech-notes. + */ + login_check_credentials(); + if( !g.perm.RdWiki ){ + login_needed(g.anon.RdWiki); + return; + } + + zId = P("name"); + if( zId==0 ){ fossil_redirect_home(); return; } + zUuid = (char*)P("aid"); + specRid = zUuid ? uuid_to_rid(zUuid, 0) : 0; + rid = nextRid = prevRid = 0; + db_prepare(&q1, + "SELECT rid FROM tagxref" + " WHERE tagid=(SELECT tagid FROM tag WHERE tagname GLOB 'event-%q*')" + " ORDER BY mtime DESC", + zId + ); + while( db_step(&q1)==SQLITE_ROW ){ + nextRid = rid; + rid = db_column_int(&q1, 0); + if( specRid==0 || specRid==rid ){ + if( db_step(&q1)==SQLITE_ROW ){ + prevRid = db_column_int(&q1, 0); + } + break; + } + } + db_finalize(&q1); + if( rid==0 || (specRid!=0 && specRid!=rid) ){ + style_header("No Such Tech-Note"); + @ Cannot locate a technical note called %h(zId). + style_footer(); + return; + } + zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); + zVerbose = P("v"); + if( !zVerbose ){ + zVerbose = P("verbose"); + } + if( !zVerbose ){ + zVerbose = P("detail"); /* deprecated */ + } + verboseFlag = (zVerbose!=0) && !is_false(zVerbose); + + /* Extract the event content. 
+ */ + pTNote = manifest_get(rid, CFTYPE_EVENT, 0); + if( pTNote==0 ){ + fossil_fatal("Object #%d is not a tech-note", rid); + } + zMimetype = wiki_filter_mimetypes(PD("mimetype",pTNote->zMimetype)); + blob_init(&fullbody, pTNote->zWiki, -1); + blob_init(&title, 0, 0); + blob_init(&tail, 0, 0); + if( fossil_strcmp(zMimetype, "text/x-fossil-wiki")==0 ){ + if( !wiki_find_title(&fullbody, &title, &tail) ){ + blob_appendf(&title, "Tech-note %S", zId); + tail = fullbody; + } + }else if( fossil_strcmp(zMimetype, "text/x-markdown")==0 ){ + markdown_to_html(&fullbody, &title, &tail); + if( blob_size(&title)==0 ){ + blob_appendf(&title, "Tech-note %S", zId); + } + }else{ + blob_appendf(&title, "Tech-note %S", zId); + tail = fullbody; + } + style_header("%s", blob_str(&title)); + if( g.perm.WrWiki && g.perm.Write && nextRid==0 ){ + style_submenu_element("Edit", 0, "%R/technoteedit?name=%!S", zId); + if( g.perm.Attach ){ + style_submenu_element("Attach", "Add an attachment", + "%R/attachadd?technote=%!S&from=%R/technote/%!S", + zId, zId); + } + } + zETime = db_text(0, "SELECT datetime(%.17g)", pTNote->rEventDate); + style_submenu_element("Context", 0, "%R/timeline?c=%.20s", zId); + if( g.perm.Hyperlink ){ + if( verboseFlag ){ + style_submenu_element("Plain", 0, + "%R/technote?name=%!S&aid=%s&mimetype=text/plain", + zId, zUuid); + if( nextRid ){ + char *zNext; + zNext = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", nextRid); + style_submenu_element("Next", 0,"%R/technote?name=%!S&aid=%s&v", + zId, zNext); + free(zNext); + } + if( prevRid ){ + char *zPrev; + zPrev = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", prevRid); + style_submenu_element("Prev", 0, "%R/technote?name=%!S&aid=%s&v", + zId, zPrev); + free(zPrev); + } + }else{ + style_submenu_element("Detail", 0, "%R/technote?name=%!S&aid=%s&v", + zId, zUuid); + } + } + + if( verboseFlag && g.perm.Hyperlink ){ + int i; + const char *zClr = 0; + Blob comment; + + zATime = db_text(0, "SELECT datetime(%.17g)", pTNote->rDate); + @

Tech-note [%z(href("%R/artifact/%!S",zUuid))%S(zUuid)] at + @ [%z(href("%R/timeline?c=%T",zETime))%s(zETime)] + @ entered by user %h(pTNote->zUser) on + @ [%z(href("%R/timeline?c=%T",zATime))%s(zATime)]:

+ @
+  for(i=0; i<pTNote->nTag; i++){
+    if( fossil_strcmp(pTNote->aTag[i].zName,"+bgcolor")==0 ){
+      zClr = pTNote->aTag[i].zValue;
+    }
+  }
+  if( zClr && zClr[0]==0 ) zClr = 0;
+  if( zClr ){
+    @
+  }else{
+    @
+  }
+  blob_init(&comment, pTNote->zComment, -1);
+  wiki_convert(&comment, 0, WIKI_INLINE);
+  blob_reset(&comment);
+  @
+ @

+  }
+
+  if( fossil_strcmp(zMimetype, "text/x-fossil-wiki")==0 ){
+    wiki_convert(&fullbody, 0, 0);
+  }else if( fossil_strcmp(zMimetype, "text/x-markdown")==0 ){
+    cgi_append_content(blob_buffer(&tail), blob_size(&tail));
+  }else{
+    @
+    @ %h(blob_str(&fullbody))
+    @ 
+  }
+  zFullId = db_text(0, "SELECT SUBSTR(tagname,7)"
+                       " FROM tag"
+                       " WHERE tagname GLOB 'event-%q*'",
+                       zId);
+  attachment_list(zFullId, "

Attachments:

    "); + style_footer(); + manifest_destroy(pTNote); +} + +/* +** Add or update a new tech note to the repository. rid is id of +** the prior version of this technote, if any. +** +** returns 1 if the tech note was added or updated, 0 if the +** update failed making an invalid artifact +*/ +int event_commit_common( + int rid, /* id of the prior version of the technote */ + const char *zId, /* hash label for the technote */ + const char *zBody, /* content of the technote */ + char *zETime, /* timestamp for the technote */ + const char *zMimetype, /* mimetype for the technote N-card */ + const char *zComment, /* comment shown on the timeline */ + const char *zTags, /* tags associated with this technote */ + const char *zClr /* Background color */ +){ + Blob event; + char *zDate; + Blob cksum; + int nrid, n; + + blob_init(&event, 0, 0); + db_begin_transaction(); + while( fossil_isspace(zComment[0]) ) zComment++; + n = strlen(zComment); + while( n>0 && fossil_isspace(zComment[n-1]) ){ n--; } + if( n>0 ){ + blob_appendf(&event, "C %#F\n", n, zComment); + } + zDate = date_in_standard_format("now"); + blob_appendf(&event, "D %s\n", zDate); + free(zDate); + + zETime[10] = 'T'; + blob_appendf(&event, "E %s %s\n", zETime, zId); + zETime[10] = ' '; + if( rid ){ + char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); + blob_appendf(&event, "P %s\n", zUuid); + free(zUuid); + } + if( zMimetype && zMimetype[0] ){ + blob_appendf(&event, "N %s\n", zMimetype); + } + if( zClr && zClr[0] ){ + blob_appendf(&event, "T +bgcolor * %F\n", zClr); + } + if( zTags && zTags[0] ){ + Blob tags, one; + int i, j; + Stmt q; + char *zBlob; + + /* Load the tags string into a blob */ + blob_zero(&tags); + blob_append(&tags, zTags, -1); + + /* Collapse all sequences of whitespace and "," characters into + ** a single space character */ + zBlob = blob_str(&tags); + for(i=j=0; zBlob[i]; i++, j++){ + if( fossil_isspace(zBlob[i]) || zBlob[i]==',' ){ + while( fossil_isspace(zBlob[i+1]) ){ i++; } + zBlob[j] = ' '; + }else{ + zBlob[j] = zBlob[i]; + } + } + blob_resize(&tags, j); + + /* Parse out each tag and load it into a temporary table for sorting */ + db_multi_exec("CREATE TEMP TABLE newtags(x);"); + while( blob_token(&tags, &one) ){ + db_multi_exec("INSERT INTO newtags VALUES(%B)", &one); + } + blob_reset(&tags); + + /* Extract the tags in sorted order and make an entry in the + ** artifact for each. */ + db_prepare(&q, "SELECT x FROM newtags ORDER BY x"); + while( db_step(&q)==SQLITE_ROW ){ + blob_appendf(&event, "T +sym-%F *\n", db_column_text(&q, 0)); + } + db_finalize(&q); + } + if( !login_is_nobody() ){ + blob_appendf(&event, "U %F\n", login_name()); + } + blob_appendf(&event, "W %d\n%s\n", strlen(zBody), zBody); + md5sum_blob(&event, &cksum); + blob_appendf(&event, "Z %b\n", &cksum); + blob_reset(&cksum); + nrid = content_put(&event); + db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nrid); + if( manifest_crosslink(nrid, &event, MC_NONE)==0 ){ + db_end_transaction(1); + return 0; + } + assert( blob_is_reset(&event) ); + content_deltify(rid, nrid, 0); + db_end_transaction(0); + return 1; +} + +/* +** WEBPAGE: technoteedit +** WEBPAGE: eventedit +** +** Revise or create a technical note (formerly called an 'event'). +** +** Parameters: +** +** name=ID Hex hash ID of the tech-note. If omitted, a new +** tech-note is created. 
+*/ +void eventedit_page(void){ + char *zTag; + int rid = 0; + Blob event; + const char *zId; + int n; + const char *z; + char *zBody = (char*)P("w"); + char *zETime = (char*)P("t"); + const char *zComment = P("c"); + const char *zTags = P("g"); + const char *zClr; + const char *zMimetype = P("mimetype"); + int isNew = 0; + + if( zBody ){ + zBody = mprintf("%s", zBody); + } + login_check_credentials(); + zId = P("name"); + if( zId==0 ){ + zId = db_text(0, "SELECT lower(hex(randomblob(20)))"); + isNew = 1; + }else{ + int nId = strlen(zId); + if( !validate16(zId, nId) ){ + fossil_redirect_home(); + return; + } + } + zTag = mprintf("event-%s", zId); + rid = db_int(0, + "SELECT rid FROM tagxref" + " WHERE tagid=(SELECT tagid FROM tag WHERE tagname GLOB '%q*')" + " ORDER BY mtime DESC", zTag + ); + if( rid && strlen(zId)<40 ){ + zId = db_text(0, + "SELECT substr(tagname,7) FROM tag WHERE tagname GLOB '%q*'", + zTag + ); + } + free(zTag); + + /* Need both check-in and wiki-write or wiki-create privileges in order + ** to edit/create an event. + */ + if( !g.perm.Write || (rid && !g.perm.WrWiki) || (!rid && !g.perm.NewWiki) ){ + login_needed(g.anon.Write && (rid ? g.anon.WrWiki : g.anon.NewWiki)); + return; + } + + /* Figure out the color */ + if( rid ){ + zClr = db_text("", "SELECT bgcolor FROM event WHERE objid=%d", rid); + }else{ + zClr = ""; + isNew = 1; + } + zClr = PD("clr",zClr); + if( fossil_strcmp(zClr,"##")==0 ) zClr = PD("cclr",""); + + + /* If editing an existing event, extract the key fields to use as + ** a starting point for the edit. + */ + if( rid + && (zBody==0 || zETime==0 || zComment==0 || zTags==0 || zMimetype==0) + ){ + Manifest *pTNote; + pTNote = manifest_get(rid, CFTYPE_EVENT, 0); + if( pTNote && pTNote->type==CFTYPE_EVENT ){ + if( zBody==0 ) zBody = pTNote->zWiki; + if( zETime==0 ){ + zETime = db_text(0, "SELECT datetime(%.17g)", pTNote->rEventDate); + } + if( zComment==0 ) zComment = pTNote->zComment; + if( zMimetype==0 ) zMimetype = pTNote->zMimetype; + } + if( zTags==0 ){ + zTags = db_text(0, + "SELECT group_concat(substr(tagname,5),', ')" + " FROM tagxref, tag" + " WHERE tagxref.rid=%d" + " AND tagxref.tagid=tag.tagid" + " AND tag.tagname GLOB 'sym-*'", + rid + ); + } + } + zETime = db_text(0, "SELECT coalesce(datetime(%Q),datetime('now'))", zETime); + if( P("submit")!=0 && (zBody!=0 && zComment!=0) ){ + login_verify_csrf_secret(); + if ( !event_commit_common(rid, zId, zBody, zETime, + zMimetype, zComment, zTags, zClr) ){ + style_header("Error"); + @ Internal error: Fossil tried to make an invalid artifact for + @ the edited technote. + style_footer(); + return; + } + cgi_redirectf("technote?name=%T", zId); + } + if( P("cancel")!=0 ){ + cgi_redirectf("technote?name=%T", zId); + return; + } + if( zBody==0 ){ + zBody = mprintf("Insert new content here..."); + } + if( isNew ){ + style_header("New Tech-note %S", zId); + }else{ + style_header("Edit Tech-note %S", zId); + } + if( P("preview")!=0 ){ + Blob com; + @

    Timeline comment preview:

    + @
+    @
+    if( zClr && zClr[0] ){
+      @
+    }else{
+      @
+    }
+    blob_zero(&com);
+    blob_append(&com, zComment, -1);
+    wiki_convert(&com, 0, WIKI_INLINE|WIKI_NOBADLINKS);
+    @
    + @
    + @

    Page content preview:

    + @

+    blob_init(&event, 0, 0);
+    blob_append(&event, zBody, -1);
+    wiki_render_by_mimetype(&event, zMimetype);
+    @

+    blob_reset(&event);
+  }
+  for(n=2, z=zBody; z[0]; z++){
+    if( z[0]=='\n' ) n++;
+  }
+  if( n<20 ) n = 20;
+  if( n>40 ) n = 40;
+  @
    + login_insert_csrf_secret(); + @ + @ + + @ + @ + + @ + @ + + @ + @ + + @ + @ + + @ + @ + + @ + @ + + @
    Timestamp (UTC): + @ + @
    Timeline Comment: + @ + @
Timeline Background Color:
+  render_color_chooser(0, zClr, 0, "clr", "cclr");
+  @
    Tags: + @ + @
Markup Style:
+  mimetype_option_menu(zMimetype);
+  @
    Page Content: + @ + @
    + @ + @ + @ + @
    + @
    + style_footer(); +} + +/* +** Add a new tech note to the repository. The timestamp is +** given by the zETime parameter. isNew must be true to create +** a new page. If no previous page with the name zPageName exists +** and isNew is false, then this routine throws an error. +*/ +void event_cmd_commit( + char *zETime, /* timestamp */ + int isNew, /* true to create a new page */ + Blob *pContent, /* content of the new page */ + const char *zMimeType, /* mimetype of the content */ + const char *zComment, /* comment to go on the timeline */ + const char *zTags, /* tags */ + const char *zClr /* background color */ +){ + int rid; /* Artifact id of the tech note */ + const char *zId; /* id of the tech note */ + rid = db_int(0, "SELECT objid FROM event" + " WHERE datetime(mtime)=datetime('%q') AND type = 'e'" + " LIMIT 1", + zETime + ); + if( rid==0 && !isNew ){ +#ifdef FOSSIL_ENABLE_JSON + g.json.resultCode = FSL_JSON_E_RESOURCE_NOT_FOUND; +#endif + fossil_fatal("no such tech note: %s", zETime); + } + if( rid!=0 && isNew ){ +#ifdef FOSSIL_ENABLE_JSON + g.json.resultCode = FSL_JSON_E_RESOURCE_ALREADY_EXISTS; +#endif + fossil_fatal("tech note %s already exists", zETime); + } + + if ( isNew ){ + zId = db_text(0, "SELECT lower(hex(randomblob(20)))"); + }else{ + zId = db_text(0, + "SELECT substr(tagname,7) FROM tag" + " WHERE tagid=(SELECT tagid FROM event WHERE objid='%d')", + rid + ); + } + + user_select(); + if (event_commit_common(rid, zId, blob_str(pContent), zETime, + zMimeType, zComment, zTags, zClr)==0 ){ +#ifdef FOSSIL_ENABLE_JSON + g.json.resultCode = FSL_JSON_E_ASSERT; +#endif + fossil_fatal("Internal error: Fossil tried to make an " + "invalid artifact for the technote."); + } +} ADDED src/export.c Index: src/export.c ================================================================== --- src/export.c +++ src/export.c @@ -0,0 +1,372 @@ +/* +** Copyright (c) 2010 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@sqlite.org +** +******************************************************************************* +** +** This file contains code used to export the content of a Fossil +** repository in the git-fast-import format. +*/ +#include "config.h" +#include "export.h" +#include + +/* +** Output a "committer" record for the given user. +*/ +static void print_person(const char *zUser){ + static Stmt q; + const char *zContact; + char *zName; + char *zEmail; + int i, j; + + if( zUser==0 ){ + printf(" "); + return; + } + db_static_prepare(&q, "SELECT info FROM user WHERE login=:user"); + db_bind_text(&q, ":user", zUser); + if( db_step(&q)!=SQLITE_ROW ){ + db_reset(&q); + for(i=0; zUser[i] && zUser[i]!='>' && zUser[i]!='<'; i++){} + if( zUser[i]==0 ){ + printf(" %s <%s>", zUser, zUser); + return; + } + zName = mprintf("%s", zUser); + for(i=j=0; zName[i]; i++){ + if( zName[i]!='<' && zName[i]!='>' ){ + zName[j++] = zName[i]; + } + } + zName[j] = 0; + printf(" %s <%s>", zName, zUser); + free(zName); + return; + } + /* + ** We have contact information. + ** It may or may not contain an email address. 
+ */ + zContact = db_column_text(&q, 0); + for(i=0; zContact[i] && zContact[i]!='>' && zContact[i]!='<'; i++){} + if( zContact[i]==0 ){ + /* No email address found. Take as user info if not empty */ + printf(" %s <%s>", zContact[0] ? zContact : zUser, zUser); + db_reset(&q); + return; + } + if( zContact[i]=='<' ){ + /* + ** Found beginning of email address. Look for the end and extract + ** the part. + */ + zEmail = mprintf("%s", &zContact[i]); + for(j=0; zEmail[j] && zEmail[j]!='>'; j++){} + if( zEmail[j]=='>' ) zEmail[j+1] = 0; + }else{ + /* + ** Found an end marker for email, but nothing else. + */ + zEmail = mprintf("<%s>", zUser); + } + /* + ** Here zContact[i] either '<' or '>'. Extract the string _before_ + ** either as user name. + */ + zName = mprintf("%.*s", i-1, zContact); + for(i=j=0; zName[i]; i++){ + if( zName[i]!='"' ) zName[j++] = zName[i]; + } + zName[j] = 0; + printf(" %s %s", zName, zEmail); + free(zName); + free(zEmail); + db_reset(&q); +} + +#define BLOBMARK(rid) ((rid) * 2) +#define COMMITMARK(rid) ((rid) * 2 + 1) + +/* +** COMMAND: export +** +** Usage: %fossil export --git ?OPTIONS? ?REPOSITORY? +** +** Write an export of all check-ins to standard output. The export is +** written in the git-fast-export file format assuming the --git option is +** provided. The git-fast-export format is currently the only VCS +** interchange format supported, though other formats may be added in +** the future. +** +** Run this command within a checkout. Or use the -R or --repository +** option to specify a Fossil repository to be exported. +** +** Only check-ins are exported using --git. Git does not support tickets +** or wiki or events or attachments, so none of those are exported. +** +** If the "--import-marks FILE" option is used, it contains a list of +** rids to skip. +** +** If the "--export-marks FILE" option is used, the rid of all commits and +** blobs written on exit for use with "--import-marks" on the next run. 
+** +** Options: +** --export-marks FILE export rids of exported data to FILE +** --import-marks FILE read rids of data to ignore from FILE +** --repository|-R REPOSITORY export the given REPOSITORY +** +** See also: import +*/ +void export_cmd(void){ + Stmt q, q2, q3; + int i; + Bag blobs, vers; + const char *markfile_in; + const char *markfile_out; + + bag_init(&blobs); + bag_init(&vers); + + find_option("git", 0, 0); /* Ignore the --git option for now */ + markfile_in = find_option("import-marks", 0, 1); + markfile_out = find_option("export-marks", 0, 1); + + db_find_and_open_repository(0, 2); + verify_all_options(); + if( g.argc!=2 && g.argc!=3 ){ usage("--git ?REPOSITORY?"); } + + db_multi_exec("CREATE TEMPORARY TABLE oldblob(rid INTEGER PRIMARY KEY)"); + db_multi_exec("CREATE TEMPORARY TABLE oldcommit(rid INTEGER PRIMARY KEY)"); + if( markfile_in!=0 ){ + Stmt qb,qc; + char line[100]; + FILE *f; + + f = fossil_fopen(markfile_in, "r"); + if( f==0 ){ + fossil_fatal("cannot open %s for reading", markfile_in); + } + db_prepare(&qb, "INSERT OR IGNORE INTO oldblob VALUES (:rid)"); + db_prepare(&qc, "INSERT OR IGNORE INTO oldcommit VALUES (:rid)"); + while( fgets(line, sizeof(line), f)!=0 ){ + if( *line == 'b' ){ + db_bind_text(&qb, ":rid", line + 1); + db_step(&qb); + db_reset(&qb); + bag_insert(&blobs, atoi(line + 1)); + }else if( *line == 'c' ){ + db_bind_text(&qc, ":rid", line + 1); + db_step(&qc); + db_reset(&qc); + bag_insert(&vers, atoi(line + 1)); + }else{ + fossil_fatal("bad input from %s: %s", markfile_in, line); + } + } + db_finalize(&qb); + db_finalize(&qc); + fclose(f); + } + + /* Step 1: Generate "blob" records for every artifact that is part + ** of a check-in + */ + fossil_binary_mode(stdout); + db_multi_exec("CREATE TEMP TABLE newblob(rid INTEGER KEY, srcid INTEGER)"); + db_multi_exec("CREATE INDEX newblob_src ON newblob(srcid)"); + db_multi_exec( + "INSERT INTO newblob" + " SELECT DISTINCT fid," + " CASE WHEN EXISTS(SELECT 1 FROM delta" + " WHERE rid=fid" + " AND NOT EXISTS(SELECT 1 FROM oldblob" + " WHERE srcid=fid))" + " THEN (SELECT srcid FROM delta WHERE rid=fid)" + " ELSE 0" + " END" + " FROM mlink" + " WHERE fid>0 AND NOT EXISTS(SELECT 1 FROM oldblob WHERE rid=fid)"); + db_prepare(&q, + "SELECT DISTINCT fid FROM mlink" + " WHERE fid>0 AND NOT EXISTS(SELECT 1 FROM oldblob WHERE rid=fid)"); + db_prepare(&q2, "INSERT INTO oldblob VALUES (:rid)"); + db_prepare(&q3, "SELECT rid FROM newblob WHERE srcid= (:srcid)"); + while( db_step(&q)==SQLITE_ROW ){ + int rid = db_column_int(&q, 0); + Blob content; + + while( !bag_find(&blobs, rid) ){ + content_get(rid, &content); + db_bind_int(&q2, ":rid", rid); + db_step(&q2); + db_reset(&q2); + printf("blob\nmark :%d\ndata %d\n", BLOBMARK(rid), blob_size(&content)); + bag_insert(&blobs, rid); + fwrite(blob_buffer(&content), 1, blob_size(&content), stdout); + printf("\n"); + blob_reset(&content); + + db_bind_int(&q3, ":srcid", rid); + if( db_step(&q3) != SQLITE_ROW ){ + db_reset(&q3); + break; + } + rid = db_column_int(&q3, 0); + db_reset(&q3); + } + } + db_finalize(&q); + db_finalize(&q2); + db_finalize(&q3); + + /* Output the commit records. 
+ */ + db_prepare(&q, + "SELECT strftime('%%s',mtime), objid, coalesce(ecomment,comment)," + " coalesce(euser,user)," + " (SELECT value FROM tagxref WHERE rid=objid AND tagid=%d)" + " FROM event" + " WHERE type='ci' AND NOT EXISTS (SELECT 1 FROM oldcommit WHERE objid=rid)" + " ORDER BY mtime ASC", + TAG_BRANCH + ); + db_prepare(&q2, "INSERT INTO oldcommit VALUES (:rid)"); + while( db_step(&q)==SQLITE_ROW ){ + Stmt q4; + const char *zSecondsSince1970 = db_column_text(&q, 0); + int ckinId = db_column_int(&q, 1); + const char *zComment = db_column_text(&q, 2); + const char *zUser = db_column_text(&q, 3); + const char *zBranch = db_column_text(&q, 4); + char *zBr; + + bag_insert(&vers, ckinId); + db_bind_int(&q2, ":rid", ckinId); + db_step(&q2); + db_reset(&q2); + if( zBranch==0 ) zBranch = "trunk"; + zBr = mprintf("%s", zBranch); + for(i=0; zBr[i]; i++){ + if( !fossil_isalnum(zBr[i]) ) zBr[i] = '_'; + } + printf("commit refs/heads/%s\nmark :%d\n", zBr, COMMITMARK(ckinId)); + free(zBr); + printf("committer"); + print_person(zUser); + printf(" %s +0000\n", zSecondsSince1970); + if( zComment==0 ) zComment = "null comment"; + printf("data %d\n%s\n", (int)strlen(zComment), zComment); + db_prepare(&q3, + "SELECT pid FROM plink" + " WHERE cid=%d AND isprim" + " AND pid IN (SELECT objid FROM event)", + ckinId + ); + if( db_step(&q3) == SQLITE_ROW ){ + printf("from :%d\n", COMMITMARK(db_column_int(&q3, 0))); + db_prepare(&q4, + "SELECT pid FROM plink" + " WHERE cid=%d AND NOT isprim" + " AND NOT EXISTS(SELECT 1 FROM phantom WHERE rid=pid)" + " ORDER BY pid", + ckinId); + while( db_step(&q4)==SQLITE_ROW ){ + printf("merge :%d\n", COMMITMARK(db_column_int(&q4,0))); + } + db_finalize(&q4); + }else{ + printf("deleteall\n"); + } + + db_prepare(&q4, + "SELECT filename.name, mlink.fid, mlink.mperm FROM mlink" + " JOIN filename ON filename.fnid=mlink.fnid" + " WHERE mlink.mid=%d", + ckinId + ); + while( db_step(&q4)==SQLITE_ROW ){ + const char *zName = db_column_text(&q4,0); + int zNew = db_column_int(&q4,1); + int mPerm = db_column_int(&q4,2); + if( zNew==0) + printf("D %s\n", zName); + else if( bag_find(&blobs, zNew) ) { + const char *zPerm; + switch( mPerm ){ + case PERM_LNK: zPerm = "120000"; break; + case PERM_EXE: zPerm = "100755"; break; + default: zPerm = "100644"; break; + } + printf("M %s :%d %s\n", zPerm, BLOBMARK(zNew), zName); + } + } + db_finalize(&q4); + db_finalize(&q3); + printf("\n"); + } + db_finalize(&q2); + db_finalize(&q); + bag_clear(&blobs); + manifest_cache_clear(); + + + /* Output tags */ + db_prepare(&q, + "SELECT tagname, rid, strftime('%%s',mtime)" + " FROM tagxref JOIN tag USING(tagid)" + " WHERE tagtype=1 AND tagname GLOB 'sym-*'" + ); + while( db_step(&q)==SQLITE_ROW ){ + const char *zTagname = db_column_text(&q, 0); + char *zEncoded = 0; + int rid = db_column_int(&q, 1); + const char *zSecSince1970 = db_column_text(&q, 2); + int i; + if( rid==0 || !bag_find(&vers, rid) ) continue; + zTagname += 4; + zEncoded = mprintf("%s", zTagname); + for(i=0; zEncoded[i]; i++){ + if( !fossil_isalnum(zEncoded[i]) ) zEncoded[i] = '_'; + } + printf("tag %s\n", zEncoded); + printf("from :%d\n", COMMITMARK(rid)); + printf("tagger %s +0000\n", zSecSince1970); + printf("data 0\n"); + fossil_free(zEncoded); + } + db_finalize(&q); + bag_clear(&vers); + + if( markfile_out!=0 ){ + FILE *f; + f = fossil_fopen(markfile_out, "w"); + if( f == 0 ){ + fossil_fatal("cannot open %s for writing", markfile_out); + } + db_prepare(&q, "SELECT rid FROM oldblob"); + while( db_step(&q)==SQLITE_ROW ){ + fprintf(f, 
"b%d\n", db_column_int(&q, 0)); + } + db_finalize(&q); + db_prepare(&q, "SELECT rid FROM oldcommit"); + while( db_step(&q)==SQLITE_ROW ){ + fprintf(f, "c%d\n", db_column_int(&q, 0)); + } + db_finalize(&q); + if( ferror(f)!=0 || fclose(f)!=0 ) { + fossil_fatal("error while writing %s", markfile_out); + } + } +} Index: src/file.c ================================================================== --- src/file.c +++ src/file.c @@ -13,37 +13,112 @@ ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** -** File utilities +** File utilities. +** +** Functions named file_* are generic functions that always follow symlinks. +** +** Functions named file_wd_* are to be used for files inside working +** directories. They follow symlinks depending on 'allow-symlinks' setting. */ #include "config.h" #include #include #include +#include +#include #include "file.h" /* -** The file status information from the most recent stat() call. +** On Windows, include the Platform SDK header file. +*/ +#ifdef _WIN32 +# include +# include +# include +#else +# include +#endif + +#if INTERFACE + +#include +#if defined(_WIN32) +# define DIR _WDIR +# define dirent _wdirent +# define opendir _wopendir +# define readdir _wreaddir +# define closedir _wclosedir +#endif /* _WIN32 */ + +#if defined(_WIN32) && (defined(__MSVCRT__) || defined(_MSC_VER)) +struct fossilStat { + i64 st_size; + i64 st_mtime; + int st_mode; +}; +#endif + +#if defined(_WIN32) || defined(__CYGWIN__) +# define fossil_isdirsep(a) (((a) == '/') || ((a) == '\\')) +#else +# define fossil_isdirsep(a) ((a) == '/') +#endif + +#endif /* INTERFACE */ + +#if !defined(_WIN32) || !(defined(__MSVCRT__) || defined(_MSC_VER)) +# define fossilStat stat +#endif + +/* +** On Windows S_ISLNK always returns FALSE. */ -static struct stat fileStat; +#if !defined(S_ISLNK) +# define S_ISLNK(x) (0) +#endif static int fileStatValid = 0; +static struct fossilStat fileStat; + +/* +** Fill stat buf with information received from stat() or lstat(). +** lstat() is called on Unix if isWd is TRUE and allow-symlinks setting is on. +** +*/ +static int fossil_stat(const char *zFilename, struct fossilStat *buf, int isWd){ + int rc; + void *zMbcs = fossil_utf8_to_path(zFilename, 0); +#if !defined(_WIN32) + if( isWd && g.allowSymlinks ){ + rc = lstat(zMbcs, buf); + }else{ + rc = stat(zMbcs, buf); + } +#else + rc = win32_stat(zMbcs, buf, isWd); +#endif + fossil_path_free(zMbcs); + return rc; +} /* ** Fill in the fileStat variable for the file named zFilename. ** If zFilename==0, then use the previous value of fileStat if ** there is a previous value. ** +** If isWd is TRUE, do lstat() instead of stat() if allow-symlinks is on. +** ** Return the number of errors. No error messages are generated. */ -static int getStat(const char *zFilename){ +static int getStat(const char *zFilename, int isWd){ int rc = 0; if( zFilename==0 ){ if( fileStatValid==0 ) rc = 1; }else{ - if( stat(zFilename, &fileStat)!=0 ){ + if( fossil_stat(zFilename, &fileStat, isWd)!=0 ){ fileStatValid = 0; rc = 1; }else{ fileStatValid = 1; rc = 0; @@ -50,50 +125,156 @@ } } return rc; } - /* ** Return the size of a file in bytes. Return -1 if the file does not ** exist. If zFilename is NULL, return the size of the most recently ** stat-ed file. */ i64 file_size(const char *zFilename){ - return getStat(zFilename) ? -1 : fileStat.st_size; + return getStat(zFilename, 0) ? 
-1 : fileStat.st_size; +} + +/* +** Same as file_size(), but takes into account symlinks. +*/ +i64 file_wd_size(const char *zFilename){ + return getStat(zFilename, 1) ? -1 : fileStat.st_size; } /* ** Return the modification time for a file. Return -1 if the file ** does not exist. If zFilename is NULL return the size of the most ** recently stat-ed file. */ i64 file_mtime(const char *zFilename){ - return getStat(zFilename) ? -1 : fileStat.st_mtime; + return getStat(zFilename, 0) ? -1 : fileStat.st_mtime; +} + +/* +** Same as file_mtime(), but takes into account symlinks. +*/ +i64 file_wd_mtime(const char *zFilename){ + return getStat(zFilename, 1) ? -1 : fileStat.st_mtime; +} + +/* +** Return TRUE if the named file is an ordinary file or symlink +** and symlinks are allowed. +** Return false for directories, devices, fifos, etc. +*/ +int file_wd_isfile_or_link(const char *zFilename){ + return getStat(zFilename, 1) ? 0 : S_ISREG(fileStat.st_mode) || + S_ISLNK(fileStat.st_mode); } /* ** Return TRUE if the named file is an ordinary file. Return false ** for directories, devices, fifos, symlinks, etc. */ int file_isfile(const char *zFilename){ - return getStat(zFilename) ? 0 : S_ISREG(fileStat.st_mode); + return getStat(zFilename, 0) ? 0 : S_ISREG(fileStat.st_mode); +} + +/* +** Same as file_isfile(), but takes into account symlinks. +*/ +int file_wd_isfile(const char *zFilename){ + return getStat(zFilename, 1) ? 0 : S_ISREG(fileStat.st_mode); +} + +/* +** Create symlink to file on Unix, or plain-text file with +** symlink target if "allow-symlinks" is off or we're on Windows. +** +** Arguments: target file (symlink will point to it), link file +**/ +void symlink_create(const char *zTargetFile, const char *zLinkFile){ +#if !defined(_WIN32) + if( g.allowSymlinks ){ + int i, nName; + char *zName, zBuf[1000]; + + nName = strlen(zLinkFile); + if( nName>=sizeof(zBuf) ){ + zName = mprintf("%s", zLinkFile); + }else{ + zName = zBuf; + memcpy(zName, zLinkFile, nName+1); + } + nName = file_simplify_name(zName, nName, 0); + for(i=1; i=0 ){ + fossil_free(z); + z = mprintf("%s-%s-%d", zBase, zSuffix, cnt++); + } + if( relFlag ){ + Blob x; + file_relative_name(z, &x, 0); + fossil_free(z); + z = blob_str(&x); + } + return z; +} + /* ** Return the tail of a file pathname. The tail is the last component ** of the path. For example, the tail of "/a/b/c.d" is "c.d". */ const char *file_tail(const char *z){ const char *zTail = z; + if( !zTail ) return 0; while( z[0] ){ - if( z[0]=='/' ) zTail = &z[1]; + if( fossil_isdirsep(z[0]) ) zTail = &z[1]; z++; } return zTail; } + +/* +** Return the directory of a file path name. The directory is all components +** except the last one. For example, the directory of "/a/b/c.d" is "/a/b". +** If there is no directory, NULL is returned; otherwise, the returned memory +** should be freed via fossil_free(). +*/ +char *file_dirname(const char *z){ + const char *zTail = file_tail(z); + if( zTail && zTail!=z ){ + return mprintf("%.*s", (int)(zTail-z-1), z); + }else{ + return 0; + } +} /* ** Copy the content of a file from one place to another. 
*/ void file_copy(const char *zFrom, const char *zTo){ FILE *in, *out; int got; char zBuf[8192]; - in = fopen(zFrom, "rb"); + in = fossil_fopen(zFrom, "rb"); if( in==0 ) fossil_fatal("cannot open \"%s\" for reading", zFrom); - out = fopen(zTo, "wb"); + file_mkfolder(zTo, 0, 0); + out = fossil_fopen(zTo, "wb"); if( out==0 ) fossil_fatal("cannot open \"%s\" for writing", zTo); while( (got=fread(zBuf, 1, sizeof(zBuf), in))>0 ){ fwrite(zBuf, 1, got, out); } fclose(in); fclose(out); } /* -** Set or clear the execute bit on a file. +** COMMAND: test-file-copy +** +** Usage: %fossil test-file-copy SOURCE DESTINATION +** +** Make a copy of the file at SOURCE into a new name DESTINATION. Any +** directories in the path leading up to DESTINATION that do not already +** exist are created automatically. +*/ +void test_file_copy(void){ + if( g.argc!=4 ){ + fossil_fatal("Usage: %s test-file-copy SOURCE DESTINATION", g.argv[0]); + } + file_copy(g.argv[2], g.argv[3]); +} + +/* +** Set or clear the execute bit on a file. Return true if a change +** occurred and false if this routine is a no-op. */ -void file_setexe(const char *zFilename, int onoff){ -#ifndef __MINGW32__ +int file_wd_setexe(const char *zFilename, int onoff){ + int rc = 0; +#if !defined(_WIN32) struct stat buf; - if( stat(zFilename, &buf)!=0 ) return; + if( fossil_stat(zFilename, &buf, 1)!=0 || S_ISLNK(buf.st_mode) ) return 0; if( onoff ){ - if( (buf.st_mode & 0111)!=0111 ){ - chmod(zFilename, buf.st_mode | 0111); + int targetMode = (buf.st_mode & 0444)>>2; + if( (buf.st_mode & 0100) == 0 ){ + chmod(zFilename, buf.st_mode | targetMode); + rc = 1; } }else{ - if( (buf.st_mode & 0111)!=0 ){ + if( (buf.st_mode & 0100) != 0 ){ chmod(zFilename, buf.st_mode & ~0111); + rc = 1; } } -#endif /* __MINGW32__ */ +#endif /* _WIN32 */ + return rc; +} + +/* +** Set the mtime for a file. +*/ +void file_set_mtime(const char *zFilename, i64 newMTime){ +#if !defined(_WIN32) + char *zMbcs; + struct timeval tv[2]; + memset(tv, 0, sizeof(tv[0])*2); + tv[0].tv_sec = newMTime; + tv[1].tv_sec = newMTime; + zMbcs = fossil_utf8_to_path(zFilename, 0); + utimes(zMbcs, tv); +#else + struct _utimbuf tb; + wchar_t *zMbcs = fossil_utf8_to_path(zFilename, 0); + tb.actime = newMTime; + tb.modtime = newMTime; + _wutime(zMbcs, &tb); +#endif + fossil_path_free(zMbcs); +} + +/* +** COMMAND: test-set-mtime +** +** Usage: %fossil test-set-mtime FILENAME DATE/TIME +** +** Sets the mtime of the named file to the date/time shown. +*/ +void test_set_mtime(void){ + const char *zFile; + char *zDate; + i64 iMTime; + if( g.argc!=4 ){ + usage("test-set-mtime FILENAME DATE/TIME"); + } + db_open_or_attach(":memory:", "mem", 0); + iMTime = db_int64(0, "SELECT strftime('%%s',%Q)", g.argv[3]); + zFile = g.argv[2]; + file_set_mtime(zFile, iMTime); + iMTime = file_wd_mtime(zFile); + zDate = db_text(0, "SELECT datetime(%lld, 'unixepoch')", iMTime); + fossil_print("Set mtime of \"%s\" to %s (%lld)\n", zFile, zDate, iMTime); +} + +/* +** Delete a file. +** +** Returns zero upon success. +*/ +int file_delete(const char *zFilename){ + int rc; +#ifdef _WIN32 + wchar_t *z = fossil_utf8_to_path(zFilename, 0); + rc = _wunlink(z); +#else + char *z = fossil_utf8_to_path(zFilename, 0); + rc = unlink(zFilename); +#endif + fossil_path_free(z); + return rc; } /* ** Create the directory named in the argument, if it does not already -** exist. If forceFlag is 1, delete any prior non-directory object +** exist. If forceFlag is 1, delete any prior non-directory object ** with the same name. 
** ** Return the number of errors. */ int file_mkdir(const char *zName, int forceFlag){ - int rc = file_isdir(zName); + int rc = file_wd_isdir(zName); if( rc==2 ){ if( !forceFlag ) return 1; - unlink(zName); + file_delete(zName); } if( rc!=1 ){ -#ifdef __MINGW32__ - return mkdir(zName); +#if defined(_WIN32) + wchar_t *zMbcs = fossil_utf8_to_path(zName, 1); + rc = _wmkdir(zMbcs); +#else + char *zMbcs = fossil_utf8_to_path(zName, 1); + rc = mkdir(zName, 0755); +#endif + fossil_path_free(zMbcs); + return rc; + } + return 0; +} + +/* +** Create the tree of directories in which zFilename belongs, if that sequence +** of directories does not already exist. +** +** On success, return zero. On error, return errorReturn if positive, otherwise +** print an error message and abort. +*/ +int file_mkfolder(const char *zFilename, int forceFlag, int errorReturn){ + int i, nName, rc = 0; + char *zName; + + nName = strlen(zFilename); + zName = mprintf("%s", zFilename); + nName = file_simplify_name(zName, nName, 0); + for(i=1; i U+FFFF are not supported. + * Windows XP and earlier cannot handle them. + */ + return 0; + } + /* This is a 3-byte UTF-8 character */ + unicode = ((c&0x0f)<<12) + ((z[i]&0x3f)<<6) + (z[i+1]&0x3f); + if( unicode <= 0x07ff ){ + /* overlong form */ + return 0; + }else if( unicode>=0xe000 ){ + /* U+E000..U+FFFF */ + if( (unicode<=0xf8ff) || (unicode>=0xfffe) ){ + /* U+E000..U+F8FF are for private use. + * U+FFFE..U+FFFF are noncharacters. */ + return 0; + } else if( (unicode>=0xfdd0) && (unicode<=0xfdef) ){ + /* U+FDD0..U+FDEF are noncharacters. */ + return 0; + } + }else if( (unicode>=0xd800) && (unicode<=0xdfff) ){ + /* U+D800..U+DFFF are for surrogate pairs. */ + return 0; + } + if( (z[++i]&0xc0)!=0x80 ){ + /* Invalid second continuation byte */ + return 0; + } + } + }else if( bStrictUtf8 && (c=='\\') ){ return 0; } - if( z[i]=='/' ){ + if( c=='/' ){ if( z[i+1]=='/' ) return 0; if( z[i+1]=='.' ){ if( z[i+2]=='/' || z[i+2]==0 ) return 0; if( z[i+2]=='.' && (z[i+3]=='/' || z[i+3]==0) ) return 0; } @@ -217,91 +699,233 @@ } } if( z[i-1]=='/' ) return 0; return 1; } + +/* +** If the last component of the pathname in z[0]..z[j-1] is something +** other than ".." then back it out and return true. If the last +** component is empty or if it is ".." then return false. +*/ +static int backup_dir(const char *z, int *pJ){ + int j = *pJ; + int i; + if( j<=0 ) return 0; + for(i=j-1; i>0 && z[i-1]!='/'; i--){} + if( z[i]=='.' && i==j-2 && z[i+1]=='.' ) return 0; + *pJ = i-1; + return 1; +} /* ** Simplify a filename by ** +** * Remove extended path prefix on windows and cygwin +** * Convert all \ into / on windows and cygwin ** * removing any trailing and duplicate / ** * removing /./ ** * removing /A/../ ** ** Changes are made in-place. Return the new name length. +** If the slash parameter is non-zero, the trailing slash, if any, +** is retained. */ -int file_simplify_name(char *z, int n){ - int i, j; +int file_simplify_name(char *z, int n, int slash){ + int i = 1, j; if( n<0 ) n = strlen(z); -#ifdef __MINGW32__ - for(i=0; i3 && !memcmp(z, "//?/", 4) ){ + if( fossil_strnicmp(z+4,"UNC", 3) ){ + i += 4; + z[0] = z[4]; + }else{ + i += 6; + z[0] = '/'; + } } #endif - while( n>1 && z[n-1]=='/' ){ n--; } - for(i=j=0; i1 && z[n-1]=='/' ){ n--; } + } + + /* Remove duplicate '/' characters. Except, two // at the beginning + ** of a pathname is allowed since this is important on windows. */ + for(j=1; i0 && z[j-1]!='/' ){ j--; } - if( j>0 ){ j--; } + + /* If this is a "/.." 
directory component then back out the + ** previous term of the directory if it is something other than ".." + ** or "." + */ + if( z[i+1]=='.' && i+2=0 ) z[j] = z[i]; + j++; } + if( j==0 ) z[j++] = '/'; z[j] = 0; return j; } + +/* +** COMMAND: test-simplify-name +** +** %fossil test-simplify-name FILENAME... +** +** Print the simplified versions of each FILENAME. +*/ +void cmd_test_simplify_name(void){ + int i; + char *z; + for(i=2; i ", z); + file_simplify_name(z, -1, 0); + fossil_print("[%s]\n", z); + fossil_free(z); + } +} + +/* +** Get the current working directory. +** +** On windows, the name is converted from unicode to UTF8 and all '\\' +** characters are converted to '/'. No conversions are needed on +** unix. +*/ +void file_getcwd(char *zBuf, int nBuf){ +#ifdef _WIN32 + win32_getcwd(zBuf, nBuf); +#else + if( getcwd(zBuf, nBuf-1)==0 ){ + if( errno==ERANGE ){ + fossil_fatal("pwd too big: max %d\n", nBuf-1); + }else{ + fossil_fatal("cannot find current working directory; %s", + strerror(errno)); + } + } +#endif +} + +/* +** Return true if zPath is an absolute pathname. Return false +** if it is relative. +*/ +int file_is_absolute_path(const char *zPath){ + if( fossil_isdirsep(zPath[0]) +#if defined(_WIN32) || defined(__CYGWIN__) + || (fossil_isalpha(zPath[0]) && zPath[1]==':' + && (fossil_isdirsep(zPath[2]) || zPath[2]=='\0')) +#endif + ){ + return 1; + }else{ + return 0; + } +} /* ** Compute a canonical pathname for a file or directory. ** Make the name absolute if it is relative. ** Remove redundant / characters ** Remove all /./ path elements. ** Convert /A/../ to just / +** If the slash parameter is non-zero, the trailing slash, if any, +** is retained. */ -void file_canonical_name(const char *zOrigName, Blob *pOut){ - if( zOrigName[0]=='/' -#ifdef __MINGW32__ - || zOrigName[0]=='\\' - || (strlen(zOrigName)>3 && zOrigName[1]==':' - && (zOrigName[2]=='\\' || zOrigName[2]=='/')) -#endif - ){ - blob_set(pOut, zOrigName); - blob_materialize(pOut); +void file_canonical_name(const char *zOrigName, Blob *pOut, int slash){ + blob_zero(pOut); + if( file_is_absolute_path(zOrigName) ){ + blob_appendf(pOut, "%/", zOrigName); }else{ char zPwd[2000]; - if( getcwd(zPwd, sizeof(zPwd)-20)==0 ){ - fprintf(stderr, "pwd too big: max %d\n", (int)sizeof(zPwd)-20); - exit(1); - } - blob_zero(pOut); - blob_appendf(pOut, "%//%/", zPwd, zOrigName); - } - blob_resize(pOut, file_simplify_name(blob_buffer(pOut), blob_size(pOut))); + file_getcwd(zPwd, sizeof(zPwd)-strlen(zOrigName)); + if( zPwd[0]=='/' && strlen(zPwd)==1 ){ + /* when on '/', don't add an extra '/' */ + if( zOrigName[0]=='.' && strlen(zOrigName)==1 ){ + /* '.' when on '/' mean '/' */ + blob_appendf(pOut, "%/", zPwd); + }else{ + blob_appendf(pOut, "%/%/", zPwd, zOrigName); + } + }else{ + blob_appendf(pOut, "%//%/", zPwd, zOrigName); + } + } +#if defined(_WIN32) || defined(__CYGWIN__) + { + char *zOut; + /* + ** On Windows/cygwin, normalize the drive letter to upper case. + */ + zOut = blob_str(pOut); + if( fossil_islower(zOut[0]) && zOut[1]==':' && zOut[2]=='/' ){ + zOut[0] = fossil_toupper(zOut[0]); + } + } +#endif + blob_resize(pOut, file_simplify_name(blob_buffer(pOut), + blob_size(pOut), slash)); } /* ** COMMAND: test-canonical-name +** Usage: %fossil test-canonical-name FILENAME... ** ** Test the operation of the canonical name generator. +** Also test Fossil's ability to measure attributes of a file. 
*/ void cmd_test_canonical_name(void){ int i; Blob x; + int slashFlag = find_option("slash",0,0)!=0; blob_zero(&x); for(i=2; i [%s]\n", zName, blob_buffer(&x)); blob_reset(&x); + sqlite3_snprintf(sizeof(zBuf), zBuf, "%lld", file_wd_size(zName)); + fossil_print(" file_size = %s\n", zBuf); + sqlite3_snprintf(sizeof(zBuf), zBuf, "%lld", file_wd_mtime(zName)); + fossil_print(" file_mtime = %s\n", zBuf); + fossil_print(" file_isfile = %d\n", file_wd_isfile(zName)); + fossil_print(" file_isfile_or_link = %d\n",file_wd_isfile_or_link(zName)); + fossil_print(" file_islink = %d\n", file_wd_islink(zName)); + fossil_print(" file_isexe = %d\n", file_wd_isexe(zName)); + fossil_print(" file_isdir = %d\n", file_wd_isdir(zName)); } } /* ** Return TRUE if the given filename is canonical. @@ -310,12 +934,12 @@ ** contain no "/./" or "/../" terms. */ int file_is_canonical(const char *z){ int i; if( z[0]!='/' -#ifdef __MINGW32__ - && (z[0]==0 || z[1]!=':' || z[2]!='/') +#if defined(_WIN32) || defined(__CYGWIN__) + && (!fossil_isupper(z[0]) || z[1]!=':' || z[2]!='/') #endif ) return 0; for(i=0; z[i]; i++){ if( z[i]=='\\' ) return 0; @@ -326,41 +950,65 @@ } } } return 1; } + +/* +** Return a pointer to the first character in a pathname past the +** drive letter. This routine is a no-op on unix. +*/ +char *file_without_drive_letter(char *zIn){ +#ifdef _WIN32 + if( fossil_isalpha(zIn[0]) && zIn[1]==':' ) zIn += 2; +#endif + return zIn; +} /* ** Compute a pathname for a file or directory that is relative -** to the current directory. +** to the current directory. If the slash parameter is non-zero, +** the trailing slash, if any, is retained. */ -void file_relative_name(const char *zOrigName, Blob *pOut){ +void file_relative_name(const char *zOrigName, Blob *pOut, int slash){ char *zPath; blob_set(pOut, zOrigName); - blob_resize(pOut, file_simplify_name(blob_buffer(pOut), blob_size(pOut))); - zPath = blob_buffer(pOut); + blob_resize(pOut, file_simplify_name(blob_buffer(pOut), + blob_size(pOut), slash)); + zPath = file_without_drive_letter(blob_buffer(pOut)); if( zPath[0]=='/' ){ int i, j; Blob tmp; - char zPwd[2000]; - if( getcwd(zPwd, sizeof(zPwd)-20)==0 ){ - fprintf(stderr, "pwd too big: max %d\n", (int)sizeof(zPwd)-20); - exit(1); - } - for(i=1; zPath[i] && zPwd[i]==zPath[i]; i++){} + char *zPwd; + char zBuf[2000]; + zPwd = zBuf; + file_getcwd(zBuf, sizeof(zBuf)-20); + zPwd = file_without_drive_letter(zBuf); + i = 1; +#if defined(_WIN32) || defined(__CYGWIN__) + while( zPath[i] && fossil_tolower(zPwd[i])==fossil_tolower(zPath[i]) ) i++; +#else + while( zPath[i] && zPwd[i]==zPath[i] ) i++; +#endif if( zPath[i]==0 ){ - blob_reset(pOut); + memcpy(&tmp, pOut, sizeof(tmp)); if( zPwd[i]==0 ){ - blob_append(pOut, ".", 1); + blob_set(pOut, "."); }else{ - blob_append(pOut, "..", 2); + blob_set(pOut, ".."); for(j=i+1; zPwd[j]; j++){ - if( zPwd[j]=='/' ) { + if( zPwd[j]=='/' ){ blob_append(pOut, "/..", 3); } } + while( i>0 && (zPwd[i]!='/')) --i; + blob_append(pOut, zPath+i, j-i); + } + if( slash && i>0 && zPath[strlen(zPath)-1]=='/'){ + blob_append(pOut, "/", 1); } + blob_reset(&tmp); return; } if( zPwd[i]==0 && zPath[i]=='/' ){ memcpy(&tmp, pOut, sizeof(tmp)); blob_set(pOut, "./"); @@ -367,13 +1015,18 @@ blob_append(pOut, &zPath[i+1], -1); blob_reset(&tmp); return; } while( zPath[i-1]!='/' ){ i--; } - blob_set(&tmp, "../"); + if( zPwd[0]=='/' && strlen(zPwd)==1 ){ + /* If on '/', don't go to higher level */ + blob_zero(&tmp); + }else{ + blob_set(&tmp, "../"); + } for(j=i; zPwd[j]; j++){ - if( zPwd[j]=='/' ) { + if( 
zPwd[j]=='/' ){ blob_append(&tmp, "../", 3); } } blob_append(&tmp, &zPath[i], -1); blob_reset(pOut); @@ -387,56 +1040,135 @@ ** Test the operation of the relative name generator. */ void cmd_test_relative_name(void){ int i; Blob x; + int slashFlag = find_option("slash",0,0)!=0; blob_zero(&x); for(i=2; i0 && zLocalRoot[nLocalRoot-1]=='/' ); + file_canonical_name(zOrigName, &full, 0); + nFull = blob_size(&full); + zFull = blob_buffer(&full); + if( filenames_are_case_sensitive() ){ + xCmp = fossil_strncmp; + }else{ + xCmp = fossil_strnicmp; + } + + /* Special case. zOrigName refers to g.zLocalRoot directory. */ + if( (nFull==nLocalRoot-1 && xCmp(zLocalRoot, zFull, nFull)==0) + || (nFull==1 && zFull[0]=='/' && nLocalRoot==1 && zLocalRoot[0]=='/') ){ + if( absolute ){ + blob_append(pOut, zLocalRoot, nLocalRoot); + }else{ + blob_append(pOut, ".", 1); + } + blob_reset(&localRoot); + blob_reset(&full); + return 1; + } + + if( nFull<=nLocalRoot || xCmp(zLocalRoot, zFull, nLocalRoot) ){ + blob_reset(&localRoot); blob_reset(&full); if( errFatal ){ fossil_fatal("file outside of checkout tree: %s", zOrigName); } return 0; } - blob_zero(pOut); - blob_append(pOut, blob_buffer(&full)+n, blob_size(&full)-n); + if( absolute ){ + if( !file_is_absolute_path(zOrigName) ){ + blob_append(pOut, zLocalRoot, nLocalRoot); + } + blob_append(pOut, zOrigName, -1); + blob_resize(pOut, file_simplify_name(blob_buffer(pOut), + blob_size(pOut), 0)); + }else{ + blob_append(pOut, &zFull[nLocalRoot], nFull-nLocalRoot); + } + blob_reset(&localRoot); + blob_reset(&full); return 1; } /* ** COMMAND: test-tree-name ** ** Test the operation of the tree name generator. +** +** Options: +** --absolute Return an absolute path instead of a relative one. +** --case-sensitive B Enable or disable case-sensitive filenames. B is +** a boolean: "yes", "no", "true", "false", etc. */ void cmd_test_tree_name(void){ int i; Blob x; + int absoluteFlag = find_option("absolute",0,0)!=0; + db_find_and_open_repository(0,0); blob_zero(&x); for(i=2; i= (size_t)nBuf ){ fossil_fatal("insufficient space for temporary filename"); } do{ + if( cnt++>20 ) fossil_panic("cannot generate a temporary filename"); sqlite3_snprintf(nBuf-17, zBuf, "%s/", zDir); j = (int)strlen(zBuf); sqlite3_randomness(15, &zBuf[j]); for(i=0; i<15; i++, j++){ zBuf[j] = (char)zChars[ ((unsigned char)zBuf[j])%(sizeof(zChars)-1) ]; } zBuf[j] = 0; - }while( access(zBuf,0)==0 ); + }while( file_size(zBuf)>=0 ); + +#if defined(_WIN32) + fossil_path_free((char *)azDirs[0]); + fossil_path_free((char *)azDirs[1]); + fossil_path_free((char *)azDirs[2]); +#endif +} + + +/* +** Return true if a file named zName exists and has identical content +** to the blob pContent. If zName does not exist or if the content is +** different in any way, then return false. +*/ +int file_is_the_same(Blob *pContent, const char *zName){ + i64 iSize; + int rc; + Blob onDisk; + + iSize = file_wd_size(zName); + if( iSize<0 ) return 0; + if( iSize!=blob_size(pContent) ) return 0; + if( file_wd_islink(zName) ){ + blob_read_link(&onDisk, zName); + }else{ + blob_read_from_file(&onDisk, zName); + } + rc = blob_compare(&onDisk, pContent); + blob_reset(&onDisk); + return rc==0; +} + +/* +** Return the value of an environment variable as UTF8. +** Use fossil_path_free() to release resources. 
+*/ +char *fossil_getenv(const char *zName){ +#ifdef _WIN32 + wchar_t *uName = fossil_utf8_to_unicode(zName); + void *zValue = _wgetenv(uName); + fossil_unicode_free(uName); +#else + char *zValue = getenv(zName); +#endif + if( zValue ) zValue = fossil_path_to_utf8(zValue); + return zValue; +} + +/* +** Sets the value of an environment variable as UTF8. +*/ +int fossil_setenv(const char *zName, const char *zValue){ + int rc; + char *zString = mprintf("%s=%s", zName, zValue); +#ifdef _WIN32 + wchar_t *uString = fossil_utf8_to_unicode(zString); + rc = _wputenv(uString); + fossil_unicode_free(uString); + fossil_free(zString); +#else + rc = putenv(zString); + /* NOTE: Cannot free the string on POSIX. */ + /* fossil_free(zString); */ +#endif + return rc; +} + +/* +** Like fopen() but always takes a UTF8 argument. +*/ +FILE *fossil_fopen(const char *zName, const char *zMode){ +#ifdef _WIN32 + wchar_t *uMode = fossil_utf8_to_unicode(zMode); + wchar_t *uName = fossil_utf8_to_path(zName, 0); + FILE *f = _wfopen(uName, uMode); + fossil_path_free(uName); + fossil_unicode_free(uMode); +#else + FILE *f = fopen(zName, zMode); +#endif + return f; } Index: src/finfo.c ================================================================== --- src/finfo.c +++ src/finfo.c @@ -20,127 +20,418 @@ #include "config.h" #include "finfo.h" /* ** COMMAND: finfo -** -** Usage: %fossil finfo FILENAME +** +** Usage: %fossil finfo ?OPTIONS? FILENAME +** +** Print the complete change history for a single file going backwards +** in time. The default mode is -l. +** +** For the -l|--log mode: If "-b|--brief" is specified one line per revision +** is printed, otherwise the full comment is printed. The "-n|--limit N" +** and "--offset P" options limits the output to the first N changes +** after skipping P changes. +** +** In the -s mode prints the status as . This is +** a quick status and does not check for up-to-date-ness of the file. +** +** In the -p mode, there's an optional flag "-r|--revision REVISION". +** The specified version (or the latest checked out version) is printed +** to stdout. The -p mode is another form of the "cat" command. ** -** Print the change history for a single file. +** Options: +** -b|--brief display a brief (one line / revision) summary +** --case-sensitive B Enable or disable case-sensitive filenames. B is a +** boolean: "yes", "no", "true", "false", etc. +** -l|--log select log mode (the default) +** -n|--limit N Display the first N changes (default unlimited). +** N<=0 means no limit. +** --offset P skip P changes +** -p|--print select print mode +** -r|--revision R print the given revision (or ckout, if none is given) +** to stdout (only in print mode) +** -s|--status select status mode (print a status indicator for FILE) +** -W|--width Width of lines (default is to auto-detect). Must be +** >22 or 0 (= no limit, resulting in a single line per +** entry). ** -** The "--limit N" and "--offset P" options limits the output to the first -** N changes after skipping P changes. +** See also: artifact, cat, descendants, info, leaves */ void finfo_cmd(void){ - Stmt q; - int vid; - Blob dest; - const char *zFilename; - const char *zLimit; - const char *zOffset; - int iLimit, iOffset; - - db_must_be_within_tree(); - vid = db_lget_int("checkout", 0); - if( vid==0 ){ - fossil_panic("no checkout to finfo files in"); - } - zLimit = find_option("limit",0,1); - iLimit = zLimit ? atoi(zLimit) : -1; - zOffset = find_option("offset",0,1); - iOffset = zOffset ? 
atoi(zOffset) : 0; - if (g.argc<3) { - usage("FILENAME"); - } - file_tree_name(g.argv[2], &dest, 1); - zFilename = blob_str(&dest); - db_prepare(&q, - "SELECT " - " (SELECT uuid FROM blob WHERE rid=mlink.fid)," /* New file */ - " (SELECT uuid FROM blob WHERE rid=mlink.mid)," /* The check-in */ - " date(event.mtime,'localtime')," - " coalesce(event.ecomment, event.comment)," - " coalesce(event.euser, event.user)" - " FROM mlink, event" - " WHERE mlink.fnid=(SELECT fnid FROM filename WHERE name=%Q)" - " AND event.objid=mlink.mid" - " ORDER BY event.mtime DESC LIMIT %d OFFSET %d /*sort*/", - zFilename, iLimit, iOffset - ); - - printf("History of %s\n", zFilename); - while( db_step(&q)==SQLITE_ROW ){ - const char *zFileUuid = db_column_text(&q, 0); - const char *zCiUuid = db_column_text(&q, 1); - const char *zDate = db_column_text(&q, 2); - const char *zCom = db_column_text(&q, 3); - const char *zUser = db_column_text(&q, 4); - char *zOut; - printf("%s ", zDate); - if( zFileUuid==0 ){ - zOut = sqlite3_mprintf("[%.10s] DELETED %s (user: %s)", - zCiUuid, zCom, zUser); - }else{ - zOut = sqlite3_mprintf("[%.10s] %s (user: %s, artifact: [%.10s])", - zCiUuid, zCom, zUser, zFileUuid); - } - comment_print(zOut, 11, 79); - sqlite3_free(zOut); - } - db_finalize(&q); - blob_reset(&dest); -} - + db_must_be_within_tree(); + if( find_option("status","s",0) ){ + Stmt q; + Blob line; + Blob fname; + int vid; + + /* We should be done with options.. */ + verify_all_options(); + + if( g.argc!=3 ) usage("-s|--status FILENAME"); + vid = db_lget_int("checkout", 0); + if( vid==0 ){ + fossil_fatal("no checkout to finfo files in"); + } + vfile_check_signature(vid, CKSIG_ENOTFILE); + file_tree_name(g.argv[2], &fname, 0, 1); + db_prepare(&q, + "SELECT pathname, deleted, rid, chnged, coalesce(origname!=pathname,0)" + " FROM vfile WHERE vfile.pathname=%B %s", + &fname, filename_collation()); + blob_zero(&line); + if( db_step(&q)==SQLITE_ROW ) { + Blob uuid; + int isDeleted = db_column_int(&q, 1); + int isNew = db_column_int(&q,2) == 0; + int chnged = db_column_int(&q,3); + int renamed = db_column_int(&q,4); + + blob_zero(&uuid); + db_blob(&uuid, + "SELECT uuid FROM blob, mlink, vfile WHERE " + "blob.rid = mlink.mid AND mlink.fid = vfile.rid AND " + "vfile.pathname=%B %s", + &fname, filename_collation() + ); + if( isNew ){ + blob_appendf(&line, "new"); + }else if( isDeleted ){ + blob_appendf(&line, "deleted"); + }else if( renamed ){ + blob_appendf(&line, "renamed"); + }else if( chnged ){ + blob_appendf(&line, "edited"); + }else{ + blob_appendf(&line, "unchanged"); + } + blob_appendf(&line, " "); + blob_appendf(&line, " %10.10s", blob_str(&uuid)); + blob_reset(&uuid); + }else{ + blob_appendf(&line, "unknown 0000000000"); + } + db_finalize(&q); + fossil_print("%s\n", blob_str(&line)); + blob_reset(&fname); + blob_reset(&line); + }else if( find_option("print","p",0) ){ + Blob record; + Blob fname; + const char *zRevision = find_option("revision", "r", 1); + + /* We should be done with options.. 
*/ + verify_all_options(); + + file_tree_name(g.argv[2], &fname, 0, 1); + if( zRevision ){ + historical_version_of_file(zRevision, blob_str(&fname), &record, 0,0,0,0); + }else{ + int rid = db_int(0, "SELECT rid FROM vfile WHERE pathname=%B %s", + &fname, filename_collation()); + if( rid==0 ){ + fossil_fatal("no history for file: %b", &fname); + } + content_get(rid, &record); + } + blob_write_to_file(&record, "-"); + blob_reset(&record); + blob_reset(&fname); + }else{ + Blob line; + Stmt q; + Blob fname; + int rid; + const char *zFilename; + const char *zLimit; + const char *zWidth; + const char *zOffset; + int iLimit, iOffset, iBrief, iWidth; + + if( find_option("log","l",0) ){ + /* this is the default, no-op */ + } + zLimit = find_option("limit","n",1); + zWidth = find_option("width","W",1); + iLimit = zLimit ? atoi(zLimit) : -1; + zOffset = find_option("offset",0,1); + iOffset = zOffset ? atoi(zOffset) : 0; + iBrief = (find_option("brief","b",0) == 0); + if( iLimit==0 ){ + iLimit = -1; + } + if( zWidth ){ + iWidth = atoi(zWidth); + if( (iWidth!=0) && (iWidth<=22) ){ + fossil_fatal("-W|--width value must be >22 or 0"); + } + }else{ + iWidth = -1; + } + + /* We should be done with options.. */ + verify_all_options(); + + if( g.argc!=3 ){ + usage("?-l|--log? ?-b|--brief? FILENAME"); + } + file_tree_name(g.argv[2], &fname, 0, 1); + rid = db_int(0, "SELECT rid FROM vfile WHERE pathname=%B %s", + &fname, filename_collation()); + if( rid==0 ){ + fossil_fatal("no history for file: %b", &fname); + } + zFilename = blob_str(&fname); + db_prepare(&q, + "SELECT DISTINCT b.uuid, ci.uuid, date(event.mtime,toLocal())," + " coalesce(event.ecomment, event.comment)," + " coalesce(event.euser, event.user)," + " (SELECT value FROM tagxref WHERE tagid=%d AND tagtype>0" + " AND tagxref.rid=mlink.mid)" /* Tags */ + " FROM mlink, blob b, event, blob ci, filename" + " WHERE filename.name=%Q %s" + " AND mlink.fnid=filename.fnid" + " AND b.rid=mlink.fid" + " AND event.objid=mlink.mid" + " AND event.objid=ci.rid" + " ORDER BY event.mtime DESC LIMIT %d OFFSET %d", + TAG_BRANCH, zFilename, filename_collation(), + iLimit, iOffset + ); + blob_zero(&line); + if( iBrief ){ + fossil_print("History of %s\n", blob_str(&fname)); + } + while( db_step(&q)==SQLITE_ROW ){ + const char *zFileUuid = db_column_text(&q, 0); + const char *zCiUuid = db_column_text(&q,1); + const char *zDate = db_column_text(&q, 2); + const char *zCom = db_column_text(&q, 3); + const char *zUser = db_column_text(&q, 4); + const char *zBr = db_column_text(&q, 5); + char *zOut; + if( zBr==0 ) zBr = "trunk"; + if( iBrief ){ + fossil_print("%s ", zDate); + zOut = mprintf( + "[%S] %s (user: %s, artifact: [%S], branch: %s)", + zCiUuid, zCom, zUser, zFileUuid, zBr); + comment_print(zOut, zCom, 11, iWidth, g.comFmtFlags); + fossil_free(zOut); + }else{ + blob_reset(&line); + blob_appendf(&line, "%S ", zCiUuid); + blob_appendf(&line, "%.10s ", zDate); + blob_appendf(&line, "%8.8s ", zUser); + blob_appendf(&line, "%8.8s ", zBr); + blob_appendf(&line,"%-39.39s", zCom ); + comment_print(blob_str(&line), zCom, 0, iWidth, g.comFmtFlags); + } + } + db_finalize(&q); + blob_reset(&fname); + } +} + +/* +** COMMAND: cat +** +** Usage: %fossil cat FILENAME ... ?OPTIONS? +** +** Print on standard output the content of one or more files as they exist +** in the repository. The version currently checked out is shown by default. +** Other versions may be specified using the -r option. 
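+**
+** Example (with a hypothetical file name):
+**
+**    fossil cat src/main.c -r trunk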
+** +** Options: +** -R|--repository FILE Extract artifacts from repository FILE +** -r VERSION The specific check-in containing the file +** +** See also: finfo +*/ +void cat_cmd(void){ + int i; + int rc; + Blob content, fname; + const char *zRev; + db_find_and_open_repository(0, 0); + zRev = find_option("r","r",1); + + /* We should be done with options.. */ + verify_all_options(); + + for(i=2; i0" - " AND tagxref.rid=mlink.mid)" /* Tags */ + " AND tagxref.rid=mlink.mid)," /* Branchname */ + " mlink.mid," /* check-in ID */ + " mlink.pfnid" /* Previous filename */ " FROM mlink, event" - " WHERE mlink.fnid=(SELECT fnid FROM filename WHERE name=%Q)" - " AND event.objid=mlink.mid" - " ORDER BY event.mtime DESC /*sort*/", - TAG_BRANCH, - zFilename + " WHERE mlink.fnid=%d" + " AND event.objid=mlink.mid", + TAG_BRANCH, fnid ); + if( (zA = P("a"))!=0 ){ + blob_append_sql(&sql, " AND event.mtime>=julianday('%q')", zA); + url_add_parameter(&url, "a", zA); + } + if( (zB = P("b"))!=0 ){ + blob_append_sql(&sql, " AND event.mtime<=julianday('%q')", zB); + url_add_parameter(&url, "b", zB); + } + if( baseCheckin ){ + blob_append_sql(&sql, + " AND mlink.mid IN (SELECT rid FROM ancestor)" + " GROUP BY mlink.fid" + ); + }else{ + /* We only want each version of a file to appear on the graph once, + ** at its earliest appearance. All the other times that it gets merged + ** into this or that branch can be ignored. An exception is for when + ** files are deleted (when they have mlink.fid==0). If the same file + ** is deleted in multiple places, we want to show each deletion, so + ** use a "fake fid" which is derived from the parent-fid for grouping. + ** The same fake-fid must be used on the graph. + */ + blob_append_sql(&sql, + " GROUP BY" + " CASE WHEN mlink.fid>0 THEN mlink.fid ELSE mlink.pid+1000000000 END" + ); + } + blob_append_sql(&sql, " ORDER BY event.mtime DESC /*sort*/"); + if( (n = atoi(PD("n","0")))>0 ){ + blob_append_sql(&sql, " LIMIT %d", n); + url_add_parameter(&url, "n", P("n")); + } + db_prepare(&q, "%s", blob_sql_text(&sql)); + if( P("showsql")!=0 ){ + @

    SQL: %h(blob_str(&sql))

    + } + blob_reset(&sql); blob_zero(&title); - blob_appendf(&title, "History of "); - hyperlinked_path(zFilename, &title); + if( baseCheckin ){ + char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", baseCheckin); + char *zLink = href("%R/info/%!S", zUuid); + if( n>0 ){ + blob_appendf(&title, "First %d ancestors of file ", n); + }else{ + blob_appendf(&title, "Ancestors of file "); + } + blob_appendf(&title,"%h", + zFilename, zFilename); + if( fShowId ) blob_appendf(&title, " (%d)", fnid); + blob_appendf(&title, " from check-in %z%S", zLink, zUuid); + if( fShowId ) blob_appendf(&title, " (%d)", baseCheckin); + fossil_free(zUuid); + }else{ + blob_appendf(&title, "History of files named "); + hyperlinked_path(zFilename, &title, 0, "tree", ""); + if( fShowId ) blob_appendf(&title, " (%d)", fnid); + } @

    %b(&title)

    blob_reset(&title); pGraph = graph_init(); - @
    - @ + @
    + if( baseCheckin ){ + db_prepare(&qparent, + "SELECT DISTINCT pid FROM mlink" + " WHERE fid=:fid AND mid=:mid AND pid>0 AND fnid=:fnid" + " AND pmid IN (SELECT rid FROM ancestor)" + " ORDER BY isaux /*sort*/" + ); + }else{ + db_prepare(&qparent, + "SELECT DISTINCT pid FROM mlink" + " WHERE fid=:fid AND mid=:mid AND pid>0 AND fnid=:fnid" + " ORDER BY isaux /*sort*/" + ); + } while( db_step(&q)==SQLITE_ROW ){ const char *zDate = db_column_text(&q, 0); const char *zCom = db_column_text(&q, 1); const char *zUser = db_column_text(&q, 2); int fpid = db_column_int(&q, 3); @@ -148,66 +439,299 @@ const char *zPUuid = db_column_text(&q, 5); const char *zUuid = db_column_text(&q, 6); const char *zCkin = db_column_text(&q,7); const char *zBgClr = db_column_text(&q, 8); const char *zBr = db_column_text(&q, 9); + int fmid = db_column_int(&q, 10); + int pfnid = db_column_int(&q, 11); int gidx; char zTime[10]; - char zShort[20]; - char zShortCkin[20]; + int nParent = 0; + int aParent[GR_MAX_RAIL]; + + db_bind_int(&qparent, ":fid", frid); + db_bind_int(&qparent, ":mid", fmid); + db_bind_int(&qparent, ":fnid", fnid); + while( db_step(&qparent)==SQLITE_ROW && nParent0 ? 1 : 0, &fpid, zBr, zBgClr); - if( memcmp(zDate, zPrevDate, 10) ){ - sprintf(zPrevDate, "%.10s", zDate); + if( uBg ){ + zBgClr = hash_color(zUser); + }else if( brBg || zBgClr==0 || zBgClr[0]==0 ){ + zBgClr = strcmp(zBr,"trunk")==0 ? "" : hash_color(zBr); + } + gidx = graph_add_row(pGraph, frid>0 ? frid : fpid+1000000000, + nParent, aParent, zBr, zBgClr, + zUuid, 0); + if( strncmp(zDate, zPrevDate, 10) ){ + sqlite3_snprintf(sizeof(zPrevDate), zPrevDate, "%.10s", zDate); @ + @
    %s(zPrevDate)
    + @ } memcpy(zTime, &zDate[11], 5); zTime[5] = 0; - @ - @ - if( zBgClr && zBgClr[0] ){ - @ + @ + if( zBgClr && zBgClr[0] ){ + @ + tag_private_status(frid); + @ } db_finalize(&q); + db_finalize(&qparent); if( pGraph ){ graph_finish(pGraph, 1); if( pGraph->nErr ){ graph_free(pGraph); pGraph = 0; }else{ - @ } } @
    - @
    %s(zPrevDate)
    - @
    - @ %s(zTime)
    - }else{ - @ - } - sqlite3_snprintf(sizeof(zShort), zShort, "%.10s", zUuid); - sqlite3_snprintf(sizeof(zShortCkin), zShortCkin, "%.10s", zCkin); - if( zUuid ){ - if( g.okHistory ){ - @ [%S(zUuid)] - }else{ - @ [%S(zUuid)] + @
    + @ %z(href("%R/timeline?c=%t",zDate))%s(zTime)
    + @
    + }else{ + @ + } + if( zUuid ){ + if( nParent==0 ){ + @ Added + }else if( pfnid ){ + char *zPrevName = db_text(0, "SELECT name FROM filename WHERE fnid=%d", + pfnid); + @ Renamed from + @ %z(href("%R/finfo?name=%t", zPrevName))%h(zPrevName) + } + @ %z(href("%R/artifact/%!S",zUuid))[%S(zUuid)] + if( fShowId ){ + @ (%d(frid)) } @ part of check-in }else{ - @ Deleted by check-in + char *zNewName; + zNewName = db_text(0, + "SELECT name FROM filename WHERE fnid = " + " (SELECT fnid FROM mlink" + " WHERE mid=%d" + " AND pfnid IN (SELECT fnid FROM filename WHERE name=%Q))", + fmid, zFilename); + if( zNewName ){ + @ Renamed to + @ %z(href("%R/finfo?name=%t",zNewName))%h(zNewName) by check-in + fossil_free(zNewName); + }else{ + @ Deleted by check-in + } + } + hyperlink_to_uuid(zCkin); + if( fShowId ){ + @ (%d(fmid)) } - hyperlink_to_uuid(zShortCkin); - @ %h(zCom) (user: + @ %W(zCom) (user: hyperlink_to_user(zUser, zDate, ""); - @ branch: %h(zBr)) - if( g.okHistory && zUuid ){ - if( fpid ){ - @ [diff] - } - @ + @ branch: %z(href("%R/timeline?t=%T&n=200",zBr))%h(zBr)) + if( g.perm.Hyperlink && zUuid ){ + const char *z = zFilename; + @ %z(href("%R/annotate?filename=%h&checkin=%s",z,zCkin)) @ [annotate] + @ %z(href("%R/blame?filename=%h&checkin=%s",z,zCkin)) + @ [blame] + @ %z(href("%R/timeline?n=200&uf=%!S",zUuid))[check-ins using] + if( fpid>0 ){ + @ %z(href("%R/fdiff?sbs=1&v1=%!S&v2=%!S",zPUuid,zUuid))[diff] + } + } + if( fDebug & FINFO_DEBUG_MLINK ){ + int ii; + char *zAncLink; + @
    fid=%d(frid) pid=%d(fpid) mid=%d(fmid) + if( nParent>0 ){ + @ parents=%d(aParent[0]) + for(ii=1; ii } - @
    + @
    - timeline_output_graph_javascript(pGraph); + timeline_output_graph_javascript(pGraph, 0, 1); + style_footer(); +} + +/* +** WEBPAGE: mlink +** URL: /mlink?name=FILENAME +** URL: /mlink?ci=NAME +** +** Show all MLINK table entries for a particular file, or for +** a particular check-in. This screen is intended for use by developers +** in debugging Fossil. +*/ +void mlink_page(void){ + const char *zFName = P("name"); + const char *zCI = P("ci"); + Stmt q; + + login_check_credentials(); + if( !g.perm.Admin ){ login_needed(g.anon.Admin); return; } + style_header("MLINK Table"); + if( zFName==0 && zCI==0 ){ + @ + @ Requires either a name= or ci= query parameter + @ + }else if( zFName ){ + int fnid = db_int(0,"SELECT fnid FROM filename WHERE name=%Q",zFName); + if( fnid<=0 ) fossil_fatal("no such file: \"%s\"", zFName); + db_prepare(&q, + "SELECT" + /* 0 */ " datetime(event.mtime,toLocal())," + /* 1 */ " (SELECT uuid FROM blob WHERE rid=mlink.mid)," + /* 2 */ " (SELECT uuid FROM blob WHERE rid=mlink.pmid)," + /* 3 */ " isaux," + /* 4 */ " (SELECT uuid FROM blob WHERE rid=mlink.fid)," + /* 5 */ " (SELECT uuid FROM blob WHERE rid=mlink.pid)," + /* 6 */ " mlink.pid," + /* 7 */ " mperm," + /* 8 */ " (SELECT name FROM filename WHERE fnid=mlink.pfnid)" + " FROM mlink, event" + " WHERE mlink.fnid=%d" + " AND event.objid=mlink.mid" + " ORDER BY 1 DESC", + fnid + ); + @

    MLINK table for file + @ %h(zFName)

    + @
    + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + while( db_step(&q)==SQLITE_ROW ){ + const char *zDate = db_column_text(&q,0); + const char *zCkin = db_column_text(&q,1); + const char *zParent = db_column_text(&q,2); + int isMerge = db_column_int(&q,3); + const char *zFid = db_column_text(&q,4); + const char *zPid = db_column_text(&q,5); + int isExe = db_column_int(&q,7); + const char *zPrior = db_column_text(&q,8); + @ + @ + @ + if( zParent ){ + @ + }else{ + @ + } + @ + if( zFid ){ + @ + }else{ + @ + } + if( zPid ){ + @ + }else{ + @ + } + @ + if( zPrior ){ + @ + }else{ + @ + } + @ + } + db_finalize(&q); + @ + @
    DateCheck-inParent Check-inMerge?NewOldExe Bit?Prior Name
    %s(zDate)%S(zCkin)%S(zParent)(New)%s(isMerge?"✓":"")%S(zFid)(Deleted)%S(zPid) + }else if( db_column_int(&q,6)<0 ){ + @ (Added by merge)(New)%s(isExe?"✓":"")%h(zPrior)
    + @
    + output_table_sorting_javascript("mlinktable","tttxtttt",1); + }else{ + int mid = name_to_rid_www("ci"); + db_prepare(&q, + "SELECT" + /* 0 */ " (SELECT name FROM filename WHERE fnid=mlink.fnid)," + /* 1 */ " (SELECT uuid FROM blob WHERE rid=mlink.fid)," + /* 2 */ " pid," + /* 3 */ " (SELECT uuid FROM blob WHERE rid=mlink.pid)," + /* 4 */ " (SELECT name FROM filename WHERE fnid=mlink.pfnid)," + /* 5 */ " (SELECT uuid FROM blob WHERE rid=mlink.pmid)," + /* 6 */ " mperm," + /* 7 */ " isaux" + " FROM mlink WHERE mid=%d ORDER BY 1", + mid + ); + @

    MLINK table for check-in %h(zCI)

    + render_checkin_context(mid, 1); + @
    + @
    + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + while( db_step(&q)==SQLITE_ROW ){ + const char *zName = db_column_text(&q,0); + const char *zFid = db_column_text(&q,1); + const char *zPid = db_column_text(&q,3); + const char *zPrior = db_column_text(&q,4); + const char *zParent = db_column_text(&q,5); + int isExec = db_column_int(&q,6); + int isAux = db_column_int(&q,7); + @ + @ + if( zParent ){ + @ + }else{ + @ + } + @ + if( zFid ){ + @ + }else{ + @ + } + if( zPid ){ + @ + }else{ + @ + } + @ + if( zPrior ){ + @ + }else{ + @ + } + @ + } + db_finalize(&q); + @ + @
    FileFromMerge?NewOldExe Bit?Prior Name
    %h(zName)%S(zParent)(New)%s(isAux?"✓":"")%S(zFid)(Deleted)%S(zPid) + }else if( db_column_int(&q,2)<0 ){ + @ (Added by merge)(New)%s(isExec?"✓":"")%h(zPrior)
    + @
    + output_table_sorting_javascript("mlinktable","ttxtttt",1); + } style_footer(); } ADDED src/foci.c Index: src/foci.c ================================================================== --- src/foci.c +++ src/foci.c @@ -0,0 +1,244 @@ +/* +** Copyright (c) 2014 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This routine implements an SQLite virtual table that gives all of the +** files associated with a single check-in. +** +** The filename "foci" is short for "Files of Check-in". +** +** Usage example: +** +** CREATE VIRTUAL TABLE temp.foci USING files_of_checkin; +** -- ^^^^--- important! +** SELECT * FROM foci WHERE checkinID=symbolic_name_to_rid('trunk'); +** +** The symbolic_name_to_rid('trunk') function finds the BLOB.RID value +** corresponding to the 'trunk' tag. Then the files_of_checkin virtual table +** decodes the manifest defined by that BLOB and returns all files described +** by that manifest. The "schema" for the temp.foci table is: +** +** CREATE TABLE files_of_checkin( +** checkinID INTEGER, -- RID for the check-in manifest +** filename TEXT, -- Name of a file +** uuid TEXT, -- SHA1 hash of the file +** previousName TEXT, -- Name of the file in previous check-in +** perm TEXT -- Permissions on the file +** ); +** +*/ +#include "config.h" +#include "foci.h" +#include + +/* +** The schema for the virtual table: +*/ +static const char zFociSchema[] = +@ CREATE TABLE files_of_checkin( +@ checkinID INTEGER, -- RID for the check-in manifest +@ filename TEXT, -- Name of a file +@ uuid TEXT, -- SHA1 hash of the file +@ previousName TEXT, -- Name of the file in previous check-in +@ perm TEXT -- Permissions on the file +@ ); +; + +#if INTERFACE +/* +** The subclasses of sqlite3_vtab and sqlite3_vtab_cursor tables +** that implement the files_of_checkin virtual table. +*/ +struct FociTable { + sqlite3_vtab base; /* Base class - must be first */ +}; +struct FociCursor { + sqlite3_vtab_cursor base; /* Base class - must be first */ + Manifest *pMan; /* Current manifest */ + ManifestFile *pFile; /* Current file */ + int iFile; /* File index */ +}; +#endif /* INTERFACE */ + + +/* +** Connect to or create a foci virtual table. +*/ +static int fociConnect( + sqlite3 *db, + void *pAux, + int argc, const char *const*argv, + sqlite3_vtab **ppVtab, + char **pzErr +){ + FociTable *pTab; + + pTab = (FociTable *)sqlite3_malloc(sizeof(FociTable)); + memset(pTab, 0, sizeof(FociTable)); + sqlite3_declare_vtab(db, zFociSchema); + *ppVtab = &pTab->base; + return SQLITE_OK; +} + +/* +** Disconnect from or destroy a focivfs virtual table. +*/ +static int fociDisconnect(sqlite3_vtab *pVtab){ + sqlite3_free(pVtab); + return SQLITE_OK; +} + +/* +** Available scan methods: +** +** (0) A full scan. Visit every manifest in the repo. (Slow) +** (1) checkinID=?. visit only the single manifest specifed. 
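+**
+** For example, a query of the form
+**
+**    SELECT filename, uuid FROM foci
+**     WHERE checkinID=symbolic_name_to_rid('trunk');
+**
+** should be planned using method (1) and decode only that one manifest.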
+*/ +static int fociBestIndex(sqlite3_vtab *tab, sqlite3_index_info *pIdxInfo){ + int i; + pIdxInfo->estimatedCost = 10000.0; + for(i=0; inConstraint; i++){ + if( pIdxInfo->aConstraint[i].iColumn==0 + && pIdxInfo->aConstraint[i].op==SQLITE_INDEX_CONSTRAINT_EQ ){ + pIdxInfo->idxNum = 1; + pIdxInfo->estimatedCost = 1.0; + pIdxInfo->aConstraintUsage[i].argvIndex = 1; + pIdxInfo->aConstraintUsage[i].omit = 1; + break; + } + } + return SQLITE_OK; +} + +/* +** Open a new focivfs cursor. +*/ +static int fociOpen(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCursor){ + FociCursor *pCsr; + pCsr = (FociCursor *)sqlite3_malloc(sizeof(FociCursor)); + memset(pCsr, 0, sizeof(FociCursor)); + pCsr->base.pVtab = pVTab; + *ppCursor = (sqlite3_vtab_cursor *)pCsr; + return SQLITE_OK; +} + +/* +** Close a focivfs cursor. +*/ +static int fociClose(sqlite3_vtab_cursor *pCursor){ + FociCursor *pCsr = (FociCursor *)pCursor; + manifest_destroy(pCsr->pMan); + sqlite3_free(pCsr); + return SQLITE_OK; +} + +/* +** Move a focivfs cursor to the next entry in the file. +*/ +static int fociNext(sqlite3_vtab_cursor *pCursor){ + FociCursor *pCsr = (FociCursor *)pCursor; + pCsr->pFile = manifest_file_next(pCsr->pMan, 0); + pCsr->iFile++; + return SQLITE_OK; +} + +static int fociEof(sqlite3_vtab_cursor *pCursor){ + FociCursor *pCsr = (FociCursor *)pCursor; + return pCsr->pFile==0; +} + +static int fociFilter( + sqlite3_vtab_cursor *pCursor, + int idxNum, const char *idxStr, + int argc, sqlite3_value **argv +){ + FociCursor *pCur = (FociCursor *)pCursor; + manifest_destroy(pCur->pMan); + if( idxNum ){ + pCur->pMan = manifest_get(sqlite3_value_int(argv[0]), CFTYPE_MANIFEST, 0); + if( pCur->pMan ){ + manifest_file_rewind(pCur->pMan); + pCur->pFile = manifest_file_next(pCur->pMan, 0); + } + }else{ + pCur->pMan = 0; + } + pCur->iFile = 0; + return SQLITE_OK; +} + +static int fociColumn( + sqlite3_vtab_cursor *pCursor, + sqlite3_context *ctx, + int i +){ + FociCursor *pCsr = (FociCursor *)pCursor; + switch( i ){ + case 0: /* checkinID */ + sqlite3_result_int(ctx, pCsr->pMan->rid); + break; + case 1: /* filename */ + sqlite3_result_text(ctx, pCsr->pFile->zName, -1, + SQLITE_TRANSIENT); + break; + case 2: /* uuid */ + sqlite3_result_text(ctx, pCsr->pFile->zUuid, -1, + SQLITE_TRANSIENT); + break; + case 3: /* previousName */ + sqlite3_result_text(ctx, pCsr->pFile->zPrior, -1, + SQLITE_TRANSIENT); + break; + case 4: /* perm */ + sqlite3_result_text(ctx, pCsr->pFile->zPerm, -1, + SQLITE_TRANSIENT); + break; + } + return SQLITE_OK; +} + +static int fociRowid(sqlite3_vtab_cursor *pCursor, sqlite_int64 *pRowid){ + FociCursor *pCsr = (FociCursor *)pCursor; + *pRowid = pCsr->iFile; + return SQLITE_OK; +} + +int foci_register(sqlite3 *db){ + static sqlite3_module foci_module = { + 0, /* iVersion */ + fociConnect, /* xCreate */ + fociConnect, /* xConnect */ + fociBestIndex, /* xBestIndex */ + fociDisconnect, /* xDisconnect */ + fociDisconnect, /* xDestroy */ + fociOpen, /* xOpen - open a cursor */ + fociClose, /* xClose - close a cursor */ + fociFilter, /* xFilter - configure scan constraints */ + fociNext, /* xNext - advance a cursor */ + fociEof, /* xEof - check for end of scan */ + fociColumn, /* xColumn - read data */ + fociRowid, /* xRowid - read data */ + 0, /* xUpdate */ + 0, /* xBegin */ + 0, /* xSync */ + 0, /* xCommit */ + 0, /* xRollback */ + 0, /* xFindMethod */ + 0, /* xRename */ + }; + sqlite3_create_module(db, "files_of_checkin", &foci_module, 0); + return SQLITE_OK; +} ADDED src/fusefs.c Index: src/fusefs.c 
================================================================== --- src/fusefs.c +++ src/fusefs.c @@ -0,0 +1,344 @@ +/* +** Copyright (c) 2014 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@sqlite.org +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This module implements the userspace side of a Fuse Filesystem that +** contains all check-ins for a fossil repository. +** +** This module is a mostly a no-op unless compiled with -DFOSSIL_HAVE_FUSEFS. +** The FOSSIL_HAVE_FUSEFS should be omitted on systems that lack support for +** the Fuse Filesystem, of course. +*/ +#include "config.h" +#include +#include +#include +#include +#include +#include +#include +#include "fusefs.h" +#ifdef FOSSIL_HAVE_FUSEFS + +#define FUSE_USE_VERSION 26 +#include + +/* +** Global state information about the archive +*/ +static struct sGlobal { + /* A cache of a single check-in manifest */ + int rid; /* rid for the cached manifest */ + char *zSymName; /* Symbolic name corresponding to rid */ + Manifest *pMan; /* The cached manifest */ + /* A cache of a single file within a single check-in */ + int iFileRid; /* Check-in ID for the cached file */ + ManifestFile *pFile; /* Name of a cached file */ + Blob content; /* Content of the cached file */ + /* Parsed path */ + char *az[3]; /* 0=type, 1=id, 2=path */ +} fusefs; + +/* +** Clear the fusefs.sz[] array. +*/ +static void fusefs_clear_path(void){ + int i; + for(i=0; i0 && strcmp(zSymName, fusefs.zSymName)==0 ){ + return fusefs.rid; + }else{ + return symbolic_name_to_rid(zSymName, "ci"); + } +} + + +/* +** Implementation of stat() +*/ +static int fusefs_getattr(const char *zPath, struct stat *stbuf){ + int n, rid; + ManifestFile *pFile; + char *zDir; + stbuf->st_uid = getuid(); + stbuf->st_gid = getgid(); + n = fusefs_parse_path(zPath); + if( n==0 ){ + stbuf->st_mode = S_IFDIR | 0555; + stbuf->st_nlink = 2; + return 0; + } + if( strcmp(fusefs.az[0],"checkins")!=0 ) return -ENOENT; + if( n==1 ){ + stbuf->st_mode = S_IFDIR | 0111; + stbuf->st_nlink = 2; + return 0; + } + rid = fusefs_name_to_rid(fusefs.az[1]); + if( rid<=0 ) return -ENOENT; + if( n==2 ){ + stbuf->st_mode = S_IFDIR | 0555; + stbuf->st_nlink = 2; + return 0; + } + fusefs_load_rid(rid, fusefs.az[1]); + if( fusefs.pMan==0 ) return -ENOENT; + stbuf->st_mtime = (fusefs.pMan->rDate - 2440587.5)*86400.0; + pFile = manifest_file_seek(fusefs.pMan, fusefs.az[2], 0); + if( pFile ){ + static Stmt q; + stbuf->st_mode = S_IFREG | + (manifest_file_mperm(pFile)==PERM_EXE ? 
0555 : 0444); + stbuf->st_nlink = 1; + db_static_prepare(&q, "SELECT size FROM blob WHERE uuid=$uuid"); + db_bind_text(&q, "$uuid", pFile->zUuid); + if( db_step(&q)==SQLITE_ROW ){ + stbuf->st_size = db_column_int(&q, 0); + } + db_reset(&q); + return 0; + } + zDir = mprintf("%s/", fusefs.az[2]); + pFile = manifest_file_seek(fusefs.pMan, zDir, 1); + fossil_free(zDir); + if( pFile==0 ) return -ENOENT; + n = (int)strlen(fusefs.az[2]); + if( strncmp(fusefs.az[2], pFile->zName, n)!=0 ) return -ENOENT; + if( pFile->zName[n]!='/' ) return -ENOENT; + stbuf->st_mode = S_IFDIR | 0555; + stbuf->st_nlink = 2; + return 0; +} + +/* +** Implementation of readdir() +*/ +static int fusefs_readdir( + const char *zPath, + void *buf, + fuse_fill_dir_t filler, + off_t offset, + struct fuse_file_info *fi +){ + int n, rid; + ManifestFile *pFile; + const char *zPrev = ""; + int nPrev = 0; + char *z; + int cnt = 0; + n = fusefs_parse_path(zPath); + if( n==0 ){ + filler(buf, ".", NULL, 0); + filler(buf, "..", NULL, 0); + filler(buf, "checkins", NULL, 0); + return 0; + } + if( strcmp(fusefs.az[0],"checkins")!=0 ) return -ENOENT; + if( n==1 ) return -ENOENT; + rid = fusefs_name_to_rid(fusefs.az[1]); + if( rid<=0 ) return -ENOENT; + fusefs_load_rid(rid, fusefs.az[1]); + if( fusefs.pMan==0 ) return -ENOENT; + filler(buf, ".", NULL, 0); + filler(buf, "..", NULL, 0); + manifest_file_rewind(fusefs.pMan); + if( n==2 ){ + while( (pFile = manifest_file_next(fusefs.pMan, 0))!=0 ){ + if( nPrev>0 && strncmp(pFile->zName, zPrev, nPrev)==0 ) continue; + zPrev = pFile->zName; + for(nPrev=0; zPrev[nPrev] && zPrev[nPrev]!='/'; nPrev++){} + z = mprintf("%.*s", nPrev, zPrev); + filler(buf, z, NULL, 0); + fossil_free(z); + cnt++; + } + }else{ + char *zBase = mprintf("%s/", fusefs.az[2]); + int nBase = (int)strlen(zBase); + while( (pFile = manifest_file_next(fusefs.pMan, 0))!=0 ){ + if( strcmp(pFile->zName, zBase)>=0 ) break; + } + while( pFile && strncmp(zBase, pFile->zName, nBase)==0 ){ + if( nPrev==0 || strncmp(pFile->zName+nBase, zPrev, nPrev)!=0 ){ + zPrev = pFile->zName+nBase; + for(nPrev=0; zPrev[nPrev] && zPrev[nPrev]!='/'; nPrev++){} + if( zPrev[nPrev]=='/' ){ + z = mprintf("%.*s", nPrev, zPrev); + filler(buf, z, NULL, 0); + fossil_free(z); + }else{ + filler(buf, zPrev, NULL, 0); + nPrev = 0; + } + cnt++; + } + pFile = manifest_file_next(fusefs.pMan, 0); + } + fossil_free(zBase); + } + return cnt>0 ? 
0 : -ENOENT; +} + + +/* +** Implementation of read() +*/ +static int fusefs_read( + const char *zPath, + char *buf, + size_t size, + off_t offset, + struct fuse_file_info *fi +){ + int n, rid; + n = fusefs_parse_path(zPath); + if( n<3 ) return -ENOENT; + if( strcmp(fusefs.az[0], "checkins")!=0 ) return -ENOENT; + rid = fusefs_name_to_rid(fusefs.az[1]); + if( rid<=0 ) return -ENOENT; + fusefs_load_rid(rid, fusefs.az[1]); + if( fusefs.pFile!=0 && strcmp(fusefs.az[2], fusefs.pFile->zName)!=0 ){ + fusefs.pFile = 0; + blob_reset(&fusefs.content); + } + fusefs.pFile = manifest_file_seek(fusefs.pMan, fusefs.az[2], 0); + if( fusefs.pFile==0 ) return -ENOENT; + rid = uuid_to_rid(fusefs.pFile->zUuid, 0); + blob_reset(&fusefs.content); + content_get(rid, &fusefs.content); + if( offset>blob_size(&fusefs.content) ) return 0; + if( offset+size>blob_size(&fusefs.content) ){ + size = blob_size(&fusefs.content) - offset; + } + memcpy(buf, blob_buffer(&fusefs.content)+offset, size); + return size; +} + +static struct fuse_operations fusefs_methods = { + .getattr = fusefs_getattr, + .readdir = fusefs_readdir, + .read = fusefs_read, +}; +#endif /* FOSSIL_HAVE_FUSEFS */ + +/* +** COMMAND: fusefs +** +** Usage: %fossil fusefs [--debug] DIRECTORY +** +** This command uses the Fuse Filesystem to mount a directory at +** DIRECTORY that contains the content of all check-ins in the +** repository. The names of files are DIRECTORY/checkins/VERSION/PATH +** where DIRECTORY is the root of the mount, VERSION is any valid +** check-in name (examples: "trunk" or "tip" or a tag or any unique +** prefix of a SHA1 hash, etc) and PATH is the pathname of the file +** in the check-in. If DIRECTORY does not exist, then an attempt is +** made to create it. +** +** The DIRECTORY/checkins directory is not searchable so one cannot +** do "ls DIRECTORY/checkins" to get a listing of all possible check-in +** names. There are countless variations on check-in names and it is +** impractical to list them all. But all other directories are searchable +** and so the "ls" command will work everywhere else in the fusefs +** file hierarchy. +** +** The FuseFS typically only works on Linux, and then only on Linux +** systems that have the right kernel drivers and have install the +** appropriate support libraries. +** +** After stopping the "fossil fusefs" command, it might also be necessary +** to run "fusermount -u DIRECTORY" to reset the FuseFS before using it +** again. +*/ +void fusefs_cmd(void){ +#ifndef FOSSIL_HAVE_FUSEFS + fossil_fatal("this build of fossil does not support the fuse filesystem"); +#else + char *zMountPoint; + char *azNewArgv[5]; + int doDebug = find_option("debug","d",0)!=0; + + db_find_and_open_repository(0,0); + verify_all_options(); + blob_init(&fusefs.content, 0, 0); + if( g.argc!=3 ) usage("DIRECTORY"); + zMountPoint = g.argv[2]; + if( file_mkdir(zMountPoint, 0) ){ + fossil_fatal("cannot make directory [%s]", zMountPoint); + } + azNewArgv[0] = g.argv[0]; + azNewArgv[1] = doDebug ? "-d" : "-f"; + azNewArgv[2] = "-s"; + azNewArgv[3] = zMountPoint; + azNewArgv[4] = 0; + g.localOpen = 0; /* Prevent tags like "current" and "prev" */ + fuse_main(4, azNewArgv, &fusefs_methods, NULL); + fusefs_reset(); + fusefs_clear_path(); +#endif +} ADDED src/glob.c Index: src/glob.c ================================================================== --- src/glob.c +++ src/glob.c @@ -0,0 +1,203 @@ +/* +** Copyright (c) 2011 D. 
Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code used to pattern matching using "glob" syntax. +*/ +#include "config.h" +#include "glob.h" +#include + +/* +** Construct and return a string which is an SQL expression that will +** be TRUE if value zVal matches any of the GLOB expressions in the list +** zGlobList. For example: +** +** zVal: "x" +** zGlobList: "*.o,*.obj" +** +** Result: "(x GLOB '*.o' OR x GLOB '*.obj')" +** +** Commas and whitespace are considered to be element delimters. Each +** element of the GLOB list may optionally be enclosed in either '...' or +** "...". This allows commas and/or whitespace to be used in the elements +** themselves. +** +** This routine makes no effort to free the memory space it uses, which +** currently consists of a blob object and its contents. +*/ +char *glob_expr(const char *zVal, const char *zGlobList){ + Blob expr; + const char *zSep = "("; + int nTerm = 0; + int i; + int cTerm; + + if( zGlobList==0 || zGlobList[0]==0 ) return fossil_strdup("0"); + blob_zero(&expr); + while( zGlobList[0] ){ + while( fossil_isspace(zGlobList[0]) || zGlobList[0]==',' ){ + zGlobList++; /* Skip leading commas, spaces, and newlines */ + } + if( zGlobList[0]==0 ) break; + if( zGlobList[0]=='\'' || zGlobList[0]=='"' ){ + cTerm = zGlobList[0]; + zGlobList++; + }else{ + cTerm = ','; + } + /* Find the next delimter (or the end of the string). */ + for(i=0; zGlobList[i] && zGlobList[i]!=cTerm; i++){ + if( cTerm!=',' ) continue; /* If quoted, keep going. */ + if( fossil_isspace(zGlobList[i]) ) break; /* If space, stop. */ + } + blob_appendf(&expr, "%s%s GLOB '%#q'", zSep, zVal, i, zGlobList); + zSep = " OR "; + if( cTerm!=',' && zGlobList[i] ) i++; + zGlobList += i; + if( zGlobList[0] ) zGlobList++; + nTerm++; + } + if( nTerm ){ + blob_appendf(&expr, ")"); + return blob_str(&expr); + }else{ + return fossil_strdup("0"); + } +} + +#if INTERFACE +/* +** A Glob object holds a set of patterns read to be matched against +** a string. +*/ +struct Glob { + int nPattern; /* Number of patterns */ + char **azPattern; /* Array of pointers to patterns */ +}; +#endif /* INTERFACE */ + +/* +** zPatternList is a comma-separated list of glob patterns. Parse up +** that list and use it to create a new Glob object. +** +** Elements of the glob list may be optionally enclosed in single our +** double-quotes. This allows a comma to be part of a glob pattern. +** +** Leading and trailing spaces on unquoted glob patterns are ignored. +** +** An empty or null pattern list results in a null glob, which will +** match nothing. 
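+**
+** For example, the (hypothetical) pattern list
+**
+**     *.o, '*.obj' "win/*.exe"
+**
+** is parsed into the three patterns "*.o", "*.obj", and "win/*.exe".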
+*/ +Glob *glob_create(const char *zPatternList){ + int nList; /* Size of zPatternList in bytes */ + int i; /* Loop counters */ + Glob *p; /* The glob being created */ + char *z; /* Copy of the pattern list */ + char delimiter; /* '\'' or '\"' or 0 */ + + if( zPatternList==0 || zPatternList[0]==0 ) return 0; + nList = strlen(zPatternList); + p = fossil_malloc( sizeof(*p) + nList+1 ); + memset(p, 0, sizeof(*p)); + z = (char*)&p[1]; + memcpy(z, zPatternList, nList+1); + while( z[0] ){ + while( fossil_isspace(z[0]) || z[0]==',' ){ + z++; /* Skip leading commas, spaces, and newlines */ + } + if( z[0]==0 ) break; + if( z[0]=='\'' || z[0]=='"' ){ + delimiter = z[0]; + z++; + }else{ + delimiter = ','; + } + p->azPattern = fossil_realloc(p->azPattern, (p->nPattern+1)*sizeof(char*) ); + p->azPattern[p->nPattern++] = z; + /* Find the next delimter (or the end of the string). */ + for(i=0; z[i] && z[i]!=delimiter; i++){ + if( delimiter!=',' ) continue; /* If quoted, keep going. */ + if( fossil_isspace(z[i]) ) break; /* If space, stop. */ + } + if( z[i]==0 ) break; + z[i] = 0; + z += i+1; + } + return p; +} + +/* +** Return true (non-zero) if zString matches any of the patterns in +** the Glob. The value returned is actually a 1-based index of the pattern +** that matched. Return 0 if none of the patterns match zString. +** +** A NULL glob matches nothing. +*/ +int glob_match(Glob *pGlob, const char *zString){ + int i; + if( pGlob==0 ) return 0; + for(i=0; inPattern; i++){ + if( sqlite3_strglob(pGlob->azPattern[i], zString)==0 ) return i+1; + } + return 0; +} + +/* +** Free all memory associated with the given Glob object +*/ +void glob_free(Glob *pGlob){ + if( pGlob ){ + fossil_free(pGlob->azPattern); + fossil_free(pGlob); + } +} + +/* +** COMMAND: test-glob +** +** Usage: %fossil test-glob PATTERN STRING... +** +** PATTERN is a comma- and whitespace-separated list of optionally +** quoted glob patterns. Show which of the STRINGs that follow match +** the PATTERN. +** +** If PATTERN begins with "@" the rest of the pattern is understood +** to be a setting name (such as binary-glob, crln-glob, or encoding-glob) +** and the value of that setting is used as the actually glob pattern. +*/ +void glob_test_cmd(void){ + Glob *pGlob; + int i; + char *zPattern; + if( g.argc<4 ) usage("PATTERN STRING ..."); + zPattern = g.argv[2]; + if( zPattern[0]=='@' ){ + db_find_and_open_repository(OPEN_ANY_SCHEMA,0); + zPattern = db_get(zPattern+1, 0); + if( zPattern==0 ) fossil_fatal("no such setting: %s", g.argv[2]+1); + fossil_print("GLOB pattern: %s\n", zPattern); + } + fossil_print("SQL expression: %s\n", glob_expr("x", zPattern)); + pGlob = glob_create(zPattern); + for(i=0; inPattern; i++){ + fossil_print("pattern[%d] = [%s]\n", i, pGlob->azPattern[i]); + } + for(i=3; i #if INTERFACE -#define GR_MAX_PARENT 10 -#define GR_MAX_RAIL 32 +#define GR_MAX_RAIL 40 /* Max number of "rails" to display */ /* The graph appears vertically beside a timeline. Each row in the -** timeline corresponds to a row in the graph. +** timeline corresponds to a row in the graph. GraphRow.idx is 0 for +** the top-most row and increases moving down. Hence (in the absence of +** time skew) parents have a larger index than their children. */ struct GraphRow { int rid; /* The rid for the check-in */ - int nParent; /* Number of parents */ - int aParent[GR_MAX_PARENT]; /* Array of parents. 0 element is primary .*/ + i8 nParent; /* Number of parents */ + int *aParent; /* Array of parents. 
0 element is primary .*/ char *zBranch; /* Branch name */ char *zBgClr; /* Background Color */ + char zUuid[41]; /* Check-in for file ID */ GraphRow *pNext; /* Next row down in the list of all rows */ GraphRow *pPrev; /* Previous row */ - + int idx; /* Row index. First is 1. 0 used for "none" */ - u8 isLeaf; /* True if no direct child nodes */ + int idxTop; /* Direct descendent highest up on the graph */ + GraphRow *pChild; /* Child immediately above this node */ u8 isDup; /* True if this is duplicate of a prior entry */ - int iRail; /* Which rail this check-in appears on. 0-based.*/ - int aiRaiser[GR_MAX_RAIL]; /* Raisers from this node to a higher row. */ - int bDescender; /* Raiser from bottom of graph to here. */ - u32 mergeIn; /* Merge in from other rails */ - int mergeOut; /* Merge out to this rail */ - int mergeUpto; /* Draw the merge rail up to this level */ - - u32 railInUse; /* Mask of occupied rails */ + u8 isLeaf; /* True if this is a leaf node */ + u8 timeWarp; /* Child is earlier in time */ + u8 bDescender; /* True if riser from bottom of graph to here. */ + i8 iRail; /* Which rail this check-in appears on. 0-based.*/ + i8 mergeOut; /* Merge out to this rail. -1 if no merge-out */ + u8 mergeIn[GR_MAX_RAIL]; /* Merge in from non-zero rails */ + int aiRiser[GR_MAX_RAIL]; /* Risers from this node to a higher row. */ + int mergeUpto; /* Draw the mergeOut rail up to this level */ + u64 mergeDown; /* Draw merge lines up from bottom of graph */ + + u64 railInUse; /* Mask of occupied rails at this row */ }; /* Context while building a graph */ struct GraphContext { - int nErr; /* Number of errors encountered */ - int mxRail; /* Number of rails required to render the graph */ - GraphRow *pFirst; /* First row in the list */ - GraphRow *pLast; /* Last row in the list */ - int nBranch; /* Number of distinct branches */ - char **azBranch; /* Names of the branches */ + int nErr; /* Number of errors encountered */ + int mxRail; /* Number of rails required to render the graph */ + GraphRow *pFirst; /* First row in the list */ + GraphRow *pLast; /* Last row in the list */ + int nBranch; /* Number of distinct branches */ + char **azBranch; /* Names of the branches */ int nRow; /* Number of rows */ - int railMap[GR_MAX_RAIL]; /* Rail order mapping */ int nHash; /* Number of slots in apHash[] */ - GraphRow **apHash; /* Hash table of rows */ + GraphRow **apHash; /* Hash table of GraphRow objects. Key: rid */ }; #endif +/* The N-th bit */ +#define BIT(N) (((u64)1)<<(N)) + /* ** Malloc for zeroed space. Panic if unable to provide the ** requested space. */ void *safeMalloc(int nByte){ - void *p = malloc(nByte); - if( p==0 ) fossil_panic("out of memory"); + void *p = fossil_malloc(nByte); memset(p, 0, nByte); return p; } /* @@ -86,13 +93,13 @@ GraphContext *graph_init(void){ return (GraphContext*)safeMalloc( sizeof(GraphContext) ); } /* -** Destroy a GraphContext; +** Clear all content from a graph */ -void graph_free(GraphContext *p){ +static void graph_clear(GraphContext *p){ int i; GraphRow *pRow; while( p->pFirst ){ pRow = p->pFirst; p->pFirst = pRow->pNext; @@ -99,17 +106,26 @@ free(pRow); } for(i=0; inBranch; i++) free(p->azBranch[i]); free(p->azBranch); free(p->apHash); + memset(p, 0, sizeof(*p)); + p->nErr = 1; +} + +/* +** Destroy a GraphContext; +*/ +void graph_free(GraphContext *p){ + graph_clear(p); free(p); } /* -** Insert a row into the hash table. If there is already another -** row with the same rid, overwrite the prior entry if the overwrite -** flag is set. 
+** Insert a row into the hash table. pRow->rid is the key. Keys must +** be unique. If there is already another row with the same rid, +** overwrite the prior entry if and only if the overwrite flag is set. */ static void hashInsert(GraphContext *p, GraphRow *pRow, int overwrite){ int h; h = pRow->rid % p->nHash; while( p->apHash[h] && p->apHash[h]->rid!=pRow->rid ){ @@ -135,66 +151,78 @@ /* ** Return the canonical pointer for a given branch name. ** Multiple calls to this routine with equivalent strings ** will return the same pointer. +** +** The returned value is a pointer to a (readonly) string that +** has the useful property that strings can be checked for +** equality by comparing pointers. ** ** Note: also used for background color names. */ static char *persistBranchName(GraphContext *p, const char *zBranch){ int i; for(i=0; inBranch; i++){ - if( strcmp(zBranch, p->azBranch[i])==0 ) return p->azBranch[i]; + if( fossil_strcmp(zBranch, p->azBranch[i])==0 ) return p->azBranch[i]; } p->nBranch++; - p->azBranch = realloc(p->azBranch, sizeof(char*)*p->nBranch); - if( p->azBranch==0 ) fossil_panic("out of memory"); + p->azBranch = fossil_realloc(p->azBranch, sizeof(char*)*p->nBranch); p->azBranch[p->nBranch-1] = mprintf("%s", zBranch); return p->azBranch[p->nBranch-1]; } /* -** Add a new row t the graph context. Rows are added from top to bottom. +** Add a new row to the graph context. Rows are added from top to bottom. */ int graph_add_row( GraphContext *p, /* The context to which the row is added */ int rid, /* RID for the check-in */ int nParent, /* Number of parents */ int *aParent, /* Array of parents */ const char *zBranch, /* Branch for this check-in */ - const char *zBgClr /* Background color. NULL or "" for white. */ + const char *zBgClr, /* Background color. NULL or "" for white. */ + const char *zUuid, /* SHA1 hash of the object being graphed */ + int isLeaf /* True if this row is a leaf */ ){ GraphRow *pRow; + int nByte; if( p->nErr ) return 0; - if( nParent>GR_MAX_PARENT ){ p->nErr++; return 0; } - pRow = (GraphRow*)safeMalloc( sizeof(GraphRow) ); + nByte = sizeof(GraphRow); + nByte += sizeof(pRow->aParent[0])*nParent; + pRow = (GraphRow*)safeMalloc( nByte ); + pRow->aParent = (int*)&pRow[1]; pRow->rid = rid; pRow->nParent = nParent; pRow->zBranch = persistBranchName(p, zBranch); - if( zBgClr==0 || zBgClr[0]==0 ) zBgClr = "white"; + if( zUuid==0 ) zUuid = ""; + sqlite3_snprintf(sizeof(pRow->zUuid), pRow->zUuid, "%s", zUuid); + pRow->isLeaf = isLeaf; + memset(pRow->aiRiser, -1, sizeof(pRow->aiRiser)); + if( zBgClr==0 ) zBgClr = ""; pRow->zBgClr = persistBranchName(p, zBgClr); memcpy(pRow->aParent, aParent, sizeof(aParent[0])*nParent); if( p->pFirst==0 ){ p->pFirst = pRow; }else{ p->pLast->pNext = pRow; } p->pLast = pRow; p->nRow++; - pRow->idx = p->nRow; + pRow->idx = pRow->idxTop = p->nRow; return pRow->idx; } /* ** Return the index of a rail currently not in use for any row between -** top and bottom, inclusive. +** top and bottom, inclusive. 
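+** When more than one rail is free over that span, a rail close to
+** iNearto is preferred.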
*/ static int findFreeRail( GraphContext *p, /* The graph context */ int top, int btm, /* Span of rows for which the rail is needed */ - u32 inUseMask, /* Mask or rails already in use */ + u64 inUseMask, /* Mask or rails already in use */ int iNearto /* Find rail nearest to this rail */ ){ GraphRow *pRow; int i; int iBest = 0; @@ -203,11 +231,11 @@ while( pRow && pRow->idx<=btm ){ inUseMask |= pRow->railInUse; pRow = pRow->pNext; } for(i=0; i<32; i++){ - if( (inUseMask & (1<1000 ) p->nErr++; + if( iBest>p->mxRail ) p->mxRail = iBest; return iBest; } + +/* +** Assign all children of node pBottom to the same rail as pBottom. +*/ +static void assignChildrenToRail(GraphRow *pBottom){ + int iRail = pBottom->iRail; + GraphRow *pCurrent; + GraphRow *pPrior; + u64 mask = ((u64)1)<iRail = iRail; + pBottom->railInUse |= mask; + pPrior = pBottom; + for(pCurrent=pBottom->pChild; pCurrent; pCurrent=pCurrent->pChild){ + assert( pPrior->idx > pCurrent->idx ); + assert( pCurrent->iRail<0 ); + pCurrent->iRail = iRail; + pCurrent->railInUse |= mask; + pPrior->aiRiser[iRail] = pCurrent->idx; + while( pPrior->idx > pCurrent->idx ){ + pPrior->railInUse |= mask; + pPrior = pPrior->pPrev; + assert( pPrior!=0 ); + } + } +} + +/* +** Create a merge-arrow riser going from pParent up to pChild. +*/ +static void createMergeRiser( + GraphContext *p, + GraphRow *pParent, + GraphRow *pChild +){ + int u; + u64 mask; + GraphRow *pLoop; + + if( pParent->mergeOut<0 ){ + u = pParent->aiRiser[pParent->iRail]; + if( u>=0 && uidx ){ + /* The thick arrow up to the next primary child of pDesc goes + ** further up than the thin merge arrow riser, so draw them both + ** on the same rail. */ + pParent->mergeOut = pParent->iRail; + pParent->mergeUpto = pChild->idx; + }else{ + /* The thin merge arrow riser is taller than the thick primary + ** child riser, so use separate rails. */ + int iTarget = pParent->iRail; + pParent->mergeOut = findFreeRail(p, pChild->idx, pParent->idx-1, + 0, iTarget); + pParent->mergeUpto = pChild->idx; + mask = BIT(pParent->mergeOut); + for(pLoop=pChild->pNext; pLoop && pLoop->rid!=pParent->rid; + pLoop=pLoop->pNext){ + pLoop->railInUse |= mask; + } + } + } + pChild->mergeIn[pParent->mergeOut] = 1; +} + +/* +** Compute the maximum rail number. +*/ +static void find_max_rail(GraphContext *p){ + GraphRow *pRow; + p->mxRail = 0; + for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ + if( pRow->iRail>p->mxRail ) p->mxRail = pRow->iRail; + if( pRow->mergeOut>p->mxRail ) p->mxRail = pRow->mergeOut; + while( p->mxRailmergeDown>(BIT(p->mxRail+1)-1) ){ + p->mxRail++; + } + } +} + /* ** Compute the complete graph */ void graph_finish(GraphContext *p, int omitDescenders){ - GraphRow *pRow, *pDesc, *pDup, *pLoop; + GraphRow *pRow, *pDesc, *pDup, *pLoop, *pParent; int i; - u32 mask; - u32 inUse; - int hasDup = 0; /* True if one or more isDup entries */ + u64 mask; + u64 inUse; + int hasDup = 0; /* True if one or more isDup entries */ const char *zTrunk; if( p==0 || p->pFirst==0 || p->nErr ) return; + p->nErr = 1; /* Assume an error until proven otherwise */ /* Initialize all rows */ p->nHash = p->nRow*2 + 1; p->apHash = safeMalloc( sizeof(p->apHash[0])*p->nHash ); for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ @@ -248,159 +357,232 @@ } hashInsert(p, pRow, 1); } p->mxRail = -1; - /* Purge merge-parents that are out-of-graph + /* Purge merge-parents that are out-of-graph if descenders are not + ** drawn. + ** + ** Each node has one primary parent and zero or more "merge" parents. 
+ ** A merge parent is a prior check-in from which changes were merged into + ** the current check-in. If a merge parent is not in the visible section + ** of this graph, then no arrows will be drawn for it, so remove it from + ** the aParent[] array. + */ + if( omitDescenders ){ + for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ + for(i=1; inParent; i++){ + if( hashFind(p, pRow->aParent[i])==0 ){ + pRow->aParent[i] = pRow->aParent[--pRow->nParent]; + i--; + } + } + } + } + + /* If the primary parent is in a different branch, but there are + ** other parents in the same branch, reorder the parents to make + ** the parent from the same branch the primary parent. */ for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ + if( pRow->isDup ) continue; + if( pRow->nParent<2 ) continue; /* Not a fork */ + pParent = hashFind(p, pRow->aParent[0]); + if( pParent==0 ) continue; /* Parent off-screen */ + if( pParent->zBranch==pRow->zBranch ) continue; /* Same branch */ for(i=1; inParent; i++){ - if( hashFind(p, pRow->aParent[i])==0 ){ - pRow->aParent[i] = pRow->aParent[--pRow->nParent]; - i--; + pParent = hashFind(p, pRow->aParent[i]); + if( pParent && pParent->zBranch==pRow->zBranch ){ + int t = pRow->aParent[0]; + pRow->aParent[0] = pRow->aParent[i]; + pRow->aParent[i] = t; + break; } } } - /* Figure out which nodes have no direct children (children on - ** the same rail). Mark such nodes as isLeaf. + + /* Find the pChild pointer for each node. + ** + ** The pChild points to the node directly above on the same rail. + ** The pChild must be in the same branch. Leaf nodes have a NULL + ** pChild. + ** + ** In the case of a fork, choose the pChild that results in the + ** longest rail. */ - memset(p->apHash, 0, sizeof(p->apHash[0])*p->nHash); - for(pRow=p->pLast; pRow; pRow=pRow->pPrev) pRow->isLeaf = 1; - for(pRow=p->pLast; pRow; pRow=pRow->pPrev){ - GraphRow *pParent; - hashInsert(p, pRow, 0); - if( !pRow->isDup - && pRow->nParent>0 - && (pParent = hashFind(p, pRow->aParent[0]))!=0 - && pRow->zBranch==pParent->zBranch - ){ - pParent->isLeaf = 0; + for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ + if( pRow->isDup ) continue; + if( pRow->nParent==0 ) continue; /* Root node */ + pParent = hashFind(p, pRow->aParent[0]); + if( pParent==0 ) continue; /* Parent off-screen */ + if( pParent->zBranch!=pRow->zBranch ) continue; /* Different branch */ + if( pParent->idx <= pRow->idx ){ + pParent->timeWarp = 1; + continue; /* Time-warp */ + } + if( pRow->idxTop < pParent->idxTop ){ + pParent->pChild = pRow; + pParent->idxTop = pRow->idxTop; } } /* Identify rows where the primary parent is off screen. Assign ** each to a rail and draw descenders to the bottom of the screen. + ** + ** Strive to put the "trunk" branch on far left. 
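+ **
+ ** To that end the loop below makes two passes: the first pass assigns
+ ** rails only to rows on the "trunk" branch, and the second pass picks
+ ** up any remaining rows whose primary parent is off screen.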
*/ zTrunk = persistBranchName(p, "trunk"); for(i=0; i<2; i++){ - for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ + for(pRow=p->pLast; pRow; pRow=pRow->pPrev){ + if( pRow->isDup ) continue; if( i==0 ){ if( pRow->zBranch!=zTrunk ) continue; }else { if( pRow->iRail>=0 ) continue; } if( pRow->nParent==0 || hashFind(p,pRow->aParent[0])==0 ){ if( omitDescenders ){ - pRow->iRail = findFreeRail(p, pRow->idx, pRow->idx, 0, 0); + pRow->iRail = findFreeRail(p, pRow->idxTop, pRow->idx, 0, 0); }else{ pRow->iRail = ++p->mxRail; } - mask = 1<<(pRow->iRail); - if( omitDescenders ){ - pRow->railInUse |= mask; - if( pRow->pNext ) pRow->pNext->railInUse |= mask; - }else{ + if( p->mxRail>=GR_MAX_RAIL ) return; + mask = BIT(pRow->iRail); + if( !omitDescenders ){ pRow->bDescender = pRow->nParent>0; - for(pDesc=pRow; pDesc; pDesc=pDesc->pNext){ - pDesc->railInUse |= mask; + for(pLoop=pRow; pLoop; pLoop=pLoop->pNext){ + pLoop->railInUse |= mask; } } + assignChildrenToRail(pRow); } } } /* Assign rails to all rows that are still unassigned. - ** The first primary child of a row goes on the same rail as - ** that row. */ - inUse = (1<<(p->mxRail+1))-1; + inUse = BIT(p->mxRail+1) - 1; for(pRow=p->pLast; pRow; pRow=pRow->pPrev){ int parentRid; - if( pRow->iRail>=0 ) continue; + + if( pRow->iRail>=0 ){ + if( pRow->pChild==0 && !pRow->timeWarp ){ + if( omitDescenders || count_nonbranch_children(pRow->rid)==0 ){ + inUse &= ~BIT(pRow->iRail); + }else{ + pRow->aiRiser[pRow->iRail] = 0; + mask = BIT(pRow->iRail); + for(pLoop=pRow; pLoop; pLoop=pLoop->pPrev){ + pLoop->railInUse |= mask; + } + } + } + continue; + } if( pRow->isDup ){ - pRow->iRail = findFreeRail(p, pRow->idx, pRow->idx, inUse, 0); - pDesc = pRow; + continue; }else{ assert( pRow->nParent>0 ); parentRid = pRow->aParent[0]; - pDesc = hashFind(p, parentRid); - if( pDesc==0 ){ - /* Time skew */ + pParent = hashFind(p, parentRid); + if( pParent==0 ){ pRow->iRail = ++p->mxRail; - pRow->railInUse = 1<iRail; + if( p->mxRail>=GR_MAX_RAIL ) return; + pRow->railInUse = BIT(pRow->iRail); continue; } - if( pDesc->aiRaiser[pDesc->iRail]==0 && pDesc->zBranch==pRow->zBranch ){ - pRow->iRail = pDesc->iRail; + if( pParent->idx>pRow->idx ){ + /* Common case: Child occurs after parent and is above the + ** parent in the timeline */ + pRow->iRail = findFreeRail(p, 0, pParent->idx, inUse, pParent->iRail); + if( p->mxRail>=GR_MAX_RAIL ) return; + pParent->aiRiser[pRow->iRail] = pRow->idx; }else{ - pRow->iRail = findFreeRail(p, 0, pDesc->idx, inUse, pDesc->iRail); + /* Timewarp case: Child occurs earlier in time than parent and + ** appears below the parent in the timeline. 
*/ + int iDownRail = ++p->mxRail; + if( iDownRail<1 ) iDownRail = ++p->mxRail; + pRow->iRail = ++p->mxRail; + if( p->mxRail>=GR_MAX_RAIL ) return; + pRow->railInUse = BIT(pRow->iRail); + pParent->aiRiser[iDownRail] = pRow->idx; + mask = BIT(iDownRail); + inUse |= mask; + for(pLoop=p->pFirst; pLoop; pLoop=pLoop->pNext){ + pLoop->railInUse |= mask; + } } - pDesc->aiRaiser[pRow->iRail] = pRow->idx; } - mask = 1<iRail; - if( pRow->isLeaf ){ + mask = BIT(pRow->iRail); + pRow->railInUse |= mask; + if( pRow->pChild==0 ){ inUse &= ~mask; }else{ inUse |= mask; + assignChildrenToRail(pRow); } - for(pLoop=pRow; pLoop && pLoop!=pDesc; pLoop=pLoop->pNext){ - pLoop->railInUse |= mask; + if( pParent ){ + for(pLoop=pParent->pPrev; pLoop && pLoop!=pRow; pLoop=pLoop->pPrev){ + pLoop->railInUse |= mask; + } } - pDesc->railInUse |= mask; } /* ** Insert merge rails and merge arrows */ for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ for(i=1; inParent; i++){ int parentRid = pRow->aParent[i]; pDesc = hashFind(p, parentRid); - if( pDesc==0 ) continue; - if( pDesc->mergeOut<0 ){ - int iTarget = (pRow->iRail + pDesc->iRail)/2; - pDesc->mergeOut = findFreeRail(p, pRow->idx, pDesc->idx, 0, iTarget); - pDesc->mergeUpto = pRow->idx; - mask = 1<mergeOut; - pDesc->railInUse |= mask; - for(pDesc=pRow->pNext; pDesc && pDesc->rid!=parentRid; - pDesc=pDesc->pNext){ - pDesc->railInUse |= mask; - } - } - pRow->mergeIn |= 1<mergeOut; + if( pDesc==0 ){ + /* Merge from a node that is off-screen */ + int iMrail = findFreeRail(p, pRow->idx, p->nRow, 0, 0); + if( p->mxRail>=GR_MAX_RAIL ) return; + mask = BIT(iMrail); + pRow->mergeIn[iMrail] = 1; + pRow->mergeDown |= mask; + for(pLoop=pRow->pNext; pLoop; pLoop=pLoop->pNext){ + pLoop->railInUse |= mask; + } + }else{ + /* Merge from an on-screen node */ + createMergeRiser(p, pDesc, pRow); + if( p->mxRail>=GR_MAX_RAIL ) return; + } } } /* - ** Insert merge rails from primaries to duplicates. + ** Insert merge rails from primaries to duplicates. */ if( hasDup ){ + int dupRail; + int mxRail; + find_max_rail(p); + mxRail = p->mxRail; + dupRail = mxRail+1; + if( p->mxRail>=GR_MAX_RAIL ) return; for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ if( !pRow->isDup ) continue; + pRow->iRail = dupRail; pDesc = hashFind(p, pRow->rid); assert( pDesc!=0 && pDesc!=pRow ); - if( pDesc->mergeOut<0 ){ - int iTarget = (pRow->iRail + pDesc->iRail)/2; - pDesc->mergeOut = findFreeRail(p, pRow->idx, pDesc->idx, 0, iTarget); - pDesc->mergeUpto = pRow->idx; - mask = 1<mergeOut; - pDesc->railInUse |= mask; - for(pLoop=pRow->pNext; pLoop && pLoop!=pDesc; pLoop=pLoop->pNext){ - pLoop->railInUse |= mask; - } - } - pRow->mergeIn |= 1<mergeOut; - } + createMergeRiser(p, pDesc, pRow); + if( pDesc->mergeOut>mxRail ) mxRail = pDesc->mergeOut; + } + if( dupRail<=mxRail ){ + dupRail = mxRail+1; + for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ + if( pRow->isDup ) pRow->iRail = dupRail; + } + } + if( mxRail>=GR_MAX_RAIL ) return; } /* ** Find the maximum rail number. */ - for(i=0; irailMap[i] = i; - p->mxRail = 0; - for(pRow=p->pFirst; pRow; pRow=pRow->pNext){ - if( pRow->iRail>p->mxRail ) p->mxRail = pRow->iRail; - if( pRow->mergeOut>p->mxRail ) p->mxRail = pRow->mergeOut; - } + find_max_rail(p); + p->nErr = 0; } ADDED src/gzip.c Index: src/gzip.c ================================================================== --- src/gzip.c +++ src/gzip.c @@ -0,0 +1,145 @@ +/* +** Copyright (c) 2011 D. 
Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code used to incrementally generate a GZIP compressed +** file. The GZIP format is described in RFC-1952. +** +** State information is stored in static variables, so this implementation +** can only be building up a single GZIP file at a time. +*/ +#include "config.h" +#include +#if defined(FOSSIL_ENABLE_MINIZ) +# define MINIZ_HEADER_FILE_ONLY +# include "miniz.c" +#else +# include +#endif +#include "gzip.h" + +/* +** State information for the GZIP file under construction. +*/ +struct gzip_state { + int eState; /* 0: idle 1: header 2: compressing */ + int iCRC; /* The checksum */ + z_stream stream; /* The working compressor */ + Blob out; /* Results stored here */ +} gzip; + +/* +** Write a 32-bit integer as little-endian into the given buffer. +*/ +static void put32(char *z, int v){ + z[0] = v & 0xff; + z[1] = (v>>8) & 0xff; + z[2] = (v>>16) & 0xff; + z[3] = (v>>24) & 0xff; +} + +/* +** Begin constructing a gzip file. +*/ +void gzip_begin(sqlite3_int64 now){ + char aHdr[10]; + assert( gzip.eState==0 ); + blob_zero(&gzip.out); + aHdr[0] = 0x1f; + aHdr[1] = 0x8b; + aHdr[2] = 8; + aHdr[3] = 0; + if( now==-1 ){ + now = db_int64(0, "SELECT (julianday('now') - 2440587.5)*86400.0"); + } + put32(&aHdr[4], now&0xffffffff); + aHdr[8] = 2; + aHdr[9] = 255; + blob_append(&gzip.out, aHdr, 10); + gzip.iCRC = 0; + gzip.eState = 1; +} + +/* +** Add nIn bytes of content from pIn to the gzip file. +*/ +#define GZIP_BUFSZ 100000 +void gzip_step(const char *pIn, int nIn){ + char *zOutBuf; + int nOut; + + nOut = nIn + nIn/10 + 100; + if( nOut<100000 ) nOut = 100000; + zOutBuf = fossil_malloc(nOut); + gzip.stream.avail_in = nIn; + gzip.stream.next_in = (unsigned char*)pIn; + gzip.stream.avail_out = nOut; + gzip.stream.next_out = (unsigned char*)zOutBuf; + if( gzip.eState==1 ){ + gzip.stream.zalloc = (alloc_func)0; + gzip.stream.zfree = (free_func)0; + gzip.stream.opaque = 0; + deflateInit2(&gzip.stream, 9, Z_DEFLATED, -MAX_WBITS,8,Z_DEFAULT_STRATEGY); + gzip.eState = 2; + } + gzip.iCRC = crc32(gzip.iCRC, gzip.stream.next_in, gzip.stream.avail_in); + do{ + deflate(&gzip.stream, nIn==0 ? Z_FINISH : 0); + blob_append(&gzip.out, zOutBuf, nOut - gzip.stream.avail_out); + gzip.stream.avail_out = nOut; + gzip.stream.next_out = (unsigned char*)zOutBuf; + }while( gzip.stream.avail_in>0 ); + fossil_free(zOutBuf); +} + +/* +** Finish the gzip file and put the content in *pOut +*/ +void gzip_finish(Blob *pOut){ + char aTrailer[8]; + assert( gzip.eState>0 ); + gzip_step("", 0); + deflateEnd(&gzip.stream); + put32(aTrailer, gzip.iCRC); + put32(&aTrailer[4], gzip.stream.total_in); + blob_append(&gzip.out, aTrailer, 8); + *pOut = gzip.out; + blob_zero(&gzip.out); + gzip.eState = 0; +} + +/* +** COMMAND: test-gzip +** +** Usage: %fossil test-gzip FILENAME +** +** Compress a file using gzip. 
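+**
+** The output is written to FILENAME.gz alongside the original; for
+** example, "fossil test-gzip notes.txt" (an illustrative file name)
+** produces "notes.txt.gz".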
+*/ +void test_gzip_cmd(void){ + Blob b; + char *zOut; + if( g.argc!=3 ) usage("FILENAME"); + sqlite3_open(":memory:", &g.db); + gzip_begin(-1); + blob_read_from_file(&b, g.argv[2]); + zOut = mprintf("%s.gz", g.argv[2]); + gzip_step(blob_buffer(&b), blob_size(&b)); + blob_reset(&b); + gzip_finish(&b); + blob_write_to_file(&b, zOut); + blob_reset(&b); + fossil_free(zOut); +} Index: src/http.c ================================================================== --- src/http.c +++ src/http.c @@ -19,10 +19,26 @@ */ #include "config.h" #include "http.h" #include +#ifdef _WIN32 +#include +#ifndef isatty +#define isatty(d) _isatty(d) +#endif +#ifndef fileno +#define fileno(s) _fileno(s) +#endif +#endif + +/* Maximum number of HTTP Authorization attempts */ +#define MAX_HTTP_AUTH 2 + +/* Keep track of HTTP Basic Authorization failures */ +static int fSeenHttpAuth = 0; + /* ** Construct the "login" card with the client credentials. ** ** login LOGIN NONCE SIGNATURE ** @@ -39,46 +55,38 @@ const char *zPw; /* The user password */ Blob pw; /* The nonce with user password appended */ Blob sig; /* The signature field */ blob_zero(pLogin); - if( g.urlUser==0 || strcmp(g.urlUser, "anonymous")==0 ){ + if( g.url.user==0 || fossil_strcmp(g.url.user, "anonymous")==0 ){ return; /* If no login card for users "nobody" and "anonymous" */ } + if( g.url.isSsh ){ + return; /* If no login card for SSH: */ + } blob_zero(&nonce); blob_zero(&pw); sha1sum_blob(pPayload, &nonce); blob_copy(&pw, &nonce); - zLogin = g.urlUser; - if( g.urlPasswd ){ - zPw = g.urlPasswd; + zLogin = g.url.user; + if( g.url.passwd ){ + zPw = g.url.passwd; }else if( g.cgiOutput ){ /* Password failure while doing a sync from the web interface */ cgi_printf("*** incorrect or missing password for user %h\n", zLogin); zPw = 0; }else{ /* Password failure while doing a sync from the command-line interface */ url_prompt_for_password(); - zPw = g.urlPasswd; - if( !g.dontKeepUrl ) db_set("last-sync-pw", zPw, 0); + zPw = g.url.passwd; } /* The login card wants the SHA1 hash of the password, so convert the - ** password to its SHA1 hash it it isn't already a SHA1 hash. - ** - ** Except, if the password begins with "*" then use the characters - ** after the "*" as a cleartext password. Put an "*" at the beginning - ** of the password to trick a newer client to use the cleartext password - ** protocol required by legacy servers. + ** password to its SHA1 hash if it isn't already a SHA1 hash. 
*/ - if( zPw && zPw[0] ){ - if( zPw[0]=='*' ){ - zPw++; - }else{ - zPw = sha1_shared_secret(zPw, zLogin); - } - } + /* fossil_print("\nzPw=[%s]\n", zPw); // TESTING ONLY */ + if( zPw && zPw[0] ) zPw = sha1_shared_secret(zPw, zLogin, 0); blob_append(&pw, zPw, -1); sha1sum_blob(&pw, &sig); blob_appendf(pLogin, "login %F %b %b\n", zLogin, &nonce, &sig); blob_reset(&pw); @@ -94,29 +102,97 @@ static void http_build_header(Blob *pPayload, Blob *pHdr){ int i; const char *zSep; blob_zero(pHdr); - i = strlen(g.urlPath); - if( i>0 && g.urlPath[i-1]=='/' ){ + i = strlen(g.url.path); + if( i>0 && g.url.path[i-1]=='/' ){ zSep = ""; }else{ zSep = "/"; } - blob_appendf(pHdr, "POST %s%sxfer HTTP/1.0\r\n", g.urlPath, zSep); - if( g.urlProxyAuth ){ - blob_appendf(pHdr, "Proxy-Authorization: %s\n", g.urlProxyAuth); + blob_appendf(pHdr, "POST %s%sxfer/xfer HTTP/1.0\r\n", g.url.path, zSep); + if( g.url.proxyAuth ){ + blob_appendf(pHdr, "Proxy-Authorization: %s\r\n", g.url.proxyAuth); + } + if( g.zHttpAuth && g.zHttpAuth[0] ){ + const char *zCredentials = g.zHttpAuth; + char *zEncoded = encode64(zCredentials, -1); + blob_appendf(pHdr, "Authorization: Basic %s\r\n", zEncoded); + fossil_free(zEncoded); } - blob_appendf(pHdr, "Host: %s\r\n", g.urlHostname); - blob_appendf(pHdr, "User-Agent: Fossil/" MANIFEST_VERSION "\r\n"); + blob_appendf(pHdr, "Host: %s\r\n", g.url.hostname); + blob_appendf(pHdr, "User-Agent: %s\r\n", get_user_agent()); + if( g.url.isSsh ) blob_appendf(pHdr, "X-Fossil-Transport: SSH\r\n"); if( g.fHttpTrace ){ blob_appendf(pHdr, "Content-Type: application/x-fossil-debug\r\n"); }else{ blob_appendf(pHdr, "Content-Type: application/x-fossil\r\n"); } blob_appendf(pHdr, "Content-Length: %d\r\n\r\n", blob_size(pPayload)); } + +/* +** Use Fossil credentials for HTTP Basic Authorization prompt +*/ +static int use_fossil_creds_for_httpauth_prompt(void){ + Blob x; + char c; + prompt_user("Use Fossil username and password (y/N)? ", &x); + c = blob_str(&x)[0]; + blob_reset(&x); + return ( c=='y' || c=='Y' ); +} + +/* +** Prompt to save HTTP Basic Authorization information +*/ +static int save_httpauth_prompt(void){ + Blob x; + char c; + if( (g.url.flags & URL_REMEMBER)==0 ) return 0; + prompt_user("Remember Basic Authorization credentials (Y/n)? ", &x); + c = blob_str(&x)[0]; + blob_reset(&x); + return ( c!='n' && c!='N' ); +} + +/* +** Get the HTTP Basic Authorization credentials from the user +** when 401 is received. +*/ +char *prompt_for_httpauth_creds(void){ + Blob x; + char *zUser; + char *zPw; + char *zPrompt; + char *zHttpAuth = 0; + if( !isatty(fileno(stdin)) ) return 0; + zPrompt = mprintf("\n%s authorization required by\n%s\n", + g.url.isHttps==1 ? "Encrypted HTTPS" : "Unencrypted HTTP", g.url.canonical); + fossil_print("%s", zPrompt); + free(zPrompt); + if ( g.url.user && g.url.passwd && use_fossil_creds_for_httpauth_prompt() ){ + zHttpAuth = mprintf("%s:%s", g.url.user, g.url.passwd); + }else{ + prompt_user("Basic Authorization user: ", &x); + zUser = mprintf("%b", &x); + zPrompt = mprintf("HTTP password for %b: ", &x); + blob_reset(&x); + prompt_for_password(zPrompt, &x, 1); + zPw = mprintf("%b", &x); + zHttpAuth = mprintf("%s:%s", zUser, zPw); + free(zUser); + free(zPw); + free(zPrompt); + blob_reset(&x); + } + if( save_httpauth_prompt() ){ + set_httpauth(zHttpAuth); + } + return zHttpAuth; +} /* ** Sign the content in pSend, compress it, and send it to the server ** via HTTP or HTTPS. Get a reply, uncompress the reply, and store the reply ** in pRecv. 
pRecv is assumed to be uninitialized when @@ -124,23 +200,27 @@ ** ** The server address is contain in the "g" global structure. The ** url_parse() routine should have been called prior to this routine ** in order to fill this structure appropriately. */ -void http_exchange(Blob *pSend, Blob *pReply, int useLogin){ +int http_exchange(Blob *pSend, Blob *pReply, int useLogin, int maxRedirect){ Blob login; /* The login card */ Blob payload; /* The complete payload including login card */ Blob hdr; /* The HTTP request header */ int closeConnection; /* True to close the connection when done */ - int iLength; /* Length of the reply payload */ - int rc; /* Result code */ + int iLength; /* Expected length of the reply payload */ + int iRecvLen; /* Received length of the reply payload */ + int rc = 0; /* Result code */ int iHttpVersion; /* Which version of HTTP protocol server uses */ char *zLine; /* A single line of the reply header */ int i; /* Loop counter */ + int isError = 0; /* True if the reply is an error message */ + int isCompressed = 1; /* True if the reply is compressed */ - if( transport_open() ){ - fossil_fatal(transport_errmsg()); + if( transport_open(&g.url) ){ + fossil_warning("%s", transport_errmsg(&g.url)); + return 1; } /* Construct the login card and prepare the complete payload */ blob_zero(&login); if( useLogin ) http_build_login_card(pSend, &login); @@ -157,121 +237,186 @@ /* When tracing, write the transmitted HTTP message both to standard ** output and into a file. The file can then be used to drive the ** server-side like this: ** - ** ./fossil http 4 && strcmp(&zLine[j-4],"/xfer")==0 ) zLine[j-4] = 0; - fossil_print("redirect to %s\n", &zLine[i]); - url_parse(&zLine[i]); - transport_close(); - http_exchange(pSend, pReply, useLogin); - return; - } - } - if( rc!=200 ){ - fossil_fatal("\"location:\" missing from 302 redirect reply"); + }else if( ( rc==301 || rc==302 ) && + fossil_strnicmp(zLine, "location:", 9)==0 ){ + int i, j; + + if ( --maxRedirect == 0){ + fossil_warning("redirect limit exceeded"); + goto write_err; + } + for(i=9; zLine[i] && zLine[i]==' '; i++){} + if( zLine[i]==0 ){ + fossil_warning("malformed redirect: %s", zLine); + goto write_err; + } + j = strlen(zLine) - 1; + while( j>4 && fossil_strcmp(&zLine[j-4],"/xfer")==0 ){ + j -= 4; + zLine[j] = 0; + } + fossil_print("redirect with status %d to %s\n", rc, &zLine[i]); + url_parse(&zLine[i], 0); + transport_close(&g.url); + transport_global_shutdown(&g.url); + fSeenHttpAuth = 0; + if( g.zHttpAuth ) free(g.zHttpAuth); + g.zHttpAuth = get_httpauth(); + return http_exchange(pSend, pReply, useLogin, maxRedirect); + }else if( fossil_strnicmp(zLine, "content-type: ", 14)==0 ){ + if( fossil_strnicmp(&zLine[14], "application/x-fossil-debug", -1)==0 ){ + isCompressed = 0; + }else if( fossil_strnicmp(&zLine[14], + "application/x-fossil-uncompressed", -1)==0 ){ + isCompressed = 0; + }else if( fossil_strnicmp(&zLine[14], "application/x-fossil", -1)!=0 ){ + isError = 1; + } + } + } + if( iLength<0 ){ + fossil_warning("server did not reply"); + goto write_err; + } + if( rc!=200 ){ + fossil_warning("\"location:\" missing from %d redirect reply", rc); goto write_err; } /* ** Extract the reply payload that follows the header */ - if( iLength<0 ){ - fossil_fatal("server did not reply"); - goto write_err; - } blob_zero(pReply); blob_resize(pReply, iLength); - iLength = transport_receive(blob_buffer(pReply), iLength); + iRecvLen = transport_receive(&g.url, blob_buffer(pReply), iLength); + if( iRecvLen != iLength ){ + 
fossil_warning("response truncated: got %d bytes of %d", iRecvLen, iLength); + goto write_err; + } blob_resize(pReply, iLength); - if( g.fHttpTrace ){ - printf("HTTP RECEIVE:\n%s\n=======================\n", blob_str(pReply)); - }else{ - blob_uncompress(pReply, pReply); + if( isError ){ + char *z; + int i, j; + z = blob_str(pReply); + for(i=j=0; z[i]; i++, j++){ + if( z[i]=='<' ){ + while( z[i] && z[i]!='>' ) i++; + if( z[i]==0 ) break; + } + z[j] = z[i]; + } + z[j] = 0; + fossil_warning("server sends error: %s", z); + goto write_err; } + if( isCompressed ) blob_uncompress(pReply, pReply); /* ** Close the connection to the server if appropriate. ** ** FIXME: There is some bug in the lower layers that prevents the ** connection from remaining open. The easiest fix for now is to ** simply close and restart the connection for each round-trip. + ** + ** For SSH we will leave the connection open. */ - closeConnection = 1; /* FIX ME */ + if( ! g.url.isSsh ) closeConnection = 1; /* FIX ME */ if( closeConnection ){ - transport_close(); + transport_close(&g.url); }else{ - transport_rewind(); + transport_rewind(&g.url); } - return; + return 0; - /* + /* ** Jump to here if an error is seen. */ write_err: - transport_close(); - return; + transport_close(&g.url); + return 1; } Index: src/http_socket.c ================================================================== --- src/http_socket.c +++ src/http_socket.c @@ -20,24 +20,30 @@ ** ** This file implements a singleton. A single client socket may be active ** at a time. State information is stored in static variables. The identity ** of the server is held in global variables that are set by url_parse(). ** -** Low-level sockets are abstracted out into this module because they +** Low-level sockets are abstracted out into this module because they ** are handled different on Unix and windows. */ +#ifndef __EXTENSIONS__ +# define __EXTENSIONS__ 1 /* IPv6 won't compile on Solaris without this */ +#endif #include "config.h" #include "http_socket.h" -#ifdef __MINGW32__ -# include +#if defined(_WIN32) +# if !defined(_WIN32_WINNT) +# define _WIN32_WINNT 0x0501 +# endif # include +# include #else +# include # include # include # include -# include #endif #include #include #include @@ -45,11 +51,11 @@ ** There can only be a single socket connection open at a time. ** State information about that socket is stored in the following ** local variables: */ static int socketIsInit = 0; /* True after global initialization */ -#ifdef __MINGW32__ +#if defined(_WIN32) static WSADATA socketInfo; /* Windows socket initialize data */ #endif static int iSocket = -1; /* The socket on which we talk to the server */ static char *socketErrMsg = 0; /* Text of most recent socket error */ @@ -63,11 +69,11 @@ } /* ** Set the socket error message. */ -void socket_set_errmsg(char *zFormat, ...){ +void socket_set_errmsg(const char *zFormat, ...){ va_list ap; socket_clear_errmsg(); va_start(ap, zFormat); socketErrMsg = vmprintf(zFormat, ap); va_end(ap); @@ -84,11 +90,11 @@ ** Call this routine once before any other use of the socket interface. ** This routine does initial configuration of the socket module. */ void socket_global_init(void){ if( socketIsInit==0 ){ -#ifdef __MINGW32__ +#if defined(_WIN32) if( WSAStartup(MAKEWORD(2,0), &socketInfo)!=0 ){ fossil_panic("can't initialize winsock"); } #endif socketIsInit = 1; @@ -99,11 +105,11 @@ ** Call this routine to shutdown the socket module prior to program ** exit. 
*/ void socket_global_shutdown(void){ if( socketIsInit ){ -#ifdef __MINGW32__ +#if defined(_WIN32) WSACleanup(); #endif socket_clear_errmsg(); socketIsInit = 0; } @@ -113,11 +119,11 @@ ** Close the currently open socket. If no socket is open, this routine ** is a no-op. */ void socket_close(void){ if( iSocket>=0 ){ -#ifdef __MINGW32__ +#if defined(_WIN32) closesocket(iSocket); #else close(iSocket); #endif iSocket = -1; @@ -124,61 +130,65 @@ } } /* ** Open a socket connection. The identify of the server is determined -** by global varibles that are set using url_parse(): +** by pUrlData ** -** g.urlName Name of the server. Ex: www.fossil-scm.org -** g.urlPort TCP/IP port to use. Ex: 80 +** pUrlDAta->name Name of the server. Ex: www.fossil-scm.org +** pUrlDAta->port TCP/IP port to use. Ex: 80 ** ** Return the number of errors. */ -int socket_open(void){ - static struct sockaddr_in addr; /* The server address */ - static int addrIsInit = 0; /* True once addr is initialized */ +int socket_open(UrlData *pUrlData){ + int rc = 0; + struct addrinfo *ai = 0; + struct addrinfo *p; + struct addrinfo hints; + char zPort[30]; + char zRemote[NI_MAXHOST]; socket_global_init(); - if( !addrIsInit ){ - addr.sin_family = AF_INET; - addr.sin_port = htons(g.urlPort); - *(int*)&addr.sin_addr = inet_addr(g.urlName); - if( -1 == *(int*)&addr.sin_addr ){ -#ifndef FOSSIL_STATIC_LINK - struct hostent *pHost; - pHost = gethostbyname(g.urlName); - if( pHost!=0 ){ - memcpy(&addr.sin_addr,pHost->h_addr_list[0],pHost->h_length); - }else -#endif - { - socket_set_errmsg("can't resolve host name: %s\n", g.urlName); - return 1; - } - } - addrIsInit = 1; - - /* Set the Global.zIpAddr variable to the server we are talking to. - ** This is used to populate the ipaddr column of the rcvfrom table, - ** if any files are received from the server. - */ - g.zIpAddr = mprintf("%s", inet_ntoa(addr.sin_addr)); - } - iSocket = socket(AF_INET,SOCK_STREAM,0); - if( iSocket<0 ){ - socket_set_errmsg("cannot create a socket"); - return 1; - } - if( connect(iSocket,(struct sockaddr*)&addr,sizeof(addr))<0 ){ - socket_set_errmsg("cannot connect to host %s:%d", g.urlName, g.urlPort); - socket_close(); - return 1; - } -#ifndef __MINGW32__ + memset(&hints, 0, sizeof(struct addrinfo)); + assert( iSocket<0 ); + hints.ai_family = g.fIPv4 ? AF_INET : AF_UNSPEC; + hints.ai_socktype = SOCK_STREAM; + hints.ai_protocol = IPPROTO_TCP; + sqlite3_snprintf(sizeof(zPort),zPort,"%d", pUrlData->port); + rc = getaddrinfo(pUrlData->name, zPort, &hints, &ai); + if( rc ){ + socket_set_errmsg("getaddrinfo() fails: %s", gai_strerror(rc)); + goto end_socket_open; + } + for(p=ai; p; p=p->ai_next){ + iSocket = socket(p->ai_family, p->ai_socktype, p->ai_protocol); + if( iSocket<0 ) continue; + if( connect(iSocket,p->ai_addr,p->ai_addrlen)<0 ){ + socket_close(); + continue; + } + rc = getnameinfo(p->ai_addr, p->ai_addrlen, zRemote, sizeof(zRemote), + 0, 0, NI_NUMERICHOST); + if( rc ){ + socket_set_errmsg("getnameinfo() failed: %s", gai_strerror(rc)); + goto end_socket_open; + } + g.zIpAddr = mprintf("%s", zRemote); + break; + } + if( p==0 ){ + socket_set_errmsg("cannot connect to host %s:%d", pUrlData->name, + pUrlData->port); + rc = 1; + } +#if !defined(_WIN32) signal(SIGPIPE, SIG_IGN); #endif - return 0; +end_socket_open: + if( rc && iSocket>=0 ) socket_close(); + if( ai ) freeaddrinfo(ai); + return rc; } /* ** Send content out over the open socket connection. */ @@ -197,16 +207,42 @@ /* ** Receive content back from the open socket connection. 
*/ size_t socket_receive(void *NotUsed, void *pContent, size_t N){ - size_t got; + ssize_t got; size_t total = 0; while( N>0 ){ - got = recv(iSocket, pContent, N, 0); + /* WinXP fails for large values of N. So limit it to 64KiB. */ + got = recv(iSocket, pContent, N>65536 ? 65536 : N, 0); if( got<=0 ) break; - total += got; - N -= got; + total += (size_t)got; + N -= (size_t)got; pContent = (void*)&((char*)pContent)[got]; } return total; } + +/* +** Attempt to resolve pUrlData->name to an IP address and setup g.zIpAddr +** so rcvfrom gets populated. For hostnames with more than one IP (or +** if overridden in ~/.ssh/config) the rcvfrom may not match the host +** to which we connect. +*/ +void socket_ssh_resolve_addr(UrlData *pUrlData){ + struct addrinfo *ai = 0; + struct addrinfo hints; + char zRemote[NI_MAXHOST]; + hints.ai_family = AF_UNSPEC; + hints.ai_socktype = SOCK_STREAM; + hints.ai_protocol = IPPROTO_TCP; + if( getaddrinfo(pUrlData->name, NULL, &hints, &ai)==0 + && ai!=0 + && getnameinfo(ai->ai_addr, ai->ai_addrlen, zRemote, + sizeof(zRemote), 0, 0, NI_NUMERICHOST)==0 ){ + g.zIpAddr = mprintf("%s (%s)", zRemote, pUrlData->name); + } + if( ai ) freeaddrinfo(ai); + if( g.zIpAddr==0 ){ + g.zIpAddr = mprintf("%s", pUrlData->name); + } +} ADDED src/http_ssl.c Index: src/http_ssl.c ================================================================== --- src/http_ssl.c +++ src/http_ssl.c @@ -0,0 +1,485 @@ +/* +** Copyright (c) 2009 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file manages low-level SSL communications. +** +** This file implements a singleton. A single SSL connection may be active +** at a time. State information is stored in static variables. The identity +** of the server is held in global variables that are set by url_parse(). +** +** SSL support is abstracted out into this module because Fossil can +** be compiled without SSL support (which requires OpenSSL library) +*/ + +#include "config.h" + +#ifdef FOSSIL_ENABLE_SSL + +#include +#include +#include + +#include "http_ssl.h" +#include +#include + +/* +** There can only be a single OpenSSL IO connection open at a time. +** State information about that IO is stored in the following +** local variables: +*/ +static int sslIsInit = 0; /* True after global initialization */ +static BIO *iBio = 0; /* OpenSSL I/O abstraction */ +static char *sslErrMsg = 0; /* Text of most recent OpenSSL error */ +static SSL_CTX *sslCtx; /* SSL context */ +static SSL *ssl; + + +/* +** Clear the SSL error message +*/ +static void ssl_clear_errmsg(void){ + free(sslErrMsg); + sslErrMsg = 0; +} + +/* +** Set the SSL error message. 
+*/ +void ssl_set_errmsg(const char *zFormat, ...){ + va_list ap; + ssl_clear_errmsg(); + va_start(ap, zFormat); + sslErrMsg = vmprintf(zFormat, ap); + va_end(ap); +} + +/* +** Return the current SSL error message +*/ +const char *ssl_errmsg(void){ + return sslErrMsg; +} + +/* +** When a server requests a client certificate that hasn't been provided, +** display a warning message explaining what to do next. +*/ +static int ssl_client_cert_callback(SSL *ssl, X509 **x509, EVP_PKEY **pkey){ + fossil_warning("The remote server requested a client certificate for " + "authentication. Specify the pathname to a file containing the PEM " + "encoded certificate and private key with the --ssl-identity option " + "or the ssl-identity setting."); + return 0; /* no cert available */ +} + +/* +** Call this routine once before any other use of the SSL interface. +** This routine does initial configuration of the SSL module. +*/ +void ssl_global_init(void){ + const char *zCaSetting = 0, *zCaFile = 0, *zCaDirectory = 0; + const char *identityFile; + + if( sslIsInit==0 ){ + SSL_library_init(); + SSL_load_error_strings(); + ERR_load_BIO_strings(); + OpenSSL_add_all_algorithms(); + sslCtx = SSL_CTX_new(SSLv23_client_method()); + /* Disable SSLv2 and SSLv3 */ + SSL_CTX_set_options(sslCtx, SSL_OP_NO_SSLv2|SSL_OP_NO_SSLv3); + + /* Set up acceptable CA root certificates */ + zCaSetting = db_get("ssl-ca-location", 0); + if( zCaSetting==0 || zCaSetting[0]=='\0' ){ + /* CA location not specified, use platform's default certificate store */ + X509_STORE_set_default_paths(SSL_CTX_get_cert_store(sslCtx)); + }else{ + /* User has specified a CA location, make sure it exists and use it */ + switch( file_isdir(zCaSetting) ){ + case 0: { /* doesn't exist */ + fossil_fatal("ssl-ca-location is set to '%s', " + "but is not a file or directory", zCaSetting); + break; + } + case 1: { /* directory */ + zCaDirectory = zCaSetting; + break; + } + case 2: { /* file */ + zCaFile = zCaSetting; + break; + } + } + if( SSL_CTX_load_verify_locations(sslCtx, zCaFile, zCaDirectory)==0 ){ + fossil_fatal("Failed to use CA root certificates from " + "ssl-ca-location '%s'", zCaSetting); + } + } + + /* Load client SSL identity, preferring the filename specified on the + ** command line */ + if( g.zSSLIdentity!=0 ){ + identityFile = g.zSSLIdentity; + }else{ + identityFile = db_get("ssl-identity", 0); + } + if( identityFile!=0 && identityFile[0]!='\0' ){ + if( SSL_CTX_use_certificate_file(sslCtx,identityFile,SSL_FILETYPE_PEM)!=1 + || SSL_CTX_use_PrivateKey_file(sslCtx,identityFile,SSL_FILETYPE_PEM)!=1 + ){ + fossil_fatal("Could not load SSL identity from %s", identityFile); + } + } + /* Register a callback to tell the user what to do when the server asks + ** for a cert */ + SSL_CTX_set_client_cert_cb(sslCtx, ssl_client_cert_callback); + + sslIsInit = 1; + } +} + +/* +** Call this routine to shutdown the SSL module prior to program exit. +*/ +void ssl_global_shutdown(void){ + if( sslIsInit ){ + SSL_CTX_free(sslCtx); + ssl_clear_errmsg(); + sslIsInit = 0; + } +} + +/* +** Close the currently open SSL connection. If no connection is open, +** this routine is a no-op. 
+*/ +void ssl_close(void){ + if( iBio!=NULL ){ + (void)BIO_reset(iBio); + BIO_free_all(iBio); + iBio = NULL; + } +} + +/* See RFC2817 for details */ +static int establish_proxy_tunnel(UrlData *pUrlData, BIO *bio){ + int rc, httpVerMin; + char *bbuf; + Blob snd, reply; + int done=0,end=0; + blob_zero(&snd); + blob_appendf(&snd, "CONNECT %s:%d HTTP/1.1\r\n", pUrlData->hostname, + pUrlData->proxyOrigPort); + blob_appendf(&snd, "Host: %s:%d\r\n", pUrlData->hostname, pUrlData->proxyOrigPort); + if( pUrlData->proxyAuth ){ + blob_appendf(&snd, "Proxy-Authorization: %s\r\n", pUrlData->proxyAuth); + } + blob_append(&snd, "Proxy-Connection: keep-alive\r\n", -1); + blob_appendf(&snd, "User-Agent: %s\r\n", get_user_agent()); + blob_append(&snd, "\r\n", 2); + BIO_write(bio, blob_buffer(&snd), blob_size(&snd)); + blob_reset(&snd); + + /* Wait for end of reply */ + blob_zero(&reply); + do{ + int len; + char buf[256]; + len = BIO_read(bio, buf, sizeof(buf)); + blob_append(&reply, buf, len); + + bbuf = blob_buffer(&reply); + len = blob_size(&reply); + while(end < len) { + if(bbuf[end] == '\r') { + if(len - end < 4) { + /* need more data */ + break; + } + if(memcmp(&bbuf[end], "\r\n\r\n", 4) == 0) { + done = 1; + break; + } + } + end++; + } + }while(!done); + sscanf(bbuf, "HTTP/1.%d %d", &httpVerMin, &rc); + blob_reset(&reply); + return rc; +} + +/* +** Open an SSL connection. The identify of the server is determined +** as follows: +** +** g.url.name Name of the server. Ex: www.fossil-scm.org +** pUrlData->port TCP/IP port to use. Ex: 80 +** +** Return the number of errors. +*/ +int ssl_open(UrlData *pUrlData){ + X509 *cert; + int hasSavedCertificate = 0; + int trusted = 0; + unsigned long e; + + ssl_global_init(); + + /* Get certificate for current server from global config and + * (if we have it in config) add it to certificate store. 
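+ * Certificates are kept in the global config under "cert:HOSTNAME" and
+ * "trusted:HOSTNAME" keys; see ssl_get_certificate() and
+ * ssl_save_certificate() below.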
+ */ + cert = ssl_get_certificate(pUrlData, &trusted); + if ( cert!=NULL ){ + X509_STORE_add_cert(SSL_CTX_get_cert_store(sslCtx), cert); + X509_free(cert); + hasSavedCertificate = 1; + } + + if( pUrlData->useProxy ){ + int rc; + BIO *sBio; + char *connStr; + connStr = mprintf("%s:%d", g.url.name, pUrlData->port); + sBio = BIO_new_connect(connStr); + free(connStr); + if( BIO_do_connect(sBio)<=0 ){ + ssl_set_errmsg("SSL: cannot connect to proxy %s:%d (%s)", + pUrlData->name, pUrlData->port, ERR_reason_error_string(ERR_get_error())); + ssl_close(); + return 1; + } + rc = establish_proxy_tunnel(pUrlData, sBio); + if( rc<200||rc>299 ){ + ssl_set_errmsg("SSL: proxy connect failed with HTTP status code %d", rc); + return 1; + } + + pUrlData->path = pUrlData->proxyUrlPath; + + iBio = BIO_new_ssl(sslCtx, 1); + BIO_push(iBio, sBio); + }else{ + iBio = BIO_new_ssl_connect(sslCtx); + } + if( iBio==NULL ) { + ssl_set_errmsg("SSL: cannot open SSL (%s)", + ERR_reason_error_string(ERR_get_error())); + return 1; + } + BIO_get_ssl(iBio, &ssl); + +#if (SSLEAY_VERSION_NUMBER >= 0x00908070) && !defined(OPENSSL_NO_TLSEXT) + if( !SSL_set_tlsext_host_name(ssl, (pUrlData->useProxy?pUrlData->hostname:pUrlData->name)) ){ + fossil_warning("WARNING: failed to set server name indication (SNI), " + "continuing without it.\n"); + } +#endif + + SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); + + if( !pUrlData->useProxy ){ + BIO_set_conn_hostname(iBio, pUrlData->name); + BIO_set_conn_int_port(iBio, &pUrlData->port); + if( BIO_do_connect(iBio)<=0 ){ + ssl_set_errmsg("SSL: cannot connect to host %s:%d (%s)", + pUrlData->name, pUrlData->port, ERR_reason_error_string(ERR_get_error())); + ssl_close(); + return 1; + } + } + + if( BIO_do_handshake(iBio)<=0 ) { + ssl_set_errmsg("Error establishing SSL connection %s:%d (%s)", + pUrlData->useProxy?pUrlData->hostname:pUrlData->name, + pUrlData->useProxy?pUrlData->proxyOrigPort:pUrlData->port, + ERR_reason_error_string(ERR_get_error())); + ssl_close(); + return 1; + } + /* Check if certificate is valid */ + cert = SSL_get_peer_certificate(ssl); + + if ( cert==NULL ){ + ssl_set_errmsg("No SSL certificate was presented by the peer"); + ssl_close(); + return 1; + } + + if( trusted<=0 && (e = SSL_get_verify_result(ssl)) != X509_V_OK ){ + char *desc, *prompt; + const char *warning = ""; + Blob ans; + char cReply; + BIO *mem; + unsigned char md[32]; + unsigned int mdLength = 31; + + mem = BIO_new(BIO_s_mem()); + X509_NAME_print_ex(mem, X509_get_subject_name(cert), 2, XN_FLAG_MULTILINE); + BIO_puts(mem, "\n\nIssued By:\n\n"); + X509_NAME_print_ex(mem, X509_get_issuer_name(cert), 2, XN_FLAG_MULTILINE); + BIO_puts(mem, "\n\nSHA1 Fingerprint:\n\n "); + if(X509_digest(cert, EVP_sha1(), md, &mdLength)){ + int j; + for( j = 0; j < mdLength; ++j ) { + BIO_printf(mem, " %02x", md[j]); + } + } + BIO_write(mem, "", 1); /* nul-terminate mem buffer */ + BIO_get_mem_data(mem, &desc); + + if( hasSavedCertificate ){ + warning = "WARNING: Certificate doesn't match the " + "saved certificate for this host!"; + } + prompt = mprintf("\nSSL verification failed: %s\n" + "Certificate received: \n\n%s\n\n%s\n" + "Either:\n" + " * verify the certificate is correct using the " + "SHA1 fingerprint above\n" + " * use the global ssl-ca-location setting to specify your CA root\n" + " certificates list\n\n" + "If you are not expecting this message, answer no and " + "contact your server\nadministrator.\n\n" + "Accept certificate for host %s (a=always/y/N)? 
", + X509_verify_cert_error_string(e), desc, warning, + pUrlData->useProxy?pUrlData->hostname:pUrlData->name); + BIO_free(mem); + + prompt_user(prompt, &ans); + free(prompt); + cReply = blob_str(&ans)[0]; + blob_reset(&ans); + if( cReply!='y' && cReply!='Y' && cReply!='a' && cReply!='A') { + X509_free(cert); + ssl_set_errmsg("SSL certificate declined"); + ssl_close(); + return 1; + } + if( cReply=='a' || cReply=='A') { + if ( trusted==0 ){ + prompt_user("\nSave this certificate as fully trusted (a=always/N)? ", + &ans); + cReply = blob_str(&ans)[0]; + trusted = ( cReply=='a' || cReply=='A' ); + blob_reset(&ans); + } + ssl_save_certificate(pUrlData, cert, trusted); + } + } + + /* Set the Global.zIpAddr variable to the server we are talking to. + ** This is used to populate the ipaddr column of the rcvfrom table, + ** if any files are received from the server. + */ + { + /* IPv4 only code */ + const unsigned char *ip = (const unsigned char *) BIO_get_conn_ip(iBio); + g.zIpAddr = mprintf("%d.%d.%d.%d", ip[0], ip[1], ip[2], ip[3]); + } + + X509_free(cert); + return 0; +} + +/* +** Save certificate to global config. +*/ +void ssl_save_certificate(UrlData *pUrlData, X509 *cert, int trusted){ + BIO *mem; + char *zCert, *zHost; + + mem = BIO_new(BIO_s_mem()); + PEM_write_bio_X509(mem, cert); + BIO_write(mem, "", 1); /* nul-terminate mem buffer */ + BIO_get_mem_data(mem, &zCert); + zHost = mprintf("cert:%s", pUrlData->useProxy?pUrlData->hostname:pUrlData->name); + db_set(zHost, zCert, 1); + free(zHost); + zHost = mprintf("trusted:%s", pUrlData->useProxy?pUrlData->hostname:pUrlData->name); + db_set_int(zHost, trusted, 1); + free(zHost); + BIO_free(mem); +} + +/* +** Get certificate for pUrlData->urlName from global config. +** Return NULL if no certificate found. +*/ +X509 *ssl_get_certificate(UrlData *pUrlData, int *pTrusted){ + char *zHost, *zCert; + BIO *mem; + X509 *cert; + + zHost = mprintf("cert:%s", + pUrlData->useProxy ? pUrlData->hostname : pUrlData->name); + zCert = db_get(zHost, NULL); + free(zHost); + if ( zCert==NULL ) + return NULL; + + if ( pTrusted!=0 ){ + zHost = mprintf("trusted:%s", + pUrlData->useProxy ? pUrlData->hostname : pUrlData->name); + *pTrusted = db_get_int(zHost, 0); + free(zHost); + } + + mem = BIO_new(BIO_s_mem()); + BIO_puts(mem, zCert); + cert = PEM_read_bio_X509(mem, NULL, 0, NULL); + free(zCert); + BIO_free(mem); + return cert; +} + +/* +** Send content out over the SSL connection. +*/ +size_t ssl_send(void *NotUsed, void *pContent, size_t N){ + size_t sent; + size_t total = 0; + while( N>0 ){ + sent = BIO_write(iBio, pContent, N); + if( sent<=0 ) break; + total += sent; + N -= sent; + pContent = (void*)&((char*)pContent)[sent]; + } + return total; +} + +/* +** Receive content back from the SSL connection. 
+*/ +size_t ssl_receive(void *NotUsed, void *pContent, size_t N){ + size_t got; + size_t total = 0; + while( N>0 ){ + got = BIO_read(iBio, pContent, N); + if( got<=0 ) break; + total += got; + N -= got; + pContent = (void*)&((char*)pContent)[got]; + } + return total; +} + +#endif /* FOSSIL_ENABLE_SSL */ Index: src/http_transport.c ================================================================== --- src/http_transport.c +++ src/http_transport.c @@ -30,25 +30,35 @@ int isOpen; /* True when the transport layer is open */ char *pBuf; /* Buffer used to hold the reply */ int nAlloc; /* Space allocated for transportBuf[] */ int nUsed ; /* Space of transportBuf[] used */ int iCursor; /* Next unread by in transportBuf[] */ - int nSent; /* Number of bytes sent */ - int nRcvd; /* Number of bytes received */ + i64 nSent; /* Number of bytes sent */ + i64 nRcvd; /* Number of bytes received */ FILE *pFile; /* File I/O for FILE: */ char *zOutFile; /* Name of outbound file for FILE: */ char *zInFile; /* Name of inbound file for FILE: */ + FILE *pLog; /* Log output here */ } transport = { - 0, 0, 0, 0, 0, 0, 0 + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; + +/* +** Information about the connection to the SSH subprocess when +** using the ssh:// sync method. +*/ +static int sshPid; /* Process id of ssh subprocess */ +static int sshIn; /* From ssh subprocess to this process */ +static FILE *sshOut; /* From this to ssh subprocess */ + /* ** Return the current transport error message. */ -const char *transport_errmsg(void){ +const char *transport_errmsg(UrlData *pUrlData){ #ifdef FOSSIL_ENABLE_SSL - if( g.urlIsHttps ){ + if( pUrlData->isHttps ){ return ssl_errmsg(); } #endif return socket_errmsg(); } @@ -55,81 +65,152 @@ /* ** Retrieve send/receive counts from the transport layer. If "resetFlag" ** is true, then reset the counts. */ -void transport_stats(int *pnSent, int *pnRcvd, int resetFlag){ +void transport_stats(i64 *pnSent, i64 *pnRcvd, int resetFlag){ if( pnSent ) *pnSent = transport.nSent; if( pnRcvd ) *pnRcvd = transport.nRcvd; if( resetFlag ){ transport.nSent = 0; transport.nRcvd = 0; } } + +/* +** Default SSH command +*/ +#ifdef _WIN32 +static const char zDefaultSshCmd[] = "plink -ssh -T"; +#else +static const char zDefaultSshCmd[] = "ssh -e none -T"; +#endif + +/* +** SSH initialization of the transport layer +*/ +int transport_ssh_open(UrlData *pUrlData){ + /* For SSH we need to create and run SSH fossil http + ** to talk to the remote machine. 
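+ ** The command that is run looks roughly like
+ **
+ **     ssh -e none -T USER@HOST FOSSIL-BINARY test-http REPO-PATH
+ **
+ ** where the base command comes from the "ssh-command" setting (the
+ ** default shown here is the non-Windows one) and a port option is
+ ** appended when a non-default port is used; USER@HOST, FOSSIL-BINARY
+ ** and REPO-PATH are placeholders.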
+ */ + char *zSsh; /* The base SSH command */ + Blob zCmd; /* The SSH command */ + char *zHost; /* The host name to contact */ + int n; /* Size of prefix string */ + + socket_ssh_resolve_addr(pUrlData); + zSsh = db_get("ssh-command", zDefaultSshCmd); + blob_init(&zCmd, zSsh, -1); + if( pUrlData->port!=pUrlData->dfltPort && pUrlData->port ){ +#ifdef _WIN32 + blob_appendf(&zCmd, " -P %d", pUrlData->port); +#else + blob_appendf(&zCmd, " -p %d", pUrlData->port); +#endif + } + if( g.fSshTrace ){ + fossil_force_newline(); + fossil_print("%s", blob_str(&zCmd)); /* Show the base of the SSH command */ + } + if( pUrlData->user && pUrlData->user[0] ){ + zHost = mprintf("%s@%s", pUrlData->user, pUrlData->name); + }else{ + zHost = mprintf("%s", pUrlData->name); + } + n = blob_size(&zCmd); + blob_append(&zCmd, " ", 1); + shell_escape(&zCmd, zHost); + blob_append(&zCmd, " ", 1); + shell_escape(&zCmd, mprintf("%s", pUrlData->fossil)); + blob_append(&zCmd, " test-http", 10); + if( pUrlData->path && pUrlData->path[0] ){ + blob_append(&zCmd, " ", 1); + shell_escape(&zCmd, mprintf("%s", pUrlData->path)); + } + if( g.fSshTrace ){ + fossil_print("%s\n", blob_str(&zCmd)+n); /* Show tail of SSH command */ + } + free(zHost); + popen2(blob_str(&zCmd), &sshIn, &sshOut, &sshPid); + if( sshPid==0 ){ + socket_set_errmsg("cannot start ssh tunnel using [%b]", &zCmd); + } + blob_reset(&zCmd); + return sshPid==0; +} /* ** Open a connection to the server. The server is defined by the following -** global variables: +** variables: ** -** g.urlName Name of the server. Ex: www.fossil-scm.org -** g.urlPort TCP/IP port. Ex: 80 -** g.urlIsHttps Use TLS for the connection +** pUrlData->name Name of the server. Ex: www.fossil-scm.org +** pUrlData->port TCP/IP port. Ex: 80 +** pUrlData->isHttps Use TLS for the connection ** ** Return the number of errors. 
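+**
+** The transport is chosen from the URL type: ssh:// URLs are handled
+** by an ssh subprocess, https:// URLs by the SSL module (when Fossil
+** is compiled with SSL support), file: URLs by temporary files and a
+** child "fossil http" process, and ordinary http:// URLs by a plain
+** socket connection.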
*/ -int transport_open(void){ +int transport_open(UrlData *pUrlData){ int rc = 0; if( transport.isOpen==0 ){ - if( g.urlIsHttps ){ + if( pUrlData->isSsh ){ + rc = transport_ssh_open(pUrlData); + if( rc==0 ) transport.isOpen = 1; + }else if( pUrlData->isHttps ){ #ifdef FOSSIL_ENABLE_SSL - rc = ssl_open(); + rc = ssl_open(pUrlData); if( rc==0 ) transport.isOpen = 1; #else socket_set_errmsg("HTTPS: Fossil has been compiled without SSL support"); rc = 1; #endif - }else if( g.urlIsFile ){ + }else if( pUrlData->isFile ){ sqlite3_uint64 iRandId; sqlite3_randomness(sizeof(iRandId), &iRandId); - transport.zOutFile = mprintf("%s-%llu-out.http", + transport.zOutFile = mprintf("%s-%llu-out.http", g.zRepositoryName, iRandId); - transport.zInFile = mprintf("%s-%llu-in.http", + transport.zInFile = mprintf("%s-%llu-in.http", g.zRepositoryName, iRandId); - transport.pFile = fopen(transport.zOutFile, "wb"); + transport.pFile = fossil_fopen(transport.zOutFile, "wb"); if( transport.pFile==0 ){ fossil_fatal("cannot output temporary file: %s", transport.zOutFile); } transport.isOpen = 1; }else{ - rc = socket_open(); + rc = socket_open(pUrlData); if( rc==0 ) transport.isOpen = 1; } } return rc; } /* ** Close the current connection */ -void transport_close(void){ +void transport_close(UrlData *pUrlData){ if( transport.isOpen ){ free(transport.pBuf); transport.pBuf = 0; transport.nAlloc = 0; transport.nUsed = 0; transport.iCursor = 0; - if( g.urlIsHttps ){ + if( transport.pLog ){ + fclose(transport.pLog); + transport.pLog = 0; + } + if( pUrlData->isSsh ){ + transport_ssh_close(); + }else if( pUrlData->isHttps ){ #ifdef FOSSIL_ENABLE_SSL ssl_close(); #endif - }else if( g.urlIsFile ){ - if( transport.pFile ){ + }else if( pUrlData->isFile ){ + if( transport.pFile ){ fclose(transport.pFile); transport.pFile = 0; } - unlink(transport.zInFile); - unlink(transport.zOutFile); + file_delete(transport.zInFile); + file_delete(transport.zOutFile); free(transport.zInFile); free(transport.zOutFile); }else{ socket_close(); } @@ -138,25 +219,28 @@ } /* ** Send content over the wire. */ -void transport_send(Blob *toSend){ +void transport_send(UrlData *pUrlData, Blob *toSend){ char *z = blob_buffer(toSend); int n = blob_size(toSend); transport.nSent += n; - if( g.urlIsHttps ){ + if( pUrlData->isSsh ){ + fwrite(z, 1, n, sshOut); + fflush(sshOut); + }else if( pUrlData->isHttps ){ #ifdef FOSSIL_ENABLE_SSL int sent; while( n>0 ){ sent = ssl_send(0, z, n); /* printf("Sent %d of %d bytes\n", sent, n); fflush(stdout); */ if( sent<=0 ) break; n -= sent; - } + } #endif - }else if( g.urlIsFile ){ + }else if( pUrlData->isFile ){ fwrite(z, 1, n, transport.pFile); }else{ int sent; while( n>0 ){ sent = socket_send(0, z, n); @@ -167,44 +251,95 @@ } } /* ** This routine is called when the outbound message is complete and -** it is time to being recieving a reply. +** it is time to being receiving a reply. */ -void transport_flip(void){ - if( g.urlIsFile ){ +void transport_flip(UrlData *pUrlData){ + if( pUrlData->isFile ){ char *zCmd; fclose(transport.pFile); - zCmd = mprintf("\"%s\" http \"%s\" \"%s\" \"%s\" 127.0.0.1", - g.argv[0], g.urlName, transport.zOutFile, transport.zInFile + zCmd = mprintf("\"%s\" http \"%s\" \"%s\" 127.0.0.1 \"%s\" --localauth", + g.nameOfExe, transport.zOutFile, transport.zInFile, pUrlData->name ); - portable_system(zCmd); + fossil_system(zCmd); free(zCmd); - transport.pFile = fopen(transport.zInFile, "rb"); + transport.pFile = fossil_fopen(transport.zInFile, "rb"); + } +} + +/* +** Log all input to a file. 
The transport layer will take responsibility +** for closing the log file when it is done. +*/ +void transport_log(FILE *pLog){ + if( transport.pLog ){ + fclose(transport.pLog); + transport.pLog = 0; } + transport.pLog = pLog; } /* ** This routine is called when the inbound message has been received ** and it is time to start sending again. */ -void transport_rewind(void){ - if( g.urlIsFile ){ - transport_close(); +void transport_rewind(UrlData *pUrlData){ + if( pUrlData->isFile ){ + transport_close(pUrlData); + } +} + +/* +** Read N bytes of content directly from the wire and write into +** the buffer. +*/ +static int transport_fetch(UrlData *pUrlData, char *zBuf, int N){ + int got; + if( sshIn ){ + int x; + int wanted = N; + got = 0; + while( wanted>0 ){ + x = read(sshIn, &zBuf[got], wanted); + if( x<=0 ) break; + got += x; + wanted -= x; + } + }else if( pUrlData->isHttps ){ + #ifdef FOSSIL_ENABLE_SSL + got = ssl_receive(0, zBuf, N); + #else + got = 0; + #endif + }else if( pUrlData->isFile ){ + got = fread(zBuf, 1, N, transport.pFile); + }else{ + got = socket_receive(0, zBuf, N); + } + /* printf("received %d of %d bytes\n", got, N); fflush(stdout); */ + if( transport.pLog ){ + fwrite(zBuf, 1, got, transport.pLog); + fflush(transport.pLog); } + return got; } /* ** Read N bytes of content from the wire and store in the supplied buffer. ** Return the number of bytes actually received. */ -int transport_receive(char *zBuf, int N){ +int transport_receive(UrlData *pUrlData, char *zBuf, int N){ int onHand; /* Bytes current held in the transport buffer */ int nByte = 0; /* Bytes of content received */ onHand = transport.nUsed - transport.iCursor; + if( g.fSshTrace){ + printf("Reading %d bytes with %d on hand... ", N, onHand); + fflush(stdout); + } if( onHand>0 ){ int toMove = onHand; if( toMove>N ) toMove = N; /* printf("bytes on hand: %d of %d\n", toMove, N); fflush(stdout); */ memcpy(zBuf, &transport.pBuf[transport.iCursor], toMove); @@ -216,43 +351,30 @@ N -= toMove; zBuf += toMove; nByte += toMove; } if( N>0 ){ - int got; - if( g.urlIsHttps ){ - #ifdef FOSSIL_ENABLE_SSL - got = ssl_receive(0, zBuf, N); - /* printf("received %d of %d bytes\n", got, N); fflush(stdout); */ - #else - got = 0; - #endif - }else if( g.urlIsFile ){ - got = fread(zBuf, 1, N, transport.pFile); - }else{ - got = socket_receive(0, zBuf, N); - /* printf("received %d of %d bytes\n", got, N); fflush(stdout); */ - } + int got = transport_fetch(pUrlData, zBuf, N); if( got>0 ){ nByte += got; transport.nRcvd += got; } } + if( g.fSshTrace ) printf("Got %d bytes\n", nByte); return nByte; } /* ** Load up to N new bytes of content into the transport.pBuf buffer. ** The buffer itself might be moved. And the transport.iCursor value ** might be reset to 0. 
*/ -static void transport_load_buffer(int N){ +static void transport_load_buffer(UrlData *pUrlData, int N){ int i, j; if( transport.nAlloc==0 ){ transport.nAlloc = N; - transport.pBuf = malloc( N ); - if( transport.pBuf==0 ) fossil_panic("out of memory"); + transport.pBuf = fossil_malloc( N ); transport.iCursor = 0; transport.nUsed = 0; } if( transport.iCursor>0 ){ for(i=0, j=transport.iCursor; j transport.nAlloc ){ char *pNew; transport.nAlloc = transport.nUsed + N; - pNew = realloc(transport.pBuf, transport.nAlloc); - if( pNew==0 ) fossil_panic("out of memory"); + pNew = fossil_realloc(transport.pBuf, transport.nAlloc); transport.pBuf = pNew; } if( N>0 ){ - i = transport_receive(&transport.pBuf[transport.nUsed], N); + i = transport_fetch(pUrlData, &transport.pBuf[transport.nUsed], N); if( i>0 ){ + transport.nRcvd += i; transport.nUsed += i; } } } @@ -282,18 +404,18 @@ ** from the received line and zero-terminate the result. Return a pointer ** to the line. ** ** Each call to this routine potentially overwrites the returned buffer. */ -char *transport_receive_line(void){ +char *transport_receive_line(UrlData *pUrlData){ int i; int iStart; i = iStart = transport.iCursor; while(1){ if( i >= transport.nUsed ){ - transport_load_buffer(1000); + transport_load_buffer(pUrlData, pUrlData->isSsh ? 2 : 1000); i -= iStart; iStart = 0; if( i >= transport.nUsed ){ transport.pBuf[i] = 0; transport.iCursor = i; @@ -300,26 +422,44 @@ break; } } if( transport.pBuf[i]=='\n' ){ transport.iCursor = i+1; - while( i>=iStart && isspace(transport.pBuf[i]) ){ + while( i>=iStart && fossil_isspace(transport.pBuf[i]) ){ transport.pBuf[i] = 0; i--; } break; } i++; } - /* printf("Got line: [%s]\n", &transport.pBuf[iStart]); */ + if( g.fSshTrace ) printf("Got line: [%s]\n", &transport.pBuf[iStart]); return &transport.pBuf[iStart]; } -void transport_global_shutdown(void){ - if( g.urlIsHttps ){ +/* +** Global transport shutdown +*/ +void transport_global_shutdown(UrlData *pUrlData){ + if( pUrlData->isSsh ){ + transport_ssh_close(); + } + if( pUrlData->isHttps ){ #ifdef FOSSIL_ENABLE_SSL ssl_global_shutdown(); #endif }else{ socket_global_shutdown(); } } + +/* +** Close SSH transport. +*/ +void transport_ssh_close(void){ + if( sshPid ){ + /*printf("Closing SSH tunnel: ");*/ + fflush(stdout); + pclose2(sshIn, sshOut, sshPid); + sshPid = 0; + } +} ADDED src/import.c Index: src/import.c ================================================================== --- src/import.c +++ src/import.c @@ -0,0 +1,1690 @@ +/* +** Copyright (c) 2010 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@sqlite.org +** +******************************************************************************* +** +** This file contains code used to import the content of a Git/SVN +** repository in the git-fast-import/svn-dump formats as a new Fossil +** repository. +*/ +#include "config.h" +#include "import.h" +#include + +#if INTERFACE +/* +** A single file change record. 
+*/ +struct ImportFile { + char *zName; /* Name of a file */ + char *zUuid; /* UUID of the file */ + char *zPrior; /* Prior name if the name was changed */ + char isFrom; /* True if obtained from the parent */ + char isExe; /* True if executable */ + char isLink; /* True if symlink */ +}; +#endif + + +/* +** State information about an on-going fast-import parse. +*/ +static struct { + void (*xFinish)(void); /* Function to finish a prior record */ + int nData; /* Bytes of data */ + char *zTag; /* Name of a tag */ + char *zBranch; /* Name of a branch for a commit */ + char *zPrevBranch; /* The branch of the previous check-in */ + char *aData; /* Data content */ + char *zMark; /* The current mark */ + char *zDate; /* Date/time stamp */ + char *zUser; /* User name */ + char *zComment; /* Comment of a commit */ + char *zFrom; /* from value as a UUID */ + char *zPrevCheckin; /* Name of the previous check-in */ + char *zFromMark; /* The mark of the "from" field */ + int nMerge; /* Number of merge values */ + int nMergeAlloc; /* Number of slots in azMerge[] */ + char **azMerge; /* Merge values */ + int nFile; /* Number of aFile values */ + int nFileAlloc; /* Number of slots in aFile[] */ + ImportFile *aFile; /* Information about files in a commit */ + int fromLoaded; /* True zFrom content loaded into aFile[] */ + int hasLinks; /* True if git repository contains symlinks */ + int tagCommit; /* True if the commit adds a tag */ +} gg; + +/* +** Duplicate a string. +*/ +char *fossil_strdup(const char *zOrig){ + char *z = 0; + if( zOrig ){ + int n = strlen(zOrig); + z = fossil_malloc( n+1 ); + memcpy(z, zOrig, n+1); + } + return z; +} + +/* +** A no-op "xFinish" method +*/ +static void finish_noop(void){} + +/* +** Deallocate the state information. +** +** The azMerge[] and aFile[] arrays are zeroed by allocated space is +** retained unless the freeAll flag is set. +*/ +static void import_reset(int freeAll){ + int i; + gg.xFinish = 0; + fossil_free(gg.zTag); gg.zTag = 0; + fossil_free(gg.zBranch); gg.zBranch = 0; + fossil_free(gg.aData); gg.aData = 0; + fossil_free(gg.zMark); gg.zMark = 0; + fossil_free(gg.zDate); gg.zDate = 0; + fossil_free(gg.zUser); gg.zUser = 0; + fossil_free(gg.zComment); gg.zComment = 0; + fossil_free(gg.zFrom); gg.zFrom = 0; + fossil_free(gg.zFromMark); gg.zFromMark = 0; + for(i=0; izName, pB->zName); +} + +/* +** Compare two strings for sorting. +*/ +static int string_cmp(const void *pLeft, const void *pRight){ + const char *zLeft = *(const char **)pLeft; + const char *zRight = *(const char **)pRight; + return fossil_strcmp(zLeft, zRight); +} + +/* Forward reference */ +static void import_prior_files(void); + +/* +** Use data accumulated in gg from a "commit" record to add a new +** manifest artifact to the BLOB table. 
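+**
+** (A sketch of the artifact assembled below, assuming the usual Fossil
+** manifest card order: a C card for the comment, a D card for the date,
+** one F card per gg.aFile[] entry, a P card naming the parent and any
+** merge parents, T cards for tag and branch adjustments where needed,
+** a U card for the user, and a closing Z checksum card.)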
+*/ +static void finish_commit(void){ + int i; + char *zFromBranch; + char *aTCard[4]; /* Array of T cards for manifest */ + int nTCard = 0; /* Entries used in aTCard[] */ + Blob record, cksum; + + import_prior_files(); + qsort(gg.aFile, gg.nFile, sizeof(gg.aFile[0]), mfile_cmp); + blob_zero(&record); + blob_appendf(&record, "C %F\n", gg.zComment); + blob_appendf(&record, "D %s\n", gg.zDate); + if( !g.fQuiet ) fossil_print("%.10s\r", gg.zDate); + for(i=0; i=gg.nFileAlloc ){ + gg.nFileAlloc = gg.nFileAlloc*2 + 100; + gg.aFile = fossil_realloc(gg.aFile, gg.nFileAlloc*sizeof(gg.aFile[0])); + } + pFile = &gg.aFile[gg.nFile++]; + memset(pFile, 0, sizeof(*pFile)); + return pFile; +} + + +/* +** Load all file information out of the gg.zFrom check-in +*/ +static void import_prior_files(void){ + Manifest *p; + int rid; + ManifestFile *pOld; + ImportFile *pNew; + if( gg.fromLoaded ) return; + gg.fromLoaded = 1; + if( gg.zFrom==0 && gg.zPrevCheckin!=0 + && fossil_strcmp(gg.zBranch, gg.zPrevBranch)==0 + ){ + gg.zFrom = gg.zPrevCheckin; + gg.zPrevCheckin = 0; + } + if( gg.zFrom==0 ) return; + rid = fast_uuid_to_rid(gg.zFrom); + if( rid==0 ) return; + p = manifest_get(rid, CFTYPE_MANIFEST, 0); + if( p==0 ) return; + manifest_file_rewind(p); + while( (pOld = manifest_file_next(p, 0))!=0 ){ + pNew = import_add_file(); + pNew->zName = fossil_strdup(pOld->zName); + pNew->isExe = pOld->zPerm && strstr(pOld->zPerm, "x")!=0; + pNew->isLink = pOld->zPerm && strstr(pOld->zPerm, "l")!=0; + pNew->zUuid = fossil_strdup(pOld->zUuid); + pNew->isFrom = 1; + } + manifest_destroy(p); +} + +/* +** Locate a file in the gg.aFile[] array by its name. Begin the search +** with the *pI-th file. Update *pI to be one past the file found. +** Do not search past the mx-th file. +*/ +static ImportFile *import_find_file(const char *zName, int *pI, int mx){ + int i = *pI; + int nName = strlen(zName); + while( i='0' && zName[j+1]<='3' + && zName[j+2]>='0' && zName[j+2]<='7' + && zName[j+3]>='0' && zName[j+3]<='7' + && (x = 64*(zName[j+1]-'0') + 8*(zName[j+2]-'0') + zName[j+3]-'0')!=0 + ){ + c = (unsigned char)x; + j += 3; + }else{ + c = zName[++j]; + } + } + zName[i++] = c; + } + zName[i] = 0; +} + + +/* +** Read the git-fast-import format from pIn and insert the corresponding +** content into the database. +*/ +static void git_fast_import(FILE *pIn){ + ImportFile *pFile, *pNew; + int i, mx; + char *z; + char *zUuid; + char *zName; + char *zPerm; + char *zFrom; + char *zTo; + char zLine[1000]; + + gg.xFinish = finish_noop; + while( fgets(zLine, sizeof(zLine), pIn) ){ + if( zLine[0]=='\n' || zLine[0]=='#' ) continue; + if( strncmp(zLine, "blob", 4)==0 ){ + gg.xFinish(); + gg.xFinish = finish_blob; + }else + if( strncmp(zLine, "commit ", 7)==0 ){ + gg.xFinish(); + gg.xFinish = finish_commit; + trim_newline(&zLine[7]); + z = &zLine[7]; + + /* The argument to the "commit" line might match either of these + ** patterns: + ** + ** (A) refs/heads/BRANCHNAME + ** (B) refs/tags/TAGNAME + ** + ** If pattern A is used, then the branchname used is as shown. + ** Except, the "master" branch which is the default branch name in + ** Git is changed to "trunk" which is the default name in Fossil. + ** If the pattern is B, then the new commit should be on the same + ** branch as its parent. And, we might need to add the TAGNAME + ** tag to the new commit. However, if there are multiple instances + ** of pattern B with the same TAGNAME, then only put the tag on the + ** last commit that holds that tag. 
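+ **
+ ** For example (hypothetical refs, not taken from any particular
+ ** repository):
+ **
+ **    commit refs/heads/master     -> check-in on branch "trunk"
+ **    commit refs/heads/feature-x  -> check-in on branch "feature-x"
+ **    commit refs/tags/v1.0        -> check-in on its parent's branch,
+ **                                    and tagged "v1.0"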
+ ** + ** None of the above is explained in the git-fast-export + ** documentation. We had to figure it out via trial and error. + */ + for(i=5; i=gg.nMergeAlloc ){ + gg.nMergeAlloc = gg.nMergeAlloc*2 + 10; + gg.azMerge = fossil_realloc(gg.azMerge, gg.nMergeAlloc*sizeof(char*)); + } + gg.azMerge[gg.nMerge] = resolve_committish(&zLine[6]); + if( gg.azMerge[gg.nMerge] ) gg.nMerge++; + }else + if( strncmp(zLine, "M ", 2)==0 ){ + import_prior_files(); + z = &zLine[2]; + zPerm = next_token(&z); + zUuid = next_token(&z); + zName = rest_of_line(&z); + dequote_git_filename(zName); + i = 0; + pFile = import_find_file(zName, &i, gg.nFile); + if( pFile==0 ){ + pFile = import_add_file(); + pFile->zName = fossil_strdup(zName); + } + pFile->isExe = (fossil_strcmp(zPerm, "100755")==0); + pFile->isLink = (fossil_strcmp(zPerm, "120000")==0); + fossil_free(pFile->zUuid); + pFile->zUuid = resolve_committish(zUuid); + pFile->isFrom = 0; + }else + if( strncmp(zLine, "D ", 2)==0 ){ + import_prior_files(); + z = &zLine[2]; + zName = rest_of_line(&z); + dequote_git_filename(zName); + i = 0; + while( (pFile = import_find_file(zName, &i, gg.nFile))!=0 ){ + if( pFile->isFrom==0 ) continue; + fossil_free(pFile->zName); + fossil_free(pFile->zPrior); + fossil_free(pFile->zUuid); + *pFile = gg.aFile[--gg.nFile]; + i--; + } + }else + if( strncmp(zLine, "C ", 2)==0 ){ + int nFrom; + import_prior_files(); + z = &zLine[2]; + zFrom = next_token(&z); + zTo = rest_of_line(&z); + i = 0; + mx = gg.nFile; + nFrom = strlen(zFrom); + while( (pFile = import_find_file(zFrom, &i, mx))!=0 ){ + if( pFile->isFrom==0 ) continue; + pNew = import_add_file(); + pFile = &gg.aFile[i-1]; + if( strlen(pFile->zName)>nFrom ){ + pNew->zName = mprintf("%s%s", zTo, pFile->zName[nFrom]); + }else{ + pNew->zName = fossil_strdup(pFile->zName); + } + pNew->isExe = pFile->isExe; + pNew->isLink = pFile->isLink; + pNew->zUuid = fossil_strdup(pFile->zUuid); + pNew->isFrom = 0; + } + }else + if( strncmp(zLine, "R ", 2)==0 ){ + int nFrom; + import_prior_files(); + z = &zLine[2]; + zFrom = next_token(&z); + zTo = rest_of_line(&z); + i = 0; + nFrom = strlen(zFrom); + while( (pFile = import_find_file(zFrom, &i, gg.nFile))!=0 ){ + if( pFile->isFrom==0 ) continue; + pNew = import_add_file(); + pFile = &gg.aFile[i-1]; + if( strlen(pFile->zName)>nFrom ){ + pNew->zName = mprintf("%s%s", zTo, pFile->zName[nFrom]); + }else{ + pNew->zName = fossil_strdup(pFile->zName); + } + pNew->zPrior = pFile->zName; + pNew->isExe = pFile->isExe; + pNew->isLink = pFile->isLink; + pNew->zUuid = pFile->zUuid; + pNew->isFrom = 0; + gg.nFile--; + *pFile = *pNew; + memset(pNew, 0, sizeof(*pNew)); + } + fossil_fatal("cannot handle R records, use --full-tree"); + }else + if( strncmp(zLine, "deleteall", 9)==0 ){ + gg.fromLoaded = 1; + }else + if( strncmp(zLine, "N ", 2)==0 ){ + /* No-op */ + }else + + { + goto malformed_line; + } + } + gg.xFinish(); + if( gg.hasLinks ){ + db_set_int("allow-symlinks", 1, 0); + } + import_reset(1); + return; + +malformed_line: + trim_newline(zLine); + fossil_fatal("bad fast-import line: [%s]", zLine); + return; +} + +static struct{ + int rev; /* SVN revision number */ + char *zDate; /* Date/time stamp */ + char *zUser; /* User name */ + char *zComment; /* Comment of a commit */ + const char *zTrunk; /* Name of trunk folder in repo root */ + int lenTrunk; /* String length of zTrunk */ + const char *zBranches; /* Name of branches folder in repo root */ + int lenBranches; /* String length of zBranches */ + const char *zTags; /* Name of tags folder in repo root */ + 
int lenTags; /* String length of zTags */ + Bag newBranches; /* Branches that were created in this revision */ + int incrFlag; /* Add svn-rev-nn tags on every checkin */ +} gsvn; +typedef struct { + char *zKey; + char *zVal; +} KeyVal; +typedef struct { + KeyVal *aHeaders; + int nHeaders; + char *pRawProps; + KeyVal *aProps; + int nProps; + Blob content; + int contentFlag; +} SvnRecord; + +#define svn_find_header(rec, zHeader) \ + svn_find_keyval((rec).aHeaders, (rec).nHeaders, (zHeader)) +#define svn_find_prop(rec, zProp) \ + svn_find_keyval((rec).aProps, (rec).nProps, (zProp)) +static char *svn_find_keyval( + KeyVal *aKeyVal, + int nKeyVal, + const char *zKey +){ + int i; + for(i=0; inHeaders; i++){ + fossil_free(rec->aHeaders[i].zKey); + } + fossil_free(rec->aHeaders); + fossil_free(rec->aProps); + fossil_free(rec->pRawProps); + blob_reset(&rec->content); +} + +static int svn_read_headers(FILE *pIn, SvnRecord *rec){ + char zLine[1000]; + + rec->aHeaders = 0; + rec->nHeaders = 0; + while( fgets(zLine, sizeof(zLine), pIn) ){ + if( zLine[0]!='\n' ) break; + } + if( feof(pIn) ) return 0; + do{ + char *sep; + if( zLine[0]=='\n' ) break; + rec->nHeaders += 1; + rec->aHeaders = fossil_realloc(rec->aHeaders, + sizeof(rec->aHeaders[0])*rec->nHeaders); + rec->aHeaders[rec->nHeaders-1].zKey = mprintf("%s", zLine); + sep = strchr(rec->aHeaders[rec->nHeaders-1].zKey, ':'); + if( !sep ){ + trim_newline(zLine); + fossil_fatal("bad header line: [%s]", zLine); + } + *sep = 0; + rec->aHeaders[rec->nHeaders-1].zVal = sep+1; + sep = strchr(rec->aHeaders[rec->nHeaders-1].zVal, '\n'); + *sep = 0; + while(rec->aHeaders[rec->nHeaders-1].zVal + && fossil_isspace(*(rec->aHeaders[rec->nHeaders-1].zVal)) ) + { + rec->aHeaders[rec->nHeaders-1].zVal++; + } + }while( fgets(zLine, sizeof(zLine), pIn) ); + if( zLine[0]!='\n' ){ + trim_newline(zLine); + fossil_fatal("svn-dump data ended unexpectedly"); + } + return 1; +} + +static void svn_read_props(FILE *pIn, SvnRecord *rec){ + int nRawProps = 0; + char *pRawProps; + const char *zLen; + + rec->pRawProps = 0; + rec->aProps = 0; + rec->nProps = 0; + zLen = svn_find_header(*rec, "Prop-content-length"); + if( zLen ){ + nRawProps = atoi(zLen); + } + if( nRawProps ){ + int got; + char *zLine; + rec->pRawProps = pRawProps = fossil_malloc( nRawProps ); + got = fread(rec->pRawProps, 1, nRawProps, pIn); + if( got!=nRawProps ){ + fossil_fatal("short read: got %d of %d bytes", got, nRawProps); + } + if( memcmp(&pRawProps[got-10], "PROPS-END\n", 10)!=0 ){ + fossil_fatal("svn-dump data ended unexpectedly"); + } + zLine = pRawProps; + while( zLine<(pRawProps+nRawProps-10) ){ + char *eol; + int propLen; + if( zLine[0]=='D' ){ + propLen = atoi(&zLine[2]); + eol = strchr(zLine, '\n'); + zLine = eol+1+propLen+1; + }else{ + if( zLine[0]!='K' ){ + fossil_fatal("svn-dump data format broken"); + } + propLen = atoi(&zLine[2]); + eol = strchr(zLine, '\n'); + zLine = eol+1; + eol = zLine+propLen; + if( *eol!='\n' ){ + fossil_fatal("svn-dump data format broken"); + } + *eol = 0; + rec->nProps += 1; + rec->aProps = fossil_realloc(rec->aProps, + sizeof(rec->aProps[0])*rec->nProps); + rec->aProps[rec->nProps-1].zKey = zLine; + zLine = eol+1; + if( zLine[0]!='V' ){ + fossil_fatal("svn-dump data format broken"); + } + propLen = atoi(&zLine[2]); + eol = strchr(zLine, '\n'); + zLine = eol+1; + eol = zLine+propLen; + if( *eol!='\n' ){ + fossil_fatal("svn-dump data format broken"); + } + *eol = 0; + rec->aProps[rec->nProps-1].zVal = zLine; + zLine = eol+1; + } + } + } +} + +static int svn_read_rec(FILE 
*pIn, SvnRecord *rec){ + const char *zLen; + int nLen = 0; + if( svn_read_headers(pIn, rec)==0 ) return 0; + svn_read_props(pIn, rec); + blob_zero(&rec->content); + zLen = svn_find_header(*rec, "Text-content-length"); + if( zLen ){ + rec->contentFlag = 1; + nLen = atoi(zLen); + blob_read_from_channel(&rec->content, pIn, nLen); + if( blob_size(&rec->content)!=nLen ){ + fossil_fatal("short read: got %d of %d bytes", + blob_size(&rec->content), nLen + ); + } + }else{ + rec->contentFlag = 0; + } + return 1; +} + +/* +** Returns the UUID for the RID, or NULL if not found. +** The returned string is allocated via db_text() and must be +** free()d by the caller. +*/ +char * rid_to_uuid(int rid) +{ + return db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); +} + +#define SVN_UNKNOWN 0 +#define SVN_TRUNK 1 +#define SVN_BRANCH 2 +#define SVN_TAG 3 + +#define MAX_INT_32 (0x7FFFFFFFL) + +static void svn_finish_revision(){ + Blob manifest; + static Stmt getChanges; + static Stmt getFiles; + static Stmt setRid; + Blob mcksum; + + blob_zero(&manifest); + db_static_prepare(&getChanges, "SELECT tid, tname, ttype, tparent" + " FROM xrevisions, xbranches ON (tbranch=tid)" + " WHERE trid ISNULL"); + db_static_prepare(&getFiles, "SELECT tpath, tuuid, tperm FROM xfiles" + " WHERE tbranch=:branch ORDER BY tpath"); + db_prepare(&setRid, "UPDATE xrevisions SET trid=:rid" + " WHERE trev=%d AND tbranch=:branch", gsvn.rev); + while( db_step(&getChanges)==SQLITE_ROW ){ + int branchId = db_column_int(&getChanges, 0); + const char *zBranch = db_column_text(&getChanges, 1); + int branchType = db_column_int(&getChanges, 2); + int parentRid = db_column_int(&getChanges, 3); + int mergeRid = parentRid; + Manifest *pParentManifest = 0; + ManifestFile *pParentFile = 0; + int sameAsParent = 1; + int parentBranch = 0; + if( !bag_find(&gsvn.newBranches, branchId) ){ + parentRid = db_int(0, "SELECT trid, max(trev) FROM xrevisions" + " WHERE trev<%d AND tbranch=%d", + gsvn.rev, branchId); + } + if( parentRid>0 ){ + pParentManifest = manifest_get(parentRid, CFTYPE_MANIFEST, 0); + pParentFile = manifest_file_next(pParentManifest, 0); + parentBranch = db_int(0, "SELECT tbranch FROM xrevisions WHERE trid=%d", + parentRid); + if( parentBranch!=branchId && branchType!=SVN_TAG ){ + sameAsParent = 0; + } + } + if( mergeRidzName,zFile)!=0 + || fossil_strcmp(pParentFile->zUuid,zUuid)!=0 + || fossil_strcmp(pParentFile->zPerm,zPerm)!=0 + ){ + sameAsParent = 0; + }else{ + pParentFile = manifest_file_next(pParentManifest, 0); + } + } + } + if( pParentFile ){ + sameAsParent = 0; + } + db_reset(&getFiles); + if( !sameAsParent ){ + if( parentRid>0 ){ + char *zParentUuid = rid_to_uuid(parentRid); + if( parentRid==mergeRid || mergeRid==0){ + char *zParentBranch = + db_text(0, "SELECT tname FROM xbranches WHERE tid=%d", + parentBranch + ); + blob_appendf(&manifest, "P %s\n", zParentUuid); + blob_appendf(&manifest, "T *branch * %F\n", zBranch); + blob_appendf(&manifest, "T *sym-%F *\n", zBranch); + if( gsvn.incrFlag ){ + blob_appendf(&manifest, "T +sym-svn-rev-%d *\n", gsvn.rev); + } + blob_appendf(&manifest, "T -sym-%F *\n", zParentBranch); + fossil_free(zParentBranch); + }else{ + char *zMergeUuid = rid_to_uuid(mergeRid); + blob_appendf(&manifest, "P %s %s\n", zParentUuid, zMergeUuid); + if( gsvn.incrFlag ){ + blob_appendf(&manifest, "T +sym-svn-rev-%d *\n", gsvn.rev); + } + fossil_free(zMergeUuid); + } + fossil_free(zParentUuid); + }else{ + blob_appendf(&manifest, "T *branch * %F\n", zBranch); + blob_appendf(&manifest, "T *sym-%F *\n", zBranch); 
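+          /* Note on the cards above: "T *tagname *" adds a propagating tag,
+          ** and the "*" in place of an artifact ID means the tag applies to
+          ** the check-in defined by this manifest; that is what places the
+          ** new check-in on branch zBranch rather than on its parent's
+          ** branch. */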
+ if( gsvn.incrFlag ){ + blob_appendf(&manifest, "T +sym-svn-rev-%d *\n", gsvn.rev); + } + } + }else if( branchType==SVN_TAG ){ + char *zParentUuid = rid_to_uuid(parentRid); + blob_reset(&manifest); + blob_appendf(&manifest, "D %s\n", gsvn.zDate); + blob_appendf(&manifest, "T +sym-%F %s\n", zBranch, zParentUuid); + fossil_free(zParentUuid); + } + }else{ + char *zParentUuid = rid_to_uuid(parentRid); + blob_appendf(&manifest, "D %s\n", gsvn.zDate); + if( branchType!=SVN_TAG ){ + blob_appendf(&manifest, "T +closed %s\n", zParentUuid); + }else{ + blob_appendf(&manifest, "T -sym-%F %s\n", zBranch, zParentUuid); + } + fossil_free(zParentUuid); + } + if( gsvn.zUser ){ + blob_appendf(&manifest, "U %F\n", gsvn.zUser); + }else{ + const char *zUserOvrd = find_option("user-override",0,1); + blob_appendf(&manifest, "U %F\n", zUserOvrd ? zUserOvrd : login_name()); + } + md5sum_blob(&manifest, &mcksum); + blob_appendf(&manifest, "Z %b\n", &mcksum); + blob_reset(&mcksum); + if( !sameAsParent ){ + int rid = content_put(&manifest); + db_bind_int(&setRid, ":branch", branchId); + db_bind_int(&setRid, ":rid", rid); + db_step(&setRid); + db_reset(&setRid); + }else if( branchType==SVN_TAG ){ + content_put(&manifest); + db_bind_int(&setRid, ":branch", branchId); + db_bind_int(&setRid, ":rid", parentRid); + db_step(&setRid); + db_reset(&setRid); + }else if( mergeRid==MAX_INT_32 ){ + content_put(&manifest); + db_multi_exec("DELETE FROM xrevisions WHERE tbranch=%d AND trev=%d", + branchId, gsvn.rev); + }else{ + db_multi_exec("DELETE FROM xrevisions WHERE tbranch=%d AND trev=%d", + branchId, gsvn.rev); + } + blob_reset(&manifest); + manifest_destroy(pParentManifest); + } + db_reset(&getChanges); + db_finalize(&setRid); +} + +static u64 svn_get_varint(const char **pz){ + unsigned int v = 0; + do{ + v = (v<<7) | ((*pz)[0]&0x7f); + }while( (*pz)++[0]&0x80 ); + return v; +} + +static void svn_apply_svndiff(Blob *pDiff, Blob *pSrc, Blob *pOut){ + const char *zDiff = blob_buffer(pDiff); + char *zOut; + if( blob_size(pDiff)<4 || memcmp(zDiff, "SVN", 4)!=0 ){ + fossil_fatal("Invalid svndiff0 format"); + } + zDiff += 4; + blob_zero(pOut); + while( zDiff<(blob_buffer(pDiff)+blob_size(pDiff)) ){ + u64 lenOut, lenInst, lenData, lenOld; + const char *zInst; + const char *zData; + + u64 offSrc = svn_get_varint(&zDiff); + /*lenSrc =*/ svn_get_varint(&zDiff); + lenOut = svn_get_varint(&zDiff); + lenInst = svn_get_varint(&zDiff); + lenData = svn_get_varint(&zDiff); + zInst = zDiff; + zData = zInst+lenInst; + lenOld = blob_size(pOut); + blob_resize(pOut, lenOut+lenOld); + zOut = blob_buffer(pOut)+lenOld; + while( zDiff 0 ){ + *zOut++ = *zCpy++; + } + } + zDiff += lenData; + } +} + +/* +** Extract the branch or tag that the given path is on. Return the branch ID. 
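+**
+** For example, with the default trunk/branches/tags layout (the paths
+** here are purely illustrative):
+**
+**    trunk/src/main.c      -> SVN_TRUNK,  branch "trunk",   file "src/main.c"
+**    branches/feature/a.c  -> SVN_BRANCH, branch "feature", file "a.c"
+**    tags/v1.0/a.c         -> SVN_TAG,    branch "v1.0",    file "a.c"
+**    other/path.c          -> outside the import paths; 0 is returned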
+ */ +static int svn_parse_path(char *zPath, char **zFile, int *type){ + char *zBranch = 0; + int branchId = 0; + *type = SVN_UNKNOWN; + *zFile = 0; + if( gsvn.lenTrunk==0 ){ + zBranch = "trunk"; + *zFile = zPath; + *type = SVN_TRUNK; + }else + if( strncmp(zPath, gsvn.zTrunk, gsvn.lenTrunk-1)==0 ){ + if( zPath[gsvn.lenTrunk-1]=='/' || zPath[gsvn.lenTrunk-1]==0 ){ + zBranch = "trunk"; + *zFile = zPath+gsvn.lenTrunk; + *type = SVN_TRUNK; + }else{ + zBranch = 0; + *type = SVN_UNKNOWN; + } + }else{ + if( strncmp(zPath, gsvn.zBranches, gsvn.lenBranches)==0 ){ + *zFile = zBranch = zPath+gsvn.lenBranches; + *type = SVN_BRANCH; + }else + if( strncmp(zPath, gsvn.zTags, gsvn.lenTags)==0 ){ + *zFile = zBranch = zPath+gsvn.lenTags; + *type = SVN_TAG; + }else{ /* Not a branch, tag or trunk */ + return 0; + } + while( **zFile && **zFile!='/' ){ (*zFile)++; } + if( **zFile ){ + **zFile = '\0'; + (*zFile)++; + } + } + if( *type!=SVN_UNKNOWN ){ + branchId = db_int(0, + "SELECT tid FROM xbranches WHERE tname=%Q AND ttype=%d", + zBranch, *type); + if( branchId==0 ){ + db_multi_exec("INSERT INTO xbranches (tname, ttype) VALUES(%Q, %d)", + zBranch, *type); + branchId = db_last_insert_rowid(); + } + } + return branchId; +} + +/* +** Read the svn-dump format from pIn and insert the corresponding +** content into the database. +*/ +static void svn_dump_import(FILE *pIn){ + SvnRecord rec; + int ver; + char *zTemp; + const char *zUuid; + Stmt addFile; + Stmt delPath; + Stmt addRev; + Stmt cpyPath; + Stmt cpyRoot; + Stmt revSrc; + + /* version */ + if( svn_read_rec(pIn, &rec) + && (zTemp = svn_find_header(rec, "SVN-fs-dump-format-version")) ){ + ver = atoi(zTemp); + if( ver!=2 && ver!=3 ){ + fossil_fatal("Unknown svn-dump format version: %d", ver); + } + }else{ + fossil_fatal("Input is not an svn-dump!"); + } + svn_free_rec(&rec); + /* UUID */ + if( !svn_read_rec(pIn, &rec) || !(zUuid = svn_find_header(rec, "UUID")) ){ + /* Removed the following line since UUID is not actually used + fossil_fatal("Missing UUID!"); */ + } + svn_free_rec(&rec); + + /* content */ + db_prepare(&addFile, + "INSERT INTO xfiles (tpath, tbranch, tuuid, tperm)" + " VALUES(:path, :branch, (SELECT uuid FROM blob WHERE rid=:rid), :perm)" + ); + db_prepare(&delPath, + "DELETE FROM xfiles" + " WHERE (tpath=:path OR (tpath>:path||'/' AND tpath<:path||'0'))" + " AND tbranch=:branch" + ); + db_prepare(&addRev, + "INSERT OR IGNORE INTO xrevisions (trev, tbranch) VALUES(:rev, :branch)" + ); + db_prepare(&cpyPath, + "INSERT INTO xfiles (tpath, tbranch, tuuid, tperm)" + " SELECT :path||:sep||substr(filename, length(:srcpath)+2), :branch, uuid, perm" + " FROM xfoci" + " WHERE checkinID=:rid" + " AND filename>:srcpath||'/'" + " AND filename<:srcpath||'0'" + ); + db_prepare(&cpyRoot, + "INSERT INTO xfiles (tpath, tbranch, tuuid, tperm)" + " SELECT :path||:sep||filename, :branch, uuid, perm" + " FROM xfoci" + " WHERE checkinID=:rid" + ); + db_prepare(&revSrc, + "UPDATE xrevisions SET tparent=:parent" + " WHERE trev=:rev AND tbranch=:branch AND tparent<:parent" + ); + gsvn.rev = -1; + bag_init(&gsvn.newBranches); + while( svn_read_rec(pIn, &rec) ){ + if( (zTemp = svn_find_header(rec, "Revision-number")) ){ /* revision node */ + /* finish previous revision */ + char *zDate = NULL; + if( gsvn.rev>=0 ){ + svn_finish_revision(); + fossil_free(gsvn.zUser); + fossil_free(gsvn.zComment); + fossil_free(gsvn.zDate); + bag_clear(&gsvn.newBranches); + } + /* start new revision */ + gsvn.rev = atoi(zTemp); + gsvn.zUser = mprintf("%s", svn_find_prop(rec, "svn:author")); 
+ gsvn.zComment = mprintf("%s", svn_find_prop(rec, "svn:log")); + zDate = svn_find_prop(rec, "svn:date"); + if( zDate ){ + gsvn.zDate = date_in_standard_format(zDate); + }else{ + gsvn.zDate = date_in_standard_format("now"); + } + db_bind_int(&addRev, ":rev", gsvn.rev); + fossil_print("\rImporting SVN revision: %d", gsvn.rev); + }else + if( (zTemp = svn_find_header(rec, "Node-path")) ){ /* file/dir node */ + char *zFile; + int branchType; + int branchId = svn_parse_path(zTemp, &zFile, &branchType); + char *zAction = svn_find_header(rec, "Node-action"); + char *zKind = svn_find_header(rec, "Node-kind"); + char *zPerm = svn_find_prop(rec, "svn:executable") ? "x" : 0; + int deltaFlag = 0; + int srcRev = 0; + if( branchId==0 ){ + svn_free_rec(&rec); + continue; + } + if( (zTemp = svn_find_header(rec, "Text-delta")) ){ + deltaFlag = strncmp(zTemp, "true", 4)==0; + } + if( strncmp(zAction, "delete", 6)==0 + || strncmp(zAction, "replace", 7)==0 ) + { + db_bind_int(&addRev, ":branch", branchId); + db_step(&addRev); + db_reset(&addRev); + if( zFile[0]!=0 ){ + db_bind_text(&delPath, ":path", zFile); + db_bind_int(&delPath, ":branch", branchId); + db_step(&delPath); + db_reset(&delPath); + }else{ + db_multi_exec("DELETE FROM xfiles WHERE tbranch=%d", branchId); + db_bind_int(&revSrc, ":parent", MAX_INT_32); + db_bind_int(&revSrc, ":rev", gsvn.rev); + db_bind_int(&revSrc, ":branch", branchId); + db_step(&revSrc); + db_reset(&revSrc); + } + } /* no 'else' here since 'replace' does both a 'delete' and an 'add' */ + if( strncmp(zAction, "add", 3)==0 + || strncmp(zAction, "replace", 7)==0 ) + { + char *zSrcPath = svn_find_header(rec, "Node-copyfrom-path"); + char *zSrcFile; + int srcRid = 0; + if( zSrcPath ){ + int srcBranch; + zTemp = svn_find_header(rec, "Node-copyfrom-rev"); + if( zTemp ){ + srcRev = atoi(zTemp); + }else{ + fossil_fatal("Missing copyfrom-rev"); + } + srcBranch = svn_parse_path(zSrcPath, &zSrcFile, &branchType); + if( srcBranch==0 ){ + fossil_fatal("Copy from path outside the import paths"); + } + srcRid = db_int(0, "SELECT trid, max(trev) FROM xrevisions" + " WHERE trev<=%d AND tbranch=%d", + srcRev, srcBranch); + if( srcRid>0 && srcBranch!=branchId ){ + db_bind_int(&addRev, ":branch", branchId); + db_step(&addRev); + db_reset(&addRev); + db_bind_int(&revSrc, ":parent", srcRid); + db_bind_int(&revSrc, ":rev", gsvn.rev); + db_bind_int(&revSrc, ":branch", branchId); + db_step(&revSrc); + db_reset(&revSrc); + } + } + if( zKind==0 ){ + fossil_fatal("Missing Node-kind"); + }else if( strncmp(zKind, "dir", 3)==0 ){ + if( zSrcPath ){ + if( srcRid>0 ){ + if( zSrcFile[0]==0 ){ + db_bind_text(&cpyRoot, ":path", zFile); + if( zFile[0]!=0 ){ + db_bind_text(&cpyRoot, ":sep", "/"); + }else{ + db_bind_text(&cpyRoot, ":sep", ""); + } + db_bind_int(&cpyRoot, ":branch", branchId); + db_bind_int(&cpyRoot, ":rid", srcRid); + db_step(&cpyRoot); + db_reset(&cpyRoot); + }else{ + db_bind_text(&cpyPath, ":path", zFile); + if( zFile[0]!=0 ){ + db_bind_text(&cpyPath, ":sep", "/"); + }else{ + db_bind_text(&cpyPath, ":sep", ""); + } + db_bind_int(&cpyPath, ":branch", branchId); + db_bind_text(&cpyPath, ":srcpath", zSrcFile); + db_bind_int(&cpyPath, ":rid", srcRid); + db_step(&cpyPath); + db_reset(&cpyPath); + } + } + } + if( zFile[0]==0 ){ + bag_insert(&gsvn.newBranches, branchId); + } + }else{ + int rid = 0; + if( zSrcPath ){ + rid = db_int(0, "SELECT rid FROM blob WHERE uuid=(" + " SELECT uuid FROM xfoci" + " WHERE checkinID=%d AND filename=%Q" + ")", + srcRid, zSrcFile); + } + if( deltaFlag ){ + Blob deltaSrc; + 
Blob target; + if( rid!=0 ){ + content_get(rid, &deltaSrc); + }else{ + blob_zero(&deltaSrc); + } + svn_apply_svndiff(&rec.content, &deltaSrc, &target); + rid = content_put(&target); + }else if( rec.contentFlag ){ + rid = content_put(&rec.content); + } + db_bind_text(&addFile, ":path", zFile); + db_bind_int(&addFile, ":branch", branchId); + db_bind_int(&addFile, ":rid", rid); + db_bind_text(&addFile, ":perm", zPerm); + db_step(&addFile); + db_reset(&addFile); + db_bind_int(&addRev, ":branch", branchId); + db_step(&addRev); + db_reset(&addRev); + } + }else + if( strncmp(zAction, "change", 6)==0 ){ + int rid = 0; + if( zKind==0 ){ + fossil_fatal("Missing Node-kind"); + } + if( strncmp(zKind, "dir", 3)!=0 ){ + if( deltaFlag ){ + Blob deltaSrc; + Blob target; + rid = db_int(0, "SELECT rid FROM blob WHERE uuid=(" + " SELECT tuuid FROM xfiles" + " WHERE tpath=%Q AND tbranch=%d" + ")", zFile, branchId); + content_get(rid, &deltaSrc); + svn_apply_svndiff(&rec.content, &deltaSrc, &target); + rid = content_put(&target); + }else{ + rid = content_put(&rec.content); + } + db_bind_text(&addFile, ":path", zFile); + db_bind_int(&addFile, ":branch", branchId); + db_bind_int(&addFile, ":rid", rid); + db_bind_text(&addFile, ":perm", zPerm); + db_step(&addFile); + db_reset(&addFile); + db_bind_int(&addRev, ":branch", branchId); + db_step(&addRev); + db_reset(&addRev); + } + }else + if( strncmp(zAction, "delete", 6)!=0 ){ /* already did this one above */ + fossil_fatal("Unknown Node-action"); + } + }else{ + fossil_fatal("Unknown record type"); + } + svn_free_rec(&rec); + } + svn_finish_revision(); + fossil_free(gsvn.zUser); + fossil_free(gsvn.zComment); + fossil_free(gsvn.zDate); + db_finalize(&addFile); + db_finalize(&delPath); + db_finalize(&addRev); + db_finalize(&cpyPath); + db_finalize(&cpyRoot); + db_finalize(&revSrc); + fossil_print(" Done!\n"); +} + +/* +** COMMAND: import +** +** Usage: %fossil import ?--git? ?OPTIONS? NEW-REPOSITORY ?INPUT-FILE? +** or: %fossil import --svn ?OPTIONS? NEW-REPOSITORY ?INPUT-FILE? +** +** Read interchange format generated by another VCS and use it to +** construct a new Fossil repository named by the NEW-REPOSITORY +** argument. If no input file is supplied the interchange format +** data is read from standard input. +** +** The following formats are currently understood by this command +** +** --git Import from the git-fast-export file format (default) +** +** --svn Import from the svnadmin-dump file format. The default +** behaviour (unless overridden by --flat) is to treat 3 +** folders in the SVN root as special, following the +** common layout of SVN repositories. These are (by +** default) trunk/, branches/ and tags/ +** Options: +** --trunk FOLDER Name of trunk folder +** --branches FOLDER Name of branches folder +** --tags FOLDER Name of tags folder +** --base PATH Path to project root in repository +** --flat The whole dump is a single branch +** +** Common Options: +** -i|--incremental allow importing into an existing repository +** -f|--force overwrite repository if already exist +** -q|--quiet omit progress output +** --no-rebuild skip the "rebuilding metadata" step +** --no-vacuum skip the final VACUUM of the database file +** +** The --incremental option allows an existing repository to be extended +** with new content. 
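+**
+** For example, a Git repository can typically be converted by piping the
+** output of git-fast-export straight into this command (the repository
+** name below is only illustrative):
+**
+**    git fast-export --all | fossil import --git new-repo.fossil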
+** +** See also: export +*/ +void import_cmd(void){ + char *zPassword; + FILE *pIn; + Stmt q; + int forceFlag = find_option("force", "f", 0)!=0; + int svnFlag = find_option("svn", 0, 0)!=0; + int omitRebuild = find_option("no-rebuild",0,0)!=0; + int omitVacuum = find_option("no-vacuum",0,0)!=0; + + /* Options common to all input formats */ + int incrFlag = find_option("incremental", "i", 0)!=0; + + /* Options for --svn only */ + const char *zBase=""; + int flatFlag=0; + + if( svnFlag ){ + /* Get --svn related options here, so verify_all_options() fail when svn + * only option are specify with --git + */ + zBase = find_option("base", 0, 1); + flatFlag = find_option("flat", 0, 0)!=0; + gsvn.zTrunk = find_option("trunk", 0, 1); + gsvn.zBranches = find_option("branches", 0, 1); + gsvn.zTags = find_option("tags", 0, 1); + gsvn.incrFlag = incrFlag; + }else{ + find_option("git",0,0); /* Skip the --git option for now */ + } + verify_all_options(); + + if( g.argc!=3 && g.argc!=4 ){ + usage("--git|--svn ?OPTIONS? NEW-REPOSITORY ?INPUT-FILE?"); + } + if( g.argc==4 ){ + pIn = fossil_fopen(g.argv[3], "rb"); + }else{ + pIn = stdin; + fossil_binary_mode(pIn); + } + if( !incrFlag ){ + if( forceFlag ) file_delete(g.argv[2]); + db_create_repository(g.argv[2]); + } + db_open_repository(g.argv[2]); + db_open_config(0); + + db_begin_transaction(); + if( !incrFlag ) db_initial_setup(0, 0, 0); + + if( svnFlag ){ + db_multi_exec( + "CREATE TEMP TABLE xrevisions(" + " trev INTEGER, tbranch INT, trid INT, tparent INT DEFAULT 0," + " UNIQUE(tbranch, trev)" + ");" + "CREATE INDEX temp.i_xrevisions ON xrevisions(trid);" + "CREATE TEMP TABLE xfiles(" + " tpath TEXT, tbranch INT, tuuid TEXT, tperm TEXT," + " UNIQUE (tbranch, tpath) ON CONFLICT REPLACE" + ");" + "CREATE TEMP TABLE xbranches(" + " tid INTEGER PRIMARY KEY, tname TEXT, ttype INT," + " UNIQUE(tname, ttype)" + ");" + "CREATE VIRTUAL TABLE temp.xfoci USING files_of_checkin;" + ); + if( zBase==0 ){ zBase = ""; } + if( strlen(zBase)>0 ){ + if( zBase[strlen(zBase)-1]!='/' ){ + zBase = mprintf("%s/", zBase); + } + } + if( flatFlag ){ + gsvn.zTrunk = zBase; + gsvn.zBranches = 0; + gsvn.zTags = 0; + gsvn.lenTrunk = strlen(zBase); + gsvn.lenBranches = 0; + gsvn.lenTags = 0; + }else{ + if( gsvn.zTrunk==0 ){ gsvn.zTrunk = "trunk/"; } + if( gsvn.zBranches==0 ){ gsvn.zBranches = "branches/"; } + if( gsvn.zTags==0 ){ gsvn.zTags = "tags/"; } + gsvn.zTrunk = mprintf("%s%s", zBase, gsvn.zTrunk); + gsvn.zBranches = mprintf("%s%s", zBase, gsvn.zBranches); + gsvn.zTags = mprintf("%s%s", zBase, gsvn.zTags); + gsvn.lenTrunk = strlen(gsvn.zTrunk); + gsvn.lenBranches = strlen(gsvn.zBranches); + gsvn.lenTags = strlen(gsvn.zTags); + if( gsvn.zTrunk[gsvn.lenTrunk-1]!='/' ){ + gsvn.zTrunk = mprintf("%s/", gsvn.zTrunk); + gsvn.lenTrunk++; + } + if( gsvn.zBranches[gsvn.lenBranches-1]!='/' ){ + gsvn.zBranches = mprintf("%s/", gsvn.zBranches); + gsvn.lenBranches++; + } + if( gsvn.zTags[gsvn.lenTags-1]!='/' ){ + gsvn.zTags = mprintf("%s/", gsvn.zTags); + gsvn.lenTags++; + } + } + svn_dump_import(pIn); + }else{ + /* The following temp-tables are used to hold information needed for + ** the import. + ** + ** The XMARK table provides a mapping from fast-import "marks" and symbols + ** into artifact ids (UUIDs - the 40-byte hex SHA1 hash of artifacts). + ** Given any valid fast-import symbol, the corresponding fossil rid and + ** uuid can found by searching against the xmark.tname field. + ** + ** The XBRANCH table maps commit marks and symbols into the branch those + ** commits belong to. 
If xbranch.tname is a fast-import symbol for a + ** check-in then xbranch.brnm is the branch that check-in is part of. + ** + ** The XTAG table records information about tags that need to be applied + ** to various branches after the import finishes. The xtag.tcontent field + ** contains the text of an artifact that will add a tag to a check-in. + ** The git-fast-export file format might specify the same tag multiple + ** times but only the last tag should be used. And we do not know which + ** occurrence of the tag is the last until the import finishes. + */ + db_multi_exec( + "CREATE TEMP TABLE xmark(tname TEXT UNIQUE, trid INT, tuuid TEXT);" + "CREATE TEMP TABLE xbranch(tname TEXT UNIQUE, brnm TEXT);" + "CREATE TEMP TABLE xtag(tname TEXT UNIQUE, tcontent TEXT);" + ); + + manifest_crosslink_begin(); + git_fast_import(pIn); + db_prepare(&q, "SELECT tcontent FROM xtag"); + while( db_step(&q)==SQLITE_ROW ){ + Blob record; + db_ephemeral_blob(&q, 0, &record); + fast_insert_content(&record, 0, 0, 1); + import_reset(0); + } + db_finalize(&q); + manifest_crosslink_end(MC_NONE); + } + + verify_cancel(); + db_end_transaction(0); + fossil_print(" \r"); + if( omitRebuild ){ + omitVacuum = 1; + }else{ + db_begin_transaction(); + fossil_print("Rebuilding repository meta-data...\n"); + rebuild_db(0, 1, !incrFlag); + verify_cancel(); + db_end_transaction(0); + } + if( !omitVacuum ){ + fossil_print("Vacuuming..."); fflush(stdout); + db_multi_exec("VACUUM"); + } + fossil_print(" ok\n"); + if( !incrFlag ){ + fossil_print("project-id: %s\n", db_get("project-code", 0)); + fossil_print("server-id: %s\n", db_get("server-code", 0)); + zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin); + fossil_print("admin-user: %s (password is \"%s\")\n", g.zLogin, zPassword); + } +} Index: src/info.c ================================================================== --- src/info.c +++ src/info.c @@ -22,12 +22,12 @@ #include "config.h" #include "info.h" #include /* -** Return a string (in memory obtained from malloc) holding a -** comma-separated list of tags that apply to check-in with +** Return a string (in memory obtained from malloc) holding a +** comma-separated list of tags that apply to check-in with ** record-id rid. If the "propagatingOnly" flag is true, then only ** show branch tags (tags that propagate to children). ** ** Return NULL if there are no such tags. */ @@ -48,134 +48,200 @@ ** ** * The UUID ** * The record ID ** * mtime and ctime ** * who signed it +** */ -void show_common_info(int rid, const char *zUuidName, int showComment){ +void show_common_info( + int rid, /* The rid for the check-in to display info for */ + const char *zUuidName, /* Name of the UUID */ + int showComment, /* True to show the check-in comment */ + int showFamily /* True to show parents and children */ +){ Stmt q; char *zComment = 0; char *zTags; char *zDate; char *zUuid; zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); if( zUuid ){ - zDate = db_text(0, + zDate = db_text(0, "SELECT datetime(mtime) || ' UTC' FROM event WHERE objid=%d", rid ); /* 01234567890123 */ - printf("%-13s %s %s\n", zUuidName, zUuid, zDate ? zDate : ""); + fossil_print("%-13s %s %s\n", zUuidName, zUuid, zDate ? 
zDate : ""); free(zUuid); free(zDate); } - db_prepare(&q, "SELECT uuid, pid FROM plink JOIN blob ON pid=rid " - " WHERE cid=%d", rid); - while( db_step(&q)==SQLITE_ROW ){ - const char *zUuid = db_column_text(&q, 0); - zDate = db_text("", - "SELECT datetime(mtime) || ' UTC' FROM event WHERE objid=%d", - db_column_int(&q, 1) - ); - printf("parent: %s %s\n", zUuid, zDate); - free(zDate); - } - db_finalize(&q); - db_prepare(&q, "SELECT uuid, cid FROM plink JOIN blob ON cid=rid " - " WHERE pid=%d", rid); - while( db_step(&q)==SQLITE_ROW ){ - const char *zUuid = db_column_text(&q, 0); - zDate = db_text("", - "SELECT datetime(mtime) || ' UTC' FROM event WHERE objid=%d", - db_column_int(&q, 1) - ); - printf("child: %s %s\n", zUuid, zDate); - free(zDate); - } - db_finalize(&q); + if( zUuid && showComment ){ + zComment = db_text(0, + "SELECT coalesce(ecomment,comment) || " + " ' (user: ' || coalesce(euser,user,'?') || ')' " + " FROM event WHERE objid=%d", + rid + ); + } + if( showFamily ){ + db_prepare(&q, "SELECT uuid, pid, isprim FROM plink JOIN blob ON pid=rid " + " WHERE cid=%d" + " ORDER BY isprim DESC, mtime DESC /*sort*/", rid); + while( db_step(&q)==SQLITE_ROW ){ + const char *zUuid = db_column_text(&q, 0); + const char *zType = db_column_int(&q, 2) ? "parent:" : "merged-from:"; + zDate = db_text("", + "SELECT datetime(mtime) || ' UTC' FROM event WHERE objid=%d", + db_column_int(&q, 1) + ); + fossil_print("%-13s %s %s\n", zType, zUuid, zDate); + free(zDate); + } + db_finalize(&q); + db_prepare(&q, "SELECT uuid, cid, isprim FROM plink JOIN blob ON cid=rid " + " WHERE pid=%d" + " ORDER BY isprim DESC, mtime DESC /*sort*/", rid); + while( db_step(&q)==SQLITE_ROW ){ + const char *zUuid = db_column_text(&q, 0); + const char *zType = db_column_int(&q, 2) ? "child:" : "merged-into:"; + zDate = db_text("", + "SELECT datetime(mtime) || ' UTC' FROM event WHERE objid=%d", + db_column_int(&q, 1) + ); + fossil_print("%-13s %s %s\n", zType, zUuid, zDate); + free(zDate); + } + db_finalize(&q); + } zTags = info_tags_of_checkin(rid, 0); if( zTags && zTags[0] ){ - printf("tags: %s\n", zTags); + fossil_print("tags: %s\n", zTags); } free(zTags); if( zComment ){ - printf("comment:\n%s\n", zComment); + fossil_print("comment: "); + comment_print(zComment, 0, 14, -1, g.comFmtFlags); free(zComment); } } + +/* +** Print information about the URLs used to access a repository and +** checkouts in a repository. +*/ +static void extraRepoInfo(void){ + Stmt s; + db_prepare(&s, "SELECT substr(name,7), date(mtime,'unixepoch')" + " FROM config" + " WHERE name GLOB 'ckout:*' ORDER BY name"); + while( db_step(&s)==SQLITE_ROW ){ + const char *zName; + const char *zCkout = db_column_text(&s, 0); + if( g.localOpen ){ + if( fossil_strcmp(zCkout, g.zLocalRoot)==0 ) continue; + zName = "alt-root:"; + }else{ + zName = "check-out:"; + } + fossil_print("%-11s %-54s %s\n", zName, zCkout, + db_column_text(&s, 1)); + } + db_finalize(&s); + db_prepare(&s, "SELECT substr(name,9), date(mtime,'unixepoch')" + " FROM config" + " WHERE name GLOB 'baseurl:*' ORDER BY name"); + while( db_step(&s)==SQLITE_ROW ){ + fossil_print("access-url: %-54s %s\n", db_column_text(&s, 0), + db_column_text(&s, 1)); + } + db_finalize(&s); +} /* ** COMMAND: info ** -** Usage: %fossil info ?ARTIFACT-ID|FILENAME? +** Usage: %fossil info ?VERSION | REPOSITORY_FILENAME? ?OPTIONS? ** ** With no arguments, provide information about the current tree. 
** If an argument is specified, provide information about the object -** in the respository of the current tree that the argument refers +** in the repository of the current tree that the argument refers ** to. Or if the argument is the name of a repository, show ** information about that repository. +** +** Use the "finfo" command to get information about a specific +** file in a checkout. +** +** Options: +** +** -R|--repository FILE Extract info from repository FILE +** -v|--verbose Show extra information +** +** See also: annotate, artifact, finfo, timeline */ void info_cmd(void){ i64 fsize; - if( g.argc!=2 && g.argc!=3 ){ - usage("?FILENAME|ARTIFACT-ID?"); + int verboseFlag = find_option("verbose","v",0)!=0; + if( !verboseFlag ){ + verboseFlag = find_option("detail","l",0)!=0; /* deprecated */ } + if( g.argc==3 && (fsize = file_size(g.argv[2]))>0 && (fsize&0x1ff)==0 ){ db_open_config(0); - db_record_repository_filename(g.argv[2]); db_open_repository(g.argv[2]); - printf("project-name: %s\n", db_get("project-name", "")); - printf("project-code: %s\n", db_get("project-code", "")); - printf("server-code: %s\n", db_get("server-code", "")); + db_record_repository_filename(g.argv[2]); + fossil_print("project-name: %s\n", db_get("project-name", "")); + fossil_print("project-code: %s\n", db_get("project-code", "")); + extraRepoInfo(); return; } - db_must_be_within_tree(); + db_find_and_open_repository(0,0); + verify_all_options(); if( g.argc==2 ){ int vid; /* 012345678901234 */ db_record_repository_filename(0); - printf("project-name: %s\n", db_get("project-name", "")); - printf("repository: %s\n", db_lget("repository", "")); - printf("local-root: %s\n", g.zLocalRoot); -#ifdef __MINGW32__ - if( g.zHome ){ - printf("user-home: %s\n", g.zHome); - } -#endif - printf("project-code: %s\n", db_get("project-code", "")); - printf("server-code: %s\n", db_get("server-code", "")); - vid = db_lget_int("checkout", 0); - if( vid==0 ){ - printf("checkout: nil\n"); - }else{ - show_common_info(vid, "checkout:", 1); - } + fossil_print("project-name: %s\n", db_get("project-name", "")); + if( g.localOpen ){ + fossil_print("repository: %s\n", db_repository_filename()); + fossil_print("local-root: %s\n", g.zLocalRoot); + } + if( verboseFlag ) extraRepoInfo(); + if( g.zConfigDbName ){ + fossil_print("config-db: %s\n", g.zConfigDbName); + } + fossil_print("project-code: %s\n", db_get("project-code", "")); + vid = g.localOpen ? db_lget_int("checkout", 0) : 0; + if( vid ){ + show_common_info(vid, "checkout:", 1, 1); + } + fossil_print("check-ins: %d\n", + db_int(-1, "SELECT count(*) FROM event WHERE type='ci' /*scan*/")); }else{ int rid; rid = name_to_rid(g.argv[2]); if( rid==0 ){ - fossil_panic("no such object: %s\n", g.argv[2]); + fossil_fatal("no such object: %s\n", g.argv[2]); } - show_common_info(rid, "uuid:", 1); + show_common_info(rid, "uuid:", 1, 1); } } /* -** Show information about all tags on a given node. +** Show information about all tags on a given check-in. 
*/ -static void showTags(int rid, const char *zNotGlob){ +static void showTags(int rid){ Stmt q; int cnt = 0; db_prepare(&q, "SELECT tag.tagid, tagname, " " (SELECT uuid FROM blob WHERE rid=tagxref.srcid AND rid!=%d)," - " value, datetime(tagxref.mtime,'localtime'), tagtype," + " value, datetime(tagxref.mtime,toLocal()), tagtype," " (SELECT uuid FROM blob WHERE rid=tagxref.origid AND rid!=%d)" " FROM tagxref JOIN tag ON tagxref.tagid=tag.tagid" - " WHERE tagxref.rid=%d AND tagname NOT GLOB '%s'" - " ORDER BY tagname", rid, rid, rid, zNotGlob + " WHERE tagxref.rid=%d" + " ORDER BY tagname /*sort*/", rid, rid, rid ); while( db_step(&q)==SQLITE_ROW ){ const char *zTagname = db_column_text(&q, 1); const char *zSrcUuid = db_column_text(&q, 2); const char *zValue = db_column_text(&q, 3); @@ -187,27 +253,29 @@ @
    Tags And Properties
    @
      } @
    • if( tagtype==0 ){ - @ %h(zTagname) cancelled + @ %h(zTagname) cancelled }else if( zValue ){ - @ %h(zTagname)=%h(zValue) + @ %h(zTagname)=%h(zValue) }else { - @ %h(zTagname) + @ %h(zTagname) } if( tagtype==2 ){ if( zOrigUuid && zOrigUuid[0] ){ @ inherited from hyperlink_to_uuid(zOrigUuid); }else{ @ propagates to descendants } - if( zValue && strcmp(zTagname,"branch")==0 ){ +#if 0 + if( zValue && fossil_strcmp(zTagname,"branch")==0 ){ @    - @ branch timeline + @ %z(href("%R/timeline?r=%T",zValue))branch timeline } +#endif } if( zSrcUuid && zSrcUuid[0] ){ if( tagtype==0 ){ @ by }else{ @@ -215,102 +283,318 @@ } hyperlink_to_uuid(zSrcUuid); @ on hyperlink_to_date(zDate,0); } + @
    • } db_finalize(&q); if( cnt ){ @
    } } +/* +** Show the context graph (immediate parents and children) for +** check-in rid. +*/ +void render_checkin_context(int rid, int parentsOnly){ + Blob sql; + Stmt q; + blob_zero(&sql); + blob_append(&sql, timeline_query_for_www(), -1); + db_multi_exec( + "CREATE TEMP TABLE IF NOT EXISTS ok(rid INTEGER PRIMARY KEY);" + "INSERT INTO ok VALUES(%d);" + "INSERT OR IGNORE INTO ok SELECT pid FROM plink WHERE cid=%d;", + rid, rid + ); + if( !parentsOnly ){ + db_multi_exec( + "INSERT OR IGNORE INTO ok SELECT cid FROM plink WHERE pid=%d;", rid + ); + } + blob_append_sql(&sql, " AND event.objid IN ok ORDER BY mtime DESC"); + db_prepare(&q, "%s", blob_sql_text(&sql)); + www_print_timeline(&q, TIMELINE_DISJOINT|TIMELINE_GRAPH, 0, 0, rid, 0); + db_finalize(&q); +} + /* -** Append the difference between two RIDs to the output +** Append the difference between artifacts to the output */ -static void append_diff(int fromid, int toid){ +static void append_diff( + const char *zFrom, /* Diff from this artifact */ + const char *zTo, /* ... to this artifact */ + u64 diffFlags, /* Diff formatting flags */ + ReCompiled *pRe /* Only show change matching this regex */ +){ + int fromid; + int toid; Blob from, to, out; - content_get(fromid, &from); - content_get(toid, &to); + if( zFrom ){ + fromid = uuid_to_rid(zFrom, 0); + content_get(fromid, &from); + }else{ + blob_zero(&from); + } + if( zTo ){ + toid = uuid_to_rid(zTo, 0); + content_get(toid, &to); + }else{ + blob_zero(&to); + } blob_zero(&out); - text_diff(&from, &to, &out, 5); - @ %h(blob_str(&out)) + if( diffFlags & DIFF_SIDEBYSIDE ){ + text_diff(&from, &to, &out, pRe, diffFlags | DIFF_HTML | DIFF_NOTTOOBIG); + @ %s(blob_str(&out)) + }else{ + text_diff(&from, &to, &out, pRe, + diffFlags | DIFF_LINENO | DIFF_HTML | DIFF_NOTTOOBIG); + @
    +    @ %s(blob_str(&out))
    +    @ 
    + } blob_reset(&from); blob_reset(&to); - blob_reset(&out); + blob_reset(&out); +} + +/* +** Write a line of web-page output that shows changes that have occurred +** to a file between two check-ins. +*/ +static void append_file_change_line( + const char *zName, /* Name of the file that has changed */ + const char *zOld, /* blob.uuid before change. NULL for added files */ + const char *zNew, /* blob.uuid after change. NULL for deletes */ + const char *zOldName, /* Prior name. NULL if no name change. */ + u64 diffFlags, /* Flags for text_diff(). Zero to omit diffs */ + ReCompiled *pRe, /* Only show diffs that match this regex, if not NULL */ + int mperm /* executable or symlink permission for zNew */ +){ + @

    + if( !g.perm.Hyperlink ){ + if( zNew==0 ){ + @ Deleted %h(zName). + }else if( zOld==0 ){ + @ Added %h(zName). + }else if( zOldName!=0 && fossil_strcmp(zName,zOldName)!=0 ){ + @ Name change from %h(zOldName) to %h(zName). + }else if( fossil_strcmp(zNew, zOld)==0 ){ + if( mperm==PERM_EXE ){ + @ %h(zName) became executable. + }else if( mperm==PERM_LNK ){ + @ %h(zName) became a symlink. + }else{ + @ %h(zName) became a regular file. + } + }else{ + @ Changes to %h(zName). + } + if( diffFlags ){ + append_diff(zOld, zNew, diffFlags, pRe); + } + }else{ + if( zOld && zNew ){ + if( fossil_strcmp(zOld, zNew)!=0 ){ + @ Modified %z(href("%R/finfo?name=%T",zName))%h(zName) + @ from %z(href("%R/artifact/%!S",zOld))[%S(zOld)] + @ to %z(href("%R/artifact/%!S",zNew))[%S(zNew)]. + }else if( zOldName!=0 && fossil_strcmp(zName,zOldName)!=0 ){ + @ Name change + @ from %z(href("%R/finfo?name=%T",zOldName))%h(zOldName) + @ to %z(href("%R/finfo?name=%T",zName))%h(zName). + }else{ + @ %z(href("%R/finfo?name=%T",zName))%h(zName) became + if( mperm==PERM_EXE ){ + @ executable with contents + }else if( mperm==PERM_LNK ){ + @ a symlink with target + }else{ + @ a regular file with contents + } + @ %z(href("%R/artifact/%!S",zNew))[%S(zNew)]. + } + }else if( zOld ){ + @ Deleted %z(href("%R/finfo?name=%T",zName))%h(zName) + @ version %z(href("%R/artifact/%!S",zOld))[%S(zOld)]. + }else{ + @ Added %z(href("%R/finfo?name=%T",zName))%h(zName) + @ version %z(href("%R/artifact/%!S",zNew))[%S(zNew)]. + } + if( diffFlags ){ + append_diff(zOld, zNew, diffFlags, pRe); + }else if( zOld && zNew && fossil_strcmp(zOld,zNew)!=0 ){ + @    + @ %z(href("%R/fdiff?v1=%!S&v2=%!S&sbs=1",zOld,zNew))[diff] + } + } + @

    +} + +/* +** Generate javascript to enhance HTML diffs. +*/ +void append_diff_javascript(int sideBySide){ + if( !sideBySide ) return; + @ } +/* +** Construct an appropriate diffFlag for text_diff() based on query +** parameters and the to boolean arguments. +*/ +u64 construct_diff_flags(int verboseFlag, int sideBySide){ + u64 diffFlags = 0; /* Zero means do not show any diff */ + if( verboseFlag!=0 ){ + int x; + if( sideBySide ){ + diffFlags = DIFF_SIDEBYSIDE; + + /* "dw" query parameter determines width of each column */ + x = atoi(PD("dw","80"))*(DIFF_CONTEXT_MASK+1); + if( x<0 || x>DIFF_WIDTH_MASK ) x = DIFF_WIDTH_MASK; + diffFlags += x; + } + + if( P("w") ){ + diffFlags |= DIFF_IGNORE_ALLWS; + } + /* "dc" query parameter determines lines of context */ + x = atoi(PD("dc","7")); + if( x<0 || x>DIFF_CONTEXT_MASK ) x = DIFF_CONTEXT_MASK; + diffFlags += x; + + /* The "noopt" parameter disables diff optimization */ + if( PD("noopt",0)!=0 ) diffFlags |= DIFF_NOOPT; + diffFlags |= DIFF_STRIP_EOLCR; + } + return diffFlags; +} /* ** WEBPAGE: vinfo ** WEBPAGE: ci -** URL: /ci?name=RID|ARTIFACTID +** URL: /ci?name=ARTIFACTID +** URL: /vinfo?name=ARTIFACTID ** -** Display information about a particular check-in. +** Display information about a particular check-in. ** -** We also jump here from /info if the name is a version. +** We also jump here from /info if the name is a check-in ** ** If the /ci page is used (instead of /vinfo or /info) then the ** default behavior is to show unified diffs of all file changes. ** With /vinfo and /info, only a list of the changed files are ** shown, without diffs. This behavior is inverted if the ** "show-version-diffs" setting is turned on. */ void ci_page(void){ - Stmt q; + Stmt q1, q2, q3; int rid; int isLeaf; - int showDiff; - const char *zName; + int verboseFlag; /* True to show diffs */ + int sideBySide; /* True for side-by-side diffs */ + u64 diffFlags; /* Flag parameter for text_diff() */ + const char *zName; /* Name of the check-in to be displayed */ + const char *zUuid; /* UUID of zName */ + const char *zParent; /* UUID of the parent check-in (if any) */ + const char *zRe; /* regex parameter */ + ReCompiled *pRe = 0; /* regex */ + const char *zW; /* URL param for ignoring whitespace */ + const char *zPage = "vinfo"; /* Page that shows diffs */ + const char *zPageHide = "ci"; /* Page that hides diffs */ login_check_credentials(); - if( !g.okRead ){ login_needed(); return; } + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } zName = P("name"); rid = name_to_rid_www("name"); if( rid==0 ){ style_header("Check-in Information Error"); @ No such object: %h(g.argv[2]) style_footer(); return; } - isLeaf = !db_exists("SELECT 1 FROM plink WHERE pid=%d", rid); - db_prepare(&q, - "SELECT uuid, datetime(mtime, 'localtime'), user, comment" + zRe = P("regex"); + if( zRe ) re_compile(&pRe, zRe, 0); + zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); + zParent = db_text(0, + "SELECT uuid FROM plink, blob" + " WHERE plink.cid=%d AND blob.rid=plink.pid AND plink.isprim", + rid + ); + isLeaf = is_a_leaf(rid); + db_prepare(&q1, + "SELECT uuid, datetime(mtime,toLocal()), user, comment," + " datetime(omtime,toLocal()), mtime" " FROM blob, event" " WHERE blob.rid=%d" " AND event.objid=%d", rid, rid ); - if( db_step(&q)==SQLITE_ROW ){ - const char *zUuid = db_column_text(&q, 0); - char *zTitle = mprintf("Check-in [%.10s]", zUuid); + sideBySide = !is_false(PD("sbs","1")); + if( db_step(&q1)==SQLITE_ROW ){ + const char *zUuid = db_column_text(&q1, 0); 
char *zEUser, *zEComment; const char *zUser; const char *zComment; const char *zDate; - style_header(zTitle); + const char *zOrigDate; + + style_header("Check-in [%S]", zUuid); login_anonymous_available(); - free(zTitle); zEUser = db_text(0, - "SELECT value FROM tagxref WHERE tagid=%d AND rid=%d", + "SELECT value FROM tagxref" + " WHERE tagid=%d AND rid=%d AND tagtype>0", TAG_USER, rid); - zEComment = db_text(0, + zEComment = db_text(0, "SELECT value FROM tagxref WHERE tagid=%d AND rid=%d", TAG_COMMENT, rid); - zUser = db_column_text(&q, 2); - zComment = db_column_text(&q, 3); - zDate = db_column_text(&q,1); + zUser = db_column_text(&q1, 2); + zComment = db_column_text(&q1, 3); + zDate = db_column_text(&q1,1); + zOrigDate = db_column_text(&q1, 4); @
    Overview
    - @

    + @
    @ @ "); + if( zOrigDate && fossil_strcmp(zDate, zOrigDate)!=0 ){ + @ "); + } if( zEUser ){ @ "); @ "); @@ -317,281 +601,578 @@ }else{ @ "); } if( zEComment ){ - @ - @ + @ + @ + @ + @ }else{ - @ + @ } - if( g.okAdmin ){ - db_prepare(&q, + if( g.perm.Admin ){ + db_prepare(&q2, "SELECT rcvfrom.ipaddr, user.login, datetime(rcvfrom.mtime)" " FROM blob JOIN rcvfrom USING(rcvid) LEFT JOIN user USING(uid)" " WHERE blob.rid=%d", rid ); - if( db_step(&q)==SQLITE_ROW ){ - const char *zIpAddr = db_column_text(&q, 0); - const char *zUser = db_column_text(&q, 1); - const char *zDate = db_column_text(&q, 2); + if( db_step(&q2)==SQLITE_ROW ){ + const char *zIpAddr = db_column_text(&q2, 0); + const char *zUser = db_column_text(&q2, 1); + const char *zDate = db_column_text(&q2, 2); if( zUser==0 || zUser[0]==0 ) zUser = "unknown"; @ @ } - db_finalize(&q); + db_finalize(&q2); } - if( g.okHistory ){ - const char *zProjName = db_get("project-name", "unnamed"); + if( g.perm.Hyperlink ){ + char *zPJ = db_get("short-project-name", 0); + Blob projName; + int jj; + if( zPJ==0 ) zPJ = db_get("project-name", "unnamed"); + blob_zero(&projName); + blob_append(&projName, zPJ, -1); + blob_trim(&projName); + zPJ = blob_str(&projName); + for(jj=0; zPJ[jj]; jj++){ + if( (zPJ[jj]>0 && zPJ[jj]<' ') || strchr("\"*/:<>?\\|", zPJ[jj]) ){ + zPJ[jj] = '_'; + } + } @ + @ @ @ @ + blob_reset(&projName); } - @
    SHA1 Hash:%s(zUuid) - if( g.okSetup ){ + if( g.perm.Setup ){ @ (Record ID: %d(rid)) } @
    Date: hyperlink_to_date(zDate, "
    Original Date: + hyperlink_to_date(zOrigDate, "
    Edited User: hyperlink_to_user(zEUser,zDate,"
    Original User: hyperlink_to_user(zUser,zDate,"
    User: hyperlink_to_user(zUser,zDate,"
    Edited Comment:%w(zEComment)
    Original Comment:%w(zComment)
    Edited Comment:%!W(zEComment)
    Original Comment:%!W(zComment)
    Comment:%w(zComment)
    Comment:%!W(zComment)
    Received From:%h(zUser) @ %h(zIpAddr) on %s(zDate)
    Timelines: - @ ancestors - @ | descendants - @ | both - db_prepare(&q, "SELECT substr(tag.tagname,5) FROM tagxref, tag " + @ %z(href("%R/timeline?f=%!S&unhide",zUuid))family + if( zParent ){ + @ | %z(href("%R/timeline?p=%!S&unhide",zUuid))ancestors + } + if( !isLeaf ){ + @ | %z(href("%R/timeline?d=%!S&unhide",zUuid))descendants + } + if( zParent && !isLeaf ){ + @ | %z(href("%R/timeline?dp=%!S&unhide",zUuid))both + } + db_prepare(&q2,"SELECT substr(tag.tagname,5) FROM tagxref, tag " " WHERE rid=%d AND tagtype>0 " " AND tag.tagid=tagxref.tagid " " AND +tag.tagname GLOB 'sym-*'", rid); - while( db_step(&q)==SQLITE_ROW ){ - const char *zTagName = db_column_text(&q, 0); - @ | %h(zTagName) + while( db_step(&q2)==SQLITE_ROW ){ + const char *zTagName = db_column_text(&q2, 0); + @ | %z(href("%R/timeline?r=%T&unhide",zTagName))%h(zTagName) } - db_finalize(&q); + db_finalize(&q2); + + + /* The Download: line */ + if( g.anon.Zip ){ + char *zUrl = mprintf("%R/tarball/%t-%S.tar.gz?uuid=%s", + zPJ, zUuid, zUuid); + @
    Downloads: + @ %z(href("%s",zUrl))Tarball + @ | %z(href("%R/zip/%t-%S.zip?uuid=%!S",zPJ,zUuid,zUuid)) + @ ZIP archive + fossil_free(zUrl); + } @
    Other Links: - @ files - @ | - @ ZIP archive - @ | manifest - if( g.okWrite ){ - @ | edit + @ %z(href("%R/tree?ci=%!S",zUuid))files + @ | %z(href("%R/fileage?name=%!S",zUuid))file ages + @ | %z(href("%R/tree?nofiles&type=tree&ci=%!S",zUuid))folders + @ | %z(href("%R/artifact/%!S",zUuid))manifest + if( g.anon.Write ){ + @ | %z(href("%R/ci_edit?r=%!S",zUuid))edit } @

    + @ }else{ style_header("Check-in Information"); login_anonymous_available(); } - db_finalize(&q); - showTags(rid, ""); + db_finalize(&q1); + showTags(rid); + @
    Context
    + render_checkin_context(rid, 0); @
    Changes
    - showDiff = g.zPath[0]!='c'; + @
    + verboseFlag = g.zPath[0]!='c'; if( db_get_boolean("show-version-diffs", 0)==0 ){ - showDiff = !showDiff; - if( showDiff ){ - @ [hide diffs]
    - }else{ - @ [show diffs]
    - } - }else{ - if( showDiff ){ - @ [hide diffs]
    - }else{ - @ [show diffs]
    - } - } - db_prepare(&q, - "SELECT pid, fid, name," - " (SELECT uuid FROM blob WHERE rid=mlink.pid)," - " (SELECT uuid FROM blob WHERE rid=mlink.fid)" - " FROM mlink JOIN filename ON filename.fnid=mlink.fnid" - " WHERE mlink.mid=%d" - " ORDER BY name", - rid - ); - while( db_step(&q)==SQLITE_ROW ){ - int pid = db_column_int(&q,0); - int fid = db_column_int(&q,1); - const char *zName = db_column_text(&q,2); - const char *zOld = db_column_text(&q,3); - const char *zNew = db_column_text(&q,4); - if( !g.okHistory ){ - if( zNew==0 ){ - @

    Deleted %h(zName)

    - continue; - }else{ - @

    Changes to %h(zName)

    - } - }else if( zOld && zNew ){ - @

    Modified %h(zName) - @ from [%S(zOld)] - @ to [%S(zNew)]. - if( !showDiff ){ - @    - @ [diff] - } - }else if( zOld ){ - @

    Deleted %h(zName) - @ version [%S(zOld)]

    - continue; - }else{ - @

    Added %h(zName) - @ version [%S(zNew)]

    - } - if( showDiff ){ - @
    -      append_diff(pid, fid);
    -      @ 
    - } - } - db_finalize(&q); + verboseFlag = !verboseFlag; + zPage = "ci"; + zPageHide = "vinfo"; + } + diffFlags = construct_diff_flags(verboseFlag, sideBySide); + zW = (diffFlags&DIFF_IGNORE_ALLWS)?"&w":""; + if( verboseFlag ){ + @ %z(xhref("class='button'","%R/%s/%T",zPageHide,zName)) + @ Hide Diffs + if( sideBySide ){ + @ %z(xhref("class='button'","%R/%s/%T?sbs=0%s",zPage,zName,zW)) + @ Unified Diffs + }else{ + @ %z(xhref("class='button'","%R/%s/%T?sbs=1%s",zPage,zName,zW)) + @ Side-by-Side Diffs + } + if( *zW ){ + @ %z(xhref("class='button'","%R/%s/%T?sbs=%d",zPage,zName,sideBySide)) + @ Show Whitespace Changes + }else{ + @ %z(xhref("class='button'","%R/%s/%T?sbs=%d&w",zPage,zName,sideBySide)) + @ Ignore Whitespace + } + }else{ + @ %z(xhref("class='button'","%R/%s/%T?sbs=0",zPage,zName)) + @ Show Unified Diffs + @ %z(xhref("class='button'","%R/%s/%T?sbs=1",zPage,zName)) + @ Show Side-by-Side Diffs + } + if( zParent ){ + @ %z(xhref("class='button'","%R/vpatch?from=%!S&to=%!S",zParent,zUuid)) + @ Patch + } + if( g.perm.Admin ){ + @ %z(xhref("class='button'","%R/mlink?ci=%!S",zUuid))MLink Table + } + @
    + if( pRe ){ + @

    Only differences that match regular expression "%h(zRe)" + @ are shown.

    + } + db_prepare(&q3, + "SELECT name," + " mperm," + " (SELECT uuid FROM blob WHERE rid=mlink.pid)," + " (SELECT uuid FROM blob WHERE rid=mlink.fid)," + " (SELECT name FROM filename WHERE filename.fnid=mlink.pfnid)" + " FROM mlink JOIN filename ON filename.fnid=mlink.fnid" + " WHERE mlink.mid=%d AND NOT mlink.isaux" + " AND (mlink.fid>0" + " OR mlink.fnid NOT IN (SELECT pfnid FROM mlink WHERE mid=%d))" + " ORDER BY name /*sort*/", + rid, rid + ); + while( db_step(&q3)==SQLITE_ROW ){ + const char *zName = db_column_text(&q3,0); + int mperm = db_column_int(&q3, 1); + const char *zOld = db_column_text(&q3,2); + const char *zNew = db_column_text(&q3,3); + const char *zOldName = db_column_text(&q3, 4); + append_file_change_line(zName, zOld, zNew, zOldName, diffFlags,pRe,mperm); + } + db_finalize(&q3); + append_diff_javascript(sideBySide); style_footer(); } /* ** WEBPAGE: winfo -** URL: /winfo?name=RID +** URL: /winfo?name=UUID ** -** Return information about a wiki page. +** Display information about a wiki page. */ void winfo_page(void){ - Stmt q; int rid; + Manifest *pWiki; + char *zUuid; + char *zDate; + Blob wiki; + int modPending; + const char *zModAction; login_check_credentials(); - if( !g.okRdWiki ){ login_needed(); return; } + if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } rid = name_to_rid_www("name"); - if( rid==0 ){ + if( rid==0 || (pWiki = manifest_get(rid, CFTYPE_WIKI, 0))==0 ){ style_header("Wiki Page Information Error"); - @ No such object: %h(g.argv[2]) + @ No such object: %h(P("name")) style_footer(); return; } - db_prepare(&q, - "SELECT substr(tagname, 6, 1000), uuid," - " datetime(event.mtime, 'localtime'), user" - " FROM tagxref, tag, blob, event" - " WHERE tagxref.rid=%d" - " AND tag.tagid=tagxref.tagid" - " AND tag.tagname LIKE 'wiki-%%'" - " AND blob.rid=%d" - " AND event.objid=%d", - rid, rid, rid - ); - if( db_step(&q)==SQLITE_ROW ){ - const char *zName = db_column_text(&q, 0); - const char *zUuid = db_column_text(&q, 1); - char *zTitle = mprintf("Wiki Page %s", zName); - const char *zDate = db_column_text(&q,2); - const char *zUser = db_column_text(&q,3); - style_header(zTitle); - free(zTitle); - login_anonymous_available(); - @
    Overview
    - @

    - @ - @ "); - if( g.okSetup ){ - @ - } - @ "); - if( g.okHistory ){ - @ - @ - @ - } - @
    Version:%s(zUuid)
    Date: - hyperlink_to_date(zDate, "
    Record ID:%d(rid)
    Original User: - hyperlink_to_user(zUser, zDate, "
    Commands: - @ history - @ | raw-text - @

    - }else{ - style_header("Wiki Information"); - rid = 0; - } - db_finalize(&q); - showTags(rid, "wiki-*"); - if( rid ){ - Blob content; - Manifest m; - memset(&m, 0, sizeof(m)); - blob_zero(&m.content); - content_get(rid, &content); - manifest_parse(&m, &content); - if( m.type==CFTYPE_WIKI ){ - Blob wiki; - blob_init(&wiki, m.zWiki, -1); - @
    Content
    - wiki_convert(&wiki, 0, 0); - blob_reset(&wiki); - } - manifest_clear(&m); - } - style_footer(); -} - -/* -** WEBPAGE: vdiff -** URL: /vdiff?name=RID -** -** Show all differences for a particular check-in. -*/ -void vdiff_page(void){ - int rid; - Stmt q; - char *zUuid; - - login_check_credentials(); - if( !g.okRead ){ login_needed(); return; } - login_anonymous_available(); - - rid = name_to_rid_www("name"); + if( g.perm.ModWiki && (zModAction = P("modaction"))!=0 ){ + if( strcmp(zModAction,"delete")==0 ){ + moderation_disapprove(rid); + /* + ** Next, check if the wiki page still exists; if not, we cannot + ** redirect to it. + */ + if( db_exists("SELECT 1 FROM tagxref JOIN tag USING(tagid)" + " WHERE rid=%d AND tagname LIKE 'wiki-%%'", rid) ){ + cgi_redirectf("%R/wiki?name=%T", pWiki->zWikiTitle); + /*NOTREACHED*/ + }else{ + cgi_redirectf("%R/modreq"); + /*NOTREACHED*/ + } + } + if( strcmp(zModAction,"approve")==0 ){ + moderation_approve(rid); + } + } + style_header("Update of \"%h\"", pWiki->zWikiTitle); + zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); + zDate = db_text(0, "SELECT datetime(%.17g)", pWiki->rDate); + style_submenu_element("Raw", "Raw", "artifact/%s", zUuid); + style_submenu_element("History", "History", "whistory?name=%t", + pWiki->zWikiTitle); + style_submenu_element("Page", "Page", "wiki?name=%t", + pWiki->zWikiTitle); + login_anonymous_available(); + @
    Overview
    + @

    + @ + @ + @ + @ "); + @ "); + if( pWiki->zMimetype ){ + @ + } + if( pWiki->nParent>0 ){ + int i; + @ + } + @
    Artifact ID:%z(href("%R/artifact/%!S",zUuid))%s(zUuid) + if( g.perm.Setup ){ + @ (%d(rid)) + } + modPending = moderation_pending(rid); + if( modPending ){ + @ *** Awaiting Moderator Approval *** + } + @
    Page Name:%h(pWiki->zWikiTitle)
    Date: + hyperlink_to_date(zDate, "
    Original User: + hyperlink_to_user(pWiki->zUser, zDate, "
    Mimetype:%h(pWiki->zMimetype)
    Parent%s(pWiki->nParent==1?"":"s"): + for(i=0; inParent; i++){ + char *zParent = pWiki->azParent[i]; + @ %z(href("info/%!S",zParent))%s(zParent) + } + @
    + + if( g.perm.ModWiki && modPending ){ + @

    Moderation
    + @
    + @
    + @
    + @
    + @ + @
    + @
    + } + + + @
    Content
    + blob_init(&wiki, pWiki->zWiki, -1); + wiki_render_by_mimetype(&wiki, pWiki->zMimetype); + blob_reset(&wiki); + manifest_destroy(pWiki); + style_footer(); +} + +/* +** Show a webpage error message +*/ +void webpage_error(const char *zFormat, ...){ + va_list ap; + const char *z; + va_start(ap, zFormat); + z = vmprintf(zFormat, ap); + va_end(ap); + style_header("URL Error"); + @

    Error

    + @

    %h(z)

    + style_footer(); +} + +/* +** Find an check-in based on query parameter zParam and parse its +** manifest. Return the number of errors. +*/ +static Manifest *vdiff_parse_manifest(const char *zParam, int *pRid){ + int rid; + + *pRid = rid = name_to_rid_www(zParam); if( rid==0 ){ - fossil_redirect_home(); + const char *z = P(zParam); + if( z==0 || z[0]==0 ){ + webpage_error("Missing \"%s\" query parameter.", zParam); + }else{ + webpage_error("No such artifact: \"%s\"", z); + } + return 0; } - zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); - style_header("Check-in [%.10s]", zUuid); + if( !is_a_version(rid) ){ + webpage_error("Artifact %s is not a check-in.", P(zParam)); + return 0; + } + return manifest_get(rid, CFTYPE_MANIFEST, 0); +} + +/* +** Output a description of a check-in +*/ +static void checkin_description(int rid){ + Stmt q; db_prepare(&q, - "SELECT datetime(mtime), " - " coalesce(event.ecomment,event.comment)," - " coalesce(event.euser,event.user)" - " FROM event WHERE type='ci' AND objid=%d", - rid + "SELECT datetime(mtime), coalesce(euser,user)," + " coalesce(ecomment,comment), uuid," + " (SELECT group_concat(substr(tagname,5), ', ') FROM tag, tagxref" + " WHERE tagname GLOB 'sym-*' AND tag.tagid=tagxref.tagid" + " AND tagxref.rid=blob.rid AND tagxref.tagtype>0)" + " FROM event, blob" + " WHERE event.objid=%d AND type='ci'" + " AND blob.rid=%d", + rid, rid ); while( db_step(&q)==SQLITE_ROW ){ const char *zDate = db_column_text(&q, 0); - const char *zUser = db_column_text(&q, 2); - const char *zComment = db_column_text(&q, 1); - @

    Check-in %s(zUuid)

    - @

    Made by - hyperlink_to_user(zUser,zDate," on"); - hyperlink_to_date(zDate, ":"); - @ %w(zComment). - if( g.okHistory ){ - @ [details] - } - @


    - } - db_finalize(&q); - db_prepare(&q, - "SELECT pid, fid, name" - " FROM mlink, filename" - " WHERE mlink.mid=%d" - " AND filename.fnid=mlink.fnid" - " ORDER BY name", - rid - ); - while( db_step(&q)==SQLITE_ROW ){ - int pid = db_column_int(&q,0); - int fid = db_column_int(&q,1); - const char *zName = db_column_text(&q,2); - if( g.okHistory ){ - @

    %h(zName)

    - }else{ - @

    %h(zName)

    - } - @
    -    append_diff(pid, fid);
    -    @ 
    - } - db_finalize(&q); + const char *zUser = db_column_text(&q, 1); + const char *zUuid = db_column_text(&q, 3); + const char *zTagList = db_column_text(&q, 4); + Blob comment; + int wikiFlags = WIKI_INLINE|WIKI_NOBADLINKS; + if( db_get_boolean("timeline-block-markup", 0)==0 ){ + wikiFlags |= WIKI_NOBLOCK; + } + hyperlink_to_uuid(zUuid); + blob_zero(&comment); + db_column_blob(&q, 2, &comment); + wiki_convert(&comment, 0, wikiFlags); + blob_reset(&comment); + @ (user: + hyperlink_to_user(zUser,zDate,","); + if( zTagList && zTagList[0] && g.perm.Hyperlink ){ + int i; + const char *z = zTagList; + Blob links; + blob_zero(&links); + while( z && z[0] ){ + for(i=0; z[i] && (z[i]!=',' || z[i+1]!=' '); i++){} + blob_appendf(&links, + "%z%#h%.2s", + href("%R/timeline?r=%#t&nd&c=%t",i,z,zDate), i,z, &z[i] + ); + if( z[i]==0 ) break; + z += i+2; + } + @ tags: %s(blob_str(&links)), + blob_reset(&links); + }else{ + @ tags: %h(zTagList), + } + @ date: + hyperlink_to_date(zDate, ")"); + tag_private_status(rid); + } + db_finalize(&q); +} + + +/* +** WEBPAGE: vdiff +** URL: /vdiff?from=TAG&to=TAG +** +** Show the difference between two check-ins identified by the from= and +** to= query parameters. +** +** Query parameters: +** +** from=TAG Left side of the comparison +** to=TAG Right side of the comparison +** branch=TAG Show all changes on a particular branch +** v=BOOLEAN Default true. If false, only list files that have changed +** sbs=BOOLEAN Side-by-side diff if true. Unified diff if false +** glob=STRING only diff files matching this glob +** dc=N show N lines of context around each diff +** w ignore whitespace when computing diffs +** nohdr omit the description at the top of the page +** +** +** Show all differences between two check-ins. +*/ +void vdiff_page(void){ + int ridFrom, ridTo; + int verboseFlag = 0; + int sideBySide = 0; + u64 diffFlags = 0; + Manifest *pFrom, *pTo; + ManifestFile *pFileFrom, *pFileTo; + const char *zBranch; + const char *zFrom; + const char *zTo; + const char *zRe; + const char *zW; + const char *zVerbose; + const char *zGlob; + ReCompiled *pRe = 0; + login_check_credentials(); + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } + login_anonymous_available(); + zRe = P("regex"); + if( zRe ) re_compile(&pRe, zRe, 0); + zBranch = P("branch"); + if( zBranch && zBranch[0] ){ + cgi_replace_parameter("from", mprintf("root:%s", zBranch)); + cgi_replace_parameter("to", zBranch); + } + pTo = vdiff_parse_manifest("to", &ridTo); + if( pTo==0 ) return; + pFrom = vdiff_parse_manifest("from", &ridFrom); + if( pFrom==0 ) return; + sideBySide = !is_false(PD("sbs","1")); + zVerbose = P("v"); + if( !zVerbose ){ + zVerbose = P("verbose"); + } + if( !zVerbose ){ + zVerbose = P("detail"); /* deprecated */ + } + verboseFlag = (zVerbose!=0) && !is_false(zVerbose); + if( !verboseFlag && sideBySide ) verboseFlag = 1; + zGlob = P("glob"); + zFrom = P("from"); + zTo = P("to"); + if(zGlob && !*zGlob){ + zGlob = NULL; + } + diffFlags = construct_diff_flags(verboseFlag, sideBySide); + zW = (diffFlags&DIFF_IGNORE_ALLWS)?"&w":""; + style_submenu_element("Path","path", + "%R/timeline?me=%T&you=%T", zFrom, zTo); + if( sideBySide || verboseFlag ){ + style_submenu_element("Hide Diff", "hidediff", + "%R/vdiff?from=%T&to=%T&sbs=0%s%T%s", + zFrom, zTo, + zGlob ? "&glob=" : "", zGlob ? zGlob : "", zW); + } + if( !sideBySide ){ + style_submenu_element("Side-by-Side Diff", "sbsdiff", + "%R/vdiff?from=%T&to=%T&sbs=1%s%T%s", + zFrom, zTo, + zGlob ? "&glob=" : "", zGlob ? 
zGlob : "", zW); + } + if( sideBySide || !verboseFlag ) { + style_submenu_element("Unified Diff", "udiff", + "%R/vdiff?from=%T&to=%T&sbs=0&v%s%T%s", + zFrom, zTo, + zGlob ? "&glob=" : "", zGlob ? zGlob : "", zW); + } + style_submenu_element("Invert", "invert", + "%R/vdiff?from=%T&to=%T&sbs=%d%s%s%T%s", zTo, zFrom, + sideBySide, (verboseFlag && !sideBySide)?"&v":"", + zGlob ? "&glob=" : "", zGlob ? zGlob : "", zW); + if( zGlob ){ + style_submenu_element("Clear glob", "clearglob", + "%R/vdiff?from=%T&to=%T&sbs=%d%s%s", zFrom, zTo, + sideBySide, (verboseFlag && !sideBySide)?"&v":"", zW); + }else{ + style_submenu_element("Patch", "patch", + "%R/vpatch?from=%T&to=%T%s", zFrom, zTo, zW); + } + if( sideBySide || verboseFlag ){ + if( *zW ){ + style_submenu_element("Show Whitespace Differences", "whitespace", + "%R/vdiff?from=%T&to=%T&sbs=%d%s%s%T", zFrom, zTo, + sideBySide, (verboseFlag && !sideBySide)?"&v":"", + zGlob ? "&glob=" : "", zGlob ? zGlob : ""); + }else{ + style_submenu_element("Ignore Whitespace", "ignorews", + "%R/vdiff?from=%T&to=%T&sbs=%d%s%s%T&w", zFrom, zTo, + sideBySide, (verboseFlag && !sideBySide)?"&v":"", + zGlob ? "&glob=" : "", zGlob ? zGlob : ""); + } + } + style_header("Check-in Differences"); + if( P("nohdr")==0 ){ + @
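As a usage sketch of the query parameters documented above (the tag and glob
values are invented):

    /vdiff?from=version-1.0&to=trunk             compare two tagged check-ins,
                                                 side-by-side by default (sbs defaults to 1)
    /vdiff?from=version-1.0&to=trunk&glob=src/*  restrict the comparison to files
                                                 matching the glob
    /vdiff?branch=experimental&w                 everything the branch changed,
                                                 ignoring whitespace

A branch=TAG request is rewritten near the top of vdiff_page() into
from=root:TAG&to=TAG before the two manifests are loaded.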

    Difference From:

    + checkin_description(ridFrom); + @

    To:

    + checkin_description(ridTo); + @
    + if( pRe ){ + @

    Only differences that match regular expression "%h(zRe)" + @ are shown.

    + } + if( zGlob ){ + @

    Only files matching the glob "%h(zGlob)" are shown.

    + } + @
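The two manifests are then compared with a classic merge of two sorted lists:
both file lists are walked in name order, whichever side is behind names a
deleted or an added file, and matching names are compared by artifact hash
(see the loop below).  A stand-alone sketch of the same walk, in plain C with
invented file names:

    #include <stdio.h>
    #include <string.h>

    /*
    ** Walk two sorted file lists in parallel, the same way the vdiff loop
    ** pairs pFileFrom/pFileTo.  Every name is classified exactly once.
    */
    static void classify(const char **azFrom, int nFrom,
                         const char **azTo, int nTo){
      int i = 0, j = 0;
      while( i<nFrom || j<nTo ){
        int cmp;
        if( i>=nFrom ){
          cmp = +1;                           /* only the "to" side remains */
        }else if( j>=nTo ){
          cmp = -1;                           /* only the "from" side remains */
        }else{
          cmp = strcmp(azFrom[i], azTo[j]);
        }
        if( cmp<0 ){
          printf("DELETED %s\n", azFrom[i++]);
        }else if( cmp>0 ){
          printf("ADDED   %s\n", azTo[j++]);
        }else{
          printf("COMMON  %s\n", azFrom[i]);  /* same name on both sides */
          i++; j++;
        }
      }
    }

    int main(void){
      const char *azFrom[] = { "Makefile", "src/a.c", "src/b.c" };
      const char *azTo[]   = { "Makefile", "src/b.c", "src/c.c" };
      classify(azFrom, 3, azTo, 3);   /* src/a.c deleted, src/c.c added */
      return 0;
    }

In vdiff_page() the three cmp outcomes feed append_file_change_line() instead
of printf(), and the equal-name case is further split on whether the two
artifact hashes differ.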

    + } + + manifest_file_rewind(pFrom); + pFileFrom = manifest_file_next(pFrom, 0); + manifest_file_rewind(pTo); + pFileTo = manifest_file_next(pTo, 0); + while( pFileFrom || pFileTo ){ + int cmp; + if( pFileFrom==0 ){ + cmp = +1; + }else if( pFileTo==0 ){ + cmp = -1; + }else{ + cmp = fossil_strcmp(pFileFrom->zName, pFileTo->zName); + } + if( cmp<0 ){ + if( !zGlob || sqlite3_strglob(zGlob, pFileFrom->zName)==0 ){ + append_file_change_line(pFileFrom->zName, + pFileFrom->zUuid, 0, 0, diffFlags, pRe, 0); + } + pFileFrom = manifest_file_next(pFrom, 0); + }else if( cmp>0 ){ + if( !zGlob || sqlite3_strglob(zGlob, pFileTo->zName)==0 ){ + append_file_change_line(pFileTo->zName, + 0, pFileTo->zUuid, 0, diffFlags, pRe, + manifest_file_mperm(pFileTo)); + } + pFileTo = manifest_file_next(pTo, 0); + }else if( fossil_strcmp(pFileFrom->zUuid, pFileTo->zUuid)==0 ){ + pFileFrom = manifest_file_next(pFrom, 0); + pFileTo = manifest_file_next(pTo, 0); + }else{ + if(!zGlob || (sqlite3_strglob(zGlob, pFileFrom->zName)==0 + || sqlite3_strglob(zGlob, pFileTo->zName)==0) ){ + append_file_change_line(pFileFrom->zName, + pFileFrom->zUuid, + pFileTo->zUuid, 0, diffFlags, pRe, + manifest_file_mperm(pFileTo)); + } + pFileFrom = manifest_file_next(pFrom, 0); + pFileTo = manifest_file_next(pTo, 0); + } + } + manifest_destroy(pFrom); + manifest_destroy(pTo); + append_diff_javascript(sideBySide); style_footer(); } + +#if INTERFACE +/* +** Possible return values from object_description() +*/ +#define OBJTYPE_CHECKIN 0x0001 +#define OBJTYPE_CONTENT 0x0002 +#define OBJTYPE_WIKI 0x0004 +#define OBJTYPE_TICKET 0x0008 +#define OBJTYPE_ATTACHMENT 0x0010 +#define OBJTYPE_EVENT 0x0020 +#define OBJTYPE_TAG 0x0040 +#define OBJTYPE_SYMLINK 0x0080 +#define OBJTYPE_EXE 0x0100 + +/* +** Possible flags for the second parameter to +** object_description() +*/ +#define OBJDESC_DETAIL 0x0001 /* more detail */ +#endif /* ** Write a description of an object to the www reply. ** ** If the object is a file then mention: @@ -604,66 +1185,117 @@ ** ** * It's artifact ID ** * date of check-in ** * Comment & user */ -void object_description( +int object_description( int rid, /* The artifact ID */ - int linkToView, /* Add viewer link if true */ + u32 objdescFlags, /* Flags to control display */ Blob *pDownloadName /* Fill with an appropriate download name */ ){ Stmt q; int cnt = 0; int nWiki = 0; + int objType = 0; char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); + int showDetail = (objdescFlags & OBJDESC_DETAIL)!=0; + char *prevName = 0; db_prepare(&q, "SELECT filename.name, datetime(event.mtime)," " coalesce(event.ecomment,event.comment)," " coalesce(event.euser,event.user)," - " b.uuid" + " b.uuid, mlink.mperm," + " coalesce((SELECT value FROM tagxref" + " WHERE tagid=%d AND tagtype>0 AND rid=mlink.mid),'trunk')" " FROM mlink, filename, event, blob a, blob b" " WHERE filename.fnid=mlink.fnid" " AND event.objid=mlink.mid" " AND a.rid=mlink.fid" " AND b.rid=mlink.mid" - " AND mlink.fid=%d", - rid + " AND mlink.fid=%d" + " ORDER BY filename.name, event.mtime /*sort*/", + TAG_BRANCH, rid ); + @

      while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); const char *zDate = db_column_text(&q, 1); const char *zCom = db_column_text(&q, 2); const char *zUser = db_column_text(&q, 3); const char *zVers = db_column_text(&q, 4); - if( cnt>0 ){ - @ Also file - }else{ - @ File - } - if( g.okHistory ){ - @ %h(zName) - }else{ - @ %h(zName) - } - @ part of check-in - hyperlink_to_uuid(zVers); - @ - %w(zCom) by - hyperlink_to_user(zUser,zDate," on"); - hyperlink_to_date(zDate,"."); + int mPerm = db_column_int(&q, 5); + const char *zBr = db_column_text(&q, 6); + int sameFilename = prevName!=0 && fossil_strcmp(zName,prevName)==0; + if( sameFilename && !showDetail ){ + if( cnt==1 ){ + @ %z(href("%R/whatis/%!S",zUuid))[more...] + } + cnt++; + continue; + } + if( !sameFilename ){ + if( prevName && showDetail ) { + @
    + } + if( mPerm==PERM_LNK ){ + @
  • Symbolic link + objType |= OBJTYPE_SYMLINK; + }else if( mPerm==PERM_EXE ){ + @
  • Executable file + objType |= OBJTYPE_EXE; + }else{ + @
  • File + } + objType |= OBJTYPE_CONTENT; + @ %z(href("%R/finfo?name=%T",zName))%h(zName) + tag_private_status(rid); + if( showDetail ){ + @
      + } + prevName = fossil_strdup(zName); + } + if( showDetail ){ + @
    • + hyperlink_to_date(zDate,""); + @ — part of check-in + hyperlink_to_uuid(zVers); + }else{ + @ — part of check-in + hyperlink_to_uuid(zVers); + @ at + hyperlink_to_date(zDate,""); + } + if( zBr && zBr[0] ){ + @ on branch %z(href("%R/timeline?r=%T",zBr))%h(zBr) + } + @ — %!W(zCom) (user: + hyperlink_to_user(zUser,zDate,")"); + if( g.perm.Hyperlink ){ + @ %z(href("%R/finfo?name=%T&ci=%!S",zName,zVers))[ancestry] + @ %z(href("%R/annotate?filename=%T&checkin=%!S",zName,zVers)) + @ [annotate] + @ %z(href("%R/blame?filename=%T&checkin=%!S",zName,zVers)) + @ [blame] + } cnt++; if( pDownloadName && blob_size(pDownloadName)==0 ){ blob_append(pDownloadName, zName, -1); } } + if( prevName && showDetail ){ + @
    + } + @
+ free(prevName); db_finalize(&q); - db_prepare(&q, + db_prepare(&q, "SELECT substr(tagname, 6, 10000), datetime(event.mtime)," " coalesce(event.euser, event.user)" " FROM tagxref, tag, event" " WHERE tagxref.rid=%d" - " AND tag.tagid=tagxref.tagid" + " AND tag.tagid=tagxref.tagid" " AND tag.tagname LIKE 'wiki-%%'" " AND event.objid=tagxref.rid", rid ); while( db_step(&q)==SQLITE_ROW ){ @@ -673,28 +1305,24 @@ if( cnt>0 ){ @ Also wiki page }else{ @ Wiki page } - if( g.okHistory ){ - @ [%h(zPagename)] - }else{ - @ [%h(zPagename)] - } - @ by + objType |= OBJTYPE_WIKI; + @ [%z(href("%R/wiki?name=%t",zPagename))%h(zPagename)] by hyperlink_to_user(zUser,zDate," on"); hyperlink_to_date(zDate,"."); nWiki++; cnt++; if( pDownloadName && blob_size(pDownloadName)==0 ){ - blob_appendf(pDownloadName, "%s.wiki", zPagename); + blob_appendf(pDownloadName, "%s.txt", zPagename); } } db_finalize(&q); if( nWiki==0 ){ db_prepare(&q, - "SELECT datetime(mtime), user, comment, type, uuid" + "SELECT datetime(mtime), user, comment, type, uuid, tagid" " FROM event, blob" " WHERE event.objid=%d" " AND blob.rid=%d", rid, rid ); @@ -707,29 +1335,39 @@ if( cnt>0 ){ @ Also } if( zType[0]=='w' ){ @ Wiki edit + objType |= OBJTYPE_WIKI; }else if( zType[0]=='t' ){ @ Ticket change + objType |= OBJTYPE_TICKET; }else if( zType[0]=='c' ){ @ Manifest of check-in + objType |= OBJTYPE_CHECKIN; + }else if( zType[0]=='e' ){ + @ Instance of technote + objType |= OBJTYPE_EVENT; + hyperlink_to_event_tagid(db_column_int(&q, 5)); }else{ - @ Control file referencing + @ Tag referencing + } + if( zType[0]!='e' ){ + hyperlink_to_uuid(zUuid); } - hyperlink_to_uuid(zUuid); - @ - %w(zCom) by + @ - %!W(zCom) by hyperlink_to_user(zUser,zDate," on"); hyperlink_to_date(zDate, "."); if( pDownloadName && blob_size(pDownloadName)==0 ){ - blob_appendf(pDownloadName, "%.10s.txt", zUuid); + blob_appendf(pDownloadName, "%S.txt", zUuid); } + tag_private_status(rid); cnt++; } db_finalize(&q); } - db_prepare(&q, + db_prepare(&q, "SELECT target, filename, datetime(mtime), user, src" " FROM attachment" " WHERE src=(SELECT uuid FROM blob WHERE rid=%d)" " ORDER BY mtime DESC /*sort*/", rid @@ -743,21 +1381,20 @@ if( cnt>0 ){ @ Also attachment "%h(zFilename)" to }else{ @ Attachment "%h(zFilename)" to } + objType |= OBJTYPE_ATTACHMENT; if( strlen(zTarget)==UUID_SIZE && validate16(zTarget,UUID_SIZE) ){ - char zShort[20]; - memcpy(zShort, zTarget, 10); - if( g.okHistory && g.okRdTkt ){ - @ ticket [%s(zShort)] + if( g.perm.Hyperlink && g.anon.RdTkt ){ + @ ticket [%z(href("%R/tktview?name=%!S",zTarget))%S(zTarget)] }else{ - @ ticket [%s(zShort)] + @ ticket [%S(zTarget)] } }else{ - if( g.okHistory && g.okRdWiki ){ - @ wiki page [%h(zTarget)] + if( g.perm.Hyperlink && g.anon.RdWiki ){ + @ wiki page [%z(href("%R/wiki?name=%t",zTarget))%h(zTarget)] }else{ @ wiki page [%h(zTarget)] } } @ added by @@ -765,77 +1402,162 @@ hyperlink_to_date(zDate,"."); cnt++; if( pDownloadName && blob_size(pDownloadName)==0 ){ blob_append(pDownloadName, zFilename, -1); } + tag_private_status(rid); } db_finalize(&q); + if( db_exists("SELECT 1 FROM tagxref WHERE rid=%d AND tagid=%d", + rid, TAG_CLUSTER) ){ + @ Cluster + cnt++; + } if( cnt==0 ){ - @ Control artifact. 
+ @ Unrecognized artifact if( pDownloadName && blob_size(pDownloadName)==0 ){ - blob_appendf(pDownloadName, "%.10s.txt", zUuid); + blob_appendf(pDownloadName, "%S.txt", zUuid); } - }else if( linkToView && g.okHistory ){ - @ [view] + tag_private_status(rid); } + return objType; } /* ** WEBPAGE: fdiff +** URL: fdiff?v1=UUID&v2=UUID&patch&sbs=BOOLEAN®ex=REGEX +** +** Two arguments, v1 and v2, identify the files to be diffed. Show the +** difference between the two artifacts. Show diff side by side unless sbs +** is 0. Generate plaintext if "patch" is present. +** +** Additional parameters: ** -** Two arguments, v1 and v2, are integers. Show the difference between -** the two records. +** verbose Show more detail when describing artifacts +** dc=N Show N lines of context around each diff +** w Ignore whitespace */ void diff_page(void){ - int v1 = name_to_rid(P("v1")); - int v2 = name_to_rid(P("v2")); - Blob c1, c2, diff; + int v1, v2; + int isPatch; + int sideBySide; + char *zV1; + char *zV2; + const char *zRe; + const char *zW; /* URL param for ignoring whitespace */ + ReCompiled *pRe = 0; + u64 diffFlags; + u32 objdescFlags = 0; login_check_credentials(); - if( !g.okRead ){ login_needed(); return; } + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } + v1 = name_to_rid_www("v1"); + v2 = name_to_rid_www("v2"); if( v1==0 || v2==0 ) fossil_redirect_home(); + zRe = P("regex"); + if( zRe ) re_compile(&pRe, zRe, 0); + if( P("verbose")!=0 ) objdescFlags |= OBJDESC_DETAIL; + isPatch = P("patch")!=0; + if( isPatch ){ + Blob c1, c2, *pOut; + pOut = cgi_output_blob(); + cgi_set_content_type("text/plain"); + diffFlags = 4; + content_get(v1, &c1); + content_get(v2, &c2); + text_diff(&c1, &c2, pOut, pRe, diffFlags); + blob_reset(&c1); + blob_reset(&c2); + return; + } + + sideBySide = !is_false(PD("sbs","1")); + zV1 = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", v1); + zV2 = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", v2); + diffFlags = construct_diff_flags(1, sideBySide) | DIFF_HTML; + style_header("Diff"); - @
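A few example invocations of the fdiff parameters documented above (HASH1 and
HASH2 stand in for full artifact IDs):

    /fdiff?v1=HASH1&v2=HASH2            side-by-side diff of two file artifacts
                                        (sbs defaults to 1)
    /fdiff?v1=HASH1&v2=HASH2&sbs=0&w    unified diff, ignoring whitespace
    /fdiff?v1=HASH1&v2=HASH2&patch      plain-text patch; the handler above
                                        switches the content type to text/plain
                                        and returns before any HTML is emitted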

Differences From:

- @
- object_description(v1, 1, 0); - @
- @

To:

- @
- object_description(v2, 1, 0); - @
- @
- @
-  content_get(v1, &c1);
-  content_get(v2, &c2);
-  blob_zero(&diff);
-  text_diff(&c1, &c2, &diff, 4);
-  blob_reset(&c1);
-  blob_reset(&c2);
-  @ %h(blob_str(&diff))
-  @ 
- blob_reset(&diff); + zW = (diffFlags&DIFF_IGNORE_ALLWS)?"&w":""; + if( *zW ){ + style_submenu_element("Show Whitespace Changes", "Show Whitespace Changes", + "%s/fdiff?v1=%T&v2=%T&sbs=%d", + g.zTop, P("v1"), P("v2"), sideBySide); + }else{ + style_submenu_element("Ignore Whitespace", "Ignore Whitespace", + "%s/fdiff?v1=%T&v2=%T&sbs=%d&w", + g.zTop, P("v1"), P("v2"), sideBySide); + } + style_submenu_element("Patch", "Patch", "%s/fdiff?v1=%T&v2=%T&patch", + g.zTop, P("v1"), P("v2")); + if( !sideBySide ){ + style_submenu_element("Side-by-Side Diff", "sbsdiff", + "%s/fdiff?v1=%T&v2=%T&sbs=1%s", + g.zTop, P("v1"), P("v2"), zW); + }else{ + style_submenu_element("Unified Diff", "udiff", + "%s/fdiff?v1=%T&v2=%T&sbs=0%s", + g.zTop, P("v1"), P("v2"), zW); + } + + if( P("smhdr")!=0 ){ + @

Differences From Artifact + @ %z(href("%R/artifact/%!S",zV1))[%S(zV1)] To + @ %z(href("%R/artifact/%!S",zV2))[%S(zV2)].

+ }else{ + @

Differences From + @ Artifact %z(href("%R/artifact/%!S",zV1))[%S(zV1)]:

+ object_description(v1, objdescFlags, 0); + @

To Artifact %z(href("%R/artifact/%!S",zV2))[%S(zV2)]:

+ object_description(v2, objdescFlags, 0); + } + if( pRe ){ + @ Only differences that match regular expression "%h(zRe)" + @ are shown. + } + @
+ append_diff(zV1, zV2, diffFlags, pRe); + append_diff_javascript(sideBySide); style_footer(); } /* ** WEBPAGE: raw ** URL: /raw?name=ARTIFACTID&m=TYPE -** +** ** Return the uninterpreted content of an artifact. Used primarily ** to view artifacts that are images. */ void rawartifact_page(void){ int rid; + char *zUuid; const char *zMime; Blob content; rid = name_to_rid_www("name"); - zMime = PD("m","application/x-fossil-artifact"); login_check_credentials(); - if( !g.okRead ){ login_needed(); return; } + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } if( rid==0 ) fossil_redirect_home(); + zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); + if( fossil_strcmp(P("name"), zUuid)==0 && login_is_nobody() ){ + g.isConst = 1; + } + free(zUuid); + zMime = P("m"); + if( zMime==0 ){ + char *zFName = db_text(0, "SELECT filename.name FROM mlink, filename" + " WHERE mlink.fid=%d" + " AND filename.fnid=mlink.fnid", rid); + if( !zFName ){ + /* Look also at the attachment table */ + zFName = db_text(0, "SELECT attachment.filename FROM attachment, blob" + " WHERE blob.rid=%d" + " AND attachment.src=blob.uuid", rid); + } + if( zFName ) zMime = mimetype_from_name(zFName); + if( zMime==0 ) zMime = "application/x-fossil-artifact"; + } content_get(rid, &content); cgi_set_content_type(zMime); cgi_set_content(&content); } @@ -855,11 +1577,11 @@ zLine[0] = zHex[(i>>24)&0xf]; zLine[1] = zHex[(i>>16)&0xf]; zLine[2] = zHex[(i>>8)&0xf]; zLine[3] = zHex[i&0xf]; zLine[4] = ':'; - sprintf(zLine, "%04x: ", i); + sqlite3_snprintf(sizeof(zLine), zLine, "%04x: ", i); for(j=0; j<16; j++){ k = 5+j*3; zLine[k] = ' '; if( i+jArtifact %s(zUuid): - @
+ if( g.perm.Setup ){ + @

Artifact %s(zUuid) (%d(rid)):

+ }else{ + @

Artifact %s(zUuid):

+ } blob_zero(&downloadName); - object_description(rid, 0, &downloadName); - style_submenu_element("Download", "Download", + if( P("verbose")!=0 ) objdescFlags |= OBJDESC_DETAIL; + object_description(rid, objdescFlags, &downloadName); + style_submenu_element("Download", "Download", "%s/raw/%T?name=%s", g.zTop, blob_str(&downloadName), zUuid); - @
- @
+ @
content_get(rid, &content); @
   hexdump(&content);
   @ 
style_footer(); } + +/* +** Attempt to lookup the specified check-in and file name into an rid. +*/ +int artifact_from_ci_and_filename( + const char *zCI, + const char *zFilename +){ + int cirid; + Manifest *pManifest; + ManifestFile *pFile; + + if( zCI==0 ) return 0; + if( zFilename==0 ) return 0; + cirid = name_to_rid(zCI); + pManifest = manifest_get(cirid, CFTYPE_MANIFEST, 0); + if( pManifest==0 ) return 0; + manifest_file_rewind(pManifest); + while( (pFile = manifest_file_next(pManifest,0))!=0 ){ + if( fossil_strcmp(zFilename, pFile->zName)==0 ){ + int rid = db_int(0, "SELECT rid FROM blob WHERE uuid=%Q", pFile->zUuid); + manifest_destroy(pManifest); + return rid; + } + } + return 0; +} /* ** Look for "ci" and "filename" query parameters. If found, try to ** use them to extract the record ID of an artifact for the file. */ -int artifact_from_ci_and_filename(void){ +int artifact_from_ci_and_filename_www(void){ const char *zFilename; const char *zCI; int cirid; - Blob content; - Manifest m; - int i; + Manifest *pManifest; + ManifestFile *pFile; zCI = P("ci"); if( zCI==0 ) return 0; zFilename = P("filename"); if( zFilename==0 ) return 0; cirid = name_to_rid_www("ci"); - if( !content_get(cirid, &content) ) return 0; - if( !manifest_parse(&m, &content) ) return 0; - if( m.type!=CFTYPE_MANIFEST ) return 0; - for(i=0; izName)==0 ){ + int rid = db_int(0, "SELECT rid FROM blob WHERE uuid=%Q", pFile->zUuid); + manifest_destroy(pManifest); + return rid; } } return 0; } +/* +** The "z" argument is a string that contains the text of a source code +** file. This routine appends that text to the HTTP reply with line numbering. +** +** zLn is the ?ln= parameter for the HTTP query. If there is an argument, +** then highlight that line number and scroll to it once the page loads. +** If there are two line numbers, highlight the range of lines. +** Multiple ranges can be highlighed by adding additional line numbers +** separated by a non-digit character (also not one of [-,.]). +*/ +void output_text_with_line_numbers( + const char *z, + const char *zLn +){ + int iStart, iEnd; /* Start and end of region to highlight */ + int n = 0; /* Current line number */ + int i = 0; /* Loop index */ + int iTop = 0; /* Scroll so that this line is on top of screen. */ + Stmt q; + + iStart = iEnd = atoi(zLn); + db_multi_exec( + "CREATE TEMP TABLE lnos(iStart INTEGER PRIMARY KEY, iEnd INTEGER)"); + if( iStart>0 ){ + do{ + while( fossil_isdigit(zLn[i]) ) i++; + if( zLn[i]==',' || zLn[i]=='-' || zLn[i]=='.' ){ + i++; + while( zLn[i]=='.' ){ i++; } + iEnd = atoi(&zLn[i]); + while( fossil_isdigit(zLn[i]) ) i++; + } + while( fossil_isdigit(zLn[i]) ) i++; + if( iEndiStart - 2 ) iTop = iStart-2; + } + db_finalize(&q); + @
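To make the range grammar parsed just above concrete, here are a few
hypothetical ?ln= values and the (iStart,iEnd) rows they leave in the
temporary lnos table:

    ln=4              ->  (4,4)                  highlight line 4
    ln=10-20          ->  (10,20)                highlight lines 10 through 20
    ln=10-20+35,37    ->  (10,20) and (35,37)    two separate highlighted ranges

Within a group, '-', ',' or '.' separates the start of a range from its end;
any other non-digit character (the '+' here) closes the group and starts the
next one, which is what the header comment means by a separator that is not
one of [-,.].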
+  while( z[0] ){
+    n++;
+    db_prepare(&q,
+      "SELECT min(iStart), max(iEnd) FROM lnos"
+      " WHERE iStart <= %d AND iEnd >= %d", n, n);
+    if( db_step(&q)==SQLITE_ROW ){
+      iStart = db_column_int(&q, 0);
+      iEnd = db_column_int(&q, 1);
+    }
+    db_finalize(&q);
+    for(i=0; z[i] && z[i]!='\n'; i++){}
+    if( n==iTop ) cgi_append_content("", -1);
+    if( n==iStart ){
+      cgi_append_content("
",-1); + } + cgi_printf("%6d ", n); + if( i>0 ){ + char *zHtml = htmlize(z, i); + cgi_append_content(zHtml, -1); + fossil_free(zHtml); + } + if( n==iTop ) cgi_append_content("", -1); + if( n==iEnd ) cgi_append_content("
", -1); + else cgi_append_content("\n", 1); + z += i; + if( z[0]=='\n' ) z++; + } + if( n"); + @
+ if( db_int(0, "SELECT EXISTS(SELECT 1 FROM lnos)") ){ + @ + } +} + /* +** WEBPAGE: whatis ** WEBPAGE: artifact -** URL: /artifact?name=ARTIFACTID +** +** URL: /artifact/SHA1HASH ** URL: /artifact?ci=CHECKIN&filename=PATH -** -** Show the complete content of a file identified by ARTIFACTID -** as preformatted text. +** URL: /whatis/SHA1HASH +** +** Additional query parameters: +** +** ln - show line numbers +** ln=N - highlight line number N +** ln=M-N - highlight lines M through N inclusive +** ln=M-N+Y-Z - higllight lines M through N and Y through Z (inclusive) +** verbose - show more detail in the description +** download - redirect to the download (artifact page only) +** +** The /artifact page show the complete content of a file +** identified by SHA1HASH as preformatted text. The +** /whatis page shows only a description of the file. */ void artifact_page(void){ int rid = 0; Blob content; const char *zMime; Blob downloadName; int renderAsWiki = 0; int renderAsHtml = 0; + int objType; + int asText; const char *zUuid; + u32 objdescFlags = 0; + int descOnly = fossil_strcmp(g.zPath,"whatis")==0; + const char *zLn = P("ln"); + if( P("ci") && P("filename") ){ - rid = artifact_from_ci_and_filename(); + rid = artifact_from_ci_and_filename_www(); } if( rid==0 ){ rid = name_to_rid_www("name"); } login_check_credentials(); - if( !g.okRead ){ login_needed(); return; } + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } if( rid==0 ) fossil_redirect_home(); - if( g.okAdmin ){ + if( descOnly || P("verbose")!=0 ) objdescFlags |= OBJDESC_DETAIL; + blob_zero(&downloadName); + objType = object_description(rid, objdescFlags, &downloadName); + if( !descOnly && P("download")!=0 ){ + cgi_redirectf("%R/raw/%T?name=%s", blob_str(&downloadName), + db_text("?", "SELECT uuid FROM blob WHERE rid=%d", rid)); + /*NOTREACHED*/ + } + if( g.perm.Admin ){ const char *zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); - if( db_exists("SELECT 1 FROM shun WHERE uuid='%s'", zUuid) ){ - style_submenu_element("Unshun","Unshun", "%s/shun?uuid=%s&sub=1", + if( db_exists("SELECT 1 FROM shun WHERE uuid=%Q", zUuid) ){ + style_submenu_element("Unshun","Unshun","%s/shun?accept=%s&sub=1#accshun", g.zTop, zUuid); }else{ style_submenu_element("Shun","Shun", "%s/shun?shun=%s#addshun", g.zTop, zUuid); } } - style_header("Artifact Content"); + style_header("%s", descOnly ? "Artifact Description" : "Artifact Content"); zUuid = db_text("?", "SELECT uuid FROM blob WHERE rid=%d", rid); - @

Artifact %s(zUuid)

- @
- blob_zero(&downloadName); - object_description(rid, 0, &downloadName); - style_submenu_element("Download", "Download", - "%s/raw/%T?name=%s", g.zTop, blob_str(&downloadName), zUuid); + if( g.perm.Setup ){ + @

Artifact %s(zUuid) (%d(rid)):

+ }else{ + @

Artifact %s(zUuid):

+ } + if( g.perm.Admin ){ + Stmt q; + db_prepare(&q, + "SELECT coalesce(user.login,rcvfrom.uid)," + " datetime(rcvfrom.mtime), rcvfrom.ipaddr" + " FROM blob, rcvfrom LEFT JOIN user ON user.uid=rcvfrom.uid" + " WHERE blob.rid=%d" + " AND rcvfrom.rcvid=blob.rcvid;", rid); + while( db_step(&q)==SQLITE_ROW ){ + const char *zUser = db_column_text(&q,0); + const char *zDate = db_column_text(&q,1); + const char *zIp = db_column_text(&q,2); + @

Received on %s(zDate) from %h(zUser) at %h(zIp).

+ } + db_finalize(&q); + } + style_submenu_element("Download", "Download", + "%R/raw/%T?name=%s", blob_str(&downloadName), zUuid); + if( db_exists("SELECT 1 FROM mlink WHERE fid=%d", rid) ){ + style_submenu_element("Check-ins Using", "Check-ins Using", + "%R/timeline?n=200&uf=%s",zUuid); + } + asText = P("txt")!=0; zMime = mimetype_from_name(blob_str(&downloadName)); if( zMime ){ - if( strcmp(zMime, "text/html")==0 ){ - if( P("txt") ){ + if( fossil_strcmp(zMime, "text/html")==0 ){ + if( asText ){ style_submenu_element("Html", "Html", - "%s/artifact?name=%s", g.zTop, zUuid); + "%s/artifact/%s", g.zTop, zUuid); }else{ renderAsHtml = 1; style_submenu_element("Text", "Text", - "%s/artifact?name=%s&txt=1", g.zTop, zUuid); + "%s/artifact/%s?txt=1", g.zTop, zUuid); } - }else if( strcmp(zMime, "application/x-fossil-wiki")==0 ){ - if( P("txt") ){ + }else if( fossil_strcmp(zMime, "text/x-fossil-wiki")==0 + || fossil_strcmp(zMime, "text/x-markdown")==0 ){ + if( asText ){ style_submenu_element("Wiki", "Wiki", - "%s/artifact?name=%s", g.zTop, zUuid); + "%s/artifact/%s", g.zTop, zUuid); }else{ renderAsWiki = 1; style_submenu_element("Text", "Text", - "%s/artifact?name=%s&txt=1", g.zTop, zUuid); - } - } - } - @
- @
- content_get(rid, &content); - if( renderAsWiki ){ - wiki_convert(&content, 0, 0); - }else if( renderAsHtml ){ - @
- cgi_append_content(blob_buffer(&content), blob_size(&content)); - @
- }else{ - zMime = mimetype_from_content(&content); - @
- if( zMime==0 ){ - @
-      @ %h(blob_str(&content))
-      @ 
- style_submenu_element("Hex","Hex", "%s/hexdump?name=%s", g.zTop, zUuid); - }else if( strncmp(zMime, "image/", 6)==0 ){ - @ - style_submenu_element("Hex","Hex", "%s/hexdump?name=%s", g.zTop, zUuid); - }else{ - @
-      hexdump(&content);
-      @ 
- } - @
+ "%s/artifact/%s?txt=1", g.zTop, zUuid); + } + } + } + if( (objType & (OBJTYPE_WIKI|OBJTYPE_TICKET))!=0 ){ + style_submenu_element("Parsed", "Parsed", "%R/info/%s", zUuid); + } + if( descOnly ){ + style_submenu_element("Content", "Content", "%R/artifact/%s", zUuid); + }else{ + style_submenu_element("Line Numbers", "Line Numbers", + "%R/artifact/%s%s",zUuid, + ((zLn&&*zLn) ? "" : "?txt=1&ln=0")); + @
+ content_get(rid, &content); + if( renderAsWiki ){ + wiki_render_by_mimetype(&content, zMime); + }else if( renderAsHtml ){ + @ + }else{ + style_submenu_element("Hex","Hex", "%s/hexdump?name=%s", g.zTop, zUuid); + blob_to_utf8_no_bom(&content, 0); + zMime = mimetype_from_content(&content); + @
+ if( zMime==0 ){ + const char *z; + z = blob_str(&content); + if( zLn ){ + output_text_with_line_numbers(z, zLn); + }else{ + @
+          @ %h(z)
+          @ 
+ } + }else if( strncmp(zMime, "image/", 6)==0 ){ + @ + style_submenu_element("Image", "Image", + "%R/raw/%s?m=%s", zUuid, zMime); + }else{ + @ (file is %d(blob_size(&content)) bytes of binary data) + } + @
+ } } style_footer(); -} +} /* ** WEBPAGE: tinfo ** URL: /tinfo?name=ARTIFACTID ** ** Show the details of a ticket change control artifact. */ void tinfo_page(void){ int rid; - Blob content; char *zDate; const char *zUuid; - char zTktName[20]; - Manifest m; - + char zTktName[UUID_SIZE+1]; + Manifest *pTktChng; + int modPending; + const char *zModAction; + char *zTktTitle; login_check_credentials(); - if( !g.okRdTkt ){ login_needed(); return; } + if( !g.perm.RdTkt ){ login_needed(g.anon.RdTkt); return; } rid = name_to_rid_www("name"); if( rid==0 ){ fossil_redirect_home(); } zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", rid); - if( g.okAdmin ){ - if( db_exists("SELECT 1 FROM shun WHERE uuid='%s'", zUuid) ){ - style_submenu_element("Unshun","Unshun", "%s/shun?uuid=%s&sub=1", + if( g.perm.Admin ){ + if( db_exists("SELECT 1 FROM shun WHERE uuid=%Q", zUuid) ){ + style_submenu_element("Unshun","Unshun", "%s/shun?accept=%s&sub=1#accshun", g.zTop, zUuid); }else{ style_submenu_element("Shun","Shun", "%s/shun?shun=%s#addshun", g.zTop, zUuid); } } - content_get(rid, &content); - if( manifest_parse(&m, &content)==0 ){ - fossil_redirect_home(); + pTktChng = manifest_get(rid, CFTYPE_TICKET, 0); + if( pTktChng==0 ) fossil_redirect_home(); + zDate = db_text(0, "SELECT datetime(%.12f)", pTktChng->rDate); + memcpy(zTktName, pTktChng->zTicketUuid, UUID_SIZE+1); + if( g.perm.ModTkt && (zModAction = P("modaction"))!=0 ){ + if( strcmp(zModAction,"delete")==0 ){ + moderation_disapprove(rid); + /* + ** Next, check if the ticket still exists; if not, we cannot + ** redirect to it. + */ + if( db_exists("SELECT 1 FROM ticket WHERE tkt_uuid GLOB '%q*'", + zTktName) ){ + cgi_redirectf("%R/tktview/%s", zTktName); + /*NOTREACHED*/ + }else{ + cgi_redirectf("%R/modreq"); + /*NOTREACHED*/ + } + } + if( strcmp(zModAction,"approve")==0 ){ + moderation_approve(rid); + } } - if( m.type!=CFTYPE_TICKET ){ - fossil_redirect_home(); - } + zTktTitle = db_table_has_column("repository", "ticket", "title" ) + ? db_text("(No title)", "SELECT title FROM ticket WHERE tkt_uuid=%Q", zTktName) + : 0; style_header("Ticket Change Details"); - zDate = db_text(0, "SELECT datetime(%.12f)", m.rDate); - memcpy(zTktName, m.zTicketUuid, 10); - zTktName[10] = 0; - if( g.okHistory ){ - @

Changes to ticket %s(zTktName)

- @ - @

By %h(m.zUser) on %s(zDate). See also: - @ artifact content, and - @ ticket history - @

+ style_submenu_element("Raw", "Raw", "%R/artifact/%s", zUuid); + style_submenu_element("History", "History", "%R/tkthistory/%s", zTktName); + style_submenu_element("Page", "Page", "%R/tktview/%t", zTktName); + style_submenu_element("Timeline", "Timeline", "%R/tkttimeline/%t", zTktName); + if( P("plaintext") ){ + style_submenu_element("Formatted", "Formatted", "%R/info/%s", zUuid); }else{ - @

Changes to ticket %s(zTktName)

- @ - @

By %h(m.zUser) on %s(zDate). - @

+ style_submenu_element("Plaintext", "Plaintext", + "%R/info/%s?plaintext", zUuid); + } + + @
Overview
+ @

+ @ + @ + @ + @ "); + @ "); + @
Artifact ID:%z(href("%R/artifact/%!S",zUuid))%s(zUuid) + if( g.perm.Setup ){ + @ (%d(rid)) + } + modPending = moderation_pending(rid); + if( modPending ){ + @ *** Awaiting Moderator Approval *** + } + @
Ticket:%z(href("%R/tktview/%s",zTktName))%s(zTktName) + if( zTktTitle ){ + @
%h(zTktTitle) } - @ - @
    + @
Date: + hyperlink_to_date(zDate, "
User: + hyperlink_to_user(pTktChng->zUser, zDate, "
free(zDate); - ticket_output_change_artifact(&m); - manifest_clear(&m); + free(zTktTitle); + + if( g.perm.ModTkt && modPending ){ + @

Moderation
+ @
+ @
+ @
+ @
+ @ + @
+ @
+ } + + @
Changes
+ @

+ ticket_output_change_artifact(pTktChng, 0); + manifest_destroy(pTktChng); style_footer(); } /* ** WEBPAGE: info ** URL: info/ARTIFACTID ** -** The argument is a artifact ID which might be a baseline or a file or -** a ticket changes or a wiki edit or something else. +** The argument is a artifact ID which might be a check-in or a file or +** a ticket changes or a wiki edit or something else. ** -** Figure out what the artifact ID is and jump to it. +** Figure out what the artifact ID is and display it appropriately. */ void info_page(void){ const char *zName; Blob uuid; int rid; - + int rc; + int nLen; + zName = P("name"); if( zName==0 ) fossil_redirect_home(); - if( validate16(zName, strlen(zName)) - && db_exists("SELECT 1 FROM ticket WHERE tkt_uuid GLOB '%q*'", zName) ){ - tktview_page(); - return; - } + nLen = strlen(zName); blob_set(&uuid, zName); - if( name_to_uuid(&uuid, 1) ){ - fossil_redirect_home(); + if( name_collisions(zName) ){ + cgi_set_parameter("src","info"); + ambiguous_page(); + return; + } + rc = name_to_uuid(&uuid, -1, "*"); + if( rc==1 ){ + if( validate16(zName, nLen) ){ + if( db_exists("SELECT 1 FROM ticket WHERE tkt_uuid GLOB '%q*'", zName) ){ + tktview_page(); + return; + } + if( db_exists("SELECT 1 FROM tag" + " WHERE tagname GLOB 'event-%q*'", zName) ){ + event_page(); + return; + } + } + style_header("No Such Object"); + @

No such object: %h(zName)

+ if( nLen<4 ){ + @

Object name should be no less than 4 characters. Ten or more + @ characters are recommended.

+ } + style_footer(); + return; + }else if( rc==2 ){ + cgi_set_parameter("src","info"); + ambiguous_page(); + return; } zName = blob_str(&uuid); - rid = db_int(0, "SELECT rid FROM blob WHERE uuid='%s'", zName); + rid = db_int(0, "SELECT rid FROM blob WHERE uuid=%Q", zName); if( rid==0 ){ style_header("Broken Link"); @

No such object: %h(zName)

style_footer(); return; @@ -1168,203 +2155,442 @@ ci_page(); }else if( db_exists("SELECT 1 FROM plink WHERE pid=%d", rid) ){ ci_page(); }else + if( db_exists("SELECT 1 FROM attachment WHERE attachid=%d", rid) ){ + ainfo_page(); + }else { artifact_page(); } } + +/* +** Generate HTML that will present the user with a selection of +** potential background colors for timeline entries. +*/ +void render_color_chooser( + int fPropagate, /* Default value for propagation */ + const char *zDefaultColor, /* The current default color */ + const char *zIdPropagate, /* ID of form element checkbox. NULL for none */ + const char *zId, /* The ID of the form element */ + const char *zIdCustom /* ID of text box for custom color */ +){ + static const struct SampleColors { + const char *zCName; + const char *zColor; + } aColor[] = { + { "(none)", "" }, + { "#f2dcdc", 0 }, + { "#bde5d6", 0 }, + { "#a0a0a0", 0 }, + { "#b0b0b0", 0 }, + { "#c0c0c0", 0 }, + { "#d0d0d0", 0 }, + { "#e0e0e0", 0 }, + + { "#c0fff0", 0 }, + { "#c0f0ff", 0 }, + { "#d0c0ff", 0 }, + { "#ffc0ff", 0 }, + { "#ffc0d0", 0 }, + { "#fff0c0", 0 }, + { "#f0ffc0", 0 }, + { "#c0ffc0", 0 }, + + { "#a8d3c0", 0 }, + { "#a8c7d3", 0 }, + { "#aaa8d3", 0 }, + { "#cba8d3", 0 }, + { "#d3a8bc", 0 }, + { "#d3b5a8", 0 }, + { "#d1d3a8", 0 }, + { "#b1d3a8", 0 }, + + { "#8eb2a1", 0 }, + { "#8ea7b2", 0 }, + { "#8f8eb2", 0 }, + { "#ab8eb2", 0 }, + { "#b28e9e", 0 }, + { "#b2988e", 0 }, + { "#b0b28e", 0 }, + { "#95b28e", 0 }, + + { "#80d6b0", 0 }, + { "#80bbd6", 0 }, + { "#8680d6", 0 }, + { "#c680d6", 0 }, + { "#d680a6", 0 }, + { "#d69b80", 0 }, + { "#d1d680", 0 }, + { "#91d680", 0 }, + + + { "custom", "##" }, + }; + int nColor = sizeof(aColor)/sizeof(aColor[0])-1; + int stdClrFound = 0; + int i; + + if( zIdPropagate ){ + @
+ } + @ + @ + for(i=0; i + }else{ + @ + if( (i%8)==7 && i+1 + } + } + @ + if( stdClrFound ){ + @ + @ + @
+ } + @
  + @ + @
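One detail of the chooser above worth spelling out: the final "custom" swatch
carries the literal value "##" rather than a color.  When the edit form comes
back with clr set to "##", ci_edit_page() below substitutes whatever was typed
into the clrcust text box, so a submission with, say, clr=## and
clrcust=#ffe0a0 (an arbitrary example) is treated as the background color
#ffe0a0.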
+} + +/* +** Do a comment comparison. +** +** + Leading and trailing whitespace are ignored. +** + \r\n characters compare equal to \n +** +** Return true if equal and false if not equal. +*/ +static int comment_compare(const char *zA, const char *zB){ + if( zA==0 ) zA = ""; + if( zB==0 ) zB = ""; + while( fossil_isspace(zA[0]) ) zA++; + while( fossil_isspace(zB[0]) ) zB++; + while( zA[0] && zB[0] ){ + if( zA[0]==zB[0] ){ zA++; zB++; continue; } + if( zA[0]=='\r' && zA[1]=='\n' && zB[0]=='\n' ){ + zA += 2; + zB++; + continue; + } + if( zB[0]=='\r' && zB[1]=='\n' && zA[0]=='\n' ){ + zB += 2; + zA++; + continue; + } + return 0; + } + while( fossil_isspace(zB[0]) ) zB++; + while( fossil_isspace(zA[0]) ) zA++; + return zA[0]==0 && zB[0]==0; +} + +/* +** The following methods operate on the newtags temporary table +** that is used to collect various changes to be added to a control +** artifact for a check-in edit. +*/ +static void init_newtags(void){ + db_multi_exec("CREATE TEMP TABLE newtags(tag UNIQUE, prefix, value)"); +} + +static void change_special( + const char *zName, /* Name of the special tag */ + const char *zOp, /* Operation prefix (e.g. +,-,*) */ + const char *zValue /* Value of the tag */ +){ + db_multi_exec("REPLACE INTO newtags VALUES(%Q,'%q',%Q)", zName, zOp, zValue); +} + +static void change_sym_tag(const char *zTag, const char *zOp){ + db_multi_exec("REPLACE INTO newtags VALUES('sym-%q',%Q,NULL)", zTag, zOp); +} + +static void cancel_special(const char *zTag){ + change_special(zTag,"-",0); +} + +static void add_color(const char *zNewColor, int fPropagateColor){ + change_special("bgcolor",fPropagateColor ? "*" : "+", zNewColor); +} + +static void cancel_color(void){ + change_special("bgcolor","-",0); +} + +static void add_comment(const char *zNewComment){ + change_special("comment","+",zNewComment); +} + +static void add_date(const char *zNewDate){ + change_special("date","+",zNewDate); +} + +static void add_user(const char *zNewUser){ + change_special("user","+",zNewUser); +} + +static void add_tag(const char *zNewTag){ + change_sym_tag(zNewTag,"+"); +} + +static void cancel_tag(int rid, const char *zCancelTag){ + if( db_exists("SELECT 1 FROM tagxref, tag" + " WHERE tagxref.rid=%d AND tagtype>0" + " AND tagxref.tagid=tag.tagid AND tagname='sym-%q'", + rid, zCancelTag) + ) change_sym_tag(zCancelTag,"-"); +} + +static void hide_branch(void){ + change_special("hidden","*",0); +} + +static void close_leaf(int rid){ + change_special("closed",is_a_leaf(rid)?"+":"*",0); +} + +static void change_branch(int rid, const char *zNewBranch){ + db_multi_exec( + "REPLACE INTO newtags " + " SELECT tagname, '-', NULL FROM tagxref, tag" + " WHERE tagxref.rid=%d AND tagtype==2" + " AND tagname GLOB 'sym-*'" + " AND tag.tagid=tagxref.tagid", + rid + ); + change_special("branch","*",zNewBranch); + change_sym_tag(zNewBranch,"*"); +} + +/* +** The apply_newtags method is called after all newtags have been added +** and the control artifact is completed and then written to the DB. 
+*/ +static void apply_newtags(Blob *ctrl, int rid, const char *zUuid){ + Stmt q; + int nChng = 0; + + db_prepare(&q, "SELECT tag, prefix, value FROM newtags" + " ORDER BY prefix || tag"); + while( db_step(&q)==SQLITE_ROW ){ + const char *zTag = db_column_text(&q, 0); + const char *zPrefix = db_column_text(&q, 1); + const char *zValue = db_column_text(&q, 2); + nChng++; + if( zValue ){ + blob_appendf(ctrl, "T %s%F %s %F\n", zPrefix, zTag, zUuid, zValue); + }else{ + blob_appendf(ctrl, "T %s%F %s\n", zPrefix, zTag, zUuid); + } + } + db_finalize(&q); + if( nChng>0 ){ + int nrid; + Blob cksum; + blob_appendf(ctrl, "U %F\n", login_name()); + md5sum_blob(ctrl, &cksum); + blob_appendf(ctrl, "Z %b\n", &cksum); + db_begin_transaction(); + g.markPrivate = content_is_private(rid); + nrid = content_put(ctrl); + manifest_crosslink(nrid, ctrl, MC_PERMIT_HOOKS); + assert( blob_is_reset(ctrl) ); + db_end_transaction(0); + } +} + +/* +** This method checks that the date can be parsed. +** Returns 1 if datetime() can validate, 0 otherwise. +*/ +int is_datetime(const char* zDate){ + return db_int(0, "SELECT datetime(%Q) NOT NULL", zDate); +} /* ** WEBPAGE: ci_edit -** URL: ci_edit?r=RID&c=NEWCOMMENT&u=NEWUSER +** URL: /ci_edit?r=RID&c=NEWCOMMENT&u=NEWUSER ** -** Present a dialog for updating properties of a baseline: +** Present a dialog for updating properties of a check-in. ** ** * The check-in user ** * The check-in comment +** * The check-in time and date ** * The background color. +** * Add and remove tags */ void ci_edit_page(void){ int rid; const char *zComment; /* Current comment on the check-in */ const char *zNewComment; /* Revised check-in comment */ const char *zUser; /* Current user for the check-in */ const char *zNewUser; /* Revised user */ const char *zDate; /* Current date of the check-in */ const char *zNewDate; /* Revised check-in date */ - const char *zColor; + const char *zColor; const char *zNewColor; const char *zNewTagFlag; const char *zNewTag; const char *zNewBrFlag; const char *zNewBranch; const char *zCloseFlag; - int fPropagateColor; + const char *zHideFlag; + int fPropagateColor; /* True if color propagates before edit */ + int fNewPropagateColor; /* True if color propagates after edit */ + int fHasHidden = 0; /* True if hidden tag already set */ + int fHasClosed = 0; /* True if closed tag already set */ + const char *zChngTime = 0; /* Value of chngtime= query param, if any */ char *zUuid; Blob comment; + char *zBranchName = 0; Stmt q; - static const struct SampleColors { - const char *zCName; - const char *zColor; - } aColor[] = { - { "(none)", "" }, - { "#f2dcdc", "#f2dcdc" }, - { "#f0ffc0", "#f0ffc0" }, - { "#bde5d6", "#bde5d6" }, - { "#c0ffc0", "#c0ffc0" }, - { "#c0fff0", "#c0fff0" }, - { "#c0f0ff", "#c0f0ff" }, - { "#d0c0ff", "#d0c0ff" }, - { "#ffc0ff", "#ffc0ff" }, - { "#ffc0d0", "#ffc0d0" }, - { "#fff0c0", "#fff0c0" }, - { "#c0c0c0", "#c0c0c0" }, - }; - int nColor = sizeof(aColor)/sizeof(aColor[0]); - int i; - + login_check_credentials(); - if( !g.okWrite ){ login_needed(); return; } - rid = name_to_rid(P("r")); + if( !g.perm.Write ){ login_needed(g.anon.Write); return; } + rid = name_to_typed_rid(P("r"), "ci"); zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); zComment = db_text(0, "SELECT coalesce(ecomment,comment)" " FROM event WHERE objid=%d", rid); if( zComment==0 ) fossil_redirect_home(); if( P("cancel") ){ cgi_redirectf("ci?name=%s", zUuid); } + if( g.perm.Setup ) zChngTime = P("chngtime"); zNewComment = PD("c",zComment); zUser = db_text(0, "SELECT 
coalesce(euser,user)" " FROM event WHERE objid=%d", rid); if( zUser==0 ) fossil_redirect_home(); - zNewUser = PD("u",zUser); + zNewUser = PDT("u",zUser); zDate = db_text(0, "SELECT datetime(mtime)" " FROM event WHERE objid=%d", rid); if( zDate==0 ) fossil_redirect_home(); - zNewDate = PD("dt",zDate); + zNewDate = PDT("dt",zDate); zColor = db_text("", "SELECT bgcolor" " FROM event WHERE objid=%d", rid); - zNewColor = PD("clr",zColor); - fPropagateColor = P("pclr")!=0; + zNewColor = PDT("clr",zColor); + if( fossil_strcmp(zNewColor,"##")==0 ){ + zNewColor = PT("clrcust"); + } + fPropagateColor = db_int(0, "SELECT tagtype FROM tagxref" + " WHERE rid=%d AND tagid=%d", + rid, TAG_BGCOLOR)==2; + fNewPropagateColor = P("clr")!=0 ? P("pclr")!=0 : fPropagateColor; zNewTagFlag = P("newtag") ? " checked" : ""; - zNewTag = PD("tagname",""); + zNewTag = PDT("tagname",""); zNewBrFlag = P("newbr") ? " checked" : ""; - zNewBranch = PD("brname",""); + zNewBranch = PDT("brname",""); zCloseFlag = P("close") ? " checked" : ""; + zHideFlag = P("hide") ? " checked" : ""; if( P("apply") ){ Blob ctrl; - char *zDate; - int nChng = 0; + char *zNow; login_verify_csrf_secret(); blob_zero(&ctrl); - zDate = db_text(0, "SELECT datetime('now')"); - zDate[10] = 'T'; - blob_appendf(&ctrl, "D %s\n", zDate); - db_multi_exec("CREATE TEMP TABLE newtags(tag UNIQUE, prefix, value)"); - if( zNewColor[0] && strcmp(zColor,zNewColor)!=0 ){ - char *zPrefix = "+"; - if( fPropagateColor ){ - zPrefix = "*"; - } - db_multi_exec("REPLACE INTO newtags VALUES('bgcolor',%Q,%Q)", - zPrefix, zNewColor); - } - if( zNewColor[0]==0 && zColor[0]!=0 ){ - db_multi_exec("REPLACE INTO newtags VALUES('bgcolor','-',NULL)"); - } - if( strcmp(zComment,zNewComment)!=0 ){ - db_multi_exec("REPLACE INTO newtags VALUES('comment','+',%Q)", - zNewComment); - } - if( strcmp(zDate,zNewDate)!=0 ){ - db_multi_exec("REPLACE INTO newtags VALUES('date','+',%Q)", - zNewDate); - } - if( strcmp(zUser,zNewUser)!=0 ){ - db_multi_exec("REPLACE INTO newtags VALUES('user','+',%Q)", zNewUser); - } + zNow = date_in_standard_format(zChngTime ? 
zChngTime : "now"); + blob_appendf(&ctrl, "D %s\n", zNow); + init_newtags(); + if( zNewColor[0] + && (fPropagateColor!=fNewPropagateColor + || fossil_strcmp(zColor,zNewColor)!=0) + ) add_color(zNewColor,fNewPropagateColor); + if( zNewColor[0]==0 && zColor[0]!=0 ) cancel_color(); + if( comment_compare(zComment,zNewComment)==0 ) add_comment(zNewComment); + if( fossil_strcmp(zDate,zNewDate)!=0 ) add_date(zNewDate); + if( fossil_strcmp(zUser,zNewUser)!=0 ) add_user(zNewUser); db_prepare(&q, "SELECT tag.tagid, tagname FROM tagxref, tag" " WHERE tagxref.rid=%d AND tagtype>0 AND tagxref.tagid=tag.tagid", rid ); while( db_step(&q)==SQLITE_ROW ){ int tagid = db_column_int(&q, 0); const char *zTag = db_column_text(&q, 1); char zLabel[30]; - sprintf(zLabel, "c%d", tagid); - if( P(zLabel) ){ - db_multi_exec("REPLACE INTO newtags VALUES(%Q,'-',NULL)", zTag); - } - } - db_finalize(&q); - if( zCloseFlag[0] ){ - db_multi_exec("REPLACE INTO newtags VALUES('closed','+',NULL)"); - } - if( zNewTagFlag[0] ){ - db_multi_exec("REPLACE INTO newtags VALUES('sym-%q','+',NULL)", zNewTag); - } - if( zNewBrFlag[0] ){ - db_multi_exec( - "REPLACE INTO newtags " - " SELECT tagname, '-', NULL FROM tagxref, tag" - " WHERE tagxref.rid=%d AND tagtype==2" - " AND tagname GLOB 'sym-*'" - " AND tag.tagid=tagxref.tagid", - rid - ); - db_multi_exec("REPLACE INTO newtags VALUES('branch','*',%Q)", zNewBranch); - db_multi_exec("REPLACE INTO newtags VALUES('sym-%q','*',NULL)", - zNewBranch); - } - db_prepare(&q, "SELECT tag, prefix, value FROM newtags" - " ORDER BY prefix || tag"); - while( db_step(&q)==SQLITE_ROW ){ - const char *zTag = db_column_text(&q, 0); - const char *zPrefix = db_column_text(&q, 1); - const char *zValue = db_column_text(&q, 2); - nChng++; - if( zValue ){ - blob_appendf(&ctrl, "T %s%F %s %F\n", zPrefix, zTag, zUuid, zValue); - }else{ - blob_appendf(&ctrl, "T %s%F %s\n", zPrefix, zTag, zUuid); - } - } - db_finalize(&q); - if( nChng>0 ){ - int nrid; - Blob cksum; - blob_appendf(&ctrl, "U %F\n", g.zLogin); - md5sum_blob(&ctrl, &cksum); - blob_appendf(&ctrl, "Z %b\n", &cksum); - db_begin_transaction(); - g.markPrivate = content_is_private(rid); - nrid = content_put(&ctrl, 0, 0); - manifest_crosslink(nrid, &ctrl); - db_end_transaction(0); - } + sqlite3_snprintf(sizeof(zLabel), zLabel, "c%d", tagid); + if( P(zLabel) ) cancel_special(zTag); + } + db_finalize(&q); + if( zHideFlag[0] ) hide_branch(); + if( zCloseFlag[0] ) close_leaf(rid); + if( zNewTagFlag[0] && zNewTag[0] ) add_tag(zNewTag); + if( zNewBrFlag[0] && zNewBranch[0] ) change_branch(rid,zNewBranch); + apply_newtags(&ctrl, rid, zUuid); cgi_redirectf("ci?name=%s", zUuid); } blob_zero(&comment); blob_append(&comment, zNewComment, -1); zUuid[10] = 0; style_header("Edit Check-in [%s]", zUuid); + /* + ** chgcbn/chgbn: Handle change of (checkbox for) branch name in + ** remaining of form. + */ + @ if( P("preview") ){ Blob suffix; int nTag = 0; @ Preview: @
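Putting the tag helpers and apply_newtags() above together: for a hypothetical
edit that moves the check-in onto branch "experimental", closes the leaf and
replaces the comment, the blob handed to content_put() would read roughly as
follows (the hash, user name and checksum are invented):

    D 2015-02-13T10:11:12
    T *branch a1b2c3d4... experimental
    T *sym-experimental a1b2c3d4...
    T +closed a1b2c3d4...
    T +comment a1b2c3d4... Fix\sthe\sbuild
    T -sym-trunk a1b2c3d4...
    U drh
    Z 7de53a6a0f5c8e21...

The T-card order follows the ORDER BY prefix || tag above, %F escapes the
spaces in the comment value, and the prefixes keep their usual meaning:
'+' applies a one-time tag, '*' a propagating one, and '-' cancels an
existing tag.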
@ if( zNewColor && zNewColor[0] ){ - @
+ @
}else{ @
} - wiki_convert(&comment, 0, WIKI_INLINE); + @ %!W(blob_str(&comment)) blob_zero(&suffix); blob_appendf(&suffix, "(user: %h", zNewUser); db_prepare(&q, "SELECT substr(tagname,5) FROM tagxref, tag" " WHERE tagname GLOB 'sym-*' AND tagxref.rid=%d" " AND tagtype>1 AND tag.tagid=tagxref.tagid", @@ -1380,123 +2606,355 @@ } db_finalize(&q); blob_appendf(&suffix, ")"); @ %s(blob_str(&suffix)) @
+ if( zChngTime ){ + @

The timestamp on the tag used to make the changes above + @ will be overridden as: %s(date_in_standard_format(zChngTime))

+ } @
- @
+ @
blob_reset(&suffix); } @

Make changes to attributes of check-in - @ [%s(zUuid)]:

- @
+ @ [%z(href("%R/ci/%!S",zUuid))%s(zUuid)]:

+ form_begin(0, "%R/ci_edit"); login_insert_csrf_secret(); - @ + @
@ - @ + @ @ - @ + @ @ - @ - @ - - @ - @ - - @ - @ + @ + + if( zChngTime ){ + @ + @ + } + + @ + @ + + @ + @ - @ - @ - - if( is_a_leaf(rid) - && !db_exists("SELECT 1 FROM tagxref " - " WHERE tagid=%d AND rid=%d AND tagtype>0", - TAG_CLOSED, rid) - ){ - @ - @ - } + if( !zBranchName ){ + zBranchName = db_get("main-branch", "trunk"); + } + if( !zNewBranch || !zNewBranch[0]){ + zNewBranch = zBranchName; + } + @ + @ + if( !fHasHidden ){ + @ + @ + } + if( !fHasClosed ){ + if( is_a_leaf(rid) ){ + @ + @ + }else if( zBranchName ){ + @ + @ + } + } + if( zBranchName ) fossil_free(zBranchName); @ @
User:
User: - @ + @ @
Comment:
Comment: @ @
Check-in Time: - @ - @
Background Color: - @ - @ - @ - for(i=0; i - }else{ - @ - if( (i%6)==5 && i+1 - } - } - @ - @
- if( fPropagateColor ){ - @ - }else{ - @ - } - @ Propagate color to descendants
- } - if( strcmp(zNewColor, aColor[i].zColor)==0 ){ - @ - }else{ - @ - } - @ %h(aColor[i].zCName)
- @
Tags: - @ - @ Add the following new tag name to this check-in: - @ - db_prepare(&q, - "SELECT tag.tagid, tagname FROM tagxref, tag" - " WHERE tagxref.rid=%d AND tagtype>0 AND tagxref.tagid=tag.tagid" - " ORDER BY CASE WHEN tagname GLOB 'sym-*' THEN substr(tagname,5)" - " ELSE tagname END", + @
Check-in Time: + @ + @
Timestamp of this change: + @ + @
Background Color: + render_color_chooser(fNewPropagateColor, zNewColor, "pclr", "clr", "clrcust"); + @
Tags: + @ + @ + zBranchName = db_text(0, "SELECT value FROM tagxref, tag" + " WHERE tagxref.rid=%d AND tagtype>0 AND tagxref.tagid=tag.tagid" + " AND tagxref.tagid=%d", rid, TAG_BRANCH); + db_prepare(&q, + "SELECT tag.tagid, tagname, tagxref.value FROM tagxref, tag" + " WHERE tagxref.rid=%d AND tagtype>0 AND tagxref.tagid=tag.tagid" + " ORDER BY CASE WHEN tagname GLOB 'sym-*' THEN substr(tagname,5)" + " ELSE tagname END /*sort*/", rid ); while( db_step(&q)==SQLITE_ROW ){ int tagid = db_column_int(&q, 0); const char *zTagName = db_column_text(&q, 1); + int isSpecialTag = fossil_strncmp(zTagName, "sym-", 4)!=0; char zLabel[30]; - sprintf(zLabel, "c%d", tagid); - if( P(zLabel) ){ - @
- }else{ - @
- } - if( strncmp(zTagName, "sym-", 4)==0 ){ - @ Cancel tag %h(&zTagName[4]) - }else{ - @ Cancel special tag %h(zTagName) + + if( tagid == TAG_CLOSED ){ + fHasClosed = 1; + }else if( (tagid == TAG_COMMENT) || (tagid == TAG_BRANCH) ){ + continue; + }else if( tagid==TAG_HIDDEN ){ + fHasHidden = 1; + }else if( !isSpecialTag && zTagName && + fossil_strcmp(&zTagName[4], zBranchName)==0){ + continue; + } + sqlite3_snprintf(sizeof(zLabel), zLabel, "c%d", tagid); + @
+ }else{ + @ Cancel tag %h(&zTagName[4]) } } db_finalize(&q); @
Branching: - @ - @ Make this check-in the start of a new branch named: - @ - @
Leaf Closure: - @ - @ Mark this leaf as "closed" so that it no longer appears on the - @ "leaves" page and is no longer labeled as a "Leaf". - @
Branching: + @ + @
Branch Hiding: + @ + @
Leaf Closure: + @ + @
Branch Closure: + @ + @
- @ - @ - @ + @ + @ + @ @
- @ + @
style_footer(); } + +/* +** Prepare an ammended commit comment. Let the user modify it using the +** editor specified in the global_config table or either +** the VISUAL or EDITOR environment variable. +** +** Store the final commit comment in pComment. pComment is assumed +** to be uninitialized - any prior content is overwritten. +** +** Use zInit to initialize the check-in comment so that the user does +** not have to retype. +*/ +static void prepare_amend_comment( + Blob *pComment, + const char *zInit, + const char *zUuid +){ + Blob prompt; +#if defined(_WIN32) || defined(__CYGWIN__) + int bomSize; + const unsigned char *bom = get_utf8_bom(&bomSize); + blob_init(&prompt, (const char *) bom, bomSize); + if( zInit && zInit[0]){ + blob_append(&prompt, zInit, -1); + } +#else + blob_init(&prompt, zInit, -1); +#endif + blob_append(&prompt, "\n# Enter a new comment for check-in ", -1); + if( zUuid && zUuid[0] ){ + blob_append(&prompt, zUuid, -1); + } + blob_append(&prompt, ".\n# Lines beginning with a # are ignored.\n", -1); + prompt_for_user_comment(pComment, &prompt); + blob_reset(&prompt); +} + +#define AMEND_USAGE_STMT "UUID OPTION ?OPTION ...?" +/* +** COMMAND: amend +** +** Usage: %fossil amend UUID OPTION ?OPTION ...? +** +** Amend the tags on check-in UUID to change how it displays in the timeline. +** +** Options: +** +** --author USER Make USER the author for check-in +** -m|--comment COMMENT Make COMMENT the check-in comment +** -M|--message-file FILE Read the amended comment from FILE +** --edit-comment Launch editor to revise comment +** --date DATE Make DATE the check-in time +** --bgcolor COLOR Apply COLOR to this check-in +** --branchcolor COLOR Apply and propagate COLOR to the branch +** --tag TAG Add new TAG to this check-in +** --cancel TAG Cancel TAG from this check-in +** --branch NAME Make this check-in the start of branch NAME +** --hide Hide branch starting from this check-in +** --close Mark this "leaf" as closed +*/ +void ci_amend_cmd(void){ + int rid; + const char *zComment; /* Current comment on the check-in */ + const char *zNewComment; /* Revised check-in comment */ + const char *zComFile; /* Filename from which to read comment */ + const char *zUser; /* Current user for the check-in */ + const char *zNewUser; /* Revised user */ + const char *zDate; /* Current date of the check-in */ + const char *zNewDate; /* Revised check-in date */ + const char *zColor; + const char *zNewColor; + const char *zNewBrColor; + const char *zNewBranch; + const char **pzNewTags = 0; + const char **pzCancelTags = 0; + int fClose; /* True if leaf should be closed */ + int fHide; /* True if branch should be hidden */ + int fPropagateColor; /* True if color propagates before amend */ + int fNewPropagateColor = 0; /* True if color propagates after amend */ + int fHasHidden = 0; /* True if hidden tag already set */ + int fHasClosed = 0; /* True if closed tag already set */ + int fEditComment; /* True if editor to be used for comment */ + const char *zChngTime; /* The change time on the control artifact */ + const char *zUuid; + Blob ctrl; + Blob comment; + char *zNow; + int nTags, nCancels; + int i; + Stmt q; + + if( g.argc==3 ) usage(AMEND_USAGE_STMT); + fEditComment = find_option("edit-comment",0,0)!=0; + zNewComment = find_option("comment","m",1); + zComFile = find_option("message-file","M",1); + zNewBranch = find_option("branch",0,1); + zNewColor = find_option("bgcolor",0,1); + zNewBrColor = find_option("branchcolor",0,1); + if( zNewBrColor ){ + zNewColor = zNewBrColor; + 
fNewPropagateColor = 1; + } + zNewDate = find_option("date",0,1); + zNewUser = find_option("author",0,1); + pzNewTags = find_repeatable_option("tag",0,&nTags); + pzCancelTags = find_repeatable_option("cancel",0,&nCancels); + fClose = find_option("close",0,0)!=0; + fHide = find_option("hide",0,0)!=0; + zChngTime = find_option("chngtime",0,1); + db_find_and_open_repository(0,0); + user_select(); + verify_all_options(); + if( g.argc<3 || g.argc>=4 ) usage(AMEND_USAGE_STMT); + rid = name_to_typed_rid(g.argv[2], "ci"); + if( rid==0 && !is_a_version(rid) ) fossil_fatal("no such check-in"); + zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); + if( zUuid==0 ) fossil_fatal("Unable to find UUID"); + zComment = db_text(0, "SELECT coalesce(ecomment,comment)" + " FROM event WHERE objid=%d", rid); + zUser = db_text(0, "SELECT coalesce(euser,user)" + " FROM event WHERE objid=%d", rid); + zDate = db_text(0, "SELECT datetime(mtime)" + " FROM event WHERE objid=%d", rid); + zColor = db_text("", "SELECT bgcolor" + " FROM event WHERE objid=%d", rid); + fPropagateColor = db_int(0, "SELECT tagtype FROM tagxref" + " WHERE rid=%d AND tagid=%d", + rid, TAG_BGCOLOR)==2; + fNewPropagateColor = zNewColor && zNewColor[0] + ? fNewPropagateColor : fPropagateColor; + db_prepare(&q, + "SELECT tag.tagid FROM tagxref, tag" + " WHERE tagxref.rid=%d AND tagtype>0 AND tagxref.tagid=tag.tagid", + rid + ); + while( db_step(&q)==SQLITE_ROW ){ + int tagid = db_column_int(&q, 0); + + if( tagid == TAG_CLOSED ){ + fHasClosed = 1; + }else if( tagid==TAG_HIDDEN ){ + fHasHidden = 1; + }else{ + continue; + } + } + db_finalize(&q); + blob_zero(&ctrl); + zNow = date_in_standard_format(zChngTime && zChngTime[0] ? zChngTime : "now"); + blob_appendf(&ctrl, "D %s\n", zNow); + init_newtags(); + if( zNewColor && zNewColor[0] + && (fPropagateColor!=fNewPropagateColor + || fossil_strcmp(zColor,zNewColor)!=0) + ){ + add_color( + mprintf("%s%s", (zNewColor[0]!='#' && + validate16(zNewColor,strlen(zNewColor)) && + (strlen(zNewColor)==6 || strlen(zNewColor)==3)) ? "#" : "", + zNewColor + ), + fNewPropagateColor + ); + } + if( (zNewColor!=0 && zNewColor[0]==0) && (zColor && zColor[0] ) ){ + cancel_color(); + } + if( fEditComment ){ + prepare_amend_comment(&comment, zComment, zUuid); + zNewComment = blob_str(&comment); + }else if( zComFile ){ + blob_zero(&comment); + blob_read_from_file(&comment, zComFile); + blob_to_utf8_no_bom(&comment, 1); + zNewComment = blob_str(&comment); + } + if( zNewComment && zNewComment[0] + && comment_compare(zComment,zNewComment)==0 ) add_comment(zNewComment); + if( zNewDate && zNewDate[0] && fossil_strcmp(zDate,zNewDate)!=0 ){ + if( is_datetime(zNewDate) ){ + add_date(zNewDate); + }else{ + fossil_fatal("Unsupported date format, use YYYY-MM-DD HH:MM:SS"); + } + } + if( zNewUser && zNewUser[0] && fossil_strcmp(zUser,zNewUser)!=0 ){ + add_user(zNewUser); + } + if( pzNewTags!=0 ){ + for(i=0; i +#include + +#if INTERFACE +#include "json_detail.h" /* workaround for apparent enum limitation in makeheaders */ +#endif + +const FossilJsonKeys_ FossilJsonKeys = { + "anonymousSeed" /*anonymousSeed*/, + "authToken" /*authToken*/, + "COMMAND_PATH" /*commandPath*/, + "mtime" /*mtime*/, + "payload" /* payload */, + "requestId" /*requestId*/, + "resultCode" /*resultCode*/, + "resultText" /*resultText*/, + "timestamp" /*timestamp*/ +}; + + + +/* +** Returns true (non-0) if fossil appears to be running in JSON mode. 
+*/ +int fossil_has_json(){ + return g.json.isJsonMode && (g.isHTTP || g.json.post.o); +} + +/* +** Placeholder /json/XXX page impl for NYI (Not Yet Implemented) +** (but planned) pages/commands. +*/ +cson_value * json_page_nyi(){ + g.json.resultCode = FSL_JSON_E_NYI; + return NULL; +} + +/* +** Given a FossilJsonCodes value, it returns a string suitable for use +** as a resultCode string. Returns some unspecified non-empty string +** if errCode is not one of the FossilJsonCodes values. +*/ +static char const * json_err_cstr( int errCode ){ + switch( errCode ){ + case 0: return "Success"; +#define C(X,V) case FSL_JSON_E_ ## X: return V + + C(GENERIC,"Generic error"); + C(INVALID_REQUEST,"Invalid request"); + C(UNKNOWN_COMMAND,"Unknown command or subcommand"); + C(UNKNOWN,"Unknown error"); + C(TIMEOUT,"Timeout reached"); + C(ASSERT,"Assertion failed"); + C(ALLOC,"Resource allocation failed"); + C(NYI,"Not yet implemented"); + C(PANIC,"x"); + C(MANIFEST_READ_FAILED,"Reading artifact manifest failed"); + C(FILE_OPEN_FAILED,"Opening file failed"); + + C(AUTH,"Authentication error"); + C(MISSING_AUTH,"Authentication info missing from request"); + C(DENIED,"Access denied"); + C(WRONG_MODE,"Request not allowed (wrong operation mode)"); + C(LOGIN_FAILED,"Login failed"); + C(LOGIN_FAILED_NOSEED,"Anonymous login attempt was missing password seed"); + C(LOGIN_FAILED_NONAME,"Login failed - name not supplied"); + C(LOGIN_FAILED_NOPW,"Login failed - password not supplied"); + C(LOGIN_FAILED_NOTFOUND,"Login failed - no match found"); + + C(USAGE,"Usage error"); + C(INVALID_ARGS,"Invalid argument(s)"); + C(MISSING_ARGS,"Missing argument(s)"); + C(AMBIGUOUS_UUID,"Resource identifier is ambiguous"); + C(UNRESOLVED_UUID,"Provided uuid/tag/branch could not be resolved"); + C(RESOURCE_ALREADY_EXISTS,"Resource already exists"); + C(RESOURCE_NOT_FOUND,"Resource not found"); + + C(DB,"Database error"); + C(STMT_PREP,"Statement preparation failed"); + C(STMT_BIND,"Statement parameter binding failed"); + C(STMT_EXEC,"Statement execution/stepping failed"); + C(DB_LOCKED,"Database is locked"); + C(DB_NEEDS_REBUILD,"Fossil repository needs to be rebuilt"); + C(DB_NOT_FOUND,"Fossil repository db file could not be found."); + C(DB_NOT_VALID, "Fossil repository db file is not valid."); + C(DB_NEEDS_CHECKOUT, "Command requires a local checkout."); +#undef C + default: + return "Unknown Error"; + } +} + +/* +** Implements the cson_data_dest_f() interface and outputs the data to +** a fossil Blob object. pState must be-a initialized (Blob*), to +** which n bytes of src will be appended. +**/ +int cson_data_dest_Blob(void * pState, void const * src, unsigned int n){ + Blob * b = (Blob*)pState; + blob_append( b, (char const *)src, (int)n ) /* will die on OOM */; + return 0; +} + +/* +** Implements the cson_data_source_f() interface and reads input from +** a fossil Blob object. pState must be-a (Blob*) populated with JSON +** data. +*/ +int cson_data_src_Blob(void * pState, void * dest, unsigned int * n){ + Blob * b = (Blob*)pState; + *n = blob_read( b, dest, *n ); + return 0; +} + +/* +** Convenience wrapper around cson_output() which appends the output +** to pDest. pOpt may be NULL, in which case g.json.outOpt will be used. +*/ +int cson_output_Blob( cson_value const * pVal, Blob * pDest, cson_output_opt const * pOpt ){ + return cson_output( pVal, cson_data_dest_Blob, + pDest, pOpt ? pOpt : &g.json.outOpt ); +} + +/* +** Convenience wrapper around cson_parse() which reads its input +** from pSrc. 
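**
** A minimal usage sketch (hypothetical input; assumes the Blob holds
** UTF-8 JSON text, e.g. as read by blob_read_from_file()):
**
**   Blob in = empty_blob;
**   cson_value * v;
**   blob_append(&in, "{\"limit\": 10}", -1);
**   v = cson_parse_Blob(&in, NULL);
**   blob_reset(&in);
**
** On success v is owned by the caller and must eventually be passed
** to cson_value_free() or added to a container which takes ownership.
**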
pSrc is rewound before parsing. +** +** pInfo may be NULL. If it is not NULL then it will contain details +** about the parse state when this function returns. +** +** On success a new JSON Object or Array is returned (owned by the +** caller). On error NULL is returned. +*/ +cson_value * cson_parse_Blob( Blob * pSrc, cson_parse_info * pInfo ){ + cson_value * root = NULL; + blob_rewind( pSrc ); + cson_parse( &root, cson_data_src_Blob, pSrc, NULL, pInfo ); + return root; +} + +/* +** Implements the cson_data_dest_f() interface and outputs the data to +** cgi_append_content(). pState is ignored. +**/ +int cson_data_dest_cgi(void * pState, void const * src, unsigned int n){ + cgi_append_content( (char const *)src, (int)n ); + return 0; +} + +/* +** Returns a string in the form FOSSIL-XXXX, where XXXX is a +** left-zero-padded value of code. The returned buffer is static, and +** must be copied if needed for later. The returned value will always +** be 11 bytes long (not including the trailing NUL byte). +** +** In practice we will only ever call this one time per app execution +** when constructing the JSON response envelope, so the static buffer +** "shouldn't" be a problem. +** +*/ +char const * json_rc_cstr( int code ){ + enum { BufSize = 12 }; + static char buf[BufSize] = {'F','O','S','S','I','L','-',0}; + assert((code >= 1000) && (code <= 9999) && "Invalid Fossil/JSON code."); + sprintf(buf+7,"%04d", code); + return buf; +} + +/* +** Adds v to the API-internal cleanup mechanism. key is ignored +** (legacy) but might be re-introduced and "should" be a unique +** (app-wide) value. Failure to insert an item may be caused by any +** of the following: +** +** - Allocation error. +** - g.json.gc.a is NULL +** - key is NULL or empty. +** +** Returns 0 on success. +** +** Ownership of v is transfered to (or shared with) g.json.gc, and v +** will be valid until that object is cleaned up or some internal code +** incorrectly removes it from the gc (which we never do). If this +** function fails, it is fatal to the app (as it indicates an +** allocation error (more likely than not) or a serious internal error +** such as numeric overflow). +*/ +void json_gc_add( char const * key, cson_value * v ){ + int const rc = cson_array_append( g.json.gc.a, v ); + + assert( NULL != g.json.gc.a ); + if( 0 != rc ){ + cson_value_free( v ); + } + assert( (0==rc) && "Adding item to GC failed." ); + if(0!=rc){ + fprintf(stderr,"%s: FATAL: alloc error.\n", g.argv[0]) + /* reminder: allocation error is the only reasonable cause of + error here, provided g.json.gc.a and v are not NULL. + */ + ; + fossil_exit(1)/*not fossil_panic() b/c it might land us somewhere + where this function is called again. + */; + } +} + + +/* +** Returns the value of json_rc_cstr(code) as a new JSON +** string, which is owned by the caller and must eventually +** be cson_value_free()d or transfered to a JSON container. +*/ +cson_value * json_rc_string( int code ){ + return cson_value_new_string( json_rc_cstr(code), 11 ); +} + +cson_value * json_new_string( char const * str ){ + return str + ? cson_value_new_string(str,strlen(str)) + : NULL; +} + +cson_value * json_new_string_f( char const * fmt, ... ){ + cson_value * v; + char * zStr; + va_list vargs; + va_start(vargs,fmt); + zStr = vmprintf(fmt,vargs); + va_end(vargs); + v = cson_value_new_string(zStr, strlen(zStr)); + free(zStr); + return v; +} + +cson_value * json_new_int( i64 v ){ + return cson_value_new_integer((cson_int_t)v); +} + +/* +** Gets a POST/POST.payload/GET/COOKIE/ENV value. 
The returned memory +** is owned by the g.json object (one of its sub-objects). Returns +** NULL if no match is found. +** +** ENV means the system environment (getenv()). +** +** Precedence: POST.payload, GET/COOKIE/non-JSON POST, JSON POST, ENV. +** +** FIXME: the precedence SHOULD be: GET, POST.payload, POST, COOKIE, +** ENV, but the amalgamation of the GET/POST vars makes it difficult +** for me to do that. Since fossil only uses one cookie, cookie +** precedence isn't a real/high-priority problem. +*/ +cson_value * json_getenv( char const * zKey ){ + cson_value * rc; + rc = g.json.reqPayload.o + ? cson_object_get( g.json.reqPayload.o, zKey ) + : NULL; + if(rc){ + return rc; + } + rc = cson_object_get( g.json.param.o, zKey ); + if( rc ){ + return rc; + } + rc = cson_object_get( g.json.post.o, zKey ); + if(rc){ + return rc; + }else{ + char const * cv = PD(zKey,NULL); + if(!cv && !g.isHTTP){ + /* reminder to self: in CLI mode i'd like to try + find_option(zKey,NULL,XYZ) here, but we don't have a sane + default for the XYZ param here. + */ + cv = fossil_getenv(zKey); + } + if(cv){/*transform it to JSON for later use.*/ + /* use sscanf() to figure out if it's an int, + and transform it to JSON int if it is. + + FIXME: use strtol(), since it has more accurate + error handling. + */ + int intVal = -1; + char endOfIntCheck; + int const scanRc = sscanf(cv,"%d%c",&intVal, &endOfIntCheck) + /* The %c bit there is to make sure that we don't accept 123x + as a number. sscanf() returns the number of tokens + successfully parsed, so an RC of 1 will be correct for "123" + but "123x" will have RC==2. + + But it appears to not be working that way :/ + */ + ; + if(1==scanRc){ + json_setenv( zKey, cson_value_new_integer(intVal) ); + }else{ + rc = cson_value_new_string(cv,strlen(cv)); + json_setenv( zKey, rc ); + } + return rc; + }else{ + return NULL; + } + } +} + +/* +** Wrapper around json_getenv() which... +** +** If it finds a value and that value is-a JSON number or is a string +** which looks like an integer or is-a JSON bool/null then it is +** converted to an int. If none of those apply then dflt is returned. +*/ +int json_getenv_int(char const * pKey, int dflt ){ + cson_value const * v = json_getenv(pKey); + const cson_type_id type = v ? cson_value_type_id(v) : CSON_TYPE_UNDEF; + switch(type){ + case CSON_TYPE_INTEGER: + case CSON_TYPE_DOUBLE: + return (int)cson_value_get_integer(v); + case CSON_TYPE_STRING: { + char const * sv = cson_string_cstr(cson_value_get_string(v)); + assert( (NULL!=sv) && "This is quite unexpected." ); + return sv ? atoi(sv) : dflt; + } + case CSON_TYPE_BOOL: + return cson_value_get_bool(v) ? 1 : 0; + case CSON_TYPE_NULL: + return 0; + default: + return dflt; + } +} + + +/* +** Wrapper around json_getenv() which tries to evaluate a payload/env +** value as a boolean. Uses mostly the same logic as +** json_getenv_int(), with the addition that string values which +** either start with a digit 1..9 or the letters [tTyY] are considered +** to be true. If this function cannot find a matching key/value then +** dflt is returned. e.g. if it finds the key but the value is-a +** Object then dftl is returned. +** +** If an entry is found, this function guarantees that it will return +** either 0 or 1, as opposed to "0 or non-zero", so that clients can +** pass a different value as dflt. Thus they can use, e.g. -1 to know +** whether or not this function found a match (it will return -1 in +** that case). 
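**
** A small usage sketch (hypothetical keys, for illustration only;
** assumes the JSON/CGI environment has already been initialized):
**
**   int verbose = json_getenv_bool("verbose", 0);
**   int limit = json_getenv_int("limit", 20);
**   int dry = json_getenv_bool("dryRun", -1);
**
** Here dry==-1 means that no usable "dryRun" entry was found at all.
**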
+*/ +int json_getenv_bool(char const * pKey, int dflt ){ + cson_value const * v = json_getenv(pKey); + const cson_type_id type = v ? cson_value_type_id(v) : CSON_TYPE_UNDEF; + switch(type){ + case CSON_TYPE_INTEGER: + case CSON_TYPE_DOUBLE: + return cson_value_get_integer(v) ? 1 : 0; + case CSON_TYPE_STRING: { + char const * sv = cson_string_cstr(cson_value_get_string(v)); + assert( (NULL!=sv) && "This is quite unexpected." ); + if(!*sv || ('0'==*sv)){ + return 0; + }else{ + return ((('1'<=*sv) && ('9'>=*sv)) + || ('t'==*sv) || ('T'==*sv) + || ('y'==*sv) || ('Y'==*sv) + ) + ? 1 : 0; + } + } + case CSON_TYPE_BOOL: + return cson_value_get_bool(v) ? 1 : 0; + case CSON_TYPE_NULL: + return 0; + default: + return dflt; + } +} + +/* +** Returns the string form of a json_getenv() value, but ONLY If that +** value is-a String. Non-strings are not converted to strings for +** this purpose. Returned memory is owned by g.json or fossil and is +** valid until end-of-app or the given key is replaced in fossil's +** internals via cgi_replace_parameter() and friends or json_setenv(). +*/ +char const * json_getenv_cstr( char const * zKey ){ + return cson_value_get_cstr( json_getenv(zKey) ); +} + +/* +** An extended form of find_option() which tries to look up a combo +** GET/POST/CLI argument. +** +** zKey must be the GET/POST parameter key. zCLILong must be the "long +** form" CLI flag (NULL means to use zKey). zCLIShort may be NULL or +** the "short form" CLI flag (if NULL, no short form is used). +** +** If argPos is >=0 and no other match is found, +** json_command_arg(argPos) is also checked. +** +** On error (no match found) NULL is returned. +** +** This ONLY works for String JSON/GET/CLI values, not JSON +** booleans and whatnot. +*/ +char const * json_find_option_cstr2(char const * zKey, + char const * zCLILong, + char const * zCLIShort, + int argPos){ + char const * rc = NULL; + assert(NULL != zKey); + if(!g.isHTTP){ + rc = find_option(zCLILong ? zCLILong : zKey, + zCLIShort, 1); + } + if(!rc && fossil_has_json()){ + rc = json_getenv_cstr(zKey); + if(!rc && zCLIShort){ + rc = cson_value_get_cstr( cson_object_get( g.json.param.o, zCLIShort) ); + } + } + if(!rc && (argPos>=0)){ + rc = json_command_arg((unsigned char)argPos); + } + return rc; +} + +/* +** Short-hand form of json_find_option_cstr2(zKey,zCLILong,zCLIShort,-1). +*/ +char const * json_find_option_cstr(char const * zKey, + char const * zCLILong, + char const * zCLIShort){ + return json_find_option_cstr2(zKey, zCLILong, zCLIShort, -1); +} + +/* +** The boolean equivalent of json_find_option_cstr(). +** If the option is not found, dftl is returned. +*/ +int json_find_option_bool(char const * zKey, + char const * zCLILong, + char const * zCLIShort, + int dflt ){ + int rc = -1; + if(!g.isHTTP){ + if(NULL != find_option(zCLILong ? zCLILong : zKey, + zCLIShort, 0)){ + rc = 1; + } + } + if((-1==rc) && fossil_has_json()){ + rc = json_getenv_bool(zKey,-1); + } + return (-1==rc) ? dflt : rc; +} + +/* +** The integer equivalent of json_find_option_cstr2(). +** If the option is not found, dftl is returned. +*/ +int json_find_option_int(char const * zKey, + char const * zCLILong, + char const * zCLIShort, + int dflt ){ + enum { Magic = -1947854832 }; + int rc = Magic; + if(!g.isHTTP){ + /* FIXME: use strtol() for better error/dflt handling. */ + char const * opt = find_option(zCLILong ? zCLILong : zKey, + zCLIShort, 1); + if(NULL!=opt){ + rc = atoi(opt); + } + } + if(Magic==rc){ + rc = json_getenv_int(zKey,Magic); + } + return (Magic==rc) ? 
dflt : rc; +} + + +/* +** Adds v to g.json.param.o using the given key. May cause any prior +** item with that key to be destroyed (depends on current reference +** count for that value). On success, transfers (or shares) ownership +** of v to (or with) g.json.param.o. On error ownership of v is not +** modified. +*/ +int json_setenv( char const * zKey, cson_value * v ){ + return cson_object_set( g.json.param.o, zKey, v ); +} + +/* +** Guesses a RESPONSE Content-Type value based (primarily) on the +** HTTP_ACCEPT header. +** +** It will try to figure out if the client can support +** application/json or application/javascript, and will fall back to +** text/plain if it cannot figure out anything more specific. +** +** Returned memory is static and immutable, but if the environment +** changes after calling this then subsequent calls to this function +** might return different (also static/immutable) values. +*/ +char const * json_guess_content_type(){ + char const * cset; + char doUtf8; + cset = PD("HTTP_ACCEPT_CHARSET",NULL); + doUtf8 = ((NULL == cset) || (NULL!=strstr("utf-8",cset))) + ? 1 : 0; + if( g.json.jsonp ){ + return doUtf8 + ? "application/javascript; charset=utf-8" + : "application/javascript"; + }else{ + /* + Content-type + + If the browser does not sent an ACCEPT for application/json + then we fall back to text/plain. + */ + char const * cstr; + cstr = PD("HTTP_ACCEPT",NULL); + if( NULL == cstr ){ + return doUtf8 + ? "application/json; charset=utf-8" + : "application/json"; + }else{ + if( strstr( cstr, "application/json" ) + || strstr( cstr, "*/*" ) ){ + return doUtf8 + ? "application/json; charset=utf-8" + : "application/json"; + }else{ + return "text/plain"; + } + } + } +} + +/* +** Sends pResponse to the output stream as the response object. This +** function does no validation of pResponse except to assert() that it +** is not NULL. The caller is responsible for ensuring that it meets +** API response envelope conventions. +** +** In CLI mode pResponse is sent to stdout immediately. In HTTP +** mode pResponse replaces any current CGI content but cgi_reply() +** is not called to flush the output. +** +** If g.json.jsonp is not NULL then the content type is set to +** application/javascript and the output is wrapped in a jsonp +** wrapper. +*/ +void json_send_response( cson_value const * pResponse ){ + assert( NULL != pResponse ); + if( g.isHTTP ){ + cgi_reset_content(); + if( g.json.jsonp ){ + cgi_printf("%s(",g.json.jsonp); + } + cson_output( pResponse, cson_data_dest_cgi, NULL, &g.json.outOpt ); + if( g.json.jsonp ){ + cgi_append_content(")",1); + } + }else{/*CLI mode*/ + if( g.json.jsonp ){ + fprintf(stdout,"%s(",g.json.jsonp); + } + cson_output_FILE( pResponse, stdout, &g.json.outOpt ); + if( g.json.jsonp ){ + fwrite(")\n", 2, 1, stdout); + } + } +} + +/* +** Returns the current request's JSON authentication token, or NULL if +** none is found. The token's memory is owned by (or shared with) +** g.json. +** +** If an auth token is found in the GET/POST request data then fossil +** is given that data for use in authentication for this +** session. i.e. the GET/POST data overrides fossil's authentication +** cookie value (if any) and also works with clients which do not +** support cookies. +** +** Must be called once before login_check_credentials() is called or +** we will not be able to replace fossil's internal idea of the auth +** info in time (and future changes to that state may cause unexpected +** results). 
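**
** Typical call order (a sketch of what json_mode_bootstrap() does for
** HTTP requests, not a standalone example): first copy any auth token
** into fossil's core, then check credentials as usual:
**
**   json_auth_token();
**   login_check_credentials();
**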
+** +** The result of this call are cached for future calls. +*/ +cson_value * json_auth_token(){ + assert(g.json.gc.a && "json_main_bootstrap() was not called!"); + if( !g.json.authToken ){ + /* Try to get an authorization token from GET parameter, POSTed + JSON, or fossil cookie (in that order). */ + g.json.authToken = json_getenv(FossilJsonKeys.authToken); + if(g.json.authToken + && cson_value_is_string(g.json.authToken) + && !PD(login_cookie_name(),NULL)){ + /* tell fossil to use this login info. + + FIXME?: because the JSON bits don't carry around + login_cookie_name(), there is(?) a potential(?) login hijacking + window here. We may need to change the JSON auth token to be in + the form: login_cookie_name()=... + + Then again, the hardened cookie value helps ensure that + only a proper key/value match is valid. + */ + cgi_replace_parameter( login_cookie_name(), cson_value_get_cstr(g.json.authToken) ); + }else if( g.isHTTP ){ + /* try fossil's conventional cookie. */ + /* Reminder: chicken/egg scenario regarding db access in CLI + mode because login_cookie_name() needs the db. CLI + mode does not use any authentication, so we don't need + to support it here. + */ + char const * zCookie = P(login_cookie_name()); + if( zCookie && *zCookie ){ + /* Transfer fossil's cookie to JSON for downstream convenience... */ + cson_value * v = cson_value_new_string(zCookie, strlen(zCookie)); + json_gc_add( FossilJsonKeys.authToken, v ); + g.json.authToken = v; + } + } + } + return g.json.authToken; +} + +/* +** If g.json.reqPayload.o is NULL then NULL is returned, else the +** given property is searched for in the request payload. If found it +** is returned. The returned value is owned by (or shares ownership +** with) g.json, and must NOT be cson_value_free()'d by the +** caller. +*/ +cson_value * json_req_payload_get(char const *pKey){ + return g.json.reqPayload.o + ? cson_object_get(g.json.reqPayload.o,pKey) + : NULL; +} + +/* +** Initializes some JSON bits which need to be initialized relatively +** early on. It should only be called from cgi_init() or +** json_cmd_top() (early on in those functions). +** +** Initializes g.json.gc and g.json.param. This code does not (and +** must not) rely on any of the fossil environment having been set +** up. e.g. it must not use cgi_parameter() and friends because this +** must be called before those data are initialized. +*/ +void json_main_bootstrap(){ + cson_value * v; + assert( (NULL == g.json.gc.v) && + "json_main_bootstrap() was called twice!" ); + + g.json.timerId = fossil_timer_start(); + + /* g.json.gc is our "garbage collector" - where we put JSON values + which need a long lifetime but don't have a logical parent to put + them in. + */ + v = cson_value_new_array(); + g.json.gc.v = v; + assert(0 != g.json.gc.v); + g.json.gc.a = cson_value_get_array(v); + assert(0 != g.json.gc.a); + cson_value_add_reference(v) + /* Needed to allow us to include this value in other JSON + containers without transferring ownership to those containers. + All other persistent g.json.XXX.v values get appended to + g.json.gc.a, and therefore already have a live reference + for this purpose. + */ + ; + + /* + g.json.param holds the JSONized counterpart of fossil's + cgi_parameter_xxx() family of data. We store them as JSON, as + opposed to using fossil's data directly, because we can retain + full type information for data this way (as opposed to it always + being of type string). 
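
    For illustration (hypothetical key; json_setenv() and json_getenv()
    are defined earlier in this file), a caller can stash a typed value
    here and later read it back without it being flattened to a string:

      json_setenv("limit", cson_value_new_integer(20));
      cson_value * lim = json_getenv("limit");
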
+ */ + v = cson_value_new_object(); + g.json.param.v = v; + g.json.param.o = cson_value_get_object(v); + json_gc_add("$PARAMS", v); +} + +/* +** Appends a warning object to the (pending) JSON response. +** +** Code must be a FSL_JSON_W_xxx value from the FossilJsonCodes enum. +** +** A Warning object has this JSON structure: +** +** { "code":integer, "text":"string" } +** +** But the text part is optional. +** +** If msg is non-NULL and not empty then it is used as the "text" +** property's value. It is copied, and need not refer to static +** memory. +** +** CURRENTLY this code only allows a given warning code to be +** added one time, and elides subsequent warnings. The intention +** is to remove that burden from loops which produce warnings. +** +** FIXME: if msg is NULL then use a standard string for +** the given code. If !*msg then elide the "text" property, +** for consistency with how json_err() works. +*/ +void json_warn( int code, char const * fmt, ... ){ + cson_object * obj = NULL; + assert( (code>FSL_JSON_W_START) + && (code=rc ){ + cson_free_array(a); + a = NULL; + } + return a ? cson_array_value(a) : NULL; +} + + +/* +** Performs some common initialization of JSON-related state. Must be +** called by the json_page_top() and json_cmd_top() dispatching +** functions to set up the JSON stat used by the dispatched functions. +** +** Implicitly sets up the login information state in CGI mode, but +** does not perform any permissions checking. It _might_ (haven't +** tested this) die with an error if an auth cookie is malformed. +** +** This must be called by the top-level JSON command dispatching code +** before they do any work. +** +** This must only be called once, or an assertion may be triggered. +*/ +static void json_mode_bootstrap(){ + static char once = 0 /* guard against multiple runs */; + char const * zPath = P("PATH_INFO"); + assert(g.json.gc.a && "json_main_bootstrap() was not called!"); + assert( (0==once) && "json_mode_bootstrap() called too many times!"); + if( once ){ + return; + }else{ + once = 1; + } + g.json.isJsonMode = 1; + g.json.resultCode = 0; + g.json.cmd.offset = -1; + g.json.jsonp = PD("jsonp",NULL) + /* FIXME: do some sanity checking on g.json.jsonp and ignore it + if it is not halfway reasonable. + */ + ; + if( !g.isHTTP && g.fullHttpReply ){ + /* workaround for server mode, so we see it as CGI mode. */ + g.isHTTP = 1; + } + + if(g.isHTTP){ + cgi_set_content_type(json_guess_content_type()) + /* reminder: must be done after g.json.jsonp is initialized */ + ; +#if 0 + /* Calling this seems to trigger an SQLITE_MISUSE warning??? + Maybe it's not legal to set the logger more than once? + */ + sqlite3_config(SQLITE_CONFIG_LOG, NULL, 0) + /* avoids debug messages on stderr in JSON mode */ + ; +#endif + } + + g.json.cmd.v = cson_value_new_array(); + g.json.cmd.a = cson_value_get_array(g.json.cmd.v); + json_gc_add( FossilJsonKeys.commandPath, g.json.cmd.v ); + /* + The following if/else block translates the PATH_INFO path (in + CLI/server modes) or g.argv (CLI mode) into an internal list so + that we can simplify command dispatching later on. + + Note that translating g.argv this way is overkill but allows us to + avoid CLI-only special-case handling in other code, e.g. + json_command_arg(). + */ + if( zPath ){/* Either CGI or server mode... */ + /* Translate PATH_INFO into JSON array for later convenience. 
*/ + json_string_split(zPath, '/', 1, g.json.cmd.a); + }else{/* assume CLI mode */ + int i; + char const * arg; + cson_value * part; + for(i = 1/*skip argv[0]*/; i < g.argc; ++i ){ + arg = g.argv[i]; + if( !arg || !*arg ){ + continue; + } + if('-' == *arg){ + /* workaround to skip CLI args so that + json_command_arg() does not see them. + This assumes that all arguments come LAST + on the command line. + */ + break; + } + part = cson_value_new_string(arg,strlen(arg)); + cson_array_append(g.json.cmd.a, part); + } + } + + while(!g.isHTTP){ + /* Simulate JSON POST data via input file. Pedantic reminder: + error handling does not honor user-supplied g.json.outOpt + because outOpt cannot (generically) be configured until after + POST-reading is finished. + */ + FILE * inFile = NULL; + char const * jfile = find_option("json-input",NULL,1); + if(!jfile || !*jfile){ + break; + } + inFile = (0==strcmp("-",jfile)) + ? stdin + : fossil_fopen(jfile,"rb"); + if(!inFile){ + g.json.resultCode = FSL_JSON_E_FILE_OPEN_FAILED; + fossil_fatal("Could not open JSON file [%s].",jfile) + /* Does not return. */ + ; + } + cgi_parse_POST_JSON(inFile, 0); + if( stdin != inFile ){ + fclose(inFile); + } + break; + } + + /* g.json.reqPayload exists only to simplify some of our access to + the request payload. We currently only use this in the context of + Object payloads, not Arrays, strings, etc. + */ + g.json.reqPayload.v = cson_object_get( g.json.post.o, FossilJsonKeys.payload ); + if( g.json.reqPayload.v ){ + g.json.reqPayload.o = cson_value_get_object( g.json.reqPayload.v ) + /* g.json.reqPayload.o may legally be NULL, which means only that + g.json.reqPayload.v is-not-a Object. + */; + } + + /* Anything which needs json_getenv() and friends should go after + this point. + */ + + if(1 == cson_array_length_get(g.json.cmd.a)){ + /* special case: if we're at the top path, look for + a "command" request arg which specifies which command + to run. + */ + char const * cmd = json_getenv_cstr("command"); + if(cmd){ + json_string_split(cmd, '/', 0, g.json.cmd.a); + g.json.cmd.commandStr = cmd; + } + } + + + if(!g.json.jsonp){ + g.json.jsonp = json_find_option_cstr("jsonp",NULL,NULL); + } + if(!g.isHTTP){ + g.json.errorDetailParanoia = 0 /*disable error code dumb-down for CLI mode*/; + } + + {/* set up JSON output formatting options. */ + int indent = -1; + indent = json_find_option_int("indent",NULL,"I",-1); + g.json.outOpt.indentation = (0>indent) + ? (g.isHTTP ? 0 : 1) + : (unsigned char)indent; + g.json.outOpt.addNewline = g.isHTTP + ? 0 + : (g.json.jsonp ? 0 : 1); + } + + if( g.isHTTP ){ + json_auth_token()/* will copy our auth token, if any, to fossil's + core, which we need before we call + login_check_credentials(). */; + login_check_credentials()/* populates g.perm */; + } + else{ + /* FIXME: we need an option which allows us to skip this. At least + one known command (/json/version) does not need an opened + repo. The problem here is we cannot know which functions need + it from here (because command dispatching hasn't yet happened) + and all other commands rely on the repo being opened before + they are called. A textbook example of lack of foresight :/. + */ + db_find_and_open_repository(OPEN_ANY_SCHEMA,0); + } +} + +/* +** Returns the ndx'th item in the "command path", where index 0 is the +** position of the "json" part of the path. Returns NULL if ndx is out +** of bounds or there is no "json" path element. +** +** In CLI mode the "path" is the list of arguments (skipping argv[0]). 
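**
** For example (a sketch, assuming a request dispatched as
** /json/timeline/checkin or the equivalent CLI invocation):
**
**   json_command_arg(0) returns "json",
**   json_command_arg(1) returns "timeline",
**   json_command_arg(2) returns "checkin".
**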
+** In server/CGI modes the path is taken from PATH_INFO. +** +** The returned bytes are owned by g.json.cmd.v and _may_ be +** invalidated if that object is modified (depending on how it is +** modified). +** +** Note that CLI options are not included in the command path. Use +** find_option() to get those. +** +*/ +char const * json_command_arg(unsigned short ndx){ + cson_array * ar = g.json.cmd.a; + assert((NULL!=ar) && "Internal error. Was json_mode_bootstrap() called?"); + assert((g.argc>1) && "Internal error - we never should have gotten this far."); + if( g.json.cmd.offset < 0 ){ + /* first-time setup. */ + short i = 0; +#define NEXT cson_string_cstr( \ + cson_value_get_string( \ + cson_array_get(ar,i) \ + )) + char const * tok = NEXT; + while( tok ){ + if( !g.isHTTP/*workaround for "abbreviated name" in CLI mode*/ + ? (0==strcmp(g.argv[1],tok)) + : (0==strncmp("json",tok,4)) + ){ + g.json.cmd.offset = i; + break; + } + ++i; + tok = NEXT; + } + } +#undef NEXT + if(g.json.cmd.offset < 0){ + return NULL; + }else{ + ndx = g.json.cmd.offset + ndx; + return cson_string_cstr(cson_value_get_string(cson_array_get( ar, g.json.cmd.offset + ndx ))); + } +} + +/* Returns the C-string form of json_auth_token(), or NULL +** if json_auth_token() returns NULL. +*/ +char const * json_auth_token_cstr(){ + return cson_value_get_cstr( json_auth_token() ); +} + +/* +** Returns the JsonPageDef with the given name, or NULL if no match is +** found. +** +** head must be a pointer to an array of JsonPageDefs in which the +** last entry has a NULL name. +*/ +JsonPageDef const * json_handler_for_name( char const * name, JsonPageDef const * head ){ + JsonPageDef const * pageDef = head; + assert( head != NULL ); + if(name && *name) for( ; pageDef->name; ++pageDef ){ + if( 0 == strcmp(name, pageDef->name) ){ + return pageDef; + } + } + return NULL; +} + +/* +** Given a Fossil/JSON result code, this function "dumbs it down" +** according to the current value of g.json.errorDetailParanoia. The +** dumbed-down value is returned. +** +** This function assert()s that code is in the inclusive range 0 to +** 9999. +** +** Note that WARNING codes (1..999) are never dumbed down. +** +*/ +static int json_dumbdown_rc( int code ){ + if(!g.json.errorDetailParanoia + || !code + || ((code>=FSL_JSON_W_START) && (code= 1000) && (code <= 9999) && "Invalid Fossil/JSON code."); + switch( g.json.errorDetailParanoia ){ + case 1: modulo = 10; break; + case 2: modulo = 100; break; + case 3: modulo = 1000; break; + default: break; + } + if( modulo ) code = code - (code % modulo); + return code; + } +} + +/* +** Convenience routine which converts a Julian time value into a Unix +** Epoch timestamp. Requires the db, so this cannot be used before the +** repo is opened (will trigger a fatal error in db_xxx()). The returned +** value is owned by the caller. +*/ +cson_value * json_julian_to_timestamp(double j){ + return cson_value_new_integer((cson_int_t) + db_int64(0,"SELECT cast(strftime('%%s',%lf) as int)",j) + ); +} + +/* +** Returns a timestamp value. +*/ +cson_int_t json_timestamp(){ + return (cson_int_t)time(0); +} + +/* +** Returns a new JSON value (owned by the caller) representing +** a timestamp. If timeVal is < 0 then time(0) is used to fetch +** the time, else timeVal is used as-is. The returned value is +** owned by the caller. +*/ +cson_value * json_new_timestamp(cson_int_t timeVal){ + return cson_value_new_integer((timeVal<0) ? (cson_int_t)time(0) : timeVal); +} + +/* +** Internal helper for json_create_response(). 
Appends the first +** g.json.dispatchDepth elements of g.json.cmd.a, skipping the first +** one (the "json" part), to a string and returns that string value +** (which is owned by the caller). +*/ +static cson_value * json_response_command_path(){ + if(!g.json.cmd.a){ + return NULL; + }else{ + cson_value * rc = NULL; + Blob path = empty_blob; + unsigned int aLen = g.json.dispatchDepth+1; /*cson_array_length_get(g.json.cmd.a);*/ + unsigned int i = 1; + for( ; i < aLen; ++i ){ + char const * part = cson_string_cstr(cson_value_get_string(cson_array_get(g.json.cmd.a, i))); + if(!part){ +#if 1 + fossil_warning("Iterating further than expected in %s.", + __FILE__); +#endif + break; + } + blob_appendf(&path,"%s%s", (i>1 ? "/": ""), part); + } + rc = json_new_string((blob_size(&path)>0) + ? blob_buffer(&path) + : "") + /* reminder; we need an empty string instead of NULL + in this case, to avoid what outwardly looks like + (but is not) an allocation error in + json_create_response(). + */ + ; + blob_reset(&path); + return rc; + } +} + +/* +** Returns a JSON Object representation of the global g object. +** Returned value is owned by the caller. +*/ +cson_value * json_g_to_json(){ + cson_object * o = NULL; + cson_object * pay = NULL; + pay = o = cson_new_object(); + +#define INT(OBJ,K) cson_object_set(o, #K, json_new_int(OBJ.K)) +#define CSTR(OBJ,K) cson_object_set(o, #K, OBJ.K ? json_new_string(OBJ.K) : cson_value_null()) +#define VAL(K,V) cson_object_set(o, #K, (V) ? (V) : cson_value_null()) + VAL(capabilities, json_cap_value()); + INT(g, argc); + INT(g, isConst); + INT(g, useAttach); + CSTR(g, zConfigDbName); + INT(g, repositoryOpen); + INT(g, localOpen); + INT(g, minPrefix); + INT(g, fSqlTrace); + INT(g, fSqlStats); + INT(g, fSqlPrint); + INT(g, fQuiet); + INT(g, fHttpTrace); + INT(g, fSystemTrace); + INT(g, fNoSync); + INT(g, iErrPriority); + INT(g, sslNotAvailable); + INT(g, cgiOutput); + INT(g, xferPanic); + INT(g, fullHttpReply); + INT(g, xlinkClusterOnly); + INT(g, fTimeFormat); + INT(g, markPrivate); + INT(g, clockSkewSeen); + INT(g, isHTTP); + INT(g.url, isFile); + INT(g.url, isHttps); + INT(g.url, isSsh); + INT(g.url, port); + INT(g.url, dfltPort); + INT(g, useLocalauth); + INT(g, noPswd); + INT(g, userUid); + INT(g, rcvid); + INT(g, okCsrf); + INT(g, thTrace); + INT(g, isHome); + INT(g, nAux); + INT(g, allowSymlinks); + + CSTR(g, zMainDbType); + CSTR(g, zConfigDbType); + CSTR(g, zOpenRevision); + CSTR(g, zLocalRoot); + CSTR(g, zPath); + CSTR(g, zExtra); + CSTR(g, zBaseURL); + CSTR(g, zTop); + CSTR(g, zContentType); + CSTR(g, zErrMsg); + CSTR(g.url, name); + CSTR(g.url, hostname); + CSTR(g.url, protocol); + CSTR(g.url, path); + CSTR(g.url, user); + CSTR(g.url, passwd); + CSTR(g.url, canonical); + CSTR(g.url, proxyAuth); + CSTR(g.url, fossil); + CSTR(g, zLogin); + CSTR(g, zSSLIdentity); + CSTR(g, zIpAddr); + CSTR(g, zNonce); + CSTR(g, zCsrfToken); + + o = cson_new_object(); + cson_object_set(pay, "json", cson_object_value(o) ); + INT(g.json, isJsonMode); + INT(g.json, resultCode); + INT(g.json, errorDetailParanoia); + INT(g.json, dispatchDepth); + VAL(authToken, g.json.authToken); + CSTR(g.json, jsonp); + VAL(gc, g.json.gc.v); + VAL(cmd, g.json.cmd.v); + VAL(param, g.json.param.v); + VAL(POST, g.json.post.v); + VAL(warnings, cson_array_value(g.json.warnings)); + /*cson_output_opt outOpt;*/ + + +#undef INT +#undef CSTR +#undef VAL + return cson_object_value(pay); +} + + +/* +** Creates a new Fossil/JSON response envelope skeleton. 
It is owned +** by the caller, who must eventually free it using cson_value_free(), +** or add it to a cson container to transfer ownership. Returns NULL +** on error. +** +** If payload is not NULL and resultCode is 0 then it is set as the +** "payload" property of the returned object. If resultCode is 0 then +** it defaults to g.json.resultCode. If resultCode is (or defaults to) +** non-zero and payload is not NULL then this function calls +** cson_value_free(payload) and does not insert the payload into the +** response. In either case, ownership of payload is transfered to (or +** shared with, if the caller holds a reference) this function. +** +** pMsg is an optional message string property (resultText) of the +** response. If resultCode is non-0 and pMsg is NULL then +** json_err_cstr() is used to get the error string. The caller may +** provide his own or may use an empty string to suppress the +** resultText property. +** +*/ +static cson_value * json_create_response( int resultCode, + char const * pMsg, + cson_value * payload){ + cson_value * v = NULL; + cson_value * tmp = NULL; + cson_object * o = NULL; + int rc; + resultCode = json_dumbdown_rc(resultCode ? resultCode : g.json.resultCode); + o = cson_new_object(); + v = cson_object_value(o); + if( ! o ) return NULL; +#define SET(K) if(!tmp) goto cleanup; \ + rc = cson_object_set( o, K, tmp ); \ + if(rc) do{\ + cson_value_free(tmp); \ + tmp = NULL; \ + goto cleanup; \ + }while(0) + + + tmp = json_new_string(MANIFEST_UUID); + SET("fossil"); + + tmp = json_new_timestamp(-1); + SET(FossilJsonKeys.timestamp); + + if( 0 != resultCode ){ + if( ! pMsg ){ + pMsg = g.zErrMsg; + if(!pMsg){ + pMsg = json_err_cstr(resultCode); + } + } + tmp = json_new_string(json_rc_cstr(resultCode)); + SET(FossilJsonKeys.resultCode); + } + + if( pMsg && *pMsg ){ + tmp = json_new_string(pMsg); + SET(FossilJsonKeys.resultText); + } + + if(g.json.cmd.commandStr){ + tmp = json_new_string(g.json.cmd.commandStr); + }else{ + tmp = json_response_command_path(); + } + SET("command"); + + tmp = json_getenv(FossilJsonKeys.requestId); + if( tmp ) cson_object_set( o, FossilJsonKeys.requestId, tmp ); + + if(0){/* these are only intended for my own testing...*/ + if(g.json.cmd.v){ + tmp = g.json.cmd.v; + SET("$commandPath"); + } + if(g.json.param.v){ + tmp = g.json.param.v; + SET("$params"); + } + if(0){/*Only for debugging, add some info to the response.*/ + tmp = cson_value_new_integer( g.json.cmd.offset ); + cson_object_set( o, "cmd.offset", tmp ); + cson_object_set( o, "isCGI", cson_value_new_bool( g.isHTTP ) ); + } + } + + if(fossil_timer_is_active(g.json.timerId)){ + /* This is, philosophically speaking, not quite the right place + for ending the timer, but this is the one function which all of + the JSON exit paths use (and they call it after processing, + just before they end). + */ + sqlite3_uint64 span = fossil_timer_stop(g.json.timerId); + /* I'm actually seeing sub-uSec runtimes in some tests, but a time of + 0 is "just kinda wrong". + */ + cson_object_set(o,"procTimeUs", cson_value_new_integer((cson_int_t)span)); + span /= 1000/*for milliseconds */; + cson_object_set(o,"procTimeMs", cson_value_new_integer((cson_int_t)span)); + assert(!fossil_timer_is_active(g.json.timerId)); + g.json.timerId = -1; + + } + if(g.json.warnings){ + tmp = cson_array_value(g.json.warnings); + SET("warnings"); + } + + /* Only add the payload to SUCCESS responses. Else delete it. 
*/ + if( NULL != payload ){ + if( resultCode ){ + cson_value_free(payload); + payload = NULL; + }else{ + tmp = payload; + SET(FossilJsonKeys.payload); + } + } + + if(json_find_option_bool("debugFossilG","json-debug-g",NULL,0) + &&(g.perm.Admin||g.perm.Setup)){ + tmp = json_g_to_json(); + SET("g"); + } + +#undef SET + goto ok; + cleanup: + cson_value_free(v); + v = NULL; + ok: + return v; +} + +/* +** Outputs a JSON error response to either the cgi_xxx() family of +** buffers (in CGI/server mode) or stdout (in CLI mode). If rc is 0 +** then g.json.resultCode is used. If that is also 0 then the "Unknown +** Error" code is used. +** +** If g.isHTTP then the generated JSON error response object replaces +** any currently buffered page output. Because the output goes via +** the cgi_xxx() family of functions, this function inherits any +** compression which fossil does for its output. +** +** If alsoOutput is true AND g.isHTTP then cgi_reply() is called to +** flush the output (and headers). Generally only do this if you are +** about to call exit(). +** +** If !g.isHTTP then alsoOutput is ignored and all output is sent to +** stdout immediately. +** +** For generating the resultText property: if msg is not NULL then it +** is used as-is. If it is NULL then g.zErrMsg is checked, and if that +** is NULL then json_err_cstr(code) is used. +*/ +void json_err( int code, char const * msg, int alsoOutput ){ + int rc = code ? code : (g.json.resultCode + ? g.json.resultCode + : FSL_JSON_E_UNKNOWN); + cson_value * resp = NULL; + rc = json_dumbdown_rc(rc); + if( rc && !msg ){ + msg = g.zErrMsg; + if(!msg){ + msg = json_err_cstr(rc); + } + } + resp = json_create_response(rc, msg, NULL); + if(!resp){ + /* about the only error case here is out-of-memory. DO NOT + call fossil_panic() here because that calls this function. + */ + fprintf(stderr, "%s: Fatal error: could not allocate " + "response object.\n", g.argv[0]); + fossil_exit(1); + } + if( g.isHTTP ){ + if(alsoOutput){ + json_send_response(resp); + }else{ + /* almost a duplicate of json_send_response() :( */ + cgi_reset_content(); + if( g.json.jsonp ){ + cgi_printf("%s(",g.json.jsonp); + } + cson_output( resp, cson_data_dest_cgi, NULL, &g.json.outOpt ); + if( g.json.jsonp ){ + cgi_append_content(")",1); + } + } + }else{ + json_send_response(resp); + } + cson_value_free(resp); +} + +/* +** Sets g.json.resultCode and g.zErrMsg, but does not report the error +** via json_err(). Returns the code passed to it. +** +** code must be in the inclusive range 1000..9999. +*/ +int json_set_err( int code, char const * fmt, ... ){ + assert( (code>=1000) && (code<=9999) ); + free(g.zErrMsg); + g.json.resultCode = code; + if(!fmt || !*fmt){ + g.zErrMsg = mprintf("%s", json_err_cstr(code)); + }else{ + va_list vargs; + char * msg; + va_start(vargs,fmt); + msg = vmprintf(fmt, vargs); + va_end(vargs); + g.zErrMsg = msg; + } + return code; +} + +/* +** Iterates through a prepared SELECT statement and converts each row +** to a JSON object. If pTgt is not NULL then this function will +** append the results to pTgt and return cson_array_value(pTgt). If +** pTgt is NULL then a new Array object is created and returned (owned +** by the caller). Each row of pStmt is converted to an Object and +** appended to the array. If the result set has no rows AND pTgt is +** NULL then NULL (not an empty array) is returned. 
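**
** A usage sketch (hypothetical query; assumes an open repository):
**
**   Stmt q = empty_Stmt;
**   cson_value * rows;
**   db_prepare(&q, "SELECT uid, login FROM user");
**   rows = json_stmt_to_array_of_obj(&q, NULL);
**   db_finalize(&q);
**
** Here rows is NULL if the query produced no rows, else a new Array
** of Objects owned by the caller.
**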
+*/ +cson_value * json_stmt_to_array_of_obj(Stmt *pStmt, + cson_array * pTgt){ + cson_array * a = pTgt; + char const * warnMsg = NULL; + cson_value * colNamesV = NULL; + cson_array * colNames = NULL; + while( (SQLITE_ROW==db_step(pStmt)) ){ + cson_value * row = NULL; + if(!a){ + a = cson_new_array(); + assert(NULL!=a); + } + if(!colNames){ + colNamesV = cson_sqlite3_column_names(pStmt->pStmt); + assert(NULL != colNamesV); + /*Why? cson_value_add_reference(colNamesV) avoids an ownership problem*/; + colNames = cson_value_get_array(colNamesV); + assert(NULL != colNames); + } + row = cson_sqlite3_row_to_object2(pStmt->pStmt, colNames); + if(!row && !warnMsg){ + warnMsg = "Could not convert at least one result row to JSON."; + continue; + } + if( 0 != cson_array_append(a, row) ){ + cson_value_free(row); + if(pTgt != a) { + cson_free_array(a); + } + assert( 0 && "Alloc error."); + return NULL; + } + } + cson_value_free(colNamesV); + if(warnMsg){ + json_warn( FSL_JSON_W_ROW_TO_JSON_FAILED, warnMsg ); + } + return cson_array_value(a); +} + +/* +** Works just like json_stmt_to_array_of_obj(), but each row in the +** result set is represented as an Array of values instead of an +** Object (key/value pairs). If pTgt is NULL and the statement +** has no results then NULL is returned, not an empty array. +*/ +cson_value * json_stmt_to_array_of_array(Stmt *pStmt, + cson_array * pTgt){ + cson_array * a = pTgt; + while( (SQLITE_ROW==db_step(pStmt)) ){ + cson_value * row = NULL; + if(!a){ + a = cson_new_array(); + assert(NULL!=a); + } + row = cson_sqlite3_row_to_array(pStmt->pStmt); + cson_array_append(a, row); + } + return cson_array_value(a); +} + +cson_value * json_stmt_to_array_of_values(Stmt *pStmt, + int resultColumn, + cson_array * pTgt){ + cson_array * a = pTgt; + while( (SQLITE_ROW==db_step(pStmt)) ){ + cson_value * row = cson_sqlite3_column_to_value(pStmt->pStmt, + resultColumn); + if(row){ + if(!a){ + a = cson_new_array(); + assert(NULL!=a); + } + cson_array_append(a, row); + } + } + return cson_array_value(a); +} + +/* +** Executes the given SQL and runs it through +** json_stmt_to_array_of_obj(), returning the result of that +** function. If resetBlob is true then blob_reset(pSql) is called +** after preparing the query. +** +** pTgt has the same semantics as described for +** json_stmt_to_array_of_obj(). +** +** FIXME: change this to take a (char const *) instead of a blob, +** to simplify the trivial use-cases (which don't need a Blob). +*/ +cson_value * json_sql_to_array_of_obj(Blob * pSql, cson_array * pTgt, + int resetBlob){ + Stmt q = empty_Stmt; + cson_value * pay = NULL; + assert( blob_size(pSql) > 0 ); + db_prepare(&q, "%s", blob_str(pSql) /*safe-for-%s*/); + if(resetBlob){ + blob_reset(pSql); + } + pay = json_stmt_to_array_of_obj(&q, pTgt); + db_finalize(&q); + return pay; + +} + +/* +** If the given COMMIT rid has any tags associated with it, this +** function returns a JSON Array containing the tag names (owned by +** the caller), else it returns NULL. +** +** See info_tags_of_checkin() for more details (this is simply a JSON +** wrapper for that function). +** +** If there are no tags then this function returns NULL, not an empty +** Array. 
+*/ +cson_value * json_tags_for_checkin_rid(int rid, int propagatingOnly){ + cson_value * v = NULL; + char * tags = info_tags_of_checkin(rid, propagatingOnly); + if(tags){ + if(*tags){ + v = json_string_split2(tags,',',0); + } + free(tags); + } + return v; +} + +/* + ** Returns a "new" value representing the boolean value of zVal + ** (false if zVal is NULL). Note that cson does not really allocate + ** any memory for boolean values, but they "should" (for reasons of + ** style and philosophy) be cleaned up like any other values (but + ** it's a no-op for bools). + */ +cson_value * json_value_to_bool(cson_value const * zVal){ + return cson_value_get_bool(zVal) + ? cson_value_true() + : cson_value_false(); +} + +/* +** Impl of /json/resultCodes +** +*/ +cson_value * json_page_resultCodes(){ + cson_array * list = cson_new_array(); + cson_object * obj = NULL; + cson_string * kRC; + cson_string * kSymbol; + cson_string * kNumber; + cson_string * kDesc; + cson_array_reserve( list, 35 ); + kRC = cson_new_string("resultCode",10); + kSymbol = cson_new_string("cSymbol",7); + kNumber = cson_new_string("number",6); + kDesc = cson_new_string("description",11); +#define C(K) obj = cson_new_object(); \ + cson_object_set_s(obj, kRC, json_new_string(json_rc_cstr(FSL_JSON_E_##K)) ); \ + cson_object_set_s(obj, kSymbol, json_new_string("FSL_JSON_E_"#K) ); \ + cson_object_set_s(obj, kNumber, cson_value_new_integer(FSL_JSON_E_##K) ); \ + cson_object_set_s(obj, kDesc, json_new_string(json_err_cstr(FSL_JSON_E_##K))); \ + cson_array_append( list, cson_object_value(obj) ); obj = NULL; + + C(GENERIC); + C(INVALID_REQUEST); + C(UNKNOWN_COMMAND); + C(UNKNOWN); + C(TIMEOUT); + C(ASSERT); + C(ALLOC); + C(NYI); + C(PANIC); + C(MANIFEST_READ_FAILED); + C(FILE_OPEN_FAILED); + + C(AUTH); + C(MISSING_AUTH); + C(DENIED); + C(WRONG_MODE); + C(LOGIN_FAILED); + C(LOGIN_FAILED_NOSEED); + C(LOGIN_FAILED_NONAME); + C(LOGIN_FAILED_NOPW); + C(LOGIN_FAILED_NOTFOUND); + + C(USAGE); + C(INVALID_ARGS); + C(MISSING_ARGS); + C(AMBIGUOUS_UUID); + C(UNRESOLVED_UUID); + C(RESOURCE_ALREADY_EXISTS); + C(RESOURCE_NOT_FOUND); + + C(DB); + C(STMT_PREP); + C(STMT_BIND); + C(STMT_EXEC); + C(DB_LOCKED); + C(DB_NEEDS_REBUILD); + C(DB_NOT_FOUND); + C(DB_NOT_VALID); +#undef C + return cson_array_value(list); +} + + +/* +** /json/version implementation. +** +** Returns the payload object (owned by the caller). +*/ +cson_value * json_page_version(){ + cson_value * jval = NULL; + cson_object * jobj = NULL; + jval = cson_value_new_object(); + jobj = cson_value_get_object(jval); +#define FSET(X,K) cson_object_set( jobj, K, cson_value_new_string(X,strlen(X))) + FSET(MANIFEST_UUID,"manifestUuid"); + FSET(MANIFEST_VERSION,"manifestVersion"); + FSET(MANIFEST_DATE,"manifestDate"); + FSET(MANIFEST_YEAR,"manifestYear"); + FSET(RELEASE_VERSION,"releaseVersion"); + cson_object_set( jobj, "releaseVersionNumber", + cson_value_new_integer(RELEASE_VERSION_NUMBER) ); + cson_object_set( jobj, "resultCodeParanoiaLevel", + cson_value_new_integer(g.json.errorDetailParanoia) ); + FSET(FOSSIL_JSON_API_VERSION, "jsonApiVersion" ); +#undef FSET + return jval; +} + + +/* +** Returns the current user's capabilities string as a String value. +** Returned value is owned by the caller, and will only be NULL if +** g.userUid is invalid or an out of memory error. Or, it turns out, +** in CLI mode (where there is no logged-in user). 
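**
** A usage sketch (o is some cson_object * owned by the caller; this
** mirrors how json_g_to_json() reports its "capabilities" entry):
**
**   cson_value * cap = json_cap_value();
**   if( cap ) cson_object_set(o, "capabilities", cap);
**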
+*/ +cson_value * json_cap_value(){ + if(g.userUid<=0){ + return NULL; + }else{ + Stmt q = empty_Stmt; + cson_value * val = NULL; + db_prepare(&q, "SELECT cap FROM user WHERE uid=%d", g.userUid); + if( db_step(&q)==SQLITE_ROW ){ + char const * str = (char const *)sqlite3_column_text(q.pStmt,0); + if( str ){ + val = json_new_string(str); + } + } + db_finalize(&q); + return val; + } +} + +/* +** Implementation for /json/cap +** +** Returned object contains details about the "capabilities" of the +** current user (what he may/may not do). +** +** This is primarily intended for debuggering, but may have +** a use in client code. (?) +*/ +cson_value * json_page_cap(){ + cson_value * payload = cson_value_new_object(); + cson_value * sub = cson_value_new_object(); + Stmt q; + cson_object * obj = cson_value_get_object(payload); + db_prepare(&q, "SELECT login, cap FROM user WHERE uid=%d", g.userUid); + if( db_step(&q)==SQLITE_ROW ){ + /* reminder: we don't use g.zLogin because it's 0 for the guest + user and the HTML UI appears to currently allow the name to be + changed (but doing so would break other code). */ + char const * str = (char const *)sqlite3_column_text(q.pStmt,0); + if( str ){ + cson_object_set( obj, "name", + cson_value_new_string(str,strlen(str)) ); + } + str = (char const *)sqlite3_column_text(q.pStmt,1); + if( str ){ + cson_object_set( obj, "capabilities", + cson_value_new_string(str,strlen(str)) ); + } + } + db_finalize(&q); + cson_object_set( obj, "permissionFlags", sub ); + obj = cson_value_get_object(sub); + +#define ADD(X,K) cson_object_set(obj, K, cson_value_new_bool(g.perm.X)) + ADD(Setup,"setup"); + ADD(Admin,"admin"); + ADD(Delete,"delete"); + ADD(Password,"password"); + ADD(Query,"query"); /* don't think this one is actually used */ + ADD(Write,"checkin"); + ADD(Read,"checkout"); + ADD(Hyperlink,"history"); + ADD(Clone,"clone"); + ADD(RdWiki,"readWiki"); + ADD(NewWiki,"createWiki"); + ADD(ApndWiki,"appendWiki"); + ADD(WrWiki,"editWiki"); + ADD(ModWiki,"moderateWiki"); + ADD(RdTkt,"readTicket"); + ADD(NewTkt,"createTicket"); + ADD(ApndTkt,"appendTicket"); + ADD(WrTkt,"editTicket"); + ADD(ModTkt,"moderateTicket"); + ADD(Attach,"attachFile"); + ADD(TktFmt,"createTicketReport"); + ADD(RdAddr,"readPrivate"); + ADD(Zip,"zip"); + ADD(Private,"xferPrivate"); +#undef ADD + return payload; +} + +/* +** Implementation of the /json/stat page/command. 
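+**
+** Reached as /json/stat (HTTP) or "fossil json stat" (CLI). The boolean
+** "full" option (short form -f, or "verbose"/-v) adds the more expensive
+** repository statistics. Abbreviated payload sketch with made-up values:
+**
+**   {
+**     "projectName": "...", "repositorySize": 1234567,
+**     "ageDays": 321, "ageYears": 0.88,
+**     "sqlite": { "version": "...", "pageCount": 500, "pageSize": 1024 }
+**   }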
+** +*/ +cson_value * json_page_stat(){ + i64 t, fsize; + int n, m; + int full; + const char *zDb; + enum { BufLen = 1000 }; + char zBuf[BufLen]; + cson_value * jv = NULL; + cson_object * jo = NULL; + cson_value * jv2 = NULL; + cson_object * jo2 = NULL; + char * zTmp = NULL; + if( !g.perm.Read ){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'o' permissions."); + return NULL; + } + full = json_find_option_bool("full",NULL,"f", + json_find_option_bool("verbose",NULL,"v",0)); +#define SETBUF(O,K) cson_object_set(O, K, cson_value_new_string(zBuf, strlen(zBuf))); + + jv = cson_value_new_object(); + jo = cson_value_get_object(jv); + + zTmp = db_get("project-name",NULL); + cson_object_set(jo, "projectName", json_new_string(zTmp)); + free(zTmp); + zTmp = db_get("project-description",NULL); + cson_object_set(jo, "projectDescription", json_new_string(zTmp)); + free(zTmp); + zTmp = NULL; + fsize = file_size(g.zRepositoryName); + cson_object_set(jo, "repositorySize", cson_value_new_integer((cson_int_t)fsize)); + + if(full){ + n = db_int(0, "SELECT count(*) FROM blob"); + m = db_int(0, "SELECT count(*) FROM delta"); + cson_object_set(jo, "blobCount", cson_value_new_integer((cson_int_t)n)); + cson_object_set(jo, "deltaCount", cson_value_new_integer((cson_int_t)m)); + if( n>0 ){ + int a, b; + Stmt q; + db_prepare(&q, "SELECT total(size), avg(size), max(size)" + " FROM blob WHERE size>0"); + db_step(&q); + t = db_column_int64(&q, 0); + cson_object_set(jo, "uncompressedArtifactSize", + cson_value_new_integer((cson_int_t)t)); + cson_object_set(jo, "averageArtifactSize", + cson_value_new_integer((cson_int_t)db_column_int(&q, 1))); + cson_object_set(jo, "maxArtifactSize", + cson_value_new_integer((cson_int_t)db_column_int(&q, 2))); + db_finalize(&q); + if( t/fsize < 5 ){ + b = 10; + fsize /= 10; + }else{ + b = 1; + } + a = t/fsize; + sqlite3_snprintf(BufLen,zBuf, "%d:%d", a, b); + SETBUF(jo, "compressionRatio"); + } + n = db_int(0, "SELECT count(distinct mid) FROM mlink /*scan*/"); + cson_object_set(jo, "checkinCount", cson_value_new_integer((cson_int_t)n)); + n = db_int(0, "SELECT count(*) FROM filename /*scan*/"); + cson_object_set(jo, "fileCount", cson_value_new_integer((cson_int_t)n)); + n = db_int(0, "SELECT count(*) FROM tag /*scan*/" + " WHERE +tagname GLOB 'wiki-*'"); + cson_object_set(jo, "wikiPageCount", cson_value_new_integer((cson_int_t)n)); + n = db_int(0, "SELECT count(*) FROM tag /*scan*/" + " WHERE +tagname GLOB 'tkt-*'"); + cson_object_set(jo, "ticketCount", cson_value_new_integer((cson_int_t)n)); + }/*full*/ + n = db_int(0, "SELECT julianday('now') - (SELECT min(mtime) FROM event)" + " + 0.99"); + cson_object_set(jo, "ageDays", cson_value_new_integer((cson_int_t)n)); + cson_object_set(jo, "ageYears", cson_value_new_double(n/365.2425)); + sqlite3_snprintf(BufLen, zBuf, db_get("project-code","")); + SETBUF(jo, "projectCode"); + cson_object_set(jo, "compiler", cson_value_new_string(COMPILER_NAME, strlen(COMPILER_NAME))); + + jv2 = cson_value_new_object(); + jo2 = cson_value_get_object(jv2); + cson_object_set(jo, "sqlite", jv2); + sqlite3_snprintf(BufLen, zBuf, "%.19s [%.10s] (%s)", + sqlite3_sourceid(), &sqlite3_sourceid()[20], sqlite3_libversion()); + SETBUF(jo2, "version"); + zDb = db_name("repository"); + cson_object_set(jo2, "pageCount", cson_value_new_integer((cson_int_t)db_int(0, "PRAGMA \"%w\".page_count", zDb))); + cson_object_set(jo2, "pageSize", cson_value_new_integer((cson_int_t)db_int(0, "PRAGMA \"%w\".page_size", zDb))); + cson_object_set(jo2, "freeList", 
cson_value_new_integer((cson_int_t)db_int(0, "PRAGMA \"%w\".freelist_count", zDb))); + sqlite3_snprintf(BufLen, zBuf, "%s", db_text(0, "PRAGMA \"%w\".encoding", zDb)); + SETBUF(jo2, "encoding"); + sqlite3_snprintf(BufLen, zBuf, "%s", db_text(0, "PRAGMA \"%w\".journal_mode", zDb)); + cson_object_set(jo2, "journalMode", *zBuf ? cson_value_new_string(zBuf, strlen(zBuf)) : cson_value_null()); + return jv; +#undef SETBUF +} + + + + +/* +** Creates a comma-separated list of command names +** taken from zPages. zPages must be an array of objects +** whose final entry MUST have a NULL name value or results +** are undefined. +** +** The list is appended to pOut. The number of items (not bytes) +** appended are returned. If filterByMode is non-0 then the result +** list will contain only commands which are able to run in the +** current run mode (CLI vs. HTTP). +*/ +static int json_pagedefs_to_string(JsonPageDef const * zPages, + Blob * pOut, int filterByMode){ + int i = 0; + for( ; zPages->name; ++zPages, ++i ){ + if(filterByMode){ + if(g.isHTTP && zPages->runMode < 0) continue; + else if(zPages->runMode > 0) continue; + } + blob_append(pOut, zPages->name, -1); + if((zPages+1)->name){ + blob_append(pOut, ", ",2); + } + } + return i; +} + +/* +** Creates an error message using zErrPrefix and the given array of +** JSON command definitions, and sets the g.json error state to +** reflect FSL_JSON_E_MISSING_ARGS. If zErrPrefix is NULL then +** some default is used (e.g. "Try one of: "). If it is "" then +** no prefix is used. +** +** The intention is to provide the user (via the response.resultText) +** a list of available commands/subcommands. +** +*/ +void json_dispatch_missing_args_err( JsonPageDef const * pCommands, + char const * zErrPrefix ){ + Blob cmdNames = empty_blob; + blob_init(&cmdNames,NULL,0); + if( !zErrPrefix ) { + zErrPrefix = "Try one of: "; + } + blob_append( &cmdNames, zErrPrefix, strlen(zErrPrefix) ); + json_pagedefs_to_string(pCommands, &cmdNames, 1); + json_set_err(FSL_JSON_E_MISSING_ARGS, "%s", + blob_str(&cmdNames)); + blob_reset(&cmdNames); +} + +cson_value * json_page_dispatch_helper(JsonPageDef const * pages){ + JsonPageDef const * def; + char const * cmd = json_command_arg(1+g.json.dispatchDepth); + assert( NULL != pages ); + if( ! cmd ){ + json_dispatch_missing_args_err(pages, + "No subcommand specified. " + "Try one of: "); + return NULL; + } + def = json_handler_for_name( cmd, pages ); + if(!def){ + json_set_err(FSL_JSON_E_UNKNOWN_COMMAND, + "Unknown subcommand: %s", cmd); + return NULL; + } + else{ + ++g.json.dispatchDepth; + return (*def->func)(); + } +} + + +/* +** Impl of /json/rebuild. Requires admin privileges. +*/ +static cson_value * json_page_rebuild(){ + if( !g.perm.Admin ){ + json_set_err(FSL_JSON_E_DENIED,"Requires 'a' privileges."); + return NULL; + }else{ + /* Reminder: the db_xxx() ops "should" fail via the fossil core + error handlers, which will cause a JSON error and exit(). i.e. we + don't handle the errors here. TODO: confirm that all these db + routine fail gracefully in JSON mode. + + On large repos (e.g. fossil's) this operation is likely to take + longer than the client timeout, which will cause it to fail (but + it's sqlite3, so it'll fail gracefully). + */ + db_close(1); + db_open_repository(g.zRepositoryName); + db_begin_transaction(); + rebuild_db(0, 0, 0); + db_end_transaction(0); + return NULL; + } +} + +/* +** Impl of /json/g. Requires admin/setup rights. 
+*/ +static cson_value * json_page_g(){ + if(!g.perm.Admin || !g.perm.Setup){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'a' or 's' privileges."); + return NULL; + } + return json_g_to_json(); +} + +/* Impl in json_login.c. */ +cson_value * json_page_anon_password(); +/* Impl in json_artifact.c. */ +cson_value * json_page_artifact(); +/* Impl in json_branch.c. */ +cson_value * json_page_branch(); +/* Impl in json_diff.c. */ +cson_value * json_page_diff(); +/* Impl in json_dir.c. */ +cson_value * json_page_dir(); +/* Impl in json_login.c. */ +cson_value * json_page_login(); +/* Impl in json_login.c. */ +cson_value * json_page_logout(); +/* Impl in json_query.c. */ +cson_value * json_page_query(); +/* Impl in json_report.c. */ +cson_value * json_page_report(); +/* Impl in json_tag.c. */ +cson_value * json_page_tag(); +/* Impl in json_user.c. */ +cson_value * json_page_user(); +/* Impl in json_config.c. */ +cson_value * json_page_config(); +/* Impl in json_finfo.c. */ +cson_value * json_page_finfo(); +/* Impl in json_status.c. */ +cson_value * json_page_status(); + +/* +** Mapping of names to JSON pages/commands. Each name is a subpath of +** /json (in CGI mode) or a subcommand of the json command in CLI mode +*/ +static const JsonPageDef JsonPageDefs[] = { +/* please keep alphabetically sorted (case-insensitive) for maintenance reasons. */ +{"anonymousPassword", json_page_anon_password, 0}, +{"artifact", json_page_artifact, 0}, +{"branch", json_page_branch,0}, +{"cap", json_page_cap, 0}, +{"config", json_page_config, 0 }, +{"diff", json_page_diff, 0}, +{"dir", json_page_dir, 0}, +{"finfo", json_page_finfo, 0}, +{"g", json_page_g, 0}, +{"HAI",json_page_version,0}, +{"login",json_page_login,0}, +{"logout",json_page_logout,0}, +{"query",json_page_query,0}, +{"rebuild",json_page_rebuild,0}, +{"report", json_page_report, 0}, +{"resultCodes", json_page_resultCodes,0}, +{"stat",json_page_stat,0}, +{"status", json_page_status, 0}, +{"tag", json_page_tag,0}, +/*{"ticket", json_page_nyi,0},*/ +{"timeline", json_page_timeline,0}, +{"user",json_page_user,0}, +{"version",json_page_version,0}, +{"whoami",json_page_whoami,0}, +{"wiki",json_page_wiki,0}, +/* Last entry MUST have a NULL name. */ +{NULL,NULL,0} +}; + +/* +** Internal helper for json_cmd_top() and json_page_top(). +** +** Searches JsonPageDefs for a command with the given name. If found, +** it is used to generate and output a JSON response. If not found, it +** generates a JSON-style error response. Returns 0 on success, non-0 +** on error. On error it will set g.json's error state. +*/ +static int json_dispatch_root_command( char const * zCommand ){ + int rc = 0; + cson_value * payload = NULL; + JsonPageDef const * pageDef = NULL; + pageDef = json_handler_for_name(zCommand,&JsonPageDefs[0]); + if( ! pageDef ){ + rc = FSL_JSON_E_UNKNOWN_COMMAND; + json_set_err( rc, "Unknown command: %s", zCommand ); + }else if( pageDef->runMode < 0 /*CLI only*/){ + rc = FSL_JSON_E_WRONG_MODE; + }else if( (g.isHTTP && (pageDef->runMode < 0 /*CLI only*/)) + || + (!g.isHTTP && (pageDef->runMode > 0 /*HTTP only*/)) + ){ + rc = FSL_JSON_E_WRONG_MODE; + } + else{ + rc = 0; + g.json.dispatchDepth = 1; + payload = (*pageDef->func)(); + } + payload = json_create_response(rc, NULL, payload); + json_send_response(payload); + cson_value_free(payload); + return rc; +} + +#ifdef FOSSIL_ENABLE_JSON +/* dupe ifdef needed for mkindex */ +/* +** WEBPAGE: json +** +** Pages under /json/... must be entered into JsonPageDefs. 
+** This function dispatches them, and is the HTTP equivalent of +** json_cmd_top(). +*/ +void json_page_top(void){ + char const * zCommand; + assert(g.json.gc.a && "json_main_bootstrap() was not called!"); + json_mode_bootstrap(); + zCommand = json_command_arg(1); + if(!zCommand || !*zCommand){ + json_dispatch_missing_args_err( JsonPageDefs, + "No command (sub-path) specified." + " Try one of: "); + return; + } + json_dispatch_root_command( zCommand ); +} +#endif /* FOSSIL_ENABLE_JSON for mkindex */ + +#ifdef FOSSIL_ENABLE_JSON +/* dupe ifdef needed for mkindex */ +/* +** This function dispatches json commands and is the CLI equivalent of +** json_page_top(). +** +** COMMAND: json +** +** Usage: %fossil json SUBCOMMAND ?OPTIONS? +** +** In CLI mode, the -R REPO common option is supported. Due to limitations +** in the argument dispatching code, any -FLAGS must come after the final +** sub- (or subsub-) command. +** +** The commands include: +** +** anonymousPassword +** artifact +** branch +** cap +** config +** diff +** dir +** g +** login +** logout +** query +** rebuild +** report +** resultCodes +** stat +** tag +** timeline +** user +** version (alias: HAI) +** whoami +** wiki +** +** Run '%fossil json' without any subcommand to see the full list (but be +** aware that some listed might not yet be fully implemented). +** +*/ +void json_cmd_top(void){ + char const * cmd = NULL; + int rc = 0; + memset( &g.perm, 0xff, sizeof(g.perm) ) + /* In CLI mode fossil does not use permissions + and they all default to false. We enable them + here because (A) fossil doesn't use them in local + mode but (B) having them set gives us one less + difference in the CLI/CGI/Server-mode JSON + handling. + */ + ; + json_main_bootstrap(); + json_mode_bootstrap(); + if( 2 > cson_array_length_get(g.json.cmd.a) ){ + goto usage; + } +#if 0 + json_warn(FSL_JSON_W_ROW_TO_JSON_FAILED, "Just testing."); + json_warn(FSL_JSON_W_ROW_TO_JSON_FAILED, "Just testing again."); +#endif + cmd = json_command_arg(1); + if( !cmd || !*cmd ){ + goto usage; + } + rc = json_dispatch_root_command( cmd ); + if(0 != rc){ + /* FIXME: we need a way of passing this error back + up to the routine which called this callback. + e.g. add g.errCode. + */ + fossil_exit(1); + } + return; + usage: + { + cson_value * payload; + json_dispatch_missing_args_err( JsonPageDefs, + "No subcommand specified." + " Try one of: "); + payload = json_create_response(0, NULL, NULL); + json_send_response(payload); + cson_value_free(payload); + fossil_exit(1); + } +} +#endif /* FOSSIL_ENABLE_JSON for mkindex */ + +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/json_artifact.c Index: src/json_artifact.c ================================================================== --- src/json_artifact.c +++ src/json_artifact.c @@ -0,0 +1,502 @@ +#ifdef FOSSIL_ENABLE_JSON +/* +** Copyright (c) 2011 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +*/ +#include "VERSION.h" +#include "config.h" +#include "json_artifact.h" + +#if INTERFACE +#include "json_detail.h" +#endif + +/* +** Internal callback for /json/artifact handlers. 
rid refers to +** the rid of a given type of artifact, and each callback is +** specialized to return a JSON form of one type of artifact. +** +** Implementations may assert() that rid refers to requested artifact +** type, since mismatches in the artifact types come from +** json_page_artifact() as opposed to client data. +** +** The pParent parameter points to the response payload object. It +** _may_ be used to populate "top-level" information in the response +** payload, but normally this is neither necessary nor desired. +*/ +typedef cson_value * (*artifact_f)( cson_object * pParent, int rid ); + +/* +** Internal per-artifact-type dispatching helper. +*/ +typedef struct ArtifactDispatchEntry { + /** + Artifact type name, e.g. "checkin", "ticket", "wiki". + */ + char const * name; + + /** + JSON construction callback. Creates the contents for the + payload.artifact property of /json/artifact responses. + */ + artifact_f func; +} ArtifactDispatchEntry; + + +/* +** Generates a JSON Array reference holding the parent UUIDs (as strings). +** If it finds no matches then it returns NULL (OOM is a fatal error). +** +** Returned value is NULL or an Array owned by the caller. +*/ +cson_value * json_parent_uuids_for_ci( int rid ){ + Stmt q = empty_Stmt; + cson_array * pParents = NULL; + db_prepare( &q, + "SELECT uuid FROM plink, blob" + " WHERE plink.cid=%d AND blob.rid=plink.pid" + " ORDER BY plink.isprim DESC", + rid ); + while( SQLITE_ROW==db_step(&q) ){ + if(!pParents) { + pParents = cson_new_array(); + } + cson_array_append( pParents, cson_sqlite3_column_to_value( q.pStmt, 0 ) ); + } + db_finalize(&q); + return cson_array_value(pParents); +} + +/* +** Generates an artifact Object for the given rid, +** which must refer to a Check-in. +** +** Returned value is NULL or an Object owned by the caller. 
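+**
+** The Object holds the properties set below ("type", "uuid", "isLeaf",
+** "timestamp", "user", "comment", plus optional "parents", "tags" and
+** "files"), roughly (values invented for illustration):
+**
+**   { "type":"checkin", "uuid":"...", "isLeaf":true,
+**     "timestamp":1342181430, "user":"...", "comment":"..." }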
+*/ +cson_value * json_artifact_for_ci( int rid, char showFiles ){ + cson_value * v = NULL; + Stmt q = empty_Stmt; + static cson_value * eventTypeLabel = NULL; + if(!eventTypeLabel){ + eventTypeLabel = json_new_string("checkin"); + json_gc_add("$EVENT_TYPE_LABEL(commit)", eventTypeLabel); + } + + db_prepare(&q, + "SELECT b.uuid, " + " cast(strftime('%%s',e.mtime) as int), " + " strftime('%%s',e.omtime)," + " e.user, " + " e.comment" + " FROM blob b, event e" + " WHERE b.rid=%d" + " AND e.objid=%d", + rid, rid + ); + if( db_step(&q)==SQLITE_ROW ){ + cson_object * o; + cson_value * tmpV = NULL; + const char *zUuid = db_column_text(&q, 0); + const char *zUser; + const char *zComment; + char * zEUser, * zEComment; + i64 mtime, omtime; + v = cson_value_new_object(); + o = cson_value_get_object(v); +#define SET(K,V) cson_object_set(o,(K), (V)) + SET("type", eventTypeLabel ); + SET("uuid",json_new_string(zUuid)); + SET("isLeaf", cson_value_new_bool(is_a_leaf(rid))); + + mtime = db_column_int64(&q,1); + SET("timestamp",json_new_int(mtime)); + omtime = db_column_int64(&q,2); + if(omtime && (omtime!=mtime)){ + SET("originTime",json_new_int(omtime)); + } + + zUser = db_column_text(&q,3); + zEUser = db_text(0, + "SELECT value FROM tagxref WHERE tagid=%d AND rid=%d", + TAG_USER, rid); + if(zEUser){ + SET("user", json_new_string(zEUser)); + if(0!=fossil_strcmp(zEUser,zUser)){ + SET("originUser",json_new_string(zUser)); + } + free(zEUser); + }else{ + SET("user",json_new_string(zUser)); + } + + zComment = db_column_text(&q,4); + zEComment = db_text(0, + "SELECT value FROM tagxref WHERE tagid=%d AND rid=%d", + TAG_COMMENT, rid); + if(zEComment){ + SET("comment",json_new_string(zEComment)); + if(0 != fossil_strcmp(zEComment,zComment)){ + SET("originComment", json_new_string(zComment)); + } + free(zEComment); + }else{ + SET("comment",json_new_string(zComment)); + } + + tmpV = json_parent_uuids_for_ci(rid); + if(tmpV){ + SET("parents", tmpV); + } + + tmpV = json_tags_for_checkin_rid(rid,0); + if(tmpV){ + SET("tags",tmpV); + } + + if( showFiles ){ + tmpV = json_get_changed_files(rid, 1); + if(tmpV){ + SET("files",tmpV); + } + } + +#undef SET + } + db_finalize(&q); + return v; +} + +/* +** Very incomplete/incorrect impl of /json/artifact/TICKET_ID. +*/ +cson_value * json_artifact_ticket( cson_object * zParent, int rid ){ + cson_object * pay = NULL; + Manifest *pTktChng = NULL; + static cson_value * eventTypeLabel = NULL; + if(! g.perm.RdTkt ){ + g.json.resultCode = FSL_JSON_E_DENIED; + return NULL; + } + if(!eventTypeLabel){ + eventTypeLabel = json_new_string("ticket"); + json_gc_add("$EVENT_TYPE_LABEL(ticket)", eventTypeLabel); + } + + pTktChng = manifest_get(rid, CFTYPE_TICKET, 0); + if( pTktChng==0 ){ + g.json.resultCode = FSL_JSON_E_MANIFEST_READ_FAILED; + return NULL; + } + pay = cson_new_object(); + cson_object_set(pay, "eventType", eventTypeLabel ); + cson_object_set(pay, "uuid", json_new_string(pTktChng->zTicketUuid)); + cson_object_set(pay, "user", json_new_string(pTktChng->zUser)); + cson_object_set(pay, "timestamp", json_julian_to_timestamp(pTktChng->rDate)); + manifest_destroy(pTktChng); + return cson_object_value(pay); +} + +/* +** Sub-impl of /json/artifact for check-ins. +*/ +static cson_value * json_artifact_ci( cson_object * zParent, int rid ){ + if(!g.perm.Read){ + json_set_err( FSL_JSON_E_DENIED, "Viewing check-ins requires 'o' privileges." 
); + return NULL; + }else{ + cson_value * artV = json_artifact_for_ci(rid, 1); + cson_object * art = cson_value_get_object(artV); + if(art){ + cson_object_merge( zParent, art, CSON_MERGE_REPLACE ); + cson_free_object(art); + } + return cson_object_value(zParent); + } +} + +/* +** Internal mapping of /json/artifact/FOO commands/callbacks. +*/ +static ArtifactDispatchEntry ArtifactDispatchList[] = { +{"checkin", json_artifact_ci}, +{"file", json_artifact_file}, +{"tag", NULL}, +{"ticket", json_artifact_ticket}, +{"wiki", json_artifact_wiki}, +/* Final entry MUST have a NULL name. */ +{NULL,NULL} +}; + +/* +** Internal helper which returns: +** +** If the "format" (CLI: -f) flag is set function returns the same as +** json_wiki_get_content_format_flag(), else it returns true (non-0) +** if either the includeContent (HTTP) or -content|-c boolean flags +** (CLI) are set. +*/ +static int json_artifact_get_content_format_flag(){ + enum { MagicValue = -9 }; + int contentFormat = json_wiki_get_content_format_flag(MagicValue); + if(MagicValue == contentFormat){ + contentFormat = json_find_option_bool("includeContent","content","c",0) /* deprecated */ ? -1 : 0; + } + return contentFormat; +} + +extern int json_wiki_get_content_format_flag( int defaultValue ) /* json_wiki.c */; + +cson_value * json_artifact_wiki(cson_object * zParent, int rid){ + if( ! g.perm.RdWiki ){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'j' privileges."); + return NULL; + }else{ + enum { MagicValue = -9 }; + int const contentFormat = json_artifact_get_content_format_flag(); + return json_get_wiki_page_by_rid(rid, contentFormat); + } +} + +/* +** Internal helper for routines which add a "status" flag to file +** artifact data. isNew and isDel should be the "is this object new?" +** and "is this object removed?" flags of the underlying query. This +** function returns a static string from the set (added, removed, +** modified), depending on the combination of the two args. +** +** Reminder to self: (mlink.pid==0) AS isNew, (mlink.fid==0) AS isDel +*/ +char const * json_artifact_status_to_string( char isNew, char isDel ){ + return isNew + ? "added" + : (isDel + ? "removed" + : "modified"); +} + +cson_value * json_artifact_file(cson_object * zParent, int rid){ + cson_object * pay = NULL; + Stmt q = empty_Stmt; + cson_array * checkin_arr = NULL; + int contentFormat; + i64 contentSize = -1; + char * parentUuid; + if( ! g.perm.Read ){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'o' privileges."); + return NULL; + } + + pay = zParent; + + contentFormat = json_artifact_get_content_format_flag(); + if( 0 != contentFormat ){ + Blob content = empty_blob; + const char *zMime; + char const * zFormat = (contentFormat<1) ? "raw" : "html"; + content_get(rid, &content); + zMime = mimetype_from_content(&content); + cson_object_set(zParent, "contentType", + json_new_string(zMime ? 
zMime : "text/plain")); + if(!zMime){/* text/plain */ + if(0 < blob_size(&content)){ + if( 0 < contentFormat ){/*HTML-size it*/ + Blob html = empty_blob; + wiki_convert(&content, &html, 0); + assert( blob_size(&content) < blob_size(&html) ); + blob_swap( &html, &content ); + assert( blob_size(&content) > blob_size(&html) ); + blob_reset( &html ); + }/*else as-is*/ + } + cson_object_set(zParent, "content", + cson_value_new_string(blob_str(&content), + (unsigned int)blob_size(&content))); + }/*else binary: ignore*/ + contentSize = blob_size(&content); + cson_object_set(zParent, "contentSize", json_new_int(contentSize) ); + cson_object_set(zParent, "contentFormat", json_new_string(zFormat) ); + blob_reset(&content); + } + contentSize = db_int64(-1, "SELECT size FROM blob WHERE rid=%d", rid); + assert( -1 < contentSize ); + cson_object_set(zParent, "size", json_new_int(contentSize) ); + + parentUuid = db_text(NULL, + "SELECT DISTINCT p.uuid " + "FROM blob p, blob f, mlink m " + "WHERE m.pid=p.rid " + "AND m.fid=f.rid " + "AND f.rid=%d", + rid + ); + if(parentUuid){ + cson_object_set( zParent, "parent", json_new_string(parentUuid) ); + fossil_free(parentUuid); + } + + /* Find check-ins associated with this file... */ + db_prepare(&q, + "SELECT filename.name AS name, " + " (mlink.pid==0) AS isNew," + " (mlink.fid==0) AS isDel," + " cast(strftime('%%s',event.mtime) as int) AS timestamp," + " coalesce(event.ecomment,event.comment) as comment," + " coalesce(event.euser,event.user) as user," +#if 0 + " a.size AS size," /* same for all check-ins. */ +#endif + " b.uuid as checkin, " +#if 0 + " mlink.mperm as mperm," +#endif + " coalesce((SELECT value FROM tagxref" + " WHERE tagid=%d AND tagtype>0 AND " + " rid=mlink.mid),'trunk') as branch" + " FROM mlink, filename, event, blob a, blob b" + " WHERE filename.fnid=mlink.fnid" + " AND event.objid=mlink.mid" + " AND a.rid=mlink.fid" + " AND b.rid=mlink.mid" + " AND mlink.fid=%d" + " ORDER BY filename.name, event.mtime", + TAG_BRANCH, rid + ); + /* TODO: add a "state" flag for the file in each check-in, + e.g. "modified", "new", "deleted". + */ + checkin_arr = cson_new_array(); + cson_object_set(pay, "checkins", cson_array_value(checkin_arr)); + while( (SQLITE_ROW==db_step(&q) ) ){ + cson_object * row = cson_value_get_object(cson_sqlite3_row_to_object(q.pStmt)); + /* FIXME: move this isNew/isDel stuff into an SQL CASE statement. */ + char const isNew = cson_value_get_bool(cson_object_get(row,"isNew")); + char const isDel = cson_value_get_bool(cson_object_get(row,"isDel")); + cson_object_set(row, "isNew", NULL); + cson_object_set(row, "isDel", NULL); + cson_object_set(row, "state", + json_new_string(json_artifact_status_to_string(isNew, isDel))); + cson_array_append( checkin_arr, cson_object_value(row) ); + } + db_finalize(&q); + return cson_object_value(pay); +} + +/* +** Impl of /json/artifact. This basically just determines the type of +** an artifact and forwards the real work to another function. 
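+**
+** The artifact is named via the "name" option (e.g. /json/artifact?name=trunk)
+** or, alternatively, as the next positional argument, e.g. (illustrative)
+** "fossil json artifact trunk". On success the payload wraps the
+** type-specific data roughly as:
+**
+**   { "type":"checkin", "uuid":"...", "artifact": {...} }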
+*/ +cson_value * json_page_artifact(){ + cson_object * pay = NULL; + char const * zName = NULL; + char const * zType = NULL; + char const * zUuid = NULL; + cson_value * entry = NULL; + Blob uuid = empty_blob; + int rc; + int rid = 0; + ArtifactDispatchEntry const * dispatcher = &ArtifactDispatchList[0]; + zName = json_find_option_cstr2("name", NULL, NULL, g.json.dispatchDepth+1); + if(!zName || !*zName) { + json_set_err(FSL_JSON_E_MISSING_ARGS, + "Missing 'name' argument."); + return NULL; + } + + if( validate16(zName, strlen(zName)) ){ + if( db_exists("SELECT 1 FROM ticket WHERE tkt_uuid GLOB '%q*'", zName) ){ + zType = "ticket"; + goto handle_entry; + } + if( db_exists("SELECT 1 FROM tag WHERE tagname GLOB 'event-%q*'", zName) ){ + zType = "tag"; + goto handle_entry; + } + } + blob_set(&uuid,zName); + rc = name_to_uuid(&uuid,-1,"*"); + /* FIXME: check for a filename if all else fails. */ + if(1==rc){ + g.json.resultCode = FSL_JSON_E_RESOURCE_NOT_FOUND; + goto error; + }else if(2==rc){ + g.json.resultCode = FSL_JSON_E_AMBIGUOUS_UUID; + goto error; + } + zUuid = blob_str(&uuid); + rid = db_int(0, "SELECT rid FROM blob WHERE uuid=%Q", zUuid); + if(0==rid){ + g.json.resultCode = FSL_JSON_E_RESOURCE_NOT_FOUND; + goto error; + } + + if( db_exists("SELECT 1 FROM mlink WHERE mid=%d", rid) + || db_exists("SELECT 1 FROM plink WHERE cid=%d", rid) + || db_exists("SELECT 1 FROM plink WHERE pid=%d", rid)){ + zType = "checkin"; + goto handle_entry; + }else if( db_exists("SELECT 1 FROM tagxref JOIN tag USING(tagid)" + " WHERE rid=%d AND tagname LIKE 'wiki-%%'", rid) ){ + zType = "wiki"; + goto handle_entry; + }else if( db_exists("SELECT 1 FROM tagxref JOIN tag USING(tagid)" + " WHERE rid=%d AND tagname LIKE 'tkt-%%'", rid) ){ + zType = "ticket"; + goto handle_entry; + }else if ( db_exists("SELECT 1 FROM mlink WHERE fid = %d", rid) ){ + zType = "file"; + goto handle_entry; + }else{ + g.json.resultCode = FSL_JSON_E_RESOURCE_NOT_FOUND; + goto error; + } + + error: + assert( 0 != g.json.resultCode ); + goto veryend; + + handle_entry: + pay = cson_new_object(); + assert( (NULL != zType) && "Internal dispatching error." ); + for( ; dispatcher->name; ++dispatcher ){ + if(0!=fossil_strcmp(dispatcher->name, zType)){ + continue; + }else{ + entry = (*dispatcher->func)(pay, rid); + break; + } + } + if(!g.json.resultCode){ + assert( NULL != entry ); + assert( NULL != zType ); + cson_object_set( pay, "type", json_new_string(zType) ); + cson_object_set( pay, "uuid", json_new_string(zUuid) ); + /*cson_object_set( pay, "name", json_new_string(zName ? zName : zUuid) );*/ + /*cson_object_set( pay, "rid", cson_value_new_integer(rid) );*/ + if(cson_value_is_object(entry) && (cson_value_get_object(entry) != pay)){ + cson_object_set(pay, "artifact", entry); + } + } + veryend: + blob_reset(&uuid); + if(g.json.resultCode && pay){ + cson_free_object(pay); + pay = NULL; + } + return cson_object_value(pay); +} + +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/json_branch.c Index: src/json_branch.c ================================================================== --- src/json_branch.c +++ src/json_branch.c @@ -0,0 +1,391 @@ +#ifdef FOSSIL_ENABLE_JSON +/* +** Copyright (c) 2011 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) 
+** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +*/ +#include "VERSION.h" +#include "config.h" +#include "json_branch.h" + +#if INTERFACE +#include "json_detail.h" +#endif + + +static cson_value * json_branch_list(); +static cson_value * json_branch_create(); +/* +** Mapping of /json/branch/XXX commands/paths to callbacks. +*/ +static const JsonPageDef JsonPageDefs_Branch[] = { +{"create", json_branch_create, 0}, +{"list", json_branch_list, 0}, +{"new", json_branch_create, -1/* for compat with non-JSON branch command.*/}, +/* Last entry MUST have a NULL name. */ +{NULL,NULL,0} +}; + +/* +** Implements the /json/branch family of pages/commands. Far from +** complete. +** +*/ +cson_value * json_page_branch(){ + return json_page_dispatch_helper(&JsonPageDefs_Branch[0]); +} + +/* +** Impl for /json/branch/list +** +** +** CLI mode options: +** +** --range X | -r X, where X is one of (open,closed,all) +** (only the first letter is significant, default=open). +** -a (same as --range a) +** -c (same as --range c) +** +** HTTP mode options: +** +** "range" GET/POST.payload parameter. FIXME: currently we also use +** POST, but really want to restrict this to POST.payload. +*/ +static cson_value * json_branch_list(){ + cson_value * payV; + cson_object * pay; + cson_value * listV; + cson_array * list; + char const * range = NULL; + int branchListFlags = BRL_OPEN_ONLY; + char * sawConversionError = NULL; + Stmt q; + if( !g.perm.Read ){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'o' permissions."); + return NULL; + } + payV = cson_value_new_object(); + pay = cson_value_get_object(payV); + listV = cson_value_new_array(); + list = cson_value_get_array(listV); + if(fossil_has_json()){ + range = json_getenv_cstr("range"); + } + + range = json_find_option_cstr("range",NULL,"r"); + if((!range||!*range) && !g.isHTTP){ + range = find_option("all","a",0); + if(range && *range){ + range = "a"; + }else{ + range = find_option("closed","c",0); + if(range&&*range){ + range = "c"; + } + } + } + + if(!range || !*range){ + range = "o"; + } + /* Normalize range values... */ + switch(*range){ + case 'c': + range = "closed"; + branchListFlags = BRL_CLOSED_ONLY; + break; + case 'a': + range = "all"; + branchListFlags = BRL_BOTH; + break; + default: + range = "open"; + branchListFlags = BRL_OPEN_ONLY; + break; + }; + cson_object_set(pay,"range",json_new_string(range)); + + if( g.localOpen ){ /* add "current" property (branch name). */ + int vid = db_lget_int("checkout", 0); + char const * zCurrent = vid + ? db_text(0, "SELECT value FROM tagxref" + " WHERE rid=%d AND tagid=%d", + vid, TAG_BRANCH) + : 0; + if(zCurrent){ + cson_object_set(pay,"current",json_new_string(zCurrent)); + } + } + + + branch_prepare_list_query(&q, branchListFlags); + cson_object_set(pay,"branches",listV); + while((SQLITE_ROW==db_step(&q))){ + cson_value * v = cson_sqlite3_column_to_value(q.pStmt,0); + if(v){ + cson_array_append(list,v); + }else if(!sawConversionError){ + sawConversionError = mprintf("Column-to-json failed @ %s:%d", + __FILE__,__LINE__); + } + } + if( sawConversionError ){ + json_warn(FSL_JSON_W_COL_TO_JSON_FAILED,sawConversionError); + free(sawConversionError); + } + return payV; +} + +/* +** Parameters for the create-branch operation. 
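+**
+** Illustrative use (mirroring json_branch_create() further below; the
+** branch and basis names are hypothetical):
+**
+**   BranchCreateOptions opt;
+**   int rid = 0;
+**   memset(&opt, 0, sizeof(BranchCreateOptions));
+**   opt.zName = "feature-x";
+**   opt.zBasis = "trunk";
+**   if( 0 != json_branch_new(&opt, &rid) ){
+**     /* a FossilJsonCodes value; opt.rcErrMsg may describe the problem */
+**   }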
+*/ +typedef struct BranchCreateOptions{ + char const * zName; + char const * zBasis; + char const * zColor; + char isPrivate; + /** + Might be set to an error string by + json_branch_new(). + */ + char const * rcErrMsg; +} BranchCreateOptions; + +/* +** Tries to create a new branch based on the options set in zOpt. If +** an error is encountered, zOpt->rcErrMsg _might_ be set to a +** descriptive string and one of the FossilJsonCodes values will be +** returned. Or fossil_fatal() (or similar) might be called, exiting +** the app. +** +** On success 0 is returned and if zNewRid is not NULL then the rid of +** the new branch is assigned to it. +** +** If zOpt->isPrivate is 0 but the parent branch is private, +** zOpt->isPrivate will be set to a non-zero value and the new branch +** will be private. +*/ +static int json_branch_new(BranchCreateOptions * zOpt, + int *zNewRid){ + /* Mostly copied from branch.c:branch_new(), but refactored a small + bit to not produce output or interact with the user. The + down-side to that is that we dropped the gpg-signing. It was + either that or abort the creation if we couldn't sign. We can't + sign over HTTP mode, anyway. + */ + char const * zBranch = zOpt->zName; + char const * zBasis = zOpt->zBasis; + char const * zColor = zOpt->zColor; + int rootid; /* RID of the root check-in - what we branch off of */ + int brid; /* RID of the branch check-in */ + int i; /* Loop counter */ + char *zUuid; /* Artifact ID of origin */ + Stmt q; /* Generic query */ + char *zDate; /* Date that branch was created */ + char *zComment; /* Check-in comment for the new branch */ + Blob branch; /* manifest for the new branch */ + Manifest *pParent; /* Parsed parent manifest */ + Blob mcksum; /* Self-checksum on the manifest */ + + /* fossil branch new name */ + if( zBranch==0 || zBranch[0]==0 ){ + zOpt->rcErrMsg = "Branch name may not be null/empty."; + return FSL_JSON_E_INVALID_ARGS; + } + if( db_exists( + "SELECT 1 FROM tagxref" + " WHERE tagtype>0" + " AND tagid=(SELECT tagid FROM tag WHERE tagname='sym-%q')", + zBranch)!=0 ){ + zOpt->rcErrMsg = "Branch already exists."; + return FSL_JSON_E_RESOURCE_ALREADY_EXISTS; + } + + db_begin_transaction(); + rootid = name_to_typed_rid(zBasis, "ci"); + if( rootid==0 ){ + zOpt->rcErrMsg = "Basis branch not found."; + return FSL_JSON_E_RESOURCE_NOT_FOUND; + } + + pParent = manifest_get(rootid, CFTYPE_MANIFEST, 0); + if( pParent==0 ){ + zOpt->rcErrMsg = "Could not read parent manifest."; + return FSL_JSON_E_UNKNOWN; + } + + /* Create a manifest for the new branch */ + blob_zero(&branch); + if( pParent->zBaseline ){ + blob_appendf(&branch, "B %s\n", pParent->zBaseline); + } + zComment = mprintf("Create new branch named \"%s\" " + "from \"%s\".", zBranch, zBasis); + blob_appendf(&branch, "C %F\n", zComment); + free(zComment); + zDate = date_in_standard_format("now"); + blob_appendf(&branch, "D %s\n", zDate); + free(zDate); + + /* Copy all of the content from the parent into the branch */ + for(i=0; i<pParent->nFile; ++i){ + blob_appendf(&branch, "F %F", pParent->aFile[i].zName); + if( pParent->aFile[i].zUuid ){ + blob_appendf(&branch, " %s", pParent->aFile[i].zUuid); + if( pParent->aFile[i].zPerm && pParent->aFile[i].zPerm[0] ){ + blob_appendf(&branch, " %s", pParent->aFile[i].zPerm); + } + } + blob_append(&branch, "\n", 1); + } + zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rootid); + blob_appendf(&branch, "P %s\n", zUuid); + free(zUuid); + if( pParent->zRepoCksum ){ + blob_appendf(&branch, "R %s\n", pParent->zRepoCksum); + } +
manifest_destroy(pParent); + + /* Add the symbolic branch name and the "branch" tag to identify + ** this as a new branch */ + if( content_is_private(rootid) ) zOpt->isPrivate = 1; + if( zOpt->isPrivate && zColor==0 ) zColor = "#fec084"; + if( zColor!=0 ){ + blob_appendf(&branch, "T *bgcolor * %F\n", zColor); + } + blob_appendf(&branch, "T *branch * %F\n", zBranch); + blob_appendf(&branch, "T *sym-%F *\n", zBranch); + if( zOpt->isPrivate ){ + blob_appendf(&branch, "T +private *\n"); + } + + /* Cancel all other symbolic tags */ + db_prepare(&q, + "SELECT tagname FROM tagxref, tag" + " WHERE tagxref.rid=%d AND tagxref.tagid=tag.tagid" + " AND tagtype>0 AND tagname GLOB 'sym-*'" + " ORDER BY tagname", + rootid); + while( db_step(&q)==SQLITE_ROW ){ + const char *zTag = db_column_text(&q, 0); + blob_appendf(&branch, "T -%F *\n", zTag); + } + db_finalize(&q); + + blob_appendf(&branch, "U %F\n", g.zLogin); + md5sum_blob(&branch, &mcksum); + blob_appendf(&branch, "Z %b\n", &mcksum); + + brid = content_put(&branch); + if( brid==0 ){ + fossil_fatal("Problem committing manifest: %s", g.zErrMsg); + } + db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", brid); + if( manifest_crosslink(brid, &branch, MC_PERMIT_HOOKS)==0 ){ + fossil_fatal("%s\n", g.zErrMsg); + } + assert( blob_is_reset(&branch) ); + content_deltify(rootid, brid, 0); + if( zNewRid ){ + *zNewRid = brid; + } + + /* Commit */ + db_end_transaction(0); + +#if 0 /* Do an autosync push, if requested */ + /* arugable for JSON mode? */ + if( !g.isHTTP && !isPrivate ) autosync(SYNC_PUSH); +#endif + return 0; +} + + +/* +** Impl of /json/branch/create. +*/ +static cson_value * json_branch_create(){ + cson_value * payV = NULL; + cson_object * pay = NULL; + int rc = 0; + BranchCreateOptions opt; + char * zUuid = NULL; + int rid = 0; + if( !g.perm.Write ){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'i' permissions."); + return NULL; + } + memset(&opt,0,sizeof(BranchCreateOptions)); + if(fossil_has_json()){ + opt.zName = json_getenv_cstr("name"); + } + + if(!opt.zName){ + opt.zName = json_command_arg(g.json.dispatchDepth+1); + } + + if(!opt.zName){ + json_set_err(FSL_JSON_E_MISSING_ARGS, "'name' parameter was not specified." 
); + return NULL; + } + + opt.zColor = json_find_option_cstr("bgColor","bgcolor",NULL); + opt.zBasis = json_find_option_cstr("basis",NULL,NULL); + if(!opt.zBasis && !g.isHTTP){ + opt.zBasis = json_command_arg(g.json.dispatchDepth+2); + } + if(!opt.zBasis){ + opt.zBasis = "trunk"; + } + opt.isPrivate = json_find_option_bool("private",NULL,NULL,-1); + if(-1==opt.isPrivate){ + if(!g.isHTTP){ + opt.isPrivate = (NULL != find_option("private","",0)); + }else{ + opt.isPrivate = 0; + } + } + + rc = json_branch_new( &opt, &rid ); + if(rc){ + json_set_err(rc, opt.rcErrMsg); + goto error; + } + assert(0 != rid); + payV = cson_value_new_object(); + pay = cson_value_get_object(payV); + + cson_object_set(pay,"name",json_new_string(opt.zName)); + cson_object_set(pay,"basis",json_new_string(opt.zBasis)); + cson_object_set(pay,"rid",json_new_int(rid)); + zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); + cson_object_set(pay,"uuid", json_new_string(zUuid)); + cson_object_set(pay, "isPrivate", cson_value_new_bool(opt.isPrivate)); + free(zUuid); + if(opt.zColor){ + cson_object_set(pay,"bgColor",json_new_string(opt.zColor)); + } + + goto ok; + error: + assert( 0 != g.json.resultCode ); + cson_value_free(payV); + payV = NULL; + ok: + return payV; +} + +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/json_config.c Index: src/json_config.c ================================================================== --- src/json_config.c +++ src/json_config.c @@ -0,0 +1,188 @@ +#ifdef FOSSIL_ENABLE_JSON +/* +** Copyright (c) 2011 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +*/ +#include "VERSION.h" +#include "config.h" +#include "json_config.h" + +#if INTERFACE +#include "json_detail.h" +#endif + +static cson_value * json_config_get(); +static cson_value * json_config_save(); + +/* +** Mapping of /json/config/XXX commands/paths to callbacks. +*/ +static const JsonPageDef JsonPageDefs_Config[] = { +{"get", json_config_get, 0}, +{"save", json_config_save, 0}, +/* Last entry MUST have a NULL name. */ +{NULL,NULL,0} +}; + + +/* +** Implements the /json/config family of pages/commands. +** +*/ +cson_value * json_page_config(){ + return json_page_dispatch_helper(&JsonPageDefs_Config[0]); +} + + +/* +** JSON-internal mapping of config options to config groups. This is +** mostly a copy of the config options in configure.c, but that data +** is private and cannot be re-used directly here. 
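+**
+** json_config_get() below uses this list to expand the requested area
+** names into concrete config keys, so a request such as (illustrative):
+**
+**   /json/config/get/project/skin
+**   fossil json config get project skin
+**
+** selects every entry whose groupMask intersects the project- and
+** skin-related mask bits.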
+*/ +static const struct JsonConfigProperty { + char const * name; + int groupMask; +} JsonConfigProperties[] = { +{ "css", CONFIGSET_CSS }, +{ "header", CONFIGSET_SKIN }, +{ "footer", CONFIGSET_SKIN }, +{ "details", CONFIGSET_SKIN }, +{ "logo-mimetype", CONFIGSET_SKIN }, +{ "logo-image", CONFIGSET_SKIN }, +{ "background-mimetype", CONFIGSET_SKIN }, +{ "background-image", CONFIGSET_SKIN }, +{ "timeline-block-markup", CONFIGSET_SKIN }, +{ "timeline-max-comment", CONFIGSET_SKIN }, +{ "timeline-plaintext", CONFIGSET_SKIN }, +{ "adunit", CONFIGSET_SKIN }, +{ "adunit-omit-if-admin", CONFIGSET_SKIN }, +{ "adunit-omit-if-user", CONFIGSET_SKIN }, + +{ "project-name", CONFIGSET_PROJ }, +{ "short-project-name", CONFIGSET_PROJ }, +{ "project-description", CONFIGSET_PROJ }, +{ "index-page", CONFIGSET_PROJ }, +{ "manifest", CONFIGSET_PROJ }, +{ "binary-glob", CONFIGSET_PROJ }, +{ "clean-glob", CONFIGSET_PROJ }, +{ "ignore-glob", CONFIGSET_PROJ }, +{ "keep-glob", CONFIGSET_PROJ }, +{ "crnl-glob", CONFIGSET_PROJ }, +{ "encoding-glob", CONFIGSET_PROJ }, +{ "empty-dirs", CONFIGSET_PROJ }, +{ "allow-symlinks", CONFIGSET_PROJ }, +{ "dotfiles", CONFIGSET_PROJ }, + +{ "ticket-table", CONFIGSET_TKT }, +{ "ticket-common", CONFIGSET_TKT }, +{ "ticket-change", CONFIGSET_TKT }, +{ "ticket-newpage", CONFIGSET_TKT }, +{ "ticket-viewpage", CONFIGSET_TKT }, +{ "ticket-editpage", CONFIGSET_TKT }, +{ "ticket-reportlist", CONFIGSET_TKT }, +{ "ticket-report-template", CONFIGSET_TKT }, +{ "ticket-key-template", CONFIGSET_TKT }, +{ "ticket-title-expr", CONFIGSET_TKT }, +{ "ticket-closed-expr", CONFIGSET_TKT }, + +{NULL, 0} +}; + + +/* +** Impl of /json/config/get. Requires setup rights. +** +*/ +static cson_value * json_config_get(){ + cson_object * pay = NULL; + Stmt q = empty_Stmt; + Blob sql = empty_blob; + char const * zName = NULL; + int confMask = 0; + char optSkinBackups = 0; + unsigned int i; + if(!g.perm.Setup){ + json_set_err(FSL_JSON_E_DENIED, "Requires 's' permissions."); + return NULL; + } + + i = g.json.dispatchDepth + 1; + zName = json_command_arg(i); + for( ; zName; zName = json_command_arg(++i) ){ + if(0==(strcmp("all", zName))){ + confMask = CONFIGSET_ALL; + }else if(0==(strcmp("project", zName))){ + confMask |= CONFIGSET_PROJ; + }else if(0==(strcmp("skin", zName))){ + confMask |= (CONFIGSET_CSS|CONFIGSET_SKIN); + }else if(0==(strcmp("ticket", zName))){ + confMask |= CONFIGSET_TKT; + }else if(0==(strcmp("skin-backup", zName))){ + optSkinBackups = 1; + }else{ + json_set_err( FSL_JSON_E_INVALID_ARGS, + "Unknown config area: %s", zName); + return NULL; + } + } + + if(!confMask && !optSkinBackups){ + json_set_err(FSL_JSON_E_MISSING_ARGS, "No configuration area(s) selected."); + } + blob_append(&sql, + "SELECT name, value" + " FROM config " + " WHERE 0 ", -1); + { + const struct JsonConfigProperty * prop = &JsonConfigProperties[0]; + blob_append(&sql," OR name IN (",-1); + for( i = 0; prop->name; ++prop ){ + if(prop->groupMask & confMask){ + if( i++ ){ + blob_append(&sql,",",1); + } + blob_append_sql(&sql, "%Q", prop->name); + } + } + blob_append(&sql,") ", -1); + } + + + if( optSkinBackups ){ + blob_append(&sql, " OR name GLOB 'skin:*'", -1); + } + blob_append(&sql," ORDER BY name", -1); + db_prepare(&q, "%s", blob_sql_text(&sql)); + blob_reset(&sql); + pay = cson_new_object(); + while( (SQLITE_ROW==db_step(&q)) ){ + cson_object_set(pay, + db_column_text(&q,0), + json_new_string(db_column_text(&q,1))); + } + db_finalize(&q); + return cson_object_value(pay); +} + +/* +** Impl of /json/config/save. 
+** +** TODOs: +*/ +static cson_value * json_config_save(){ + json_set_err(FSL_JSON_E_NYI, NULL); + return NULL; +} +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/json_detail.h Index: src/json_detail.h ================================================================== --- src/json_detail.h +++ src/json_detail.h @@ -0,0 +1,269 @@ +#ifdef FOSSIL_ENABLE_JSON +#if !defined(FOSSIL_JSON_DETAIL_H_INCLUDED) +#define FOSSIL_JSON_DETAIL_H_INCLUDED +/* +** Copyright (c) 2011 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +*/ + +#include "cson_amalgamation.h" + +/** + FOSSIL_JSON_API_VERSION holds the date (YYYYMMDD) of the latest + "significant" change to the JSON API (a change in an interface or + new functionality). It is sent as part of the /json/version + request. We could arguably add it to each response or even add a + version number to each response type, allowing very fine (too + fine?) granularity in compatibility change notification. The + version number could be included in part of the command dispatching + framework, allowing the top-level dispatching code to deal with it + (for the most part). +*/ +#define FOSSIL_JSON_API_VERSION "20120713" + +/* +** Impl details for the JSON API which need to be shared +** across multiple C files. +*/ + +/* +** The "official" list of Fossil/JSON error codes. Their values might +** very well change during initial development but after their first +** public release they must stay stable. +** +** Values must be in the range 1000..9999 for error codes and 1..999 +** for warning codes. +** +** Numbers evenly dividable by 100 are "categories", and error codes +** for a given category have their high bits set to the category +** value. +** +** Maintenance reminder: when entries are added to this list, update +** the code in json_page_resultCodes() and json_err_cstr() (both in +** json.c)! 
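+**
+** A worked example of the numbering scheme: FSL_JSON_E_AUTH (2000) is a
+** category, FSL_JSON_E_DENIED (2000+2 = 2002) is an error within that
+** category, and FSL_JSON_E_LOGIN_FAILED (FSL_JSON_E_AUTH+100 = 2100)
+** opens a sub-category whose members, e.g. FSL_JSON_E_LOGIN_FAILED_NOSEED
+** (2101), share the 21xx prefix.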
+** +*/ +enum FossilJsonCodes { +FSL_JSON_W_START = 0, +FSL_JSON_W_UNKNOWN /*+1*/, +FSL_JSON_W_ROW_TO_JSON_FAILED /*+2*/, +FSL_JSON_W_COL_TO_JSON_FAILED /*+3*/, +FSL_JSON_W_STRING_TO_ARRAY_FAILED /*+4*/, +FSL_JSON_W_TAG_NOT_FOUND /*+5*/, + +FSL_JSON_W_END = 1000, +FSL_JSON_E_GENERIC = 1000, +FSL_JSON_E_GENERIC_SUB1 = FSL_JSON_E_GENERIC + 100, +FSL_JSON_E_INVALID_REQUEST /*+1*/, +FSL_JSON_E_UNKNOWN_COMMAND /*+2*/, +FSL_JSON_E_UNKNOWN /*+3*/, +/*REUSE: +4*/ +FSL_JSON_E_TIMEOUT /*+5*/, +FSL_JSON_E_ASSERT /*+6*/, +FSL_JSON_E_ALLOC /*+7*/, +FSL_JSON_E_NYI /*+8*/, +FSL_JSON_E_PANIC /*+9*/, +FSL_JSON_E_MANIFEST_READ_FAILED /*+10*/, +FSL_JSON_E_FILE_OPEN_FAILED /*+11*/, + +FSL_JSON_E_AUTH = 2000, +FSL_JSON_E_MISSING_AUTH /*+1*/, +FSL_JSON_E_DENIED /*+2*/, +FSL_JSON_E_WRONG_MODE /*+3*/, + +FSL_JSON_E_LOGIN_FAILED = FSL_JSON_E_AUTH +100, +FSL_JSON_E_LOGIN_FAILED_NOSEED /*+1*/, +FSL_JSON_E_LOGIN_FAILED_NONAME /*+2*/, +FSL_JSON_E_LOGIN_FAILED_NOPW /*+3*/, +FSL_JSON_E_LOGIN_FAILED_NOTFOUND /*+4*/, + +FSL_JSON_E_USAGE = 3000, +FSL_JSON_E_INVALID_ARGS /*+1*/, +FSL_JSON_E_MISSING_ARGS /*+2*/, +FSL_JSON_E_AMBIGUOUS_UUID /*+3*/, +FSL_JSON_E_UNRESOLVED_UUID /*+4*/, +FSL_JSON_E_RESOURCE_ALREADY_EXISTS /*+5*/, +FSL_JSON_E_RESOURCE_NOT_FOUND /*+6*/, + +FSL_JSON_E_DB = 4000, +FSL_JSON_E_STMT_PREP /*+1*/, +FSL_JSON_E_STMT_BIND /*+2*/, +FSL_JSON_E_STMT_EXEC /*+3*/, +FSL_JSON_E_DB_LOCKED /*+4*/, + +FSL_JSON_E_DB_NEEDS_REBUILD = FSL_JSON_E_DB + 101, +FSL_JSON_E_DB_NOT_FOUND = FSL_JSON_E_DB + 102, +FSL_JSON_E_DB_NOT_VALID = FSL_JSON_E_DB + 103, +/* +** Maintenance reminder: FSL_JSON_E_DB_NOT_FOUND gets triggered in the +** bootstrapping process before we know whether we need to check for +** FSL_JSON_E_DB_NEEDS_CHECKOUT. Thus the former error trumps the +** latter. +*/ +FSL_JSON_E_DB_NEEDS_CHECKOUT = FSL_JSON_E_DB + 104 +}; + + +/* +** Signature for JSON page/command callbacks. Each callback is +** responsible for handling one JSON request/command and/or +** dispatching to sub-commands. +** +** By the time the callback is called, json_page_top() (HTTP mode) or +** json_cmd_top() (CLI mode) will have set up the JSON-related +** environment. Implementations may generate a "result payload" of any +** JSON type by returning its value from this function (ownership is +** transferred to the caller). On error they should set +** g.json.resultCode to one of the FossilJsonCodes values and return +** either their payload object or NULL. Note that NULL is a legal +** success value - it simply means the response will contain no +** payload. If g.json.resultCode is non-zero when this function +** returns then the top-level dispatcher will destroy any payload +** returned by this function and will output a JSON error response +** instead. +** +** All of the setup/response code is handled by the top dispatcher +** functions and the callbacks concern themselves only with: +** +** a) Permissions checking (inspecting g.perm). +** b) generating a response payload (if applicable) +** c) Setting g.json's error state (if applicable). See json_set_err(). +** +** It is imperative that NO callback functions EVER output ANYTHING to +** stdout, as that will effectively corrupt any JSON output, and +** almost certainly will corrupt any HTTP response headers. Output +** sent to stderr ends up in my apache log, so that might be useful +** for debugging in some cases, but no such code should be left +** enabled for non-debugging builds. 
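+**
+** A minimal hypothetical callback which follows these rules (the page
+** name and payload are invented for illustration):
+**
+**   cson_value * json_page_example(){
+**     if( !g.perm.Read ){
+**       json_set_err(FSL_JSON_E_DENIED, "Requires 'o' permissions.");
+**       return NULL;
+**     }
+**     return json_new_string("hello, JSON");
+**   }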
+*/ +typedef cson_value * (*fossil_json_f)(); + +/* +** Holds name-to-function mappings for JSON page/command dispatching. +** +** Internally we model page dispatching lists as arrays of these +** objects, where the final entry in the array has a NULL name value +** to act as the end-of-list sentinel. +** +*/ +typedef struct JsonPageDef{ + /* + ** The commmand/page's name (path, not including leading /json/). + ** + ** Reminder to self: we cannot use sub-paths with commands this way + ** without additional string-splitting downstream. e.g. foo/bar. + ** Alternately, we can create different JsonPageDef arrays for each + ** subset. + */ + char const * name; + /* + ** Returns a payload object for the response. If it returns a + ** non-NULL value, the caller owns it. To trigger an error this + ** function should set g.json.resultCode to a value from the + ** FossilJsonCodes enum. If it sets an error value and returns + ** a payload, the payload will be destroyed (not sent with the + ** response). + */ + fossil_json_f func; + /* + ** Which mode(s) of execution does func() support: + ** + ** <0 = CLI only, >0 = HTTP only, 0==both + ** + ** Now that we can simulate POST in CLI mode, the distinction + ** between them has disappeared in most (or all) cases, so 0 is + ** the standard value. + */ + int runMode; +} JsonPageDef; + +/* +** Holds common keys used for various JSON API properties. +*/ +typedef struct FossilJsonKeys_{ + /** maintainers: please keep alpha sorted (case-insensitive) */ + char const * anonymousSeed; + char const * authToken; + char const * commandPath; + char const * mtime; + char const * payload; + char const * requestId; + char const * resultCode; + char const * resultText; + char const * timestamp; +} FossilJsonKeys_; +extern const FossilJsonKeys_ FossilJsonKeys; + +/* +** A page/command dispatch helper for fossil_json_f() implementations. +** pages must be an array of JsonPageDef commands which we can +** dispatch. The final item in the array MUST have a NULL name +** element. +** +** This function takes the command specified in +** json_command_arg(1+g.json.dispatchDepth) and searches pages for a +** matching name. If found then that page's func() is called to fetch +** the payload, which is returned to the caller. +** +** On error, g.json.resultCode is set to one of the FossilJsonCodes +** values and NULL is returned. If non-NULL is returned, ownership is +** transfered to the caller (but the g.json error state might still be +** set in that case, so the caller must check that or pass it on up +** the dispatch chain). +*/ +cson_value * json_page_dispatch_helper(JsonPageDef const * pages); + +/* +** Convenience wrapper around cson_value_new_string(). +** Returns NULL if str is NULL or on allocation error. +*/ +cson_value * json_new_string( char const * str ); + +/* +** Similar to json_new_string(), but takes a printf()-style format +** specifiers. Supports the printf extensions supported by fossil's +** mprintf(). Returns NULL if str is NULL or on allocation error. +** +** Maintenance note: json_new_string() is NOT variadic because by the +** time the variadic form was introduced we already had use cases +** which segfaulted via json_new_string() because they contain printf +** markup (e.g. wiki content). Been there, debugged that. +*/ +cson_value * json_new_string_f( char const * fmt, ... ); + +/* +** Returns true if fossil is running in JSON mode and we are either +** running in HTTP mode OR g.json.post.o is not NULL (meaning POST +** data was fed in from CLI mode). 
+** +** Specifically, it will return false when any of these apply: +** +** a) Not running in JSON mode (via json command or /json path). +** +** b) We are running in JSON CLI mode, but no POST data has been fed +** in. +** +** Whether or not we need to take args from CLI or POST data makes a +** difference in argument/parameter handling in many JSON routines, +** and thus this distinction. +*/ +int fossil_has_json(); + +enum json_get_changed_files_flags { + json_get_changed_files_ELIDE_PARENT = 1 << 0 +}; + +#endif/*FOSSIL_JSON_DETAIL_H_INCLUDED*/ +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/json_diff.c Index: src/json_diff.c ================================================================== --- src/json_diff.c +++ src/json_diff.c @@ -0,0 +1,138 @@ +#ifdef FOSSIL_ENABLE_JSON +/* +** Copyright (c) 2011 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +*/ + +#include "config.h" +#include "json_diff.h" + +#if INTERFACE +#include "json_detail.h" +#endif + + + +/* +** Generates a diff between two versions (zFrom and zTo), using nContext +** content lines in the output. On success, returns a new JSON String +** object. On error it sets g.json's error state and returns NULL. +** +** If fSbs is true (non-0) them side-by-side diffs are used. +** +** If fHtml is true then HTML markup is added to the diff. +*/ +cson_value * json_generate_diff(const char *zFrom, const char *zTo, + int nContext, char fSbs, + char fHtml){ + int fromid; + int toid; + int outLen; + Blob from = empty_blob, to = empty_blob, out = empty_blob; + cson_value * rc = NULL; + int flags = (DIFF_CONTEXT_MASK & nContext) + | (fSbs ? DIFF_SIDEBYSIDE : 0) + | (fHtml ? DIFF_HTML : 0); + fromid = name_to_typed_rid(zFrom, "*"); + if(fromid<=0){ + json_set_err(FSL_JSON_E_UNRESOLVED_UUID, + "Could not resolve 'from' ID."); + return NULL; + } + toid = name_to_typed_rid(zTo, "*"); + if(toid<=0){ + json_set_err(FSL_JSON_E_UNRESOLVED_UUID, + "Could not resolve 'to' ID."); + return NULL; + } + content_get(fromid, &from); + content_get(toid, &to); + blob_zero(&out); + text_diff(&from, &to, &out, 0, flags); + blob_reset(&from); + blob_reset(&to); + outLen = blob_size(&out); + if(outLen>=0){ + rc = cson_value_new_string(blob_buffer(&out), + (unsigned int)blob_size(&out)); + } + blob_reset(&out); + return rc; +} + +/* +** Implementation of the /json/diff page. +** +** Arguments: +** +** v1=1st version to diff +** v2=2nd version to diff +** +** Can come from GET, POST.payload, CLI -v1/-v2 or as positional +** parameters following the command name (in HTTP and CLI modes). 
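+**
+** Example invocations (version names are illustrative):
+**
+**   /json/diff?v1=version-1&v2=version-2&context=3
+**   fossil json diff version-1 version-2
+**
+** The "context" (-c) option and the boolean "sbs" and "html" options map
+** to the nContext/fSbs/fHtml arguments of json_generate_diff() above.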
+** +*/ +cson_value * json_page_diff(){ + cson_object * pay = NULL; + cson_value * v = NULL; + char const * zFrom; + char const * zTo; + int nContext = 0; + char doSBS; + char doHtml; + if(!g.perm.Read){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'o' permissions."); + return NULL; + } + zFrom = json_find_option_cstr("v1",NULL,NULL); + if(!zFrom){ + zFrom = json_command_arg(2); + } + if(!zFrom){ + json_set_err(FSL_JSON_E_MISSING_ARGS, + "Required 'v1' parameter is missing."); + return NULL; + } + zTo = json_find_option_cstr("v2",NULL,NULL); + if(!zTo){ + zTo = json_command_arg(3); + } + if(!zTo){ + json_set_err(FSL_JSON_E_MISSING_ARGS, + "Required 'v2' parameter is missing."); + return NULL; + } + nContext = json_find_option_int("context",NULL,"c",5); + doSBS = json_find_option_bool("sbs",NULL,"y",0); + doHtml = json_find_option_bool("html",NULL,"h",0); + v = json_generate_diff(zFrom, zTo, nContext, doSBS, doHtml); + if(!v){ + if(!g.json.resultCode){ + json_set_err(FSL_JSON_E_UNKNOWN, + "Generating diff failed for unknown reason."); + } + return NULL; + } + pay = cson_new_object(); + cson_object_set(pay, "from", json_new_string(zFrom)); + cson_object_set(pay, "to", json_new_string(zTo)); + cson_object_set(pay, "diff", v); + v = 0; + + return pay ? cson_object_value(pay) : NULL; +} + +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/json_dir.c Index: src/json_dir.c ================================================================== --- src/json_dir.c +++ src/json_dir.c @@ -0,0 +1,292 @@ +#ifdef FOSSIL_ENABLE_JSON +/* +** Copyright (c) 2011 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +*/ +#include "VERSION.h" +#include "config.h" +#include "json_dir.h" + +#if INTERFACE +#include "json_detail.h" +#endif + +static cson_value * json_page_dir_list(); +/* +** Mapping of /json/wiki/XXX commands/paths to callbacks. +*/ +static const JsonPageDef JsonPageDefs_Dir[] = { +/* Last entry MUST have a NULL name. */ +{NULL,NULL,0} +}; + +#if 0 /* TODO: Not used? */ +static char const * json_dir_path_extra(){ + static char const * zP = NULL; + if( !zP ){ + zP = g.zExtra; + while(zP && *zP && ('/'==*zP)){ + ++zP; + } + } + return zP; +} +#endif + +/* +** Impl of /json/dir. 98% of it was taken directly +** from browse.c::page_dir() +*/ +static cson_value * json_page_dir_list(){ + cson_object * zPayload = NULL; /* return value */ + cson_array * zEntries = NULL; /* accumulated list of entries. */ + cson_object * zEntry = NULL; /* a single dir/file entry. */ + cson_array * keyStore = NULL; /* garbage collector for shared strings. 
*/ + cson_string * zKeyName = NULL; + cson_string * zKeySize = NULL; + cson_string * zKeyIsDir = NULL; + cson_string * zKeyUuid = NULL; + cson_string * zKeyTime = NULL; + cson_string * zKeyRaw = NULL; + char * zD = NULL; + char const * zDX = NULL; + int nD; + char * zUuid = NULL; + char const * zCI = NULL; + Manifest * pM = NULL; + Stmt q = empty_Stmt; + int rid = 0; + if( !g.perm.Read ){ + json_set_err(FSL_JSON_E_DENIED, "Requires 'o' permissions."); + return NULL; + } + zCI = json_find_option_cstr("checkin",NULL,"ci" ); + + /* If a specific check-in is requested, fetch and parse it. If the + ** specific check-in does not exist, clear zCI. zCI==0 will cause all + ** files from all check-ins to be displayed. + */ + if( zCI && *zCI ){ + pM = manifest_get_by_name(zCI, &rid); + if( pM ){ + zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); + }else{ + json_set_err(FSL_JSON_E_UNRESOLVED_UUID, + "Check-in name [%s] is unresolved.", + zCI); + return NULL; + } + } + + /* Jump through some hoops to find the directory name... */ + zDX = json_find_option_cstr("name",NULL,NULL); + if(!zDX && !g.isHTTP){ + zDX = json_command_arg(g.json.dispatchDepth+1); + } + if(zDX && (!*zDX || (0==strcmp(zDX,"/")))){ + zDX = NULL; + } + zD = zDX ? fossil_strdup(zDX) : NULL; + nD = zD ? strlen(zD)+1 : 0; + while( nD>1 && zD[nD-2]=='/' ){ zD[(--nD)-1] = 0; } + + sqlite3_create_function(g.db, "pathelement", 2, SQLITE_UTF8, 0, + pathelementFunc, 0, 0); + + /* Compute the temporary table "localfiles" containing the names + ** of all files and subdirectories in the zD[] directory. + ** + ** Subdirectory names begin with "/". This causes them to sort + ** first and it also gives us an easy way to distinguish files + ** from directories in the loop that follows. + */ + + if( zCI ){ + Stmt ins; + ManifestFile *pFile; + ManifestFile *pPrev = 0; + int nPrev = 0; + int c; + + db_multi_exec( + "CREATE TEMP TABLE json_dir_files(" + " n UNIQUE NOT NULL," /* file name */ + " fn UNIQUE NOT NULL," /* full file name */ + " u DEFAULT NULL," /* file uuid */ + " sz DEFAULT -1," /* file size */ + " mtime DEFAULT NULL" /* file mtime in unix epoch format */ + ");" + ); + + db_prepare(&ins, + "INSERT OR IGNORE INTO json_dir_files (n,fn,u,sz,mtime) " + "SELECT" + " pathelement(:path,0)," + " CASE WHEN %Q IS NULL THEN '' ELSE %Q||'/' END ||:abspath," + " a.uuid," + " a.size," + " CAST(strftime('%%s',e.mtime) AS INTEGER) " + "FROM" + " mlink m, " + " event e," + " blob a," + " blob b " + "WHERE" + " e.objid=m.mid" + " AND a.rid=m.fid"/*FILE artifact*/ + " AND b.rid=m.mid"/*CHECKIN artifact*/ + " AND a.uuid=:uuid", + zD, zD + ); + manifest_file_rewind(pM); + while( (pFile = manifest_file_next(pM,0))!=0 ){ + if( nD>0 + && ((pFile->zName[nD-1]!='/') || (0!=memcmp(pFile->zName, zD, nD-1))) + ){ + continue; + } + /*printf("zD=%s, nD=%d, pFile->zName=%s\n", zD, nD, pFile->zName);*/ + if( pPrev + && memcmp(&pFile->zName[nD],&pPrev->zName[nD],nPrev)==0 + && (pFile->zName[nD+nPrev]==0 || pFile->zName[nD+nPrev]=='/') + ){ + continue; + } + db_bind_text( &ins, ":path", &pFile->zName[nD] ); + db_bind_text( &ins, ":abspath", &pFile->zName[nD] ); + db_bind_text( &ins, ":uuid", pFile->zUuid ); + db_step(&ins); + db_reset(&ins); + pPrev = pFile; + for(nPrev=0; (c=pPrev->zName[nD+nPrev]) && c!='/'; nPrev++){} + if( c=='/' ) nPrev++; + } + db_finalize(&ins); + }else if( zD && *zD ){ + db_multi_exec( + "CREATE TEMP VIEW json_dir_files AS" + " SELECT DISTINCT(pathelement(name,%d)) AS n," + " %Q||'/'||name AS fn," + " NULL AS u, NULL AS sz, NULL AS 
mtime" + " FROM filename" + " WHERE name GLOB '%q/*'" + " GROUP BY n", + nD, zD, zD + ); + }else{ + db_multi_exec( + "CREATE TEMP VIEW json_dir_files" + " AS SELECT DISTINCT(pathelement(name,0)) AS n, NULL AS fn" + " FROM filename" + ); + } + + if(zCI){ + db_prepare( &q, "SELECT" + " n as name," + " fn as fullname," + " u as uuid," + " sz as size," + " mtime as mtime " + "FROM json_dir_files ORDER BY n"); + }else{/* UUIDs are all NULL. */ + db_prepare( &q, "SELECT n, fn FROM json_dir_files ORDER BY n"); + } + + zKeyName = cson_new_string("name",4); + zKeyUuid = cson_new_string("uuid",4); + zKeyIsDir = cson_new_string("isDir",5); + keyStore = cson_new_array(); + cson_array_append( keyStore, cson_string_value(zKeyName) ); + cson_array_append( keyStore, cson_string_value(zKeyUuid) ); + cson_array_append( keyStore, cson_string_value(zKeyIsDir) ); + + if( zCI ){ + zKeySize = cson_new_string("size",4); + cson_array_append( keyStore, cson_string_value(zKeySize) ); + zKeyTime = cson_new_string("timestamp",9); + cson_array_append( keyStore, cson_string_value(zKeyTime) ); + zKeyRaw = cson_new_string("downloadPath",12); + cson_array_append( keyStore, cson_string_value(zKeyRaw) ); + } + zPayload = cson_new_object(); + cson_object_set_s( zPayload, zKeyName, + json_new_string((zD&&*zD) ? zD : "/") ); + if( zUuid ){ + cson_object_set( zPayload, "checkin", json_new_string(zUuid) ); + } + + while( (SQLITE_ROW==db_step(&q)) ){ + cson_value * name = NULL; + char const * n = db_column_text(&q,0); + char const isDir = ('/'==*n); + zEntry = cson_new_object(); + if(!zEntries){ + zEntries = cson_new_array(); + cson_object_set( zPayload, "entries", cson_array_value(zEntries) ); + } + cson_array_append(zEntries, cson_object_value(zEntry) ); + if(isDir){ + name = json_new_string( n+1 ); + cson_object_set_s(zEntry, zKeyIsDir, cson_value_true() ); + } else{ + name = json_new_string( n ); + } + cson_object_set_s(zEntry, zKeyName, name ); + if( zCI && !isDir){ + /* Don't add the uuid/size for dir entries - that data refers to + one of the files in that directory :/. Entries with no + --checkin may refer to N versions, and therefore we cannot + associate a single size and uuid with them (and fetching all + would be overkill for most use cases). + */ + char const * fullName = db_column_text(&q,1); + char const * u = db_column_text(&q,2); + sqlite_int64 const sz = db_column_int64(&q,3); + sqlite_int64 const ts = db_column_int64(&q,4); + cson_object_set_s(zEntry, zKeyUuid, json_new_string( u ) ); + cson_object_set_s(zEntry, zKeySize, + cson_value_new_integer( (cson_int_t)sz )); + cson_object_set_s(zEntry, zKeyTime, + cson_value_new_integer( (cson_int_t)ts )); + cson_object_set_s(zEntry, zKeyRaw, + json_new_string_f("/raw/%T?name=%t", + fullName, u)); + } + } + db_finalize(&q); + if(pM){ + manifest_destroy(pM); + } + cson_free_array( keyStore ); + + free( zUuid ); + free( zD ); + return cson_object_value(zPayload); +} + +/* +** Implements the /json/dir family of pages/commands. +** +*/ +cson_value * json_page_dir(){ +#if 1 + return json_page_dir_list(); +#else + return json_page_dispatch_helper(&JsonPageDefs_Dir[0]); +#endif +} + +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/json_finfo.c Index: src/json_finfo.c ================================================================== --- src/json_finfo.c +++ src/json_finfo.c @@ -0,0 +1,147 @@ +#ifdef FOSSIL_ENABLE_JSON +/* +** Copyright (c) 2011 D. 
Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +*/ +#include "VERSION.h" +#include "config.h" +#include "json_finfo.h" + +#if INTERFACE +#include "json_detail.h" +#endif + +/* +** Implements the /json/finfo page/command. +** +*/ +cson_value * json_page_finfo(){ + cson_object * pay = NULL; + cson_array * checkins = NULL; + char const * zFilename = NULL; + Blob sql = empty_blob; + Stmt q = empty_Stmt; + char const * zAfter = NULL; + char const * zBefore = NULL; + int limit = -1; + int currentRow = 0; + char const * zCheckin = NULL; + char sort = -1; + if(!g.perm.Read){ + json_set_err(FSL_JSON_E_DENIED,"Requires 'o' privileges."); + return NULL; + } + json_warn( FSL_JSON_W_UNKNOWN, "Achtung: the output of the finfo command is up for change."); + + /* For the "name" argument we have to jump through some hoops to make sure that we don't + get the fossil-internally-assigned "name" option. + */ + zFilename = json_find_option_cstr2("name",NULL,NULL, g.json.dispatchDepth+1); + if(!zFilename || !*zFilename){ + json_set_err(FSL_JSON_E_MISSING_ARGS, "Missing 'name' parameter."); + return NULL; + } + + if(0==db_int(0,"SELECT 1 FROM filename WHERE name=%Q",zFilename)){ + json_set_err(FSL_JSON_E_RESOURCE_NOT_FOUND, "File entry not found."); + return NULL; + } + + zBefore = json_find_option_cstr("before",NULL,"b"); + zAfter = json_find_option_cstr("after",NULL,"a"); + limit = json_find_option_int("limit",NULL,"n", -1); + zCheckin = json_find_option_cstr("checkin",NULL,"ci"); + + blob_append_sql(&sql, +/*0*/ "SELECT b.uuid," +/*1*/ " ci.uuid," +/*2*/ " (SELECT uuid FROM blob WHERE rid=mlink.fid)," /* Current file uuid */ +/*3*/ " cast(strftime('%%s',event.mtime) AS INTEGER)," +/*4*/ " coalesce(event.euser, event.user)," +/*5*/ " coalesce(event.ecomment, event.comment)," +/*6*/ " (SELECT uuid FROM blob WHERE rid=mlink.pid)," /* Parent file uuid */ +/*7*/ " event.bgcolor," +/*8*/ " b.size," +/*9*/ " (mlink.pid==0) AS isNew," +/*10*/ " (mlink.fid==0) AS isDel" + " FROM mlink, blob b, event, blob ci, filename" + " WHERE filename.name=%Q" + " AND mlink.fnid=filename.fnid" + " AND b.rid=mlink.fid" + " AND event.objid=mlink.mid" + " AND event.objid=ci.rid", + zFilename + ); + + if( zCheckin && *zCheckin ){ + char * zU = NULL; + int rc = name_to_uuid2( zCheckin, "ci", &zU ); + /*printf("zCheckin=[%s], zU=[%s]", zCheckin, zU);*/ + if(rc<=0){ + json_set_err((rc<0) ? FSL_JSON_E_AMBIGUOUS_UUID : FSL_JSON_E_RESOURCE_NOT_FOUND, + "Check-in UUID %s.", (rc<0) ? 
"is ambiguous" : "not found"); + blob_reset(&sql); + return NULL; + } + blob_append_sql(&sql, " AND ci.uuid='%q'", zU); + free(zU); + }else{ + if( zAfter && *zAfter ){ + blob_append_sql(&sql, " AND event.mtime>=julianday('%q')", zAfter); + sort = 1; + }else if( zBefore && *zBefore ){ + blob_append_sql(&sql, " AND event.mtime<=julianday('%q')", zBefore); + } + } + + blob_append_sql(&sql," ORDER BY event.mtime %s /*sort*/", (sort>0?"ASC":"DESC")); + /*printf("SQL=\n%s\n",blob_str(&sql));*/ + db_prepare(&q, "%s", blob_sql_text(&sql)); + blob_reset(&sql); + + pay = cson_new_object(); + cson_object_set(pay, "name", json_new_string(zFilename)); + if( limit > 0 ){ + cson_object_set(pay, "limit", json_new_int(limit)); + } + checkins = cson_new_array(); + cson_object_set(pay, "checkins", cson_array_value(checkins)); + while( db_step(&q)==SQLITE_ROW ){ + cson_object * row = cson_new_object(); + int const isNew = db_column_int(&q,9); + int const isDel = db_column_int(&q,10); + cson_array_append( checkins, cson_object_value(row) ); + cson_object_set(row, "checkin", json_new_string( db_column_text(&q,1) )); + cson_object_set(row, "uuid", json_new_string( db_column_text(&q,2) )); + /*cson_object_set(row, "parentArtifact", json_new_string( db_column_text(&q,6) ));*/ + cson_object_set(row, "timestamp", json_new_int( db_column_int64(&q,3) )); + cson_object_set(row, "user", json_new_string( db_column_text(&q,4) )); + cson_object_set(row, "comment", json_new_string( db_column_text(&q,5) )); + /*cson_object_set(row, "bgColor", json_new_string( db_column_text(&q,7) ));*/ + cson_object_set(row, "size", json_new_int( db_column_int64(&q,8) )); + cson_object_set(row, "state", + json_new_string(json_artifact_status_to_string(isNew,isDel))); + if( (0 < limit) && (++currentRow >= limit) ){ + break; + } + } + db_finalize(&q); + + return pay ? cson_object_value(pay) : NULL; +} + + + +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/json_login.c Index: src/json_login.c ================================================================== --- src/json_login.c +++ src/json_login.c @@ -0,0 +1,271 @@ +#ifdef FOSSIL_ENABLE_JSON +/* +** Copyright (c) 2011 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +*/ + +#include "config.h" +#include "json_login.h" + +#if INTERFACE +#include "json_detail.h" +#endif + + +/* +** Implementation of the /json/login page. +** +*/ +cson_value * json_page_login(){ + char preciseErrors = /* if true, "complete" JSON error codes are used, + else they are "dumbed down" to a generic login + error code. + */ +#if 1 + g.json.errorDetailParanoia ? 0 : 1 +#else + 0 +#endif + ; + /* + FIXME: we want to check the GET/POST args in this order: + + - GET: name, n, password, p + - POST: name, password + + but a bug in cgi_parameter() is breaking that, causing PD() to + return the last element of the PATH_INFO instead. + + Summary: If we check for P("name") first, then P("n"), + then ONLY a GET param of "name" will match ("n" + is not recognized). If we reverse the order of the + checks then both forms work. Strangely enough, the + "p"/"password" check is not affected by this. 
+ */ + char const * name = cson_value_get_cstr(json_req_payload_get("name")); + char const * pw = NULL; + char const * anonSeed = NULL; + cson_value * payload = NULL; + int uid = 0; + /* reminder to self: Fossil internally (for the sake of /wiki) + interprets paths in the form /foo/bar/baz such that P("name") == + "bar/baz". This collides with our name/password checking, and + thus we do some rather elaborate name=... checking. + */ + pw = cson_value_get_cstr(json_req_payload_get("password")); + if( !pw ){ + pw = PD("p",NULL); + if( !pw ){ + pw = PD("password",NULL); + } + } + if(!pw){ + g.json.resultCode = preciseErrors + ? FSL_JSON_E_LOGIN_FAILED_NOPW + : FSL_JSON_E_LOGIN_FAILED; + return NULL; + } + + if( !name ){ + name = PD("n",NULL); + if( !name ){ + name = PD("name",NULL); + if( !name ){ + g.json.resultCode = preciseErrors + ? FSL_JSON_E_LOGIN_FAILED_NONAME + : FSL_JSON_E_LOGIN_FAILED; + return NULL; + } + } + } + + if(0 == strcmp("anonymous",name)){ + /* check captcha/seed values... */ + enum { SeedBufLen = 100 /* in some JSON tests i once actually got an + 80-digit number. + */ + }; + static char seedBuffer[SeedBufLen]; + cson_value const * jseed = json_getenv(FossilJsonKeys.anonymousSeed); + seedBuffer[0] = 0; + if( !jseed ){ + jseed = json_req_payload_get(FossilJsonKeys.anonymousSeed); + if( !jseed ){ + jseed = json_getenv("cs") /* name used by HTML interface */; + } + } + if(jseed){ + if( cson_value_is_number(jseed) ){ + sprintf(seedBuffer, "%"CSON_INT_T_PFMT, cson_value_get_integer(jseed)); + anonSeed = seedBuffer; + }else if( cson_value_is_string(jseed) ){ + anonSeed = cson_string_cstr(cson_value_get_string(jseed)); + } + } + if(!anonSeed){ + g.json.resultCode = preciseErrors + ? FSL_JSON_E_LOGIN_FAILED_NOSEED + : FSL_JSON_E_LOGIN_FAILED; + return NULL; + } + } + +#if 0 + { + /* only for debugging the PD()-incorrect-result problem */ + cson_object * o = NULL; + uid = login_search_uid( name, pw ); + payload = cson_value_new_object(); + o = cson_value_get_object(payload); + cson_object_set( o, "n", cson_value_new_string(name,strlen(name))); + cson_object_set( o, "p", cson_value_new_string(pw,strlen(pw))); + return payload; + } +#endif + uid = anonSeed + ? login_is_valid_anonymous(name, pw, anonSeed) + : login_search_uid(name, pw) + ; + if( !uid ){ + g.json.resultCode = preciseErrors + ? FSL_JSON_E_LOGIN_FAILED_NOTFOUND + : FSL_JSON_E_LOGIN_FAILED; + return NULL; + }else{ + char * cookie = NULL; + cson_object * po; + char * cap = NULL; + if(anonSeed){ + login_set_anon_cookie(NULL, &cookie); + }else{ + login_set_user_cookie(name, uid, &cookie); + } + payload = cson_value_new_object(); + po = cson_value_get_object(payload); + cson_object_set(po, "authToken", json_new_string(cookie)); + free(cookie); + cson_object_set(po, "name", json_new_string(name)); + cap = db_text(NULL, "SELECT cap FROM user WHERE login=%Q", name); + cson_object_set(po, "capabilities", cap ? json_new_string(cap) : cson_value_null() ); + free(cap); + cson_object_set(po, "loginCookieName", json_new_string( login_cookie_name() ) ); + /* TODO: add loginExpiryTime to the payload. To do this properly + we "should" add an ([unsigned] int *) to + login_set_user_cookie() and login_set_anon_cookie(), to which + the expiry time is assigned. (Remember that JSON doesn't do + unsigned int.) 
+ + For non-anonymous users we could also simply query the + user.cexpire db field after calling login_set_user_cookie(), + but for anonymous we need to get the time when the cookie is + set because anon does not get a db entry like normal users + do. Anonymous cookies currently have a hard-coded lifetime in + login_set_anon_cookie() (currently 6 hours), which we "should + arguably" change to use the time configured for non-anonymous + users (see login_set_user_cookie() for details). + */ + return payload; + } +} + +/* +** Impl of /json/logout. +** +*/ +cson_value * json_page_logout(){ + cson_value const *token = g.json.authToken; + /* Remember that json_mode_bootstrap() replaces the login cookie + with the JSON auth token if the request contains it. If the + request is missing the auth token then this will fetch fossil's + original cookie. Either way, it's what we want :). + + We require the auth token to avoid someone maliciously + trying to log someone else out (not 100% sure if that + would be possible, given fossil's hardened cookie, but + I'll assume it would be for the time being). + */ + ; + if(!token){ + g.json.resultCode = FSL_JSON_E_MISSING_AUTH; + }else{ + login_clear_login_data(); + g.json.authToken = NULL /* memory is owned elsewhere.*/; + json_setenv(FossilJsonKeys.authToken, NULL); + } + return json_page_whoami(); +} + +/* +** Implementation of the /json/anonymousPassword page. +*/ +cson_value * json_page_anon_password(){ + cson_value * v = cson_value_new_object(); + cson_object * o = cson_value_get_object(v); + unsigned const int seed = captcha_seed(); + char const * zCaptcha = captcha_decode(seed); + cson_object_set(o, "seed", + cson_value_new_integer( (cson_int_t)seed ) + ); + cson_object_set(o, "password", + cson_value_new_string( zCaptcha, strlen(zCaptcha) ) + ); + return v; +} + + + +/* +** Implements the /json/whoami page/command. +*/ +cson_value * json_page_whoami(){ + cson_value * payload = NULL; + cson_object * obj = NULL; + Stmt q; + if(!g.json.authToken){ + /* assume we just logged out. */ + db_prepare(&q, "SELECT login, cap FROM user WHERE login='nobody'"); + } + else{ + db_prepare(&q, "SELECT login, cap FROM user WHERE uid=%d", + g.userUid); + } + if( db_step(&q)==SQLITE_ROW ){ + + /* reminder: we don't use g.zLogin because it's 0 for the guest + user and the HTML UI appears to currently allow the name to be + changed (but doing so would break other code). */ + char const * str; + payload = cson_value_new_object(); + obj = cson_value_get_object(payload); + str = (char const *)sqlite3_column_text(q.pStmt,0); + if( str ){ + cson_object_set( obj, "name", + cson_value_new_string(str,strlen(str)) ); + } + str = (char const *)sqlite3_column_text(q.pStmt,1); + if( str ){ + cson_object_set( obj, "capabilities", + cson_value_new_string(str,strlen(str)) ); + } + if( g.json.authToken ){ + cson_object_set( obj, "authToken", g.json.authToken ); + } + }else{ + g.json.resultCode = FSL_JSON_E_RESOURCE_NOT_FOUND; + } + db_finalize(&q); + return payload; +} +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/json_query.c Index: src/json_query.c ================================================================== --- src/json_query.c +++ src/json_query.c @@ -0,0 +1,96 @@ +#ifdef FOSSIL_ENABLE_JSON +/* +** Copyright (c) 2011 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) 
+** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +*/ + +#include "config.h" +#include "json_query.h" + +#if INTERFACE +#include "json_detail.h" +#endif + + +/* +** Implementation of the /json/query page. +** +** Requires admin privileges. Intended primarily to assist me in +** coming up with JSON output structures for pending features. +** +** Options/parameters: +** +** sql=string - a SELECT statement +** +** format=string 'a' means each row is an Array of values, 'o' +** (default) creates each row as an Object. +** +** TODO: in CLI mode (only) use -S FILENAME to read the sql +** from a file. +*/ +cson_value * json_page_query(){ + char const * zSql = NULL; + cson_value * payV; + char const * zFmt; + Stmt q = empty_Stmt; + int check; + if(!g.perm.Admin && !g.perm.Setup){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'a' or 's' privileges."); + return NULL; + } + + if( cson_value_is_string(g.json.reqPayload.v) ){ + zSql = cson_string_cstr(cson_value_get_string(g.json.reqPayload.v)); + }else{ + zSql = json_find_option_cstr2("sql",NULL,"s",2); + } + + if(!zSql || !*zSql){ + json_set_err(FSL_JSON_E_MISSING_ARGS, + "'sql' (-s) argument is missing."); + return NULL; + } + + zFmt = json_find_option_cstr2("format",NULL,"f",3); + if(!zFmt) zFmt = "o"; + db_prepare(&q,"%s", zSql/*safe-for-%s*/); + if( 0 == sqlite3_column_count( q.pStmt ) ){ + json_set_err(FSL_JSON_E_USAGE, + "Input query has no result columns. " + "Only SELECT-like queries are supported."); + db_finalize(&q); + return NULL; + } + switch(*zFmt){ + case 'a': + check = cson_sqlite3_stmt_to_json(q.pStmt, &payV, 0); + break; + case 'o': + default: + check = cson_sqlite3_stmt_to_json(q.pStmt, &payV, 1); + }; + db_finalize(&q); + if(0 != check){ + json_set_err(FSL_JSON_E_UNKNOWN, + "Conversion to JSON failed with cson code #%d (%s).", + check, cson_rc_string(check)); + assert(NULL==payV); + } + return payV; + +} + +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/json_report.c Index: src/json_report.c ================================================================== --- src/json_report.c +++ src/json_report.c @@ -0,0 +1,263 @@ +#ifdef FOSSIL_ENABLE_JSON +/* +** Copyright (c) 2011 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +*/ + +#include "config.h" +#include "json_report.h" + +#if INTERFACE +#include "json_detail.h" +#endif + + +static cson_value * json_report_create(); +static cson_value * json_report_get(); +static cson_value * json_report_list(); +static cson_value * json_report_run(); +static cson_value * json_report_save(); + +/* +** Mapping of /json/report/XXX commands/paths to callbacks. +*/ +static const JsonPageDef JsonPageDefs_Report[] = { +{"create", json_report_create, 0}, +{"get", json_report_get, 0}, +{"list", json_report_list, 0}, +{"run", json_report_run, 0}, +{"save", json_report_save, 0}, +/* Last entry MUST have a NULL name. 
*/ +{NULL,NULL,0} +}; +/* +** Implementation of the /json/report page. +** +** +*/ +cson_value * json_page_report(){ + if(!g.perm.RdTkt && !g.perm.NewTkt ){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'r' or 'n' permissions."); + return NULL; + } + return json_page_dispatch_helper(JsonPageDefs_Report); +} + +/* +** Searches the environment for a "report" parameter +** (CLI: -report/-r #). +** +** If one is not found and argPos is >0 then json_command_arg() +** is checked. +** +** Returns >0 (the report number) on success . +*/ +static int json_report_get_number(int argPos){ + int nReport = json_find_option_int("report",NULL,"r",-1); + if( (nReport<=0) && cson_value_is_integer(g.json.reqPayload.v)){ + nReport = cson_value_get_integer(g.json.reqPayload.v); + } + if( (nReport <= 0) && (argPos>0) ){ + char const * arg = json_command_arg(argPos); + if(arg && fossil_isdigit(*arg)) { + nReport = atoi(arg); + } + } + return nReport; +} + +static cson_value * json_report_create(){ + json_set_err(FSL_JSON_E_NYI, NULL); + return NULL; +} + +static cson_value * json_report_get(){ + int nReport; + Stmt q = empty_Stmt; + cson_value * pay = NULL; + + if(!g.perm.TktFmt){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 't' privileges."); + return NULL; + } + nReport = json_report_get_number(3); + if(nReport <=0){ + json_set_err(FSL_JSON_E_MISSING_ARGS, + "Missing or invalid 'report' (-r) parameter."); + return NULL; + } + + db_prepare(&q,"SELECT rn AS report," + " owner AS owner," + " title AS title," + " cast(strftime('%%s',mtime) as int) as timestamp," + " cols as columns," + " sqlcode as sqlCode" + " FROM reportfmt" + " WHERE rn=%d", + nReport); + if( SQLITE_ROW != db_step(&q) ){ + db_finalize(&q); + json_set_err(FSL_JSON_E_RESOURCE_NOT_FOUND, + "Report #%d not found.", nReport); + return NULL; + } + pay = cson_sqlite3_row_to_object(q.pStmt); + db_finalize(&q); + return pay; +} + +/* +** Impl of /json/report/list. +*/ +static cson_value * json_report_list(){ + Blob sql = empty_blob; + cson_value * pay = NULL; + if(!g.perm.RdTkt){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'r' privileges."); + return NULL; + } + blob_append(&sql, "SELECT" + " rn AS report," + " title as title," + " owner as owner" + " FROM reportfmt" + " WHERE 1" + " ORDER BY title", + -1); + pay = json_sql_to_array_of_obj(&sql, NULL, 1); + if(!pay){ + json_set_err(FSL_JSON_E_UNKNOWN, + "Quite unexpected: no ticket reports found."); + } + return pay; +} + +/* +** Impl for /json/report/run +** +** Options/arguments: +** +** report=int (CLI: -report # or -r #) is the report number to run. +** +** limit=int (CLI: -limit # or -n #) -n is for compat. with other commands. +** +** format=a|o Specifies result format: a=each row is an arry, o=each +** row is an object. Default=o. 
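+**
+** The payload built below looks roughly like the following
+** (illustrative values; "sqlcode" is included only for users with
+** 't' permissions, and "limit" only when a positive limit is given):
+**
+**   {
+**     "report": 1,
+**     "title": "All Tickets",
+**     "sqlcode": "SELECT ...",
+**     "columnNames": ["#", "status", ...],
+**     "tickets": [ ...one array or object per row... ]
+**   }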
+*/ +static cson_value * json_report_run(){ + int nReport; + Stmt q = empty_Stmt; + cson_object * pay = NULL; + cson_array * tktList = NULL; + char const * zFmt; + char * zTitle = NULL; + Blob sql = empty_blob; + int limit = 0; + cson_value * colNames = NULL; + int i; + + if(!g.perm.RdTkt){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'r' privileges."); + return NULL; + } + nReport = json_report_get_number(3); + if(nReport <=0){ + json_set_err(FSL_JSON_E_MISSING_ARGS, + "Missing or invalid 'number' (-n) parameter."); + goto error; + } + zFmt = json_find_option_cstr2("format",NULL,"f",3); + if(!zFmt) zFmt = "o"; + db_prepare(&q, + "SELECT sqlcode, " + " title" + " FROM reportfmt" + " WHERE rn=%d", + nReport); + if(SQLITE_ROW != db_step(&q)){ + json_set_err(FSL_JSON_E_INVALID_ARGS, + "Report number %d not found.", + nReport); + db_finalize(&q); + goto error; + } + + limit = json_find_option_int("limit",NULL,"n",-1); + + + /* Copy over report's SQL...*/ + blob_append(&sql, db_column_text(&q,0), -1); + zTitle = mprintf("%s", db_column_text(&q,1)); + db_finalize(&q); + db_prepare(&q, "%s", blob_sql_text(&sql)); + + /** Build the response... */ + pay = cson_new_object(); + + cson_object_set(pay, "report", json_new_int(nReport)); + cson_object_set(pay, "title", json_new_string(zTitle)); + if(limit>0){ + cson_object_set(pay, "limit", json_new_int((limit<0) ? 0 : limit)); + } + free(zTitle); + zTitle = NULL; + + if(g.perm.TktFmt){ + cson_object_set(pay, "sqlcode", + cson_value_new_string(blob_str(&sql), + (unsigned int)blob_size(&sql))); + } + blob_reset(&sql); + + colNames = cson_sqlite3_column_names(q.pStmt); + cson_object_set( pay, "columnNames", colNames); + for( i = 0 ; ((limit>0) ?(i < limit) : 1) + && (SQLITE_ROW == db_step(&q)); + ++i){ + cson_value * row = ('a'==*zFmt) + ? cson_sqlite3_row_to_array(q.pStmt) + : cson_sqlite3_row_to_object2(q.pStmt, + cson_value_get_array(colNames)); + ; + if(row && !tktList){ + tktList = cson_new_array(); + } + cson_array_append(tktList, row); + } + db_finalize(&q); + cson_object_set(pay, "tickets", + tktList ? cson_array_value(tktList) : cson_value_null()); + + goto end; + + error: + assert(0 != g.json.resultCode); + cson_value_free( cson_object_value(pay) ); + pay = NULL; + end: + + return pay ? cson_object_value(pay) : NULL; + +} + +static cson_value * json_report_save(){ + return NULL; +} +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/json_status.c Index: src/json_status.c ================================================================== --- src/json_status.c +++ src/json_status.c @@ -0,0 +1,179 @@ +#ifdef FOSSIL_ENABLE_JSON +/* +** Copyright (c) 2013 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +*/ + +#include "config.h" +#include "json_status.h" + +#if INTERFACE +#include "json_detail.h" +#endif + +/* +Reminder to check if a column exists: + +PRAGMA table_info(table_name) + +and search for a row where the 'name' field matches. + +That assumes, of course, that table_info()'s output format +is stable. +*/ + +/* +** Implementation of the /json/status page. 
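+**
+** The payload assembled below has roughly this shape (illustrative
+** values):
+**
+**   {
+**     "repository": "/path/to/repo.fossil",
+**     "localRoot": "/path/to/checkout/",
+**     "checkout": {
+**       "uuid": "abcd1234...",
+**       "tags": [...],
+**       "datetime": "2011-01-01 12:00:00 UTC",
+**       "timestamp": 1293883200
+**     },
+**     "files": [ { "name": "src/foo.c", "status": "edited" }, ... ],
+**     "errorCount": 0
+**   }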
+** +*/ +cson_value * json_page_status(){ + Stmt q = empty_Stmt; + cson_object * oPay; + /*cson_object * files;*/ + int vid, nErr = 0; + cson_object * tmpO; + char * zTmp; + i64 iMtime; + cson_array * aFiles; + + if(!db_open_local(0)){ + json_set_err(FSL_JSON_E_DB_NEEDS_CHECKOUT, NULL); + return NULL; + } + oPay = cson_new_object(); + cson_object_set(oPay, "repository", + json_new_string(db_repository_filename())); + cson_object_set(oPay, "localRoot", + json_new_string(g.zLocalRoot)); + vid = db_lget_int("checkout", 0); + if(!vid){ + json_set_err( FSL_JSON_E_UNKNOWN, "Can this even happen?" ); + return 0; + } + /* TODO: dupe show_common_info() state */ + tmpO = cson_new_object(); + cson_object_set(oPay, "checkout", cson_object_value(tmpO)); + + zTmp = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", vid); + cson_object_set(tmpO, "uuid", json_new_string(zTmp) ); + free(zTmp); + + cson_object_set( tmpO, "tags", json_tags_for_checkin_rid(vid, 0) ); + + /* FIXME: optimize the datetime/timestamp queries into 1 query. */ + zTmp = db_text(0, "SELECT datetime(mtime) || " + "' UTC' FROM event WHERE objid=%d", + vid); + cson_object_set(tmpO, "datetime", json_new_string(zTmp)); + free(zTmp); + iMtime = db_int64(0, "SELECT CAST(strftime('%%s',mtime) AS INTEGER) " + "FROM event WHERE objid=%d", vid); + cson_object_set(tmpO, "timestamp", + cson_value_new_integer((cson_int_t)iMtime)); +#if 0 + /* TODO: add parent artifact info */ + tmpO = cson_new_object(); + cson_object_set( oPay, "parent", cson_object_value(tmpO) ); + cson_object_set( tmpO, "uuid", TODO ); + cson_object_set( tmpO, "timestamp", TODO ); +#endif + + /* Now get the list of non-pristine files... */ + aFiles = cson_new_array(); + cson_object_set( oPay, "files", cson_array_value( aFiles ) ); + + db_prepare(&q, + "SELECT pathname, deleted, chnged, rid, coalesce(origname!=pathname,0)" + " FROM vfile " + " WHERE is_selected(id)" + " AND (chnged OR deleted OR rid=0 OR pathname!=origname) ORDER BY 1" + ); + while( db_step(&q)==SQLITE_ROW ){ + const char *zPathname = db_column_text(&q,0); + int isDeleted = db_column_int(&q, 1); + int isChnged = db_column_int(&q,2); + int isNew = db_column_int(&q,3)==0; + int isRenamed = db_column_int(&q,4); + cson_object * oFile; + char const * zStatus = "???"; + char * zFullName = mprintf("%s%s", g.zLocalRoot, zPathname); + if( isDeleted ){ + zStatus = "deleted"; + }else if( isNew ){ + zStatus = "new" /* maintenance reminder: MUST come + BEFORE the isChnged checks. */; + }else if( isRenamed ){ + zStatus = "renamed"; + }else if( !file_wd_isfile_or_link(zFullName) ){ + if( file_access(zFullName, F_OK)==0 ){ + zStatus = "notAFile"; + ++nErr; + }else{ + zStatus = "missing"; + ++nErr; + } + }else if( 2==isChnged ){ + zStatus = "updatedByMerge"; + }else if( 3==isChnged ){ + zStatus = "addedByMerge"; + }else if( 4==isChnged ){ + zStatus = "updatedByIntegrate"; + }else if( 5==isChnged ){ + zStatus = "addedByIntegrate"; + }else if( 1==isChnged ){ + if( file_contains_merge_marker(zFullName) ){ + zStatus = "conflict"; + }else{ + zStatus = "edited"; + } + } + + oFile = cson_new_object(); + cson_array_append( aFiles, cson_object_value(oFile) ); + /* optimization potential: move these keys into cson_strings + to take advantage of refcounting. */ + cson_object_set( oFile, "name", json_new_string( zPathname ) ); + cson_object_set( oFile, "status", json_new_string( zStatus ) ); + + free(zFullName); + } + cson_object_set( oPay, "errorCount", json_new_int( nErr ) ); + db_finalize(&q); + +#if 0 + /* TODO: add "merged with" status. 
First need (A) to decide on a + structure and (B) to set up some tests for the multi-merge + case.*/ + db_prepare(&q, "SELECT uuid, id FROM vmerge JOIN blob ON merge=rid" + " WHERE id<=0"); + while( db_step(&q)==SQLITE_ROW ){ + const char *zLabel = "MERGED_WITH"; + switch( db_column_int(&q, 1) ){ + case -1: zLabel = "CHERRYPICK "; break; + case -2: zLabel = "BACKOUT "; break; + case -4: zLabel = "INTEGRATE "; break; + } + blob_append(report, zPrefix, nPrefix); + blob_appendf(report, "%s %s\n", zLabel, db_column_text(&q, 0)); + } + db_finalize(&q); + if( nErr ){ + fossil_fatal("aborting due to prior errors"); + } +#endif + return cson_object_value( oPay ); +} + +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/json_tag.c Index: src/json_tag.c ================================================================== --- src/json_tag.c +++ src/json_tag.c @@ -0,0 +1,478 @@ +#ifdef FOSSIL_ENABLE_JSON +/* +** Copyright (c) 2011 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +*/ +#include "VERSION.h" +#include "config.h" +#include "json_tag.h" + +#if INTERFACE +#include "json_detail.h" +#endif + + +static cson_value * json_tag_add(); +static cson_value * json_tag_cancel(); +static cson_value * json_tag_find(); +static cson_value * json_tag_list(); +/* +** Mapping of /json/tag/XXX commands/paths to callbacks. +*/ +static const JsonPageDef JsonPageDefs_Tag[] = { +{"add", json_tag_add, 0}, +{"cancel", json_tag_cancel, 0}, +{"find", json_tag_find, 0}, +{"list", json_tag_list, 0}, +/* Last entry MUST have a NULL name. */ +{NULL,NULL,0} +}; + +/* +** Implements the /json/tag family of pages/commands. +** +*/ +cson_value * json_page_tag(){ + return json_page_dispatch_helper(&JsonPageDefs_Tag[0]); +} + + +/* +** Impl of /json/tag/add. +*/ +static cson_value * json_tag_add(){ + cson_value * payV = NULL; + cson_object * pay = NULL; + char const * zName = NULL; + char const * zCheckin = NULL; + char fRaw = 0; + char fPropagate = 0; + char const * zValue = NULL; + const char *zPrefix = NULL; + + if( !g.perm.Write ){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'i' permissions."); + return NULL; + } + fRaw = json_find_option_bool("raw",NULL,NULL,0); + fPropagate = json_find_option_bool("propagate",NULL,NULL,0); + zName = json_find_option_cstr("name",NULL,NULL); + zPrefix = fRaw ? 
"" : "sym-"; + if(!zName || !*zName){ + if(!fossil_has_json()){ + zName = json_command_arg(3); + } + if(!zName || !*zName){ + json_set_err(FSL_JSON_E_MISSING_ARGS, + "'name' parameter is missing."); + return NULL; + } + } + + zCheckin = json_find_option_cstr("checkin",NULL,NULL); + if( !zCheckin ){ + if(!fossil_has_json()){ + zCheckin = json_command_arg(4); + } + if(!zCheckin || !*zCheckin){ + json_set_err(FSL_JSON_E_MISSING_ARGS, + "'checkin' parameter is missing."); + return NULL; + } + } + + + zValue = json_find_option_cstr("value",NULL,NULL); + if(!zValue && !fossil_has_json()){ + zValue = json_command_arg(5); + } + + db_begin_transaction(); + tag_add_artifact(zPrefix, zName, zCheckin, zValue, + 1+fPropagate,NULL/*DateOvrd*/,NULL/*UserOvrd*/); + db_end_transaction(0); + + payV = cson_value_new_object(); + pay = cson_value_get_object(payV); + cson_object_set(pay, "name", json_new_string(zName) ); + cson_object_set(pay, "value", (zValue&&*zValue) + ? json_new_string(zValue) + : cson_value_null()); + cson_object_set(pay, "propagate", cson_value_new_bool(fPropagate)); + cson_object_set(pay, "raw", cson_value_new_bool(fRaw)); + { + Blob uu = empty_blob; + int rc; + blob_append(&uu, zName, -1); + rc = name_to_uuid(&uu, 9, "*"); + if(0!=rc){ + json_set_err(FSL_JSON_E_UNKNOWN,"Could not convert name back to UUID!"); + blob_reset(&uu); + goto error; + } + cson_object_set(pay, "appliedTo", json_new_string(blob_buffer(&uu))); + blob_reset(&uu); + } + + goto ok; + error: + assert( 0 != g.json.resultCode ); + cson_value_free(payV); + payV = NULL; + ok: + return payV; +} + + +/* +** Impl of /json/tag/cancel. +*/ +static cson_value * json_tag_cancel(){ + char const * zName = NULL; + char const * zCheckin = NULL; + char fRaw = 0; + const char *zPrefix = NULL; + + if( !g.perm.Write ){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'i' permissions."); + return NULL; + } + + fRaw = json_find_option_bool("raw",NULL,NULL,0); + zPrefix = fRaw ? "" : "sym-"; + zName = json_find_option_cstr("name",NULL,NULL); + if(!zName || !*zName){ + if(!fossil_has_json()){ + zName = json_command_arg(3); + } + if(!zName || !*zName){ + json_set_err(FSL_JSON_E_MISSING_ARGS, + "'name' parameter is missing."); + return NULL; + } + } + + zCheckin = json_find_option_cstr("checkin",NULL,NULL); + if( !zCheckin ){ + if(!fossil_has_json()){ + zCheckin = json_command_arg(4); + } + if(!zCheckin || !*zCheckin){ + json_set_err(FSL_JSON_E_MISSING_ARGS, + "'checkin' parameter is missing."); + return NULL; + } + } + /* FIXME?: verify that the tag is currently active. We have no real + error case unless we do that. + */ + db_begin_transaction(); + tag_add_artifact(zPrefix, zName, zCheckin, NULL, 0, 0, 0); + db_end_transaction(0); + return NULL; +} + + +/* +** Impl of /json/tag/find. 
+*/ +static cson_value * json_tag_find(){ + cson_value * payV = NULL; + cson_object * pay = NULL; + cson_value * listV = NULL; + cson_array * list = NULL; + char const * zName = NULL; + char const * zType = NULL; + char const * zType2 = NULL; + char fRaw = 0; + Stmt q = empty_Stmt; + int limit = 0; + int tagid = 0; + + if( !g.perm.Read ){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'o' permissions."); + return NULL; + } + zName = json_find_option_cstr("name",NULL,NULL); + if(!zName || !*zName){ + if(!fossil_has_json()){ + zName = json_command_arg(3); + } + if(!zName || !*zName){ + json_set_err(FSL_JSON_E_MISSING_ARGS, + "'name' parameter is missing."); + return NULL; + } + } + zType = json_find_option_cstr("type",NULL,"t"); + if(!zType || !*zType){ + zType = "*"; + zType2 = zType; + }else{ + switch(*zType){ + case 'c': zType = "ci"; zType2 = "checkin"; break; + case 'e': zType = "e"; zType2 = "event"; break; + case 'w': zType = "w"; zType2 = "wiki"; break; + case 't': zType = "t"; zType2 = "ticket"; break; + } + } + + limit = json_find_option_int("limit",NULL,"n",0); + fRaw = json_find_option_bool("raw",NULL,NULL,0); + + tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname='%s' || %Q", + fRaw ? "" : "sym-", + zName); + + payV = cson_value_new_object(); + pay = cson_value_get_object(payV); + cson_object_set(pay, "name", json_new_string(zName)); + cson_object_set(pay, "raw", cson_value_new_bool(fRaw)); + cson_object_set(pay, "type", json_new_string(zType2)); + cson_object_set(pay, "limit", json_new_int(limit)); + +#if 1 + if( tagid<=0 ){ + cson_object_set(pay,"artifacts", cson_value_null()); + json_warn(FSL_JSON_W_TAG_NOT_FOUND, "Tag not found."); + return payV; + } +#endif + + if( fRaw ){ + db_prepare(&q, + "SELECT blob.uuid FROM tagxref, blob" + " WHERE tagid=(SELECT tagid FROM tag WHERE tagname=%Q)" + " AND tagxref.tagtype>0" + " AND blob.rid=tagxref.rid" + "%s LIMIT %d", + zName, + (limit>0)?"":"--", limit + ); + while( db_step(&q)==SQLITE_ROW ){ + if(!listV){ + listV = cson_value_new_array(); + list = cson_value_get_array(listV); + } + cson_array_append(list, cson_sqlite3_column_to_value(q.pStmt,0)); + } + db_finalize(&q); + }else{ + char const * zSqlBase = /*modified from timeline_query_for_tty()*/ + " SELECT" +#if 0 + " blob.rid AS rid," +#endif + " uuid AS uuid," + " cast(strftime('%s',event.mtime) as int) AS timestamp," + " coalesce(ecomment,comment) AS comment," + " coalesce(euser,user) AS user," + " CASE event.type" + " WHEN 'ci' THEN 'checkin'" + " WHEN 'w' THEN 'wiki'" + " WHEN 'e' THEN 'event'" + " WHEN 't' THEN 'ticket'" + " ELSE 'unknown'" + " END" + " AS eventType" + " FROM event, blob" + " WHERE blob.rid=event.objid" + ; + /* FIXME: re-add tags. 
*/ + db_prepare(&q, + "%s" + " AND event.type GLOB '%q'" + " AND blob.rid IN (" + " SELECT rid FROM tagxref" + " WHERE tagtype>0 AND tagid=%d" + " )" + " ORDER BY event.mtime DESC" + "%s LIMIT %d", + zSqlBase /*safe-for-%s*/, zType, tagid, + (limit>0)?"":"--", limit + ); + listV = json_stmt_to_array_of_obj(&q, NULL); + db_finalize(&q); + } + + if(!listV) { + listV = cson_value_null(); + } + cson_object_set(pay, "artifacts", listV); + return payV; +} + + +/* +** Impl for /json/tag/list +** +** TODOs: +** +** Add -type TYPE (ci, w, e, t) +*/ +static cson_value * json_tag_list(){ + cson_value * payV = NULL; + cson_object * pay = NULL; + cson_value const * tagsVal = NULL; + char const * zCheckin = NULL; + char fRaw = 0; + char fTicket = 0; + Stmt q = empty_Stmt; + + if( !g.perm.Read ){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'o' permissions."); + return NULL; + } + + fRaw = json_find_option_bool("raw",NULL,NULL,0); + fTicket = json_find_option_bool("includeTickets","tkt","t",0); + zCheckin = json_find_option_cstr("checkin",NULL,NULL); + if( !zCheckin ){ + zCheckin = json_command_arg( g.json.dispatchDepth + 1); + if( !zCheckin && cson_value_is_string(g.json.reqPayload.v) ){ + zCheckin = cson_string_cstr(cson_value_get_string(g.json.reqPayload.v)); + assert(zCheckin); + } + } + payV = cson_value_new_object(); + pay = cson_value_get_object(payV); + cson_object_set(pay, "raw", cson_value_new_bool(fRaw) ); + if( zCheckin ){ + /** + Tags for a specific check-in. Output format: + + RAW mode: + + { + "sym-tagname": (value || null), + ...other tags... + } + + Non-raw: + + { + "tagname": (value || null), + ...other tags... + } + */ + cson_value * objV = NULL; + cson_object * obj = NULL; + int const rid = name_to_rid(zCheckin); + if(0==rid){ + json_set_err(FSL_JSON_E_UNRESOLVED_UUID, + "Could not find artifact for check-in [%s].", + zCheckin); + goto error; + } + cson_object_set(pay, "checkin", json_new_string(zCheckin)); + db_prepare(&q, + "SELECT tagname, value FROM tagxref, tag" + " WHERE tagxref.rid=%d AND tagxref.tagid=tag.tagid" + " AND tagtype>%d" + " ORDER BY tagname", + rid, + fRaw ? -1 : 0 + ); + while( SQLITE_ROW == db_step(&q) ){ + const char *zName = db_column_text(&q, 0); + const char *zValue = db_column_text(&q, 1); + if( fRaw==0 ){ + if( 0!=strncmp(zName, "sym-", 4) ) continue; + zName += 4; + assert( *zName ); + } + if(NULL==objV){ + objV = cson_value_new_object(); + obj = cson_value_get_object(objV); + tagsVal = objV; + cson_object_set( pay, "tags", objV ); + } + if( zValue && zValue[0] ){ + cson_object_set( obj, zName, json_new_string(zValue) ); + }else{ + cson_object_set( obj, zName, cson_value_null() ); + } + } + db_finalize(&q); + }else{/* all tags */ + /* Output format: + + RAW mode: + + ["tagname", "sym-tagname2",...] + + Non-raw: + + ["tagname", "tagname2",...] + + i don't really like the discrepancy in the format but this list + can get really long and (A) most tags don't have values, (B) i + don't want to bloat it more, and (C) cson_object_set() is O(N) + (N=current number of properties) because it uses an unsorted list + internally (for memory reasons), so this can slow down appreciably + on a long list. The culprit is really tkt- tags, as there is one + for each ticket (941 in the main fossil repo as of this writing). 
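+
+       For example, a complete non-raw response payload ends up
+       looking roughly like (tag names are illustrative):
+
+         { "raw": false, "includeTickets": false,
+           "tags": ["release", "version-1.0", ...] }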
+ */ + Blob sql = empty_blob; + cson_value * arV = NULL; + cson_array * ar = NULL; + blob_append(&sql, + "SELECT tagname FROM tag" + " WHERE EXISTS(SELECT 1 FROM tagxref" + " WHERE tagid=tag.tagid" + " AND tagtype>0)", + -1 + ); + if(!fTicket){ + blob_append(&sql, " AND tagname NOT GLOB('tkt-*') ", -1); + } + blob_append(&sql, + " ORDER BY tagname", -1); + db_prepare(&q, "%s", blob_sql_text(&sql)); + blob_reset(&sql); + cson_object_set(pay, "includeTickets", cson_value_new_bool(fTicket) ); + while( SQLITE_ROW == db_step(&q) ){ + const char *zName = db_column_text(&q, 0); + if(NULL==arV){ + arV = cson_value_new_array(); + ar = cson_value_get_array(arV); + cson_object_set(pay, "tags", arV); + tagsVal = arV; + } + else if( !fRaw && (0==strncmp(zName, "sym-", 4))){ + zName += 4; + assert( *zName ); + } + cson_array_append(ar, json_new_string(zName)); + } + db_finalize(&q); + } + + goto end; + error: + assert(0 != g.json.resultCode); + cson_value_free(payV); + payV = NULL; + end: + if( payV && !tagsVal ){ + cson_object_set( pay, "tags", cson_value_null() ); + } + return payV; +} +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/json_timeline.c Index: src/json_timeline.c ================================================================== --- src/json_timeline.c +++ src/json_timeline.c @@ -0,0 +1,701 @@ +#ifdef FOSSIL_ENABLE_JSON +/* +** Copyright (c) 2011 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +*/ + +#include "VERSION.h" +#include "config.h" +#include "json_timeline.h" + +#if INTERFACE +#include "json_detail.h" +#endif + +static cson_value * json_timeline_branch(); +static cson_value * json_timeline_ci(); +static cson_value * json_timeline_ticket(); +/* +** Mapping of /json/timeline/XXX commands/paths to callbacks. +*/ +static const JsonPageDef JsonPageDefs_Timeline[] = { +/* the short forms are only enabled in CLI mode, to avoid + that we end up with HTTP clients using 3 different names + for the same requests. +*/ +{"branch", json_timeline_branch, 0}, +{"checkin", json_timeline_ci, 0}, +{"ticket", json_timeline_ticket, 0}, +{"wiki", json_timeline_wiki, 0}, +/* Last entry MUST have a NULL name. */ +{NULL,NULL,0} +}; + + +/* +** Implements the /json/timeline family of pages/commands. Far from +** complete. +** +*/ +cson_value * json_page_timeline(){ +#if 0 + /* The original timeline code does not require 'h' access, + but it arguably should. For JSON mode i think one could argue + that History permissions are required. + */ + if(! g.perm.Hyperlink && !g.perm.Read ){ + json_set_err(FSL_JSON_E_DENIED, "Timeline requires 'h' or 'o' access."); + return NULL; + } +#endif + return json_page_dispatch_helper(&JsonPageDefs_Timeline[0]); +} + +/* +** Create a temporary table suitable for storing timeline data. +*/ +static void json_timeline_temp_table(void){ + /* Field order MUST match that from json_timeline_query()!!! 
*/ + static const char zSql[] = + @ CREATE TEMP TABLE IF NOT EXISTS json_timeline( + @ sortId INTEGER PRIMARY KEY, + @ rid INTEGER, + @ uuid TEXT, + @ mtime INTEGER, + @ timestampString TEXT, + @ comment TEXT, + @ user TEXT, + @ isLeaf BOOLEAN, + @ bgColor TEXT, + @ eventType TEXT, + @ tags TEXT, + @ tagId INTEGER, + @ brief TEXT + @ ) + ; + db_multi_exec("%s", zSql /*safe-for-%s*/); +} + +/* +** Return a pointer to a constant string that forms the basis +** for a timeline query for the JSON interface. It MUST NOT +** be used in a formatted string argument. +*/ +char const * json_timeline_query(void){ + /* Field order MUST match that from json_timeline_temp_table()!!! */ + static const char zBaseSql[] = + @ SELECT + @ NULL, + @ blob.rid, + @ uuid, + @ CAST(strftime('%s',event.mtime) AS INTEGER), + @ datetime(event.mtime), + @ coalesce(ecomment, comment), + @ coalesce(euser, user), + @ blob.rid IN leaf, + @ bgcolor, + @ event.type, + @ (SELECT group_concat(substr(tagname,5), ',') FROM tag, tagxref + @ WHERE tagname GLOB 'sym-*' AND tag.tagid=tagxref.tagid + @ AND tagxref.rid=blob.rid AND tagxref.tagtype>0) as tags, + @ tagid as tagId, + @ brief as brief + @ FROM event JOIN blob + @ WHERE blob.rid=event.objid + ; + return zBaseSql; +} + +/* +** Internal helper to append query information if the +** "tag" or "branch" request properties (CLI: --tag/--branch) +** are set. Limits the query to a particular branch/tag. +** +** tag works like HTML mode's "t" option and branch works like HTML +** mode's "r" option. They are very similar, but subtly different - +** tag mode shows only entries with a given tag but branch mode can +** also reveal some with "related" tags (meaning they were merged into +** the requested branch, or back). +** +** pSql is the target blob to append the query [subset] +** to. +** +** Returns a positive value if it modifies pSql, 0 if it +** does not. It returns a negative value if the tag +** provided to the request was not found (pSql is not modified +** in that case). +** +** If payload is not NULL then on success its "tag" or "branch" +** property is set to the tag/branch name found in the request. +** +** Only one of "tag" or "branch" modes will work at a time, and if +** both are specified, which one takes precedence is unspecified. +*/ +static char json_timeline_add_tag_branch_clause(Blob *pSql, + cson_object * pPayload){ + char const * zTag = NULL; + char const * zBranch = NULL; + char const * zMiOnly = NULL; + char const * zUnhide = NULL; + int tagid = 0; + if(! g.perm.Read ){ + return 0; + } + zTag = json_find_option_cstr("tag",NULL,NULL); + if(!zTag || !*zTag){ + zBranch = json_find_option_cstr("branch",NULL,NULL); + if(!zBranch || !*zBranch){ + return 0; + } + zTag = zBranch; + zMiOnly = json_find_option_cstr("mionly",NULL,NULL); + } + zUnhide = json_find_option_cstr("unhide",NULL,NULL); + tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname='sym-%q'", + zTag); + if(tagid<=0){ + return -1; + } + if(pPayload){ + cson_object_set( pPayload, zBranch ? 
"branch" : "tag", json_new_string(zTag) ); + } + blob_appendf(pSql, + " AND (" + " EXISTS(SELECT 1 FROM tagxref" + " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid)", + tagid); + if(!zUnhide){ + blob_appendf(pSql, + " AND NOT EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=blob.rid" + " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid)", + TAG_HIDDEN); + } + if(zBranch){ + /* from "r" flag code in page_timeline().*/ + blob_appendf(pSql, + " OR EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=cid" + " WHERE tagid=%d AND tagtype>0 AND pid=blob.rid)", + tagid); + if( !zUnhide ){ + blob_appendf(pSql, + " AND NOT EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=cid" + " WHERE tagid=%d AND tagtype>0 AND pid=blob.rid)", + TAG_HIDDEN); + } + if( zMiOnly==0 ){ + blob_appendf(pSql, + " OR EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=pid" + " WHERE tagid=%d AND tagtype>0 AND cid=blob.rid)", + tagid); + if( !zUnhide ){ + blob_appendf(pSql, + " AND NOT EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=pid" + " WHERE tagid=%d AND tagtype>0 AND cid=blob.rid)", + TAG_HIDDEN); + } + } + } + blob_append(pSql," ) ",3); + return 1; +} +/* +** Helper for the timeline family of functions. Possibly appends 1 +** AND clause and an ORDER BY clause to pSql, depending on the state +** of the "after" ("a") or "before" ("b") environment parameters. +** This function gives "after" precedence over "before", and only +** applies one of them. +** +** Returns -1 if it adds a "before" clause, 1 if it adds +** an "after" clause, and 0 if adds only an order-by clause. +*/ +static char json_timeline_add_time_clause(Blob *pSql){ + char const * zAfter = NULL; + char const * zBefore = NULL; + int rc = 0; + zAfter = json_find_option_cstr("after",NULL,"a"); + zBefore = zAfter ? NULL : json_find_option_cstr("before",NULL,"b"); + + if(zAfter&&*zAfter){ + while( fossil_isspace(*zAfter) ) ++zAfter; + blob_appendf(pSql, + " AND event.mtime>=(SELECT julianday(%Q,fromLocal())) " + " ORDER BY event.mtime ASC ", + zAfter); + rc = 1; + }else if(zBefore && *zBefore){ + while( fossil_isspace(*zBefore) ) ++zBefore; + blob_appendf(pSql, + " AND event.mtime<=(SELECT julianday(%Q,fromLocal())) " + " ORDER BY event.mtime DESC ", + zBefore); + rc = -1; + }else{ + blob_append(pSql, " ORDER BY event.mtime DESC ", -1); + rc = 0; + } + return rc; +} + +/* +** Tries to figure out a timeline query length limit base on +** environment parameters. If it can it returns that value, +** else it returns some statically defined default value. +** +** Never returns a negative value. 0 means no limit. +*/ +static int json_timeline_limit(int defaultLimit){ + int limit = -1; + if(!g.isHTTP){/* CLI mode */ + char const * arg = find_option("limit","n",1); + if(arg && *arg){ + limit = atoi(arg); + } + } + if( (limit<0) && fossil_has_json() ){ + limit = json_getenv_int("limit",-1); + } + return (limit<0) ? defaultLimit : limit; +} + +/* +** Internal helper for the json_timeline_EVENTTYPE() family of +** functions. zEventType must be one of (ci, w, t). pSql must be a +** cleanly-initialized, empty Blob to store the sql in. If pPayload is +** not NULL it is assumed to be the pending response payload. If +** json_timeline_limit() returns non-0, this function adds a LIMIT +** clause to the generated SQL. +** +** If pPayload is not NULL then this might add properties to pPayload, +** reflecting options set in the request environment. +** +** Returns 0 on success. On error processing should not continue and +** the returned value should be used as g.json.resultCode. 
+*/ +static int json_timeline_setup_sql( char const * zEventType, + Blob * pSql, + cson_object * pPayload ){ + int limit; + assert( zEventType && *zEventType && pSql ); + json_timeline_temp_table(); + blob_append(pSql, "INSERT OR IGNORE INTO json_timeline ", -1); + blob_append(pSql, json_timeline_query(), -1 ); + blob_appendf(pSql, " AND event.type IN(%Q) ", zEventType); + if( json_timeline_add_tag_branch_clause(pSql, pPayload) < 0 ){ + return FSL_JSON_E_INVALID_ARGS; + } + json_timeline_add_time_clause(pSql); + limit = json_timeline_limit(20); + if(limit>0){ + blob_appendf(pSql,"LIMIT %d ",limit); + } + if(pPayload){ + cson_object_set(pPayload, "limit", json_new_int(limit)); + } + return 0; +} + + +/* +** If any files are associated with the given rid, a JSON array +** containing information about them is returned (and is owned by the +** caller). If no files are associated with it then NULL is returned. +** +** flags may optionally be a bitmask of json_get_changed_files flags, +** or 0 for defaults. +*/ +cson_value * json_get_changed_files(int rid, int flags){ + cson_value * rowsV = NULL; + cson_array * rows = NULL; + Stmt q = empty_Stmt; + db_prepare(&q, + "SELECT (pid==0) AS isnew," + " (fid==0) AS isdel," + " (SELECT name FROM filename WHERE fnid=mlink.fnid) AS name," + " blob.uuid as uuid," + " (SELECT uuid FROM blob WHERE rid=pid) as parent," + " blob.size as size" + " FROM mlink, blob" + " WHERE mid=%d AND pid!=fid" + " AND blob.rid=fid AND NOT mlink.isaux" + " ORDER BY name /*sort*/", + rid + ); + while( (SQLITE_ROW == db_step(&q)) ){ + cson_value * rowV = cson_value_new_object(); + cson_object * row = cson_value_get_object(rowV); + int const isNew = db_column_int(&q,0); + int const isDel = db_column_int(&q,1); + char * zDownload = NULL; + if(!rowsV){ + rowsV = cson_value_new_array(); + rows = cson_value_get_array(rowsV); + } + cson_array_append( rows, rowV ); + cson_object_set(row, "name", json_new_string(db_column_text(&q,2))); + cson_object_set(row, "uuid", json_new_string(db_column_text(&q,3))); + if(!isNew && (flags & json_get_changed_files_ELIDE_PARENT)){ + cson_object_set(row, "parent", json_new_string(db_column_text(&q,4))); + } + cson_object_set(row, "size", json_new_int(db_column_int(&q,5))); + + cson_object_set(row, "state", + json_new_string(json_artifact_status_to_string(isNew,isDel))); + zDownload = mprintf("/raw/%s?name=%s", + /* reminder: g.zBaseURL is of course not set for CLI mode. 
*/ + db_column_text(&q,2), + db_column_text(&q,3)); + cson_object_set(row, "downloadPath", json_new_string(zDownload)); + free(zDownload); + } + db_finalize(&q); + return rowsV; +} + +static cson_value * json_timeline_branch(){ + cson_value * pay = NULL; + Blob sql = empty_blob; + Stmt q = empty_Stmt; + int limit = 0; + if(!g.perm.Read){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'o' permissions."); + return NULL; + } + json_timeline_temp_table(); + blob_append(&sql, + "SELECT" + " blob.rid AS rid," + " uuid AS uuid," + " CAST(strftime('%s',event.mtime) AS INTEGER) as timestamp," + " coalesce(ecomment, comment) as comment," + " coalesce(euser, user) as user," + " blob.rid IN leaf as isLeaf," + " bgcolor as bgColor" + " FROM event JOIN blob" + " WHERE blob.rid=event.objid", + -1); + + blob_append_sql(&sql, + " AND event.type='ci'" + " AND blob.rid IN (SELECT rid FROM tagxref" + " WHERE tagtype>0 AND tagid=%d AND srcid!=0)" + " ORDER BY event.mtime DESC", + TAG_BRANCH); + limit = json_timeline_limit(20); + if(limit>0){ + blob_append_sql(&sql," LIMIT %d ",limit); + } + db_prepare(&q,"%s", blob_sql_text(&sql)); + blob_reset(&sql); + pay = json_stmt_to_array_of_obj(&q, NULL); + db_finalize(&q); + assert(NULL != pay); + if(pay){ + /* get the array-form tags of each record. */ + cson_string * tags = cson_new_string("tags",4); + cson_string * isLeaf = cson_new_string("isLeaf",6); + cson_array * ar = cson_value_get_array(pay); + cson_object * outer = NULL; + unsigned int i = 0; + unsigned int len = cson_array_length_get(ar); + cson_value_add_reference( cson_string_value(tags) ); + cson_value_add_reference( cson_string_value(isLeaf) ); + for( ; i < len; ++i ){ + cson_object * row = cson_value_get_object(cson_array_get(ar,i)); + int rid = cson_value_get_integer(cson_object_get(row,"rid")); + assert( rid > 0 ); + cson_object_set_s(row, tags, json_tags_for_checkin_rid(rid,0)); + cson_object_set_s(row, isLeaf, + json_value_to_bool(cson_object_get(row,"isLeaf"))); + cson_object_set(row, "rid", NULL) + /* remove rid - we don't really want it to be public */; + } + cson_value_free( cson_string_value(tags) ); + cson_value_free( cson_string_value(isLeaf) ); + + /* now we wrap the payload in an outer shell, for consistency with + other /json/timeline/xyz APIs... + */ + outer = cson_new_object(); + if(limit>0){ + cson_object_set( outer, "limit", json_new_int(limit) ); + } + cson_object_set( outer, "timeline", pay ); + pay = cson_object_value(outer); + } + return pay; +} + +/* +** Implementation of /json/timeline/ci. +** +** Still a few TODOs (like figuring out how to structure +** inheritance info). +*/ +static cson_value * json_timeline_ci(){ + cson_value * payV = NULL; + cson_object * pay = NULL; + cson_value * tmp = NULL; + cson_value * listV = NULL; + cson_array * list = NULL; + int check = 0; + char verboseFlag; + Stmt q = empty_Stmt; + char warnRowToJsonFailed = 0; + Blob sql = empty_blob; + if( !g.perm.Hyperlink ){ + /* Reminder to self: HTML impl requires 'o' (Read) + rights. + */ + json_set_err( FSL_JSON_E_DENIED, "Check-in timeline requires 'h' access." 
); + return NULL; + } + verboseFlag = json_find_option_bool("verbose",NULL,"v",0); + if( !verboseFlag ){ + verboseFlag = json_find_option_bool("files",NULL,"f",0); + } + payV = cson_value_new_object(); + pay = cson_value_get_object(payV); + check = json_timeline_setup_sql( "ci", &sql, pay ); + if(check){ + json_set_err(check, "Query initialization failed."); + goto error; + } +#define SET(K) if(0!=(check=cson_object_set(pay,K,tmp))){ \ + json_set_err((cson_rc.AllocError==check) \ + ? FSL_JSON_E_ALLOC : FSL_JSON_E_UNKNOWN,\ + "Object property insertion failed"); \ + goto error;\ + } (void)0 + +#if 0 + /* only for testing! */ + tmp = cson_value_new_string(blob_buffer(&sql),strlen(blob_buffer(&sql))); + SET("timelineSql"); +#endif + db_multi_exec("%s", blob_buffer(&sql)/*safe-for-%s*/); + blob_reset(&sql); + db_prepare(&q, "SELECT " + " rid AS rid" + " FROM json_timeline" + " ORDER BY rowid"); + listV = cson_value_new_array(); + list = cson_value_get_array(listV); + tmp = listV; + SET("timeline"); + while( (SQLITE_ROW == db_step(&q) )){ + /* convert each row into a JSON object...*/ + int const rid = db_column_int(&q,0); + cson_value * rowV = json_artifact_for_ci(rid, verboseFlag); + cson_object * row = cson_value_get_object(rowV); + if(!row){ + if( !warnRowToJsonFailed ){ + warnRowToJsonFailed = 1; + json_warn( FSL_JSON_W_ROW_TO_JSON_FAILED, + "Could not convert at least one timeline result row to JSON." ); + } + continue; + } + cson_array_append(list, rowV); + } +#undef SET + goto ok; + error: + assert( 0 != g.json.resultCode ); + cson_value_free(payV); + payV = NULL; + ok: + db_finalize(&q); + return payV; +} + +/* +** Implementation of /json/timeline/wiki. +** +*/ +cson_value * json_timeline_wiki(){ + /* This code is 95% the same as json_timeline_ci(), by the way. */ + cson_value * payV = NULL; + cson_object * pay = NULL; + cson_array * list = NULL; + int check = 0; + Stmt q = empty_Stmt; + Blob sql = empty_blob; + if( !g.perm.RdWiki && !g.perm.Read ){ + json_set_err( FSL_JSON_E_DENIED, "Wiki timeline requires 'o' or 'j' access."); + return NULL; + } + payV = cson_value_new_object(); + pay = cson_value_get_object(payV); + check = json_timeline_setup_sql( "w", &sql, pay ); + if(check){ + json_set_err(check, "Query initialization failed."); + goto error; + } + +#if 0 + /* only for testing! */ + cson_object_set(pay, "timelineSql", cson_value_new_string(blob_buffer(&sql),strlen(blob_buffer(&sql)))); +#endif + db_multi_exec("%s", blob_buffer(&sql) /*safe-for-%s*/); + blob_reset(&sql); + db_prepare(&q, "SELECT" + " uuid AS uuid," + " mtime AS timestamp," +#if 0 + " timestampString AS timestampString," +#endif + " comment AS comment, " + " user AS user," + " eventType AS eventType" +#if 0 + /* can wiki pages have tags? */ + " tags AS tags," /*FIXME: split this into + a JSON array*/ + " tagId AS tagId," +#endif + " FROM json_timeline" + " ORDER BY rowid"); + list = cson_new_array(); + json_stmt_to_array_of_obj(&q, list); + cson_object_set(pay, "timeline", cson_array_value(list)); + goto ok; + error: + assert( 0 != g.json.resultCode ); + cson_value_free(payV); + payV = NULL; + ok: + db_finalize(&q); + blob_reset(&sql); + return payV; +} + +/* +** Implementation of /json/timeline/ticket. +** +*/ +static cson_value * json_timeline_ticket(){ + /* This code is 95% the same as json_timeline_ci(), by the way. 
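+
+     Like json_timeline_ci(), the payload assembled here ends up with
+     roughly this shape (illustrative values only):
+
+       { "limit": 20, "timeline": [ ...one object per row... ] }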
*/ + cson_value * payV = NULL; + cson_object * pay = NULL; + cson_value * tmp = NULL; + cson_value * listV = NULL; + cson_array * list = NULL; + int check = 0; + Stmt q = empty_Stmt; + Blob sql = empty_blob; + if( !g.perm.RdTkt && !g.perm.Read ){ + json_set_err(FSL_JSON_E_DENIED, "Ticket timeline requires 'o' or 'r' access."); + return NULL; + } + payV = cson_value_new_object(); + pay = cson_value_get_object(payV); + check = json_timeline_setup_sql( "t", &sql, pay ); + if(check){ + json_set_err(check, "Query initialization failed."); + goto error; + } + + db_multi_exec("%s", blob_buffer(&sql) /*safe-for-%s*/); +#define SET(K) if(0!=(check=cson_object_set(pay,K,tmp))){ \ + json_set_err((cson_rc.AllocError==check) \ + ? FSL_JSON_E_ALLOC : FSL_JSON_E_UNKNOWN, \ + "Object property insertion failed."); \ + goto error;\ + } (void)0 + +#if 0 + /* only for testing! */ + tmp = cson_value_new_string(blob_buffer(&sql),strlen(blob_buffer(&sql))); + SET("timelineSql"); +#endif + + blob_reset(&sql); + /* + REMINDER/FIXME(?): we have both uuid (the change uuid?) and + ticketUuid (the actual ticket). This is different from the wiki + timeline, where we only have the wiki page uuid. + */ + db_prepare(&q, "SELECT rid AS rid," + " uuid AS uuid," + " mtime AS timestamp," +#if 0 + " timestampString AS timestampString," +#endif + " user AS user," + " eventType AS eventType," + " comment AS comment," + " brief AS briefComment" + " FROM json_timeline" + " ORDER BY rowid"); + listV = cson_value_new_array(); + list = cson_value_get_array(listV); + tmp = listV; + SET("timeline"); + while( (SQLITE_ROW == db_step(&q) )){ + /* convert each row into a JSON object...*/ + int rc; + int const rid = db_column_int(&q,0); + Manifest * pMan = NULL; + cson_value * rowV; + cson_object * row; + /*printf("rid=%d\n",rid);*/ + pMan = manifest_get(rid, CFTYPE_TICKET, 0); + if(!pMan){ + /* this might be an attachment? I'm seeing this with + rid 15380, uuid [1292fef05f2472108]. + + /json/artifact/1292fef05f2472108 returns not-found, + probably because we haven't added artifact/ticket + yet(?). + */ + continue; + } + + rowV = cson_sqlite3_row_to_object(q.pStmt); + row = cson_value_get_object(rowV); + if(!row){ + manifest_destroy(pMan); + json_warn( FSL_JSON_W_ROW_TO_JSON_FAILED, + "Could not convert at least one timeline result row to JSON." ); + continue; + } + /* FIXME: certainly there's a more efficient way for use to get + the ticket UUIDs? + */ + cson_object_set(row,"ticketUuid",json_new_string(pMan->zTicketUuid)); + manifest_destroy(pMan); + rc = cson_array_append( list, rowV ); + if( 0 != rc ){ + cson_value_free(rowV); + g.json.resultCode = (cson_rc.AllocError==rc) + ? FSL_JSON_E_ALLOC + : FSL_JSON_E_UNKNOWN; + goto error; + } + } +#undef SET + goto ok; + error: + assert( 0 != g.json.resultCode ); + cson_value_free(payV); + payV = NULL; + ok: + blob_reset(&sql); + db_finalize(&q); + return payV; +} + +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/json_user.c Index: src/json_user.c ================================================================== --- src/json_user.c +++ src/json_user.c @@ -0,0 +1,438 @@ +#ifdef FOSSIL_ENABLE_JSON +/* +** Copyright (c) 2011 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) 
+** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +*/ +#include "VERSION.h" +#include "config.h" +#include "json_user.h" + +#if INTERFACE +#include "json_detail.h" +#endif + +static cson_value * json_user_get(); +static cson_value * json_user_list(); +static cson_value * json_user_save(); + +/* +** Mapping of /json/user/XXX commands/paths to callbacks. +*/ +static const JsonPageDef JsonPageDefs_User[] = { +{"save", json_user_save, 0}, +{"get", json_user_get, 0}, +{"list", json_user_list, 0}, +/* Last entry MUST have a NULL name. */ +{NULL,NULL,0} +}; + + +/* +** Implements the /json/user family of pages/commands. +** +*/ +cson_value * json_page_user(){ + return json_page_dispatch_helper(&JsonPageDefs_User[0]); +} + + +/* +** Impl of /json/user/list. Requires admin/setup rights. +*/ +static cson_value * json_user_list(){ + cson_value * payV = NULL; + Stmt q; + if(!g.perm.Admin && !g.perm.Setup){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'a' or 's' privileges."); + return NULL; + } + db_prepare(&q,"SELECT uid AS uid," + " login AS name," + " cap AS capabilities," + " info AS info," + " mtime AS timestamp" + " FROM user ORDER BY login"); + payV = json_stmt_to_array_of_obj(&q, NULL); + db_finalize(&q); + if(NULL == payV){ + json_set_err(FSL_JSON_E_UNKNOWN, + "Could not convert user list to JSON."); + } + return payV; +} + +/* +** Creates a new JSON Object based on the db state of +** the given user name. On error (no record found) +** it returns NULL, else the caller owns the returned +** object. +*/ +static cson_value * json_load_user_by_name(char const * zName){ + cson_value * u = NULL; + Stmt q; + db_prepare(&q,"SELECT uid AS uid," + " login AS name," + " cap AS capabilities," + " info AS info," + " mtime AS timestamp" + " FROM user" + " WHERE login=%Q", + zName); + if( (SQLITE_ROW == db_step(&q)) ){ + u = cson_sqlite3_row_to_object(q.pStmt); + } + db_finalize(&q); + return u; +} + +/* +** Identical to json_load_user_by_name(), but expects a user ID. Returns +** NULL if no user found with that ID. +*/ +static cson_value * json_load_user_by_id(int uid){ + cson_value * u = NULL; + Stmt q; + db_prepare(&q,"SELECT uid AS uid," + " login AS name," + " cap AS capabilities," + " info AS info," + " mtime AS timestamp" + " FROM user" + " WHERE uid=%d", + uid); + if( (SQLITE_ROW == db_step(&q)) ){ + u = cson_sqlite3_row_to_object(q.pStmt); + } + db_finalize(&q); + return u; +} + + +/* +** Impl of /json/user/get. Requires admin or setup rights. +*/ +static cson_value * json_user_get(){ + cson_value * payV = NULL; + char const * pUser = NULL; + if(!g.perm.Admin && !g.perm.Setup){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'a' or 's' privileges."); + return NULL; + } + pUser = json_find_option_cstr2("name", NULL, NULL, g.json.dispatchDepth+1); + if(!pUser || !*pUser){ + json_set_err(FSL_JSON_E_MISSING_ARGS,"Missing 'name' property."); + return NULL; + } + payV = json_load_user_by_name(pUser); + if(!payV){ + json_set_err(FSL_JSON_E_RESOURCE_NOT_FOUND,"User not found."); + } + return payV; +} + +/* +** Expects pUser to contain fossil user fields in JSON form: name, +** uid, info, capabilities, password. +** +** At least one of (name, uid) must be included. All others are +** optional and their db fields will not be updated if those fields +** are not included in pUser. 
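+**
+** For example, a plausible update payload (illustrative values only)
+** could look like:
+**
+**   {"name":"alice", "info":"Alice A.", "capabilities":"ei",
+**    "password":"sekrit"}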
+** +** If uid is specified then name may refer to a _new_ name +** for a user, otherwise the name must refer to an existing user. +** If uid=-1 then the name must be specified and a new user is +** created (fails if one already exists). +** +** If uid is not set, this function might modify pUser to contain the +** db-found (or inserted) user ID. +** +** On error g.json's error state is set and one of the FSL_JSON_E_xxx +** values from FossilJsonCodes is returned. +** +** On success the db record for the given user is updated. +** +** Requires either Admin, Setup, or Password access. Non-admin/setup +** users can only change their own information. Non-setup users may +** not modify the 's' permission. Admin users without setup +** permissions may not edit any other user who has the 's' permission. +** +*/ +int json_user_update_from_json( cson_object * pUser ){ +#define CSTR(X) cson_string_cstr(cson_value_get_string( cson_object_get(pUser, X ) )) + char const * zName = CSTR("name"); + char const * zNameNew = zName; + char * zNameFree = NULL; + char const * zInfo = CSTR("info"); + char const * zCap = CSTR("capabilities"); + char const * zPW = CSTR("password"); + cson_value const * forceLogout = cson_object_get(pUser, "forceLogout"); + int gotFields = 0; +#undef CSTR + cson_int_t uid = cson_value_get_integer( cson_object_get(pUser, "uid") ); + char const tgtHasSetup = zCap && (NULL!=strchr(zCap, 's')); + char tgtHadSetup = 0; + Blob sql = empty_blob; + Stmt q = empty_Stmt; + +#if 0 + if(!g.perm.Admin && !g.perm.Setup && !g.perm.Password){ + return json_set_err( FSL_JSON_E_DENIED, + "Password change requires 'a', 's', " + "or 'p' permissions."); + } +#endif + if(uid<=0 && (!zName||!*zName)){ + return json_set_err(FSL_JSON_E_MISSING_ARGS, + "One of 'uid' or 'name' is required."); + }else if(uid>0){ + zNameFree = db_text(NULL, "SELECT login FROM user WHERE uid=%d",uid); + if(!zNameFree){ + return json_set_err(FSL_JSON_E_RESOURCE_NOT_FOUND, + "No login found for uid %d.", uid); + } + zName = zNameFree; + }else if(-1==uid){ + /* try to create a new user */ + if(!g.perm.Admin && !g.perm.Setup){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'a' or 's' privileges."); + goto error; + }else if(!zName || !*zName){ + json_set_err(FSL_JSON_E_MISSING_ARGS, + "No name specified for new user."); + goto error; + }else if( db_exists("SELECT 1 FROM user WHERE login=%Q", zName) ){ + json_set_err(FSL_JSON_E_RESOURCE_ALREADY_EXISTS, + "User %s already exists.", zName); + goto error; + }else{ + Stmt ins = empty_Stmt; + db_prepare(&ins, "INSERT INTO user (login) VALUES(%Q)",zName); + db_step( &ins ); + db_finalize(&ins); + uid = db_int(0,"SELECT uid FROM user WHERE login=%Q", zName); + assert(uid>0); + zNameNew = zName; + cson_object_set( pUser, "uid", cson_value_new_integer(uid) ); + } + }else{ + uid = db_int(0,"SELECT uid FROM user WHERE login=%Q", zName); + if(uid<=0){ + json_set_err(FSL_JSON_E_RESOURCE_NOT_FOUND, + "No login found for user [%s].", zName); + goto error; + } + cson_object_set( pUser, "uid", cson_value_new_integer(uid) ); + } + + /* Maintenance note: all error-returns from here on out should go + via 'goto error' in order to clean up. + */ + + if(uid != g.userUid){ + if(!g.perm.Admin && !g.perm.Setup){ + json_set_err(FSL_JSON_E_DENIED, + "Changing another user's data requires " + "'a' or 's' privileges."); + goto error; + } + } + /* check if the target uid currently has setup rights. 
*/ + tgtHadSetup = db_int(0,"SELECT 1 FROM user where uid=%d" + " AND cap GLOB '*s*'", uid); + + if((tgtHasSetup || tgtHadSetup) && !g.perm.Setup){ + /* + Do not allow a non-setup user to set or remove setup + privileges. setup.c uses similar logic. + */ + json_set_err(FSL_JSON_E_DENIED, + "Modifying 's' users/privileges requires " + "'s' privileges."); + goto error; + } + /* + Potential todo: do not allow a setup user to remove 's' from + himself, to avoid locking himself out? + */ + + blob_append(&sql, "UPDATE user SET",-1 ); + blob_append(&sql, " mtime=cast(strftime('%s') AS INTEGER)", -1); + + if((uid>0) && zNameNew){ + /* Check for name change... */ + if(0!=strcmp(zName,zNameNew)){ + if( (!g.perm.Admin && !g.perm.Setup) + && (zName != zNameNew)){ + json_set_err( FSL_JSON_E_DENIED, + "Modifying user names requires 'a' or 's' privileges."); + goto error; + } + forceLogout = cson_value_true() + /* reminders: 1) does not allocate. + 2) we do this because changing a name + invalidates any login token because the old name + is part of the token hash. + */; + blob_append_sql(&sql, ", login=%Q", zNameNew); + ++gotFields; + } + } + + if( zCap && *zCap ){ + if(!g.perm.Admin || !g.perm.Setup){ + /* we "could" arguably silently ignore cap in this case. */ + json_set_err(FSL_JSON_E_DENIED, + "Changing capabilities requires 'a' or 's' privileges."); + goto error; + } + blob_append_sql(&sql, ", cap=%Q", zCap); + ++gotFields; + } + + if( zPW && *zPW ){ + if(!g.perm.Admin && !g.perm.Setup && !g.perm.Password){ + json_set_err( FSL_JSON_E_DENIED, + "Password change requires 'a', 's', " + "or 'p' permissions."); + goto error; + }else{ +#define TRY_LOGIN_GROUP 0 /* login group support is not yet implemented. */ +#if !TRY_LOGIN_GROUP + char * zPWHash = NULL; + ++gotFields; + zPWHash = sha1_shared_secret(zPW, zNameNew ? zNameNew : zName, NULL); + blob_append_sql(&sql, ", pw=%Q", zPWHash); + free(zPWHash); +#else + ++gotFields; + blob_append_sql(&sql, ", pw=coalesce(shared_secret(%Q,%Q," + "(SELECT value FROM config WHERE name='project-code')))", + zPW, zNameNew ? zNameNew : zName); + /* shared_secret() func is undefined? 
*/ +#endif + } + } + + if( zInfo ){ + blob_append_sql(&sql, ", info=%Q", zInfo); + ++gotFields; + } + + if((g.perm.Admin || g.perm.Setup) + && forceLogout && cson_value_get_bool(forceLogout)){ + blob_append(&sql, ", cookie=NULL, cexpire=NULL", -1); + ++gotFields; + } + + if(!gotFields){ + json_set_err( FSL_JSON_E_MISSING_ARGS, + "Required user data are missing."); + goto error; + } + assert(uid>0); +#if !TRY_LOGIN_GROUP + blob_append_sql(&sql, " WHERE uid=%d", uid); +#else /* need name for login group support :/ */ + blob_append_sql(&sql, " WHERE login=%Q", zName); +#endif +#if 0 + puts(blob_str(&sql)); + cson_output_FILE( cson_object_value(pUser), stdout, NULL ); +#endif + db_prepare(&q, "%s", blob_sql_text(&sql)); + db_exec(&q); + db_finalize(&q); +#if TRY_LOGIN_GROUP + if( zPW || cson_value_get_bool(forceLogout) ){ + Blob groupSql = empty_blob; + char * zErr = NULL; + blob_append_sql(&groupSql, + "INSERT INTO user(login)" + " SELECT %Q WHERE NOT EXISTS(SELECT 1 FROM user WHERE login=%Q);", + zName, zName + ); + blob_append(&groupSql, blob_str(&sql), blob_size(&sql)); + login_group_sql(blob_str(&groupSql), NULL, NULL, &zErr); + blob_reset(&groupSql); + if( zErr ){ + json_set_err( FSL_JSON_E_UNKNOWN, + "Repo-group update at least partially failed: %s", + zErr); + free(zErr); + goto error; + } + } +#endif /* TRY_LOGIN_GROUP */ + +#undef TRY_LOGIN_GROUP + + free( zNameFree ); + blob_reset(&sql); + return 0; + + error: + assert(0 != g.json.resultCode); + free(zNameFree); + blob_reset(&sql); + return g.json.resultCode; +} + + +/* +** Impl of /json/user/save. +*/ +static cson_value * json_user_save(){ + /* try to get user info from GET/CLI args and construct + a JSON form of it... */ + cson_object * u = cson_new_object(); + char const * str = NULL; + char b = -1; + int i = -1; + int uid = -1; + cson_value * payload = NULL; + /* String properties... */ +#define PROP(LK,SK) str = json_find_option_cstr(LK,NULL,SK); \ + if(str){ cson_object_set(u, LK, json_new_string(str)); } (void)0 + PROP("name","n"); + PROP("password","p"); + PROP("info","i"); + PROP("capabilities","c"); +#undef PROP + /* Boolean properties... */ +#define PROP(LK,DFLT) b = json_find_option_bool(LK,NULL,NULL,DFLT); \ + if(DFLT!=b){ cson_object_set(u, LK, cson_value_new_bool(b)); } (void)0 + PROP("forceLogout",-1); +#undef PROP + +#define PROP(LK,DFLT) i = json_find_option_int(LK,NULL,NULL,DFLT); \ + if(DFLT != i){ cson_object_set(u, LK, cson_value_new_integer(i)); } (void)0 + PROP("uid",-99); +#undef PROP + if( g.json.reqPayload.o ){ + cson_object_merge( u, g.json.reqPayload.o, CSON_MERGE_NO_RECURSE ); + } + json_user_update_from_json( u ); + if(!g.json.resultCode){ + uid = cson_value_get_integer( cson_object_get(u, "uid") ); + assert((uid>0) && "Something went wrong in json_user_update_from_json()"); + payload = json_load_user_by_id(uid); + } + cson_free_object(u); + return payload; +} +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/json_wiki.c Index: src/json_wiki.c ================================================================== --- src/json_wiki.c +++ src/json_wiki.c @@ -0,0 +1,595 @@ +#ifdef FOSSIL_ENABLE_JSON +/* +** Copyright (c) 2011-12 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) 
+** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +*/ +#include "VERSION.h" +#include "config.h" +#include "json_wiki.h" + +#if INTERFACE +#include "json_detail.h" +#endif + +static cson_value * json_wiki_create(); +static cson_value * json_wiki_get(); +static cson_value * json_wiki_list(); +static cson_value * json_wiki_preview(); +static cson_value * json_wiki_save(); +static cson_value * json_wiki_diff(); +/* +** Mapping of /json/wiki/XXX commands/paths to callbacks. +*/ +static const JsonPageDef JsonPageDefs_Wiki[] = { +{"create", json_wiki_create, 0}, +{"diff", json_wiki_diff, 0}, +{"get", json_wiki_get, 0}, +{"list", json_wiki_list, 0}, +{"preview", json_wiki_preview, 0}, +{"save", json_wiki_save, 0}, +{"timeline", json_timeline_wiki,0}, +/* Last entry MUST have a NULL name. */ +{NULL,NULL,0} +}; + + +/* +** Implements the /json/wiki family of pages/commands. +** +*/ +cson_value * json_page_wiki(){ + return json_page_dispatch_helper(JsonPageDefs_Wiki); +} + +/* +** Returns the UUID for the given wiki blob RID, or NULL if not +** found. The returned string is allocated via db_text() and must be +** free()d by the caller. +*/ +char * json_wiki_get_uuid_for_rid( int rid ) +{ + return db_text(NULL, + "SELECT b.uuid FROM tag t, tagxref x, blob b" + " WHERE x.tagid=t.tagid AND t.tagname GLOB 'wiki-*' " + " AND b.rid=x.rid AND b.rid=%d" + " ORDER BY x.mtime DESC LIMIT 1", + rid + ); +} + +/* +** Tries to load a wiki page from the given rid creates a JSON object +** representation of it. If the page is not found then NULL is +** returned. If contentFormat is positive then the page content +** is HTML-ized using fossil's conventional wiki format, if it is +** negative then no parsing is performed, if it is 0 then the content +** is not returned in the response. If contentFormat is 0 then the +** contentSize reflects the number of bytes, not characters, stored in +** the page. +** +** The returned value, if not NULL, is-a JSON Object owned by the +** caller. If it returns NULL then it may set g.json's error state. +*/ +cson_value * json_get_wiki_page_by_rid(int rid, int contentFormat){ + Manifest * pWiki = NULL; + if( NULL == (pWiki = manifest_get(rid, CFTYPE_WIKI, 0)) ){ + json_set_err( FSL_JSON_E_UNKNOWN, + "Error reading wiki page from manifest (rid=%d).", + rid ); + return NULL; + }else{ + unsigned int len = 0; + cson_object * pay = cson_new_object(); + char const * zBody = pWiki->zWiki; + char const * zFormat = NULL; + char * zUuid = json_wiki_get_uuid_for_rid(rid); + cson_object_set(pay,"name",json_new_string(pWiki->zWikiTitle)); + cson_object_set(pay,"uuid",json_new_string(zUuid)); + free(zUuid); + zUuid = NULL; + if( pWiki->nParent > 0 ){ + cson_object_set( pay, "parent", json_new_string(pWiki->azParent[0]) ) + /* Reminder: wiki pages do not branch and have only one parent + (except for the initial version, which has no parents). 
*/; + } + /*cson_object_set(pay,"rid",json_new_int((cson_int_t)rid));*/ + cson_object_set(pay,"user",json_new_string(pWiki->zUser)); + cson_object_set(pay,FossilJsonKeys.timestamp, + json_julian_to_timestamp(pWiki->rDate)); + if(0 == contentFormat){ + cson_object_set(pay,"size", + json_new_int((cson_int_t)(zBody?strlen(zBody):0))); + }else{ + if( contentFormat>0 ){/*HTML-ize it*/ + Blob content = empty_blob; + Blob raw = empty_blob; + zFormat = "html"; + if(zBody && *zBody){ + blob_append(&raw,zBody,-1); + wiki_convert(&raw,&content,0); + len = (unsigned int)blob_size(&content); + } + cson_object_set(pay,"size",json_new_int((cson_int_t)len)); + cson_object_set(pay,"content", + cson_value_new_string(blob_buffer(&content),len)); + blob_reset(&content); + blob_reset(&raw); + }else{/*raw format*/ + zFormat = "raw"; + len = zBody ? strlen(zBody) : 0; + cson_object_set(pay,"size",json_new_int((cson_int_t)len)); + cson_object_set(pay,"content",cson_value_new_string(zBody,len)); + } + cson_object_set(pay,"contentFormat",json_new_string(zFormat)); + + } + /*TODO: add 'T' (tag) fields*/ + /*TODO: add the 'A' card (file attachment) entries?*/ + manifest_destroy(pWiki); + return cson_object_value(pay); + } +} + +/* +** Searches for the latest version of a wiki page with the given +** name. If found it behaves like json_get_wiki_page_by_rid(theRid, +** contentFormat), else it returns NULL. +*/ +cson_value * json_get_wiki_page_by_name(char const * zPageName, int contentFormat){ + int rid; + rid = db_int(0, + "SELECT x.rid FROM tag t, tagxref x, blob b" + " WHERE x.tagid=t.tagid AND t.tagname='wiki-%q' " + " AND b.rid=x.rid" + " ORDER BY x.mtime DESC LIMIT 1", + zPageName + ); + if( 0==rid ){ + json_set_err( FSL_JSON_E_RESOURCE_NOT_FOUND, "Wiki page not found: %s", + zPageName ); + return NULL; + } + return json_get_wiki_page_by_rid(rid, contentFormat); +} + + +/* +** Searches json_find_option_ctr("format",NULL,"f") for a flag. +** If not found it returns defaultValue else it returns a value +** depending on the first character of the format option: +** +** [h]tml = 1 +** [n]one = 0 +** [r]aw = -1 +** +** The return value is intended for use with +** json_get_wiki_page_by_rid() and friends. +*/ +int json_wiki_get_content_format_flag( int defaultValue ){ + int contentFormat = defaultValue; + char const * zFormat = json_find_option_cstr("format",NULL,"f"); + if( !zFormat || !*zFormat ){ + return contentFormat; + } + else if('r'==*zFormat){ + contentFormat = -1; + } + else if('h'==*zFormat){ + contentFormat = 1; + } + else if('n'==*zFormat){ + contentFormat = 0; + } + return contentFormat; +} + +/* +** Helper for /json/wiki/get and /json/wiki/preview. At least one of +** zPageName (wiki page name) or zSymname must be set to a +** non-empty/non-NULL value. zSymname takes precedence. On success +** the result of one of json_get_wiki_page_by_rid() or +** json_get_wiki_page_by_name() will be returned (owned by the +** caller). On error g.json's error state is set and NULL is returned. +*/ +static cson_value * json_wiki_get_by_name_or_symname(char const * zPageName, + char const * zSymname, + int contentFormat ){ + if(!zSymname || !*zSymname){ + return json_get_wiki_page_by_name(zPageName, contentFormat); + }else{ + int rid = symbolic_name_to_rid( zSymname ? 
zSymname : zPageName, "w" ); + if(rid<0){ + json_set_err(FSL_JSON_E_AMBIGUOUS_UUID, + "UUID [%s] is ambiguous.", zSymname); + return NULL; + }else if(rid==0){ + json_set_err(FSL_JSON_E_RESOURCE_NOT_FOUND, + "UUID [%s] does not resolve to a wiki page.", zSymname); + return NULL; + }else{ + return json_get_wiki_page_by_rid(rid, contentFormat); + } + } +} + +/* +** Implementation of /json/wiki/get. +** +*/ +static cson_value * json_wiki_get(){ + char const * zPageName; + char const * zSymName = NULL; + int contentFormat = -1; + if( !g.perm.RdWiki && !g.perm.Read ){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'o' or 'j' access."); + return NULL; + } + zPageName = json_find_option_cstr2("name",NULL,"n",g.json.dispatchDepth+1); + + zSymName = json_find_option_cstr("uuid",NULL,"u"); + + if((!zPageName||!*zPageName) && (!zSymName || !*zSymName)){ + json_set_err(FSL_JSON_E_MISSING_ARGS, + "At least one of the 'name' or 'uuid' arguments must be provided."); + return NULL; + } + + /* TODO: see if we have a page named zPageName. If not, try to resolve + zPageName as a UUID. + */ + + contentFormat = json_wiki_get_content_format_flag(contentFormat); + return json_wiki_get_by_name_or_symname( zPageName, zSymName, contentFormat ); +} + +/* +** Implementation of /json/wiki/preview. +** +*/ +static cson_value * json_wiki_preview(){ + char const * zContent = NULL; + cson_value * pay = NULL; + Blob contentOrig = empty_blob; + Blob contentHtml = empty_blob; + if( !g.perm.WrWiki ){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'k' access."); + return NULL; + } + zContent = cson_string_cstr(cson_value_get_string(g.json.reqPayload.v)); + if(!zContent) { + json_set_err(FSL_JSON_E_MISSING_ARGS, + "The 'payload' property must be a string containing the wiki code to preview."); + return NULL; + } + blob_append( &contentOrig, zContent, (int)cson_string_length_bytes(cson_value_get_string(g.json.reqPayload.v)) ); + wiki_convert( &contentOrig, &contentHtml, 0 ); + blob_reset( &contentOrig ); + pay = cson_value_new_string( blob_str(&contentHtml), (unsigned int)blob_size(&contentHtml)); + blob_reset( &contentHtml ); + return pay; +} + + +/* +** Internal impl of /wiki/save and /wiki/create. If createMode is 0 +** and the page already exists then a +** FSL_JSON_E_RESOURCE_ALREADY_EXISTS error is triggered. If +** createMode is false then the FSL_JSON_E_RESOURCE_NOT_FOUND is +** triggered if the page does not already exists. +** +** Note that the error triggered when createMode==0 and no such page +** exists is rather arbitrary - we could just as well create the entry +** here if it doesn't already exist. With that, save/create would +** become one operation. That said, i expect there are people who +** would categorize such behaviour as "being too clever" or "doing too +** much automatically" (and i would likely agree with them). +** +** If allowCreateIfNotExists is true then this function will allow a new +** page to be created even if createMode is false. +*/ +static cson_value * json_wiki_create_or_save(char createMode, + char allowCreateIfNotExists){ + Blob content = empty_blob; /* wiki page content */ + cson_value * nameV; /* wiki page name */ + char const * zPageName; /* cstr form of page name */ + cson_value * contentV; /* passed-in content */ + cson_value * emptyContent = NULL; /* placeholder for empty content. */ + cson_value * payV = NULL; /* payload/return value */ + cson_string const * jstr = NULL; /* temp for cson_value-to-cson_string conversions. 
*/ + char const * zMimeType = 0; + unsigned int contentLen = 0; + int rid; + if( (createMode && !g.perm.NewWiki) + || (!createMode && !g.perm.WrWiki)){ + json_set_err(FSL_JSON_E_DENIED, + "Requires '%c' permissions.", + (createMode ? 'f' : 'k')); + return NULL; + } + nameV = json_req_payload_get("name"); + if(!nameV){ + json_set_err( FSL_JSON_E_MISSING_ARGS, + "'name' parameter is missing."); + return NULL; + } + zPageName = cson_string_cstr(cson_value_get_string(nameV)); + if(!zPageName || !*zPageName){ + json_set_err(FSL_JSON_E_INVALID_ARGS, + "'name' parameter must be a non-empty string."); + return NULL; + } + rid = db_int(0, + "SELECT x.rid FROM tag t, tagxref x" + " WHERE x.tagid=t.tagid AND t.tagname='wiki-%q'" + " ORDER BY x.mtime DESC LIMIT 1", + zPageName + ); + + if(rid){ + if(createMode){ + json_set_err(FSL_JSON_E_RESOURCE_ALREADY_EXISTS, + "Wiki page '%s' already exists.", + zPageName); + goto error; + } + }else if(!createMode && !allowCreateIfNotExists){ + json_set_err(FSL_JSON_E_RESOURCE_NOT_FOUND, + "Wiki page '%s' not found.", + zPageName); + goto error; + } + + contentV = json_req_payload_get("content"); + if( !contentV ){ + if( createMode || (!rid && allowCreateIfNotExists) ){ + contentV = emptyContent = cson_value_new_string("",0); + }else{ + json_set_err(FSL_JSON_E_MISSING_ARGS, + "'content' parameter is missing."); + goto error; + } + } + if( !cson_value_is_string(nameV) + || !cson_value_is_string(contentV)){ + json_set_err(FSL_JSON_E_INVALID_ARGS, + "'content' parameter must be a string."); + goto error; + } + jstr = cson_value_get_string(contentV); + contentLen = (int)cson_string_length_bytes(jstr); + if(contentLen){ + blob_append(&content, cson_string_cstr(jstr),contentLen); + } + + zMimeType = json_find_option_cstr("mimetype","mimetype","M"); + + wiki_cmd_commit(zPageName, 0==rid, &content, zMimeType, 0); + blob_reset(&content); + /* + Our return value here has a race condition: if this operation + is called concurrently for the same wiki page via two requests, + payV could reflect the results of the other save operation. + */ + payV = json_get_wiki_page_by_name( + cson_string_cstr( + cson_value_get_string(nameV)), + 0); + goto ok; + error: + assert( 0 != g.json.resultCode ); + assert( NULL == payV ); + ok: + if( emptyContent ){ + /* We have some potentially tricky memory ownership + here, which is why we handle emptyContent separately. + + This is, in fact, overkill because cson_value_new_string("",0) + actually returns a shared singleton instance (i.e. doesn't + allocate), but that is a cson implementation detail which i + don't want leaking into this code... + */ + cson_value_free(emptyContent); + } + return payV; + +} + +/* +** Implementation of /json/wiki/create. +*/ +static cson_value * json_wiki_create(){ + return json_wiki_create_or_save(1,0); +} + +/* +** Implementation of /json/wiki/save. +*/ +static cson_value * json_wiki_save(){ + char const createIfNotExists = json_getenv_bool("createIfNotExists",0); + return json_wiki_create_or_save(0,createIfNotExists); +} + +/* +** Implementation of /json/wiki/list. 
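+**
+** Optional "glob" or "like" arguments filter the page names ("invert"
+** negates the filter) and "verbose" returns full page objects instead
+** of bare names. A hypothetical HTTP-mode request might look like:
+**
+**   /json/wiki/list?glob=release-*&verbose=1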
+*/ +static cson_value * json_wiki_list(){ + cson_value * listV = NULL; + cson_array * list = NULL; + char const * zGlob = NULL; + Stmt q = empty_Stmt; + Blob sql = empty_blob; + char const verbose = json_find_option_bool("verbose",NULL,"v",0); + char fInvert = json_find_option_bool("invert",NULL,"i",0);; + + if( !g.perm.RdWiki && !g.perm.Read ){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'j' or 'o' permissions."); + return NULL; + } + blob_append(&sql,"SELECT" + " substr(tagname,6) as name" + " FROM tag WHERE tagname GLOB 'wiki-*'", + -1); + zGlob = json_find_option_cstr("glob",NULL,"g"); + if(zGlob && *zGlob){ + blob_append_sql(&sql," AND name %s GLOB %Q", + fInvert ? "NOT" : "", zGlob); + }else{ + zGlob = json_find_option_cstr("like",NULL,"l"); + if(zGlob && *zGlob){ + blob_append_sql(&sql," AND name %s LIKE %Q", + fInvert ? "NOT" : "", zGlob); + } + } + blob_append(&sql," ORDER BY lower(name)", -1); + db_prepare(&q,"%s", blob_sql_text(&sql)); + blob_reset(&sql); + listV = cson_value_new_array(); + list = cson_value_get_array(listV); + while( SQLITE_ROW == db_step(&q) ){ + cson_value * v; + if( verbose ){ + char const * name = db_column_text(&q,0); + v = json_get_wiki_page_by_name(name,0); + }else{ + v = cson_sqlite3_column_to_value(q.pStmt,0); + } + if(!v){ + json_set_err(FSL_JSON_E_UNKNOWN, + "Could not convert wiki name column to JSON."); + goto error; + }else if( 0 != cson_array_append( list, v ) ){ + cson_value_free(v); + json_set_err(FSL_JSON_E_ALLOC,"Could not append wiki page name to array.") + /* OOM (or maybe numeric overflow) are the only realistic + error codes for that particular failure.*/; + goto error; + } + } + goto end; + error: + assert(0 != g.json.resultCode); + cson_value_free(listV); + listV = NULL; + end: + db_finalize(&q); + return listV; +} + +static cson_value * json_wiki_diff(){ + char const * zV1 = NULL; + char const * zV2 = NULL; + cson_object * pay = NULL; + int argPos = g.json.dispatchDepth; + int r1 = 0, r2 = 0; + Manifest * pW1 = NULL, *pW2 = NULL; + Blob w1 = empty_blob, w2 = empty_blob, d = empty_blob; + char const * zErrTag = NULL; + u64 diffFlags; + char * zUuid = NULL; + if( !g.perm.Hyperlink ){ + json_set_err(FSL_JSON_E_DENIED, + "Requires 'h' permissions."); + return NULL; + } + + + zV1 = json_find_option_cstr2( "v1",NULL, NULL, ++argPos ); + zV2 = json_find_option_cstr2( "v2",NULL, NULL, ++argPos ); + if(!zV1 || !*zV1 || !zV2 || !*zV2) { + json_set_err(FSL_JSON_E_INVALID_ARGS, + "Requires both 'v1' and 'v2' arguments."); + return NULL; + } + + r1 = symbolic_name_to_rid( zV1, "w" ); + zErrTag = zV1; + if(r1<0){ + goto ambiguous; + }else if(0==r1){ + goto invalid; + } + + r2 = symbolic_name_to_rid( zV2, "w" ); + zErrTag = zV2; + if(r2<0){ + goto ambiguous; + }else if(0==r2){ + goto invalid; + } + + zErrTag = zV1; + pW1 = manifest_get(r1, CFTYPE_WIKI, 0); + if( pW1==0 ) { + goto manifest; + } + zErrTag = zV2; + pW2 = manifest_get(r2, CFTYPE_WIKI, 0); + if( pW2==0 ) { + goto manifest; + } + + blob_init(&w1, pW1->zWiki, -1); + blob_zero(&w2); + blob_init(&w2, pW2->zWiki, -1); + blob_zero(&d); + diffFlags = DIFF_IGNORE_EOLWS | DIFF_STRIP_EOLCR; + text_diff(&w1, &w2, &d, 0, diffFlags); + blob_reset(&w1); + blob_reset(&w2); + + pay = cson_new_object(); + + zUuid = json_wiki_get_uuid_for_rid( pW1->rid ); + cson_object_set(pay, "v1", json_new_string(zUuid) ); + free(zUuid); + zUuid = json_wiki_get_uuid_for_rid( pW2->rid ); + cson_object_set(pay, "v2", json_new_string(zUuid) ); + free(zUuid); + zUuid = NULL; + + manifest_destroy(pW1); + 
manifest_destroy(pW2); + + cson_object_set(pay, "diff", + cson_value_new_string( blob_str(&d), + (unsigned int)blob_size(&d))); + + return cson_object_value(pay); + + manifest: + json_set_err(FSL_JSON_E_UNKNOWN, + "Could not load wiki manifest for UUID [%s].", + zErrTag); + goto end; + + ambiguous: + json_set_err(FSL_JSON_E_AMBIGUOUS_UUID, + "UUID [%s] is ambiguous.", zErrTag); + goto end; + + invalid: + json_set_err(FSL_JSON_E_RESOURCE_NOT_FOUND, + "UUID [%s] not found.", zErrTag); + goto end; + + end: + cson_free_object(pay); + return NULL; +} + +#endif /* FOSSIL_ENABLE_JSON */ ADDED src/leaf.c Index: src/leaf.c ================================================================== --- src/leaf.c +++ src/leaf.c @@ -0,0 +1,271 @@ +/* +** Copyright (c) 2011 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code used to manage the "leaf" table of the +** repository. +** +** The LEAF table contains the rids for all leaves in the check-in DAG. +** A leaf is a check-in that has no children in the same branch. +*/ +#include "config.h" +#include "leaf.h" +#include + + +/* +** Return true if the check-in with RID=rid is a leaf. +** +** A leaf has no children in the same branch. +*/ +int is_a_leaf(int rid){ + int rc; + static const char zSql[] = + @ SELECT 1 FROM plink + @ WHERE pid=%d + @ AND coalesce((SELECT value FROM tagxref + @ WHERE tagid=%d AND rid=plink.pid), 'trunk') + @ =coalesce((SELECT value FROM tagxref + @ WHERE tagid=%d AND rid=plink.cid), 'trunk') + ; + rc = db_int(0, zSql /*works-like:"%d,%d,%d"*/, + rid, TAG_BRANCH, TAG_BRANCH); + return rc==0; +} + +/* +** Count the number of primary non-branch children for the given check-in. +** +** A primary child is one where the parent is the primary parent, not +** a merge parent. A "leaf" is a node that has zero children of any +** kind. This routine counts only primary children. +** +** A non-branch child is one which is on the same branch as the parent. +*/ +int count_nonbranch_children(int pid){ + int nNonBranch = 0; + static Stmt q; + static const char zSql[] = + @ SELECT count(*) FROM plink + @ WHERE pid=:pid AND isprim + @ AND coalesce((SELECT value FROM tagxref + @ WHERE tagid=%d AND rid=plink.pid), 'trunk') + @ =coalesce((SELECT value FROM tagxref + @ WHERE tagid=%d AND rid=plink.cid), 'trunk') + ; + db_static_prepare(&q, zSql /*works-like: "%d,%d"*/, TAG_BRANCH, TAG_BRANCH); + db_bind_int(&q, ":pid", pid); + if( db_step(&q)==SQLITE_ROW ){ + nNonBranch = db_column_int(&q, 0); + } + db_reset(&q); + return nNonBranch; +} + + +/* +** Recompute the entire LEAF table. +** +** This can be expensive (5 seconds or so) for a really large repository. +** So it is only done for things like a rebuild. 
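+**
+** Outside of a rebuild the table is normally kept current
+** incrementally; a caller that has just added a check-in would do
+** something along these lines (illustrative sketch; newRid stands for
+** the new check-in's RID):
+**
+**   leaf_eventually_check(newRid);
+**   ...
+**   leaf_do_pending_checks();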
+*/ +void leaf_rebuild(void){ + db_multi_exec( + "DELETE FROM leaf;" + "INSERT OR IGNORE INTO leaf" + " SELECT cid FROM plink" + " EXCEPT" + " SELECT pid FROM plink" + " WHERE coalesce((SELECT value FROM tagxref" + " WHERE tagid=%d AND rid=plink.pid),'trunk')" + " == coalesce((SELECT value FROM tagxref" + " WHERE tagid=%d AND rid=plink.cid),'trunk')", + TAG_BRANCH, TAG_BRANCH + ); +} + +/* +** A bag of check-ins whose leaf status needs to be checked. +*/ +static Bag needToCheck; + +/* +** Check to see if check-in "rid" is a leaf and either add it to the LEAF +** table if it is, or remove it if it is not. +*/ +void leaf_check(int rid){ + static Stmt checkIfLeaf; + static Stmt addLeaf; + static Stmt removeLeaf; + int rc; + + db_static_prepare(&checkIfLeaf, + "SELECT 1 FROM plink" + " WHERE pid=:rid" + " AND coalesce((SELECT value FROM tagxref" + " WHERE tagid=%d AND rid=:rid),'trunk')" + " == coalesce((SELECT value FROM tagxref" + " WHERE tagid=%d AND rid=plink.cid),'trunk');", + TAG_BRANCH, TAG_BRANCH + ); + db_bind_int(&checkIfLeaf, ":rid", rid); + rc = db_step(&checkIfLeaf); + db_reset(&checkIfLeaf); + if( rc==SQLITE_ROW ){ + db_static_prepare(&removeLeaf, "DELETE FROM leaf WHERE rid=:rid"); + db_bind_int(&removeLeaf, ":rid", rid); + db_step(&removeLeaf); + db_reset(&removeLeaf); + }else{ + db_static_prepare(&addLeaf, "INSERT OR IGNORE INTO leaf VALUES(:rid)"); + db_bind_int(&addLeaf, ":rid", rid); + db_step(&addLeaf); + db_reset(&addLeaf); + } +} + +/* +** Return an SQL expression (stored in memory obtained from fossil_malloc()) +** that is true if the SQL variable named "zVar" contains the rid with +** a CLOSED tag. In other words, return true if the leaf is closed. +** +** The result can be prefaced with a NOT operator to get all leaves that +** are open. +*/ +char *leaf_is_closed_sql(const char *zVar){ + return mprintf( + "EXISTS(SELECT 1 FROM tagxref AS tx" + " WHERE tx.rid=%s" + " AND tx.tagid=%d" + " AND tx.tagtype>0)", + zVar, TAG_CLOSED + ); +} + +/* +** Schedule a leaf check for "rid" and its parents. +*/ +void leaf_eventually_check(int rid){ + static Stmt parentsOf; + + db_static_prepare(&parentsOf, + "SELECT pid FROM plink WHERE cid=:rid AND pid>0" + ); + db_bind_int(&parentsOf, ":rid", rid); + bag_insert(&needToCheck, rid); + while( db_step(&parentsOf)==SQLITE_ROW ){ + bag_insert(&needToCheck, db_column_int(&parentsOf, 0)); + } + db_reset(&parentsOf); +} + +/* +** Do all pending leaf checks. +*/ +void leaf_do_pending_checks(void){ + int rid; + for(rid=bag_first(&needToCheck); rid; rid=bag_next(&needToCheck,rid)){ + leaf_check(rid); + } + bag_clear(&needToCheck); +} + +/* +** If check-in rid is an open-leaf and there exists another +** open leaf on the same branch, then return 1. +** +** If check-in rid is not an open leaf, or if it is the only open leaf +** on its branch, then return 0. +*/ +int leaf_ambiguity(int rid){ + int rc; /* Result */ + char zVal[30]; + if( !is_a_leaf(rid) ) return 0; + sqlite3_snprintf(sizeof(zVal), zVal, "%d", rid); + rc = db_exists( + "SELECT 1 FROM leaf" + " WHERE NOT %z" + " AND rid<>%d" + " AND (SELECT value FROM tagxref WHERE tagid=%d AND rid=leaf.rid)=" + " (SELECT value FROM tagxref WHERE tagid=%d AND rid=%d)" + " AND NOT %z", + leaf_is_closed_sql(zVal), rid, TAG_BRANCH, TAG_BRANCH, rid, + leaf_is_closed_sql("leaf.rid")); + return rc; +} + +/* +** If check-in rid is an open-leaf and there exists another open leaf +** on the same branch, then print a detailed warning showing all open +** leaves on that branch. 
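+**
+** The warning produced below looks roughly like this (illustrative
+** hashes and dates):
+**
+**   WARNING: multiple open leaf check-ins on trunk:
+**     (1) 2011-11-02 11:22:33 [abcd123456] (current)
+**     (2) 2011-10-30 08:09:10 [fedcba9876]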
+*/ +int leaf_ambiguity_warning(int rid, int currentCkout){ + char *zBr; + Stmt q; + int n = 0; + Blob msg; + if( leaf_ambiguity(rid)==0 ) return 0; + zBr = db_text(0, "SELECT value FROM tagxref WHERE tagid=%d AND rid=%d", + TAG_BRANCH, rid); + if( zBr==0 ) zBr = fossil_strdup("trunk"); + blob_init(&msg, 0, 0); + blob_appendf(&msg, "WARNING: multiple open leaf check-ins on %s:", zBr); + db_prepare(&q, + "SELECT" + " (SELECT uuid FROM blob WHERE rid=leaf.rid)," + " (SELECT datetime(mtime,toLocal()) FROM event WHERE objid=leaf.rid)," + " leaf.rid" + " FROM leaf" + " WHERE (SELECT value FROM tagxref WHERE tagid=%d AND rid=leaf.rid)=%Q" + " AND NOT %z" + " ORDER BY 2 DESC", + TAG_BRANCH, zBr, leaf_is_closed_sql("leaf.rid") + ); + while( db_step(&q)==SQLITE_ROW ){ + blob_appendf(&msg, "\n (%d) %s [%S]%s", + ++n, db_column_text(&q,1), db_column_text(&q,0), + db_column_int(&q,2)==currentCkout ? " (current)" : ""); + } + db_finalize(&q); + fossil_warning("%s",blob_str(&msg)); + blob_reset(&msg); + return 1; +} + +/* +** COMMAND: test-leaf-ambiguity +** +** Usage: %fossil NAME ... +** +** Resolve each name on the command line and call leaf_ambiguity_warning() +** for each resulting RID. +*/ +void leaf_ambiguity_warning_test(void){ + int i; + int rid; + int rc; + db_find_and_open_repository(0,0); + verify_all_options(); + for(i=2; i + * Copyright (c) 2010-2013, Pieter Noordhuis + * + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions are + * met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + * + * ------------------------------------------------------------------------ + * + * References: + * - http://invisible-island.net/xterm/ctlseqs/ctlseqs.html + * - http://www.3waylabs.com/nw/WWW/products/wizcon/vt220.html + * + * Todo list: + * - Filter bogus Ctrl+ combinations. + * - Win32 support + * + * Bloat: + * - History search like Ctrl+r in readline? + * + * List of escape sequences used by this program, we do everything just + * with three sequences. In order to be so cheap we may have some + * flickering effect with some slow terminal, but the lesser sequences + * the more compatible. 
+ * + * EL (Erase Line) + * Sequence: ESC [ n K + * Effect: if n is 0 or missing, clear from cursor to end of line + * Effect: if n is 1, clear from beginning of line to cursor + * Effect: if n is 2, clear entire line + * + * CUF (CUrsor Forward) + * Sequence: ESC [ n C + * Effect: moves cursor forward n chars + * + * CUB (CUrsor Backward) + * Sequence: ESC [ n D + * Effect: moves cursor backward n chars + * + * The following is used to get the terminal width if getting + * the width with the TIOCGWINSZ ioctl fails + * + * DSR (Device Status Report) + * Sequence: ESC [ 6 n + * Effect: reports the current cusor position as ESC [ n ; m R + * where n is the row and m is the column + * + * When multi line mode is enabled, we also use an additional escape + * sequence. However multi line editing is disabled by default. + * + * CUU (Cursor Up) + * Sequence: ESC [ n A + * Effect: moves cursor up of n chars. + * + * CUD (Cursor Down) + * Sequence: ESC [ n B + * Effect: moves cursor down of n chars. + * + * When linenoiseClearScreen() is called, two additional escape sequences + * are used in order to clear the screen and position the cursor at home + * position. + * + * CUP (Cursor position) + * Sequence: ESC [ H + * Effect: moves the cursor to upper left corner + * + * ED (Erase display) + * Sequence: ESC [ 2 J + * Effect: clear the whole screen + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "linenoise.h" + +#define LINENOISE_DEFAULT_HISTORY_MAX_LEN 100 +#define LINENOISE_MAX_LINE 4096 +static const char *unsupported_term[] = {"dumb","cons25","emacs",NULL}; +static linenoiseCompletionCallback *completionCallback = NULL; + +static struct termios orig_termios; /* In order to restore at exit.*/ +static int rawmode = 0; /* For atexit() function to check if restore is needed*/ +static int mlmode = 0; /* Multi line mode. Default is single line. */ +static int atexit_registered = 0; /* Register atexit just 1 time. */ +static int history_max_len = LINENOISE_DEFAULT_HISTORY_MAX_LEN; +static int history_len = 0; +static char **history = NULL; + +/* The linenoiseState structure represents the state during line editing. + * We pass this state to functions implementing specific editing + * functionalities. */ +struct linenoiseState { + int ifd; /* Terminal stdin file descriptor. */ + int ofd; /* Terminal stdout file descriptor. */ + char *buf; /* Edited line buffer. */ + size_t buflen; /* Edited line buffer size. */ + const char *prompt; /* Prompt to display. */ + size_t plen; /* Prompt length. */ + size_t pos; /* Current cursor position. */ + size_t oldpos; /* Previous refresh cursor position. */ + size_t len; /* Current edited line length. */ + size_t cols; /* Number of columns in terminal. */ + size_t maxrows; /* Maximum num of rows used so far (multiline mode) */ + int history_index; /* The history index we are currently editing. 
*/ +}; + +enum KEY_ACTION{ + KEY_NULL = 0, /* NULL */ + CTRL_A = 1, /* Ctrl+a */ + CTRL_B = 2, /* Ctrl-b */ + CTRL_C = 3, /* Ctrl-c */ + CTRL_D = 4, /* Ctrl-d */ + CTRL_E = 5, /* Ctrl-e */ + CTRL_F = 6, /* Ctrl-f */ + CTRL_H = 8, /* Ctrl-h */ + TAB = 9, /* Tab */ + CTRL_K = 11, /* Ctrl+k */ + CTRL_L = 12, /* Ctrl+l */ + ENTER = 13, /* Enter */ + CTRL_N = 14, /* Ctrl-n */ + CTRL_P = 16, /* Ctrl-p */ + CTRL_T = 20, /* Ctrl-t */ + CTRL_U = 21, /* Ctrl+u */ + CTRL_W = 23, /* Ctrl+w */ + ESC = 27, /* Escape */ + BACKSPACE = 127 /* Backspace */ +}; + +static void linenoiseAtExit(void); +int linenoiseHistoryAdd(const char *line); +static void refreshLine(struct linenoiseState *l); + +/* Debugging macro. */ +#if 0 +FILE *lndebug_fp = NULL; +#define lndebug(fmt, arg1) \ + do { \ + if (lndebug_fp == NULL) { \ + lndebug_fp = fopen("/tmp/lndebug.txt","a"); \ + fprintf(lndebug_fp, \ + "[%d %d %d] p: %d, rows: %d, rpos: %d, max: %d, oldmax: %d\n", \ + (int)l->len,(int)l->pos,(int)l->oldpos,plen,rows,rpos, \ + (int)l->maxrows,old_rows); \ + } \ + fprintf(lndebug_fp, ", " fmt, arg1); \ + fflush(lndebug_fp); \ + } while (0) +#else +#define lndebug(fmt, arg1) +#endif + +/* ======================= Low level terminal handling ====================== */ + +/* Set if to use or not the multi line mode. */ +void linenoiseSetMultiLine(int ml) { + mlmode = ml; +} + +/* Return true if the terminal name is in the list of terminals we know are + * not able to understand basic escape sequences. */ +static int isUnsupportedTerm(void) { + char *term = getenv("TERM"); + int j; + + if (term == NULL) return 0; + for (j = 0; unsupported_term[j]; j++) + if (!strcasecmp(term,unsupported_term[j])) return 1; + return 0; +} + +/* Raw mode */ +static int enableRawMode(int fd) { + struct termios raw; + + if (!isatty(STDIN_FILENO)) goto fatal; + if (!atexit_registered) { + atexit(linenoiseAtExit); + atexit_registered = 1; + } + if (tcgetattr(fd,&orig_termios) == -1) goto fatal; + + raw = orig_termios; /* modify the original mode */ + /* input modes: no break, no CR to NL, no parity check, no strip char, + * no start/stop output control. */ + raw.c_iflag &= ~(BRKINT | ICRNL | INPCK | ISTRIP | IXON); + /* output modes - disable post processing */ + raw.c_oflag &= ~(OPOST); + /* control modes - set 8 bit chars */ + raw.c_cflag |= (CS8); + /* local modes - choing off, canonical off, no extended functions, + * no signal chars (^Z,^C) */ + raw.c_lflag &= ~(ECHO | ICANON | IEXTEN | ISIG); + /* control chars - set return condition: min number of bytes and timer. + * We want read to return every single byte, without timeout. */ + raw.c_cc[VMIN] = 1; raw.c_cc[VTIME] = 0; /* 1 byte, no timer */ + + /* put terminal in raw mode after flushing */ + if (tcsetattr(fd,TCSAFLUSH,&raw) < 0) goto fatal; + rawmode = 1; + return 0; + +fatal: + errno = ENOTTY; + return -1; +} + +static void disableRawMode(int fd) { + /* Don't even check the return value as it's too late. */ + if (rawmode && tcsetattr(fd,TCSAFLUSH,&orig_termios) != -1) + rawmode = 0; +} + +/* Use the ESC [6n escape sequence to query the horizontal cursor position + * and return it. On error -1 is returned, on success the position of the + * cursor. 
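+ *
+ * For example (illustrative values): after writing "\x1b[6n" the
+ * terminal may reply with "\x1b[24;80R", i.e. row 24, column 80, in
+ * which case this function returns 80.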
*/ +static int getCursorPosition(int ifd, int ofd) { + char buf[32]; + int cols, rows; + unsigned int i = 0; + + /* Report cursor location */ + if (write(ofd, "\x1b[6n", 4) != 4) return -1; + + /* Read the response: ESC [ rows ; cols R */ + while (i < sizeof(buf)-1) { + if (read(ifd,buf+i,1) != 1) break; + if (buf[i] == 'R') break; + i++; + } + buf[i] = '\0'; + + /* Parse it. */ + if (buf[0] != ESC || buf[1] != '[') return -1; + if (sscanf(buf+2,"%d;%d",&rows,&cols) != 2) return -1; + return cols; +} + +/* Try to get the number of columns in the current terminal, or assume 80 + * if it fails. */ +static int getColumns(int ifd, int ofd) { +#if !defined(__sun__) + struct winsize ws; + + if (ioctl(1, TIOCGWINSZ, &ws) == -1 || ws.ws_col == 0) { + /* ioctl() failed. Try to query the terminal itself. */ + int start, cols; + + /* Get the initial position so we can restore it later. */ + start = getCursorPosition(ifd,ofd); + if (start == -1) goto failed; + + /* Go to right margin and get position. */ + if (write(ofd,"\x1b[999C",6) != 6) goto failed; + cols = getCursorPosition(ifd,ofd); + if (cols == -1) goto failed; + + /* Restore position. */ + if (cols > start) { + char seq[32]; + snprintf(seq,32,"\x1b[%dD",cols-start); + if (write(ofd,seq,strlen(seq)) == -1) { + /* Can't recover... */ + } + } + return cols; + } else { + return ws.ws_col; + } + +failed: +#endif + return 80; +} + +/* Clear the screen. Used to handle ctrl+l */ +void linenoiseClearScreen(void) { + if (write(STDOUT_FILENO,"\x1b[H\x1b[2J",7) <= 0) { + /* nothing to do, just to avoid warning. */ + } +} + +/* Beep, used for completion when there is nothing to complete or when all + * the choices were already shown. */ +static void linenoiseBeep(void) { + fprintf(stderr, "\x7"); + fflush(stderr); +} + +/* ============================== Completion ================================ */ + +/* Free a list of completion option populated by linenoiseAddCompletion(). */ +static void freeCompletions(linenoiseCompletions *lc) { + size_t i; + for (i = 0; i < lc->len; i++) + free(lc->cvec[i]); + if (lc->cvec != NULL) + free(lc->cvec); +} + +/* This is an helper function for linenoiseEdit() and is called when the + * user types the key in order to complete the string currently in the + * input. + * + * The state of the editing is encapsulated into the pointed linenoiseState + * structure as described in the structure definition. 
*/ +static int completeLine(struct linenoiseState *ls) { + linenoiseCompletions lc = { 0, NULL }; + int nread, nwritten; + char c = 0; + + completionCallback(ls->buf,&lc); + if (lc.len == 0) { + linenoiseBeep(); + } else { + size_t stop = 0, i = 0; + + while(!stop) { + /* Show completion or original buffer */ + if (i < lc.len) { + struct linenoiseState saved = *ls; + + ls->len = ls->pos = strlen(lc.cvec[i]); + ls->buf = lc.cvec[i]; + refreshLine(ls); + ls->len = saved.len; + ls->pos = saved.pos; + ls->buf = saved.buf; + } else { + refreshLine(ls); + } + + nread = read(ls->ifd,&c,1); + if (nread <= 0) { + freeCompletions(&lc); + return -1; + } + + switch(c) { + case 9: /* tab */ + i = (i+1) % (lc.len+1); + if (i == lc.len) linenoiseBeep(); + break; + case 27: /* escape */ + /* Re-show original buffer */ + if (i < lc.len) refreshLine(ls); + stop = 1; + break; + default: + /* Update buffer and return */ + if (i < lc.len) { + nwritten = snprintf(ls->buf,ls->buflen,"%s",lc.cvec[i]); + ls->len = ls->pos = nwritten; + } + stop = 1; + break; + } + } + } + + freeCompletions(&lc); + return c; /* Return last read character */ +} + +/* Register a callback function to be called for tab-completion. */ +void linenoiseSetCompletionCallback(linenoiseCompletionCallback *fn) { + completionCallback = fn; +} + +/* This function is used by the callback function registered by the user + * in order to add completion options given the input string when the + * user typed . See the example.c source code for a very easy to + * understand example. */ +void linenoiseAddCompletion(linenoiseCompletions *lc, const char *str) { + size_t len = strlen(str); + char *copy, **cvec; + + copy = malloc(len+1); + if (copy == NULL) return; + memcpy(copy,str,len+1); + cvec = realloc(lc->cvec,sizeof(char*)*(lc->len+1)); + if (cvec == NULL) { + free(copy); + return; + } + lc->cvec = cvec; + lc->cvec[lc->len++] = copy; +} + +/* =========================== Line editing ================================= */ + +/* We define a very simple "append buffer" structure, that is an heap + * allocated string where we can append to. This is useful in order to + * write all the escape sequences in a buffer and flush them to the standard + * output in a single call, to avoid flickering effects. */ +struct abuf { + char *b; + int len; +}; + +static void abInit(struct abuf *ab) { + ab->b = NULL; + ab->len = 0; +} + +static void abAppend(struct abuf *ab, const char *s, int len) { + char *new = realloc(ab->b,ab->len+len); + + if (new == NULL) return; + memcpy(new+ab->len,s,len); + ab->b = new; + ab->len += len; +} + +static void abFree(struct abuf *ab) { + free(ab->b); +} + +/* Single line low level line refresh. + * + * Rewrite the currently edited line accordingly to the buffer content, + * cursor position, and number of columns of the terminal. */ +static void refreshSingleLine(struct linenoiseState *l) { + char seq[64]; + size_t plen = strlen(l->prompt); + int fd = l->ofd; + char *buf = l->buf; + size_t len = l->len; + size_t pos = l->pos; + struct abuf ab; + + while((plen+pos) >= l->cols) { + buf++; + len--; + pos--; + } + while (plen+len > l->cols) { + len--; + } + + abInit(&ab); + /* Cursor to left edge */ + snprintf(seq,64,"\r"); + abAppend(&ab,seq,strlen(seq)); + /* Write the prompt and the current buffer content */ + abAppend(&ab,l->prompt,strlen(l->prompt)); + abAppend(&ab,buf,len); + /* Erase to right */ + snprintf(seq,64,"\x1b[0K"); + abAppend(&ab,seq,strlen(seq)); + /* Move cursor to original position. 
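By way of illustration (a sketch, not code from this changeset; the callback name and command names are placeholders), an application hooks into the completion machinery above by registering a callback and adding candidate strings:

    #include "linenoise.h"

    /* Offer "help" and "history" as completions whenever the current
    ** line starts with 'h' and the user presses TAB. */
    static void demo_completion(const char *buf, linenoiseCompletions *lc){
      if( buf[0]=='h' ){
        linenoiseAddCompletion(lc, "help");
        linenoiseAddCompletion(lc, "history");
      }
    }

    /* Registered once at startup:
    **   linenoiseSetCompletionCallback(demo_completion);  */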
*/ + snprintf(seq,64,"\r\x1b[%dC", (int)(pos+plen)); + abAppend(&ab,seq,strlen(seq)); + if (write(fd,ab.b,ab.len) == -1) {} /* Can't recover from write error. */ + abFree(&ab); +} + +/* Multi line low level line refresh. + * + * Rewrite the currently edited line accordingly to the buffer content, + * cursor position, and number of columns of the terminal. */ +static void refreshMultiLine(struct linenoiseState *l) { + char seq[64]; + int plen = strlen(l->prompt); + int rows = (plen+l->len+l->cols-1)/l->cols; /* rows used by current buf. */ + int rpos = (plen+l->oldpos+l->cols)/l->cols; /* cursor relative row. */ + int rpos2; /* rpos after refresh. */ + int col; /* colum position, zero-based. */ + int old_rows = l->maxrows; + int fd = l->ofd, j; + struct abuf ab; + + /* Update maxrows if needed. */ + if (rows > (int)l->maxrows) l->maxrows = rows; + + /* First step: clear all the lines used before. To do so start by + * going to the last row. */ + abInit(&ab); + if (old_rows-rpos > 0) { + lndebug("go down %d", old_rows-rpos); + snprintf(seq,64,"\x1b[%dB", old_rows-rpos); + abAppend(&ab,seq,strlen(seq)); + } + + /* Now for every row clear it, go up. */ + for (j = 0; j < old_rows-1; j++) { + lndebug("clear+up", 0); + snprintf(seq,64,"\r\x1b[0K\x1b[1A"); + abAppend(&ab,seq,strlen(seq)); + } + + /* Clean the top line. */ + lndebug("clear", 0); + snprintf(seq,64,"\r\x1b[0K"); + abAppend(&ab,seq,strlen(seq)); + + /* Write the prompt and the current buffer content */ + abAppend(&ab,l->prompt,strlen(l->prompt)); + abAppend(&ab,l->buf,l->len); + + /* If we are at the very end of the screen with our prompt, we need to + * emit a newline and move the prompt to the first column. */ + if (l->pos && + l->pos == l->len && + (l->pos+plen) % l->cols == 0) + { + lndebug("", 0); + abAppend(&ab,"\n",1); + snprintf(seq,64,"\r"); + abAppend(&ab,seq,strlen(seq)); + rows++; + if (rows > (int)l->maxrows) l->maxrows = rows; + } + + /* Move cursor to right position. */ + rpos2 = (plen+l->pos+l->cols)/l->cols; /* current cursor relative row. */ + lndebug("rpos2 %d", rpos2); + + /* Go up till we reach the expected positon. */ + if (rows-rpos2 > 0) { + lndebug("go-up %d", rows-rpos2); + snprintf(seq,64,"\x1b[%dA", rows-rpos2); + abAppend(&ab,seq,strlen(seq)); + } + + /* Set column. */ + col = (plen+(int)l->pos) % (int)l->cols; + lndebug("set col %d", 1+col); + if (col) + snprintf(seq,64,"\r\x1b[%dC", col); + else + snprintf(seq,64,"\r"); + abAppend(&ab,seq,strlen(seq)); + + lndebug("\n", 0); + l->oldpos = l->pos; + + if (write(fd,ab.b,ab.len) == -1) {} /* Can't recover from write error. */ + abFree(&ab); +} + +/* Calls the two low level functions refreshSingleLine() or + * refreshMultiLine() according to the selected mode. */ +static void refreshLine(struct linenoiseState *l) { + if (mlmode) + refreshMultiLine(l); + else + refreshSingleLine(l); +} + +/* Insert the character 'c' at cursor current position. + * + * On error writing to the terminal -1 is returned, otherwise 0. */ +int linenoiseEditInsert(struct linenoiseState *l, char c) { + if (l->len < l->buflen) { + if (l->len == l->pos) { + l->buf[l->pos] = c; + l->pos++; + l->len++; + l->buf[l->len] = '\0'; + if ((!mlmode && l->plen+l->len < l->cols) /* || mlmode */) { + /* Avoid a full update of the line in the + * trivial case. 
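To make the multi-line row arithmetic above concrete: with a 10-character prompt, 70 characters of buffer text and a 40-column terminal, rows = (10+70+40-1)/40 = 2, so the prompt plus buffer exactly fill two screen rows; with the cursor still at offset 0, the relative cursor row is rpos = (10+0+40)/40 = 1, i.e. the first of those rows.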
*/ + if (write(l->ofd,&c,1) == -1) return -1; + } else { + refreshLine(l); + } + } else { + memmove(l->buf+l->pos+1,l->buf+l->pos,l->len-l->pos); + l->buf[l->pos] = c; + l->len++; + l->pos++; + l->buf[l->len] = '\0'; + refreshLine(l); + } + } + return 0; +} + +/* Move cursor on the left. */ +void linenoiseEditMoveLeft(struct linenoiseState *l) { + if (l->pos > 0) { + l->pos--; + refreshLine(l); + } +} + +/* Move cursor on the right. */ +void linenoiseEditMoveRight(struct linenoiseState *l) { + if (l->pos != l->len) { + l->pos++; + refreshLine(l); + } +} + +/* Move cursor to the start of the line. */ +void linenoiseEditMoveHome(struct linenoiseState *l) { + if (l->pos != 0) { + l->pos = 0; + refreshLine(l); + } +} + +/* Move cursor to the end of the line. */ +void linenoiseEditMoveEnd(struct linenoiseState *l) { + if (l->pos != l->len) { + l->pos = l->len; + refreshLine(l); + } +} + +/* Substitute the currently edited line with the next or previous history + * entry as specified by 'dir'. */ +#define LINENOISE_HISTORY_NEXT 0 +#define LINENOISE_HISTORY_PREV 1 +void linenoiseEditHistoryNext(struct linenoiseState *l, int dir) { + if (history_len > 1) { + /* Update the current history entry before to + * overwrite it with the next one. */ + free(history[history_len - 1 - l->history_index]); + history[history_len - 1 - l->history_index] = strdup(l->buf); + /* Show the new entry */ + l->history_index += (dir == LINENOISE_HISTORY_PREV) ? 1 : -1; + if (l->history_index < 0) { + l->history_index = 0; + return; + } else if (l->history_index >= history_len) { + l->history_index = history_len-1; + return; + } + strncpy(l->buf,history[history_len - 1 - l->history_index],l->buflen); + l->buf[l->buflen-1] = '\0'; + l->len = l->pos = strlen(l->buf); + refreshLine(l); + } +} + +/* Delete the character at the right of the cursor without altering the cursor + * position. Basically this is what happens with the "Delete" keyboard key. */ +void linenoiseEditDelete(struct linenoiseState *l) { + if (l->len > 0 && l->pos < l->len) { + memmove(l->buf+l->pos,l->buf+l->pos+1,l->len-l->pos-1); + l->len--; + l->buf[l->len] = '\0'; + refreshLine(l); + } +} + +/* Backspace implementation. */ +void linenoiseEditBackspace(struct linenoiseState *l) { + if (l->pos > 0 && l->len > 0) { + memmove(l->buf+l->pos-1,l->buf+l->pos,l->len-l->pos); + l->pos--; + l->len--; + l->buf[l->len] = '\0'; + refreshLine(l); + } +} + +/* Delete the previosu word, maintaining the cursor at the start of the + * current word. */ +void linenoiseEditDeletePrevWord(struct linenoiseState *l) { + size_t old_pos = l->pos; + size_t diff; + + while (l->pos > 0 && l->buf[l->pos-1] == ' ') + l->pos--; + while (l->pos > 0 && l->buf[l->pos-1] != ' ') + l->pos--; + diff = old_pos - l->pos; + memmove(l->buf+l->pos,l->buf+old_pos,l->len-old_pos+1); + l->len -= diff; + refreshLine(l); +} + +/* This function is the core of the line editing capability of linenoise. + * It expects 'fd' to be already in "raw mode" so that every key pressed + * will be returned ASAP to read(). + * + * The resulting string is put into 'buf' when the user type enter, or + * when ctrl+d is typed. + * + * The function returns the length of the current buffer. */ +static int linenoiseEdit(int stdin_fd, int stdout_fd, char *buf, size_t buflen, const char *prompt) +{ + struct linenoiseState l; + + /* Populate the linenoise state that we pass to functions implementing + * specific editing functionalities. 
*/ + l.ifd = stdin_fd; + l.ofd = stdout_fd; + l.buf = buf; + l.buflen = buflen; + l.prompt = prompt; + l.plen = strlen(prompt); + l.oldpos = l.pos = 0; + l.len = 0; + l.cols = getColumns(stdin_fd, stdout_fd); + l.maxrows = 0; + l.history_index = 0; + + /* Buffer starts empty. */ + l.buf[0] = '\0'; + l.buflen--; /* Make sure there is always space for the nulterm */ + + /* The latest history entry is always our current buffer, that + * initially is just an empty string. */ + linenoiseHistoryAdd(""); + + if (write(l.ofd,prompt,l.plen) == -1) return -1; + while(1) { + char c; + int nread; + char seq[3]; + + nread = read(l.ifd,&c,1); + if (nread <= 0) return l.len; + + /* Only autocomplete when the callback is set. It returns < 0 when + * there was an error reading from fd. Otherwise it will return the + * character that should be handled next. */ + if (c == 9 && completionCallback != NULL) { + c = completeLine(&l); + /* Return on errors */ + if (c < 0) return l.len; + /* Read next character when 0 */ + if (c == 0) continue; + } + + switch(c) { + case ENTER: /* enter */ + history_len--; + free(history[history_len]); + if (mlmode) linenoiseEditMoveEnd(&l); + return (int)l.len; + case CTRL_C: /* ctrl-c */ + errno = EAGAIN; + return -1; + case BACKSPACE: /* backspace */ + case 8: /* ctrl-h */ + linenoiseEditBackspace(&l); + break; + case CTRL_D: /* ctrl-d, remove char at right of cursor, or if the + line is empty, act as end-of-file. */ + if (l.len > 0) { + linenoiseEditDelete(&l); + } else { + history_len--; + free(history[history_len]); + return -1; + } + break; + case CTRL_T: /* ctrl-t, swaps current character with previous. */ + if (l.pos > 0 && l.pos < l.len) { + int aux = buf[l.pos-1]; + buf[l.pos-1] = buf[l.pos]; + buf[l.pos] = aux; + if (l.pos != l.len-1) l.pos++; + refreshLine(&l); + } + break; + case CTRL_B: /* ctrl-b */ + linenoiseEditMoveLeft(&l); + break; + case CTRL_F: /* ctrl-f */ + linenoiseEditMoveRight(&l); + break; + case CTRL_P: /* ctrl-p */ + linenoiseEditHistoryNext(&l, LINENOISE_HISTORY_PREV); + break; + case CTRL_N: /* ctrl-n */ + linenoiseEditHistoryNext(&l, LINENOISE_HISTORY_NEXT); + break; + case ESC: /* escape sequence */ + /* Read the next two bytes representing the escape sequence. + * Use two calls to handle slow terminals returning the two + * chars at different times. */ + if (read(l.ifd,seq,1) == -1) break; + if (read(l.ifd,seq+1,1) == -1) break; + + /* ESC [ sequences. */ + if (seq[0] == '[') { + if (seq[1] >= '0' && seq[1] <= '9') { + /* Extended escape, read additional byte. */ + if (read(l.ifd,seq+2,1) == -1) break; + if (seq[2] == '~') { + switch(seq[1]) { + case '3': /* Delete key. */ + linenoiseEditDelete(&l); + break; + } + } + } else { + switch(seq[1]) { + case 'A': /* Up */ + linenoiseEditHistoryNext(&l, LINENOISE_HISTORY_PREV); + break; + case 'B': /* Down */ + linenoiseEditHistoryNext(&l, LINENOISE_HISTORY_NEXT); + break; + case 'C': /* Right */ + linenoiseEditMoveRight(&l); + break; + case 'D': /* Left */ + linenoiseEditMoveLeft(&l); + break; + case 'H': /* Home */ + linenoiseEditMoveHome(&l); + break; + case 'F': /* End*/ + linenoiseEditMoveEnd(&l); + break; + } + } + } + + /* ESC O sequences. */ + else if (seq[0] == 'O') { + switch(seq[1]) { + case 'H': /* Home */ + linenoiseEditMoveHome(&l); + break; + case 'F': /* End*/ + linenoiseEditMoveEnd(&l); + break; + } + } + break; + default: + if (linenoiseEditInsert(&l,c)) return -1; + break; + case CTRL_U: /* Ctrl+u, delete the whole line. 
*/ + buf[0] = '\0'; + l.pos = l.len = 0; + refreshLine(&l); + break; + case CTRL_K: /* Ctrl+k, delete from current to end of line. */ + buf[l.pos] = '\0'; + l.len = l.pos; + refreshLine(&l); + break; + case CTRL_A: /* Ctrl+a, go to the start of the line */ + linenoiseEditMoveHome(&l); + break; + case CTRL_E: /* ctrl+e, go to the end of the line */ + linenoiseEditMoveEnd(&l); + break; + case CTRL_L: /* ctrl+l, clear screen */ + linenoiseClearScreen(); + refreshLine(&l); + break; + case CTRL_W: /* ctrl+w, delete previous word */ + linenoiseEditDeletePrevWord(&l); + break; + } + } + return l.len; +} + +/* This special mode is used by linenoise in order to print scan codes + * on screen for debugging / development purposes. It is implemented + * by the linenoise_example program using the --keycodes option. */ +void linenoisePrintKeyCodes(void) { + char quit[4]; + + printf("Linenoise key codes debugging mode.\n" + "Press keys to see scan codes. Type 'quit' at any time to exit.\n"); + if (enableRawMode(STDIN_FILENO) == -1) return; + memset(quit,' ',4); + while(1) { + char c; + int nread; + + nread = read(STDIN_FILENO,&c,1); + if (nread <= 0) continue; + memmove(quit,quit+1,sizeof(quit)-1); /* shift string to left. */ + quit[sizeof(quit)-1] = c; /* Insert current char on the right. */ + if (memcmp(quit,"quit",sizeof(quit)) == 0) break; + + printf("'%c' %02x (%d) (type quit to exit)\n", + isprint((int)c) ? c : '?', (int)c, (int)c); + printf("\r"); /* Go left edge manually, we are in raw mode. */ + fflush(stdout); + } + disableRawMode(STDIN_FILENO); +} + +/* This function calls the line editing function linenoiseEdit() using + * the STDIN file descriptor set in raw mode. */ +static int linenoiseRaw(char *buf, size_t buflen, const char *prompt) { + int count; + + if (buflen == 0) { + errno = EINVAL; + return -1; + } + if (!isatty(STDIN_FILENO)) { + /* Not a tty: read from file / pipe. */ + if (fgets(buf, buflen, stdin) == NULL) return -1; + count = strlen(buf); + if (count && buf[count-1] == '\n') { + count--; + buf[count] = '\0'; + } + } else { + /* Interactive editing. */ + if (enableRawMode(STDIN_FILENO) == -1) return -1; + count = linenoiseEdit(STDIN_FILENO, STDOUT_FILENO, buf, buflen, prompt); + disableRawMode(STDIN_FILENO); + printf("\n"); + } + return count; +} + +/* The high level function that is the main API of the linenoise library. + * This function checks if the terminal has basic capabilities, just checking + * for a blacklist of stupid terminals, and later either calls the line + * editing function or uses dummy fgets() so that you will be able to type + * something even in the most desperate of the conditions. */ +char *linenoise(const char *prompt) { + char buf[LINENOISE_MAX_LINE]; + int count; + + if (isUnsupportedTerm()) { + size_t len; + + printf("%s",prompt); + fflush(stdout); + if (fgets(buf,LINENOISE_MAX_LINE,stdin) == NULL) return NULL; + len = strlen(buf); + while(len && (buf[len-1] == '\n' || buf[len-1] == '\r')) { + len--; + buf[len] = '\0'; + } + return strdup(buf); + } else { + count = linenoiseRaw(buf,LINENOISE_MAX_LINE,prompt); + if (count == -1) return NULL; + return strdup(buf); + } +} + +/* ================================ History ================================= */ + +/* Free the history, but does not reset it. Only used when we have to + * exit() to avoid memory leaks are reported by valgrind & co. 
*/ +static void freeHistory(void) { + if (history) { + int j; + + for (j = 0; j < history_len; j++) + free(history[j]); + free(history); + } +} + +/* At exit we'll try to fix the terminal to the initial conditions. */ +static void linenoiseAtExit(void) { + disableRawMode(STDIN_FILENO); + freeHistory(); +} + +/* This is the API call to add a new entry in the linenoise history. + * It uses a fixed array of char pointers that are shifted (memmoved) + * when the history max length is reached in order to remove the older + * entry and make room for the new one, so it is not exactly suitable for huge + * histories, but will work well for a few hundred of entries. + * + * Using a circular buffer is smarter, but a bit more complex to handle. */ +int linenoiseHistoryAdd(const char *line) { + char *linecopy; + + if (history_max_len == 0) return 0; + + /* Initialization on first call. */ + if (history == NULL) { + history = malloc(sizeof(char*)*history_max_len); + if (history == NULL) return 0; + memset(history,0,(sizeof(char*)*history_max_len)); + } + + /* Don't add duplicated lines. */ + if (history_len && !strcmp(history[history_len-1], line)) return 0; + + /* Add an heap allocated copy of the line in the history. + * If we reached the max length, remove the older line. */ + linecopy = strdup(line); + if (!linecopy) return 0; + if (history_len == history_max_len) { + free(history[0]); + memmove(history,history+1,sizeof(char*)*(history_max_len-1)); + history_len--; + } + history[history_len] = linecopy; + history_len++; + return 1; +} + +/* Set the maximum length for the history. This function can be called even + * if there is already some history, the function will make sure to retain + * just the latest 'len' elements if the new history length value is smaller + * than the amount of items already inside the history. */ +int linenoiseHistorySetMaxLen(int len) { + char **new; + + if (len < 1) return 0; + if (history) { + int tocopy = history_len; + + new = malloc(sizeof(char*)*len); + if (new == NULL) return 0; + + /* If we can't copy everything, free the elements we'll not use. */ + if (len < tocopy) { + int j; + + for (j = 0; j < tocopy-len; j++) free(history[j]); + tocopy = len; + } + memset(new,0,sizeof(char*)*len); + memcpy(new,history+(history_len-tocopy), sizeof(char*)*tocopy); + free(history); + history = new; + } + history_max_len = len; + if (history_len > history_max_len) + history_len = history_max_len; + return 1; +} + +/* Save the history in the specified file. On success 0 is returned + * otherwise -1 is returned. */ +int linenoiseHistorySave(const char *filename) { + FILE *fp = fopen(filename,"w"); + int j; + + if (fp == NULL) return -1; + for (j = 0; j < history_len; j++) + fprintf(fp,"%s\n",history[j]); + fclose(fp); + return 0; +} + +/* Load the history from the specified file. If the file does not exist + * zero is returned and no operation is performed. + * + * If the file exists and the operation succeeded 0 is returned, otherwise + * on error -1 is returned. 
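For orientation, the history API above is normally driven from a small read-eval loop along these lines (a sketch, not part of this changeset; the prompt string and the history file name are placeholders):

    #include <stdio.h>
    #include <stdlib.h>
    #include "linenoise.h"

    int main(void){
      char *zLine;
      linenoiseHistoryLoad("history.txt");      /* nothing happens if the file is absent */
      while( (zLine = linenoise("demo> "))!=0 ){
        if( zLine[0] ){
          printf("you typed: %s\n", zLine);
          linenoiseHistoryAdd(zLine);           /* remember the line...          */
          linenoiseHistorySave("history.txt");  /* ...and persist the history    */
        }
        free(zLine);   /* linenoise() returns memory obtained from strdup() */
      }
      return 0;
    }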
*/ +int linenoiseHistoryLoad(const char *filename) { + FILE *fp = fopen(filename,"r"); + char buf[LINENOISE_MAX_LINE]; + + if (fp == NULL) return -1; + + while (fgets(buf,LINENOISE_MAX_LINE,fp) != NULL) { + char *p; + + p = strchr(buf,'\r'); + if (!p) p = strchr(buf,'\n'); + if (p) *p = '\0'; + linenoiseHistoryAdd(buf); + } + fclose(fp); + return 0; +} ADDED src/linenoise.h Index: src/linenoise.h ================================================================== --- src/linenoise.h +++ src/linenoise.h @@ -0,0 +1,68 @@ +/* linenoise.h -- VERSION 1.0 + * + * Guerrilla line editing library against the idea that a line editing lib + * needs to be 20,000 lines of C code. + * + * See linenoise.c for more information. + * + * ------------------------------------------------------------------------ + * + * Copyright (c) 2010-2014, Salvatore Sanfilippo + * Copyright (c) 2010-2013, Pieter Noordhuis + * + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions are + * met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in the + * documentation and/or other materials provided with the distribution. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#ifndef __LINENOISE_H +#define __LINENOISE_H + +#ifdef __cplusplus +extern "C" { +#endif + +typedef struct linenoiseCompletions { + size_t len; + char **cvec; +} linenoiseCompletions; + +typedef void(linenoiseCompletionCallback)(const char *, linenoiseCompletions *); +void linenoiseSetCompletionCallback(linenoiseCompletionCallback *); +void linenoiseAddCompletion(linenoiseCompletions *, const char *); + +char *linenoise(const char *prompt); +int linenoiseHistoryAdd(const char *line); +int linenoiseHistorySetMaxLen(int len); +int linenoiseHistorySave(const char *filename); +int linenoiseHistoryLoad(const char *filename); +void linenoiseClearScreen(void); +void linenoiseSetMultiLine(int ml); +void linenoisePrintKeyCodes(void); + +#ifdef __cplusplus +} +#endif + +#endif /* __LINENOISE_H */ ADDED src/loadctrl.c Index: src/loadctrl.c ================================================================== --- src/loadctrl.c +++ src/loadctrl.c @@ -0,0 +1,65 @@ +/* +** Copyright (c) 2014 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) 
+ +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code to check the host load-average and abort +** CPU-intensive operations if the load-average is too high. +*/ +#include "config.h" +#include "loadctrl.h" +#include + +/* +** Return the load average for the host processor +*/ +double load_average(void){ +#if !defined(_WIN32) && !defined(FOSSIL_OMIT_LOAD_AVERAGE) + double a[3]; + if( getloadavg(a, 3)>0 ){ + return a[0]>=0.000001 ? a[0] : 0.000001; + } +#endif + return 0.0; +} + +/* +** COMMAND: test-loadavg +** %fossil test-loadavg +** +** Print the load average on the host machine. +*/ +void loadavg_test_cmd(void){ + fossil_print("load-average: %f\n", load_average()); +} + +/* +** Abort the current operation of the load average of the host computer +** is too high. +*/ +void load_control(void){ + double mxLoad = atof(db_get("max-loadavg", "0")); + if( mxLoad<=0.0 || mxLoad>=load_average() ) return; + + style_header("Server Overload"); + @

The server load is currently too high. + @ Please try again later.

+ @

Current load average: %f(load_average()).
+ @ Load average limit: %f(mxLoad)

+ style_footer(); + cgi_set_status(503,"Server Overload"); + cgi_reply(); + exit(0); +} Index: src/login.c ================================================================== --- src/login.c +++ src/login.c @@ -17,16 +17,20 @@ ** ** This file contains code for generating the login and logout screens. ** ** Notes: ** -** There are two special-case user-ids: "anonymous" and "nobody". +** There are four special-case user-ids: "anonymous", "nobody", +** "developer" and "reader". +** ** The capabilities of the nobody user are available to anyone, ** regardless of whether or not they are logged in. The capabilities ** of anonymous are only available after logging in, but the login ** screen displays the password for the anonymous login, so this -** should not prevent a human user from doing so. +** should not prevent a human user from doing so. The capabilities +** of developer and reader are inherited by any user that has the +** "v" and "u" capabilities, respectively. ** ** The nobody user has capabilities that you want spiders to have. ** The anonymous user has capabilities that you want people without ** logins to have. ** @@ -37,27 +41,62 @@ ** logs and downloading diffs of very version of the archive that ** has ever existed, and things like that. */ #include "config.h" #include "login.h" -#ifdef __MINGW32__ +#if defined(_WIN32) # include /* for Sleep */ -# define sleep Sleep /* windows does not have sleep, but Sleep */ +# if defined(__MINGW32__) || defined(_MSC_VER) +# define sleep Sleep /* windows does not have sleep, but Sleep */ +# endif #endif #include /* -** Return the name of the login cookie +** Return the login-group name. Or return 0 if this repository is +** not a member of a login-group. +*/ +const char *login_group_name(void){ + static const char *zGroup = 0; + static int once = 1; + if( once ){ + zGroup = db_get("login-group-name", 0); + once = 0; + } + return zGroup; +} + +/* +** Return a path appropriate for setting a cookie. +** +** The path is g.zTop for single-repo cookies. It is "/" for +** cookies of a login-group. +*/ +const char *login_cookie_path(void){ + if( login_group_name()==0 ){ + return g.zTop; + }else{ + return "/"; + } +} + +/* +** Return the name of the login cookie. +** +** The login cookie name is always of the form: fossil-XXXXXXXXXXXXXXXX +** where the Xs are the first 16 characters of the login-group-code or +** of the project-code if we are not a member of any login-group. */ -static char *login_cookie_name(void){ +char *login_cookie_name(void){ static char *zCookieName = 0; if( zCookieName==0 ){ - int n = strlen(g.zTop); - zCookieName = malloc( n*2+16 ); - /* 0123456789 12345 */ - strcpy(zCookieName, "fossil_login_"); - encode16((unsigned char*)g.zTop, (unsigned char*)&zCookieName[13], n); + zCookieName = db_text(0, + "SELECT 'fossil-' || substr(value,1,16)" + " FROM config" + " WHERE name IN ('project-code','login-group-code')" + " ORDER BY name /*sort*/" + ); } return zCookieName; } /* @@ -72,331 +111,840 @@ fossil_redirect_home(); } } /* -** The IP address of the client is stored as part of the anonymous -** login cookie for additional security. But some clients are behind -** firewalls that shift the IP address with each HTTP request. To -** allow such (broken) clients to log in, extract just a prefix of the -** IP address. +** The IP address of the client is stored as part of login cookies. +** But some clients are behind firewalls that shift the IP address +** with each HTTP request. 
To allow such (broken) clients to log in, +** extract just a prefix of the IP address. */ static char *ipPrefix(const char *zIP){ - int i, j; + int i, j; + static int ip_prefix_terms = -1; + if( ip_prefix_terms<0 ){ + ip_prefix_terms = db_get_int("ip-prefix-terms",2); + } + if( ip_prefix_terms==0 ) return mprintf("0"); for(i=j=0; zIP[i]; i++){ if( zIP[i]=='.' ){ j++; - if( j==2 ) break; + if( j==ip_prefix_terms ) break; } } return mprintf("%.*s", i, zIP); } - + +/* +** Return an abbreviated project code. The abbreviation is the first +** 16 characters of the project code. +** +** Memory is obtained from malloc. +*/ +static char *abbreviated_project_code(const char *zFullCode){ + return mprintf("%.16s", zFullCode); +} + /* ** Check to see if the anonymous login is valid. If it is valid, return ** the userid of the anonymous user. +** +** The zCS parameter is the "captcha seed" used for a specific +** anonymous login request. */ -static int isValidAnonymousLogin( +int login_is_valid_anonymous( const char *zUsername, /* The username. Must be "anonymous" */ - const char *zPassword /* The supplied password */ + const char *zPassword, /* The supplied password */ + const char *zCS /* The captcha seed value */ ){ - const char *zCS; /* The captcha seed value */ const char *zPw; /* The correct password shown in the captcha */ int uid; /* The user ID of anonymous */ if( zUsername==0 ) return 0; - if( zPassword==0 ) return 0; - if( strcmp(zUsername,"anonymous")!=0 ) return 0; - zCS = P("cs"); /* The "cs" parameter is the "captcha seed" */ - if( zCS==0 ) return 0; + else if( zPassword==0 ) return 0; + else if( zCS==0 ) return 0; + else if( fossil_strcmp(zUsername,"anonymous")!=0 ) return 0; zPw = captcha_decode((unsigned int)atoi(zCS)); - if( strcasecmp(zPw, zPassword)!=0 ) return 0; + if( fossil_stricmp(zPw, zPassword)!=0 ) return 0; uid = db_int(0, "SELECT uid FROM user WHERE login='anonymous'" " AND length(pw)>0 AND length(cap)>0"); return uid; } /* -** WEBPAGE: login -** WEBPAGE: logout -** WEBPAGE: my +** Make sure the accesslog table exists. Create it if it does not +*/ +void create_accesslog_table(void){ + db_multi_exec( + "CREATE TABLE IF NOT EXISTS %s.accesslog(" + " uname TEXT," + " ipaddr TEXT," + " success BOOLEAN," + " mtime TIMESTAMP" + ");", db_name("repository") + ); +} + +/* +** Make a record of a login attempt, if login record keeping is enabled. +*/ +static void record_login_attempt( + const char *zUsername, /* Name of user logging in */ + const char *zIpAddr, /* IP address from which they logged in */ + int bSuccess /* True if the attempt was a success */ +){ + if( !db_get_boolean("access-log", 0) ) return; + create_accesslog_table(); + db_multi_exec( + "INSERT INTO accesslog(uname,ipaddr,success,mtime)" + "VALUES(%Q,%Q,%d,julianday('now'));", + zUsername, zIpAddr, bSuccess + ); +} + +/* +** Searches for the user ID matching the given name and password. +** On success it returns a positive value. On error it returns 0. +** On serious (DB-level) error it will probably exit. +** +** zPassword may be either the plain-text form or the encrypted +** form of the user's password. 
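Concretely, with the default ip-prefix-terms value of 2, a client address such as 10.11.12.13 is stored and compared as the prefix "10.11"; setting ip-prefix-terms to 0 makes ipPrefix() return the constant "0", which effectively removes the IP address from the cookie check altogether.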
+*/ +int login_search_uid(const char *zUsername, const char *zPasswd){ + char *zSha1Pw = sha1_shared_secret(zPasswd, zUsername, 0); + int const uid = + db_int(0, + "SELECT uid FROM user" + " WHERE login=%Q" + " AND length(cap)>0 AND length(pw)>0" + " AND login NOT IN ('anonymous','nobody','developer','reader')" + " AND (pw=%Q OR (length(pw)<>40 AND pw=%Q))" + " AND (info NOT LIKE '%%expires 20%%'" + " OR substr(info,instr(lower(info),'expires')+8,10)>datetime('now'))", + zUsername, zSha1Pw, zPasswd + ); + free(zSha1Pw); + return uid; +} + +/* +** Generates a login cookie value for a non-anonymous user. +** +** The zHash parameter must be a random value which must be +** subsequently stored in user.cookie for later validation. +** +** The returned memory should be free()d after use. +*/ +char *login_gen_user_cookie_value(const char *zUsername, const char *zHash){ + char *zProjCode = db_get("project-code",NULL); + char *zCode = abbreviated_project_code(zProjCode); + free(zProjCode); + assert((zUsername && *zUsername) && "Invalid user data."); + return mprintf("%s/%z/%s", zHash, zCode, zUsername); +} + +/* +** Generates a login cookie for NON-ANONYMOUS users. Note that this +** function "could" figure out the uid by itself but it currently +** doesn't because the code which calls this already has the uid. +** +** This function also updates the user.cookie, user.ipaddr, +** and user.cexpire fields for the given user. +** +** If zDest is not NULL then the generated cookie is copied to +** *zDdest and ownership is transfered to the caller (who should +** eventually pass it to free()). +*/ +void login_set_user_cookie( + const char *zUsername, /* User's name */ + int uid, /* User's ID */ + char **zDest /* Optional: store generated cookie value. */ +){ + const char *zCookieName = login_cookie_name(); + const char *zExpire = db_get("cookie-expire","8766"); + int expires = atoi(zExpire)*3600; + char *zHash; + char *zCookie; + const char *zIpAddr = PD("REMOTE_ADDR","nil"); /* IP address of user */ + char *zRemoteAddr = ipPrefix(zIpAddr); /* Abbreviated IP address */ + + assert((zUsername && *zUsername) && (uid > 0) && "Invalid user data."); + zHash = db_text(0, + "SELECT cookie FROM user" + " WHERE uid=%d" + " AND ipaddr=%Q" + " AND cexpire>julianday('now')" + " AND length(cookie)>30", + uid, zRemoteAddr); + if( zHash==0 ) zHash = db_text(0, "SELECT hex(randomblob(25))"); + zCookie = login_gen_user_cookie_value(zUsername, zHash); + cgi_set_cookie(zCookieName, zCookie, login_cookie_path(), expires); + record_login_attempt(zUsername, zIpAddr, 1); + db_multi_exec( + "UPDATE user SET cookie=%Q, ipaddr=%Q, " + " cexpire=julianday('now')+%d/86400.0 WHERE uid=%d", + zHash, zRemoteAddr, expires, uid + ); + free(zRemoteAddr); + free(zHash); + if( zDest ){ + *zDest = zCookie; + }else{ + free(zCookie); + } +} + +/* Sets a cookie for an anonymous user login, which looks like this: +** +** HASH/TIME/anonymous +** +** Where HASH is the sha1sum of TIME/IPADDR/SECRET, in which IPADDR +** is the abbreviated IP address and SECRET is captcha-secret. +** +** If either zIpAddr or zRemoteAddr are NULL then REMOTE_ADDR +** is used. +** +** If zCookieDest is not NULL then the generated cookie is assigned to +** *zCookieDest and the caller must eventually free() it. 
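Purely as an illustration of the layout produced by login_gen_user_cookie_value() (the user name and project code here are invented), a login cookie for user "alice" in a project whose code begins with 0123456789abcdef would look like HASH/0123456789abcdef/alice, where HASH is the random hex string that is also stored in user.cookie for later validation.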
+*/ +void login_set_anon_cookie(const char *zIpAddr, char **zCookieDest ){ + const char *zNow; /* Current time (julian day number) */ + char *zCookie; /* The login cookie */ + const char *zCookieName; /* Name of the login cookie */ + Blob b; /* Blob used during cookie construction */ + char *zRemoteAddr; /* Abbreviated IP address */ + if(!zIpAddr){ + zIpAddr = PD("REMOTE_ADDR","nil"); + } + zRemoteAddr = ipPrefix(zIpAddr); + zCookieName = login_cookie_name(); + zNow = db_text("0", "SELECT julianday('now')"); + assert( zCookieName && zRemoteAddr && zIpAddr && zNow ); + blob_init(&b, zNow, -1); + blob_appendf(&b, "/%s/%s", zRemoteAddr, db_get("captcha-secret","")); + sha1sum_blob(&b, &b); + zCookie = mprintf("%s/%s/anonymous", blob_buffer(&b), zNow); + blob_reset(&b); + cgi_set_cookie(zCookieName, zCookie, login_cookie_path(), 6*3600); + if( zCookieDest ){ + *zCookieDest = zCookie; + }else{ + free(zCookie); + } + +} + +/* +** "Unsets" the login cookie (insofar as cookies can be unset) and +** clears the current user's (g.userUid) login information from the +** user table. Sets: user.cookie, user.ipaddr, user.cexpire. +** +** We could/should arguably clear out g.userUid and g.perm here, but +** we don't currently do not. +** +** This is a no-op if g.userUid is 0. +*/ +void login_clear_login_data(){ + if(!g.userUid){ + return; + }else{ + const char *cookie = login_cookie_name(); + /* To logout, change the cookie value to an empty string */ + cgi_set_cookie(cookie, "", + login_cookie_path(), -86400); + db_multi_exec("UPDATE user SET cookie=NULL, ipaddr=NULL, " + " cexpire=0 WHERE uid=%d" + " AND login NOT IN ('anonymous','nobody'," + " 'developer','reader')", g.userUid); + cgi_replace_parameter(cookie, NULL); + cgi_replace_parameter("anon", NULL); + } +} + +/* +** Return true if the prefix of zStr matches zPattern. Return false if +** they are different. +** +** A lowercase character in zPattern will match either upper or lower +** case in zStr. But an uppercase in zPattern will only match an +** uppercase in zStr. +*/ +static int prefix_match(const char *zPattern, const char *zStr){ + int i; + char c; + for(i=0; (c = zPattern[i])!=0; i++){ + if( zStr[i]!=c && fossil_tolower(zStr[i])!=c ) return 0; + } + return 1; +} + +/* +** Look at the HTTP_USER_AGENT parameter and try to determine if the user agent +** is a manually operated browser or a bot. When in doubt, assume a bot. +** Return true if we believe the agent is a real person. 
+*/ +static int isHuman(const char *zAgent){ + int i; + if( zAgent==0 ) return 0; /* If no UserAgent, then probably a bot */ + for(i=0; zAgent[i]; i++){ + if( prefix_match("bot", zAgent+i) ) return 0; + if( prefix_match("spider", zAgent+i) ) return 0; + if( prefix_match("crawl", zAgent+i) ) return 0; + /* If a URI appears in the User-Agent, it is probably a bot */ + if( strncmp("http", zAgent+i,4)==0 ) return 0; + } + if( strncmp(zAgent, "Mozilla/", 8)==0 ){ + if( atoi(&zAgent[8])<4 ) return 0; /* Many bots advertise as Mozilla/3 */ + if( sqlite3_strglob("*Firefox/[1-9]*", zAgent)==0 ) return 1; + if( sqlite3_strglob("*Chrome/[1-9]*", zAgent)==0 ) return 1; + if( sqlite3_strglob("*(compatible;?MSIE?[1789]*", zAgent)==0 ) return 1; + if( sqlite3_strglob("*Trident/[1-9]*;?rv:[1-9]*", zAgent)==0 ) return 1; /* IE11+ */ + if( sqlite3_strglob("*AppleWebKit/[1-9]*(KHTML*", zAgent)==0 ) return 1; + return 0; + } + if( strncmp(zAgent, "Opera/", 6)==0 ) return 1; + if( strncmp(zAgent, "Safari/", 7)==0 ) return 1; + if( strncmp(zAgent, "Lynx/", 5)==0 ) return 1; + if( strncmp(zAgent, "NetSurf/", 8)==0 ) return 1; + return 0; +} + +/* +** COMMAND: test-ishuman ** -** Generate the login page. -** +** Read lines of text from standard input. Interpret each line of text +** as a User-Agent string from an HTTP header. Label each line as HUMAN +** or ROBOT. +*/ +void test_ishuman(void){ + char zLine[3000]; + while( fgets(zLine, sizeof(zLine), stdin) ){ + fossil_print("%s %s", isHuman(zLine) ? "HUMAN" : "ROBOT", zLine); + } +} + +/* +** SQL function for constant time comparison of two values. +** Sets result to 0 if two values are equal. +*/ +static void constant_time_cmp_function( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const unsigned char *buf1, *buf2; + int len, i; + unsigned char rc = 0; + + assert( argc==2 ); + len = sqlite3_value_bytes(argv[0]); + if( len==0 || len!=sqlite3_value_bytes(argv[1]) ){ + rc = 1; + }else{ + buf1 = sqlite3_value_text(argv[0]); + buf2 = sqlite3_value_text(argv[1]); + for( i=0; i + zErrMsg = + @
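The new test-ishuman command makes it easy to try these heuristics from a shell (a usage sketch; the User-Agent string below is only an example):

    echo 'Mozilla/5.0 (X11; Linux) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0' | fossil test-ishuman

That line should be labeled HUMAN, while agents whose strings contain "bot", "spider", "crawl" or an embedded URL are labeled ROBOT.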

@ You entered an incorrect old password while attempting to change @ your password. Your password is unchanged. - @

+ @

; - }else if( strcmp(zNew1,zNew2)!=0 ){ - zErrMsg = - @

+ }else if( fossil_strcmp(zNew1,zNew2)!=0 ){ + zErrMsg = + @

@ The two copies of your new passwords do not match. @ Your password is unchanged. - @

+ @

; }else{ - char *zNewPw = sha1_shared_secret(zNew1, g.zLogin); + char *zNewPw = sha1_shared_secret(zNew1, g.zLogin, 0); + char *zChngPw; + char *zErr; db_multi_exec( "UPDATE user SET pw=%Q WHERE uid=%d", zNewPw, g.userUid ); - redirect_to_g(); - return; + fossil_free(zNewPw); + zChngPw = mprintf( + "UPDATE user" + " SET pw=shared_secret(%Q,%Q," + " (SELECT value FROM config WHERE name='project-code'))" + " WHERE login=%Q", + zNew1, g.zLogin, g.zLogin + ); + if( login_group_sql(zChngPw, "

", "

\n", &zErr) ){ + zErrMsg = mprintf("%s", zErr); + fossil_free(zErr); + }else{ + redirect_to_g(); + return; + } } } - uid = isValidAnonymousLogin(zUsername, zPasswd); + zIpAddr = PD("REMOTE_ADDR","nil"); /* Complete IP address for logging */ + zReferer = P("HTTP_REFERER"); + uid = login_is_valid_anonymous(zUsername, zPasswd, P("cs")); if( uid>0 ){ - char *zNow; /* Current time (julian day number) */ - const char *zIpAddr; /* IP address of requestor */ - char *zCookie; /* The login cookie */ - const char *zCookieName; /* Name of the login cookie */ - Blob b; /* Blob used during cookie construction */ - - zIpAddr = PD("REMOTE_ADDR","nil"); - zCookieName = login_cookie_name(); - zNow = db_text("0", "SELECT julianday('now')"); - blob_init(&b, zNow, -1); - blob_appendf(&b, "/%z/%s", ipPrefix(zIpAddr), db_get("captcha-secret","")); - sha1sum_blob(&b, &b); - zCookie = sqlite3_mprintf("anon/%s/%s", zNow, blob_buffer(&b)); - blob_reset(&b); - free(zNow); - cgi_set_cookie(zCookieName, zCookie, 0, 6*3600); + login_set_anon_cookie(zIpAddr, NULL); + record_login_attempt("anonymous", zIpAddr, 1); redirect_to_g(); } if( zUsername!=0 && zPasswd!=0 && zPasswd[0]!=0 ){ - zSha1Pw = sha1_shared_secret(zPasswd, zUsername); - uid = db_int(0, - "SELECT uid FROM user" - " WHERE login=%Q" - " AND login NOT IN ('anonymous','nobody','developer','reader')" - " AND (pw=%Q OR pw=%Q)", - zUsername, zPasswd, zSha1Pw - ); + /* Attempting to log in as a user other than anonymous. + */ + uid = login_search_uid(zUsername, zPasswd); if( uid<=0 ){ sleep(1); - zErrMsg = - @

- @ You entered an unknown user or an incorrect password. - @

- ; - }else{ - char *zCookie; - const char *zCookieName = login_cookie_name(); - const char *zExpire = db_get("cookie-expire","8766"); - int expires = atoi(zExpire)*3600; - const char *zIpAddr = PD("REMOTE_ADDR","nil"); - - zCookie = db_text(0, "SELECT '%d/' || hex(randomblob(25))", uid); - cgi_set_cookie(zCookieName, zCookie, 0, expires); - db_multi_exec( - "UPDATE user SET cookie=%Q, ipaddr=%Q, " - " cexpire=julianday('now')+%d/86400.0 WHERE uid=%d", - zCookie, zIpAddr, expires, uid - ); + zErrMsg = + @

+ @ You entered an unknown user or an incorrect password. + @

+ ; + record_login_attempt(zUsername, zIpAddr, 0); + }else{ + /* Non-anonymous login is successful. Set a cookie of the form: + ** + ** HASH/PROJECT/LOGIN + ** + ** where HASH is a random hex number, PROJECT is either project + ** code prefix, and LOGIN is the user name. + */ + login_set_user_cookie(zUsername, uid, NULL); redirect_to_g(); } } style_header("Login/Logout"); + style_adunit_config(ADUNIT_OFF); @ %s(zErrMsg) - @
- if( P("g") ){ - @ + if( zGoto ){ + char *zAbbrev = fossil_strdup(zGoto); + int i; + for(i=0; zAbbrev[i] && zAbbrev[i]!='?'; i++){} + zAbbrev[i] = 0; + if( g.zLogin ){ + @

Use a different login with greater privilege than %h(g.zLogin) + @ to access %h(zAbbrev). + }else if( anonFlag ){ + @

Login as anonymous or any named user + @ to access page %h(zAbbrev). + }else{ + @

Login as a named user to access page %h(zAbbrev). + } + } + form_begin(0, "%R/login"); + if( zGoto ){ + @ + }else if( zReferer && strncmp(g.zBaseURL, zReferer, strlen(g.zBaseURL))==0 ){ + @ + } + if( anonFlag ){ + @ + } + if( g.zLogin ){ + @

Currently logged in as %h(g.zLogin). + @

+ @
+ @

Change user: } - @ + @
@ - @ + @ if( anonFlag ){ - @ + @ }else{ - @ + @ } @ @ - @ - @ + @ + @ @ - if( g.zLogin==0 ){ + if( g.zLogin==0 && (anonFlag || zGoto==0) ){ zAnonPw = db_text(0, "SELECT pw FROM user" " WHERE login='anonymous'" " AND cap!=''"); } @ @ - @ + @ @ @ - @ - if( g.zLogin==0 ){ - @

Enter - }else{ - @

You are currently logged in as %h(g.zLogin)

- @

To change your login to a different user, enter - } - @ your user-id and password at the left and press the - @ "Login" button. Your user name will be stored in a browser cookie. - @ You must configure your web browser to accept cookies in order for - @ the login to take.

+ @ + @

Pressing the Login button grants permission to store a cookie.

+ if( db_get_boolean("self-register", 0) ){ + @

If you do not have an account, you can + @ create one. + } if( zAnonPw ){ unsigned int uSeed = captcha_seed(); - char const *zDecoded = captcha_decode(uSeed); + const char *zDecoded = captcha_decode(uSeed); int bAutoCaptcha = db_get_boolean("auto-captcha", 0); char *zCaptcha = captcha_render(zDecoded); - @ - @

Visitors may enter anonymous as the user-ID with + @

+ @ Visitors may enter anonymous as the user-ID with @ the 8-character hexadecimal password shown below:

- @
\n"); + } +} + +static void html_table_row( + struct Blob *ob, + struct Blob *cells, + int flags, + void *opaque +){ + BLOB_APPEND_LITERAL(ob, " \n"); + BLOB_APPEND_BLOB(ob, cells); + BLOB_APPEND_LITERAL(ob, " \n"); +} + + + +/* HTML span tags */ + +static int html_raw_span(struct Blob *ob, struct Blob *text, void *opaque){ + /* If the document begins with a

markup, take that as the header. */ + BLOB_APPEND_BLOB(ob, text); + return 1; +} + +static int html_autolink( + struct Blob *ob, + struct Blob *link, + enum mkd_autolink type, + void *opaque +){ + if( !link || blob_size(link)<=0 ) return 0; + BLOB_APPEND_LITERAL(ob, ""); + if( type==MKDA_EXPLICIT_EMAIL && blob_size(link)>7 ){ + /* remove "mailto:" from displayed text */ + html_escape(ob, blob_buffer(link)+7, blob_size(link)-7); + }else{ + html_escape(ob, blob_buffer(link), blob_size(link)); + } + BLOB_APPEND_LITERAL(ob, ""); + return 1; +} + +static int html_code_span(struct Blob *ob, struct Blob *text, void *opaque){ + BLOB_APPEND_LITERAL(ob, ""); + html_escape(ob, blob_buffer(text), blob_size(text)); + BLOB_APPEND_LITERAL(ob, ""); + return 1; +} + +static int html_double_emphasis( + struct Blob *ob, + struct Blob *text, + char c, + void *opaque +){ + BLOB_APPEND_LITERAL(ob, ""); + BLOB_APPEND_BLOB(ob, text); + BLOB_APPEND_LITERAL(ob, ""); + return 1; +} + +static int html_emphasis( + struct Blob *ob, + struct Blob *text, + char c, + void *opaque +){ + BLOB_APPEND_LITERAL(ob, ""); + BLOB_APPEND_BLOB(ob, text); + BLOB_APPEND_LITERAL(ob, ""); + return 1; +} + +static int html_image( + struct Blob *ob, + struct Blob *link, + struct Blob *title, + struct Blob *alt, + void *opaque +){ + BLOB_APPEND_LITERAL(ob, "\"");0 ){ + BLOB_APPEND_LITERAL(ob, "\" title=\""); + html_escape(ob, blob_buffer(title), blob_size(title)); + } + BLOB_APPEND_LITERAL(ob, "\" />"); + return 1; +} + +static int html_line_break(struct Blob *ob, void *opaque){ + BLOB_APPEND_LITERAL(ob, "
\n"); + return 1; +} + +static int html_link( + struct Blob *ob, + struct Blob *link, + struct Blob *title, + struct Blob *content, + void *opaque +){ + BLOB_APPEND_LITERAL(ob, "0 ){ + BLOB_APPEND_LITERAL(ob, "\" title=\""); + html_escape(ob, blob_buffer(title), blob_size(title)); + } + BLOB_APPEND_LITERAL(ob, "\">"); + BLOB_APPEND_BLOB(ob, content); + BLOB_APPEND_LITERAL(ob, ""); + return 1; +} + +static int html_triple_emphasis( + struct Blob *ob, + struct Blob *text, + char c, + void *opaque +){ + BLOB_APPEND_LITERAL(ob, ""); + BLOB_APPEND_BLOB(ob, text); + BLOB_APPEND_LITERAL(ob, ""); + return 1; +} + + +static void html_normal_text(struct Blob *ob, struct Blob *text, void *opaque){ + html_escape(ob, blob_buffer(text), blob_size(text)); +} + +/* +** Convert markdown into HTML. +** +** The document title is placed in output_title if not NULL. Or if +** output_title is NULL, the document title appears in the body. +*/ +void markdown_to_html( + struct Blob *input_markdown, /* Markdown content to be rendered */ + struct Blob *output_title, /* Put title here. May be NULL */ + struct Blob *output_body /* Put document body here. */ +){ + struct mkd_renderer html_renderer = { + /* prolog and epilog */ + html_prolog, + html_epilog, + + /* block level elements */ + html_blockcode, + html_blockquote, + html_raw_block, + html_header, + html_hrule, + html_list, + html_list_item, + html_paragraph, + html_table, + html_table_cell, + html_table_row, + + /* span level elements */ + html_autolink, + html_code_span, + html_double_emphasis, + html_emphasis, + html_image, + html_line_break, + html_link, + html_raw_span, + html_triple_emphasis, + + /* low level elements */ + 0, /* entities are copied verbatim */ + html_normal_text, + + /* misc. parameters */ + 64, /* maximum stack */ + "*_", /* emphasis characters */ + 0 /* opaque data */ + }; + html_renderer.opaque = output_title; + if( output_title ) blob_reset(output_title); + blob_reset(output_body); + markdown(output_body, input_markdown, &html_renderer); +} Index: src/md5.c ================================================================== --- src/md5.c +++ src/md5.c @@ -16,10 +16,11 @@ * To compute the message digest of a chunk of bytes, declare an * MD5Context structure, pass it to MD5Init, call MD5Update as * needed on buffers full of bytes, and then call MD5Final, which * will fill a supplied 16-byte array with the digest. */ +#include "config.h" #include #include #include #include "md5.h" @@ -41,12 +42,16 @@ uint32 bits[2]; unsigned char in[64]; }; typedef struct Context MD5Context; +#if defined(__i386__) || defined(__x86_64__) || defined(_WIN32) +# define byteReverse(A,B) +#else /* - * Note: this code is harmless on little-endian machines. + * Convert an array of integers to little-endian. + * Note: this code is a no-op on little-endian machines. */ static void byteReverse (unsigned char *buf, unsigned longs){ uint32 t; do { t = (uint32)((unsigned)buf[3]<<8 | buf[2]) << 16 | @@ -53,10 +58,12 @@ ((unsigned)buf[1]<<8 | buf[0]); *(uint32 *)buf = t; buf += 4; } while (--longs); } +#endif + /* The four core functions - F1 is optimized somewhat */ /* #define F1(x, y, z) (x & y | ~x & z) */ #define F1(x, y, z) (z ^ (x & (y ^ z))) #define F2(x, y, z) F1(z, x, y) @@ -157,11 +164,11 @@ /* * Start MD5 accumulation. Set bit count to 0 and buffer to mysterious * initialization constants. 
*/ static void MD5Init(MD5Context *ctx){ - ctx->isInit = 1; + ctx->isInit = 1; ctx->buf[0] = 0x67452301; ctx->buf[1] = 0xefcdab89; ctx->buf[2] = 0x98badcfe; ctx->buf[3] = 0x10325476; ctx->bits[0] = 0; @@ -170,11 +177,11 @@ /* * Update context to reflect the concatenation of another buffer full * of bytes. */ -static +static void MD5Update(MD5Context *pCtx, const unsigned char *buf, unsigned int len){ struct Context *ctx = (struct Context *)pCtx; uint32 t; /* Update bitcount */ @@ -217,11 +224,11 @@ memcpy(ctx->in, buf, len); } /* - * Final wrapup - pad to 64-byte boundary with the bit pattern + * Final wrapup - pad to 64-byte boundary with the bit pattern * 1 0* (64-bit count of bits processed, MSB-first) */ static void MD5Final(unsigned char digest[16], MD5Context *pCtx){ struct Context *ctx = (struct Context *)pCtx; unsigned count; @@ -252,27 +259,26 @@ memset(p, 0, count-8); } byteReverse(ctx->in, 14); /* Append length in bits and transform */ - ((uint32 *)ctx->in)[ 14 ] = ctx->bits[0]; - ((uint32 *)ctx->in)[ 15 ] = ctx->bits[1]; + memcpy(&ctx->in[14*sizeof(uint32)], ctx->bits, 2*sizeof(uint32)); MD5Transform(ctx->buf, (uint32 *)ctx->in); byteReverse((unsigned char *)ctx->buf, 4); memcpy(digest, ctx->buf, 16); - memset(ctx, 0, sizeof(ctx)); /* In case it's sensitive */ + memset(ctx, 0, sizeof(*ctx)); /* In case it's sensitive */ } /* ** Convert a digest into base-16. digest should be declared as ** "unsigned char digest[16]" in the calling function. The MD5 ** digest is stored in the first 16 bytes. zBuf should ** be "char zBuf[33]". */ static void DigestToBase16(unsigned char *digest, char *zBuf){ - static char const zEncode[] = "0123456789abcdef"; + static const char zEncode[] = "0123456789abcdef"; int i, j; for(j=i=0; i<16; i++){ int a = digest[i]; zBuf[j++] = zEncode[(a>>4)&0xf]; @@ -314,14 +320,34 @@ ** Add the content of a blob to the incremental MD5 checksum. */ void md5sum_step_blob(Blob *p){ md5sum_step_text(blob_buffer(p), blob_size(p)); } + +/* +** For trouble-shooting only: +** +** Report the current state of the incremental checksum. +*/ +const char *md5sum_current_state(void){ + unsigned int cksum = 0; + unsigned int *pFirst, *pLast; + static char zResult[12]; + + pFirst = (unsigned int*)&incrCtx; + pLast = (unsigned int*)((&incrCtx)+1); + while( pFirst +/* +** Print information about a particular check-in. +*/ +void print_checkin_description(int rid, int indent, const char *zLabel){ + Stmt q; + db_prepare(&q, + "SELECT datetime(mtime,toLocal())," + " coalesce(euser,user), coalesce(ecomment,comment)," + " (SELECT uuid FROM blob WHERE rid=%d)," + " (SELECT group_concat(substr(tagname,5), ', ') FROM tag, tagxref" + " WHERE tagname GLOB 'sym-*' AND tag.tagid=tagxref.tagid" + " AND tagxref.rid=%d AND tagxref.tagtype>0)" + " FROM event WHERE objid=%d", rid, rid, rid); + if( db_step(&q)==SQLITE_ROW ){ + const char *zTagList = db_column_text(&q, 4); + char *zCom; + if( zTagList && zTagList[0] ){ + zCom = mprintf("%s (%s)", db_column_text(&q, 2), zTagList); + }else{ + zCom = mprintf("%s", db_column_text(&q,2)); + } + fossil_print("%-*s [%S] by %s on %s\n%*s", + indent-1, zLabel, + db_column_text(&q, 3), + db_column_text(&q, 1), + db_column_text(&q, 0), + indent, ""); + comment_print(zCom, db_column_text(&q,2), indent, -1, g.comFmtFlags); + fossil_free(zCom); + } + db_finalize(&q); +} + + +/* Pick the most recent leaf that is (1) not equal to vid and (2) +** has not already been merged into vid and (3) the leaf is not +** closed and (4) the leaf is in the same branch as vid. 
+** +** Set vmergeFlag to control whether the vmerge table is checked. +*/ +int fossil_find_nearest_fork(int vid, int vmergeFlag){ + Blob sql; + Stmt q; + int rid = 0; + + blob_zero(&sql); + blob_append_sql(&sql, + "SELECT leaf.rid" + " FROM leaf, event" + " WHERE leaf.rid=event.objid" + " AND leaf.rid!=%d", /* Constraint (1) */ + vid + ); + if( vmergeFlag ){ + blob_append_sql(&sql, + " AND leaf.rid NOT IN (SELECT merge FROM vmerge)" /* Constraint (2) */ + ); + } + blob_append_sql(&sql, + " AND NOT EXISTS(SELECT 1 FROM tagxref" /* Constraint (3) */ + " WHERE rid=leaf.rid" + " AND tagid=%d" + " AND tagtype>0)" + " AND (SELECT value FROM tagxref" /* Constraint (4) */ + " WHERE tagid=%d AND rid=%d AND tagtype>0) =" + " (SELECT value FROM tagxref" + " WHERE tagid=%d AND rid=leaf.rid AND tagtype>0)" + " ORDER BY event.mtime DESC LIMIT 1", + TAG_CLOSED, TAG_BRANCH, vid, TAG_BRANCH + ); + db_prepare(&q, "%s", blob_sql_text(&sql)); + blob_reset(&sql); + if( db_step(&q)==SQLITE_ROW ){ + rid = db_column_int(&q, 0); + } + db_finalize(&q); + return rid; +} + +/* +** Check content that was received with rcvid and return true if any +** fork was created. +*/ +int fossil_any_has_fork(int rcvid){ + static Stmt q; + int fForkSeen = 0; + + if( rcvid==0 ) return 0; + db_static_prepare(&q, + " SELECT pid FROM plink WHERE pid>0 AND isprim" + " AND cid IN (SELECT blob.rid FROM blob" + " WHERE rcvid=:rcvid)"); + db_bind_int(&q, ":rcvid", rcvid); + while( !fForkSeen && db_step(&q)==SQLITE_ROW ){ + int pid = db_column_int(&q, 0); + if( count_nonbranch_children(pid)>1 ){ + compute_leaves(pid,1); + if( db_int(0, "SELECT count(*) FROM leaves")>1 ){ + int rid = db_int(0, "SELECT rid FROM leaves, event" + " WHERE event.objid=leaves.rid" + " ORDER BY event.mtime DESC LIMIT 1"); + fForkSeen = fossil_find_nearest_fork(rid, db_open_local(0))!=0; + } + } + } + db_finalize(&q); + return fForkSeen; +} /* ** COMMAND: merge ** -** Usage: %fossil merge [--cherrypick] [--backout] VERSION +** Usage: %fossil merge ?OPTIONS? ?VERSION? ** -** The argument is a version that should be merged into the current -** checkout. All changes from VERSION back to the nearest common -** ancestor are merged. Except, if either of the --cherrypick or +** The argument VERSION is a version that should be merged into the +** current checkout. All changes from VERSION back to the nearest +** common ancestor are merged. Except, if either of the --cherrypick or ** --backout options are used only the changes associated with the ** single check-in VERSION are merged. The --backout option causes ** the changes associated with VERSION to be removed from the current ** checkout rather than added. +** +** If the VERSION argument is omitted, then Fossil attempts to find +** a recent fork on the current branch to merge. ** ** Only file content is merged. The result continues to use the ** file and directory names from the current checkout even if those ** names might have been changed in the branch being merged in. ** ** Other options: ** -** --detail Show additional details of the merge +** --baseline BASELINE Use BASELINE as the "pivot" of the merge instead +** of the nearest common ancestor. This allows +** a sequence of changes in a branch to be merged +** without having to merge the entire branch. ** ** --binary GLOBPATTERN Treat files that match GLOBPATTERN as binary ** and do not try to merge parallel changes. This ** option overrides the "binary-glob" setting. +** +** --case-sensitive BOOL Override the case-sensitive setting. 
If false, +** files whose names differ only in case are taken +** to be the same file. +** +** -f|--force Force the merge even if it would be a no-op. +** +** --force-missing Force the merge even if there is missing content. +** +** --integrate Merged branch will be closed when committing. +** +** -n|--dry-run If given, display instead of run actions +** +** -v|--verbose Show additional details of the merge */ void merge_cmd(void){ - int vid; /* Current version */ - int mid; /* Version we are merging against */ - int pid; /* The pivot version - most recent common ancestor */ - int detailFlag; /* True if the --detail option is present */ + int vid; /* Current version "V" */ + int mid; /* Version we are merging from "M" */ + int pid; /* The pivot version - most recent common ancestor P */ + int verboseFlag; /* True if the -v|--verbose option is present */ + int integrateFlag; /* True if the --integrate option is present */ int pickFlag; /* True if the --cherrypick option is present */ - int backoutFlag; /* True if the --backout optioni is present */ + int backoutFlag; /* True if the --backout option is present */ + int dryRunFlag; /* True if the --dry-run or -n option is present */ + int forceFlag; /* True if the --force or -f option is present */ + int forceMissingFlag; /* True if the --force-missing option is present */ const char *zBinGlob; /* The value of --binary */ + const char *zPivot; /* The value of --baseline */ + int debugFlag; /* True if --debug is present */ + int nChng; /* Number of file name changes */ + int *aChng; /* An array of file name changes */ + int i; /* Loop counter */ + int nConflict = 0; /* Number of conflicts seen */ + int nOverwrite = 0; /* Number of unmanaged files overwritten */ Stmt q; - detailFlag = find_option("detail",0,0)!=0; + + /* Notation: + ** + ** V The current checkout + ** M The version being merged in + ** P The "pivot" - the most recent common ancestor of V and M. 
+ */ + + undo_capture_command_line(); + verboseFlag = find_option("verbose","v",0)!=0; + forceMissingFlag = find_option("force-missing",0,0)!=0; + if( !verboseFlag ){ + verboseFlag = find_option("detail",0,0)!=0; /* deprecated */ + } pickFlag = find_option("cherrypick",0,0)!=0; + integrateFlag = find_option("integrate",0,0)!=0; backoutFlag = find_option("backout",0,0)!=0; + debugFlag = find_option("debug",0,0)!=0; zBinGlob = find_option("binary",0,1); - if( g.argc!=3 ){ - usage("VERSION"); + dryRunFlag = find_option("dry-run","n",0)!=0; + if( !dryRunFlag ){ + dryRunFlag = find_option("nochange",0,0)!=0; /* deprecated */ } + forceFlag = find_option("force","f",0)!=0; + zPivot = find_option("baseline",0,1); + verify_all_options(); db_must_be_within_tree(); if( zBinGlob==0 ) zBinGlob = db_get("binary-glob",0); vid = db_lget_int("checkout", 0); if( vid==0 ){ fossil_fatal("nothing is checked out"); } - mid = name_to_rid(g.argv[2]); - if( mid==0 ){ - fossil_fatal("not a version: %s", g.argv[2]); - } - if( mid>1 && !db_exists("SELECT 1 FROM plink WHERE cid=%d", mid) ){ - fossil_fatal("not a version: %s", g.argv[2]); - } - if( pickFlag || backoutFlag ){ + if( !dryRunFlag ){ + if( autosync_loop(SYNC_PULL + SYNC_VERBOSE*verboseFlag, + db_get_int("autosync-tries", 1)) ){ + fossil_fatal("Cannot proceed with merge"); + } + } + + /* Find mid, the artifactID of the version to be merged into the current + ** check-out */ + if( g.argc==3 ){ + /* Mid is specified as an argument on the command-line */ + mid = name_to_typed_rid(g.argv[2], "ci"); + if( mid==0 || !is_a_version(mid) ){ + fossil_fatal("not a version: %s", g.argv[2]); + } + }else if( g.argc==2 ){ + /* No version specified on the command-line so pick the most recent + ** leaf that is (1) not the version currently checked out and (2) + ** has not already been merged into the current checkout and (3) + ** the leaf is not closed and (4) the leaf is in the same branch + ** as the current checkout. + */ + Stmt q; + if( pickFlag || backoutFlag || integrateFlag){ + fossil_fatal("cannot use --backout, --cherrypick or --integrate with a fork merge"); + } + mid = fossil_find_nearest_fork(vid, db_open_local(0)); + if( mid==0 ){ + fossil_fatal("no unmerged forks of branch \"%s\"", + db_text(0, "SELECT value FROM tagxref" + " WHERE tagid=%d AND rid=%d AND tagtype>0", + TAG_BRANCH, vid) + ); + } + db_prepare(&q, + "SELECT blob.uuid," + " datetime(event.mtime,toLocal())," + " coalesce(ecomment, comment)," + " coalesce(euser, user)" + " FROM event, blob" + " WHERE event.objid=%d AND blob.rid=%d", + mid, mid + ); + if( db_step(&q)==SQLITE_ROW ){ + char *zCom = mprintf("Merging fork [%S] at %s by %s: \"%s\"", + db_column_text(&q, 0), db_column_text(&q, 1), + db_column_text(&q, 3), db_column_text(&q, 2)); + comment_print(zCom, db_column_text(&q,2), 0, -1, g.comFmtFlags); + fossil_free(zCom); + } + db_finalize(&q); + }else{ + usage("?OPTIONS? 
?VERSION?"); + return; + } + + if( zPivot ){ + pid = name_to_typed_rid(zPivot, "ci"); + if( pid==0 || !is_a_version(pid) ){ + fossil_fatal("not a version: %s", zPivot); + } + if( pickFlag ){ + fossil_fatal("incompatible options: --cherrypick & --baseline"); + } + }else if( pickFlag || backoutFlag ){ + if( integrateFlag ){ + fossil_fatal("incompatible options: --integrate & --cherrypick or --backout"); + } pid = db_int(0, "SELECT pid FROM plink WHERE cid=%d AND isprim", mid); if( pid<=0 ){ fossil_fatal("cannot find an ancestor for %s", g.argv[2]); } - if( backoutFlag ){ - int t = pid; - pid = mid; - mid = t; - } }else{ pivot_set_primary(mid); pivot_set_secondary(vid); db_prepare(&q, "SELECT merge FROM vmerge WHERE id=0"); while( db_step(&q)==SQLITE_ROW ){ @@ -96,158 +309,271 @@ pivot_set_secondary(db_column_int(&q,0)); } db_finalize(&q); pid = pivot_find(); if( pid<=0 ){ - fossil_fatal("cannot find a common ancestor between the current" + fossil_fatal("cannot find a common ancestor between the current " "checkout and %s", g.argv[2]); } } - if( pid>1 && !db_exists("SELECT 1 FROM plink WHERE cid=%d", pid) ){ - fossil_fatal("not a version: record #%d", mid); + if( backoutFlag ){ + int t = pid; + pid = mid; + mid = t; + } + if( !is_a_version(pid) ){ + fossil_fatal("not a version: record #%d", pid); + } + if( !forceFlag && mid==pid ){ + fossil_print("Merge skipped because it is a no-op. " + " Use --force to override.\n"); + return; + } + if( integrateFlag && !is_a_leaf(mid)){ + fossil_warning("ignoring --integrate: %s is not a leaf", g.argv[2]); + integrateFlag = 0; + } + if( verboseFlag ){ + print_checkin_description(mid, 12, integrateFlag?"integrate:":"merge-from:"); + print_checkin_description(pid, 12, "baseline:"); } - vfile_check_signature(vid, 1); + vfile_check_signature(vid, CKSIG_ENOTFILE); db_begin_transaction(); - undo_begin(); - load_vfile_from_rid(mid); - load_vfile_from_rid(pid); + if( !dryRunFlag ) undo_begin(); + if( load_vfile_from_rid(mid) && !forceMissingFlag ){ + fossil_fatal("missing content, unable to merge"); + } + if( load_vfile_from_rid(pid) && !forceMissingFlag ){ + fossil_fatal("missing content, unable to merge"); + } + if( debugFlag ){ + char *z; + z = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", pid); + fossil_print("P=%d %z\n", pid, z); + z = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", mid); + fossil_print("M=%d %z\n", mid, z); + z = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", vid); + fossil_print("V=%d %z\n", vid, z); + } /* ** The vfile.pathname field is used to match files against each other. The ** FV table contains one row for each each unique filename in ** in the current checkout, the pivot, and the version being merged. 
*/ db_multi_exec( "DROP TABLE IF EXISTS fv;" "CREATE TEMP TABLE fv(" - " fn TEXT PRIMARY KEY," /* The filename */ + " fn TEXT PRIMARY KEY %s," /* The filename */ " idv INTEGER," /* VFILE entry for current version */ " idp INTEGER," /* VFILE entry for the pivot */ " idm INTEGER," /* VFILE entry for version merging in */ " chnged BOOLEAN," /* True if current version has been edited */ " ridv INTEGER," /* Record ID for current version */ " ridp INTEGER," /* Record ID for pivot */ - " ridm INTEGER" /* Record ID for merge */ - ");" - "INSERT OR IGNORE INTO fv" - " SELECT pathname, 0, 0, 0, 0, 0, 0, 0 FROM vfile" - ); - db_prepare(&q, - "SELECT id, pathname, rid FROM vfile" - " WHERE vid=%d", pid - ); - while( db_step(&q)==SQLITE_ROW ){ - int id = db_column_int(&q, 0); - const char *fn = db_column_text(&q, 1); - int rid = db_column_int(&q, 2); - db_multi_exec( - "UPDATE fv SET idp=%d, ridp=%d WHERE fn=%Q", - id, rid, fn - ); - } - db_finalize(&q); - db_prepare(&q, - "SELECT id, pathname, rid FROM vfile" - " WHERE vid=%d", mid - ); - while( db_step(&q)==SQLITE_ROW ){ - int id = db_column_int(&q, 0); - const char *fn = db_column_text(&q, 1); - int rid = db_column_int(&q, 2); - db_multi_exec( - "UPDATE fv SET idm=%d, ridm=%d WHERE fn=%Q", - id, rid, fn - ); - } - db_finalize(&q); - db_prepare(&q, - "SELECT id, pathname, rid, chnged FROM vfile" - " WHERE vid=%d", vid - ); - while( db_step(&q)==SQLITE_ROW ){ - int id = db_column_int(&q, 0); - const char *fn = db_column_text(&q, 1); - int rid = db_column_int(&q, 2); - int chnged = db_column_int(&q, 3); - db_multi_exec( - "UPDATE fv SET idv=%d, ridv=%d, chnged=%d WHERE fn=%Q", - id, rid, chnged, fn - ); - } - db_finalize(&q); - - /* - ** Find files in mid and vid but not in pid and report conflicts. - ** The file in mid will be ignored. It will be treated as if it + " ridm INTEGER," /* Record ID for merge */ + " isexe BOOLEAN," /* Execute permission enabled */ + " fnp TEXT %s," /* The filename in the pivot */ + " fnm TEXT %s," /* the filename in the merged version */ + " islinkv BOOLEAN," /* True if current version is a symlink */ + " islinkm BOOLEAN" /* True if merged version in is a symlink */ + ");", + filename_collation(), filename_collation(), filename_collation() + ); + + /* Add files found in V + */ + db_multi_exec( + "INSERT OR IGNORE" + " INTO fv(fn,fnp,fnm,idv,idp,idm,ridv,ridp,ridm,isexe,chnged)" + " SELECT pathname, pathname, pathname, id, 0, 0, rid, 0, 0, isexe, chnged " + " FROM vfile WHERE vid=%d", + vid + ); + + /* + ** Compute name changes from P->V + */ + find_filename_changes(pid, vid, 0, &nChng, &aChng, debugFlag ? "P->V" : 0); + if( nChng ){ + for(i=0; iM + */ + find_filename_changes(pid, mid, 0, &nChng, &aChng, debugFlag ? 
"P->M" : 0); + if( nChng ){ + if( nChng>4 ) db_multi_exec("CREATE INDEX fv_fnp ON fv(fnp)"); + for(i=0; i0 AND idm>0" ); while( db_step(&q)==SQLITE_ROW ){ int idm = db_column_int(&q, 0); char *zName = db_text(0, "SELECT pathname FROM vfile WHERE id=%d", idm); - printf("WARNING: conflict on %s\n", zName); + fossil_warning("WARNING - no common ancestor: %s", zName); free(zName); db_multi_exec("UPDATE fv SET idm=0 WHERE idm=%d", idm); } db_finalize(&q); /* - ** Add to vid files that are not in pid but are in mid + ** Add to V files that are not in V or P but are in M */ - db_prepare(&q, - "SELECT idm, rowid, fn FROM fv WHERE idp=0 AND idv=0 AND idm>0" + db_prepare(&q, + "SELECT idm, rowid, fnm FROM fv AS x" + " WHERE idp=0 AND idv=0 AND idm>0" ); while( db_step(&q)==SQLITE_ROW ){ int idm = db_column_int(&q, 0); int rowid = db_column_int(&q, 1); int idv; const char *zName; + char *zFullName; db_multi_exec( - "INSERT INTO vfile(vid,chnged,deleted,rid,mrid,pathname)" - " SELECT %d,3,0,rid,mrid,pathname FROM vfile WHERE id=%d", - vid, idm + "INSERT INTO vfile(vid,chnged,deleted,rid,mrid,isexe,islink,pathname)" + " SELECT %d,%d,0,rid,mrid,isexe,islink,pathname FROM vfile WHERE id=%d", + vid, integrateFlag?5:3, idm ); idv = db_last_insert_rowid(); db_multi_exec("UPDATE fv SET idv=%d WHERE rowid=%d", idv, rowid); zName = db_column_text(&q, 2); - printf("ADDED %s\n", zName); - undo_save(zName); - vfile_to_disk(0, idm, 0); + zFullName = mprintf("%s%s", g.zLocalRoot, zName); + if( file_wd_isfile_or_link(zFullName) ){ + fossil_print("ADDED %s (overwrites an unmanaged file)\n", zName); + nOverwrite++; + }else{ + fossil_print("ADDED %s\n", zName); + } + fossil_free(zFullName); + if( !dryRunFlag ){ + undo_save(zName); + vfile_to_disk(0, idm, 0, 0); + } } db_finalize(&q); - + /* - ** Find files that have changed from pid->mid but not pid->vid. - ** Copy the mid content over into vid. + ** Find files that have changed from P->M but not P->V. + ** Copy the M content over into V. */ db_prepare(&q, - "SELECT idv, ridm FROM fv" + "SELECT idv, ridm, fn, islinkm FROM fv" " WHERE idp>0 AND idv>0 AND idm>0" " AND ridm!=ridp AND ridv=ridp AND NOT chnged" ); while( db_step(&q)==SQLITE_ROW ){ int idv = db_column_int(&q, 0); int ridm = db_column_int(&q, 1); - char *zName = db_text(0, "SELECT pathname FROM vfile WHERE id=%d", idv); + const char *zName = db_column_text(&q, 2); + int islinkm = db_column_int(&q, 3); /* Copy content from idm over into idv. Overwrite idv. */ - printf("UPDATE %s\n", zName); - undo_save(zName); - db_multi_exec( - "UPDATE vfile SET mrid=%d, chnged=2 WHERE id=%d", ridm, idv - ); - vfile_to_disk(0, idv, 0); - free(zName); + fossil_print("UPDATE %s\n", zName); + if( !dryRunFlag ){ + undo_save(zName); + db_multi_exec( + "UPDATE vfile SET mtime=0, mrid=%d, chnged=%d, islink=%d " + " WHERE id=%d", ridm, integrateFlag?4:2, islinkm, idv + ); + vfile_to_disk(0, idv, 0, 0); + } } db_finalize(&q); /* - ** Do a three-way merge on files that have changes pid->mid and pid->vid + ** Do a three-way merge on files that have changes on both P->M and P->V. 
*/ db_prepare(&q, - "SELECT ridm, idv, ridp, ridv, %s FROM fv" + "SELECT ridm, idv, ridp, ridv, %s, fn, isexe, islinkv, islinkm FROM fv" " WHERE idp>0 AND idv>0 AND idm>0" " AND ridm!=ridp AND (ridv!=ridp OR chnged)", glob_expr("fv.fn", zBinGlob) ); while( db_step(&q)==SQLITE_ROW ){ @@ -254,75 +580,160 @@ int ridm = db_column_int(&q, 0); int idv = db_column_int(&q, 1); int ridp = db_column_int(&q, 2); int ridv = db_column_int(&q, 3); int isBinary = db_column_int(&q, 4); - int rc; - char *zName = db_text(0, "SELECT pathname FROM vfile WHERE id=%d", idv); - char *zFullPath; - Blob m, p, v, r; - /* Do a 3-way merge of idp->idm into idp->idv. The results go into idv. */ - if( detailFlag ){ - printf("MERGE %s (pivot=%d v1=%d v2=%d)\n", zName, ridp, ridm, ridv); - }else{ - printf("MERGE %s\n", zName); - } - undo_save(zName); - zFullPath = mprintf("%s/%s", g.zLocalRoot, zName); - content_get(ridp, &p); - content_get(ridm, &m); - blob_zero(&v); - blob_read_from_file(&v, zFullPath); - if( isBinary ){ - rc = -1; - blob_zero(&r); - }else{ - rc = blob_merge(&p, &m, &v, &r); - } - if( rc>=0 ){ - blob_write_to_file(&r, zFullPath); - if( rc>0 ){ - printf("***** %d merge conflicts in %s\n", rc, zName); - } - }else{ - printf("***** Cannot merge binary file %s\n", zName); - } - free(zName); - blob_reset(&p); - blob_reset(&m); - blob_reset(&v); - blob_reset(&r); + const char *zName = db_column_text(&q, 5); + int isExe = db_column_int(&q, 6); + int islinkv = db_column_int(&q, 7); + int islinkm = db_column_int(&q, 8); + int rc; + char *zFullPath; + Blob m, p, r; + /* Do a 3-way merge of idp->idm into idp->idv. The results go into idv. */ + if( verboseFlag ){ + fossil_print("MERGE %s (pivot=%d v1=%d v2=%d)\n", + zName, ridp, ridm, ridv); + }else{ + fossil_print("MERGE %s\n", zName); + } + if( islinkv || islinkm /* || file_wd_islink(zFullPath) */ ){ + fossil_print("***** Cannot merge symlink %s\n", zName); + nConflict++; + }else{ + if( !dryRunFlag ) undo_save(zName); + zFullPath = mprintf("%s/%s", g.zLocalRoot, zName); + content_get(ridp, &p); + content_get(ridm, &m); + if( isBinary ){ + rc = -1; + blob_zero(&r); + }else{ + unsigned mergeFlags = dryRunFlag ? 
MERGE_DRYRUN : 0; + rc = merge_3way(&p, zFullPath, &m, &r, mergeFlags); + } + if( rc>=0 ){ + if( !dryRunFlag ){ + blob_write_to_file(&r, zFullPath); + file_wd_setexe(zFullPath, isExe); + } + db_multi_exec("UPDATE vfile SET mtime=0 WHERE id=%d", idv); + if( rc>0 ){ + fossil_print("***** %d merge conflicts in %s\n", rc, zName); + nConflict++; + } + }else{ + fossil_print("***** Cannot merge binary file %s\n", zName); + nConflict++; + } + blob_reset(&p); + blob_reset(&m); + blob_reset(&r); + } db_multi_exec("INSERT OR IGNORE INTO vmerge(id,merge) VALUES(%d,%d)", idv,ridm); } db_finalize(&q); /* - ** Drop files from vid that are in pid but not in mid + ** Drop files that are in P and V but not in M */ db_prepare(&q, - "SELECT idv FROM fv" + "SELECT idv, fn, chnged FROM fv" " WHERE idp>0 AND idv>0 AND idm=0" ); while( db_step(&q)==SQLITE_ROW ){ int idv = db_column_int(&q, 0); - char *zName = db_text(0, "SELECT pathname FROM vfile WHERE id=%d", idv); + const char *zName = db_column_text(&q, 1); + int chnged = db_column_int(&q, 2); /* Delete the file idv */ - printf("DELETE %s\n", zName); - undo_save(zName); + fossil_print("DELETE %s\n", zName); + if( chnged ){ + fossil_warning("WARNING: local edits lost for %s\n", zName); + nConflict++; + } + if( !dryRunFlag ) undo_save(zName); db_multi_exec( "UPDATE vfile SET deleted=1 WHERE id=%d", idv ); - free(zName); + if( !dryRunFlag ){ + char *zFullPath = mprintf("%s%s", g.zLocalRoot, zName); + file_delete(zFullPath); + free(zFullPath); + } + } + db_finalize(&q); + + /* + ** Rename files that have taken a rename on P->M but which keep the same + ** name on P->V. If a file is renamed on P->V only or on both P->V and + ** P->M then we retain the V name of the file. + */ + db_prepare(&q, + "SELECT idv, fnp, fnm FROM fv" + " WHERE idv>0 AND idp>0 AND idm>0 AND fnp=fn AND fnm!=fnp" + ); + while( db_step(&q)==SQLITE_ROW ){ + int idv = db_column_int(&q, 0); + const char *zOldName = db_column_text(&q, 1); + const char *zNewName = db_column_text(&q, 2); + fossil_print("RENAME %s -> %s\n", zOldName, zNewName); + if( !dryRunFlag ) undo_save(zOldName); + if( !dryRunFlag ) undo_save(zNewName); + db_multi_exec( + "UPDATE vfile SET pathname=%Q, origname=coalesce(origname,pathname)" + " WHERE id=%d AND vid=%d", zNewName, idv, vid + ); + if( !dryRunFlag ){ + char *zFullOldPath = mprintf("%s%s", g.zLocalRoot, zOldName); + char *zFullNewPath = mprintf("%s%s", g.zLocalRoot, zNewName); + if( file_wd_islink(zFullOldPath) ){ + symlink_copy(zFullOldPath, zFullNewPath); + }else{ + file_copy(zFullOldPath, zFullNewPath); + } + file_delete(zFullOldPath); + free(zFullNewPath); + free(zFullOldPath); + } } db_finalize(&q); - + + + /* Report on conflicts + */ + if( nConflict ){ + fossil_warning("WARNING: %d merge conflicts", nConflict); + } + if( nOverwrite ){ + fossil_warning("WARNING: %d unmanaged files were overwritten", + nOverwrite); + } + if( dryRunFlag ){ + fossil_warning("REMINDER: this was a dry run -" + " no files were actually changed."); + } + /* ** Clean up the mid and pid VFILE entries. Then commit the changes. */ db_multi_exec("DELETE FROM vfile WHERE vid!=%d", vid); - if( !pickFlag ){ + if( pickFlag ){ + db_multi_exec("INSERT OR IGNORE INTO vmerge(id,merge) VALUES(-1,%d)",mid); + /* For a cherry-pick merge, make the default check-in comment the same + ** as the check-in comment on the check-in that is being merged in. 
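The per-file actions printed above (ADDED, UPDATE, MERGE, DELETE) fall out of which of P, V, and M contain the file and whether its content changed on the P->V or P->M side. The following is a minimal sketch of that decision using a hypothetical helper that is not part of Fossil; inP/inV/inM and chgV/chgM are illustrative flags only:

    /* Illustrative summary of the per-file decisions made by merge_cmd().
    ** Hypothetical helper, not part of Fossil.  inP/inV/inM: the file
    ** exists in the pivot, the current checkout, or the merged-in version.
    ** chgV/chgM: its content differs from the pivot on that side (chgV
    ** also covers uncommitted local edits).                              */
    static const char *merge_action(int inP, int inV, int inM,
                                    int chgV, int chgM){
      if( !inP && !inV &&  inM ) return "ADDED";     /* exists only in M      */
      if(  inP &&  inV && !inM ) return "DELETE";    /* removed on the M side */
      if( !inP &&  inV &&  inM ) return "CONFLICT";  /* no common ancestor    */
      if(  inP &&  inV &&  inM && chgM ){
        return chgV ? "MERGE"                        /* 3-way merge needed    */
                    : "UPDATE";                      /* copy M content over   */
      }
      return "UNCHANGED";
    }

Renames are handled separately: a file renamed on P->M but kept under its pivot name on P->V is moved to its M name, while any rename on the V side keeps the V name.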
*/ + db_multi_exec( + "REPLACE INTO vvar(name,value)" + " SELECT 'ci-comment', coalesce(ecomment,comment) FROM event" + " WHERE type='ci' AND objid=%d", + mid + ); + }else if( backoutFlag ){ + db_multi_exec("INSERT OR IGNORE INTO vmerge(id,merge) VALUES(-2,%d)",pid); + }else if( integrateFlag ){ + db_multi_exec("INSERT OR IGNORE INTO vmerge(id,merge) VALUES(-4,%d)",mid); + }else{ db_multi_exec("INSERT OR IGNORE INTO vmerge(id,merge) VALUES(0,%d)", mid); } - undo_finish(); - db_end_transaction(0); + if( !dryRunFlag ) undo_finish(); + db_end_transaction(dryRunFlag); } Index: src/merge3.c ================================================================== --- src/merge3.c +++ src/merge3.c @@ -27,11 +27,13 @@ #define DEBUG(X) #define ISDEBUG 0 #endif /* The minimum of two integers */ -#define min(A,B) (A0 && (aC[0]>0 || aC[1]>0 || aC[2]>0) ){ @@ -130,10 +133,20 @@ i += 3; } return i; } +/* +** Text of boundary markers for merge conflicts. +*/ +static const char *const mergeMarker[] = { + /*123456789 123456789 123456789 123456789 123456789 123456789 123456789*/ + "<<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<<\n", + "======= COMMON ANCESTOR content follows ============================\n", + "======= MERGED IN content follows ==================================\n", + ">>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n" +}; /* ** Do a three-way merge. Initialize pOut to contain the result. ** @@ -141,23 +154,20 @@ ** common origin at pPivot. Apply the changes of pPivot ==> pV1 ** to pV2. ** ** The return is 0 upon complete success. If any input file is binary, ** -1 is returned and pOut is unmodified. If there are merge -** conflicts, the merge proceeds as best as it can and the number +** conflicts, the merge proceeds as best as it can and the number ** of conflicts is returns */ -int blob_merge(Blob *pPivot, Blob *pV1, Blob *pV2, Blob *pOut){ +static int blob_merge(Blob *pPivot, Blob *pV1, Blob *pV2, Blob *pOut){ int *aC1; /* Changes from pPivot to pV1 */ int *aC2; /* Changes from pPivot to pV2 */ int i1, i2; /* Index into aC1[] and aC2[] */ int nCpy, nDel, nIns; /* Number of lines to copy, delete, or insert */ int limit1, limit2; /* Sizes of aC1[] and aC2[] */ int nConflict = 0; /* Number of merge conflicts seen so far */ - static const char zBegin[] = ">>>>>>> BEGIN MERGE CONFLICT\n"; - static const char zMid[] = "============================\n"; - static const char zEnd[] = "<<<<<<< END MERGE CONFLICT\n"; blob_zero(pOut); /* Merge results stored in pOut */ /* Compute the edits that occur from pPivot => pV1 (into aC1) ** and pPivot => pV2 (into aC2). Each of the aC1 and aC2 arrays is @@ -165,12 +175,12 @@ ** is the number of lines of text to copy directly from the pivot, ** the second integer is the number of lines of text to omit from the ** pivot, and the third integer is the number of lines of text that are ** inserted. The edit array ends with a triple of 0,0,0. */ - aC1 = text_diff(pPivot, pV1, 0, 0); - aC2 = text_diff(pPivot, pV2, 0, 0); + aC1 = text_diff(pPivot, pV1, 0, 0, 0); + aC2 = text_diff(pPivot, pV2, 0, 0, 0); if( aC1==0 || aC2==0 ){ free(aC1); free(aC2); return -1; } @@ -188,11 +198,11 @@ DEBUG( for(i1=0; i1VERSION1 with the change going -** from PIVOT->VERSION2 and write the combined changes into MERGED. 
+** fossil 3-way-merge Xbase.c Xlocal.c Xup.c Xlocal.c +** cp Xup.c Xbase.c +** # Verify that everything still works +** fossil commit +** */ void delta_3waymerge_cmd(void){ Blob pivot, v1, v2, merged; + + /* We should be done with options.. */ + verify_all_options(); + if( g.argc!=6 ){ - fprintf(stderr,"Usage: %s %s PIVOT V1 V2 MERGED\n", g.argv[0], g.argv[1]); - exit(1); + usage("PIVOT V1 V2 MERGED"); } if( blob_read_from_file(&pivot, g.argv[2])<0 ){ - fprintf(stderr,"cannot read %s\n", g.argv[2]); - exit(1); + fossil_fatal("cannot read %s\n", g.argv[2]); } if( blob_read_from_file(&v1, g.argv[3])<0 ){ - fprintf(stderr,"cannot read %s\n", g.argv[3]); - exit(1); + fossil_fatal("cannot read %s\n", g.argv[3]); } if( blob_read_from_file(&v2, g.argv[4])<0 ){ - fprintf(stderr,"cannot read %s\n", g.argv[4]); - exit(1); + fossil_fatal("cannot read %s\n", g.argv[4]); } blob_merge(&pivot, &v1, &v2, &merged); if( blob_write_to_file(&merged, g.argv[5])0 ){ + blob_append(&x, zInput, i); + zInput += i; + } + if( zInput[0]==0 ) break; + for(j=0; j=nSubst ){ + blob_append(&x, "%", 1); + zInput++; + } + } + return blob_str(&x); +} + +#if INTERFACE +/* +** Flags to the 3-way merger +*/ +#define MERGE_DRYRUN 0x0001 +#endif + + +/* +** This routine is a wrapper around blob_merge() with the following +** enhancements: +** +** (1) If the merge-command is defined, then use the external merging +** program specified instead of the built-in blob-merge to do the +** merging. Panic if the external merger fails. +** ** Not currently implemented ** +** +** (2) If gmerge-command is defined and there are merge conflicts in +** blob_merge() then invoke the external graphical merger to resolve +** the conflicts. +** +** (3) If a merge conflict occurs and gmerge-command is not defined, +** then write the pivot, original, and merge-in files to the +** filesystem. 
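A short sketch of how a caller might drive the merge_3way() wrapper described above; the file names are placeholders, and error handling is reduced to the return code (0 for a clean merge, a positive conflict count, or -1 for binary content). Conflicting regions in the output are bracketed by the mergeMarker[] strings shown earlier.

    /* Hypothetical caller of merge_3way(); file names are examples only. */
    Blob pivot, theirs, merged;
    int nConflict;

    blob_read_from_file(&pivot, "base.c");      /* common ancestor P        */
    blob_read_from_file(&theirs, "theirs.c");   /* version being merged in  */
    nConflict = merge_3way(&pivot, "mine.c", &theirs, &merged, 0);
    if( nConflict<0 ){
      fossil_print("binary content - not merged\n");
    }else{
      blob_write_to_file(&merged, "mine.c");    /* conflicts are marked inline */
      if( nConflict>0 ){
        fossil_print("%d merge conflict(s) to resolve\n", nConflict);
      }
    }
    blob_reset(&pivot);
    blob_reset(&theirs);
    blob_reset(&merged);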
+*/ +int merge_3way( + Blob *pPivot, /* Common ancestor (older) */ + const char *zV1, /* Name of file for version merging into (mine) */ + Blob *pV2, /* Version merging from (yours) */ + Blob *pOut, /* Output written here */ + unsigned mergeFlags /* Flags that control operation */ +){ + Blob v1; /* Content of zV1 */ + int rc; /* Return code of subroutines and this routine */ + + blob_read_from_file(&v1, zV1); + rc = blob_merge(pPivot, &v1, pV2, pOut); + if( rc!=0 && (mergeFlags & MERGE_DRYRUN)==0 ){ + char *zPivot; /* Name of the pivot file */ + char *zOrig; /* Name of the original content file */ + char *zOther; /* Name of the merge file */ + + zPivot = file_newname(zV1, "baseline", 1); + blob_write_to_file(pPivot, zPivot); + zOrig = file_newname(zV1, "original", 1); + blob_write_to_file(&v1, zOrig); + zOther = file_newname(zV1, "merge", 1); + blob_write_to_file(pV2, zOther); + if( rc>0 ){ + const char *zGMerge; /* Name of the gmerge command */ + + zGMerge = db_get("gmerge-command", 0); + if( zGMerge && zGMerge[0] ){ + char *zOut; /* Temporary output file */ + char *zCmd; /* Command to invoke */ + const char *azSubst[8]; /* Strings to be substituted */ + + zOut = file_newname(zV1, "output", 1); + azSubst[0] = "%baseline"; azSubst[1] = zPivot; + azSubst[2] = "%original"; azSubst[3] = zOrig; + azSubst[4] = "%merge"; azSubst[5] = zOther; + azSubst[6] = "%output"; azSubst[7] = zOut; + zCmd = string_subst(zGMerge, 8, azSubst); + printf("%s\n", zCmd); fflush(stdout); + fossil_system(zCmd); + if( file_wd_size(zOut)>=0 ){ + blob_read_from_file(pOut, zOut); + file_delete(zPivot); + file_delete(zOrig); + file_delete(zOther); + file_delete(zOut); + } + fossil_free(zCmd); + fossil_free(zOut); + } + } + fossil_free(zPivot); + fossil_free(zOrig); + fossil_free(zOther); + } + blob_reset(&v1); + return rc; +} ADDED src/miniz.c Index: src/miniz.c ================================================================== --- src/miniz.c +++ src/miniz.c @@ -0,0 +1,4916 @@ +/* miniz.c v1.15 - public domain deflate/inflate, zlib-subset, ZIP reading/writing/appending, PNG writing + See "unlicense" statement at the end of this file. + Rich Geldreich , last updated Oct. 13, 2013 + Implements RFC 1950: http://www.ietf.org/rfc/rfc1950.txt and RFC 1951: http://www.ietf.org/rfc/rfc1951.txt + + Most API's defined in miniz.c are optional. For example, to disable the archive related functions just define + MINIZ_NO_ARCHIVE_APIS, or to get rid of all stdio usage define MINIZ_NO_STDIO (see the list below for more macros). + + * Change History + 10/13/13 v1.15 r4 - Interim bugfix release while I work on the next major release with Zip64 support (almost there!): + - Critical fix for the MZ_ZIP_FLAG_DO_NOT_SORT_CENTRAL_DIRECTORY bug (thanks kahmyong.moon@hp.com) which could cause locate files to not find files. This bug + would only have occured in earlier versions if you explicitly used this flag, OR if you used mz_zip_extract_archive_file_to_heap() or mz_zip_add_mem_to_archive_file_in_place() + (which used this flag). If you can't switch to v1.15 but want to fix this bug, just remove the uses of this flag from both helper funcs (and of course don't use the flag). + - Bugfix in mz_zip_reader_extract_to_mem_no_alloc() from kymoon when pUser_read_buf is not NULL and compressed size is > uncompressed size + - Fixing mz_zip_reader_extract_*() funcs so they don't try to extract compressed data from directory entries, to account for weird zipfiles which contain zero-size compressed data on dir entries. 
+ Hopefully this fix won't cause any issues on weird zip archives, because it assumes the low 16-bits of zip external attributes are DOS attributes (which I believe they always are in practice). + - Fixing mz_zip_reader_is_file_a_directory() so it doesn't check the internal attributes, just the filename and external attributes + - mz_zip_reader_init_file() - missing MZ_FCLOSE() call if the seek failed + - Added cmake support for Linux builds which builds all the examples, tested with clang v3.3 and gcc v4.6. + - Clang fix for tdefl_write_image_to_png_file_in_memory() from toffaletti + - Merged MZ_FORCEINLINE fix from hdeanclark + - Fix include before config #ifdef, thanks emil.brink + - Added tdefl_write_image_to_png_file_in_memory_ex(): supports Y flipping (super useful for OpenGL apps), and explicit control over the compression level (so you can + set it to 1 for real-time compression). + - Merged in some compiler fixes from paulharris's github repro. + - Retested this build under Windows (VS 2010, including static analysis), tcc 0.9.26, gcc v4.6 and clang v3.3. + - Added example6.c, which dumps an image of the mandelbrot set to a PNG file. + - Modified example2 to help test the MZ_ZIP_FLAG_DO_NOT_SORT_CENTRAL_DIRECTORY flag more. + - In r3: Bugfix to mz_zip_writer_add_file() found during merge: Fix possible src file fclose() leak if alignment bytes+local header file write faiiled + - In r4: Minor bugfix to mz_zip_writer_add_from_zip_reader(): Was pushing the wrong central dir header offset, appears harmless in this release, but it became a problem in the zip64 branch + 5/20/12 v1.14 - MinGW32/64 GCC 4.6.1 compiler fixes: added MZ_FORCEINLINE, #include (thanks fermtect). + 5/19/12 v1.13 - From jason@cornsyrup.org and kelwert@mtu.edu - Fix mz_crc32() so it doesn't compute the wrong CRC-32's when mz_ulong is 64-bit. + - Temporarily/locally slammed in "typedef unsigned long mz_ulong" and re-ran a randomized regression test on ~500k files. + - Eliminated a bunch of warnings when compiling with GCC 32-bit/64. + - Ran all examples, miniz.c, and tinfl.c through MSVC 2008's /analyze (static analysis) option and fixed all warnings (except for the silly + "Use of the comma-operator in a tested expression.." analysis warning, which I purposely use to work around a MSVC compiler warning). + - Created 32-bit and 64-bit Codeblocks projects/workspace. Built and tested Linux executables. The codeblocks workspace is compatible with Linux+Win32/x64. + - Added miniz_tester solution/project, which is a useful little app derived from LZHAM's tester app that I use as part of the regression test. + - Ran miniz.c and tinfl.c through another series of regression testing on ~500,000 files and archives. + - Modified example5.c so it purposely disables a bunch of high-level functionality (MINIZ_NO_STDIO, etc.). (Thanks to corysama for the MINIZ_NO_STDIO bug report.) + - Fix ftell() usage in examples so they exit with an error on files which are too large (a limitation of the examples, not miniz itself). + 4/12/12 v1.12 - More comments, added low-level example5.c, fixed a couple minor level_and_flags issues in the archive API's. + level_and_flags can now be set to MZ_DEFAULT_COMPRESSION. Thanks to Bruce Dawson for the feedback/bug report. + 5/28/11 v1.11 - Added statement from unlicense.org + 5/27/11 v1.10 - Substantial compressor optimizations: + - Level 1 is now ~4x faster than before. The L1 compressor's throughput now varies between 70-110MB/sec. 
on a + - Core i7 (actual throughput varies depending on the type of data, and x64 vs. x86). + - Improved baseline L2-L9 compression perf. Also, greatly improved compression perf. issues on some file types. + - Refactored the compression code for better readability and maintainability. + - Added level 10 compression level (L10 has slightly better ratio than level 9, but could have a potentially large + drop in throughput on some files). + 5/15/11 v1.09 - Initial stable release. + + * Low-level Deflate/Inflate implementation notes: + + Compression: Use the "tdefl" API's. The compressor supports raw, static, and dynamic blocks, lazy or + greedy parsing, match length filtering, RLE-only, and Huffman-only streams. It performs and compresses + approximately as well as zlib. + + Decompression: Use the "tinfl" API's. The entire decompressor is implemented as a single function + coroutine: see tinfl_decompress(). It supports decompression into a 32KB (or larger power of 2) wrapping buffer, or into a memory + block large enough to hold the entire file. + + The low-level tdefl/tinfl API's do not make any use of dynamic memory allocation. + + * zlib-style API notes: + + miniz.c implements a fairly large subset of zlib. There's enough functionality present for it to be a drop-in + zlib replacement in many apps: + The z_stream struct, optional memory allocation callbacks + deflateInit/deflateInit2/deflate/deflateReset/deflateEnd/deflateBound + inflateInit/inflateInit2/inflate/inflateEnd + compress, compress2, compressBound, uncompress + CRC-32, Adler-32 - Using modern, minimal code size, CPU cache friendly routines. + Supports raw deflate streams or standard zlib streams with adler-32 checking. + + Limitations: + The callback API's are not implemented yet. No support for gzip headers or zlib static dictionaries. + I've tried to closely emulate zlib's various flavors of stream flushing and return status codes, but + there are no guarantees that miniz.c pulls this off perfectly. + + * PNG writing: See the tdefl_write_image_to_png_file_in_memory() function, originally written by + Alex Evans. Supports 1-4 bytes/pixel images. + + * ZIP archive API notes: + + The ZIP archive API's where designed with simplicity and efficiency in mind, with just enough abstraction to + get the job done with minimal fuss. There are simple API's to retrieve file information, read files from + existing archives, create new archives, append new files to existing archives, or clone archive data from + one archive to another. It supports archives located in memory or the heap, on disk (using stdio.h), + or you can specify custom file read/write callbacks. + + - Archive reading: Just call this function to read a single file from a disk archive: + + void *mz_zip_extract_archive_file_to_heap(const char *pZip_filename, const char *pArchive_name, + size_t *pSize, mz_uint zip_flags); + + For more complex cases, use the "mz_zip_reader" functions. Upon opening an archive, the entire central + directory is located and read as-is into memory, and subsequent file access only occurs when reading individual files. + + - Archives file scanning: The simple way is to use this function to scan a loaded archive for a specific file: + + int mz_zip_reader_locate_file(mz_zip_archive *pZip, const char *pName, const char *pComment, mz_uint flags); + + The locate operation can optionally check file comments too, which (as one example) can be used to identify + multiple versions of the same file in an archive. 
This function uses a simple linear search through the central + directory, so it's not very fast. + + Alternately, you can iterate through all the files in an archive (using mz_zip_reader_get_num_files()) and + retrieve detailed info on each file by calling mz_zip_reader_file_stat(). + + - Archive creation: Use the "mz_zip_writer" functions. The ZIP writer immediately writes compressed file data + to disk and builds an exact image of the central directory in memory. The central directory image is written + all at once at the end of the archive file when the archive is finalized. + + The archive writer can optionally align each file's local header and file data to any power of 2 alignment, + which can be useful when the archive will be read from optical media. Also, the writer supports placing + arbitrary data blobs at the very beginning of ZIP archives. Archives written using either feature are still + readable by any ZIP tool. + + - Archive appending: The simple way to add a single file to an archive is to call this function: + + mz_bool mz_zip_add_mem_to_archive_file_in_place(const char *pZip_filename, const char *pArchive_name, + const void *pBuf, size_t buf_size, const void *pComment, mz_uint16 comment_size, mz_uint level_and_flags); + + The archive will be created if it doesn't already exist, otherwise it'll be appended to. + Note the appending is done in-place and is not an atomic operation, so if something goes wrong + during the operation it's possible the archive could be left without a central directory (although the local + file headers and file data will be fine, so the archive will be recoverable). + + For more complex archive modification scenarios: + 1. The safest way is to use a mz_zip_reader to read the existing archive, cloning only those bits you want to + preserve into a new archive using using the mz_zip_writer_add_from_zip_reader() function (which compiles the + compressed file data as-is). When you're done, delete the old archive and rename the newly written archive, and + you're done. This is safe but requires a bunch of temporary disk space or heap memory. + + 2. Or, you can convert an mz_zip_reader in-place to an mz_zip_writer using mz_zip_writer_init_from_reader(), + append new files as needed, then finalize the archive which will write an updated central directory to the + original archive. (This is basically what mz_zip_add_mem_to_archive_file_in_place() does.) There's a + possibility that the archive's central directory could be lost with this method if anything goes wrong, though. + + - ZIP archive support limitations: + No zip64 or spanning support. Extraction functions can only handle unencrypted, stored or deflated files. + Requires streams capable of seeking. + + * This is a header file library, like stb_image.c. To get only a header file, either cut and paste the + below header, or create miniz.h, #define MINIZ_HEADER_FILE_ONLY, and then include miniz.c from it. + + * Important: For best perf. be sure to customize the below macros for your target platform: + #define MINIZ_USE_UNALIGNED_LOADS_AND_STORES 1 + #define MINIZ_LITTLE_ENDIAN 1 + #define MINIZ_HAS_64BIT_REGISTERS 1 + + * On platforms using glibc, Be sure to "#define _LARGEFILE64_SOURCE 1" before including miniz.c to ensure miniz + uses the 64-bit variants: fopen64(), stat64(), etc. Otherwise you won't be able to process large files + (i.e. 32-bit stat() fails for me on files > 0x7FFFFFFF bytes). 
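As a concrete illustration of the single-call archive reading mentioned above, a minimal sketch; the archive and member names are placeholders, and the caller is assumed to have included miniz.c or its header:

    /* Read one member of a ZIP archive straight into a heap block.
    ** "example.zip" and "docs/readme.txt" are placeholder names.   */
    size_t nOut = 0;
    void *pData = mz_zip_extract_archive_file_to_heap("example.zip",
                      "docs/readme.txt", &nOut, 0);
    if( pData ){
      /* ... use the nOut bytes at pData ... */
      mz_free(pData);    /* release the block allocated by miniz */
    }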
+*/ + +#ifndef MINIZ_HEADER_INCLUDED +#define MINIZ_HEADER_INCLUDED + +#include + +// Defines to completely disable specific portions of miniz.c: +// If all macros here are defined the only functionality remaining will be CRC-32, adler-32, tinfl, and tdefl. + +// Define MINIZ_NO_STDIO to disable all usage and any functions which rely on stdio for file I/O. +//#define MINIZ_NO_STDIO + +// If MINIZ_NO_TIME is specified then the ZIP archive functions will not be able to get the current time, or +// get/set file times, and the C run-time funcs that get/set times won't be called. +// The current downside is the times written to your archives will be from 1979. +//#define MINIZ_NO_TIME + +// Define MINIZ_NO_ARCHIVE_APIS to disable all ZIP archive API's. +//#define MINIZ_NO_ARCHIVE_APIS + +// Define MINIZ_NO_ARCHIVE_APIS to disable all writing related ZIP archive API's. +//#define MINIZ_NO_ARCHIVE_WRITING_APIS + +// Define MINIZ_NO_ZLIB_APIS to remove all ZLIB-style compression/decompression API's. +//#define MINIZ_NO_ZLIB_APIS + +// Define MINIZ_NO_ZLIB_COMPATIBLE_NAME to disable zlib names, to prevent conflicts against stock zlib. +//#define MINIZ_NO_ZLIB_COMPATIBLE_NAMES + +// Define MINIZ_NO_MALLOC to disable all calls to malloc, free, and realloc. +// Note if MINIZ_NO_MALLOC is defined then the user must always provide custom user alloc/free/realloc +// callbacks to the zlib and archive API's, and a few stand-alone helper API's which don't provide custom user +// functions (such as tdefl_compress_mem_to_heap() and tinfl_decompress_mem_to_heap()) won't work. +//#define MINIZ_NO_MALLOC + +#if defined(__TINYC__) && (defined(__linux) || defined(__linux__)) + // TODO: Work around "error: include file 'sys\utime.h' when compiling with tcc on Linux + #define MINIZ_NO_TIME +#endif + +#if !defined(MINIZ_NO_TIME) && !defined(MINIZ_NO_ARCHIVE_APIS) + #include +#endif + +#if defined(_M_IX86) || defined(_M_X64) || defined(__i386__) || defined(__i386) || defined(__i486__) || defined(__i486) || defined(i386) || defined(__ia64__) || defined(__x86_64__) +// MINIZ_X86_OR_X64_CPU is only used to help set the below macros. +#define MINIZ_X86_OR_X64_CPU 1 +#endif + +#if (__BYTE_ORDER__==__ORDER_LITTLE_ENDIAN__) || MINIZ_X86_OR_X64_CPU +// Set MINIZ_LITTLE_ENDIAN to 1 if the processor is little endian. +#define MINIZ_LITTLE_ENDIAN 1 +#endif + +#if MINIZ_X86_OR_X64_CPU +// Set MINIZ_USE_UNALIGNED_LOADS_AND_STORES to 1 on CPU's that permit efficient integer loads and stores from unaligned addresses. +#define MINIZ_USE_UNALIGNED_LOADS_AND_STORES 1 +#endif + +#if defined(_M_X64) || defined(_WIN64) || defined(__MINGW64__) || defined(_LP64) || defined(__LP64__) || defined(__ia64__) || defined(__x86_64__) +// Set MINIZ_HAS_64BIT_REGISTERS to 1 if operations on 64-bit integers are reasonably fast (and don't involve compiler generated calls to helper functions). +#define MINIZ_HAS_64BIT_REGISTERS 1 +#endif + +#ifdef __cplusplus +extern "C" { +#endif + +// ------------------- zlib-style API Definitions. + +// For more compatibility with zlib, miniz.c uses unsigned long for some parameters/struct members. Beware: mz_ulong can be either 32 or 64-bits! +typedef unsigned long mz_ulong; + +// mz_free() internally uses the MZ_FREE() macro (which by default calls free() unless you've modified the MZ_MALLOC macro) to release a block allocated from the heap. +void mz_free(void *p); + +#define MZ_ADLER32_INIT (1) +// mz_adler32() returns the initial adler-32 value to use when called with ptr==NULL. 
+mz_ulong mz_adler32(mz_ulong adler, const unsigned char *ptr, size_t buf_len); + +#define MZ_CRC32_INIT (0) +// mz_crc32() returns the initial CRC-32 value to use when called with ptr==NULL. +mz_ulong mz_crc32(mz_ulong crc, const unsigned char *ptr, size_t buf_len); + +// Compression strategies. +enum { MZ_DEFAULT_STRATEGY = 0, MZ_FILTERED = 1, MZ_HUFFMAN_ONLY = 2, MZ_RLE = 3, MZ_FIXED = 4 }; + +// Method +#define MZ_DEFLATED 8 + +#ifndef MINIZ_NO_ZLIB_APIS + +// Heap allocation callbacks. +// Note that mz_alloc_func parameter types purpsosely differ from zlib's: items/size is size_t, not unsigned long. +typedef void *(*mz_alloc_func)(void *opaque, size_t items, size_t size); +typedef void (*mz_free_func)(void *opaque, void *address); +typedef void *(*mz_realloc_func)(void *opaque, void *address, size_t items, size_t size); + +#define MZ_VERSION "9.1.15" +#define MZ_VERNUM 0x91F0 +#define MZ_VER_MAJOR 9 +#define MZ_VER_MINOR 1 +#define MZ_VER_REVISION 15 +#define MZ_VER_SUBREVISION 0 + +// Flush values. For typical usage you only need MZ_NO_FLUSH and MZ_FINISH. The other values are for advanced use (refer to the zlib docs). +enum { MZ_NO_FLUSH = 0, MZ_PARTIAL_FLUSH = 1, MZ_SYNC_FLUSH = 2, MZ_FULL_FLUSH = 3, MZ_FINISH = 4, MZ_BLOCK = 5 }; + +// Return status codes. MZ_PARAM_ERROR is non-standard. +enum { MZ_OK = 0, MZ_STREAM_END = 1, MZ_NEED_DICT = 2, MZ_ERRNO = -1, MZ_STREAM_ERROR = -2, MZ_DATA_ERROR = -3, MZ_MEM_ERROR = -4, MZ_BUF_ERROR = -5, MZ_VERSION_ERROR = -6, MZ_PARAM_ERROR = -10000 }; + +// Compression levels: 0-9 are the standard zlib-style levels, 10 is best possible compression (not zlib compatible, and may be very slow), MZ_DEFAULT_COMPRESSION=MZ_DEFAULT_LEVEL. +enum { MZ_NO_COMPRESSION = 0, MZ_BEST_SPEED = 1, MZ_BEST_COMPRESSION = 9, MZ_UBER_COMPRESSION = 10, MZ_DEFAULT_LEVEL = 6, MZ_DEFAULT_COMPRESSION = -1 }; + +// Window bits +#define MZ_DEFAULT_WINDOW_BITS 15 + +struct mz_internal_state; + +// Compression/decompression stream struct. +typedef struct mz_stream_s +{ + const unsigned char *next_in; // pointer to next byte to read + unsigned int avail_in; // number of bytes available at next_in + mz_ulong total_in; // total number of bytes consumed so far + + unsigned char *next_out; // pointer to next byte to write + unsigned int avail_out; // number of bytes that can be written to next_out + mz_ulong total_out; // total number of bytes produced so far + + char *msg; // error msg (unused) + struct mz_internal_state *state; // internal state, allocated by zalloc/zfree + + mz_alloc_func zalloc; // optional heap allocation function (defaults to malloc) + mz_free_func zfree; // optional heap free function (defaults to free) + void *opaque; // heap alloc function user pointer + + int data_type; // data_type (unused) + mz_ulong adler; // adler32 of the source or uncompressed data + mz_ulong reserved; // not used +} mz_stream; + +typedef mz_stream *mz_streamp; + +// Returns the version string of miniz.c. +const char *mz_version(void); + +// mz_deflateInit() initializes a compressor with default options: +// Parameters: +// pStream must point to an initialized mz_stream struct. +// level must be between [MZ_NO_COMPRESSION, MZ_BEST_COMPRESSION]. +// level 1 enables a specially optimized compression function that's been optimized purely for performance, not ratio. +// (This special func. is currently only enabled when MINIZ_USE_UNALIGNED_LOADS_AND_STORES and MINIZ_LITTLE_ENDIAN are defined.) +// Return values: +// MZ_OK on success. +// MZ_STREAM_ERROR if the stream is bogus. 
+// MZ_PARAM_ERROR if the input parameters are bogus. +// MZ_MEM_ERROR on out of memory. +int mz_deflateInit(mz_streamp pStream, int level); + +// mz_deflateInit2() is like mz_deflate(), except with more control: +// Additional parameters: +// method must be MZ_DEFLATED +// window_bits must be MZ_DEFAULT_WINDOW_BITS (to wrap the deflate stream with zlib header/adler-32 footer) or -MZ_DEFAULT_WINDOW_BITS (raw deflate/no header or footer) +// mem_level must be between [1, 9] (it's checked but ignored by miniz.c) +int mz_deflateInit2(mz_streamp pStream, int level, int method, int window_bits, int mem_level, int strategy); + +// Quickly resets a compressor without having to reallocate anything. Same as calling mz_deflateEnd() followed by mz_deflateInit()/mz_deflateInit2(). +int mz_deflateReset(mz_streamp pStream); + +// mz_deflate() compresses the input to output, consuming as much of the input and producing as much output as possible. +// Parameters: +// pStream is the stream to read from and write to. You must initialize/update the next_in, avail_in, next_out, and avail_out members. +// flush may be MZ_NO_FLUSH, MZ_PARTIAL_FLUSH/MZ_SYNC_FLUSH, MZ_FULL_FLUSH, or MZ_FINISH. +// Return values: +// MZ_OK on success (when flushing, or if more input is needed but not available, and/or there's more output to be written but the output buffer is full). +// MZ_STREAM_END if all input has been consumed and all output bytes have been written. Don't call mz_deflate() on the stream anymore. +// MZ_STREAM_ERROR if the stream is bogus. +// MZ_PARAM_ERROR if one of the parameters is invalid. +// MZ_BUF_ERROR if no forward progress is possible because the input and/or output buffers are empty. (Fill up the input buffer or free up some output space and try again.) +int mz_deflate(mz_streamp pStream, int flush); + +// mz_deflateEnd() deinitializes a compressor: +// Return values: +// MZ_OK on success. +// MZ_STREAM_ERROR if the stream is bogus. +int mz_deflateEnd(mz_streamp pStream); + +// mz_deflateBound() returns a (very) conservative upper bound on the amount of data that could be generated by deflate(), assuming flush is set to only MZ_NO_FLUSH or MZ_FINISH. +mz_ulong mz_deflateBound(mz_streamp pStream, mz_ulong source_len); + +// Single-call compression functions mz_compress() and mz_compress2(): +// Returns MZ_OK on success, or one of the error codes from mz_deflate() on failure. +int mz_compress(unsigned char *pDest, mz_ulong *pDest_len, const unsigned char *pSource, mz_ulong source_len); +int mz_compress2(unsigned char *pDest, mz_ulong *pDest_len, const unsigned char *pSource, mz_ulong source_len, int level); + +// mz_compressBound() returns a (very) conservative upper bound on the amount of data that could be generated by calling mz_compress(). +mz_ulong mz_compressBound(mz_ulong source_len); + +// Initializes a decompressor. +int mz_inflateInit(mz_streamp pStream); + +// mz_inflateInit2() is like mz_inflateInit() with an additional option that controls the window size and whether or not the stream has been wrapped with a zlib header/footer: +// window_bits must be MZ_DEFAULT_WINDOW_BITS (to parse zlib header/footer) or -MZ_DEFAULT_WINDOW_BITS (raw deflate). +int mz_inflateInit2(mz_streamp pStream, int window_bits); + +// Decompresses the input stream to the output, consuming only as much of the input as needed, and writing as much to the output as possible. +// Parameters: +// pStream is the stream to read from and write to. 
You must initialize/update the next_in, avail_in, next_out, and avail_out members. +// flush may be MZ_NO_FLUSH, MZ_SYNC_FLUSH, or MZ_FINISH. +// On the first call, if flush is MZ_FINISH it's assumed the input and output buffers are both sized large enough to decompress the entire stream in a single call (this is slightly faster). +// MZ_FINISH implies that there are no more source bytes available beside what's already in the input buffer, and that the output buffer is large enough to hold the rest of the decompressed data. +// Return values: +// MZ_OK on success. Either more input is needed but not available, and/or there's more output to be written but the output buffer is full. +// MZ_STREAM_END if all needed input has been consumed and all output bytes have been written. For zlib streams, the adler-32 of the decompressed data has also been verified. +// MZ_STREAM_ERROR if the stream is bogus. +// MZ_DATA_ERROR if the deflate stream is invalid. +// MZ_PARAM_ERROR if one of the parameters is invalid. +// MZ_BUF_ERROR if no forward progress is possible because the input buffer is empty but the inflater needs more input to continue, or if the output buffer is not large enough. Call mz_inflate() again +// with more input data, or with more room in the output buffer (except when using single call decompression, described above). +int mz_inflate(mz_streamp pStream, int flush); + +// Deinitializes a decompressor. +int mz_inflateEnd(mz_streamp pStream); + +// Single-call decompression. +// Returns MZ_OK on success, or one of the error codes from mz_inflate() on failure. +int mz_uncompress(unsigned char *pDest, mz_ulong *pDest_len, const unsigned char *pSource, mz_ulong source_len); + +// Returns a string description of the specified error code, or NULL if the error code is invalid. +const char *mz_error(int err); + +// Redefine zlib-compatible names to miniz equivalents, so miniz.c can be used as a drop-in replacement for the subset of zlib that miniz.c supports. +// Define MINIZ_NO_ZLIB_COMPATIBLE_NAMES to disable zlib-compatibility if you use zlib in the same project. 
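A minimal sketch of the single-call compress/uncompress round trip declared above; the buffer sizes are illustrative, and mz_compressBound() can be used instead to size the compressed buffer for the worst case:

    /* Round-trip a small buffer through the single-call API. */
    const unsigned char src[] = "hello, hello, hello, hello";
    unsigned char cmp[256], out[sizeof(src)];
    mz_ulong cmpLen = sizeof(cmp);   /* in: capacity, out: compressed size   */
    mz_ulong outLen = sizeof(out);   /* in: capacity, out: uncompressed size */

    if( mz_compress(cmp, &cmpLen, src, (mz_ulong)sizeof(src))==MZ_OK
     && mz_uncompress(out, &outLen, cmp, cmpLen)==MZ_OK ){
      /* on success outLen==sizeof(src) and out[] matches src[] */
    }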
+#ifndef MINIZ_NO_ZLIB_COMPATIBLE_NAMES + typedef unsigned char Byte; + typedef unsigned int uInt; + typedef mz_ulong uLong; + typedef Byte Bytef; + typedef uInt uIntf; + typedef char charf; + typedef int intf; + typedef void *voidpf; + typedef uLong uLongf; + typedef void *voidp; + typedef void *const voidpc; + #define Z_NULL 0 + #define Z_NO_FLUSH MZ_NO_FLUSH + #define Z_PARTIAL_FLUSH MZ_PARTIAL_FLUSH + #define Z_SYNC_FLUSH MZ_SYNC_FLUSH + #define Z_FULL_FLUSH MZ_FULL_FLUSH + #define Z_FINISH MZ_FINISH + #define Z_BLOCK MZ_BLOCK + #define Z_OK MZ_OK + #define Z_STREAM_END MZ_STREAM_END + #define Z_NEED_DICT MZ_NEED_DICT + #define Z_ERRNO MZ_ERRNO + #define Z_STREAM_ERROR MZ_STREAM_ERROR + #define Z_DATA_ERROR MZ_DATA_ERROR + #define Z_MEM_ERROR MZ_MEM_ERROR + #define Z_BUF_ERROR MZ_BUF_ERROR + #define Z_VERSION_ERROR MZ_VERSION_ERROR + #define Z_PARAM_ERROR MZ_PARAM_ERROR + #define Z_NO_COMPRESSION MZ_NO_COMPRESSION + #define Z_BEST_SPEED MZ_BEST_SPEED + #define Z_BEST_COMPRESSION MZ_BEST_COMPRESSION + #define Z_DEFAULT_COMPRESSION MZ_DEFAULT_COMPRESSION + #define Z_DEFAULT_STRATEGY MZ_DEFAULT_STRATEGY + #define Z_FILTERED MZ_FILTERED + #define Z_HUFFMAN_ONLY MZ_HUFFMAN_ONLY + #define Z_RLE MZ_RLE + #define Z_FIXED MZ_FIXED + #define Z_DEFLATED MZ_DEFLATED + #define Z_DEFAULT_WINDOW_BITS MZ_DEFAULT_WINDOW_BITS + #define alloc_func mz_alloc_func + #define free_func mz_free_func + #define internal_state mz_internal_state + #define z_stream mz_stream + #define deflateInit mz_deflateInit + #define deflateInit2 mz_deflateInit2 + #define deflateReset mz_deflateReset + #define deflate mz_deflate + #define deflateEnd mz_deflateEnd + #define deflateBound mz_deflateBound + #define compress mz_compress + #define compress2 mz_compress2 + #define compressBound mz_compressBound + #define inflateInit mz_inflateInit + #define inflateInit2 mz_inflateInit2 + #define inflate mz_inflate + #define inflateEnd mz_inflateEnd + #define uncompress mz_uncompress + #define crc32 mz_crc32 + #define adler32 mz_adler32 + #define MAX_WBITS 15 + #define MAX_MEM_LEVEL 9 + #define zError mz_error + #define ZLIB_VERSION MZ_VERSION + #define ZLIB_VERNUM MZ_VERNUM + #define ZLIB_VER_MAJOR MZ_VER_MAJOR + #define ZLIB_VER_MINOR MZ_VER_MINOR + #define ZLIB_VER_REVISION MZ_VER_REVISION + #define ZLIB_VER_SUBREVISION MZ_VER_SUBREVISION + #define zlibVersion mz_version + #define zlib_version mz_version() +#endif // #ifndef MINIZ_NO_ZLIB_COMPATIBLE_NAMES + +#endif // MINIZ_NO_ZLIB_APIS + +// ------------------- Types and macros + +typedef unsigned char mz_uint8; +typedef signed short mz_int16; +typedef unsigned short mz_uint16; +typedef unsigned int mz_uint32; +typedef unsigned int mz_uint; +typedef long long mz_int64; +typedef unsigned long long mz_uint64; +typedef int mz_bool; + +#define MZ_FALSE (0) +#define MZ_TRUE (1) + +// An attempt to work around MSVC's spammy "warning C4127: conditional expression is constant" message. 
+#ifdef _MSC_VER + #define MZ_MACRO_END while (0, 0) +#else + #define MZ_MACRO_END while (0) +#endif + +// ------------------- ZIP archive reading/writing + +#ifndef MINIZ_NO_ARCHIVE_APIS + +enum +{ + MZ_ZIP_MAX_IO_BUF_SIZE = 64*1024, + MZ_ZIP_MAX_ARCHIVE_FILENAME_SIZE = 260, + MZ_ZIP_MAX_ARCHIVE_FILE_COMMENT_SIZE = 256 +}; + +typedef struct +{ + mz_uint32 m_file_index; + mz_uint32 m_central_dir_ofs; + mz_uint16 m_version_made_by; + mz_uint16 m_version_needed; + mz_uint16 m_bit_flag; + mz_uint16 m_method; +#ifndef MINIZ_NO_TIME + time_t m_time; +#endif + mz_uint32 m_crc32; + mz_uint64 m_comp_size; + mz_uint64 m_uncomp_size; + mz_uint16 m_internal_attr; + mz_uint32 m_external_attr; + mz_uint64 m_local_header_ofs; + mz_uint32 m_comment_size; + char m_filename[MZ_ZIP_MAX_ARCHIVE_FILENAME_SIZE]; + char m_comment[MZ_ZIP_MAX_ARCHIVE_FILE_COMMENT_SIZE]; +} mz_zip_archive_file_stat; + +typedef size_t (*mz_file_read_func)(void *pOpaque, mz_uint64 file_ofs, void *pBuf, size_t n); +typedef size_t (*mz_file_write_func)(void *pOpaque, mz_uint64 file_ofs, const void *pBuf, size_t n); + +struct mz_zip_internal_state_tag; +typedef struct mz_zip_internal_state_tag mz_zip_internal_state; + +typedef enum +{ + MZ_ZIP_MODE_INVALID = 0, + MZ_ZIP_MODE_READING = 1, + MZ_ZIP_MODE_WRITING = 2, + MZ_ZIP_MODE_WRITING_HAS_BEEN_FINALIZED = 3 +} mz_zip_mode; + +typedef struct mz_zip_archive_tag +{ + mz_uint64 m_archive_size; + mz_uint64 m_central_directory_file_ofs; + mz_uint m_total_files; + mz_zip_mode m_zip_mode; + + mz_uint m_file_offset_alignment; + + mz_alloc_func m_pAlloc; + mz_free_func m_pFree; + mz_realloc_func m_pRealloc; + void *m_pAlloc_opaque; + + mz_file_read_func m_pRead; + mz_file_write_func m_pWrite; + void *m_pIO_opaque; + + mz_zip_internal_state *m_pState; + +} mz_zip_archive; + +typedef enum +{ + MZ_ZIP_FLAG_CASE_SENSITIVE = 0x0100, + MZ_ZIP_FLAG_IGNORE_PATH = 0x0200, + MZ_ZIP_FLAG_COMPRESSED_DATA = 0x0400, + MZ_ZIP_FLAG_DO_NOT_SORT_CENTRAL_DIRECTORY = 0x0800 +} mz_zip_flags; + +// ZIP archive reading + +// Inits a ZIP archive reader. +// These functions read and validate the archive's central directory. +mz_bool mz_zip_reader_init(mz_zip_archive *pZip, mz_uint64 size, mz_uint32 flags); +mz_bool mz_zip_reader_init_mem(mz_zip_archive *pZip, const void *pMem, size_t size, mz_uint32 flags); + +#ifndef MINIZ_NO_STDIO +mz_bool mz_zip_reader_init_file(mz_zip_archive *pZip, const char *pFilename, mz_uint32 flags); +#endif + +// Returns the total number of files in the archive. +mz_uint mz_zip_reader_get_num_files(mz_zip_archive *pZip); + +// Returns detailed information about an archive file entry. +mz_bool mz_zip_reader_file_stat(mz_zip_archive *pZip, mz_uint file_index, mz_zip_archive_file_stat *pStat); + +// Determines if an archive file entry is a directory entry. +mz_bool mz_zip_reader_is_file_a_directory(mz_zip_archive *pZip, mz_uint file_index); +mz_bool mz_zip_reader_is_file_encrypted(mz_zip_archive *pZip, mz_uint file_index); + +// Retrieves the filename of an archive file entry. +// Returns the number of bytes written to pFilename, or if filename_buf_size is 0 this function returns the number of bytes needed to fully store the filename. +mz_uint mz_zip_reader_get_filename(mz_zip_archive *pZip, mz_uint file_index, char *pFilename, mz_uint filename_buf_size); + +// Attempts to locates a file in the archive's central directory. +// Valid flags: MZ_ZIP_FLAG_CASE_SENSITIVE, MZ_ZIP_FLAG_IGNORE_PATH +// Returns -1 if the file cannot be found. 
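A short sketch of enumerating an archive with the reader functions declared below; the archive name is a placeholder, the mz_zip_archive struct is assumed to require zero-initialization before mz_zip_reader_init_file(), and <stdio.h>/<string.h> are assumed to be included:

    /* List the members of a ZIP archive; "example.zip" is a placeholder. */
    mz_zip_archive zip;
    memset(&zip, 0, sizeof(zip));        /* reader expects a zeroed struct */
    if( mz_zip_reader_init_file(&zip, "example.zip", 0) ){
      mz_uint i, n = mz_zip_reader_get_num_files(&zip);
      for(i=0; i<n; i++){
        mz_zip_archive_file_stat st;
        if( mz_zip_reader_file_stat(&zip, i, &st) ){
          printf("%s  %u -> %u bytes\n", st.m_filename,
                 (unsigned)st.m_uncomp_size, (unsigned)st.m_comp_size);
        }
      }
      mz_zip_reader_end(&zip);   /* free the central directory, close the file */
    }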
+int mz_zip_reader_locate_file(mz_zip_archive *pZip, const char *pName, const char *pComment, mz_uint flags); + +// Extracts a archive file to a memory buffer using no memory allocation. +mz_bool mz_zip_reader_extract_to_mem_no_alloc(mz_zip_archive *pZip, mz_uint file_index, void *pBuf, size_t buf_size, mz_uint flags, void *pUser_read_buf, size_t user_read_buf_size); +mz_bool mz_zip_reader_extract_file_to_mem_no_alloc(mz_zip_archive *pZip, const char *pFilename, void *pBuf, size_t buf_size, mz_uint flags, void *pUser_read_buf, size_t user_read_buf_size); + +// Extracts a archive file to a memory buffer. +mz_bool mz_zip_reader_extract_to_mem(mz_zip_archive *pZip, mz_uint file_index, void *pBuf, size_t buf_size, mz_uint flags); +mz_bool mz_zip_reader_extract_file_to_mem(mz_zip_archive *pZip, const char *pFilename, void *pBuf, size_t buf_size, mz_uint flags); + +// Extracts a archive file to a dynamically allocated heap buffer. +void *mz_zip_reader_extract_to_heap(mz_zip_archive *pZip, mz_uint file_index, size_t *pSize, mz_uint flags); +void *mz_zip_reader_extract_file_to_heap(mz_zip_archive *pZip, const char *pFilename, size_t *pSize, mz_uint flags); + +// Extracts a archive file using a callback function to output the file's data. +mz_bool mz_zip_reader_extract_to_callback(mz_zip_archive *pZip, mz_uint file_index, mz_file_write_func pCallback, void *pOpaque, mz_uint flags); +mz_bool mz_zip_reader_extract_file_to_callback(mz_zip_archive *pZip, const char *pFilename, mz_file_write_func pCallback, void *pOpaque, mz_uint flags); + +#ifndef MINIZ_NO_STDIO +// Extracts a archive file to a disk file and sets its last accessed and modified times. +// This function only extracts files, not archive directory records. +mz_bool mz_zip_reader_extract_to_file(mz_zip_archive *pZip, mz_uint file_index, const char *pDst_filename, mz_uint flags); +mz_bool mz_zip_reader_extract_file_to_file(mz_zip_archive *pZip, const char *pArchive_filename, const char *pDst_filename, mz_uint flags); +#endif + +// Ends archive reading, freeing all allocations, and closing the input archive file if mz_zip_reader_init_file() was used. +mz_bool mz_zip_reader_end(mz_zip_archive *pZip); + +// ZIP archive writing + +#ifndef MINIZ_NO_ARCHIVE_WRITING_APIS + +// Inits a ZIP archive writer. +mz_bool mz_zip_writer_init(mz_zip_archive *pZip, mz_uint64 existing_size); +mz_bool mz_zip_writer_init_heap(mz_zip_archive *pZip, size_t size_to_reserve_at_beginning, size_t initial_allocation_size); + +#ifndef MINIZ_NO_STDIO +mz_bool mz_zip_writer_init_file(mz_zip_archive *pZip, const char *pFilename, mz_uint64 size_to_reserve_at_beginning); +#endif + +// Converts a ZIP archive reader object into a writer object, to allow efficient in-place file appends to occur on an existing archive. +// For archives opened using mz_zip_reader_init_file, pFilename must be the archive's filename so it can be reopened for writing. If the file can't be reopened, mz_zip_reader_end() will be called. +// For archives opened using mz_zip_reader_init_mem, the memory block must be growable using the realloc callback (which defaults to realloc unless you've overridden it). +// Finally, for archives opened using mz_zip_reader_init, the mz_zip_archive's user provided m_pWrite function cannot be NULL. +// Note: In-place archive modification is not recommended unless you know what you're doing, because if execution stops or something goes wrong before +// the archive is finalized the file's central directory will be hosed. 
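+// Editor's note: illustrative sketch only (not part of the upstream miniz comments); the
+// archive name is hypothetical and error handling is elided. It shows the documented
+// append-in-place flow: open for reading, convert to a writer, add, finalize, end.
+//
+//   mz_zip_archive zip;
+//   memset(&zip, 0, sizeof(zip));
+//   if (mz_zip_reader_init_file(&zip, "existing.zip", 0) &&
+//       mz_zip_writer_init_from_reader(&zip, "existing.zip")) {
+//     mz_zip_writer_add_mem(&zip, "notes/new.txt", "hello", 5, MZ_DEFAULT_COMPRESSION);
+//     mz_zip_writer_finalize_archive(&zip);
+//     mz_zip_writer_end(&zip);
+//   }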
+mz_bool mz_zip_writer_init_from_reader(mz_zip_archive *pZip, const char *pFilename); + +// Adds the contents of a memory buffer to an archive. These functions record the current local time into the archive. +// To add a directory entry, call this method with an archive name ending in a forwardslash with empty buffer. +// level_and_flags - compression level (0-10, see MZ_BEST_SPEED, MZ_BEST_COMPRESSION, etc.) logically OR'd with zero or more mz_zip_flags, or just set to MZ_DEFAULT_COMPRESSION. +mz_bool mz_zip_writer_add_mem(mz_zip_archive *pZip, const char *pArchive_name, const void *pBuf, size_t buf_size, mz_uint level_and_flags); +mz_bool mz_zip_writer_add_mem_ex(mz_zip_archive *pZip, const char *pArchive_name, const void *pBuf, size_t buf_size, const void *pComment, mz_uint16 comment_size, mz_uint level_and_flags, mz_uint64 uncomp_size, mz_uint32 uncomp_crc32); + +#ifndef MINIZ_NO_STDIO +// Adds the contents of a disk file to an archive. This function also records the disk file's modified time into the archive. +// level_and_flags - compression level (0-10, see MZ_BEST_SPEED, MZ_BEST_COMPRESSION, etc.) logically OR'd with zero or more mz_zip_flags, or just set to MZ_DEFAULT_COMPRESSION. +mz_bool mz_zip_writer_add_file(mz_zip_archive *pZip, const char *pArchive_name, const char *pSrc_filename, const void *pComment, mz_uint16 comment_size, mz_uint level_and_flags); +#endif + +// Adds a file to an archive by fully cloning the data from another archive. +// This function fully clones the source file's compressed data (no recompression), along with its full filename, extra data, and comment fields. +mz_bool mz_zip_writer_add_from_zip_reader(mz_zip_archive *pZip, mz_zip_archive *pSource_zip, mz_uint file_index); + +// Finalizes the archive by writing the central directory records followed by the end of central directory record. +// After an archive is finalized, the only valid call on the mz_zip_archive struct is mz_zip_writer_end(). +// An archive must be manually finalized by calling this function for it to be valid. +mz_bool mz_zip_writer_finalize_archive(mz_zip_archive *pZip); +mz_bool mz_zip_writer_finalize_heap_archive(mz_zip_archive *pZip, void **pBuf, size_t *pSize); + +// Ends archive writing, freeing all allocations, and closing the output file if mz_zip_writer_init_file() was used. +// Note for the archive to be valid, it must have been finalized before ending. +mz_bool mz_zip_writer_end(mz_zip_archive *pZip); + +// Misc. high-level helper functions: + +// mz_zip_add_mem_to_archive_file_in_place() efficiently (but not atomically) appends a memory blob to a ZIP archive. +// level_and_flags - compression level (0-10, see MZ_BEST_SPEED, MZ_BEST_COMPRESSION, etc.) logically OR'd with zero or more mz_zip_flags, or just set to MZ_DEFAULT_COMPRESSION. +mz_bool mz_zip_add_mem_to_archive_file_in_place(const char *pZip_filename, const char *pArchive_name, const void *pBuf, size_t buf_size, const void *pComment, mz_uint16 comment_size, mz_uint level_and_flags); + +// Reads a single file from an archive into a heap block. +// Returns NULL on failure. +void *mz_zip_extract_archive_file_to_heap(const char *pZip_filename, const char *pArchive_name, size_t *pSize, mz_uint zip_flags); + +#endif // #ifndef MINIZ_NO_ARCHIVE_WRITING_APIS + +#endif // #ifndef MINIZ_NO_ARCHIVE_APIS + +// ------------------- Low-level Decompression API Definitions + +// Decompression flags used by tinfl_decompress(). 
+// TINFL_FLAG_PARSE_ZLIB_HEADER: If set, the input has a valid zlib header and ends with an adler32 checksum (it's a valid zlib stream). Otherwise, the input is a raw deflate stream. +// TINFL_FLAG_HAS_MORE_INPUT: If set, there are more input bytes available beyond the end of the supplied input buffer. If clear, the input buffer contains all remaining input. +// TINFL_FLAG_USING_NON_WRAPPING_OUTPUT_BUF: If set, the output buffer is large enough to hold the entire decompressed stream. If clear, the output buffer is at least the size of the dictionary (typically 32KB). +// TINFL_FLAG_COMPUTE_ADLER32: Force adler-32 checksum computation of the decompressed bytes. +enum +{ + TINFL_FLAG_PARSE_ZLIB_HEADER = 1, + TINFL_FLAG_HAS_MORE_INPUT = 2, + TINFL_FLAG_USING_NON_WRAPPING_OUTPUT_BUF = 4, + TINFL_FLAG_COMPUTE_ADLER32 = 8 +}; + +// High level decompression functions: +// tinfl_decompress_mem_to_heap() decompresses a block in memory to a heap block allocated via malloc(). +// On entry: +// pSrc_buf, src_buf_len: Pointer and size of the Deflate or zlib source data to decompress. +// On return: +// Function returns a pointer to the decompressed data, or NULL on failure. +// *pOut_len will be set to the decompressed data's size, which could be larger than src_buf_len on uncompressible data. +// The caller must call mz_free() on the returned block when it's no longer needed. +void *tinfl_decompress_mem_to_heap(const void *pSrc_buf, size_t src_buf_len, size_t *pOut_len, int flags); + +// tinfl_decompress_mem_to_mem() decompresses a block in memory to another block in memory. +// Returns TINFL_DECOMPRESS_MEM_TO_MEM_FAILED on failure, or the number of bytes written on success. +#define TINFL_DECOMPRESS_MEM_TO_MEM_FAILED ((size_t)(-1)) +size_t tinfl_decompress_mem_to_mem(void *pOut_buf, size_t out_buf_len, const void *pSrc_buf, size_t src_buf_len, int flags); + +// tinfl_decompress_mem_to_callback() decompresses a block in memory to an internal 32KB buffer, and a user provided callback function will be called to flush the buffer. +// Returns 1 on success or 0 on failure. +typedef int (*tinfl_put_buf_func_ptr)(const void* pBuf, int len, void *pUser); +int tinfl_decompress_mem_to_callback(const void *pIn_buf, size_t *pIn_buf_size, tinfl_put_buf_func_ptr pPut_buf_func, void *pPut_buf_user, int flags); + +struct tinfl_decompressor_tag; typedef struct tinfl_decompressor_tag tinfl_decompressor; + +// Max size of LZ dictionary. +#define TINFL_LZ_DICT_SIZE 32768 + +// Return status. +typedef enum +{ + TINFL_STATUS_BAD_PARAM = -3, + TINFL_STATUS_ADLER32_MISMATCH = -2, + TINFL_STATUS_FAILED = -1, + TINFL_STATUS_DONE = 0, + TINFL_STATUS_NEEDS_MORE_INPUT = 1, + TINFL_STATUS_HAS_MORE_OUTPUT = 2 +} tinfl_status; + +// Initializes the decompressor to its initial state. +#define tinfl_init(r) do { (r)->m_state = 0; } MZ_MACRO_END +#define tinfl_get_adler32(r) (r)->m_check_adler32 + +// Main low-level decompressor coroutine function. This is the only function actually needed for decompression. All the other functions are just high-level helpers for improved usability. +// This is a universal API, i.e. it can be used as a building block to build any desired higher level decompression API. In the limit case, it can be called once per every byte input or output. +tinfl_status tinfl_decompress(tinfl_decompressor *r, const mz_uint8 *pIn_buf_next, size_t *pIn_buf_size, mz_uint8 *pOut_buf_start, mz_uint8 *pOut_buf_next, size_t *pOut_buf_size, const mz_uint32 decomp_flags); + +// Internal/private bits follow. 
+enum +{ + TINFL_MAX_HUFF_TABLES = 3, TINFL_MAX_HUFF_SYMBOLS_0 = 288, TINFL_MAX_HUFF_SYMBOLS_1 = 32, TINFL_MAX_HUFF_SYMBOLS_2 = 19, + TINFL_FAST_LOOKUP_BITS = 10, TINFL_FAST_LOOKUP_SIZE = 1 << TINFL_FAST_LOOKUP_BITS +}; + +typedef struct +{ + mz_uint8 m_code_size[TINFL_MAX_HUFF_SYMBOLS_0]; + mz_int16 m_look_up[TINFL_FAST_LOOKUP_SIZE], m_tree[TINFL_MAX_HUFF_SYMBOLS_0 * 2]; +} tinfl_huff_table; + +#if MINIZ_HAS_64BIT_REGISTERS + #define TINFL_USE_64BIT_BITBUF 1 +#endif + +#if TINFL_USE_64BIT_BITBUF + typedef mz_uint64 tinfl_bit_buf_t; + #define TINFL_BITBUF_SIZE (64) +#else + typedef mz_uint32 tinfl_bit_buf_t; + #define TINFL_BITBUF_SIZE (32) +#endif + +struct tinfl_decompressor_tag +{ + mz_uint32 m_state, m_num_bits, m_zhdr0, m_zhdr1, m_z_adler32, m_final, m_type, m_check_adler32, m_dist, m_counter, m_num_extra, m_table_sizes[TINFL_MAX_HUFF_TABLES]; + tinfl_bit_buf_t m_bit_buf; + size_t m_dist_from_out_buf_start; + tinfl_huff_table m_tables[TINFL_MAX_HUFF_TABLES]; + mz_uint8 m_raw_header[4], m_len_codes[TINFL_MAX_HUFF_SYMBOLS_0 + TINFL_MAX_HUFF_SYMBOLS_1 + 137]; +}; + +// ------------------- Low-level Compression API Definitions + +// Set TDEFL_LESS_MEMORY to 1 to use less memory (compression will be slightly slower, and raw/dynamic blocks will be output more frequently). +#define TDEFL_LESS_MEMORY 0 + +// tdefl_init() compression flags logically OR'd together (low 12 bits contain the max. number of probes per dictionary search): +// TDEFL_DEFAULT_MAX_PROBES: The compressor defaults to 128 dictionary probes per dictionary search. 0=Huffman only, 1=Huffman+LZ (fastest/crap compression), 4095=Huffman+LZ (slowest/best compression). +enum +{ + TDEFL_HUFFMAN_ONLY = 0, TDEFL_DEFAULT_MAX_PROBES = 128, TDEFL_MAX_PROBES_MASK = 0xFFF +}; + +// TDEFL_WRITE_ZLIB_HEADER: If set, the compressor outputs a zlib header before the deflate data, and the Adler-32 of the source data at the end. Otherwise, you'll get raw deflate data. +// TDEFL_COMPUTE_ADLER32: Always compute the adler-32 of the input data (even when not writing zlib headers). +// TDEFL_GREEDY_PARSING_FLAG: Set to use faster greedy parsing, instead of more efficient lazy parsing. +// TDEFL_NONDETERMINISTIC_PARSING_FLAG: Enable to decrease the compressor's initialization time to the minimum, but the output may vary from run to run given the same input (depending on the contents of memory). +// TDEFL_RLE_MATCHES: Only look for RLE matches (matches with a distance of 1) +// TDEFL_FILTER_MATCHES: Discards matches <= 5 chars if enabled. +// TDEFL_FORCE_ALL_STATIC_BLOCKS: Disable usage of optimized Huffman tables. +// TDEFL_FORCE_ALL_RAW_BLOCKS: Only use raw (uncompressed) deflate blocks. +// The low 12 bits are reserved to control the max # of hash probes per dictionary lookup (see TDEFL_MAX_PROBES_MASK). +enum +{ + TDEFL_WRITE_ZLIB_HEADER = 0x01000, + TDEFL_COMPUTE_ADLER32 = 0x02000, + TDEFL_GREEDY_PARSING_FLAG = 0x04000, + TDEFL_NONDETERMINISTIC_PARSING_FLAG = 0x08000, + TDEFL_RLE_MATCHES = 0x10000, + TDEFL_FILTER_MATCHES = 0x20000, + TDEFL_FORCE_ALL_STATIC_BLOCKS = 0x40000, + TDEFL_FORCE_ALL_RAW_BLOCKS = 0x80000 +}; + +// High level compression functions: +// tdefl_compress_mem_to_heap() compresses a block in memory to a heap block allocated via malloc(). +// On entry: +// pSrc_buf, src_buf_len: Pointer and size of source block to compress. +// flags: The max match finder probes (default is 128) logically OR'd against the above flags. Higher probes are slower but improve compression. 
+// On return: +// Function returns a pointer to the compressed data, or NULL on failure. +// *pOut_len will be set to the compressed data's size, which could be larger than src_buf_len on uncompressible data. +// The caller must free() the returned block when it's no longer needed. +void *tdefl_compress_mem_to_heap(const void *pSrc_buf, size_t src_buf_len, size_t *pOut_len, int flags); + +// tdefl_compress_mem_to_mem() compresses a block in memory to another block in memory. +// Returns 0 on failure. +size_t tdefl_compress_mem_to_mem(void *pOut_buf, size_t out_buf_len, const void *pSrc_buf, size_t src_buf_len, int flags); + +// Compresses an image to a compressed PNG file in memory. +// On entry: +// pImage, w, h, and num_chans describe the image to compress. num_chans may be 1, 2, 3, or 4. +// The image pitch in bytes per scanline will be w*num_chans. The leftmost pixel on the top scanline is stored first in memory. +// level may range from [0,10], use MZ_NO_COMPRESSION, MZ_BEST_SPEED, MZ_BEST_COMPRESSION, etc. or a decent default is MZ_DEFAULT_LEVEL +// If flip is true, the image will be flipped on the Y axis (useful for OpenGL apps). +// On return: +// Function returns a pointer to the compressed data, or NULL on failure. +// *pLen_out will be set to the size of the PNG image file. +// The caller must mz_free() the returned heap block (which will typically be larger than *pLen_out) when it's no longer needed. +void *tdefl_write_image_to_png_file_in_memory_ex(const void *pImage, int w, int h, int num_chans, size_t *pLen_out, mz_uint level, mz_bool flip); +void *tdefl_write_image_to_png_file_in_memory(const void *pImage, int w, int h, int num_chans, size_t *pLen_out); + +// Output stream interface. The compressor uses this interface to write compressed data. It'll typically be called TDEFL_OUT_BUF_SIZE at a time. +typedef mz_bool (*tdefl_put_buf_func_ptr)(const void* pBuf, int len, void *pUser); + +// tdefl_compress_mem_to_output() compresses a block to an output stream. The above helpers use this function internally. +mz_bool tdefl_compress_mem_to_output(const void *pBuf, size_t buf_len, tdefl_put_buf_func_ptr pPut_buf_func, void *pPut_buf_user, int flags); + +enum { TDEFL_MAX_HUFF_TABLES = 3, TDEFL_MAX_HUFF_SYMBOLS_0 = 288, TDEFL_MAX_HUFF_SYMBOLS_1 = 32, TDEFL_MAX_HUFF_SYMBOLS_2 = 19, TDEFL_LZ_DICT_SIZE = 32768, TDEFL_LZ_DICT_SIZE_MASK = TDEFL_LZ_DICT_SIZE - 1, TDEFL_MIN_MATCH_LEN = 3, TDEFL_MAX_MATCH_LEN = 258 }; + +// TDEFL_OUT_BUF_SIZE MUST be large enough to hold a single entire compressed output block (using static/fixed Huffman codes). +#if TDEFL_LESS_MEMORY +enum { TDEFL_LZ_CODE_BUF_SIZE = 24 * 1024, TDEFL_OUT_BUF_SIZE = (TDEFL_LZ_CODE_BUF_SIZE * 13 ) / 10, TDEFL_MAX_HUFF_SYMBOLS = 288, TDEFL_LZ_HASH_BITS = 12, TDEFL_LEVEL1_HASH_SIZE_MASK = 4095, TDEFL_LZ_HASH_SHIFT = (TDEFL_LZ_HASH_BITS + 2) / 3, TDEFL_LZ_HASH_SIZE = 1 << TDEFL_LZ_HASH_BITS }; +#else +enum { TDEFL_LZ_CODE_BUF_SIZE = 64 * 1024, TDEFL_OUT_BUF_SIZE = (TDEFL_LZ_CODE_BUF_SIZE * 13 ) / 10, TDEFL_MAX_HUFF_SYMBOLS = 288, TDEFL_LZ_HASH_BITS = 15, TDEFL_LEVEL1_HASH_SIZE_MASK = 4095, TDEFL_LZ_HASH_SHIFT = (TDEFL_LZ_HASH_BITS + 2) / 3, TDEFL_LZ_HASH_SIZE = 1 << TDEFL_LZ_HASH_BITS }; +#endif + +// The low-level tdefl functions below may be used directly if the above helper functions aren't flexible enough. The low-level functions don't make any heap allocations, unlike the above helper functions. 
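+// Editor's note: illustrative sketch only (not part of the upstream miniz comments); the
+// buffer names and sizes are hypothetical and short-output handling is elided. A single-shot
+// call with TDEFL_FINISH assumes pDst is large enough for the whole compressed stream; the
+// tdefl_compressor object is large, so it is heap-allocated here.
+//
+//   tdefl_compressor *pComp = (tdefl_compressor *)malloc(sizeof(tdefl_compressor));
+//   size_t in_len = src_len, out_len = dst_capacity;
+//   if (pComp &&
+//       tdefl_init(pComp, NULL, NULL, TDEFL_WRITE_ZLIB_HEADER | TDEFL_DEFAULT_MAX_PROBES) == TDEFL_STATUS_OKAY &&
+//       tdefl_compress(pComp, pSrc, &in_len, pDst, &out_len, TDEFL_FINISH) == TDEFL_STATUS_DONE) {
+//     /* out_len now holds the number of compressed bytes written to pDst */
+//   }
+//   free(pComp);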
+typedef enum +{ + TDEFL_STATUS_BAD_PARAM = -2, + TDEFL_STATUS_PUT_BUF_FAILED = -1, + TDEFL_STATUS_OKAY = 0, + TDEFL_STATUS_DONE = 1, +} tdefl_status; + +// Must map to MZ_NO_FLUSH, MZ_SYNC_FLUSH, etc. enums +typedef enum +{ + TDEFL_NO_FLUSH = 0, + TDEFL_SYNC_FLUSH = 2, + TDEFL_FULL_FLUSH = 3, + TDEFL_FINISH = 4 +} tdefl_flush; + +// tdefl's compression state structure. +typedef struct +{ + tdefl_put_buf_func_ptr m_pPut_buf_func; + void *m_pPut_buf_user; + mz_uint m_flags, m_max_probes[2]; + int m_greedy_parsing; + mz_uint m_adler32, m_lookahead_pos, m_lookahead_size, m_dict_size; + mz_uint8 *m_pLZ_code_buf, *m_pLZ_flags, *m_pOutput_buf, *m_pOutput_buf_end; + mz_uint m_num_flags_left, m_total_lz_bytes, m_lz_code_buf_dict_pos, m_bits_in, m_bit_buffer; + mz_uint m_saved_match_dist, m_saved_match_len, m_saved_lit, m_output_flush_ofs, m_output_flush_remaining, m_finished, m_block_index, m_wants_to_finish; + tdefl_status m_prev_return_status; + const void *m_pIn_buf; + void *m_pOut_buf; + size_t *m_pIn_buf_size, *m_pOut_buf_size; + tdefl_flush m_flush; + const mz_uint8 *m_pSrc; + size_t m_src_buf_left, m_out_buf_ofs; + mz_uint8 m_dict[TDEFL_LZ_DICT_SIZE + TDEFL_MAX_MATCH_LEN - 1]; + mz_uint16 m_huff_count[TDEFL_MAX_HUFF_TABLES][TDEFL_MAX_HUFF_SYMBOLS]; + mz_uint16 m_huff_codes[TDEFL_MAX_HUFF_TABLES][TDEFL_MAX_HUFF_SYMBOLS]; + mz_uint8 m_huff_code_sizes[TDEFL_MAX_HUFF_TABLES][TDEFL_MAX_HUFF_SYMBOLS]; + mz_uint8 m_lz_code_buf[TDEFL_LZ_CODE_BUF_SIZE]; + mz_uint16 m_next[TDEFL_LZ_DICT_SIZE]; + mz_uint16 m_hash[TDEFL_LZ_HASH_SIZE]; + mz_uint8 m_output_buf[TDEFL_OUT_BUF_SIZE]; +} tdefl_compressor; + +// Initializes the compressor. +// There is no corresponding deinit() function because the tdefl API's do not dynamically allocate memory. +// pBut_buf_func: If NULL, output data will be supplied to the specified callback. In this case, the user should call the tdefl_compress_buffer() API for compression. +// If pBut_buf_func is NULL the user should always call the tdefl_compress() API. +// flags: See the above enums (TDEFL_HUFFMAN_ONLY, TDEFL_WRITE_ZLIB_HEADER, etc.) +tdefl_status tdefl_init(tdefl_compressor *d, tdefl_put_buf_func_ptr pPut_buf_func, void *pPut_buf_user, int flags); + +// Compresses a block of data, consuming as much of the specified input buffer as possible, and writing as much compressed data to the specified output buffer as possible. +tdefl_status tdefl_compress(tdefl_compressor *d, const void *pIn_buf, size_t *pIn_buf_size, void *pOut_buf, size_t *pOut_buf_size, tdefl_flush flush); + +// tdefl_compress_buffer() is only usable when the tdefl_init() is called with a non-NULL tdefl_put_buf_func_ptr. +// tdefl_compress_buffer() always consumes the entire input buffer. +tdefl_status tdefl_compress_buffer(tdefl_compressor *d, const void *pIn_buf, size_t in_buf_size, tdefl_flush flush); + +tdefl_status tdefl_get_prev_return_status(tdefl_compressor *d); +mz_uint32 tdefl_get_adler32(tdefl_compressor *d); + +// Can't use tdefl_create_comp_flags_from_zip_params if MINIZ_NO_ZLIB_APIS isn't defined, because it uses some of its macros. +#ifndef MINIZ_NO_ZLIB_APIS +// Create tdefl_compress() flags given zlib-style compression parameters. 
+// level may range from [0,10] (where 10 is absolute max compression, but may be much slower on some files)
+// window_bits may be -15 (raw deflate) or 15 (zlib)
+// strategy may be either MZ_DEFAULT_STRATEGY, MZ_FILTERED, MZ_HUFFMAN_ONLY, MZ_RLE, or MZ_FIXED
+mz_uint tdefl_create_comp_flags_from_zip_params(int level, int window_bits, int strategy);
+#endif // #ifndef MINIZ_NO_ZLIB_APIS
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif // MINIZ_HEADER_INCLUDED
+
+// ------------------- End of Header: Implementation follows. (If you only want the header, define MINIZ_HEADER_FILE_ONLY.)
+
+#ifndef MINIZ_HEADER_FILE_ONLY
+
+typedef unsigned char mz_validate_uint16[sizeof(mz_uint16)==2 ? 1 : -1];
+typedef unsigned char mz_validate_uint32[sizeof(mz_uint32)==4 ? 1 : -1];
+typedef unsigned char mz_validate_uint64[sizeof(mz_uint64)==8 ? 1 : -1];
+
+#include <string.h>
+#include <assert.h>
+
+#define MZ_ASSERT(x) assert(x)
+
+#ifdef MINIZ_NO_MALLOC
+  #define MZ_MALLOC(x) NULL
+  #define MZ_FREE(x) (void)x, ((void)0)
+  #define MZ_REALLOC(p, x) NULL
+#else
+  #define MZ_MALLOC(x) malloc(x)
+  #define MZ_FREE(x) free(x)
+  #define MZ_REALLOC(p, x) realloc(p, x)
+#endif
+
+#define MZ_MAX(a,b) (((a)>(b))?(a):(b))
+#define MZ_MIN(a,b) (((a)<(b))?(a):(b))
+#define MZ_CLEAR_OBJ(obj) memset(&(obj), 0, sizeof(obj))
+
+#if MINIZ_USE_UNALIGNED_LOADS_AND_STORES && MINIZ_LITTLE_ENDIAN
+  #define MZ_READ_LE16(p) *((const mz_uint16 *)(p))
+  #define MZ_READ_LE32(p) *((const mz_uint32 *)(p))
+#else
+  #define MZ_READ_LE16(p) ((mz_uint32)(((const mz_uint8 *)(p))[0]) | ((mz_uint32)(((const mz_uint8 *)(p))[1]) << 8U))
+  #define MZ_READ_LE32(p) ((mz_uint32)(((const mz_uint8 *)(p))[0]) | ((mz_uint32)(((const mz_uint8 *)(p))[1]) << 8U) | ((mz_uint32)(((const mz_uint8 *)(p))[2]) << 16U) | ((mz_uint32)(((const mz_uint8 *)(p))[3]) << 24U))
+#endif
+
+#ifdef _MSC_VER
+  #define MZ_FORCEINLINE __forceinline
+#elif defined(__GNUC__)
+  #define MZ_FORCEINLINE inline __attribute__((__always_inline__))
+#else
+  #define MZ_FORCEINLINE inline
+#endif
+
+#ifdef __cplusplus
+  extern "C" {
+#endif
+
+// ------------------- zlib-style API's
+
+mz_ulong mz_adler32(mz_ulong adler, const unsigned char *ptr, size_t buf_len)
+{
+  mz_uint32 i, s1 = (mz_uint32)(adler & 0xffff), s2 = (mz_uint32)(adler >> 16); size_t block_len = buf_len % 5552;
+  if (!ptr) return MZ_ADLER32_INIT;
+  while (buf_len) {
+    for (i = 0; i + 7 < block_len; i += 8, ptr += 8) {
+      s1 += ptr[0], s2 += s1; s1 += ptr[1], s2 += s1; s1 += ptr[2], s2 += s1; s1 += ptr[3], s2 += s1;
+      s1 += ptr[4], s2 += s1; s1 += ptr[5], s2 += s1; s1 += ptr[6], s2 += s1; s1 += ptr[7], s2 += s1;
+    }
+    for ( ; i < block_len; ++i) s1 += *ptr++, s2 += s1;
+    s1 %= 65521U, s2 %= 65521U; buf_len -= block_len; block_len = 5552;
+  }
+  return (s2 << 16) + s1;
+}
+
+// Karl Malbrain's compact CRC-32.
See "A compact CCITT crc16 and crc32 C implementation that balances processor cache usage against speed": http://www.geocities.com/malbrain/ +mz_ulong mz_crc32(mz_ulong crc, const mz_uint8 *ptr, size_t buf_len) +{ + static const mz_uint32 s_crc32[16] = { 0, 0x1db71064, 0x3b6e20c8, 0x26d930ac, 0x76dc4190, 0x6b6b51f4, 0x4db26158, 0x5005713c, + 0xedb88320, 0xf00f9344, 0xd6d6a3e8, 0xcb61b38c, 0x9b64c2b0, 0x86d3d2d4, 0xa00ae278, 0xbdbdf21c }; + mz_uint32 crcu32 = (mz_uint32)crc; + if (!ptr) return MZ_CRC32_INIT; + crcu32 = ~crcu32; while (buf_len--) { mz_uint8 b = *ptr++; crcu32 = (crcu32 >> 4) ^ s_crc32[(crcu32 & 0xF) ^ (b & 0xF)]; crcu32 = (crcu32 >> 4) ^ s_crc32[(crcu32 & 0xF) ^ (b >> 4)]; } + return ~crcu32; +} + +void mz_free(void *p) +{ + MZ_FREE(p); +} + +#ifndef MINIZ_NO_ZLIB_APIS + +static void *def_alloc_func(void *opaque, size_t items, size_t size) { (void)opaque, (void)items, (void)size; return MZ_MALLOC(items * size); } +static void def_free_func(void *opaque, void *address) { (void)opaque, (void)address; MZ_FREE(address); } +static void *def_realloc_func(void *opaque, void *address, size_t items, size_t size) { (void)opaque, (void)address, (void)items, (void)size; return MZ_REALLOC(address, items * size); } + +const char *mz_version(void) +{ + return MZ_VERSION; +} + +int mz_deflateInit(mz_streamp pStream, int level) +{ + return mz_deflateInit2(pStream, level, MZ_DEFLATED, MZ_DEFAULT_WINDOW_BITS, 9, MZ_DEFAULT_STRATEGY); +} + +int mz_deflateInit2(mz_streamp pStream, int level, int method, int window_bits, int mem_level, int strategy) +{ + tdefl_compressor *pComp; + mz_uint comp_flags = TDEFL_COMPUTE_ADLER32 | tdefl_create_comp_flags_from_zip_params(level, window_bits, strategy); + + if (!pStream) return MZ_STREAM_ERROR; + if ((method != MZ_DEFLATED) || ((mem_level < 1) || (mem_level > 9)) || ((window_bits != MZ_DEFAULT_WINDOW_BITS) && (-window_bits != MZ_DEFAULT_WINDOW_BITS))) return MZ_PARAM_ERROR; + + pStream->data_type = 0; + pStream->adler = MZ_ADLER32_INIT; + pStream->msg = NULL; + pStream->reserved = 0; + pStream->total_in = 0; + pStream->total_out = 0; + if (!pStream->zalloc) pStream->zalloc = def_alloc_func; + if (!pStream->zfree) pStream->zfree = def_free_func; + + pComp = (tdefl_compressor *)pStream->zalloc(pStream->opaque, 1, sizeof(tdefl_compressor)); + if (!pComp) + return MZ_MEM_ERROR; + + pStream->state = (struct mz_internal_state *)pComp; + + if (tdefl_init(pComp, NULL, NULL, comp_flags) != TDEFL_STATUS_OKAY) + { + mz_deflateEnd(pStream); + return MZ_PARAM_ERROR; + } + + return MZ_OK; +} + +int mz_deflateReset(mz_streamp pStream) +{ + if ((!pStream) || (!pStream->state) || (!pStream->zalloc) || (!pStream->zfree)) return MZ_STREAM_ERROR; + pStream->total_in = pStream->total_out = 0; + tdefl_init((tdefl_compressor*)pStream->state, NULL, NULL, ((tdefl_compressor*)pStream->state)->m_flags); + return MZ_OK; +} + +int mz_deflate(mz_streamp pStream, int flush) +{ + size_t in_bytes, out_bytes; + mz_ulong orig_total_in, orig_total_out; + int mz_status = MZ_OK; + + if ((!pStream) || (!pStream->state) || (flush < 0) || (flush > MZ_FINISH) || (!pStream->next_out)) return MZ_STREAM_ERROR; + if (!pStream->avail_out) return MZ_BUF_ERROR; + + if (flush == MZ_PARTIAL_FLUSH) flush = MZ_SYNC_FLUSH; + + if (((tdefl_compressor*)pStream->state)->m_prev_return_status == TDEFL_STATUS_DONE) + return (flush == MZ_FINISH) ? 
MZ_STREAM_END : MZ_BUF_ERROR; + + orig_total_in = pStream->total_in; orig_total_out = pStream->total_out; + for ( ; ; ) + { + tdefl_status defl_status; + in_bytes = pStream->avail_in; out_bytes = pStream->avail_out; + + defl_status = tdefl_compress((tdefl_compressor*)pStream->state, pStream->next_in, &in_bytes, pStream->next_out, &out_bytes, (tdefl_flush)flush); + pStream->next_in += (mz_uint)in_bytes; pStream->avail_in -= (mz_uint)in_bytes; + pStream->total_in += (mz_uint)in_bytes; pStream->adler = tdefl_get_adler32((tdefl_compressor*)pStream->state); + + pStream->next_out += (mz_uint)out_bytes; pStream->avail_out -= (mz_uint)out_bytes; + pStream->total_out += (mz_uint)out_bytes; + + if (defl_status < 0) + { + mz_status = MZ_STREAM_ERROR; + break; + } + else if (defl_status == TDEFL_STATUS_DONE) + { + mz_status = MZ_STREAM_END; + break; + } + else if (!pStream->avail_out) + break; + else if ((!pStream->avail_in) && (flush != MZ_FINISH)) + { + if ((flush) || (pStream->total_in != orig_total_in) || (pStream->total_out != orig_total_out)) + break; + return MZ_BUF_ERROR; // Can't make forward progress without some input. + } + } + return mz_status; +} + +int mz_deflateEnd(mz_streamp pStream) +{ + if (!pStream) return MZ_STREAM_ERROR; + if (pStream->state) + { + pStream->zfree(pStream->opaque, pStream->state); + pStream->state = NULL; + } + return MZ_OK; +} + +mz_ulong mz_deflateBound(mz_streamp pStream, mz_ulong source_len) +{ + (void)pStream; + // This is really over conservative. (And lame, but it's actually pretty tricky to compute a true upper bound given the way tdefl's blocking works.) + return MZ_MAX(128 + (source_len * 110) / 100, 128 + source_len + ((source_len / (31 * 1024)) + 1) * 5); +} + +int mz_compress2(unsigned char *pDest, mz_ulong *pDest_len, const unsigned char *pSource, mz_ulong source_len, int level) +{ + int status; + mz_stream stream; + memset(&stream, 0, sizeof(stream)); + + // In case mz_ulong is 64-bits (argh I hate longs). + if ((source_len | *pDest_len) > 0xFFFFFFFFU) return MZ_PARAM_ERROR; + + stream.next_in = pSource; + stream.avail_in = (mz_uint32)source_len; + stream.next_out = pDest; + stream.avail_out = (mz_uint32)*pDest_len; + + status = mz_deflateInit(&stream, level); + if (status != MZ_OK) return status; + + status = mz_deflate(&stream, MZ_FINISH); + if (status != MZ_STREAM_END) + { + mz_deflateEnd(&stream); + return (status == MZ_OK) ? 
MZ_BUF_ERROR : status; + } + + *pDest_len = stream.total_out; + return mz_deflateEnd(&stream); +} + +int mz_compress(unsigned char *pDest, mz_ulong *pDest_len, const unsigned char *pSource, mz_ulong source_len) +{ + return mz_compress2(pDest, pDest_len, pSource, source_len, MZ_DEFAULT_COMPRESSION); +} + +mz_ulong mz_compressBound(mz_ulong source_len) +{ + return mz_deflateBound(NULL, source_len); +} + +typedef struct +{ + tinfl_decompressor m_decomp; + mz_uint m_dict_ofs, m_dict_avail, m_first_call, m_has_flushed; int m_window_bits; + mz_uint8 m_dict[TINFL_LZ_DICT_SIZE]; + tinfl_status m_last_status; +} inflate_state; + +int mz_inflateInit2(mz_streamp pStream, int window_bits) +{ + inflate_state *pDecomp; + if (!pStream) return MZ_STREAM_ERROR; + if ((window_bits != MZ_DEFAULT_WINDOW_BITS) && (-window_bits != MZ_DEFAULT_WINDOW_BITS)) return MZ_PARAM_ERROR; + + pStream->data_type = 0; + pStream->adler = 0; + pStream->msg = NULL; + pStream->total_in = 0; + pStream->total_out = 0; + pStream->reserved = 0; + if (!pStream->zalloc) pStream->zalloc = def_alloc_func; + if (!pStream->zfree) pStream->zfree = def_free_func; + + pDecomp = (inflate_state*)pStream->zalloc(pStream->opaque, 1, sizeof(inflate_state)); + if (!pDecomp) return MZ_MEM_ERROR; + + pStream->state = (struct mz_internal_state *)pDecomp; + + tinfl_init(&pDecomp->m_decomp); + pDecomp->m_dict_ofs = 0; + pDecomp->m_dict_avail = 0; + pDecomp->m_last_status = TINFL_STATUS_NEEDS_MORE_INPUT; + pDecomp->m_first_call = 1; + pDecomp->m_has_flushed = 0; + pDecomp->m_window_bits = window_bits; + + return MZ_OK; +} + +int mz_inflateInit(mz_streamp pStream) +{ + return mz_inflateInit2(pStream, MZ_DEFAULT_WINDOW_BITS); +} + +int mz_inflate(mz_streamp pStream, int flush) +{ + inflate_state* pState; + mz_uint n, first_call, decomp_flags = TINFL_FLAG_COMPUTE_ADLER32; + size_t in_bytes, out_bytes, orig_avail_in; + tinfl_status status; + + if ((!pStream) || (!pStream->state)) return MZ_STREAM_ERROR; + if (flush == MZ_PARTIAL_FLUSH) flush = MZ_SYNC_FLUSH; + if ((flush) && (flush != MZ_SYNC_FLUSH) && (flush != MZ_FINISH)) return MZ_STREAM_ERROR; + + pState = (inflate_state*)pStream->state; + if (pState->m_window_bits > 0) decomp_flags |= TINFL_FLAG_PARSE_ZLIB_HEADER; + orig_avail_in = pStream->avail_in; + + first_call = pState->m_first_call; pState->m_first_call = 0; + if (pState->m_last_status < 0) return MZ_DATA_ERROR; + + if (pState->m_has_flushed && (flush != MZ_FINISH)) return MZ_STREAM_ERROR; + pState->m_has_flushed |= (flush == MZ_FINISH); + + if ((flush == MZ_FINISH) && (first_call)) + { + // MZ_FINISH on the first call implies that the input and output buffers are large enough to hold the entire compressed/decompressed file. 
+ decomp_flags |= TINFL_FLAG_USING_NON_WRAPPING_OUTPUT_BUF; + in_bytes = pStream->avail_in; out_bytes = pStream->avail_out; + status = tinfl_decompress(&pState->m_decomp, pStream->next_in, &in_bytes, pStream->next_out, pStream->next_out, &out_bytes, decomp_flags); + pState->m_last_status = status; + pStream->next_in += (mz_uint)in_bytes; pStream->avail_in -= (mz_uint)in_bytes; pStream->total_in += (mz_uint)in_bytes; + pStream->adler = tinfl_get_adler32(&pState->m_decomp); + pStream->next_out += (mz_uint)out_bytes; pStream->avail_out -= (mz_uint)out_bytes; pStream->total_out += (mz_uint)out_bytes; + + if (status < 0) + return MZ_DATA_ERROR; + else if (status != TINFL_STATUS_DONE) + { + pState->m_last_status = TINFL_STATUS_FAILED; + return MZ_BUF_ERROR; + } + return MZ_STREAM_END; + } + // flush != MZ_FINISH then we must assume there's more input. + if (flush != MZ_FINISH) decomp_flags |= TINFL_FLAG_HAS_MORE_INPUT; + + if (pState->m_dict_avail) + { + n = MZ_MIN(pState->m_dict_avail, pStream->avail_out); + memcpy(pStream->next_out, pState->m_dict + pState->m_dict_ofs, n); + pStream->next_out += n; pStream->avail_out -= n; pStream->total_out += n; + pState->m_dict_avail -= n; pState->m_dict_ofs = (pState->m_dict_ofs + n) & (TINFL_LZ_DICT_SIZE - 1); + return ((pState->m_last_status == TINFL_STATUS_DONE) && (!pState->m_dict_avail)) ? MZ_STREAM_END : MZ_OK; + } + + for ( ; ; ) + { + in_bytes = pStream->avail_in; + out_bytes = TINFL_LZ_DICT_SIZE - pState->m_dict_ofs; + + status = tinfl_decompress(&pState->m_decomp, pStream->next_in, &in_bytes, pState->m_dict, pState->m_dict + pState->m_dict_ofs, &out_bytes, decomp_flags); + pState->m_last_status = status; + + pStream->next_in += (mz_uint)in_bytes; pStream->avail_in -= (mz_uint)in_bytes; + pStream->total_in += (mz_uint)in_bytes; pStream->adler = tinfl_get_adler32(&pState->m_decomp); + + pState->m_dict_avail = (mz_uint)out_bytes; + + n = MZ_MIN(pState->m_dict_avail, pStream->avail_out); + memcpy(pStream->next_out, pState->m_dict + pState->m_dict_ofs, n); + pStream->next_out += n; pStream->avail_out -= n; pStream->total_out += n; + pState->m_dict_avail -= n; pState->m_dict_ofs = (pState->m_dict_ofs + n) & (TINFL_LZ_DICT_SIZE - 1); + + if (status < 0) + return MZ_DATA_ERROR; // Stream is corrupted (there could be some uncompressed data left in the output dictionary - oh well). + else if ((status == TINFL_STATUS_NEEDS_MORE_INPUT) && (!orig_avail_in)) + return MZ_BUF_ERROR; // Signal caller that we can't make forward progress without supplying more input or by setting flush to MZ_FINISH. + else if (flush == MZ_FINISH) + { + // The output buffer MUST be large to hold the remaining uncompressed data when flush==MZ_FINISH. + if (status == TINFL_STATUS_DONE) + return pState->m_dict_avail ? MZ_BUF_ERROR : MZ_STREAM_END; + // status here must be TINFL_STATUS_HAS_MORE_OUTPUT, which means there's at least 1 more byte on the way. If there's no more room left in the output buffer then something is wrong. + else if (!pStream->avail_out) + return MZ_BUF_ERROR; + } + else if ((status == TINFL_STATUS_DONE) || (!pStream->avail_in) || (!pStream->avail_out) || (pState->m_dict_avail)) + break; + } + + return ((status == TINFL_STATUS_DONE) && (!pState->m_dict_avail)) ? 
MZ_STREAM_END : MZ_OK; +} + +int mz_inflateEnd(mz_streamp pStream) +{ + if (!pStream) + return MZ_STREAM_ERROR; + if (pStream->state) + { + pStream->zfree(pStream->opaque, pStream->state); + pStream->state = NULL; + } + return MZ_OK; +} + +int mz_uncompress(unsigned char *pDest, mz_ulong *pDest_len, const unsigned char *pSource, mz_ulong source_len) +{ + mz_stream stream; + int status; + memset(&stream, 0, sizeof(stream)); + + // In case mz_ulong is 64-bits (argh I hate longs). + if ((source_len | *pDest_len) > 0xFFFFFFFFU) return MZ_PARAM_ERROR; + + stream.next_in = pSource; + stream.avail_in = (mz_uint32)source_len; + stream.next_out = pDest; + stream.avail_out = (mz_uint32)*pDest_len; + + status = mz_inflateInit(&stream); + if (status != MZ_OK) + return status; + + status = mz_inflate(&stream, MZ_FINISH); + if (status != MZ_STREAM_END) + { + mz_inflateEnd(&stream); + return ((status == MZ_BUF_ERROR) && (!stream.avail_in)) ? MZ_DATA_ERROR : status; + } + *pDest_len = stream.total_out; + + return mz_inflateEnd(&stream); +} + +const char *mz_error(int err) +{ + static struct { int m_err; const char *m_pDesc; } s_error_descs[] = + { + { MZ_OK, "" }, { MZ_STREAM_END, "stream end" }, { MZ_NEED_DICT, "need dictionary" }, { MZ_ERRNO, "file error" }, { MZ_STREAM_ERROR, "stream error" }, + { MZ_DATA_ERROR, "data error" }, { MZ_MEM_ERROR, "out of memory" }, { MZ_BUF_ERROR, "buf error" }, { MZ_VERSION_ERROR, "version error" }, { MZ_PARAM_ERROR, "parameter error" } + }; + mz_uint i; for (i = 0; i < sizeof(s_error_descs) / sizeof(s_error_descs[0]); ++i) if (s_error_descs[i].m_err == err) return s_error_descs[i].m_pDesc; + return NULL; +} + +#endif //MINIZ_NO_ZLIB_APIS + +// ------------------- Low-level Decompression (completely independent from all compression API's) + +#define TINFL_MEMCPY(d, s, l) memcpy(d, s, l) +#define TINFL_MEMSET(p, c, l) memset(p, c, l) + +#define TINFL_CR_BEGIN switch(r->m_state) { case 0: +#define TINFL_CR_RETURN(state_index, result) do { status = result; r->m_state = state_index; goto common_exit; case state_index:; } MZ_MACRO_END +#define TINFL_CR_RETURN_FOREVER(state_index, result) do { for ( ; ; ) { TINFL_CR_RETURN(state_index, result); } } MZ_MACRO_END +#define TINFL_CR_FINISH } + +// TODO: If the caller has indicated that there's no more input, and we attempt to read beyond the input buf, then something is wrong with the input because the inflator never +// reads ahead more than it needs to. Currently TINFL_GET_BYTE() pads the end of the stream with 0's in this scenario. 
+#define TINFL_GET_BYTE(state_index, c) do { \ + if (pIn_buf_cur >= pIn_buf_end) { \ + for ( ; ; ) { \ + if (decomp_flags & TINFL_FLAG_HAS_MORE_INPUT) { \ + TINFL_CR_RETURN(state_index, TINFL_STATUS_NEEDS_MORE_INPUT); \ + if (pIn_buf_cur < pIn_buf_end) { \ + c = *pIn_buf_cur++; \ + break; \ + } \ + } else { \ + c = 0; \ + break; \ + } \ + } \ + } else c = *pIn_buf_cur++; } MZ_MACRO_END + +#define TINFL_NEED_BITS(state_index, n) do { mz_uint c; TINFL_GET_BYTE(state_index, c); bit_buf |= (((tinfl_bit_buf_t)c) << num_bits); num_bits += 8; } while (num_bits < (mz_uint)(n)) +#define TINFL_SKIP_BITS(state_index, n) do { if (num_bits < (mz_uint)(n)) { TINFL_NEED_BITS(state_index, n); } bit_buf >>= (n); num_bits -= (n); } MZ_MACRO_END +#define TINFL_GET_BITS(state_index, b, n) do { if (num_bits < (mz_uint)(n)) { TINFL_NEED_BITS(state_index, n); } b = bit_buf & ((1 << (n)) - 1); bit_buf >>= (n); num_bits -= (n); } MZ_MACRO_END + +// TINFL_HUFF_BITBUF_FILL() is only used rarely, when the number of bytes remaining in the input buffer falls below 2. +// It reads just enough bytes from the input stream that are needed to decode the next Huffman code (and absolutely no more). It works by trying to fully decode a +// Huffman code by using whatever bits are currently present in the bit buffer. If this fails, it reads another byte, and tries again until it succeeds or until the +// bit buffer contains >=15 bits (deflate's max. Huffman code size). +#define TINFL_HUFF_BITBUF_FILL(state_index, pHuff) \ + do { \ + temp = (pHuff)->m_look_up[bit_buf & (TINFL_FAST_LOOKUP_SIZE - 1)]; \ + if (temp >= 0) { \ + code_len = temp >> 9; \ + if ((code_len) && (num_bits >= code_len)) \ + break; \ + } else if (num_bits > TINFL_FAST_LOOKUP_BITS) { \ + code_len = TINFL_FAST_LOOKUP_BITS; \ + do { \ + temp = (pHuff)->m_tree[~temp + ((bit_buf >> code_len++) & 1)]; \ + } while ((temp < 0) && (num_bits >= (code_len + 1))); if (temp >= 0) break; \ + } TINFL_GET_BYTE(state_index, c); bit_buf |= (((tinfl_bit_buf_t)c) << num_bits); num_bits += 8; \ + } while (num_bits < 15); + +// TINFL_HUFF_DECODE() decodes the next Huffman coded symbol. It's more complex than you would initially expect because the zlib API expects the decompressor to never read +// beyond the final byte of the deflate stream. (In other words, when this macro wants to read another byte from the input, it REALLY needs another byte in order to fully +// decode the next Huffman code.) Handling this properly is particularly important on raw deflate (non-zlib) streams, which aren't followed by a byte aligned adler-32. +// The slow path is only executed at the very end of the input buffer. 
+#define TINFL_HUFF_DECODE(state_index, sym, pHuff) do { \ + int temp; mz_uint code_len, c; \ + if (num_bits < 15) { \ + if ((pIn_buf_end - pIn_buf_cur) < 2) { \ + TINFL_HUFF_BITBUF_FILL(state_index, pHuff); \ + } else { \ + bit_buf |= (((tinfl_bit_buf_t)pIn_buf_cur[0]) << num_bits) | (((tinfl_bit_buf_t)pIn_buf_cur[1]) << (num_bits + 8)); pIn_buf_cur += 2; num_bits += 16; \ + } \ + } \ + if ((temp = (pHuff)->m_look_up[bit_buf & (TINFL_FAST_LOOKUP_SIZE - 1)]) >= 0) \ + code_len = temp >> 9, temp &= 511; \ + else { \ + code_len = TINFL_FAST_LOOKUP_BITS; do { temp = (pHuff)->m_tree[~temp + ((bit_buf >> code_len++) & 1)]; } while (temp < 0); \ + } sym = temp; bit_buf >>= code_len; num_bits -= code_len; } MZ_MACRO_END + +tinfl_status tinfl_decompress(tinfl_decompressor *r, const mz_uint8 *pIn_buf_next, size_t *pIn_buf_size, mz_uint8 *pOut_buf_start, mz_uint8 *pOut_buf_next, size_t *pOut_buf_size, const mz_uint32 decomp_flags) +{ + static const int s_length_base[31] = { 3,4,5,6,7,8,9,10,11,13, 15,17,19,23,27,31,35,43,51,59, 67,83,99,115,131,163,195,227,258,0,0 }; + static const int s_length_extra[31]= { 0,0,0,0,0,0,0,0,1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4,5,5,5,5,0,0,0 }; + static const int s_dist_base[32] = { 1,2,3,4,5,7,9,13,17,25,33,49,65,97,129,193, 257,385,513,769,1025,1537,2049,3073,4097,6145,8193,12289,16385,24577,0,0}; + static const int s_dist_extra[32] = { 0,0,0,0,1,1,2,2,3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,12,12,13,13}; + static const mz_uint8 s_length_dezigzag[19] = { 16,17,18,0,8,7,9,6,10,5,11,4,12,3,13,2,14,1,15 }; + static const int s_min_table_sizes[3] = { 257, 1, 4 }; + + tinfl_status status = TINFL_STATUS_FAILED; mz_uint32 num_bits, dist, counter, num_extra; tinfl_bit_buf_t bit_buf; + const mz_uint8 *pIn_buf_cur = pIn_buf_next, *const pIn_buf_end = pIn_buf_next + *pIn_buf_size; + mz_uint8 *pOut_buf_cur = pOut_buf_next, *const pOut_buf_end = pOut_buf_next + *pOut_buf_size; + size_t out_buf_size_mask = (decomp_flags & TINFL_FLAG_USING_NON_WRAPPING_OUTPUT_BUF) ? (size_t)-1 : ((pOut_buf_next - pOut_buf_start) + *pOut_buf_size) - 1, dist_from_out_buf_start; + + // Ensure the output buffer's size is a power of 2, unless the output buffer is large enough to hold the entire output file (in which case it doesn't matter). 
+ if (((out_buf_size_mask + 1) & out_buf_size_mask) || (pOut_buf_next < pOut_buf_start)) { *pIn_buf_size = *pOut_buf_size = 0; return TINFL_STATUS_BAD_PARAM; } + + num_bits = r->m_num_bits; bit_buf = r->m_bit_buf; dist = r->m_dist; counter = r->m_counter; num_extra = r->m_num_extra; dist_from_out_buf_start = r->m_dist_from_out_buf_start; + TINFL_CR_BEGIN + + bit_buf = num_bits = dist = counter = num_extra = r->m_zhdr0 = r->m_zhdr1 = 0; r->m_z_adler32 = r->m_check_adler32 = 1; + if (decomp_flags & TINFL_FLAG_PARSE_ZLIB_HEADER) + { + TINFL_GET_BYTE(1, r->m_zhdr0); TINFL_GET_BYTE(2, r->m_zhdr1); + counter = (((r->m_zhdr0 * 256 + r->m_zhdr1) % 31 != 0) || (r->m_zhdr1 & 32) || ((r->m_zhdr0 & 15) != 8)); + if (!(decomp_flags & TINFL_FLAG_USING_NON_WRAPPING_OUTPUT_BUF)) counter |= (((1U << (8U + (r->m_zhdr0 >> 4))) > 32768U) || ((out_buf_size_mask + 1) < (size_t)(1U << (8U + (r->m_zhdr0 >> 4))))); + if (counter) { TINFL_CR_RETURN_FOREVER(36, TINFL_STATUS_FAILED); } + } + + do + { + TINFL_GET_BITS(3, r->m_final, 3); r->m_type = r->m_final >> 1; + if (r->m_type == 0) + { + TINFL_SKIP_BITS(5, num_bits & 7); + for (counter = 0; counter < 4; ++counter) { if (num_bits) TINFL_GET_BITS(6, r->m_raw_header[counter], 8); else TINFL_GET_BYTE(7, r->m_raw_header[counter]); } + if ((counter = (r->m_raw_header[0] | (r->m_raw_header[1] << 8))) != (mz_uint)(0xFFFF ^ (r->m_raw_header[2] | (r->m_raw_header[3] << 8)))) { TINFL_CR_RETURN_FOREVER(39, TINFL_STATUS_FAILED); } + while ((counter) && (num_bits)) + { + TINFL_GET_BITS(51, dist, 8); + while (pOut_buf_cur >= pOut_buf_end) { TINFL_CR_RETURN(52, TINFL_STATUS_HAS_MORE_OUTPUT); } + *pOut_buf_cur++ = (mz_uint8)dist; + counter--; + } + while (counter) + { + size_t n; while (pOut_buf_cur >= pOut_buf_end) { TINFL_CR_RETURN(9, TINFL_STATUS_HAS_MORE_OUTPUT); } + while (pIn_buf_cur >= pIn_buf_end) + { + if (decomp_flags & TINFL_FLAG_HAS_MORE_INPUT) + { + TINFL_CR_RETURN(38, TINFL_STATUS_NEEDS_MORE_INPUT); + } + else + { + TINFL_CR_RETURN_FOREVER(40, TINFL_STATUS_FAILED); + } + } + n = MZ_MIN(MZ_MIN((size_t)(pOut_buf_end - pOut_buf_cur), (size_t)(pIn_buf_end - pIn_buf_cur)), counter); + TINFL_MEMCPY(pOut_buf_cur, pIn_buf_cur, n); pIn_buf_cur += n; pOut_buf_cur += n; counter -= (mz_uint)n; + } + } + else if (r->m_type == 3) + { + TINFL_CR_RETURN_FOREVER(10, TINFL_STATUS_FAILED); + } + else + { + if (r->m_type == 1) + { + mz_uint8 *p = r->m_tables[0].m_code_size; mz_uint i; + r->m_table_sizes[0] = 288; r->m_table_sizes[1] = 32; TINFL_MEMSET(r->m_tables[1].m_code_size, 5, 32); + for ( i = 0; i <= 143; ++i) *p++ = 8; for ( ; i <= 255; ++i) *p++ = 9; for ( ; i <= 279; ++i) *p++ = 7; for ( ; i <= 287; ++i) *p++ = 8; + } + else + { + for (counter = 0; counter < 3; counter++) { TINFL_GET_BITS(11, r->m_table_sizes[counter], "\05\05\04"[counter]); r->m_table_sizes[counter] += s_min_table_sizes[counter]; } + MZ_CLEAR_OBJ(r->m_tables[2].m_code_size); for (counter = 0; counter < r->m_table_sizes[2]; counter++) { mz_uint s; TINFL_GET_BITS(14, s, 3); r->m_tables[2].m_code_size[s_length_dezigzag[counter]] = (mz_uint8)s; } + r->m_table_sizes[2] = 19; + } + for ( ; (int)r->m_type >= 0; r->m_type--) + { + int tree_next, tree_cur; tinfl_huff_table *pTable; + mz_uint i, j, used_syms, total, sym_index, next_code[17], total_syms[16]; pTable = &r->m_tables[r->m_type]; MZ_CLEAR_OBJ(total_syms); MZ_CLEAR_OBJ(pTable->m_look_up); MZ_CLEAR_OBJ(pTable->m_tree); + for (i = 0; i < r->m_table_sizes[r->m_type]; ++i) total_syms[pTable->m_code_size[i]]++; + used_syms = 0, total = 0; next_code[0] = 
next_code[1] = 0; + for (i = 1; i <= 15; ++i) { used_syms += total_syms[i]; next_code[i + 1] = (total = ((total + total_syms[i]) << 1)); } + if ((65536 != total) && (used_syms > 1)) + { + TINFL_CR_RETURN_FOREVER(35, TINFL_STATUS_FAILED); + } + for (tree_next = -1, sym_index = 0; sym_index < r->m_table_sizes[r->m_type]; ++sym_index) + { + mz_uint rev_code = 0, l, cur_code, code_size = pTable->m_code_size[sym_index]; if (!code_size) continue; + cur_code = next_code[code_size]++; for (l = code_size; l > 0; l--, cur_code >>= 1) rev_code = (rev_code << 1) | (cur_code & 1); + if (code_size <= TINFL_FAST_LOOKUP_BITS) { mz_int16 k = (mz_int16)((code_size << 9) | sym_index); while (rev_code < TINFL_FAST_LOOKUP_SIZE) { pTable->m_look_up[rev_code] = k; rev_code += (1 << code_size); } continue; } + if (0 == (tree_cur = pTable->m_look_up[rev_code & (TINFL_FAST_LOOKUP_SIZE - 1)])) { pTable->m_look_up[rev_code & (TINFL_FAST_LOOKUP_SIZE - 1)] = (mz_int16)tree_next; tree_cur = tree_next; tree_next -= 2; } + rev_code >>= (TINFL_FAST_LOOKUP_BITS - 1); + for (j = code_size; j > (TINFL_FAST_LOOKUP_BITS + 1); j--) + { + tree_cur -= ((rev_code >>= 1) & 1); + if (!pTable->m_tree[-tree_cur - 1]) { pTable->m_tree[-tree_cur - 1] = (mz_int16)tree_next; tree_cur = tree_next; tree_next -= 2; } else tree_cur = pTable->m_tree[-tree_cur - 1]; + } + tree_cur -= ((rev_code >>= 1) & 1); pTable->m_tree[-tree_cur - 1] = (mz_int16)sym_index; + } + if (r->m_type == 2) + { + for (counter = 0; counter < (r->m_table_sizes[0] + r->m_table_sizes[1]); ) + { + mz_uint s; TINFL_HUFF_DECODE(16, dist, &r->m_tables[2]); if (dist < 16) { r->m_len_codes[counter++] = (mz_uint8)dist; continue; } + if ((dist == 16) && (!counter)) + { + TINFL_CR_RETURN_FOREVER(17, TINFL_STATUS_FAILED); + } + num_extra = "\02\03\07"[dist - 16]; TINFL_GET_BITS(18, s, num_extra); s += "\03\03\013"[dist - 16]; + TINFL_MEMSET(r->m_len_codes + counter, (dist == 16) ? 
r->m_len_codes[counter - 1] : 0, s); counter += s; + } + if ((r->m_table_sizes[0] + r->m_table_sizes[1]) != counter) + { + TINFL_CR_RETURN_FOREVER(21, TINFL_STATUS_FAILED); + } + TINFL_MEMCPY(r->m_tables[0].m_code_size, r->m_len_codes, r->m_table_sizes[0]); TINFL_MEMCPY(r->m_tables[1].m_code_size, r->m_len_codes + r->m_table_sizes[0], r->m_table_sizes[1]); + } + } + for ( ; ; ) + { + mz_uint8 *pSrc; + for ( ; ; ) + { + if (((pIn_buf_end - pIn_buf_cur) < 4) || ((pOut_buf_end - pOut_buf_cur) < 2)) + { + TINFL_HUFF_DECODE(23, counter, &r->m_tables[0]); + if (counter >= 256) + break; + while (pOut_buf_cur >= pOut_buf_end) { TINFL_CR_RETURN(24, TINFL_STATUS_HAS_MORE_OUTPUT); } + *pOut_buf_cur++ = (mz_uint8)counter; + } + else + { + int sym2; mz_uint code_len; +#if TINFL_USE_64BIT_BITBUF + if (num_bits < 30) { bit_buf |= (((tinfl_bit_buf_t)MZ_READ_LE32(pIn_buf_cur)) << num_bits); pIn_buf_cur += 4; num_bits += 32; } +#else + if (num_bits < 15) { bit_buf |= (((tinfl_bit_buf_t)MZ_READ_LE16(pIn_buf_cur)) << num_bits); pIn_buf_cur += 2; num_bits += 16; } +#endif + if ((sym2 = r->m_tables[0].m_look_up[bit_buf & (TINFL_FAST_LOOKUP_SIZE - 1)]) >= 0) + code_len = sym2 >> 9; + else + { + code_len = TINFL_FAST_LOOKUP_BITS; do { sym2 = r->m_tables[0].m_tree[~sym2 + ((bit_buf >> code_len++) & 1)]; } while (sym2 < 0); + } + counter = sym2; bit_buf >>= code_len; num_bits -= code_len; + if (counter & 256) + break; + +#if !TINFL_USE_64BIT_BITBUF + if (num_bits < 15) { bit_buf |= (((tinfl_bit_buf_t)MZ_READ_LE16(pIn_buf_cur)) << num_bits); pIn_buf_cur += 2; num_bits += 16; } +#endif + if ((sym2 = r->m_tables[0].m_look_up[bit_buf & (TINFL_FAST_LOOKUP_SIZE - 1)]) >= 0) + code_len = sym2 >> 9; + else + { + code_len = TINFL_FAST_LOOKUP_BITS; do { sym2 = r->m_tables[0].m_tree[~sym2 + ((bit_buf >> code_len++) & 1)]; } while (sym2 < 0); + } + bit_buf >>= code_len; num_bits -= code_len; + + pOut_buf_cur[0] = (mz_uint8)counter; + if (sym2 & 256) + { + pOut_buf_cur++; + counter = sym2; + break; + } + pOut_buf_cur[1] = (mz_uint8)sym2; + pOut_buf_cur += 2; + } + } + if ((counter &= 511) == 256) break; + + num_extra = s_length_extra[counter - 257]; counter = s_length_base[counter - 257]; + if (num_extra) { mz_uint extra_bits; TINFL_GET_BITS(25, extra_bits, num_extra); counter += extra_bits; } + + TINFL_HUFF_DECODE(26, dist, &r->m_tables[1]); + num_extra = s_dist_extra[dist]; dist = s_dist_base[dist]; + if (num_extra) { mz_uint extra_bits; TINFL_GET_BITS(27, extra_bits, num_extra); dist += extra_bits; } + + dist_from_out_buf_start = pOut_buf_cur - pOut_buf_start; + if ((dist > dist_from_out_buf_start) && (decomp_flags & TINFL_FLAG_USING_NON_WRAPPING_OUTPUT_BUF)) + { + TINFL_CR_RETURN_FOREVER(37, TINFL_STATUS_FAILED); + } + + pSrc = pOut_buf_start + ((dist_from_out_buf_start - dist) & out_buf_size_mask); + + if ((MZ_MAX(pOut_buf_cur, pSrc) + counter) > pOut_buf_end) + { + while (counter--) + { + while (pOut_buf_cur >= pOut_buf_end) { TINFL_CR_RETURN(53, TINFL_STATUS_HAS_MORE_OUTPUT); } + *pOut_buf_cur++ = pOut_buf_start[(dist_from_out_buf_start++ - dist) & out_buf_size_mask]; + } + continue; + } +#if MINIZ_USE_UNALIGNED_LOADS_AND_STORES + else if ((counter >= 9) && (counter <= dist)) + { + const mz_uint8 *pSrc_end = pSrc + (counter & ~7); + do + { + ((mz_uint32 *)pOut_buf_cur)[0] = ((const mz_uint32 *)pSrc)[0]; + ((mz_uint32 *)pOut_buf_cur)[1] = ((const mz_uint32 *)pSrc)[1]; + pOut_buf_cur += 8; + } while ((pSrc += 8) < pSrc_end); + if ((counter &= 7) < 3) + { + if (counter) + { + pOut_buf_cur[0] = pSrc[0]; + if (counter > 1) + 
pOut_buf_cur[1] = pSrc[1]; + pOut_buf_cur += counter; + } + continue; + } + } +#endif + do + { + pOut_buf_cur[0] = pSrc[0]; + pOut_buf_cur[1] = pSrc[1]; + pOut_buf_cur[2] = pSrc[2]; + pOut_buf_cur += 3; pSrc += 3; + } while ((int)(counter -= 3) > 2); + if ((int)counter > 0) + { + pOut_buf_cur[0] = pSrc[0]; + if ((int)counter > 1) + pOut_buf_cur[1] = pSrc[1]; + pOut_buf_cur += counter; + } + } + } + } while (!(r->m_final & 1)); + if (decomp_flags & TINFL_FLAG_PARSE_ZLIB_HEADER) + { + TINFL_SKIP_BITS(32, num_bits & 7); for (counter = 0; counter < 4; ++counter) { mz_uint s; if (num_bits) TINFL_GET_BITS(41, s, 8); else TINFL_GET_BYTE(42, s); r->m_z_adler32 = (r->m_z_adler32 << 8) | s; } + } + TINFL_CR_RETURN_FOREVER(34, TINFL_STATUS_DONE); + TINFL_CR_FINISH + +common_exit: + r->m_num_bits = num_bits; r->m_bit_buf = bit_buf; r->m_dist = dist; r->m_counter = counter; r->m_num_extra = num_extra; r->m_dist_from_out_buf_start = dist_from_out_buf_start; + *pIn_buf_size = pIn_buf_cur - pIn_buf_next; *pOut_buf_size = pOut_buf_cur - pOut_buf_next; + if ((decomp_flags & (TINFL_FLAG_PARSE_ZLIB_HEADER | TINFL_FLAG_COMPUTE_ADLER32)) && (status >= 0)) + { + const mz_uint8 *ptr = pOut_buf_next; size_t buf_len = *pOut_buf_size; + mz_uint32 i, s1 = r->m_check_adler32 & 0xffff, s2 = r->m_check_adler32 >> 16; size_t block_len = buf_len % 5552; + while (buf_len) + { + for (i = 0; i + 7 < block_len; i += 8, ptr += 8) + { + s1 += ptr[0], s2 += s1; s1 += ptr[1], s2 += s1; s1 += ptr[2], s2 += s1; s1 += ptr[3], s2 += s1; + s1 += ptr[4], s2 += s1; s1 += ptr[5], s2 += s1; s1 += ptr[6], s2 += s1; s1 += ptr[7], s2 += s1; + } + for ( ; i < block_len; ++i) s1 += *ptr++, s2 += s1; + s1 %= 65521U, s2 %= 65521U; buf_len -= block_len; block_len = 5552; + } + r->m_check_adler32 = (s2 << 16) + s1; if ((status == TINFL_STATUS_DONE) && (decomp_flags & TINFL_FLAG_PARSE_ZLIB_HEADER) && (r->m_check_adler32 != r->m_z_adler32)) status = TINFL_STATUS_ADLER32_MISMATCH; + } + return status; +} + +// Higher level helper functions. +void *tinfl_decompress_mem_to_heap(const void *pSrc_buf, size_t src_buf_len, size_t *pOut_len, int flags) +{ + tinfl_decompressor decomp; void *pBuf = NULL, *pNew_buf; size_t src_buf_ofs = 0, out_buf_capacity = 0; + *pOut_len = 0; + tinfl_init(&decomp); + for ( ; ; ) + { + size_t src_buf_size = src_buf_len - src_buf_ofs, dst_buf_size = out_buf_capacity - *pOut_len, new_out_buf_capacity; + tinfl_status status = tinfl_decompress(&decomp, (const mz_uint8*)pSrc_buf + src_buf_ofs, &src_buf_size, (mz_uint8*)pBuf, pBuf ? 
(mz_uint8*)pBuf + *pOut_len : NULL, &dst_buf_size, + (flags & ~TINFL_FLAG_HAS_MORE_INPUT) | TINFL_FLAG_USING_NON_WRAPPING_OUTPUT_BUF); + if ((status < 0) || (status == TINFL_STATUS_NEEDS_MORE_INPUT)) + { + MZ_FREE(pBuf); *pOut_len = 0; return NULL; + } + src_buf_ofs += src_buf_size; + *pOut_len += dst_buf_size; + if (status == TINFL_STATUS_DONE) break; + new_out_buf_capacity = out_buf_capacity * 2; if (new_out_buf_capacity < 128) new_out_buf_capacity = 128; + pNew_buf = MZ_REALLOC(pBuf, new_out_buf_capacity); + if (!pNew_buf) + { + MZ_FREE(pBuf); *pOut_len = 0; return NULL; + } + pBuf = pNew_buf; out_buf_capacity = new_out_buf_capacity; + } + return pBuf; +} + +size_t tinfl_decompress_mem_to_mem(void *pOut_buf, size_t out_buf_len, const void *pSrc_buf, size_t src_buf_len, int flags) +{ + tinfl_decompressor decomp; tinfl_status status; tinfl_init(&decomp); + status = tinfl_decompress(&decomp, (const mz_uint8*)pSrc_buf, &src_buf_len, (mz_uint8*)pOut_buf, (mz_uint8*)pOut_buf, &out_buf_len, (flags & ~TINFL_FLAG_HAS_MORE_INPUT) | TINFL_FLAG_USING_NON_WRAPPING_OUTPUT_BUF); + return (status != TINFL_STATUS_DONE) ? TINFL_DECOMPRESS_MEM_TO_MEM_FAILED : out_buf_len; +} + +int tinfl_decompress_mem_to_callback(const void *pIn_buf, size_t *pIn_buf_size, tinfl_put_buf_func_ptr pPut_buf_func, void *pPut_buf_user, int flags) +{ + int result = 0; + tinfl_decompressor decomp; + mz_uint8 *pDict = (mz_uint8*)MZ_MALLOC(TINFL_LZ_DICT_SIZE); size_t in_buf_ofs = 0, dict_ofs = 0; + if (!pDict) + return TINFL_STATUS_FAILED; + tinfl_init(&decomp); + for ( ; ; ) + { + size_t in_buf_size = *pIn_buf_size - in_buf_ofs, dst_buf_size = TINFL_LZ_DICT_SIZE - dict_ofs; + tinfl_status status = tinfl_decompress(&decomp, (const mz_uint8*)pIn_buf + in_buf_ofs, &in_buf_size, pDict, pDict + dict_ofs, &dst_buf_size, + (flags & ~(TINFL_FLAG_HAS_MORE_INPUT | TINFL_FLAG_USING_NON_WRAPPING_OUTPUT_BUF))); + in_buf_ofs += in_buf_size; + if ((dst_buf_size) && (!(*pPut_buf_func)(pDict + dict_ofs, (int)dst_buf_size, pPut_buf_user))) + break; + if (status != TINFL_STATUS_HAS_MORE_OUTPUT) + { + result = (status == TINFL_STATUS_DONE); + break; + } + dict_ofs = (dict_ofs + dst_buf_size) & (TINFL_LZ_DICT_SIZE - 1); + } + MZ_FREE(pDict); + *pIn_buf_size = in_buf_ofs; + return result; +} + +// ------------------- Low-level Compression (independent from all decompression API's) + +// Purposely making these tables static for faster init and thread safety. 
+static const mz_uint16 s_tdefl_len_sym[256] = { + 257,258,259,260,261,262,263,264,265,265,266,266,267,267,268,268,269,269,269,269,270,270,270,270,271,271,271,271,272,272,272,272, + 273,273,273,273,273,273,273,273,274,274,274,274,274,274,274,274,275,275,275,275,275,275,275,275,276,276,276,276,276,276,276,276, + 277,277,277,277,277,277,277,277,277,277,277,277,277,277,277,277,278,278,278,278,278,278,278,278,278,278,278,278,278,278,278,278, + 279,279,279,279,279,279,279,279,279,279,279,279,279,279,279,279,280,280,280,280,280,280,280,280,280,280,280,280,280,280,280,280, + 281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281,281, + 282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282,282, + 283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283,283, + 284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,284,285 }; + +static const mz_uint8 s_tdefl_len_extra[256] = { + 0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3, + 4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4, + 5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5, + 5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,0 }; + +static const mz_uint8 s_tdefl_small_dist_sym[512] = { + 0,1,2,3,4,4,5,5,6,6,6,6,7,7,7,7,8,8,8,8,8,8,8,8,9,9,9,9,9,9,9,9,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,10,11,11,11,11,11,11, + 11,11,11,11,11,11,11,11,11,11,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,13, + 13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,14,14,14,14,14,14,14,14,14,14,14,14, + 14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14, + 14,14,14,14,14,14,14,14,14,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15, + 15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,16,16,16,16,16,16,16,16,16,16,16,16,16, + 16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16, + 16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16, + 16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,17,17,17,17,17,17,17,17,17,17,17,17,17,17, + 17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17, + 17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17, + 17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17,17 }; + +static const mz_uint8 s_tdefl_small_dist_extra[512] = { + 0,0,0,0,1,1,1,1,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,5,5,5,5,5,5,5,5, + 
5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6, + 6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6, + 6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7, + 7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7, + 7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7, + 7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7, + 7,7,7,7,7,7,7,7 }; + +static const mz_uint8 s_tdefl_large_dist_sym[128] = { + 0,0,18,19,20,20,21,21,22,22,22,22,23,23,23,23,24,24,24,24,24,24,24,24,25,25,25,25,25,25,25,25,26,26,26,26,26,26,26,26,26,26,26,26, + 26,26,26,26,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,27,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28, + 28,28,28,28,28,28,28,28,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29,29 }; + +static const mz_uint8 s_tdefl_large_dist_extra[128] = { + 0,0,8,8,9,9,9,9,10,10,10,10,10,10,10,10,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,11,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12, + 12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,12,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13, + 13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13 }; + +// Radix sorts tdefl_sym_freq[] array by 16-bit key m_key. Returns ptr to sorted values. +typedef struct { mz_uint16 m_key, m_sym_index; } tdefl_sym_freq; +static tdefl_sym_freq* tdefl_radix_sort_syms(mz_uint num_syms, tdefl_sym_freq* pSyms0, tdefl_sym_freq* pSyms1) +{ + mz_uint32 total_passes = 2, pass_shift, pass, i, hist[256 * 2]; tdefl_sym_freq* pCur_syms = pSyms0, *pNew_syms = pSyms1; MZ_CLEAR_OBJ(hist); + for (i = 0; i < num_syms; i++) { mz_uint freq = pSyms0[i].m_key; hist[freq & 0xFF]++; hist[256 + ((freq >> 8) & 0xFF)]++; } + while ((total_passes > 1) && (num_syms == hist[(total_passes - 1) * 256])) total_passes--; + for (pass_shift = 0, pass = 0; pass < total_passes; pass++, pass_shift += 8) + { + const mz_uint32* pHist = &hist[pass << 8]; + mz_uint offsets[256], cur_ofs = 0; + for (i = 0; i < 256; i++) { offsets[i] = cur_ofs; cur_ofs += pHist[i]; } + for (i = 0; i < num_syms; i++) pNew_syms[offsets[(pCur_syms[i].m_key >> pass_shift) & 0xFF]++] = pCur_syms[i]; + { tdefl_sym_freq* t = pCur_syms; pCur_syms = pNew_syms; pNew_syms = t; } + } + return pCur_syms; +} + +// tdefl_calculate_minimum_redundancy() originally written by: Alistair Moffat, alistair@cs.mu.oz.au, Jyrki Katajainen, jyrki@diku.dk, November 1996. 
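+// Given the radix-sorted symbol frequencies in A[0..n-1].m_key, this computes minimum-redundancy
+// (Huffman) code lengths in place, leaving each symbol's code length in m_key;
+// tdefl_huffman_enforce_max_code_size() below then clamps those lengths to the table's limit.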
+static void tdefl_calculate_minimum_redundancy(tdefl_sym_freq *A, int n)
+{
+  int root, leaf, next, avbl, used, dpth;
+  if (n==0) return; else if (n==1) { A[0].m_key = 1; return; }
+  A[0].m_key += A[1].m_key; root = 0; leaf = 2;
+  for (next=1; next < n-1; next++)
+  {
+    if (leaf>=n || A[root].m_key<A[leaf].m_key) { A[next].m_key = A[root].m_key; A[root++].m_key = (mz_uint16)next; } else A[next].m_key = A[leaf++].m_key;
+    if (leaf>=n || (root<next && A[root].m_key<A[leaf].m_key)) { A[next].m_key = (mz_uint16)(A[next].m_key + A[root].m_key); A[root++].m_key = (mz_uint16)next; } else A[next].m_key = (mz_uint16)(A[next].m_key + A[leaf++].m_key);
+  }
+  A[n-2].m_key = 0; for (next=n-3; next>=0; next--) A[next].m_key = A[A[next].m_key].m_key+1;
+  avbl = 1; used = dpth = 0; root = n-2; next = n-1;
+  while (avbl>0)
+  {
+    while (root>=0 && (int)A[root].m_key==dpth) { used++; root--; }
+    while (avbl>used) { A[next--].m_key = (mz_uint16)(dpth); avbl--; }
+    avbl = 2*used; dpth++; used = 0;
+  }
+}
+
+// Limits canonical Huffman code table's max code size.
+enum { TDEFL_MAX_SUPPORTED_HUFF_CODESIZE = 32 };
+static void tdefl_huffman_enforce_max_code_size(int *pNum_codes, int code_list_len, int max_code_size)
+{
+  int i; mz_uint32 total = 0; if (code_list_len <= 1) return;
+  for (i = max_code_size + 1; i <= TDEFL_MAX_SUPPORTED_HUFF_CODESIZE; i++) pNum_codes[max_code_size] += pNum_codes[i];
+  for (i = max_code_size; i > 0; i--) total += (((mz_uint32)pNum_codes[i]) << (max_code_size - i));
+  while (total != (1UL << max_code_size))
+  {
+    pNum_codes[max_code_size]--;
+    for (i = max_code_size - 1; i > 0; i--) if (pNum_codes[i]) { pNum_codes[i]--; pNum_codes[i + 1] += 2; break; }
+    total--;
+  }
+}
+
+static void tdefl_optimize_huffman_table(tdefl_compressor *d, int table_num, int table_len, int code_size_limit, int static_table)
+{
+  int i, j, l, num_codes[1 + TDEFL_MAX_SUPPORTED_HUFF_CODESIZE]; mz_uint next_code[TDEFL_MAX_SUPPORTED_HUFF_CODESIZE + 1]; MZ_CLEAR_OBJ(num_codes);
+  if (static_table)
+  {
+    for (i = 0; i < table_len; i++) num_codes[d->m_huff_code_sizes[table_num][i]]++;
+  }
+  else
+  {
+    tdefl_sym_freq syms0[TDEFL_MAX_HUFF_SYMBOLS], syms1[TDEFL_MAX_HUFF_SYMBOLS], *pSyms;
+    int num_used_syms = 0;
+    const mz_uint16 *pSym_count = &d->m_huff_count[table_num][0];
+    for (i = 0; i < table_len; i++) if (pSym_count[i]) { syms0[num_used_syms].m_key = (mz_uint16)pSym_count[i]; syms0[num_used_syms++].m_sym_index = (mz_uint16)i; }
+
+    pSyms = tdefl_radix_sort_syms(num_used_syms, syms0, syms1); tdefl_calculate_minimum_redundancy(pSyms, num_used_syms);
+
+    for (i = 0; i < num_used_syms; i++) num_codes[pSyms[i].m_key]++;
+
+    tdefl_huffman_enforce_max_code_size(num_codes, num_used_syms, code_size_limit);
+
+    MZ_CLEAR_OBJ(d->m_huff_code_sizes[table_num]); MZ_CLEAR_OBJ(d->m_huff_codes[table_num]);
+    for (i = 1, j = num_used_syms; i <= code_size_limit; i++)
+      for (l = num_codes[i]; l > 0; l--) d->m_huff_code_sizes[table_num][pSyms[--j].m_sym_index] = (mz_uint8)(i);
+  }
+
+  next_code[1] = 0; for (j = 0, i = 2; i <= code_size_limit; i++) next_code[i] = j = ((j + num_codes[i - 1]) << 1);
+
+  for (i = 0; i < table_len; i++)
+  {
+    mz_uint rev_code = 0, code, code_size; if ((code_size = d->m_huff_code_sizes[table_num][i]) == 0) continue;
+    code = next_code[code_size]++; for (l = code_size; l > 0; l--, code >>= 1) rev_code = (rev_code << 1) | (code & 1);
+    d->m_huff_codes[table_num][i] = (mz_uint16)rev_code;
+  }
+}
+
+#define TDEFL_PUT_BITS(b, l) do { \
+  mz_uint bits = b; mz_uint len = l; MZ_ASSERT(bits <= ((1U << len) - 1U)); \
+  d->m_bit_buffer |= (bits << d->m_bits_in); d->m_bits_in += len; \
+  while (d->m_bits_in >= 8) { \
+    if (d->m_pOutput_buf < d->m_pOutput_buf_end) \
+      *d->m_pOutput_buf++ = (mz_uint8)(d->m_bit_buffer); \
+    d->m_bit_buffer >>= 8; \
+    d->m_bits_in -= 8; \
+  } \
+} MZ_MACRO_END
+
+#define TDEFL_RLE_PREV_CODE_SIZE() { if (rle_repeat_count) { \
+  if (rle_repeat_count < 3) { \
+
d->m_huff_count[2][prev_code_size] = (mz_uint16)(d->m_huff_count[2][prev_code_size] + rle_repeat_count); \ + while (rle_repeat_count--) packed_code_sizes[num_packed_code_sizes++] = prev_code_size; \ + } else { \ + d->m_huff_count[2][16] = (mz_uint16)(d->m_huff_count[2][16] + 1); packed_code_sizes[num_packed_code_sizes++] = 16; packed_code_sizes[num_packed_code_sizes++] = (mz_uint8)(rle_repeat_count - 3); \ +} rle_repeat_count = 0; } } + +#define TDEFL_RLE_ZERO_CODE_SIZE() { if (rle_z_count) { \ + if (rle_z_count < 3) { \ + d->m_huff_count[2][0] = (mz_uint16)(d->m_huff_count[2][0] + rle_z_count); while (rle_z_count--) packed_code_sizes[num_packed_code_sizes++] = 0; \ + } else if (rle_z_count <= 10) { \ + d->m_huff_count[2][17] = (mz_uint16)(d->m_huff_count[2][17] + 1); packed_code_sizes[num_packed_code_sizes++] = 17; packed_code_sizes[num_packed_code_sizes++] = (mz_uint8)(rle_z_count - 3); \ + } else { \ + d->m_huff_count[2][18] = (mz_uint16)(d->m_huff_count[2][18] + 1); packed_code_sizes[num_packed_code_sizes++] = 18; packed_code_sizes[num_packed_code_sizes++] = (mz_uint8)(rle_z_count - 11); \ +} rle_z_count = 0; } } + +static mz_uint8 s_tdefl_packed_code_size_syms_swizzle[] = { 16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15 }; + +static void tdefl_start_dynamic_block(tdefl_compressor *d) +{ + int num_lit_codes, num_dist_codes, num_bit_lengths; mz_uint i, total_code_sizes_to_pack, num_packed_code_sizes, rle_z_count, rle_repeat_count, packed_code_sizes_index; + mz_uint8 code_sizes_to_pack[TDEFL_MAX_HUFF_SYMBOLS_0 + TDEFL_MAX_HUFF_SYMBOLS_1], packed_code_sizes[TDEFL_MAX_HUFF_SYMBOLS_0 + TDEFL_MAX_HUFF_SYMBOLS_1], prev_code_size = 0xFF; + + d->m_huff_count[0][256] = 1; + + tdefl_optimize_huffman_table(d, 0, TDEFL_MAX_HUFF_SYMBOLS_0, 15, MZ_FALSE); + tdefl_optimize_huffman_table(d, 1, TDEFL_MAX_HUFF_SYMBOLS_1, 15, MZ_FALSE); + + for (num_lit_codes = 286; num_lit_codes > 257; num_lit_codes--) if (d->m_huff_code_sizes[0][num_lit_codes - 1]) break; + for (num_dist_codes = 30; num_dist_codes > 1; num_dist_codes--) if (d->m_huff_code_sizes[1][num_dist_codes - 1]) break; + + memcpy(code_sizes_to_pack, &d->m_huff_code_sizes[0][0], num_lit_codes); + memcpy(code_sizes_to_pack + num_lit_codes, &d->m_huff_code_sizes[1][0], num_dist_codes); + total_code_sizes_to_pack = num_lit_codes + num_dist_codes; num_packed_code_sizes = 0; rle_z_count = 0; rle_repeat_count = 0; + + memset(&d->m_huff_count[2][0], 0, sizeof(d->m_huff_count[2][0]) * TDEFL_MAX_HUFF_SYMBOLS_2); + for (i = 0; i < total_code_sizes_to_pack; i++) + { + mz_uint8 code_size = code_sizes_to_pack[i]; + if (!code_size) + { + TDEFL_RLE_PREV_CODE_SIZE(); + if (++rle_z_count == 138) { TDEFL_RLE_ZERO_CODE_SIZE(); } + } + else + { + TDEFL_RLE_ZERO_CODE_SIZE(); + if (code_size != prev_code_size) + { + TDEFL_RLE_PREV_CODE_SIZE(); + d->m_huff_count[2][code_size] = (mz_uint16)(d->m_huff_count[2][code_size] + 1); packed_code_sizes[num_packed_code_sizes++] = code_size; + } + else if (++rle_repeat_count == 6) + { + TDEFL_RLE_PREV_CODE_SIZE(); + } + } + prev_code_size = code_size; + } + if (rle_repeat_count) { TDEFL_RLE_PREV_CODE_SIZE(); } else { TDEFL_RLE_ZERO_CODE_SIZE(); } + + tdefl_optimize_huffman_table(d, 2, TDEFL_MAX_HUFF_SYMBOLS_2, 7, MZ_FALSE); + + TDEFL_PUT_BITS(2, 2); + + TDEFL_PUT_BITS(num_lit_codes - 257, 5); + TDEFL_PUT_BITS(num_dist_codes - 1, 5); + + for (num_bit_lengths = 18; num_bit_lengths >= 0; num_bit_lengths--) if (d->m_huff_code_sizes[2][s_tdefl_packed_code_size_syms_swizzle[num_bit_lengths]]) break; + num_bit_lengths = 
MZ_MAX(4, (num_bit_lengths + 1)); TDEFL_PUT_BITS(num_bit_lengths - 4, 4); + for (i = 0; (int)i < num_bit_lengths; i++) TDEFL_PUT_BITS(d->m_huff_code_sizes[2][s_tdefl_packed_code_size_syms_swizzle[i]], 3); + + for (packed_code_sizes_index = 0; packed_code_sizes_index < num_packed_code_sizes; ) + { + mz_uint code = packed_code_sizes[packed_code_sizes_index++]; MZ_ASSERT(code < TDEFL_MAX_HUFF_SYMBOLS_2); + TDEFL_PUT_BITS(d->m_huff_codes[2][code], d->m_huff_code_sizes[2][code]); + if (code >= 16) TDEFL_PUT_BITS(packed_code_sizes[packed_code_sizes_index++], "\02\03\07"[code - 16]); + } +} + +static void tdefl_start_static_block(tdefl_compressor *d) +{ + mz_uint i; + mz_uint8 *p = &d->m_huff_code_sizes[0][0]; + + for (i = 0; i <= 143; ++i) *p++ = 8; + for ( ; i <= 255; ++i) *p++ = 9; + for ( ; i <= 279; ++i) *p++ = 7; + for ( ; i <= 287; ++i) *p++ = 8; + + memset(d->m_huff_code_sizes[1], 5, 32); + + tdefl_optimize_huffman_table(d, 0, 288, 15, MZ_TRUE); + tdefl_optimize_huffman_table(d, 1, 32, 15, MZ_TRUE); + + TDEFL_PUT_BITS(1, 2); +} + +static const mz_uint mz_bitmasks[17] = { 0x0000, 0x0001, 0x0003, 0x0007, 0x000F, 0x001F, 0x003F, 0x007F, 0x00FF, 0x01FF, 0x03FF, 0x07FF, 0x0FFF, 0x1FFF, 0x3FFF, 0x7FFF, 0xFFFF }; + +#if MINIZ_USE_UNALIGNED_LOADS_AND_STORES && MINIZ_LITTLE_ENDIAN && MINIZ_HAS_64BIT_REGISTERS +static mz_bool tdefl_compress_lz_codes(tdefl_compressor *d) +{ + mz_uint flags; + mz_uint8 *pLZ_codes; + mz_uint8 *pOutput_buf = d->m_pOutput_buf; + mz_uint8 *pLZ_code_buf_end = d->m_pLZ_code_buf; + mz_uint64 bit_buffer = d->m_bit_buffer; + mz_uint bits_in = d->m_bits_in; + +#define TDEFL_PUT_BITS_FAST(b, l) { bit_buffer |= (((mz_uint64)(b)) << bits_in); bits_in += (l); } + + flags = 1; + for (pLZ_codes = d->m_lz_code_buf; pLZ_codes < pLZ_code_buf_end; flags >>= 1) + { + if (flags == 1) + flags = *pLZ_codes++ | 0x100; + + if (flags & 1) + { + mz_uint s0, s1, n0, n1, sym, num_extra_bits; + mz_uint match_len = pLZ_codes[0], match_dist = *(const mz_uint16 *)(pLZ_codes + 1); pLZ_codes += 3; + + MZ_ASSERT(d->m_huff_code_sizes[0][s_tdefl_len_sym[match_len]]); + TDEFL_PUT_BITS_FAST(d->m_huff_codes[0][s_tdefl_len_sym[match_len]], d->m_huff_code_sizes[0][s_tdefl_len_sym[match_len]]); + TDEFL_PUT_BITS_FAST(match_len & mz_bitmasks[s_tdefl_len_extra[match_len]], s_tdefl_len_extra[match_len]); + + // This sequence coaxes MSVC into using cmov's vs. jmp's. + s0 = s_tdefl_small_dist_sym[match_dist & 511]; + n0 = s_tdefl_small_dist_extra[match_dist & 511]; + s1 = s_tdefl_large_dist_sym[match_dist >> 8]; + n1 = s_tdefl_large_dist_extra[match_dist >> 8]; + sym = (match_dist < 512) ? s0 : s1; + num_extra_bits = (match_dist < 512) ? 
n0 : n1; + + MZ_ASSERT(d->m_huff_code_sizes[1][sym]); + TDEFL_PUT_BITS_FAST(d->m_huff_codes[1][sym], d->m_huff_code_sizes[1][sym]); + TDEFL_PUT_BITS_FAST(match_dist & mz_bitmasks[num_extra_bits], num_extra_bits); + } + else + { + mz_uint lit = *pLZ_codes++; + MZ_ASSERT(d->m_huff_code_sizes[0][lit]); + TDEFL_PUT_BITS_FAST(d->m_huff_codes[0][lit], d->m_huff_code_sizes[0][lit]); + + if (((flags & 2) == 0) && (pLZ_codes < pLZ_code_buf_end)) + { + flags >>= 1; + lit = *pLZ_codes++; + MZ_ASSERT(d->m_huff_code_sizes[0][lit]); + TDEFL_PUT_BITS_FAST(d->m_huff_codes[0][lit], d->m_huff_code_sizes[0][lit]); + + if (((flags & 2) == 0) && (pLZ_codes < pLZ_code_buf_end)) + { + flags >>= 1; + lit = *pLZ_codes++; + MZ_ASSERT(d->m_huff_code_sizes[0][lit]); + TDEFL_PUT_BITS_FAST(d->m_huff_codes[0][lit], d->m_huff_code_sizes[0][lit]); + } + } + } + + if (pOutput_buf >= d->m_pOutput_buf_end) + return MZ_FALSE; + + *(mz_uint64*)pOutput_buf = bit_buffer; + pOutput_buf += (bits_in >> 3); + bit_buffer >>= (bits_in & ~7); + bits_in &= 7; + } + +#undef TDEFL_PUT_BITS_FAST + + d->m_pOutput_buf = pOutput_buf; + d->m_bits_in = 0; + d->m_bit_buffer = 0; + + while (bits_in) + { + mz_uint32 n = MZ_MIN(bits_in, 16); + TDEFL_PUT_BITS((mz_uint)bit_buffer & mz_bitmasks[n], n); + bit_buffer >>= n; + bits_in -= n; + } + + TDEFL_PUT_BITS(d->m_huff_codes[0][256], d->m_huff_code_sizes[0][256]); + + return (d->m_pOutput_buf < d->m_pOutput_buf_end); +} +#else +static mz_bool tdefl_compress_lz_codes(tdefl_compressor *d) +{ + mz_uint flags; + mz_uint8 *pLZ_codes; + + flags = 1; + for (pLZ_codes = d->m_lz_code_buf; pLZ_codes < d->m_pLZ_code_buf; flags >>= 1) + { + if (flags == 1) + flags = *pLZ_codes++ | 0x100; + if (flags & 1) + { + mz_uint sym, num_extra_bits; + mz_uint match_len = pLZ_codes[0], match_dist = (pLZ_codes[1] | (pLZ_codes[2] << 8)); pLZ_codes += 3; + + MZ_ASSERT(d->m_huff_code_sizes[0][s_tdefl_len_sym[match_len]]); + TDEFL_PUT_BITS(d->m_huff_codes[0][s_tdefl_len_sym[match_len]], d->m_huff_code_sizes[0][s_tdefl_len_sym[match_len]]); + TDEFL_PUT_BITS(match_len & mz_bitmasks[s_tdefl_len_extra[match_len]], s_tdefl_len_extra[match_len]); + + if (match_dist < 512) + { + sym = s_tdefl_small_dist_sym[match_dist]; num_extra_bits = s_tdefl_small_dist_extra[match_dist]; + } + else + { + sym = s_tdefl_large_dist_sym[match_dist >> 8]; num_extra_bits = s_tdefl_large_dist_extra[match_dist >> 8]; + } + MZ_ASSERT(d->m_huff_code_sizes[1][sym]); + TDEFL_PUT_BITS(d->m_huff_codes[1][sym], d->m_huff_code_sizes[1][sym]); + TDEFL_PUT_BITS(match_dist & mz_bitmasks[num_extra_bits], num_extra_bits); + } + else + { + mz_uint lit = *pLZ_codes++; + MZ_ASSERT(d->m_huff_code_sizes[0][lit]); + TDEFL_PUT_BITS(d->m_huff_codes[0][lit], d->m_huff_code_sizes[0][lit]); + } + } + + TDEFL_PUT_BITS(d->m_huff_codes[0][256], d->m_huff_code_sizes[0][256]); + + return (d->m_pOutput_buf < d->m_pOutput_buf_end); +} +#endif // MINIZ_USE_UNALIGNED_LOADS_AND_STORES && MINIZ_LITTLE_ENDIAN && MINIZ_HAS_64BIT_REGISTERS + +static mz_bool tdefl_compress_block(tdefl_compressor *d, mz_bool static_block) +{ + if (static_block) + tdefl_start_static_block(d); + else + tdefl_start_dynamic_block(d); + return tdefl_compress_lz_codes(d); +} + +static int tdefl_flush_block(tdefl_compressor *d, int flush) +{ + mz_uint saved_bit_buf, saved_bits_in; + mz_uint8 *pSaved_output_buf; + mz_bool comp_block_succeeded = MZ_FALSE; + int n, use_raw_block = ((d->m_flags & TDEFL_FORCE_ALL_RAW_BLOCKS) != 0) && (d->m_lookahead_pos - d->m_lz_code_buf_dict_pos) <= d->m_dict_size; + mz_uint8 
*pOutput_buf_start = ((d->m_pPut_buf_func == NULL) && ((*d->m_pOut_buf_size - d->m_out_buf_ofs) >= TDEFL_OUT_BUF_SIZE)) ? ((mz_uint8 *)d->m_pOut_buf + d->m_out_buf_ofs) : d->m_output_buf; + + d->m_pOutput_buf = pOutput_buf_start; + d->m_pOutput_buf_end = d->m_pOutput_buf + TDEFL_OUT_BUF_SIZE - 16; + + MZ_ASSERT(!d->m_output_flush_remaining); + d->m_output_flush_ofs = 0; + d->m_output_flush_remaining = 0; + + *d->m_pLZ_flags = (mz_uint8)(*d->m_pLZ_flags >> d->m_num_flags_left); + d->m_pLZ_code_buf -= (d->m_num_flags_left == 8); + + if ((d->m_flags & TDEFL_WRITE_ZLIB_HEADER) && (!d->m_block_index)) + { + TDEFL_PUT_BITS(0x78, 8); TDEFL_PUT_BITS(0x01, 8); + } + + TDEFL_PUT_BITS(flush == TDEFL_FINISH, 1); + + pSaved_output_buf = d->m_pOutput_buf; saved_bit_buf = d->m_bit_buffer; saved_bits_in = d->m_bits_in; + + if (!use_raw_block) + comp_block_succeeded = tdefl_compress_block(d, (d->m_flags & TDEFL_FORCE_ALL_STATIC_BLOCKS) || (d->m_total_lz_bytes < 48)); + + // If the block gets expanded, forget the current contents of the output buffer and send a raw block instead. + if ( ((use_raw_block) || ((d->m_total_lz_bytes) && ((d->m_pOutput_buf - pSaved_output_buf + 1U) >= d->m_total_lz_bytes))) && + ((d->m_lookahead_pos - d->m_lz_code_buf_dict_pos) <= d->m_dict_size) ) + { + mz_uint i; d->m_pOutput_buf = pSaved_output_buf; d->m_bit_buffer = saved_bit_buf, d->m_bits_in = saved_bits_in; + TDEFL_PUT_BITS(0, 2); + if (d->m_bits_in) { TDEFL_PUT_BITS(0, 8 - d->m_bits_in); } + for (i = 2; i; --i, d->m_total_lz_bytes ^= 0xFFFF) + { + TDEFL_PUT_BITS(d->m_total_lz_bytes & 0xFFFF, 16); + } + for (i = 0; i < d->m_total_lz_bytes; ++i) + { + TDEFL_PUT_BITS(d->m_dict[(d->m_lz_code_buf_dict_pos + i) & TDEFL_LZ_DICT_SIZE_MASK], 8); + } + } + // Check for the extremely unlikely (if not impossible) case of the compressed block not fitting into the output buffer when using dynamic codes. 
+ else if (!comp_block_succeeded) + { + d->m_pOutput_buf = pSaved_output_buf; d->m_bit_buffer = saved_bit_buf, d->m_bits_in = saved_bits_in; + tdefl_compress_block(d, MZ_TRUE); + } + + if (flush) + { + if (flush == TDEFL_FINISH) + { + if (d->m_bits_in) { TDEFL_PUT_BITS(0, 8 - d->m_bits_in); } + if (d->m_flags & TDEFL_WRITE_ZLIB_HEADER) { mz_uint i, a = d->m_adler32; for (i = 0; i < 4; i++) { TDEFL_PUT_BITS((a >> 24) & 0xFF, 8); a <<= 8; } } + } + else + { + mz_uint i, z = 0; TDEFL_PUT_BITS(0, 3); if (d->m_bits_in) { TDEFL_PUT_BITS(0, 8 - d->m_bits_in); } for (i = 2; i; --i, z ^= 0xFFFF) { TDEFL_PUT_BITS(z & 0xFFFF, 16); } + } + } + + MZ_ASSERT(d->m_pOutput_buf < d->m_pOutput_buf_end); + + memset(&d->m_huff_count[0][0], 0, sizeof(d->m_huff_count[0][0]) * TDEFL_MAX_HUFF_SYMBOLS_0); + memset(&d->m_huff_count[1][0], 0, sizeof(d->m_huff_count[1][0]) * TDEFL_MAX_HUFF_SYMBOLS_1); + + d->m_pLZ_code_buf = d->m_lz_code_buf + 1; d->m_pLZ_flags = d->m_lz_code_buf; d->m_num_flags_left = 8; d->m_lz_code_buf_dict_pos += d->m_total_lz_bytes; d->m_total_lz_bytes = 0; d->m_block_index++; + + if ((n = (int)(d->m_pOutput_buf - pOutput_buf_start)) != 0) + { + if (d->m_pPut_buf_func) + { + *d->m_pIn_buf_size = d->m_pSrc - (const mz_uint8 *)d->m_pIn_buf; + if (!(*d->m_pPut_buf_func)(d->m_output_buf, n, d->m_pPut_buf_user)) + return (d->m_prev_return_status = TDEFL_STATUS_PUT_BUF_FAILED); + } + else if (pOutput_buf_start == d->m_output_buf) + { + int bytes_to_copy = (int)MZ_MIN((size_t)n, (size_t)(*d->m_pOut_buf_size - d->m_out_buf_ofs)); + memcpy((mz_uint8 *)d->m_pOut_buf + d->m_out_buf_ofs, d->m_output_buf, bytes_to_copy); + d->m_out_buf_ofs += bytes_to_copy; + if ((n -= bytes_to_copy) != 0) + { + d->m_output_flush_ofs = bytes_to_copy; + d->m_output_flush_remaining = n; + } + } + else + { + d->m_out_buf_ofs += n; + } + } + + return d->m_output_flush_remaining; +} + +#if MINIZ_USE_UNALIGNED_LOADS_AND_STORES +#define TDEFL_READ_UNALIGNED_WORD(p) *(const mz_uint16*)(p) +static MZ_FORCEINLINE void tdefl_find_match(tdefl_compressor *d, mz_uint lookahead_pos, mz_uint max_dist, mz_uint max_match_len, mz_uint *pMatch_dist, mz_uint *pMatch_len) +{ + mz_uint dist, pos = lookahead_pos & TDEFL_LZ_DICT_SIZE_MASK, match_len = *pMatch_len, probe_pos = pos, next_probe_pos, probe_len; + mz_uint num_probes_left = d->m_max_probes[match_len >= 32]; + const mz_uint16 *s = (const mz_uint16*)(d->m_dict + pos), *p, *q; + mz_uint16 c01 = TDEFL_READ_UNALIGNED_WORD(&d->m_dict[pos + match_len - 1]), s01 = TDEFL_READ_UNALIGNED_WORD(s); + MZ_ASSERT(max_match_len <= TDEFL_MAX_MATCH_LEN); if (max_match_len <= match_len) return; + for ( ; ; ) + { + for ( ; ; ) + { + if (--num_probes_left == 0) return; + #define TDEFL_PROBE \ + next_probe_pos = d->m_next[probe_pos]; \ + if ((!next_probe_pos) || ((dist = (mz_uint16)(lookahead_pos - next_probe_pos)) > max_dist)) return; \ + probe_pos = next_probe_pos & TDEFL_LZ_DICT_SIZE_MASK; \ + if (TDEFL_READ_UNALIGNED_WORD(&d->m_dict[probe_pos + match_len - 1]) == c01) break; + TDEFL_PROBE; TDEFL_PROBE; TDEFL_PROBE; + } + if (!dist) break; q = (const mz_uint16*)(d->m_dict + probe_pos); if (TDEFL_READ_UNALIGNED_WORD(q) != s01) continue; p = s; probe_len = 32; + do { } while ( (TDEFL_READ_UNALIGNED_WORD(++p) == TDEFL_READ_UNALIGNED_WORD(++q)) && (TDEFL_READ_UNALIGNED_WORD(++p) == TDEFL_READ_UNALIGNED_WORD(++q)) && + (TDEFL_READ_UNALIGNED_WORD(++p) == TDEFL_READ_UNALIGNED_WORD(++q)) && (TDEFL_READ_UNALIGNED_WORD(++p) == TDEFL_READ_UNALIGNED_WORD(++q)) && (--probe_len > 0) ); + if (!probe_len) + { + *pMatch_dist 
= dist; *pMatch_len = MZ_MIN(max_match_len, TDEFL_MAX_MATCH_LEN); break; + } + else if ((probe_len = ((mz_uint)(p - s) * 2) + (mz_uint)(*(const mz_uint8*)p == *(const mz_uint8*)q)) > match_len) + { + *pMatch_dist = dist; if ((*pMatch_len = match_len = MZ_MIN(max_match_len, probe_len)) == max_match_len) break; + c01 = TDEFL_READ_UNALIGNED_WORD(&d->m_dict[pos + match_len - 1]); + } + } +} +#else +static MZ_FORCEINLINE void tdefl_find_match(tdefl_compressor *d, mz_uint lookahead_pos, mz_uint max_dist, mz_uint max_match_len, mz_uint *pMatch_dist, mz_uint *pMatch_len) +{ + mz_uint dist, pos = lookahead_pos & TDEFL_LZ_DICT_SIZE_MASK, match_len = *pMatch_len, probe_pos = pos, next_probe_pos, probe_len; + mz_uint num_probes_left = d->m_max_probes[match_len >= 32]; + const mz_uint8 *s = d->m_dict + pos, *p, *q; + mz_uint8 c0 = d->m_dict[pos + match_len], c1 = d->m_dict[pos + match_len - 1]; + MZ_ASSERT(max_match_len <= TDEFL_MAX_MATCH_LEN); if (max_match_len <= match_len) return; + for ( ; ; ) + { + for ( ; ; ) + { + if (--num_probes_left == 0) return; + #define TDEFL_PROBE \ + next_probe_pos = d->m_next[probe_pos]; \ + if ((!next_probe_pos) || ((dist = (mz_uint16)(lookahead_pos - next_probe_pos)) > max_dist)) return; \ + probe_pos = next_probe_pos & TDEFL_LZ_DICT_SIZE_MASK; \ + if ((d->m_dict[probe_pos + match_len] == c0) && (d->m_dict[probe_pos + match_len - 1] == c1)) break; + TDEFL_PROBE; TDEFL_PROBE; TDEFL_PROBE; + } + if (!dist) break; p = s; q = d->m_dict + probe_pos; for (probe_len = 0; probe_len < max_match_len; probe_len++) if (*p++ != *q++) break; + if (probe_len > match_len) + { + *pMatch_dist = dist; if ((*pMatch_len = match_len = probe_len) == max_match_len) return; + c0 = d->m_dict[pos + match_len]; c1 = d->m_dict[pos + match_len - 1]; + } + } +} +#endif // #if MINIZ_USE_UNALIGNED_LOADS_AND_STORES + +#if MINIZ_USE_UNALIGNED_LOADS_AND_STORES && MINIZ_LITTLE_ENDIAN +static mz_bool tdefl_compress_fast(tdefl_compressor *d) +{ + // Faster, minimally featured LZRW1-style match+parse loop with better register utilization. Intended for applications where raw throughput is valued more highly than ratio. 
+ mz_uint lookahead_pos = d->m_lookahead_pos, lookahead_size = d->m_lookahead_size, dict_size = d->m_dict_size, total_lz_bytes = d->m_total_lz_bytes, num_flags_left = d->m_num_flags_left; + mz_uint8 *pLZ_code_buf = d->m_pLZ_code_buf, *pLZ_flags = d->m_pLZ_flags; + mz_uint cur_pos = lookahead_pos & TDEFL_LZ_DICT_SIZE_MASK; + + while ((d->m_src_buf_left) || ((d->m_flush) && (lookahead_size))) + { + const mz_uint TDEFL_COMP_FAST_LOOKAHEAD_SIZE = 4096; + mz_uint dst_pos = (lookahead_pos + lookahead_size) & TDEFL_LZ_DICT_SIZE_MASK; + mz_uint num_bytes_to_process = (mz_uint)MZ_MIN(d->m_src_buf_left, TDEFL_COMP_FAST_LOOKAHEAD_SIZE - lookahead_size); + d->m_src_buf_left -= num_bytes_to_process; + lookahead_size += num_bytes_to_process; + + while (num_bytes_to_process) + { + mz_uint32 n = MZ_MIN(TDEFL_LZ_DICT_SIZE - dst_pos, num_bytes_to_process); + memcpy(d->m_dict + dst_pos, d->m_pSrc, n); + if (dst_pos < (TDEFL_MAX_MATCH_LEN - 1)) + memcpy(d->m_dict + TDEFL_LZ_DICT_SIZE + dst_pos, d->m_pSrc, MZ_MIN(n, (TDEFL_MAX_MATCH_LEN - 1) - dst_pos)); + d->m_pSrc += n; + dst_pos = (dst_pos + n) & TDEFL_LZ_DICT_SIZE_MASK; + num_bytes_to_process -= n; + } + + dict_size = MZ_MIN(TDEFL_LZ_DICT_SIZE - lookahead_size, dict_size); + if ((!d->m_flush) && (lookahead_size < TDEFL_COMP_FAST_LOOKAHEAD_SIZE)) break; + + while (lookahead_size >= 4) + { + mz_uint cur_match_dist, cur_match_len = 1; + mz_uint8 *pCur_dict = d->m_dict + cur_pos; + mz_uint first_trigram = (*(const mz_uint32 *)pCur_dict) & 0xFFFFFF; + mz_uint hash = (first_trigram ^ (first_trigram >> (24 - (TDEFL_LZ_HASH_BITS - 8)))) & TDEFL_LEVEL1_HASH_SIZE_MASK; + mz_uint probe_pos = d->m_hash[hash]; + d->m_hash[hash] = (mz_uint16)lookahead_pos; + + if (((cur_match_dist = (mz_uint16)(lookahead_pos - probe_pos)) <= dict_size) && ((*(const mz_uint32 *)(d->m_dict + (probe_pos &= TDEFL_LZ_DICT_SIZE_MASK)) & 0xFFFFFF) == first_trigram)) + { + const mz_uint16 *p = (const mz_uint16 *)pCur_dict; + const mz_uint16 *q = (const mz_uint16 *)(d->m_dict + probe_pos); + mz_uint32 probe_len = 32; + do { } while ( (TDEFL_READ_UNALIGNED_WORD(++p) == TDEFL_READ_UNALIGNED_WORD(++q)) && (TDEFL_READ_UNALIGNED_WORD(++p) == TDEFL_READ_UNALIGNED_WORD(++q)) && + (TDEFL_READ_UNALIGNED_WORD(++p) == TDEFL_READ_UNALIGNED_WORD(++q)) && (TDEFL_READ_UNALIGNED_WORD(++p) == TDEFL_READ_UNALIGNED_WORD(++q)) && (--probe_len > 0) ); + cur_match_len = ((mz_uint)(p - (const mz_uint16 *)pCur_dict) * 2) + (mz_uint)(*(const mz_uint8 *)p == *(const mz_uint8 *)q); + if (!probe_len) + cur_match_len = cur_match_dist ? TDEFL_MAX_MATCH_LEN : 0; + + if ((cur_match_len < TDEFL_MIN_MATCH_LEN) || ((cur_match_len == TDEFL_MIN_MATCH_LEN) && (cur_match_dist >= 8U*1024U))) + { + cur_match_len = 1; + *pLZ_code_buf++ = (mz_uint8)first_trigram; + *pLZ_flags = (mz_uint8)(*pLZ_flags >> 1); + d->m_huff_count[0][(mz_uint8)first_trigram]++; + } + else + { + mz_uint32 s0, s1; + cur_match_len = MZ_MIN(cur_match_len, lookahead_size); + + MZ_ASSERT((cur_match_len >= TDEFL_MIN_MATCH_LEN) && (cur_match_dist >= 1) && (cur_match_dist <= TDEFL_LZ_DICT_SIZE)); + + cur_match_dist--; + + pLZ_code_buf[0] = (mz_uint8)(cur_match_len - TDEFL_MIN_MATCH_LEN); + *(mz_uint16 *)(&pLZ_code_buf[1]) = (mz_uint16)cur_match_dist; + pLZ_code_buf += 3; + *pLZ_flags = (mz_uint8)((*pLZ_flags >> 1) | 0x80); + + s0 = s_tdefl_small_dist_sym[cur_match_dist & 511]; + s1 = s_tdefl_large_dist_sym[cur_match_dist >> 8]; + d->m_huff_count[1][(cur_match_dist < 512) ? 
s0 : s1]++; + + d->m_huff_count[0][s_tdefl_len_sym[cur_match_len - TDEFL_MIN_MATCH_LEN]]++; + } + } + else + { + *pLZ_code_buf++ = (mz_uint8)first_trigram; + *pLZ_flags = (mz_uint8)(*pLZ_flags >> 1); + d->m_huff_count[0][(mz_uint8)first_trigram]++; + } + + if (--num_flags_left == 0) { num_flags_left = 8; pLZ_flags = pLZ_code_buf++; } + + total_lz_bytes += cur_match_len; + lookahead_pos += cur_match_len; + dict_size = MZ_MIN(dict_size + cur_match_len, TDEFL_LZ_DICT_SIZE); + cur_pos = (cur_pos + cur_match_len) & TDEFL_LZ_DICT_SIZE_MASK; + MZ_ASSERT(lookahead_size >= cur_match_len); + lookahead_size -= cur_match_len; + + if (pLZ_code_buf > &d->m_lz_code_buf[TDEFL_LZ_CODE_BUF_SIZE - 8]) + { + int n; + d->m_lookahead_pos = lookahead_pos; d->m_lookahead_size = lookahead_size; d->m_dict_size = dict_size; + d->m_total_lz_bytes = total_lz_bytes; d->m_pLZ_code_buf = pLZ_code_buf; d->m_pLZ_flags = pLZ_flags; d->m_num_flags_left = num_flags_left; + if ((n = tdefl_flush_block(d, 0)) != 0) + return (n < 0) ? MZ_FALSE : MZ_TRUE; + total_lz_bytes = d->m_total_lz_bytes; pLZ_code_buf = d->m_pLZ_code_buf; pLZ_flags = d->m_pLZ_flags; num_flags_left = d->m_num_flags_left; + } + } + + while (lookahead_size) + { + mz_uint8 lit = d->m_dict[cur_pos]; + + total_lz_bytes++; + *pLZ_code_buf++ = lit; + *pLZ_flags = (mz_uint8)(*pLZ_flags >> 1); + if (--num_flags_left == 0) { num_flags_left = 8; pLZ_flags = pLZ_code_buf++; } + + d->m_huff_count[0][lit]++; + + lookahead_pos++; + dict_size = MZ_MIN(dict_size + 1, TDEFL_LZ_DICT_SIZE); + cur_pos = (cur_pos + 1) & TDEFL_LZ_DICT_SIZE_MASK; + lookahead_size--; + + if (pLZ_code_buf > &d->m_lz_code_buf[TDEFL_LZ_CODE_BUF_SIZE - 8]) + { + int n; + d->m_lookahead_pos = lookahead_pos; d->m_lookahead_size = lookahead_size; d->m_dict_size = dict_size; + d->m_total_lz_bytes = total_lz_bytes; d->m_pLZ_code_buf = pLZ_code_buf; d->m_pLZ_flags = pLZ_flags; d->m_num_flags_left = num_flags_left; + if ((n = tdefl_flush_block(d, 0)) != 0) + return (n < 0) ? 
MZ_FALSE : MZ_TRUE; + total_lz_bytes = d->m_total_lz_bytes; pLZ_code_buf = d->m_pLZ_code_buf; pLZ_flags = d->m_pLZ_flags; num_flags_left = d->m_num_flags_left; + } + } + } + + d->m_lookahead_pos = lookahead_pos; d->m_lookahead_size = lookahead_size; d->m_dict_size = dict_size; + d->m_total_lz_bytes = total_lz_bytes; d->m_pLZ_code_buf = pLZ_code_buf; d->m_pLZ_flags = pLZ_flags; d->m_num_flags_left = num_flags_left; + return MZ_TRUE; +} +#endif // MINIZ_USE_UNALIGNED_LOADS_AND_STORES && MINIZ_LITTLE_ENDIAN + +static MZ_FORCEINLINE void tdefl_record_literal(tdefl_compressor *d, mz_uint8 lit) +{ + d->m_total_lz_bytes++; + *d->m_pLZ_code_buf++ = lit; + *d->m_pLZ_flags = (mz_uint8)(*d->m_pLZ_flags >> 1); if (--d->m_num_flags_left == 0) { d->m_num_flags_left = 8; d->m_pLZ_flags = d->m_pLZ_code_buf++; } + d->m_huff_count[0][lit]++; +} + +static MZ_FORCEINLINE void tdefl_record_match(tdefl_compressor *d, mz_uint match_len, mz_uint match_dist) +{ + mz_uint32 s0, s1; + + MZ_ASSERT((match_len >= TDEFL_MIN_MATCH_LEN) && (match_dist >= 1) && (match_dist <= TDEFL_LZ_DICT_SIZE)); + + d->m_total_lz_bytes += match_len; + + d->m_pLZ_code_buf[0] = (mz_uint8)(match_len - TDEFL_MIN_MATCH_LEN); + + match_dist -= 1; + d->m_pLZ_code_buf[1] = (mz_uint8)(match_dist & 0xFF); + d->m_pLZ_code_buf[2] = (mz_uint8)(match_dist >> 8); d->m_pLZ_code_buf += 3; + + *d->m_pLZ_flags = (mz_uint8)((*d->m_pLZ_flags >> 1) | 0x80); if (--d->m_num_flags_left == 0) { d->m_num_flags_left = 8; d->m_pLZ_flags = d->m_pLZ_code_buf++; } + + s0 = s_tdefl_small_dist_sym[match_dist & 511]; s1 = s_tdefl_large_dist_sym[(match_dist >> 8) & 127]; + d->m_huff_count[1][(match_dist < 512) ? s0 : s1]++; + + if (match_len >= TDEFL_MIN_MATCH_LEN) d->m_huff_count[0][s_tdefl_len_sym[match_len - TDEFL_MIN_MATCH_LEN]]++; +} + +static mz_bool tdefl_compress_normal(tdefl_compressor *d) +{ + const mz_uint8 *pSrc = d->m_pSrc; size_t src_buf_left = d->m_src_buf_left; + tdefl_flush flush = d->m_flush; + + while ((src_buf_left) || ((flush) && (d->m_lookahead_size))) + { + mz_uint len_to_move, cur_match_dist, cur_match_len, cur_pos; + // Update dictionary and hash chains. Keeps the lookahead size equal to TDEFL_MAX_MATCH_LEN. 
+ if ((d->m_lookahead_size + d->m_dict_size) >= (TDEFL_MIN_MATCH_LEN - 1)) + { + mz_uint dst_pos = (d->m_lookahead_pos + d->m_lookahead_size) & TDEFL_LZ_DICT_SIZE_MASK, ins_pos = d->m_lookahead_pos + d->m_lookahead_size - 2; + mz_uint hash = (d->m_dict[ins_pos & TDEFL_LZ_DICT_SIZE_MASK] << TDEFL_LZ_HASH_SHIFT) ^ d->m_dict[(ins_pos + 1) & TDEFL_LZ_DICT_SIZE_MASK]; + mz_uint num_bytes_to_process = (mz_uint)MZ_MIN(src_buf_left, TDEFL_MAX_MATCH_LEN - d->m_lookahead_size); + const mz_uint8 *pSrc_end = pSrc + num_bytes_to_process; + src_buf_left -= num_bytes_to_process; + d->m_lookahead_size += num_bytes_to_process; + while (pSrc != pSrc_end) + { + mz_uint8 c = *pSrc++; d->m_dict[dst_pos] = c; if (dst_pos < (TDEFL_MAX_MATCH_LEN - 1)) d->m_dict[TDEFL_LZ_DICT_SIZE + dst_pos] = c; + hash = ((hash << TDEFL_LZ_HASH_SHIFT) ^ c) & (TDEFL_LZ_HASH_SIZE - 1); + d->m_next[ins_pos & TDEFL_LZ_DICT_SIZE_MASK] = d->m_hash[hash]; d->m_hash[hash] = (mz_uint16)(ins_pos); + dst_pos = (dst_pos + 1) & TDEFL_LZ_DICT_SIZE_MASK; ins_pos++; + } + } + else + { + while ((src_buf_left) && (d->m_lookahead_size < TDEFL_MAX_MATCH_LEN)) + { + mz_uint8 c = *pSrc++; + mz_uint dst_pos = (d->m_lookahead_pos + d->m_lookahead_size) & TDEFL_LZ_DICT_SIZE_MASK; + src_buf_left--; + d->m_dict[dst_pos] = c; + if (dst_pos < (TDEFL_MAX_MATCH_LEN - 1)) + d->m_dict[TDEFL_LZ_DICT_SIZE + dst_pos] = c; + if ((++d->m_lookahead_size + d->m_dict_size) >= TDEFL_MIN_MATCH_LEN) + { + mz_uint ins_pos = d->m_lookahead_pos + (d->m_lookahead_size - 1) - 2; + mz_uint hash = ((d->m_dict[ins_pos & TDEFL_LZ_DICT_SIZE_MASK] << (TDEFL_LZ_HASH_SHIFT * 2)) ^ (d->m_dict[(ins_pos + 1) & TDEFL_LZ_DICT_SIZE_MASK] << TDEFL_LZ_HASH_SHIFT) ^ c) & (TDEFL_LZ_HASH_SIZE - 1); + d->m_next[ins_pos & TDEFL_LZ_DICT_SIZE_MASK] = d->m_hash[hash]; d->m_hash[hash] = (mz_uint16)(ins_pos); + } + } + } + d->m_dict_size = MZ_MIN(TDEFL_LZ_DICT_SIZE - d->m_lookahead_size, d->m_dict_size); + if ((!flush) && (d->m_lookahead_size < TDEFL_MAX_MATCH_LEN)) + break; + + // Simple lazy/greedy parsing state machine. + len_to_move = 1; cur_match_dist = 0; cur_match_len = d->m_saved_match_len ? 
d->m_saved_match_len : (TDEFL_MIN_MATCH_LEN - 1); cur_pos = d->m_lookahead_pos & TDEFL_LZ_DICT_SIZE_MASK; + if (d->m_flags & (TDEFL_RLE_MATCHES | TDEFL_FORCE_ALL_RAW_BLOCKS)) + { + if ((d->m_dict_size) && (!(d->m_flags & TDEFL_FORCE_ALL_RAW_BLOCKS))) + { + mz_uint8 c = d->m_dict[(cur_pos - 1) & TDEFL_LZ_DICT_SIZE_MASK]; + cur_match_len = 0; while (cur_match_len < d->m_lookahead_size) { if (d->m_dict[cur_pos + cur_match_len] != c) break; cur_match_len++; } + if (cur_match_len < TDEFL_MIN_MATCH_LEN) cur_match_len = 0; else cur_match_dist = 1; + } + } + else + { + tdefl_find_match(d, d->m_lookahead_pos, d->m_dict_size, d->m_lookahead_size, &cur_match_dist, &cur_match_len); + } + if (((cur_match_len == TDEFL_MIN_MATCH_LEN) && (cur_match_dist >= 8U*1024U)) || (cur_pos == cur_match_dist) || ((d->m_flags & TDEFL_FILTER_MATCHES) && (cur_match_len <= 5))) + { + cur_match_dist = cur_match_len = 0; + } + if (d->m_saved_match_len) + { + if (cur_match_len > d->m_saved_match_len) + { + tdefl_record_literal(d, (mz_uint8)d->m_saved_lit); + if (cur_match_len >= 128) + { + tdefl_record_match(d, cur_match_len, cur_match_dist); + d->m_saved_match_len = 0; len_to_move = cur_match_len; + } + else + { + d->m_saved_lit = d->m_dict[cur_pos]; d->m_saved_match_dist = cur_match_dist; d->m_saved_match_len = cur_match_len; + } + } + else + { + tdefl_record_match(d, d->m_saved_match_len, d->m_saved_match_dist); + len_to_move = d->m_saved_match_len - 1; d->m_saved_match_len = 0; + } + } + else if (!cur_match_dist) + tdefl_record_literal(d, d->m_dict[MZ_MIN(cur_pos, sizeof(d->m_dict) - 1)]); + else if ((d->m_greedy_parsing) || (d->m_flags & TDEFL_RLE_MATCHES) || (cur_match_len >= 128)) + { + tdefl_record_match(d, cur_match_len, cur_match_dist); + len_to_move = cur_match_len; + } + else + { + d->m_saved_lit = d->m_dict[MZ_MIN(cur_pos, sizeof(d->m_dict) - 1)]; d->m_saved_match_dist = cur_match_dist; d->m_saved_match_len = cur_match_len; + } + // Move the lookahead forward by len_to_move bytes. + d->m_lookahead_pos += len_to_move; + MZ_ASSERT(d->m_lookahead_size >= len_to_move); + d->m_lookahead_size -= len_to_move; + d->m_dict_size = MZ_MIN(d->m_dict_size + len_to_move, TDEFL_LZ_DICT_SIZE); + // Check if it's time to flush the current LZ codes to the internal output buffer. + if ( (d->m_pLZ_code_buf > &d->m_lz_code_buf[TDEFL_LZ_CODE_BUF_SIZE - 8]) || + ( (d->m_total_lz_bytes > 31*1024) && (((((mz_uint)(d->m_pLZ_code_buf - d->m_lz_code_buf) * 115) >> 7) >= d->m_total_lz_bytes) || (d->m_flags & TDEFL_FORCE_ALL_RAW_BLOCKS))) ) + { + int n; + d->m_pSrc = pSrc; d->m_src_buf_left = src_buf_left; + if ((n = tdefl_flush_block(d, 0)) != 0) + return (n < 0) ? MZ_FALSE : MZ_TRUE; + } + } + + d->m_pSrc = pSrc; d->m_src_buf_left = src_buf_left; + return MZ_TRUE; +} + +static tdefl_status tdefl_flush_output_buffer(tdefl_compressor *d) +{ + if (d->m_pIn_buf_size) + { + *d->m_pIn_buf_size = d->m_pSrc - (const mz_uint8 *)d->m_pIn_buf; + } + + if (d->m_pOut_buf_size) + { + size_t n = MZ_MIN(*d->m_pOut_buf_size - d->m_out_buf_ofs, d->m_output_flush_remaining); + memcpy((mz_uint8 *)d->m_pOut_buf + d->m_out_buf_ofs, d->m_output_buf + d->m_output_flush_ofs, n); + d->m_output_flush_ofs += (mz_uint)n; + d->m_output_flush_remaining -= (mz_uint)n; + d->m_out_buf_ofs += n; + + *d->m_pOut_buf_size = d->m_out_buf_ofs; + } + + return (d->m_finished && !d->m_output_flush_remaining) ? 
TDEFL_STATUS_DONE : TDEFL_STATUS_OKAY; +} + +tdefl_status tdefl_compress(tdefl_compressor *d, const void *pIn_buf, size_t *pIn_buf_size, void *pOut_buf, size_t *pOut_buf_size, tdefl_flush flush) +{ + if (!d) + { + if (pIn_buf_size) *pIn_buf_size = 0; + if (pOut_buf_size) *pOut_buf_size = 0; + return TDEFL_STATUS_BAD_PARAM; + } + + d->m_pIn_buf = pIn_buf; d->m_pIn_buf_size = pIn_buf_size; + d->m_pOut_buf = pOut_buf; d->m_pOut_buf_size = pOut_buf_size; + d->m_pSrc = (const mz_uint8 *)(pIn_buf); d->m_src_buf_left = pIn_buf_size ? *pIn_buf_size : 0; + d->m_out_buf_ofs = 0; + d->m_flush = flush; + + if ( ((d->m_pPut_buf_func != NULL) == ((pOut_buf != NULL) || (pOut_buf_size != NULL))) || (d->m_prev_return_status != TDEFL_STATUS_OKAY) || + (d->m_wants_to_finish && (flush != TDEFL_FINISH)) || (pIn_buf_size && *pIn_buf_size && !pIn_buf) || (pOut_buf_size && *pOut_buf_size && !pOut_buf) ) + { + if (pIn_buf_size) *pIn_buf_size = 0; + if (pOut_buf_size) *pOut_buf_size = 0; + return (d->m_prev_return_status = TDEFL_STATUS_BAD_PARAM); + } + d->m_wants_to_finish |= (flush == TDEFL_FINISH); + + if ((d->m_output_flush_remaining) || (d->m_finished)) + return (d->m_prev_return_status = tdefl_flush_output_buffer(d)); + +#if MINIZ_USE_UNALIGNED_LOADS_AND_STORES && MINIZ_LITTLE_ENDIAN + if (((d->m_flags & TDEFL_MAX_PROBES_MASK) == 1) && + ((d->m_flags & TDEFL_GREEDY_PARSING_FLAG) != 0) && + ((d->m_flags & (TDEFL_FILTER_MATCHES | TDEFL_FORCE_ALL_RAW_BLOCKS | TDEFL_RLE_MATCHES)) == 0)) + { + if (!tdefl_compress_fast(d)) + return d->m_prev_return_status; + } + else +#endif // #if MINIZ_USE_UNALIGNED_LOADS_AND_STORES && MINIZ_LITTLE_ENDIAN + { + if (!tdefl_compress_normal(d)) + return d->m_prev_return_status; + } + + if ((d->m_flags & (TDEFL_WRITE_ZLIB_HEADER | TDEFL_COMPUTE_ADLER32)) && (pIn_buf)) + d->m_adler32 = (mz_uint32)mz_adler32(d->m_adler32, (const mz_uint8 *)pIn_buf, d->m_pSrc - (const mz_uint8 *)pIn_buf); + + if ((flush) && (!d->m_lookahead_size) && (!d->m_src_buf_left) && (!d->m_output_flush_remaining)) + { + if (tdefl_flush_block(d, flush) < 0) + return d->m_prev_return_status; + d->m_finished = (flush == TDEFL_FINISH); + if (flush == TDEFL_FULL_FLUSH) { MZ_CLEAR_OBJ(d->m_hash); MZ_CLEAR_OBJ(d->m_next); d->m_dict_size = 0; } + } + + return (d->m_prev_return_status = tdefl_flush_output_buffer(d)); +} + +tdefl_status tdefl_compress_buffer(tdefl_compressor *d, const void *pIn_buf, size_t in_buf_size, tdefl_flush flush) +{ + MZ_ASSERT(d->m_pPut_buf_func); return tdefl_compress(d, pIn_buf, &in_buf_size, NULL, NULL, flush); +} + +tdefl_status tdefl_init(tdefl_compressor *d, tdefl_put_buf_func_ptr pPut_buf_func, void *pPut_buf_user, int flags) +{ + d->m_pPut_buf_func = pPut_buf_func; d->m_pPut_buf_user = pPut_buf_user; + d->m_flags = (mz_uint)(flags); d->m_max_probes[0] = 1 + ((flags & 0xFFF) + 2) / 3; d->m_greedy_parsing = (flags & TDEFL_GREEDY_PARSING_FLAG) != 0; + d->m_max_probes[1] = 1 + (((flags & 0xFFF) >> 2) + 2) / 3; + if (!(flags & TDEFL_NONDETERMINISTIC_PARSING_FLAG)) MZ_CLEAR_OBJ(d->m_hash); + d->m_lookahead_pos = d->m_lookahead_size = d->m_dict_size = d->m_total_lz_bytes = d->m_lz_code_buf_dict_pos = d->m_bits_in = 0; + d->m_output_flush_ofs = d->m_output_flush_remaining = d->m_finished = d->m_block_index = d->m_bit_buffer = d->m_wants_to_finish = 0; + d->m_pLZ_code_buf = d->m_lz_code_buf + 1; d->m_pLZ_flags = d->m_lz_code_buf; d->m_num_flags_left = 8; + d->m_pOutput_buf = d->m_output_buf; d->m_pOutput_buf_end = d->m_output_buf; d->m_prev_return_status = TDEFL_STATUS_OKAY; + d->m_saved_match_dist 
= d->m_saved_match_len = d->m_saved_lit = 0; d->m_adler32 = 1; + d->m_pIn_buf = NULL; d->m_pOut_buf = NULL; + d->m_pIn_buf_size = NULL; d->m_pOut_buf_size = NULL; + d->m_flush = TDEFL_NO_FLUSH; d->m_pSrc = NULL; d->m_src_buf_left = 0; d->m_out_buf_ofs = 0; + memset(&d->m_huff_count[0][0], 0, sizeof(d->m_huff_count[0][0]) * TDEFL_MAX_HUFF_SYMBOLS_0); + memset(&d->m_huff_count[1][0], 0, sizeof(d->m_huff_count[1][0]) * TDEFL_MAX_HUFF_SYMBOLS_1); + return TDEFL_STATUS_OKAY; +} + +tdefl_status tdefl_get_prev_return_status(tdefl_compressor *d) +{ + return d->m_prev_return_status; +} + +mz_uint32 tdefl_get_adler32(tdefl_compressor *d) +{ + return d->m_adler32; +} + +mz_bool tdefl_compress_mem_to_output(const void *pBuf, size_t buf_len, tdefl_put_buf_func_ptr pPut_buf_func, void *pPut_buf_user, int flags) +{ + tdefl_compressor *pComp; mz_bool succeeded; if (((buf_len) && (!pBuf)) || (!pPut_buf_func)) return MZ_FALSE; + pComp = (tdefl_compressor*)MZ_MALLOC(sizeof(tdefl_compressor)); if (!pComp) return MZ_FALSE; + succeeded = (tdefl_init(pComp, pPut_buf_func, pPut_buf_user, flags) == TDEFL_STATUS_OKAY); + succeeded = succeeded && (tdefl_compress_buffer(pComp, pBuf, buf_len, TDEFL_FINISH) == TDEFL_STATUS_DONE); + MZ_FREE(pComp); return succeeded; +} + +typedef struct +{ + size_t m_size, m_capacity; + mz_uint8 *m_pBuf; + mz_bool m_expandable; +} tdefl_output_buffer; + +static mz_bool tdefl_output_buffer_putter(const void *pBuf, int len, void *pUser) +{ + tdefl_output_buffer *p = (tdefl_output_buffer *)pUser; + size_t new_size = p->m_size + len; + if (new_size > p->m_capacity) + { + size_t new_capacity = p->m_capacity; mz_uint8 *pNew_buf; if (!p->m_expandable) return MZ_FALSE; + do { new_capacity = MZ_MAX(128U, new_capacity << 1U); } while (new_size > new_capacity); + pNew_buf = (mz_uint8*)MZ_REALLOC(p->m_pBuf, new_capacity); if (!pNew_buf) return MZ_FALSE; + p->m_pBuf = pNew_buf; p->m_capacity = new_capacity; + } + memcpy((mz_uint8*)p->m_pBuf + p->m_size, pBuf, len); p->m_size = new_size; + return MZ_TRUE; +} + +void *tdefl_compress_mem_to_heap(const void *pSrc_buf, size_t src_buf_len, size_t *pOut_len, int flags) +{ + tdefl_output_buffer out_buf; MZ_CLEAR_OBJ(out_buf); + if (!pOut_len) return MZ_FALSE; else *pOut_len = 0; + out_buf.m_expandable = MZ_TRUE; + if (!tdefl_compress_mem_to_output(pSrc_buf, src_buf_len, tdefl_output_buffer_putter, &out_buf, flags)) return NULL; + *pOut_len = out_buf.m_size; return out_buf.m_pBuf; +} + +size_t tdefl_compress_mem_to_mem(void *pOut_buf, size_t out_buf_len, const void *pSrc_buf, size_t src_buf_len, int flags) +{ + tdefl_output_buffer out_buf; MZ_CLEAR_OBJ(out_buf); + if (!pOut_buf) return 0; + out_buf.m_pBuf = (mz_uint8*)pOut_buf; out_buf.m_capacity = out_buf_len; + if (!tdefl_compress_mem_to_output(pSrc_buf, src_buf_len, tdefl_output_buffer_putter, &out_buf, flags)) return 0; + return out_buf.m_size; +} + +#ifndef MINIZ_NO_ZLIB_APIS +static const mz_uint s_tdefl_num_probes[11] = { 0, 1, 6, 32, 16, 32, 128, 256, 512, 768, 1500 }; + +// level may actually range from [0,10] (10 is a "hidden" max level, where we want a bit more compression and it's fine if throughput to fall off a cliff on some files). +mz_uint tdefl_create_comp_flags_from_zip_params(int level, int window_bits, int strategy) +{ + mz_uint comp_flags = s_tdefl_num_probes[(level >= 0) ? MZ_MIN(10, level) : MZ_DEFAULT_LEVEL] | ((level <= 3) ? 
TDEFL_GREEDY_PARSING_FLAG : 0); + if (window_bits > 0) comp_flags |= TDEFL_WRITE_ZLIB_HEADER; + + if (!level) comp_flags |= TDEFL_FORCE_ALL_RAW_BLOCKS; + else if (strategy == MZ_FILTERED) comp_flags |= TDEFL_FILTER_MATCHES; + else if (strategy == MZ_HUFFMAN_ONLY) comp_flags &= ~TDEFL_MAX_PROBES_MASK; + else if (strategy == MZ_FIXED) comp_flags |= TDEFL_FORCE_ALL_STATIC_BLOCKS; + else if (strategy == MZ_RLE) comp_flags |= TDEFL_RLE_MATCHES; + + return comp_flags; +} +#endif //MINIZ_NO_ZLIB_APIS + +#ifdef _MSC_VER +#pragma warning (push) +#pragma warning (disable:4204) // nonstandard extension used : non-constant aggregate initializer (also supported by GNU C and C99, so no big deal) +#endif + +// Simple PNG writer function by Alex Evans, 2011. Released into the public domain: https://gist.github.com/908299, more context at +// http://altdevblogaday.org/2011/04/06/a-smaller-jpg-encoder/. +// This is actually a modification of Alex's original code so PNG files generated by this function pass pngcheck. +void *tdefl_write_image_to_png_file_in_memory_ex(const void *pImage, int w, int h, int num_chans, size_t *pLen_out, mz_uint level, mz_bool flip) +{ + // Using a local copy of this array here in case MINIZ_NO_ZLIB_APIS was defined. + static const mz_uint s_tdefl_png_num_probes[11] = { 0, 1, 6, 32, 16, 32, 128, 256, 512, 768, 1500 }; + tdefl_compressor *pComp = (tdefl_compressor *)MZ_MALLOC(sizeof(tdefl_compressor)); tdefl_output_buffer out_buf; int i, bpl = w * num_chans, y, z; mz_uint32 c; *pLen_out = 0; + if (!pComp) return NULL; + MZ_CLEAR_OBJ(out_buf); out_buf.m_expandable = MZ_TRUE; out_buf.m_capacity = 57+MZ_MAX(64, (1+bpl)*h); if (NULL == (out_buf.m_pBuf = (mz_uint8*)MZ_MALLOC(out_buf.m_capacity))) { MZ_FREE(pComp); return NULL; } + // write dummy header + for (z = 41; z; --z) tdefl_output_buffer_putter(&z, 1, &out_buf); + // compress image data + tdefl_init(pComp, tdefl_output_buffer_putter, &out_buf, s_tdefl_png_num_probes[MZ_MIN(10, level)] | TDEFL_WRITE_ZLIB_HEADER); + for (y = 0; y < h; ++y) { tdefl_compress_buffer(pComp, &z, 1, TDEFL_NO_FLUSH); tdefl_compress_buffer(pComp, (mz_uint8*)pImage + (flip ? 
(h - 1 - y) : y) * bpl, bpl, TDEFL_NO_FLUSH); }
+  if (tdefl_compress_buffer(pComp, NULL, 0, TDEFL_FINISH) != TDEFL_STATUS_DONE) { MZ_FREE(pComp); MZ_FREE(out_buf.m_pBuf); return NULL; }
+  // write real header
+  *pLen_out = out_buf.m_size-41;
+  {
+    static const mz_uint8 chans[] = {0x00, 0x00, 0x04, 0x02, 0x06};
+    mz_uint8 pnghdr[41]={0x89,0x50,0x4e,0x47,0x0d,0x0a,0x1a,0x0a,0x00,0x00,0x00,0x0d,0x49,0x48,0x44,0x52,
+      0,0,(mz_uint8)(w>>8),(mz_uint8)w,0,0,(mz_uint8)(h>>8),(mz_uint8)h,8,chans[num_chans],0,0,0,0,0,0,0,
+      (mz_uint8)(*pLen_out>>24),(mz_uint8)(*pLen_out>>16),(mz_uint8)(*pLen_out>>8),(mz_uint8)*pLen_out,0x49,0x44,0x41,0x54};
+    c=(mz_uint32)mz_crc32(MZ_CRC32_INIT,pnghdr+12,17); for (i=0; i<4; ++i, c<<=8) ((mz_uint8*)(pnghdr+29))[i]=(mz_uint8)(c>>24);
+    memcpy(out_buf.m_pBuf, pnghdr, 41);
+  }
+  // write footer (IDAT CRC-32, followed by IEND chunk)
+  if (!tdefl_output_buffer_putter("\0\0\0\0\0\0\0\0\x49\x45\x4e\x44\xae\x42\x60\x82", 16, &out_buf)) { *pLen_out = 0; MZ_FREE(pComp); MZ_FREE(out_buf.m_pBuf); return NULL; }
+  c = (mz_uint32)mz_crc32(MZ_CRC32_INIT,out_buf.m_pBuf+41-4, *pLen_out+4); for (i=0; i<4; ++i, c<<=8) (out_buf.m_pBuf+out_buf.m_size-16)[i] = (mz_uint8)(c >> 24);
+  // compute final size of file, grab compressed data buffer and return
+  *pLen_out += 57; MZ_FREE(pComp); return out_buf.m_pBuf;
+}
+void *tdefl_write_image_to_png_file_in_memory(const void *pImage, int w, int h, int num_chans, size_t *pLen_out)
+{
+  // Level 6 corresponds to TDEFL_DEFAULT_MAX_PROBES or MZ_DEFAULT_LEVEL (but we can't depend on MZ_DEFAULT_LEVEL being available in case the zlib API's were #defined out)
+  return tdefl_write_image_to_png_file_in_memory_ex(pImage, w, h, num_chans, pLen_out, 6, MZ_FALSE);
+}
+
+#ifdef _MSC_VER
+#pragma warning (pop)
+#endif
+
+// ------------------- .ZIP archive reading
+
+#ifndef MINIZ_NO_ARCHIVE_APIS
+
+#ifdef MINIZ_NO_STDIO
+  #define MZ_FILE void *
+#else
+  #include <stdio.h>
+  #include <sys/stat.h>
+
+  #if defined(_MSC_VER) || defined(__MINGW64__)
+    static FILE *mz_fopen(const char *pFilename, const char *pMode)
+    {
+      FILE* pFile = NULL;
+      fopen_s(&pFile, pFilename, pMode);
+      return pFile;
+    }
+    static FILE *mz_freopen(const char *pPath, const char *pMode, FILE *pStream)
+    {
+      FILE* pFile = NULL;
+      if (freopen_s(&pFile, pPath, pMode, pStream))
+        return NULL;
+      return pFile;
+    }
+    #ifndef MINIZ_NO_TIME
+      #include <sys/utime.h>
+    #endif
+    #define MZ_FILE FILE
+    #define MZ_FOPEN mz_fopen
+    #define MZ_FCLOSE fclose
+    #define MZ_FREAD fread
+    #define MZ_FWRITE fwrite
+    #define MZ_FTELL64 _ftelli64
+    #define MZ_FSEEK64 _fseeki64
+    #define MZ_FILE_STAT_STRUCT _stat
+    #define MZ_FILE_STAT _stat
+    #define MZ_FFLUSH fflush
+    #define MZ_FREOPEN mz_freopen
+    #define MZ_DELETE_FILE remove
+  #elif defined(__MINGW32__)
+    #ifndef MINIZ_NO_TIME
+      #include <sys/utime.h>
+    #endif
+    #define MZ_FILE FILE
+    #define MZ_FOPEN(f, m) fopen(f, m)
+    #define MZ_FCLOSE fclose
+    #define MZ_FREAD fread
+    #define MZ_FWRITE fwrite
+    #define MZ_FTELL64 ftello64
+    #define MZ_FSEEK64 fseeko64
+    #define MZ_FILE_STAT_STRUCT _stat
+    #define MZ_FILE_STAT _stat
+    #define MZ_FFLUSH fflush
+    #define MZ_FREOPEN(f, m, s) freopen(f, m, s)
+    #define MZ_DELETE_FILE remove
+  #elif defined(__TINYC__)
+    #ifndef MINIZ_NO_TIME
+      #include <sys/utime.h>
+    #endif
+    #define MZ_FILE FILE
+    #define MZ_FOPEN(f, m) fopen(f, m)
+    #define MZ_FCLOSE fclose
+    #define MZ_FREAD fread
+    #define MZ_FWRITE fwrite
+    #define MZ_FTELL64 ftell
+    #define MZ_FSEEK64 fseek
+    #define MZ_FILE_STAT_STRUCT stat
+    #define MZ_FILE_STAT stat
+    #define MZ_FFLUSH fflush
+    #define MZ_FREOPEN(f, m, s) freopen(f, m, s)
+    #define MZ_DELETE_FILE remove
+  #elif defined(__GNUC__) && _LARGEFILE64_SOURCE
+    #ifndef MINIZ_NO_TIME
+      #include <utime.h>
+    #endif
+    #define MZ_FILE FILE
+    #define MZ_FOPEN(f, m) fopen64(f, m)
+    #define MZ_FCLOSE fclose
+    #define MZ_FREAD fread
+    #define MZ_FWRITE fwrite
+    #define MZ_FTELL64 ftello64
+    #define MZ_FSEEK64 fseeko64
+    #define MZ_FILE_STAT_STRUCT stat64
+    #define MZ_FILE_STAT stat64
+    #define MZ_FFLUSH fflush
+    #define MZ_FREOPEN(p, m, s) freopen64(p, m, s)
+    #define MZ_DELETE_FILE remove
+  #else
+    #ifndef MINIZ_NO_TIME
+      #include <utime.h>
+    #endif
+    #define MZ_FILE FILE
+    #define MZ_FOPEN(f, m) fopen(f, m)
+    #define MZ_FCLOSE fclose
+    #define MZ_FREAD fread
+    #define MZ_FWRITE fwrite
+    #define MZ_FTELL64 ftello
+    #define MZ_FSEEK64 fseeko
+    #define MZ_FILE_STAT_STRUCT stat
+    #define MZ_FILE_STAT stat
+    #define MZ_FFLUSH fflush
+    #define MZ_FREOPEN(f, m, s) freopen(f, m, s)
+    #define MZ_DELETE_FILE remove
+  #endif // #ifdef _MSC_VER
+#endif // #ifdef MINIZ_NO_STDIO
+
+#define MZ_TOLOWER(c) ((((c) >= 'A') && ((c) <= 'Z')) ? ((c) - 'A' + 'a') : (c))
+
+// Various ZIP archive enums. To completely avoid cross platform compiler alignment and platform endian issues, miniz.c doesn't use structs for any of this stuff.
+enum
+{
+  // ZIP archive identifiers and record sizes
+  MZ_ZIP_END_OF_CENTRAL_DIR_HEADER_SIG = 0x06054b50, MZ_ZIP_CENTRAL_DIR_HEADER_SIG = 0x02014b50, MZ_ZIP_LOCAL_DIR_HEADER_SIG = 0x04034b50,
+  MZ_ZIP_LOCAL_DIR_HEADER_SIZE = 30, MZ_ZIP_CENTRAL_DIR_HEADER_SIZE = 46, MZ_ZIP_END_OF_CENTRAL_DIR_HEADER_SIZE = 22,
+  // Central directory header record offsets
+  MZ_ZIP_CDH_SIG_OFS = 0, MZ_ZIP_CDH_VERSION_MADE_BY_OFS = 4, MZ_ZIP_CDH_VERSION_NEEDED_OFS = 6, MZ_ZIP_CDH_BIT_FLAG_OFS = 8,
+  MZ_ZIP_CDH_METHOD_OFS = 10, MZ_ZIP_CDH_FILE_TIME_OFS = 12, MZ_ZIP_CDH_FILE_DATE_OFS = 14, MZ_ZIP_CDH_CRC32_OFS = 16,
+  MZ_ZIP_CDH_COMPRESSED_SIZE_OFS = 20, MZ_ZIP_CDH_DECOMPRESSED_SIZE_OFS = 24, MZ_ZIP_CDH_FILENAME_LEN_OFS = 28, MZ_ZIP_CDH_EXTRA_LEN_OFS = 30,
+  MZ_ZIP_CDH_COMMENT_LEN_OFS = 32, MZ_ZIP_CDH_DISK_START_OFS = 34, MZ_ZIP_CDH_INTERNAL_ATTR_OFS = 36, MZ_ZIP_CDH_EXTERNAL_ATTR_OFS = 38, MZ_ZIP_CDH_LOCAL_HEADER_OFS = 42,
+  // Local directory header offsets
+  MZ_ZIP_LDH_SIG_OFS = 0, MZ_ZIP_LDH_VERSION_NEEDED_OFS = 4, MZ_ZIP_LDH_BIT_FLAG_OFS = 6, MZ_ZIP_LDH_METHOD_OFS = 8, MZ_ZIP_LDH_FILE_TIME_OFS = 10,
+  MZ_ZIP_LDH_FILE_DATE_OFS = 12, MZ_ZIP_LDH_CRC32_OFS = 14, MZ_ZIP_LDH_COMPRESSED_SIZE_OFS = 18, MZ_ZIP_LDH_DECOMPRESSED_SIZE_OFS = 22,
+  MZ_ZIP_LDH_FILENAME_LEN_OFS = 26, MZ_ZIP_LDH_EXTRA_LEN_OFS = 28,
+  // End of central directory offsets
+  MZ_ZIP_ECDH_SIG_OFS = 0, MZ_ZIP_ECDH_NUM_THIS_DISK_OFS = 4, MZ_ZIP_ECDH_NUM_DISK_CDIR_OFS = 6, MZ_ZIP_ECDH_CDIR_NUM_ENTRIES_ON_DISK_OFS = 8,
+  MZ_ZIP_ECDH_CDIR_TOTAL_ENTRIES_OFS = 10, MZ_ZIP_ECDH_CDIR_SIZE_OFS = 12, MZ_ZIP_ECDH_CDIR_OFS_OFS = 16, MZ_ZIP_ECDH_COMMENT_SIZE_OFS = 20,
+};
+
+typedef struct
+{
+  void *m_p;
+  size_t m_size, m_capacity;
+  mz_uint m_element_size;
+} mz_zip_array;
+
+struct mz_zip_internal_state_tag
+{
+  mz_zip_array m_central_dir;
+  mz_zip_array m_central_dir_offsets;
+  mz_zip_array m_sorted_central_dir_offsets;
+  MZ_FILE *m_pFile;
+  void *m_pMem;
+  size_t m_mem_size;
+  size_t m_mem_capacity;
+};
+
+#define MZ_ZIP_ARRAY_SET_ELEMENT_SIZE(array_ptr, element_size) (array_ptr)->m_element_size = element_size
+#define MZ_ZIP_ARRAY_ELEMENT(array_ptr, element_type, index) ((element_type *)((array_ptr)->m_p))[index]
+
+static MZ_FORCEINLINE void mz_zip_array_clear(mz_zip_archive *pZip, mz_zip_array *pArray)
+{
+
pZip->m_pFree(pZip->m_pAlloc_opaque, pArray->m_p); + memset(pArray, 0, sizeof(mz_zip_array)); +} + +static mz_bool mz_zip_array_ensure_capacity(mz_zip_archive *pZip, mz_zip_array *pArray, size_t min_new_capacity, mz_uint growing) +{ + void *pNew_p; size_t new_capacity = min_new_capacity; MZ_ASSERT(pArray->m_element_size); if (pArray->m_capacity >= min_new_capacity) return MZ_TRUE; + if (growing) { new_capacity = MZ_MAX(1, pArray->m_capacity); while (new_capacity < min_new_capacity) new_capacity *= 2; } + if (NULL == (pNew_p = pZip->m_pRealloc(pZip->m_pAlloc_opaque, pArray->m_p, pArray->m_element_size, new_capacity))) return MZ_FALSE; + pArray->m_p = pNew_p; pArray->m_capacity = new_capacity; + return MZ_TRUE; +} + +static MZ_FORCEINLINE mz_bool mz_zip_array_reserve(mz_zip_archive *pZip, mz_zip_array *pArray, size_t new_capacity, mz_uint growing) +{ + if (new_capacity > pArray->m_capacity) { if (!mz_zip_array_ensure_capacity(pZip, pArray, new_capacity, growing)) return MZ_FALSE; } + return MZ_TRUE; +} + +static MZ_FORCEINLINE mz_bool mz_zip_array_resize(mz_zip_archive *pZip, mz_zip_array *pArray, size_t new_size, mz_uint growing) +{ + if (new_size > pArray->m_capacity) { if (!mz_zip_array_ensure_capacity(pZip, pArray, new_size, growing)) return MZ_FALSE; } + pArray->m_size = new_size; + return MZ_TRUE; +} + +static MZ_FORCEINLINE mz_bool mz_zip_array_ensure_room(mz_zip_archive *pZip, mz_zip_array *pArray, size_t n) +{ + return mz_zip_array_reserve(pZip, pArray, pArray->m_size + n, MZ_TRUE); +} + +static MZ_FORCEINLINE mz_bool mz_zip_array_push_back(mz_zip_archive *pZip, mz_zip_array *pArray, const void *pElements, size_t n) +{ + size_t orig_size = pArray->m_size; if (!mz_zip_array_resize(pZip, pArray, orig_size + n, MZ_TRUE)) return MZ_FALSE; + memcpy((mz_uint8*)pArray->m_p + orig_size * pArray->m_element_size, pElements, n * pArray->m_element_size); + return MZ_TRUE; +} + +#ifndef MINIZ_NO_TIME +static time_t mz_zip_dos_to_time_t(int dos_time, int dos_date) +{ + struct tm tm; + memset(&tm, 0, sizeof(tm)); tm.tm_isdst = -1; + tm.tm_year = ((dos_date >> 9) & 127) + 1980 - 1900; tm.tm_mon = ((dos_date >> 5) & 15) - 1; tm.tm_mday = dos_date & 31; + tm.tm_hour = (dos_time >> 11) & 31; tm.tm_min = (dos_time >> 5) & 63; tm.tm_sec = (dos_time << 1) & 62; + return mktime(&tm); +} + +static void mz_zip_time_to_dos_time(time_t time, mz_uint16 *pDOS_time, mz_uint16 *pDOS_date) +{ +#ifdef _MSC_VER + struct tm tm_struct; + struct tm *tm = &tm_struct; + errno_t err = localtime_s(tm, &time); + if (err) + { + *pDOS_date = 0; *pDOS_time = 0; + return; + } +#else + struct tm *tm = localtime(&time); +#endif + *pDOS_time = (mz_uint16)(((tm->tm_hour) << 11) + ((tm->tm_min) << 5) + ((tm->tm_sec) >> 1)); + *pDOS_date = (mz_uint16)(((tm->tm_year + 1900 - 1980) << 9) + ((tm->tm_mon + 1) << 5) + tm->tm_mday); +} +#endif + +#ifndef MINIZ_NO_STDIO +static mz_bool mz_zip_get_file_modified_time(const char *pFilename, mz_uint16 *pDOS_time, mz_uint16 *pDOS_date) +{ +#ifdef MINIZ_NO_TIME + (void)pFilename; *pDOS_date = *pDOS_time = 0; +#else + struct MZ_FILE_STAT_STRUCT file_stat; + // On Linux with x86 glibc, this call will fail on large files (>= 0x80000000 bytes) unless you compiled with _LARGEFILE64_SOURCE. Argh. 
+ if (MZ_FILE_STAT(pFilename, &file_stat) != 0) + return MZ_FALSE; + mz_zip_time_to_dos_time(file_stat.st_mtime, pDOS_time, pDOS_date); +#endif // #ifdef MINIZ_NO_TIME + return MZ_TRUE; +} + +#ifndef MINIZ_NO_TIME +static mz_bool mz_zip_set_file_times(const char *pFilename, time_t access_time, time_t modified_time) +{ + struct utimbuf t; t.actime = access_time; t.modtime = modified_time; + return !utime(pFilename, &t); +} +#endif // #ifndef MINIZ_NO_TIME +#endif // #ifndef MINIZ_NO_STDIO + +static mz_bool mz_zip_reader_init_internal(mz_zip_archive *pZip, mz_uint32 flags) +{ + (void)flags; + if ((!pZip) || (pZip->m_pState) || (pZip->m_zip_mode != MZ_ZIP_MODE_INVALID)) + return MZ_FALSE; + + if (!pZip->m_pAlloc) pZip->m_pAlloc = def_alloc_func; + if (!pZip->m_pFree) pZip->m_pFree = def_free_func; + if (!pZip->m_pRealloc) pZip->m_pRealloc = def_realloc_func; + + pZip->m_zip_mode = MZ_ZIP_MODE_READING; + pZip->m_archive_size = 0; + pZip->m_central_directory_file_ofs = 0; + pZip->m_total_files = 0; + + if (NULL == (pZip->m_pState = (mz_zip_internal_state *)pZip->m_pAlloc(pZip->m_pAlloc_opaque, 1, sizeof(mz_zip_internal_state)))) + return MZ_FALSE; + memset(pZip->m_pState, 0, sizeof(mz_zip_internal_state)); + MZ_ZIP_ARRAY_SET_ELEMENT_SIZE(&pZip->m_pState->m_central_dir, sizeof(mz_uint8)); + MZ_ZIP_ARRAY_SET_ELEMENT_SIZE(&pZip->m_pState->m_central_dir_offsets, sizeof(mz_uint32)); + MZ_ZIP_ARRAY_SET_ELEMENT_SIZE(&pZip->m_pState->m_sorted_central_dir_offsets, sizeof(mz_uint32)); + return MZ_TRUE; +} + +static MZ_FORCEINLINE mz_bool mz_zip_reader_filename_less(const mz_zip_array *pCentral_dir_array, const mz_zip_array *pCentral_dir_offsets, mz_uint l_index, mz_uint r_index) +{ + const mz_uint8 *pL = &MZ_ZIP_ARRAY_ELEMENT(pCentral_dir_array, mz_uint8, MZ_ZIP_ARRAY_ELEMENT(pCentral_dir_offsets, mz_uint32, l_index)), *pE; + const mz_uint8 *pR = &MZ_ZIP_ARRAY_ELEMENT(pCentral_dir_array, mz_uint8, MZ_ZIP_ARRAY_ELEMENT(pCentral_dir_offsets, mz_uint32, r_index)); + mz_uint l_len = MZ_READ_LE16(pL + MZ_ZIP_CDH_FILENAME_LEN_OFS), r_len = MZ_READ_LE16(pR + MZ_ZIP_CDH_FILENAME_LEN_OFS); + mz_uint8 l = 0, r = 0; + pL += MZ_ZIP_CENTRAL_DIR_HEADER_SIZE; pR += MZ_ZIP_CENTRAL_DIR_HEADER_SIZE; + pE = pL + MZ_MIN(l_len, r_len); + while (pL < pE) + { + if ((l = MZ_TOLOWER(*pL)) != (r = MZ_TOLOWER(*pR))) + break; + pL++; pR++; + } + return (pL == pE) ? (l_len < r_len) : (l < r); +} + +#define MZ_SWAP_UINT32(a, b) do { mz_uint32 t = a; a = b; b = t; } MZ_MACRO_END + +// Heap sort of lowercased filenames, used to help accelerate plain central directory searches by mz_zip_reader_locate_file(). (Could also use qsort(), but it could allocate memory.) 
+static void mz_zip_reader_sort_central_dir_offsets_by_filename(mz_zip_archive *pZip) +{ + mz_zip_internal_state *pState = pZip->m_pState; + const mz_zip_array *pCentral_dir_offsets = &pState->m_central_dir_offsets; + const mz_zip_array *pCentral_dir = &pState->m_central_dir; + mz_uint32 *pIndices = &MZ_ZIP_ARRAY_ELEMENT(&pState->m_sorted_central_dir_offsets, mz_uint32, 0); + const int size = pZip->m_total_files; + int start = (size - 2) >> 1, end; + while (start >= 0) + { + int child, root = start; + for ( ; ; ) + { + if ((child = (root << 1) + 1) >= size) + break; + child += (((child + 1) < size) && (mz_zip_reader_filename_less(pCentral_dir, pCentral_dir_offsets, pIndices[child], pIndices[child + 1]))); + if (!mz_zip_reader_filename_less(pCentral_dir, pCentral_dir_offsets, pIndices[root], pIndices[child])) + break; + MZ_SWAP_UINT32(pIndices[root], pIndices[child]); root = child; + } + start--; + } + + end = size - 1; + while (end > 0) + { + int child, root = 0; + MZ_SWAP_UINT32(pIndices[end], pIndices[0]); + for ( ; ; ) + { + if ((child = (root << 1) + 1) >= end) + break; + child += (((child + 1) < end) && mz_zip_reader_filename_less(pCentral_dir, pCentral_dir_offsets, pIndices[child], pIndices[child + 1])); + if (!mz_zip_reader_filename_less(pCentral_dir, pCentral_dir_offsets, pIndices[root], pIndices[child])) + break; + MZ_SWAP_UINT32(pIndices[root], pIndices[child]); root = child; + } + end--; + } +} + +static mz_bool mz_zip_reader_read_central_dir(mz_zip_archive *pZip, mz_uint32 flags) +{ + mz_uint cdir_size, num_this_disk, cdir_disk_index; + mz_uint64 cdir_ofs; + mz_int64 cur_file_ofs; + const mz_uint8 *p; + mz_uint32 buf_u32[4096 / sizeof(mz_uint32)]; mz_uint8 *pBuf = (mz_uint8 *)buf_u32; + mz_bool sort_central_dir = ((flags & MZ_ZIP_FLAG_DO_NOT_SORT_CENTRAL_DIRECTORY) == 0); + // Basic sanity checks - reject files which are too small, and check the first 4 bytes of the file to make sure a local header is there. + if (pZip->m_archive_size < MZ_ZIP_END_OF_CENTRAL_DIR_HEADER_SIZE) + return MZ_FALSE; + // Find the end of central directory record by scanning the file from the end towards the beginning. + cur_file_ofs = MZ_MAX((mz_int64)pZip->m_archive_size - (mz_int64)sizeof(buf_u32), 0); + for ( ; ; ) + { + int i, n = (int)MZ_MIN(sizeof(buf_u32), pZip->m_archive_size - cur_file_ofs); + if (pZip->m_pRead(pZip->m_pIO_opaque, cur_file_ofs, pBuf, n) != (mz_uint)n) + return MZ_FALSE; + for (i = n - 4; i >= 0; --i) + if (MZ_READ_LE32(pBuf + i) == MZ_ZIP_END_OF_CENTRAL_DIR_HEADER_SIG) + break; + if (i >= 0) + { + cur_file_ofs += i; + break; + } + if ((!cur_file_ofs) || ((pZip->m_archive_size - cur_file_ofs) >= (0xFFFF + MZ_ZIP_END_OF_CENTRAL_DIR_HEADER_SIZE))) + return MZ_FALSE; + cur_file_ofs = MZ_MAX(cur_file_ofs - (sizeof(buf_u32) - 3), 0); + } + // Read and verify the end of central directory record. 
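+  // For reference, the end-of-central-directory record is 22 bytes plus an optional trailing
+  // comment: signature (4), disk numbers (2+2), entry counts on this disk and in total (2+2),
+  // central directory size (4) and offset (4), and comment length (2).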
+ if (pZip->m_pRead(pZip->m_pIO_opaque, cur_file_ofs, pBuf, MZ_ZIP_END_OF_CENTRAL_DIR_HEADER_SIZE) != MZ_ZIP_END_OF_CENTRAL_DIR_HEADER_SIZE) + return MZ_FALSE; + if ((MZ_READ_LE32(pBuf + MZ_ZIP_ECDH_SIG_OFS) != MZ_ZIP_END_OF_CENTRAL_DIR_HEADER_SIG) || + ((pZip->m_total_files = MZ_READ_LE16(pBuf + MZ_ZIP_ECDH_CDIR_TOTAL_ENTRIES_OFS)) != MZ_READ_LE16(pBuf + MZ_ZIP_ECDH_CDIR_NUM_ENTRIES_ON_DISK_OFS))) + return MZ_FALSE; + + num_this_disk = MZ_READ_LE16(pBuf + MZ_ZIP_ECDH_NUM_THIS_DISK_OFS); + cdir_disk_index = MZ_READ_LE16(pBuf + MZ_ZIP_ECDH_NUM_DISK_CDIR_OFS); + if (((num_this_disk | cdir_disk_index) != 0) && ((num_this_disk != 1) || (cdir_disk_index != 1))) + return MZ_FALSE; + + if ((cdir_size = MZ_READ_LE32(pBuf + MZ_ZIP_ECDH_CDIR_SIZE_OFS)) < pZip->m_total_files * MZ_ZIP_CENTRAL_DIR_HEADER_SIZE) + return MZ_FALSE; + + cdir_ofs = MZ_READ_LE32(pBuf + MZ_ZIP_ECDH_CDIR_OFS_OFS); + if ((cdir_ofs + (mz_uint64)cdir_size) > pZip->m_archive_size) + return MZ_FALSE; + + pZip->m_central_directory_file_ofs = cdir_ofs; + + if (pZip->m_total_files) + { + mz_uint i, n; + + // Read the entire central directory into a heap block, and allocate another heap block to hold the unsorted central dir file record offsets, and another to hold the sorted indices. + if ((!mz_zip_array_resize(pZip, &pZip->m_pState->m_central_dir, cdir_size, MZ_FALSE)) || + (!mz_zip_array_resize(pZip, &pZip->m_pState->m_central_dir_offsets, pZip->m_total_files, MZ_FALSE))) + return MZ_FALSE; + + if (sort_central_dir) + { + if (!mz_zip_array_resize(pZip, &pZip->m_pState->m_sorted_central_dir_offsets, pZip->m_total_files, MZ_FALSE)) + return MZ_FALSE; + } + + if (pZip->m_pRead(pZip->m_pIO_opaque, cdir_ofs, pZip->m_pState->m_central_dir.m_p, cdir_size) != cdir_size) + return MZ_FALSE; + + // Now create an index into the central directory file records, do some basic sanity checking on each record, and check for zip64 entries (which are not yet supported). 
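+  // Each central directory record is a 46 byte fixed header (MZ_ZIP_CENTRAL_DIR_HEADER_SIZE)
+  // immediately followed by the variable-length filename, extra field and comment, so the walk
+  // below advances by the header size plus the three length fields read from the record.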
+ p = (const mz_uint8 *)pZip->m_pState->m_central_dir.m_p; + for (n = cdir_size, i = 0; i < pZip->m_total_files; ++i) + { + mz_uint total_header_size, comp_size, decomp_size, disk_index; + if ((n < MZ_ZIP_CENTRAL_DIR_HEADER_SIZE) || (MZ_READ_LE32(p) != MZ_ZIP_CENTRAL_DIR_HEADER_SIG)) + return MZ_FALSE; + MZ_ZIP_ARRAY_ELEMENT(&pZip->m_pState->m_central_dir_offsets, mz_uint32, i) = (mz_uint32)(p - (const mz_uint8 *)pZip->m_pState->m_central_dir.m_p); + if (sort_central_dir) + MZ_ZIP_ARRAY_ELEMENT(&pZip->m_pState->m_sorted_central_dir_offsets, mz_uint32, i) = i; + comp_size = MZ_READ_LE32(p + MZ_ZIP_CDH_COMPRESSED_SIZE_OFS); + decomp_size = MZ_READ_LE32(p + MZ_ZIP_CDH_DECOMPRESSED_SIZE_OFS); + if (((!MZ_READ_LE32(p + MZ_ZIP_CDH_METHOD_OFS)) && (decomp_size != comp_size)) || (decomp_size && !comp_size) || (decomp_size == 0xFFFFFFFF) || (comp_size == 0xFFFFFFFF)) + return MZ_FALSE; + disk_index = MZ_READ_LE16(p + MZ_ZIP_CDH_DISK_START_OFS); + if ((disk_index != num_this_disk) && (disk_index != 1)) + return MZ_FALSE; + if (((mz_uint64)MZ_READ_LE32(p + MZ_ZIP_CDH_LOCAL_HEADER_OFS) + MZ_ZIP_LOCAL_DIR_HEADER_SIZE + comp_size) > pZip->m_archive_size) + return MZ_FALSE; + if ((total_header_size = MZ_ZIP_CENTRAL_DIR_HEADER_SIZE + MZ_READ_LE16(p + MZ_ZIP_CDH_FILENAME_LEN_OFS) + MZ_READ_LE16(p + MZ_ZIP_CDH_EXTRA_LEN_OFS) + MZ_READ_LE16(p + MZ_ZIP_CDH_COMMENT_LEN_OFS)) > n) + return MZ_FALSE; + n -= total_header_size; p += total_header_size; + } + } + + if (sort_central_dir) + mz_zip_reader_sort_central_dir_offsets_by_filename(pZip); + + return MZ_TRUE; +} + +mz_bool mz_zip_reader_init(mz_zip_archive *pZip, mz_uint64 size, mz_uint32 flags) +{ + if ((!pZip) || (!pZip->m_pRead)) + return MZ_FALSE; + if (!mz_zip_reader_init_internal(pZip, flags)) + return MZ_FALSE; + pZip->m_archive_size = size; + if (!mz_zip_reader_read_central_dir(pZip, flags)) + { + mz_zip_reader_end(pZip); + return MZ_FALSE; + } + return MZ_TRUE; +} + +static size_t mz_zip_mem_read_func(void *pOpaque, mz_uint64 file_ofs, void *pBuf, size_t n) +{ + mz_zip_archive *pZip = (mz_zip_archive *)pOpaque; + size_t s = (file_ofs >= pZip->m_archive_size) ? 
0 : (size_t)MZ_MIN(pZip->m_archive_size - file_ofs, n);
+  memcpy(pBuf, (const mz_uint8 *)pZip->m_pState->m_pMem + file_ofs, s);
+  return s;
+}
+
+mz_bool mz_zip_reader_init_mem(mz_zip_archive *pZip, const void *pMem, size_t size, mz_uint32 flags)
+{
+  if (!mz_zip_reader_init_internal(pZip, flags))
+    return MZ_FALSE;
+  pZip->m_archive_size = size;
+  pZip->m_pRead = mz_zip_mem_read_func;
+  pZip->m_pIO_opaque = pZip;
+#ifdef __cplusplus
+  pZip->m_pState->m_pMem = const_cast<void *>(pMem);
+#else
+  pZip->m_pState->m_pMem = (void *)pMem;
+#endif
+  pZip->m_pState->m_mem_size = size;
+  if (!mz_zip_reader_read_central_dir(pZip, flags))
+  {
+    mz_zip_reader_end(pZip);
+    return MZ_FALSE;
+  }
+  return MZ_TRUE;
+}
+
+#ifndef MINIZ_NO_STDIO
+static size_t mz_zip_file_read_func(void *pOpaque, mz_uint64 file_ofs, void *pBuf, size_t n)
+{
+  mz_zip_archive *pZip = (mz_zip_archive *)pOpaque;
+  mz_int64 cur_ofs = MZ_FTELL64(pZip->m_pState->m_pFile);
+  if (((mz_int64)file_ofs < 0) || (((cur_ofs != (mz_int64)file_ofs)) && (MZ_FSEEK64(pZip->m_pState->m_pFile, (mz_int64)file_ofs, SEEK_SET))))
+    return 0;
+  return MZ_FREAD(pBuf, 1, n, pZip->m_pState->m_pFile);
+}
+
+mz_bool mz_zip_reader_init_file(mz_zip_archive *pZip, const char *pFilename, mz_uint32 flags)
+{
+  mz_uint64 file_size;
+  MZ_FILE *pFile = MZ_FOPEN(pFilename, "rb");
+  if (!pFile)
+    return MZ_FALSE;
+  if (MZ_FSEEK64(pFile, 0, SEEK_END))
+  {
+    MZ_FCLOSE(pFile);
+    return MZ_FALSE;
+  }
+  file_size = MZ_FTELL64(pFile);
+  if (!mz_zip_reader_init_internal(pZip, flags))
+  {
+    MZ_FCLOSE(pFile);
+    return MZ_FALSE;
+  }
+  pZip->m_pRead = mz_zip_file_read_func;
+  pZip->m_pIO_opaque = pZip;
+  pZip->m_pState->m_pFile = pFile;
+  pZip->m_archive_size = file_size;
+  if (!mz_zip_reader_read_central_dir(pZip, flags))
+  {
+    mz_zip_reader_end(pZip);
+    return MZ_FALSE;
+  }
+  return MZ_TRUE;
+}
+#endif // #ifndef MINIZ_NO_STDIO
+
+mz_uint mz_zip_reader_get_num_files(mz_zip_archive *pZip)
+{
+  return pZip ? pZip->m_total_files : 0;
+}
+
+static MZ_FORCEINLINE const mz_uint8 *mz_zip_reader_get_cdh(mz_zip_archive *pZip, mz_uint file_index)
+{
+  if ((!pZip) || (!pZip->m_pState) || (file_index >= pZip->m_total_files) || (pZip->m_zip_mode != MZ_ZIP_MODE_READING))
+    return NULL;
+  return &MZ_ZIP_ARRAY_ELEMENT(&pZip->m_pState->m_central_dir, mz_uint8, MZ_ZIP_ARRAY_ELEMENT(&pZip->m_pState->m_central_dir_offsets, mz_uint32, file_index));
+}
+
+mz_bool mz_zip_reader_is_file_encrypted(mz_zip_archive *pZip, mz_uint file_index)
+{
+  mz_uint m_bit_flag;
+  const mz_uint8 *p = mz_zip_reader_get_cdh(pZip, file_index);
+  if (!p)
+    return MZ_FALSE;
+  m_bit_flag = MZ_READ_LE16(p + MZ_ZIP_CDH_BIT_FLAG_OFS);
+  return (m_bit_flag & 1);
+}
+
+mz_bool mz_zip_reader_is_file_a_directory(mz_zip_archive *pZip, mz_uint file_index)
+{
+  mz_uint filename_len, external_attr;
+  const mz_uint8 *p = mz_zip_reader_get_cdh(pZip, file_index);
+  if (!p)
+    return MZ_FALSE;
+
+  // First see if the filename ends with a '/' character.
+  filename_len = MZ_READ_LE16(p + MZ_ZIP_CDH_FILENAME_LEN_OFS);
+  if (filename_len)
+  {
+    if (*(p + MZ_ZIP_CENTRAL_DIR_HEADER_SIZE + filename_len - 1) == '/')
+      return MZ_TRUE;
+  }
+
+  // Bugfix: This code was also checking if the internal attribute was non-zero, which wasn't correct.
+  // Most/all zip writers (hopefully) set DOS file/directory attributes in the low 16-bits, so check for the DOS directory flag and ignore the source OS ID in the created by field.
+  // FIXME: Remove this check? Is it necessary - we already check the filename.
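+  // (0x10 is the MS-DOS FILE_ATTRIBUTE_DIRECTORY bit.)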
+ external_attr = MZ_READ_LE32(p + MZ_ZIP_CDH_EXTERNAL_ATTR_OFS); + if ((external_attr & 0x10) != 0) + return MZ_TRUE; + + return MZ_FALSE; +} + +mz_bool mz_zip_reader_file_stat(mz_zip_archive *pZip, mz_uint file_index, mz_zip_archive_file_stat *pStat) +{ + mz_uint n; + const mz_uint8 *p = mz_zip_reader_get_cdh(pZip, file_index); + if ((!p) || (!pStat)) + return MZ_FALSE; + + // Unpack the central directory record. + pStat->m_file_index = file_index; + pStat->m_central_dir_ofs = MZ_ZIP_ARRAY_ELEMENT(&pZip->m_pState->m_central_dir_offsets, mz_uint32, file_index); + pStat->m_version_made_by = MZ_READ_LE16(p + MZ_ZIP_CDH_VERSION_MADE_BY_OFS); + pStat->m_version_needed = MZ_READ_LE16(p + MZ_ZIP_CDH_VERSION_NEEDED_OFS); + pStat->m_bit_flag = MZ_READ_LE16(p + MZ_ZIP_CDH_BIT_FLAG_OFS); + pStat->m_method = MZ_READ_LE16(p + MZ_ZIP_CDH_METHOD_OFS); +#ifndef MINIZ_NO_TIME + pStat->m_time = mz_zip_dos_to_time_t(MZ_READ_LE16(p + MZ_ZIP_CDH_FILE_TIME_OFS), MZ_READ_LE16(p + MZ_ZIP_CDH_FILE_DATE_OFS)); +#endif + pStat->m_crc32 = MZ_READ_LE32(p + MZ_ZIP_CDH_CRC32_OFS); + pStat->m_comp_size = MZ_READ_LE32(p + MZ_ZIP_CDH_COMPRESSED_SIZE_OFS); + pStat->m_uncomp_size = MZ_READ_LE32(p + MZ_ZIP_CDH_DECOMPRESSED_SIZE_OFS); + pStat->m_internal_attr = MZ_READ_LE16(p + MZ_ZIP_CDH_INTERNAL_ATTR_OFS); + pStat->m_external_attr = MZ_READ_LE32(p + MZ_ZIP_CDH_EXTERNAL_ATTR_OFS); + pStat->m_local_header_ofs = MZ_READ_LE32(p + MZ_ZIP_CDH_LOCAL_HEADER_OFS); + + // Copy as much of the filename and comment as possible. + n = MZ_READ_LE16(p + MZ_ZIP_CDH_FILENAME_LEN_OFS); n = MZ_MIN(n, MZ_ZIP_MAX_ARCHIVE_FILENAME_SIZE - 1); + memcpy(pStat->m_filename, p + MZ_ZIP_CENTRAL_DIR_HEADER_SIZE, n); pStat->m_filename[n] = '\0'; + + n = MZ_READ_LE16(p + MZ_ZIP_CDH_COMMENT_LEN_OFS); n = MZ_MIN(n, MZ_ZIP_MAX_ARCHIVE_FILE_COMMENT_SIZE - 1); + pStat->m_comment_size = n; + memcpy(pStat->m_comment, p + MZ_ZIP_CENTRAL_DIR_HEADER_SIZE + MZ_READ_LE16(p + MZ_ZIP_CDH_FILENAME_LEN_OFS) + MZ_READ_LE16(p + MZ_ZIP_CDH_EXTRA_LEN_OFS), n); pStat->m_comment[n] = '\0'; + + return MZ_TRUE; +} + +mz_uint mz_zip_reader_get_filename(mz_zip_archive *pZip, mz_uint file_index, char *pFilename, mz_uint filename_buf_size) +{ + mz_uint n; + const mz_uint8 *p = mz_zip_reader_get_cdh(pZip, file_index); + if (!p) { if (filename_buf_size) pFilename[0] = '\0'; return 0; } + n = MZ_READ_LE16(p + MZ_ZIP_CDH_FILENAME_LEN_OFS); + if (filename_buf_size) + { + n = MZ_MIN(n, filename_buf_size - 1); + memcpy(pFilename, p + MZ_ZIP_CENTRAL_DIR_HEADER_SIZE, n); + pFilename[n] = '\0'; + } + return n + 1; +} + +static MZ_FORCEINLINE mz_bool mz_zip_reader_string_equal(const char *pA, const char *pB, mz_uint len, mz_uint flags) +{ + mz_uint i; + if (flags & MZ_ZIP_FLAG_CASE_SENSITIVE) + return 0 == memcmp(pA, pB, len); + for (i = 0; i < len; ++i) + if (MZ_TOLOWER(pA[i]) != MZ_TOLOWER(pB[i])) + return MZ_FALSE; + return MZ_TRUE; +} + +static MZ_FORCEINLINE int mz_zip_reader_filename_compare(const mz_zip_array *pCentral_dir_array, const mz_zip_array *pCentral_dir_offsets, mz_uint l_index, const char *pR, mz_uint r_len) +{ + const mz_uint8 *pL = &MZ_ZIP_ARRAY_ELEMENT(pCentral_dir_array, mz_uint8, MZ_ZIP_ARRAY_ELEMENT(pCentral_dir_offsets, mz_uint32, l_index)), *pE; + mz_uint l_len = MZ_READ_LE16(pL + MZ_ZIP_CDH_FILENAME_LEN_OFS); + mz_uint8 l = 0, r = 0; + pL += MZ_ZIP_CENTRAL_DIR_HEADER_SIZE; + pE = pL + MZ_MIN(l_len, r_len); + while (pL < pE) + { + if ((l = MZ_TOLOWER(*pL)) != (r = MZ_TOLOWER(*pR))) + break; + pL++; pR++; + } + return (pL == pE) ? 
(int)(l_len - r_len) : (l - r); +} + +static int mz_zip_reader_locate_file_binary_search(mz_zip_archive *pZip, const char *pFilename) +{ + mz_zip_internal_state *pState = pZip->m_pState; + const mz_zip_array *pCentral_dir_offsets = &pState->m_central_dir_offsets; + const mz_zip_array *pCentral_dir = &pState->m_central_dir; + mz_uint32 *pIndices = &MZ_ZIP_ARRAY_ELEMENT(&pState->m_sorted_central_dir_offsets, mz_uint32, 0); + const int size = pZip->m_total_files; + const mz_uint filename_len = (mz_uint)strlen(pFilename); + int l = 0, h = size - 1; + while (l <= h) + { + int m = (l + h) >> 1, file_index = pIndices[m], comp = mz_zip_reader_filename_compare(pCentral_dir, pCentral_dir_offsets, file_index, pFilename, filename_len); + if (!comp) + return file_index; + else if (comp < 0) + l = m + 1; + else + h = m - 1; + } + return -1; +} + +int mz_zip_reader_locate_file(mz_zip_archive *pZip, const char *pName, const char *pComment, mz_uint flags) +{ + mz_uint file_index; size_t name_len, comment_len; + if ((!pZip) || (!pZip->m_pState) || (!pName) || (pZip->m_zip_mode != MZ_ZIP_MODE_READING)) + return -1; + if (((flags & (MZ_ZIP_FLAG_IGNORE_PATH | MZ_ZIP_FLAG_CASE_SENSITIVE)) == 0) && (!pComment) && (pZip->m_pState->m_sorted_central_dir_offsets.m_size)) + return mz_zip_reader_locate_file_binary_search(pZip, pName); + name_len = strlen(pName); if (name_len > 0xFFFF) return -1; + comment_len = pComment ? strlen(pComment) : 0; if (comment_len > 0xFFFF) return -1; + for (file_index = 0; file_index < pZip->m_total_files; file_index++) + { + const mz_uint8 *pHeader = &MZ_ZIP_ARRAY_ELEMENT(&pZip->m_pState->m_central_dir, mz_uint8, MZ_ZIP_ARRAY_ELEMENT(&pZip->m_pState->m_central_dir_offsets, mz_uint32, file_index)); + mz_uint filename_len = MZ_READ_LE16(pHeader + MZ_ZIP_CDH_FILENAME_LEN_OFS); + const char *pFilename = (const char *)pHeader + MZ_ZIP_CENTRAL_DIR_HEADER_SIZE; + if (filename_len < name_len) + continue; + if (comment_len) + { + mz_uint file_extra_len = MZ_READ_LE16(pHeader + MZ_ZIP_CDH_EXTRA_LEN_OFS), file_comment_len = MZ_READ_LE16(pHeader + MZ_ZIP_CDH_COMMENT_LEN_OFS); + const char *pFile_comment = pFilename + filename_len + file_extra_len; + if ((file_comment_len != comment_len) || (!mz_zip_reader_string_equal(pComment, pFile_comment, file_comment_len, flags))) + continue; + } + if ((flags & MZ_ZIP_FLAG_IGNORE_PATH) && (filename_len)) + { + int ofs = filename_len - 1; + do + { + if ((pFilename[ofs] == '/') || (pFilename[ofs] == '\\') || (pFilename[ofs] == ':')) + break; + } while (--ofs >= 0); + ofs++; + pFilename += ofs; filename_len -= ofs; + } + if ((filename_len == name_len) && (mz_zip_reader_string_equal(pName, pFilename, filename_len, flags))) + return file_index; + } + return -1; +} + +mz_bool mz_zip_reader_extract_to_mem_no_alloc(mz_zip_archive *pZip, mz_uint file_index, void *pBuf, size_t buf_size, mz_uint flags, void *pUser_read_buf, size_t user_read_buf_size) +{ + int status = TINFL_STATUS_DONE; + mz_uint64 needed_size, cur_file_ofs, comp_remaining, out_buf_ofs = 0, read_buf_size, read_buf_ofs = 0, read_buf_avail; + mz_zip_archive_file_stat file_stat; + void *pRead_buf; + mz_uint32 local_header_u32[(MZ_ZIP_LOCAL_DIR_HEADER_SIZE + sizeof(mz_uint32) - 1) / sizeof(mz_uint32)]; mz_uint8 *pLocal_header = (mz_uint8 *)local_header_u32; + tinfl_decompressor inflator; + + if ((buf_size) && (!pBuf)) + return MZ_FALSE; + + if (!mz_zip_reader_file_stat(pZip, file_index, &file_stat)) + return MZ_FALSE; + + // Empty file, or a directory (but not always a directory - I've seen odd zips with 
directories that have compressed data which inflates to 0 bytes) + if (!file_stat.m_comp_size) + return MZ_TRUE; + + // Entry is a subdirectory (I've seen old zips with dir entries which have compressed deflate data which inflates to 0 bytes, but these entries claim to uncompress to 512 bytes in the headers). + // I'm torn how to handle this case - should it fail instead? + if (mz_zip_reader_is_file_a_directory(pZip, file_index)) + return MZ_TRUE; + + // Encryption and patch files are not supported. + if (file_stat.m_bit_flag & (1 | 32)) + return MZ_FALSE; + + // This function only supports stored and deflate. + if ((!(flags & MZ_ZIP_FLAG_COMPRESSED_DATA)) && (file_stat.m_method != 0) && (file_stat.m_method != MZ_DEFLATED)) + return MZ_FALSE; + + // Ensure supplied output buffer is large enough. + needed_size = (flags & MZ_ZIP_FLAG_COMPRESSED_DATA) ? file_stat.m_comp_size : file_stat.m_uncomp_size; + if (buf_size < needed_size) + return MZ_FALSE; + + // Read and parse the local directory entry. + cur_file_ofs = file_stat.m_local_header_ofs; + if (pZip->m_pRead(pZip->m_pIO_opaque, cur_file_ofs, pLocal_header, MZ_ZIP_LOCAL_DIR_HEADER_SIZE) != MZ_ZIP_LOCAL_DIR_HEADER_SIZE) + return MZ_FALSE; + if (MZ_READ_LE32(pLocal_header) != MZ_ZIP_LOCAL_DIR_HEADER_SIG) + return MZ_FALSE; + + cur_file_ofs += MZ_ZIP_LOCAL_DIR_HEADER_SIZE + MZ_READ_LE16(pLocal_header + MZ_ZIP_LDH_FILENAME_LEN_OFS) + MZ_READ_LE16(pLocal_header + MZ_ZIP_LDH_EXTRA_LEN_OFS); + if ((cur_file_ofs + file_stat.m_comp_size) > pZip->m_archive_size) + return MZ_FALSE; + + if ((flags & MZ_ZIP_FLAG_COMPRESSED_DATA) || (!file_stat.m_method)) + { + // The file is stored or the caller has requested the compressed data. + if (pZip->m_pRead(pZip->m_pIO_opaque, cur_file_ofs, pBuf, (size_t)needed_size) != needed_size) + return MZ_FALSE; + return ((flags & MZ_ZIP_FLAG_COMPRESSED_DATA) != 0) || (mz_crc32(MZ_CRC32_INIT, (const mz_uint8 *)pBuf, (size_t)file_stat.m_uncomp_size) == file_stat.m_crc32); + } + + // Decompress the file either directly from memory or from a file input buffer. + tinfl_init(&inflator); + + if (pZip->m_pState->m_pMem) + { + // Read directly from the archive in memory. + pRead_buf = (mz_uint8 *)pZip->m_pState->m_pMem + cur_file_ofs; + read_buf_size = read_buf_avail = file_stat.m_comp_size; + comp_remaining = 0; + } + else if (pUser_read_buf) + { + // Use a user provided read buffer. + if (!user_read_buf_size) + return MZ_FALSE; + pRead_buf = (mz_uint8 *)pUser_read_buf; + read_buf_size = user_read_buf_size; + read_buf_avail = 0; + comp_remaining = file_stat.m_comp_size; + } + else + { + // Temporarily allocate a read buffer. 
+ read_buf_size = MZ_MIN(file_stat.m_comp_size, MZ_ZIP_MAX_IO_BUF_SIZE); +#ifdef _MSC_VER + if (((0, sizeof(size_t) == sizeof(mz_uint32))) && (read_buf_size > 0x7FFFFFFF)) +#else + if (((sizeof(size_t) == sizeof(mz_uint32))) && (read_buf_size > 0x7FFFFFFF)) +#endif + return MZ_FALSE; + if (NULL == (pRead_buf = pZip->m_pAlloc(pZip->m_pAlloc_opaque, 1, (size_t)read_buf_size))) + return MZ_FALSE; + read_buf_avail = 0; + comp_remaining = file_stat.m_comp_size; + } + + do + { + size_t in_buf_size, out_buf_size = (size_t)(file_stat.m_uncomp_size - out_buf_ofs); + if ((!read_buf_avail) && (!pZip->m_pState->m_pMem)) + { + read_buf_avail = MZ_MIN(read_buf_size, comp_remaining); + if (pZip->m_pRead(pZip->m_pIO_opaque, cur_file_ofs, pRead_buf, (size_t)read_buf_avail) != read_buf_avail) + { + status = TINFL_STATUS_FAILED; + break; + } + cur_file_ofs += read_buf_avail; + comp_remaining -= read_buf_avail; + read_buf_ofs = 0; + } + in_buf_size = (size_t)read_buf_avail; + status = tinfl_decompress(&inflator, (mz_uint8 *)pRead_buf + read_buf_ofs, &in_buf_size, (mz_uint8 *)pBuf, (mz_uint8 *)pBuf + out_buf_ofs, &out_buf_size, TINFL_FLAG_USING_NON_WRAPPING_OUTPUT_BUF | (comp_remaining ? TINFL_FLAG_HAS_MORE_INPUT : 0)); + read_buf_avail -= in_buf_size; + read_buf_ofs += in_buf_size; + out_buf_ofs += out_buf_size; + } while (status == TINFL_STATUS_NEEDS_MORE_INPUT); + + if (status == TINFL_STATUS_DONE) + { + // Make sure the entire file was decompressed, and check its CRC. + if ((out_buf_ofs != file_stat.m_uncomp_size) || (mz_crc32(MZ_CRC32_INIT, (const mz_uint8 *)pBuf, (size_t)file_stat.m_uncomp_size) != file_stat.m_crc32)) + status = TINFL_STATUS_FAILED; + } + + if ((!pZip->m_pState->m_pMem) && (!pUser_read_buf)) + pZip->m_pFree(pZip->m_pAlloc_opaque, pRead_buf); + + return status == TINFL_STATUS_DONE; +} + +mz_bool mz_zip_reader_extract_file_to_mem_no_alloc(mz_zip_archive *pZip, const char *pFilename, void *pBuf, size_t buf_size, mz_uint flags, void *pUser_read_buf, size_t user_read_buf_size) +{ + int file_index = mz_zip_reader_locate_file(pZip, pFilename, NULL, flags); + if (file_index < 0) + return MZ_FALSE; + return mz_zip_reader_extract_to_mem_no_alloc(pZip, file_index, pBuf, buf_size, flags, pUser_read_buf, user_read_buf_size); +} + +mz_bool mz_zip_reader_extract_to_mem(mz_zip_archive *pZip, mz_uint file_index, void *pBuf, size_t buf_size, mz_uint flags) +{ + return mz_zip_reader_extract_to_mem_no_alloc(pZip, file_index, pBuf, buf_size, flags, NULL, 0); +} + +mz_bool mz_zip_reader_extract_file_to_mem(mz_zip_archive *pZip, const char *pFilename, void *pBuf, size_t buf_size, mz_uint flags) +{ + return mz_zip_reader_extract_file_to_mem_no_alloc(pZip, pFilename, pBuf, buf_size, flags, NULL, 0); +} + +void *mz_zip_reader_extract_to_heap(mz_zip_archive *pZip, mz_uint file_index, size_t *pSize, mz_uint flags) +{ + mz_uint64 comp_size, uncomp_size, alloc_size; + const mz_uint8 *p = mz_zip_reader_get_cdh(pZip, file_index); + void *pBuf; + + if (pSize) + *pSize = 0; + if (!p) + return NULL; + + comp_size = MZ_READ_LE32(p + MZ_ZIP_CDH_COMPRESSED_SIZE_OFS); + uncomp_size = MZ_READ_LE32(p + MZ_ZIP_CDH_DECOMPRESSED_SIZE_OFS); + + alloc_size = (flags & MZ_ZIP_FLAG_COMPRESSED_DATA) ? 
comp_size : uncomp_size; +#ifdef _MSC_VER + if (((0, sizeof(size_t) == sizeof(mz_uint32))) && (alloc_size > 0x7FFFFFFF)) +#else + if (((sizeof(size_t) == sizeof(mz_uint32))) && (alloc_size > 0x7FFFFFFF)) +#endif + return NULL; + if (NULL == (pBuf = pZip->m_pAlloc(pZip->m_pAlloc_opaque, 1, (size_t)alloc_size))) + return NULL; + + if (!mz_zip_reader_extract_to_mem(pZip, file_index, pBuf, (size_t)alloc_size, flags)) + { + pZip->m_pFree(pZip->m_pAlloc_opaque, pBuf); + return NULL; + } + + if (pSize) *pSize = (size_t)alloc_size; + return pBuf; +} + +void *mz_zip_reader_extract_file_to_heap(mz_zip_archive *pZip, const char *pFilename, size_t *pSize, mz_uint flags) +{ + int file_index = mz_zip_reader_locate_file(pZip, pFilename, NULL, flags); + if (file_index < 0) + { + if (pSize) *pSize = 0; + return MZ_FALSE; + } + return mz_zip_reader_extract_to_heap(pZip, file_index, pSize, flags); +} + +mz_bool mz_zip_reader_extract_to_callback(mz_zip_archive *pZip, mz_uint file_index, mz_file_write_func pCallback, void *pOpaque, mz_uint flags) +{ + int status = TINFL_STATUS_DONE; mz_uint file_crc32 = MZ_CRC32_INIT; + mz_uint64 read_buf_size, read_buf_ofs = 0, read_buf_avail, comp_remaining, out_buf_ofs = 0, cur_file_ofs; + mz_zip_archive_file_stat file_stat; + void *pRead_buf = NULL; void *pWrite_buf = NULL; + mz_uint32 local_header_u32[(MZ_ZIP_LOCAL_DIR_HEADER_SIZE + sizeof(mz_uint32) - 1) / sizeof(mz_uint32)]; mz_uint8 *pLocal_header = (mz_uint8 *)local_header_u32; + + if (!mz_zip_reader_file_stat(pZip, file_index, &file_stat)) + return MZ_FALSE; + + // Empty file, or a directory (but not always a directory - I've seen odd zips with directories that have compressed data which inflates to 0 bytes) + if (!file_stat.m_comp_size) + return MZ_TRUE; + + // Entry is a subdirectory (I've seen old zips with dir entries which have compressed deflate data which inflates to 0 bytes, but these entries claim to uncompress to 512 bytes in the headers). + // I'm torn how to handle this case - should it fail instead? + if (mz_zip_reader_is_file_a_directory(pZip, file_index)) + return MZ_TRUE; + + // Encryption and patch files are not supported. + if (file_stat.m_bit_flag & (1 | 32)) + return MZ_FALSE; + + // This function only supports stored and deflate. + if ((!(flags & MZ_ZIP_FLAG_COMPRESSED_DATA)) && (file_stat.m_method != 0) && (file_stat.m_method != MZ_DEFLATED)) + return MZ_FALSE; + + // Read and parse the local directory entry. + cur_file_ofs = file_stat.m_local_header_ofs; + if (pZip->m_pRead(pZip->m_pIO_opaque, cur_file_ofs, pLocal_header, MZ_ZIP_LOCAL_DIR_HEADER_SIZE) != MZ_ZIP_LOCAL_DIR_HEADER_SIZE) + return MZ_FALSE; + if (MZ_READ_LE32(pLocal_header) != MZ_ZIP_LOCAL_DIR_HEADER_SIG) + return MZ_FALSE; + + cur_file_ofs += MZ_ZIP_LOCAL_DIR_HEADER_SIZE + MZ_READ_LE16(pLocal_header + MZ_ZIP_LDH_FILENAME_LEN_OFS) + MZ_READ_LE16(pLocal_header + MZ_ZIP_LDH_EXTRA_LEN_OFS); + if ((cur_file_ofs + file_stat.m_comp_size) > pZip->m_archive_size) + return MZ_FALSE; + + // Decompress the file either directly from memory or from a file input buffer. 
+ if (pZip->m_pState->m_pMem) + { + pRead_buf = (mz_uint8 *)pZip->m_pState->m_pMem + cur_file_ofs; + read_buf_size = read_buf_avail = file_stat.m_comp_size; + comp_remaining = 0; + } + else + { + read_buf_size = MZ_MIN(file_stat.m_comp_size, MZ_ZIP_MAX_IO_BUF_SIZE); + if (NULL == (pRead_buf = pZip->m_pAlloc(pZip->m_pAlloc_opaque, 1, (size_t)read_buf_size))) + return MZ_FALSE; + read_buf_avail = 0; + comp_remaining = file_stat.m_comp_size; + } + + if ((flags & MZ_ZIP_FLAG_COMPRESSED_DATA) || (!file_stat.m_method)) + { + // The file is stored or the caller has requested the compressed data. + if (pZip->m_pState->m_pMem) + { +#ifdef _MSC_VER + if (((0, sizeof(size_t) == sizeof(mz_uint32))) && (file_stat.m_comp_size > 0xFFFFFFFF)) +#else + if (((sizeof(size_t) == sizeof(mz_uint32))) && (file_stat.m_comp_size > 0xFFFFFFFF)) +#endif + return MZ_FALSE; + if (pCallback(pOpaque, out_buf_ofs, pRead_buf, (size_t)file_stat.m_comp_size) != file_stat.m_comp_size) + status = TINFL_STATUS_FAILED; + else if (!(flags & MZ_ZIP_FLAG_COMPRESSED_DATA)) + file_crc32 = (mz_uint32)mz_crc32(file_crc32, (const mz_uint8 *)pRead_buf, (size_t)file_stat.m_comp_size); + cur_file_ofs += file_stat.m_comp_size; + out_buf_ofs += file_stat.m_comp_size; + comp_remaining = 0; + } + else + { + while (comp_remaining) + { + read_buf_avail = MZ_MIN(read_buf_size, comp_remaining); + if (pZip->m_pRead(pZip->m_pIO_opaque, cur_file_ofs, pRead_buf, (size_t)read_buf_avail) != read_buf_avail) + { + status = TINFL_STATUS_FAILED; + break; + } + + if (!(flags & MZ_ZIP_FLAG_COMPRESSED_DATA)) + file_crc32 = (mz_uint32)mz_crc32(file_crc32, (const mz_uint8 *)pRead_buf, (size_t)read_buf_avail); + + if (pCallback(pOpaque, out_buf_ofs, pRead_buf, (size_t)read_buf_avail) != read_buf_avail) + { + status = TINFL_STATUS_FAILED; + break; + } + cur_file_ofs += read_buf_avail; + out_buf_ofs += read_buf_avail; + comp_remaining -= read_buf_avail; + } + } + } + else + { + tinfl_decompressor inflator; + tinfl_init(&inflator); + + if (NULL == (pWrite_buf = pZip->m_pAlloc(pZip->m_pAlloc_opaque, 1, TINFL_LZ_DICT_SIZE))) + status = TINFL_STATUS_FAILED; + else + { + do + { + mz_uint8 *pWrite_buf_cur = (mz_uint8 *)pWrite_buf + (out_buf_ofs & (TINFL_LZ_DICT_SIZE - 1)); + size_t in_buf_size, out_buf_size = TINFL_LZ_DICT_SIZE - (out_buf_ofs & (TINFL_LZ_DICT_SIZE - 1)); + if ((!read_buf_avail) && (!pZip->m_pState->m_pMem)) + { + read_buf_avail = MZ_MIN(read_buf_size, comp_remaining); + if (pZip->m_pRead(pZip->m_pIO_opaque, cur_file_ofs, pRead_buf, (size_t)read_buf_avail) != read_buf_avail) + { + status = TINFL_STATUS_FAILED; + break; + } + cur_file_ofs += read_buf_avail; + comp_remaining -= read_buf_avail; + read_buf_ofs = 0; + } + + in_buf_size = (size_t)read_buf_avail; + status = tinfl_decompress(&inflator, (const mz_uint8 *)pRead_buf + read_buf_ofs, &in_buf_size, (mz_uint8 *)pWrite_buf, pWrite_buf_cur, &out_buf_size, comp_remaining ? 
TINFL_FLAG_HAS_MORE_INPUT : 0); + read_buf_avail -= in_buf_size; + read_buf_ofs += in_buf_size; + + if (out_buf_size) + { + if (pCallback(pOpaque, out_buf_ofs, pWrite_buf_cur, out_buf_size) != out_buf_size) + { + status = TINFL_STATUS_FAILED; + break; + } + file_crc32 = (mz_uint32)mz_crc32(file_crc32, pWrite_buf_cur, out_buf_size); + if ((out_buf_ofs += out_buf_size) > file_stat.m_uncomp_size) + { + status = TINFL_STATUS_FAILED; + break; + } + } + } while ((status == TINFL_STATUS_NEEDS_MORE_INPUT) || (status == TINFL_STATUS_HAS_MORE_OUTPUT)); + } + } + + if ((status == TINFL_STATUS_DONE) && (!(flags & MZ_ZIP_FLAG_COMPRESSED_DATA))) + { + // Make sure the entire file was decompressed, and check its CRC. + if ((out_buf_ofs != file_stat.m_uncomp_size) || (file_crc32 != file_stat.m_crc32)) + status = TINFL_STATUS_FAILED; + } + + if (!pZip->m_pState->m_pMem) + pZip->m_pFree(pZip->m_pAlloc_opaque, pRead_buf); + if (pWrite_buf) + pZip->m_pFree(pZip->m_pAlloc_opaque, pWrite_buf); + + return status == TINFL_STATUS_DONE; +} + +mz_bool mz_zip_reader_extract_file_to_callback(mz_zip_archive *pZip, const char *pFilename, mz_file_write_func pCallback, void *pOpaque, mz_uint flags) +{ + int file_index = mz_zip_reader_locate_file(pZip, pFilename, NULL, flags); + if (file_index < 0) + return MZ_FALSE; + return mz_zip_reader_extract_to_callback(pZip, file_index, pCallback, pOpaque, flags); +} + +#ifndef MINIZ_NO_STDIO +static size_t mz_zip_file_write_callback(void *pOpaque, mz_uint64 ofs, const void *pBuf, size_t n) +{ + (void)ofs; return MZ_FWRITE(pBuf, 1, n, (MZ_FILE*)pOpaque); +} + +mz_bool mz_zip_reader_extract_to_file(mz_zip_archive *pZip, mz_uint file_index, const char *pDst_filename, mz_uint flags) +{ + mz_bool status; + mz_zip_archive_file_stat file_stat; + MZ_FILE *pFile; + if (!mz_zip_reader_file_stat(pZip, file_index, &file_stat)) + return MZ_FALSE; + pFile = MZ_FOPEN(pDst_filename, "wb"); + if (!pFile) + return MZ_FALSE; + status = mz_zip_reader_extract_to_callback(pZip, file_index, mz_zip_file_write_callback, pFile, flags); + if (MZ_FCLOSE(pFile) == EOF) + return MZ_FALSE; +#ifndef MINIZ_NO_TIME + if (status) + mz_zip_set_file_times(pDst_filename, file_stat.m_time, file_stat.m_time); +#endif + return status; +} +#endif // #ifndef MINIZ_NO_STDIO + +mz_bool mz_zip_reader_end(mz_zip_archive *pZip) +{ + if ((!pZip) || (!pZip->m_pState) || (!pZip->m_pAlloc) || (!pZip->m_pFree) || (pZip->m_zip_mode != MZ_ZIP_MODE_READING)) + return MZ_FALSE; + + if (pZip->m_pState) + { + mz_zip_internal_state *pState = pZip->m_pState; pZip->m_pState = NULL; + mz_zip_array_clear(pZip, &pState->m_central_dir); + mz_zip_array_clear(pZip, &pState->m_central_dir_offsets); + mz_zip_array_clear(pZip, &pState->m_sorted_central_dir_offsets); + +#ifndef MINIZ_NO_STDIO + if (pState->m_pFile) + { + MZ_FCLOSE(pState->m_pFile); + pState->m_pFile = NULL; + } +#endif // #ifndef MINIZ_NO_STDIO + + pZip->m_pFree(pZip->m_pAlloc_opaque, pState); + } + pZip->m_zip_mode = MZ_ZIP_MODE_INVALID; + + return MZ_TRUE; +} + +#ifndef MINIZ_NO_STDIO +mz_bool mz_zip_reader_extract_file_to_file(mz_zip_archive *pZip, const char *pArchive_filename, const char *pDst_filename, mz_uint flags) +{ + int file_index = mz_zip_reader_locate_file(pZip, pArchive_filename, NULL, flags); + if (file_index < 0) + return MZ_FALSE; + return mz_zip_reader_extract_to_file(pZip, file_index, pDst_filename, flags); +} +#endif + +// ------------------- .ZIP archive writing + +#ifndef MINIZ_NO_ARCHIVE_WRITING_APIS + +static void mz_write_le16(mz_uint8 *p, mz_uint16 v) { p[0] 
= (mz_uint8)v; p[1] = (mz_uint8)(v >> 8); } +static void mz_write_le32(mz_uint8 *p, mz_uint32 v) { p[0] = (mz_uint8)v; p[1] = (mz_uint8)(v >> 8); p[2] = (mz_uint8)(v >> 16); p[3] = (mz_uint8)(v >> 24); } +#define MZ_WRITE_LE16(p, v) mz_write_le16((mz_uint8 *)(p), (mz_uint16)(v)) +#define MZ_WRITE_LE32(p, v) mz_write_le32((mz_uint8 *)(p), (mz_uint32)(v)) + +mz_bool mz_zip_writer_init(mz_zip_archive *pZip, mz_uint64 existing_size) +{ + if ((!pZip) || (pZip->m_pState) || (!pZip->m_pWrite) || (pZip->m_zip_mode != MZ_ZIP_MODE_INVALID)) + return MZ_FALSE; + + if (pZip->m_file_offset_alignment) + { + // Ensure user specified file offset alignment is a power of 2. + if (pZip->m_file_offset_alignment & (pZip->m_file_offset_alignment - 1)) + return MZ_FALSE; + } + + if (!pZip->m_pAlloc) pZip->m_pAlloc = def_alloc_func; + if (!pZip->m_pFree) pZip->m_pFree = def_free_func; + if (!pZip->m_pRealloc) pZip->m_pRealloc = def_realloc_func; + + pZip->m_zip_mode = MZ_ZIP_MODE_WRITING; + pZip->m_archive_size = existing_size; + pZip->m_central_directory_file_ofs = 0; + pZip->m_total_files = 0; + + if (NULL == (pZip->m_pState = (mz_zip_internal_state *)pZip->m_pAlloc(pZip->m_pAlloc_opaque, 1, sizeof(mz_zip_internal_state)))) + return MZ_FALSE; + memset(pZip->m_pState, 0, sizeof(mz_zip_internal_state)); + MZ_ZIP_ARRAY_SET_ELEMENT_SIZE(&pZip->m_pState->m_central_dir, sizeof(mz_uint8)); + MZ_ZIP_ARRAY_SET_ELEMENT_SIZE(&pZip->m_pState->m_central_dir_offsets, sizeof(mz_uint32)); + MZ_ZIP_ARRAY_SET_ELEMENT_SIZE(&pZip->m_pState->m_sorted_central_dir_offsets, sizeof(mz_uint32)); + return MZ_TRUE; +} + +static size_t mz_zip_heap_write_func(void *pOpaque, mz_uint64 file_ofs, const void *pBuf, size_t n) +{ + mz_zip_archive *pZip = (mz_zip_archive *)pOpaque; + mz_zip_internal_state *pState = pZip->m_pState; + mz_uint64 new_size = MZ_MAX(file_ofs + n, pState->m_mem_size); +#ifdef _MSC_VER + if ((!n) || ((0, sizeof(size_t) == sizeof(mz_uint32)) && (new_size > 0x7FFFFFFF))) +#else + if ((!n) || ((sizeof(size_t) == sizeof(mz_uint32)) && (new_size > 0x7FFFFFFF))) +#endif + return 0; + if (new_size > pState->m_mem_capacity) + { + void *pNew_block; + size_t new_capacity = MZ_MAX(64, pState->m_mem_capacity); while (new_capacity < new_size) new_capacity *= 2; + if (NULL == (pNew_block = pZip->m_pRealloc(pZip->m_pAlloc_opaque, pState->m_pMem, 1, new_capacity))) + return 0; + pState->m_pMem = pNew_block; pState->m_mem_capacity = new_capacity; + } + memcpy((mz_uint8 *)pState->m_pMem + file_ofs, pBuf, n); + pState->m_mem_size = (size_t)new_size; + return n; +} + +mz_bool mz_zip_writer_init_heap(mz_zip_archive *pZip, size_t size_to_reserve_at_beginning, size_t initial_allocation_size) +{ + pZip->m_pWrite = mz_zip_heap_write_func; + pZip->m_pIO_opaque = pZip; + if (!mz_zip_writer_init(pZip, size_to_reserve_at_beginning)) + return MZ_FALSE; + if (0 != (initial_allocation_size = MZ_MAX(initial_allocation_size, size_to_reserve_at_beginning))) + { + if (NULL == (pZip->m_pState->m_pMem = pZip->m_pAlloc(pZip->m_pAlloc_opaque, 1, initial_allocation_size))) + { + mz_zip_writer_end(pZip); + return MZ_FALSE; + } + pZip->m_pState->m_mem_capacity = initial_allocation_size; + } + return MZ_TRUE; +} + +#ifndef MINIZ_NO_STDIO +static size_t mz_zip_file_write_func(void *pOpaque, mz_uint64 file_ofs, const void *pBuf, size_t n) +{ + mz_zip_archive *pZip = (mz_zip_archive *)pOpaque; + mz_int64 cur_ofs = MZ_FTELL64(pZip->m_pState->m_pFile); + if (((mz_int64)file_ofs < 0) || (((cur_ofs != (mz_int64)file_ofs)) && (MZ_FSEEK64(pZip->m_pState->m_pFile, 
(mz_int64)file_ofs, SEEK_SET)))) + return 0; + return MZ_FWRITE(pBuf, 1, n, pZip->m_pState->m_pFile); +} + +mz_bool mz_zip_writer_init_file(mz_zip_archive *pZip, const char *pFilename, mz_uint64 size_to_reserve_at_beginning) +{ + MZ_FILE *pFile; + pZip->m_pWrite = mz_zip_file_write_func; + pZip->m_pIO_opaque = pZip; + if (!mz_zip_writer_init(pZip, size_to_reserve_at_beginning)) + return MZ_FALSE; + if (NULL == (pFile = MZ_FOPEN(pFilename, "wb"))) + { + mz_zip_writer_end(pZip); + return MZ_FALSE; + } + pZip->m_pState->m_pFile = pFile; + if (size_to_reserve_at_beginning) + { + mz_uint64 cur_ofs = 0; char buf[4096]; MZ_CLEAR_OBJ(buf); + do + { + size_t n = (size_t)MZ_MIN(sizeof(buf), size_to_reserve_at_beginning); + if (pZip->m_pWrite(pZip->m_pIO_opaque, cur_ofs, buf, n) != n) + { + mz_zip_writer_end(pZip); + return MZ_FALSE; + } + cur_ofs += n; size_to_reserve_at_beginning -= n; + } while (size_to_reserve_at_beginning); + } + return MZ_TRUE; +} +#endif // #ifndef MINIZ_NO_STDIO + +mz_bool mz_zip_writer_init_from_reader(mz_zip_archive *pZip, const char *pFilename) +{ + mz_zip_internal_state *pState; + if ((!pZip) || (!pZip->m_pState) || (pZip->m_zip_mode != MZ_ZIP_MODE_READING)) + return MZ_FALSE; + // No sense in trying to write to an archive that's already at the support max size + if ((pZip->m_total_files == 0xFFFF) || ((pZip->m_archive_size + MZ_ZIP_CENTRAL_DIR_HEADER_SIZE + MZ_ZIP_LOCAL_DIR_HEADER_SIZE) > 0xFFFFFFFF)) + return MZ_FALSE; + + pState = pZip->m_pState; + + if (pState->m_pFile) + { +#ifdef MINIZ_NO_STDIO + pFilename; return MZ_FALSE; +#else + // Archive is being read from stdio - try to reopen as writable. + if (pZip->m_pIO_opaque != pZip) + return MZ_FALSE; + if (!pFilename) + return MZ_FALSE; + pZip->m_pWrite = mz_zip_file_write_func; + if (NULL == (pState->m_pFile = MZ_FREOPEN(pFilename, "r+b", pState->m_pFile))) + { + // The mz_zip_archive is now in a bogus state because pState->m_pFile is NULL, so just close it. + mz_zip_reader_end(pZip); + return MZ_FALSE; + } +#endif // #ifdef MINIZ_NO_STDIO + } + else if (pState->m_pMem) + { + // Archive lives in a memory block. Assume it's from the heap that we can resize using the realloc callback. + if (pZip->m_pIO_opaque != pZip) + return MZ_FALSE; + pState->m_mem_capacity = pState->m_mem_size; + pZip->m_pWrite = mz_zip_heap_write_func; + } + // Archive is being read via a user provided read function - make sure the user has specified a write function too. + else if (!pZip->m_pWrite) + return MZ_FALSE; + + // Start writing new files at the archive's current central directory location. 
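+  // (Anything appended from here on overwrites the old central directory; a fresh central
+  // directory and end-of-central-directory record are written later by mz_zip_writer_finalize_archive().)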
+ pZip->m_archive_size = pZip->m_central_directory_file_ofs; + pZip->m_zip_mode = MZ_ZIP_MODE_WRITING; + pZip->m_central_directory_file_ofs = 0; + + return MZ_TRUE; +} + +mz_bool mz_zip_writer_add_mem(mz_zip_archive *pZip, const char *pArchive_name, const void *pBuf, size_t buf_size, mz_uint level_and_flags) +{ + return mz_zip_writer_add_mem_ex(pZip, pArchive_name, pBuf, buf_size, NULL, 0, level_and_flags, 0, 0); +} + +typedef struct +{ + mz_zip_archive *m_pZip; + mz_uint64 m_cur_archive_file_ofs; + mz_uint64 m_comp_size; +} mz_zip_writer_add_state; + +static mz_bool mz_zip_writer_add_put_buf_callback(const void* pBuf, int len, void *pUser) +{ + mz_zip_writer_add_state *pState = (mz_zip_writer_add_state *)pUser; + if ((int)pState->m_pZip->m_pWrite(pState->m_pZip->m_pIO_opaque, pState->m_cur_archive_file_ofs, pBuf, len) != len) + return MZ_FALSE; + pState->m_cur_archive_file_ofs += len; + pState->m_comp_size += len; + return MZ_TRUE; +} + +static mz_bool mz_zip_writer_create_local_dir_header(mz_zip_archive *pZip, mz_uint8 *pDst, mz_uint16 filename_size, mz_uint16 extra_size, mz_uint64 uncomp_size, mz_uint64 comp_size, mz_uint32 uncomp_crc32, mz_uint16 method, mz_uint16 bit_flags, mz_uint16 dos_time, mz_uint16 dos_date) +{ + (void)pZip; + memset(pDst, 0, MZ_ZIP_LOCAL_DIR_HEADER_SIZE); + MZ_WRITE_LE32(pDst + MZ_ZIP_LDH_SIG_OFS, MZ_ZIP_LOCAL_DIR_HEADER_SIG); + MZ_WRITE_LE16(pDst + MZ_ZIP_LDH_VERSION_NEEDED_OFS, method ? 20 : 0); + MZ_WRITE_LE16(pDst + MZ_ZIP_LDH_BIT_FLAG_OFS, bit_flags); + MZ_WRITE_LE16(pDst + MZ_ZIP_LDH_METHOD_OFS, method); + MZ_WRITE_LE16(pDst + MZ_ZIP_LDH_FILE_TIME_OFS, dos_time); + MZ_WRITE_LE16(pDst + MZ_ZIP_LDH_FILE_DATE_OFS, dos_date); + MZ_WRITE_LE32(pDst + MZ_ZIP_LDH_CRC32_OFS, uncomp_crc32); + MZ_WRITE_LE32(pDst + MZ_ZIP_LDH_COMPRESSED_SIZE_OFS, comp_size); + MZ_WRITE_LE32(pDst + MZ_ZIP_LDH_DECOMPRESSED_SIZE_OFS, uncomp_size); + MZ_WRITE_LE16(pDst + MZ_ZIP_LDH_FILENAME_LEN_OFS, filename_size); + MZ_WRITE_LE16(pDst + MZ_ZIP_LDH_EXTRA_LEN_OFS, extra_size); + return MZ_TRUE; +} + +static mz_bool mz_zip_writer_create_central_dir_header(mz_zip_archive *pZip, mz_uint8 *pDst, mz_uint16 filename_size, mz_uint16 extra_size, mz_uint16 comment_size, mz_uint64 uncomp_size, mz_uint64 comp_size, mz_uint32 uncomp_crc32, mz_uint16 method, mz_uint16 bit_flags, mz_uint16 dos_time, mz_uint16 dos_date, mz_uint64 local_header_ofs, mz_uint32 ext_attributes) +{ + (void)pZip; + memset(pDst, 0, MZ_ZIP_CENTRAL_DIR_HEADER_SIZE); + MZ_WRITE_LE32(pDst + MZ_ZIP_CDH_SIG_OFS, MZ_ZIP_CENTRAL_DIR_HEADER_SIG); + MZ_WRITE_LE16(pDst + MZ_ZIP_CDH_VERSION_NEEDED_OFS, method ? 
20 : 0);
+  MZ_WRITE_LE16(pDst + MZ_ZIP_CDH_BIT_FLAG_OFS, bit_flags);
+  MZ_WRITE_LE16(pDst + MZ_ZIP_CDH_METHOD_OFS, method);
+  MZ_WRITE_LE16(pDst + MZ_ZIP_CDH_FILE_TIME_OFS, dos_time);
+  MZ_WRITE_LE16(pDst + MZ_ZIP_CDH_FILE_DATE_OFS, dos_date);
+  MZ_WRITE_LE32(pDst + MZ_ZIP_CDH_CRC32_OFS, uncomp_crc32);
+  MZ_WRITE_LE32(pDst + MZ_ZIP_CDH_COMPRESSED_SIZE_OFS, comp_size);
+  MZ_WRITE_LE32(pDst + MZ_ZIP_CDH_DECOMPRESSED_SIZE_OFS, uncomp_size);
+  MZ_WRITE_LE16(pDst + MZ_ZIP_CDH_FILENAME_LEN_OFS, filename_size);
+  MZ_WRITE_LE16(pDst + MZ_ZIP_CDH_EXTRA_LEN_OFS, extra_size);
+  MZ_WRITE_LE16(pDst + MZ_ZIP_CDH_COMMENT_LEN_OFS, comment_size);
+  MZ_WRITE_LE32(pDst + MZ_ZIP_CDH_EXTERNAL_ATTR_OFS, ext_attributes);
+  MZ_WRITE_LE32(pDst + MZ_ZIP_CDH_LOCAL_HEADER_OFS, local_header_ofs);
+  return MZ_TRUE;
+}
+
+static mz_bool mz_zip_writer_add_to_central_dir(mz_zip_archive *pZip, const char *pFilename, mz_uint16 filename_size, const void *pExtra, mz_uint16 extra_size, const void *pComment, mz_uint16 comment_size, mz_uint64 uncomp_size, mz_uint64 comp_size, mz_uint32 uncomp_crc32, mz_uint16 method, mz_uint16 bit_flags, mz_uint16 dos_time, mz_uint16 dos_date, mz_uint64 local_header_ofs, mz_uint32 ext_attributes)
+{
+  mz_zip_internal_state *pState = pZip->m_pState;
+  mz_uint32 central_dir_ofs = (mz_uint32)pState->m_central_dir.m_size;
+  size_t orig_central_dir_size = pState->m_central_dir.m_size;
+  mz_uint8 central_dir_header[MZ_ZIP_CENTRAL_DIR_HEADER_SIZE];
+
+  // No zip64 support yet
+  if ((local_header_ofs > 0xFFFFFFFF) || (((mz_uint64)pState->m_central_dir.m_size + MZ_ZIP_CENTRAL_DIR_HEADER_SIZE + filename_size + extra_size + comment_size) > 0xFFFFFFFF))
+    return MZ_FALSE;
+
+  if (!mz_zip_writer_create_central_dir_header(pZip, central_dir_header, filename_size, extra_size, comment_size, uncomp_size, comp_size, uncomp_crc32, method, bit_flags, dos_time, dos_date, local_header_ofs, ext_attributes))
+    return MZ_FALSE;
+
+  if ((!mz_zip_array_push_back(pZip, &pState->m_central_dir, central_dir_header, MZ_ZIP_CENTRAL_DIR_HEADER_SIZE)) ||
+      (!mz_zip_array_push_back(pZip, &pState->m_central_dir, pFilename, filename_size)) ||
+      (!mz_zip_array_push_back(pZip, &pState->m_central_dir, pExtra, extra_size)) ||
+      (!mz_zip_array_push_back(pZip, &pState->m_central_dir, pComment, comment_size)) ||
+      (!mz_zip_array_push_back(pZip, &pState->m_central_dir_offsets, &central_dir_ofs, 1)))
+  {
+    // Try to push the central directory array back into its original state.
+    mz_zip_array_resize(pZip, &pState->m_central_dir, orig_central_dir_size, MZ_FALSE);
+    return MZ_FALSE;
+  }
+
+  return MZ_TRUE;
+}
+
+static mz_bool mz_zip_writer_validate_archive_name(const char *pArchive_name)
+{
+  // Basic ZIP archive filename validity checks: Valid filenames cannot start with a forward slash, cannot contain a drive letter, and cannot use DOS-style backward slashes.
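+  // For example, "dir/subdir/file.txt" passes, while "/file.txt", "C:\file.txt" and "dir\file.txt" are rejected.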
+ if (*pArchive_name == '/') + return MZ_FALSE; + while (*pArchive_name) + { + if ((*pArchive_name == '\\') || (*pArchive_name == ':')) + return MZ_FALSE; + pArchive_name++; + } + return MZ_TRUE; +} + +static mz_uint mz_zip_writer_compute_padding_needed_for_file_alignment(mz_zip_archive *pZip) +{ + mz_uint32 n; + if (!pZip->m_file_offset_alignment) + return 0; + n = (mz_uint32)(pZip->m_archive_size & (pZip->m_file_offset_alignment - 1)); + return (pZip->m_file_offset_alignment - n) & (pZip->m_file_offset_alignment - 1); +} + +static mz_bool mz_zip_writer_write_zeros(mz_zip_archive *pZip, mz_uint64 cur_file_ofs, mz_uint32 n) +{ + char buf[4096]; + memset(buf, 0, MZ_MIN(sizeof(buf), n)); + while (n) + { + mz_uint32 s = MZ_MIN(sizeof(buf), n); + if (pZip->m_pWrite(pZip->m_pIO_opaque, cur_file_ofs, buf, s) != s) + return MZ_FALSE; + cur_file_ofs += s; n -= s; + } + return MZ_TRUE; +} + +mz_bool mz_zip_writer_add_mem_ex(mz_zip_archive *pZip, const char *pArchive_name, const void *pBuf, size_t buf_size, const void *pComment, mz_uint16 comment_size, mz_uint level_and_flags, mz_uint64 uncomp_size, mz_uint32 uncomp_crc32) +{ + mz_uint16 method = 0, dos_time = 0, dos_date = 0; + mz_uint level, ext_attributes = 0, num_alignment_padding_bytes; + mz_uint64 local_dir_header_ofs = pZip->m_archive_size, cur_archive_file_ofs = pZip->m_archive_size, comp_size = 0; + size_t archive_name_size; + mz_uint8 local_dir_header[MZ_ZIP_LOCAL_DIR_HEADER_SIZE]; + tdefl_compressor *pComp = NULL; + mz_bool store_data_uncompressed; + mz_zip_internal_state *pState; + + if ((int)level_and_flags < 0) + level_and_flags = MZ_DEFAULT_LEVEL; + level = level_and_flags & 0xF; + store_data_uncompressed = ((!level) || (level_and_flags & MZ_ZIP_FLAG_COMPRESSED_DATA)); + + if ((!pZip) || (!pZip->m_pState) || (pZip->m_zip_mode != MZ_ZIP_MODE_WRITING) || ((buf_size) && (!pBuf)) || (!pArchive_name) || ((comment_size) && (!pComment)) || (pZip->m_total_files == 0xFFFF) || (level > MZ_UBER_COMPRESSION)) + return MZ_FALSE; + + pState = pZip->m_pState; + + if ((!(level_and_flags & MZ_ZIP_FLAG_COMPRESSED_DATA)) && (uncomp_size)) + return MZ_FALSE; + // No zip64 support yet + if ((buf_size > 0xFFFFFFFF) || (uncomp_size > 0xFFFFFFFF)) + return MZ_FALSE; + if (!mz_zip_writer_validate_archive_name(pArchive_name)) + return MZ_FALSE; + +#ifndef MINIZ_NO_TIME + { + time_t cur_time; time(&cur_time); + mz_zip_time_to_dos_time(cur_time, &dos_time, &dos_date); + } +#endif // #ifndef MINIZ_NO_TIME + + archive_name_size = strlen(pArchive_name); + if (archive_name_size > 0xFFFF) + return MZ_FALSE; + + num_alignment_padding_bytes = mz_zip_writer_compute_padding_needed_for_file_alignment(pZip); + + // no zip64 support yet + if ((pZip->m_total_files == 0xFFFF) || ((pZip->m_archive_size + num_alignment_padding_bytes + MZ_ZIP_LOCAL_DIR_HEADER_SIZE + MZ_ZIP_CENTRAL_DIR_HEADER_SIZE + comment_size + archive_name_size) > 0xFFFFFFFF)) + return MZ_FALSE; + + if ((archive_name_size) && (pArchive_name[archive_name_size - 1] == '/')) + { + // Set DOS Subdirectory attribute bit. + ext_attributes |= 0x10; + // Subdirectories cannot contain data. + if ((buf_size) || (uncomp_size)) + return MZ_FALSE; + } + + // Try to do any allocations before writing to the archive, so if an allocation fails the file remains unmodified. (A good idea if we're doing an in-place modification.) 
+ if ((!mz_zip_array_ensure_room(pZip, &pState->m_central_dir, MZ_ZIP_CENTRAL_DIR_HEADER_SIZE + archive_name_size + comment_size)) || (!mz_zip_array_ensure_room(pZip, &pState->m_central_dir_offsets, 1))) + return MZ_FALSE; + + if ((!store_data_uncompressed) && (buf_size)) + { + if (NULL == (pComp = (tdefl_compressor *)pZip->m_pAlloc(pZip->m_pAlloc_opaque, 1, sizeof(tdefl_compressor)))) + return MZ_FALSE; + } + + if (!mz_zip_writer_write_zeros(pZip, cur_archive_file_ofs, num_alignment_padding_bytes + sizeof(local_dir_header))) + { + pZip->m_pFree(pZip->m_pAlloc_opaque, pComp); + return MZ_FALSE; + } + local_dir_header_ofs += num_alignment_padding_bytes; + if (pZip->m_file_offset_alignment) { MZ_ASSERT((local_dir_header_ofs & (pZip->m_file_offset_alignment - 1)) == 0); } + cur_archive_file_ofs += num_alignment_padding_bytes + sizeof(local_dir_header); + + MZ_CLEAR_OBJ(local_dir_header); + if (pZip->m_pWrite(pZip->m_pIO_opaque, cur_archive_file_ofs, pArchive_name, archive_name_size) != archive_name_size) + { + pZip->m_pFree(pZip->m_pAlloc_opaque, pComp); + return MZ_FALSE; + } + cur_archive_file_ofs += archive_name_size; + + if (!(level_and_flags & MZ_ZIP_FLAG_COMPRESSED_DATA)) + { + uncomp_crc32 = (mz_uint32)mz_crc32(MZ_CRC32_INIT, (const mz_uint8*)pBuf, buf_size); + uncomp_size = buf_size; + if (uncomp_size <= 3) + { + level = 0; + store_data_uncompressed = MZ_TRUE; + } + } + + if (store_data_uncompressed) + { + if (pZip->m_pWrite(pZip->m_pIO_opaque, cur_archive_file_ofs, pBuf, buf_size) != buf_size) + { + pZip->m_pFree(pZip->m_pAlloc_opaque, pComp); + return MZ_FALSE; + } + + cur_archive_file_ofs += buf_size; + comp_size = buf_size; + + if (level_and_flags & MZ_ZIP_FLAG_COMPRESSED_DATA) + method = MZ_DEFLATED; + } + else if (buf_size) + { + mz_zip_writer_add_state state; + + state.m_pZip = pZip; + state.m_cur_archive_file_ofs = cur_archive_file_ofs; + state.m_comp_size = 0; + + if ((tdefl_init(pComp, mz_zip_writer_add_put_buf_callback, &state, tdefl_create_comp_flags_from_zip_params(level, -15, MZ_DEFAULT_STRATEGY)) != TDEFL_STATUS_OKAY) || + (tdefl_compress_buffer(pComp, pBuf, buf_size, TDEFL_FINISH) != TDEFL_STATUS_DONE)) + { + pZip->m_pFree(pZip->m_pAlloc_opaque, pComp); + return MZ_FALSE; + } + + comp_size = state.m_comp_size; + cur_archive_file_ofs = state.m_cur_archive_file_ofs; + + method = MZ_DEFLATED; + } + + pZip->m_pFree(pZip->m_pAlloc_opaque, pComp); + pComp = NULL; + + // no zip64 support yet + if ((comp_size > 0xFFFFFFFF) || (cur_archive_file_ofs > 0xFFFFFFFF)) + return MZ_FALSE; + + if (!mz_zip_writer_create_local_dir_header(pZip, local_dir_header, (mz_uint16)archive_name_size, 0, uncomp_size, comp_size, uncomp_crc32, method, 0, dos_time, dos_date)) + return MZ_FALSE; + + if (pZip->m_pWrite(pZip->m_pIO_opaque, local_dir_header_ofs, local_dir_header, sizeof(local_dir_header)) != sizeof(local_dir_header)) + return MZ_FALSE; + + if (!mz_zip_writer_add_to_central_dir(pZip, pArchive_name, (mz_uint16)archive_name_size, NULL, 0, pComment, comment_size, uncomp_size, comp_size, uncomp_crc32, method, 0, dos_time, dos_date, local_dir_header_ofs, ext_attributes)) + return MZ_FALSE; + + pZip->m_total_files++; + pZip->m_archive_size = cur_archive_file_ofs; + + return MZ_TRUE; +} + +#ifndef MINIZ_NO_STDIO +mz_bool mz_zip_writer_add_file(mz_zip_archive *pZip, const char *pArchive_name, const char *pSrc_filename, const void *pComment, mz_uint16 comment_size, mz_uint level_and_flags) +{ + mz_uint uncomp_crc32 = MZ_CRC32_INIT, level, num_alignment_padding_bytes; + mz_uint16 method = 0, 
dos_time = 0, dos_date = 0, ext_attributes = 0; + mz_uint64 local_dir_header_ofs = pZip->m_archive_size, cur_archive_file_ofs = pZip->m_archive_size, uncomp_size = 0, comp_size = 0; + size_t archive_name_size; + mz_uint8 local_dir_header[MZ_ZIP_LOCAL_DIR_HEADER_SIZE]; + MZ_FILE *pSrc_file = NULL; + + if ((int)level_and_flags < 0) + level_and_flags = MZ_DEFAULT_LEVEL; + level = level_and_flags & 0xF; + + if ((!pZip) || (!pZip->m_pState) || (pZip->m_zip_mode != MZ_ZIP_MODE_WRITING) || (!pArchive_name) || ((comment_size) && (!pComment)) || (level > MZ_UBER_COMPRESSION)) + return MZ_FALSE; + if (level_and_flags & MZ_ZIP_FLAG_COMPRESSED_DATA) + return MZ_FALSE; + if (!mz_zip_writer_validate_archive_name(pArchive_name)) + return MZ_FALSE; + + archive_name_size = strlen(pArchive_name); + if (archive_name_size > 0xFFFF) + return MZ_FALSE; + + num_alignment_padding_bytes = mz_zip_writer_compute_padding_needed_for_file_alignment(pZip); + + // no zip64 support yet + if ((pZip->m_total_files == 0xFFFF) || ((pZip->m_archive_size + num_alignment_padding_bytes + MZ_ZIP_LOCAL_DIR_HEADER_SIZE + MZ_ZIP_CENTRAL_DIR_HEADER_SIZE + comment_size + archive_name_size) > 0xFFFFFFFF)) + return MZ_FALSE; + + if (!mz_zip_get_file_modified_time(pSrc_filename, &dos_time, &dos_date)) + return MZ_FALSE; + + pSrc_file = MZ_FOPEN(pSrc_filename, "rb"); + if (!pSrc_file) + return MZ_FALSE; + MZ_FSEEK64(pSrc_file, 0, SEEK_END); + uncomp_size = MZ_FTELL64(pSrc_file); + MZ_FSEEK64(pSrc_file, 0, SEEK_SET); + + if (uncomp_size > 0xFFFFFFFF) + { + // No zip64 support yet + MZ_FCLOSE(pSrc_file); + return MZ_FALSE; + } + if (uncomp_size <= 3) + level = 0; + + if (!mz_zip_writer_write_zeros(pZip, cur_archive_file_ofs, num_alignment_padding_bytes + sizeof(local_dir_header))) + { + MZ_FCLOSE(pSrc_file); + return MZ_FALSE; + } + local_dir_header_ofs += num_alignment_padding_bytes; + if (pZip->m_file_offset_alignment) { MZ_ASSERT((local_dir_header_ofs & (pZip->m_file_offset_alignment - 1)) == 0); } + cur_archive_file_ofs += num_alignment_padding_bytes + sizeof(local_dir_header); + + MZ_CLEAR_OBJ(local_dir_header); + if (pZip->m_pWrite(pZip->m_pIO_opaque, cur_archive_file_ofs, pArchive_name, archive_name_size) != archive_name_size) + { + MZ_FCLOSE(pSrc_file); + return MZ_FALSE; + } + cur_archive_file_ofs += archive_name_size; + + if (uncomp_size) + { + mz_uint64 uncomp_remaining = uncomp_size; + void *pRead_buf = pZip->m_pAlloc(pZip->m_pAlloc_opaque, 1, MZ_ZIP_MAX_IO_BUF_SIZE); + if (!pRead_buf) + { + MZ_FCLOSE(pSrc_file); + return MZ_FALSE; + } + + if (!level) + { + while (uncomp_remaining) + { + mz_uint n = (mz_uint)MZ_MIN(MZ_ZIP_MAX_IO_BUF_SIZE, uncomp_remaining); + if ((MZ_FREAD(pRead_buf, 1, n, pSrc_file) != n) || (pZip->m_pWrite(pZip->m_pIO_opaque, cur_archive_file_ofs, pRead_buf, n) != n)) + { + pZip->m_pFree(pZip->m_pAlloc_opaque, pRead_buf); + MZ_FCLOSE(pSrc_file); + return MZ_FALSE; + } + uncomp_crc32 = (mz_uint32)mz_crc32(uncomp_crc32, (const mz_uint8 *)pRead_buf, n); + uncomp_remaining -= n; + cur_archive_file_ofs += n; + } + comp_size = uncomp_size; + } + else + { + mz_bool result = MZ_FALSE; + mz_zip_writer_add_state state; + tdefl_compressor *pComp = (tdefl_compressor *)pZip->m_pAlloc(pZip->m_pAlloc_opaque, 1, sizeof(tdefl_compressor)); + if (!pComp) + { + pZip->m_pFree(pZip->m_pAlloc_opaque, pRead_buf); + MZ_FCLOSE(pSrc_file); + return MZ_FALSE; + } + + state.m_pZip = pZip; + state.m_cur_archive_file_ofs = cur_archive_file_ofs; + state.m_comp_size = 0; + + if (tdefl_init(pComp, mz_zip_writer_add_put_buf_callback, &state, 
tdefl_create_comp_flags_from_zip_params(level, -15, MZ_DEFAULT_STRATEGY)) != TDEFL_STATUS_OKAY) + { + pZip->m_pFree(pZip->m_pAlloc_opaque, pComp); + pZip->m_pFree(pZip->m_pAlloc_opaque, pRead_buf); + MZ_FCLOSE(pSrc_file); + return MZ_FALSE; + } + + for ( ; ; ) + { + size_t in_buf_size = (mz_uint32)MZ_MIN(uncomp_remaining, MZ_ZIP_MAX_IO_BUF_SIZE); + tdefl_status status; + + if (MZ_FREAD(pRead_buf, 1, in_buf_size, pSrc_file) != in_buf_size) + break; + + uncomp_crc32 = (mz_uint32)mz_crc32(uncomp_crc32, (const mz_uint8 *)pRead_buf, in_buf_size); + uncomp_remaining -= in_buf_size; + + status = tdefl_compress_buffer(pComp, pRead_buf, in_buf_size, uncomp_remaining ? TDEFL_NO_FLUSH : TDEFL_FINISH); + if (status == TDEFL_STATUS_DONE) + { + result = MZ_TRUE; + break; + } + else if (status != TDEFL_STATUS_OKAY) + break; + } + + pZip->m_pFree(pZip->m_pAlloc_opaque, pComp); + + if (!result) + { + pZip->m_pFree(pZip->m_pAlloc_opaque, pRead_buf); + MZ_FCLOSE(pSrc_file); + return MZ_FALSE; + } + + comp_size = state.m_comp_size; + cur_archive_file_ofs = state.m_cur_archive_file_ofs; + + method = MZ_DEFLATED; + } + + pZip->m_pFree(pZip->m_pAlloc_opaque, pRead_buf); + } + + MZ_FCLOSE(pSrc_file); pSrc_file = NULL; + + // no zip64 support yet + if ((comp_size > 0xFFFFFFFF) || (cur_archive_file_ofs > 0xFFFFFFFF)) + return MZ_FALSE; + + if (!mz_zip_writer_create_local_dir_header(pZip, local_dir_header, (mz_uint16)archive_name_size, 0, uncomp_size, comp_size, uncomp_crc32, method, 0, dos_time, dos_date)) + return MZ_FALSE; + + if (pZip->m_pWrite(pZip->m_pIO_opaque, local_dir_header_ofs, local_dir_header, sizeof(local_dir_header)) != sizeof(local_dir_header)) + return MZ_FALSE; + + if (!mz_zip_writer_add_to_central_dir(pZip, pArchive_name, (mz_uint16)archive_name_size, NULL, 0, pComment, comment_size, uncomp_size, comp_size, uncomp_crc32, method, 0, dos_time, dos_date, local_dir_header_ofs, ext_attributes)) + return MZ_FALSE; + + pZip->m_total_files++; + pZip->m_archive_size = cur_archive_file_ofs; + + return MZ_TRUE; +} +#endif // #ifndef MINIZ_NO_STDIO + +mz_bool mz_zip_writer_add_from_zip_reader(mz_zip_archive *pZip, mz_zip_archive *pSource_zip, mz_uint file_index) +{ + mz_uint n, bit_flags, num_alignment_padding_bytes; + mz_uint64 comp_bytes_remaining, local_dir_header_ofs; + mz_uint64 cur_src_file_ofs, cur_dst_file_ofs; + mz_uint32 local_header_u32[(MZ_ZIP_LOCAL_DIR_HEADER_SIZE + sizeof(mz_uint32) - 1) / sizeof(mz_uint32)]; mz_uint8 *pLocal_header = (mz_uint8 *)local_header_u32; + mz_uint8 central_header[MZ_ZIP_CENTRAL_DIR_HEADER_SIZE]; + size_t orig_central_dir_size; + mz_zip_internal_state *pState; + void *pBuf; const mz_uint8 *pSrc_central_header; + + if ((!pZip) || (!pZip->m_pState) || (pZip->m_zip_mode != MZ_ZIP_MODE_WRITING)) + return MZ_FALSE; + if (NULL == (pSrc_central_header = mz_zip_reader_get_cdh(pSource_zip, file_index))) + return MZ_FALSE; + pState = pZip->m_pState; + + num_alignment_padding_bytes = mz_zip_writer_compute_padding_needed_for_file_alignment(pZip); + + // no zip64 support yet + if ((pZip->m_total_files == 0xFFFF) || ((pZip->m_archive_size + num_alignment_padding_bytes + MZ_ZIP_LOCAL_DIR_HEADER_SIZE + MZ_ZIP_CENTRAL_DIR_HEADER_SIZE) > 0xFFFFFFFF)) + return MZ_FALSE; + + cur_src_file_ofs = MZ_READ_LE32(pSrc_central_header + MZ_ZIP_CDH_LOCAL_HEADER_OFS); + cur_dst_file_ofs = pZip->m_archive_size; + + if (pSource_zip->m_pRead(pSource_zip->m_pIO_opaque, cur_src_file_ofs, pLocal_header, MZ_ZIP_LOCAL_DIR_HEADER_SIZE) != MZ_ZIP_LOCAL_DIR_HEADER_SIZE) + return MZ_FALSE; + if 
(MZ_READ_LE32(pLocal_header) != MZ_ZIP_LOCAL_DIR_HEADER_SIG) + return MZ_FALSE; + cur_src_file_ofs += MZ_ZIP_LOCAL_DIR_HEADER_SIZE; + + if (!mz_zip_writer_write_zeros(pZip, cur_dst_file_ofs, num_alignment_padding_bytes)) + return MZ_FALSE; + cur_dst_file_ofs += num_alignment_padding_bytes; + local_dir_header_ofs = cur_dst_file_ofs; + if (pZip->m_file_offset_alignment) { MZ_ASSERT((local_dir_header_ofs & (pZip->m_file_offset_alignment - 1)) == 0); } + + if (pZip->m_pWrite(pZip->m_pIO_opaque, cur_dst_file_ofs, pLocal_header, MZ_ZIP_LOCAL_DIR_HEADER_SIZE) != MZ_ZIP_LOCAL_DIR_HEADER_SIZE) + return MZ_FALSE; + cur_dst_file_ofs += MZ_ZIP_LOCAL_DIR_HEADER_SIZE; + + n = MZ_READ_LE16(pLocal_header + MZ_ZIP_LDH_FILENAME_LEN_OFS) + MZ_READ_LE16(pLocal_header + MZ_ZIP_LDH_EXTRA_LEN_OFS); + comp_bytes_remaining = n + MZ_READ_LE32(pSrc_central_header + MZ_ZIP_CDH_COMPRESSED_SIZE_OFS); + + if (NULL == (pBuf = pZip->m_pAlloc(pZip->m_pAlloc_opaque, 1, (size_t)MZ_MAX(sizeof(mz_uint32) * 4, MZ_MIN(MZ_ZIP_MAX_IO_BUF_SIZE, comp_bytes_remaining))))) + return MZ_FALSE; + + while (comp_bytes_remaining) + { + n = (mz_uint)MZ_MIN(MZ_ZIP_MAX_IO_BUF_SIZE, comp_bytes_remaining); + if (pSource_zip->m_pRead(pSource_zip->m_pIO_opaque, cur_src_file_ofs, pBuf, n) != n) + { + pZip->m_pFree(pZip->m_pAlloc_opaque, pBuf); + return MZ_FALSE; + } + cur_src_file_ofs += n; + + if (pZip->m_pWrite(pZip->m_pIO_opaque, cur_dst_file_ofs, pBuf, n) != n) + { + pZip->m_pFree(pZip->m_pAlloc_opaque, pBuf); + return MZ_FALSE; + } + cur_dst_file_ofs += n; + + comp_bytes_remaining -= n; + } + + bit_flags = MZ_READ_LE16(pLocal_header + MZ_ZIP_LDH_BIT_FLAG_OFS); + if (bit_flags & 8) + { + // Copy data descriptor + if (pSource_zip->m_pRead(pSource_zip->m_pIO_opaque, cur_src_file_ofs, pBuf, sizeof(mz_uint32) * 4) != sizeof(mz_uint32) * 4) + { + pZip->m_pFree(pZip->m_pAlloc_opaque, pBuf); + return MZ_FALSE; + } + + n = sizeof(mz_uint32) * ((MZ_READ_LE32(pBuf) == 0x08074b50) ? 
4 : 3); + if (pZip->m_pWrite(pZip->m_pIO_opaque, cur_dst_file_ofs, pBuf, n) != n) + { + pZip->m_pFree(pZip->m_pAlloc_opaque, pBuf); + return MZ_FALSE; + } + + cur_src_file_ofs += n; + cur_dst_file_ofs += n; + } + pZip->m_pFree(pZip->m_pAlloc_opaque, pBuf); + + // no zip64 support yet + if (cur_dst_file_ofs > 0xFFFFFFFF) + return MZ_FALSE; + + orig_central_dir_size = pState->m_central_dir.m_size; + + memcpy(central_header, pSrc_central_header, MZ_ZIP_CENTRAL_DIR_HEADER_SIZE); + MZ_WRITE_LE32(central_header + MZ_ZIP_CDH_LOCAL_HEADER_OFS, local_dir_header_ofs); + if (!mz_zip_array_push_back(pZip, &pState->m_central_dir, central_header, MZ_ZIP_CENTRAL_DIR_HEADER_SIZE)) + return MZ_FALSE; + + n = MZ_READ_LE16(pSrc_central_header + MZ_ZIP_CDH_FILENAME_LEN_OFS) + MZ_READ_LE16(pSrc_central_header + MZ_ZIP_CDH_EXTRA_LEN_OFS) + MZ_READ_LE16(pSrc_central_header + MZ_ZIP_CDH_COMMENT_LEN_OFS); + if (!mz_zip_array_push_back(pZip, &pState->m_central_dir, pSrc_central_header + MZ_ZIP_CENTRAL_DIR_HEADER_SIZE, n)) + { + mz_zip_array_resize(pZip, &pState->m_central_dir, orig_central_dir_size, MZ_FALSE); + return MZ_FALSE; + } + + if (pState->m_central_dir.m_size > 0xFFFFFFFF) + return MZ_FALSE; + n = (mz_uint32)orig_central_dir_size; + if (!mz_zip_array_push_back(pZip, &pState->m_central_dir_offsets, &n, 1)) + { + mz_zip_array_resize(pZip, &pState->m_central_dir, orig_central_dir_size, MZ_FALSE); + return MZ_FALSE; + } + + pZip->m_total_files++; + pZip->m_archive_size = cur_dst_file_ofs; + + return MZ_TRUE; +} + +mz_bool mz_zip_writer_finalize_archive(mz_zip_archive *pZip) +{ + mz_zip_internal_state *pState; + mz_uint64 central_dir_ofs, central_dir_size; + mz_uint8 hdr[MZ_ZIP_END_OF_CENTRAL_DIR_HEADER_SIZE]; + + if ((!pZip) || (!pZip->m_pState) || (pZip->m_zip_mode != MZ_ZIP_MODE_WRITING)) + return MZ_FALSE; + + pState = pZip->m_pState; + + // no zip64 support yet + if ((pZip->m_total_files > 0xFFFF) || ((pZip->m_archive_size + pState->m_central_dir.m_size + MZ_ZIP_END_OF_CENTRAL_DIR_HEADER_SIZE) > 0xFFFFFFFF)) + return MZ_FALSE; + + central_dir_ofs = 0; + central_dir_size = 0; + if (pZip->m_total_files) + { + // Write central directory + central_dir_ofs = pZip->m_archive_size; + central_dir_size = pState->m_central_dir.m_size; + pZip->m_central_directory_file_ofs = central_dir_ofs; + if (pZip->m_pWrite(pZip->m_pIO_opaque, central_dir_ofs, pState->m_central_dir.m_p, (size_t)central_dir_size) != central_dir_size) + return MZ_FALSE; + pZip->m_archive_size += central_dir_size; + } + + // Write end of central directory record + MZ_CLEAR_OBJ(hdr); + MZ_WRITE_LE32(hdr + MZ_ZIP_ECDH_SIG_OFS, MZ_ZIP_END_OF_CENTRAL_DIR_HEADER_SIG); + MZ_WRITE_LE16(hdr + MZ_ZIP_ECDH_CDIR_NUM_ENTRIES_ON_DISK_OFS, pZip->m_total_files); + MZ_WRITE_LE16(hdr + MZ_ZIP_ECDH_CDIR_TOTAL_ENTRIES_OFS, pZip->m_total_files); + MZ_WRITE_LE32(hdr + MZ_ZIP_ECDH_CDIR_SIZE_OFS, central_dir_size); + MZ_WRITE_LE32(hdr + MZ_ZIP_ECDH_CDIR_OFS_OFS, central_dir_ofs); + + if (pZip->m_pWrite(pZip->m_pIO_opaque, pZip->m_archive_size, hdr, sizeof(hdr)) != sizeof(hdr)) + return MZ_FALSE; +#ifndef MINIZ_NO_STDIO + if ((pState->m_pFile) && (MZ_FFLUSH(pState->m_pFile) == EOF)) + return MZ_FALSE; +#endif // #ifndef MINIZ_NO_STDIO + + pZip->m_archive_size += sizeof(hdr); + + pZip->m_zip_mode = MZ_ZIP_MODE_WRITING_HAS_BEEN_FINALIZED; + return MZ_TRUE; +} + +mz_bool mz_zip_writer_finalize_heap_archive(mz_zip_archive *pZip, void **pBuf, size_t *pSize) +{ + if ((!pZip) || (!pZip->m_pState) || (!pBuf) || (!pSize)) + return MZ_FALSE; + if (pZip->m_pWrite != 
mz_zip_heap_write_func) + return MZ_FALSE; + if (!mz_zip_writer_finalize_archive(pZip)) + return MZ_FALSE; + + *pBuf = pZip->m_pState->m_pMem; + *pSize = pZip->m_pState->m_mem_size; + pZip->m_pState->m_pMem = NULL; + pZip->m_pState->m_mem_size = pZip->m_pState->m_mem_capacity = 0; + return MZ_TRUE; +} + +mz_bool mz_zip_writer_end(mz_zip_archive *pZip) +{ + mz_zip_internal_state *pState; + mz_bool status = MZ_TRUE; + if ((!pZip) || (!pZip->m_pState) || (!pZip->m_pAlloc) || (!pZip->m_pFree) || ((pZip->m_zip_mode != MZ_ZIP_MODE_WRITING) && (pZip->m_zip_mode != MZ_ZIP_MODE_WRITING_HAS_BEEN_FINALIZED))) + return MZ_FALSE; + + pState = pZip->m_pState; + pZip->m_pState = NULL; + mz_zip_array_clear(pZip, &pState->m_central_dir); + mz_zip_array_clear(pZip, &pState->m_central_dir_offsets); + mz_zip_array_clear(pZip, &pState->m_sorted_central_dir_offsets); + +#ifndef MINIZ_NO_STDIO + if (pState->m_pFile) + { + MZ_FCLOSE(pState->m_pFile); + pState->m_pFile = NULL; + } +#endif // #ifndef MINIZ_NO_STDIO + + if ((pZip->m_pWrite == mz_zip_heap_write_func) && (pState->m_pMem)) + { + pZip->m_pFree(pZip->m_pAlloc_opaque, pState->m_pMem); + pState->m_pMem = NULL; + } + + pZip->m_pFree(pZip->m_pAlloc_opaque, pState); + pZip->m_zip_mode = MZ_ZIP_MODE_INVALID; + return status; +} + +#ifndef MINIZ_NO_STDIO +mz_bool mz_zip_add_mem_to_archive_file_in_place(const char *pZip_filename, const char *pArchive_name, const void *pBuf, size_t buf_size, const void *pComment, mz_uint16 comment_size, mz_uint level_and_flags) +{ + mz_bool status, created_new_archive = MZ_FALSE; + mz_zip_archive zip_archive; + struct MZ_FILE_STAT_STRUCT file_stat; + MZ_CLEAR_OBJ(zip_archive); + if ((int)level_and_flags < 0) + level_and_flags = MZ_DEFAULT_LEVEL; + if ((!pZip_filename) || (!pArchive_name) || ((buf_size) && (!pBuf)) || ((comment_size) && (!pComment)) || ((level_and_flags & 0xF) > MZ_UBER_COMPRESSION)) + return MZ_FALSE; + if (!mz_zip_writer_validate_archive_name(pArchive_name)) + return MZ_FALSE; + if (MZ_FILE_STAT(pZip_filename, &file_stat) != 0) + { + // Create a new archive. + if (!mz_zip_writer_init_file(&zip_archive, pZip_filename, 0)) + return MZ_FALSE; + created_new_archive = MZ_TRUE; + } + else + { + // Append to an existing archive. + if (!mz_zip_reader_init_file(&zip_archive, pZip_filename, level_and_flags | MZ_ZIP_FLAG_DO_NOT_SORT_CENTRAL_DIRECTORY)) + return MZ_FALSE; + if (!mz_zip_writer_init_from_reader(&zip_archive, pZip_filename)) + { + mz_zip_reader_end(&zip_archive); + return MZ_FALSE; + } + } + status = mz_zip_writer_add_mem_ex(&zip_archive, pArchive_name, pBuf, buf_size, pComment, comment_size, level_and_flags, 0, 0); + // Always finalize, even if adding failed for some reason, so we have a valid central directory. (This may not always succeed, but we can try.) + if (!mz_zip_writer_finalize_archive(&zip_archive)) + status = MZ_FALSE; + if (!mz_zip_writer_end(&zip_archive)) + status = MZ_FALSE; + if ((!status) && (created_new_archive)) + { + // It's a new archive and something went wrong, so just delete it. 
+ int ignoredStatus = MZ_DELETE_FILE(pZip_filename); + (void)ignoredStatus; + } + return status; +} + +void *mz_zip_extract_archive_file_to_heap(const char *pZip_filename, const char *pArchive_name, size_t *pSize, mz_uint flags) +{ + int file_index; + mz_zip_archive zip_archive; + void *p = NULL; + + if (pSize) + *pSize = 0; + + if ((!pZip_filename) || (!pArchive_name)) + return NULL; + + MZ_CLEAR_OBJ(zip_archive); + if (!mz_zip_reader_init_file(&zip_archive, pZip_filename, flags | MZ_ZIP_FLAG_DO_NOT_SORT_CENTRAL_DIRECTORY)) + return NULL; + + if ((file_index = mz_zip_reader_locate_file(&zip_archive, pArchive_name, NULL, flags)) >= 0) + p = mz_zip_reader_extract_to_heap(&zip_archive, file_index, pSize, flags); + + mz_zip_reader_end(&zip_archive); + return p; +} + +#endif // #ifndef MINIZ_NO_STDIO + +#endif // #ifndef MINIZ_NO_ARCHIVE_WRITING_APIS + +#endif // #ifndef MINIZ_NO_ARCHIVE_APIS + +#ifdef __cplusplus +} +#endif + +#endif // MINIZ_HEADER_FILE_ONLY + +/* + This is free and unencumbered software released into the public domain. + + Anyone is free to copy, modify, publish, use, compile, sell, or + distribute this software, either in source code form or as a compiled + binary, for any purpose, commercial or non-commercial, and by any + means. + + In jurisdictions that recognize copyright laws, the author or authors + of this software dedicate any and all copyright interest in the + software to the public domain. We make this dedication for the benefit + of the public at large and to the detriment of our heirs and + successors. We intend this dedication to be an overt act of + relinquishment in perpetuity of all present and future rights to this + software under copyright law. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + OTHER DEALINGS IN THE SOFTWARE. + + For more information, please refer to +*/ ADDED src/mkbuiltin.c Index: src/mkbuiltin.c ================================================================== --- src/mkbuiltin.c +++ src/mkbuiltin.c @@ -0,0 +1,164 @@ +/* +** Copyright (c) 2014 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This is a stand-alone utility program that is part of the Fossil build +** process. This program reads files named on the command line and converts +** them into ANSI-C static char array variables. Output is written onto +** standard output. +** +** The makefiles use this utility package various resources (large scripts, +** GIF images, etc) that are separate files in the source code as byte +** arrays in the resulting executable. 
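+**
+** As a rough sketch of the idea (the identifier names below are
+** hypothetical, not the ones the program actually emits), an input file
+** named "style.css" containing the six bytes "body{}" would be turned
+** into something along the lines of:
+**
+**     static const unsigned char aData1[] = { 98,111,100,121,123,125 };
+**
+** together with a table that maps each resource name to its array so that
+** the data can be located by a binary search on the name.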
+*/ +#include +#include +#include + + +/* +** Read the entire content of the file named zFilename into memory obtained +** from malloc() and return a pointer to that memory. Write the size of the +** file into *pnByte. +*/ +static unsigned char *read_file(const char *zFilename, int *pnByte){ + FILE *in; + unsigned char *z; + int nByte; + int got; + in = fopen(zFilename, "rb"); + if( in==0 ){ + return 0; + } + fseek(in, 0, SEEK_END); + *pnByte = nByte = ftell(in); + fseek(in, 0, SEEK_SET); + z = malloc( nByte+1 ); + if( z==0 ){ + fprintf(stderr, "failed to allocate %d bytes\n", nByte+1); + exit(1); + } + got = fread(z, 1, nByte, in); + fclose(in); + z[got] = 0; + return z; +} + +/* +** There is an instance of the following for each file translated. +*/ +typedef struct Resource Resource; +struct Resource { + const char *zName; + int nByte; + int idx; +}; + +/* +** Compare two Resource objects for sorting purposes. They sort +** in zName order so that Fossil can search for resources using +** a binary search. +*/ +static int compareResource(const void *a, const void *b){ + Resource *pA = (Resource*)a; + Resource *pB = (Resource*)b; + return strcmp(pA->zName, pB->zName); +} + +int main(int argc, char **argv){ + int i, sz; + int j, n; + Resource *aRes; + int nRes; + unsigned char *pData; + int nErr = 0; + int nSkip; + int nPrefix = 0; + + if( argc>3 && strcmp(argv[1],"--prefix")==0 ){ + nPrefix = (int)strlen(argv[2]); + argc -= 2; + argv += 2; + } + nRes = argc - 1; + aRes = malloc( nRes*sizeof(aRes[0]) ); + if( aRes==0 ){ + fprintf(stderr, "malloc failed\n"); + return 1; + } + for(i=0; i=nPrefix ) z += nPrefix; + while( z[0]=='.' || z[0]=='/' ){ z++; } + aRes[i].zName = z; + } + qsort(aRes, nRes, sizeof(aRes[0]), compareResource); + for(i=0; i #include #include #include @@ -46,10 +61,11 @@ /* ** Each entry looks like this: */ typedef struct Entry { int eType; + char *zIf; char *zFunc; char *zPath; char *zHelp; } Entry; @@ -59,11 +75,11 @@ #define N_ENTRY 500 /* ** Maximum size of a help message */ -#define MX_HELP 10000 +#define MX_HELP 25000 /* ** Table of entries */ Entry aEntry[N_ENTRY]; @@ -72,10 +88,15 @@ ** Current help message accumulator */ char zHelp[MX_HELP]; int nHelp; +/* +** Most recently encountered #if +*/ +char zIf[200]; + /* ** How many entries are used */ int nUsed; int nFixed; @@ -120,20 +141,45 @@ aEntry[nUsed].eType = eType; aEntry[nUsed].zPath = string_dup(&zLine[i], j); aEntry[nUsed].zFunc = 0; nUsed++; } + +/* +** Check to see if the current line is an #if and if it is, add it to +** the zIf[] string. If the current line is an #endif or #else or #elif +** then cancel the current zIf[] string. +*/ +void scan_for_if(const char *zLine){ + int i; + int len; + if( zLine[0]!='#' ) return; + for(i=1; isspace(zLine[i]); i++){} + if( zLine[i]==0 ) return; + len = strlen(&zLine[i]); + if( memcmp(&zLine[i],"if",2)==0 ){ + zIf[0] = '#'; + memcpy(&zIf[1], &zLine[i], len+1); + }else if( zLine[i]=='e' ){ + zIf[0] = 0; + } +} /* ** Scan a line for a function that implements a web page or command. 
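**
** For example (an illustrative sketch, not text taken from this change),
** a source file might carry a header comment containing a line such as
**
**     ** COMMAND: test-foo
**
** and the function definition that follows it, e.g.
** "void test_foo_cmd(void){...}", is what this routine looks for, so that
** the entry recorded for the COMMAND: or WEBPAGE: marker can be associated
** with its implementing function.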
*/ void scan_for_func(char *zLine){ int i,j,k; char *z; if( nUsed<=nFixed ) return; - if( strncmp(zLine, "**", 2)==0 && isspace(zLine[2]) - && strlen(zLine)nFixed ){ + if( strncmp(zLine, "**", 2)==0 + && isspace(zLine[2]) + && strlen(zLine)nFixed + && memcmp(zLine,"** COMMAND:",11)!=0 + && memcmp(zLine,"** WEBPAGE:",11)!=0 + ){ if( zLine[2]=='\n' ){ zHelp[nHelp++] = '\n'; }else{ if( strncmp(&zLine[3], "Usage: ", 6)==0 ) nHelp = 0; strcpy(&zHelp[nHelp], &zLine[3]); @@ -160,10 +206,11 @@ z = string_dup(&zHelp[k], nHelp-k); }else{ z = 0; } for(k=nFixed; k +#include +#include + +int main(int argc, char *argv[]){ + FILE *m,*u,*v; + char *z; + int i, x, d; + char b[1000]; + char vx[1000]; + memset(b,0,sizeof(b)); + memset(vx,0,sizeof(vx)); + u = fopen(argv[1],"r"); + if( fgets(b, sizeof(b)-1,u)==0 ){ + fprintf(stderr, "malformed manifest.uuid file: %s\n", argv[1]); + exit(1); + } + fclose(u); + for(z=b; z[0] && z[0]!='\r' && z[0]!='\n'; z++){} + *z = 0; + printf("#define MANIFEST_UUID \"%s\"\n",b); + printf("#define MANIFEST_VERSION \"[%10.10s]\"\n",b); + m = fopen(argv[2],"r"); + while(b == fgets(b, sizeof(b)-1,m)){ + if(0 == strncmp("D ",b,2)){ + printf("#define MANIFEST_DATE \"%.10s %.8s\"\n",b+2,b+13); + printf("#define MANIFEST_YEAR \"%.4s\"\n",b+2); + } + } + fclose(m); + v = fopen(argv[3],"r"); + if( fgets(b, sizeof(b)-1,v)==0 ){ + fprintf(stderr, "malformed VERSION file: %s\n", argv[3]); + exit(1); + } + fclose(v); + for(z=b; z[0] && z[0]!='\r' && z[0]!='\n'; z++){} + *z = 0; + printf("#define RELEASE_VERSION \"%s\"\n", b); + x=0; + i=0; + z=b; + while(1){ + if( z[0]>='0' && z[0]<='9' ){ + x = x*10 + z[0] - '0'; + }else{ + sprintf(&vx[i],"%02d",x); + i += 2; + x = 0; + if( z[0]==0 ) break; + } + z++; + } + for(z=vx; z[0]=='0'; z++){} + printf("#define RELEASE_VERSION_NUMBER %s\n", z); + memset(vx,0,sizeof(vx)); + strcpy(vx,b); + d = 0; + for(z=vx; z[0]; z++){ + if( z[0]!='.' ) continue; + if ( d<3 ){ + z[0] = ','; + d++; + }else{ + z[0] = '\0'; + break; + } + } + printf("#define RELEASE_RESOURCE_VERSION %s", vx); + while( d<3 ){ printf(",0"); d++; } + printf("\n"); +#if defined(__DMC__) /* e.g. 0x857 */ + d = (__DMC__ & 0xF00) >> 8; /* major */ + x = (__DMC__ & 0x0F0) >> 4; /* minor */ + i = (__DMC__ & 0x00F); /* revision */ + printf("#define COMPILER_VERSION \"%d.%d.%d\"\n", d, x, i); +#elif defined(__POCC__) /* e.g. 700 */ + d = (__POCC__ / 100); /* major */ + x = (__POCC__ % 100); /* minor */ + printf("#define COMPILER_VERSION \"%d.%02d\"\n", d, x); +#elif defined(_MSC_VER) /* e.g. 1800 */ + d = (_MSC_VER / 100); /* major */ + x = (_MSC_VER % 100); /* minor */ + printf("#define COMPILER_VERSION \"%d.%02d\"\n", d, x); +#endif + return 0; +} ADDED src/moderate.c Index: src/moderate.c ================================================================== --- src/moderate.c +++ src/moderate.c @@ -0,0 +1,166 @@ +/* +** Copyright (c) 2012 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code used to deal with moderator actions for +** Wiki and Tickets. 
+*/ +#include "config.h" +#include "moderate.h" +#include + +/* +** Create a table to represent pending moderation requests, if the +** table does not already exist. +*/ +void moderation_table_create(void){ + db_multi_exec( + "CREATE TABLE IF NOT EXISTS %s.modreq(\n" + " objid INTEGER PRIMARY KEY,\n" /* Record pending approval */ + " attachRid INT,\n" /* Object attached */ + " tktid TEXT\n" /* Associated ticket id */ + ");\n", db_name("repository") + ); +} + +/* +** Return TRUE if the modreq table exists +*/ +int moderation_table_exists(void){ + return db_table_exists("repository", "modreq"); +} + +/* +** Return TRUE if the object specified is being held for moderation. +*/ +int moderation_pending(int rid){ + static Stmt q; + int rc; + if( rid==0 || !moderation_table_exists() ) return 0; + db_static_prepare(&q, "SELECT 1 FROM modreq WHERE objid=:objid"); + db_bind_int(&q, ":objid", rid); + rc = db_step(&q)==SQLITE_ROW; + db_reset(&q); + return rc; +} + +/* +** Check to see if the object identified by RID is used for anything. +*/ +static int object_used(int rid){ + static const char *const aTabField[] = { + "modreq", "attachRid", + "mlink", "mid", + "mlink", "fid", + "tagxref", "srcid", + "tagxref", "rid", + }; + int i; + for(i=0; iAll Pending Moderation Requests

+ if( moderation_table_exists() ){ + blob_init(&sql, timeline_query_for_www(), -1); + blob_append_sql(&sql, + " AND event.objid IN (SELECT objid FROM modreq)" + " ORDER BY event.mtime DESC" + ); + db_prepare(&q, "%s", blob_sql_text(&sql)); + www_print_timeline(&q, 0, 0, 0, 0, 0); + db_finalize(&q); + } + style_footer(); +} Index: src/name.c ================================================================== --- src/name.c +++ src/name.c @@ -17,15 +17,285 @@ ** ** This file contains code used to convert user-supplied object names into ** canonical UUIDs. ** ** A user-supplied object name is any unique prefix of a valid UUID but -** not necessarily in canonical form. +** not necessarily in canonical form. */ #include "config.h" #include "name.h" #include + +/* +** Return TRUE if the string begins with something that looks roughly +** like an ISO date/time string. The SQLite date/time functions will +** have the final say-so about whether or not the date/time string is +** well-formed. +*/ +int fossil_isdate(const char *z){ + if( !fossil_isdigit(z[0]) ) return 0; + if( !fossil_isdigit(z[1]) ) return 0; + if( !fossil_isdigit(z[2]) ) return 0; + if( !fossil_isdigit(z[3]) ) return 0; + if( z[4]!='-') return 0; + if( !fossil_isdigit(z[5]) ) return 0; + if( !fossil_isdigit(z[6]) ) return 0; + if( z[7]!='-') return 0; + if( !fossil_isdigit(z[8]) ) return 0; + if( !fossil_isdigit(z[9]) ) return 0; + return 1; +} + +/* +** Return the RID that is the "root" of the branch that contains +** check-in "rid" if inBranch==0 or the first check-in in the branch +** if inBranch==1. +*/ +int start_of_branch(int rid, int inBranch){ + Stmt q; + int rc; + char *zBr; + zBr = db_text("trunk","SELECT value FROM tagxref" + " WHERE rid=%d AND tagid=%d" + " AND tagtype>0", + rid, TAG_BRANCH); + db_prepare(&q, + "SELECT pid, EXISTS(SELECT 1 FROM tagxref" + " WHERE tagid=%d AND tagtype>0" + " AND value=%Q AND rid=plink.pid)" + " FROM plink" + " WHERE cid=:cid AND isprim", + TAG_BRANCH, zBr + ); + fossil_free(zBr); + do{ + db_reset(&q); + db_bind_int(&q, ":cid", rid); + rc = db_step(&q); + if( rc!=SQLITE_ROW ) break; + if( inBranch && db_column_int(&q,1)==0 ) break; + rid = db_column_int(&q, 0); + }while( db_column_int(&q, 1)==1 && rid>0 ); + db_finalize(&q); + return rid; +} + +/* +** Convert a symbolic name into a RID. Acceptable forms: +** +** * SHA1 hash +** * SHA1 hash prefix of at least 4 characters +** * Symbolic Name +** * "tag:" + symbolic name +** * Date or date-time +** * "date:" + Date or date-time +** * symbolic-name ":" date-time +** * "tip" +** +** The following additional forms are available in local checkouts: +** +** * "current" +** * "prev" or "previous" +** * "next" +** +** Return the RID of the matching artifact. Or return 0 if the name does not +** match any known object. Or return -1 if the name is ambiguous. +** +** The zType parameter specifies the type of artifact: ci, t, w, e, g. +** If zType is NULL or "" or "*" then any type of artifact will serve. +** If zType is "br" then find the first check-in of the named branch +** rather than the last. +** zType is "ci" in most use cases since we are usually searching for +** a check-in. +** +** Note that the input zTag for types "t" and "e" is the SHA1 hash of +** the ticket-change or event-change artifact, not the randomly generated +** hexadecimal identifier assigned to tickets and events. Those identifiers +** live in a separate namespace. 
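+**
+** As a hypothetical illustration of typical calls (the symbols shown here
+** are examples only and are not taken from this change):
+**
+**     rid = symbolic_name_to_rid("trunk", "ci");      /* latest check-in on trunk */
+**     rid = symbolic_name_to_rid("2014-02-01", "ci"); /* last check-in on or before that date */
+**     rid = symbolic_name_to_rid("deadbeef", "*");    /* artifact whose hash starts with deadbeef */
+**
+** In each case a return value of 0 means no match and -1 means the name
+** was ambiguous, as described above.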
+*/ +int symbolic_name_to_rid(const char *zTag, const char *zType){ + int vid; + int rid = 0; + int nTag; + int i; + int startOfBranch = 0; + + if( zType==0 || zType[0]==0 ){ + zType = "*"; + }else if( zType[0]=='b' ){ + zType = "ci"; + startOfBranch = 1; + } + if( zTag==0 || zTag[0]==0 ) return 0; + + /* special keyword: "tip" */ + if( fossil_strcmp(zTag, "tip")==0 && (zType[0]=='*' || zType[0]=='c') ){ + rid = db_int(0, + "SELECT objid" + " FROM event" + " WHERE type='ci'" + " ORDER BY event.mtime DESC" + ); + if( rid ) return rid; + } + + /* special keywords: "prev", "previous", "current", and "next" */ + if( g.localOpen && (vid=db_lget_int("checkout",0))!=0 ){ + if( fossil_strcmp(zTag, "current")==0 ){ + rid = vid; + }else if( fossil_strcmp(zTag, "prev")==0 + || fossil_strcmp(zTag, "previous")==0 ){ + rid = db_int(0, "SELECT pid FROM plink WHERE cid=%d AND isprim", vid); + }else if( fossil_strcmp(zTag, "next")==0 ){ + rid = db_int(0, "SELECT cid FROM plink WHERE pid=%d" + " ORDER BY isprim DESC, mtime DESC", vid); + } + if( rid ) return rid; + } + + /* Date and times */ + if( memcmp(zTag, "date:", 5)==0 ){ + rid = db_int(0, + "SELECT objid FROM event" + " WHERE mtime<=julianday(%Q,fromLocal()) AND type GLOB '%q'" + " ORDER BY mtime DESC LIMIT 1", + &zTag[5], zType); + return rid; + } + if( fossil_isdate(zTag) ){ + rid = db_int(0, + "SELECT objid FROM event" + " WHERE mtime<=julianday(%Q,fromLocal()) AND type GLOB '%q'" + " ORDER BY mtime DESC LIMIT 1", + zTag, zType); + if( rid) return rid; + } + + /* Deprecated date & time formats: "local:" + date-time and + ** "utc:" + date-time */ + if( memcmp(zTag, "local:", 6)==0 ){ + rid = db_int(0, + "SELECT objid FROM event" + " WHERE mtime<=julianday(%Q) AND type GLOB '%q'" + " ORDER BY mtime DESC LIMIT 1", + &zTag[6], zType); + return rid; + } + if( memcmp(zTag, "utc:", 4)==0 ){ + rid = db_int(0, + "SELECT objid FROM event" + " WHERE mtime<=julianday('%qz') AND type GLOB '%q'" + " ORDER BY mtime DESC LIMIT 1", + &zTag[4], zType); + return rid; + } + + /* "tag:" + symbolic-name */ + if( memcmp(zTag, "tag:", 4)==0 ){ + rid = db_int(0, + "SELECT event.objid, max(event.mtime)" + " FROM tag, tagxref, event" + " WHERE tag.tagname='sym-%q' " + " AND tagxref.tagid=tag.tagid AND tagxref.tagtype>0 " + " AND event.objid=tagxref.rid " + " AND event.type GLOB '%q'", + &zTag[4], zType + ); + if( startOfBranch ) rid = start_of_branch(rid,1); + return rid; + } + + /* root:TAG -> The origin of the branch */ + if( memcmp(zTag, "root:", 5)==0 ){ + rid = symbolic_name_to_rid(zTag+5, zType); + return start_of_branch(rid, 0); + } + + /* symbolic-name ":" date-time */ + nTag = strlen(zTag); + for(i=0; i0 " + " AND event.objid=tagxref.rid " + " AND event.mtime<=julianday(%Q)" + " AND event.type GLOB '%q'", + zTagBase, zDate, zType + ); + return rid; + } + + /* SHA1 hash or prefix */ + if( nTag>=4 && nTag<=UUID_SIZE && validate16(zTag, nTag) ){ + Stmt q; + char zUuid[UUID_SIZE+1]; + memcpy(zUuid, zTag, nTag+1); + canonical16(zUuid, nTag); + rid = 0; + if( zType[0]=='*' ){ + db_prepare(&q, "SELECT rid FROM blob WHERE uuid GLOB '%q*'", zUuid); + }else{ + db_prepare(&q, + "SELECT blob.rid" + " FROM blob, event" + " WHERE blob.uuid GLOB '%q*'" + " AND event.objid=blob.rid" + " AND event.type GLOB '%q'", + zUuid, zType + ); + } + if( db_step(&q)==SQLITE_ROW ){ + rid = db_column_int(&q, 0); + if( db_step(&q)==SQLITE_ROW ) rid = -1; + } + db_finalize(&q); + if( rid ) return rid; + } + + /* Symbolic name */ + rid = db_int(0, + "SELECT event.objid, max(event.mtime)" + " FROM 
tag, tagxref, event" + " WHERE tag.tagname='sym-%q' " + " AND tagxref.tagid=tag.tagid AND tagxref.tagtype>0 " + " AND event.objid=tagxref.rid " + " AND event.type GLOB '%q'", + zTag, zType + ); + if( rid>0 ){ + if( startOfBranch ) rid = start_of_branch(rid,1); + return rid; + } + + /* Undocumented: numeric tags get translated directly into the RID */ + if( memcmp(zTag, "rid:", 4)==0 ){ + zTag += 4; + for(i=0; fossil_isdigit(zTag[i]); i++){} + if( zTag[i]==0 ){ + if( strcmp(zType,"*")==0 ){ + rid = atoi(zTag); + }else{ + rid = db_int(0, + "SELECT event.objid" + " FROM event" + " WHERE event.objid=%s" + " AND event.type GLOB '%q'", zTag /*safe-for-%s*/, zType); + } + } + } + return rid; +} /* ** This routine takes a user-entered UUID which might be in mixed ** case and might only be a prefix of the full UUID and converts it ** into the full-length UUID in canonical form. @@ -34,214 +304,74 @@ ** the name as a tag. If multiple tags match, pick the latest. ** If the input name matches "tag:*" then always resolve as a tag. ** ** If the input is not a tag, then try to match it as an ISO-8601 date ** string YYYY-MM-DD HH:MM:SS and pick the nearest check-in to that date. -** If the input is of the form "date:*" or "localtime:*" or "utc:*" then -** always resolve the name as a date. +** If the input is of the form "date:*" then always resolve the name as +** a date. The forms "utc:*" and "local:" are deprecated. ** ** Return 0 on success. Return 1 if the name cannot be resolved. ** Return 2 name is ambiguous. */ -int name_to_uuid(Blob *pName, int iErrPriority){ - int rc; - int sz; - sz = blob_size(pName); - if( sz>UUID_SIZE || sz<4 || !validate16(blob_buffer(pName), sz) ){ - char *zUuid; - const char *zName = blob_str(pName); - if( memcmp(zName, "tag:", 4)==0 ){ - zName += 4; - zUuid = tag_to_uuid(zName); - }else{ - zUuid = tag_to_uuid(zName); - if( zUuid==0 ){ - zUuid = date_to_uuid(zName); - } - } - if( zUuid ){ - blob_reset(pName); - blob_append(pName, zUuid, -1); - free(zUuid); - return 0; - } - fossil_error(iErrPriority, "not a valid object name: %s", zName); - return 1; - } - blob_materialize(pName); - canonical16(blob_buffer(pName), sz); - if( sz==UUID_SIZE ){ - rc = db_int(1, "SELECT 0 FROM blob WHERE uuid=%B", pName); - if( rc ){ - fossil_error(iErrPriority, "no such artifact: %b", pName); - blob_reset(pName); - } - }else if( sz=4 ){ - Stmt q; - db_prepare(&q, "SELECT uuid FROM blob WHERE uuid GLOB '%b*'", pName); - if( db_step(&q)!=SQLITE_ROW ){ - char *zUuid; - db_finalize(&q); - zUuid = tag_to_uuid(blob_str(pName)); - if( zUuid ){ - blob_reset(pName); - blob_append(pName, zUuid, -1); - free(zUuid); - return 0; - } - fossil_error(iErrPriority, "no artifacts match the prefix \"%b\"", pName); - return 1; - } - blob_reset(pName); - blob_append(pName, db_column_text(&q, 0), db_column_bytes(&q, 0)); - if( db_step(&q)==SQLITE_ROW ){ - fossil_error(iErrPriority, - "multiple artifacts match" - ); - blob_reset(pName); - db_finalize(&q); - return 2; - } - db_finalize(&q); - rc = 0; - }else{ - rc = 0; - } - return rc; -} - -/* -** Return TRUE if the string begins with an ISO8601 date: YYYY-MM-DD. 
-*/ -static int is_date(const char *z){ - if( !isdigit(z[0]) ) return 0; - if( !isdigit(z[1]) ) return 0; - if( !isdigit(z[2]) ) return 0; - if( !isdigit(z[3]) ) return 0; - if( z[4]!='-') return 0; - if( !isdigit(z[5]) ) return 0; - if( !isdigit(z[6]) ) return 0; - if( z[7]!='-') return 0; - if( !isdigit(z[8]) ) return 0; - if( !isdigit(z[9]) ) return 0; - return 1; -} - -/* -** Convert a symbolic tag name into the UUID of a check-in that contains -** that tag. If the tag appears on multiple check-ins, return the UUID -** of the most recent check-in with the tag. -** -** If the input string is of the form: -** -** tag:date -** -** Then return the UUID of the oldest check-in with that tag that is -** not older than 'date'. -** -** An input of "tip" returns the most recent check-in. -** -** Memory to hold the returned string comes from malloc() and needs to -** be freed by the caller. -*/ -char *tag_to_uuid(const char *zTag){ - char *zUuid = - db_text(0, - "SELECT blob.uuid" - " FROM tag, tagxref, event, blob" - " WHERE tag.tagname='sym-'||%Q " - " AND tagxref.tagid=tag.tagid AND tagxref.tagtype>0 " - " AND event.objid=tagxref.rid " - " AND blob.rid=event.objid " - " ORDER BY event.mtime DESC ", - zTag - ); - if( zUuid==0 ){ - int nTag = strlen(zTag); - int i; - for(i=0; i0 " - " AND event.objid=tagxref.rid " - " AND blob.rid=event.objid " - " AND event.mtime<=julianday(%Q %s)" - " ORDER BY event.mtime DESC ", - zTagBase, zDate, (useUtc ? "" : ",'utc'") - ); - break; - } - } - if( zUuid==0 && strcmp(zTag, "tip")==0 ){ - zUuid = db_text(0, - "SELECT blob.uuid" - " FROM event, blob" - " WHERE event.type='ci'" - " AND blob.rid=event.objid" - " ORDER BY event.mtime DESC" - ); - } - } - return zUuid; -} - -/* -** Convert a date/time string into a UUID. -** -** Input forms accepted: -** -** date:DATE -** local:DATE -** utc:DATE -** -** The DATE is interpreted as localtime unless the "utc:" prefix is used -** or a "utc" string appears at the end of the DATE string. -*/ -char *date_to_uuid(const char *zDate){ - int useUtc = 0; - int n; - char *zCopy = 0; - char *zUuid; - - if( memcmp(zDate, "date:", 5)==0 ){ - zDate += 5; - }else if( memcmp(zDate, "local:", 6)==0 ){ - zDate += 6; - }else if( memcmp(zDate, "utc:", 4)==0 ){ - zDate += 4; - useUtc = 1; - } - n = strlen(zDate); - if( n<10 || !is_date(zDate) ) return 0; - if( n>4 && sqlite3_strnicmp(&zDate[n-3], "utc", 3)==0 ){ - zCopy = mprintf("%s", zDate); - zCopy[n-3] = 0; - zDate = zCopy; - n -= 3; - useUtc = 1; - } - zUuid = db_text(0, - "SELECT (SELECT uuid FROM blob WHERE rid=event.objid)" - " FROM event" - " WHERE mtime<=julianday(%Q %s) AND type='ci'" - " ORDER BY mtime DESC LIMIT 1", - zDate, useUtc ? "" : ",'utc'" - ); - free(zCopy); - return zUuid; +int name_to_uuid(Blob *pName, int iErrPriority, const char *zType){ + char *zName = blob_str(pName); + int rid = symbolic_name_to_rid(zName, zType); + if( rid<0 ){ + fossil_error(iErrPriority, "ambiguous name: %s", zName); + return 2; + }else if( rid==0 ){ + fossil_error(iErrPriority, "not found: %s", zName); + return 1; + }else{ + blob_reset(pName); + db_blob(pName, "SELECT uuid FROM blob WHERE rid=%d", rid); + return 0; + } +} + +/* +** This routine is similar to name_to_uuid() except in the form it +** takes its parameters and returns its value, and in that it does not +** treat errors as fatal. zName must be a UUID, as described for +** name_to_uuid(). zType is also as described for that function. If +** zName does not resolve, 0 is returned. 
If it is ambiguous, a +** negative value is returned. On success the rid is returned and +** pUuid (if it is not NULL) is set to the a newly-allocated string, +** the full UUID, which must eventually be free()d by the caller. +*/ +int name_to_uuid2(const char *zName, const char *zType, char **pUuid){ + int rid = symbolic_name_to_rid(zName, zType); + if((rid>0) && pUuid){ + *pUuid = db_text(NULL, "SELECT uuid FROM blob WHERE rid=%d", rid); + } + return rid; +} + + +/* +** name_collisions searches through events, blobs, and tickets for +** collisions of a given UUID based on its length on UUIDs no shorter +** than 4 characters in length. +*/ +int name_collisions(const char *zName){ + int c = 0; /* count of collisions for zName */ + int nLen; /* length of zName */ + nLen = strlen(zName); + if( nLen>=4 && nLen<=UUID_SIZE && validate16(zName, nLen) ){ + c = db_int(0, + "SELECT" + " (SELECT count(*) FROM ticket" + " WHERE tkt_uuid GLOB '%q*') +" + " (SELECT count(*) FROM tag" + " WHERE tagname GLOB 'event-%q*') +" + " (SELECT count(*) FROM blob" + " WHERE uuid GLOB '%q*');", + zName, zName, zName + ); + if( c<2 ) c = 0; + } + return c; } /* ** COMMAND: test-name-to-id ** @@ -251,68 +381,65 @@ int i; Blob name; db_must_be_within_tree(); for(i=2; i ", g.argv[i]); - if( name_to_uuid(&name, 1) ){ - printf("ERROR: %s\n", g.zErrMsg); + fossil_print("%s -> ", g.argv[i]); + if( name_to_uuid(&name, 1, "*") ){ + fossil_print("ERROR: %s\n", g.zErrMsg); fossil_error_reset(); }else{ - printf("%s\n", blob_buffer(&name)); + fossil_print("%s\n", blob_buffer(&name)); } blob_reset(&name); } } /* -** Convert a name to a rid. If the name is a small integer value then -** just use atoi() to do the conversion. If the name contains alphabetic -** characters or is not an existing rid, then use name_to_uuid then -** convert the uuid to a rid. +** Convert a name to a rid. If the name can be any of the various forms +** accepted: +** +** * SHA1 hash or prefix thereof +** * symbolic name +** * date +** * label:date +** * prev, previous +** * next +** * tip ** ** This routine is used by command-line routines to resolve command-line inputs ** into a rid. */ -int name_to_rid(const char *zName){ - int i; +int name_to_typed_rid(const char *zName, const char *zType){ int rid; - Blob name; if( zName==0 || zName[0]==0 ) return 0; - blob_init(&name, zName, -1); - if( name_to_uuid(&name, -1) ){ - blob_reset(&name); - for(i=0; zName[i] && isdigit(zName[i]); i++){} - if( zName[i]==0 ){ - rid = atoi(zName); - if( db_exists("SELECT 1 FROM blob WHERE rid=%d", rid) ){ - return rid; - } - } - fossil_error(1, "no such artifact: %s", zName); - return 0; - }else{ - rid = db_int(0, "SELECT rid FROM blob WHERE uuid=%B", &name); - blob_reset(&name); - } - return rid; + rid = symbolic_name_to_rid(zName, zType); + if( rid<0 ){ + fossil_fatal("ambiguous name: %s", zName); + }else if( rid==0 ){ + fossil_fatal("not found: %s", zName); + } + return rid; +} +int name_to_rid(const char *zName){ + return name_to_typed_rid(zName, "*"); } /* ** WEBPAGE: ambiguous ** URL: /ambiguous?name=UUID&src=WEBPAGE -** -** The UUID given by the name paramager is ambiguous. Display a page +** +** The UUID given by the name parameter is ambiguous. Display a page ** that shows all possible choices and let the user select between them. 
*/ void ambiguous_page(void){ Stmt q; - const char *zName = P("name"); + const char *zName = P("name"); const char *zSrc = P("src"); char *z; - + if( zName==0 || zName[0]==0 || zSrc==0 || zSrc[0]==0 ){ fossil_redirect_home(); } style_header("Ambiguous Artifact ID"); @

The artifact id %h(zName) is ambiguous and might @@ -322,16 +449,56 @@ canonical16(z, strlen(z)); db_prepare(&q, "SELECT uuid, rid FROM blob WHERE uuid GLOB '%q*'", z); while( db_step(&q)==SQLITE_ROW ){ const char *zUuid = db_column_text(&q, 0); int rid = db_column_int(&q, 1); - @

  • - @ %S(zUuid) - + @

  • + @ %s(zUuid) - + object_description(rid, 0, 0); + @

  • + } + db_finalize(&q); + db_prepare(&q, + " SELECT tkt_rid, tkt_uuid, title" + " FROM ticket, ticketchng" + " WHERE ticket.tkt_id = ticketchng.tkt_id" + " AND tkt_uuid GLOB '%q*'" + " GROUP BY tkt_uuid" + " ORDER BY tkt_ctime DESC", z); + while( db_step(&q)==SQLITE_ROW ){ + int rid = db_column_int(&q, 0); + const char *zUuid = db_column_text(&q, 1); + const char *zTitle = db_column_text(&q, 2); + @
  • + @ %s(zUuid) - + @

      + @ Ticket + hyperlink_to_uuid(zUuid); + @ - %s(zTitle). + @
      • + object_description(rid, 0, 0); + @
      + @

    • + } + db_finalize(&q); + db_prepare(&q, + "SELECT rid, uuid FROM" + " (SELECT tagxref.rid AS rid, substr(tagname, 7) AS uuid" + " FROM tagxref, tag WHERE tagxref.tagid = tag.tagid" + " AND tagname GLOB 'event-%q*') GROUP BY uuid", z); + while( db_step(&q)==SQLITE_ROW ){ + int rid = db_column_int(&q, 0); + const char* zUuid = db_column_text(&q, 1); + @
    • + @ %s(zUuid) - + @

      • object_description(rid, 0, 0); + @
      @

    • } @ + db_finalize(&q); style_footer(); } /* ** Convert the name in CGI parameter zParamName into a rid and return that @@ -338,33 +505,722 @@ ** rid. If the CGI parameter is missing or is not a valid artifact tag, ** return 0. If the CGI parameter is ambiguous, redirect to a page that ** shows all possibilities and do not return. */ int name_to_rid_www(const char *zParamName){ - int i, rc; int rid; const char *zName = P(zParamName); - Blob name; - +#ifdef FOSSIL_ENABLE_JSON + if(!zName && fossil_has_json()){ + zName = json_find_option_cstr(zParamName,NULL,NULL); + } +#endif if( zName==0 || zName[0]==0 ) return 0; - blob_init(&name, zName, -1); - rc = name_to_uuid(&name, -1); - if( rc==1 ){ - blob_reset(&name); - for(i=0; zName[i] && isdigit(zName[i]); i++){} - if( zName[i]==0 ){ - rid = atoi(zName); - if( db_exists("SELECT 1 FROM blob WHERE rid=%d", rid) ){ - return rid; - } - } - return 0; - }else if( rc==2 ){ + rid = symbolic_name_to_rid(zName, "*"); + if( rid<0 ){ cgi_redirectf("%s/ambiguous/%T?src=%t", g.zTop, zName, g.zPath); - return 0; - }else{ - rid = db_int(0, "SELECT rid FROM blob WHERE uuid=%B", &name); - blob_reset(&name); + rid = 0; } return rid; } +/* +** Generate a description of artifact "rid" +*/ +void whatis_rid(int rid, int verboseFlag){ + Stmt q; + int cnt; + + /* Basic information about the object. */ + db_prepare(&q, + "SELECT uuid, size, datetime(mtime,toLocal()), ipaddr" + " FROM blob, rcvfrom" + " WHERE rid=%d" + " AND rcvfrom.rcvid=blob.rcvid", + rid); + if( db_step(&q)==SQLITE_ROW ){ + if( verboseFlag ){ + fossil_print("artifact: %s (%d)\n", db_column_text(&q,0), rid); + fossil_print("size: %d bytes\n", db_column_int(&q,1)); + fossil_print("received: %s from %s\n", + db_column_text(&q, 2), + db_column_text(&q, 3)); + }else{ + fossil_print("artifact: %s\n", db_column_text(&q,0)); + fossil_print("size: %d bytes\n", db_column_int(&q,1)); + } + } + db_finalize(&q); + + /* Report any symbolic tags on this artifact */ + db_prepare(&q, + "SELECT substr(tagname,5)" + " FROM tag JOIN tagxref ON tag.tagid=tagxref.tagid" + " WHERE tagxref.rid=%d" + " AND tagname GLOB 'sym-*'" + " ORDER BY 1", + rid + ); + cnt = 0; + while( db_step(&q)==SQLITE_ROW ){ + const char *zPrefix = cnt++ ? ", " : "tags: "; + fossil_print("%s%s", zPrefix, db_column_text(&q,0)); + } + if( cnt ) fossil_print("\n"); + db_finalize(&q); + + /* Report any HIDDEN, PRIVATE, CLUSTER, or CLOSED tags on this artifact */ + db_prepare(&q, + "SELECT tagname" + " FROM tag JOIN tagxref ON tag.tagid=tagxref.tagid" + " WHERE tagxref.rid=%d" + " AND tag.tagid IN (5,6,7,9)" + " ORDER BY 1", + rid + ); + cnt = 0; + while( db_step(&q)==SQLITE_ROW ){ + const char *zPrefix = cnt++ ? 
", " : "raw-tags: "; + fossil_print("%s%s", zPrefix, db_column_text(&q,0)); + } + if( cnt ) fossil_print("\n"); + db_finalize(&q); + + /* Check for entries on the timeline that reference this object */ + db_prepare(&q, + "SELECT type, datetime(mtime,toLocal())," + " coalesce(euser,user), coalesce(ecomment,comment)" + " FROM event WHERE objid=%d", rid); + if( db_step(&q)==SQLITE_ROW ){ + const char *zType; + switch( db_column_text(&q,0)[0] ){ + case 'c': zType = "Check-in"; break; + case 'w': zType = "Wiki-edit"; break; + case 'e': zType = "Event"; break; + case 't': zType = "Ticket-change"; break; + case 'g': zType = "Tag-change"; break; + default: zType = "Unknown"; break; + } + fossil_print("type: %s by %s on %s\n", zType, db_column_text(&q,2), + db_column_text(&q, 1)); + fossil_print("comment: "); + comment_print(db_column_text(&q,3), 0, 12, -1, g.comFmtFlags); + } + db_finalize(&q); + + /* Check to see if this object is used as a file in a check-in */ + db_prepare(&q, + "SELECT filename.name, blob.uuid, datetime(event.mtime,toLocal())," + " coalesce(euser,user), coalesce(ecomment,comment)" + " FROM mlink, filename, blob, event" + " WHERE mlink.fid=%d" + " AND filename.fnid=mlink.fnid" + " AND event.objid=mlink.mid" + " AND blob.rid=mlink.mid" + " ORDER BY event.mtime DESC /*sort*/", + rid); + while( db_step(&q)==SQLITE_ROW ){ + fossil_print("file: %s\n", db_column_text(&q,0)); + fossil_print(" part of [%S] by %s on %s\n", + db_column_text(&q, 1), + db_column_text(&q, 3), + db_column_text(&q, 2)); + fossil_print(" "); + comment_print(db_column_text(&q,4), 0, 12, -1, g.comFmtFlags); + } + db_finalize(&q); + + /* Check to see if this object is used as an attachment */ + db_prepare(&q, + "SELECT attachment.filename," + " attachment.comment," + " attachment.user," + " datetime(attachment.mtime,toLocal())," + " attachment.target," + " CASE WHEN EXISTS(SELECT 1 FROM tag WHERE tagname=('tkt-'||target))" + " THEN 'ticket'" + " WHEN EXISTS(SELECT 1 FROM tag WHERE tagname=('wiki-'||target))" + " THEN 'wiki' END," + " attachment.attachid," + " (SELECT uuid FROM blob WHERE rid=attachid)" + " FROM attachment JOIN blob ON attachment.src=blob.uuid" + " WHERE blob.rid=%d", + rid + ); + while( db_step(&q)==SQLITE_ROW ){ + fossil_print("attachment: %s\n", db_column_text(&q,0)); + fossil_print(" attached to %s %s\n", + db_column_text(&q,5), db_column_text(&q,4)); + if( verboseFlag ){ + fossil_print(" via %s (%d)\n", + db_column_text(&q,7), db_column_int(&q,6)); + }else{ + fossil_print(" via %s\n", + db_column_text(&q,7)); + } + fossil_print(" by user %s on %s\n", + db_column_text(&q,2), db_column_text(&q,3)); + fossil_print(" "); + comment_print(db_column_text(&q,1), 0, 12, -1, g.comFmtFlags); + } + db_finalize(&q); +} + +/* +** COMMAND: whatis* +** Usage: %fossil whatis NAME +** +** Resolve the symbol NAME into its canonical 40-character SHA1-hash +** artifact name and provide a description of what role that artifact +** plays. +** +** Options: +** +** --type TYPE Only find artifacts of TYPE (one of: 'ci', 't', +** 'w', 'g', or 'e'). +** -v|--verbose Provide extra information (such as the RID) +*/ +void whatis_cmd(void){ + int rid; + const char *zName; + int verboseFlag; + int i; + const char *zType = 0; + db_find_and_open_repository(0,0); + verboseFlag = find_option("verbose","v",0)!=0; + zType = find_option("type",0,1); + + /* We should be done with options.. 
*/ + verify_all_options(); + + if( g.argc<3 ) usage("whatis NAME ..."); + for(i=2; i2 ) fossil_print("%.79c\n",'-'); + rid = symbolic_name_to_rid(zName, zType); + if( rid<0 ){ + Stmt q; + int cnt = 0; + fossil_print("name: %s (ambiguous)\n", zName); + db_prepare(&q, + "SELECT rid FROM blob WHERE uuid>=lower(%Q) AND uuid<(lower(%Q)||'z')", + zName, zName + ); + while( db_step(&q)==SQLITE_ROW ){ + if( cnt++ ) fossil_print("%12s---- meaning #%d ----\n", " ", cnt); + whatis_rid(db_column_int(&q, 0), verboseFlag); + } + db_finalize(&q); + }else if( rid==0 ){ + /* 0123456789 12 */ + fossil_print("unknown: %s\n", zName); + }else{ + fossil_print("name: %s\n", zName); + whatis_rid(rid, verboseFlag); + } + } +} + +/* +** COMMAND: test-whatis-all +** Usage: %fossil test-whatis-all +** +** Show "whatis" information about every artifact in the repository +*/ +void test_whatis_all_cmd(void){ + Stmt q; + int cnt = 0; + db_find_and_open_repository(0,0); + db_prepare(&q, "SELECT rid FROM blob ORDER BY rid"); + while( db_step(&q)==SQLITE_ROW ){ + if( cnt++ ) fossil_print("%.79c\n", '-'); + whatis_rid(db_column_int(&q,0), 1); + } + db_finalize(&q); +} + + +/* +** COMMAND: test-ambiguous +** Usage: %fossil test-ambiguous [--minsize N] +** +** Show a list of ambiguous SHA1-hash abbreviations of N characters or +** more where N defaults to 4. Change N to a different value using +** the "--minsize N" command-line option. +*/ +void test_ambiguous_cmd(void){ + Stmt q, ins; + int i; + int minSize = 4; + const char *zMinsize; + char zPrev[100]; + db_find_and_open_repository(0,0); + zMinsize = find_option("minsize",0,1); + if( zMinsize && atoi(zMinsize)>0 ) minSize = atoi(zMinsize); + db_multi_exec("CREATE TEMP TABLE dups(uuid, cnt)"); + db_prepare(&ins,"INSERT INTO dups(uuid) VALUES(substr(:uuid,1,:cnt))"); + db_prepare(&q, + "SELECT uuid FROM blob " + "UNION " + "SELECT substr(tagname,7) FROM tag WHERE tagname GLOB 'event-*' " + "UNION " + "SELECT tkt_uuid FROM ticket " + "ORDER BY 1" + ); + zPrev[0] = 0; + while( db_step(&q)==SQLITE_ROW ){ + const char *zUuid = db_column_text(&q, 0); + for(i=0; zUuid[i]==zPrev[i] && zUuid[i]!=0; i++){} + if( i>=minSize ){ + db_bind_int(&ins, ":cnt", i); + db_bind_text(&ins, ":uuid", zUuid); + db_step(&ins); + db_reset(&ins); + } + sqlite3_snprintf(sizeof(zPrev), zPrev, "%s", zUuid); + } + db_finalize(&ins); + db_finalize(&q); + db_prepare(&q, "SELECT uuid FROM dups ORDER BY length(uuid) DESC, uuid"); + while( db_step(&q)==SQLITE_ROW ){ + fossil_print("%s\n", db_column_text(&q, 0)); + } + db_finalize(&q); +} + +/* +** Schema for the description table +*/ +static const char zDescTab[] = +@ CREATE TEMP TABLE IF NOT EXISTS description( +@ rid INTEGER PRIMARY KEY, -- RID of the object +@ uuid TEXT, -- SHA1 hash of the object +@ ctime DATETIME, -- Time of creation +@ isPrivate BOOLEAN DEFAULT 0, -- True for unpublished artifacts +@ type TEXT, -- file, checkin, wiki, ticket, etc. +@ summary TEXT, -- Summary comment for the object +@ detail TEXT -- File name, check-in comment, etc +@ ); +; + +/* +** Create the description table if it does not already exists. +** Populate fields of this table with descriptions for all artifacts +** whose RID matches the SQL expression in zWhere. 
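+**
+** For illustration, the callers added later in this file build the zWhere
+** fragment roughly like this (the values shown are examples only):
+**
+**     describe_artifacts("BETWEEN 100 AND 199");  /* a range of RIDs */
+**     describe_artifacts("IN private");           /* unpublished artifacts only */
+**
+** and then read the populated "description" table back out.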
+*/ +void describe_artifacts(const char *zWhere){ + db_multi_exec("%s", zDescTab/*safe-for-%s*/); + + /* Describe check-ins */ + db_multi_exec( + "INSERT OR IGNORE INTO description(rid,uuid,ctime,type,summary)\n" + "SELECT blob.rid, blob.uuid, event.mtime, 'checkin',\n" + " 'check-in on ' || strftime('%%Y-%%m-%%d %%H:%%M',event.mtime)\n" + " FROM event, blob\n" + " WHERE (event.objid %s) AND event.type='ci'\n" + " AND event.objid=blob.rid;", + zWhere /*safe-for-%s*/ + ); + + /* Describe files */ + db_multi_exec( + "INSERT OR IGNORE INTO description(rid,uuid,ctime,type,summary)\n" + "SELECT blob.rid, blob.uuid, event.mtime, 'file', 'file '||filename.name\n" + " FROM mlink, blob, event, filename\n" + " WHERE (mlink.fid %s)\n" + " AND mlink.mid=event.objid\n" + " AND filename.fnid=mlink.fnid\n" + " AND mlink.fid=blob.rid;", + zWhere /*safe-for-%s*/ + ); + + /* Describe tags */ + db_multi_exec( + "INSERT OR IGNORE INTO description(rid,uuid,ctime,type,summary)\n" + "SELECT blob.rid, blob.uuid, tagxref.mtime, 'tag',\n" + " 'tag '||substr((SELECT uuid FROM blob WHERE rid=tagxref.rid),1,16)\n" + " FROM tagxref, blob\n" + " WHERE (tagxref.srcid %s) AND tagxref.srcid!=tagxref.rid\n" + " AND tagxref.srcid=blob.rid;", + zWhere /*safe-for-%s*/ + ); + + /* Cluster artifacts */ + db_multi_exec( + "INSERT OR IGNORE INTO description(rid,uuid,ctime,type,summary)\n" + "SELECT blob.rid, blob.uuid, rcvfrom.mtime, 'cluster', 'cluster'\n" + " FROM tagxref, blob, rcvfrom\n" + " WHERE (tagxref.rid %s)\n" + " AND tagxref.tagid=(SELECT tagid FROM tag WHERE tagname='cluster')\n" + " AND blob.rid=tagxref.rid" + " AND rcvfrom.rcvid=blob.rcvid;", + zWhere /*safe-for-%s*/ + ); + + /* Ticket change artifacts */ + db_multi_exec( + "INSERT OR IGNORE INTO description(rid,uuid,ctime,type,summary)\n" + "SELECT blob.rid, blob.uuid, tagxref.mtime, 'ticket',\n" + " 'ticket '||substr(tag.tagname,5,21)\n" + " FROM tagxref, tag, blob\n" + " WHERE (tagxref.rid %s)\n" + " AND tag.tagid=tagxref.tagid\n" + " AND tag.tagname GLOB 'tkt-*'" + " AND blob.rid=tagxref.rid;", + zWhere /*safe-for-%s*/ + ); + + /* Wiki edit artifacts */ + db_multi_exec( + "INSERT OR IGNORE INTO description(rid,uuid,ctime,type,summary)\n" + "SELECT blob.rid, blob.uuid, tagxref.mtime, 'wiki',\n" + " printf('wiki \"%%s\"',substr(tag.tagname,6))\n" + " FROM tagxref, tag, blob\n" + " WHERE (tagxref.rid %s)\n" + " AND tag.tagid=tagxref.tagid\n" + " AND tag.tagname GLOB 'wiki-*'" + " AND blob.rid=tagxref.rid;", + zWhere /*safe-for-%s*/ + ); + + /* Event edit artifacts */ + db_multi_exec( + "INSERT OR IGNORE INTO description(rid,uuid,ctime,type,summary)\n" + "SELECT blob.rid, blob.uuid, tagxref.mtime, 'event',\n" + " 'event '||substr(tag.tagname,7)\n" + " FROM tagxref, tag, blob\n" + " WHERE (tagxref.rid %s)\n" + " AND tag.tagid=tagxref.tagid\n" + " AND tag.tagname GLOB 'event-*'" + " AND blob.rid=tagxref.rid;", + zWhere /*safe-for-%s*/ + ); + + /* Attachments */ + db_multi_exec( + "INSERT OR IGNORE INTO description(rid,uuid,ctime,type,summary)\n" + "SELECT blob.rid, blob.uuid, attachment.mtime, 'attach-control',\n" + " 'attachment-control for '||attachment.filename\n" + " FROM attachment, blob\n" + " WHERE (attachment.attachid %s)\n" + " AND blob.rid=attachment.attachid", + zWhere /*safe-for-%s*/ + ); + db_multi_exec( + "INSERT OR IGNORE INTO description(rid,uuid,ctime,type,summary)\n" + "SELECT blob.rid, blob.uuid, attachment.mtime, 'attachment',\n" + " 'attachment '||attachment.filename\n" + " FROM attachment, blob\n" + " WHERE (blob.rid %s)\n" + " AND blob.rid NOT 
IN (SELECT rid FROM description)\n" + " AND blob.uuid=attachment.src", + zWhere /*safe-for-%s*/ + ); + + /* Everything else */ + db_multi_exec( + "INSERT OR IGNORE INTO description(rid,uuid,type,summary)\n" + "SELECT blob.rid, blob.uuid," + " CASE WHEN blob.size<0 THEN 'phantom' ELSE '' END,\n" + " 'unknown'\n" + " FROM blob WHERE (blob.rid %s);", + zWhere /*safe-for-%s*/ + ); + + /* Mark private elements */ + db_multi_exec( + "UPDATE description SET isPrivate=1 WHERE rid IN private" + ); +} + +/* +** Print the content of the description table on stdout +*/ +int describe_artifacts_to_stdout(const char *zWhere, const char *zLabel){ + Stmt q; + int cnt = 0; + describe_artifacts(zWhere); + db_prepare(&q, + "SELECT uuid, summary, isPrivate\n" + " FROM description\n" + " ORDER BY ctime, type;" + ); + while( db_step(&q)==SQLITE_ROW ){ + if( zLabel ){ + fossil_print("%s\n", zLabel); + zLabel = 0; + } + fossil_print(" %.16s %s", db_column_text(&q,0), db_column_text(&q,1)); + if( db_column_int(&q,2) ) fossil_print(" (unpublished)"); + fossil_print("\n"); + cnt++; + } + db_finalize(&q); + db_multi_exec("DELETE FROM description;"); + return cnt; +} + +/* +** COMMAND: test-describe-artifacts +** +** Usage: %fossil test-describe-artifacts [--from S] [--count N] +** +** Display a one-line description of every artifact. +*/ +void test_describe_artifacts_cmd(void){ + int iFrom = 0; + int iCnt = 1000000; + const char *z; + char *zRange; + db_find_and_open_repository(0,0); + z = find_option("from",0,1); + if( z ) iFrom = atoi(z); + z = find_option("count",0,1); + if( z ) iCnt = atoi(z); + zRange = mprintf("BETWEEN %d AND %d", iFrom, iFrom+iCnt-1); + describe_artifacts_to_stdout(zRange, 0); +} + +/* +** WEBPAGE: bloblist +** +** Return a page showing all artifacts in the repository. Query parameters: +** +** n=N Show N artifacts +** s=S Start with artifact number S +** unpub Show only unpublished artifacts +*/ +void bloblist_page(void){ + Stmt q; + int s = atoi(PD("s","0")); + int n = atoi(PD("n","5000")); + int mx = db_int(0, "SELECT max(rid) FROM blob"); + int unpubOnly = PB("unpub"); + char *zRange; + + login_check_credentials(); + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } + style_header("List Of Artifacts"); + style_submenu_element("250 Largest", 0, "bigbloblist"); + if( !unpubOnly && mx>n && P("s")==0 ){ + int i; + @

      Select a range of artifacts to view:

      + @
        + for(i=1; i<=mx; i+=n){ + @
      • %z(href("%R/bloblist?s=%d&n=%d",i,n)) + @ %d(i)..%d(i+n-1 + } + @
      + style_footer(); + return; + } + if( !unpubOnly && mx>n ){ + style_submenu_element("Index", "Index", "bloblist"); + } + if( unpubOnly ){ + zRange = mprintf("IN private"); + }else{ + zRange = mprintf("BETWEEN %d AND %d", s, s+n-1); + } + describe_artifacts(zRange); + fossil_free(zRange); + db_prepare(&q, + "SELECT rid, uuid, summary, isPrivate FROM description ORDER BY rid" + ); + @
      -    @ %s(zCaptcha)
      +    @ 
      +    @ %h(zCaptcha)
           @ 
      if( bAutoCaptcha ) { @ - } - @ - free(zCaptcha); - } - if( g.zLogin ){ - @

      - @

      To log off the system (and delete your login cookie) - @ press the following button:
      - @

      + @ onclick="gebi('u').value='anonymous'; gebi('p').value='%s(zDecoded)';" /> + } + @
      + free(zCaptcha); } @ - if( g.okPassword ){ - @

      - @

      To change your password, enter your old password and your - @ new password twice below then press the "Change Password" - @ button.

      - @
      + if( g.perm.Password ){ + @
      + @

      Change Password for user %h(g.zLogin):

      + form_begin(0, "%R/login"); @ - @ - @ - @ - @ - @ - @ + @ + @ + @ + @ + @ + @ @ - @ + @ @
      Old Password:
      New Password:
      Repeat New Password:
      @
      } style_footer(); } +/* +** Attempt to find login credentials for user zLogin on a peer repository +** with project code zCode. Transfer those credentials to the local +** repository. +** +** Return true if a transfer was made and false if not. +*/ +static int login_transfer_credentials( + const char *zLogin, /* Login we are looking for */ + const char *zCode, /* Project code of peer repository */ + const char *zHash, /* HASH from login cookie HASH/CODE/LOGIN */ + const char *zRemoteAddr /* Request comes from here */ +){ + sqlite3 *pOther = 0; /* The other repository */ + sqlite3_stmt *pStmt; /* Query against the other repository */ + char *zSQL; /* SQL of the query against other repo */ + char *zOtherRepo; /* Filename of the other repository */ + int rc; /* Result code from SQLite library functions */ + int nXfer = 0; /* Number of credentials transferred */ + + zOtherRepo = db_text(0, + "SELECT value FROM config WHERE name='peer-repo-%q'", + zCode + ); + if( zOtherRepo==0 ) return 0; /* No such peer repository */ + + rc = sqlite3_open_v2( + zOtherRepo, &pOther, + SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, + g.zVfsName + ); + if( rc==SQLITE_OK ){ + sqlite3_create_function(pOther,"now",0,SQLITE_UTF8,0,db_now_function,0,0); + sqlite3_create_function(pOther, "constant_time_cmp", 2, SQLITE_UTF8, 0, + constant_time_cmp_function, 0, 0); + sqlite3_busy_timeout(pOther, 5000); + zSQL = mprintf( + "SELECT cexpire FROM user" + " WHERE login=%Q" + " AND ipaddr=%Q" + " AND length(cap)>0" + " AND length(pw)>0" + " AND cexpire>julianday('now')" + " AND constant_time_cmp(cookie,%Q)=0", + zLogin, zRemoteAddr, zHash + ); + pStmt = 0; + rc = sqlite3_prepare_v2(pOther, zSQL, -1, &pStmt, 0); + if( rc==SQLITE_OK && sqlite3_step(pStmt)==SQLITE_ROW ){ + db_multi_exec( + "UPDATE user SET cookie=%Q, ipaddr=%Q, cexpire=%.17g" + " WHERE login=%Q", + zHash, zRemoteAddr, + sqlite3_column_double(pStmt, 0), zLogin + ); + nXfer++; + } + sqlite3_finalize(pStmt); + } + sqlite3_close(pOther); + fossil_free(zOtherRepo); + return nXfer; +} + +/* +** Return TRUE if zLogin is one of the special usernames +*/ +int login_is_special(const char *zLogin){ + if( fossil_strcmp(zLogin, "anonymous")==0 ) return 1; + if( fossil_strcmp(zLogin, "nobody")==0 ) return 1; + if( fossil_strcmp(zLogin, "developer")==0 ) return 1; + if( fossil_strcmp(zLogin, "reader")==0 ) return 1; + return 0; +} + +/* +** Lookup the uid for a non-built-in user with zLogin and zCookie and +** zRemoteAddr. Return 0 if not found. +** +** Note that this only searches for logged-in entries with matching +** zCookie (db: user.cookie) and zRemoteAddr (db: user.ipaddr) +** entries. +*/ +static int login_find_user( + const char *zLogin, /* User name */ + const char *zCookie, /* Login cookie value */ + const char *zRemoteAddr /* Abbreviated IP address for valid login */ +){ + int uid; + if( login_is_special(zLogin) ) return 0; + uid = db_int(0, + "SELECT uid FROM user" + " WHERE login=%Q" + " AND ipaddr=%Q" + " AND cexpire>julianday('now')" + " AND length(cap)>0" + " AND length(pw)>0" + " AND constant_time_cmp(cookie,%Q)=0", + zLogin, zRemoteAddr, zCookie + ); + return uid; +} +/* +** Return true if it is appropriate to redirect login requests to HTTPS. 
+** +** Redirect to https is appropriate if all of the above are true: +** (1) The redirect-to-https flag is set +** (2) The current connection is http, not https or ssh +** (3) The sslNotAvailable flag is clear +*/ +int login_wants_https_redirect(void){ + if( g.sslNotAvailable ) return 0; + if( db_get_boolean("redirect-to-https",0)==0 ) return 0; + if( P("HTTPS")!=0 ) return 0; + return 1; +} /* ** This routine examines the login cookie to see if it exists and -** and is valid. If the login cookie checks out, it then sets -** g.zUserUuid appropriately. +** is valid. If the login cookie checks out, it then sets global +** variables appropriately. +** +** g.userUid Database USER.UID value. Might be -1 for "nobody" +** g.zLogin Database USER.LOGIN value. NULL for user "nobody" +** g.perm Permissions granted to this user +** g.anon Permissions that would be available to anonymous +** g.isHuman True if the user is human, not a spider or robot ** */ void login_check_credentials(void){ int uid = 0; /* User id */ const char *zCookie; /* Text of the login cookie */ - const char *zRemoteAddr; /* IP address of the requestor */ + const char *zIpAddr; /* Raw IP address of the requestor */ + char *zRemoteAddr; /* Abbreviated IP address of the requestor */ const char *zCap = 0; /* Capability string */ + const char *zPublicPages = 0; /* GLOB patterns of public pages */ + const char *zLogin = 0; /* Login user for credentials */ /* Only run this check once. */ if( g.userUid!=0 ) return; + sqlite3_create_function(g.db, "constant_time_cmp", 2, SQLITE_UTF8, 0, + constant_time_cmp_function, 0, 0); /* If the HTTP connection is coming over 127.0.0.1 and if - ** local login is disabled and if we are using HTTP and not HTTPS, + ** local login is disabled and if we are using HTTP and not HTTPS, ** then there is no need to check user credentials. ** + ** This feature allows the "fossil ui" command to give the user + ** full access rights without having to log in. */ - zRemoteAddr = PD("REMOTE_ADDR","nil"); - if( strcmp(zRemoteAddr, "127.0.0.1")==0 + zRemoteAddr = ipPrefix(zIpAddr = PD("REMOTE_ADDR","nil")); + if( ( fossil_strcmp(zIpAddr, "127.0.0.1")==0 || + (g.fSshClient & CGI_SSH_CLIENT)!=0 ) + && g.useLocalauth && db_get_int("localauth",0)==0 && P("HTTPS")==0 ){ - uid = db_int(0, "SELECT uid FROM user WHERE cap LIKE '%%s%%'"); + if( g.localOpen ) zLogin = db_lget("default-user",0); + if( zLogin!=0 ){ + uid = db_int(0, "SELECT uid FROM user WHERE login=%Q", zLogin); + }else{ + uid = db_int(0, "SELECT uid FROM user WHERE cap LIKE '%%s%%'"); + } g.zLogin = db_text("?", "SELECT login FROM user WHERE uid=%d", uid); - zCap = "s"; + zCap = "sx"; g.noPswd = 1; - strcpy(g.zCsrfToken, "localhost"); + g.isHuman = 1; + sqlite3_snprintf(sizeof(g.zCsrfToken), g.zCsrfToken, "localhost"); } /* Check the login cookie to see if it matches a known valid user. */ if( uid==0 && (zCookie = P(login_cookie_name()))!=0 ){ - if( isdigit(zCookie[0]) ){ - /* Cookies of the form "uid/randomness". There must be a - ** corresponding entry in the user table. */ - uid = db_int(0, - "SELECT uid FROM user" - " WHERE uid=%d" - " AND cookie=%Q" - " AND ipaddr=%Q" - " AND cexpire>julianday('now')", - atoi(zCookie), zCookie, zRemoteAddr - ); - }else if( memcmp(zCookie,"anon/",5)==0 ){ - /* Cookies of the form "anon/TIME/HASH". The TIME must not be - ** too old and the sha1 hash of TIME+IPADDR+SECRET must match HASH. 
+ /* Parse the cookie value up into HASH/ARG/USER */ + char *zHash = fossil_strdup(zCookie); + char *zArg = 0; + char *zUser = 0; + int i, c; + for(i=0; (c = zHash[i])!=0; i++){ + if( c=='/' ){ + zHash[i++] = 0; + if( zArg==0 ){ + zArg = &zHash[i]; + }else{ + zUser = &zHash[i]; + break; + } + } + } + if( zUser==0 ){ + /* Invalid cookie */ + }else if( fossil_strcmp(zUser, "anonymous")==0 ){ + /* Cookies of the form "HASH/TIME/anonymous". The TIME must not be + ** too old and the sha1 hash of TIME/IPADDR/SECRET must match HASH. ** SECRET is the "captcha-secret" value in the repository. */ - double rTime; - int i; + double rTime = atof(zArg); Blob b; - rTime = atof(&zCookie[5]); - for(i=5; zCookie[i] && zCookie[i]!='/'; i++){} - blob_init(&b, &zCookie[5], i-5); - if( zCookie[i]=='/' ){ i++; } - blob_append(&b, "/", 1); - blob_appendf(&b, "%z/%s", ipPrefix(zRemoteAddr), - db_get("captcha-secret","")); + blob_zero(&b); + blob_appendf(&b, "%s/%s/%s", + zArg, zRemoteAddr, db_get("captcha-secret","")); sha1sum_blob(&b, &b); - uid = db_int(0, - "SELECT uid FROM user WHERE login='anonymous'" - " AND length(cap)>0" - " AND length(pw)>0" - " AND %f+0.25>julianday('now')" - " AND %Q=%Q", - rTime, &zCookie[i], blob_buffer(&b) - ); + if( fossil_strcmp(zHash, blob_str(&b))==0 ){ + uid = db_int(0, + "SELECT uid FROM user WHERE login='anonymous'" + " AND length(cap)>0" + " AND length(pw)>0" + " AND %.17g+0.25>julianday('now')", + rTime + ); + } blob_reset(&b); + }else{ + /* Cookies of the form "HASH/CODE/USER". Search first in the + ** local user table, then the user table for project CODE if we + ** are part of a login-group. + */ + uid = login_find_user(zUser, zHash, zRemoteAddr); + if( uid==0 && login_transfer_credentials(zUser,zArg,zHash,zRemoteAddr) ){ + uid = login_find_user(zUser, zHash, zRemoteAddr); + if( uid ) record_login_attempt(zUser, zIpAddr, 1); + } } - sqlite3_snprintf(sizeof(g.zCsrfToken), g.zCsrfToken, "%.10s", zCookie); + sqlite3_snprintf(sizeof(g.zCsrfToken), g.zCsrfToken, "%.10s", zHash); } /* If no user found and the REMOTE_USER environment variable is set, - ** the accept the value of REMOTE_USER as the user. + ** then accept the value of REMOTE_USER as the user. */ if( uid==0 ){ const char *zRemoteUser = P("REMOTE_USER"); if( zRemoteUser && db_get_boolean("remote_user_ok",0) ){ uid = db_int(0, "SELECT uid FROM user WHERE login=%Q" @@ -410,11 +958,11 @@ if( uid==0 ){ /* If there is no user "nobody", then make one up - with no privileges */ uid = -1; zCap = ""; } - strcpy(g.zCsrfToken, "none"); + sqlite3_snprintf(sizeof(g.zCsrfToken), g.zCsrfToken, "none"); } /* At this point, we know that uid!=0. Find the privileges associated ** with user uid. */ @@ -437,191 +985,691 @@ /* Set the global variables recording the userid and login. The ** "nobody" user is a special case in that g.zLogin==0. */ g.userUid = uid; - if( g.zLogin && strcmp(g.zLogin,"nobody")==0 ){ + if( fossil_strcmp(g.zLogin,"nobody")==0 ){ g.zLogin = 0; } + g.isHuman = g.zLogin==0 ? isHuman(P("HTTP_USER_AGENT")) : 1; /* Set the capabilities */ - login_set_capabilities(zCap); + login_replace_capabilities(zCap, 0); login_set_anon_nobody_capabilities(); + + /* The auto-hyperlink setting allows hyperlinks to be displayed for users + ** who do not have the "h" permission as long as their UserAgent string + ** makes it appear that they are human. Check to see if auto-hyperlink is + ** enabled for this repository and make appropriate adjustments to the + ** permission flags if it is. 
+ */ + if( zCap[0] + && !g.perm.Hyperlink + && g.isHuman + && db_get_boolean("auto-hyperlink",1) + ){ + g.perm.Hyperlink = 1; + g.javascriptHyperlink = 1; + } + + /* If the public-pages glob pattern is defined and REQUEST_URI matches + ** one of the globs in public-pages, then also add in all default-perms + ** permissions. + */ + zPublicPages = db_get("public-pages",0); + if( zPublicPages!=0 ){ + Glob *pGlob = glob_create(zPublicPages); + if( glob_match(pGlob, PD("REQUEST_URI","no-match")) ){ + login_set_capabilities(db_get("default-perms","u"), 0); + } + glob_free(pGlob); + } } /* -** Add the default privileges of users "nobody" and "anonymous" as appropriate -** for the user g.zLogin. +** Memory of settings +*/ +static int login_anon_once = 1; + +/* +** Add to g.perm the default privileges of users "nobody" and/or "anonymous" +** as appropriate for the user g.zLogin. +** +** This routine also sets up g.anon to be either a copy of g.perm for +** all logged in uses, or the privileges that would be available to "anonymous" +** if g.zLogin==0 (meaning that the user is "nobody"). */ void login_set_anon_nobody_capabilities(void){ - static int once = 1; - if( g.zLogin && once ){ + if( login_anon_once ){ const char *zCap; - /* All logged-in users inherit privileges from "nobody" */ + /* All users get privileges from "nobody" */ zCap = db_text("", "SELECT cap FROM user WHERE login = 'nobody'"); - login_set_capabilities(zCap); - if( strcmp(g.zLogin, "anonymous")!=0 ){ + login_set_capabilities(zCap, 0); + zCap = db_text("", "SELECT cap FROM user WHERE login = 'anonymous'"); + if( g.zLogin && fossil_strcmp(g.zLogin, "nobody")!=0 ){ /* All logged-in users inherit privileges from "anonymous" */ - zCap = db_text("", "SELECT cap FROM user WHERE login = 'anonymous'"); - login_set_capabilities(zCap); + login_set_capabilities(zCap, 0); + g.anon = g.perm; + }else{ + /* Record the privileges of anonymous in g.anon */ + g.anon = g.perm; + login_set_capabilities(zCap, LOGIN_ANON); } - once = 0; + login_anon_once = 0; } } /* -** Set the global capability flags based on a capability string. +** Flags passed into the 2nd argument of login_set/replace_capabilities(). +*/ +#if INTERFACE +#define LOGIN_IGNORE_UV 0x01 /* Ignore "u" and "v" */ +#define LOGIN_ANON 0x02 /* Use g.anon instead of g.perm */ +#endif + +/* +** Adds all capability flags in zCap to g.perm or g.anon. */ -void login_set_capabilities(const char *zCap){ - static char *zDev = 0; - static char *zUser = 0; +void login_set_capabilities(const char *zCap, unsigned flags){ int i; + FossilUserPerms *p = (flags & LOGIN_ANON) ? 
&g.anon : &g.perm; + if(NULL==zCap){ + return; + } for(i=0; zCap[i]; i++){ switch( zCap[i] ){ - case 's': g.okSetup = 1; /* Fall thru into Admin */ - case 'a': g.okAdmin = g.okRdTkt = g.okWrTkt = g.okZip = - g.okRdWiki = g.okWrWiki = g.okNewWiki = - g.okApndWiki = g.okHistory = g.okClone = - g.okNewTkt = g.okPassword = g.okRdAddr = - g.okTktFmt = g.okAttach = g.okApndTkt = 1; - /* Fall thru into Read/Write */ - case 'i': g.okRead = g.okWrite = 1; break; - case 'o': g.okRead = 1; break; - case 'z': g.okZip = 1; break; - - case 'd': g.okDelete = 1; break; - case 'h': g.okHistory = 1; break; - case 'g': g.okClone = 1; break; - case 'p': g.okPassword = 1; break; - - case 'j': g.okRdWiki = 1; break; - case 'k': g.okWrWiki = g.okRdWiki = g.okApndWiki =1; break; - case 'm': g.okApndWiki = 1; break; - case 'f': g.okNewWiki = 1; break; - - case 'e': g.okRdAddr = 1; break; - case 'r': g.okRdTkt = 1; break; - case 'n': g.okNewTkt = 1; break; - case 'w': g.okWrTkt = g.okRdTkt = g.okNewTkt = - g.okApndTkt = 1; break; - case 'c': g.okApndTkt = 1; break; - case 't': g.okTktFmt = 1; break; - case 'b': g.okAttach = 1; break; - - /* The "u" privileges is a little different. It recursively + case 's': p->Setup = 1; /* Fall thru into Admin */ + case 'a': p->Admin = p->RdTkt = p->WrTkt = p->Zip = + p->RdWiki = p->WrWiki = p->NewWiki = + p->ApndWiki = p->Hyperlink = p->Clone = + p->NewTkt = p->Password = p->RdAddr = + p->TktFmt = p->Attach = p->ApndTkt = + p->ModWiki = p->ModTkt = p->Delete = + p->Private = 1; + /* Fall thru into Read/Write */ + case 'i': p->Read = p->Write = 1; break; + case 'o': p->Read = 1; break; + case 'z': p->Zip = 1; break; + + case 'd': p->Delete = 1; break; + case 'h': p->Hyperlink = 1; break; + case 'g': p->Clone = 1; break; + case 'p': p->Password = 1; break; + + case 'j': p->RdWiki = 1; break; + case 'k': p->WrWiki = p->RdWiki = p->ApndWiki =1; break; + case 'm': p->ApndWiki = 1; break; + case 'f': p->NewWiki = 1; break; + case 'l': p->ModWiki = 1; break; + + case 'e': p->RdAddr = 1; break; + case 'r': p->RdTkt = 1; break; + case 'n': p->NewTkt = 1; break; + case 'w': p->WrTkt = p->RdTkt = p->NewTkt = + p->ApndTkt = 1; break; + case 'c': p->ApndTkt = 1; break; + case 'q': p->ModTkt = 1; break; + case 't': p->TktFmt = 1; break; + case 'b': p->Attach = 1; break; + case 'x': p->Private = 1; break; + + /* The "u" privileges is a little different. It recursively ** inherits all privileges of the user named "reader" */ case 'u': { - if( zUser==0 ){ + if( (flags & LOGIN_IGNORE_UV)==0 ){ + const char *zUser; zUser = db_text("", "SELECT cap FROM user WHERE login='reader'"); - login_set_capabilities(zUser); + login_set_capabilities(zUser, flags | LOGIN_IGNORE_UV); } break; } - /* The "v" privileges is a little different. It recursively + /* The "v" privileges is a little different. It recursively ** inherits all privileges of the user named "developer" */ case 'v': { - if( zDev==0 ){ + if( (flags & LOGIN_IGNORE_UV)==0 ){ + const char *zDev; zDev = db_text("", "SELECT cap FROM user WHERE login='developer'"); - login_set_capabilities(zDev); + login_set_capabilities(zDev, flags | LOGIN_IGNORE_UV); } break; } } } } + +/* +** Zeroes out g.perm and calls login_set_capabilities(zCap,flags). +*/ +void login_replace_capabilities(const char *zCap, unsigned flags){ + memset(&g.perm, 0, sizeof(g.perm)); + login_set_capabilities(zCap, flags); + login_anon_once = 1; +} /* ** If the current login lacks any of the capabilities listed in ** the input, then return 0. 
If all capabilities are present, then ** return 1. */ -int login_has_capability(const char *zCap, int nCap){ +int login_has_capability(const char *zCap, int nCap, u32 flgs){ int i; int rc = 1; + FossilUserPerms *p = (flgs & LOGIN_ANON) ? &g.anon : &g.perm; if( nCap<0 ) nCap = strlen(zCap); for(i=0; iAdmin; break; + case 'b': rc = p->Attach; break; + case 'c': rc = p->ApndTkt; break; + case 'd': rc = p->Delete; break; + case 'e': rc = p->RdAddr; break; + case 'f': rc = p->NewWiki; break; + case 'g': rc = p->Clone; break; + case 'h': rc = p->Hyperlink; break; + case 'i': rc = p->Write; break; + case 'j': rc = p->RdWiki; break; + case 'k': rc = p->WrWiki; break; + case 'l': rc = p->ModWiki; break; + case 'm': rc = p->ApndWiki; break; + case 'n': rc = p->NewTkt; break; + case 'o': rc = p->Read; break; + case 'p': rc = p->Password; break; + case 'q': rc = p->ModTkt; break; + case 'r': rc = p->RdTkt; break; + case 's': rc = p->Setup; break; + case 't': rc = p->TktFmt; break; /* case 'u': READER */ /* case 'v': DEVELOPER */ - case 'w': rc = g.okWrTkt; break; - /* case 'x': */ + case 'w': rc = p->WrTkt; break; + case 'x': rc = p->Private; break; /* case 'y': */ - case 'z': rc = g.okZip; break; - default: rc = 0; break; + case 'z': rc = p->Zip; break; + default: rc = 0; break; } } return rc; } + +/* +** Change the login to zUser. +*/ +void login_as_user(const char *zUser){ + char *zCap = ""; /* New capabilities */ + + /* Turn off all capabilities from prior logins */ + memset( &g.perm, 0, sizeof(g.perm) ); + + /* Set the global variables recording the userid and login. The + ** "nobody" user is a special case in that g.zLogin==0. + */ + g.userUid = db_int(0, "SELECT uid FROM user WHERE login=%Q", zUser); + if( g.userUid==0 ){ + zUser = 0; + g.userUid = db_int(0, "SELECT uid FROM user WHERE login='nobody'"); + } + if( g.userUid ){ + zCap = db_text("", "SELECT cap FROM user WHERE uid=%d", g.userUid); + } + if( fossil_strcmp(zUser,"nobody")==0 ) zUser = 0; + g.zLogin = fossil_strdup(zUser); + + /* Set the capabilities */ + login_set_capabilities(zCap, 0); + login_anon_once = 1; + login_set_anon_nobody_capabilities(); +} + +/* +** Return true if the user is "nobody" +*/ +int login_is_nobody(void){ + return g.zLogin==0 || g.zLogin[0]==0 || fossil_strcmp(g.zLogin,"nobody")==0; +} + +/* +** Return the login name. If no login name is specified, return "nobody". +*/ +const char *login_name(void){ + return (g.zLogin && g.zLogin[0]) ? g.zLogin : "nobody"; +} /* ** Call this routine when the credential check fails. It causes ** a redirect to the "login" page. */ -void login_needed(void){ - const char *zUrl = PD("REQUEST_URI", "index"); - cgi_redirect(mprintf("login?g=%T", zUrl)); - /* NOTREACHED */ - assert(0); +void login_needed(int anonOk){ +#ifdef FOSSIL_ENABLE_JSON + if(g.json.isJsonMode){ + json_err( FSL_JSON_E_DENIED, NULL, 1 ); + fossil_exit(0); + /* NOTREACHED */ + assert(0); + }else +#endif /* FOSSIL_ENABLE_JSON */ + { + const char *zUrl = PD("REQUEST_URI", "index"); + const char *zQS = P("QUERY_STRING"); + Blob redir; + blob_init(&redir, 0, 0); + if( login_wants_https_redirect() ){ + blob_appendf(&redir, "%s/login?g=%T", g.zHttpsURL, zUrl); + }else{ + blob_appendf(&redir, "%R/login?g=%T", zUrl); + } + if( anonOk ) blob_append(&redir, "&anon", 5); + if( zQS && zQS[0] ){ + blob_appendf(&redir, "&%s", zQS); + } + cgi_redirect(blob_str(&redir)); + /* NOTREACHED */ + assert(0); + } } /* -** Call this routine if the user lacks okHistory permission. 
If -** the anonymous user has okHistory permission, then paint a mesage +** Call this routine if the user lacks g.perm.Hyperlink permission. If +** the anonymous user has Hyperlink permission, then paint a message ** to inform the user that much more information is available by ** logging in as anonymous. */ void login_anonymous_available(void){ - if( !g.okHistory && - db_exists("SELECT 1 FROM user" - " WHERE login='anonymous'" - " AND cap LIKE '%%h%%'") ){ + if( !g.perm.Hyperlink && g.anon.Hyperlink ){ const char *zUrl = PD("REQUEST_URI", "index"); - @

      Many hyperlinks are disabled.
      - @ Use anonymous login + @

      Many hyperlinks are disabled.
      + @ Use anonymous login @ to enable hyperlinks.
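(Editor's note: a related mechanism appears earlier in this hunk: even without the 'h' capability, hyperlinks can be switched on for visitors that look human when the auto-hyperlink setting is enabled. A minimal sketch of that check, using only names that appear in the patch; illustrative, not part of the diff.)

    /* Grant hyperlinks on the fly to apparent humans (sketch) */
    if( !g.perm.Hyperlink && g.isHuman && db_get_boolean("auto-hyperlink",1) ){
      g.perm.Hyperlink = 1;         /* show the links                     */
      g.javascriptHyperlink = 1;    /* but emit href= via JavaScript only */
    }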

      } } /* ** While rendering a form, call this routine to add the Anti-CSRF token ** as a hidden element of the form. */ void login_insert_csrf_secret(void){ - @ + @ } /* ** Before using the results of a form, first call this routine to verify -** that ths Anti-CSRF token is present and is valid. If the Anti-CSRF token -** is missing or is incorrect, that indicates a cross-site scripting attach -** so emits an error message and abort. +** that this Anti-CSRF token is present and is valid. If the Anti-CSRF token +** is missing or is incorrect, that indicates a cross-site scripting attack. +** If the event of an attack is detected, an error message is generated and +** all further processing is aborted. */ void login_verify_csrf_secret(void){ - const char *zCsrf; /* The CSRF secret */ if( g.okCsrf ) return; - if( (zCsrf = P("csrf"))!=0 && strcmp(zCsrf, g.zCsrfToken)==0 ){ + if( fossil_strcmp(P("csrf"), g.zCsrfToken)==0 ){ g.okCsrf = 1; return; } fossil_fatal("Cross-site request forgery attempt"); } + +/* +** WEBPAGE: register +** +** Page to allow users to self-register. The "self-register" setting +** must be enabled for this page to operate. +*/ +void register_page(void){ + const char *zUsername, *zPasswd, *zConfirm, *zContact, *zCS, *zPw, *zCap; + unsigned int uSeed; + const char *zDecoded; + char *zCaptcha; + if( !db_get_boolean("self-register", 0) ){ + style_header("Registration not possible"); + @

      This project does not allow user self-registration. Please contact the + @ project administrator to obtain an account.

      + style_footer(); + return; + } + + style_header("Register"); + zUsername = P("u"); + zPasswd = P("p"); + zConfirm = P("cp"); + zContact = P("c"); + zCap = P("cap"); + zCS = P("cs"); /* Captcha Secret */ + + /* Try to make any sense from user input. */ + if( P("new") ){ + if( zCS==0 ) fossil_redirect_home(); /* Forged request */ + zPw = captcha_decode((unsigned int)atoi(zCS)); + if( !(zUsername && zPasswd && zConfirm && zContact) ){ + @

      + @ All fields are obligatory. + @

      + }else if( strlen(zPasswd) < 6){ + @

      + @ Password too weak. + @

      + }else if( fossil_strcmp(zPasswd,zConfirm)!=0 ){ + @

+ @ The two copies of your new password do not match. + @

      + }else if( fossil_stricmp(zPw, zCap)!=0 ){ + @

      + @ Captcha text invalid. + @

+ }else{ + /* This closely mirrors the user-creation logic in user.c:user_cmd(). */ + Blob passwd, login, caps, contact; + + blob_init(&login, zUsername, -1); + blob_init(&contact, zContact, -1); + blob_init(&caps, db_get("default-perms", "u"), -1); + blob_init(&passwd, zPasswd, -1); + + if( db_exists("SELECT 1 FROM user WHERE login=%B", &login) ){ + /* zErrMsg is not used here because it would not perform the + * %s(zUsername) substitution. */ + @

      + @ %s(zUsername) already exists. + @

      + }else{ + char *zPw = sha1_shared_secret(blob_str(&passwd), blob_str(&login), 0); + int uid; + db_multi_exec( + "INSERT INTO user(login,pw,cap,info,mtime)" + "VALUES(%B,%Q,%B,%B,strftime('%%s','now'))", + &login, zPw, &caps, &contact + ); + free(zPw); + + /* The user is registered, now just log him in. */ + uid = db_int(0, "SELECT uid FROM user WHERE login=%Q", zUsername); + login_set_user_cookie( zUsername, uid, NULL ); + redirect_to_g(); + + } + } + } + + /* Prepare the captcha. */ + uSeed = captcha_seed(); + zDecoded = captcha_decode(uSeed); + zCaptcha = captcha_render(zDecoded); + + /* Print out the registration form. */ + form_begin(0, "%R/register"); + if( P("g") ){ + @ + } + @

      + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @

      +  @ %h(zCaptcha)
      +  @ 
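(Editor's note: a minimal sketch of the captcha round trip used by register_page; all names come from the surrounding hunk, but the snippet itself is illustrative and not part of the diff.)

    /* The seed that rendered the captcha is echoed back in the "cs" field,
    ** so the server can re-derive the expected text and compare it with the
    ** user's "cap" answer. */
    unsigned int uSeed = captcha_seed();            /* seed embedded in the form */
    const char *zExpected = captcha_decode(uSeed);  /* text the captcha encodes  */
    int captchaOk = fossil_stricmp(P("cap"), zExpected)==0;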
      + @ + style_footer(); + + free(zCaptcha); +} + +/* +** Run SQL on the repository database for every repository in our +** login group. The SQL is run in a separate database connection. +** +** Any members of the login group whose repository database file +** cannot be found is silently removed from the group. +** +** Error messages accumulate and are returned in *pzErrorMsg. The +** memory used to hold these messages should be freed using +** fossil_free() if one desired to avoid a memory leak. The +** zPrefix and zSuffix strings surround each error message. +** +** Return the number of errors. +*/ +int login_group_sql( + const char *zSql, /* The SQL to run */ + const char *zPrefix, /* Prefix to each error message */ + const char *zSuffix, /* Suffix to each error message */ + char **pzErrorMsg /* Write error message here, if not NULL */ +){ + sqlite3 *pPeer; /* Connection to another database */ + int nErr = 0; /* Number of errors seen so far */ + int rc; /* Result code from subroutine calls */ + char *zErr; /* SQLite error text */ + char *zSelfCode; /* Project code for ourself */ + Blob err; /* Accumulate errors here */ + Stmt q; /* Query of all peer-* entries in CONFIG */ + + if( zPrefix==0 ) zPrefix = ""; + if( zSuffix==0 ) zSuffix = ""; + if( pzErrorMsg ) *pzErrorMsg = 0; + zSelfCode = abbreviated_project_code(db_get("project-code", "x")); + blob_zero(&err); + db_prepare(&q, + "SELECT name, value FROM config" + " WHERE name GLOB 'peer-repo-*'" + " AND name <> 'peer-repo-%q'" + " ORDER BY +value", + zSelfCode + ); + while( db_step(&q)==SQLITE_ROW ){ + const char *zRepoName = db_column_text(&q, 1); + if( file_size(zRepoName)<0 ){ + /* Silently remove non-existent repositories from the login group. */ + const char *zLabel = db_column_text(&q, 0); + db_multi_exec( + "DELETE FROM config WHERE name GLOB 'peer-*-%q'", + &zLabel[10] + ); + continue; + } + rc = sqlite3_open_v2( + zRepoName, &pPeer, + SQLITE_OPEN_READWRITE, + g.zVfsName + ); + if( rc!=SQLITE_OK ){ + blob_appendf(&err, "%s%s: %s%s", zPrefix, zRepoName, + sqlite3_errmsg(pPeer), zSuffix); + nErr++; + sqlite3_close(pPeer); + continue; + } + sqlite3_create_function(pPeer, "shared_secret", 3, SQLITE_UTF8, + 0, sha1_shared_secret_sql_function, 0, 0); + sqlite3_create_function(pPeer, "now", 0,SQLITE_UTF8,0,db_now_function,0,0); + sqlite3_busy_timeout(pPeer, 5000); + zErr = 0; + rc = sqlite3_exec(pPeer, zSql, 0, 0, &zErr); + if( zErr ){ + blob_appendf(&err, "%s%s: %s%s", zPrefix, zRepoName, zErr, zSuffix); + sqlite3_free(zErr); + nErr++; + }else if( rc!=SQLITE_OK ){ + blob_appendf(&err, "%s%s: %s%s", zPrefix, zRepoName, + sqlite3_errmsg(pPeer), zSuffix); + nErr++; + } + sqlite3_close(pPeer); + } + db_finalize(&q); + if( pzErrorMsg && blob_size(&err)>0 ){ + *pzErrorMsg = fossil_strdup(blob_str(&err)); + } + blob_reset(&err); + fossil_free(zSelfCode); + return nErr; +} + +/* +** Attempt to join a login-group. +** +** If problems arise, leave an error message in *pzErrMsg. 
+*/ +void login_group_join( + const char *zRepo, /* Repository file in the login group */ + const char *zLogin, /* Login name for the other repo */ + const char *zPassword, /* Password to prove we are authorized to join */ + const char *zNewName, /* Name of new login group if making a new one */ + char **pzErrMsg /* Leave an error message here */ +){ + Blob fullName; /* Blob for finding full pathnames */ + sqlite3 *pOther; /* The other repository */ + int rc; /* Return code from sqlite3 functions */ + char *zOtherProjCode; /* Project code for pOther */ + char *zPwHash; /* Password hash on pOther */ + char *zSelfRepo; /* Name of our repository */ + char *zSelfLabel; /* Project-name for our repository */ + char *zSelfProjCode; /* Our project-code */ + char *zSql; /* SQL to run on all peers */ + const char *zSelf; /* The ATTACH name of our repository */ + + *pzErrMsg = 0; /* Default to no errors */ + zSelf = db_name("repository"); + + /* Get the full pathname of the other repository */ + file_canonical_name(zRepo, &fullName, 0); + zRepo = fossil_strdup(blob_str(&fullName)); + blob_reset(&fullName); + + /* Get the full pathname for our repository. Also the project code + ** and project name for ourself. */ + file_canonical_name(g.zRepositoryName, &fullName, 0); + zSelfRepo = fossil_strdup(blob_str(&fullName)); + blob_reset(&fullName); + zSelfProjCode = db_get("project-code", "unknown"); + zSelfLabel = db_get("project-name", 0); + if( zSelfLabel==0 ){ + zSelfLabel = zSelfProjCode; + } + + /* Make sure we are not trying to join ourselves */ + if( fossil_strcmp(zRepo, zSelfRepo)==0 ){ + *pzErrMsg = mprintf("The \"other\" repository is the same as this one."); + return; + } + + /* Make sure the other repository is a valid Fossil database */ + if( file_size(zRepo)<0 ){ + *pzErrMsg = mprintf("repository file \"%s\" does not exist", zRepo); + return; + } + rc = sqlite3_open_v2( + zRepo, &pOther, + SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, + g.zVfsName + ); + if( rc!=SQLITE_OK ){ + *pzErrMsg = fossil_strdup(sqlite3_errmsg(pOther)); + }else{ + rc = sqlite3_exec(pOther, "SELECT count(*) FROM user", 0, 0, pzErrMsg); + } + sqlite3_close(pOther); + if( rc ) return; + + /* Attach the other repository. Make sure the username/password is + ** valid and has Setup permission. + */ + db_attach(zRepo, "other"); + zOtherProjCode = db_text("x", "SELECT value FROM other.config" + " WHERE name='project-code'"); + zPwHash = sha1_shared_secret(zPassword, zLogin, zOtherProjCode); + if( !db_exists( + "SELECT 1 FROM other.user" + " WHERE login=%Q AND cap GLOB '*s*'" + " AND (pw=%Q OR pw=%Q)", + zLogin, zPassword, zPwHash) + ){ + db_detach("other"); + *pzErrMsg = "The supplied username/password does not correspond to a" + " user Setup permission on the other repository."; + return; + } + + /* Create all the necessary CONFIG table entries on both the + ** other repository and on our own repository. 
+ */ + zSelfProjCode = abbreviated_project_code(zSelfProjCode); + zOtherProjCode = abbreviated_project_code(zOtherProjCode); + db_begin_transaction(); + db_multi_exec( + "DELETE FROM \"%w\".config WHERE name GLOB 'peer-*';" + "INSERT INTO \"%w\".config(name,value) VALUES('peer-repo-%q',%Q);" + "INSERT INTO \"%w\".config(name,value) " + " SELECT 'peer-name-%q', value FROM other.config" + " WHERE name='project-name';", + zSelf, + zSelf, zOtherProjCode, zRepo, + zSelf, zOtherProjCode + ); + db_multi_exec( + "INSERT OR IGNORE INTO other.config(name,value)" + " VALUES('login-group-name',%Q);" + "INSERT OR IGNORE INTO other.config(name,value)" + " VALUES('login-group-code',lower(hex(randomblob(8))));", + zNewName + ); + db_multi_exec( + "REPLACE INTO \"%w\".config(name,value)" + " SELECT name, value FROM other.config" + " WHERE name GLOB 'peer-*' OR name GLOB 'login-group-*'", + zSelf + ); + db_end_transaction(0); + db_multi_exec("DETACH other"); + + /* Propagate the changes to all other members of the login-group */ + zSql = mprintf( + "BEGIN;" + "REPLACE INTO config(name,value,mtime) VALUES('peer-name-%q',%Q,now());" + "REPLACE INTO config(name,value,mtime) VALUES('peer-repo-%q',%Q,now());" + "COMMIT;", + zSelfProjCode, zSelfLabel, zSelfProjCode, zSelfRepo + ); + login_group_sql(zSql, "
    • ", "
    • ", pzErrMsg); + fossil_free(zSql); +} + +/* +** Leave the login group that we are currently part of. +*/ +void login_group_leave(char **pzErrMsg){ + char *zProjCode; + char *zSql; + + *pzErrMsg = 0; + zProjCode = abbreviated_project_code(db_get("project-code","x")); + zSql = mprintf( + "DELETE FROM config WHERE name GLOB 'peer-*-%q';" + "DELETE FROM config" + " WHERE name='login-group-name'" + " AND (SELECT count(*) FROM config WHERE name GLOB 'peer-*')==0;", + zProjCode + ); + fossil_free(zProjCode); + login_group_sql(zSql, "
    • ", "
    • ", pzErrMsg); + fossil_free(zSql); + db_multi_exec( + "DELETE FROM config " + " WHERE name GLOB 'peer-*'" + " OR name GLOB 'login-group-*';" + ); +} ADDED src/lookslike.c Index: src/lookslike.c ================================================================== --- src/lookslike.c +++ src/lookslike.c @@ -0,0 +1,426 @@ +/* +** Copyright (c) 2013 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code used to try to guess if a particular file is +** text or binary, what types of line endings it uses, is it UTF8 or +** UTF16, etc. +*/ +#include "config.h" +#include "lookslike.h" +#include + + +#if INTERFACE + +/* +** This macro is designed to return non-zero if the specified blob contains +** data that MAY be binary in nature; otherwise, zero will be returned. +*/ +#define looks_like_binary(blob) \ + ((looks_like_utf8((blob), LOOK_BINARY) & LOOK_BINARY) != LOOK_NONE) + +/* +** Output flags for the looks_like_utf8() and looks_like_utf16() routines used +** to convey status information about the blob content. +*/ +#define LOOK_NONE ((int)0x00000000) /* Nothing special was found. */ +#define LOOK_NUL ((int)0x00000001) /* One or more NUL chars were found. */ +#define LOOK_CR ((int)0x00000002) /* One or more CR chars were found. */ +#define LOOK_LONE_CR ((int)0x00000004) /* An unpaired CR char was found. */ +#define LOOK_LF ((int)0x00000008) /* One or more LF chars were found. */ +#define LOOK_LONE_LF ((int)0x00000010) /* An unpaired LF char was found. */ +#define LOOK_CRLF ((int)0x00000020) /* One or more CR/LF pairs were found. */ +#define LOOK_LONG ((int)0x00000040) /* An over length line was found. */ +#define LOOK_ODD ((int)0x00000080) /* An odd number of bytes was found. */ +#define LOOK_SHORT ((int)0x00000100) /* Unable to perform full check. */ +#define LOOK_INVALID ((int)0x00000200) /* Invalid sequence was found. */ +#define LOOK_BINARY (LOOK_NUL | LOOK_LONG | LOOK_SHORT) /* May be binary. */ +#define LOOK_EOL (LOOK_LONE_CR | LOOK_LONE_LF | LOOK_CRLF) /* Line seps. */ +#endif /* INTERFACE */ + + +/* +** This function attempts to scan each logical line within the blob to +** determine the type of content it appears to contain. The return value +** is a combination of one or more of the LOOK_XXX flags (see above): +** +** !LOOK_BINARY -- The content appears to consist entirely of text; however, +** the encoding may not be UTF-8. +** +** LOOK_BINARY -- The content appears to be binary because it contains one +** or more embedded NUL characters or an extremely long line. +** Since this function does not understand UTF-16, it may +** falsely consider UTF-16 text to be binary. +** +** Additional flags (i.e. those other than the ones included in LOOK_BINARY) +** may be present in the result as well; however, they should not impact the +** determination of text versus binary content. 
+** +************************************ WARNING ********************************** +** +** This function does not validate that the blob content is properly formed +** UTF-8. It assumes that all code points are the same size. It does not +** validate any code points. It makes no attempt to detect if any [invalid] +** switches between UTF-8 and other encodings occur. +** +** The only code points that this function cares about are the NUL character, +** carriage-return, and line-feed. +** +** This function examines the contents of the blob until one of the flags +** specified in "stopFlags" is set. +** +************************************ WARNING ********************************** +*/ +int looks_like_utf8(const Blob *pContent, int stopFlags){ + const char *z = blob_buffer(pContent); + unsigned int n = blob_size(pContent); + int j, c, flags = LOOK_NONE; /* Assume UTF-8 text, prove otherwise */ + + if( n==0 ) return flags; /* Empty file -> text */ + c = *z; + if( c==0 ){ + flags |= LOOK_NUL; /* NUL character in a file -> binary */ + }else if( c=='\r' ){ + flags |= LOOK_CR; + if( n<=1 || z[1]!='\n' ){ + flags |= LOOK_LONE_CR; /* Not enough chars or next char not LF */ + } + } + j = (c!='\n'); + if( !j ) flags |= (LOOK_LF | LOOK_LONE_LF); /* Found LF as first char */ + while( !(flags&stopFlags) && --n>0 ){ + int c2 = c; + c = *++z; ++j; + if( c==0 ){ + flags |= LOOK_NUL; /* NUL character in a file -> binary */ + }else if( c=='\n' ){ + flags |= LOOK_LF; + if( c2=='\r' ){ + flags |= (LOOK_CR | LOOK_CRLF); /* Found LF preceded by CR */ + }else{ + flags |= LOOK_LONE_LF; + } + if( j>LENGTH_MASK ){ + flags |= LOOK_LONG; /* Very long line -> binary */ + } + j = 0; + }else if( c=='\r' ){ + flags |= LOOK_CR; + if( n<=1 || z[1]!='\n' ){ + flags |= LOOK_LONE_CR; /* Not enough chars or next char not LF */ + } + } + } + if( n ){ + flags |= LOOK_SHORT; /* The whole blob was not examined */ + } + if( j>LENGTH_MASK ){ + flags |= LOOK_LONG; /* Very long line -> binary */ + } + return flags; +} + + +/* +** Checks for proper UTF-8. It uses the method described in: +** http://en.wikipedia.org/wiki/UTF-8#Invalid_byte_sequences +** except for the "overlong form" of \u0000 which is not considered invalid +** here: Some languages like Java and Tcl use it. For UTF-8 characters +** > 7f, the variable 'c2' not necessary means the previous character. +** It's number of higher 1-bits indicate the number of continuation bytes +** that are expected to be followed. E.g. when 'c2' has a value in the range +** 0xc0..0xdf it means that 'c' is expected to contain the last continuation +** byte of a UTF-8 character. A value 0xe0..0xef means that after 'c' one +** more continuation byte is expected. +*/ + +int invalid_utf8(const Blob *pContent){ + const unsigned char *z = (unsigned char *) blob_buffer(pContent); + unsigned int n = blob_size(pContent); + unsigned char c, c2; + + if( n==0 ) return 0; /* Empty file -> OK */ + c = *z; + while( --n>0 ){ + c2 = c; + c = *++z; + if( c2>=0x80 ){ + if( ((c2<0xc2) || (c2>=0xf4) || ((c&0xc0)!=0x80)) && + (((c2!=0xf4) || (c>=0x90)) && ((c2!=0xc0) || (c!=0x80))) ){ + return LOOK_INVALID; /* Invalid UTF-8 */ + } + c = (c2 >= 0xe0) ? (c2<<1)+1 : ' '; + } + } + return (c>=0x80) ? LOOK_INVALID : 0; /* Last byte must be ASCII. */ +} + + +/* +** Define the type needed to represent a Unicode (UTF-16) character. 
+*/ +#ifndef WCHAR_T +# ifdef _WIN32 +# define WCHAR_T wchar_t +# else +# define WCHAR_T unsigned short +# endif +#endif + +/* +** Maximum length of a line in a text file, in UTF-16 characters. (4096) +** The number of bytes represented by this value cannot exceed LENGTH_MASK +** bytes, because that is the line buffer size used by the diff engine. +*/ +#define UTF16_LENGTH_MASK_SZ (LENGTH_MASK_SZ-(sizeof(WCHAR_T)-sizeof(char))) +#define UTF16_LENGTH_MASK ((1<> 8) & 0xff)) +#define UTF16_SWAP_IF(expr,ch) ((expr) ? UTF16_SWAP((ch)) : (ch)) + +/* +** This function attempts to scan each logical line within the blob to +** determine the type of content it appears to contain. The return value +** is a combination of one or more of the LOOK_XXX flags (see above): +** +** !LOOK_BINARY -- The content appears to consist entirely of text; however, +** the encoding may not be UTF-16. +** +** LOOK_BINARY -- The content appears to be binary because it contains one +** or more embedded NUL characters or an extremely long line. +** Since this function does not understand UTF-8, it may +** falsely consider UTF-8 text to be binary. +** +** Additional flags (i.e. those other than the ones included in LOOK_BINARY) +** may be present in the result as well; however, they should not impact the +** determination of text versus binary content. +** +************************************ WARNING ********************************** +** +** This function does not validate that the blob content is properly formed +** UTF-16. It assumes that all code points are the same size. It does not +** validate any code points. It makes no attempt to detect if any [invalid] +** switches between the UTF-16be and UTF-16le encodings occur. +** +** The only code points that this function cares about are the NUL character, +** carriage-return, and line-feed. +** +** This function examines the contents of the blob until one of the flags +** specified in "stopFlags" is set. +** +************************************ WARNING ********************************** +*/ +int looks_like_utf16(const Blob *pContent, int bReverse, int stopFlags){ + const WCHAR_T *z = (WCHAR_T *)blob_buffer(pContent); + unsigned int n = blob_size(pContent); + int j, c, flags = LOOK_NONE; /* Assume UTF-16 text, prove otherwise */ + + if( n%sizeof(WCHAR_T) ){ + flags |= LOOK_ODD; /* Odd number of bytes -> binary (UTF-8?) */ + } + if( n binary (UTF-8?) 
*/ + c = *z; + if( bReverse ){ + c = UTF16_SWAP(c); + } + if( c==0 ){ + flags |= LOOK_NUL; /* NUL character in a file -> binary */ + }else if( c=='\r' ){ + flags |= LOOK_CR; + if( n<(2*sizeof(WCHAR_T)) || UTF16_SWAP_IF(bReverse, z[1])!='\n' ){ + flags |= LOOK_LONE_CR; /* Not enough chars or next char not LF */ + } + } + j = (c!='\n'); + if( !j ) flags |= (LOOK_LF | LOOK_LONE_LF); /* Found LF as first char */ + while( !(flags&stopFlags) && ((n-=sizeof(WCHAR_T))>=sizeof(WCHAR_T)) ){ + int c2 = c; + c = *++z; + if( bReverse ){ + c = UTF16_SWAP(c); + } + ++j; + if( c==0 ){ + flags |= LOOK_NUL; /* NUL character in a file -> binary */ + }else if( c=='\n' ){ + flags |= LOOK_LF; + if( c2=='\r' ){ + flags |= (LOOK_CR | LOOK_CRLF); /* Found LF preceded by CR */ + }else{ + flags |= LOOK_LONE_LF; + } + if( j>UTF16_LENGTH_MASK ){ + flags |= LOOK_LONG; /* Very long line -> binary */ + } + j = 0; + }else if( c=='\r' ){ + flags |= LOOK_CR; + if( n<(2*sizeof(WCHAR_T)) || UTF16_SWAP_IF(bReverse, z[1])!='\n' ){ + flags |= LOOK_LONE_CR; /* Not enough chars or next char not LF */ + } + } + } + if( n ){ + flags |= LOOK_SHORT; /* The whole blob was not examined */ + } + if( j>UTF16_LENGTH_MASK ){ + flags |= LOOK_LONG; /* Very long line -> binary */ + } + return flags; +} + +/* +** This function returns an array of bytes representing the byte-order-mark +** for UTF-8. +*/ +const unsigned char *get_utf8_bom(int *pnByte){ + static const unsigned char bom[] = { + 0xef, 0xbb, 0xbf, 0x00, 0x00, 0x00 + }; + if( pnByte ) *pnByte = 3; + return bom; +} + +/* +** This function returns non-zero if the blob starts with a UTF-8 +** byte-order-mark (BOM). +*/ +int starts_with_utf8_bom(const Blob *pContent, int *pnByte){ + const char *z = blob_buffer(pContent); + int bomSize = 0; + const unsigned char *bom = get_utf8_bom(&bomSize); + + if( pnByte ) *pnByte = bomSize; + if( blob_size(pContent)=(2*bomSize) && z[1]==0 ) goto noBom; /* No: possible UTF-32. */ + if( z[0]==0xfeff ){ + if( pbReverse ) *pbReverse = 0; + }else if( z[0]==0xfffe ){ + if( pbReverse ) *pbReverse = 1; + }else{ + static const int one = 1; + noBom: + if( pbReverse ) *pbReverse = *(char *) &one; + return 0; /* No: UTF-16 byte-order-mark not found. */ + } + if( pnByte ) *pnByte = bomSize; + return 1; /* Yes. */ +} + +/* +** Returns non-zero if the specified content could be valid UTF-16. +*/ +int could_be_utf16(const Blob *pContent, int *pbReverse){ + return (blob_size(pContent) % sizeof(WCHAR_T) == 0) ? + starts_with_utf16_bom(pContent, 0, pbReverse) : 0; +} + + +/* +** COMMAND: test-looks-like-utf +** +** Usage: %fossil test-looks-like-utf FILENAME +** +** Options: +** -n|--limit Repeat looks-like function times, for +** performance measurement. Default = 1; +** --utf8 Ignoring BOM and file size, force UTF-8 checking +** --utf16 Ignoring BOM and file size, force UTF-16 checking +** +** FILENAME is the name of a file to check for textual content in the UTF-8 +** and/or UTF-16 encodings. 
+*/ +void looks_like_utf_test_cmd(void){ + Blob blob; /* the contents of the specified file */ + int fUtf8 = 0; /* return value of starts_with_utf8_bom() */ + int fUtf16 = 0; /* return value of starts_with_utf16_bom() */ + int fUnicode = 0; /* return value of could_be_utf16() */ + int lookFlags = 0; /* output flags from looks_like_utf8/utf16() */ + int bRevUtf16 = 0; /* non-zero -> UTF-16 byte order reversed */ + int fForceUtf8 = find_option("utf8",0,0)!=0; + int fForceUtf16 = find_option("utf16",0,0)!=0; + const char *zCount = find_option("limit","n",1); + int nRepeat = 1; + + if( g.argc!=3 ) usage("FILENAME"); + if( zCount ){ + nRepeat = atoi(zCount); + } + blob_read_from_file(&blob, g.argv[2]); + while( --nRepeat >= 0 ){ + fUtf8 = starts_with_utf8_bom(&blob, 0); + fUtf16 = starts_with_utf16_bom(&blob, 0, &bRevUtf16); + if( fForceUtf8 ){ + fUnicode = 0; + }else{ + fUnicode = could_be_utf16(&blob, 0) || fForceUtf16; + } + if( fUnicode ){ + lookFlags = looks_like_utf16(&blob, bRevUtf16, 0); + }else{ + lookFlags = looks_like_utf8(&blob, 0)|invalid_utf8(&blob); + } + } + fossil_print("File \"%s\" has %d bytes.\n",g.argv[2],blob_size(&blob)); + fossil_print("Starts with UTF-8 BOM: %s\n",fUtf8?"yes":"no"); + fossil_print("Starts with UTF-16 BOM: %s\n", + fUtf16?(bRevUtf16?"reversed":"yes"):"no"); + fossil_print("Looks like UTF-%s: %s\n",fUnicode?"16":"8", + (lookFlags&LOOK_BINARY)?"no":"yes"); + fossil_print("Has flag LOOK_NUL: %s\n",(lookFlags&LOOK_NUL)?"yes":"no"); + fossil_print("Has flag LOOK_CR: %s\n",(lookFlags&LOOK_CR)?"yes":"no"); + fossil_print("Has flag LOOK_LONE_CR: %s\n", + (lookFlags&LOOK_LONE_CR)?"yes":"no"); + fossil_print("Has flag LOOK_LF: %s\n",(lookFlags&LOOK_LF)?"yes":"no"); + fossil_print("Has flag LOOK_LONE_LF: %s\n", + (lookFlags&LOOK_LONE_LF)?"yes":"no"); + fossil_print("Has flag LOOK_CRLF: %s\n",(lookFlags&LOOK_CRLF)?"yes":"no"); + fossil_print("Has flag LOOK_LONG: %s\n",(lookFlags&LOOK_LONG)?"yes":"no"); + fossil_print("Has flag LOOK_INVALID: %s\n", + (lookFlags&LOOK_INVALID)?"yes":"no"); + fossil_print("Has flag LOOK_ODD: %s\n",(lookFlags&LOOK_ODD)?"yes":"no"); + fossil_print("Has flag LOOK_SHORT: %s\n",(lookFlags&LOOK_SHORT)?"yes":"no"); + blob_reset(&blob); +} Index: src/main.c ================================================================== --- src/main.c +++ src/main.c @@ -2,11 +2,11 @@ ** Copyright (c) 2006 D. Richard Hipp ** ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) - +** ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** ** Author contact information: @@ -16,20 +16,41 @@ ******************************************************************************* ** ** This module codes the main() procedure that runs first when the ** program is invoked. */ +#include "VERSION.h" #include "config.h" #include "main.h" #include #include #include #include #include - - +#include /* atexit() */ +#if defined(_WIN32) +# include +#else +# include /* errno global */ +#endif +#ifdef FOSSIL_ENABLE_SSL +# include "openssl/crypto.h" +#endif +#if defined(FOSSIL_ENABLE_MINIZ) +# define MINIZ_HEADER_FILE_ONLY +# include "miniz.c" +#else +# include +#endif #if INTERFACE +#ifdef FOSSIL_ENABLE_TCL +# include "tcl.h" +#endif +#ifdef FOSSIL_ENABLE_JSON +# include "cson_amalgamation.h" /* JSON API. 
*/ +# include "json_detail.h" +#endif /* ** Number of elements in an array */ #define count(X) (sizeof(X)/sizeof(X[0])) @@ -43,101 +64,163 @@ ** Maximum number of auxiliary parameters on reports */ #define MX_AUX 5 /* -** All global variables are in this structure. +** Holds flags for fossil user permissions. +*/ +struct FossilUserPerms { + char Setup; /* s: use Setup screens on web interface */ + char Admin; /* a: administrative permission */ + char Delete; /* d: delete wiki or tickets */ + char Password; /* p: change password */ + char Query; /* q: create new reports */ + char Write; /* i: xfer inbound. check-in */ + char Read; /* o: xfer outbound. check-out */ + char Hyperlink; /* h: enable the display of hyperlinks */ + char Clone; /* g: clone */ + char RdWiki; /* j: view wiki via web */ + char NewWiki; /* f: create new wiki via web */ + char ApndWiki; /* m: append to wiki via web */ + char WrWiki; /* k: edit wiki via web */ + char ModWiki; /* l: approve and publish wiki content (Moderator) */ + char RdTkt; /* r: view tickets via web */ + char NewTkt; /* n: create new tickets */ + char ApndTkt; /* c: append to tickets via the web */ + char WrTkt; /* w: make changes to tickets via web */ + char ModTkt; /* q: approve and publish ticket changes (Moderator) */ + char Attach; /* b: add attachments */ + char TktFmt; /* t: create new ticket report formats */ + char RdAddr; /* e: read email addresses or other private data */ + char Zip; /* z: download zipped artifact via /zip URL */ + char Private; /* x: can send and receive private content */ +}; + +#ifdef FOSSIL_ENABLE_TCL +/* +** All Tcl related context information is in this structure. This structure +** definition has been copied from and should be kept in sync with the one in +** "th_tcl.c". */ +struct TclContext { + int argc; /* Number of original (expanded) arguments. */ + char **argv; /* Full copy of the original (expanded) arguments. */ + void *hLibrary; /* The Tcl library module handle. */ + void *xFindExecutable; /* See tcl_FindExecutableProc in th_tcl.c. */ + void *xCreateInterp; /* See tcl_CreateInterpProc in th_tcl.c. */ + void *xDeleteInterp; /* See tcl_DeleteInterpProc in th_tcl.c. */ + void *xFinalize; /* See tcl_FinalizeProc in th_tcl.c. */ + Tcl_Interp *interp; /* The on-demand created Tcl interpreter. */ + int useObjProc; /* Non-zero if an objProc can be called directly. */ + int useTip285; /* Non-zero if TIP #285 is available. */ + char *setup; /* The optional Tcl setup script. */ + void *xPreEval; /* Optional, called before Tcl_Eval*(). */ + void *pPreContext; /* Optional, provided to xPreEval(). */ + void *xPostEval; /* Optional, called after Tcl_Eval*(). */ + void *pPostContext; /* Optional, provided to xPostEval(). */ +}; +#endif + struct Global { int argc; char **argv; /* Command-line arguments to the program */ - int isConst; /* True if the output is unchanging */ + char *nameOfExe; /* Full path of executable. */ + const char *zErrlog; /* Log errors to this file, if not NULL */ + int isConst; /* True if the output is unchanging & cacheable */ + const char *zVfsName; /* The VFS to use for database connections */ sqlite3 *db; /* The connection to the databases */ sqlite3 *dbConfig; /* Separate connection for global_config table */ + char *zAuxSchema; /* Main repository aux-schema */ int useAttach; /* True if global_config is attached to repository */ - int configOpen; /* True if the config database is open */ - long long int now; /* Seconds since 1970 */ + const char *zConfigDbName;/* Path of the config database. 
NULL if not open */ + sqlite3_int64 now; /* Seconds since 1970 */ int repositoryOpen; /* True if the main repository database is open */ + char *zRepositoryOption; /* Most recent cached repository option value */ char *zRepositoryName; /* Name of the repository database */ - const char *zHome; /* Name of user home directory */ + char *zLocalDbName; /* Name of the local database */ + const char *zMainDbType;/* "configdb", "localdb", or "repository" */ + const char *zConfigDbType; /* "configdb", "localdb", or "repository" */ + char *zOpenRevision; /* Check-in version to use during database open */ int localOpen; /* True if the local database is open */ char *zLocalRoot; /* The directory holding the local database */ int minPrefix; /* Number of digits needed for a distinct UUID */ - int fSqlTrace; /* True if -sqltrace flag is present */ + int fSqlTrace; /* True if --sqltrace flag is present */ + int fSqlStats; /* True if --sqltrace or --sqlstats are present */ int fSqlPrint; /* True if -sqlprint flag is present */ int fQuiet; /* True if -quiet flag is present */ int fHttpTrace; /* Trace outbound HTTP requests */ - int fNoSync; /* Do not do an autosync even. --nosync */ + int fAnyTrace; /* Any kind of tracing */ + char *zHttpAuth; /* HTTP Authorization user:pass information */ + int fSystemTrace; /* Trace calls to fossil_system(), --systemtrace */ + int fSshTrace; /* Trace the SSH setup traffic */ + int fSshClient; /* HTTP client flags for SSH client */ + char *zSshCmd; /* SSH command string */ + int fNoSync; /* Do not do an autosync ever. --nosync */ + int fIPv4; /* Use only IPv4, not IPv6. --ipv4 */ char *zPath; /* Name of webpage being served */ char *zExtra; /* Extra path information past the webpage name */ char *zBaseURL; /* Full text of the URL being served */ + char *zHttpsURL; /* zBaseURL translated to https: */ char *zTop; /* Parent directory of zPath */ const char *zContentType; /* The content type of the input HTTP request */ int iErrPriority; /* Priority of current error message */ char *zErrMsg; /* Text of an error message */ + int sslNotAvailable; /* SSL is not available. Do not redirect to https: */ Blob cgiIn; /* Input to an xfer www method */ int cgiOutput; /* Write error and status messages to CGI */ int xferPanic; /* Write error messages in XFER protocol */ int fullHttpReply; /* True for full HTTP reply. False for CGI reply */ Th_Interp *interp; /* The TH1 interpreter */ + char *th1Setup; /* The TH1 post-creation setup script, if any */ + int th1Flags; /* The TH1 integration state flags */ FILE *httpIn; /* Accept HTTP input from here */ FILE *httpOut; /* Send HTTP output here */ int xlinkClusterOnly; /* Set when cloning. Only process clusters */ int fTimeFormat; /* 1 for UTC. 2 for localtime. 
0 not yet selected */ int *aCommitFile; /* Array of files to be committed */ int markPrivate; /* All new artifacts are private if true */ - - int urlIsFile; /* True if a "file:" url */ - int urlIsHttps; /* True if a "https:" url */ - char *urlName; /* Hostname for http: or filename for file: */ - char *urlHostname; /* The HOST: parameter on http headers */ - char *urlProtocol; /* "http" or "https" */ - int urlPort; /* TCP port number for http: or https: */ - int urlDfltPort; /* The default port for the given protocol */ - char *urlPath; /* Pathname for http: */ - char *urlUser; /* User id for http: */ - char *urlPasswd; /* Password for http: */ - char *urlCanonical; /* Canonical representation of the URL */ - char *urlProxyAuth; /* Proxy-Authorizer: string */ - int dontKeepUrl; /* Do not persist the URL */ - - const char *zLogin; /* Login name. "" if not logged in. */ + int clockSkewSeen; /* True if clocks on client and server out of sync */ + int wikiFlags; /* Wiki conversion flags applied to %W */ + char isHTTP; /* True if server/CGI modes, else assume CLI. */ + char javascriptHyperlink; /* If true, set href= using script, not HTML */ + Blob httpHeader; /* Complete text of the HTTP request header */ + UrlData url; /* Information about current URL */ + const char *zLogin; /* Login name. NULL or "" if not logged in. */ + const char *zSSLIdentity; /* Value of --ssl-identity option, filename of + ** SSL client identity */ + int useLocalauth; /* No login required if from 127.0.0.1 */ int noPswd; /* Logged in without password (on 127.0.0.1) */ int userUid; /* Integer user id */ + int isHuman; /* True if access by a human, not a spider or bot */ + int comFmtFlags; /* Zero or more "COMMENT_PRINT_*" bit flags */ /* Information used to populate the RCVFROM table */ int rcvid; /* The rcvid. 0 if not yet defined. */ char *zIpAddr; /* The remote IP address */ char *zNonce; /* The nonce used for login */ - - /* permissions used by the server */ - int okSetup; /* s: use Setup screens on web interface */ - int okAdmin; /* a: administrative permission */ - int okDelete; /* d: delete wiki or tickets */ - int okPassword; /* p: change password */ - int okQuery; /* q: create new reports */ - int okWrite; /* i: xfer inbound. checkin */ - int okRead; /* o: xfer outbound. checkout */ - int okHistory; /* h: access historical information. */ - int okClone; /* g: clone */ - int okRdWiki; /* j: view wiki via web */ - int okNewWiki; /* f: create new wiki via web */ - int okApndWiki; /* m: append to wiki via web */ - int okWrWiki; /* k: edit wiki via web */ - int okRdTkt; /* r: view tickets via web */ - int okNewTkt; /* n: create new tickets */ - int okApndTkt; /* c: append to tickets via the web */ - int okWrTkt; /* w: make changes to tickets via web */ - int okAttach; /* b: add attachments */ - int okTktFmt; /* t: create new ticket report formats */ - int okRdAddr; /* e: read email addresses or other private data */ - int okZip; /* z: download zipped artifact via /zip URL */ + + /* permissions available to current user */ + struct FossilUserPerms perm; + + /* permissions available to current user or to "anonymous". + ** This is the logical union of perm permissions above with + ** the value that perm would take if g.zLogin were "anonymous". 
*/ + struct FossilUserPerms anon; + +#ifdef FOSSIL_ENABLE_TCL + /* all Tcl related context necessary for integration */ + struct TclContext tcl; +#endif /* For defense against Cross-site Request Forgery attacks */ char zCsrfToken[12]; /* Value of the anti-CSRF token */ int okCsrf; /* Anti-CSRF token is present and valid */ + int parseCnt[10]; /* Counts of artifacts parsed */ FILE *fDebug; /* Write debug information here, if the file exists */ +#ifdef FOSSIL_ENABLE_TH1_HOOKS + int fNoThHook; /* Disable all TH1 command/webpage hooks */ +#endif int thTrace; /* True to enable TH1 debugging output */ Blob thLog; /* Text of the TH1 debugging output */ int isHome; /* True if rendering the "home" page */ @@ -146,10 +229,65 @@ const char *azAuxName[MX_AUX]; /* Name of each aux() or option() value */ char *azAuxParam[MX_AUX]; /* Param of each aux() or option() value */ const char *azAuxVal[MX_AUX]; /* Value of each aux() or option() value */ const char **azAuxOpt[MX_AUX]; /* Options of each option() value */ int anAuxCols[MX_AUX]; /* Number of columns for option() values */ + + int allowSymlinks; /* Cached "allow-symlinks" option */ + + int mainTimerId; /* Set to fossil_timer_start() */ +#ifdef FOSSIL_ENABLE_JSON + struct FossilJsonBits { + int isJsonMode; /* True if running in JSON mode, else + false. This changes how errors are + reported. In JSON mode we try to + always output JSON-form error + responses and always exit() with + code 0 to avoid an HTTP 500 error. + */ + int resultCode; /* used for passing back specific codes + ** from /json callbacks. */ + int errorDetailParanoia; /* 0=full error codes, 1=%10, 2=%100, 3=%1000 */ + cson_output_opt outOpt; /* formatting options for JSON mode. */ + cson_value *authToken; /* authentication token */ + const char *jsonp; /* Name of JSONP function wrapper. */ + unsigned char dispatchDepth /* Tells JSON command dispatching + which argument we are currently + working on. For this purpose, arg#0 + is the "json" path/CLI arg. + */; + struct { /* "garbage collector" */ + cson_value *v; + cson_array *a; + } gc; + struct { /* JSON POST data. */ + cson_value *v; + cson_array *a; + int offset; /* Tells us which PATH_INFO/CLI args + part holds the "json" command, so + that we can account for sub-repos + and path prefixes. This is handled + differently for CLI and CGI modes. + */ + const char *commandStr /*"command" request param.*/; + } cmd; + struct { /* JSON POST data. */ + cson_value *v; + cson_object *o; + } post; + struct { /* GET/COOKIE params in JSON mode. */ + cson_value *v; + cson_object *o; + } param; + struct { + cson_value *v; + cson_object *o; + } reqPayload; /* request payload object (if any) */ + cson_array *warnings; /* response warnings */ + int timerId; /* fetched from fossil_timer_start() */ + } json; +#endif /* FOSSIL_ENABLE_JSON */ }; /* ** Macro for debugging: */ @@ -158,11 +296,11 @@ #endif Global g; /* -** The table of web pages supported by this application is generated +** The table of web pages supported by this application is generated ** automatically by the "mkindex" program and written into a file ** named "page_index.h". We include that file here to get access ** to the table. 
*/ #include "page_index.h" @@ -178,31 +316,32 @@ */ static int name_search( const char *zName, /* The name we are looking for */ const NameMap *aMap, /* Search in this array */ int nMap, /* Number of slots in aMap[] */ + int iBegin, /* Lower bound on the array search */ int *pIndex /* OUT: The index in aMap[] of the match */ ){ int upr, lwr, cnt, m, i; int n = strlen(zName); - lwr = 0; + lwr = iBegin; upr = nMap-1; while( lwr<=upr ){ int mid, c; mid = (upr+lwr)/2; - c = strcmp(zName, aMap[mid].zName); + c = fossil_strcmp(zName, aMap[mid].zName); if( c==0 ){ *pIndex = mid; return 0; }else if( c<0 ){ upr = mid - 1; }else{ lwr = mid + 1; } } - for(m=cnt=0, i=upr-2; i<=upr+3 && i1); } +/* +** atexit() handler which frees up "some" of the resources +** used by fossil. +*/ +static void fossil_atexit(void) { +#if defined(_WIN32) && !defined(_WIN64) && defined(FOSSIL_ENABLE_TCL) && \ + defined(USE_TCL_STUBS) + /* + ** If Tcl is compiled on Windows using the latest MinGW, Fossil can crash + ** when exiting while a stubs-enabled Tcl is still loaded. This is due to + ** a bug in MinGW, see: + ** + ** http://comments.gmane.org/gmane.comp.gnu.mingw.user/41724 + ** + ** The workaround is to manually unload the loaded Tcl library prior to + ** exiting the process. This issue does not impact 64-bit Windows. + */ + unloadTcl(g.interp, &g.tcl); +#endif +#ifdef FOSSIL_ENABLE_JSON + cson_value_free(g.json.gc.v); + memset(&g.json, 0, sizeof(g.json)); +#endif + free(g.zErrMsg); + if(g.db){ + db_close(0); + } + /* + ** FIXME: The next two lines cannot always be enabled; however, they + ** are very useful for tracking down TH1 memory leaks. + */ + if( fossil_getenv("TH1_DELETE_INTERP")!=0 ){ + if( g.interp ){ + Th_DeleteInterp(g.interp); g.interp = 0; + } + assert( Th_GetOutstandingMalloc()==0 ); + } +} /* -** This procedure runs first. +** Convert all arguments from mbcs (or unicode) to UTF-8. Then +** search g.argv for arguments "--args FILENAME". If found, then +** (1) remove the two arguments from g.argv +** (2) Read the file FILENAME +** (3) Use the contents of FILE to replace the two removed arguments: +** (a) Ignore blank lines in the file +** (b) Each non-empty line of the file is an argument, except +** (c) If the line begins with "-" and contains a space, it is broken +** into two arguments at the space. 
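**
** As an illustration only (this file and its contents are hypothetical,
** not taken from the sources), an options file "ci-args.txt" containing:
**
**       --user drh
**       -m check-in made from an args file
**       src/main.c
**
** would make "fossil commit --args ci-args.txt" behave like
** "fossil commit --user drh -m 'check-in made from an args file' src/main.c",
** because each line that begins with "-" and contains a space is split at
** its first space, per rule (c) above.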
*/ -int main(int argc, char **argv){ - const char *zCmdName; - int idx; - int rc; +static void expand_args_option(int argc, void *argv){ + Blob file = empty_blob; /* Content of the file */ + Blob line = empty_blob; /* One line of the file */ + unsigned int nLine; /* Number of lines in the file*/ + unsigned int i, j, k; /* Loop counters */ + int n; /* Number of bytes in one line */ + char *z; /* General use string pointer */ + char **newArgv; /* New expanded g.argv under construction */ + const char *zFileName; /* input file name */ + FILE *inFile; /* input FILE */ +#if defined(_WIN32) + wchar_t buf[MAX_PATH]; +#endif - sqlite3_config(SQLITE_CONFIG_LOG, fossil_sqlite_log, 0); - g.now = time(0); g.argc = argc; g.argv = argv; - if( getenv("GATEWAY_INTERFACE")!=0 ){ - zCmdName = "cgi"; - }else if( argc<2 ){ - fprintf(stderr, "Usage: %s COMMAND ...\n", argv[0]); - exit(1); - }else{ - g.fQuiet = find_option("quiet", 0, 0)!=0; - g.fSqlTrace = find_option("sqltrace", 0, 0)!=0; - g.fSqlPrint = find_option("sqlprint", 0, 0)!=0; - g.fHttpTrace = find_option("httptrace", 0, 0)!=0; - g.zLogin = find_option("user", "U", 1); - zCmdName = argv[1]; - } - rc = name_search(zCmdName, aCommand, count(aCommand), &idx); - if( rc==1 ){ - fprintf(stderr,"%s: unknown command: %s\n" - "%s: use \"help\" for more information\n", - argv[0], zCmdName, argv[0]); - return 1; - }else if( rc==2 ){ - fprintf(stderr,"%s: ambiguous command prefix: %s\n" - "%s: use \"help\" for more information\n", - argv[0], zCmdName, argv[0]); - return 1; - } - aCommand[idx].xFunc(); - return 0; -} - -/* -** The following variable becomes true while processing a fatal error -** or a panic. If additional "recursive-fatal" errors occur while -** shutting down, the recursive errors are silently ignored. -*/ -static int mainInFatalError = 0; - -/* -** Print an error message, rollback all databases, and quit. These -** routines never return. -*/ -void fossil_panic(const char *zFormat, ...){ - char *z; - va_list ap; - static int once = 1; - mainInFatalError = 1; - va_start(ap, zFormat); - z = vmprintf(zFormat, ap); - va_end(ap); - if( g.cgiOutput && once ){ - once = 0; - cgi_printf("

      %h

      ", z); - cgi_reply(); - }else{ - fprintf(stderr, "%s: %s\n", g.argv[0], z); - } - db_force_rollback(); - exit(1); -} -void fossil_fatal(const char *zFormat, ...){ - char *z; - va_list ap; - mainInFatalError = 1; - va_start(ap, zFormat); - z = vmprintf(zFormat, ap); - va_end(ap); - if( g.cgiOutput ){ - g.cgiOutput = 0; - cgi_printf("

      %h

      ", z); - cgi_reply(); - }else{ - fprintf(stderr, "%s: %s\n", g.argv[0], z); - } - db_force_rollback(); - exit(1); -} - -/* This routine works like fossil_fatal() except that if called -** recursively, the recursive call is a no-op. -** -** Use this in places where an error might occur while doing -** fatal error shutdown processing. Unlike fossil_panic() and -** fossil_fatal() which never return, this routine might return if -** the fatal error handing is already in process. The caller must -** be prepared for this routine to return. -*/ -void fossil_fatal_recursive(const char *zFormat, ...){ - char *z; - va_list ap; - if( mainInFatalError ) return; - mainInFatalError = 1; - va_start(ap, zFormat); - z = vmprintf(zFormat, ap); - va_end(ap); - if( g.cgiOutput ){ - g.cgiOutput = 0; - cgi_printf("

      %h

      ", z); - cgi_reply(); - }else{ - fprintf(stderr, "%s: %s\n", g.argv[0], z); - } - db_force_rollback(); - exit(1); -} - - -/* Print a warning message */ -void fossil_warning(const char *zFormat, ...){ - char *z; - va_list ap; - va_start(ap, zFormat); - z = vmprintf(zFormat, ap); - va_end(ap); - if( g.cgiOutput ){ - cgi_printf("

      %h

      ", z); - }else{ - fprintf(stderr, "%s: %s\n", g.argv[0], z); - } -} - -/* -** Return a name for an SQLite error code -*/ -static const char *sqlite_error_code_name(int iCode){ + sqlite3_initialize(); +#if defined(_WIN32) && defined(BROKEN_MINGW_CMDLINE) + for(i=0; i=g.argc-1 ) return; + + zFileName = g.argv[i+1]; + inFile = (0==strcmp("-",zFileName)) + ? stdin + : fossil_fopen(zFileName,"rb"); + if(!inFile){ + fossil_fatal("Cannot open -args file [%s]", zFileName); + }else{ + blob_read_from_channel(&file, inFile, -1); + if(stdin != inFile){ + fclose(inFile); + } + inFile = NULL; + } + blob_to_utf8_no_bom(&file, 1); + z = blob_str(&file); + for(k=0, nLine=1; z[k]; k++) if( z[k]=='\n' ) nLine++; + newArgv = fossil_malloc( sizeof(char*)*(g.argc + nLine*2) ); + for(j=0; j0 ){ + if( n<1 ) continue + /** + ** Reminder: corner-case: a line with 1 byte and no newline. + */; + z = blob_buffer(&line); + if('\n'==z[n-1]){ + z[n-1] = 0; + } + + if((n>1) && ('\r'==z[n-2])){ + if(n==2) continue /*empty line*/; + z[n-2] = 0; + } + if(!z[0]) continue; + newArgv[j++] = z; + if( z[0]=='-' ){ + for(k=1; z[k] && !fossil_isspace(z[k]); k++){} + if( z[k] ){ + z[k] = 0; + k++; + if( z[k] ) newArgv[j++] = &z[k]; + } + } + } + i += 2; + while( i=2 ) break; + if( fd<0 ) x = errno; + }while( nTry++ < 2 ); + if( fd<2 ){ + g.cgiOutput = 1; + g.httpOut = stdout; + g.fullHttpReply = !g.isHTTP; + fossil_fatal("file descriptor 2 is not open. (fd=%d, errno=%d)", + fd, x); + } + } +#endif + rc = name_search(zCmdName, aCommand, count(aCommand), FOSSIL_FIRST_CMD, &idx); + if( rc==1 ){ +#ifdef FOSSIL_ENABLE_TH1_HOOKS + if( !g.isHTTP && !g.fNoThHook ){ + rc = Th_CommandHook(zCmdName, 0); + }else{ + rc = TH_OK; + } + if( rc==TH_OK || rc==TH_RETURN || rc==TH_CONTINUE ){ + if( rc==TH_OK || rc==TH_RETURN ){ +#endif + fossil_fatal("%s: unknown command: %s\n" + "%s: use \"help\" for more information\n", + g.argv[0], zCmdName, g.argv[0]); +#ifdef FOSSIL_ENABLE_TH1_HOOKS + } + if( !g.isHTTP && !g.fNoThHook && (rc==TH_OK || rc==TH_CONTINUE) ){ + Th_CommandNotify(zCmdName, 0); + } + } + fossil_exit(0); +#endif + }else if( rc==2 ){ + int i, n; + Blob couldbe; + blob_zero(&couldbe); + n = strlen(zCmdName); + for(i=0; i= g.argc) break; + if( i+hasArg >= g.argc ) break; z = g.argv[i]; if( z[0]!='-' ) continue; z++; if( z[0]=='-' ){ if( z[1]==0 ){ @@ -440,18 +868,66 @@ }else if( z[nLong]==0 ){ zReturn = g.argv[i+hasArg]; remove_from_argv(i, 1+hasArg); break; } - }else if( zShort!=0 && strcmp(z,zShort)==0 ){ + }else if( fossil_strcmp(z,zShort)==0 ){ zReturn = g.argv[i+hasArg]; remove_from_argv(i, 1+hasArg); break; } } return zReturn; } + +/* +** Look for multiple occurrences of a command-line option with the +** corresponding argument. +** +** Return a malloc allocated array of pointers to the arguments. +** +** pnUsedArgs is used to store the number of matched arguments. +** +** Caller is responsible to free allocated memory. 
+*/ +const char **find_repeatable_option( + const char *zLong, + const char *zShort, + int *pnUsedArgs +){ + const char *zOption; + const char **pzArgs = 0; + int nAllocArgs = 0; + int nUsedArgs = 0; + + while( (zOption = find_option(zLong, zShort, 1))!=0 ){ + if( pzArgs==0 && nAllocArgs==0 ){ + nAllocArgs = 1; + pzArgs = fossil_malloc( nAllocArgs*sizeof(pzArgs[0]) ); + }else if( nAllocArgs<=nUsedArgs ){ + nAllocArgs = nAllocArgs*2; + pzArgs = fossil_realloc( (void *)pzArgs, nAllocArgs*sizeof(pzArgs[0]) ); + } + pzArgs[nUsedArgs++] = zOption; + } + *pnUsedArgs = nUsedArgs; + return pzArgs; +} + +/* +** Look for a repository command-line option. If present, [re-]cache it in +** the global state and return the new pointer, freeing any previous value. +** If absent and there is no cached value, return NULL. +*/ +const char *find_repository_option(){ + const char *zRepository = find_option("repository", "R", 1); + if( zRepository ){ + if( g.zRepositoryOption ) fossil_free(g.zRepositoryOption); + g.zRepositoryOption = mprintf("%s", zRepository); + } + return g.zRepositoryOption; +} /* ** Verify that there are no unprocessed command-line options. If ** Any remaining command-line argument begins with "-" print ** an error message and quit. @@ -458,11 +934,13 @@ */ void verify_all_options(void){ int i; for(i=1; i
      +  @ %h(blob_str(&versionInfo))
      +  @ 
      + style_footer(); } /* ** COMMAND: help ** ** Usage: %fossil help COMMAND +** or: %fossil COMMAND --help +** +** Display information on how to use COMMAND. To display a list of +** available commands use one of: ** -** Display information on how to use COMMAND +** %fossil help Show common commands +** %fossil help -a|--all Show both common and auxiliary commands +** %fossil help -t|--test Show test commands only +** %fossil help -x|--aux Show auxiliary commands only +** %fossil help -w|--www Show list of WWW pages */ void help_cmd(void){ - int rc, idx; + int rc, idx, isPage = 0; const char *z; - if( g.argc!=3 ){ - printf("Usage: %s help COMMAND.\nAvailable COMMANDs:\n", g.argv[0]); - cmd_cmd_list(); + const char *zCmdOrPage; + const char *zCmdOrPagePlural; + if( g.argc<3 ){ + z = g.argv[0]; + fossil_print( + "Usage: %s help COMMAND\n" + "Common COMMANDs: (use \"%s help -a|--all\" for a complete list)\n", + z, z); + command_list(0, CMDFLAG_1ST_TIER); version_cmd(); return; } - rc = name_search(g.argv[2], aCommand, count(aCommand), &idx); + if( find_option("all","a",0) ){ + command_list(0, CMDFLAG_1ST_TIER | CMDFLAG_2ND_TIER); + return; + } + else if( find_option("www","w",0) ){ + command_list(0, CMDFLAG_WEBPAGE); + return; + } + else if( find_option("aux","x",0) ){ + command_list(0, CMDFLAG_2ND_TIER); + return; + } + else if( find_option("test","t",0) ){ + command_list(0, CMDFLAG_TEST); + return; + } + isPage = ('/' == *g.argv[2]) ? 1 : 0; + if(isPage){ + zCmdOrPage = "page"; + zCmdOrPagePlural = "pages"; + }else{ + zCmdOrPage = "command"; + zCmdOrPagePlural = "commands"; + } + rc = name_search(g.argv[2], aCommand, count(aCommand), 0, &idx); if( rc==1 ){ - fossil_fatal("unknown command: %s", g.argv[2]); + fossil_print("unknown %s: %s\nAvailable %s:\n", + zCmdOrPage, g.argv[2], zCmdOrPagePlural); + command_list(0, isPage ? CMDFLAG_WEBPAGE : (0xff & ~CMDFLAG_WEBPAGE)); + fossil_exit(1); }else if( rc==2 ){ - fossil_fatal("ambiguous command prefix: %s", g.argv[2]); + fossil_print("ambiguous %s prefix: %s\nMatching %s:\n", + zCmdOrPage, g.argv[2], zCmdOrPagePlural); + command_list(g.argv[2], 0xff); + fossil_exit(1); } - z = aCmdHelp[idx]; + z = aCmdHelp[idx].zText; if( z==0 ){ - fossil_fatal("no help available for the %s command", - aCommand[idx].zName); + fossil_fatal("no help available for the %s %s", + aCommand[idx].zName, zCmdOrPage); } while( *z ){ if( *z=='%' && strncmp(z, "%fossil", 7)==0 ){ - printf("%s", g.argv[0]); + fossil_print("%s", g.argv[0]); z += 7; }else{ putchar(*z); z++; } } putchar('\n'); } + +/* +** WEBPAGE: help +** URL: /help?name=CMD +** +** Show the built-in help text for CMD. CMD can be a command-line interface +** command or a page name from the web interface. +*/ +void help_page(void){ + const char *zCmd = P("cmd"); + + if( zCmd==0 ) zCmd = P("name"); + style_header("Command-line Help"); + if( zCmd ){ + int rc, idx; + char *z, *s, *d; + const char *zCmdOrPage = ('/'==*zCmd) ? "page" : "command"; + style_submenu_element("Command-List", "Command-List", "%s/help", g.zTop); + @

      The "%s(zCmd)" %s(zCmdOrPage):

      + rc = name_search(zCmd, aCommand, count(aCommand), 0, &idx); + if( rc==1 ){ + @ unknown command: %s(zCmd) + }else if( rc==2 ){ + @ ambiguous command prefix: %s(zCmd) + }else{ + z = (char*)aCmdHelp[idx].zText; + if( z==0 ){ + @ no help available for the %s(aCommand[idx].zName) command + }else{ + z=s=d=mprintf("%s",z); + while( *s ){ + if( *s=='%' && strncmp(s, "%fossil", 7)==0 ){ + s++; + }else{ + *d++ = *s++; + } + } + *d = 0; + @
      +        @ %h(z)
      +        @ 
      + fossil_free(z); + } + } + }else{ + int i, j, n; + + @

      Available commands:

      + @ + for(i=j=0; i
        + } + @
      %s(z)
      + j++; + if( j>=n ){ + @
      + j = 0; + } + } + if( j>0 ){ + @ + } + @
      + + @

      Available web UI pages:

      + @ + for(i=j=0; i
        + } + if( aCmdHelp[i].zText && *aCmdHelp[i].zText ){ + @
      %s(z+1)
      + }else{ + @
      %s(z+1)
      + } + j++; + if( j>=n ){ + @
      + j = 0; + } + } + if( j>0 ){ + @ + } + @
      + + @

      Unsupported commands:

      + @ + for(i=j=0; i
        + } + if( aCmdHelp[i].zText && *aCmdHelp[i].zText ){ + @
      %s(z)
      + }else{ + @
      %s(z)
      + } + j++; + if( j>=n ){ + @
      + j = 0; + } + } + if( j>0 ){ + @ + } + @
      + + } + style_footer(); +} + +/* +** WEBPAGE: test-all-help +** +** Show all help text on a single page. Useful for proof-reading. +*/ +void test_all_help_page(void){ + int i; + style_header("Testpage: All Help Text"); + for(i=0; i%s(aCommand[i].zName): + @
      +    @ %h(aCmdHelp[i].zText)
      +    @ 
      + } + style_footer(); +} /* ** Set the g.zBaseURL value to the full URL for the toplevel of ** the fossil tree. Set g.zTop to g.zBaseURL without the ** leading "http://" and the host and port. +** +** The g.zBaseURL is normally set based on HTTP_HOST and SCRIPT_NAME +** environment variables. However, if zAltBase is not NULL then it +** is the argument to the --baseurl option command-line option and +** g.zBaseURL and g.zTop is set from that instead. */ -void set_base_url(void){ +static void set_base_url(const char *zAltBase){ int i; - const char *zHost = PD("HTTP_HOST",""); - const char *zMode = PD("HTTPS","off"); - const char *zCur = PD("SCRIPT_NAME","/"); - - i = strlen(zCur); - while( i>0 && zCur[i-1]=='/' ) i--; - if( strcmp(zMode,"on")==0 ){ - g.zBaseURL = mprintf("https://%s%.*s", zHost, i, zCur); - g.zTop = &g.zBaseURL[8+strlen(zHost)]; + const char *zHost; + const char *zMode; + const char *zCur; + + if( g.zBaseURL!=0 ) return; + if( zAltBase ){ + int i, n, c; + g.zTop = g.zBaseURL = mprintf("%s", zAltBase); + if( strncmp(g.zTop, "http://", 7)==0 ){ + /* it is HTTP, replace prefix with HTTPS. */ + g.zHttpsURL = mprintf("https://%s", &g.zTop[7]); + }else if( strncmp(g.zTop, "https://", 8)==0 ){ + /* it is already HTTPS, use it. */ + g.zHttpsURL = mprintf("%s", g.zTop); + }else{ + fossil_fatal("argument to --baseurl should be 'http://host/path'" + " or 'https://host/path'"); + } + for(i=n=0; (c = g.zTop[i])!=0; i++){ + if( c=='/' ){ + n++; + if( n==3 ){ + g.zTop += i; + break; + } + } + } + if( g.zTop==g.zBaseURL ){ + fossil_fatal("argument to --baseurl should be 'http://host/path'" + " or 'https://host/path'"); + } + if( g.zTop[1]==0 ) g.zTop++; }else{ - g.zBaseURL = mprintf("http://%s%.*s", zHost, i, zCur); - g.zTop = &g.zBaseURL[7+strlen(zHost)]; + zHost = PD("HTTP_HOST",""); + zMode = PD("HTTPS","off"); + zCur = PD("SCRIPT_NAME","/"); + i = strlen(zCur); + while( i>0 && zCur[i-1]=='/' ) i--; + if( fossil_stricmp(zMode,"on")==0 ){ + g.zBaseURL = mprintf("https://%s%.*s", zHost, i, zCur); + g.zTop = &g.zBaseURL[8+strlen(zHost)]; + g.zHttpsURL = g.zBaseURL; + }else{ + g.zBaseURL = mprintf("http://%s%.*s", zHost, i, zCur); + g.zTop = &g.zBaseURL[7+strlen(zHost)]; + g.zHttpsURL = mprintf("https://%s%.*s", zHost, i, zCur); + } + } + if( db_is_writeable("repository") ){ + if( !db_exists("SELECT 1 FROM config WHERE name='baseurl:%q'", g.zBaseURL)){ + db_multi_exec("INSERT INTO config(name,value,mtime)" + "VALUES('baseurl:%q',1,now())", g.zBaseURL); + }else{ + db_optional_sql("repository", + "REPLACE INTO config(name,value,mtime)" + "VALUES('baseurl:%q',1,now())", g.zBaseURL + ); + } } } /* ** Send an HTTP redirect back to the designated Index Page. */ -void fossil_redirect_home(void){ - cgi_redirectf("%s%s", g.zBaseURL, db_get("index-page", "/index")); +NORETURN void fossil_redirect_home(void){ + cgi_redirectf("%s%s", g.zTop, db_get("index-page", "/index")); } /* ** If running as root, chroot to the directory containing the ** repository zRepo and then drop root privileges. Return the @@ -614,105 +1467,248 @@ ** zRepo might be a directory itself. In that case chroot into ** the directory zRepo. ** ** Assume the user-id and group-id of the repository, or if zRepo ** is a directory, of that directory. +** +** The noJail flag means that the chroot jail is not entered. But +** privileges are still lowered to that of the user-id and group-id +** of the repository file. 
*/ -static char *enter_chroot_jail(char *zRepo){ -#if !defined(__MINGW32__) +static char *enter_chroot_jail(char *zRepo, int noJail){ +#if !defined(_WIN32) if( getuid()==0 ){ int i; struct stat sStat; Blob dir; char *zDir; + if( g.db!=0 ){ + db_close(1); + } - file_canonical_name(zRepo, &dir); + file_canonical_name(zRepo, &dir, 0); zDir = blob_str(&dir); - if( file_isdir(zDir)==1 ){ - chdir(zDir); - chroot(zDir); - zRepo = "/"; - }else{ - for(i=strlen(zDir)-1; i>0 && zDir[i]!='/'; i--){} - if( zDir[i]!='/' ) fossil_panic("bad repository name: %s", zRepo); - zDir[i] = 0; - chdir(zDir); - chroot(zDir); - zDir[i] = '/'; - zRepo = &zDir[i]; + if( !noJail ){ + if( file_isdir(zDir)==1 ){ + if( file_chdir(zDir, 1) ){ + fossil_fatal("unable to chroot into %s", zDir); + } + zRepo = "/"; + }else{ + for(i=strlen(zDir)-1; i>0 && zDir[i]!='/'; i--){} + if( zDir[i]!='/' ) fossil_fatal("bad repository name: %s", zRepo); + if( i>0 ){ + zDir[i] = 0; + if( file_chdir(zDir, 1) ){ + fossil_fatal("unable to chroot into %s", zDir); + } + zDir[i] = '/'; + } + zRepo = &zDir[i]; + } } if( stat(zRepo, &sStat)!=0 ){ fossil_fatal("cannot stat() repository: %s", zRepo); } - setgid(sStat.st_gid); - setuid(sStat.st_uid); - if( g.db!=0 ){ - db_close(); + i = setgid(sStat.st_gid); + i = i || setuid(sStat.st_uid); + if(i){ + fossil_fatal("setgid/uid() failed with errno %d", errno); + } + if( g.db==0 && file_isfile(zRepo) ){ db_open_repository(zRepo); } } #endif return zRepo; } + +/* +** Generate a web-page that lists all repositories located under the +** g.zRepositoryName directory and return non-zero. +** +** Or, if no repositories can be located beneath g.zRepositoryName, +** return 0. +*/ +static int repo_list_page(void){ + Blob base; + int n = 0; + + assert( g.db==0 ); + blob_init(&base, g.zRepositoryName, -1); + sqlite3_open(":memory:", &g.db); + db_multi_exec("CREATE TABLE sfile(x TEXT);"); + db_multi_exec("CREATE TABLE vfile(pathname);"); + vfile_scan(&base, blob_size(&base), 0, 0, 0); + db_multi_exec("DELETE FROM sfile WHERE x NOT GLOB '*[^/].fossil'"); + n = db_int(0, "SELECT count(*) FROM sfile"); + if( n>0 ){ + Stmt q; + @ + @ + @ Repository List + @ + @ + @

      Available Repositories:

      + @
        + db_prepare(&q, "SELECT x, substr(x,-7,-100000)||'/home'" + " FROM sfile ORDER BY x COLLATE nocase;"); + while( db_step(&q)==SQLITE_ROW ){ + const char *zName = db_column_text(&q, 0); + const char *zUrl = db_column_text(&q, 1); + @
      %h(zName)
      + } + @
      + @ + @ + cgi_reply(); + } + sqlite3_close(g.db); + g.db = 0; + return n; +} /* ** Preconditions: ** ** * Environment variables are set up according to the CGI standard. ** ** If the repository is known, it has already been opened. If unknown, ** then g.zRepositoryName holds the directory that contains the repository ** and the actual repository is taken from the first element of PATH_INFO. -** +** ** Process the webpage specified by the PATH_INFO or REQUEST_URI ** environment variable. +** +** If the repository is not known, then a search is done through the +** file hierarchy rooted at g.zRepositoryName for a suitable repository +** with a name of $prefix.fossil, where $prefix is any prefix of PATH_INFO. +** Or, if an ordinary file named $prefix is found, and $prefix matches +** pFileGlob and $prefix does not match "*.fossil*" and the mimetype of +** $prefix can be determined from its suffix, then the file $prefix is +** returned as static text. +** +** If no suitable webpage is found, try to redirect to zNotFound. */ -static void process_one_web_page(const char *zNotFound){ +static void process_one_web_page( + const char *zNotFound, /* Redirect here on a 404 if not NULL */ + Glob *pFileGlob, /* Deliver static files matching */ + int allowRepoList /* Send repo list for "/" URL */ +){ const char *zPathInfo; char *zPath = NULL; int idx; int i; + + /* Handle universal query parameters */ + if( PB("utc") ){ + g.fTimeFormat = 1; + }else if( PB("localtime") ){ + g.fTimeFormat = 2; + } /* If the repository has not been opened already, then find the ** repository based on the first element of PATH_INFO and open it. */ - zPathInfo = P("PATH_INFO"); + zPathInfo = PD("PATH_INFO",""); if( !g.repositoryOpen ){ - char *zRepo; + char *zRepo, *zToFree; const char *zOldScript = PD("SCRIPT_NAME", ""); char *zNewScript; int j, k; - - i = 1; - while( zPathInfo[i] && zPathInfo[i]!='/' ){ i++; } - zRepo = mprintf("%s%.*s.fossil",g.zRepositoryName,i,zPathInfo); - - /* To avoid mischief, make sure the repository basename contains no - ** characters other than alphanumerics, "-", and "_". - */ - for(j=strlen(g.zRepositoryName)+1, k=0; kNot Found - cgi_set_status(404, "not found"); - cgi_reply(); - } - return; + i64 szFile; + + i = zPathInfo[0]!=0; + while( 1 ){ + while( zPathInfo[i] && zPathInfo[i]!='/' ){ i++; } + zRepo = zToFree = mprintf("%s%.*s.fossil",g.zRepositoryName,i,zPathInfo); + + /* To avoid mischief, make sure the repository basename contains no + ** characters other than alphanumerics, "/", "_", "-", and ".", and + ** that "-" never occurs immediately after a "/" and that "." is always + ** surrounded by two alphanumerics. Any character that does not + ** satisfy these constraints is converted into "_". 
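**
** For example (directory and repository names are hypothetical): with the
** server rooted at /home/www/repos, a request with PATH_INFO
** "/projA/timeline" first looks for /home/www/repos/projA.fossil, and if
** that prefix does not name a repository the search continues with the
** longer prefix, e.g. /home/www/repos/projA/sub.fossil for a URL like
** "/projA/sub/timeline". A leading element such as ".hidden" violates the
** "." rule above, so it is rewritten to "_hidden" before the lookup.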
+ */ + szFile = 0; + for(j=strlen(g.zRepositoryName)+1, k=0; zRepo[j] && k0 ){ + const char *zMimetype; + assert( fossil_strcmp(&zRepo[j], ".fossil")==0 ); + zRepo[j] = 0; + if( zPathInfo[i]=='/' && file_isdir(zRepo)==1 ){ + fossil_free(zToFree); + i++; + continue; + } + if( pFileGlob!=0 + && file_isfile(zRepo) + && glob_match(pFileGlob, zRepo) + && sqlite3_strglob("*.fossil*",zRepo)!=0 + && (zMimetype = mimetype_from_name(zRepo))!=0 + && strcmp(zMimetype, "application/x-fossil-artifact")!=0 + ){ + Blob content; + blob_read_from_file(&content, zRepo); + cgi_set_content_type(zMimetype); + cgi_set_content(&content); + cgi_reply(); + return; + } + zRepo[j] = '.'; + } + + if( szFile<1024 ){ + set_base_url(0); + if( strcmp(zPathInfo,"/")==0 + && allowRepoList + && repo_list_page() ){ + /* Will return a list of repositories */ + }else if( zNotFound ){ + cgi_redirect(zNotFound); + }else{ +#ifdef FOSSIL_ENABLE_JSON + if(g.json.isJsonMode){ + json_err(FSL_JSON_E_RESOURCE_NOT_FOUND,NULL,1); + return; + } +#endif + @

      Not Found

      + cgi_set_status(404, "not found"); + cgi_reply(); + } + return; + } + break; } zNewScript = mprintf("%s%.*s", zOldScript, i, zPathInfo); cgi_replace_parameter("PATH_INFO", &zPathInfo[i+1]); zPathInfo += i; cgi_replace_parameter("SCRIPT_NAME", zNewScript); db_open_repository(zRepo); if( g.fHttpTrace ){ - fprintf(stderr, + fprintf(stderr, "# repository: [%s]\n" "# new PATH_INFO = [%s]\n" "# new SCRIPT_NAME = [%s]\n", zRepo, zPathInfo, zNewScript); } @@ -719,58 +1715,253 @@ } /* Find the page that the user has requested, construct and deliver that ** page. */ - if( g.zContentType && memcmp(g.zContentType, "application/x-fossil", 20)==0 ){ + if( g.zContentType && + strncmp(g.zContentType, "application/x-fossil", 20)==0 ){ zPathInfo = "/xfer"; } - set_base_url(); - if( zPathInfo==0 || zPathInfo[0]==0 + set_base_url(0); + if( zPathInfo==0 || zPathInfo[0]==0 || (zPathInfo[0]=='/' && zPathInfo[1]==0) ){ - fossil_redirect_home(); +#ifdef FOSSIL_ENABLE_JSON + if(g.json.isJsonMode){ + json_err(FSL_JSON_E_RESOURCE_NOT_FOUND,NULL,1); + fossil_exit(0); + } +#endif + fossil_redirect_home() /*does not return*/; }else{ zPath = mprintf("%s", zPathInfo); } - /* Remove the leading "/" at the beginning of the path. + /* Make g.zPath point to the first element of the path. Make + ** g.zExtra point to everything past that point. + */ + while(1){ + char *zAltRepo = 0; + g.zPath = &zPath[1]; + for(i=1; zPath[i] && zPath[i]!='/'; i++){} + if( zPath[i]=='/' ){ + zPath[i] = 0; + g.zExtra = &zPath[i+1]; + + /* Look for sub-repositories. A sub-repository is another repository + ** that accepts the login credentials of the current repository. A + ** subrepository is identified by a CONFIG table entry "subrepo:NAME" + ** where NAME is the first component of the path. The value of the + ** the CONFIG entries is the string "USER:FILENAME" where USER is the + ** USER name to log in as in the subrepository and FILENAME is the + ** repository filename. + */ + zAltRepo = db_text(0, "SELECT value FROM config WHERE name='subrepo:%q'", + g.zPath); + if( zAltRepo ){ + int nHost; + int jj; + char *zUser = zAltRepo; + login_check_credentials(); + for(jj=0; zAltRepo[jj] && zAltRepo[jj]!=':'; jj++){} + if( zAltRepo[jj]==':' ){ + zAltRepo[jj] = 0; + zAltRepo += jj+1; + }else{ + zUser = "nobody"; + } + if( g.zLogin==0 || g.zLogin[0]==0 ) zUser = "nobody"; + if( zAltRepo[0]!='/' ){ + zAltRepo = mprintf("%s/../%s", g.zRepositoryName, zAltRepo); + file_simplify_name(zAltRepo, -1, 0); + } + db_close(1); + db_open_repository(zAltRepo); + login_as_user(zUser); + g.perm.Password = 0; + zPath += i; + nHost = g.zTop - g.zBaseURL; + g.zBaseURL = mprintf("%z/%s", g.zBaseURL, g.zPath); + g.zTop = g.zBaseURL + nHost; + continue; + } + }else{ + g.zExtra = 0; + } + break; + } +#ifdef FOSSIL_ENABLE_JSON + /* + ** Workaround to allow us to customize some following behaviour for + ** JSON mode. The problem is, we don't always know if we're in JSON + ** mode at this point (namely, for GET mode we don't know but POST + ** we do), so we snoop g.zPath and cheat a bit. */ - g.zPath = &zPath[1]; - for(i=1; zPath[i] && zPath[i]!='/'; i++){} - if( zPath[i]=='/' ){ - zPath[i] = 0; - g.zExtra = &zPath[i+1]; - }else{ - g.zExtra = 0; - } + if( !g.json.isJsonMode && g.zPath && (0==strncmp("json",g.zPath,4)) ){ + g.json.isJsonMode = 1; + } +#endif if( g.zExtra ){ /* CGI parameters get this treatment elsewhere, but places like getfile ** will use g.zExtra directly. 
+ ** Reminder: the login mechanism uses 'name' differently, and may + ** eventually have a problem/collision with this. + ** + ** Disabled by stephan when running in JSON mode because this + ** particular parameter name is very common and i have had no end + ** of grief with this handling. The JSON API never relies on the + ** handling below, and by disabling it in JSON mode I can remove + ** lots of special-case handling in several JSON handlers. */ - dehttpize(g.zExtra); - cgi_set_parameter_nocopy("name", g.zExtra); +#ifdef FOSSIL_ENABLE_JSON + if(!g.json.isJsonMode){ +#endif + dehttpize(g.zExtra); + cgi_set_parameter_nocopy("name", g.zExtra, 1); +#ifdef FOSSIL_ENABLE_JSON + } +#endif } - + /* Locate the method specified by the path and execute the function ** that implements that method. */ - if( name_search(g.zPath, aWebpage, count(aWebpage), &idx) && - name_search("not_found", aWebpage, count(aWebpage), &idx) ){ - cgi_set_status(404,"Not Found"); - @

      Not Found

      - @

      Page not found: %h(g.zPath)

      + if( name_search(g.zPath, aWebpage, count(aWebpage), 0, &idx) ){ +#ifdef FOSSIL_ENABLE_JSON + if(g.json.isJsonMode){ + json_err(FSL_JSON_E_RESOURCE_NOT_FOUND,NULL,0); + }else +#endif + { +#ifdef FOSSIL_ENABLE_TH1_HOOKS + int rc; + if( !g.fNoThHook ){ + rc = Th_WebpageHook(g.zPath, 0); + }else{ + rc = TH_OK; + } + if( rc==TH_OK || rc==TH_RETURN || rc==TH_CONTINUE ){ + if( rc==TH_OK || rc==TH_RETURN ){ +#endif + cgi_set_status(404,"Not Found"); + @

      Not Found

      + @

      Page not found: %h(g.zPath)

      +#ifdef FOSSIL_ENABLE_TH1_HOOKS + } + if( !g.fNoThHook && (rc==TH_OK || rc==TH_CONTINUE) ){ + Th_WebpageNotify(g.zPath, 0); + } + } +#endif + } + }else if( aWebpage[idx].xFunc!=page_xfer && db_schema_is_outofdate() ){ +#ifdef FOSSIL_ENABLE_JSON + if(g.json.isJsonMode){ + json_err(FSL_JSON_E_DB_NEEDS_REBUILD,NULL,0); + }else +#endif + { + @

      Server Configuration Error

      + @

      The database schema on the server is out-of-date. Please ask + @ the administrator to run fossil rebuild.

      + } }else{ - aWebpage[idx].xFunc(); +#ifdef FOSSIL_ENABLE_TH1_HOOKS + /* + ** The TH1 return codes from the hook will be handled as follows: + ** + ** TH_OK: The xFunc() and the TH1 notification will both be executed. + ** + ** TH_ERROR: The xFunc() will be executed, the TH1 notification will be + ** skipped. If the xFunc() is being hooked, the error message + ** will be emitted. + ** + ** TH_BREAK: The xFunc() and the TH1 notification will both be skipped. + ** + ** TH_RETURN: The xFunc() will be executed, the TH1 notification will be + ** skipped. + ** + ** TH_CONTINUE: The xFunc() will be skipped, the TH1 notification will be + ** executed. + */ + int rc; + if( !g.fNoThHook ){ + rc = Th_WebpageHook(aWebpage[idx].zName, aWebpage[idx].cmdFlags); + }else{ + rc = TH_OK; + } + if( rc==TH_OK || rc==TH_RETURN || rc==TH_CONTINUE ){ + if( rc==TH_OK || rc==TH_RETURN ){ +#endif + aWebpage[idx].xFunc(); +#ifdef FOSSIL_ENABLE_TH1_HOOKS + } + if( !g.fNoThHook && (rc==TH_OK || rc==TH_CONTINUE) ){ + Th_WebpageNotify(aWebpage[idx].zName, aWebpage[idx].cmdFlags); + } + } +#endif } /* Return the result. */ cgi_reply(); } + +/* If the CGI program contains one or more lines of the form +** +** redirect: repository-filename http://hostname/path/%s +** +** then control jumps here. Search each repository for an artifact ID +** or ticket ID that matches the "name" CGI parameter and for the +** first match, redirect to the corresponding URL with the "name" CGI +** parameter inserted. Paint an error page if no match is found. +** +** If there is a line of the form: +** +** redirect: * URL +** +** Then a redirect is made to URL if no match is found. Otherwise a +** very primitive error message is returned. +*/ +static void redirect_web_page(int nRedirect, char **azRedirect){ + int i; /* Loop counter */ + const char *zNotFound = 0; /* Not found URL */ + const char *zName = P("name"); + set_base_url(0); + if( zName==0 ){ + zName = P("SCRIPT_NAME"); + if( zName && zName[0]=='/' ) zName++; + } + if( zName && validate16(zName, strlen(zName)) ){ + for(i=0; i + @ No Such Object + @ + @

      No such object: %h(zName)

      + @ + cgi_reply(); + } +} /* -** COMMAND: cgi +** COMMAND: cgi* ** ** Usage: %fossil ?cgi? SCRIPT ** ** The SCRIPT argument is the name of a file that is the CGI script ** that is being run. The command name, "cgi", may be omitted if @@ -782,238 +1973,577 @@ ** repository: /home/somebody/project.db ** ** The second line defines the name of the repository. After locating ** the repository, fossil will generate a webpage on stdout based on ** the values of standard CGI environment variables. +** +** See also: http, server, winsrv */ void cmd_cgi(void){ const char *zFile; const char *zNotFound = 0; - Blob config, line, key, value; - if( g.argc==3 && strcmp(g.argv[1],"cgi")==0 ){ + char **azRedirect = 0; /* List of repositories to redirect to */ + int nRedirect = 0; /* Number of entries in azRedirect */ + Glob *pFileGlob = 0; /* Pattern for files */ + int allowRepoList = 0; /* Allow lists of repository files */ + Blob config, line, key, value, value2; + if( g.argc==3 && fossil_strcmp(g.argv[1],"cgi")==0 ){ zFile = g.argv[2]; }else{ zFile = g.argv[1]; } g.httpOut = stdout; g.httpIn = stdin; -#ifdef __MINGW32__ - /* Set binary mode on windows to avoid undesired translations - ** between \n and \r\n. */ - setmode(_fileno(g.httpOut), _O_BINARY); - setmode(_fileno(g.httpIn), _O_BINARY); -#endif -#ifdef __EMX__ - /* Similar hack for OS/2 */ - setmode(fileno(g.httpOut), O_BINARY); - setmode(fileno(g.httpIn), O_BINARY); -#endif + fossil_binary_mode(g.httpOut); + fossil_binary_mode(g.httpIn); g.cgiOutput = 1; blob_read_from_file(&config, zFile); while( blob_line(&config, &line) ){ if( !blob_token(&line, &key) ) continue; if( blob_buffer(&key)[0]=='#' ) continue; - if( blob_eq(&key, "debug:") && blob_token(&line, &value) ){ - g.fDebug = fopen(blob_str(&value), "a"); - blob_reset(&value); - continue; - } - if( blob_eq(&key, "HOME:") && blob_token(&line, &value) ){ - cgi_setenv("HOME", blob_str(&value)); - blob_reset(&value); - continue; - } - if( blob_eq(&key, "repository:") && blob_token(&line, &value) ){ + if( blob_eq(&key, "repository:") && blob_tail(&line, &value) ){ + /* repository: FILENAME + ** + ** The name of the Fossil repository to be served via CGI. Most + ** fossil CGI scripts have a single non-comment line that contains + ** this one entry. + */ + blob_trim(&value); db_open_repository(blob_str(&value)); blob_reset(&value); continue; } if( blob_eq(&key, "directory:") && blob_token(&line, &value) ){ - db_close(); + /* directory: DIRECTORY + ** + ** If repository: is omitted, then terms of the PATH_INFO cgi parameter + ** are appended to DIRECTORY looking for a repository (whose name ends + ** in ".fossil") or a file in "files:". + */ + db_close(1); g.zRepositoryName = mprintf("%s", blob_str(&value)); blob_reset(&value); continue; } if( blob_eq(&key, "notfound:") && blob_token(&line, &value) ){ + /* notfound: URL + ** + ** If using directory: and no suitable repository or file is found, + ** then redirect to URL. + */ zNotFound = mprintf("%s", blob_str(&value)); blob_reset(&value); continue; + } + if( blob_eq(&key, "localauth") ){ + /* localauth + ** + ** Grant "administrator" privileges to users connecting with HTTP + ** from IP address 127.0.0.1. Do not bother checking credentials. + */ + g.useLocalauth = 1; + continue; + } + if( blob_eq(&key, "repolist") ){ + /* repolist + ** + ** If using "directory:" and the URL is "/" then generate a page + ** showing a list of available repositories. 
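**
** Putting the directives described so far together, a directory-serving
** CGI script might look like the following (all paths are illustrative):
**
**       #!/usr/bin/fossil
**       directory: /home/www/repos
**       notfound: http://example.com/not-found.html
**       repolist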
+ */ + allowRepoList = 1; + continue; + } + if( blob_eq(&key, "redirect:") && blob_token(&line, &value) + && blob_token(&line, &value2) ){ + /* See the header comment on the redirect_web_page() function + ** above for details. */ + nRedirect++; + azRedirect = fossil_realloc(azRedirect, 2*nRedirect*sizeof(char*)); + azRedirect[nRedirect*2-2] = mprintf("%s", blob_str(&value)); + azRedirect[nRedirect*2-1] = mprintf("%s", blob_str(&value2)); + blob_reset(&value); + blob_reset(&value2); + continue; + } + if( blob_eq(&key, "files:") && blob_token(&line, &value) ){ + /* files: GLOBLIST + ** + ** GLOBLIST is a comma-separated list of filename globs. For + ** example: *.html,*.css,*.js + ** + ** If the repository: line is omitted and then PATH_INFO is searched + ** for files that match any of these GLOBs and if any such file is + ** found it is returned verbatim. This feature allows "fossil server" + ** to function as a primitive web-server delivering arbitrary content. + */ + pFileGlob = glob_create(blob_str(&value)); + blob_reset(&value); + continue; + } + if( blob_eq(&key, "setenv:") && blob_token(&line, &value) + && blob_token(&line, &value2) ){ + /* setenv: NAME VALUE + ** + ** Sets environment variable NAME to VALUE + */ + fossil_setenv(blob_str(&value), blob_str(&value2)); + blob_reset(&value); + blob_reset(&value2); + continue; + } + if( blob_eq(&key, "debug:") && blob_token(&line, &value) ){ + /* debug: FILENAME + ** + ** Causes output from cgi_debug() and CGIDEBUG(()) calls to go + ** into FILENAME. + */ + g.fDebug = fossil_fopen(blob_str(&value), "ab"); + blob_reset(&value); + continue; + } + if( blob_eq(&key, "errorlog:") && blob_token(&line, &value) ){ + /* errorlog: FILENAME + ** + ** Causes messages from warnings, errors, and panics to be appended + ** to FILENAME. + */ + g.zErrlog = mprintf("%s", blob_str(&value)); + blob_reset(&value); + continue; + } + if( blob_eq(&key, "HOME:") && blob_token(&line, &value) ){ + /* HOME: VALUE + ** + ** Set CGI parameter "HOME" to VALUE. This is legacy. Use + ** setenv: instead. + */ + cgi_setenv("HOME", blob_str(&value)); + blob_reset(&value); + continue; + } + if( blob_eq(&key, "skin:") && blob_token(&line, &value) ){ + /* skin: LABEL + ** + ** Use one of the built-in skins defined by LABEL. LABEL is the + ** name of the subdirectory under the skins/ directory that holds + ** the elements of the built-in skin. If LABEL does not match, + ** this directive is a silent no-op. + */ + skin_use_alternative(blob_str(&value)); + blob_reset(&value); + continue; } } blob_reset(&config); - if( g.db==0 && g.zRepositoryName==0 ){ + if( g.db==0 && g.zRepositoryName==0 && nRedirect==0 ){ cgi_panic("Unable to find or open the project repository"); } cgi_init(); - process_one_web_page(zNotFound); + if( nRedirect ){ + redirect_web_page(nRedirect, azRedirect); + }else{ + process_one_web_page(zNotFound, pFileGlob, allowRepoList); + } } /* -** If g.argv[2] exists then it is either the name of a repository +** If g.argv[arg] exists then it is either the name of a repository ** that will be used by a server, or else it is a directory that -** contains multiple repositories that can be served. If g.argv[2] +** contains multiple repositories that can be served. If g.argv[arg] ** is a directory, the repositories it contains must be named -** "*.fossil". If g.argv[2] does not exists, then we must be within -** a check-out and the repository to be served is the repository of +** "*.fossil". 
If g.argv[arg] does not exist, then we must be within +** an open check-out and the repository serve is the repository of ** that check-out. ** -** Open the respository to be served if it is known. If g.argv[2] is +** Open the repository to be served if it is known. If g.argv[arg] is ** a directory full of repositories, then set g.zRepositoryName to ** the name of that directory and the specific repository will be ** opened later by process_one_web_page() based on the content of ** the PATH_INFO variable. ** -** If disallowDir is set, then the directory full of repositories method -** is disallowed. +** If the fCreate flag is set, then create the repository if it +** does not already exist. */ -static void find_server_repository(int disallowDir){ - if( g.argc<3 ){ +static void find_server_repository(int arg, int fCreate){ + if( g.argc<=arg ){ db_must_be_within_tree(); - }else if( !disallowDir && file_isdir(g.argv[2])==1 ){ - g.zRepositoryName = mprintf("%s", g.argv[2]); - file_simplify_name(g.zRepositoryName, -1); }else{ - db_open_repository(g.argv[2]); + const char *zRepo = g.argv[arg]; + int isDir = file_isdir(zRepo); + if( isDir==1 ){ + g.zRepositoryName = mprintf("%s", zRepo); + file_simplify_name(g.zRepositoryName, -1, 0); + }else{ + if( isDir==0 && fCreate ){ + const char *zPassword; + db_create_repository(zRepo); + db_open_repository(zRepo); + db_begin_transaction(); + db_initial_setup(0, "now", g.zLogin); + db_end_transaction(0); + fossil_print("project-id: %s\n", db_get("project-code", 0)); + fossil_print("server-id: %s\n", db_get("server-code", 0)); + zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin); + fossil_print("admin-user: %s (initial password is \"%s\")\n", + g.zLogin, zPassword); + cache_initialize(); + g.zLogin = 0; + g.userUid = 0; + }else{ + db_open_repository(zRepo); + } + } } } /* ** undocumented format: ** -** fossil http REPOSITORY INFILE OUTFILE IPADDR +** fossil http INFILE OUTFILE IPADDR ?REPOSITORY? +** +** The argv==6 form (with no options) is used by the win32 server only. ** -** The argv==6 form is used by the win32 server only. +** COMMAND: http* ** -** COMMAND: http -** -** Usage: %fossil http REPOSITORY [--notfound URL] +** Usage: %fossil http ?REPOSITORY? ?OPTIONS? ** ** Handle a single HTTP request appearing on stdin. The resulting webpage ** is delivered on stdout. This method is used to launch an HTTP request -** handler from inetd, for example. The argument is the name of the +** handler from inetd, for example. The argument is the name of the ** repository. ** -** If REPOSITORY is a directory that contains one or more respositories -** with names of the form "*.fossil" then the first element of the URL -** pathname selects among the various repositories. If the pathname does +** If REPOSITORY is a directory that contains one or more repositories, +** either directly in REPOSITORY itself or in subdirectories, and +** with names of the form "*.fossil" then a prefix of the URL pathname +** selects from among the various repositories. If the pathname does ** not select a valid repository and the --notfound option is available, ** then the server redirects (HTTP code 302) to the URL of --notfound. +** When REPOSITORY is a directory, the pathname must contain only +** alphanumerics, "_", "/", "-" and "." and no "-" may occur after a "/" +** and every "." must be surrounded on both sides by alphanumerics or else +** a 404 error is returned. 
Static content files in the directory are +** returned if they match comma-separate GLOB pattern specified by --files +** and do not match "*.fossil*" and have a well-known suffix. +** +** The --host option can be used to specify the hostname for the server. +** The --https option indicates that the request came from HTTPS rather +** than HTTP. If --nossl is given, then SSL connections will not be available, +** thus also no redirecting from http: to https: will take place. +** +** If the --localauth option is given, then automatic login is performed +** for requests coming from localhost, if the "localauth" setting is not +** enabled. +** +** Options: +** --baseurl URL base URL (useful with reverse proxies) +** --files GLOB comma-separate glob patterns for static file to serve +** --localauth enable automatic login for local connections +** --host NAME specify hostname of the server +** --https signal a request coming in via https +** --nojail drop root privilege but do not enter the chroot jail +** --nossl signal that no SSL connections are available +** --notfound URL use URL as "HTTP 404, object not found" page. +** --repolist If REPOSITORY is directory, URL "/" lists all repos +** --scgi Interpret input as SCGI rather than HTTP +** --skin LABEL Use override skin LABEL +** +** See also: cgi, server, winsrv */ void cmd_http(void){ - const char *zIpAddr; + const char *zIpAddr = 0; const char *zNotFound; + const char *zHost; + const char *zAltBase; + const char *zFileGlob; + int useSCGI; + int noJail; + int allowRepoList; + + /* The winhttp module passes the --files option as --files-urlenc with + ** the argument being URL encoded, to avoid wildcard expansion in the + ** shell. This option is for internal use and is undocumented. + */ + zFileGlob = find_option("files-urlenc",0,1); + if( zFileGlob ){ + char *z = mprintf("%s", zFileGlob); + dehttpize(z); + zFileGlob = z; + }else{ + zFileGlob = find_option("files",0,1); + } + skin_override(); zNotFound = find_option("notfound", 0, 1); - if( g.argc!=2 && g.argc!=3 && g.argc!=6 ){ - cgi_panic("no repository specified"); + noJail = find_option("nojail",0,0)!=0; + allowRepoList = find_option("repolist",0,0)!=0; + g.useLocalauth = find_option("localauth", 0, 0)!=0; + g.sslNotAvailable = find_option("nossl", 0, 0)!=0; + useSCGI = find_option("scgi", 0, 0)!=0; + zAltBase = find_option("baseurl", 0, 1); + if( zAltBase ) set_base_url(zAltBase); + if( find_option("https",0,0)!=0 ){ + zIpAddr = fossil_getenv("REMOTE_HOST"); /* From stunnel */ + cgi_replace_parameter("HTTPS","on"); + } + zHost = find_option("host", 0, 1); + if( zHost ) cgi_replace_parameter("HTTP_HOST",zHost); + + /* We should be done with options.. 
*/ + verify_all_options(); + + if( g.argc!=2 && g.argc!=3 && g.argc!=5 && g.argc!=6 ){ + fossil_fatal("no repository specified"); } g.cgiOutput = 1; g.fullHttpReply = 1; - if( g.argc==6 ){ - g.httpIn = fopen(g.argv[3], "rb"); - g.httpOut = fopen(g.argv[4], "wb"); - zIpAddr = g.argv[5]; + if( g.argc>=5 ){ + g.httpIn = fossil_fopen(g.argv[2], "rb"); + g.httpOut = fossil_fopen(g.argv[3], "wb"); + zIpAddr = g.argv[4]; + find_server_repository(5, 0); }else{ g.httpIn = stdin; g.httpOut = stdout; - zIpAddr = 0; + find_server_repository(2, 0); + } + if( zIpAddr==0 ){ + zIpAddr = cgi_ssh_remote_addr(0); + if( zIpAddr && zIpAddr[0] ){ + g.fSshClient |= CGI_SSH_CLIENT; + } + } + g.zRepositoryName = enter_chroot_jail(g.zRepositoryName, noJail); + if( useSCGI ){ + cgi_handle_scgi_request(); + }else if( g.fSshClient & CGI_SSH_CLIENT ){ + ssh_request_loop(zIpAddr, glob_create(zFileGlob)); + }else{ + cgi_handle_http_request(zIpAddr); } - find_server_repository(0); - g.zRepositoryName = enter_chroot_jail(g.zRepositoryName); - cgi_handle_http_request(zIpAddr); - process_one_web_page(zNotFound); + process_one_web_page(zNotFound, glob_create(zFileGlob), allowRepoList); +} + +/* +** Process all requests in a single SSH connection if possible. +*/ +void ssh_request_loop(const char *zIpAddr, Glob *FileGlob){ + blob_zero(&g.cgiIn); + do{ + cgi_handle_ssh_http_request(zIpAddr); + process_one_web_page(0, FileGlob, 0); + blob_reset(&g.cgiIn); + } while ( g.fSshClient & CGI_SSH_FOSSIL || + g.fSshClient & CGI_SSH_COMPAT ); } /* +** Note that the following command is used by ssh:// processing. +** ** COMMAND: test-http +** ** Works like the http command but gives setup permission to all users. +** +** Options: +** --th-trace trace TH1 execution (for debugging purposes) +** */ void cmd_test_http(void){ - login_set_capabilities("s"); - cmd_http(); + const char *zIpAddr; /* IP address of remote client */ + + Th_InitTraceLog(); + login_set_capabilities("sx", 0); + g.useLocalauth = 1; + g.httpIn = stdin; + g.httpOut = stdout; + find_server_repository(2, 0); + g.cgiOutput = 1; + g.fullHttpReply = 1; + zIpAddr = cgi_ssh_remote_addr(0); + if( zIpAddr && zIpAddr[0] ){ + g.fSshClient |= CGI_SSH_CLIENT; + ssh_request_loop(zIpAddr, 0); + }else{ + cgi_set_parameter("REMOTE_ADDR", "127.0.0.1"); + cgi_handle_http_request(0); + process_one_web_page(0, 0, 0); + } } -#ifndef __MINGW32__ -#if !defined(__DARWIN__) && !defined(__APPLE__) +#if !defined(_WIN32) +#if !defined(__DARWIN__) && !defined(__APPLE__) && !defined(__HAIKU__) /* ** Search for an executable on the PATH environment variable. ** Return true (1) if found and false (0) if not found. */ static int binaryOnPath(const char *zBinary){ - const char *zPath = getenv("PATH"); + const char *zPath = fossil_getenv("PATH"); char *zFull; int i; int bExists; while( zPath && zPath[0] ){ while( zPath[0]==':' ) zPath++; for(i=0; zPath[i] && zPath[i]!=':'; i++){} zFull = mprintf("%.*s/%s", i, zPath, zBinary); - bExists = access(zFull, X_OK); - free(zFull); + bExists = file_access(zFull, X_OK); + fossil_free(zFull); if( bExists==0 ) return 1; zPath += i; } return 0; } #endif #endif /* -** COMMAND: server +** COMMAND: server* ** COMMAND: ui ** -** Usage: %fossil server ?-P|--port TCPPORT? ?REPOSITORY? -** Or: %fossil ui ?-P|--port TCPPORT? ?REPOSITORY? +** Usage: %fossil server ?OPTIONS? ?REPOSITORY? +** Or: %fossil ui ?OPTIONS? ?REPOSITORY? 
** ** Open a socket and begin listening and responding to HTTP requests on ** TCP port 8080, or on any other TCP port defined by the -P or ** --port option. The optional argument is the name of the repository. ** The repository argument may be omitted if the working directory is ** within an open checkout. ** ** The "ui" command automatically starts a web browser after initializing -** the web server. +** the web server. The "ui" command also binds to 127.0.0.1 and so will +** only process HTTP traffic from the local machine. +** +** The REPOSITORY can be a directory (aka folder) that contains one or +** more repositories with names ending in ".fossil". In this case, a +** prefix of the URL pathname is used to search the directory for an +** appropriate repository. To thwart mischief, the pathname in the URL must +** contain only alphanumerics, "_", "/", "-", and ".", and no "-" may +** occur after "/", and every "." must be surrounded on both sides by +** alphanumerics. Any pathname that does not satisfy these constraints +** results in a 404 error. Files in REPOSITORY that match the comma-separated +** list of glob patterns given by --files and that have known suffixes +** such as ".txt" or ".html" or ".jpeg" and do not match the pattern +** "*.fossil*" will be served as static content. With the "ui" command, +** the REPOSITORY can only be a directory if the --notfound option is +** also present. +** +** By default, the "ui" command provides full administrative access without +** having to log in. This can be disabled by turning off the "localauth" +** setting. Automatic login for the "server" command is available if the +** --localauth option is present and the "localauth" setting is off and the +** connection is from localhost. The "ui" command also enables --repolist +** by default. +** +** Options: +** --baseurl URL Use URL as the base (useful for reverse proxies) +** --create Create a new REPOSITORY if it does not already exist +** --page PAGE Start "ui" on PAGE. ex: --page "timeline?y=ci" +** --files GLOBLIST Comma-separated list of glob patterns for static files +** --localauth enable automatic login for requests from localhost +** --localhost listen on 127.0.0.1 only (always true for "ui") +** --https signal a request coming in via https +** --nojail Drop root privileges but do not enter the chroot jail +** --nossl signal that no SSL connections are available +** --notfound URL Redirect +** -P|--port TCPPORT listen to request on port TCPPORT +** --th-trace trace TH1 execution (for debugging purposes) +** --repolist If REPOSITORY is dir, URL "/" lists repos. +** --scgi Accept SCGI rather than HTTP +** --skin LABEL Use override skin LABEL + ** -** In the "server" command, the REPOSITORY can be a directory (aka folder) -** that contains one or more respositories with names ending in ".fossil". -** In that case, the first element of the URL is used to select among the -** various repositories. 
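**
** Purely as illustrations (repository and directory names are
** hypothetical), typical invocations might be:
**
**       fossil ui --page "timeline?y=ci" my-project.fossil
**       fossil server -P 8081 --repolist /home/www/repos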
+** See also: cgi, http, winsrv */ void cmd_webserver(void){ int iPort, mxPort; /* Range of TCP ports allowed */ const char *zPort; /* Value of the --port option */ - char *zBrowser; /* Name of web browser program */ + const char *zBrowser; /* Name of web browser program */ char *zBrowserCmd = 0; /* Command to launch the web browser */ int isUiCmd; /* True if command is "ui", not "server' */ const char *zNotFound; /* The --notfound option or NULL */ + int flags = 0; /* Server flags */ +#if !defined(_WIN32) + int noJail; /* Do not enter the chroot jail */ +#endif + int allowRepoList; /* List repositories on URL "/" */ + const char *zAltBase; /* Argument to the --baseurl option */ + const char *zFileGlob; /* Static content must match this */ + char *zIpAddr = 0; /* Bind to this IP address */ + int fCreate = 0; /* The --create flag */ + const char *zInitPage = 0; /* Start on this page. --page option */ -#ifdef __MINGW32__ +#if defined(_WIN32) const char *zStopperFile; /* Name of file used to terminate server */ zStopperFile = find_option("stopper", 0, 1); #endif - g.thTrace = find_option("th-trace", 0, 0)!=0; - if( g.thTrace ){ - blob_zero(&g.thLog); + zFileGlob = find_option("files-urlenc",0,1); + if( zFileGlob ){ + char *z = mprintf("%s", zFileGlob); + dehttpize(z); + zFileGlob = z; + }else{ + zFileGlob = find_option("files",0,1); } + skin_override(); +#if !defined(_WIN32) + noJail = find_option("nojail",0,0)!=0; +#endif + g.useLocalauth = find_option("localauth", 0, 0)!=0; + Th_InitTraceLog(); zPort = find_option("port", "P", 1); - zNotFound = find_option("notfound", 0, 1); - if( g.argc!=2 && g.argc!=3 ) usage("?REPOSITORY?"); isUiCmd = g.argv[1][0]=='u'; - find_server_repository(isUiCmd); + if( isUiCmd ){ + zInitPage = find_option("page", 0, 1); + } + if( zInitPage==0 ) zInitPage = ""; + zNotFound = find_option("notfound", 0, 1); + allowRepoList = find_option("repolist",0,0)!=0; + zAltBase = find_option("baseurl", 0, 1); + fCreate = find_option("create",0,0)!=0; + if( find_option("scgi", 0, 0)!=0 ) flags |= HTTP_SERVER_SCGI; + if( zAltBase ){ + set_base_url(zAltBase); + } + g.sslNotAvailable = find_option("nossl", 0, 0)!=0; + if( find_option("https",0,0)!=0 ){ + cgi_replace_parameter("HTTPS","on"); + }else{ + /* without --https, defaults to not available. */ + g.sslNotAvailable = 1; + } + if( find_option("localhost", 0, 0)!=0 ){ + flags |= HTTP_SERVER_LOCALHOST; + } + + /* We should be done with options.. */ + verify_all_options(); + + if( g.argc!=2 && g.argc!=3 ) usage("?REPOSITORY?"); + if( isUiCmd ){ + flags |= HTTP_SERVER_LOCALHOST|HTTP_SERVER_REPOLIST; + g.useLocalauth = 1; + allowRepoList = 1; + } + find_server_repository(2, fCreate); if( zPort ){ + int i; + for(i=strlen(zPort)-1; i>=0 && zPort[i]!=':'; i--){} + if( i>0 ){ + zIpAddr = mprintf("%.*s", i, zPort); + zPort += i+1; + } iPort = mxPort = atoi(zPort); }else{ iPort = db_get_int("http-port", 8080); mxPort = iPort+100; } -#ifndef __MINGW32__ +#if !defined(_WIN32) /* Unix implementation */ if( isUiCmd ){ -#if !defined(__DARWIN__) && !defined(__APPLE__) +#if !defined(__DARWIN__) && !defined(__APPLE__) && !defined(__HAIKU__) zBrowser = db_get("web-browser", 0); if( zBrowser==0 ){ - static char *azBrowserProg[] = { "xdg-open", "gnome-open", "firefox" }; + static const char *const azBrowserProg[] = + { "xdg-open", "gnome-open", "firefox", "google-chrome" }; int i; zBrowser = "echo"; for(i=0; imain.mk" +# file, edit "makemake.tcl" then run "tclsh makemake.tcl" # to regenerate this file. 
# -# This file is included by linux-gcc.mk or linux-mingw.mk or possible -# some other makefiles. This file contains the rules that are common -# to building regardless of the target. +# This file is included by primary Makefile. # -XTCC = $(TCC) $(CFLAGS) -I. -I$(SRCDIR) +XTCC = $(TCC) -I. -I$(SRCDIR) -I$(OBJDIR) $(TCCFLAGS) $(CFLAGS) SRC = \ $(SRCDIR)/add.c \ $(SRCDIR)/allrepo.c \ $(SRCDIR)/attach.c \ $(SRCDIR)/bag.c \ + $(SRCDIR)/bisect.c \ $(SRCDIR)/blob.c \ $(SRCDIR)/branch.c \ $(SRCDIR)/browse.c \ + $(SRCDIR)/builtin.c \ + $(SRCDIR)/bundle.c \ + $(SRCDIR)/cache.c \ $(SRCDIR)/captcha.c \ $(SRCDIR)/cgi.c \ $(SRCDIR)/checkin.c \ $(SRCDIR)/checkout.c \ $(SRCDIR)/clearsign.c \ @@ -35,135 +40,286 @@ $(SRCDIR)/descendants.c \ $(SRCDIR)/diff.c \ $(SRCDIR)/diffcmd.c \ $(SRCDIR)/doc.c \ $(SRCDIR)/encode.c \ + $(SRCDIR)/event.c \ + $(SRCDIR)/export.c \ $(SRCDIR)/file.c \ $(SRCDIR)/finfo.c \ + $(SRCDIR)/foci.c \ + $(SRCDIR)/fusefs.c \ + $(SRCDIR)/glob.c \ $(SRCDIR)/graph.c \ + $(SRCDIR)/gzip.c \ $(SRCDIR)/http.c \ $(SRCDIR)/http_socket.c \ + $(SRCDIR)/http_ssl.c \ $(SRCDIR)/http_transport.c \ + $(SRCDIR)/import.c \ $(SRCDIR)/info.c \ + $(SRCDIR)/json.c \ + $(SRCDIR)/json_artifact.c \ + $(SRCDIR)/json_branch.c \ + $(SRCDIR)/json_config.c \ + $(SRCDIR)/json_diff.c \ + $(SRCDIR)/json_dir.c \ + $(SRCDIR)/json_finfo.c \ + $(SRCDIR)/json_login.c \ + $(SRCDIR)/json_query.c \ + $(SRCDIR)/json_report.c \ + $(SRCDIR)/json_status.c \ + $(SRCDIR)/json_tag.c \ + $(SRCDIR)/json_timeline.c \ + $(SRCDIR)/json_user.c \ + $(SRCDIR)/json_wiki.c \ + $(SRCDIR)/leaf.c \ + $(SRCDIR)/loadctrl.c \ $(SRCDIR)/login.c \ + $(SRCDIR)/lookslike.c \ $(SRCDIR)/main.c \ $(SRCDIR)/manifest.c \ + $(SRCDIR)/markdown.c \ + $(SRCDIR)/markdown_html.c \ $(SRCDIR)/md5.c \ $(SRCDIR)/merge.c \ $(SRCDIR)/merge3.c \ + $(SRCDIR)/moderate.c \ $(SRCDIR)/name.c \ + $(SRCDIR)/path.c \ + $(SRCDIR)/piechart.c \ $(SRCDIR)/pivot.c \ + $(SRCDIR)/popen.c \ $(SRCDIR)/pqueue.c \ $(SRCDIR)/printf.c \ + $(SRCDIR)/publish.c \ + $(SRCDIR)/purge.c \ $(SRCDIR)/rebuild.c \ + $(SRCDIR)/regexp.c \ $(SRCDIR)/report.c \ $(SRCDIR)/rss.c \ $(SRCDIR)/schema.c \ $(SRCDIR)/search.c \ $(SRCDIR)/setup.c \ $(SRCDIR)/sha1.c \ $(SRCDIR)/shun.c \ + $(SRCDIR)/sitemap.c \ $(SRCDIR)/skins.c \ + $(SRCDIR)/sqlcmd.c \ + $(SRCDIR)/stash.c \ $(SRCDIR)/stat.c \ + $(SRCDIR)/statrep.c \ $(SRCDIR)/style.c \ $(SRCDIR)/sync.c \ $(SRCDIR)/tag.c \ + $(SRCDIR)/tar.c \ $(SRCDIR)/th_main.c \ $(SRCDIR)/timeline.c \ $(SRCDIR)/tkt.c \ $(SRCDIR)/tktsetup.c \ $(SRCDIR)/undo.c \ + $(SRCDIR)/unicode.c \ $(SRCDIR)/update.c \ $(SRCDIR)/url.c \ $(SRCDIR)/user.c \ + $(SRCDIR)/utf8.c \ + $(SRCDIR)/util.c \ $(SRCDIR)/verify.c \ $(SRCDIR)/vfile.c \ $(SRCDIR)/wiki.c \ $(SRCDIR)/wikiformat.c \ + $(SRCDIR)/winfile.c \ $(SRCDIR)/winhttp.c \ + $(SRCDIR)/wysiwyg.c \ $(SRCDIR)/xfer.c \ + $(SRCDIR)/xfersetup.c \ $(SRCDIR)/zip.c +EXTRA_FILES = \ + $(SRCDIR)/../skins/aht/details.txt \ + $(SRCDIR)/../skins/black_and_white/css.txt \ + $(SRCDIR)/../skins/black_and_white/details.txt \ + $(SRCDIR)/../skins/black_and_white/footer.txt \ + $(SRCDIR)/../skins/black_and_white/header.txt \ + $(SRCDIR)/../skins/blitz/css.txt \ + $(SRCDIR)/../skins/blitz/details.txt \ + $(SRCDIR)/../skins/blitz/footer.txt \ + $(SRCDIR)/../skins/blitz/header.txt \ + $(SRCDIR)/../skins/blitz/ticket.txt \ + $(SRCDIR)/../skins/blitz_no_logo/css.txt \ + $(SRCDIR)/../skins/blitz_no_logo/details.txt \ + $(SRCDIR)/../skins/blitz_no_logo/footer.txt \ + $(SRCDIR)/../skins/blitz_no_logo/header.txt \ + $(SRCDIR)/../skins/blitz_no_logo/ticket.txt \ + 
$(SRCDIR)/../skins/default/css.txt \ + $(SRCDIR)/../skins/default/details.txt \ + $(SRCDIR)/../skins/default/footer.txt \ + $(SRCDIR)/../skins/default/header.txt \ + $(SRCDIR)/../skins/eagle/css.txt \ + $(SRCDIR)/../skins/eagle/details.txt \ + $(SRCDIR)/../skins/eagle/footer.txt \ + $(SRCDIR)/../skins/eagle/header.txt \ + $(SRCDIR)/../skins/enhanced1/css.txt \ + $(SRCDIR)/../skins/enhanced1/details.txt \ + $(SRCDIR)/../skins/enhanced1/footer.txt \ + $(SRCDIR)/../skins/enhanced1/header.txt \ + $(SRCDIR)/../skins/khaki/css.txt \ + $(SRCDIR)/../skins/khaki/details.txt \ + $(SRCDIR)/../skins/khaki/footer.txt \ + $(SRCDIR)/../skins/khaki/header.txt \ + $(SRCDIR)/../skins/original/css.txt \ + $(SRCDIR)/../skins/original/details.txt \ + $(SRCDIR)/../skins/original/footer.txt \ + $(SRCDIR)/../skins/original/header.txt \ + $(SRCDIR)/../skins/plain_gray/css.txt \ + $(SRCDIR)/../skins/plain_gray/details.txt \ + $(SRCDIR)/../skins/plain_gray/footer.txt \ + $(SRCDIR)/../skins/plain_gray/header.txt \ + $(SRCDIR)/../skins/rounded1/css.txt \ + $(SRCDIR)/../skins/rounded1/details.txt \ + $(SRCDIR)/../skins/rounded1/footer.txt \ + $(SRCDIR)/../skins/rounded1/header.txt \ + $(SRCDIR)/../skins/xekri/css.txt \ + $(SRCDIR)/../skins/xekri/details.txt \ + $(SRCDIR)/../skins/xekri/footer.txt \ + $(SRCDIR)/../skins/xekri/header.txt \ + $(SRCDIR)/diff.tcl \ + $(SRCDIR)/markdown.md + TRANS_SRC = \ - add_.c \ - allrepo_.c \ - attach_.c \ - bag_.c \ - blob_.c \ - branch_.c \ - browse_.c \ - captcha_.c \ - cgi_.c \ - checkin_.c \ - checkout_.c \ - clearsign_.c \ - clone_.c \ - comformat_.c \ - configure_.c \ - content_.c \ - db_.c \ - delta_.c \ - deltacmd_.c \ - descendants_.c \ - diff_.c \ - diffcmd_.c \ - doc_.c \ - encode_.c \ - file_.c \ - finfo_.c \ - graph_.c \ - http_.c \ - http_socket_.c \ - http_transport_.c \ - info_.c \ - login_.c \ - main_.c \ - manifest_.c \ - md5_.c \ - merge_.c \ - merge3_.c \ - name_.c \ - pivot_.c \ - pqueue_.c \ - printf_.c \ - rebuild_.c \ - report_.c \ - rss_.c \ - schema_.c \ - search_.c \ - setup_.c \ - sha1_.c \ - shun_.c \ - skins_.c \ - stat_.c \ - style_.c \ - sync_.c \ - tag_.c \ - th_main_.c \ - timeline_.c \ - tkt_.c \ - tktsetup_.c \ - undo_.c \ - update_.c \ - url_.c \ - user_.c \ - verify_.c \ - vfile_.c \ - wiki_.c \ - wikiformat_.c \ - winhttp_.c \ - xfer_.c \ - zip_.c + $(OBJDIR)/add_.c \ + $(OBJDIR)/allrepo_.c \ + $(OBJDIR)/attach_.c \ + $(OBJDIR)/bag_.c \ + $(OBJDIR)/bisect_.c \ + $(OBJDIR)/blob_.c \ + $(OBJDIR)/branch_.c \ + $(OBJDIR)/browse_.c \ + $(OBJDIR)/builtin_.c \ + $(OBJDIR)/bundle_.c \ + $(OBJDIR)/cache_.c \ + $(OBJDIR)/captcha_.c \ + $(OBJDIR)/cgi_.c \ + $(OBJDIR)/checkin_.c \ + $(OBJDIR)/checkout_.c \ + $(OBJDIR)/clearsign_.c \ + $(OBJDIR)/clone_.c \ + $(OBJDIR)/comformat_.c \ + $(OBJDIR)/configure_.c \ + $(OBJDIR)/content_.c \ + $(OBJDIR)/db_.c \ + $(OBJDIR)/delta_.c \ + $(OBJDIR)/deltacmd_.c \ + $(OBJDIR)/descendants_.c \ + $(OBJDIR)/diff_.c \ + $(OBJDIR)/diffcmd_.c \ + $(OBJDIR)/doc_.c \ + $(OBJDIR)/encode_.c \ + $(OBJDIR)/event_.c \ + $(OBJDIR)/export_.c \ + $(OBJDIR)/file_.c \ + $(OBJDIR)/finfo_.c \ + $(OBJDIR)/foci_.c \ + $(OBJDIR)/fusefs_.c \ + $(OBJDIR)/glob_.c \ + $(OBJDIR)/graph_.c \ + $(OBJDIR)/gzip_.c \ + $(OBJDIR)/http_.c \ + $(OBJDIR)/http_socket_.c \ + $(OBJDIR)/http_ssl_.c \ + $(OBJDIR)/http_transport_.c \ + $(OBJDIR)/import_.c \ + $(OBJDIR)/info_.c \ + $(OBJDIR)/json_.c \ + $(OBJDIR)/json_artifact_.c \ + $(OBJDIR)/json_branch_.c \ + $(OBJDIR)/json_config_.c \ + $(OBJDIR)/json_diff_.c \ + $(OBJDIR)/json_dir_.c \ + $(OBJDIR)/json_finfo_.c 
\ + $(OBJDIR)/json_login_.c \ + $(OBJDIR)/json_query_.c \ + $(OBJDIR)/json_report_.c \ + $(OBJDIR)/json_status_.c \ + $(OBJDIR)/json_tag_.c \ + $(OBJDIR)/json_timeline_.c \ + $(OBJDIR)/json_user_.c \ + $(OBJDIR)/json_wiki_.c \ + $(OBJDIR)/leaf_.c \ + $(OBJDIR)/loadctrl_.c \ + $(OBJDIR)/login_.c \ + $(OBJDIR)/lookslike_.c \ + $(OBJDIR)/main_.c \ + $(OBJDIR)/manifest_.c \ + $(OBJDIR)/markdown_.c \ + $(OBJDIR)/markdown_html_.c \ + $(OBJDIR)/md5_.c \ + $(OBJDIR)/merge_.c \ + $(OBJDIR)/merge3_.c \ + $(OBJDIR)/moderate_.c \ + $(OBJDIR)/name_.c \ + $(OBJDIR)/path_.c \ + $(OBJDIR)/piechart_.c \ + $(OBJDIR)/pivot_.c \ + $(OBJDIR)/popen_.c \ + $(OBJDIR)/pqueue_.c \ + $(OBJDIR)/printf_.c \ + $(OBJDIR)/publish_.c \ + $(OBJDIR)/purge_.c \ + $(OBJDIR)/rebuild_.c \ + $(OBJDIR)/regexp_.c \ + $(OBJDIR)/report_.c \ + $(OBJDIR)/rss_.c \ + $(OBJDIR)/schema_.c \ + $(OBJDIR)/search_.c \ + $(OBJDIR)/setup_.c \ + $(OBJDIR)/sha1_.c \ + $(OBJDIR)/shun_.c \ + $(OBJDIR)/sitemap_.c \ + $(OBJDIR)/skins_.c \ + $(OBJDIR)/sqlcmd_.c \ + $(OBJDIR)/stash_.c \ + $(OBJDIR)/stat_.c \ + $(OBJDIR)/statrep_.c \ + $(OBJDIR)/style_.c \ + $(OBJDIR)/sync_.c \ + $(OBJDIR)/tag_.c \ + $(OBJDIR)/tar_.c \ + $(OBJDIR)/th_main_.c \ + $(OBJDIR)/timeline_.c \ + $(OBJDIR)/tkt_.c \ + $(OBJDIR)/tktsetup_.c \ + $(OBJDIR)/undo_.c \ + $(OBJDIR)/unicode_.c \ + $(OBJDIR)/update_.c \ + $(OBJDIR)/url_.c \ + $(OBJDIR)/user_.c \ + $(OBJDIR)/utf8_.c \ + $(OBJDIR)/util_.c \ + $(OBJDIR)/verify_.c \ + $(OBJDIR)/vfile_.c \ + $(OBJDIR)/wiki_.c \ + $(OBJDIR)/wikiformat_.c \ + $(OBJDIR)/winfile_.c \ + $(OBJDIR)/winhttp_.c \ + $(OBJDIR)/wysiwyg_.c \ + $(OBJDIR)/xfer_.c \ + $(OBJDIR)/xfersetup_.c \ + $(OBJDIR)/zip_.c OBJ = \ $(OBJDIR)/add.o \ $(OBJDIR)/allrepo.o \ $(OBJDIR)/attach.o \ $(OBJDIR)/bag.o \ + $(OBJDIR)/bisect.o \ $(OBJDIR)/blob.o \ $(OBJDIR)/branch.o \ $(OBJDIR)/browse.o \ + $(OBJDIR)/builtin.o \ + $(OBJDIR)/bundle.o \ + $(OBJDIR)/cache.o \ $(OBJDIR)/captcha.o \ $(OBJDIR)/cgi.o \ $(OBJDIR)/checkin.o \ $(OBJDIR)/checkout.o \ $(OBJDIR)/clearsign.o \ @@ -177,596 +333,1333 @@ $(OBJDIR)/descendants.o \ $(OBJDIR)/diff.o \ $(OBJDIR)/diffcmd.o \ $(OBJDIR)/doc.o \ $(OBJDIR)/encode.o \ + $(OBJDIR)/event.o \ + $(OBJDIR)/export.o \ $(OBJDIR)/file.o \ $(OBJDIR)/finfo.o \ + $(OBJDIR)/foci.o \ + $(OBJDIR)/fusefs.o \ + $(OBJDIR)/glob.o \ $(OBJDIR)/graph.o \ + $(OBJDIR)/gzip.o \ $(OBJDIR)/http.o \ $(OBJDIR)/http_socket.o \ + $(OBJDIR)/http_ssl.o \ $(OBJDIR)/http_transport.o \ + $(OBJDIR)/import.o \ $(OBJDIR)/info.o \ + $(OBJDIR)/json.o \ + $(OBJDIR)/json_artifact.o \ + $(OBJDIR)/json_branch.o \ + $(OBJDIR)/json_config.o \ + $(OBJDIR)/json_diff.o \ + $(OBJDIR)/json_dir.o \ + $(OBJDIR)/json_finfo.o \ + $(OBJDIR)/json_login.o \ + $(OBJDIR)/json_query.o \ + $(OBJDIR)/json_report.o \ + $(OBJDIR)/json_status.o \ + $(OBJDIR)/json_tag.o \ + $(OBJDIR)/json_timeline.o \ + $(OBJDIR)/json_user.o \ + $(OBJDIR)/json_wiki.o \ + $(OBJDIR)/leaf.o \ + $(OBJDIR)/loadctrl.o \ $(OBJDIR)/login.o \ + $(OBJDIR)/lookslike.o \ $(OBJDIR)/main.o \ $(OBJDIR)/manifest.o \ + $(OBJDIR)/markdown.o \ + $(OBJDIR)/markdown_html.o \ $(OBJDIR)/md5.o \ $(OBJDIR)/merge.o \ $(OBJDIR)/merge3.o \ + $(OBJDIR)/moderate.o \ $(OBJDIR)/name.o \ + $(OBJDIR)/path.o \ + $(OBJDIR)/piechart.o \ $(OBJDIR)/pivot.o \ + $(OBJDIR)/popen.o \ $(OBJDIR)/pqueue.o \ $(OBJDIR)/printf.o \ + $(OBJDIR)/publish.o \ + $(OBJDIR)/purge.o \ $(OBJDIR)/rebuild.o \ + $(OBJDIR)/regexp.o \ $(OBJDIR)/report.o \ $(OBJDIR)/rss.o \ $(OBJDIR)/schema.o \ $(OBJDIR)/search.o \ $(OBJDIR)/setup.o \ $(OBJDIR)/sha1.o \ $(OBJDIR)/shun.o \ + 
$(OBJDIR)/sitemap.o \ $(OBJDIR)/skins.o \ + $(OBJDIR)/sqlcmd.o \ + $(OBJDIR)/stash.o \ $(OBJDIR)/stat.o \ + $(OBJDIR)/statrep.o \ $(OBJDIR)/style.o \ $(OBJDIR)/sync.o \ $(OBJDIR)/tag.o \ + $(OBJDIR)/tar.o \ $(OBJDIR)/th_main.o \ $(OBJDIR)/timeline.o \ $(OBJDIR)/tkt.o \ $(OBJDIR)/tktsetup.o \ $(OBJDIR)/undo.o \ + $(OBJDIR)/unicode.o \ $(OBJDIR)/update.o \ $(OBJDIR)/url.o \ $(OBJDIR)/user.o \ + $(OBJDIR)/utf8.o \ + $(OBJDIR)/util.o \ $(OBJDIR)/verify.o \ $(OBJDIR)/vfile.o \ $(OBJDIR)/wiki.o \ $(OBJDIR)/wikiformat.o \ + $(OBJDIR)/winfile.o \ $(OBJDIR)/winhttp.o \ + $(OBJDIR)/wysiwyg.o \ $(OBJDIR)/xfer.o \ + $(OBJDIR)/xfersetup.o \ $(OBJDIR)/zip.o APPNAME = fossil$(E) all: $(OBJDIR) $(APPNAME) install: $(APPNAME) + mkdir -p $(INSTALLDIR) mv $(APPNAME) $(INSTALLDIR) + +codecheck: $(TRANS_SRC) $(OBJDIR)/codecheck1 + $(OBJDIR)/codecheck1 $(TRANS_SRC) $(OBJDIR): -mkdir $(OBJDIR) -translate: $(SRCDIR)/translate.c - $(BCC) -o translate $(SRCDIR)/translate.c - -makeheaders: $(SRCDIR)/makeheaders.c - $(BCC) -o makeheaders $(SRCDIR)/makeheaders.c - -mkindex: $(SRCDIR)/mkindex.c - $(BCC) -o mkindex $(SRCDIR)/mkindex.c - -# WARNING. DANGER. Running the testsuite modifies the repository the +$(OBJDIR)/translate: $(SRCDIR)/translate.c + $(BCC) -o $(OBJDIR)/translate $(SRCDIR)/translate.c + +$(OBJDIR)/makeheaders: $(SRCDIR)/makeheaders.c + $(BCC) -o $(OBJDIR)/makeheaders $(SRCDIR)/makeheaders.c + +$(OBJDIR)/mkindex: $(SRCDIR)/mkindex.c + $(BCC) -o $(OBJDIR)/mkindex $(SRCDIR)/mkindex.c + +$(OBJDIR)/mkbuiltin: $(SRCDIR)/mkbuiltin.c + $(BCC) -o $(OBJDIR)/mkbuiltin $(SRCDIR)/mkbuiltin.c + +$(OBJDIR)/mkversion: $(SRCDIR)/mkversion.c + $(BCC) -o $(OBJDIR)/mkversion $(SRCDIR)/mkversion.c + +$(OBJDIR)/codecheck1: $(SRCDIR)/codecheck1.c + $(BCC) -o $(OBJDIR)/codecheck1 $(SRCDIR)/codecheck1.c + +# WARNING. DANGER. Running the test suite modifies the repository the # build is done from, i.e. the checkout belongs to. Do not sync/push # the repository after running the tests. -test: $(APPNAME) - $(TCLSH) test/tester.tcl $(APPNAME) - -VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest - awk '{ printf "#define MANIFEST_UUID \"%s\"\n", $$1}' $(SRCDIR)/../manifest.uuid >VERSION.h - awk '{ printf "#define MANIFEST_VERSION \"[%.10s]\"\n", $$1}' $(SRCDIR)/../manifest.uuid >>VERSION.h - awk '$$1=="D"{printf "#define MANIFEST_DATE \"%s %s\"\n", substr($$2,1,10),substr($$2,12)}' $(SRCDIR)/../manifest >>VERSION.h - -$(APPNAME): headers $(OBJ) $(OBJDIR)/sqlite3.o $(OBJDIR)/th.o $(OBJDIR)/th_lang.o - $(TCC) -o $(APPNAME) $(OBJ) $(OBJDIR)/sqlite3.o $(OBJDIR)/th.o $(OBJDIR)/th_lang.o $(LIB) +test: $(OBJDIR) $(APPNAME) + $(TCLSH) $(SRCDIR)/../test/tester.tcl $(APPNAME) + +$(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(SRCDIR)/../VERSION $(OBJDIR)/mkversion + $(OBJDIR)/mkversion $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(SRCDIR)/../VERSION >$(OBJDIR)/VERSION.h + +# Setup the options used to compile the included SQLite library. +SQLITE_OPTIONS = -DNDEBUG=1 \ + -DSQLITE_OMIT_LOAD_EXTENSION=1 \ + -DSQLITE_ENABLE_LOCKING_STYLE=0 \ + -DSQLITE_THREADSAFE=0 \ + -DSQLITE_DEFAULT_FILE_FORMAT=4 \ + -DSQLITE_OMIT_DEPRECATED \ + -DSQLITE_ENABLE_EXPLAIN_COMMENTS \ + -DSQLITE_ENABLE_FTS4 \ + -DSQLITE_ENABLE_FTS3_PARENTHESIS \ + -DSQLITE_ENABLE_DBSTAT_VTAB \ + -DSQLITE_ENABLE_JSON1 \ + -DSQLITE_ENABLE_FTS5 + +# Setup the options used to compile the included SQLite shell. 
+SHELL_OPTIONS = -Dmain=sqlite3_shell \ + -DSQLITE_OMIT_LOAD_EXTENSION=1 \ + -DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) \ + -DSQLITE_SHELL_DBNAME_PROC=fossil_open + +# Setup the options used to compile the included miniz library. +MINIZ_OPTIONS = -DMINIZ_NO_STDIO \ + -DMINIZ_NO_TIME \ + -DMINIZ_NO_ARCHIVE_APIS + +# The USE_SYSTEM_SQLITE variable may be undefined, set to 0, or set +# to 1. If it is set to 1, then there is no need to build or link +# the sqlite3.o object. Instead, the system SQLite will be linked +# using -lsqlite3. +SQLITE3_OBJ.1 = +SQLITE3_OBJ.0 = $(OBJDIR)/sqlite3.o +SQLITE3_OBJ. = $(SQLITE3_OBJ.0) + +# The FOSSIL_ENABLE_MINIZ variable may be undefined, set to 0, or +# set to 1. If it is set to 1, the miniz library included in the +# source tree should be used; otherwise, it should not. +MINIZ_OBJ.0 = +MINIZ_OBJ.1 = $(OBJDIR)/miniz.o +MINIZ_OBJ. = $(MINIZ_OBJ.0) + +# The USE_LINENOISE variable may be undefined, set to 0, or set +# to 1. If it is set to 0, then there is no need to build or link +# the linenoise.o object. +LINENOISE_DEF.0 = +LINENOISE_DEF.1 = -DHAVE_LINENOISE +LINENOISE_DEF. = $(LINENOISE_DEF.0) +LINENOISE_OBJ.0 = +LINENOISE_OBJ.1 = $(OBJDIR)/linenoise.o +LINENOISE_OBJ. = $(LINENOISE_OBJ.0) + + +EXTRAOBJ = \ + $(SQLITE3_OBJ.$(USE_SYSTEM_SQLITE)) \ + $(MINIZ_OBJ.$(FOSSIL_ENABLE_MINIZ)) \ + $(LINENOISE_OBJ.$(USE_LINENOISE)) \ + $(OBJDIR)/shell.o \ + $(OBJDIR)/th.o \ + $(OBJDIR)/th_lang.o \ + $(OBJDIR)/th_tcl.o \ + $(OBJDIR)/cson_amalgamation.o + + +$(APPNAME): $(OBJDIR)/headers $(OBJDIR)/codecheck1 $(OBJ) $(EXTRAOBJ) + $(OBJDIR)/codecheck1 $(TRANS_SRC) + $(TCC) -o $(APPNAME) $(OBJ) $(EXTRAOBJ) $(LIB) # This rule prevents make from using its default rules to try build # an executable named "manifest" out of the file named "manifest.c" # -$(SRCDIR)/../manifest: +$(SRCDIR)/../manifest: # noop -clean: - rm -f $(OBJDIR)/*.o *_.c $(APPNAME) VERSION.h - rm -f translate makeheaders mkindex page_index.h headers - rm -f add.h allrepo.h attach.h bag.h blob.h branch.h browse.h captcha.h cgi.h checkin.h checkout.h clearsign.h clone.h comformat.h configure.h content.h db.h delta.h deltacmd.h descendants.h diff.h diffcmd.h doc.h encode.h file.h finfo.h graph.h http.h http_socket.h http_transport.h info.h login.h main.h manifest.h md5.h merge.h merge3.h name.h pivot.h pqueue.h printf.h rebuild.h report.h rss.h schema.h search.h setup.h sha1.h shun.h skins.h stat.h style.h sync.h tag.h th_main.h timeline.h tkt.h tktsetup.h undo.h update.h url.h user.h verify.h vfile.h wiki.h wikiformat.h winhttp.h xfer.h zip.h - -page_index.h: $(TRANS_SRC) mkindex - ./mkindex $(TRANS_SRC) >$@ -headers: page_index.h makeheaders VERSION.h - ./makeheaders add_.c:add.h allrepo_.c:allrepo.h attach_.c:attach.h bag_.c:bag.h blob_.c:blob.h branch_.c:branch.h browse_.c:browse.h captcha_.c:captcha.h cgi_.c:cgi.h checkin_.c:checkin.h checkout_.c:checkout.h clearsign_.c:clearsign.h clone_.c:clone.h comformat_.c:comformat.h configure_.c:configure.h content_.c:content.h db_.c:db.h delta_.c:delta.h deltacmd_.c:deltacmd.h descendants_.c:descendants.h diff_.c:diff.h diffcmd_.c:diffcmd.h doc_.c:doc.h encode_.c:encode.h file_.c:file.h finfo_.c:finfo.h graph_.c:graph.h http_.c:http.h http_socket_.c:http_socket.h http_transport_.c:http_transport.h info_.c:info.h login_.c:login.h main_.c:main.h manifest_.c:manifest.h md5_.c:md5.h merge_.c:merge.h merge3_.c:merge3.h name_.c:name.h pivot_.c:pivot.h pqueue_.c:pqueue.h printf_.c:printf.h rebuild_.c:rebuild.h report_.c:report.h rss_.c:rss.h schema_.c:schema.h 
search_.c:search.h setup_.c:setup.h sha1_.c:sha1.h shun_.c:shun.h skins_.c:skins.h stat_.c:stat.h style_.c:style.h sync_.c:sync.h tag_.c:tag.h th_main_.c:th_main.h timeline_.c:timeline.h tkt_.c:tkt.h tktsetup_.c:tktsetup.h undo_.c:undo.h update_.c:update.h url_.c:url.h user_.c:user.h verify_.c:verify.h vfile_.c:vfile.h wiki_.c:wiki.h wikiformat_.c:wikiformat.h winhttp_.c:winhttp.h xfer_.c:xfer.h zip_.c:zip.h $(SRCDIR)/sqlite3.h $(SRCDIR)/th.h VERSION.h - touch headers -headers: Makefile +clean: + rm -rf $(OBJDIR)/* $(APPNAME) + + +$(OBJDIR)/page_index.h: $(TRANS_SRC) $(OBJDIR)/mkindex + $(OBJDIR)/mkindex $(TRANS_SRC) >$@ + +$(OBJDIR)/builtin_data.h: $(OBJDIR)/mkbuiltin $(EXTRA_FILES) + $(OBJDIR)/mkbuiltin --prefix $(SRCDIR)/ $(EXTRA_FILES) >$@ + +$(OBJDIR)/headers: $(OBJDIR)/page_index.h $(OBJDIR)/builtin_data.h $(OBJDIR)/makeheaders $(OBJDIR)/VERSION.h + $(OBJDIR)/makeheaders $(OBJDIR)/add_.c:$(OBJDIR)/add.h \ + $(OBJDIR)/allrepo_.c:$(OBJDIR)/allrepo.h \ + $(OBJDIR)/attach_.c:$(OBJDIR)/attach.h \ + $(OBJDIR)/bag_.c:$(OBJDIR)/bag.h \ + $(OBJDIR)/bisect_.c:$(OBJDIR)/bisect.h \ + $(OBJDIR)/blob_.c:$(OBJDIR)/blob.h \ + $(OBJDIR)/branch_.c:$(OBJDIR)/branch.h \ + $(OBJDIR)/browse_.c:$(OBJDIR)/browse.h \ + $(OBJDIR)/builtin_.c:$(OBJDIR)/builtin.h \ + $(OBJDIR)/bundle_.c:$(OBJDIR)/bundle.h \ + $(OBJDIR)/cache_.c:$(OBJDIR)/cache.h \ + $(OBJDIR)/captcha_.c:$(OBJDIR)/captcha.h \ + $(OBJDIR)/cgi_.c:$(OBJDIR)/cgi.h \ + $(OBJDIR)/checkin_.c:$(OBJDIR)/checkin.h \ + $(OBJDIR)/checkout_.c:$(OBJDIR)/checkout.h \ + $(OBJDIR)/clearsign_.c:$(OBJDIR)/clearsign.h \ + $(OBJDIR)/clone_.c:$(OBJDIR)/clone.h \ + $(OBJDIR)/comformat_.c:$(OBJDIR)/comformat.h \ + $(OBJDIR)/configure_.c:$(OBJDIR)/configure.h \ + $(OBJDIR)/content_.c:$(OBJDIR)/content.h \ + $(OBJDIR)/db_.c:$(OBJDIR)/db.h \ + $(OBJDIR)/delta_.c:$(OBJDIR)/delta.h \ + $(OBJDIR)/deltacmd_.c:$(OBJDIR)/deltacmd.h \ + $(OBJDIR)/descendants_.c:$(OBJDIR)/descendants.h \ + $(OBJDIR)/diff_.c:$(OBJDIR)/diff.h \ + $(OBJDIR)/diffcmd_.c:$(OBJDIR)/diffcmd.h \ + $(OBJDIR)/doc_.c:$(OBJDIR)/doc.h \ + $(OBJDIR)/encode_.c:$(OBJDIR)/encode.h \ + $(OBJDIR)/event_.c:$(OBJDIR)/event.h \ + $(OBJDIR)/export_.c:$(OBJDIR)/export.h \ + $(OBJDIR)/file_.c:$(OBJDIR)/file.h \ + $(OBJDIR)/finfo_.c:$(OBJDIR)/finfo.h \ + $(OBJDIR)/foci_.c:$(OBJDIR)/foci.h \ + $(OBJDIR)/fusefs_.c:$(OBJDIR)/fusefs.h \ + $(OBJDIR)/glob_.c:$(OBJDIR)/glob.h \ + $(OBJDIR)/graph_.c:$(OBJDIR)/graph.h \ + $(OBJDIR)/gzip_.c:$(OBJDIR)/gzip.h \ + $(OBJDIR)/http_.c:$(OBJDIR)/http.h \ + $(OBJDIR)/http_socket_.c:$(OBJDIR)/http_socket.h \ + $(OBJDIR)/http_ssl_.c:$(OBJDIR)/http_ssl.h \ + $(OBJDIR)/http_transport_.c:$(OBJDIR)/http_transport.h \ + $(OBJDIR)/import_.c:$(OBJDIR)/import.h \ + $(OBJDIR)/info_.c:$(OBJDIR)/info.h \ + $(OBJDIR)/json_.c:$(OBJDIR)/json.h \ + $(OBJDIR)/json_artifact_.c:$(OBJDIR)/json_artifact.h \ + $(OBJDIR)/json_branch_.c:$(OBJDIR)/json_branch.h \ + $(OBJDIR)/json_config_.c:$(OBJDIR)/json_config.h \ + $(OBJDIR)/json_diff_.c:$(OBJDIR)/json_diff.h \ + $(OBJDIR)/json_dir_.c:$(OBJDIR)/json_dir.h \ + $(OBJDIR)/json_finfo_.c:$(OBJDIR)/json_finfo.h \ + $(OBJDIR)/json_login_.c:$(OBJDIR)/json_login.h \ + $(OBJDIR)/json_query_.c:$(OBJDIR)/json_query.h \ + $(OBJDIR)/json_report_.c:$(OBJDIR)/json_report.h \ + $(OBJDIR)/json_status_.c:$(OBJDIR)/json_status.h \ + $(OBJDIR)/json_tag_.c:$(OBJDIR)/json_tag.h \ + $(OBJDIR)/json_timeline_.c:$(OBJDIR)/json_timeline.h \ + $(OBJDIR)/json_user_.c:$(OBJDIR)/json_user.h \ + $(OBJDIR)/json_wiki_.c:$(OBJDIR)/json_wiki.h \ + $(OBJDIR)/leaf_.c:$(OBJDIR)/leaf.h \ + 
$(OBJDIR)/loadctrl_.c:$(OBJDIR)/loadctrl.h \ + $(OBJDIR)/login_.c:$(OBJDIR)/login.h \ + $(OBJDIR)/lookslike_.c:$(OBJDIR)/lookslike.h \ + $(OBJDIR)/main_.c:$(OBJDIR)/main.h \ + $(OBJDIR)/manifest_.c:$(OBJDIR)/manifest.h \ + $(OBJDIR)/markdown_.c:$(OBJDIR)/markdown.h \ + $(OBJDIR)/markdown_html_.c:$(OBJDIR)/markdown_html.h \ + $(OBJDIR)/md5_.c:$(OBJDIR)/md5.h \ + $(OBJDIR)/merge_.c:$(OBJDIR)/merge.h \ + $(OBJDIR)/merge3_.c:$(OBJDIR)/merge3.h \ + $(OBJDIR)/moderate_.c:$(OBJDIR)/moderate.h \ + $(OBJDIR)/name_.c:$(OBJDIR)/name.h \ + $(OBJDIR)/path_.c:$(OBJDIR)/path.h \ + $(OBJDIR)/piechart_.c:$(OBJDIR)/piechart.h \ + $(OBJDIR)/pivot_.c:$(OBJDIR)/pivot.h \ + $(OBJDIR)/popen_.c:$(OBJDIR)/popen.h \ + $(OBJDIR)/pqueue_.c:$(OBJDIR)/pqueue.h \ + $(OBJDIR)/printf_.c:$(OBJDIR)/printf.h \ + $(OBJDIR)/publish_.c:$(OBJDIR)/publish.h \ + $(OBJDIR)/purge_.c:$(OBJDIR)/purge.h \ + $(OBJDIR)/rebuild_.c:$(OBJDIR)/rebuild.h \ + $(OBJDIR)/regexp_.c:$(OBJDIR)/regexp.h \ + $(OBJDIR)/report_.c:$(OBJDIR)/report.h \ + $(OBJDIR)/rss_.c:$(OBJDIR)/rss.h \ + $(OBJDIR)/schema_.c:$(OBJDIR)/schema.h \ + $(OBJDIR)/search_.c:$(OBJDIR)/search.h \ + $(OBJDIR)/setup_.c:$(OBJDIR)/setup.h \ + $(OBJDIR)/sha1_.c:$(OBJDIR)/sha1.h \ + $(OBJDIR)/shun_.c:$(OBJDIR)/shun.h \ + $(OBJDIR)/sitemap_.c:$(OBJDIR)/sitemap.h \ + $(OBJDIR)/skins_.c:$(OBJDIR)/skins.h \ + $(OBJDIR)/sqlcmd_.c:$(OBJDIR)/sqlcmd.h \ + $(OBJDIR)/stash_.c:$(OBJDIR)/stash.h \ + $(OBJDIR)/stat_.c:$(OBJDIR)/stat.h \ + $(OBJDIR)/statrep_.c:$(OBJDIR)/statrep.h \ + $(OBJDIR)/style_.c:$(OBJDIR)/style.h \ + $(OBJDIR)/sync_.c:$(OBJDIR)/sync.h \ + $(OBJDIR)/tag_.c:$(OBJDIR)/tag.h \ + $(OBJDIR)/tar_.c:$(OBJDIR)/tar.h \ + $(OBJDIR)/th_main_.c:$(OBJDIR)/th_main.h \ + $(OBJDIR)/timeline_.c:$(OBJDIR)/timeline.h \ + $(OBJDIR)/tkt_.c:$(OBJDIR)/tkt.h \ + $(OBJDIR)/tktsetup_.c:$(OBJDIR)/tktsetup.h \ + $(OBJDIR)/undo_.c:$(OBJDIR)/undo.h \ + $(OBJDIR)/unicode_.c:$(OBJDIR)/unicode.h \ + $(OBJDIR)/update_.c:$(OBJDIR)/update.h \ + $(OBJDIR)/url_.c:$(OBJDIR)/url.h \ + $(OBJDIR)/user_.c:$(OBJDIR)/user.h \ + $(OBJDIR)/utf8_.c:$(OBJDIR)/utf8.h \ + $(OBJDIR)/util_.c:$(OBJDIR)/util.h \ + $(OBJDIR)/verify_.c:$(OBJDIR)/verify.h \ + $(OBJDIR)/vfile_.c:$(OBJDIR)/vfile.h \ + $(OBJDIR)/wiki_.c:$(OBJDIR)/wiki.h \ + $(OBJDIR)/wikiformat_.c:$(OBJDIR)/wikiformat.h \ + $(OBJDIR)/winfile_.c:$(OBJDIR)/winfile.h \ + $(OBJDIR)/winhttp_.c:$(OBJDIR)/winhttp.h \ + $(OBJDIR)/wysiwyg_.c:$(OBJDIR)/wysiwyg.h \ + $(OBJDIR)/xfer_.c:$(OBJDIR)/xfer.h \ + $(OBJDIR)/xfersetup_.c:$(OBJDIR)/xfersetup.h \ + $(OBJDIR)/zip_.c:$(OBJDIR)/zip.h \ + $(SRCDIR)/sqlite3.h \ + $(SRCDIR)/th.h \ + $(OBJDIR)/VERSION.h + touch $(OBJDIR)/headers +$(OBJDIR)/headers: Makefile +$(OBJDIR)/json.o $(OBJDIR)/json_artifact.o $(OBJDIR)/json_branch.o $(OBJDIR)/json_config.o $(OBJDIR)/json_diff.o $(OBJDIR)/json_dir.o $(OBJDIR)/json_finfo.o $(OBJDIR)/json_login.o $(OBJDIR)/json_query.o $(OBJDIR)/json_report.o $(OBJDIR)/json_status.o $(OBJDIR)/json_tag.o $(OBJDIR)/json_timeline.o $(OBJDIR)/json_user.o $(OBJDIR)/json_wiki.o : $(SRCDIR)/json_detail.h Makefile: -add_.c: $(SRCDIR)/add.c translate - ./translate $(SRCDIR)/add.c >add_.c - -$(OBJDIR)/add.o: add_.c add.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/add.o -c add_.c - -add.h: headers -allrepo_.c: $(SRCDIR)/allrepo.c translate - ./translate $(SRCDIR)/allrepo.c >allrepo_.c - -$(OBJDIR)/allrepo.o: allrepo_.c allrepo.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/allrepo.o -c allrepo_.c - -allrepo.h: headers -attach_.c: $(SRCDIR)/attach.c translate - ./translate $(SRCDIR)/attach.c >attach_.c - -$(OBJDIR)/attach.o: 
attach_.c attach.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/attach.o -c attach_.c - -attach.h: headers -bag_.c: $(SRCDIR)/bag.c translate - ./translate $(SRCDIR)/bag.c >bag_.c - -$(OBJDIR)/bag.o: bag_.c bag.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/bag.o -c bag_.c - -bag.h: headers -blob_.c: $(SRCDIR)/blob.c translate - ./translate $(SRCDIR)/blob.c >blob_.c - -$(OBJDIR)/blob.o: blob_.c blob.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/blob.o -c blob_.c - -blob.h: headers -branch_.c: $(SRCDIR)/branch.c translate - ./translate $(SRCDIR)/branch.c >branch_.c - -$(OBJDIR)/branch.o: branch_.c branch.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/branch.o -c branch_.c - -branch.h: headers -browse_.c: $(SRCDIR)/browse.c translate - ./translate $(SRCDIR)/browse.c >browse_.c - -$(OBJDIR)/browse.o: browse_.c browse.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/browse.o -c browse_.c - -browse.h: headers -captcha_.c: $(SRCDIR)/captcha.c translate - ./translate $(SRCDIR)/captcha.c >captcha_.c - -$(OBJDIR)/captcha.o: captcha_.c captcha.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/captcha.o -c captcha_.c - -captcha.h: headers -cgi_.c: $(SRCDIR)/cgi.c translate - ./translate $(SRCDIR)/cgi.c >cgi_.c - -$(OBJDIR)/cgi.o: cgi_.c cgi.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/cgi.o -c cgi_.c - -cgi.h: headers -checkin_.c: $(SRCDIR)/checkin.c translate - ./translate $(SRCDIR)/checkin.c >checkin_.c - -$(OBJDIR)/checkin.o: checkin_.c checkin.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/checkin.o -c checkin_.c - -checkin.h: headers -checkout_.c: $(SRCDIR)/checkout.c translate - ./translate $(SRCDIR)/checkout.c >checkout_.c - -$(OBJDIR)/checkout.o: checkout_.c checkout.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/checkout.o -c checkout_.c - -checkout.h: headers -clearsign_.c: $(SRCDIR)/clearsign.c translate - ./translate $(SRCDIR)/clearsign.c >clearsign_.c - -$(OBJDIR)/clearsign.o: clearsign_.c clearsign.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/clearsign.o -c clearsign_.c - -clearsign.h: headers -clone_.c: $(SRCDIR)/clone.c translate - ./translate $(SRCDIR)/clone.c >clone_.c - -$(OBJDIR)/clone.o: clone_.c clone.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/clone.o -c clone_.c - -clone.h: headers -comformat_.c: $(SRCDIR)/comformat.c translate - ./translate $(SRCDIR)/comformat.c >comformat_.c - -$(OBJDIR)/comformat.o: comformat_.c comformat.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/comformat.o -c comformat_.c - -comformat.h: headers -configure_.c: $(SRCDIR)/configure.c translate - ./translate $(SRCDIR)/configure.c >configure_.c - -$(OBJDIR)/configure.o: configure_.c configure.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/configure.o -c configure_.c - -configure.h: headers -content_.c: $(SRCDIR)/content.c translate - ./translate $(SRCDIR)/content.c >content_.c - -$(OBJDIR)/content.o: content_.c content.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/content.o -c content_.c - -content.h: headers -db_.c: $(SRCDIR)/db.c translate - ./translate $(SRCDIR)/db.c >db_.c - -$(OBJDIR)/db.o: db_.c db.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/db.o -c db_.c - -db.h: headers -delta_.c: $(SRCDIR)/delta.c translate - ./translate $(SRCDIR)/delta.c >delta_.c - -$(OBJDIR)/delta.o: delta_.c delta.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/delta.o -c delta_.c - -delta.h: headers -deltacmd_.c: $(SRCDIR)/deltacmd.c translate - ./translate $(SRCDIR)/deltacmd.c >deltacmd_.c - -$(OBJDIR)/deltacmd.o: deltacmd_.c deltacmd.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/deltacmd.o -c deltacmd_.c - -deltacmd.h: headers -descendants_.c: 
$(SRCDIR)/descendants.c translate - ./translate $(SRCDIR)/descendants.c >descendants_.c - -$(OBJDIR)/descendants.o: descendants_.c descendants.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/descendants.o -c descendants_.c - -descendants.h: headers -diff_.c: $(SRCDIR)/diff.c translate - ./translate $(SRCDIR)/diff.c >diff_.c - -$(OBJDIR)/diff.o: diff_.c diff.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/diff.o -c diff_.c - -diff.h: headers -diffcmd_.c: $(SRCDIR)/diffcmd.c translate - ./translate $(SRCDIR)/diffcmd.c >diffcmd_.c - -$(OBJDIR)/diffcmd.o: diffcmd_.c diffcmd.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/diffcmd.o -c diffcmd_.c - -diffcmd.h: headers -doc_.c: $(SRCDIR)/doc.c translate - ./translate $(SRCDIR)/doc.c >doc_.c - -$(OBJDIR)/doc.o: doc_.c doc.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/doc.o -c doc_.c - -doc.h: headers -encode_.c: $(SRCDIR)/encode.c translate - ./translate $(SRCDIR)/encode.c >encode_.c - -$(OBJDIR)/encode.o: encode_.c encode.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/encode.o -c encode_.c - -encode.h: headers -file_.c: $(SRCDIR)/file.c translate - ./translate $(SRCDIR)/file.c >file_.c - -$(OBJDIR)/file.o: file_.c file.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/file.o -c file_.c - -file.h: headers -finfo_.c: $(SRCDIR)/finfo.c translate - ./translate $(SRCDIR)/finfo.c >finfo_.c - -$(OBJDIR)/finfo.o: finfo_.c finfo.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/finfo.o -c finfo_.c - -finfo.h: headers -graph_.c: $(SRCDIR)/graph.c translate - ./translate $(SRCDIR)/graph.c >graph_.c - -$(OBJDIR)/graph.o: graph_.c graph.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/graph.o -c graph_.c - -graph.h: headers -http_.c: $(SRCDIR)/http.c translate - ./translate $(SRCDIR)/http.c >http_.c - -$(OBJDIR)/http.o: http_.c http.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/http.o -c http_.c - -http.h: headers -http_socket_.c: $(SRCDIR)/http_socket.c translate - ./translate $(SRCDIR)/http_socket.c >http_socket_.c - -$(OBJDIR)/http_socket.o: http_socket_.c http_socket.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/http_socket.o -c http_socket_.c - -http_socket.h: headers -http_transport_.c: $(SRCDIR)/http_transport.c translate - ./translate $(SRCDIR)/http_transport.c >http_transport_.c - -$(OBJDIR)/http_transport.o: http_transport_.c http_transport.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/http_transport.o -c http_transport_.c - -http_transport.h: headers -info_.c: $(SRCDIR)/info.c translate - ./translate $(SRCDIR)/info.c >info_.c - -$(OBJDIR)/info.o: info_.c info.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/info.o -c info_.c - -info.h: headers -login_.c: $(SRCDIR)/login.c translate - ./translate $(SRCDIR)/login.c >login_.c - -$(OBJDIR)/login.o: login_.c login.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/login.o -c login_.c - -login.h: headers -main_.c: $(SRCDIR)/main.c translate - ./translate $(SRCDIR)/main.c >main_.c - -$(OBJDIR)/main.o: main_.c main.h page_index.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/main.o -c main_.c - -main.h: headers -manifest_.c: $(SRCDIR)/manifest.c translate - ./translate $(SRCDIR)/manifest.c >manifest_.c - -$(OBJDIR)/manifest.o: manifest_.c manifest.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/manifest.o -c manifest_.c - -manifest.h: headers -md5_.c: $(SRCDIR)/md5.c translate - ./translate $(SRCDIR)/md5.c >md5_.c - -$(OBJDIR)/md5.o: md5_.c md5.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/md5.o -c md5_.c - -md5.h: headers -merge_.c: $(SRCDIR)/merge.c translate - ./translate $(SRCDIR)/merge.c >merge_.c - -$(OBJDIR)/merge.o: merge_.c merge.h $(SRCDIR)/config.h - 
$(XTCC) -o $(OBJDIR)/merge.o -c merge_.c - -merge.h: headers -merge3_.c: $(SRCDIR)/merge3.c translate - ./translate $(SRCDIR)/merge3.c >merge3_.c - -$(OBJDIR)/merge3.o: merge3_.c merge3.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/merge3.o -c merge3_.c - -merge3.h: headers -name_.c: $(SRCDIR)/name.c translate - ./translate $(SRCDIR)/name.c >name_.c - -$(OBJDIR)/name.o: name_.c name.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/name.o -c name_.c - -name.h: headers -pivot_.c: $(SRCDIR)/pivot.c translate - ./translate $(SRCDIR)/pivot.c >pivot_.c - -$(OBJDIR)/pivot.o: pivot_.c pivot.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/pivot.o -c pivot_.c - -pivot.h: headers -pqueue_.c: $(SRCDIR)/pqueue.c translate - ./translate $(SRCDIR)/pqueue.c >pqueue_.c - -$(OBJDIR)/pqueue.o: pqueue_.c pqueue.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/pqueue.o -c pqueue_.c - -pqueue.h: headers -printf_.c: $(SRCDIR)/printf.c translate - ./translate $(SRCDIR)/printf.c >printf_.c - -$(OBJDIR)/printf.o: printf_.c printf.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/printf.o -c printf_.c - -printf.h: headers -rebuild_.c: $(SRCDIR)/rebuild.c translate - ./translate $(SRCDIR)/rebuild.c >rebuild_.c - -$(OBJDIR)/rebuild.o: rebuild_.c rebuild.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/rebuild.o -c rebuild_.c - -rebuild.h: headers -report_.c: $(SRCDIR)/report.c translate - ./translate $(SRCDIR)/report.c >report_.c - -$(OBJDIR)/report.o: report_.c report.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/report.o -c report_.c - -report.h: headers -rss_.c: $(SRCDIR)/rss.c translate - ./translate $(SRCDIR)/rss.c >rss_.c - -$(OBJDIR)/rss.o: rss_.c rss.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/rss.o -c rss_.c - -rss.h: headers -schema_.c: $(SRCDIR)/schema.c translate - ./translate $(SRCDIR)/schema.c >schema_.c - -$(OBJDIR)/schema.o: schema_.c schema.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/schema.o -c schema_.c - -schema.h: headers -search_.c: $(SRCDIR)/search.c translate - ./translate $(SRCDIR)/search.c >search_.c - -$(OBJDIR)/search.o: search_.c search.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/search.o -c search_.c - -search.h: headers -setup_.c: $(SRCDIR)/setup.c translate - ./translate $(SRCDIR)/setup.c >setup_.c - -$(OBJDIR)/setup.o: setup_.c setup.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/setup.o -c setup_.c - -setup.h: headers -sha1_.c: $(SRCDIR)/sha1.c translate - ./translate $(SRCDIR)/sha1.c >sha1_.c - -$(OBJDIR)/sha1.o: sha1_.c sha1.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/sha1.o -c sha1_.c - -sha1.h: headers -shun_.c: $(SRCDIR)/shun.c translate - ./translate $(SRCDIR)/shun.c >shun_.c - -$(OBJDIR)/shun.o: shun_.c shun.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/shun.o -c shun_.c - -shun.h: headers -skins_.c: $(SRCDIR)/skins.c translate - ./translate $(SRCDIR)/skins.c >skins_.c - -$(OBJDIR)/skins.o: skins_.c skins.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/skins.o -c skins_.c - -skins.h: headers -stat_.c: $(SRCDIR)/stat.c translate - ./translate $(SRCDIR)/stat.c >stat_.c - -$(OBJDIR)/stat.o: stat_.c stat.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/stat.o -c stat_.c - -stat.h: headers -style_.c: $(SRCDIR)/style.c translate - ./translate $(SRCDIR)/style.c >style_.c - -$(OBJDIR)/style.o: style_.c style.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/style.o -c style_.c - -style.h: headers -sync_.c: $(SRCDIR)/sync.c translate - ./translate $(SRCDIR)/sync.c >sync_.c - -$(OBJDIR)/sync.o: sync_.c sync.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/sync.o -c sync_.c - -sync.h: headers -tag_.c: $(SRCDIR)/tag.c translate - ./translate 
$(SRCDIR)/tag.c >tag_.c - -$(OBJDIR)/tag.o: tag_.c tag.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/tag.o -c tag_.c - -tag.h: headers -th_main_.c: $(SRCDIR)/th_main.c translate - ./translate $(SRCDIR)/th_main.c >th_main_.c - -$(OBJDIR)/th_main.o: th_main_.c th_main.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/th_main.o -c th_main_.c - -th_main.h: headers -timeline_.c: $(SRCDIR)/timeline.c translate - ./translate $(SRCDIR)/timeline.c >timeline_.c - -$(OBJDIR)/timeline.o: timeline_.c timeline.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/timeline.o -c timeline_.c - -timeline.h: headers -tkt_.c: $(SRCDIR)/tkt.c translate - ./translate $(SRCDIR)/tkt.c >tkt_.c - -$(OBJDIR)/tkt.o: tkt_.c tkt.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/tkt.o -c tkt_.c - -tkt.h: headers -tktsetup_.c: $(SRCDIR)/tktsetup.c translate - ./translate $(SRCDIR)/tktsetup.c >tktsetup_.c - -$(OBJDIR)/tktsetup.o: tktsetup_.c tktsetup.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/tktsetup.o -c tktsetup_.c - -tktsetup.h: headers -undo_.c: $(SRCDIR)/undo.c translate - ./translate $(SRCDIR)/undo.c >undo_.c - -$(OBJDIR)/undo.o: undo_.c undo.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/undo.o -c undo_.c - -undo.h: headers -update_.c: $(SRCDIR)/update.c translate - ./translate $(SRCDIR)/update.c >update_.c - -$(OBJDIR)/update.o: update_.c update.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/update.o -c update_.c - -update.h: headers -url_.c: $(SRCDIR)/url.c translate - ./translate $(SRCDIR)/url.c >url_.c - -$(OBJDIR)/url.o: url_.c url.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/url.o -c url_.c - -url.h: headers -user_.c: $(SRCDIR)/user.c translate - ./translate $(SRCDIR)/user.c >user_.c - -$(OBJDIR)/user.o: user_.c user.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/user.o -c user_.c - -user.h: headers -verify_.c: $(SRCDIR)/verify.c translate - ./translate $(SRCDIR)/verify.c >verify_.c - -$(OBJDIR)/verify.o: verify_.c verify.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/verify.o -c verify_.c - -verify.h: headers -vfile_.c: $(SRCDIR)/vfile.c translate - ./translate $(SRCDIR)/vfile.c >vfile_.c - -$(OBJDIR)/vfile.o: vfile_.c vfile.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/vfile.o -c vfile_.c - -vfile.h: headers -wiki_.c: $(SRCDIR)/wiki.c translate - ./translate $(SRCDIR)/wiki.c >wiki_.c - -$(OBJDIR)/wiki.o: wiki_.c wiki.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/wiki.o -c wiki_.c - -wiki.h: headers -wikiformat_.c: $(SRCDIR)/wikiformat.c translate - ./translate $(SRCDIR)/wikiformat.c >wikiformat_.c - -$(OBJDIR)/wikiformat.o: wikiformat_.c wikiformat.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/wikiformat.o -c wikiformat_.c - -wikiformat.h: headers -winhttp_.c: $(SRCDIR)/winhttp.c translate - ./translate $(SRCDIR)/winhttp.c >winhttp_.c - -$(OBJDIR)/winhttp.o: winhttp_.c winhttp.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/winhttp.o -c winhttp_.c - -winhttp.h: headers -xfer_.c: $(SRCDIR)/xfer.c translate - ./translate $(SRCDIR)/xfer.c >xfer_.c - -$(OBJDIR)/xfer.o: xfer_.c xfer.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/xfer.o -c xfer_.c - -xfer.h: headers -zip_.c: $(SRCDIR)/zip.c translate - ./translate $(SRCDIR)/zip.c >zip_.c - -$(OBJDIR)/zip.o: zip_.c zip.h $(SRCDIR)/config.h - $(XTCC) -o $(OBJDIR)/zip.o -c zip_.c - -zip.h: headers +$(OBJDIR)/add_.c: $(SRCDIR)/add.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/add.c >$@ + +$(OBJDIR)/add.o: $(OBJDIR)/add_.c $(OBJDIR)/add.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/add.o -c $(OBJDIR)/add_.c + +$(OBJDIR)/add.h: $(OBJDIR)/headers + +$(OBJDIR)/allrepo_.c: $(SRCDIR)/allrepo.c $(OBJDIR)/translate 
+ $(OBJDIR)/translate $(SRCDIR)/allrepo.c >$@ + +$(OBJDIR)/allrepo.o: $(OBJDIR)/allrepo_.c $(OBJDIR)/allrepo.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/allrepo.o -c $(OBJDIR)/allrepo_.c + +$(OBJDIR)/allrepo.h: $(OBJDIR)/headers + +$(OBJDIR)/attach_.c: $(SRCDIR)/attach.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/attach.c >$@ + +$(OBJDIR)/attach.o: $(OBJDIR)/attach_.c $(OBJDIR)/attach.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/attach.o -c $(OBJDIR)/attach_.c + +$(OBJDIR)/attach.h: $(OBJDIR)/headers + +$(OBJDIR)/bag_.c: $(SRCDIR)/bag.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/bag.c >$@ + +$(OBJDIR)/bag.o: $(OBJDIR)/bag_.c $(OBJDIR)/bag.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/bag.o -c $(OBJDIR)/bag_.c + +$(OBJDIR)/bag.h: $(OBJDIR)/headers + +$(OBJDIR)/bisect_.c: $(SRCDIR)/bisect.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/bisect.c >$@ + +$(OBJDIR)/bisect.o: $(OBJDIR)/bisect_.c $(OBJDIR)/bisect.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/bisect.o -c $(OBJDIR)/bisect_.c + +$(OBJDIR)/bisect.h: $(OBJDIR)/headers + +$(OBJDIR)/blob_.c: $(SRCDIR)/blob.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/blob.c >$@ + +$(OBJDIR)/blob.o: $(OBJDIR)/blob_.c $(OBJDIR)/blob.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/blob.o -c $(OBJDIR)/blob_.c + +$(OBJDIR)/blob.h: $(OBJDIR)/headers + +$(OBJDIR)/branch_.c: $(SRCDIR)/branch.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/branch.c >$@ + +$(OBJDIR)/branch.o: $(OBJDIR)/branch_.c $(OBJDIR)/branch.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/branch.o -c $(OBJDIR)/branch_.c + +$(OBJDIR)/branch.h: $(OBJDIR)/headers + +$(OBJDIR)/browse_.c: $(SRCDIR)/browse.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/browse.c >$@ + +$(OBJDIR)/browse.o: $(OBJDIR)/browse_.c $(OBJDIR)/browse.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/browse.o -c $(OBJDIR)/browse_.c + +$(OBJDIR)/browse.h: $(OBJDIR)/headers + +$(OBJDIR)/builtin_.c: $(SRCDIR)/builtin.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/builtin.c >$@ + +$(OBJDIR)/builtin.o: $(OBJDIR)/builtin_.c $(OBJDIR)/builtin.h $(OBJDIR)/builtin_data.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/builtin.o -c $(OBJDIR)/builtin_.c + +$(OBJDIR)/builtin.h: $(OBJDIR)/headers + +$(OBJDIR)/bundle_.c: $(SRCDIR)/bundle.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/bundle.c >$@ + +$(OBJDIR)/bundle.o: $(OBJDIR)/bundle_.c $(OBJDIR)/bundle.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/bundle.o -c $(OBJDIR)/bundle_.c + +$(OBJDIR)/bundle.h: $(OBJDIR)/headers + +$(OBJDIR)/cache_.c: $(SRCDIR)/cache.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/cache.c >$@ + +$(OBJDIR)/cache.o: $(OBJDIR)/cache_.c $(OBJDIR)/cache.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/cache.o -c $(OBJDIR)/cache_.c + +$(OBJDIR)/cache.h: $(OBJDIR)/headers + +$(OBJDIR)/captcha_.c: $(SRCDIR)/captcha.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/captcha.c >$@ + +$(OBJDIR)/captcha.o: $(OBJDIR)/captcha_.c $(OBJDIR)/captcha.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/captcha.o -c $(OBJDIR)/captcha_.c + +$(OBJDIR)/captcha.h: $(OBJDIR)/headers + +$(OBJDIR)/cgi_.c: $(SRCDIR)/cgi.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/cgi.c >$@ + +$(OBJDIR)/cgi.o: $(OBJDIR)/cgi_.c $(OBJDIR)/cgi.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/cgi.o -c $(OBJDIR)/cgi_.c + +$(OBJDIR)/cgi.h: $(OBJDIR)/headers + +$(OBJDIR)/checkin_.c: $(SRCDIR)/checkin.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/checkin.c >$@ + +$(OBJDIR)/checkin.o: $(OBJDIR)/checkin_.c $(OBJDIR)/checkin.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/checkin.o 
-c $(OBJDIR)/checkin_.c + +$(OBJDIR)/checkin.h: $(OBJDIR)/headers + +$(OBJDIR)/checkout_.c: $(SRCDIR)/checkout.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/checkout.c >$@ + +$(OBJDIR)/checkout.o: $(OBJDIR)/checkout_.c $(OBJDIR)/checkout.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/checkout.o -c $(OBJDIR)/checkout_.c + +$(OBJDIR)/checkout.h: $(OBJDIR)/headers + +$(OBJDIR)/clearsign_.c: $(SRCDIR)/clearsign.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/clearsign.c >$@ + +$(OBJDIR)/clearsign.o: $(OBJDIR)/clearsign_.c $(OBJDIR)/clearsign.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/clearsign.o -c $(OBJDIR)/clearsign_.c + +$(OBJDIR)/clearsign.h: $(OBJDIR)/headers + +$(OBJDIR)/clone_.c: $(SRCDIR)/clone.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/clone.c >$@ + +$(OBJDIR)/clone.o: $(OBJDIR)/clone_.c $(OBJDIR)/clone.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/clone.o -c $(OBJDIR)/clone_.c + +$(OBJDIR)/clone.h: $(OBJDIR)/headers + +$(OBJDIR)/comformat_.c: $(SRCDIR)/comformat.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/comformat.c >$@ + +$(OBJDIR)/comformat.o: $(OBJDIR)/comformat_.c $(OBJDIR)/comformat.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/comformat.o -c $(OBJDIR)/comformat_.c + +$(OBJDIR)/comformat.h: $(OBJDIR)/headers + +$(OBJDIR)/configure_.c: $(SRCDIR)/configure.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/configure.c >$@ + +$(OBJDIR)/configure.o: $(OBJDIR)/configure_.c $(OBJDIR)/configure.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/configure.o -c $(OBJDIR)/configure_.c + +$(OBJDIR)/configure.h: $(OBJDIR)/headers + +$(OBJDIR)/content_.c: $(SRCDIR)/content.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/content.c >$@ + +$(OBJDIR)/content.o: $(OBJDIR)/content_.c $(OBJDIR)/content.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/content.o -c $(OBJDIR)/content_.c + +$(OBJDIR)/content.h: $(OBJDIR)/headers + +$(OBJDIR)/db_.c: $(SRCDIR)/db.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/db.c >$@ + +$(OBJDIR)/db.o: $(OBJDIR)/db_.c $(OBJDIR)/db.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/db.o -c $(OBJDIR)/db_.c + +$(OBJDIR)/db.h: $(OBJDIR)/headers + +$(OBJDIR)/delta_.c: $(SRCDIR)/delta.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/delta.c >$@ + +$(OBJDIR)/delta.o: $(OBJDIR)/delta_.c $(OBJDIR)/delta.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/delta.o -c $(OBJDIR)/delta_.c + +$(OBJDIR)/delta.h: $(OBJDIR)/headers + +$(OBJDIR)/deltacmd_.c: $(SRCDIR)/deltacmd.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/deltacmd.c >$@ + +$(OBJDIR)/deltacmd.o: $(OBJDIR)/deltacmd_.c $(OBJDIR)/deltacmd.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/deltacmd.o -c $(OBJDIR)/deltacmd_.c + +$(OBJDIR)/deltacmd.h: $(OBJDIR)/headers + +$(OBJDIR)/descendants_.c: $(SRCDIR)/descendants.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/descendants.c >$@ + +$(OBJDIR)/descendants.o: $(OBJDIR)/descendants_.c $(OBJDIR)/descendants.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/descendants.o -c $(OBJDIR)/descendants_.c + +$(OBJDIR)/descendants.h: $(OBJDIR)/headers + +$(OBJDIR)/diff_.c: $(SRCDIR)/diff.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/diff.c >$@ + +$(OBJDIR)/diff.o: $(OBJDIR)/diff_.c $(OBJDIR)/diff.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/diff.o -c $(OBJDIR)/diff_.c + +$(OBJDIR)/diff.h: $(OBJDIR)/headers + +$(OBJDIR)/diffcmd_.c: $(SRCDIR)/diffcmd.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/diffcmd.c >$@ + +$(OBJDIR)/diffcmd.o: $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/diffcmd.o -c $(OBJDIR)/diffcmd_.c + 
+$(OBJDIR)/diffcmd.h: $(OBJDIR)/headers + +$(OBJDIR)/doc_.c: $(SRCDIR)/doc.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/doc.c >$@ + +$(OBJDIR)/doc.o: $(OBJDIR)/doc_.c $(OBJDIR)/doc.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/doc.o -c $(OBJDIR)/doc_.c + +$(OBJDIR)/doc.h: $(OBJDIR)/headers + +$(OBJDIR)/encode_.c: $(SRCDIR)/encode.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/encode.c >$@ + +$(OBJDIR)/encode.o: $(OBJDIR)/encode_.c $(OBJDIR)/encode.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/encode.o -c $(OBJDIR)/encode_.c + +$(OBJDIR)/encode.h: $(OBJDIR)/headers + +$(OBJDIR)/event_.c: $(SRCDIR)/event.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/event.c >$@ + +$(OBJDIR)/event.o: $(OBJDIR)/event_.c $(OBJDIR)/event.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/event.o -c $(OBJDIR)/event_.c + +$(OBJDIR)/event.h: $(OBJDIR)/headers + +$(OBJDIR)/export_.c: $(SRCDIR)/export.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/export.c >$@ + +$(OBJDIR)/export.o: $(OBJDIR)/export_.c $(OBJDIR)/export.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/export.o -c $(OBJDIR)/export_.c + +$(OBJDIR)/export.h: $(OBJDIR)/headers + +$(OBJDIR)/file_.c: $(SRCDIR)/file.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/file.c >$@ + +$(OBJDIR)/file.o: $(OBJDIR)/file_.c $(OBJDIR)/file.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/file.o -c $(OBJDIR)/file_.c + +$(OBJDIR)/file.h: $(OBJDIR)/headers + +$(OBJDIR)/finfo_.c: $(SRCDIR)/finfo.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/finfo.c >$@ + +$(OBJDIR)/finfo.o: $(OBJDIR)/finfo_.c $(OBJDIR)/finfo.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/finfo.o -c $(OBJDIR)/finfo_.c + +$(OBJDIR)/finfo.h: $(OBJDIR)/headers + +$(OBJDIR)/foci_.c: $(SRCDIR)/foci.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/foci.c >$@ + +$(OBJDIR)/foci.o: $(OBJDIR)/foci_.c $(OBJDIR)/foci.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/foci.o -c $(OBJDIR)/foci_.c + +$(OBJDIR)/foci.h: $(OBJDIR)/headers + +$(OBJDIR)/fusefs_.c: $(SRCDIR)/fusefs.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/fusefs.c >$@ + +$(OBJDIR)/fusefs.o: $(OBJDIR)/fusefs_.c $(OBJDIR)/fusefs.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/fusefs.o -c $(OBJDIR)/fusefs_.c + +$(OBJDIR)/fusefs.h: $(OBJDIR)/headers + +$(OBJDIR)/glob_.c: $(SRCDIR)/glob.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/glob.c >$@ + +$(OBJDIR)/glob.o: $(OBJDIR)/glob_.c $(OBJDIR)/glob.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/glob.o -c $(OBJDIR)/glob_.c + +$(OBJDIR)/glob.h: $(OBJDIR)/headers + +$(OBJDIR)/graph_.c: $(SRCDIR)/graph.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/graph.c >$@ + +$(OBJDIR)/graph.o: $(OBJDIR)/graph_.c $(OBJDIR)/graph.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/graph.o -c $(OBJDIR)/graph_.c + +$(OBJDIR)/graph.h: $(OBJDIR)/headers + +$(OBJDIR)/gzip_.c: $(SRCDIR)/gzip.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/gzip.c >$@ + +$(OBJDIR)/gzip.o: $(OBJDIR)/gzip_.c $(OBJDIR)/gzip.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/gzip.o -c $(OBJDIR)/gzip_.c + +$(OBJDIR)/gzip.h: $(OBJDIR)/headers + +$(OBJDIR)/http_.c: $(SRCDIR)/http.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/http.c >$@ + +$(OBJDIR)/http.o: $(OBJDIR)/http_.c $(OBJDIR)/http.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/http.o -c $(OBJDIR)/http_.c + +$(OBJDIR)/http.h: $(OBJDIR)/headers + +$(OBJDIR)/http_socket_.c: $(SRCDIR)/http_socket.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/http_socket.c >$@ + +$(OBJDIR)/http_socket.o: $(OBJDIR)/http_socket_.c $(OBJDIR)/http_socket.h $(SRCDIR)/config.h + $(XTCC) -o 
$(OBJDIR)/http_socket.o -c $(OBJDIR)/http_socket_.c + +$(OBJDIR)/http_socket.h: $(OBJDIR)/headers + +$(OBJDIR)/http_ssl_.c: $(SRCDIR)/http_ssl.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/http_ssl.c >$@ + +$(OBJDIR)/http_ssl.o: $(OBJDIR)/http_ssl_.c $(OBJDIR)/http_ssl.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/http_ssl.o -c $(OBJDIR)/http_ssl_.c + +$(OBJDIR)/http_ssl.h: $(OBJDIR)/headers + +$(OBJDIR)/http_transport_.c: $(SRCDIR)/http_transport.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/http_transport.c >$@ + +$(OBJDIR)/http_transport.o: $(OBJDIR)/http_transport_.c $(OBJDIR)/http_transport.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/http_transport.o -c $(OBJDIR)/http_transport_.c + +$(OBJDIR)/http_transport.h: $(OBJDIR)/headers + +$(OBJDIR)/import_.c: $(SRCDIR)/import.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/import.c >$@ + +$(OBJDIR)/import.o: $(OBJDIR)/import_.c $(OBJDIR)/import.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/import.o -c $(OBJDIR)/import_.c + +$(OBJDIR)/import.h: $(OBJDIR)/headers + +$(OBJDIR)/info_.c: $(SRCDIR)/info.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/info.c >$@ + +$(OBJDIR)/info.o: $(OBJDIR)/info_.c $(OBJDIR)/info.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/info.o -c $(OBJDIR)/info_.c + +$(OBJDIR)/info.h: $(OBJDIR)/headers + +$(OBJDIR)/json_.c: $(SRCDIR)/json.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/json.c >$@ + +$(OBJDIR)/json.o: $(OBJDIR)/json_.c $(OBJDIR)/json.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json.o -c $(OBJDIR)/json_.c + +$(OBJDIR)/json.h: $(OBJDIR)/headers + +$(OBJDIR)/json_artifact_.c: $(SRCDIR)/json_artifact.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/json_artifact.c >$@ + +$(OBJDIR)/json_artifact.o: $(OBJDIR)/json_artifact_.c $(OBJDIR)/json_artifact.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_artifact.o -c $(OBJDIR)/json_artifact_.c + +$(OBJDIR)/json_artifact.h: $(OBJDIR)/headers + +$(OBJDIR)/json_branch_.c: $(SRCDIR)/json_branch.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/json_branch.c >$@ + +$(OBJDIR)/json_branch.o: $(OBJDIR)/json_branch_.c $(OBJDIR)/json_branch.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_branch.o -c $(OBJDIR)/json_branch_.c + +$(OBJDIR)/json_branch.h: $(OBJDIR)/headers + +$(OBJDIR)/json_config_.c: $(SRCDIR)/json_config.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/json_config.c >$@ + +$(OBJDIR)/json_config.o: $(OBJDIR)/json_config_.c $(OBJDIR)/json_config.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_config.o -c $(OBJDIR)/json_config_.c + +$(OBJDIR)/json_config.h: $(OBJDIR)/headers + +$(OBJDIR)/json_diff_.c: $(SRCDIR)/json_diff.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/json_diff.c >$@ + +$(OBJDIR)/json_diff.o: $(OBJDIR)/json_diff_.c $(OBJDIR)/json_diff.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_diff.o -c $(OBJDIR)/json_diff_.c + +$(OBJDIR)/json_diff.h: $(OBJDIR)/headers + +$(OBJDIR)/json_dir_.c: $(SRCDIR)/json_dir.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/json_dir.c >$@ + +$(OBJDIR)/json_dir.o: $(OBJDIR)/json_dir_.c $(OBJDIR)/json_dir.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_dir.o -c $(OBJDIR)/json_dir_.c + +$(OBJDIR)/json_dir.h: $(OBJDIR)/headers + +$(OBJDIR)/json_finfo_.c: $(SRCDIR)/json_finfo.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/json_finfo.c >$@ + +$(OBJDIR)/json_finfo.o: $(OBJDIR)/json_finfo_.c $(OBJDIR)/json_finfo.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_finfo.o -c $(OBJDIR)/json_finfo_.c + +$(OBJDIR)/json_finfo.h: $(OBJDIR)/headers + +$(OBJDIR)/json_login_.c: 
$(SRCDIR)/json_login.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/json_login.c >$@ + +$(OBJDIR)/json_login.o: $(OBJDIR)/json_login_.c $(OBJDIR)/json_login.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_login.o -c $(OBJDIR)/json_login_.c + +$(OBJDIR)/json_login.h: $(OBJDIR)/headers + +$(OBJDIR)/json_query_.c: $(SRCDIR)/json_query.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/json_query.c >$@ + +$(OBJDIR)/json_query.o: $(OBJDIR)/json_query_.c $(OBJDIR)/json_query.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_query.o -c $(OBJDIR)/json_query_.c + +$(OBJDIR)/json_query.h: $(OBJDIR)/headers + +$(OBJDIR)/json_report_.c: $(SRCDIR)/json_report.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/json_report.c >$@ + +$(OBJDIR)/json_report.o: $(OBJDIR)/json_report_.c $(OBJDIR)/json_report.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_report.o -c $(OBJDIR)/json_report_.c + +$(OBJDIR)/json_report.h: $(OBJDIR)/headers + +$(OBJDIR)/json_status_.c: $(SRCDIR)/json_status.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/json_status.c >$@ + +$(OBJDIR)/json_status.o: $(OBJDIR)/json_status_.c $(OBJDIR)/json_status.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_status.o -c $(OBJDIR)/json_status_.c + +$(OBJDIR)/json_status.h: $(OBJDIR)/headers + +$(OBJDIR)/json_tag_.c: $(SRCDIR)/json_tag.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/json_tag.c >$@ + +$(OBJDIR)/json_tag.o: $(OBJDIR)/json_tag_.c $(OBJDIR)/json_tag.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_tag.o -c $(OBJDIR)/json_tag_.c + +$(OBJDIR)/json_tag.h: $(OBJDIR)/headers + +$(OBJDIR)/json_timeline_.c: $(SRCDIR)/json_timeline.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/json_timeline.c >$@ + +$(OBJDIR)/json_timeline.o: $(OBJDIR)/json_timeline_.c $(OBJDIR)/json_timeline.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_timeline.o -c $(OBJDIR)/json_timeline_.c + +$(OBJDIR)/json_timeline.h: $(OBJDIR)/headers + +$(OBJDIR)/json_user_.c: $(SRCDIR)/json_user.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/json_user.c >$@ + +$(OBJDIR)/json_user.o: $(OBJDIR)/json_user_.c $(OBJDIR)/json_user.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_user.o -c $(OBJDIR)/json_user_.c + +$(OBJDIR)/json_user.h: $(OBJDIR)/headers + +$(OBJDIR)/json_wiki_.c: $(SRCDIR)/json_wiki.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/json_wiki.c >$@ + +$(OBJDIR)/json_wiki.o: $(OBJDIR)/json_wiki_.c $(OBJDIR)/json_wiki.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_wiki.o -c $(OBJDIR)/json_wiki_.c + +$(OBJDIR)/json_wiki.h: $(OBJDIR)/headers + +$(OBJDIR)/leaf_.c: $(SRCDIR)/leaf.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/leaf.c >$@ + +$(OBJDIR)/leaf.o: $(OBJDIR)/leaf_.c $(OBJDIR)/leaf.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/leaf.o -c $(OBJDIR)/leaf_.c + +$(OBJDIR)/leaf.h: $(OBJDIR)/headers + +$(OBJDIR)/loadctrl_.c: $(SRCDIR)/loadctrl.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/loadctrl.c >$@ + +$(OBJDIR)/loadctrl.o: $(OBJDIR)/loadctrl_.c $(OBJDIR)/loadctrl.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/loadctrl.o -c $(OBJDIR)/loadctrl_.c + +$(OBJDIR)/loadctrl.h: $(OBJDIR)/headers + +$(OBJDIR)/login_.c: $(SRCDIR)/login.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/login.c >$@ + +$(OBJDIR)/login.o: $(OBJDIR)/login_.c $(OBJDIR)/login.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/login.o -c $(OBJDIR)/login_.c + +$(OBJDIR)/login.h: $(OBJDIR)/headers + +$(OBJDIR)/lookslike_.c: $(SRCDIR)/lookslike.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/lookslike.c >$@ + +$(OBJDIR)/lookslike.o: 
$(OBJDIR)/lookslike_.c $(OBJDIR)/lookslike.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/lookslike.o -c $(OBJDIR)/lookslike_.c + +$(OBJDIR)/lookslike.h: $(OBJDIR)/headers + +$(OBJDIR)/main_.c: $(SRCDIR)/main.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/main.c >$@ + +$(OBJDIR)/main.o: $(OBJDIR)/main_.c $(OBJDIR)/main.h $(OBJDIR)/page_index.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/main.o -c $(OBJDIR)/main_.c + +$(OBJDIR)/main.h: $(OBJDIR)/headers + +$(OBJDIR)/manifest_.c: $(SRCDIR)/manifest.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/manifest.c >$@ + +$(OBJDIR)/manifest.o: $(OBJDIR)/manifest_.c $(OBJDIR)/manifest.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/manifest.o -c $(OBJDIR)/manifest_.c + +$(OBJDIR)/manifest.h: $(OBJDIR)/headers + +$(OBJDIR)/markdown_.c: $(SRCDIR)/markdown.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/markdown.c >$@ + +$(OBJDIR)/markdown.o: $(OBJDIR)/markdown_.c $(OBJDIR)/markdown.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/markdown.o -c $(OBJDIR)/markdown_.c + +$(OBJDIR)/markdown.h: $(OBJDIR)/headers + +$(OBJDIR)/markdown_html_.c: $(SRCDIR)/markdown_html.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/markdown_html.c >$@ + +$(OBJDIR)/markdown_html.o: $(OBJDIR)/markdown_html_.c $(OBJDIR)/markdown_html.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/markdown_html.o -c $(OBJDIR)/markdown_html_.c + +$(OBJDIR)/markdown_html.h: $(OBJDIR)/headers + +$(OBJDIR)/md5_.c: $(SRCDIR)/md5.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/md5.c >$@ + +$(OBJDIR)/md5.o: $(OBJDIR)/md5_.c $(OBJDIR)/md5.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/md5.o -c $(OBJDIR)/md5_.c + +$(OBJDIR)/md5.h: $(OBJDIR)/headers + +$(OBJDIR)/merge_.c: $(SRCDIR)/merge.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/merge.c >$@ + +$(OBJDIR)/merge.o: $(OBJDIR)/merge_.c $(OBJDIR)/merge.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/merge.o -c $(OBJDIR)/merge_.c + +$(OBJDIR)/merge.h: $(OBJDIR)/headers + +$(OBJDIR)/merge3_.c: $(SRCDIR)/merge3.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/merge3.c >$@ + +$(OBJDIR)/merge3.o: $(OBJDIR)/merge3_.c $(OBJDIR)/merge3.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/merge3.o -c $(OBJDIR)/merge3_.c + +$(OBJDIR)/merge3.h: $(OBJDIR)/headers + +$(OBJDIR)/moderate_.c: $(SRCDIR)/moderate.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/moderate.c >$@ + +$(OBJDIR)/moderate.o: $(OBJDIR)/moderate_.c $(OBJDIR)/moderate.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/moderate.o -c $(OBJDIR)/moderate_.c + +$(OBJDIR)/moderate.h: $(OBJDIR)/headers + +$(OBJDIR)/name_.c: $(SRCDIR)/name.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/name.c >$@ + +$(OBJDIR)/name.o: $(OBJDIR)/name_.c $(OBJDIR)/name.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/name.o -c $(OBJDIR)/name_.c + +$(OBJDIR)/name.h: $(OBJDIR)/headers + +$(OBJDIR)/path_.c: $(SRCDIR)/path.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/path.c >$@ + +$(OBJDIR)/path.o: $(OBJDIR)/path_.c $(OBJDIR)/path.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/path.o -c $(OBJDIR)/path_.c + +$(OBJDIR)/path.h: $(OBJDIR)/headers + +$(OBJDIR)/piechart_.c: $(SRCDIR)/piechart.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/piechart.c >$@ + +$(OBJDIR)/piechart.o: $(OBJDIR)/piechart_.c $(OBJDIR)/piechart.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/piechart.o -c $(OBJDIR)/piechart_.c + +$(OBJDIR)/piechart.h: $(OBJDIR)/headers + +$(OBJDIR)/pivot_.c: $(SRCDIR)/pivot.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/pivot.c >$@ + +$(OBJDIR)/pivot.o: $(OBJDIR)/pivot_.c $(OBJDIR)/pivot.h $(SRCDIR)/config.h 
+ $(XTCC) -o $(OBJDIR)/pivot.o -c $(OBJDIR)/pivot_.c + +$(OBJDIR)/pivot.h: $(OBJDIR)/headers + +$(OBJDIR)/popen_.c: $(SRCDIR)/popen.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/popen.c >$@ + +$(OBJDIR)/popen.o: $(OBJDIR)/popen_.c $(OBJDIR)/popen.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/popen.o -c $(OBJDIR)/popen_.c + +$(OBJDIR)/popen.h: $(OBJDIR)/headers + +$(OBJDIR)/pqueue_.c: $(SRCDIR)/pqueue.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/pqueue.c >$@ + +$(OBJDIR)/pqueue.o: $(OBJDIR)/pqueue_.c $(OBJDIR)/pqueue.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/pqueue.o -c $(OBJDIR)/pqueue_.c + +$(OBJDIR)/pqueue.h: $(OBJDIR)/headers + +$(OBJDIR)/printf_.c: $(SRCDIR)/printf.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/printf.c >$@ + +$(OBJDIR)/printf.o: $(OBJDIR)/printf_.c $(OBJDIR)/printf.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/printf.o -c $(OBJDIR)/printf_.c + +$(OBJDIR)/printf.h: $(OBJDIR)/headers + +$(OBJDIR)/publish_.c: $(SRCDIR)/publish.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/publish.c >$@ + +$(OBJDIR)/publish.o: $(OBJDIR)/publish_.c $(OBJDIR)/publish.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/publish.o -c $(OBJDIR)/publish_.c + +$(OBJDIR)/publish.h: $(OBJDIR)/headers + +$(OBJDIR)/purge_.c: $(SRCDIR)/purge.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/purge.c >$@ + +$(OBJDIR)/purge.o: $(OBJDIR)/purge_.c $(OBJDIR)/purge.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/purge.o -c $(OBJDIR)/purge_.c + +$(OBJDIR)/purge.h: $(OBJDIR)/headers + +$(OBJDIR)/rebuild_.c: $(SRCDIR)/rebuild.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/rebuild.c >$@ + +$(OBJDIR)/rebuild.o: $(OBJDIR)/rebuild_.c $(OBJDIR)/rebuild.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/rebuild.o -c $(OBJDIR)/rebuild_.c + +$(OBJDIR)/rebuild.h: $(OBJDIR)/headers + +$(OBJDIR)/regexp_.c: $(SRCDIR)/regexp.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/regexp.c >$@ + +$(OBJDIR)/regexp.o: $(OBJDIR)/regexp_.c $(OBJDIR)/regexp.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/regexp.o -c $(OBJDIR)/regexp_.c + +$(OBJDIR)/regexp.h: $(OBJDIR)/headers + +$(OBJDIR)/report_.c: $(SRCDIR)/report.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/report.c >$@ + +$(OBJDIR)/report.o: $(OBJDIR)/report_.c $(OBJDIR)/report.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/report.o -c $(OBJDIR)/report_.c + +$(OBJDIR)/report.h: $(OBJDIR)/headers + +$(OBJDIR)/rss_.c: $(SRCDIR)/rss.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/rss.c >$@ + +$(OBJDIR)/rss.o: $(OBJDIR)/rss_.c $(OBJDIR)/rss.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/rss.o -c $(OBJDIR)/rss_.c + +$(OBJDIR)/rss.h: $(OBJDIR)/headers + +$(OBJDIR)/schema_.c: $(SRCDIR)/schema.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/schema.c >$@ + +$(OBJDIR)/schema.o: $(OBJDIR)/schema_.c $(OBJDIR)/schema.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/schema.o -c $(OBJDIR)/schema_.c + +$(OBJDIR)/schema.h: $(OBJDIR)/headers + +$(OBJDIR)/search_.c: $(SRCDIR)/search.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/search.c >$@ + +$(OBJDIR)/search.o: $(OBJDIR)/search_.c $(OBJDIR)/search.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/search.o -c $(OBJDIR)/search_.c + +$(OBJDIR)/search.h: $(OBJDIR)/headers + +$(OBJDIR)/setup_.c: $(SRCDIR)/setup.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/setup.c >$@ + +$(OBJDIR)/setup.o: $(OBJDIR)/setup_.c $(OBJDIR)/setup.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/setup.o -c $(OBJDIR)/setup_.c + +$(OBJDIR)/setup.h: $(OBJDIR)/headers + +$(OBJDIR)/sha1_.c: $(SRCDIR)/sha1.c $(OBJDIR)/translate + 
$(OBJDIR)/translate $(SRCDIR)/sha1.c >$@ + +$(OBJDIR)/sha1.o: $(OBJDIR)/sha1_.c $(OBJDIR)/sha1.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/sha1.o -c $(OBJDIR)/sha1_.c + +$(OBJDIR)/sha1.h: $(OBJDIR)/headers + +$(OBJDIR)/shun_.c: $(SRCDIR)/shun.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/shun.c >$@ + +$(OBJDIR)/shun.o: $(OBJDIR)/shun_.c $(OBJDIR)/shun.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/shun.o -c $(OBJDIR)/shun_.c + +$(OBJDIR)/shun.h: $(OBJDIR)/headers + +$(OBJDIR)/sitemap_.c: $(SRCDIR)/sitemap.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/sitemap.c >$@ + +$(OBJDIR)/sitemap.o: $(OBJDIR)/sitemap_.c $(OBJDIR)/sitemap.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/sitemap.o -c $(OBJDIR)/sitemap_.c + +$(OBJDIR)/sitemap.h: $(OBJDIR)/headers + +$(OBJDIR)/skins_.c: $(SRCDIR)/skins.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/skins.c >$@ + +$(OBJDIR)/skins.o: $(OBJDIR)/skins_.c $(OBJDIR)/skins.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/skins.o -c $(OBJDIR)/skins_.c + +$(OBJDIR)/skins.h: $(OBJDIR)/headers + +$(OBJDIR)/sqlcmd_.c: $(SRCDIR)/sqlcmd.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/sqlcmd.c >$@ + +$(OBJDIR)/sqlcmd.o: $(OBJDIR)/sqlcmd_.c $(OBJDIR)/sqlcmd.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/sqlcmd.o -c $(OBJDIR)/sqlcmd_.c + +$(OBJDIR)/sqlcmd.h: $(OBJDIR)/headers + +$(OBJDIR)/stash_.c: $(SRCDIR)/stash.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/stash.c >$@ + +$(OBJDIR)/stash.o: $(OBJDIR)/stash_.c $(OBJDIR)/stash.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/stash.o -c $(OBJDIR)/stash_.c + +$(OBJDIR)/stash.h: $(OBJDIR)/headers + +$(OBJDIR)/stat_.c: $(SRCDIR)/stat.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/stat.c >$@ + +$(OBJDIR)/stat.o: $(OBJDIR)/stat_.c $(OBJDIR)/stat.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/stat.o -c $(OBJDIR)/stat_.c + +$(OBJDIR)/stat.h: $(OBJDIR)/headers + +$(OBJDIR)/statrep_.c: $(SRCDIR)/statrep.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/statrep.c >$@ + +$(OBJDIR)/statrep.o: $(OBJDIR)/statrep_.c $(OBJDIR)/statrep.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/statrep.o -c $(OBJDIR)/statrep_.c + +$(OBJDIR)/statrep.h: $(OBJDIR)/headers + +$(OBJDIR)/style_.c: $(SRCDIR)/style.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/style.c >$@ + +$(OBJDIR)/style.o: $(OBJDIR)/style_.c $(OBJDIR)/style.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/style.o -c $(OBJDIR)/style_.c + +$(OBJDIR)/style.h: $(OBJDIR)/headers + +$(OBJDIR)/sync_.c: $(SRCDIR)/sync.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/sync.c >$@ + +$(OBJDIR)/sync.o: $(OBJDIR)/sync_.c $(OBJDIR)/sync.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/sync.o -c $(OBJDIR)/sync_.c + +$(OBJDIR)/sync.h: $(OBJDIR)/headers + +$(OBJDIR)/tag_.c: $(SRCDIR)/tag.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/tag.c >$@ + +$(OBJDIR)/tag.o: $(OBJDIR)/tag_.c $(OBJDIR)/tag.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/tag.o -c $(OBJDIR)/tag_.c + +$(OBJDIR)/tag.h: $(OBJDIR)/headers + +$(OBJDIR)/tar_.c: $(SRCDIR)/tar.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/tar.c >$@ + +$(OBJDIR)/tar.o: $(OBJDIR)/tar_.c $(OBJDIR)/tar.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/tar.o -c $(OBJDIR)/tar_.c + +$(OBJDIR)/tar.h: $(OBJDIR)/headers + +$(OBJDIR)/th_main_.c: $(SRCDIR)/th_main.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/th_main.c >$@ + +$(OBJDIR)/th_main.o: $(OBJDIR)/th_main_.c $(OBJDIR)/th_main.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/th_main.o -c $(OBJDIR)/th_main_.c + +$(OBJDIR)/th_main.h: $(OBJDIR)/headers + +$(OBJDIR)/timeline_.c: 
$(SRCDIR)/timeline.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/timeline.c >$@ + +$(OBJDIR)/timeline.o: $(OBJDIR)/timeline_.c $(OBJDIR)/timeline.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/timeline.o -c $(OBJDIR)/timeline_.c + +$(OBJDIR)/timeline.h: $(OBJDIR)/headers + +$(OBJDIR)/tkt_.c: $(SRCDIR)/tkt.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/tkt.c >$@ + +$(OBJDIR)/tkt.o: $(OBJDIR)/tkt_.c $(OBJDIR)/tkt.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/tkt.o -c $(OBJDIR)/tkt_.c + +$(OBJDIR)/tkt.h: $(OBJDIR)/headers + +$(OBJDIR)/tktsetup_.c: $(SRCDIR)/tktsetup.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/tktsetup.c >$@ + +$(OBJDIR)/tktsetup.o: $(OBJDIR)/tktsetup_.c $(OBJDIR)/tktsetup.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/tktsetup.o -c $(OBJDIR)/tktsetup_.c + +$(OBJDIR)/tktsetup.h: $(OBJDIR)/headers + +$(OBJDIR)/undo_.c: $(SRCDIR)/undo.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/undo.c >$@ + +$(OBJDIR)/undo.o: $(OBJDIR)/undo_.c $(OBJDIR)/undo.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/undo.o -c $(OBJDIR)/undo_.c + +$(OBJDIR)/undo.h: $(OBJDIR)/headers + +$(OBJDIR)/unicode_.c: $(SRCDIR)/unicode.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/unicode.c >$@ + +$(OBJDIR)/unicode.o: $(OBJDIR)/unicode_.c $(OBJDIR)/unicode.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/unicode.o -c $(OBJDIR)/unicode_.c + +$(OBJDIR)/unicode.h: $(OBJDIR)/headers + +$(OBJDIR)/update_.c: $(SRCDIR)/update.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/update.c >$@ + +$(OBJDIR)/update.o: $(OBJDIR)/update_.c $(OBJDIR)/update.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/update.o -c $(OBJDIR)/update_.c + +$(OBJDIR)/update.h: $(OBJDIR)/headers + +$(OBJDIR)/url_.c: $(SRCDIR)/url.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/url.c >$@ + +$(OBJDIR)/url.o: $(OBJDIR)/url_.c $(OBJDIR)/url.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/url.o -c $(OBJDIR)/url_.c + +$(OBJDIR)/url.h: $(OBJDIR)/headers + +$(OBJDIR)/user_.c: $(SRCDIR)/user.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/user.c >$@ + +$(OBJDIR)/user.o: $(OBJDIR)/user_.c $(OBJDIR)/user.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/user.o -c $(OBJDIR)/user_.c + +$(OBJDIR)/user.h: $(OBJDIR)/headers + +$(OBJDIR)/utf8_.c: $(SRCDIR)/utf8.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/utf8.c >$@ + +$(OBJDIR)/utf8.o: $(OBJDIR)/utf8_.c $(OBJDIR)/utf8.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/utf8.o -c $(OBJDIR)/utf8_.c + +$(OBJDIR)/utf8.h: $(OBJDIR)/headers + +$(OBJDIR)/util_.c: $(SRCDIR)/util.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/util.c >$@ + +$(OBJDIR)/util.o: $(OBJDIR)/util_.c $(OBJDIR)/util.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/util.o -c $(OBJDIR)/util_.c + +$(OBJDIR)/util.h: $(OBJDIR)/headers + +$(OBJDIR)/verify_.c: $(SRCDIR)/verify.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/verify.c >$@ + +$(OBJDIR)/verify.o: $(OBJDIR)/verify_.c $(OBJDIR)/verify.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/verify.o -c $(OBJDIR)/verify_.c + +$(OBJDIR)/verify.h: $(OBJDIR)/headers + +$(OBJDIR)/vfile_.c: $(SRCDIR)/vfile.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/vfile.c >$@ + +$(OBJDIR)/vfile.o: $(OBJDIR)/vfile_.c $(OBJDIR)/vfile.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/vfile.o -c $(OBJDIR)/vfile_.c + +$(OBJDIR)/vfile.h: $(OBJDIR)/headers + +$(OBJDIR)/wiki_.c: $(SRCDIR)/wiki.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/wiki.c >$@ + +$(OBJDIR)/wiki.o: $(OBJDIR)/wiki_.c $(OBJDIR)/wiki.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/wiki.o -c $(OBJDIR)/wiki_.c + +$(OBJDIR)/wiki.h: 
$(OBJDIR)/headers + +$(OBJDIR)/wikiformat_.c: $(SRCDIR)/wikiformat.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/wikiformat.c >$@ + +$(OBJDIR)/wikiformat.o: $(OBJDIR)/wikiformat_.c $(OBJDIR)/wikiformat.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/wikiformat.o -c $(OBJDIR)/wikiformat_.c + +$(OBJDIR)/wikiformat.h: $(OBJDIR)/headers + +$(OBJDIR)/winfile_.c: $(SRCDIR)/winfile.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/winfile.c >$@ + +$(OBJDIR)/winfile.o: $(OBJDIR)/winfile_.c $(OBJDIR)/winfile.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/winfile.o -c $(OBJDIR)/winfile_.c + +$(OBJDIR)/winfile.h: $(OBJDIR)/headers + +$(OBJDIR)/winhttp_.c: $(SRCDIR)/winhttp.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/winhttp.c >$@ + +$(OBJDIR)/winhttp.o: $(OBJDIR)/winhttp_.c $(OBJDIR)/winhttp.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/winhttp.o -c $(OBJDIR)/winhttp_.c + +$(OBJDIR)/winhttp.h: $(OBJDIR)/headers + +$(OBJDIR)/wysiwyg_.c: $(SRCDIR)/wysiwyg.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/wysiwyg.c >$@ + +$(OBJDIR)/wysiwyg.o: $(OBJDIR)/wysiwyg_.c $(OBJDIR)/wysiwyg.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/wysiwyg.o -c $(OBJDIR)/wysiwyg_.c + +$(OBJDIR)/wysiwyg.h: $(OBJDIR)/headers + +$(OBJDIR)/xfer_.c: $(SRCDIR)/xfer.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/xfer.c >$@ + +$(OBJDIR)/xfer.o: $(OBJDIR)/xfer_.c $(OBJDIR)/xfer.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/xfer.o -c $(OBJDIR)/xfer_.c + +$(OBJDIR)/xfer.h: $(OBJDIR)/headers + +$(OBJDIR)/xfersetup_.c: $(SRCDIR)/xfersetup.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/xfersetup.c >$@ + +$(OBJDIR)/xfersetup.o: $(OBJDIR)/xfersetup_.c $(OBJDIR)/xfersetup.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/xfersetup.o -c $(OBJDIR)/xfersetup_.c + +$(OBJDIR)/xfersetup.h: $(OBJDIR)/headers + +$(OBJDIR)/zip_.c: $(SRCDIR)/zip.c $(OBJDIR)/translate + $(OBJDIR)/translate $(SRCDIR)/zip.c >$@ + +$(OBJDIR)/zip.o: $(OBJDIR)/zip_.c $(OBJDIR)/zip.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/zip.o -c $(OBJDIR)/zip_.c + +$(OBJDIR)/zip.h: $(OBJDIR)/headers + $(OBJDIR)/sqlite3.o: $(SRCDIR)/sqlite3.c - $(XTCC) -DSQLITE_OMIT_LOAD_EXTENSION=1 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 -Dlocaltime=fossil_localtime -DSQLITE_ENABLE_LOCKING_STYLE=0 -c $(SRCDIR)/sqlite3.c -o $(OBJDIR)/sqlite3.o + $(XTCC) $(SQLITE_OPTIONS) $(SQLITE_CFLAGS) -c $(SRCDIR)/sqlite3.c -o $@ + +$(OBJDIR)/shell.o: $(SRCDIR)/shell.c $(SRCDIR)/sqlite3.h + $(XTCC) $(SHELL_OPTIONS) $(SHELL_CFLAGS) $(LINENOISE_DEF.$(USE_LINENOISE)) -c $(SRCDIR)/shell.c -o $@ + +$(OBJDIR)/linenoise.o: $(SRCDIR)/linenoise.c $(SRCDIR)/linenoise.h + $(XTCC) -c $(SRCDIR)/linenoise.c -o $@ $(OBJDIR)/th.o: $(SRCDIR)/th.c - $(XTCC) -I$(SRCDIR) -c $(SRCDIR)/th.c -o $(OBJDIR)/th.o + $(XTCC) -c $(SRCDIR)/th.c -o $@ $(OBJDIR)/th_lang.o: $(SRCDIR)/th_lang.c - $(XTCC) -I$(SRCDIR) -c $(SRCDIR)/th_lang.c -o $(OBJDIR)/th_lang.o + $(XTCC) -c $(SRCDIR)/th_lang.c -o $@ + +$(OBJDIR)/th_tcl.o: $(SRCDIR)/th_tcl.c + $(XTCC) -c $(SRCDIR)/th_tcl.c -o $@ + + +$(OBJDIR)/miniz.o: $(SRCDIR)/miniz.c + $(XTCC) $(MINIZ_OPTIONS) -c $(SRCDIR)/miniz.c -o $@ + +$(OBJDIR)/cson_amalgamation.o: $(SRCDIR)/cson_amalgamation.c + $(XTCC) -c $(SRCDIR)/cson_amalgamation.c -o $@ + +# +# The list of all the targets that do not correspond to real files. This stops +# 'make' from getting confused when someone makes an error in a rule. 
+# + +.PHONY: all install test clean Index: src/makeheaders.c ================================================================== --- src/makeheaders.c +++ src/makeheaders.c @@ -1,11 +1,35 @@ -static const char ident[] = "@(#) $Header: /cvstrac/cvstrac/makeheaders.c,v 1.4 2005/03/16 22:17:51 drh Exp $"; /* ** This program is free software; you can redistribute it and/or ** modify it under the terms of the Simplified BSD License (also ** known as the "2-Clause License" or "FreeBSD License".) - +** +** Copyright 1993 D. Richard Hipp. All rights reserved. +** +** Redistribution and use in source and binary forms, with or +** without modification, are permitted provided that the following +** conditions are met: +** +** 1. Redistributions of source code must retain the above copyright +** notice, this list of conditions and the following disclaimer. +** +** 2. Redistributions in binary form must reproduce the above copyright +** notice, this list of conditions and the following disclaimer in +** the documentation and/or other materials provided with the +** distribution. +** +** This software is provided "as is" and any express or implied warranties, +** including, but not limited to, the implied warranties of merchantability +** and fitness for a particular purpose are disclaimed. In no event shall +** the author or contributors be liable for any direct, indirect, incidental, +** special, exemplary, or consequential damages (including, but not limited +** to, procurement of substitute goods or services; loss of use, data or +** profits; or business interruption) however caused and on any theory of +** liability, whether in contract, strict liability, or tort (including +** negligence or otherwise) arising in any way out of the use of this +** software, even if advised of the possibility of such damage. +** ** This program is distributed in the hope that it will be useful, ** but without any warranty; without even the implied warranty of ** merchantability or fitness for a particular purpose. ** appropriate header files. */ @@ -13,14 +37,18 @@ #include #include #include #include #include -#ifndef WIN32 -# include +#include + +#if defined( __MINGW32__) || defined(__DMC__) || defined(_MSC_VER) || defined(__POCC__) +# ifndef WIN32 +# define WIN32 +# endif #else -# include +# include #endif /* ** Macros for debugging. */ @@ -84,19 +112,19 @@ ** ** struct Xyzzy; ** ** Not every object has a forward declaration. If it does, thought, the ** forward declaration will be contained in the zFwd field for C and -** the zFwdCpp for C++. The zDecl field contains the complete -** declaration text. +** the zFwdCpp for C++. The zDecl field contains the complete +** declaration text. */ typedef struct Decl Decl; struct Decl { char *zName; /* Name of the object being declared. The appearance ** of this name is a source file triggers the declaration ** to be added to the header for that file. */ - char *zFile; /* File from which extracted. */ + const char *zFile; /* File from which extracted. */ char *zIf; /* Surround the declaration with this #if */ char *zFwd; /* A forward declaration. NULL if there is none. */ char *zFwdCpp; /* Use this forward declaration for C++. */ char *zDecl; /* A full declaration of this object */ char *zExtra; /* Extra declaration text inserted into class objects */ @@ -135,11 +163,11 @@ ** in the output when using the -H option.) ** ** EXPORT scope The object is visible and usable everywhere. 
** ** The DP_Flag is a temporary use flag that is used during processing to -** prevent an infinite loop. It's use is localized. +** prevent an infinite loop. It's use is localized. ** ** The DP_Cplusplus, DP_ExternCReqd and DP_ExternReqd flags are permanent ** and are used to specify what type of declaration the object requires. */ #define DP_Forward 0x001 /* Has a forward declaration in this file */ @@ -173,11 +201,11 @@ ** Be careful not to confuse PS_Export with DP_Export or ** PS_Local with DP_Local. Their names are similar, but the meanings ** of these flags are very different. */ #define PS_Extern 0x000800 /* "extern" has been seen */ -#define PS_Export 0x001000 /* If between "#if EXPORT_INTERFACE" +#define PS_Export 0x001000 /* If between "#if EXPORT_INTERFACE" ** and "#endif" */ #define PS_Export2 0x002000 /* If "EXPORT" seen */ #define PS_Typedef 0x004000 /* If "typedef" has been seen */ #define PS_Static 0x008000 /* If "static" has been seen */ #define PS_Interface 0x010000 /* If within #if INTERFACE..#endif */ @@ -203,11 +231,11 @@ #define TY_Union 0x04000000 #define TY_Enumeration 0x08000000 #define TY_Defunct 0x10000000 /* Used to erase a declaration */ /* -** Each nested #if (or #ifdef or #ifndef) is stored in a stack of +** Each nested #if (or #ifdef or #ifndef) is stored in a stack of ** instances of the following structure. */ typedef struct Ifmacro Ifmacro; struct Ifmacro { int nLine; /* Line number where this macro occurs */ @@ -265,11 +293,11 @@ int flags; /* One or more DP_, PS_ and/or TY_ flags */ InFile *pNext; /* Next input file in the list of them all */ IdentTable idTable; /* All identifiers in this input file */ }; -/* +/* ** An unbounded string is able to grow without limit. We use these ** to construct large in-memory strings from lots of smaller components. */ typedef struct String String; struct String { @@ -300,19 +328,23 @@ /* ** The following text line appears at the top of every file generated ** by this program. By recognizing this line, the program can be sure ** never to read a file that it generated itself. +** +** The "#undef INTERFACE" part is a hack to work around a name collision +** in MSVC 2008. */ -const char zTopLine[] = - "/* \aThis file was automatically generated. Do not edit! */\n"; +const char zTopLine[] = + "/* \aThis file was automatically generated. Do not edit! */\n" + "#undef INTERFACE\n"; #define nTopLine (sizeof(zTopLine)-1) /* ** The name of the file currently being parsed. */ -static char *zFilename; +static const char *zFilename; /* ** The stack of #if macros for the file currently being parsed. */ static Ifmacro *ifStack = 0; @@ -670,11 +702,11 @@ struct stat sStat; FILE *pIn; char *zBuf; int n; - if( stat(zFilename,&sStat)!=0 + if( stat(zFilename,&sStat)!=0 #ifndef WIN32 || !S_ISREG(sStat.st_mode) #endif ){ return 0; @@ -716,11 +748,11 @@ #define TT_String 6 /* String or character constants. ".." or '.' */ #define TT_Braces 7 /* All text between { and a matching } */ #define TT_EOF 8 /* End of file */ #define TT_Error 9 /* An error condition */ #define TT_BlockComment 10 /* A C-Style comment at the left margin that - * spans multple lines */ + * spans multiple lines */ #define TT_Other 0 /* None of the above */ /* ** Get a single low-level token from the input file. 
Update the ** file pointer so that it points to the first character beyond the @@ -857,12 +889,12 @@ } } } i++; } - if( z[i] ){ - i += 2; + if( z[i] ){ + i += 2; }else{ isBlockComment = 0; fprintf(stderr,"%s:%d: Unterminated comment\n", zFilename, startLine); nErr++; @@ -874,11 +906,11 @@ pToken->eType = TT_Other; pToken->nText = 1 + (z[i+1]=='+'); } break; - case '0': + case '0': if( z[i+1]=='x' || z[i+1]=='X' ){ /* A hex constant */ i += 2; while( isxdigit(z[i]) ){ i++; } }else{ @@ -931,11 +963,11 @@ while( isalnum(z[i]) || z[i]=='_' ){ i++; }; pToken->eType = TT_Id; pToken->nText = i - pIn->i; break; - case ':': + case ':': pToken->eType = TT_Other; pToken->nText = 1 + (z[i+1]==':'); break; case '=': @@ -945,11 +977,11 @@ case '-': case '*': case '%': case '^': case '&': - case '|': + case '|': pToken->eType = TT_Other; pToken->nText = 1 + (z[i+1]=='='); break; default: @@ -1032,11 +1064,11 @@ } } /* NOT REACHED */ } -/* +/* ** This routine looks for identifiers (strings of contiguous alphanumeric ** characters) within a preprocessor directive and adds every such string ** found to the given identifier table */ static void FindIdentifiersInMacro(Token *pToken, IdentTable *pTable){ @@ -1125,11 +1157,11 @@ case TT_Id: if( pTable ){ IdentTableInsert(pTable,pToken->zText,pToken->nText); } break; - + case TT_Preprocessor: if( pTable!=0 ){ FindIdentifiersInMacro(pToken,pTable); } break; @@ -1231,11 +1263,11 @@ exit(1); } pList = TokenizeFile(zFile,&sTable); for(p=pList; p; p=p->pNext){ int j; - switch( p->eType ){ + switch( p->eType ){ case TT_Space: printf("%4d: Space\n",p->nLine); break; case TT_Id: printf("%4d: Id %.*s\n",p->nLine,p->nText,p->zText); @@ -1298,11 +1330,11 @@ needSpace = 1; break; default: c = pFirst->zText[0]; - printf("%s%.*s", + printf("%s%.*s", (needSpace && (c=='*' || c=='{')) ? " " : "", pFirst->nText, pFirst->zText); needSpace = pFirst->zText[0]==','; break; } @@ -1339,13 +1371,13 @@ StringInit(&str); pLast = pLast->pNext; while( pFirst!=pLast ){ if( pFirst==pSkip ){ iSkip = nSkip; } - if( iSkip>0 ){ + if( iSkip>0 ){ iSkip--; - pFirst=pFirst->pNext; + pFirst=pFirst->pNext; continue; } switch( pFirst->eType ){ case TT_Preprocessor: StringAppend(&str,"\n",1); @@ -1352,13 +1384,13 @@ StringAppend(&str,pFirst->zText,pFirst->nText); StringAppend(&str,"\n",1); needSpace = 0; break; - case TT_Id: + case TT_Id: switch( pFirst->zText[0] ){ - case 'E': + case 'E': if( pFirst->nText==6 && strncmp(pFirst->zText,"EXPORT",6)==0 ){ skipOne = 1; } break; case 'P': @@ -1455,11 +1487,11 @@ ** At this point, we know we have a type declaration that is bounded ** by pList and pEnd and has the name pName. */ /* - ** If the braces are followed immedately by a semicolon, then we are + ** If the braces are followed immediately by a semicolon, then we are ** dealing a type declaration only. There is not variable definition ** following the type declaration. So reset... 
*/ if( pEnd->pNext==0 || pEnd->pNext->zText[0]==';' ){ *pReset = ';'; @@ -1613,17 +1645,17 @@ pLast = pLast->pNext; for(p=pFirst; p && p!=pLast; p=p->pNext){ if( p->eType==TT_Id ){ static IdentTable sReserved; static int isInit = 0; - static char *aWords[] = { "char", "class", - "const", "double", "enum", "extern", "EXPORT", "ET_PROC", + static const char *aWords[] = { "char", "class", + "const", "double", "enum", "extern", "EXPORT", "ET_PROC", "float", "int", "long", "PRIVATE", "PROTECTED", "PUBLIC", - "register", "static", "struct", "sizeof", "signed", "typedef", + "register", "static", "struct", "sizeof", "signed", "typedef", "union", "volatile", "virtual", "void", }; - + if( !isInit ){ int i; for(i=0; izText[0]!=')' ){ pLast = pLast->pPrev; } if( pLast==0 || pLast==pFirst || pFirst->pNext==pLast ){ - fprintf(stderr,"%s:%d: Unrecognized syntax.\n", + fprintf(stderr,"%s:%d: Unrecognized syntax.\n", zFilename, pFirst->nLine); return 1; } if( flags & (PS_Interface|PS_Export|PS_Local) ){ fprintf(stderr,"%s:%d: Missing \"inline\" on function or procedure.\n", @@ -1817,11 +1849,11 @@ return 1; } #ifdef DEBUG if( debugMask & PARSER ){ - printf("**** Found inline routine: %.*s on line %d...\n", + printf("**** Found inline routine: %.*s on line %d...\n", pName->nText, pName->zText, pFirst->nLine); PrintTokens(pFirst,pEnd); printf("\n"); } #endif @@ -1851,16 +1883,16 @@ ** ** pEnd is the token that ends the object. It can be either a ';' or ** a '='. If it is '=', then assume we have a variable definition. ** ** If pEnd is ';', then the determination is more difficult. We have -** to search for an occurance of an ID followed immediately by '('. +** to search for an occurrence of an ID followed immediately by '('. ** If found, we have a prototype. Otherwise we are dealing with a ** variable definition. */ static int isVariableDef(Token *pFirst, Token *pEnd){ - if( pEnd && pEnd->zText[0]=='=' && + if( pEnd && pEnd->zText[0]=='=' && (pEnd->pPrev->nText!=8 || strncmp(pEnd->pPrev->zText,"operator",8)!=0) ){ return 1; } while( pFirst && pFirst!=pEnd && pFirst->pNext && pFirst->pNext!=pEnd ){ @@ -1917,11 +1949,11 @@ } while( pFirst!=0 && pFirst->pNext!=pEnd && ((pFirst->nText==6 && strncmp(pFirst->zText,"static",6)==0) || (pFirst->nText==5 && strncmp(pFirst->zText,"LOCAL",6)==0)) ){ - /* Lose the initial "static" or local from local variables. + /* Lose the initial "static" or local from local variables. ** We'll prepend "extern" later. */ pFirst = pFirst->pNext; isLocal = 1; } if( pFirst==0 || !isLocal ){ @@ -1930,11 +1962,11 @@ }else if( flags & PS_Method ){ /* Methods are declared by their class. Don't declare separately. 
*/ return nErr; } isVar = (flags & (PS_Typedef|PS_Method))==0 && isVariableDef(pFirst,pEnd); - if( isVar && (flags & (PS_Interface|PS_Export|PS_Local))!=0 + if( isVar && (flags & (PS_Interface|PS_Export|PS_Local))!=0 && (flags & PS_Extern)==0 ){ fprintf(stderr,"%s:%d: Can't define a variable in this context\n", zFilename, pFirst->nLine); nErr++; } @@ -2063,11 +2095,11 @@ nCmd++; } if( nCmd==5 && strncmp(zCmd,"endif",5)==0 ){ /* - ** Pop the if stack + ** Pop the if stack */ pIf = ifStack; if( pIf==0 ){ fprintf(stderr,"%s:%d: extra '#endif'.\n",zFilename,pToken->nLine); return 1; @@ -2074,11 +2106,11 @@ } ifStack = pIf->pNext; SafeFree(pIf); }else if( nCmd==6 && strncmp(zCmd,"define",6)==0 ){ /* - ** Record a #define if we are in PS_Interface or PS_Export + ** Record a #define if we are in PS_Interface or PS_Export */ Decl *pDecl; if( !(flags & (PS_Local|PS_Interface|PS_Export)) ){ return 0; } zArg = &zCmd[6]; while( *zArg && isspace(*zArg) && *zArg!='\n' ){ @@ -2097,11 +2129,11 @@ }else if( flags & PS_Local ){ DeclSetProperty(pDecl,DP_Local); } }else if( nCmd==7 && strncmp(zCmd,"include",7)==0 ){ /* - ** Record an #include if we are in PS_Interface or PS_Export + ** Record an #include if we are in PS_Interface or PS_Export */ Include *pInclude; char *zIf; if( !(flags & (PS_Interface|PS_Export)) ){ return 0; } @@ -2141,11 +2173,11 @@ zArg = &zCmd[2]; while( *zArg && isspace(*zArg) && *zArg!='\n' ){ zArg++; } if( *zArg==0 || *zArg=='\n' ){ return 0; } - nArg = pToken->nText + (int)pToken->zText - (int)zArg; + nArg = pToken->nText + (int)(pToken->zText - zArg); if( nArg==9 && strncmp(zArg,"INTERFACE",9)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Interface); }else if( nArg==16 && strncmp(zArg,"EXPORT_INTERFACE",16)==0 ){ PushIfMacro(0,0,0,pToken->nLine,PS_Export); }else if( nArg==15 && strncmp(zArg,"LOCAL_INTERFACE",15)==0 ){ @@ -2152,19 +2184,19 @@ PushIfMacro(0,0,0,pToken->nLine,PS_Local); }else{ PushIfMacro(0,zArg,nArg,pToken->nLine,0); } }else if( nCmd==5 && strncmp(zCmd,"ifdef",5)==0 ){ - /* + /* ** Push an #ifdef. */ zArg = &zCmd[5]; while( *zArg && isspace(*zArg) && *zArg!='\n' ){ zArg++; } if( *zArg==0 || *zArg=='\n' ){ return 0; } - nArg = pToken->nText + (int)pToken->zText - (int)zArg; + nArg = pToken->nText + (int)(pToken->zText - zArg); PushIfMacro("defined",zArg,nArg,pToken->nLine,0); }else if( nCmd==6 && strncmp(zCmd,"ifndef",6)==0 ){ /* ** Push an #ifndef. */ @@ -2171,15 +2203,15 @@ zArg = &zCmd[6]; while( *zArg && isspace(*zArg) && *zArg!='\n' ){ zArg++; } if( *zArg==0 || *zArg=='\n' ){ return 0; } - nArg = pToken->nText + (int)pToken->zText - (int)zArg; + nArg = pToken->nText + (int)(pToken->zText - zArg); PushIfMacro("!defined",zArg,nArg,pToken->nLine,0); }else if( nCmd==4 && strncmp(zCmd,"else",4)==0 ){ /* - ** Invert the #if on the top of the stack + ** Invert the #if on the top of the stack */ if( ifStack==0 ){ fprintf(stderr,"%s:%d: '#else' without an '#if'\n",zFilename, pToken->nLine); return 1; @@ -2192,33 +2224,33 @@ }else{ pIf->flags = 0; } }else{ /* - ** This directive can be safely ignored + ** This directive can be safely ignored */ return 0; } - /* - ** Recompute the preset flags + /* + ** Recompute the preset flags */ *pPresetFlags = 0; for(pIf = ifStack; pIf; pIf=pIf->pNext){ *pPresetFlags |= pIf->flags; } - + return nErr; } /* ** Parse an entire file. Return the number of errors. ** ** pList is a list of tokens in the file. Whitespace tokens have been ** eliminated, and text with {...} has been collapsed into a ** single TT_Brace token. 
-** +** ** initFlags are a set of parse flags that should always be set for this ** file. For .c files this is normally 0. For .h files it is PS_Interface. */ static int ParseFile(Token *pList, int initFlags){ int nErr = 0; @@ -2247,11 +2279,11 @@ pStart = 0; flags = presetFlags; break; case '=': - if( pList->pPrev->nText==8 + if( pList->pPrev->nText==8 && strncmp(pList->pPrev->zText,"operator",8)==0 ){ break; } nErr += ProcessDecl(pStart,pList,flags); pStart = 0; @@ -2317,11 +2349,13 @@ pStart = pList; } break; case 'i': - if( pList->nText==6 && strncmp(pList->zText,"inline",6)==0 ){ + if( pList->nText==6 && strncmp(pList->zText,"inline",6)==0 + && (flags & PS_Static)==0 + ){ nErr += ProcessInlineProc(pList,flags,&resetFlag); } break; case 'L': @@ -2437,11 +2471,11 @@ pDecl->zExtra = 0; } /* ** Reset the DP_Forward and DP_Declared flags on all Decl structures. -** Set both flags for anything that is tagged as local and isn't +** Set both flags for anything that is tagged as local and isn't ** in the file zFilename so that it won't be printing in other files. */ static void ResetDeclFlags(char *zFilename){ Decl *pDecl; @@ -2540,11 +2574,11 @@ int flag; int isCpp; /* True if generating C++ */ int doneTypedef = 0; /* True if a typedef has been done for this object */ /* printf("BEGIN %s of %s\n",needFullDecl?"FULL":"PROTOTYPE",pDecl->zName);*/ - /* + /* ** For any object that has a forward declaration, go ahead and do the ** forward declaration first. */ isCpp = (pState->flags & DP_Cplusplus) != 0; for(p=pDecl; p; p=p->pSameName){ @@ -2592,22 +2626,22 @@ ** function on a recursive call with the same pDecl. Hence, recursive ** calls to this function (through ScanText()) can never change the ** value of DP_Flag out from under us. */ for(p=pDecl; p; p=p->pSameName){ - if( !DeclHasProperty(p,DP_Declared) - && (p->zFwd==0 || needFullDecl) + if( !DeclHasProperty(p,DP_Declared) + && (p->zFwd==0 || needFullDecl) && p->zDecl!=0 ){ DeclSetProperty(p,DP_Forward|DP_Declared|DP_Flag); }else{ DeclClearProperty(p,DP_Flag); } } /* - ** Call ScanText() recusively (this routine is called from ScanText()) + ** Call ScanText() recursively (this routine is called from ScanText()) ** to include declarations required to come before these declarations. */ for(p=pDecl; p; p=p->pSameName){ if( DeclHasProperty(p,DP_Flag) ){ if( p->zDecl[0]=='#' ){ @@ -2701,12 +2735,12 @@ ** by sToken. */ pDecl = FindDecl(sToken.zText,sToken.nText); if( pDecl==0 ) continue; - /* - ** If we get this far, we've found an identifier that has a + /* + ** If we get this far, we've found an identifier that has a ** declaration in the database. Now see if we the full declaration ** or just a forward declaration. */ GetNonspaceToken(&sIn,&sNext); if( sNext.zText[0]=='*' ){ @@ -2727,21 +2761,21 @@ /* printf("END SCANTEXT\n"); */ } /* ** Provide a full declaration to any object which so far has had only -** a foward declaration. +** a forward declaration. 
*/ static void CompleteForwardDeclarations(GenState *pState){ Decl *pDecl; int progress; do{ progress = 0; for(pDecl=pDeclFirst; pDecl; pDecl=pDecl->pNext){ - if( DeclHasProperty(pDecl,DP_Forward) - && !DeclHasProperty(pDecl,DP_Declared) + if( DeclHasProperty(pDecl,DP_Forward) + && !DeclHasProperty(pDecl,DP_Declared) ){ DeclareObject(pDecl,pState,1); progress = 1; assert( DeclHasProperty(pDecl,DP_Declared) ); } @@ -2797,11 +2831,11 @@ }else if( strncmp(zOldVersion,zTopLine,nTopLine)!=0 ){ if( report ) fprintf(report,"error!\n"); fprintf(stderr, "%s: Can't overwrite this file because it wasn't previously\n" "%*s generated by 'makeheaders'.\n", - pFile->zHdr, strlen(pFile->zHdr), ""); + pFile->zHdr, (int)strlen(pFile->zHdr), ""); nErr++; }else if( strcmp(zOldVersion,zNewVersion)!=0 ){ if( report ) fprintf(report,"updated\n"); if( WriteFile(pFile->zHdr,zNewVersion) ){ fprintf(stderr,"%s: Can't write to file\n",pFile->zHdr); @@ -2808,11 +2842,11 @@ nErr++; } }else if( report ){ fprintf(report,"unchanged\n"); } - SafeFree(zOldVersion); + SafeFree(zOldVersion); IdentTableReset(&includeTable); StringReset(&outStr); return nErr; } @@ -2844,11 +2878,11 @@ } ChangeIfContext(0,&sState); printf("%s",StringGet(&outStr)); IdentTableReset(&includeTable); StringReset(&outStr); - return 0; + return 0; } #ifdef DEBUG /* ** Return the number of characters in the given string prior to the @@ -2952,18 +2986,18 @@ if( nLabel==0 ) continue; zLabel[nLabel] = 0; InsertExtraDecl(pDecl); zDecl = pDecl->zDecl; if( zDecl==0 ) zDecl = pDecl->zFwd; - printf("%s %s %s %d %d %d %d %d %d\n", + printf("%s %s %s %p %d %d %d %d %d\n", pDecl->zName, zLabel, pDecl->zFile, - pDecl->pComment ? (int)pDecl->pComment/sizeof(Token) : 0, + pDecl->pComment, pDecl->pComment ? pDecl->pComment->nText+1 : 0, - pDecl->zIf ? strlen(pDecl->zIf)+1 : 0, - zDecl ? strlen(zDecl) : 0, + pDecl->zIf ? (int)strlen(pDecl->zIf)+1 : 0, + zDecl ? (int)strlen(zDecl) : 0, pDecl->pComment ? pDecl->pComment->nLine : 0, pDecl->tokenCode.nText ? pDecl->tokenCode.nText+1 : 0 ); if( pDecl->pComment ){ printf("%.*s\n",pDecl->pComment->nText, pDecl->pComment->zText); @@ -3006,15 +3040,21 @@ int nSrc; char *zSrc; InFile *pFile; int i; - /* - ** Get the name of the input file to be scanned + /* + ** Get the name of the input file to be scanned. The input file is + ** everything before the first ':' or the whole file if no ':' is seen. + ** + ** Except, on windows, ignore any ':' that occurs as the second character + ** since it might be part of the drive specifier. So really, the ":' has + ** to be the 3rd or later character in the name. This precludes 1-character + ** file names, which really should not be a problem. */ zSrc = zArg; - for(nSrc=0; zSrc[nSrc] && zArg[nSrc]!=':'; nSrc++){} + for(nSrc=2; zSrc[nSrc] && zArg[nSrc]!=':'; nSrc++){} pFile = SafeMalloc( sizeof(InFile) ); memset(pFile,0,sizeof(InFile)); pFile->zSrc = StrDup(zSrc,nSrc); /* Figure out if we are dealing with C or C++ code. Assume any @@ -3031,11 +3071,11 @@ */ if( zSrc[nSrc]==':' ){ int nHdr; char *zHdr; zHdr = &zSrc[nSrc+1]; - for(nHdr=0; zHdr[nHdr] && zHdr[nHdr]!=':'; nHdr++){} + for(nHdr=0; zHdr[nHdr]; nHdr++){} pFile->zHdr = StrDup(zHdr,nHdr); } /* Look for any 'c' or 'C' in the suffix of the file name and change ** that character to 'h' or 'H' respectively. If no 'c' or 'C' is found, @@ -3059,11 +3099,11 @@ } } /* ** If pFile->zSrc contains no 'c' or 'C' in its extension, it - ** must be a header file. In that case, we need to set the + ** must be a header file. 
In that case, we need to set the ** PS_Interface flag. */ pFile->flags |= PS_Interface; for(i=nSrc-1; i>0 && zSrc[i]!='.'; i--){ if( zSrc[i]=='c' || zSrc[i]=='C' ){ @@ -3070,11 +3110,11 @@ pFile->flags &= ~PS_Interface; break; } } - /* Done! + /* Done! */ return pFile; } /* MS-Windows and MS-DOS both have the following serious OS bug: the @@ -3122,11 +3162,11 @@ while( c!=EOF ){ while( c!=EOF && isspace(c) ){ if( c=='\n' ){ startOfLine = 1; } - c = getc(in); + c = getc(in); if( startOfLine && c=='#' ){ while( c!=EOF && c!='\n' ){ c = getc(in); } } @@ -3144,11 +3184,11 @@ if( nAlloc==0 ){ nAlloc = 100 + argc; zNew = malloc( sizeof(char*) * nAlloc ); }else{ nAlloc *= 2; - zNew = realloc( zNew, sizeof(char*) * nAlloc ); + zNew = realloc( zNew, sizeof(char*) * nAlloc ); } } if( zNew ){ int j = nNew + index; zNew[j] = malloc( n + 1 ); @@ -3214,11 +3254,11 @@ /* ** The following text contains a few simple #defines that we want ** to be available to every file. */ -static char zInit[] = +static const char zInit[] = "#define INTERFACE 0\n" "#define EXPORT_INTERFACE 0\n" "#define LOCAL_INTERFACE 0\n" "#define EXPORT\n" "#define LOCAL static\n" Index: src/makeheaders.html ================================================================== --- src/makeheaders.html +++ src/makeheaders.html @@ -43,11 +43,11 @@
    • 4.0 Using Makeheaders To Generate Documentation
    • 5.0 Compiling The Makeheaders Program
    • 6.0 Summary And Conclusion

      1.0 Background

      A piece of C source code can be one of two things: a declaration or a definition. @@ -98,11 +98,11 @@ source code when the .c file is compiled. In this way, the .h files define the interface to a subsystem and the .c files define how the subsystem is implemented.
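
      (As a minimal illustration of the distinction, using made-up names: a prototype or ``extern'' statement is a declaration, while the function body or initialized variable is a definition.)

        extern int nWidget;            /* declaration: announces that nWidget exists */
        int widget_count(void);        /* declaration: a function prototype */

        int nWidget = 0;               /* definition: allocates the storage */
        int widget_count(void){        /* definition: supplies the body */
          return nWidget;
        }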


      1.1 Problems With The Traditional Approach

      As the art of computer programming continues to advance, and the size and complexity of programs continues to swell, the traditional C @@ -116,11 +116,11 @@

      1. In large codes with many source files, it becomes difficult to determine which .h files should be included in which .c files.

      2. -It is typically the case the a .h file will be forced to include +It is typically the case that a .h file will be forced to include another .h file, which in turn might include other .h files, and so forth. The .c file must be recompiled when any of the .h files in this chain are altered, but it can be difficult to determine what .h files are found in the include chain. @@ -152,11 +152,11 @@ especially when the declarations involved are spread out over several files.


      1.2 The Makeheaders Solution

      The makeheaders program is designed to ameliorate the problems associated with the traditional C programming model by automatically generating @@ -215,11 +215,11 @@ And the burden of running makeheaders is light. It will easily process tens of thousands of lines of source code per second.


      2.0 Running The Makeheaders Program

      The makeheaders program is very easy to run. If you have a collection of C source code and include files in the working @@ -363,11 +363,11 @@ Or, you can insert the special option ``--'' on the command line to cause all subsequent command line arguments to be treated as filenames even if their names begin with ``-''.


      3.0 Preparing Source Files For Use With Makeheaders

      Very little has to be done to prepare source files for use with makeheaders since makeheaders will read and understand ordinary @@ -375,11 +375,11 @@ But it is important that you structure your files in a way that makes sense in the makeheaders context. This section will describe several typical uses of makeheaders.


      3.1 The Basic Setup

      The simplest way to use makeheaders is to put all definitions in one or more .c files and all structure and type declarations in @@ -474,11 +474,11 @@ But that is not a problem. The makeheaders program will recognize and ignore any files it has previously generated that show up on its input list.
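
      As a hedged sketch of this basic setup (the file and function names here are hypothetical), a source file alpha.c holds only definitions and includes the header that makeheaders generates for it:

        /* alpha.c -- definitions only; alpha.h is generated by makeheaders */
        #include "alpha.h"

        int add_two(int x){
          return x + 2;
        }

      Running ``makeheaders alpha.c'' would then produce an alpha.h that begins with the "This file was automatically generated. Do not edit!" marker line (see zTopLine in the makeheaders.c diff above) and contains a prototype for add_two(), so no header has to be written by hand.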


      3.2 What Declarations Get Copied

      The following list details all of the code constructs that makeheaders will extract and place in @@ -577,11 +577,11 @@ If the declaration of some structure ``X'' requires a prior declaration of another structure ``Y'', then Y will appear first in the generated headers.
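
      For example (with hypothetical structure names), if one extracted declaration depends on another, the generated header emits the dependency first:

        /* As written in the source file... */
        struct X { struct Y *pY; };
        struct Y { int value; };

        /* ...the generated header would declare (or forward-declare) Y
        ** before X, since X's declaration refers to it. */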


      3.3 How To Avoid Having To Write Any Header Files

      In my experience, large projects work better if all of the manually written code is placed in .c files and all .h files are generated @@ -644,11 +644,11 @@ come from. You should also note that a single .c file can contain as many ``#if INTERFACE'' regions as desired.
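
      A minimal sketch of the pattern (the type and routine names are invented for illustration): declarations that other files need are placed inside an ``#if INTERFACE'' region of an ordinary .c file, and makeheaders copies them into the generated headers:

        #include "widget.h"   /* generated; supplies the Widget typedef at compile time */

        #if INTERFACE
        /* Copied by makeheaders into the header of every file that uses Widget */
        typedef struct Widget Widget;
        struct Widget {
          int w, h;
        };
        #endif

        /* Ordinary definition; only its prototype is exported */
        int widget_area(Widget *p){
          return p->w * p->h;
        }

      Because INTERFACE is left undefined during a normal compile, the preprocessor treats the block as false and the compiler only sees the copy placed in the generated header; the "#undef INTERFACE" emitted at the top of each generated header (visible in the makeheaders.c diff above) guards against a stray definition of the macro.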


      3.4 Designating Declarations For Export

      In a large project, one will often construct a hierarchy of interfaces. @@ -733,11 +733,11 @@ (The ``#if INTERFACE'' can also be used in both .h and .c files, but since its use in a .h file would be redundant, we haven't mentioned it before.)
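
      A short sketch with invented names: declarations placed in an ``#if EXPORT_INTERFACE'' block (or tagged with the EXPORT keyword) are flagged for the project-wide header produced with the -H option, while everything else stays at the narrower interface level:

        #include "gadget.h"   /* generated; re-exports the declarations below */

        #if EXPORT_INTERFACE
        /* Part of the library's outermost, project-wide interface */
        typedef struct Gadget Gadget;
        struct Gadget {
          int id;
        };
        #endif

        int gadget_id(Gadget *p){
          return p->id;
        }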


      3.5 Local Declarations Processed By Makeheaders

      Structure declarations and typedefs that appear in .c files are normally ignored by makeheaders. @@ -771,11 +771,11 @@ blocks described above, except that makeheaders ensures that the objects declared in a LOCAL_INTERFACE are only visible to the file containing the LOCAL_INTERFACE.
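
      A brief sketch with made-up names: a LOCAL_INTERFACE block keeps a helper type visible only through the generated header of its own source file (the related LOCAL keyword, which makeheaders treats as ``static'' while scanning, per the zInit table in the makeheaders.c diff above, serves a similar purpose for individual definitions):

        #include "cache_module.h"   /* generated; holds this file's local declarations */

        #if LOCAL_INTERFACE
        struct Cache {            /* visible only to this source file's own header */
          int nEntry;
        };
        #endif

        static int cache_count(struct Cache *p){
          return p->nEntry;
        }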


      3.6 Using Makeheaders With C++ Code

      You can use makeheaders to generate header files for C++ code, in addition to C. @@ -870,11 +870,11 @@ Makeheaders does not understand more recent C++ syntax such as templates and namespaces. Perhaps these issues will be addressed in future revisions.


      3.7 Conditional Compilation

      The makeheaders program understands and tracks the conditional compilation constructs in the source code files it scans. @@ -903,11 +903,11 @@ #endif

    • and treats the enclosed text as a comment.
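
      For instance (hypothetical names), a definition guarded by a preprocessor conditional keeps that guard in the generated output:

        #ifdef UNIX
        int unix_only_routine(void){
          return 1;
        }
        #endif

      The prototype that makeheaders extracts is wrapped in the corresponding condition (roughly ``#if defined(UNIX)''), which is what the PushIfMacro() calls in the makeheaders.c diff above keep track of.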


      3.8 Caveats

      The makeheaders system is designed to be robust but it is possible for a devious programmer to fool the system, @@ -973,11 +973,11 @@ As long as you avoid excessive cleverness, makeheaders will probably be able to figure out what you want and will do the right thing.


      4.0 Using Makeheaders To Generate Documentation

      Many people have observed the advantages of generating program documentation directly from the source code: @@ -1037,11 +1037,11 @@ The exact output format will not be described here. It is simple to understand and parse and should be obvious to anyone who inspects some sample output.


      5.0 Compiling The Makeheaders Program

      The source code for makeheaders is a single file of ANSI-C code, less than 3000 lines in length. @@ -1050,11 +1050,11 @@ and on most operating systems. It is known to compile using several variations of GCC for Unix as well as Cygwin32 and MSVC 5.0 for Win32.


      6.0 Summary And Conclusion

      The makeheaders program will automatically generate a minimal header file for each of a set of C source and header files, and will Index: src/makemake.tcl ================================================================== --- src/makemake.tcl +++ src/makemake.tcl @@ -1,21 +1,39 @@ #!/usr/bin/tclsh # -# Run this TCL script to generate the "main.mk" makefile. +# Run this Tcl script to generate the various makefiles for a variety +# of platforms. Files generated include: # +# src/main.mk # makefile for all unix systems +# win/Makefile.mingw # makefile for mingw on windows +# win/Makefile.* # makefiles for other windows compilers +# +# Run this script while in the "src" subdirectory. Like this: +# +# tclsh makemake.tcl +# +############################################################################# # Basenames of all source files that get preprocessed using -# "translate" and "makeheaders" +# "translate" and "makeheaders". To add new C-language source files to the +# project, simply add the basename to this list and rerun this script. +# +# Set the separate extra_files variable further down for how to add non-C +# files, such as string and BLOB resources. # set src { add allrepo attach bag + bisect blob branch browse + builtin + bundle + cache captcha cgi checkin checkout clearsign @@ -29,179 +47,2012 @@ descendants diff diffcmd doc encode + event + export file finfo + foci + fusefs + glob graph + gzip http http_socket http_transport + import info + json + json_artifact + json_branch + json_config + json_diff + json_dir + json_finfo + json_login + json_query + json_report + json_status + json_tag + json_timeline + json_user + json_wiki + leaf + loadctrl login + lookslike main manifest + markdown + markdown_html md5 merge merge3 + moderate name + path + piechart pivot + popen pqueue printf + publish + purge rebuild + regexp report rss schema search setup sha1 shun + sitemap skins + sqlcmd + stash stat + statrep style sync tag + tar th_main timeline tkt tktsetup undo + unicode update url user + utf8 + util verify vfile wiki wikiformat + winfile winhttp + wysiwyg xfer + xfersetup zip + http_ssl +} + +# Additional resource files that get built into the executable. +# +set extra_files { + diff.tcl + markdown.md + ../skins/*/*.txt +} + +# Options used to compile the included SQLite library. +# +set SQLITE_OPTIONS { + -DNDEBUG=1 + -DSQLITE_OMIT_LOAD_EXTENSION=1 + -DSQLITE_ENABLE_LOCKING_STYLE=0 + -DSQLITE_THREADSAFE=0 + -DSQLITE_DEFAULT_FILE_FORMAT=4 + -DSQLITE_OMIT_DEPRECATED + -DSQLITE_ENABLE_EXPLAIN_COMMENTS + -DSQLITE_ENABLE_FTS4 + -DSQLITE_ENABLE_FTS3_PARENTHESIS + -DSQLITE_ENABLE_DBSTAT_VTAB + -DSQLITE_ENABLE_JSON1 + -DSQLITE_ENABLE_FTS5 +} +#lappend SQLITE_OPTIONS -DSQLITE_ENABLE_FTS3=1 +#lappend SQLITE_OPTIONS -DSQLITE_ENABLE_STAT4 +#lappend SQLITE_OPTIONS -DSQLITE_WIN32_NO_ANSI +#lappend SQLITE_OPTIONS -DSQLITE_WINNT_MAX_PATH_CHARS=4096 + +# Options used to compile the included SQLite shell. +# +set SHELL_OPTIONS { + -Dmain=sqlite3_shell + -DSQLITE_OMIT_LOAD_EXTENSION=1 + -DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) + -DSQLITE_SHELL_DBNAME_PROC=fossil_open +} + +# miniz (libz drop-in alternative) precompiler flags. +# +set MINIZ_OPTIONS { + -DMINIZ_NO_STDIO + -DMINIZ_NO_TIME + -DMINIZ_NO_ARCHIVE_APIS } + +# Options used to compile the included SQLite shell on Windows. 
+# +set SHELL_WIN32_OPTIONS $SHELL_OPTIONS +lappend SHELL_WIN32_OPTIONS -Daccess=file_access +lappend SHELL_WIN32_OPTIONS -Dsystem=fossil_system +lappend SHELL_WIN32_OPTIONS -Dgetenv=fossil_getenv +lappend SHELL_WIN32_OPTIONS -Dfopen=fossil_fopen # Name of the final application # set name fossil -puts {# DO NOT EDIT +# The "writeln" command sends output to the target makefile. +# +proc writeln {args} { + global output_file + if {[lindex $args 0]=="-nonewline"} { + puts -nonewline $output_file [lindex $args 1] + } else { + puts $output_file [lindex $args 0] + } +} + +# STOP HERE. +# Unless the build procedures changes, you should not have to edit anything +# below this line. + +# Expand any wildcards in "extra_files" +set new_extra_files {} +foreach file $extra_files { + foreach x [glob -nocomplain $file] { + lappend new_extra_files $x + } +} +set extra_files $new_extra_files + +############################################################################## +############################################################################## +############################################################################## +# Start by generating the "main.mk" makefile used for all unix systems. +# +puts "building main.mk" +set output_file [open main.mk w] +fconfigure $output_file -translation binary + +writeln {# +############################################################################## +# WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl") +############################################################################## # # This file is automatically generated. Instead of editing this -# file, edit "makemake.tcl" then run "tclsh makemake.tcl >main.mk" +# file, edit "makemake.tcl" then run "tclsh makemake.tcl" # to regenerate this file. # -# This file is included by linux-gcc.mk or linux-mingw.mk or possible -# some other makefiles. This file contains the rules that are common -# to building regardless of the target. -# - -XTCC = $(TCC) $(CFLAGS) -I. -I$(SRCDIR) - -} -puts -nonewline "SRC =" -foreach s [lsort $src] { - puts -nonewline " \\\n \$(SRCDIR)/$s.c" -} -puts "\n" -puts -nonewline "TRANS_SRC =" -foreach s [lsort $src] { - puts -nonewline " \\\n ${s}_.c" -} -puts "\n" -puts -nonewline "OBJ =" -foreach s [lsort $src] { - puts -nonewline " \\\n \$(OBJDIR)/$s.o" -} -puts "\n" -puts "APPNAME = $name\$(E)" -puts "\n" - -puts { +# This file is included by primary Makefile. +# + +XTCC = $(TCC) -I. 
-I$(SRCDIR) -I$(OBJDIR) $(TCCFLAGS) $(CFLAGS) + +} +writeln -nonewline "SRC =" +foreach s [lsort $src] { + writeln -nonewline " \\\n \$(SRCDIR)/$s.c" +} +writeln "\n" +writeln -nonewline "EXTRA_FILES =" +foreach s [lsort $extra_files] { + writeln -nonewline " \\\n \$(SRCDIR)/$s" +} +writeln "\n" +writeln -nonewline "TRANS_SRC =" +foreach s [lsort $src] { + writeln -nonewline " \\\n \$(OBJDIR)/${s}_.c" +} +writeln "\n" +writeln -nonewline "OBJ =" +foreach s [lsort $src] { + writeln -nonewline " \\\n \$(OBJDIR)/$s.o" +} +writeln "\n" +writeln "APPNAME = $name\$(E)" +writeln "\n" + +writeln [string map [list \ + <<>> [join $SQLITE_OPTIONS " \\\n "] \ + <<>> [join $SHELL_OPTIONS " \\\n "] \ + <<>> [join $MINIZ_OPTIONS " \\\n "]] { all: $(OBJDIR) $(APPNAME) install: $(APPNAME) + mkdir -p $(INSTALLDIR) mv $(APPNAME) $(INSTALLDIR) + +codecheck: $(TRANS_SRC) $(OBJDIR)/codecheck1 + $(OBJDIR)/codecheck1 $(TRANS_SRC) $(OBJDIR): -mkdir $(OBJDIR) -translate: $(SRCDIR)/translate.c - $(BCC) -o translate $(SRCDIR)/translate.c - -makeheaders: $(SRCDIR)/makeheaders.c - $(BCC) -o makeheaders $(SRCDIR)/makeheaders.c - -mkindex: $(SRCDIR)/mkindex.c - $(BCC) -o mkindex $(SRCDIR)/mkindex.c - -# WARNING. DANGER. Running the testsuite modifies the repository the +$(OBJDIR)/translate: $(SRCDIR)/translate.c + $(BCC) -o $(OBJDIR)/translate $(SRCDIR)/translate.c + +$(OBJDIR)/makeheaders: $(SRCDIR)/makeheaders.c + $(BCC) -o $(OBJDIR)/makeheaders $(SRCDIR)/makeheaders.c + +$(OBJDIR)/mkindex: $(SRCDIR)/mkindex.c + $(BCC) -o $(OBJDIR)/mkindex $(SRCDIR)/mkindex.c + +$(OBJDIR)/mkbuiltin: $(SRCDIR)/mkbuiltin.c + $(BCC) -o $(OBJDIR)/mkbuiltin $(SRCDIR)/mkbuiltin.c + +$(OBJDIR)/mkversion: $(SRCDIR)/mkversion.c + $(BCC) -o $(OBJDIR)/mkversion $(SRCDIR)/mkversion.c + +$(OBJDIR)/codecheck1: $(SRCDIR)/codecheck1.c + $(BCC) -o $(OBJDIR)/codecheck1 $(SRCDIR)/codecheck1.c + +# WARNING. DANGER. Running the test suite modifies the repository the +# build is done from, i.e. the checkout belongs to. Do not sync/push +# the repository after running the tests. +test: $(OBJDIR) $(APPNAME) + $(TCLSH) $(SRCDIR)/../test/tester.tcl $(APPNAME) + +$(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(SRCDIR)/../VERSION $(OBJDIR)/mkversion + $(OBJDIR)/mkversion $(SRCDIR)/../manifest.uuid \ + $(SRCDIR)/../manifest \ + $(SRCDIR)/../VERSION >$(OBJDIR)/VERSION.h + +# Setup the options used to compile the included SQLite library. +SQLITE_OPTIONS = <<>> + +# Setup the options used to compile the included SQLite shell. +SHELL_OPTIONS = <<>> + +# Setup the options used to compile the included miniz library. +MINIZ_OPTIONS = <<>> + +# The USE_SYSTEM_SQLITE variable may be undefined, set to 0, or set +# to 1. If it is set to 1, then there is no need to build or link +# the sqlite3.o object. Instead, the system SQLite will be linked +# using -lsqlite3. +SQLITE3_OBJ.1 = +SQLITE3_OBJ.0 = $(OBJDIR)/sqlite3.o +SQLITE3_OBJ. = $(SQLITE3_OBJ.0) + +# The FOSSIL_ENABLE_MINIZ variable may be undefined, set to 0, or +# set to 1. If it is set to 1, the miniz library included in the +# source tree should be used; otherwise, it should not. +MINIZ_OBJ.0 = +MINIZ_OBJ.1 = $(OBJDIR)/miniz.o +MINIZ_OBJ. = $(MINIZ_OBJ.0) + +# The USE_LINENOISE variable may be undefined, set to 0, or set +# to 1. If it is set to 0, then there is no need to build or link +# the linenoise.o object. +LINENOISE_DEF.0 = +LINENOISE_DEF.1 = -DHAVE_LINENOISE +LINENOISE_DEF. = $(LINENOISE_DEF.0) +LINENOISE_OBJ.0 = +LINENOISE_OBJ.1 = $(OBJDIR)/linenoise.o +LINENOISE_OBJ. 
= $(LINENOISE_OBJ.0) +}] + +writeln [string map [list <<>> \\] { +EXTRAOBJ = <<>> + $(SQLITE3_OBJ.$(USE_SYSTEM_SQLITE)) <<>> + $(MINIZ_OBJ.$(FOSSIL_ENABLE_MINIZ)) <<>> + $(LINENOISE_OBJ.$(USE_LINENOISE)) <<>> + $(OBJDIR)/shell.o <<>> + $(OBJDIR)/th.o <<>> + $(OBJDIR)/th_lang.o <<>> + $(OBJDIR)/th_tcl.o <<>> + $(OBJDIR)/cson_amalgamation.o +}] + +writeln { +$(APPNAME): $(OBJDIR)/headers $(OBJDIR)/codecheck1 $(OBJ) $(EXTRAOBJ) + $(OBJDIR)/codecheck1 $(TRANS_SRC) + $(TCC) -o $(APPNAME) $(OBJ) $(EXTRAOBJ) $(LIB) + +# This rule prevents make from using its default rules to try build +# an executable named "manifest" out of the file named "manifest.c" +# +$(SRCDIR)/../manifest: + # noop + +clean: + rm -rf $(OBJDIR)/* $(APPNAME) + +} + +set mhargs {} +foreach s [lsort $src] { + append mhargs "\$(OBJDIR)/${s}_.c:\$(OBJDIR)/$s.h <<>>" + set extra_h($s) { } +} +append mhargs "\$(SRCDIR)/sqlite3.h <<>>" +append mhargs "\$(SRCDIR)/th.h <<>>" +#append mhargs "\$(SRCDIR)/cson_amalgamation.h <<>>" +append mhargs "\$(OBJDIR)/VERSION.h" +set mhargs [string map [list <<>> \\\n\t] $mhargs] +writeln "\$(OBJDIR)/page_index.h: \$(TRANS_SRC) \$(OBJDIR)/mkindex" +writeln "\t\$(OBJDIR)/mkindex \$(TRANS_SRC) >\$@\n" + +writeln "\$(OBJDIR)/builtin_data.h: \$(OBJDIR)/mkbuiltin \$(EXTRA_FILES)" +writeln "\t\$(OBJDIR)/mkbuiltin --prefix \$(SRCDIR)/ \$(EXTRA_FILES) >\$@\n" + +writeln "\$(OBJDIR)/headers:\t\$(OBJDIR)/page_index.h \$(OBJDIR)/builtin_data.h \$(OBJDIR)/makeheaders \$(OBJDIR)/VERSION.h" +writeln "\t\$(OBJDIR)/makeheaders $mhargs" +writeln "\ttouch \$(OBJDIR)/headers" +writeln "\$(OBJDIR)/headers: Makefile" +writeln "\$(OBJDIR)/json.o \$(OBJDIR)/json_artifact.o \$(OBJDIR)/json_branch.o \$(OBJDIR)/json_config.o \$(OBJDIR)/json_diff.o \$(OBJDIR)/json_dir.o \$(OBJDIR)/json_finfo.o \$(OBJDIR)/json_login.o \$(OBJDIR)/json_query.o \$(OBJDIR)/json_report.o \$(OBJDIR)/json_status.o \$(OBJDIR)/json_tag.o \$(OBJDIR)/json_timeline.o \$(OBJDIR)/json_user.o \$(OBJDIR)/json_wiki.o : \$(SRCDIR)/json_detail.h" +writeln "Makefile:" +set extra_h(main) " \$(OBJDIR)/page_index.h " +set extra_h(builtin) " \$(OBJDIR)/builtin_data.h " + +foreach s [lsort $src] { + writeln "\$(OBJDIR)/${s}_.c:\t\$(SRCDIR)/$s.c \$(OBJDIR)/translate" + writeln "\t\$(OBJDIR)/translate \$(SRCDIR)/$s.c >\$@\n" + writeln "\$(OBJDIR)/$s.o:\t\$(OBJDIR)/${s}_.c \$(OBJDIR)/$s.h$extra_h($s)\$(SRCDIR)/config.h" + writeln "\t\$(XTCC) -o \$(OBJDIR)/$s.o -c \$(OBJDIR)/${s}_.c\n" + writeln "\$(OBJDIR)/$s.h:\t\$(OBJDIR)/headers\n" +} + +writeln "\$(OBJDIR)/sqlite3.o:\t\$(SRCDIR)/sqlite3.c" +writeln "\t\$(XTCC) \$(SQLITE_OPTIONS) \$(SQLITE_CFLAGS) -c \$(SRCDIR)/sqlite3.c -o \$@\n" + +writeln "\$(OBJDIR)/shell.o:\t\$(SRCDIR)/shell.c \$(SRCDIR)/sqlite3.h" +writeln "\t\$(XTCC) \$(SHELL_OPTIONS) \$(SHELL_CFLAGS) \$(LINENOISE_DEF.\$(USE_LINENOISE)) -c \$(SRCDIR)/shell.c -o \$@\n" + +writeln "\$(OBJDIR)/linenoise.o:\t\$(SRCDIR)/linenoise.c \$(SRCDIR)/linenoise.h" +writeln "\t\$(XTCC) -c \$(SRCDIR)/linenoise.c -o \$@\n" + +writeln "\$(OBJDIR)/th.o:\t\$(SRCDIR)/th.c" +writeln "\t\$(XTCC) -c \$(SRCDIR)/th.c -o \$@\n" + +writeln "\$(OBJDIR)/th_lang.o:\t\$(SRCDIR)/th_lang.c" +writeln "\t\$(XTCC) -c \$(SRCDIR)/th_lang.c -o \$@\n" + +writeln "\$(OBJDIR)/th_tcl.o:\t\$(SRCDIR)/th_tcl.c" +writeln "\t\$(XTCC) -c \$(SRCDIR)/th_tcl.c -o \$@\n" + +writeln { +$(OBJDIR)/miniz.o: $(SRCDIR)/miniz.c + $(XTCC) $(MINIZ_OPTIONS) -c $(SRCDIR)/miniz.c -o $@ + +$(OBJDIR)/cson_amalgamation.o: $(SRCDIR)/cson_amalgamation.c + $(XTCC) -c $(SRCDIR)/cson_amalgamation.c -o $@ + +# +# The list of all the 
targets that do not correspond to real files. This stops +# 'make' from getting confused when someone makes an error in a rule. +# + +.PHONY: all install test clean +} + +close $output_file +# +# End of the main.mk output +############################################################################## +############################################################################## +############################################################################## +# Begin win/Makefile.mingw output +# +puts "building ../win/Makefile.mingw" +set output_file [open ../win/Makefile.mingw w] +fconfigure $output_file -translation binary + +writeln {#!/usr/bin/make +# +############################################################################## +# WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl") +############################################################################## +# +# This file is automatically generated. Instead of editing this +# file, edit "makemake.tcl" then run "tclsh makemake.tcl" +# to regenerate this file. +# +# This is a makefile for use on Cygwin/Darwin/FreeBSD/Linux/Windows using +# MinGW or MinGW-w64. +# +# Some of the special options which can be passed to make +# USE_WINDOWS=1 if building under a windows command prompt +# X64=1 if using an unprefixed 64-bit mingw compiler +# + +#### Select one of MinGW, MinGW-w64 (32-bit) or MinGW-w64 (64-bit) compilers. +# By default, this is an empty string (i.e. use the native compiler). +# +PREFIX = +# PREFIX = mingw32- +# PREFIX = i686-pc-mingw32- +# PREFIX = i686-w64-mingw32- +# PREFIX = x86_64-w64-mingw32- + +#### The toplevel directory of the source tree. Fossil can be built +# in a directory that is separate from the source tree. Just change +# the following to point from the build directory to the src/ folder. +# +SRCDIR = src + +#### The directory into which object code files should be written. +# +OBJDIR = wbld + +#### C Compiler and options for use in building executables that +# will run on the platform that is doing the build. This is used +# to compile code-generator programs as part of the build process. +# See TCC below for the C compiler for building the finished binary. +# +BCC = gcc + +#### Enable compiling with debug symbols (much larger binary) +# +# FOSSIL_ENABLE_SYMBOLS = 1 + +#### Enable JSON (http://www.json.org) support using "cson" +# +# FOSSIL_ENABLE_JSON = 1 + +#### Enable HTTPS support via OpenSSL (links to libssl and libcrypto) +# +# FOSSIL_ENABLE_SSL = 1 + +#### Automatically build OpenSSL when building Fossil (causes rebuild +# issues when building incrementally). +# +# FOSSIL_BUILD_SSL = 1 + +#### Enable relative paths in external diff/gdiff +# +# FOSSIL_ENABLE_EXEC_REL_PATHS = 1 + +#### Enable legacy treatment of mv/rm (skip checkout files) +# +# FOSSIL_ENABLE_LEGACY_MV_RM = 1 + +#### Enable TH1 scripts in embedded documentation files +# +# FOSSIL_ENABLE_TH1_DOCS = 1 + +#### Enable hooks for commands and web pages via TH1 +# +# FOSSIL_ENABLE_TH1_HOOKS = 1 + +#### Enable scripting support via Tcl/Tk +# +# FOSSIL_ENABLE_TCL = 1 + +#### Load Tcl using the stubs library mechanism +# +# FOSSIL_ENABLE_TCL_STUBS = 1 + +#### Load Tcl using the private stubs mechanism +# +# FOSSIL_ENABLE_TCL_PRIVATE_STUBS = 1 + +#### Use 'system' SQLite +# +# USE_SYSTEM_SQLITE = 1 + +#### Use the miniz compression library +# +# FOSSIL_ENABLE_MINIZ = 1 + +#### Use the Tcl source directory instead of the install directory? +# This is useful when Tcl has been compiled statically with MinGW. 
+# +FOSSIL_TCL_SOURCE = 1 + +#### Check if the workaround for the MinGW command line handling needs to +# be enabled by default. This check may be somewhat fragile due to the +# use of "findstring". +# +ifndef MINGW_IS_32BIT_ONLY +ifeq (,$(findstring w64-mingw32,$(PREFIX))) +MINGW_IS_32BIT_ONLY = 1 +endif +endif + +#### The directories where the zlib include and library files are located. +# +ZINCDIR = $(SRCDIR)/../compat/zlib +ZLIBDIR = $(SRCDIR)/../compat/zlib + +#### Make an attempt to detect if Fossil is being built for the x64 processor +# architecture. This check may be somewhat fragile due to "findstring". +# +ifndef X64 +ifneq (,$(findstring x86_64-w64-mingw32,$(PREFIX))) +X64 = 1 +endif +endif + +#### Determine if the optimized assembly routines provided with zlib should be +# used, taking into account whether zlib is actually enabled and the target +# processor architecture. +# +ifndef X64 +SSLCONFIG = mingw +ifndef FOSSIL_ENABLE_MINIZ +ZLIBCONFIG = LOC="-DASMV -DASMINF" OBJA="inffas86.o match.o" +LIBTARGETS = $(ZLIBDIR)/inffas86.o $(ZLIBDIR)/match.o +else +ZLIBCONFIG = +LIBTARGETS = +endif +else +SSLCONFIG = mingw64 +ZLIBCONFIG = +LIBTARGETS = +endif + +#### Disable creation of the OpenSSL shared libraries. Also, disable support +# for both SSLv2 and SSLv3 (i.e. thereby forcing the use of TLS). +# +SSLCONFIG += no-ssl2 no-ssl3 no-shared + +#### When using zlib, make sure that OpenSSL is configured to use the zlib +# that Fossil knows about (i.e. the one within the source tree). +# +ifndef FOSSIL_ENABLE_MINIZ +SSLCONFIG += --with-zlib-lib=$(PWD)/$(ZLIBDIR) --with-zlib-include=$(PWD)/$(ZLIBDIR) zlib +endif + +#### The directories where the OpenSSL include and library files are located. +# The recommended usage here is to use the Sysinternals junction tool +# to create a hard link between an "openssl-1.x" sub-directory of the +# Fossil source code directory and the target OpenSSL source directory. +# +OPENSSLDIR = $(SRCDIR)/../compat/openssl-1.0.2f +OPENSSLINCDIR = $(OPENSSLDIR)/include +OPENSSLLIBDIR = $(OPENSSLDIR) + +#### Either the directory where the Tcl library is installed or the Tcl +# source code directory resides (depending on the value of the macro +# FOSSIL_TCL_SOURCE). If this points to the Tcl install directory, +# this directory must have "include" and "lib" sub-directories. If +# this points to the Tcl source code directory, this directory must +# have "generic" and "win" sub-directories. The recommended usage +# here is to use the Sysinternals junction tool to create a hard +# link between a "tcl-8.x" sub-directory of the Fossil source code +# directory and the target Tcl directory. This removes the need to +# hard-code the necessary paths in this Makefile. +# +TCLDIR = $(SRCDIR)/../compat/tcl-8.6 + +#### The Tcl source code directory. This defaults to the same value as +# TCLDIR macro (above), which may not be correct. This value will +# only be used if the FOSSIL_TCL_SOURCE macro is defined. +# +TCLSRCDIR = $(TCLDIR) + +#### The Tcl include and library directories. These values will only be +# used if the FOSSIL_TCL_SOURCE macro is not defined. +# +TCLINCDIR = $(TCLDIR)/include +TCLLIBDIR = $(TCLDIR)/lib + +#### Tcl: Which Tcl library do we want to use (8.4, 8.5, 8.6, etc)? +# +ifdef FOSSIL_ENABLE_TCL_STUBS +ifndef FOSSIL_ENABLE_TCL_PRIVATE_STUBS +LIBTCL = -ltclstub86 +endif +TCLTARGET = libtclstub86.a +else +LIBTCL = -ltcl86 +TCLTARGET = binaries +endif + +#### C Compile and options for use in building executables that +# will run on the target platform. 
This is usually the same +# as BCC, unless you are cross-compiling. This C compiler builds +# the finished binary for fossil. The BCC compiler above is used +# for building intermediate code-generator tools. +# +TCC = $(PREFIX)gcc -Wall + +#### Add the necessary command line options to build with debugging +# symbols, if enabled. +# +ifdef FOSSIL_ENABLE_SYMBOLS +TCC += -g +else +TCC += -Os +endif + +#### When not using the miniz compression library, zlib is required. +# +ifndef FOSSIL_ENABLE_MINIZ +TCC += -L$(ZLIBDIR) -I$(ZINCDIR) +endif + +#### Compile resources for use in building executables that will run +# on the target platform. +# +RCC = $(PREFIX)windres -I$(SRCDIR) + +ifndef FOSSIL_ENABLE_MINIZ +RCC += -I$(ZINCDIR) +endif + +# With HTTPS support +ifdef FOSSIL_ENABLE_SSL +TCC += -L$(OPENSSLLIBDIR) -I$(OPENSSLINCDIR) +RCC += -I$(OPENSSLINCDIR) +endif + +# With Tcl support +ifdef FOSSIL_ENABLE_TCL +ifdef FOSSIL_TCL_SOURCE +TCC += -L$(TCLSRCDIR)/win -I$(TCLSRCDIR)/generic -I$(TCLSRCDIR)/win +RCC += -I$(TCLSRCDIR)/generic -I$(TCLSRCDIR)/win +else +TCC += -L$(TCLLIBDIR) -I$(TCLINCDIR) +RCC += -I$(TCLINCDIR) +endif +endif + +# With miniz (i.e. instead of zlib) +ifdef FOSSIL_ENABLE_MINIZ +TCC += -DFOSSIL_ENABLE_MINIZ=1 +RCC += -DFOSSIL_ENABLE_MINIZ=1 +endif + +# With MinGW command line handling workaround +ifdef MINGW_IS_32BIT_ONLY +TCC += -DBROKEN_MINGW_CMDLINE=1 +RCC += -DBROKEN_MINGW_CMDLINE=1 +endif + +# With HTTPS support +ifdef FOSSIL_ENABLE_SSL +TCC += -DFOSSIL_ENABLE_SSL=1 +RCC += -DFOSSIL_ENABLE_SSL=1 +endif + +# With relative paths in external diff/gdiff +ifdef FOSSIL_ENABLE_EXEC_REL_PATHS +TCC += -DFOSSIL_ENABLE_EXEC_REL_PATHS=1 +RCC += -DFOSSIL_ENABLE_EXEC_REL_PATHS=1 +endif + +# With legacy treatment of mv/rm +ifdef FOSSIL_ENABLE_LEGACY_MV_RM +TCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1 +RCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1 +endif + +# With TH1 embedded docs support +ifdef FOSSIL_ENABLE_TH1_DOCS +TCC += -DFOSSIL_ENABLE_TH1_DOCS=1 +RCC += -DFOSSIL_ENABLE_TH1_DOCS=1 +endif + +# With TH1 hook support +ifdef FOSSIL_ENABLE_TH1_HOOKS +TCC += -DFOSSIL_ENABLE_TH1_HOOKS=1 +RCC += -DFOSSIL_ENABLE_TH1_HOOKS=1 +endif + +# With Tcl support +ifdef FOSSIL_ENABLE_TCL +TCC += -DFOSSIL_ENABLE_TCL=1 +RCC += -DFOSSIL_ENABLE_TCL=1 +# Either statically linked or via stubs +ifdef FOSSIL_ENABLE_TCL_STUBS +TCC += -DFOSSIL_ENABLE_TCL_STUBS=1 -DUSE_TCL_STUBS +RCC += -DFOSSIL_ENABLE_TCL_STUBS=1 -DUSE_TCL_STUBS +ifdef FOSSIL_ENABLE_TCL_PRIVATE_STUBS +TCC += -DFOSSIL_ENABLE_TCL_PRIVATE_STUBS=1 +RCC += -DFOSSIL_ENABLE_TCL_PRIVATE_STUBS=1 +endif +else +TCC += -DSTATIC_BUILD +RCC += -DSTATIC_BUILD +endif +endif + +# With JSON support +ifdef FOSSIL_ENABLE_JSON +TCC += -DFOSSIL_ENABLE_JSON=1 +RCC += -DFOSSIL_ENABLE_JSON=1 +endif + +#### The option -static has no effect on MinGW(-w64), only dynamic +# executables can be built when linking with MSVCRT. OpenSSL +# (optional) and zlib (required) however are always linked in +# statically. Therefore, the FOSSIL_DYNAMIC_BUILD option does +# not really apply to MinGW (i.e. since ALL external libraries +# are NOT linked dynamically). +# +# LIB = -static + +#### MinGW: If available, use the Unicode capable runtime startup code. +# +ifndef MINGW_IS_32BIT_ONLY +LIB += -municode +endif + +#### SQLite: If enabled, use the system SQLite library. +# +ifdef USE_SYSTEM_SQLITE +LIB += -lsqlite3 +endif + +#### OpenSSL: Add the necessary libraries required, if enabled. 
+# +ifdef FOSSIL_ENABLE_SSL +LIB += -lssl -lcrypto -lgdi32 +endif + +#### Tcl: Add the necessary libraries required, if enabled. +# +ifdef FOSSIL_ENABLE_TCL +LIB += $(LIBTCL) +endif + +#### Extra arguments for linking the finished binary. Fossil needs +# to link against the Z-Lib compression library. There are no +# other mandatory dependencies. +# +LIB += -lmingwex + +#### When not using the miniz compression library, zlib is required. +# +ifndef FOSSIL_ENABLE_MINIZ +LIB += -lz +endif + +#### These libraries MUST appear in the same order as they do for Tcl +# or linking with it will not work (exact reason unknown). +# +ifdef FOSSIL_ENABLE_TCL +ifdef FOSSIL_ENABLE_TCL_STUBS +LIB += -lkernel32 -lws2_32 +else +LIB += -lnetapi32 -lkernel32 -luser32 -ladvapi32 -lws2_32 +endif +else +LIB += -lkernel32 -lws2_32 +endif + +#### Tcl shell for use in running the fossil test suite. This is only +# used for testing. +# +TCLSH = tclsh + +#### Nullsoft installer MakeNSIS location +# +MAKENSIS = "$(PROGRAMFILES)\NSIS\MakeNSIS.exe" + +#### Inno Setup executable location +# +INNOSETUP = "$(PROGRAMFILES)\Inno Setup 5\ISCC.exe" + +#### Include a configuration file that can override any one of these settings. +# +-include config.w32 + +# STOP HERE +# You should not need to change anything below this line +#-------------------------------------------------------- +XTCC = $(TCC) $(CFLAGS) -I. -I$(SRCDIR) +} +writeln -nonewline "SRC =" +foreach s [lsort $src] { + writeln -nonewline " \\\n \$(SRCDIR)/$s.c" +} +writeln "\n" +writeln -nonewline "EXTRA_FILES =" +foreach s [lsort $extra_files] { + writeln -nonewline " \\\n \$(SRCDIR)/$s" +} +writeln "\n" +writeln -nonewline "TRANS_SRC =" +foreach s [lsort $src] { + writeln -nonewline " \\\n \$(OBJDIR)/${s}_.c" +} +writeln "\n" +writeln -nonewline "OBJ =" +foreach s [lsort $src] { + writeln -nonewline " \\\n \$(OBJDIR)/$s.o" +} +writeln "\n" +writeln "APPNAME = ${name}.exe" +writeln "APPTARGETS =" +writeln { +#### If the USE_WINDOWS variable exists, it is assumed that we are building +# inside of a Windows-style shell; otherwise, it is assumed that we are +# building inside of a Unix-style shell. Note that the "move" command is +# broken when attempting to use it from the Windows shell via MinGW make +# because the SHELL variable is only used for certain commands that are +# recognized internally by make. 
+# +ifdef USE_WINDOWS +TRANSLATE = $(subst /,\,$(OBJDIR)/translate.exe) +MAKEHEADERS = $(subst /,\,$(OBJDIR)/makeheaders.exe) +MKINDEX = $(subst /,\,$(OBJDIR)/mkindex.exe) +MKBUILTIN = $(subst /,\,$(OBJDIR)/mkbuiltin.exe) +MKVERSION = $(subst /,\,$(OBJDIR)/mkversion.exe) +CODECHECK1 = $(subst /,\,$(OBJDIR)/codecheck1.exe) +CAT = type +CP = copy +GREP = find +MV = copy +RM = del /Q +MKDIR = -mkdir +RMDIR = rmdir /S /Q +else +TRANSLATE = $(OBJDIR)/translate.exe +MAKEHEADERS = $(OBJDIR)/makeheaders.exe +MKINDEX = $(OBJDIR)/mkindex.exe +MKBUILTIN = $(OBJDIR)/mkbuiltin.exe +MKVERSION = $(OBJDIR)/mkversion.exe +CODECHECK1 = $(OBJDIR)/codecheck1.exe +CAT = cat +CP = cp +GREP = grep +MV = mv +RM = rm -f +MKDIR = -mkdir -p +RMDIR = rm -rf +endif} + +writeln { +all: $(OBJDIR) $(APPNAME) + +$(OBJDIR)/fossil.o: $(SRCDIR)/../win/fossil.rc $(OBJDIR)/VERSION.h +ifdef USE_WINDOWS + $(CAT) $(subst /,\,$(SRCDIR)\miniz.c) | $(GREP) "define MZ_VERSION" > $(subst /,\,$(OBJDIR)\minizver.h) + $(CP) $(subst /,\,$(SRCDIR)\..\win\fossil.rc) $(subst /,\,$(OBJDIR)) + $(CP) $(subst /,\,$(SRCDIR)\..\win\fossil.ico) $(subst /,\,$(OBJDIR)) + $(CP) $(subst /,\,$(SRCDIR)\..\win\fossil.exe.manifest) $(subst /,\,$(OBJDIR)) +else + $(CAT) $(SRCDIR)/miniz.c | $(GREP) "define MZ_VERSION" > $(OBJDIR)/minizver.h + $(CP) $(SRCDIR)/../win/fossil.rc $(OBJDIR) + $(CP) $(SRCDIR)/../win/fossil.ico $(OBJDIR) + $(CP) $(SRCDIR)/../win/fossil.exe.manifest $(OBJDIR) +endif + $(RCC) $(OBJDIR)/fossil.rc -o $(OBJDIR)/fossil.o + +install: $(OBJDIR) $(APPNAME) +ifdef USE_WINDOWS + $(MKDIR) $(subst /,\,$(INSTALLDIR)) + $(MV) $(subst /,\,$(APPNAME)) $(subst /,\,$(INSTALLDIR)) +else + $(MKDIR) $(INSTALLDIR) + $(MV) $(APPNAME) $(INSTALLDIR) +endif + +$(OBJDIR): +ifdef USE_WINDOWS + $(MKDIR) $(subst /,\,$(OBJDIR)) +else + $(MKDIR) $(OBJDIR) +endif + +$(TRANSLATE): $(SRCDIR)/translate.c + $(BCC) -o $@ $(SRCDIR)/translate.c + +$(MAKEHEADERS): $(SRCDIR)/makeheaders.c + $(BCC) -o $@ $(SRCDIR)/makeheaders.c + +$(MKINDEX): $(SRCDIR)/mkindex.c + $(BCC) -o $@ $(SRCDIR)/mkindex.c + +$(MKBUILTIN): $(SRCDIR)/mkbuiltin.c + $(BCC) -o $@ $(SRCDIR)/mkbuiltin.c + +$(MKVERSION): $(SRCDIR)/mkversion.c + $(BCC) -o $@ $(SRCDIR)/mkversion.c + +$(CODECHECK1): $(SRCDIR)/codecheck1.c + $(BCC) -o $@ $(SRCDIR)/codecheck1.c + +# WARNING. DANGER. Running the test suite modifies the repository the # build is done from, i.e. the checkout belongs to. Do not sync/push # the repository after running the tests. -test: $(APPNAME) - $(TCLSH) test/tester.tcl $(APPNAME) - -VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest - awk '{ printf "#define MANIFEST_UUID \"%s\"\n", $$1}' \ - $(SRCDIR)/../manifest.uuid >VERSION.h - awk '{ printf "#define MANIFEST_VERSION \"[%.10s]\"\n", $$1}' \ - $(SRCDIR)/../manifest.uuid >>VERSION.h - awk '$$1=="D"{printf "#define MANIFEST_DATE \"%s %s\"\n",\ - substr($$2,1,10),substr($$2,12)}' \ - $(SRCDIR)/../manifest >>VERSION.h - -$(APPNAME): headers $(OBJ) $(OBJDIR)/sqlite3.o $(OBJDIR)/th.o $(OBJDIR)/th_lang.o - $(TCC) -o $(APPNAME) $(OBJ) $(OBJDIR)/sqlite3.o $(OBJDIR)/th.o $(OBJDIR)/th_lang.o $(LIB) +test: $(OBJDIR) $(APPNAME) + $(TCLSH) $(SRCDIR)/../test/tester.tcl $(APPNAME) + +$(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(MKVERSION) + $(MKVERSION) $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(SRCDIR)/../VERSION >$@ + +# The USE_SYSTEM_SQLITE variable may be undefined, set to 0, or set +# to 1. If it is set to 1, then there is no need to build or link +# the sqlite3.o object. 
Instead, the system SQLite will be linked +# using -lsqlite3. +SQLITE3_OBJ.1 = +SQLITE3_OBJ.0 = $(OBJDIR)/sqlite3.o +SQLITE3_OBJ. = $(SQLITE3_OBJ.0) + +# The FOSSIL_ENABLE_MINIZ variable may be undefined, set to 0, or +# set to 1. If it is set to 1, the miniz library included in the +# source tree should be used; otherwise, it should not. +MINIZ_OBJ.0 = +MINIZ_OBJ.1 = $(OBJDIR)/miniz.o +MINIZ_OBJ. = $(MINIZ_OBJ.0) +} + +writeln [string map [list <<>> \\] { +EXTRAOBJ = <<>> + $(SQLITE3_OBJ.$(USE_SYSTEM_SQLITE)) <<>> + $(MINIZ_OBJ.$(FOSSIL_ENABLE_MINIZ)) <<>> + $(OBJDIR)/shell.o <<>> + $(OBJDIR)/th.o <<>> + $(OBJDIR)/th_lang.o <<>> + $(OBJDIR)/th_tcl.o <<>> + $(OBJDIR)/cson_amalgamation.o +}] + +writeln { +zlib: + $(MAKE) -C $(ZLIBDIR) PREFIX=$(PREFIX) $(ZLIBCONFIG) -f win32/Makefile.gcc libz.a + +clean-zlib: + $(MAKE) -C $(ZLIBDIR) PREFIX=$(PREFIX) -f win32/Makefile.gcc clean + +$(ZLIBDIR)/inffas86.o: + $(TCC) -c -o $@ -DASMINF -I$(ZLIBDIR) -O3 $(ZLIBDIR)/contrib/inflate86/inffas86.c + +$(ZLIBDIR)/match.o: + $(TCC) -c -o $@ -DASMV $(ZLIBDIR)/contrib/asm686/match.S + + +ifndef FOSSIL_ENABLE_MINIZ +LIBTARGETS += zlib +endif + +openssl: $(LIBTARGETS) + cd $(OPENSSLLIBDIR);./Configure --cross-compile-prefix=$(PREFIX) $(SSLCONFIG) + $(MAKE) -C $(OPENSSLLIBDIR) build_libs + +clean-openssl: + $(MAKE) -C $(OPENSSLLIBDIR) clean + +tcl: + cd $(TCLSRCDIR)/win;./configure + $(MAKE) -C $(TCLSRCDIR)/win $(TCLTARGET) + +clean-tcl: + $(MAKE) -C $(TCLSRCDIR)/win distclean + +APPTARGETS += $(LIBTARGETS) + +ifdef FOSSIL_BUILD_SSL +APPTARGETS += openssl +endif + +$(APPNAME): $(APPTARGETS) $(OBJDIR)/headers $(CODECHECK1) $(OBJ) $(EXTRAOBJ) $(OBJDIR)/fossil.o + $(CODECHECK1) $(TRANS_SRC) + $(TCC) -o $@ $(OBJ) $(EXTRAOBJ) $(OBJDIR)/fossil.o $(LIB) # This rule prevents make from using its default rules to try build # an executable named "manifest" out of the file named "manifest.c" # -$(SRCDIR)/../manifest: +$(SRCDIR)/../manifest: # noop -clean: - rm -f $(OBJDIR)/*.o *_.c $(APPNAME) VERSION.h - rm -f translate makeheaders mkindex page_index.h headers} +clean: +ifdef USE_WINDOWS + $(RM) $(subst /,\,$(APPNAME)) + $(RMDIR) $(subst /,\,$(OBJDIR)) +else + $(RM) $(APPNAME) + $(RMDIR) $(OBJDIR) +endif -set hfiles {} -foreach s [lsort $src] {lappend hfiles $s.h} -puts "\trm -f $hfiles\n" +setup: $(OBJDIR) $(APPNAME) + $(MAKENSIS) ./setup/fossil.nsi + +innosetup: $(OBJDIR) $(APPNAME) + $(INNOSETUP) ./setup/fossil.iss -DAppVersion=$(shell $(CAT) ./VERSION) +} set mhargs {} foreach s [lsort $src] { - append mhargs " ${s}_.c:$s.h" - set extra_h($s) {} -} -append mhargs " \$(SRCDIR)/sqlite3.h" -append mhargs " \$(SRCDIR)/th.h" -append mhargs " VERSION.h" -puts "page_index.h: \$(TRANS_SRC) mkindex" -puts "\t./mkindex \$(TRANS_SRC) >$@" -puts "headers:\tpage_index.h makeheaders VERSION.h" -puts "\t./makeheaders $mhargs" -puts "\ttouch headers" -puts "headers: Makefile" -puts "Makefile:" -set extra_h(main) page_index.h - -foreach s [lsort $src] { - puts "${s}_.c:\t\$(SRCDIR)/$s.c translate" - puts "\t./translate \$(SRCDIR)/$s.c >${s}_.c\n" - puts "\$(OBJDIR)/$s.o:\t${s}_.c $s.h $extra_h($s) \$(SRCDIR)/config.h" - puts "\t\$(XTCC) -o \$(OBJDIR)/$s.o -c ${s}_.c\n" - puts "$s.h:\theaders" -# puts "\t./makeheaders $mhargs\n\ttouch headers\n" -# puts "\t./makeheaders ${s}_.c:${s}.h\n" -} - - -puts "\$(OBJDIR)/sqlite3.o:\t\$(SRCDIR)/sqlite3.c" -set opt {-DSQLITE_OMIT_LOAD_EXTENSION=1} -append opt " -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4" -#append opt " -DSQLITE_ENABLE_FTS3=1" -append opt " -Dlocaltime=fossil_localtime" 
-append opt " -DSQLITE_ENABLE_LOCKING_STYLE=0" -puts "\t\$(XTCC) $opt -c \$(SRCDIR)/sqlite3.c -o \$(OBJDIR)/sqlite3.o\n" - -puts "\$(OBJDIR)/th.o:\t\$(SRCDIR)/th.c" -puts "\t\$(XTCC) -I\$(SRCDIR) -c \$(SRCDIR)/th.c -o \$(OBJDIR)/th.o\n" - -puts "\$(OBJDIR)/th_lang.o:\t\$(SRCDIR)/th_lang.c" -puts "\t\$(XTCC) -I\$(SRCDIR) -c \$(SRCDIR)/th_lang.c -o \$(OBJDIR)/th_lang.o\n" + if {[string length $mhargs] > 0} {append mhargs " \\\n\t\t"} + append mhargs "\$(OBJDIR)/${s}_.c:\$(OBJDIR)/$s.h" + set extra_h($s) { } +} +append mhargs " \\\n\t\t\$(SRCDIR)/sqlite3.h" +append mhargs " \\\n\t\t\$(SRCDIR)/th.h" +append mhargs " \\\n\t\t\$(OBJDIR)/VERSION.h" +writeln "\$(OBJDIR)/page_index.h: \$(TRANS_SRC) \$(MKINDEX)" +writeln "\t\$(MKINDEX) \$(TRANS_SRC) >\$@\n" + +writeln "\$(OBJDIR)/builtin_data.h:\t\$(MKBUILTIN) \$(EXTRA_FILES)" +writeln "\t\$(MKBUILTIN) --prefix \$(SRCDIR)/ \$(EXTRA_FILES) >\$@\n" + +writeln "\$(OBJDIR)/headers:\t\$(OBJDIR)/page_index.h \$(OBJDIR)/builtin_data.h \$(MAKEHEADERS) \$(OBJDIR)/VERSION.h" +writeln "\t\$(MAKEHEADERS) $mhargs" +writeln "\techo Done >\$(OBJDIR)/headers\n" +writeln "\$(OBJDIR)/headers: Makefile\n" +writeln "Makefile:\n" +set extra_h(main) " \$(OBJDIR)/page_index.h " +set extra_h(builtin) " \$(OBJDIR)/builtin_data.h " + +foreach s [lsort $src] { + writeln "\$(OBJDIR)/${s}_.c:\t\$(SRCDIR)/$s.c \$(TRANSLATE)" + writeln "\t\$(TRANSLATE) \$(SRCDIR)/$s.c >\$@\n" + writeln "\$(OBJDIR)/$s.o:\t\$(OBJDIR)/${s}_.c \$(OBJDIR)/$s.h$extra_h($s)\$(SRCDIR)/config.h" + writeln "\t\$(XTCC) -o \$(OBJDIR)/$s.o -c \$(OBJDIR)/${s}_.c\n" + writeln "\$(OBJDIR)/${s}.h:\t\$(OBJDIR)/headers\n" +} + +set SQLITE_WIN32_OPTIONS $SQLITE_OPTIONS +lappend SQLITE_WIN32_OPTIONS -DSQLITE_WIN32_NO_ANSI + +set MINGW_SQLITE_OPTIONS $SQLITE_WIN32_OPTIONS +lappend MINGW_SQLITE_OPTIONS -D_HAVE__MINGW_H +lappend MINGW_SQLITE_OPTIONS -DSQLITE_USE_MALLOC_H +lappend MINGW_SQLITE_OPTIONS -DSQLITE_USE_MSIZE + +set MINIZ_WIN32_OPTIONS $MINIZ_OPTIONS + +set j " \\\n " +writeln "SQLITE_OPTIONS = [join $MINGW_SQLITE_OPTIONS $j]\n" +set j " \\\n " +writeln "SHELL_OPTIONS = [join $SHELL_WIN32_OPTIONS $j]\n" +set j " \\\n " +writeln "MINIZ_OPTIONS = [join $MINIZ_WIN32_OPTIONS $j]\n" + +writeln "\$(OBJDIR)/sqlite3.o:\t\$(SRCDIR)/sqlite3.c \$(SRCDIR)/../win/Makefile.mingw" +writeln "\t\$(XTCC) \$(SQLITE_OPTIONS) \$(SQLITE_CFLAGS) -c \$(SRCDIR)/sqlite3.c -o \$@\n" + +writeln "\$(OBJDIR)/cson_amalgamation.o:\t\$(SRCDIR)/cson_amalgamation.c" +writeln "\t\$(XTCC) -c \$(SRCDIR)/cson_amalgamation.c -o \$@\n" +writeln "\$(OBJDIR)/json.o \$(OBJDIR)/json_artifact.o \$(OBJDIR)/json_branch.o \$(OBJDIR)/json_config.o \$(OBJDIR)/json_diff.o \$(OBJDIR)/json_dir.o \$(OBJDIR)/jsos_finfo.o \$(OBJDIR)/json_login.o \$(OBJDIR)/json_query.o \$(OBJDIR)/json_report.o \$(OBJDIR)/json_status.o \$(OBJDIR)/json_tag.o \$(OBJDIR)/json_timeline.o \$(OBJDIR)/json_user.o \$(OBJDIR)/json_wiki.o : \$(SRCDIR)/json_detail.h\n" + +writeln "\$(OBJDIR)/shell.o:\t\$(SRCDIR)/shell.c \$(SRCDIR)/sqlite3.h \$(SRCDIR)/../win/Makefile.mingw" +writeln "\t\$(XTCC) \$(SHELL_OPTIONS) \$(SHELL_CFLAGS) -c \$(SRCDIR)/shell.c -o \$@\n" + +writeln "\$(OBJDIR)/th.o:\t\$(SRCDIR)/th.c" +writeln "\t\$(XTCC) -c \$(SRCDIR)/th.c -o \$@\n" + +writeln "\$(OBJDIR)/th_lang.o:\t\$(SRCDIR)/th_lang.c" +writeln "\t\$(XTCC) -c \$(SRCDIR)/th_lang.c -o \$@\n" + +writeln "\$(OBJDIR)/th_tcl.o:\t\$(SRCDIR)/th_tcl.c" +writeln "\t\$(XTCC) -c \$(SRCDIR)/th_tcl.c -o \$@\n" + +writeln "\$(OBJDIR)/miniz.o:\t\$(SRCDIR)/miniz.c" +writeln "\t\$(XTCC) \$(MINIZ_OPTIONS) -c \$(SRCDIR)/miniz.c -o \$@\n" 
+ +close $output_file +# +# End of the win/Makefile.mingw output +############################################################################## +############################################################################## +############################################################################## +# Begin win/Makefile.dmc output +# +puts "building ../win/Makefile.dmc" +set output_file [open ../win/Makefile.dmc w] +fconfigure $output_file -translation binary + +writeln {# +############################################################################## +# WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl") +############################################################################## +# +# This file is automatically generated. Instead of editing this +# file, edit "makemake.tcl" then run "tclsh makemake.tcl" +# to regenerate this file. +# +B = .. +SRCDIR = $B\src +OBJDIR = . +O = .obj +E = .exe + + +# Maybe DMDIR, SSL or INCL needs adjustment +DMDIR = c:\DM +INCL = -I. -I$(SRCDIR) -I$B\win\include -I$(DMDIR)\extra\include + +#SSL = -DFOSSIL_ENABLE_SSL=1 +SSL = + +CFLAGS = -o +BCC = $(DMDIR)\bin\dmc $(CFLAGS) +TCC = $(DMDIR)\bin\dmc $(CFLAGS) $(DMCDEF) $(SSL) $(INCL) +LIBS = $(DMDIR)\extra\lib\ zlib wsock32 advapi32 +} +writeln "SQLITE_OPTIONS = [join $SQLITE_OPTIONS { }]\n" +writeln "SHELL_OPTIONS = [join $SHELL_WIN32_OPTIONS { }]\n" +writeln -nonewline "SRC = " +foreach s [lsort $src] { + writeln -nonewline "${s}_.c " +} +writeln "\n" +writeln -nonewline "OBJ = " +foreach s [lsort $src] { + writeln -nonewline "\$(OBJDIR)\\$s\$O " +} +writeln "\$(OBJDIR)\\shell\$O \$(OBJDIR)\\sqlite3\$O \$(OBJDIR)\\th\$O \$(OBJDIR)\\th_lang\$O " +writeln { + +RC=$(DMDIR)\bin\rcc +RCFLAGS=-32 -w1 -I$(SRCDIR) /D__DMC__ + +APPNAME = $(OBJDIR)\fossil$(E) + +all: $(APPNAME) + +$(APPNAME) : translate$E mkindex$E codecheck1$E headers $(OBJ) $(OBJDIR)\link + cd $(OBJDIR) + codecheck1$E $(SRC) + $(DMDIR)\bin\link @link + +$(OBJDIR)\fossil.res: $B\win\fossil.rc + $(RC) $(RCFLAGS) -o$@ $** + +$(OBJDIR)\link: $B\win\Makefile.dmc $(OBJDIR)\fossil.res} +writeln -nonewline "\t+echo " +foreach s [lsort $src] { + writeln -nonewline "$s " +} +writeln "shell sqlite3 th th_lang > \$@" +writeln "\t+echo fossil >> \$@" +writeln "\t+echo fossil >> \$@" +writeln "\t+echo \$(LIBS) >> \$@" +writeln "\t+echo. 
>> \$@" +writeln "\t+echo fossil >> \$@" + +writeln { +translate$E: $(SRCDIR)\translate.c + $(BCC) -o$@ $** + +makeheaders$E: $(SRCDIR)\makeheaders.c + $(BCC) -o$@ $** + +mkindex$E: $(SRCDIR)\mkindex.c + $(BCC) -o$@ $** + +mkbuiltin$E: $(SRCDIR)\mkbuiltin.c + $(BCC) -o$@ $** + +mkversion$E: $(SRCDIR)\mkversion.c + $(BCC) -o$@ $** + +codecheck1$E: $(SRCDIR)\codecheck1.c + $(BCC) -o$@ $** + +$(OBJDIR)\shell$O : $(SRCDIR)\shell.c + $(TCC) -o$@ -c $(SHELL_OPTIONS) $(SQLITE_OPTIONS) $(SHELL_CFLAGS) $** + +$(OBJDIR)\sqlite3$O : $(SRCDIR)\sqlite3.c + $(TCC) -o$@ -c $(SQLITE_OPTIONS) $(SQLITE_CFLAGS) $** + +$(OBJDIR)\th$O : $(SRCDIR)\th.c + $(TCC) -o$@ -c $** + +$(OBJDIR)\th_lang$O : $(SRCDIR)\th_lang.c + $(TCC) -o$@ -c $** + +$(OBJDIR)\cson_amalgamation.h : $(SRCDIR)\cson_amalgamation.h + cp $@ $@ + +VERSION.h : mkversion$E $B\manifest.uuid $B\manifest $B\VERSION + +$** > $@ + +page_index.h: mkindex$E $(SRC) + +$** > $@ + +builtin_data.h: mkbuiltin$E $(EXTRA_FILES) + mkbuiltin$E --prefix $(SRCDIR)/ $(EXTRA_FILES) > $@ + +clean: + -del $(OBJDIR)\*.obj + -del *.obj *_.c *.h *.map + +realclean: + -del $(APPNAME) translate$E mkindex$E makeheaders$E mkversion$E codecheck1$E mkbuiltin$E + +$(OBJDIR)\json$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_artifact$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_branch$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_config$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_diff$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_dir$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_finfo$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_login$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_query$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_report$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_status$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_tag$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_timeline$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_user$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_wiki$O : $(SRCDIR)\json_detail.h + + +} +foreach s [lsort $src] { + writeln "\$(OBJDIR)\\$s\$O : ${s}_.c ${s}.h" + writeln "\t\$(TCC) -o\$@ -c ${s}_.c\n" + writeln "${s}_.c : \$(SRCDIR)\\$s.c" + writeln "\t+translate\$E \$** > \$@\n" +} + +writeln -nonewline "headers: makeheaders\$E page_index.h builtin_data.h VERSION.h\n\t +makeheaders\$E " +foreach s [lsort $src] { + writeln -nonewline "${s}_.c:$s.h " +} +writeln "\$(SRCDIR)\\sqlite3.h \$(SRCDIR)\\th.h VERSION.h \$(SRCDIR)\\cson_amalgamation.h" +writeln "\t@copy /Y nul: headers" + +close $output_file +# +# End of the win/Makefile.dmc output +############################################################################## +############################################################################## +############################################################################## +# Begin win/Makefile.msc output +# +puts "building ../win/Makefile.msc" +set output_file [open ../win/Makefile.msc w] +fconfigure $output_file -translation binary + +writeln {# +############################################################################## +# WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl") +############################################################################## +# +# This Makefile will only function correctly if used from a sub-directory +# that is a direct child of the top-level directory for this project. +# +!if !exist("..\.fossil-settings") +!error "Please change the current directory to the one containing this file." +!endif + +# +# This file is automatically generated. 
Instead of editing this +# file, edit "makemake.tcl" then run "tclsh makemake.tcl" +# to regenerate this file. +# +B = .. +SRCDIR = $B\src +OBJDIR = . +OX = . +O = .obj +E = .exe +P = .pdb + +# Perl is only necessary if OpenSSL support is enabled and it must +# be built from source code. The PERLDIR variable should point to +# the directory containing the main Perl binary (i.e. "perl.exe"). +PERLDIR = C:\Perl\bin +PERL = perl.exe + +# Enable debugging symbols? +!ifndef DEBUG +DEBUG = 0 +!endif + +# Build the OpenSSL libraries? +!ifndef FOSSIL_BUILD_SSL +FOSSIL_BUILD_SSL = 0 +!endif + +# Build the included zlib library? +!ifndef FOSSIL_BUILD_ZLIB +FOSSIL_BUILD_ZLIB = 1 +!endif + +# Link everything except SQLite dynamically? +!ifndef FOSSIL_DYNAMIC_BUILD +FOSSIL_DYNAMIC_BUILD = 0 +!endif + +# Enable relative paths in external diff/gdiff? +!ifndef FOSSIL_ENABLE_EXEC_REL_PATHS +FOSSIL_ENABLE_EXEC_REL_PATHS = 0 +!endif + +# Enable the JSON API? +!ifndef FOSSIL_ENABLE_JSON +FOSSIL_ENABLE_JSON = 0 +!endif + +# Enable legacy treatment of the mv/rm commands? +!ifndef FOSSIL_ENABLE_LEGACY_MV_RM +FOSSIL_ENABLE_LEGACY_MV_RM = 0 +!endif + +# Enable use of miniz instead of zlib? +!ifndef FOSSIL_ENABLE_MINIZ +FOSSIL_ENABLE_MINIZ = 0 +!endif + +# Enable OpenSSL support? +!ifndef FOSSIL_ENABLE_SSL +FOSSIL_ENABLE_SSL = 0 +!endif + +# Enable the Tcl integration subsystem? +!ifndef FOSSIL_ENABLE_TCL +FOSSIL_ENABLE_TCL = 0 +!endif + +# Enable TH1 scripts in embedded documentation files? +!ifndef FOSSIL_ENABLE_TH1_DOCS +FOSSIL_ENABLE_TH1_DOCS = 0 +!endif + +# Enable TH1 hooks for commands and web pages? +!ifndef FOSSIL_ENABLE_TH1_HOOKS +FOSSIL_ENABLE_TH1_HOOKS = 0 +!endif + +# Enable support for Windows XP with Visual Studio 201x? +!ifndef FOSSIL_ENABLE_WINXP +FOSSIL_ENABLE_WINXP = 0 +!endif + +!if $(FOSSIL_ENABLE_SSL)!=0 +SSLDIR = $(B)\compat\openssl-1.0.2f +SSLINCDIR = $(SSLDIR)\inc32 +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +SSLLIBDIR = $(SSLDIR)\out32dll +!else +SSLLIBDIR = $(SSLDIR)\out32 +!endif +SSLLFLAGS = /nologo /opt:ref /debug +SSLLIB = ssleay32.lib libeay32.lib user32.lib gdi32.lib +!if "$(PLATFORM)"=="amd64" || "$(PLATFORM)"=="x64" +!message Using 'x64' platform for OpenSSL... +# BUGBUG (OpenSSL): Using "no-ssl*" here breaks the build. +# SSLCONFIG = VC-WIN64A no-asm no-ssl2 no-ssl3 +SSLCONFIG = VC-WIN64A no-asm +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +SSLCONFIG = $(SSLCONFIG) shared +!else +SSLCONFIG = $(SSLCONFIG) no-shared +!endif +SSLSETUP = ms\do_win64a.bat +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +SSLNMAKE = ms\ntdll.mak all +!else +SSLNMAKE = ms\nt.mak all +!endif +# BUGBUG (OpenSSL): Using "OPENSSL_NO_SSL*" here breaks dynamic builds. +!if $(FOSSIL_DYNAMIC_BUILD)==0 +SSLCFLAGS = -DOPENSSL_NO_SSL2 -DOPENSSL_NO_SSL3 +!endif +!elseif "$(PLATFORM)"=="ia64" +!message Using 'ia64' platform for OpenSSL... +# BUGBUG (OpenSSL): Using "no-ssl*" here breaks the build. +# SSLCONFIG = VC-WIN64I no-asm no-ssl2 no-ssl3 +SSLCONFIG = VC-WIN64I no-asm +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +SSLCONFIG = $(SSLCONFIG) shared +!else +SSLCONFIG = $(SSLCONFIG) no-shared +!endif +SSLSETUP = ms\do_win64i.bat +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +SSLNMAKE = ms\ntdll.mak all +!else +SSLNMAKE = ms\nt.mak all +!endif +# BUGBUG (OpenSSL): Using "OPENSSL_NO_SSL*" here breaks dynamic builds. +!if $(FOSSIL_DYNAMIC_BUILD)==0 +SSLCFLAGS = -DOPENSSL_NO_SSL2 -DOPENSSL_NO_SSL3 +!endif +!else +!message Assuming 'x86' platform for OpenSSL... +# BUGBUG (OpenSSL): Using "no-ssl*" here breaks the build. 
+# SSLCONFIG = VC-WIN32 no-asm no-ssl2 no-ssl3 +SSLCONFIG = VC-WIN32 no-asm +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +SSLCONFIG = $(SSLCONFIG) shared +!else +SSLCONFIG = $(SSLCONFIG) no-shared +!endif +SSLSETUP = ms\do_ms.bat +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +SSLNMAKE = ms\ntdll.mak all +!else +SSLNMAKE = ms\nt.mak all +!endif +# BUGBUG (OpenSSL): Using "OPENSSL_NO_SSL*" here breaks dynamic builds. +!if $(FOSSIL_DYNAMIC_BUILD)==0 +SSLCFLAGS = -DOPENSSL_NO_SSL2 -DOPENSSL_NO_SSL3 +!endif +!endif +!endif + +!if $(FOSSIL_ENABLE_TCL)!=0 +TCLDIR = $(B)\compat\tcl-8.6 +TCLSRCDIR = $(TCLDIR) +TCLINCDIR = $(TCLSRCDIR)\generic +!endif + +# zlib options +ZINCDIR = $(B)\compat\zlib +ZLIBDIR = $(B)\compat\zlib + +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +ZLIB = zdll.lib +!else +ZLIB = zlib.lib +!endif + +INCL = /I. /I$(SRCDIR) /I$B\win\include + +!if $(FOSSIL_ENABLE_MINIZ)==0 +INCL = $(INCL) /I$(ZINCDIR) +!endif + +!if $(FOSSIL_ENABLE_SSL)!=0 +INCL = $(INCL) /I$(SSLINCDIR) +!endif + +!if $(FOSSIL_ENABLE_TCL)!=0 +INCL = $(INCL) /I$(TCLINCDIR) +!endif + +CFLAGS = /nologo +LDFLAGS = + +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +LDFLAGS = $(LDFLAGS) /MANIFEST +!else +LDFLAGS = $(LDFLAGS) /NODEFAULTLIB:msvcrt /MANIFEST:NO +!endif + +!if $(FOSSIL_ENABLE_WINXP)!=0 +XPCFLAGS = $(XPCFLAGS) /D_USING_V110_SDK71_=1 +CFLAGS = $(CFLAGS) $(XPCFLAGS) +!if "$(PLATFORM)"=="amd64" || "$(PLATFORM)"=="x64" +XPLDFLAGS = $(XPLDFLAGS) /SUBSYSTEM:CONSOLE,5.02 +!else +XPLDFLAGS = $(XPLDFLAGS) /SUBSYSTEM:CONSOLE,5.01 +!endif +LDFLAGS = $(LDFLAGS) $(XPLDFLAGS) +!endif + +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +!if $(DEBUG)!=0 +CRTFLAGS = /MDd +!else +CRTFLAGS = /MD +!endif +!else +!if $(DEBUG)!=0 +CRTFLAGS = /MTd +!else +CRTFLAGS = /MT +!endif +!endif + +!if $(DEBUG)!=0 +CFLAGS = $(CFLAGS) /Zi $(CRTFLAGS) /Od +LDFLAGS = $(LDFLAGS) /DEBUG +!else +CFLAGS = $(CFLAGS) $(CRTFLAGS) /O2 +!endif + +BCC = $(CC) $(CFLAGS) +TCC = $(CC) /c $(CFLAGS) $(MSCDEF) $(INCL) +RCC = $(RC) /D_WIN32 /D_MSC_VER $(MSCDEF) $(INCL) +MTC = mt +LIBS = ws2_32.lib advapi32.lib +LIBDIR = + +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +TCC = $(TCC) /DFOSSIL_DYNAMIC_BUILD=1 +RCC = $(RCC) /DFOSSIL_DYNAMIC_BUILD=1 +!endif + +!if $(FOSSIL_ENABLE_MINIZ)==0 +LIBS = $(LIBS) $(ZLIB) +LIBDIR = $(LIBDIR) /LIBPATH:$(ZLIBDIR) +!endif + +!if $(FOSSIL_ENABLE_MINIZ)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_MINIZ=1 +RCC = $(RCC) /DFOSSIL_ENABLE_MINIZ=1 +!endif + +!if $(FOSSIL_ENABLE_JSON)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_JSON=1 +RCC = $(RCC) /DFOSSIL_ENABLE_JSON=1 +!endif + +!if $(FOSSIL_ENABLE_SSL)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_SSL=1 +RCC = $(RCC) /DFOSSIL_ENABLE_SSL=1 +LIBS = $(LIBS) $(SSLLIB) +LIBDIR = $(LIBDIR) /LIBPATH:$(SSLLIBDIR) +!endif + +!if $(FOSSIL_ENABLE_EXEC_REL_PATHS)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_EXEC_REL_PATHS=1 +RCC = $(RCC) /DFOSSIL_ENABLE_EXEC_REL_PATHS=1 +!endif + +!if $(FOSSIL_ENABLE_LEGACY_MV_RM)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_LEGACY_MV_RM=1 +RCC = $(RCC) /DFOSSIL_ENABLE_LEGACY_MV_RM=1 +!endif + +!if $(FOSSIL_ENABLE_TH1_DOCS)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_TH1_DOCS=1 +RCC = $(RCC) /DFOSSIL_ENABLE_TH1_DOCS=1 +!endif + +!if $(FOSSIL_ENABLE_TH1_HOOKS)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_TH1_HOOKS=1 +RCC = $(RCC) /DFOSSIL_ENABLE_TH1_HOOKS=1 +!endif + +!if $(FOSSIL_ENABLE_TCL)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_TCL=1 +RCC = $(RCC) /DFOSSIL_ENABLE_TCL=1 +TCC = $(TCC) /DFOSSIL_ENABLE_TCL_STUBS=1 +RCC = $(RCC) /DFOSSIL_ENABLE_TCL_STUBS=1 +TCC = $(TCC) /DFOSSIL_ENABLE_TCL_PRIVATE_STUBS=1 +RCC = $(RCC) /DFOSSIL_ENABLE_TCL_PRIVATE_STUBS=1 +TCC = $(TCC) /DUSE_TCL_STUBS=1 +RCC = $(RCC) /DUSE_TCL_STUBS=1 +!endif +} +regsub 
-all {[-]D} [join $SQLITE_WIN32_OPTIONS { }] {/D} MSC_SQLITE_OPTIONS +set j " \\\n " +writeln "SQLITE_OPTIONS = [join $MSC_SQLITE_OPTIONS $j]\n" + +regsub -all {[-]D} [join $SHELL_WIN32_OPTIONS { }] {/D} MSC_SHELL_OPTIONS +set j " \\\n " +writeln "SHELL_OPTIONS = [join $MSC_SHELL_OPTIONS $j]\n" + +regsub -all {[-]D} [join $MINIZ_WIN32_OPTIONS { }] {/D} MSC_MINIZ_OPTIONS +set j " \\\n " +writeln "MINIZ_OPTIONS = [join $MSC_MINIZ_OPTIONS $j]\n" + +writeln -nonewline "SRC = " +set i 0 +foreach s [lsort $src] { + if {$i > 0} { + writeln " \\" + writeln -nonewline " " + } + writeln -nonewline "${s}_.c"; incr i +} +writeln "\n" +writeln -nonewline "EXTRA_FILES = " +set i 0 +foreach s [lsort $extra_files] { + if {$i > 0} { + writeln " \\" + writeln -nonewline " " + } + writeln -nonewline "\$(SRCDIR)\\${s}"; incr i +} +writeln "\n" +set AdditionalObj [list shell sqlite3 th th_lang th_tcl cson_amalgamation] +writeln -nonewline "OBJ = " +set i 0 +foreach s [lsort [concat $src $AdditionalObj]] { + if {$i > 0} { + writeln " \\" + writeln -nonewline " " + } + writeln -nonewline "\$(OX)\\$s\$O"; incr i +} +if {$i > 0} { + writeln " \\" +} +writeln "!if \$(FOSSIL_ENABLE_MINIZ)!=0" +writeln -nonewline " " +writeln "\$(OX)\\miniz\$O \\"; incr i +writeln "!endif" +writeln -nonewline " \$(OX)\\fossil.res\n\n" +writeln [string map [list <<>> \\] { +APPNAME = $(OX)\fossil$(E) +PDBNAME = $(OX)\fossil$(P) +APPTARGETS = + +all: $(OX) $(APPNAME) + +zlib: + @echo Building zlib from "$(ZLIBDIR)"... +!if $(FOSSIL_ENABLE_WINXP)!=0 + @pushd "$(ZLIBDIR)" && $(MAKE) /f win32\Makefile.msc $(ZLIB) "CC=cl $(XPCFLAGS)" "LD=link $(XPLDFLAGS)" && popd +!else + @pushd "$(ZLIBDIR)" && $(MAKE) /f win32\Makefile.msc $(ZLIB) && popd +!endif + +!if $(FOSSIL_ENABLE_SSL)!=0 +openssl: + @echo Building OpenSSL from "$(SSLDIR)"... 
+!if "$(PERLDIR)" != "" + @set PATH=$(PERLDIR);$(PATH) +!endif + @pushd "$(SSLDIR)" && $(PERL) Configure $(SSLCONFIG) && popd + @pushd "$(SSLDIR)" && call $(SSLSETUP) && popd +!if $(FOSSIL_ENABLE_WINXP)!=0 + @pushd "$(SSLDIR)" && $(MAKE) /f $(SSLNMAKE) "CC=cl $(SSLCFLAGS) $(XPCFLAGS)" "LFLAGS=$(SSLLFLAGS) $(XPLDFLAGS)" && popd +!else + @pushd "$(SSLDIR)" && $(MAKE) /f $(SSLNMAKE) "CC=cl $(SSLCFLAGS)" && popd +!endif +!endif + +!if $(FOSSIL_ENABLE_MINIZ)==0 +!if $(FOSSIL_BUILD_ZLIB)!=0 +APPTARGETS = $(APPTARGETS) zlib +!endif +!endif + +!if $(FOSSIL_ENABLE_SSL)!=0 +!if $(FOSSIL_BUILD_SSL)!=0 +APPTARGETS = $(APPTARGETS) openssl +!endif +!endif + +$(APPNAME) : $(APPTARGETS) translate$E mkindex$E codecheck1$E headers $(OBJ) $(OX)\linkopts + cd $(OX) + codecheck1$E $(SRC) + link $(LDFLAGS) /OUT:$@ $(LIBDIR) Wsetargv.obj fossil.res @linkopts + if exist $@.manifest <<>> + $(MTC) -nologo -manifest $@.manifest -outputresource:$@;1 + +$(OX)\linkopts: $B\win\Makefile.msc}] +set redir {>} +foreach s [lsort [concat $src $AdditionalObj]] { + writeln "\techo \$(OX)\\$s.obj $redir \$@" + set redir {>>} +} +set redir {>>} +writeln "!if \$(FOSSIL_ENABLE_MINIZ)!=0" +writeln "\techo \$(OX)\\miniz.obj $redir \$@" +writeln "!endif" +writeln "\techo \$(LIBS) $redir \$@" +writeln { +$(OX): + @-mkdir $@ + +translate$E: $(SRCDIR)\translate.c + $(BCC) $** + +makeheaders$E: $(SRCDIR)\makeheaders.c + $(BCC) $** + +mkindex$E: $(SRCDIR)\mkindex.c + $(BCC) $** + +mkbuiltin$E: $(SRCDIR)\mkbuiltin.c + $(BCC) $** + +mkversion$E: $(SRCDIR)\mkversion.c + $(BCC) $** + +codecheck1$E: $(SRCDIR)\codecheck1.c + $(BCC) $** + +$(OX)\shell$O : $(SRCDIR)\shell.c $B\win\Makefile.msc + $(TCC) /Fo$@ $(SHELL_OPTIONS) $(SQLITE_OPTIONS) $(SHELL_CFLAGS) -c $(SRCDIR)\shell.c + +$(OX)\sqlite3$O : $(SRCDIR)\sqlite3.c $B\win\Makefile.msc + $(TCC) /Fo$@ -c $(SQLITE_OPTIONS) $(SQLITE_CFLAGS) $(SRCDIR)\sqlite3.c + +$(OX)\th$O : $(SRCDIR)\th.c + $(TCC) /Fo$@ -c $** + +$(OX)\th_lang$O : $(SRCDIR)\th_lang.c + $(TCC) /Fo$@ -c $** + +$(OX)\th_tcl$O : $(SRCDIR)\th_tcl.c + $(TCC) /Fo$@ -c $** + +$(OX)\miniz$O : $(SRCDIR)\miniz.c + $(TCC) /Fo$@ -c $(MINIZ_OPTIONS) $(SRCDIR)\miniz.c + +VERSION.h : mkversion$E $B\manifest.uuid $B\manifest $B\VERSION + $** > $@ +$(OX)\cson_amalgamation$O : $(SRCDIR)\cson_amalgamation.c + $(TCC) /Fo$@ /c $** + +page_index.h: mkindex$E $(SRC) + $** > $@ + +builtin_data.h: mkbuiltin$E $(EXTRA_FILES) + mkbuiltin$E --prefix $(SRCDIR)/ $(EXTRA_FILES) > $@ + +clean: + del $(OX)\*.obj 2>NUL + del *.obj 2>NUL + del *_.c 2>NUL + del *.h 2>NUL + del *.ilk 2>NUL + del *.map 2>NUL + del *.res 2>NUL + del headers 2>NUL + del linkopts 2>NUL + del vc*.pdb 2>NUL + +realclean: clean + del $(APPNAME) 2>NUL + del $(PDBNAME) 2>NUL + del translate$E 2>NUL + del translate$P 2>NUL + del mkindex$E 2>NUL + del mkindex$P 2>NUL + del makeheaders$E 2>NUL + del makeheaders$P 2>NUL + del mkversion$E 2>NUL + del mkversion$P 2>NUL + del codecheck1$E 2>NUL + del codecheck1$P 2>NUL + del mkbuiltin$E 2>NUL + del mkbuiltin$P 2>NUL + +$(OBJDIR)\json$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_artifact$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_branch$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_config$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_diff$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_dir$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_finfo$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_login$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_query$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_report$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_status$O : 
$(SRCDIR)\json_detail.h +$(OBJDIR)\json_tag$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_timeline$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_user$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_wiki$O : $(SRCDIR)\json_detail.h +} +foreach s [lsort $src] { + writeln "\$(OX)\\$s\$O : ${s}_.c ${s}.h" + writeln "\t\$(TCC) /Fo\$@ -c ${s}_.c\n" + writeln "${s}_.c : \$(SRCDIR)\\$s.c" + writeln "\ttranslate\$E \$** > \$@\n" +} + +writeln "fossil.res : \$B\\win\\fossil.rc" +writeln "\t\$(RCC) /fo \$@ \$**\n" + +writeln "headers: makeheaders\$E page_index.h builtin_data.h VERSION.h" +writeln -nonewline "\tmakeheaders\$E " +set i 0 +foreach s [lsort $src] { + if {$i > 0} { + writeln " \\" + writeln -nonewline "\t\t\t" + } + writeln -nonewline "${s}_.c:$s.h"; incr i +} +writeln " \\\n\t\t\t\$(SRCDIR)\\sqlite3.h \\" +writeln "\t\t\t\$(SRCDIR)\\th.h \\" +writeln "\t\t\tVERSION.h \\" +writeln "\t\t\t\$(SRCDIR)\\cson_amalgamation.h" +writeln "\t@copy /Y nul: headers" + + +close $output_file +# +# End of the win/Makefile.msc output +############################################################################## +############################################################################## +############################################################################## +# Begin win/Makefile.PellesCGMake output +# +puts "building ../win/Makefile.PellesCGMake" +set output_file [open ../win/Makefile.PellesCGMake w] +fconfigure $output_file -translation binary + +writeln [string map [list \ + <<>> [join $SQLITE_WIN32_OPTIONS { }] \ + <<>> [join $SHELL_WIN32_OPTIONS { }]] {# +############################################################################## +# WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl") +############################################################################## +# +# This file is automatically generated. Instead of editing this +# file, edit "makemake.tcl" then run "tclsh makemake.tcl" +# to regenerate this file. +# +# HowTo +# ----- +# +# This is a Makefile to compile fossil with PellesC from +# http://www.smorgasbordet.com/pellesc/index.htm +# In addition to the Compiler envrionment, you need +# gmake from http://sourceforge.net/projects/unxutils/, Pelles make version +# couldn't handle the complex dependencies in this build +# zlib sources +# Then you do +# 1. create a directory PellesC in the project root directory +# 2. Change the variables PellesCDir/ZLIBSRCDIR to the path of your installation +# 3. open a dos prompt window and change working directory into PellesC (step 1) +# 4. run gmake -f ..\win\Makefile.PellesCGMake +# +# this file is tested with +# PellesC 5.00.13 +# gmake 3.80 +# zlib sources 1.2.5 +# Windows XP SP 2 +# and +# PellesC 6.00.4 +# gmake 3.80 +# zlib sources 1.2.5 +# Windows 7 Home Premium +# + +# +PellesCDir=c:\Programme\PellesC + +# Select between 32/64 bit code, default is 32 bit +#TARGETVERSION=64 + +ifeq ($(TARGETVERSION),64) +# 64 bit version +TARGETMACHINE_CC=amd64 +TARGETMACHINE_LN=amd64 +TARGETEXTEND=64 +else +# 32 bit version +TARGETMACHINE_CC=x86 +TARGETMACHINE_LN=ix86 +TARGETEXTEND= +endif + +# define the project directories +B=.. 
+SRCDIR=$(B)/src/ +WINDIR=$(B)/win/ +ZLIBSRCDIR=../../zlib/ + +# define linker command and options +LINK=$(PellesCDir)/bin/polink.exe +LINKFLAGS=-subsystem:console -machine:$(TARGETMACHINE_LN) /LIBPATH:$(PellesCDir)\lib\win$(TARGETEXTEND) /LIBPATH:$(PellesCDir)\lib kernel32.lib advapi32.lib delayimp$(TARGETEXTEND).lib Wsock32.lib Crtmt$(TARGETEXTEND).lib + +# define standard C-compiler and flags, used to compile +# the fossil binary. Some special definitions follow for +# special files follow +CC=$(PellesCDir)\bin\pocc.exe +DEFINES=-D_pgmptr=g.argv[0] +CCFLAGS=-T$(TARGETMACHINE_CC)-coff -Ot -W2 -Gd -Go -Ze -MT $(DEFINES) +INCLUDE=/I $(PellesCDir)\Include\Win /I $(PellesCDir)\Include /I $(ZLIBSRCDIR) /I $(SRCDIR) + +# define commands for building the windows resource files +RESOURCE=fossil.res +RC=$(PellesCDir)\bin\porc.exe +RCFLAGS=$(INCLUDE) -D__POCC__=1 -D_M_X$(TARGETVERSION) + +# define the special utilities files, needed to generate +# the automatically generated source files +UTILS=translate.exe mkindex.exe makeheaders.exe mkbuiltin.exe +UTILS_OBJ=$(UTILS:.exe=.obj) +UTILS_SRC=$(foreach uf,$(UTILS),$(SRCDIR)$(uf:.exe=.c)) + +# define the SQLite files, which need special flags on compile +SQLITESRC=sqlite3.c +ORIGSQLITESRC=$(foreach sf,$(SQLITESRC),$(SRCDIR)$(sf)) +SQLITEOBJ=$(foreach sf,$(SQLITESRC),$(sf:.c=.obj)) +SQLITEDEFINES=<<>> + +# define the SQLite shell files, which need special flags on compile +SQLITESHELLSRC=shell.c +ORIGSQLITESHELLSRC=$(foreach sf,$(SQLITESHELLSRC),$(SRCDIR)$(sf)) +SQLITESHELLOBJ=$(foreach sf,$(SQLITESHELLSRC),$(sf:.c=.obj)) +SQLITESHELLDEFINES=<<>> + +# define the th scripting files, which need special flags on compile +THSRC=th.c th_lang.c +ORIGTHSRC=$(foreach sf,$(THSRC),$(SRCDIR)$(sf)) +THOBJ=$(foreach sf,$(THSRC),$(sf:.c=.obj)) + +# define the zlib files, needed by this compile +ZLIBSRC=adler32.c compress.c crc32.c deflate.c gzclose.c gzlib.c gzread.c gzwrite.c infback.c inffast.c inflate.c inftrees.c trees.c uncompr.c zutil.c +ORIGZLIBSRC=$(foreach sf,$(ZLIBSRC),$(ZLIBSRCDIR)$(sf)) +ZLIBOBJ=$(foreach sf,$(ZLIBSRC),$(sf:.c=.obj)) + +# define all fossil sources, using the standard compile and +# source generation. These are all files in SRCDIR, which are not +# mentioned as special files above: +ORIGSRC=$(filter-out $(UTILS_SRC) $(ORIGTHSRC) $(ORIGSQLITESRC) $(ORIGSQLITESHELLSRC),$(wildcard $(SRCDIR)*.c)) +SRC=$(subst $(SRCDIR),,$(ORIGSRC)) +TRANSLATEDSRC=$(SRC:.c=_.c) +TRANSLATEDOBJ=$(TRANSLATEDSRC:.c=.obj) + +# main target file is the application +APPLICATION=fossil.exe + +# define the standard make target +.PHONY: default +default: page_index.h builtin_data.h headers $(APPLICATION) + +# symbolic target to generate the source generate utils +.PHONY: utils +utils: $(UTILS) + +# link utils +$(UTILS) version.exe: %.exe: %.obj + $(LINK) $(LINKFLAGS) -out:"$@" $< + +# compiling standard fossil utils +$(UTILS_OBJ): %.obj: $(SRCDIR)%.c + $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" + +# compile special windows utils +version.obj: $(SRCDIR)mkversion.c + $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" + +# generate the translated c-source files +$(TRANSLATEDSRC): %_.c: $(SRCDIR)%.c translate.exe + translate.exe $< >$@ + +# generate the index source, containing all web references,.. 
+page_index.h: $(TRANSLATEDSRC) mkindex.exe + mkindex.exe $(TRANSLATEDSRC) >$@ + +builtin_data.h: $(EXTRA_FILES) mkbuiltin.exe + mkbuiltin.exe --prefix $(SRCDIR)/ $(EXTRA_FILES) >$@ + +# extracting version info from manifest +VERSION.h: version.exe ..\manifest.uuid ..\manifest ..\VERSION + version.exe ..\manifest.uuid ..\manifest ..\VERSION >$@ + +# generate the simplified headers +headers: makeheaders.exe page_index.h builtin_data.h VERSION.h ../src/sqlite3.h ../src/th.h VERSION.h + makeheaders.exe $(foreach ts,$(TRANSLATEDSRC),$(ts):$(ts:_.c=.h)) ../src/sqlite3.h ../src/th.h VERSION.h + echo Done >$@ + +# compile C sources with relevant options + +$(TRANSLATEDOBJ): %_.obj: %_.c %.h + $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" + +$(SQLITEOBJ): %.obj: $(SRCDIR)%.c $(SRCDIR)%.h + $(CC) $(CCFLAGS) $(SQLITEDEFINES) $(INCLUDE) "$<" -Fo"$@" + +$(SQLITESHELLOBJ): %.obj: $(SRCDIR)%.c + $(CC) $(CCFLAGS) $(SQLITESHELLDEFINES) $(INCLUDE) "$<" -Fo"$@" + +$(THOBJ): %.obj: $(SRCDIR)%.c $(SRCDIR)th.h + $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" + +$(ZLIBOBJ): %.obj: $(ZLIBSRCDIR)%.c + $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" + +# create the windows resource with icon and version info +$(RESOURCE): %.res: ../win/%.rc ../win/*.ico + $(RC) $(RCFLAGS) $< -Fo"$@" + +# link the application +$(APPLICATION): $(TRANSLATEDOBJ) $(SQLITEOBJ) $(SQLITESHELLOBJ) $(THOBJ) $(ZLIBOBJ) headers $(RESOURCE) + $(LINK) $(LINKFLAGS) -out:"$@" $(TRANSLATEDOBJ) $(SQLITEOBJ) $(SQLITESHELLOBJ) $(THOBJ) $(ZLIBOBJ) $(RESOURCE) + +# cleanup + +.PHONY: clean +clean: + del /F $(TRANSLATEDOBJ) $(SQLITEOBJ) $(THOBJ) $(ZLIBOBJ) $(UTILS_OBJ) version.obj + del /F $(TRANSLATEDSRC) + del /F *.h headers + del /F $(RESOURCE) + +.PHONY: clobber +clobber: clean + del /F *.exe +}] Index: src/manifest.c ================================================================== --- src/manifest.c +++ src/manifest.c @@ -26,94 +26,322 @@ #if INTERFACE /* ** Types of control files */ +#define CFTYPE_ANY 0 #define CFTYPE_MANIFEST 1 #define CFTYPE_CLUSTER 2 #define CFTYPE_CONTROL 3 #define CFTYPE_WIKI 4 #define CFTYPE_TICKET 5 #define CFTYPE_ATTACHMENT 6 +#define CFTYPE_EVENT 7 + +/* +** File permissions used by Fossil internally. +*/ +#define PERM_REG 0 /* regular file */ +#define PERM_EXE 1 /* executable */ +#define PERM_LNK 2 /* symlink */ + +/* +** Flags for use with manifest_crosslink(). +*/ +#define MC_NONE 0 /* default handling */ +#define MC_PERMIT_HOOKS 1 /* permit hooks to execute */ +#define MC_NO_ERRORS 2 /* do not issue errors for a bad parse */ + +/* +** A single F-card within a manifest +*/ +struct ManifestFile { + char *zName; /* Name of a file */ + char *zUuid; /* UUID of the file */ + char *zPerm; /* File permissions */ + char *zPrior; /* Prior name if the name was changed */ +}; + /* ** A parsed manifest or cluster. */ struct Manifest { Blob content; /* The original content blob */ int type; /* Type of artifact. One of CFTYPE_xxxxx */ + int rid; /* The blob-id for this manifest */ + char *zBaseline; /* Baseline manifest. The B card. */ + Manifest *pBaseline; /* The actual baseline manifest */ char *zComment; /* Decoded comment. The C card. */ double rDate; /* Date and time from D card. 0.0 if no D card. */ char *zUser; /* Name of the user from the U card. */ char *zRepoCksum; /* MD5 checksum of the baseline content. R card. */ char *zWiki; /* Text of the wiki page. W card. */ char *zWikiTitle; /* Name of the wiki page. L card. */ + char *zMimetype; /* Mime type of wiki or comment text. N card. */ + double rEventDate; /* Date of an event. E card. 
*/ + char *zEventId; /* UUID for an event. E card. */ char *zTicketUuid; /* UUID for a ticket. K card. */ char *zAttachName; /* Filename of an attachment. A card. */ char *zAttachSrc; /* UUID of document being attached. A card. */ char *zAttachTarget; /* Ticket or wiki that attachment applies to. A card */ int nFile; /* Number of F cards */ int nFileAlloc; /* Slots allocated in aFile[] */ - struct { - char *zName; /* Name of a file */ - char *zUuid; /* UUID of the file */ - char *zPerm; /* File permissions */ - char *zPrior; /* Prior name if the name was changed */ - int iRename; /* index of renamed name in prior/next manifest */ - } *aFile; /* One entry for each F card */ + int iFile; /* Index of current file in iterator */ + ManifestFile *aFile; /* One entry for each F-card */ int nParent; /* Number of parents. */ int nParentAlloc; /* Slots allocated in azParent[] */ char **azParent; /* UUIDs of parents. One for each P card argument */ + int nCherrypick; /* Number of entries in aCherrypick[] */ + struct { + char *zCPTarget; /* UUID of cherry-picked version w/ +|- prefix */ + char *zCPBase; /* UUID of cherry-pick baseline. NULL for singletons */ + } *aCherrypick; int nCChild; /* Number of cluster children */ int nCChildAlloc; /* Number of closts allocated in azCChild[] */ char **azCChild; /* UUIDs of referenced objects in a cluster. M cards */ int nTag; /* Number of T Cards */ int nTagAlloc; /* Slots allocated in aTag[] */ - struct { + struct TagType { char *zName; /* Name of the tag */ char *zUuid; /* UUID that the tag is applied to */ char *zValue; /* Value if the tag is really a property */ } *aTag; /* One for each T card */ int nField; /* Number of J cards */ int nFieldAlloc; /* Slots allocated in aField[] */ - struct { + struct { char *zName; /* Key or field name */ char *zValue; /* Value of the field */ } *aField; /* One for each J card */ }; #endif +/* +** A cache of parsed manifests. This reduces the number of +** calls to manifest_parse() when doing a rebuild. +*/ +#define MX_MANIFEST_CACHE 6 +static struct { + int nxAge; + int aAge[MX_MANIFEST_CACHE]; + Manifest *apManifest[MX_MANIFEST_CACHE]; +} manifestCache; + +/* +** True if manifest_crosslink_begin() has been called but +** manifest_crosslink_end() is still pending. +*/ +static int manifest_crosslink_busy = 0; /* ** Clear the memory allocated in a manifest object */ -void manifest_clear(Manifest *p){ - blob_reset(&p->content); - free(p->aFile); - free(p->azParent); - free(p->azCChild); - free(p->aTag); - free(p->aField); - memset(p, 0, sizeof(*p)); -} +void manifest_destroy(Manifest *p){ + if( p ){ + blob_reset(&p->content); + fossil_free(p->aFile); + fossil_free(p->azParent); + fossil_free(p->azCChild); + fossil_free(p->aTag); + fossil_free(p->aField); + fossil_free(p->aCherrypick); + if( p->pBaseline ) manifest_destroy(p->pBaseline); + memset(p, 0, sizeof(*p)); + fossil_free(p); + } +} + +/* +** Add an element to the manifest cache using LRU replacement. +*/ +void manifest_cache_insert(Manifest *p){ + while( p ){ + int i; + Manifest *pBaseline = p->pBaseline; + p->pBaseline = 0; + for(i=0; i=MX_MANIFEST_CACHE ){ + int oldest = 0; + int oldestAge = manifestCache.aAge[0]; + for(i=1; irid==rid ){ + p = manifestCache.apManifest[i]; + manifestCache.apManifest[i] = 0; + return p; + } + } + return 0; +} + +/* +** Clear the manifest cache. 
+*/ +void manifest_cache_clear(void){ + int i; + for(i=0; i=n ) return; + z += i; + n -= i; + *pz = z; + for(i=n-1; i>=0; i--){ + if( z[i]=='\n' && strncmp(&z[i],"\n-----BEGIN PGP SIGNATURE-", 25)==0 ){ + n = i+1; + break; + } + } + *pn = n; + return; +} + +/* +** Verify the Z-card checksum on the artifact, if there is such a +** checksum. Return 0 if there is no Z-card. Return 1 if the Z-card +** exists and is correct. Return 2 if the Z-card exists and has the wrong +** value. +** +** 0123456789 123456789 123456789 123456789 +** Z aea84f4f863865a8d59d0384e4d2a41c +*/ +static int verify_z_card(const char *z, int n){ + if( n<35 ) return 0; + if( z[n-35]!='Z' || z[n-34]!=' ' ) return 0; + md5sum_init(); + md5sum_step_text(z, n-35); + if( memcmp(&z[n-33], md5sum_finish(0), 32)==0 ){ + return 1; + }else{ + return 2; + } +} + +/* +** A structure used for rapid parsing of the Manifest file +*/ +typedef struct ManifestText ManifestText; +struct ManifestText { + char *z; /* The first character of the next token */ + char *zEnd; /* One character beyond the end of the manifest */ + int atEol; /* True if z points to the start of a new line */ +}; + +/* +** Return a pointer to the next token. The token is zero-terminated. +** Return NULL if there are no more tokens on the current line. +*/ +static char *next_token(ManifestText *p, int *pLen){ + char *z; + char *zStart; + int c; + if( p->atEol ) return 0; + zStart = z = p->z; + while( (c=(*z))!=' ' && c!='\n' ){ z++; } + *z = 0; + p->z = &z[1]; + p->atEol = c=='\n'; + if( pLen ) *pLen = z - zStart; + return zStart; +} + +/* +** Return the card-type for the next card. Or, return 0 if there are no +** more cards or if we are not at the end of the current card. +*/ +static char next_card(ManifestText *p){ + char c; + if( !p->atEol || p->z>=p->zEnd ) return 0; + c = p->z[0]; + if( p->z[1]==' ' ){ + p->z += 2; + p->atEol = 0; + }else if( p->z[1]=='\n' ){ + p->z += 2; + p->atEol = 1; + }else{ + c = 0; + } + return c; +} + +/* +** Shorthand for a control-artifact parsing error +*/ +#define SYNTAX(T) {zErr=(T); goto manifest_syntax_error;} /* ** Parse a blob into a Manifest object. The Manifest object ** takes over the input blob and will free it when the ** Manifest object is freed. Zeros are inserted into the blob ** as string terminators so that blob should not be used again. ** -** Return TRUE if the content really is a control file of some -** kind. Return FALSE if there are syntax errors. +** Return a pointer to an allocated Manifest object if the content +** really is a control file of some kind. This object needs to be +** freed by a subsequent call to manifest_destroy(). Return NULL +** if there are syntax errors. ** ** This routine is strict about the format of a control file. ** The format must match exactly or else it is rejected. This ** rule minimizes the risk that a content file will be mistaken ** for a control file simply because they look the same. ** -** The pContent is reset. If TRUE is returned, then pContent will -** be reset when the Manifest object is cleared. If FALSE is +** The pContent is reset. If a pointer is returned, then pContent will +** be reset when the Manifest object is cleared. If NULL is ** returned then the Manifest object is cleared automatically ** and pContent is reset before the return. ** ** The entire file can be PGP clear-signed. The signature is ignored. ** The file consists of zero or more cards, one card per line. @@ -121,48 +349,89 @@ ** Each card is divided into tokens by a single space character. 
** The first token is a single upper-case letter which is the card type. ** The card type determines the other parameters to the card. ** Cards must occur in lexicographical order. */ -int manifest_parse(Manifest *p, Blob *pContent){ - int seenHeader = 0; +Manifest *manifest_parse(Blob *pContent, int rid, Blob *pErr){ + Manifest *p; int seenZ = 0; int i, lineNo=0; - Blob line, token, a1, a2, a3, a4; + ManifestText x; char cPrevType = 0; + char cType; + char *z; + int n; + char *zUuid; + int sz = 0; + int isRepeat, hasSelfRefTag = 0; + Blob bUuid = BLOB_INITIALIZER; + static Bag seen; + const char *zErr = 0; + + if( rid==0 ){ + isRepeat = 1; + }else if( bag_find(&seen, rid) ){ + isRepeat = 1; + }else{ + isRepeat = 0; + bag_insert(&seen, rid); + } + + /* Every control artifact ends with a '\n' character. Exit early + ** if that is not the case for this artifact. + */ + if( !isRepeat ) g.parseCnt[0]++; + z = blob_materialize(pContent); + n = blob_size(pContent); + if( n<=0 || z[n-1]!='\n' ){ + blob_reset(pContent); + blob_appendf(pErr, "%s", n ? "not terminated with \\n" : "zero-length"); + return 0; + } + + /* Strip off the PGP signature if there is one. + */ + remove_pgp_signature(&z, &n); + + /* Verify that the first few characters of the artifact look like + ** a control artifact. + */ + if( n<10 || z[0]<'A' || z[0]>'Z' || z[1]!=' ' ){ + blob_reset(pContent); + blob_appendf(pErr, "line 1 not recognized"); + return 0; + } + /* Then verify the Z-card. + */ + if( verify_z_card(z, n)==2 ){ + blob_reset(pContent); + blob_appendf(pErr, "incorrect Z-card cksum"); + return 0; + } + + /* Store the UUID (before modifying the blob) only for error + ** reporting purposes. + */ + sha1sum_blob(pContent, &bUuid); + /* Allocate a Manifest object to hold the parsed control artifact. + */ + p = fossil_malloc( sizeof(*p) ); memset(p, 0, sizeof(*p)); memcpy(&p->content, pContent, sizeof(p->content)); + p->rid = rid; blob_zero(pContent); pContent = &p->content; - blob_zero(&a1); - blob_zero(&a2); - blob_zero(&a3); - md5sum_init(); - while( blob_line(pContent, &line) ){ - char *z = blob_buffer(&line); + /* Begin parsing, card by card. + */ + x.z = z; + x.zEnd = &z[n]; + x.atEol = 1; + while( (cType = next_card(&x))!=0 && cType>=cPrevType ){ lineNo++; - if( z[0]=='-' ){ - if( strncmp(z, "-----BEGIN PGP ", 15)!=0 ){ - goto manifest_syntax_error; - } - if( seenHeader ){ - break; - } - while( blob_line(pContent, &line)>2 ){} - if( blob_line(pContent, &line)==0 ) break; - z = blob_buffer(&line); - } - if( z[0] ?? ** ** Identifies an attachment to either a wiki page or a ticket. ** is the artifact that is the attachment. @@ -169,50 +438,62 @@ ** is omitted to delete an attachment. is the name of ** a wiki page or ticket to which that attachment is connected. 
*/ case 'A': { char *zName, *zTarget, *zSrc; - md5sum_step_text(blob_buffer(&line), blob_size(&line)); - if( blob_token(&line, &a1)==0 ) goto manifest_syntax_error; - if( blob_token(&line, &a2)==0 ) goto manifest_syntax_error; + int nTarget = 0, nSrc = 0; + zName = next_token(&x, 0); + zTarget = next_token(&x, &nTarget); + zSrc = next_token(&x, &nSrc); + if( zName==0 || zTarget==0 ) goto manifest_syntax_error; if( p->zAttachName!=0 ) goto manifest_syntax_error; - zName = blob_terminate(&a1); - zTarget = blob_terminate(&a2); - blob_token(&line, &a3); - zSrc = blob_terminate(&a3); defossilize(zName); - if( !file_is_simple_pathname(zName) ){ - goto manifest_syntax_error; + if( !file_is_simple_pathname(zName, 0) ){ + SYNTAX("invalid filename on A-card"); } defossilize(zTarget); - if( (blob_size(&a2)!=UUID_SIZE || !validate16(zTarget, UUID_SIZE)) + if( (nTarget!=UUID_SIZE || !validate16(zTarget, UUID_SIZE)) && !wiki_name_is_wellformed((const unsigned char *)zTarget) ){ - goto manifest_syntax_error; + SYNTAX("invalid target on A-card"); } - if( blob_size(&a3)>0 - && (blob_size(&a3)!=UUID_SIZE || !validate16(zSrc, UUID_SIZE)) ){ - goto manifest_syntax_error; + if( zSrc && (nSrc!=UUID_SIZE || !validate16(zSrc, UUID_SIZE)) ){ + SYNTAX("invalid source on A-card"); } p->zAttachName = (char*)file_tail(zName); p->zAttachSrc = zSrc; p->zAttachTarget = zTarget; break; } + + /* + ** B + ** + ** A B-line gives the UUID for the baseline of a delta-manifest. + */ + case 'B': { + if( p->zBaseline ) SYNTAX("more than one B-card"); + p->zBaseline = next_token(&x, &sz); + if( p->zBaseline==0 ) SYNTAX("missing UUID on B-card"); + if( sz!=UUID_SIZE || !validate16(p->zBaseline, UUID_SIZE) ){ + SYNTAX("invalid UUID on B-card"); + } + break; + } + /* ** C ** ** Comment text is fossil-encoded. There may be no more than - ** one C line. C lines are required for manifests and are - ** disallowed on all other control files. + ** one C line. C lines are required for manifests, are optional + ** for Events and Attachments, and are disallowed on all other + ** control files. */ case 'C': { - md5sum_step_text(blob_buffer(&line), blob_size(&line)); - if( p->zComment!=0 ) goto manifest_syntax_error; - if( blob_token(&line, &a1)==0 ) goto manifest_syntax_error; - if( blob_token(&line, &a2)!=0 ) goto manifest_syntax_error; - p->zComment = blob_terminate(&a1); + if( p->zComment!=0 ) SYNTAX("more than one C-card"); + p->zComment = next_token(&x, 0); + if( p->zComment==0 ) SYNTAX("missing comment text on C-card"); defossilize(p->zComment); break; } /* @@ -221,65 +502,76 @@ ** The timestamp should be ISO 8601. YYYY-MM-DDtHH:MM:SS ** There can be no more than 1 D line. D lines are required ** for all control files except for clusters. */ case 'D': { - char *zDate; - md5sum_step_text(blob_buffer(&line), blob_size(&line)); - if( p->rDate!=0.0 ) goto manifest_syntax_error; - if( blob_token(&line, &a1)==0 ) goto manifest_syntax_error; - if( blob_token(&line, &a2)!=0 ) goto manifest_syntax_error; - zDate = blob_terminate(&a1); - p->rDate = db_double(0.0, "SELECT julianday(%Q)", zDate); + if( p->rDate>0.0 ) SYNTAX("more than one D-card"); + p->rDate = db_double(0.0, "SELECT julianday(%Q)", next_token(&x,0)); + if( p->rDate<=0.0 ) SYNTAX("cannot parse date on D-card"); + break; + } + + /* + ** E + ** + ** An "event" card that contains the timestamp of the event in the + ** format YYYY-MM-DDtHH:MM:SS and a unique identifier for the event. + ** The event timestamp is distinct from the D timestamp. 
The D + ** timestamp is when the artifact was created whereas the E timestamp + ** is when the specific event is said to occur. + */ + case 'E': { + if( p->rEventDate>0.0 ) SYNTAX("more than one E-card"); + p->rEventDate = db_double(0.0,"SELECT julianday(%Q)", next_token(&x,0)); + if( p->rEventDate<=0.0 ) SYNTAX("malformed date on E-card"); + p->zEventId = next_token(&x, &sz); + if( sz!=UUID_SIZE || !validate16(p->zEventId, UUID_SIZE) ){ + SYNTAX("malformed UUID on E-card"); + } break; } /* - ** F ?? ?? + ** F ?? ?? ?? ** ** Identifies a file in a manifest. Multiple F lines are ** allowed in a manifest. F lines are not allowed in any ** other control file. The filename and old-name are fossil-encoded. */ case 'F': { - char *zName, *zUuid, *zPerm, *zPriorName; - md5sum_step_text(blob_buffer(&line), blob_size(&line)); - if( blob_token(&line, &a1)==0 ) goto manifest_syntax_error; - if( blob_token(&line, &a2)==0 ) goto manifest_syntax_error; - zName = blob_terminate(&a1); - zUuid = blob_terminate(&a2); - blob_token(&line, &a3); - zPerm = blob_terminate(&a3); - if( blob_size(&a2)!=UUID_SIZE ) goto manifest_syntax_error; - if( !validate16(zUuid, UUID_SIZE) ) goto manifest_syntax_error; + char *zName, *zPerm, *zPriorName; + zName = next_token(&x,0); + if( zName==0 ) SYNTAX("missing filename on F-card"); defossilize(zName); - if( !file_is_simple_pathname(zName) ){ - goto manifest_syntax_error; + if( !file_is_simple_pathname(zName, 0) ){ + SYNTAX("F-card filename is not a simple path"); + } + zUuid = next_token(&x, &sz); + if( p->zBaseline==0 || zUuid!=0 ){ + if( sz!=UUID_SIZE ) SYNTAX("F-card UUID is the wrong size"); + if( !validate16(zUuid, UUID_SIZE) ) SYNTAX("F-card UUID invalid"); } - blob_token(&line, &a4); - zPriorName = blob_terminate(&a4); - if( zPriorName[0] ){ + zPerm = next_token(&x,0); + zPriorName = next_token(&x,0); + if( zPriorName ){ defossilize(zPriorName); - if( !file_is_simple_pathname(zPriorName) ){ - goto manifest_syntax_error; + if( !file_is_simple_pathname(zPriorName, 0) ){ + SYNTAX("F-card old filename is not a simple path"); } - }else{ - zPriorName = 0; } if( p->nFile>=p->nFileAlloc ){ p->nFileAlloc = p->nFileAlloc*2 + 10; - p->aFile = realloc(p->aFile, p->nFileAlloc*sizeof(p->aFile[0]) ); - if( p->aFile==0 ) fossil_panic("out of memory"); + p->aFile = fossil_realloc(p->aFile, + p->nFileAlloc*sizeof(p->aFile[0]) ); } i = p->nFile++; p->aFile[i].zName = zName; p->aFile[i].zUuid = zUuid; p->aFile[i].zPerm = zPerm; p->aFile[i].zPrior = zPriorName; - p->aFile[i].iRename = -1; - if( i>0 && strcmp(p->aFile[i-1].zName, zName)>=0 ){ - goto manifest_syntax_error; + if( i>0 && fossil_strcmp(p->aFile[i-1].zName, zName)>=0 ){ + SYNTAX("incorrect F-card sort order"); } break; } /* @@ -290,28 +582,25 @@ ** value. If is omitted then it is understood to be an ** empty string. 
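**
** As a hedged illustration (the field names below are arbitrary
** examples, not part of this change), a ticket-change artifact
** typically carries a run of J-cards such as
**
**     J severity Critical
**     J status Open
**
** one card per ticket field.  Successive J-cards must be sorted by
** field name, which is what the ordering check in the code below
** enforces.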
*/ case 'J': { char *zName, *zValue; - md5sum_step_text(blob_buffer(&line), blob_size(&line)); - if( blob_token(&line, &a1)==0 ) goto manifest_syntax_error; - blob_token(&line, &a2); - if( blob_token(&line, &a3)!=0 ) goto manifest_syntax_error; - zName = blob_terminate(&a1); - zValue = blob_terminate(&a2); + zName = next_token(&x,0); + zValue = next_token(&x,0); + if( zName==0 ) SYNTAX("name missing from J-card"); + if( zValue==0 ) zValue = ""; defossilize(zValue); if( p->nField>=p->nFieldAlloc ){ p->nFieldAlloc = p->nFieldAlloc*2 + 10; - p->aField = realloc(p->aField, + p->aField = fossil_realloc(p->aField, p->nFieldAlloc*sizeof(p->aField[0]) ); - if( p->aField==0 ) fossil_panic("out of memory"); } i = p->nField++; p->aField[i].zName = zName; p->aField[i].zValue = zValue; - if( i>0 && strcmp(p->aField[i-1].zName, zName)>=0 ){ - goto manifest_syntax_error; + if( i>0 && fossil_strcmp(p->aField[i-1].zName, zName)>=0 ){ + SYNTAX("incorrect J-card sort order"); } break; } @@ -320,18 +609,16 @@ ** ** A K-line gives the UUID for the ticket which this control file ** is amending. */ case 'K': { - char *zUuid; - md5sum_step_text(blob_buffer(&line), blob_size(&line)); - if( blob_token(&line, &a1)==0 ) goto manifest_syntax_error; - zUuid = blob_terminate(&a1); - if( blob_size(&a1)!=UUID_SIZE ) goto manifest_syntax_error; - if( !validate16(zUuid, UUID_SIZE) ) goto manifest_syntax_error; - if( p->zTicketUuid!=0 ) goto manifest_syntax_error; - p->zTicketUuid = zUuid; + if( p->zTicketUuid!=0 ) SYNTAX("more than one K-card"); + p->zTicketUuid = next_token(&x, &sz); + if( sz!=UUID_SIZE ) SYNTAX("K-card UUID is the wrong size"); + if( !validate16(p->zTicketUuid, UUID_SIZE) ){ + SYNTAX("invalid K-card UUID"); + } break; } /* ** L @@ -338,18 +625,16 @@ ** ** The wiki page title is fossil-encoded. There may be no more than ** one L line. */ case 'L': { - md5sum_step_text(blob_buffer(&line), blob_size(&line)); - if( p->zWikiTitle!=0 ) goto manifest_syntax_error; - if( blob_token(&line, &a1)==0 ) goto manifest_syntax_error; - if( blob_token(&line, &a2)!=0 ) goto manifest_syntax_error; - p->zWikiTitle = blob_terminate(&a1); + if( p->zWikiTitle!=0 ) SYNTAX("more than one L-card"); + p->zWikiTitle = next_token(&x,0); + if( p->zWikiTitle==0 ) SYNTAX("missing title on L-card"); defossilize(p->zWikiTitle); if( !wiki_name_is_wellformed((const unsigned char *)p->zWikiTitle) ){ - goto manifest_syntax_error; + SYNTAX("L-card has malformed wiki name"); } break; } /* @@ -357,69 +642,105 @@ ** ** An M-line identifies another artifact by its UUID. M-lines ** occur in clusters only. 
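**
** Sketching the whole-artifact picture (placeholder hashes, for
** illustration only): a cluster is essentially just a sorted run of
** M-cards followed by a Z-card,
**
**     M <40-hex-digit UUID>
**     M <40-hex-digit UUID>
**     Z <32-hex-digit MD5 checksum>
**
** and the sort-order check below rejects any M-card that does not
** collate strictly after its predecessor.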
*/ case 'M': { - char *zUuid; - md5sum_step_text(blob_buffer(&line), blob_size(&line)); - if( blob_token(&line, &a1)==0 ) goto manifest_syntax_error; - zUuid = blob_terminate(&a1); - if( blob_size(&a1)!=UUID_SIZE ) goto manifest_syntax_error; - if( !validate16(zUuid, UUID_SIZE) ) goto manifest_syntax_error; + zUuid = next_token(&x, &sz); + if( zUuid==0 ) SYNTAX("missing UUID on M-card"); + if( sz!=UUID_SIZE ) SYNTAX("wrong size for UUID on M-card"); + if( !validate16(zUuid, UUID_SIZE) ) SYNTAX("UUID invalid on M-card"); if( p->nCChild>=p->nCChildAlloc ){ p->nCChildAlloc = p->nCChildAlloc*2 + 10; - p->azCChild = - realloc(p->azCChild, p->nCChildAlloc*sizeof(p->azCChild[0]) ); - if( p->azCChild==0 ) fossil_panic("out of memory"); + p->azCChild = fossil_realloc(p->azCChild + , p->nCChildAlloc*sizeof(p->azCChild[0]) ); } i = p->nCChild++; p->azCChild[i] = zUuid; - if( i>0 && strcmp(p->azCChild[i-1], zUuid)>=0 ){ - goto manifest_syntax_error; + if( i>0 && fossil_strcmp(p->azCChild[i-1], zUuid)>=0 ){ + SYNTAX("M-card in the wrong order"); } break; } + + /* + ** N + ** + ** An N-line identifies the mimetype of wiki or comment text. + */ + case 'N': { + if( p->zMimetype!=0 ) SYNTAX("more than one N-card"); + p->zMimetype = next_token(&x,0); + if( p->zMimetype==0 ) SYNTAX("missing mimetype on N-card"); + defossilize(p->zMimetype); + break; + } /* ** P ... ** - ** Specify one or more other artifacts where are the parents of + ** Specify one or more other artifacts which are the parents of ** this artifact. The first parent is the primary parent. All - ** others are parents by merge. + ** others are parents by merge. Note that the initial empty + ** check-in historically has an empty P-card, so empty P-cards + ** must be accepted. */ case 'P': { - md5sum_step_text(blob_buffer(&line), blob_size(&line)); - while( blob_token(&line, &a1) ){ - char *zUuid; - if( blob_size(&a1)!=UUID_SIZE ) goto manifest_syntax_error; - zUuid = blob_terminate(&a1); - if( !validate16(zUuid, UUID_SIZE) ) goto manifest_syntax_error; + while( (zUuid = next_token(&x, &sz))!=0 ){ + if( sz!=UUID_SIZE ) SYNTAX("wrong size UUID on P-card"); + if( !validate16(zUuid, UUID_SIZE) )SYNTAX("invalid UUID on P-card"); if( p->nParent>=p->nParentAlloc ){ p->nParentAlloc = p->nParentAlloc*2 + 5; - p->azParent = realloc(p->azParent, p->nParentAlloc*sizeof(char*)); - if( p->azParent==0 ) fossil_panic("out of memory"); + p->azParent = fossil_realloc(p->azParent, + p->nParentAlloc*sizeof(char*)); } i = p->nParent++; p->azParent[i] = zUuid; } break; } + + /* + ** Q (+|-) ?? + ** + ** Specify one or a range of check-ins that are cherrypicked into + ** this check-in ("+") or backed out of this check-in ("-"). + */ + case 'Q': { + if( (zUuid=next_token(&x, &sz))==0 ) SYNTAX("missing UUID on Q-card"); + if( sz!=UUID_SIZE+1 ) SYNTAX("wrong size UUID on Q-card"); + if( zUuid[0]!='+' && zUuid[0]!='-' ){ + SYNTAX("Q-card does not begin with '+' or '-'"); + } + if( !validate16(&zUuid[1], UUID_SIZE) ){ + SYNTAX("invalid UUID on Q-card"); + } + n = p->nCherrypick; + p->nCherrypick++; + p->aCherrypick = fossil_realloc(p->aCherrypick, + p->nCherrypick*sizeof(p->aCherrypick[0])); + p->aCherrypick[n].zCPTarget = zUuid; + p->aCherrypick[n].zCPBase = zUuid = next_token(&x, &sz); + if( zUuid ){ + if( sz!=UUID_SIZE ) SYNTAX("wrong size second UUID in Q-card"); + if( !validate16(zUuid, UUID_SIZE) ){ + SYNTAX("invalid second UUID on Q-card"); + } + } + break; + } /* ** R ** - ** Specify the MD5 checksum of the entire baseline in a - ** manifest. 
+ ** Specify the MD5 checksum over the name and content of all files + ** in the manifest. */ case 'R': { - md5sum_step_text(blob_buffer(&line), blob_size(&line)); - if( p->zRepoCksum!=0 ) goto manifest_syntax_error; - if( blob_token(&line, &a1)==0 ) goto manifest_syntax_error; - if( blob_token(&line, &a2)!=0 ) goto manifest_syntax_error; - if( blob_size(&a1)!=32 ) goto manifest_syntax_error; - p->zRepoCksum = blob_terminate(&a1); - if( !validate16(p->zRepoCksum, 32) ) goto manifest_syntax_error; + if( p->zRepoCksum!=0 ) SYNTAX("more than one R-card"); + p->zRepoCksum = next_token(&x, &sz); + if( sz!=32 ) SYNTAX("wrong size cksum on R-card"); + if( !validate16(p->zRepoCksum, 32) ) SYNTAX("malformed R-card cksum"); break; } /* ** T (+|*|-) ?? @@ -429,58 +750,56 @@ ** singleton tag, "*" to create a propagating tag, or "-" to create ** anti-tag that undoes a prior "+" or blocks propagation of of ** a "*". ** ** The tag is applied to . If is "*" then the tag is - ** applied to the current manifest. If is provided then + ** applied to the current manifest. If is provided then ** the tag is really a property with the given value. ** ** Tags are not allowed in clusters. Multiple T lines are allowed. */ case 'T': { - char *zName, *zUuid, *zValue; - md5sum_step_text(blob_buffer(&line), blob_size(&line)); - if( blob_token(&line, &a1)==0 ){ - goto manifest_syntax_error; - } - if( blob_token(&line, &a2)==0 ){ - goto manifest_syntax_error; - } - zName = blob_terminate(&a1); - zUuid = blob_terminate(&a2); - if( blob_token(&line, &a3)==0 ){ - zValue = 0; - }else{ - zValue = blob_terminate(&a3); - defossilize(zValue); - } - if( blob_size(&a2)==UUID_SIZE && validate16(zUuid, UUID_SIZE) ){ - /* A valid uuid */ - }else if( blob_size(&a2)==1 && zUuid[0]=='*' ){ - zUuid = 0; - }else{ - goto manifest_syntax_error; + char *zName, *zValue; + zName = next_token(&x, 0); + if( zName==0 ) SYNTAX("missing name on T-card"); + zUuid = next_token(&x, &sz); + if( zUuid==0 ) SYNTAX("missing UUID on T-card"); + zValue = next_token(&x, 0); + if( zValue ) defossilize(zValue); + if( sz==UUID_SIZE && validate16(zUuid, UUID_SIZE) ){ + /* A valid uuid */ + if( p->zEventId ) SYNTAX("non-self-referential T-card in event"); + }else if( sz==1 && zUuid[0]=='*' ){ + zUuid = 0; + hasSelfRefTag = 1; + if( p->zEventId && zName[0]!='+' ){ + SYNTAX("propagating T-card in event"); + } + }else{ + SYNTAX("malformed UUID on T-card"); } defossilize(zName); if( zName[0]!='-' && zName[0]!='+' && zName[0]!='*' ){ - goto manifest_syntax_error; + SYNTAX("T-card name does not begin with '-', '+', or '*'"); } if( validate16(&zName[1], strlen(&zName[1])) ){ /* Do not allow tags whose names look like UUIDs */ - goto manifest_syntax_error; + SYNTAX("T-card name looks like a UUID"); } if( p->nTag>=p->nTagAlloc ){ p->nTagAlloc = p->nTagAlloc*2 + 10; - p->aTag = realloc(p->aTag, p->nTagAlloc*sizeof(p->aTag[0]) ); - if( p->aTag==0 ) fossil_panic("out of memory"); + p->aTag = fossil_realloc(p->aTag, p->nTagAlloc*sizeof(p->aTag[0]) ); } i = p->nTag++; p->aTag[i].zName = zName; p->aTag[i].zUuid = zUuid; p->aTag[i].zValue = zValue; - if( i>0 && strcmp(p->aTag[i-1].zName, zName)>=0 ){ - goto manifest_syntax_error; + if( i>0 ){ + int c = fossil_strcmp(p->aTag[i-1].zName, zName); + if( c>0 || (c==0 && fossil_strcmp(p->aTag[i-1].zUuid, zUuid)>=0) ){ + SYNTAX("T-card in the wrong order"); + } } break; } /* @@ -489,19 +808,17 @@ ** Identify the user who created this control file by their ** login. Only one U line is allowed. Prohibited in clusters. 
** If the user name is omitted, take that to be "anonymous". */ case 'U': { - md5sum_step_text(blob_buffer(&line), blob_size(&line)); - if( p->zUser!=0 ) goto manifest_syntax_error; - if( blob_token(&line, &a1)==0 ){ + if( p->zUser!=0 ) SYNTAX("more than one U-card"); + p->zUser = next_token(&x, 0); + if( p->zUser==0 ){ p->zUser = "anonymous"; }else{ - p->zUser = blob_terminate(&a1); defossilize(p->zUser); } - if( blob_token(&line, &a2)!=0 ) goto manifest_syntax_error; break; } /* ** W @@ -509,26 +826,29 @@ ** The next bytes of the file contain the text of the wiki ** page. There is always an extra \n before the start of the next ** record. */ case 'W': { - int size; + char *zSize; + unsigned size, oldsize, c; Blob wiki; - md5sum_step_text(blob_buffer(&line), blob_size(&line)); - if( blob_token(&line, &a1)==0 ) goto manifest_syntax_error; - if( blob_token(&line, &a2)!=0 ) goto manifest_syntax_error; - if( !blob_is_int(&a1, &size) ) goto manifest_syntax_error; - if( size<0 ) goto manifest_syntax_error; - if( p->zWiki!=0 ) goto manifest_syntax_error; + zSize = next_token(&x, 0); + if( zSize==0 ) SYNTAX("missing size on W-card"); + if( x.atEol==0 ) SYNTAX("no content after W-card"); + for(oldsize=size=0; (c = zSize[0])>='0' && c<='9'; zSize++){ + size = oldsize*10 + c - '0'; + if( sizezWiki!=0 ) SYNTAX("more than one W-card"); blob_zero(&wiki); - if( blob_extract(pContent, size+1, &wiki)!=size+1 ){ - goto manifest_syntax_error; - } - p->zWiki = blob_buffer(&wiki); - md5sum_step_text(p->zWiki, size+1); - if( p->zWiki[size]!='\n' ) goto manifest_syntax_error; - p->zWiki[size] = 0; + if( (&x.z[size+1])>=x.zEnd )SYNTAX("not enough content after W-card"); + p->zWiki = x.z; + x.z += size; + if( x.z[0]!='\n' ) SYNTAX("W-card content no \\n terminated"); + x.z[0] = 0; + x.z++; break; } /* @@ -541,127 +861,290 @@ ** This card is required for all control file types except for ** Manifest. It is not required for manifest only for historical ** compatibility reasons. 
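**
** To make the Z-card, and the required card ordering, more concrete,
** a minimal check-in manifest might read something like the following
** sketch (all hashes and names are placeholders):
**
**     C Initial\scommit
**     D 2011-01-01T12:00:00
**     F README <40-hex-digit UUID of the file content>
**     P <40-hex-digit UUID of the parent check-in>
**     R <32-hex-digit MD5 over the names and content of all files>
**     U alice
**     Z <32-hex-digit MD5 of all of the text above>
**
** The card types run in non-decreasing order (C, D, F, P, R, U, Z),
** which is what the cType>=cPrevType test in the parsing loop checks.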
*/ case 'Z': { - int rc; - Blob hash; - if( blob_token(&line, &a1)==0 ) goto manifest_syntax_error; - if( blob_token(&line, &a2)!=0 ) goto manifest_syntax_error; - if( blob_size(&a1)!=32 ) goto manifest_syntax_error; - if( !validate16(blob_buffer(&a1), 32) ) goto manifest_syntax_error; - md5sum_finish(&hash); - rc = blob_compare(&hash, &a1); - blob_reset(&hash); - if( rc!=0 ) goto manifest_syntax_error; + zUuid = next_token(&x, &sz); + if( sz!=32 ) SYNTAX("wrong size for Z-card cksum"); + if( !validate16(zUuid, 32) ) SYNTAX("malformed Z-card cksum"); seenZ = 1; break; } default: { - goto manifest_syntax_error; - } - } - } - if( !seenHeader ) goto manifest_syntax_error; - - if( p->nFile>0 || p->zRepoCksum!=0 ){ - if( p->nCChild>0 ) goto manifest_syntax_error; - if( p->rDate==0.0 ) goto manifest_syntax_error; - if( p->nField>0 ) goto manifest_syntax_error; - if( p->zTicketUuid ) goto manifest_syntax_error; - if( p->zWiki ) goto manifest_syntax_error; - if( p->zWikiTitle ) goto manifest_syntax_error; - if( p->zTicketUuid ) goto manifest_syntax_error; - if( p->zAttachName ) goto manifest_syntax_error; - p->type = CFTYPE_MANIFEST; - }else if( p->nCChild>0 ){ - if( p->rDate>0.0 ) goto manifest_syntax_error; - if( p->zComment!=0 ) goto manifest_syntax_error; - if( p->zUser!=0 ) goto manifest_syntax_error; - if( p->nTag>0 ) goto manifest_syntax_error; - if( p->nParent>0 ) goto manifest_syntax_error; - if( p->nField>0 ) goto manifest_syntax_error; - if( p->zTicketUuid ) goto manifest_syntax_error; - if( p->zWiki ) goto manifest_syntax_error; - if( p->zWikiTitle ) goto manifest_syntax_error; - if( p->zAttachName ) goto manifest_syntax_error; - if( !seenZ ) goto manifest_syntax_error; + SYNTAX("unrecognized card"); + } + } + } + if( x.znCChild>0 ){ + if( p->zAttachName + || p->zBaseline + || p->zComment + || p->rDate>0.0 + || p->zEventId + || p->nFile>0 + || p->nField>0 + || p->zTicketUuid + || p->zWikiTitle + || p->zMimetype + || p->nParent>0 + || p->nCherrypick>0 + || p->zRepoCksum + || p->nTag>0 + || p->zUser + || p->zWiki + ){ + SYNTAX("cluster contains a card other than M- or Z-"); + } + if( !seenZ ) SYNTAX("missing Z-card on cluster"); p->type = CFTYPE_CLUSTER; - }else if( p->nField>0 ){ - if( p->rDate==0.0 ) goto manifest_syntax_error; - if( p->zWiki ) goto manifest_syntax_error; - if( p->zWikiTitle ) goto manifest_syntax_error; - if( p->nCChild>0 ) goto manifest_syntax_error; - if( p->nTag>0 ) goto manifest_syntax_error; - if( p->zTicketUuid==0 ) goto manifest_syntax_error; - if( p->zUser==0 ) goto manifest_syntax_error; - if( p->zAttachName ) goto manifest_syntax_error; - if( !seenZ ) goto manifest_syntax_error; - p->type = CFTYPE_TICKET; - }else if( p->zWiki!=0 ){ - if( p->rDate==0.0 ) goto manifest_syntax_error; - if( p->nCChild>0 ) goto manifest_syntax_error; - if( p->nTag>0 ) goto manifest_syntax_error; - if( p->zTicketUuid!=0 ) goto manifest_syntax_error; - if( p->zWikiTitle==0 ) goto manifest_syntax_error; - if( p->zAttachName ) goto manifest_syntax_error; - if( !seenZ ) goto manifest_syntax_error; + }else if( p->zEventId ){ + if( p->zAttachName ) SYNTAX("A-card in event"); + if( p->zBaseline ) SYNTAX("B-card in event"); + if( p->rDate<=0.0 ) SYNTAX("missing date on event"); + if( p->nFile>0 ) SYNTAX("F-card in event"); + if( p->nField>0 ) SYNTAX("J-card in event"); + if( p->zTicketUuid ) SYNTAX("K-card in event"); + if( p->zWikiTitle!=0 ) SYNTAX("L-card in event"); + if( p->zRepoCksum ) SYNTAX("R-card in event"); + if( p->zWiki==0 ) SYNTAX("missing W-card on event"); + if( !seenZ ) 
SYNTAX("missing Z-card on event"); + p->type = CFTYPE_EVENT; + }else if( p->zWiki!=0 || p->zWikiTitle!=0 ){ + if( p->zAttachName ) SYNTAX("A-card in wiki"); + if( p->zBaseline ) SYNTAX("B-card in wiki"); + if( p->rDate<=0.0 ) SYNTAX("missing date on wiki"); + if( p->nFile>0 ) SYNTAX("F-card in wiki"); + if( p->nField>0 ) SYNTAX("J-card in wiki"); + if( p->zTicketUuid ) SYNTAX("K-card in wiki"); + if( p->zWikiTitle==0 ) SYNTAX("missing L-card on wiki"); + if( p->zRepoCksum ) SYNTAX("R-card in wiki"); + if( p->nTag>0 ) SYNTAX("T-card in wiki"); + if( p->zWiki==0 ) SYNTAX("missing W-card on wiki"); + if( !seenZ ) SYNTAX("missing Z-card on wiki"); p->type = CFTYPE_WIKI; - }else if( p->nTag>0 ){ - if( p->rDate<=0.0 ) goto manifest_syntax_error; - if( p->nParent>0 ) goto manifest_syntax_error; - if( p->zWikiTitle ) goto manifest_syntax_error; - if( p->zTicketUuid ) goto manifest_syntax_error; - if( p->zAttachName ) goto manifest_syntax_error; - if( !seenZ ) goto manifest_syntax_error; - p->type = CFTYPE_CONTROL; + }else if( hasSelfRefTag || p->nFile>0 || p->zRepoCksum!=0 || p->zBaseline + || p->nParent>0 ){ + if( p->zAttachName ) SYNTAX("A-card in manifest"); + if( p->rDate<=0.0 ) SYNTAX("missing date on manifest"); + if( p->nField>0 ) SYNTAX("J-card in manifest"); + if( p->zTicketUuid ) SYNTAX("K-card in manifest"); + p->type = CFTYPE_MANIFEST; + }else if( p->nField>0 || p->zTicketUuid!=0 ){ + if( p->zAttachName ) SYNTAX("A-card in ticket"); + if( p->rDate<=0.0 ) SYNTAX("missing date on ticket"); + if( p->nField==0 ) SYNTAX("missing J-card on ticket"); + if( p->zTicketUuid==0 ) SYNTAX("missing K-card on ticket"); + if( p->zMimetype) SYNTAX("N-card in ticket"); + if( p->nTag>0 ) SYNTAX("T-card in ticket"); + if( p->zUser==0 ) SYNTAX("missing U-card on ticket"); + if( !seenZ ) SYNTAX("missing Z-card on ticket"); + p->type = CFTYPE_TICKET; }else if( p->zAttachName ){ - if( p->nCChild>0 ) goto manifest_syntax_error; - if( p->rDate==0.0 ) goto manifest_syntax_error; - if( p->zTicketUuid ) goto manifest_syntax_error; - if( p->zWikiTitle ) goto manifest_syntax_error; - if( !seenZ ) goto manifest_syntax_error; + if( p->rDate<=0.0 ) SYNTAX("missing date on attachment"); + if( p->nTag>0 ) SYNTAX("T-card in attachment"); + if( !seenZ ) SYNTAX("missing Z-card on attachment"); p->type = CFTYPE_ATTACHMENT; }else{ - if( p->nCChild>0 ) goto manifest_syntax_error; - if( p->rDate<=0.0 ) goto manifest_syntax_error; - if( p->nParent>0 ) goto manifest_syntax_error; - if( p->nField>0 ) goto manifest_syntax_error; - if( p->zTicketUuid ) goto manifest_syntax_error; - if( p->zWiki ) goto manifest_syntax_error; - if( p->zWikiTitle ) goto manifest_syntax_error; - if( p->zTicketUuid ) goto manifest_syntax_error; - if( p->zAttachName ) goto manifest_syntax_error; - p->type = CFTYPE_MANIFEST; + if( p->rDate<=0.0 ) SYNTAX("missing date on control"); + if( p->zMimetype ) SYNTAX("N-card in control"); + if( !seenZ ) SYNTAX("missing Z-card on control"); + p->type = CFTYPE_CONTROL; } md5sum_init(); - return 1; + if( !isRepeat ) g.parseCnt[p->type]++; + blob_reset(&bUuid); + return p; manifest_syntax_error: - /*fprintf(stderr, "Manifest error on line %i\n", lineNo);fflush(stderr);*/ + if(bUuid.nUsed){ + blob_appendf(pErr, "manifest [%.40s] ", blob_str(&bUuid)); + blob_reset(&bUuid); + } + if( zErr ){ + blob_appendf(pErr, "line %d: %s", lineNo, zErr); + }else{ + blob_appendf(pErr, "unknown error on line %d", lineNo); + } md5sum_init(); - manifest_clear(p); + manifest_destroy(p); return 0; } + +/* +** Get a manifest given the rid 
for the control artifact. Return +** a pointer to the manifest on success or NULL if there is a failure. +*/ +Manifest *manifest_get(int rid, int cfType, Blob *pErr){ + Blob content; + Manifest *p; + if( !rid ) return 0; + p = manifest_cache_find(rid); + if( p ){ + if( cfType!=CFTYPE_ANY && cfType!=p->type ){ + manifest_cache_insert(p); + p = 0; + } + return p; + } + content_get(rid, &content); + p = manifest_parse(&content, rid, pErr); + if( p && cfType!=CFTYPE_ANY && cfType!=p->type ){ + manifest_destroy(p); + p = 0; + } + return p; +} + +/* +** Given a check-in name, load and parse the manifest for that check-in. +** Throw a fatal error if anything goes wrong. +*/ +Manifest *manifest_get_by_name(const char *zName, int *pRid){ + int rid; + Manifest *p; + + rid = name_to_typed_rid(zName, "ci"); + if( !is_a_version(rid) ){ + fossil_fatal("no such check-in: %s", zName); + } + if( pRid ) *pRid = rid; + p = manifest_get(rid, CFTYPE_MANIFEST, 0); + if( p==0 ){ + fossil_fatal("cannot parse manifest for check-in: %s", zName); + } + return p; +} /* ** COMMAND: test-parse-manifest ** -** Usage: %fossil test-parse-manifest FILENAME +** Usage: %fossil test-parse-manifest FILENAME ?N? ** ** Parse the manifest and discarded. Use for testing only. */ void manifest_test_parse_cmd(void){ - Manifest m; + Manifest *p; Blob b; - if( g.argc!=3 ){ + int i; + int n = 1; + sqlite3_open(":memory:", &g.db); + if( g.argc!=3 && g.argc!=4 ){ usage("FILENAME"); } - db_must_be_within_tree(); blob_read_from_file(&b, g.argv[2]); - manifest_parse(&m, &b); - manifest_clear(&m); + if( g.argc>3 ) n = atoi(g.argv[3]); + for(i=0; izBaseline!=0 && p->pBaseline==0 ){ + int rid = uuid_to_rid(p->zBaseline, 1); + p->pBaseline = manifest_get(rid, CFTYPE_MANIFEST, 0); + if( p->pBaseline==0 ){ + if( !throwError ){ + db_multi_exec( + "INSERT OR IGNORE INTO orphan(rid, baseline) VALUES(%d,%d)", + p->rid, rid + ); + return 1; + } + fossil_fatal("cannot access baseline manifest %S", p->zBaseline); + } + } + return 0; +} + +/* +** Rewind a manifest-file iterator back to the beginning of the manifest. +*/ +void manifest_file_rewind(Manifest *p){ + p->iFile = 0; + fetch_baseline(p, 1); + if( p->pBaseline ){ + p->pBaseline->iFile = 0; + } +} + +/* +** Advance to the next manifest-file. +** +** Return NULL for end-of-records or if there is an error. If an error +** occurs and pErr!=0 then store 1 in *pErr. +*/ +ManifestFile *manifest_file_next( + Manifest *p, + int *pErr +){ + ManifestFile *pOut = 0; + if( pErr ) *pErr = 0; + if( p->pBaseline==0 ){ + /* Manifest p is a baseline-manifest. Just scan down the list + ** of files. */ + if( p->iFilenFile ) pOut = &p->aFile[p->iFile++]; + }else{ + /* Manifest p is a delta-manifest. Scan the baseline but amend the + ** file list in the baseline with changes described by p. + */ + Manifest *pB = p->pBaseline; + int cmp; + while(1){ + if( pB->iFile>=pB->nFile ){ + /* We have used all entries out of the baseline. Return the next + ** entry from the delta. */ + if( p->iFilenFile ) pOut = &p->aFile[p->iFile++]; + break; + }else if( p->iFile>=p->nFile ){ + /* We have used all entries from the delta. Return the next + ** entry from the baseline. */ + if( pB->iFilenFile ) pOut = &pB->aFile[pB->iFile++]; + break; + }else if( (cmp = fossil_strcmp(pB->aFile[pB->iFile].zName, + p->aFile[p->iFile].zName)) < 0 ){ + /* The next baseline entry comes before the next delta entry. + ** So return the baseline entry. 
*/ + pOut = &pB->aFile[pB->iFile++]; + break; + }else if( cmp>0 ){ + /* The next delta entry comes before the next baseline + ** entry so return the delta entry */ + pOut = &p->aFile[p->iFile++]; + break; + }else if( p->aFile[p->iFile].zUuid ){ + /* The next delta entry is a replacement for the next baseline + ** entry. Skip the baseline entry and return the delta entry */ + pB->iFile++; + pOut = &p->aFile[p->iFile++]; + break; + }else{ + /* The next delta entry is a delete of the next baseline + ** entry. Skip them both. Repeat the loop to find the next + ** non-delete entry. */ + pB->iFile++; + p->iFile++; + continue; + } + } + } + return pOut; } /* ** Translate a filename into a filename-id (fnid). Create a new fnid ** if no previously exists. @@ -682,203 +1165,481 @@ db_exec(&s1); fnid = db_last_insert_rowid(); } return fnid; } + +/* +** Compute an appropriate mlink.mperm integer for the permission string +** of a file. +*/ +int manifest_file_mperm(ManifestFile *pFile){ + int mperm = PERM_REG; + if( pFile && pFile->zPerm){ + if( strstr(pFile->zPerm,"x")!=0 ){ + mperm = PERM_EXE; + }else if( strstr(pFile->zPerm,"l")!=0 ){ + mperm = PERM_LNK; + } + } + return mperm; +} /* ** Add a single entry to the mlink table. Also add the filename to ** the filename table if it is not there already. +** +** An mlink entry is always created if isPrimary is true. But if +** isPrimary is false (meaning that pmid is a merge parent of mid) +** then the mlink entry is only created if there is already an mlink +** from primary parent for the same file. */ static void add_one_mlink( + int pmid, /* The parent manifest */ + const char *zFromUuid, /* UUID for content in parent */ int mid, /* The record ID of the manifest */ - const char *zFromUuid, /* UUID for the mlink.pid field */ - const char *zToUuid, /* UUID for the mlink.fid field */ + const char *zToUuid, /* UUID for content in child */ const char *zFilename, /* Filename */ - const char *zPrior /* Previous filename. NULL if unchanged */ + const char *zPrior, /* Previous filename. 
NULL if unchanged */ + int isPublic, /* True if mid is not a private manifest */ + int isPrimary, /* pmid is the primary parent of mid */ + int mperm /* 1: exec, 2: symlink */ ){ int fnid, pfnid, pid, fid; - static Stmt s1; + int doInsert; + static Stmt s1, s2; fnid = filename_to_fnid(zFilename); if( zPrior==0 ){ pfnid = 0; }else{ pfnid = filename_to_fnid(zPrior); } - if( zFromUuid==0 ){ + if( zFromUuid==0 || zFromUuid[0]==0 ){ pid = 0; }else{ pid = uuid_to_rid(zFromUuid, 1); } - if( zToUuid==0 ){ + if( zToUuid==0 || zToUuid[0]==0 ){ fid = 0; }else{ fid = uuid_to_rid(zToUuid, 1); - } - db_static_prepare(&s1, - "INSERT INTO mlink(mid,pid,fid,fnid,pfnid)" - "VALUES(:m,:p,:f,:n,:pfn)" - ); - db_bind_int(&s1, ":m", mid); - db_bind_int(&s1, ":p", pid); - db_bind_int(&s1, ":f", fid); - db_bind_int(&s1, ":n", fnid); - db_bind_int(&s1, ":pfn", pfnid); - db_exec(&s1); + if( isPublic ) content_make_public(fid); + } + if( isPrimary ){ + doInsert = 1; + }else{ + db_static_prepare(&s2, + "SELECT 1 FROM mlink WHERE mid=:m AND fnid=:n AND NOT isaux" + ); + db_bind_int(&s2, ":m", mid); + db_bind_int(&s2, ":n", fnid); + doInsert = db_step(&s2)==SQLITE_ROW; + db_reset(&s2); + } + if( doInsert ){ + db_static_prepare(&s1, + "INSERT INTO mlink(mid,fid,pmid,pid,fnid,pfnid,mperm,isaux)" + "VALUES(:m,:f,:pm,:p,:n,:pfn,:mp,:isaux)" + ); + db_bind_int(&s1, ":m", mid); + db_bind_int(&s1, ":f", fid); + db_bind_int(&s1, ":pm", pmid); + db_bind_int(&s1, ":p", pid); + db_bind_int(&s1, ":n", fnid); + db_bind_int(&s1, ":pfn", pfnid); + db_bind_int(&s1, ":mp", mperm); + db_bind_int(&s1, ":isaux", isPrimary==0); + db_exec(&s1); + } if( pid && fid ){ content_deltify(pid, fid, 0); } } /* -** Locate a file named zName in the aFile[] array of the given -** manifest. We assume that filenames are in sorted order. -** Use a binary search. Return turn the index of the matching -** entry. Or return -1 if not found. +** Do a binary search to find a file in the p->aFile[] array. +** +** As an optimization, guess that the file we seek is at index p->iFile. +** That will usually be the case. If it is not found there, then do the +** actual binary search. +** +** Update p->iFile to be the index of the file that is found. */ -static int find_file_in_manifest(Manifest *p, const char *zName){ +static ManifestFile *manifest_file_seek_base( + Manifest *p, /* Manifest to search */ + const char *zName, /* Name of the file we are looking for */ + int bBest /* 0: exact match only. 1: closest match */ +){ int lwr, upr; int c; int i; lwr = 0; upr = p->nFile - 1; + if( p->iFile>=lwr && p->iFileaFile[p->iFile+1].zName, zName); + if( c==0 ){ + return &p->aFile[++p->iFile]; + }else if( c>0 ){ + upr = p->iFile; + }else{ + lwr = p->iFile+1; + } + } while( lwr<=upr ){ i = (lwr+upr)/2; - c = strcmp(p->aFile[i].zName, zName); + c = fossil_strcmp(p->aFile[i].zName, zName); if( c<0 ){ lwr = i+1; }else if( c>0 ){ upr = i-1; }else{ - return i; + p->iFile = i; + return &p->aFile[i]; + } + } + if( bBest ){ + if( lwr>=p->nFile ) lwr = p->nFile-1; + i = (int)strlen(zName); + if( strncmp(zName, p->aFile[lwr].zName, i)==0 ) return &p->aFile[lwr]; + } + return 0; +} + +/* +** Locate a file named zName in the aFile[] array of the given manifest. +** Return a pointer to the appropriate ManifestFile object. Return NULL +** if not found. +** +** This routine works even if p is a delta-manifest. The pointer +** returned might be to the baseline. +** +** We assume that filenames are in sorted order and use a binary search. 
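**
** A rough usage sketch, under the assumption that p was obtained from
** manifest_get() or manifest_parse() (this is not code from this
** change):
**
**     ManifestFile *pFile = manifest_file_seek(p, "src/main.c", 0);
**     if( pFile ){
**       /* pFile->zUuid holds the content hash for that filename */
**     }
**
** A caller that wants every file, including entries inherited from the
** baseline of a delta manifest, iterates instead:
**
**     manifest_file_rewind(p);
**     while( (pFile = manifest_file_next(p, 0))!=0 ){
**       /* visit pFile->zName / pFile->zUuid / pFile->zPerm */
**     }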
+*/ +ManifestFile *manifest_file_seek(Manifest *p, const char *zName, int bBest){ + ManifestFile *pFile; + + pFile = manifest_file_seek_base(p, zName, p->zBaseline ? 0 : bBest); + if( pFile && pFile->zUuid==0 ) return 0; + if( pFile==0 && p->zBaseline ){ + fetch_baseline(p, 1); + pFile = manifest_file_seek_base(p->pBaseline, zName,bBest); + } + return pFile; +} + +/* +** Look for a file in a manifest, taking the case-sensitive option +** into account. If case-sensitive is off, then files in any case +** will match. +*/ +ManifestFile *manifest_file_find(Manifest *p, const char *zName){ + int i; + Manifest *pBase; + if( filenames_are_case_sensitive() ){ + return manifest_file_seek(p, zName, 0); + } + for(i=0; inFile; i++){ + if( fossil_stricmp(zName, p->aFile[i].zName)==0 ){ + return &p->aFile[i]; + } + } + if( p->zBaseline==0 ) return 0; + fetch_baseline(p, 1); + pBase = p->pBaseline; + if( pBase==0 ) return 0; + for(i=0; inFile; i++){ + if( fossil_stricmp(zName, pBase->aFile[i].zName)==0 ){ + return &pBase->aFile[i]; } } - return -1; + return 0; } /* -** Add mlink table entries associated with manifest cid. The -** parent manifest is pid. +** Add mlink table entries associated with manifest cid, pChild. The +** parent manifest is pid, pParent. One of either pChild or pParent +** will be NULL and it will be computed based on cid/pid. ** -** A single mlink entry is added for every file that changed content -** and/or name going from pid to cid. +** A single mlink entry is added for every file that changed content, +** name, and/or permissions going from pid to cid. ** ** Deleted files have mlink.fid=0. ** Added files have mlink.pid=0. +** File added by merge have mlink.pid=-1 ** Edited files have both mlink.pid!=0 and mlink.fid!=0 +** +** Many mlink entries for merge parents will only be added if another mlink +** entry already exists for the same file from the primary parent. Therefore, +** to ensure that all merge-parent mlink entries are properly created: +** +** (1) Make this routine a no-op if pParent is a merge parent and the +** primary parent is a phantom. +** (2) Invoke this routine recursively for merge-parents if pParent is the +** primary parent. */ -static void add_mlink(int pid, Manifest *pParent, int cid, Manifest *pChild){ - Manifest other; +static void add_mlink( + int pmid, Manifest *pParent, /* Parent check-in */ + int mid, Manifest *pChild, /* The child check-in */ + int isPrim /* TRUE if pmid is the primary parent of mid */ +){ Blob otherContent; - int i, j; + int otherRid; + int i, rc; + ManifestFile *pChildFile, *pParentFile; + Manifest **ppOther; + static Stmt eq; + int isPublic; /* True if pChild is non-private */ + + /* If mlink table entires are already exist for the pmid-to-mid transition, + ** then abort early doing no work. + */ + db_static_prepare(&eq, "SELECT 1 FROM mlink WHERE mid=:mid AND pmid=:pmid"); + db_bind_int(&eq, ":mid", mid); + db_bind_int(&eq, ":pmid", pmid); + rc = db_step(&eq); + db_reset(&eq); + if( rc==SQLITE_ROW ) return; - if( db_exists("SELECT 1 FROM mlink WHERE mid=%d", cid) ){ - return; - } + /* Compute the value of the missing pParent or pChild parameter. + ** Fetch the baseline check-ins for both. 
+ */ assert( pParent==0 || pChild==0 ); if( pParent==0 ){ - pParent = &other; - content_get(pid, &otherContent); - }else{ - pChild = &other; - content_get(cid, &otherContent); - } - if( blob_size(&otherContent)==0 ) return; - if( manifest_parse(&other, &otherContent)==0 ) return; - content_deltify(pid, cid, 0); - - /* Use the iRename fields to find the cross-linkage between - ** renamed files. */ - for(j=0; jnFile; j++){ - const char *zPrior = pChild->aFile[j].zPrior; - if( zPrior && zPrior[0] ){ - i = find_file_in_manifest(pParent, zPrior); - if( i>=0 ){ - pChild->aFile[j].iRename = i; - pParent->aFile[i].iRename = j; - } - } - } - - /* Construct the mlink entries */ - for(i=j=0; inFile && jnFile; ){ - int c; - if( pParent->aFile[i].iRename>=0 ){ - i++; - }else if( (c = strcmp(pParent->aFile[i].zName, pChild->aFile[j].zName))<0 ){ - add_one_mlink(cid, pParent->aFile[i].zUuid,0,pParent->aFile[i].zName,0); - i++; - }else if( c>0 ){ - int rn = pChild->aFile[j].iRename; - if( rn>=0 ){ - add_one_mlink(cid, pParent->aFile[rn].zUuid, pChild->aFile[j].zUuid, - pChild->aFile[j].zName, pParent->aFile[rn].zName); - }else{ - add_one_mlink(cid, 0, pChild->aFile[j].zUuid, pChild->aFile[j].zName,0); - } - j++; - }else{ - if( strcmp(pParent->aFile[i].zUuid, pChild->aFile[j].zUuid)!=0 ){ - add_one_mlink(cid, pParent->aFile[i].zUuid, pChild->aFile[j].zUuid, - pChild->aFile[j].zName, 0); - } - i++; - j++; - } - } - while( inFile ){ - if( pParent->aFile[i].iRename<0 ){ - add_one_mlink(cid, pParent->aFile[i].zUuid, 0, pParent->aFile[i].zName,0); - } - i++; - } - while( jnFile ){ - int rn = pChild->aFile[j].iRename; - if( rn>=0 ){ - add_one_mlink(cid, pParent->aFile[rn].zUuid, pChild->aFile[j].zUuid, - pChild->aFile[j].zName, pParent->aFile[rn].zName); - }else{ - add_one_mlink(cid, 0, pChild->aFile[j].zUuid, pChild->aFile[j].zName,0); - } - j++; - } - manifest_clear(&other); -} - -/* -** True if manifest_crosslink_begin() has been called but -** manifest_crosslink_end() is still pending. -*/ -static int manifest_crosslink_busy = 0; + ppOther = &pParent; + otherRid = pmid; + }else{ + ppOther = &pChild; + otherRid = mid; + } + if( (*ppOther = manifest_cache_find(otherRid))==0 ){ + content_get(otherRid, &otherContent); + if( blob_size(&otherContent)==0 ) return; + *ppOther = manifest_parse(&otherContent, otherRid, 0); + if( *ppOther==0 ) return; + } + if( fetch_baseline(pParent, 0) || fetch_baseline(pChild, 0) ){ + manifest_destroy(*ppOther); + return; + } + isPublic = !content_is_private(mid); + + /* If pParent is not the primary parent of pChild, and the primary + ** parent of pChild is a phantom, then abort this routine without + ** doing any work. The mlink entries will be computed when the + ** primary parent dephantomizes. + */ + if( !isPrim && otherRid==mid + && !db_exists("SELECT 1 FROM blob WHERE uuid=%Q AND size>0", + pChild->azParent[0]) + ){ + manifest_cache_insert(*ppOther); + return; + } + + /* Try to make the parent manifest a delta from the child, if that + ** is an appropriate thing to do. For a new baseline, make the + ** previous baseline a delta from the current baseline. + */ + if( (pParent->zBaseline==0)==(pChild->zBaseline==0) ){ + content_deltify(pmid, mid, 0); + }else if( pChild->zBaseline==0 && pParent->zBaseline!=0 ){ + content_deltify(pParent->pBaseline->rid, mid, 0); + } + + /* Remember all children less than a few seconds younger than their parent, + ** as we might want to fudge the times for those children. 
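**
** In concrete terms (reading off the AGE_* constants defined further
** down): a child recorded less than AGE_FUDGE_WINDOW after its parent,
** i.e. less than 2.0/86400.0 of a day or 2 seconds, gets a row in the
** temporary time_fudge table, and manifest_crosslink_end() later nudges
** such timestamps apart in AGE_ADJUST_INCREMENT steps of
** 25.0/86400000.0 of a day (25 milliseconds) so that parents and
** children still sort chronologically on the timeline.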
+ */ + if( pChild->rDaterDate+AGE_FUDGE_WINDOW + && manifest_crosslink_busy + ){ + db_multi_exec( + "INSERT OR REPLACE INTO time_fudge VALUES(%d, %.17g, %d, %.17g);", + pParent->rid, pParent->rDate, pChild->rid, pChild->rDate + ); + } + + /* First look at all files in pChild, ignoring its baseline. This + ** is where most of the changes will be found. + */ + for(i=0, pChildFile=pChild->aFile; inFile; i++, pChildFile++){ + int mperm = manifest_file_mperm(pChildFile); + if( pChildFile->zPrior ){ + pParentFile = manifest_file_seek(pParent, pChildFile->zPrior, 0); + if( pParentFile ){ + /* File with name change */ + add_one_mlink(pmid, pParentFile->zUuid, mid, pChildFile->zUuid, + pChildFile->zName, pChildFile->zPrior, + isPublic, isPrim, mperm); + }else{ + /* File name changed, but the old name is not found in the parent! + ** Treat this like a new file. */ + add_one_mlink(pmid, 0, mid, pChildFile->zUuid, pChildFile->zName, 0, + isPublic, isPrim, mperm); + } + }else{ + pParentFile = manifest_file_seek(pParent, pChildFile->zName, 0); + if( pParentFile==0 ){ + if( pChildFile->zUuid ){ + /* A new file */ + add_one_mlink(pmid, 0, mid, pChildFile->zUuid, pChildFile->zName, 0, + isPublic, isPrim, mperm); + } + }else if( fossil_strcmp(pChildFile->zUuid, pParentFile->zUuid)!=0 + || manifest_file_mperm(pParentFile)!=mperm ){ + /* Changes in file content or permissions */ + add_one_mlink(pmid, pParentFile->zUuid, mid, pChildFile->zUuid, + pChildFile->zName, 0, isPublic, isPrim, mperm); + } + } + } + if( pParent->zBaseline && pChild->zBaseline ){ + /* Both parent and child are delta manifests. Look for files that + ** are deleted or modified in the parent but which reappear or revert + ** to baseline in the child and show such files as being added or changed + ** in the child. */ + for(i=0, pParentFile=pParent->aFile; inFile; i++, pParentFile++){ + if( pParentFile->zUuid ){ + pChildFile = manifest_file_seek_base(pChild, pParentFile->zName, 0); + if( pChildFile==0 ){ + /* The child file reverts to baseline. Show this as a change */ + pChildFile = manifest_file_seek(pChild, pParentFile->zName, 0); + if( pChildFile ){ + add_one_mlink(pmid, pParentFile->zUuid, mid, pChildFile->zUuid, + pChildFile->zName, 0, isPublic, isPrim, + manifest_file_mperm(pChildFile)); + } + } + }else{ + pChildFile = manifest_file_seek(pChild, pParentFile->zName, 0); + if( pChildFile ){ + /* File resurrected in the child after having been deleted in + ** the parent. Show this as an added file. */ + add_one_mlink(pmid, 0, mid, pChildFile->zUuid, pChildFile->zName, 0, + isPublic, isPrim, manifest_file_mperm(pChildFile)); + } + } + } + }else if( pChild->zBaseline==0 ){ + /* pChild is a baseline. Look for files that are present in pParent + ** but are missing from pChild and mark them as having been deleted. */ + manifest_file_rewind(pParent); + while( (pParentFile = manifest_file_next(pParent,0))!=0 ){ + pChildFile = manifest_file_seek(pChild, pParentFile->zName, 0); + if( pChildFile==0 && pParentFile->zUuid!=0 ){ + add_one_mlink(pmid, pParentFile->zUuid, mid, 0, pParentFile->zName, 0, + isPublic, isPrim, 0); + } + } + } + manifest_cache_insert(*ppOther); + + /* If pParent is the primary parent of pChild, also run this analysis + ** for all merge parents of pChild + */ + if( isPrim ){ + for(i=1; inParent; i++){ + pmid = uuid_to_rid(pChild->azParent[i], 0); + if( pmid<=0 ) continue; + add_mlink(pmid, 0, mid, pChild, 0); + } + } +} /* ** Setup to do multiple manifest_crosslink() calls. 
** This is only required if processing ticket changes. */ void manifest_crosslink_begin(void){ assert( manifest_crosslink_busy==0 ); manifest_crosslink_busy = 1; db_begin_transaction(); - db_multi_exec("CREATE TEMP TABLE pending_tkt(uuid TEXT UNIQUE)"); + db_multi_exec( + "CREATE TEMP TABLE pending_tkt(uuid TEXT UNIQUE);" + "CREATE TEMP TABLE time_fudge(" + " mid INTEGER PRIMARY KEY," /* The rid of a manifest */ + " m1 REAL," /* The timestamp on mid */ + " cid INTEGER," /* A child or mid */ + " m2 REAL" /* Timestamp on the child */ + ");" + ); } +#if INTERFACE +/* Timestamps might be adjusted slightly to ensure that check-ins appear +** on the timeline in chronological order. This is the maximum amount +** of the adjustment window, in days. +*/ +#define AGE_FUDGE_WINDOW (2.0/86400.0) /* 2 seconds */ + +/* This is increment (in days) by which timestamps are adjusted for +** use on the timeline. +*/ +#define AGE_ADJUST_INCREMENT (25.0/86400000.0) /* 25 milliseconds */ + +#endif /* LOCAL_INTERFACE */ + /* ** Finish up a sequence of manifest_crosslink calls. */ -void manifest_crosslink_end(void){ - Stmt q; +int manifest_crosslink_end(int flags){ + Stmt q, u; + int i; + int rc = TH_OK; + int permitHooks = (flags & MC_PERMIT_HOOKS); + const char *zScript = 0; assert( manifest_crosslink_busy==1 ); + if( permitHooks ){ + rc = xfer_run_common_script(); + if( rc==TH_OK ){ + zScript = xfer_ticket_code(); + } + } db_prepare(&q, "SELECT uuid FROM pending_tkt"); while( db_step(&q)==SQLITE_ROW ){ const char *zUuid = db_column_text(&q, 0); ticket_rebuild_entry(zUuid); + if( permitHooks && rc==TH_OK ){ + rc = xfer_run_script(zScript, zUuid, 0); + } } db_finalize(&q); db_multi_exec("DROP TABLE pending_tkt"); + + /* If multiple check-ins happen close together in time, adjust their + ** times by a few milliseconds to make sure they appear in chronological + ** order. + */ + db_prepare(&q, + "UPDATE time_fudge SET m1=m2-:incr WHERE m1>=m2 AND m1zTicketUuid ); if( !isNew ){ for(i=0; inField; i++){ - if( strcmp(pManifest->aField[i].zName, zStatusColumn)==0 ){ + if( fossil_strcmp(pManifest->aField[i].zName, zStatusColumn)==0 ){ zNewStatus = pManifest->aField[i].zValue; } } if( zNewStatus ){ - blob_appendf(&comment, "%h ticket [%.10s]: %s", - zNewStatus, pManifest->zTicketUuid, zTitle + blob_appendf(&comment, "%h ticket [%!S|%S]: %h", + zNewStatus, pManifest->zTicketUuid, pManifest->zTicketUuid, zTitle ); if( pManifest->nField>1 ){ blob_appendf(&comment, " plus %d other change%s", pManifest->nField-1, pManifest->nField==2 ? "" : "s"); } - blob_appendf(&brief, "%h ticket [%.10s].", - zNewStatus, pManifest->zTicketUuid); + blob_appendf(&brief, "%h ticket [%!S|%S].", + zNewStatus, pManifest->zTicketUuid, pManifest->zTicketUuid); }else{ - zNewStatus = db_text("unknown", - "SELECT %s FROM ticket WHERE tkt_uuid='%s'", + zNewStatus = db_text("unknown", + "SELECT \"%w\" FROM ticket WHERE tkt_uuid=%Q", zStatusColumn, pManifest->zTicketUuid ); - blob_appendf(&comment, "Ticket [%.10s] %s status still %h with " + blob_appendf(&comment, "Ticket [%!S|%S] %h status still %h with " "%d other change%s", - pManifest->zTicketUuid, zTitle, zNewStatus, pManifest->nField, - pManifest->nField==1 ? "" : "s" + pManifest->zTicketUuid, pManifest->zTicketUuid, zTitle, zNewStatus, + pManifest->nField, pManifest->nField==1 ? 
"" : "s" ); - free(zNewStatus); - blob_appendf(&brief, "Ticket [%.10s]: %d change%s", - pManifest->zTicketUuid, pManifest->nField, + fossil_free(zNewStatus); + blob_appendf(&brief, "Ticket [%!S|%S]: %d change%s", + pManifest->zTicketUuid, pManifest->zTicketUuid, pManifest->nField, pManifest->nField==1 ? "" : "s" ); } }else{ - blob_appendf(&comment, "New ticket [%.10s] %h.", - pManifest->zTicketUuid, zTitle + blob_appendf(&comment, "New ticket [%!S|%S] %h.", + pManifest->zTicketUuid, pManifest->zTicketUuid, zTitle ); - blob_appendf(&brief, "New ticket [%.10s].", pManifest->zTicketUuid); + blob_appendf(&brief, "New ticket [%!S|%S].", pManifest->zTicketUuid, + pManifest->zTicketUuid); } - free(zTitle); + fossil_free(zTitle); db_multi_exec( "REPLACE INTO event(type,tagid,mtime,objid,user,comment,brief)" "VALUES('t',%d,%.17g,%d,%Q,%Q,%Q)", tktTagId, pManifest->rDate, rid, pManifest->zUser, blob_str(&comment), blob_str(&brief) ); blob_reset(&comment); blob_reset(&brief); } + +/* +** Add an extra line of text to the end of a manifest to prevent it being +** recognized as a valid manifest. +** +** This routine is called prior to writing out the text of a manifest as +** the "manifest" file in the root of a repository when +** "fossil setting manifest on" is enabled. That way, if the files of +** the project are imported into a different Fossil project, the manifest +** file will not be interpreted as a control artifact in that other project. +** +** Normally it is sufficient to simply append the extra line of text. +** However, if the manifest is PGP signed then the extra line has to be +** inserted before the PGP signature (thus invalidating the signature). +*/ +void sterilize_manifest(Blob *p){ + char *z, *zOrig; + int n, nOrig; + static const char zExtraLine[] = + "# Remove this line to create a well-formed manifest.\n"; + + z = zOrig = blob_materialize(p); + n = nOrig = blob_size(p); + remove_pgp_signature(&z, &n); + if( z==zOrig ){ + blob_append(p, zExtraLine, -1); + }else{ + int iEnd; + Blob copy; + memcpy(©, p, sizeof(copy)); + blob_init(p, 0, 0); + iEnd = (int)(&z[n] - zOrig); + blob_append(p, zOrig, iEnd); + blob_append(p, zExtraLine, -1); + blob_append(p, &zOrig[iEnd], -1); + blob_zero(©); + } +} + +/* +** This is the comparison function used to sort the tag array. +*/ +static int tag_compare(const void *a, const void *b){ + struct TagType *pA = (struct TagType*)a; + struct TagType *pB = (struct TagType*)b; + int c; + c = fossil_strcmp(pA->zUuid, pB->zUuid); + if( c==0 ){ + c = fossil_strcmp(pA->zName, pB->zName); + } + return c; +} /* ** Scan artifact rid/pContent to see if it is a control artifact of ** any key: ** @@ -964,215 +1778,544 @@ ** * Manifest ** * Control ** * Wiki Page ** * Ticket Change ** * Cluster +** * Attachment +** * Event ** ** If the input is a control artifact, then make appropriate entries ** in the auxiliary tables of the database in order to crosslink the ** artifact. ** -** If global variable g.xlinkClusterOnly is true, then ignore all +** If global variable g.xlinkClusterOnly is true, then ignore all ** control artifacts other than clusters. +** +** This routine always resets the pContent blob before returning. ** ** Historical note: This routine original processed manifests only. ** Processing for other control artifacts was added later. The name ** of the routine, "manifest_crosslink", and the name of this source ** file, is a legacy of its original use. 
*/ -int manifest_crosslink(int rid, Blob *pContent){ - int i; - Manifest m; +int manifest_crosslink(int rid, Blob *pContent, int flags){ + int i, rc = TH_OK; + Manifest *p; Stmt q; int parentid = 0; + int permitHooks = (flags & MC_PERMIT_HOOKS); + const char *zScript = 0; + const char *zUuid = 0; - if( manifest_parse(&m, pContent)==0 ){ + if( (p = manifest_cache_find(rid))!=0 ){ + blob_reset(pContent); + }else if( (p = manifest_parse(pContent, rid, 0))==0 ){ + assert( blob_is_reset(pContent) || pContent==0 ); + if( (flags & MC_NO_ERRORS)==0 ){ + fossil_error(1, "syntax error in manifest [%S]", + db_text(0, "SELECT uuid FROM blob WHERE rid=%d",rid)); + } + return 0; + } + if( g.xlinkClusterOnly && p->type!=CFTYPE_CLUSTER ){ + manifest_destroy(p); + assert( blob_is_reset(pContent) ); + if( (flags & MC_NO_ERRORS)==0 ) fossil_error(1, "no manifest"); return 0; } - if( g.xlinkClusterOnly && m.type!=CFTYPE_CLUSTER ){ - manifest_clear(&m); + if( p->type==CFTYPE_MANIFEST && fetch_baseline(p, 0) ){ + manifest_destroy(p); + assert( blob_is_reset(pContent) ); + if( (flags & MC_NO_ERRORS)==0 ){ + fossil_error(1, "cannot fetch baseline for manifest [%S]", + db_text(0, "SELECT uuid FROM blob WHERE rid=%d",rid)); + } return 0; } db_begin_transaction(); - if( m.type==CFTYPE_MANIFEST ){ + if( p->type==CFTYPE_MANIFEST ){ + if( permitHooks ){ + zScript = xfer_commit_code(); + zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); + } if( !db_exists("SELECT 1 FROM mlink WHERE mid=%d", rid) ){ char *zCom; - for(i=0; izBaseline ){ + sqlite3_snprintf(sizeof(zBaseId), zBaseId, "%d", + uuid_to_rid(p->zBaseline,1)); + }else{ + sqlite3_snprintf(sizeof(zBaseId), zBaseId, "NULL"); + } + for(i=0; inParent; i++){ + int pid = uuid_to_rid(p->azParent[i], 1); + db_multi_exec( + "INSERT OR IGNORE INTO plink(pid, cid, isprim, mtime, baseid)" + "VALUES(%d, %d, %d, %.17g, %s)", + pid, rid, i==0, p->rDate, zBaseId/*safe-for-%s*/); + if( i==0 ) parentid = pid; + } + add_mlink(parentid, 0, rid, p, 1); + if( p->nParent>1 ){ + /* Change MLINK.PID from 0 to -1 for files that are added by merge. */ + db_multi_exec( + "UPDATE mlink SET pid=-1" + " WHERE mid=%d" + " AND pid=0" + " AND fnid IN " + " (SELECT fnid FROM mlink WHERE mid=%d GROUP BY fnid" + " HAVING count(*)<%d)", + rid, rid, p->nParent + ); + } + db_prepare(&q, "SELECT cid, isprim FROM plink WHERE pid=%d", rid); while( db_step(&q)==SQLITE_ROW ){ int cid = db_column_int(&q, 0); - add_mlink(rid, &m, cid, 0); + int isprim = db_column_int(&q, 1); + add_mlink(rid, p, cid, 0, isprim); } db_finalize(&q); + if( p->nParent==0 ){ + /* For root files (files without parents) add mlink entries + ** showing all content as new. 
*/ + int isPublic = !content_is_private(rid); + for(i=0; inFile; i++){ + add_one_mlink(0, 0, rid, p->aFile[i].zUuid, p->aFile[i].zName, 0, + isPublic, 1, manifest_file_mperm(&p->aFile[i])); + } + } + search_doc_touch('c', rid, 0); db_multi_exec( "REPLACE INTO event(type,mtime,objid,user,comment," - "bgcolor,euser,ecomment)" + "bgcolor,euser,ecomment,omtime)" "VALUES('ci'," " coalesce(" " (SELECT julianday(value) FROM tagxref WHERE tagid=%d AND rid=%d)," " %.17g" " )," " %d,%Q,%Q," " (SELECT value FROM tagxref WHERE tagid=%d AND rid=%d AND tagtype>0)," " (SELECT value FROM tagxref WHERE tagid=%d AND rid=%d)," - " (SELECT value FROM tagxref WHERE tagid=%d AND rid=%d));", - TAG_DATE, rid, m.rDate, - rid, m.zUser, m.zComment, + " (SELECT value FROM tagxref WHERE tagid=%d AND rid=%d),%.17g);", + TAG_DATE, rid, p->rDate, + rid, p->zUser, p->zComment, TAG_BGCOLOR, rid, TAG_USER, rid, - TAG_COMMENT, rid + TAG_COMMENT, rid, p->rDate ); zCom = db_text(0, "SELECT coalesce(ecomment, comment) FROM event" " WHERE rowid=last_insert_rowid()"); - wiki_extract_links(zCom, rid, 0, m.rDate, 1, WIKI_INLINE); - free(zCom); + wiki_extract_links(zCom, rid, 0, p->rDate, 1, WIKI_INLINE); + fossil_free(zCom); + + /* If this is a delta-manifest, record the fact that this repository + ** contains delta manifests, to free the "commit" logic to generate + ** new delta manifests. + */ + if( p->zBaseline!=0 ){ + static int once = 1; + if( once ){ + db_set_int("seen-delta-manifest", 1, 0); + once = 0; + } + } } } - if( m.type==CFTYPE_CLUSTER ){ - tag_insert("cluster", 1, 0, rid, m.rDate, rid); - for(i=0; itype==CFTYPE_CLUSTER ){ + static Stmt del1; + tag_insert("cluster", 1, 0, rid, p->rDate, rid); + db_static_prepare(&del1, "DELETE FROM unclustered WHERE rid=:rid"); + for(i=0; inCChild; i++){ int mid; - mid = uuid_to_rid(m.azCChild[i], 1); + mid = uuid_to_rid(p->azCChild[i], 1); if( mid>0 ){ - db_multi_exec("DELETE FROM unclustered WHERE rid=%d", mid); + db_bind_int(&del1, ":rid", mid); + db_step(&del1); + db_reset(&del1); } } } - if( m.type==CFTYPE_CONTROL || m.type==CFTYPE_MANIFEST ){ - for(i=0; itype==CFTYPE_CONTROL + || p->type==CFTYPE_MANIFEST + || p->type==CFTYPE_EVENT + ){ + for(i=0; inTag; i++){ int tid; int type; - if( m.aTag[i].zUuid ){ - tid = uuid_to_rid(m.aTag[i].zUuid, 1); + if( p->aTag[i].zUuid ){ + tid = uuid_to_rid(p->aTag[i].zUuid, 1); }else{ tid = rid; } if( tid ){ - switch( m.aTag[i].zName[0] ){ - case '-': type = 0; break; /* Cancel prior occurances */ + switch( p->aTag[i].zName[0] ){ + case '-': type = 0; break; /* Cancel prior occurrences */ case '+': type = 1; break; /* Apply to target only */ case '*': type = 2; break; /* Propagate to descendants */ default: - fossil_fatal("unknown tag type in manifest: %s", m.aTag); + fossil_error(1, "unknown tag type in manifest: %s", p->aTag); return 0; } - tag_insert(&m.aTag[i].zName[1], type, m.aTag[i].zValue, - rid, m.rDate, tid); + tag_insert(&p->aTag[i].zName[1], type, p->aTag[i].zValue, + rid, p->rDate, tid); } } if( parentid ){ tag_propagate_all(parentid); } } - if( m.type==CFTYPE_WIKI ){ - char *zTag = mprintf("wiki-%s", m.zWikiTitle); + if( p->type==CFTYPE_WIKI ){ + char *zTag = mprintf("wiki-%s", p->zWikiTitle); int tagid = tag_findid(zTag, 1); int prior; char *zComment; int nWiki; char zLength[40]; - while( isspace(m.zWiki[0]) ) m.zWiki++; - nWiki = strlen(m.zWiki); + while( fossil_isspace(p->zWiki[0]) ) p->zWiki++; + nWiki = strlen(p->zWiki); sqlite3_snprintf(sizeof(zLength), zLength, "%d", nWiki); - tag_insert(zTag, 1, zLength, rid, m.rDate, rid); 
- free(zTag); + tag_insert(zTag, 1, zLength, rid, p->rDate, rid); + fossil_free(zTag); prior = db_int(0, "SELECT rid FROM tagxref" " WHERE tagid=%d AND mtime<%.17g" " ORDER BY mtime DESC", - tagid, m.rDate + tagid, p->rDate ); if( prior ){ content_deltify(prior, rid, 0); } if( nWiki>0 ){ - zComment = mprintf("Changes to wiki page [%h]", m.zWikiTitle); + zComment = mprintf("Changes to wiki page [%h]", p->zWikiTitle); }else{ - zComment = mprintf("Deleted wiki page [%h]", m.zWikiTitle); + zComment = mprintf("Deleted wiki page [%h]", p->zWikiTitle); } + search_doc_touch('w',rid,p->zWikiTitle); db_multi_exec( "REPLACE INTO event(type,mtime,objid,user,comment," " bgcolor,euser,ecomment)" "VALUES('w',%.17g,%d,%Q,%Q," " (SELECT value FROM tagxref WHERE tagid=%d AND rid=%d AND tagtype>1)," " (SELECT value FROM tagxref WHERE tagid=%d AND rid=%d)," " (SELECT value FROM tagxref WHERE tagid=%d AND rid=%d));", - m.rDate, rid, m.zUser, zComment, - TAG_BGCOLOR, rid, + p->rDate, rid, p->zUser, zComment, TAG_BGCOLOR, rid, TAG_USER, rid, TAG_COMMENT, rid ); - free(zComment); + fossil_free(zComment); + } + if( p->type==CFTYPE_EVENT ){ + char *zTag = mprintf("event-%s", p->zEventId); + int tagid = tag_findid(zTag, 1); + int prior, subsequent; + int nWiki; + char zLength[40]; + Stmt qatt; + while( fossil_isspace(p->zWiki[0]) ) p->zWiki++; + nWiki = strlen(p->zWiki); + sqlite3_snprintf(sizeof(zLength), zLength, "%d", nWiki); + tag_insert(zTag, 1, zLength, rid, p->rDate, rid); + fossil_free(zTag); + prior = db_int(0, + "SELECT rid FROM tagxref" + " WHERE tagid=%d AND mtime<%.17g AND rid!=%d" + " ORDER BY mtime DESC", + tagid, p->rDate, rid + ); + subsequent = db_int(0, + "SELECT rid FROM tagxref" + " WHERE tagid=%d AND mtime>=%.17g AND rid!=%d" + " ORDER BY mtime", + tagid, p->rDate, rid + ); + if( prior ){ + content_deltify(prior, rid, 0); + if( !subsequent ){ + db_multi_exec( + "DELETE FROM event" + " WHERE type='e'" + " AND tagid=%d" + " AND objid IN (SELECT rid FROM tagxref WHERE tagid=%d)", + tagid, tagid + ); + } + } + if( subsequent ){ + content_deltify(rid, subsequent, 0); + }else{ + search_doc_touch('e',rid,0); + db_multi_exec( + "REPLACE INTO event(type,mtime,objid,tagid,user,comment,bgcolor)" + "VALUES('e',%.17g,%d,%d,%Q,%Q," + " (SELECT value FROM tagxref WHERE tagid=%d AND rid=%d));", + p->rEventDate, rid, tagid, p->zUser, p->zComment, + TAG_BGCOLOR, rid + ); + } + /* Locate and update comment for any attachments */ + db_prepare(&qatt, + "SELECT attachid, src, target, filename FROM attachment" + " WHERE target=%Q", + p->zEventId + ); + while( db_step(&qatt)==SQLITE_ROW ){ + const char *zAttachId = db_column_text(&qatt, 0); + const char *zSrc = db_column_text(&qatt, 1); + const char *zTarget = db_column_text(&qatt, 2); + const char *zName = db_column_text(&qatt, 3); + const char isAdd = (zSrc && zSrc[0]) ? 
1 : 0; + char *zComment; + if( isAdd ){ + zComment = mprintf( + "Add attachment [/artifact/%!S|%h] to" + " tech note [/technote/%h|%.10h]", + zSrc, zName, zTarget, zTarget); + }else{ + zComment = mprintf("Delete attachment \"%h\" from tech note [%.10h]", + zName, zTarget); + } + db_multi_exec("UPDATE event SET comment=%Q, type='e'" + " WHERE objid=%Q", + zComment, zAttachId); + fossil_free(zComment); + } + db_finalize(&qatt); } - if( m.type==CFTYPE_TICKET ){ + if( p->type==CFTYPE_TICKET ){ char *zTag; - + Stmt qatt; assert( manifest_crosslink_busy==1 ); - zTag = mprintf("tkt-%s", m.zTicketUuid); - tag_insert(zTag, 1, 0, rid, m.rDate, rid); - free(zTag); + zTag = mprintf("tkt-%s", p->zTicketUuid); + tag_insert(zTag, 1, 0, rid, p->rDate, rid); + fossil_free(zTag); db_multi_exec("INSERT OR IGNORE INTO pending_tkt VALUES(%Q)", - m.zTicketUuid); + p->zTicketUuid); + /* Locate and update comment for any attachments */ + db_prepare(&qatt, + "SELECT attachid, src, target, filename FROM attachment" + " WHERE target=%Q", + p->zTicketUuid + ); + while( db_step(&qatt)==SQLITE_ROW ){ + const char *zAttachId = db_column_text(&qatt, 0); + const char *zSrc = db_column_text(&qatt, 1); + const char *zTarget = db_column_text(&qatt, 2); + const char *zName = db_column_text(&qatt, 3); + const char isAdd = (zSrc && zSrc[0]) ? 1 : 0; + char *zComment; + if( isAdd ){ + zComment = mprintf( + "Add attachment [/artifact/%!S|%h] to ticket [%!S|%S]", + zSrc, zName, zTarget, zTarget); + }else{ + zComment = mprintf("Delete attachment \"%h\" from ticket [%!S|%S]", + zName, zTarget, zTarget); + } + db_multi_exec("UPDATE event SET comment=%Q, type='t'" + " WHERE objid=%Q", + zComment, zAttachId); + fossil_free(zComment); + } + db_finalize(&qatt); } - if( m.type==CFTYPE_ATTACHMENT ){ + if( p->type==CFTYPE_ATTACHMENT ){ + char *zComment = 0; + const char isAdd = (p->zAttachSrc && p->zAttachSrc[0]) ? 1 : 0; + /* We assume that we're attaching to a wiki page until we + ** prove otherwise (which could on a later artifact if we + ** process the attachment artifact before the artifact to + ** which it is attached!) */ + char attachToType = 'w'; + if( fossil_is_uuid(p->zAttachTarget) ){ + if( db_exists("SELECT 1 FROM tag WHERE tagname='tkt-%q'", + p->zAttachTarget) + ){ + attachToType = 't'; /* Attaching to known ticket */ + }else if( db_exists("SELECT 1 FROM tag WHERE tagname='event-%q'", + p->zAttachTarget) + ){ + attachToType = 'e'; /* Attaching to known tech note */ + } + } db_multi_exec( "INSERT INTO attachment(attachid, mtime, src, target," - "filename, comment, user)" + "filename, comment, user)" "VALUES(%d,%.17g,%Q,%Q,%Q,%Q,%Q);", - rid, m.rDate, m.zAttachSrc, m.zAttachTarget, m.zAttachName, - (m.zComment ? m.zComment : ""), m.zUser + rid, p->rDate, p->zAttachSrc, p->zAttachTarget, p->zAttachName, + (p->zComment ? 
p->zComment : ""), p->zUser ); db_multi_exec( "UPDATE attachment SET isLatest = (mtime==" "(SELECT max(mtime) FROM attachment" " WHERE target=%Q AND filename=%Q))" " WHERE target=%Q AND filename=%Q", - m.zAttachTarget, m.zAttachName, - m.zAttachTarget, m.zAttachName + p->zAttachTarget, p->zAttachName, + p->zAttachTarget, p->zAttachName ); - if( strlen(m.zAttachTarget)!=UUID_SIZE - || !validate16(m.zAttachTarget, UUID_SIZE) - ){ - char *zComment; - if( m.zAttachSrc && m.zAttachSrc[0] ){ - zComment = mprintf("Add attachment \"%h\" to wiki page [%h]", - m.zAttachName, m.zAttachTarget); + if( 'w' == attachToType ){ + if( isAdd ){ + zComment = mprintf( + "Add attachment [/artifact/%!S|%h] to wiki page [%h]", + p->zAttachSrc, p->zAttachName, p->zAttachTarget); }else{ zComment = mprintf("Delete attachment \"%h\" from wiki page [%h]", - m.zAttachName, m.zAttachTarget); - } - db_multi_exec( - "REPLACE INTO event(type,mtime,objid,user,comment)" - "VALUES('w',%.17g,%d,%Q,%Q)", - m.rDate, rid, m.zUser, zComment - ); - free(zComment); - }else{ - char *zComment; - if( m.zAttachSrc && m.zAttachSrc[0] ){ - zComment = mprintf("Add attachment \"%h\" to ticket [%.10s]", - m.zAttachName, m.zAttachTarget); - }else{ - zComment = mprintf("Delete attachment \"%h\" from ticket [%.10s]", - m.zAttachName, m.zAttachTarget); - } - db_multi_exec( - "REPLACE INTO event(type,mtime,objid,user,comment)" - "VALUES('t',%.17g,%d,%Q,%Q)", - m.rDate, rid, m.zUser, zComment - ); - free(zComment); - } + p->zAttachName, p->zAttachTarget); + } + }else if( 'e' == attachToType ){ + if( isAdd ){ + zComment = mprintf( + "Add attachment [/artifact/%!S|%h] to tech note [/technote/%h|%.10h]", + p->zAttachSrc, p->zAttachName, p->zAttachTarget, p->zAttachTarget); + }else{ + zComment = mprintf("Delete attachment \"%h\" from tech note [%.10h]", + p->zAttachName, p->zAttachTarget); + } + }else{ + if( isAdd ){ + zComment = mprintf( + "Add attachment [/artifact/%!S|%h] to ticket [%!S|%S]", + p->zAttachSrc, p->zAttachName, p->zAttachTarget, p->zAttachTarget); + }else{ + zComment = mprintf("Delete attachment \"%h\" from ticket [%!S|%S]", + p->zAttachName, p->zAttachTarget, p->zAttachTarget); + } + } + db_multi_exec( + "REPLACE INTO event(type,mtime,objid,user,comment)" + "VALUES('%c',%.17g,%d,%Q,%Q)", + attachToType, p->rDate, rid, p->zUser, zComment + ); + fossil_free(zComment); + } + if( p->type==CFTYPE_CONTROL ){ + Blob comment; + int i; + const char *zName; + const char *zValue; + const char *zTagUuid; + int branchMove = 0; + blob_zero(&comment); + if( p->zComment ){ + blob_appendf(&comment, " %s.", p->zComment); + } + /* Next loop expects tags to be sorted on UUID, so sort it. 
*/ + qsort(p->aTag, p->nTag, sizeof(p->aTag[0]), tag_compare); + for(i=0; inTag; i++){ + zTagUuid = p->aTag[i].zUuid; + if( !zTagUuid ) continue; + if( i==0 || fossil_strcmp(zTagUuid, p->aTag[i-1].zUuid)!=0 ){ + blob_appendf(&comment, + " Edit [%!S|%S]:", + zTagUuid, zTagUuid); + branchMove = 0; + if( permitHooks && db_exists("SELECT 1 FROM event, blob" + " WHERE event.type='ci' AND event.objid=blob.rid" + " AND blob.uuid=%Q", zTagUuid) ){ + zScript = xfer_commit_code(); + zUuid = zTagUuid; + } + } + zName = p->aTag[i].zName; + zValue = p->aTag[i].zValue; + if( strcmp(zName, "*branch")==0 ){ + blob_appendf(&comment, + " Move to branch [/timeline?r=%h&nd&dp=%!S&unhide | %h].", + zValue, zTagUuid, zValue); + branchMove = 1; + continue; + }else if( strcmp(zName, "*bgcolor")==0 ){ + blob_appendf(&comment, + " Change branch background color to \"%h\".", zValue); + continue; + }else if( strcmp(zName, "+bgcolor")==0 ){ + blob_appendf(&comment, + " Change background color to \"%h\".", zValue); + continue; + }else if( strcmp(zName, "-bgcolor")==0 ){ + blob_appendf(&comment, " Cancel background color"); + }else if( strcmp(zName, "+comment")==0 ){ + blob_appendf(&comment, " Edit check-in comment."); + continue; + }else if( strcmp(zName, "+user")==0 ){ + blob_appendf(&comment, " Change user to \"%h\".", zValue); + continue; + }else if( strcmp(zName, "+date")==0 ){ + blob_appendf(&comment, " Timestamp %h.", zValue); + continue; + }else if( memcmp(zName, "-sym-",5)==0 ){ + if( !branchMove ){ + blob_appendf(&comment, " Cancel tag \"%h\"", &zName[5]); + } + }else if( memcmp(zName, "*sym-",5)==0 ){ + if( !branchMove ){ + blob_appendf(&comment, " Add propagating tag \"%h\"", &zName[5]); + } + }else if( memcmp(zName, "+sym-",5)==0 ){ + blob_appendf(&comment, " Add tag \"%h\"", &zName[5]); + }else if( strcmp(zName, "+closed")==0 ){ + blob_append(&comment, " Marked \"Closed\"", -1); + }else if( strcmp(zName, "-closed")==0 ){ + blob_append(&comment, " Removed the \"Closed\" mark", -1); + }else { + if( zName[0]=='-' ){ + blob_appendf(&comment, " Cancel \"%h\"", &zName[1]); + }else if( zName[0]=='+' ){ + blob_appendf(&comment, " Add \"%h\"", &zName[1]); + }else{ + blob_appendf(&comment, " Add propagating \"%h\"", &zName[1]); + } + if( zValue && zValue[0] ){ + blob_appendf(&comment, " with value \"%h\".", zValue); + }else{ + blob_appendf(&comment, "."); + } + continue; + } + if( zValue && zValue[0] ){ + blob_appendf(&comment, " with note \"%h\".", zValue); + }else{ + blob_appendf(&comment, "."); + } + } + /*blob_appendf(&comment, " [[/info/%S | details]]");*/ + if( blob_size(&comment)==0 ) blob_append(&comment, " ", 1); + db_multi_exec( + "REPLACE INTO event(type,mtime,objid,user,comment)" + "VALUES('g',%.17g,%d,%Q,%Q)", + p->rDate, rid, p->zUser, blob_str(&comment)+1 + ); + blob_reset(&comment); } db_end_transaction(0); - manifest_clear(&m); - return 1; + if( permitHooks ){ + rc = xfer_run_common_script(); + if( rc==TH_OK ){ + rc = xfer_run_script(zScript, zUuid, 0); + } + } + if( p->type==CFTYPE_MANIFEST ){ + manifest_cache_insert(p); + }else{ + manifest_destroy(p); + } + assert( blob_is_reset(pContent) ); + return ( rc!=TH_ERROR ); +} + +/* +** COMMAND: test-crosslink +** +** Usage: %fossil test-crosslink RECORDID +** +** Run the manifest_crosslink() routine on the artifact with the given +** record ID. This is typically done in the debugger. 
+*/ +void test_crosslink_cmd(void){ + int rid; + Blob content; + db_find_and_open_repository(0, 0); + if( g.argc!=3 ) usage("RECORDID"); + rid = name_to_rid(g.argv[2]); + content_get(rid, &content); + manifest_crosslink(rid, &content, MC_NONE); } ADDED src/markdown.c Index: src/markdown.c ================================================================== --- src/markdown.c +++ src/markdown.c @@ -0,0 +1,2246 @@ +/* +** Copyright (c) 2012 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code to parse a blob containing markdown text, +** using an external renderer. +*/ + +#include "config.h" +#include "markdown.h" + +#include +#include +#include + +#define MKD_LI_END 8 /* internal list flag */ + +/******************** + * TYPE DEFINITIONS * + ********************/ + +#if INTERFACE + +/* mkd_autolink -- type of autolink */ +enum mkd_autolink { + MKDA_NOT_AUTOLINK, /* used internally when it is not an autolink*/ + MKDA_NORMAL, /* normal http/http/ftp/etc link */ + MKDA_EXPLICIT_EMAIL, /* e-mail link with explit mailto: */ + MKDA_IMPLICIT_EMAIL /* e-mail link without mailto: */ +}; + +/* mkd_renderer -- functions for rendering parsed data */ +struct mkd_renderer { + /* document level callbacks */ + void (*prolog)(struct Blob *ob, void *opaque); + void (*epilog)(struct Blob *ob, void *opaque); + + /* block level callbacks - NULL skips the block */ + void (*blockcode)(struct Blob *ob, struct Blob *text, void *opaque); + void (*blockquote)(struct Blob *ob, struct Blob *text, void *opaque); + void (*blockhtml)(struct Blob *ob, struct Blob *text, void *opaque); + void (*header)(struct Blob *ob, struct Blob *text, + int level, void *opaque); + void (*hrule)(struct Blob *ob, void *opaque); + void (*list)(struct Blob *ob, struct Blob *text, int flags, void *opaque); + void (*listitem)(struct Blob *ob, struct Blob *text, + int flags, void *opaque); + void (*paragraph)(struct Blob *ob, struct Blob *text, void *opaque); + void (*table)(struct Blob *ob, struct Blob *head_row, struct Blob *rows, + void *opaque); + void (*table_cell)(struct Blob *ob, struct Blob *text, int flags, + void *opaque); + void (*table_row)(struct Blob *ob, struct Blob *cells, int flags, + void *opaque); + + /* span level callbacks - NULL or return 0 prints the span verbatim */ + int (*autolink)(struct Blob *ob, struct Blob *link, + enum mkd_autolink type, void *opaque); + int (*codespan)(struct Blob *ob, struct Blob *text, void *opaque); + int (*double_emphasis)(struct Blob *ob, struct Blob *text, + char c, void *opaque); + int (*emphasis)(struct Blob *ob, struct Blob *text, char c,void*opaque); + int (*image)(struct Blob *ob, struct Blob *link, struct Blob *title, + struct Blob *alt, void *opaque); + int (*linebreak)(struct Blob *ob, void *opaque); + int (*link)(struct Blob *ob, struct Blob *link, struct Blob *title, + struct Blob *content, void *opaque); + int (*raw_html_tag)(struct Blob *ob, struct Blob *tag, void *opaque); + int (*triple_emphasis)(struct Blob *ob, struct Blob *text, + 
char c, void *opaque); + + /* low level callbacks - NULL copies input directly into the output */ + void (*entity)(struct Blob *ob, struct Blob *entity, void *opaque); + void (*normal_text)(struct Blob *ob, struct Blob *text, void *opaque); + + /* renderer data */ + int max_work_stack; /* prevent arbitrary deep recursion, cf README */ + const char *emph_chars; /* chars that trigger emphasis rendering */ + void *opaque; /* opaque data send to every rendering callback */ +}; + + + +/********* + * FLAGS * + *********/ + +/* list/listitem flags */ +#define MKD_LIST_ORDERED 1 +#define MKD_LI_BLOCK 2 /*

    • containing block data */ + +/* table cell flags */ +#define MKD_CELL_ALIGN_DEFAULT 0 +#define MKD_CELL_ALIGN_LEFT 1 +#define MKD_CELL_ALIGN_RIGHT 2 +#define MKD_CELL_ALIGN_CENTER 3 /* LEFT | RIGHT */ +#define MKD_CELL_ALIGN_MASK 3 +#define MKD_CELL_HEAD 4 + + + +/********************** + * EXPORTED FUNCTIONS * + **********************/ + +/* markdown -- parses the input buffer and renders it into the output buffer */ +void markdown( + struct Blob *ob, + struct Blob *ib, + const struct mkd_renderer *rndr); + + +#endif /* INTERFACE */ + + +/*************** + * LOCAL TYPES * + ***************/ + +/* link_ref -- reference to a link */ +struct link_ref { + struct Blob id; + struct Blob link; + struct Blob title; +}; + + +/* char_trigger -- function pointer to render active chars */ +/* returns the number of chars taken care of */ +/* data is the pointer of the beginning of the span */ +/* offset is the number of valid chars before data */ +struct render; +typedef size_t (*char_trigger)( + struct Blob *ob, + struct render *rndr, + char *data, + size_t offset, + size_t size); + + +/* render -- structure containing one particular render */ +struct render { + struct mkd_renderer make; + struct Blob refs; + char_trigger active_char[256]; + int work_active; + struct Blob *work; +}; + + +/* html_tag -- structure for quick HTML tag search (inspired from discount) */ +struct html_tag { + const char *text; + int size; +}; + + + +/******************** + * GLOBAL VARIABLES * + ********************/ + +/* block_tags -- recognised block tags, sorted by cmp_html_tag */ +static const struct html_tag block_tags[] = { + { "p", 1 }, + { "dl", 2 }, + { "h1", 2 }, + { "h2", 2 }, + { "h3", 2 }, + { "h4", 2 }, + { "h5", 2 }, + { "h6", 2 }, + { "ol", 2 }, + { "ul", 2 }, + { "del", 3 }, + { "div", 3 }, + { "ins", 3 }, + { "pre", 3 }, + { "form", 4 }, + { "math", 4 }, + { "table", 5 }, + { "iframe", 6 }, + { "script", 6 }, + { "fieldset", 8 }, + { "noscript", 8 }, + { "blockquote", 10 } +}; + +#define INS_TAG (block_tags + 12) +#define DEL_TAG (block_tags + 10) + + + +/*************************** + * STATIC HELPER FUNCTIONS * + ***************************/ + +/* build_ref_id -- collapse whitespace from input text to make it a ref id */ +static int build_ref_id(struct Blob *id, const char *data, size_t size){ + size_t beg, i; + char *id_data; + + /* skip leading whitespace */ + while( size>0 && (data[0]==' ' || data[0]=='\t' || data[0]=='\n') ){ + data++; + size--; + } + + /* skip trailing whitespace */ + while( size>0 && (data[size-1]==' ' + || data[size-1]=='\t' + || data[size-1]=='\n') + ){ + size--; + } + if( size==0 ) return -1; + + /* making the ref id */ + i = 0; + blob_reset(id); + while( i='A' && id_data[i]<='Z' ) id_data[i] += 'a' - 'A'; + } + return 0; +} + + +/* cmp_link_ref -- comparison function for link_ref sorted arrays */ +static int cmp_link_ref(const void *key, const void *array_entry){ + struct link_ref *lr = (void *)array_entry; + return blob_compare((void *)key, &lr->id); +} + + +/* cmp_link_ref_sort -- comparison function for link_ref qsort */ +static int cmp_link_ref_sort(const void *a, const void *b){ + struct link_ref *lra = (void *)a; + struct link_ref *lrb = (void *)b; + return blob_compare(&lra->id, &lrb->id); +} + + +/* cmp_html_tag -- comparison function for bsearch() (stolen from discount) */ +static int cmp_html_tag(const void *a, const void *b){ + const struct html_tag *hta = a; + const struct html_tag *htb = b; + if( hta->size!=htb->size ) return hta->size-htb->size; + return 
fossil_strnicmp(hta->text, htb->text, hta->size); +} + + +/* find_block_tag -- returns the current block tag */ +static const struct html_tag *find_block_tag(const char *data, size_t size){ + size_t i = 0; + struct html_tag key; + + /* looking for the word end */ + while( i='0' && data[i]<='9') + || (data[i]>='A' && data[i]<='Z') + || (data[i]>='a' && data[i]<='z')) + ){ + i++; + } + if( i>=size ) return 0; + + /* binary search of the tag */ + key.text = data; + key.size = i; + return bsearch(&key, + block_tags, + (sizeof block_tags)/(sizeof block_tags[0]), + sizeof block_tags[0], + cmp_html_tag); +} + + +/* new_work_buffer -- get a new working buffer from the stack or create one */ +static struct Blob *new_work_buffer(struct render *rndr){ + struct Blob *ret = 0; + + if( rndr->work_active < rndr->make.max_work_stack ){ + ret = rndr->work + rndr->work_active; + rndr->work_active += 1; + blob_reset(ret); + } + return ret; +} + + +/* release_work_buffer -- release the given working buffer */ +static void release_work_buffer(struct render *rndr, struct Blob *buf){ + if( !buf ) return; + assert(rndr->work_active>0 && buf==(rndr->work+rndr->work_active-1)); + rndr->work_active -= 1; +} + + + +/**************************** + * INLINE PARSING FUNCTIONS * + ****************************/ + +/* is_mail_autolink -- looks for the address part of a mail autolink and '>' */ +/* this is less strict than the original markdown e-mail address matching */ +static size_t is_mail_autolink(char *data, size_t size){ + size_t i = 0, nb = 0; + /* address is assumed to be: [-@._a-zA-Z0-9]+ with exactly one '@' */ + while( i='a' && data[i]<='z') + || (data[i]>='A' && data[i]<='Z') + || (data[i]>='0' && data[i]<='9')) + ){ + if( data[i]=='@' ) nb++; + i++; + } + if( i>=size || data[i]!='>' || nb!=1 ) return 0; + return i+1; +} + + +/* tag_length -- returns the length of the given tag, or 0 is it's not valid */ +static size_t tag_length(char *data, size_t size, enum mkd_autolink *autolink){ + size_t i, j; + + /* a valid tag can't be shorter than 3 chars */ + if( size<3 ) return 0; + + /* begins with a '<' optionally followed by '/', followed by letter */ + if( data[0]!='<' ) return 0; + i = (data[1]=='/') ? 2 : 1; + if( (data[i]<'a' || data[i]>'z') && (data[i]<'A' || data[i]>'Z') ){ + return 0; + } + + /* scheme test */ + *autolink = MKDA_NOT_AUTOLINK; + if( size>6 + && fossil_strnicmp(data+1, "http", 4)==0 + && (data[5]==':' + || ((data[5]=='s' || data[5]=='S') && data[6]==':')) + ){ + i = (data[5]==':') ? 6 : 7; + *autolink = MKDA_NORMAL; + }else if( size>5 && fossil_strnicmp(data+1, "ftp:", 4)==0 ){ + i = 5; + *autolink = MKDA_NORMAL; + }else if( size>7 && fossil_strnicmp(data+1, "mailto:", 7)==0 ){ + i = 8; + /* not changing *autolink to go to the address test */ + } + + /* completing autolink test: no whitespace or ' or " */ + if( i>=size || i=='>' ){ + *autolink = MKDA_NOT_AUTOLINK; + }else if( *autolink ){ + j = i; + while( i=size ) return 0; + if( i>j && data[i]=='>' ) return i+1; + /* one of the forbidden chars has been found */ + *autolink = MKDA_NOT_AUTOLINK; + }else if( (j = is_mail_autolink(data+i, size-i))!=0 ){ + *autolink = (i==8) ? 
MKDA_EXPLICIT_EMAIL : MKDA_IMPLICIT_EMAIL; + return i+j; + } + + /* looking for sometinhg looking like a tag end */ + while( i=size ) return 0; + return i+1; +} + + +/* parse_inline -- parses inline markdown elements */ +static void parse_inline( + struct Blob *ob, + struct render *rndr, + char *data, + size_t size +){ + size_t i = 0, end = 0; + char_trigger action = 0; + struct Blob work = BLOB_INITIALIZER; + + while( iactive_char[(unsigned char)data[end]])==0 + ){ + end++; + } + if( end>i ){ + if( rndr->make.normal_text ){ + blob_init(&work, data+i, end-i); + rndr->make.normal_text(ob, &work, rndr->make.opaque); + }else{ + blob_append(ob, data+i, end-i); + } + } + if( end>=size ) break; + i = end; + + /* calling the trigger */ + end = action(ob, rndr, data+i, i, size-i); + if( !end ){ + /* no action from the callback */ + end = i+1; + }else{ + i += end; + end = i; + } + } +} + + +/* find_emph_char -- looks for the next emph char, skipping other constructs */ +static size_t find_emph_char(char *data, size_t size, char c){ + size_t i = 1; + + while( i=size ) return 0; + if( data[i]==c ) return i; + + /* not counting escaped chars */ + if( i && data[i-1]=='\\' ){ + i++; + continue; + } + + /* skipping a code span */ + if( data[i]=='`' ){ + size_t span_nb = 0, bt; + size_t tmp_i = 0; + + /* counting the number of opening backticks */ + while( i=size ) return 0; + + /* finding the matching closing sequence */ + bt = 0; + while( i=size ) return tmp_i; + i++; + + /* skipping a link */ + }else if( data[i]=='[' ){ + size_t tmp_i = 0; + char cc; + i++; + while( i=size ) return tmp_i; + if( data[i]!='[' && data[i]!='(' ){ /* not a link*/ + if( tmp_i ) return tmp_i; else continue; + } + cc = data[i]; + i++; + while( i=size ) return tmp_i; + i++; + } + } + return 0; +} + + +/* parse_emph1 -- parsing single emphase */ +/* closed by a symbol not preceded by whitespace and not followed by symbol */ +static size_t parse_emph1( + struct Blob *ob, + struct render *rndr, + char *data, + size_t size, + char c +){ + size_t i = 0, len; + struct Blob *work = 0; + int r; + + if( !rndr->make.emphasis ) return 0; + + /* skipping one symbol if coming from emph3 */ + if( size>1 && data[0]==c && data[1]==c ) i = 1; + + while( i=size ) return 0; + + if( i+1make.emphasis(ob, work, c, rndr->make.opaque); + release_work_buffer(rndr, work); + return r ? i+1 : 0; + } + } + return 0; +} + + +/* parse_emph2 -- parsing single emphase */ +static size_t parse_emph2( + struct Blob *ob, + struct render *rndr, + char *data, + size_t size, + char c +){ + size_t i = 0, len; + struct Blob *work = 0; + int r; + + if( !rndr->make.double_emphasis ) return 0; + + while( imake.double_emphasis(ob, work, c, rndr->make.opaque); + release_work_buffer(rndr, work); + return r ? i+2 : 0; + } + i++; + } + return 0; +} + + +/* parse_emph3 -- parsing single emphase */ +/* finds the first closing tag, and delegates to the other emph */ +static size_t parse_emph3( + struct Blob *ob, + struct render *rndr, + char *data, + size_t size, + char c +){ + size_t i = 0, len; + int r; + + while( imake.triple_emphasis + ){ + /* triple symbol found */ + struct Blob *work = new_work_buffer(rndr); + if( !work ) return 0; + parse_inline(work, rndr, data, i); + r = rndr->make.triple_emphasis(ob, work, c, rndr->make.opaque); + release_work_buffer(rndr, work); + return r ? 
i+3 : 0; + }else if( i+12 && data[1]!=c ){ + /* whitespace cannot follow an opening emphasis */ + if( data[1]==' ' + || data[1]=='\t' + || data[1]=='\n' + || (ret = parse_emph1(ob, rndr, data+1, size-1, c))==0 + ){ + return 0; + } + return ret+1; + } + + if( size>3 && data[1]==c && data[2]!=c ){ + if( data[2]==' ' + || data[2]=='\t' + || data[2]=='\n' + || (ret = parse_emph2(ob, rndr, data+2, size-2, c))==0 + ){ + return 0; + } + return ret+2; + } + + if( size>4 && data[1]==c && data[2]==c && data[3]!=c ){ + if( data[3]==' ' + || data[3]=='\t' + || data[3]=='\n' + || (ret = parse_emph3(ob, rndr, data+3, size-3, c))==0 + ){ + return 0; + } + return ret+3; + } + return 0; +} + + +/* char_linebreak -- '\n' preceded by two spaces (assuming linebreak != 0) */ +static size_t char_linebreak( + struct Blob *ob, + struct render *rndr, + char *data, + size_t offset, + size_t size +){ + if( offset<2 || data[-1]!=' ' || data[-2]!=' ' ) return 0; + /* removing the last space from ob and rendering */ + if( blob_size(ob)>0 && blob_buffer(ob)[blob_size(ob)-1]==' ' ) ob->nUsed--; + return rndr->make.linebreak(ob, rndr->make.opaque) ? 1 : 0; +} + + +/* char_codespan -- '`' parsing a code span (assuming codespan != 0) */ +static size_t char_codespan( + struct Blob *ob, + struct render *rndr, + char *data, + size_t offset, + size_t size +){ + size_t end, nb = 0, i, f_begin, f_end; + + /* counting the number of backticks in the delimiter */ + while( nb=size ) return 0; /* no matching delimiter */ + + /* trimming outside whitespaces */ + f_begin = nb; + while( f_beginnb && (data[f_end-1]==' ' || data[f_end-1]=='\t') ){ f_end--; } + + /* real code span */ + if( f_beginmake.codespan(ob, &work, rndr->make.opaque) ) end = 0; + }else{ + if( !rndr->make.codespan(ob, 0, rndr->make.opaque) ) end = 0; + } + return end; +} + + +/* char_escape -- '\\' backslash escape */ +static size_t char_escape( + struct Blob *ob, + struct render *rndr, + char *data, + size_t offset, + size_t size +){ + struct Blob work = BLOB_INITIALIZER; + if( size>1 ){ + if( rndr->make.normal_text ){ + blob_init(&work, data+1,1); + rndr->make.normal_text(ob, &work, rndr->make.opaque); + }else{ + blob_append(ob, data+1, 1); + } + } + return 2; +} + + +/* char_entity -- '&' escaped when it doesn't belong to an entity */ +/* valid entities are assumed to be anything mathing &#?[A-Za-z0-9]+; */ +static size_t char_entity( + struct Blob *ob, + struct render *rndr, + char *data, + size_t offset, + size_t size +){ + size_t end = 1; + struct Blob work = BLOB_INITIALIZER; + if( end='0' && data[end]<='9') + || (data[end]>='a' && data[end]<='z') + || (data[end]>='A' && data[end]<='Z')) + ){ + end++; + } + if( endmake.entity ){ + blob_init(&work, data, end); + rndr->make.entity(ob, &work, rndr->make.opaque); + }else{ + blob_append(ob, data, end); + } + return end; +} + + +/* char_langle_tag -- '<' when tags or autolinks are allowed */ +static size_t char_langle_tag( + struct Blob *ob, + struct render *rndr, + char *data, + size_t offset, + size_t size +){ + enum mkd_autolink altype = MKDA_NOT_AUTOLINK; + size_t end = tag_length(data, size, &altype); + struct Blob work = BLOB_INITIALIZER; + int ret = 0; + if( end ){ + if( rndr->make.autolink && altype!=MKDA_NOT_AUTOLINK ){ + blob_init(&work, data+1, end-2); + ret = rndr->make.autolink(ob, &work, altype, rndr->make.opaque); + }else if( rndr->make.raw_html_tag ){ + blob_init(&work, data, end); + ret = rndr->make.raw_html_tag(ob, &work, rndr->make.opaque); + } + } + + if( !ret ){ + return 0; + }else{ + return end; 
+ } +} + + +/* get_link_inline -- extract inline-style link and title from +** parenthesed data +*/ +static int get_link_inline( + struct Blob *link, + struct Blob *title, + char *data, + size_t size +){ + size_t i = 0, mark; + size_t link_b, link_e; + size_t title_b = 0, title_e = 0; + + /* skipping initial whitespace */ + while( ititle_b + && (data[title_e]==' ' + || data[title_e]=='\t' + || data[title_e]=='\n') + ){ + title_e--; + } + + /* checking for closing quote presence */ + if (data[title_e] != '\'' && data[title_e] != '"') { + title_b = title_e = 0; + link_e = i; + } + } + + /* remove whitespace at the end of the link */ + while( link_e>link_b + && (data[link_e-1]==' ' + || data[link_e-1]=='\t' + || data[link_e-1]=='\n') + ){ + link_e--; + } + + /* remove optional angle brackets around the link */ + if( data[link_b]=='<' ) link_b += 1; + if( data[link_e-1]=='>' ) link_e -= 1; + + /* escape backslashed character from link */ + blob_reset(link); + i = link_b; + while( ititle_b ) blob_append(title, data+title_b, title_e-title_b); + + /* this function always succeed */ + return 0; +} + + +/* get_link_ref -- extract referenced link and title from id */ +static int get_link_ref( + struct render *rndr, + struct Blob *link, + struct Blob *title, + char *data, + size_t size +){ + struct link_ref *lr; + + /* find the link from its id (stored temporarily in link) */ + blob_reset(link); + if( build_ref_id(link, data, size)<0 ) return -1; + lr = bsearch(link, + blob_buffer(&rndr->refs), + blob_size(&rndr->refs)/sizeof(struct link_ref), + sizeof (struct link_ref), + cmp_link_ref); + if( !lr ) return -1; + + /* fill the output buffers */ + blob_reset(link); + blob_reset(title); + blob_append(link, blob_buffer(&lr->link), blob_size(&lr->link)); + blob_append(title, blob_buffer(&lr->title), blob_size(&lr->title)); + return 0; +} + + +/* char_link -- '[': parsing a link or an image */ +static size_t char_link( + struct Blob *ob, + struct render *rndr, + char *data, + size_t offset, + size_t size +){ + int is_img = (offset && data[-1] == '!'), level; + size_t i = 1, txt_e; + struct Blob *content = 0; + struct Blob *link = 0; + struct Blob *title = 0; + int ret; + + /* checking whether the correct renderer exists */ + if( (is_img && !rndr->make.image) || (!is_img && !rndr->make.link) ){ + return 0; + } + + /* looking for the matching closing bracket */ + for(level=1; i=size ) return 0; + txt_e = i; + i++; + + /* skip any amount of whitespace or newline */ + /* (this is much more laxist than original markdown syntax) */ + while( i=size + || get_link_inline(link, title, data+i+1, span_end-(i+1))<0 + ){ + goto char_link_cleanup; + } + + i = span_end+1; + + /* reference style link */ + }else if( i=size ) goto char_link_cleanup; + + if( i+1==id_end ){ + /* implicit id - use the contents */ + id_data = data+1; + id_size = txt_e-1; + }else{ + /* explici id - between brackets */ + id_data = data+i+1; + id_size = id_end-(i+1); + } + + if( get_link_ref(rndr, link, title, id_data, id_size)<0 ){ + goto char_link_cleanup; + } + + i = id_end+1; + + /* shortcut reference style link */ + }else{ + if( get_link_ref(rndr, link, title, data+1, txt_e-1)<0 ){ + goto char_link_cleanup; + } + + /* rewinding the whitespace */ + i = txt_e+1; + } + + /* building content: img alt is escaped, link content is parsed */ + if( txt_e>1 ){ + if( is_img ) blob_append(content, data+1, txt_e-1); + else parse_inline(content, rndr, data+1, txt_e-1); + } + + /* calling the relevant rendering function */ + if( is_img ){ + if( 
blob_size(ob)>0 && blob_buffer(ob)[blob_size(ob)-1]=='!' ) ob->nUsed--; + ret = rndr->make.image(ob, link, title, content, rndr->make.opaque); + }else{ + ret = rndr->make.link(ob, link, title, content, rndr->make.opaque); + } + + /* cleanup */ +char_link_cleanup: + release_work_buffer(rndr, title); + release_work_buffer(rndr, link); + release_work_buffer(rndr, content); + return ret ? i : 0; +} + + + +/********************************* + * BLOCK-LEVEL PARSING FUNCTIONS * + *********************************/ + +/* is_empty -- returns the line length when it is empty, 0 otherwise */ +static size_t is_empty(const char *data, size_t size){ + size_t i; + for(i=0; i=size || (data[i]!='*' && data[i]!='-' && data[i]!='_') ) return 0; + c = data[i]; + + /* the whole line must be the char or whitespace */ + while (i < size && data[i] != '\n') { + if( data[i]==c ){ + n += 1; + }else if( data[i]!=' ' && data[i]!='\t' ){ + return 0; + } + i++; + } + + return n>=3; +} + + +/* is_headerline -- returns whether the line is a setext-style hdr underline */ +static int is_headerline(char *data, size_t size){ + size_t i = 0; + + /* test of level 1 header */ + if( data[i]=='=' ){ + for(i=1; i=size || data[i]=='\n') ? 1 : 0; + } + + /* test of level 2 header */ + if( data[i]=='-' ){ + for(i=1; i=size || data[i]=='\n') ? 2 : 0; + } + + return 0; +} + + +/* is_table_sep -- returns wether there is a table separator at the given pos */ +static int is_table_sep(char *data, size_t pos){ + return data[pos]=='|' && (pos==0 || data[pos-1]!='\\'); +} + + +/* is_tableline -- returns the number of column tables in the given line */ +static int is_tableline(char *data, size_t size){ + size_t i = 0; + int n_sep = 0, outer_sep = 0; + + /* skip initial blanks */ + while( i0) ? (n_sep-outer_sep+1) : 0; +} + + +/* prefix_quote -- returns blockquote prefix length */ +static size_t prefix_quote(char *data, size_t size){ + size_t i = 0; + if( i' ){ + if( i+10 && data[0]=='\t' ) return 1; + if( size>3 && data[0]==' ' && data[1]==' ' && data[2]==' ' && data[3]==' ' ){ + return 4; + } + return 0; +} + +/* prefix_oli -- returns ordered list item prefix */ +static size_t prefix_oli(char *data, size_t size){ + size_t i = 0; + if( i=size || data[i]<'0' || data[i]>'9' ) return 0; + while( i='0' && data[i]<='9' ){ i++; } + + if( i+1>=size + || data[i]!='.' + || (data[i+1]!=' ' && data[i+1]!='\t') + ){ + return 0; + } + i = i+2; + while( i=size + || (data[i]!='*' && data[i]!='+' && data[i]!='-') + || (data[i+1]!=' ' && data[i+1]!='\t') + ){ + return 0; + } + i = i+2; + while( i=size + || (prefix_quote(data+end, size-end)==0 + && !is_empty(data+end, size-end))) + ){ + /* empty line followed by non-quote line */ + break; + } + if( begmake.blockquote ){ + struct Blob fallback = BLOB_INITIALIZER; + if( out ){ + parse_block(out, rndr, work_data, work_size); + }else{ + blob_init(&fallback, work_data, work_size); + } + rndr->make.blockquote(ob, out ? 
out : &fallback, rndr->make.opaque); + } + release_work_buffer(rndr, out); + return end; +} + + +/* parse_blockquote -- hanldes parsing of a regular paragraph */ +static size_t parse_paragraph( + struct Blob *ob, + struct render *rndr, + char *data, + size_t size +){ + size_t i = 0, end = 0; + int level = 0; + char *work_data = data; + size_t work_size = 0; + struct Blob fallback = BLOB_INITIALIZER; + + while( imake.paragraph ){ + struct Blob *tmp = new_work_buffer(rndr); + if( tmp ){ + parse_inline(tmp, rndr, work_data, work_size); + }else{ + blob_init(&fallback, work_data, work_size); + } + rndr->make.paragraph(ob, tmp ? tmp : &fallback, rndr->make.opaque); + release_work_buffer(rndr, tmp); + } + }else{ + if( work_size ){ + size_t beg; + i = work_size; + work_size -= 1; + while( work_size && data[work_size]!='\n' ){ work_size--; } + beg = work_size+1; + while( work_size && data[work_size-1]=='\n'){ work_size--; } + if( work_size ){ + struct Blob *tmp = new_work_buffer(rndr); + if( tmp ){ + parse_inline(tmp, rndr, work_data, work_size); + }else{ + blob_init (&fallback, work_data, work_size); + } + if( rndr->make.paragraph ){ + rndr->make.paragraph(ob, tmp ? tmp : &fallback, rndr->make.opaque); + } + release_work_buffer(rndr, tmp); + work_data += beg; + work_size = i - beg; + }else{ + work_size = i; + } + } + + if( rndr->make.header ){ + struct Blob *span = new_work_buffer(rndr); + if( span ){ + parse_inline(span, rndr, work_data, work_size); + rndr->make.header(ob, span, level, rndr->make.opaque); + }else{ + blob_init(&fallback, work_data, work_size); + rndr->make.header(ob, &fallback, level, rndr->make.opaque); + } + release_work_buffer(rndr, span); + } + } + return end; +} + + +/* parse_blockquote -- hanldes parsing of a block-level code fragment */ +static size_t parse_blockcode( + struct Blob *ob, + struct render *rndr, + char *data, + size_t size +){ + size_t beg, end, pre; + struct Blob *work = new_work_buffer(rndr); + if( !work ) work = ob; + + beg = 0; + while( beg0 && blob_buffer(work)[end-1]=='\n' ){ end--; } + work->nUsed = end; + blob_append(work, "\n", 1); + + if( work!=ob ){ + if( rndr->make.blockcode ){ + rndr->make.blockcode(ob, work, rndr->make.opaque); + } + release_work_buffer(rndr, work); + } + return beg; +} + + +/* parse_listitem -- parsing of a single list item */ +/* assuming initial prefix is already removed */ +static size_t parse_listitem( + struct Blob *ob, + struct render *rndr, + char *data, + size_t size, + int *flags +){ + struct Blob fallback = BLOB_INITIALIZER; + struct Blob *work = 0, *inter = 0; + size_t beg = 0, end, pre, sublist = 0, orgpre = 0, i; + int in_empty = 0, has_inside_empty = 0; + + /* keeping track of the first indentation prefix */ + if( size>1 && data[0]==' ' ){ + orgpre = 1; + if( size>2 && data[1]==' ' ){ + orgpre = 2; + if( size>3 && data[2]==' ' ){ + orgpre = 3; + } + } + } + beg = prefix_uli(data, size); + if( !beg ) beg = prefix_oli(data, size); + if( !beg ) return 0; + /* skipping to the beginning of the following line */ + end = beg; + while( end1 && data[beg]==' ' ){ + i = 1; + if( end-beg>2 && data[beg+1]==' ' ){ + i = 2; + if( end-beg>3 && data[beg+2]==' ' ){ + i = 3; + if( end-beg>3 && data[beg+3]==' ' ){ + i = 4; + } + } + } + } + pre = i; + if( data[beg]=='\t' ){ i = 1; pre = 8; } + + /* checking for a new item */ + if( (prefix_uli(data+beg+i, end-beg-i) && !is_hrule(data+beg+i, end-beg-i)) + || prefix_oli(data+beg+i, end-beg-i) + ){ + if( in_empty ) has_inside_empty = 1; + if( pre == orgpre ){ /* the following item must 
have */ + break; /* the same indentation */ + } + if( !sublist ) sublist = blob_size(work); + + /* joining only indented stuff after empty lines */ + }else if( in_empty && i<4 && data[beg]!='\t' ){ + *flags |= MKD_LI_END; + break; + }else if( in_empty ){ + blob_append(work, "\n", 1); + has_inside_empty = 1; + } + in_empty = 0; + + /* adding the line without prefix into the working buffer */ + blob_append(work, data+beg+i, end-beg-i); + beg = end; + } + + /* non-recursive fallback when working buffer stack is full */ + if( !inter ){ + if( rndr->make.listitem ){ + rndr->make.listitem(ob, work, *flags, rndr->make.opaque); + } + if( work!=&fallback ) release_work_buffer(rndr, work); + blob_reset(&fallback); + return beg; + } + + /* render of li contents */ + if( has_inside_empty ) *flags |= MKD_LI_BLOCK; + if( *flags & MKD_LI_BLOCK ){ + /* intermediate render of block li */ + if( sublist && sublistmake.listitem ){ + rndr->make.listitem(ob, inter, *flags, rndr->make.opaque); + } + release_work_buffer(rndr, inter); + if( work!=&fallback ) release_work_buffer(rndr, work); + blob_reset(&fallback); + return beg; +} + + +/* parse_list -- parsing ordered or unordered list block */ +static size_t parse_list( + struct Blob *ob, + struct render *rndr, + char *data, + size_t size, + int flags +){ + struct Blob fallback = BLOB_INITIALIZER; + struct Blob *work = new_work_buffer(rndr); + size_t i = 0, j; + if( !work ) work = &fallback; + + while( imake.list ) rndr->make.list(ob, work, flags, rndr->make.opaque); + if( work!=&fallback ) release_work_buffer(rndr, work); + blob_reset(&fallback); + return i; +} + + +/* parse_atxheader -- parsing of atx-style headers */ +static size_t parse_atxheader( + struct Blob *ob, + struct render *rndr, + char *data, + size_t size +){ + int level = 0; + size_t i, end, skip, span_beg, span_size; + + if( !size || data[0]!='#' ) return 0; + + while( levelmake.header ){ + struct Blob fallback = BLOB_INITIALIZER; + struct Blob *span = new_work_buffer(rndr); + + if( span ){ + parse_inline(span, rndr, data+span_beg, span_size); + }else{ + blob_init(&fallback, data+span_beg, span_size); + } + rndr->make.header(ob, span ? span : &fallback, level, rndr->make.opaque); + release_work_buffer(rndr, span); + } + return skip; +} + + +/* htmlblock_end -- checking end of HTML block : [ \t]*\n[ \t*]\n */ +/* returns the length on match, 0 otherwise */ +static size_t htmlblock_end( + const struct html_tag *tag, + const char *data, + size_t size +){ + size_t i, w; + + /* assuming data[0]=='<' && data[1]=='/' already tested */ + + /* checking tag is a match */ + if( (tag->size+3)>=size + || fossil_strnicmp(data+2, tag->text, tag->size) + || data[tag->size+2]!='>' + ){ + return 0; + } + + /* checking white lines */ + i = tag->size + 3; + w = 0; + if( i5 && data[1]=='!' 
&& data[2]=='-' && data[3]=='-' ){ + i = 5; + while( i') ){ + i++; + } + i++; + if( imake.blockhtml ) return work_size; + blob_init(&work, data, work_size); + rndr->make.blockhtml(ob, &work, rndr->make.opaque); + return work_size; + } + } + } + + /* HR, which is the only self-closing block tag considered */ + if( size>4 + && (data[1]=='h' || data[1]=='H') + && (data[2]=='r' || data[2]=='R') + ){ + i = 3; + while( imake.blockhtml ) return work_size; + blob_init(&work, data, work_size); + rndr->make.blockhtml(ob, &work, rndr->make.opaque); + return work_size; + } + } + } + + /* no special case recognised */ + return 0; + } + + /* looking for an unindented matching closing tag */ + /* followed by a blank line */ + i = 1; + found = 0; +#if 0 + while( isize)>=size ) break; + j = htmlblock_end(curtag, data+i-1, size-i+1); + if (j) { + i += j-1; + found = 1; + break; + } + } +#endif + + /* if not found, trying a second pass looking for indented match */ + /* but not if tag is "ins" or "del" (following original Markdown.pl) */ + if( !found && curtag!=INS_TAG && curtag!=DEL_TAG ){ + i = 1; + while( isize)>=size ) break; + j = htmlblock_end(curtag, data+i-1, size-i+1); + if (j) { + i += j-1; + found = 1; + break; + } + } + } + + if( !found ) return 0; + + /* the end of the block has been found */ + blob_init(&work, data, i); + if( rndr->make.blockhtml ){ + rndr->make.blockhtml(ob, &work, rndr->make.opaque); + } + return i; +} + + +/* parse_table_cell -- parse a cell inside a table */ +static void parse_table_cell( + struct Blob *ob, /* output blob */ + struct render *rndr, /* renderer description */ + char *data, /* input text */ + size_t size, /* input text size */ + int flags /* table flags */ +){ + struct Blob fallback = BLOB_INITIALIZER; + struct Blob *span = new_work_buffer(rndr); + + if( span ){ + parse_inline(span, rndr, data, size); + }else{ + blob_init(&fallback, data, size); + } + rndr->make.table_cell(ob, span ? span : &fallback, flags, rndr->make.opaque); + release_work_buffer(rndr, span); +} + + +/* parse_table_row -- parse an input line into a table row */ +static size_t parse_table_row( + struct Blob *ob, /* output blob for rendering */ + struct render *rndr, /* renderer description */ + char *data, /* input text */ + size_t size, /* input text size */ + int *aligns, /* array of default alignment for columns */ + size_t align_size, /* number of columns with default alignment */ + int flags /* table flags */ +){ + size_t i = 0, col = 0; + size_t beg, end, total = 0; + struct Blob *cells = new_work_buffer(rndr); + int align; + + /* skip leading blanks and sperator */ + while( ibeg && data[end-1]==':' ){ + align |= MKD_CELL_ALIGN_RIGHT; + end--; + } + + /* remove trailing blanks */ + while( end>beg && (data[end-1]==' ' || data[end-1]=='\t') ){ end--; } + + /* skip the last cell if it was only blanks */ + /* (because it is only the optional end separator) */ + if( total && end<=beg ) continue; + + /* fallback on default alignment if not explicit */ + if( align==0 && aligns && colmake.table_row(ob, cells, flags, rndr->make.opaque); + }else{ + struct Blob fallback = BLOB_INITIALIZER; + blob_init(&fallback, data, total ? total : size); + rndr->make.table_row(ob, &fallback, flags, rndr->make.opaque); + } + release_work_buffer(rndr, cells); + return total ? 
total : size; +} + + +/* parse_table -- parsing of a whole table */ +static size_t parse_table( + struct Blob *ob, + struct render *rndr, + char *data, + size_t size +){ + size_t i = 0, head_end, col; + size_t align_size = 0; + int *aligns = 0; + struct Blob fallback = BLOB_INITIALIZER; + struct Blob *head = 0; + struct Blob *rows = new_work_buffer(rndr); + if( !rows ) rows = &fallback; + + /* skip the first (presumably header) line */ + while( i=size ){ + parse_table_row(rows, rndr, data, size, 0, 0, 0); + rndr->make.table(ob, 0, rows, rndr->make.opaque); + if( rows!=&fallback ) release_work_buffer(rndr, rows); + return i; + } + + /* attempt to parse a table rule, i.e. blanks, dash, colons and sep */ + i++; + col = 0; + while( imake.table(ob, head, rows, rndr->make.opaque); + + /* cleanup */ + if( head ) release_work_buffer(rndr, head); + if( rows!=&fallback ) release_work_buffer(rndr, rows); + free(aligns); + return i; +} + + +/* parse_block -- parsing of one block, returning next char to parse */ +static void parse_block( + struct Blob *ob, /* output blob */ + struct render *rndr, /* renderer internal state */ + char *data, /* input text */ + size_t size /* input text size */ +){ + size_t beg, end, i; + char *txt_data; + int has_table = (rndr->make.table + && rndr->make.table_row + && rndr->make.table_cell); + + beg = 0; + while( begmake.blockhtml + && (i = parse_htmlblock(ob, rndr, txt_data, end))!=0 + ){ + beg += i; + }else if( (i=is_empty(txt_data, end))!=0 ){ + beg += i; + }else if( is_hrule(txt_data, end) ){ + if( rndr->make.hrule ) rndr->make.hrule(ob, rndr->make.opaque); + while( beg=end ) return 0; + if( data[beg]==' ' ){ + i = 1; + if( data[beg+1]==' ' ){ + i = 2; + if( data[beg+2]==' ' ){ + i = 3; + if( data[beg+3]==' ' ) return 0; + } + } + } + i += beg; + + /* id part: anything but a newline between brackets */ + if( data[i]!='[' ) return 0; + i++; + id_offset = i; + while( i=end || data[i]!=']' ) return 0; + id_end = i; + + /* spacer: colon (space | tab)* newline? 
(space | tab)* */ + i++; + if( i>=end || data[i]!=':' ) return 0; + i++; + while( i=end ) return 0; + + /* link: whitespace-free sequence, optionally between angle brackets */ + if( data[i]=='<' ) i++; + link_offset = i; + while( i' ) link_end = i-1; else link_end = i; + + /* optional spacer: (space | tab)* (newline | '\'' | '"' | '(' ) */ + while( i=end || data[i]=='\r' || data[i]=='\n' ) line_end = i; + if( i+1title_offset && (data[i]==' ' || data[i]=='\t') ){ i--; } + if( i>title_offset && (data[i]=='\'' || data[i]=='"' || data[i]==')') ){ + line_end = title_end; + title_end = i; + } + } + if( !line_end ) return 0; /* garbage after the link */ + + /* a valid ref has been found, filling-in return structures */ + if( last ) *last = line_end; + if( !refs ) return 1; + if( build_ref_id(&lr.id, data+id_offset, id_end-id_offset)<0 ) return 0; + blob_append(&lr.link, data+link_offset, link_end-link_offset); + if( title_end>title_offset ){ + blob_append(&lr.title, data+title_offset, title_end-title_offset); + } + blob_append(refs, (char *)&lr, sizeof lr); + return 1; +} + + + +/********************** + * EXPORTED FUNCTIONS * + **********************/ + +/* markdown -- parses the input buffer and renders it into the output buffer */ +void markdown( + struct Blob *ob, /* output blob for rendered text */ + struct Blob *ib, /* input blob in markdown */ + const struct mkd_renderer *rndrer /* renderer descriptor (callbacks) */ +){ + struct link_ref *lr; + struct Blob text = BLOB_INITIALIZER; + size_t i, beg, end = 0; + struct render rndr; + char *ib_data; + + /* filling the render structure */ + if( !rndrer ) return; + rndr.make = *rndrer; + if( rndr.make.max_work_stack<1 ) rndr.make.max_work_stack = 1; + rndr.work_active = 0; + rndr.work = fossil_malloc(rndr.make.max_work_stack * sizeof *rndr.work); + for(i=0; ibeg ) blob_append(&text, ib_data + beg, end - beg); + while( end'. + + * **Ordered list** items are prefixed by a number and a period. **Unordered list** items + are prefixed by a hyphen, asterisk or plus sign. Prefix and item text are separated by + whitespace. + + * **Code blocks** are formed by lines of text (possibly including empty lines) prefixed by + at least 4 spaces or a tab. + + * A **horizontal rule** is a line consisting of 3 or more asterisks, hyphens or underscores, + with optional whitespace between them. + + - Span elements + + * 3 types of **links** exist: + + - **automatic links** are URLs or email addresses enclosed in angle brackets + ('<' and '>'), and are displayed as such. + + - **inline links** consist of the displayed link text in square brackets ('[' and ']'), + followed by the link target in parentheses. + + - **reference links** separate _link instance_ from _link definition_. A link instance + consists of the displayed link text in square brackets, followed by a link definition name + in square brackets. + The corresponding link definition can occur anywhere on the page, and consists + of the link definition name in square brackets followed by a colon, whitespace and the + link target. + + * **Emphasis** can be given by wrapping text in one or two asterisks or underscores - use + one for HTML ``, and two for `` emphasis. + + * A **code span** is text wrapped in backticks ('`'). + + * **Images** use a syntax much like inline or reference links, but with alt attribute text + ('img alt=...') instead of link text, and the first pair of square + brackets in an image instance prefixed by an exclamation mark. + + - **Inline HTML** is mostly interpreted automatically. 
+ + - **Escaping** Markdown punctuation characters is done by prefixing them by a backslash ('\\'). + +## Details + +### Paragraphs + +To cause an explicit line break within a paragraph, use 2 or more spaces at the end of a line. + +Any line containing only whitespace (space or tab characters) is considered a blank line. + +### Headers + +#### 'Setext' style headers (underlined) + +The number of underlining equal signs or hyphens used has no impact on the resulting header. + +Underlining using equal sign(s) creates a top level header (corresponding to HTML `

<h1>`),
+while hyphen(s) create a second level header (HTML `<h2>`). Thus, only 2 levels of headers
+can be made this way.
+
+#### 'Atx' style headers (hash prefixed)
+
+1 to 6 hash characters can be used to indicate header levels 1 (HTML `<h1>`) to 6 (`<h6>
      `). + +Headers may optionally be 'closed' for cosmetic reasons, by appending a whitespace and hash +characters to the header. The number of trailing hash characters has no impact on the header +level. + +### Block quotes + +Not every line in a paragraph needs to be prefixed by '>' in order to make it a block quote, +only the first line. + +Block quoted paragraphs can be nested by using multiple '>' characters as prefix. + +Within a block quote, Markdown formatting (e.g. lists, emphasis) still works as normal. + +### Lists + +A list item prefix need not occur first on its line; up to 3 leading spaces are allowed +(4 spaces would make a code block out of the following text). + +For unordered lists, asterisks, hyphens and plus signs can be used interchangeably. + +For ordered lists, arbitrary numbers can be used as part of an item prefix; the items will be +renumbered during rendering. However, future implementations may demand that the number used +for the first item in a list indicates an offset to be used for subsequent items. + +For list items spanning multiple lines, subsequent lines can be indented using an arbitrary amount +of whitespace. + +List items will be wrapped in HTML `

<p>` tags if they are separated by blank lines.
+
+A list item may span multiple paragraphs. At least the first line of each such paragraph must
+be indented using at least 4 spaces or a tab character.
+
+Block quotes within list items must have their '>' delimiters indented using 4 up to 7 spaces.
+
+Code blocks within list items need to be indented _twice_, that is, using 8 spaces or 2 tab
+characters.
+
+### Code blocks
+
+Lines within a code block are rendered verbatim using HTML `<pre>` and `<code>` tags, except that
      +HTML punctuation characters like '<' and '&' are automatically converted to HTML entities. Thus,
      +there is no need to explicitly escape HTML syntax within a code block.
      +
+A code block runs until the first non-blank line with indent less than 4 spaces or 1 tab character.
      +
      +
      +Regular Markdown syntax is not processed within code blocks.
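+
+For example, the following indented line forms a code block; its '<' and '&'
+characters are converted to HTML entities automatically during rendering:
+
+    for (i = 0; i < n && !done; i++) step(i);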
      +
      +### Links
      +
      +#### Automatic links
      +
      +When rendering automatic links to email addresses, HTML encoding obfuscation is used to
      +prevent some spambots from harvesting.
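+
+For example (the email address below is just a placeholder):
+
+    <http://www.fossil-scm.org/>
+    <someone@example.com>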
      +
      +#### Inline links
      +
      +Links to resources on the same server can use relative paths (i.e. can start with a '/').
      +
      +An optional title for the link (e.g. to have mouseover text in the browser) may be given behind
+the link target but within the parentheses, in single or double quotes, and separated from the
      +link target by whitespace.
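+
+For example (the relative path and the title shown are arbitrary):
+
+    [Fossil home page](http://www.fossil-scm.org/ "Fossil SCM")
+    [project timeline](/timeline)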
      +
      +#### Reference links
      +
      +> Each reference link consists of
      +>
      +>   - one or more _link instances_ at appropriate locations in the page text
      +>   - a single _link definition_ at an arbitrary location on the page
      +>
      +> During rendering, each link instance is resolved, and the corresponding definition is
      +> filled in. No separate link definition clauses occur in the rendered output.
      +>
      +> There are 3 fields involved in link instances and definitions:
      +>
      +>   - link text (i.e. the text that is displayed at the resulting link)
+>   - link definition name (i.e. a unique ID binding link instances to link definition)
      +>   - link target (a target URL for the link)
      +
      +Multiple link instances may reference the same link definition using its link definition
      +name.
      +
      +Link definition names are case insensitive, and may contain letters, numbers, spaces and
      +punctuation.
      +
      +##### Link instance
      +
      +A space may be inserted between the bracket pairs for link text and link definition name.
      +
      +A link instance can use an _implicit link definition name_ shortcut, in which case the link
      +text is used as the link definition name. The second set of brackets then remains empty, e.g.
      +'[Google][]' ('Google' being used as both link text and link definition name).
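+
+For example (the definition name "fossil" is arbitrary):
+
+    See the [home page][fossil], or use the shortcut form [Fossil][].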
      +
      +##### Link definition
      +
      +The first bracket pair containing the link definition name may be indented using up to 3 spaces.
      +
      +The link target may optionally be surrounded by angle brackets ('<' and '>').
      +
      +A link target may be followed by an optional title (e.g. to have mouseover text in the browser).
      +This title may be enclosed in parentheses, single or double quotes.
      +
      +Link definitions may be split into 2 lines, with the title on the second line, arbitrarily
      +indented. This may be more visually pleasing when using long link targets.
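+
+For example, a definition matching the instances above, with the optional title
+split onto an indented second line:
+
+    [fossil]: http://www.fossil-scm.org/
+              "Fossil SCM"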
      +
      +### Emphasis
      +
      +The same character(s) used for starting the emphasis must be used to end it; don't mix
      +asterisks and underscores.
      +
      +Emphasis can be used in the middle of a word. That is, there need not be whitespace on either
      +side of emphasis start or end punctuation characters.
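+
+For example:
+
+    *single asterisks* or _single underscores_ give <em> emphasis,
+    **double asterisks** or __double underscores__ give <strong> emphasis,
+    and emphasis may start or end in the mi*dd*le of a word.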
      +
      +### Code spans
      +
      +To include a literal backtick character in a code span, use multiple backticks as opening and
      +closing delimiters.
      +
      +Whitespace may exist immediately after the opening delimiter and before the closing delimiter
      +of a code span, to allow for code fragments starting or ending with a backtick.
      +
      +Within a code span - like within a code block - angle brackets and ampersands are automatically encoded to make including
      +HTML fragments easier.
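+
+For example (doubled backticks allow a literal backtick inside the span):
+
+    Run the `fossil status` command.
+    A literal backtick: `` ` ``.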
      +
      +### Images
      +
      +If necessary, HTML must be used to specify image dimensions. Markdown has no provision for this.
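+
+For example (the image path and title are arbitrary):
+
+    ![alt text](images/logo.png "Optional title")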
      +
      +### Inline HTML
      +
      +Start and end tags of
+an HTML block level construct (`<div>`, `<table>` etc) must be separated from surrounding
+context using blank lines, and must both occur at the start of a line.
+
+No extra unwanted `<p>
      ` HTML tags are added around HTML block level tags. + +Markdown formatting within HTML block level tags is not processed; however, formatting within +span level tags (e.g. ``) is processed normally. + +### Escaping Markdown punctuation + +The following punctuation characters can be escaped using backslash: + + - \\ backslash + - ` backtick + - * asterisk + - _ underscore + - {} curly braces + - [] square brackets + - () parentheses + - # hash mark + - + plus sign + - - minus sign (hyphen) + - . dot + - ! exclamation mark + +To render a literal backslash, use 2 backslashes ('\\\\'). + ADDED src/markdown_html.c Index: src/markdown_html.c ================================================================== --- src/markdown_html.c +++ src/markdown_html.c @@ -0,0 +1,442 @@ +/* +** Copyright (c) 2012 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains callbacks for the markdown parser that generate +** XHTML output. +*/ + +#include "config.h" +#include "markdown_html.h" + +#if INTERFACE + +void markdown_to_html( + struct Blob *input_markdown, + struct Blob *output_title, + struct Blob *output_body); + +#endif /* INTERFACE */ + + +/* INTER_BLOCK -- skip a line between block level elements */ +#define INTER_BLOCK(ob) \ + do { if( blob_size(ob)>0 ) blob_append(ob, "\n", 1); } while (0) + +/* BLOB_APPEND_LITERAL -- append a string literal to a blob */ +#define BLOB_APPEND_LITERAL(blob, literal) \ + blob_append((blob), "" literal, (sizeof literal)-1) + /* + * The empty string in the second argument leads to a syntax error + * when the macro is not used with a string literal. Unfortunately + * the error is not overly explicit. + */ + +/* BLOB_APPEND_BLOB -- append blob contents to another */ +#define BLOB_APPEND_BLOB(dest, src) \ + blob_append((dest), blob_buffer(src), blob_size(src)) + + +/* HTML escape */ + +static void html_escape(struct Blob *ob, const char *data, size_t size){ + size_t beg = 0, i = 0; + while( i' ){ + BLOB_APPEND_LITERAL(ob, ">"); + }else if( data[i]=='&' ){ + BLOB_APPEND_LITERAL(ob, "&"); + }else if( data[i]=='"' ){ + BLOB_APPEND_LITERAL(ob, """); + }else{ + break; + } + i++; + } + } +} + + +/* HTML block tags */ + +/* Size of the prolog: "

      \n" */ +#define PROLOG_SIZE 23 + +static void html_prolog(struct Blob *ob, void *opaque){ + INTER_BLOCK(ob); + BLOB_APPEND_LITERAL(ob, "
      \n"); + assert( blob_size(ob)==PROLOG_SIZE ); +} + +static void html_epilog(struct Blob *ob, void *opaque){ + INTER_BLOCK(ob); + BLOB_APPEND_LITERAL(ob, "
      \n"); +} + +static void html_raw_block(struct Blob *ob, struct Blob *text, void *opaque){ + char *data = blob_buffer(text); + size_t size = blob_size(text); + Blob *title = (Blob*)opaque; + while( size>0 && fossil_isspace(data[0]) ){ data++; size--; } + while( size>0 && fossil_isspace(data[size-1]) ){ size--; } + /* If the first raw block is an

      element, then use it as the title. */ + if( blob_size(ob)<=PROLOG_SIZE + && size>9 + && title!=0 + && sqlite3_strnicmp("", &data[size-5],5)==0 + ){ + int nTag = htmlTagLength(data); + blob_append(title, data+nTag, size - nTag - 5); + return; + } + INTER_BLOCK(ob); + blob_append(ob, data, size); + BLOB_APPEND_LITERAL(ob, "\n"); +} + +static void html_blockcode(struct Blob *ob, struct Blob *text, void *opaque){ + INTER_BLOCK(ob); + BLOB_APPEND_LITERAL(ob, "
      ");
      +  html_escape(ob, blob_buffer(text), blob_size(text));
      +  BLOB_APPEND_LITERAL(ob, "
      \n"); +} + +static void html_blockquote(struct Blob *ob, struct Blob *text, void *opaque){ + INTER_BLOCK(ob); + BLOB_APPEND_LITERAL(ob, "
      \n"); + BLOB_APPEND_BLOB(ob, text); + BLOB_APPEND_LITERAL(ob, "
      \n"); +} + +static void html_header( + struct Blob *ob, + struct Blob *text, + int level, + void *opaque +){ + struct Blob *title = opaque; + /* The first header at the beginning of a text is considered as + * a title and not output. */ + if( blob_size(ob)<=PROLOG_SIZE && title!=0 && blob_size(title)==0 ){ + BLOB_APPEND_BLOB(title, text); + return; + } + INTER_BLOCK(ob); + blob_appendf(ob, "", level); + BLOB_APPEND_BLOB(ob, text); + blob_appendf(ob, "", level); +} + +static void html_hrule(struct Blob *ob, void *opaque){ + INTER_BLOCK(ob); + BLOB_APPEND_LITERAL(ob, "
      \n"); +} + + +static void html_list( + struct Blob *ob, + struct Blob *text, + int flags, + void *opaque +){ + char ol[] = "ol"; + char ul[] = "ul"; + char *tag = (flags & MKD_LIST_ORDERED) ? ol : ul; + INTER_BLOCK(ob); + blob_appendf(ob, "<%s>\n", tag); + BLOB_APPEND_BLOB(ob, text); + blob_appendf(ob, "\n", tag); +} + +static void html_list_item( + struct Blob *ob, + struct Blob *text, + int flags, + void *opaque +){ + char *text_data = blob_buffer(text); + size_t text_size = blob_size(text); + while( text_size>0 && text_data[text_size-1]=='\n' ) text_size--; + BLOB_APPEND_LITERAL(ob, "
<li>");
+  blob_append(ob, text_data, text_size);
+  BLOB_APPEND_LITERAL(ob, "</li>\n");
+}
+
+static void html_paragraph(struct Blob *ob, struct Blob *text, void *opaque){
+  INTER_BLOCK(ob);
+  BLOB_APPEND_LITERAL(ob, "<p>");
+  BLOB_APPEND_BLOB(ob, text);
+  BLOB_APPEND_LITERAL(ob, "</p>

      \n"); +} + + +static void html_table( + struct Blob *ob, + struct Blob *head_row, + struct Blob *rows, + void *opaque +){ + INTER_BLOCK(ob); + BLOB_APPEND_LITERAL(ob, "

      \n"); + if( head_row && blob_size(head_row)>0 ){ + BLOB_APPEND_LITERAL(ob, "\n"); + BLOB_APPEND_BLOB(ob, head_row); + BLOB_APPEND_LITERAL(ob, "\n\n"); + } + if( rows ){ + BLOB_APPEND_BLOB(ob, rows); + } + if( head_row && blob_size(head_row)>0 ){ + BLOB_APPEND_LITERAL(ob, "\n"); + } + BLOB_APPEND_LITERAL(ob, "
      \n"); +} + +static void html_table_cell( + struct Blob *ob, + struct Blob *text, + int flags, + void *opaque +){ + if( flags & MKD_CELL_HEAD ){ + BLOB_APPEND_LITERAL(ob, " "); + BLOB_APPEND_BLOB(ob, text); + if( flags & MKD_CELL_HEAD ){ + BLOB_APPEND_LITERAL(ob, "\n"); + }else{ + BLOB_APPEND_LITERAL(ob, "
    • + while( db_step(&q)==SQLITE_ROW ){ + int rid = db_column_int(&q,0); + const char *zUuid = db_column_text(&q, 1); + const char *zDesc = db_column_text(&q, 2); + int isPriv = db_column_int(&q,3); + @ + @ + @ + if( isPriv ){ + @ + } + @ + } + @
      %d(rid) %z(href("%R/info/%!S",zUuid))%S(zUuid) %h(zDesc)(unpublished)
      + db_finalize(&q); + style_footer(); +} + +/* +** WEBPAGE: bigbloblist +** +** Return a page showing the largest artifacts in the repository in order +** of decreasing size. +** +** n=N Show the top N artifacts +*/ +void bigbloblist_page(void){ + Stmt q; + int n = atoi(PD("n","250")); + + login_check_credentials(); + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } + style_header("%d Largest Artifacts", n); + db_multi_exec( + "CREATE TEMP TABLE toshow(rid INTEGER PRIMARY KEY);" + "INSERT INTO toshow(rid)" + " SELECT rid FROM blob" + " ORDER BY length(content) DESC" + " LIMIT %d;", n + ); + describe_artifacts("IN toshow"); + db_prepare(&q, + "SELECT description.rid, description.uuid, description.summary," + " length(blob.content), coalesce(delta.srcid,'')," + " datetime(description.ctime)" + " FROM description, blob LEFT JOIN delta ON delta.rid=blob.rid" + " WHERE description.rid=blob.rid" + " ORDER BY length(content) DESC" + ); + @ + @ + @ + while( db_step(&q)==SQLITE_ROW ){ + int rid = db_column_int(&q,0); + const char *zUuid = db_column_text(&q, 1); + const char *zDesc = db_column_text(&q, 2); + int sz = db_column_int(&q,3); + const char *zSrcId = db_column_text(&q,4); + const char *zDate = db_column_text(&q,5); + @ + @ + @ + @ + @ + @ + @ + } + @
      SizeRID + @ Delta FromSHA1DescriptionDate
      %d(sz)%d(rid)%s(zSrcId) %z(href("%R/info/%!S",zUuid))%S(zUuid) %h(zDesc)%z(href("%R/timeline?c=%T",zDate))%s(zDate)
      + db_finalize(&q); + output_table_sorting_javascript("bigblobtab", "NnnttT", -1); + style_footer(); +} + +/* +** COMMAND: test-unsent +** +** Usage: %fossil test-unsent +** +** Show all artifacts in the unsent table +*/ +void test_unsent_cmd(void){ + db_find_and_open_repository(0,0); + describe_artifacts_to_stdout("IN unsent", 0); +} + +/* +** COMMAND: test-unclustered +** +** Usage: %fossil test-unclustered +** +** Show all artifacts in the unclustered table +*/ +void test_unclusterd_cmd(void){ + db_find_and_open_repository(0,0); + describe_artifacts_to_stdout("IN unclustered", 0); +} + +/* +** COMMAND: test-phantoms +** +** Usage: %fossil test-phantoms +** +** Show all phantom artifacts +*/ +void test_phatoms_cmd(void){ + db_find_and_open_repository(0,0); + describe_artifacts_to_stdout("IN (SELECT rid FROM blob WHERE size<0)", 0); +} + +/* Maximum number of collision examples to remember */ +#define MAX_COLLIDE 25 + +/* +** Generate a report on the number of collisions in SHA1 hashes +** generated by the SQL given in the argument. +*/ +static void collision_report(const char *zSql){ + int i, j, kk; + int nHash = 0; + Stmt q; + char zPrev[UUID_SIZE+1]; + struct { + int cnt; + char *azHit[MAX_COLLIDE]; + char z[UUID_SIZE+1]; + } aCollide[UUID_SIZE+1]; + memset(aCollide, 0, sizeof(aCollide)); + memset(zPrev, 0, sizeof(zPrev)); + db_prepare(&q,"%s",zSql/*safe-for-%s*/); + while( db_step(&q)==SQLITE_ROW ){ + const char *zUuid = db_column_text(&q,0); + int n = db_column_bytes(&q,0); + int i; + nHash++; + for(i=0; zPrev[i] && zPrev[i]==zUuid[i]; i++){} + if( i>0 && i<=UUID_SIZE ){ + if( i>=4 && aCollide[i].cnt + @ LengthInstancesFirst Instance + @ + for(i=1; i<=UUID_SIZE; i++){ + if( aCollide[i].cnt==0 ) continue; + @ %d(i)%d(aCollide[i].cnt)%h(aCollide[i].z) + } + @ + @

      Total number of hashes: %d(nHash)

      + kk = 0; + for(i=UUID_SIZE; i>=4; i--){ + if( aCollide[i].cnt==0 ) continue; + if( aCollide[i].cnt>200 ) break; + kk += aCollide[i].cnt; + if( aCollide[i].cnt<25 ){ + @

      Collisions of length %d(i): + }else{ + @

      First 25 collisions of length %d(i): + } + for(j=0; j + } + } + for(i=4; iHash Prefix Collisions on Check-ins + collision_report("SELECT (SELECT uuid FROM blob WHERE rid=objid)" + " FROM event WHERE event.type='ci'" + " ORDER BY 1"); + @

      Hash Prefix Collisions on All Artifacts

      + collision_report("SELECT uuid FROM blob ORDER BY 1"); + style_footer(); +} ADDED src/path.c Index: src/path.c ================================================================== --- src/path.c +++ src/path.c @@ -0,0 +1,571 @@ +/* +** Copyright (c) 2011 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@sqlite.org +** +******************************************************************************* +** +** This file contains code used to trace paths of through the +** directed acyclic graph (DAG) of check-ins. +*/ +#include "config.h" +#include "path.h" +#include + +#if INTERFACE +/* Nodes for the paths through the DAG. +*/ +struct PathNode { + int rid; /* ID for this node */ + u8 fromIsParent; /* True if pFrom is the parent of rid */ + u8 isPrim; /* True if primary side of common ancestor */ + u8 isHidden; /* Abbreviate output in "fossil bisect ls" */ + PathNode *pFrom; /* Node we came from */ + union { + PathNode *pPeer; /* List of nodes of the same generation */ + PathNode *pTo; /* Next on path from beginning to end */ + } u; + PathNode *pAll; /* List of all nodes */ +}; +#endif + +/* +** Local variables for this module +*/ +static struct { + PathNode *pCurrent; /* Current generation of nodes */ + PathNode *pAll; /* All nodes */ + Bag seen; /* Nodes seen before */ + int nStep; /* Number of steps from first to last */ + PathNode *pStart; /* Earliest node */ + PathNode *pPivot; /* Common ancestor of pStart and pEnd */ + PathNode *pEnd; /* Most recent */ +} path; + +/* +** Return the first (last) element of the computed path. +*/ +PathNode *path_first(void){ return path.pStart; } +PathNode *path_last(void){ return path.pEnd; } + +/* +** Return the number of steps in the computed path. +*/ +int path_length(void){ return path.nStep; } + +/* +** Create a new node +*/ +static PathNode *path_new_node(int rid, PathNode *pFrom, int isParent){ + PathNode *p; + + p = fossil_malloc( sizeof(*p) ); + memset(p, 0, sizeof(*p)); + p->rid = rid; + p->fromIsParent = isParent; + p->pFrom = pFrom; + p->u.pPeer = path.pCurrent; + path.pCurrent = p; + p->pAll = path.pAll; + path.pAll = p; + bag_insert(&path.seen, rid); + return p; +} + +/* +** Reset memory used by the shortest path algorithm. +*/ +void path_reset(void){ + PathNode *p; + while( path.pAll ){ + p = path.pAll; + path.pAll = p->pAll; + fossil_free(p); + } + bag_clear(&path.seen); + memset(&path, 0, sizeof(path)); +} + +/* +** Construct the path from path.pStart to path.pEnd in the u.pTo fields. +*/ +static void path_reverse_path(void){ + PathNode *p; + assert( path.pEnd!=0 ); + for(p=path.pEnd; p && p->pFrom; p = p->pFrom){ + p->pFrom->u.pTo = p; + } + path.pEnd->u.pTo = 0; + assert( p==path.pStart ); +} + +/* +** Compute the shortest path from iFrom to iTo +** +** If directOnly is true, then use only the "primary" links from parent to +** child. In other words, ignore merges. +** +** Return a pointer to the beginning of the path (the iFrom node). +** Elements of the path can be traversed by following the PathNode.u.pTo +** pointer chain. +** +** Return NULL if no path is found. 
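+**
+** Usage sketch (iFrom and iTo stand for valid check-in RIDs); the caller
+** walks the u.pTo chain and then releases the nodes with path_reset():
+**
+**     PathNode *p = path_shortest(iFrom, iTo, 0, 0);
+**     for(; p; p=p->u.pTo){
+**       fossil_print("%d\n", p->rid);
+**     }
+**     path_reset();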
+*/ +PathNode *path_shortest( + int iFrom, /* Path starts here */ + int iTo, /* Path ends here */ + int directOnly, /* No merge links if true */ + int oneWayOnly /* Parent->child only if true */ +){ + Stmt s; + PathNode *pPrev; + PathNode *p; + + path_reset(); + path.pStart = path_new_node(iFrom, 0, 0); + if( iTo==iFrom ){ + path.pEnd = path.pStart; + return path.pStart; + } + if( oneWayOnly && directOnly ){ + db_prepare(&s, + "SELECT cid, 1 FROM plink WHERE pid=:pid AND isprim" + ); + }else if( oneWayOnly ){ + db_prepare(&s, + "SELECT cid, 1 FROM plink WHERE pid=:pid " + ); + }else if( directOnly ){ + db_prepare(&s, + "SELECT cid, 1 FROM plink WHERE pid=:pid AND isprim " + "UNION ALL " + "SELECT pid, 0 FROM plink WHERE cid=:pid AND isprim" + ); + }else{ + db_prepare(&s, + "SELECT cid, 1 FROM plink WHERE pid=:pid " + "UNION ALL " + "SELECT pid, 0 FROM plink WHERE cid=:pid" + ); + } + while( path.pCurrent ){ + path.nStep++; + pPrev = path.pCurrent; + path.pCurrent = 0; + while( pPrev ){ + db_bind_int(&s, ":pid", pPrev->rid); + while( db_step(&s)==SQLITE_ROW ){ + int cid = db_column_int(&s, 0); + int isParent = db_column_int(&s, 1); + if( bag_find(&path.seen, cid) ) continue; + p = path_new_node(cid, pPrev, isParent); + if( cid==iTo ){ + db_finalize(&s); + path.pEnd = p; + path_reverse_path(); + return path.pStart; + } + } + db_reset(&s); + pPrev = pPrev->u.pPeer; + } + } + db_finalize(&s); + path_reset(); + return 0; +} + +/* +** Find the mid-point of the path. If the path contains fewer than +** 2 steps, return 0. +*/ +PathNode *path_midpoint(void){ + PathNode *p; + int i; + if( path.nStep<2 ) return 0; + for(p=path.pEnd, i=0; p && ipFrom, i++){} + return p; +} + +/* +** COMMAND: test-shortest-path +** +** Usage: %fossil test-shortest-path ?--no-merge? VERSION1 VERSION2 +** +** Report the shortest path between two check-ins. If the --no-merge flag +** is used, follow only direct parent-child paths and omit merge links. +*/ +void shortest_path_test_cmd(void){ + int iFrom; + int iTo; + PathNode *p; + int n; + int directOnly; + int oneWay; + + db_find_and_open_repository(0,0); + directOnly = find_option("no-merge",0,0)!=0; + oneWay = find_option("one-way",0,0)!=0; + if( g.argc!=4 ) usage("VERSION1 VERSION2"); + iFrom = name_to_rid(g.argv[2]); + iTo = name_to_rid(g.argv[3]); + p = path_shortest(iFrom, iTo, directOnly, oneWay); + if( p==0 ){ + fossil_fatal("no path from %s to %s", g.argv[1], g.argv[2]); + } + for(n=1, p=path.pStart; p; p=p->u.pTo, n++){ + char *z; + z = db_text(0, + "SELECT substr(uuid,1,12) || ' ' || datetime(mtime)" + " FROM blob, event" + " WHERE blob.rid=%d AND event.objid=%d AND event.type='ci'", + p->rid, p->rid); + fossil_print("%4d: %5d %s", n, p->rid, z); + fossil_free(z); + if( p->u.pTo ){ + fossil_print(" is a %s of\n", + p->u.pTo->fromIsParent ? "parent" : "child"); + }else{ + fossil_print("\n"); + } + } +} + +/* +** Find the closest common ancestor of two nodes. "Closest" means the +** fewest number of arcs. 
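+**
+** For example (sketch): given check-ins  A -> B -> C  and  A -> D,
+** path_common_ancestor(C, D) returns the RID of A, the nearest check-in
+** from which both C and D descend.  Returns 0 when no common ancestor
+** exists.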
+*/ +int path_common_ancestor(int iMe, int iYou){ + Stmt s; + PathNode *pPrev; + PathNode *p; + Bag me, you; + + if( iMe==iYou ) return iMe; + if( iMe==0 || iYou==0 ) return 0; + path_reset(); + path.pStart = path_new_node(iMe, 0, 0); + path.pStart->isPrim = 1; + path.pEnd = path_new_node(iYou, 0, 0); + db_prepare(&s, "SELECT pid FROM plink WHERE cid=:cid"); + bag_init(&me); + bag_insert(&me, iMe); + bag_init(&you); + bag_insert(&you, iYou); + while( path.pCurrent ){ + pPrev = path.pCurrent; + path.pCurrent = 0; + while( pPrev ){ + db_bind_int(&s, ":cid", pPrev->rid); + while( db_step(&s)==SQLITE_ROW ){ + int pid = db_column_int(&s, 0); + if( bag_find(pPrev->isPrim ? &you : &me, pid) ){ + /* pid is the common ancestor */ + PathNode *pNext; + for(p=path.pAll; p && p->rid!=pid; p=p->pAll){} + assert( p!=0 ); + pNext = p; + while( pNext ){ + pNext = p->pFrom; + p->pFrom = pPrev; + pPrev = p; + p = pNext; + } + if( pPrev==path.pStart ) path.pStart = path.pEnd; + path.pEnd = pPrev; + path_reverse_path(); + db_finalize(&s); + return pid; + }else if( bag_find(&path.seen, pid) ){ + /* pid is just an alternative path on one of the legs */ + continue; + } + p = path_new_node(pid, pPrev, 0); + p->isPrim = pPrev->isPrim; + bag_insert(pPrev->isPrim ? &me : &you, pid); + } + db_reset(&s); + pPrev = pPrev->u.pPeer; + } + } + db_finalize(&s); + path_reset(); + return 0; +} + +/* +** COMMAND: test-ancestor-path +** +** Usage: %fossil test-ancestor-path VERSION1 VERSION2 +** +** Report the path from VERSION1 to VERSION2 through their most recent +** common ancestor. +*/ +void ancestor_path_test_cmd(void){ + int iFrom; + int iTo; + int iPivot; + PathNode *p; + int n; + + db_find_and_open_repository(0,0); + if( g.argc!=4 ) usage("VERSION1 VERSION2"); + iFrom = name_to_rid(g.argv[2]); + iTo = name_to_rid(g.argv[3]); + iPivot = path_common_ancestor(iFrom, iTo); + for(n=1, p=path.pStart; p; p=p->u.pTo, n++){ + char *z; + z = db_text(0, + "SELECT substr(uuid,1,12) || ' ' || datetime(mtime)" + " FROM blob, event" + " WHERE blob.rid=%d AND event.objid=%d AND event.type='ci'", + p->rid, p->rid); + fossil_print("%4d: %5d %s", n, p->rid, z); + fossil_free(z); + if( p->rid==iFrom ) fossil_print(" VERSION1"); + if( p->rid==iTo ) fossil_print(" VERSION2"); + if( p->rid==iPivot ) fossil_print(" PIVOT"); + fossil_print("\n"); + } +} + + +/* +** A record of a file rename operation. +*/ +typedef struct NameChange NameChange; +struct NameChange { + int origName; /* Original name of file */ + int curName; /* Current name of the file */ + int newName; /* Name of file in next version */ + NameChange *pNext; /* List of all name changes */ +}; + +/* +** Compute all file name changes that occur going from check-in iFrom +** to check-in iTo. +** +** The number of name changes is written into *pnChng. For each name +** change, two integers are allocated for *piChng. The first is the +** filename.fnid for the original name as seen in check-in iFrom and +** the second is for new name as it is used in check-in iTo. +** +** Space to hold *piChng is obtained from fossil_malloc() and should +** be released by the caller. +** +** This routine really has nothing to do with path. It is located +** in this path.c module in order to leverage some of the path +** infrastructure. 
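+**
+** Call sketch (ridFrom and ridTo stand for check-in RIDs); each name
+** change yields a pair of filename.fnid values, and the array belongs
+** to the caller:
+**
+**     int nChng, *aChng, i;
+**     find_filename_changes(ridFrom, ridTo, 0, &nChng, &aChng, 0);
+**     for(i=0; i<nChng; i++){
+**       fossil_print("%d -> %d\n", aChng[i*2], aChng[i*2+1]);
+**     }
+**     fossil_free(aChng);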
+*/ +void find_filename_changes( + int iFrom, /* Ancestor check-in */ + int iTo, /* Recent check-in */ + int revOk, /* Ok to move backwards (child->parent) if true */ + int *pnChng, /* Number of name changes along the path */ + int **aiChng, /* Name changes */ + const char *zDebug /* Generate trace output if no NULL */ +){ + PathNode *p; /* For looping over path from iFrom to iTo */ + NameChange *pAll = 0; /* List of all name changes seen so far */ + NameChange *pChng; /* For looping through the name change list */ + int nChng = 0; /* Number of files whose names have changed */ + int *aChng; /* Two integers per name change */ + int i; /* Loop counter */ + Stmt q1; /* Query of name changes */ + + *pnChng = 0; + *aiChng = 0; + if(0==iFrom){ + fossil_fatal("Invalid 'from' RID: 0"); + }else if(0==iTo){ + fossil_fatal("Invalid 'to' RID: 0"); + } + if( iFrom==iTo ) return; + path_reset(); + p = path_shortest(iFrom, iTo, 1, revOk==0); + if( p==0 ) return; + path_reverse_path(); + db_prepare(&q1, + "SELECT pfnid, fnid FROM mlink" + " WHERE mid=:mid AND (pfnid>0 OR fid==0)" + " ORDER BY pfnid" + ); + for(p=path.pStart; p; p=p->u.pTo){ + int fnid, pfnid; + if( !p->fromIsParent && (p->u.pTo==0 || p->u.pTo->fromIsParent) ){ + /* Skip nodes where the parent is not on the path */ + continue; + } + db_bind_int(&q1, ":mid", p->rid); + while( db_step(&q1)==SQLITE_ROW ){ + fnid = db_column_int(&q1, 1); + pfnid = db_column_int(&q1, 0); + if( pfnid==0 ){ + pfnid = fnid; + fnid = 0; + } + if( !p->fromIsParent ){ + int t = fnid; + fnid = pfnid; + pfnid = t; + } + if( zDebug ){ + fossil_print("%s at %d%s %.10z: %d[%z] -> %d[%z]\n", + zDebug, p->rid, p->fromIsParent ? ">" : "<", + db_text(0, "SELECT uuid FROM blob WHERE rid=%d", p->rid), + pfnid, + db_text(0, "SELECT name FROM filename WHERE fnid=%d", pfnid), + fnid, + db_text(0, "SELECT name FROM filename WHERE fnid=%d", fnid)); + } + for(pChng=pAll; pChng; pChng=pChng->pNext){ + if( pChng->curName==pfnid ){ + pChng->newName = fnid; + break; + } + } + if( pChng==0 && fnid>0 ){ + pChng = fossil_malloc( sizeof(*pChng) ); + pChng->pNext = pAll; + pAll = pChng; + pChng->origName = pfnid; + pChng->curName = pfnid; + pChng->newName = fnid; + nChng++; + } + } + for(pChng=pAll; pChng; pChng=pChng->pNext){ + pChng->curName = pChng->newName; + } + db_reset(&q1); + } + db_finalize(&q1); + if( nChng ){ + aChng = *aiChng = fossil_malloc( nChng*2*sizeof(int) ); + for(pChng=pAll, i=0; pChng; pChng=pChng->pNext){ + if( pChng->newName==0 ) continue; + if( pChng->origName==0 ) continue; + if( pChng->newName==pChng->origName ) continue; + aChng[i] = pChng->origName; + aChng[i+1] = pChng->newName; + if( zDebug ){ + fossil_print("%s summary %d[%z] -> %d[%z]\n", + zDebug, + aChng[i], + db_text(0, "SELECT name FROM filename WHERE fnid=%d", aChng[i]), + aChng[i+1], + db_text(0, "SELECT name FROM filename WHERE fnid=%d", aChng[i+1])); + } + i += 2; + } + *pnChng = i/2; + while( pAll ){ + pChng = pAll; + pAll = pAll->pNext; + fossil_free(pChng); + } + } +} + +/* +** COMMAND: test-name-changes +** +** Usage: %fossil test-name-changes [--debug] VERSION1 VERSION2 +** +** Show all filename changes that occur going from VERSION1 to VERSION2 +*/ +void test_name_change(void){ + int iFrom; + int iTo; + int *aChng; + int nChng; + int i; + const char *zDebug = 0; + int revOk = 0; + + db_find_and_open_repository(0,0); + zDebug = find_option("debug",0,0)!=0 ? 
"debug" : 0; + revOk = find_option("bidirectional",0,0)!=0; + if( g.argc<4 ) usage("VERSION1 VERSION2"); + while( g.argc>=4 ){ + iFrom = name_to_rid(g.argv[2]); + iTo = name_to_rid(g.argv[3]); + find_filename_changes(iFrom, iTo, revOk, &nChng, &aChng, zDebug); + fossil_print("------ Changes for (%d) %s -> (%d) %s\n", + iFrom, g.argv[2], iTo, g.argv[3]); + for(i=0; i [%s]\n", zFrom, zTo); + fossil_free(zFrom); + fossil_free(zTo); + } + fossil_free(aChng); + g.argv += 2; + g.argc -= 2; + } +} + +/* Query to extract all rename operations */ +static const char zRenameQuery[] = +@ SELECT +@ datetime(event.mtime), +@ F.name AS old_name, +@ T.name AS new_name, +@ blob.uuid +@ FROM mlink, filename F, filename T, event, blob +@ WHERE coalesce(mlink.pfnid,0)!=0 AND mlink.pfnid!=mlink.fnid +@ AND F.fnid=mlink.pfnid +@ AND T.fnid=mlink.fnid +@ AND event.objid=mlink.mid +@ AND event.type='ci' +@ AND blob.rid=mlink.mid +@ ORDER BY 1 DESC, 2; +; + +/* +** WEBPAGE: test-rename-list +** +** Print a list of all file rename operations throughout history. +** This page is intended for for testing purposes only and may change +** or be discontinued without notice. +*/ +void test_rename_list_page(void){ + Stmt q; + + login_check_credentials(); + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } + style_header("List Of File Name Changes"); + @

      NB: Experimental Page

      + @ + @ + @ + @ + @ + db_prepare(&q, "%s", zRenameQuery/*safe-for-%s*/); + while( db_step(&q)==SQLITE_ROW ){ + const char *zDate = db_column_text(&q, 0); + const char *zOld = db_column_text(&q, 1); + const char *zNew = db_column_text(&q, 2); + const char *zUuid = db_column_text(&q, 3); + @ + @ + @ + @ + @ + } + @
      Date & TimeOld NameNew NameCheck-in
      %z(href("%R/timeline?c=%t",zDate))%s(zDate)%z(href("%R/finfo?name=%t",zOld))%h(zOld)%z(href("%R/finfo?name=%t",zNew))%h(zNew)%z(href("%R/info/%!S",zUuid))%S(zUuid)
      + db_finalize(&q); + style_footer(); +} ADDED src/piechart.c Index: src/piechart.c ================================================================== --- src/piechart.c +++ src/piechart.c @@ -0,0 +1,326 @@ +/* +** Copyright (c) 2015 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code for generating pie charts on web pages. +** +*/ +#include "config.h" +#include "piechart.h" +#include + +#ifndef M_PI +# define M_PI 3.1415926535897932385 +#endif + +/* +** Return an RGB color name given HSV values. The HSV values +** must each be between between 0 and 255. The string +** returned is held in a static buffer and is overwritten +** on each call. +*/ +const char *rgbName(unsigned char h, unsigned char s, unsigned char v){ + static char zColor[8]; + unsigned char A, B, C, r, g, b; + unsigned int i, m; + if( s==0 ){ + r = g = b = v; + }else{ + i = (h*6)/256; + m = (h*6)&0xff; + A = v*(256-s)/256; + B = v*(65536-s*m)/65536; + C = v*(65536-s*(256-m))/65536; + @ + switch( i ){ + case 0: r=v; g=C; b=A; break; + case 1: r=B; g=v; b=A; break; + case 2: r=A; g=v; b=C; break; + case 3: r=A; g=B; b=v; break; + case 4: r=C; g=A; b=v; break; + default: r=v; g=A; b=B; break; + } + } + sqlite3_snprintf(sizeof(zColor),zColor,"#%02x%02x%02x",r,g,b); + return zColor; +} + +/* +** Flags that can be passed into the pie-chart generator +*/ +#if INTERFACE +#define PIE_OTHER 0x0001 /* No wedge less than 1/60th of the circle */ +#define PIE_CHROMATIC 0x0002 /* Wedge colors are in chromatic order */ +#define PIE_PERCENT 0x0004 /* Add "(XX%)" marks on each label */ +#endif + +/* +** A pie-chart wedge label +*/ +struct WedgeLabel { + double rCos, rSin; /* Sine and Cosine of center angle of wedge */ + char *z; /* Label to draw on this wedge */ +}; +typedef struct WedgeLabel WedgeLabel; + +/* +** Comparison callback for qsort() to sort labels in order of increasing +** distance above and below the horizontal centerline. +*/ +static int wedgeCompare(const void *a, const void *b){ + const WedgeLabel *pA = (const WedgeLabel*)a; + const WedgeLabel *pB = (const WedgeLabel*)b; + double rA = fabs(pA->rCos); + double rB = fabs(pB->rCos); + if( rArB ) return +1; + return 0; +} + +/* +** Output HTML that will render a pie chart using data from +** the PIECHART temporary table. 
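+** Each row of that table supplies one wedge: an amount and a label.
+** A typical caller (sketch; the labels are made up) fills the table and
+** then invokes this routine from within a page generator:
+**
+**     db_multi_exec("CREATE TEMP TABLE piechart(amt REAL, label TEXT);");
+**     db_multi_exec("INSERT INTO piechart VALUES(60,'trunk'),(40,'other');");
+**     piechart_render(600, 300, PIE_OTHER);
+**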
+** +** The schema for the PIECHART table should be: +** +** CREATE TEMP TABLE piechart(amt REAL, label TEXT); +*/ +void piechart_render(int width, int height, unsigned int pieFlags){ + Stmt q; + double cx, cy; /* center of the pie */ + double r, r2; /* Radius of the pie */ + double x1,y1; /* Start of the slice */ + double x2,y2; /* End of the slice */ + double x3,y3; /* Middle point of the slice */ + double x4,y4; /* End of line extending from x3,y3 */ + double x5,y5; /* Text anchor */ + double d1; /* radius to x4,y4 */ + const char *zAnc; /* Anchor point for text */ + double a1 = 0.0; /* Angle for first edge of slice */ + double a2; /* Angle for second edge */ + double a3; /* Angle at middle of slice */ + unsigned char h; /* Hue */ + const char *zClr; /* Color */ + int l; /* Large arc flag */ + int j; /* Wedge number */ + double rTotal; /* Total piechart.amt */ + double rTooSmall; /* Sum of pieChart.amt entries less than 1/60th */ + int nTotal; /* Total number of entries in piechart */ + int nTooSmall; /* Number of pieChart.amt entries less than 1/60th */ + const char *zFg; /* foreground color for lines and text */ + int nWedgeAlloc = 0; /* Slots allocated for aWedge[] */ + int nWedge = 0; /* Slots used for aWedge[] */ + WedgeLabel *aWedge = 0; /* Labels */ + double rUprRight; /* Floor for next label in the upper right quadrant */ + double rUprLeft; /* Floor for next label in the upper left quadrant */ + double rLwrRight; /* Ceiling for label in the lower right quadrant */ + double rLwrLeft; /* Ceiling for label in the lower left quadrant */ + int i; /* Loop counter looping over wedge labels */ + +# define SATURATION 128 +# define VALUE 192 +# define OTHER_CUTOFF 90.0 +# define TEXT_HEIGHT 15.0 + + cx = 0.5*width; + cy = 0.5*height; + r2 = cx1 ){ + db_prepare(&q, "SELECT sum(amt), count(*) FROM piechart WHERE amt<:amt"); + db_bind_double(&q, ":amt", rTotal/OTHER_CUTOFF); + if( db_step(&q)==SQLITE_ROW ){ + rTooSmall = db_column_double(&q, 0); + nTooSmall = db_column_double(&q, 1); + } + db_finalize(&q); + } + if( nTooSmall>1 ){ + db_prepare(&q, "SELECT amt, label FROM piechart WHERE amt>=:limit" + " UNION ALL SELECT %.17g, '%d others';", + rTooSmall, nTooSmall); + db_bind_double(&q, ":limit", rTotal/OTHER_CUTOFF); + nTotal += 1 - nTooSmall; + }else{ + db_prepare(&q, "SELECT amt, label FROM piechart"); + } + if( nTotal<=10 ) pieFlags |= PIE_CHROMATIC; + for(j=0; db_step(&q)==SQLITE_ROW; j++){ + double x = db_column_double(&q,0)/rTotal; + const char *zLbl = db_column_text(&q,1); + /* @ */ + if( x<=0.0 ) continue; + x1 = cx + sin(a1)*r; + y1 = cy - cos(a1)*r; + a2 = a1 + x*2.0*M_PI; + x2 = cx + sin(a2)*r; + y2 = cy - cos(a2)*r; + a3 = 0.5*(a1+a2); + if( nWedge+1>nWedgeAlloc ){ + nWedgeAlloc = nWedgeAlloc*2 + 40; + aWedge = fossil_realloc(aWedge, sizeof(aWedge[0])*nWedgeAlloc); + } + if( pieFlags & PIE_PERCENT ){ + int pct = (int)(x*100.0 + 0.5); + aWedge[nWedge].z = mprintf("%s (%d%%)", zLbl, pct); + }else{ + aWedge[nWedge].z = fossil_strdup(zLbl); + } + aWedge[nWedge].rSin = sin(a3); + aWedge[nWedge].rCos = cos(a3); + nWedge++; + if( (j&1)==0 || (pieFlags & PIE_CHROMATIC)!=0 ){ + h = 256*j/nTotal; + }else if( j+2=0.5; + a1 = a2; + @ + } + qsort(aWedge, nWedge, sizeof(aWedge[0]), wedgeCompare); + rUprLeft = height; + rLwrLeft = 0; + rUprRight = height; + rLwrRight = 0; + d1 = r*1.1; + for(i=0; irSin*r; + y3 = cy - p->rCos*r; + x4 = cx + p->rSin*d1; + y4 = cy - p->rCos*d1; + if( y4<=cy ){ + if( x4>=cx ){ + if( y4>rUprRight ){ + y4 = rUprRight; + } + rUprRight = y4 - TEXT_HEIGHT; + }else{ + 
if( y4>rUprLeft ){ + y4 = rUprLeft; + } + rUprLeft = y4 - TEXT_HEIGHT; + } + }else{ + if( x4>=cx ){ + if( y4rCos); + @ + @ %h(p->z) + fossil_free(p->z); + } + db_finalize(&q); + fossil_free(aWedge); +} + +/* +** WEBPAGE: test-piechart +** +** Generate a pie-chart based on data input from a form. +*/ +void piechart_test_page(void){ + const char *zData; + Stmt ins, q; + Blob all, line, token1, token2; + int n = 0; + int width; + int height; + + login_check_credentials(); + style_header("Pie Chart Test"); + db_multi_exec("CREATE TEMP TABLE piechart(amt REAL, label TEXT);"); + db_prepare(&ins, "INSERT INTO piechart(amt,label) VALUES(:amt,:label)"); + zData = PD("data",""); + width = atoi(PD("width","800")); + height = atoi(PD("height","400")); + blob_init(&all, zData, -1); + while( blob_line(&all, &line) ){ + double rAmt; + if( blob_token(&line, &token1)==0 ) continue; + rAmt = atof(blob_str(&token1)); + if( rAmt<=0.0 ) continue; + blob_tail(&line, &token2); + db_bind_double(&ins, ":amt", rAmt); + db_bind_text(&ins, ":label", blob_str(&token2)); + db_step(&ins); + db_reset(&ins); + n++; + } + db_finalize(&ins); + blob_reset(&all); + if( n>0 ){ + @ + piechart_render(width,height, PIE_OTHER); + @ + @
      + } + @
      + @

      One slice per line. Value and then Label.

      + @
      + @ Width: + @ Height:
      + @ + @ + @ + @

      + @

      Previous Data:

      + @ + db_prepare(&q, "SELECT rowid, amt, label FROM piechart"); + while( db_step(&q)==SQLITE_ROW ){ + @ + @ + @ + } + db_finalize(&q); + @
      %d(db_column_int(&q,0))%g(db_column_double(&q,1))%h(db_column_text(&q,2))
      + style_footer(); +} Index: src/pivot.c ================================================================== --- src/pivot.c +++ src/pivot.c @@ -48,13 +48,13 @@ "DELETE FROM aqueue;" "CREATE INDEX IF NOT EXISTS aqueue_idx1 ON aqueue(pending, mtime);" ); /* Insert the primary record */ - db_multi_exec( + db_multi_exec( "INSERT INTO aqueue(rid, mtime, pending, src)" - " SELECT %d, mtime, 1, 1 FROM plink WHERE cid=%d LIMIT 1", + " SELECT %d, mtime, 1, 1 FROM event WHERE objid=%d AND type='ci' LIMIT 1", rid, rid ); } /* @@ -62,13 +62,13 @@ ** must be at least one secondary but there can be more than one if ** desired. */ void pivot_set_secondary(int rid){ /* Insert the primary record */ - db_multi_exec( + db_multi_exec( "INSERT OR IGNORE INTO aqueue(rid, mtime, pending, src)" - " SELECT %d, mtime, 1, 0 FROM plink WHERE cid=%d", + " SELECT %d, mtime, 1, 0 FROM event WHERE objid=%d AND type='ci'", rid, rid ); } /* @@ -77,16 +77,16 @@ ** can be found. */ int pivot_find(void){ Stmt q1, q2, u1, i1; int rid = 0; - + /* aqueue must contain at least one primary and one other. Otherwise ** we abort early */ if( db_int(0, "SELECT count(distinct src) FROM aqueue")<2 ){ - fossil_panic("lack both primary and secondary files"); + fossil_fatal("lack both primary and secondary files"); } /* Prepare queries we will be needing ** ** The first query finds the oldest pending version on the aqueue. This ADDED src/popen.c Index: src/popen.c ================================================================== --- src/popen.c +++ src/popen.c @@ -0,0 +1,221 @@ +/* +** Copyright (c) 2010 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains an implementation of a bi-directional popen(). +*/ +#include "config.h" +#include "popen.h" + +#ifdef _WIN32 +#include +#include +/* +** Print a fatal error and quit. +*/ +static void win32_fatal_error(const char *zMsg){ + fossil_fatal("%s", zMsg); +} +#else +#include +#include +#endif + +/* +** The following macros are used to cast pointers to integers and +** integers to pointers. The way you do this varies from one compiler +** to the next, so we have developed the following set of #if statements +** to generate appropriate macros for a wide range of compilers. +** +** The correct "ANSI" way to do this is to use the intptr_t type. +** Unfortunately, that typedef is not available on all compilers, or +** if it is available, it requires an #include of specific headers +** that vary from one machine to the next. +** +** This code is copied out of SQLite. 
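+**
+** For example, INT_TO_PTR(7) packs the integer 7 into a void* and
+** PTR_TO_INT() recovers it; the cases below differ only in the cast
+** sequence used to keep each compiler quiet.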
+*/ +#if defined(__PTRDIFF_TYPE__) /* This case should work for GCC */ +# define INT_TO_PTR(X) ((void*)(__PTRDIFF_TYPE__)(X)) +# define PTR_TO_INT(X) ((int)(__PTRDIFF_TYPE__)(X)) +#elif !defined(__GNUC__) /* Works for compilers other than LLVM */ +# define INT_TO_PTR(X) ((void*)&((char*)0)[X]) +# define PTR_TO_INT(X) ((int)(((char*)X)-(char*)0)) +#elif defined(HAVE_STDINT_H) /* Use this case if we have ANSI headers */ +# define INT_TO_PTR(X) ((void*)(intptr_t)(X)) +# define PTR_TO_INT(X) ((int)(intptr_t)(X)) +#else /* Generates a warning - but it always works */ +# define INT_TO_PTR(X) ((void*)(X)) +# define PTR_TO_INT(X) ((int)(X)) +#endif + + +#ifdef _WIN32 +/* +** On windows, create a child process and specify the stdin, stdout, +** and stderr channels for that process to use. +** +** Return the number of errors. +*/ +static int win32_create_child_process( + wchar_t *zCmd, /* The command that the child process will run */ + HANDLE hIn, /* Standard input */ + HANDLE hOut, /* Standard output */ + HANDLE hErr, /* Standard error */ + DWORD *pChildPid /* OUT: Child process handle */ +){ + STARTUPINFOW si; + PROCESS_INFORMATION pi; + BOOL rc; + + memset(&si, 0, sizeof(si)); + si.cb = sizeof(si); + si.dwFlags = STARTF_USESTDHANDLES; + SetHandleInformation(hIn, HANDLE_FLAG_INHERIT, TRUE); + si.hStdInput = hIn; + SetHandleInformation(hOut, HANDLE_FLAG_INHERIT, TRUE); + si.hStdOutput = hOut; + SetHandleInformation(hErr, HANDLE_FLAG_INHERIT, TRUE); + si.hStdError = hErr; + rc = CreateProcessW( + NULL, /* Application Name */ + zCmd, /* Command-line */ + NULL, /* Process attributes */ + NULL, /* Thread attributes */ + TRUE, /* Inherit Handles */ + 0, /* Create flags */ + NULL, /* Environment */ + NULL, /* Current directory */ + &si, /* Startup Info */ + &pi /* Process Info */ + ); + if( rc ){ + CloseHandle( pi.hProcess ); + CloseHandle( pi.hThread ); + *pChildPid = pi.dwProcessId; + }else{ + win32_fatal_error("cannot create child process"); + } + return rc!=0; +} +#endif + +/* +** Create a child process running shell command "zCmd". *ppOut is +** a FILE that becomes the standard input of the child process. +** (The caller writes to *ppOut in order to send text to the child.) +** *ppIn is stdout from the child process. (The caller +** reads from *ppIn in order to receive input from the child.) +** Note that *ppIn is an unbuffered file descriptor, not a FILE. +** The process ID of the child is written into *pChildPid. +** +** Return the number of errors. 
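+**
+** Usage sketch (unix flavor; "ls" stands in for any command that needs
+** no input on its stdin):
+**
+**     int fdFromChild; FILE *toChild; int childPid;
+**     if( popen2("ls", &fdFromChild, &toChild, &childPid)==0 ){
+**       char buf[256];
+**       int n;
+**       while( (n = read(fdFromChild, buf, sizeof(buf)))>0 ){
+**         fwrite(buf, 1, n, stdout);
+**       }
+**       pclose2(fdFromChild, toChild, childPid);
+**     }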
+*/ +int popen2(const char *zCmd, int *pfdIn, FILE **ppOut, int *pChildPid){ +#ifdef _WIN32 + HANDLE hStdinRd, hStdinWr, hStdoutRd, hStdoutWr, hStderr; + SECURITY_ATTRIBUTES saAttr; + DWORD childPid = 0; + int fd; + + saAttr.nLength = sizeof(saAttr); + saAttr.bInheritHandle = TRUE; + saAttr.lpSecurityDescriptor = NULL; + hStderr = GetStdHandle(STD_ERROR_HANDLE); + if( !CreatePipe(&hStdoutRd, &hStdoutWr, &saAttr, 4096) ){ + win32_fatal_error("cannot create pipe for stdout"); + } + SetHandleInformation( hStdoutRd, HANDLE_FLAG_INHERIT, FALSE); + + if( !CreatePipe(&hStdinRd, &hStdinWr, &saAttr, 4096) ){ + win32_fatal_error("cannot create pipe for stdin"); + } + SetHandleInformation( hStdinWr, HANDLE_FLAG_INHERIT, FALSE); + + win32_create_child_process(fossil_utf8_to_unicode(zCmd), + hStdinRd, hStdoutWr, hStderr,&childPid); + *pChildPid = childPid; + *pfdIn = _open_osfhandle(PTR_TO_INT(hStdoutRd), 0); + fd = _open_osfhandle(PTR_TO_INT(hStdinWr), 0); + *ppOut = _fdopen(fd, "w"); + CloseHandle(hStdinRd); + CloseHandle(hStdoutWr); + return 0; +#else + int pin[2], pout[2]; + *pfdIn = 0; + *ppOut = 0; + *pChildPid = 0; + + if( pipe(pin)<0 ){ + return 1; + } + if( pipe(pout)<0 ){ + close(pin[0]); + close(pin[1]); + return 1; + } + *pChildPid = fork(); + if( *pChildPid<0 ){ + close(pin[0]); + close(pin[1]); + close(pout[0]); + close(pout[1]); + *pChildPid = 0; + return 1; + } + signal(SIGPIPE,SIG_IGN); + if( *pChildPid==0 ){ + int fd; + int nErr = 0; + /* This is the child process */ + close(0); + fd = dup(pout[0]); + if( fd!=0 ) nErr++; + close(pout[0]); + close(pout[1]); + close(1); + fd = dup(pin[1]); + if( fd!=1 ) nErr++; + close(pin[0]); + close(pin[1]); + execl("/bin/sh", "/bin/sh", "-c", zCmd, (char*)0); + return 1; + }else{ + /* This is the parent process */ + close(pin[1]); + *pfdIn = pin[0]; + close(pout[0]); + *ppOut = fdopen(pout[1], "w"); + return 0; + } +#endif +} + +/* +** Close the connection to a child process previously created using +** popen2(). +*/ +void pclose2(int fdIn, FILE *pOut, int childPid){ +#ifdef _WIN32 + /* Not implemented, yet */ + close(fdIn); + fclose(pOut); +#else + close(fdIn); + fclose(pOut); + while( waitpid(0, 0, WNOHANG)>0 ) {} +#endif +} Index: src/pqueue.c ================================================================== --- src/pqueue.c +++ src/pqueue.c @@ -22,10 +22,14 @@ ** ** The way this queue is used, we never expect it to contain more ** than 2 or 3 elements, so a simple array is sufficient as the ** implementation. This could give worst case O(N) insert times, ** but because of the nature of the problem we expect O(1) performance. +** +** Compatibility note: Some versions of OpenSSL export a symbols +** like "pqueue_insert". This is, technically, a bug in OpenSSL. +** We work around it here by using "pqueuex_" instead of "pqueue_". */ #include "config.h" #include "pqueue.h" #include @@ -38,45 +42,46 @@ struct PQueue { int cnt; /* Number of entries in the queue */ int sz; /* Number of slots in a[] */ struct QueueElement { int id; /* ID of the element */ + void *p; /* Content pointer */ double value; /* Value of element. Kept in ascending order */ } *a; }; #endif /* ** Initialize a PQueue structure */ -void pqueue_init(PQueue *p){ +void pqueuex_init(PQueue *p){ memset(p, 0, sizeof(*p)); } /* ** Destroy a PQueue. Delete all of its content. 
*/ -void pqueue_clear(PQueue *p){ +void pqueuex_clear(PQueue *p){ free(p->a); - pqueue_init(p); + pqueuex_init(p); } /* ** Change the size of the queue so that it contains N slots */ -static void pqueue_resize(PQueue *p, int N){ - p->a = realloc(p->a, sizeof(p->a[0])*N); +static void pqueuex_resize(PQueue *p, int N){ + p->a = fossil_realloc(p->a, sizeof(p->a[0])*N); p->sz = N; } /* ** Insert element e into the queue. */ -void pqueue_insert(PQueue *p, int e, double v){ +void pqueuex_insert(PQueue *p, int e, double v, void *pData){ int i, j; if( p->cnt+1>p->sz ){ - pqueue_resize(p, p->cnt+5); + pqueuex_resize(p, p->cnt+5); } for(i=0; icnt; i++){ if( p->a[i].value>v ){ for(j=p->cnt; j>i; j--){ p->a[j] = p->a[j-1]; @@ -83,26 +88,29 @@ } break; } } p->a[i].id = e; + p->a[i].p = pData; p->a[i].value = v; p->cnt++; } /* ** Extract the first element from the queue (the element with ** the smallest value) and return its ID. Return 0 if the queue ** is empty. */ -int pqueue_extract(PQueue *p){ +int pqueuex_extract(PQueue *p, void **pp){ int e, i; if( p->cnt==0 ){ + if( pp ) *pp = 0; return 0; } e = p->a[0].id; + if( pp ) *pp = p->a[0].p; for(i=0; icnt-1; i++){ p->a[i] = p->a[i+1]; } p->cnt--; return e; } Index: src/printf.c ================================================================== --- src/printf.c +++ src/printf.c @@ -13,22 +13,66 @@ ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** -** An implementation of printf() with extra conversion fields. +** This file contains implementions of routines for formatting output +** (ex: mprintf()) and for output to the console. */ #include "config.h" #include "printf.h" +#if defined(_WIN32) +# include +# include +#endif +#include + +/* Two custom conversions are used to show a prefix of SHA1 hashes: +** +** %!S Prefix of a length appropriate for URLs +** %S Prefix of a length appropriate for human display +** +** The following macros help determine those lengths. FOSSIL_HASH_DIGITS +** is the default number of digits to display to humans. This value can +** be overridden using the hash-digits setting. FOSSIL_HASH_DIGITS_URL +** is the minimum number of digits to be used in URLs. The number used +** will always be at least 6 more than the number used for human output, +** or 40 if the number of digits in human output is 34 or more. +*/ +#ifndef FOSSIL_HASH_DIGITS +# define FOSSIL_HASH_DIGITS 10 /* For %S (human display) */ +#endif +#ifndef FOSSIL_HASH_DIGITS_URL +# define FOSSIL_HASH_DIGITS_URL 16 /* For %!S (embedded in URLs) */ +#endif + +/* +** Return the number of SHA1 hash digits to display. The number is for +** human output if the bForUrl is false and is destined for a URL if +** bForUrl is false. +*/ +static int hashDigits(int bForUrl){ + static int nDigitHuman = 0; + static int nDigitUrl = 0; + if( nDigitHuman==0 ){ + nDigitHuman = db_get_int("hash-digits", FOSSIL_HASH_DIGITS); + if( nDigitHuman < 6 ) nDigitHuman = 6; + if( nDigitHuman > 40 ) nDigitHuman = 40; + nDigitUrl = nDigitHuman + 6; + if( nDigitUrl < FOSSIL_HASH_DIGITS_URL ) nDigitUrl = FOSSIL_HASH_DIGITS_URL; + if( nDigitUrl > 40 ) nDigitUrl = 40; + } + return bForUrl ? nDigitUrl : nDigitHuman; +} /* ** Conversion types fall into various categories as defined by the ** following enumeration. */ #define etRADIX 1 /* Integer types. %d, %x, %o, and so forth */ #define etFLOAT 2 /* Floating point. %f */ -#define etEXP 3 /* Exponentional notation. %e and %E */ +#define etEXP 3 /* Exponential notation. 
%e and %E */ #define etGENERIC 4 /* Floating or exponential, depending on exponent. %g */ #define etSIZE 5 /* Return number of characters processed so far. %n */ #define etSTRING 6 /* Strings. %s */ #define etDYNSTRING 7 /* Dynamically allocated strings. %z */ #define etPERCENT 8 /* Percent symbol. %% */ @@ -38,19 +82,20 @@ #define etBLOB 11 /* Blob objects. %b */ #define etBLOBSQL 12 /* Blob objects quoted for SQL. %B */ #define etSQLESCAPE 13 /* Strings with '\'' doubled. %q */ #define etSQLESCAPE2 14 /* Strings with '\'' doubled and enclosed in '', NULL pointers replaced by SQL NULL. %Q */ -#define etPOINTER 15 /* The %p conversion */ -#define etHTMLIZE 16 /* Make text safe for HTML */ -#define etHTTPIZE 17 /* Make text safe for HTTP. "/" encoded as %2f */ -#define etURLIZE 18 /* Make text safe for HTTP. "/" not encoded */ -#define etFOSSILIZE 19 /* The fossil header encoding format. */ -#define etPATH 20 /* Path type */ -#define etWIKISTR 21 /* Wiki text rendered from a char* */ -#define etWIKIBLOB 22 /* Wiki text rendered from a Blob* */ -#define etSTRINGID 23 /* String with length limit for a UUID prefix */ +#define etSQLESCAPE3 15 /* Double '"' characters within an indentifier. %w */ +#define etPOINTER 16 /* The %p conversion */ +#define etHTMLIZE 17 /* Make text safe for HTML */ +#define etHTTPIZE 18 /* Make text safe for HTTP. "/" encoded as %2f */ +#define etURLIZE 19 /* Make text safe for HTTP. "/" not encoded */ +#define etFOSSILIZE 20 /* The fossil header encoding format. */ +#define etPATH 21 /* Path type */ +#define etWIKISTR 22 /* Timeline comment text rendered from a char*: %W */ +#define etSTRINGID 23 /* String with length limit for a UUID prefix: %S */ +#define etROOT 24 /* String value of g.zTop: %R */ /* ** An "etByte" is an 8-bit unsigned value. */ @@ -90,15 +135,16 @@ { 'z', 0, 6, etDYNSTRING, 0, 0 }, { 'q', 0, 4, etSQLESCAPE, 0, 0 }, { 'Q', 0, 4, etSQLESCAPE2, 0, 0 }, { 'b', 0, 2, etBLOB, 0, 0 }, { 'B', 0, 2, etBLOBSQL, 0, 0 }, - { 'w', 0, 2, etWIKISTR, 0, 0 }, - { 'W', 0, 2, etWIKIBLOB, 0, 0 }, + { 'W', 0, 2, etWIKISTR, 0, 0 }, { 'h', 0, 4, etHTMLIZE, 0, 0 }, + { 'R', 0, 0, etROOT, 0, 0 }, { 't', 0, 4, etHTTPIZE, 0, 0 }, /* "/" -> "%2F" */ { 'T', 0, 4, etURLIZE, 0, 0 }, /* "/" unchanged */ + { 'w', 0, 4, etSQLESCAPE3, 0, 0 }, { 'F', 0, 4, etFOSSILIZE, 0, 0 }, { 'S', 0, 4, etSTRINGID, 0, 0 }, { 'c', 0, 0, etCHARX, 0, 0 }, { 'o', 8, 0, etRADIX, 0, 2 }, { 'u', 10, 0, etRADIX, 0, 0 }, @@ -155,24 +201,42 @@ int n = 0; while( (N-- != 0) && *(z++)!=0 ){ n++; } return n; } +/* +** Return an appropriate set of flags for wiki_convert() for displaying +** comments on a timeline. These flag settings are determined by +** configuration parameters. +** +** The altForm2 argument is true for "%!W" (with the "!" alternate-form-2 +** flags) and is false for plain "%W". The ! indicates that the text is +** to be rendered on a form rather than the timeline and that block markup +** is acceptable even if the "timeline-block-markup" setting is false. +*/ +static int wiki_convert_flags(int altForm2){ + static int wikiFlags = 0; + if( wikiFlags==0 ){ + if( altForm2 || db_get_boolean("timeline-block-markup", 0) ){ + wikiFlags = WIKI_INLINE | WIKI_NOBADLINKS; + }else{ + wikiFlags = WIKI_INLINE | WIKI_NOBLOCK | WIKI_NOBADLINKS; + } + if( db_get_boolean("timeline-plaintext", 0) ){ + wikiFlags |= WIKI_LINKSONLY; + } + } + return wikiFlags; +} + + /* ** The root program. All variations call this core. ** ** INPUTS: -** func This is a pointer to a function taking three arguments -** 1. 
A pointer to anything. Same as the "arg" parameter. -** 2. A pointer to the list of characters to be output -** (Note, this list is NOT null terminated.) -** 3. An integer number of characters to be output. -** (Note: This number might be zero.) -** -** arg This is the pointer to anything which will be passed as the -** first argument to "func". Use it for whatever you like. +** pBlob This is the blob where the output will be built. ** ** fmt This is the format string, as in the usual print. ** ** ap This is a pointer to a list of arguments. Same as in ** vfprint. @@ -241,11 +305,11 @@ blob_append(pBlob,"%",1); count++; break; } /* Find out what flags are present */ - flag_leftjustify = flag_plussign = flag_blanksign = + flag_leftjustify = flag_plussign = flag_blanksign = flag_alternateform = flag_altform2 = flag_zeropad = 0; done = 0; do{ switch( c ){ case '-': flag_leftjustify = 1; break; @@ -545,11 +609,11 @@ buf[0] = '%'; bufpt = buf; length = 1; break; case etCHARX: - c = buf[0] = (xtype==etCHARX ? va_arg(ap,int) : *++fmt); + c = buf[0] = va_arg(ap,int); if( precision>=0 ){ for(idx=1; idx=0 && precision=0 && limit etBUFSIZE ){ - bufpt = zExtra = malloc( n + cnt + 2 ); + bufpt = zExtra = fossil_malloc( n + cnt + 2 ); }else{ bufpt = buf; } bufpt[0] = '\''; for(i=0, j=1; ietBUFSIZE ){ - bufpt = zExtra = malloc( n ); - if( bufpt==0 ) return -1; + bufpt = zExtra = fossil_malloc( n ); }else{ bufpt = buf; } j = 0; - if( needQuote ) bufpt[j++] = '\''; + if( needQuote ) bufpt[j++] = q; for(i=0; i=0 && precision0 ) blob_append(pBlob,spaces,nspace); } } if( zExtra ){ - free(zExtra); + fossil_free(zExtra); } }/* End for loop over the format string */ return errorflag ? -1 : count; } /* End of function */ /* -** Print into memory obtained from malloc(). +** Print into memory obtained from fossil_malloc(). */ char *mprintf(const char *zFormat, ...){ va_list ap; char *z; va_start(ap,zFormat); @@ -798,10 +861,59 @@ void fossil_error_reset(void){ free(g.zErrMsg); g.zErrMsg = 0; g.iErrPriority = 0; } + +/* True if the last character standard output cursor is setting at +** the beginning of a blank link. False if a \r has been to move the +** cursor to the beginning of the line or if not at the beginning of +** a line. +** was a \n +*/ +static int stdoutAtBOL = 1; + +/* +** Write to standard output or standard error. +** +** On windows, transform the output into the current terminal encoding +** if the output is going to the screen. If output is redirected into +** a file, no translation occurs. No translation ever occurs on unix. +*/ +void fossil_puts(const char *z, int toStdErr){ + int n = (int)strlen(z); + if( n==0 ) return; + if( toStdErr==0 ) stdoutAtBOL = (z[n-1]=='\n'); +#if defined(_WIN32) + if( fossil_utf8_to_console(z, n, toStdErr) >= 0 ){ + return; + } +#endif + assert( toStdErr==0 || toStdErr==1 ); + fwrite(z, 1, n, toStdErr ? stderr : stdout); + fflush(toStdErr ? stderr : stdout); +} + +/* +** Force the standard output cursor to move to the beginning +** of a line, if it is not there already. +*/ +int fossil_force_newline(void){ + if( g.cgiOutput==0 && stdoutAtBOL==0 ){ + fossil_puts("\n", 0); + return 1; + } + return 0; +} + +/* +** Indicate that the cursor has moved to the start of a line by means +** other than writing to standard output. +*/ +void fossil_new_line_started(void){ + stdoutAtBOL = 1; +} /* ** Write output for user consumption. If g.cgiOutput is enabled, then ** send the output as part of the CGI reply. If g.cgiOutput is false, ** then write on standard output. 
@@ -810,8 +922,224 @@ va_list ap; va_start(ap, zFormat); if( g.cgiOutput ){ cgi_vprintf(zFormat, ap); }else{ - vprintf(zFormat, ap); + Blob b = empty_blob; + vxprintf(&b, zFormat, ap); + fossil_puts(blob_str(&b), 0); + blob_reset(&b); + } + va_end(ap); +} + +/* +** Print a trace message on standard error. +*/ +void fossil_trace(const char *zFormat, ...){ + va_list ap; + Blob b; + va_start(ap, zFormat); + b = empty_blob; + vxprintf(&b, zFormat, ap); + fossil_puts(blob_str(&b), 1); + blob_reset(&b); + va_end(ap); +} + +/* +** Write a message to the error log, if the error log filename is +** defined. +*/ +static void fossil_errorlog(const char *zFormat, ...){ + struct tm *pNow; + time_t now; + FILE *out; + const char *z; + int i; + va_list ap; + static const char *const azEnv[] = { "HTTP_HOST", "HTTP_USER_AGENT", + "PATH_INFO", "QUERY_STRING", "REMOTE_ADDR", "REQUEST_METHOD", + "REQUEST_URI", "SCRIPT_NAME" }; + if( g.zErrlog==0 ) return; + out = fossil_fopen(g.zErrlog, "a"); + if( out==0 ) return; + now = time(0); + pNow = gmtime(&now); + fprintf(out, "------------- %04d-%02d-%02d %02d:%02d:%02d UTC ------------\n", + pNow->tm_year+1900, pNow->tm_mon+1, pNow->tm_mday+1, + pNow->tm_hour, pNow->tm_min, pNow->tm_sec); + va_start(ap, zFormat); + vfprintf(out, zFormat, ap); + fprintf(out, "\n"); + va_end(ap); + for(i=0; i%h

      ", z); + cgi_reply(); + }else if( !g.fQuiet ){ + fossil_force_newline(); + fossil_puts("Fossil internal error: ", 1); + fossil_puts(z, 1); + fossil_puts("\n", 1); + } + } + exit(rc); +} + +NORETURN void fossil_fatal(const char *zFormat, ...){ + char *z; + int rc = 1; + va_list ap; + mainInFatalError = 1; + va_start(ap, zFormat); + z = vmprintf(zFormat, ap); + va_end(ap); + fossil_errorlog("fatal: %s", z); +#ifdef FOSSIL_ENABLE_JSON + if( g.json.isJsonMode ){ + json_err( g.json.resultCode, z, 1 ); + if( g.isHTTP ){ + rc = 0 /* avoid HTTP 500 */; + } + } + else +#endif + { + if( g.cgiOutput ){ + g.cgiOutput = 0; + cgi_printf("

      \n%h\n

      \n", z); + cgi_reply(); + }else if( !g.fQuiet ){ + fossil_force_newline(); + fossil_trace("%s\n", z); + } + } + free(z); + db_force_rollback(); + fossil_exit(rc); +} + +/* This routine works like fossil_fatal() except that if called +** recursively, the recursive call is a no-op. +** +** Use this in places where an error might occur while doing +** fatal error shutdown processing. Unlike fossil_panic() and +** fossil_fatal() which never return, this routine might return if +** the fatal error handing is already in process. The caller must +** be prepared for this routine to return. +*/ +void fossil_fatal_recursive(const char *zFormat, ...){ + char *z; + va_list ap; + int rc = 1; + if( mainInFatalError ) return; + mainInFatalError = 1; + va_start(ap, zFormat); + z = vmprintf(zFormat, ap); + va_end(ap); + fossil_errorlog("fatal: %s", z); +#ifdef FOSSIL_ENABLE_JSON + if( g.json.isJsonMode ){ + json_err( g.json.resultCode, z, 1 ); + if( g.isHTTP ){ + rc = 0 /* avoid HTTP 500 */; + } + } else +#endif + { + if( g.cgiOutput ){ + g.cgiOutput = 0; + cgi_printf("

      \n%h\n

      \n", z); + cgi_reply(); + }else{ + fossil_force_newline(); + fossil_trace("%s\n", z); + } + } + db_force_rollback(); + fossil_exit(rc); +} + + +/* Print a warning message */ +void fossil_warning(const char *zFormat, ...){ + char *z; + va_list ap; + va_start(ap, zFormat); + z = vmprintf(zFormat, ap); + va_end(ap); + fossil_errorlog("warning: %s", z); +#ifdef FOSSIL_ENABLE_JSON + if(g.json.isJsonMode){ + json_warn( FSL_JSON_W_UNKNOWN, z ); + }else +#endif + { + if( g.cgiOutput ){ + cgi_printf("

      \n%h\n

      \n", z); + }else{ + fossil_force_newline(); + fossil_trace("%s\n", z); + } } + free(z); +} + +/* +** Turn off any NL to CRNL translation on the stream given as an +** argument. This is a no-op on unix but is necessary on windows. +*/ +void fossil_binary_mode(FILE *p){ +#if defined(_WIN32) + _setmode(_fileno(p), _O_BINARY); +#endif +#ifdef __EMX__ /* OS/2 */ + setmode(fileno(p), O_BINARY); +#endif } ADDED src/publish.c Index: src/publish.c ================================================================== --- src/publish.c +++ src/publish.c @@ -0,0 +1,118 @@ +/* +** Copyright (c) 2014 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code used to implement the "publish" and +** "unpublished" commands. +*/ +#include "config.h" +#include "publish.h" +#include + +/* +** COMMAND: unpublished +** +** Usage: %fossil unpublished ?OPTIONS? +** +** Show a list of unpublished or "private" artifacts. Unpublished artifacts +** will never push and hence will not be shared with collaborators. +** +** By default, this command only shows unpublished check-ins. To show +** all unpublished artifacts, use the --all command-line option. +** +** OPTIONS: +** --all Show all artifacts, not just check-ins +*/ +void unpublished_cmd(void){ + int bAll = find_option("all",0,0)!=0; + + db_find_and_open_repository(0,0); + verify_all_options(); + if( bAll ){ + describe_artifacts_to_stdout("IN private", 0); + }else{ + describe_artifacts_to_stdout( + "IN (SELECT rid FROM private CROSS JOIN event" + " WHERE private.rid=event.objid" + " AND event.type='ci')", 0); + } +} + +/* +** COMMAND: publish +** +** Usage: %fossil publish ?--only? TAGS... +** +** Cause artifacts identified by TAGS... to be published (made non-private). +** This can be used (for example) to convert a private branch into a public +** branch, or to publish a bundle that was imported privately. +** +** If any of TAGS names a branch, then all check-ins on the most recent +** instance of that branch are included, not just the most recent check-in. +** +** If any of TAGS name check-ins then all files and tags associated with +** those check-ins are also published automatically. Except if the --only +** option is used, then only the specific artifacts identified by TAGS +** are published. +** +** If a TAG is already public, this command is a harmless no-op. +*/ +void publish_cmd(void){ + int bOnly = find_option("only",0,0)!=0; + int bTest = find_option("test",0,0)!=0; /* Undocumented --test option */ + int bExclusive = find_option("exclusive",0,0)!=0; /* undocumented */ + int i; + + db_find_and_open_repository(0,0); + verify_all_options(); + if( g.argc<3 ) usage("?--only? 
TAGS..."); + db_begin_transaction(); + db_multi_exec("CREATE TEMP TABLE ok(rid INTEGER PRIMARY KEY);"); + for(i=2; i0 AND value=%Q", + rid,TAG_BRANCH,g.argv[i]) ){ + rid = start_of_branch(rid, 1); + compute_descendants(rid, 1000000000); + }else{ + db_multi_exec("INSERT OR IGNORE INTO ok VALUES(%d)", rid); + } + } + if( !bOnly ){ + find_checkin_associates("ok", bExclusive); + } + if( bTest ){ + /* If the --test option is used, then do not actually publish any + ** artifacts. Instead, just list the artifact information on standard + ** output. The --test option is useful for verifying correct operation + ** of the logic that figures out which artifacts to publish, such as + ** the find_checkin_associates() routine + */ + describe_artifacts_to_stdout("IN ok", 0); + }else{ + /* Standard behavior is simply to remove the published documents from + ** the PRIVATE table */ + db_multi_exec( + "DELETE FROM ok WHERE rid NOT IN private;" + "DELETE FROM private WHERE rid IN ok;" + "INSERT OR IGNORE INTO unsent SELECT rid FROM ok;" + "INSERT OR IGNORE INTO unclustered SELECT rid FROM ok;" + ); + } + db_end_transaction(0); +} ADDED src/purge.c Index: src/purge.c ================================================================== --- src/purge.c +++ src/purge.c @@ -0,0 +1,582 @@ +/* +** Copyright (c) 2014 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code used to implement the "purge" command and +** related functionality for removing check-ins from a repository. It also +** manages the graveyard of purged content. +*/ +#include "config.h" +#include "purge.h" +#include + +/* +** SQL code used to initialize the schema of a bundle. +** +** The purgeevent table contains one entry for each purge event. For each +** purge event, multiple artifacts might have been removed. Each removed +** artifact is stored as an entry in the purgeitem table. +** +** The purgeevent and purgeitem tables are not synced, even by the +** "fossil config" command. They exist only as a backup in case of a +** mistaken purge or for content recovery in case there is a bug in the +** purge command. +*/ +static const char zPurgeInit[] = +@ CREATE TABLE IF NOT EXISTS "%w".purgeevent( +@ peid INTEGER PRIMARY KEY, -- Unique ID for the purge event +@ ctime DATETIME, -- When purge occurred. Seconds since 1970. 
+@ pnotes TEXT -- Human-readable notes about the purge event +@ ); +@ CREATE TABLE IF NOT EXISTS "%w".purgeitem( +@ piid INTEGER PRIMARY KEY, -- ID for the purge item +@ peid INTEGER REFERENCES purgeevent ON DELETE CASCADE, -- Purge event +@ orid INTEGER, -- Original RID before purged +@ uuid TEXT NOT NULL, -- SHA1 hash of the purged artifact +@ srcid INTEGER, -- Basis purgeitem for delta compression +@ isPrivate BOOLEAN, -- True if artifact was originally private +@ sz INT NOT NULL, -- Uncompressed size of the purged artifact +@ desc TEXT, -- Brief description of this artifact +@ data BLOB -- Compressed artifact content +@ ); +; + +/* +** This routine purges multiple artifacts from the repository, transfering +** those artifacts into the PURGEITEM table. +** +** Prior to invoking this routine, the caller must create a (TEMP) table +** named zTab that contains the RID of every artifact to be purged. +** +** This routine does the following: +** +** (1) Create the purgeevent and purgeitem tables, if required +** (2) Create a new purgeevent +** (3) Make sure no DELTA table entries depend on purged artifacts +** (4) Create new purgeitem entries for each purged artifact +** (5) Remove purged artifacts from the BLOB table +** (6) Remove references to purged artifacts in the following tables: +** (a) EVENT +** (b) PRIVATE +** (c) MLINK +** (d) PLINK +** (e) LEAF +** (f) UNCLUSTERED +** (g) UNSENT +** (h) BACKLINK +** (i) ATTACHMENT +** (j) TICKETCHNG +** (7) If any ticket artifacts were removed (6j) then rebuild the +** corresponding ticket entries. Possibly remove entries from +** the ticket table. +** +** Stops 1-4 (saving the purged artifacts into the graveyard) are only +** undertaken if the moveToGraveyard flag is true. +*/ +int purge_artifact_list( + const char *zTab, /* TEMP table containing list of RIDS to be purged */ + const char *zNote, /* Text of the purgeevent.pnotes field */ + int moveToGraveyard /* Move purged artifacts into the graveyard */ +){ + int peid = 0; /* New purgeevent ID */ + Stmt q; /* General-use prepared statement */ + char *z; + + assert( g.repositoryOpen ); /* Main database must already be open */ + db_begin_transaction(); + z = sqlite3_mprintf("IN \"%w\"", zTab); + describe_artifacts(z); + sqlite3_free(z); + + /* Make sure we are not removing a manifest that is the baseline of some + ** manifest that is being left behind. This step is not strictly necessary. + ** is is just a safety check. */ + if( purge_baseline_out_from_under_delta(zTab) ){ + fossil_fatal("attempt to purge a baseline manifest without also purging " + "all of its deltas"); + } + + /* Make sure that no delta that is left behind requires a purged artifact + ** as its basis. If such artifacts exist, go ahead and undelta them now. 
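The numbered steps above describe purge_artifact_list(); a hypothetical caller (not part of this change) follows the same shape: with a repository open, collect the victim RIDs into a TEMP table, optionally widen the set with find_checkin_associates(), then hand the table name over. The helper and temp-table name below are invented for illustration.

/* Hypothetical caller sketch: purge one check-in plus everything it
** exclusively references, keeping a graveyard copy.  Uses only the
** routines defined in this file and Fossil's db_* wrappers. */
static int purge_one_checkin_example(int rid){
  int peid;
  db_begin_transaction();
  db_multi_exec("CREATE TEMP TABLE to_purge(rid INTEGER PRIMARY KEY)");
  db_multi_exec("INSERT INTO to_purge VALUES(%d)", rid);
  find_checkin_associates("to_purge", 1);  /* add files/tags used only here */
  peid = purge_artifact_list("to_purge", "example purge", 1);
  db_end_transaction(0);
  return peid;      /* the new PURGEEVENT id, usable by "purge undo" */
}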
+ */ + db_prepare(&q, "SELECT rid FROM delta WHERE srcid IN \"%w\"" + " AND rid NOT IN \"%w\"", zTab, zTab); + while( db_step(&q)==SQLITE_ROW ){ + int rid = db_column_int(&q, 0); + content_undelta(rid); + verify_before_commit(rid); + } + db_finalize(&q); + + /* Construct the graveyard and copy the artifacts to be purged into the + ** graveyard */ + if( moveToGraveyard ){ + db_multi_exec(zPurgeInit /*works-like:"%w%w"*/, + db_name("repository"), db_name("repository")); + db_multi_exec( + "INSERT INTO purgeevent(ctime,pnotes) VALUES(now(),%Q)", zNote + ); + peid = db_last_insert_rowid(); + db_prepare(&q, "SELECT rid FROM delta WHERE rid IN \"%w\"" + " AND srcid NOT IN \"%w\"", zTab, zTab); + while( db_step(&q)==SQLITE_ROW ){ + int rid = db_column_int(&q, 0); + content_undelta(rid); + } + db_finalize(&q); + db_multi_exec( + "INSERT INTO purgeitem(peid,orid,uuid,sz,isPrivate,desc,data)" + " SELECT %d, rid, uuid, size," + " EXISTS(SELECT 1 FROM private WHERE private.rid=blob.rid)," + " (SELECT summary FROM description WHERE rid=blob.rid)," + " content" + " FROM blob WHERE rid IN \"%w\"", + peid, zTab + ); + db_multi_exec( + "UPDATE purgeitem" + " SET srcid=(SELECT piid FROM purgeitem px, delta" + " WHERE px.orid=delta.srcid" + " AND delta.rid=purgeitem.orid)" + " WHERE peid=%d", + peid + ); + } + + /* Remove the artifacts being purged. Also remove all references to those + ** artifacts from the secondary tables. */ + db_multi_exec("DELETE FROM blob WHERE rid IN \"%w\"", zTab); + db_multi_exec("DELETE FROM delta WHERE rid IN \"%w\"", zTab); + db_multi_exec("DELETE FROM delta WHERE srcid IN \"%w\"", zTab); + db_multi_exec("DELETE FROM event WHERE objid IN \"%w\"", zTab); + db_multi_exec("DELETE FROM private WHERE rid IN \"%w\"", zTab); + db_multi_exec("DELETE FROM mlink WHERE mid IN \"%w\"", zTab); + db_multi_exec("DELETE FROM plink WHERE pid IN \"%w\"", zTab); + db_multi_exec("DELETE FROM plink WHERE cid IN \"%w\"", zTab); + db_multi_exec("DELETE FROM leaf WHERE rid IN \"%w\"", zTab); + db_multi_exec("DELETE FROM phantom WHERE rid IN \"%w\"", zTab); + db_multi_exec("DELETE FROM unclustered WHERE rid IN \"%w\"", zTab); + db_multi_exec("DELETE FROM unsent WHERE rid IN \"%w\"", zTab); + db_multi_exec("DELETE FROM tagxref" + " WHERE rid IN \"%w\"" + " OR srcid IN \"%w\"" + " OR origid IN \"%w\"", zTab, zTab, zTab); + db_multi_exec("DELETE FROM backlink WHERE srctype=0 AND srcid IN \"%w\"", + zTab); + db_multi_exec( + "CREATE TEMP TABLE \"%w_tickets\" AS" + " SELECT DISTINCT tkt_uuid FROM ticket WHERE tkt_id IN" + " (SELECT tkt_id FROM ticketchng WHERE tkt_rid IN \"%w\")", + zTab, zTab); + db_multi_exec("DELETE FROM ticketchng WHERE tkt_rid IN \"%w\"", zTab); + db_prepare(&q, "SELECT tkt_uuid FROM \"%w_tickets\"", zTab); + while( db_step(&q)==SQLITE_ROW ){ + ticket_rebuild_entry(db_column_text(&q, 0)); + } + db_finalize(&q); + /* db_multi_exec("DROP TABLE \"%w_tickets\"", zTab); */ + + /* Mission accomplished */ + db_end_transaction(0); + return peid; +} + +/* +** The TEMP table named zTab contains RIDs for a set of check-ins. +** +** Check to see if any check-in in zTab is a baseline manifest for some +** delta manifest that is not in zTab. Return true if zTab contains a +** baseline for a delta that is not in zTab. +** +** This is a database integrity preservation check. The check-ins in zTab +** are about to be deleted or otherwise made inaccessible. This routine +** is checking to ensure that purging the check-ins in zTab will not delete +** a baseline manifest out from under a delta. 
+*/ +int purge_baseline_out_from_under_delta(const char *zTab){ + if( !db_table_has_column("repository","plink","baseid") ){ + /* Skip this check if the current database is an older schema that + ** does not contain the PLINK.BASEID field. */ + return 0; + }else{ + return db_int(0, + "SELECT 1 FROM plink WHERE baseid IN \"%w\" AND cid NOT IN \"%w\"", + zTab, zTab); + } +} + + +/* +** The TEMP table named zTab contains the RIDs for a set of check-in +** artifacts. Expand this set (by adding new entries to zTab) to include +** all other artifacts that are used the set of check-ins in +** the original list. +** +** If the bExclusive flag is true, then the set is only expanded by +** artifacts that are used exclusively by the check-ins in the set. +** When bExclusive is false, then all artifacts used by the check-ins +** are added even if those artifacts are also used by other check-ins +** not in the set. +** +** The "fossil publish" command with the (undocumented) --test and +** --exclusive options can be used for interactiving testing of this +** function. +*/ +void find_checkin_associates(const char *zTab, int bExclusive){ + db_begin_transaction(); + + /* Compute the set of files that need to be added to zTab */ + db_multi_exec("CREATE TEMP TABLE \"%w_files\"(fid INTEGER PRIMARY KEY)",zTab); + db_multi_exec( + "INSERT OR IGNORE INTO \"%w_files\"(fid)" + " SELECT fid FROM mlink WHERE fid!=0 AND mid IN \"%w\"", + zTab, zTab + ); + if( bExclusive ){ + /* But take out all files that are referenced by check-ins not in zTab */ + db_multi_exec( + "DELETE FROM \"%w_files\"" + " WHERE fid IN (SELECT fid FROM mlink" + " WHERE fid IN \"%w_files\"" + " AND mid NOT IN \"%w\")", + zTab, zTab, zTab + ); + } + + /* Compute the set of tags that need to be added to zTag */ + db_multi_exec("CREATE TEMP TABLE \"%w_tags\"(tid INTEGER PRIMARY KEY)",zTab); + db_multi_exec( + "INSERT OR IGNORE INTO \"%w_tags\"(tid)" + " SELECT DISTINCT srcid FROM tagxref WHERE rid in \"%w\" AND srcid!=0", + zTab, zTab + ); + if( bExclusive ){ + /* But take out tags that references some check-ins in zTab and other + ** check-ins not in zTab. The current Fossil implementation never creates + ** such tags, so the following should usually be a no-op. But the file + ** format specification allows such tags, so we should check for them. + */ + db_multi_exec( + "DELETE FROM \"%w_tags\"" + " WHERE tid IN (SELECT srcid FROM tagxref" + " WHERE srcid IN \"%w_tags\"" + " AND rid NOT IN \"%w\")", + zTab, zTab, zTab + ); + } + + /* Transfer the extra artifacts into zTab */ + db_multi_exec( + "INSERT OR IGNORE INTO \"%w\" SELECT fid FROM \"%w_files\";" + "INSERT OR IGNORE INTO \"%w\" SELECT tid FROM \"%w_tags\";" + "DROP TABLE \"%w_files\";" + "DROP TABLE \"%w_tags\";", + zTab, zTab, zTab, zTab, zTab, zTab + ); + + db_end_transaction(0); +} + +/* +** Display the content of a single purge event. +*/ +static void purge_list_event_content(int peid){ + Stmt q; + sqlite3_int64 sz = 0; + db_prepare(&q, "SELECT piid, substr(uuid,1,16), srcid, isPrivate," + " length(data), desc" + " FROM purgeitem WHERE peid=%d", peid); + while( db_step(&q)==SQLITE_ROW ){ + fossil_print(" %5d %s %4s %c %10d %s\n", + db_column_int(&q,0), + db_column_text(&q,1), + db_column_text(&q,2), + db_column_int(&q,3) ? 'P' : ' ', + db_column_int(&q,4), + db_column_text(&q,5)); + sz += db_column_int(&q,4); + } + db_finalize(&q); + fossil_print("%.11c%16s%.8c%10lld\n", ' ', "Total:", ' ', sz); +} + +/* +** Extract the content for purgeitem number piid into a Blob. 
Return +** the number of errors. +*/ +static int purge_extract_item( + int piid, /* ID of the item to extract */ + Blob *pOut /* Write the content into this blob */ +){ + Stmt q; + int srcid; + Blob h1, h2, x; + static Bag busy; + + db_prepare(&q, "SELECT uuid, srcid, data FROM purgeitem" + " WHERE piid=%d", piid); + if( db_step(&q)!=SQLITE_ROW ){ + db_finalize(&q); + fossil_fatal("missing purge-item %d", piid); + } + if( bag_find(&busy, piid) ) return 1; + srcid = db_column_int(&q, 1); + blob_zero(pOut); + blob_zero(&x); + db_column_blob(&q, 2, &x); + blob_uncompress(&x, pOut); + blob_reset(&x); + if( srcid>0 ){ + Blob baseline, out; + bag_insert(&busy, piid); + purge_extract_item(srcid, &baseline); + blob_zero(&out); + blob_delta_apply(&baseline, pOut, &out); + blob_reset(pOut); + *pOut = out; + blob_reset(&baseline); + } + bag_remove(&busy, piid); + blob_zero(&h1); + db_column_blob(&q, 0, &h1); + sha1sum_blob(pOut, &h2); + if( blob_compare(&h1, &h2)!=0 ){ + fossil_fatal("SHA1 hash mismatch - wanted %s, got %s", + blob_str(&h1), blob_str(&h2)); + } + blob_reset(&h1); + blob_reset(&h2); + db_finalize(&q); + return 0; +} + +/* +** There is a TEMP table ix(piid,srcid) containing a set of purgeitems +** that need to be transferred to the BLOB table. This routine does +** all items that have srcid=iSrc. The pBasis blob holds the content +** of the source document if iSrc>0. +*/ +static void purge_item_resurrect(int iSrc, Blob *pBasis){ + Stmt q; + static Bag busy; + assert( pBasis!=0 || iSrc==0 ); + if( iSrc>0 ){ + if( bag_find(&busy, iSrc) ){ + fossil_fatal("delta loop while uncompressing purged artifacts"); + } + bag_insert(&busy, iSrc); + } + db_prepare(&q, + "SELECT uuid, data, isPrivate, ix.piid" + " FROM ix, purgeitem" + " WHERE ix.srcid=%d" + " AND ix.piid=purgeitem.piid;", + iSrc + ); + while( db_step(&q)==SQLITE_ROW ){ + Blob h1, h2, c1, c2; + int isPriv, rid; + blob_zero(&h1); + db_column_blob(&q, 0, &h1); + blob_zero(&c1); + db_column_blob(&q, 1, &c1); + blob_uncompress(&c1, &c1); + blob_zero(&c2); + if( pBasis ){ + blob_delta_apply(pBasis, &c1, &c2); + blob_reset(&c1); + }else{ + c2 = c1; + } + sha1sum_blob(&c2, &h2); + if( blob_compare(&h1, &h2)!=0 ){ + fossil_fatal("SHA1 hash mismatch - wanted %s, got %s", + blob_str(&h1), blob_str(&h2)); + } + blob_reset(&h2); + isPriv = db_column_int(&q, 2); + rid = content_put_ex(&c2, blob_str(&h1), 0, 0, isPriv); + if( rid==0 ){ + fossil_fatal("%s", g.zErrMsg); + }else{ + if( !isPriv ) content_make_public(rid); + content_get(rid, &c1); + manifest_crosslink(rid, &c1, MC_NO_ERRORS); + } + purge_item_resurrect(db_column_int(&q,3), &c2); + blob_reset(&c2); + } + db_finalize(&q); + if( iSrc>0 ) bag_remove(&busy, iSrc); +} + +/* +** COMMAND: purge +** +** The purge command removes content from a repository and stores that content +** in a "graveyard". The graveyard exists so that content can be recovered +** using the "fossil purge undo" command. +** +** fossil purge cat UUID... +** +** Write the content of one or more artifacts in the graveyard onto +** standard output. +** +** fossil purge ?checkins? TAGS... ?OPTIONS? +** +** Move the check-ins identified by TAGS and all of their descendants +** out of the repository and into the graveyard. The "checkins" +** subcommand keyword is option and can be omitted as long as TAGS +** does not conflict with any other subcommand. +** +** If a TAGS includes a branch name then it means all the check-ins +** on the most recent occurrance of that branch. 
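Both purge_extract_item() and purge_item_resurrect() above guard their recursion with a static Bag of busy items so that a corrupt SRCID chain cannot loop forever. The toy standalone program below (invented data, not Fossil code) shows the same guard over a miniature delta chain.

#include <stdio.h>

/* Items 1..3: item i is stored as a delta against item aSrcid[i];
** srcid 0 means the item is stored as full text. */
static const int aSrcid[] = {0, 0, 1, 2};   /* slot 0 unused */
static int aBusy[4];                        /* 1 while item i is being expanded */

static int chain_depth(int i){
  int d;
  if( aSrcid[i]==0 ) return 0;              /* stored as full text */
  if( aBusy[i] ) return -1;                 /* srcid cycle: corrupt chain */
  aBusy[i] = 1;
  d = chain_depth(aSrcid[i]);               /* measure the basis first */
  aBusy[i] = 0;
  return d<0 ? -1 : d+1;
}

int main(void){
  printf("item 3 is %d deltas away from full text\n", chain_depth(3));
  return 0;                                 /* prints 2: 3 -> 2 -> 1 */
}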
+** +** --explain Make no changes, but show what would happen. +** --dry-run Make no chances. +** +** fossil purge list|ls ?-l? +** +** Show the graveyard of prior purges. The -l option gives more +** detail in the output. +** +** fossil purge obliterate ID... +** +** Remove one or more purge events from the graveyard. Once a purge +** event is obliterated, it can no longer be undone. +** +** fossil purge undo ID +** +** Restore the content previously removed by purge ID. +** +** SUMMARY: +** fossil purge cat UUID... +** fossil purge [checkins] TAGS... [--explain] +** fossil purge list +** fossil purge obliterate ID... +** fossil purge undo ID +*/ +void purge_cmd(void){ + const char *zSubcmd; + int n; + Stmt q; + if( g.argc<3 ) usage("SUBCOMMAND ?ARGS?"); + zSubcmd = g.argv[2]; + db_find_and_open_repository(0,0); + n = (int)strlen(zSubcmd); + if( strncmp(zSubcmd, "cat", n)==0 ){ + int i, piid; + Blob content; + if( g.argc<4 ) usage("cat UUID..."); + for(i=3; i=g.argc ) usage("[checkin] TAGS... [--explain]"); + db_multi_exec("CREATE TEMP TABLE ok(rid INTEGER PRIMARY KEY)"); + for(; i +#include /* -** Schema changes +** Make changes to the stable part of the schema (the part that is not +** simply deleted and reconstructed on a rebuild) to bring the schema +** up to the latest. */ -static const char zSchemaUpdates[] = +static const char zSchemaUpdates1[] = @ -- Index on the delta table @ -- @ CREATE INDEX IF NOT EXISTS delta_i1 ON delta(srcid); @ @ -- Artifacts that should not be processed are identified in the @@ -37,136 +40,268 @@ @ -- @ -- Shunned artifacts do not exist in the blob table. Hence they @ -- have not artifact ID (rid) and we thus must store their full @ -- UUID. @ -- -@ CREATE TABLE IF NOT EXISTS shun(uuid UNIQUE); +@ CREATE TABLE IF NOT EXISTS shun( +@ uuid UNIQUE, -- UUID of artifact to be shunned. Canonical form +@ mtime INTEGER, -- When added. Seconds since 1970 +@ scom TEXT -- Optional text explaining why the shun occurred +@ ); @ @ -- Artifacts that should not be pushed are stored in the "private" -@ -- table. +@ -- table. @ -- @ CREATE TABLE IF NOT EXISTS private(rid INTEGER PRIMARY KEY); @ -@ -- An entry in this table describes a database query that generates a -@ -- table of tickets. -@ -- -@ CREATE TABLE IF NOT EXISTS reportfmt( -@ rn integer primary key, -- Report number -@ owner text, -- Owner of this report format (not used) -@ title text, -- Title of this report -@ cols text, -- A color-key specification -@ sqlcode text -- An SQL SELECT statement for this report -@ ); -@ @ -- Some ticket content (such as the originators email address or contact @ -- information) needs to be obscured to protect privacy. This is achieved @ -- by storing an SHA1 hash of the content. For display, the hash is -@ -- mapped back into the original text using this table. +@ -- mapped back into the original text using this table. @ -- @ -- This table contains sensitive information and should not be shared @ -- with unauthorized users. @ -- @ CREATE TABLE IF NOT EXISTS concealed( -@ hash TEXT PRIMARY KEY, -@ content TEXT +@ hash TEXT PRIMARY KEY, -- The SHA1 hash of content +@ mtime INTEGER, -- Time created. Seconds since 1970 +@ content TEXT -- Content intended to be concealed +@ ); +; +static const char zSchemaUpdates2[] = +@ -- An entry in this table describes a database query that generates a +@ -- table of tickets. 
+@ -- +@ CREATE TABLE IF NOT EXISTS reportfmt( +@ rn INTEGER PRIMARY KEY, -- Report number +@ owner TEXT, -- Owner of this report format (not used) +@ title TEXT UNIQUE, -- Title of this report +@ mtime INTEGER, -- Time last modified. Seconds since 1970 +@ cols TEXT, -- A color-key specification +@ sqlcode TEXT -- An SQL SELECT statement for this report @ ); ; + +static void rebuild_update_schema(void){ + int rc; + db_multi_exec("%s", zSchemaUpdates1 /*safe-for-%s*/); + db_multi_exec("%s", zSchemaUpdates2 /*safe-for-%s*/); + + rc = db_exists("SELECT 1 FROM sqlite_master" + " WHERE name='user' AND sql GLOB '* mtime *'"); + if( rc==0 ){ + db_multi_exec( + "CREATE TEMP TABLE temp_user AS SELECT * FROM user;" + "DROP TABLE user;" + "CREATE TABLE user(\n" + " uid INTEGER PRIMARY KEY,\n" + " login TEXT UNIQUE,\n" + " pw TEXT,\n" + " cap TEXT,\n" + " cookie TEXT,\n" + " ipaddr TEXT,\n" + " cexpire DATETIME,\n" + " info TEXT,\n" + " mtime DATE,\n" + " photo BLOB\n" + ");" + "INSERT OR IGNORE INTO user" + " SELECT uid, login, pw, cap, cookie," + " ipaddr, cexpire, info, now(), photo FROM temp_user;" + "DROP TABLE temp_user;" + ); + } + + rc = db_exists("SELECT 1 FROM sqlite_master" + " WHERE name='config' AND sql GLOB '* mtime *'"); + if( rc==0 ){ + db_multi_exec( + "ALTER TABLE config ADD COLUMN mtime INTEGER;" + "UPDATE config SET mtime=now();" + ); + } + + rc = db_exists("SELECT 1 FROM sqlite_master" + " WHERE name='shun' AND sql GLOB '* mtime *'"); + if( rc==0 ){ + db_multi_exec( + "ALTER TABLE shun ADD COLUMN mtime INTEGER;" + "ALTER TABLE shun ADD COLUMN scom TEXT;" + "UPDATE shun SET mtime=now();" + ); + } + + rc = db_exists("SELECT 1 FROM sqlite_master" + " WHERE name='reportfmt' AND sql GLOB '* mtime *'"); + if( rc==0 ){ + db_multi_exec( + "CREATE TEMP TABLE old_fmt AS SELECT * FROM reportfmt;" + "DROP TABLE reportfmt;" + ); + db_multi_exec("%s", zSchemaUpdates2/*safe-for-%s*/); + db_multi_exec( + "INSERT OR IGNORE INTO reportfmt(rn,owner,title,cols,sqlcode,mtime)" + " SELECT rn, owner, title, cols, sqlcode, now() FROM old_fmt;" + "INSERT OR IGNORE INTO reportfmt(rn,owner,title,cols,sqlcode,mtime)" + " SELECT rn, owner, title || ' (' || rn || ')', cols, sqlcode, now()" + " FROM old_fmt;" + ); + } + + rc = db_exists("SELECT 1 FROM sqlite_master" + " WHERE name='concealed' AND sql GLOB '* mtime *'"); + if( rc==0 ){ + db_multi_exec( + "ALTER TABLE concealed ADD COLUMN mtime INTEGER;" + "UPDATE concealed SET mtime=now();" + ); + } +} /* -** Variables used for progress information +** Variables used to store state information about an on-going "rebuild" +** or "deconstruct". */ static int totalSize; /* Total number of artifacts to process */ static int processCnt; /* Number processed so far */ static int ttyOutput; /* Do progress output */ static Bag bagDone; /* Bag of records rebuilt */ + +static char *zFNameFormat; /* Format string for filenames on deconstruct */ +static int prefixLength; /* Length of directory prefix for deconstruct */ + + +/* +** Draw the percent-complete message. +** The input is actually the permill complete. 
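The column upgrades above all follow one pattern: look for the column name inside the table's CREATE statement (sqlite_master.sql, tested with GLOB), and only then run ALTER TABLE ... ADD COLUMN plus a backfill. A standalone sketch of the same pattern against a scratch database, using strftime('%s','now') in place of Fossil's now() SQL function:

#include <stdio.h>
#include <sqlite3.h>

/* True if the CONFIG table already carries an MTIME column. */
static int has_mtime(sqlite3 *db){
  sqlite3_stmt *pStmt = 0;
  int got = 0;
  if( sqlite3_prepare_v2(db,
        "SELECT 1 FROM sqlite_master"
        " WHERE name='config' AND sql GLOB '* mtime *'",
        -1, &pStmt, 0)==SQLITE_OK
   && sqlite3_step(pStmt)==SQLITE_ROW ){
    got = 1;
  }
  sqlite3_finalize(pStmt);
  return got;
}

int main(void){
  sqlite3 *db;
  sqlite3_open(":memory:", &db);
  sqlite3_exec(db,
    "CREATE TABLE config(name TEXT PRIMARY KEY, value CLOB);", 0, 0, 0);
  if( !has_mtime(db) ){
    /* Same shape as the upgrades above: add the column, then backfill. */
    sqlite3_exec(db,
      "ALTER TABLE config ADD COLUMN mtime INTEGER;"
      "UPDATE config SET mtime=strftime('%s','now');", 0, 0, 0);
  }
  printf("config has mtime: %s\n", has_mtime(db) ? "yes" : "no");
  sqlite3_close(db);
  return 0;
}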
+*/ +static void percent_complete(int permill){ + static int lastOutput = -1; + if( permill>lastOutput ){ + fossil_print(" %d.%d%% complete...\r", permill/10, permill%10); + fflush(stdout); + lastOutput = permill; + } +} + /* ** Called after each artifact is processed */ -static void rebuild_step_done(rid){ +static void rebuild_step_done(int rid){ /* assert( bag_find(&bagDone, rid)==0 ); */ bag_insert(&bagDone, rid); if( ttyOutput ){ processCnt++; - if (!g.fQuiet) { - printf("%d (%d%%)...\r", processCnt, (processCnt*100/totalSize)); - fflush(stdout); + if (!g.fQuiet && totalSize>0) { + percent_complete((processCnt*1000)/totalSize); } } } /* ** Rebuild cross-referencing information for the artifact ** rid with content pBase and all of its descendants. This ** routine clears the content buffer before returning. -*/ -static void rebuild_step(int rid, int size, Blob *pBase){ - Stmt q1; - Bag children; - Blob copy; - Blob *pUse; - int nChild, i, cid; - - /* Fix up the "blob.size" field if needed. */ - if( size!=blob_size(pBase) ){ - db_multi_exec( - "UPDATE blob SET size=%d WHERE rid=%d", blob_size(pBase), rid - ); - } - - /* Find all children of artifact rid */ - db_prepare(&q1, "SELECT rid FROM delta WHERE srcid=%d", rid); - bag_init(&children); - while( db_step(&q1)==SQLITE_ROW ){ - int cid = db_column_int(&q1, 0); - if( !bag_find(&bagDone, cid) ){ - bag_insert(&children, cid); - } - } - nChild = bag_count(&children); - db_finalize(&q1); - - /* Crosslink the artifact */ - if( nChild==0 ){ - pUse = pBase; - }else{ - blob_copy(©, pBase); - pUse = © - } - manifest_crosslink(rid, pUse); - blob_reset(pUse); - - /* Call all children recursively */ - for(cid=bag_first(&children), i=1; cid; cid=bag_next(&children, cid), i++){ - Stmt q2; - int sz; - if( nChild==i ){ - pUse = pBase; - }else{ - blob_copy(©, pBase); - pUse = © - } - db_prepare(&q2, "SELECT content, size FROM blob WHERE rid=%d", cid); - if( db_step(&q2)==SQLITE_ROW && (sz = db_column_int(&q2,1))>=0 ){ - Blob delta; - db_ephemeral_blob(&q2, 0, &delta); - blob_uncompress(&delta, &delta); - blob_delta_apply(pUse, &delta, pUse); - blob_reset(&delta); - db_finalize(&q2); - rebuild_step(cid, sz, pUse); - }else{ - db_finalize(&q2); - blob_reset(pUse); - } - } - bag_clear(&children); - rebuild_step_done(rid); -} - -/* -** Check to see if the the "sym-trunk" tag exists. If not, create it +** +** If the zFNameFormat variable is set, then this routine is +** called to run "fossil deconstruct" instead of the usual +** "fossil rebuild". In that case, instead of rebuilding the +** cross-referencing information, write the file content out +** to the appropriate directory. +** +** In both cases, this routine automatically recurses to process +** other artifacts that are deltas off of the current artifact. +** This is the most efficient way to extract all of the original +** artifact content from the Fossil repository. +*/ +static void rebuild_step(int rid, int size, Blob *pBase){ + static Stmt q1; + Bag children; + Blob copy; + Blob *pUse; + int nChild, i, cid; + + while( rid>0 ){ + + /* Fix up the "blob.size" field if needed. 
*/ + if( size!=blob_size(pBase) ){ + db_multi_exec( + "UPDATE blob SET size=%d WHERE rid=%d", blob_size(pBase), rid + ); + } + + /* Find all children of artifact rid */ + db_static_prepare(&q1, "SELECT rid FROM delta WHERE srcid=:rid"); + db_bind_int(&q1, ":rid", rid); + bag_init(&children); + while( db_step(&q1)==SQLITE_ROW ){ + int cid = db_column_int(&q1, 0); + if( !bag_find(&bagDone, cid) ){ + bag_insert(&children, cid); + } + } + nChild = bag_count(&children); + db_reset(&q1); + + /* Crosslink the artifact */ + if( nChild==0 ){ + pUse = pBase; + }else{ + blob_copy(©, pBase); + pUse = © + } + if( zFNameFormat==0 ){ + /* We are doing "fossil rebuild" */ + manifest_crosslink(rid, pUse, MC_NONE); + }else{ + /* We are doing "fossil deconstruct" */ + char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); + char *zFile = mprintf(zFNameFormat /*works-like:"%s:%s"*/, + zUuid, zUuid+prefixLength); + blob_write_to_file(pUse,zFile); + free(zFile); + free(zUuid); + blob_reset(pUse); + } + assert( blob_is_reset(pUse) ); + rebuild_step_done(rid); + + /* Call all children recursively */ + rid = 0; + for(cid=bag_first(&children), i=1; cid; cid=bag_next(&children, cid), i++){ + static Stmt q2; + int sz; + db_static_prepare(&q2, "SELECT content, size FROM blob WHERE rid=:rid"); + db_bind_int(&q2, ":rid", cid); + if( db_step(&q2)==SQLITE_ROW && (sz = db_column_int(&q2,1))>=0 ){ + Blob delta, next; + db_ephemeral_blob(&q2, 0, &delta); + blob_uncompress(&delta, &delta); + blob_delta_apply(pBase, &delta, &next); + blob_reset(&delta); + db_reset(&q2); + if( i0 ){ + processCnt += incrSize; + percent_complete((processCnt*1000)/totalSize); + } + if( doClustering ) create_cluster(); + if( ttyOutput && !g.fQuiet && totalSize>0 ){ + processCnt += incrSize; + percent_complete((processCnt*1000)/totalSize); + } if(!g.fQuiet && ttyOutput ){ - printf("\n"); + percent_complete(1000); + fossil_print("\n"); } return errCnt; } /* -** COMMAND: rebuild +** Attempt to convert more full-text blobs into delta-blobs for +** storage efficiency. +*/ +void extra_deltification(void){ + Stmt q; + int topid, previd, rid; + int prevfnid, fnid; + db_begin_transaction(); + db_prepare(&q, + "SELECT rid FROM event, blob" + " WHERE blob.rid=event.objid" + " AND event.type='ci'" + " AND NOT EXISTS(SELECT 1 FROM delta WHERE rid=blob.rid)" + " ORDER BY event.mtime DESC" + ); + topid = previd = 0; + while( db_step(&q)==SQLITE_ROW ){ + rid = db_column_int(&q, 0); + if( topid==0 ){ + topid = previd = rid; + }else{ + if( content_deltify(rid, previd, 0)==0 && previd!=topid ){ + content_deltify(rid, topid, 0); + } + previd = rid; + } + } + db_finalize(&q); + + db_prepare(&q, + "SELECT blob.rid, mlink.fnid FROM blob, mlink, plink" + " WHERE NOT EXISTS(SELECT 1 FROM delta WHERE rid=blob.rid)" + " AND mlink.fid=blob.rid" + " AND mlink.mid=plink.cid" + " AND plink.cid=mlink.mid" + " ORDER BY mlink.fnid, plink.mtime DESC" + ); + prevfnid = 0; + while( db_step(&q)==SQLITE_ROW ){ + rid = db_column_int(&q, 0); + fnid = db_column_int(&q, 1); + if( prevfnid!=fnid ){ + prevfnid = fnid; + topid = previd = rid; + }else{ + if( content_deltify(rid, previd, 0)==0 && previd!=topid ){ + content_deltify(rid, topid, 0); + } + previd = rid; + } + } + db_finalize(&q); + + db_end_transaction(0); +} + + +/* Reconstruct the private table. The private table contains the rid +** of every manifest that is tagged with "private" and every file that +** is not used by a manifest that is not private. 
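rebuild_step() above walks DELTA.SRCID edges so that every artifact is expanded from a parent whose content is already in memory, and the scan that feeds it starts from blobs that are not themselves deltas. The same reachability can be phrased as a recursive CTE; a standalone sketch over toy BLOB and DELTA tables (the rid values are invented):

#include <stdio.h>
#include <sqlite3.h>

static int print_row(void *pUnused, int nCol, char **azVal, char **azCol){
  (void)pUnused; (void)nCol; (void)azCol;
  printf("visit rid %s\n", azVal[0]);
  return 0;
}

int main(void){
  sqlite3 *db;
  sqlite3_open(":memory:", &db);
  sqlite3_exec(db,
    "CREATE TABLE blob(rid INTEGER PRIMARY KEY);"
    "CREATE TABLE delta(rid INTEGER PRIMARY KEY, srcid INT);"
    "INSERT INTO blob VALUES(1),(2),(3),(4);"
    /* 2 and 3 are deltas of 1; 4 is a delta of 3; 1 is stored whole */
    "INSERT INTO delta VALUES(2,1),(3,1),(4,3);", 0, 0, 0);
  sqlite3_exec(db,
    "WITH RECURSIVE walk(rid) AS ("
    "  SELECT rid FROM blob"
    "   WHERE NOT EXISTS(SELECT 1 FROM delta WHERE delta.rid=blob.rid)"
    "  UNION ALL"
    "  SELECT delta.rid FROM delta JOIN walk ON delta.srcid=walk.rid)"
    " SELECT rid FROM walk;", print_row, 0, 0);
  sqlite3_close(db);
  return 0;   /* visits 1, then 2, 3, 4 */
}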
+*/ +static void reconstruct_private_table(void){ + db_multi_exec( + "CREATE TEMP TABLE private_ckin(rid INTEGER PRIMARY KEY);" + "INSERT INTO private_ckin " + " SELECT rid FROM tagxref WHERE tagid=%d AND tagtype>0;" + "INSERT OR IGNORE INTO private" + " SELECT fid FROM mlink" + " EXCEPT SELECT fid FROM mlink WHERE mid NOT IN private_ckin;" + "INSERT OR IGNORE INTO private SELECT rid FROM private_ckin;" + "DROP TABLE private_ckin;", TAG_PRIVATE + ); + fix_private_blob_dependencies(0); +} + + +/* +** COMMAND: rebuild ** -** Usage: %fossil rebuild ?REPOSITORY? +** Usage: %fossil rebuild ?REPOSITORY? ?OPTIONS? ** ** Reconstruct the named repository database from the core ** records. Run this command after updating the fossil ** executable in a way that changes the database schema. +** +** Options: +** --analyze Run ANALYZE on the database after rebuilding +** --cluster Compute clusters for unclustered artifacts +** --compress Strive to make the database as small as possible +** --compress-only Skip the rebuilding step. Do --compress only +** --deanalyze Remove ANALYZE tables from the database +** --force Force the rebuild to complete even if errors are seen +** --ifneeded Only do the rebuild if it would change the schema version +** --index Always add in the full-text search index +** --noverify Skip the verification of changes to the BLOB table +** --noindex Always omit the full-text search index +** --pagesize N Set the database pagesize to N. (512..65536 and power of 2) +** --quiet Only show output if there are errors +** --randomize Scan artifacts in a random order +** --stats Show artifact statistics after rebuilding +** --vacuum Run VACUUM on the database after rebuilding +** --wal Set Write-Ahead-Log journalling mode on the database +** +** See also: deconstruct, reconstruct */ void rebuild_database(void){ int forceFlag; int randomizeFlag; - int errCnt; + int errCnt = 0; + int omitVerify; + int doClustering; + const char *zPagesize; + int newPagesize = 0; + int activateWal; + int runVacuum; + int runDeanalyze; + int runAnalyze; + int runCompress; + int showStats; + int runReindex; + int optNoIndex; + int optIndex; + int optIfNeeded; + int compressOnlyFlag; + omitVerify = find_option("noverify",0,0)!=0; forceFlag = find_option("force","f",0)!=0; randomizeFlag = find_option("randomize", 0, 0)!=0; + doClustering = find_option("cluster", 0, 0)!=0; + runVacuum = find_option("vacuum",0,0)!=0; + runDeanalyze = find_option("deanalyze",0,0)!=0; + runAnalyze = find_option("analyze",0,0)!=0; + runCompress = find_option("compress",0,0)!=0; + zPagesize = find_option("pagesize",0,1); + showStats = find_option("stats",0,0)!=0; + optIndex = find_option("index",0,0)!=0; + optNoIndex = find_option("noindex",0,0)!=0; + optIfNeeded = find_option("ifneeded",0,0)!=0; + compressOnlyFlag = find_option("compress-only",0,0)!=0; + if( compressOnlyFlag ) runCompress = runVacuum = 1; + if( zPagesize ){ + newPagesize = atoi(zPagesize); + if( newPagesize<512 || newPagesize>65536 + || (newPagesize&(newPagesize-1))!=0 + ){ + fossil_fatal("page size must be a power of two between 512 and 65536"); + } + } + activateWal = find_option("wal",0,0)!=0; if( g.argc==3 ){ db_open_repository(g.argv[2]); }else{ - db_find_and_open_repository(1); + db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); if( g.argc!=2 ){ usage("?REPOSITORY-FILENAME?"); } - db_close(); + db_close(1); db_open_repository(g.zRepositoryName); } + runReindex = search_index_exists() && !compressOnlyFlag; + if( optIndex ) runReindex = 1; + if( optNoIndex ) 
runReindex = 0; + if( optIfNeeded && fossil_strcmp(db_get("aux-schema",""),AUX_SCHEMA_MAX)==0 ){ + return; + } + + /* We should be done with options.. */ + verify_all_options(); + db_begin_transaction(); - ttyOutput = 1; - errCnt = rebuild_db(randomizeFlag, 1); + if( !compressOnlyFlag ){ + search_drop_index(); + ttyOutput = 1; + errCnt = rebuild_db(randomizeFlag, 1, doClustering); + reconstruct_private_table(); + } + db_multi_exec( + "REPLACE INTO config(name,value,mtime) VALUES('content-schema',%Q,now());" + "REPLACE INTO config(name,value,mtime) VALUES('aux-schema',%Q,now());" + "REPLACE INTO config(name,value,mtime) VALUES('rebuilt',%Q,now());", + CONTENT_SCHEMA, AUX_SCHEMA_MAX, get_version() + ); if( errCnt && !forceFlag ){ - printf("%d errors. Rolling back changes. Use --force to force a commit.\n", - errCnt); + fossil_print( + "%d errors. Rolling back changes. Use --force to force a commit.\n", + errCnt + ); db_end_transaction(1); }else{ + if( runCompress ){ + fossil_print("Extra delta compression... "); fflush(stdout); + extra_deltification(); + runVacuum = 1; + } + if( omitVerify ) verify_cancel(); db_end_transaction(0); + if( runCompress ) fossil_print("done\n"); + db_close(0); + db_open_repository(g.zRepositoryName); + if( newPagesize ){ + db_multi_exec("PRAGMA page_size=%d", newPagesize); + runVacuum = 1; + } + if( runDeanalyze ){ + db_multi_exec("DROP TABLE IF EXISTS sqlite_stat1;" + "DROP TABLE IF EXISTS sqlite_stat3;" + "DROP TABLE IF EXISTS sqlite_stat4;"); + } + if( runAnalyze ){ + fossil_print("Analyzing the database... "); fflush(stdout); + db_multi_exec("ANALYZE;"); + fossil_print("done\n"); + } + if( runVacuum ){ + fossil_print("Vacuuming the database... "); fflush(stdout); + db_multi_exec("VACUUM"); + fossil_print("done\n"); + } + if( activateWal ){ + db_multi_exec("PRAGMA journal_mode=WAL;"); + } + } + if( runReindex ) search_rebuild_index(); + if( showStats ){ + static const struct { int idx; const char *zLabel; } aStat[] = { + { CFTYPE_ANY, "Artifacts:" }, + { CFTYPE_MANIFEST, "Manifests:" }, + { CFTYPE_CLUSTER, "Clusters:" }, + { CFTYPE_CONTROL, "Tags:" }, + { CFTYPE_WIKI, "Wikis:" }, + { CFTYPE_TICKET, "Tickets:" }, + { CFTYPE_ATTACHMENT,"Attachments:" }, + { CFTYPE_EVENT, "Events:" }, + }; + int i; + int subtotal = 0; + for(i=0; i0 ) subtotal += g.parseCnt[k]; + } + fossil_print("%-15s %6d\n", "Other:", g.parseCnt[CFTYPE_ANY] - subtotal); } } /* -** COMMAND: test-detach +** COMMAND: test-detach ?REPOSITORY? ** ** Change the project-code and make other changes in order to prevent ** the repository from ever again pushing or pulling to other ** repositories. Used to create a "test" repository for development ** testing by cloning a working project repository. */ void test_detach_cmd(void){ - db_find_and_open_repository(1); + db_find_and_open_repository(0, 2); db_begin_transaction(); db_multi_exec( "DELETE FROM config WHERE name='last-sync-url';" "UPDATE config SET value=lower(hex(randomblob(20)))" " WHERE name='project-code';" @@ -338,64 +703,399 @@ ); db_end_transaction(0); } /* -** COMMAND: scrub -** %fossil scrub [--verily] [--force] [REPOSITORY] +** COMMAND: test-create-clusters +** +** Create clusters for all unclustered artifacts if the number of unclustered +** artifacts exceeds the current clustering threshold. 
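The post-rebuild maintenance driven by --pagesize, --analyze, --vacuum and --wal above maps onto ordinary SQLite statements. A standalone sketch (the database file name and page size are illustrative; a new page_size only takes effect when the database is subsequently rebuilt by VACUUM):

#include <sqlite3.h>

static void post_rebuild_maintenance(sqlite3 *db, int newPageSize){
  char zSql[64];
  if( newPageSize>0 ){
    sqlite3_snprintf(sizeof(zSql), zSql, "PRAGMA page_size=%d;", newPageSize);
    sqlite3_exec(db, zSql, 0, 0, 0);                       /* --pagesize N */
  }
  sqlite3_exec(db, "ANALYZE;", 0, 0, 0);                   /* --analyze    */
  sqlite3_exec(db, "VACUUM;", 0, 0, 0);                    /* --vacuum     */
  sqlite3_exec(db, "PRAGMA journal_mode=WAL;", 0, 0, 0);   /* --wal        */
}

int main(void){
  sqlite3 *db;
  sqlite3_open("example-rebuilt.fossil", &db);   /* illustrative file name */
  post_rebuild_maintenance(db, 8192);
  sqlite3_close(db);
  return 0;
}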
+*/ +void test_createcluster_cmd(void){ + if( g.argc==3 ){ + db_open_repository(g.argv[2]); + }else{ + db_find_and_open_repository(0, 0); + if( g.argc!=2 ){ + usage("?REPOSITORY-FILENAME?"); + } + db_close(1); + db_open_repository(g.zRepositoryName); + } + db_begin_transaction(); + create_cluster(); + db_end_transaction(0); +} + +/* +** COMMAND: test-clusters +** +** Verify that all non-private and non-shunned artifacts are accessible +** through the cluster chain. +*/ +void test_clusters_cmd(void){ + Bag pending; + Stmt q; + int n; + + db_find_and_open_repository(0, 2); + bag_init(&pending); + db_multi_exec( + "CREATE TEMP TABLE xdone(x INTEGER PRIMARY KEY);" + "INSERT INTO xdone SELECT rid FROM unclustered;" + "INSERT OR IGNORE INTO xdone SELECT rid FROM private;" + "INSERT OR IGNORE INTO xdone" + " SELECT blob.rid FROM shun JOIN blob USING(uuid);" + ); + db_prepare(&q, + "SELECT rid FROM unclustered WHERE rid IN" + " (SELECT rid FROM tagxref WHERE tagid=%d)", TAG_CLUSTER + ); + while( db_step(&q)==SQLITE_ROW ){ + bag_insert(&pending, db_column_int(&q, 0)); + } + db_finalize(&q); + while( bag_count(&pending)>0 ){ + Manifest *p; + int rid = bag_first(&pending); + int i; + + bag_remove(&pending, rid); + p = manifest_get(rid, CFTYPE_CLUSTER, 0); + if( p==0 ){ + fossil_fatal("bad cluster: rid=%d", rid); + } + for(i=0; inCChild; i++){ + const char *zUuid = p->azCChild[i]; + int crid = name_to_rid(zUuid); + if( crid==0 ){ + fossil_warning("cluster (rid=%d) references unknown artifact %s", + rid, zUuid); + continue; + } + db_multi_exec("INSERT OR IGNORE INTO xdone VALUES(%d)", crid); + if( db_exists("SELECT 1 FROM tagxref WHERE tagid=%d AND rid=%d", + TAG_CLUSTER, crid) ){ + bag_insert(&pending, crid); + } + } + manifest_destroy(p); + } + n = db_int(0, "SELECT count(*) FROM /*scan*/" + " (SELECT rid FROM blob EXCEPT SELECT x FROM xdone)"); + if( n==0 ){ + fossil_print("all artifacts reachable through clusters\n"); + }else{ + fossil_print("%d unreachable artifacts:\n", n); + db_prepare(&q, "SELECT rid, uuid FROM blob WHERE rid NOT IN xdone"); + while( db_step(&q)==SQLITE_ROW ){ + fossil_print(" %3d %s\n", db_column_int(&q,0), db_column_text(&q,1)); + } + db_finalize(&q); + } +} + +/* +** COMMAND: scrub* +** %fossil scrub ?OPTIONS? ?REPOSITORY? ** ** The command removes sensitive information (such as passwords) from a -** repository so that the respository can be sent to an untrusted reader. +** repository so that the repository can be sent to an untrusted reader. ** ** By default, only passwords are removed. However, if the --verily option ** is added, then private branches, concealed email addresses, IP ** addresses of correspondents, and similar privacy-sensitive fields -** are also purged. +** are also purged. If the --private option is used, then only private +** branches are removed and all other information is left intact. ** -** This command permanently deletes the scrubbed information. The effects -** of this command are irreversible. Use with caution. +** This command permanently deletes the scrubbed information. THE EFFECTS +** OF THIS COMMAND ARE IRREVERSIBLE. USE WITH CAUTION! ** ** The user is prompted to confirm the scrub unless the --force option ** is used. 
+** +** Options: +** --force do not prompt for confirmation +** --private only private branches are removed from the repository +** --verily scrub real thoroughly (see above) */ void scrub_cmd(void){ int bVerily = find_option("verily",0,0)!=0; int bForce = find_option("force", "f", 0)!=0; + int privateOnly = find_option("private",0,0)!=0; int bNeedRebuild = 0; - if( g.argc!=2 && g.argc!=3 ) usage("?REPOSITORY?"); - if( g.argc==2 ){ - db_must_be_within_tree(); - }else{ - db_open_repository(g.argv[2]); - } + db_find_and_open_repository(OPEN_ANY_SCHEMA, 2); + db_close(1); + db_open_repository(g.zRepositoryName); + + /* We should be done with options.. */ + verify_all_options(); + if( !bForce ){ Blob ans; - blob_zero(&ans); - prompt_user("Scrubbing the repository will permanently remove user\n" - "passwords and other information. Changes cannot be undone.\n" - "Continue (y/N)? ", &ans); - if( blob_str(&ans)[0]!='y' ){ - exit(1); + char cReply; + prompt_user( + "Scrubbing the repository will permanently delete information.\n" + "Changes cannot be undone. Continue (y/N)? ", &ans); + cReply = blob_str(&ans)[0]; + if( cReply!='y' && cReply!='Y' ){ + fossil_exit(1); } } db_begin_transaction(); - db_multi_exec( - "UPDATE user SET pw='';" - "DELETE FROM config WHERE name GLOB 'last-sync-*';" - ); - if( bVerily ){ - bNeedRebuild = db_exists("SELECT 1 FROM private"); - db_multi_exec( - "DELETE FROM concealed;" - "UPDATE rcvfrom SET ipaddr='unknown';" - "UPDATE user SET photo=NULL, info='';" - "INSERT INTO shun SELECT uuid FROM blob WHERE rid IN private;" - ); + if( privateOnly || bVerily ){ + bNeedRebuild = db_exists("SELECT 1 FROM private"); + delete_private_content(); + } + if( !privateOnly ){ + db_multi_exec( + "UPDATE user SET pw='';" + "DELETE FROM config WHERE name GLOB 'last-sync-*';" + "DELETE FROM config WHERE name GLOB 'peer-*';" + "DELETE FROM config WHERE name GLOB 'login-group-*';" + "DELETE FROM config WHERE name GLOB 'skin:*';" + "DELETE FROM config WHERE name GLOB 'subrepo:*';" + ); + if( bVerily ){ + db_multi_exec( + "DELETE FROM concealed;\n" + "UPDATE rcvfrom SET ipaddr='unknown';\n" + "DROP TABLE IF EXISTS accesslog;\n" + "UPDATE user SET photo=NULL, info='';\n" + "DROP TABLE IF EXISTS purgeevent;\n" + "DROP TABLE IF EXISTS purgeitem;\n" + "DROP TABLE IF EXISTS admin_log;\n" + "DROP TABLE IF EXISTS vcache;\n" + ); + } } if( !bNeedRebuild ){ db_end_transaction(0); db_multi_exec("VACUUM;"); }else{ - rebuild_db(0, 1); + rebuild_db(0, 1, 0); db_end_transaction(0); } } + +/* +** Recursively read all files from the directory zPath and install +** every file read as a new artifact in the repository. +*/ +void recon_read_dir(char *zPath){ + DIR *d; + struct dirent *pEntry; + Blob aContent; /* content of the just read artifact */ + static int nFileRead = 0; + void *zUnicodePath; + char *zUtf8Name; + + zUnicodePath = fossil_utf8_to_path(zPath, 1); + d = opendir(zUnicodePath); + if( d ){ + while( (pEntry=readdir(d))!=0 ){ + Blob path; + char *zSubpath; + + if( pEntry->d_name[0]=='.' ){ + continue; + } + zUtf8Name = fossil_path_to_utf8(pEntry->d_name); + zSubpath = mprintf("%s/%s", zPath, zUtf8Name); + fossil_path_free(zUtf8Name); +#ifdef _DIRENT_HAVE_D_TYPE + if( (pEntry->d_type==DT_UNKNOWN || pEntry->d_type==DT_LNK) + ? 
(file_isdir(zSubpath)==1) : (pEntry->d_type==DT_DIR) ) +#else + if( file_isdir(zSubpath)==1 ) +#endif + { + recon_read_dir(zSubpath); + }else{ + blob_init(&path, 0, 0); + blob_appendf(&path, "%s", zSubpath); + if( blob_read_from_file(&aContent, blob_str(&path))==-1 ){ + fossil_fatal("some unknown error occurred while reading \"%s\"", + blob_str(&path)); + } + content_put(&aContent); + blob_reset(&path); + blob_reset(&aContent); + fossil_print("\r%d", ++nFileRead); + fflush(stdout); + } + free(zSubpath); + } + closedir(d); + }else { + fossil_fatal("encountered error %d while trying to open \"%s\".", + errno, g.argv[3]); + } + fossil_path_free(zUnicodePath); +} + +/* +** COMMAND: reconstruct* +** +** Usage: %fossil reconstruct FILENAME DIRECTORY +** +** This command studies the artifacts (files) in DIRECTORY and +** reconstructs the fossil record from them. It places the new +** fossil repository in FILENAME. Subdirectories are read, files +** with leading '.' in the filename are ignored. +** +** See also: deconstruct, rebuild +*/ +void reconstruct_cmd(void) { + char *zPassword; + if( g.argc!=4 ){ + usage("FILENAME DIRECTORY"); + } + if( file_isdir(g.argv[3])!=1 ){ + fossil_print("\"%s\" is not a directory\n\n", g.argv[3]); + usage("FILENAME DIRECTORY"); + } + db_create_repository(g.argv[2]); + db_open_repository(g.argv[2]); + + /* We should be done with options.. */ + verify_all_options(); + + db_open_config(0); + db_begin_transaction(); + db_initial_setup(0, 0, 0); + + fossil_print("Reading files from directory \"%s\"...\n", g.argv[3]); + recon_read_dir(g.argv[3]); + fossil_print("\nBuilding the Fossil repository...\n"); + + rebuild_db(0, 1, 1); + reconstruct_private_table(); + + /* Skip the verify_before_commit() step on a reconstruct. Most artifacts + ** will have been changed and verification therefore takes a really, really + ** long time. + */ + verify_cancel(); + + db_end_transaction(0); + fossil_print("project-id: %s\n", db_get("project-code", 0)); + fossil_print("server-id: %s\n", db_get("server-code", 0)); + zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin); + fossil_print("admin-user: %s (initial password is \"%s\")\n", g.zLogin, zPassword); +} + +/* +** COMMAND: deconstruct* +** +** Usage %fossil deconstruct ?OPTIONS? DESTINATION +** +** +** This command exports all artifacts of a given repository and +** writes all artifacts to the file system. The DESTINATION directory +** will be populated with subdirectories AA and files AA/BBBBBBBBB.., where +** AABBBBBBBBB.. is the 40 character artifact ID, AA the first 2 characters. +** If -L|--prefixlength is given, the length (default 2) of the directory +** prefix can be set to 0,1,..,9 characters. +** +** Options: +** -R|--repository REPOSITORY deconstruct given REPOSITORY +** -L|--prefixlength N set the length of the names of the DESTINATION +** subdirectories to N +** --private Include private artifacts. 
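The DESTINATION/AA/BBBBBB... layout described above is a prefix split of the 40-character artifact ID, which deconstruct_cmd encodes in the zFNameFormat string. A standalone sketch of the path construction, with an invented destination directory and artifact ID:

#include <stdio.h>

int main(void){
  const char *zDest = "artifacts";                 /* DESTINATION        */
  const char *zUuid = "a8f9750d39e96ac2516276eaac7e83ced79eb4ca";
  int prefixLength = 2;                            /* -L|--prefixlength  */
  char zPath[300];

  if( prefixLength>0 ){
    /* artifacts/a8/f9750d39e96ac2516276eaac7e83ced79eb4ca */
    snprintf(zPath, sizeof(zPath), "%s/%.*s/%s",
             zDest, prefixLength, zUuid, zUuid+prefixLength);
  }else{
    snprintf(zPath, sizeof(zPath), "%s/%s", zDest, zUuid);
  }
  printf("%s\n", zPath);
  return 0;
}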
+** +** See also: rebuild, reconstruct +*/ +void deconstruct_cmd(void){ + const char *zDestDir; + const char *zPrefixOpt; + Stmt s; + int privateFlag; + + /* get and check prefix length argument and build format string */ + zPrefixOpt=find_option("prefixlength","L",1); + if( !zPrefixOpt ){ + prefixLength = 2; + }else{ + if( zPrefixOpt[0]>='0' && zPrefixOpt[0]<='9' && !zPrefixOpt[1] ){ + prefixLength = (int)(*zPrefixOpt-'0'); + }else{ + fossil_fatal("N(%s) is not a valid prefix length!",zPrefixOpt); + } + } + /* open repository and open query for all artifacts */ + db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); + privateFlag = find_option("private",0,0)!=0; + verify_all_options(); + /* check number of arguments */ + if( g.argc!=3 ){ + usage ("?OPTIONS? DESTINATION"); + } + /* get and check argument destination directory */ + zDestDir = g.argv[g.argc-1]; + if( !*zDestDir || !file_isdir(zDestDir)) { + fossil_fatal("DESTINATION(%s) is not a directory!",zDestDir); + } +#ifndef _WIN32 + if( file_access(zDestDir, W_OK) ){ + fossil_fatal("DESTINATION(%s) is not writeable!",zDestDir); + } +#else + /* write access on windows is not checked, errors will be + ** detected on blob_write_to_file + */ +#endif + if( prefixLength ){ + zFNameFormat = mprintf("%s/%%.%ds/%%s",zDestDir,prefixLength); + }else{ + zFNameFormat = mprintf("%s/%%s",zDestDir); + } + + bag_init(&bagDone); + ttyOutput = 1; + processCnt = 0; + if (!g.fQuiet) { + fossil_print("0 (0%%)...\r"); + fflush(stdout); + } + totalSize = db_int(0, "SELECT count(*) FROM blob"); + db_prepare(&s, + "SELECT rid, size FROM blob /*scan*/" + " WHERE NOT EXISTS(SELECT 1 FROM shun WHERE uuid=blob.uuid)" + " AND NOT EXISTS(SELECT 1 FROM delta WHERE rid=blob.rid) %s", + privateFlag==0 ? "AND rid NOT IN private" : "" + ); + while( db_step(&s)==SQLITE_ROW ){ + int rid = db_column_int(&s, 0); + int size = db_column_int(&s, 1); + if( size>=0 ){ + Blob content; + content_get(rid, &content); + rebuild_step(rid, size, &content); + } + } + db_finalize(&s); + db_prepare(&s, + "SELECT rid, size FROM blob" + " WHERE NOT EXISTS(SELECT 1 FROM shun WHERE uuid=blob.uuid) %s", + privateFlag==0 ? "AND rid NOT IN private" : "" + ); + while( db_step(&s)==SQLITE_ROW ){ + int rid = db_column_int(&s, 0); + int size = db_column_int(&s, 1); + if( size>=0 ){ + if( !bag_find(&bagDone, rid) ){ + Blob content; + content_get(rid, &content); + rebuild_step(rid, size, &content); + } + } + } + db_finalize(&s); + if(!g.fQuiet && ttyOutput ){ + fossil_print("\n"); + } + + /* free filename format string */ + free(zFNameFormat); + zFNameFormat = 0; +} ADDED src/regexp.c Index: src/regexp.c ================================================================== --- src/regexp.c +++ src/regexp.c @@ -0,0 +1,793 @@ +/* +** Copyright (c) 2013 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file was adapted from the test_regexp.c file in SQLite3. That +** file is in the public domain. 
+** +** The code in this file implements a compact but reasonably +** efficient regular-expression matcher for posix extended regular +** expressions against UTF8 text. The following syntax is supported: +** +** X* zero or more occurrences of X +** X+ one or more occurrences of X +** X? zero or one occurrences of X +** X{p,q} between p and q occurrences of X +** (X) match X +** X|Y X or Y +** ^X X occurring at the beginning of the string +** X$ X occurring at the end of the string +** . Match any single character +** \c Character c where c is one of \{}()[]|*+?. +** \c C-language escapes for c in afnrtv. ex: \t or \n +** \uXXXX Where XXXX is exactly 4 hex digits, unicode value XXXX +** \xXX Where XX is exactly 2 hex digits, unicode value XX +** [abc] Any single character from the set abc +** [^abc] Any single character not in the set abc +** [a-z] Any single character in the range a-z +** [^a-z] Any single character not in the range a-z +** \b Word boundary +** \w Word character. [A-Za-z0-9_] +** \W Non-word character +** \d Digit +** \D Non-digit +** \s Whitespace character +** \S Non-whitespace character +** +** A nondeterministic finite automaton (NFA) is used for matching, so the +** performance is bounded by O(N*M) where N is the size of the regular +** expression and M is the size of the input string. The matcher never +** exhibits exponential behavior. Note that the X{p,q} operator expands +** to p copies of X following by q-p copies of X? and that the size of the +** regular expression in the O(N*M) performance bound is computed after +** this expansion. +*/ +#include "config.h" +#include "regexp.h" + +/* The end-of-input character */ +#define RE_EOF 0 /* End of input */ + +/* The NFA is implemented as sequence of opcodes taken from the following +** set. Each opcode has a single integer argument. +*/ +#define RE_OP_MATCH 1 /* Match the one character in the argument */ +#define RE_OP_ANY 2 /* Match any one character. (Implements ".") */ +#define RE_OP_ANYSTAR 3 /* Special optimized version of .* */ +#define RE_OP_FORK 4 /* Continue to both next and opcode at iArg */ +#define RE_OP_GOTO 5 /* Jump to opcode at iArg */ +#define RE_OP_ACCEPT 6 /* Halt and indicate a successful match */ +#define RE_OP_CC_INC 7 /* Beginning of a [...] character class */ +#define RE_OP_CC_EXC 8 /* Beginning of a [^...] character class */ +#define RE_OP_CC_VALUE 9 /* Single value in a character class */ +#define RE_OP_CC_RANGE 10 /* Range of values in a character class */ +#define RE_OP_WORD 11 /* Perl word character [A-Za-z0-9_] */ +#define RE_OP_NOTWORD 12 /* Not a perl word character */ +#define RE_OP_DIGIT 13 /* digit: [0-9] */ +#define RE_OP_NOTDIGIT 14 /* Not a digit */ +#define RE_OP_SPACE 15 /* space: [ \t\n\r\v\f] */ +#define RE_OP_NOTSPACE 16 /* Not a digit */ +#define RE_OP_BOUNDARY 17 /* Boundary between word and non-word */ + +/* Each opcode is a "state" in the NFA */ +typedef unsigned short ReStateNumber; + +/* Because this is an NFA and not a DFA, multiple states can be active at +** once. An instance of the following object records all active states in +** the NFA. The implementation is optimized for the common case where the +** number of actives states is small. +*/ +typedef struct ReStateSet { + unsigned nState; /* Number of current states */ + ReStateNumber *aState; /* Current states */ +} ReStateSet; + +#if INTERFACE +/* An input string read one character at a time. 
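For orientation, the matcher described above is driven through three calls: re_compile() turns the pattern text into the opcode program, re_match() runs the NFA over a subject string, and re_free() releases the program. A usage sketch built only on those routines (the helper name is invented):

/* Return non-zero if zLine matches zPattern, compiling the pattern
** each time for simplicity.  Hypothetical helper, not part of this file. */
static int grep_line_example(const char *zPattern, const char *zLine){
  ReCompiled *pRe = 0;
  const char *zErr = re_compile(&pRe, zPattern, 0 /* case sensitive */);
  int hit;
  if( zErr ){
    fossil_fatal("bad regexp \"%s\": %s", zPattern, zErr);
  }
  hit = re_match(pRe, (const unsigned char*)zLine, -1);  /* -1: use strlen */
  re_free(pRe);
  return hit;
}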
+*/ +struct ReInput { + const unsigned char *z; /* All text */ + int i; /* Next byte to read */ + int mx; /* EOF when i>=mx */ +}; + +/* A compiled NFA (or an NFA that is in the process of being compiled) is +** an instance of the following object. +*/ +struct ReCompiled { + ReInput sIn; /* Regular expression text */ + const char *zErr; /* Error message to return */ + char *aOp; /* Operators for the virtual machine */ + int *aArg; /* Arguments to each operator */ + unsigned (*xNextChar)(ReInput*); /* Next character function */ + unsigned char zInit[12]; /* Initial text to match */ + int nInit; /* Number of characters in zInit */ + unsigned nState; /* Number of entries in aOp[] and aArg[] */ + unsigned nAlloc; /* Slots allocated for aOp[] and aArg[] */ +}; +#endif + +/* Add a state to the given state set if it is not already there */ +static void re_add_state(ReStateSet *pSet, int newState){ + unsigned i; + for(i=0; inState; i++) if( pSet->aState[i]==newState ) return; + pSet->aState[pSet->nState++] = newState; +} + +/* Extract the next unicode character from *pzIn and return it. Advance +** *pzIn to the first byte past the end of the character returned. To +** be clear: this routine converts utf8 to unicode. This routine is +** optimized for the common case where the next character is a single byte. +*/ +static unsigned re_next_char(ReInput *p){ + unsigned c; + if( p->i>=p->mx ) return 0; + c = p->z[p->i++]; + if( c>=0x80 ){ + if( (c&0xe0)==0xc0 && p->imx && (p->z[p->i]&0xc0)==0x80 ){ + c = (c&0x1f)<<6 | (p->z[p->i++]&0x3f); + if( c<0x80 ) c = 0xfffd; + }else if( (c&0xf0)==0xe0 && p->i+1mx && (p->z[p->i]&0xc0)==0x80 + && (p->z[p->i+1]&0xc0)==0x80 ){ + c = (c&0x0f)<<12 | ((p->z[p->i]&0x3f)<<6) | (p->z[p->i+1]&0x3f); + p->i += 2; + if( c<=0x3ff || (c>=0xd800 && c<=0xdfff) ) c = 0xfffd; + }else if( (c&0xf8)==0xf0 && p->i+3mx && (p->z[p->i]&0xc0)==0x80 + && (p->z[p->i+1]&0xc0)==0x80 && (p->z[p->i+2]&0xc0)==0x80 ){ + c = (c&0x07)<<18 | ((p->z[p->i]&0x3f)<<12) | ((p->z[p->i+1]&0x3f)<<6) + | (p->z[p->i+2]&0x3f); + p->i += 3; + if( c<=0xffff || c>0x10ffff ) c = 0xfffd; + }else{ + c = 0xfffd; + } + } + return c; +} +static unsigned re_next_char_nocase(ReInput *p){ + unsigned c = re_next_char(p); + return unicode_fold(c,1); +} + +/* Return true if c is a perl "word" character: [A-Za-z0-9_] */ +static int re_word_char(int c){ + return unicode_isalnum(c) || c=='_'; +} + +/* Return true if c is a "digit" character: [0-9] */ +static int re_digit_char(int c){ + return (c>='0' && c<='9'); +} + +/* Return true if c is a perl "space" character: [ \t\r\n\v\f] */ +static int re_space_char(int c){ + return c==' ' || c=='\t' || c=='\n' || c=='\r' || c=='\v' || c=='\f'; +} + +/* Run a compiled regular expression on the zero-terminated input +** string zIn[]. Return true on a match and false if there is no match. +*/ +int re_match(ReCompiled *pRe, const unsigned char *zIn, int nIn){ + ReStateSet aStateSet[2], *pThis, *pNext; + ReStateNumber aSpace[100]; + ReStateNumber *pToFree; + unsigned int i = 0; + unsigned int iSwap = 0; + int c = RE_EOF+1; + int cPrev = 0; + int rc = 0; + ReInput in; + + in.z = zIn; + in.i = 0; + in.mx = nIn>=0 ? nIn : strlen((const char*)zIn); + + /* Look for the initial prefix match, if there is one. 
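re_next_char() above folds multi-byte UTF-8 sequences into single code points before they reach the opcode dispatch. A worked two-byte example as a standalone snippet ("é" is the byte sequence 0xC3 0xA9 and decodes to U+00E9):

#include <stdio.h>

int main(void){
  unsigned char z[] = { 0xC3, 0xA9 };
  unsigned c = z[0];
  if( (c&0xe0)==0xc0 && (z[1]&0xc0)==0x80 ){
    c = (c&0x1f)<<6 | (z[1]&0x3f);     /* 0x03<<6 | 0x29 = 0xE9 */
  }
  printf("U+%04X\n", c);               /* prints U+00E9 */
  return 0;
}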
*/ + if( pRe->nInit ){ + unsigned char x = pRe->zInit[0]; + while( in.i+pRe->nInit<=in.mx + && (zIn[in.i]!=x || + strncmp((const char*)zIn+in.i, (const char*)pRe->zInit, pRe->nInit)!=0) + ){ + in.i++; + } + if( in.i+pRe->nInit>in.mx ) return 0; + } + + if( pRe->nState<=(sizeof(aSpace)/(sizeof(aSpace[0])*2)) ){ + pToFree = 0; + aStateSet[0].aState = aSpace; + }else{ + pToFree = fossil_malloc( sizeof(ReStateNumber)*2*pRe->nState ); + if( pToFree==0 ) return -1; + aStateSet[0].aState = pToFree; + } + aStateSet[1].aState = &aStateSet[0].aState[pRe->nState]; + pNext = &aStateSet[1]; + pNext->nState = 0; + re_add_state(pNext, 0); + while( c!=RE_EOF && pNext->nState>0 ){ + cPrev = c; + c = pRe->xNextChar(&in); + pThis = pNext; + pNext = &aStateSet[iSwap]; + iSwap = 1 - iSwap; + pNext->nState = 0; + for(i=0; inState; i++){ + int x = pThis->aState[i]; + switch( pRe->aOp[x] ){ + case RE_OP_MATCH: { + if( pRe->aArg[x]==c ) re_add_state(pNext, x+1); + break; + } + case RE_OP_ANY: { + re_add_state(pNext, x+1); + break; + } + case RE_OP_WORD: { + if( re_word_char(c) ) re_add_state(pNext, x+1); + break; + } + case RE_OP_NOTWORD: { + if( !re_word_char(c) ) re_add_state(pNext, x+1); + break; + } + case RE_OP_DIGIT: { + if( re_digit_char(c) ) re_add_state(pNext, x+1); + break; + } + case RE_OP_NOTDIGIT: { + if( !re_digit_char(c) ) re_add_state(pNext, x+1); + break; + } + case RE_OP_SPACE: { + if( re_space_char(c) ) re_add_state(pNext, x+1); + break; + } + case RE_OP_NOTSPACE: { + if( !re_space_char(c) ) re_add_state(pNext, x+1); + break; + } + case RE_OP_BOUNDARY: { + if( re_word_char(c)!=re_word_char(cPrev) ) re_add_state(pThis, x+1); + break; + } + case RE_OP_ANYSTAR: { + re_add_state(pNext, x); + re_add_state(pThis, x+1); + break; + } + case RE_OP_FORK: { + re_add_state(pThis, x+pRe->aArg[x]); + re_add_state(pThis, x+1); + break; + } + case RE_OP_GOTO: { + re_add_state(pThis, x+pRe->aArg[x]); + break; + } + case RE_OP_ACCEPT: { + rc = 1; + goto re_match_end; + } + case RE_OP_CC_INC: + case RE_OP_CC_EXC: { + int j = 1; + int n = pRe->aArg[x]; + int hit = 0; + for(j=1; j>0 && jaOp[x+j]==RE_OP_CC_VALUE ){ + if( pRe->aArg[x+j]==c ){ + hit = 1; + j = -1; + } + }else{ + if( pRe->aArg[x+j]<=c && pRe->aArg[x+j+1]>=c ){ + hit = 1; + j = -1; + }else{ + j++; + } + } + } + if( pRe->aOp[x]==RE_OP_CC_EXC ) hit = !hit; + if( hit ) re_add_state(pNext, x+n); + break; + } + } + } + } + for(i=0; inState; i++){ + if( pRe->aOp[pNext->aState[i]]==RE_OP_ACCEPT ){ rc = 1; break; } + } +re_match_end: + fossil_free(pToFree); + return rc; +} + +/* Resize the opcode and argument arrays for an RE under construction. +*/ +static int re_resize(ReCompiled *p, int N){ + char *aOp; + int *aArg; + aOp = fossil_realloc(p->aOp, N*sizeof(p->aOp[0])); + if( aOp==0 ) return 1; + p->aOp = aOp; + aArg = fossil_realloc(p->aArg, N*sizeof(p->aArg[0])); + if( aArg==0 ) return 1; + p->aArg = aArg; + p->nAlloc = N; + return 0; +} + +/* Insert a new opcode and argument into an RE under construction. The +** insertion point is just prior to existing opcode iBefore. +*/ +static int re_insert(ReCompiled *p, int iBefore, int op, int arg){ + int i; + if( p->nAlloc<=p->nState && re_resize(p, p->nAlloc*2) ) return 0; + for(i=p->nState; i>iBefore; i--){ + p->aOp[i] = p->aOp[i-1]; + p->aArg[i] = p->aArg[i-1]; + } + p->nState++; + p->aOp[iBefore] = op; + p->aArg[iBefore] = arg; + return iBefore; +} + +/* Append a new opcode and argument to the end of the RE under construction. 
+*/ +static int re_append(ReCompiled *p, int op, int arg){ + return re_insert(p, p->nState, op, arg); +} + +/* Make a copy of N opcodes starting at iStart onto the end of the RE +** under construction. +*/ +static void re_copy(ReCompiled *p, int iStart, int N){ + if( p->nState+N>=p->nAlloc && re_resize(p, p->nAlloc*2+N) ) return; + memcpy(&p->aOp[p->nState], &p->aOp[iStart], N*sizeof(p->aOp[0])); + memcpy(&p->aArg[p->nState], &p->aArg[iStart], N*sizeof(p->aArg[0])); + p->nState += N; +} + +/* Return true if c is a hexadecimal digit character: [0-9a-fA-F] +** If c is a hex digit, also set *pV = (*pV)*16 + valueof(c). If +** c is not a hex digit *pV is unchanged. +*/ +static int re_hex(int c, int *pV){ + if( c>='0' && c<='9' ){ + c -= '0'; + }else if( c>='a' && c<='f' ){ + c -= 'a' - 10; + }else if( c>='A' && c<='F' ){ + c -= 'A' - 10; + }else{ + return 0; + } + *pV = (*pV)*16 + (c & 0xff); + return 1; +} + +/* A backslash character has been seen, read the next character and +** return its interpretation. +*/ +static unsigned re_esc_char(ReCompiled *p){ + static const char zEsc[] = "afnrtv\\()*.+?[$^{|}]"; + static const char zTrans[] = "\a\f\n\r\t\v"; + int i, v = 0; + char c; + if( p->sIn.i>=p->sIn.mx ) return 0; + c = p->sIn.z[p->sIn.i]; + if( c=='u' && p->sIn.i+4sIn.mx ){ + const unsigned char *zIn = p->sIn.z + p->sIn.i; + if( re_hex(zIn[1],&v) + && re_hex(zIn[2],&v) + && re_hex(zIn[3],&v) + && re_hex(zIn[4],&v) + ){ + p->sIn.i += 5; + return v; + } + } + if( c=='x' && p->sIn.i+2sIn.mx ){ + const unsigned char *zIn = p->sIn.z + p->sIn.i; + if( re_hex(zIn[1],&v) + && re_hex(zIn[2],&v) + ){ + p->sIn.i += 3; + return v; + } + } + for(i=0; zEsc[i] && zEsc[i]!=c; i++){} + if( zEsc[i] ){ + if( i<6 ) c = zTrans[i]; + p->sIn.i++; + }else{ + p->zErr = "unknown \\ escape"; + } + return c; +} + +/* Forward declaration */ +static const char *re_subcompile_string(ReCompiled*); + +/* Peek at the next byte of input */ +static unsigned char rePeek(ReCompiled *p){ + return p->sIn.isIn.mx ? p->sIn.z[p->sIn.i] : 0; +} + +/* Compile RE text into a sequence of opcodes. Continue up to the +** first unmatched ")" character, then return. If an error is found, +** return a pointer to the error message string. +*/ +static const char *re_subcompile_re(ReCompiled *p){ + const char *zErr; + int iStart, iEnd, iGoto; + iStart = p->nState; + zErr = re_subcompile_string(p); + if( zErr ) return zErr; + while( rePeek(p)=='|' ){ + iEnd = p->nState; + re_insert(p, iStart, RE_OP_FORK, iEnd + 2 - iStart); + iGoto = re_append(p, RE_OP_GOTO, 0); + p->sIn.i++; + zErr = re_subcompile_string(p); + if( zErr ) return zErr; + p->aArg[iGoto] = p->nState - iGoto; + } + return 0; +} + +/* Compile an element of regular expression text (anything that can be +** an operand to the "|" operator). Return NULL on success or a pointer +** to the error message if there is a problem. 
+*/ +static const char *re_subcompile_string(ReCompiled *p){ + int iPrev = -1; + int iStart; + unsigned c; + const char *zErr; + while( (c = p->xNextChar(&p->sIn))!=0 ){ + iStart = p->nState; + switch( c ){ + case '|': + case '$': + case ')': { + p->sIn.i--; + return 0; + } + case '(': { + zErr = re_subcompile_re(p); + if( zErr ) return zErr; + if( rePeek(p)!=')' ) return "unmatched '('"; + p->sIn.i++; + break; + } + case '.': { + if( rePeek(p)=='*' ){ + re_append(p, RE_OP_ANYSTAR, 0); + p->sIn.i++; + }else{ + re_append(p, RE_OP_ANY, 0); + } + break; + } + case '*': { + if( iPrev<0 ) return "'*' without operand"; + re_insert(p, iPrev, RE_OP_GOTO, p->nState - iPrev + 1); + re_append(p, RE_OP_FORK, iPrev - p->nState + 1); + break; + } + case '+': { + if( iPrev<0 ) return "'+' without operand"; + re_append(p, RE_OP_FORK, iPrev - p->nState); + break; + } + case '?': { + if( iPrev<0 ) return "'?' without operand"; + re_insert(p, iPrev, RE_OP_FORK, p->nState - iPrev+1); + break; + } + case '{': { + int m = 0, n = 0; + int sz, j; + if( iPrev<0 ) return "'{m,n}' without operand"; + while( (c=rePeek(p))>='0' && c<='9' ){ m = m*10 + c - '0'; p->sIn.i++; } + n = m; + if( c==',' ){ + p->sIn.i++; + n = 0; + while( (c=rePeek(p))>='0' && c<='9' ){ n = n*10 + c-'0'; p->sIn.i++; } + } + if( c!='}' ) return "unmatched '{'"; + if( n>0 && n<m ) return "n less than m in '{m,n}'"; + p->sIn.i++; + sz = p->nState - iPrev; + if( m==0 ){ + if( n==0 ) return "both m and n are zero in '{m,n}'"; + re_insert(p, iPrev, RE_OP_FORK, sz+1); + n--; + }else{ + for(j=1; j<m; j++) re_copy(p, iPrev, sz); + } + for(j=m; j<n; j++){ + re_append(p, RE_OP_FORK, sz+1); + re_copy(p, iPrev, sz); + } + if( n==0 && m>0 ){ + re_append(p, RE_OP_FORK, -sz); + } + break; + } + case '[': { + int iFirst = p->nState; + if( rePeek(p)=='^' ){ + re_append(p, RE_OP_CC_EXC, 0); + p->sIn.i++; + }else{ + re_append(p, RE_OP_CC_INC, 0); + } + while( (c = p->xNextChar(&p->sIn))!=0 ){ + if( c=='[' && rePeek(p)==':' ){ + return "POSIX character classes not supported"; + } + if( c=='\\' ) c = re_esc_char(p); + if( rePeek(p)=='-' ){ + re_append(p, RE_OP_CC_RANGE, c); + p->sIn.i++; + c = p->xNextChar(&p->sIn); + if( c=='\\' ) c = re_esc_char(p); + re_append(p, RE_OP_CC_RANGE, c); + }else{ + re_append(p, RE_OP_CC_VALUE, c); + } + if( rePeek(p)==']' ){ p->sIn.i++; break; } + } + if( c==0 ) return "unclosed '['"; + p->aArg[iFirst] = p->nState - iFirst; + break; + } + case '\\': { + int specialOp = 0; + switch( rePeek(p) ){ + case 'b': specialOp = RE_OP_BOUNDARY; break; + case 'd': specialOp = RE_OP_DIGIT; break; + case 'D': specialOp = RE_OP_NOTDIGIT; break; + case 's': specialOp = RE_OP_SPACE; break; + case 'S': specialOp = RE_OP_NOTSPACE; break; + case 'w': specialOp = RE_OP_WORD; break; + case 'W': specialOp = RE_OP_NOTWORD; break; + } + if( specialOp ){ + p->sIn.i++; + re_append(p, specialOp, 0); + }else{ + c = re_esc_char(p); + re_append(p, RE_OP_MATCH, c); + } + break; + } + default: { + re_append(p, RE_OP_MATCH, c); + break; + } + } + iPrev = iStart; + } + return 0; +} + +/* Free and reclaim all the memory used by a previously compiled +** regular expression. Applications should invoke this routine once +** for every call to re_compile() to avoid memory leaks. +*/ +void re_free(ReCompiled *pRe){ + if( pRe ){ + fossil_free(pRe->aOp); + fossil_free(pRe->aArg); + fossil_free(pRe); + } +} + +/* +** Compile a textual regular expression in zIn[] into a compiled regular
** expression suitable for use by re_match() and return a pointer to the +** compiled regular expression in *ppRe. Return NULL on success or an +** error message if something goes wrong.
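Informally, the dialect accepted here (the routines above and below are authoritative) covers: "." and ".*", the quantifiers "*", "+", "?" and "{m,n}", grouping with "(...)", alternation with "|", character classes "[...]" and "[^...]" with ranges, the shorthands \b \d \D \s \S \w \W, the escapes \a \f \n \r \t \v, \xXX and \uXXXX, backslash-escaped metacharacters, and "^"/"$" anchors at the very beginning and end of the pattern. POSIX classes such as [:alpha:] are rejected, and case-insensitive matching is selected through the noCase argument of re_compile() rather than by pattern syntax.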
+*/ +const char *re_compile(ReCompiled **ppRe, const char *zIn, int noCase){ + ReCompiled *pRe; + const char *zErr; + int i, j; + + *ppRe = 0; + pRe = fossil_malloc( sizeof(*pRe) ); + if( pRe==0 ){ + return "out of memory"; + } + memset(pRe, 0, sizeof(*pRe)); + pRe->xNextChar = noCase ? re_next_char_nocase : re_next_char; + if( re_resize(pRe, 30) ){ + re_free(pRe); + return "out of memory"; + } + if( zIn[0]=='^' ){ + zIn++; + }else{ + re_append(pRe, RE_OP_ANYSTAR, 0); + } + pRe->sIn.z = (unsigned char*)zIn; + pRe->sIn.i = 0; + pRe->sIn.mx = strlen(zIn); + zErr = re_subcompile_re(pRe); + if( zErr ){ + re_free(pRe); + return zErr; + } + if( rePeek(pRe)=='$' && pRe->sIn.i+1>=pRe->sIn.mx ){ + re_append(pRe, RE_OP_MATCH, RE_EOF); + re_append(pRe, RE_OP_ACCEPT, 0); + *ppRe = pRe; + }else if( pRe->sIn.i>=pRe->sIn.mx ){ + re_append(pRe, RE_OP_ACCEPT, 0); + *ppRe = pRe; + }else{ + re_free(pRe); + return "unrecognized character"; + } + + /* The following is a performance optimization. If the regex begins with + ** ".*" (if the input regex lacks an initial "^") and afterwards there are + ** one or more matching characters, enter those matching characters into + ** zInit[]. The re_match() routine can then search ahead in the input + ** string looking for the initial match without having to run the whole + ** regex engine over the string. Do not worry about trying to match + ** unicode characters beyond plane 0 - those are very rare and this is + ** just an optimization. */ + if( pRe->aOp[0]==RE_OP_ANYSTAR ){ + for(j=0, i=1; j<sizeof(pRe->zInit)-2 && pRe->aOp[i]==RE_OP_MATCH; i++){ + unsigned x = pRe->aArg[i]; + if( x<=127 ){ + pRe->zInit[j++] = x; + }else if( x<=0xfff ){ + pRe->zInit[j++] = 0xc0 | (x>>6); + pRe->zInit[j++] = 0x80 | (x&0x3f); + }else if( x<=0xffff ){ + pRe->zInit[j++] = 0xd0 | (x>>12); + pRe->zInit[j++] = 0x80 | ((x>>6)&0x3f); + pRe->zInit[j++] = 0x80 | (x&0x3f); + }else{ + break; + } + } + if( j>0 && pRe->zInit[j-1]==0 ) j--; + pRe->nInit = j; + } + return pRe->zErr; +} + +/* +** Implementation of the regexp() SQL function. This function implements +** the built-in REGEXP operator. The first argument to the function is the +** pattern and the second argument is the string. So, the SQL statement: +** +** A REGEXP B +** +** is implemented as regexp(B,A). +*/ +static void re_sql_func( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + ReCompiled *pRe; /* Compiled regular expression */ + const char *zPattern; /* The regular expression */ + const unsigned char *zStr;/* String being searched */ + const char *zErr; /* Compile error message */ + + pRe = sqlite3_get_auxdata(context, 0); + if( pRe==0 ){ + zPattern = (const char*)sqlite3_value_text(argv[0]); + if( zPattern==0 ) return; + zErr = re_compile(&pRe, zPattern, 0); + if( zErr ){ + sqlite3_result_error(context, zErr, -1); + return; + } + if( pRe==0 ){ + sqlite3_result_error_nomem(context); + return; + } + sqlite3_set_auxdata(context, 0, pRe, (void(*)(void*))re_free); + } + zStr = (const unsigned char*)sqlite3_value_text(argv[1]); + if( zStr!=0 ){ + sqlite3_result_int(context, re_match(pRe, zStr, -1)); + } +} + +/* +** Invoke this routine in order to install the REGEXP function in an +** SQLite database connection. +** +** Use: +** +** sqlite3_auto_extension(sqlite3_add_regexp_func); +** +** to cause this extension to be automatically loaded into each new +** database connection.
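For direct use from C, a minimal sketch under the signatures above (the helper name and pattern are only for illustration, and error handling is reduced to the bare minimum):

    /* Sketch: return true if zText consists entirely of lowercase hex digits. */
    static int is_all_hex(const char *zText){
      ReCompiled *pRe = 0;
      const char *zErr = re_compile(&pRe, "^[0-9a-f]+$", 0);  /* 0 = case sensitive */
      int rc = 0;
      if( zErr==0 && pRe!=0 ){
        rc = re_match(pRe, (const unsigned char*)zText, -1);  /* -1: use strlen() */
      }
      re_free(pRe);  /* re_free() is a no-op on a NULL pointer */
      return rc;
    }

Inside SQL, once re_add_sql_func() below has been called on a connection, an expression such as x REGEXP '^[0-9a-f]+$' is evaluated as regexp('^[0-9a-f]+$', x), matching the argument order described above.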
+*/ +int re_add_sql_func(sqlite3 *db){ + return sqlite3_create_function(db, "regexp", 2, SQLITE_UTF8, 0, + re_sql_func, 0, 0); +} + +/* +** Run a "grep" over a single file +*/ +static void grep(ReCompiled *pRe, const char *zFile, FILE *in){ + int ln = 0; + int n; + char zLine[2000]; + while( fgets(zLine, sizeof(zLine), in) ){ + ln++; + n = (int)strlen(zLine); + while( n && (zLine[n-1]=='\n' || zLine[n-1]=='\r') ) n--; + if( re_match(pRe, (const unsigned char*)zLine, n) ){ + printf("%s:%d:%.*s\n", zFile, ln, n, zLine); + } + } +} + +/* +** COMMAND: test-grep +** +** Usage: %fossil test-grep REGEXP [FILE...] +** +** Run a regular expression match over the named disk files, or against +** standard input if no disk files are named on the command-line. +** +** Options: +** +** -i|--ignore-case Ignore case +*/ +void re_test_grep(void){ + ReCompiled *pRe; + const char *zErr; + int ignoreCase = find_option("ignore-case","i",0)!=0; + if( g.argc<3 ){ + usage("REGEXP [FILE...]"); + } + zErr = re_compile(&pRe, g.argv[2], ignoreCase); + if( zErr ) fossil_fatal("%s", zErr); + if( g.argc==3 ){ + grep(pRe, "-", stdin); + }else{ + int i; + for(i=3; i #include "report.h" #include /* Forward references to static routines */ static void report_format_hints(void); +#ifndef SQLITE_RECURSIVE +# define SQLITE_RECURSIVE 33 +#endif + /* -** WEBPAGE: /reportlist +** WEBPAGE: reportlist +** +** Main menu for Tickets. */ void view_list(void){ const char *zScript; Blob ril; /* Report Item List */ Stmt q; int rn = 0; int cnt = 0; login_check_credentials(); - if( !g.okRdTkt && !g.okNewTkt ){ login_needed(); return; } + if( !g.perm.RdTkt && !g.perm.NewTkt ){ + login_needed(g.anon.RdTkt || g.anon.NewTkt); + return; + } style_header("Ticket Main Menu"); + ticket_standard_submenu(T_ALL_BUT(T_REPLIST)); if( g.thTrace ) Th_Trace("BEGIN_REPORTLIST
      \n", -1); zScript = ticket_reportlist_code(); if( g.thTrace ) Th_Trace("BEGIN_REPORTLIST_SCRIPT
      \n", -1); - + blob_zero(&ril); ticket_init(); db_prepare(&q, "SELECT rn, title, owner FROM reportfmt ORDER BY title"); while( db_step(&q)==SQLITE_ROW ){ const char *zTitle = db_column_text(&q, 1); const char *zOwner = db_column_text(&q, 2); - if( zTitle[0] =='_' && !g.okTktFmt ){ + if( zTitle[0] =='_' && !g.perm.TktFmt ){ continue; } rn = db_column_int(&q, 0); cnt++; blob_appendf(&ril, "
    • "); if( zTitle[0] == '_' ){ blob_appendf(&ril, "%s", zTitle); } else { - blob_appendf(&ril, "%h", rn, zTitle); + blob_appendf(&ril, "%z%h", href("%R/rptview?rn=%d", rn), zTitle); } blob_appendf(&ril, "   "); - if( g.okWrite && zOwner && zOwner[0] ){ - blob_appendf(&ril, "(by %h) ", zOwner); - } - if( g.okTktFmt ){ - blob_appendf(&ril, "[copy] ", rn); - } - if( g.okAdmin || (g.okWrTkt && zOwner && strcmp(g.zLogin,zOwner)==0) ){ - blob_appendf(&ril, "[edit] ", rn); - } - if( g.okTktFmt ){ - blob_appendf(&ril, "[sql] ", rn); + if( g.perm.Write && zOwner && zOwner[0] ){ + blob_appendf(&ril, "(by %h) ", zOwner); + } + if( g.perm.TktFmt ){ + blob_appendf(&ril, "[%zcopy] ", + href("%R/rptedit?rn=%d©=1", rn)); + } + if( g.perm.Admin + || (g.perm.WrTkt && zOwner && fossil_strcmp(g.zLogin,zOwner)==0) + ){ + blob_appendf(&ril, "[%zedit]", + href("%R/rptedit?rn=%d", rn)); + } + if( g.perm.TktFmt ){ + blob_appendf(&ril, "[%zsql]", + href("%R/rptsql?rn=%d", rn)); } blob_appendf(&ril, "
    • \n"); } + db_finalize(&q); Th_Store("report_items", blob_str(&ril)); - + Th_Render(zScript); - + blob_reset(&ril); if( g.thTrace ) Th_Trace("END_REPORTLIST
      \n", -1); style_footer(); } @@ -88,22 +105,22 @@ /* ** Remove whitespace from both ends of a string. */ char *trim_string(const char *zOrig){ int i; - while( isspace(*zOrig) ){ zOrig++; } + while( fossil_isspace(*zOrig) ){ zOrig++; } i = strlen(zOrig); - while( i>0 && isspace(zOrig[i-1]) ){ i--; } + while( i>0 && fossil_isspace(zOrig[i-1]) ){ i--; } return mprintf("%.*s", i, zOrig); } /* ** Extract a numeric (integer) value from a string. */ char *extract_integer(const char *zOrig){ if( zOrig == NULL || zOrig[0] == 0 ) return ""; - while( *zOrig && !isdigit(*zOrig) ){ zOrig++; } + while( *zOrig && !fossil_isdigit(*zOrig) ){ zOrig++; } if( *zOrig ){ /* we have a digit. atoi() will get as much of the number as it ** can. We'll run it through mprintf() to get a string. Not ** an efficient way to do it, but effective. */ @@ -112,24 +129,24 @@ return ""; } /* ** Remove blank lines from the beginning of a string and -** all whitespace from the end. Removes whitespace preceeding a NL, +** all whitespace from the end. Removes whitespace preceding a NL, ** which also converts any CRNL sequence into a single NL. */ char *remove_blank_lines(const char *zOrig){ int i, j, n; char *z; - for(i=j=0; isspace(zOrig[i]); i++){ if( zOrig[i]=='\n' ) j = i+1; } + for(i=j=0; fossil_isspace(zOrig[i]); i++){ if( zOrig[i]=='\n' ) j = i+1; } n = strlen(&zOrig[j]); - while( n>0 && isspace(zOrig[j+n-1]) ){ n--; } + while( n>0 && fossil_isspace(zOrig[j+n-1]) ){ n--; } z = mprintf("%.*s", n, &zOrig[j]); for(i=j=0; z[i]; i++){ - if( z[i+1]=='\n' && z[i]!='\n' && isspace(z[i]) ){ + if( z[i+1]=='\n' && z[i]!='\n' && fossil_isspace(z[i]) ){ z[j] = z[i]; - while(isspace(z[j]) && z[j] != '\n' ){ j--; } + while(fossil_isspace(z[j]) && z[j] != '\n' ){ j--; } j++; continue; } z[j++] = z[i]; @@ -145,11 +162,11 @@ ** This is the SQLite authorizer callback used to make sure that the ** SQL statements entered by users do not try to do anything untoward. ** If anything suspicious is tried, set *(char**)pError to an error ** message obtained from malloc. */ -static int report_query_authorizer( +int report_query_authorizer( void *pError, int code, const char *zArg1, const char *zArg2, const char *zArg3, @@ -160,32 +177,37 @@ /* We've already seen an error. No need to continue. */ return SQLITE_OK; } switch( code ){ case SQLITE_SELECT: + case SQLITE_RECURSIVE: case SQLITE_FUNCTION: { break; } case SQLITE_READ: { - static const char *azAllowed[] = { + static const char *const azAllowed[] = { "ticket", + "ticketchng", "blob", "filename", "mlink", "plink", "event", "tag", "tagxref", }; int i; + if( fossil_strncmp(zArg1, "fx_", 3)==0 ){ + break; + } for(i=0; i=sizeof(azAllowed)/sizeof(azAllowed[0]) ){ *(char**)pError = mprintf("access to table \"%s\" is restricted",zArg1); rc = SQLITE_DENY; - }else if( !g.okRdAddr && strncmp(zArg2, "private_", 8)==0 ){ + }else if( !g.perm.RdAddr && strncmp(zArg2, "private_", 8)==0 ){ rc = SQLITE_IGNORE; } break; } default: { @@ -194,10 +216,20 @@ break; } } return rc; } + +/* +** Activate the query authorizer +*/ +static void report_restrict_sql(char **pzErr){ + sqlite3_set_authorizer(g.db, report_query_authorizer, (void*)pzErr); +} +static void report_unrestrict_sql(void){ + sqlite3_set_authorizer(g.db, 0, 0); +} /* ** Check the given SQL to see if is a valid query that does not ** attempt to do anything dangerous. 
Return 0 on success and a @@ -210,15 +242,17 @@ const char *zTail; sqlite3_stmt *pStmt; int rc; /* First make sure the SQL is a single query command by verifying that - ** the first token is "SELECT" and that there are no unquoted semicolons. + ** the first token is "SELECT" or "WITH" and that there are no unquoted + ** semicolons. */ - for(i=0; isspace(zSql[i]); i++){} - if( strncasecmp(&zSql[i],"select",6)!=0 ){ - return mprintf("The SQL must be a SELECT statement"); + for(i=0; fossil_isspace(zSql[i]); i++){} + if( fossil_strnicmp(&zSql[i], "select", 6)!=0 + && fossil_strnicmp(&zSql[i], "with", 4)!=0 ){ + return mprintf("The SQL must be a SELECT or WITH statement"); } for(i=0; zSql[i]; i++){ if( zSql[i]==';' ){ int bad; int c = zSql[i+1]; @@ -232,26 +266,33 @@ return mprintf("Semi-colon detected! " "Only a single SQL statement is allowed"); } } } - + /* Compile the statement and check for illegal accesses or syntax errors. */ - sqlite3_set_authorizer(g.db, report_query_authorizer, (void*)&zErr); - rc = sqlite3_prepare(g.db, zSql, -1, &pStmt, &zTail); + report_restrict_sql(&zErr); + rc = sqlite3_prepare_v2(g.db, zSql, -1, &pStmt, &zTail); if( rc!=SQLITE_OK ){ zErr = mprintf("Syntax error: %s", sqlite3_errmsg(g.db)); } + if( !sqlite3_stmt_readonly(pStmt) ){ + zErr = mprintf("SQL must not modify the database"); + } if( pStmt ){ sqlite3_finalize(pStmt); } - sqlite3_set_authorizer(g.db, 0, 0); + report_unrestrict_sql(); return zErr; } /* -** WEBPAGE: /rptsql +** WEBPAGE: rptsql +** URL: /rptsql?rn=N +** +** Display the SQL query used to generate a ticket report. The rn=N +** query parameter identifies the specific report number to be displayed. */ void view_see_sql(void){ int rn; const char *zTitle; const char *zSQL; @@ -258,32 +299,33 @@ const char *zOwner; const char *zClrKey; Stmt q; login_check_credentials(); - if( !g.okTktFmt ){ - login_needed(); + if( !g.perm.TktFmt ){ + login_needed(g.anon.TktFmt); return; } rn = atoi(PD("rn","0")); db_prepare(&q, "SELECT title, sqlcode, owner, cols " "FROM reportfmt WHERE rn=%d",rn); style_header("SQL For Report Format Number %d", rn); if( db_step(&q)!=SQLITE_ROW ){ @

      Unknown report number: %d(rn)

      style_footer(); + db_finalize(&q); return; } zTitle = db_column_text(&q, 0); zSQL = db_column_text(&q, 1); zOwner = db_column_text(&q, 2); zClrKey = db_column_text(&q, 3); @ @ - @ + @ @ - @ + @ @ @ @ @
      Title:%h(zTitle)
      %h(zTitle)
      Owner:%h(zOwner)
      %h(zOwner)
      SQL:
         @ %h(zSQL)
         @ 
      @@ -290,15 +332,25 @@ output_color_key(zClrKey, 0, "border=0 cellspacing=0 cellpadding=3"); @
      report_format_hints(); style_footer(); + db_finalize(&q); } /* -** WEBPAGE: /rptnew -** WEBPAGE: /rptedit +** WEBPAGE: rptnew +** WEBPAGE: rptedit +** +** Create (/rptnew) or edit (/rptedit) a ticket report format. +** Query parameters: +** +** rn=N Ticket report number. (required) +** t=TITLE Title of the report format +** w=USER Owner of the report format +** s=SQL SQL text used to implement the report +** k=KEY Color key */ void view_edit(void){ int rn; const char *zTitle; const char *z; @@ -306,12 +358,12 @@ const char *zClrKey; char *zSQL; char *zErr = 0; login_check_credentials(); - if( !g.okTktFmt ){ - login_needed(); + if( !g.perm.TktFmt ){ + login_needed(g.anon.TktFmt); return; } /*view_add_functions(0);*/ rn = atoi(PD("rn","0")); zTitle = P("t"); @@ -328,11 +380,11 @@ zTitle = db_text(0, "SELECT title FROM reportfmt " "WHERE rn=%d", rn); if( zTitle==0 ) cgi_redirect("reportlist"); style_header("Are You Sure?"); - @
      + @ @

      You are about to delete all traces of the report @ %h(zTitle) from @ the database. This is an irreversible operation. All records @ related to this report will be removed and cannot be recovered.

      @ @@ -350,23 +402,29 @@ } if( zTitle && zSQL ){ if( zSQL[0]==0 ){ zErr = "Please supply an SQL query statement"; }else if( (zTitle = trim_string(zTitle))[0]==0 ){ - zErr = "Please supply a title"; + zErr = "Please supply a title"; }else{ zErr = verify_sql_statement(zSQL); } + if( zErr==0 + && db_exists("SELECT 1 FROM reportfmt WHERE title=%Q and rn<>%d", + zTitle, rn) + ){ + zErr = mprintf("There is already another report named \"%h\"", zTitle); + } if( zErr==0 ){ login_verify_csrf_secret(); if( rn>0 ){ db_multi_exec("UPDATE reportfmt SET title=%Q, sqlcode=%Q," - " owner=%Q, cols=%Q WHERE rn=%d", + " owner=%Q, cols=%Q, mtime=now() WHERE rn=%d", zTitle, zSQL, zOwner, zClrKey, rn); }else{ - db_multi_exec("INSERT INTO reportfmt(title,sqlcode,owner,cols) " - "VALUES(%Q,%Q,%Q,%Q)", + db_multi_exec("INSERT INTO reportfmt(title,sqlcode,owner,cols,mtime) " + "VALUES(%Q,%Q,%Q,%Q,now())", zTitle, zSQL, zOwner, zClrKey); rn = db_last_insert_rowid(); } cgi_redirect(mprintf("rptview?rn=%d", rn)); return; @@ -395,48 +453,48 @@ if( zOwner==0 ) zOwner = g.zLogin; style_submenu_element("Cancel", "Cancel", "reportlist"); if( rn>0 ){ style_submenu_element("Delete", "Delete", "rptedit?rn=%d&del1=1", rn); } - style_header(rn>0 ? "Edit Report Format":"Create New Report Format"); + style_header("%s", rn>0 ? "Edit Report Format":"Create New Report Format"); if( zErr ){ - @
      %h(zErr)
      + @
      %h(zErr)
      } - @ - @ - @

      Report Title:
      - @

      - @

      Enter a complete SQL query statement against the "TICKET" table:
      + @

      + @ + @

      Report Title:
      + @

      + @

      Enter a complete SQL query statement against the "TICKET" table:
      @ @

      login_insert_csrf_secret(); - if( g.okAdmin ){ + if( g.perm.Admin ){ @

      Report owner: - @ + @ @

      } else { - @ + @ } @

      Enter an optional color key in the following box. (If blank, no @ color key is displayed.) Each line contains the text for a single @ entry in the key. The first token of each line is the background - @ color for that line.
      + @ color for that line.
      @ @

      - if( !g.okAdmin && strcmp(zOwner,g.zLogin)!=0 ){ + if( !g.perm.Admin && fossil_strcmp(zOwner,g.zLogin)!=0 ){ @

      This report format is owned by %h(zOwner). You are not allowed @ to change it.

      @ report_format_hints(); style_footer(); return; } - @ + @ if( rn>0 ){ - @ + @ } - @ + @
      report_format_hints(); style_footer(); } /* @@ -448,11 +506,11 @@ zSchema = db_text(0,"SELECT sql FROM sqlite_master WHERE name='ticket'"); if( zSchema==0 ){ zSchema = db_text(0,"SELECT sql FROM repository.sqlite_master" " WHERE name='ticket'"); } - @

      TICKET Schema

      + @

      TICKET Schema

      @
         @ %h(zSchema)
         @ 
      @

      Notes

      @
        @@ -462,14 +520,21 @@ @ is assumed to hold a ticket number. A hyperlink will be created from @ that column to a detailed view of the ticket.

        @ @
      • If a column of the result set is named "bgcolor" then the content @ of that column determines the background color of the row.

      • + @ + @
      • The text of all columns prior to the first column whose name begins + @ with underscore ("_") is shown character-for-character as it appears in + @ the database. In other words, it is assumed to have a mimetype of + @ text/plain. @ @

      • The first column whose name begins with underscore ("_") and all - @ subsequent columns are shown on their own rows in the table. This might - @ be useful for displaying the description of tickets. + @ subsequent columns are shown on their own rows in the table and with + @ wiki formatting. In other words, such rows are shown with a mimetype + @ of text/x-fossil-wiki. This is recommended for the "description" field + @ of tickets. @

      • @ @
      • The query can join other tables in the database besides TICKET. @

      • @
      @@ -478,17 +543,17 @@ @

      In this example, the first column in the result set is named @ "bgcolor". The value of this column is not displayed. Instead, it @ selects the background color of each row based on the TICKET.STATUS @ field of the database. The color key at the right shows the various @ color codes.

      - @ - @ - @ - @ - @ - @ - @ + @
      new or active
      review
      fixed
      tested
      defer
      closed
      + @ + @ + @ + @ + @ + @ @
      new or active
      review
      fixed
      tested
      defer
      closed
      @
         @ SELECT
         @   CASE WHEN status IN ('new','active') THEN '#f2dcdc'
         @        WHEN status='review' THEN '#e8e8bd'
      @@ -510,16 +575,16 @@
         @ FROM ticket
         @ 
      @

      To base the background color on the TICKET.PRIORITY or @ TICKET.SEVERITY fields, substitute the following code for the @ first column of the query:

      - @ - @ - @ - @ - @ - @ + @
      1
      2
      3
      4
      5
      + @ + @ + @ + @ + @ @
      1
      2
      3
      4
      5
      @
         @ SELECT
         @   CASE priority WHEN 1 THEN '#f2dcdc'
         @        WHEN 2 THEN '#e8e8bd'
      @@ -563,58 +628,16 @@
         @    sdate(changetime) AS 'Changed',
         @    assignedto AS 'Assigned',
         @    severity AS 'Svr',
         @    priority AS 'Pri',
         @    title AS 'Title',
      -  @    description AS '_Description',   -- When the column name begins with '_'
      -  @    remarks AS '_Remarks'            -- the data is shown on a separate row.
      -  @  FROM ticket
      -  @ 
      - @ - @

      Or, to see part of the description on the same row, use the - @ wiki() function with some string manipulation. Using the - @ tkt() function on the ticket number will also generate a linked - @ field, but without the extra edit column: - @

      - @
      -  @  SELECT
      -  @    tkt(tn) AS '',
      -  @    title AS 'Title',
      -  @    wiki(substr(description,0,80)) AS 'Description'
      -  @  FROM ticket
      -  @ 
      - @ -} - -#if 0 /* NOT USED */ -static void column_header(int rn,const char *zCol, int nCol, int nSorted, - const char *zDirection, const char *zExtra -){ - int set = (nCol==nSorted); - int desc = !strcmp(zDirection,"DESC"); - - /* - ** Clicking same column header 3 times in a row resets any sorting. - ** Note that we link to rptview, which means embedded reports will get - ** sent to the actual report view page as soon as a user tries to do - ** any sorting. I don't see that as a Bad Thing. - */ - if(set && desc){ - @ - @ %h(zCol) - }else{ - if(set){ - @ %h(zCol) - } -} -#endif + @ description AS '_Description', -- When the column name begins with '_' + @ remarks AS '_Remarks' -- content is rendered as wiki + @ FROM ticket + @ + @ +} /* ** The state of the report generation. */ struct GenerateHTML { @@ -622,31 +645,28 @@ int nCount; /* Row number */ int nCol; /* Number of columns */ int isMultirow; /* True if multiple table rows per query result row */ int iNewRow; /* Index of first column that goes on separate row */ int iBg; /* Index of column that defines background color */ + int wikiFlags; /* Flags passed into wiki_convert() */ + const char *zWikiStart; /* HTML before display of multi-line wiki */ + const char *zWikiEnd; /* HTML after display of multi-line wiki */ }; /* ** The callback function for db_query */ static int generate_html( void *pUser, /* Pointer to output state */ int nArg, /* Number of columns in this result row */ - char **azArg, /* Text of data in all columns */ - char **azName /* Names of the columns */ + const char **azArg, /* Text of data in all columns */ + const char **azName /* Names of the columns */ ){ struct GenerateHTML *pState = (struct GenerateHTML*)pUser; int i; const char *zTid; /* Ticket UUID. (value of column named '#') */ - int rn; /* Report number */ - char *zBg = 0; /* Use this background color */ - char zPage[30]; /* Text version of the ticket number */ - - /* Get the report number - */ - rn = pState->rn; + const char *zBg = 0; /* Use this background color */ /* Do initialization */ if( pState->nCount==0 ){ /* Turn off the authorizer. It is no longer doing anything since the @@ -661,36 +681,49 @@ pState->nCol = 0; pState->isMultirow = 0; pState->iNewRow = -1; pState->iBg = -1; for(i=0; iiBg = i; continue; } - if( g.okWrite && azName[i][0]=='#' ){ + if( g.perm.Write && azName[i][0]=='#' ){ pState->nCol++; } if( !pState->isMultirow ){ if( azName[i][0]=='_' ){ pState->isMultirow = 1; pState->iNewRow = i; + pState->wikiFlags = WIKI_NOBADLINKS; + pState->zWikiStart = ""; + pState->zWikiEnd = ""; + if( P("plaintext") ){ + pState->wikiFlags |= WIKI_LINKSONLY; + pState->zWikiStart = "
      ";
      +            pState->zWikiEnd = "
      "; + style_submenu_element("Formatted", "Formatted", + "%R/rptview?rn=%d", pState->rn); + }else{ + style_submenu_element("Plaintext", "Plaintext", + "%R/rptview?rn=%d&plaintext", pState->rn); + } }else{ pState->nCol++; } } } /* The first time this routine is called, output a table header */ - @ + @ zTid = 0; for(i=0; iiBg ) continue; if( pState->iNewRow>=0 && i>=pState->iNewRow ){ - if( g.okWrite && zTid ){ + if( g.perm.Write && zTid ){ @   zTid = 0; } if( zName[0]=='_' ) zName++; @ nCol)>%h(zName) @@ -699,14 +732,14 @@ zTid = zName; } @ %h(zName) } } - if( g.okWrite && zTid ){ + if( g.perm.Write && zTid ){ @   } - @ + @ } if( azArg==0 ){ @ @ No records match the report criteria @ @@ -723,47 +756,45 @@ /* Output the data for this entry from the database */ zBg = pState->iBg>=0 ? azArg[pState->iBg] : 0; if( zBg==0 ) zBg = "white"; - @ + @ zTid = 0; - zPage[0] = 0; for(i=0; iiBg ) continue; zData = azArg[i]; if( zData==0 ) zData = ""; if( pState->iNewRow>=0 && i>=pState->iNewRow ){ - if( zTid && g.okWrite ){ - @ edit + if( zTid && g.perm.Write ){ + @ %z(href("%R/tktedit/%h",zTid))edit zTid = 0; } if( zData[0] ){ Blob content; - @ nCol)> + @ + @ nCol)> + @ %s(pState->zWikiStart) blob_init(&content, zData, -1); - wiki_convert(&content, 0, 0); + wiki_convert(&content, 0, pState->wikiFlags); blob_reset(&content); + @ %s(pState->zWikiEnd) } }else if( azName[i][0]=='#' ){ zTid = zData; - if( g.okHistory ){ - @ %h(zData) - }else{ - @ %h(zData) - } + @ %z(href("%R/tktview?name=%h",zData))%h(zData) }else if( zData[0]==0 ){ @   }else{ @ @ %h(zData) @ } } - if( zTid && g.okWrite ){ - @ edit + if( zTid && g.perm.Write ){ + @ %z(href("%R/tktedit/%h",zTid))edit } @ return 0; } @@ -772,15 +803,15 @@ ** spaces. */ static void output_no_tabs(const char *z){ while( z && z[0] ){ int i, j; - for(i=0; z[i] && (!isspace(z[i]) || z[i]==' '); i++){} + for(i=0; z[i] && (!fossil_isspace(z[i]) || z[i]==' '); i++){} if( i>0 ){ cgi_printf("%.*s", i, z); } - for(j=i; isspace(z[j]); j++){} + for(j=i; fossil_isspace(z[j]); j++){} if( j>i ){ cgi_printf("%*s", j-i, ""); } z += j; } @@ -790,12 +821,12 @@ ** Output a row as a tab-separated line of text. */ static int output_tab_separated( void *pUser, /* Pointer to row-count integer */ int nArg, /* Number of columns in this result row */ - char **azArg, /* Text of data in all columns */ - char **azName /* Names of the columns */ + const char **azArg, /* Text of data in all columns */ + const char **azName /* Names of the columns */ ){ int *pCount = (int*)pUser; int i; if( *pCount==0 ){ @@ -815,28 +846,29 @@ /* ** Generate HTML that describes a color key. 
*/ void output_color_key(const char *zClrKey, int horiz, char *zTabArgs){ int i, j, k; - char *zSafeKey, *zToFree; - while( isspace(*zClrKey) ) zClrKey++; + const char *zSafeKey; + char *zToFree; + while( fossil_isspace(*zClrKey) ) zClrKey++; if( zClrKey[0]==0 ) return; @ if( horiz ){ @ } - zToFree = zSafeKey = mprintf("%h", zClrKey); + zSafeKey = zToFree = mprintf("%h", zClrKey); while( zSafeKey[0] ){ - while( isspace(*zSafeKey) ) zSafeKey++; - for(i=0; zSafeKey[i] && !isspace(zSafeKey[i]); i++){} - for(j=i; isspace(zSafeKey[j]); j++){} + while( fossil_isspace(*zSafeKey) ) zSafeKey++; + for(i=0; zSafeKey[i] && !fossil_isspace(zSafeKey[i]); i++){} + for(j=i; fossil_isspace(zSafeKey[j]); j++){} for(k=j; zSafeKey[k] && zSafeKey[k]!='\n' && zSafeKey[k]!='\r'; k++){} if( !horiz ){ - cgi_printf("\n", + cgi_printf("\n", i, zSafeKey, k-j, &zSafeKey[j]); }else{ - cgi_printf("\n", + cgi_printf("\n", i, zSafeKey, k-j, &zSafeKey[j]); } zSafeKey += k; } free(zToFree); @@ -844,22 +876,265 @@ @ } @
      %.*s
      %.*s
      %.*s%.*s
      } +/* +** Execute a single read-only SQL statement. Invoke xCallback() on each +** row. +*/ +static int db_exec_readonly( + sqlite3 *db, /* The database on which the SQL executes */ + const char *zSql, /* The SQL to be executed */ + int (*xCallback)(void*,int,const char**, const char**), + /* Invoke this callback routine */ + void *pArg, /* First argument to xCallback() */ + char **pzErrMsg /* Write error messages here */ +){ + int rc = SQLITE_OK; /* Return code */ + const char *zLeftover; /* Tail of unprocessed SQL */ + sqlite3_stmt *pStmt = 0; /* The current SQL statement */ + const char **azCols = 0; /* Names of result columns */ + int nCol; /* Number of columns of output */ + const char **azVals = 0; /* Text of all output columns */ + int i; /* Loop counter */ + + pStmt = 0; + rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, &zLeftover); + assert( rc==SQLITE_OK || pStmt==0 ); + if( rc!=SQLITE_OK ){ + return rc; + } + if( !pStmt ){ + /* this happens for a comment or white-space */ + return SQLITE_OK; + } + if( !sqlite3_stmt_readonly(pStmt) ){ + sqlite3_finalize(pStmt); + return SQLITE_ERROR; + } + + i = sqlite3_bind_parameter_index(pStmt, "$login"); + if( i ) sqlite3_bind_text(pStmt, i, g.zLogin, -1, SQLITE_TRANSIENT); + + nCol = sqlite3_column_count(pStmt); + azVals = fossil_malloc(2*nCol*sizeof(const char*) + 1); + while( (rc = sqlite3_step(pStmt))==SQLITE_ROW ){ + if( azCols==0 ){ + azCols = &azVals[nCol]; + for(i=0; i + @ function SortableTable(tableEl,columnTypes,initSort){ + @ this.tbody = tableEl.getElementsByTagName('tbody'); + @ this.columnTypes = columnTypes; + @ var ncols = tableEl.rows[0].cells.length; + @ for(var i = columnTypes.length; i<=ncols; i++){this.columnTypes += 't';} + @ this.sort = function (cell) { + @ var column = cell.cellIndex; + @ var sortFn; + @ switch( cell.sortType ){ + if( strchr(zColumnTypes,'n') ){ + @ case "n": sortFn = this.sortNumeric; break; + } + if( strchr(zColumnTypes,'N') ){ + @ case "N": sortFn = this.sortReverseNumeric; break; + } + @ case "t": sortFn = this.sortText; break; + if( strchr(zColumnTypes,'T') ){ + @ case "T": sortFn = this.sortReverseText; break; + } + if( strchr(zColumnTypes,'k') ){ + @ case "k": sortFn = this.sortKey; break; + } + if( strchr(zColumnTypes,'K') ){ + @ case "K": sortFn = this.sortReverseKey; break; + } + @ default: return; + @ } + @ this.sortIndex = column; + @ var newRows = new Array(); + @ for (j = 0; j < this.tbody[0].rows.length; j++) { + @ newRows[j] = this.tbody[0].rows[j]; + @ } + @ if( this.sortIndex==Math.abs(this.prevColumn)-1 ){ + @ newRows.reverse(); + @ this.prevColumn = -this.prevColumn; + @ }else{ + @ newRows.sort(sortFn); + @ this.prevColumn = this.sortIndex+1; + @ } + @ for (i=0;i0)){ + @ return; + @ } + @ if(x && x[0].rows && x[0].rows.length > 0) { + @ this.hdrRow = x[0].rows[0]; + @ } else { + @ return; + @ } + @ var thisObject = this; + @ this.prevColumn = initSort; + @ for (var i=0; i +} + /* -** WEBPAGE: /rptview +** WEBPAGE: rptview ** ** Generate a report. The rn query parameter is the report number ** corresponding to REPORTFMT.RN. If the tablist query parameter exists, ** then the output consists of lines of tab-separated fields instead of ** an HTML table. 
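For example (report number 2 and the title are only illustrative):

    /rptview?rn=2                 formatted HTML for report number 2
    /rptview?rn=2&tablist=1       the same report as tab-separated plain text
    /rptview?title=All+Tickets    look the report up by title instead of number

The report SQL is run through db_exec_readonly() above, so it may reference the $login parameter, which is bound to the name of the logged-in user; a clause such as WHERE assignedto=$login is enough to build a personal ticket report.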
*/ void rptview_page(void){ int count = 0; - int rn; + int rn, rc; char *zSql; char *zTitle; char *zOwner; char *zClrKey; int tabs; @@ -866,28 +1141,33 @@ Stmt q; char *zErr1 = 0; char *zErr2 = 0; login_check_credentials(); - if( !g.okRdTkt ){ login_needed(); return; } - rn = atoi(PD("rn","0")); - if( rn==0 ){ - cgi_redirect("reportlist"); - return; - } + if( !g.perm.RdTkt ){ login_needed(g.anon.RdTkt); return; } tabs = P("tablist")!=0; - /* view_add_functions(tabs); */ db_prepare(&q, - "SELECT title, sqlcode, owner, cols FROM reportfmt WHERE rn=%d", rn); - if( db_step(&q)!=SQLITE_ROW ){ + "SELECT title, sqlcode, owner, cols, rn FROM reportfmt WHERE rn=%d", + atoi(PD("rn","0"))); + rc = db_step(&q); + if( rc!=SQLITE_ROW ){ + db_finalize(&q); + db_prepare(&q, + "SELECT title, sqlcode, owner, cols, rn FROM reportfmt WHERE title GLOB %Q", + P("title")); + rc = db_step(&q); + } + if( rc!=SQLITE_ROW ){ + db_finalize(&q); cgi_redirect("reportlist"); return; } zTitle = db_column_malloc(&q, 0); zSql = db_column_malloc(&q, 1); zOwner = db_column_malloc(&q, 2); zClrKey = db_column_malloc(&q, 3); + rn = db_column_int(&q,4); db_finalize(&q); if( P("order_by") ){ /* ** If the user wants to do a column sort, wrap the query into a sub @@ -906,41 +1186,195 @@ count = 0; if( !tabs ){ struct GenerateHTML sState; db_multi_exec("PRAGMA empty_result_callbacks=ON"); - style_submenu_element("Raw", "Raw", - "rptview?tablist=1&%s", PD("QUERY_STRING","")); - if( g.okAdmin - || (g.okTktFmt && g.zLogin && zOwner && strcmp(g.zLogin,zOwner)==0) ){ + style_submenu_element("Raw", "Raw", + "rptview?tablist=1&%h", PD("QUERY_STRING","")); + if( g.perm.Admin + || (g.perm.TktFmt && g.zLogin && fossil_strcmp(g.zLogin,zOwner)==0) ){ style_submenu_element("Edit", "Edit", "rptedit?rn=%d", rn); } - if( g.okTktFmt ){ + if( g.perm.TktFmt ){ style_submenu_element("SQL", "SQL", "rptsql?rn=%d",rn); } - if( g.okNewTkt ){ + if( g.perm.NewTkt ){ style_submenu_element("New Ticket", "Create a new ticket", "%s/tktnew", g.zTop); } - style_header(zTitle); - output_color_key(zClrKey, 1, - "border=0 cellpadding=3 cellspacing=0 class=\"report\""); - @ + style_header("%s", zTitle); + output_color_key(zClrKey, 1, + "border=\"0\" cellpadding=\"3\" cellspacing=\"0\" class=\"report\""); + @
      sState.rn = rn; sState.nCount = 0; - sqlite3_set_authorizer(g.db, report_query_authorizer, (void*)&zErr1); - sqlite3_exec(g.db, zSql, generate_html, &sState, &zErr2); - sqlite3_set_authorizer(g.db, 0, 0); - @
      + report_restrict_sql(&zErr1); + db_exec_readonly(g.db, zSql, generate_html, &sState, &zErr2); + report_unrestrict_sql(); + @ if( zErr1 ){ - @

      Error: %h(zErr1)

      + @

      Error: %h(zErr1)

      }else if( zErr2 ){ - @

      Error: %h(zErr2)

      + @

      Error: %h(zErr2)

      } + output_table_sorting_javascript("reportTable","",0); style_footer(); }else{ - sqlite3_set_authorizer(g.db, report_query_authorizer, (void*)&zErr1); - sqlite3_exec(g.db, zSql, output_tab_separated, &count, &zErr2); - sqlite3_set_authorizer(g.db, 0, 0); + report_restrict_sql(&zErr1); + db_exec_readonly(g.db, zSql, output_tab_separated, &count, &zErr2); + report_unrestrict_sql(); cgi_set_content_type("text/plain"); } } + +/* +** report number for full table ticket export +*/ +static const char zFullTicketRptRn[] = "0"; + +/* +** report title for full table ticket export +*/ +static const char zFullTicketRptTitle[] = "full ticket export"; + +/* +** show all reports, which can be used for ticket show. +** Output is written to stdout as tab delimited table +*/ +void rpt_list_reports(void){ + Stmt q; + fossil_print("Available reports:\n"); + fossil_print("%s\t%s\n","report number","report title"); + fossil_print("%s\t%s\n",zFullTicketRptRn,zFullTicketRptTitle); + db_prepare(&q,"SELECT rn,title FROM reportfmt ORDER BY rn"); + while( db_step(&q)==SQLITE_ROW ){ + const char *zRn = db_column_text(&q, 0); + const char *zTitle = db_column_text(&q, 1); + + fossil_print("%s\t%s\n",zRn,zTitle); + } + db_finalize(&q); +} + +/* +** user defined separator used by ticket show command +*/ +static const char *zSep = 0; + +/* +** select the quoting algorithm for "ticket show" +*/ +#if INTERFACE +typedef enum eTktShowEnc { tktNoTab=0, tktFossilize=1 } tTktShowEncoding; +#endif +static tTktShowEncoding tktEncode = tktNoTab; + +/* +** Output the text given in the argument. Convert tabs and newlines into +** spaces. +*/ +static void output_no_tabs_file(const char *z){ + switch( tktEncode ){ + case tktFossilize: + { char *zFosZ; + + if( z && *z ){ + zFosZ = fossilize(z,-1); + fossil_print("%s",zFosZ); + free(zFosZ); + } + break; + } + default: + while( z && z[0] ){ + int i, j; + for(i=0; z[i] && (!fossil_isspace(z[i]) || z[i]==' '); i++){} + if( i>0 ){ + fossil_print("%.*s", i, z); + } + for(j=i; fossil_isspace(z[j]); j++){} + if( j>i ){ + fossil_print("%*s", j-i, ""); + } + z += j; + } + break; + } +} + +/* +** Output a row as a tab-separated line of text. +*/ +int output_separated_file( + void *pUser, /* Pointer to row-count integer */ + int nArg, /* Number of columns in this result row */ + const char **azArg, /* Text of data in all columns */ + const char **azName /* Names of the columns */ +){ + int *pCount = (int*)pUser; + int i; + + if( *pCount==0 ){ + for(i=0; i #include "rss.h" #include -#include /* ** WEBPAGE: timeline.rss +** URL: /timeline.rss?y=TYPE&n=LIMIT&tkt=UUID&tag=TAG&wiki=NAME&name=FILENAME +** +** Produce an RSS feed of the timeline. +** +** TYPE may be: all, ci (show check-ins only), t (show tickets only), +** w (show wiki only). +** +** LIMIT is the number of items to show. +** +** tkt=UUID filters for only those events for the specified ticket. tag=TAG +** filters for a tag, and wiki=NAME for a wiki page. Only one may be used. +** +** In addition, name=FILENAME filters for a specific file. This may be +** combined with one of the other filters (useful for looking at a specific +** branch). */ + void page_timeline_rss(void){ Stmt q; int nLine=0; char *zPubDate, *zProjectName, *zProjectDescr, *zFreeProjectName=0; Blob bSQL; const char *zType = PD("y","all"); /* Type of events. 
All if NULL */ + const char *zTicketUuid = PD("tkt",NULL); + const char *zTag = PD("tag",NULL); + const char *zFilename = PD("name",NULL); + const char *zWiki = PD("wiki",NULL); int nLimit = atoi(PD("n","20")); + int nTagId; const char zSQL1[] = @ SELECT @ blob.rid, @ uuid, @ event.mtime, @ coalesce(ecomment,comment), @ coalesce(euser,user), @ (SELECT count(*) FROM plink WHERE pid=blob.rid AND isprim), - @ (SELECT count(*) FROM plink WHERE cid=blob.rid) + @ (SELECT count(*) FROM plink WHERE cid=blob.rid), + @ (SELECT group_concat(substr(tagname,5), ', ') FROM tag, tagxref + @ WHERE tagname GLOB 'sym-*' AND tag.tagid=tagxref.tagid + @ AND tagxref.rid=blob.rid AND tagxref.tagtype>0) AS tags @ FROM event, blob @ WHERE blob.rid=event.objid ; login_check_credentials(); - if( !g.okRead && !g.okRdTkt && !g.okRdWiki ){ + if( !g.perm.Read && !g.perm.RdTkt && !g.perm.RdWiki ){ return; } blob_zero(&bSQL); blob_append( &bSQL, zSQL1, -1 ); if( zType[0]!='a' ){ - if( zType[0]=='c' && !g.okRead ) zType = "x"; - if( zType[0]=='w' && !g.okRdWiki ) zType = "x"; - if( zType[0]=='t' && !g.okRdTkt ) zType = "x"; - blob_appendf(&bSQL, " AND event.type=%Q", zType); + if( zType[0]=='c' && !g.perm.Read ) zType = "x"; + if( zType[0]=='w' && !g.perm.RdWiki ) zType = "x"; + if( zType[0]=='t' && !g.perm.RdTkt ) zType = "x"; + blob_append_sql(&bSQL, " AND event.type=%Q", zType); }else{ - if( !g.okRead ){ - if( g.okRdTkt && g.okRdWiki ){ + if( !g.perm.Read ){ + if( g.perm.RdTkt && g.perm.RdWiki ){ blob_append(&bSQL, " AND event.type!='ci'", -1); - }else if( g.okRdTkt ){ + }else if( g.perm.RdTkt ){ blob_append(&bSQL, " AND event.type=='t'", -1); + }else{ blob_append(&bSQL, " AND event.type=='w'", -1); } - }else if( !g.okRdWiki ){ - if( g.okRdTkt ){ + }else if( !g.perm.RdWiki ){ + if( g.perm.RdTkt ){ blob_append(&bSQL, " AND event.type!='w'", -1); }else{ blob_append(&bSQL, " AND event.type=='ci'", -1); } - }else if( !g.okRdTkt ){ - assert( !g.okRdTkt &&& g.okRead && g.okRdWiki ); + }else if( !g.perm.RdTkt ){ + assert( !g.perm.RdTkt && g.perm.Read && g.perm.RdWiki ); blob_append(&bSQL, " AND event.type!='t'", -1); } } + + if( zTicketUuid ){ + nTagId = db_int(0, "SELECT tagid FROM tag WHERE tagname GLOB 'tkt-%q*'", + zTicketUuid); + if ( nTagId==0 ){ + nTagId = -1; + } + }else if( zTag ){ + nTagId = db_int(0, "SELECT tagid FROM tag WHERE tagname GLOB 'sym-%q*'", + zTag); + if ( nTagId==0 ){ + nTagId = -1; + } + }else if( zWiki ){ + nTagId = db_int(0, "SELECT tagid FROM tag WHERE tagname GLOB 'wiki-%q*'", + zWiki); + if ( nTagId==0 ){ + nTagId = -1; + } + }else{ + nTagId = 0; + } + + if( nTagId==-1 ){ + blob_append_sql(&bSQL, " AND 0"); + }else if( nTagId!=0 ){ + blob_append_sql(&bSQL, " AND (EXISTS(SELECT 1 FROM tagxref" + " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid))", nTagId); + } + + if( zFilename ){ + blob_append_sql(&bSQL, + " AND (SELECT mlink.fnid FROM mlink WHERE event.objid=mlink.mid) IN (SELECT fnid FROM filename WHERE name=%Q %s)", + zFilename, filename_collation() + ); + } blob_append( &bSQL, " ORDER BY event.mtime DESC", -1 ); cgi_set_content_type("application/rss+xml"); @@ -94,29 +155,227 @@ } zPubDate = cgi_rfc822_datestamp(time(NULL)); @ - @ + @ @ @ %h(zProjectName) @ %s(g.zBaseURL) @ %h(zProjectDescr) @ %s(zPubDate) @ Fossil version %s(MANIFEST_VERSION) %s(MANIFEST_DATE) - db_prepare(&q, blob_str(&bSQL)); + free(zPubDate); + db_prepare(&q, "%s", blob_sql_text(&bSQL)); + blob_reset( &bSQL ); + while( db_step(&q)==SQLITE_ROW && nLine1 && nChild>1 ){ + zPrefix = "*MERGE/FORK* "; + }else if( nParent>1 ){ + 
zPrefix = "*MERGE* "; + }else if( nChild>1 ){ + zPrefix = "*FORK* "; + } + + if( zTagList ){ + zSuffix = mprintf(" (tags: %s)", zTagList); + } + + @ + @ %s(zPrefix)%h(zCom)%h(zSuffix) + @ %s(g.zBaseURL)/info/%s(zId) + @ %s(zPrefix)%h(zCom)%h(zSuffix) + @ %s(zDate) + @ %h(zAuthor) + @ %s(g.zBaseURL)/info/%s(zId) + @ + free(zDate); + free(zSuffix); + nLine++; + } + + db_finalize(&q); + @ + @ + + if( zFreeProjectName != 0 ){ + free( zFreeProjectName ); + } +} + +/* +** COMMAND: rss +** +** Usage: %fossil rss ?OPTIONS? +** +** The CLI variant of the /timeline.rss page, this produces an RSS +** feed of the timeline to stdout. Options: +** +** -type|y FLAG +** may be: all (default), ci (show check-ins only), t (show tickets only), +** w (show wiki only). +** +** -limit|n LIMIT +** The maximum number of items to show. +** +** -tkt UUID +** Filters for only those events for the specified ticket. +** +** -tag TAG +** filters for a tag +** +** -wiki NAME +** Filters on a specific wiki page. +** +** Only one of -tkt, -tag, or -wiki may be used. +** +** -name FILENAME +** filters for a specific file. This may be combined with one of the other +** filters (useful for looking at a specific branch). +** +** -url STRING +** Sets the RSS feed's root URL to the given string. The default is +** "URL-PLACEHOLDER" (without quotes). +*/ +void cmd_timeline_rss(void){ + Stmt q; + int nLine=0; + char *zPubDate, *zProjectName, *zProjectDescr, *zFreeProjectName=0; + Blob bSQL; + const char *zType = find_option("type","y",1); /* Type of events. All if NULL */ + const char *zTicketUuid = find_option("tkt",NULL,1); + const char *zTag = find_option("tag",NULL,1); + const char *zFilename = find_option("name",NULL,1); + const char *zWiki = find_option("wiki",NULL,1); + const char *zLimit = find_option("limit", "n",1); + const char *zBaseURL = find_option("url", NULL, 1); + int nLimit = atoi( (zLimit && *zLimit) ? zLimit : "20" ); + int nTagId; + const char zSQL1[] = + @ SELECT + @ blob.rid, + @ uuid, + @ event.mtime, + @ coalesce(ecomment,comment), + @ coalesce(euser,user), + @ (SELECT count(*) FROM plink WHERE pid=blob.rid AND isprim), + @ (SELECT count(*) FROM plink WHERE cid=blob.rid), + @ (SELECT group_concat(substr(tagname,5), ', ') FROM tag, tagxref + @ WHERE tagname GLOB 'sym-*' AND tag.tagid=tagxref.tagid + @ AND tagxref.rid=blob.rid AND tagxref.tagtype>0) AS tags + @ FROM event, blob + @ WHERE blob.rid=event.objid + ; + if(!zType || !*zType){ + zType = "all"; + } + if(!zBaseURL || !*zBaseURL){ + zBaseURL = "URL-PLACEHOLDER"; + } + + db_find_and_open_repository(0, 0); + + /* We should be done with options.. 
*/ + verify_all_options(); + + blob_zero(&bSQL); + blob_append( &bSQL, zSQL1, -1 ); + + if( zType[0]!='a' ){ + blob_append_sql(&bSQL, " AND event.type=%Q", zType); + } + + if( zTicketUuid ){ + nTagId = db_int(0, "SELECT tagid FROM tag WHERE tagname GLOB 'tkt-%q*'", + zTicketUuid); + if ( nTagId==0 ){ + nTagId = -1; + } + }else if( zTag ){ + nTagId = db_int(0, "SELECT tagid FROM tag WHERE tagname GLOB 'sym-%q*'", + zTag); + if ( nTagId==0 ){ + nTagId = -1; + } + }else if( zWiki ){ + nTagId = db_int(0, "SELECT tagid FROM tag WHERE tagname GLOB 'wiki-%q*'", + zWiki); + if ( nTagId==0 ){ + nTagId = -1; + } + }else{ + nTagId = 0; + } + + if( nTagId==-1 ){ + blob_append_sql(&bSQL, " AND 0"); + }else if( nTagId!=0 ){ + blob_append_sql(&bSQL, " AND (EXISTS(SELECT 1 FROM tagxref" + " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid))", nTagId); + } + + if( zFilename ){ + blob_append_sql(&bSQL, + " AND (SELECT mlink.fnid FROM mlink WHERE event.objid=mlink.mid) IN (SELECT fnid FROM filename WHERE name=%Q %s)", + zFilename, filename_collation() + ); + } + + blob_append( &bSQL, " ORDER BY event.mtime DESC", -1 ); + + zProjectName = db_get("project-name", 0); + if( zProjectName==0 ){ + zFreeProjectName = zProjectName = mprintf("Fossil source repository for: %s", + zBaseURL); + } + zProjectDescr = db_get("project-description", 0); + if( zProjectDescr==0 ){ + zProjectDescr = zProjectName; + } + + zPubDate = cgi_rfc822_datestamp(time(NULL)); + + fossil_print(""); + fossil_print(""); + fossil_print("\n"); + fossil_print("%h\n", zProjectName); + fossil_print("%s\n", zBaseURL); + fossil_print("%h\n", zProjectDescr); + fossil_print("%s\n", zPubDate); + fossil_print("Fossil version %s %s\n", + MANIFEST_VERSION, MANIFEST_DATE); + free(zPubDate); + db_prepare(&q, "%s", blob_sql_text(&bSQL)); blob_reset( &bSQL ); - while( db_step(&q)==SQLITE_ROW && nLine<=nLimit ){ + while( db_step(&q)==SQLITE_ROW && nLine1 && nChild>1 ){ zPrefix = "*MERGE/FORK* "; @@ -124,25 +383,30 @@ zPrefix = "*MERGE* "; }else if( nChild>1 ){ zPrefix = "*FORK* "; } - @ - @ %s(zPrefix)%h(zCom) - @ %s(g.zBaseURL)/ci/%s(zId) - @ %s(zPrefix)%h(zCom) - @ %s(zDate) - @ %h(zAuthor) - @ %s(g.zBaseURL)/ci/%s(zId) - @ + if( zTagList ){ + zSuffix = mprintf(" (tags: %s)", zTagList); + } + + fossil_print(""); + fossil_print("%s%h%h\n", zPrefix, zCom, zSuffix); + fossil_print("%s/info/%s\n", zBaseURL, zId); + fossil_print("%s%h%h\n", zPrefix, zCom, zSuffix); + fossil_print("%s\n", zDate); + fossil_print("%h\n", zAuthor); + fossil_print("%s/info/%s\n", g.zBaseURL, zId); + fossil_print("\n"); free(zDate); + free(zSuffix); nLine++; } db_finalize(&q); - @ - @ + fossil_print("\n"); + fossil_print("\n"); if( zFreeProjectName != 0 ){ free( zFreeProjectName ); } } Index: src/schema.c ================================================================== --- src/schema.c +++ src/schema.c @@ -12,27 +12,31 @@ ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* -** +** ** This file contains string constants that implement the database schema. */ #include "config.h" #include "schema.h" /* ** The database schema for the ~/.fossil configuration database. */ -const char zConfigSchema[] = +const char zConfigSchema[] = @ -- This file contains the schema for the database that is kept in the @ -- ~/.fossil file and that stores information about the users setup. @ -- @ CREATE TABLE global_config( @ name TEXT PRIMARY KEY, @ value TEXT @ ); +@ +@ -- Identifier for this file type. 
+@ -- The integer is the same as 'FSLG'. +@ PRAGMA application_id=252006675; ; #if INTERFACE /* ** The content tables have a content version number which rarely @@ -39,33 +43,41 @@ ** changes. The aux tables have an arbitrary version number (typically ** a date) which can change frequently. When the content schema changes, ** we have to execute special procedures to update the schema. When ** the aux schema changes, all we need to do is rebuild the database. */ -#define CONTENT_SCHEMA "1" -#define AUX_SCHEMA "2006-12-23" +#define CONTENT_SCHEMA "2" +#define AUX_SCHEMA_MIN "2011-04-25 19:50" +#define AUX_SCHEMA_MAX "2015-01-24" +/* NB: Some features require the latest schema. Warning or error messages +** will appear if an older schema is used. However, the older schemas are +** adequate for many common functions. */ #endif /* INTERFACE */ /* -** The schema for a repository database. +** The schema for a repository database. ** ** Schema1[] contains parts of the schema that are fixed and unchanging ** across versions. Schema2[] contains parts of the schema that can ** change from one version to the next. The information in Schema2[] -** can be reconstructed from the information in Schema1[]. +** is reconstructed from the information in Schema1[] by the "rebuild" +** operation. */ -const char zRepositorySchema1[] = +const char zRepositorySchema1[] = @ -- The BLOB and DELTA tables contain all records held in the repository. @ -- -@ -- The BLOB.CONTENT column is always compressed using libz. This +@ -- The BLOB.CONTENT column is always compressed using zlib. This @ -- column might hold the full text of the record or it might hold @ -- a delta that is able to reconstruct the record from some other @ -- record. If BLOB.CONTENT holds a delta, then a DELTA table entry @ -- will exist for the record and that entry will point to another @ -- entry that holds the source of the delta. Deltas can be chained. +@ -- +@ -- The blob and delta tables collectively hold the "global state" of +@ -- a Fossil repository. @ -- @ CREATE TABLE blob( @ rid INTEGER PRIMARY KEY, -- Record ID @ rcvid INTEGER, -- Origin of this record @ size INTEGER, -- Size of content. -1 for a phantom. @@ -72,22 +84,29 @@ @ uuid TEXT UNIQUE NOT NULL, -- SHA1 hash of the content @ content BLOB, -- Compressed content of this record @ CHECK( length(uuid)==40 AND rid>0 ) @ ); @ CREATE TABLE delta( -@ rid INTEGER PRIMARY KEY, -- Record ID -@ srcid INTEGER NOT NULL REFERENCES blob -- Record holding source document +@ rid INTEGER PRIMARY KEY, -- BLOB that is delta-compressed +@ srcid INTEGER NOT NULL REFERENCES blob -- Baseline for delta-compression @ ); @ CREATE INDEX delta_i1 ON delta(srcid); +@ +@ ------------------------------------------------------------------------- +@ -- The BLOB and DELTA tables above hold the "global state" of a Fossil +@ -- project; the stuff that is normally exchanged during "sync". The +@ -- "local state" of a repository is contained in the remaining tables of +@ -- the zRepositorySchema1 string. +@ ------------------------------------------------------------------------- @ @ -- Whenever new blobs are received into the repository, an entry @ -- in this table records the source of the blob. @ -- @ CREATE TABLE rcvfrom( @ rcvid INTEGER PRIMARY KEY, -- Received-From ID @ uid INTEGER REFERENCES user, -- User login -@ mtime DATETIME, -- Time or receipt +@ mtime DATETIME, -- Time of receipt. Julian day. @ nonce TEXT UNIQUE, -- Nonce used for login @ ipaddr TEXT -- Remote IP address. NULL for direct. 
@ ); @ @ -- Information about users @@ -99,26 +118,28 @@ @ -- hash based on the project-code, the user login, and the cleartext @ -- password. @ -- @ CREATE TABLE user( @ uid INTEGER PRIMARY KEY, -- User ID -@ login TEXT, -- login name of the user +@ login TEXT UNIQUE, -- login name of the user @ pw TEXT, -- password @ cap TEXT, -- Capabilities of this user @ cookie TEXT, -- WWW login cookie @ ipaddr TEXT, -- IP address for which cookie is valid @ cexpire DATETIME, -- Time when cookie expires @ info TEXT, -- contact information +@ mtime DATE, -- last change. seconds since 1970 @ photo BLOB -- JPEG image of this user @ ); @ -@ -- The VAR table holds miscellanous information about the repository. +@ -- The config table holds miscellanous information about the repository. @ -- in the form of name-value pairs. @ -- @ CREATE TABLE config( @ name TEXT PRIMARY KEY NOT NULL, -- Primary name of the entry @ value CLOB, -- Content of the named parameter +@ mtime DATE, -- last modified. seconds since 1970 @ CHECK( typeof(name)='text' AND length(name)>=1 ) @ ); @ @ -- Artifacts that should not be processed are identified in the @ -- "shun" table. Artifacts that are control-file forgeries or @@ -128,11 +149,15 @@ @ -- @ -- Shunned artifacts do not exist in the blob table. Hence they @ -- have not artifact ID (rid) and we thus must store their full @ -- UUID. @ -- -@ CREATE TABLE shun(uuid UNIQUE); +@ CREATE TABLE shun( +@ uuid UNIQUE, -- UUID of artifact to be shunned. Canonical form +@ mtime DATE, -- When added. seconds since 1970 +@ scom TEXT -- Optional text explaining why the shun occurred +@ ); @ @ -- Artifacts that should not be pushed are stored in the "private" @ -- table. Private artifacts are omitted from the "unclustered" and @ -- "unsent" tables. @ -- @@ -140,17 +165,44 @@ @ @ -- An entry in this table describes a database query that generates a @ -- table of tickets. @ -- @ CREATE TABLE reportfmt( -@ rn integer primary key, -- Report number -@ owner text, -- Owner of this report format (not used) -@ title text, -- Title of this report -@ cols text, -- A color-key specification -@ sqlcode text -- An SQL SELECT statement for this report +@ rn INTEGER PRIMARY KEY, -- Report number +@ owner TEXT, -- Owner of this report format (not used) +@ title TEXT UNIQUE, -- Title of this report +@ mtime DATE, -- Last modified. seconds since 1970 +@ cols TEXT, -- A color-key specification +@ sqlcode TEXT -- An SQL SELECT statement for this report +@ ); +@ +@ -- Some ticket content (such as the originators email address or contact +@ -- information) needs to be obscured to protect privacy. This is achieved +@ -- by storing an SHA1 hash of the content. For display, the hash is +@ -- mapped back into the original text using this table. +@ -- +@ -- This table contains sensitive information and should not be shared +@ -- with unauthorized users. +@ -- +@ CREATE TABLE concealed( +@ hash TEXT PRIMARY KEY, -- The SHA1 hash of content +@ mtime DATE, -- Time created. Seconds since 1970 +@ content TEXT -- Content intended to be concealed @ ); -@ INSERT INTO reportfmt(title,cols,sqlcode) VALUES('All Tickets','#ffffff Key: +@ +@ -- The application ID helps the unix "file" command to identify the +@ -- database as a fossil repository. +@ PRAGMA application_id=252006673; +; + +/* +** The default reportfmt entry for the schema. This is in an extra +** script so that (configure reset) can install the default report. 
+*/ +const char zRepositorySchemaDefaultReports[] = +@ INSERT INTO reportfmt(title,mtime,cols,sqlcode) +@ VALUES('All Tickets',julianday('1970-01-01'),'#ffffff Key: @ #f2dcdc Active @ #e8e8e8 Review @ #cfe8bd Fixed @ #bde5d6 Tested @ #cacae5 Deferred @@ -166,23 +218,10 @@ @ type, @ status, @ subsystem, @ title @ FROM ticket'); -@ -@ -- Some ticket content (such as the originators email address or contact -@ -- information) needs to be obscured to protect privacy. This is achieved -@ -- by storing an SHA1 hash of the content. For display, the hash is -@ -- mapped back into the original text using this table. -@ -- -@ -- This table contains sensitive information and should not be shared -@ -- with unauthorized users. -@ -- -@ CREATE TABLE concealed( -@ hash TEXT PRIMARY KEY, -@ content TEXT -@ ); ; const char zRepositorySchema2[] = @ -- Filenames @ -- @@ -189,62 +228,106 @@ @ CREATE TABLE filename( @ fnid INTEGER PRIMARY KEY, -- Filename ID @ name TEXT UNIQUE -- Name of file page @ ); @ -@ -- Linkages between manifests, files created by that manifest, and +@ -- Linkages between check-ins, files created by each check-in, and @ -- the names of those files. @ -- -@ -- pid==0 if the file is added by check-in mid. -@ -- fid==0 if the file is removed by check-in mid. +@ -- Each entry represents a file that changed content from pid to fid +@ -- due to the check-in that goes from pmid to mid. fnid is the name +@ -- of the file in the mid check-in. If the file was renamed as part +@ -- of the mid check-in, then pfnid is the previous filename. +@ +@ -- There can be multiple entries for (mid,fid) if the mid check-in was +@ -- a merge. Entries with isaux==0 are from the primary parent. Merge +@ -- parents have isaux set to true. +@ -- +@ -- Field name mnemonics: +@ -- mid = Manifest ID. (Each check-in is stored as a "Manifest") +@ -- fid = File ID. +@ -- pmid = Parent Manifest ID. +@ -- pid = Parent file ID. +@ -- fnid = File Name ID. +@ -- pfnid = Parent File Name ID. +@ -- isaux = pmid IS AUXiliary parent, not primary parent +@ -- +@ -- pid==0 if the file is added by check-in mid. +@ -- pid==(-1) if the file exists in a merge parents but not in the primary +@ -- parent. In other words, if the file file was added by merge. +@ -- fid==0 if the file is removed by check-in mid. @ -- @ CREATE TABLE mlink( -@ mid INTEGER REFERENCES blob, -- Manifest ID where change occurs -@ pid INTEGER REFERENCES blob, -- File ID in parent manifest -@ fid INTEGER REFERENCES blob, -- Changed file ID in this manifest -@ fnid INTEGER REFERENCES filename, -- Name of the file -@ pfnid INTEGER REFERENCES filename -- Previous name. 0 if unchanged +@ mid INTEGER, -- Check-in that contains fid +@ fid INTEGER, -- New file content. 0 if deleted +@ pmid INTEGER, -- Check-in that contains pid +@ pid INTEGER, -- Prev file content. 0 if new. -1 merge +@ fnid INTEGER REFERENCES filename, -- Name of the file +@ pfnid INTEGER REFERENCES filename, -- Previous name. 0 if unchanged +@ mperm INTEGER, -- File permissions. 
1==exec +@ isaux BOOLEAN DEFAULT 0 -- TRUE if pmid is the primary @ ); @ CREATE INDEX mlink_i1 ON mlink(mid); @ CREATE INDEX mlink_i2 ON mlink(fnid); @ CREATE INDEX mlink_i3 ON mlink(fid); @ CREATE INDEX mlink_i4 ON mlink(pid); @ -@ -- Parent/child linkages +@ -- Parent/child linkages between check-ins @ -- @ CREATE TABLE plink( @ pid INTEGER REFERENCES blob, -- Parent manifest @ cid INTEGER REFERENCES blob, -- Child manifest @ isprim BOOLEAN, -- pid is the primary parent of cid -@ mtime DATETIME, -- the date/time stamp on cid +@ mtime DATETIME, -- the date/time stamp on cid. Julian day. +@ baseid INTEGER REFERENCES blob, -- Baseline if cid is a delta manifest. @ UNIQUE(pid, cid) @ ); -@ CREATE INDEX plink_i2 ON plink(cid); +@ CREATE INDEX plink_i2 ON plink(cid,pid); +@ +@ -- A "leaf" check-in is a check-in that has no children in the same +@ -- branch. The set of all leaves is easily computed with a join, +@ -- between the plink and tagxref tables, but it is a slower join for +@ -- very large repositories (repositories with 100,000 or more check-ins) +@ -- and so it makes sense to precompute the set of leaves. There is +@ -- one entry in the following table for each leaf. +@ -- +@ CREATE TABLE leaf(rid INTEGER PRIMARY KEY); @ @ -- Events used to generate a timeline @ -- @ CREATE TABLE event( -@ type TEXT, -- Type of event -@ mtime DATETIME, -- Date and time when the event occurs +@ type TEXT, -- Type of event: 'ci', 'w', 'e', 't', 'g' +@ mtime DATETIME, -- Time of occurrence. Julian day. @ objid INTEGER PRIMARY KEY, -- Associated record ID @ tagid INTEGER, -- Associated ticket or wiki name tag @ uid INTEGER REFERENCES user, -- User who caused the event @ bgcolor TEXT, -- Color set by 'bgcolor' property @ euser TEXT, -- User set by 'user' property @ user TEXT, -- Name of the user @ ecomment TEXT, -- Comment set by 'comment' property @ comment TEXT, -- Comment describing the event -@ brief TEXT -- Short comment when tagid already seen +@ brief TEXT, -- Short comment when tagid already seen +@ omtime DATETIME -- Original unchanged date+time, or NULL @ ); @ CREATE INDEX event_i1 ON event(mtime); @ @ -- A record of phantoms. A phantom is a record for which we know the @ -- UUID but we do not (yet) know the file content. @ -- @ CREATE TABLE phantom( @ rid INTEGER PRIMARY KEY -- Record ID of the phantom @ ); +@ +@ -- A record of orphaned delta-manifests. An orphan is a delta-manifest +@ -- for which we have content, but its baseline-manifest is a phantom. +@ -- We have to track all orphan maniftests so that when the baseline arrives, +@ -- we know to process the orphaned deltas. +@ CREATE TABLE orphan( +@ rid INTEGER PRIMARY KEY, -- Delta manifest with a phantom baseline +@ baseline INTEGER -- Phantom baseline of this orphan +@ ); +@ CREATE INDEX orphan_baseline ON orphan(baseline); @ @ -- Unclustered records. An unclustered record is a record (including @ -- a cluster records themselves) that is not mentioned by some other @ -- cluster. @ -- @@ -265,13 +348,13 @@ @ rid INTEGER PRIMARY KEY -- Record ID of the phantom @ ); @ @ -- Each baseline or manifest can have one or more tags. A tag @ -- is defined by a row in the next table. -@ -- +@ -- @ -- Wiki pages are tagged with "wiki-NAME" where NAME is the name of -@ -- the wiki page. Tickets changes are tagged with "ticket-UUID" where +@ -- the wiki page. Tickets changes are tagged with "ticket-UUID" where @ -- UUID is the indentifier of the ticket. 
Tags used to assign symbolic @ -- names to baselines are branches are of the form "sym-NAME" where @ -- NAME is the symbolic name. @ -- @ CREATE TABLE tag( @@ -285,23 +368,25 @@ @ INSERT INTO tag VALUES(5, 'hidden'); -- TAG_HIDDEN @ INSERT INTO tag VALUES(6, 'private'); -- TAG_PRIVATE @ INSERT INTO tag VALUES(7, 'cluster'); -- TAG_CLUSTER @ INSERT INTO tag VALUES(8, 'branch'); -- TAG_BRANCH @ INSERT INTO tag VALUES(9, 'closed'); -- TAG_CLOSED +@ INSERT INTO tag VALUES(10,'parent'); -- TAG_PARENT +@ INSERT INTO tag VALUES(11,'note'); -- TAG_NOTE @ @ -- Assignments of tags to baselines. Note that we allow tags to @ -- have values assigned to them. So we are not really dealing with @ -- tags here. These are really properties. But we are going to @ -- keep calling them tags because in many cases the value is ignored. @ -- @ CREATE TABLE tagxref( @ tagid INTEGER REFERENCES tag, -- The tag that added or removed -@ tagtype INTEGER, -- 0:cancel 1:single 2:branch +@ tagtype INTEGER, -- 0:-,cancel 1:+,single 2:*,propagate @ srcid INTEGER REFERENCES blob, -- Artifact of tag. 0 for propagated tags @ origid INTEGER REFERENCES blob, -- check-in holding propagated tag @ value TEXT, -- Value of the tag. Might be NULL. -@ mtime TIMESTAMP, -- Time of addition or removal +@ mtime TIMESTAMP, -- Time of addition or removal. Julian day @ rid INTEGER REFERENCE blob, -- Artifact tag is applied to @ UNIQUE(rid, tagid) @ ); @ CREATE INDEX tagxref_i1 ON tagxref(tagid, mtime); @ @@ -311,12 +396,12 @@ @ -- facilitate the display of "back links". @ -- @ CREATE TABLE backlink( @ target TEXT, -- Where the hyperlink points to @ srctype INT, -- 0: check-in 1: ticket 2: wiki -@ srcid INT, -- rid for checkin or wiki. tkt_id for ticket. -@ mtime TIMESTAMP, -- time that the hyperlink was added +@ srcid INT, -- rid for check-in or wiki. tkt_id for ticket. +@ mtime TIMESTAMP, -- time that the hyperlink was added. Julian day. @ UNIQUE(target, srctype, srcid) @ ); @ CREATE INDEX backlink_src ON backlink(srcid, srctype); @ @ -- Each attachment is an entry in the following table. Only @@ -323,11 +408,11 @@ @ -- the most recent attachment (identified by the D card) is saved. @ -- @ CREATE TABLE attachment( @ attachid INTEGER PRIMARY KEY, -- Local id for this attachment @ isLatest BOOLEAN DEFAULT 0, -- True if this is the one to use -@ mtime TIMESTAMP, -- Time when attachment last changed +@ mtime TIMESTAMP, -- Last changed. Julian day. @ src TEXT, -- UUID of the attachment. NULL to delete @ target TEXT, -- Object attached to. 
Wikiname or Tkt UUID @ filename TEXT, -- Filename for the attachment @ comment TEXT, -- Comment associated with this attachment @ user TEXT -- Name of user adding attachment @@ -343,10 +428,11 @@ @ CREATE TABLE ticket( @ -- Do not change any column that begins with tkt_ @ tkt_id INTEGER PRIMARY KEY, @ tkt_uuid TEXT UNIQUE, @ tkt_mtime DATE, +@ tkt_ctime DATE, @ -- Add as many field as required below this line @ type TEXT, @ status TEXT, @ subsystem TEXT, @ priority TEXT, @@ -355,10 +441,22 @@ @ private_contact TEXT, @ resolution TEXT, @ title TEXT, @ comment TEXT @ ); +@ CREATE TABLE ticketchng( +@ -- Do not change any column that begins with tkt_ +@ tkt_id INTEGER REFERENCES ticket, +@ tkt_rid INTEGER REFERENCES blob, +@ tkt_mtime DATE, +@ -- Add as many fields as required below this line +@ login TEXT, +@ username TEXT, +@ mimetype TEXT, +@ icomment TEXT +@ ); +@ CREATE INDEX ticketchng_idx1 ON ticketchng(tkt_id, tkt_mtime); ; /* ** Predefined tagid values */ @@ -365,23 +463,22 @@ #if INTERFACE # define TAG_BGCOLOR 1 /* Set the background color for display */ # define TAG_COMMENT 2 /* The check-in comment */ # define TAG_USER 3 /* User who made a checking */ # define TAG_DATE 4 /* The date of a check-in */ -# define TAG_HIDDEN 5 /* Do not display or sync */ -# define TAG_PRIVATE 6 /* Display but do not sync */ +# define TAG_HIDDEN 5 /* Do not display in timeline */ +# define TAG_PRIVATE 6 /* Do not sync */ # define TAG_CLUSTER 7 /* A cluster */ # define TAG_BRANCH 8 /* Value is name of the current branch */ # define TAG_CLOSED 9 /* Do not display this check-in as a leaf */ -#endif -#if EXPORT_INTERFACE -# define MAX_INT_TAG 9 /* The largest pre-assigned tag id */ +# define TAG_PARENT 10 /* Change to parentage on a check-in */ +# define TAG_NOTE 11 /* Extra text appended to a check-in comment */ #endif /* -** The schema for the locate FOSSIL database file found at the root -** of very check-out. This database contains the complete state of +** The schema for the local FOSSIL database file found at the root +** of every check-out. This database contains the complete state of ** the checkout. */ const char zLocalSchema[] = @ -- The VVAR table holds miscellanous information about the local database @ -- in the form of name-value pairs. This is similar to the VAR table @@ -404,40 +501,46 @@ @ -- @ -- The file.rid field is 0 for files or folders that have been @ -- added but not yet committed. @ -- @ -- Vfile.chnged is 0 for unmodified files, 1 for files that have -@ -- been edited or which have been subjected to a 3-way merge. +@ -- been edited or which have been subjected to a 3-way merge. @ -- Vfile.chnged is 2 if the file has been replaced from a different @ -- version by the merge and 3 if the file has been added by a merge. -@ -- The difference between vfile.chnged==2 and a regular add is that -@ -- with vfile.chnged==2 we know that the current version of the file -@ -- is already in the repository. -@ -- +@ -- Vfile.chnged is 4|5 is the same as 2|3, but the operation has been +@ -- done by an --integrate merge. The difference between vfile.chnged==3|5 +@ -- and a regular add is that with vfile.chnged==3|5 we know that the +@ -- current version of the file is already in the repository. @ -- @ CREATE TABLE vfile( @ id INTEGER PRIMARY KEY, -- ID of the checked out file @ vid INTEGER REFERENCES blob, -- The baseline this file is part of. 
-@ chnged INT DEFAULT 0, -- 0:unchnged 1:edited 2:m-chng 3:m-add -@ deleted BOOLEAN DEFAULT 0, -- True if deleted +@ chnged INT DEFAULT 0, -- 0:unchng 1:edit 2:m-chng 3:m-add 4:i-chng 5:i-add +@ deleted BOOLEAN DEFAULT 0, -- True if deleted @ isexe BOOLEAN, -- True if file should be executable +@ islink BOOLEAN, -- True if file should be symlink @ rid INTEGER, -- Originally from this repository record @ mrid INTEGER, -- Based on this record due to a merge -@ mtime INTEGER, -- Modification time of file on disk +@ mtime INTEGER, -- Mtime of file on disk. sec since 1970 @ pathname TEXT, -- Full pathname relative to root @ origname TEXT, -- Original pathname. NULL if unchanged @ UNIQUE(pathname,vid) @ ); @ @ -- This table holds a record of uncommitted merges in the local @ -- file tree. If a VFILE entry with id has merged with another @ -- record, there is an entry in this table with (id,merge) where @ -- merge is the RECORD table entry that the file merged against. -@ -- An id of 0 here means the version record itself. +@ -- An id of 0 or <-3 here means the version record itself. When +@ -- id==(-1) that is a cherrypick merge, id==(-2) that is a +@ -- backout merge and id==(-4) is a integrate merge. @ @ CREATE TABLE vmerge( @ id INTEGER REFERENCES vfile, -- VFILE entry that has been merged @ merge INTEGER, -- Merged with this record @ UNIQUE(id, merge) @ ); -@ +@ +@ -- Identifier for this file type. +@ -- The integer is the same as 'FSLC'. +@ PRAGMA application_id=252006674; ; Index: src/search.c ================================================================== --- src/search.c +++ src/search.c @@ -13,63 +13,56 @@ ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** -** This file contains code to implement the "/doc" web page and related -** pages. +** This file contains code to implement a very simple search function +** against timeline comments, check-in content, wiki pages, and/or tickets. +** +** The search is full-text like in that it is looking for words and ignores +** punctuation and capitalization. But it is more akin to "grep" in that +** it scans the entire corpus for the search, and it does not support the +** full functionality of FTS4. 
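As a rough illustration of that grep-like behavior, here is a minimal sketch (not Fossil's code; POSIX strncasecmp() stands in for the helpers defined later in this file). Every term must occur as a whole word, comparing case-insensitively, with any non-alphanumeric byte acting as a word boundary:

    #include <ctype.h>
    #include <string.h>
    #include <strings.h>    /* strncasecmp() */

    /* True if zTerm occurs in zDoc as a whole word, ignoring case. */
    static int word_match(const char *zDoc, const char *zTerm){
      size_t n = strlen(zTerm);
      size_t i;
      for(i=0; zDoc[i]; i++){
        int atBoundary = (i==0) || !isalnum((unsigned char)zDoc[i-1]);
        if( atBoundary
         && strncasecmp(&zDoc[i], zTerm, n)==0
         && !isalnum((unsigned char)zDoc[i+n]) ){
          return 1;
        }
      }
      return 0;
    }

    /* The document is a hit only if every term matches at least once. */
    static int doc_matches(const char *zDoc, const char **azTerm, int nTerm){
      int j;
      for(j=0; j<nTerm; j++){
        if( !word_match(zDoc, azTerm[j]) ) return 0;
      }
      return 1;
    }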
*/ #include "config.h" #include "search.h" #include #if INTERFACE + +/* Maximum number of search terms */ +#define SEARCH_MAX_TERM 8 + /* -** A compiled search patter +** A compiled search pattern */ struct Search { - int nTerm; - struct srchTerm { - char *z; - int n; - } a[8]; + int nTerm; /* Number of search terms */ + struct srchTerm { /* For each search term */ + char *z; /* Text */ + int n; /* length */ + } a[SEARCH_MAX_TERM]; + /* Snippet controls */ + char *zPattern; /* The search pattern */ + char *zMarkBegin; /* Start of a match */ + char *zMarkEnd; /* End of a match */ + char *zMarkGap; /* A gap between two matches */ + unsigned fSrchFlg; /* Flags */ + int iScore; /* Score of the last match attempt */ + Blob snip; /* Snippet for the most recent match */ }; + +#define SRCHFLG_HTML 0x01 /* Escape snippet text for HTML */ +#define SRCHFLG_STATIC 0x04 /* The static gSearch object */ + #endif /* -** Compile a search pattern -*/ -Search *search_init(const char *zPattern){ - int nPattern = strlen(zPattern); - Search *p; - char *z; - int i; - - p = malloc( nPattern + sizeof(*p) + 1); - if( p==0 ) fossil_panic("out of memory"); - z = (char*)&p[1]; - strcpy(z, zPattern); - memset(p, 0, sizeof(*p)); - while( *z && p->nTerma)/sizeof(p->a[0]) ){ - while( !isalnum(*z) && *z ){ z++; } - if( *z==0 ) break; - p->a[p->nTerm].z = z; - for(i=1; isalnum(z[i]) || z[i]=='_'; i++){} - p->a[p->nTerm].n = i; - z += i; - p->nTerm++; - } - return p; -} - - -/* -** Destroy a search context. -*/ -void search_end(Search *p){ - free(p); -} +** There is a single global Search object: +*/ +static Search gSearch; + /* ** Theses characters constitute a word boundary */ static const char isBoundary[] = { @@ -88,124 +81,1651 @@ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, }; +#define ISALNUM(x) (!isBoundary[(x)&0xff]) + + +/* +** Destroy a search context. +*/ +void search_end(Search *p){ + if( p ){ + fossil_free(p->zPattern); + fossil_free(p->zMarkBegin); + fossil_free(p->zMarkEnd); + fossil_free(p->zMarkGap); + if( p->iScore ) blob_reset(&p->snip); + memset(p, 0, sizeof(*p)); + if( p!=&gSearch ) fossil_free(p); + } +} + +/* +** Compile a search pattern +*/ +static Search *search_init( + const char *zPattern, /* The search pattern */ + const char *zMarkBegin, /* Start of a match */ + const char *zMarkEnd, /* End of a match */ + const char *zMarkGap, /* A gap between two matches */ + unsigned fSrchFlg /* Flags */ +){ + Search *p; + char *z; + int i; + + if( fSrchFlg & SRCHFLG_STATIC ){ + p = &gSearch; + search_end(p); + }else{ + p = fossil_malloc(sizeof(*p)); + memset(p, 0, sizeof(*p)); + } + p->zPattern = z = mprintf("%s", zPattern); + p->zMarkBegin = mprintf("%s", zMarkBegin); + p->zMarkEnd = mprintf("%s", zMarkEnd); + p->zMarkGap = mprintf("%s", zMarkGap); + p->fSrchFlg = fSrchFlg; + blob_init(&p->snip, 0, 0); + while( *z && p->nTerma[p->nTerm].z = z; + for(i=1; ISALNUM(z[i]); i++){} + p->a[p->nTerm].n = i; + z += i; + p->nTerm++; + } + return p; +} + + +/* +** Append n bytes of text to snippet zTxt. Encode the text appropriately. 
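A hypothetical sketch of how the structures above fit together end to end (the marker strings are the defaults used by the test-match command further down; search_match() is defined later in this file):

    /* Compile a two-term pattern and run it against a single document. */
    const char *azDoc[1];
    Search *p = search_init("merge conflict", "[[", "]]", " ... ", 0);
    azDoc[0] = "Notes on resolving a merge conflict by hand";
    if( search_match(p, 1, azDoc)>0 ){
      /* p->iScore holds the score; p->snip holds a marked-up snippet,
      ** roughly "Notes on resolving a [[merge]] [[conflict]] by hand". */
      fossil_print("%s\n", blob_str(&p->snip));
    }
    search_end(p);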
+*/ +static void snippet_text_append( + Search *p, /* The search context */ + Blob *pSnip, /* Append to this snippet */ + const char *zTxt, /* Text to append */ + int n /* How many bytes to append */ +){ + if( n>0 ){ + if( p->fSrchFlg & SRCHFLG_HTML ){ + blob_appendf(pSnip, "%#h", n, zTxt); + }else{ + blob_append(pSnip, zTxt, n); + } + } +} /* -** Compare a search pattern against an input string and return a score. +** Compare a search pattern against one or more input strings which +** collectively comprise a document. Return a match score. Any +** postive value means there was a match. Zero means that one or +** more terms are missing. +** +** The score and a snippet are record for future use. ** ** Scoring: ** * All terms must match at least once or the score is zero -** * 10 bonus points if the first occurrance is an exact match -** * 1 additional point for each subsequent match of the same word -** * Extra points of two consecutive words of the pattern are consecutive +** * One point for each matching term +** * Extra points if consecutive words of the pattern are consecutive ** in the document */ -int search_score(Search *p, const char *zDoc){ - int iPrev = 999; - int score = 10; - int iBonus = 0; - int i, j; - unsigned char seen[8]; - - memset(seen, 0, sizeof(seen)); - for(i=0; zDoc[i]; i++){ - char c = zDoc[i]; - if( isBoundary[c&0xff] ) continue; - for(j=0; jnTerm; j++){ - int n = p->a[j].n; - if( sqlite3_strnicmp(p->a[j].z, &zDoc[i], n)==0 ){ - score += 1; - if( !seen[j] ){ - if( isBoundary[zDoc[i+n]&0xff] ) score += 10; - seen[j] = 1; - } - if( j==iPrev+1 ){ - score += iBonus; - } - i += n-1; - iPrev = j; - iBonus = 50; - break; - } - } - iBonus /= 2; - while( !isBoundary[zDoc[i]&0xff] ){ i++; } - } - - /* Every term must be seen or else the score is zero */ - for(j=0; jnTerm; j++){ - if( !seen[j] ) return 0; - } - +static int search_match( + Search *p, /* Search pattern and flags */ + int nDoc, /* Number of strings in this document */ + const char **azDoc /* Text of each string */ +){ + int score; /* Final score */ + int i; /* Offset into current document */ + int ii; /* Loop counter */ + int j; /* Loop over search terms */ + int k; /* Loop over prior terms */ + int iWord = 0; /* Current word number */ + int iDoc; /* Current document number */ + int wantGap = 0; /* True if a zMarkGap is wanted */ + const char *zDoc; /* Current document text */ + const int CTX = 50; /* Amount of snippet context */ + int anMatch[SEARCH_MAX_TERM]; /* Number of terms in best match */ + int aiBestDoc[SEARCH_MAX_TERM]; /* Document containing best match */ + int aiBestOfst[SEARCH_MAX_TERM]; /* Byte offset to start of best match */ + int aiLastDoc[SEARCH_MAX_TERM]; /* Document containing most recent match */ + int aiLastOfst[SEARCH_MAX_TERM]; /* Byte offset to the most recent match */ + int aiWordIdx[SEARCH_MAX_TERM]; /* Word index of most recent match */ + + memset(anMatch, 0, sizeof(anMatch)); + memset(aiWordIdx, 0xff, sizeof(aiWordIdx)); + for(iDoc=0; iDocnTerm; j++){ + int n = p->a[j].n; + if( sqlite3_strnicmp(p->a[j].z, &zDoc[i], n)==0 + && (!ISALNUM(zDoc[i+n]) || p->a[j].z[n]=='*') + ){ + aiWordIdx[j] = iWord; + aiLastDoc[j] = iDoc; + aiLastOfst[j] = i; + for(k=1; j-k>=0 && anMatch[j-k] && aiWordIdx[j-k]==iWord-k; k++){} + for(ii=0; iinTerm; j++) score *= anMatch[j]; + blob_reset(&p->snip); + p->iScore = score; + if( score==0 ) return score; + + + /* Prepare a snippet that describes the matching text. 
+ */ + while(1){ + int iOfst; + int iTail; + int iBest; + for(ii=0; iinTerm && anMatch[ii]==0; ii++){} + if( ii>=p->nTerm ) break; /* This is where the loop exits */ + iBest = ii; + iDoc = aiBestDoc[ii]; + iOfst = aiBestOfst[ii]; + for(; iinTerm; ii++){ + if( anMatch[ii]==0 ) continue; + if( aiBestDoc[ii]>iDoc ) continue; + if( aiBestOfst[ii]>iOfst ) continue; + iDoc = aiBestDoc[ii]; + iOfst = aiBestOfst[ii]; + iBest = ii; + } + iTail = iOfst + p->a[iBest].n; + anMatch[iBest] = 0; + for(ii=0; iinTerm; ii++){ + if( anMatch[ii]==0 ) continue; + if( aiBestDoc[ii]!=iDoc ) continue; + if( aiBestOfst[ii]<=iTail+CTX*2 ){ + if( iTaila[ii].n ){ + iTail = aiBestOfst[ii]+p->a[ii].n; + } + anMatch[ii] = 0; + ii = -1; + continue; + } + } + zDoc = azDoc[iDoc]; + iOfst -= CTX; + if( iOfst<0 ) iOfst = 0; + while( iOfst>0 && ISALNUM(zDoc[iOfst-1]) ) iOfst--; + while( zDoc[iOfst] && !ISALNUM(zDoc[iOfst]) ) iOfst++; + for(ii=0; ii0 || wantGap ) blob_append(&p->snip, p->zMarkGap, -1); + wantGap = zDoc[iTail]!=0; + zDoc += iOfst; + iTail -= iOfst; + + /* Add a snippet segment using characters iOfst..iOfst+iTail from zDoc */ + for(i=0; inTerm; j++){ + int n = p->a[j].n; + if( sqlite3_strnicmp(p->a[j].z, &zDoc[i], n)==0 + && (!ISALNUM(zDoc[i+n]) || p->a[j].z[n]=='*') + ){ + snippet_text_append(p, &p->snip, zDoc, i); + zDoc += i; + iTail -= i; + blob_append(&p->snip, p->zMarkBegin, -1); + if( p->a[j].z[n]=='*' ){ + while( ISALNUM(zDoc[n]) ) n++; + } + snippet_text_append(p, &p->snip, zDoc, n); + zDoc += n; + iTail -= n; + blob_append(&p->snip, p->zMarkEnd, -1); + i = -1; + break; + } /* end-if */ + } /* end for(j) */ + if( jnTerm ){ + while( ISALNUM(zDoc[i]) && isnip, zDoc, iTail); + } + if( wantGap ) blob_append(&p->snip, p->zMarkGap, -1); return score; } /* -** This is an SQLite function that scores its input using -** a pre-computed pattern. +** COMMAND: test-match +** +** Usage: fossil test-match SEARCHSTRING FILE1 FILE2 ... +*/ +void test_match_cmd(void){ + Search *p; + int i; + Blob x; + int score; + char *zDoc; + int flg = 0; + char *zBegin = (char*)find_option("begin",0,1); + char *zEnd = (char*)find_option("end",0,1); + char *zGap = (char*)find_option("gap",0,1); + if( find_option("html",0,0)!=0 ) flg |= SRCHFLG_HTML; + if( find_option("static",0,0)!=0 ) flg |= SRCHFLG_STATIC; + verify_all_options(); + if( g.argc<4 ) usage("SEARCHSTRING FILE1..."); + if( zBegin==0 ) zBegin = "[["; + if( zEnd==0 ) zEnd = "]]"; + if( zGap==0 ) zGap = " ... "; + p = search_init(g.argv[2], zBegin, zEnd, zGap, flg); + for(i=3; iiScore); + blob_reset(&x); + if( score ){ + fossil_print("%.78c\n%s\n%.78c\n\n", '=', blob_str(&p->snip), '='); + } + } + search_end(p); +} + +/* +** An SQL function to initialize the global search pattern: +** +** search_init(PATTERN,BEGIN,END,GAP,FLAGS) +** +** All arguments are optional. +*/ +static void search_init_sqlfunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const char *zPattern = 0; + const char *zBegin = ""; + const char *zEnd = ""; + const char *zGap = " ... 
"; + unsigned int flg = SRCHFLG_HTML; + switch( argc ){ + default: + flg = (unsigned int)sqlite3_value_int(argv[4]); + case 4: + zGap = (const char*)sqlite3_value_text(argv[3]); + case 3: + zEnd = (const char*)sqlite3_value_text(argv[2]); + case 2: + zBegin = (const char*)sqlite3_value_text(argv[1]); + case 1: + zPattern = (const char*)sqlite3_value_text(argv[0]); + } + if( zPattern && zPattern[0] ){ + search_init(zPattern, zBegin, zEnd, zGap, flg | SRCHFLG_STATIC); + }else{ + search_end(&gSearch); + } +} + +/* +** Try to match the input text against the search parameters set up +** by the previous search_init() call. Remember the results globally. +** Return non-zero on a match and zero on a miss. +*/ +static void search_match_sqlfunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const char *azDoc[5]; + int nDoc; + int rc; + for(nDoc=0; nDoc0 ){ + sqlite3_result_text(context, blob_str(&gSearch.snip), -1, fossil_free); + blob_init(&gSearch.snip, 0, 0); + } +} + +/* +** This is an SQLite function that computes the searchable text. +** It is a wrapper around the search_stext() routine. See the +** search_stext() routine for further detail. +*/ +static void search_stext_sqlfunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const char *zType = (const char*)sqlite3_value_text(argv[0]); + int rid = sqlite3_value_int(argv[1]); + const char *zName = (const char*)sqlite3_value_text(argv[2]); + sqlite3_result_text(context, search_stext_cached(zType[0],rid,zName,0), -1, + SQLITE_TRANSIENT); +} +static void search_title_sqlfunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const char *zType = (const char*)sqlite3_value_text(argv[0]); + int rid = sqlite3_value_int(argv[1]); + const char *zName = (const char*)sqlite3_value_text(argv[2]); + int nHdr; + char *z = search_stext_cached(zType[0], rid, zName, &nHdr); + if( nHdr || zType[0]!='d' ){ + sqlite3_result_text(context, z, nHdr, SQLITE_TRANSIENT); + }else{ + sqlite3_result_value(context, argv[2]); + } +} +static void search_body_sqlfunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const char *zType = (const char*)sqlite3_value_text(argv[0]); + int rid = sqlite3_value_int(argv[1]); + const char *zName = (const char*)sqlite3_value_text(argv[2]); + int nHdr; + char *z = search_stext_cached(zType[0], rid, zName, &nHdr); + sqlite3_result_text(context, z+nHdr+1, -1, SQLITE_TRANSIENT); +} + +/* +** Encode a string for use as a query parameter in a URL +*/ +static void search_urlencode_sqlfunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + char *z = mprintf("%T",sqlite3_value_text(argv[0])); + sqlite3_result_text(context, z, -1, fossil_free); } /* ** Register the "score()" SQL function to score its input text ** using the given Search object. Once this function is registered, ** do not delete the Search object. 
*/ -void search_sql_setup(Search *p){ - sqlite3_create_function(g.db, "score", 1, SQLITE_UTF8, p, +void search_sql_setup(sqlite3 *db){ + static int once = 0; + if( once++ ) return; + sqlite3_create_function(db, "search_match", -1, SQLITE_UTF8, 0, + search_match_sqlfunc, 0, 0); + sqlite3_create_function(db, "search_score", 0, SQLITE_UTF8, 0, search_score_sqlfunc, 0, 0); + sqlite3_create_function(db, "search_snippet", 0, SQLITE_UTF8, 0, + search_snippet_sqlfunc, 0, 0); + sqlite3_create_function(db, "search_init", -1, SQLITE_UTF8, 0, + search_init_sqlfunc, 0, 0); + sqlite3_create_function(db, "stext", 3, SQLITE_UTF8, 0, + search_stext_sqlfunc, 0, 0); + sqlite3_create_function(db, "title", 3, SQLITE_UTF8, 0, + search_title_sqlfunc, 0, 0); + sqlite3_create_function(db, "body", 3, SQLITE_UTF8, 0, + search_body_sqlfunc, 0, 0); + sqlite3_create_function(db, "urlencode", 1, SQLITE_UTF8, 0, + search_urlencode_sqlfunc, 0, 0); } /* ** Testing the search function. ** -** COMMAND: search -** %fossil search pattern... +** COMMAND: search* +** %fossil search [-all|-a] [-limit|-n #] [-width|-W #] pattern... +** +** Search for timeline entries matching all words +** provided on the command line. Whole-word matches +** scope more highly than partial matches. ** -** Search for timeline entries matching the pattern. +** Outputs, by default, some top-N fraction of the +** results. The -all option can be used to output +** all matches, regardless of their search score. +** The -limit option can be used to limit the number +** of entries returned. The -width option can be +** used to set the output width used when printing +** matches. */ void search_cmd(void){ - Search *p; Blob pattern; int i; + Blob sql = empty_blob; Stmt q; int iBest; + char fAll = NULL != find_option("all", "a", 0); /* If set, do not lop + off the end of the + results. */ + const char *zLimit = find_option("limit","n",1); + const char *zWidth = find_option("width","W",1); + int nLimit = zLimit ? atoi(zLimit) : -1000; /* Max number of matching + lines/entries to list */ + int width; + if( zWidth ){ + width = atoi(zWidth); + if( (width!=0) && (width<=20) ){ + fossil_fatal("-W|--width value must be >20 or 0"); + } + }else{ + width = -1; + } db_must_be_within_tree(); if( g.argc<2 ) return; blob_init(&pattern, g.argv[2], -1); for(i=3; i0;" + " WHERE blob.rid=event.objid" + " AND search_match(coalesce(ecomment,comment));" ); iBest = db_int(0, "SELECT max(x) FROM srch"); - db_prepare(&q, - "SELECT rid, uuid, date, comment, 0, 0 FROM srch" - " WHERE x>%d ORDER BY x DESC, date DESC", - iBest/3 + blob_append(&sql, + "SELECT rid, uuid, date, comment, 0, 0 FROM srch " + "WHERE 1 ", -1); + if(!fAll){ + blob_append_sql(&sql,"AND x>%d ", iBest/3); + } + blob_append(&sql, "ORDER BY x DESC, date DESC ", -1); + db_prepare(&q, "%s", blob_sql_text(&sql)); + blob_reset(&sql); + print_timeline(&q, nLimit, width, 0); + db_finalize(&q); +} + +#if INTERFACE +/* What to search for */ +#define SRCH_CKIN 0x0001 /* Search over check-in comments */ +#define SRCH_DOC 0x0002 /* Search over embedded documents */ +#define SRCH_TKT 0x0004 /* Search over tickets */ +#define SRCH_WIKI 0x0008 /* Search over wiki */ +#define SRCH_ALL 0x000f /* Search over everything */ +#endif + +/* +** Remove bits from srchFlags which are disallowed by either the +** current server configuration or by user permissions. 
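In other words, callers first say what they would like to search and then let this routine veto categories. A small sketch of the expected calling pattern, mirroring the calls made later in this file:

    unsigned int srchFlags = SRCH_CKIN|SRCH_WIKI;  /* what the caller wants     */
    srchFlags = search_restrict(srchFlags);        /* minus anything disallowed */
    if( srchFlags==0 ){
      /* Neither category is enabled for this user/server; show nothing. */
    }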
+*/ +unsigned int search_restrict(unsigned int srchFlags){ + static unsigned int knownGood = 0; + static unsigned int knownBad = 0; + static const struct { unsigned m; const char *zKey; } aSetng[] = { + { SRCH_CKIN, "search-ci" }, + { SRCH_DOC, "search-doc" }, + { SRCH_TKT, "search-tkt" }, + { SRCH_WIKI, "search-wiki" }, + }; + int i; + if( g.perm.Read==0 ) srchFlags &= ~(SRCH_CKIN|SRCH_DOC); + if( g.perm.RdTkt==0 ) srchFlags &= ~(SRCH_TKT); + if( g.perm.RdWiki==0 ) srchFlags &= ~(SRCH_WIKI); + for(i=0; i", "", " ... ", + SRCHFLG_STATIC|SRCHFLG_HTML); + if( (srchFlags & SRCH_DOC)!=0 ){ + char *zDocGlob = db_get("doc-glob",""); + char *zDocBr = db_get("doc-branch","trunk"); + if( zDocGlob && zDocGlob[0] && zDocBr && zDocBr[0] ){ + db_multi_exec( + "CREATE VIRTUAL TABLE IF NOT EXISTS temp.foci USING files_of_checkin;" + ); + db_multi_exec( + "INSERT INTO x(label,url,score,id,date,snip)" + " SELECT printf('Document: %%s',title('d',blob.rid,foci.filename))," + " printf('/doc/%T/%%s',foci.filename)," + " search_score()," + " 'd'||blob.rid," + " (SELECT datetime(event.mtime) FROM event" + " WHERE objid=symbolic_name_to_rid('trunk'))," + " search_snippet()" + " FROM foci CROSS JOIN blob" + " WHERE checkinID=symbolic_name_to_rid('trunk')" + " AND blob.uuid=foci.uuid" + " AND search_match(title('d',blob.rid,foci.filename)," + " body('d',blob.rid,foci.filename))" + " AND %z", + zDocBr, glob_expr("foci.filename", zDocGlob) + ); + } + } + if( (srchFlags & SRCH_WIKI)!=0 ){ + db_multi_exec( + "WITH wiki(name,rid,mtime) AS (" + " SELECT substr(tagname,6), tagxref.rid, max(tagxref.mtime)" + " FROM tag, tagxref" + " WHERE tag.tagname GLOB 'wiki-*'" + " AND tagxref.tagid=tag.tagid" + " GROUP BY 1" + ")" + "INSERT INTO x(label,url,score,id,date,snip)" + " SELECT printf('Wiki: %%s',name)," + " printf('/wiki?name=%%s',urlencode(name))," + " search_score()," + " 'w'||rid," + " datetime(mtime)," + " search_snippet()" + " FROM wiki" + " WHERE search_match(title('w',rid,name),body('w',rid,name));" + ); + } + if( (srchFlags & SRCH_CKIN)!=0 ){ + db_multi_exec( + "WITH ckin(uuid,rid,mtime) AS (" + " SELECT blob.uuid, event.objid, event.mtime" + " FROM event, blob" + " WHERE event.type='ci'" + " AND blob.rid=event.objid" + ")" + "INSERT INTO x(label,url,score,id,date,snip)" + " SELECT printf('Check-in [%%.10s] on %%s',uuid,datetime(mtime))," + " printf('/timeline?c=%%s&n=8&y=ci',uuid)," + " search_score()," + " 'c'||rid," + " datetime(mtime)," + " search_snippet()" + " FROM ckin" + " WHERE search_match('',body('c',rid,NULL));" + ); + } + if( (srchFlags & SRCH_TKT)!=0 ){ + db_multi_exec( + "INSERT INTO x(label,url,score,id,date,snip)" + " SELECT printf('Ticket: %%s (%%s)',title('t',tkt_id,NULL)," + "datetime(tkt_mtime))," + " printf('/tktview/%%.20s',tkt_uuid)," + " search_score()," + " 't'||tkt_id," + " datetime(tkt_mtime)," + " search_snippet()" + " FROM ticket" + " WHERE search_match(title('t',tkt_id,NULL),body('t',tkt_id,NULL));" + ); + } +} + +/* +** Number of significant bits in a u32 +*/ +static int nbits(u32 x){ + int n = 0; + while( x ){ n++; x >>= 1; } + return n; +} + +/* +** Implemenation of the rank() function used with rank(matchinfo(*,'pcsx')). 
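For reference, the matchinfo(X,'pcsx') blob that this function consumes is laid out as follows (per the documented FTS4 matchinfo format; the arithmetic matches the length check in the code below). For a query of nTerm phrases over an FTS4 table with nCol columns:

    aVal[0]                'p'  number of matchable phrases in the query (nTerm)
    aVal[1]                'c'  number of user-defined columns (nCol)
    aVal[2 .. 2+nCol-1]    's'  longest phrase-match sequence, one per column
    aVal[2+nCol .. ]       'x'  three integers per (phrase, column) pair

    expected size: 2 + nCol + 3*nCol*nTerm  32-bit unsigned integers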
+*/ +static void search_rank_sqlfunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const unsigned *aVal = (unsigned int*)sqlite3_value_blob(argv[0]); + int nVal = sqlite3_value_bytes(argv[0])/4; + int nCol; /* Number of columns in the index */ + int nTerm; /* Number of search terms in the query */ + int i, j; /* Loop counter */ + double r = 0.0; /* Score */ + const unsigned *aX, *aS; + + if( nVal<2 ) return; + nTerm = aVal[0]; + nCol = aVal[1]; + if( nVal<2+3*nCol*nTerm+nCol ) return; + aS = aVal+2; + aX = aS+nCol; + for(j=0; j0 ){ + x = 0.0; + for(i=0; i','',' ... ',-1,35)" + " FROM ftsidx CROSS JOIN ftsdocs" + " WHERE ftsidx MATCH %Q" + " AND ftsdocs.rowid=ftsidx.docid", + zPattern + ); + if( srchFlags!=SRCH_ALL ){ + const char *zSep = " AND ("; + static const struct { unsigned m; char c; } aMask[] = { + { SRCH_CKIN, 'c' }, + { SRCH_DOC, 'd' }, + { SRCH_TKT, 't' }, + { SRCH_WIKI, 'w' }, + }; + int i; + for(i=0; iTEXT" where TEXT contains +** no white-space or punctuation, then return the length of the mark. +*/ +static int isSnippetMark(const char *z){ + int n; + if( strncmp(z,"",6)!=0 ) return 0; + n = 6; + while( fossil_isalnum(z[n]) ) n++; + if( strncmp(&z[n],"",7)!=0 ) return 0; + return n+7; +} + +/* +** Return a copy of zSnip (in memory obtained from fossil_malloc()) that +** has all "<" characters, other than those on and , +** converted into "<". This is similar to htmlize() except that +** and are preserved. +*/ +static char *cleanSnippet(const char *zSnip){ + int i; + int n = 0; + char *z; + for(i=0; zSnip[i]; i++) if( zSnip[i]=='<' ) n++; + z = fossil_malloc( i+n*4+1 ); + i = 0; + while( zSnip[0] ){ + if( zSnip[0]=='<' ){ + n = isSnippetMark(zSnip); + if( n ){ + memcpy(&z[i], zSnip, n); + zSnip += n; + i += n; + continue; + }else{ + memcpy(&z[i], "<", 4); + i += 4; + zSnip++; + } + }else{ + z[i++] = zSnip[0]; + zSnip++; + } + } + z[i] = 0; + return z; +} + + +/* +** This routine generates web-page output for a search operation. +** Other web-pages can invoke this routine to add search results +** in the middle of the page. +** +** Return the number of rows. +*/ +int search_run_and_output( + const char *zPattern, /* The query pattern */ + unsigned int srchFlags, /* What to search over */ + int fDebug /* Extra debugging output */ +){ + Stmt q; + int nRow = 0; + + srchFlags = search_restrict(srchFlags); + if( srchFlags==0 ) return 0; + search_sql_setup(g.db); + add_content_sql_commands(g.db); + db_multi_exec( + "CREATE TEMP TABLE x(label,url,score,id,date,snip);" ); - print_timeline(&q, 1000); + if( !search_index_exists() ){ + search_fullscan(zPattern, srchFlags); + }else{ + search_update_index(srchFlags); + search_indexed(zPattern, srchFlags); + } + db_prepare(&q, "SELECT url, snip, label, score, id" + " FROM x" + " ORDER BY score DESC, date DESC;"); + while( db_step(&q)==SQLITE_ROW ){ + const char *zUrl = db_column_text(&q, 0); + const char *zSnippet = db_column_text(&q, 1); + const char *zLabel = db_column_text(&q, 2); + if( nRow==0 ){ + @
        + } + nRow++; + @
      1. %h(zLabel) + if( fDebug ){ + @ (%e(db_column_double(&q,3)), %s(db_column_text(&q,4))) + } + @
        %z(cleanSnippet(zSnippet))

      2. + } db_finalize(&q); + if( nRow ){ + @
      + } + return nRow; +} + +/* +** Generate some HTML for doing search. At a minimum include the +** Search-Text entry form. If the "s" query parameter is present, also +** show search results. +** +** The srchFlags parameter restricts the set of documents to be searched. +** srchFlags should normally be either a single search category or all +** categories. Any srchFlags with two or more bits set +** is treated like SRCH_ALL for display purposes. +** +** This routine automatically restricts srchFlag according to user +** permissions and the server configuration. The entry box is shown +** disabled if srchFlags is 0 after these restrictions are applied. +** +** If useYparam is true, then this routine also looks at the y= query +** parameter for further search restrictions. +*/ +void search_screen(unsigned srchFlags, int useYparam){ + const char *zType = 0; + const char *zClass = 0; + const char *zDisable1; + const char *zDisable2; + const char *zPattern; + int fDebug = PB("debug"); + srchFlags = search_restrict(srchFlags); + switch( srchFlags ){ + case SRCH_CKIN: zType = " Check-ins"; zClass = "Ckin"; break; + case SRCH_DOC: zType = " Docs"; zClass = "Doc"; break; + case SRCH_TKT: zType = " Tickets"; zClass = "Tkt"; break; + case SRCH_WIKI: zType = " Wiki"; zClass = "Wiki"; break; + } + if( srchFlags==0 ){ + zDisable1 = " disabled"; + zDisable2 = " disabled"; + zPattern = ""; + }else{ + zDisable1 = " autofocus"; + zDisable2 = ""; + zPattern = PD("s",""); + } + @
      + if( zClass ){ + @
      + }else{ + @
      + } + @ + if( useYparam && (srchFlags & (srchFlags-1))!=0 && useYparam ){ + static const struct { char *z; char *zNm; unsigned m; } aY[] = { + { "all", "All", SRCH_ALL }, + { "c", "Check-ins", SRCH_CKIN }, + { "d", "Docs", SRCH_DOC }, + { "t", "Tickets", SRCH_TKT }, + { "w", "Wiki", SRCH_WIKI }, + }; + const char *zY = PD("y","all"); + unsigned newFlags = srchFlags; + int i; + @ + srchFlags = newFlags; + } + if( fDebug ){ + @ + } + @ + if( srchFlags==0 ){ + @

      Search is disabled

      + } + @
      + while( fossil_isspace(zPattern[0]) ) zPattern++; + if( zPattern[0] ){ + if( zClass ){ + @
      + }else{ + @
      + } + if( search_run_and_output(zPattern, srchFlags, fDebug)==0 ){ + @

      No matches for: %h(zPattern)

      + } + @
      + } +} + +/* +** WEBPAGE: search +** +** Search for check-in comments, documents, tickets, or wiki that +** match a user-supplied pattern. +** +** s=PATTERN Specify the full-text pattern to search for +** y=TYPE What to search. +** c -> check-ins +** d -> documentation +** t -> tickets +** w -> wiki +** all -> everything +*/ +void search_page(void){ + login_check_credentials(); + style_header("Search"); + search_screen(SRCH_ALL, 1); + style_footer(); +} + + +/* +** This is a helper function for search_stext(). Writing into pOut +** the search text obtained from pIn according to zMimetype. +** +** The title of the document is the first line of text. All subsequent +** lines are the body. If the document has no title, the first line +** is blank. +*/ +static void get_stext_by_mimetype( + Blob *pIn, + const char *zMimetype, + Blob *pOut +){ + Blob html, title; + blob_init(&html, 0, 0); + blob_init(&title, 0, 0); + if( zMimetype==0 ) zMimetype = "text/plain"; + if( fossil_strcmp(zMimetype,"text/x-fossil-wiki")==0 ){ + Blob tail; + blob_init(&tail, 0, 0); + if( wiki_find_title(pIn, &title, &tail) ){ + blob_appendf(pOut, "%s\n", blob_str(&title)); + wiki_convert(&tail, &html, 0); + blob_reset(&tail); + }else{ + blob_append(pOut, "\n", 1); + wiki_convert(pIn, &html, 0); + } + html_to_plaintext(blob_str(&html), pOut); + }else if( fossil_strcmp(zMimetype,"text/x-markdown")==0 ){ + markdown_to_html(pIn, &title, &html); + if( blob_size(&title) ){ + blob_appendf(pOut, "%s\n", blob_str(&title)); + }else{ + blob_append(pOut, "\n", 1); + } + html_to_plaintext(blob_str(&html), pOut); + }else if( fossil_strcmp(zMimetype,"text/html")==0 ){ + if( doc_is_embedded_html(pIn, &title) ){ + blob_appendf(pOut, "%s\n", blob_str(&title)); + } + html_to_plaintext(blob_str(pIn), pOut); + }else{ + blob_append(pOut, "\n", 1); + blob_append(pOut, blob_buffer(pIn), blob_size(pIn)); + } + blob_reset(&html); + blob_reset(&title); +} + +/* +** Query pQuery is pointing at a single row of output. Append a text +** representation of every text-compatible column to pAccum. 
+*/ +static void append_all_ticket_fields(Blob *pAccum, Stmt *pQuery, int iTitle){ + int n = db_column_count(pQuery); + int i; + const char *zMime = 0; + if( iTitle>=0 && iTitlezWiki, -1); + get_stext_by_mimetype(&wiki, wiki_filter_mimetypes(pWiki->zMimetype), + pOut); + blob_reset(&wiki); + manifest_destroy(pWiki); + break; + } + case 'c': { /* Check-in Comments */ + static Stmt q; + static int isPlainText = -1; + db_static_prepare(&q, + "SELECT coalesce(ecomment,comment)" + " ||' (user: '||coalesce(euser,user,'?')" + " ||', tags: '||" + " (SELECT group_concat(substr(tag.tagname,5),',')" + " FROM tag, tagxref" + " WHERE tagname GLOB 'sym-*' AND tag.tagid=tagxref.tagid" + " AND tagxref.rid=event.objid AND tagxref.tagtype>0)" + " ||')'" + " FROM event WHERE objid=:x AND type='ci'"); + if( isPlainText<0 ){ + isPlainText = db_get_boolean("timeline-plaintext",0); + } + db_bind_int(&q, ":x", rid); + if( db_step(&q)==SQLITE_ROW ){ + blob_append(pOut, "\n", 1); + if( isPlainText ){ + db_column_blob(&q, 0, pOut); + }else{ + Blob x; + blob_init(&x,0,0); + db_column_blob(&q, 0, &x); + get_stext_by_mimetype(&x, "text/x-fossil-wiki", pOut); + blob_reset(&x); + } + } + db_reset(&q); + break; + } + case 't': { /* Tickets */ + static Stmt q1; + static int iTitle = -1; + db_static_prepare(&q1, "SELECT * FROM ticket WHERE tkt_id=:rid"); + db_bind_int(&q1, ":rid", rid); + if( db_step(&q1)==SQLITE_ROW ){ + if( iTitle<0 ){ + int n = db_column_count(&q1); + for(iTitle=0; iTitle0 ){ + blob_reset(&cache.stext); + }else{ + blob_init(&cache.stext,0,0); + } + cache.cType = cType; + cache.rid = rid; + if( cType==0 ) return 0; + search_stext(cType, rid, zName, &cache.stext); + z = blob_str(&cache.stext); + for(i=0; z[i] && z[i]!='\n'; i++){} + cache.nTitle = i; + } + if( pnTitle ) *pnTitle = cache.nTitle; + return blob_str(&cache.stext); +} + +/* +** COMMAND: test-search-stext +** +** Usage: fossil test-search-stext TYPE ARG1 ARG2 +*/ +void test_search_stext(void){ + Blob out; + db_find_and_open_repository(0,0); + if( g.argc!=5 ) usage("TYPE RID NAME"); + search_stext(g.argv[2][0], atoi(g.argv[3]), g.argv[4], &out); + fossil_print("%s\n",blob_str(&out)); + blob_reset(&out); +} + +/* +** COMMAND: test-convert-stext +** +** Usage: fossil test-convert-stext FILE MIMETYPE +** +** Read the content of FILE and convert it to stext according to MIMETYPE. +** Send the result to standard output. 
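For example, one might run the converter over a Markdown or Fossil-wiki file (file names here are hypothetical; the mimetypes are the ones handled by get_stext_by_mimetype() above):

    fossil test-convert-stext notes.md text/x-markdown
    fossil test-convert-stext notes.wiki text/x-fossil-wiki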
+*/ +void test_convert_stext(void){ + Blob in, out; + db_find_and_open_repository(0,0); + if( g.argc!=4 ) usage("FILENAME MIMETYPE"); + blob_read_from_file(&in, g.argv[2]); + blob_init(&out, 0, 0); + get_stext_by_mimetype(&in, g.argv[3], &out); + fossil_print("%s\n",blob_str(&out)); + blob_reset(&in); + blob_reset(&out); +} + +/* The schema for the full-text index +*/ +static const char zFtsSchema[] = +@ -- One entry for each possible search result +@ CREATE TABLE IF NOT EXISTS "%w".ftsdocs( +@ rowid INTEGER PRIMARY KEY, -- Maps to the ftsidx.docid +@ type CHAR(1), -- Type of document +@ rid INTEGER, -- BLOB.RID or TAG.TAGID for the document +@ name TEXT, -- Additional document description +@ idxed BOOLEAN, -- True if currently in the index +@ label TEXT, -- Label to print on search results +@ url TEXT, -- URL to access this document +@ mtime DATE, -- Date when document created +@ bx TEXT, -- Temporary "body" content cache +@ UNIQUE(type,rid) +@ ); +@ CREATE INDEX "%w".ftsdocIdxed ON ftsdocs(type,rid,name) WHERE idxed==0; +@ CREATE INDEX "%w".ftsdocName ON ftsdocs(name) WHERE type='w'; +@ CREATE VIEW IF NOT EXISTS "%w".ftscontent AS +@ SELECT rowid, type, rid, name, idxed, label, url, mtime, +@ title(type,rid,name) AS 'title', body(type,rid,name) AS 'body' +@ FROM ftsdocs; +@ CREATE VIRTUAL TABLE IF NOT EXISTS "%w".ftsidx +@ USING fts4(content="ftscontent", title, body%s); +; +static const char zFtsDrop[] = +@ DROP TABLE IF EXISTS "%w".ftsidx; +@ DROP VIEW IF EXISTS "%w".ftscontent; +@ DROP TABLE IF EXISTS "%w".ftsdocs; +; + +/* +** Create or drop the tables associated with a full-text index. +*/ +static int searchIdxExists = -1; +void search_create_index(void){ + const char *zDb = db_name("repository"); + int useStemmer = db_get_boolean("search-stemmer",0); + const char *zExtra = useStemmer ? ",tokenize=porter" : ""; + search_sql_setup(g.db); + db_multi_exec(zFtsSchema/*works-like:"%w%w%w%w%w%s"*/, + zDb, zDb, zDb, zDb, zDb, zExtra/*safe-for-%s*/); + searchIdxExists = 1; +} +void search_drop_index(void){ + const char *zDb = db_name("repository"); + db_multi_exec(zFtsDrop/*works-like:"%w%w%w"*/, zDb, zDb, zDb); + searchIdxExists = 0; +} + +/* +** Return true if the full-text search index exists +*/ +int search_index_exists(void){ + if( searchIdxExists<0 ){ + searchIdxExists = db_table_exists("repository","ftsdocs"); + } + return searchIdxExists; +} + +/* +** Fill the FTSDOCS table with unindexed entries for everything +** in the repository. This uses INSERT OR IGNORE so entries already +** in FTSDOCS are unchanged. +*/ +void search_fill_index(void){ + if( !search_index_exists() ) return; + search_sql_setup(g.db); + db_multi_exec( + "INSERT OR IGNORE INTO ftsdocs(type,rid,idxed)" + " SELECT 'c', objid, 0 FROM event WHERE type='ci';" + ); + db_multi_exec( + "WITH latest_wiki(rid,name,mtime) AS (" + " SELECT tagxref.rid, substr(tag.tagname,6), max(tagxref.mtime)" + " FROM tag, tagxref" + " WHERE tag.tagname GLOB 'wiki-*'" + " AND tagxref.tagid=tag.tagid" + " AND tagxref.value>0" + " GROUP BY 2" + ") INSERT OR IGNORE INTO ftsdocs(type,rid,name,idxed)" + " SELECT 'w', rid, name, 0 FROM latest_wiki;" + ); + db_multi_exec( + "INSERT OR IGNORE INTO ftsdocs(type,rid,idxed)" + " SELECT 't', tkt_id, 0 FROM ticket;" + ); +} + +/* +** The document described by cType,rid,zName is about to be added or +** updated. If the document has already been indexed, then unindex it +** now while we still have access to the old content. 
Add the document +** to the queue of documents that need to be indexed or reindexed. +*/ +void search_doc_touch(char cType, int rid, const char *zName){ + if( search_index_exists() ){ + char zType[2]; + zType[0] = cType; + zType[1] = 0; + search_sql_setup(g.db); + db_multi_exec( + "DELETE FROM ftsidx WHERE docid IN" + " (SELECT rowid FROM ftsdocs WHERE type=%Q AND rid=%d AND idxed)", + zType, rid + ); + db_multi_exec( + "REPLACE INTO ftsdocs(type,rid,name,idxed)" + " VALUES(%Q,%d,%Q,0)", + zType, rid, zName + ); + if( cType=='w' ){ + db_multi_exec( + "DELETE FROM ftsidx WHERE docid IN" + " (SELECT rowid FROM ftsdocs WHERE type='w' AND name=%Q AND idxed)", + zName + ); + db_multi_exec( + "DELETE FROM ftsdocs WHERE type='w' AND name=%Q AND rid!=%d", + zName, rid + ); + } + } +} + +/* +** If the doc-glob and doc-br settings are valid for document search +** and if the latest check-in on doc-br is in the unindexed set of +** check-ins, then update all 'd' entries in FTSDOCS that have +** changed. +*/ +static void search_update_doc_index(void){ + const char *zDocBr = db_get("doc-branch","trunk"); + int ckid = zDocBr ? symbolic_name_to_rid(zDocBr,"ci") : 0; + double rTime; + char *zBrUuid; + if( ckid==0 ) return; + if( !db_exists("SELECT 1 FROM ftsdocs WHERE type='c' AND rid=%d" + " AND NOT idxed", ckid) ) return; + + /* If we get this far, it means that changes to 'd' entries are + ** required. */ + rTime = db_double(0.0, "SELECT mtime FROM event WHERE objid=%d", ckid); + zBrUuid = db_text("","SELECT substr(uuid,1,20) FROM blob WHERE rid=%d",ckid); + db_multi_exec( + "CREATE TEMP TABLE current_docs(rid INTEGER PRIMARY KEY, name);" + "CREATE VIRTUAL TABLE IF NOT EXISTS temp.foci USING files_of_checkin;" + "INSERT OR IGNORE INTO current_docs(rid, name)" + " SELECT blob.rid, foci.filename FROM foci, blob" + " WHERE foci.checkinID=%d AND blob.uuid=foci.uuid" + " AND %z", + ckid, glob_expr("foci.filename", db_get("doc-glob","")) + ); + db_multi_exec( + "DELETE FROM ftsidx WHERE docid IN" + " (SELECT rowid FROM ftsdocs WHERE type='d'" + " AND rid NOT IN (SELECT rid FROM current_docs))" + ); + db_multi_exec( + "DELETE FROM ftsdocs WHERE type='d'" + " AND rid NOT IN (SELECT rid FROM current_docs)" + ); + db_multi_exec( + "INSERT OR IGNORE INTO ftsdocs(type,rid,name,idxed,label,bx,url,mtime)" + " SELECT 'd', rid, name, 0," + " title('d',rid,name)," + " body('d',rid,name)," + " printf('/doc/%q/%%s',urlencode(name))," + " %.17g" + " FROM current_docs", + zBrUuid, rTime + ); + db_multi_exec( + "INSERT INTO ftsidx(docid,title,body)" + " SELECT rowid, label, bx FROM ftsdocs WHERE type='d' AND NOT idxed" + ); + db_multi_exec( + "UPDATE ftsdocs SET" + " idxed=1," + " bx=NULL," + " label='Document: '||label" + " WHERE type='d' AND NOT idxed" + ); +} + +/* +** Deal with all of the unindexed 'c' terms in FTSDOCS +*/ +static void search_update_checkin_index(void){ + db_multi_exec( + "INSERT INTO ftsidx(docid,title,body)" + " SELECT rowid, '', body('c',rid,NULL) FROM ftsdocs" + " WHERE type='c' AND NOT idxed;" + ); + db_multi_exec( + "REPLACE INTO ftsdocs(rowid,idxed,type,rid,name,label,url,mtime)" + " SELECT ftsdocs.rowid, 1, 'c', ftsdocs.rid, NULL," + " printf('Check-in [%%.16s] on %%s',blob.uuid,datetime(event.mtime))," + " printf('/timeline?y=ci&c=%%.20s',blob.uuid)," + " event.mtime" + " FROM ftsdocs, event, blob" + " WHERE ftsdocs.type='c' AND NOT ftsdocs.idxed" + " AND event.objid=ftsdocs.rid" + " AND blob.rid=ftsdocs.rid" + ); +} + +/* +** Deal with all of the unindexed 't' terms in FTSDOCS +*/ +static void 
search_update_ticket_index(void){ + db_multi_exec( + "INSERT INTO ftsidx(docid,title,body)" + " SELECT rowid, title('t',rid,NULL), body('t',rid,NULL) FROM ftsdocs" + " WHERE type='t' AND NOT idxed;" + ); + if( db_changes()==0 ) return; + db_multi_exec( + "REPLACE INTO ftsdocs(rowid,idxed,type,rid,name,label,url,mtime)" + " SELECT ftsdocs.rowid, 1, 't', ftsdocs.rid, NULL," + " printf('Ticket: %%s (%%s)',title('t',tkt_id,null)," + " datetime(tkt_mtime))," + " printf('/tktview/%%.20s',tkt_uuid)," + " tkt_mtime" + " FROM ftsdocs, ticket" + " WHERE ftsdocs.type='t' AND NOT ftsdocs.idxed" + " AND ticket.tkt_id=ftsdocs.rid" + ); +} + +/* +** Deal with all of the unindexed 'w' terms in FTSDOCS +*/ +static void search_update_wiki_index(void){ + db_multi_exec( + "INSERT INTO ftsidx(docid,title,body)" + " SELECT rowid, title('w',rid,NULL),body('w',rid,NULL) FROM ftsdocs" + " WHERE type='w' AND NOT idxed;" + ); + if( db_changes()==0 ) return; + db_multi_exec( + "REPLACE INTO ftsdocs(rowid,idxed,type,rid,name,label,url,mtime)" + " SELECT ftsdocs.rowid, 1, 'w', ftsdocs.rid, ftsdocs.name," + " 'Wiki: '||ftsdocs.name," + " '/wiki?name='||urlencode(ftsdocs.name)," + " tagxref.mtime" + " FROM ftsdocs, tagxref" + " WHERE ftsdocs.type='w' AND NOT ftsdocs.idxed" + " AND tagxref.rid=ftsdocs.rid" + ); +} + +/* +** Deal with all of the unindexed entries in the FTSDOCS table - that +** is to say, all the entries with FTSDOCS.IDXED=0. Add them to the +** index. +*/ +void search_update_index(unsigned int srchFlags){ + if( !search_index_exists() ) return; + if( !db_exists("SELECT 1 FROM ftsdocs WHERE NOT idxed") ) return; + search_sql_setup(g.db); + if( srchFlags & (SRCH_CKIN|SRCH_DOC) ){ + search_update_doc_index(); + search_update_checkin_index(); + } + if( srchFlags & SRCH_TKT ){ + search_update_ticket_index(); + } + if( srchFlags & SRCH_WIKI ){ + search_update_wiki_index(); + } +} + +/* +** Construct, prepopulate, and then update the full-text index. +*/ +void search_rebuild_index(void){ + fossil_print("rebuilding the search index..."); + fflush(stdout); + search_create_index(); + search_fill_index(); + search_update_index(search_restrict(SRCH_ALL)); + fossil_print(" done\n"); +} + +/* +** COMMAND: fts-config* +** +** Usage: fossil fts-config ?SUBCOMMAND? ?ARGUMENT? +** +** The "fossil fts-config" command configures the full-text search capabilities +** of the repository. Subcommands: +** +** reindex Rebuild the search index. This is a no-op if +** index search is disabled +** +** index (on|off) Turn the search index on or off +** +** enable cdtw Enable various kinds of search. c=Check-ins, +** d=Documents, t=Tickets, w=Wiki. +** +** disable cdtw Disable versious kinds of search +** +** stemmer (on|off) Turn the Porter stemmer on or off for indexed +** search. (Unindexed search is never stemmed.) +** +** The current search settings are displayed after any changes are applied. +** Run this command with no arguments to simply see the settings. 
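As a usage illustration, run from within a checkout (the subcommands are exactly the ones listed above):

    fossil fts-config                # show the current search settings
    fossil fts-config index on       # create the full-text index
    fossil fts-config enable cdw     # index check-ins, documents, and wiki
    fossil fts-config stemmer on     # stem indexed terms with the Porter stemmer
    fossil fts-config reindex        # rebuild the index after changing settings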
+*/ +void test_fts_cmd(void){ + static const struct { int iCmd; const char *z; } aCmd[] = { + { 1, "reindex" }, + { 2, "index" }, + { 3, "disable" }, + { 4, "enable" }, + { 5, "stemmer" }, + }; + static const struct { char *zSetting; char *zName; char *zSw; } aSetng[] = { + { "search-ckin", "check-in search:", "c" }, + { "search-doc", "document search:", "d" }, + { "search-tkt", "ticket search:", "t" }, + { "search-wiki", "wiki search:", "w" }, + }; + char *zSubCmd = 0; + int i, j, n; + int iCmd = 0; + int iAction = 0; + db_find_and_open_repository(0, 0); + if( g.argc>2 ){ + zSubCmd = g.argv[2]; + n = (int)strlen(zSubCmd); + for(i=0; i=ArraySize(aCmd) ){ + Blob all; + blob_init(&all,0,0); + for(i=0; i=1 ){ + search_drop_index(); + } + if( iAction>=2 ){ + search_rebuild_index(); + } + + /* Always show the status before ending */ + for(i=0; i #include "config.h" +#include #include "setup.h" +#if INTERFACE +#define ArraySize(x) (sizeof(x)/sizeof(x[0])) +#endif + +/* +** The table of web pages supported by this application is generated +** automatically by the "mkindex" program and written into a file +** named "page_index.h". We include that file here to get access +** to the table. +*/ +#include "page_index.h" /* ** Output a single entry for a menu generated using an HTML table. ** If zLink is not NULL or an empty string, then it is the page that ** the menu entry will hyperlink to. If zLink is NULL or "", then @@ -37,181 +48,289 @@ if( zLink && zLink[0] ){ @ %h(zTitle) }else{ @ %h(zTitle) } - @ %h(zDesc) + @ %h(zDesc) } + + /* -** WEBPAGE: /setup +** WEBPAGE: setup +** +** Main menu for the administrative pages. Requires Admin privileges. */ void setup_page(void){ login_check_credentials(); - if( !g.okSetup ){ - login_needed(); + if( !g.perm.Setup ){ + login_needed(0); } style_header("Server Administration"); - @ + + /* Make sure the header contains . Issue a warning + ** if it does not. */ + if( !cgi_header_contains("Configuration Error: Please add + @ <base href="$secureurl/$current_page"> after + @ <head> in the HTML header!
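  /* For illustration only: a skin header that satisfies the check above
  ** might begin like
  **
  **     <html>
  **     <head>
  **     <base href="$secureurl/$current_page" />
  **
  ** where $secureurl and $current_page are the variables named in the
  ** warning text. */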

+  }
+
+#if !defined(_WIN32)
+  /* Check for /dev/null and /dev/urandom. We want both devices to be present,
+  ** but they are sometimes omitted (by mistake) from chroot jails. */
+  if( access("/dev/null", R_OK|W_OK) ){
+    @ WARNING: Device "/dev/null" is not available
+    @ for reading and writing.
+  }
+  if( access("/dev/urandom", R_OK) ){
+    @ WARNING: Device "/dev/urandom" is not available
+    @ for reading. This means that the pseudo-random number generator used
+    @ by SQLite will be poorly seeded.
+  }
+#endif
+
+  @
      setup_menu_entry("Users", "setup_ulist", "Grant privileges to individual users."); setup_menu_entry("Access", "setup_access", "Control access settings."); setup_menu_entry("Configuration", "setup_config", "Configure the WWW components of the repository"); + setup_menu_entry("Settings", "setup_settings", + "Web interface to the \"fossil settings\" command"); setup_menu_entry("Timeline", "setup_timeline", "Timeline display preferences"); + setup_menu_entry("Login-Group", "setup_login_group", + "Manage single sign-on between this repository and others" + " on the same server"); setup_menu_entry("Tickets", "tktsetup", "Configure the trouble-ticketing system for this repository"); + setup_menu_entry("Search","srchsetup", + "Configure the built-in search engine"); + setup_menu_entry("Transfers", "xfersetup", + "Configure the transfer system for this repository"); setup_menu_entry("Skins", "setup_skin", - "Select from a menu of prepackaged \"skins\" for the web interface"); - setup_menu_entry("CSS", "setup_editcss", - "Edit the Cascading Style Sheet used by all pages of this repository"); - setup_menu_entry("Header", "setup_header", - "Edit HTML text inserted at the top of every page"); - setup_menu_entry("Footer", "setup_footer", - "Edit HTML text inserted at the bottom of every page"); + "Select and/or modify the web interface \"skins\""); + setup_menu_entry("Moderation", "setup_modreq", + "Enable/Disable requiring moderator approval of Wiki and/or Ticket" + " changes and attachments."); + setup_menu_entry("Ad-Unit", "setup_adunit", + "Edit HTML text for an ad unit inserted after the menu bar"); setup_menu_entry("Logo", "setup_logo", - "Change the logo image for the server"); + "Change the logo and background images for the server"); setup_menu_entry("Shunned", "shun", "Show artifacts that are shunned by this repository"); - setup_menu_entry("Log", "rcvfromlist", + setup_menu_entry("Artifact Receipts Log", "rcvfromlist", "A record of received artifacts and their sources"); + setup_menu_entry("User Log", "access_log", + "A record of login attempts"); + setup_menu_entry("Administrative Log", "admin_log", + "View the admin_log entries"); setup_menu_entry("Stats", "stat", - "Display repository statistics"); + "Repository Status Reports"); + setup_menu_entry("Sitemap", "sitemap", + "Links to miscellaneous pages"); + setup_menu_entry("SQL", "admin_sql", + "Enter raw SQL commands"); + setup_menu_entry("TH1", "admin_th1", + "Enter raw TH1 commands"); @
      style_footer(); } /* ** WEBPAGE: setup_ulist ** ** Show a list of users. Clicking on any user jumps to the edit -** screen for that user. +** screen for that user. Requires Admin privileges. */ void setup_ulist(void){ Stmt s; login_check_credentials(); - if( !g.okAdmin ){ - login_needed(); + if( !g.perm.Admin ){ + login_needed(0); return; } style_submenu_element("Add", "Add User", "setup_uedit"); - style_header("User List"); - @ - @
      - @ Users: - @
      - @ - @ - @ - @ - @ - @ - db_prepare(&s, "SELECT uid, login, cap, info FROM user ORDER BY login"); - while( db_step(&s)==SQLITE_ROW ){ - const char *zCap = db_column_text(&s, 2); - if( strstr(zCap, "s") ) zCap = "s"; - @ - @ - @ - @ - @ - @ - } - @
      User ID Capabilities Contact Info
      - if( g.okAdmin && (zCap[0]!='s' || g.okSetup) ){ - @ - } - @ %h(db_column_text(&s,1)) - if( g.okAdmin ){ - @ - } - @    %s(zCap)   %s(db_column_text(&s,3))
      - @
      - @ Notes: + style_submenu_element("Log", "Access Log", "access_log"); + style_submenu_element("Help", "Help", "setup_ulist_notes"); + style_header("User List"); + @ + @ + @ + db_prepare(&s, + "SELECT uid, login, cap, date(mtime,'unixepoch')" + " FROM user" + " WHERE login IN ('anonymous','nobody','developer','reader')" + " ORDER BY login" + ); + while( db_step(&s)==SQLITE_ROW ){ + int uid = db_column_int(&s, 0); + const char *zLogin = db_column_text(&s, 1); + const char *zCap = db_column_text(&s, 2); + const char *zDate = db_column_text(&s, 4); + @ + @ + } + db_finalize(&s); + @
      UID Category Capabilities Info Last Change
      %d(uid) + @ %h(zLogin) + @ %h(zCap) + + if( fossil_strcmp(zLogin,"anonymous")==0 ){ + @ All logged-in users + }else if( fossil_strcmp(zLogin,"developer")==0 ){ + @ Users with 'v' capability + }else if( fossil_strcmp(zLogin,"nobody")==0 ){ + @ All users without login + }else if( fossil_strcmp(zLogin,"reader")==0 ){ + @ Users with 'u' capability + }else{ + @ + } + if( zDate && zDate[0] ){ + @ %h(zDate) + }else{ + @ + } + @
      + @
      Users
      + @ + @ + @ + @ + db_prepare(&s, + "SELECT uid, login, cap, info, date(mtime,'unixepoch'), lower(login) AS sortkey, " + " CASE WHEN info LIKE '%%expires 20%%'" + " THEN substr(info,instr(lower(info),'expires')+8,10)" + " END AS exp" + " FROM user" + " WHERE login NOT IN ('anonymous','nobody','developer','reader')" + " ORDER BY sortkey" + ); + while( db_step(&s)==SQLITE_ROW ){ + int uid = db_column_int(&s, 0); + const char *zLogin = db_column_text(&s, 1); + const char *zCap = db_column_text(&s, 2); + const char *zInfo = db_column_text(&s, 3); + const char *zDate = db_column_text(&s, 4); + const char *zSortKey = db_column_text(&s,5); + const char *zExp = db_column_text(&s,6); + @ + @ + } + @
ID Login Caps Info Date Expire
      %d(uid) + @ %h(zLogin) + @ %h(zCap) + @ %h(zInfo) + @ %h(zDate?zDate:"") + @ %h(zExp?zExp:"") + @
      + db_finalize(&s); + output_table_sorting_javascript("userlist","nktxTT",2); + style_footer(); +} + +/* +** WEBPAGE: setup_ulist_notes +** +** A documentation page showing notes about user configuration. This information +** used to be a side-bar on the user list page, but has been factored out for +** improved presentation. +*/ +void setup_ulist_notes(void){ + style_header("User Configuration Notes"); + @

      User Configuration Notes:

      @
        @
      1. The permission flags are as follows:

        a   Admin: Create and delete users
        b   Attach: Add attachments to wiki or tickets
        c   Append-Tkt: Append to tickets
        d   Delete: Delete wiki and tickets
        e   Email: View sensitive data such as EMail addresses
        f   New-Wiki: Create new wiki pages
        g   Clone: Clone the repository
        h   Hyperlinks: Show hyperlinks to detailed repository history
        i   Check-In: Commit new versions in the repository
        j   Read-Wiki: View wiki pages
        k   Write-Wiki: Edit wiki pages
        l   Mod-Wiki: Moderator for wiki pages
        m   Append-Wiki: Append to wiki pages
        n   New-Tkt: Create new tickets
        o   Check-Out: Check out versions
        p   Password: Change your own password
        q   Mod-Tkt: Moderator for tickets
        r   Read-Tkt: View tickets
        s   Setup/Super-user: Setup and configure this website
        t   Tkt-Report: Create new bug summary reports
        u   Reader: Inherit privileges of user reader
        v   Developer: Inherit privileges of user developer
        w   Write-Tkt: Edit tickets
        x   Private: Push and/or pull private branches
        z   Zip download: Download a ZIP archive or tarball
            (formerly: download a baseline via the /zip URL even without
            checkout and history permissions)

      2. Every user, logged in or not, inherits the privileges of
         nobody.

      3. Any human can login as anonymous since the password is clearly
         displayed on the login page for them to type.  The purpose of
         requiring anonymous to log in is to prevent access by spiders.
         Every logged-in user inherits the combined privileges of
         anonymous and nobody.

      4. Users with privilege v inherit the combined privileges of
         developer, anonymous, and nobody.
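For readers following the C code in this hunk, the capability letters above are simply concatenated into a short string (for example "dei") stored in the user.cap column. Below is a minimal, self-contained sketch that expands such a string using the table above; it is an illustration only and is not code taken from src/setup.c.

    /* Sketch: expand a capability string such as "dei" into the readable
    ** names listed above.  Illustration only; not code from src/setup.c. */
    #include <stdio.h>

    static const struct { char c; const char *zName; } aCapTable[] = {
      {'a',"Admin"},       {'b',"Attach"},      {'c',"Append-Tkt"},
      {'d',"Delete"},      {'e',"Email"},       {'f',"New-Wiki"},
      {'g',"Clone"},       {'h',"Hyperlinks"},  {'i',"Check-In"},
      {'j',"Read-Wiki"},   {'k',"Write-Wiki"},  {'l',"Mod-Wiki"},
      {'m',"Append-Wiki"}, {'n',"New-Tkt"},     {'o',"Check-Out"},
      {'p',"Password"},    {'q',"Mod-Tkt"},     {'r',"Read-Tkt"},
      {'s',"Setup"},       {'t',"Tkt-Report"},  {'u',"Reader"},
      {'v',"Developer"},   {'w',"Write-Tkt"},   {'x',"Private"},
      {'z',"Zip download"}
    };

    int main(void){
      const char *zCap = "dei";       /* example capability string */
      int i, j;
      for(i=0; zCap[i]; i++){
        for(j=0; j<(int)(sizeof(aCapTable)/sizeof(aCapTable[0])); j++){
          if( aCapTable[j].c==zCap[i] ){
            printf("%c = %s\n", zCap[i], aCapTable[j].zName);
          }
        }
      }
      return 0;
    }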
      style_footer(); } + /* ** Return true if zPw is a valid password string. A valid ** password string is: ** @@ -224,122 +343,136 @@ while( zPw[0]=='*' ){ zPw++; } return zPw[0]!=0; } /* -** WEBPAGE: /setup_uedit +** WEBPAGE: setup_uedit +** +** Edit information about a user or create a new user. +** Requires Admin privileges. */ void user_edit(void){ const char *zId, *zLogin, *zInfo, *zCap, *zPw; - char *oaa, *oas, *oar, *oaw, *oan, *oai, *oaj, *oao, *oap; - char *oak, *oad, *oac, *oaf, *oam, *oah, *oag, *oae; - char *oat, *oau, *oav, *oab, *oaz; - const char *inherit[128]; + const char *zGroup; + const char *zOldLogin; int doWrite; - int uid; + int uid, i; int higherUser = 0; /* True if user being edited is SETUP and the */ /* user doing the editing is ADMIN. Disallow editing */ + const char *inherit[128]; + int a[128]; + const char *oa[128]; - /* Must have ADMIN privleges to access this page + /* Must have ADMIN privileges to access this page */ login_check_credentials(); - if( !g.okAdmin ){ login_needed(); return; } + if( !g.perm.Admin ){ login_needed(0); return; } /* Check to see if an ADMIN user is trying to edit a SETUP account. ** Don't allow that. */ zId = PD("id", "0"); uid = atoi(zId); - if( zId && !g.okSetup && uid>0 ){ + if( zId && !g.perm.Setup && uid>0 ){ char *zOldCaps; zOldCaps = db_text(0, "SELECT cap FROM user WHERE uid=%d",uid); higherUser = zOldCaps && strchr(zOldCaps,'s'); } if( P("can") ){ - cgi_redirect("setup_ulist"); + cgi_redirect("setup_ulist"); /* User pressed the Cancel button */ return; } /* If we have all the necessary information, write the new or ** modified user record. After writing the user record, redirect ** to the page that displays a list of users. */ doWrite = cgi_all("login","info","pw") && !higherUser; if( doWrite ){ - char zCap[50]; - int i = 0; - int aa = P("aa")!=0; - int ab = P("ab")!=0; - int ad = P("ad")!=0; - int ae = P("ae")!=0; - int ai = P("ai")!=0; - int aj = P("aj")!=0; - int ak = P("ak")!=0; - int an = P("an")!=0; - int ao = P("ao")!=0; - int ap = P("ap")!=0; - int ar = P("ar")!=0; - int as = g.okSetup && P("as")!=0; - int aw = P("aw")!=0; - int ac = P("ac")!=0; - int af = P("af")!=0; - int am = P("am")!=0; - int ah = P("ah")!=0; - int ag = P("ag")!=0; - int at = P("at")!=0; - int au = P("au")!=0; - int av = P("av")!=0; - int az = P("az")!=0; - if( aa ){ zCap[i++] = 'a'; } - if( ab ){ zCap[i++] = 'b'; } - if( ac ){ zCap[i++] = 'c'; } - if( ad ){ zCap[i++] = 'd'; } - if( ae ){ zCap[i++] = 'e'; } - if( af ){ zCap[i++] = 'f'; } - if( ah ){ zCap[i++] = 'h'; } - if( ag ){ zCap[i++] = 'g'; } - if( ai ){ zCap[i++] = 'i'; } - if( aj ){ zCap[i++] = 'j'; } - if( ak ){ zCap[i++] = 'k'; } - if( am ){ zCap[i++] = 'm'; } - if( an ){ zCap[i++] = 'n'; } - if( ao ){ zCap[i++] = 'o'; } - if( ap ){ zCap[i++] = 'p'; } - if( ar ){ zCap[i++] = 'r'; } - if( as ){ zCap[i++] = 's'; } - if( at ){ zCap[i++] = 't'; } - if( au ){ zCap[i++] = 'u'; } - if( av ){ zCap[i++] = 'v'; } - if( aw ){ zCap[i++] = 'w'; } - if( az ){ zCap[i++] = 'z'; } + char c; + char zCap[50], zNm[4]; + zNm[0] = 'a'; + zNm[2] = 0; + for(i=0, c='a'; c<='z'; c++){ + zNm[1] = c; + a[c&0x7f] = (c!='s' || g.perm.Setup) && P(zNm)!=0; + if( a[c&0x7f] ) zCap[i++] = c; + } zCap[i] = 0; zPw = P("pw"); zLogin = P("login"); + if( strlen(zLogin)==0 ){ + style_header("User Creation Error"); + @ Empty login not allowed. + @ + @

      [Bummer]

      + style_footer(); + return; + } if( isValidPwString(zPw) ){ - zPw = sha1_shared_secret(zPw, zLogin); + zPw = sha1_shared_secret(zPw, zLogin, 0); }else{ zPw = db_text(0, "SELECT pw FROM user WHERE uid=%d", uid); } - if( uid>0 && - db_exists("SELECT 1 FROM user WHERE login=%Q AND uid!=%d", zLogin, uid) - ){ + zOldLogin = db_text(0, "SELECT login FROM user WHERE uid=%d", uid); + if( db_exists("SELECT 1 FROM user WHERE login=%Q AND uid!=%d", zLogin, uid) ){ style_header("User Creation Error"); - @ Login "%h(zLogin)" is already used by a different - @ user. + @ Login "%h(zLogin)" is already used by + @ a different user. @ @

      [Bummer]

      style_footer(); return; } login_verify_csrf_secret(); db_multi_exec( - "REPLACE INTO user(uid,login,info,pw,cap) " - "VALUES(nullif(%d,0),%Q,%Q,%Q,'%s')", - uid, P("login"), P("info"), zPw, zCap + "REPLACE INTO user(uid,login,info,pw,cap,mtime) " + "VALUES(nullif(%d,0),%Q,%Q,%Q,%Q,now())", + uid, zLogin, P("info"), zPw, zCap ); + admin_log( "Updated user [%q] with capabilities [%q].", + zLogin, zCap ); + if( atoi(PD("all","0"))>0 ){ + Blob sql; + char *zErr = 0; + blob_zero(&sql); + if( zOldLogin==0 ){ + blob_appendf(&sql, + "INSERT INTO user(login)" + " SELECT %Q WHERE NOT EXISTS(SELECT 1 FROM user WHERE login=%Q);", + zLogin, zLogin + ); + zOldLogin = zLogin; + } + blob_appendf(&sql, + "UPDATE user SET login=%Q," + " pw=coalesce(shared_secret(%Q,%Q," + "(SELECT value FROM config WHERE name='project-code')),pw)," + " info=%Q," + " cap=%Q," + " mtime=now()" + " WHERE login=%Q;", + zLogin, P("pw"), zLogin, P("info"), zCap, + zOldLogin + ); + login_group_sql(blob_str(&sql), "
    • ", "
    • \n", &zErr); + blob_reset(&sql); + admin_log( "Updated user [%q] in all login groups " + "with capabilities [%q].", + zLogin, zCap ); + if( zErr ){ + style_header("User Change Error"); + admin_log( "Error updating user '%q': %s'.", zLogin, zErr ); + @ %s(zErr) + @ + @

      [Bummer]

      + style_footer(); + return; + } + } cgi_redirect("setup_ulist"); return; } /* Load the existing information about the user, if any @@ -346,246 +479,376 @@ */ zLogin = ""; zInfo = ""; zCap = ""; zPw = ""; - oaa = oab = oac = oad = oae = oaf = oag = oah = oai = oaj = oak = oam = - oan = oao = oap = oar = oas = oat = oau = oav = oaw = oaz = ""; + for(i='a'; i<='z'; i++) oa[i] = ""; if( uid ){ zLogin = db_text("", "SELECT login FROM user WHERE uid=%d", uid); zInfo = db_text("", "SELECT info FROM user WHERE uid=%d", uid); zCap = db_text("", "SELECT cap FROM user WHERE uid=%d", uid); zPw = db_text("", "SELECT pw FROM user WHERE uid=%d", uid); - if( strchr(zCap, 'a') ) oaa = " checked"; - if( strchr(zCap, 'b') ) oab = " checked"; - if( strchr(zCap, 'c') ) oac = " checked"; - if( strchr(zCap, 'd') ) oad = " checked"; - if( strchr(zCap, 'e') ) oae = " checked"; - if( strchr(zCap, 'f') ) oaf = " checked"; - if( strchr(zCap, 'g') ) oag = " checked"; - if( strchr(zCap, 'h') ) oah = " checked"; - if( strchr(zCap, 'i') ) oai = " checked"; - if( strchr(zCap, 'j') ) oaj = " checked"; - if( strchr(zCap, 'k') ) oak = " checked"; - if( strchr(zCap, 'm') ) oam = " checked"; - if( strchr(zCap, 'n') ) oan = " checked"; - if( strchr(zCap, 'o') ) oao = " checked"; - if( strchr(zCap, 'p') ) oap = " checked"; - if( strchr(zCap, 'r') ) oar = " checked"; - if( strchr(zCap, 's') ) oas = " checked"; - if( strchr(zCap, 't') ) oat = " checked"; - if( strchr(zCap, 'u') ) oau = " checked"; - if( strchr(zCap, 'v') ) oav = " checked"; - if( strchr(zCap, 'w') ) oaw = " checked"; - if( strchr(zCap, 'z') ) oaz = " checked"; + for(i=0; zCap[i]; i++){ + char c = zCap[i]; + if( c>='a' && c<='z' ) oa[c&0x7f] = " checked=\"checked\""; + } } /* figure out inherited permissions */ memset(inherit, 0, sizeof(inherit)); - if( strcmp(zLogin, "developer") ){ + if( fossil_strcmp(zLogin, "developer") ){ char *z1, *z2; z1 = z2 = db_text(0,"SELECT cap FROM user WHERE login='developer'"); while( z1 && *z1 ){ - inherit[0x7f & *(z1++)] = ""; + inherit[0x7f & *(z1++)] = + "[D]"; } free(z2); } - if( strcmp(zLogin, "reader") ){ + if( fossil_strcmp(zLogin, "reader") ){ char *z1, *z2; z1 = z2 = db_text(0,"SELECT cap FROM user WHERE login='reader'"); while( z1 && *z1 ){ - inherit[0x7f & *(z1++)] = ""; + inherit[0x7f & *(z1++)] = + "[R]"; } free(z2); } - if( strcmp(zLogin, "anonymous") ){ + if( fossil_strcmp(zLogin, "anonymous") ){ char *z1, *z2; z1 = z2 = db_text(0,"SELECT cap FROM user WHERE login='anonymous'"); while( z1 && *z1 ){ - inherit[0x7f & *(z1++)] = ""; + inherit[0x7f & *(z1++)] = + "[A]"; } free(z2); } - if( strcmp(zLogin, "nobody") ){ + if( fossil_strcmp(zLogin, "nobody") ){ char *z1, *z2; z1 = z2 = db_text(0,"SELECT cap FROM user WHERE login='nobody'"); while( z1 && *z1 ){ - inherit[0x7f & *(z1++)] = ""; + inherit[0x7f & *(z1++)] = + "[N]"; } free(z2); } /* Begin generating the page */ style_submenu_element("Cancel", "Cancel", "setup_ulist"); if( uid ){ - style_header(mprintf("Edit User %h", zLogin)); + style_header("Edit User %h", zLogin); }else{ style_header("Add A New User"); } - @
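The loops just above fill inherit[] with a marker for every capability letter that one of the special users (developer, reader, anonymous, nobody) already grants, so the edit form can indicate where a privilege would come from even when the user's own checkbox is off. A hedged sketch of the same merging idea follows; the capability strings are invented examples, not values from any real repository.

    /* Sketch: merge a user's own capability letters with those inherited
    ** from the category users, mirroring the inherit[128] idea above.
    ** The capability strings used here are invented examples. */
    #include <stdio.h>
    #include <string.h>

    static void addCaps(int aCap[128], const char *z){
      while( *z ){ aCap[(unsigned char)(*z) & 0x7f] = 1; z++; }
    }

    int main(void){
      int aCap[128];
      int c;
      memset(aCap, 0, sizeof(aCap));
      addCaps(aCap, "gjorz");   /* hypothetical caps of user "nobody"    */
      addCaps(aCap, "hncz");    /* hypothetical caps of user "anonymous" */
      addCaps(aCap, "dei");     /* the user's own capability string      */
      printf("effective capabilities: ");
      for(c='a'; c<='z'; c++){
        if( aCap[c] ) putchar(c);
      }
      putchar('\n');
      return 0;
    }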
      - @
      - login_insert_csrf_secret(); - @ - @ - @ - if( uid ){ - @ - }else{ - @ - } - @ - @ - @ - @ - @ - @ - @ - @ - @ - @ - @ - @
      User ID:%d(uid) (new user)
      Login:
      Contact Info:
      Capabilities: -#define B(x) inherit[x] - if( g.okSetup ){ - @ %s(B('s'))Setup
      - } - @ %s(B('a'))Admin
      - @ %s(B('d'))Delete
      - @ %s(B('e'))Email
      - @ %s(B('p'))Password
      - @ %s(B('i'))Check-In
      - @ %s(B('o'))Check-Out
      - @ %s(B('h'))History
      - @ %s(B('u'))Reader
      - @ %s(B('v'))Developer
      - @ %s(B('g'))Clone
      - @ %s(B('j'))Read Wiki
      - @ %s(B('f'))New Wiki
      - @ %s(B('m'))Append Wiki
      - @ %s(B('k'))Write Wiki
      - @ %s(B('b'))Attachments
      - @ %s(B('r'))Read Ticket
      - @ %s(B('n'))New Ticket
      - @ %s(B('c'))Append Ticket
      - @ %s(B('w'))Write Ticket
      - @ %s(B('t'))Ticket Report
      - @ %s(B('z'))Download Zip + @
      + @
      + login_insert_csrf_secret(); + if( login_is_special(zLogin) ){ + @ + @ + @ + } + @ + @ + @ + @ + if( uid ){ + @ + }else{ + @ + } + @ + @ + @ + if( login_is_special(zLogin) ){ + @ + }else{ + @ + @ + @ + @ + @ + } + @ + @ + @ + @ @ @ - @ - if( zPw[0] ){ - /* Obscure the password for all users */ - @ - }else{ - /* Show an empty password as an empty input field */ - @ - } + @ + @ @ + if( !login_is_special(zLogin) ){ + @ + @ + if( zPw[0] ){ + /* Obscure the password for all users */ + @ + }else{ + /* Show an empty password as an empty input field */ + @ + } + @ + } + zGroup = login_group_name(); + if( zGroup ){ + @ + @ + @ + } if( !higherUser ){ @ - @ - @ + @ @ } - @
      User ID:%d(uid) (new user)
      Login:%h(zLogin)
      Contact Info:
      Capabilities: +#define B(x) inherit[x] + @ + @
      + if( g.perm.Setup ){ + @
      + } + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @
      + @ + @
      @
      Password:Selected Cap.: + @ (missing JS?) + @
      Password:
      Scope: + @ + @ Apply changes to this repository only.
      + @ + @ Apply changes to all repositories in the "%h(zGroup)" + @ login group.
        + @  
      + @
      + @
      + @
      + @ @

      Privileges And Capabilities:

      @
        if( higherUser ){ - @
      • + @

      • @ User %h(zLogin) has Setup privileges and you only have Admin privileges @ so you are not permitted to make changes to %h(zLogin). - @

      • + @

        @ } @
      • - @ The Setup user can make arbitrary configuration changes. - @ An Admin user can add other users and change user privileges + @ The Setup user can make arbitrary + @ configuration changes. An Admin user + @ can add other users and change user privileges @ and reset user passwords. Both automatically get all other privileges @ listed below. Use these two settings with discretion. @

      • @ @
      • - @ The "" mark indicates - @ the privileges of "nobody" that are available to all users - @ regardless of whether or not they are logged in. - @

      • - @ - @
      • - @ The "" mark indicates - @ the privileges of "anonymous" that are inherited by all logged-in users. - @

      • - @ - @
      • - @ The "" mark indicates - @ the privileges of "developer" that are inherited by all users with - @ the Developer privilege. - @

      • - @ - @
      • - @ The "" mark indicates - @ the privileges of "reader" that are inherited by all users with - @ the Reader privilege. - @

      • - @ - @
      • - @ The Delete privilege give the user the ability to erase - @ wiki, tickets, and attachments that have been added by anonymous - @ users. This capability is intended for deletion of spam. The - @ delete capability is only in effect for 24 hours after the item - @ is first posted. The Setup user can delete anything at any time. - @

      • - @ - @
      • - @ The History privilege allows a user to see most hyperlinks. - @ This is recommended ON for most logged-in users but OFF for - @ user "nobody" to avoid problems with spiders trying to walk every - @ historical version of every baseline and file. - @

      • - @ - @
      • - @ The Zip privilege allows a user to see the "download as ZIP" - @ hyperlink and permits access to the /zip page. This allows - @ users to download ZIP archives without granting other rights like - @ Read or History. This privilege is recommended for - @ user nobody so that automatic package downloaders can obtain - @ the sources without going through the login procedure. - @

      • - @ - @
      • - @ The Check-in privilege allows remote users to "push". - @ The Check-out privilege allows remote users to "pull". - @ The Clone privilege allows remote users to "clone". - @

      • - @ - @

      • - @ The Read Wiki, New Wiki, Append Wiki, and - @ Write Wiki privileges control access to wiki pages. The - @ Read Ticket, New Ticket, Append Ticket, and - @ Write Ticket privileges control access to trouble tickets. - @ The Ticket Report privilege allows the user to create or edit - @ ticket report formats. - @

      • - @ - @
      • - @ Users with the Password privilege are allowed to change their - @ own password. Recommended ON for most users but OFF for special - @ users "developer", "anonymous", and "nobody". - @

      • - @ - @
      • - @ The EMail privilege allows the display of sensitive information - @ such as the email address of users and contact information on tickets. - @ Recommended OFF for "anonymous" and for "nobody" but ON for - @ "developer". - @

      • - @ - @
      • - @ The Attachment privilege is needed in order to add attachments - @ to tickets or wiki. Write privilege on the ticket or wiki is also - @ required.

      • + @ The "N" subscript suffix + @ indicates the privileges of nobody that + @ are available to all users regardless of whether or not they are logged in. + @

        + @ + @
      • + @ The "A" subscript suffix + @ indicates the privileges of anonymous that + @ are inherited by all logged-in users. + @

      • + @ + @
      • + @ The "D" subscript suffix + @ indicates the privileges of developer that + @ are inherited by all users with the + @ Developer privilege. + @

      • + @ + @
      • + @ The "R" subscript suffix + @ indicates the privileges of reader that + @ are inherited by all users with the Reader + @ privilege. + @

      • + @ + @
      • + @ The Delete privilege gives the user the + @ ability to erase wiki, tickets, and attachments that have been added + @ by anonymous users. This capability is intended for deletion of spam. + @ The delete capability is only in effect for 24 hours after the item + @ is first posted. The Setup user can + @ delete anything at any time. + @

      • + @ + @
      • + @ The Hyperlinks privilege allows a user + @ to see most hyperlinks. This is recommended ON for most logged-in users + @ but OFF for user "nobody" to avoid problems with spiders trying to walk + @ every diff and annotation of every historical check-in and file. + @

      • + @ + @
      • + @ The Zip privilege allows a user to + @ see the "download as ZIP" + @ hyperlink and permits access to the /zip page. This allows + @ users to download ZIP archives without granting other rights like + @ Read or + @ Hyperlink. The "z" privilege is recommended + @ for user nobody so that automatic package + @ downloaders can obtain the sources without going through the login + @ procedure. + @

      • + @ + @
      • + @ The Check-in privilege allows remote + @ users to "push". The Check-out privilege + @ allows remote users to "pull". The Clone + @ privilege allows remote users to "clone". + @

      • + @ + @
      • + @ The Read Wiki, + @ New Wiki, + @ Append Wiki, and + @ Write Wiki privileges control access to wiki pages. The + @ Read Ticket, + @ New Ticket, + @ Append Ticket, and + @ Write Ticket privileges control access + @ to trouble tickets. + @ The Ticket Report privilege allows + @ the user to create or edit ticket report formats. + @

      • + @ + @
      • + @ Users with the Password privilege + @ are allowed to change their own password. Recommended ON for most + @ users but OFF for special users developer, + @ anonymous, + @ and nobody. + @

      • + @ + @
      • + @ The EMail privilege allows the display of + @ sensitive information such as the email address of users and contact + @ information on tickets. Recommended OFF for + @ anonymous and for + @ nobody but ON for + @ developer. + @

      • + @ + @
      • + @ The Attachment privilege is needed in + @ order to add attachments to tickets or wiki. Write privilege on the + @ ticket or wiki is also required. + @

      • @ @
      • @ Login is prohibited if the password is an empty string. @

      • @
      @@ -592,42 +855,42 @@ @ @

      Special Logins

      @ @
        @
      • - @ No login is required for user "nobody". The capabilities - @ of the nobody user are inherited by all users, regardless of - @ whether or not they are logged in. To disable universal access - @ to the repository, make sure no user named "nobody" exists or - @ that the nobody user has no capabilities enabled. - @ The password for nobody is ignore. To avoid problems with - @ spiders overloading the server, it is recommended - @ that the 'h' (History) capability be turned off for the nobody - @ user. - @

      • - @ - @
      • - @ Login is required for user "anonymous" but the password - @ is displayed on the login screen beside the password entry box - @ so anybody who can read should be able to login as anonymous. - @ On the other hand, spiders and web-crawlers will typically not - @ be able to login. Set the capabilities of the anonymous user - @ to things that you want any human to be able to do, but not any - @ spider. Every other logged-in user inherits the privileges of - @ anonymous. - @

      • - @ - @
      • - @ The "developer" user is intended as a template for trusted users - @ with check-in privileges. When adding new trusted users, simply - @ select the Developer privilege to cause the new user to inherit - @ all privileges of the "developer" user. Similarly, the "reader" - @ user is a template for users who are allowed more access than anonymous, - @ but less than a developer. - @

      • - @
      - @ + @ No login is required for user nobody. The + @ capabilities of the nobody user are + @ inherited by all users, regardless of whether or not they are logged in. + @ To disable universal access to the repository, make sure that the + @ nobody user has no capabilities + @ enabled. The password for nobody is ignored. + @

      + @ + @
    • + @ Login is required for user anonymous but the + @ password is displayed on the login screen beside the password entry box + @ so anybody who can read should be able to login as anonymous. + @ On the other hand, spiders and web-crawlers will typically not + @ be able to login. Set the capabilities of the + @ anonymous + @ user to things that you want any human to be able to do, but not any + @ spider. Every other logged-in user inherits the privileges of + @ anonymous. + @

    • + @ + @
    • + @ The developer user is intended as a template + @ for trusted users with check-in privileges. When adding new trusted users, + @ simply select the developer privilege to + @ cause the new user to inherit all privileges of the + @ developer + @ user. Similarly, the reader user is a + @ template for users who are allowed more access than + @ anonymous, + @ but less than a developer. + @

    • + @ style_footer(); } /* @@ -635,30 +898,36 @@ */ static void onoff_attribute( const char *zLabel, /* The text label on the checkbox */ const char *zVar, /* The corresponding row in the VAR table */ const char *zQParm, /* The query parameter */ - int dfltVal /* Default value if VAR table entry does not exist */ + int dfltVal, /* Default value if VAR table entry does not exist */ + int disabled /* 1 if disabled */ ){ const char *zQ = P(zQParm); int iVal = db_get_boolean(zVar, dfltVal); - if( zQ==0 && P("submit") ){ + if( zQ==0 && !disabled && P("submit") ){ zQ = "off"; } if( zQ ){ - int iQ = strcmp(zQ,"on")==0 || atoi(zQ); + int iQ = fossil_strcmp(zQ,"on")==0 || atoi(zQ); if( iQ!=iVal ){ login_verify_csrf_secret(); db_set(zVar, iQ ? "1" : "0", 0); + admin_log("Set option [%q] to [%q].", + zVar, iQ ? "on" : "off"); iVal = iQ; } } + @ %s(zLabel) - }else{ - @ %s(zLabel) + @ checked="checked" + } + if( disabled ){ + @ disabled="disabled" } + @ /> %s(zLabel) } /* ** Generate an entry box for an attribute. */ @@ -665,397 +934,1258 @@ void entry_attribute( const char *zLabel, /* The text label on the entry box */ int width, /* Width of the entry box */ const char *zVar, /* The corresponding row in the VAR table */ const char *zQParm, /* The query parameter */ - char *zDflt /* Default value if VAR table entry does not exist */ + const char *zDflt, /* Default value if VAR table entry does not exist */ + int disabled /* 1 if disabled */ ){ const char *zVal = db_get(zVar, zDflt); const char *zQ = P(zQParm); - if( zQ && strcmp(zQ,zVal)!=0 ){ + if( zQ && fossil_strcmp(zQ,zVal)!=0 ){ + const int nZQ = (int)strlen(zQ); login_verify_csrf_secret(); db_set(zVar, zQ, 0); + admin_log("Set entry_attribute %Q to: %.*s%s", + zVar, 20, zQ, (nZQ>20 ? "..." : "")); zVal = zQ; } - @ - @ %s(zLabel) + @ %s(zLabel) } /* ** Generate a text box for an attribute. */ -static void textarea_attribute( +const char *textarea_attribute( const char *zLabel, /* The text label on the textarea */ int rows, /* Rows in the textarea */ int cols, /* Columns in the textarea */ const char *zVar, /* The corresponding row in the VAR table */ const char *zQP, /* The query parameter */ - const char *zDflt /* Default value if VAR table entry does not exist */ + const char *zDflt, /* Default value if VAR table entry does not exist */ + int disabled /* 1 if the textarea should not be editable */ ){ - const char *z = db_get(zVar, (char*)zDflt); + const char *z = db_get(zVar, zDflt); const char *zQ = P(zQP); - if( zQ && strcmp(zQ,z)!=0 ){ + if( zQ && !disabled && fossil_strcmp(zQ,z)!=0){ + const int nZQ = (int)strlen(zQ); login_verify_csrf_secret(); db_set(zVar, zQ, 0); + admin_log("Set textarea_attribute %Q to: %.*s%s", + zVar, 20, zQ, (nZQ>20 ? "..." : "")); z = zQ; } if( rows>0 && cols>0 ){ - @ - @ %s(zLabel) + @ + if( zLabel && *zLabel ){ + @ %s(zLabel) + } + } + return z; +} + +/* +** Generate a text box for an attribute. +*/ +static void multiple_choice_attribute( + const char *zLabel, /* The text label on the menu */ + const char *zVar, /* The corresponding row in the VAR table */ + const char *zQP, /* The query parameter */ + const char *zDflt, /* Default value if VAR table entry does not exist */ + int nChoice, /* Number of choices */ + const char *const *azChoice /* Choices. 
2 per choice: (VAR value, Display) */ +){ + const char *z = db_get(zVar, zDflt); + const char *zQ = P(zQP); + int i; + if( zQ && fossil_strcmp(zQ,z)!=0){ + const int nZQ = (int)strlen(zQ); + login_verify_csrf_secret(); + db_set(zVar, zQ, 0); + admin_log("Set multiple_choice_attribute %Q to: %.*s%s", + zVar, 20, zQ, (nZQ>20 ? "..." : "")); + z = zQ; + } + @ %h(zLabel) } /* ** WEBPAGE: setup_access +** +** The access-control settings page. Requires Admin privileges. */ void setup_access(void){ login_check_credentials(); - if( !g.okSetup ){ - login_needed(); + if( !g.perm.Setup ){ + login_needed(0); + return; } style_header("Access Control Settings"); db_begin_transaction(); - @
      + @
      login_insert_csrf_secret(); - @
      + @
      + onoff_attribute("Redirect to HTTPS on the Login page", + "redirect-to-https", "redirhttps", 0, 0); + @

      When selected, force the use of HTTPS for the Login page. + @

      Details: When enabled, this option causes the $secureurl TH1 + @ variable to be set to an "https:" variant of $baseurl. Otherwise, + @ $secureurl is just an alias for $baseurl. Also when enabled, the + @ Login page redirects to https if accessed via http. + @


      onoff_attribute("Require password for local access", - "localauth", "localauth", 0); - @

      When enabled, the password sign-in is required for - @ web access coming from 127.0.0.1. When disabled, web access - @ from 127.0.0.1 is allows without any login - the user id is selected - @ from the ~/.fossil database. Password login is always required - @ for incoming web connections on internet addresses other than - @ 127.0.0.1.

      - - @
      + "localauth", "localauth", 0, 0); + @

      When enabled, the password sign-in is always required for + @ web access. When disabled, unrestricted web access from 127.0.0.1 + @ is allowed for the fossil ui command or + @ from the fossil server, + @ fossil http commands when the + @ "--localauth" command line option is used, or from the + @ fossil cgi if a line containing + @ the word "localauth" appears in the CGI script. + @ + @

      A password is always required if any one or more + @ of the following are true: + @

        + @
      1. This button is checked + @
      2. The inbound TCP/IP connection is not from 127.0.0.1 + @
      3. The server is started using either of the + @ fossil server or + @ fossil http commands + @ without the "--localauth" option. + @
      4. The server is started from CGI without the "localauth" keyword + @ in the CGI script. + @
      + @ + @
      + onoff_attribute("Enable /test_env", + "test_env_enable", "test_env_enable", 0, 0); + @

      When enabled, the %h(g.zBaseURL)/test_env URL is available to all + @ users. When disabled (the default) only users Admin and Setup can visit + @ the /test_env page. + @

      + @ + @
      onoff_attribute("Allow REMOTE_USER authentication", - "remote_user_ok", "remote_user_ok", 0); + "remote_user_ok", "remote_user_ok", 0, 0); @

      When enabled, if the REMOTE_USER environment variable is set to the @ login name of a valid user and no other login credentials are available, @ then the REMOTE_USER is accepted as an authenticated user. - @

      - - @
      - entry_attribute("Login expiration time", 6, "cookie-expire", "cex", "8766"); + @

      + @ + @
      + entry_attribute("IP address terms used in login cookie", 3, + "ip-prefix-terms", "ipt", "2", 0); + @

      The number of octets of the IP address used in the login cookie. + @ Set to zero to omit the IP address from the login cookie. A value of + @ 2 is recommended. + @

      + @ + @
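To make the "terms" wording above concrete: with the recommended value of 2, only the first two octets of an IPv4 client address would be folded into the login cookie. A small illustrative sketch of that truncation follows; it is not Fossil's code and ignores IPv6.

    /* Sketch: keep only the first nTerm octets of an IPv4 address, as the
    ** "ip-prefix-terms" setting above describes.  Illustration only. */
    #include <stdio.h>

    static void ipPrefix(const char *zAddr, int nTerm, char *zOut, int nOut){
      int i, n = 0, nDot = 0;
      if( nTerm<=0 ){ zOut[0] = 0; return; }   /* zero omits the address */
      for(i=0; zAddr[i] && n+1<nOut; i++){
        if( zAddr[i]=='.' ){
          nDot++;
          if( nDot>=nTerm ) break;
        }
        zOut[n++] = zAddr[i];
      }
      zOut[n] = 0;
    }

    int main(void){
      char zPrefix[32];
      ipPrefix("192.168.10.77", 2, zPrefix, sizeof(zPrefix));
      printf("%s\n", zPrefix);    /* prints "192.168" */
      return 0;
    }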
      + entry_attribute("Login expiration time", 6, "cookie-expire", "cex", + "8766", 0); @

      The number of hours for which a login is valid. This must be a - @ positive number. The default is 8760 hours which is approximately equal + @ positive number. The default is 8766 hours which is approximately equal @ to a year.

      - @
      + @
      entry_attribute("Download packet limit", 10, "max-download", "mxdwn", - "5000000"); + "5000000", 0); @

      Fossil tries to limit out-bound sync, clone, and pull packets @ to this many bytes, uncompressed. If the client requires more data @ than this, then the client will issue multiple HTTP requests. @ Values below 1 million are not recommended. 5 million is a @ reasonable number.

      - @
      + @
      + entry_attribute("Download time limit", 11, "max-download-time", "mxdwnt", + "30", 0); + + @

      Fossil tries to spend less than this many seconds gathering + @ the out-bound data of sync, clone, and pull packets. + @ If the client request takes longer, a partial reply is given similar + @ to the download packet limit. 30s is a reasonable default.

      + + @
      + entry_attribute("Server Load Average Limit", 11, "max-loadavg", "mxldavg", + "0.0", 0); + @

      Some expensive operations (such as computing tarballs, zip archives, + @ or annotation/blame pages) are prohibited if the load average on the host + @ computer is too large. Set the threshold for disallowing expensive + @ computations here. Set this to 0.0 to disable the load average limit. + @ This limit is only enforced on Unix servers. On Linux systems, + @ access to the /proc virtual filesystem is required, which means this limit + @ might not work inside a chroot() jail.

      + + @
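On systems that provide getloadavg(), the check described above amounts to comparing the 1-minute load average against the configured threshold. A hedged sketch, assuming a Unix-like platform; this is not necessarily how Fossil itself performs the test.

    /* Sketch: refuse an expensive operation when the 1-minute load average
    ** exceeds the configured limit described above.  getloadavg() exists on
    ** most Unix-like systems; a limit of 0.0 disables the check. */
    #define _DEFAULT_SOURCE       /* expose getloadavg() in glibc's stdlib.h */
    #include <stdio.h>
    #include <stdlib.h>

    static int load_too_high(double mxLoadavg){
      double load[1];
      if( mxLoadavg<=0.0 ) return 0;           /* limit disabled */
      if( getloadavg(load, 1)!=1 ) return 0;   /* unknown: do not block */
      return load[0]>mxLoadavg;
    }

    int main(void){
      double limit = 4.0;                      /* example threshold */
      printf("expensive request would be %s\n",
             load_too_high(limit) ? "rejected" : "allowed");
      return 0;
    }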
      + onoff_attribute( + "Enable hyperlinks for \"nobody\" based on User-Agent and Javascript", + "auto-hyperlink", "autohyperlink", 1, 0); + @

      Enable hyperlinks (the equivalent of the "h" permission) for all users + @ including user "nobody", as long as (1) the User-Agent string in the + @ HTTP header indicates that the request is coming from an actual human + @ being and not a robot or spider and (2) the user agent is able to + @ run Javascript in order to set the href= attribute of hyperlinks. Bots + @ and spiders can forge a User-Agent string that makes them seem to be a + @ normal browser and they can run Javascript just like browsers. But most + @ bots do not go to that much trouble so this is normally an effective + @ defense.

      + @ + @

      You do not normally want a bot to walk your entire repository because + @ if it does, your server will end up computing diffs and annotations for + @ every historical version of every file and creating ZIPs and tarballs of + @ every historical check-in, which can use a lot of CPU and bandwidth + @ even for relatively small projects.

      + @ + @

      Additional parameters that control this behavior:

      + @
      + onoff_attribute("Enable hyperlinks for humans (as deduced from the UserAgent " + " HTTP header string)", + "auto-hyperlink-ishuman", "ahis", 0, 0); + @
      + onoff_attribute("Require mouse movement before enabling hyperlinks", + "auto-hyperlink-mouseover", "ahmo", 0, 0); + @
      + entry_attribute("Delay before enabling hyperlinks (milliseconds)", 5, + "auto-hyperlink-delay", "ah-delay", "10", 0); + @
      + @

      Hyperlinks for user "nobody" are normally enabled as soon as the page + @ finishes loading. But the first check-box below can be set to require mouse + @ movement before enabling the links. One can also set a delay prior to enabling + @ links by entering a positive number of milliseconds in the entry box above.

      + + @
      + onoff_attribute("Require a CAPTCHA if not logged in", + "require-captcha", "reqcapt", 1, 0); + @

      Require a CAPTCHA for edit operations (appending, creating, or + @ editing wiki or tickets or adding attachments to wiki or tickets) + @ for users who are not logged in.

      + + @
      + entry_attribute("Public pages", 30, "public-pages", + "pubpage", "", 0); + @

      A comma-separated list of glob patterns for pages that are accessible + @ without needing a login and using the privileges given by the + @ "Default privileges" setting below. Example use case: Set this field + @ to "/doc/trunk/www/*" to give anonymous users read-only permission to the + @ latest version of the embedded documentation in the www/ folder without + @ allowing them to see the rest of the source code. + @

      + + @
      + onoff_attribute("Allow users to register themselves", + "self-register", "selfregister", 0, 0); + @

      Allow users to register themselves through the HTTP UI. + @ The registration form always requires filling in a CAPTCHA + @ (auto-captcha setting is ignored). Still, bear in mind that anyone + @ can register under any user name. This option is useful for public projects + @ where you do not want everyone in any ticket discussion to be named + @ "Anonymous".

      + + @
      + entry_attribute("Default privileges", 10, "default-perms", + "defaultperms", "u", 0); + @

      Permissions given to users that...

      • register themselves using + @ the self-registration procedure (if enabled), or
      • access "public" + @ pages identified by the public-pages glob pattern above, or
      • + @ are users newly created by the administrator.
      + @

      + + @
      onoff_attribute("Show javascript button to fill in CAPTCHA", - "auto-captcha", "autocaptcha", 0); + "auto-captcha", "autocaptcha", 0, 0); @

      When enabled, a button appears on the login screen for user @ "anonymous" that will automatically fill in the CAPTCHA password. - @ This is less secure that forcing the user to do it manually, but is + @ This is less secure than forcing the user to do it manually, but is @ probably secure enough and it is certainly more convenient for @ anonymous users.

      - @
      - @

      - @ + @
      + @

      + @
      db_end_transaction(0); style_footer(); } + +/* +** WEBPAGE: setup_login_group +** +** Change how the current repository participates in a login +** group. +*/ +void setup_login_group(void){ + const char *zGroup; + char *zErrMsg = 0; + Blob fullName; + char *zSelfRepo; + const char *zRepo = PD("repo", ""); + const char *zLogin = PD("login", ""); + const char *zPw = PD("pw", ""); + const char *zNewName = PD("newname", "New Login Group"); + + login_check_credentials(); + if( !g.perm.Setup ){ + login_needed(0); + return; + } + file_canonical_name(g.zRepositoryName, &fullName, 0); + zSelfRepo = fossil_strdup(blob_str(&fullName)); + blob_reset(&fullName); + if( P("join")!=0 ){ + login_group_join(zRepo, zLogin, zPw, zNewName, &zErrMsg); + }else if( P("leave") ){ + login_group_leave(&zErrMsg); + } + style_header("Login Group Configuration"); + if( zErrMsg ){ + @

      %s(zErrMsg)

      + } + zGroup = login_group_name(); + if( zGroup==0 ){ + @

      This repository (in the file named "%h(zSelfRepo)") + @ is not currently part of any login-group. + @ To join a login group, fill out the form below.

      + @ + @
      + login_insert_csrf_secret(); + @
      + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @ + @
      Repository filename in group to join: + @
      Login on the above repo: + @
      Password: + @
      Name of login-group: + @ + @ (only used if creating a new login-group).
      + @
      + }else{ + Stmt q; + int n = 0; + @

      This repository (in the file "%h(zSelfRepo)") + @ is currently part of the "%h(zGroup)" login group. + @ Other repositories in that group are:

      + @ + @ + db_prepare(&q, + "SELECT value," + " (SELECT value FROM config" + " WHERE name=('peer-name-' || substr(x.name,11)))" + " FROM config AS x" + " WHERE name GLOB 'peer-repo-*'" + " ORDER BY value" + ); + while( db_step(&q)==SQLITE_ROW ){ + const char *zRepo = db_column_text(&q, 0); + const char *zTitle = db_column_text(&q, 1); + n++; + @ + } + db_finalize(&q); + @
      Project Name + @ Repository File
      %d(n). + @ %h(zTitle)%h(zRepo)
      + @ + @

      + login_insert_csrf_secret(); + @ To leave this login group press + @ + @

      + @

      Implementation Details

      + @

      The following are fields from the CONFIG table related to login-groups, + @ provided here for instructional and debugging purposes:

      + @ + @ + db_prepare(&q, "SELECT name, value, datetime(mtime,'unixepoch') FROM config" + " WHERE name GLOB 'peer-*'" + " OR name GLOB 'project-*'" + " ORDER BY name"); + while( db_step(&q)==SQLITE_ROW ){ + @ + @ + @ + } + db_finalize(&q); + @
      Config.NameConfig.ValueConfig.mtime
      %h(db_column_text(&q,0))%h(db_column_text(&q,1))%h(db_column_text(&q,2))
      + output_table_sorting_javascript("configTab","ttt",1); + } + style_footer(); +} /* ** WEBPAGE: setup_timeline +** +** Edit administrative settings controlling the display of +** timelines. */ void setup_timeline(void){ + double tmDiff; + char zTmDiff[20]; + static const char *const azTimeFormats[] = { + "0", "HH:MM", + "1", "HH:MM:SS", + "2", "YYYY-MM-DD HH:MM", + "3", "YYMMDD HH:MM", + "4", "(off)" + }; login_check_credentials(); - if( !g.okSetup ){ - login_needed(); + if( !g.perm.Setup ){ + login_needed(0); + return; } style_header("Timeline Display Preferences"); db_begin_transaction(); - @
      + @
      login_insert_csrf_secret(); - @
      + @
      onoff_attribute("Allow block-markup in timeline", - "timeline-block-markup", "tbm", 0); + "timeline-block-markup", "tbm", 0, 0); @

      In timeline displays, check-in comments can be displayed with or @ without block markup (paragraphs, tables, etc.)

      - @
      + @
      + onoff_attribute("Plaintext comments on timelines", + "timeline-plaintext", "tpt", 0, 0); + @

      In timeline displays, check-in comments are displayed literally, + @ without any wiki or HTML interpretation. (Note: Use CSS to change + @ display formatting features such as fonts and line-wrapping behavior.)

      + + @
      + onoff_attribute("Truncate comment at first blank line", + "timeline-truncate-at-blank", "ttb", 0, 0); + @

      In timeline displays, check-in comments are displayed only through + @ the first blank line.

      + + @
      onoff_attribute("Use Universal Coordinated Time (UTC)", - "timeline-utc", "utc", 1); + "timeline-utc", "utc", 1, 0); @

      Show times as UTC (also sometimes called Greenwich Mean Time (GMT) or - @ Zulu) instead of in local time.

      + @ Zulu) instead of in local time. On this server, local time is currently + tmDiff = db_double(0.0, "SELECT julianday('now')"); + tmDiff = db_double(0.0, + "SELECT (julianday(%.17g,'localtime')-julianday(%.17g))*24.0", + tmDiff, tmDiff); + sqlite3_snprintf(sizeof(zTmDiff), zTmDiff, "%.1f", tmDiff); + if( strcmp(zTmDiff, "0.0")==0 ){ + @ the same as UTC and so this setting will make no difference in + @ the display.

      + }else if( tmDiff<0.0 ){ + sqlite3_snprintf(sizeof(zTmDiff), zTmDiff, "%.1f", -tmDiff); + @ %s(zTmDiff) hours behind UTC.

      + }else{ + @ %s(zTmDiff) hours ahead of UTC.

      + } + + @
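The hunk above measures the server's UTC offset by subtracting two julianday() values inside SQLite. For comparison, the same offset can be computed with plain C time functions; the following is a self-contained sketch for illustration, not the code used here.

    /* Sketch: compute the local-time offset from UTC in hours, the same
    ** quantity the julianday() arithmetic above reports.  Illustrative only. */
    #include <stdio.h>
    #include <time.h>

    int main(void){
      time_t now = time(0);
      struct tm utc, local;
      double hours;
      gmtime_r(&now, &utc);
      localtime_r(&now, &local);
      /* mktime() interprets both values as local time, so the difference
      ** of the two conversions is the UTC offset. */
      hours = difftime(mktime(&local), mktime(&utc)) / 3600.0;
      printf("local time is %+.1f hours from UTC\n", hours);
      return 0;
    }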
      + multiple_choice_attribute("Per-Item Time Format", "timeline-date-format", + "tdf", "0", ArraySize(azTimeFormats)/2, azTimeFormats); + @

      If the "HH:MM" or "HH:MM:SS" format is selected, then the date is shown + @ in a separate box (using CSS class "timelineDate") whenever the date changes. + @ With the "YYYY-MM-DD HH:MM" and "YYMMDD ..." formats, the complete date + @ and time is shown on every timeline entry (using the CSS class "timelineTime").

      - @
      + @
      onoff_attribute("Show version differences by default", - "show-version-diffs", "vdiff", 0); - @

      On the version-information pages linked from the timeline can either + "show-version-diffs", "vdiff", 0, 0); + @

      The version-information pages linked from the timeline can either @ show complete diffs of all file changes, or can just list the names of @ the files that have changed. Users can get to either page by @ clicking. This setting selects the default.

      - @
      + @
      entry_attribute("Max timeline comment length", 6, - "timeline-max-comment", "tmc", "0"); + "timeline-max-comment", "tmc", "0", 0); @

      The maximum length of a comment to be displayed in a timeline. @ "0" means there is no length limit.

      - @
      - @

      - @ + @
      + @

      + @
      + db_end_transaction(0); + style_footer(); +} + +/* +** WEBPAGE: setup_settings +** +** Change or view miscellanous settings. Part of the +** Admin pages requiring Admin privileges. +*/ +void setup_settings(void){ + Setting const *pSet; + + login_check_credentials(); + if( !g.perm.Setup ){ + login_needed(0); + return; + } + + (void) aCmdHelp; /* NOTE: Silence compiler warning. */ + style_header("Settings"); + if(!g.repositoryOpen){ + /* Provide read-only access to versioned settings, + but only if no repo file was explicitly provided. */ + db_open_local(0); + } + db_begin_transaction(); + @

      This page provides a simple interface to the "fossil setting" command. + @ See the "fossil help setting" output below for further information on + @ the meaning of each setting.


      + @
      + @
      + login_insert_csrf_secret(); + for(pSet=aSetting; pSet->name!=0; pSet++){ + if( pSet->width==0 ){ + int hasVersionableValue = pSet->versionable && + (db_get_versioned(pSet->name, NULL)!=0); + onoff_attribute(pSet->name, pSet->name, + pSet->var!=0 ? pSet->var : pSet->name, + is_truth(pSet->def), hasVersionableValue); + if( pSet->versionable ){ + @ (v)
      + } else { + @
      + } + } + } + @
      + @
      + for(pSet=aSetting; pSet->name!=0; pSet++){ + if( pSet->width!=0 && !pSet->versionable && !pSet->forceTextArea ){ + entry_attribute(pSet->name, /*pSet->width*/ 25, pSet->name, + pSet->var!=0 ? pSet->var : pSet->name, + (char*)pSet->def, 0); + @
      + } + } + for(pSet=aSetting; pSet->name!=0; pSet++){ + if( pSet->width!=0 && !pSet->versionable && pSet->forceTextArea ){ + @%s(pSet->name)
      + textarea_attribute("", /*rows*/ 3, /*cols*/ 50, pSet->name, + pSet->var!=0 ? pSet->var : pSet->name, + (char*)pSet->def, 0); + @
      + } + } + @
      + for(pSet=aSetting; pSet->name!=0; pSet++){ + if( pSet->width!=0 && pSet->versionable ){ + int hasVersionableValue = db_get_versioned(pSet->name, NULL)!=0; + @%s(pSet->name) (v)
      + textarea_attribute("", /*rows*/ 3, /*cols*/ 20, pSet->name, + pSet->var!=0 ? pSet->var : pSet->name, + (char*)pSet->def, hasVersionableValue); + @
      + } + } + @
      + @
      + @

      Settings marked with (v) are 'versionable' and will be overridden + @ by the contents of files named .fossil-settings/PROPERTY + @ in the check-out root. + @ If such a file is present, the corresponding field above is not + @ editable.


      + @ These settings work in the same way as the set + @ command line:
      + @

      %s(zHelp_setting_cmd)
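As the note above says, a "versionable" setting is overridden by a file named .fossil-settings/PROPERTY at the check-out root. A minimal sketch of that lookup order, file first and stored value second, is shown below; the setting name and paths are examples only and the fallback is left abstract.

    /* Sketch: read a versionable setting.  A file .fossil-settings/<name>
    ** in the check-out root wins; otherwise fall back to the stored value.
    ** The setting name and paths here are examples only. */
    #include <stdio.h>
    #include <string.h>

    static int versioned_setting(const char *zRoot, const char *zName,
                                 char *zOut, int nOut){
      char zPath[512];
      FILE *f;
      snprintf(zPath, sizeof(zPath), "%s/.fossil-settings/%s", zRoot, zName);
      f = fopen(zPath, "r");
      if( f==0 ) return 0;                     /* no versioned value */
      if( fgets(zOut, nOut, f)==0 ) zOut[0] = 0;
      zOut[strcspn(zOut, "\r\n")] = 0;         /* trim trailing newline */
      fclose(f);
      return 1;
    }

    int main(void){
      char zVal[256];
      if( versioned_setting(".", "crnl-glob", zVal, sizeof(zVal)) ){
        printf("crnl-glob (versioned): %s\n", zVal);
      }else{
        printf("crnl-glob: using the repository or default value\n");
      }
      return 0;
    }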
      db_end_transaction(0); style_footer(); } /* ** WEBPAGE: setup_config +** +** The "Admin/Configuration" page. Requires Admin privilege. */ void setup_config(void){ login_check_credentials(); - if( !g.okSetup ){ - login_needed(); + if( !g.perm.Setup ){ + login_needed(0); + return; } style_header("WWW Configuration"); db_begin_transaction(); - @
      + @
      login_insert_csrf_secret(); @
      - entry_attribute("Project Name", 60, "project-name", "pn", ""); + entry_attribute("Project Name", 60, "project-name", "pn", "", 0); @

      Give your project a name so visitors know what this site is about. - @ The project name will also be used as the RSS feed title.

      + @ The project name will also be used as the RSS feed title. + @

      @
      - textarea_attribute("Project Description", 5, 60, - "project-description", "pd", ""); + textarea_attribute("Project Description", 3, 80, + "project-description", "pd", "", 0); @

      Describe your project. This will be used in page headers for search @ engines as well as a short RSS description.

      @
      - entry_attribute("Index Page", 60, "index-page", "idxpg", "/home"); + entry_attribute("Tarball and ZIP-archive Prefix", 20, "short-project-name", "spn", "", 0); + @

      This is used as a prefix on the names of generated tarballs and ZIP archives. + @ For best results, keep this prefix brief and avoid special characters such + @ as "/" and "\". + @ If no tarball prefix is specified, then the full Project Name above is used. + @

      + @
      + onoff_attribute("Enable WYSIWYG Wiki Editing", + "wysiwyg-wiki", "wysiwyg-wiki", 0, 0); + @

      Enable what-you-see-is-what-you-get (WYSIWYG) editing of wiki pages. + @ The WYSIWYG editor generates HTML instead of markup, which makes + @ subsequent manual editing more difficult.

      + @
      + entry_attribute("Index Page", 60, "index-page", "idxpg", "/home", 0); @

      Enter the pathname of the page to display when the "Home" menu @ option is selected and when no pathname is @ specified in the URL. For example, if you visit the url:

      @ - @
      %h(g.zBaseURL)
      + @

      %h(g.zBaseURL)

      @ @

      And you have specified an index page of "/home" the above will @ automatically redirect to:

      @ - @
      %h(g.zBaseURL)/home
      + @

      %h(g.zBaseURL)/home

      @ @

      The default "/home" page displays a Wiki page with the same name @ as the Project Name specified above. Some sites prefer to redirect @ to a documentation page (ex: "/doc/tip/index.wiki") or to "/timeline".

      + @ + @

      Note: To avoid a redirect loop or other problems, this entry must + @ begin with "/" and it must specify a valid page. For example, + @ "/home" will work but "home" will not, since it omits the + @ leading "/".

      @
      onoff_attribute("Use HTML as wiki markup language", - "wiki-use-html", "wiki-use-html", 0); - @

      Use HTML as the wiki markup language. Wiki links will still be parsed but - @ all other wiki formatting will be ignored. This option is helpful if you have - @ chosen to use a rich HTML editor for wiki markup such as TinyMCE.

      + "wiki-use-html", "wiki-use-html", 0, 0); + @

      Use HTML as the wiki markup language. Wiki links will still be parsed + @ but all other wiki formatting will be ignored. This option is helpful + @ if you have chosen to use a rich HTML editor for wiki markup such as + @ TinyMCE.

      @

      CAUTION: when @ enabling, all HTML tags and attributes are accepted in the wiki. @ No sanitization is done. This means that it is very possible for malicious @ users to inject dangerous HTML, CSS and JavaScript code into your wiki.

      @

      This should only be enabled when wiki editing is limited @ to trusted users. It should not be used on a publicly @ editable wiki.

      @
      - @

      - @ - db_end_transaction(0); - style_footer(); -} - -/* -** WEBPAGE: setup_editcss -*/ -void setup_editcss(void){ - login_check_credentials(); - if( !g.okSetup ){ - login_needed(); - } - db_begin_transaction(); - if( P("clear")!=0 ){ - db_multi_exec("DELETE FROM config WHERE name='css'"); - cgi_replace_parameter("css", zDefaultCSS); - db_end_transaction(0); - cgi_redirect("setup_editcss"); - }else{ - textarea_attribute(0, 0, 0, "css", "css", zDefaultCSS); - } - if( P("submit")!=0 ){ - db_end_transaction(0); - cgi_redirect("setup_editcss"); - } - style_header("Edit CSS"); - @
      - login_insert_csrf_secret(); - @ Edit the CSS below:
      - textarea_attribute("", 40, 80, "css", "css", zDefaultCSS); - @
      - @ - @ - @
      - @

      Note: Press your browser Reload button after modifying the - @ CSS in order to pull in the modified CSS file.

      - @
      - @ The default CSS is shown below for reference. Other examples - @ of CSS files can be seen on the skins page. - @ See also the header and - @ footer editing screens. - @
      -  @ %h(zDefaultCSS)
      -  @ 
      - style_footer(); - db_end_transaction(0); -} - -/* -** WEBPAGE: setup_header -*/ -void setup_header(void){ - login_check_credentials(); - if( !g.okSetup ){ - login_needed(); - } - db_begin_transaction(); - if( P("clear")!=0 ){ - db_multi_exec("DELETE FROM config WHERE name='header'"); - cgi_replace_parameter("header", zDefaultHeader); - }else{ - textarea_attribute(0, 0, 0, "header", "header", zDefaultHeader); - } - style_header("Edit Page Header"); - @
      - login_insert_csrf_secret(); - @

      Edit HTML text with embedded TH1 (a TCL dialect) that will be used to - @ generate the beginning of every page through start of the main - @ menu.

      - textarea_attribute("", 40, 80, "header", "header", zDefaultHeader); - @
      - @ - @ - @
      - @
      - @ The default header is shown below for reference. Other examples - @ of headers can be seen on the skins page. - @ See also the CSS and - @ footer editing screeens. - @
      -  @ %h(zDefaultHeader)
      -  @ 
      - style_footer(); - db_end_transaction(0); -} - -/* -** WEBPAGE: setup_footer -*/ -void setup_footer(void){ - login_check_credentials(); - if( !g.okSetup ){ - login_needed(); - } - db_begin_transaction(); - if( P("clear")!=0 ){ - db_multi_exec("DELETE FROM config WHERE name='footer'"); - cgi_replace_parameter("footer", zDefaultFooter); - }else{ - textarea_attribute(0, 0, 0, "footer", "footer", zDefaultFooter); - } - style_header("Edit Page Footer"); - @
      - login_insert_csrf_secret(); - @

      Edit HTML text with embedded TH1 (a TCL dialect) that will be used to - @ generate the end of every page.

      - textarea_attribute("", 20, 80, "footer", "footer", zDefaultFooter); - @
      - @ - @ - @
      - @
      - @ The default footer is shown below for reference. Other examples - @ of footers can be seen on the skins page. - @ See also the CSS and - @ header editing screens. - @
      -  @ %h(zDefaultFooter)
      -  @ 
      + @

      + @
      + db_end_transaction(0); + style_footer(); +} + +/* +** WEBPAGE: setup_modreq +** +** Admin page for setting up moderation of tickets and wiki. +*/ +void setup_modreq(void){ + login_check_credentials(); + if( !g.perm.Setup ){ + login_needed(0); + return; + } + + style_header("Moderator For Wiki And Tickets"); + db_begin_transaction(); + @
      + login_insert_csrf_secret(); + @
      + onoff_attribute("Moderate ticket changes", + "modreq-tkt", "modreq-tkt", 0, 0); + @

      When enabled, any change to tickets is subject to the approval + @ by a ticket moderator - a user with the "q" or Mod-Tkt privilege. + @ Ticket changes enter the system and are shown locally, but are not + @ synced until they are approved. The moderator has the option to + @ delete the change rather than approve it. Ticket changes made by + @ a user who has the Mod-Tkt privilege are never subject to + @ moderation. + @ + @


      + onoff_attribute("Moderate wiki changes", + "modreq-wiki", "modreq-wiki", 0, 0); + @

      When enabled, any change to wiki is subject to the approval + @ by a wiki moderator - a user with the "l" or Mod-Wiki privilege. + @ Wiki changes enter the system and are shown locally, but are not + @ synced until they are approved. The moderator has the option to + @ delete the change rather than approve it. Wiki changes made by + @ a user who has the Mod-Wiki privilege are never subject to + @ moderation. + @

      + + @
      + @

      + @
      + db_end_transaction(0); + style_footer(); + +} + +/* +** WEBPAGE: setup_adunit +** +** Administrative page for configuring and controlling ad units +** and how they are displayed. +*/ +void setup_adunit(void){ + login_check_credentials(); + if( !g.perm.Setup ){ + login_needed(0); + return; + } + db_begin_transaction(); + if( P("clear")!=0 ){ + db_multi_exec("DELETE FROM config WHERE name GLOB 'adunit*'"); + cgi_replace_parameter("adunit",""); + } + + style_header("Edit Ad Unit"); + @
      + login_insert_csrf_secret(); + @ Banner Ad-Unit:
      + textarea_attribute("", 6, 80, "adunit", "adunit", "", 0); + @
      + @ Right-Column Ad-Unit:
      + textarea_attribute("", 6, 80, "adunit-right", "adright", "", 0); + @
      + onoff_attribute("Omit ads to administrator", + "adunit-omit-if-admin", "oia", 0, 0); + @
      + onoff_attribute("Omit ads to logged-in users", + "adunit-omit-if-user", "oiu", 0, 0); + @
      + @ + @ + @
      + @
      + @ Ad-Unit Notes:
        + @
      • Leave both Ad-Units blank to disable all advertising. + @
      • The "Banner Ad-Unit" is used for wide pages. + @
      • The "Right-Column Ad-Unit" is used on pages with tall, narrow content. + @
      • If the "Right-Column Ad-Unit" is blank, the "Banner Ad-Unit" is used on all pages. + @
      • Suggested CSS changes: + @
        +  @ div.adunit_banner {
        +  @   margin: auto;
        +  @   width: 100%;
        +  @ }
        +  @ div.adunit_right {
        +  @   float: right;
        +  @ }
        +  @ div.adunit_right_container {
        +  @   min-height: height-of-right-column-ad-unit;
        +  @ }
        +  @ 
        + @
• For a placeholder Ad-Unit for testing, copy/paste the following + @ with appropriate adjustments to "width:" and "height:". + @
        +  @ <div style='
        +  @   margin: 0 auto;
        +  @   width: 600px;
        +  @   height: 90px;
        +  @   border: 1px solid #f11;
        +  @   background-color: #fcc;
        +  @ '>Demo Ad</div>
        +  @ 
        + @
      • style_footer(); db_end_transaction(0); } /* ** WEBPAGE: setup_logo +** +** Administrative page for changing the logo image. */ void setup_logo(void){ - const char *zMime = "image/gif"; - const char *aImg = P("im"); - int szImg = atoi(PD("im:bytes","0")); - if( szImg>0 ){ - zMime = PD("im:mimetype","image/gif"); + const char *zLogoMtime = db_get_mtime("logo-image", 0, 0); + const char *zLogoMime = db_get("logo-mimetype","image/gif"); + const char *aLogoImg = P("logoim"); + int szLogoImg = atoi(PD("logoim:bytes","0")); + const char *zBgMtime = db_get_mtime("background-image", 0, 0); + const char *zBgMime = db_get("background-mimetype","image/gif"); + const char *aBgImg = P("bgim"); + int szBgImg = atoi(PD("bgim:bytes","0")); + if( szLogoImg>0 ){ + zLogoMime = PD("logoim:mimetype","image/gif"); + } + if( szBgImg>0 ){ + zBgMime = PD("bgim:mimetype","image/gif"); } login_check_credentials(); - if( !g.okSetup ){ - login_needed(); + if( !g.perm.Setup ){ + login_needed(0); + return; } db_begin_transaction(); - if( P("set")!=0 && zMime && zMime[0] && szImg>0 ){ + if( P("setlogo")!=0 && zLogoMime && zLogoMime[0] && szLogoImg>0 ){ + Blob img; + Stmt ins; + blob_init(&img, aLogoImg, szLogoImg); + db_prepare(&ins, + "REPLACE INTO config(name,value,mtime)" + " VALUES('logo-image',:bytes,now())" + ); + db_bind_blob(&ins, ":bytes", &img); + db_step(&ins); + db_finalize(&ins); + db_multi_exec( + "REPLACE INTO config(name,value,mtime) VALUES('logo-mimetype',%Q,now())", + zLogoMime + ); + db_end_transaction(0); + cgi_redirect("setup_logo"); + }else if( P("clrlogo")!=0 ){ + db_multi_exec( + "DELETE FROM config WHERE name IN " + "('logo-image','logo-mimetype')" + ); + db_end_transaction(0); + cgi_redirect("setup_logo"); + }else if( P("setbg")!=0 && zBgMime && zBgMime[0] && szBgImg>0 ){ Blob img; Stmt ins; - blob_init(&img, aImg, szImg); + blob_init(&img, aBgImg, szBgImg); db_prepare(&ins, - "REPLACE INTO config(name, value)" - " VALUES('logo-image',:bytes)" + "REPLACE INTO config(name,value,mtime)" + " VALUES('background-image',:bytes,now())" ); db_bind_blob(&ins, ":bytes", &img); db_step(&ins); db_finalize(&ins); db_multi_exec( - "REPLACE INTO config(name, value) VALUES('logo-mimetype',%Q)", - zMime - ); - db_end_transaction(0); - cgi_redirect("setup_logo"); - }else if( P("clr")!=0 ){ - db_multi_exec( - "DELETE FROM config WHERE name GLOB 'logo-*'" - ); - db_end_transaction(0); - cgi_redirect("setup_logo"); - } - style_header("Edit Project Logo"); - @

        The current project logo has a MIME-Type of %h(zMime) and looks - @ like this:

        - @
        logo
        - @ - @

        The logo is accessible to all users at this URL: - @ %s(g.zBaseURL)/logo. - @ The logo may or may not appear on each - @ page depending on the CSS and - @ header setup.

        - @ - @
        - @

        To set a new logo image, select a file to use as the logo using - @ the entry box below and then press the "Change Logo" button.

        - login_insert_csrf_secret(); - @ Logo Image file: - @
        - @ - @ - @
        - @ - @

        Note: Your browser has probably cached the logo image, so - @ you will probably need to press the Reload button on your browser after - @ changing the logo to provoke your browser to reload the new logo image. - @

        - style_footer(); - db_end_transaction(0); + "REPLACE INTO config(name,value,mtime)" + " VALUES('background-mimetype',%Q,now())", + zBgMime + ); + db_end_transaction(0); + cgi_redirect("setup_logo"); + }else if( P("clrbg")!=0 ){ + db_multi_exec( + "DELETE FROM config WHERE name IN " + "('background-image','background-mimetype')" + ); + db_end_transaction(0); + cgi_redirect("setup_logo"); + } + style_header("Edit Project Logo And Background"); + @

        The current project logo has a MIME-Type of %h(zLogoMime) + @ and looks like this:

        + @

        logo + @

        + @ + @
        + @

        The logo is accessible to all users at this URL: + @ %s(g.zBaseURL)/logo. + @ The logo may or may not appear on each + @ page depending on the CSS and + @ header setup. + @ To change the logo image, use the following form:

        + login_insert_csrf_secret(); + @ Logo Image file: + @ + @

        + @ + @

        + @
        + @
        + @ + @

        The current background image has a MIME-Type of %h(zBgMime) + @ and looks like this:

        + @

        background + @

        + @ + @
        + @

        The background image is accessible to all users at this URL: + @ %s(g.zBaseURL)/background. + @ The background image may or may not appear on each + @ page depending on the CSS and + @ header setup. + @ To change the background image, use the following form:

        + login_insert_csrf_secret(); + @ Background image file: + @ + @

        + @ + @

        + @
        + @
        + @ + @

        Note: Your browser has probably cached these + @ images, so you may need to press the Reload button before changes will + @ take effect.
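The logo and background blobs submitted through the forms above are stored as config rows (see the REPLACE INTO config(name,value,mtime) statements earlier in this hunk). A hedged sketch of reading one back for serving; db_blob(), cgi_set_content_type() and cgi_set_content() are assumed helpers, not part of this change:

      /* Sketch only: fetch the stored logo roughly the way a /logo page
      ** could serve it.  db_get() and the config rows come from this hunk. */
      Blob img = empty_blob;
      const char *zMime = db_get("logo-mimetype", "image/gif");
      db_blob(&img, "SELECT value FROM config WHERE name='logo-image'");
      cgi_set_content_type(zMime);
      cgi_set_content(&img);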

        + style_footer(); + db_end_transaction(0); +} + +/* +** Prevent the RAW SQL feature from being used to ATTACH a different +** database and query it. +** +** Actually, the RAW SQL feature only does a single statement per request. +** So it is not possible to ATTACH and then do a separate query. This +** routine is not strictly necessary, therefore. But it does not hurt +** to be paranoid. +*/ +int raw_sql_query_authorizer( + void *pError, + int code, + const char *zArg1, + const char *zArg2, + const char *zArg3, + const char *zArg4 +){ + if( code==SQLITE_ATTACH ){ + return SQLITE_DENY; + } + return SQLITE_OK; +} + + +/* +** WEBPAGE: admin_sql +** +** Run raw SQL commands against the database file using the web interface. +** Requires Admin privileges. +*/ +void sql_page(void){ + const char *zQ = P("q"); + int go = P("go")!=0; + login_check_credentials(); + if( !g.perm.Setup ){ + login_needed(0); + return; + } + db_begin_transaction(); + style_header("Raw SQL Commands"); + @

Caution: There are no restrictions on the SQL that can be + @ run by this page. You can do serious and irreparable damage to the + @ repository. Proceed with extreme caution.

        + @ + @

        Only the first statement in the entry box will be run. + @ Any subsequent statements will be silently ignored.
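Note that the one statement that does run is subject to the raw_sql_query_authorizer() callback defined earlier in this hunk, installed with sqlite3_set_authorizer() so it cannot ATTACH a second database. A minimal stand-alone sketch of that SQLite pattern (the function name here is illustrative):

      #include <sqlite3.h>

      /* Deny ATTACH, allow everything else; same idea as the page's
      ** authorizer.  The callback signature is SQLite's standard one. */
      static int denyAttach(void *pNotUsed, int code, const char *z1,
                            const char *z2, const char *z3, const char *z4){
        return code==SQLITE_ATTACH ? SQLITE_DENY : SQLITE_OK;
      }

      /* usage, given an open handle "db":
      **   sqlite3_set_authorizer(db, denyAttach, 0);
      */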

        + @ + @

        Database names:

        • repository → %s(db_name("repository")) + if( g.zConfigDbName ){ + @
        • config → %s(db_name("configdb")) + } + if( g.localOpen ){ + @
        • local-checkout → %s(db_name("localdb")) + } + @

        + @ + @
        + login_insert_csrf_secret(); + @ SQL:
        + @
        + @ + @ + @ + @
        + if( P("schema") ){ + zQ = sqlite3_mprintf( + "SELECT sql FROM %s.sqlite_master WHERE sql IS NOT NULL", + db_name("repository")); + go = 1; + }else if( P("tablelist") ){ + zQ = sqlite3_mprintf( + "SELECT name FROM %s.sqlite_master WHERE type='table'" + " ORDER BY name", + db_name("repository")); + go = 1; + } + if( go ){ + sqlite3_stmt *pStmt; + int rc; + const char *zTail; + int nCol; + int nRow = 0; + int i; + @
        + login_verify_csrf_secret(); + sqlite3_set_authorizer(g.db, raw_sql_query_authorizer, 0); + rc = sqlite3_prepare_v2(g.db, zQ, -1, &pStmt, &zTail); + if( rc!=SQLITE_OK ){ + @
        %h(sqlite3_errmsg(g.db))
        + sqlite3_finalize(pStmt); + }else if( pStmt==0 ){ + /* No-op */ + }else if( (nCol = sqlite3_column_count(pStmt))==0 ){ + sqlite3_step(pStmt); + rc = sqlite3_finalize(pStmt); + if( rc ){ + @
        %h(sqlite3_errmsg(g.db))
        + } + }else{ + @ + while( sqlite3_step(pStmt)==SQLITE_ROW ){ + if( nRow==0 ){ + @ + for(i=0; i%h(sqlite3_column_name(pStmt, i)) + } + @ + } + nRow++; + @ + for(i=0; i + @ %s(sqlite3_column_text(pStmt, i)) + break; + } + case SQLITE_NULL: { + @ + break; + } + case SQLITE_TEXT: { + const char *zText = (const char*)sqlite3_column_text(pStmt, i); + @ + break; + } + case SQLITE_BLOB: { + @ + break; + } + } + } + @ + } + sqlite3_finalize(pStmt); + @
        NULL%h(zText) + @ %d(sqlite3_column_bytes(pStmt, i))-byte BLOB
        + } + } + style_footer(); +} + + +/* +** WEBPAGE: admin_th1 +** +** Run raw TH1 commands using the web interface. If Tcl integration was +** enabled at compile-time and the "tcl" setting is enabled, Tcl commands +** may be run as well. Requires Admin privilege. +*/ +void th1_page(void){ + const char *zQ = P("q"); + int go = P("go")!=0; + login_check_credentials(); + if( !g.perm.Setup ){ + login_needed(0); + return; + } + db_begin_transaction(); + style_header("Raw TH1 Commands"); + @

        Caution: There are no restrictions on the TH1 that can be + @ run by this page. If Tcl integration was enabled at compile-time and + @ the "tcl" setting is enabled, Tcl commands may be run as well.
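Submitted scripts are evaluated through the TH1 C API further down in this hunk. The pattern, condensed into a hedged sketch (only Th_Eval(), Th_GetResult() and TH_OK are taken from this change; the comments are placeholders):

      /* Evaluate a TH1 string and fetch the interpreter result. */
      int n, rc = Th_Eval(g.interp, 0, zQ, -1);
      const char *zResult = Th_GetResult(g.interp, &n);
      if( rc==TH_OK ){
        /* render zResult as the script's output */
      }else{
        /* render zResult as an error message */
      }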

        + @ + @
        + login_insert_csrf_secret(); + @ TH1:
        + @
        + @ + @
        + if( go ){ + const char *zR; + int rc; + int n; + @
        + login_verify_csrf_secret(); + rc = Th_Eval(g.interp, 0, zQ, -1); + zR = Th_GetResult(g.interp, &n); + if( rc==TH_OK ){ + @
        %h(zR)
        + }else{ + @
        %h(zR)
        + } + } + style_footer(); +} + +static void admin_log_render_limits(){ + int const count = db_int(0,"SELECT COUNT(*) FROM admin_log"); + int i; + int limits[] = { + 10, 20, 50, 100, 250, 500, 0 + }; + for(i = 0; limits[i]; ++i ){ + cgi_printf("%s%d", + i ? " " : "", + limits[i], limits[i]); + if(limits[i]>count) break; + } +} + +/* +** WEBPAGE: admin_log +** +** Shows the contents of the admin_log table, which is only created if +** the admin-log setting is enabled. Requires Admin or Setup ('a' or +** 's') permissions. +*/ +void page_admin_log(){ + Stmt stLog = empty_Stmt; + Blob qLog = empty_blob; + int limit; + int fLogEnabled; + int counter = 0; + login_check_credentials(); + if( !g.perm.Setup && !g.perm.Admin ){ + login_needed(0); + return; + } + style_header("Admin Log"); + create_admin_log_table(); + limit = atoi(PD("n","20")); + fLogEnabled = db_get_boolean("admin-log", 0); + @
        Admin logging is %s(fLogEnabled?"on":"off"). + @ (Change this on the settings page.)
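The SELECT below implies the shape of the admin_log table: one row per administrative change, with a unix-epoch time plus who, page, and what columns. A hedged sketch of writing such an entry with the db_multi_exec()/now() idioms used elsewhere in this patch; the call site and message text are illustrative only:

      /* Sketch only: record an administrative change.  g.zPath and
      ** g.zLogin are assumed to name the current page and user. */
      db_multi_exec(
        "INSERT INTO admin_log(time,page,who,what)"
        " VALUES(now(),%Q,%Q,%Q)",
        g.zPath, g.zLogin, "enabled admin logging"
      );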
        + + + @
        Limit results to: + admin_log_render_limits(); + @
        + + blob_append_sql(&qLog, + "SELECT datetime(time,'unixepoch'), who, page, what " + "FROM admin_log " + "ORDER BY time DESC "); + if(limit>0){ + @ %d(limit) Most recent entries: + blob_append_sql(&qLog, "LIMIT %d", limit); + } + db_prepare(&stLog, "%s", blob_sql_text(&qLog)); + blob_reset(&qLog); + @ + @ + @ + @ + @ + @ + @ + while( SQLITE_ROW == db_step(&stLog) ){ + const char *zTime = db_column_text(&stLog, 0); + const char *zUser = db_column_text(&stLog, 1); + const char *zPage = db_column_text(&stLog, 2); + const char *zMessage = db_column_text(&stLog, 3); + @ + @ + @ + @ + @ + @ + } + @
        TimeUserPageMessage
        %s(zTime)%s(zUser)%s(zPage)%h(zMessage)
        + if(limit>0 && counter%d(counter) entries shown.
      + } + style_footer(); +} + +/* +** WEBPAGE: srchsetup +** +** Configure the search engine. Requires Admin privilege. +*/ +void page_srchsetup(){ + login_check_credentials(); + if( !g.perm.Setup && !g.perm.Admin ){ + login_needed(0); + return; + } + style_header("Search Configuration"); + @
      + login_insert_csrf_secret(); + @
      + @ Server-specific settings that affect the + @ /search webpage. + @
      + @
      + textarea_attribute("Document Glob List", 3, 35, "doc-glob", "dg", "", 0); + @

      The "Document Glob List" is a comma- or newline-separated list + @ of GLOB expressions that identify all documents within the source + @ tree that are to be searched when "Document Search" is enabled. + @ Some examples: + @ + @ + @ + @ + @ + @
      *.wiki,*.html,*.md,*.txt + @ Search all wiki, HTML, Markdown, and Text files
      doc/*.md,*/README.txt,README.txt + @ Search all Markdown files in the doc/ subfolder and all README.txt + @ files.
* + @ Search all checked-in files
      (blank) + @ Search nothing. (Disables document search).
      + @
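Since the glob list decides which checked-in files are indexed for document search, here is a rough sketch of applying such a comma/newline-separated pattern list to a filename; glob_create(), glob_match() and glob_free() are assumed Fossil helpers, not introduced by this hunk, and the filename is just an example:

      /* Sketch only: test one filename against the configured pattern list. */
      Glob *pGlob = glob_create(db_get("doc-glob", ""));
      if( pGlob && glob_match(pGlob, "www/quickstart.wiki") ){
        /* this file would be included in document search indexing */
      }
      glob_free(pGlob);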


      + entry_attribute("Document Branch", 20, "doc-branch", "db", "trunk", 0); + @

When searching documents, use the versions of the files found at the + @ tip of the "Document Branch" branch. Recommended value: "trunk". + @ Document search is disabled if blank. + @


      + onoff_attribute("Search Check-in Comments", "search-ci", "sc", 0, 0); + @
      + onoff_attribute("Search Documents", "search-doc", "sd", 0, 0); + @
      + onoff_attribute("Search Tickets", "search-tkt", "st", 0, 0); + @
      + onoff_attribute("Search Wiki","search-wiki", "sw", 0, 0); + @
      + @

      + @
      + if( P("fts0") ){ + search_drop_index(); + }else if( P("fts1") ){ + search_drop_index(); + search_create_index(); + search_fill_index(); + search_update_index(search_restrict(SRCH_ALL)); + } + if( search_index_exists() ){ + @

      Currently using an SQLite FTS4 search index. This makes search + @ run faster, especially on large repositories, but takes up space.

      + onoff_attribute("Use Porter Stemmer","search-stemmer","ss",0,0); + @

      + @ + }else{ + @

      The SQLite FTS4 search index is disabled. All searching will be + @ a full-text scan. This usually works fine, but can be slow for + @ larger repositories.

      + onoff_attribute("Use Porter Stemmer","search-stemmer","ss",0,0); + @

      + } + @
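For reference, the "fts0"/"fts1" handling earlier in this function either drops the full-text index or rebuilds it from scratch. The rebuild sequence, gathered into one sketch (the four search_*() calls and SRCH_ALL are taken verbatim from this hunk; only the wrapper name is invented):

      /* Rebuild the FTS4 index from scratch, mirroring the "fts1" branch. */
      static void rebuild_search_index(void){
        search_drop_index();                             /* discard any old index   */
        search_create_index();                           /* create empty FTS tables */
        search_fill_index();                             /* index existing content  */
        search_update_index(search_restrict(SRCH_ALL));  /* pick up recent changes  */
      }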

      + style_footer(); } Index: src/sha1.c ================================================================== --- src/sha1.c +++ src/sha1.c @@ -1,434 +1,234 @@ /* -** This implementation of SHA1 is adapted from the example implementation -** contained in RFC-3174. -*/ -#include -#include -#include "config.h" -#include "sha1.h" - -/* - * If you do not have the ISO standard stdint.h header file, then you - * must typdef the following: - * name meaning - * uint32_t unsigned 32 bit integer - * uint8_t unsigned 8 bit integer (i.e., unsigned char) - * - */ -#define SHA1HashSize 20 -#define shaSuccess 0 -#define shaInputTooLong 1 -#define shaStateError 2 - -/* - * This structure will hold context information for the SHA-1 - * hashing operation - */ -typedef struct SHA1Context SHA1Context; -struct SHA1Context { - uint32_t Intermediate_Hash[SHA1HashSize/4]; /* Message Digest */ - - uint32_t Length_Low; /* Message length in bits */ - uint32_t Length_High; /* Message length in bits */ - - int Message_Block_Index; /* Index into message block array */ - uint8_t Message_Block[64]; /* 512-bit message blocks */ - - int Computed; /* Is the digest computed? */ - int Corrupted; /* Is the message digest corrupted? */ -}; - -/* - * sha1.c - * - * Description: - * This file implements the Secure Hashing Algorithm 1 as - * defined in FIPS PUB 180-1 published April 17, 1995. - * - * The SHA-1, produces a 160-bit message digest for a given - * data stream. It should take about 2**n steps to find a - * message with the same digest as a given message and - * 2**(n/2) to find any two messages with the same digest, - * when n is the digest size in bits. Therefore, this - * algorithm can serve as a means of providing a - * "fingerprint" for a message. - * - * Portability Issues: - * SHA-1 is defined in terms of 32-bit "words". This code - * uses (included via "sha1.h" to define 32 and 8 - * bit unsigned integer types. If your C compiler does not - * support 32 bit unsigned integers, this code is not - * appropriate. - * - * Caveats: - * SHA-1 is designed to work with messages less than 2^64 bits - * long. Although SHA-1 allows a message digest to be generated - * for messages of any number of bits less than 2^64, this - * implementation only works with messages with a length that is - * a multiple of the size of an 8-bit character. - * - */ - -/* - * Define the SHA1 circular left shift macro - */ -#define SHA1CircularShift(bits,word) \ - (((word) << (bits)) | ((word) >> (32-(bits)))) - -/* Local Function Prototyptes */ -static void SHA1PadMessage(SHA1Context *); -static void SHA1ProcessMessageBlock(SHA1Context *); - -/* - * SHA1Reset - * - * Description: - * This function will initialize the SHA1Context in preparation - * for computing a new SHA1 message digest. - * - * Parameters: - * context: [in/out] - * The context to reset. - * - * Returns: - * sha Error Code. - * - */ -static int SHA1Reset(SHA1Context *context) -{ - context->Length_Low = 0; - context->Length_High = 0; - context->Message_Block_Index = 0; - - context->Intermediate_Hash[0] = 0x67452301; - context->Intermediate_Hash[1] = 0xEFCDAB89; - context->Intermediate_Hash[2] = 0x98BADCFE; - context->Intermediate_Hash[3] = 0x10325476; - context->Intermediate_Hash[4] = 0xC3D2E1F0; - - context->Computed = 0; - context->Corrupted = 0; - - return shaSuccess; -} - -/* - * SHA1Result - * - * Description: - * This function will return the 160-bit message digest into the - * Message_Digest array provided by the caller. 
- * NOTE: The first octet of hash is stored in the 0th element, - * the last octet of hash in the 19th element. - * - * Parameters: - * context: [in/out] - * The context to use to calculate the SHA-1 hash. - * Message_Digest: [out] - * Where the digest is returned. - * - * Returns: - * sha Error Code. - * - */ -static int SHA1Result( SHA1Context *context, - uint8_t Message_Digest[SHA1HashSize]) -{ - int i; - - if (context->Corrupted) - { - return context->Corrupted; - } - - if (!context->Computed) - { - SHA1PadMessage(context); - for(i=0; i<64; ++i) - { - /* message may be sensitive, clear it out */ - context->Message_Block[i] = 0; - } - context->Length_Low = 0; /* and clear length */ - context->Length_High = 0; - context->Computed = 1; - - } - - for(i = 0; i < SHA1HashSize; ++i) - { - Message_Digest[i] = context->Intermediate_Hash[i>>2] - >> 8 * ( 3 - ( i & 0x03 ) ); - } - - return shaSuccess; -} - -/* - * SHA1Input - * - * Description: - * This function accepts an array of octets as the next portion - * of the message. - * - * Parameters: - * context: [in/out] - * The SHA context to update - * message_array: [in] - * An array of characters representing the next portion of - * the message. - * length: [in] - * The length of the message in message_array - * - * Returns: - * sha Error Code. - * - */ -static -int SHA1Input( SHA1Context *context, - const uint8_t *message_array, - unsigned length) -{ - if (!length) - { - return shaSuccess; - } - - if (context->Computed) - { - context->Corrupted = shaStateError; - - return shaStateError; - } - - if (context->Corrupted) - { - return context->Corrupted; - } - while(length-- && !context->Corrupted) - { - context->Message_Block[context->Message_Block_Index++] = - (*message_array & 0xFF); - - context->Length_Low += 8; - if (context->Length_Low == 0) - { - context->Length_High++; - if (context->Length_High == 0) - { - /* Message is too long */ - context->Corrupted = 1; - } - } - - if (context->Message_Block_Index == 64) - { - SHA1ProcessMessageBlock(context); - } - - message_array++; - } - - return shaSuccess; -} - -/* - * SHA1ProcessMessageBlock - * - * Description: - * This function will process the next 512 bits of the message - * stored in the Message_Block array. - * - * Parameters: - * None. - * - * Returns: - * Nothing. - * - * Comments: - * Many of the variable names in this code, especially the - * single character names, were used because those were the - * names used in the publication. 
- * - * - */ -static void SHA1ProcessMessageBlock(SHA1Context *context) -{ - const uint32_t K[] = { /* Constants defined in SHA-1 */ - 0x5A827999, - 0x6ED9EBA1, - 0x8F1BBCDC, - 0xCA62C1D6 - }; - int t; /* Loop counter */ - uint32_t temp; /* Temporary word value */ - uint32_t W[80]; /* Word sequence */ - uint32_t A, B, C, D, E; /* Word buffers */ - - /* - * Initialize the first 16 words in the array W - */ - for(t = 0; t < 16; t++) - { - W[t] = context->Message_Block[t * 4] << 24; - W[t] |= context->Message_Block[t * 4 + 1] << 16; - W[t] |= context->Message_Block[t * 4 + 2] << 8; - W[t] |= context->Message_Block[t * 4 + 3]; - } - - for(t = 16; t < 80; t++) - { - W[t] = SHA1CircularShift(1,W[t-3] ^ W[t-8] ^ W[t-14] ^ W[t-16]); - } - - A = context->Intermediate_Hash[0]; - B = context->Intermediate_Hash[1]; - C = context->Intermediate_Hash[2]; - D = context->Intermediate_Hash[3]; - E = context->Intermediate_Hash[4]; - - for(t = 0; t < 20; t++) - { - temp = SHA1CircularShift(5,A) + - ((B & C) | ((~B) & D)) + E + W[t] + K[0]; - E = D; - D = C; - C = SHA1CircularShift(30,B); - - B = A; - A = temp; - } - - for(t = 20; t < 40; t++) - { - temp = SHA1CircularShift(5,A) + (B ^ C ^ D) + E + W[t] + K[1]; - E = D; - D = C; - C = SHA1CircularShift(30,B); - B = A; - A = temp; - } - - for(t = 40; t < 60; t++) - { - temp = SHA1CircularShift(5,A) + - ((B & C) | (B & D) | (C & D)) + E + W[t] + K[2]; - E = D; - D = C; - C = SHA1CircularShift(30,B); - B = A; - A = temp; - } - - for(t = 60; t < 80; t++) - { - temp = SHA1CircularShift(5,A) + (B ^ C ^ D) + E + W[t] + K[3]; - E = D; - D = C; - C = SHA1CircularShift(30,B); - B = A; - A = temp; - } - - context->Intermediate_Hash[0] += A; - context->Intermediate_Hash[1] += B; - context->Intermediate_Hash[2] += C; - context->Intermediate_Hash[3] += D; - context->Intermediate_Hash[4] += E; - - context->Message_Block_Index = 0; -} - -/* - * SHA1PadMessage - * - - * Description: - * According to the standard, the message must be padded to an even - * 512 bits. The first padding bit must be a '1'. The last 64 - * bits represent the length of the original message. All bits in - * between should be 0. This function will pad the message - * according to those rules by filling the Message_Block array - * accordingly. It will also call the ProcessMessageBlock function - * provided appropriately. When it returns, it can be assumed that - * the message digest has been computed. - * - * Parameters: - * context: [in/out] - * The context to pad - * ProcessMessageBlock: [in] - * The appropriate SHA*ProcessMessageBlock function - * Returns: - * Nothing. - * - */ -static void SHA1PadMessage(SHA1Context *context) -{ - /* - * Check to see if the current message block is too small to hold - * the initial padding bits and length. If so, we will pad the - * block, process it, and then continue padding into a second - * block. 
- */ - if (context->Message_Block_Index > 55) - { - context->Message_Block[context->Message_Block_Index++] = 0x80; - while(context->Message_Block_Index < 64) - { - context->Message_Block[context->Message_Block_Index++] = 0; - } - - SHA1ProcessMessageBlock(context); - - while(context->Message_Block_Index < 56) - { - context->Message_Block[context->Message_Block_Index++] = 0; - } - } - else - { - context->Message_Block[context->Message_Block_Index++] = 0x80; - while(context->Message_Block_Index < 56) - { - - context->Message_Block[context->Message_Block_Index++] = 0; - } - } - - /* - * Store the message length as the last 8 octets - */ - context->Message_Block[56] = context->Length_High >> 24; - context->Message_Block[57] = context->Length_High >> 16; - context->Message_Block[58] = context->Length_High >> 8; - context->Message_Block[59] = context->Length_High; - context->Message_Block[60] = context->Length_Low >> 24; - context->Message_Block[61] = context->Length_Low >> 16; - context->Message_Block[62] = context->Length_Low >> 8; - context->Message_Block[63] = context->Length_Low; - - SHA1ProcessMessageBlock(context); -} +** This implementation of SHA1. +*/ +#include "config.h" +#include +#include "sha1.h" + +#ifdef FOSSIL_ENABLE_SSL + +# include +# define SHA1Context SHA_CTX +# define SHA1Init SHA1_Init +# define SHA1Update SHA1_Update +# define SHA1Final(a,b) SHA1_Final(b,a) + +#else + +/* +** The SHA1 implementation below is adapted from: +** +** $NetBSD: sha1.c,v 1.6 2009/11/06 20:31:18 joerg Exp $ +** $OpenBSD: sha1.c,v 1.9 1997/07/23 21:12:32 kstailey Exp $ +** +** SHA-1 in C +** By Steve Reid +** 100% Public Domain +*/ +typedef struct SHA1Context SHA1Context; +struct SHA1Context { + unsigned int state[5]; + unsigned int count[2]; + unsigned char buffer[64]; +}; + +/* + * blk0() and blk() perform the initial expand. + * I got the idea of expanding during the round function from SSLeay + * + * blk0le() for little-endian and blk0be() for big-endian. + */ +#if __GNUC__ && (defined(__i386__) || defined(__x86_64__)) +/* + * GCC by itself only generates left rotates. Use right rotates if + * possible to be kinder to dinky implementations with iterative rotate + * instructions. + */ +#define SHA_ROT(op, x, k) \ + ({ unsigned int y; asm(op " %1,%0" : "=r" (y) : "I" (k), "0" (x)); y; }) +#define rol(x,k) SHA_ROT("roll", x, k) +#define ror(x,k) SHA_ROT("rorl", x, k) + +#else +/* Generic C equivalent */ +#define SHA_ROT(x,l,r) ((x) << (l) | (x) >> (r)) +#define rol(x,k) SHA_ROT(x,k,32-(k)) +#define ror(x,k) SHA_ROT(x,32-(k),k) +#endif + + +#define blk0le(i) (block[i] = (ror(block[i],8)&0xFF00FF00) \ + |(rol(block[i],8)&0x00FF00FF)) +#define blk0be(i) block[i] +#define blk(i) (block[i&15] = rol(block[(i+13)&15]^block[(i+8)&15] \ + ^block[(i+2)&15]^block[i&15],1)) + +/* + * (R0+R1), R2, R3, R4 are the different operations (rounds) used in SHA1 + * + * Rl0() for little-endian and Rb0() for big-endian. Endianness is + * determined at run-time. + */ +#define Rl0(v,w,x,y,z,i) \ + z+=((w&(x^y))^y)+blk0le(i)+0x5A827999+rol(v,5);w=ror(w,2); +#define Rb0(v,w,x,y,z,i) \ + z+=((w&(x^y))^y)+blk0be(i)+0x5A827999+rol(v,5);w=ror(w,2); +#define R1(v,w,x,y,z,i) \ + z+=((w&(x^y))^y)+blk(i)+0x5A827999+rol(v,5);w=ror(w,2); +#define R2(v,w,x,y,z,i) \ + z+=(w^x^y)+blk(i)+0x6ED9EBA1+rol(v,5);w=ror(w,2); +#define R3(v,w,x,y,z,i) \ + z+=(((w|x)&y)|(w&x))+blk(i)+0x8F1BBCDC+rol(v,5);w=ror(w,2); +#define R4(v,w,x,y,z,i) \ + z+=(w^x^y)+blk(i)+0xCA62C1D6+rol(v,5);w=ror(w,2); + +/* + * Hash a single 512-bit block. 
This is the core of the algorithm. + */ +#define a qq[0] +#define b qq[1] +#define c qq[2] +#define d qq[3] +#define e qq[4] + +void SHA1Transform(unsigned int state[5], const unsigned char buffer[64]) +{ + unsigned int qq[5]; /* a, b, c, d, e; */ + static int one = 1; + unsigned int block[16]; + memcpy(block, buffer, 64); + memcpy(qq,state,5*sizeof(unsigned int)); + + /* Copy context->state[] to working vars */ + /* + a = state[0]; + b = state[1]; + c = state[2]; + d = state[3]; + e = state[4]; + */ + + /* 4 rounds of 20 operations each. Loop unrolled. */ + if( 1 == *(unsigned char*)&one ){ + Rl0(a,b,c,d,e, 0); Rl0(e,a,b,c,d, 1); Rl0(d,e,a,b,c, 2); Rl0(c,d,e,a,b, 3); + Rl0(b,c,d,e,a, 4); Rl0(a,b,c,d,e, 5); Rl0(e,a,b,c,d, 6); Rl0(d,e,a,b,c, 7); + Rl0(c,d,e,a,b, 8); Rl0(b,c,d,e,a, 9); Rl0(a,b,c,d,e,10); Rl0(e,a,b,c,d,11); + Rl0(d,e,a,b,c,12); Rl0(c,d,e,a,b,13); Rl0(b,c,d,e,a,14); Rl0(a,b,c,d,e,15); + }else{ + Rb0(a,b,c,d,e, 0); Rb0(e,a,b,c,d, 1); Rb0(d,e,a,b,c, 2); Rb0(c,d,e,a,b, 3); + Rb0(b,c,d,e,a, 4); Rb0(a,b,c,d,e, 5); Rb0(e,a,b,c,d, 6); Rb0(d,e,a,b,c, 7); + Rb0(c,d,e,a,b, 8); Rb0(b,c,d,e,a, 9); Rb0(a,b,c,d,e,10); Rb0(e,a,b,c,d,11); + Rb0(d,e,a,b,c,12); Rb0(c,d,e,a,b,13); Rb0(b,c,d,e,a,14); Rb0(a,b,c,d,e,15); + } + R1(e,a,b,c,d,16); R1(d,e,a,b,c,17); R1(c,d,e,a,b,18); R1(b,c,d,e,a,19); + R2(a,b,c,d,e,20); R2(e,a,b,c,d,21); R2(d,e,a,b,c,22); R2(c,d,e,a,b,23); + R2(b,c,d,e,a,24); R2(a,b,c,d,e,25); R2(e,a,b,c,d,26); R2(d,e,a,b,c,27); + R2(c,d,e,a,b,28); R2(b,c,d,e,a,29); R2(a,b,c,d,e,30); R2(e,a,b,c,d,31); + R2(d,e,a,b,c,32); R2(c,d,e,a,b,33); R2(b,c,d,e,a,34); R2(a,b,c,d,e,35); + R2(e,a,b,c,d,36); R2(d,e,a,b,c,37); R2(c,d,e,a,b,38); R2(b,c,d,e,a,39); + R3(a,b,c,d,e,40); R3(e,a,b,c,d,41); R3(d,e,a,b,c,42); R3(c,d,e,a,b,43); + R3(b,c,d,e,a,44); R3(a,b,c,d,e,45); R3(e,a,b,c,d,46); R3(d,e,a,b,c,47); + R3(c,d,e,a,b,48); R3(b,c,d,e,a,49); R3(a,b,c,d,e,50); R3(e,a,b,c,d,51); + R3(d,e,a,b,c,52); R3(c,d,e,a,b,53); R3(b,c,d,e,a,54); R3(a,b,c,d,e,55); + R3(e,a,b,c,d,56); R3(d,e,a,b,c,57); R3(c,d,e,a,b,58); R3(b,c,d,e,a,59); + R4(a,b,c,d,e,60); R4(e,a,b,c,d,61); R4(d,e,a,b,c,62); R4(c,d,e,a,b,63); + R4(b,c,d,e,a,64); R4(a,b,c,d,e,65); R4(e,a,b,c,d,66); R4(d,e,a,b,c,67); + R4(c,d,e,a,b,68); R4(b,c,d,e,a,69); R4(a,b,c,d,e,70); R4(e,a,b,c,d,71); + R4(d,e,a,b,c,72); R4(c,d,e,a,b,73); R4(b,c,d,e,a,74); R4(a,b,c,d,e,75); + R4(e,a,b,c,d,76); R4(d,e,a,b,c,77); R4(c,d,e,a,b,78); R4(b,c,d,e,a,79); + + /* Add the working vars back into context.state[] */ + state[0] += a; + state[1] += b; + state[2] += c; + state[3] += d; + state[4] += e; +} + + +/* + * SHA1Init - Initialize new context + */ +static void SHA1Init(SHA1Context *context){ + /* SHA1 initialization constants */ + context->state[0] = 0x67452301; + context->state[1] = 0xEFCDAB89; + context->state[2] = 0x98BADCFE; + context->state[3] = 0x10325476; + context->state[4] = 0xC3D2E1F0; + context->count[0] = context->count[1] = 0; +} + + +/* + * Run your data through this. 
+ */ +static void SHA1Update( + SHA1Context *context, + const unsigned char *data, + unsigned int len +){ + unsigned int i, j; + + j = context->count[0]; + if ((context->count[0] += len << 3) < j) + context->count[1] += (len>>29)+1; + j = (j >> 3) & 63; + if ((j + len) > 63) { + (void)memcpy(&context->buffer[j], data, (i = 64-j)); + SHA1Transform(context->state, context->buffer); + for ( ; i + 63 < len; i += 64) + SHA1Transform(context->state, &data[i]); + j = 0; + } else { + i = 0; + } + (void)memcpy(&context->buffer[j], &data[i], len - i); +} + + +/* + * Add padding and return the message digest. + */ +static void SHA1Final(SHA1Context *context, unsigned char digest[20]){ + unsigned int i; + unsigned char finalcount[8]; + + for (i = 0; i < 8; i++) { + finalcount[i] = (unsigned char)((context->count[(i >= 4 ? 0 : 1)] + >> ((3-(i & 3)) * 8) ) & 255); /* Endian independent */ + } + SHA1Update(context, (const unsigned char *)"\200", 1); + while ((context->count[0] & 504) != 448) + SHA1Update(context, (const unsigned char *)"\0", 1); + SHA1Update(context, finalcount, 8); /* Should cause a SHA1Transform() */ + + if (digest) { + for (i = 0; i < 20; i++) + digest[i] = (unsigned char) + ((context->state[i>>2] >> ((3-(i & 3)) * 8) ) & 255); + } +} +#endif /* ** Convert a digest into base-16. digest should be declared as ** "unsigned char digest[20]" in the calling function. The SHA1 ** digest is stored in the first 20 bytes. zBuf should ** be "char zBuf[41]". */ static void DigestToBase16(unsigned char *digest, char *zBuf){ - static char const zEncode[] = "0123456789abcdef"; - int i, j; - - for(j=i=0; i<20; i++){ - int a = digest[i]; - zBuf[j++] = zEncode[(a>>4)&0xf]; - zBuf[j++] = zEncode[a & 0xf]; - } - zBuf[j] = 0; + static const char zEncode[] = "0123456789abcdef"; + int ix; + + for(ix=0; ix<20; ix++){ + *zBuf++ = zEncode[(*digest>>4)&0xf]; + *zBuf++ = zEncode[*digest++ & 0xf]; + } + *zBuf = '\0'; } /* ** The state of a incremental SHA1 checksum computation. Only one ** such computation can be underway at a time, of course. @@ -439,18 +239,18 @@ /* ** Add more text to the incremental SHA1 checksum. */ void sha1sum_step_text(const char *zText, int nBytes){ if( !incrInit ){ - SHA1Reset(&incrCtx); + SHA1Init(&incrCtx); incrInit = 1; } if( nBytes<=0 ){ if( nBytes==0 ) return; nBytes = strlen(zText); } - SHA1Input(&incrCtx, (unsigned char*)zText, nBytes); + SHA1Update(&incrCtx, (unsigned char*)zText, nBytes); } /* ** Add the content of a blob to the incremental SHA1 checksum. */ @@ -458,21 +258,21 @@ sha1sum_step_text(blob_buffer(p), blob_size(p)); } /* ** Finish the incremental SHA1 checksum. Store the result in blob pOut -** if pOut!=0. Also return a pointer to the result. +** if pOut!=0. Also return a pointer to the result. ** ** This resets the incremental checksum preparing for the next round ** of computation. The return pointer points to a static buffer that ** is overwritten by subsequent calls to this function. */ char *sha1sum_finish(Blob *pOut){ unsigned char zResult[20]; static char zOut[41]; sha1sum_step_text(0,0); - SHA1Result(&incrCtx, zResult); + SHA1Final(&incrCtx, zResult); incrInit = 0; DigestToBase16(zResult, zOut); if( pOut ){ blob_zero(pOut); blob_append(pOut, zOut, 40); @@ -481,35 +281,46 @@ } /* ** Compute the SHA1 checksum of a file on disk. Store the resulting -** checksum in the blob pCksum. pCksum is assumed to be ininitialized. +** checksum in the blob pCksum. pCksum is assumed to be initialized. ** ** Return the number of errors. 
*/ int sha1sum_file(const char *zFilename, Blob *pCksum){ FILE *in; SHA1Context ctx; unsigned char zResult[20]; char zBuf[10240]; - in = fopen(zFilename,"rb"); + if( file_wd_islink(zFilename) ){ + /* Instead of file content, return sha1 of link destination path */ + Blob destinationPath; + int rc; + + blob_read_link(&destinationPath, zFilename); + rc = sha1sum_blob(&destinationPath, pCksum); + blob_reset(&destinationPath); + return rc; + } + + in = fossil_fopen(zFilename,"rb"); if( in==0 ){ return 1; } - SHA1Reset(&ctx); + SHA1Init(&ctx); for(;;){ int n; n = fread(zBuf, 1, sizeof(zBuf), in); if( n<=0 ) break; - SHA1Input(&ctx, (unsigned char*)zBuf, (unsigned)n); + SHA1Update(&ctx, (unsigned char*)zBuf, (unsigned)n); } fclose(in); blob_zero(pCksum); blob_resize(pCksum, 40); - SHA1Result(&ctx, zResult); + SHA1Final(&ctx, zResult); DigestToBase16(zResult, blob_buffer(pCksum)); return 0; } /* @@ -521,19 +332,19 @@ */ int sha1sum_blob(const Blob *pIn, Blob *pCksum){ SHA1Context ctx; unsigned char zResult[20]; - SHA1Reset(&ctx); - SHA1Input(&ctx, (unsigned char*)blob_buffer(pIn), blob_size(pIn)); + SHA1Init(&ctx); + SHA1Update(&ctx, (unsigned char*)blob_buffer(pIn), blob_size(pIn)); if( pIn==pCksum ){ blob_reset(pCksum); }else{ blob_zero(pCksum); } blob_resize(pCksum, 40); - SHA1Result(&ctx, zResult); + SHA1Final(&ctx, zResult); DigestToBase16(zResult, blob_buffer(pCksum)); return 0; } /* @@ -543,20 +354,20 @@ char *sha1sum(const char *zIn){ SHA1Context ctx; unsigned char zResult[20]; char zDigest[41]; - SHA1Reset(&ctx); - SHA1Input(&ctx, (unsigned const char*)zIn, strlen(zIn)); - SHA1Result(&ctx, zResult); + SHA1Init(&ctx); + SHA1Update(&ctx, (unsigned const char*)zIn, strlen(zIn)); + SHA1Final(&ctx, zResult); DigestToBase16(zResult, zDigest); return mprintf("%s", zDigest); } /* ** Convert a cleartext password for a specific user into a SHA1 hash. -** +** ** The algorithm here is: ** ** SHA1( project-code + "/" + login + "/" + password ) ** ** In words: The users login name and password are appended to the @@ -564,59 +375,100 @@ ** ** The result of this function is the shared secret used by a client ** to authenticate to a server for the sync protocol. It is also the ** value stored in the USER.PW field of the database. By mixing in the ** login name and the project id with the hash, different shared secrets -** are obtained even if two users select the same password, or if a +** are obtained even if two users select the same password, or if a ** single user selects the same password for multiple projects. */ -char *sha1_shared_secret(const char *zPw, const char *zLogin){ +char *sha1_shared_secret( + const char *zPw, /* The password to encrypt */ + const char *zLogin, /* Username */ + const char *zProjCode /* Project-code. Use built-in project code if NULL */ +){ static char *zProjectId = 0; SHA1Context ctx; unsigned char zResult[20]; char zDigest[41]; - SHA1Reset(&ctx); - if( zProjectId==0 ){ - zProjectId = db_get("project-code", 0); - - /* On the first xfer request of a clone, the project-code is not yet - ** known. Use the cleartext password, since that is all we have. 
- */ - if( zProjectId==0 ){ - return mprintf("%s", zPw); - } - } - SHA1Input(&ctx, (unsigned char*)zProjectId, strlen(zProjectId)); - SHA1Input(&ctx, (unsigned char*)"/", 1); - SHA1Input(&ctx, (unsigned char*)zLogin, strlen(zLogin)); - SHA1Input(&ctx, (unsigned char*)"/", 1); - SHA1Input(&ctx, (unsigned const char*)zPw, strlen(zPw)); - SHA1Result(&ctx, zResult); + SHA1Init(&ctx); + if( zProjCode==0 ){ + if( zProjectId==0 ){ + zProjectId = db_get("project-code", 0); + + /* On the first xfer request of a clone, the project-code is not yet + ** known. Use the cleartext password, since that is all we have. + */ + if( zProjectId==0 ){ + return mprintf("%s", zPw); + } + } + zProjCode = zProjectId; + } + SHA1Update(&ctx, (unsigned char*)zProjCode, strlen(zProjCode)); + SHA1Update(&ctx, (unsigned char*)"/", 1); + SHA1Update(&ctx, (unsigned char*)zLogin, strlen(zLogin)); + SHA1Update(&ctx, (unsigned char*)"/", 1); + SHA1Update(&ctx, (unsigned const char*)zPw, strlen(zPw)); + SHA1Final(&ctx, zResult); DigestToBase16(zResult, zDigest); return mprintf("%s", zDigest); } /* -** COMMAND: sha1sum +** Implement the shared_secret() SQL function. shared_secret() takes two or +** three arguments; the third argument is optional. +** +** (1) The cleartext password +** (2) The login name +** (3) The project code +** +** Returns sha1($password/$login/$projcode). +*/ +void sha1_shared_secret_sql_function( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const char *zPw; + const char *zLogin; + const char *zProjid; + + assert( argc==2 || argc==3 ); + zPw = (const char*)sqlite3_value_text(argv[0]); + if( zPw==0 || zPw[0]==0 ) return; + zLogin = (const char*)sqlite3_value_text(argv[1]); + if( zLogin==0 ) return; + if( argc==3 ){ + zProjid = (const char*)sqlite3_value_text(argv[2]); + if( zProjid && zProjid[0]==0 ) zProjid = 0; + }else{ + zProjid = 0; + } + sqlite3_result_text(context, sha1_shared_secret(zPw, zLogin, zProjid), -1, + fossil_free); +} + +/* +** COMMAND: sha1sum* ** %fossil sha1sum FILE... ** ** Compute an SHA1 checksum of all files named on the command-line. ** If an file is named "-" then take its content from standard input. 
*/ void sha1sum_test(void){ int i; Blob in; Blob cksum; - + for(i=2; i +#include +#include +#include +#include "sqlite3.h" +#if SQLITE_USER_AUTHENTICATION +# include "sqlite3userauth.h" +#endif +#include +#include + +#if !defined(_WIN32) && !defined(WIN32) +# include +# if !defined(__RTP__) && !defined(_WRS_KERNEL) +# include +# endif +# include +# include +#endif + +#if HAVE_READLINE +# include +# include +#endif + +#if HAVE_EDITLINE +# include +#endif + +#if HAVE_EDITLINE || HAVE_READLINE + +# define shell_add_history(X) add_history(X) +# define shell_read_history(X) read_history(X) +# define shell_write_history(X) write_history(X) +# define shell_stifle_history(X) stifle_history(X) +# define shell_readline(X) readline(X) + +#elif HAVE_LINENOISE + +# include "linenoise.h" +# define shell_add_history(X) linenoiseHistoryAdd(X) +# define shell_read_history(X) linenoiseHistoryLoad(X) +# define shell_write_history(X) linenoiseHistorySave(X) +# define shell_stifle_history(X) linenoiseHistorySetMaxLen(X) +# define shell_readline(X) linenoise(X) + +#else + +# define shell_read_history(X) +# define shell_write_history(X) +# define shell_stifle_history(X) + +# define SHELL_USE_LOCAL_GETLINE 1 +#endif + + +#if defined(_WIN32) || defined(WIN32) +# include +# include +# define isatty(h) _isatty(h) +# ifndef access +# define access(f,m) _access((f),(m)) +# endif +# undef popen +# define popen _popen +# undef pclose +# define pclose _pclose +#else + /* Make sure isatty() has a prototype. */ + extern int isatty(int); + +# if !defined(__RTP__) && !defined(_WRS_KERNEL) + /* popen and pclose are not C89 functions and so are + ** sometimes omitted from the header */ + extern FILE *popen(const char*,const char*); + extern int pclose(FILE*); +# else +# define SQLITE_OMIT_POPEN 1 +# endif +#endif + +#if defined(_WIN32_WCE) +/* Windows CE (arm-wince-mingw32ce-gcc) does not provide isatty() + * thus we always assume that we have a console. That can be + * overridden with the -batch command line option. + */ +#define isatty(x) 1 +#endif + +/* ctype macros that work with signed characters */ +#define IsSpace(X) isspace((unsigned char)X) +#define IsDigit(X) isdigit((unsigned char)X) +#define ToLower(X) (char)tolower((unsigned char)X) + +/* On Windows, we normally run with output mode of TEXT so that \n characters +** are automatically translated into \r\n. However, this behavior needs +** to be disabled in some cases (ex: when generating CSV output and when +** rendering quoted strings that contain \n characters). The following +** routines take care of that. 
+*/ +#if defined(_WIN32) || defined(WIN32) +static void setBinaryMode(FILE *out){ + fflush(out); + _setmode(_fileno(out), _O_BINARY); +} +static void setTextMode(FILE *out){ + fflush(out); + _setmode(_fileno(out), _O_TEXT); +} +#else +# define setBinaryMode(X) +# define setTextMode(X) +#endif + + +/* True if the timer is enabled */ +static int enableTimer = 0; + +/* Return the current wall-clock time */ +static sqlite3_int64 timeOfDay(void){ + static sqlite3_vfs *clockVfs = 0; + sqlite3_int64 t; + if( clockVfs==0 ) clockVfs = sqlite3_vfs_find(0); + if( clockVfs->iVersion>=2 && clockVfs->xCurrentTimeInt64!=0 ){ + clockVfs->xCurrentTimeInt64(clockVfs, &t); + }else{ + double r; + clockVfs->xCurrentTime(clockVfs, &r); + t = (sqlite3_int64)(r*86400000.0); + } + return t; +} + +#if !defined(_WIN32) && !defined(WIN32) && !defined(__minux) +#include +#include + +/* VxWorks does not support getrusage() as far as we can determine */ +#if defined(_WRS_KERNEL) || defined(__RTP__) +struct rusage { + struct timeval ru_utime; /* user CPU time used */ + struct timeval ru_stime; /* system CPU time used */ +}; +#define getrusage(A,B) memset(B,0,sizeof(*B)) +#endif + +/* Saved resource information for the beginning of an operation */ +static struct rusage sBegin; /* CPU time at start */ +static sqlite3_int64 iBegin; /* Wall-clock time at start */ + +/* +** Begin timing an operation +*/ +static void beginTimer(void){ + if( enableTimer ){ + getrusage(RUSAGE_SELF, &sBegin); + iBegin = timeOfDay(); + } +} + +/* Return the difference of two time_structs in seconds */ +static double timeDiff(struct timeval *pStart, struct timeval *pEnd){ + return (pEnd->tv_usec - pStart->tv_usec)*0.000001 + + (double)(pEnd->tv_sec - pStart->tv_sec); +} + +/* +** Print the timing results. +*/ +static void endTimer(void){ + if( enableTimer ){ + sqlite3_int64 iEnd = timeOfDay(); + struct rusage sEnd; + getrusage(RUSAGE_SELF, &sEnd); + printf("Run Time: real %.3f user %f sys %f\n", + (iEnd - iBegin)*0.001, + timeDiff(&sBegin.ru_utime, &sEnd.ru_utime), + timeDiff(&sBegin.ru_stime, &sEnd.ru_stime)); + } +} + +#define BEGIN_TIMER beginTimer() +#define END_TIMER endTimer() +#define HAS_TIMER 1 + +#elif (defined(_WIN32) || defined(WIN32)) + +#include + +/* Saved resource information for the beginning of an operation */ +static HANDLE hProcess; +static FILETIME ftKernelBegin; +static FILETIME ftUserBegin; +static sqlite3_int64 ftWallBegin; +typedef BOOL (WINAPI *GETPROCTIMES)(HANDLE, LPFILETIME, LPFILETIME, + LPFILETIME, LPFILETIME); +static GETPROCTIMES getProcessTimesAddr = NULL; + +/* +** Check to see if we have timer support. Return 1 if necessary +** support found (or found previously). +*/ +static int hasTimer(void){ + if( getProcessTimesAddr ){ + return 1; + } else { + /* GetProcessTimes() isn't supported in WIN95 and some other Windows + ** versions. See if the version we are running on has it, and if it + ** does, save off a pointer to it and the current process handle. 
+ */ + hProcess = GetCurrentProcess(); + if( hProcess ){ + HINSTANCE hinstLib = LoadLibrary(TEXT("Kernel32.dll")); + if( NULL != hinstLib ){ + getProcessTimesAddr = + (GETPROCTIMES) GetProcAddress(hinstLib, "GetProcessTimes"); + if( NULL != getProcessTimesAddr ){ + return 1; + } + FreeLibrary(hinstLib); + } + } + } + return 0; +} + +/* +** Begin timing an operation +*/ +static void beginTimer(void){ + if( enableTimer && getProcessTimesAddr ){ + FILETIME ftCreation, ftExit; + getProcessTimesAddr(hProcess,&ftCreation,&ftExit, + &ftKernelBegin,&ftUserBegin); + ftWallBegin = timeOfDay(); + } +} + +/* Return the difference of two FILETIME structs in seconds */ +static double timeDiff(FILETIME *pStart, FILETIME *pEnd){ + sqlite_int64 i64Start = *((sqlite_int64 *) pStart); + sqlite_int64 i64End = *((sqlite_int64 *) pEnd); + return (double) ((i64End - i64Start) / 10000000.0); +} + +/* +** Print the timing results. +*/ +static void endTimer(void){ + if( enableTimer && getProcessTimesAddr){ + FILETIME ftCreation, ftExit, ftKernelEnd, ftUserEnd; + sqlite3_int64 ftWallEnd = timeOfDay(); + getProcessTimesAddr(hProcess,&ftCreation,&ftExit,&ftKernelEnd,&ftUserEnd); + printf("Run Time: real %.3f user %f sys %f\n", + (ftWallEnd - ftWallBegin)*0.001, + timeDiff(&ftUserBegin, &ftUserEnd), + timeDiff(&ftKernelBegin, &ftKernelEnd)); + } +} + +#define BEGIN_TIMER beginTimer() +#define END_TIMER endTimer() +#define HAS_TIMER hasTimer() + +#else +#define BEGIN_TIMER +#define END_TIMER +#define HAS_TIMER 0 +#endif + +/* +** Used to prevent warnings about unused parameters +*/ +#define UNUSED_PARAMETER(x) (void)(x) + +/* +** If the following flag is set, then command execution stops +** at an error if we are not interactive. +*/ +static int bail_on_error = 0; + +/* +** Threat stdin as an interactive input if the following variable +** is true. Otherwise, assume stdin is connected to a file or pipe. +*/ +static int stdin_is_interactive = 1; + +/* +** On Windows systems we have to know if standard output is a console +** in order to translate UTF-8 into MBCS. The following variable is +** true if translation is required. +*/ +static int stdout_is_console = 1; + +/* +** The following is the open SQLite database. We make a pointer +** to this database a static variable so that it can be accessed +** by the SIGINT handler to interrupt database processing. +*/ +static sqlite3 *globalDb = 0; + +/* +** True if an interrupt (Control-C) has been received. +*/ +static volatile int seenInterrupt = 0; + +/* +** This is the name of our program. It is set in main(), used +** in a number of other places, mostly for error messages. +*/ +static char *Argv0; + +/* +** Prompt strings. Initialized in main. Settable with +** .prompt main continue +*/ +static char mainPrompt[20]; /* First line prompt. default: "sqlite> "*/ +static char continuePrompt[20]; /* Continuation prompt. default: " ...> " */ + +/* +** Write I/O traces to the following stream. +*/ +#ifdef SQLITE_ENABLE_IOTRACE +static FILE *iotrace = 0; +#endif + +/* +** This routine works like printf in that its first argument is a +** format string and subsequent arguments are values to be substituted +** in place of % fields. The result of formatting this string +** is written to iotrace. 
+*/ +#ifdef SQLITE_ENABLE_IOTRACE +static void SQLITE_CDECL iotracePrintf(const char *zFormat, ...){ + va_list ap; + char *z; + if( iotrace==0 ) return; + va_start(ap, zFormat); + z = sqlite3_vmprintf(zFormat, ap); + va_end(ap); + fprintf(iotrace, "%s", z); + sqlite3_free(z); +} +#endif + + +/* +** Determines if a string is a number of not. +*/ +static int isNumber(const char *z, int *realnum){ + if( *z=='-' || *z=='+' ) z++; + if( !IsDigit(*z) ){ + return 0; + } + z++; + if( realnum ) *realnum = 0; + while( IsDigit(*z) ){ z++; } + if( *z=='.' ){ + z++; + if( !IsDigit(*z) ) return 0; + while( IsDigit(*z) ){ z++; } + if( realnum ) *realnum = 1; + } + if( *z=='e' || *z=='E' ){ + z++; + if( *z=='+' || *z=='-' ) z++; + if( !IsDigit(*z) ) return 0; + while( IsDigit(*z) ){ z++; } + if( realnum ) *realnum = 1; + } + return *z==0; +} + +/* +** A global char* and an SQL function to access its current value +** from within an SQL statement. This program used to use the +** sqlite_exec_printf() API to substitue a string into an SQL statement. +** The correct way to do this with sqlite3 is to use the bind API, but +** since the shell is built around the callback paradigm it would be a lot +** of work. Instead just use this hack, which is quite harmless. +*/ +static const char *zShellStatic = 0; +static void shellstaticFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + assert( 0==argc ); + assert( zShellStatic ); + UNUSED_PARAMETER(argc); + UNUSED_PARAMETER(argv); + sqlite3_result_text(context, zShellStatic, -1, SQLITE_STATIC); +} + + +/* +** Compute a string length that is limited to what can be stored in +** lower 30 bits of a 32-bit signed integer. +*/ +static int strlen30(const char *z){ + const char *z2 = z; + while( *z2 ){ z2++; } + return 0x3fffffff & (int)(z2 - z); +} + +/* +** This routine reads a line of text from FILE in, stores +** the text in memory obtained from malloc() and returns a pointer +** to the text. NULL is returned at end of file, or if malloc() +** fails. +** +** If zLine is not NULL then it is a malloced buffer returned from +** a previous call to this routine that may be reused. +*/ +static char *local_getline(char *zLine, FILE *in){ + int nLine = zLine==0 ? 0 : 100; + int n = 0; + + while( 1 ){ + if( n+100>nLine ){ + nLine = nLine*2 + 100; + zLine = realloc(zLine, nLine); + if( zLine==0 ) return 0; + } + if( fgets(&zLine[n], nLine - n, in)==0 ){ + if( n==0 ){ + free(zLine); + return 0; + } + zLine[n] = 0; + break; + } + while( zLine[n] ) n++; + if( n>0 && zLine[n-1]=='\n' ){ + n--; + if( n>0 && zLine[n-1]=='\r' ) n--; + zLine[n] = 0; + break; + } + } +#if defined(_WIN32) || defined(WIN32) + /* For interactive input on Windows systems, translate the + ** multi-byte characterset characters into UTF-8. */ + if( stdin_is_interactive ){ + extern char *sqlite3_win32_mbcs_to_utf8(const char*); + char *zTrans = sqlite3_win32_mbcs_to_utf8(zLine); + if( zTrans ){ + int nTrans = strlen30(zTrans)+1; + if( nTrans>nLine ){ + zLine = realloc(zLine, nTrans); + if( zLine==0 ){ + sqlite3_free(zTrans); + return 0; + } + } + memcpy(zLine, zTrans, nTrans); + sqlite3_free(zTrans); + } + } +#endif /* defined(_WIN32) || defined(WIN32) */ + return zLine; +} + +/* +** Retrieve a single line of input text. +** +** If in==0 then read from standard input and prompt before each line. +** If isContinuation is true, then a continuation prompt is appropriate. +** If isContinuation is zero, then the main prompt should be used. 
+** +** If zPrior is not NULL then it is a buffer from a prior call to this +** routine that can be reused. +** +** The result is stored in space obtained from malloc() and must either +** be freed by the caller or else passed back into this routine via the +** zPrior argument for reuse. +*/ +static char *one_input_line(FILE *in, char *zPrior, int isContinuation){ + char *zPrompt; + char *zResult; + if( in!=0 ){ + zResult = local_getline(zPrior, in); + }else{ + zPrompt = isContinuation ? continuePrompt : mainPrompt; +#if SHELL_USE_LOCAL_GETLINE + printf("%s", zPrompt); + fflush(stdout); + zResult = local_getline(zPrior, stdin); +#else + free(zPrior); + zResult = shell_readline(zPrompt); + if( zResult && *zResult ) shell_add_history(zResult); +#endif + } + return zResult; +} + +/* +** Render output like fprintf(). Except, if the output is going to the +** console and if this is running on a Windows machine, translate the +** output from UTF-8 into MBCS. +*/ +#if defined(_WIN32) || defined(WIN32) +void utf8_printf(FILE *out, const char *zFormat, ...){ + va_list ap; + va_start(ap, zFormat); + if( stdout_is_console && (out==stdout || out==stderr) ){ + extern char *sqlite3_win32_utf8_to_mbcs(const char*); + char *z1 = sqlite3_vmprintf(zFormat, ap); + char *z2 = sqlite3_win32_utf8_to_mbcs(z1); + sqlite3_free(z1); + fputs(z2, out); + sqlite3_free(z2); + }else{ + vfprintf(out, zFormat, ap); + } + va_end(ap); +} +#elif !defined(utf8_printf) +# define utf8_printf fprintf +#endif + +/* +** Render output like fprintf(). This should not be used on anything that +** includes string formatting (e.g. "%s"). +*/ +#if !defined(raw_printf) +# define raw_printf fprintf +#endif + +/* +** Shell output mode information from before ".explain on", +** saved so that it can be restored by ".explain off" +*/ +typedef struct SavedModeInfo SavedModeInfo; +struct SavedModeInfo { + int valid; /* Is there legit data in here? */ + int mode; /* Mode prior to ".explain on" */ + int showHeader; /* The ".header" setting prior to ".explain on" */ + int colWidth[100]; /* Column widths prior to ".explain on" */ +}; + +/* +** State information about the database connection is contained in an +** instance of the following structure. 
+*/ +typedef struct ShellState ShellState; +struct ShellState { + sqlite3 *db; /* The database */ + int echoOn; /* True to echo input commands */ + int autoExplain; /* Automatically turn on .explain mode */ + int autoEQP; /* Run EXPLAIN QUERY PLAN prior to seach SQL stmt */ + int statsOn; /* True to display memory stats before each finalize */ + int scanstatsOn; /* True to display scan stats before each finalize */ + int countChanges; /* True to display change counts */ + int backslashOn; /* Resolve C-style \x escapes in SQL input text */ + int outCount; /* Revert to stdout when reaching zero */ + int cnt; /* Number of records displayed so far */ + FILE *out; /* Write results here */ + FILE *traceOut; /* Output for sqlite3_trace() */ + int nErr; /* Number of errors seen */ + int mode; /* An output mode setting */ + int cMode; /* temporary output mode for the current query */ + int normalMode; /* Output mode before ".explain on" */ + int writableSchema; /* True if PRAGMA writable_schema=ON */ + int showHeader; /* True to show column names in List or Column mode */ + unsigned shellFlgs; /* Various flags */ + char *zDestTable; /* Name of destination table when MODE_Insert */ + char colSeparator[20]; /* Column separator character for several modes */ + char rowSeparator[20]; /* Row separator character for MODE_Ascii */ + int colWidth[100]; /* Requested width of each column when in column mode*/ + int actualWidth[100]; /* Actual width of each column */ + char nullValue[20]; /* The text to print when a NULL comes back from + ** the database */ + char outfile[FILENAME_MAX]; /* Filename for *out */ + const char *zDbFilename; /* name of the database file */ + char *zFreeOnClose; /* Filename to free when closing */ + const char *zVfs; /* Name of VFS to use */ + sqlite3_stmt *pStmt; /* Current statement if any. */ + FILE *pLog; /* Write log output here */ + int *aiIndent; /* Array of indents used in MODE_Explain */ + int nIndent; /* Size of array aiIndent[] */ + int iIndent; /* Index of current op in aiIndent[] */ +}; + +/* +** These are the allowed shellFlgs values +*/ +#define SHFLG_Scratch 0x00001 /* The --scratch option is used */ +#define SHFLG_Pagecache 0x00002 /* The --pagecache option is used */ +#define SHFLG_Lookaside 0x00004 /* Lookaside memory is used */ + +/* +** These are the allowed modes. +*/ +#define MODE_Line 0 /* One column per line. Blank line between records */ +#define MODE_Column 1 /* One record per line in neat columns */ +#define MODE_List 2 /* One record per line with a separator */ +#define MODE_Semi 3 /* Same as MODE_List but append ";" to each line */ +#define MODE_Html 4 /* Generate an XHTML table */ +#define MODE_Insert 5 /* Generate SQL "insert" statements */ +#define MODE_Tcl 6 /* Generate ANSI-C or TCL quoted elements */ +#define MODE_Csv 7 /* Quote strings, numbers are plain */ +#define MODE_Explain 8 /* Like MODE_Column, but do not truncate data */ +#define MODE_Ascii 9 /* Use ASCII unit and record separators (0x1F/0x1E) */ + +static const char *modeDescr[] = { + "line", + "column", + "list", + "semi", + "html", + "insert", + "tcl", + "csv", + "explain", + "ascii", +}; + +/* +** These are the column/row/line separators used by the various +** import/export modes. 
+*/ +#define SEP_Column "|" +#define SEP_Row "\n" +#define SEP_Tab "\t" +#define SEP_Space " " +#define SEP_Comma "," +#define SEP_CrLf "\r\n" +#define SEP_Unit "\x1F" +#define SEP_Record "\x1E" + +/* +** Number of elements in an array +*/ +#define ArraySize(X) (int)(sizeof(X)/sizeof(X[0])) + +/* +** A callback for the sqlite3_log() interface. +*/ +static void shellLog(void *pArg, int iErrCode, const char *zMsg){ + ShellState *p = (ShellState*)pArg; + if( p->pLog==0 ) return; + utf8_printf(p->pLog, "(%d) %s\n", iErrCode, zMsg); + fflush(p->pLog); +} + +/* +** Output the given string as a hex-encoded blob (eg. X'1234' ) +*/ +static void output_hex_blob(FILE *out, const void *pBlob, int nBlob){ + int i; + char *zBlob = (char *)pBlob; + raw_printf(out,"X'"); + for(i=0; i0 ){ + utf8_printf(out,"%.*s",i,z); + } + if( z[i]=='<' ){ + raw_printf(out,"<"); + }else if( z[i]=='&' ){ + raw_printf(out,"&"); + }else if( z[i]=='>' ){ + raw_printf(out,">"); + }else if( z[i]=='\"' ){ + raw_printf(out,"""); + }else if( z[i]=='\'' ){ + raw_printf(out,"'"); + }else{ + break; + } + z += i + 1; + } +} + +/* +** If a field contains any character identified by a 1 in the following +** array, then the string must be quoted for CSV. +*/ +static const char needCsvQuote[] = { + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +}; + +/* +** Output a single term of CSV. Actually, p->colSeparator is used for +** the separator, which may or may not be a comma. p->nullValue is +** the null value. Strings are quoted if necessary. The separator +** is only issued if bSep is true. +*/ +static void output_csv(ShellState *p, const char *z, int bSep){ + FILE *out = p->out; + if( z==0 ){ + utf8_printf(out,"%s",p->nullValue); + }else{ + int i; + int nSep = strlen30(p->colSeparator); + for(i=0; z[i]; i++){ + if( needCsvQuote[((unsigned char*)z)[i]] + || (z[i]==p->colSeparator[0] && + (nSep==1 || memcmp(z, p->colSeparator, nSep)==0)) ){ + i = 0; + break; + } + } + if( i==0 ){ + putc('"', out); + for(i=0; z[i]; i++){ + if( z[i]=='"' ) putc('"', out); + putc(z[i], out); + } + putc('"', out); + }else{ + utf8_printf(out, "%s", z); + } + } + if( bSep ){ + utf8_printf(p->out, "%s", p->colSeparator); + } +} + +#ifdef SIGINT +/* +** This routine runs when the user presses Ctrl-C +*/ +static void interrupt_handler(int NotUsed){ + UNUSED_PARAMETER(NotUsed); + seenInterrupt++; + if( seenInterrupt>2 ) exit(1); + if( globalDb ) sqlite3_interrupt(globalDb); +} +#endif + +/* +** This is the callback routine that the shell +** invokes for each row of a query result. 
+*/ +static int shell_callback( + void *pArg, + int nArg, /* Number of result columns */ + char **azArg, /* Text of each result column */ + char **azCol, /* Column names */ + int *aiType /* Column types */ +){ + int i; + ShellState *p = (ShellState*)pArg; + + switch( p->cMode ){ + case MODE_Line: { + int w = 5; + if( azArg==0 ) break; + for(i=0; iw ) w = len; + } + if( p->cnt++>0 ) utf8_printf(p->out, "%s", p->rowSeparator); + for(i=0; iout,"%*s = %s%s", w, azCol[i], + azArg[i] ? azArg[i] : p->nullValue, p->rowSeparator); + } + break; + } + case MODE_Explain: + case MODE_Column: { + static const int aExplainWidths[] = {4, 13, 4, 4, 4, 13, 2, 13}; + const int *colWidth; + int showHdr; + char *rowSep; + if( p->cMode==MODE_Column ){ + colWidth = p->colWidth; + showHdr = p->showHeader; + rowSep = p->rowSeparator; + }else{ + colWidth = aExplainWidths; + showHdr = 1; + rowSep = SEP_Row; + } + if( p->cnt++==0 ){ + for(i=0; icolWidth) ){ + w = colWidth[i]; + }else{ + w = 0; + } + if( w==0 ){ + w = strlen30(azCol[i] ? azCol[i] : ""); + if( w<10 ) w = 10; + n = strlen30(azArg && azArg[i] ? azArg[i] : p->nullValue); + if( wactualWidth) ){ + p->actualWidth[i] = w; + } + if( showHdr ){ + if( w<0 ){ + utf8_printf(p->out,"%*.*s%s",-w,-w,azCol[i], + i==nArg-1 ? rowSep : " "); + }else{ + utf8_printf(p->out,"%-*.*s%s",w,w,azCol[i], + i==nArg-1 ? rowSep : " "); + } + } + } + if( showHdr ){ + for(i=0; iactualWidth) ){ + w = p->actualWidth[i]; + if( w<0 ) w = -w; + }else{ + w = 10; + } + utf8_printf(p->out,"%-*.*s%s",w,w, + "----------------------------------------------------------" + "----------------------------------------------------------", + i==nArg-1 ? rowSep : " "); + } + } + } + if( azArg==0 ) break; + for(i=0; iactualWidth) ){ + w = p->actualWidth[i]; + }else{ + w = 10; + } + if( p->cMode==MODE_Explain && azArg[i] && strlen30(azArg[i])>w ){ + w = strlen30(azArg[i]); + } + if( i==1 && p->aiIndent && p->pStmt ){ + if( p->iIndentnIndent ){ + utf8_printf(p->out, "%*.s", p->aiIndent[p->iIndent], ""); + } + p->iIndent++; + } + if( w<0 ){ + utf8_printf(p->out,"%*.*s%s",-w,-w, + azArg[i] ? azArg[i] : p->nullValue, + i==nArg-1 ? rowSep : " "); + }else{ + utf8_printf(p->out,"%-*.*s%s",w,w, + azArg[i] ? azArg[i] : p->nullValue, + i==nArg-1 ? rowSep : " "); + } + } + break; + } + case MODE_Semi: + case MODE_List: { + if( p->cnt++==0 && p->showHeader ){ + for(i=0; iout,"%s%s",azCol[i], + i==nArg-1 ? p->rowSeparator : p->colSeparator); + } + } + if( azArg==0 ) break; + for(i=0; inullValue; + utf8_printf(p->out, "%s", z); + if( iout, "%s", p->colSeparator); + }else if( p->cMode==MODE_Semi ){ + utf8_printf(p->out, ";%s", p->rowSeparator); + }else{ + utf8_printf(p->out, "%s", p->rowSeparator); + } + } + break; + } + case MODE_Html: { + if( p->cnt++==0 && p->showHeader ){ + raw_printf(p->out,""); + for(i=0; iout,""); + output_html_string(p->out, azCol[i]); + raw_printf(p->out,"\n"); + } + raw_printf(p->out,"\n"); + } + if( azArg==0 ) break; + raw_printf(p->out,""); + for(i=0; iout,""); + output_html_string(p->out, azArg[i] ? azArg[i] : p->nullValue); + raw_printf(p->out,"\n"); + } + raw_printf(p->out,"\n"); + break; + } + case MODE_Tcl: { + if( p->cnt++==0 && p->showHeader ){ + for(i=0; iout,azCol[i] ? azCol[i] : ""); + if(iout, "%s", p->colSeparator); + } + utf8_printf(p->out, "%s", p->rowSeparator); + } + if( azArg==0 ) break; + for(i=0; iout, azArg[i] ? 
azArg[i] : p->nullValue); + if(iout, "%s", p->colSeparator); + } + utf8_printf(p->out, "%s", p->rowSeparator); + break; + } + case MODE_Csv: { + setBinaryMode(p->out); + if( p->cnt++==0 && p->showHeader ){ + for(i=0; iout, "%s", p->rowSeparator); + } + if( nArg>0 ){ + for(i=0; iout, "%s", p->rowSeparator); + } + setTextMode(p->out); + break; + } + case MODE_Insert: { + p->cnt++; + if( azArg==0 ) break; + utf8_printf(p->out,"INSERT INTO %s",p->zDestTable); + if( p->showHeader ){ + raw_printf(p->out,"("); + for(i=0; i0 ? ",": ""; + utf8_printf(p->out, "%s%s", zSep, azCol[i]); + } + raw_printf(p->out,")"); + } + raw_printf(p->out," VALUES("); + for(i=0; i0 ? ",": ""; + if( (azArg[i]==0) || (aiType && aiType[i]==SQLITE_NULL) ){ + utf8_printf(p->out,"%sNULL",zSep); + }else if( aiType && aiType[i]==SQLITE_TEXT ){ + if( zSep[0] ) utf8_printf(p->out,"%s",zSep); + output_quoted_string(p->out, azArg[i]); + }else if( aiType && (aiType[i]==SQLITE_INTEGER + || aiType[i]==SQLITE_FLOAT) ){ + utf8_printf(p->out,"%s%s",zSep, azArg[i]); + }else if( aiType && aiType[i]==SQLITE_BLOB && p->pStmt ){ + const void *pBlob = sqlite3_column_blob(p->pStmt, i); + int nBlob = sqlite3_column_bytes(p->pStmt, i); + if( zSep[0] ) utf8_printf(p->out,"%s",zSep); + output_hex_blob(p->out, pBlob, nBlob); + }else if( isNumber(azArg[i], 0) ){ + utf8_printf(p->out,"%s%s",zSep, azArg[i]); + }else{ + if( zSep[0] ) utf8_printf(p->out,"%s",zSep); + output_quoted_string(p->out, azArg[i]); + } + } + raw_printf(p->out,");\n"); + break; + } + case MODE_Ascii: { + if( p->cnt++==0 && p->showHeader ){ + for(i=0; i0 ) utf8_printf(p->out, "%s", p->colSeparator); + utf8_printf(p->out,"%s",azCol[i] ? azCol[i] : ""); + } + utf8_printf(p->out, "%s", p->rowSeparator); + } + if( azArg==0 ) break; + for(i=0; i0 ) utf8_printf(p->out, "%s", p->colSeparator); + utf8_printf(p->out,"%s",azArg[i] ? azArg[i] : p->nullValue); + } + utf8_printf(p->out, "%s", p->rowSeparator); + break; + } + } + return 0; +} + +/* +** This is the callback routine that the SQLite library +** invokes for each row of a query result. +*/ +static int callback(void *pArg, int nArg, char **azArg, char **azCol){ + /* since we don't have type info, call the shell_callback with a NULL value */ + return shell_callback(pArg, nArg, azArg, azCol, NULL); +} + +/* +** Set the destination table field of the ShellState structure to +** the name of the table given. Escape any quote characters in the +** table name. +*/ +static void set_table_name(ShellState *p, const char *zName){ + int i, n; + int needQuote; + char *z; + + if( p->zDestTable ){ + free(p->zDestTable); + p->zDestTable = 0; + } + if( zName==0 ) return; + needQuote = !isalpha((unsigned char)*zName) && *zName!='_'; + for(i=n=0; zName[i]; i++, n++){ + if( !isalnum((unsigned char)zName[i]) && zName[i]!='_' ){ + needQuote = 1; + if( zName[i]=='\'' ) n++; + } + } + if( needQuote ) n += 2; + z = p->zDestTable = malloc( n+1 ); + if( z==0 ){ + raw_printf(stderr,"Error: out of memory\n"); + exit(1); + } + n = 0; + if( needQuote ) z[n++] = '\''; + for(i=0; zName[i]; i++){ + z[n++] = zName[i]; + if( zName[i]=='\'' ) z[n++] = '\''; + } + if( needQuote ) z[n++] = '\''; + z[n] = 0; +} + +/* zIn is either a pointer to a NULL-terminated string in memory obtained +** from malloc(), or a NULL pointer. The string pointed to by zAppend is +** added to zIn, and the result returned in memory obtained from malloc(). +** zIn, if it was not NULL, is freed. 
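+** The caller is responsible for eventually releasing the returned buffer
+** with free().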
+** +** If the third argument, quote, is not '\0', then it is used as a +** quote character for zAppend. +*/ +static char *appendText(char *zIn, char const *zAppend, char quote){ + int len; + int i; + int nAppend = strlen30(zAppend); + int nIn = (zIn?strlen30(zIn):0); + + len = nAppend+nIn+1; + if( quote ){ + len += 2; + for(i=0; idb, zSelect, -1, &pSelect, 0); + if( rc!=SQLITE_OK || !pSelect ){ + utf8_printf(p->out, "/**** ERROR: (%d) %s *****/\n", rc, + sqlite3_errmsg(p->db)); + if( (rc&0xff)!=SQLITE_CORRUPT ) p->nErr++; + return rc; + } + rc = sqlite3_step(pSelect); + nResult = sqlite3_column_count(pSelect); + while( rc==SQLITE_ROW ){ + if( zFirstRow ){ + utf8_printf(p->out, "%s", zFirstRow); + zFirstRow = 0; + } + z = (const char*)sqlite3_column_text(pSelect, 0); + utf8_printf(p->out, "%s", z); + for(i=1; iout, ",%s", sqlite3_column_text(pSelect, i)); + } + if( z==0 ) z = ""; + while( z[0] && (z[0]!='-' || z[1]!='-') ) z++; + if( z[0] ){ + raw_printf(p->out, "\n;\n"); + }else{ + raw_printf(p->out, ";\n"); + } + rc = sqlite3_step(pSelect); + } + rc = sqlite3_finalize(pSelect); + if( rc!=SQLITE_OK ){ + utf8_printf(p->out, "/**** ERROR: (%d) %s *****/\n", rc, + sqlite3_errmsg(p->db)); + if( (rc&0xff)!=SQLITE_CORRUPT ) p->nErr++; + } + return rc; +} + +/* +** Allocate space and save off current error string. +*/ +static char *save_err_msg( + sqlite3 *db /* Database to query */ +){ + int nErrMsg = 1+strlen30(sqlite3_errmsg(db)); + char *zErrMsg = sqlite3_malloc64(nErrMsg); + if( zErrMsg ){ + memcpy(zErrMsg, sqlite3_errmsg(db), nErrMsg); + } + return zErrMsg; +} + +/* +** Display memory stats. +*/ +static int display_stats( + sqlite3 *db, /* Database to query */ + ShellState *pArg, /* Pointer to ShellState */ + int bReset /* True to reset the stats */ +){ + int iCur; + int iHiwtr; + + if( pArg && pArg->out ){ + + iHiwtr = iCur = -1; + sqlite3_status(SQLITE_STATUS_MEMORY_USED, &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, + "Memory Used: %d (max %d) bytes\n", + iCur, iHiwtr); + iHiwtr = iCur = -1; + sqlite3_status(SQLITE_STATUS_MALLOC_COUNT, &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, "Number of Outstanding Allocations: %d (max %d)\n", + iCur, iHiwtr); + if( pArg->shellFlgs & SHFLG_Pagecache ){ + iHiwtr = iCur = -1; + sqlite3_status(SQLITE_STATUS_PAGECACHE_USED, &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, + "Number of Pcache Pages Used: %d (max %d) pages\n", + iCur, iHiwtr); + } + iHiwtr = iCur = -1; + sqlite3_status(SQLITE_STATUS_PAGECACHE_OVERFLOW, &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, + "Number of Pcache Overflow Bytes: %d (max %d) bytes\n", + iCur, iHiwtr); + if( pArg->shellFlgs & SHFLG_Scratch ){ + iHiwtr = iCur = -1; + sqlite3_status(SQLITE_STATUS_SCRATCH_USED, &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, + "Number of Scratch Allocations Used: %d (max %d)\n", + iCur, iHiwtr); + } + iHiwtr = iCur = -1; + sqlite3_status(SQLITE_STATUS_SCRATCH_OVERFLOW, &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, + "Number of Scratch Overflow Bytes: %d (max %d) bytes\n", + iCur, iHiwtr); + iHiwtr = iCur = -1; + sqlite3_status(SQLITE_STATUS_MALLOC_SIZE, &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, "Largest Allocation: %d bytes\n", + iHiwtr); + iHiwtr = iCur = -1; + sqlite3_status(SQLITE_STATUS_PAGECACHE_SIZE, &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, "Largest Pcache Allocation: %d bytes\n", + iHiwtr); + iHiwtr = iCur = -1; + sqlite3_status(SQLITE_STATUS_SCRATCH_SIZE, &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, "Largest Scratch Allocation: %d bytes\n", + 
iHiwtr); +#ifdef YYTRACKMAXSTACKDEPTH + iHiwtr = iCur = -1; + sqlite3_status(SQLITE_STATUS_PARSER_STACK, &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, "Deepest Parser Stack: %d (max %d)\n", + iCur, iHiwtr); +#endif + } + + if( pArg && pArg->out && db ){ + if( pArg->shellFlgs & SHFLG_Lookaside ){ + iHiwtr = iCur = -1; + sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_USED, + &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, + "Lookaside Slots Used: %d (max %d)\n", + iCur, iHiwtr); + sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_HIT, + &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, "Successful lookaside attempts: %d\n", + iHiwtr); + sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE, + &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, "Lookaside failures due to size: %d\n", + iHiwtr); + sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL, + &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, "Lookaside failures due to OOM: %d\n", + iHiwtr); + } + iHiwtr = iCur = -1; + sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_USED, &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, "Pager Heap Usage: %d bytes\n", + iCur); + iHiwtr = iCur = -1; + sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_HIT, &iCur, &iHiwtr, 1); + raw_printf(pArg->out, "Page cache hits: %d\n", iCur); + iHiwtr = iCur = -1; + sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_MISS, &iCur, &iHiwtr, 1); + raw_printf(pArg->out, "Page cache misses: %d\n", iCur); + iHiwtr = iCur = -1; + sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_WRITE, &iCur, &iHiwtr, 1); + raw_printf(pArg->out, "Page cache writes: %d\n", iCur); + iHiwtr = iCur = -1; + sqlite3_db_status(db, SQLITE_DBSTATUS_SCHEMA_USED, &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, "Schema Heap Usage: %d bytes\n", + iCur); + iHiwtr = iCur = -1; + sqlite3_db_status(db, SQLITE_DBSTATUS_STMT_USED, &iCur, &iHiwtr, bReset); + raw_printf(pArg->out, "Statement Heap/Lookaside Usage: %d bytes\n", + iCur); + } + + if( pArg && pArg->out && db && pArg->pStmt ){ + iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_FULLSCAN_STEP, + bReset); + raw_printf(pArg->out, "Fullscan Steps: %d\n", iCur); + iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_SORT, bReset); + raw_printf(pArg->out, "Sort Operations: %d\n", iCur); + iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_AUTOINDEX,bReset); + raw_printf(pArg->out, "Autoindex Inserts: %d\n", iCur); + iCur = sqlite3_stmt_status(pArg->pStmt, SQLITE_STMTSTATUS_VM_STEP, bReset); + raw_printf(pArg->out, "Virtual Machine Steps: %d\n", iCur); + } + + /* Do not remove this machine readable comment: extra-stats-output-here */ + + return 0; +} + +/* +** Display scan stats. 
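+** This is a no-op unless SQLite was built with SQLITE_ENABLE_STMT_SCANSTATUS.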
+*/ +static void display_scanstats( + sqlite3 *db, /* Database to query */ + ShellState *pArg /* Pointer to ShellState */ +){ +#ifndef SQLITE_ENABLE_STMT_SCANSTATUS + UNUSED_PARAMETER(db); + UNUSED_PARAMETER(pArg); +#else + int i, k, n, mx; + raw_printf(pArg->out, "-------- scanstats --------\n"); + mx = 0; + for(k=0; k<=mx; k++){ + double rEstLoop = 1.0; + for(i=n=0; 1; i++){ + sqlite3_stmt *p = pArg->pStmt; + sqlite3_int64 nLoop, nVisit; + double rEst; + int iSid; + const char *zExplain; + if( sqlite3_stmt_scanstatus(p, i, SQLITE_SCANSTAT_NLOOP, (void*)&nLoop) ){ + break; + } + sqlite3_stmt_scanstatus(p, i, SQLITE_SCANSTAT_SELECTID, (void*)&iSid); + if( iSid>mx ) mx = iSid; + if( iSid!=k ) continue; + if( n==0 ){ + rEstLoop = (double)nLoop; + if( k>0 ) raw_printf(pArg->out, "-------- subquery %d -------\n", k); + } + n++; + sqlite3_stmt_scanstatus(p, i, SQLITE_SCANSTAT_NVISIT, (void*)&nVisit); + sqlite3_stmt_scanstatus(p, i, SQLITE_SCANSTAT_EST, (void*)&rEst); + sqlite3_stmt_scanstatus(p, i, SQLITE_SCANSTAT_EXPLAIN, (void*)&zExplain); + utf8_printf(pArg->out, "Loop %2d: %s\n", n, zExplain); + rEstLoop *= rEst; + raw_printf(pArg->out, + " nLoop=%-8lld nRow=%-8lld estRow=%-8lld estRow/Loop=%-8g\n", + nLoop, nVisit, (sqlite3_int64)(rEstLoop+0.5), rEst + ); + } + } + raw_printf(pArg->out, "---------------------------\n"); +#endif +} + +/* +** Parameter azArray points to a zero-terminated array of strings. zStr +** points to a single nul-terminated string. Return non-zero if zStr +** is equal, according to strcmp(), to any of the strings in the array. +** Otherwise, return zero. +*/ +static int str_in_array(const char *zStr, const char **azArray){ + int i; + for(i=0; azArray[i]; i++){ + if( 0==strcmp(zStr, azArray[i]) ) return 1; + } + return 0; +} + +/* +** If compiled statement pSql appears to be an EXPLAIN statement, allocate +** and populate the ShellState.aiIndent[] array with the number of +** spaces each opcode should be indented before it is output. +** +** The indenting rules are: +** +** * For each "Next", "Prev", "VNext" or "VPrev" instruction, indent +** all opcodes that occur between the p2 jump destination and the opcode +** itself by 2 spaces. +** +** * For each "Goto", if the jump destination is earlier in the program +** and ends on one of: +** Yield SeekGt SeekLt RowSetRead Rewind +** or if the P1 parameter is one instead of zero, +** then indent all opcodes between the earlier instruction +** and "Goto" by 2 spaces. +*/ +static void explain_data_prepare(ShellState *p, sqlite3_stmt *pSql){ + const char *zSql; /* The text of the SQL statement */ + const char *z; /* Used to check if this is an EXPLAIN */ + int *abYield = 0; /* True if op is an OP_Yield */ + int nAlloc = 0; /* Allocated size of p->aiIndent[], abYield */ + int iOp; /* Index of operation in p->aiIndent[] */ + + const char *azNext[] = { "Next", "Prev", "VPrev", "VNext", "SorterNext", + "NextIfOpen", "PrevIfOpen", 0 }; + const char *azYield[] = { "Yield", "SeekLT", "SeekGT", "RowSetRead", + "Rewind", 0 }; + const char *azGoto[] = { "Goto", 0 }; + + /* Try to figure out if this is really an EXPLAIN statement. If this + ** cannot be verified, return early. 
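+  ** An EXPLAIN statement always returns exactly eight result columns, so any
+  ** other column count means this is ordinary query output.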
*/ + if( sqlite3_column_count(pSql)!=8 ){ + p->cMode = p->mode; + return; + } + zSql = sqlite3_sql(pSql); + if( zSql==0 ) return; + for(z=zSql; *z==' ' || *z=='\t' || *z=='\n' || *z=='\f' || *z=='\r'; z++); + if( sqlite3_strnicmp(z, "explain", 7) ){ + p->cMode = p->mode; + return; + } + + for(iOp=0; SQLITE_ROW==sqlite3_step(pSql); iOp++){ + int i; + int iAddr = sqlite3_column_int(pSql, 0); + const char *zOp = (const char*)sqlite3_column_text(pSql, 1); + + /* Set p2 to the P2 field of the current opcode. Then, assuming that + ** p2 is an instruction address, set variable p2op to the index of that + ** instruction in the aiIndent[] array. p2 and p2op may be different if + ** the current instruction is part of a sub-program generated by an + ** SQL trigger or foreign key. */ + int p2 = sqlite3_column_int(pSql, 3); + int p2op = (p2 + (iOp-iAddr)); + + /* Grow the p->aiIndent array as required */ + if( iOp>=nAlloc ){ + if( iOp==0 ){ + /* Do further verfication that this is explain output. Abort if + ** it is not */ + static const char *explainCols[] = { + "addr", "opcode", "p1", "p2", "p3", "p4", "p5", "comment" }; + int jj; + for(jj=0; jjcMode = p->mode; + sqlite3_reset(pSql); + return; + } + } + } + nAlloc += 100; + p->aiIndent = (int*)sqlite3_realloc64(p->aiIndent, nAlloc*sizeof(int)); + abYield = (int*)sqlite3_realloc64(abYield, nAlloc*sizeof(int)); + } + abYield[iOp] = str_in_array(zOp, azYield); + p->aiIndent[iOp] = 0; + p->nIndent = iOp+1; + + if( str_in_array(zOp, azNext) ){ + for(i=p2op; iaiIndent[i] += 2; + } + if( str_in_array(zOp, azGoto) && p2opnIndent + && (abYield[p2op] || sqlite3_column_int(pSql, 2)) + ){ + for(i=p2op+1; iaiIndent[i] += 2; + } + } + + p->iIndent = 0; + sqlite3_free(abYield); + sqlite3_reset(pSql); +} + +/* +** Free the array allocated by explain_data_prepare(). +*/ +static void explain_data_delete(ShellState *p){ + sqlite3_free(p->aiIndent); + p->aiIndent = 0; + p->nIndent = 0; + p->iIndent = 0; +} + +/* +** Execute a statement or set of statements. Print +** any result rows/columns depending on the current mode +** set via the supplied callback. +** +** This is very similar to SQLite's built-in sqlite3_exec() +** function except it takes a slightly different callback +** and callback data argument. +*/ +static int shell_exec( + sqlite3 *db, /* An open database */ + const char *zSql, /* SQL to be evaluated */ + int (*xCallback)(void*,int,char**,char**,int*), /* Callback function */ + /* (not the same as sqlite3_exec) */ + ShellState *pArg, /* Pointer to ShellState */ + char **pzErrMsg /* Error msg written here */ +){ + sqlite3_stmt *pStmt = NULL; /* Statement to execute. */ + int rc = SQLITE_OK; /* Return Code */ + int rc2; + const char *zLeftover; /* Tail of unprocessed SQL */ + + if( pzErrMsg ){ + *pzErrMsg = NULL; + } + + while( zSql[0] && (SQLITE_OK == rc) ){ + rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, &zLeftover); + if( SQLITE_OK != rc ){ + if( pzErrMsg ){ + *pzErrMsg = save_err_msg(db); + } + }else{ + if( !pStmt ){ + /* this happens for a comment or white-space */ + zSql = zLeftover; + while( IsSpace(zSql[0]) ) zSql++; + continue; + } + + /* save off the prepared statment handle and reset row count */ + if( pArg ){ + pArg->pStmt = pStmt; + pArg->cnt = 0; + } + + /* echo the sql statement if echo on */ + if( pArg && pArg->echoOn ){ + const char *zStmtSql = sqlite3_sql(pStmt); + utf8_printf(pArg->out, "%s\n", zStmtSql ? 
zStmtSql : zSql); + } + + /* Show the EXPLAIN QUERY PLAN if .eqp is on */ + if( pArg && pArg->autoEQP ){ + sqlite3_stmt *pExplain; + char *zEQP = sqlite3_mprintf("EXPLAIN QUERY PLAN %s", + sqlite3_sql(pStmt)); + rc = sqlite3_prepare_v2(db, zEQP, -1, &pExplain, 0); + if( rc==SQLITE_OK ){ + while( sqlite3_step(pExplain)==SQLITE_ROW ){ + raw_printf(pArg->out,"--EQP-- %d,",sqlite3_column_int(pExplain, 0)); + raw_printf(pArg->out,"%d,", sqlite3_column_int(pExplain, 1)); + raw_printf(pArg->out,"%d,", sqlite3_column_int(pExplain, 2)); + utf8_printf(pArg->out,"%s\n", sqlite3_column_text(pExplain, 3)); + } + } + sqlite3_finalize(pExplain); + sqlite3_free(zEQP); + } + + if( pArg ){ + pArg->cMode = pArg->mode; + if( pArg->autoExplain + && sqlite3_column_count(pStmt)==8 + && sqlite3_strlike("%EXPLAIN%", sqlite3_sql(pStmt),0)==0 + ){ + pArg->cMode = MODE_Explain; + } + + /* If the shell is currently in ".explain" mode, gather the extra + ** data required to add indents to the output.*/ + if( pArg->cMode==MODE_Explain ){ + explain_data_prepare(pArg, pStmt); + } + } + + /* perform the first step. this will tell us if we + ** have a result set or not and how wide it is. + */ + rc = sqlite3_step(pStmt); + /* if we have a result set... */ + if( SQLITE_ROW == rc ){ + /* if we have a callback... */ + if( xCallback ){ + /* allocate space for col name ptr, value ptr, and type */ + int nCol = sqlite3_column_count(pStmt); + void *pData = sqlite3_malloc64(3*nCol*sizeof(const char*) + 1); + if( !pData ){ + rc = SQLITE_NOMEM; + }else{ + char **azCols = (char **)pData; /* Names of result columns */ + char **azVals = &azCols[nCol]; /* Results */ + int *aiTypes = (int *)&azVals[nCol]; /* Result types */ + int i, x; + assert(sizeof(int) <= sizeof(char *)); + /* save off ptrs to column names */ + for(i=0; icMode==MODE_Insert ){ + azVals[i] = ""; + }else{ + azVals[i] = (char*)sqlite3_column_text(pStmt, i); + } + if( !azVals[i] && (aiTypes[i]!=SQLITE_NULL) ){ + rc = SQLITE_NOMEM; + break; /* from for */ + } + } /* end for */ + + /* if data and types extracted successfully... */ + if( SQLITE_ROW == rc ){ + /* call the supplied callback with the result row data */ + if( xCallback(pArg, nCol, azVals, azCols, aiTypes) ){ + rc = SQLITE_ABORT; + }else{ + rc = sqlite3_step(pStmt); + } + } + } while( SQLITE_ROW == rc ); + sqlite3_free(pData); + } + }else{ + do{ + rc = sqlite3_step(pStmt); + } while( rc == SQLITE_ROW ); + } + } + + explain_data_delete(pArg); + + /* print usage stats if stats on */ + if( pArg && pArg->statsOn ){ + display_stats(db, pArg, 0); + } + + /* print loop-counters if required */ + if( pArg && pArg->scanstatsOn ){ + display_scanstats(db, pArg); + } + + /* Finalize the statement just executed. If this fails, save a + ** copy of the error message. Otherwise, set zSql to point to the + ** next statement to execute. */ + rc2 = sqlite3_finalize(pStmt); + if( rc!=SQLITE_NOMEM ) rc = rc2; + if( rc==SQLITE_OK ){ + zSql = zLeftover; + while( IsSpace(zSql[0]) ) zSql++; + }else if( pzErrMsg ){ + *pzErrMsg = save_err_msg(db); + } + + /* clear saved stmt handle */ + if( pArg ){ + pArg->pStmt = NULL; + } + } + } /* end while */ + + return rc; +} + + +/* +** This is a different callback routine used for dumping the database. +** Each row received by this callback consists of a table name, +** the table type ("index" or "table") and SQL to create the table. +** This routine should print text sufficient to recreate the table. 
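+** Rows for sqlite_sequence, the sqlite_stat tables, and virtual tables are
+** given special treatment below; all other sqlite_* entries are skipped.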
+*/ +static int dump_callback(void *pArg, int nArg, char **azArg, char **azCol){ + int rc; + const char *zTable; + const char *zType; + const char *zSql; + const char *zPrepStmt = 0; + ShellState *p = (ShellState *)pArg; + + UNUSED_PARAMETER(azCol); + if( nArg!=3 ) return 1; + zTable = azArg[0]; + zType = azArg[1]; + zSql = azArg[2]; + + if( strcmp(zTable, "sqlite_sequence")==0 ){ + zPrepStmt = "DELETE FROM sqlite_sequence;\n"; + }else if( sqlite3_strglob("sqlite_stat?", zTable)==0 ){ + raw_printf(p->out, "ANALYZE sqlite_master;\n"); + }else if( strncmp(zTable, "sqlite_", 7)==0 ){ + return 0; + }else if( strncmp(zSql, "CREATE VIRTUAL TABLE", 20)==0 ){ + char *zIns; + if( !p->writableSchema ){ + raw_printf(p->out, "PRAGMA writable_schema=ON;\n"); + p->writableSchema = 1; + } + zIns = sqlite3_mprintf( + "INSERT INTO sqlite_master(type,name,tbl_name,rootpage,sql)" + "VALUES('table','%q','%q',0,'%q');", + zTable, zTable, zSql); + utf8_printf(p->out, "%s\n", zIns); + sqlite3_free(zIns); + return 0; + }else{ + utf8_printf(p->out, "%s;\n", zSql); + } + + if( strcmp(zType, "table")==0 ){ + sqlite3_stmt *pTableInfo = 0; + char *zSelect = 0; + char *zTableInfo = 0; + char *zTmp = 0; + int nRow = 0; + + zTableInfo = appendText(zTableInfo, "PRAGMA table_info(", 0); + zTableInfo = appendText(zTableInfo, zTable, '"'); + zTableInfo = appendText(zTableInfo, ");", 0); + + rc = sqlite3_prepare_v2(p->db, zTableInfo, -1, &pTableInfo, 0); + free(zTableInfo); + if( rc!=SQLITE_OK || !pTableInfo ){ + return 1; + } + + zSelect = appendText(zSelect, "SELECT 'INSERT INTO ' || ", 0); + /* Always quote the table name, even if it appears to be pure ascii, + ** in case it is a keyword. Ex: INSERT INTO "table" ... */ + zTmp = appendText(zTmp, zTable, '"'); + if( zTmp ){ + zSelect = appendText(zSelect, zTmp, '\''); + free(zTmp); + } + zSelect = appendText(zSelect, " || ' VALUES(' || ", 0); + rc = sqlite3_step(pTableInfo); + while( rc==SQLITE_ROW ){ + const char *zText = (const char *)sqlite3_column_text(pTableInfo, 1); + zSelect = appendText(zSelect, "quote(", 0); + zSelect = appendText(zSelect, zText, '"'); + rc = sqlite3_step(pTableInfo); + if( rc==SQLITE_ROW ){ + zSelect = appendText(zSelect, "), ", 0); + }else{ + zSelect = appendText(zSelect, ") ", 0); + } + nRow++; + } + rc = sqlite3_finalize(pTableInfo); + if( rc!=SQLITE_OK || nRow==0 ){ + free(zSelect); + return 1; + } + zSelect = appendText(zSelect, "|| ')' FROM ", 0); + zSelect = appendText(zSelect, zTable, '"'); + + rc = run_table_dump_query(p, zSelect, zPrepStmt); + if( rc==SQLITE_CORRUPT ){ + zSelect = appendText(zSelect, " ORDER BY rowid DESC", 0); + run_table_dump_query(p, zSelect, 0); + } + free(zSelect); + } + return 0; +} + +/* +** Run zQuery. Use dump_callback() as the callback routine so that +** the contents of the query are output as SQL statements. +** +** If we get a SQLITE_CORRUPT error, rerun the query after appending +** "ORDER BY rowid DESC" to the end. 
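+** Scanning the table backwards by rowid may recover additional rows that the
+** forward scan could not reach in a corrupt database.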
+*/ +static int run_schema_dump_query( + ShellState *p, + const char *zQuery +){ + int rc; + char *zErr = 0; + rc = sqlite3_exec(p->db, zQuery, dump_callback, p, &zErr); + if( rc==SQLITE_CORRUPT ){ + char *zQ2; + int len = strlen30(zQuery); + raw_printf(p->out, "/****** CORRUPTION ERROR *******/\n"); + if( zErr ){ + utf8_printf(p->out, "/****** %s ******/\n", zErr); + sqlite3_free(zErr); + zErr = 0; + } + zQ2 = malloc( len+100 ); + if( zQ2==0 ) return rc; + sqlite3_snprintf(len+100, zQ2, "%s ORDER BY rowid DESC", zQuery); + rc = sqlite3_exec(p->db, zQ2, dump_callback, p, &zErr); + if( rc ){ + utf8_printf(p->out, "/****** ERROR: %s ******/\n", zErr); + }else{ + rc = SQLITE_CORRUPT; + } + sqlite3_free(zErr); + free(zQ2); + } + return rc; +} + +/* +** Text of a help message +*/ +static char zHelp[] = + ".backup ?DB? FILE Backup DB (default \"main\") to FILE\n" + ".bail on|off Stop after hitting an error. Default OFF\n" + ".binary on|off Turn binary output on or off. Default OFF\n" + ".changes on|off Show number of rows changed by SQL\n" + ".clone NEWDB Clone data into NEWDB from the existing database\n" + ".databases List names and files of attached databases\n" + ".dbinfo ?DB? Show status information about the database\n" + ".dump ?TABLE? ... Dump the database in an SQL text format\n" + " If TABLE specified, only dump tables matching\n" + " LIKE pattern TABLE.\n" + ".echo on|off Turn command echo on or off\n" + ".eqp on|off Enable or disable automatic EXPLAIN QUERY PLAN\n" + ".exit Exit this program\n" + ".explain ?on|off|auto? Turn EXPLAIN output mode on or off or to automatic\n" + ".fullschema Show schema and the content of sqlite_stat tables\n" + ".headers on|off Turn display of headers on or off\n" + ".help Show this message\n" + ".import FILE TABLE Import data from FILE into TABLE\n" + ".indexes ?TABLE? Show names of all indexes\n" + " If TABLE specified, only show indexes for tables\n" + " matching LIKE pattern TABLE.\n" +#ifdef SQLITE_ENABLE_IOTRACE + ".iotrace FILE Enable I/O diagnostic logging to FILE\n" +#endif + ".limit ?LIMIT? ?VAL? Display or change the value of an SQLITE_LIMIT\n" +#ifndef SQLITE_OMIT_LOAD_EXTENSION + ".load FILE ?ENTRY? Load an extension library\n" +#endif + ".log FILE|off Turn logging on or off. FILE can be stderr/stdout\n" + ".mode MODE ?TABLE? Set output mode where MODE is one of:\n" + " ascii Columns/rows delimited by 0x1F and 0x1E\n" + " csv Comma-separated values\n" + " column Left-aligned columns. (See .width)\n" + " html HTML code\n" + " insert SQL insert statements for TABLE\n" + " line One value per line\n" + " list Values delimited by .separator strings\n" + " tabs Tab-separated values\n" + " tcl TCL list elements\n" + ".nullvalue STRING Use STRING in place of NULL values\n" + ".once FILENAME Output for the next SQL command only to FILENAME\n" + ".open ?FILENAME? Close existing database and reopen FILENAME\n" + ".output ?FILENAME? Send output to FILENAME or stdout\n" + ".print STRING... Print literal STRING\n" + ".prompt MAIN CONTINUE Replace the standard prompts\n" + ".quit Exit this program\n" + ".read FILENAME Execute SQL in FILENAME\n" + ".restore ?DB? FILE Restore content of DB (default \"main\") from FILE\n" + ".save FILE Write in-memory database into FILE\n" + ".scanstats on|off Turn sqlite3_stmt_scanstatus() metrics on or off\n" + ".schema ?TABLE? Show the CREATE statements\n" + " If TABLE specified, only show tables matching\n" + " LIKE pattern TABLE.\n" + ".separator COL ?ROW? 
Change the column separator and optionally the row\n" + " separator for both the output mode and .import\n" + ".shell CMD ARGS... Run CMD ARGS... in a system shell\n" + ".show Show the current values for various settings\n" + ".stats on|off Turn stats on or off\n" + ".system CMD ARGS... Run CMD ARGS... in a system shell\n" + ".tables ?TABLE? List names of tables\n" + " If TABLE specified, only list tables matching\n" + " LIKE pattern TABLE.\n" + ".timeout MS Try opening locked tables for MS milliseconds\n" + ".timer on|off Turn SQL timer on or off\n" + ".trace FILE|off Output each SQL statement as it is run\n" + ".vfsinfo ?AUX? Information about the top-level VFS\n" + ".vfslist List all available VFSes\n" + ".vfsname ?AUX? Print the name of the VFS stack\n" + ".width NUM1 NUM2 ... Set column widths for \"column\" mode\n" + " Negative values right-justify\n" +; + +/* Forward reference */ +static int process_input(ShellState *p, FILE *in); +/* +** Implementation of the "readfile(X)" SQL function. The entire content +** of the file named X is read and returned as a BLOB. NULL is returned +** if the file does not exist or is unreadable. +*/ +static void readfileFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const char *zName; + FILE *in; + long nIn; + void *pBuf; + + UNUSED_PARAMETER(argc); + zName = (const char*)sqlite3_value_text(argv[0]); + if( zName==0 ) return; + in = fopen(zName, "rb"); + if( in==0 ) return; + fseek(in, 0, SEEK_END); + nIn = ftell(in); + rewind(in); + pBuf = sqlite3_malloc64( nIn ); + if( pBuf && 1==fread(pBuf, nIn, 1, in) ){ + sqlite3_result_blob(context, pBuf, nIn, sqlite3_free); + }else{ + sqlite3_free(pBuf); + } + fclose(in); +} + +/* +** Implementation of the "writefile(X,Y)" SQL function. The argument Y +** is written into file X. The number of bytes written is returned. Or +** NULL is returned if something goes wrong, such as being unable to open +** file X for writing. +*/ +static void writefileFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + FILE *out; + const char *z; + sqlite3_int64 rc; + const char *zFile; + + UNUSED_PARAMETER(argc); + zFile = (const char*)sqlite3_value_text(argv[0]); + if( zFile==0 ) return; + out = fopen(zFile, "wb"); + if( out==0 ) return; + z = (const char*)sqlite3_value_blob(argv[1]); + if( z==0 ){ + rc = 0; + }else{ + rc = fwrite(z, 1, sqlite3_value_bytes(argv[1]), out); + } + fclose(out); + sqlite3_result_int64(context, rc); +} + +/* +** Make sure the database is open. If it is not, then open it. If +** the database fails to open, print an error message and exit. +*/ +static void open_db(ShellState *p, int keepAlive){ + if( p->db==0 ){ + sqlite3_initialize(); + sqlite3_open(p->zDbFilename, &p->db); + globalDb = p->db; + if( p->db && sqlite3_errcode(p->db)==SQLITE_OK ){ + sqlite3_create_function(p->db, "shellstatic", 0, SQLITE_UTF8, 0, + shellstaticFunc, 0, 0); + } + if( p->db==0 || SQLITE_OK!=sqlite3_errcode(p->db) ){ + utf8_printf(stderr,"Error: unable to open database \"%s\": %s\n", + p->zDbFilename, sqlite3_errmsg(p->db)); + if( keepAlive ) return; + exit(1); + } +#ifndef SQLITE_OMIT_LOAD_EXTENSION + sqlite3_enable_load_extension(p->db, 1); +#endif + sqlite3_create_function(p->db, "readfile", 1, SQLITE_UTF8, 0, + readfileFunc, 0, 0); + sqlite3_create_function(p->db, "writefile", 2, SQLITE_UTF8, 0, + writefileFunc, 0, 0); + } +} + +/* +** Do C-language style dequoting. 
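+** The conversion is done in place, so the result is never longer than the
+** original string.  For example, the two-character sequence \t becomes a
+** single tab byte.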
+** +** \a -> alarm +** \b -> backspace +** \t -> tab +** \n -> newline +** \v -> vertical tab +** \f -> form feed +** \r -> carriage return +** \s -> space +** \" -> " +** \' -> ' +** \\ -> backslash +** \NNN -> ascii character NNN in octal +*/ +static void resolve_backslashes(char *z){ + int i, j; + char c; + while( *z && *z!='\\' ) z++; + for(i=j=0; (c = z[i])!=0; i++, j++){ + if( c=='\\' && z[i+1]!=0 ){ + c = z[++i]; + if( c=='a' ){ + c = '\a'; + }else if( c=='b' ){ + c = '\b'; + }else if( c=='t' ){ + c = '\t'; + }else if( c=='n' ){ + c = '\n'; + }else if( c=='v' ){ + c = '\v'; + }else if( c=='f' ){ + c = '\f'; + }else if( c=='r' ){ + c = '\r'; + }else if( c=='"' ){ + c = '"'; + }else if( c=='\'' ){ + c = '\''; + }else if( c=='\\' ){ + c = '\\'; + }else if( c>='0' && c<='7' ){ + c -= '0'; + if( z[i+1]>='0' && z[i+1]<='7' ){ + i++; + c = (c<<3) + z[i] - '0'; + if( z[i+1]>='0' && z[i+1]<='7' ){ + i++; + c = (c<<3) + z[i] - '0'; + } + } + } + } + z[j] = c; + } + if( j='0' && c<='9' ) return c - '0'; + if( c>='a' && c<='f' ) return c - 'a' + 10; + if( c>='A' && c<='F' ) return c - 'A' + 10; + return -1; +} + +/* +** Interpret zArg as an integer value, possibly with suffixes. +*/ +static sqlite3_int64 integerValue(const char *zArg){ + sqlite3_int64 v = 0; + static const struct { char *zSuffix; int iMult; } aMult[] = { + { "KiB", 1024 }, + { "MiB", 1024*1024 }, + { "GiB", 1024*1024*1024 }, + { "KB", 1000 }, + { "MB", 1000000 }, + { "GB", 1000000000 }, + { "K", 1000 }, + { "M", 1000000 }, + { "G", 1000000000 }, + }; + int i; + int isNeg = 0; + if( zArg[0]=='-' ){ + isNeg = 1; + zArg++; + }else if( zArg[0]=='+' ){ + zArg++; + } + if( zArg[0]=='0' && zArg[1]=='x' ){ + int x; + zArg += 2; + while( (x = hexDigitValue(zArg[0]))>=0 ){ + v = (v<<4) + x; + zArg++; + } + }else{ + while( IsDigit(zArg[0]) ){ + v = v*10 + zArg[0] - '0'; + zArg++; + } + } + for(i=0; i=0; i++){} + }else{ + for(i=0; zArg[i]>='0' && zArg[i]<='9'; i++){} + } + if( i>0 && zArg[i]==0 ) return (int)(integerValue(zArg) & 0xffffffff); + if( sqlite3_stricmp(zArg, "on")==0 || sqlite3_stricmp(zArg,"yes")==0 ){ + return 1; + } + if( sqlite3_stricmp(zArg, "off")==0 || sqlite3_stricmp(zArg,"no")==0 ){ + return 0; + } + utf8_printf(stderr, "ERROR: Not a boolean value: \"%s\". Assuming \"no\".\n", + zArg); + return 0; +} + +/* +** Close an output file, assuming it is not stderr or stdout +*/ +static void output_file_close(FILE *f){ + if( f && f!=stdout && f!=stderr ) fclose(f); +} + +/* +** Try to open an output file. The names "stdout" and "stderr" are +** recognized and do the right thing. NULL is returned if the output +** filename is "off". +*/ +static FILE *output_file_open(const char *zFile){ + FILE *f; + if( strcmp(zFile,"stdout")==0 ){ + f = stdout; + }else if( strcmp(zFile, "stderr")==0 ){ + f = stderr; + }else if( strcmp(zFile, "off")==0 ){ + f = 0; + }else{ + f = fopen(zFile, "wb"); + if( f==0 ){ + utf8_printf(stderr, "Error: cannot open \"%s\"\n", zFile); + } + } + return f; +} + +/* +** A routine for handling output from sqlite3_trace(). +*/ +static void sql_trace_callback(void *pArg, const char *z){ + FILE *f = (FILE*)pArg; + if( f ){ + int i = (int)strlen(z); + while( i>0 && z[i-1]==';' ){ i--; } + utf8_printf(f, "%.*s;\n", i, z); + } +} + +/* +** A no-op routine that runs with the ".breakpoint" doc-command. This is +** a useful spot to set a debugger breakpoint. +*/ +static void test_breakpoint(void){ + static int nCall = 0; + nCall++; +} + +/* +** An object used to read a CSV and other files for import. 
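+** The same context is used for both csv and ascii import; only the separator
+** characters and the field-reader function (csv_read_one_field or
+** ascii_read_one_field) differ.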
+*/ +typedef struct ImportCtx ImportCtx; +struct ImportCtx { + const char *zFile; /* Name of the input file */ + FILE *in; /* Read the CSV text from this input stream */ + char *z; /* Accumulated text for a field */ + int n; /* Number of bytes in z */ + int nAlloc; /* Space allocated for z[] */ + int nLine; /* Current line number */ + int cTerm; /* Character that terminated the most recent field */ + int cColSep; /* The column separator character. (Usually ",") */ + int cRowSep; /* The row separator character. (Usually "\n") */ +}; + +/* Append a single byte to z[] */ +static void import_append_char(ImportCtx *p, int c){ + if( p->n+1>=p->nAlloc ){ + p->nAlloc += p->nAlloc + 100; + p->z = sqlite3_realloc64(p->z, p->nAlloc); + if( p->z==0 ){ + raw_printf(stderr, "out of memory\n"); + exit(1); + } + } + p->z[p->n++] = (char)c; +} + +/* Read a single field of CSV text. Compatible with rfc4180 and extended +** with the option of having a separator other than ",". +** +** + Input comes from p->in. +** + Store results in p->z of length p->n. Space to hold p->z comes +** from sqlite3_malloc64(). +** + Use p->cSep as the column separator. The default is ",". +** + Use p->rSep as the row separator. The default is "\n". +** + Keep track of the line number in p->nLine. +** + Store the character that terminates the field in p->cTerm. Store +** EOF on end-of-file. +** + Report syntax errors on stderr +*/ +static char *SQLITE_CDECL csv_read_one_field(ImportCtx *p){ + int c; + int cSep = p->cColSep; + int rSep = p->cRowSep; + p->n = 0; + c = fgetc(p->in); + if( c==EOF || seenInterrupt ){ + p->cTerm = EOF; + return 0; + } + if( c=='"' ){ + int pc, ppc; + int startLine = p->nLine; + int cQuote = c; + pc = ppc = 0; + while( 1 ){ + c = fgetc(p->in); + if( c==rSep ) p->nLine++; + if( c==cQuote ){ + if( pc==cQuote ){ + pc = 0; + continue; + } + } + if( (c==cSep && pc==cQuote) + || (c==rSep && pc==cQuote) + || (c==rSep && pc=='\r' && ppc==cQuote) + || (c==EOF && pc==cQuote) + ){ + do{ p->n--; }while( p->z[p->n]!=cQuote ); + p->cTerm = c; + break; + } + if( pc==cQuote && c!='\r' ){ + utf8_printf(stderr, "%s:%d: unescaped %c character\n", + p->zFile, p->nLine, cQuote); + } + if( c==EOF ){ + utf8_printf(stderr, "%s:%d: unterminated %c-quoted field\n", + p->zFile, startLine, cQuote); + p->cTerm = c; + break; + } + import_append_char(p, c); + ppc = pc; + pc = c; + } + }else{ + while( c!=EOF && c!=cSep && c!=rSep ){ + import_append_char(p, c); + c = fgetc(p->in); + } + if( c==rSep ){ + p->nLine++; + if( p->n>0 && p->z[p->n-1]=='\r' ) p->n--; + } + p->cTerm = c; + } + if( p->z ) p->z[p->n] = 0; + return p->z; +} + +/* Read a single field of ASCII delimited text. +** +** + Input comes from p->in. +** + Store results in p->z of length p->n. Space to hold p->z comes +** from sqlite3_malloc64(). +** + Use p->cSep as the column separator. The default is "\x1F". +** + Use p->rSep as the row separator. The default is "\x1E". +** + Keep track of the row number in p->nLine. +** + Store the character that terminates the field in p->cTerm. Store +** EOF on end-of-file. 
+** + Report syntax errors on stderr +*/ +static char *SQLITE_CDECL ascii_read_one_field(ImportCtx *p){ + int c; + int cSep = p->cColSep; + int rSep = p->cRowSep; + p->n = 0; + c = fgetc(p->in); + if( c==EOF || seenInterrupt ){ + p->cTerm = EOF; + return 0; + } + while( c!=EOF && c!=cSep && c!=rSep ){ + import_append_char(p, c); + c = fgetc(p->in); + } + if( c==rSep ){ + p->nLine++; + } + p->cTerm = c; + if( p->z ) p->z[p->n] = 0; + return p->z; +} + +/* +** Try to transfer data for table zTable. If an error is seen while +** moving forward, try to go backwards. The backwards movement won't +** work for WITHOUT ROWID tables. +*/ +static void tryToCloneData( + ShellState *p, + sqlite3 *newDb, + const char *zTable +){ + sqlite3_stmt *pQuery = 0; + sqlite3_stmt *pInsert = 0; + char *zQuery = 0; + char *zInsert = 0; + int rc; + int i, j, n; + int nTable = (int)strlen(zTable); + int k = 0; + int cnt = 0; + const int spinRate = 10000; + + zQuery = sqlite3_mprintf("SELECT * FROM \"%w\"", zTable); + rc = sqlite3_prepare_v2(p->db, zQuery, -1, &pQuery, 0); + if( rc ){ + utf8_printf(stderr, "Error %d: %s on [%s]\n", + sqlite3_extended_errcode(p->db), sqlite3_errmsg(p->db), + zQuery); + goto end_data_xfer; + } + n = sqlite3_column_count(pQuery); + zInsert = sqlite3_malloc64(200 + nTable + n*3); + if( zInsert==0 ){ + raw_printf(stderr, "out of memory\n"); + goto end_data_xfer; + } + sqlite3_snprintf(200+nTable,zInsert, + "INSERT OR IGNORE INTO \"%s\" VALUES(?", zTable); + i = (int)strlen(zInsert); + for(j=1; jdb, zQuery, -1, &pQuery, 0); + if( rc ){ + utf8_printf(stderr, "Warning: cannot step \"%s\" backwards", zTable); + break; + } + } /* End for(k=0...) */ + +end_data_xfer: + sqlite3_finalize(pQuery); + sqlite3_finalize(pInsert); + sqlite3_free(zQuery); + sqlite3_free(zInsert); +} + + +/* +** Try to transfer all rows of the schema that match zWhere. For +** each row, invoke xForEach() on the object defined by that row. +** If an error is encountered while moving forward through the +** sqlite_master table, try again moving backwards. +*/ +static void tryToCloneSchema( + ShellState *p, + sqlite3 *newDb, + const char *zWhere, + void (*xForEach)(ShellState*,sqlite3*,const char*) +){ + sqlite3_stmt *pQuery = 0; + char *zQuery = 0; + int rc; + const unsigned char *zName; + const unsigned char *zSql; + char *zErrMsg = 0; + + zQuery = sqlite3_mprintf("SELECT name, sql FROM sqlite_master" + " WHERE %s", zWhere); + rc = sqlite3_prepare_v2(p->db, zQuery, -1, &pQuery, 0); + if( rc ){ + utf8_printf(stderr, "Error: (%d) %s on [%s]\n", + sqlite3_extended_errcode(p->db), sqlite3_errmsg(p->db), + zQuery); + goto end_schema_xfer; + } + while( (rc = sqlite3_step(pQuery))==SQLITE_ROW ){ + zName = sqlite3_column_text(pQuery, 0); + zSql = sqlite3_column_text(pQuery, 1); + printf("%s... 
", zName); fflush(stdout); + sqlite3_exec(newDb, (const char*)zSql, 0, 0, &zErrMsg); + if( zErrMsg ){ + utf8_printf(stderr, "Error: %s\nSQL: [%s]\n", zErrMsg, zSql); + sqlite3_free(zErrMsg); + zErrMsg = 0; + } + if( xForEach ){ + xForEach(p, newDb, (const char*)zName); + } + printf("done\n"); + } + if( rc!=SQLITE_DONE ){ + sqlite3_finalize(pQuery); + sqlite3_free(zQuery); + zQuery = sqlite3_mprintf("SELECT name, sql FROM sqlite_master" + " WHERE %s ORDER BY rowid DESC", zWhere); + rc = sqlite3_prepare_v2(p->db, zQuery, -1, &pQuery, 0); + if( rc ){ + utf8_printf(stderr, "Error: (%d) %s on [%s]\n", + sqlite3_extended_errcode(p->db), sqlite3_errmsg(p->db), + zQuery); + goto end_schema_xfer; + } + while( (rc = sqlite3_step(pQuery))==SQLITE_ROW ){ + zName = sqlite3_column_text(pQuery, 0); + zSql = sqlite3_column_text(pQuery, 1); + printf("%s... ", zName); fflush(stdout); + sqlite3_exec(newDb, (const char*)zSql, 0, 0, &zErrMsg); + if( zErrMsg ){ + utf8_printf(stderr, "Error: %s\nSQL: [%s]\n", zErrMsg, zSql); + sqlite3_free(zErrMsg); + zErrMsg = 0; + } + if( xForEach ){ + xForEach(p, newDb, (const char*)zName); + } + printf("done\n"); + } + } +end_schema_xfer: + sqlite3_finalize(pQuery); + sqlite3_free(zQuery); +} + +/* +** Open a new database file named "zNewDb". Try to recover as much information +** as possible out of the main database (which might be corrupt) and write it +** into zNewDb. +*/ +static void tryToClone(ShellState *p, const char *zNewDb){ + int rc; + sqlite3 *newDb = 0; + if( access(zNewDb,0)==0 ){ + utf8_printf(stderr, "File \"%s\" already exists.\n", zNewDb); + return; + } + rc = sqlite3_open(zNewDb, &newDb); + if( rc ){ + utf8_printf(stderr, "Cannot create output database: %s\n", + sqlite3_errmsg(newDb)); + }else{ + sqlite3_exec(p->db, "PRAGMA writable_schema=ON;", 0, 0, 0); + sqlite3_exec(newDb, "BEGIN EXCLUSIVE;", 0, 0, 0); + tryToCloneSchema(p, newDb, "type='table'", tryToCloneData); + tryToCloneSchema(p, newDb, "type!='table'", 0); + sqlite3_exec(newDb, "COMMIT;", 0, 0, 0); + sqlite3_exec(p->db, "PRAGMA writable_schema=OFF;", 0, 0, 0); + } + sqlite3_close(newDb); +} + +/* +** Change the output file back to stdout +*/ +static void output_reset(ShellState *p){ + if( p->outfile[0]=='|' ){ +#ifndef SQLITE_OMIT_POPEN + pclose(p->out); +#endif + }else{ + output_file_close(p->out); + } + p->outfile[0] = 0; + p->out = stdout; +} + +/* +** Run an SQL command and return the single integer result. +*/ +static int db_int(ShellState *p, const char *zSql){ + sqlite3_stmt *pStmt; + int res = 0; + sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0); + if( pStmt && sqlite3_step(pStmt)==SQLITE_ROW ){ + res = sqlite3_column_int(pStmt,0); + } + sqlite3_finalize(pStmt); + return res; +} + +/* +** Convert a 2-byte or 4-byte big-endian integer into a native integer +*/ +unsigned int get2byteInt(unsigned char *a){ + return (a[0]<<8) + a[1]; +} +unsigned int get4byteInt(unsigned char *a){ + return (a[0]<<24) + (a[1]<<16) + (a[2]<<8) + a[3]; +} + +/* +** Implementation of the ".info" command. +** +** Return 1 on error, 2 to exit, and 0 otherwise. 
+*/ +static int shell_dbinfo_command(ShellState *p, int nArg, char **azArg){ + static const struct { const char *zName; int ofst; } aField[] = { + { "file change counter:", 24 }, + { "database page count:", 28 }, + { "freelist page count:", 36 }, + { "schema cookie:", 40 }, + { "schema format:", 44 }, + { "default cache size:", 48 }, + { "autovacuum top root:", 52 }, + { "incremental vacuum:", 64 }, + { "text encoding:", 56 }, + { "user version:", 60 }, + { "application id:", 68 }, + { "software version:", 96 }, + }; + static const struct { const char *zName; const char *zSql; } aQuery[] = { + { "number of tables:", + "SELECT count(*) FROM %s WHERE type='table'" }, + { "number of indexes:", + "SELECT count(*) FROM %s WHERE type='index'" }, + { "number of triggers:", + "SELECT count(*) FROM %s WHERE type='trigger'" }, + { "number of views:", + "SELECT count(*) FROM %s WHERE type='view'" }, + { "schema size:", + "SELECT total(length(sql)) FROM %s" }, + }; + sqlite3_file *pFile = 0; + int i; + char *zSchemaTab; + char *zDb = nArg>=2 ? azArg[1] : "main"; + unsigned char aHdr[100]; + open_db(p, 0); + if( p->db==0 ) return 1; + sqlite3_file_control(p->db, zDb, SQLITE_FCNTL_FILE_POINTER, &pFile); + if( pFile==0 || pFile->pMethods==0 || pFile->pMethods->xRead==0 ){ + return 1; + } + i = pFile->pMethods->xRead(pFile, aHdr, 100, 0); + if( i!=SQLITE_OK ){ + raw_printf(stderr, "unable to read database header\n"); + return 1; + } + i = get2byteInt(aHdr+16); + if( i==1 ) i = 65536; + utf8_printf(p->out, "%-20s %d\n", "database page size:", i); + utf8_printf(p->out, "%-20s %d\n", "write format:", aHdr[18]); + utf8_printf(p->out, "%-20s %d\n", "read format:", aHdr[19]); + utf8_printf(p->out, "%-20s %d\n", "reserved bytes:", aHdr[20]); + for(i=0; iout, "%-20s %u", aField[i].zName, val); + switch( ofst ){ + case 56: { + if( val==1 ) raw_printf(p->out, " (utf8)"); + if( val==2 ) raw_printf(p->out, " (utf16le)"); + if( val==3 ) raw_printf(p->out, " (utf16be)"); + } + } + raw_printf(p->out, "\n"); + } + if( zDb==0 ){ + zSchemaTab = sqlite3_mprintf("main.sqlite_master"); + }else if( strcmp(zDb,"temp")==0 ){ + zSchemaTab = sqlite3_mprintf("%s", "sqlite_temp_master"); + }else{ + zSchemaTab = sqlite3_mprintf("\"%w\".sqlite_master", zDb); + } + for(i=0; iout, "%-20s %d\n", aQuery[i].zName, val); + } + sqlite3_free(zSchemaTab); + return 0; +} + +/* +** Print the current sqlite3_errmsg() value to stderr and return 1. +*/ +static int shellDatabaseError(sqlite3 *db){ + const char *zErr = sqlite3_errmsg(db); + utf8_printf(stderr, "Error: %s\n", zErr); + return 1; +} + +/* +** Print an out-of-memory message to stderr and return 1. +*/ +static int shellNomemError(void){ + raw_printf(stderr, "Error: out of memory\n"); + return 1; +} + +/* +** If an input line begins with "." then invoke this routine to +** process that line. +** +** Return 1 on error, 2 to exit, and 0 otherwise. +*/ +static int do_meta_command(char *zLine, ShellState *p){ + int h = 1; + int nArg = 0; + int n, c; + int rc = 0; + char *azArg[50]; + + /* Parse the input line into tokens. 
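+  ** Up to ArraySize(azArg) tokens are collected into azArg[].  Tokens are
+  ** separated by white space, except that a quoted token may contain spaces.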
+ */ + while( zLine[h] && nArg=3 && strncmp(azArg[0], "backup", n)==0) + || (c=='s' && n>=3 && strncmp(azArg[0], "save", n)==0) + ){ + const char *zDestFile = 0; + const char *zDb = 0; + sqlite3 *pDest; + sqlite3_backup *pBackup; + int j; + for(j=1; jdb, zDb); + if( pBackup==0 ){ + utf8_printf(stderr, "Error: %s\n", sqlite3_errmsg(pDest)); + sqlite3_close(pDest); + return 1; + } + while( (rc = sqlite3_backup_step(pBackup,100))==SQLITE_OK ){} + sqlite3_backup_finish(pBackup); + if( rc==SQLITE_DONE ){ + rc = 0; + }else{ + utf8_printf(stderr, "Error: %s\n", sqlite3_errmsg(pDest)); + rc = 1; + } + sqlite3_close(pDest); + }else + + if( c=='b' && n>=3 && strncmp(azArg[0], "bail", n)==0 ){ + if( nArg==2 ){ + bail_on_error = booleanValue(azArg[1]); + }else{ + raw_printf(stderr, "Usage: .bail on|off\n"); + rc = 1; + } + }else + + if( c=='b' && n>=3 && strncmp(azArg[0], "binary", n)==0 ){ + if( nArg==2 ){ + if( booleanValue(azArg[1]) ){ + setBinaryMode(p->out); + }else{ + setTextMode(p->out); + } + }else{ + raw_printf(stderr, "Usage: .binary on|off\n"); + rc = 1; + } + }else + + /* The undocumented ".breakpoint" command causes a call to the no-op + ** routine named test_breakpoint(). + */ + if( c=='b' && n>=3 && strncmp(azArg[0], "breakpoint", n)==0 ){ + test_breakpoint(); + }else + + if( c=='c' && n>=3 && strncmp(azArg[0], "changes", n)==0 ){ + if( nArg==2 ){ + p->countChanges = booleanValue(azArg[1]); + }else{ + raw_printf(stderr, "Usage: .changes on|off\n"); + rc = 1; + } + }else + + if( c=='c' && strncmp(azArg[0], "clone", n)==0 ){ + if( nArg==2 ){ + tryToClone(p, azArg[1]); + }else{ + raw_printf(stderr, "Usage: .clone FILENAME\n"); + rc = 1; + } + }else + + if( c=='d' && n>1 && strncmp(azArg[0], "databases", n)==0 ){ + ShellState data; + char *zErrMsg = 0; + open_db(p, 0); + memcpy(&data, p, sizeof(data)); + data.showHeader = 1; + data.cMode = data.mode = MODE_Column; + data.colWidth[0] = 3; + data.colWidth[1] = 15; + data.colWidth[2] = 58; + data.cnt = 0; + sqlite3_exec(p->db, "PRAGMA database_list; ", callback, &data, &zErrMsg); + if( zErrMsg ){ + utf8_printf(stderr,"Error: %s\n", zErrMsg); + sqlite3_free(zErrMsg); + rc = 1; + } + }else + + if( c=='d' && strncmp(azArg[0], "dbinfo", n)==0 ){ + rc = shell_dbinfo_command(p, nArg, azArg); + }else + + if( c=='d' && strncmp(azArg[0], "dump", n)==0 ){ + open_db(p, 0); + /* When playing back a "dump", the content might appear in an order + ** which causes immediate foreign key constraints to be violated. + ** So disable foreign-key constraint enforcement to prevent problems. 
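+    ** The PRAGMA foreign_keys=OFF line is written into the dump output
+    ** itself, so it takes effect when the script is replayed rather than on
+    ** the current connection.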
*/ + if( nArg!=1 && nArg!=2 ){ + raw_printf(stderr, "Usage: .dump ?LIKE-PATTERN?\n"); + rc = 1; + goto meta_command_exit; + } + raw_printf(p->out, "PRAGMA foreign_keys=OFF;\n"); + raw_printf(p->out, "BEGIN TRANSACTION;\n"); + p->writableSchema = 0; + sqlite3_exec(p->db, "SAVEPOINT dump; PRAGMA writable_schema=ON", 0, 0, 0); + p->nErr = 0; + if( nArg==1 ){ + run_schema_dump_query(p, + "SELECT name, type, sql FROM sqlite_master " + "WHERE sql NOT NULL AND type=='table' AND name!='sqlite_sequence'" + ); + run_schema_dump_query(p, + "SELECT name, type, sql FROM sqlite_master " + "WHERE name=='sqlite_sequence'" + ); + run_table_dump_query(p, + "SELECT sql FROM sqlite_master " + "WHERE sql NOT NULL AND type IN ('index','trigger','view')", 0 + ); + }else{ + int i; + for(i=1; iwritableSchema ){ + raw_printf(p->out, "PRAGMA writable_schema=OFF;\n"); + p->writableSchema = 0; + } + sqlite3_exec(p->db, "PRAGMA writable_schema=OFF;", 0, 0, 0); + sqlite3_exec(p->db, "RELEASE dump;", 0, 0, 0); + raw_printf(p->out, p->nErr ? "ROLLBACK; -- due to errors\n" : "COMMIT;\n"); + }else + + if( c=='e' && strncmp(azArg[0], "echo", n)==0 ){ + if( nArg==2 ){ + p->echoOn = booleanValue(azArg[1]); + }else{ + raw_printf(stderr, "Usage: .echo on|off\n"); + rc = 1; + } + }else + + if( c=='e' && strncmp(azArg[0], "eqp", n)==0 ){ + if( nArg==2 ){ + p->autoEQP = booleanValue(azArg[1]); + }else{ + raw_printf(stderr, "Usage: .eqp on|off\n"); + rc = 1; + } + }else + + if( c=='e' && strncmp(azArg[0], "exit", n)==0 ){ + if( nArg>1 && (rc = (int)integerValue(azArg[1]))!=0 ) exit(rc); + rc = 2; + }else + + if( c=='e' && strncmp(azArg[0], "explain", n)==0 ){ + int val = 1; + if( nArg>=2 ){ + if( strcmp(azArg[1],"auto")==0 ){ + val = 99; + }else{ + val = booleanValue(azArg[1]); + } + } + if( val==1 && p->mode!=MODE_Explain ){ + p->normalMode = p->mode; + p->mode = MODE_Explain; + p->autoExplain = 0; + }else if( val==0 ){ + if( p->mode==MODE_Explain ) p->mode = p->normalMode; + p->autoExplain = 0; + }else if( val==99 ){ + if( p->mode==MODE_Explain ) p->mode = p->normalMode; + p->autoExplain = 1; + } + }else + + if( c=='f' && strncmp(azArg[0], "fullschema", n)==0 ){ + ShellState data; + char *zErrMsg = 0; + int doStats = 0; + if( nArg!=1 ){ + raw_printf(stderr, "Usage: .fullschema\n"); + rc = 1; + goto meta_command_exit; + } + open_db(p, 0); + memcpy(&data, p, sizeof(data)); + data.showHeader = 0; + data.cMode = data.mode = MODE_Semi; + rc = sqlite3_exec(p->db, + "SELECT sql FROM" + " (SELECT sql sql, type type, tbl_name tbl_name, name name, rowid x" + " FROM sqlite_master UNION ALL" + " SELECT sql, type, tbl_name, name, rowid FROM sqlite_temp_master) " + "WHERE type!='meta' AND sql NOTNULL AND name NOT LIKE 'sqlite_%' " + "ORDER BY rowid", + callback, &data, &zErrMsg + ); + if( rc==SQLITE_OK ){ + sqlite3_stmt *pStmt; + rc = sqlite3_prepare_v2(p->db, + "SELECT rowid FROM sqlite_master" + " WHERE name GLOB 'sqlite_stat[134]'", + -1, &pStmt, 0); + doStats = sqlite3_step(pStmt)==SQLITE_ROW; + sqlite3_finalize(pStmt); + } + if( doStats==0 ){ + raw_printf(p->out, "/* No STAT tables available */\n"); + }else{ + raw_printf(p->out, "ANALYZE sqlite_master;\n"); + sqlite3_exec(p->db, "SELECT 'ANALYZE sqlite_master'", + callback, &data, &zErrMsg); + data.cMode = data.mode = MODE_Insert; + data.zDestTable = "sqlite_stat1"; + shell_exec(p->db, "SELECT * FROM sqlite_stat1", + shell_callback, &data,&zErrMsg); + data.zDestTable = "sqlite_stat3"; + shell_exec(p->db, "SELECT * FROM sqlite_stat3", + shell_callback, &data,&zErrMsg); + data.zDestTable = 
"sqlite_stat4"; + shell_exec(p->db, "SELECT * FROM sqlite_stat4", + shell_callback, &data, &zErrMsg); + raw_printf(p->out, "ANALYZE sqlite_master;\n"); + } + }else + + if( c=='h' && strncmp(azArg[0], "headers", n)==0 ){ + if( nArg==2 ){ + p->showHeader = booleanValue(azArg[1]); + }else{ + raw_printf(stderr, "Usage: .headers on|off\n"); + rc = 1; + } + }else + + if( c=='h' && strncmp(azArg[0], "help", n)==0 ){ + utf8_printf(p->out, "%s", zHelp); + }else + + if( c=='i' && strncmp(azArg[0], "import", n)==0 ){ + char *zTable; /* Insert data into this table */ + char *zFile; /* Name of file to extra content from */ + sqlite3_stmt *pStmt = NULL; /* A statement */ + int nCol; /* Number of columns in the table */ + int nByte; /* Number of bytes in an SQL string */ + int i, j; /* Loop counters */ + int needCommit; /* True to COMMIT or ROLLBACK at end */ + int nSep; /* Number of bytes in p->colSeparator[] */ + char *zSql; /* An SQL statement */ + ImportCtx sCtx; /* Reader context */ + char *(SQLITE_CDECL *xRead)(ImportCtx*); /* Func to read one value */ + int (SQLITE_CDECL *xCloser)(FILE*); /* Func to close file */ + + if( nArg!=3 ){ + raw_printf(stderr, "Usage: .import FILE TABLE\n"); + goto meta_command_exit; + } + zFile = azArg[1]; + zTable = azArg[2]; + seenInterrupt = 0; + memset(&sCtx, 0, sizeof(sCtx)); + open_db(p, 0); + nSep = strlen30(p->colSeparator); + if( nSep==0 ){ + raw_printf(stderr, + "Error: non-null column separator required for import\n"); + return 1; + } + if( nSep>1 ){ + raw_printf(stderr, "Error: multi-character column separators not allowed" + " for import\n"); + return 1; + } + nSep = strlen30(p->rowSeparator); + if( nSep==0 ){ + raw_printf(stderr, "Error: non-null row separator required for import\n"); + return 1; + } + if( nSep==2 && p->mode==MODE_Csv && strcmp(p->rowSeparator, SEP_CrLf)==0 ){ + /* When importing CSV (only), if the row separator is set to the + ** default output row separator, change it to the default input + ** row separator. This avoids having to maintain different input + ** and output row separators. 
*/ + sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, SEP_Row); + nSep = strlen30(p->rowSeparator); + } + if( nSep>1 ){ + raw_printf(stderr, "Error: multi-character row separators not allowed" + " for import\n"); + return 1; + } + sCtx.zFile = zFile; + sCtx.nLine = 1; + if( sCtx.zFile[0]=='|' ){ +#ifdef SQLITE_OMIT_POPEN + raw_printf(stderr, "Error: pipes are not supported in this OS\n"); + return 1; +#else + sCtx.in = popen(sCtx.zFile+1, "r"); + sCtx.zFile = ""; + xCloser = pclose; +#endif + }else{ + sCtx.in = fopen(sCtx.zFile, "rb"); + xCloser = fclose; + } + if( p->mode==MODE_Ascii ){ + xRead = ascii_read_one_field; + }else{ + xRead = csv_read_one_field; + } + if( sCtx.in==0 ){ + utf8_printf(stderr, "Error: cannot open \"%s\"\n", zFile); + return 1; + } + sCtx.cColSep = p->colSeparator[0]; + sCtx.cRowSep = p->rowSeparator[0]; + zSql = sqlite3_mprintf("SELECT * FROM %s", zTable); + if( zSql==0 ){ + raw_printf(stderr, "Error: out of memory\n"); + xCloser(sCtx.in); + return 1; + } + nByte = strlen30(zSql); + rc = sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0); + import_append_char(&sCtx, 0); /* To ensure sCtx.z is allocated */ + if( rc && sqlite3_strglob("no such table: *", sqlite3_errmsg(p->db))==0 ){ + char *zCreate = sqlite3_mprintf("CREATE TABLE %s", zTable); + char cSep = '('; + while( xRead(&sCtx) ){ + zCreate = sqlite3_mprintf("%z%c\n \"%s\" TEXT", zCreate, cSep, sCtx.z); + cSep = ','; + if( sCtx.cTerm!=sCtx.cColSep ) break; + } + if( cSep=='(' ){ + sqlite3_free(zCreate); + sqlite3_free(sCtx.z); + xCloser(sCtx.in); + utf8_printf(stderr,"%s: empty file\n", sCtx.zFile); + return 1; + } + zCreate = sqlite3_mprintf("%z\n)", zCreate); + rc = sqlite3_exec(p->db, zCreate, 0, 0, 0); + sqlite3_free(zCreate); + if( rc ){ + utf8_printf(stderr, "CREATE TABLE %s(...) 
failed: %s\n", zTable, + sqlite3_errmsg(p->db)); + sqlite3_free(sCtx.z); + xCloser(sCtx.in); + return 1; + } + rc = sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0); + } + sqlite3_free(zSql); + if( rc ){ + if (pStmt) sqlite3_finalize(pStmt); + utf8_printf(stderr,"Error: %s\n", sqlite3_errmsg(p->db)); + xCloser(sCtx.in); + return 1; + } + nCol = sqlite3_column_count(pStmt); + sqlite3_finalize(pStmt); + pStmt = 0; + if( nCol==0 ) return 0; /* no columns, no error */ + zSql = sqlite3_malloc64( nByte*2 + 20 + nCol*2 ); + if( zSql==0 ){ + raw_printf(stderr, "Error: out of memory\n"); + xCloser(sCtx.in); + return 1; + } + sqlite3_snprintf(nByte+20, zSql, "INSERT INTO \"%w\" VALUES(?", zTable); + j = strlen30(zSql); + for(i=1; idb, zSql, -1, &pStmt, 0); + sqlite3_free(zSql); + if( rc ){ + utf8_printf(stderr, "Error: %s\n", sqlite3_errmsg(p->db)); + if (pStmt) sqlite3_finalize(pStmt); + xCloser(sCtx.in); + return 1; + } + needCommit = sqlite3_get_autocommit(p->db); + if( needCommit ) sqlite3_exec(p->db, "BEGIN", 0, 0, 0); + do{ + int startLine = sCtx.nLine; + for(i=0; imode==MODE_Ascii && (z==0 || z[0]==0) && i==0 ) break; + sqlite3_bind_text(pStmt, i+1, z, -1, SQLITE_TRANSIENT); + if( i=nCol ){ + sqlite3_step(pStmt); + rc = sqlite3_reset(pStmt); + if( rc!=SQLITE_OK ){ + utf8_printf(stderr, "%s:%d: INSERT failed: %s\n", sCtx.zFile, + startLine, sqlite3_errmsg(p->db)); + } + } + }while( sCtx.cTerm!=EOF ); + + xCloser(sCtx.in); + sqlite3_free(sCtx.z); + sqlite3_finalize(pStmt); + if( needCommit ) sqlite3_exec(p->db, "COMMIT", 0, 0, 0); + }else + + if( c=='i' && (strncmp(azArg[0], "indices", n)==0 + || strncmp(azArg[0], "indexes", n)==0) ){ + ShellState data; + char *zErrMsg = 0; + open_db(p, 0); + memcpy(&data, p, sizeof(data)); + data.showHeader = 0; + data.cMode = data.mode = MODE_List; + if( nArg==1 ){ + rc = sqlite3_exec(p->db, + "SELECT name FROM sqlite_master " + "WHERE type='index' AND name NOT LIKE 'sqlite_%' " + "UNION ALL " + "SELECT name FROM sqlite_temp_master " + "WHERE type='index' " + "ORDER BY 1", + callback, &data, &zErrMsg + ); + }else if( nArg==2 ){ + zShellStatic = azArg[1]; + rc = sqlite3_exec(p->db, + "SELECT name FROM sqlite_master " + "WHERE type='index' AND tbl_name LIKE shellstatic() " + "UNION ALL " + "SELECT name FROM sqlite_temp_master " + "WHERE type='index' AND tbl_name LIKE shellstatic() " + "ORDER BY 1", + callback, &data, &zErrMsg + ); + zShellStatic = 0; + }else{ + raw_printf(stderr, "Usage: .indexes ?LIKE-PATTERN?\n"); + rc = 1; + goto meta_command_exit; + } + if( zErrMsg ){ + utf8_printf(stderr,"Error: %s\n", zErrMsg); + sqlite3_free(zErrMsg); + rc = 1; + }else if( rc != SQLITE_OK ){ + raw_printf(stderr, + "Error: querying sqlite_master and sqlite_temp_master\n"); + rc = 1; + } + }else + +#ifdef SQLITE_ENABLE_IOTRACE + if( c=='i' && strncmp(azArg[0], "iotrace", n)==0 ){ + SQLITE_API extern void (SQLITE_CDECL *sqlite3IoTrace)(const char*, ...); + if( iotrace && iotrace!=stdout ) fclose(iotrace); + iotrace = 0; + if( nArg<2 ){ + sqlite3IoTrace = 0; + }else if( strcmp(azArg[1], "-")==0 ){ + sqlite3IoTrace = iotracePrintf; + iotrace = stdout; + }else{ + iotrace = fopen(azArg[1], "w"); + if( iotrace==0 ){ + utf8_printf(stderr, "Error: cannot open \"%s\"\n", azArg[1]); + sqlite3IoTrace = 0; + rc = 1; + }else{ + sqlite3IoTrace = iotracePrintf; + } + } + }else +#endif + if( c=='l' && n>=5 && strncmp(azArg[0], "limits", n)==0 ){ + static const struct { + const char *zLimitName; /* Name of a limit */ + int limitCode; /* Integer code for that limit */ + } aLimit[] = { + { 
"length", SQLITE_LIMIT_LENGTH }, + { "sql_length", SQLITE_LIMIT_SQL_LENGTH }, + { "column", SQLITE_LIMIT_COLUMN }, + { "expr_depth", SQLITE_LIMIT_EXPR_DEPTH }, + { "compound_select", SQLITE_LIMIT_COMPOUND_SELECT }, + { "vdbe_op", SQLITE_LIMIT_VDBE_OP }, + { "function_arg", SQLITE_LIMIT_FUNCTION_ARG }, + { "attached", SQLITE_LIMIT_ATTACHED }, + { "like_pattern_length", SQLITE_LIMIT_LIKE_PATTERN_LENGTH }, + { "variable_number", SQLITE_LIMIT_VARIABLE_NUMBER }, + { "trigger_depth", SQLITE_LIMIT_TRIGGER_DEPTH }, + { "worker_threads", SQLITE_LIMIT_WORKER_THREADS }, + }; + int i, n2; + open_db(p, 0); + if( nArg==1 ){ + for(i=0; idb, aLimit[i].limitCode, -1)); + } + }else if( nArg>3 ){ + raw_printf(stderr, "Usage: .limit NAME ?NEW-VALUE?\n"); + rc = 1; + goto meta_command_exit; + }else{ + int iLimit = -1; + n2 = strlen30(azArg[1]); + for(i=0; idb, aLimit[iLimit].limitCode, + (int)integerValue(azArg[2])); + } + printf("%20s %d\n", aLimit[iLimit].zLimitName, + sqlite3_limit(p->db, aLimit[iLimit].limitCode, -1)); + } + }else + +#ifndef SQLITE_OMIT_LOAD_EXTENSION + if( c=='l' && strncmp(azArg[0], "load", n)==0 ){ + const char *zFile, *zProc; + char *zErrMsg = 0; + if( nArg<2 ){ + raw_printf(stderr, "Usage: .load FILE ?ENTRYPOINT?\n"); + rc = 1; + goto meta_command_exit; + } + zFile = azArg[1]; + zProc = nArg>=3 ? azArg[2] : 0; + open_db(p, 0); + rc = sqlite3_load_extension(p->db, zFile, zProc, &zErrMsg); + if( rc!=SQLITE_OK ){ + utf8_printf(stderr, "Error: %s\n", zErrMsg); + sqlite3_free(zErrMsg); + rc = 1; + } + }else +#endif + + if( c=='l' && strncmp(azArg[0], "log", n)==0 ){ + if( nArg!=2 ){ + raw_printf(stderr, "Usage: .log FILENAME\n"); + rc = 1; + }else{ + const char *zFile = azArg[1]; + output_file_close(p->pLog); + p->pLog = output_file_open(zFile); + } + }else + + if( c=='m' && strncmp(azArg[0], "mode", n)==0 ){ + const char *zMode = nArg>=2 ? azArg[1] : ""; + int n2 = (int)strlen(zMode); + int c2 = zMode[0]; + if( c2=='l' && n2>2 && strncmp(azArg[1],"lines",n2)==0 ){ + p->mode = MODE_Line; + }else if( c2=='c' && strncmp(azArg[1],"columns",n2)==0 ){ + p->mode = MODE_Column; + }else if( c2=='l' && n2>2 && strncmp(azArg[1],"list",n2)==0 ){ + p->mode = MODE_List; + }else if( c2=='h' && strncmp(azArg[1],"html",n2)==0 ){ + p->mode = MODE_Html; + }else if( c2=='t' && strncmp(azArg[1],"tcl",n2)==0 ){ + p->mode = MODE_Tcl; + sqlite3_snprintf(sizeof(p->colSeparator), p->colSeparator, SEP_Space); + }else if( c2=='c' && strncmp(azArg[1],"csv",n2)==0 ){ + p->mode = MODE_Csv; + sqlite3_snprintf(sizeof(p->colSeparator), p->colSeparator, SEP_Comma); + sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, SEP_CrLf); + }else if( c2=='t' && strncmp(azArg[1],"tabs",n2)==0 ){ + p->mode = MODE_List; + sqlite3_snprintf(sizeof(p->colSeparator), p->colSeparator, SEP_Tab); + }else if( c2=='i' && strncmp(azArg[1],"insert",n2)==0 ){ + p->mode = MODE_Insert; + set_table_name(p, nArg>=3 ? 
azArg[2] : "table"); + }else if( c2=='a' && strncmp(azArg[1],"ascii",n2)==0 ){ + p->mode = MODE_Ascii; + sqlite3_snprintf(sizeof(p->colSeparator), p->colSeparator, SEP_Unit); + sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, SEP_Record); + }else { + raw_printf(stderr, "Error: mode should be one of: " + "ascii column csv html insert line list tabs tcl\n"); + rc = 1; + } + p->cMode = p->mode; + }else + + if( c=='n' && strncmp(azArg[0], "nullvalue", n)==0 ){ + if( nArg==2 ){ + sqlite3_snprintf(sizeof(p->nullValue), p->nullValue, + "%.*s", (int)ArraySize(p->nullValue)-1, azArg[1]); + }else{ + raw_printf(stderr, "Usage: .nullvalue STRING\n"); + rc = 1; + } + }else + + if( c=='o' && strncmp(azArg[0], "open", n)==0 && n>=2 ){ + sqlite3 *savedDb = p->db; + const char *zSavedFilename = p->zDbFilename; + char *zNewFilename = 0; + p->db = 0; + if( nArg>=2 ) zNewFilename = sqlite3_mprintf("%s", azArg[1]); + p->zDbFilename = zNewFilename; + open_db(p, 1); + if( p->db!=0 ){ + sqlite3_close(savedDb); + sqlite3_free(p->zFreeOnClose); + p->zFreeOnClose = zNewFilename; + }else{ + sqlite3_free(zNewFilename); + p->db = savedDb; + p->zDbFilename = zSavedFilename; + } + }else + + if( c=='o' + && (strncmp(azArg[0], "output", n)==0 || strncmp(azArg[0], "once", n)==0) + ){ + const char *zFile = nArg>=2 ? azArg[1] : "stdout"; + if( nArg>2 ){ + utf8_printf(stderr, "Usage: .%s FILE\n", azArg[0]); + rc = 1; + goto meta_command_exit; + } + if( n>1 && strncmp(azArg[0], "once", n)==0 ){ + if( nArg<2 ){ + raw_printf(stderr, "Usage: .once FILE\n"); + rc = 1; + goto meta_command_exit; + } + p->outCount = 2; + }else{ + p->outCount = 0; + } + output_reset(p); + if( zFile[0]=='|' ){ +#ifdef SQLITE_OMIT_POPEN + raw_printf(stderr, "Error: pipes are not supported in this OS\n"); + rc = 1; + p->out = stdout; +#else + p->out = popen(zFile + 1, "w"); + if( p->out==0 ){ + utf8_printf(stderr,"Error: cannot open pipe \"%s\"\n", zFile + 1); + p->out = stdout; + rc = 1; + }else{ + sqlite3_snprintf(sizeof(p->outfile), p->outfile, "%s", zFile); + } +#endif + }else{ + p->out = output_file_open(zFile); + if( p->out==0 ){ + if( strcmp(zFile,"off")!=0 ){ + utf8_printf(stderr,"Error: cannot write to \"%s\"\n", zFile); + } + p->out = stdout; + rc = 1; + } else { + sqlite3_snprintf(sizeof(p->outfile), p->outfile, "%s", zFile); + } + } + }else + + if( c=='p' && n>=3 && strncmp(azArg[0], "print", n)==0 ){ + int i; + for(i=1; i1 ) raw_printf(p->out, " "); + utf8_printf(p->out, "%s", azArg[i]); + } + raw_printf(p->out, "\n"); + }else + + if( c=='p' && strncmp(azArg[0], "prompt", n)==0 ){ + if( nArg >= 2) { + strncpy(mainPrompt,azArg[1],(int)ArraySize(mainPrompt)-1); + } + if( nArg >= 3) { + strncpy(continuePrompt,azArg[2],(int)ArraySize(continuePrompt)-1); + } + }else + + if( c=='q' && strncmp(azArg[0], "quit", n)==0 ){ + rc = 2; + }else + + if( c=='r' && n>=3 && strncmp(azArg[0], "read", n)==0 ){ + FILE *alt; + if( nArg!=2 ){ + raw_printf(stderr, "Usage: .read FILE\n"); + rc = 1; + goto meta_command_exit; + } + alt = fopen(azArg[1], "rb"); + if( alt==0 ){ + utf8_printf(stderr,"Error: cannot open \"%s\"\n", azArg[1]); + rc = 1; + }else{ + rc = process_input(p, alt); + fclose(alt); + } + }else + + if( c=='r' && n>=3 && strncmp(azArg[0], "restore", n)==0 ){ + const char *zSrcFile; + const char *zDb; + sqlite3 *pSrc; + sqlite3_backup *pBackup; + int nTimeout = 0; + + if( nArg==2 ){ + zSrcFile = azArg[1]; + zDb = "main"; + }else if( nArg==3 ){ + zSrcFile = azArg[2]; + zDb = azArg[1]; + }else{ + raw_printf(stderr, "Usage: .restore ?DB? 
FILE\n"); + rc = 1; + goto meta_command_exit; + } + rc = sqlite3_open(zSrcFile, &pSrc); + if( rc!=SQLITE_OK ){ + utf8_printf(stderr, "Error: cannot open \"%s\"\n", zSrcFile); + sqlite3_close(pSrc); + return 1; + } + open_db(p, 0); + pBackup = sqlite3_backup_init(p->db, zDb, pSrc, "main"); + if( pBackup==0 ){ + utf8_printf(stderr, "Error: %s\n", sqlite3_errmsg(p->db)); + sqlite3_close(pSrc); + return 1; + } + while( (rc = sqlite3_backup_step(pBackup,100))==SQLITE_OK + || rc==SQLITE_BUSY ){ + if( rc==SQLITE_BUSY ){ + if( nTimeout++ >= 3 ) break; + sqlite3_sleep(100); + } + } + sqlite3_backup_finish(pBackup); + if( rc==SQLITE_DONE ){ + rc = 0; + }else if( rc==SQLITE_BUSY || rc==SQLITE_LOCKED ){ + raw_printf(stderr, "Error: source database is busy\n"); + rc = 1; + }else{ + utf8_printf(stderr, "Error: %s\n", sqlite3_errmsg(p->db)); + rc = 1; + } + sqlite3_close(pSrc); + }else + + + if( c=='s' && strncmp(azArg[0], "scanstats", n)==0 ){ + if( nArg==2 ){ + p->scanstatsOn = booleanValue(azArg[1]); +#ifndef SQLITE_ENABLE_STMT_SCANSTATUS + raw_printf(stderr, "Warning: .scanstats not available in this build.\n"); +#endif + }else{ + raw_printf(stderr, "Usage: .scanstats on|off\n"); + rc = 1; + } + }else + + if( c=='s' && strncmp(azArg[0], "schema", n)==0 ){ + ShellState data; + char *zErrMsg = 0; + open_db(p, 0); + memcpy(&data, p, sizeof(data)); + data.showHeader = 0; + data.cMode = data.mode = MODE_Semi; + if( nArg==2 ){ + int i; + for(i=0; azArg[1][i]; i++) azArg[1][i] = ToLower(azArg[1][i]); + if( strcmp(azArg[1],"sqlite_master")==0 ){ + char *new_argv[2], *new_colv[2]; + new_argv[0] = "CREATE TABLE sqlite_master (\n" + " type text,\n" + " name text,\n" + " tbl_name text,\n" + " rootpage integer,\n" + " sql text\n" + ")"; + new_argv[1] = 0; + new_colv[0] = "sql"; + new_colv[1] = 0; + callback(&data, 1, new_argv, new_colv); + rc = SQLITE_OK; + }else if( strcmp(azArg[1],"sqlite_temp_master")==0 ){ + char *new_argv[2], *new_colv[2]; + new_argv[0] = "CREATE TEMP TABLE sqlite_temp_master (\n" + " type text,\n" + " name text,\n" + " tbl_name text,\n" + " rootpage integer,\n" + " sql text\n" + ")"; + new_argv[1] = 0; + new_colv[0] = "sql"; + new_colv[1] = 0; + callback(&data, 1, new_argv, new_colv); + rc = SQLITE_OK; + }else{ + zShellStatic = azArg[1]; + rc = sqlite3_exec(p->db, + "SELECT sql FROM " + " (SELECT sql sql, type type, tbl_name tbl_name, name name, rowid x" + " FROM sqlite_master UNION ALL" + " SELECT sql, type, tbl_name, name, rowid FROM sqlite_temp_master) " + "WHERE lower(tbl_name) LIKE shellstatic()" + " AND type!='meta' AND sql NOTNULL " + "ORDER BY rowid", + callback, &data, &zErrMsg); + zShellStatic = 0; + } + }else if( nArg==1 ){ + rc = sqlite3_exec(p->db, + "SELECT sql FROM " + " (SELECT sql sql, type type, tbl_name tbl_name, name name, rowid x" + " FROM sqlite_master UNION ALL" + " SELECT sql, type, tbl_name, name, rowid FROM sqlite_temp_master) " + "WHERE type!='meta' AND sql NOTNULL AND name NOT LIKE 'sqlite_%' " + "ORDER BY rowid", + callback, &data, &zErrMsg + ); + }else{ + raw_printf(stderr, "Usage: .schema ?LIKE-PATTERN?\n"); + rc = 1; + goto meta_command_exit; + } + if( zErrMsg ){ + utf8_printf(stderr,"Error: %s\n", zErrMsg); + sqlite3_free(zErrMsg); + rc = 1; + }else if( rc != SQLITE_OK ){ + raw_printf(stderr,"Error: querying schema information\n"); + rc = 1; + }else{ + rc = 0; + } + }else + + +#if defined(SQLITE_DEBUG) && defined(SQLITE_ENABLE_SELECTTRACE) + if( c=='s' && n==11 && strncmp(azArg[0], "selecttrace", n)==0 ){ + extern int sqlite3SelectTrace; + sqlite3SelectTrace 
= integerValue(azArg[1]); + }else +#endif + + +#ifdef SQLITE_DEBUG + /* Undocumented commands for internal testing. Subject to change + ** without notice. */ + if( c=='s' && n>=10 && strncmp(azArg[0], "selftest-", 9)==0 ){ + if( strncmp(azArg[0]+9, "boolean", n-9)==0 ){ + int i, v; + for(i=1; iout, "%s: %d 0x%x\n", azArg[i], v, v); + } + } + if( strncmp(azArg[0]+9, "integer", n-9)==0 ){ + int i; sqlite3_int64 v; + for(i=1; iout, "%s", zBuf); + } + } + }else +#endif + + if( c=='s' && strncmp(azArg[0], "separator", n)==0 ){ + if( nArg<2 || nArg>3 ){ + raw_printf(stderr, "Usage: .separator COL ?ROW?\n"); + rc = 1; + } + if( nArg>=2 ){ + sqlite3_snprintf(sizeof(p->colSeparator), p->colSeparator, + "%.*s", (int)ArraySize(p->colSeparator)-1, azArg[1]); + } + if( nArg>=3 ){ + sqlite3_snprintf(sizeof(p->rowSeparator), p->rowSeparator, + "%.*s", (int)ArraySize(p->rowSeparator)-1, azArg[2]); + } + }else + + if( c=='s' + && (strncmp(azArg[0], "shell", n)==0 || strncmp(azArg[0],"system",n)==0) + ){ + char *zCmd; + int i, x; + if( nArg<2 ){ + raw_printf(stderr, "Usage: .system COMMAND\n"); + rc = 1; + goto meta_command_exit; + } + zCmd = sqlite3_mprintf(strchr(azArg[1],' ')==0?"%s":"\"%s\"", azArg[1]); + for(i=2; iout, "%12.12s: %s\n","echo", p->echoOn ? "on" : "off"); + utf8_printf(p->out, "%12.12s: %s\n","eqp", p->autoEQP ? "on" : "off"); + utf8_printf(p->out, "%12.12s: %s\n","explain", + p->mode==MODE_Explain ? "on" : p->autoExplain ? "auto" : "off"); + utf8_printf(p->out,"%12.12s: %s\n","headers", p->showHeader ? "on" : "off"); + utf8_printf(p->out, "%12.12s: %s\n","mode", modeDescr[p->mode]); + utf8_printf(p->out, "%12.12s: ", "nullvalue"); + output_c_string(p->out, p->nullValue); + raw_printf(p->out, "\n"); + utf8_printf(p->out,"%12.12s: %s\n","output", + strlen30(p->outfile) ? p->outfile : "stdout"); + utf8_printf(p->out,"%12.12s: ", "colseparator"); + output_c_string(p->out, p->colSeparator); + raw_printf(p->out, "\n"); + utf8_printf(p->out,"%12.12s: ", "rowseparator"); + output_c_string(p->out, p->rowSeparator); + raw_printf(p->out, "\n"); + utf8_printf(p->out, "%12.12s: %s\n","stats", p->statsOn ? "on" : "off"); + utf8_printf(p->out, "%12.12s: ", "width"); + for (i=0;i<(int)ArraySize(p->colWidth) && p->colWidth[i] != 0;i++) { + raw_printf(p->out, "%d ", p->colWidth[i]); + } + raw_printf(p->out, "\n"); + }else + + if( c=='s' && strncmp(azArg[0], "stats", n)==0 ){ + if( nArg==2 ){ + p->statsOn = booleanValue(azArg[1]); + }else{ + raw_printf(stderr, "Usage: .stats on|off\n"); + rc = 1; + } + }else + + if( c=='t' && n>1 && strncmp(azArg[0], "tables", n)==0 ){ + sqlite3_stmt *pStmt; + char **azResult; + int nRow, nAlloc; + char *zSql = 0; + int ii; + open_db(p, 0); + rc = sqlite3_prepare_v2(p->db, "PRAGMA database_list", -1, &pStmt, 0); + if( rc ) return shellDatabaseError(p->db); + + /* Create an SQL statement to query for the list of tables in the + ** main and all attached databases where the table name matches the + ** LIKE pattern bound to variable "?1". */ + zSql = sqlite3_mprintf( + "SELECT name FROM sqlite_master" + " WHERE type IN ('table','view')" + " AND name NOT LIKE 'sqlite_%%'" + " AND name LIKE ?1"); + while( zSql && sqlite3_step(pStmt)==SQLITE_ROW ){ + const char *zDbName = (const char*)sqlite3_column_text(pStmt, 1); + if( zDbName==0 || strcmp(zDbName,"main")==0 ) continue; + if( strcmp(zDbName,"temp")==0 ){ + zSql = sqlite3_mprintf( + "%z UNION ALL " + "SELECT 'temp.' 
|| name FROM sqlite_temp_master" + " WHERE type IN ('table','view')" + " AND name NOT LIKE 'sqlite_%%'" + " AND name LIKE ?1", zSql); + }else{ + zSql = sqlite3_mprintf( + "%z UNION ALL " + "SELECT '%q.' || name FROM \"%w\".sqlite_master" + " WHERE type IN ('table','view')" + " AND name NOT LIKE 'sqlite_%%'" + " AND name LIKE ?1", zSql, zDbName, zDbName); + } + } + rc = sqlite3_finalize(pStmt); + if( zSql && rc==SQLITE_OK ){ + zSql = sqlite3_mprintf("%z ORDER BY 1", zSql); + if( zSql ) rc = sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0); + } + sqlite3_free(zSql); + if( !zSql ) return shellNomemError(); + if( rc ) return shellDatabaseError(p->db); + + /* Run the SQL statement prepared by the above block. Store the results + ** as an array of nul-terminated strings in azResult[]. */ + nRow = nAlloc = 0; + azResult = 0; + if( nArg>1 ){ + sqlite3_bind_text(pStmt, 1, azArg[1], -1, SQLITE_TRANSIENT); + }else{ + sqlite3_bind_text(pStmt, 1, "%", -1, SQLITE_STATIC); + } + while( sqlite3_step(pStmt)==SQLITE_ROW ){ + if( nRow>=nAlloc ){ + char **azNew; + int n2 = nAlloc*2 + 10; + azNew = sqlite3_realloc64(azResult, sizeof(azResult[0])*n2); + if( azNew==0 ){ + rc = shellNomemError(); + break; + } + nAlloc = n2; + azResult = azNew; + } + azResult[nRow] = sqlite3_mprintf("%s", sqlite3_column_text(pStmt, 0)); + if( 0==azResult[nRow] ){ + rc = shellNomemError(); + break; + } + nRow++; + } + if( sqlite3_finalize(pStmt)!=SQLITE_OK ){ + rc = shellDatabaseError(p->db); + } + + /* Pretty-print the contents of array azResult[] to the output */ + if( rc==0 && nRow>0 ){ + int len, maxlen = 0; + int i, j; + int nPrintCol, nPrintRow; + for(i=0; imaxlen ) maxlen = len; + } + nPrintCol = 80/(maxlen+2); + if( nPrintCol<1 ) nPrintCol = 1; + nPrintRow = (nRow + nPrintCol - 1)/nPrintCol; + for(i=0; iout, "%s%-*s", zSp, maxlen, + azResult[j] ? azResult[j]:""); + } + raw_printf(p->out, "\n"); + } + } + + for(ii=0; ii=8 && strncmp(azArg[0], "testctrl", n)==0 && nArg>=2 ){ + static const struct { + const char *zCtrlName; /* Name of a test-control option */ + int ctrlCode; /* Integer code for that option */ + } aCtrl[] = { + { "prng_save", SQLITE_TESTCTRL_PRNG_SAVE }, + { "prng_restore", SQLITE_TESTCTRL_PRNG_RESTORE }, + { "prng_reset", SQLITE_TESTCTRL_PRNG_RESET }, + { "bitvec_test", SQLITE_TESTCTRL_BITVEC_TEST }, + { "fault_install", SQLITE_TESTCTRL_FAULT_INSTALL }, + { "benign_malloc_hooks", SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS }, + { "pending_byte", SQLITE_TESTCTRL_PENDING_BYTE }, + { "assert", SQLITE_TESTCTRL_ASSERT }, + { "always", SQLITE_TESTCTRL_ALWAYS }, + { "reserve", SQLITE_TESTCTRL_RESERVE }, + { "optimizations", SQLITE_TESTCTRL_OPTIMIZATIONS }, + { "iskeyword", SQLITE_TESTCTRL_ISKEYWORD }, + { "scratchmalloc", SQLITE_TESTCTRL_SCRATCHMALLOC }, + { "byteorder", SQLITE_TESTCTRL_BYTEORDER }, + { "never_corrupt", SQLITE_TESTCTRL_NEVER_CORRUPT }, + { "imposter", SQLITE_TESTCTRL_IMPOSTER }, + }; + int testctrl = -1; + int rc2 = 0; + int i, n2; + open_db(p, 0); + + /* convert testctrl text option to value. allow any unique prefix + ** of the option name, or a numerical value. 
*/ + n2 = strlen30(azArg[1]); + for(i=0; iSQLITE_TESTCTRL_LAST) ){ + utf8_printf(stderr,"Error: invalid testctrl option: %s\n", azArg[1]); + }else{ + switch(testctrl){ + + /* sqlite3_test_control(int, db, int) */ + case SQLITE_TESTCTRL_OPTIMIZATIONS: + case SQLITE_TESTCTRL_RESERVE: + if( nArg==3 ){ + int opt = (int)strtol(azArg[2], 0, 0); + rc2 = sqlite3_test_control(testctrl, p->db, opt); + raw_printf(p->out, "%d (0x%08x)\n", rc2, rc2); + } else { + utf8_printf(stderr,"Error: testctrl %s takes a single int option\n", + azArg[1]); + } + break; + + /* sqlite3_test_control(int) */ + case SQLITE_TESTCTRL_PRNG_SAVE: + case SQLITE_TESTCTRL_PRNG_RESTORE: + case SQLITE_TESTCTRL_PRNG_RESET: + case SQLITE_TESTCTRL_BYTEORDER: + if( nArg==2 ){ + rc2 = sqlite3_test_control(testctrl); + raw_printf(p->out, "%d (0x%08x)\n", rc2, rc2); + } else { + utf8_printf(stderr,"Error: testctrl %s takes no options\n", + azArg[1]); + } + break; + + /* sqlite3_test_control(int, uint) */ + case SQLITE_TESTCTRL_PENDING_BYTE: + if( nArg==3 ){ + unsigned int opt = (unsigned int)integerValue(azArg[2]); + rc2 = sqlite3_test_control(testctrl, opt); + raw_printf(p->out, "%d (0x%08x)\n", rc2, rc2); + } else { + utf8_printf(stderr,"Error: testctrl %s takes a single unsigned" + " int option\n", azArg[1]); + } + break; + + /* sqlite3_test_control(int, int) */ + case SQLITE_TESTCTRL_ASSERT: + case SQLITE_TESTCTRL_ALWAYS: + case SQLITE_TESTCTRL_NEVER_CORRUPT: + if( nArg==3 ){ + int opt = booleanValue(azArg[2]); + rc2 = sqlite3_test_control(testctrl, opt); + raw_printf(p->out, "%d (0x%08x)\n", rc2, rc2); + } else { + utf8_printf(stderr,"Error: testctrl %s takes a single int option\n", + azArg[1]); + } + break; + + /* sqlite3_test_control(int, char *) */ +#ifdef SQLITE_N_KEYWORD + case SQLITE_TESTCTRL_ISKEYWORD: + if( nArg==3 ){ + const char *opt = azArg[2]; + rc2 = sqlite3_test_control(testctrl, opt); + raw_printf(p->out, "%d (0x%08x)\n", rc2, rc2); + } else { + utf8_printf(stderr, + "Error: testctrl %s takes a single char * option\n", + azArg[1]); + } + break; +#endif + + case SQLITE_TESTCTRL_IMPOSTER: + if( nArg==5 ){ + rc2 = sqlite3_test_control(testctrl, p->db, + azArg[2], + integerValue(azArg[3]), + integerValue(azArg[4])); + raw_printf(p->out, "%d (0x%08x)\n", rc2, rc2); + }else{ + raw_printf(stderr,"Usage: .testctrl imposter dbName onoff tnum\n"); + } + break; + + case SQLITE_TESTCTRL_BITVEC_TEST: + case SQLITE_TESTCTRL_FAULT_INSTALL: + case SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS: + case SQLITE_TESTCTRL_SCRATCHMALLOC: + default: + utf8_printf(stderr, + "Error: CLI support for testctrl %s not implemented\n", + azArg[1]); + break; + } + } + }else + + if( c=='t' && n>4 && strncmp(azArg[0], "timeout", n)==0 ){ + open_db(p, 0); + sqlite3_busy_timeout(p->db, nArg>=2 ? 
(int)integerValue(azArg[1]) : 0); + }else + + if( c=='t' && n>=5 && strncmp(azArg[0], "timer", n)==0 ){ + if( nArg==2 ){ + enableTimer = booleanValue(azArg[1]); + if( enableTimer && !HAS_TIMER ){ + raw_printf(stderr, "Error: timer not available on this system.\n"); + enableTimer = 0; + } + }else{ + raw_printf(stderr, "Usage: .timer on|off\n"); + rc = 1; + } + }else + + if( c=='t' && strncmp(azArg[0], "trace", n)==0 ){ + open_db(p, 0); + if( nArg!=2 ){ + raw_printf(stderr, "Usage: .trace FILE|off\n"); + rc = 1; + goto meta_command_exit; + } + output_file_close(p->traceOut); + p->traceOut = output_file_open(azArg[1]); +#if !defined(SQLITE_OMIT_TRACE) && !defined(SQLITE_OMIT_FLOATING_POINT) + if( p->traceOut==0 ){ + sqlite3_trace(p->db, 0, 0); + }else{ + sqlite3_trace(p->db, sql_trace_callback, p->traceOut); + } +#endif + }else + +#if SQLITE_USER_AUTHENTICATION + if( c=='u' && strncmp(azArg[0], "user", n)==0 ){ + if( nArg<2 ){ + raw_printf(stderr, "Usage: .user SUBCOMMAND ...\n"); + rc = 1; + goto meta_command_exit; + } + open_db(p, 0); + if( strcmp(azArg[1],"login")==0 ){ + if( nArg!=4 ){ + raw_printf(stderr, "Usage: .user login USER PASSWORD\n"); + rc = 1; + goto meta_command_exit; + } + rc = sqlite3_user_authenticate(p->db, azArg[2], azArg[3], + (int)strlen(azArg[3])); + if( rc ){ + utf8_printf(stderr, "Authentication failed for user %s\n", azArg[2]); + rc = 1; + } + }else if( strcmp(azArg[1],"add")==0 ){ + if( nArg!=5 ){ + raw_printf(stderr, "Usage: .user add USER PASSWORD ISADMIN\n"); + rc = 1; + goto meta_command_exit; + } + rc = sqlite3_user_add(p->db, azArg[2], + azArg[3], (int)strlen(azArg[3]), + booleanValue(azArg[4])); + if( rc ){ + raw_printf(stderr, "User-Add failed: %d\n", rc); + rc = 1; + } + }else if( strcmp(azArg[1],"edit")==0 ){ + if( nArg!=5 ){ + raw_printf(stderr, "Usage: .user edit USER PASSWORD ISADMIN\n"); + rc = 1; + goto meta_command_exit; + } + rc = sqlite3_user_change(p->db, azArg[2], + azArg[3], (int)strlen(azArg[3]), + booleanValue(azArg[4])); + if( rc ){ + raw_printf(stderr, "User-Edit failed: %d\n", rc); + rc = 1; + } + }else if( strcmp(azArg[1],"delete")==0 ){ + if( nArg!=3 ){ + raw_printf(stderr, "Usage: .user delete USER\n"); + rc = 1; + goto meta_command_exit; + } + rc = sqlite3_user_delete(p->db, azArg[2]); + if( rc ){ + raw_printf(stderr, "User-Delete failed: %d\n", rc); + rc = 1; + } + }else{ + raw_printf(stderr, "Usage: .user login|add|edit|delete ...\n"); + rc = 1; + goto meta_command_exit; + } + }else +#endif /* SQLITE_USER_AUTHENTICATION */ + + if( c=='v' && strncmp(azArg[0], "version", n)==0 ){ + utf8_printf(p->out, "SQLite %s %s\n" /*extra-version-info*/, + sqlite3_libversion(), sqlite3_sourceid()); + }else + + if( c=='v' && strncmp(azArg[0], "vfsinfo", n)==0 ){ + const char *zDbName = nArg==2 ? azArg[1] : "main"; + sqlite3_vfs *pVfs; + if( p->db ){ + sqlite3_file_control(p->db, zDbName, SQLITE_FCNTL_VFS_POINTER, &pVfs); + if( pVfs ){ + utf8_printf(p->out, "vfs.zName = \"%s\"\n", pVfs->zName); + raw_printf(p->out, "vfs.iVersion = %d\n", pVfs->iVersion); + raw_printf(p->out, "vfs.szOsFile = %d\n", pVfs->szOsFile); + raw_printf(p->out, "vfs.mxPathname = %d\n", pVfs->mxPathname); + } + } + }else + + if( c=='v' && strncmp(azArg[0], "vfslist", n)==0 ){ + sqlite3_vfs *pVfs; + sqlite3_vfs *pCurrent = 0; + if( p->db ){ + sqlite3_file_control(p->db, "main", SQLITE_FCNTL_VFS_POINTER, &pCurrent); + } + for(pVfs=sqlite3_vfs_find(0); pVfs; pVfs=pVfs->pNext){ + utf8_printf(p->out, "vfs.zName = \"%s\"%s\n", pVfs->zName, + pVfs==pCurrent ? 
" <--- CURRENT" : ""); + raw_printf(p->out, "vfs.iVersion = %d\n", pVfs->iVersion); + raw_printf(p->out, "vfs.szOsFile = %d\n", pVfs->szOsFile); + raw_printf(p->out, "vfs.mxPathname = %d\n", pVfs->mxPathname); + if( pVfs->pNext ){ + raw_printf(p->out, "-----------------------------------\n"); + } + } + }else + + if( c=='v' && strncmp(azArg[0], "vfsname", n)==0 ){ + const char *zDbName = nArg==2 ? azArg[1] : "main"; + char *zVfsName = 0; + if( p->db ){ + sqlite3_file_control(p->db, zDbName, SQLITE_FCNTL_VFSNAME, &zVfsName); + if( zVfsName ){ + utf8_printf(p->out, "%s\n", zVfsName); + sqlite3_free(zVfsName); + } + } + }else + +#if defined(SQLITE_DEBUG) && defined(SQLITE_ENABLE_WHERETRACE) + if( c=='w' && strncmp(azArg[0], "wheretrace", n)==0 ){ + extern int sqlite3WhereTrace; + sqlite3WhereTrace = nArg>=2 ? booleanValue(azArg[1]) : 0xff; + }else +#endif + + if( c=='w' && strncmp(azArg[0], "width", n)==0 ){ + int j; + assert( nArg<=ArraySize(azArg) ); + for(j=1; jcolWidth); j++){ + p->colWidth[j-1] = (int)integerValue(azArg[j]); + } + }else + + { + utf8_printf(stderr, "Error: unknown command or invalid arguments: " + " \"%s\". Enter \".help\" for help\n", azArg[0]); + rc = 1; + } + +meta_command_exit: + if( p->outCount ){ + p->outCount--; + if( p->outCount==0 ) output_reset(p); + } + return rc; +} + +/* +** Return TRUE if a semicolon occurs anywhere in the first N characters +** of string z[]. +*/ +static int line_contains_semicolon(const char *z, int N){ + int i; + for(i=0; iout); + zLine = one_input_line(in, zLine, nSql>0); + if( zLine==0 ){ + /* End of input */ + if( stdin_is_interactive ) printf("\n"); + break; + } + if( seenInterrupt ){ + if( in!=0 ) break; + seenInterrupt = 0; + } + lineno++; + if( nSql==0 && _all_whitespace(zLine) ){ + if( p->echoOn ) printf("%s\n", zLine); + continue; + } + if( zLine && zLine[0]=='.' 
&& nSql==0 ){ + if( p->echoOn ) printf("%s\n", zLine); + rc = do_meta_command(zLine, p); + if( rc==2 ){ /* exit requested */ + break; + }else if( rc ){ + errCnt++; + } + continue; + } + if( line_is_command_terminator(zLine) && line_is_complete(zSql, nSql) ){ + memcpy(zLine,";",2); + } + nLine = strlen30(zLine); + if( nSql+nLine+2>=nAlloc ){ + nAlloc = nSql+nLine+100; + zSql = realloc(zSql, nAlloc); + if( zSql==0 ){ + raw_printf(stderr, "Error: out of memory\n"); + exit(1); + } + } + nSqlPrior = nSql; + if( nSql==0 ){ + int i; + for(i=0; zLine[i] && IsSpace(zLine[i]); i++){} + assert( nAlloc>0 && zSql!=0 ); + memcpy(zSql, zLine+i, nLine+1-i); + startline = lineno; + nSql = nLine-i; + }else{ + zSql[nSql++] = '\n'; + memcpy(zSql+nSql, zLine, nLine+1); + nSql += nLine; + } + if( nSql && line_contains_semicolon(&zSql[nSqlPrior], nSql-nSqlPrior) + && sqlite3_complete(zSql) ){ + p->cnt = 0; + open_db(p, 0); + if( p->backslashOn ) resolve_backslashes(zSql); + BEGIN_TIMER; + rc = shell_exec(p->db, zSql, shell_callback, p, &zErrMsg); + END_TIMER; + if( rc || zErrMsg ){ + char zPrefix[100]; + if( in!=0 || !stdin_is_interactive ){ + sqlite3_snprintf(sizeof(zPrefix), zPrefix, + "Error: near line %d:", startline); + }else{ + sqlite3_snprintf(sizeof(zPrefix), zPrefix, "Error:"); + } + if( zErrMsg!=0 ){ + utf8_printf(stderr, "%s %s\n", zPrefix, zErrMsg); + sqlite3_free(zErrMsg); + zErrMsg = 0; + }else{ + utf8_printf(stderr, "%s %s\n", zPrefix, sqlite3_errmsg(p->db)); + } + errCnt++; + }else if( p->countChanges ){ + raw_printf(p->out, "changes: %3d total_changes: %d\n", + sqlite3_changes(p->db), sqlite3_total_changes(p->db)); + } + nSql = 0; + if( p->outCount ){ + output_reset(p); + p->outCount = 0; + } + }else if( nSql && _all_whitespace(zSql) ){ + if( p->echoOn ) printf("%s\n", zSql); + nSql = 0; + } + } + if( nSql ){ + if( !_all_whitespace(zSql) ){ + utf8_printf(stderr, "Error: incomplete SQL: %s\n", zSql); + errCnt++; + } + } + free(zSql); + free(zLine); + return errCnt>0; +} + +/* +** Return a pathname which is the user's home directory. A +** 0 return indicates an error of some kind. +*/ +static char *find_home_dir(void){ + static char *home_dir = NULL; + if( home_dir ) return home_dir; + +#if !defined(_WIN32) && !defined(WIN32) && !defined(_WIN32_WCE) \ + && !defined(__RTP__) && !defined(_WRS_KERNEL) + { + struct passwd *pwent; + uid_t uid = getuid(); + if( (pwent=getpwuid(uid)) != NULL) { + home_dir = pwent->pw_dir; + } + } +#endif + +#if defined(_WIN32_WCE) + /* Windows CE (arm-wince-mingw32ce-gcc) does not provide getenv() + */ + home_dir = "/"; +#else + +#if defined(_WIN32) || defined(WIN32) + if (!home_dir) { + home_dir = getenv("USERPROFILE"); + } +#endif + + if (!home_dir) { + home_dir = getenv("HOME"); + } + +#if defined(_WIN32) || defined(WIN32) + if (!home_dir) { + char *zDrive, *zPath; + int n; + zDrive = getenv("HOMEDRIVE"); + zPath = getenv("HOMEPATH"); + if( zDrive && zPath ){ + n = strlen30(zDrive) + strlen30(zPath) + 1; + home_dir = malloc( n ); + if( home_dir==0 ) return 0; + sqlite3_snprintf(n, home_dir, "%s%s", zDrive, zPath); + return home_dir; + } + home_dir = "c:\\"; + } +#endif + +#endif /* !_WIN32_WCE */ + + if( home_dir ){ + int n = strlen30(home_dir) + 1; + char *z = malloc( n ); + if( z ) memcpy(z, home_dir, n); + home_dir = z; + } + + return home_dir; +} + +/* +** Read input from the file given by sqliterc_override. Or if that +** parameter is NULL, take input from ~/.sqliterc +** +** Returns the number of errors. 
+*/ +static void process_sqliterc( + ShellState *p, /* Configuration data */ + const char *sqliterc_override /* Name of config file. NULL to use default */ +){ + char *home_dir = NULL; + const char *sqliterc = sqliterc_override; + char *zBuf = 0; + FILE *in = NULL; + + if (sqliterc == NULL) { + home_dir = find_home_dir(); + if( home_dir==0 ){ + raw_printf(stderr, "-- warning: cannot find home directory;" + " cannot read ~/.sqliterc\n"); + return; + } + sqlite3_initialize(); + zBuf = sqlite3_mprintf("%s/.sqliterc",home_dir); + sqliterc = zBuf; + } + in = fopen(sqliterc,"rb"); + if( in ){ + if( stdin_is_interactive ){ + utf8_printf(stderr,"-- Loading resources from %s\n",sqliterc); + } + process_input(p,in); + fclose(in); + } + sqlite3_free(zBuf); +} + +/* +** Show available command line options +*/ +static const char zOptions[] = + " -ascii set output mode to 'ascii'\n" + " -bail stop after hitting an error\n" + " -batch force batch I/O\n" + " -column set output mode to 'column'\n" + " -cmd COMMAND run \"COMMAND\" before reading stdin\n" + " -csv set output mode to 'csv'\n" + " -echo print commands before execution\n" + " -init FILENAME read/process named file\n" + " -[no]header turn headers on or off\n" +#if defined(SQLITE_ENABLE_MEMSYS3) || defined(SQLITE_ENABLE_MEMSYS5) + " -heap SIZE Size of heap for memsys3 or memsys5\n" +#endif + " -help show this message\n" + " -html set output mode to HTML\n" + " -interactive force interactive I/O\n" + " -line set output mode to 'line'\n" + " -list set output mode to 'list'\n" + " -lookaside SIZE N use N entries of SZ bytes for lookaside memory\n" + " -mmap N default mmap size set to N\n" +#ifdef SQLITE_ENABLE_MULTIPLEX + " -multiplex enable the multiplexor VFS\n" +#endif + " -newline SEP set output row separator. Default: '\\n'\n" + " -nullvalue TEXT set text string for NULL values. Default ''\n" + " -pagecache SIZE N use N slots of SZ bytes each for page cache memory\n" + " -scratch SIZE N use N slots of SZ bytes each for scratch memory\n" + " -separator SEP set output column separator. Default: '|'\n" + " -stats print memory stats before each finalize\n" + " -version show SQLite version\n" + " -vfs NAME use NAME as the default VFS\n" +#ifdef SQLITE_ENABLE_VFSTRACE + " -vfstrace enable tracing of all VFS calls\n" +#endif +; +static void usage(int showDetail){ + utf8_printf(stderr, + "Usage: %s [OPTIONS] FILENAME [SQL]\n" + "FILENAME is the name of an SQLite database. A new database is created\n" + "if the file does not previously exist.\n", Argv0); + if( showDetail ){ + utf8_printf(stderr, "OPTIONS include:\n%s", zOptions); + }else{ + raw_printf(stderr, "Use the -help option for additional information\n"); + } + exit(1); +} + +/* +** Initialize the state information in data +*/ +static void main_init(ShellState *data) { + memset(data, 0, sizeof(*data)); + data->normalMode = data->cMode = data->mode = MODE_List; + data->autoExplain = 1; + memcpy(data->colSeparator,SEP_Column, 2); + memcpy(data->rowSeparator,SEP_Row, 2); + data->showHeader = 0; + data->shellFlgs = SHFLG_Lookaside; + sqlite3_config(SQLITE_CONFIG_URI, 1); + sqlite3_config(SQLITE_CONFIG_LOG, shellLog, data); + sqlite3_config(SQLITE_CONFIG_MULTITHREAD); + sqlite3_snprintf(sizeof(mainPrompt), mainPrompt,"sqlite> "); + sqlite3_snprintf(sizeof(continuePrompt), continuePrompt," ...> "); +} + +/* +** Output text to the console in a font that attracts extra attention. 
+*/ +#ifdef _WIN32 +static void printBold(const char *zText){ + HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE); + CONSOLE_SCREEN_BUFFER_INFO defaultScreenInfo; + GetConsoleScreenBufferInfo(out, &defaultScreenInfo); + SetConsoleTextAttribute(out, + FOREGROUND_RED|FOREGROUND_INTENSITY + ); + printf("%s", zText); + SetConsoleTextAttribute(out, defaultScreenInfo.wAttributes); +} +#else +static void printBold(const char *zText){ + printf("\033[1m%s\033[0m", zText); +} +#endif + +/* +** Get the argument to an --option. Throw an error and die if no argument +** is available. +*/ +static char *cmdline_option_value(int argc, char **argv, int i){ + if( i==argc ){ + utf8_printf(stderr, "%s: Error: missing argument to %s\n", + argv[0], argv[argc-1]); + exit(1); + } + return argv[i]; +} + +int SQLITE_CDECL main(int argc, char **argv){ + char *zErrMsg = 0; + ShellState data; + const char *zInitFile = 0; + int i; + int rc = 0; + int warnInmemoryDb = 0; + int readStdin = 1; + int nCmd = 0; + char **azCmd = 0; + +#if USE_SYSTEM_SQLITE+0!=1 + if( strcmp(sqlite3_sourceid(),SQLITE_SOURCE_ID)!=0 ){ + utf8_printf(stderr, "SQLite header and source version mismatch\n%s\n%s\n", + sqlite3_sourceid(), SQLITE_SOURCE_ID); + exit(1); + } +#endif + setBinaryMode(stdin); + setvbuf(stderr, 0, _IONBF, 0); /* Make sure stderr is unbuffered */ + Argv0 = argv[0]; + main_init(&data); + stdin_is_interactive = isatty(0); + stdout_is_console = isatty(1); + + /* Make sure we have a valid signal handler early, before anything + ** else is done. + */ +#ifdef SIGINT + signal(SIGINT, interrupt_handler); +#endif + +#ifdef SQLITE_SHELL_DBNAME_PROC + { + /* If the SQLITE_SHELL_DBNAME_PROC macro is defined, then it is the name + ** of a C-function that will provide the name of the database file. Use + ** this compile-time option to embed this shell program in larger + ** applications. */ + extern void SQLITE_SHELL_DBNAME_PROC(const char**); + SQLITE_SHELL_DBNAME_PROC(&data.zDbFilename); + warnInmemoryDb = 0; + } +#endif + + /* Do an initial pass through the command-line argument to locate + ** the name of the database file, the name of the initialization file, + ** the size of the alternative malloc heap, + ** and the first command to execute. + */ + for(i=1; i0x7fff0000 ) szHeap = 0x7fff0000; + sqlite3_config(SQLITE_CONFIG_HEAP, malloc((int)szHeap), (int)szHeap, 64); +#endif + }else if( strcmp(z,"-scratch")==0 ){ + int n, sz; + sz = (int)integerValue(cmdline_option_value(argc,argv,++i)); + if( sz>400000 ) sz = 400000; + if( sz<2500 ) sz = 2500; + n = (int)integerValue(cmdline_option_value(argc,argv,++i)); + if( n>10 ) n = 10; + if( n<1 ) n = 1; + sqlite3_config(SQLITE_CONFIG_SCRATCH, malloc(n*sz+1), sz, n); + data.shellFlgs |= SHFLG_Scratch; + }else if( strcmp(z,"-pagecache")==0 ){ + int n, sz; + sz = (int)integerValue(cmdline_option_value(argc,argv,++i)); + if( sz>70000 ) sz = 70000; + if( sz<0 ) sz = 0; + n = (int)integerValue(cmdline_option_value(argc,argv,++i)); + sqlite3_config(SQLITE_CONFIG_PAGECACHE, + (n>0 && sz>0) ? 
malloc(n*sz) : 0, sz, n); + data.shellFlgs |= SHFLG_Pagecache; + }else if( strcmp(z,"-lookaside")==0 ){ + int n, sz; + sz = (int)integerValue(cmdline_option_value(argc,argv,++i)); + if( sz<0 ) sz = 0; + n = (int)integerValue(cmdline_option_value(argc,argv,++i)); + if( n<0 ) n = 0; + sqlite3_config(SQLITE_CONFIG_LOOKASIDE, sz, n); + if( sz*n==0 ) data.shellFlgs &= ~SHFLG_Lookaside; +#ifdef SQLITE_ENABLE_VFSTRACE + }else if( strcmp(z,"-vfstrace")==0 ){ + extern int vfstrace_register( + const char *zTraceName, + const char *zOldVfsName, + int (*xOut)(const char*,void*), + void *pOutArg, + int makeDefault + ); + vfstrace_register("trace",0,(int(*)(const char*,void*))fputs,stderr,1); +#endif +#ifdef SQLITE_ENABLE_MULTIPLEX + }else if( strcmp(z,"-multiplex")==0 ){ + extern int sqlite3_multiple_initialize(const char*,int); + sqlite3_multiplex_initialize(0, 1); +#endif + }else if( strcmp(z,"-mmap")==0 ){ + sqlite3_int64 sz = integerValue(cmdline_option_value(argc,argv,++i)); + sqlite3_config(SQLITE_CONFIG_MMAP_SIZE, sz, sz); + }else if( strcmp(z,"-vfs")==0 ){ + sqlite3_vfs *pVfs = sqlite3_vfs_find(cmdline_option_value(argc,argv,++i)); + if( pVfs ){ + sqlite3_vfs_register(pVfs, 1); + }else{ + utf8_printf(stderr, "no such VFS: \"%s\"\n", argv[i]); + exit(1); + } + } + } + if( data.zDbFilename==0 ){ +#ifndef SQLITE_OMIT_MEMORYDB + data.zDbFilename = ":memory:"; + warnInmemoryDb = argc==1; +#else + utf8_printf(stderr,"%s: Error: no database filename specified\n", Argv0); + return 1; +#endif + } + data.out = stdout; + + /* Go ahead and open the database file if it already exists. If the + ** file does not exist, delay opening it. This prevents empty database + ** files from being created if a user mistypes the database name argument + ** to the sqlite command-line tool. + */ + if( access(data.zDbFilename, 0)==0 ){ + open_db(&data, 0); + } + + /* Process the initialization file if there is one. If no -init option + ** is given on the command line, look for a file named ~/.sqliterc and + ** try to process it. + */ + process_sqliterc(&data,zInitFile); + + /* Make a second pass through the command-line argument and set + ** options. This second pass is delayed until after the initialization + ** file is processed so that the command-line arguments will override + ** settings in the initialization file. + */ + for(i=1; iError: Bad artifact IDs.

      + fossil_free(zCanonical); + zCanonical = 0; + break; + }else{ + canonical16(p, UUID_SIZE); + p += UUID_SIZE+1; + } + } + zUuid = zCanonical; } style_header("Shunned Artifacts"); if( zUuid && P("sub") ){ + const char *p = zUuid; + int allExist = 1; login_verify_csrf_secret(); - db_multi_exec("DELETE FROM shun WHERE uuid='%s'", zUuid); - if( db_exists("SELECT 1 FROM blob WHERE uuid='%s'", zUuid) ){ - @

      Artifact - @ %s(zUuid) is no - @ longer being shunned.

      + while( *p ){ + db_multi_exec("DELETE FROM shun WHERE uuid=%Q", p); + if( !db_exists("SELECT 1 FROM blob WHERE uuid=%Q", p) ){ + allExist = 0; + } + admin_log("Unshunned %Q", p); + p += UUID_SIZE+1; + } + if( allExist ){ + @

      Artifact(s)
      + for( p = zUuid ; *p ; p += UUID_SIZE+1 ){ + @ %s(p)
      + } + @ are no longer being shunned.

      }else{ - @

      Artifact %s(zUuid) will no longer - @ be shunned. But it does not exist in the repository. It - @ may be necessary to rebuild the repository using the + @

      Artifact(s)
      + for( p = zUuid ; *p ; p += UUID_SIZE+1 ){ + @ %s(p)
      + } + @ will no longer be shunned. But they may not exist in the repository. + @ It may be necessary to rebuild the repository using the @ fossil rebuild command-line before the artifact content - @ can pulled in from other respositories.

      + @ can be pulled in from other repositories.

      } } if( zUuid && P("add") ){ + const char *p = zUuid; + int rid, tagid; login_verify_csrf_secret(); - db_multi_exec("INSERT OR IGNORE INTO shun VALUES('%s')", zUuid); - @

      Artifact - @ %s(zUuid) has been - @ shunned. It will no longer be pushed. - @ It will be removed from the repository the next time the respository - @ is rebuilt using the fossil rebuild command-line

      + while( *p ){ + db_multi_exec( + "INSERT OR IGNORE INTO shun(uuid,mtime)" + " VALUES(%Q, now())", p); + db_multi_exec("DELETE FROM attachment WHERE src=%Q", p); + rid = db_int(0, "SELECT rid FROM blob WHERE uuid=%Q", p); + if( rid ){ + db_multi_exec("DELETE FROM event WHERE objid=%d", rid); + } + tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname='tkt-%q'", p); + if( tagid ){ + db_multi_exec("DELETE FROM ticket WHERE tkt_uuid=%Q", p); + db_multi_exec("DELETE FROM tag WHERE tagid=%d", tagid); + db_multi_exec("DELETE FROM tagxref WHERE tagid=%d", tagid); + } + admin_log("Shunned %Q", p); + p += UUID_SIZE+1; + } + @

      Artifact(s)
      + for( p = zUuid ; *p ; p += UUID_SIZE+1 ){ + @ %s(p)
      + } + @ have been shunned. They will no longer be pushed. + @ They will be removed from the repository the next time the repository + @ is rebuilt using the fossil rebuild command-line

      + } + if( zRcvid ){ + nRcvid = atoi(zRcvid); + numRows = db_int(0, "SELECT min(count(), 10) FROM blob WHERE rcvid=%d", + nRcvid); } @

      A shunned artifact will not be pushed nor accepted in a pull and the @ artifact content will be purged from the repository the next time the @ repository is rebuilt. A list of shunned artifacts can be seen at the @ bottom of this page.

      - @ + @ @ - @

      To shun an artifact, enter its artifact ID (the 40-character SHA1 - @ hash of the artifact) in the - @ following box and press the "Shun" button. This will cause the artifact - @ to be removed from the repository and will prevent the artifact from being + @

      To shun artifacts, enter their artifact IDs (the 40-character SHA1 + @ hash of the artifacts) in the + @ following box and press the "Shun" button. This will cause the artifacts + @ to be removed from the repository and will prevent the artifacts from being @ readded to the repository by subsequent sync operations.

      @ - @

      Note that you must enter the full 40-character artifact ID, not + @

      Note that you must enter the full 40-character artifact IDs, not @ an abbreviation or a symbolic tag.

      @ @

      Warning: Shunning should only be used to remove inappropriate content @ from the repository. Inappropriate content includes such things as @ spam added to Wiki, files that violate copyright or patent agreements, @ or artifacts that by design or accident interfere with the processing @ of the repository. Do not shun artifacts merely to remove them from @ sight - set the "hidden" tag on such artifacts instead.

      - @ + @ @
      - @
      + @
      login_insert_csrf_secret(); - @ - @ - @ + @ + @ + @
      @
      @ - @

      Enter the UUID of a previous shunned artifact to cause it to be - @ accepted again in the repository. The artifact content is not + @ + @

      Enter the UUIDs of previously shunned artifacts to cause them to be + @ accepted again in the repository. The artifacts' content is not @ restored because the content is unknown. The only change is that - @ the formerly shunned artifact will be accepted on subsequent sync + @ the formerly shunned artifacts will be accepted on subsequent sync @ operations.

      @ @
      - @
      + @
      login_insert_csrf_secret(); - @ - @ - @ + @ + @ + @
      @
      @ - @

      Press the Rebuild button below to rebuild the respository. The + @

      Press the Rebuild button below to rebuild the repository. The @ content of newly shunned artifacts is not purged until the repository @ is rebuilt. On larger repositories, the rebuild may take a minute or @ two, so be patient after pressing the button.

      @ @
      - @
      + @
      login_insert_csrf_secret(); - @ - @ + @ + @
      @
      - @ - @

      Shunned Artifacts:

      - @
      - db_prepare(&q, + @ + @

      Shunned Artifacts:

      + @

      + db_prepare(&q, "SELECT uuid, EXISTS(SELECT 1 FROM blob WHERE blob.uuid=shun.uuid)" " FROM shun ORDER BY uuid"); while( db_step(&q)==SQLITE_ROW ){ const char *zUuid = db_column_text(&q, 0); int stillExists = db_column_int(&q, 1); cnt++; if( stillExists ){ - @ %s(zUuid)
      + @ %s(zUuid)
      }else{ - @ %s(zUuid)
      + @ %s(zUuid)
      } } if( cnt==0 ){ @ no artifacts are shunned on this server } db_finalize(&q); - @

      + @

      style_footer(); + fossil_free(zCanonical); } /* ** Remove from the BLOB table all artifacts that are in the SHUN table. */ @@ -198,61 +296,86 @@ /* ** WEBPAGE: rcvfromlist ** ** Show a listing of RCVFROM table entries. +** +** The RCVFROM table records where this repository received each +** artifact, including the time of receipt, user, and IP address. +** +** Access requires Admin privilege. */ void rcvfromlist_page(void){ int ofst = atoi(PD("ofst","0")); + int showAll = P("all")!=0; int cnt; Stmt q; login_check_credentials(); - if( !g.okAdmin ){ - login_needed(); + if( !g.perm.Admin ){ + login_needed(0); + return; } - style_header("Content Sources"); + style_header("Artifact Receipts"); + if( showAll ){ + ofst = 0; + }else{ + style_submenu_element("All", "All", "rcvfromlist?all=1"); + } if( ofst>0 ){ style_submenu_element("Newer", "Newer", "rcvfromlist?ofst=%d", ofst>30 ? ofst-30 : 0); } - db_prepare(&q, - "SELECT rcvid, login, datetime(rcvfrom.mtime), rcvfrom.ipaddr" + db_multi_exec( + "CREATE TEMP TABLE rcvidUsed(x INTEGER PRIMARY KEY);" + "INSERT OR IGNORE INTO rcvidUsed(x) SELECT rcvid FROM blob;" + ); + db_prepare(&q, + "SELECT rcvid, login, datetime(rcvfrom.mtime), rcvfrom.ipaddr," + " EXISTS(SELECT 1 FROM rcvidUsed WHERE x=rcvfrom.rcvid)" " FROM rcvfrom LEFT JOIN user USING(uid)" - " ORDER BY rcvid DESC LIMIT 31 OFFSET %d", - ofst + " ORDER BY rcvid DESC LIMIT %d OFFSET %d", + showAll ? -1 : 31, ofst ); @

      Whenever new artifacts are added to the repository, either by @ push or using the web interface, an entry is made in the RCVFROM table @ to record the source of that artifact. This log facilitates @ finding and fixing attempts to inject illicit content into the @ repository.

      @ @

      Click on the "rcvid" to show a list of specific artifacts received @ by a transaction. After identifying illicit artifacts, remove them - @ using the "Shun" feature.

      + @ using the "Shun" button. If an "rcvid" is not hyperlinked, that means + @ all artifacts associated with that rcvid have already been shunned + @ or purged.

      @ - @
      - @ - @ + @
      rcvid - @ Date  User  IP Address
      + @ + @ + @ + @ cnt = 0; while( db_step(&q)==SQLITE_ROW ){ int rcvid = db_column_int(&q, 0); const char *zUser = db_column_text(&q, 1); const char *zDate = db_column_text(&q, 2); const char *zIpAddr = db_column_text(&q, 3); - if( cnt==30 ){ + if( cnt==30 && !showAll ){ style_submenu_element("Older", "Older", "rcvfromlist?ofst=%d", ofst+30); }else{ cnt++; @ - @ + if( db_column_int(&q,4) ){ + @ + }else{ + @ + } + @ + @ + @ @ } } db_finalize(&q); @
      rcvid  Date  User  IP Address
      %d(rcvid) - @ %s(zDate) - @ %h(zUser) - @  %s(zIpAddr)  + @ %d(rcvid)  %d(rcvid)  %s(zDate)  %h(zUser)  %s(zIpAddr)
      @@ -260,52 +383,78 @@ } /* ** WEBPAGE: rcvfrom ** -** Show a single RCVFROM table entry. +** Show a single RCVFROM table entry identified by the rcvid= query +** parameters. Requires Admin privilege. */ void rcvfrom_page(void){ int rcvid = atoi(PD("rcvid","0")); Stmt q; login_check_credentials(); - if( !g.okAdmin ){ - login_needed(); + if( !g.perm.Admin ){ + login_needed(0); + return; + } + style_header("Artifact Receipt %d", rcvid); + if( db_exists( + "SELECT 1 FROM blob WHERE rcvid=%d AND" + " NOT EXISTS (SELECT 1 FROM shun WHERE shun.uuid=blob.uuid)", rcvid) + ){ + style_submenu_element("Shun All", "Shun All", + "shun?shun&rcvid=%d#addshun", rcvid); + } + if( db_exists( + "SELECT 1 FROM blob WHERE rcvid=%d AND" + " EXISTS (SELECT 1 FROM shun WHERE shun.uuid=blob.uuid)", rcvid) + ){ + style_submenu_element("Unshun All", "Unshun All", + "shun?accept&rcvid=%d#delshun", rcvid); } - style_header("Content Source %d", rcvid); - db_prepare(&q, + db_prepare(&q, "SELECT login, datetime(rcvfrom.mtime), rcvfrom.ipaddr" " FROM rcvfrom LEFT JOIN user USING(uid)" " WHERE rcvid=%d", rcvid ); - @ - @ + @
      rcvid:
      + @ @ if( db_step(&q)==SQLITE_ROW ){ const char *zUser = db_column_text(&q, 0); const char *zDate = db_column_text(&q, 1); const char *zIpAddr = db_column_text(&q, 2); - @ + @ @ - @ + @ @ - @ + @ @ } db_finalize(&q); + db_multi_exec( + "CREATE TEMP TABLE toshow(rid INTEGER PRIMARY KEY);" + "INSERT INTO toshow SELECT rid FROM blob WHERE rcvid=%d", rcvid + ); + describe_artifacts("IN toshow"); db_prepare(&q, - "SELECT rid, uuid, size FROM blob WHERE rcvid=%d", rcvid + "SELECT blob.rid, blob.uuid, blob.size, description.summary\n" + " FROM blob LEFT JOIN description ON (blob.rid=description.rid)" + " WHERE blob.rcvid=%d", rcvid ); - @ + @ @ @
      rcvid:%d(rcvid)
      User:
      User:%s(zUser)
      Date:
      Date:%s(zDate)
      IP Address:
      IP Address:%s(zIpAddr)
      Artifacts:
      Artifacts: while( db_step(&q)==SQLITE_ROW ){ - int rid = db_column_int(&q, 0); const char *zUuid = db_column_text(&q, 1); int size = db_column_int(&q, 2); - @ %s(zUuid) - @ (rid: %d(rid), size: %d(size))
      + const char *zDesc = db_column_text(&q, 3); + if( zDesc==0 ) zDesc = ""; + @ %s(zUuid) + @ %h(zDesc) (size: %d(size))
      } @
      + db_finalize(&q); + style_footer(); } ADDED src/sitemap.c Index: src/sitemap.c ================================================================== --- src/sitemap.c +++ src/sitemap.c @@ -0,0 +1,152 @@ +/* +** Copyright (c) 2014 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code to implement the sitemap webpage. +*/ +#include "config.h" +#include "sitemap.h" +#include + +/* +** WEBPAGE: sitemap +** +** List some of the web pages offered by the Fossil web engine. This +** page is intended as a suppliment to the menu bar on the main screen. +** That is, this page is designed to hold links that are omitted from +** the main menu due to lack of space. +*/ +void sitemap_page(void){ + int srchFlags; + login_check_credentials(); + srchFlags = search_restrict(SRCH_ALL); + style_header("Site Map"); + style_adunit_config(ADUNIT_RIGHT_OK); +#if 0 + @

      + @ The following links are just a few of the many web-pages available for + @ this Fossil repository: + @

      + @ +#endif + @
        + @
      • %z(href("%R/home"))Home Page + if( srchFlags & SRCH_DOC ){ + @
          + @
        • %z(href("%R/docsrch"))Search Project Documentation
        • + @
        + } + @
+  if( g.perm.Read ){
+    @ <li>%z(href("%R/tree"))File Browser</a>
+    @   <ul>
+    @   <li>%z(href("%R/tree?type=tree&ci=trunk"))Tree-view,
+    @        Trunk Check-in</a></li>
+    @   <li>%z(href("%R/tree?type=flat"))Flat-view</a></li>
+    @   <li>%z(href("%R/fileage?name=trunk"))File ages for Trunk</a></li>
+    @   </ul>
+  }
+  if( g.perm.Read ){
+    @ <li>%z(href("%R/timeline?n=200"))Project Timeline</a>
+    @   <ul>
+    @   <li>%z(href("%R/reports"))Activity Reports</a></li>
+    @   <li>%z(href("%R/timeline?n=all&namechng"))File name changes</a></li>
+    @   <li>%z(href("%R/timeline?n=all&forks"))Forks</a></li>
+    @   <li>%z(href("%R/timeline?a=1970-01-01&y=ci&n=10"))First 10
+    @        check-ins</a></li>
+    @   </ul>
+  }
+  if( g.perm.Read ){
+    @ <li>%z(href("%R/brlist"))Branches</a>
+    @   <ul>
+    @   <li>%z(href("%R/leaves"))Leaf Check-ins</a></li>
+    @   <li>%z(href("%R/taglist"))List of Tags</a></li>
+    @   </ul>
+    @ </li>
+  }
+  if( g.perm.RdWiki ){
+    @ <li>%z(href("%R/wikihelp"))Wiki</a>
+    @   <ul>
+    if( srchFlags & SRCH_WIKI ){
+      @   <li>%z(href("%R/wikisrch"))Wiki Search</a></li>
+    }
+    @   <li>%z(href("%R/wcontent"))List of Wiki Pages</a></li>
+    @   <li>%z(href("%R/timeline?y=w"))Recent activity</a></li>
+    @   <li>%z(href("%R/wiki_rules"))Wiki Formatting Rules</a></li>
+    @   <li>%z(href("%R/md_rules"))Markdown Formatting Rules</a></li>
+    @   <li>%z(href("%R/wiki?name=Sandbox"))Sandbox</a></li>
+    @   <li>%z(href("%R/attachlist"))List of Attachments</a></li>
+    @   </ul>
+    @ </li>
+  }
+  if( g.perm.RdTkt ){
+    @ <li>%z(href("%R/reportlist"))Tickets</a>
+    @   <ul>
+    if( srchFlags & SRCH_TKT ){
+      @   <li>%z(href("%R/tktsrch"))Ticket Search</a></li>
+    }
+    @   <li>%z(href("%R/timeline?y=t"))Recent activity</a></li>
+    @   <li>%z(href("%R/attachlist"))List of Attachments</a></li>
+    @   </ul>
+    @ </li>
+  }
+  if( srchFlags ){
+    @ <li>%z(href("%R/search"))Full-Text Search</a></li>
+  }
+  @ <li>%z(href("%R/login"))Login/Logout/Change Password</a></li>
+  if( g.perm.Read ){
+    @ <li>%z(href("%R/stat"))Repository Status</a>
+    @   <ul>
+    @   <li>%z(href("%R/hash-collisions"))Collisions on SHA1 hash
+    @        prefixes</a></li>
+    if( g.perm.Admin ){
+      @   <li>%z(href("%R/urllist"))List of URLs used to access
+      @        this repository</a></li>
+    }
+    @   <li>%z(href("%R/bloblist"))List of Artifacts</a></li>
+    @   <li>%z(href("%R/timewarps"))List of "Timewarp" Check-ins</a></li>
+    @   </ul>
+    @ </li>
+  }
+  @ <li>On-line Documentation
+  @   <ul>
+  @   <li>%z(href("%R/help"))List of All Commands and Web Pages</a></li>
+  @   <li>%z(href("%R/test-all-help"))All "help" text on a single page</a></li>
+  @   <li>%z(href("%R/mimetype_list"))Filename suffix to mimetype map</a></li>
+  @   </ul>
+  @ </li>
+  if( g.perm.Admin ){
+    @ <li>%z(href("%R/setup"))Administration Pages</a>
+    @   <ul>
+    @   <li>%z(href("%R/modreq"))Pending Moderation Requests</a></li>
+    @   <li>%z(href("%R/admin_log"))Admin log</a></li>
+    @   <li>%z(href("%R/cachestat"))Status of the web-page cache</a></li>
+    @   </ul>
+    @ </li>
+  }
+  @ <li>Test Pages
+  @   <ul>
+  if( g.perm.Admin || db_get_boolean("test_env_enable",0) ){
+    @   <li>%z(href("%R/test_env"))CGI Environment Test</a></li>
+  }
+  if( g.perm.Read ){
+    @   <li>%z(href("%R/test-rename-list"))List of file renames</a></li>
+  }
+  @   <li>%z(href("%R/hash-color-test"))Page to experiment with the automatic
+  @        colors assigned to branch names</a></li>
+  @   <li>%z(href("%R/test-captcha"))Random ASCII-art Captcha image</a></li>
+  @   </ul>
+  @ </li>
+  @ </ul>
      + style_footer(); +} Index: src/skins.c ================================================================== --- src/skins.c +++ src/skins.c @@ -15,657 +15,252 @@ ** ******************************************************************************* ** ** Implementation of the Setup page for "skins". */ -#include #include "config.h" +#include #include "skins.h" -/* @-comment: // */ -/* -** A black-and-white theme with the project title in a bar across the top -** and no logo image. -*/ -static const char zBuiltinSkin1[] = -@ REPLACE INTO config VALUES('css','/* General settings for the entire page */ -@ body { -@ margin: 0ex 1ex; -@ padding: 0px; -@ background-color: white; -@ font-family: "sans serif"; -@ } -@ -@ /* The project logo in the upper left-hand corner of each page */ -@ div.logo { -@ display: table-row; -@ text-align: center; -@ /* vertical-align: bottom;*/ -@ font-size: 2em; -@ font-weight: bold; -@ background-color: #707070; -@ color: #ffffff; -@ } -@ -@ /* The page title centered at the top of each page */ -@ div.title { -@ display: table-cell; -@ font-size: 1.5em; -@ font-weight: bold; -@ text-align: left; -@ padding: 0 0 0 10px; -@ color: #404040; -@ vertical-align: bottom; -@ width: 100%; -@ } -@ -@ /* The login status message in the top right-hand corner */ -@ div.status { -@ display: table-cell; -@ text-align: right; -@ vertical-align: bottom; -@ color: #404040; -@ font-size: 0.8em; -@ font-weight: bold; -@ } -@ -@ /* The header across the top of the page */ -@ div.header { -@ display: table; -@ width: 100%; -@ } -@ -@ /* The main menu bar that appears at the top of the page beneath -@ ** the header */ -@ div.mainmenu { -@ padding: 5px 10px 5px 10px; -@ font-size: 0.9em; -@ font-weight: bold; -@ text-align: center; -@ letter-spacing: 1px; -@ background-color: #404040; -@ color: white; -@ } -@ -@ /* The submenu bar that *sometimes* appears below the main menu */ -@ div.submenu { -@ padding: 3px 10px 3px 0px; -@ font-size: 0.9em; -@ text-align: center; -@ background-color: #606060; -@ color: white; -@ } -@ div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited { -@ padding: 3px 10px 3px 10px; -@ color: white; -@ text-decoration: none; -@ } -@ div.mainmenu a:hover, div.submenu a:hover { -@ color: #404040; -@ background-color: white; -@ } -@ -@ /* All page content from the bottom of the menu or submenu down to -@ ** the footer */ -@ div.content { -@ padding: 0ex 0ex 0ex 0ex; -@ } -@ /* Hyperlink colors */ -@ div.content a { color: #604000; } -@ div.content a:link { color: #604000;} -@ div.content a:visited { color: #600000; } -@ -@ /* Some pages have section dividers */ -@ div.section { -@ margin-bottom: 0px; -@ margin-top: 1em; -@ padding: 1px 1px 1px 1px; -@ font-size: 1.2em; -@ font-weight: bold; -@ background-color: #404040; -@ color: white; -@ } -@ -@ /* The "Date" that occurs on the left hand side of timelines */ -@ div.divider { -@ background: #a0a0a0; -@ border: 2px #505050 solid; -@ font-size: 1em; font-weight: normal; -@ padding: .25em; -@ margin: .2em 0 .2em 0; -@ float: left; -@ clear: left; -@ } -@ -@ /* The footer at the very bottom of the page */ -@ div.footer { -@ font-size: 0.8em; -@ margin-top: 12px; -@ padding: 5px 10px 5px 10px; -@ text-align: right; -@ background-color: #404040; -@ color: white; -@ } -@ -@ /* The label/value pairs on (for example) the vinfo page */ -@ table.label-value th { -@ vertical-align: top; -@ text-align: right; -@ padding: 0.2ex 2ex; -@ }'); -@ REPLACE INTO config VALUES('header',' -@ -@ $<project_name>: 
$<title> -@ -@ -@ -@ -@
      -@ -@
      -@
      -@
      $</div> -@ <div class="status"><nobr><th1> -@ if {[info exists login]} { -@ puts "Logged in as $login" -@ } else { -@ puts "Not logged in" -@ } -@ </th1></nobr></div> -@ </div> -@ <div class="mainmenu"><th1> -@ html "<a href=''$baseurl$index_page''>Home</a> " -@ if {[anycap jor]} { -@ html "<a href=''$baseurl/timeline''>Timeline</a> " -@ } -@ if {[hascap oh]} { -@ html "<a href=''$baseurl/dir?ci=tip''>Files</a> " -@ } -@ if {[hascap o]} { -@ html "<a href=''$baseurl/leaves''>Leaves</a> " -@ html "<a href=''$baseurl/brlist''>Branches</a> " -@ html "<a href=''$baseurl/taglist''>Tags</a> " -@ } -@ if {[hascap r]} { -@ html "<a href=''$baseurl/reportlist''>Tickets</a> " -@ } -@ if {[hascap j]} { -@ html "<a href=''$baseurl/wiki''>Wiki</a> " -@ } -@ if {[hascap s]} { -@ html "<a href=''$baseurl/setup''>Admin</a> " -@ } elseif {[hascap a]} { -@ html "<a href=''$baseurl/setup_ulist''>Users</a> " -@ } -@ if {[info exists login]} { -@ html "<a href=''$baseurl/login''>Logout</a> " -@ } else { -@ html "<a href=''$baseurl/login''>Login</a> " -@ } -@ </th1></div> -@ '); -@ REPLACE INTO config VALUES('footer','<div class="footer"> -@ Fossil version $manifest_version $manifest_date -@ </div> -@ </body></html> -@ '); -; - -/* -** A tan theme with the project title above the user identification -** and no logo image. -*/ -static const char zBuiltinSkin2[] = -@ REPLACE INTO config VALUES('css','/* General settings for the entire page */ -@ body { -@ margin: 0ex 0ex; -@ padding: 0px; -@ background-color: #fef3bc; -@ font-family: sans-serif; -@ } -@ -@ /* The project logo in the upper left-hand corner of each page */ -@ div.logo { -@ display: inline; -@ text-align: center; -@ vertical-align: bottom; -@ font-weight: bold; -@ font-size: 2.5em; -@ color: #a09048; -@ } -@ -@ /* The page title centered at the top of each page */ -@ div.title { -@ display: table-cell; -@ font-size: 2em; -@ font-weight: bold; -@ text-align: left; -@ padding: 0 0 0 5px; -@ color: #a09048; -@ vertical-align: bottom; -@ width: 100%; -@ } -@ -@ /* The login status message in the top right-hand corner */ -@ div.status { -@ display: table-cell; -@ text-align: right; -@ vertical-align: bottom; -@ color: #a09048; -@ padding: 5px 5px 0 0; -@ font-size: 0.8em; -@ font-weight: bold; -@ } -@ -@ /* The header across the top of the page */ -@ div.header { -@ display: table; -@ width: 100%; -@ } -@ -@ /* The main menu bar that appears at the top of the page beneath -@ ** the header */ -@ div.mainmenu { -@ padding: 5px 10px 5px 10px; -@ font-size: 0.9em; -@ font-weight: bold; -@ text-align: center; -@ letter-spacing: 1px; -@ background-color: #a09048; -@ color: black; -@ } -@ -@ /* The submenu bar that *sometimes* appears below the main menu */ -@ div.submenu { -@ padding: 3px 10px 3px 0px; -@ font-size: 0.9em; -@ text-align: center; -@ background-color: #c0af58; -@ color: white; -@ } -@ div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited { -@ padding: 3px 10px 3px 10px; -@ color: white; -@ text-decoration: none; -@ } -@ div.mainmenu a:hover, div.submenu a:hover { -@ color: #a09048; -@ background-color: white; -@ } -@ -@ /* All page content from the bottom of the menu or submenu down to -@ ** the footer */ -@ div.content { -@ padding: 1ex 5px; -@ } -@ div.content a { color: #706532; } -@ div.content a:link { color: #706532; } -@ div.content a:visited { color: #704032; } -@ div.content a:hover { background-color: white; color: #706532; } -@ -@ /* Some pages have section dividers */ -@ div.section { -@ margin-bottom: 
0px; -@ margin-top: 1em; -@ padding: 3px 3px 0 3px; -@ font-size: 1.2em; -@ font-weight: bold; -@ background-color: #a09048; -@ color: white; -@ } -@ -@ /* The "Date" that occurs on the left hand side of timelines */ -@ div.divider { -@ background: #e1d498; -@ border: 2px #a09048 solid; -@ font-size: 1em; font-weight: normal; -@ padding: .25em; -@ margin: .2em 0 .2em 0; -@ float: left; -@ clear: left; -@ } -@ -@ /* The footer at the very bottom of the page */ -@ div.footer { -@ font-size: 0.8em; -@ margin-top: 12px; -@ padding: 5px 10px 5px 10px; -@ text-align: right; -@ background-color: #a09048; -@ color: white; -@ } -@ -@ /* Hyperlink colors */ -@ div.footer a { color: white; } -@ div.footer a:link { color: white; } -@ div.footer a:visited { color: white; } -@ div.footer a:hover { background-color: white; color: #558195; } -@ -@ /* <verbatim> blocks */ -@ pre.verbatim { -@ background-color: #f5f5f5; -@ padding: 0.5em; -@ } -@ -@ /* The label/value pairs on (for example) the ci page */ -@ table.label-value th { -@ vertical-align: top; -@ text-align: right; -@ padding: 0.2ex 2ex; -@ } -@ '); -@ REPLACE INTO config VALUES('header','<html> -@ <head> -@ <title>$<project_name>: $<title> -@ -@ -@ -@ -@
      -@
      $</div> -@ <div class="status"> -@ <div class="logo"><nobr>$<project_name></nobr></div><br/> -@ <nobr><th1> -@ if {[info exists login]} { -@ puts "Logged in as $login" -@ } else { -@ puts "Not logged in" -@ } -@ </th1></nobr></div> -@ </div> -@ <div class="mainmenu"><th1> -@ html "<a href=''$baseurl$index_page''>Home</a> " -@ if {[anycap jor]} { -@ html "<a href=''$baseurl/timeline''>Timeline</a> " -@ } -@ if {[hascap oh]} { -@ html "<a href=''$baseurl/dir?ci=tip''>Files</a> " -@ } -@ if {[hascap o]} { -@ html "<a href=''$baseurl/leaves''>Leaves</a> " -@ html "<a href=''$baseurl/brlist''>Branches</a> " -@ html "<a href=''$baseurl/taglist''>Tags</a> " -@ } -@ if {[hascap r]} { -@ html "<a href=''$baseurl/reportlist''>Tickets</a> " -@ } -@ if {[hascap j]} { -@ html "<a href=''$baseurl/wiki''>Wiki</a> " -@ } -@ if {[hascap s]} { -@ html "<a href=''$baseurl/setup''>Admin</a> " -@ } elseif {[hascap a]} { -@ html "<a href=''$baseurl/setup_ulist''>Users</a> " -@ } -@ if {[info exists login]} { -@ html "<a href=''$baseurl/login''>Logout</a> " -@ } else { -@ html "<a href=''$baseurl/login''>Login</a> " -@ } -@ </th1></div> -@ '); -@ REPLACE INTO config VALUES('footer','<div class="footer"> -@ Fossil version $manifest_version $manifest_date -@ </div> -@ </body></html> -@ '); -; - -/* -** Black letters on a white or cream background with the main menu -** stuck on the left-hand side. -*/ -static const char zBuiltinSkin3[] = -@ REPLACE INTO config VALUES('css','/* General settings for the entire page */ -@ body { -@ margin:0px 0px 0px 0px; -@ padding:0px; -@ font-family:verdana, arial, helvetica, "sans serif"; -@ color:#333; -@ background-color:white; -@ } -@ -@ /* consistent colours */ -@ h2 { -@ color: #333; -@ } -@ h3 { -@ color: #333; -@ } -@ -@ /* The project logo in the upper left-hand corner of each page */ -@ div.logo { -@ display: table-cell; -@ text-align: left; -@ vertical-align: bottom; -@ font-weight: bold; -@ color: #333; -@ } -@ -@ /* The page title centered at the top of each page */ -@ div.title { -@ display: table-cell; -@ font-size: 2em; -@ font-weight: bold; -@ text-align: center; -@ color: #333; -@ vertical-align: bottom; -@ width: 100%; -@ } -@ -@ /* The login status message in the top right-hand corner */ -@ div.status { -@ display: table-cell; -@ padding-right: 10px; -@ text-align: right; -@ vertical-align: bottom; -@ padding-bottom: 5px; -@ color: #333; -@ font-size: 0.8em; -@ font-weight: bold; -@ } -@ -@ /* The header across the top of the page */ -@ div.header { -@ margin:10px 0px 10px 0px; -@ padding:1px 0px 0px 20px; -@ border-style:solid; -@ border-color:black; -@ border-width:1px 0px; -@ background-color:#eee; -@ } -@ -@ /* The main menu bar that appears at the top left of the page beneath -@ ** the header. 
Width must be co-ordinated with the container below */ -@ div.mainmenu { -@ float: left; -@ margin-left: 10px; -@ margin-right: 10px; -@ font-size: 0.9em; -@ font-weight: bold; -@ padding:5px; -@ background-color:#eee; -@ border:1px solid #999; -@ width:8em; -@ } -@ -@ /* Main menu is now a list */ -@ div.mainmenu ul { -@ padding: 0; -@ list-style:none; -@ } -@ div.mainmenu a, div.mainmenu a:visited{ -@ padding: 1px 10px 1px 10px; -@ color: #333; -@ text-decoration: none; -@ } -@ div.mainmenu a:hover { -@ color: #eee; -@ background-color: #333; -@ } -@ -@ /* Container for the sub-menu and content so they don''t spread -@ ** out underneath the main menu */ -@ #container { -@ padding-left: 9em; -@ } -@ -@ /* The submenu bar that *sometimes* appears below the main menu */ -@ div.submenu { -@ padding: 3px 10px 3px 10px; -@ font-size: 0.9em; -@ text-align: center; -@ border:1px solid #999; -@ border-width:1px 0px; -@ background-color: #eee; -@ color: #333; -@ } -@ div.submenu a, div.submenu a:visited { -@ padding: 3px 10px 3px 10px; -@ color: #333; -@ text-decoration: none; -@ } -@ div.submenu a:hover { -@ color: #eee; -@ background-color: #333; -@ } -@ -@ /* All page content from the bottom of the menu or submenu down to -@ ** the footer */ -@ div.content { -@ float right; -@ padding: 2ex 1ex 0ex 2ex; -@ } -@ -@ /* Some pages have section dividers */ -@ div.section { -@ margin-bottom: 0px; -@ margin-top: 1em; -@ padding: 1px 1px 1px 1px; -@ font-size: 1.2em; -@ font-weight: bold; -@ border-style:solid; -@ border-color:#999; -@ border-width:1px 0px; -@ background-color: #eee; -@ color: #333; -@ } -@ -@ /* The "Date" that occurs on the left hand side of timelines */ -@ div.divider { -@ background: #eee; -@ border: 2px #999 solid; -@ font-size: 1em; font-weight: normal; -@ padding: .25em; -@ margin: .2em 0 .2em 0; -@ float: left; -@ clear: left; -@ color: #333 -@ } -@ -@ /* The footer at the very bottom of the page */ -@ div.footer { -@ font-size: 0.8em; -@ margin-top: 12px; -@ padding: 5px 10px 5px 10px; -@ text-align: right; -@ background-color: #eee; -@ color: #555; -@ } -@ -@ /* <verbatim> blocks */ -@ pre.verbatim { -@ background-color: #f5f5f5; -@ padding: 0.5em; -@ } -@ -@ /* The label/value pairs on (for example) the ci page */ -@ table.label-value th { -@ vertical-align: top; -@ text-align: right; -@ padding: 0.2ex 2ex; -@ }'); -@ REPLACE INTO config VALUES('header','<html> -@ <head> -@ <title>$<project_name>: $<title> -@ -@ -@ -@ -@
      -@ -@
      $</div> -@ <div class="status"><nobr><th1> -@ if {[info exists login]} { -@ puts "Logged in as $login" -@ } else { -@ puts "Not logged in" -@ } -@ </th1></nobr></div> -@ </div> -@ <div class="mainmenu"><ul><th1> -@ html "<li><a href=''$baseurl$index_page''>Home</a></li>" -@ if {[anycap jor]} { -@ html "<li><a href=''$baseurl/timeline''>Timeline</a></li>" -@ } -@ if {[hascap oh]} { -@ html "<li><a href=''$baseurl/dir?ci=tip''>Files</a></li>" -@ } -@ if {[hascap o]} { -@ html "<li><a href=''$baseurl/leaves''>Leaves</a></li>" -@ html "<li><a href=''$baseurl/brlist''>Branches</a></li>" -@ html "<li><a href=''$baseurl/taglist''>Tags</a></li>" -@ } -@ if {[hascap r]} { -@ html "<li><a href=''$baseurl/reportlist''>Tickets</a></li>" -@ } -@ if {[hascap j]} { -@ html "<li><a href=''$baseurl/wiki''>Wiki</a></li>" -@ } -@ if {[hascap s]} { -@ html "<li><a href=''$baseurl/setup''>Admin</a></li>" -@ } elseif {[hascap a]} { -@ html "<li><a href=''$baseurl/setup_ulist''>Users</a></li>" -@ } -@ if {[info exists login]} { -@ html "<li><a href=''$baseurl/login''>Logout</a></li>" -@ } else { -@ html "<li><a href=''$baseurl/login''>Login</a></li>" -@ } -@ </th1></ul></div> -@ <div id="container"> -@ '); -@ REPLACE INTO config VALUES('footer','</div> -@ <div class="footer"> -@ Fossil version $manifest_version $manifest_date -@ </div> -@ </body></html> -@ '); -; /* ** An array of available built-in skins. +** +** To add new built-in skins: +** +** 1. Pick a name for the new skin. (Here we use "xyzzy"). +** +** 2. Install files skins/xyzzy/css.txt, skins/xyzzy/header.txt, +** and skins/xyzzy/footer.txt into the source tree. +** +** 3. Rerun "tclsh makemake.tcl" in the src/ folder in order to +** rebuild the makefiles to reference the new CSS, headers, and footers. +** +** 4. Make an entry in the following array for the new skin. */ static struct BuiltinSkin { - const char *zName; - const char *zValue; + const char *zDesc; /* Description of this skin */ + const char *zLabel; /* The directory under skins/ holding this skin */ + char *zSQL; /* Filled in at run-time with SQL to insert this skin */ } aBuiltinSkin[] = { - { "Default", 0 /* Filled in at runtime */ }, - { "Plain Gray, No Logo", zBuiltinSkin1 }, - { "Khaki, No Logo", zBuiltinSkin2 }, - { "Black & White, Menu on Left", zBuiltinSkin3 }, + { "Default", "default", 0 }, + { "Blitz", "blitz", 0 }, + { "Blitz, No Logo", "blitz_no_logo", 0 }, + { "Xekri", "xekri", 0 }, + { "Original", "original", 0 }, + { "Enhanced Original", "enhanced1", 0 }, + { "Shadow boxes & Rounded Corners", "rounded1", 0 }, + { "Eagle", "eagle", 0 }, + { "Black & White, Menu on Left", "black_and_white", 0 }, + { "Plain Gray, No Logo", "plain_gray", 0 }, + { "Khaki, No Logo", "khaki", 0 }, +}; + +/* +** Alternative skins can be specified in the CGI script or by options +** on the "http", "ui", and "server" commands. The alternative skin +** name must be one of the aBuiltinSkin[].zLabel names. If there is +** a match, that alternative is used. +** +** The following static variable holds the name of the alternative skin, +** or NULL if the skin should be as configured. +*/ +static struct BuiltinSkin *pAltSkin = 0; +static char *zAltSkinDir = 0; + +/* +** Skin details are a set of key/value pairs that define display +** attributes of the skin that cannot be easily specified using CSS +** or that need to be known on the server-side. +** +** The following array holds the value for all known skin details. 
+*/ +static struct SkinDetail { + const char *zName; /* Name of the detail */ + const char *zValue; /* Value of the detail */ +} aSkinDetail[] = { + { "timeline-arrowheads", "1" }, + { "timeline-circle-nodes", "0" }, + { "timeline-color-graph-lines", "0" }, + { "white-foreground", "0" }, }; + +/* +** Invoke this routine to set the alternative skin. Return NULL if the +** alternative was successfully installed. Return a string listing all +** available skins if zName does not match an available skin. Memory +** for the returned string comes from fossil_malloc() and should be freed +** by the caller. +** +** If the alternative skin name contains one or more '/' characters, then +** it is assumed to be a directory on disk that holds override css.txt, +** footer.txt, and header.txt. This mode can be used for interactive +** development of new skins. +*/ +char *skin_use_alternative(const char *zName){ + int i; + Blob err = BLOB_INITIALIZER; + if( strchr(zName, '/')!=0 ){ + zAltSkinDir = fossil_strdup(zName); + return 0; + } + for(i=0; i<ArraySize(aBuiltinSkin); i++){ + if( fossil_strcmp(aBuiltinSkin[i].zLabel, zName)==0 ){ + pAltSkin = &aBuiltinSkin[i]; + return 0; + } + } + blob_appendf(&err, "available skins: %s", aBuiltinSkin[0].zLabel); + for(i=1; i<ArraySize(aBuiltinSkin); i++){ + blob_append(&err, " ", 1); + blob_append(&err, aBuiltinSkin[i].zLabel, -1); + } + return blob_str(&err); +} + +/* +** Look for the --skin command-line option and process it. Or +** call fossil_fatal() if an unknown skin is specified. +*/ +void skin_override(void){ + const char *zSkin = find_option("skin",0,1); + if( zSkin ){ + char *zErr = skin_use_alternative(zSkin); + if( zErr ) fossil_fatal("%s", zErr); + } +} + +/* +** The following routines return the various components of the skin +** that should be used for the current run. +*/ +const char *skin_get(const char *zWhat){ + const char *zOut; + char *z; + if( zAltSkinDir ){ + char *z = mprintf("%s/%s.txt", zAltSkinDir, zWhat); + if( file_isfile(z) ){ + Blob x; + blob_read_from_file(&x, z); + fossil_free(z); + return blob_str(&x); + } + fossil_free(z); + } + if( pAltSkin ){ + z = mprintf("skins/%s/%s.txt", pAltSkin->zLabel, zWhat); + zOut = builtin_text(z); + fossil_free(z); + }else{ + zOut = db_get(zWhat, 0); + if( zOut==0 ){ + z = mprintf("skins/default/%s.txt", zWhat); + zOut = builtin_text(z); + fossil_free(z); + } + } + return zOut; +} + +/* +** Return a pointer to a SkinDetail element. Return 0 if not found. +*/ +static struct SkinDetail *skin_detail_find(const char *zName){ + int lwr = 0; + int upr = ArraySize(aSkinDetail); + while( upr>=lwr ){ + int mid = (upr+lwr)/2; + int c = fossil_strcmp(aSkinDetail[mid].zName, zName); + if( c==0 ) return &aSkinDetail[mid]; + if( c<0 ){ + lwr = mid+1; + }else{ + upr = mid-1; + } + } + return 0; +} + +/* Initialize the aSkinDetail array using the text in the details.txt +** file. 
+*/ +static void skin_detail_initialize(void){ + static int isInit = 0; + char *zDetail; + Blob detail, line, key, value; + if( isInit ) return; + isInit = 1; + zDetail = (char*)skin_get("details"); + if( zDetail==0 ) return; + zDetail = fossil_strdup(zDetail); + blob_init(&detail, zDetail, -1); + while( blob_line(&detail, &line) ){ + char *zKey; + int nKey; + struct SkinDetail *pDetail; + if( !blob_token(&line, &key) ) continue; + zKey = blob_buffer(&key); + if( zKey[0]=='#' ) continue; + nKey = blob_size(&key); + if( nKey<2 ) continue; + if( zKey[nKey-1]!=':' ) continue; + zKey[nKey-1] = 0; + pDetail = skin_detail_find(zKey); + if( pDetail==0 ) continue; + if( !blob_token(&line, &value) ) continue; + pDetail->zValue = fossil_strdup(blob_str(&value)); + } + blob_reset(&detail); + fossil_free(zDetail); +} + +/* +** Return a skin detail setting +*/ +const char *skin_detail(const char *zName){ + struct SkinDetail *pDetail; + skin_detail_initialize(); + pDetail = skin_detail_find(zName); + if( pDetail==0 ) fossil_fatal("no such skin detail: %s", zName); + return pDetail->zValue; +} +int skin_detail_boolean(const char *zName){ + return !is_false(skin_detail(zName)); +} + +/* +** Hash function for computing a skin id. +*/ +static unsigned int skin_hash(unsigned int h, const char *z){ + if( z==0 ) return h; + while( z[0] ){ + h = (h<<11) ^ (h<<1) ^ (h>>3) ^ z[0]; + z++; + } + return h; +} + +/* +** Return an identifier that is (probably) different for every skin +** but that is (probably) the same if the skin is unchanged. This +** identifier can be attached to resource URLs to force reloading when +** the resources change but allow the resources to be read from cache +** as long as they are unchanged. +*/ +unsigned int skin_id(const char *zResource){ + unsigned int h = 0; + if( zAltSkinDir ){ + h = skin_hash(0, zAltSkinDir); + }else if( pAltSkin ){ + h = skin_hash(0, pAltSkin->zLabel); + }else{ + char *zMTime = db_get_mtime(zResource, 0, 0); + h = skin_hash(0, zMTime); + fossil_free(zMTime); + } + h = skin_hash(h, MANIFEST_UUID); + return h; +} /* ** For a skin named zSkinName, compute the name of the CONFIG table ** entry where that skin is stored and return it. ** @@ -683,164 +278,384 @@ } return z; } /* -** Construct and return a string that represents the current skin if -** useDefault==0 or a string for the default skin if useDefault==1. +** Return true if there exists a skin name "zSkinName". +*/ +static int skinExists(const char *zSkinName){ + int i; + if( zSkinName==0 ) return 0; + for(i=0; i<sizeof(aBuiltinSkin)/sizeof(aBuiltinSkin[0]); i++){ + if( fossil_strcmp(zSkinName, aBuiltinSkin[i].zDesc)==0 ) return 1; + } + return db_exists("SELECT 1 FROM config WHERE name='skin:%q'", zSkinName); +} + +/* +** Construct and return an string of SQL statements that represents +** a "skin" setting. If zName==0 then return the skin currently +** installed. Otherwise, return one of the built-in skins designated +** by zName. ** ** Memory to hold the returned string is obtained from malloc. */ -static char *getSkin(int useDefault){ +static char *getSkin(const char *zName){ + const char *z; + char *zLabel; + static const char *azType[] = { "css", "header", "footer", "details" }; + int i; Blob val; blob_zero(&val); - blob_appendf(&val, "REPLACE INTO config VALUES('css',%Q);\n", - useDefault ? zDefaultCSS : db_get("css", (char*)zDefaultCSS) - ); - blob_appendf(&val, "REPLACE INTO config VALUES('header',%Q);\n", - useDefault ? 
zDefaultHeader : db_get("header", (char*)zDefaultHeader) - ); - blob_appendf(&val, "REPLACE INTO config VALUES('footer',%Q);\n", - useDefault ? zDefaultFooter : db_get("footer", (char*)zDefaultFooter) - ); + for(i=0; i<sizeof(azType)/sizeof(azType[0]); i++){ + if( zName ){ + zLabel = mprintf("skins/%s/%s.txt", zName, azType[i]); + z = builtin_text(zLabel); + fossil_free(zLabel); + }else{ + z = db_get(azType[i], 0); + if( z==0 ){ + zLabel = mprintf("skins/default/%s.txt", azType[i]); + z = builtin_text(zLabel); + fossil_free(zLabel); + } + } + blob_appendf(&val, + "REPLACE INTO config(name,value,mtime) VALUES(%Q,%Q,now());\n", + azType[i], z + ); + } return blob_str(&val); } /* -** Construct the default skin string and fill in the corresponding -** entry in aBuildinSkin[] +** Respond to a Rename button press. Return TRUE if a dialog was painted. +** Return FALSE to continue with the main Skins page. +*/ +static int skinRename(void){ + const char *zOldName; + const char *zNewName; + int ex = 0; + if( P("rename")==0 ) return 0; + zOldName = P("sn"); + zNewName = P("newname"); + if( zOldName==0 ) return 0; + if( zNewName==0 || zNewName[0]==0 || (ex = skinExists(zNewName))!=0 ){ + if( zNewName==0 ) zNewName = zOldName; + style_header("Rename A Skin"); + if( ex ){ + @ <p><span class="generalError">There is already another skin + @ named "%h(zNewName)". Choose a different name.</span></p> + } + @ <form action="%s(g.zTop)/setup_skin" method="post"><div> + @ <table border="0"><tr> + @ <tr><td align="right">Current name:<td align="left"><b>%h(zOldName)</b> + @ <tr><td align="right">New name:<td align="left"> + @ <input type="text" size="35" name="newname" value="%h(zNewName)"> + @ <tr><td><td> + @ <input type="hidden" name="sn" value="%h(zOldName)"> + @ <input type="submit" name="rename" value="Rename"> + @ <input type="submit" name="canren" value="Cancel"> + @ </table> + login_insert_csrf_secret(); + @ </div></form> + style_footer(); + return 1; + } + db_multi_exec( + "UPDATE config SET name='skin:%q' WHERE name='skin:%q';", + zNewName, zOldName + ); + return 0; +} + +/* +** Respond to a Save button press. Return TRUE if a dialog was painted. +** Return FALSE to continue with the main Skins page. */ -static void setDefaultSkin(void){ - aBuiltinSkin[0].zValue = getSkin(1); +static int skinSave(const char *zCurrent){ + const char *zNewName; + int ex = 0; + if( P("save")==0 ) return 0; + zNewName = P("svname"); + if( zNewName && zNewName[0]!=0 ){ + } + if( zNewName==0 || zNewName[0]==0 || (ex = skinExists(zNewName))!=0 ){ + if( zNewName==0 ) zNewName = ""; + style_header("Save Current Skin"); + if( ex ){ + @ <p><span class="generalError">There is already another skin + @ named "%h(zNewName)". Choose a different name.</span></p> + } + @ <form action="%s(g.zTop)/setup_skin" method="post"><div> + @ <table border="0"><tr> + @ <tr><td align="right">Name for this skin:<td align="left"> + @ <input type="text" size="35" name="svname" value="%h(zNewName)"> + @ <tr><td><td> + @ <input type="submit" name="save" value="Save"> + @ <input type="submit" name="cansave" value="Cancel"> + @ </table> + login_insert_csrf_secret(); + @ </div></form> + style_footer(); + return 1; + } + db_multi_exec( + "INSERT OR IGNORE INTO config(name, value, mtime)" + "VALUES('skin:%q',%Q,now())", + zNewName, zCurrent + ); + return 0; } /* ** WEBPAGE: setup_skin +** +** Show a list of available skins with buttons for selecting which +** skin to use. Requires Admin privilege. 
*/ void setup_skin(void){ const char *z; char *zName; char *zErr = 0; - const char *zCurrent; /* Current skin */ - int i; /* Loop counter */ + const char *zCurrent = 0; /* Current skin */ + int i; /* Loop counter */ Stmt q; + int seenCurrent = 0; login_check_credentials(); - if( !g.okSetup ){ - login_needed(); + if( !g.perm.Setup ){ + login_needed(0); + return; } db_begin_transaction(); + zCurrent = getSkin(0); + for(i=0; i<sizeof(aBuiltinSkin)/sizeof(aBuiltinSkin[0]); i++){ + aBuiltinSkin[i].zSQL = getSkin(aBuiltinSkin[i].zLabel); + } /* Process requests to delete a user-defined skin */ if( P("del1") && (zName = skinVarName(P("sn"), 1))!=0 ){ style_header("Confirm Custom Skin Delete"); - @ <form action="%s(g.zBaseURL)/setup_skin" method="POST"> + @ <form action="%s(g.zTop)/setup_skin" method="post"><div> @ <p>Deletion of a custom skin is a permanent action that cannot @ be undone. Please confirm that this is what you want to do:</p> - @ <input type="hidden" name="sn" value="%h(P("sn"))"> - @ <input type="submit" name="del2" value="Confirm - Delete The Skin"> - @ <input type="submit" name="cancel" value="Cancel - Do Not Delete"> + @ <input type="hidden" name="sn" value="%h(P("sn"))" /> + @ <input type="submit" name="del2" value="Confirm - Delete The Skin" /> + @ <input type="submit" name="cancel" value="Cancel - Do Not Delete" /> login_insert_csrf_secret(); - @ </form> + @ </div></form> style_footer(); return; } if( P("del2")!=0 && (zName = skinVarName(P("sn"), 1))!=0 ){ db_multi_exec("DELETE FROM config WHERE name=%Q", zName); } - - setDefaultSkin(); - zCurrent = getSkin(0); - - if( P("save")!=0 && (zName = skinVarName(P("save"),0))!=0 ){ - if( db_exists("SELECT 1 FROM config WHERE name=%Q", zName) - || strcmp(zName, "Default")==0 ){ - zErr = mprintf("Skin name \"%h\" already exists. " - "Choose a different name.", P("sn")); - }else{ - db_multi_exec("INSERT INTO config VALUES(%Q,%Q)", - zName, zCurrent - ); - } - } - - /* The user pressed the "Use This Skin" button. */ + if( skinRename() ) return; + if( skinSave(zCurrent) ) return; + + /* The user pressed one of the "Install" buttons. */ if( P("load") && (z = P("sn"))!=0 && z[0] ){ int seen = 0; + + /* Check to see if the current skin is already saved. 
If it is, there + ** is no need to create a backup */ + zCurrent = getSkin(0); for(i=0; i<sizeof(aBuiltinSkin)/sizeof(aBuiltinSkin[0]); i++){ - if( strcmp(aBuiltinSkin[i].zValue, zCurrent)==0 ){ + if( fossil_strcmp(aBuiltinSkin[i].zSQL, zCurrent)==0 ){ seen = 1; break; } } if( !seen ){ seen = db_exists("SELECT 1 FROM config WHERE name GLOB 'skin:*'" " AND value=%Q", zCurrent); - } - if( !seen ){ - db_multi_exec( - "INSERT INTO config VALUES(" - " strftime('skin:Backup On %%Y-%%m-%%d %%H:%%M:%%S')," - " %Q)", zCurrent - ); + if( !seen ){ + db_multi_exec( + "INSERT INTO config(name,value,mtime) VALUES(" + " strftime('skin:Backup On %%Y-%%m-%%d %%H:%%M:%%S')," + " %Q,now())", zCurrent + ); + } } seen = 0; for(i=0; i<sizeof(aBuiltinSkin)/sizeof(aBuiltinSkin[0]); i++){ - if( strcmp(aBuiltinSkin[i].zName, z)==0 ){ + if( fossil_strcmp(aBuiltinSkin[i].zDesc, z)==0 ){ seen = 1; - zCurrent = aBuiltinSkin[i].zValue; - db_multi_exec("%s", zCurrent); + zCurrent = aBuiltinSkin[i].zSQL; + db_multi_exec("%s", zCurrent/*safe-for-%s*/); break; } } if( !seen ){ zName = skinVarName(z,0); zCurrent = db_get(zName, 0); - db_multi_exec("%s", zCurrent); + db_multi_exec("%s", zCurrent/*safe-for-%s*/); } } style_header("Skins"); + if( zErr ){ + @ <p><font color="red">%h(zErr)</font></p> + } @ <p>A "skin" is a combination of - @ <a href="setup_editcss">CSS</a>, - @ <a href="setup_header">Header</a>, and - @ <a href="setup_footer">Footer</a> that determines the look and feel + @ <a href="setup_skinedit?w=0">CSS</a>, + @ <a href="setup_skinedit?w=2">Header</a>, + @ <a href="setup_skinedit?w=1">Footer</a>, and + @ <a href="setup_skinedit?w=3">Details</a> + @ that determines the look and feel @ of the web interface.</p> @ + if( pAltSkin ){ + @ <p class="generalError"> + @ This page is generated using an skin override named + @ "%h(pAltSkin->zLabel)". You can change the skin configuration + @ below, but the changes will not take effect until the Fossil server + @ is restarted without the override.</p> + @ + } @ <h2>Available Skins:</h2> - @ <ol> + @ <table border="0"> for(i=0; i<sizeof(aBuiltinSkin)/sizeof(aBuiltinSkin[0]); i++){ - z = aBuiltinSkin[i].zName; - if( strcmp(aBuiltinSkin[i].zValue, zCurrent)==0 ){ - @ <li><p>%h(z).   <b>Currently In Use</b></p> + z = aBuiltinSkin[i].zDesc; + @ <tr><td>%d(i+1).<td>%h(z)<td>  <td> + if( fossil_strcmp(aBuiltinSkin[i].zSQL, zCurrent)==0 ){ + @ (Currently In Use) + seenCurrent = 1; }else{ - @ <li><form action="%s(g.zBaseURL)/setup_skin" method="POST"> - @ %h(z).   - @ <input type="hidden" name="sn" value="%h(z)"> - @ <input type="submit" name="load" value="Use This Skin"> - @ </form></li> + @ <form action="%s(g.zTop)/setup_skin" method="post"> + @ <input type="hidden" name="sn" value="%h(z)" /> + @ <input type="submit" name="load" value="Install" /> + if( pAltSkin==&aBuiltinSkin[i] ){ + @ (Current override) + } + @ </form> } + @ </tr> } db_prepare(&q, "SELECT substr(name, 6), value FROM config" " WHERE name GLOB 'skin:*'" " ORDER BY name" ); while( db_step(&q)==SQLITE_ROW ){ const char *zN = db_column_text(&q, 0); const char *zV = db_column_text(&q, 1); - if( strcmp(zV, zCurrent)==0 ){ - @ <li><p>%h(zN).   <b>Currently In Use</b></p> - }else{ - @ <li><form action="%s(g.zBaseURL)/setup_skin" method="POST"> - @ %h(zN).   
- @ <input type="hidden" name="sn" value="%h(zN)"> - @ <input type="submit" name="load" value="Use This Skin"> - @ <input type="submit" name="del1" value="Delete This Skin"> - @ </form></li> - } + i++; + @ <tr><td>%d(i).<td>%h(zN)<td>  <td> + @ <form action="%s(g.zTop)/setup_skin" method="post"> + if( fossil_strcmp(zV, zCurrent)==0 ){ + @ (Currently In Use) + seenCurrent = 1; + }else{ + @ <input type="submit" name="load" value="Install"> + @ <input type="submit" name="del1" value="Delete"> + } + @ <input type="submit" name="rename" value="Rename"> + @ <input type="hidden" name="sn" value="%h(zN)"> + @ </form></tr> } db_finalize(&q); - @ </ol> + if( !seenCurrent ){ + i++; + @ <tr><td>%d(i).<td><i>Current Configuration</i><td>  <td> + @ <form action="%s(g.zTop)/setup_skin" method="post"> + @ <input type="submit" name="save" value="Save"> + @ </form> + } + @ </table> + style_footer(); + db_end_transaction(0); +} + + +/* +** WEBPAGE: setup_skinedit +** +** Edit aspects of a skin determined by the w= query parameter. +** Requires Admin privileges. +** +** w=N -- 0=CSS, 1=footer, 2=header, 3=details +*/ +void setup_skinedit(void){ + static const struct sSkinAddr { + const char *zFile; + const char *zTitle; + const char *zSubmenu; + } aSkinAttr[] = { + /* 0 */ { "css", "CSS", "CSS", }, + /* 1 */ { "footer", "Page Footer", "Footer", }, + /* 2 */ { "header", "Page Header", "Header", }, + /* 3 */ { "details", "Display Details", "Details", }, + }; + const char *zBasis; + const char *zContent; + char *zDflt; + int ii; + int j; + + login_check_credentials(); + if( !g.perm.Setup ){ + login_needed(0); + return; + } + ii = atoi(PD("w","0")); + if( ii<0 || ii>ArraySize(aSkinAttr) ) ii = 0; + zBasis = PD("basis","default"); + zDflt = mprintf("skins/%s/%s.txt", zBasis, aSkinAttr[ii].zFile); + db_begin_transaction(); + if( P("revert")!=0 ){ + db_multi_exec("DELETE FROM config WHERE name=%Q", aSkinAttr[ii].zFile); + cgi_replace_parameter(aSkinAttr[ii].zFile, builtin_text(zDflt)); + } + style_header("%s", aSkinAttr[ii].zTitle); + for(j=0; j<ArraySize(aSkinAttr); j++){ + if( j==ii ) continue; + style_submenu_element(aSkinAttr[j].zSubmenu, 0, + "%R/setup_skinedit?w=%d&basis=%h",j,zBasis); + } + style_submenu_element("Skins", 0, "%R/setup_skin"); + @ <form action="%s(g.zTop)/setup_skinedit" method="post"><div> + login_insert_csrf_secret(); + @ <input type='hidden' name='w' value='%d(ii)'> + @ <h2>Edit %s(aSkinAttr[ii].zTitle):</h2> + zContent = textarea_attribute("", 10, 80, aSkinAttr[ii].zFile, + aSkinAttr[ii].zFile, builtin_text(zDflt), 0); + @ <br /> + @ <input type="submit" name="submit" value="Apply Changes" /> + @ <hr /> + @ Baseline: <select size='1' name='basis'> + for(j=0; j<ArraySize(aBuiltinSkin); j++){ + cgi_printf("<option value='%h'%s>%h</option>\n", + aBuiltinSkin[j].zLabel, + fossil_strcmp(zBasis,aBuiltinSkin[j].zLabel)==0 ? 
" selected" : "", + aBuiltinSkin[j].zDesc + ); + } + @ </select> + @ <input type="submit" name="diff" value="Diff" /> + if( P("diff")!=0 ){ + u64 diffFlags = construct_diff_flags(0,0) | + DIFF_STRIP_EOLCR; + Blob from, to, out; + blob_init(&to, zContent, -1); + blob_init(&from, builtin_text(zDflt), -1); + blob_zero(&out); + @ <input type="submit" name="revert" value="Revert" /><p> + if( diffFlags & DIFF_SIDEBYSIDE ){ + text_diff(&from, &to, &out, 0, diffFlags | DIFF_HTML | DIFF_NOTTOOBIG); + @ %s(blob_str(&out)) + }else{ + text_diff(&from, &to, &out, 0, + diffFlags | DIFF_LINENO | DIFF_HTML | DIFF_NOTTOOBIG); + @ <pre class="udiff"> + @ %s(blob_str(&out)) + @ </pre> + } + blob_reset(&from); + blob_reset(&to); + blob_reset(&out); + } + @ </div></form> style_footer(); db_end_transaction(0); } ADDED src/sqlcmd.c Index: src/sqlcmd.c ================================================================== --- src/sqlcmd.c +++ src/sqlcmd.c @@ -0,0 +1,235 @@ +/* +** Copyright (c) 2010 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This module contains the code that initializes the "sqlite3" command-line +** shell against the repository database. The command-line shell itself +** is a copy of the "shell.c" code from SQLite. This file contains logic +** to initialize the code in shell.c. +*/ +#include "config.h" +#include "sqlcmd.h" +#if defined(FOSSIL_ENABLE_MINIZ) +# define MINIZ_HEADER_FILE_ONLY +# include "miniz.c" +#else +# include <zlib.h> +#endif + +/* +** Implementation of the "content(X)" SQL function. Return the complete +** content of artifact identified by X as a blob. +*/ +static void sqlcmd_content( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + int rid; + Blob cx; + const char *zName; + assert( argc==1 ); + zName = (const char*)sqlite3_value_text(argv[0]); + if( zName==0 ) return; + g.db = sqlite3_context_db_handle(context); + g.repositoryOpen = 1; + rid = name_to_rid(zName); + if( rid==0 ) return; + if( content_get(rid, &cx) ){ + sqlite3_result_blob(context, blob_buffer(&cx), blob_size(&cx), + SQLITE_TRANSIENT); + blob_reset(&cx); + } +} + +/* +** Implementation of the "compress(X)" SQL function. The input X is +** compressed using zLib and the output is returned. +*/ +static void sqlcmd_compress( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const unsigned char *pIn; + unsigned char *pOut; + unsigned int nIn; + unsigned long int nOut; + int rc; + + pIn = sqlite3_value_blob(argv[0]); + nIn = sqlite3_value_bytes(argv[0]); + nOut = 13 + nIn + (nIn+999)/1000; + pOut = sqlite3_malloc( nOut+4 ); + pOut[0] = nIn>>24 & 0xff; + pOut[1] = nIn>>16 & 0xff; + pOut[2] = nIn>>8 & 0xff; + pOut[3] = nIn & 0xff; + rc = compress(&pOut[4], &nOut, pIn, nIn); + if( rc==Z_OK ){ + sqlite3_result_blob(context, pOut, nOut+4, sqlite3_free); + }else{ + sqlite3_free(pOut); + sqlite3_result_error(context, "input cannot be zlib compressed", -1); + } +} + +/* +** Implementation of the "decompress(X)" SQL function. 
The argument X +** is a blob which was obtained from compress(Y). The output will be +** the value Y. +*/ +static void sqlcmd_decompress( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const unsigned char *pIn; + unsigned char *pOut; + unsigned int nIn; + unsigned long int nOut; + int rc; + + pIn = sqlite3_value_blob(argv[0]); + nIn = sqlite3_value_bytes(argv[0]); + nOut = (pIn[0]<<24) + (pIn[1]<<16) + (pIn[2]<<8) + pIn[3]; + pOut = sqlite3_malloc( nOut+1 ); + rc = uncompress(pOut, &nOut, &pIn[4], nIn-4); + if( rc==Z_OK ){ + sqlite3_result_blob(context, pOut, nOut, sqlite3_free); + }else{ + sqlite3_free(pOut); + sqlite3_result_error(context, "input is not zlib compressed", -1); + } +} + +/* +** Add the content(), compress(), and decompress() SQL functions to +** database connection db. +*/ +int add_content_sql_commands(sqlite3 *db){ + sqlite3_create_function(db, "content", 1, SQLITE_UTF8, 0, + sqlcmd_content, 0, 0); + sqlite3_create_function(db, "compress", 1, SQLITE_UTF8, 0, + sqlcmd_compress, 0, 0); + sqlite3_create_function(db, "decompress", 1, SQLITE_UTF8, 0, + sqlcmd_decompress, 0, 0); + return SQLITE_OK; +} + +/* +** This is the "automatic extension" initializer that runs right after +** the connection to the repository database is opened. Set up the +** database connection to be more useful to the human operator. +*/ +static int sqlcmd_autoinit( + sqlite3 *db, + const char **pzErrMsg, + const void *notUsed +){ + add_content_sql_commands(db); + db_add_aux_functions(db); + re_add_sql_func(db); + search_sql_setup(db); + g.zMainDbType = "repository"; + foci_register(db); + g.repositoryOpen = 1; + g.db = db; + return SQLITE_OK; +} + +/* +** COMMAND: sqlite3 +** +** Usage: %fossil sqlite3 ?FOSSIL_OPTS? ?DATABASE? ?SHELL_OPTS? +** +** Run the standalone sqlite3 command-line shell on DATABASE with SHELL_OPTS. +** If DATABASE is omitted, then the repository that serves the working +** directory is opened. See https://www.sqlite.org/cli.html for additional +** information. +** +** Fossil Options: +** +** --no-repository Skip opening the repository database. +** +** WARNING: Careless use of this command can corrupt a Fossil repository +** in ways that are unrecoverable. Be sure you know what you are doing before +** running any SQL commands that modifies the repository database. +** +** The following extensions to the usual SQLite commands are provided: +** +** content(X) Return the contenxt of artifact X. X can be a +** SHA1 hash or prefix or a tag. +** +** compress(X) Compress text X. +** +** decompress(X) Decompress text X. Undoes the work of +** compress(X). +** +** checkin_mtime(X,Y) Return the mtime for the file Y (a BLOB.RID) +** found in check-in X (another BLOB.RID value). +** +** symbolic_name_to_rid(X) Return a the BLOB.RID corresponding to symbolic +** name X. +** +** now() Return the number of seconds since 1970. +** +** REGEXP The REGEXP operator works, unlike in +** standard SQLite. +** +** files_of_checkin The "files_of_check" virtual table is +** available for decoding manifests. 
+** +** Usage example for files_of_checkin: +** +** CREATE VIRTUAL TABLE temp.foci USING files_of_checkin; +** SELECT * FROM foci WHERE checkinID=symbolic_name_to_rid('trunk'); +*/ +void cmd_sqlite3(void){ + int noRepository; + extern int sqlite3_shell(int, char**); + noRepository = find_option("no-repository", 0, 0)!=0; + if( !noRepository ){ + db_find_and_open_repository(OPEN_ANY_SCHEMA, 0); + } + fossil_close(1, noRepository); + sqlite3_shutdown(); + sqlite3_shell(g.argc-1, g.argv+1); + sqlite3_cancel_auto_extension((void(*)(void))sqlcmd_autoinit); + fossil_close(0, noRepository); +} + +/* +** This routine is called by the patched sqlite3 command-line shell in order +** to load the name and database connection for the open Fossil database. +*/ +void fossil_open(const char **pzRepoName){ + sqlite3_auto_extension((void(*)(void))sqlcmd_autoinit); + *pzRepoName = g.zRepositoryName; +} + +/* +** This routine closes the Fossil databases and/or invalidates the global +** state variables that keep track of them. +*/ +void fossil_close(int bDb, int noRepository){ + if( bDb ) db_close(1); + if( noRepository ) g.zRepositoryName = 0; + g.db = 0; + g.zMainDbType = 0; + g.repositoryOpen = 0; + g.localOpen = 0; +} Index: src/sqlite3.c ================================================================== --- src/sqlite3.c +++ src/sqlite3.c @@ -1,12 +1,12 @@ /****************************************************************************** ** This file is an amalgamation of many separate C source files from SQLite -** version 3.6.23. By combining all the individual C code files into this -** single large file, the entire code can be compiled as a one translation +** version 3.11.0. By combining all the individual C code files into this +** single large file, the entire code can be compiled as a single translation ** unit. This allows many compilers to do optimizations that would not be ** possible if the files were compiled separately. Performance improvements -** of 5% are more are commonly seen when SQLite is compiled as a single +** of 5% or more are commonly seen when SQLite is compiled as a single ** translation unit. ** ** This file is all you need to compile SQLite. To use SQLite in other ** programs, you need this file and the "sqlite3.h" header file that defines ** the programming interface to the SQLite library. (If you do not have @@ -20,13 +20,10 @@ #define SQLITE_CORE 1 #define SQLITE_AMALGAMATION 1 #ifndef SQLITE_PRIVATE # define SQLITE_PRIVATE static #endif -#ifndef SQLITE_API -# define SQLITE_API -#endif /************** Begin file sqliteInt.h ***************************************/ /* ** 2001 September 15 ** ** The author disclaims copyright to this source code. In place of @@ -41,10 +38,99 @@ ** */ #ifndef _SQLITEINT_H_ #define _SQLITEINT_H_ +/* +** Include the header file used to customize the compiler options for MSVC. +** This should be done first so that it can successfully prevent spurious +** compiler warnings due to subsequent content in this file and other files +** that are included by this file. +*/ +/************** Include msvc.h in the middle of sqliteInt.h ******************/ +/************** Begin file msvc.h ********************************************/ +/* +** 2015 January 12 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. 
+** +****************************************************************************** +** +** This file contains code that is specific to MSVC. +*/ +#ifndef _MSVC_H_ +#define _MSVC_H_ + +#if defined(_MSC_VER) +#pragma warning(disable : 4054) +#pragma warning(disable : 4055) +#pragma warning(disable : 4100) +#pragma warning(disable : 4127) +#pragma warning(disable : 4130) +#pragma warning(disable : 4152) +#pragma warning(disable : 4189) +#pragma warning(disable : 4206) +#pragma warning(disable : 4210) +#pragma warning(disable : 4232) +#pragma warning(disable : 4244) +#pragma warning(disable : 4305) +#pragma warning(disable : 4306) +#pragma warning(disable : 4702) +#pragma warning(disable : 4706) +#endif /* defined(_MSC_VER) */ + +#endif /* _MSVC_H_ */ + +/************** End of msvc.h ************************************************/ +/************** Continuing where we left off in sqliteInt.h ******************/ + +/* +** Special setup for VxWorks +*/ +/************** Include vxworks.h in the middle of sqliteInt.h ***************/ +/************** Begin file vxworks.h *****************************************/ +/* +** 2015-03-02 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This file contains code that is specific to Wind River's VxWorks +*/ +#if defined(__RTP__) || defined(_WRS_KERNEL) +/* This is VxWorks. Set up things specially for that OS +*/ +#include <vxWorks.h> +#include <pthread.h> /* amalgamator: dontcache */ +#define OS_VXWORKS 1 +#define SQLITE_OS_OTHER 0 +#define SQLITE_HOMEGROWN_RECURSIVE_MUTEX 1 +#define SQLITE_OMIT_LOAD_EXTENSION 1 +#define SQLITE_ENABLE_LOCKING_STYLE 0 +#define HAVE_UTIME 1 +#else +/* This is not VxWorks. */ +#define OS_VXWORKS 0 +#define HAVE_FCHOWN 1 +#define HAVE_READLINK 1 +#define HAVE_LSTAT 1 +#endif /* defined(_WRS_KERNEL) */ + +/************** End of vxworks.h *********************************************/ +/************** Continuing where we left off in sqliteInt.h ******************/ + /* ** These #defines should enable >2GB file support on POSIX if the ** underlying operating system supports it. If the OS lacks ** large file support, or if the OS is windows, these should be no-ops. ** @@ -58,10 +144,15 @@ ** on an older machine (ex: Red Hat 6.0). If you compile on Red Hat 7.2 ** without this option, LFS is enable. But LFS does not exist in the kernel ** in Red Hat 6.0, so the code won't work. Hence, for maximum binary ** portability you should omit LFS. ** +** The previous paragraph was written in 2005. (This paragraph is written +** on 2008-11-28.) These days, all Linux kernels support large files, so +** you should probably leave LFS enabled. But some embedded platforms might +** lack LFS in which case the SQLITE_DISABLE_LFS macro might still be useful. +** ** Similar is true for Mac OS X. LFS is only supported on Mac OS X 9 and later. */ #ifndef SQLITE_DISABLE_LFS # define _LARGE_FILE 1 # ifndef _FILE_OFFSET_BITS @@ -68,10 +159,8756 @@ # define _FILE_OFFSET_BITS 64 # endif # define _LARGEFILE_SOURCE 1 #endif +/* What version of GCC is being used. 
0 means GCC is not being used */ +#ifdef __GNUC__ +# define GCC_VERSION (__GNUC__*1000000+__GNUC_MINOR__*1000+__GNUC_PATCHLEVEL__) +#else +# define GCC_VERSION 0 +#endif + +/* Needed for various definitions... */ +#if defined(__GNUC__) && !defined(_GNU_SOURCE) +# define _GNU_SOURCE +#endif + +#if defined(__OpenBSD__) && !defined(_BSD_SOURCE) +# define _BSD_SOURCE +#endif + +/* +** For MinGW, check to see if we can include the header file containing its +** version information, among other things. Normally, this internal MinGW +** header file would [only] be included automatically by other MinGW header +** files; however, the contained version information is now required by this +** header file to work around binary compatibility issues (see below) and +** this is the only known way to reliably obtain it. This entire #if block +** would be completely unnecessary if there was any other way of detecting +** MinGW via their preprocessor (e.g. if they customized their GCC to define +** some MinGW-specific macros). When compiling for MinGW, either the +** _HAVE_MINGW_H or _HAVE__MINGW_H (note the extra underscore) macro must be +** defined; otherwise, detection of conditions specific to MinGW will be +** disabled. +*/ +#if defined(_HAVE_MINGW_H) +# include "mingw.h" +#elif defined(_HAVE__MINGW_H) +# include "_mingw.h" +#endif + +/* +** For MinGW version 4.x (and higher), check to see if the _USE_32BIT_TIME_T +** define is required to maintain binary compatibility with the MSVC runtime +** library in use (e.g. for Windows XP). +*/ +#if !defined(_USE_32BIT_TIME_T) && !defined(_USE_64BIT_TIME_T) && \ + defined(_WIN32) && !defined(_WIN64) && \ + defined(__MINGW_MAJOR_VERSION) && __MINGW_MAJOR_VERSION >= 4 && \ + defined(__MSVCRT__) +# define _USE_32BIT_TIME_T +#endif + +/* The public SQLite interface. The _FILE_OFFSET_BITS macro must appear +** first in QNX. Also, the _USE_32BIT_TIME_T macro must appear first for +** MinGW. +*/ +/************** Include sqlite3.h in the middle of sqliteInt.h ***************/ +/************** Begin file sqlite3.h *****************************************/ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This header file defines the interface that the SQLite library +** presents to client programs. If a C-function, structure, datatype, +** or constant definition does not appear in this file, then it is +** not a published API of SQLite, is subject to change without +** notice, and should not be referenced by programs that use SQLite. +** +** Some of the definitions that are in this file are marked as +** "experimental". Experimental interfaces are normally new +** features recently added to SQLite. We do not anticipate changes +** to experimental interfaces but reserve the right to make minor changes +** if experience from use "in the wild" suggest such changes are prudent. +** +** The official C-language API documentation for SQLite is derived +** from comments in this file. This file is the authoritative source +** on how SQLite interfaces are supposed to operate. +** +** The name of this file under configuration management is "sqlite.h.in". 
+** The makefile makes some minor changes to this file (such as inserting +** the version number) and changes its name to "sqlite3.h" as +** part of the build process. +*/ +#ifndef _SQLITE3_H_ +#define _SQLITE3_H_ +#include <stdarg.h> /* Needed for the definition of va_list */ + +/* +** Make sure we can call this stuff from C++. +*/ +#if 0 +extern "C" { +#endif + + +/* +** Provide the ability to override linkage features of the interface. +*/ +#ifndef SQLITE_EXTERN +# define SQLITE_EXTERN extern +#endif +#ifndef SQLITE_API +# define SQLITE_API +#endif +#ifndef SQLITE_CDECL +# define SQLITE_CDECL +#endif +#ifndef SQLITE_STDCALL +# define SQLITE_STDCALL +#endif + +/* +** These no-op macros are used in front of interfaces to mark those +** interfaces as either deprecated or experimental. New applications +** should not use deprecated interfaces - they are supported for backwards +** compatibility only. Application writers should be aware that +** experimental interfaces are subject to change in point releases. +** +** These macros used to resolve to various kinds of compiler magic that +** would generate warning messages when they were used. But that +** compiler magic ended up generating such a flurry of bug reports +** that we have taken it all out and gone back to using simple +** noop macros. +*/ +#define SQLITE_DEPRECATED +#define SQLITE_EXPERIMENTAL + +/* +** Ensure these symbols were not defined by some previous header file. +*/ +#ifdef SQLITE_VERSION +# undef SQLITE_VERSION +#endif +#ifdef SQLITE_VERSION_NUMBER +# undef SQLITE_VERSION_NUMBER +#endif + +/* +** CAPI3REF: Compile-Time Library Version Numbers +** +** ^(The [SQLITE_VERSION] C preprocessor macro in the sqlite3.h header +** evaluates to a string literal that is the SQLite version in the +** format "X.Y.Z" where X is the major version number (always 3 for +** SQLite3) and Y is the minor version number and Z is the release number.)^ +** ^(The [SQLITE_VERSION_NUMBER] C preprocessor macro resolves to an integer +** with the value (X*1000000 + Y*1000 + Z) where X, Y, and Z are the same +** numbers used in [SQLITE_VERSION].)^ +** The SQLITE_VERSION_NUMBER for any given release of SQLite will also +** be larger than the release from which it is derived. Either Y will +** be held constant and Z will be incremented or else Y will be incremented +** and Z will be reset to zero. +** +** Since version 3.6.18, SQLite source code has been stored in the +** <a href="http://www.fossil-scm.org/">Fossil configuration management +** system</a>. ^The SQLITE_SOURCE_ID macro evaluates to +** a string which identifies a particular check-in of SQLite +** within its configuration management system. ^The SQLITE_SOURCE_ID +** string contains the date and time of the check-in (UTC) and an SHA1 +** hash of the entire source tree. +** +** See also: [sqlite3_libversion()], +** [sqlite3_libversion_number()], [sqlite3_sourceid()], +** [sqlite_version()] and [sqlite_source_id()]. +*/ +#define SQLITE_VERSION "3.11.0" +#define SQLITE_VERSION_NUMBER 3011000 +#define SQLITE_SOURCE_ID "2016-02-15 17:29:24 3d862f207e3adc00f78066799ac5a8c282430a5f" + +/* +** CAPI3REF: Run-Time Library Version Numbers +** KEYWORDS: sqlite3_version, sqlite3_sourceid +** +** These interfaces provide the same information as the [SQLITE_VERSION], +** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros +** but are associated with the library instead of the header file. 
^(Cautious +** programmers might include assert() statements in their application to +** verify that values returned by these interfaces match the macros in +** the header, and thus ensure that the application is +** compiled with matching library and header files. +** +** <blockquote><pre> +** assert( sqlite3_libversion_number()==SQLITE_VERSION_NUMBER ); +** assert( strcmp(sqlite3_sourceid(),SQLITE_SOURCE_ID)==0 ); +** assert( strcmp(sqlite3_libversion(),SQLITE_VERSION)==0 ); +** </pre></blockquote>)^ +** +** ^The sqlite3_version[] string constant contains the text of [SQLITE_VERSION] +** macro. ^The sqlite3_libversion() function returns a pointer to the +** to the sqlite3_version[] string constant. The sqlite3_libversion() +** function is provided for use in DLLs since DLL users usually do not have +** direct access to string constants within the DLL. ^The +** sqlite3_libversion_number() function returns an integer equal to +** [SQLITE_VERSION_NUMBER]. ^The sqlite3_sourceid() function returns +** a pointer to a string constant whose value is the same as the +** [SQLITE_SOURCE_ID] C preprocessor macro. +** +** See also: [sqlite_version()] and [sqlite_source_id()]. +*/ +SQLITE_API const char sqlite3_version[] = SQLITE_VERSION; +SQLITE_API const char *SQLITE_STDCALL sqlite3_libversion(void); +SQLITE_API const char *SQLITE_STDCALL sqlite3_sourceid(void); +SQLITE_API int SQLITE_STDCALL sqlite3_libversion_number(void); + +/* +** CAPI3REF: Run-Time Library Compilation Options Diagnostics +** +** ^The sqlite3_compileoption_used() function returns 0 or 1 +** indicating whether the specified option was defined at +** compile time. ^The SQLITE_ prefix may be omitted from the +** option name passed to sqlite3_compileoption_used(). +** +** ^The sqlite3_compileoption_get() function allows iterating +** over the list of options that were defined at compile time by +** returning the N-th compile time option string. ^If N is out of range, +** sqlite3_compileoption_get() returns a NULL pointer. ^The SQLITE_ +** prefix is omitted from any strings returned by +** sqlite3_compileoption_get(). +** +** ^Support for the diagnostic functions sqlite3_compileoption_used() +** and sqlite3_compileoption_get() may be omitted by specifying the +** [SQLITE_OMIT_COMPILEOPTION_DIAGS] option at compile time. +** +** See also: SQL functions [sqlite_compileoption_used()] and +** [sqlite_compileoption_get()] and the [compile_options pragma]. +*/ +#ifndef SQLITE_OMIT_COMPILEOPTION_DIAGS +SQLITE_API int SQLITE_STDCALL sqlite3_compileoption_used(const char *zOptName); +SQLITE_API const char *SQLITE_STDCALL sqlite3_compileoption_get(int N); +#endif + +/* +** CAPI3REF: Test To See If The Library Is Threadsafe +** +** ^The sqlite3_threadsafe() function returns zero if and only if +** SQLite was compiled with mutexing code omitted due to the +** [SQLITE_THREADSAFE] compile-time option being set to 0. +** +** SQLite can be compiled with or without mutexes. When +** the [SQLITE_THREADSAFE] C preprocessor macro is 1 or 2, mutexes +** are enabled and SQLite is threadsafe. When the +** [SQLITE_THREADSAFE] macro is 0, +** the mutexes are omitted. Without the mutexes, it is not safe +** to use SQLite concurrently from more than one thread. +** +** Enabling mutexes incurs a measurable performance penalty. +** So if speed is of utmost importance, it makes sense to disable +** the mutexes. But for maximum safety, mutexes should be enabled. +** ^The default behavior is for mutexes to be enabled. 
+** +** This interface can be used by an application to make sure that the +** version of SQLite that it is linking against was compiled with +** the desired setting of the [SQLITE_THREADSAFE] macro. +** +** This interface only reports on the compile-time mutex setting +** of the [SQLITE_THREADSAFE] flag. If SQLite is compiled with +** SQLITE_THREADSAFE=1 or =2 then mutexes are enabled by default but +** can be fully or partially disabled using a call to [sqlite3_config()] +** with the verbs [SQLITE_CONFIG_SINGLETHREAD], [SQLITE_CONFIG_MULTITHREAD], +** or [SQLITE_CONFIG_SERIALIZED]. ^(The return value of the +** sqlite3_threadsafe() function shows only the compile-time setting of +** thread safety, not any run-time changes to that setting made by +** sqlite3_config(). In other words, the return value from sqlite3_threadsafe() +** is unchanged by calls to sqlite3_config().)^ +** +** See the [threading mode] documentation for additional information. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_threadsafe(void); + +/* +** CAPI3REF: Database Connection Handle +** KEYWORDS: {database connection} {database connections} +** +** Each open SQLite database is represented by a pointer to an instance of +** the opaque structure named "sqlite3". It is useful to think of an sqlite3 +** pointer as an object. The [sqlite3_open()], [sqlite3_open16()], and +** [sqlite3_open_v2()] interfaces are its constructors, and [sqlite3_close()] +** and [sqlite3_close_v2()] are its destructors. There are many other +** interfaces (such as +** [sqlite3_prepare_v2()], [sqlite3_create_function()], and +** [sqlite3_busy_timeout()] to name but three) that are methods on an +** sqlite3 object. +*/ +typedef struct sqlite3 sqlite3; + +/* +** CAPI3REF: 64-Bit Integer Types +** KEYWORDS: sqlite_int64 sqlite_uint64 +** +** Because there is no cross-platform way to specify 64-bit integer types +** SQLite includes typedefs for 64-bit signed and unsigned integers. +** +** The sqlite3_int64 and sqlite3_uint64 are the preferred type definitions. +** The sqlite_int64 and sqlite_uint64 types are supported for backwards +** compatibility only. +** +** ^The sqlite3_int64 and sqlite_int64 types can store integer values +** between -9223372036854775808 and +9223372036854775807 inclusive. ^The +** sqlite3_uint64 and sqlite_uint64 types can store integer values +** between 0 and +18446744073709551615 inclusive. +*/ +#ifdef SQLITE_INT64_TYPE + typedef SQLITE_INT64_TYPE sqlite_int64; + typedef unsigned SQLITE_INT64_TYPE sqlite_uint64; +#elif defined(_MSC_VER) || defined(__BORLANDC__) + typedef __int64 sqlite_int64; + typedef unsigned __int64 sqlite_uint64; +#else + typedef long long int sqlite_int64; + typedef unsigned long long int sqlite_uint64; +#endif +typedef sqlite_int64 sqlite3_int64; +typedef sqlite_uint64 sqlite3_uint64; + +/* +** If compiling for a processor that lacks floating point support, +** substitute integer for floating-point. +*/ +#ifdef SQLITE_OMIT_FLOATING_POINT +# define double sqlite3_int64 +#endif + +/* +** CAPI3REF: Closing A Database Connection +** DESTRUCTOR: sqlite3 +** +** ^The sqlite3_close() and sqlite3_close_v2() routines are destructors +** for the [sqlite3] object. +** ^Calls to sqlite3_close() and sqlite3_close_v2() return [SQLITE_OK] if +** the [sqlite3] object is successfully destroyed and all associated +** resources are deallocated. 
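+**
+** For illustration, a sketch of closing a connection ("db" is assumed to
+** be an open database connection obtained earlier from sqlite3_open()):
+**
+** <blockquote><pre>
+** int rc = sqlite3_close(db);
+** if( rc==SQLITE_BUSY ){
+**   /* prepared statements or BLOB handles remain open; finalize or
+**   ** close them and retry, or use sqlite3_close_v2() instead */
+** }
+** </pre></blockquote>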
+** +** ^If the database connection is associated with unfinalized prepared +** statements or unfinished sqlite3_backup objects then sqlite3_close() +** will leave the database connection open and return [SQLITE_BUSY]. +** ^If sqlite3_close_v2() is called with unfinalized prepared statements +** and/or unfinished sqlite3_backups, then the database connection becomes +** an unusable "zombie" which will automatically be deallocated when the +** last prepared statement is finalized or the last sqlite3_backup is +** finished. The sqlite3_close_v2() interface is intended for use with +** host languages that are garbage collected, and where the order in which +** destructors are called is arbitrary. +** +** Applications should [sqlite3_finalize | finalize] all [prepared statements], +** [sqlite3_blob_close | close] all [BLOB handles], and +** [sqlite3_backup_finish | finish] all [sqlite3_backup] objects associated +** with the [sqlite3] object prior to attempting to close the object. ^If +** sqlite3_close_v2() is called on a [database connection] that still has +** outstanding [prepared statements], [BLOB handles], and/or +** [sqlite3_backup] objects then it returns [SQLITE_OK] and the deallocation +** of resources is deferred until all [prepared statements], [BLOB handles], +** and [sqlite3_backup] objects are also destroyed. +** +** ^If an [sqlite3] object is destroyed while a transaction is open, +** the transaction is automatically rolled back. +** +** The C parameter to [sqlite3_close(C)] and [sqlite3_close_v2(C)] +** must be either a NULL +** pointer or an [sqlite3] object pointer obtained +** from [sqlite3_open()], [sqlite3_open16()], or +** [sqlite3_open_v2()], and not previously closed. +** ^Calling sqlite3_close() or sqlite3_close_v2() with a NULL pointer +** argument is a harmless no-op. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_close(sqlite3*); +SQLITE_API int SQLITE_STDCALL sqlite3_close_v2(sqlite3*); + +/* +** The type for a callback function. +** This is legacy and deprecated. It is included for historical +** compatibility and is not documented. +*/ +typedef int (*sqlite3_callback)(void*,int,char**, char**); + +/* +** CAPI3REF: One-Step Query Execution Interface +** METHOD: sqlite3 +** +** The sqlite3_exec() interface is a convenience wrapper around +** [sqlite3_prepare_v2()], [sqlite3_step()], and [sqlite3_finalize()], +** that allows an application to run multiple statements of SQL +** without having to use a lot of C code. +** +** ^The sqlite3_exec() interface runs zero or more UTF-8 encoded, +** semicolon-separate SQL statements passed into its 2nd argument, +** in the context of the [database connection] passed in as its 1st +** argument. ^If the callback function of the 3rd argument to +** sqlite3_exec() is not NULL, then it is invoked for each result row +** coming out of the evaluated SQL statements. ^The 4th argument to +** sqlite3_exec() is relayed through to the 1st argument of each +** callback invocation. ^If the callback pointer to sqlite3_exec() +** is NULL, then no callback is ever invoked and result rows are +** ignored. +** +** ^If an error occurs while evaluating the SQL statements passed into +** sqlite3_exec(), then execution of the current statement stops and +** subsequent statements are skipped. ^If the 5th parameter to sqlite3_exec() +** is not NULL then any error message is written into memory obtained +** from [sqlite3_malloc()] and passed back through the 5th parameter. 
+** To avoid memory leaks, the application should invoke [sqlite3_free()] +** on error message strings returned through the 5th parameter of +** sqlite3_exec() after the error message string is no longer needed. +** ^If the 5th parameter to sqlite3_exec() is not NULL and no errors +** occur, then sqlite3_exec() sets the pointer in its 5th parameter to +** NULL before returning. +** +** ^If an sqlite3_exec() callback returns non-zero, the sqlite3_exec() +** routine returns SQLITE_ABORT without invoking the callback again and +** without running any subsequent SQL statements. +** +** ^The 2nd argument to the sqlite3_exec() callback function is the +** number of columns in the result. ^The 3rd argument to the sqlite3_exec() +** callback is an array of pointers to strings obtained as if from +** [sqlite3_column_text()], one for each column. ^If an element of a +** result row is NULL then the corresponding string pointer for the +** sqlite3_exec() callback is a NULL pointer. ^The 4th argument to the +** sqlite3_exec() callback is an array of pointers to strings where each +** entry represents the name of corresponding result column as obtained +** from [sqlite3_column_name()]. +** +** ^If the 2nd parameter to sqlite3_exec() is a NULL pointer, a pointer +** to an empty string, or a pointer that contains only whitespace and/or +** SQL comments, then no SQL statements are evaluated and the database +** is not changed. +** +** Restrictions: +** +** <ul> +** <li> The application must ensure that the 1st parameter to sqlite3_exec() +** is a valid and open [database connection]. +** <li> The application must not close the [database connection] specified by +** the 1st parameter to sqlite3_exec() while sqlite3_exec() is running. +** <li> The application must not modify the SQL statement text passed into +** the 2nd parameter of sqlite3_exec() while sqlite3_exec() is running. +** </ul> +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_exec( + sqlite3*, /* An open database */ + const char *sql, /* SQL to be evaluated */ + int (*callback)(void*,int,char**,char**), /* Callback function */ + void *, /* 1st argument to callback */ + char **errmsg /* Error msg written here */ +); + +/* +** CAPI3REF: Result Codes +** KEYWORDS: {result code definitions} +** +** Many SQLite functions return an integer result code from the set shown +** here in order to indicate success or failure. +** +** New error codes may be added in future versions of SQLite. 
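+**
+** A short sketch of checking a result code ("db" is assumed to be an
+** open database connection; the table name is illustrative only):
+**
+** <blockquote><pre>
+** char *zErr = 0;
+** int rc = sqlite3_exec(db, "CREATE TABLE t1(a,b);", 0, 0, &zErr);
+** if( rc!=SQLITE_OK ){
+**   /* report zErr, then release it to avoid a memory leak */
+**   sqlite3_free(zErr);
+** }
+** </pre></blockquote>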
+** +** See also: [extended result code definitions] +*/ +#define SQLITE_OK 0 /* Successful result */ +/* beginning-of-error-codes */ +#define SQLITE_ERROR 1 /* SQL error or missing database */ +#define SQLITE_INTERNAL 2 /* Internal logic error in SQLite */ +#define SQLITE_PERM 3 /* Access permission denied */ +#define SQLITE_ABORT 4 /* Callback routine requested an abort */ +#define SQLITE_BUSY 5 /* The database file is locked */ +#define SQLITE_LOCKED 6 /* A table in the database is locked */ +#define SQLITE_NOMEM 7 /* A malloc() failed */ +#define SQLITE_READONLY 8 /* Attempt to write a readonly database */ +#define SQLITE_INTERRUPT 9 /* Operation terminated by sqlite3_interrupt()*/ +#define SQLITE_IOERR 10 /* Some kind of disk I/O error occurred */ +#define SQLITE_CORRUPT 11 /* The database disk image is malformed */ +#define SQLITE_NOTFOUND 12 /* Unknown opcode in sqlite3_file_control() */ +#define SQLITE_FULL 13 /* Insertion failed because database is full */ +#define SQLITE_CANTOPEN 14 /* Unable to open the database file */ +#define SQLITE_PROTOCOL 15 /* Database lock protocol error */ +#define SQLITE_EMPTY 16 /* Database is empty */ +#define SQLITE_SCHEMA 17 /* The database schema changed */ +#define SQLITE_TOOBIG 18 /* String or BLOB exceeds size limit */ +#define SQLITE_CONSTRAINT 19 /* Abort due to constraint violation */ +#define SQLITE_MISMATCH 20 /* Data type mismatch */ +#define SQLITE_MISUSE 21 /* Library used incorrectly */ +#define SQLITE_NOLFS 22 /* Uses OS features not supported on host */ +#define SQLITE_AUTH 23 /* Authorization denied */ +#define SQLITE_FORMAT 24 /* Auxiliary database format error */ +#define SQLITE_RANGE 25 /* 2nd parameter to sqlite3_bind out of range */ +#define SQLITE_NOTADB 26 /* File opened that is not a database file */ +#define SQLITE_NOTICE 27 /* Notifications from sqlite3_log() */ +#define SQLITE_WARNING 28 /* Warnings from sqlite3_log() */ +#define SQLITE_ROW 100 /* sqlite3_step() has another row ready */ +#define SQLITE_DONE 101 /* sqlite3_step() has finished executing */ +/* end-of-error-codes */ + +/* +** CAPI3REF: Extended Result Codes +** KEYWORDS: {extended result code definitions} +** +** In its default configuration, SQLite API routines return one of 30 integer +** [result codes]. However, experience has shown that many of +** these result codes are too coarse-grained. They do not provide as +** much information about problems as programmers might like. In an effort to +** address this, newer versions of SQLite (version 3.3.8 and later) include +** support for additional result codes that provide more detailed information +** about errors. These [extended result codes] are enabled or disabled +** on a per database connection basis using the +** [sqlite3_extended_result_codes()] API. Or, the extended code for +** the most recent error can be obtained using +** [sqlite3_extended_errcode()]. 
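+**
+** A sketch of enabling and reading extended result codes ("db" is
+** assumed to be an open database connection, and some operation on it
+** is assumed to have failed):
+**
+** <blockquote><pre>
+** sqlite3_extended_result_codes(db, 1);
+** /* ... an operation on db fails ... */
+** if( sqlite3_extended_errcode(db)==SQLITE_IOERR_FSYNC ){
+**   /* the disk I/O error was specifically a failed fsync() */
+** }
+** </pre></blockquote>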
+*/ +#define SQLITE_IOERR_READ (SQLITE_IOERR | (1<<8)) +#define SQLITE_IOERR_SHORT_READ (SQLITE_IOERR | (2<<8)) +#define SQLITE_IOERR_WRITE (SQLITE_IOERR | (3<<8)) +#define SQLITE_IOERR_FSYNC (SQLITE_IOERR | (4<<8)) +#define SQLITE_IOERR_DIR_FSYNC (SQLITE_IOERR | (5<<8)) +#define SQLITE_IOERR_TRUNCATE (SQLITE_IOERR | (6<<8)) +#define SQLITE_IOERR_FSTAT (SQLITE_IOERR | (7<<8)) +#define SQLITE_IOERR_UNLOCK (SQLITE_IOERR | (8<<8)) +#define SQLITE_IOERR_RDLOCK (SQLITE_IOERR | (9<<8)) +#define SQLITE_IOERR_DELETE (SQLITE_IOERR | (10<<8)) +#define SQLITE_IOERR_BLOCKED (SQLITE_IOERR | (11<<8)) +#define SQLITE_IOERR_NOMEM (SQLITE_IOERR | (12<<8)) +#define SQLITE_IOERR_ACCESS (SQLITE_IOERR | (13<<8)) +#define SQLITE_IOERR_CHECKRESERVEDLOCK (SQLITE_IOERR | (14<<8)) +#define SQLITE_IOERR_LOCK (SQLITE_IOERR | (15<<8)) +#define SQLITE_IOERR_CLOSE (SQLITE_IOERR | (16<<8)) +#define SQLITE_IOERR_DIR_CLOSE (SQLITE_IOERR | (17<<8)) +#define SQLITE_IOERR_SHMOPEN (SQLITE_IOERR | (18<<8)) +#define SQLITE_IOERR_SHMSIZE (SQLITE_IOERR | (19<<8)) +#define SQLITE_IOERR_SHMLOCK (SQLITE_IOERR | (20<<8)) +#define SQLITE_IOERR_SHMMAP (SQLITE_IOERR | (21<<8)) +#define SQLITE_IOERR_SEEK (SQLITE_IOERR | (22<<8)) +#define SQLITE_IOERR_DELETE_NOENT (SQLITE_IOERR | (23<<8)) +#define SQLITE_IOERR_MMAP (SQLITE_IOERR | (24<<8)) +#define SQLITE_IOERR_GETTEMPPATH (SQLITE_IOERR | (25<<8)) +#define SQLITE_IOERR_CONVPATH (SQLITE_IOERR | (26<<8)) +#define SQLITE_IOERR_VNODE (SQLITE_IOERR | (27<<8)) +#define SQLITE_IOERR_AUTH (SQLITE_IOERR | (28<<8)) +#define SQLITE_LOCKED_SHAREDCACHE (SQLITE_LOCKED | (1<<8)) +#define SQLITE_BUSY_RECOVERY (SQLITE_BUSY | (1<<8)) +#define SQLITE_BUSY_SNAPSHOT (SQLITE_BUSY | (2<<8)) +#define SQLITE_CANTOPEN_NOTEMPDIR (SQLITE_CANTOPEN | (1<<8)) +#define SQLITE_CANTOPEN_ISDIR (SQLITE_CANTOPEN | (2<<8)) +#define SQLITE_CANTOPEN_FULLPATH (SQLITE_CANTOPEN | (3<<8)) +#define SQLITE_CANTOPEN_CONVPATH (SQLITE_CANTOPEN | (4<<8)) +#define SQLITE_CORRUPT_VTAB (SQLITE_CORRUPT | (1<<8)) +#define SQLITE_READONLY_RECOVERY (SQLITE_READONLY | (1<<8)) +#define SQLITE_READONLY_CANTLOCK (SQLITE_READONLY | (2<<8)) +#define SQLITE_READONLY_ROLLBACK (SQLITE_READONLY | (3<<8)) +#define SQLITE_READONLY_DBMOVED (SQLITE_READONLY | (4<<8)) +#define SQLITE_ABORT_ROLLBACK (SQLITE_ABORT | (2<<8)) +#define SQLITE_CONSTRAINT_CHECK (SQLITE_CONSTRAINT | (1<<8)) +#define SQLITE_CONSTRAINT_COMMITHOOK (SQLITE_CONSTRAINT | (2<<8)) +#define SQLITE_CONSTRAINT_FOREIGNKEY (SQLITE_CONSTRAINT | (3<<8)) +#define SQLITE_CONSTRAINT_FUNCTION (SQLITE_CONSTRAINT | (4<<8)) +#define SQLITE_CONSTRAINT_NOTNULL (SQLITE_CONSTRAINT | (5<<8)) +#define SQLITE_CONSTRAINT_PRIMARYKEY (SQLITE_CONSTRAINT | (6<<8)) +#define SQLITE_CONSTRAINT_TRIGGER (SQLITE_CONSTRAINT | (7<<8)) +#define SQLITE_CONSTRAINT_UNIQUE (SQLITE_CONSTRAINT | (8<<8)) +#define SQLITE_CONSTRAINT_VTAB (SQLITE_CONSTRAINT | (9<<8)) +#define SQLITE_CONSTRAINT_ROWID (SQLITE_CONSTRAINT |(10<<8)) +#define SQLITE_NOTICE_RECOVER_WAL (SQLITE_NOTICE | (1<<8)) +#define SQLITE_NOTICE_RECOVER_ROLLBACK (SQLITE_NOTICE | (2<<8)) +#define SQLITE_WARNING_AUTOINDEX (SQLITE_WARNING | (1<<8)) +#define SQLITE_AUTH_USER (SQLITE_AUTH | (1<<8)) + +/* +** CAPI3REF: Flags For File Open Operations +** +** These bit values are intended for use in the +** 3rd parameter to the [sqlite3_open_v2()] interface and +** in the 4th parameter to the [sqlite3_vfs.xOpen] method. 
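+**
+** For example, a typical sqlite3_open_v2() call combines these flags
+** with a bitwise OR (the filename "data.db" is illustrative only; the
+** NULL final argument selects the default VFS):
+**
+** <blockquote><pre>
+** sqlite3 *db = 0;
+** int rc = sqlite3_open_v2("data.db", &db,
+**                          SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, 0);
+** </pre></blockquote>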
+*/
+#define SQLITE_OPEN_READONLY         0x00000001  /* Ok for sqlite3_open_v2() */
+#define SQLITE_OPEN_READWRITE        0x00000002  /* Ok for sqlite3_open_v2() */
+#define SQLITE_OPEN_CREATE           0x00000004  /* Ok for sqlite3_open_v2() */
+#define SQLITE_OPEN_DELETEONCLOSE    0x00000008  /* VFS only */
+#define SQLITE_OPEN_EXCLUSIVE        0x00000010  /* VFS only */
+#define SQLITE_OPEN_AUTOPROXY        0x00000020  /* VFS only */
+#define SQLITE_OPEN_URI              0x00000040  /* Ok for sqlite3_open_v2() */
+#define SQLITE_OPEN_MEMORY           0x00000080  /* Ok for sqlite3_open_v2() */
+#define SQLITE_OPEN_MAIN_DB          0x00000100  /* VFS only */
+#define SQLITE_OPEN_TEMP_DB          0x00000200  /* VFS only */
+#define SQLITE_OPEN_TRANSIENT_DB     0x00000400  /* VFS only */
+#define SQLITE_OPEN_MAIN_JOURNAL     0x00000800  /* VFS only */
+#define SQLITE_OPEN_TEMP_JOURNAL     0x00001000  /* VFS only */
+#define SQLITE_OPEN_SUBJOURNAL       0x00002000  /* VFS only */
+#define SQLITE_OPEN_MASTER_JOURNAL   0x00004000  /* VFS only */
+#define SQLITE_OPEN_NOMUTEX          0x00008000  /* Ok for sqlite3_open_v2() */
+#define SQLITE_OPEN_FULLMUTEX        0x00010000  /* Ok for sqlite3_open_v2() */
+#define SQLITE_OPEN_SHAREDCACHE      0x00020000  /* Ok for sqlite3_open_v2() */
+#define SQLITE_OPEN_PRIVATECACHE     0x00040000  /* Ok for sqlite3_open_v2() */
+#define SQLITE_OPEN_WAL              0x00080000  /* VFS only */
+
+/* Reserved: 0x00F00000 */
+
+/*
+** CAPI3REF: Device Characteristics
+**
+** The xDeviceCharacteristics method of the [sqlite3_io_methods]
+** object returns an integer which is a vector of these
+** bit values expressing I/O characteristics of the mass storage
+** device that holds the file that the [sqlite3_io_methods]
+** refers to.
+**
+** The SQLITE_IOCAP_ATOMIC property means that all writes of
+** any size are atomic.  The SQLITE_IOCAP_ATOMICnnn values
+** mean that writes of blocks that are nnn bytes in size and
+** are aligned to an address which is an integer multiple of
+** nnn are atomic.  The SQLITE_IOCAP_SAFE_APPEND value means
+** that when data is appended to a file, the data is appended
+** first then the size of the file is extended, never the other
+** way around.  The SQLITE_IOCAP_SEQUENTIAL property means that
+** information is written to disk in the same order as calls
+** to xWrite().  The SQLITE_IOCAP_POWERSAFE_OVERWRITE property means that
+** after reboot following a crash or power loss, the only bytes in a
+** file that were written at the application level might have changed
+** and that adjacent bytes, even bytes within the same sector are
+** guaranteed to be unchanged.  The SQLITE_IOCAP_UNDELETABLE_WHEN_OPEN
+** flag indicates that a file cannot be deleted when open.  The
+** SQLITE_IOCAP_IMMUTABLE flag indicates that the file is on
+** read-only media and cannot be changed even by processes with
+** elevated privileges.
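+**
+** As a sketch, a VFS client might test these bits as follows (the open
+** sqlite3_file pointer "pFile" is assumed to come from a prior xOpen):
+**
+** <blockquote><pre>
+** int dc = pFile->pMethods->xDeviceCharacteristics(pFile);
+** if( dc & SQLITE_IOCAP_POWERSAFE_OVERWRITE ){
+**   /* overwriting part of a sector cannot corrupt neighboring bytes */
+** }
+** </pre></blockquote>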
+*/ +#define SQLITE_IOCAP_ATOMIC 0x00000001 +#define SQLITE_IOCAP_ATOMIC512 0x00000002 +#define SQLITE_IOCAP_ATOMIC1K 0x00000004 +#define SQLITE_IOCAP_ATOMIC2K 0x00000008 +#define SQLITE_IOCAP_ATOMIC4K 0x00000010 +#define SQLITE_IOCAP_ATOMIC8K 0x00000020 +#define SQLITE_IOCAP_ATOMIC16K 0x00000040 +#define SQLITE_IOCAP_ATOMIC32K 0x00000080 +#define SQLITE_IOCAP_ATOMIC64K 0x00000100 +#define SQLITE_IOCAP_SAFE_APPEND 0x00000200 +#define SQLITE_IOCAP_SEQUENTIAL 0x00000400 +#define SQLITE_IOCAP_UNDELETABLE_WHEN_OPEN 0x00000800 +#define SQLITE_IOCAP_POWERSAFE_OVERWRITE 0x00001000 +#define SQLITE_IOCAP_IMMUTABLE 0x00002000 + +/* +** CAPI3REF: File Locking Levels +** +** SQLite uses one of these integer values as the second +** argument to calls it makes to the xLock() and xUnlock() methods +** of an [sqlite3_io_methods] object. +*/ +#define SQLITE_LOCK_NONE 0 +#define SQLITE_LOCK_SHARED 1 +#define SQLITE_LOCK_RESERVED 2 +#define SQLITE_LOCK_PENDING 3 +#define SQLITE_LOCK_EXCLUSIVE 4 + +/* +** CAPI3REF: Synchronization Type Flags +** +** When SQLite invokes the xSync() method of an +** [sqlite3_io_methods] object it uses a combination of +** these integer values as the second argument. +** +** When the SQLITE_SYNC_DATAONLY flag is used, it means that the +** sync operation only needs to flush data to mass storage. Inode +** information need not be flushed. If the lower four bits of the flag +** equal SQLITE_SYNC_NORMAL, that means to use normal fsync() semantics. +** If the lower four bits equal SQLITE_SYNC_FULL, that means +** to use Mac OS X style fullsync instead of fsync(). +** +** Do not confuse the SQLITE_SYNC_NORMAL and SQLITE_SYNC_FULL flags +** with the [PRAGMA synchronous]=NORMAL and [PRAGMA synchronous]=FULL +** settings. The [synchronous pragma] determines when calls to the +** xSync VFS method occur and applies uniformly across all platforms. +** The SQLITE_SYNC_NORMAL and SQLITE_SYNC_FULL flags determine how +** energetic or rigorous or forceful the sync operations are and +** only make a difference on Mac OSX for the default SQLite code. +** (Third-party VFS implementations might also make the distinction +** between SQLITE_SYNC_NORMAL and SQLITE_SYNC_FULL, but among the +** operating systems natively supported by SQLite, only Mac OSX +** cares about the difference.) +*/ +#define SQLITE_SYNC_NORMAL 0x00002 +#define SQLITE_SYNC_FULL 0x00003 +#define SQLITE_SYNC_DATAONLY 0x00010 + +/* +** CAPI3REF: OS Interface Open File Handle +** +** An [sqlite3_file] object represents an open file in the +** [sqlite3_vfs | OS interface layer]. Individual OS interface +** implementations will +** want to subclass this object by appending additional fields +** for their own use. The pMethods entry is a pointer to an +** [sqlite3_io_methods] object that defines methods for performing +** I/O operations on the open file. +*/ +typedef struct sqlite3_file sqlite3_file; +struct sqlite3_file { + const struct sqlite3_io_methods *pMethods; /* Methods for an open file */ +}; + +/* +** CAPI3REF: OS Interface File Virtual Methods Object +** +** Every file opened by the [sqlite3_vfs.xOpen] method populates an +** [sqlite3_file] object (or, more commonly, a subclass of the +** [sqlite3_file] object) with a pointer to an instance of this object. +** This object defines the methods used to perform various operations +** against the open file represented by the [sqlite3_file] object. 
+** +** If the [sqlite3_vfs.xOpen] method sets the sqlite3_file.pMethods element +** to a non-NULL pointer, then the sqlite3_io_methods.xClose method +** may be invoked even if the [sqlite3_vfs.xOpen] reported that it failed. The +** only way to prevent a call to xClose following a failed [sqlite3_vfs.xOpen] +** is for the [sqlite3_vfs.xOpen] to set the sqlite3_file.pMethods element +** to NULL. +** +** The flags argument to xSync may be one of [SQLITE_SYNC_NORMAL] or +** [SQLITE_SYNC_FULL]. The first choice is the normal fsync(). +** The second choice is a Mac OS X style fullsync. The [SQLITE_SYNC_DATAONLY] +** flag may be ORed in to indicate that only the data of the file +** and not its inode needs to be synced. +** +** The integer values to xLock() and xUnlock() are one of +** <ul> +** <li> [SQLITE_LOCK_NONE], +** <li> [SQLITE_LOCK_SHARED], +** <li> [SQLITE_LOCK_RESERVED], +** <li> [SQLITE_LOCK_PENDING], or +** <li> [SQLITE_LOCK_EXCLUSIVE]. +** </ul> +** xLock() increases the lock. xUnlock() decreases the lock. +** The xCheckReservedLock() method checks whether any database connection, +** either in this process or in some other process, is holding a RESERVED, +** PENDING, or EXCLUSIVE lock on the file. It returns true +** if such a lock exists and false otherwise. +** +** The xFileControl() method is a generic interface that allows custom +** VFS implementations to directly control an open file using the +** [sqlite3_file_control()] interface. The second "op" argument is an +** integer opcode. The third argument is a generic pointer intended to +** point to a structure that may contain arguments or space in which to +** write return values. Potential uses for xFileControl() might be +** functions to enable blocking locks with timeouts, to change the +** locking strategy (for example to use dot-file locks), to inquire +** about the status of a lock, or to break stale locks. The SQLite +** core reserves all opcodes less than 100 for its own use. +** A [file control opcodes | list of opcodes] less than 100 is available. +** Applications that define a custom xFileControl method should use opcodes +** greater than 100 to avoid conflicts. VFS implementations should +** return [SQLITE_NOTFOUND] for file control opcodes that they do not +** recognize. +** +** The xSectorSize() method returns the sector size of the +** device that underlies the file. The sector size is the +** minimum write that can be performed without disturbing +** other bytes in the file. The xDeviceCharacteristics() +** method returns a bit vector describing behaviors of the +** underlying device: +** +** <ul> +** <li> [SQLITE_IOCAP_ATOMIC] +** <li> [SQLITE_IOCAP_ATOMIC512] +** <li> [SQLITE_IOCAP_ATOMIC1K] +** <li> [SQLITE_IOCAP_ATOMIC2K] +** <li> [SQLITE_IOCAP_ATOMIC4K] +** <li> [SQLITE_IOCAP_ATOMIC8K] +** <li> [SQLITE_IOCAP_ATOMIC16K] +** <li> [SQLITE_IOCAP_ATOMIC32K] +** <li> [SQLITE_IOCAP_ATOMIC64K] +** <li> [SQLITE_IOCAP_SAFE_APPEND] +** <li> [SQLITE_IOCAP_SEQUENTIAL] +** </ul> +** +** The SQLITE_IOCAP_ATOMIC property means that all writes of +** any size are atomic. The SQLITE_IOCAP_ATOMICnnn values +** mean that writes of blocks that are nnn bytes in size and +** are aligned to an address which is an integer multiple of +** nnn are atomic. The SQLITE_IOCAP_SAFE_APPEND value means +** that when data is appended to a file, the data is appended +** first then the size of the file is extended, never the other +** way around. 
The SQLITE_IOCAP_SEQUENTIAL property means that +** information is written to disk in the same order as calls +** to xWrite(). +** +** If xRead() returns SQLITE_IOERR_SHORT_READ it must also fill +** in the unread portions of the buffer with zeros. A VFS that +** fails to zero-fill short reads might seem to work. However, +** failure to zero-fill short reads will eventually lead to +** database corruption. +*/ +typedef struct sqlite3_io_methods sqlite3_io_methods; +struct sqlite3_io_methods { + int iVersion; + int (*xClose)(sqlite3_file*); + int (*xRead)(sqlite3_file*, void*, int iAmt, sqlite3_int64 iOfst); + int (*xWrite)(sqlite3_file*, const void*, int iAmt, sqlite3_int64 iOfst); + int (*xTruncate)(sqlite3_file*, sqlite3_int64 size); + int (*xSync)(sqlite3_file*, int flags); + int (*xFileSize)(sqlite3_file*, sqlite3_int64 *pSize); + int (*xLock)(sqlite3_file*, int); + int (*xUnlock)(sqlite3_file*, int); + int (*xCheckReservedLock)(sqlite3_file*, int *pResOut); + int (*xFileControl)(sqlite3_file*, int op, void *pArg); + int (*xSectorSize)(sqlite3_file*); + int (*xDeviceCharacteristics)(sqlite3_file*); + /* Methods above are valid for version 1 */ + int (*xShmMap)(sqlite3_file*, int iPg, int pgsz, int, void volatile**); + int (*xShmLock)(sqlite3_file*, int offset, int n, int flags); + void (*xShmBarrier)(sqlite3_file*); + int (*xShmUnmap)(sqlite3_file*, int deleteFlag); + /* Methods above are valid for version 2 */ + int (*xFetch)(sqlite3_file*, sqlite3_int64 iOfst, int iAmt, void **pp); + int (*xUnfetch)(sqlite3_file*, sqlite3_int64 iOfst, void *p); + /* Methods above are valid for version 3 */ + /* Additional methods may be added in future releases */ +}; + +/* +** CAPI3REF: Standard File Control Opcodes +** KEYWORDS: {file control opcodes} {file control opcode} +** +** These integer constants are opcodes for the xFileControl method +** of the [sqlite3_io_methods] object and for the [sqlite3_file_control()] +** interface. +** +** <ul> +** <li>[[SQLITE_FCNTL_LOCKSTATE]] +** The [SQLITE_FCNTL_LOCKSTATE] opcode is used for debugging. This +** opcode causes the xFileControl method to write the current state of +** the lock (one of [SQLITE_LOCK_NONE], [SQLITE_LOCK_SHARED], +** [SQLITE_LOCK_RESERVED], [SQLITE_LOCK_PENDING], or [SQLITE_LOCK_EXCLUSIVE]) +** into an integer that the pArg argument points to. This capability +** is used during testing and is only available when the SQLITE_TEST +** compile-time option is used. +** +** <li>[[SQLITE_FCNTL_SIZE_HINT]] +** The [SQLITE_FCNTL_SIZE_HINT] opcode is used by SQLite to give the VFS +** layer a hint of how large the database file will grow to be during the +** current transaction. This hint is not guaranteed to be accurate but it +** is often close. The underlying VFS might choose to preallocate database +** file space based on this hint in order to help writes to the database +** file run faster. +** +** <li>[[SQLITE_FCNTL_CHUNK_SIZE]] +** The [SQLITE_FCNTL_CHUNK_SIZE] opcode is used to request that the VFS +** extends and truncates the database file in chunks of a size specified +** by the user. The fourth argument to [sqlite3_file_control()] should +** point to an integer (type int) containing the new chunk-size to use +** for the nominated database. Allocating database file space in large +** chunks (say 1MB at a time), may reduce file-system fragmentation and +** improve performance on some systems. 
+** +** <li>[[SQLITE_FCNTL_FILE_POINTER]] +** The [SQLITE_FCNTL_FILE_POINTER] opcode is used to obtain a pointer +** to the [sqlite3_file] object associated with a particular database +** connection. See also [SQLITE_FCNTL_JOURNAL_POINTER]. +** +** <li>[[SQLITE_FCNTL_JOURNAL_POINTER]] +** The [SQLITE_FCNTL_JOURNAL_POINTER] opcode is used to obtain a pointer +** to the [sqlite3_file] object associated with the journal file (either +** the [rollback journal] or the [write-ahead log]) for a particular database +** connection. See also [SQLITE_FCNTL_FILE_POINTER]. +** +** <li>[[SQLITE_FCNTL_SYNC_OMITTED]] +** No longer in use. +** +** <li>[[SQLITE_FCNTL_SYNC]] +** The [SQLITE_FCNTL_SYNC] opcode is generated internally by SQLite and +** sent to the VFS immediately before the xSync method is invoked on a +** database file descriptor. Or, if the xSync method is not invoked +** because the user has configured SQLite with +** [PRAGMA synchronous | PRAGMA synchronous=OFF] it is invoked in place +** of the xSync method. In most cases, the pointer argument passed with +** this file-control is NULL. However, if the database file is being synced +** as part of a multi-database commit, the argument points to a nul-terminated +** string containing the transactions master-journal file name. VFSes that +** do not need this signal should silently ignore this opcode. Applications +** should not call [sqlite3_file_control()] with this opcode as doing so may +** disrupt the operation of the specialized VFSes that do require it. +** +** <li>[[SQLITE_FCNTL_COMMIT_PHASETWO]] +** The [SQLITE_FCNTL_COMMIT_PHASETWO] opcode is generated internally by SQLite +** and sent to the VFS after a transaction has been committed immediately +** but before the database is unlocked. VFSes that do not need this signal +** should silently ignore this opcode. Applications should not call +** [sqlite3_file_control()] with this opcode as doing so may disrupt the +** operation of the specialized VFSes that do require it. +** +** <li>[[SQLITE_FCNTL_WIN32_AV_RETRY]] +** ^The [SQLITE_FCNTL_WIN32_AV_RETRY] opcode is used to configure automatic +** retry counts and intervals for certain disk I/O operations for the +** windows [VFS] in order to provide robustness in the presence of +** anti-virus programs. By default, the windows VFS will retry file read, +** file write, and file delete operations up to 10 times, with a delay +** of 25 milliseconds before the first retry and with the delay increasing +** by an additional 25 milliseconds with each subsequent retry. This +** opcode allows these two values (10 retries and 25 milliseconds of delay) +** to be adjusted. The values are changed for all database connections +** within the same process. The argument is a pointer to an array of two +** integers where the first integer i the new retry count and the second +** integer is the delay. If either integer is negative, then the setting +** is not changed but instead the prior value of that setting is written +** into the array entry, allowing the current retry settings to be +** interrogated. The zDbName parameter is ignored. +** +** <li>[[SQLITE_FCNTL_PERSIST_WAL]] +** ^The [SQLITE_FCNTL_PERSIST_WAL] opcode is used to set or query the +** persistent [WAL | Write Ahead Log] setting. By default, the auxiliary +** write ahead log and shared memory files used for transaction control +** are automatically deleted when the latest connection to the database +** closes. Setting persistent WAL mode causes those files to persist after +** close. 
Persisting the files is useful when other processes that do not +** have write permission on the directory containing the database file want +** to read the database file, as the WAL and shared memory files must exist +** in order for the database to be readable. The fourth parameter to +** [sqlite3_file_control()] for this opcode should be a pointer to an integer. +** That integer is 0 to disable persistent WAL mode or 1 to enable persistent +** WAL mode. If the integer is -1, then it is overwritten with the current +** WAL persistence setting. +** +** <li>[[SQLITE_FCNTL_POWERSAFE_OVERWRITE]] +** ^The [SQLITE_FCNTL_POWERSAFE_OVERWRITE] opcode is used to set or query the +** persistent "powersafe-overwrite" or "PSOW" setting. The PSOW setting +** determines the [SQLITE_IOCAP_POWERSAFE_OVERWRITE] bit of the +** xDeviceCharacteristics methods. The fourth parameter to +** [sqlite3_file_control()] for this opcode should be a pointer to an integer. +** That integer is 0 to disable zero-damage mode or 1 to enable zero-damage +** mode. If the integer is -1, then it is overwritten with the current +** zero-damage mode setting. +** +** <li>[[SQLITE_FCNTL_OVERWRITE]] +** ^The [SQLITE_FCNTL_OVERWRITE] opcode is invoked by SQLite after opening +** a write transaction to indicate that, unless it is rolled back for some +** reason, the entire database file will be overwritten by the current +** transaction. This is used by VACUUM operations. +** +** <li>[[SQLITE_FCNTL_VFSNAME]] +** ^The [SQLITE_FCNTL_VFSNAME] opcode can be used to obtain the names of +** all [VFSes] in the VFS stack. The names are of all VFS shims and the +** final bottom-level VFS are written into memory obtained from +** [sqlite3_malloc()] and the result is stored in the char* variable +** that the fourth parameter of [sqlite3_file_control()] points to. +** The caller is responsible for freeing the memory when done. As with +** all file-control actions, there is no guarantee that this will actually +** do anything. Callers should initialize the char* variable to a NULL +** pointer in case this file-control is not implemented. This file-control +** is intended for diagnostic use only. +** +** <li>[[SQLITE_FCNTL_VFS_POINTER]] +** ^The [SQLITE_FCNTL_VFS_POINTER] opcode finds a pointer to the top-level +** [VFSes] currently in use. ^(The argument X in +** sqlite3_file_control(db,SQLITE_FCNTL_VFS_POINTER,X) must be +** of type "[sqlite3_vfs] **". This opcodes will set *X +** to a pointer to the top-level VFS.)^ +** ^When there are multiple VFS shims in the stack, this opcode finds the +** upper-most shim only. +** +** <li>[[SQLITE_FCNTL_PRAGMA]] +** ^Whenever a [PRAGMA] statement is parsed, an [SQLITE_FCNTL_PRAGMA] +** file control is sent to the open [sqlite3_file] object corresponding +** to the database file to which the pragma statement refers. ^The argument +** to the [SQLITE_FCNTL_PRAGMA] file control is an array of +** pointers to strings (char**) in which the second element of the array +** is the name of the pragma and the third element is the argument to the +** pragma or NULL if the pragma has no argument. ^The handler for an +** [SQLITE_FCNTL_PRAGMA] file control can optionally make the first element +** of the char** argument point to a string obtained from [sqlite3_mprintf()] +** or the equivalent and that string will become the result of the pragma or +** the error message if the pragma fails. ^If the +** [SQLITE_FCNTL_PRAGMA] file control returns [SQLITE_NOTFOUND], then normal +** [PRAGMA] processing continues. 
^If the [SQLITE_FCNTL_PRAGMA] +** file control returns [SQLITE_OK], then the parser assumes that the +** VFS has handled the PRAGMA itself and the parser generates a no-op +** prepared statement if result string is NULL, or that returns a copy +** of the result string if the string is non-NULL. +** ^If the [SQLITE_FCNTL_PRAGMA] file control returns +** any result code other than [SQLITE_OK] or [SQLITE_NOTFOUND], that means +** that the VFS encountered an error while handling the [PRAGMA] and the +** compilation of the PRAGMA fails with an error. ^The [SQLITE_FCNTL_PRAGMA] +** file control occurs at the beginning of pragma statement analysis and so +** it is able to override built-in [PRAGMA] statements. +** +** <li>[[SQLITE_FCNTL_BUSYHANDLER]] +** ^The [SQLITE_FCNTL_BUSYHANDLER] +** file-control may be invoked by SQLite on the database file handle +** shortly after it is opened in order to provide a custom VFS with access +** to the connections busy-handler callback. The argument is of type (void **) +** - an array of two (void *) values. The first (void *) actually points +** to a function of type (int (*)(void *)). In order to invoke the connections +** busy-handler, this function should be invoked with the second (void *) in +** the array as the only argument. If it returns non-zero, then the operation +** should be retried. If it returns zero, the custom VFS should abandon the +** current operation. +** +** <li>[[SQLITE_FCNTL_TEMPFILENAME]] +** ^Application can invoke the [SQLITE_FCNTL_TEMPFILENAME] file-control +** to have SQLite generate a +** temporary filename using the same algorithm that is followed to generate +** temporary filenames for TEMP tables and other internal uses. The +** argument should be a char** which will be filled with the filename +** written into memory obtained from [sqlite3_malloc()]. The caller should +** invoke [sqlite3_free()] on the result to avoid a memory leak. +** +** <li>[[SQLITE_FCNTL_MMAP_SIZE]] +** The [SQLITE_FCNTL_MMAP_SIZE] file control is used to query or set the +** maximum number of bytes that will be used for memory-mapped I/O. +** The argument is a pointer to a value of type sqlite3_int64 that +** is an advisory maximum number of bytes in the file to memory map. The +** pointer is overwritten with the old value. The limit is not changed if +** the value originally pointed to is negative, and so the current limit +** can be queried by passing in a pointer to a negative number. This +** file-control is used internally to implement [PRAGMA mmap_size]. +** +** <li>[[SQLITE_FCNTL_TRACE]] +** The [SQLITE_FCNTL_TRACE] file control provides advisory information +** to the VFS about what the higher layers of the SQLite stack are doing. +** This file control is used by some VFS activity tracing [shims]. +** The argument is a zero-terminated string. Higher layers in the +** SQLite stack may generate instances of this file control if +** the [SQLITE_USE_FCNTL_TRACE] compile-time option is enabled. +** +** <li>[[SQLITE_FCNTL_HAS_MOVED]] +** The [SQLITE_FCNTL_HAS_MOVED] file control interprets its argument as a +** pointer to an integer and it writes a boolean into that integer depending +** on whether or not the file has been renamed, moved, or deleted since it +** was first opened. +** +** <li>[[SQLITE_FCNTL_WIN32_SET_HANDLE]] +** The [SQLITE_FCNTL_WIN32_SET_HANDLE] opcode is used for debugging. This +** opcode causes the xFileControl method to swap the file handle with the one +** pointed to by the pArg argument. 
This capability is used during testing +** and only needs to be supported when SQLITE_TEST is defined. +** +** <li>[[SQLITE_FCNTL_WAL_BLOCK]] +** The [SQLITE_FCNTL_WAL_BLOCK] is a signal to the VFS layer that it might +** be advantageous to block on the next WAL lock if the lock is not immediately +** available. The WAL subsystem issues this signal during rare +** circumstances in order to fix a problem with priority inversion. +** Applications should <em>not</em> use this file-control. +** +** <li>[[SQLITE_FCNTL_ZIPVFS]] +** The [SQLITE_FCNTL_ZIPVFS] opcode is implemented by zipvfs only. All other +** VFS should return SQLITE_NOTFOUND for this opcode. +** +** <li>[[SQLITE_FCNTL_RBU]] +** The [SQLITE_FCNTL_RBU] opcode is implemented by the special VFS used by +** the RBU extension only. All other VFS should return SQLITE_NOTFOUND for +** this opcode. +** </ul> +*/ +#define SQLITE_FCNTL_LOCKSTATE 1 +#define SQLITE_FCNTL_GET_LOCKPROXYFILE 2 +#define SQLITE_FCNTL_SET_LOCKPROXYFILE 3 +#define SQLITE_FCNTL_LAST_ERRNO 4 +#define SQLITE_FCNTL_SIZE_HINT 5 +#define SQLITE_FCNTL_CHUNK_SIZE 6 +#define SQLITE_FCNTL_FILE_POINTER 7 +#define SQLITE_FCNTL_SYNC_OMITTED 8 +#define SQLITE_FCNTL_WIN32_AV_RETRY 9 +#define SQLITE_FCNTL_PERSIST_WAL 10 +#define SQLITE_FCNTL_OVERWRITE 11 +#define SQLITE_FCNTL_VFSNAME 12 +#define SQLITE_FCNTL_POWERSAFE_OVERWRITE 13 +#define SQLITE_FCNTL_PRAGMA 14 +#define SQLITE_FCNTL_BUSYHANDLER 15 +#define SQLITE_FCNTL_TEMPFILENAME 16 +#define SQLITE_FCNTL_MMAP_SIZE 18 +#define SQLITE_FCNTL_TRACE 19 +#define SQLITE_FCNTL_HAS_MOVED 20 +#define SQLITE_FCNTL_SYNC 21 +#define SQLITE_FCNTL_COMMIT_PHASETWO 22 +#define SQLITE_FCNTL_WIN32_SET_HANDLE 23 +#define SQLITE_FCNTL_WAL_BLOCK 24 +#define SQLITE_FCNTL_ZIPVFS 25 +#define SQLITE_FCNTL_RBU 26 +#define SQLITE_FCNTL_VFS_POINTER 27 +#define SQLITE_FCNTL_JOURNAL_POINTER 28 + +/* deprecated names */ +#define SQLITE_GET_LOCKPROXYFILE SQLITE_FCNTL_GET_LOCKPROXYFILE +#define SQLITE_SET_LOCKPROXYFILE SQLITE_FCNTL_SET_LOCKPROXYFILE +#define SQLITE_LAST_ERRNO SQLITE_FCNTL_LAST_ERRNO + + +/* +** CAPI3REF: Mutex Handle +** +** The mutex module within SQLite defines [sqlite3_mutex] to be an +** abstract type for a mutex object. The SQLite core never looks +** at the internal representation of an [sqlite3_mutex]. It only +** deals with pointers to the [sqlite3_mutex] object. +** +** Mutexes are created using [sqlite3_mutex_alloc()]. +*/ +typedef struct sqlite3_mutex sqlite3_mutex; + +/* +** CAPI3REF: OS Interface Object +** +** An instance of the sqlite3_vfs object defines the interface between +** the SQLite core and the underlying operating system. The "vfs" +** in the name of the object stands for "virtual file system". See +** the [VFS | VFS documentation] for further information. +** +** The value of the iVersion field is initially 1 but may be larger in +** future versions of SQLite. Additional fields may be appended to this +** object when the iVersion value is increased. Note that the structure +** of the sqlite3_vfs object changes in the transaction between +** SQLite version 3.5.9 and 3.6.0 and yet the iVersion field was not +** modified. +** +** The szOsFile field is the size of the subclassed [sqlite3_file] +** structure used by this VFS. mxPathname is the maximum length of +** a pathname in this VFS. +** +** Registered sqlite3_vfs objects are kept on a linked list formed by +** the pNext pointer. The [sqlite3_vfs_register()] +** and [sqlite3_vfs_unregister()] interfaces manage this list +** in a thread-safe way. 
The [sqlite3_vfs_find()] interface +** searches the list. Neither the application code nor the VFS +** implementation should use the pNext pointer. +** +** The pNext field is the only field in the sqlite3_vfs +** structure that SQLite will ever modify. SQLite will only access +** or modify this field while holding a particular static mutex. +** The application should never modify anything within the sqlite3_vfs +** object once the object has been registered. +** +** The zName field holds the name of the VFS module. The name must +** be unique across all VFS modules. +** +** [[sqlite3_vfs.xOpen]] +** ^SQLite guarantees that the zFilename parameter to xOpen +** is either a NULL pointer or string obtained +** from xFullPathname() with an optional suffix added. +** ^If a suffix is added to the zFilename parameter, it will +** consist of a single "-" character followed by no more than +** 11 alphanumeric and/or "-" characters. +** ^SQLite further guarantees that +** the string will be valid and unchanged until xClose() is +** called. Because of the previous sentence, +** the [sqlite3_file] can safely store a pointer to the +** filename if it needs to remember the filename for some reason. +** If the zFilename parameter to xOpen is a NULL pointer then xOpen +** must invent its own temporary name for the file. ^Whenever the +** xFilename parameter is NULL it will also be the case that the +** flags parameter will include [SQLITE_OPEN_DELETEONCLOSE]. +** +** The flags argument to xOpen() includes all bits set in +** the flags argument to [sqlite3_open_v2()]. Or if [sqlite3_open()] +** or [sqlite3_open16()] is used, then flags includes at least +** [SQLITE_OPEN_READWRITE] | [SQLITE_OPEN_CREATE]. +** If xOpen() opens a file read-only then it sets *pOutFlags to +** include [SQLITE_OPEN_READONLY]. Other bits in *pOutFlags may be set. +** +** ^(SQLite will also add one of the following flags to the xOpen() +** call, depending on the object being opened: +** +** <ul> +** <li> [SQLITE_OPEN_MAIN_DB] +** <li> [SQLITE_OPEN_MAIN_JOURNAL] +** <li> [SQLITE_OPEN_TEMP_DB] +** <li> [SQLITE_OPEN_TEMP_JOURNAL] +** <li> [SQLITE_OPEN_TRANSIENT_DB] +** <li> [SQLITE_OPEN_SUBJOURNAL] +** <li> [SQLITE_OPEN_MASTER_JOURNAL] +** <li> [SQLITE_OPEN_WAL] +** </ul>)^ +** +** The file I/O implementation can use the object type flags to +** change the way it deals with files. For example, an application +** that does not care about crash recovery or rollback might make +** the open of a journal file a no-op. Writes to this journal would +** also be no-ops, and any attempt to read the journal would return +** SQLITE_IOERR. Or the implementation might recognize that a database +** file will be doing page-aligned sector reads and writes in a random +** order and set up its I/O subsystem accordingly. +** +** SQLite might also add one of the following flags to the xOpen method: +** +** <ul> +** <li> [SQLITE_OPEN_DELETEONCLOSE] +** <li> [SQLITE_OPEN_EXCLUSIVE] +** </ul> +** +** The [SQLITE_OPEN_DELETEONCLOSE] flag means the file should be +** deleted when it is closed. ^The [SQLITE_OPEN_DELETEONCLOSE] +** will be set for TEMP databases and their journals, transient +** databases, and subjournals. +** +** ^The [SQLITE_OPEN_EXCLUSIVE] flag is always used in conjunction +** with the [SQLITE_OPEN_CREATE] flag, which are both directly +** analogous to the O_EXCL and O_CREAT flags of the POSIX open() +** API. 
The SQLITE_OPEN_EXCLUSIVE flag, when paired with the +** SQLITE_OPEN_CREATE, is used to indicate that file should always +** be created, and that it is an error if it already exists. +** It is <i>not</i> used to indicate the file should be opened +** for exclusive access. +** +** ^At least szOsFile bytes of memory are allocated by SQLite +** to hold the [sqlite3_file] structure passed as the third +** argument to xOpen. The xOpen method does not have to +** allocate the structure; it should just fill it in. Note that +** the xOpen method must set the sqlite3_file.pMethods to either +** a valid [sqlite3_io_methods] object or to NULL. xOpen must do +** this even if the open fails. SQLite expects that the sqlite3_file.pMethods +** element will be valid after xOpen returns regardless of the success +** or failure of the xOpen call. +** +** [[sqlite3_vfs.xAccess]] +** ^The flags argument to xAccess() may be [SQLITE_ACCESS_EXISTS] +** to test for the existence of a file, or [SQLITE_ACCESS_READWRITE] to +** test whether a file is readable and writable, or [SQLITE_ACCESS_READ] +** to test whether a file is at least readable. The file can be a +** directory. +** +** ^SQLite will always allocate at least mxPathname+1 bytes for the +** output buffer xFullPathname. The exact size of the output buffer +** is also passed as a parameter to both methods. If the output buffer +** is not large enough, [SQLITE_CANTOPEN] should be returned. Since this is +** handled as a fatal error by SQLite, vfs implementations should endeavor +** to prevent this by setting mxPathname to a sufficiently large value. +** +** The xRandomness(), xSleep(), xCurrentTime(), and xCurrentTimeInt64() +** interfaces are not strictly a part of the filesystem, but they are +** included in the VFS structure for completeness. +** The xRandomness() function attempts to return nBytes bytes +** of good-quality randomness into zOut. The return value is +** the actual number of bytes of randomness obtained. +** The xSleep() method causes the calling thread to sleep for at +** least the number of microseconds given. ^The xCurrentTime() +** method returns a Julian Day Number for the current date and time as +** a floating point value. +** ^The xCurrentTimeInt64() method returns, as an integer, the Julian +** Day Number multiplied by 86400000 (the number of milliseconds in +** a 24-hour day). +** ^SQLite will use the xCurrentTimeInt64() method to get the current +** date and time if that method is available (if iVersion is 2 or +** greater and the function pointer is not NULL) and will fall back +** to xCurrentTime() if xCurrentTimeInt64() is unavailable. +** +** ^The xSetSystemCall(), xGetSystemCall(), and xNestSystemCall() interfaces +** are not used by the SQLite core. These optional interfaces are provided +** by some VFSes to facilitate testing of the VFS code. By overriding +** system calls with functions under its control, a test program can +** simulate faults and error conditions that would otherwise be difficult +** or impossible to induce. The set of system calls that can be overridden +** varies from one VFS to another, and from one version of the same VFS to the +** next. Applications that use these interfaces must be prepared for any +** or all of these interfaces to be NULL or for their behavior to change +** from one release to the next. Applications must not attempt to access +** any of these methods if the iVersion of the VFS is less than 3. 
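+**
+** As an informal sketch of the version check described above, using
+** sqlite3_vfs_find() (declared later in this header) to obtain the
+** default VFS:
+**
+** <blockquote><pre>
+** sqlite3_vfs *pVfs = sqlite3_vfs_find(0);   /* the default VFS */
+** sqlite3_int64 now = 0;
+** if( pVfs->iVersion>=2 && pVfs->xCurrentTimeInt64 ){
+**   pVfs->xCurrentTimeInt64(pVfs, &now);
+** }else{
+**   double r = 0.0;
+**   pVfs->xCurrentTime(pVfs, &r);
+**   now = (sqlite3_int64)(r*86400000.0);
+** }
+** </pre></blockquote>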
+*/
+typedef struct sqlite3_vfs sqlite3_vfs;
+typedef void (*sqlite3_syscall_ptr)(void);
+struct sqlite3_vfs {
+  int iVersion;            /* Structure version number (currently 3) */
+  int szOsFile;            /* Size of subclassed sqlite3_file */
+  int mxPathname;          /* Maximum file pathname length */
+  sqlite3_vfs *pNext;      /* Next registered VFS */
+  const char *zName;       /* Name of this virtual file system */
+  void *pAppData;          /* Pointer to application-specific data */
+  int (*xOpen)(sqlite3_vfs*, const char *zName, sqlite3_file*,
+               int flags, int *pOutFlags);
+  int (*xDelete)(sqlite3_vfs*, const char *zName, int syncDir);
+  int (*xAccess)(sqlite3_vfs*, const char *zName, int flags, int *pResOut);
+  int (*xFullPathname)(sqlite3_vfs*, const char *zName, int nOut, char *zOut);
+  void *(*xDlOpen)(sqlite3_vfs*, const char *zFilename);
+  void (*xDlError)(sqlite3_vfs*, int nByte, char *zErrMsg);
+  void (*(*xDlSym)(sqlite3_vfs*,void*, const char *zSymbol))(void);
+  void (*xDlClose)(sqlite3_vfs*, void*);
+  int (*xRandomness)(sqlite3_vfs*, int nByte, char *zOut);
+  int (*xSleep)(sqlite3_vfs*, int microseconds);
+  int (*xCurrentTime)(sqlite3_vfs*, double*);
+  int (*xGetLastError)(sqlite3_vfs*, int, char *);
+  /*
+  ** The methods above are in version 1 of the sqlite_vfs object
+  ** definition.  Those that follow are added in version 2 or later
+  */
+  int (*xCurrentTimeInt64)(sqlite3_vfs*, sqlite3_int64*);
+  /*
+  ** The methods above are in versions 1 and 2 of the sqlite_vfs object.
+  ** Those below are for version 3 and greater.
+  */
+  int (*xSetSystemCall)(sqlite3_vfs*, const char *zName, sqlite3_syscall_ptr);
+  sqlite3_syscall_ptr (*xGetSystemCall)(sqlite3_vfs*, const char *zName);
+  const char *(*xNextSystemCall)(sqlite3_vfs*, const char *zName);
+  /*
+  ** The methods above are in versions 1 through 3 of the sqlite_vfs object.
+  ** New fields may be appended in future versions.  The iVersion
+  ** value will increment whenever this happens.
+  */
+};
+
+/*
+** CAPI3REF: Flags for the xAccess VFS method
+**
+** These integer constants can be used as the third parameter to
+** the xAccess method of an [sqlite3_vfs] object.  They determine
+** what kind of permissions the xAccess method is looking for.
+** With SQLITE_ACCESS_EXISTS, the xAccess method
+** simply checks whether the file exists.
+** With SQLITE_ACCESS_READWRITE, the xAccess method
+** checks whether the named directory is both readable and writable
+** (in other words, if files can be added, removed, and renamed within
+** the directory).
+** The SQLITE_ACCESS_READWRITE constant is currently used only by the
+** [temp_store_directory pragma], though this could change in a future
+** release of SQLite.
+** With SQLITE_ACCESS_READ, the xAccess method
+** checks whether the file is readable.  The SQLITE_ACCESS_READ constant is
+** currently unused, though it might be used in a future release of
+** SQLite.
+*/
+#define SQLITE_ACCESS_EXISTS    0
+#define SQLITE_ACCESS_READWRITE 1   /* Used by PRAGMA temp_store_directory */
+#define SQLITE_ACCESS_READ      2   /* Unused */
+
+/*
+** CAPI3REF: Flags for the xShmLock VFS method
+**
+** These integer constants define the various locking operations
+** allowed by the xShmLock method of [sqlite3_io_methods].
The +** following are the only legal combinations of flags to the +** xShmLock method: +** +** <ul> +** <li> SQLITE_SHM_LOCK | SQLITE_SHM_SHARED +** <li> SQLITE_SHM_LOCK | SQLITE_SHM_EXCLUSIVE +** <li> SQLITE_SHM_UNLOCK | SQLITE_SHM_SHARED +** <li> SQLITE_SHM_UNLOCK | SQLITE_SHM_EXCLUSIVE +** </ul> +** +** When unlocking, the same SHARED or EXCLUSIVE flag must be supplied as +** was given on the corresponding lock. +** +** The xShmLock method can transition between unlocked and SHARED or +** between unlocked and EXCLUSIVE. It cannot transition between SHARED +** and EXCLUSIVE. +*/ +#define SQLITE_SHM_UNLOCK 1 +#define SQLITE_SHM_LOCK 2 +#define SQLITE_SHM_SHARED 4 +#define SQLITE_SHM_EXCLUSIVE 8 + +/* +** CAPI3REF: Maximum xShmLock index +** +** The xShmLock method on [sqlite3_io_methods] may use values +** between 0 and this upper bound as its "offset" argument. +** The SQLite core will never attempt to acquire or release a +** lock outside of this range +*/ +#define SQLITE_SHM_NLOCK 8 + + +/* +** CAPI3REF: Initialize The SQLite Library +** +** ^The sqlite3_initialize() routine initializes the +** SQLite library. ^The sqlite3_shutdown() routine +** deallocates any resources that were allocated by sqlite3_initialize(). +** These routines are designed to aid in process initialization and +** shutdown on embedded systems. Workstation applications using +** SQLite normally do not need to invoke either of these routines. +** +** A call to sqlite3_initialize() is an "effective" call if it is +** the first time sqlite3_initialize() is invoked during the lifetime of +** the process, or if it is the first time sqlite3_initialize() is invoked +** following a call to sqlite3_shutdown(). ^(Only an effective call +** of sqlite3_initialize() does any initialization. All other calls +** are harmless no-ops.)^ +** +** A call to sqlite3_shutdown() is an "effective" call if it is the first +** call to sqlite3_shutdown() since the last sqlite3_initialize(). ^(Only +** an effective call to sqlite3_shutdown() does any deinitialization. +** All other valid calls to sqlite3_shutdown() are harmless no-ops.)^ +** +** The sqlite3_initialize() interface is threadsafe, but sqlite3_shutdown() +** is not. The sqlite3_shutdown() interface must only be called from a +** single thread. All open [database connections] must be closed and all +** other SQLite resources must be deallocated prior to invoking +** sqlite3_shutdown(). +** +** Among other things, ^sqlite3_initialize() will invoke +** sqlite3_os_init(). Similarly, ^sqlite3_shutdown() +** will invoke sqlite3_os_end(). +** +** ^The sqlite3_initialize() routine returns [SQLITE_OK] on success. +** ^If for some reason, sqlite3_initialize() is unable to initialize +** the library (perhaps it is unable to allocate a needed resource such +** as a mutex) it returns an [error code] other than [SQLITE_OK]. +** +** ^The sqlite3_initialize() routine is called internally by many other +** SQLite interfaces so that an application usually does not need to +** invoke sqlite3_initialize() directly. For example, [sqlite3_open()] +** calls sqlite3_initialize() so the SQLite library will be automatically +** initialized when [sqlite3_open()] is called if it has not be initialized +** already. ^However, if SQLite is compiled with the [SQLITE_OMIT_AUTOINIT] +** compile-time option, then the automatic calls to sqlite3_initialize() +** are omitted and the application must call sqlite3_initialize() directly +** prior to using any other SQLite interface. 
For maximum portability, +** it is recommended that applications always invoke sqlite3_initialize() +** directly prior to using any other SQLite interface. Future releases +** of SQLite may require this. In other words, the behavior exhibited +** when SQLite is compiled with [SQLITE_OMIT_AUTOINIT] might become the +** default behavior in some future release of SQLite. +** +** The sqlite3_os_init() routine does operating-system specific +** initialization of the SQLite library. The sqlite3_os_end() +** routine undoes the effect of sqlite3_os_init(). Typical tasks +** performed by these routines include allocation or deallocation +** of static resources, initialization of global variables, +** setting up a default [sqlite3_vfs] module, or setting up +** a default configuration using [sqlite3_config()]. +** +** The application should never invoke either sqlite3_os_init() +** or sqlite3_os_end() directly. The application should only invoke +** sqlite3_initialize() and sqlite3_shutdown(). The sqlite3_os_init() +** interface is called automatically by sqlite3_initialize() and +** sqlite3_os_end() is called by sqlite3_shutdown(). Appropriate +** implementations for sqlite3_os_init() and sqlite3_os_end() +** are built into SQLite when it is compiled for Unix, Windows, or OS/2. +** When [custom builds | built for other platforms] +** (using the [SQLITE_OS_OTHER=1] compile-time +** option) the application must supply a suitable implementation for +** sqlite3_os_init() and sqlite3_os_end(). An application-supplied +** implementation of sqlite3_os_init() or sqlite3_os_end() +** must return [SQLITE_OK] on success and some other [error code] upon +** failure. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_initialize(void); +SQLITE_API int SQLITE_STDCALL sqlite3_shutdown(void); +SQLITE_API int SQLITE_STDCALL sqlite3_os_init(void); +SQLITE_API int SQLITE_STDCALL sqlite3_os_end(void); + +/* +** CAPI3REF: Configuring The SQLite Library +** +** The sqlite3_config() interface is used to make global configuration +** changes to SQLite in order to tune SQLite to the specific needs of +** the application. The default configuration is recommended for most +** applications and so this routine is usually not necessary. It is +** provided to support rare applications with unusual needs. +** +** <b>The sqlite3_config() interface is not threadsafe. The application +** must ensure that no other SQLite interfaces are invoked by other +** threads while sqlite3_config() is running.</b> +** +** The sqlite3_config() interface +** may only be invoked prior to library initialization using +** [sqlite3_initialize()] or after shutdown by [sqlite3_shutdown()]. +** ^If sqlite3_config() is called after [sqlite3_initialize()] and before +** [sqlite3_shutdown()] then it will return SQLITE_MISUSE. +** Note, however, that ^sqlite3_config() can be called as part of the +** implementation of an application-defined [sqlite3_os_init()]. +** +** The first argument to sqlite3_config() is an integer +** [configuration option] that determines +** what property of SQLite is to be configured. Subsequent arguments +** vary depending on the [configuration option] +** in the first argument. +** +** ^When a configuration option is set, sqlite3_config() returns [SQLITE_OK]. +** ^If the option is unknown or SQLite is unable to set the option +** then this routine returns a non-zero [error code]. 
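+**
+** A minimal sketch of configuring the library before initialization
+** (SQLITE_CONFIG_SERIALIZED is one of the configuration options defined
+** below; the error handling shown is only illustrative):
+**
+** <blockquote><pre>
+** if( sqlite3_config(SQLITE_CONFIG_SERIALIZED)!=SQLITE_OK ){
+**   /* option rejected, e.g. library built with SQLITE_THREADSAFE=0 */
+** }
+** sqlite3_initialize();
+** </pre></blockquote>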
+*/ +SQLITE_API int SQLITE_CDECL sqlite3_config(int, ...); + +/* +** CAPI3REF: Configure database connections +** METHOD: sqlite3 +** +** The sqlite3_db_config() interface is used to make configuration +** changes to a [database connection]. The interface is similar to +** [sqlite3_config()] except that the changes apply to a single +** [database connection] (specified in the first argument). +** +** The second argument to sqlite3_db_config(D,V,...) is the +** [SQLITE_DBCONFIG_LOOKASIDE | configuration verb] - an integer code +** that indicates what aspect of the [database connection] is being configured. +** Subsequent arguments vary depending on the configuration verb. +** +** ^Calls to sqlite3_db_config() return SQLITE_OK if and only if +** the call is considered successful. +*/ +SQLITE_API int SQLITE_CDECL sqlite3_db_config(sqlite3*, int op, ...); + +/* +** CAPI3REF: Memory Allocation Routines +** +** An instance of this object defines the interface between SQLite +** and low-level memory allocation routines. +** +** This object is used in only one place in the SQLite interface. +** A pointer to an instance of this object is the argument to +** [sqlite3_config()] when the configuration option is +** [SQLITE_CONFIG_MALLOC] or [SQLITE_CONFIG_GETMALLOC]. +** By creating an instance of this object +** and passing it to [sqlite3_config]([SQLITE_CONFIG_MALLOC]) +** during configuration, an application can specify an alternative +** memory allocation subsystem for SQLite to use for all of its +** dynamic memory needs. +** +** Note that SQLite comes with several [built-in memory allocators] +** that are perfectly adequate for the overwhelming majority of applications +** and that this object is only useful to a tiny minority of applications +** with specialized memory allocation requirements. This object is +** also used during testing of SQLite in order to specify an alternative +** memory allocator that simulates memory out-of-memory conditions in +** order to verify that SQLite recovers gracefully from such +** conditions. +** +** The xMalloc, xRealloc, and xFree methods must work like the +** malloc(), realloc() and free() functions from the standard C library. +** ^SQLite guarantees that the second argument to +** xRealloc is always a value returned by a prior call to xRoundup. +** +** xSize should return the allocated size of a memory allocation +** previously obtained from xMalloc or xRealloc. The allocated size +** is always at least as big as the requested size but may be larger. +** +** The xRoundup method returns what would be the allocated size of +** a memory allocation given a particular requested size. Most memory +** allocators round up memory allocations at least to the next multiple +** of 8. Some allocators round up to a larger multiple or to a power of 2. +** Every memory allocation request coming in through [sqlite3_malloc()] +** or [sqlite3_realloc()] first calls xRoundup. If xRoundup returns 0, +** that causes the corresponding memory allocation to fail. +** +** The xInit method initializes the memory allocator. For example, +** it might allocate any require mutexes or initialize internal data +** structures. The xShutdown method is invoked (indirectly) by +** [sqlite3_shutdown()] and should deallocate any resources acquired +** by xInit. The pAppData pointer is used as the only parameter to +** xInit and xShutdown. +** +** SQLite holds the [SQLITE_MUTEX_STATIC_MASTER] mutex when it invokes +** the xInit method, so the xInit method need not be threadsafe. 
The +** xShutdown method is only called from [sqlite3_shutdown()] so it does +** not need to be threadsafe either. For all other methods, SQLite +** holds the [SQLITE_MUTEX_STATIC_MEM] mutex as long as the +** [SQLITE_CONFIG_MEMSTATUS] configuration option is turned on (which +** it is by default) and so the methods are automatically serialized. +** However, if [SQLITE_CONFIG_MEMSTATUS] is disabled, then the other +** methods must be threadsafe or else make their own arrangements for +** serialization. +** +** SQLite will never invoke xInit() more than once without an intervening +** call to xShutdown(). +*/ +typedef struct sqlite3_mem_methods sqlite3_mem_methods; +struct sqlite3_mem_methods { + void *(*xMalloc)(int); /* Memory allocation function */ + void (*xFree)(void*); /* Free a prior allocation */ + void *(*xRealloc)(void*,int); /* Resize an allocation */ + int (*xSize)(void*); /* Return the size of an allocation */ + int (*xRoundup)(int); /* Round up request size to allocation size */ + int (*xInit)(void*); /* Initialize the memory allocator */ + void (*xShutdown)(void*); /* Deinitialize the memory allocator */ + void *pAppData; /* Argument to xInit() and xShutdown() */ +}; + +/* +** CAPI3REF: Configuration Options +** KEYWORDS: {configuration option} +** +** These constants are the available integer configuration options that +** can be passed as the first argument to the [sqlite3_config()] interface. +** +** New configuration options may be added in future releases of SQLite. +** Existing configuration options might be discontinued. Applications +** should check the return code from [sqlite3_config()] to make sure that +** the call worked. The [sqlite3_config()] interface will return a +** non-zero [error code] if a discontinued or unsupported configuration option +** is invoked. +** +** <dl> +** [[SQLITE_CONFIG_SINGLETHREAD]] <dt>SQLITE_CONFIG_SINGLETHREAD</dt> +** <dd>There are no arguments to this option. ^This option sets the +** [threading mode] to Single-thread. In other words, it disables +** all mutexing and puts SQLite into a mode where it can only be used +** by a single thread. ^If SQLite is compiled with +** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then +** it is not possible to change the [threading mode] from its default +** value of Single-thread and so [sqlite3_config()] will return +** [SQLITE_ERROR] if called with the SQLITE_CONFIG_SINGLETHREAD +** configuration option.</dd> +** +** [[SQLITE_CONFIG_MULTITHREAD]] <dt>SQLITE_CONFIG_MULTITHREAD</dt> +** <dd>There are no arguments to this option. ^This option sets the +** [threading mode] to Multi-thread. In other words, it disables +** mutexing on [database connection] and [prepared statement] objects. +** The application is responsible for serializing access to +** [database connections] and [prepared statements]. But other mutexes +** are enabled so that SQLite will be safe to use in a multi-threaded +** environment as long as no two threads attempt to use the same +** [database connection] at the same time. ^If SQLite is compiled with +** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then +** it is not possible to set the Multi-thread [threading mode] and +** [sqlite3_config()] will return [SQLITE_ERROR] if called with the +** SQLITE_CONFIG_MULTITHREAD configuration option.</dd> +** +** [[SQLITE_CONFIG_SERIALIZED]] <dt>SQLITE_CONFIG_SERIALIZED</dt> +** <dd>There are no arguments to this option. ^This option sets the +** [threading mode] to Serialized. 
In other words, this option enables
+** all mutexes including the recursive
+** mutexes on [database connection] and [prepared statement] objects.
+** In this mode (which is the default when SQLite is compiled with
+** [SQLITE_THREADSAFE=1]) the SQLite library will itself serialize access
+** to [database connections] and [prepared statements] so that the
+** application is free to use the same [database connection] or the
+** same [prepared statement] in different threads at the same time.
+** ^If SQLite is compiled with
+** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then
+** it is not possible to set the Serialized [threading mode] and
+** [sqlite3_config()] will return [SQLITE_ERROR] if called with the
+** SQLITE_CONFIG_SERIALIZED configuration option.</dd>
+**
+** [[SQLITE_CONFIG_MALLOC]] <dt>SQLITE_CONFIG_MALLOC</dt>
+** <dd> ^(The SQLITE_CONFIG_MALLOC option takes a single argument which is
+** a pointer to an instance of the [sqlite3_mem_methods] structure.
+** The argument specifies
+** alternative low-level memory allocation routines to be used in place of
+** the memory allocation routines built into SQLite.)^ ^SQLite makes
+** its own private copy of the content of the [sqlite3_mem_methods] structure
+** before the [sqlite3_config()] call returns.</dd>
+**
+** [[SQLITE_CONFIG_GETMALLOC]] <dt>SQLITE_CONFIG_GETMALLOC</dt>
+** <dd> ^(The SQLITE_CONFIG_GETMALLOC option takes a single argument which
+** is a pointer to an instance of the [sqlite3_mem_methods] structure.
+** The [sqlite3_mem_methods]
+** structure is filled with the currently defined memory allocation routines.)^
+** This option can be used to overload the default memory allocation
+** routines with a wrapper that simulates memory allocation failure or
+** tracks memory usage, for example. </dd>
+**
+** [[SQLITE_CONFIG_MEMSTATUS]] <dt>SQLITE_CONFIG_MEMSTATUS</dt>
+** <dd> ^The SQLITE_CONFIG_MEMSTATUS option takes a single argument of type int,
+** interpreted as a boolean, which enables or disables the collection of
+** memory allocation statistics. ^(When memory allocation statistics are
+** disabled, the following SQLite interfaces become non-operational:
+** <ul>
+** <li> [sqlite3_memory_used()]
+** <li> [sqlite3_memory_highwater()]
+** <li> [sqlite3_soft_heap_limit64()]
+** <li> [sqlite3_status64()]
+** </ul>)^
+** ^Memory allocation statistics are enabled by default unless SQLite is
+** compiled with [SQLITE_DEFAULT_MEMSTATUS]=0 in which case memory
+** allocation statistics are disabled by default.
+** </dd>
+**
+** [[SQLITE_CONFIG_SCRATCH]] <dt>SQLITE_CONFIG_SCRATCH</dt>
+** <dd> ^The SQLITE_CONFIG_SCRATCH option specifies a static memory buffer
+** that SQLite can use for scratch memory. ^(There are three arguments
+** to SQLITE_CONFIG_SCRATCH: A pointer to an 8-byte
+** aligned memory buffer from which the scratch allocations will be
+** drawn, the size of each scratch allocation (sz),
+** and the maximum number of scratch allocations (N).)^
+** The first argument must be a pointer to an 8-byte aligned buffer
+** of at least sz*N bytes of memory.
+** ^SQLite will not use more than one scratch buffer per thread.
+** ^SQLite will never request a scratch buffer that is more than 6
+** times the database page size.
+** ^If SQLite needs additional
+** scratch memory beyond what is provided by this configuration option, then
+** [sqlite3_malloc()] will be used to obtain the memory needed.<p>
+** ^When the application provides any amount of scratch memory using
+** SQLITE_CONFIG_SCRATCH, SQLite avoids unnecessary large
+** [sqlite3_malloc|heap allocations].
+** This can help [Robson proof|prevent memory allocation failures] due to heap
+** fragmentation in low-memory embedded systems.
+** </dd>
+**
+** [[SQLITE_CONFIG_PAGECACHE]] <dt>SQLITE_CONFIG_PAGECACHE</dt>
+** <dd> ^The SQLITE_CONFIG_PAGECACHE option specifies a memory pool
+** that SQLite can use for the database page cache with the default page
+** cache implementation.
+** This configuration option is a no-op if an application-defined page
+** cache implementation is loaded using the [SQLITE_CONFIG_PCACHE2].
+** ^There are three arguments to SQLITE_CONFIG_PAGECACHE: A pointer to
+** 8-byte aligned memory (pMem), the size of each page cache line (sz),
+** and the number of cache lines (N).
+** The sz argument should be the size of the largest database page
+** (a power of two between 512 and 65536) plus some extra bytes for each
+** page header. ^The number of extra bytes needed by the page header
+** can be determined using [SQLITE_CONFIG_PCACHE_HDRSZ].
+** ^It is harmless, apart from the wasted memory,
+** for the sz parameter to be larger than necessary. The pMem
+** argument must be either a NULL pointer or a pointer to an 8-byte
+** aligned block of memory of at least sz*N bytes, otherwise
+** subsequent behavior is undefined.
+** ^When pMem is not NULL, SQLite will strive to use the memory provided
+** to satisfy page cache needs, falling back to [sqlite3_malloc()] if
+** a page cache line is larger than sz bytes or if all of the pMem buffer
+** is exhausted.
+** ^If pMem is NULL and N is non-zero, then each database connection
+** does an initial bulk allocation for page cache memory
+** from [sqlite3_malloc()] sufficient for N cache lines if N is positive or
+** of -1024*N bytes if N is negative. ^If additional
+** page cache memory is needed beyond what is provided by the initial
+** allocation, then SQLite goes to [sqlite3_malloc()] separately for each
+** additional cache line. </dd>
+**
+** [[SQLITE_CONFIG_HEAP]] <dt>SQLITE_CONFIG_HEAP</dt>
+** <dd> ^The SQLITE_CONFIG_HEAP option specifies a static memory buffer
+** that SQLite will use for all of its dynamic memory allocation needs
+** beyond those provided for by [SQLITE_CONFIG_SCRATCH] and
+** [SQLITE_CONFIG_PAGECACHE].
+** ^The SQLITE_CONFIG_HEAP option is only available if SQLite is compiled
+** with either [SQLITE_ENABLE_MEMSYS3] or [SQLITE_ENABLE_MEMSYS5] and returns
+** [SQLITE_ERROR] if invoked otherwise.
+** ^There are three arguments to SQLITE_CONFIG_HEAP:
+** An 8-byte aligned pointer to the memory,
+** the number of bytes in the memory buffer, and the minimum allocation size.
+** ^If the first pointer (the memory pointer) is NULL, then SQLite reverts
+** to using its default memory allocator (the system malloc() implementation),
+** undoing any prior invocation of [SQLITE_CONFIG_MALLOC]. ^If the
+** memory pointer is not NULL then the alternative memory
+** allocator is engaged to handle all of SQLite's memory allocation needs.
+** The first pointer (the memory pointer) must be aligned to an 8-byte
+** boundary or subsequent behavior of SQLite will be undefined.
+** The minimum allocation size is capped at 2**12.
Reasonable values +** for the minimum allocation size are 2**5 through 2**8.</dd> +** +** [[SQLITE_CONFIG_MUTEX]] <dt>SQLITE_CONFIG_MUTEX</dt> +** <dd> ^(The SQLITE_CONFIG_MUTEX option takes a single argument which is a +** pointer to an instance of the [sqlite3_mutex_methods] structure. +** The argument specifies alternative low-level mutex routines to be used +** in place the mutex routines built into SQLite.)^ ^SQLite makes a copy of +** the content of the [sqlite3_mutex_methods] structure before the call to +** [sqlite3_config()] returns. ^If SQLite is compiled with +** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then +** the entire mutexing subsystem is omitted from the build and hence calls to +** [sqlite3_config()] with the SQLITE_CONFIG_MUTEX configuration option will +** return [SQLITE_ERROR].</dd> +** +** [[SQLITE_CONFIG_GETMUTEX]] <dt>SQLITE_CONFIG_GETMUTEX</dt> +** <dd> ^(The SQLITE_CONFIG_GETMUTEX option takes a single argument which +** is a pointer to an instance of the [sqlite3_mutex_methods] structure. The +** [sqlite3_mutex_methods] +** structure is filled with the currently defined mutex routines.)^ +** This option can be used to overload the default mutex allocation +** routines with a wrapper used to track mutex usage for performance +** profiling or testing, for example. ^If SQLite is compiled with +** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then +** the entire mutexing subsystem is omitted from the build and hence calls to +** [sqlite3_config()] with the SQLITE_CONFIG_GETMUTEX configuration option will +** return [SQLITE_ERROR].</dd> +** +** [[SQLITE_CONFIG_LOOKASIDE]] <dt>SQLITE_CONFIG_LOOKASIDE</dt> +** <dd> ^(The SQLITE_CONFIG_LOOKASIDE option takes two arguments that determine +** the default size of lookaside memory on each [database connection]. +** The first argument is the +** size of each lookaside buffer slot and the second is the number of +** slots allocated to each database connection.)^ ^(SQLITE_CONFIG_LOOKASIDE +** sets the <i>default</i> lookaside size. The [SQLITE_DBCONFIG_LOOKASIDE] +** option to [sqlite3_db_config()] can be used to change the lookaside +** configuration on individual connections.)^ </dd> +** +** [[SQLITE_CONFIG_PCACHE2]] <dt>SQLITE_CONFIG_PCACHE2</dt> +** <dd> ^(The SQLITE_CONFIG_PCACHE2 option takes a single argument which is +** a pointer to an [sqlite3_pcache_methods2] object. This object specifies +** the interface to a custom page cache implementation.)^ +** ^SQLite makes a copy of the [sqlite3_pcache_methods2] object.</dd> +** +** [[SQLITE_CONFIG_GETPCACHE2]] <dt>SQLITE_CONFIG_GETPCACHE2</dt> +** <dd> ^(The SQLITE_CONFIG_GETPCACHE2 option takes a single argument which +** is a pointer to an [sqlite3_pcache_methods2] object. SQLite copies of +** the current page cache implementation into that object.)^ </dd> +** +** [[SQLITE_CONFIG_LOG]] <dt>SQLITE_CONFIG_LOG</dt> +** <dd> The SQLITE_CONFIG_LOG option is used to configure the SQLite +** global [error log]. +** (^The SQLITE_CONFIG_LOG option takes two arguments: a pointer to a +** function with a call signature of void(*)(void*,int,const char*), +** and a pointer to void. ^If the function pointer is not NULL, it is +** invoked by [sqlite3_log()] to process each logging event. ^If the +** function pointer is NULL, the [sqlite3_log()] interface becomes a no-op. 
+** ^The void pointer that is the second argument to SQLITE_CONFIG_LOG is +** passed through as the first parameter to the application-defined logger +** function whenever that function is invoked. ^The second parameter to +** the logger function is a copy of the first parameter to the corresponding +** [sqlite3_log()] call and is intended to be a [result code] or an +** [extended result code]. ^The third parameter passed to the logger is +** log message after formatting via [sqlite3_snprintf()]. +** The SQLite logging interface is not reentrant; the logger function +** supplied by the application must not invoke any SQLite interface. +** In a multi-threaded application, the application-defined logger +** function must be threadsafe. </dd> +** +** [[SQLITE_CONFIG_URI]] <dt>SQLITE_CONFIG_URI +** <dd>^(The SQLITE_CONFIG_URI option takes a single argument of type int. +** If non-zero, then URI handling is globally enabled. If the parameter is zero, +** then URI handling is globally disabled.)^ ^If URI handling is globally +** enabled, all filenames passed to [sqlite3_open()], [sqlite3_open_v2()], +** [sqlite3_open16()] or +** specified as part of [ATTACH] commands are interpreted as URIs, regardless +** of whether or not the [SQLITE_OPEN_URI] flag is set when the database +** connection is opened. ^If it is globally disabled, filenames are +** only interpreted as URIs if the SQLITE_OPEN_URI flag is set when the +** database connection is opened. ^(By default, URI handling is globally +** disabled. The default value may be changed by compiling with the +** [SQLITE_USE_URI] symbol defined.)^ +** +** [[SQLITE_CONFIG_COVERING_INDEX_SCAN]] <dt>SQLITE_CONFIG_COVERING_INDEX_SCAN +** <dd>^The SQLITE_CONFIG_COVERING_INDEX_SCAN option takes a single integer +** argument which is interpreted as a boolean in order to enable or disable +** the use of covering indices for full table scans in the query optimizer. +** ^The default setting is determined +** by the [SQLITE_ALLOW_COVERING_INDEX_SCAN] compile-time option, or is "on" +** if that compile-time option is omitted. +** The ability to disable the use of covering indices for full table scans +** is because some incorrectly coded legacy applications might malfunction +** when the optimization is enabled. Providing the ability to +** disable the optimization allows the older, buggy application code to work +** without change even with newer versions of SQLite. +** +** [[SQLITE_CONFIG_PCACHE]] [[SQLITE_CONFIG_GETPCACHE]] +** <dt>SQLITE_CONFIG_PCACHE and SQLITE_CONFIG_GETPCACHE +** <dd> These options are obsolete and should not be used by new code. +** They are retained for backwards compatibility but are now no-ops. +** </dd> +** +** [[SQLITE_CONFIG_SQLLOG]] +** <dt>SQLITE_CONFIG_SQLLOG +** <dd>This option is only available if sqlite is compiled with the +** [SQLITE_ENABLE_SQLLOG] pre-processor macro defined. The first argument should +** be a pointer to a function of type void(*)(void*,sqlite3*,const char*, int). +** The second should be of type (void*). The callback is invoked by the library +** in three separate circumstances, identified by the value passed as the +** fourth parameter. If the fourth parameter is 0, then the database connection +** passed as the second argument has just been opened. The third argument +** points to a buffer containing the name of the main database file. If the +** fourth parameter is 1, then the SQL statement that the third parameter +** points to has just been executed. 
Or, if the fourth parameter is 2, then +** the connection being passed as the second parameter is being closed. The +** third parameter is passed NULL In this case. An example of using this +** configuration option can be seen in the "test_sqllog.c" source file in +** the canonical SQLite source tree.</dd> +** +** [[SQLITE_CONFIG_MMAP_SIZE]] +** <dt>SQLITE_CONFIG_MMAP_SIZE +** <dd>^SQLITE_CONFIG_MMAP_SIZE takes two 64-bit integer (sqlite3_int64) values +** that are the default mmap size limit (the default setting for +** [PRAGMA mmap_size]) and the maximum allowed mmap size limit. +** ^The default setting can be overridden by each database connection using +** either the [PRAGMA mmap_size] command, or by using the +** [SQLITE_FCNTL_MMAP_SIZE] file control. ^(The maximum allowed mmap size +** will be silently truncated if necessary so that it does not exceed the +** compile-time maximum mmap size set by the +** [SQLITE_MAX_MMAP_SIZE] compile-time option.)^ +** ^If either argument to this option is negative, then that argument is +** changed to its compile-time default. +** +** [[SQLITE_CONFIG_WIN32_HEAPSIZE]] +** <dt>SQLITE_CONFIG_WIN32_HEAPSIZE +** <dd>^The SQLITE_CONFIG_WIN32_HEAPSIZE option is only available if SQLite is +** compiled for Windows with the [SQLITE_WIN32_MALLOC] pre-processor macro +** defined. ^SQLITE_CONFIG_WIN32_HEAPSIZE takes a 32-bit unsigned integer value +** that specifies the maximum size of the created heap. +** +** [[SQLITE_CONFIG_PCACHE_HDRSZ]] +** <dt>SQLITE_CONFIG_PCACHE_HDRSZ +** <dd>^The SQLITE_CONFIG_PCACHE_HDRSZ option takes a single parameter which +** is a pointer to an integer and writes into that integer the number of extra +** bytes per page required for each page in [SQLITE_CONFIG_PAGECACHE]. +** The amount of extra space required can change depending on the compiler, +** target platform, and SQLite version. +** +** [[SQLITE_CONFIG_PMASZ]] +** <dt>SQLITE_CONFIG_PMASZ +** <dd>^The SQLITE_CONFIG_PMASZ option takes a single parameter which +** is an unsigned integer and sets the "Minimum PMA Size" for the multithreaded +** sorter to that integer. The default minimum PMA Size is set by the +** [SQLITE_SORTER_PMASZ] compile-time option. New threads are launched +** to help with sort operations when multithreaded sorting +** is enabled (using the [PRAGMA threads] command) and the amount of content +** to be sorted exceeds the page size times the minimum of the +** [PRAGMA cache_size] setting and this value. +** </dl> +*/ +#define SQLITE_CONFIG_SINGLETHREAD 1 /* nil */ +#define SQLITE_CONFIG_MULTITHREAD 2 /* nil */ +#define SQLITE_CONFIG_SERIALIZED 3 /* nil */ +#define SQLITE_CONFIG_MALLOC 4 /* sqlite3_mem_methods* */ +#define SQLITE_CONFIG_GETMALLOC 5 /* sqlite3_mem_methods* */ +#define SQLITE_CONFIG_SCRATCH 6 /* void*, int sz, int N */ +#define SQLITE_CONFIG_PAGECACHE 7 /* void*, int sz, int N */ +#define SQLITE_CONFIG_HEAP 8 /* void*, int nByte, int min */ +#define SQLITE_CONFIG_MEMSTATUS 9 /* boolean */ +#define SQLITE_CONFIG_MUTEX 10 /* sqlite3_mutex_methods* */ +#define SQLITE_CONFIG_GETMUTEX 11 /* sqlite3_mutex_methods* */ +/* previously SQLITE_CONFIG_CHUNKALLOC 12 which is now unused. 
*/ +#define SQLITE_CONFIG_LOOKASIDE 13 /* int int */ +#define SQLITE_CONFIG_PCACHE 14 /* no-op */ +#define SQLITE_CONFIG_GETPCACHE 15 /* no-op */ +#define SQLITE_CONFIG_LOG 16 /* xFunc, void* */ +#define SQLITE_CONFIG_URI 17 /* int */ +#define SQLITE_CONFIG_PCACHE2 18 /* sqlite3_pcache_methods2* */ +#define SQLITE_CONFIG_GETPCACHE2 19 /* sqlite3_pcache_methods2* */ +#define SQLITE_CONFIG_COVERING_INDEX_SCAN 20 /* int */ +#define SQLITE_CONFIG_SQLLOG 21 /* xSqllog, void* */ +#define SQLITE_CONFIG_MMAP_SIZE 22 /* sqlite3_int64, sqlite3_int64 */ +#define SQLITE_CONFIG_WIN32_HEAPSIZE 23 /* int nByte */ +#define SQLITE_CONFIG_PCACHE_HDRSZ 24 /* int *psz */ +#define SQLITE_CONFIG_PMASZ 25 /* unsigned int szPma */ + +/* +** CAPI3REF: Database Connection Configuration Options +** +** These constants are the available integer configuration options that +** can be passed as the second argument to the [sqlite3_db_config()] interface. +** +** New configuration options may be added in future releases of SQLite. +** Existing configuration options might be discontinued. Applications +** should check the return code from [sqlite3_db_config()] to make sure that +** the call worked. ^The [sqlite3_db_config()] interface will return a +** non-zero [error code] if a discontinued or unsupported configuration option +** is invoked. +** +** <dl> +** <dt>SQLITE_DBCONFIG_LOOKASIDE</dt> +** <dd> ^This option takes three additional arguments that determine the +** [lookaside memory allocator] configuration for the [database connection]. +** ^The first argument (the third parameter to [sqlite3_db_config()] is a +** pointer to a memory buffer to use for lookaside memory. +** ^The first argument after the SQLITE_DBCONFIG_LOOKASIDE verb +** may be NULL in which case SQLite will allocate the +** lookaside buffer itself using [sqlite3_malloc()]. ^The second argument is the +** size of each lookaside buffer slot. ^The third argument is the number of +** slots. The size of the buffer in the first argument must be greater than +** or equal to the product of the second and third arguments. The buffer +** must be aligned to an 8-byte boundary. ^If the second argument to +** SQLITE_DBCONFIG_LOOKASIDE is not a multiple of 8, it is internally +** rounded down to the next smaller multiple of 8. ^(The lookaside memory +** configuration for a database connection can only be changed when that +** connection is not currently using lookaside memory, or in other words +** when the "current value" returned by +** [sqlite3_db_status](D,[SQLITE_CONFIG_LOOKASIDE],...) is zero. +** Any attempt to change the lookaside memory configuration when lookaside +** memory is in use leaves the configuration unchanged and returns +** [SQLITE_BUSY].)^</dd> +** +** <dt>SQLITE_DBCONFIG_ENABLE_FKEY</dt> +** <dd> ^This option is used to enable or disable the enforcement of +** [foreign key constraints]. There should be two additional arguments. +** The first argument is an integer which is 0 to disable FK enforcement, +** positive to enable FK enforcement or negative to leave FK enforcement +** unchanged. The second parameter is a pointer to an integer into which +** is written 0 or 1 to indicate whether FK enforcement is off or on +** following this call. The second parameter may be a NULL pointer, in +** which case the FK enforcement setting is not reported back. </dd> +** +** <dt>SQLITE_DBCONFIG_ENABLE_TRIGGER</dt> +** <dd> ^This option is used to enable or disable [CREATE TRIGGER | triggers]. +** There should be two additional arguments. 
+** The first argument is an integer which is 0 to disable triggers, +** positive to enable triggers or negative to leave the setting unchanged. +** The second parameter is a pointer to an integer into which +** is written 0 or 1 to indicate whether triggers are disabled or enabled +** following this call. The second parameter may be a NULL pointer, in +** which case the trigger setting is not reported back. </dd> +** +** </dl> +*/ +#define SQLITE_DBCONFIG_LOOKASIDE 1001 /* void* int int */ +#define SQLITE_DBCONFIG_ENABLE_FKEY 1002 /* int int* */ +#define SQLITE_DBCONFIG_ENABLE_TRIGGER 1003 /* int int* */ + + +/* +** CAPI3REF: Enable Or Disable Extended Result Codes +** METHOD: sqlite3 +** +** ^The sqlite3_extended_result_codes() routine enables or disables the +** [extended result codes] feature of SQLite. ^The extended result +** codes are disabled by default for historical compatibility. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_extended_result_codes(sqlite3*, int onoff); + +/* +** CAPI3REF: Last Insert Rowid +** METHOD: sqlite3 +** +** ^Each entry in most SQLite tables (except for [WITHOUT ROWID] tables) +** has a unique 64-bit signed +** integer key called the [ROWID | "rowid"]. ^The rowid is always available +** as an undeclared column named ROWID, OID, or _ROWID_ as long as those +** names are not also used by explicitly declared columns. ^If +** the table has a column of type [INTEGER PRIMARY KEY] then that column +** is another alias for the rowid. +** +** ^The sqlite3_last_insert_rowid(D) interface returns the [rowid] of the +** most recent successful [INSERT] into a rowid table or [virtual table] +** on database connection D. +** ^Inserts into [WITHOUT ROWID] tables are not recorded. +** ^If no successful [INSERT]s into rowid tables +** have ever occurred on the database connection D, +** then sqlite3_last_insert_rowid(D) returns zero. +** +** ^(If an [INSERT] occurs within a trigger or within a [virtual table] +** method, then this routine will return the [rowid] of the inserted +** row as long as the trigger or virtual table method is running. +** But once the trigger or virtual table method ends, the value returned +** by this routine reverts to what it was before the trigger or virtual +** table method began.)^ +** +** ^An [INSERT] that fails due to a constraint violation is not a +** successful [INSERT] and does not change the value returned by this +** routine. ^Thus INSERT OR FAIL, INSERT OR IGNORE, INSERT OR ROLLBACK, +** and INSERT OR ABORT make no changes to the return value of this +** routine when their insertion fails. ^(When INSERT OR REPLACE +** encounters a constraint violation, it does not fail. The +** INSERT continues to completion after deleting rows that caused +** the constraint problem so INSERT OR REPLACE will always change +** the return value of this interface.)^ +** +** ^For the purposes of this routine, an [INSERT] is considered to +** be successful even if it is subsequently rolled back. +** +** This function is accessible to SQL statements via the +** [last_insert_rowid() SQL function]. +** +** If a separate thread performs a new [INSERT] on the same +** database connection while the [sqlite3_last_insert_rowid()] +** function is running and thus changes the last insert [rowid], +** then the value returned by [sqlite3_last_insert_rowid()] is +** unpredictable and might not equal either the old or the new +** last insert [rowid]. 
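+**
+** A brief sketch of typical use (db is assumed to be an open [database
+** connection] and t1 an ordinary rowid table; error checking omitted):
+**
+** <blockquote><pre>
+**   sqlite3_exec(db, "INSERT INTO t1(x) VALUES(10)", 0, 0, 0);
+**   sqlite3_int64 rowid = sqlite3_last_insert_rowid(db);  /* rowid of new row */
+** </pre></blockquote>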
+*/ +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_last_insert_rowid(sqlite3*); + +/* +** CAPI3REF: Count The Number Of Rows Modified +** METHOD: sqlite3 +** +** ^This function returns the number of rows modified, inserted or +** deleted by the most recently completed INSERT, UPDATE or DELETE +** statement on the database connection specified by the only parameter. +** ^Executing any other type of SQL statement does not modify the value +** returned by this function. +** +** ^Only changes made directly by the INSERT, UPDATE or DELETE statement are +** considered - auxiliary changes caused by [CREATE TRIGGER | triggers], +** [foreign key actions] or [REPLACE] constraint resolution are not counted. +** +** Changes to a view that are intercepted by +** [INSTEAD OF trigger | INSTEAD OF triggers] are not counted. ^The value +** returned by sqlite3_changes() immediately after an INSERT, UPDATE or +** DELETE statement run on a view is always zero. Only changes made to real +** tables are counted. +** +** Things are more complicated if the sqlite3_changes() function is +** executed while a trigger program is running. This may happen if the +** program uses the [changes() SQL function], or if some other callback +** function invokes sqlite3_changes() directly. Essentially: +** +** <ul> +** <li> ^(Before entering a trigger program the value returned by +** sqlite3_changes() function is saved. After the trigger program +** has finished, the original value is restored.)^ +** +** <li> ^(Within a trigger program each INSERT, UPDATE and DELETE +** statement sets the value returned by sqlite3_changes() +** upon completion as normal. Of course, this value will not include +** any changes performed by sub-triggers, as the sqlite3_changes() +** value will be saved and restored after each sub-trigger has run.)^ +** </ul> +** +** ^This means that if the changes() SQL function (or similar) is used +** by the first INSERT, UPDATE or DELETE statement within a trigger, it +** returns the value as set when the calling statement began executing. +** ^If it is used by the second or subsequent such statement within a trigger +** program, the value returned reflects the number of rows modified by the +** previous INSERT, UPDATE or DELETE statement within the same trigger. +** +** See also the [sqlite3_total_changes()] interface, the +** [count_changes pragma], and the [changes() SQL function]. +** +** If a separate thread makes changes on the same database connection +** while [sqlite3_changes()] is running then the value returned +** is unpredictable and not meaningful. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_changes(sqlite3*); + +/* +** CAPI3REF: Total Number Of Rows Modified +** METHOD: sqlite3 +** +** ^This function returns the total number of rows inserted, modified or +** deleted by all [INSERT], [UPDATE] or [DELETE] statements completed +** since the database connection was opened, including those executed as +** part of trigger programs. ^Executing any other type of SQL statement +** does not affect the value returned by sqlite3_total_changes(). +** +** ^Changes made as part of [foreign key actions] are included in the +** count, but those made as part of REPLACE constraint resolution are +** not. ^Changes to a view that are intercepted by INSTEAD OF triggers +** are not counted. +** +** See also the [sqlite3_changes()] interface, the +** [count_changes pragma], and the [total_changes() SQL function]. 
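+**
+** The difference between the two counters can be seen in the following
+** sketch (db and t1 are assumed to be an open connection and an existing
+** rowid table; error checking omitted):
+**
+** <blockquote><pre>
+**   sqlite3_exec(db, "UPDATE t1 SET x=x+1", 0, 0, 0);
+**   int nLast  = sqlite3_changes(db);        /* rows changed by this UPDATE only */
+**   int nTotal = sqlite3_total_changes(db);  /* all rows changed since open */
+** </pre></blockquote>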
+** +** If a separate thread makes changes on the same database connection +** while [sqlite3_total_changes()] is running then the value +** returned is unpredictable and not meaningful. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_total_changes(sqlite3*); + +/* +** CAPI3REF: Interrupt A Long-Running Query +** METHOD: sqlite3 +** +** ^This function causes any pending database operation to abort and +** return at its earliest opportunity. This routine is typically +** called in response to a user action such as pressing "Cancel" +** or Ctrl-C where the user wants a long query operation to halt +** immediately. +** +** ^It is safe to call this routine from a thread different from the +** thread that is currently running the database operation. But it +** is not safe to call this routine with a [database connection] that +** is closed or might close before sqlite3_interrupt() returns. +** +** ^If an SQL operation is very nearly finished at the time when +** sqlite3_interrupt() is called, then it might not have an opportunity +** to be interrupted and might continue to completion. +** +** ^An SQL operation that is interrupted will return [SQLITE_INTERRUPT]. +** ^If the interrupted SQL operation is an INSERT, UPDATE, or DELETE +** that is inside an explicit transaction, then the entire transaction +** will be rolled back automatically. +** +** ^The sqlite3_interrupt(D) call is in effect until all currently running +** SQL statements on [database connection] D complete. ^Any new SQL statements +** that are started after the sqlite3_interrupt() call and before the +** running statements reaches zero are interrupted as if they had been +** running prior to the sqlite3_interrupt() call. ^New SQL statements +** that are started after the running statement count reaches zero are +** not effected by the sqlite3_interrupt(). +** ^A call to sqlite3_interrupt(D) that occurs when there are no running +** SQL statements is a no-op and has no effect on SQL statements +** that are started after the sqlite3_interrupt() call returns. +** +** If the database connection closes while [sqlite3_interrupt()] +** is running then bad things will likely happen. +*/ +SQLITE_API void SQLITE_STDCALL sqlite3_interrupt(sqlite3*); + +/* +** CAPI3REF: Determine If An SQL Statement Is Complete +** +** These routines are useful during command-line input to determine if the +** currently entered text seems to form a complete SQL statement or +** if additional input is needed before sending the text into +** SQLite for parsing. ^These routines return 1 if the input string +** appears to be a complete SQL statement. ^A statement is judged to be +** complete if it ends with a semicolon token and is not a prefix of a +** well-formed CREATE TRIGGER statement. ^Semicolons that are embedded within +** string literals or quoted identifier names or comments are not +** independent tokens (they are part of the token in which they are +** embedded) and thus do not count as a statement terminator. ^Whitespace +** and comments that follow the final semicolon are ignored. +** +** ^These routines return 0 if the statement is incomplete. ^If a +** memory allocation fails, then SQLITE_NOMEM is returned. +** +** ^These routines do not parse the SQL statements thus +** will not detect syntactically incorrect SQL. +** +** ^(If SQLite has not been initialized using [sqlite3_initialize()] prior +** to invoking sqlite3_complete16() then sqlite3_initialize() is invoked +** automatically by sqlite3_complete16(). 
If that initialization fails, +** then the return value from sqlite3_complete16() will be non-zero +** regardless of whether or not the input SQL is complete.)^ +** +** The input to [sqlite3_complete()] must be a zero-terminated +** UTF-8 string. +** +** The input to [sqlite3_complete16()] must be a zero-terminated +** UTF-16 string in native byte order. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_complete(const char *sql); +SQLITE_API int SQLITE_STDCALL sqlite3_complete16(const void *sql); + +/* +** CAPI3REF: Register A Callback To Handle SQLITE_BUSY Errors +** KEYWORDS: {busy-handler callback} {busy handler} +** METHOD: sqlite3 +** +** ^The sqlite3_busy_handler(D,X,P) routine sets a callback function X +** that might be invoked with argument P whenever +** an attempt is made to access a database table associated with +** [database connection] D when another thread +** or process has the table locked. +** The sqlite3_busy_handler() interface is used to implement +** [sqlite3_busy_timeout()] and [PRAGMA busy_timeout]. +** +** ^If the busy callback is NULL, then [SQLITE_BUSY] +** is returned immediately upon encountering the lock. ^If the busy callback +** is not NULL, then the callback might be invoked with two arguments. +** +** ^The first argument to the busy handler is a copy of the void* pointer which +** is the third argument to sqlite3_busy_handler(). ^The second argument to +** the busy handler callback is the number of times that the busy handler has +** been invoked previously for the same locking event. ^If the +** busy callback returns 0, then no additional attempts are made to +** access the database and [SQLITE_BUSY] is returned +** to the application. +** ^If the callback returns non-zero, then another attempt +** is made to access the database and the cycle repeats. +** +** The presence of a busy handler does not guarantee that it will be invoked +** when there is lock contention. ^If SQLite determines that invoking the busy +** handler could result in a deadlock, it will go ahead and return [SQLITE_BUSY] +** to the application instead of invoking the +** busy handler. +** Consider a scenario where one process is holding a read lock that +** it is trying to promote to a reserved lock and +** a second process is holding a reserved lock that it is trying +** to promote to an exclusive lock. The first process cannot proceed +** because it is blocked by the second and the second process cannot +** proceed because it is blocked by the first. If both processes +** invoke the busy handlers, neither will make any progress. Therefore, +** SQLite returns [SQLITE_BUSY] for the first process, hoping that this +** will induce the first process to release its read lock and allow +** the second process to proceed. +** +** ^The default busy callback is NULL. +** +** ^(There can only be a single busy handler defined for each +** [database connection]. Setting a new busy handler clears any +** previously set handler.)^ ^Note that calling [sqlite3_busy_timeout()] +** or evaluating [PRAGMA busy_timeout=N] will change the +** busy handler and thus clear any previously set busy handler. +** +** The busy callback should not take any actions which modify the +** database connection that invoked the busy handler. In other words, +** the busy handler is not reentrant. Any such actions +** result in undefined behavior. +** +** A busy handler must not close the database connection +** or [prepared statement] that invoked the busy handler. 
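+**
+** The following sketch shows one possible busy handler that simply retries
+** a fixed number of times (the callback name and the limit of 10 retries
+** are arbitrary choices, not part of the interface):
+**
+** <blockquote><pre>
+**   static int retryBusy(void *pNotUsed, int nPrior){
+**     if( nPrior>=10 ) return 0;   /* give up; the statement gets SQLITE_BUSY */
+**     return 1;                    /* ask SQLite to try the lock again */
+**   }
+**   sqlite3_busy_handler(db, retryBusy, 0);
+** </pre></blockquote>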
+*/ +SQLITE_API int SQLITE_STDCALL sqlite3_busy_handler(sqlite3*, int(*)(void*,int), void*); + +/* +** CAPI3REF: Set A Busy Timeout +** METHOD: sqlite3 +** +** ^This routine sets a [sqlite3_busy_handler | busy handler] that sleeps +** for a specified amount of time when a table is locked. ^The handler +** will sleep multiple times until at least "ms" milliseconds of sleeping +** have accumulated. ^After at least "ms" milliseconds of sleeping, +** the handler returns 0 which causes [sqlite3_step()] to return +** [SQLITE_BUSY]. +** +** ^Calling this routine with an argument less than or equal to zero +** turns off all busy handlers. +** +** ^(There can only be a single busy handler for a particular +** [database connection] at any given moment. If another busy handler +** was defined (using [sqlite3_busy_handler()]) prior to calling +** this routine, that other busy handler is cleared.)^ +** +** See also: [PRAGMA busy_timeout] +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_busy_timeout(sqlite3*, int ms); + +/* +** CAPI3REF: Convenience Routines For Running Queries +** METHOD: sqlite3 +** +** This is a legacy interface that is preserved for backwards compatibility. +** Use of this interface is not recommended. +** +** Definition: A <b>result table</b> is memory data structure created by the +** [sqlite3_get_table()] interface. A result table records the +** complete query results from one or more queries. +** +** The table conceptually has a number of rows and columns. But +** these numbers are not part of the result table itself. These +** numbers are obtained separately. Let N be the number of rows +** and M be the number of columns. +** +** A result table is an array of pointers to zero-terminated UTF-8 strings. +** There are (N+1)*M elements in the array. The first M pointers point +** to zero-terminated strings that contain the names of the columns. +** The remaining entries all point to query results. NULL values result +** in NULL pointers. All other values are in their UTF-8 zero-terminated +** string representation as returned by [sqlite3_column_text()]. +** +** A result table might consist of one or more memory allocations. +** It is not safe to pass a result table directly to [sqlite3_free()]. +** A result table should be deallocated using [sqlite3_free_table()]. +** +** ^(As an example of the result table format, suppose a query result +** is as follows: +** +** <blockquote><pre> +** Name | Age +** ----------------------- +** Alice | 43 +** Bob | 28 +** Cindy | 21 +** </pre></blockquote> +** +** There are two column (M==2) and three rows (N==3). Thus the +** result table has 8 entries. Suppose the result table is stored +** in an array names azResult. Then azResult holds this content: +** +** <blockquote><pre> +** azResult[0] = "Name"; +** azResult[1] = "Age"; +** azResult[2] = "Alice"; +** azResult[3] = "43"; +** azResult[4] = "Bob"; +** azResult[5] = "28"; +** azResult[6] = "Cindy"; +** azResult[7] = "21"; +** </pre></blockquote>)^ +** +** ^The sqlite3_get_table() function evaluates one or more +** semicolon-separated SQL statements in the zero-terminated UTF-8 +** string of its 2nd parameter and returns a result table to the +** pointer given in its 3rd parameter. +** +** After the application has finished with the result from sqlite3_get_table(), +** it must pass the result table pointer to sqlite3_free_table() in order to +** release the memory that was malloced. 
Because of the way the +** [sqlite3_malloc()] happens within sqlite3_get_table(), the calling +** function must not try to call [sqlite3_free()] directly. Only +** [sqlite3_free_table()] is able to release the memory properly and safely. +** +** The sqlite3_get_table() interface is implemented as a wrapper around +** [sqlite3_exec()]. The sqlite3_get_table() routine does not have access +** to any internal data structures of SQLite. It uses only the public +** interface defined here. As a consequence, errors that occur in the +** wrapper layer outside of the internal [sqlite3_exec()] call are not +** reflected in subsequent calls to [sqlite3_errcode()] or +** [sqlite3_errmsg()]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_get_table( + sqlite3 *db, /* An open database */ + const char *zSql, /* SQL to be evaluated */ + char ***pazResult, /* Results of the query */ + int *pnRow, /* Number of result rows written here */ + int *pnColumn, /* Number of result columns written here */ + char **pzErrmsg /* Error msg written here */ +); +SQLITE_API void SQLITE_STDCALL sqlite3_free_table(char **result); + +/* +** CAPI3REF: Formatted String Printing Functions +** +** These routines are work-alikes of the "printf()" family of functions +** from the standard C library. +** These routines understand most of the common K&R formatting options, +** plus some additional non-standard formats, detailed below. +** Note that some of the more obscure formatting options from recent +** C-library standards are omitted from this implementation. +** +** ^The sqlite3_mprintf() and sqlite3_vmprintf() routines write their +** results into memory obtained from [sqlite3_malloc()]. +** The strings returned by these two routines should be +** released by [sqlite3_free()]. ^Both routines return a +** NULL pointer if [sqlite3_malloc()] is unable to allocate enough +** memory to hold the resulting string. +** +** ^(The sqlite3_snprintf() routine is similar to "snprintf()" from +** the standard C library. The result is written into the +** buffer supplied as the second parameter whose size is given by +** the first parameter. Note that the order of the +** first two parameters is reversed from snprintf().)^ This is an +** historical accident that cannot be fixed without breaking +** backwards compatibility. ^(Note also that sqlite3_snprintf() +** returns a pointer to its buffer instead of the number of +** characters actually written into the buffer.)^ We admit that +** the number of characters written would be a more useful return +** value but we cannot change the implementation of sqlite3_snprintf() +** now without breaking compatibility. +** +** ^As long as the buffer size is greater than zero, sqlite3_snprintf() +** guarantees that the buffer is always zero-terminated. ^The first +** parameter "n" is the total size of the buffer, including space for +** the zero terminator. So the longest string that can be completely +** written will be n-1 characters. +** +** ^The sqlite3_vsnprintf() routine is a varargs version of sqlite3_snprintf(). +** +** These routines all implement some additional formatting +** options that are useful for constructing SQL statements. +** All of the usual printf() formatting options apply. In addition, there +** is are "%q", "%Q", "%w" and "%z" options. +** +** ^(The %q option works like %s in that it substitutes a nul-terminated +** string from the argument list. But %q also doubles every '\'' character. 
+** %q is designed for use inside a string literal.)^ By doubling each '\'' +** character it escapes that character and allows it to be inserted into +** the string. +** +** For example, assume the string variable zText contains text as follows: +** +** <blockquote><pre> +** char *zText = "It's a happy day!"; +** </pre></blockquote> +** +** One can use this text in an SQL statement as follows: +** +** <blockquote><pre> +** char *zSQL = sqlite3_mprintf("INSERT INTO table VALUES('%q')", zText); +** sqlite3_exec(db, zSQL, 0, 0, 0); +** sqlite3_free(zSQL); +** </pre></blockquote> +** +** Because the %q format string is used, the '\'' character in zText +** is escaped and the SQL generated is as follows: +** +** <blockquote><pre> +** INSERT INTO table1 VALUES('It''s a happy day!') +** </pre></blockquote> +** +** This is correct. Had we used %s instead of %q, the generated SQL +** would have looked like this: +** +** <blockquote><pre> +** INSERT INTO table1 VALUES('It's a happy day!'); +** </pre></blockquote> +** +** This second example is an SQL syntax error. As a general rule you should +** always use %q instead of %s when inserting text into a string literal. +** +** ^(The %Q option works like %q except it also adds single quotes around +** the outside of the total string. Additionally, if the parameter in the +** argument list is a NULL pointer, %Q substitutes the text "NULL" (without +** single quotes).)^ So, for example, one could say: +** +** <blockquote><pre> +** char *zSQL = sqlite3_mprintf("INSERT INTO table VALUES(%Q)", zText); +** sqlite3_exec(db, zSQL, 0, 0, 0); +** sqlite3_free(zSQL); +** </pre></blockquote> +** +** The code above will render a correct SQL statement in the zSQL +** variable even if the zText variable is a NULL pointer. +** +** ^(The "%w" formatting option is like "%q" except that it expects to +** be contained within double-quotes instead of single quotes, and it +** escapes the double-quote character instead of the single-quote +** character.)^ The "%w" formatting option is intended for safely inserting +** table and column names into a constructed SQL statement. +** +** ^(The "%z" formatting option works like "%s" but with the +** addition that after the string has been read and copied into +** the result, [sqlite3_free()] is called on the input string.)^ +*/ +SQLITE_API char *SQLITE_CDECL sqlite3_mprintf(const char*,...); +SQLITE_API char *SQLITE_STDCALL sqlite3_vmprintf(const char*, va_list); +SQLITE_API char *SQLITE_CDECL sqlite3_snprintf(int,char*,const char*, ...); +SQLITE_API char *SQLITE_STDCALL sqlite3_vsnprintf(int,char*,const char*, va_list); + +/* +** CAPI3REF: Memory Allocation Subsystem +** +** The SQLite core uses these three routines for all of its own +** internal memory allocation needs. "Core" in the previous sentence +** does not include operating-system specific VFS implementation. The +** Windows VFS uses native malloc() and free() for some operations. +** +** ^The sqlite3_malloc() routine returns a pointer to a block +** of memory at least N bytes in length, where N is the parameter. +** ^If sqlite3_malloc() is unable to obtain sufficient free +** memory, it returns a NULL pointer. ^If the parameter N to +** sqlite3_malloc() is zero or negative then sqlite3_malloc() returns +** a NULL pointer. +** +** ^The sqlite3_malloc64(N) routine works just like +** sqlite3_malloc(N) except that N is an unsigned 64-bit integer instead +** of a signed 32-bit integer. 
+** +** ^Calling sqlite3_free() with a pointer previously returned +** by sqlite3_malloc() or sqlite3_realloc() releases that memory so +** that it might be reused. ^The sqlite3_free() routine is +** a no-op if is called with a NULL pointer. Passing a NULL pointer +** to sqlite3_free() is harmless. After being freed, memory +** should neither be read nor written. Even reading previously freed +** memory might result in a segmentation fault or other severe error. +** Memory corruption, a segmentation fault, or other severe error +** might result if sqlite3_free() is called with a non-NULL pointer that +** was not obtained from sqlite3_malloc() or sqlite3_realloc(). +** +** ^The sqlite3_realloc(X,N) interface attempts to resize a +** prior memory allocation X to be at least N bytes. +** ^If the X parameter to sqlite3_realloc(X,N) +** is a NULL pointer then its behavior is identical to calling +** sqlite3_malloc(N). +** ^If the N parameter to sqlite3_realloc(X,N) is zero or +** negative then the behavior is exactly the same as calling +** sqlite3_free(X). +** ^sqlite3_realloc(X,N) returns a pointer to a memory allocation +** of at least N bytes in size or NULL if insufficient memory is available. +** ^If M is the size of the prior allocation, then min(N,M) bytes +** of the prior allocation are copied into the beginning of buffer returned +** by sqlite3_realloc(X,N) and the prior allocation is freed. +** ^If sqlite3_realloc(X,N) returns NULL and N is positive, then the +** prior allocation is not freed. +** +** ^The sqlite3_realloc64(X,N) interfaces works the same as +** sqlite3_realloc(X,N) except that N is a 64-bit unsigned integer instead +** of a 32-bit signed integer. +** +** ^If X is a memory allocation previously obtained from sqlite3_malloc(), +** sqlite3_malloc64(), sqlite3_realloc(), or sqlite3_realloc64(), then +** sqlite3_msize(X) returns the size of that memory allocation in bytes. +** ^The value returned by sqlite3_msize(X) might be larger than the number +** of bytes requested when X was allocated. ^If X is a NULL pointer then +** sqlite3_msize(X) returns zero. If X points to something that is not +** the beginning of memory allocation, or if it points to a formerly +** valid memory allocation that has now been freed, then the behavior +** of sqlite3_msize(X) is undefined and possibly harmful. +** +** ^The memory returned by sqlite3_malloc(), sqlite3_realloc(), +** sqlite3_malloc64(), and sqlite3_realloc64() +** is always aligned to at least an 8 byte boundary, or to a +** 4 byte boundary if the [SQLITE_4_BYTE_ALIGNED_MALLOC] compile-time +** option is used. +** +** In SQLite version 3.5.0 and 3.5.1, it was possible to define +** the SQLITE_OMIT_MEMORY_ALLOCATION which would cause the built-in +** implementation of these routines to be omitted. That capability +** is no longer provided. Only built-in memory allocators can be used. +** +** Prior to SQLite version 3.7.10, the Windows OS interface layer called +** the system malloc() and free() directly when converting +** filenames between the UTF-8 encoding used by SQLite +** and whatever filename encoding is used by the particular Windows +** installation. Memory allocation errors were detected, but +** they were reported back as [SQLITE_CANTOPEN] or +** [SQLITE_IOERR] rather than [SQLITE_NOMEM]. +** +** The pointer arguments to [sqlite3_free()] and [sqlite3_realloc()] +** must be either NULL or else pointers obtained from a prior +** invocation of [sqlite3_malloc()] or [sqlite3_realloc()] that have +** not yet been released. 
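+**
+** A short sketch of the usual allocate/resize/release pattern (the buffer
+** name and sizes are arbitrary; error handling is minimal):
+**
+** <blockquote><pre>
+**   char *p = sqlite3_malloc64(100);
+**   if( p ){
+**     char *pNew = sqlite3_realloc64(p, 200);
+**     if( pNew ) p = pNew;   /* on failure the original allocation survives */
+**   }
+**   sqlite3_free(p);
+** </pre></blockquote>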
+** +** The application must not read or write any part of +** a block of memory after it has been released using +** [sqlite3_free()] or [sqlite3_realloc()]. +*/ +SQLITE_API void *SQLITE_STDCALL sqlite3_malloc(int); +SQLITE_API void *SQLITE_STDCALL sqlite3_malloc64(sqlite3_uint64); +SQLITE_API void *SQLITE_STDCALL sqlite3_realloc(void*, int); +SQLITE_API void *SQLITE_STDCALL sqlite3_realloc64(void*, sqlite3_uint64); +SQLITE_API void SQLITE_STDCALL sqlite3_free(void*); +SQLITE_API sqlite3_uint64 SQLITE_STDCALL sqlite3_msize(void*); + +/* +** CAPI3REF: Memory Allocator Statistics +** +** SQLite provides these two interfaces for reporting on the status +** of the [sqlite3_malloc()], [sqlite3_free()], and [sqlite3_realloc()] +** routines, which form the built-in memory allocation subsystem. +** +** ^The [sqlite3_memory_used()] routine returns the number of bytes +** of memory currently outstanding (malloced but not freed). +** ^The [sqlite3_memory_highwater()] routine returns the maximum +** value of [sqlite3_memory_used()] since the high-water mark +** was last reset. ^The values returned by [sqlite3_memory_used()] and +** [sqlite3_memory_highwater()] include any overhead +** added by SQLite in its implementation of [sqlite3_malloc()], +** but not overhead added by the any underlying system library +** routines that [sqlite3_malloc()] may call. +** +** ^The memory high-water mark is reset to the current value of +** [sqlite3_memory_used()] if and only if the parameter to +** [sqlite3_memory_highwater()] is true. ^The value returned +** by [sqlite3_memory_highwater(1)] is the high-water mark +** prior to the reset. +*/ +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_memory_used(void); +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_memory_highwater(int resetFlag); + +/* +** CAPI3REF: Pseudo-Random Number Generator +** +** SQLite contains a high-quality pseudo-random number generator (PRNG) used to +** select random [ROWID | ROWIDs] when inserting new records into a table that +** already uses the largest possible [ROWID]. The PRNG is also used for +** the build-in random() and randomblob() SQL functions. This interface allows +** applications to access the same PRNG for other purposes. +** +** ^A call to this routine stores N bytes of randomness into buffer P. +** ^The P parameter can be a NULL pointer. +** +** ^If this routine has not been previously called or if the previous +** call had N less than one or a NULL pointer for P, then the PRNG is +** seeded using randomness obtained from the xRandomness method of +** the default [sqlite3_vfs] object. +** ^If the previous call to this routine had an N of 1 or more and a +** non-NULL P then the pseudo-randomness is generated +** internally and without recourse to the [sqlite3_vfs] xRandomness +** method. +*/ +SQLITE_API void SQLITE_STDCALL sqlite3_randomness(int N, void *P); + +/* +** CAPI3REF: Compile-Time Authorization Callbacks +** METHOD: sqlite3 +** +** ^This routine registers an authorizer callback with a particular +** [database connection], supplied in the first argument. +** ^The authorizer callback is invoked as SQL statements are being compiled +** by [sqlite3_prepare()] or its variants [sqlite3_prepare_v2()], +** [sqlite3_prepare16()] and [sqlite3_prepare16_v2()]. ^At various +** points during the compilation process, as logic is being created +** to perform various actions, the authorizer callback is invoked to +** see if those actions are allowed. 
^The authorizer callback should +** return [SQLITE_OK] to allow the action, [SQLITE_IGNORE] to disallow the +** specific action but allow the SQL statement to continue to be +** compiled, or [SQLITE_DENY] to cause the entire SQL statement to be +** rejected with an error. ^If the authorizer callback returns +** any value other than [SQLITE_IGNORE], [SQLITE_OK], or [SQLITE_DENY] +** then the [sqlite3_prepare_v2()] or equivalent call that triggered +** the authorizer will fail with an error message. +** +** When the callback returns [SQLITE_OK], that means the operation +** requested is ok. ^When the callback returns [SQLITE_DENY], the +** [sqlite3_prepare_v2()] or equivalent call that triggered the +** authorizer will fail with an error message explaining that +** access is denied. +** +** ^The first parameter to the authorizer callback is a copy of the third +** parameter to the sqlite3_set_authorizer() interface. ^The second parameter +** to the callback is an integer [SQLITE_COPY | action code] that specifies +** the particular action to be authorized. ^The third through sixth parameters +** to the callback are zero-terminated strings that contain additional +** details about the action to be authorized. +** +** ^If the action code is [SQLITE_READ] +** and the callback returns [SQLITE_IGNORE] then the +** [prepared statement] statement is constructed to substitute +** a NULL value in place of the table column that would have +** been read if [SQLITE_OK] had been returned. The [SQLITE_IGNORE] +** return can be used to deny an untrusted user access to individual +** columns of a table. +** ^If the action code is [SQLITE_DELETE] and the callback returns +** [SQLITE_IGNORE] then the [DELETE] operation proceeds but the +** [truncate optimization] is disabled and all rows are deleted individually. +** +** An authorizer is used when [sqlite3_prepare | preparing] +** SQL statements from an untrusted source, to ensure that the SQL statements +** do not try to access data they are not allowed to see, or that they do not +** try to execute malicious statements that damage the database. For +** example, an application may allow a user to enter arbitrary +** SQL queries for evaluation by a database. But the application does +** not want the user to be able to make arbitrary changes to the +** database. An authorizer could then be put in place while the +** user-entered SQL is being [sqlite3_prepare | prepared] that +** disallows everything except [SELECT] statements. +** +** Applications that need to process SQL from untrusted sources +** might also consider lowering resource limits using [sqlite3_limit()] +** and limiting database size using the [max_page_count] [PRAGMA] +** in addition to using an authorizer. +** +** ^(Only a single authorizer can be in place on a database connection +** at a time. Each call to sqlite3_set_authorizer overrides the +** previous call.)^ ^Disable the authorizer by installing a NULL callback. +** The authorizer is disabled by default. +** +** The authorizer callback must not do anything that will modify +** the database connection that invoked the authorizer callback. +** Note that [sqlite3_prepare_v2()] and [sqlite3_step()] both modify their +** database connections for the meaning of "modify" in this paragraph. +** +** ^When [sqlite3_prepare_v2()] is used to prepare a statement, the +** statement might be re-prepared during [sqlite3_step()] due to a +** schema change. 
Hence, the application should ensure that the +** correct authorizer callback remains in place during the [sqlite3_step()]. +** +** ^Note that the authorizer callback is invoked only during +** [sqlite3_prepare()] or its variants. Authorization is not +** performed during statement evaluation in [sqlite3_step()], unless +** as stated in the previous paragraph, sqlite3_step() invokes +** sqlite3_prepare_v2() to reprepare a statement after a schema change. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_set_authorizer( + sqlite3*, + int (*xAuth)(void*,int,const char*,const char*,const char*,const char*), + void *pUserData +); + +/* +** CAPI3REF: Authorizer Return Codes +** +** The [sqlite3_set_authorizer | authorizer callback function] must +** return either [SQLITE_OK] or one of these two constants in order +** to signal SQLite whether or not the action is permitted. See the +** [sqlite3_set_authorizer | authorizer documentation] for additional +** information. +** +** Note that SQLITE_IGNORE is also used as a [conflict resolution mode] +** returned from the [sqlite3_vtab_on_conflict()] interface. +*/ +#define SQLITE_DENY 1 /* Abort the SQL statement with an error */ +#define SQLITE_IGNORE 2 /* Don't allow access, but don't generate an error */ + +/* +** CAPI3REF: Authorizer Action Codes +** +** The [sqlite3_set_authorizer()] interface registers a callback function +** that is invoked to authorize certain SQL statement actions. The +** second parameter to the callback is an integer code that specifies +** what action is being authorized. These are the integer action codes that +** the authorizer callback may be passed. +** +** These action code values signify what kind of operation is to be +** authorized. The 3rd and 4th parameters to the authorization +** callback function will be parameters or NULL depending on which of these +** codes is used as the second parameter. ^(The 5th parameter to the +** authorizer callback is the name of the database ("main", "temp", +** etc.) if applicable.)^ ^The 6th parameter to the authorizer callback +** is the name of the inner-most trigger or view that is responsible for +** the access attempt or NULL if this access attempt is directly from +** top-level SQL code. 
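+**
+** A minimal authorizer sketch follows (illustrative only; the callback name
+** xAuth and the policy of refusing DROP TABLE are arbitrary examples):
+**
+** <blockquote><pre>
+**   static int xAuth(void *pArg, int op, const char *z3, const char *z4,
+**                    const char *zDb, const char *zTrigger){
+**     if( op==SQLITE_DROP_TABLE ) return SQLITE_DENY;
+**     return SQLITE_OK;
+**   }
+**   /* ... */
+**   sqlite3_set_authorizer(db, xAuth, 0);
+** </pre></blockquote>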
+*/ +/******************************************* 3rd ************ 4th ***********/ +#define SQLITE_CREATE_INDEX 1 /* Index Name Table Name */ +#define SQLITE_CREATE_TABLE 2 /* Table Name NULL */ +#define SQLITE_CREATE_TEMP_INDEX 3 /* Index Name Table Name */ +#define SQLITE_CREATE_TEMP_TABLE 4 /* Table Name NULL */ +#define SQLITE_CREATE_TEMP_TRIGGER 5 /* Trigger Name Table Name */ +#define SQLITE_CREATE_TEMP_VIEW 6 /* View Name NULL */ +#define SQLITE_CREATE_TRIGGER 7 /* Trigger Name Table Name */ +#define SQLITE_CREATE_VIEW 8 /* View Name NULL */ +#define SQLITE_DELETE 9 /* Table Name NULL */ +#define SQLITE_DROP_INDEX 10 /* Index Name Table Name */ +#define SQLITE_DROP_TABLE 11 /* Table Name NULL */ +#define SQLITE_DROP_TEMP_INDEX 12 /* Index Name Table Name */ +#define SQLITE_DROP_TEMP_TABLE 13 /* Table Name NULL */ +#define SQLITE_DROP_TEMP_TRIGGER 14 /* Trigger Name Table Name */ +#define SQLITE_DROP_TEMP_VIEW 15 /* View Name NULL */ +#define SQLITE_DROP_TRIGGER 16 /* Trigger Name Table Name */ +#define SQLITE_DROP_VIEW 17 /* View Name NULL */ +#define SQLITE_INSERT 18 /* Table Name NULL */ +#define SQLITE_PRAGMA 19 /* Pragma Name 1st arg or NULL */ +#define SQLITE_READ 20 /* Table Name Column Name */ +#define SQLITE_SELECT 21 /* NULL NULL */ +#define SQLITE_TRANSACTION 22 /* Operation NULL */ +#define SQLITE_UPDATE 23 /* Table Name Column Name */ +#define SQLITE_ATTACH 24 /* Filename NULL */ +#define SQLITE_DETACH 25 /* Database Name NULL */ +#define SQLITE_ALTER_TABLE 26 /* Database Name Table Name */ +#define SQLITE_REINDEX 27 /* Index Name NULL */ +#define SQLITE_ANALYZE 28 /* Table Name NULL */ +#define SQLITE_CREATE_VTABLE 29 /* Table Name Module Name */ +#define SQLITE_DROP_VTABLE 30 /* Table Name Module Name */ +#define SQLITE_FUNCTION 31 /* NULL Function Name */ +#define SQLITE_SAVEPOINT 32 /* Operation Savepoint Name */ +#define SQLITE_COPY 0 /* No longer used */ +#define SQLITE_RECURSIVE 33 /* NULL NULL */ + +/* +** CAPI3REF: Tracing And Profiling Functions +** METHOD: sqlite3 +** +** These routines register callback functions that can be used for +** tracing and profiling the execution of SQL statements. +** +** ^The callback function registered by sqlite3_trace() is invoked at +** various times when an SQL statement is being run by [sqlite3_step()]. +** ^The sqlite3_trace() callback is invoked with a UTF-8 rendering of the +** SQL statement text as the statement first begins executing. +** ^(Additional sqlite3_trace() callbacks might occur +** as each triggered subprogram is entered. The callbacks for triggers +** contain a UTF-8 SQL comment that identifies the trigger.)^ +** +** The [SQLITE_TRACE_SIZE_LIMIT] compile-time option can be used to limit +** the length of [bound parameter] expansion in the output of sqlite3_trace(). +** +** ^The callback function registered by sqlite3_profile() is invoked +** as each SQL statement finishes. ^The profile callback contains +** the original statement text and an estimate of wall-clock time +** of how long that statement took to run. ^The profile callback +** time is in units of nanoseconds, however the current implementation +** is only capable of millisecond resolution so the six least significant +** digits in the time are meaningless. Future versions of SQLite +** might provide greater resolution on the profiler callback. The +** sqlite3_profile() function is considered experimental and is +** subject to change in future versions of SQLite. 
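+**
+** A minimal registration sketch follows (illustrative only; the callback
+** names xTrace and xProfile are arbitrary examples):
+**
+** <blockquote><pre>
+**   static void xTrace(void *pArg, const char *zSql){
+**     /* e.g. append zSql to an application log */
+**   }
+**   static void xProfile(void *pArg, const char *zSql, sqlite3_uint64 nNs){
+**     /* nNs is the estimated wall-clock time in nanoseconds */
+**   }
+**   /* ... */
+**   sqlite3_trace(db, xTrace, 0);
+**   sqlite3_profile(db, xProfile, 0);
+** </pre></blockquote>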
+*/ +SQLITE_API void *SQLITE_STDCALL sqlite3_trace(sqlite3*, void(*xTrace)(void*,const char*), void*); +SQLITE_API SQLITE_EXPERIMENTAL void *SQLITE_STDCALL sqlite3_profile(sqlite3*, + void(*xProfile)(void*,const char*,sqlite3_uint64), void*); + +/* +** CAPI3REF: Query Progress Callbacks +** METHOD: sqlite3 +** +** ^The sqlite3_progress_handler(D,N,X,P) interface causes the callback +** function X to be invoked periodically during long running calls to +** [sqlite3_exec()], [sqlite3_step()] and [sqlite3_get_table()] for +** database connection D. An example use for this +** interface is to keep a GUI updated during a large query. +** +** ^The parameter P is passed through as the only parameter to the +** callback function X. ^The parameter N is the approximate number of +** [virtual machine instructions] that are evaluated between successive +** invocations of the callback X. ^If N is less than one then the progress +** handler is disabled. +** +** ^Only a single progress handler may be defined at one time per +** [database connection]; setting a new progress handler cancels the +** old one. ^Setting parameter X to NULL disables the progress handler. +** ^The progress handler is also disabled by setting N to a value less +** than 1. +** +** ^If the progress callback returns non-zero, the operation is +** interrupted. This feature can be used to implement a +** "Cancel" button on a GUI progress dialog box. +** +** The progress handler callback must not do anything that will modify +** the database connection that invoked the progress handler. +** Note that [sqlite3_prepare_v2()] and [sqlite3_step()] both modify their +** database connections for the meaning of "modify" in this paragraph. +** +*/ +SQLITE_API void SQLITE_STDCALL sqlite3_progress_handler(sqlite3*, int, int(*)(void*), void*); + +/* +** CAPI3REF: Opening A New Database Connection +** CONSTRUCTOR: sqlite3 +** +** ^These routines open an SQLite database file as specified by the +** filename argument. ^The filename argument is interpreted as UTF-8 for +** sqlite3_open() and sqlite3_open_v2() and as UTF-16 in the native byte +** order for sqlite3_open16(). ^(A [database connection] handle is usually +** returned in *ppDb, even if an error occurs. The only exception is that +** if SQLite is unable to allocate memory to hold the [sqlite3] object, +** a NULL will be written into *ppDb instead of a pointer to the [sqlite3] +** object.)^ ^(If the database is opened (and/or created) successfully, then +** [SQLITE_OK] is returned. Otherwise an [error code] is returned.)^ ^The +** [sqlite3_errmsg()] or [sqlite3_errmsg16()] routines can be used to obtain +** an English language description of the error following a failure of any +** of the sqlite3_open() routines. +** +** ^The default encoding will be UTF-8 for databases created using +** sqlite3_open() or sqlite3_open_v2(). ^The default encoding for databases +** created using sqlite3_open16() will be UTF-16 in the native byte order. +** +** Whether or not an error occurs when it is opened, resources +** associated with the [database connection] handle should be released by +** passing it to [sqlite3_close()] when it is no longer required. +** +** The sqlite3_open_v2() interface works like sqlite3_open() +** except that it accepts two additional parameters for additional control +** over the new database connection. 
^(The flags parameter to +** sqlite3_open_v2() can take one of +** the following three values, optionally combined with the +** [SQLITE_OPEN_NOMUTEX], [SQLITE_OPEN_FULLMUTEX], [SQLITE_OPEN_SHAREDCACHE], +** [SQLITE_OPEN_PRIVATECACHE], and/or [SQLITE_OPEN_URI] flags:)^ +** +** <dl> +** ^(<dt>[SQLITE_OPEN_READONLY]</dt> +** <dd>The database is opened in read-only mode. If the database does not +** already exist, an error is returned.</dd>)^ +** +** ^(<dt>[SQLITE_OPEN_READWRITE]</dt> +** <dd>The database is opened for reading and writing if possible, or reading +** only if the file is write protected by the operating system. In either +** case the database must already exist, otherwise an error is returned.</dd>)^ +** +** ^(<dt>[SQLITE_OPEN_READWRITE] | [SQLITE_OPEN_CREATE]</dt> +** <dd>The database is opened for reading and writing, and is created if +** it does not already exist. This is the behavior that is always used for +** sqlite3_open() and sqlite3_open16().</dd>)^ +** </dl> +** +** If the 3rd parameter to sqlite3_open_v2() is not one of the +** combinations shown above optionally combined with other +** [SQLITE_OPEN_READONLY | SQLITE_OPEN_* bits] +** then the behavior is undefined. +** +** ^If the [SQLITE_OPEN_NOMUTEX] flag is set, then the database connection +** opens in the multi-thread [threading mode] as long as the single-thread +** mode has not been set at compile-time or start-time. ^If the +** [SQLITE_OPEN_FULLMUTEX] flag is set then the database connection opens +** in the serialized [threading mode] unless single-thread was +** previously selected at compile-time or start-time. +** ^The [SQLITE_OPEN_SHAREDCACHE] flag causes the database connection to be +** eligible to use [shared cache mode], regardless of whether or not shared +** cache is enabled using [sqlite3_enable_shared_cache()]. ^The +** [SQLITE_OPEN_PRIVATECACHE] flag causes the database connection to not +** participate in [shared cache mode] even if it is enabled. +** +** ^The fourth parameter to sqlite3_open_v2() is the name of the +** [sqlite3_vfs] object that defines the operating system interface that +** the new database connection should use. ^If the fourth parameter is +** a NULL pointer then the default [sqlite3_vfs] object is used. +** +** ^If the filename is ":memory:", then a private, temporary in-memory database +** is created for the connection. ^This in-memory database will vanish when +** the database connection is closed. Future versions of SQLite might +** make use of additional special filenames that begin with the ":" character. +** It is recommended that when a database filename actually does begin with +** a ":" character you should prefix the filename with a pathname such as +** "./" to avoid ambiguity. +** +** ^If the filename is an empty string, then a private, temporary +** on-disk database will be created. ^This private database will be +** automatically deleted as soon as the database connection is closed. +** +** [[URI filenames in sqlite3_open()]] <h3>URI Filenames</h3> +** +** ^If [URI filename] interpretation is enabled, and the filename argument +** begins with "file:", then the filename is interpreted as a URI. ^URI +** filename interpretation is enabled if the [SQLITE_OPEN_URI] flag is +** set in the fourth argument to sqlite3_open_v2(), or if it has +** been enabled globally using the [SQLITE_CONFIG_URI] option with the +** [sqlite3_config()] method or by the [SQLITE_USE_URI] compile-time option. 
+** As of SQLite version 3.7.7, URI filename interpretation is turned off +** by default, but future releases of SQLite might enable URI filename +** interpretation by default. See "[URI filenames]" for additional +** information. +** +** URI filenames are parsed according to RFC 3986. ^If the URI contains an +** authority, then it must be either an empty string or the string +** "localhost". ^If the authority is not an empty string or "localhost", an +** error is returned to the caller. ^The fragment component of a URI, if +** present, is ignored. +** +** ^SQLite uses the path component of the URI as the name of the disk file +** which contains the database. ^If the path begins with a '/' character, +** then it is interpreted as an absolute path. ^If the path does not begin +** with a '/' (meaning that the authority section is omitted from the URI) +** then the path is interpreted as a relative path. +** ^(On windows, the first component of an absolute path +** is a drive specification (e.g. "C:").)^ +** +** [[core URI query parameters]] +** The query component of a URI may contain parameters that are interpreted +** either by SQLite itself, or by a [VFS | custom VFS implementation]. +** SQLite and its built-in [VFSes] interpret the +** following query parameters: +** +** <ul> +** <li> <b>vfs</b>: ^The "vfs" parameter may be used to specify the name of +** a VFS object that provides the operating system interface that should +** be used to access the database file on disk. ^If this option is set to +** an empty string the default VFS object is used. ^Specifying an unknown +** VFS is an error. ^If sqlite3_open_v2() is used and the vfs option is +** present, then the VFS specified by the option takes precedence over +** the value passed as the fourth parameter to sqlite3_open_v2(). +** +** <li> <b>mode</b>: ^(The mode parameter may be set to either "ro", "rw", +** "rwc", or "memory". Attempting to set it to any other value is +** an error)^. +** ^If "ro" is specified, then the database is opened for read-only +** access, just as if the [SQLITE_OPEN_READONLY] flag had been set in the +** third argument to sqlite3_open_v2(). ^If the mode option is set to +** "rw", then the database is opened for read-write (but not create) +** access, as if SQLITE_OPEN_READWRITE (but not SQLITE_OPEN_CREATE) had +** been set. ^Value "rwc" is equivalent to setting both +** SQLITE_OPEN_READWRITE and SQLITE_OPEN_CREATE. ^If the mode option is +** set to "memory" then a pure [in-memory database] that never reads +** or writes from disk is used. ^It is an error to specify a value for +** the mode parameter that is less restrictive than that specified by +** the flags passed in the third parameter to sqlite3_open_v2(). +** +** <li> <b>cache</b>: ^The cache parameter may be set to either "shared" or +** "private". ^Setting it to "shared" is equivalent to setting the +** SQLITE_OPEN_SHAREDCACHE bit in the flags argument passed to +** sqlite3_open_v2(). ^Setting the cache parameter to "private" is +** equivalent to setting the SQLITE_OPEN_PRIVATECACHE bit. +** ^If sqlite3_open_v2() is used and the "cache" parameter is present in +** a URI filename, its value overrides any behavior requested by setting +** SQLITE_OPEN_PRIVATECACHE or SQLITE_OPEN_SHAREDCACHE flag. +** +** <li> <b>psow</b>: ^The psow parameter indicates whether or not the +** [powersafe overwrite] property does or does not apply to the +** storage media on which the database file resides. 
+** +** <li> <b>nolock</b>: ^The nolock parameter is a boolean query parameter +** which if set disables file locking in rollback journal modes. This +** is useful for accessing a database on a filesystem that does not +** support locking. Caution: Database corruption might result if two +** or more processes write to the same database and any one of those +** processes uses nolock=1. +** +** <li> <b>immutable</b>: ^The immutable parameter is a boolean query +** parameter that indicates that the database file is stored on +** read-only media. ^When immutable is set, SQLite assumes that the +** database file cannot be changed, even by a process with higher +** privilege, and so the database is opened read-only and all locking +** and change detection is disabled. Caution: Setting the immutable +** property on a database file that does in fact change can result +** in incorrect query results and/or [SQLITE_CORRUPT] errors. +** See also: [SQLITE_IOCAP_IMMUTABLE]. +** +** </ul> +** +** ^Specifying an unknown parameter in the query component of a URI is not an +** error. Future versions of SQLite might understand additional query +** parameters. See "[query parameters with special meaning to SQLite]" for +** additional information. +** +** [[URI filename examples]] <h3>URI filename examples</h3> +** +** <table border="1" align=center cellpadding=5> +** <tr><th> URI filenames <th> Results +** <tr><td> file:data.db <td> +** Open the file "data.db" in the current directory. +** <tr><td> file:/home/fred/data.db<br> +** file:///home/fred/data.db <br> +** file://localhost/home/fred/data.db <br> <td> +** Open the database file "/home/fred/data.db". +** <tr><td> file://darkstar/home/fred/data.db <td> +** An error. "darkstar" is not a recognized authority. +** <tr><td style="white-space:nowrap"> +** file:///C:/Documents%20and%20Settings/fred/Desktop/data.db +** <td> Windows only: Open the file "data.db" on fred's desktop on drive +** C:. Note that the %20 escaping in this example is not strictly +** necessary - space characters can be used literally +** in URI filenames. +** <tr><td> file:data.db?mode=ro&cache=private <td> +** Open file "data.db" in the current directory for read-only access. +** Regardless of whether or not shared-cache mode is enabled by +** default, use a private cache. +** <tr><td> file:/home/fred/data.db?vfs=unix-dotfile <td> +** Open file "/home/fred/data.db". Use the special VFS "unix-dotfile" +** that uses dot-files in place of posix advisory locking. +** <tr><td> file:data.db?mode=readonly <td> +** An error. "readonly" is not a valid option for the "mode" parameter. +** </table> +** +** ^URI hexadecimal escape sequences (%HH) are supported within the path and +** query components of a URI. A hexadecimal escape sequence consists of a +** percent sign - "%" - followed by exactly two hexadecimal digits +** specifying an octet value. ^Before the path or query components of a +** URI filename are interpreted, they are encoded using UTF-8 and all +** hexadecimal escape sequences replaced by a single byte containing the +** corresponding octet. If this process generates an invalid UTF-8 encoding, +** the results are undefined. +** +** <b>Note to Windows users:</b> The encoding used for the filename argument +** of sqlite3_open() and sqlite3_open_v2() must be UTF-8, not whatever +** codepage is currently defined. Filenames containing international +** characters must be converted to UTF-8 prior to passing them into +** sqlite3_open() or sqlite3_open_v2(). 
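+**
+** A minimal sketch of opening a database with sqlite3_open_v2() follows
+** (illustrative only; the filename and the flag combination are arbitrary
+** examples):
+**
+** <blockquote><pre>
+**   sqlite3 *db = 0;
+**   int rc = sqlite3_open_v2("data.db", &db,
+**               SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE|SQLITE_OPEN_URI, 0);
+**   if( rc!=SQLITE_OK ){
+**     /* db may be non-NULL even on failure; release it with sqlite3_close() */
+**     sqlite3_close(db);
+**   }
+** </pre></blockquote>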
+**
+** <b>Note to Windows Runtime users:</b>  The temporary directory must be set
+** prior to calling sqlite3_open() or sqlite3_open_v2().  Otherwise, various
+** features that require the use of temporary files may fail.
+**
+** See also: [sqlite3_temp_directory]
+*/
+SQLITE_API int SQLITE_STDCALL sqlite3_open(
+  const char *filename,   /* Database filename (UTF-8) */
+  sqlite3 **ppDb          /* OUT: SQLite db handle */
+);
+SQLITE_API int SQLITE_STDCALL sqlite3_open16(
+  const void *filename,   /* Database filename (UTF-16) */
+  sqlite3 **ppDb          /* OUT: SQLite db handle */
+);
+SQLITE_API int SQLITE_STDCALL sqlite3_open_v2(
+  const char *filename,   /* Database filename (UTF-8) */
+  sqlite3 **ppDb,         /* OUT: SQLite db handle */
+  int flags,              /* Flags */
+  const char *zVfs        /* Name of VFS module to use */
+);
+
+/*
+** CAPI3REF: Obtain Values For URI Parameters
+**
+** These are utility routines, useful to VFS implementations, that check
+** to see if a database file was a URI that contained a specific query
+** parameter, and if so obtain the value of that query parameter.
+**
+** If F is the database filename pointer passed into the xOpen() method of
+** a VFS implementation when the flags parameter to xOpen() has one or
+** more of the [SQLITE_OPEN_URI] or [SQLITE_OPEN_MAIN_DB] bits set and
+** P is the name of the query parameter, then
+** sqlite3_uri_parameter(F,P) returns the value of the P
+** parameter if it exists or a NULL pointer if P does not appear as a
+** query parameter on F.  If P is a query parameter of F that
+** has no explicit value, then sqlite3_uri_parameter(F,P) returns
+** a pointer to an empty string.
+**
+** The sqlite3_uri_boolean(F,P,B) routine assumes that P is a boolean
+** parameter and returns true (1) or false (0) according to the value
+** of P.  The sqlite3_uri_boolean(F,P,B) routine returns true (1) if the
+** value of query parameter P is one of "yes", "true", or "on" in any
+** case or if the value begins with a non-zero number.  The
+** sqlite3_uri_boolean(F,P,B) routine returns false (0) if the value of
+** query parameter P is one of "no", "false", or "off" in any case or
+** if the value begins with a numeric zero.  If P is not a query
+** parameter on F or if the value of P does not match any of the
+** above, then sqlite3_uri_boolean(F,P,B) returns (B!=0).
+**
+** The sqlite3_uri_int64(F,P,D) routine converts the value of P into a
+** 64-bit signed integer and returns that integer, or D if P does not
+** exist.  If the value of P is something other than an integer, then
+** zero is returned.
+**
+** If F is a NULL pointer, then sqlite3_uri_parameter(F,P) returns NULL and
+** sqlite3_uri_boolean(F,P,B) returns B.  If F is not a NULL pointer and
+** is not a database file pathname pointer that SQLite passed into the xOpen
+** VFS method, then the behavior of this routine is undefined and probably
+** undesirable.
+*/
+SQLITE_API const char *SQLITE_STDCALL sqlite3_uri_parameter(const char *zFilename, const char *zParam);
+SQLITE_API int SQLITE_STDCALL sqlite3_uri_boolean(const char *zFile, const char *zParam, int bDefault);
+SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_uri_int64(const char*, const char*, sqlite3_int64);
+
+
+/*
+** CAPI3REF: Error Codes And Messages
+** METHOD: sqlite3
+**
+** ^If the most recent sqlite3_* API call associated with
+** [database connection] D failed, then the sqlite3_errcode(D) interface
+** returns the numeric [result code] or [extended result code] for that
+** API call.
+** If the most recent API call was successful, +** then the return value from sqlite3_errcode() is undefined. +** ^The sqlite3_extended_errcode() +** interface is the same except that it always returns the +** [extended result code] even when extended result codes are +** disabled. +** +** ^The sqlite3_errmsg() and sqlite3_errmsg16() return English-language +** text that describes the error, as either UTF-8 or UTF-16 respectively. +** ^(Memory to hold the error message string is managed internally. +** The application does not need to worry about freeing the result. +** However, the error string might be overwritten or deallocated by +** subsequent calls to other SQLite interface functions.)^ +** +** ^The sqlite3_errstr() interface returns the English-language text +** that describes the [result code], as UTF-8. +** ^(Memory to hold the error message string is managed internally +** and must not be freed by the application)^. +** +** When the serialized [threading mode] is in use, it might be the +** case that a second error occurs on a separate thread in between +** the time of the first error and the call to these interfaces. +** When that happens, the second error will be reported since these +** interfaces always report the most recent result. To avoid +** this, each thread can obtain exclusive use of the [database connection] D +** by invoking [sqlite3_mutex_enter]([sqlite3_db_mutex](D)) before beginning +** to use D and invoking [sqlite3_mutex_leave]([sqlite3_db_mutex](D)) after +** all calls to the interfaces listed here are completed. +** +** If an interface fails with SQLITE_MISUSE, that means the interface +** was invoked incorrectly by the application. In that case, the +** error code and message may or may not be set. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_errcode(sqlite3 *db); +SQLITE_API int SQLITE_STDCALL sqlite3_extended_errcode(sqlite3 *db); +SQLITE_API const char *SQLITE_STDCALL sqlite3_errmsg(sqlite3*); +SQLITE_API const void *SQLITE_STDCALL sqlite3_errmsg16(sqlite3*); +SQLITE_API const char *SQLITE_STDCALL sqlite3_errstr(int); + +/* +** CAPI3REF: Prepared Statement Object +** KEYWORDS: {prepared statement} {prepared statements} +** +** An instance of this object represents a single SQL statement that +** has been compiled into binary form and is ready to be evaluated. +** +** Think of each SQL statement as a separate computer program. The +** original SQL text is source code. A prepared statement object +** is the compiled object code. All SQL must be converted into a +** prepared statement before it can be run. +** +** The life-cycle of a prepared statement object usually goes like this: +** +** <ol> +** <li> Create the prepared statement object using [sqlite3_prepare_v2()]. +** <li> Bind values to [parameters] using the sqlite3_bind_*() +** interfaces. +** <li> Run the SQL by calling [sqlite3_step()] one or more times. +** <li> Reset the prepared statement using [sqlite3_reset()] then go back +** to step 2. Do this zero or more times. +** <li> Destroy the object using [sqlite3_finalize()]. +** </ol> +*/ +typedef struct sqlite3_stmt sqlite3_stmt; + +/* +** CAPI3REF: Run-time Limits +** METHOD: sqlite3 +** +** ^(This interface allows the size of various constructs to be limited +** on a connection by connection basis. The first parameter is the +** [database connection] whose limit is to be set or queried. The +** second parameter is one of the [limit categories] that define a +** class of constructs to be size limited. 
The third parameter is the +** new limit for that construct.)^ +** +** ^If the new limit is a negative number, the limit is unchanged. +** ^(For each limit category SQLITE_LIMIT_<i>NAME</i> there is a +** [limits | hard upper bound] +** set at compile-time by a C preprocessor macro called +** [limits | SQLITE_MAX_<i>NAME</i>]. +** (The "_LIMIT_" in the name is changed to "_MAX_".))^ +** ^Attempts to increase a limit above its hard upper bound are +** silently truncated to the hard upper bound. +** +** ^Regardless of whether or not the limit was changed, the +** [sqlite3_limit()] interface returns the prior value of the limit. +** ^Hence, to find the current value of a limit without changing it, +** simply invoke this interface with the third parameter set to -1. +** +** Run-time limits are intended for use in applications that manage +** both their own internal database and also databases that are controlled +** by untrusted external sources. An example application might be a +** web browser that has its own databases for storing history and +** separate databases controlled by JavaScript applications downloaded +** off the Internet. The internal databases can be given the +** large, default limits. Databases managed by external sources can +** be given much smaller limits designed to prevent a denial of service +** attack. Developers might also want to use the [sqlite3_set_authorizer()] +** interface to further control untrusted SQL. The size of the database +** created by an untrusted script can be contained using the +** [max_page_count] [PRAGMA]. +** +** New run-time limit categories may be added in future releases. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_limit(sqlite3*, int id, int newVal); + +/* +** CAPI3REF: Run-Time Limit Categories +** KEYWORDS: {limit category} {*limit categories} +** +** These constants define various performance limits +** that can be lowered at run-time using [sqlite3_limit()]. +** The synopsis of the meanings of the various limits is shown below. +** Additional information is available at [limits | Limits in SQLite]. +** +** <dl> +** [[SQLITE_LIMIT_LENGTH]] ^(<dt>SQLITE_LIMIT_LENGTH</dt> +** <dd>The maximum size of any string or BLOB or table row, in bytes.<dd>)^ +** +** [[SQLITE_LIMIT_SQL_LENGTH]] ^(<dt>SQLITE_LIMIT_SQL_LENGTH</dt> +** <dd>The maximum length of an SQL statement, in bytes.</dd>)^ +** +** [[SQLITE_LIMIT_COLUMN]] ^(<dt>SQLITE_LIMIT_COLUMN</dt> +** <dd>The maximum number of columns in a table definition or in the +** result set of a [SELECT] or the maximum number of columns in an index +** or in an ORDER BY or GROUP BY clause.</dd>)^ +** +** [[SQLITE_LIMIT_EXPR_DEPTH]] ^(<dt>SQLITE_LIMIT_EXPR_DEPTH</dt> +** <dd>The maximum depth of the parse tree on any expression.</dd>)^ +** +** [[SQLITE_LIMIT_COMPOUND_SELECT]] ^(<dt>SQLITE_LIMIT_COMPOUND_SELECT</dt> +** <dd>The maximum number of terms in a compound SELECT statement.</dd>)^ +** +** [[SQLITE_LIMIT_VDBE_OP]] ^(<dt>SQLITE_LIMIT_VDBE_OP</dt> +** <dd>The maximum number of instructions in a virtual machine program +** used to implement an SQL statement. 
This limit is not currently +** enforced, though that might be added in some future release of +** SQLite.</dd>)^ +** +** [[SQLITE_LIMIT_FUNCTION_ARG]] ^(<dt>SQLITE_LIMIT_FUNCTION_ARG</dt> +** <dd>The maximum number of arguments on a function.</dd>)^ +** +** [[SQLITE_LIMIT_ATTACHED]] ^(<dt>SQLITE_LIMIT_ATTACHED</dt> +** <dd>The maximum number of [ATTACH | attached databases].)^</dd> +** +** [[SQLITE_LIMIT_LIKE_PATTERN_LENGTH]] +** ^(<dt>SQLITE_LIMIT_LIKE_PATTERN_LENGTH</dt> +** <dd>The maximum length of the pattern argument to the [LIKE] or +** [GLOB] operators.</dd>)^ +** +** [[SQLITE_LIMIT_VARIABLE_NUMBER]] +** ^(<dt>SQLITE_LIMIT_VARIABLE_NUMBER</dt> +** <dd>The maximum index number of any [parameter] in an SQL statement.)^ +** +** [[SQLITE_LIMIT_TRIGGER_DEPTH]] ^(<dt>SQLITE_LIMIT_TRIGGER_DEPTH</dt> +** <dd>The maximum depth of recursion for triggers.</dd>)^ +** +** [[SQLITE_LIMIT_WORKER_THREADS]] ^(<dt>SQLITE_LIMIT_WORKER_THREADS</dt> +** <dd>The maximum number of auxiliary worker threads that a single +** [prepared statement] may start.</dd>)^ +** </dl> +*/ +#define SQLITE_LIMIT_LENGTH 0 +#define SQLITE_LIMIT_SQL_LENGTH 1 +#define SQLITE_LIMIT_COLUMN 2 +#define SQLITE_LIMIT_EXPR_DEPTH 3 +#define SQLITE_LIMIT_COMPOUND_SELECT 4 +#define SQLITE_LIMIT_VDBE_OP 5 +#define SQLITE_LIMIT_FUNCTION_ARG 6 +#define SQLITE_LIMIT_ATTACHED 7 +#define SQLITE_LIMIT_LIKE_PATTERN_LENGTH 8 +#define SQLITE_LIMIT_VARIABLE_NUMBER 9 +#define SQLITE_LIMIT_TRIGGER_DEPTH 10 +#define SQLITE_LIMIT_WORKER_THREADS 11 + +/* +** CAPI3REF: Compiling An SQL Statement +** KEYWORDS: {SQL statement compiler} +** METHOD: sqlite3 +** CONSTRUCTOR: sqlite3_stmt +** +** To execute an SQL query, it must first be compiled into a byte-code +** program using one of these routines. +** +** The first argument, "db", is a [database connection] obtained from a +** prior successful call to [sqlite3_open()], [sqlite3_open_v2()] or +** [sqlite3_open16()]. The database connection must not have been closed. +** +** The second argument, "zSql", is the statement to be compiled, encoded +** as either UTF-8 or UTF-16. The sqlite3_prepare() and sqlite3_prepare_v2() +** interfaces use UTF-8, and sqlite3_prepare16() and sqlite3_prepare16_v2() +** use UTF-16. +** +** ^If the nByte argument is negative, then zSql is read up to the +** first zero terminator. ^If nByte is positive, then it is the +** number of bytes read from zSql. ^If nByte is zero, then no prepared +** statement is generated. +** If the caller knows that the supplied string is nul-terminated, then +** there is a small performance advantage to passing an nByte parameter that +** is the number of bytes in the input string <i>including</i> +** the nul-terminator. +** +** ^If pzTail is not NULL then *pzTail is made to point to the first byte +** past the end of the first SQL statement in zSql. These routines only +** compile the first statement in zSql, so *pzTail is left pointing to +** what remains uncompiled. +** +** ^*ppStmt is left pointing to a compiled [prepared statement] that can be +** executed using [sqlite3_step()]. ^If there is an error, *ppStmt is set +** to NULL. ^If the input text contains no SQL (if the input is an empty +** string or a comment) then *ppStmt is set to NULL. +** The calling procedure is responsible for deleting the compiled +** SQL statement using [sqlite3_finalize()] after it has finished with it. +** ppStmt may not be NULL. +** +** ^On success, the sqlite3_prepare() family of routines return [SQLITE_OK]; +** otherwise an [error code] is returned. 
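+**
+** A minimal preparation sketch follows (illustrative only; the SQL text is
+** an arbitrary example and error handling is reduced to a single check):
+**
+** <blockquote><pre>
+**   const char *zSql = "SELECT 1";
+**   sqlite3_stmt *pStmt = 0;
+**   const char *zTail = 0;
+**   int rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, &zTail);
+**   if( rc!=SQLITE_OK ){ /* consult sqlite3_errmsg(db) */ }
+** </pre></blockquote>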
+** +** The sqlite3_prepare_v2() and sqlite3_prepare16_v2() interfaces are +** recommended for all new programs. The two older interfaces are retained +** for backwards compatibility, but their use is discouraged. +** ^In the "v2" interfaces, the prepared statement +** that is returned (the [sqlite3_stmt] object) contains a copy of the +** original SQL text. This causes the [sqlite3_step()] interface to +** behave differently in three ways: +** +** <ol> +** <li> +** ^If the database schema changes, instead of returning [SQLITE_SCHEMA] as it +** always used to do, [sqlite3_step()] will automatically recompile the SQL +** statement and try to run it again. As many as [SQLITE_MAX_SCHEMA_RETRY] +** retries will occur before sqlite3_step() gives up and returns an error. +** </li> +** +** <li> +** ^When an error occurs, [sqlite3_step()] will return one of the detailed +** [error codes] or [extended error codes]. ^The legacy behavior was that +** [sqlite3_step()] would only return a generic [SQLITE_ERROR] result code +** and the application would have to make a second call to [sqlite3_reset()] +** in order to find the underlying cause of the problem. With the "v2" prepare +** interfaces, the underlying reason for the error is returned immediately. +** </li> +** +** <li> +** ^If the specific value bound to [parameter | host parameter] in the +** WHERE clause might influence the choice of query plan for a statement, +** then the statement will be automatically recompiled, as if there had been +** a schema change, on the first [sqlite3_step()] call following any change +** to the [sqlite3_bind_text | bindings] of that [parameter]. +** ^The specific value of WHERE-clause [parameter] might influence the +** choice of query plan if the parameter is the left-hand side of a [LIKE] +** or [GLOB] operator or if the parameter is compared to an indexed column +** and the [SQLITE_ENABLE_STAT3] compile-time option is enabled. +** </li> +** </ol> +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_prepare( + sqlite3 *db, /* Database handle */ + const char *zSql, /* SQL statement, UTF-8 encoded */ + int nByte, /* Maximum length of zSql in bytes. */ + sqlite3_stmt **ppStmt, /* OUT: Statement handle */ + const char **pzTail /* OUT: Pointer to unused portion of zSql */ +); +SQLITE_API int SQLITE_STDCALL sqlite3_prepare_v2( + sqlite3 *db, /* Database handle */ + const char *zSql, /* SQL statement, UTF-8 encoded */ + int nByte, /* Maximum length of zSql in bytes. */ + sqlite3_stmt **ppStmt, /* OUT: Statement handle */ + const char **pzTail /* OUT: Pointer to unused portion of zSql */ +); +SQLITE_API int SQLITE_STDCALL sqlite3_prepare16( + sqlite3 *db, /* Database handle */ + const void *zSql, /* SQL statement, UTF-16 encoded */ + int nByte, /* Maximum length of zSql in bytes. */ + sqlite3_stmt **ppStmt, /* OUT: Statement handle */ + const void **pzTail /* OUT: Pointer to unused portion of zSql */ +); +SQLITE_API int SQLITE_STDCALL sqlite3_prepare16_v2( + sqlite3 *db, /* Database handle */ + const void *zSql, /* SQL statement, UTF-16 encoded */ + int nByte, /* Maximum length of zSql in bytes. */ + sqlite3_stmt **ppStmt, /* OUT: Statement handle */ + const void **pzTail /* OUT: Pointer to unused portion of zSql */ +); + +/* +** CAPI3REF: Retrieving Statement SQL +** METHOD: sqlite3_stmt +** +** ^This interface can be used to retrieve a saved copy of the original +** SQL text used to create a [prepared statement] if that statement was +** compiled using either [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()]. 
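+**
+** A minimal sketch follows (illustrative only), assuming pStmt was created
+** with sqlite3_prepare_v2():
+**
+** <blockquote><pre>
+**   const char *zOriginal = sqlite3_sql(pStmt);  /* saved SQL text */
+** </pre></blockquote>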
+*/ +SQLITE_API const char *SQLITE_STDCALL sqlite3_sql(sqlite3_stmt *pStmt); + +/* +** CAPI3REF: Determine If An SQL Statement Writes The Database +** METHOD: sqlite3_stmt +** +** ^The sqlite3_stmt_readonly(X) interface returns true (non-zero) if +** and only if the [prepared statement] X makes no direct changes to +** the content of the database file. +** +** Note that [application-defined SQL functions] or +** [virtual tables] might change the database indirectly as a side effect. +** ^(For example, if an application defines a function "eval()" that +** calls [sqlite3_exec()], then the following SQL statement would +** change the database file through side-effects: +** +** <blockquote><pre> +** SELECT eval('DELETE FROM t1') FROM t2; +** </pre></blockquote> +** +** But because the [SELECT] statement does not change the database file +** directly, sqlite3_stmt_readonly() would still return true.)^ +** +** ^Transaction control statements such as [BEGIN], [COMMIT], [ROLLBACK], +** [SAVEPOINT], and [RELEASE] cause sqlite3_stmt_readonly() to return true, +** since the statements themselves do not actually modify the database but +** rather they control the timing of when other statements modify the +** database. ^The [ATTACH] and [DETACH] statements also cause +** sqlite3_stmt_readonly() to return true since, while those statements +** change the configuration of a database connection, they do not make +** changes to the content of the database files on disk. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_stmt_readonly(sqlite3_stmt *pStmt); + +/* +** CAPI3REF: Determine If A Prepared Statement Has Been Reset +** METHOD: sqlite3_stmt +** +** ^The sqlite3_stmt_busy(S) interface returns true (non-zero) if the +** [prepared statement] S has been stepped at least once using +** [sqlite3_step(S)] but has neither run to completion (returned +** [SQLITE_DONE] from [sqlite3_step(S)]) nor +** been reset using [sqlite3_reset(S)]. ^The sqlite3_stmt_busy(S) +** interface returns false if S is a NULL pointer. If S is not a +** NULL pointer and is not a pointer to a valid [prepared statement] +** object, then the behavior is undefined and probably undesirable. +** +** This interface can be used in combination [sqlite3_next_stmt()] +** to locate all prepared statements associated with a database +** connection that are in need of being reset. This can be used, +** for example, in diagnostic routines to search for prepared +** statements that are holding a transaction open. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_stmt_busy(sqlite3_stmt*); + +/* +** CAPI3REF: Dynamically Typed Value Object +** KEYWORDS: {protected sqlite3_value} {unprotected sqlite3_value} +** +** SQLite uses the sqlite3_value object to represent all values +** that can be stored in a database table. SQLite uses dynamic typing +** for the values it stores. ^Values stored in sqlite3_value objects +** can be integers, floating point values, strings, BLOBs, or NULL. +** +** An sqlite3_value object may be either "protected" or "unprotected". +** Some interfaces require a protected sqlite3_value. Other interfaces +** will accept either a protected or an unprotected sqlite3_value. +** Every interface that accepts sqlite3_value arguments specifies +** whether or not it requires a protected sqlite3_value. The +** [sqlite3_value_dup()] interface can be used to construct a new +** protected sqlite3_value from an unprotected sqlite3_value. +** +** The terms "protected" and "unprotected" refer to whether or not +** a mutex is held. 
An internal mutex is held for a protected +** sqlite3_value object but no mutex is held for an unprotected +** sqlite3_value object. If SQLite is compiled to be single-threaded +** (with [SQLITE_THREADSAFE=0] and with [sqlite3_threadsafe()] returning 0) +** or if SQLite is run in one of reduced mutex modes +** [SQLITE_CONFIG_SINGLETHREAD] or [SQLITE_CONFIG_MULTITHREAD] +** then there is no distinction between protected and unprotected +** sqlite3_value objects and they can be used interchangeably. However, +** for maximum code portability it is recommended that applications +** still make the distinction between protected and unprotected +** sqlite3_value objects even when not strictly required. +** +** ^The sqlite3_value objects that are passed as parameters into the +** implementation of [application-defined SQL functions] are protected. +** ^The sqlite3_value object returned by +** [sqlite3_column_value()] is unprotected. +** Unprotected sqlite3_value objects may only be used with +** [sqlite3_result_value()] and [sqlite3_bind_value()]. +** The [sqlite3_value_blob | sqlite3_value_type()] family of +** interfaces require protected sqlite3_value objects. +*/ +typedef struct Mem sqlite3_value; + +/* +** CAPI3REF: SQL Function Context Object +** +** The context in which an SQL function executes is stored in an +** sqlite3_context object. ^A pointer to an sqlite3_context object +** is always first parameter to [application-defined SQL functions]. +** The application-defined SQL function implementation will pass this +** pointer through into calls to [sqlite3_result_int | sqlite3_result()], +** [sqlite3_aggregate_context()], [sqlite3_user_data()], +** [sqlite3_context_db_handle()], [sqlite3_get_auxdata()], +** and/or [sqlite3_set_auxdata()]. +*/ +typedef struct sqlite3_context sqlite3_context; + +/* +** CAPI3REF: Binding Values To Prepared Statements +** KEYWORDS: {host parameter} {host parameters} {host parameter name} +** KEYWORDS: {SQL parameter} {SQL parameters} {parameter binding} +** METHOD: sqlite3_stmt +** +** ^(In the SQL statement text input to [sqlite3_prepare_v2()] and its variants, +** literals may be replaced by a [parameter] that matches one of following +** templates: +** +** <ul> +** <li> ? +** <li> ?NNN +** <li> :VVV +** <li> @VVV +** <li> $VVV +** </ul> +** +** In the templates above, NNN represents an integer literal, +** and VVV represents an alphanumeric identifier.)^ ^The values of these +** parameters (also called "host parameter names" or "SQL parameters") +** can be set using the sqlite3_bind_*() routines defined here. +** +** ^The first argument to the sqlite3_bind_*() routines is always +** a pointer to the [sqlite3_stmt] object returned from +** [sqlite3_prepare_v2()] or its variants. +** +** ^The second argument is the index of the SQL parameter to be set. +** ^The leftmost SQL parameter has an index of 1. ^When the same named +** SQL parameter is used more than once, second and subsequent +** occurrences have the same index as the first occurrence. +** ^The index for named parameters can be looked up using the +** [sqlite3_bind_parameter_index()] API if desired. ^The index +** for "?NNN" parameters is the value of NNN. +** ^The NNN value must be between 1 and the [sqlite3_limit()] +** parameter [SQLITE_LIMIT_VARIABLE_NUMBER] (default value: 999). +** +** ^The third argument is the value to bind to the parameter. 
+** ^If the third parameter to sqlite3_bind_text() or sqlite3_bind_text16() +** or sqlite3_bind_blob() is a NULL pointer then the fourth parameter +** is ignored and the end result is the same as sqlite3_bind_null(). +** +** ^(In those routines that have a fourth argument, its value is the +** number of bytes in the parameter. To be clear: the value is the +** number of <u>bytes</u> in the value, not the number of characters.)^ +** ^If the fourth parameter to sqlite3_bind_text() or sqlite3_bind_text16() +** is negative, then the length of the string is +** the number of bytes up to the first zero terminator. +** If the fourth parameter to sqlite3_bind_blob() is negative, then +** the behavior is undefined. +** If a non-negative fourth parameter is provided to sqlite3_bind_text() +** or sqlite3_bind_text16() or sqlite3_bind_text64() then +** that parameter must be the byte offset +** where the NUL terminator would occur assuming the string were NUL +** terminated. If any NUL characters occur at byte offsets less than +** the value of the fourth parameter then the resulting string value will +** contain embedded NULs. The result of expressions involving strings +** with embedded NULs is undefined. +** +** ^The fifth argument to the BLOB and string binding interfaces +** is a destructor used to dispose of the BLOB or +** string after SQLite has finished with it. ^The destructor is called +** to dispose of the BLOB or string even if the call to bind API fails. +** ^If the fifth argument is +** the special value [SQLITE_STATIC], then SQLite assumes that the +** information is in static, unmanaged space and does not need to be freed. +** ^If the fifth argument has the value [SQLITE_TRANSIENT], then +** SQLite makes its own private copy of the data immediately, before +** the sqlite3_bind_*() routine returns. +** +** ^The sixth argument to sqlite3_bind_text64() must be one of +** [SQLITE_UTF8], [SQLITE_UTF16], [SQLITE_UTF16BE], or [SQLITE_UTF16LE] +** to specify the encoding of the text in the third parameter. If +** the sixth argument to sqlite3_bind_text64() is not one of the +** allowed values shown above, or if the text encoding is different +** from the encoding specified by the sixth parameter, then the behavior +** is undefined. +** +** ^The sqlite3_bind_zeroblob() routine binds a BLOB of length N that +** is filled with zeroes. ^A zeroblob uses a fixed amount of memory +** (just an integer to hold its size) while it is being processed. +** Zeroblobs are intended to serve as placeholders for BLOBs whose +** content is later written using +** [sqlite3_blob_open | incremental BLOB I/O] routines. +** ^A negative value for the zeroblob results in a zero-length BLOB. +** +** ^If any of the sqlite3_bind_*() routines are called with a NULL pointer +** for the [prepared statement] or with a prepared statement for which +** [sqlite3_step()] has been called more recently than [sqlite3_reset()], +** then the call will return [SQLITE_MISUSE]. If any sqlite3_bind_() +** routine is passed a [prepared statement] that has been finalized, the +** result is undefined and probably harmful. +** +** ^Bindings are not cleared by the [sqlite3_reset()] routine. +** ^Unbound parameters are interpreted as NULL. +** +** ^The sqlite3_bind_* routines return [SQLITE_OK] on success or an +** [error code] if anything goes wrong. +** ^[SQLITE_TOOBIG] might be returned if the size of a string or BLOB +** exceeds limits imposed by [sqlite3_limit]([SQLITE_LIMIT_LENGTH]) or +** [SQLITE_MAX_LENGTH]. 
+** ^[SQLITE_RANGE] is returned if the parameter +** index is out of range. ^[SQLITE_NOMEM] is returned if malloc() fails. +** +** See also: [sqlite3_bind_parameter_count()], +** [sqlite3_bind_parameter_name()], and [sqlite3_bind_parameter_index()]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_bind_blob(sqlite3_stmt*, int, const void*, int n, void(*)(void*)); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_blob64(sqlite3_stmt*, int, const void*, sqlite3_uint64, + void(*)(void*)); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_double(sqlite3_stmt*, int, double); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_int(sqlite3_stmt*, int, int); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_int64(sqlite3_stmt*, int, sqlite3_int64); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_null(sqlite3_stmt*, int); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_text(sqlite3_stmt*,int,const char*,int,void(*)(void*)); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_text16(sqlite3_stmt*, int, const void*, int, void(*)(void*)); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_text64(sqlite3_stmt*, int, const char*, sqlite3_uint64, + void(*)(void*), unsigned char encoding); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_value(sqlite3_stmt*, int, const sqlite3_value*); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_zeroblob(sqlite3_stmt*, int, int n); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_zeroblob64(sqlite3_stmt*, int, sqlite3_uint64); + +/* +** CAPI3REF: Number Of SQL Parameters +** METHOD: sqlite3_stmt +** +** ^This routine can be used to find the number of [SQL parameters] +** in a [prepared statement]. SQL parameters are tokens of the +** form "?", "?NNN", ":AAA", "$AAA", or "@AAA" that serve as +** placeholders for values that are [sqlite3_bind_blob | bound] +** to the parameters at a later time. +** +** ^(This routine actually returns the index of the largest (rightmost) +** parameter. For all forms except ?NNN, this will correspond to the +** number of unique parameters. If parameters of the ?NNN form are used, +** there may be gaps in the list.)^ +** +** See also: [sqlite3_bind_blob|sqlite3_bind()], +** [sqlite3_bind_parameter_name()], and +** [sqlite3_bind_parameter_index()]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_bind_parameter_count(sqlite3_stmt*); + +/* +** CAPI3REF: Name Of A Host Parameter +** METHOD: sqlite3_stmt +** +** ^The sqlite3_bind_parameter_name(P,N) interface returns +** the name of the N-th [SQL parameter] in the [prepared statement] P. +** ^(SQL parameters of the form "?NNN" or ":AAA" or "@AAA" or "$AAA" +** have a name which is the string "?NNN" or ":AAA" or "@AAA" or "$AAA" +** respectively. +** In other words, the initial ":" or "$" or "@" or "?" +** is included as part of the name.)^ +** ^Parameters of the form "?" without a following integer have no name +** and are referred to as "nameless" or "anonymous parameters". +** +** ^The first host parameter has an index of 1, not 0. +** +** ^If the value N is out of range or if the N-th parameter is +** nameless, then NULL is returned. ^The returned string is +** always in UTF-8 encoding even if the named parameter was +** originally specified as UTF-16 in [sqlite3_prepare16()] or +** [sqlite3_prepare16_v2()]. +** +** See also: [sqlite3_bind_blob|sqlite3_bind()], +** [sqlite3_bind_parameter_count()], and +** [sqlite3_bind_parameter_index()]. 
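+**
+** A minimal binding sketch follows (illustrative only; it assumes pStmt was
+** prepared from the arbitrary example "INSERT INTO t1(a,b) VALUES(?1,:name)"):
+**
+** <blockquote><pre>
+**   sqlite3_bind_int(pStmt, 1, 42);
+**   int i = sqlite3_bind_parameter_index(pStmt, ":name");
+**   sqlite3_bind_text(pStmt, i, "Alice", -1, SQLITE_TRANSIENT);
+** </pre></blockquote>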
+*/ +SQLITE_API const char *SQLITE_STDCALL sqlite3_bind_parameter_name(sqlite3_stmt*, int); + +/* +** CAPI3REF: Index Of A Parameter With A Given Name +** METHOD: sqlite3_stmt +** +** ^Return the index of an SQL parameter given its name. ^The +** index value returned is suitable for use as the second +** parameter to [sqlite3_bind_blob|sqlite3_bind()]. ^A zero +** is returned if no matching parameter is found. ^The parameter +** name must be given in UTF-8 even if the original statement +** was prepared from UTF-16 text using [sqlite3_prepare16_v2()]. +** +** See also: [sqlite3_bind_blob|sqlite3_bind()], +** [sqlite3_bind_parameter_count()], and +** [sqlite3_bind_parameter_name()]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_bind_parameter_index(sqlite3_stmt*, const char *zName); + +/* +** CAPI3REF: Reset All Bindings On A Prepared Statement +** METHOD: sqlite3_stmt +** +** ^Contrary to the intuition of many, [sqlite3_reset()] does not reset +** the [sqlite3_bind_blob | bindings] on a [prepared statement]. +** ^Use this routine to reset all host parameters to NULL. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_clear_bindings(sqlite3_stmt*); + +/* +** CAPI3REF: Number Of Columns In A Result Set +** METHOD: sqlite3_stmt +** +** ^Return the number of columns in the result set returned by the +** [prepared statement]. ^This routine returns 0 if pStmt is an SQL +** statement that does not return data (for example an [UPDATE]). +** +** See also: [sqlite3_data_count()] +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_column_count(sqlite3_stmt *pStmt); + +/* +** CAPI3REF: Column Names In A Result Set +** METHOD: sqlite3_stmt +** +** ^These routines return the name assigned to a particular column +** in the result set of a [SELECT] statement. ^The sqlite3_column_name() +** interface returns a pointer to a zero-terminated UTF-8 string +** and sqlite3_column_name16() returns a pointer to a zero-terminated +** UTF-16 string. ^The first parameter is the [prepared statement] +** that implements the [SELECT] statement. ^The second parameter is the +** column number. ^The leftmost column is number 0. +** +** ^The returned string pointer is valid until either the [prepared statement] +** is destroyed by [sqlite3_finalize()] or until the statement is automatically +** reprepared by the first call to [sqlite3_step()] for a particular run +** or until the next call to +** sqlite3_column_name() or sqlite3_column_name16() on the same column. +** +** ^If sqlite3_malloc() fails during the processing of either routine +** (for example during a conversion from UTF-8 to UTF-16) then a +** NULL pointer is returned. +** +** ^The name of a result column is the value of the "AS" clause for +** that column, if there is an AS clause. If there is no AS clause +** then the name of the column is unspecified and may change from +** one release of SQLite to the next. +*/ +SQLITE_API const char *SQLITE_STDCALL sqlite3_column_name(sqlite3_stmt*, int N); +SQLITE_API const void *SQLITE_STDCALL sqlite3_column_name16(sqlite3_stmt*, int N); + +/* +** CAPI3REF: Source Of Data In A Query Result +** METHOD: sqlite3_stmt +** +** ^These routines provide a means to determine the database, table, and +** table column that is the origin of a particular result column in +** [SELECT] statement. +** ^The name of the database or table or column can be returned as +** either a UTF-8 or UTF-16 string. ^The _database_ routines return +** the database name, the _table_ routines return the table name, and +** the origin_ routines return the column name. 
+** ^The returned string is valid until the [prepared statement] is destroyed
+** using [sqlite3_finalize()] or until the statement is automatically
+** reprepared by the first call to [sqlite3_step()] for a particular run
+** or until the same information is requested
+** again in a different encoding.
+**
+** ^The names returned are the original un-aliased names of the
+** database, table, and column.
+**
+** ^The first argument to these interfaces is a [prepared statement].
+** ^These functions return information about the Nth result column returned by
+** the statement, where N is the second function argument.
+** ^The left-most column is column 0 for these routines.
+**
+** ^If the Nth column returned by the statement is an expression or
+** subquery and is not a column value, then all of these functions return
+** NULL. ^These routines might also return NULL if a memory allocation error
+** occurs. ^Otherwise, they return the name of the attached database, table,
+** or column that the query result column was extracted from.
+**
+** ^As with all other SQLite APIs, those whose names end with "16" return
+** UTF-16 encoded strings and the other functions return UTF-8.
+**
+** ^These APIs are only available if the library was compiled with the
+** [SQLITE_ENABLE_COLUMN_METADATA] C-preprocessor symbol.
+**
+** If two or more threads call one or more
+** [sqlite3_column_database_name | column metadata interfaces]
+** for the same [prepared statement] and result column
+** at the same time then the results are undefined.
+*/
+SQLITE_API const char *SQLITE_STDCALL sqlite3_column_database_name(sqlite3_stmt*,int);
+SQLITE_API const void *SQLITE_STDCALL sqlite3_column_database_name16(sqlite3_stmt*,int);
+SQLITE_API const char *SQLITE_STDCALL sqlite3_column_table_name(sqlite3_stmt*,int);
+SQLITE_API const void *SQLITE_STDCALL sqlite3_column_table_name16(sqlite3_stmt*,int);
+SQLITE_API const char *SQLITE_STDCALL sqlite3_column_origin_name(sqlite3_stmt*,int);
+SQLITE_API const void *SQLITE_STDCALL sqlite3_column_origin_name16(sqlite3_stmt*,int);
+
+/*
+** CAPI3REF: Declared Datatype Of A Query Result
+** METHOD: sqlite3_stmt
+**
+** ^(The first parameter is a [prepared statement].
+** If this statement is a [SELECT] statement and the Nth column of the
+** returned result set of that [SELECT] is a table column (not an
+** expression or subquery) then the declared type of the table
+** column is returned.)^ ^If the Nth column of the result set is an
+** expression or subquery, then a NULL pointer is returned.
+** ^The returned string is always UTF-8 encoded.
+**
+** ^(For example, given the database schema:
+**
+** CREATE TABLE t1(c1 VARIANT);
+**
+** and the following statement to be compiled:
+**
+** SELECT c1 + 1, c1 FROM t1;
+**
+** this routine would return the string "VARIANT" for the second result
+** column (i==1), and a NULL pointer for the first result column (i==0).)^
+**
+** ^SQLite uses dynamic run-time typing. ^So just because a column
+** is declared to contain a particular type does not mean that the
+** data stored in that column is of the declared type. SQLite is
+** strongly typed, but the typing is dynamic not static. ^Type
+** is associated with individual values, not with the containers
+** used to hold those values.
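+**
+** As an illustrative sketch of the example above, assuming pStmt is the
+** [prepared statement] compiled from "SELECT c1 + 1, c1 FROM t1":
+**
+** <blockquote><pre>
+** const char *zDecl = sqlite3_column_decltype(pStmt, 1);
+** </pre></blockquote>
+**
+** Under that assumption zDecl would point to the string "VARIANT", while
+** sqlite3_column_decltype(pStmt, 0) would return a NULL pointer because
+** the first result column is an expression.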
+*/
+SQLITE_API const char *SQLITE_STDCALL sqlite3_column_decltype(sqlite3_stmt*,int);
+SQLITE_API const void *SQLITE_STDCALL sqlite3_column_decltype16(sqlite3_stmt*,int);
+
+/*
+** CAPI3REF: Evaluate An SQL Statement
+** METHOD: sqlite3_stmt
+**
+** After a [prepared statement] has been prepared using either
+** [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()] or one of the legacy
+** interfaces [sqlite3_prepare()] or [sqlite3_prepare16()], this function
+** must be called one or more times to evaluate the statement.
+**
+** The details of the behavior of the sqlite3_step() interface depend
+** on whether the statement was prepared using the newer "v2" interface
+** [sqlite3_prepare_v2()] and [sqlite3_prepare16_v2()] or the older legacy
+** interface [sqlite3_prepare()] and [sqlite3_prepare16()]. The use of the
+** new "v2" interface is recommended for new applications but the legacy
+** interface will continue to be supported.
+**
+** ^In the legacy interface, the return value will be either [SQLITE_BUSY],
+** [SQLITE_DONE], [SQLITE_ROW], [SQLITE_ERROR], or [SQLITE_MISUSE].
+** ^With the "v2" interface, any of the other [result codes] or
+** [extended result codes] might be returned as well.
+**
+** ^[SQLITE_BUSY] means that the database engine was unable to acquire the
+** database locks it needs to do its job. ^If the statement is a [COMMIT]
+** or occurs outside of an explicit transaction, then you can retry the
+** statement. If the statement is not a [COMMIT] and occurs within an
+** explicit transaction then you should rollback the transaction before
+** continuing.
+**
+** ^[SQLITE_DONE] means that the statement has finished executing
+** successfully. sqlite3_step() should not be called again on this virtual
+** machine without first calling [sqlite3_reset()] to reset the virtual
+** machine back to its initial state.
+**
+** ^If the SQL statement being executed returns any data, then [SQLITE_ROW]
+** is returned each time a new row of data is ready for processing by the
+** caller. The values may be accessed using the [column access functions].
+** sqlite3_step() is called again to retrieve the next row of data.
+**
+** ^[SQLITE_ERROR] means that a run-time error (such as a constraint
+** violation) has occurred. sqlite3_step() should not be called again on
+** the VM. More information may be found by calling [sqlite3_errmsg()].
+** ^With the legacy interface, a more specific error code (for example,
+** [SQLITE_INTERRUPT], [SQLITE_SCHEMA], [SQLITE_CORRUPT], and so forth)
+** can be obtained by calling [sqlite3_reset()] on the
+** [prepared statement]. ^In the "v2" interface,
+** the more specific error code is returned directly by sqlite3_step().
+**
+** [SQLITE_MISUSE] means that this routine was called inappropriately.
+** Perhaps it was called on a [prepared statement] that has
+** already been [sqlite3_finalize | finalized] or on one that had
+** previously returned [SQLITE_ERROR] or [SQLITE_DONE]. Or it could
+** be the case that the same database connection is being used by two or
+** more threads at the same moment in time.
+**
+** For all versions of SQLite up to and including 3.6.23.1, a call to
+** [sqlite3_reset()] was required after sqlite3_step() returned anything
+** other than [SQLITE_ROW] before any subsequent invocation of
+** sqlite3_step(). Failure to reset the prepared statement using
+** [sqlite3_reset()] would result in an [SQLITE_MISUSE] return from
+** sqlite3_step().
But after version 3.6.23.1, sqlite3_step() began
+** calling [sqlite3_reset()] automatically in this circumstance rather
+** than returning [SQLITE_MISUSE]. This is not considered a compatibility
+** break because any application that ever receives an SQLITE_MISUSE error
+** is broken by definition. The [SQLITE_OMIT_AUTORESET] compile-time option
+** can be used to restore the legacy behavior.
+**
+** <b>Goofy Interface Alert:</b> In the legacy interface, the sqlite3_step()
+** API always returns a generic error code, [SQLITE_ERROR], following any
+** error other than [SQLITE_BUSY] and [SQLITE_MISUSE]. You must call
+** [sqlite3_reset()] or [sqlite3_finalize()] in order to find one of the
+** specific [error codes] that better describes the error.
+** We admit that this is a goofy design. The problem has been fixed
+** with the "v2" interface. If you prepare all of your SQL statements
+** using either [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()] instead
+** of the legacy [sqlite3_prepare()] and [sqlite3_prepare16()] interfaces,
+** then the more specific [error codes] are returned directly
+** by sqlite3_step(). The use of the "v2" interface is recommended.
+*/
+SQLITE_API int SQLITE_STDCALL sqlite3_step(sqlite3_stmt*);
+
+/*
+** CAPI3REF: Number of columns in a result set
+** METHOD: sqlite3_stmt
+**
+** ^The sqlite3_data_count(P) interface returns the number of columns in the
+** current row of the result set of [prepared statement] P.
+** ^If prepared statement P does not have results ready to return
+** (via calls to the [sqlite3_column_int | sqlite3_column_*()] family of
+** interfaces) then sqlite3_data_count(P) returns 0.
+** ^The sqlite3_data_count(P) routine also returns 0 if P is a NULL pointer.
+** ^The sqlite3_data_count(P) routine returns 0 if the previous call to
+** [sqlite3_step](P) returned [SQLITE_DONE]. ^The sqlite3_data_count(P)
+** will return non-zero if the previous call to [sqlite3_step](P) returned
+** [SQLITE_ROW], except in the case of the [PRAGMA incremental_vacuum]
+** where it always returns zero since each step of that multi-step
+** pragma returns 0 columns of data.
+**
+** See also: [sqlite3_column_count()]
+*/
+SQLITE_API int SQLITE_STDCALL sqlite3_data_count(sqlite3_stmt *pStmt);
+
+/*
+** CAPI3REF: Fundamental Datatypes
+** KEYWORDS: SQLITE_TEXT
+**
+** ^(Every value in SQLite has one of five fundamental datatypes:
+**
+** <ul>
+** <li> 64-bit signed integer
+** <li> 64-bit IEEE floating point number
+** <li> string
+** <li> BLOB
+** <li> NULL
+** </ul>)^
+**
+** These constants are codes for each of those types.
+**
+** Note that the SQLITE_TEXT constant was also used in SQLite version 2
+** for a completely different meaning. Software that links against both
+** SQLite version 2 and SQLite version 3 should use SQLITE3_TEXT, not
+** SQLITE_TEXT.
+*/
+#define SQLITE_INTEGER  1
+#define SQLITE_FLOAT    2
+#define SQLITE_BLOB     4
+#define SQLITE_NULL     5
+#ifdef SQLITE_TEXT
+# undef SQLITE_TEXT
+#else
+# define SQLITE_TEXT    3
+#endif
+#define SQLITE3_TEXT    3
+
+/*
+** CAPI3REF: Result Values From A Query
+** KEYWORDS: {column access functions}
+** METHOD: sqlite3_stmt
+**
+** ^These routines return information about a single column of the current
+** result row of a query. ^In every case the first argument is a pointer
+** to the [prepared statement] that is being evaluated (the [sqlite3_stmt*]
+** that was returned from [sqlite3_prepare_v2()] or one of its variants)
+** and the second argument is the index of the column for which information
+** should be returned.
^The leftmost column of the result set has the index 0. +** ^The number of columns in the result can be determined using +** [sqlite3_column_count()]. +** +** If the SQL statement does not currently point to a valid row, or if the +** column index is out of range, the result is undefined. +** These routines may only be called when the most recent call to +** [sqlite3_step()] has returned [SQLITE_ROW] and neither +** [sqlite3_reset()] nor [sqlite3_finalize()] have been called subsequently. +** If any of these routines are called after [sqlite3_reset()] or +** [sqlite3_finalize()] or after [sqlite3_step()] has returned +** something other than [SQLITE_ROW], the results are undefined. +** If [sqlite3_step()] or [sqlite3_reset()] or [sqlite3_finalize()] +** are called from a different thread while any of these routines +** are pending, then the results are undefined. +** +** ^The sqlite3_column_type() routine returns the +** [SQLITE_INTEGER | datatype code] for the initial data type +** of the result column. ^The returned value is one of [SQLITE_INTEGER], +** [SQLITE_FLOAT], [SQLITE_TEXT], [SQLITE_BLOB], or [SQLITE_NULL]. The value +** returned by sqlite3_column_type() is only meaningful if no type +** conversions have occurred as described below. After a type conversion, +** the value returned by sqlite3_column_type() is undefined. Future +** versions of SQLite may change the behavior of sqlite3_column_type() +** following a type conversion. +** +** ^If the result is a BLOB or UTF-8 string then the sqlite3_column_bytes() +** routine returns the number of bytes in that BLOB or string. +** ^If the result is a UTF-16 string, then sqlite3_column_bytes() converts +** the string to UTF-8 and then returns the number of bytes. +** ^If the result is a numeric value then sqlite3_column_bytes() uses +** [sqlite3_snprintf()] to convert that value to a UTF-8 string and returns +** the number of bytes in that string. +** ^If the result is NULL, then sqlite3_column_bytes() returns zero. +** +** ^If the result is a BLOB or UTF-16 string then the sqlite3_column_bytes16() +** routine returns the number of bytes in that BLOB or string. +** ^If the result is a UTF-8 string, then sqlite3_column_bytes16() converts +** the string to UTF-16 and then returns the number of bytes. +** ^If the result is a numeric value then sqlite3_column_bytes16() uses +** [sqlite3_snprintf()] to convert that value to a UTF-16 string and returns +** the number of bytes in that string. +** ^If the result is NULL, then sqlite3_column_bytes16() returns zero. +** +** ^The values returned by [sqlite3_column_bytes()] and +** [sqlite3_column_bytes16()] do not include the zero terminators at the end +** of the string. ^For clarity: the values returned by +** [sqlite3_column_bytes()] and [sqlite3_column_bytes16()] are the number of +** bytes in the string, not the number of characters. +** +** ^Strings returned by sqlite3_column_text() and sqlite3_column_text16(), +** even empty strings, are always zero-terminated. ^The return +** value from sqlite3_column_blob() for a zero-length BLOB is a NULL pointer. +** +** <b>Warning:</b> ^The object returned by [sqlite3_column_value()] is an +** [unprotected sqlite3_value] object. In a multithreaded environment, +** an unprotected sqlite3_value object may only be used safely with +** [sqlite3_bind_value()] and [sqlite3_result_value()]. 
+** If the [unprotected sqlite3_value] object returned by +** [sqlite3_column_value()] is used in any other way, including calls +** to routines like [sqlite3_value_int()], [sqlite3_value_text()], +** or [sqlite3_value_bytes()], the behavior is not threadsafe. +** +** These routines attempt to convert the value where appropriate. ^For +** example, if the internal representation is FLOAT and a text result +** is requested, [sqlite3_snprintf()] is used internally to perform the +** conversion automatically. ^(The following table details the conversions +** that are applied: +** +** <blockquote> +** <table border="1"> +** <tr><th> Internal<br>Type <th> Requested<br>Type <th> Conversion +** +** <tr><td> NULL <td> INTEGER <td> Result is 0 +** <tr><td> NULL <td> FLOAT <td> Result is 0.0 +** <tr><td> NULL <td> TEXT <td> Result is a NULL pointer +** <tr><td> NULL <td> BLOB <td> Result is a NULL pointer +** <tr><td> INTEGER <td> FLOAT <td> Convert from integer to float +** <tr><td> INTEGER <td> TEXT <td> ASCII rendering of the integer +** <tr><td> INTEGER <td> BLOB <td> Same as INTEGER->TEXT +** <tr><td> FLOAT <td> INTEGER <td> [CAST] to INTEGER +** <tr><td> FLOAT <td> TEXT <td> ASCII rendering of the float +** <tr><td> FLOAT <td> BLOB <td> [CAST] to BLOB +** <tr><td> TEXT <td> INTEGER <td> [CAST] to INTEGER +** <tr><td> TEXT <td> FLOAT <td> [CAST] to REAL +** <tr><td> TEXT <td> BLOB <td> No change +** <tr><td> BLOB <td> INTEGER <td> [CAST] to INTEGER +** <tr><td> BLOB <td> FLOAT <td> [CAST] to REAL +** <tr><td> BLOB <td> TEXT <td> Add a zero terminator if needed +** </table> +** </blockquote>)^ +** +** Note that when type conversions occur, pointers returned by prior +** calls to sqlite3_column_blob(), sqlite3_column_text(), and/or +** sqlite3_column_text16() may be invalidated. +** Type conversions and pointer invalidations might occur +** in the following cases: +** +** <ul> +** <li> The initial content is a BLOB and sqlite3_column_text() or +** sqlite3_column_text16() is called. A zero-terminator might +** need to be added to the string.</li> +** <li> The initial content is UTF-8 text and sqlite3_column_bytes16() or +** sqlite3_column_text16() is called. The content must be converted +** to UTF-16.</li> +** <li> The initial content is UTF-16 text and sqlite3_column_bytes() or +** sqlite3_column_text() is called. The content must be converted +** to UTF-8.</li> +** </ul> +** +** ^Conversions between UTF-16be and UTF-16le are always done in place and do +** not invalidate a prior pointer, though of course the content of the buffer +** that the prior pointer references will have been modified. Other kinds +** of conversion are done in place when it is possible, but sometimes they +** are not possible and in those cases prior pointers are invalidated. +** +** The safest policy is to invoke these routines +** in one of the following ways: +** +** <ul> +** <li>sqlite3_column_text() followed by sqlite3_column_bytes()</li> +** <li>sqlite3_column_blob() followed by sqlite3_column_bytes()</li> +** <li>sqlite3_column_text16() followed by sqlite3_column_bytes16()</li> +** </ul> +** +** In other words, you should call sqlite3_column_text(), +** sqlite3_column_blob(), or sqlite3_column_text16() first to force the result +** into the desired format, then invoke sqlite3_column_bytes() or +** sqlite3_column_bytes16() to find the size of the result. 
Do not mix calls
+** to sqlite3_column_text() or sqlite3_column_blob() with calls to
+** sqlite3_column_bytes16(), and do not mix calls to sqlite3_column_text16()
+** with calls to sqlite3_column_bytes().
+**
+** ^The pointers returned are valid until a type conversion occurs as
+** described above, or until [sqlite3_step()] or [sqlite3_reset()] or
+** [sqlite3_finalize()] is called. ^The memory space used to hold strings
+** and BLOBs is freed automatically. Do <em>not</em> pass the pointers returned
+** from [sqlite3_column_blob()], [sqlite3_column_text()], etc. into
+** [sqlite3_free()].
+**
+** ^(If a memory allocation error occurs during the evaluation of any
+** of these routines, a default value is returned. The default value
+** is either the integer 0, the floating point number 0.0, or a NULL
+** pointer. Subsequent calls to [sqlite3_errcode()] will return
+** [SQLITE_NOMEM].)^
+*/
+SQLITE_API const void *SQLITE_STDCALL sqlite3_column_blob(sqlite3_stmt*, int iCol);
+SQLITE_API int SQLITE_STDCALL sqlite3_column_bytes(sqlite3_stmt*, int iCol);
+SQLITE_API int SQLITE_STDCALL sqlite3_column_bytes16(sqlite3_stmt*, int iCol);
+SQLITE_API double SQLITE_STDCALL sqlite3_column_double(sqlite3_stmt*, int iCol);
+SQLITE_API int SQLITE_STDCALL sqlite3_column_int(sqlite3_stmt*, int iCol);
+SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_column_int64(sqlite3_stmt*, int iCol);
+SQLITE_API const unsigned char *SQLITE_STDCALL sqlite3_column_text(sqlite3_stmt*, int iCol);
+SQLITE_API const void *SQLITE_STDCALL sqlite3_column_text16(sqlite3_stmt*, int iCol);
+SQLITE_API int SQLITE_STDCALL sqlite3_column_type(sqlite3_stmt*, int iCol);
+SQLITE_API sqlite3_value *SQLITE_STDCALL sqlite3_column_value(sqlite3_stmt*, int iCol);
+
+/*
+** CAPI3REF: Destroy A Prepared Statement Object
+** DESTRUCTOR: sqlite3_stmt
+**
+** ^The sqlite3_finalize() function is called to delete a [prepared statement].
+** ^If the most recent evaluation of the statement encountered no errors
+** or if the statement has never been evaluated, then sqlite3_finalize() returns
+** SQLITE_OK. ^If the most recent evaluation of statement S failed, then
+** sqlite3_finalize(S) returns the appropriate [error code] or
+** [extended error code].
+**
+** ^The sqlite3_finalize(S) routine can be called at any point during
+** the life cycle of [prepared statement] S:
+** before statement S is ever evaluated, after
+** one or more calls to [sqlite3_reset()], or after any call
+** to [sqlite3_step()] regardless of whether or not the statement has
+** completed execution.
+**
+** ^Invoking sqlite3_finalize() on a NULL pointer is a harmless no-op.
+**
+** The application must finalize every [prepared statement] in order to avoid
+** resource leaks. It is a grievous error for the application to try to use
+** a prepared statement after it has been finalized. Any use of a prepared
+** statement after it has been finalized can result in undefined and
+** undesirable behavior such as segfaults and heap corruption.
+*/
+SQLITE_API int SQLITE_STDCALL sqlite3_finalize(sqlite3_stmt *pStmt);
+
+/*
+** CAPI3REF: Reset A Prepared Statement Object
+** METHOD: sqlite3_stmt
+**
+** The sqlite3_reset() function is called to reset a [prepared statement]
+** object back to its initial state, ready to be re-executed.
+** ^Any SQL statement variables that had values bound to them using
+** the [sqlite3_bind_blob | sqlite3_bind_*() API] retain their values.
+** Use [sqlite3_clear_bindings()] to reset the bindings.
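+**
+** For example, a typical statement life cycle might look like the
+** following sketch; the db handle, the SQL text, and the omission of
+** error checking are simplifications for illustration only:
+**
+** <blockquote><pre>
+** sqlite3_stmt *pStmt = 0;
+** sqlite3_prepare_v2(db, "SELECT name FROM tbl", -1, &pStmt, 0);
+** while( sqlite3_step(pStmt)==SQLITE_ROW ){
+**   const unsigned char *zName = sqlite3_column_text(pStmt, 0);
+**   int nName = sqlite3_column_bytes(pStmt, 0);
+** }
+** sqlite3_reset(pStmt);
+** sqlite3_clear_bindings(pStmt);
+** sqlite3_finalize(pStmt);
+** </pre></blockquote>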
+** +** ^The [sqlite3_reset(S)] interface resets the [prepared statement] S +** back to the beginning of its program. +** +** ^If the most recent call to [sqlite3_step(S)] for the +** [prepared statement] S returned [SQLITE_ROW] or [SQLITE_DONE], +** or if [sqlite3_step(S)] has never before been called on S, +** then [sqlite3_reset(S)] returns [SQLITE_OK]. +** +** ^If the most recent call to [sqlite3_step(S)] for the +** [prepared statement] S indicated an error, then +** [sqlite3_reset(S)] returns an appropriate [error code]. +** +** ^The [sqlite3_reset(S)] interface does not change the values +** of any [sqlite3_bind_blob|bindings] on the [prepared statement] S. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_reset(sqlite3_stmt *pStmt); + +/* +** CAPI3REF: Create Or Redefine SQL Functions +** KEYWORDS: {function creation routines} +** KEYWORDS: {application-defined SQL function} +** KEYWORDS: {application-defined SQL functions} +** METHOD: sqlite3 +** +** ^These functions (collectively known as "function creation routines") +** are used to add SQL functions or aggregates or to redefine the behavior +** of existing SQL functions or aggregates. The only differences between +** these routines are the text encoding expected for +** the second parameter (the name of the function being created) +** and the presence or absence of a destructor callback for +** the application data pointer. +** +** ^The first parameter is the [database connection] to which the SQL +** function is to be added. ^If an application uses more than one database +** connection then application-defined SQL functions must be added +** to each database connection separately. +** +** ^The second parameter is the name of the SQL function to be created or +** redefined. ^The length of the name is limited to 255 bytes in a UTF-8 +** representation, exclusive of the zero-terminator. ^Note that the name +** length limit is in UTF-8 bytes, not characters nor UTF-16 bytes. +** ^Any attempt to create a function with a longer name +** will result in [SQLITE_MISUSE] being returned. +** +** ^The third parameter (nArg) +** is the number of arguments that the SQL function or +** aggregate takes. ^If this parameter is -1, then the SQL function or +** aggregate may take any number of arguments between 0 and the limit +** set by [sqlite3_limit]([SQLITE_LIMIT_FUNCTION_ARG]). If the third +** parameter is less than -1 or greater than 127 then the behavior is +** undefined. +** +** ^The fourth parameter, eTextRep, specifies what +** [SQLITE_UTF8 | text encoding] this SQL function prefers for +** its parameters. The application should set this parameter to +** [SQLITE_UTF16LE] if the function implementation invokes +** [sqlite3_value_text16le()] on an input, or [SQLITE_UTF16BE] if the +** implementation invokes [sqlite3_value_text16be()] on an input, or +** [SQLITE_UTF16] if [sqlite3_value_text16()] is used, or [SQLITE_UTF8] +** otherwise. ^The same SQL function may be registered multiple times using +** different preferred text encodings, with different implementations for +** each encoding. +** ^When multiple implementations of the same function are available, SQLite +** will pick the one that involves the least amount of data conversion. +** +** ^The fourth parameter may optionally be ORed with [SQLITE_DETERMINISTIC] +** to signal that the function will always return the same result given +** the same inputs within a single SQL statement. Most SQL functions are +** deterministic. 
The built-in [random()] SQL function is an example of a
+** function that is not deterministic. The SQLite query planner is able to
+** perform additional optimizations on deterministic functions, so use
+** of the [SQLITE_DETERMINISTIC] flag is recommended where possible.
+**
+** ^(The fifth parameter is an arbitrary pointer. The implementation of the
+** function can gain access to this pointer using [sqlite3_user_data()].)^
+**
+** ^The sixth, seventh and eighth parameters, xFunc, xStep and xFinal, are
+** pointers to C-language functions that implement the SQL function or
+** aggregate. ^A scalar SQL function requires an implementation of the xFunc
+** callback only; NULL pointers must be passed as the xStep and xFinal
+** parameters. ^An aggregate SQL function requires an implementation of xStep
+** and xFinal and a NULL pointer must be passed for xFunc. ^To delete an existing
+** SQL function or aggregate, pass NULL pointers for all three function
+** callbacks.
+**
+** ^(If the ninth parameter to sqlite3_create_function_v2() is not NULL,
+** then it is the destructor for the application data pointer.
+** The destructor is invoked when the function is deleted, either by being
+** overloaded or when the database connection closes.)^
+** ^The destructor is also invoked if the call to
+** sqlite3_create_function_v2() fails.
+** ^When the destructor callback of the ninth parameter is invoked, it
+** is passed a single argument which is a copy of the application data
+** pointer which was the fifth parameter to sqlite3_create_function_v2().
+**
+** ^It is permitted to register multiple implementations of the same
+** function with the same name but with either differing numbers of
+** arguments or differing preferred text encodings. ^SQLite will use
+** the implementation that most closely matches the way in which the
+** SQL function is used. ^A function implementation with a non-negative
+** nArg parameter is a better match than a function implementation with
+** a negative nArg. ^A function where the preferred text encoding
+** matches the database encoding is a better
+** match than a function where the encoding is different.
+** ^A function where the encoding difference is between UTF16le and UTF16be
+** is a closer match than a function where the encoding difference is
+** between UTF8 and UTF16.
+**
+** ^Built-in functions may be overloaded by new application-defined functions.
+**
+** ^An application-defined function is permitted to call other
+** SQLite interfaces. However, such calls must not
+** close the database connection nor finalize or reset the prepared
+** statement in which the function is running.
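+**
+** For example, a deterministic scalar function might be registered as in
+** the following sketch; the function name "half" and its implementation
+** are placeholders, not part of the SQLite API:
+**
+** <blockquote><pre>
+** static void halfFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
+**   sqlite3_result_double(ctx, 0.5*sqlite3_value_double(argv[0]));
+** }
+** ...
+** sqlite3_create_function_v2(db, "half", 1,
+**     SQLITE_UTF8|SQLITE_DETERMINISTIC, 0, halfFunc, 0, 0, 0);
+** </pre></blockquote>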
+*/ +SQLITE_API int SQLITE_STDCALL sqlite3_create_function( + sqlite3 *db, + const char *zFunctionName, + int nArg, + int eTextRep, + void *pApp, + void (*xFunc)(sqlite3_context*,int,sqlite3_value**), + void (*xStep)(sqlite3_context*,int,sqlite3_value**), + void (*xFinal)(sqlite3_context*) +); +SQLITE_API int SQLITE_STDCALL sqlite3_create_function16( + sqlite3 *db, + const void *zFunctionName, + int nArg, + int eTextRep, + void *pApp, + void (*xFunc)(sqlite3_context*,int,sqlite3_value**), + void (*xStep)(sqlite3_context*,int,sqlite3_value**), + void (*xFinal)(sqlite3_context*) +); +SQLITE_API int SQLITE_STDCALL sqlite3_create_function_v2( + sqlite3 *db, + const char *zFunctionName, + int nArg, + int eTextRep, + void *pApp, + void (*xFunc)(sqlite3_context*,int,sqlite3_value**), + void (*xStep)(sqlite3_context*,int,sqlite3_value**), + void (*xFinal)(sqlite3_context*), + void(*xDestroy)(void*) +); + +/* +** CAPI3REF: Text Encodings +** +** These constant define integer codes that represent the various +** text encodings supported by SQLite. +*/ +#define SQLITE_UTF8 1 /* IMP: R-37514-35566 */ +#define SQLITE_UTF16LE 2 /* IMP: R-03371-37637 */ +#define SQLITE_UTF16BE 3 /* IMP: R-51971-34154 */ +#define SQLITE_UTF16 4 /* Use native byte order */ +#define SQLITE_ANY 5 /* Deprecated */ +#define SQLITE_UTF16_ALIGNED 8 /* sqlite3_create_collation only */ + +/* +** CAPI3REF: Function Flags +** +** These constants may be ORed together with the +** [SQLITE_UTF8 | preferred text encoding] as the fourth argument +** to [sqlite3_create_function()], [sqlite3_create_function16()], or +** [sqlite3_create_function_v2()]. +*/ +#define SQLITE_DETERMINISTIC 0x800 + +/* +** CAPI3REF: Deprecated Functions +** DEPRECATED +** +** These functions are [deprecated]. In order to maintain +** backwards compatibility with older code, these functions continue +** to be supported. However, new applications should avoid +** the use of these functions. To encourage programmers to avoid +** these functions, we will not explain what they do. +*/ +#ifndef SQLITE_OMIT_DEPRECATED +SQLITE_API SQLITE_DEPRECATED int SQLITE_STDCALL sqlite3_aggregate_count(sqlite3_context*); +SQLITE_API SQLITE_DEPRECATED int SQLITE_STDCALL sqlite3_expired(sqlite3_stmt*); +SQLITE_API SQLITE_DEPRECATED int SQLITE_STDCALL sqlite3_transfer_bindings(sqlite3_stmt*, sqlite3_stmt*); +SQLITE_API SQLITE_DEPRECATED int SQLITE_STDCALL sqlite3_global_recover(void); +SQLITE_API SQLITE_DEPRECATED void SQLITE_STDCALL sqlite3_thread_cleanup(void); +SQLITE_API SQLITE_DEPRECATED int SQLITE_STDCALL sqlite3_memory_alarm(void(*)(void*,sqlite3_int64,int), + void*,sqlite3_int64); +#endif + +/* +** CAPI3REF: Obtaining SQL Values +** METHOD: sqlite3_value +** +** The C-language implementation of SQL functions and aggregates uses +** this set of interface routines to access the parameter values on +** the function or aggregate. +** +** The xFunc (for scalar functions) or xStep (for aggregates) parameters +** to [sqlite3_create_function()] and [sqlite3_create_function16()] +** define callbacks that implement the SQL functions and aggregates. +** The 3rd parameter to these callbacks is an array of pointers to +** [protected sqlite3_value] objects. There is one [sqlite3_value] object for +** each parameter to the SQL function. These routines are used to +** extract values from the [sqlite3_value] objects. +** +** These routines work only with [protected sqlite3_value] objects. 
+** Any attempt to use these routines on an [unprotected sqlite3_value]
+** object results in undefined behavior.
+**
+** ^These routines work just like the corresponding [column access functions]
+** except that these routines take a single [protected sqlite3_value] object
+** pointer instead of a [sqlite3_stmt*] pointer and an integer column number.
+**
+** ^The sqlite3_value_text16() interface extracts a UTF-16 string
+** in the native byte-order of the host machine. ^The
+** sqlite3_value_text16be() and sqlite3_value_text16le() interfaces
+** extract UTF-16 strings as big-endian and little-endian respectively.
+**
+** ^(The sqlite3_value_numeric_type() interface attempts to apply
+** numeric affinity to the value. This means that an attempt is
+** made to convert the value to an integer or floating point. If
+** such a conversion is possible without loss of information (in other
+** words, if the value is a string that looks like a number)
+** then the conversion is performed. Otherwise no conversion occurs.
+** The [SQLITE_INTEGER | datatype] after conversion is returned.)^
+**
+** Please pay particular attention to the fact that the pointer returned
+** from [sqlite3_value_blob()], [sqlite3_value_text()], or
+** [sqlite3_value_text16()] can be invalidated by a subsequent call to
+** [sqlite3_value_bytes()], [sqlite3_value_bytes16()], [sqlite3_value_text()],
+** or [sqlite3_value_text16()].
+**
+** These routines must be called from the same thread as
+** the SQL function that supplied the [sqlite3_value*] parameters.
+*/
+SQLITE_API const void *SQLITE_STDCALL sqlite3_value_blob(sqlite3_value*);
+SQLITE_API int SQLITE_STDCALL sqlite3_value_bytes(sqlite3_value*);
+SQLITE_API int SQLITE_STDCALL sqlite3_value_bytes16(sqlite3_value*);
+SQLITE_API double SQLITE_STDCALL sqlite3_value_double(sqlite3_value*);
+SQLITE_API int SQLITE_STDCALL sqlite3_value_int(sqlite3_value*);
+SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_value_int64(sqlite3_value*);
+SQLITE_API const unsigned char *SQLITE_STDCALL sqlite3_value_text(sqlite3_value*);
+SQLITE_API const void *SQLITE_STDCALL sqlite3_value_text16(sqlite3_value*);
+SQLITE_API const void *SQLITE_STDCALL sqlite3_value_text16le(sqlite3_value*);
+SQLITE_API const void *SQLITE_STDCALL sqlite3_value_text16be(sqlite3_value*);
+SQLITE_API int SQLITE_STDCALL sqlite3_value_type(sqlite3_value*);
+SQLITE_API int SQLITE_STDCALL sqlite3_value_numeric_type(sqlite3_value*);
+
+/*
+** CAPI3REF: Finding The Subtype Of SQL Values
+** METHOD: sqlite3_value
+**
+** The sqlite3_value_subtype(V) function returns the subtype for
+** an [application-defined SQL function] argument V. The subtype
+** information can be used to pass a limited amount of context from
+** one SQL function to another. Use the [sqlite3_result_subtype()]
+** routine to set the subtype for the return value of an SQL function.
+**
+** SQLite makes no use of subtype itself. It merely passes the subtype
+** from the result of one [application-defined SQL function] into the
+** input of another.
+*/
+SQLITE_API unsigned int SQLITE_STDCALL sqlite3_value_subtype(sqlite3_value*);
+
+/*
+** CAPI3REF: Copy And Free SQL Values
+** METHOD: sqlite3_value
+**
+** ^The sqlite3_value_dup(V) interface makes a copy of the [sqlite3_value]
+** object V and returns a pointer to that copy. ^The [sqlite3_value] returned
+** is a [protected sqlite3_value] object even if the input is not.
+** ^The sqlite3_value_dup(V) interface returns NULL if V is NULL or if a
+** memory allocation fails.
+**
+** ^The sqlite3_value_free(V) interface frees an [sqlite3_value] object
+** previously obtained from [sqlite3_value_dup()]. ^If V is a NULL pointer
+** then sqlite3_value_free(V) is a harmless no-op.
+*/
+SQLITE_API sqlite3_value *SQLITE_STDCALL sqlite3_value_dup(const sqlite3_value*);
+SQLITE_API void SQLITE_STDCALL sqlite3_value_free(sqlite3_value*);
+
+/*
+** CAPI3REF: Obtain Aggregate Function Context
+** METHOD: sqlite3_context
+**
+** Implementations of aggregate SQL functions use this
+** routine to allocate memory for storing their state.
+**
+** ^The first time the sqlite3_aggregate_context(C,N) routine is called
+** for a particular aggregate function, SQLite
+** allocates N bytes of memory, zeroes out that memory, and returns a pointer
+** to the new memory. ^On second and subsequent calls to
+** sqlite3_aggregate_context() for the same aggregate function instance,
+** the same buffer is returned. Sqlite3_aggregate_context() is normally
+** called once for each invocation of the xStep callback and then one
+** last time when the xFinal callback is invoked. ^(When no rows match
+** an aggregate query, the xStep() callback of the aggregate function
+** implementation is never called and xFinal() is called exactly once.
+** In those cases, sqlite3_aggregate_context() might be called for the
+** first time from within xFinal().)^
+**
+** ^The sqlite3_aggregate_context(C,N) routine returns a NULL pointer
+** when first called if N is less than or equal to zero or if a memory
+** allocation error occurs.
+**
+** ^(The amount of space allocated by sqlite3_aggregate_context(C,N) is
+** determined by the N parameter on first successful call. Changing the
+** value of N in any subsequent call to sqlite3_aggregate_context() within
+** the same aggregate function instance will not resize the memory
+** allocation.)^ Within the xFinal callback, it is customary to set
+** N=0 in calls to sqlite3_aggregate_context(C,N) so that no
+** pointless memory allocations occur.
+**
+** ^SQLite automatically frees the memory allocated by
+** sqlite3_aggregate_context() when the aggregate query concludes.
+**
+** The first parameter must be a copy of the
+** [sqlite3_context | SQL function context] that is the first parameter
+** to the xStep or xFinal callback routine that implements the aggregate
+** function.
+**
+** This routine must be called from the same thread in which
+** the aggregate SQL function is running.
+*/
+SQLITE_API void *SQLITE_STDCALL sqlite3_aggregate_context(sqlite3_context*, int nBytes);
+
+/*
+** CAPI3REF: User Data For Functions
+** METHOD: sqlite3_context
+**
+** ^The sqlite3_user_data() interface returns a copy of
+** the pointer that was the pUserData parameter (the 5th parameter)
+** of the [sqlite3_create_function()]
+** and [sqlite3_create_function16()] routines that originally
+** registered the application defined function.
+**
+** This routine must be called from the same thread in which
+** the application-defined function is running.
+*/
+SQLITE_API void *SQLITE_STDCALL sqlite3_user_data(sqlite3_context*);
+
+/*
+** CAPI3REF: Database Connection For Functions
+** METHOD: sqlite3_context
+**
+** ^The sqlite3_context_db_handle() interface returns a copy of
+** the pointer to the [database connection] (the 1st parameter)
+** of the [sqlite3_create_function()]
+** and [sqlite3_create_function16()] routines that originally
+** registered the application defined function.
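+**
+** For example, an aggregate implementation might use
+** sqlite3_aggregate_context() to keep its running state as in the
+** following sketch; the CountCtx structure and the countStep() and
+** countFinal() names are placeholders, not part of the SQLite API:
+**
+** <blockquote><pre>
+** typedef struct CountCtx { sqlite3_int64 n; } CountCtx;
+** static void countStep(sqlite3_context *ctx, int argc, sqlite3_value **argv){
+**   CountCtx *p = (CountCtx*)sqlite3_aggregate_context(ctx, sizeof(*p));
+**   if( p ) p->n++;
+** }
+** static void countFinal(sqlite3_context *ctx){
+**   CountCtx *p = (CountCtx*)sqlite3_aggregate_context(ctx, 0);
+**   sqlite3_result_int64(ctx, p ? p->n : 0);
+** }
+** </pre></blockquote>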
+*/ +SQLITE_API sqlite3 *SQLITE_STDCALL sqlite3_context_db_handle(sqlite3_context*); + +/* +** CAPI3REF: Function Auxiliary Data +** METHOD: sqlite3_context +** +** These functions may be used by (non-aggregate) SQL functions to +** associate metadata with argument values. If the same value is passed to +** multiple invocations of the same SQL function during query execution, under +** some circumstances the associated metadata may be preserved. An example +** of where this might be useful is in a regular-expression matching +** function. The compiled version of the regular expression can be stored as +** metadata associated with the pattern string. +** Then as long as the pattern string remains the same, +** the compiled regular expression can be reused on multiple +** invocations of the same function. +** +** ^The sqlite3_get_auxdata() interface returns a pointer to the metadata +** associated by the sqlite3_set_auxdata() function with the Nth argument +** value to the application-defined function. ^If there is no metadata +** associated with the function argument, this sqlite3_get_auxdata() interface +** returns a NULL pointer. +** +** ^The sqlite3_set_auxdata(C,N,P,X) interface saves P as metadata for the N-th +** argument of the application-defined function. ^Subsequent +** calls to sqlite3_get_auxdata(C,N) return P from the most recent +** sqlite3_set_auxdata(C,N,P,X) call if the metadata is still valid or +** NULL if the metadata has been discarded. +** ^After each call to sqlite3_set_auxdata(C,N,P,X) where X is not NULL, +** SQLite will invoke the destructor function X with parameter P exactly +** once, when the metadata is discarded. +** SQLite is free to discard the metadata at any time, including: <ul> +** <li> when the corresponding function parameter changes, or +** <li> when [sqlite3_reset()] or [sqlite3_finalize()] is called for the +** SQL statement, or +** <li> when sqlite3_set_auxdata() is invoked again on the same parameter, or +** <li> during the original sqlite3_set_auxdata() call when a memory +** allocation error occurs. </ul>)^ +** +** Note the last bullet in particular. The destructor X in +** sqlite3_set_auxdata(C,N,P,X) might be called immediately, before the +** sqlite3_set_auxdata() interface even returns. Hence sqlite3_set_auxdata() +** should be called near the end of the function implementation and the +** function implementation should not make any use of P after +** sqlite3_set_auxdata() has been called. +** +** ^(In practice, metadata is preserved between function calls for +** function parameters that are compile-time constants, including literal +** values and [parameters] and expressions composed from the same.)^ +** +** These routines must be called from the same thread in which +** the SQL function is running. +*/ +SQLITE_API void *SQLITE_STDCALL sqlite3_get_auxdata(sqlite3_context*, int N); +SQLITE_API void SQLITE_STDCALL sqlite3_set_auxdata(sqlite3_context*, int N, void*, void (*)(void*)); + + +/* +** CAPI3REF: Constants Defining Special Destructor Behavior +** +** These are special values for the destructor that is passed in as the +** final argument to routines like [sqlite3_result_blob()]. ^If the destructor +** argument is SQLITE_STATIC, it means that the content pointer is constant +** and will never change. It does not need to be destroyed. ^The +** SQLITE_TRANSIENT value means that the content will likely change in +** the near future and that SQLite should make its own private copy of +** the content before returning. 
+** +** The typedef is necessary to work around problems in certain +** C++ compilers. +*/ +typedef void (*sqlite3_destructor_type)(void*); +#define SQLITE_STATIC ((sqlite3_destructor_type)0) +#define SQLITE_TRANSIENT ((sqlite3_destructor_type)-1) + +/* +** CAPI3REF: Setting The Result Of An SQL Function +** METHOD: sqlite3_context +** +** These routines are used by the xFunc or xFinal callbacks that +** implement SQL functions and aggregates. See +** [sqlite3_create_function()] and [sqlite3_create_function16()] +** for additional information. +** +** These functions work very much like the [parameter binding] family of +** functions used to bind values to host parameters in prepared statements. +** Refer to the [SQL parameter] documentation for additional information. +** +** ^The sqlite3_result_blob() interface sets the result from +** an application-defined function to be the BLOB whose content is pointed +** to by the second parameter and which is N bytes long where N is the +** third parameter. +** +** ^The sqlite3_result_zeroblob(C,N) and sqlite3_result_zeroblob64(C,N) +** interfaces set the result of the application-defined function to be +** a BLOB containing all zero bytes and N bytes in size. +** +** ^The sqlite3_result_double() interface sets the result from +** an application-defined function to be a floating point value specified +** by its 2nd argument. +** +** ^The sqlite3_result_error() and sqlite3_result_error16() functions +** cause the implemented SQL function to throw an exception. +** ^SQLite uses the string pointed to by the +** 2nd parameter of sqlite3_result_error() or sqlite3_result_error16() +** as the text of an error message. ^SQLite interprets the error +** message string from sqlite3_result_error() as UTF-8. ^SQLite +** interprets the string from sqlite3_result_error16() as UTF-16 in native +** byte order. ^If the third parameter to sqlite3_result_error() +** or sqlite3_result_error16() is negative then SQLite takes as the error +** message all text up through the first zero character. +** ^If the third parameter to sqlite3_result_error() or +** sqlite3_result_error16() is non-negative then SQLite takes that many +** bytes (not characters) from the 2nd parameter as the error message. +** ^The sqlite3_result_error() and sqlite3_result_error16() +** routines make a private copy of the error message text before +** they return. Hence, the calling function can deallocate or +** modify the text after they return without harm. +** ^The sqlite3_result_error_code() function changes the error code +** returned by SQLite as a result of an error in a function. ^By default, +** the error code is SQLITE_ERROR. ^A subsequent call to sqlite3_result_error() +** or sqlite3_result_error16() resets the error code to SQLITE_ERROR. +** +** ^The sqlite3_result_error_toobig() interface causes SQLite to throw an +** error indicating that a string or BLOB is too long to represent. +** +** ^The sqlite3_result_error_nomem() interface causes SQLite to throw an +** error indicating that a memory allocation failed. +** +** ^The sqlite3_result_int() interface sets the return value +** of the application-defined function to be the 32-bit signed integer +** value given in the 2nd argument. +** ^The sqlite3_result_int64() interface sets the return value +** of the application-defined function to be the 64-bit signed integer +** value given in the 2nd argument. +** +** ^The sqlite3_result_null() interface sets the return value +** of the application-defined function to be NULL. 
+**
+** ^The sqlite3_result_text(), sqlite3_result_text16(),
+** sqlite3_result_text16le(), and sqlite3_result_text16be() interfaces
+** set the return value of the application-defined function to be
+** a text string which is represented as UTF-8, UTF-16 native byte order,
+** UTF-16 little endian, or UTF-16 big endian, respectively.
+** ^The sqlite3_result_text64() interface sets the return value of an
+** application-defined function to be a text string in an encoding
+** specified by the fifth (and last) parameter, which must be one
+** of [SQLITE_UTF8], [SQLITE_UTF16], [SQLITE_UTF16BE], or [SQLITE_UTF16LE].
+** ^SQLite takes the text result from the application from
+** the 2nd parameter of the sqlite3_result_text* interfaces.
+** ^If the 3rd parameter to the sqlite3_result_text* interfaces
+** is negative, then SQLite takes result text from the 2nd parameter
+** through the first zero character.
+** ^If the 3rd parameter to the sqlite3_result_text* interfaces
+** is non-negative, then as many bytes (not characters) of the text
+** pointed to by the 2nd parameter are taken as the application-defined
+** function result. If the 3rd parameter is non-negative, then it
+** must be the byte offset into the string where the NUL terminator would
+** appear if the string were NUL terminated. If any NUL characters occur
+** in the string at a byte offset that is less than the value of the 3rd
+** parameter, then the resulting string will contain embedded NULs and the
+** result of expressions operating on strings with embedded NULs is undefined.
+** ^If the 4th parameter to the sqlite3_result_text* interfaces
+** or sqlite3_result_blob is a non-NULL pointer, then SQLite calls that
+** function as the destructor on the text or BLOB result when it has
+** finished using that result.
+** ^If the 4th parameter to the sqlite3_result_text* interfaces or to
+** sqlite3_result_blob is the special constant SQLITE_STATIC, then SQLite
+** assumes that the text or BLOB result is in constant space and does not
+** copy the content of the parameter nor call a destructor on the content
+** when it has finished using that result.
+** ^If the 4th parameter to the sqlite3_result_text* interfaces
+** or sqlite3_result_blob is the special constant SQLITE_TRANSIENT
+** then SQLite makes a copy of the result into space obtained
+** from [sqlite3_malloc()] before it returns.
+**
+** ^The sqlite3_result_value() interface sets the result of
+** the application-defined function to be a copy of the
+** [unprotected sqlite3_value] object specified by the 2nd parameter. ^The
+** sqlite3_result_value() interface makes a copy of the [sqlite3_value]
+** so that the [sqlite3_value] specified in the parameter may change or
+** be deallocated after sqlite3_result_value() returns without harm.
+** ^A [protected sqlite3_value] object may always be used where an
+** [unprotected sqlite3_value] object is required, so either
+** kind of [sqlite3_value] object can be used with this interface.
+**
+** If these routines are called from within a different thread
+** than the one containing the application-defined function that received
+** the [sqlite3_context] pointer, the results are undefined.
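+**
+** For example, a function might return text from a local buffer as in the
+** following sketch, using SQLITE_TRANSIENT so that SQLite makes its own
+** copy before the buffer goes out of scope; ctx is assumed to be the
+** [sqlite3_context] pointer passed to the application-defined function:
+**
+** <blockquote><pre>
+** char zBuf[32];
+** sqlite3_snprintf(sizeof(zBuf), zBuf, "value=%d", 42);
+** sqlite3_result_text(ctx, zBuf, -1, SQLITE_TRANSIENT);
+** </pre></blockquote>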
+*/ +SQLITE_API void SQLITE_STDCALL sqlite3_result_blob(sqlite3_context*, const void*, int, void(*)(void*)); +SQLITE_API void SQLITE_STDCALL sqlite3_result_blob64(sqlite3_context*,const void*, + sqlite3_uint64,void(*)(void*)); +SQLITE_API void SQLITE_STDCALL sqlite3_result_double(sqlite3_context*, double); +SQLITE_API void SQLITE_STDCALL sqlite3_result_error(sqlite3_context*, const char*, int); +SQLITE_API void SQLITE_STDCALL sqlite3_result_error16(sqlite3_context*, const void*, int); +SQLITE_API void SQLITE_STDCALL sqlite3_result_error_toobig(sqlite3_context*); +SQLITE_API void SQLITE_STDCALL sqlite3_result_error_nomem(sqlite3_context*); +SQLITE_API void SQLITE_STDCALL sqlite3_result_error_code(sqlite3_context*, int); +SQLITE_API void SQLITE_STDCALL sqlite3_result_int(sqlite3_context*, int); +SQLITE_API void SQLITE_STDCALL sqlite3_result_int64(sqlite3_context*, sqlite3_int64); +SQLITE_API void SQLITE_STDCALL sqlite3_result_null(sqlite3_context*); +SQLITE_API void SQLITE_STDCALL sqlite3_result_text(sqlite3_context*, const char*, int, void(*)(void*)); +SQLITE_API void SQLITE_STDCALL sqlite3_result_text64(sqlite3_context*, const char*,sqlite3_uint64, + void(*)(void*), unsigned char encoding); +SQLITE_API void SQLITE_STDCALL sqlite3_result_text16(sqlite3_context*, const void*, int, void(*)(void*)); +SQLITE_API void SQLITE_STDCALL sqlite3_result_text16le(sqlite3_context*, const void*, int,void(*)(void*)); +SQLITE_API void SQLITE_STDCALL sqlite3_result_text16be(sqlite3_context*, const void*, int,void(*)(void*)); +SQLITE_API void SQLITE_STDCALL sqlite3_result_value(sqlite3_context*, sqlite3_value*); +SQLITE_API void SQLITE_STDCALL sqlite3_result_zeroblob(sqlite3_context*, int n); +SQLITE_API int SQLITE_STDCALL sqlite3_result_zeroblob64(sqlite3_context*, sqlite3_uint64 n); + + +/* +** CAPI3REF: Setting The Subtype Of An SQL Function +** METHOD: sqlite3_context +** +** The sqlite3_result_subtype(C,T) function causes the subtype of +** the result from the [application-defined SQL function] with +** [sqlite3_context] C to be the value T. Only the lower 8 bits +** of the subtype T are preserved in current versions of SQLite; +** higher order bits are discarded. +** The number of subtype bytes preserved by SQLite might increase +** in future releases of SQLite. +*/ +SQLITE_API void SQLITE_STDCALL sqlite3_result_subtype(sqlite3_context*,unsigned int); + +/* +** CAPI3REF: Define New Collating Sequences +** METHOD: sqlite3 +** +** ^These functions add, remove, or modify a [collation] associated +** with the [database connection] specified as the first argument. +** +** ^The name of the collation is a UTF-8 string +** for sqlite3_create_collation() and sqlite3_create_collation_v2() +** and a UTF-16 string in native byte order for sqlite3_create_collation16(). +** ^Collation names that compare equal according to [sqlite3_strnicmp()] are +** considered to be the same name. +** +** ^(The third argument (eTextRep) must be one of the constants: +** <ul> +** <li> [SQLITE_UTF8], +** <li> [SQLITE_UTF16LE], +** <li> [SQLITE_UTF16BE], +** <li> [SQLITE_UTF16], or +** <li> [SQLITE_UTF16_ALIGNED]. +** </ul>)^ +** ^The eTextRep argument determines the encoding of strings passed +** to the collating function callback, xCallback. +** ^The [SQLITE_UTF16] and [SQLITE_UTF16_ALIGNED] values for eTextRep +** force strings to be UTF16 with native byte order. +** ^The [SQLITE_UTF16_ALIGNED] value for eTextRep forces strings to begin +** on an even byte address. 
+** +** ^The fourth argument, pArg, is an application data pointer that is passed +** through as the first argument to the collating function callback. +** +** ^The fifth argument, xCallback, is a pointer to the collating function. +** ^Multiple collating functions can be registered using the same name but +** with different eTextRep parameters and SQLite will use whichever +** function requires the least amount of data transformation. +** ^If the xCallback argument is NULL then the collating function is +** deleted. ^When all collating functions having the same name are deleted, +** that collation is no longer usable. +** +** ^The collating function callback is invoked with a copy of the pArg +** application data pointer and with two strings in the encoding specified +** by the eTextRep argument. The collating function must return an +** integer that is negative, zero, or positive +** if the first string is less than, equal to, or greater than the second, +** respectively. A collating function must always return the same answer +** given the same inputs. If two or more collating functions are registered +** to the same collation name (using different eTextRep values) then all +** must give an equivalent answer when invoked with equivalent strings. +** The collating function must obey the following properties for all +** strings A, B, and C: +** +** <ol> +** <li> If A==B then B==A. +** <li> If A==B and B==C then A==C. +** <li> If A<B THEN B>A. +** <li> If A<B and B<C then A<C. +** </ol> +** +** If a collating function fails any of the above constraints and that +** collating function is registered and used, then the behavior of SQLite +** is undefined. +** +** ^The sqlite3_create_collation_v2() works like sqlite3_create_collation() +** with the addition that the xDestroy callback is invoked on pArg when +** the collating function is deleted. +** ^Collating functions are deleted when they are overridden by later +** calls to the collation creation functions or when the +** [database connection] is closed using [sqlite3_close()]. +** +** ^The xDestroy callback is <u>not</u> called if the +** sqlite3_create_collation_v2() function fails. Applications that invoke +** sqlite3_create_collation_v2() with a non-NULL xDestroy argument should +** check the return code and dispose of the application data pointer +** themselves rather than expecting SQLite to deal with it for them. +** This is different from every other SQLite interface. The inconsistency +** is unfortunate but cannot be changed without breaking backwards +** compatibility. +** +** See also: [sqlite3_collation_needed()] and [sqlite3_collation_needed16()]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_create_collation( + sqlite3*, + const char *zName, + int eTextRep, + void *pArg, + int(*xCompare)(void*,int,const void*,int,const void*) +); +SQLITE_API int SQLITE_STDCALL sqlite3_create_collation_v2( + sqlite3*, + const char *zName, + int eTextRep, + void *pArg, + int(*xCompare)(void*,int,const void*,int,const void*), + void(*xDestroy)(void*) +); +SQLITE_API int SQLITE_STDCALL sqlite3_create_collation16( + sqlite3*, + const void *zName, + int eTextRep, + void *pArg, + int(*xCompare)(void*,int,const void*,int,const void*) +); + +/* +** CAPI3REF: Collation Needed Callbacks +** METHOD: sqlite3 +** +** ^To avoid having to register all collation sequences before a database +** can be used, a single callback function may be registered with the +** [database connection] to be invoked whenever an undefined collation +** sequence is required. 
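+**
+** For example, a collation-needed callback might register a collation on
+** demand as in the following sketch; the "NOCASE2" collation name and the
+** nocaseCompare() comparison function are placeholders, not part of the
+** SQLite API:
+**
+** <blockquote><pre>
+** static void needCollation(void *pArg, sqlite3 *db, int eTextRep,
+**                           const char *zName){
+**   if( sqlite3_stricmp(zName, "NOCASE2")==0 ){
+**     sqlite3_create_collation(db, "NOCASE2", SQLITE_UTF8, 0, nocaseCompare);
+**   }
+** }
+** ...
+** sqlite3_collation_needed(db, 0, needCollation);
+** </pre></blockquote>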
+** +** ^If the function is registered using the sqlite3_collation_needed() API, +** then it is passed the names of undefined collation sequences as strings +** encoded in UTF-8. ^If sqlite3_collation_needed16() is used, +** the names are passed as UTF-16 in machine native byte order. +** ^A call to either function replaces the existing collation-needed callback. +** +** ^(When the callback is invoked, the first argument passed is a copy +** of the second argument to sqlite3_collation_needed() or +** sqlite3_collation_needed16(). The second argument is the database +** connection. The third argument is one of [SQLITE_UTF8], [SQLITE_UTF16BE], +** or [SQLITE_UTF16LE], indicating the most desirable form of the collation +** sequence function required. The fourth parameter is the name of the +** required collation sequence.)^ +** +** The callback function should register the desired collation using +** [sqlite3_create_collation()], [sqlite3_create_collation16()], or +** [sqlite3_create_collation_v2()]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_collation_needed( + sqlite3*, + void*, + void(*)(void*,sqlite3*,int eTextRep,const char*) +); +SQLITE_API int SQLITE_STDCALL sqlite3_collation_needed16( + sqlite3*, + void*, + void(*)(void*,sqlite3*,int eTextRep,const void*) +); + +#ifdef SQLITE_HAS_CODEC +/* +** Specify the key for an encrypted database. This routine should be +** called right after sqlite3_open(). +** +** The code to implement this API is not available in the public release +** of SQLite. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_key( + sqlite3 *db, /* Database to be rekeyed */ + const void *pKey, int nKey /* The key */ +); +SQLITE_API int SQLITE_STDCALL sqlite3_key_v2( + sqlite3 *db, /* Database to be rekeyed */ + const char *zDbName, /* Name of the database */ + const void *pKey, int nKey /* The key */ +); + +/* +** Change the key on an open database. If the current database is not +** encrypted, this routine will encrypt it. If pNew==0 or nNew==0, the +** database is decrypted. +** +** The code to implement this API is not available in the public release +** of SQLite. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_rekey( + sqlite3 *db, /* Database to be rekeyed */ + const void *pKey, int nKey /* The new key */ +); +SQLITE_API int SQLITE_STDCALL sqlite3_rekey_v2( + sqlite3 *db, /* Database to be rekeyed */ + const char *zDbName, /* Name of the database */ + const void *pKey, int nKey /* The new key */ +); + +/* +** Specify the activation key for a SEE database. Unless +** activated, none of the SEE routines will work. +*/ +SQLITE_API void SQLITE_STDCALL sqlite3_activate_see( + const char *zPassPhrase /* Activation phrase */ +); +#endif + +#ifdef SQLITE_ENABLE_CEROD +/* +** Specify the activation key for a CEROD database. Unless +** activated, none of the CEROD routines will work. +*/ +SQLITE_API void SQLITE_STDCALL sqlite3_activate_cerod( + const char *zPassPhrase /* Activation phrase */ +); +#endif + +/* +** CAPI3REF: Suspend Execution For A Short Time +** +** The sqlite3_sleep() function causes the current thread to suspend execution +** for at least a number of milliseconds specified in its parameter. +** +** If the operating system does not support sleep requests with +** millisecond time resolution, then the time will be rounded up to +** the nearest second. The number of milliseconds of sleep actually +** requested from the operating system is returned. +** +** ^SQLite implements this interface by calling the xSleep() +** method of the default [sqlite3_vfs] object. 
If the xSleep() method +** of the default VFS is not implemented correctly, or not implemented at +** all, then the behavior of sqlite3_sleep() may deviate from the description +** in the previous paragraphs. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_sleep(int); + +/* +** CAPI3REF: Name Of The Folder Holding Temporary Files +** +** ^(If this global variable is made to point to a string which is +** the name of a folder (a.k.a. directory), then all temporary files +** created by SQLite when using a built-in [sqlite3_vfs | VFS] +** will be placed in that directory.)^ ^If this variable +** is a NULL pointer, then SQLite performs a search for an appropriate +** temporary file directory. +** +** Applications are strongly discouraged from using this global variable. +** It is required to set a temporary folder on Windows Runtime (WinRT). +** But for all other platforms, it is highly recommended that applications +** neither read nor write this variable. This global variable is a relic +** that exists for backwards compatibility of legacy applications and should +** be avoided in new projects. +** +** It is not safe to read or modify this variable in more than one +** thread at a time. It is not safe to read or modify this variable +** if a [database connection] is being used at the same time in a separate +** thread. +** It is intended that this variable be set once +** as part of process initialization and before any SQLite interface +** routines have been called and that this variable remain unchanged +** thereafter. +** +** ^The [temp_store_directory pragma] may modify this variable and cause +** it to point to memory obtained from [sqlite3_malloc]. ^Furthermore, +** the [temp_store_directory pragma] always assumes that any string +** that this variable points to is held in memory obtained from +** [sqlite3_malloc] and the pragma may attempt to free that memory +** using [sqlite3_free]. +** Hence, if this variable is modified directly, either it should be +** made NULL or made to point to memory obtained from [sqlite3_malloc] +** or else the use of the [temp_store_directory pragma] should be avoided. +** Except when requested by the [temp_store_directory pragma], SQLite +** does not free the memory that sqlite3_temp_directory points to. If +** the application wants that memory to be freed, it must do +** so itself, taking care to only do so after all [database connection] +** objects have been destroyed. +** +** <b>Note to Windows Runtime users:</b> The temporary directory must be set +** prior to calling [sqlite3_open] or [sqlite3_open_v2]. Otherwise, various +** features that require the use of temporary files may fail. Here is an +** example of how to do this using C++ with the Windows Runtime: +** +** <blockquote><pre> +** LPCWSTR zPath = Windows::Storage::ApplicationData::Current-> +**   TemporaryFolder->Path->Data(); +** char zPathBuf[MAX_PATH + 1]; +** memset(zPathBuf, 0, sizeof(zPathBuf)); +** WideCharToMultiByte(CP_UTF8, 0, zPath, -1, zPathBuf, sizeof(zPathBuf), +**   NULL, NULL); +** sqlite3_temp_directory = sqlite3_mprintf("%s", zPathBuf); +** </pre></blockquote> +*/ +SQLITE_API char *sqlite3_temp_directory; + +/* +** CAPI3REF: Name Of The Folder Holding Database Files +** +** ^(If this global variable is made to point to a string which is +** the name of a folder (a.k.a. 
directory), then all database files +** specified with a relative pathname and created or accessed by +** SQLite when using a built-in windows [sqlite3_vfs | VFS] will be assumed +** to be relative to that directory.)^ ^If this variable is a NULL +** pointer, then SQLite assumes that all database files specified +** with a relative pathname are relative to the current directory +** for the process. Only the windows VFS makes use of this global +** variable; it is ignored by the unix VFS. +** +** Changing the value of this variable while a database connection is +** open can result in a corrupt database. +** +** It is not safe to read or modify this variable in more than one +** thread at a time. It is not safe to read or modify this variable +** if a [database connection] is being used at the same time in a separate +** thread. +** It is intended that this variable be set once +** as part of process initialization and before any SQLite interface +** routines have been called and that this variable remain unchanged +** thereafter. +** +** ^The [data_store_directory pragma] may modify this variable and cause +** it to point to memory obtained from [sqlite3_malloc]. ^Furthermore, +** the [data_store_directory pragma] always assumes that any string +** that this variable points to is held in memory obtained from +** [sqlite3_malloc] and the pragma may attempt to free that memory +** using [sqlite3_free]. +** Hence, if this variable is modified directly, either it should be +** made NULL or made to point to memory obtained from [sqlite3_malloc] +** or else the use of the [data_store_directory pragma] should be avoided. +*/ +SQLITE_API char *sqlite3_data_directory; + +/* +** CAPI3REF: Test For Auto-Commit Mode +** KEYWORDS: {autocommit mode} +** METHOD: sqlite3 +** +** ^The sqlite3_get_autocommit() interface returns non-zero or +** zero if the given database connection is or is not in autocommit mode, +** respectively. ^Autocommit mode is on by default. +** ^Autocommit mode is disabled by a [BEGIN] statement. +** ^Autocommit mode is re-enabled by a [COMMIT] or [ROLLBACK]. +** +** If certain kinds of errors occur on a statement within a multi-statement +** transaction (errors including [SQLITE_FULL], [SQLITE_IOERR], +** [SQLITE_NOMEM], [SQLITE_BUSY], and [SQLITE_INTERRUPT]) then the +** transaction might be rolled back automatically. The only way to +** find out whether SQLite automatically rolled back the transaction after +** an error is to use this function. +** +** If another thread changes the autocommit status of the database +** connection while this routine is running, then the return value +** is undefined. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_get_autocommit(sqlite3*); + +/* +** CAPI3REF: Find The Database Handle Of A Prepared Statement +** METHOD: sqlite3_stmt +** +** ^The sqlite3_db_handle interface returns the [database connection] handle +** to which a [prepared statement] belongs. ^The [database connection] +** returned by sqlite3_db_handle is the same [database connection] +** that was the first argument +** to the [sqlite3_prepare_v2()] call (or its variants) that was used to +** create the statement in the first place. +*/ +SQLITE_API sqlite3 *SQLITE_STDCALL sqlite3_db_handle(sqlite3_stmt*); + +/* +** CAPI3REF: Return The Filename For A Database Connection +** METHOD: sqlite3 +** +** ^The sqlite3_db_filename(D,N) interface returns a pointer to a filename +** associated with database N of connection D. ^The main database file +** has the name "main". 
If there is no attached database N on the database +** connection D, or if database N is a temporary or in-memory database, then +** a NULL pointer is returned. +** +** ^The filename returned by this function is the output of the +** xFullPathname method of the [VFS]. ^In other words, the filename +** will be an absolute pathname, even if the filename used +** to open the database originally was a URI or relative pathname. +*/ +SQLITE_API const char *SQLITE_STDCALL sqlite3_db_filename(sqlite3 *db, const char *zDbName); + +/* +** CAPI3REF: Determine if a database is read-only +** METHOD: sqlite3 +** +** ^The sqlite3_db_readonly(D,N) interface returns 1 if the database N +** of connection D is read-only, 0 if it is read/write, or -1 if N is not +** the name of a database on connection D. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_db_readonly(sqlite3 *db, const char *zDbName); + +/* +** CAPI3REF: Find the next prepared statement +** METHOD: sqlite3 +** +** ^This interface returns a pointer to the next [prepared statement] after +** pStmt associated with the [database connection] pDb. ^If pStmt is NULL +** then this interface returns a pointer to the first prepared statement +** associated with the database connection pDb. ^If no prepared statement +** satisfies the conditions of this routine, it returns NULL. +** +** The [database connection] pointer D in a call to +** [sqlite3_next_stmt(D,S)] must refer to an open database +** connection and in particular must not be a NULL pointer. +*/ +SQLITE_API sqlite3_stmt *SQLITE_STDCALL sqlite3_next_stmt(sqlite3 *pDb, sqlite3_stmt *pStmt); + +/* +** CAPI3REF: Commit And Rollback Notification Callbacks +** METHOD: sqlite3 +** +** ^The sqlite3_commit_hook() interface registers a callback +** function to be invoked whenever a transaction is [COMMIT | committed]. +** ^Any callback set by a previous call to sqlite3_commit_hook() +** for the same database connection is overridden. +** ^The sqlite3_rollback_hook() interface registers a callback +** function to be invoked whenever a transaction is [ROLLBACK | rolled back]. +** ^Any callback set by a previous call to sqlite3_rollback_hook() +** for the same database connection is overridden. +** ^The pArg argument is passed through to the callback. +** ^If the callback on a commit hook function returns non-zero, +** then the commit is converted into a rollback. +** +** ^The sqlite3_commit_hook(D,C,P) and sqlite3_rollback_hook(D,C,P) functions +** return the P argument from the previous call of the same function +** on the same [database connection] D, or NULL for +** the first call for each function on D. +** +** The commit and rollback hook callbacks are not reentrant. +** The callback implementation must not do anything that will modify +** the database connection that invoked the callback. Any actions +** to modify the database connection must be deferred until after the +** completion of the [sqlite3_step()] call that triggered the commit +** or rollback hook in the first place. +** Note that running any other SQL statements, including SELECT statements, +** or merely calling [sqlite3_prepare_v2()] and [sqlite3_step()] will modify +** the database connections for the meaning of "modify" in this paragraph. +** +** ^Registering a NULL function disables the callback. +** +** ^When the commit hook callback routine returns zero, the [COMMIT] +** operation is allowed to continue normally. ^If the commit hook +** returns non-zero, then the [COMMIT] is converted into a [ROLLBACK]. 
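+**
+** For illustration only (a non-normative sketch; the names commitVeto and
+** fReadOnly are hypothetical), a commit hook that blocks commits while an
+** application-defined flag is set might look like this:
+**
+** <blockquote><pre>
+**   static int fReadOnly = 0;
+**   static int commitVeto(void *pArg){
+**     return *(int*)pArg;    /* non-zero turns the COMMIT into a ROLLBACK */
+**   }
+**   /* during application setup: */
+**   sqlite3_commit_hook(db, commitVeto, &fReadOnly);
+** </pre></blockquote>
+**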
+** ^The rollback hook is invoked on a rollback that results from a commit +** hook returning non-zero, just as it would be with any other rollback. +** +** ^For the purposes of this API, a transaction is said to have been +** rolled back if an explicit "ROLLBACK" statement is executed, or +** an error or constraint causes an implicit rollback to occur. +** ^The rollback callback is not invoked if a transaction is +** automatically rolled back because the database connection is closed. +** +** See also the [sqlite3_update_hook()] interface. +*/ +SQLITE_API void *SQLITE_STDCALL sqlite3_commit_hook(sqlite3*, int(*)(void*), void*); +SQLITE_API void *SQLITE_STDCALL sqlite3_rollback_hook(sqlite3*, void(*)(void *), void*); + +/* +** CAPI3REF: Data Change Notification Callbacks +** METHOD: sqlite3 +** +** ^The sqlite3_update_hook() interface registers a callback function +** with the [database connection] identified by the first argument +** to be invoked whenever a row is updated, inserted or deleted in +** a rowid table. +** ^Any callback set by a previous call to this function +** for the same database connection is overridden. +** +** ^The second argument is a pointer to the function to invoke when a +** row is updated, inserted or deleted in a rowid table. +** ^The first argument to the callback is a copy of the third argument +** to sqlite3_update_hook(). +** ^The second callback argument is one of [SQLITE_INSERT], [SQLITE_DELETE], +** or [SQLITE_UPDATE], depending on the operation that caused the callback +** to be invoked. +** ^The third and fourth arguments to the callback contain pointers to the +** database and table name containing the affected row. +** ^The final callback parameter is the [rowid] of the row. +** ^In the case of an update, this is the [rowid] after the update takes place. +** +** ^(The update hook is not invoked when internal system tables are +** modified (i.e. sqlite_master and sqlite_sequence).)^ +** ^The update hook is not invoked when [WITHOUT ROWID] tables are modified. +** +** ^In the current implementation, the update hook +** is not invoked when duplication rows are deleted because of an +** [ON CONFLICT | ON CONFLICT REPLACE] clause. ^Nor is the update hook +** invoked when rows are deleted using the [truncate optimization]. +** The exceptions defined in this paragraph might change in a future +** release of SQLite. +** +** The update hook implementation must not do anything that will modify +** the database connection that invoked the update hook. Any actions +** to modify the database connection must be deferred until after the +** completion of the [sqlite3_step()] call that triggered the update hook. +** Note that [sqlite3_prepare_v2()] and [sqlite3_step()] both modify their +** database connections for the meaning of "modify" in this paragraph. +** +** ^The sqlite3_update_hook(D,C,P) function +** returns the P argument from the previous call +** on the same [database connection] D, or NULL for +** the first call on D. +** +** See also the [sqlite3_commit_hook()] and [sqlite3_rollback_hook()] +** interfaces. +*/ +SQLITE_API void *SQLITE_STDCALL sqlite3_update_hook( + sqlite3*, + void(*)(void *,int ,char const *,char const *,sqlite3_int64), + void* +); + +/* +** CAPI3REF: Enable Or Disable Shared Pager Cache +** +** ^(This routine enables or disables the sharing of the database cache +** and schema data structures between [database connection | connections] +** to the same database. 
Sharing is enabled if the argument is true +** and disabled if the argument is false.)^ +** +** ^Cache sharing is enabled and disabled for an entire process. +** This is a change as of SQLite version 3.5.0. In prior versions of SQLite, +** sharing was enabled or disabled for each thread separately. +** +** ^(The cache sharing mode set by this interface effects all subsequent +** calls to [sqlite3_open()], [sqlite3_open_v2()], and [sqlite3_open16()]. +** Existing database connections continue use the sharing mode +** that was in effect at the time they were opened.)^ +** +** ^(This routine returns [SQLITE_OK] if shared cache was enabled or disabled +** successfully. An [error code] is returned otherwise.)^ +** +** ^Shared cache is disabled by default. But this might change in +** future releases of SQLite. Applications that care about shared +** cache setting should set it explicitly. +** +** Note: This method is disabled on MacOS X 10.7 and iOS version 5.0 +** and will always return SQLITE_MISUSE. On those systems, +** shared cache mode should be enabled per-database connection via +** [sqlite3_open_v2()] with [SQLITE_OPEN_SHAREDCACHE]. +** +** This interface is threadsafe on processors where writing a +** 32-bit integer is atomic. +** +** See Also: [SQLite Shared-Cache Mode] +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_enable_shared_cache(int); + +/* +** CAPI3REF: Attempt To Free Heap Memory +** +** ^The sqlite3_release_memory() interface attempts to free N bytes +** of heap memory by deallocating non-essential memory allocations +** held by the database library. Memory used to cache database +** pages to improve performance is an example of non-essential memory. +** ^sqlite3_release_memory() returns the number of bytes actually freed, +** which might be more or less than the amount requested. +** ^The sqlite3_release_memory() routine is a no-op returning zero +** if SQLite is not compiled with [SQLITE_ENABLE_MEMORY_MANAGEMENT]. +** +** See also: [sqlite3_db_release_memory()] +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_release_memory(int); + +/* +** CAPI3REF: Free Memory Used By A Database Connection +** METHOD: sqlite3 +** +** ^The sqlite3_db_release_memory(D) interface attempts to free as much heap +** memory as possible from database connection D. Unlike the +** [sqlite3_release_memory()] interface, this interface is in effect even +** when the [SQLITE_ENABLE_MEMORY_MANAGEMENT] compile-time option is +** omitted. +** +** See also: [sqlite3_release_memory()] +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_db_release_memory(sqlite3*); + +/* +** CAPI3REF: Impose A Limit On Heap Size +** +** ^The sqlite3_soft_heap_limit64() interface sets and/or queries the +** soft limit on the amount of heap memory that may be allocated by SQLite. +** ^SQLite strives to keep heap memory utilization below the soft heap +** limit by reducing the number of pages held in the page cache +** as heap memory usages approaches the limit. +** ^The soft heap limit is "soft" because even though SQLite strives to stay +** below the limit, it will exceed the limit rather than generate +** an [SQLITE_NOMEM] error. In other words, the soft heap limit +** is advisory only. +** +** ^The return value from sqlite3_soft_heap_limit64() is the size of +** the soft heap limit prior to the call, or negative in the case of an +** error. ^If the argument N is negative +** then no change is made to the soft heap limit. 
Hence, the current +** size of the soft heap limit can be determined by invoking +** sqlite3_soft_heap_limit64() with a negative argument. +** +** ^If the argument N is zero then the soft heap limit is disabled. +** +** ^(The soft heap limit is not enforced in the current implementation +** if one or more of following conditions are true: +** +** <ul> +** <li> The soft heap limit is set to zero. +** <li> Memory accounting is disabled using a combination of the +** [sqlite3_config]([SQLITE_CONFIG_MEMSTATUS],...) start-time option and +** the [SQLITE_DEFAULT_MEMSTATUS] compile-time option. +** <li> An alternative page cache implementation is specified using +** [sqlite3_config]([SQLITE_CONFIG_PCACHE2],...). +** <li> The page cache allocates from its own memory pool supplied +** by [sqlite3_config]([SQLITE_CONFIG_PAGECACHE],...) rather than +** from the heap. +** </ul>)^ +** +** Beginning with SQLite version 3.7.3, the soft heap limit is enforced +** regardless of whether or not the [SQLITE_ENABLE_MEMORY_MANAGEMENT] +** compile-time option is invoked. With [SQLITE_ENABLE_MEMORY_MANAGEMENT], +** the soft heap limit is enforced on every memory allocation. Without +** [SQLITE_ENABLE_MEMORY_MANAGEMENT], the soft heap limit is only enforced +** when memory is allocated by the page cache. Testing suggests that because +** the page cache is the predominate memory user in SQLite, most +** applications will achieve adequate soft heap limit enforcement without +** the use of [SQLITE_ENABLE_MEMORY_MANAGEMENT]. +** +** The circumstances under which SQLite will enforce the soft heap limit may +** changes in future releases of SQLite. +*/ +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_soft_heap_limit64(sqlite3_int64 N); + +/* +** CAPI3REF: Deprecated Soft Heap Limit Interface +** DEPRECATED +** +** This is a deprecated version of the [sqlite3_soft_heap_limit64()] +** interface. This routine is provided for historical compatibility +** only. All new applications should use the +** [sqlite3_soft_heap_limit64()] interface rather than this one. +*/ +SQLITE_API SQLITE_DEPRECATED void SQLITE_STDCALL sqlite3_soft_heap_limit(int N); + + +/* +** CAPI3REF: Extract Metadata About A Column Of A Table +** METHOD: sqlite3 +** +** ^(The sqlite3_table_column_metadata(X,D,T,C,....) routine returns +** information about column C of table T in database D +** on [database connection] X.)^ ^The sqlite3_table_column_metadata() +** interface returns SQLITE_OK and fills in the non-NULL pointers in +** the final five arguments with appropriate values if the specified +** column exists. ^The sqlite3_table_column_metadata() interface returns +** SQLITE_ERROR and if the specified column does not exist. +** ^If the column-name parameter to sqlite3_table_column_metadata() is a +** NULL pointer, then this routine simply checks for the existance of the +** table and returns SQLITE_OK if the table exists and SQLITE_ERROR if it +** does not. +** +** ^The column is identified by the second, third and fourth parameters to +** this function. ^(The second parameter is either the name of the database +** (i.e. "main", "temp", or an attached database) containing the specified +** table or NULL.)^ ^If it is NULL, then all attached databases are searched +** for the table using the same algorithm used by the database engine to +** resolve unqualified table references. +** +** ^The third and fourth parameters to this function are the table and column +** name of the desired column, respectively. 
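+**
+** By way of a non-normative sketch (the database handle db and the table
+** and column names "t1" and "x" used here are hypothetical), a typical call
+** looks like this:
+**
+** <blockquote><pre>
+**   const char *zDataType = 0, *zCollSeq = 0;
+**   int notNull = 0, primaryKey = 0, autoInc = 0;
+**   int rc = sqlite3_table_column_metadata(db, "main", "t1", "x",
+**                &zDataType, &zCollSeq, &notNull, &primaryKey, &autoInc);
+** </pre></blockquote>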
+** +** ^Metadata is returned by writing to the memory locations passed as the 5th +** and subsequent parameters to this function. ^Any of these arguments may be +** NULL, in which case the corresponding element of metadata is omitted. +** +** ^(<blockquote> +** <table border="1"> +** <tr><th> Parameter <th> Output<br>Type <th> Description +** +** <tr><td> 5th <td> const char* <td> Data type +** <tr><td> 6th <td> const char* <td> Name of default collation sequence +** <tr><td> 7th <td> int <td> True if column has a NOT NULL constraint +** <tr><td> 8th <td> int <td> True if column is part of the PRIMARY KEY +** <tr><td> 9th <td> int <td> True if column is [AUTOINCREMENT] +** </table> +** </blockquote>)^ +** +** ^The memory pointed to by the character pointers returned for the +** declaration type and collation sequence is valid until the next +** call to any SQLite API function. +** +** ^If the specified table is actually a view, an [error code] is returned. +** +** ^If the specified column is "rowid", "oid" or "_rowid_" and the table +** is not a [WITHOUT ROWID] table and an +** [INTEGER PRIMARY KEY] column has been explicitly declared, then the output +** parameters are set for the explicitly declared column. ^(If there is no +** [INTEGER PRIMARY KEY] column, then the outputs +** for the [rowid] are set as follows: +** +** <pre> +** data type: "INTEGER" +** collation sequence: "BINARY" +** not null: 0 +** primary key: 1 +** auto increment: 0 +** </pre>)^ +** +** ^This function causes all database schemas to be read from disk and +** parsed, if that has not already been done, and returns an error if +** any errors are encountered while loading the schema. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_table_column_metadata( + sqlite3 *db, /* Connection handle */ + const char *zDbName, /* Database name or NULL */ + const char *zTableName, /* Table name */ + const char *zColumnName, /* Column name */ + char const **pzDataType, /* OUTPUT: Declared data type */ + char const **pzCollSeq, /* OUTPUT: Collation sequence name */ + int *pNotNull, /* OUTPUT: True if NOT NULL constraint exists */ + int *pPrimaryKey, /* OUTPUT: True if column part of PK */ + int *pAutoinc /* OUTPUT: True if column is auto-increment */ +); + +/* +** CAPI3REF: Load An Extension +** METHOD: sqlite3 +** +** ^This interface loads an SQLite extension library from the named file. +** +** ^The sqlite3_load_extension() interface attempts to load an +** [SQLite extension] library contained in the file zFile. If +** the file cannot be loaded directly, attempts are made to load +** with various operating-system specific extensions added. +** So for example, if "samplelib" cannot be loaded, then names like +** "samplelib.so" or "samplelib.dylib" or "samplelib.dll" might +** be tried also. +** +** ^The entry point is zProc. +** ^(zProc may be 0, in which case SQLite will try to come up with an +** entry point name on its own. It first tries "sqlite3_extension_init". +** If that does not work, it constructs a name "sqlite3_X_init" where the +** X is consists of the lower-case equivalent of all ASCII alphabetic +** characters in the filename from the last "/" to the first following +** "." and omitting any initial "lib".)^ +** ^The sqlite3_load_extension() interface returns +** [SQLITE_OK] on success and [SQLITE_ERROR] if something goes wrong. 
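+**
+** For illustration only (a non-normative sketch; the extension filename
+** "./myext" is hypothetical), a guarded load might be coded as:
+**
+** <blockquote><pre>
+**   char *zErrMsg = 0;
+**   sqlite3_enable_load_extension(db, 1);
+**   if( sqlite3_load_extension(db, "./myext", 0, &zErrMsg)!=SQLITE_OK ){
+**     /* report zErrMsg to the user, then release it */
+**     sqlite3_free(zErrMsg);
+**   }
+**   sqlite3_enable_load_extension(db, 0);
+** </pre></blockquote>
+**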
+** ^If an error occurs and pzErrMsg is not 0, then the +** [sqlite3_load_extension()] interface shall attempt to +** fill *pzErrMsg with error message text stored in memory +** obtained from [sqlite3_malloc()]. The calling function +** should free this memory by calling [sqlite3_free()]. +** +** ^Extension loading must be enabled using +** [sqlite3_enable_load_extension()] prior to calling this API, +** otherwise an error will be returned. +** +** See also the [load_extension() SQL function]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_load_extension( + sqlite3 *db, /* Load the extension into this database connection */ + const char *zFile, /* Name of the shared library containing extension */ + const char *zProc, /* Entry point. Derived from zFile if 0 */ + char **pzErrMsg /* Put error message here if not 0 */ +); + +/* +** CAPI3REF: Enable Or Disable Extension Loading +** METHOD: sqlite3 +** +** ^So as not to open security holes in older applications that are +** unprepared to deal with [extension loading], and as a means of disabling +** [extension loading] while evaluating user-entered SQL, the following API +** is provided to turn the [sqlite3_load_extension()] mechanism on and off. +** +** ^Extension loading is off by default. +** ^Call the sqlite3_enable_load_extension() routine with onoff==1 +** to turn extension loading on and call it with onoff==0 to turn +** it back off again. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_enable_load_extension(sqlite3 *db, int onoff); + +/* +** CAPI3REF: Automatically Load Statically Linked Extensions +** +** ^This interface causes the xEntryPoint() function to be invoked for +** each new [database connection] that is created. The idea here is that +** xEntryPoint() is the entry point for a statically linked [SQLite extension] +** that is to be automatically loaded into all new database connections. +** +** ^(Even though the function prototype shows that xEntryPoint() takes +** no arguments and returns void, SQLite invokes xEntryPoint() with three +** arguments and expects and integer result as if the signature of the +** entry point where as follows: +** +** <blockquote><pre> +**   int xEntryPoint( +**   sqlite3 *db, +**   const char **pzErrMsg, +**   const struct sqlite3_api_routines *pThunk +**   ); +** </pre></blockquote>)^ +** +** If the xEntryPoint routine encounters an error, it should make *pzErrMsg +** point to an appropriate error message (obtained from [sqlite3_mprintf()]) +** and return an appropriate [error code]. ^SQLite ensures that *pzErrMsg +** is NULL before calling the xEntryPoint(). ^SQLite will invoke +** [sqlite3_free()] on *pzErrMsg after xEntryPoint() returns. ^If any +** xEntryPoint() returns an error, the [sqlite3_open()], [sqlite3_open16()], +** or [sqlite3_open_v2()] call that provoked the xEntryPoint() will fail. +** +** ^Calling sqlite3_auto_extension(X) with an entry point X that is already +** on the list of automatic extensions is a harmless no-op. ^No entry point +** will be called more than once for each database connection that is opened. +** +** See also: [sqlite3_reset_auto_extension()] +** and [sqlite3_cancel_auto_extension()] +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_auto_extension(void (*xEntryPoint)(void)); + +/* +** CAPI3REF: Cancel Automatic Extension Loading +** +** ^The [sqlite3_cancel_auto_extension(X)] interface unregisters the +** initialization routine X that was registered using a prior call to +** [sqlite3_auto_extension(X)]. 
^The [sqlite3_cancel_auto_extension(X)] +** routine returns 1 if initialization routine X was successfully +** unregistered and it returns 0 if X was not on the list of initialization +** routines. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_cancel_auto_extension(void (*xEntryPoint)(void)); + +/* +** CAPI3REF: Reset Automatic Extension Loading +** +** ^This interface disables all automatic extensions previously +** registered using [sqlite3_auto_extension()]. +*/ +SQLITE_API void SQLITE_STDCALL sqlite3_reset_auto_extension(void); + +/* +** The interface to the virtual-table mechanism is currently considered +** to be experimental. The interface might change in incompatible ways. +** If this is a problem for you, do not use the interface at this time. +** +** When the virtual-table mechanism stabilizes, we will declare the +** interface fixed, support it indefinitely, and remove this comment. +*/ + +/* +** Structures used by the virtual table interface +*/ +typedef struct sqlite3_vtab sqlite3_vtab; +typedef struct sqlite3_index_info sqlite3_index_info; +typedef struct sqlite3_vtab_cursor sqlite3_vtab_cursor; +typedef struct sqlite3_module sqlite3_module; + +/* +** CAPI3REF: Virtual Table Object +** KEYWORDS: sqlite3_module {virtual table module} +** +** This structure, sometimes called a "virtual table module", +** defines the implementation of a [virtual tables]. +** This structure consists mostly of methods for the module. +** +** ^A virtual table module is created by filling in a persistent +** instance of this structure and passing a pointer to that instance +** to [sqlite3_create_module()] or [sqlite3_create_module_v2()]. +** ^The registration remains valid until it is replaced by a different +** module or until the [database connection] closes. The content +** of this structure must not change while it is registered with +** any database connection. +*/ +struct sqlite3_module { + int iVersion; + int (*xCreate)(sqlite3*, void *pAux, + int argc, const char *const*argv, + sqlite3_vtab **ppVTab, char**); + int (*xConnect)(sqlite3*, void *pAux, + int argc, const char *const*argv, + sqlite3_vtab **ppVTab, char**); + int (*xBestIndex)(sqlite3_vtab *pVTab, sqlite3_index_info*); + int (*xDisconnect)(sqlite3_vtab *pVTab); + int (*xDestroy)(sqlite3_vtab *pVTab); + int (*xOpen)(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCursor); + int (*xClose)(sqlite3_vtab_cursor*); + int (*xFilter)(sqlite3_vtab_cursor*, int idxNum, const char *idxStr, + int argc, sqlite3_value **argv); + int (*xNext)(sqlite3_vtab_cursor*); + int (*xEof)(sqlite3_vtab_cursor*); + int (*xColumn)(sqlite3_vtab_cursor*, sqlite3_context*, int); + int (*xRowid)(sqlite3_vtab_cursor*, sqlite3_int64 *pRowid); + int (*xUpdate)(sqlite3_vtab *, int, sqlite3_value **, sqlite3_int64 *); + int (*xBegin)(sqlite3_vtab *pVTab); + int (*xSync)(sqlite3_vtab *pVTab); + int (*xCommit)(sqlite3_vtab *pVTab); + int (*xRollback)(sqlite3_vtab *pVTab); + int (*xFindFunction)(sqlite3_vtab *pVtab, int nArg, const char *zName, + void (**pxFunc)(sqlite3_context*,int,sqlite3_value**), + void **ppArg); + int (*xRename)(sqlite3_vtab *pVtab, const char *zNew); + /* The methods above are in version 1 of the sqlite_module object. Those + ** below are for version 2 and greater. 
*/ + int (*xSavepoint)(sqlite3_vtab *pVTab, int); + int (*xRelease)(sqlite3_vtab *pVTab, int); + int (*xRollbackTo)(sqlite3_vtab *pVTab, int); +}; + +/* +** CAPI3REF: Virtual Table Indexing Information +** KEYWORDS: sqlite3_index_info +** +** The sqlite3_index_info structure and its substructures is used as part +** of the [virtual table] interface to +** pass information into and receive the reply from the [xBestIndex] +** method of a [virtual table module]. The fields under **Inputs** are the +** inputs to xBestIndex and are read-only. xBestIndex inserts its +** results into the **Outputs** fields. +** +** ^(The aConstraint[] array records WHERE clause constraints of the form: +** +** <blockquote>column OP expr</blockquote> +** +** where OP is =, <, <=, >, or >=.)^ ^(The particular operator is +** stored in aConstraint[].op using one of the +** [SQLITE_INDEX_CONSTRAINT_EQ | SQLITE_INDEX_CONSTRAINT_ values].)^ +** ^(The index of the column is stored in +** aConstraint[].iColumn.)^ ^(aConstraint[].usable is TRUE if the +** expr on the right-hand side can be evaluated (and thus the constraint +** is usable) and false if it cannot.)^ +** +** ^The optimizer automatically inverts terms of the form "expr OP column" +** and makes other simplifications to the WHERE clause in an attempt to +** get as many WHERE clause terms into the form shown above as possible. +** ^The aConstraint[] array only reports WHERE clause terms that are +** relevant to the particular virtual table being queried. +** +** ^Information about the ORDER BY clause is stored in aOrderBy[]. +** ^Each term of aOrderBy records a column of the ORDER BY clause. +** +** The colUsed field indicates which columns of the virtual table may be +** required by the current scan. Virtual table columns are numbered from +** zero in the order in which they appear within the CREATE TABLE statement +** passed to sqlite3_declare_vtab(). For the first 63 columns (columns 0-62), +** the corresponding bit is set within the colUsed mask if the column may be +** required by SQLite. If the table has at least 64 columns and any column +** to the right of the first 63 is required, then bit 63 of colUsed is also +** set. In other words, column iCol may be required if the expression +** (colUsed & ((sqlite3_uint64)1 << (iCol>=63 ? 63 : iCol))) evaluates to +** non-zero. +** +** The [xBestIndex] method must fill aConstraintUsage[] with information +** about what parameters to pass to xFilter. ^If argvIndex>0 then +** the right-hand side of the corresponding aConstraint[] is evaluated +** and becomes the argvIndex-th entry in argv. ^(If aConstraintUsage[].omit +** is true, then the constraint is assumed to be fully handled by the +** virtual table and is not checked again by SQLite.)^ +** +** ^The idxNum and idxPtr values are recorded and passed into the +** [xFilter] method. +** ^[sqlite3_free()] is used to free idxPtr if and only if +** needToFreeIdxPtr is true. +** +** ^The orderByConsumed means that output from [xFilter]/[xNext] will occur in +** the correct order to satisfy the ORDER BY clause so that no separate +** sorting step is required. +** +** ^The estimatedCost value is an estimate of the cost of a particular +** strategy. A cost of N indicates that the cost of the strategy is similar +** to a linear scan of an SQLite table with N rows. A cost of log(N) +** indicates that the expense of the operation is similar to that of a +** binary search on a unique indexed field of an SQLite table with N rows. 
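+**
+** The following non-normative sketch (all "my" names are hypothetical) shows
+** an xBestIndex implementation that prefers an equality constraint on the
+** virtual table's first column and otherwise reports a full-scan cost:
+**
+** <blockquote><pre>
+**   static int myBestIndex(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo){
+**     int i;
+**     for(i=0; i<pInfo->nConstraint; i++){
+**       struct sqlite3_index_constraint *p = &pInfo->aConstraint[i];
+**       if( p->usable && p->iColumn==0 && p->op==SQLITE_INDEX_CONSTRAINT_EQ ){
+**         pInfo->aConstraintUsage[i].argvIndex = 1; /* RHS becomes argv[0] */
+**         pInfo->aConstraintUsage[i].omit = 1;   /* vtab enforces the test */
+**         pInfo->idxNum = 1;                     /* plan id seen by xFilter */
+**         pInfo->estimatedCost = 10.0;           /* like an index lookup */
+**         return SQLITE_OK;
+**       }
+**     }
+**     pInfo->idxNum = 0;                         /* full scan */
+**     pInfo->estimatedCost = 1000000.0;
+**     return SQLITE_OK;
+**   }
+** </pre></blockquote>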
+** +** ^The estimatedRows value is an estimate of the number of rows that +** will be returned by the strategy. +** +** The xBestIndex method may optionally populate the idxFlags field with a +** mask of SQLITE_INDEX_SCAN_* flags. Currently there is only one such flag - +** SQLITE_INDEX_SCAN_UNIQUE. If the xBestIndex method sets this flag, SQLite +** assumes that the strategy may visit at most one row. +** +** Additionally, if xBestIndex sets the SQLITE_INDEX_SCAN_UNIQUE flag, then +** SQLite also assumes that if a call to the xUpdate() method is made as +** part of the same statement to delete or update a virtual table row and the +** implementation returns SQLITE_CONSTRAINT, then there is no need to rollback +** any database changes. In other words, if the xUpdate() returns +** SQLITE_CONSTRAINT, the database contents must be exactly as they were +** before xUpdate was called. By contrast, if SQLITE_INDEX_SCAN_UNIQUE is not +** set and xUpdate returns SQLITE_CONSTRAINT, any database changes made by +** the xUpdate method are automatically rolled back by SQLite. +** +** IMPORTANT: The estimatedRows field was added to the sqlite3_index_info +** structure for SQLite version 3.8.2. If a virtual table extension is +** used with an SQLite version earlier than 3.8.2, the results of attempting +** to read or write the estimatedRows field are undefined (but are likely +** to included crashing the application). The estimatedRows field should +** therefore only be used if [sqlite3_libversion_number()] returns a +** value greater than or equal to 3008002. Similarly, the idxFlags field +** was added for version 3.9.0. It may therefore only be used if +** sqlite3_libversion_number() returns a value greater than or equal to +** 3009000. +*/ +struct sqlite3_index_info { + /* Inputs */ + int nConstraint; /* Number of entries in aConstraint */ + struct sqlite3_index_constraint { + int iColumn; /* Column constrained. -1 for ROWID */ + unsigned char op; /* Constraint operator */ + unsigned char usable; /* True if this constraint is usable */ + int iTermOffset; /* Used internally - xBestIndex should ignore */ + } *aConstraint; /* Table of WHERE clause constraints */ + int nOrderBy; /* Number of terms in the ORDER BY clause */ + struct sqlite3_index_orderby { + int iColumn; /* Column number */ + unsigned char desc; /* True for DESC. False for ASC. 
*/ + } *aOrderBy; /* The ORDER BY clause */ + /* Outputs */ + struct sqlite3_index_constraint_usage { + int argvIndex; /* if >0, constraint is part of argv to xFilter */ + unsigned char omit; /* Do not code a test for this constraint */ + } *aConstraintUsage; + int idxNum; /* Number used to identify the index */ + char *idxStr; /* String, possibly obtained from sqlite3_malloc */ + int needToFreeIdxStr; /* Free idxStr using sqlite3_free() if true */ + int orderByConsumed; /* True if output is already ordered */ + double estimatedCost; /* Estimated cost of using this index */ + /* Fields below are only available in SQLite 3.8.2 and later */ + sqlite3_int64 estimatedRows; /* Estimated number of rows returned */ + /* Fields below are only available in SQLite 3.9.0 and later */ + int idxFlags; /* Mask of SQLITE_INDEX_SCAN_* flags */ + /* Fields below are only available in SQLite 3.10.0 and later */ + sqlite3_uint64 colUsed; /* Input: Mask of columns used by statement */ +}; + +/* +** CAPI3REF: Virtual Table Scan Flags +*/ +#define SQLITE_INDEX_SCAN_UNIQUE 1 /* Scan visits at most 1 row */ + +/* +** CAPI3REF: Virtual Table Constraint Operator Codes +** +** These macros defined the allowed values for the +** [sqlite3_index_info].aConstraint[].op field. Each value represents +** an operator that is part of a constraint term in the wHERE clause of +** a query that uses a [virtual table]. +*/ +#define SQLITE_INDEX_CONSTRAINT_EQ 2 +#define SQLITE_INDEX_CONSTRAINT_GT 4 +#define SQLITE_INDEX_CONSTRAINT_LE 8 +#define SQLITE_INDEX_CONSTRAINT_LT 16 +#define SQLITE_INDEX_CONSTRAINT_GE 32 +#define SQLITE_INDEX_CONSTRAINT_MATCH 64 +#define SQLITE_INDEX_CONSTRAINT_LIKE 65 +#define SQLITE_INDEX_CONSTRAINT_GLOB 66 +#define SQLITE_INDEX_CONSTRAINT_REGEXP 67 + +/* +** CAPI3REF: Register A Virtual Table Implementation +** METHOD: sqlite3 +** +** ^These routines are used to register a new [virtual table module] name. +** ^Module names must be registered before +** creating a new [virtual table] using the module and before using a +** preexisting [virtual table] for the module. +** +** ^The module name is registered on the [database connection] specified +** by the first parameter. ^The name of the module is given by the +** second parameter. ^The third parameter is a pointer to +** the implementation of the [virtual table module]. ^The fourth +** parameter is an arbitrary client data pointer that is passed through +** into the [xCreate] and [xConnect] methods of the virtual table module +** when a new virtual table is be being created or reinitialized. +** +** ^The sqlite3_create_module_v2() interface has a fifth parameter which +** is a pointer to a destructor for the pClientData. ^SQLite will +** invoke the destructor function (if it is not NULL) when SQLite +** no longer needs the pClientData pointer. ^The destructor will also +** be invoked if the call to sqlite3_create_module_v2() fails. +** ^The sqlite3_create_module() +** interface is equivalent to sqlite3_create_module_v2() with a NULL +** destructor. 
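+**
+** A non-normative sketch of registering a read-only virtual table module
+** (every identifier beginning with "my" below is hypothetical; a real module
+** must supply working method implementations):
+**
+** <blockquote><pre>
+**   static sqlite3_module myModule = {
+**     0,                            /* iVersion */
+**     myCreate, myConnect, myBestIndex, myDisconnect, myDestroy,
+**     myOpen, myClose, myFilter, myNext, myEof, myColumn, myRowid,
+**     0, 0, 0, 0, 0,                /* xUpdate .. xRollback (read-only) */
+**     0, 0                          /* xFindFunction, xRename */
+**   };
+**   int rc = sqlite3_create_module_v2(db, "mymodule", &myModule, 0, 0);
+** </pre></blockquote>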
+*/ +SQLITE_API int SQLITE_STDCALL sqlite3_create_module( + sqlite3 *db, /* SQLite connection to register module with */ + const char *zName, /* Name of the module */ + const sqlite3_module *p, /* Methods for the module */ + void *pClientData /* Client data for xCreate/xConnect */ +); +SQLITE_API int SQLITE_STDCALL sqlite3_create_module_v2( + sqlite3 *db, /* SQLite connection to register module with */ + const char *zName, /* Name of the module */ + const sqlite3_module *p, /* Methods for the module */ + void *pClientData, /* Client data for xCreate/xConnect */ + void(*xDestroy)(void*) /* Module destructor function */ +); + +/* +** CAPI3REF: Virtual Table Instance Object +** KEYWORDS: sqlite3_vtab +** +** Every [virtual table module] implementation uses a subclass +** of this object to describe a particular instance +** of the [virtual table]. Each subclass will +** be tailored to the specific needs of the module implementation. +** The purpose of this superclass is to define certain fields that are +** common to all module implementations. +** +** ^Virtual tables methods can set an error message by assigning a +** string obtained from [sqlite3_mprintf()] to zErrMsg. The method should +** take care that any prior string is freed by a call to [sqlite3_free()] +** prior to assigning a new string to zErrMsg. ^After the error message +** is delivered up to the client application, the string will be automatically +** freed by sqlite3_free() and the zErrMsg field will be zeroed. +*/ +struct sqlite3_vtab { + const sqlite3_module *pModule; /* The module for this virtual table */ + int nRef; /* Number of open cursors */ + char *zErrMsg; /* Error message from sqlite3_mprintf() */ + /* Virtual table implementations will typically add additional fields */ +}; + +/* +** CAPI3REF: Virtual Table Cursor Object +** KEYWORDS: sqlite3_vtab_cursor {virtual table cursor} +** +** Every [virtual table module] implementation uses a subclass of the +** following structure to describe cursors that point into the +** [virtual table] and are used +** to loop through the virtual table. Cursors are created using the +** [sqlite3_module.xOpen | xOpen] method of the module and are destroyed +** by the [sqlite3_module.xClose | xClose] method. Cursors are used +** by the [xFilter], [xNext], [xEof], [xColumn], and [xRowid] methods +** of the module. Each module implementation will define +** the content of a cursor structure to suit its own needs. +** +** This superclass exists in order to define fields of the cursor that +** are common to all implementations. +*/ +struct sqlite3_vtab_cursor { + sqlite3_vtab *pVtab; /* Virtual table of this cursor */ + /* Virtual table implementations will typically add additional fields */ +}; + +/* +** CAPI3REF: Declare The Schema Of A Virtual Table +** +** ^The [xCreate] and [xConnect] methods of a +** [virtual table module] call this interface +** to declare the format (the names and datatypes of the columns) of +** the virtual tables they implement. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_declare_vtab(sqlite3*, const char *zSQL); + +/* +** CAPI3REF: Overload A Function For A Virtual Table +** METHOD: sqlite3 +** +** ^(Virtual tables can provide alternative implementations of functions +** using the [xFindFunction] method of the [virtual table module]. +** But global versions of those functions +** must exist in order to be overloaded.)^ +** +** ^(This API makes sure a global version of a function with a particular +** name and number of parameters exists. 
If no such function exists +** before this API is called, a new function is created.)^ ^The implementation +** of the new function always causes an exception to be thrown. So +** the new function is not good for anything by itself. Its only +** purpose is to be a placeholder function that can be overloaded +** by a [virtual table]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_overload_function(sqlite3*, const char *zFuncName, int nArg); + +/* +** The interface to the virtual-table mechanism defined above (back up +** to a comment remarkably similar to this one) is currently considered +** to be experimental. The interface might change in incompatible ways. +** If this is a problem for you, do not use the interface at this time. +** +** When the virtual-table mechanism stabilizes, we will declare the +** interface fixed, support it indefinitely, and remove this comment. +*/ + +/* +** CAPI3REF: A Handle To An Open BLOB +** KEYWORDS: {BLOB handle} {BLOB handles} +** +** An instance of this object represents an open BLOB on which +** [sqlite3_blob_open | incremental BLOB I/O] can be performed. +** ^Objects of this type are created by [sqlite3_blob_open()] +** and destroyed by [sqlite3_blob_close()]. +** ^The [sqlite3_blob_read()] and [sqlite3_blob_write()] interfaces +** can be used to read or write small subsections of the BLOB. +** ^The [sqlite3_blob_bytes()] interface returns the size of the BLOB in bytes. +*/ +typedef struct sqlite3_blob sqlite3_blob; + +/* +** CAPI3REF: Open A BLOB For Incremental I/O +** METHOD: sqlite3 +** CONSTRUCTOR: sqlite3_blob +** +** ^(This interfaces opens a [BLOB handle | handle] to the BLOB located +** in row iRow, column zColumn, table zTable in database zDb; +** in other words, the same BLOB that would be selected by: +** +** <pre> +** SELECT zColumn FROM zDb.zTable WHERE [rowid] = iRow; +** </pre>)^ +** +** ^(Parameter zDb is not the filename that contains the database, but +** rather the symbolic name of the database. For attached databases, this is +** the name that appears after the AS keyword in the [ATTACH] statement. +** For the main database file, the database name is "main". For TEMP +** tables, the database name is "temp".)^ +** +** ^If the flags parameter is non-zero, then the BLOB is opened for read +** and write access. ^If the flags parameter is zero, the BLOB is opened for +** read-only access. +** +** ^(On success, [SQLITE_OK] is returned and the new [BLOB handle] is stored +** in *ppBlob. Otherwise an [error code] is returned and, unless the error +** code is SQLITE_MISUSE, *ppBlob is set to NULL.)^ ^This means that, provided +** the API is not misused, it is always safe to call [sqlite3_blob_close()] +** on *ppBlob after this function it returns. +** +** This function fails with SQLITE_ERROR if any of the following are true: +** <ul> +** <li> ^(Database zDb does not exist)^, +** <li> ^(Table zTable does not exist within database zDb)^, +** <li> ^(Table zTable is a WITHOUT ROWID table)^, +** <li> ^(Column zColumn does not exist)^, +** <li> ^(Row iRow is not present in the table)^, +** <li> ^(The specified column of row iRow contains a value that is not +** a TEXT or BLOB value)^, +** <li> ^(Column zColumn is part of an index, PRIMARY KEY or UNIQUE +** constraint and the blob is being opened for read/write access)^, +** <li> ^([foreign key constraints | Foreign key constraints] are enabled, +** column zColumn is part of a [child key] definition and the blob is +** being opened for read/write access)^. 
+** </ul> +** +** ^Unless it returns SQLITE_MISUSE, this function sets the +** [database connection] error code and message accessible via +** [sqlite3_errcode()] and [sqlite3_errmsg()] and related functions. +** +** +** ^(If the row that a BLOB handle points to is modified by an +** [UPDATE], [DELETE], or by [ON CONFLICT] side-effects +** then the BLOB handle is marked as "expired". +** This is true if any column of the row is changed, even a column +** other than the one the BLOB handle is open on.)^ +** ^Calls to [sqlite3_blob_read()] and [sqlite3_blob_write()] for +** an expired BLOB handle fail with a return code of [SQLITE_ABORT]. +** ^(Changes written into a BLOB prior to the BLOB expiring are not +** rolled back by the expiration of the BLOB. Such changes will eventually +** commit if the transaction continues to completion.)^ +** +** ^Use the [sqlite3_blob_bytes()] interface to determine the size of +** the opened blob. ^The size of a blob may not be changed by this +** interface. Use the [UPDATE] SQL command to change the size of a +** blob. +** +** ^The [sqlite3_bind_zeroblob()] and [sqlite3_result_zeroblob()] interfaces +** and the built-in [zeroblob] SQL function may be used to create a +** zero-filled blob to read or write using the incremental-blob interface. +** +** To avoid a resource leak, every open [BLOB handle] should eventually +** be released by a call to [sqlite3_blob_close()]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_blob_open( + sqlite3*, + const char *zDb, + const char *zTable, + const char *zColumn, + sqlite3_int64 iRow, + int flags, + sqlite3_blob **ppBlob +); + +/* +** CAPI3REF: Move a BLOB Handle to a New Row +** METHOD: sqlite3_blob +** +** ^This function is used to move an existing blob handle so that it points +** to a different row of the same database table. ^The new row is identified +** by the rowid value passed as the second argument. Only the row can be +** changed. ^The database, table and column on which the blob handle is open +** remain the same. Moving an existing blob handle to a new row can be +** faster than closing the existing handle and opening a new one. +** +** ^(The new row must meet the same criteria as for [sqlite3_blob_open()] - +** it must exist and there must be either a blob or text value stored in +** the nominated column.)^ ^If the new row is not present in the table, or if +** it does not contain a blob or text value, or if another error occurs, an +** SQLite error code is returned and the blob handle is considered aborted. +** ^All subsequent calls to [sqlite3_blob_read()], [sqlite3_blob_write()] or +** [sqlite3_blob_reopen()] on an aborted blob handle immediately return +** SQLITE_ABORT. ^Calling [sqlite3_blob_bytes()] on an aborted blob handle +** always returns zero. +** +** ^This function sets the database handle error code and message. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_blob_reopen(sqlite3_blob *, sqlite3_int64); + +/* +** CAPI3REF: Close A BLOB Handle +** DESTRUCTOR: sqlite3_blob +** +** ^This function closes an open [BLOB handle]. ^(The BLOB handle is closed +** unconditionally. Even if this routine returns an error code, the +** handle is still closed.)^ +** +** ^If the blob handle being closed was opened for read-write access, and if +** the database is in auto-commit mode and there are no other open read-write +** blob handles or active write statements, the current transaction is +** committed. 
^If an error occurs while committing the transaction, an error +** code is returned and the transaction rolled back. +** +** Calling this function with an argument that is not a NULL pointer or an +** open blob handle results in undefined behaviour. ^Calling this routine +** with a null pointer (such as would be returned by a failed call to +** [sqlite3_blob_open()]) is a harmless no-op. ^Otherwise, if this function +** is passed a valid open blob handle, the values returned by the +** sqlite3_errcode() and sqlite3_errmsg() functions are set before returning. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_blob_close(sqlite3_blob *); + +/* +** CAPI3REF: Return The Size Of An Open BLOB +** METHOD: sqlite3_blob +** +** ^Returns the size in bytes of the BLOB accessible via the +** successfully opened [BLOB handle] in its only argument. ^The +** incremental blob I/O routines can only read or overwriting existing +** blob content; they cannot change the size of a blob. +** +** This routine only works on a [BLOB handle] which has been created +** by a prior successful call to [sqlite3_blob_open()] and which has not +** been closed by [sqlite3_blob_close()]. Passing any other pointer in +** to this routine results in undefined and probably undesirable behavior. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_blob_bytes(sqlite3_blob *); + +/* +** CAPI3REF: Read Data From A BLOB Incrementally +** METHOD: sqlite3_blob +** +** ^(This function is used to read data from an open [BLOB handle] into a +** caller-supplied buffer. N bytes of data are copied into buffer Z +** from the open BLOB, starting at offset iOffset.)^ +** +** ^If offset iOffset is less than N bytes from the end of the BLOB, +** [SQLITE_ERROR] is returned and no data is read. ^If N or iOffset is +** less than zero, [SQLITE_ERROR] is returned and no data is read. +** ^The size of the blob (and hence the maximum value of N+iOffset) +** can be determined using the [sqlite3_blob_bytes()] interface. +** +** ^An attempt to read from an expired [BLOB handle] fails with an +** error code of [SQLITE_ABORT]. +** +** ^(On success, sqlite3_blob_read() returns SQLITE_OK. +** Otherwise, an [error code] or an [extended error code] is returned.)^ +** +** This routine only works on a [BLOB handle] which has been created +** by a prior successful call to [sqlite3_blob_open()] and which has not +** been closed by [sqlite3_blob_close()]. Passing any other pointer in +** to this routine results in undefined and probably undesirable behavior. +** +** See also: [sqlite3_blob_write()]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_blob_read(sqlite3_blob *, void *Z, int N, int iOffset); + +/* +** CAPI3REF: Write Data Into A BLOB Incrementally +** METHOD: sqlite3_blob +** +** ^(This function is used to write data into an open [BLOB handle] from a +** caller-supplied buffer. N bytes of data are copied from the buffer Z +** into the open BLOB, starting at offset iOffset.)^ +** +** ^(On success, sqlite3_blob_write() returns SQLITE_OK. +** Otherwise, an [error code] or an [extended error code] is returned.)^ +** ^Unless SQLITE_MISUSE is returned, this function sets the +** [database connection] error code and message accessible via +** [sqlite3_errcode()] and [sqlite3_errmsg()] and related functions. +** +** ^If the [BLOB handle] passed as the first argument was not opened for +** writing (the flags parameter to [sqlite3_blob_open()] was zero), +** this function returns [SQLITE_READONLY]. 
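+**
+** As a non-normative sketch (the table "images", column "data", and rowid
+** iRow are hypothetical, and the stored value is assumed to be a blob of at
+** least 16 bytes), incremental read-modify-write access might look like:
+**
+** <blockquote><pre>
+**   sqlite3_blob *pBlob = 0;
+**   int rc = sqlite3_blob_open(db, "main", "images", "data", iRow, 1, &pBlob);
+**   if( rc==SQLITE_OK ){
+**     unsigned char aHdr[16];
+**     rc = sqlite3_blob_read(pBlob, aHdr, 16, 0);
+**     if( rc==SQLITE_OK ){
+**       aHdr[0] = 0x7f;                            /* modify in place */
+**       rc = sqlite3_blob_write(pBlob, aHdr, 16, 0);
+**     }
+**   }
+**   sqlite3_blob_close(pBlob);   /* harmless no-op if pBlob is still NULL */
+** </pre></blockquote>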
+** +** This function may only modify the contents of the BLOB; it is +** not possible to increase the size of a BLOB using this API. +** ^If offset iOffset is less than N bytes from the end of the BLOB, +** [SQLITE_ERROR] is returned and no data is written. The size of the +** BLOB (and hence the maximum value of N+iOffset) can be determined +** using the [sqlite3_blob_bytes()] interface. ^If N or iOffset are less +** than zero [SQLITE_ERROR] is returned and no data is written. +** +** ^An attempt to write to an expired [BLOB handle] fails with an +** error code of [SQLITE_ABORT]. ^Writes to the BLOB that occurred +** before the [BLOB handle] expired are not rolled back by the +** expiration of the handle, though of course those changes might +** have been overwritten by the statement that expired the BLOB handle +** or by other independent statements. +** +** This routine only works on a [BLOB handle] which has been created +** by a prior successful call to [sqlite3_blob_open()] and which has not +** been closed by [sqlite3_blob_close()]. Passing any other pointer in +** to this routine results in undefined and probably undesirable behavior. +** +** See also: [sqlite3_blob_read()]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_blob_write(sqlite3_blob *, const void *z, int n, int iOffset); + +/* +** CAPI3REF: Virtual File System Objects +** +** A virtual filesystem (VFS) is an [sqlite3_vfs] object +** that SQLite uses to interact +** with the underlying operating system. Most SQLite builds come with a +** single default VFS that is appropriate for the host computer. +** New VFSes can be registered and existing VFSes can be unregistered. +** The following interfaces are provided. +** +** ^The sqlite3_vfs_find() interface returns a pointer to a VFS given its name. +** ^Names are case sensitive. +** ^Names are zero-terminated UTF-8 strings. +** ^If there is no match, a NULL pointer is returned. +** ^If zVfsName is NULL then the default VFS is returned. +** +** ^New VFSes are registered with sqlite3_vfs_register(). +** ^Each new VFS becomes the default VFS if the makeDflt flag is set. +** ^The same VFS can be registered multiple times without injury. +** ^To make an existing VFS into the default VFS, register it again +** with the makeDflt flag set. If two different VFSes with the +** same name are registered, the behavior is undefined. If a +** VFS is registered with a name that is NULL or an empty string, +** then the behavior is undefined. +** +** ^Unregister a VFS with the sqlite3_vfs_unregister() interface. +** ^(If the default VFS is unregistered, another VFS is chosen as +** the default. The choice for the new VFS is arbitrary.)^ +*/ +SQLITE_API sqlite3_vfs *SQLITE_STDCALL sqlite3_vfs_find(const char *zVfsName); +SQLITE_API int SQLITE_STDCALL sqlite3_vfs_register(sqlite3_vfs*, int makeDflt); +SQLITE_API int SQLITE_STDCALL sqlite3_vfs_unregister(sqlite3_vfs*); + +/* +** CAPI3REF: Mutexes +** +** The SQLite core uses these routines for thread +** synchronization. Though they are intended for internal +** use by SQLite, code that links against SQLite is +** permitted to use any of these routines. +** +** The SQLite source code contains multiple implementations +** of these mutex routines. An appropriate implementation +** is selected automatically at compile-time. 
The following +** implementations are available in the SQLite core: +** +** <ul> +** <li> SQLITE_MUTEX_PTHREADS +** <li> SQLITE_MUTEX_W32 +** <li> SQLITE_MUTEX_NOOP +** </ul> +** +** The SQLITE_MUTEX_NOOP implementation is a set of routines +** that does no real locking and is appropriate for use in +** a single-threaded application. The SQLITE_MUTEX_PTHREADS and +** SQLITE_MUTEX_W32 implementations are appropriate for use on Unix +** and Windows. +** +** If SQLite is compiled with the SQLITE_MUTEX_APPDEF preprocessor +** macro defined (with "-DSQLITE_MUTEX_APPDEF=1"), then no mutex +** implementation is included with the library. In this case the +** application must supply a custom mutex implementation using the +** [SQLITE_CONFIG_MUTEX] option of the sqlite3_config() function +** before calling sqlite3_initialize() or any other public sqlite3_ +** function that calls sqlite3_initialize(). +** +** ^The sqlite3_mutex_alloc() routine allocates a new +** mutex and returns a pointer to it. ^The sqlite3_mutex_alloc() +** routine returns NULL if it is unable to allocate the requested +** mutex. The argument to sqlite3_mutex_alloc() must one of these +** integer constants: +** +** <ul> +** <li> SQLITE_MUTEX_FAST +** <li> SQLITE_MUTEX_RECURSIVE +** <li> SQLITE_MUTEX_STATIC_MASTER +** <li> SQLITE_MUTEX_STATIC_MEM +** <li> SQLITE_MUTEX_STATIC_OPEN +** <li> SQLITE_MUTEX_STATIC_PRNG +** <li> SQLITE_MUTEX_STATIC_LRU +** <li> SQLITE_MUTEX_STATIC_PMEM +** <li> SQLITE_MUTEX_STATIC_APP1 +** <li> SQLITE_MUTEX_STATIC_APP2 +** <li> SQLITE_MUTEX_STATIC_APP3 +** <li> SQLITE_MUTEX_STATIC_VFS1 +** <li> SQLITE_MUTEX_STATIC_VFS2 +** <li> SQLITE_MUTEX_STATIC_VFS3 +** </ul> +** +** ^The first two constants (SQLITE_MUTEX_FAST and SQLITE_MUTEX_RECURSIVE) +** cause sqlite3_mutex_alloc() to create +** a new mutex. ^The new mutex is recursive when SQLITE_MUTEX_RECURSIVE +** is used but not necessarily so when SQLITE_MUTEX_FAST is used. +** The mutex implementation does not need to make a distinction +** between SQLITE_MUTEX_RECURSIVE and SQLITE_MUTEX_FAST if it does +** not want to. SQLite will only request a recursive mutex in +** cases where it really needs one. If a faster non-recursive mutex +** implementation is available on the host platform, the mutex subsystem +** might return such a mutex in response to SQLITE_MUTEX_FAST. +** +** ^The other allowed parameters to sqlite3_mutex_alloc() (anything other +** than SQLITE_MUTEX_FAST and SQLITE_MUTEX_RECURSIVE) each return +** a pointer to a static preexisting mutex. ^Nine static mutexes are +** used by the current version of SQLite. Future versions of SQLite +** may add additional static mutexes. Static mutexes are for internal +** use by SQLite only. Applications that use SQLite mutexes should +** use only the dynamic mutexes returned by SQLITE_MUTEX_FAST or +** SQLITE_MUTEX_RECURSIVE. +** +** ^Note that if one of the dynamic mutex parameters (SQLITE_MUTEX_FAST +** or SQLITE_MUTEX_RECURSIVE) is used then sqlite3_mutex_alloc() +** returns a different mutex on every call. ^For the static +** mutex types, the same mutex is returned on every call that has +** the same type number. +** +** ^The sqlite3_mutex_free() routine deallocates a previously +** allocated dynamic mutex. Attempting to deallocate a static +** mutex results in undefined behavior. +** +** ^The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt +** to enter a mutex. 
^If another thread is already within the mutex, +** sqlite3_mutex_enter() will block and sqlite3_mutex_try() will return +** SQLITE_BUSY. ^The sqlite3_mutex_try() interface returns [SQLITE_OK] +** upon successful entry. ^(Mutexes created using +** SQLITE_MUTEX_RECURSIVE can be entered multiple times by the same thread. +** In such cases, the +** mutex must be exited an equal number of times before another thread +** can enter.)^ If the same thread tries to enter any mutex other +** than an SQLITE_MUTEX_RECURSIVE more than once, the behavior is undefined. +** +** ^(Some systems (for example, Windows 95) do not support the operation +** implemented by sqlite3_mutex_try(). On those systems, sqlite3_mutex_try() +** will always return SQLITE_BUSY. The SQLite core only ever uses +** sqlite3_mutex_try() as an optimization so this is acceptable +** behavior.)^ +** +** ^The sqlite3_mutex_leave() routine exits a mutex that was +** previously entered by the same thread. The behavior +** is undefined if the mutex is not currently entered by the +** calling thread or is not currently allocated. +** +** ^If the argument to sqlite3_mutex_enter(), sqlite3_mutex_try(), or +** sqlite3_mutex_leave() is a NULL pointer, then all three routines +** behave as no-ops. +** +** See also: [sqlite3_mutex_held()] and [sqlite3_mutex_notheld()]. +*/ +SQLITE_API sqlite3_mutex *SQLITE_STDCALL sqlite3_mutex_alloc(int); +SQLITE_API void SQLITE_STDCALL sqlite3_mutex_free(sqlite3_mutex*); +SQLITE_API void SQLITE_STDCALL sqlite3_mutex_enter(sqlite3_mutex*); +SQLITE_API int SQLITE_STDCALL sqlite3_mutex_try(sqlite3_mutex*); +SQLITE_API void SQLITE_STDCALL sqlite3_mutex_leave(sqlite3_mutex*); + +/* +** CAPI3REF: Mutex Methods Object +** +** An instance of this structure defines the low-level routines +** used to allocate and use mutexes. +** +** Usually, the default mutex implementations provided by SQLite are +** sufficient, however the application has the option of substituting a custom +** implementation for specialized deployments or systems for which SQLite +** does not provide a suitable implementation. In this case, the application +** creates and populates an instance of this structure to pass +** to sqlite3_config() along with the [SQLITE_CONFIG_MUTEX] option. +** Additionally, an instance of this structure can be used as an +** output variable when querying the system for the current mutex +** implementation, using the [SQLITE_CONFIG_GETMUTEX] option. +** +** ^The xMutexInit method defined by this structure is invoked as +** part of system initialization by the sqlite3_initialize() function. +** ^The xMutexInit routine is called by SQLite exactly once for each +** effective call to [sqlite3_initialize()]. +** +** ^The xMutexEnd method defined by this structure is invoked as +** part of system shutdown by the sqlite3_shutdown() function. The +** implementation of this method is expected to release all outstanding +** resources obtained by the mutex methods implementation, especially +** those obtained by the xMutexInit method. ^The xMutexEnd() +** interface is invoked exactly once for each call to [sqlite3_shutdown()]. 
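+**
+** In outline, an application-defined implementation is registered by filling
+** in this structure and passing it to [sqlite3_config()] before
+** [sqlite3_initialize()] is called; the myMutex* routines below are
+** hypothetical application functions implementing the nine methods described
+** in this section:
+**
+** <blockquote><pre>
+**  static sqlite3_mutex_methods myMutexMethods = {
+**    myMutexInit,  myMutexEnd,   myMutexAlloc, myMutexFree,
+**    myMutexEnter, myMutexTry,   myMutexLeave,
+**    myMutexHeld,  myMutexNotheld
+**  };
+**  sqlite3_config(SQLITE_CONFIG_MUTEX, &myMutexMethods);
+** </pre></blockquote>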
+** +** ^(The remaining seven methods defined by this structure (xMutexAlloc, +** xMutexFree, xMutexEnter, xMutexTry, xMutexLeave, xMutexHeld and +** xMutexNotheld) implement the following interfaces (respectively): +** +** <ul> +** <li> [sqlite3_mutex_alloc()] </li> +** <li> [sqlite3_mutex_free()] </li> +** <li> [sqlite3_mutex_enter()] </li> +** <li> [sqlite3_mutex_try()] </li> +** <li> [sqlite3_mutex_leave()] </li> +** <li> [sqlite3_mutex_held()] </li> +** <li> [sqlite3_mutex_notheld()] </li> +** </ul>)^ +** +** The only difference is that the public sqlite3_XXX functions enumerated +** above silently ignore any invocations that pass a NULL pointer instead +** of a valid mutex handle. The implementations of the methods defined +** by this structure are not required to handle this case, the results +** of passing a NULL pointer instead of a valid mutex handle are undefined +** (i.e. it is acceptable to provide an implementation that segfaults if +** it is passed a NULL pointer). +** +** The xMutexInit() method must be threadsafe. It must be harmless to +** invoke xMutexInit() multiple times within the same process and without +** intervening calls to xMutexEnd(). Second and subsequent calls to +** xMutexInit() must be no-ops. +** +** xMutexInit() must not use SQLite memory allocation ([sqlite3_malloc()] +** and its associates). Similarly, xMutexAlloc() must not use SQLite memory +** allocation for a static mutex. ^However xMutexAlloc() may use SQLite +** memory allocation for a fast or recursive mutex. +** +** ^SQLite will invoke the xMutexEnd() method when [sqlite3_shutdown()] is +** called, but only if the prior call to xMutexInit returned SQLITE_OK. +** If xMutexInit fails in any way, it is expected to clean up after itself +** prior to returning. +*/ +typedef struct sqlite3_mutex_methods sqlite3_mutex_methods; +struct sqlite3_mutex_methods { + int (*xMutexInit)(void); + int (*xMutexEnd)(void); + sqlite3_mutex *(*xMutexAlloc)(int); + void (*xMutexFree)(sqlite3_mutex *); + void (*xMutexEnter)(sqlite3_mutex *); + int (*xMutexTry)(sqlite3_mutex *); + void (*xMutexLeave)(sqlite3_mutex *); + int (*xMutexHeld)(sqlite3_mutex *); + int (*xMutexNotheld)(sqlite3_mutex *); +}; + +/* +** CAPI3REF: Mutex Verification Routines +** +** The sqlite3_mutex_held() and sqlite3_mutex_notheld() routines +** are intended for use inside assert() statements. The SQLite core +** never uses these routines except inside an assert() and applications +** are advised to follow the lead of the core. The SQLite core only +** provides implementations for these routines when it is compiled +** with the SQLITE_DEBUG flag. External mutex implementations +** are only required to provide these routines if SQLITE_DEBUG is +** defined and if NDEBUG is not defined. +** +** These routines should return true if the mutex in their argument +** is held or not held, respectively, by the calling thread. +** +** The implementation is not required to provide versions of these +** routines that actually work. If the implementation does not provide working +** versions of these routines, it should at least provide stubs that always +** return true so that one does not get spurious assertion failures. +** +** If the argument to sqlite3_mutex_held() is a NULL pointer then +** the routine should return 1. This seems counter-intuitive since +** clearly the mutex cannot be held if it does not exist. But +** the reason the mutex does not exist is because the build is not +** using mutexes. 
And we do not want the assert() containing the +** call to sqlite3_mutex_held() to fail, so a non-zero return is +** the appropriate thing to do. The sqlite3_mutex_notheld() +** interface should also return 1 when given a NULL pointer. +*/ +#ifndef NDEBUG +SQLITE_API int SQLITE_STDCALL sqlite3_mutex_held(sqlite3_mutex*); +SQLITE_API int SQLITE_STDCALL sqlite3_mutex_notheld(sqlite3_mutex*); +#endif + +/* +** CAPI3REF: Mutex Types +** +** The [sqlite3_mutex_alloc()] interface takes a single argument +** which is one of these integer constants. +** +** The set of static mutexes may change from one SQLite release to the +** next. Applications that override the built-in mutex logic must be +** prepared to accommodate additional static mutexes. +*/ +#define SQLITE_MUTEX_FAST 0 +#define SQLITE_MUTEX_RECURSIVE 1 +#define SQLITE_MUTEX_STATIC_MASTER 2 +#define SQLITE_MUTEX_STATIC_MEM 3 /* sqlite3_malloc() */ +#define SQLITE_MUTEX_STATIC_MEM2 4 /* NOT USED */ +#define SQLITE_MUTEX_STATIC_OPEN 4 /* sqlite3BtreeOpen() */ +#define SQLITE_MUTEX_STATIC_PRNG 5 /* sqlite3_random() */ +#define SQLITE_MUTEX_STATIC_LRU 6 /* lru page list */ +#define SQLITE_MUTEX_STATIC_LRU2 7 /* NOT USED */ +#define SQLITE_MUTEX_STATIC_PMEM 7 /* sqlite3PageMalloc() */ +#define SQLITE_MUTEX_STATIC_APP1 8 /* For use by application */ +#define SQLITE_MUTEX_STATIC_APP2 9 /* For use by application */ +#define SQLITE_MUTEX_STATIC_APP3 10 /* For use by application */ +#define SQLITE_MUTEX_STATIC_VFS1 11 /* For use by built-in VFS */ +#define SQLITE_MUTEX_STATIC_VFS2 12 /* For use by extension VFS */ +#define SQLITE_MUTEX_STATIC_VFS3 13 /* For use by application VFS */ + +/* +** CAPI3REF: Retrieve the mutex for a database connection +** METHOD: sqlite3 +** +** ^This interface returns a pointer the [sqlite3_mutex] object that +** serializes access to the [database connection] given in the argument +** when the [threading mode] is Serialized. +** ^If the [threading mode] is Single-thread or Multi-thread then this +** routine returns a NULL pointer. +*/ +SQLITE_API sqlite3_mutex *SQLITE_STDCALL sqlite3_db_mutex(sqlite3*); + +/* +** CAPI3REF: Low-Level Control Of Database Files +** METHOD: sqlite3 +** +** ^The [sqlite3_file_control()] interface makes a direct call to the +** xFileControl method for the [sqlite3_io_methods] object associated +** with a particular database identified by the second argument. ^The +** name of the database is "main" for the main database or "temp" for the +** TEMP database, or the name that appears after the AS keyword for +** databases that are added using the [ATTACH] SQL command. +** ^A NULL pointer can be used in place of "main" to refer to the +** main database file. +** ^The third and fourth parameters to this routine +** are passed directly through to the second and third parameters of +** the xFileControl method. ^The return value of the xFileControl +** method becomes the return value of this routine. +** +** ^The SQLITE_FCNTL_FILE_POINTER value for the op parameter causes +** a pointer to the underlying [sqlite3_file] object to be written into +** the space pointed to by the 4th parameter. ^The SQLITE_FCNTL_FILE_POINTER +** case is a short-circuit path which does not actually invoke the +** underlying sqlite3_io_methods.xFileControl method. +** +** ^If the second parameter (zDbName) does not match the name of any +** open database file, then SQLITE_ERROR is returned. ^This error +** code is not remembered and will not be recalled by [sqlite3_errcode()] +** or [sqlite3_errmsg()]. 
The underlying xFileControl method might +** also return SQLITE_ERROR. There is no way to distinguish between +** an incorrect zDbName and an SQLITE_ERROR return from the underlying +** xFileControl method. +** +** See also: [SQLITE_FCNTL_LOCKSTATE] +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_file_control(sqlite3*, const char *zDbName, int op, void*); + +/* +** CAPI3REF: Testing Interface +** +** ^The sqlite3_test_control() interface is used to read out internal +** state of SQLite and to inject faults into SQLite for testing +** purposes. ^The first parameter is an operation code that determines +** the number, meaning, and operation of all subsequent parameters. +** +** This interface is not for use by applications. It exists solely +** for verifying the correct operation of the SQLite library. Depending +** on how the SQLite library is compiled, this interface might not exist. +** +** The details of the operation codes, their meanings, the parameters +** they take, and what they do are all subject to change without notice. +** Unlike most of the SQLite API, this function is not guaranteed to +** operate consistently from one release to the next. +*/ +SQLITE_API int SQLITE_CDECL sqlite3_test_control(int op, ...); + +/* +** CAPI3REF: Testing Interface Operation Codes +** +** These constants are the valid operation code parameters used +** as the first argument to [sqlite3_test_control()]. +** +** These parameters and their meanings are subject to change +** without notice. These values are for testing purposes only. +** Applications should not use any of these parameters or the +** [sqlite3_test_control()] interface. +*/ +#define SQLITE_TESTCTRL_FIRST 5 +#define SQLITE_TESTCTRL_PRNG_SAVE 5 +#define SQLITE_TESTCTRL_PRNG_RESTORE 6 +#define SQLITE_TESTCTRL_PRNG_RESET 7 +#define SQLITE_TESTCTRL_BITVEC_TEST 8 +#define SQLITE_TESTCTRL_FAULT_INSTALL 9 +#define SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS 10 +#define SQLITE_TESTCTRL_PENDING_BYTE 11 +#define SQLITE_TESTCTRL_ASSERT 12 +#define SQLITE_TESTCTRL_ALWAYS 13 +#define SQLITE_TESTCTRL_RESERVE 14 +#define SQLITE_TESTCTRL_OPTIMIZATIONS 15 +#define SQLITE_TESTCTRL_ISKEYWORD 16 +#define SQLITE_TESTCTRL_SCRATCHMALLOC 17 +#define SQLITE_TESTCTRL_LOCALTIME_FAULT 18 +#define SQLITE_TESTCTRL_EXPLAIN_STMT 19 /* NOT USED */ +#define SQLITE_TESTCTRL_NEVER_CORRUPT 20 +#define SQLITE_TESTCTRL_VDBE_COVERAGE 21 +#define SQLITE_TESTCTRL_BYTEORDER 22 +#define SQLITE_TESTCTRL_ISINIT 23 +#define SQLITE_TESTCTRL_SORTER_MMAP 24 +#define SQLITE_TESTCTRL_IMPOSTER 25 +#define SQLITE_TESTCTRL_LAST 25 + +/* +** CAPI3REF: SQLite Runtime Status +** +** ^These interfaces are used to retrieve runtime status information +** about the performance of SQLite, and optionally to reset various +** highwater marks. ^The first argument is an integer code for +** the specific parameter to measure. ^(Recognized integer codes +** are of the form [status parameters | SQLITE_STATUS_...].)^ +** ^The current value of the parameter is returned into *pCurrent. +** ^The highest recorded value is returned in *pHighwater. ^If the +** resetFlag is true, then the highest record value is reset after +** *pHighwater is written. ^(Some parameters do not record the highest +** value. For those parameters +** nothing is written into *pHighwater and the resetFlag is ignored.)^ +** ^(Other parameters record only the highwater mark and not the current +** value. 
For these latter parameters nothing is written into *pCurrent.)^ +** +** ^The sqlite3_status() and sqlite3_status64() routines return +** SQLITE_OK on success and a non-zero [error code] on failure. +** +** If either the current value or the highwater mark is too large to +** be represented by a 32-bit integer, then the values returned by +** sqlite3_status() are undefined. +** +** See also: [sqlite3_db_status()] +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_status(int op, int *pCurrent, int *pHighwater, int resetFlag); +SQLITE_API int SQLITE_STDCALL sqlite3_status64( + int op, + sqlite3_int64 *pCurrent, + sqlite3_int64 *pHighwater, + int resetFlag +); + + +/* +** CAPI3REF: Status Parameters +** KEYWORDS: {status parameters} +** +** These integer constants designate various run-time status parameters +** that can be returned by [sqlite3_status()]. +** +** <dl> +** [[SQLITE_STATUS_MEMORY_USED]] ^(<dt>SQLITE_STATUS_MEMORY_USED</dt> +** <dd>This parameter is the current amount of memory checked out +** using [sqlite3_malloc()], either directly or indirectly. The +** figure includes calls made to [sqlite3_malloc()] by the application +** and internal memory usage by the SQLite library. Scratch memory +** controlled by [SQLITE_CONFIG_SCRATCH] and auxiliary page-cache +** memory controlled by [SQLITE_CONFIG_PAGECACHE] is not included in +** this parameter. The amount returned is the sum of the allocation +** sizes as reported by the xSize method in [sqlite3_mem_methods].</dd>)^ +** +** [[SQLITE_STATUS_MALLOC_SIZE]] ^(<dt>SQLITE_STATUS_MALLOC_SIZE</dt> +** <dd>This parameter records the largest memory allocation request +** handed to [sqlite3_malloc()] or [sqlite3_realloc()] (or their +** internal equivalents). Only the value returned in the +** *pHighwater parameter to [sqlite3_status()] is of interest. +** The value written into the *pCurrent parameter is undefined.</dd>)^ +** +** [[SQLITE_STATUS_MALLOC_COUNT]] ^(<dt>SQLITE_STATUS_MALLOC_COUNT</dt> +** <dd>This parameter records the number of separate memory allocations +** currently checked out.</dd>)^ +** +** [[SQLITE_STATUS_PAGECACHE_USED]] ^(<dt>SQLITE_STATUS_PAGECACHE_USED</dt> +** <dd>This parameter returns the number of pages used out of the +** [pagecache memory allocator] that was configured using +** [SQLITE_CONFIG_PAGECACHE]. The +** value returned is in pages, not in bytes.</dd>)^ +** +** [[SQLITE_STATUS_PAGECACHE_OVERFLOW]] +** ^(<dt>SQLITE_STATUS_PAGECACHE_OVERFLOW</dt> +** <dd>This parameter returns the number of bytes of page cache +** allocation which could not be satisfied by the [SQLITE_CONFIG_PAGECACHE] +** buffer and where forced to overflow to [sqlite3_malloc()]. The +** returned value includes allocations that overflowed because they +** where too large (they were larger than the "sz" parameter to +** [SQLITE_CONFIG_PAGECACHE]) and allocations that overflowed because +** no space was left in the page cache.</dd>)^ +** +** [[SQLITE_STATUS_PAGECACHE_SIZE]] ^(<dt>SQLITE_STATUS_PAGECACHE_SIZE</dt> +** <dd>This parameter records the largest memory allocation request +** handed to [pagecache memory allocator]. Only the value returned in the +** *pHighwater parameter to [sqlite3_status()] is of interest. +** The value written into the *pCurrent parameter is undefined.</dd>)^ +** +** [[SQLITE_STATUS_SCRATCH_USED]] ^(<dt>SQLITE_STATUS_SCRATCH_USED</dt> +** <dd>This parameter returns the number of allocations used out of the +** [scratch memory allocator] configured using +** [SQLITE_CONFIG_SCRATCH]. 
The value returned is in allocations, not +** in bytes. Since a single thread may only have one scratch allocation +** outstanding at time, this parameter also reports the number of threads +** using scratch memory at the same time.</dd>)^ +** +** [[SQLITE_STATUS_SCRATCH_OVERFLOW]] ^(<dt>SQLITE_STATUS_SCRATCH_OVERFLOW</dt> +** <dd>This parameter returns the number of bytes of scratch memory +** allocation which could not be satisfied by the [SQLITE_CONFIG_SCRATCH] +** buffer and where forced to overflow to [sqlite3_malloc()]. The values +** returned include overflows because the requested allocation was too +** larger (that is, because the requested allocation was larger than the +** "sz" parameter to [SQLITE_CONFIG_SCRATCH]) and because no scratch buffer +** slots were available. +** </dd>)^ +** +** [[SQLITE_STATUS_SCRATCH_SIZE]] ^(<dt>SQLITE_STATUS_SCRATCH_SIZE</dt> +** <dd>This parameter records the largest memory allocation request +** handed to [scratch memory allocator]. Only the value returned in the +** *pHighwater parameter to [sqlite3_status()] is of interest. +** The value written into the *pCurrent parameter is undefined.</dd>)^ +** +** [[SQLITE_STATUS_PARSER_STACK]] ^(<dt>SQLITE_STATUS_PARSER_STACK</dt> +** <dd>The *pHighwater parameter records the deepest parser stack. +** The *pCurrent value is undefined. The *pHighwater value is only +** meaningful if SQLite is compiled with [YYTRACKMAXSTACKDEPTH].</dd>)^ +** </dl> +** +** New status parameters may be added from time to time. +*/ +#define SQLITE_STATUS_MEMORY_USED 0 +#define SQLITE_STATUS_PAGECACHE_USED 1 +#define SQLITE_STATUS_PAGECACHE_OVERFLOW 2 +#define SQLITE_STATUS_SCRATCH_USED 3 +#define SQLITE_STATUS_SCRATCH_OVERFLOW 4 +#define SQLITE_STATUS_MALLOC_SIZE 5 +#define SQLITE_STATUS_PARSER_STACK 6 +#define SQLITE_STATUS_PAGECACHE_SIZE 7 +#define SQLITE_STATUS_SCRATCH_SIZE 8 +#define SQLITE_STATUS_MALLOC_COUNT 9 + +/* +** CAPI3REF: Database Connection Status +** METHOD: sqlite3 +** +** ^This interface is used to retrieve runtime status information +** about a single [database connection]. ^The first argument is the +** database connection object to be interrogated. ^The second argument +** is an integer constant, taken from the set of +** [SQLITE_DBSTATUS options], that +** determines the parameter to interrogate. The set of +** [SQLITE_DBSTATUS options] is likely +** to grow in future releases of SQLite. +** +** ^The current value of the requested parameter is written into *pCur +** and the highest instantaneous value is written into *pHiwtr. ^If +** the resetFlg is true, then the highest instantaneous value is +** reset back down to the current value. +** +** ^The sqlite3_db_status() routine returns SQLITE_OK on success and a +** non-zero [error code] on failure. +** +** See also: [sqlite3_status()] and [sqlite3_stmt_status()]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int resetFlg); + +/* +** CAPI3REF: Status Parameters for database connections +** KEYWORDS: {SQLITE_DBSTATUS options} +** +** These constants are the available integer "verbs" that can be passed as +** the second argument to the [sqlite3_db_status()] interface. +** +** New verbs may be added in future releases of SQLite. Existing verbs +** might be discontinued. Applications should check the return code from +** [sqlite3_db_status()] to make sure that the call worked. +** The [sqlite3_db_status()] interface will return a non-zero error code +** if a discontinued or unsupported verb is invoked. 
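+**
+** For example, a minimal sketch that reads the current and peak number of
+** lookaside slots in use by connection db (error checking omitted):
+**
+** <blockquote><pre>
+**  int iCur = 0, iHiwtr = 0;
+**  sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_USED, &iCur, &iHiwtr, 0);
+** </pre></blockquote>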
+**
+** <dl>
+** [[SQLITE_DBSTATUS_LOOKASIDE_USED]] ^(<dt>SQLITE_DBSTATUS_LOOKASIDE_USED</dt>
+** <dd>This parameter returns the number of lookaside memory slots currently
+** checked out.</dd>)^
+**
+** [[SQLITE_DBSTATUS_LOOKASIDE_HIT]] ^(<dt>SQLITE_DBSTATUS_LOOKASIDE_HIT</dt>
+** <dd>This parameter returns the number of malloc attempts that were
+** satisfied using lookaside memory. Only the high-water value is meaningful;
+** the current value is always zero.)^
+**
+** [[SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE]]
+** ^(<dt>SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE</dt>
+** <dd>This parameter returns the number of malloc attempts that might have
+** been satisfied using lookaside memory but failed due to the amount of
+** memory requested being larger than the lookaside slot size.
+** Only the high-water value is meaningful;
+** the current value is always zero.)^
+**
+** [[SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL]]
+** ^(<dt>SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL</dt>
+** <dd>This parameter returns the number of malloc attempts that might have
+** been satisfied using lookaside memory but failed due to all lookaside
+** memory already being in use.
+** Only the high-water value is meaningful;
+** the current value is always zero.)^
+**
+** [[SQLITE_DBSTATUS_CACHE_USED]] ^(<dt>SQLITE_DBSTATUS_CACHE_USED</dt>
+** <dd>This parameter returns the approximate number of bytes of heap
+** memory used by all pager caches associated with the database connection.)^
+** ^The highwater mark associated with SQLITE_DBSTATUS_CACHE_USED is always 0.
+**
+** [[SQLITE_DBSTATUS_SCHEMA_USED]] ^(<dt>SQLITE_DBSTATUS_SCHEMA_USED</dt>
+** <dd>This parameter returns the approximate number of bytes of heap
+** memory used to store the schema for all databases associated
+** with the connection - main, temp, and any [ATTACH]-ed databases.)^
+** ^The full amount of memory used by the schemas is reported, even if the
+** schema memory is shared with other database connections due to
+** [shared cache mode] being enabled.
+** ^The highwater mark associated with SQLITE_DBSTATUS_SCHEMA_USED is always 0.
+**
+** [[SQLITE_DBSTATUS_STMT_USED]] ^(<dt>SQLITE_DBSTATUS_STMT_USED</dt>
+** <dd>This parameter returns the approximate number of bytes of heap
+** and lookaside memory used by all prepared statements associated with
+** the database connection.)^
+** ^The highwater mark associated with SQLITE_DBSTATUS_STMT_USED is always 0.
+** </dd>
+**
+** [[SQLITE_DBSTATUS_CACHE_HIT]] ^(<dt>SQLITE_DBSTATUS_CACHE_HIT</dt>
+** <dd>This parameter returns the number of pager cache hits that have
+** occurred.)^ ^The highwater mark associated with SQLITE_DBSTATUS_CACHE_HIT
+** is always 0.
+** </dd>
+**
+** [[SQLITE_DBSTATUS_CACHE_MISS]] ^(<dt>SQLITE_DBSTATUS_CACHE_MISS</dt>
+** <dd>This parameter returns the number of pager cache misses that have
+** occurred.)^ ^The highwater mark associated with SQLITE_DBSTATUS_CACHE_MISS
+** is always 0.
+** </dd>
+**
+** [[SQLITE_DBSTATUS_CACHE_WRITE]] ^(<dt>SQLITE_DBSTATUS_CACHE_WRITE</dt>
+** <dd>This parameter returns the number of dirty cache entries that have
+** been written to disk. Specifically, the number of pages written to the
+** wal file in wal mode databases, or the number of pages written to the
+** database file in rollback mode databases. Any pages written as part of
+** transaction rollback or database recovery operations are not included.
+** If an IO or other error occurs while writing a page to disk, the effect +** on subsequent SQLITE_DBSTATUS_CACHE_WRITE requests is undefined.)^ ^The +** highwater mark associated with SQLITE_DBSTATUS_CACHE_WRITE is always 0. +** </dd> +** +** [[SQLITE_DBSTATUS_DEFERRED_FKS]] ^(<dt>SQLITE_DBSTATUS_DEFERRED_FKS</dt> +** <dd>This parameter returns zero for the current value if and only if +** all foreign key constraints (deferred or immediate) have been +** resolved.)^ ^The highwater mark is always 0. +** </dd> +** </dl> +*/ +#define SQLITE_DBSTATUS_LOOKASIDE_USED 0 +#define SQLITE_DBSTATUS_CACHE_USED 1 +#define SQLITE_DBSTATUS_SCHEMA_USED 2 +#define SQLITE_DBSTATUS_STMT_USED 3 +#define SQLITE_DBSTATUS_LOOKASIDE_HIT 4 +#define SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE 5 +#define SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL 6 +#define SQLITE_DBSTATUS_CACHE_HIT 7 +#define SQLITE_DBSTATUS_CACHE_MISS 8 +#define SQLITE_DBSTATUS_CACHE_WRITE 9 +#define SQLITE_DBSTATUS_DEFERRED_FKS 10 +#define SQLITE_DBSTATUS_MAX 10 /* Largest defined DBSTATUS */ + + +/* +** CAPI3REF: Prepared Statement Status +** METHOD: sqlite3_stmt +** +** ^(Each prepared statement maintains various +** [SQLITE_STMTSTATUS counters] that measure the number +** of times it has performed specific operations.)^ These counters can +** be used to monitor the performance characteristics of the prepared +** statements. For example, if the number of table steps greatly exceeds +** the number of table searches or result rows, that would tend to indicate +** that the prepared statement is using a full table scan rather than +** an index. +** +** ^(This interface is used to retrieve and reset counter values from +** a [prepared statement]. The first argument is the prepared statement +** object to be interrogated. The second argument +** is an integer code for a specific [SQLITE_STMTSTATUS counter] +** to be interrogated.)^ +** ^The current value of the requested counter is returned. +** ^If the resetFlg is true, then the counter is reset to zero after this +** interface call returns. +** +** See also: [sqlite3_status()] and [sqlite3_db_status()]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_stmt_status(sqlite3_stmt*, int op,int resetFlg); + +/* +** CAPI3REF: Status Parameters for prepared statements +** KEYWORDS: {SQLITE_STMTSTATUS counter} {SQLITE_STMTSTATUS counters} +** +** These preprocessor macros define integer codes that name counter +** values associated with the [sqlite3_stmt_status()] interface. +** The meanings of the various counters are as follows: +** +** <dl> +** [[SQLITE_STMTSTATUS_FULLSCAN_STEP]] <dt>SQLITE_STMTSTATUS_FULLSCAN_STEP</dt> +** <dd>^This is the number of times that SQLite has stepped forward in +** a table as part of a full table scan. Large numbers for this counter +** may indicate opportunities for performance improvement through +** careful use of indices.</dd> +** +** [[SQLITE_STMTSTATUS_SORT]] <dt>SQLITE_STMTSTATUS_SORT</dt> +** <dd>^This is the number of sort operations that have occurred. +** A non-zero value in this counter may indicate an opportunity to +** improvement performance through careful use of indices.</dd> +** +** [[SQLITE_STMTSTATUS_AUTOINDEX]] <dt>SQLITE_STMTSTATUS_AUTOINDEX</dt> +** <dd>^This is the number of rows inserted into transient indices that +** were created automatically in order to help joins run faster. 
+** A non-zero value in this counter may indicate an opportunity to +** improvement performance by adding permanent indices that do not +** need to be reinitialized each time the statement is run.</dd> +** +** [[SQLITE_STMTSTATUS_VM_STEP]] <dt>SQLITE_STMTSTATUS_VM_STEP</dt> +** <dd>^This is the number of virtual machine operations executed +** by the prepared statement if that number is less than or equal +** to 2147483647. The number of virtual machine operations can be +** used as a proxy for the total work done by the prepared statement. +** If the number of virtual machine operations exceeds 2147483647 +** then the value returned by this statement status code is undefined. +** </dd> +** </dl> +*/ +#define SQLITE_STMTSTATUS_FULLSCAN_STEP 1 +#define SQLITE_STMTSTATUS_SORT 2 +#define SQLITE_STMTSTATUS_AUTOINDEX 3 +#define SQLITE_STMTSTATUS_VM_STEP 4 + +/* +** CAPI3REF: Custom Page Cache Object +** +** The sqlite3_pcache type is opaque. It is implemented by +** the pluggable module. The SQLite core has no knowledge of +** its size or internal structure and never deals with the +** sqlite3_pcache object except by holding and passing pointers +** to the object. +** +** See [sqlite3_pcache_methods2] for additional information. +*/ +typedef struct sqlite3_pcache sqlite3_pcache; + +/* +** CAPI3REF: Custom Page Cache Object +** +** The sqlite3_pcache_page object represents a single page in the +** page cache. The page cache will allocate instances of this +** object. Various methods of the page cache use pointers to instances +** of this object as parameters or as their return value. +** +** See [sqlite3_pcache_methods2] for additional information. +*/ +typedef struct sqlite3_pcache_page sqlite3_pcache_page; +struct sqlite3_pcache_page { + void *pBuf; /* The content of the page */ + void *pExtra; /* Extra information associated with the page */ +}; + +/* +** CAPI3REF: Application Defined Page Cache. +** KEYWORDS: {page cache} +** +** ^(The [sqlite3_config]([SQLITE_CONFIG_PCACHE2], ...) interface can +** register an alternative page cache implementation by passing in an +** instance of the sqlite3_pcache_methods2 structure.)^ +** In many applications, most of the heap memory allocated by +** SQLite is used for the page cache. +** By implementing a +** custom page cache using this API, an application can better control +** the amount of memory consumed by SQLite, the way in which +** that memory is allocated and released, and the policies used to +** determine exactly which parts of a database file are cached and for +** how long. +** +** The alternative page cache mechanism is an +** extreme measure that is only needed by the most demanding applications. +** The built-in page cache is recommended for most uses. +** +** ^(The contents of the sqlite3_pcache_methods2 structure are copied to an +** internal buffer by SQLite within the call to [sqlite3_config]. Hence +** the application may discard the parameter after the call to +** [sqlite3_config()] returns.)^ +** +** [[the xInit() page cache method]] +** ^(The xInit() method is called once for each effective +** call to [sqlite3_initialize()])^ +** (usually only once during the lifetime of the process). ^(The xInit() +** method is passed a copy of the sqlite3_pcache_methods2.pArg value.)^ +** The intent of the xInit() method is to set up global data structures +** required by the custom page cache implementation. 
+**
+** ^(If the xInit() method is NULL, then the
+** built-in default page cache is used instead of the application defined
+** page cache.)^
+**
+** [[the xShutdown() page cache method]]
+** ^The xShutdown() method is called by [sqlite3_shutdown()].
+** It can be used to clean up
+** any outstanding resources before process shutdown, if required.
+** ^The xShutdown() method may be NULL.
+**
+** ^SQLite automatically serializes calls to the xInit method,
+** so the xInit method need not be threadsafe. ^The
+** xShutdown method is only called from [sqlite3_shutdown()] so it does
+** not need to be threadsafe either. All other methods must be threadsafe
+** in multithreaded applications.
+**
+** ^SQLite will never invoke xInit() more than once without an intervening
+** call to xShutdown().
+**
+** [[the xCreate() page cache methods]]
+** ^SQLite invokes the xCreate() method to construct a new cache instance.
+** SQLite will typically create one cache instance for each open database file,
+** though this is not guaranteed. ^The
+** first parameter, szPage, is the size in bytes of the pages that must
+** be allocated by the cache. ^szPage will always be a power of two. ^The
+** second parameter szExtra is a number of bytes of extra storage
+** associated with each page cache entry. ^The szExtra parameter will
+** be a number less than 250. SQLite will use the
+** extra szExtra bytes on each page to store metadata about the underlying
+** database page on disk. The value passed into szExtra depends
+** on the SQLite version, the target platform, and how SQLite was compiled.
+** ^The third argument to xCreate(), bPurgeable, is true if the cache being
+** created will be used to cache database pages of a file stored on disk, or
+** false if it is used for an in-memory database. The cache implementation
+** does not have to do anything special based on the value of bPurgeable;
+** it is purely advisory. ^On a cache where bPurgeable is false, SQLite will
+** never invoke xUnpin() except to deliberately delete a page.
+** ^In other words, calls to xUnpin() on a cache with bPurgeable set to
+** false will always have the "discard" flag set to true.
+** ^Hence, a cache created with bPurgeable false will
+** never contain any unpinned pages.
+**
+** [[the xCachesize() page cache method]]
+** ^(The xCachesize() method may be called at any time by SQLite to set the
+** suggested maximum cache-size (number of pages stored by) the cache
+** instance passed as the first argument. This is the value configured using
+** the SQLite "[PRAGMA cache_size]" command.)^ As with the bPurgeable
+** parameter, the implementation is not required to do anything with this
+** value; it is advisory only.
+**
+** [[the xPagecount() page cache methods]]
+** The xPagecount() method must return the number of pages currently
+** stored in the cache, both pinned and unpinned.
+**
+** [[the xFetch() page cache methods]]
+** The xFetch() method locates a page in the cache and returns a pointer to
+** an sqlite3_pcache_page object associated with that page, or a NULL pointer.
+** The pBuf element of the returned sqlite3_pcache_page object will be a
+** pointer to a buffer of szPage bytes used to store the content of a
+** single database page. The pExtra element of sqlite3_pcache_page will be
+** a pointer to the szExtra bytes of extra storage that SQLite has requested
+** for each entry in the page cache.
+**
+** The page to be fetched is determined by the key. ^The minimum key value
+** is 1.
After it has been retrieved using xFetch, the page is considered +** to be "pinned". +** +** If the requested page is already in the page cache, then the page cache +** implementation must return a pointer to the page buffer with its content +** intact. If the requested page is not already in the cache, then the +** cache implementation should use the value of the createFlag +** parameter to help it determined what action to take: +** +** <table border=1 width=85% align=center> +** <tr><th> createFlag <th> Behavior when page is not already in cache +** <tr><td> 0 <td> Do not allocate a new page. Return NULL. +** <tr><td> 1 <td> Allocate a new page if it easy and convenient to do so. +** Otherwise return NULL. +** <tr><td> 2 <td> Make every effort to allocate a new page. Only return +** NULL if allocating a new page is effectively impossible. +** </table> +** +** ^(SQLite will normally invoke xFetch() with a createFlag of 0 or 1. SQLite +** will only use a createFlag of 2 after a prior call with a createFlag of 1 +** failed.)^ In between the to xFetch() calls, SQLite may +** attempt to unpin one or more cache pages by spilling the content of +** pinned pages to disk and synching the operating system disk cache. +** +** [[the xUnpin() page cache method]] +** ^xUnpin() is called by SQLite with a pointer to a currently pinned page +** as its second argument. If the third parameter, discard, is non-zero, +** then the page must be evicted from the cache. +** ^If the discard parameter is +** zero, then the page may be discarded or retained at the discretion of +** page cache implementation. ^The page cache implementation +** may choose to evict unpinned pages at any time. +** +** The cache must not perform any reference counting. A single +** call to xUnpin() unpins the page regardless of the number of prior calls +** to xFetch(). +** +** [[the xRekey() page cache methods]] +** The xRekey() method is used to change the key value associated with the +** page passed as the second argument. If the cache +** previously contains an entry associated with newKey, it must be +** discarded. ^Any prior cache entry associated with newKey is guaranteed not +** to be pinned. +** +** When SQLite calls the xTruncate() method, the cache must discard all +** existing cache entries with page numbers (keys) greater than or equal +** to the value of the iLimit parameter passed to xTruncate(). If any +** of these pages are pinned, they are implicitly unpinned, meaning that +** they can be safely discarded. +** +** [[the xDestroy() page cache method]] +** ^The xDestroy() method is used to delete a cache allocated by xCreate(). +** All resources associated with the specified cache should be freed. ^After +** calling the xDestroy() method, SQLite considers the [sqlite3_pcache*] +** handle invalid, and will not use it with any other sqlite3_pcache_methods2 +** functions. +** +** [[the xShrink() page cache method]] +** ^SQLite invokes the xShrink() method when it wants the page cache to +** free up as much of heap memory as possible. The page cache implementation +** is not obligated to free any memory, but well-behaved implementations should +** do their best. 
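+**
+** In outline, an application-defined page cache is registered by populating
+** an sqlite3_pcache_methods2 object and passing it to [sqlite3_config()]
+** before [sqlite3_initialize()]; the myCache* routines below are
+** hypothetical application functions implementing the methods described
+** above:
+**
+** <blockquote><pre>
+**  static sqlite3_pcache_methods2 myPCache = {
+**    1, 0,                              // iVersion, pArg
+**    myCacheInit,     myCacheShutdown,
+**    myCacheCreate,   myCacheCachesize, myCachePagecount,
+**    myCacheFetch,    myCacheUnpin,     myCacheRekey,
+**    myCacheTruncate, myCacheDestroy,   myCacheShrink
+**  };
+**  sqlite3_config(SQLITE_CONFIG_PCACHE2, &myPCache);
+** </pre></blockquote>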
+*/ +typedef struct sqlite3_pcache_methods2 sqlite3_pcache_methods2; +struct sqlite3_pcache_methods2 { + int iVersion; + void *pArg; + int (*xInit)(void*); + void (*xShutdown)(void*); + sqlite3_pcache *(*xCreate)(int szPage, int szExtra, int bPurgeable); + void (*xCachesize)(sqlite3_pcache*, int nCachesize); + int (*xPagecount)(sqlite3_pcache*); + sqlite3_pcache_page *(*xFetch)(sqlite3_pcache*, unsigned key, int createFlag); + void (*xUnpin)(sqlite3_pcache*, sqlite3_pcache_page*, int discard); + void (*xRekey)(sqlite3_pcache*, sqlite3_pcache_page*, + unsigned oldKey, unsigned newKey); + void (*xTruncate)(sqlite3_pcache*, unsigned iLimit); + void (*xDestroy)(sqlite3_pcache*); + void (*xShrink)(sqlite3_pcache*); +}; + +/* +** This is the obsolete pcache_methods object that has now been replaced +** by sqlite3_pcache_methods2. This object is not used by SQLite. It is +** retained in the header file for backwards compatibility only. +*/ +typedef struct sqlite3_pcache_methods sqlite3_pcache_methods; +struct sqlite3_pcache_methods { + void *pArg; + int (*xInit)(void*); + void (*xShutdown)(void*); + sqlite3_pcache *(*xCreate)(int szPage, int bPurgeable); + void (*xCachesize)(sqlite3_pcache*, int nCachesize); + int (*xPagecount)(sqlite3_pcache*); + void *(*xFetch)(sqlite3_pcache*, unsigned key, int createFlag); + void (*xUnpin)(sqlite3_pcache*, void*, int discard); + void (*xRekey)(sqlite3_pcache*, void*, unsigned oldKey, unsigned newKey); + void (*xTruncate)(sqlite3_pcache*, unsigned iLimit); + void (*xDestroy)(sqlite3_pcache*); +}; + + +/* +** CAPI3REF: Online Backup Object +** +** The sqlite3_backup object records state information about an ongoing +** online backup operation. ^The sqlite3_backup object is created by +** a call to [sqlite3_backup_init()] and is destroyed by a call to +** [sqlite3_backup_finish()]. +** +** See Also: [Using the SQLite Online Backup API] +*/ +typedef struct sqlite3_backup sqlite3_backup; + +/* +** CAPI3REF: Online Backup API. +** +** The backup API copies the content of one database into another. +** It is useful either for creating backups of databases or +** for copying in-memory databases to or from persistent files. +** +** See Also: [Using the SQLite Online Backup API] +** +** ^SQLite holds a write transaction open on the destination database file +** for the duration of the backup operation. +** ^The source database is read-locked only while it is being read; +** it is not locked continuously for the entire backup operation. +** ^Thus, the backup may be performed on a live source database without +** preventing other database connections from +** reading or writing to the source database while the backup is underway. +** +** ^(To perform a backup operation: +** <ol> +** <li><b>sqlite3_backup_init()</b> is called once to initialize the +** backup, +** <li><b>sqlite3_backup_step()</b> is called one or more times to transfer +** the data between the two databases, and finally +** <li><b>sqlite3_backup_finish()</b> is called to release all resources +** associated with the backup operation. +** </ol>)^ +** There should be exactly one call to sqlite3_backup_finish() for each +** successful call to sqlite3_backup_init(). +** +** [[sqlite3_backup_init()]] <b>sqlite3_backup_init()</b> +** +** ^The D and N arguments to sqlite3_backup_init(D,N,S,M) are the +** [database connection] associated with the destination database +** and the database name, respectively. 
+** ^The database name is "main" for the main database, "temp" for the +** temporary database, or the name specified after the AS keyword in +** an [ATTACH] statement for an attached database. +** ^The S and M arguments passed to +** sqlite3_backup_init(D,N,S,M) identify the [database connection] +** and database name of the source database, respectively. +** ^The source and destination [database connections] (parameters S and D) +** must be different or else sqlite3_backup_init(D,N,S,M) will fail with +** an error. +** +** ^A call to sqlite3_backup_init() will fail, returning SQLITE_ERROR, if +** there is already a read or read-write transaction open on the +** destination database. +** +** ^If an error occurs within sqlite3_backup_init(D,N,S,M), then NULL is +** returned and an error code and error message are stored in the +** destination [database connection] D. +** ^The error code and message for the failed call to sqlite3_backup_init() +** can be retrieved using the [sqlite3_errcode()], [sqlite3_errmsg()], and/or +** [sqlite3_errmsg16()] functions. +** ^A successful call to sqlite3_backup_init() returns a pointer to an +** [sqlite3_backup] object. +** ^The [sqlite3_backup] object may be used with the sqlite3_backup_step() and +** sqlite3_backup_finish() functions to perform the specified backup +** operation. +** +** [[sqlite3_backup_step()]] <b>sqlite3_backup_step()</b> +** +** ^Function sqlite3_backup_step(B,N) will copy up to N pages between +** the source and destination databases specified by [sqlite3_backup] object B. +** ^If N is negative, all remaining source pages are copied. +** ^If sqlite3_backup_step(B,N) successfully copies N pages and there +** are still more pages to be copied, then the function returns [SQLITE_OK]. +** ^If sqlite3_backup_step(B,N) successfully finishes copying all pages +** from source to destination, then it returns [SQLITE_DONE]. +** ^If an error occurs while running sqlite3_backup_step(B,N), +** then an [error code] is returned. ^As well as [SQLITE_OK] and +** [SQLITE_DONE], a call to sqlite3_backup_step() may return [SQLITE_READONLY], +** [SQLITE_NOMEM], [SQLITE_BUSY], [SQLITE_LOCKED], or an +** [SQLITE_IOERR_ACCESS | SQLITE_IOERR_XXX] extended error code. +** +** ^(The sqlite3_backup_step() might return [SQLITE_READONLY] if +** <ol> +** <li> the destination database was opened read-only, or +** <li> the destination database is using write-ahead-log journaling +** and the destination and source page sizes differ, or +** <li> the destination database is an in-memory database and the +** destination and source page sizes differ. +** </ol>)^ +** +** ^If sqlite3_backup_step() cannot obtain a required file-system lock, then +** the [sqlite3_busy_handler | busy-handler function] +** is invoked (if one is specified). ^If the +** busy-handler returns non-zero before the lock is available, then +** [SQLITE_BUSY] is returned to the caller. ^In this case the call to +** sqlite3_backup_step() can be retried later. ^If the source +** [database connection] +** is being used to write to the source database when sqlite3_backup_step() +** is called, then [SQLITE_LOCKED] is returned immediately. ^Again, in this +** case the call to sqlite3_backup_step() can be retried later on. ^(If +** [SQLITE_IOERR_ACCESS | SQLITE_IOERR_XXX], [SQLITE_NOMEM], or +** [SQLITE_READONLY] is returned, then +** there is no point in retrying the call to sqlite3_backup_step(). 
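+**
+** In outline, a typical backup loop looks like the following sketch, in
+** which pDest and pSource are assumed to be open [database connection]
+** handles and error handling is abbreviated:
+**
+** <blockquote><pre>
+**  int rc;
+**  sqlite3_backup *p = sqlite3_backup_init(pDest, "main", pSource, "main");
+**  if( p ){
+**    do {
+**      rc = sqlite3_backup_step(p, 100);       // copy up to 100 pages per step
+**      if( rc==SQLITE_BUSY || rc==SQLITE_LOCKED ) sqlite3_sleep(250);
+**    } while( rc==SQLITE_OK || rc==SQLITE_BUSY || rc==SQLITE_LOCKED );
+**    sqlite3_backup_finish(p);
+**  }
+**  rc = sqlite3_errcode(pDest);
+** </pre></blockquote>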
These +** errors are considered fatal.)^ The application must accept +** that the backup operation has failed and pass the backup operation handle +** to the sqlite3_backup_finish() to release associated resources. +** +** ^The first call to sqlite3_backup_step() obtains an exclusive lock +** on the destination file. ^The exclusive lock is not released until either +** sqlite3_backup_finish() is called or the backup operation is complete +** and sqlite3_backup_step() returns [SQLITE_DONE]. ^Every call to +** sqlite3_backup_step() obtains a [shared lock] on the source database that +** lasts for the duration of the sqlite3_backup_step() call. +** ^Because the source database is not locked between calls to +** sqlite3_backup_step(), the source database may be modified mid-way +** through the backup process. ^If the source database is modified by an +** external process or via a database connection other than the one being +** used by the backup operation, then the backup will be automatically +** restarted by the next call to sqlite3_backup_step(). ^If the source +** database is modified by the using the same database connection as is used +** by the backup operation, then the backup database is automatically +** updated at the same time. +** +** [[sqlite3_backup_finish()]] <b>sqlite3_backup_finish()</b> +** +** When sqlite3_backup_step() has returned [SQLITE_DONE], or when the +** application wishes to abandon the backup operation, the application +** should destroy the [sqlite3_backup] by passing it to sqlite3_backup_finish(). +** ^The sqlite3_backup_finish() interfaces releases all +** resources associated with the [sqlite3_backup] object. +** ^If sqlite3_backup_step() has not yet returned [SQLITE_DONE], then any +** active write-transaction on the destination database is rolled back. +** The [sqlite3_backup] object is invalid +** and may not be used following a call to sqlite3_backup_finish(). +** +** ^The value returned by sqlite3_backup_finish is [SQLITE_OK] if no +** sqlite3_backup_step() errors occurred, regardless or whether or not +** sqlite3_backup_step() completed. +** ^If an out-of-memory condition or IO error occurred during any prior +** sqlite3_backup_step() call on the same [sqlite3_backup] object, then +** sqlite3_backup_finish() returns the corresponding [error code]. +** +** ^A return of [SQLITE_BUSY] or [SQLITE_LOCKED] from sqlite3_backup_step() +** is not a permanent error and does not affect the return value of +** sqlite3_backup_finish(). +** +** [[sqlite3_backup_remaining()]] [[sqlite3_backup_pagecount()]] +** <b>sqlite3_backup_remaining() and sqlite3_backup_pagecount()</b> +** +** ^The sqlite3_backup_remaining() routine returns the number of pages still +** to be backed up at the conclusion of the most recent sqlite3_backup_step(). +** ^The sqlite3_backup_pagecount() routine returns the total number of pages +** in the source database at the conclusion of the most recent +** sqlite3_backup_step(). +** ^(The values returned by these functions are only updated by +** sqlite3_backup_step(). 
If the source database is modified in a way that +** changes the size of the source database or the number of pages remaining, +** those changes are not reflected in the output of sqlite3_backup_pagecount() +** and sqlite3_backup_remaining() until after the next +** sqlite3_backup_step().)^ +** +** <b>Concurrent Usage of Database Handles</b> +** +** ^The source [database connection] may be used by the application for other +** purposes while a backup operation is underway or being initialized. +** ^If SQLite is compiled and configured to support threadsafe database +** connections, then the source database connection may be used concurrently +** from within other threads. +** +** However, the application must guarantee that the destination +** [database connection] is not passed to any other API (by any thread) after +** sqlite3_backup_init() is called and before the corresponding call to +** sqlite3_backup_finish(). SQLite does not currently check to see +** if the application incorrectly accesses the destination [database connection] +** and so no error code is reported, but the operations may malfunction +** nevertheless. Use of the destination database connection while a +** backup is in progress might also also cause a mutex deadlock. +** +** If running in [shared cache mode], the application must +** guarantee that the shared cache used by the destination database +** is not accessed while the backup is running. In practice this means +** that the application must guarantee that the disk file being +** backed up to is not accessed by any connection within the process, +** not just the specific connection that was passed to sqlite3_backup_init(). +** +** The [sqlite3_backup] object itself is partially threadsafe. Multiple +** threads may safely make multiple concurrent calls to sqlite3_backup_step(). +** However, the sqlite3_backup_remaining() and sqlite3_backup_pagecount() +** APIs are not strictly speaking threadsafe. If they are invoked at the +** same time as another thread is invoking sqlite3_backup_step() it is +** possible that they return invalid values. +*/ +SQLITE_API sqlite3_backup *SQLITE_STDCALL sqlite3_backup_init( + sqlite3 *pDest, /* Destination database handle */ + const char *zDestName, /* Destination database name */ + sqlite3 *pSource, /* Source database handle */ + const char *zSourceName /* Source database name */ +); +SQLITE_API int SQLITE_STDCALL sqlite3_backup_step(sqlite3_backup *p, int nPage); +SQLITE_API int SQLITE_STDCALL sqlite3_backup_finish(sqlite3_backup *p); +SQLITE_API int SQLITE_STDCALL sqlite3_backup_remaining(sqlite3_backup *p); +SQLITE_API int SQLITE_STDCALL sqlite3_backup_pagecount(sqlite3_backup *p); + +/* +** CAPI3REF: Unlock Notification +** METHOD: sqlite3 +** +** ^When running in shared-cache mode, a database operation may fail with +** an [SQLITE_LOCKED] error if the required locks on the shared-cache or +** individual tables within the shared-cache cannot be obtained. See +** [SQLite Shared-Cache Mode] for a description of shared-cache locking. +** ^This API may be used to register a callback that SQLite will invoke +** when the connection currently holding the required lock relinquishes it. +** ^This API is only available if the library was compiled with the +** [SQLITE_ENABLE_UNLOCK_NOTIFY] C-preprocessor symbol defined. +** +** See Also: [Using the SQLite Unlock Notification Feature]. +** +** ^Shared-cache locks are released when a database connection concludes +** its current transaction, either by committing it or rolling it back. 
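+**
+** In outline, the usage pattern described in the remainder of this section
+** looks like the following sketch; the unlock_cb callback, the wakeup_waiter()
+** routine, and the pWaiter object are hypothetical application constructs:
+**
+** <blockquote><pre>
+**  static void unlock_cb(void **apArg, int nArg){
+**    int i;
+**    for(i=0; i<nArg; i++) wakeup_waiter(apArg[i]);   // application-defined
+**  }
+**  ...
+**  rc = sqlite3_unlock_notify(db, unlock_cb, (void *)pWaiter);
+** </pre></blockquote>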
+** +** ^When a connection (known as the blocked connection) fails to obtain a +** shared-cache lock and SQLITE_LOCKED is returned to the caller, the +** identity of the database connection (the blocking connection) that +** has locked the required resource is stored internally. ^After an +** application receives an SQLITE_LOCKED error, it may call the +** sqlite3_unlock_notify() method with the blocked connection handle as +** the first argument to register for a callback that will be invoked +** when the blocking connections current transaction is concluded. ^The +** callback is invoked from within the [sqlite3_step] or [sqlite3_close] +** call that concludes the blocking connections transaction. +** +** ^(If sqlite3_unlock_notify() is called in a multi-threaded application, +** there is a chance that the blocking connection will have already +** concluded its transaction by the time sqlite3_unlock_notify() is invoked. +** If this happens, then the specified callback is invoked immediately, +** from within the call to sqlite3_unlock_notify().)^ +** +** ^If the blocked connection is attempting to obtain a write-lock on a +** shared-cache table, and more than one other connection currently holds +** a read-lock on the same table, then SQLite arbitrarily selects one of +** the other connections to use as the blocking connection. +** +** ^(There may be at most one unlock-notify callback registered by a +** blocked connection. If sqlite3_unlock_notify() is called when the +** blocked connection already has a registered unlock-notify callback, +** then the new callback replaces the old.)^ ^If sqlite3_unlock_notify() is +** called with a NULL pointer as its second argument, then any existing +** unlock-notify callback is canceled. ^The blocked connections +** unlock-notify callback may also be canceled by closing the blocked +** connection using [sqlite3_close()]. +** +** The unlock-notify callback is not reentrant. If an application invokes +** any sqlite3_xxx API functions from within an unlock-notify callback, a +** crash or deadlock may be the result. +** +** ^Unless deadlock is detected (see below), sqlite3_unlock_notify() always +** returns SQLITE_OK. +** +** <b>Callback Invocation Details</b> +** +** When an unlock-notify callback is registered, the application provides a +** single void* pointer that is passed to the callback when it is invoked. +** However, the signature of the callback function allows SQLite to pass +** it an array of void* context pointers. The first argument passed to +** an unlock-notify callback is a pointer to an array of void* pointers, +** and the second is the number of entries in the array. +** +** When a blocking connections transaction is concluded, there may be +** more than one blocked connection that has registered for an unlock-notify +** callback. ^If two or more such blocked connections have specified the +** same callback function, then instead of invoking the callback function +** multiple times, it is invoked once with the set of void* context pointers +** specified by the blocked connections bundled together into an array. +** This gives the application an opportunity to prioritize any actions +** related to the set of unblocked database connections. +** +** <b>Deadlock Detection</b> +** +** Assuming that after registering for an unlock-notify callback a +** database waits for the callback to be issued before taking any further +** action (a reasonable assumption), then using this API may cause the +** application to deadlock. 
For example, if connection X is waiting for +** connection Y's transaction to be concluded, and similarly connection +** Y is waiting on connection X's transaction, then neither connection +** will proceed and the system may remain deadlocked indefinitely. +** +** To avoid this scenario, the sqlite3_unlock_notify() performs deadlock +** detection. ^If a given call to sqlite3_unlock_notify() would put the +** system in a deadlocked state, then SQLITE_LOCKED is returned and no +** unlock-notify callback is registered. The system is said to be in +** a deadlocked state if connection A has registered for an unlock-notify +** callback on the conclusion of connection B's transaction, and connection +** B has itself registered for an unlock-notify callback when connection +** A's transaction is concluded. ^Indirect deadlock is also detected, so +** the system is also considered to be deadlocked if connection B has +** registered for an unlock-notify callback on the conclusion of connection +** C's transaction, where connection C is waiting on connection A. ^Any +** number of levels of indirection are allowed. +** +** <b>The "DROP TABLE" Exception</b> +** +** When a call to [sqlite3_step()] returns SQLITE_LOCKED, it is almost +** always appropriate to call sqlite3_unlock_notify(). There is however, +** one exception. When executing a "DROP TABLE" or "DROP INDEX" statement, +** SQLite checks if there are any currently executing SELECT statements +** that belong to the same connection. If there are, SQLITE_LOCKED is +** returned. In this case there is no "blocking connection", so invoking +** sqlite3_unlock_notify() results in the unlock-notify callback being +** invoked immediately. If the application then re-attempts the "DROP TABLE" +** or "DROP INDEX" query, an infinite loop might be the result. +** +** One way around this problem is to check the extended error code returned +** by an sqlite3_step() call. ^(If there is a blocking connection, then the +** extended error code is set to SQLITE_LOCKED_SHAREDCACHE. Otherwise, in +** the special "DROP TABLE/INDEX" case, the extended error code is just +** SQLITE_LOCKED.)^ +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_unlock_notify( + sqlite3 *pBlocked, /* Waiting connection */ + void (*xNotify)(void **apArg, int nArg), /* Callback function to invoke */ + void *pNotifyArg /* Argument to pass to xNotify */ +); + + +/* +** CAPI3REF: String Comparison +** +** ^The [sqlite3_stricmp()] and [sqlite3_strnicmp()] APIs allow applications +** and extensions to compare the contents of two buffers containing UTF-8 +** strings in a case-independent fashion, using the same definition of "case +** independence" that SQLite uses internally when comparing identifiers. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_stricmp(const char *, const char *); +SQLITE_API int SQLITE_STDCALL sqlite3_strnicmp(const char *, const char *, int); + +/* +** CAPI3REF: String Globbing +* +** ^The [sqlite3_strglob(P,X)] interface returns zero if and only if +** string X matches the [GLOB] pattern P. +** ^The definition of [GLOB] pattern matching used in +** [sqlite3_strglob(P,X)] is the same as for the "X GLOB P" operator in the +** SQL dialect understood by SQLite. ^The [sqlite3_strglob(P,X)] function +** is case sensitive. +** +** Note that this routine returns zero on a match and non-zero if the strings +** do not match, the same as [sqlite3_stricmp()] and [sqlite3_strnicmp()]. +** +** See also: [sqlite3_strlike()]. 
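+**
+** Purely as an informal sketch (the variable name is arbitrary), note the
+** inverted sense of the return value - zero means "match":
+**
+**   if( sqlite3_strglob("README*", zFilename)==0 ){
+**     // zFilename matched the pattern, e.g. "README.txt"
+**   }
+**   assert( sqlite3_stricmp("abc", "ABC")==0 );  // case-independent equality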
+*/ +SQLITE_API int SQLITE_STDCALL sqlite3_strglob(const char *zGlob, const char *zStr); + +/* +** CAPI3REF: String LIKE Matching +* +** ^The [sqlite3_strlike(P,X,E)] interface returns zero if and only if +** string X matches the [LIKE] pattern P with escape character E. +** ^The definition of [LIKE] pattern matching used in +** [sqlite3_strlike(P,X,E)] is the same as for the "X LIKE P ESCAPE E" +** operator in the SQL dialect understood by SQLite. ^For "X LIKE P" without +** the ESCAPE clause, set the E parameter of [sqlite3_strlike(P,X,E)] to 0. +** ^As with the LIKE operator, the [sqlite3_strlike(P,X,E)] function is case +** insensitive - equivalent upper and lower case ASCII characters match +** one another. +** +** ^The [sqlite3_strlike(P,X,E)] function matches Unicode characters, though +** only ASCII characters are case folded. +** +** Note that this routine returns zero on a match and non-zero if the strings +** do not match, the same as [sqlite3_stricmp()] and [sqlite3_strnicmp()]. +** +** See also: [sqlite3_strglob()]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_strlike(const char *zGlob, const char *zStr, unsigned int cEsc); + +/* +** CAPI3REF: Error Logging Interface +** +** ^The [sqlite3_log()] interface writes a message into the [error log] +** established by the [SQLITE_CONFIG_LOG] option to [sqlite3_config()]. +** ^If logging is enabled, the zFormat string and subsequent arguments are +** used with [sqlite3_snprintf()] to generate the final output string. +** +** The sqlite3_log() interface is intended for use by extensions such as +** virtual tables, collating functions, and SQL functions. While there is +** nothing to prevent an application from calling sqlite3_log(), doing so +** is considered bad form. +** +** The zFormat string must not be NULL. +** +** To avoid deadlocks and other threading problems, the sqlite3_log() routine +** will not use dynamically allocated memory. The log message is stored in +** a fixed-length buffer on the stack. If the log message is longer than +** a few hundred characters, it will be truncated to the length of the +** buffer. +*/ +SQLITE_API void SQLITE_CDECL sqlite3_log(int iErrCode, const char *zFormat, ...); + +/* +** CAPI3REF: Write-Ahead Log Commit Hook +** METHOD: sqlite3 +** +** ^The [sqlite3_wal_hook()] function is used to register a callback that +** is invoked each time data is committed to a database in wal mode. +** +** ^(The callback is invoked by SQLite after the commit has taken place and +** the associated write-lock on the database released)^, so the implementation +** may read, write or [checkpoint] the database as required. +** +** ^The first parameter passed to the callback function when it is invoked +** is a copy of the third parameter passed to sqlite3_wal_hook() when +** registering the callback. ^The second is a copy of the database handle. +** ^The third parameter is the name of the database that was written to - +** either "main" or the name of an [ATTACH]-ed database. ^The fourth parameter +** is the number of pages currently in the write-ahead log file, +** including those that were just committed. +** +** The callback function should normally return [SQLITE_OK]. ^If an error +** code is returned, that error will propagate back up through the +** SQLite code base to cause the statement that provoked the callback +** to report an error, though the commit will have still occurred. 
If the +** callback returns [SQLITE_ROW] or [SQLITE_DONE], or if it returns a value +** that does not correspond to any valid SQLite error code, the results +** are undefined. +** +** A single database handle may have at most a single write-ahead log callback +** registered at one time. ^Calling [sqlite3_wal_hook()] replaces any +** previously registered write-ahead log callback. ^Note that the +** [sqlite3_wal_autocheckpoint()] interface and the +** [wal_autocheckpoint pragma] both invoke [sqlite3_wal_hook()] and will +** those overwrite any prior [sqlite3_wal_hook()] settings. +*/ +SQLITE_API void *SQLITE_STDCALL sqlite3_wal_hook( + sqlite3*, + int(*)(void *,sqlite3*,const char*,int), + void* +); + +/* +** CAPI3REF: Configure an auto-checkpoint +** METHOD: sqlite3 +** +** ^The [sqlite3_wal_autocheckpoint(D,N)] is a wrapper around +** [sqlite3_wal_hook()] that causes any database on [database connection] D +** to automatically [checkpoint] +** after committing a transaction if there are N or +** more frames in the [write-ahead log] file. ^Passing zero or +** a negative value as the nFrame parameter disables automatic +** checkpoints entirely. +** +** ^The callback registered by this function replaces any existing callback +** registered using [sqlite3_wal_hook()]. ^Likewise, registering a callback +** using [sqlite3_wal_hook()] disables the automatic checkpoint mechanism +** configured by this function. +** +** ^The [wal_autocheckpoint pragma] can be used to invoke this interface +** from SQL. +** +** ^Checkpoints initiated by this mechanism are +** [sqlite3_wal_checkpoint_v2|PASSIVE]. +** +** ^Every new [database connection] defaults to having the auto-checkpoint +** enabled with a threshold of 1000 or [SQLITE_DEFAULT_WAL_AUTOCHECKPOINT] +** pages. The use of this interface +** is only necessary if the default setting is found to be suboptimal +** for a particular application. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_wal_autocheckpoint(sqlite3 *db, int N); + +/* +** CAPI3REF: Checkpoint a database +** METHOD: sqlite3 +** +** ^(The sqlite3_wal_checkpoint(D,X) is equivalent to +** [sqlite3_wal_checkpoint_v2](D,X,[SQLITE_CHECKPOINT_PASSIVE],0,0).)^ +** +** In brief, sqlite3_wal_checkpoint(D,X) causes the content in the +** [write-ahead log] for database X on [database connection] D to be +** transferred into the database file and for the write-ahead log to +** be reset. See the [checkpointing] documentation for addition +** information. +** +** This interface used to be the only way to cause a checkpoint to +** occur. But then the newer and more powerful [sqlite3_wal_checkpoint_v2()] +** interface was added. This interface is retained for backwards +** compatibility and as a convenience for applications that need to manually +** start a callback but which do not need the full power (and corresponding +** complication) of [sqlite3_wal_checkpoint_v2()]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_wal_checkpoint(sqlite3 *db, const char *zDb); + +/* +** CAPI3REF: Checkpoint a database +** METHOD: sqlite3 +** +** ^(The sqlite3_wal_checkpoint_v2(D,X,M,L,C) interface runs a checkpoint +** operation on database X of [database connection] D in mode M. 
Status +** information is written back into integers pointed to by L and C.)^ +** ^(The M parameter must be a valid [checkpoint mode]:)^ +** +** <dl> +** <dt>SQLITE_CHECKPOINT_PASSIVE<dd> +** ^Checkpoint as many frames as possible without waiting for any database +** readers or writers to finish, then sync the database file if all frames +** in the log were checkpointed. ^The [busy-handler callback] +** is never invoked in the SQLITE_CHECKPOINT_PASSIVE mode. +** ^On the other hand, passive mode might leave the checkpoint unfinished +** if there are concurrent readers or writers. +** +** <dt>SQLITE_CHECKPOINT_FULL<dd> +** ^This mode blocks (it invokes the +** [sqlite3_busy_handler|busy-handler callback]) until there is no +** database writer and all readers are reading from the most recent database +** snapshot. ^It then checkpoints all frames in the log file and syncs the +** database file. ^This mode blocks new database writers while it is pending, +** but new database readers are allowed to continue unimpeded. +** +** <dt>SQLITE_CHECKPOINT_RESTART<dd> +** ^This mode works the same way as SQLITE_CHECKPOINT_FULL with the addition +** that after checkpointing the log file it blocks (calls the +** [busy-handler callback]) +** until all readers are reading from the database file only. ^This ensures +** that the next writer will restart the log file from the beginning. +** ^Like SQLITE_CHECKPOINT_FULL, this mode blocks new +** database writer attempts while it is pending, but does not impede readers. +** +** <dt>SQLITE_CHECKPOINT_TRUNCATE<dd> +** ^This mode works the same way as SQLITE_CHECKPOINT_RESTART with the +** addition that it also truncates the log file to zero bytes just prior +** to a successful return. +** </dl> +** +** ^If pnLog is not NULL, then *pnLog is set to the total number of frames in +** the log file or to -1 if the checkpoint could not run because +** of an error or because the database is not in [WAL mode]. ^If pnCkpt is not +** NULL,then *pnCkpt is set to the total number of checkpointed frames in the +** log file (including any that were already checkpointed before the function +** was called) or to -1 if the checkpoint could not run due to an error or +** because the database is not in WAL mode. ^Note that upon successful +** completion of an SQLITE_CHECKPOINT_TRUNCATE, the log file will have been +** truncated to zero bytes and so both *pnLog and *pnCkpt will be set to zero. +** +** ^All calls obtain an exclusive "checkpoint" lock on the database file. ^If +** any other process is running a checkpoint operation at the same time, the +** lock cannot be obtained and SQLITE_BUSY is returned. ^Even if there is a +** busy-handler configured, it will not be invoked in this case. +** +** ^The SQLITE_CHECKPOINT_FULL, RESTART and TRUNCATE modes also obtain the +** exclusive "writer" lock on the database file. ^If the writer lock cannot be +** obtained immediately, and a busy-handler is configured, it is invoked and +** the writer lock retried until either the busy-handler returns 0 or the lock +** is successfully obtained. ^The busy-handler is also invoked while waiting for +** database readers as described above. ^If the busy-handler returns 0 before +** the writer lock is obtained or while waiting for database readers, the +** checkpoint operation proceeds from that point in the same way as +** SQLITE_CHECKPOINT_PASSIVE - checkpointing as many frames as possible +** without blocking any further. ^SQLITE_BUSY is returned in this case. 
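+**
+** As an informal usage sketch (not a normative example; error handling is
+** minimal), an application might force a truncating checkpoint of the
+** "main" database and inspect the frame counts like this:
+**
+**   int nLog = 0, nCkpt = 0;
+**   int rc = sqlite3_wal_checkpoint_v2(db, "main",
+**                SQLITE_CHECKPOINT_TRUNCATE, &nLog, &nCkpt);
+**   if( rc==SQLITE_OK ){
+**     // With TRUNCATE, a successful return leaves nLog==0 and nCkpt==0.
+**   }else if( rc==SQLITE_BUSY ){
+**     // Another checkpoint was already running, or the busy-handler
+**     // returned 0 and the checkpoint degraded to PASSIVE behavior.
+**   }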
+** +** ^If parameter zDb is NULL or points to a zero length string, then the +** specified operation is attempted on all WAL databases [attached] to +** [database connection] db. In this case the +** values written to output parameters *pnLog and *pnCkpt are undefined. ^If +** an SQLITE_BUSY error is encountered when processing one or more of the +** attached WAL databases, the operation is still attempted on any remaining +** attached databases and SQLITE_BUSY is returned at the end. ^If any other +** error occurs while processing an attached database, processing is abandoned +** and the error code is returned to the caller immediately. ^If no error +** (SQLITE_BUSY or otherwise) is encountered while processing the attached +** databases, SQLITE_OK is returned. +** +** ^If database zDb is the name of an attached database that is not in WAL +** mode, SQLITE_OK is returned and both *pnLog and *pnCkpt set to -1. ^If +** zDb is not NULL (or a zero length string) and is not the name of any +** attached database, SQLITE_ERROR is returned to the caller. +** +** ^Unless it returns SQLITE_MISUSE, +** the sqlite3_wal_checkpoint_v2() interface +** sets the error information that is queried by +** [sqlite3_errcode()] and [sqlite3_errmsg()]. +** +** ^The [PRAGMA wal_checkpoint] command can be used to invoke this interface +** from SQL. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_wal_checkpoint_v2( + sqlite3 *db, /* Database handle */ + const char *zDb, /* Name of attached database (or NULL) */ + int eMode, /* SQLITE_CHECKPOINT_* value */ + int *pnLog, /* OUT: Size of WAL log in frames */ + int *pnCkpt /* OUT: Total number of frames checkpointed */ +); + +/* +** CAPI3REF: Checkpoint Mode Values +** KEYWORDS: {checkpoint mode} +** +** These constants define all valid values for the "checkpoint mode" passed +** as the third parameter to the [sqlite3_wal_checkpoint_v2()] interface. +** See the [sqlite3_wal_checkpoint_v2()] documentation for details on the +** meaning of each of these checkpoint modes. +*/ +#define SQLITE_CHECKPOINT_PASSIVE 0 /* Do as much as possible w/o blocking */ +#define SQLITE_CHECKPOINT_FULL 1 /* Wait for writers, then checkpoint */ +#define SQLITE_CHECKPOINT_RESTART 2 /* Like FULL but wait for for readers */ +#define SQLITE_CHECKPOINT_TRUNCATE 3 /* Like RESTART but also truncate WAL */ + +/* +** CAPI3REF: Virtual Table Interface Configuration +** +** This function may be called by either the [xConnect] or [xCreate] method +** of a [virtual table] implementation to configure +** various facets of the virtual table interface. +** +** If this interface is invoked outside the context of an xConnect or +** xCreate virtual table method then the behavior is undefined. +** +** At present, there is only one option that may be configured using +** this function. (See [SQLITE_VTAB_CONSTRAINT_SUPPORT].) Further options +** may be added in the future. +*/ +SQLITE_API int SQLITE_CDECL sqlite3_vtab_config(sqlite3*, int op, ...); + +/* +** CAPI3REF: Virtual Table Configuration Options +** +** These macros define the various options to the +** [sqlite3_vtab_config()] interface that [virtual table] implementations +** can use to customize and optimize their behavior. +** +** <dl> +** <dt>SQLITE_VTAB_CONSTRAINT_SUPPORT +** <dd>Calls of the form +** [sqlite3_vtab_config](db,SQLITE_VTAB_CONSTRAINT_SUPPORT,X) are supported, +** where X is an integer. If X is zero, then the [virtual table] whose +** [xCreate] or [xConnect] method invoked [sqlite3_vtab_config()] does not +** support constraints. 
In this configuration (which is the default) if +** a call to the [xUpdate] method returns [SQLITE_CONSTRAINT], then the entire +** statement is rolled back as if [ON CONFLICT | OR ABORT] had been +** specified as part of the users SQL statement, regardless of the actual +** ON CONFLICT mode specified. +** +** If X is non-zero, then the virtual table implementation guarantees +** that if [xUpdate] returns [SQLITE_CONSTRAINT], it will do so before +** any modifications to internal or persistent data structures have been made. +** If the [ON CONFLICT] mode is ABORT, FAIL, IGNORE or ROLLBACK, SQLite +** is able to roll back a statement or database transaction, and abandon +** or continue processing the current SQL statement as appropriate. +** If the ON CONFLICT mode is REPLACE and the [xUpdate] method returns +** [SQLITE_CONSTRAINT], SQLite handles this as if the ON CONFLICT mode +** had been ABORT. +** +** Virtual table implementations that are required to handle OR REPLACE +** must do so within the [xUpdate] method. If a call to the +** [sqlite3_vtab_on_conflict()] function indicates that the current ON +** CONFLICT policy is REPLACE, the virtual table implementation should +** silently replace the appropriate rows within the xUpdate callback and +** return SQLITE_OK. Or, if this is not possible, it may return +** SQLITE_CONSTRAINT, in which case SQLite falls back to OR ABORT +** constraint handling. +** </dl> +*/ +#define SQLITE_VTAB_CONSTRAINT_SUPPORT 1 + +/* +** CAPI3REF: Determine The Virtual Table Conflict Policy +** +** This function may only be called from within a call to the [xUpdate] method +** of a [virtual table] implementation for an INSERT or UPDATE operation. ^The +** value returned is one of [SQLITE_ROLLBACK], [SQLITE_IGNORE], [SQLITE_FAIL], +** [SQLITE_ABORT], or [SQLITE_REPLACE], according to the [ON CONFLICT] mode +** of the SQL statement that triggered the call to the [xUpdate] method of the +** [virtual table]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_vtab_on_conflict(sqlite3 *); + +/* +** CAPI3REF: Conflict resolution modes +** KEYWORDS: {conflict resolution mode} +** +** These constants are returned by [sqlite3_vtab_on_conflict()] to +** inform a [virtual table] implementation what the [ON CONFLICT] mode +** is for the SQL statement being evaluated. +** +** Note that the [SQLITE_IGNORE] constant is also used as a potential +** return value from the [sqlite3_set_authorizer()] callback and that +** [SQLITE_ABORT] is also a [result code]. +*/ +#define SQLITE_ROLLBACK 1 +/* #define SQLITE_IGNORE 2 // Also used by sqlite3_authorizer() callback */ +#define SQLITE_FAIL 3 +/* #define SQLITE_ABORT 4 // Also an error code */ +#define SQLITE_REPLACE 5 + +/* +** CAPI3REF: Prepared Statement Scan Status Opcodes +** KEYWORDS: {scanstatus options} +** +** The following constants can be used for the T parameter to the +** [sqlite3_stmt_scanstatus(S,X,T,V)] interface. Each constant designates a +** different metric for sqlite3_stmt_scanstatus() to return. +** +** When the value returned to V is a string, space to hold that string is +** managed by the prepared statement S and will be automatically freed when +** S is finalized. 
+** +** <dl> +** [[SQLITE_SCANSTAT_NLOOP]] <dt>SQLITE_SCANSTAT_NLOOP</dt> +** <dd>^The [sqlite3_int64] variable pointed to by the T parameter will be +** set to the total number of times that the X-th loop has run.</dd> +** +** [[SQLITE_SCANSTAT_NVISIT]] <dt>SQLITE_SCANSTAT_NVISIT</dt> +** <dd>^The [sqlite3_int64] variable pointed to by the T parameter will be set +** to the total number of rows examined by all iterations of the X-th loop.</dd> +** +** [[SQLITE_SCANSTAT_EST]] <dt>SQLITE_SCANSTAT_EST</dt> +** <dd>^The "double" variable pointed to by the T parameter will be set to the +** query planner's estimate for the average number of rows output from each +** iteration of the X-th loop. If the query planner's estimates was accurate, +** then this value will approximate the quotient NVISIT/NLOOP and the +** product of this value for all prior loops with the same SELECTID will +** be the NLOOP value for the current loop. +** +** [[SQLITE_SCANSTAT_NAME]] <dt>SQLITE_SCANSTAT_NAME</dt> +** <dd>^The "const char *" variable pointed to by the T parameter will be set +** to a zero-terminated UTF-8 string containing the name of the index or table +** used for the X-th loop. +** +** [[SQLITE_SCANSTAT_EXPLAIN]] <dt>SQLITE_SCANSTAT_EXPLAIN</dt> +** <dd>^The "const char *" variable pointed to by the T parameter will be set +** to a zero-terminated UTF-8 string containing the [EXPLAIN QUERY PLAN] +** description for the X-th loop. +** +** [[SQLITE_SCANSTAT_SELECTID]] <dt>SQLITE_SCANSTAT_SELECT</dt> +** <dd>^The "int" variable pointed to by the T parameter will be set to the +** "select-id" for the X-th loop. The select-id identifies which query or +** subquery the loop is part of. The main query has a select-id of zero. +** The select-id is the same value as is output in the first column +** of an [EXPLAIN QUERY PLAN] query. +** </dl> +*/ +#define SQLITE_SCANSTAT_NLOOP 0 +#define SQLITE_SCANSTAT_NVISIT 1 +#define SQLITE_SCANSTAT_EST 2 +#define SQLITE_SCANSTAT_NAME 3 +#define SQLITE_SCANSTAT_EXPLAIN 4 +#define SQLITE_SCANSTAT_SELECTID 5 + +/* +** CAPI3REF: Prepared Statement Scan Status +** METHOD: sqlite3_stmt +** +** This interface returns information about the predicted and measured +** performance for pStmt. Advanced applications can use this +** interface to compare the predicted and the measured performance and +** issue warnings and/or rerun [ANALYZE] if discrepancies are found. +** +** Since this interface is expected to be rarely used, it is only +** available if SQLite is compiled using the [SQLITE_ENABLE_STMT_SCANSTATUS] +** compile-time option. +** +** The "iScanStatusOp" parameter determines which status information to return. +** The "iScanStatusOp" must be one of the [scanstatus options] or the behavior +** of this interface is undefined. +** ^The requested measurement is written into a variable pointed to by +** the "pOut" parameter. +** Parameter "idx" identifies the specific loop to retrieve statistics for. +** Loops are numbered starting from zero. ^If idx is out of range - less than +** zero or greater than or equal to the total number of loops used to implement +** the statement - a non-zero value is returned and the variable that pOut +** points to is unchanged. +** +** ^Statistics might not be available for all loops in all statements. ^In cases +** where there exist loops with no available statistics, this function behaves +** as if the loop did not exist - it returns non-zero and leave the variable +** that pOut points to unchanged. 
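+**
+** As an informal sketch (assuming a build with SQLITE_ENABLE_STMT_SCANSTATUS
+** and with error handling omitted), the loops of a prepared statement can be
+** walked like this:
+**
+**   int idx;
+**   for(idx=0; ; idx++){
+**     sqlite3_int64 nLoop = 0, nVisit = 0;
+**     const char *zExplain = 0;
+**     if( sqlite3_stmt_scanstatus(pStmt, idx, SQLITE_SCANSTAT_NLOOP, &nLoop) ){
+**       break;   // no more loops, or no statistics available for this one
+**     }
+**     sqlite3_stmt_scanstatus(pStmt, idx, SQLITE_SCANSTAT_NVISIT, &nVisit);
+**     sqlite3_stmt_scanstatus(pStmt, idx, SQLITE_SCANSTAT_EXPLAIN, &zExplain);
+**     printf("loop %d: %lld runs, %lld rows visited, %s\n", idx,
+**            (long long)nLoop, (long long)nVisit,
+**            zExplain ? zExplain : "?");
+**   }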
+** +** See also: [sqlite3_stmt_scanstatus_reset()] +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_stmt_scanstatus( + sqlite3_stmt *pStmt, /* Prepared statement for which info desired */ + int idx, /* Index of loop to report on */ + int iScanStatusOp, /* Information desired. SQLITE_SCANSTAT_* */ + void *pOut /* Result written here */ +); + +/* +** CAPI3REF: Zero Scan-Status Counters +** METHOD: sqlite3_stmt +** +** ^Zero all [sqlite3_stmt_scanstatus()] related event counters. +** +** This API is only available if the library is built with pre-processor +** symbol [SQLITE_ENABLE_STMT_SCANSTATUS] defined. +*/ +SQLITE_API void SQLITE_STDCALL sqlite3_stmt_scanstatus_reset(sqlite3_stmt*); + +/* +** CAPI3REF: Flush caches to disk mid-transaction +** +** ^If a write-transaction is open on [database connection] D when the +** [sqlite3_db_cacheflush(D)] interface invoked, any dirty +** pages in the pager-cache that are not currently in use are written out +** to disk. A dirty page may be in use if a database cursor created by an +** active SQL statement is reading from it, or if it is page 1 of a database +** file (page 1 is always "in use"). ^The [sqlite3_db_cacheflush(D)] +** interface flushes caches for all schemas - "main", "temp", and +** any [attached] databases. +** +** ^If this function needs to obtain extra database locks before dirty pages +** can be flushed to disk, it does so. ^If those locks cannot be obtained +** immediately and there is a busy-handler callback configured, it is invoked +** in the usual manner. ^If the required lock still cannot be obtained, then +** the database is skipped and an attempt made to flush any dirty pages +** belonging to the next (if any) database. ^If any databases are skipped +** because locks cannot be obtained, but no other error occurs, this +** function returns SQLITE_BUSY. +** +** ^If any other error occurs while flushing dirty pages to disk (for +** example an IO error or out-of-memory condition), then processing is +** abandoned and an SQLite [error code] is returned to the caller immediately. +** +** ^Otherwise, if no error occurs, [sqlite3_db_cacheflush()] returns SQLITE_OK. +** +** ^This function does not set the database handle error code or message +** returned by the [sqlite3_errcode()] and [sqlite3_errmsg()] functions. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_db_cacheflush(sqlite3*); + +/* +** CAPI3REF: Database Snapshot +** KEYWORDS: {snapshot} +** EXPERIMENTAL +** +** An instance of the snapshot object records the state of a [WAL mode] +** database for some specific point in history. +** +** In [WAL mode], multiple [database connections] that are open on the +** same database file can each be reading a different historical version +** of the database file. When a [database connection] begins a read +** transaction, that connection sees an unchanging copy of the database +** as it existed for the point in time when the transaction first started. +** Subsequent changes to the database from other connections are not seen +** by the reader until a new read transaction is started. +** +** The sqlite3_snapshot object records state information about an historical +** version of the database file so that it is possible to later open a new read +** transaction that sees that historical version of the database rather than +** the most recent version. +** +** The constructor for this object is [sqlite3_snapshot_get()]. The +** [sqlite3_snapshot_open()] method causes a fresh read transaction to refer +** to an historical snapshot (if possible). 
The destructor for +** sqlite3_snapshot objects is [sqlite3_snapshot_free()]. +*/ +typedef struct sqlite3_snapshot sqlite3_snapshot; + +/* +** CAPI3REF: Record A Database Snapshot +** EXPERIMENTAL +** +** ^The [sqlite3_snapshot_get(D,S,P)] interface attempts to make a +** new [sqlite3_snapshot] object that records the current state of +** schema S in database connection D. ^On success, the +** [sqlite3_snapshot_get(D,S,P)] interface writes a pointer to the newly +** created [sqlite3_snapshot] object into *P and returns SQLITE_OK. +** ^If schema S of [database connection] D is not a [WAL mode] database +** that is in a read transaction, then [sqlite3_snapshot_get(D,S,P)] +** leaves the *P value unchanged and returns an appropriate [error code]. +** +** The [sqlite3_snapshot] object returned from a successful call to +** [sqlite3_snapshot_get()] must be freed using [sqlite3_snapshot_free()] +** to avoid a memory leak. +** +** The [sqlite3_snapshot_get()] interface is only available when the +** SQLITE_ENABLE_SNAPSHOT compile-time option is used. +*/ +SQLITE_API SQLITE_EXPERIMENTAL int SQLITE_STDCALL sqlite3_snapshot_get( + sqlite3 *db, + const char *zSchema, + sqlite3_snapshot **ppSnapshot +); + +/* +** CAPI3REF: Start a read transaction on an historical snapshot +** EXPERIMENTAL +** +** ^The [sqlite3_snapshot_open(D,S,P)] interface attempts to move the +** read transaction that is currently open on schema S of +** [database connection] D so that it refers to historical [snapshot] P. +** ^The [sqlite3_snapshot_open()] interface returns SQLITE_OK on success +** or an appropriate [error code] if it fails. +** +** ^In order to succeed, a call to [sqlite3_snapshot_open(D,S,P)] must be +** the first operation, apart from other sqlite3_snapshot_open() calls, +** following the [BEGIN] that starts a new read transaction. +** ^A [snapshot] will fail to open if it has been overwritten by a +** [checkpoint]. +** +** The [sqlite3_snapshot_open()] interface is only available when the +** SQLITE_ENABLE_SNAPSHOT compile-time option is used. +*/ +SQLITE_API SQLITE_EXPERIMENTAL int SQLITE_STDCALL sqlite3_snapshot_open( + sqlite3 *db, + const char *zSchema, + sqlite3_snapshot *pSnapshot +); + +/* +** CAPI3REF: Destroy a snapshot +** EXPERIMENTAL +** +** ^The [sqlite3_snapshot_free(P)] interface destroys [sqlite3_snapshot] P. +** The application must eventually free every [sqlite3_snapshot] object +** using this routine to avoid a memory leak. +** +** The [sqlite3_snapshot_free()] interface is only available when the +** SQLITE_ENABLE_SNAPSHOT compile-time option is used. +*/ +SQLITE_API SQLITE_EXPERIMENTAL void SQLITE_STDCALL sqlite3_snapshot_free(sqlite3_snapshot*); + +/* +** Undo the hack that converts floating point types to integer for +** builds on processors without floating point support. +*/ +#ifdef SQLITE_OMIT_FLOATING_POINT +# undef double +#endif + +#if 0 +} /* End of the 'extern "C"' block */ +#endif +#endif /* _SQLITE3_H_ */ + +/* +** 2010 August 30 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. 
+** +************************************************************************* +*/ + +#ifndef _SQLITE3RTREE_H_ +#define _SQLITE3RTREE_H_ + + +#if 0 +extern "C" { +#endif + +typedef struct sqlite3_rtree_geometry sqlite3_rtree_geometry; +typedef struct sqlite3_rtree_query_info sqlite3_rtree_query_info; + +/* The double-precision datatype used by RTree depends on the +** SQLITE_RTREE_INT_ONLY compile-time option. +*/ +#ifdef SQLITE_RTREE_INT_ONLY + typedef sqlite3_int64 sqlite3_rtree_dbl; +#else + typedef double sqlite3_rtree_dbl; +#endif + +/* +** Register a geometry callback named zGeom that can be used as part of an +** R-Tree geometry query as follows: +** +** SELECT ... FROM <rtree> WHERE <rtree col> MATCH $zGeom(... params ...) +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_rtree_geometry_callback( + sqlite3 *db, + const char *zGeom, + int (*xGeom)(sqlite3_rtree_geometry*, int, sqlite3_rtree_dbl*,int*), + void *pContext +); + + +/* +** A pointer to a structure of the following type is passed as the first +** argument to callbacks registered using rtree_geometry_callback(). +*/ +struct sqlite3_rtree_geometry { + void *pContext; /* Copy of pContext passed to s_r_g_c() */ + int nParam; /* Size of array aParam[] */ + sqlite3_rtree_dbl *aParam; /* Parameters passed to SQL geom function */ + void *pUser; /* Callback implementation user data */ + void (*xDelUser)(void *); /* Called by SQLite to clean up pUser */ +}; + +/* +** Register a 2nd-generation geometry callback named zScore that can be +** used as part of an R-Tree geometry query as follows: +** +** SELECT ... FROM <rtree> WHERE <rtree col> MATCH $zQueryFunc(... params ...) +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_rtree_query_callback( + sqlite3 *db, + const char *zQueryFunc, + int (*xQueryFunc)(sqlite3_rtree_query_info*), + void *pContext, + void (*xDestructor)(void*) +); + + +/* +** A pointer to a structure of the following type is passed as the +** argument to scored geometry callback registered using +** sqlite3_rtree_query_callback(). +** +** Note that the first 5 fields of this structure are identical to +** sqlite3_rtree_geometry. This structure is a subclass of +** sqlite3_rtree_geometry. +*/ +struct sqlite3_rtree_query_info { + void *pContext; /* pContext from when function registered */ + int nParam; /* Number of function parameters */ + sqlite3_rtree_dbl *aParam; /* value of function parameters */ + void *pUser; /* callback can use this, if desired */ + void (*xDelUser)(void*); /* function to free pUser */ + sqlite3_rtree_dbl *aCoord; /* Coordinates of node or entry to check */ + unsigned int *anQueue; /* Number of pending entries in the queue */ + int nCoord; /* Number of coordinates */ + int iLevel; /* Level of current node or entry */ + int mxLevel; /* The largest iLevel value in the tree */ + sqlite3_int64 iRowid; /* Rowid for current entry */ + sqlite3_rtree_dbl rParentScore; /* Score of parent node */ + int eParentWithin; /* Visibility of parent node */ + int eWithin; /* OUT: Visiblity */ + sqlite3_rtree_dbl rScore; /* OUT: Write the score here */ + /* The following fields are only available in 3.8.11 and later */ + sqlite3_value **apSqlParam; /* Original SQL values of parameters */ +}; + +/* +** Allowed values for sqlite3_rtree_query.eWithin and .eParentWithin. 
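+**
+** Purely as an informal sketch (the function name is arbitrary), a
+** second-generation callback registered with sqlite3_rtree_query_callback()
+** reports its verdict through these values:
+**
+**   static int xSampleQuery(sqlite3_rtree_query_info *pInfo){
+**     // A real implementation would examine pInfo->aCoord[] (the bounding
+**     // box of the node or entry) and pInfo->aParam[] (the SQL arguments).
+**     pInfo->eWithin = PARTLY_WITHIN;
+**     pInfo->rScore = (sqlite3_rtree_dbl)pInfo->iLevel;
+**     return SQLITE_OK;
+**   }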
+*/ +#define NOT_WITHIN 0 /* Object completely outside of query region */ +#define PARTLY_WITHIN 1 /* Object partially overlaps query region */ +#define FULLY_WITHIN 2 /* Object fully contained within query region */ + + +#if 0 +} /* end of the 'extern "C"' block */ +#endif + +#endif /* ifndef _SQLITE3RTREE_H_ */ + +/* +** 2014 May 31 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** Interfaces to extend FTS5. Using the interfaces defined in this file, +** FTS5 may be extended with: +** +** * custom tokenizers, and +** * custom auxiliary functions. +*/ + + +#ifndef _FTS5_H +#define _FTS5_H + + +#if 0 +extern "C" { +#endif + +/************************************************************************* +** CUSTOM AUXILIARY FUNCTIONS +** +** Virtual table implementations may overload SQL functions by implementing +** the sqlite3_module.xFindFunction() method. +*/ + +typedef struct Fts5ExtensionApi Fts5ExtensionApi; +typedef struct Fts5Context Fts5Context; +typedef struct Fts5PhraseIter Fts5PhraseIter; + +typedef void (*fts5_extension_function)( + const Fts5ExtensionApi *pApi, /* API offered by current FTS version */ + Fts5Context *pFts, /* First arg to pass to pApi functions */ + sqlite3_context *pCtx, /* Context for returning result/error */ + int nVal, /* Number of values in apVal[] array */ + sqlite3_value **apVal /* Array of trailing arguments */ +); + +struct Fts5PhraseIter { + const unsigned char *a; + const unsigned char *b; +}; + +/* +** EXTENSION API FUNCTIONS +** +** xUserData(pFts): +** Return a copy of the context pointer the extension function was +** registered with. +** +** xColumnTotalSize(pFts, iCol, pnToken): +** If parameter iCol is less than zero, set output variable *pnToken +** to the total number of tokens in the FTS5 table. Or, if iCol is +** non-negative but less than the number of columns in the table, return +** the total number of tokens in column iCol, considering all rows in +** the FTS5 table. +** +** If parameter iCol is greater than or equal to the number of columns +** in the table, SQLITE_RANGE is returned. Or, if an error occurs (e.g. +** an OOM condition or IO error), an appropriate SQLite error code is +** returned. +** +** xColumnCount(pFts): +** Return the number of columns in the table. +** +** xColumnSize(pFts, iCol, pnToken): +** If parameter iCol is less than zero, set output variable *pnToken +** to the total number of tokens in the current row. Or, if iCol is +** non-negative but less than the number of columns in the table, set +** *pnToken to the number of tokens in column iCol of the current row. +** +** If parameter iCol is greater than or equal to the number of columns +** in the table, SQLITE_RANGE is returned. Or, if an error occurs (e.g. +** an OOM condition or IO error), an appropriate SQLite error code is +** returned. +** +** This function may be quite inefficient if used with an FTS5 table +** created with the "columnsize=0" option. +** +** xColumnText: +** This function attempts to retrieve the text of column iCol of the +** current document. If successful, (*pz) is set to point to a buffer +** containing the text in utf-8 encoding, (*pn) is set to the size in bytes +** (not characters) of the buffer and SQLITE_OK is returned. 
Otherwise, +** if an error occurs, an SQLite error code is returned and the final values +** of (*pz) and (*pn) are undefined. +** +** xPhraseCount: +** Returns the number of phrases in the current query expression. +** +** xPhraseSize: +** Returns the number of tokens in phrase iPhrase of the query. Phrases +** are numbered starting from zero. +** +** xInstCount: +** Set *pnInst to the total number of occurrences of all phrases within +** the query within the current row. Return SQLITE_OK if successful, or +** an error code (i.e. SQLITE_NOMEM) if an error occurs. +** +** This API can be quite slow if used with an FTS5 table created with the +** "detail=none" or "detail=column" option. If the FTS5 table is created +** with either "detail=none" or "detail=column" and "content=" option +** (i.e. if it is a contentless table), then this API always returns 0. +** +** xInst: +** Query for the details of phrase match iIdx within the current row. +** Phrase matches are numbered starting from zero, so the iIdx argument +** should be greater than or equal to zero and smaller than the value +** output by xInstCount(). +** +** Usually, output parameter *piPhrase is set to the phrase number, *piCol +** to the column in which it occurs and *piOff the token offset of the +** first token of the phrase. The exception is if the table was created +** with the offsets=0 option specified. In this case *piOff is always +** set to -1. +** +** Returns SQLITE_OK if successful, or an error code (i.e. SQLITE_NOMEM) +** if an error occurs. +** +** This API can be quite slow if used with an FTS5 table created with the +** "detail=none" or "detail=column" option. +** +** xRowid: +** Returns the rowid of the current row. +** +** xTokenize: +** Tokenize text using the tokenizer belonging to the FTS5 table. +** +** xQueryPhrase(pFts5, iPhrase, pUserData, xCallback): +** This API function is used to query the FTS table for phrase iPhrase +** of the current query. Specifically, a query equivalent to: +** +** ... FROM ftstable WHERE ftstable MATCH $p ORDER BY rowid +** +** with $p set to a phrase equivalent to the phrase iPhrase of the +** current query is executed. For each row visited, the callback function +** passed as the fourth argument is invoked. The context and API objects +** passed to the callback function may be used to access the properties of +** each matched row. Invoking Api.xUserData() returns a copy of the pointer +** passed as the third argument to pUserData. +** +** If the callback function returns any value other than SQLITE_OK, the +** query is abandoned and the xQueryPhrase function returns immediately. +** If the returned value is SQLITE_DONE, xQueryPhrase returns SQLITE_OK. +** Otherwise, the error code is propagated upwards. +** +** If the query runs to completion without incident, SQLITE_OK is returned. +** Or, if some error occurs before the query completes or is aborted by +** the callback, an SQLite error code is returned. +** +** +** xSetAuxdata(pFts5, pAux, xDelete) +** +** Save the pointer passed as the second argument as the extension functions +** "auxiliary data". The pointer may then be retrieved by the current or any +** future invocation of the same fts5 extension function made as part of +** of the same MATCH query using the xGetAuxdata() API. +** +** Each extension function is allocated a single auxiliary data slot for +** each FTS query (MATCH expression). 
If the extension function is invoked +** more than once for a single FTS query, then all invocations share a +** single auxiliary data context. +** +** If there is already an auxiliary data pointer when this function is +** invoked, then it is replaced by the new pointer. If an xDelete callback +** was specified along with the original pointer, it is invoked at this +** point. +** +** The xDelete callback, if one is specified, is also invoked on the +** auxiliary data pointer after the FTS5 query has finished. +** +** If an error (e.g. an OOM condition) occurs within this function, an +** the auxiliary data is set to NULL and an error code returned. If the +** xDelete parameter was not NULL, it is invoked on the auxiliary data +** pointer before returning. +** +** +** xGetAuxdata(pFts5, bClear) +** +** Returns the current auxiliary data pointer for the fts5 extension +** function. See the xSetAuxdata() method for details. +** +** If the bClear argument is non-zero, then the auxiliary data is cleared +** (set to NULL) before this function returns. In this case the xDelete, +** if any, is not invoked. +** +** +** xRowCount(pFts5, pnRow) +** +** This function is used to retrieve the total number of rows in the table. +** In other words, the same value that would be returned by: +** +** SELECT count(*) FROM ftstable; +** +** xPhraseFirst() +** This function is used, along with type Fts5PhraseIter and the xPhraseNext +** method, to iterate through all instances of a single query phrase within +** the current row. This is the same information as is accessible via the +** xInstCount/xInst APIs. While the xInstCount/xInst APIs are more convenient +** to use, this API may be faster under some circumstances. To iterate +** through instances of phrase iPhrase, use the following code: +** +** Fts5PhraseIter iter; +** int iCol, iOff; +** for(pApi->xPhraseFirst(pFts, iPhrase, &iter, &iCol, &iOff); +** iCol>=0; +** pApi->xPhraseNext(pFts, &iter, &iCol, &iOff) +** ){ +** // An instance of phrase iPhrase at offset iOff of column iCol +** } +** +** The Fts5PhraseIter structure is defined above. Applications should not +** modify this structure directly - it should only be used as shown above +** with the xPhraseFirst() and xPhraseNext() API methods (and by +** xPhraseFirstColumn() and xPhraseNextColumn() as illustrated below). +** +** This API can be quite slow if used with an FTS5 table created with the +** "detail=none" or "detail=column" option. If the FTS5 table is created +** with either "detail=none" or "detail=column" and "content=" option +** (i.e. if it is a contentless table), then this API always iterates +** through an empty set (all calls to xPhraseFirst() set iCol to -1). +** +** xPhraseNext() +** See xPhraseFirst above. +** +** xPhraseFirstColumn() +** This function and xPhraseNextColumn() are similar to the xPhraseFirst() +** and xPhraseNext() APIs described above. The difference is that instead +** of iterating through all instances of a phrase in the current row, these +** APIs are used to iterate through the set of columns in the current row +** that contain one or more instances of a specified phrase. For example: +** +** Fts5PhraseIter iter; +** int iCol; +** for(pApi->xPhraseFirstColumn(pFts, iPhrase, &iter, &iCol); +** iCol>=0; +** pApi->xPhraseNextColumn(pFts, &iter, &iCol) +** ){ +** // Column iCol contains at least one instance of phrase iPhrase +** } +** +** This API can be quite slow if used with an FTS5 table created with the +** "detail=none" option. 
If the FTS5 table is created with either +** "detail=none" "content=" option (i.e. if it is a contentless table), +** then this API always iterates through an empty set (all calls to +** xPhraseFirstColumn() set iCol to -1). +** +** The information accessed using this API and its companion +** xPhraseFirstColumn() may also be obtained using xPhraseFirst/xPhraseNext +** (or xInst/xInstCount). The chief advantage of this API is that it is +** significantly more efficient than those alternatives when used with +** "detail=column" tables. +** +** xPhraseNextColumn() +** See xPhraseFirstColumn above. +*/ +struct Fts5ExtensionApi { + int iVersion; /* Currently always set to 3 */ + + void *(*xUserData)(Fts5Context*); + + int (*xColumnCount)(Fts5Context*); + int (*xRowCount)(Fts5Context*, sqlite3_int64 *pnRow); + int (*xColumnTotalSize)(Fts5Context*, int iCol, sqlite3_int64 *pnToken); + + int (*xTokenize)(Fts5Context*, + const char *pText, int nText, /* Text to tokenize */ + void *pCtx, /* Context passed to xToken() */ + int (*xToken)(void*, int, const char*, int, int, int) /* Callback */ + ); + + int (*xPhraseCount)(Fts5Context*); + int (*xPhraseSize)(Fts5Context*, int iPhrase); + + int (*xInstCount)(Fts5Context*, int *pnInst); + int (*xInst)(Fts5Context*, int iIdx, int *piPhrase, int *piCol, int *piOff); + + sqlite3_int64 (*xRowid)(Fts5Context*); + int (*xColumnText)(Fts5Context*, int iCol, const char **pz, int *pn); + int (*xColumnSize)(Fts5Context*, int iCol, int *pnToken); + + int (*xQueryPhrase)(Fts5Context*, int iPhrase, void *pUserData, + int(*)(const Fts5ExtensionApi*,Fts5Context*,void*) + ); + int (*xSetAuxdata)(Fts5Context*, void *pAux, void(*xDelete)(void*)); + void *(*xGetAuxdata)(Fts5Context*, int bClear); + + int (*xPhraseFirst)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*, int*); + void (*xPhraseNext)(Fts5Context*, Fts5PhraseIter*, int *piCol, int *piOff); + + int (*xPhraseFirstColumn)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*); + void (*xPhraseNextColumn)(Fts5Context*, Fts5PhraseIter*, int *piCol); +}; + +/* +** CUSTOM AUXILIARY FUNCTIONS +*************************************************************************/ + +/************************************************************************* +** CUSTOM TOKENIZERS +** +** Applications may also register custom tokenizer types. A tokenizer +** is registered by providing fts5 with a populated instance of the +** following structure. All structure methods must be defined, setting +** any member of the fts5_tokenizer struct to NULL leads to undefined +** behaviour. The structure methods are expected to function as follows: +** +** xCreate: +** This function is used to allocate and inititalize a tokenizer instance. +** A tokenizer instance is required to actually tokenize text. +** +** The first argument passed to this function is a copy of the (void*) +** pointer provided by the application when the fts5_tokenizer object +** was registered with FTS5 (the third argument to xCreateTokenizer()). +** The second and third arguments are an array of nul-terminated strings +** containing the tokenizer arguments, if any, specified following the +** tokenizer name as part of the CREATE VIRTUAL TABLE statement used +** to create the FTS5 table. +** +** The final argument is an output variable. If successful, (*ppOut) +** should be set to point to the new tokenizer handle and SQLITE_OK +** returned. If an error occurs, some value other than SQLITE_OK should +** be returned. 
In this case, fts5 assumes that the final value of *ppOut +** is undefined. +** +** xDelete: +** This function is invoked to delete a tokenizer handle previously +** allocated using xCreate(). Fts5 guarantees that this function will +** be invoked exactly once for each successful call to xCreate(). +** +** xTokenize: +** This function is expected to tokenize the nText byte string indicated +** by argument pText. pText may or may not be nul-terminated. The first +** argument passed to this function is a pointer to an Fts5Tokenizer object +** returned by an earlier call to xCreate(). +** +** The second argument indicates the reason that FTS5 is requesting +** tokenization of the supplied text. This is always one of the following +** four values: +** +** <ul><li> <b>FTS5_TOKENIZE_DOCUMENT</b> - A document is being inserted into +** or removed from the FTS table. The tokenizer is being invoked to +** determine the set of tokens to add to (or delete from) the +** FTS index. +** +** <li> <b>FTS5_TOKENIZE_QUERY</b> - A MATCH query is being executed +** against the FTS index. The tokenizer is being called to tokenize +** a bareword or quoted string specified as part of the query. +** +** <li> <b>(FTS5_TOKENIZE_QUERY | FTS5_TOKENIZE_PREFIX)</b> - Same as +** FTS5_TOKENIZE_QUERY, except that the bareword or quoted string is +** followed by a "*" character, indicating that the last token +** returned by the tokenizer will be treated as a token prefix. +** +** <li> <b>FTS5_TOKENIZE_AUX</b> - The tokenizer is being invoked to +** satisfy an fts5_api.xTokenize() request made by an auxiliary +** function. Or an fts5_api.xColumnSize() request made by the same +** on a columnsize=0 database. +** </ul> +** +** For each token in the input string, the supplied callback xToken() must +** be invoked. The first argument to it should be a copy of the pointer +** passed as the second argument to xTokenize(). The third and fourth +** arguments are a pointer to a buffer containing the token text, and the +** size of the token in bytes. The 4th and 5th arguments are the byte offsets +** of the first byte of and first byte immediately following the text from +** which the token is derived within the input. +** +** The second argument passed to the xToken() callback ("tflags") should +** normally be set to 0. The exception is if the tokenizer supports +** synonyms. In this case see the discussion below for details. +** +** FTS5 assumes the xToken() callback is invoked for each token in the +** order that they occur within the input text. +** +** If an xToken() callback returns any value other than SQLITE_OK, then +** the tokenization should be abandoned and the xTokenize() method should +** immediately return a copy of the xToken() return value. Or, if the +** input buffer is exhausted, xTokenize() should return SQLITE_OK. Finally, +** if an error occurs with the xTokenize() implementation itself, it +** may abandon the tokenization and return any error code other than +** SQLITE_OK or SQLITE_DONE. +** +** SYNONYM SUPPORT +** +** Custom tokenizers may also support synonyms. Consider a case in which a +** user wishes to query for a phrase such as "first place". Using the +** built-in tokenizers, the FTS5 query 'first + place' will match instances +** of "first place" within the document set, but not alternative forms +** such as "1st place". In some applications, it would be better to match +** all instances of "first place" or "1st place" regardless of which form +** the user specified in the MATCH query text. 
+** +** There are several ways to approach this in FTS5: +** +** <ol><li> By mapping all synonyms to a single token. In this case, the +** In the above example, this means that the tokenizer returns the +** same token for inputs "first" and "1st". Say that token is in +** fact "first", so that when the user inserts the document "I won +** 1st place" entries are added to the index for tokens "i", "won", +** "first" and "place". If the user then queries for '1st + place', +** the tokenizer substitutes "first" for "1st" and the query works +** as expected. +** +** <li> By adding multiple synonyms for a single term to the FTS index. +** In this case, when tokenizing query text, the tokenizer may +** provide multiple synonyms for a single term within the document. +** FTS5 then queries the index for each synonym individually. For +** example, faced with the query: +** +** <codeblock> +** ... MATCH 'first place'</codeblock> +** +** the tokenizer offers both "1st" and "first" as synonyms for the +** first token in the MATCH query and FTS5 effectively runs a query +** similar to: +** +** <codeblock> +** ... MATCH '(first OR 1st) place'</codeblock> +** +** except that, for the purposes of auxiliary functions, the query +** still appears to contain just two phrases - "(first OR 1st)" +** being treated as a single phrase. +** +** <li> By adding multiple synonyms for a single term to the FTS index. +** Using this method, when tokenizing document text, the tokenizer +** provides multiple synonyms for each token. So that when a +** document such as "I won first place" is tokenized, entries are +** added to the FTS index for "i", "won", "first", "1st" and +** "place". +** +** This way, even if the tokenizer does not provide synonyms +** when tokenizing query text (it should not - to do would be +** inefficient), it doesn't matter if the user queries for +** 'first + place' or '1st + place', as there are entires in the +** FTS index corresponding to both forms of the first token. +** </ol> +** +** Whether it is parsing document or query text, any call to xToken that +** specifies a <i>tflags</i> argument with the FTS5_TOKEN_COLOCATED bit +** is considered to supply a synonym for the previous token. For example, +** when parsing the document "I won first place", a tokenizer that supports +** synonyms would call xToken() 5 times, as follows: +** +** <codeblock> +** xToken(pCtx, 0, "i", 1, 0, 1); +** xToken(pCtx, 0, "won", 3, 2, 5); +** xToken(pCtx, 0, "first", 5, 6, 11); +** xToken(pCtx, FTS5_TOKEN_COLOCATED, "1st", 3, 6, 11); +** xToken(pCtx, 0, "place", 5, 12, 17); +**</codeblock> +** +** It is an error to specify the FTS5_TOKEN_COLOCATED flag the first time +** xToken() is called. Multiple synonyms may be specified for a single token +** by making multiple calls to xToken(FTS5_TOKEN_COLOCATED) in sequence. +** There is no limit to the number of synonyms that may be provided for a +** single token. +** +** In many cases, method (1) above is the best approach. It does not add +** extra data to the FTS index or require FTS5 to query for multiple terms, +** so it is efficient in terms of disk space and query speed. However, it +** does not support prefix queries very well. If, as suggested above, the +** token "first" is subsituted for "1st" by the tokenizer, then the query: +** +** <codeblock> +** ... MATCH '1s*'</codeblock> +** +** will not match documents that contain the token "1st" (as the tokenizer +** will probably not map "1s" to any prefix of "first"). 
+** +** For full prefix support, method (3) may be preferred. In this case, +** because the index contains entries for both "first" and "1st", prefix +** queries such as 'fi*' or '1s*' will match correctly. However, because +** extra entries are added to the FTS index, this method uses more space +** within the database. +** +** Method (2) offers a midpoint between (1) and (3). Using this method, +** a query such as '1s*' will match documents that contain the literal +** token "1st", but not "first" (assuming the tokenizer is not able to +** provide synonyms for prefixes). However, a non-prefix query like '1st' +** will match against "1st" and "first". This method does not require +** extra disk space, as no extra entries are added to the FTS index. +** On the other hand, it may require more CPU cycles to run MATCH queries, +** as separate queries of the FTS index are required for each synonym. +** +** When using methods (2) or (3), it is important that the tokenizer only +** provide synonyms when tokenizing document text (method (2)) or query +** text (method (3)), not both. Doing so will not cause any errors, but is +** inefficient. +*/ +typedef struct Fts5Tokenizer Fts5Tokenizer; +typedef struct fts5_tokenizer fts5_tokenizer; +struct fts5_tokenizer { + int (*xCreate)(void*, const char **azArg, int nArg, Fts5Tokenizer **ppOut); + void (*xDelete)(Fts5Tokenizer*); + int (*xTokenize)(Fts5Tokenizer*, + void *pCtx, + int flags, /* Mask of FTS5_TOKENIZE_* flags */ + const char *pText, int nText, + int (*xToken)( + void *pCtx, /* Copy of 2nd argument to xTokenize() */ + int tflags, /* Mask of FTS5_TOKEN_* flags */ + const char *pToken, /* Pointer to buffer containing token */ + int nToken, /* Size of token in bytes */ + int iStart, /* Byte offset of token within input text */ + int iEnd /* Byte offset of end of token within input text */ + ) + ); +}; + +/* Flags that may be passed as the third argument to xTokenize() */ +#define FTS5_TOKENIZE_QUERY 0x0001 +#define FTS5_TOKENIZE_PREFIX 0x0002 +#define FTS5_TOKENIZE_DOCUMENT 0x0004 +#define FTS5_TOKENIZE_AUX 0x0008 + +/* Flags that may be passed by the tokenizer implementation back to FTS5 +** as the third argument to the supplied xToken callback. */ +#define FTS5_TOKEN_COLOCATED 0x0001 /* Same position as prev. 
token */ + +/* +** END OF CUSTOM TOKENIZERS +*************************************************************************/ + +/************************************************************************* +** FTS5 EXTENSION REGISTRATION API +*/ +typedef struct fts5_api fts5_api; +struct fts5_api { + int iVersion; /* Currently always set to 2 */ + + /* Create a new tokenizer */ + int (*xCreateTokenizer)( + fts5_api *pApi, + const char *zName, + void *pContext, + fts5_tokenizer *pTokenizer, + void (*xDestroy)(void*) + ); + + /* Find an existing tokenizer */ + int (*xFindTokenizer)( + fts5_api *pApi, + const char *zName, + void **ppContext, + fts5_tokenizer *pTokenizer + ); + + /* Create a new auxiliary function */ + int (*xCreateFunction)( + fts5_api *pApi, + const char *zName, + void *pContext, + fts5_extension_function xFunction, + void (*xDestroy)(void*) + ); +}; + +/* +** END OF REGISTRATION API +*************************************************************************/ + +#if 0 +} /* end of the 'extern "C"' block */ +#endif + +#endif /* _FTS5_H */ + + + +/************** End of sqlite3.h *********************************************/ +/************** Continuing where we left off in sqliteInt.h ******************/ + /* ** Include the configuration header output by 'configure' if we're using the ** autoconf-based build */ #ifdef _HAVE_SQLITE_CONFIG_H @@ -178,23 +9015,33 @@ #ifndef SQLITE_MAX_FUNCTION_ARG # define SQLITE_MAX_FUNCTION_ARG 127 #endif /* -** The maximum number of in-memory pages to use for the main database -** table and for temporary tables. The SQLITE_DEFAULT_CACHE_SIZE +** The suggested maximum number of in-memory pages to use for +** the main database table and for temporary tables. +** +** IMPLEMENTATION-OF: R-31093-59126 The default suggested cache size +** is 2000 pages. +** IMPLEMENTATION-OF: R-48205-43578 The default suggested cache size can be +** altered using the SQLITE_DEFAULT_CACHE_SIZE compile-time options. */ #ifndef SQLITE_DEFAULT_CACHE_SIZE # define SQLITE_DEFAULT_CACHE_SIZE 2000 #endif -#ifndef SQLITE_DEFAULT_TEMP_CACHE_SIZE -# define SQLITE_DEFAULT_TEMP_CACHE_SIZE 500 + +/* +** The default number of frames to accumulate in the log file before +** checkpointing the database in WAL mode. +*/ +#ifndef SQLITE_DEFAULT_WAL_AUTOCHECKPOINT +# define SQLITE_DEFAULT_WAL_AUTOCHECKPOINT 1000 #endif /* ** The maximum number of attached databases. This must be between 0 -** and 30. The upper bound on 30 is because a 32-bit integer bitmap +** and 62. The upper bound on 62 is because a 64-bit integer bitmap ** is used internally to track attached databases. */ #ifndef SQLITE_MAX_ATTACHED # define SQLITE_MAX_ATTACHED 10 #endif @@ -205,24 +9052,25 @@ */ #ifndef SQLITE_MAX_VARIABLE_NUMBER # define SQLITE_MAX_VARIABLE_NUMBER 999 #endif -/* Maximum page size. The upper bound on this value is 32768. This a limit -** imposed by the necessity of storing the value in a 2-byte unsigned integer -** and the fact that the page size must be a power of 2. +/* Maximum page size. The upper bound on this value is 65536. This a limit +** imposed by the use of 16-bit offsets within each page. ** -** If this limit is changed, then the compiled library is technically -** incompatible with an SQLite library compiled with a different limit. If -** a process operating on a database with a page-size of 65536 bytes -** crashes, then an instance of SQLite compiled with the default page-size -** limit will not be able to rollback the aborted transaction. This could -** lead to database corruption. 
+** Earlier versions of SQLite allowed the user to change this value at +** compile time. This is no longer permitted, on the grounds that it creates +** a library that is technically incompatible with an SQLite library +** compiled with a different limit. If a process operating on a database +** with a page-size of 65536 bytes crashes, then an instance of SQLite +** compiled with the default page-size limit will not be able to rollback +** the aborted transaction. This could lead to database corruption. */ -#ifndef SQLITE_MAX_PAGE_SIZE -# define SQLITE_MAX_PAGE_SIZE 32768 +#ifdef SQLITE_MAX_PAGE_SIZE +# undef SQLITE_MAX_PAGE_SIZE #endif +#define SQLITE_MAX_PAGE_SIZE 65536 /* ** The default size of a database page. */ @@ -290,15 +9138,10 @@ #pragma warn -aus /* Assigned value is never used */ #pragma warn -csu /* Comparing signed and unsigned */ #pragma warn -spa /* Suspicious pointer arithmetic */ #endif -/* Needed for various definitions... */ -#ifndef _GNU_SOURCE -# define _GNU_SOURCE -#endif - /* ** Include standard header files as necessary */ #ifdef HAVE_STDINT_H #include <stdint.h> @@ -305,17 +9148,10 @@ #endif #ifdef HAVE_INTTYPES_H #include <inttypes.h> #endif -/* -** The number of samples of an index that SQLite takes in order to -** construct a histogram of the table content when running ANALYZE -** and with SQLITE_ENABLE_STAT2 -*/ -#define SQLITE_INDEX_SAMPLES 10 - /* ** The following macros are used to cast pointers to integers and ** integers to pointers. The way you do this varies from one compiler ** to the next, so we have developed the following set of #if statements ** to generate appropriate macros for a wide range of compilers. @@ -343,27 +9179,85 @@ # define SQLITE_INT_TO_PTR(X) ((void*)(X)) # define SQLITE_PTR_TO_INT(X) ((int)(X)) #endif /* -** The SQLITE_THREADSAFE macro must be defined as either 0 or 1. +** The SQLITE_WITHIN(P,S,E) macro checks to see if pointer P points to +** something between S (inclusive) and E (exclusive). +** +** In other words, S is a buffer and E is a pointer to the first byte after +** the end of buffer S. This macro returns true if P points to something +** contained within the buffer S. +*/ +#if defined(HAVE_STDINT_H) +# define SQLITE_WITHIN(P,S,E) \ + ((uintptr_t)(P)>=(uintptr_t)(S) && (uintptr_t)(P)<(uintptr_t)(E)) +#else +# define SQLITE_WITHIN(P,S,E) ((P)>=(S) && (P)<(E)) +#endif + +/* +** A macro to hint to the compiler that a function should not be +** inlined. +*/ +#if defined(__GNUC__) +# define SQLITE_NOINLINE __attribute__((noinline)) +#elif defined(_MSC_VER) && _MSC_VER>=1310 +# define SQLITE_NOINLINE __declspec(noinline) +#else +# define SQLITE_NOINLINE +#endif + +/* +** Make sure that the compiler intrinsics we desire are enabled when +** compiling with an appropriate version of MSVC unless prevented by +** the SQLITE_DISABLE_INTRINSIC define. +*/ +#if !defined(SQLITE_DISABLE_INTRINSIC) +# if defined(_MSC_VER) && _MSC_VER>=1300 +# if !defined(_WIN32_WCE) +# include <intrin.h> +# pragma intrinsic(_byteswap_ushort) +# pragma intrinsic(_byteswap_ulong) +# pragma intrinsic(_ReadWriteBarrier) +# else +# include <cmnintrin.h> +# endif +# endif +#endif + +/* +** The SQLITE_THREADSAFE macro must be defined as 0, 1, or 2. +** 0 means mutexes are permanently disable and the library is never +** threadsafe. 1 means the library is serialized which is the highest +** level of threadsafety. 
2 means the library is multithreaded - multiple +** threads can use SQLite as long as no two threads try to use the same +** database connection at the same time. +** ** Older versions of SQLite used an optional THREADSAFE macro. -** We support that for legacy +** We support that for legacy. */ #if !defined(SQLITE_THREADSAFE) -#if defined(THREADSAFE) -# define SQLITE_THREADSAFE THREADSAFE -#else -# define SQLITE_THREADSAFE 1 -#endif +# if defined(THREADSAFE) +# define SQLITE_THREADSAFE THREADSAFE +# else +# define SQLITE_THREADSAFE 1 /* IMP: R-07272-22309 */ +# endif #endif /* -** The SQLITE_DEFAULT_MEMSTATUS macro must be defined as either 0 or 1. -** It determines whether or not the features related to -** SQLITE_CONFIG_MEMSTATUS are available by default or not. This value can -** be overridden at runtime using the sqlite3_config() API. +** Powersafe overwrite is on by default. But can be turned off using +** the -DSQLITE_POWERSAFE_OVERWRITE=0 command-line option. +*/ +#ifndef SQLITE_POWERSAFE_OVERWRITE +# define SQLITE_POWERSAFE_OVERWRITE 1 +#endif + +/* +** EVIDENCE-OF: R-25715-37072 Memory allocation statistics are enabled by +** default unless SQLite is compiled with SQLITE_DEFAULT_MEMSTATUS=0 in +** which case memory allocation statistics are disabled by default. */ #if !defined(SQLITE_DEFAULT_MEMSTATUS) # define SQLITE_DEFAULT_MEMSTATUS 1 #endif @@ -370,23 +9264,35 @@ /* ** Exactly one of the following macros must be defined in order to ** specify which memory allocation subsystem to use. ** ** SQLITE_SYSTEM_MALLOC // Use normal system malloc() +** SQLITE_WIN32_MALLOC // Use Win32 native heap API +** SQLITE_ZERO_MALLOC // Use a stub allocator that always fails ** SQLITE_MEMDEBUG // Debugging version of system malloc() ** -** (Historical note: There used to be several other options, but we've -** pared it down to just these two.) +** On Windows, if the SQLITE_WIN32_MALLOC_VALIDATE macro is defined and the +** assert() macro is enabled, each call into the Win32 native heap subsystem +** will cause HeapValidate to be called. If heap validation should fail, an +** assertion will be triggered. ** ** If none of the above are defined, then set SQLITE_SYSTEM_MALLOC as ** the default. */ -#if defined(SQLITE_SYSTEM_MALLOC)+defined(SQLITE_MEMDEBUG)>1 -# error "At most one of the following compile-time configuration options\ - is allows: SQLITE_SYSTEM_MALLOC, SQLITE_MEMDEBUG" +#if defined(SQLITE_SYSTEM_MALLOC) \ + + defined(SQLITE_WIN32_MALLOC) \ + + defined(SQLITE_ZERO_MALLOC) \ + + defined(SQLITE_MEMDEBUG)>1 +# error "Two or more of the following compile-time configuration options\ + are defined but at most one is allowed:\ + SQLITE_SYSTEM_MALLOC, SQLITE_WIN32_MALLOC, SQLITE_MEMDEBUG,\ + SQLITE_ZERO_MALLOC" #endif -#if defined(SQLITE_SYSTEM_MALLOC)+defined(SQLITE_MEMDEBUG)==0 +#if defined(SQLITE_SYSTEM_MALLOC) \ + + defined(SQLITE_WIN32_MALLOC) \ + + defined(SQLITE_ZERO_MALLOC) \ + + defined(SQLITE_MEMDEBUG)==0 # define SQLITE_SYSTEM_MALLOC 1 #endif /* ** If SQLITE_MALLOC_SOFT_LIMIT is not zero, then try to keep the @@ -396,41 +9302,41 @@ # define SQLITE_MALLOC_SOFT_LIMIT 1024 #endif /* ** We need to define _XOPEN_SOURCE as follows in order to enable -** recursive mutexes on most Unix systems. But Mac OS X is different. -** The _XOPEN_SOURCE define causes problems for Mac OS X we are told, -** so it is omitted there. See ticket #2673. -** -** Later we learn that _XOPEN_SOURCE is poorly or incorrectly -** implemented on some systems. 
So we avoid defining it at all -** if it is already defined or if it is unneeded because we are -** not doing a threadsafe build. Ticket #2681. -** -** See also ticket #2741. -*/ -#if !defined(_XOPEN_SOURCE) && !defined(__DARWIN__) && !defined(__APPLE__) && SQLITE_THREADSAFE -# define _XOPEN_SOURCE 500 /* Needed to enable pthread recursive mutexes */ -#endif - -/* -** The TCL headers are only needed when compiling the TCL bindings. -*/ -#if defined(SQLITE_TCL) || defined(TCLSH) -# include <tcl.h> -#endif - -/* -** Many people are failing to set -DNDEBUG=1 when compiling SQLite. -** Setting NDEBUG makes the code smaller and run faster. So the following -** lines are added to automatically set NDEBUG unless the -DSQLITE_DEBUG=1 -** option is set. Thus NDEBUG becomes an opt-in rather than an opt-out +** recursive mutexes on most Unix systems and fchmod() on OpenBSD. +** But _XOPEN_SOURCE define causes problems for Mac OS X, so omit +** it. +*/ +#if !defined(_XOPEN_SOURCE) && !defined(__DARWIN__) && !defined(__APPLE__) +# define _XOPEN_SOURCE 600 +#endif + +/* +** NDEBUG and SQLITE_DEBUG are opposites. It should always be true that +** defined(NDEBUG)==!defined(SQLITE_DEBUG). If this is not currently true, +** make it true by defining or undefining NDEBUG. +** +** Setting NDEBUG makes the code smaller and faster by disabling the +** assert() statements in the code. So we want the default action +** to be for NDEBUG to be set and NDEBUG to be undefined only if SQLITE_DEBUG +** is set. Thus NDEBUG becomes an opt-in rather than an opt-out ** feature. */ #if !defined(NDEBUG) && !defined(SQLITE_DEBUG) # define NDEBUG 1 +#endif +#if defined(NDEBUG) && defined(SQLITE_DEBUG) +# undef NDEBUG +#endif + +/* +** Enable SQLITE_ENABLE_EXPLAIN_COMMENTS if SQLITE_DEBUG is turned on. +*/ +#if !defined(SQLITE_ENABLE_EXPLAIN_COMMENTS) && defined(SQLITE_DEBUG) +# define SQLITE_ENABLE_EXPLAIN_COMMENTS 1 #endif /* ** The testcase() macro is used to aid in coverage testing. When ** doing coverage testing, the condition inside the argument to @@ -487,11 +9393,11 @@ ** hint of unplanned behavior. ** ** In other words, ALWAYS and NEVER are added for defensive code. ** ** When doing coverage testing ALWAYS and NEVER are hard-coded to -** be true and false so that the unreachable code then specify will +** be true and false so that the unreachable code they specify will ** not be counted as untested code. */ #if defined(SQLITE_COVERAGE_TEST) # define ALWAYS(X) (1) # define NEVER(X) (0) @@ -501,5767 +9407,68 @@ #else # define ALWAYS(X) (X) # define NEVER(X) (X) #endif +/* +** Some malloc failures are only possible if SQLITE_TEST_REALLOC_STRESS is +** defined. We need to defend against those failures when testing with +** SQLITE_TEST_REALLOC_STRESS, but we don't want the unreachable branches +** during a normal build. The following macro can be used to disable tests +** that are always false except when SQLITE_TEST_REALLOC_STRESS is set. +*/ +#if defined(SQLITE_TEST_REALLOC_STRESS) +# define ONLY_IF_REALLOC_STRESS(X) (X) +#elif !defined(NDEBUG) +# define ONLY_IF_REALLOC_STRESS(X) ((X)?(assert(0),1):0) +#else +# define ONLY_IF_REALLOC_STRESS(X) (0) +#endif + +/* +** Declarations used for tracing the operating system interfaces. 
+*/ +#if defined(SQLITE_FORCE_OS_TRACE) || defined(SQLITE_TEST) || \ + (defined(SQLITE_DEBUG) && SQLITE_OS_WIN) + extern int sqlite3OSTrace; +# define OSTRACE(X) if( sqlite3OSTrace ) sqlite3DebugPrintf X +# define SQLITE_HAVE_OS_TRACE +#else +# define OSTRACE(X) +# undef SQLITE_HAVE_OS_TRACE +#endif + +/* +** Is the sqlite3ErrName() function needed in the build? Currently, +** it is needed by "mutex_w32.c" (when debugging), "os_win.c" (when +** OSTRACE is enabled), and by several "test*.c" files (which are +** compiled using SQLITE_TEST). +*/ +#if defined(SQLITE_HAVE_OS_TRACE) || defined(SQLITE_TEST) || \ + (defined(SQLITE_DEBUG) && SQLITE_OS_WIN) +# define SQLITE_NEED_ERR_NAME +#else +# undef SQLITE_NEED_ERR_NAME +#endif + +/* +** Return true (non-zero) if the input is an integer that is too large +** to fit in 32-bits. This macro is used inside of various testcase() +** macros to verify that we have tested SQLite for large-file support. +*/ +#define IS_BIG_INT(X) (((X)&~(i64)0xffffffff)!=0) + /* ** The macro unlikely() is a hint that surrounds a boolean ** expression that is usually false. Macro likely() surrounds -** a boolean expression that is usually true. GCC is able to -** use these hints to generate better code, sometimes. -*/ -#if defined(__GNUC__) && 0 -# define likely(X) __builtin_expect((X),1) -# define unlikely(X) __builtin_expect((X),0) -#else -# define likely(X) !!(X) -# define unlikely(X) !!(X) -#endif - -/************** Include sqlite3.h in the middle of sqliteInt.h ***************/ -/************** Begin file sqlite3.h *****************************************/ -/* -** 2001 September 15 -** -** The author disclaims copyright to this source code. In place of -** a legal notice, here is a blessing: -** -** May you do good and not evil. -** May you find forgiveness for yourself and forgive others. -** May you share freely, never taking more than you give. -** -************************************************************************* -** This header file defines the interface that the SQLite library -** presents to client programs. If a C-function, structure, datatype, -** or constant definition does not appear in this file, then it is -** not a published API of SQLite, is subject to change without -** notice, and should not be referenced by programs that use SQLite. -** -** Some of the definitions that are in this file are marked as -** "experimental". Experimental interfaces are normally new -** features recently added to SQLite. We do not anticipate changes -** to experimental interfaces but reserve the right to make minor changes -** if experience from use "in the wild" suggest such changes are prudent. -** -** The official C-language API documentation for SQLite is derived -** from comments in this file. This file is the authoritative source -** on how SQLite interfaces are suppose to operate. -** -** The name of this file under configuration management is "sqlite.h.in". -** The makefile makes some minor changes to this file (such as inserting -** the version number) and changes its name to "sqlite3.h" as -** part of the build process. -*/ -#ifndef _SQLITE3_H_ -#define _SQLITE3_H_ -#include <stdarg.h> /* Needed for the definition of va_list */ - -/* -** Make sure we can call this stuff from C++. 
-*/ -#if 0 -extern "C" { -#endif - - -/* -** Add the ability to override 'extern' -*/ -#ifndef SQLITE_EXTERN -# define SQLITE_EXTERN extern -#endif - -#ifndef SQLITE_API -# define SQLITE_API -#endif - - -/* -** These no-op macros are used in front of interfaces to mark those -** interfaces as either deprecated or experimental. New applications -** should not use deprecated interfaces - they are support for backwards -** compatibility only. Application writers should be aware that -** experimental interfaces are subject to change in point releases. -** -** These macros used to resolve to various kinds of compiler magic that -** would generate warning messages when they were used. But that -** compiler magic ended up generating such a flurry of bug reports -** that we have taken it all out and gone back to using simple -** noop macros. -*/ -#define SQLITE_DEPRECATED -#define SQLITE_EXPERIMENTAL - -/* -** Ensure these symbols were not defined by some previous header file. -*/ -#ifdef SQLITE_VERSION -# undef SQLITE_VERSION -#endif -#ifdef SQLITE_VERSION_NUMBER -# undef SQLITE_VERSION_NUMBER -#endif - -/* -** CAPI3REF: Compile-Time Library Version Numbers -** -** ^(The [SQLITE_VERSION] C preprocessor macro in the sqlite3.h header -** evaluates to a string literal that is the SQLite version in the -** format "X.Y.Z" where X is the major version number (always 3 for -** SQLite3) and Y is the minor version number and Z is the release number.)^ -** ^(The [SQLITE_VERSION_NUMBER] C preprocessor macro resolves to an integer -** with the value (X*1000000 + Y*1000 + Z) where X, Y, and Z are the same -** numbers used in [SQLITE_VERSION].)^ -** The SQLITE_VERSION_NUMBER for any given release of SQLite will also -** be larger than the release from which it is derived. Either Y will -** be held constant and Z will be incremented or else Y will be incremented -** and Z will be reset to zero. -** -** Since version 3.6.18, SQLite source code has been stored in the -** <a href="http://www.fossil-scm.org/">Fossil configuration management -** system</a>. ^The SQLITE_SOURCE_ID macro evalutes to -** a string which identifies a particular check-in of SQLite -** within its configuration management system. ^The SQLITE_SOURCE_ID -** string contains the date and time of the check-in (UTC) and an SHA1 -** hash of the entire source tree. -** -** See also: [sqlite3_libversion()], -** [sqlite3_libversion_number()], [sqlite3_sourceid()], -** [sqlite_version()] and [sqlite_source_id()]. -*/ -#define SQLITE_VERSION "3.6.23" -#define SQLITE_VERSION_NUMBER 3006023 -#define SQLITE_SOURCE_ID "2010-04-15 23:24:29 f96782b389b5b97b488dc5814f7082e0393f64cd" - -/* -** CAPI3REF: Run-Time Library Version Numbers -** KEYWORDS: sqlite3_version, sqlite3_sourceid -** -** These interfaces provide the same information as the [SQLITE_VERSION], -** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros -** but are associated with the library instead of the header file. ^(Cautious -** programmers might include assert() statements in their application to -** verify that values returned by these interfaces match the macros in -** the header, and thus insure that the application is -** compiled with matching library and header files. 
-** -** <blockquote><pre> -** assert( sqlite3_libversion_number()==SQLITE_VERSION_NUMBER ); -** assert( strcmp(sqlite3_sourceid(),SQLITE_SOURCE_ID)==0 ); -** assert( strcmp(sqlite3_libversion(),SQLITE_VERSION)==0 ); -** </pre></blockquote>)^ -** -** ^The sqlite3_version[] string constant contains the text of [SQLITE_VERSION] -** macro. ^The sqlite3_libversion() function returns a pointer to the -** to the sqlite3_version[] string constant. The sqlite3_libversion() -** function is provided for use in DLLs since DLL users usually do not have -** direct access to string constants within the DLL. ^The -** sqlite3_libversion_number() function returns an integer equal to -** [SQLITE_VERSION_NUMBER]. ^The sqlite3_sourceid() function returns -** a pointer to a string constant whose value is the same as the -** [SQLITE_SOURCE_ID] C preprocessor macro. -** -** See also: [sqlite_version()] and [sqlite_source_id()]. -*/ -SQLITE_API const char sqlite3_version[] = SQLITE_VERSION; -SQLITE_API const char *sqlite3_libversion(void); -SQLITE_API const char *sqlite3_sourceid(void); -SQLITE_API int sqlite3_libversion_number(void); - -/* -** CAPI3REF: Run-Time Library Compilation Options Diagnostics -** -** ^The sqlite3_compileoption_used() function returns 0 or 1 -** indicating whether the specified option was defined at -** compile time. ^The SQLITE_ prefix may be omitted from the -** option name passed to sqlite3_compileoption_used(). -** -** ^The sqlite3_compileoption_get() function allows interating -** over the list of options that were defined at compile time by -** returning the N-th compile time option string. ^If N is out of range, -** sqlite3_compileoption_get() returns a NULL pointer. ^The SQLITE_ -** prefix is omitted from any strings returned by -** sqlite3_compileoption_get(). -** -** ^Support for the diagnostic functions sqlite3_compileoption_used() -** and sqlite3_compileoption_get() may be omitted by specifing the -** [SQLITE_OMIT_COMPILEOPTION_DIAGS] option at compile time. -** -** See also: SQL functions [sqlite_compileoption_used()] and -** [sqlite_compileoption_get()] and the [compile_options pragma]. -*/ -#ifndef SQLITE_OMIT_COMPILEOPTION_DIAGS -SQLITE_API int sqlite3_compileoption_used(const char *zOptName); -SQLITE_API const char *sqlite3_compileoption_get(int N); -#endif - -/* -** CAPI3REF: Test To See If The Library Is Threadsafe -** -** ^The sqlite3_threadsafe() function returns zero if and only if -** SQLite was compiled mutexing code omitted due to the -** [SQLITE_THREADSAFE] compile-time option being set to 0. -** -** SQLite can be compiled with or without mutexes. When -** the [SQLITE_THREADSAFE] C preprocessor macro is 1 or 2, mutexes -** are enabled and SQLite is threadsafe. When the -** [SQLITE_THREADSAFE] macro is 0, -** the mutexes are omitted. Without the mutexes, it is not safe -** to use SQLite concurrently from more than one thread. -** -** Enabling mutexes incurs a measurable performance penalty. -** So if speed is of utmost importance, it makes sense to disable -** the mutexes. But for maximum safety, mutexes should be enabled. -** ^The default behavior is for mutexes to be enabled. -** -** This interface can be used by an application to make sure that the -** version of SQLite that it is linking against was compiled with -** the desired setting of the [SQLITE_THREADSAFE] macro. -** -** This interface only reports on the compile-time mutex setting -** of the [SQLITE_THREADSAFE] flag. 
If SQLite is compiled with -** SQLITE_THREADSAFE=1 or =2 then mutexes are enabled by default but -** can be fully or partially disabled using a call to [sqlite3_config()] -** with the verbs [SQLITE_CONFIG_SINGLETHREAD], [SQLITE_CONFIG_MULTITHREAD], -** or [SQLITE_CONFIG_MUTEX]. ^(The return value of the -** sqlite3_threadsafe() function shows only the compile-time setting of -** thread safety, not any run-time changes to that setting made by -** sqlite3_config(). In other words, the return value from sqlite3_threadsafe() -** is unchanged by calls to sqlite3_config().)^ -** -** See the [threading mode] documentation for additional information. -*/ -SQLITE_API int sqlite3_threadsafe(void); - -/* -** CAPI3REF: Database Connection Handle -** KEYWORDS: {database connection} {database connections} -** -** Each open SQLite database is represented by a pointer to an instance of -** the opaque structure named "sqlite3". It is useful to think of an sqlite3 -** pointer as an object. The [sqlite3_open()], [sqlite3_open16()], and -** [sqlite3_open_v2()] interfaces are its constructors, and [sqlite3_close()] -** is its destructor. There are many other interfaces (such as -** [sqlite3_prepare_v2()], [sqlite3_create_function()], and -** [sqlite3_busy_timeout()] to name but three) that are methods on an -** sqlite3 object. -*/ -typedef struct sqlite3 sqlite3; - -/* -** CAPI3REF: 64-Bit Integer Types -** KEYWORDS: sqlite_int64 sqlite_uint64 -** -** Because there is no cross-platform way to specify 64-bit integer types -** SQLite includes typedefs for 64-bit signed and unsigned integers. -** -** The sqlite3_int64 and sqlite3_uint64 are the preferred type definitions. -** The sqlite_int64 and sqlite_uint64 types are supported for backwards -** compatibility only. -** -** ^The sqlite3_int64 and sqlite_int64 types can store integer values -** between -9223372036854775808 and +9223372036854775807 inclusive. ^The -** sqlite3_uint64 and sqlite_uint64 types can store integer values -** between 0 and +18446744073709551615 inclusive. -*/ -#ifdef SQLITE_INT64_TYPE - typedef SQLITE_INT64_TYPE sqlite_int64; - typedef unsigned SQLITE_INT64_TYPE sqlite_uint64; -#elif defined(_MSC_VER) || defined(__BORLANDC__) - typedef __int64 sqlite_int64; - typedef unsigned __int64 sqlite_uint64; -#else - typedef long long int sqlite_int64; - typedef unsigned long long int sqlite_uint64; -#endif -typedef sqlite_int64 sqlite3_int64; -typedef sqlite_uint64 sqlite3_uint64; - -/* -** If compiling for a processor that lacks floating point support, -** substitute integer for floating-point. -*/ -#ifdef SQLITE_OMIT_FLOATING_POINT -# define double sqlite3_int64 -#endif - -/* -** CAPI3REF: Closing A Database Connection -** -** ^The sqlite3_close() routine is the destructor for the [sqlite3] object. -** ^Calls to sqlite3_close() return SQLITE_OK if the [sqlite3] object is -** successfullly destroyed and all associated resources are deallocated. -** -** Applications must [sqlite3_finalize | finalize] all [prepared statements] -** and [sqlite3_blob_close | close] all [BLOB handles] associated with -** the [sqlite3] object prior to attempting to close the object. ^If -** sqlite3_close() is called on a [database connection] that still has -** outstanding [prepared statements] or [BLOB handles], then it returns -** SQLITE_BUSY. -** -** ^If [sqlite3_close()] is invoked while a transaction is open, -** the transaction is automatically rolled back. 
-** -** The C parameter to [sqlite3_close(C)] must be either a NULL -** pointer or an [sqlite3] object pointer obtained -** from [sqlite3_open()], [sqlite3_open16()], or -** [sqlite3_open_v2()], and not previously closed. -** ^Calling sqlite3_close() with a NULL pointer argument is a -** harmless no-op. -*/ -SQLITE_API int sqlite3_close(sqlite3 *); - -/* -** The type for a callback function. -** This is legacy and deprecated. It is included for historical -** compatibility and is not documented. -*/ -typedef int (*sqlite3_callback)(void*,int,char**, char**); - -/* -** CAPI3REF: One-Step Query Execution Interface -** -** The sqlite3_exec() interface is a convenience wrapper around -** [sqlite3_prepare_v2()], [sqlite3_step()], and [sqlite3_finalize()], -** that allows an application to run multiple statements of SQL -** without having to use a lot of C code. -** -** ^The sqlite3_exec() interface runs zero or more UTF-8 encoded, -** semicolon-separate SQL statements passed into its 2nd argument, -** in the context of the [database connection] passed in as its 1st -** argument. ^If the callback function of the 3rd argument to -** sqlite3_exec() is not NULL, then it is invoked for each result row -** coming out of the evaluated SQL statements. ^The 4th argument to -** to sqlite3_exec() is relayed through to the 1st argument of each -** callback invocation. ^If the callback pointer to sqlite3_exec() -** is NULL, then no callback is ever invoked and result rows are -** ignored. -** -** ^If an error occurs while evaluating the SQL statements passed into -** sqlite3_exec(), then execution of the current statement stops and -** subsequent statements are skipped. ^If the 5th parameter to sqlite3_exec() -** is not NULL then any error message is written into memory obtained -** from [sqlite3_malloc()] and passed back through the 5th parameter. -** To avoid memory leaks, the application should invoke [sqlite3_free()] -** on error message strings returned through the 5th parameter of -** of sqlite3_exec() after the error message string is no longer needed. -** ^If the 5th parameter to sqlite3_exec() is not NULL and no errors -** occur, then sqlite3_exec() sets the pointer in its 5th parameter to -** NULL before returning. -** -** ^If an sqlite3_exec() callback returns non-zero, the sqlite3_exec() -** routine returns SQLITE_ABORT without invoking the callback again and -** without running any subsequent SQL statements. -** -** ^The 2nd argument to the sqlite3_exec() callback function is the -** number of columns in the result. ^The 3rd argument to the sqlite3_exec() -** callback is an array of pointers to strings obtained as if from -** [sqlite3_column_text()], one for each column. ^If an element of a -** result row is NULL then the corresponding string pointer for the -** sqlite3_exec() callback is a NULL pointer. ^The 4th argument to the -** sqlite3_exec() callback is an array of pointers to strings where each -** entry represents the name of corresponding result column as obtained -** from [sqlite3_column_name()]. -** -** ^If the 2nd parameter to sqlite3_exec() is a NULL pointer, a pointer -** to an empty string, or a pointer that contains only whitespace and/or -** SQL comments, then no SQL statements are evaluated and the database -** is not changed. -** -** Restrictions: -** -** <ul> -** <li> The application must insure that the 1st parameter to sqlite3_exec() -** is a valid and open [database connection]. 
-** <li> The application must not close [database connection] specified by -** the 1st parameter to sqlite3_exec() while sqlite3_exec() is running. -** <li> The application must not modify the SQL statement text passed into -** the 2nd parameter of sqlite3_exec() while sqlite3_exec() is running. -** </ul> -*/ -SQLITE_API int sqlite3_exec( - sqlite3*, /* An open database */ - const char *sql, /* SQL to be evaluated */ - int (*callback)(void*,int,char**,char**), /* Callback function */ - void *, /* 1st argument to callback */ - char **errmsg /* Error msg written here */ -); - -/* -** CAPI3REF: Result Codes -** KEYWORDS: SQLITE_OK {error code} {error codes} -** KEYWORDS: {result code} {result codes} -** -** Many SQLite functions return an integer result code from the set shown -** here in order to indicates success or failure. -** -** New error codes may be added in future versions of SQLite. -** -** See also: [SQLITE_IOERR_READ | extended result codes] -*/ -#define SQLITE_OK 0 /* Successful result */ -/* beginning-of-error-codes */ -#define SQLITE_ERROR 1 /* SQL error or missing database */ -#define SQLITE_INTERNAL 2 /* Internal logic error in SQLite */ -#define SQLITE_PERM 3 /* Access permission denied */ -#define SQLITE_ABORT 4 /* Callback routine requested an abort */ -#define SQLITE_BUSY 5 /* The database file is locked */ -#define SQLITE_LOCKED 6 /* A table in the database is locked */ -#define SQLITE_NOMEM 7 /* A malloc() failed */ -#define SQLITE_READONLY 8 /* Attempt to write a readonly database */ -#define SQLITE_INTERRUPT 9 /* Operation terminated by sqlite3_interrupt()*/ -#define SQLITE_IOERR 10 /* Some kind of disk I/O error occurred */ -#define SQLITE_CORRUPT 11 /* The database disk image is malformed */ -#define SQLITE_NOTFOUND 12 /* NOT USED. Table or record not found */ -#define SQLITE_FULL 13 /* Insertion failed because database is full */ -#define SQLITE_CANTOPEN 14 /* Unable to open the database file */ -#define SQLITE_PROTOCOL 15 /* NOT USED. Database lock protocol error */ -#define SQLITE_EMPTY 16 /* Database is empty */ -#define SQLITE_SCHEMA 17 /* The database schema changed */ -#define SQLITE_TOOBIG 18 /* String or BLOB exceeds size limit */ -#define SQLITE_CONSTRAINT 19 /* Abort due to constraint violation */ -#define SQLITE_MISMATCH 20 /* Data type mismatch */ -#define SQLITE_MISUSE 21 /* Library used incorrectly */ -#define SQLITE_NOLFS 22 /* Uses OS features not supported on host */ -#define SQLITE_AUTH 23 /* Authorization denied */ -#define SQLITE_FORMAT 24 /* Auxiliary database format error */ -#define SQLITE_RANGE 25 /* 2nd parameter to sqlite3_bind out of range */ -#define SQLITE_NOTADB 26 /* File opened that is not a database file */ -#define SQLITE_ROW 100 /* sqlite3_step() has another row ready */ -#define SQLITE_DONE 101 /* sqlite3_step() has finished executing */ -/* end-of-error-codes */ - -/* -** CAPI3REF: Extended Result Codes -** KEYWORDS: {extended error code} {extended error codes} -** KEYWORDS: {extended result code} {extended result codes} -** -** In its default configuration, SQLite API routines return one of 26 integer -** [SQLITE_OK | result codes]. However, experience has shown that many of -** these result codes are too coarse-grained. They do not provide as -** much information about problems as programmers might like. In an effort to -** address this, newer versions of SQLite (version 3.3.8 and later) include -** support for additional result codes that provide more detailed information -** about errors. 
The extended result codes are enabled or disabled -** on a per database connection basis using the -** [sqlite3_extended_result_codes()] API. -** -** Some of the available extended result codes are listed here. -** One may expect the number of extended result codes will be expand -** over time. Software that uses extended result codes should expect -** to see new result codes in future releases of SQLite. -** -** The SQLITE_OK result code will never be extended. It will always -** be exactly zero. -*/ -#define SQLITE_IOERR_READ (SQLITE_IOERR | (1<<8)) -#define SQLITE_IOERR_SHORT_READ (SQLITE_IOERR | (2<<8)) -#define SQLITE_IOERR_WRITE (SQLITE_IOERR | (3<<8)) -#define SQLITE_IOERR_FSYNC (SQLITE_IOERR | (4<<8)) -#define SQLITE_IOERR_DIR_FSYNC (SQLITE_IOERR | (5<<8)) -#define SQLITE_IOERR_TRUNCATE (SQLITE_IOERR | (6<<8)) -#define SQLITE_IOERR_FSTAT (SQLITE_IOERR | (7<<8)) -#define SQLITE_IOERR_UNLOCK (SQLITE_IOERR | (8<<8)) -#define SQLITE_IOERR_RDLOCK (SQLITE_IOERR | (9<<8)) -#define SQLITE_IOERR_DELETE (SQLITE_IOERR | (10<<8)) -#define SQLITE_IOERR_BLOCKED (SQLITE_IOERR | (11<<8)) -#define SQLITE_IOERR_NOMEM (SQLITE_IOERR | (12<<8)) -#define SQLITE_IOERR_ACCESS (SQLITE_IOERR | (13<<8)) -#define SQLITE_IOERR_CHECKRESERVEDLOCK (SQLITE_IOERR | (14<<8)) -#define SQLITE_IOERR_LOCK (SQLITE_IOERR | (15<<8)) -#define SQLITE_IOERR_CLOSE (SQLITE_IOERR | (16<<8)) -#define SQLITE_IOERR_DIR_CLOSE (SQLITE_IOERR | (17<<8)) -#define SQLITE_LOCKED_SHAREDCACHE (SQLITE_LOCKED | (1<<8) ) - -/* -** CAPI3REF: Flags For File Open Operations -** -** These bit values are intended for use in the -** 3rd parameter to the [sqlite3_open_v2()] interface and -** in the 4th parameter to the xOpen method of the -** [sqlite3_vfs] object. -*/ -#define SQLITE_OPEN_READONLY 0x00000001 /* Ok for sqlite3_open_v2() */ -#define SQLITE_OPEN_READWRITE 0x00000002 /* Ok for sqlite3_open_v2() */ -#define SQLITE_OPEN_CREATE 0x00000004 /* Ok for sqlite3_open_v2() */ -#define SQLITE_OPEN_DELETEONCLOSE 0x00000008 /* VFS only */ -#define SQLITE_OPEN_EXCLUSIVE 0x00000010 /* VFS only */ -#define SQLITE_OPEN_AUTOPROXY 0x00000020 /* VFS only */ -#define SQLITE_OPEN_MAIN_DB 0x00000100 /* VFS only */ -#define SQLITE_OPEN_TEMP_DB 0x00000200 /* VFS only */ -#define SQLITE_OPEN_TRANSIENT_DB 0x00000400 /* VFS only */ -#define SQLITE_OPEN_MAIN_JOURNAL 0x00000800 /* VFS only */ -#define SQLITE_OPEN_TEMP_JOURNAL 0x00001000 /* VFS only */ -#define SQLITE_OPEN_SUBJOURNAL 0x00002000 /* VFS only */ -#define SQLITE_OPEN_MASTER_JOURNAL 0x00004000 /* VFS only */ -#define SQLITE_OPEN_NOMUTEX 0x00008000 /* Ok for sqlite3_open_v2() */ -#define SQLITE_OPEN_FULLMUTEX 0x00010000 /* Ok for sqlite3_open_v2() */ -#define SQLITE_OPEN_SHAREDCACHE 0x00020000 /* Ok for sqlite3_open_v2() */ -#define SQLITE_OPEN_PRIVATECACHE 0x00040000 /* Ok for sqlite3_open_v2() */ - -/* -** CAPI3REF: Device Characteristics -** -** The xDeviceCapabilities method of the [sqlite3_io_methods] -** object returns an integer which is a vector of the these -** bit values expressing I/O characteristics of the mass storage -** device that holds the file that the [sqlite3_io_methods] -** refers to. -** -** The SQLITE_IOCAP_ATOMIC property means that all writes of -** any size are atomic. The SQLITE_IOCAP_ATOMICnnn values -** mean that writes of blocks that are nnn bytes in size and -** are aligned to an address which is an integer multiple of -** nnn are atomic. 
The SQLITE_IOCAP_SAFE_APPEND value means -** that when data is appended to a file, the data is appended -** first then the size of the file is extended, never the other -** way around. The SQLITE_IOCAP_SEQUENTIAL property means that -** information is written to disk in the same order as calls -** to xWrite(). -*/ -#define SQLITE_IOCAP_ATOMIC 0x00000001 -#define SQLITE_IOCAP_ATOMIC512 0x00000002 -#define SQLITE_IOCAP_ATOMIC1K 0x00000004 -#define SQLITE_IOCAP_ATOMIC2K 0x00000008 -#define SQLITE_IOCAP_ATOMIC4K 0x00000010 -#define SQLITE_IOCAP_ATOMIC8K 0x00000020 -#define SQLITE_IOCAP_ATOMIC16K 0x00000040 -#define SQLITE_IOCAP_ATOMIC32K 0x00000080 -#define SQLITE_IOCAP_ATOMIC64K 0x00000100 -#define SQLITE_IOCAP_SAFE_APPEND 0x00000200 -#define SQLITE_IOCAP_SEQUENTIAL 0x00000400 - -/* -** CAPI3REF: File Locking Levels -** -** SQLite uses one of these integer values as the second -** argument to calls it makes to the xLock() and xUnlock() methods -** of an [sqlite3_io_methods] object. -*/ -#define SQLITE_LOCK_NONE 0 -#define SQLITE_LOCK_SHARED 1 -#define SQLITE_LOCK_RESERVED 2 -#define SQLITE_LOCK_PENDING 3 -#define SQLITE_LOCK_EXCLUSIVE 4 - -/* -** CAPI3REF: Synchronization Type Flags -** -** When SQLite invokes the xSync() method of an -** [sqlite3_io_methods] object it uses a combination of -** these integer values as the second argument. -** -** When the SQLITE_SYNC_DATAONLY flag is used, it means that the -** sync operation only needs to flush data to mass storage. Inode -** information need not be flushed. If the lower four bits of the flag -** equal SQLITE_SYNC_NORMAL, that means to use normal fsync() semantics. -** If the lower four bits equal SQLITE_SYNC_FULL, that means -** to use Mac OS X style fullsync instead of fsync(). -*/ -#define SQLITE_SYNC_NORMAL 0x00002 -#define SQLITE_SYNC_FULL 0x00003 -#define SQLITE_SYNC_DATAONLY 0x00010 - -/* -** CAPI3REF: OS Interface Open File Handle -** -** An [sqlite3_file] object represents an open file in the -** [sqlite3_vfs | OS interface layer]. Individual OS interface -** implementations will -** want to subclass this object by appending additional fields -** for their own use. The pMethods entry is a pointer to an -** [sqlite3_io_methods] object that defines methods for performing -** I/O operations on the open file. -*/ -typedef struct sqlite3_file sqlite3_file; -struct sqlite3_file { - const struct sqlite3_io_methods *pMethods; /* Methods for an open file */ -}; - -/* -** CAPI3REF: OS Interface File Virtual Methods Object -** -** Every file opened by the [sqlite3_vfs] xOpen method populates an -** [sqlite3_file] object (or, more commonly, a subclass of the -** [sqlite3_file] object) with a pointer to an instance of this object. -** This object defines the methods used to perform various operations -** against the open file represented by the [sqlite3_file] object. -** -** If the xOpen method sets the sqlite3_file.pMethods element -** to a non-NULL pointer, then the sqlite3_io_methods.xClose method -** may be invoked even if the xOpen reported that it failed. The -** only way to prevent a call to xClose following a failed xOpen -** is for the xOpen to set the sqlite3_file.pMethods element to NULL. -** -** The flags argument to xSync may be one of [SQLITE_SYNC_NORMAL] or -** [SQLITE_SYNC_FULL]. The first choice is the normal fsync(). -** The second choice is a Mac OS X style fullsync. The [SQLITE_SYNC_DATAONLY] -** flag may be ORed in to indicate that only the data of the file -** and not its inode needs to be synced. 
-** -** The integer values to xLock() and xUnlock() are one of -** <ul> -** <li> [SQLITE_LOCK_NONE], -** <li> [SQLITE_LOCK_SHARED], -** <li> [SQLITE_LOCK_RESERVED], -** <li> [SQLITE_LOCK_PENDING], or -** <li> [SQLITE_LOCK_EXCLUSIVE]. -** </ul> -** xLock() increases the lock. xUnlock() decreases the lock. -** The xCheckReservedLock() method checks whether any database connection, -** either in this process or in some other process, is holding a RESERVED, -** PENDING, or EXCLUSIVE lock on the file. It returns true -** if such a lock exists and false otherwise. -** -** The xFileControl() method is a generic interface that allows custom -** VFS implementations to directly control an open file using the -** [sqlite3_file_control()] interface. The second "op" argument is an -** integer opcode. The third argument is a generic pointer intended to -** point to a structure that may contain arguments or space in which to -** write return values. Potential uses for xFileControl() might be -** functions to enable blocking locks with timeouts, to change the -** locking strategy (for example to use dot-file locks), to inquire -** about the status of a lock, or to break stale locks. The SQLite -** core reserves all opcodes less than 100 for its own use. -** A [SQLITE_FCNTL_LOCKSTATE | list of opcodes] less than 100 is available. -** Applications that define a custom xFileControl method should use opcodes -** greater than 100 to avoid conflicts. -** -** The xSectorSize() method returns the sector size of the -** device that underlies the file. The sector size is the -** minimum write that can be performed without disturbing -** other bytes in the file. The xDeviceCharacteristics() -** method returns a bit vector describing behaviors of the -** underlying device: -** -** <ul> -** <li> [SQLITE_IOCAP_ATOMIC] -** <li> [SQLITE_IOCAP_ATOMIC512] -** <li> [SQLITE_IOCAP_ATOMIC1K] -** <li> [SQLITE_IOCAP_ATOMIC2K] -** <li> [SQLITE_IOCAP_ATOMIC4K] -** <li> [SQLITE_IOCAP_ATOMIC8K] -** <li> [SQLITE_IOCAP_ATOMIC16K] -** <li> [SQLITE_IOCAP_ATOMIC32K] -** <li> [SQLITE_IOCAP_ATOMIC64K] -** <li> [SQLITE_IOCAP_SAFE_APPEND] -** <li> [SQLITE_IOCAP_SEQUENTIAL] -** </ul> -** -** The SQLITE_IOCAP_ATOMIC property means that all writes of -** any size are atomic. The SQLITE_IOCAP_ATOMICnnn values -** mean that writes of blocks that are nnn bytes in size and -** are aligned to an address which is an integer multiple of -** nnn are atomic. The SQLITE_IOCAP_SAFE_APPEND value means -** that when data is appended to a file, the data is appended -** first then the size of the file is extended, never the other -** way around. The SQLITE_IOCAP_SEQUENTIAL property means that -** information is written to disk in the same order as calls -** to xWrite(). -** -** If xRead() returns SQLITE_IOERR_SHORT_READ it must also fill -** in the unread portions of the buffer with zeros. A VFS that -** fails to zero-fill short reads might seem to work. However, -** failure to zero-fill short reads will eventually lead to -** database corruption. 
-*/ -typedef struct sqlite3_io_methods sqlite3_io_methods; -struct sqlite3_io_methods { - int iVersion; - int (*xClose)(sqlite3_file*); - int (*xRead)(sqlite3_file*, void*, int iAmt, sqlite3_int64 iOfst); - int (*xWrite)(sqlite3_file*, const void*, int iAmt, sqlite3_int64 iOfst); - int (*xTruncate)(sqlite3_file*, sqlite3_int64 size); - int (*xSync)(sqlite3_file*, int flags); - int (*xFileSize)(sqlite3_file*, sqlite3_int64 *pSize); - int (*xLock)(sqlite3_file*, int); - int (*xUnlock)(sqlite3_file*, int); - int (*xCheckReservedLock)(sqlite3_file*, int *pResOut); - int (*xFileControl)(sqlite3_file*, int op, void *pArg); - int (*xSectorSize)(sqlite3_file*); - int (*xDeviceCharacteristics)(sqlite3_file*); - /* Additional methods may be added in future releases */ -}; - -/* -** CAPI3REF: Standard File Control Opcodes -** -** These integer constants are opcodes for the xFileControl method -** of the [sqlite3_io_methods] object and for the [sqlite3_file_control()] -** interface. -** -** The [SQLITE_FCNTL_LOCKSTATE] opcode is used for debugging. This -** opcode causes the xFileControl method to write the current state of -** the lock (one of [SQLITE_LOCK_NONE], [SQLITE_LOCK_SHARED], -** [SQLITE_LOCK_RESERVED], [SQLITE_LOCK_PENDING], or [SQLITE_LOCK_EXCLUSIVE]) -** into an integer that the pArg argument points to. This capability -** is used during testing and only needs to be supported when SQLITE_TEST -** is defined. -*/ -#define SQLITE_FCNTL_LOCKSTATE 1 -#define SQLITE_GET_LOCKPROXYFILE 2 -#define SQLITE_SET_LOCKPROXYFILE 3 -#define SQLITE_LAST_ERRNO 4 - -/* -** CAPI3REF: Mutex Handle -** -** The mutex module within SQLite defines [sqlite3_mutex] to be an -** abstract type for a mutex object. The SQLite core never looks -** at the internal representation of an [sqlite3_mutex]. It only -** deals with pointers to the [sqlite3_mutex] object. -** -** Mutexes are created using [sqlite3_mutex_alloc()]. -*/ -typedef struct sqlite3_mutex sqlite3_mutex; - -/* -** CAPI3REF: OS Interface Object -** -** An instance of the sqlite3_vfs object defines the interface between -** the SQLite core and the underlying operating system. The "vfs" -** in the name of the object stands for "virtual file system". -** -** The value of the iVersion field is initially 1 but may be larger in -** future versions of SQLite. Additional fields may be appended to this -** object when the iVersion value is increased. Note that the structure -** of the sqlite3_vfs object changes in the transaction between -** SQLite version 3.5.9 and 3.6.0 and yet the iVersion field was not -** modified. -** -** The szOsFile field is the size of the subclassed [sqlite3_file] -** structure used by this VFS. mxPathname is the maximum length of -** a pathname in this VFS. -** -** Registered sqlite3_vfs objects are kept on a linked list formed by -** the pNext pointer. The [sqlite3_vfs_register()] -** and [sqlite3_vfs_unregister()] interfaces manage this list -** in a thread-safe way. The [sqlite3_vfs_find()] interface -** searches the list. Neither the application code nor the VFS -** implementation should use the pNext pointer. -** -** The pNext field is the only field in the sqlite3_vfs -** structure that SQLite will ever modify. SQLite will only access -** or modify this field while holding a particular static mutex. -** The application should never modify anything within the sqlite3_vfs -** object once the object has been registered. -** -** The zName field holds the name of the VFS module. The name must -** be unique across all VFS modules. 
-** -** SQLite will guarantee that the zFilename parameter to xOpen -** is either a NULL pointer or string obtained -** from xFullPathname(). SQLite further guarantees that -** the string will be valid and unchanged until xClose() is -** called. Because of the previous sentence, -** the [sqlite3_file] can safely store a pointer to the -** filename if it needs to remember the filename for some reason. -** If the zFilename parameter is xOpen is a NULL pointer then xOpen -** must invent its own temporary name for the file. Whenever the -** xFilename parameter is NULL it will also be the case that the -** flags parameter will include [SQLITE_OPEN_DELETEONCLOSE]. -** -** The flags argument to xOpen() includes all bits set in -** the flags argument to [sqlite3_open_v2()]. Or if [sqlite3_open()] -** or [sqlite3_open16()] is used, then flags includes at least -** [SQLITE_OPEN_READWRITE] | [SQLITE_OPEN_CREATE]. -** If xOpen() opens a file read-only then it sets *pOutFlags to -** include [SQLITE_OPEN_READONLY]. Other bits in *pOutFlags may be set. -** -** SQLite will also add one of the following flags to the xOpen() -** call, depending on the object being opened: -** -** <ul> -** <li> [SQLITE_OPEN_MAIN_DB] -** <li> [SQLITE_OPEN_MAIN_JOURNAL] -** <li> [SQLITE_OPEN_TEMP_DB] -** <li> [SQLITE_OPEN_TEMP_JOURNAL] -** <li> [SQLITE_OPEN_TRANSIENT_DB] -** <li> [SQLITE_OPEN_SUBJOURNAL] -** <li> [SQLITE_OPEN_MASTER_JOURNAL] -** </ul> -** -** The file I/O implementation can use the object type flags to -** change the way it deals with files. For example, an application -** that does not care about crash recovery or rollback might make -** the open of a journal file a no-op. Writes to this journal would -** also be no-ops, and any attempt to read the journal would return -** SQLITE_IOERR. Or the implementation might recognize that a database -** file will be doing page-aligned sector reads and writes in a random -** order and set up its I/O subsystem accordingly. -** -** SQLite might also add one of the following flags to the xOpen method: -** -** <ul> -** <li> [SQLITE_OPEN_DELETEONCLOSE] -** <li> [SQLITE_OPEN_EXCLUSIVE] -** </ul> -** -** The [SQLITE_OPEN_DELETEONCLOSE] flag means the file should be -** deleted when it is closed. The [SQLITE_OPEN_DELETEONCLOSE] -** will be set for TEMP databases, journals and for subjournals. -** -** The [SQLITE_OPEN_EXCLUSIVE] flag is always used in conjunction -** with the [SQLITE_OPEN_CREATE] flag, which are both directly -** analogous to the O_EXCL and O_CREAT flags of the POSIX open() -** API. The SQLITE_OPEN_EXCLUSIVE flag, when paired with the -** SQLITE_OPEN_CREATE, is used to indicate that file should always -** be created, and that it is an error if it already exists. -** It is <i>not</i> used to indicate the file should be opened -** for exclusive access. -** -** At least szOsFile bytes of memory are allocated by SQLite -** to hold the [sqlite3_file] structure passed as the third -** argument to xOpen. The xOpen method does not have to -** allocate the structure; it should just fill it in. Note that -** the xOpen method must set the sqlite3_file.pMethods to either -** a valid [sqlite3_io_methods] object or to NULL. xOpen must do -** this even if the open fails. SQLite expects that the sqlite3_file.pMethods -** element will be valid after xOpen returns regardless of the success -** or failure of the xOpen call. 
-** -** The flags argument to xAccess() may be [SQLITE_ACCESS_EXISTS] -** to test for the existence of a file, or [SQLITE_ACCESS_READWRITE] to -** test whether a file is readable and writable, or [SQLITE_ACCESS_READ] -** to test whether a file is at least readable. The file can be a -** directory. -** -** SQLite will always allocate at least mxPathname+1 bytes for the -** output buffer xFullPathname. The exact size of the output buffer -** is also passed as a parameter to both methods. If the output buffer -** is not large enough, [SQLITE_CANTOPEN] should be returned. Since this is -** handled as a fatal error by SQLite, vfs implementations should endeavor -** to prevent this by setting mxPathname to a sufficiently large value. -** -** The xRandomness(), xSleep(), and xCurrentTime() interfaces -** are not strictly a part of the filesystem, but they are -** included in the VFS structure for completeness. -** The xRandomness() function attempts to return nBytes bytes -** of good-quality randomness into zOut. The return value is -** the actual number of bytes of randomness obtained. -** The xSleep() method causes the calling thread to sleep for at -** least the number of microseconds given. The xCurrentTime() -** method returns a Julian Day Number for the current date and time. -** -*/ -typedef struct sqlite3_vfs sqlite3_vfs; -struct sqlite3_vfs { - int iVersion; /* Structure version number */ - int szOsFile; /* Size of subclassed sqlite3_file */ - int mxPathname; /* Maximum file pathname length */ - sqlite3_vfs *pNext; /* Next registered VFS */ - const char *zName; /* Name of this virtual file system */ - void *pAppData; /* Pointer to application-specific data */ - int (*xOpen)(sqlite3_vfs*, const char *zName, sqlite3_file*, - int flags, int *pOutFlags); - int (*xDelete)(sqlite3_vfs*, const char *zName, int syncDir); - int (*xAccess)(sqlite3_vfs*, const char *zName, int flags, int *pResOut); - int (*xFullPathname)(sqlite3_vfs*, const char *zName, int nOut, char *zOut); - void *(*xDlOpen)(sqlite3_vfs*, const char *zFilename); - void (*xDlError)(sqlite3_vfs*, int nByte, char *zErrMsg); - void (*(*xDlSym)(sqlite3_vfs*,void*, const char *zSymbol))(void); - void (*xDlClose)(sqlite3_vfs*, void*); - int (*xRandomness)(sqlite3_vfs*, int nByte, char *zOut); - int (*xSleep)(sqlite3_vfs*, int microseconds); - int (*xCurrentTime)(sqlite3_vfs*, double*); - int (*xGetLastError)(sqlite3_vfs*, int, char *); - /* New fields may be appended in figure versions. The iVersion - ** value will increment whenever this happens. */ -}; - -/* -** CAPI3REF: Flags for the xAccess VFS method -** -** These integer constants can be used as the third parameter to -** the xAccess method of an [sqlite3_vfs] object. They determine -** what kind of permissions the xAccess method is looking for. -** With SQLITE_ACCESS_EXISTS, the xAccess method -** simply checks whether the file exists. -** With SQLITE_ACCESS_READWRITE, the xAccess method -** checks whether the file is both readable and writable. -** With SQLITE_ACCESS_READ, the xAccess method -** checks whether the file is readable. -*/ -#define SQLITE_ACCESS_EXISTS 0 -#define SQLITE_ACCESS_READWRITE 1 -#define SQLITE_ACCESS_READ 2 - -/* -** CAPI3REF: Initialize The SQLite Library -** -** ^The sqlite3_initialize() routine initializes the -** SQLite library. ^The sqlite3_shutdown() routine -** deallocates any resources that were allocated by sqlite3_initialize(). -** These routines are designed to aid in process initialization and -** shutdown on embedded systems. 
Workstation applications using -** SQLite normally do not need to invoke either of these routines. -** -** A call to sqlite3_initialize() is an "effective" call if it is -** the first time sqlite3_initialize() is invoked during the lifetime of -** the process, or if it is the first time sqlite3_initialize() is invoked -** following a call to sqlite3_shutdown(). ^(Only an effective call -** of sqlite3_initialize() does any initialization. All other calls -** are harmless no-ops.)^ -** -** A call to sqlite3_shutdown() is an "effective" call if it is the first -** call to sqlite3_shutdown() since the last sqlite3_initialize(). ^(Only -** an effective call to sqlite3_shutdown() does any deinitialization. -** All other valid calls to sqlite3_shutdown() are harmless no-ops.)^ -** -** The sqlite3_initialize() interface is threadsafe, but sqlite3_shutdown() -** is not. The sqlite3_shutdown() interface must only be called from a -** single thread. All open [database connections] must be closed and all -** other SQLite resources must be deallocated prior to invoking -** sqlite3_shutdown(). -** -** Among other things, ^sqlite3_initialize() will invoke -** sqlite3_os_init(). Similarly, ^sqlite3_shutdown() -** will invoke sqlite3_os_end(). -** -** ^The sqlite3_initialize() routine returns [SQLITE_OK] on success. -** ^If for some reason, sqlite3_initialize() is unable to initialize -** the library (perhaps it is unable to allocate a needed resource such -** as a mutex) it returns an [error code] other than [SQLITE_OK]. -** -** ^The sqlite3_initialize() routine is called internally by many other -** SQLite interfaces so that an application usually does not need to -** invoke sqlite3_initialize() directly. For example, [sqlite3_open()] -** calls sqlite3_initialize() so the SQLite library will be automatically -** initialized when [sqlite3_open()] is called if it has not be initialized -** already. ^However, if SQLite is compiled with the [SQLITE_OMIT_AUTOINIT] -** compile-time option, then the automatic calls to sqlite3_initialize() -** are omitted and the application must call sqlite3_initialize() directly -** prior to using any other SQLite interface. For maximum portability, -** it is recommended that applications always invoke sqlite3_initialize() -** directly prior to using any other SQLite interface. Future releases -** of SQLite may require this. In other words, the behavior exhibited -** when SQLite is compiled with [SQLITE_OMIT_AUTOINIT] might become the -** default behavior in some future release of SQLite. -** -** The sqlite3_os_init() routine does operating-system specific -** initialization of the SQLite library. The sqlite3_os_end() -** routine undoes the effect of sqlite3_os_init(). Typical tasks -** performed by these routines include allocation or deallocation -** of static resources, initialization of global variables, -** setting up a default [sqlite3_vfs] module, or setting up -** a default configuration using [sqlite3_config()]. -** -** The application should never invoke either sqlite3_os_init() -** or sqlite3_os_end() directly. The application should only invoke -** sqlite3_initialize() and sqlite3_shutdown(). The sqlite3_os_init() -** interface is called automatically by sqlite3_initialize() and -** sqlite3_os_end() is called by sqlite3_shutdown(). Appropriate -** implementations for sqlite3_os_init() and sqlite3_os_end() -** are built into SQLite when it is compiled for Unix, Windows, or OS/2. 
-** When [custom builds | built for other platforms] -** (using the [SQLITE_OS_OTHER=1] compile-time -** option) the application must supply a suitable implementation for -** sqlite3_os_init() and sqlite3_os_end(). An application-supplied -** implementation of sqlite3_os_init() or sqlite3_os_end() -** must return [SQLITE_OK] on success and some other [error code] upon -** failure. -*/ -SQLITE_API int sqlite3_initialize(void); -SQLITE_API int sqlite3_shutdown(void); -SQLITE_API int sqlite3_os_init(void); -SQLITE_API int sqlite3_os_end(void); - -/* -** CAPI3REF: Configuring The SQLite Library -** -** The sqlite3_config() interface is used to make global configuration -** changes to SQLite in order to tune SQLite to the specific needs of -** the application. The default configuration is recommended for most -** applications and so this routine is usually not necessary. It is -** provided to support rare applications with unusual needs. -** -** The sqlite3_config() interface is not threadsafe. The application -** must insure that no other SQLite interfaces are invoked by other -** threads while sqlite3_config() is running. Furthermore, sqlite3_config() -** may only be invoked prior to library initialization using -** [sqlite3_initialize()] or after shutdown by [sqlite3_shutdown()]. -** ^If sqlite3_config() is called after [sqlite3_initialize()] and before -** [sqlite3_shutdown()] then it will return SQLITE_MISUSE. -** Note, however, that ^sqlite3_config() can be called as part of the -** implementation of an application-defined [sqlite3_os_init()]. -** -** The first argument to sqlite3_config() is an integer -** [SQLITE_CONFIG_SINGLETHREAD | configuration option] that determines -** what property of SQLite is to be configured. Subsequent arguments -** vary depending on the [SQLITE_CONFIG_SINGLETHREAD | configuration option] -** in the first argument. -** -** ^When a configuration option is set, sqlite3_config() returns [SQLITE_OK]. -** ^If the option is unknown or SQLite is unable to set the option -** then this routine returns a non-zero [error code]. -*/ -SQLITE_API int sqlite3_config(int, ...); - -/* -** CAPI3REF: Configure database connections -** -** The sqlite3_db_config() interface is used to make configuration -** changes to a [database connection]. The interface is similar to -** [sqlite3_config()] except that the changes apply to a single -** [database connection] (specified in the first argument). The -** sqlite3_db_config() interface should only be used immediately after -** the database connection is created using [sqlite3_open()], -** [sqlite3_open16()], or [sqlite3_open_v2()]. -** -** The second argument to sqlite3_db_config(D,V,...) is the -** configuration verb - an integer code that indicates what -** aspect of the [database connection] is being configured. -** The only choice for this value is [SQLITE_DBCONFIG_LOOKASIDE]. -** New verbs are likely to be added in future releases of SQLite. -** Additional arguments depend on the verb. -** -** ^Calls to sqlite3_db_config() return SQLITE_OK if and only if -** the call is considered successful. -*/ -SQLITE_API int sqlite3_db_config(sqlite3*, int op, ...); - -/* -** CAPI3REF: Memory Allocation Routines -** -** An instance of this object defines the interface between SQLite -** and low-level memory allocation routines. -** -** This object is used in only one place in the SQLite interface. 
-** A pointer to an instance of this object is the argument to -** [sqlite3_config()] when the configuration option is -** [SQLITE_CONFIG_MALLOC] or [SQLITE_CONFIG_GETMALLOC]. -** By creating an instance of this object -** and passing it to [sqlite3_config]([SQLITE_CONFIG_MALLOC]) -** during configuration, an application can specify an alternative -** memory allocation subsystem for SQLite to use for all of its -** dynamic memory needs. -** -** Note that SQLite comes with several [built-in memory allocators] -** that are perfectly adequate for the overwhelming majority of applications -** and that this object is only useful to a tiny minority of applications -** with specialized memory allocation requirements. This object is -** also used during testing of SQLite in order to specify an alternative -** memory allocator that simulates memory out-of-memory conditions in -** order to verify that SQLite recovers gracefully from such -** conditions. -** -** The xMalloc and xFree methods must work like the -** malloc() and free() functions from the standard C library. -** The xRealloc method must work like realloc() from the standard C library -** with the exception that if the second argument to xRealloc is zero, -** xRealloc must be a no-op - it must not perform any allocation or -** deallocation. ^SQLite guarantees that the second argument to -** xRealloc is always a value returned by a prior call to xRoundup. -** And so in cases where xRoundup always returns a positive number, -** xRealloc can perform exactly as the standard library realloc() and -** still be in compliance with this specification. -** -** xSize should return the allocated size of a memory allocation -** previously obtained from xMalloc or xRealloc. The allocated size -** is always at least as big as the requested size but may be larger. -** -** The xRoundup method returns what would be the allocated size of -** a memory allocation given a particular requested size. Most memory -** allocators round up memory allocations at least to the next multiple -** of 8. Some allocators round up to a larger multiple or to a power of 2. -** Every memory allocation request coming in through [sqlite3_malloc()] -** or [sqlite3_realloc()] first calls xRoundup. If xRoundup returns 0, -** that causes the corresponding memory allocation to fail. -** -** The xInit method initializes the memory allocator. (For example, -** it might allocate any require mutexes or initialize internal data -** structures. The xShutdown method is invoked (indirectly) by -** [sqlite3_shutdown()] and should deallocate any resources acquired -** by xInit. The pAppData pointer is used as the only parameter to -** xInit and xShutdown. -** -** SQLite holds the [SQLITE_MUTEX_STATIC_MASTER] mutex when it invokes -** the xInit method, so the xInit method need not be threadsafe. The -** xShutdown method is only called from [sqlite3_shutdown()] so it does -** not need to be threadsafe either. For all other methods, SQLite -** holds the [SQLITE_MUTEX_STATIC_MEM] mutex as long as the -** [SQLITE_CONFIG_MEMSTATUS] configuration option is turned on (which -** it is by default) and so the methods are automatically serialized. -** However, if [SQLITE_CONFIG_MEMSTATUS] is disabled, then the other -** methods must be threadsafe or else make their own arrangements for -** serialization. -** -** SQLite will never invoke xInit() more than once without an intervening -** call to xShutdown(). 
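-**
-** As an informal sketch (the names loggingMalloc and installWrapper are
-** purely illustrative), an application might wrap only xMalloc while
-** keeping the other default methods, before calling [sqlite3_initialize()]:
-**
-** <blockquote><pre>
-** static sqlite3_mem_methods defaultMem;  /* copy of built-in allocator */
-** static void *loggingMalloc(int n){
-**   /* observe the request, then defer to the saved default method */
-**   return defaultMem.xMalloc(n);
-** }
-** static int installWrapper(void){
-**   sqlite3_mem_methods wrapped;
-**   int rc = sqlite3_config(SQLITE_CONFIG_GETMALLOC, &defaultMem);
-**   if( rc!=SQLITE_OK ) return rc;
-**   wrapped = defaultMem;
-**   wrapped.xMalloc = loggingMalloc;
-**   return sqlite3_config(SQLITE_CONFIG_MALLOC, &wrapped);
-** }
-** </pre></blockquote>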
-*/ -typedef struct sqlite3_mem_methods sqlite3_mem_methods; -struct sqlite3_mem_methods { - void *(*xMalloc)(int); /* Memory allocation function */ - void (*xFree)(void*); /* Free a prior allocation */ - void *(*xRealloc)(void*,int); /* Resize an allocation */ - int (*xSize)(void*); /* Return the size of an allocation */ - int (*xRoundup)(int); /* Round up request size to allocation size */ - int (*xInit)(void*); /* Initialize the memory allocator */ - void (*xShutdown)(void*); /* Deinitialize the memory allocator */ - void *pAppData; /* Argument to xInit() and xShutdown() */ -}; - -/* -** CAPI3REF: Configuration Options -** -** These constants are the available integer configuration options that -** can be passed as the first argument to the [sqlite3_config()] interface. -** -** New configuration options may be added in future releases of SQLite. -** Existing configuration options might be discontinued. Applications -** should check the return code from [sqlite3_config()] to make sure that -** the call worked. The [sqlite3_config()] interface will return a -** non-zero [error code] if a discontinued or unsupported configuration option -** is invoked. -** -** <dl> -** <dt>SQLITE_CONFIG_SINGLETHREAD</dt> -** <dd>There are no arguments to this option. ^This option sets the -** [threading mode] to Single-thread. In other words, it disables -** all mutexing and puts SQLite into a mode where it can only be used -** by a single thread. ^If SQLite is compiled with -** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then -** it is not possible to change the [threading mode] from its default -** value of Single-thread and so [sqlite3_config()] will return -** [SQLITE_ERROR] if called with the SQLITE_CONFIG_SINGLETHREAD -** configuration option.</dd> -** -** <dt>SQLITE_CONFIG_MULTITHREAD</dt> -** <dd>There are no arguments to this option. ^This option sets the -** [threading mode] to Multi-thread. In other words, it disables -** mutexing on [database connection] and [prepared statement] objects. -** The application is responsible for serializing access to -** [database connections] and [prepared statements]. But other mutexes -** are enabled so that SQLite will be safe to use in a multi-threaded -** environment as long as no two threads attempt to use the same -** [database connection] at the same time. ^If SQLite is compiled with -** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then -** it is not possible to set the Multi-thread [threading mode] and -** [sqlite3_config()] will return [SQLITE_ERROR] if called with the -** SQLITE_CONFIG_MULTITHREAD configuration option.</dd> -** -** <dt>SQLITE_CONFIG_SERIALIZED</dt> -** <dd>There are no arguments to this option. ^This option sets the -** [threading mode] to Serialized. In other words, this option enables -** all mutexes including the recursive -** mutexes on [database connection] and [prepared statement] objects. -** In this mode (which is the default when SQLite is compiled with -** [SQLITE_THREADSAFE=1]) the SQLite library will itself serialize access -** to [database connections] and [prepared statements] so that the -** application is free to use the same [database connection] or the -** same [prepared statement] in different threads at the same time. 
-** ^If SQLite is compiled with -** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then -** it is not possible to set the Serialized [threading mode] and -** [sqlite3_config()] will return [SQLITE_ERROR] if called with the -** SQLITE_CONFIG_SERIALIZED configuration option.</dd> -** -** <dt>SQLITE_CONFIG_MALLOC</dt> -** <dd> ^(This option takes a single argument which is a pointer to an -** instance of the [sqlite3_mem_methods] structure. The argument specifies -** alternative low-level memory allocation routines to be used in place of -** the memory allocation routines built into SQLite.)^ ^SQLite makes -** its own private copy of the content of the [sqlite3_mem_methods] structure -** before the [sqlite3_config()] call returns.</dd> -** -** <dt>SQLITE_CONFIG_GETMALLOC</dt> -** <dd> ^(This option takes a single argument which is a pointer to an -** instance of the [sqlite3_mem_methods] structure. The [sqlite3_mem_methods] -** structure is filled with the currently defined memory allocation routines.)^ -** This option can be used to overload the default memory allocation -** routines with a wrapper that simulations memory allocation failure or -** tracks memory usage, for example. </dd> -** -** <dt>SQLITE_CONFIG_MEMSTATUS</dt> -** <dd> ^This option takes single argument of type int, interpreted as a -** boolean, which enables or disables the collection of memory allocation -** statistics. ^(When memory allocation statistics are disabled, the -** following SQLite interfaces become non-operational: -** <ul> -** <li> [sqlite3_memory_used()] -** <li> [sqlite3_memory_highwater()] -** <li> [sqlite3_soft_heap_limit()] -** <li> [sqlite3_status()] -** </ul>)^ -** ^Memory allocation statistics are enabled by default unless SQLite is -** compiled with [SQLITE_DEFAULT_MEMSTATUS]=0 in which case memory -** allocation statistics are disabled by default. -** </dd> -** -** <dt>SQLITE_CONFIG_SCRATCH</dt> -** <dd> ^This option specifies a static memory buffer that SQLite can use for -** scratch memory. There are three arguments: A pointer an 8-byte -** aligned memory buffer from which the scrach allocations will be -** drawn, the size of each scratch allocation (sz), -** and the maximum number of scratch allocations (N). The sz -** argument must be a multiple of 16. The sz parameter should be a few bytes -** larger than the actual scratch space required due to internal overhead. -** The first argument must be a pointer to an 8-byte aligned buffer -** of at least sz*N bytes of memory. -** ^SQLite will use no more than one scratch buffer per thread. So -** N should be set to the expected maximum number of threads. ^SQLite will -** never require a scratch buffer that is more than 6 times the database -** page size. ^If SQLite needs needs additional scratch memory beyond -** what is provided by this configuration option, then -** [sqlite3_malloc()] will be used to obtain the memory needed.</dd> -** -** <dt>SQLITE_CONFIG_PAGECACHE</dt> -** <dd> ^This option specifies a static memory buffer that SQLite can use for -** the database page cache with the default page cache implemenation. -** This configuration should not be used if an application-define page -** cache implementation is loaded using the SQLITE_CONFIG_PCACHE option. -** There are three arguments to this option: A pointer to 8-byte aligned -** memory, the size of each page buffer (sz), and the number of pages (N). 
-** The sz argument should be the size of the largest database page -** (a power of two between 512 and 32768) plus a little extra for each -** page header. ^The page header size is 20 to 40 bytes depending on -** the host architecture. ^It is harmless, apart from the wasted memory, -** to make sz a little too large. The first -** argument should point to an allocation of at least sz*N bytes of memory. -** ^SQLite will use the memory provided by the first argument to satisfy its -** memory needs for the first N pages that it adds to cache. ^If additional -** page cache memory is needed beyond what is provided by this option, then -** SQLite goes to [sqlite3_malloc()] for the additional storage space. -** ^The implementation might use one or more of the N buffers to hold -** memory accounting information. The pointer in the first argument must -** be aligned to an 8-byte boundary or subsequent behavior of SQLite -** will be undefined.</dd> -** -** <dt>SQLITE_CONFIG_HEAP</dt> -** <dd> ^This option specifies a static memory buffer that SQLite will use -** for all of its dynamic memory allocation needs beyond those provided -** for by [SQLITE_CONFIG_SCRATCH] and [SQLITE_CONFIG_PAGECACHE]. -** There are three arguments: An 8-byte aligned pointer to the memory, -** the number of bytes in the memory buffer, and the minimum allocation size. -** ^If the first pointer (the memory pointer) is NULL, then SQLite reverts -** to using its default memory allocator (the system malloc() implementation), -** undoing any prior invocation of [SQLITE_CONFIG_MALLOC]. ^If the -** memory pointer is not NULL and either [SQLITE_ENABLE_MEMSYS3] or -** [SQLITE_ENABLE_MEMSYS5] are defined, then the alternative memory -** allocator is engaged to handle all of SQLites memory allocation needs. -** The first pointer (the memory pointer) must be aligned to an 8-byte -** boundary or subsequent behavior of SQLite will be undefined.</dd> -** -** <dt>SQLITE_CONFIG_MUTEX</dt> -** <dd> ^(This option takes a single argument which is a pointer to an -** instance of the [sqlite3_mutex_methods] structure. The argument specifies -** alternative low-level mutex routines to be used in place -** the mutex routines built into SQLite.)^ ^SQLite makes a copy of the -** content of the [sqlite3_mutex_methods] structure before the call to -** [sqlite3_config()] returns. ^If SQLite is compiled with -** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then -** the entire mutexing subsystem is omitted from the build and hence calls to -** [sqlite3_config()] with the SQLITE_CONFIG_MUTEX configuration option will -** return [SQLITE_ERROR].</dd> -** -** <dt>SQLITE_CONFIG_GETMUTEX</dt> -** <dd> ^(This option takes a single argument which is a pointer to an -** instance of the [sqlite3_mutex_methods] structure. The -** [sqlite3_mutex_methods] -** structure is filled with the currently defined mutex routines.)^ -** This option can be used to overload the default mutex allocation -** routines with a wrapper used to track mutex usage for performance -** profiling or testing, for example. 
^If SQLite is compiled with -** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then -** the entire mutexing subsystem is omitted from the build and hence calls to -** [sqlite3_config()] with the SQLITE_CONFIG_GETMUTEX configuration option will -** return [SQLITE_ERROR].</dd> -** -** <dt>SQLITE_CONFIG_LOOKASIDE</dt> -** <dd> ^(This option takes two arguments that determine the default -** memory allocation for the lookaside memory allocator on each -** [database connection]. The first argument is the -** size of each lookaside buffer slot and the second is the number of -** slots allocated to each database connection.)^ ^(This option sets the -** <i>default</i> lookaside size. The [SQLITE_DBCONFIG_LOOKASIDE] -** verb to [sqlite3_db_config()] can be used to change the lookaside -** configuration on individual connections.)^ </dd> -** -** <dt>SQLITE_CONFIG_PCACHE</dt> -** <dd> ^(This option takes a single argument which is a pointer to -** an [sqlite3_pcache_methods] object. This object specifies the interface -** to a custom page cache implementation.)^ ^SQLite makes a copy of the -** object and uses it for page cache memory allocations.</dd> -** -** <dt>SQLITE_CONFIG_GETPCACHE</dt> -** <dd> ^(This option takes a single argument which is a pointer to an -** [sqlite3_pcache_methods] object. SQLite copies of the current -** page cache implementation into that object.)^ </dd> -** -** <dt>SQLITE_CONFIG_LOG</dt> -** <dd> ^The SQLITE_CONFIG_LOG option takes two arguments: a pointer to a -** function with a call signature of void(*)(void*,int,const char*), -** and a pointer to void. ^If the function pointer is not NULL, it is -** invoked by [sqlite3_log()] to process each logging event. ^If the -** function pointer is NULL, the [sqlite3_log()] interface becomes a no-op. -** ^The void pointer that is the second argument to SQLITE_CONFIG_LOG is -** passed through as the first parameter to the application-defined logger -** function whenever that function is invoked. ^The second parameter to -** the logger function is a copy of the first parameter to the corresponding -** [sqlite3_log()] call and is intended to be a [result code] or an -** [extended result code]. ^The third parameter passed to the logger is -** log message after formatting via [sqlite3_snprintf()]. -** The SQLite logging interface is not reentrant; the logger function -** supplied by the application must not invoke any SQLite interface. -** In a multi-threaded application, the application-defined logger -** function must be threadsafe. </dd> -** -** </dl> -*/ -#define SQLITE_CONFIG_SINGLETHREAD 1 /* nil */ -#define SQLITE_CONFIG_MULTITHREAD 2 /* nil */ -#define SQLITE_CONFIG_SERIALIZED 3 /* nil */ -#define SQLITE_CONFIG_MALLOC 4 /* sqlite3_mem_methods* */ -#define SQLITE_CONFIG_GETMALLOC 5 /* sqlite3_mem_methods* */ -#define SQLITE_CONFIG_SCRATCH 6 /* void*, int sz, int N */ -#define SQLITE_CONFIG_PAGECACHE 7 /* void*, int sz, int N */ -#define SQLITE_CONFIG_HEAP 8 /* void*, int nByte, int min */ -#define SQLITE_CONFIG_MEMSTATUS 9 /* boolean */ -#define SQLITE_CONFIG_MUTEX 10 /* sqlite3_mutex_methods* */ -#define SQLITE_CONFIG_GETMUTEX 11 /* sqlite3_mutex_methods* */ -/* previously SQLITE_CONFIG_CHUNKALLOC 12 which is now unused. 
*/ -#define SQLITE_CONFIG_LOOKASIDE 13 /* int int */ -#define SQLITE_CONFIG_PCACHE 14 /* sqlite3_pcache_methods* */ -#define SQLITE_CONFIG_GETPCACHE 15 /* sqlite3_pcache_methods* */ -#define SQLITE_CONFIG_LOG 16 /* xFunc, void* */ - -/* -** CAPI3REF: Database Connection Configuration Options -** -** These constants are the available integer configuration options that -** can be passed as the second argument to the [sqlite3_db_config()] interface. -** -** New configuration options may be added in future releases of SQLite. -** Existing configuration options might be discontinued. Applications -** should check the return code from [sqlite3_db_config()] to make sure that -** the call worked. ^The [sqlite3_db_config()] interface will return a -** non-zero [error code] if a discontinued or unsupported configuration option -** is invoked. -** -** <dl> -** <dt>SQLITE_DBCONFIG_LOOKASIDE</dt> -** <dd> ^This option takes three additional arguments that determine the -** [lookaside memory allocator] configuration for the [database connection]. -** ^The first argument (the third parameter to [sqlite3_db_config()] is a -** pointer to an memory buffer to use for lookaside memory. -** ^The first argument after the SQLITE_DBCONFIG_LOOKASIDE verb -** may be NULL in which case SQLite will allocate the -** lookaside buffer itself using [sqlite3_malloc()]. ^The second argument is the -** size of each lookaside buffer slot. ^The third argument is the number of -** slots. The size of the buffer in the first argument must be greater than -** or equal to the product of the second and third arguments. The buffer -** must be aligned to an 8-byte boundary. ^If the second argument to -** SQLITE_DBCONFIG_LOOKASIDE is not a multiple of 8, it is internally -** rounded down to the next smaller -** multiple of 8. See also: [SQLITE_CONFIG_LOOKASIDE]</dd> -** -** </dl> -*/ -#define SQLITE_DBCONFIG_LOOKASIDE 1001 /* void* int int */ - - -/* -** CAPI3REF: Enable Or Disable Extended Result Codes -** -** ^The sqlite3_extended_result_codes() routine enables or disables the -** [extended result codes] feature of SQLite. ^The extended result -** codes are disabled by default for historical compatibility. -*/ -SQLITE_API int sqlite3_extended_result_codes(sqlite3*, int onoff); - -/* -** CAPI3REF: Last Insert Rowid -** -** ^Each entry in an SQLite table has a unique 64-bit signed -** integer key called the [ROWID | "rowid"]. ^The rowid is always available -** as an undeclared column named ROWID, OID, or _ROWID_ as long as those -** names are not also used by explicitly declared columns. ^If -** the table has a column of type [INTEGER PRIMARY KEY] then that column -** is another alias for the rowid. -** -** ^This routine returns the [rowid] of the most recent -** successful [INSERT] into the database from the [database connection] -** in the first argument. ^If no successful [INSERT]s -** have ever occurred on that database connection, zero is returned. -** -** ^(If an [INSERT] occurs within a trigger, then the [rowid] of the inserted -** row is returned by this routine as long as the trigger is running. -** But once the trigger terminates, the value returned by this routine -** reverts to the last value inserted before the trigger fired.)^ -** -** ^An [INSERT] that fails due to a constraint violation is not a -** successful [INSERT] and does not change the value returned by this -** routine. 
^Thus INSERT OR FAIL, INSERT OR IGNORE, INSERT OR ROLLBACK, -** and INSERT OR ABORT make no changes to the return value of this -** routine when their insertion fails. ^(When INSERT OR REPLACE -** encounters a constraint violation, it does not fail. The -** INSERT continues to completion after deleting rows that caused -** the constraint problem so INSERT OR REPLACE will always change -** the return value of this interface.)^ -** -** ^For the purposes of this routine, an [INSERT] is considered to -** be successful even if it is subsequently rolled back. -** -** This function is accessible to SQL statements via the -** [last_insert_rowid() SQL function]. -** -** If a separate thread performs a new [INSERT] on the same -** database connection while the [sqlite3_last_insert_rowid()] -** function is running and thus changes the last insert [rowid], -** then the value returned by [sqlite3_last_insert_rowid()] is -** unpredictable and might not equal either the old or the new -** last insert [rowid]. -*/ -SQLITE_API sqlite3_int64 sqlite3_last_insert_rowid(sqlite3*); - -/* -** CAPI3REF: Count The Number Of Rows Modified -** -** ^This function returns the number of database rows that were changed -** or inserted or deleted by the most recently completed SQL statement -** on the [database connection] specified by the first parameter. -** ^(Only changes that are directly specified by the [INSERT], [UPDATE], -** or [DELETE] statement are counted. Auxiliary changes caused by -** triggers or [foreign key actions] are not counted.)^ Use the -** [sqlite3_total_changes()] function to find the total number of changes -** including changes caused by triggers and foreign key actions. -** -** ^Changes to a view that are simulated by an [INSTEAD OF trigger] -** are not counted. Only real table changes are counted. -** -** ^(A "row change" is a change to a single row of a single table -** caused by an INSERT, DELETE, or UPDATE statement. Rows that -** are changed as side effects of [REPLACE] constraint resolution, -** rollback, ABORT processing, [DROP TABLE], or by any other -** mechanisms do not count as direct row changes.)^ -** -** A "trigger context" is a scope of execution that begins and -** ends with the script of a [CREATE TRIGGER | trigger]. -** Most SQL statements are -** evaluated outside of any trigger. This is the "top level" -** trigger context. If a trigger fires from the top level, a -** new trigger context is entered for the duration of that one -** trigger. Subtriggers create subcontexts for their duration. -** -** ^Calling [sqlite3_exec()] or [sqlite3_step()] recursively does -** not create a new trigger context. -** -** ^This function returns the number of direct row changes in the -** most recent INSERT, UPDATE, or DELETE statement within the same -** trigger context. -** -** ^Thus, when called from the top level, this function returns the -** number of changes in the most recent INSERT, UPDATE, or DELETE -** that also occurred at the top level. ^(Within the body of a trigger, -** the sqlite3_changes() interface can be called to find the number of -** changes in the most recently completed INSERT, UPDATE, or DELETE -** statement within the body of the same trigger. -** However, the number returned does not include changes -** caused by subtriggers since those have their own context.)^ -** -** See also the [sqlite3_total_changes()] interface, the -** [count_changes pragma], and the [changes() SQL function]. 
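-**
-** For illustration only (assuming db is an open [database connection] and
-** a table t1 already exists; error checking omitted):
-**
-** <blockquote><pre>
-** sqlite3_exec(db, "INSERT INTO t1(x) VALUES(42)", 0, 0, 0);
-** printf("rowid=%lld changes=%d\n",
-**        (long long)sqlite3_last_insert_rowid(db),
-**        sqlite3_changes(db));
-** </pre></blockquote>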
-** -** If a separate thread makes changes on the same database connection -** while [sqlite3_changes()] is running then the value returned -** is unpredictable and not meaningful. -*/ -SQLITE_API int sqlite3_changes(sqlite3*); - -/* -** CAPI3REF: Total Number Of Rows Modified -** -** ^This function returns the number of row changes caused by [INSERT], -** [UPDATE] or [DELETE] statements since the [database connection] was opened. -** ^(The count returned by sqlite3_total_changes() includes all changes -** from all [CREATE TRIGGER | trigger] contexts and changes made by -** [foreign key actions]. However, -** the count does not include changes used to implement [REPLACE] constraints, -** do rollbacks or ABORT processing, or [DROP TABLE] processing. The -** count does not include rows of views that fire an [INSTEAD OF trigger], -** though if the INSTEAD OF trigger makes changes of its own, those changes -** are counted.)^ -** ^The sqlite3_total_changes() function counts the changes as soon as -** the statement that makes them is completed (when the statement handle -** is passed to [sqlite3_reset()] or [sqlite3_finalize()]). -** -** See also the [sqlite3_changes()] interface, the -** [count_changes pragma], and the [total_changes() SQL function]. -** -** If a separate thread makes changes on the same database connection -** while [sqlite3_total_changes()] is running then the value -** returned is unpredictable and not meaningful. -*/ -SQLITE_API int sqlite3_total_changes(sqlite3*); - -/* -** CAPI3REF: Interrupt A Long-Running Query -** -** ^This function causes any pending database operation to abort and -** return at its earliest opportunity. This routine is typically -** called in response to a user action such as pressing "Cancel" -** or Ctrl-C where the user wants a long query operation to halt -** immediately. -** -** ^It is safe to call this routine from a thread different from the -** thread that is currently running the database operation. But it -** is not safe to call this routine with a [database connection] that -** is closed or might close before sqlite3_interrupt() returns. -** -** ^If an SQL operation is very nearly finished at the time when -** sqlite3_interrupt() is called, then it might not have an opportunity -** to be interrupted and might continue to completion. -** -** ^An SQL operation that is interrupted will return [SQLITE_INTERRUPT]. -** ^If the interrupted SQL operation is an INSERT, UPDATE, or DELETE -** that is inside an explicit transaction, then the entire transaction -** will be rolled back automatically. -** -** ^The sqlite3_interrupt(D) call is in effect until all currently running -** SQL statements on [database connection] D complete. ^Any new SQL statements -** that are started after the sqlite3_interrupt() call and before the -** running statements reaches zero are interrupted as if they had been -** running prior to the sqlite3_interrupt() call. ^New SQL statements -** that are started after the running statement count reaches zero are -** not effected by the sqlite3_interrupt(). -** ^A call to sqlite3_interrupt(D) that occurs when there are no running -** SQL statements is a no-op and has no effect on SQL statements -** that are started after the sqlite3_interrupt() call returns. -** -** If the database connection closes while [sqlite3_interrupt()] -** is running then bad things will likely happen. 
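-**
-** A minimal sketch (gDb and onCancelClicked are hypothetical names; the
-** connection must remain open while the call is made):
-**
-** <blockquote><pre>
-** static sqlite3 *gDb;               /* connection running the long query */
-** static void onCancelClicked(void){ /* e.g. a GUI "Cancel" handler */
-**   sqlite3_interrupt(gDb);  /* pending statements return SQLITE_INTERRUPT */
-** }
-** </pre></blockquote>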
-*/ -SQLITE_API void sqlite3_interrupt(sqlite3*); - -/* -** CAPI3REF: Determine If An SQL Statement Is Complete -** -** These routines are useful during command-line input to determine if the -** currently entered text seems to form a complete SQL statement or -** if additional input is needed before sending the text into -** SQLite for parsing. ^These routines return 1 if the input string -** appears to be a complete SQL statement. ^A statement is judged to be -** complete if it ends with a semicolon token and is not a prefix of a -** well-formed CREATE TRIGGER statement. ^Semicolons that are embedded within -** string literals or quoted identifier names or comments are not -** independent tokens (they are part of the token in which they are -** embedded) and thus do not count as a statement terminator. ^Whitespace -** and comments that follow the final semicolon are ignored. -** -** ^These routines return 0 if the statement is incomplete. ^If a -** memory allocation fails, then SQLITE_NOMEM is returned. -** -** ^These routines do not parse the SQL statements thus -** will not detect syntactically incorrect SQL. -** -** ^(If SQLite has not been initialized using [sqlite3_initialize()] prior -** to invoking sqlite3_complete16() then sqlite3_initialize() is invoked -** automatically by sqlite3_complete16(). If that initialization fails, -** then the return value from sqlite3_complete16() will be non-zero -** regardless of whether or not the input SQL is complete.)^ -** -** The input to [sqlite3_complete()] must be a zero-terminated -** UTF-8 string. -** -** The input to [sqlite3_complete16()] must be a zero-terminated -** UTF-16 string in native byte order. -*/ -SQLITE_API int sqlite3_complete(const char *sql); -SQLITE_API int sqlite3_complete16(const void *sql); - -/* -** CAPI3REF: Register A Callback To Handle SQLITE_BUSY Errors -** -** ^This routine sets a callback function that might be invoked whenever -** an attempt is made to open a database table that another thread -** or process has locked. -** -** ^If the busy callback is NULL, then [SQLITE_BUSY] or [SQLITE_IOERR_BLOCKED] -** is returned immediately upon encountering the lock. ^If the busy callback -** is not NULL, then the callback might be invoked with two arguments. -** -** ^The first argument to the busy handler is a copy of the void* pointer which -** is the third argument to sqlite3_busy_handler(). ^The second argument to -** the busy handler callback is the number of times that the busy handler has -** been invoked for this locking event. ^If the -** busy callback returns 0, then no additional attempts are made to -** access the database and [SQLITE_BUSY] or [SQLITE_IOERR_BLOCKED] is returned. -** ^If the callback returns non-zero, then another attempt -** is made to open the database for reading and the cycle repeats. -** -** The presence of a busy handler does not guarantee that it will be invoked -** when there is lock contention. ^If SQLite determines that invoking the busy -** handler could result in a deadlock, it will go ahead and return [SQLITE_BUSY] -** or [SQLITE_IOERR_BLOCKED] instead of invoking the busy handler. -** Consider a scenario where one process is holding a read lock that -** it is trying to promote to a reserved lock and -** a second process is holding a reserved lock that it is trying -** to promote to an exclusive lock. The first process cannot proceed -** because it is blocked by the second and the second process cannot -** proceed because it is blocked by the first. 
If both processes -** invoke the busy handlers, neither will make any progress. Therefore, -** SQLite returns [SQLITE_BUSY] for the first process, hoping that this -** will induce the first process to release its read lock and allow -** the second process to proceed. -** -** ^The default busy callback is NULL. -** -** ^The [SQLITE_BUSY] error is converted to [SQLITE_IOERR_BLOCKED] -** when SQLite is in the middle of a large transaction where all the -** changes will not fit into the in-memory cache. SQLite will -** already hold a RESERVED lock on the database file, but it needs -** to promote this lock to EXCLUSIVE so that it can spill cache -** pages into the database file without harm to concurrent -** readers. ^If it is unable to promote the lock, then the in-memory -** cache will be left in an inconsistent state and so the error -** code is promoted from the relatively benign [SQLITE_BUSY] to -** the more severe [SQLITE_IOERR_BLOCKED]. ^This error code promotion -** forces an automatic rollback of the changes. See the -** <a href="/cvstrac/wiki?p=CorruptionFollowingBusyError"> -** CorruptionFollowingBusyError</a> wiki page for a discussion of why -** this is important. -** -** ^(There can only be a single busy handler defined for each -** [database connection]. Setting a new busy handler clears any -** previously set handler.)^ ^Note that calling [sqlite3_busy_timeout()] -** will also set or clear the busy handler. -** -** The busy callback should not take any actions which modify the -** database connection that invoked the busy handler. Any such actions -** result in undefined behavior. -** -** A busy handler must not close the database connection -** or [prepared statement] that invoked the busy handler. -*/ -SQLITE_API int sqlite3_busy_handler(sqlite3*, int(*)(void*,int), void*); - -/* -** CAPI3REF: Set A Busy Timeout -** -** ^This routine sets a [sqlite3_busy_handler | busy handler] that sleeps -** for a specified amount of time when a table is locked. ^The handler -** will sleep multiple times until at least "ms" milliseconds of sleeping -** have accumulated. ^After at least "ms" milliseconds of sleeping, -** the handler returns 0 which causes [sqlite3_step()] to return -** [SQLITE_BUSY] or [SQLITE_IOERR_BLOCKED]. -** -** ^Calling this routine with an argument less than or equal to zero -** turns off all busy handlers. -** -** ^(There can only be a single busy handler for a particular -** [database connection] any any given moment. If another busy handler -** was defined (using [sqlite3_busy_handler()]) prior to calling -** this routine, that other busy handler is cleared.)^ -*/ -SQLITE_API int sqlite3_busy_timeout(sqlite3*, int ms); - -/* -** CAPI3REF: Convenience Routines For Running Queries -** -** Definition: A <b>result table</b> is memory data structure created by the -** [sqlite3_get_table()] interface. A result table records the -** complete query results from one or more queries. -** -** The table conceptually has a number of rows and columns. But -** these numbers are not part of the result table itself. These -** numbers are obtained separately. Let N be the number of rows -** and M be the number of columns. -** -** A result table is an array of pointers to zero-terminated UTF-8 strings. -** There are (N+1)*M elements in the array. The first M pointers point -** to zero-terminated strings that contain the names of the columns. -** The remaining entries all point to query results. NULL values result -** in NULL pointers. 
All other values are in their UTF-8 zero-terminated -** string representation as returned by [sqlite3_column_text()]. -** -** A result table might consist of one or more memory allocations. -** It is not safe to pass a result table directly to [sqlite3_free()]. -** A result table should be deallocated using [sqlite3_free_table()]. -** -** As an example of the result table format, suppose a query result -** is as follows: -** -** <blockquote><pre> -** Name | Age -** ----------------------- -** Alice | 43 -** Bob | 28 -** Cindy | 21 -** </pre></blockquote> -** -** There are two column (M==2) and three rows (N==3). Thus the -** result table has 8 entries. Suppose the result table is stored -** in an array names azResult. Then azResult holds this content: -** -** <blockquote><pre> -** azResult[0] = "Name"; -** azResult[1] = "Age"; -** azResult[2] = "Alice"; -** azResult[3] = "43"; -** azResult[4] = "Bob"; -** azResult[5] = "28"; -** azResult[6] = "Cindy"; -** azResult[7] = "21"; -** </pre></blockquote> -** -** ^The sqlite3_get_table() function evaluates one or more -** semicolon-separated SQL statements in the zero-terminated UTF-8 -** string of its 2nd parameter and returns a result table to the -** pointer given in its 3rd parameter. -** -** After the application has finished with the result from sqlite3_get_table(), -** it should pass the result table pointer to sqlite3_free_table() in order to -** release the memory that was malloced. Because of the way the -** [sqlite3_malloc()] happens within sqlite3_get_table(), the calling -** function must not try to call [sqlite3_free()] directly. Only -** [sqlite3_free_table()] is able to release the memory properly and safely. -** -** ^(The sqlite3_get_table() interface is implemented as a wrapper around -** [sqlite3_exec()]. The sqlite3_get_table() routine does not have access -** to any internal data structures of SQLite. It uses only the public -** interface defined here. As a consequence, errors that occur in the -** wrapper layer outside of the internal [sqlite3_exec()] call are not -** reflected in subsequent calls to [sqlite3_errcode()] or -** [sqlite3_errmsg()].)^ -*/ -SQLITE_API int sqlite3_get_table( - sqlite3 *db, /* An open database */ - const char *zSql, /* SQL to be evaluated */ - char ***pazResult, /* Results of the query */ - int *pnRow, /* Number of result rows written here */ - int *pnColumn, /* Number of result columns written here */ - char **pzErrmsg /* Error msg written here */ -); -SQLITE_API void sqlite3_free_table(char **result); - -/* -** CAPI3REF: Formatted String Printing Functions -** -** These routines are work-alikes of the "printf()" family of functions -** from the standard C library. -** -** ^The sqlite3_mprintf() and sqlite3_vmprintf() routines write their -** results into memory obtained from [sqlite3_malloc()]. -** The strings returned by these two routines should be -** released by [sqlite3_free()]. ^Both routines return a -** NULL pointer if [sqlite3_malloc()] is unable to allocate enough -** memory to hold the resulting string. -** -** ^(In sqlite3_snprintf() routine is similar to "snprintf()" from -** the standard C library. The result is written into the -** buffer supplied as the second parameter whose size is given by -** the first parameter. Note that the order of the -** first two parameters is reversed from snprintf().)^ This is an -** historical accident that cannot be fixed without breaking -** backwards compatibility. 
^(Note also that sqlite3_snprintf() -** returns a pointer to its buffer instead of the number of -** characters actually written into the buffer.)^ We admit that -** the number of characters written would be a more useful return -** value but we cannot change the implementation of sqlite3_snprintf() -** now without breaking compatibility. -** -** ^As long as the buffer size is greater than zero, sqlite3_snprintf() -** guarantees that the buffer is always zero-terminated. ^The first -** parameter "n" is the total size of the buffer, including space for -** the zero terminator. So the longest string that can be completely -** written will be n-1 characters. -** -** These routines all implement some additional formatting -** options that are useful for constructing SQL statements. -** All of the usual printf() formatting options apply. In addition, there -** is are "%q", "%Q", and "%z" options. -** -** ^(The %q option works like %s in that it substitutes a null-terminated -** string from the argument list. But %q also doubles every '\'' character. -** %q is designed for use inside a string literal.)^ By doubling each '\'' -** character it escapes that character and allows it to be inserted into -** the string. -** -** For example, assume the string variable zText contains text as follows: -** -** <blockquote><pre> -** char *zText = "It's a happy day!"; -** </pre></blockquote> -** -** One can use this text in an SQL statement as follows: -** -** <blockquote><pre> -** char *zSQL = sqlite3_mprintf("INSERT INTO table VALUES('%q')", zText); -** sqlite3_exec(db, zSQL, 0, 0, 0); -** sqlite3_free(zSQL); -** </pre></blockquote> -** -** Because the %q format string is used, the '\'' character in zText -** is escaped and the SQL generated is as follows: -** -** <blockquote><pre> -** INSERT INTO table1 VALUES('It''s a happy day!') -** </pre></blockquote> -** -** This is correct. Had we used %s instead of %q, the generated SQL -** would have looked like this: -** -** <blockquote><pre> -** INSERT INTO table1 VALUES('It's a happy day!'); -** </pre></blockquote> -** -** This second example is an SQL syntax error. As a general rule you should -** always use %q instead of %s when inserting text into a string literal. -** -** ^(The %Q option works like %q except it also adds single quotes around -** the outside of the total string. Additionally, if the parameter in the -** argument list is a NULL pointer, %Q substitutes the text "NULL" (without -** single quotes).)^ So, for example, one could say: -** -** <blockquote><pre> -** char *zSQL = sqlite3_mprintf("INSERT INTO table VALUES(%Q)", zText); -** sqlite3_exec(db, zSQL, 0, 0, 0); -** sqlite3_free(zSQL); -** </pre></blockquote> -** -** The code above will render a correct SQL statement in the zSQL -** variable even if the zText variable is a NULL pointer. -** -** ^(The "%z" formatting option works like "%s" but with the -** addition that after the string has been read and copied into -** the result, [sqlite3_free()] is called on the input string.)^ -*/ -SQLITE_API char *sqlite3_mprintf(const char*,...); -SQLITE_API char *sqlite3_vmprintf(const char*, va_list); -SQLITE_API char *sqlite3_snprintf(int,char*,const char*, ...); - -/* -** CAPI3REF: Memory Allocation Subsystem -** -** The SQLite core uses these three routines for all of its own -** internal memory allocation needs. "Core" in the previous sentence -** does not include operating-system specific VFS implementation. The -** Windows VFS uses native malloc() and free() for some operations. 
-** -** ^The sqlite3_malloc() routine returns a pointer to a block -** of memory at least N bytes in length, where N is the parameter. -** ^If sqlite3_malloc() is unable to obtain sufficient free -** memory, it returns a NULL pointer. ^If the parameter N to -** sqlite3_malloc() is zero or negative then sqlite3_malloc() returns -** a NULL pointer. -** -** ^Calling sqlite3_free() with a pointer previously returned -** by sqlite3_malloc() or sqlite3_realloc() releases that memory so -** that it might be reused. ^The sqlite3_free() routine is -** a no-op if is called with a NULL pointer. Passing a NULL pointer -** to sqlite3_free() is harmless. After being freed, memory -** should neither be read nor written. Even reading previously freed -** memory might result in a segmentation fault or other severe error. -** Memory corruption, a segmentation fault, or other severe error -** might result if sqlite3_free() is called with a non-NULL pointer that -** was not obtained from sqlite3_malloc() or sqlite3_realloc(). -** -** ^(The sqlite3_realloc() interface attempts to resize a -** prior memory allocation to be at least N bytes, where N is the -** second parameter. The memory allocation to be resized is the first -** parameter.)^ ^ If the first parameter to sqlite3_realloc() -** is a NULL pointer then its behavior is identical to calling -** sqlite3_malloc(N) where N is the second parameter to sqlite3_realloc(). -** ^If the second parameter to sqlite3_realloc() is zero or -** negative then the behavior is exactly the same as calling -** sqlite3_free(P) where P is the first parameter to sqlite3_realloc(). -** ^sqlite3_realloc() returns a pointer to a memory allocation -** of at least N bytes in size or NULL if sufficient memory is unavailable. -** ^If M is the size of the prior allocation, then min(N,M) bytes -** of the prior allocation are copied into the beginning of buffer returned -** by sqlite3_realloc() and the prior allocation is freed. -** ^If sqlite3_realloc() returns NULL, then the prior allocation -** is not freed. -** -** ^The memory returned by sqlite3_malloc() and sqlite3_realloc() -** is always aligned to at least an 8 byte boundary. -** -** In SQLite version 3.5.0 and 3.5.1, it was possible to define -** the SQLITE_OMIT_MEMORY_ALLOCATION which would cause the built-in -** implementation of these routines to be omitted. That capability -** is no longer provided. Only built-in memory allocators can be used. -** -** The Windows OS interface layer calls -** the system malloc() and free() directly when converting -** filenames between the UTF-8 encoding used by SQLite -** and whatever filename encoding is used by the particular Windows -** installation. Memory allocation errors are detected, but -** they are reported back as [SQLITE_CANTOPEN] or -** [SQLITE_IOERR] rather than [SQLITE_NOMEM]. -** -** The pointer arguments to [sqlite3_free()] and [sqlite3_realloc()] -** must be either NULL or else pointers obtained from a prior -** invocation of [sqlite3_malloc()] or [sqlite3_realloc()] that have -** not yet been released. -** -** The application must not read or write any part of -** a block of memory after it has been released using -** [sqlite3_free()] or [sqlite3_realloc()]. 
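-**
-** A brief sketch of typical use (the sizes are arbitrary; error handling
-** is intentionally minimal):
-**
-** <blockquote><pre>
-** char *p = sqlite3_malloc(100);
-** if( p ){
-**   char *pNew = sqlite3_realloc(p, 200); /* p is freed only on success */
-**   if( pNew ) p = pNew;
-**   sqlite3_free(p);                      /* release whichever block survived */
-** }
-** </pre></blockquote>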
-*/ -SQLITE_API void *sqlite3_malloc(int); -SQLITE_API void *sqlite3_realloc(void*, int); -SQLITE_API void sqlite3_free(void*); - -/* -** CAPI3REF: Memory Allocator Statistics -** -** SQLite provides these two interfaces for reporting on the status -** of the [sqlite3_malloc()], [sqlite3_free()], and [sqlite3_realloc()] -** routines, which form the built-in memory allocation subsystem. -** -** ^The [sqlite3_memory_used()] routine returns the number of bytes -** of memory currently outstanding (malloced but not freed). -** ^The [sqlite3_memory_highwater()] routine returns the maximum -** value of [sqlite3_memory_used()] since the high-water mark -** was last reset. ^The values returned by [sqlite3_memory_used()] and -** [sqlite3_memory_highwater()] include any overhead -** added by SQLite in its implementation of [sqlite3_malloc()], -** but not overhead added by the any underlying system library -** routines that [sqlite3_malloc()] may call. -** -** ^The memory high-water mark is reset to the current value of -** [sqlite3_memory_used()] if and only if the parameter to -** [sqlite3_memory_highwater()] is true. ^The value returned -** by [sqlite3_memory_highwater(1)] is the high-water mark -** prior to the reset. -*/ -SQLITE_API sqlite3_int64 sqlite3_memory_used(void); -SQLITE_API sqlite3_int64 sqlite3_memory_highwater(int resetFlag); - -/* -** CAPI3REF: Pseudo-Random Number Generator -** -** SQLite contains a high-quality pseudo-random number generator (PRNG) used to -** select random [ROWID | ROWIDs] when inserting new records into a table that -** already uses the largest possible [ROWID]. The PRNG is also used for -** the build-in random() and randomblob() SQL functions. This interface allows -** applications to access the same PRNG for other purposes. -** -** ^A call to this routine stores N bytes of randomness into buffer P. -** -** ^The first time this routine is invoked (either internally or by -** the application) the PRNG is seeded using randomness obtained -** from the xRandomness method of the default [sqlite3_vfs] object. -** ^On all subsequent invocations, the pseudo-randomness is generated -** internally and without recourse to the [sqlite3_vfs] xRandomness -** method. -*/ -SQLITE_API void sqlite3_randomness(int N, void *P); - -/* -** CAPI3REF: Compile-Time Authorization Callbacks -** -** ^This routine registers a authorizer callback with a particular -** [database connection], supplied in the first argument. -** ^The authorizer callback is invoked as SQL statements are being compiled -** by [sqlite3_prepare()] or its variants [sqlite3_prepare_v2()], -** [sqlite3_prepare16()] and [sqlite3_prepare16_v2()]. ^At various -** points during the compilation process, as logic is being created -** to perform various actions, the authorizer callback is invoked to -** see if those actions are allowed. ^The authorizer callback should -** return [SQLITE_OK] to allow the action, [SQLITE_IGNORE] to disallow the -** specific action but allow the SQL statement to continue to be -** compiled, or [SQLITE_DENY] to cause the entire SQL statement to be -** rejected with an error. ^If the authorizer callback returns -** any value other than [SQLITE_IGNORE], [SQLITE_OK], or [SQLITE_DENY] -** then the [sqlite3_prepare_v2()] or equivalent call that triggered -** the authorizer will fail with an error message. -** -** When the callback returns [SQLITE_OK], that means the operation -** requested is ok. 
^When the callback returns [SQLITE_DENY], the -** [sqlite3_prepare_v2()] or equivalent call that triggered the -** authorizer will fail with an error message explaining that -** access is denied. -** -** ^The first parameter to the authorizer callback is a copy of the third -** parameter to the sqlite3_set_authorizer() interface. ^The second parameter -** to the callback is an integer [SQLITE_COPY | action code] that specifies -** the particular action to be authorized. ^The third through sixth parameters -** to the callback are zero-terminated strings that contain additional -** details about the action to be authorized. -** -** ^If the action code is [SQLITE_READ] -** and the callback returns [SQLITE_IGNORE] then the -** [prepared statement] statement is constructed to substitute -** a NULL value in place of the table column that would have -** been read if [SQLITE_OK] had been returned. The [SQLITE_IGNORE] -** return can be used to deny an untrusted user access to individual -** columns of a table. -** ^If the action code is [SQLITE_DELETE] and the callback returns -** [SQLITE_IGNORE] then the [DELETE] operation proceeds but the -** [truncate optimization] is disabled and all rows are deleted individually. -** -** An authorizer is used when [sqlite3_prepare | preparing] -** SQL statements from an untrusted source, to ensure that the SQL statements -** do not try to access data they are not allowed to see, or that they do not -** try to execute malicious statements that damage the database. For -** example, an application may allow a user to enter arbitrary -** SQL queries for evaluation by a database. But the application does -** not want the user to be able to make arbitrary changes to the -** database. An authorizer could then be put in place while the -** user-entered SQL is being [sqlite3_prepare | prepared] that -** disallows everything except [SELECT] statements. -** -** Applications that need to process SQL from untrusted sources -** might also consider lowering resource limits using [sqlite3_limit()] -** and limiting database size using the [max_page_count] [PRAGMA] -** in addition to using an authorizer. -** -** ^(Only a single authorizer can be in place on a database connection -** at a time. Each call to sqlite3_set_authorizer overrides the -** previous call.)^ ^Disable the authorizer by installing a NULL callback. -** The authorizer is disabled by default. -** -** The authorizer callback must not do anything that will modify -** the database connection that invoked the authorizer callback. -** Note that [sqlite3_prepare_v2()] and [sqlite3_step()] both modify their -** database connections for the meaning of "modify" in this paragraph. -** -** ^When [sqlite3_prepare_v2()] is used to prepare a statement, the -** statement might be re-prepared during [sqlite3_step()] due to a -** schema change. Hence, the application should ensure that the -** correct authorizer callback remains in place during the [sqlite3_step()]. -** -** ^Note that the authorizer callback is invoked only during -** [sqlite3_prepare()] or its variants. Authorization is not -** performed during statement evaluation in [sqlite3_step()], unless -** as stated in the previous paragraph, sqlite3_step() invokes -** sqlite3_prepare_v2() to reprepare a statement after a schema change. 
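-**
-** As a simplified sketch of a read-only policy (a real policy may need to
-** allow additional action codes such as [SQLITE_FUNCTION]; the name
-** selectOnly is illustrative):
-**
-** <blockquote><pre>
-** static int selectOnly(void *pNotUsed, int code, const char *z1,
-**                       const char *z2, const char *zDb, const char *zTrig){
-**   if( code==SQLITE_SELECT || code==SQLITE_READ ) return SQLITE_OK;
-**   return SQLITE_DENY;   /* everything else fails at prepare time */
-** }
-** /* ... */
-** sqlite3_set_authorizer(db, selectOnly, 0);
-** </pre></blockquote>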
-*/ -SQLITE_API int sqlite3_set_authorizer( - sqlite3*, - int (*xAuth)(void*,int,const char*,const char*,const char*,const char*), - void *pUserData -); - -/* -** CAPI3REF: Authorizer Return Codes -** -** The [sqlite3_set_authorizer | authorizer callback function] must -** return either [SQLITE_OK] or one of these two constants in order -** to signal SQLite whether or not the action is permitted. See the -** [sqlite3_set_authorizer | authorizer documentation] for additional -** information. -*/ -#define SQLITE_DENY 1 /* Abort the SQL statement with an error */ -#define SQLITE_IGNORE 2 /* Don't allow access, but don't generate an error */ - -/* -** CAPI3REF: Authorizer Action Codes -** -** The [sqlite3_set_authorizer()] interface registers a callback function -** that is invoked to authorize certain SQL statement actions. The -** second parameter to the callback is an integer code that specifies -** what action is being authorized. These are the integer action codes that -** the authorizer callback may be passed. -** -** These action code values signify what kind of operation is to be -** authorized. The 3rd and 4th parameters to the authorization -** callback function will be parameters or NULL depending on which of these -** codes is used as the second parameter. ^(The 5th parameter to the -** authorizer callback is the name of the database ("main", "temp", -** etc.) if applicable.)^ ^The 6th parameter to the authorizer callback -** is the name of the inner-most trigger or view that is responsible for -** the access attempt or NULL if this access attempt is directly from -** top-level SQL code. -*/ -/******************************************* 3rd ************ 4th ***********/ -#define SQLITE_CREATE_INDEX 1 /* Index Name Table Name */ -#define SQLITE_CREATE_TABLE 2 /* Table Name NULL */ -#define SQLITE_CREATE_TEMP_INDEX 3 /* Index Name Table Name */ -#define SQLITE_CREATE_TEMP_TABLE 4 /* Table Name NULL */ -#define SQLITE_CREATE_TEMP_TRIGGER 5 /* Trigger Name Table Name */ -#define SQLITE_CREATE_TEMP_VIEW 6 /* View Name NULL */ -#define SQLITE_CREATE_TRIGGER 7 /* Trigger Name Table Name */ -#define SQLITE_CREATE_VIEW 8 /* View Name NULL */ -#define SQLITE_DELETE 9 /* Table Name NULL */ -#define SQLITE_DROP_INDEX 10 /* Index Name Table Name */ -#define SQLITE_DROP_TABLE 11 /* Table Name NULL */ -#define SQLITE_DROP_TEMP_INDEX 12 /* Index Name Table Name */ -#define SQLITE_DROP_TEMP_TABLE 13 /* Table Name NULL */ -#define SQLITE_DROP_TEMP_TRIGGER 14 /* Trigger Name Table Name */ -#define SQLITE_DROP_TEMP_VIEW 15 /* View Name NULL */ -#define SQLITE_DROP_TRIGGER 16 /* Trigger Name Table Name */ -#define SQLITE_DROP_VIEW 17 /* View Name NULL */ -#define SQLITE_INSERT 18 /* Table Name NULL */ -#define SQLITE_PRAGMA 19 /* Pragma Name 1st arg or NULL */ -#define SQLITE_READ 20 /* Table Name Column Name */ -#define SQLITE_SELECT 21 /* NULL NULL */ -#define SQLITE_TRANSACTION 22 /* Operation NULL */ -#define SQLITE_UPDATE 23 /* Table Name Column Name */ -#define SQLITE_ATTACH 24 /* Filename NULL */ -#define SQLITE_DETACH 25 /* Database Name NULL */ -#define SQLITE_ALTER_TABLE 26 /* Database Name Table Name */ -#define SQLITE_REINDEX 27 /* Index Name NULL */ -#define SQLITE_ANALYZE 28 /* Table Name NULL */ -#define SQLITE_CREATE_VTABLE 29 /* Table Name Module Name */ -#define SQLITE_DROP_VTABLE 30 /* Table Name Module Name */ -#define SQLITE_FUNCTION 31 /* NULL Function Name */ -#define SQLITE_SAVEPOINT 32 /* Operation Savepoint Name */ -#define SQLITE_COPY 0 /* No longer used */ - -/* -** 
CAPI3REF: Tracing And Profiling Functions -** -** These routines register callback functions that can be used for -** tracing and profiling the execution of SQL statements. -** -** ^The callback function registered by sqlite3_trace() is invoked at -** various times when an SQL statement is being run by [sqlite3_step()]. -** ^The sqlite3_trace() callback is invoked with a UTF-8 rendering of the -** SQL statement text as the statement first begins executing. -** ^(Additional sqlite3_trace() callbacks might occur -** as each triggered subprogram is entered. The callbacks for triggers -** contain a UTF-8 SQL comment that identifies the trigger.)^ -** -** ^The callback function registered by sqlite3_profile() is invoked -** as each SQL statement finishes. ^The profile callback contains -** the original statement text and an estimate of wall-clock time -** of how long that statement took to run. -*/ -SQLITE_API void *sqlite3_trace(sqlite3*, void(*xTrace)(void*,const char*), void*); -SQLITE_API SQLITE_EXPERIMENTAL void *sqlite3_profile(sqlite3*, - void(*xProfile)(void*,const char*,sqlite3_uint64), void*); - -/* -** CAPI3REF: Query Progress Callbacks -** -** ^This routine configures a callback function - the -** progress callback - that is invoked periodically during long -** running calls to [sqlite3_exec()], [sqlite3_step()] and -** [sqlite3_get_table()]. An example use for this -** interface is to keep a GUI updated during a large query. -** -** ^If the progress callback returns non-zero, the operation is -** interrupted. This feature can be used to implement a -** "Cancel" button on a GUI progress dialog box. -** -** The progress handler must not do anything that will modify -** the database connection that invoked the progress handler. -** Note that [sqlite3_prepare_v2()] and [sqlite3_step()] both modify their -** database connections for the meaning of "modify" in this paragraph. -** -*/ -SQLITE_API void sqlite3_progress_handler(sqlite3*, int, int(*)(void*), void*); - -/* -** CAPI3REF: Opening A New Database Connection -** -** ^These routines open an SQLite database file whose name is given by the -** filename argument. ^The filename argument is interpreted as UTF-8 for -** sqlite3_open() and sqlite3_open_v2() and as UTF-16 in the native byte -** order for sqlite3_open16(). ^(A [database connection] handle is usually -** returned in *ppDb, even if an error occurs. The only exception is that -** if SQLite is unable to allocate memory to hold the [sqlite3] object, -** a NULL will be written into *ppDb instead of a pointer to the [sqlite3] -** object.)^ ^(If the database is opened (and/or created) successfully, then -** [SQLITE_OK] is returned. Otherwise an [error code] is returned.)^ ^The -** [sqlite3_errmsg()] or [sqlite3_errmsg16()] routines can be used to obtain -** an English language description of the error following a failure of any -** of the sqlite3_open() routines. -** -** ^The default encoding for the database will be UTF-8 if -** sqlite3_open() or sqlite3_open_v2() is called and -** UTF-16 in the native byte order if sqlite3_open16() is used. -** -** Whether or not an error occurs when it is opened, resources -** associated with the [database connection] handle should be released by -** passing it to [sqlite3_close()] when it is no longer required. -** -** The sqlite3_open_v2() interface works like sqlite3_open() -** except that it accepts two additional parameters for additional control -** over the new database connection. 
^(The flags parameter to -** sqlite3_open_v2() can take one of -** the following three values, optionally combined with the -** [SQLITE_OPEN_NOMUTEX], [SQLITE_OPEN_FULLMUTEX], [SQLITE_OPEN_SHAREDCACHE], -** and/or [SQLITE_OPEN_PRIVATECACHE] flags:)^ -** -** <dl> -** ^(<dt>[SQLITE_OPEN_READONLY]</dt> -** <dd>The database is opened in read-only mode. If the database does not -** already exist, an error is returned.</dd>)^ -** -** ^(<dt>[SQLITE_OPEN_READWRITE]</dt> -** <dd>The database is opened for reading and writing if possible, or reading -** only if the file is write protected by the operating system. In either -** case the database must already exist, otherwise an error is returned.</dd>)^ -** -** ^(<dt>[SQLITE_OPEN_READWRITE] | [SQLITE_OPEN_CREATE]</dt> -** <dd>The database is opened for reading and writing, and is created if -** it does not already exist. This is the behavior that is always used for -** sqlite3_open() and sqlite3_open16().</dd>)^ -** </dl> -** -** If the 3rd parameter to sqlite3_open_v2() is not one of the -** combinations shown above or one of the combinations shown above combined -** with the [SQLITE_OPEN_NOMUTEX], [SQLITE_OPEN_FULLMUTEX], -** [SQLITE_OPEN_SHAREDCACHE] and/or [SQLITE_OPEN_PRIVATECACHE] flags, -** then the behavior is undefined. -** -** ^If the [SQLITE_OPEN_NOMUTEX] flag is set, then the database connection -** opens in the multi-thread [threading mode] as long as the single-thread -** mode has not been set at compile-time or start-time. ^If the -** [SQLITE_OPEN_FULLMUTEX] flag is set then the database connection opens -** in the serialized [threading mode] unless single-thread was -** previously selected at compile-time or start-time. -** ^The [SQLITE_OPEN_SHAREDCACHE] flag causes the database connection to be -** eligible to use [shared cache mode], regardless of whether or not shared -** cache is enabled using [sqlite3_enable_shared_cache()]. ^The -** [SQLITE_OPEN_PRIVATECACHE] flag causes the database connection to not -** participate in [shared cache mode] even if it is enabled. -** -** ^If the filename is ":memory:", then a private, temporary in-memory database -** is created for the connection. ^This in-memory database will vanish when -** the database connection is closed. Future versions of SQLite might -** make use of additional special filenames that begin with the ":" character. -** It is recommended that when a database filename actually does begin with -** a ":" character you should prefix the filename with a pathname such as -** "./" to avoid ambiguity. -** -** ^If the filename is an empty string, then a private, temporary -** on-disk database will be created. ^This private database will be -** automatically deleted as soon as the database connection is closed. -** -** ^The fourth parameter to sqlite3_open_v2() is the name of the -** [sqlite3_vfs] object that defines the operating system interface that -** the new database connection should use. ^If the fourth parameter is -** a NULL pointer then the default [sqlite3_vfs] object is used. -** -** <b>Note to Windows users:</b> The encoding used for the filename argument -** of sqlite3_open() and sqlite3_open_v2() must be UTF-8, not whatever -** codepage is currently defined. Filenames containing international -** characters must be converted to UTF-8 prior to passing them into -** sqlite3_open() or sqlite3_open_v2().
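A minimal sketch of the open/close life cycle described above; the filename "app.db" is only an example, and the NULL fourth argument selects the default VFS:

  #include <sqlite3.h>
  #include <stdio.h>

  int main(void){
    sqlite3 *db = 0;
    int rc = sqlite3_open_v2("app.db", &db,
                             SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE, 0);
    if( rc!=SQLITE_OK ){
      /* A handle is usually returned even on failure, so fetch the message
      ** before releasing it with sqlite3_close(). */
      fprintf(stderr, "cannot open: %s\n",
              db ? sqlite3_errmsg(db) : "out of memory");
    }
    sqlite3_close(db);
    return rc;
  }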
-*/ -SQLITE_API int sqlite3_open( - const char *filename, /* Database filename (UTF-8) */ - sqlite3 **ppDb /* OUT: SQLite db handle */ -); -SQLITE_API int sqlite3_open16( - const void *filename, /* Database filename (UTF-16) */ - sqlite3 **ppDb /* OUT: SQLite db handle */ -); -SQLITE_API int sqlite3_open_v2( - const char *filename, /* Database filename (UTF-8) */ - sqlite3 **ppDb, /* OUT: SQLite db handle */ - int flags, /* Flags */ - const char *zVfs /* Name of VFS module to use */ -); - -/* -** CAPI3REF: Error Codes And Messages -** -** ^The sqlite3_errcode() interface returns the numeric [result code] or -** [extended result code] for the most recent failed sqlite3_* API call -** associated with a [database connection]. If a prior API call failed -** but the most recent API call succeeded, the return value from -** sqlite3_errcode() is undefined. ^The sqlite3_extended_errcode() -** interface is the same except that it always returns the -** [extended result code] even when extended result codes are -** disabled. -** -** ^The sqlite3_errmsg() and sqlite3_errmsg16() return English-language -** text that describes the error, as either UTF-8 or UTF-16 respectively. -** ^(Memory to hold the error message string is managed internally. -** The application does not need to worry about freeing the result. -** However, the error string might be overwritten or deallocated by -** subsequent calls to other SQLite interface functions.)^ -** -** When the serialized [threading mode] is in use, it might be the -** case that a second error occurs on a separate thread in between -** the time of the first error and the call to these interfaces. -** When that happens, the second error will be reported since these -** interfaces always report the most recent result. To avoid -** this, each thread can obtain exclusive use of the [database connection] D -** by invoking [sqlite3_mutex_enter]([sqlite3_db_mutex](D)) before beginning -** to use D and invoking [sqlite3_mutex_leave]([sqlite3_db_mutex](D)) after -** all calls to the interfaces listed here are completed. -** -** If an interface fails with SQLITE_MISUSE, that means the interface -** was invoked incorrectly by the application. In that case, the -** error code and message may or may not be set. -*/ -SQLITE_API int sqlite3_errcode(sqlite3 *db); -SQLITE_API int sqlite3_extended_errcode(sqlite3 *db); -SQLITE_API const char *sqlite3_errmsg(sqlite3*); -SQLITE_API const void *sqlite3_errmsg16(sqlite3*); - -/* -** CAPI3REF: SQL Statement Object -** KEYWORDS: {prepared statement} {prepared statements} -** -** An instance of this object represents a single SQL statement. -** This object is variously known as a "prepared statement" or a -** "compiled SQL statement" or simply as a "statement". -** -** The life of a statement object goes something like this: -** -** <ol> -** <li> Create the object using [sqlite3_prepare_v2()] or a related -** function. -** <li> Bind values to [host parameters] using the sqlite3_bind_*() -** interfaces. -** <li> Run the SQL by calling [sqlite3_step()] one or more times. -** <li> Reset the statement using [sqlite3_reset()] then go back -** to step 2. Do this zero or more times. -** <li> Destroy the object using [sqlite3_finalize()]. -** </ol> -** -** Refer to documentation on individual methods above for additional -** information. -*/ -typedef struct sqlite3_stmt sqlite3_stmt; - -/* -** CAPI3REF: Run-time Limits -** -** ^(This interface allows the size of various constructs to be limited -** on a connection by connection basis. 
The first parameter is the -** [database connection] whose limit is to be set or queried. The -** second parameter is one of the [limit categories] that define a -** class of constructs to be size limited. The third parameter is the -** new limit for that construct. The function returns the old limit.)^ -** -** ^If the new limit is a negative number, the limit is unchanged. -** ^(For the limit category of SQLITE_LIMIT_XYZ there is a -** [limits | hard upper bound] -** set by a compile-time C preprocessor macro named -** [limits | SQLITE_MAX_XYZ]. -** (The "_LIMIT_" in the name is changed to "_MAX_".))^ -** ^Attempts to increase a limit above its hard upper bound are -** silently truncated to the hard upper bound. -** -** Run-time limits are intended for use in applications that manage -** both their own internal database and also databases that are controlled -** by untrusted external sources. An example application might be a -** web browser that has its own databases for storing history and -** separate databases controlled by JavaScript applications downloaded -** off the Internet. The internal databases can be given the -** large, default limits. Databases managed by external sources can -** be given much smaller limits designed to prevent a denial of service -** attack. Developers might also want to use the [sqlite3_set_authorizer()] -** interface to further control untrusted SQL. The size of the database -** created by an untrusted script can be contained using the -** [max_page_count] [PRAGMA]. -** -** New run-time limit categories may be added in future releases. -*/ -SQLITE_API int sqlite3_limit(sqlite3*, int id, int newVal); - -/* -** CAPI3REF: Run-Time Limit Categories -** KEYWORDS: {limit category} {*limit categories} -** -** These constants define various performance limits -** that can be lowered at run-time using [sqlite3_limit()]. -** The synopsis of the meanings of the various limits is shown below. -** Additional information is available at [limits | Limits in SQLite]. 
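Before the category list below, a brief sketch of how an application that evaluates untrusted SQL might lower a few limits; the helper name and the numeric values are arbitrary examples, not recommendations:

  #include <sqlite3.h>

  static void lower_limits(sqlite3 *db){
    int prior;
    prior = sqlite3_limit(db, SQLITE_LIMIT_LENGTH,     1000000);
    prior = sqlite3_limit(db, SQLITE_LIMIT_SQL_LENGTH,  100000);
    prior = sqlite3_limit(db, SQLITE_LIMIT_VDBE_OP,      25000);
    /* A negative new value leaves the limit unchanged, so this call only
    ** queries the current setting. */
    prior = sqlite3_limit(db, SQLITE_LIMIT_ATTACHED, -1);
    (void)prior;
  }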
-** -** <dl> -** ^(<dt>SQLITE_LIMIT_LENGTH</dt> -** <dd>The maximum size of any string or BLOB or table row.<dd>)^ -** -** ^(<dt>SQLITE_LIMIT_SQL_LENGTH</dt> -** <dd>The maximum length of an SQL statement, in bytes.</dd>)^ -** -** ^(<dt>SQLITE_LIMIT_COLUMN</dt> -** <dd>The maximum number of columns in a table definition or in the -** result set of a [SELECT] or the maximum number of columns in an index -** or in an ORDER BY or GROUP BY clause.</dd>)^ -** -** ^(<dt>SQLITE_LIMIT_EXPR_DEPTH</dt> -** <dd>The maximum depth of the parse tree on any expression.</dd>)^ -** -** ^(<dt>SQLITE_LIMIT_COMPOUND_SELECT</dt> -** <dd>The maximum number of terms in a compound SELECT statement.</dd>)^ -** -** ^(<dt>SQLITE_LIMIT_VDBE_OP</dt> -** <dd>The maximum number of instructions in a virtual machine program -** used to implement an SQL statement.</dd>)^ -** -** ^(<dt>SQLITE_LIMIT_FUNCTION_ARG</dt> -** <dd>The maximum number of arguments on a function.</dd>)^ -** -** ^(<dt>SQLITE_LIMIT_ATTACHED</dt> -** <dd>The maximum number of [ATTACH | attached databases].)^</dd> -** -** ^(<dt>SQLITE_LIMIT_LIKE_PATTERN_LENGTH</dt> -** <dd>The maximum length of the pattern argument to the [LIKE] or -** [GLOB] operators.</dd>)^ -** -** ^(<dt>SQLITE_LIMIT_VARIABLE_NUMBER</dt> -** <dd>The maximum number of variables in an SQL statement that can -** be bound.</dd>)^ -** -** ^(<dt>SQLITE_LIMIT_TRIGGER_DEPTH</dt> -** <dd>The maximum depth of recursion for triggers.</dd>)^ -** </dl> -*/ -#define SQLITE_LIMIT_LENGTH 0 -#define SQLITE_LIMIT_SQL_LENGTH 1 -#define SQLITE_LIMIT_COLUMN 2 -#define SQLITE_LIMIT_EXPR_DEPTH 3 -#define SQLITE_LIMIT_COMPOUND_SELECT 4 -#define SQLITE_LIMIT_VDBE_OP 5 -#define SQLITE_LIMIT_FUNCTION_ARG 6 -#define SQLITE_LIMIT_ATTACHED 7 -#define SQLITE_LIMIT_LIKE_PATTERN_LENGTH 8 -#define SQLITE_LIMIT_VARIABLE_NUMBER 9 -#define SQLITE_LIMIT_TRIGGER_DEPTH 10 - -/* -** CAPI3REF: Compiling An SQL Statement -** KEYWORDS: {SQL statement compiler} -** -** To execute an SQL query, it must first be compiled into a byte-code -** program using one of these routines. -** -** The first argument, "db", is a [database connection] obtained from a -** prior successful call to [sqlite3_open()], [sqlite3_open_v2()] or -** [sqlite3_open16()]. The database connection must not have been closed. -** -** The second argument, "zSql", is the statement to be compiled, encoded -** as either UTF-8 or UTF-16. The sqlite3_prepare() and sqlite3_prepare_v2() -** interfaces use UTF-8, and sqlite3_prepare16() and sqlite3_prepare16_v2() -** use UTF-16. -** -** ^If the nByte argument is less than zero, then zSql is read up to the -** first zero terminator. ^If nByte is non-negative, then it is the maximum -** number of bytes read from zSql. ^When nByte is non-negative, the -** zSql string ends at either the first '\000' or '\u0000' character or -** the nByte-th byte, whichever comes first. If the caller knows -** that the supplied string is nul-terminated, then there is a small -** performance advantage to be gained by passing an nByte parameter that -** is equal to the number of bytes in the input string <i>including</i> -** the nul-terminator bytes. -** -** ^If pzTail is not NULL then *pzTail is made to point to the first byte -** past the end of the first SQL statement in zSql. These routines only -** compile the first statement in zSql, so *pzTail is left pointing to -** what remains uncompiled. -** -** ^*ppStmt is left pointing to a compiled [prepared statement] that can be -** executed using [sqlite3_step()]. 
^If there is an error, *ppStmt is set -** to NULL. ^If the input text contains no SQL (if the input is an empty -** string or a comment) then *ppStmt is set to NULL. -** The calling procedure is responsible for deleting the compiled -** SQL statement using [sqlite3_finalize()] after it has finished with it. -** ppStmt may not be NULL. -** -** ^On success, the sqlite3_prepare() family of routines return [SQLITE_OK]; -** otherwise an [error code] is returned. -** -** The sqlite3_prepare_v2() and sqlite3_prepare16_v2() interfaces are -** recommended for all new programs. The two older interfaces are retained -** for backwards compatibility, but their use is discouraged. -** ^In the "v2" interfaces, the prepared statement -** that is returned (the [sqlite3_stmt] object) contains a copy of the -** original SQL text. This causes the [sqlite3_step()] interface to -** behave differently in three ways: -** -** <ol> -** <li> -** ^If the database schema changes, instead of returning [SQLITE_SCHEMA] as it -** always used to do, [sqlite3_step()] will automatically recompile the SQL -** statement and try to run it again. ^If the schema has changed in -** a way that makes the statement no longer valid, [sqlite3_step()] will still -** return [SQLITE_SCHEMA]. But unlike the legacy behavior, [SQLITE_SCHEMA] is -** now a fatal error. Calling [sqlite3_prepare_v2()] again will not make the -** error go away. Note: use [sqlite3_errmsg()] to find the text -** of the parsing error that results in an [SQLITE_SCHEMA] return. -** </li> -** -** <li> -** ^When an error occurs, [sqlite3_step()] will return one of the detailed -** [error codes] or [extended error codes]. ^The legacy behavior was that -** [sqlite3_step()] would only return a generic [SQLITE_ERROR] result code -** and the application would have to make a second call to [sqlite3_reset()] -** in order to find the underlying cause of the problem. With the "v2" prepare -** interfaces, the underlying reason for the error is returned immediately. -** </li> -** -** <li> -** ^If the value of a [parameter | host parameter] in the WHERE clause might -** change the query plan for a statement, then the statement may be -** automatically recompiled (as if there had been a schema change) on the first -** [sqlite3_step()] call following any change to the -** [sqlite3_bind_text | bindings] of the [parameter]. -** </li> -** </ol> -*/ -SQLITE_API int sqlite3_prepare( - sqlite3 *db, /* Database handle */ - const char *zSql, /* SQL statement, UTF-8 encoded */ - int nByte, /* Maximum length of zSql in bytes. */ - sqlite3_stmt **ppStmt, /* OUT: Statement handle */ - const char **pzTail /* OUT: Pointer to unused portion of zSql */ -); -SQLITE_API int sqlite3_prepare_v2( - sqlite3 *db, /* Database handle */ - const char *zSql, /* SQL statement, UTF-8 encoded */ - int nByte, /* Maximum length of zSql in bytes. */ - sqlite3_stmt **ppStmt, /* OUT: Statement handle */ - const char **pzTail /* OUT: Pointer to unused portion of zSql */ -); -SQLITE_API int sqlite3_prepare16( - sqlite3 *db, /* Database handle */ - const void *zSql, /* SQL statement, UTF-16 encoded */ - int nByte, /* Maximum length of zSql in bytes. */ - sqlite3_stmt **ppStmt, /* OUT: Statement handle */ - const void **pzTail /* OUT: Pointer to unused portion of zSql */ -); -SQLITE_API int sqlite3_prepare16_v2( - sqlite3 *db, /* Database handle */ - const void *zSql, /* SQL statement, UTF-16 encoded */ - int nByte, /* Maximum length of zSql in bytes. 
*/ - sqlite3_stmt **ppStmt, /* OUT: Statement handle */ - const void **pzTail /* OUT: Pointer to unused portion of zSql */ -); - -/* -** CAPI3REF: Retrieving Statement SQL -** -** ^This interface can be used to retrieve a saved copy of the original -** SQL text used to create a [prepared statement] if that statement was -** compiled using either [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()]. -*/ -SQLITE_API const char *sqlite3_sql(sqlite3_stmt *pStmt); - -/* -** CAPI3REF: Dynamically Typed Value Object -** KEYWORDS: {protected sqlite3_value} {unprotected sqlite3_value} -** -** SQLite uses the sqlite3_value object to represent all values -** that can be stored in a database table. SQLite uses dynamic typing -** for the values it stores. ^Values stored in sqlite3_value objects -** can be integers, floating point values, strings, BLOBs, or NULL. -** -** An sqlite3_value object may be either "protected" or "unprotected". -** Some interfaces require a protected sqlite3_value. Other interfaces -** will accept either a protected or an unprotected sqlite3_value. -** Every interface that accepts sqlite3_value arguments specifies -** whether or not it requires a protected sqlite3_value. -** -** The terms "protected" and "unprotected" refer to whether or not -** a mutex is held. An internal mutex is held for a protected -** sqlite3_value object but no mutex is held for an unprotected -** sqlite3_value object. If SQLite is compiled to be single-threaded -** (with [SQLITE_THREADSAFE=0] and with [sqlite3_threadsafe()] returning 0) -** or if SQLite is run in one of reduced mutex modes -** [SQLITE_CONFIG_SINGLETHREAD] or [SQLITE_CONFIG_MULTITHREAD] -** then there is no distinction between protected and unprotected -** sqlite3_value objects and they can be used interchangeably. However, -** for maximum code portability it is recommended that applications -** still make the distinction between protected and unprotected -** sqlite3_value objects even when not strictly required. -** -** ^The sqlite3_value objects that are passed as parameters into the -** implementation of [application-defined SQL functions] are protected. -** ^The sqlite3_value object returned by -** [sqlite3_column_value()] is unprotected. -** Unprotected sqlite3_value objects may only be used with -** [sqlite3_result_value()] and [sqlite3_bind_value()]. -** The [sqlite3_value_blob | sqlite3_value_type()] family of -** interfaces require protected sqlite3_value objects. -*/ -typedef struct Mem sqlite3_value; - -/* -** CAPI3REF: SQL Function Context Object -** -** The context in which an SQL function executes is stored in an -** sqlite3_context object. ^A pointer to an sqlite3_context object -** is always the first parameter to [application-defined SQL functions]. -** The application-defined SQL function implementation will pass this -** pointer through into calls to [sqlite3_result_int | sqlite3_result()], -** [sqlite3_aggregate_context()], [sqlite3_user_data()], -** [sqlite3_context_db_handle()], [sqlite3_get_auxdata()], -** and/or [sqlite3_set_auxdata()]. -*/ -typedef struct sqlite3_context sqlite3_context; - -/* -** CAPI3REF: Binding Values To Prepared Statements -** KEYWORDS: {host parameter} {host parameters} {host parameter name} -** KEYWORDS: {SQL parameter} {SQL parameters} {parameter binding} -** -** ^(In the SQL statement text input to [sqlite3_prepare_v2()] and its variants, -** literals may be replaced by a [parameter] that matches one of following -** templates: -** -** <ul> -** <li> ?
-** <li> ?NNN -** <li> :VVV -** <li> @VVV -** <li> $VVV -** </ul> -** -** In the templates above, NNN represents an integer literal, -** and VVV represents an alphanumeric identifer.)^ ^The values of these -** parameters (also called "host parameter names" or "SQL parameters") -** can be set using the sqlite3_bind_*() routines defined here. -** -** ^The first argument to the sqlite3_bind_*() routines is always -** a pointer to the [sqlite3_stmt] object returned from -** [sqlite3_prepare_v2()] or its variants. -** -** ^The second argument is the index of the SQL parameter to be set. -** ^The leftmost SQL parameter has an index of 1. ^When the same named -** SQL parameter is used more than once, second and subsequent -** occurrences have the same index as the first occurrence. -** ^The index for named parameters can be looked up using the -** [sqlite3_bind_parameter_index()] API if desired. ^The index -** for "?NNN" parameters is the value of NNN. -** ^The NNN value must be between 1 and the [sqlite3_limit()] -** parameter [SQLITE_LIMIT_VARIABLE_NUMBER] (default value: 999). -** -** ^The third argument is the value to bind to the parameter. -** -** ^(In those routines that have a fourth argument, its value is the -** number of bytes in the parameter. To be clear: the value is the -** number of <u>bytes</u> in the value, not the number of characters.)^ -** ^If the fourth parameter is negative, the length of the string is -** the number of bytes up to the first zero terminator. -** -** ^The fifth argument to sqlite3_bind_blob(), sqlite3_bind_text(), and -** sqlite3_bind_text16() is a destructor used to dispose of the BLOB or -** string after SQLite has finished with it. ^If the fifth argument is -** the special value [SQLITE_STATIC], then SQLite assumes that the -** information is in static, unmanaged space and does not need to be freed. -** ^If the fifth argument has the value [SQLITE_TRANSIENT], then -** SQLite makes its own private copy of the data immediately, before -** the sqlite3_bind_*() routine returns. -** -** ^The sqlite3_bind_zeroblob() routine binds a BLOB of length N that -** is filled with zeroes. ^A zeroblob uses a fixed amount of memory -** (just an integer to hold its size) while it is being processed. -** Zeroblobs are intended to serve as placeholders for BLOBs whose -** content is later written using -** [sqlite3_blob_open | incremental BLOB I/O] routines. -** ^A negative value for the zeroblob results in a zero-length BLOB. -** -** ^If any of the sqlite3_bind_*() routines are called with a NULL pointer -** for the [prepared statement] or with a prepared statement for which -** [sqlite3_step()] has been called more recently than [sqlite3_reset()], -** then the call will return [SQLITE_MISUSE]. If any sqlite3_bind_() -** routine is passed a [prepared statement] that has been finalized, the -** result is undefined and probably harmful. -** -** ^Bindings are not cleared by the [sqlite3_reset()] routine. -** ^Unbound parameters are interpreted as NULL. -** -** ^The sqlite3_bind_* routines return [SQLITE_OK] on success or an -** [error code] if anything goes wrong. -** ^[SQLITE_RANGE] is returned if the parameter -** index is out of range. ^[SQLITE_NOMEM] is returned if malloc() fails. -** -** See also: [sqlite3_bind_parameter_count()], -** [sqlite3_bind_parameter_name()], and [sqlite3_bind_parameter_index()]. 
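A sketch of the usual bind-and-run pattern; the table t1 and the helper below are hypothetical, and SQLITE_TRANSIENT is used so SQLite copies the string before the routine returns:

  #include <sqlite3.h>

  static int insert_one(sqlite3 *db, const char *zValue){
    sqlite3_stmt *pStmt = 0;
    int rc = sqlite3_prepare_v2(db, "INSERT INTO t1(x) VALUES(?1)", -1,
                                &pStmt, 0);
    if( rc==SQLITE_OK ){
      sqlite3_bind_text(pStmt, 1, zValue, -1, SQLITE_TRANSIENT);
      rc = sqlite3_step(pStmt);               /* SQLITE_DONE on success */
      if( rc==SQLITE_DONE ) rc = SQLITE_OK;
    }
    if( pStmt ) sqlite3_finalize(pStmt);
    return rc;
  }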
-*/ -SQLITE_API int sqlite3_bind_blob(sqlite3_stmt*, int, const void*, int n, void(*)(void*)); -SQLITE_API int sqlite3_bind_double(sqlite3_stmt*, int, double); -SQLITE_API int sqlite3_bind_int(sqlite3_stmt*, int, int); -SQLITE_API int sqlite3_bind_int64(sqlite3_stmt*, int, sqlite3_int64); -SQLITE_API int sqlite3_bind_null(sqlite3_stmt*, int); -SQLITE_API int sqlite3_bind_text(sqlite3_stmt*, int, const char*, int n, void(*)(void*)); -SQLITE_API int sqlite3_bind_text16(sqlite3_stmt*, int, const void*, int, void(*)(void*)); -SQLITE_API int sqlite3_bind_value(sqlite3_stmt*, int, const sqlite3_value*); -SQLITE_API int sqlite3_bind_zeroblob(sqlite3_stmt*, int, int n); - -/* -** CAPI3REF: Number Of SQL Parameters -** -** ^This routine can be used to find the number of [SQL parameters] -** in a [prepared statement]. SQL parameters are tokens of the -** form "?", "?NNN", ":AAA", "$AAA", or "@AAA" that serve as -** placeholders for values that are [sqlite3_bind_blob | bound] -** to the parameters at a later time. -** -** ^(This routine actually returns the index of the largest (rightmost) -** parameter. For all forms except ?NNN, this will correspond to the -** number of unique parameters. If parameters of the ?NNN form are used, -** there may be gaps in the list.)^ -** -** See also: [sqlite3_bind_blob|sqlite3_bind()], -** [sqlite3_bind_parameter_name()], and -** [sqlite3_bind_parameter_index()]. -*/ -SQLITE_API int sqlite3_bind_parameter_count(sqlite3_stmt*); - -/* -** CAPI3REF: Name Of A Host Parameter -** -** ^The sqlite3_bind_parameter_name(P,N) interface returns -** the name of the N-th [SQL parameter] in the [prepared statement] P. -** ^(SQL parameters of the form "?NNN" or ":AAA" or "@AAA" or "$AAA" -** have a name which is the string "?NNN" or ":AAA" or "@AAA" or "$AAA" -** respectively. -** In other words, the initial ":" or "$" or "@" or "?" -** is included as part of the name.)^ -** ^Parameters of the form "?" without a following integer have no name -** and are referred to as "nameless" or "anonymous parameters". -** -** ^The first host parameter has an index of 1, not 0. -** -** ^If the value N is out of range or if the N-th parameter is -** nameless, then NULL is returned. ^The returned string is -** always in UTF-8 encoding even if the named parameter was -** originally specified as UTF-16 in [sqlite3_prepare16()] or -** [sqlite3_prepare16_v2()]. -** -** See also: [sqlite3_bind_blob|sqlite3_bind()], -** [sqlite3_bind_parameter_count()], and -** [sqlite3_bind_parameter_index()]. -*/ -SQLITE_API const char *sqlite3_bind_parameter_name(sqlite3_stmt*, int); - -/* -** CAPI3REF: Index Of A Parameter With A Given Name -** -** ^Return the index of an SQL parameter given its name. ^The -** index value returned is suitable for use as the second -** parameter to [sqlite3_bind_blob|sqlite3_bind()]. ^A zero -** is returned if no matching parameter is found. ^The parameter -** name must be given in UTF-8 even if the original statement -** was prepared from UTF-16 text using [sqlite3_prepare16_v2()]. -** -** See also: [sqlite3_bind_blob|sqlite3_bind()], -** [sqlite3_bind_parameter_count()], and -** [sqlite3_bind_parameter_index()]. -*/ -SQLITE_API int sqlite3_bind_parameter_index(sqlite3_stmt*, const char *zName); - -/* -** CAPI3REF: Reset All Bindings On A Prepared Statement -** -** ^Contrary to the intuition of many, [sqlite3_reset()] does not reset -** the [sqlite3_bind_blob | bindings] on a [prepared statement]. -** ^Use this routine to reset all host parameters to NULL. 
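For example (an illustrative fragment; the parameter name ":when" and the statement it belongs to are hypothetical), a named parameter can be located by its full name, including the leading ":", and all bindings can later be cleared before the statement is reused:

  #include <sqlite3.h>

  static void bind_when(sqlite3_stmt *pStmt, int julianDay){
    int idx = sqlite3_bind_parameter_index(pStmt, ":when");
    if( idx>0 ){                      /* 0 means no such parameter */
      sqlite3_bind_int(pStmt, idx, julianDay);
    }
    /* ... sqlite3_step() / sqlite3_reset() ... */
    sqlite3_clear_bindings(pStmt);    /* all parameters revert to NULL */
  }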
-*/ -SQLITE_API int sqlite3_clear_bindings(sqlite3_stmt*); - -/* -** CAPI3REF: Number Of Columns In A Result Set -** -** ^Return the number of columns in the result set returned by the -** [prepared statement]. ^This routine returns 0 if pStmt is an SQL -** statement that does not return data (for example an [UPDATE]). -*/ -SQLITE_API int sqlite3_column_count(sqlite3_stmt *pStmt); - -/* -** CAPI3REF: Column Names In A Result Set -** -** ^These routines return the name assigned to a particular column -** in the result set of a [SELECT] statement. ^The sqlite3_column_name() -** interface returns a pointer to a zero-terminated UTF-8 string -** and sqlite3_column_name16() returns a pointer to a zero-terminated -** UTF-16 string. ^The first parameter is the [prepared statement] -** that implements the [SELECT] statement. ^The second parameter is the -** column number. ^The leftmost column is number 0. -** -** ^The returned string pointer is valid until either the [prepared statement] -** is destroyed by [sqlite3_finalize()] or until the next call to -** sqlite3_column_name() or sqlite3_column_name16() on the same column. -** -** ^If sqlite3_malloc() fails during the processing of either routine -** (for example during a conversion from UTF-8 to UTF-16) then a -** NULL pointer is returned. -** -** ^The name of a result column is the value of the "AS" clause for -** that column, if there is an AS clause. If there is no AS clause -** then the name of the column is unspecified and may change from -** one release of SQLite to the next. -*/ -SQLITE_API const char *sqlite3_column_name(sqlite3_stmt*, int N); -SQLITE_API const void *sqlite3_column_name16(sqlite3_stmt*, int N); - -/* -** CAPI3REF: Source Of Data In A Query Result -** -** ^These routines provide a means to determine the database, table, and -** table column that is the origin of a particular result column in -** [SELECT] statement. -** ^The name of the database or table or column can be returned as -** either a UTF-8 or UTF-16 string. ^The _database_ routines return -** the database name, the _table_ routines return the table name, and -** the origin_ routines return the column name. -** ^The returned string is valid until the [prepared statement] is destroyed -** using [sqlite3_finalize()] or until the same information is requested -** again in a different encoding. -** -** ^The names returned are the original un-aliased names of the -** database, table, and column. -** -** ^The first argument to these interfaces is a [prepared statement]. -** ^These functions return information about the Nth result column returned by -** the statement, where N is the second function argument. -** ^The left-most column is column 0 for these routines. -** -** ^If the Nth column returned by the statement is an expression or -** subquery and is not a column value, then all of these functions return -** NULL. ^These routine might also return NULL if a memory allocation error -** occurs. ^Otherwise, they return the name of the attached database, table, -** or column that query result column was extracted from. -** -** ^As with all other SQLite APIs, those whose names end with "16" return -** UTF-16 encoded strings and the other functions return UTF-8. -** -** ^These APIs are only available if the library was compiled with the -** [SQLITE_ENABLE_COLUMN_METADATA] C-preprocessor symbol. -** -** If two or more threads call one or more of these routines against the same -** prepared statement and column at the same time then the results are -** undefined. 
-** -** If two or more threads call one or more -** [sqlite3_column_database_name | column metadata interfaces] -** for the same [prepared statement] and result column -** at the same time then the results are undefined. -*/ -SQLITE_API const char *sqlite3_column_database_name(sqlite3_stmt*,int); -SQLITE_API const void *sqlite3_column_database_name16(sqlite3_stmt*,int); -SQLITE_API const char *sqlite3_column_table_name(sqlite3_stmt*,int); -SQLITE_API const void *sqlite3_column_table_name16(sqlite3_stmt*,int); -SQLITE_API const char *sqlite3_column_origin_name(sqlite3_stmt*,int); -SQLITE_API const void *sqlite3_column_origin_name16(sqlite3_stmt*,int); - -/* -** CAPI3REF: Declared Datatype Of A Query Result -** -** ^(The first parameter is a [prepared statement]. -** If this statement is a [SELECT] statement and the Nth column of the -** returned result set of that [SELECT] is a table column (not an -** expression or subquery) then the declared type of the table -** column is returned.)^ ^If the Nth column of the result set is an -** expression or subquery, then a NULL pointer is returned. -** ^The returned string is always UTF-8 encoded. -** -** ^(For example, given the database schema: -** -** CREATE TABLE t1(c1 VARIANT); -** -** and the following statement to be compiled: -** -** SELECT c1 + 1, c1 FROM t1; -** -** this routine would return the string "VARIANT" for the second result -** column (i==1), and a NULL pointer for the first result column (i==0).)^ -** -** ^SQLite uses dynamic run-time typing. ^So just because a column -** is declared to contain a particular type does not mean that the -** data stored in that column is of the declared type. SQLite is -** strongly typed, but the typing is dynamic not static. ^Type -** is associated with individual values, not with the containers -** used to hold those values. -*/ -SQLITE_API const char *sqlite3_column_decltype(sqlite3_stmt*,int); -SQLITE_API const void *sqlite3_column_decltype16(sqlite3_stmt*,int); - -/* -** CAPI3REF: Evaluate An SQL Statement -** -** After a [prepared statement] has been prepared using either -** [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()] or one of the legacy -** interfaces [sqlite3_prepare()] or [sqlite3_prepare16()], this function -** must be called one or more times to evaluate the statement. -** -** The details of the behavior of the sqlite3_step() interface depend -** on whether the statement was prepared using the newer "v2" interface -** [sqlite3_prepare_v2()] and [sqlite3_prepare16_v2()] or the older legacy -** interface [sqlite3_prepare()] and [sqlite3_prepare16()]. The use of the -** new "v2" interface is recommended for new applications but the legacy -** interface will continue to be supported. -** -** ^In the legacy interface, the return value will be either [SQLITE_BUSY], -** [SQLITE_DONE], [SQLITE_ROW], [SQLITE_ERROR], or [SQLITE_MISUSE]. -** ^With the "v2" interface, any of the other [result codes] or -** [extended result codes] might be returned as well. -** -** ^[SQLITE_BUSY] means that the database engine was unable to acquire the -** database locks it needs to do its job. ^If the statement is a [COMMIT] -** or occurs outside of an explicit transaction, then you can retry the -** statement. If the statement is not a [COMMIT] and occurs within a -** explicit transaction then you should rollback the transaction before -** continuing. -** -** ^[SQLITE_DONE] means that the statement has finished executing -** successfully. 
sqlite3_step() should not be called again on this virtual -** machine without first calling [sqlite3_reset()] to reset the virtual -** machine back to its initial state. -** -** ^If the SQL statement being executed returns any data, then [SQLITE_ROW] -** is returned each time a new row of data is ready for processing by the -** caller. The values may be accessed using the [column access functions]. -** sqlite3_step() is called again to retrieve the next row of data. -** -** ^[SQLITE_ERROR] means that a run-time error (such as a constraint -** violation) has occurred. sqlite3_step() should not be called again on -** the VM. More information may be found by calling [sqlite3_errmsg()]. -** ^With the legacy interface, a more specific error code (for example, -** [SQLITE_INTERRUPT], [SQLITE_SCHEMA], [SQLITE_CORRUPT], and so forth) -** can be obtained by calling [sqlite3_reset()] on the -** [prepared statement]. ^In the "v2" interface, -** the more specific error code is returned directly by sqlite3_step(). -** -** [SQLITE_MISUSE] means that this routine was called inappropriately. -** Perhaps it was called on a [prepared statement] that has -** already been [sqlite3_finalize | finalized] or on one that had -** previously returned [SQLITE_ERROR] or [SQLITE_DONE]. Or it could -** be the case that the same database connection is being used by two or -** more threads at the same moment in time. -** -** <b>Goofy Interface Alert:</b> In the legacy interface, the sqlite3_step() -** API always returns a generic error code, [SQLITE_ERROR], following any -** error other than [SQLITE_BUSY] and [SQLITE_MISUSE]. You must call -** [sqlite3_reset()] or [sqlite3_finalize()] in order to find one of the -** specific [error codes] that better describes the error. -** We admit that this is a goofy design. The problem has been fixed -** with the "v2" interface. If you prepare all of your SQL statements -** using either [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()] instead -** of the legacy [sqlite3_prepare()] and [sqlite3_prepare16()] interfaces, -** then the more specific [error codes] are returned directly -** by sqlite3_step(). The use of the "v2" interface is recommended. -*/ -SQLITE_API int sqlite3_step(sqlite3_stmt*); - -/* -** CAPI3REF: Number of columns in a result set -** -** ^The sqlite3_data_count(P) interface returns the number of columns in -** the result set of [prepared statement] P. -*/ -SQLITE_API int sqlite3_data_count(sqlite3_stmt *pStmt); - -/* -** CAPI3REF: Fundamental Datatypes -** KEYWORDS: SQLITE_TEXT -** -** ^(Every value in SQLite has one of five fundamental datatypes: -** -** <ul> -** <li> 64-bit signed integer -** <li> 64-bit IEEE floating point number -** <li> string -** <li> BLOB -** <li> NULL -** </ul>)^ -** -** These constants are codes for each of those types. -** -** Note that the SQLITE_TEXT constant was also used in SQLite version 2 -** for a completely different meaning. Software that links against both -** SQLite version 2 and SQLite version 3 should use SQLITE3_TEXT, not -** SQLITE_TEXT. -*/ -#define SQLITE_INTEGER 1 -#define SQLITE_FLOAT 2 -#define SQLITE_BLOB 4 -#define SQLITE_NULL 5 -#ifdef SQLITE_TEXT -# undef SQLITE_TEXT -#else -# define SQLITE_TEXT 3 -#endif -#define SQLITE3_TEXT 3 - -/* -** CAPI3REF: Result Values From A Query -** KEYWORDS: {column access functions} -** -** These routines form the "result set" interface. -** -** ^These routines return information about a single column of the current -** result row of a query.
^In every case the first argument is a pointer -** to the [prepared statement] that is being evaluated (the [sqlite3_stmt*] -** that was returned from [sqlite3_prepare_v2()] or one of its variants) -** and the second argument is the index of the column for which information -** should be returned. ^The leftmost column of the result set has the index 0. -** ^The number of columns in the result can be determined using -** [sqlite3_column_count()]. -** -** If the SQL statement does not currently point to a valid row, or if the -** column index is out of range, the result is undefined. -** These routines may only be called when the most recent call to -** [sqlite3_step()] has returned [SQLITE_ROW] and neither -** [sqlite3_reset()] nor [sqlite3_finalize()] have been called subsequently. -** If any of these routines are called after [sqlite3_reset()] or -** [sqlite3_finalize()] or after [sqlite3_step()] has returned -** something other than [SQLITE_ROW], the results are undefined. -** If [sqlite3_step()] or [sqlite3_reset()] or [sqlite3_finalize()] -** are called from a different thread while any of these routines -** are pending, then the results are undefined. -** -** ^The sqlite3_column_type() routine returns the -** [SQLITE_INTEGER | datatype code] for the initial data type -** of the result column. ^The returned value is one of [SQLITE_INTEGER], -** [SQLITE_FLOAT], [SQLITE_TEXT], [SQLITE_BLOB], or [SQLITE_NULL]. The value -** returned by sqlite3_column_type() is only meaningful if no type -** conversions have occurred as described below. After a type conversion, -** the value returned by sqlite3_column_type() is undefined. Future -** versions of SQLite may change the behavior of sqlite3_column_type() -** following a type conversion. -** -** ^If the result is a BLOB or UTF-8 string then the sqlite3_column_bytes() -** routine returns the number of bytes in that BLOB or string. -** ^If the result is a UTF-16 string, then sqlite3_column_bytes() converts -** the string to UTF-8 and then returns the number of bytes. -** ^If the result is a numeric value then sqlite3_column_bytes() uses -** [sqlite3_snprintf()] to convert that value to a UTF-8 string and returns -** the number of bytes in that string. -** ^The value returned does not include the zero terminator at the end -** of the string. ^For clarity: the value returned is the number of -** bytes in the string, not the number of characters. -** -** ^Strings returned by sqlite3_column_text() and sqlite3_column_text16(), -** even empty strings, are always zero terminated. ^The return -** value from sqlite3_column_blob() for a zero-length BLOB is an arbitrary -** pointer, possibly even a NULL pointer. -** -** ^The sqlite3_column_bytes16() routine is similar to sqlite3_column_bytes() -** but leaves the result in UTF-16 in native byte order instead of UTF-8. -** ^The zero terminator is not included in this count. -** -** ^The object returned by [sqlite3_column_value()] is an -** [unprotected sqlite3_value] object. An unprotected sqlite3_value object -** may only be used with [sqlite3_bind_value()] and [sqlite3_result_value()]. -** If the [unprotected sqlite3_value] object returned by -** [sqlite3_column_value()] is used in any other way, including calls -** to routines like [sqlite3_value_int()], [sqlite3_value_text()], -** or [sqlite3_value_bytes()], then the behavior is undefined. -** -** These routines attempt to convert the value where appropriate. 
^For -** example, if the internal representation is FLOAT and a text result -** is requested, [sqlite3_snprintf()] is used internally to perform the -** conversion automatically. ^(The following table details the conversions -** that are applied: -** -** <blockquote> -** <table border="1"> -** <tr><th> Internal<br>Type <th> Requested<br>Type <th> Conversion -** -** <tr><td> NULL <td> INTEGER <td> Result is 0 -** <tr><td> NULL <td> FLOAT <td> Result is 0.0 -** <tr><td> NULL <td> TEXT <td> Result is NULL pointer -** <tr><td> NULL <td> BLOB <td> Result is NULL pointer -** <tr><td> INTEGER <td> FLOAT <td> Convert from integer to float -** <tr><td> INTEGER <td> TEXT <td> ASCII rendering of the integer -** <tr><td> INTEGER <td> BLOB <td> Same as INTEGER->TEXT -** <tr><td> FLOAT <td> INTEGER <td> Convert from float to integer -** <tr><td> FLOAT <td> TEXT <td> ASCII rendering of the float -** <tr><td> FLOAT <td> BLOB <td> Same as FLOAT->TEXT -** <tr><td> TEXT <td> INTEGER <td> Use atoi() -** <tr><td> TEXT <td> FLOAT <td> Use atof() -** <tr><td> TEXT <td> BLOB <td> No change -** <tr><td> BLOB <td> INTEGER <td> Convert to TEXT then use atoi() -** <tr><td> BLOB <td> FLOAT <td> Convert to TEXT then use atof() -** <tr><td> BLOB <td> TEXT <td> Add a zero terminator if needed -** </table> -** </blockquote>)^ -** -** The table above makes reference to standard C library functions atoi() -** and atof(). SQLite does not really use these functions. It has its -** own equivalent internal routines. The atoi() and atof() names are -** used in the table for brevity and because they are familiar to most -** C programmers. -** -** ^Note that when type conversions occur, pointers returned by prior -** calls to sqlite3_column_blob(), sqlite3_column_text(), and/or -** sqlite3_column_text16() may be invalidated. -** ^(Type conversions and pointer invalidations might occur -** in the following cases: -** -** <ul> -** <li> The initial content is a BLOB and sqlite3_column_text() or -** sqlite3_column_text16() is called. A zero-terminator might -** need to be added to the string.</li> -** <li> The initial content is UTF-8 text and sqlite3_column_bytes16() or -** sqlite3_column_text16() is called. The content must be converted -** to UTF-16.</li> -** <li> The initial content is UTF-16 text and sqlite3_column_bytes() or -** sqlite3_column_text() is called. The content must be converted -** to UTF-8.</li> -** </ul>)^ -** -** ^Conversions between UTF-16be and UTF-16le are always done in place and do -** not invalidate a prior pointer, though of course the content of the buffer -** that the prior pointer points to will have been modified. Other kinds -** of conversion are done in place when it is possible, but sometimes they -** are not possible and in those cases prior pointers are invalidated. -** -** ^(The safest and easiest to remember policy is to invoke these routines -** in one of the following ways: -** -** <ul> -** <li>sqlite3_column_text() followed by sqlite3_column_bytes()</li> -** <li>sqlite3_column_blob() followed by sqlite3_column_bytes()</li> -** <li>sqlite3_column_text16() followed by sqlite3_column_bytes16()</li> -** </ul>)^ -** -** In other words, you should call sqlite3_column_text(), -** sqlite3_column_blob(), or sqlite3_column_text16() first to force the result -** into the desired format, then invoke sqlite3_column_bytes() or -** sqlite3_column_bytes16() to find the size of the result. 
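For instance, a fragment that follows that ordering for a single UTF-8 text column might look like this (the helper name and column index are illustrative):

  #include <sqlite3.h>
  #include <stdio.h>

  static void print_col0(sqlite3_stmt *pStmt){
    /* Force the value into UTF-8 first, then ask for its size in bytes. */
    const unsigned char *zTxt = sqlite3_column_text(pStmt, 0);
    int nByte = sqlite3_column_bytes(pStmt, 0);
    if( zTxt ) printf("%.*s\n", nByte, (const char*)zTxt);
  }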
Do not mix calls -** to sqlite3_column_text() or sqlite3_column_blob() with calls to -** sqlite3_column_bytes16(), and do not mix calls to sqlite3_column_text16() -** with calls to sqlite3_column_bytes(). -** -** ^The pointers returned are valid until a type conversion occurs as -** described above, or until [sqlite3_step()] or [sqlite3_reset()] or -** [sqlite3_finalize()] is called. ^The memory space used to hold strings -** and BLOBs is freed automatically. Do <b>not</b> pass the pointers returned -** [sqlite3_column_blob()], [sqlite3_column_text()], etc. into -** [sqlite3_free()]. -** -** ^(If a memory allocation error occurs during the evaluation of any -** of these routines, a default value is returned. The default value -** is either the integer 0, the floating point number 0.0, or a NULL -** pointer. Subsequent calls to [sqlite3_errcode()] will return -** [SQLITE_NOMEM].)^ -*/ -SQLITE_API const void *sqlite3_column_blob(sqlite3_stmt*, int iCol); -SQLITE_API int sqlite3_column_bytes(sqlite3_stmt*, int iCol); -SQLITE_API int sqlite3_column_bytes16(sqlite3_stmt*, int iCol); -SQLITE_API double sqlite3_column_double(sqlite3_stmt*, int iCol); -SQLITE_API int sqlite3_column_int(sqlite3_stmt*, int iCol); -SQLITE_API sqlite3_int64 sqlite3_column_int64(sqlite3_stmt*, int iCol); -SQLITE_API const unsigned char *sqlite3_column_text(sqlite3_stmt*, int iCol); -SQLITE_API const void *sqlite3_column_text16(sqlite3_stmt*, int iCol); -SQLITE_API int sqlite3_column_type(sqlite3_stmt*, int iCol); -SQLITE_API sqlite3_value *sqlite3_column_value(sqlite3_stmt*, int iCol); - -/* -** CAPI3REF: Destroy A Prepared Statement Object -** -** ^The sqlite3_finalize() function is called to delete a [prepared statement]. -** ^If the statement was executed successfully or not executed at all, then -** SQLITE_OK is returned. ^If execution of the statement failed then an -** [error code] or [extended error code] is returned. -** -** ^This routine can be called at any point during the execution of the -** [prepared statement]. ^If the virtual machine has not -** completed execution when this routine is called, that is like -** encountering an error or an [sqlite3_interrupt | interrupt]. -** ^Incomplete updates may be rolled back and transactions canceled, -** depending on the circumstances, and the -** [error code] returned will be [SQLITE_ABORT]. -*/ -SQLITE_API int sqlite3_finalize(sqlite3_stmt *pStmt); - -/* -** CAPI3REF: Reset A Prepared Statement Object -** -** The sqlite3_reset() function is called to reset a [prepared statement] -** object back to its initial state, ready to be re-executed. -** ^Any SQL statement variables that had values bound to them using -** the [sqlite3_bind_blob | sqlite3_bind_*() API] retain their values. -** Use [sqlite3_clear_bindings()] to reset the bindings. -** -** ^The [sqlite3_reset(S)] interface resets the [prepared statement] S -** back to the beginning of its program. -** -** ^If the most recent call to [sqlite3_step(S)] for the -** [prepared statement] S returned [SQLITE_ROW] or [SQLITE_DONE], -** or if [sqlite3_step(S)] has never before been called on S, -** then [sqlite3_reset(S)] returns [SQLITE_OK]. -** -** ^If the most recent call to [sqlite3_step(S)] for the -** [prepared statement] S indicated an error, then -** [sqlite3_reset(S)] returns an appropriate [error code]. -** -** ^The [sqlite3_reset(S)] interface does not change the values -** of any [sqlite3_bind_blob|bindings] on the [prepared statement] S. 
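Putting the pieces together, the prepare/step/reset/finalize life cycle might be strung together roughly as follows; the query text is only an example and assumes a table t1 with a name column:

  #include <sqlite3.h>
  #include <stdio.h>

  static int dump_names(sqlite3 *db){
    sqlite3_stmt *pStmt = 0;
    int rc = sqlite3_prepare_v2(db, "SELECT name FROM t1", -1, &pStmt, 0);
    if( rc!=SQLITE_OK ) return rc;
    while( (rc = sqlite3_step(pStmt))==SQLITE_ROW ){
      printf("%s\n", (const char*)sqlite3_column_text(pStmt, 0));
    }
    /* With the "v2" prepare interfaces any error code is returned here
    ** directly; SQLITE_DONE means the statement ran to completion. */
    sqlite3_reset(pStmt);        /* the statement could now be run again */
    sqlite3_finalize(pStmt);
    return rc==SQLITE_DONE ? SQLITE_OK : rc;
  }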
-*/ -SQLITE_API int sqlite3_reset(sqlite3_stmt *pStmt); - -/* -** CAPI3REF: Create Or Redefine SQL Functions -** KEYWORDS: {function creation routines} -** KEYWORDS: {application-defined SQL function} -** KEYWORDS: {application-defined SQL functions} -** -** ^These two functions (collectively known as "function creation routines") -** are used to add SQL functions or aggregates or to redefine the behavior -** of existing SQL functions or aggregates. The only difference between the -** two is that the second parameter, the name of the (scalar) function or -** aggregate, is encoded in UTF-8 for sqlite3_create_function() and UTF-16 -** for sqlite3_create_function16(). -** -** ^The first parameter is the [database connection] to which the SQL -** function is to be added. ^If an application uses more than one database -** connection then application-defined SQL functions must be added -** to each database connection separately. -** -** The second parameter is the name of the SQL function to be created or -** redefined. ^The length of the name is limited to 255 bytes, exclusive of -** the zero-terminator. Note that the name length limit is in bytes, not -** characters. ^Any attempt to create a function with a longer name -** will result in [SQLITE_ERROR] being returned. -** -** ^The third parameter (nArg) -** is the number of arguments that the SQL function or -** aggregate takes. ^If this parameter is -1, then the SQL function or -** aggregate may take any number of arguments between 0 and the limit -** set by [sqlite3_limit]([SQLITE_LIMIT_FUNCTION_ARG]). If the third -** parameter is less than -1 or greater than 127 then the behavior is -** undefined. -** -** The fourth parameter, eTextRep, specifies what -** [SQLITE_UTF8 | text encoding] this SQL function prefers for -** its parameters. Any SQL function implementation should be able to -** work with UTF-8, UTF-16le, or UTF-16be. But some implementations may be -** more efficient with one encoding than another. ^An application may -** invoke sqlite3_create_function() or sqlite3_create_function16() multiple -** times with the same function but with different values of eTextRep. -** ^When multiple implementations of the same function are available, SQLite -** will pick the one that involves the least amount of data conversion. -** If there is only a single implementation which does not care what text -** encoding is used, then the fourth argument should be [SQLITE_ANY]. -** -** ^(The fifth parameter is an arbitrary pointer. The implementation of the -** function can gain access to this pointer using [sqlite3_user_data()].)^ -** -** The sixth, seventh and eighth parameters, xFunc, xStep and xFinal, are -** pointers to C-language functions that implement the SQL function or -** aggregate. ^A scalar SQL function requires an implementation of the xFunc -** callback only; NULL pointers should be passed as the xStep and xFinal -** parameters. ^An aggregate SQL function requires an implementation of xStep -** and xFinal and NULL should be passed for xFunc. ^To delete an existing -** SQL function or aggregate, pass NULL for all three function callbacks. -** -** ^It is permitted to register multiple implementations of the same -** functions with the same name but with either differing numbers of -** arguments or differing preferred text encodings. ^SQLite will use -** the implementation that most closely matches the way in which the -** SQL function is used.
^A function implementation with a non-negative -** nArg parameter is a better match than a function implementation with -** a negative nArg. ^A function where the preferred text encoding -** matches the database encoding is a better -** match than a function where the encoding is different. -** ^A function where the encoding difference is between UTF16le and UTF16be -** is a closer match than a function where the encoding difference is -** between UTF8 and UTF16. -** -** ^Built-in functions may be overloaded by new application-defined functions. -** ^The first application-defined function with a given name overrides all -** built-in functions in the same [database connection] with the same name. -** ^Subsequent application-defined functions of the same name only override -** prior application-defined functions that are an exact match for the -** number of parameters and preferred encoding. -** -** ^An application-defined function is permitted to call other -** SQLite interfaces. However, such calls must not -** close the database connection nor finalize or reset the prepared -** statement in which the function is running. -*/ -SQLITE_API int sqlite3_create_function( - sqlite3 *db, - const char *zFunctionName, - int nArg, - int eTextRep, - void *pApp, - void (*xFunc)(sqlite3_context*,int,sqlite3_value**), - void (*xStep)(sqlite3_context*,int,sqlite3_value**), - void (*xFinal)(sqlite3_context*) -); -SQLITE_API int sqlite3_create_function16( - sqlite3 *db, - const void *zFunctionName, - int nArg, - int eTextRep, - void *pApp, - void (*xFunc)(sqlite3_context*,int,sqlite3_value**), - void (*xStep)(sqlite3_context*,int,sqlite3_value**), - void (*xFinal)(sqlite3_context*) -); - -/* -** CAPI3REF: Text Encodings -** -** These constant define integer codes that represent the various -** text encodings supported by SQLite. -*/ -#define SQLITE_UTF8 1 -#define SQLITE_UTF16LE 2 -#define SQLITE_UTF16BE 3 -#define SQLITE_UTF16 4 /* Use native byte order */ -#define SQLITE_ANY 5 /* sqlite3_create_function only */ -#define SQLITE_UTF16_ALIGNED 8 /* sqlite3_create_collation only */ - -/* -** CAPI3REF: Deprecated Functions -** DEPRECATED -** -** These functions are [deprecated]. In order to maintain -** backwards compatibility with older code, these functions continue -** to be supported. However, new applications should avoid -** the use of these functions. To help encourage people to avoid -** using these functions, we are not going to tell you what they do. -*/ -#ifndef SQLITE_OMIT_DEPRECATED -SQLITE_API SQLITE_DEPRECATED int sqlite3_aggregate_count(sqlite3_context*); -SQLITE_API SQLITE_DEPRECATED int sqlite3_expired(sqlite3_stmt*); -SQLITE_API SQLITE_DEPRECATED int sqlite3_transfer_bindings(sqlite3_stmt*, sqlite3_stmt*); -SQLITE_API SQLITE_DEPRECATED int sqlite3_global_recover(void); -SQLITE_API SQLITE_DEPRECATED void sqlite3_thread_cleanup(void); -SQLITE_API SQLITE_DEPRECATED int sqlite3_memory_alarm(void(*)(void*,sqlite3_int64,int),void*,sqlite3_int64); -#endif - -/* -** CAPI3REF: Obtaining SQL Function Parameter Values -** -** The C-language implementation of SQL functions and aggregates uses -** this set of interface routines to access the parameter values on -** the function or aggregate. -** -** The xFunc (for scalar functions) or xStep (for aggregates) parameters -** to [sqlite3_create_function()] and [sqlite3_create_function16()] -** define callbacks that implement the SQL functions and aggregates. 
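A sketch of registering a scalar function under the rules above; the name "half" and its behavior are invented for illustration:

  #include <sqlite3.h>

  /* half(X) returns X/2 as a floating point value. */
  static void halfFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    (void)argc;   /* always 1, because nArg below is 1 */
    sqlite3_result_double(ctx, 0.5*sqlite3_value_double(argv[0]));
  }

  static int register_half(sqlite3 *db){
    /* Scalar function: xFunc only; xStep and xFinal are NULL. */
    return sqlite3_create_function(db, "half", 1, SQLITE_ANY, 0,
                                   halfFunc, 0, 0);
  }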
-** The 4th parameter to these callbacks is an array of pointers to -** [protected sqlite3_value] objects. There is one [sqlite3_value] object for -** each parameter to the SQL function. These routines are used to -** extract values from the [sqlite3_value] objects. -** -** These routines work only with [protected sqlite3_value] objects. -** Any attempt to use these routines on an [unprotected sqlite3_value] -** object results in undefined behavior. -** -** ^These routines work just like the corresponding [column access functions] -** except that these routines take a single [protected sqlite3_value] object -** pointer instead of a [sqlite3_stmt*] pointer and an integer column number. -** -** ^The sqlite3_value_text16() interface extracts a UTF-16 string -** in the native byte-order of the host machine. ^The -** sqlite3_value_text16be() and sqlite3_value_text16le() interfaces -** extract UTF-16 strings as big-endian and little-endian respectively. -** -** ^(The sqlite3_value_numeric_type() interface attempts to apply -** numeric affinity to the value. This means that an attempt is -** made to convert the value to an integer or floating point. If -** such a conversion is possible without loss of information (in other -** words, if the value is a string that looks like a number) -** then the conversion is performed. Otherwise no conversion occurs. -** The [SQLITE_INTEGER | datatype] after conversion is returned.)^ -** -** Please pay particular attention to the fact that the pointer returned -** from [sqlite3_value_blob()], [sqlite3_value_text()], or -** [sqlite3_value_text16()] can be invalidated by a subsequent call to -** [sqlite3_value_bytes()], [sqlite3_value_bytes16()], [sqlite3_value_text()], -** or [sqlite3_value_text16()]. -** -** These routines must be called from the same thread as -** the SQL function that supplied the [sqlite3_value*] parameters. -*/ -SQLITE_API const void *sqlite3_value_blob(sqlite3_value*); -SQLITE_API int sqlite3_value_bytes(sqlite3_value*); -SQLITE_API int sqlite3_value_bytes16(sqlite3_value*); -SQLITE_API double sqlite3_value_double(sqlite3_value*); -SQLITE_API int sqlite3_value_int(sqlite3_value*); -SQLITE_API sqlite3_int64 sqlite3_value_int64(sqlite3_value*); -SQLITE_API const unsigned char *sqlite3_value_text(sqlite3_value*); -SQLITE_API const void *sqlite3_value_text16(sqlite3_value*); -SQLITE_API const void *sqlite3_value_text16le(sqlite3_value*); -SQLITE_API const void *sqlite3_value_text16be(sqlite3_value*); -SQLITE_API int sqlite3_value_type(sqlite3_value*); -SQLITE_API int sqlite3_value_numeric_type(sqlite3_value*); - -/* -** CAPI3REF: Obtain Aggregate Function Context -** -** Implementions of aggregate SQL functions use this -** routine to allocate memory for storing their state. -** -** ^The first time the sqlite3_aggregate_context(C,N) routine is called -** for a particular aggregate function, SQLite -** allocates N of memory, zeroes out that memory, and returns a pointer -** to the new memory. ^On second and subsequent calls to -** sqlite3_aggregate_context() for the same aggregate function instance, -** the same buffer is returned. Sqlite3_aggregate_context() is normally -** called once for each invocation of the xStep callback and then one -** last time when the xFinal callback is invoked. ^(When no rows match -** an aggregate query, the xStep() callback of the aggregate function -** implementation is never called and xFinal() is called exactly once. 
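A hedged sketch of an aggregate built on sqlite3_aggregate_context() (not part of sqlite3.h; the aggregate name "sumsq" and the SumSq struct are illustrative):

    #include <sqlite3.h>

    typedef struct SumSq { double total; } SumSq;

    /* xStep: invoked once per matching row; the zeroed context is allocated
    ** on the first call and the same buffer is returned on later calls. */
    static void sumsqStep(sqlite3_context *ctx, int argc, sqlite3_value **argv){
      SumSq *p = (SumSq*)sqlite3_aggregate_context(ctx, sizeof(SumSq));
      (void)argc;
      if( p ){                          /* NULL only on allocation failure */
        double v = sqlite3_value_double(argv[0]);
        p->total += v*v;
      }
    }

    /* xFinal: invoked once at the end; when no rows matched, the context may
    ** be allocated (zeroed) here for the first time. */
    static void sumsqFinal(sqlite3_context *ctx){
      SumSq *p = (SumSq*)sqlite3_aggregate_context(ctx, sizeof(SumSq));
      sqlite3_result_double(ctx, p ? p->total : 0.0);
    }

The pair would be registered with a NULL xFunc, for example
sqlite3_create_function(db, "sumsq", 1, SQLITE_UTF8, 0, 0, sumsqStep, sumsqFinal).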
-** In those cases, sqlite3_aggregate_context() might be called for the -** first time from within xFinal().)^ -** -** ^The sqlite3_aggregate_context(C,N) routine returns a NULL pointer if N is -** less than or equal to zero or if a memory allocate error occurs. -** -** ^(The amount of space allocated by sqlite3_aggregate_context(C,N) is -** determined by the N parameter on first successful call. Changing the -** value of N in subsequent call to sqlite3_aggregate_context() within -** the same aggregate function instance will not resize the memory -** allocation.)^ -** -** ^SQLite automatically frees the memory allocated by -** sqlite3_aggregate_context() when the aggregate query concludes. -** -** The first parameter must be a copy of the -** [sqlite3_context | SQL function context] that is the first parameter -** to the xStep or xFinal callback routine that implements the aggregate -** function. -** -** This routine must be called from the same thread in which -** the aggregate SQL function is running. -*/ -SQLITE_API void *sqlite3_aggregate_context(sqlite3_context*, int nBytes); - -/* -** CAPI3REF: User Data For Functions -** -** ^The sqlite3_user_data() interface returns a copy of -** the pointer that was the pUserData parameter (the 5th parameter) -** of the [sqlite3_create_function()] -** and [sqlite3_create_function16()] routines that originally -** registered the application defined function. -** -** This routine must be called from the same thread in which -** the application-defined function is running. -*/ -SQLITE_API void *sqlite3_user_data(sqlite3_context*); - -/* -** CAPI3REF: Database Connection For Functions -** -** ^The sqlite3_context_db_handle() interface returns a copy of -** the pointer to the [database connection] (the 1st parameter) -** of the [sqlite3_create_function()] -** and [sqlite3_create_function16()] routines that originally -** registered the application defined function. -*/ -SQLITE_API sqlite3 *sqlite3_context_db_handle(sqlite3_context*); - -/* -** CAPI3REF: Function Auxiliary Data -** -** The following two functions may be used by scalar SQL functions to -** associate metadata with argument values. If the same value is passed to -** multiple invocations of the same SQL function during query execution, under -** some circumstances the associated metadata may be preserved. This may -** be used, for example, to add a regular-expression matching scalar -** function. The compiled version of the regular expression is stored as -** metadata associated with the SQL value passed as the regular expression -** pattern. The compiled regular expression can be reused on multiple -** invocations of the same function so that the original pattern string -** does not need to be recompiled on each invocation. -** -** ^The sqlite3_get_auxdata() interface returns a pointer to the metadata -** associated by the sqlite3_set_auxdata() function with the Nth argument -** value to the application-defined function. ^If no metadata has been ever -** been set for the Nth argument of the function, or if the corresponding -** function parameter has changed since the meta-data was set, -** then sqlite3_get_auxdata() returns a NULL pointer. -** -** ^The sqlite3_set_auxdata() interface saves the metadata -** pointed to by its 3rd parameter as the metadata for the N-th -** argument of the application-defined function. Subsequent -** calls to sqlite3_get_auxdata() might return this data, if it has -** not been destroyed. 
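A sketch of the auxiliary-data pattern described above (not part of sqlite3.h; iprefix() is a hypothetical SQL function that tests whether its second argument starts with its first, ignoring ASCII case):

    #include <ctype.h>
    #include <sqlite3.h>

    static void iprefixFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
      const unsigned char *zStr = sqlite3_value_text(argv[1]);
      char *zPre = (char*)sqlite3_get_auxdata(ctx, 0);   /* cached prefix */
      int i, isNew = 0, match = 1;
      (void)argc;
      if( zStr==0 ){ sqlite3_result_null(ctx); return; }
      if( zPre==0 ){
        /* No cached metadata yet: build a lower-cased copy of argument 0. */
        const unsigned char *zP = sqlite3_value_text(argv[0]);
        int n = sqlite3_value_bytes(argv[0]);
        if( zP==0 ){ sqlite3_result_null(ctx); return; }
        zPre = sqlite3_malloc(n+1);
        if( zPre==0 ){ sqlite3_result_error_nomem(ctx); return; }
        for(i=0; i<n; i++) zPre[i] = (char)tolower(zP[i]);
        zPre[n] = 0;
        isNew = 1;
      }
      for(i=0; zPre[i]; i++){
        if( tolower(zStr[i])!=(unsigned char)zPre[i] ){ match = 0; break; }
      }
      if( isNew ){
        /* SQLite will call sqlite3_free(zPre) when it discards the metadata. */
        sqlite3_set_auxdata(ctx, 0, zPre, sqlite3_free);
      }
      sqlite3_result_int(ctx, match);
    }

When the prefix argument is a literal or a bound parameter, the lower-cased copy is typically built once and reused across rows instead of being recomputed on every invocation.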
-** ^If it is not NULL, SQLite will invoke the destructor -** function given by the 4th parameter to sqlite3_set_auxdata() on -** the metadata when the corresponding function parameter changes -** or when the SQL statement completes, whichever comes first. -** -** SQLite is free to call the destructor and drop metadata on any -** parameter of any function at any time. ^The only guarantee is that -** the destructor will be called before the metadata is dropped. -** -** ^(In practice, metadata is preserved between function calls for -** expressions that are constant at compile time. This includes literal -** values and [parameters].)^ -** -** These routines must be called from the same thread in which -** the SQL function is running. -*/ -SQLITE_API void *sqlite3_get_auxdata(sqlite3_context*, int N); -SQLITE_API void sqlite3_set_auxdata(sqlite3_context*, int N, void*, void (*)(void*)); - - -/* -** CAPI3REF: Constants Defining Special Destructor Behavior -** -** These are special values for the destructor that is passed in as the -** final argument to routines like [sqlite3_result_blob()]. ^If the destructor -** argument is SQLITE_STATIC, it means that the content pointer is constant -** and will never change. It does not need to be destroyed. ^The -** SQLITE_TRANSIENT value means that the content will likely change in -** the near future and that SQLite should make its own private copy of -** the content before returning. -** -** The typedef is necessary to work around problems in certain -** C++ compilers. See ticket #2191. -*/ -typedef void (*sqlite3_destructor_type)(void*); -#define SQLITE_STATIC ((sqlite3_destructor_type)0) -#define SQLITE_TRANSIENT ((sqlite3_destructor_type)-1) - -/* -** CAPI3REF: Setting The Result Of An SQL Function -** -** These routines are used by the xFunc or xFinal callbacks that -** implement SQL functions and aggregates. See -** [sqlite3_create_function()] and [sqlite3_create_function16()] -** for additional information. -** -** These functions work very much like the [parameter binding] family of -** functions used to bind values to host parameters in prepared statements. -** Refer to the [SQL parameter] documentation for additional information. -** -** ^The sqlite3_result_blob() interface sets the result from -** an application-defined function to be the BLOB whose content is pointed -** to by the second parameter and which is N bytes long where N is the -** third parameter. -** -** ^The sqlite3_result_zeroblob() interfaces set the result of -** the application-defined function to be a BLOB containing all zero -** bytes and N bytes in size, where N is the value of the 2nd parameter. -** -** ^The sqlite3_result_double() interface sets the result from -** an application-defined function to be a floating point value specified -** by its 2nd argument. -** -** ^The sqlite3_result_error() and sqlite3_result_error16() functions -** cause the implemented SQL function to throw an exception. -** ^SQLite uses the string pointed to by the -** 2nd parameter of sqlite3_result_error() or sqlite3_result_error16() -** as the text of an error message. ^SQLite interprets the error -** message string from sqlite3_result_error() as UTF-8. ^SQLite -** interprets the string from sqlite3_result_error16() as UTF-16 in native -** byte order. ^If the third parameter to sqlite3_result_error() -** or sqlite3_result_error16() is negative then SQLite takes as the error -** message all text up through the first zero character. 
-** ^If the third parameter to sqlite3_result_error() or -** sqlite3_result_error16() is non-negative then SQLite takes that many -** bytes (not characters) from the 2nd parameter as the error message. -** ^The sqlite3_result_error() and sqlite3_result_error16() -** routines make a private copy of the error message text before -** they return. Hence, the calling function can deallocate or -** modify the text after they return without harm. -** ^The sqlite3_result_error_code() function changes the error code -** returned by SQLite as a result of an error in a function. ^By default, -** the error code is SQLITE_ERROR. ^A subsequent call to sqlite3_result_error() -** or sqlite3_result_error16() resets the error code to SQLITE_ERROR. -** -** ^The sqlite3_result_toobig() interface causes SQLite to throw an error -** indicating that a string or BLOB is too long to represent. -** -** ^The sqlite3_result_nomem() interface causes SQLite to throw an error -** indicating that a memory allocation failed. -** -** ^The sqlite3_result_int() interface sets the return value -** of the application-defined function to be the 32-bit signed integer -** value given in the 2nd argument. -** ^The sqlite3_result_int64() interface sets the return value -** of the application-defined function to be the 64-bit signed integer -** value given in the 2nd argument. -** -** ^The sqlite3_result_null() interface sets the return value -** of the application-defined function to be NULL. -** -** ^The sqlite3_result_text(), sqlite3_result_text16(), -** sqlite3_result_text16le(), and sqlite3_result_text16be() interfaces -** set the return value of the application-defined function to be -** a text string which is represented as UTF-8, UTF-16 native byte order, -** UTF-16 little endian, or UTF-16 big endian, respectively. -** ^SQLite takes the text result from the application from -** the 2nd parameter of the sqlite3_result_text* interfaces. -** ^If the 3rd parameter to the sqlite3_result_text* interfaces -** is negative, then SQLite takes result text from the 2nd parameter -** through the first zero character. -** ^If the 3rd parameter to the sqlite3_result_text* interfaces -** is non-negative, then as many bytes (not characters) of the text -** pointed to by the 2nd parameter are taken as the application-defined -** function result. -** ^If the 4th parameter to the sqlite3_result_text* interfaces -** or sqlite3_result_blob is a non-NULL pointer, then SQLite calls that -** function as the destructor on the text or BLOB result when it has -** finished using that result. -** ^If the 4th parameter to the sqlite3_result_text* interfaces or to -** sqlite3_result_blob is the special constant SQLITE_STATIC, then SQLite -** assumes that the text or BLOB result is in constant space and does not -** copy the content of the parameter nor call a destructor on the content -** when it has finished using that result. -** ^If the 4th parameter to the sqlite3_result_text* interfaces -** or sqlite3_result_blob is the special constant SQLITE_TRANSIENT -** then SQLite makes a copy of the result into space obtained from -** from [sqlite3_malloc()] before it returns. -** -** ^The sqlite3_result_value() interface sets the result of -** the application-defined function to be a copy the -** [unprotected sqlite3_value] object specified by the 2nd parameter. 
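To illustrate the destructor constants and the sqlite3_result_text() semantics above, a small sketch (not part of sqlite3.h; upper8Func() is an illustrative scalar callback):

    #include <ctype.h>
    #include <sqlite3.h>

    /* Return the ASCII upper-case form of a short text argument.  The result
    ** lives in a stack buffer, so SQLITE_TRANSIENT tells SQLite to make its
    ** own private copy before this callback returns. */
    static void upper8Func(sqlite3_context *ctx, int argc, sqlite3_value **argv){
      const unsigned char *zIn = sqlite3_value_text(argv[0]);
      int i, n = sqlite3_value_bytes(argv[0]);
      char zBuf[64];
      (void)argc;
      if( zIn==0 ){ sqlite3_result_null(ctx); return; }
      if( n>(int)sizeof(zBuf) ){ sqlite3_result_error_toobig(ctx); return; }
      for(i=0; i<n; i++) zBuf[i] = (char)toupper(zIn[i]);
      sqlite3_result_text(ctx, zBuf, n, SQLITE_TRANSIENT);
    }

Passing a heap buffer together with sqlite3_free as the destructor, or a string constant together with SQLITE_STATIC, would avoid the extra copy.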
^The -** sqlite3_result_value() interface makes a copy of the [sqlite3_value] -** so that the [sqlite3_value] specified in the parameter may change or -** be deallocated after sqlite3_result_value() returns without harm. -** ^A [protected sqlite3_value] object may always be used where an -** [unprotected sqlite3_value] object is required, so either -** kind of [sqlite3_value] object can be used with this interface. -** -** If these routines are called from within the different thread -** than the one containing the application-defined function that received -** the [sqlite3_context] pointer, the results are undefined. -*/ -SQLITE_API void sqlite3_result_blob(sqlite3_context*, const void*, int, void(*)(void*)); -SQLITE_API void sqlite3_result_double(sqlite3_context*, double); -SQLITE_API void sqlite3_result_error(sqlite3_context*, const char*, int); -SQLITE_API void sqlite3_result_error16(sqlite3_context*, const void*, int); -SQLITE_API void sqlite3_result_error_toobig(sqlite3_context*); -SQLITE_API void sqlite3_result_error_nomem(sqlite3_context*); -SQLITE_API void sqlite3_result_error_code(sqlite3_context*, int); -SQLITE_API void sqlite3_result_int(sqlite3_context*, int); -SQLITE_API void sqlite3_result_int64(sqlite3_context*, sqlite3_int64); -SQLITE_API void sqlite3_result_null(sqlite3_context*); -SQLITE_API void sqlite3_result_text(sqlite3_context*, const char*, int, void(*)(void*)); -SQLITE_API void sqlite3_result_text16(sqlite3_context*, const void*, int, void(*)(void*)); -SQLITE_API void sqlite3_result_text16le(sqlite3_context*, const void*, int,void(*)(void*)); -SQLITE_API void sqlite3_result_text16be(sqlite3_context*, const void*, int,void(*)(void*)); -SQLITE_API void sqlite3_result_value(sqlite3_context*, sqlite3_value*); -SQLITE_API void sqlite3_result_zeroblob(sqlite3_context*, int n); - -/* -** CAPI3REF: Define New Collating Sequences -** -** These functions are used to add new collation sequences to the -** [database connection] specified as the first argument. -** -** ^The name of the new collation sequence is specified as a UTF-8 string -** for sqlite3_create_collation() and sqlite3_create_collation_v2() -** and a UTF-16 string for sqlite3_create_collation16(). ^In all cases -** the name is passed as the second function argument. -** -** ^The third argument may be one of the constants [SQLITE_UTF8], -** [SQLITE_UTF16LE], or [SQLITE_UTF16BE], indicating that the user-supplied -** routine expects to be passed pointers to strings encoded using UTF-8, -** UTF-16 little-endian, or UTF-16 big-endian, respectively. ^The -** third argument might also be [SQLITE_UTF16] to indicate that the routine -** expects pointers to be UTF-16 strings in the native byte order, or the -** argument can be [SQLITE_UTF16_ALIGNED] if the -** the routine expects pointers to 16-bit word aligned strings -** of UTF-16 in the native byte order. -** -** A pointer to the user supplied routine must be passed as the fifth -** argument. ^If it is NULL, this is the same as deleting the collation -** sequence (so that SQLite cannot call it anymore). -** ^Each time the application supplied function is invoked, it is passed -** as its first parameter a copy of the void* passed as the fourth argument -** to sqlite3_create_collation() or sqlite3_create_collation16(). -** -** ^The remaining arguments to the application-supplied routine are two strings, -** each represented by a (length, data) pair and encoded in the encoding -** that was passed as the third argument when the collation sequence was -** registered. 
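A hedged sketch of registering a collation with the interfaces above (not part of sqlite3.h; the collation name "NOCASE_ASCII" and the helper register_nocase() are illustrative):

    #include <ctype.h>
    #include <sqlite3.h>

    /* xCompare: ASCII case-insensitive comparison of two (length, data) pairs.
    ** Returns negative, zero, or positive, in the manner of strcmp(). */
    static int nocaseCmp(void *pArg, int n1, const void *p1,
                                     int n2, const void *p2){
      const unsigned char *a = (const unsigned char*)p1;
      const unsigned char *b = (const unsigned char*)p2;
      int i, n = n1<n2 ? n1 : n2;
      (void)pArg;
      for(i=0; i<n; i++){
        int c = tolower(a[i]) - tolower(b[i]);
        if( c ) return c;
      }
      return n1 - n2;
    }

    /* Register the comparator for UTF-8 encoded strings. */
    static int register_nocase(sqlite3 *db){
      return sqlite3_create_collation(db, "NOCASE_ASCII", SQLITE_UTF8,
                                      0, nocaseCmp);
    }

After registration, the collation can be selected with, for example, "ORDER BY name COLLATE NOCASE_ASCII".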
The application defined collation routine should -** return negative, zero or positive if the first string is less than, -** equal to, or greater than the second string. i.e. (STRING1 - STRING2). -** -** ^The sqlite3_create_collation_v2() works like sqlite3_create_collation() -** except that it takes an extra argument which is a destructor for -** the collation. ^The destructor is called when the collation is -** destroyed and is passed a copy of the fourth parameter void* pointer -** of the sqlite3_create_collation_v2(). -** ^Collations are destroyed when they are overridden by later calls to the -** collation creation functions or when the [database connection] is closed -** using [sqlite3_close()]. -** -** See also: [sqlite3_collation_needed()] and [sqlite3_collation_needed16()]. -*/ -SQLITE_API int sqlite3_create_collation( - sqlite3*, - const char *zName, - int eTextRep, - void*, - int(*xCompare)(void*,int,const void*,int,const void*) -); -SQLITE_API int sqlite3_create_collation_v2( - sqlite3*, - const char *zName, - int eTextRep, - void*, - int(*xCompare)(void*,int,const void*,int,const void*), - void(*xDestroy)(void*) -); -SQLITE_API int sqlite3_create_collation16( - sqlite3*, - const void *zName, - int eTextRep, - void*, - int(*xCompare)(void*,int,const void*,int,const void*) -); - -/* -** CAPI3REF: Collation Needed Callbacks -** -** ^To avoid having to register all collation sequences before a database -** can be used, a single callback function may be registered with the -** [database connection] to be invoked whenever an undefined collation -** sequence is required. -** -** ^If the function is registered using the sqlite3_collation_needed() API, -** then it is passed the names of undefined collation sequences as strings -** encoded in UTF-8. ^If sqlite3_collation_needed16() is used, -** the names are passed as UTF-16 in machine native byte order. -** ^A call to either function replaces the existing collation-needed callback. -** -** ^(When the callback is invoked, the first argument passed is a copy -** of the second argument to sqlite3_collation_needed() or -** sqlite3_collation_needed16(). The second argument is the database -** connection. The third argument is one of [SQLITE_UTF8], [SQLITE_UTF16BE], -** or [SQLITE_UTF16LE], indicating the most desirable form of the collation -** sequence function required. The fourth parameter is the name of the -** required collation sequence.)^ -** -** The callback function should register the desired collation using -** [sqlite3_create_collation()], [sqlite3_create_collation16()], or -** [sqlite3_create_collation_v2()]. -*/ -SQLITE_API int sqlite3_collation_needed( - sqlite3*, - void*, - void(*)(void*,sqlite3*,int eTextRep,const char*) -); -SQLITE_API int sqlite3_collation_needed16( - sqlite3*, - void*, - void(*)(void*,sqlite3*,int eTextRep,const void*) -); - -#ifdef SQLITE_HAS_CODEC -/* -** Specify the key for an encrypted database. This routine should be -** called right after sqlite3_open(). -** -** The code to implement this API is not available in the public release -** of SQLite. -*/ -SQLITE_API int sqlite3_key( - sqlite3 *db, /* Database to be rekeyed */ - const void *pKey, int nKey /* The key */ -); - -/* -** Change the key on an open database. If the current database is not -** encrypted, this routine will encrypt it. If pNew==0 or nNew==0, the -** database is decrypted. -** -** The code to implement this API is not available in the public release -** of SQLite. 
-*/ -SQLITE_API int sqlite3_rekey( - sqlite3 *db, /* Database to be rekeyed */ - const void *pKey, int nKey /* The new key */ -); - -/* -** Specify the activation key for a SEE database. Unless -** activated, none of the SEE routines will work. -*/ -SQLITE_API void sqlite3_activate_see( - const char *zPassPhrase /* Activation phrase */ -); -#endif - -#ifdef SQLITE_ENABLE_CEROD -/* -** Specify the activation key for a CEROD database. Unless -** activated, none of the CEROD routines will work. -*/ -SQLITE_API void sqlite3_activate_cerod( - const char *zPassPhrase /* Activation phrase */ -); -#endif - -/* -** CAPI3REF: Suspend Execution For A Short Time -** -** ^The sqlite3_sleep() function causes the current thread to suspend execution -** for at least a number of milliseconds specified in its parameter. -** -** ^If the operating system does not support sleep requests with -** millisecond time resolution, then the time will be rounded up to -** the nearest second. ^The number of milliseconds of sleep actually -** requested from the operating system is returned. -** -** ^SQLite implements this interface by calling the xSleep() -** method of the default [sqlite3_vfs] object. -*/ -SQLITE_API int sqlite3_sleep(int); - -/* -** CAPI3REF: Name Of The Folder Holding Temporary Files -** -** ^(If this global variable is made to point to a string which is -** the name of a folder (a.k.a. directory), then all temporary files -** created by SQLite when using a built-in [sqlite3_vfs | VFS] -** will be placed in that directory.)^ ^If this variable -** is a NULL pointer, then SQLite performs a search for an appropriate -** temporary file directory. -** -** It is not safe to read or modify this variable in more than one -** thread at a time. It is not safe to read or modify this variable -** if a [database connection] is being used at the same time in a separate -** thread. -** It is intended that this variable be set once -** as part of process initialization and before any SQLite interface -** routines have been called and that this variable remain unchanged -** thereafter. -** -** ^The [temp_store_directory pragma] may modify this variable and cause -** it to point to memory obtained from [sqlite3_malloc]. ^Furthermore, -** the [temp_store_directory pragma] always assumes that any string -** that this variable points to is held in memory obtained from -** [sqlite3_malloc] and the pragma may attempt to free that memory -** using [sqlite3_free]. -** Hence, if this variable is modified directly, either it should be -** made NULL or made to point to memory obtained from [sqlite3_malloc] -** or else the use of the [temp_store_directory pragma] should be avoided. -*/ -SQLITE_API char *sqlite3_temp_directory; - -/* -** CAPI3REF: Test For Auto-Commit Mode -** KEYWORDS: {autocommit mode} -** -** ^The sqlite3_get_autocommit() interface returns non-zero or -** zero if the given database connection is or is not in autocommit mode, -** respectively. ^Autocommit mode is on by default. -** ^Autocommit mode is disabled by a [BEGIN] statement. -** ^Autocommit mode is re-enabled by a [COMMIT] or [ROLLBACK]. -** -** If certain kinds of errors occur on a statement within a multi-statement -** transaction (errors including [SQLITE_FULL], [SQLITE_IOERR], -** [SQLITE_NOMEM], [SQLITE_BUSY], and [SQLITE_INTERRUPT]) then the -** transaction might be rolled back automatically. The only way to -** find out whether SQLite automatically rolled back the transaction after -** an error is to use this function. 
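A short sketch of observing autocommit mode (not part of sqlite3.h; autocommit_demo() is illustrative and assumes db is an open connection):

    #include <assert.h>
    #include <sqlite3.h>

    static void autocommit_demo(sqlite3 *db){
      assert( sqlite3_get_autocommit(db) );      /* on by default */
      sqlite3_exec(db, "BEGIN", 0, 0, 0);
      assert( sqlite3_get_autocommit(db)==0 );   /* disabled by BEGIN */
      sqlite3_exec(db, "COMMIT", 0, 0, 0);
      assert( sqlite3_get_autocommit(db) );      /* re-enabled by COMMIT */
    }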
-** -** If another thread changes the autocommit status of the database -** connection while this routine is running, then the return value -** is undefined. -*/ -SQLITE_API int sqlite3_get_autocommit(sqlite3*); - -/* -** CAPI3REF: Find The Database Handle Of A Prepared Statement -** -** ^The sqlite3_db_handle interface returns the [database connection] handle -** to which a [prepared statement] belongs. ^The [database connection] -** returned by sqlite3_db_handle is the same [database connection] -** that was the first argument -** to the [sqlite3_prepare_v2()] call (or its variants) that was used to -** create the statement in the first place. -*/ -SQLITE_API sqlite3 *sqlite3_db_handle(sqlite3_stmt*); - -/* -** CAPI3REF: Find the next prepared statement -** -** ^This interface returns a pointer to the next [prepared statement] after -** pStmt associated with the [database connection] pDb. ^If pStmt is NULL -** then this interface returns a pointer to the first prepared statement -** associated with the database connection pDb. ^If no prepared statement -** satisfies the conditions of this routine, it returns NULL. -** -** The [database connection] pointer D in a call to -** [sqlite3_next_stmt(D,S)] must refer to an open database -** connection and in particular must not be a NULL pointer. -*/ -SQLITE_API sqlite3_stmt *sqlite3_next_stmt(sqlite3 *pDb, sqlite3_stmt *pStmt); - -/* -** CAPI3REF: Commit And Rollback Notification Callbacks -** -** ^The sqlite3_commit_hook() interface registers a callback -** function to be invoked whenever a transaction is [COMMIT | committed]. -** ^Any callback set by a previous call to sqlite3_commit_hook() -** for the same database connection is overridden. -** ^The sqlite3_rollback_hook() interface registers a callback -** function to be invoked whenever a transaction is [ROLLBACK | rolled back]. -** ^Any callback set by a previous call to sqlite3_rollback_hook() -** for the same database connection is overridden. -** ^The pArg argument is passed through to the callback. -** ^If the callback on a commit hook function returns non-zero, -** then the commit is converted into a rollback. -** -** ^The sqlite3_commit_hook(D,C,P) and sqlite3_rollback_hook(D,C,P) functions -** return the P argument from the previous call of the same function -** on the same [database connection] D, or NULL for -** the first call for each function on D. -** -** The callback implementation must not do anything that will modify -** the database connection that invoked the callback. Any actions -** to modify the database connection must be deferred until after the -** completion of the [sqlite3_step()] call that triggered the commit -** or rollback hook in the first place. -** Note that [sqlite3_prepare_v2()] and [sqlite3_step()] both modify their -** database connections for the meaning of "modify" in this paragraph. -** -** ^Registering a NULL function disables the callback. -** -** ^When the commit hook callback routine returns zero, the [COMMIT] -** operation is allowed to continue normally. ^If the commit hook -** returns non-zero, then the [COMMIT] is converted into a [ROLLBACK]. -** ^The rollback hook is invoked on a rollback that results from a commit -** hook returning non-zero, just as it would be with any other rollback. -** -** ^For the purposes of this API, a transaction is said to have been -** rolled back if an explicit "ROLLBACK" statement is executed, or -** an error or constraint causes an implicit rollback to occur. 
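A minimal sketch of installing a commit hook (not part of sqlite3.h; countCommits(), nCommits, and install_commit_hook() are illustrative):

    #include <sqlite3.h>

    /* Returning non-zero from a commit hook converts the COMMIT into a
    ** ROLLBACK; this hook only counts commits and lets them proceed. */
    static int countCommits(void *pArg){
      ++*(int*)pArg;
      return 0;
    }

    static int nCommits = 0;

    static void install_commit_hook(sqlite3 *db){
      /* The return value is the pArg of any previously registered hook. */
      sqlite3_commit_hook(db, countCommits, &nCommits);
    }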
-** ^The rollback callback is not invoked if a transaction is -** automatically rolled back because the database connection is closed. -** -** See also the [sqlite3_update_hook()] interface. -*/ -SQLITE_API void *sqlite3_commit_hook(sqlite3*, int(*)(void*), void*); -SQLITE_API void *sqlite3_rollback_hook(sqlite3*, void(*)(void *), void*); - -/* -** CAPI3REF: Data Change Notification Callbacks -** -** ^The sqlite3_update_hook() interface registers a callback function -** with the [database connection] identified by the first argument -** to be invoked whenever a row is updated, inserted or deleted. -** ^Any callback set by a previous call to this function -** for the same database connection is overridden. -** -** ^The second argument is a pointer to the function to invoke when a -** row is updated, inserted or deleted. -** ^The first argument to the callback is a copy of the third argument -** to sqlite3_update_hook(). -** ^The second callback argument is one of [SQLITE_INSERT], [SQLITE_DELETE], -** or [SQLITE_UPDATE], depending on the operation that caused the callback -** to be invoked. -** ^The third and fourth arguments to the callback contain pointers to the -** database and table name containing the affected row. -** ^The final callback parameter is the [rowid] of the row. -** ^In the case of an update, this is the [rowid] after the update takes place. -** -** ^(The update hook is not invoked when internal system tables are -** modified (i.e. sqlite_master and sqlite_sequence).)^ -** -** ^In the current implementation, the update hook -** is not invoked when duplication rows are deleted because of an -** [ON CONFLICT | ON CONFLICT REPLACE] clause. ^Nor is the update hook -** invoked when rows are deleted using the [truncate optimization]. -** The exceptions defined in this paragraph might change in a future -** release of SQLite. -** -** The update hook implementation must not do anything that will modify -** the database connection that invoked the update hook. Any actions -** to modify the database connection must be deferred until after the -** completion of the [sqlite3_step()] call that triggered the update hook. -** Note that [sqlite3_prepare_v2()] and [sqlite3_step()] both modify their -** database connections for the meaning of "modify" in this paragraph. -** -** ^The sqlite3_update_hook(D,C,P) function -** returns the P argument from the previous call -** on the same [database connection] D, or NULL for -** the first call on D. -** -** See also the [sqlite3_commit_hook()] and [sqlite3_rollback_hook()] -** interfaces. -*/ -SQLITE_API void *sqlite3_update_hook( - sqlite3*, - void(*)(void *,int ,char const *,char const *,sqlite3_int64), - void* -); - -/* -** CAPI3REF: Enable Or Disable Shared Pager Cache -** KEYWORDS: {shared cache} -** -** ^(This routine enables or disables the sharing of the database cache -** and schema data structures between [database connection | connections] -** to the same database. Sharing is enabled if the argument is true -** and disabled if the argument is false.)^ -** -** ^Cache sharing is enabled and disabled for an entire process. -** This is a change as of SQLite version 3.5.0. In prior versions of SQLite, -** sharing was enabled or disabled for each thread separately. -** -** ^(The cache sharing mode set by this interface effects all subsequent -** calls to [sqlite3_open()], [sqlite3_open_v2()], and [sqlite3_open16()]. 
-** Existing database connections continue use the sharing mode -** that was in effect at the time they were opened.)^ -** -** ^(This routine returns [SQLITE_OK] if shared cache was enabled or disabled -** successfully. An [error code] is returned otherwise.)^ -** -** ^Shared cache is disabled by default. But this might change in -** future releases of SQLite. Applications that care about shared -** cache setting should set it explicitly. -** -** See Also: [SQLite Shared-Cache Mode] -*/ -SQLITE_API int sqlite3_enable_shared_cache(int); - -/* -** CAPI3REF: Attempt To Free Heap Memory -** -** ^The sqlite3_release_memory() interface attempts to free N bytes -** of heap memory by deallocating non-essential memory allocations -** held by the database library. Memory used to cache database -** pages to improve performance is an example of non-essential memory. -** ^sqlite3_release_memory() returns the number of bytes actually freed, -** which might be more or less than the amount requested. -*/ -SQLITE_API int sqlite3_release_memory(int); - -/* -** CAPI3REF: Impose A Limit On Heap Size -** -** ^The sqlite3_soft_heap_limit() interface places a "soft" limit -** on the amount of heap memory that may be allocated by SQLite. -** ^If an internal allocation is requested that would exceed the -** soft heap limit, [sqlite3_release_memory()] is invoked one or -** more times to free up some space before the allocation is performed. -** -** ^The limit is called "soft" because if [sqlite3_release_memory()] -** cannot free sufficient memory to prevent the limit from being exceeded, -** the memory is allocated anyway and the current operation proceeds. -** -** ^A negative or zero value for N means that there is no soft heap limit and -** [sqlite3_release_memory()] will only be called when memory is exhausted. -** ^The default value for the soft heap limit is zero. -** -** ^(SQLite makes a best effort to honor the soft heap limit. -** But if the soft heap limit cannot be honored, execution will -** continue without error or notification.)^ This is why the limit is -** called a "soft" limit. It is advisory only. -** -** Prior to SQLite version 3.5.0, this routine only constrained the memory -** allocated by a single thread - the same thread in which this routine -** runs. Beginning with SQLite version 3.5.0, the soft heap limit is -** applied to all threads. The value specified for the soft heap limit -** is an upper bound on the total memory allocation for all threads. In -** version 3.5.0 there is no mechanism for limiting the heap usage for -** individual threads. -*/ -SQLITE_API void sqlite3_soft_heap_limit(int); - -/* -** CAPI3REF: Extract Metadata About A Column Of A Table -** -** ^This routine returns metadata about a specific column of a specific -** database table accessible using the [database connection] handle -** passed as the first function argument. -** -** ^The column is identified by the second, third and fourth parameters to -** this function. ^The second parameter is either the name of the database -** (i.e. "main", "temp", or an attached database) containing the specified -** table or NULL. ^If it is NULL, then all attached databases are searched -** for the table using the same algorithm used by the database engine to -** resolve unqualified table references. -** -** ^The third and fourth parameters to this function are the table and column -** name of the desired column, respectively. Neither of these parameters -** may be NULL. 
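As a sketch of the column-metadata interface described above (not part of sqlite3.h; the table "user" and column "name" are hypothetical, and the call requires a library built with SQLITE_ENABLE_COLUMN_METADATA):

    #include <sqlite3.h>

    /* Returns 1 if user.name is part of the PRIMARY KEY, 0 if not, -1 on error.
    ** Passing NULL for the database name searches all attached databases. */
    static int name_is_pk(sqlite3 *db){
      const char *zType, *zColl;
      int notNull, primaryKey, autoInc;
      int rc = sqlite3_table_column_metadata(db, 0, "user", "name",
                   &zType, &zColl, &notNull, &primaryKey, &autoInc);
      return rc==SQLITE_OK ? primaryKey : -1;
    }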
-** -** ^Metadata is returned by writing to the memory locations passed as the 5th -** and subsequent parameters to this function. ^Any of these arguments may be -** NULL, in which case the corresponding element of metadata is omitted. -** -** ^(<blockquote> -** <table border="1"> -** <tr><th> Parameter <th> Output<br>Type <th> Description -** -** <tr><td> 5th <td> const char* <td> Data type -** <tr><td> 6th <td> const char* <td> Name of default collation sequence -** <tr><td> 7th <td> int <td> True if column has a NOT NULL constraint -** <tr><td> 8th <td> int <td> True if column is part of the PRIMARY KEY -** <tr><td> 9th <td> int <td> True if column is [AUTOINCREMENT] -** </table> -** </blockquote>)^ -** -** ^The memory pointed to by the character pointers returned for the -** declaration type and collation sequence is valid only until the next -** call to any SQLite API function. -** -** ^If the specified table is actually a view, an [error code] is returned. -** -** ^If the specified column is "rowid", "oid" or "_rowid_" and an -** [INTEGER PRIMARY KEY] column has been explicitly declared, then the output -** parameters are set for the explicitly declared column. ^(If there is no -** explicitly declared [INTEGER PRIMARY KEY] column, then the output -** parameters are set as follows: -** -** <pre> -** data type: "INTEGER" -** collation sequence: "BINARY" -** not null: 0 -** primary key: 1 -** auto increment: 0 -** </pre>)^ -** -** ^(This function may load one or more schemas from database files. If an -** error occurs during this process, or if the requested table or column -** cannot be found, an [error code] is returned and an error message left -** in the [database connection] (to be retrieved using sqlite3_errmsg()).)^ -** -** ^This API is only available if the library was compiled with the -** [SQLITE_ENABLE_COLUMN_METADATA] C-preprocessor symbol defined. -*/ -SQLITE_API int sqlite3_table_column_metadata( - sqlite3 *db, /* Connection handle */ - const char *zDbName, /* Database name or NULL */ - const char *zTableName, /* Table name */ - const char *zColumnName, /* Column name */ - char const **pzDataType, /* OUTPUT: Declared data type */ - char const **pzCollSeq, /* OUTPUT: Collation sequence name */ - int *pNotNull, /* OUTPUT: True if NOT NULL constraint exists */ - int *pPrimaryKey, /* OUTPUT: True if column part of PK */ - int *pAutoinc /* OUTPUT: True if column is auto-increment */ -); - -/* -** CAPI3REF: Load An Extension -** -** ^This interface loads an SQLite extension library from the named file. -** -** ^The sqlite3_load_extension() interface attempts to load an -** SQLite extension library contained in the file zFile. -** -** ^The entry point is zProc. -** ^zProc may be 0, in which case the name of the entry point -** defaults to "sqlite3_extension_init". -** ^The sqlite3_load_extension() interface returns -** [SQLITE_OK] on success and [SQLITE_ERROR] if something goes wrong. -** ^If an error occurs and pzErrMsg is not 0, then the -** [sqlite3_load_extension()] interface shall attempt to -** fill *pzErrMsg with error message text stored in memory -** obtained from [sqlite3_malloc()]. The calling function -** should free this memory by calling [sqlite3_free()]. -** -** ^Extension loading must be enabled using -** [sqlite3_enable_load_extension()] prior to calling this API, -** otherwise an error will be returned. -** -** See also the [load_extension() SQL function]. 
-*/ -SQLITE_API int sqlite3_load_extension( - sqlite3 *db, /* Load the extension into this database connection */ - const char *zFile, /* Name of the shared library containing extension */ - const char *zProc, /* Entry point. Derived from zFile if 0 */ - char **pzErrMsg /* Put error message here if not 0 */ -); - -/* -** CAPI3REF: Enable Or Disable Extension Loading -** -** ^So as not to open security holes in older applications that are -** unprepared to deal with extension loading, and as a means of disabling -** extension loading while evaluating user-entered SQL, the following API -** is provided to turn the [sqlite3_load_extension()] mechanism on and off. -** -** ^Extension loading is off by default. See ticket #1863. -** ^Call the sqlite3_enable_load_extension() routine with onoff==1 -** to turn extension loading on and call it with onoff==0 to turn -** it back off again. -*/ -SQLITE_API int sqlite3_enable_load_extension(sqlite3 *db, int onoff); - -/* -** CAPI3REF: Automatically Load An Extensions -** -** ^This API can be invoked at program startup in order to register -** one or more statically linked extensions that will be available -** to all new [database connections]. -** -** ^(This routine stores a pointer to the extension entry point -** in an array that is obtained from [sqlite3_malloc()]. That memory -** is deallocated by [sqlite3_reset_auto_extension()].)^ -** -** ^This function registers an extension entry point that is -** automatically invoked whenever a new [database connection] -** is opened using [sqlite3_open()], [sqlite3_open16()], -** or [sqlite3_open_v2()]. -** ^Duplicate extensions are detected so calling this routine -** multiple times with the same extension is harmless. -** ^Automatic extensions apply across all threads. -*/ -SQLITE_API int sqlite3_auto_extension(void (*xEntryPoint)(void)); - -/* -** CAPI3REF: Reset Automatic Extension Loading -** -** ^(This function disables all previously registered automatic -** extensions. It undoes the effect of all prior -** [sqlite3_auto_extension()] calls.)^ -** -** ^This function disables automatic extensions in all threads. -*/ -SQLITE_API void sqlite3_reset_auto_extension(void); - -/* -** The interface to the virtual-table mechanism is currently considered -** to be experimental. The interface might change in incompatible ways. -** If this is a problem for you, do not use the interface at this time. -** -** When the virtual-table mechanism stabilizes, we will declare the -** interface fixed, support it indefinitely, and remove this comment. -*/ - -/* -** Structures used by the virtual table interface -*/ -typedef struct sqlite3_vtab sqlite3_vtab; -typedef struct sqlite3_index_info sqlite3_index_info; -typedef struct sqlite3_vtab_cursor sqlite3_vtab_cursor; -typedef struct sqlite3_module sqlite3_module; - -/* -** CAPI3REF: Virtual Table Object -** KEYWORDS: sqlite3_module {virtual table module} -** -** This structure, sometimes called a a "virtual table module", -** defines the implementation of a [virtual tables]. -** This structure consists mostly of methods for the module. -** -** ^A virtual table module is created by filling in a persistent -** instance of this structure and passing a pointer to that instance -** to [sqlite3_create_module()] or [sqlite3_create_module_v2()]. -** ^The registration remains valid until it is replaced by a different -** module or until the [database connection] closes. The content -** of this structure must not change while it is registered with -** any database connection. 
-*/ -struct sqlite3_module { - int iVersion; - int (*xCreate)(sqlite3*, void *pAux, - int argc, const char *const*argv, - sqlite3_vtab **ppVTab, char**); - int (*xConnect)(sqlite3*, void *pAux, - int argc, const char *const*argv, - sqlite3_vtab **ppVTab, char**); - int (*xBestIndex)(sqlite3_vtab *pVTab, sqlite3_index_info*); - int (*xDisconnect)(sqlite3_vtab *pVTab); - int (*xDestroy)(sqlite3_vtab *pVTab); - int (*xOpen)(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCursor); - int (*xClose)(sqlite3_vtab_cursor*); - int (*xFilter)(sqlite3_vtab_cursor*, int idxNum, const char *idxStr, - int argc, sqlite3_value **argv); - int (*xNext)(sqlite3_vtab_cursor*); - int (*xEof)(sqlite3_vtab_cursor*); - int (*xColumn)(sqlite3_vtab_cursor*, sqlite3_context*, int); - int (*xRowid)(sqlite3_vtab_cursor*, sqlite3_int64 *pRowid); - int (*xUpdate)(sqlite3_vtab *, int, sqlite3_value **, sqlite3_int64 *); - int (*xBegin)(sqlite3_vtab *pVTab); - int (*xSync)(sqlite3_vtab *pVTab); - int (*xCommit)(sqlite3_vtab *pVTab); - int (*xRollback)(sqlite3_vtab *pVTab); - int (*xFindFunction)(sqlite3_vtab *pVtab, int nArg, const char *zName, - void (**pxFunc)(sqlite3_context*,int,sqlite3_value**), - void **ppArg); - int (*xRename)(sqlite3_vtab *pVtab, const char *zNew); -}; - -/* -** CAPI3REF: Virtual Table Indexing Information -** KEYWORDS: sqlite3_index_info -** -** The sqlite3_index_info structure and its substructures is used to -** pass information into and receive the reply from the [xBestIndex] -** method of a [virtual table module]. The fields under **Inputs** are the -** inputs to xBestIndex and are read-only. xBestIndex inserts its -** results into the **Outputs** fields. -** -** ^(The aConstraint[] array records WHERE clause constraints of the form: -** -** <pre>column OP expr</pre> -** -** where OP is =, <, <=, >, or >=.)^ ^(The particular operator is -** stored in aConstraint[].op.)^ ^(The index of the column is stored in -** aConstraint[].iColumn.)^ ^(aConstraint[].usable is TRUE if the -** expr on the right-hand side can be evaluated (and thus the constraint -** is usable) and false if it cannot.)^ -** -** ^The optimizer automatically inverts terms of the form "expr OP column" -** and makes other simplifications to the WHERE clause in an attempt to -** get as many WHERE clause terms into the form shown above as possible. -** ^The aConstraint[] array only reports WHERE clause terms that are -** relevant to the particular virtual table being queried. -** -** ^Information about the ORDER BY clause is stored in aOrderBy[]. -** ^Each term of aOrderBy records a column of the ORDER BY clause. -** -** The [xBestIndex] method must fill aConstraintUsage[] with information -** about what parameters to pass to xFilter. ^If argvIndex>0 then -** the right-hand side of the corresponding aConstraint[] is evaluated -** and becomes the argvIndex-th entry in argv. ^(If aConstraintUsage[].omit -** is true, then the constraint is assumed to be fully handled by the -** virtual table and is not checked again by SQLite.)^ -** -** ^The idxNum and idxPtr values are recorded and passed into the -** [xFilter] method. -** ^[sqlite3_free()] is used to free idxPtr if and only if -** needToFreeIdxPtr is true. -** -** ^The orderByConsumed means that output from [xFilter]/[xNext] will occur in -** the correct order to satisfy the ORDER BY clause so that no separate -** sorting step is required. -** -** ^The estimatedCost value is an estimate of the cost of doing the -** particular lookup. 
A full scan of a table with N entries should have -** a cost of N. A binary search of a table of N entries should have a -** cost of approximately log(N). -*/ -struct sqlite3_index_info { - /* Inputs */ - int nConstraint; /* Number of entries in aConstraint */ - struct sqlite3_index_constraint { - int iColumn; /* Column on left-hand side of constraint */ - unsigned char op; /* Constraint operator */ - unsigned char usable; /* True if this constraint is usable */ - int iTermOffset; /* Used internally - xBestIndex should ignore */ - } *aConstraint; /* Table of WHERE clause constraints */ - int nOrderBy; /* Number of terms in the ORDER BY clause */ - struct sqlite3_index_orderby { - int iColumn; /* Column number */ - unsigned char desc; /* True for DESC. False for ASC. */ - } *aOrderBy; /* The ORDER BY clause */ - /* Outputs */ - struct sqlite3_index_constraint_usage { - int argvIndex; /* if >0, constraint is part of argv to xFilter */ - unsigned char omit; /* Do not code a test for this constraint */ - } *aConstraintUsage; - int idxNum; /* Number used to identify the index */ - char *idxStr; /* String, possibly obtained from sqlite3_malloc */ - int needToFreeIdxStr; /* Free idxStr using sqlite3_free() if true */ - int orderByConsumed; /* True if output is already ordered */ - double estimatedCost; /* Estimated cost of using this index */ -}; -#define SQLITE_INDEX_CONSTRAINT_EQ 2 -#define SQLITE_INDEX_CONSTRAINT_GT 4 -#define SQLITE_INDEX_CONSTRAINT_LE 8 -#define SQLITE_INDEX_CONSTRAINT_LT 16 -#define SQLITE_INDEX_CONSTRAINT_GE 32 -#define SQLITE_INDEX_CONSTRAINT_MATCH 64 - -/* -** CAPI3REF: Register A Virtual Table Implementation -** -** ^These routines are used to register a new [virtual table module] name. -** ^Module names must be registered before -** creating a new [virtual table] using the module and before using a -** preexisting [virtual table] for the module. -** -** ^The module name is registered on the [database connection] specified -** by the first parameter. ^The name of the module is given by the -** second parameter. ^The third parameter is a pointer to -** the implementation of the [virtual table module]. ^The fourth -** parameter is an arbitrary client data pointer that is passed through -** into the [xCreate] and [xConnect] methods of the virtual table module -** when a new virtual table is be being created or reinitialized. -** -** ^The sqlite3_create_module_v2() interface has a fifth parameter which -** is a pointer to a destructor for the pClientData. ^SQLite will -** invoke the destructor function (if it is not NULL) when SQLite -** no longer needs the pClientData pointer. ^The sqlite3_create_module() -** interface is equivalent to sqlite3_create_module_v2() with a NULL -** destructor. 
-*/ -SQLITE_API int sqlite3_create_module( - sqlite3 *db, /* SQLite connection to register module with */ - const char *zName, /* Name of the module */ - const sqlite3_module *p, /* Methods for the module */ - void *pClientData /* Client data for xCreate/xConnect */ -); -SQLITE_API int sqlite3_create_module_v2( - sqlite3 *db, /* SQLite connection to register module with */ - const char *zName, /* Name of the module */ - const sqlite3_module *p, /* Methods for the module */ - void *pClientData, /* Client data for xCreate/xConnect */ - void(*xDestroy)(void*) /* Module destructor function */ -); - -/* -** CAPI3REF: Virtual Table Instance Object -** KEYWORDS: sqlite3_vtab -** -** Every [virtual table module] implementation uses a subclass -** of this object to describe a particular instance -** of the [virtual table]. Each subclass will -** be tailored to the specific needs of the module implementation. -** The purpose of this superclass is to define certain fields that are -** common to all module implementations. -** -** ^Virtual tables methods can set an error message by assigning a -** string obtained from [sqlite3_mprintf()] to zErrMsg. The method should -** take care that any prior string is freed by a call to [sqlite3_free()] -** prior to assigning a new string to zErrMsg. ^After the error message -** is delivered up to the client application, the string will be automatically -** freed by sqlite3_free() and the zErrMsg field will be zeroed. -*/ -struct sqlite3_vtab { - const sqlite3_module *pModule; /* The module for this virtual table */ - int nRef; /* NO LONGER USED */ - char *zErrMsg; /* Error message from sqlite3_mprintf() */ - /* Virtual table implementations will typically add additional fields */ -}; - -/* -** CAPI3REF: Virtual Table Cursor Object -** KEYWORDS: sqlite3_vtab_cursor {virtual table cursor} -** -** Every [virtual table module] implementation uses a subclass of the -** following structure to describe cursors that point into the -** [virtual table] and are used -** to loop through the virtual table. Cursors are created using the -** [sqlite3_module.xOpen | xOpen] method of the module and are destroyed -** by the [sqlite3_module.xClose | xClose] method. Cursors are used -** by the [xFilter], [xNext], [xEof], [xColumn], and [xRowid] methods -** of the module. Each module implementation will define -** the content of a cursor structure to suit its own needs. -** -** This superclass exists in order to define fields of the cursor that -** are common to all implementations. -*/ -struct sqlite3_vtab_cursor { - sqlite3_vtab *pVtab; /* Virtual table of this cursor */ - /* Virtual table implementations will typically add additional fields */ -}; - -/* -** CAPI3REF: Declare The Schema Of A Virtual Table -** -** ^The [xCreate] and [xConnect] methods of a -** [virtual table module] call this interface -** to declare the format (the names and datatypes of the columns) of -** the virtual tables they implement. -*/ -SQLITE_API int sqlite3_declare_vtab(sqlite3*, const char *zSQL); - -/* -** CAPI3REF: Overload A Function For A Virtual Table -** -** ^(Virtual tables can provide alternative implementations of functions -** using the [xFindFunction] method of the [virtual table module]. -** But global versions of those functions -** must exist in order to be overloaded.)^ -** -** ^(This API makes sure a global version of a function with a particular -** name and number of parameters exists. 
If no such function exists -** before this API is called, a new function is created.)^ ^The implementation -** of the new function always causes an exception to be thrown. So -** the new function is not good for anything by itself. Its only -** purpose is to be a placeholder function that can be overloaded -** by a [virtual table]. -*/ -SQLITE_API int sqlite3_overload_function(sqlite3*, const char *zFuncName, int nArg); - -/* -** The interface to the virtual-table mechanism defined above (back up -** to a comment remarkably similar to this one) is currently considered -** to be experimental. The interface might change in incompatible ways. -** If this is a problem for you, do not use the interface at this time. -** -** When the virtual-table mechanism stabilizes, we will declare the -** interface fixed, support it indefinitely, and remove this comment. -*/ - -/* -** CAPI3REF: A Handle To An Open BLOB -** KEYWORDS: {BLOB handle} {BLOB handles} -** -** An instance of this object represents an open BLOB on which -** [sqlite3_blob_open | incremental BLOB I/O] can be performed. -** ^Objects of this type are created by [sqlite3_blob_open()] -** and destroyed by [sqlite3_blob_close()]. -** ^The [sqlite3_blob_read()] and [sqlite3_blob_write()] interfaces -** can be used to read or write small subsections of the BLOB. -** ^The [sqlite3_blob_bytes()] interface returns the size of the BLOB in bytes. -*/ -typedef struct sqlite3_blob sqlite3_blob; - -/* -** CAPI3REF: Open A BLOB For Incremental I/O -** -** ^(This interfaces opens a [BLOB handle | handle] to the BLOB located -** in row iRow, column zColumn, table zTable in database zDb; -** in other words, the same BLOB that would be selected by: -** -** <pre> -** SELECT zColumn FROM zDb.zTable WHERE [rowid] = iRow; -** </pre>)^ -** -** ^If the flags parameter is non-zero, then the BLOB is opened for read -** and write access. ^If it is zero, the BLOB is opened for read access. -** ^It is not possible to open a column that is part of an index or primary -** key for writing. ^If [foreign key constraints] are enabled, it is -** not possible to open a column that is part of a [child key] for writing. -** -** ^Note that the database name is not the filename that contains -** the database but rather the symbolic name of the database that -** appears after the AS keyword when the database is connected using [ATTACH]. -** ^For the main database file, the database name is "main". -** ^For TEMP tables, the database name is "temp". -** -** ^(On success, [SQLITE_OK] is returned and the new [BLOB handle] is written -** to *ppBlob. Otherwise an [error code] is returned and *ppBlob is set -** to be a null pointer.)^ -** ^This function sets the [database connection] error code and message -** accessible via [sqlite3_errcode()] and [sqlite3_errmsg()] and related -** functions. ^Note that the *ppBlob variable is always initialized in a -** way that makes it safe to invoke [sqlite3_blob_close()] on *ppBlob -** regardless of the success or failure of this routine. -** -** ^(If the row that a BLOB handle points to is modified by an -** [UPDATE], [DELETE], or by [ON CONFLICT] side-effects -** then the BLOB handle is marked as "expired". -** This is true if any column of the row is changed, even a column -** other than the one the BLOB handle is open on.)^ -** ^Calls to [sqlite3_blob_read()] and [sqlite3_blob_write()] for -** a expired BLOB handle fail with an return code of [SQLITE_ABORT]. 
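A hedged sketch of the incremental BLOB I/O interfaces (not part of sqlite3.h; the table "files", column "data", and helper read_blob_head() are hypothetical):

    #include <sqlite3.h>

    /* Read up to 16 bytes from the start of the BLOB stored in row 1 of
    ** main.files.data, opened read-only (flags==0). */
    static int read_blob_head(sqlite3 *db, unsigned char *aOut, int *pnOut){
      sqlite3_blob *pBlob = 0;
      int rc = sqlite3_blob_open(db, "main", "files", "data", 1, 0, &pBlob);
      if( rc==SQLITE_OK ){
        int n = sqlite3_blob_bytes(pBlob);
        if( n>16 ) n = 16;
        rc = sqlite3_blob_read(pBlob, aOut, n, 0);
        if( rc==SQLITE_OK ) *pnOut = n;
      }
      sqlite3_blob_close(pBlob);    /* harmless no-op when pBlob is NULL */
      return rc;
    }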
-** ^(Changes written into a BLOB prior to the BLOB expiring are not -** rolled back by the expiration of the BLOB. Such changes will eventually -** commit if the transaction continues to completion.)^ -** -** ^Use the [sqlite3_blob_bytes()] interface to determine the size of -** the opened blob. ^The size of a blob may not be changed by this -** interface. Use the [UPDATE] SQL command to change the size of a -** blob. -** -** ^The [sqlite3_bind_zeroblob()] and [sqlite3_result_zeroblob()] interfaces -** and the built-in [zeroblob] SQL function can be used, if desired, -** to create an empty, zero-filled blob in which to read or write using -** this interface. -** -** To avoid a resource leak, every open [BLOB handle] should eventually -** be released by a call to [sqlite3_blob_close()]. -*/ -SQLITE_API int sqlite3_blob_open( - sqlite3*, - const char *zDb, - const char *zTable, - const char *zColumn, - sqlite3_int64 iRow, - int flags, - sqlite3_blob **ppBlob -); - -/* -** CAPI3REF: Close A BLOB Handle -** -** ^Closes an open [BLOB handle]. -** -** ^Closing a BLOB shall cause the current transaction to commit -** if there are no other BLOBs, no pending prepared statements, and the -** database connection is in [autocommit mode]. -** ^If any writes were made to the BLOB, they might be held in cache -** until the close operation if they will fit. -** -** ^(Closing the BLOB often forces the changes -** out to disk and so if any I/O errors occur, they will likely occur -** at the time when the BLOB is closed. Any errors that occur during -** closing are reported as a non-zero return value.)^ -** -** ^(The BLOB is closed unconditionally. Even if this routine returns -** an error code, the BLOB is still closed.)^ -** -** ^Calling this routine with a null pointer (such as would be returned -** by a failed call to [sqlite3_blob_open()]) is a harmless no-op. -*/ -SQLITE_API int sqlite3_blob_close(sqlite3_blob *); - -/* -** CAPI3REF: Return The Size Of An Open BLOB -** -** ^Returns the size in bytes of the BLOB accessible via the -** successfully opened [BLOB handle] in its only argument. ^The -** incremental blob I/O routines can only read or overwriting existing -** blob content; they cannot change the size of a blob. -** -** This routine only works on a [BLOB handle] which has been created -** by a prior successful call to [sqlite3_blob_open()] and which has not -** been closed by [sqlite3_blob_close()]. Passing any other pointer in -** to this routine results in undefined and probably undesirable behavior. -*/ -SQLITE_API int sqlite3_blob_bytes(sqlite3_blob *); - -/* -** CAPI3REF: Read Data From A BLOB Incrementally -** -** ^(This function is used to read data from an open [BLOB handle] into a -** caller-supplied buffer. N bytes of data are copied into buffer Z -** from the open BLOB, starting at offset iOffset.)^ -** -** ^If offset iOffset is less than N bytes from the end of the BLOB, -** [SQLITE_ERROR] is returned and no data is read. ^If N or iOffset is -** less than zero, [SQLITE_ERROR] is returned and no data is read. -** ^The size of the blob (and hence the maximum value of N+iOffset) -** can be determined using the [sqlite3_blob_bytes()] interface. -** -** ^An attempt to read from an expired [BLOB handle] fails with an -** error code of [SQLITE_ABORT]. -** -** ^(On success, sqlite3_blob_read() returns SQLITE_OK. 
-** Otherwise, an [error code] or an [extended error code] is returned.)^ -** -** This routine only works on a [BLOB handle] which has been created -** by a prior successful call to [sqlite3_blob_open()] and which has not -** been closed by [sqlite3_blob_close()]. Passing any other pointer in -** to this routine results in undefined and probably undesirable behavior. -** -** See also: [sqlite3_blob_write()]. -*/ -SQLITE_API int sqlite3_blob_read(sqlite3_blob *, void *Z, int N, int iOffset); - -/* -** CAPI3REF: Write Data Into A BLOB Incrementally -** -** ^This function is used to write data into an open [BLOB handle] from a -** caller-supplied buffer. ^N bytes of data are copied from the buffer Z -** into the open BLOB, starting at offset iOffset. -** -** ^If the [BLOB handle] passed as the first argument was not opened for -** writing (the flags parameter to [sqlite3_blob_open()] was zero), -** this function returns [SQLITE_READONLY]. -** -** ^This function may only modify the contents of the BLOB; it is -** not possible to increase the size of a BLOB using this API. -** ^If offset iOffset is less than N bytes from the end of the BLOB, -** [SQLITE_ERROR] is returned and no data is written. ^If N is -** less than zero [SQLITE_ERROR] is returned and no data is written. -** The size of the BLOB (and hence the maximum value of N+iOffset) -** can be determined using the [sqlite3_blob_bytes()] interface. -** -** ^An attempt to write to an expired [BLOB handle] fails with an -** error code of [SQLITE_ABORT]. ^Writes to the BLOB that occurred -** before the [BLOB handle] expired are not rolled back by the -** expiration of the handle, though of course those changes might -** have been overwritten by the statement that expired the BLOB handle -** or by other independent statements. -** -** ^(On success, sqlite3_blob_write() returns SQLITE_OK. -** Otherwise, an [error code] or an [extended error code] is returned.)^ -** -** This routine only works on a [BLOB handle] which has been created -** by a prior successful call to [sqlite3_blob_open()] and which has not -** been closed by [sqlite3_blob_close()]. Passing any other pointer in -** to this routine results in undefined and probably undesirable behavior. -** -** See also: [sqlite3_blob_read()]. -*/ -SQLITE_API int sqlite3_blob_write(sqlite3_blob *, const void *z, int n, int iOffset); - -/* -** CAPI3REF: Virtual File System Objects -** -** A virtual filesystem (VFS) is an [sqlite3_vfs] object -** that SQLite uses to interact -** with the underlying operating system. Most SQLite builds come with a -** single default VFS that is appropriate for the host computer. -** New VFSes can be registered and existing VFSes can be unregistered. -** The following interfaces are provided. -** -** ^The sqlite3_vfs_find() interface returns a pointer to a VFS given its name. -** ^Names are case sensitive. -** ^Names are zero-terminated UTF-8 strings. -** ^If there is no match, a NULL pointer is returned. -** ^If zVfsName is NULL then the default VFS is returned. -** -** ^New VFSes are registered with sqlite3_vfs_register(). -** ^Each new VFS becomes the default VFS if the makeDflt flag is set. -** ^The same VFS can be registered multiple times without injury. -** ^To make an existing VFS into the default VFS, register it again -** with the makeDflt flag set. If two different VFSes with the -** same name are registered, the behavior is undefined. If a -** VFS is registered with a name that is NULL or an empty string, -** then the behavior is undefined. 
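A minimal sketch of how the incremental BLOB I/O routines above (sqlite3_blob_open, sqlite3_blob_bytes, sqlite3_blob_read, sqlite3_blob_close) fit together. The table name "images", the column "data", and the rowid are hypothetical placeholders, and error handling is reduced to early returns:

    #include <stdlib.h>
    #include "sqlite3.h"

    /* Read the BLOB stored in images.data for the given rowid into a
    ** freshly allocated buffer.  Returns NULL on any error. */
    static unsigned char *read_image(sqlite3 *db, sqlite3_int64 rowid, int *pnByte){
      sqlite3_blob *pBlob = 0;
      unsigned char *pBuf = 0;
      int nByte;
      /* Open the BLOB read-only (flags argument 0 means read access). */
      if( sqlite3_blob_open(db, "main", "images", "data", rowid, 0, &pBlob)!=SQLITE_OK ){
        return 0;
      }
      nByte = sqlite3_blob_bytes(pBlob);           /* total size of the BLOB */
      pBuf = malloc(nByte>0 ? nByte : 1);
      if( pBuf && sqlite3_blob_read(pBlob, pBuf, nByte, 0)!=SQLITE_OK ){
        free(pBuf);
        pBuf = 0;
      }
      sqlite3_blob_close(pBlob);                   /* safe even after a failed read */
      if( pBuf ) *pnByte = nByte;
      return pBuf;
    }

The same handle could be opened with a non-zero flags argument and written with sqlite3_blob_write(), subject to the size restriction described above.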
-** -** ^Unregister a VFS with the sqlite3_vfs_unregister() interface. -** ^(If the default VFS is unregistered, another VFS is chosen as -** the default. The choice for the new VFS is arbitrary.)^ -*/ -SQLITE_API sqlite3_vfs *sqlite3_vfs_find(const char *zVfsName); -SQLITE_API int sqlite3_vfs_register(sqlite3_vfs*, int makeDflt); -SQLITE_API int sqlite3_vfs_unregister(sqlite3_vfs*); - -/* -** CAPI3REF: Mutexes -** -** The SQLite core uses these routines for thread -** synchronization. Though they are intended for internal -** use by SQLite, code that links against SQLite is -** permitted to use any of these routines. -** -** The SQLite source code contains multiple implementations -** of these mutex routines. An appropriate implementation -** is selected automatically at compile-time. ^(The following -** implementations are available in the SQLite core: -** -** <ul> -** <li> SQLITE_MUTEX_OS2 -** <li> SQLITE_MUTEX_PTHREAD -** <li> SQLITE_MUTEX_W32 -** <li> SQLITE_MUTEX_NOOP -** </ul>)^ -** -** ^The SQLITE_MUTEX_NOOP implementation is a set of routines -** that does no real locking and is appropriate for use in -** a single-threaded application. ^The SQLITE_MUTEX_OS2, -** SQLITE_MUTEX_PTHREAD, and SQLITE_MUTEX_W32 implementations -** are appropriate for use on OS/2, Unix, and Windows. -** -** ^(If SQLite is compiled with the SQLITE_MUTEX_APPDEF preprocessor -** macro defined (with "-DSQLITE_MUTEX_APPDEF=1"), then no mutex -** implementation is included with the library. In this case the -** application must supply a custom mutex implementation using the -** [SQLITE_CONFIG_MUTEX] option of the sqlite3_config() function -** before calling sqlite3_initialize() or any other public sqlite3_ -** function that calls sqlite3_initialize().)^ -** -** ^The sqlite3_mutex_alloc() routine allocates a new -** mutex and returns a pointer to it. ^If it returns NULL -** that means that a mutex could not be allocated. ^SQLite -** will unwind its stack and return an error. ^(The argument -** to sqlite3_mutex_alloc() is one of these integer constants: -** -** <ul> -** <li> SQLITE_MUTEX_FAST -** <li> SQLITE_MUTEX_RECURSIVE -** <li> SQLITE_MUTEX_STATIC_MASTER -** <li> SQLITE_MUTEX_STATIC_MEM -** <li> SQLITE_MUTEX_STATIC_MEM2 -** <li> SQLITE_MUTEX_STATIC_PRNG -** <li> SQLITE_MUTEX_STATIC_LRU -** <li> SQLITE_MUTEX_STATIC_LRU2 -** </ul>)^ -** -** ^The first two constants (SQLITE_MUTEX_FAST and SQLITE_MUTEX_RECURSIVE) -** cause sqlite3_mutex_alloc() to create -** a new mutex. ^The new mutex is recursive when SQLITE_MUTEX_RECURSIVE -** is used but not necessarily so when SQLITE_MUTEX_FAST is used. -** The mutex implementation does not need to make a distinction -** between SQLITE_MUTEX_RECURSIVE and SQLITE_MUTEX_FAST if it does -** not want to. ^SQLite will only request a recursive mutex in -** cases where it really needs one. ^If a faster non-recursive mutex -** implementation is available on the host platform, the mutex subsystem -** might return such a mutex in response to SQLITE_MUTEX_FAST. -** -** ^The other allowed parameters to sqlite3_mutex_alloc() (anything other -** than SQLITE_MUTEX_FAST and SQLITE_MUTEX_RECURSIVE) each return -** a pointer to a static preexisting mutex. ^Six static mutexes are -** used by the current version of SQLite. Future versions of SQLite -** may add additional static mutexes. Static mutexes are for internal -** use by SQLite only. Applications that use SQLite mutexes should -** use only the dynamic mutexes returned by SQLITE_MUTEX_FAST or -** SQLITE_MUTEX_RECURSIVE. 
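A short sketch of the VFS lookup and registration interfaces just declared: passing NULL to sqlite3_vfs_find() returns the default VFS, and re-registering an existing VFS with makeDflt set promotes it to the default. The VFS name "unix-none" is an assumption (it is present in Unix builds of this era); substitute any VFS your build actually provides.

    #include <stdio.h>
    #include "sqlite3.h"

    int main(void){
      sqlite3_vfs *pDefault = sqlite3_vfs_find(0);           /* current default VFS */
      sqlite3_vfs *pNoLock  = sqlite3_vfs_find("unix-none"); /* assumption: Unix builds only */

      printf("default VFS: %s\n", pDefault ? pDefault->zName : "(none)");

      if( pNoLock ){
        /* Re-registering an existing VFS with makeDflt=1 makes it the default. */
        sqlite3_vfs_register(pNoLock, 1);
      }
      return 0;
    }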
-** -** ^Note that if one of the dynamic mutex parameters (SQLITE_MUTEX_FAST -** or SQLITE_MUTEX_RECURSIVE) is used then sqlite3_mutex_alloc() -** returns a different mutex on every call. ^But for the static -** mutex types, the same mutex is returned on every call that has -** the same type number. -** -** ^The sqlite3_mutex_free() routine deallocates a previously -** allocated dynamic mutex. ^SQLite is careful to deallocate every -** dynamic mutex that it allocates. The dynamic mutexes must not be in -** use when they are deallocated. Attempting to deallocate a static -** mutex results in undefined behavior. ^SQLite never deallocates -** a static mutex. -** -** ^The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt -** to enter a mutex. ^If another thread is already within the mutex, -** sqlite3_mutex_enter() will block and sqlite3_mutex_try() will return -** SQLITE_BUSY. ^The sqlite3_mutex_try() interface returns [SQLITE_OK] -** upon successful entry. ^(Mutexes created using -** SQLITE_MUTEX_RECURSIVE can be entered multiple times by the same thread. -** In such cases the, -** mutex must be exited an equal number of times before another thread -** can enter.)^ ^(If the same thread tries to enter any other -** kind of mutex more than once, the behavior is undefined. -** SQLite will never exhibit -** such behavior in its own use of mutexes.)^ -** -** ^(Some systems (for example, Windows 95) do not support the operation -** implemented by sqlite3_mutex_try(). On those systems, sqlite3_mutex_try() -** will always return SQLITE_BUSY. The SQLite core only ever uses -** sqlite3_mutex_try() as an optimization so this is acceptable behavior.)^ -** -** ^The sqlite3_mutex_leave() routine exits a mutex that was -** previously entered by the same thread. ^(The behavior -** is undefined if the mutex is not currently entered by the -** calling thread or is not currently allocated. SQLite will -** never do either.)^ -** -** ^If the argument to sqlite3_mutex_enter(), sqlite3_mutex_try(), or -** sqlite3_mutex_leave() is a NULL pointer, then all three routines -** behave as no-ops. -** -** See also: [sqlite3_mutex_held()] and [sqlite3_mutex_notheld()]. -*/ -SQLITE_API sqlite3_mutex *sqlite3_mutex_alloc(int); -SQLITE_API void sqlite3_mutex_free(sqlite3_mutex*); -SQLITE_API void sqlite3_mutex_enter(sqlite3_mutex*); -SQLITE_API int sqlite3_mutex_try(sqlite3_mutex*); -SQLITE_API void sqlite3_mutex_leave(sqlite3_mutex*); - -/* -** CAPI3REF: Mutex Methods Object -** -** An instance of this structure defines the low-level routines -** used to allocate and use mutexes. -** -** Usually, the default mutex implementations provided by SQLite are -** sufficient, however the user has the option of substituting a custom -** implementation for specialized deployments or systems for which SQLite -** does not provide a suitable implementation. In this case, the user -** creates and populates an instance of this structure to pass -** to sqlite3_config() along with the [SQLITE_CONFIG_MUTEX] option. -** Additionally, an instance of this structure can be used as an -** output variable when querying the system for the current mutex -** implementation, using the [SQLITE_CONFIG_GETMUTEX] option. -** -** ^The xMutexInit method defined by this structure is invoked as -** part of system initialization by the sqlite3_initialize() function. -** ^The xMutexInit routine is calle by SQLite exactly once for each -** effective call to [sqlite3_initialize()]. 
-** -** ^The xMutexEnd method defined by this structure is invoked as -** part of system shutdown by the sqlite3_shutdown() function. The -** implementation of this method is expected to release all outstanding -** resources obtained by the mutex methods implementation, especially -** those obtained by the xMutexInit method. ^The xMutexEnd() -** interface is invoked exactly once for each call to [sqlite3_shutdown()]. -** -** ^(The remaining seven methods defined by this structure (xMutexAlloc, -** xMutexFree, xMutexEnter, xMutexTry, xMutexLeave, xMutexHeld and -** xMutexNotheld) implement the following interfaces (respectively): -** -** <ul> -** <li> [sqlite3_mutex_alloc()] </li> -** <li> [sqlite3_mutex_free()] </li> -** <li> [sqlite3_mutex_enter()] </li> -** <li> [sqlite3_mutex_try()] </li> -** <li> [sqlite3_mutex_leave()] </li> -** <li> [sqlite3_mutex_held()] </li> -** <li> [sqlite3_mutex_notheld()] </li> -** </ul>)^ -** -** The only difference is that the public sqlite3_XXX functions enumerated -** above silently ignore any invocations that pass a NULL pointer instead -** of a valid mutex handle. The implementations of the methods defined -** by this structure are not required to handle this case, the results -** of passing a NULL pointer instead of a valid mutex handle are undefined -** (i.e. it is acceptable to provide an implementation that segfaults if -** it is passed a NULL pointer). -** -** The xMutexInit() method must be threadsafe. ^It must be harmless to -** invoke xMutexInit() mutiple times within the same process and without -** intervening calls to xMutexEnd(). Second and subsequent calls to -** xMutexInit() must be no-ops. -** -** ^xMutexInit() must not use SQLite memory allocation ([sqlite3_malloc()] -** and its associates). ^Similarly, xMutexAlloc() must not use SQLite memory -** allocation for a static mutex. ^However xMutexAlloc() may use SQLite -** memory allocation for a fast or recursive mutex. -** -** ^SQLite will invoke the xMutexEnd() method when [sqlite3_shutdown()] is -** called, but only if the prior call to xMutexInit returned SQLITE_OK. -** If xMutexInit fails in any way, it is expected to clean up after itself -** prior to returning. -*/ -typedef struct sqlite3_mutex_methods sqlite3_mutex_methods; -struct sqlite3_mutex_methods { - int (*xMutexInit)(void); - int (*xMutexEnd)(void); - sqlite3_mutex *(*xMutexAlloc)(int); - void (*xMutexFree)(sqlite3_mutex *); - void (*xMutexEnter)(sqlite3_mutex *); - int (*xMutexTry)(sqlite3_mutex *); - void (*xMutexLeave)(sqlite3_mutex *); - int (*xMutexHeld)(sqlite3_mutex *); - int (*xMutexNotheld)(sqlite3_mutex *); -}; - -/* -** CAPI3REF: Mutex Verification Routines -** -** The sqlite3_mutex_held() and sqlite3_mutex_notheld() routines -** are intended for use inside assert() statements. ^The SQLite core -** never uses these routines except inside an assert() and applications -** are advised to follow the lead of the core. ^The SQLite core only -** provides implementations for these routines when it is compiled -** with the SQLITE_DEBUG flag. ^External mutex implementations -** are only required to provide these routines if SQLITE_DEBUG is -** defined and if NDEBUG is not defined. -** -** ^These routines should return true if the mutex in their argument -** is held or not held, respectively, by the calling thread. -** -** ^The implementation is not required to provided versions of these -** routines that actually work. 
If the implementation does not provide working -** versions of these routines, it should at least provide stubs that always -** return true so that one does not get spurious assertion failures. -** -** ^If the argument to sqlite3_mutex_held() is a NULL pointer then -** the routine should return 1. This seems counter-intuitive since -** clearly the mutex cannot be held if it does not exist. But the -** the reason the mutex does not exist is because the build is not -** using mutexes. And we do not want the assert() containing the -** call to sqlite3_mutex_held() to fail, so a non-zero return is -** the appropriate thing to do. ^The sqlite3_mutex_notheld() -** interface should also return 1 when given a NULL pointer. -*/ -#ifndef NDEBUG -SQLITE_API int sqlite3_mutex_held(sqlite3_mutex*); -SQLITE_API int sqlite3_mutex_notheld(sqlite3_mutex*); -#endif - -/* -** CAPI3REF: Mutex Types -** -** The [sqlite3_mutex_alloc()] interface takes a single argument -** which is one of these integer constants. -** -** The set of static mutexes may change from one SQLite release to the -** next. Applications that override the built-in mutex logic must be -** prepared to accommodate additional static mutexes. -*/ -#define SQLITE_MUTEX_FAST 0 -#define SQLITE_MUTEX_RECURSIVE 1 -#define SQLITE_MUTEX_STATIC_MASTER 2 -#define SQLITE_MUTEX_STATIC_MEM 3 /* sqlite3_malloc() */ -#define SQLITE_MUTEX_STATIC_MEM2 4 /* NOT USED */ -#define SQLITE_MUTEX_STATIC_OPEN 4 /* sqlite3BtreeOpen() */ -#define SQLITE_MUTEX_STATIC_PRNG 5 /* sqlite3_random() */ -#define SQLITE_MUTEX_STATIC_LRU 6 /* lru page list */ -#define SQLITE_MUTEX_STATIC_LRU2 7 /* lru page list */ - -/* -** CAPI3REF: Retrieve the mutex for a database connection -** -** ^This interface returns a pointer the [sqlite3_mutex] object that -** serializes access to the [database connection] given in the argument -** when the [threading mode] is Serialized. -** ^If the [threading mode] is Single-thread or Multi-thread then this -** routine returns a NULL pointer. -*/ -SQLITE_API sqlite3_mutex *sqlite3_db_mutex(sqlite3*); - -/* -** CAPI3REF: Low-Level Control Of Database Files -** -** ^The [sqlite3_file_control()] interface makes a direct call to the -** xFileControl method for the [sqlite3_io_methods] object associated -** with a particular database identified by the second argument. ^The -** name of the database "main" for the main database or "temp" for the -** TEMP database, or the name that appears after the AS keyword for -** databases that are added using the [ATTACH] SQL command. -** ^A NULL pointer can be used in place of "main" to refer to the -** main database file. -** ^The third and fourth parameters to this routine -** are passed directly through to the second and third parameters of -** the xFileControl method. ^The return value of the xFileControl -** method becomes the return value of this routine. -** -** ^If the second parameter (zDbName) does not match the name of any -** open database file, then SQLITE_ERROR is returned. ^This error -** code is not remembered and will not be recalled by [sqlite3_errcode()] -** or [sqlite3_errmsg()]. The underlying xFileControl method might -** also return SQLITE_ERROR. There is no way to distinguish between -** an incorrect zDbName and an SQLITE_ERROR return from the underlying -** xFileControl method. 
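A sketch showing how sqlite3_db_mutex() combines with the mutex enter/leave routines declared earlier to read two connection-level values atomically with respect to other threads sharing the connection. In Single-thread or Multi-thread mode sqlite3_db_mutex() returns NULL, which the enter/leave routines treat as a no-op, so the helper stays safe in every threading mode:

    #include "sqlite3.h"

    /* Snapshot the last-insert rowid and the change count without another
    ** thread slipping a statement in between the two reads. */
    static void snapshot_counters(sqlite3 *db, sqlite3_int64 *pRowid, int *pChanges){
      sqlite3_mutex *pMutex = sqlite3_db_mutex(db); /* NULL unless Serialized mode */
      sqlite3_mutex_enter(pMutex);                  /* no-op when pMutex==NULL */
      *pRowid = sqlite3_last_insert_rowid(db);
      *pChanges = sqlite3_changes(db);
      sqlite3_mutex_leave(pMutex);
    }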
-** -** See also: [SQLITE_FCNTL_LOCKSTATE] -*/ -SQLITE_API int sqlite3_file_control(sqlite3*, const char *zDbName, int op, void*); - -/* -** CAPI3REF: Testing Interface -** -** ^The sqlite3_test_control() interface is used to read out internal -** state of SQLite and to inject faults into SQLite for testing -** purposes. ^The first parameter is an operation code that determines -** the number, meaning, and operation of all subsequent parameters. -** -** This interface is not for use by applications. It exists solely -** for verifying the correct operation of the SQLite library. Depending -** on how the SQLite library is compiled, this interface might not exist. -** -** The details of the operation codes, their meanings, the parameters -** they take, and what they do are all subject to change without notice. -** Unlike most of the SQLite API, this function is not guaranteed to -** operate consistently from one release to the next. -*/ -SQLITE_API int sqlite3_test_control(int op, ...); - -/* -** CAPI3REF: Testing Interface Operation Codes -** -** These constants are the valid operation code parameters used -** as the first argument to [sqlite3_test_control()]. -** -** These parameters and their meanings are subject to change -** without notice. These values are for testing purposes only. -** Applications should not use any of these parameters or the -** [sqlite3_test_control()] interface. -*/ -#define SQLITE_TESTCTRL_FIRST 5 -#define SQLITE_TESTCTRL_PRNG_SAVE 5 -#define SQLITE_TESTCTRL_PRNG_RESTORE 6 -#define SQLITE_TESTCTRL_PRNG_RESET 7 -#define SQLITE_TESTCTRL_BITVEC_TEST 8 -#define SQLITE_TESTCTRL_FAULT_INSTALL 9 -#define SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS 10 -#define SQLITE_TESTCTRL_PENDING_BYTE 11 -#define SQLITE_TESTCTRL_ASSERT 12 -#define SQLITE_TESTCTRL_ALWAYS 13 -#define SQLITE_TESTCTRL_RESERVE 14 -#define SQLITE_TESTCTRL_OPTIMIZATIONS 15 -#define SQLITE_TESTCTRL_ISKEYWORD 16 -#define SQLITE_TESTCTRL_LAST 16 - -/* -** CAPI3REF: SQLite Runtime Status -** -** ^This interface is used to retrieve runtime status information -** about the preformance of SQLite, and optionally to reset various -** highwater marks. ^The first argument is an integer code for -** the specific parameter to measure. ^(Recognized integer codes -** are of the form [SQLITE_STATUS_MEMORY_USED | SQLITE_STATUS_...].)^ -** ^The current value of the parameter is returned into *pCurrent. -** ^The highest recorded value is returned in *pHighwater. ^If the -** resetFlag is true, then the highest record value is reset after -** *pHighwater is written. ^(Some parameters do not record the highest -** value. For those parameters -** nothing is written into *pHighwater and the resetFlag is ignored.)^ -** ^(Other parameters record only the highwater mark and not the current -** value. For these latter parameters nothing is written into *pCurrent.)^ -** -** ^The sqlite3_db_status() routine returns SQLITE_OK on success and a -** non-zero [error code] on failure. -** -** This routine is threadsafe but is not atomic. This routine can be -** called while other threads are running the same or different SQLite -** interfaces. However the values returned in *pCurrent and -** *pHighwater reflect the status of SQLite at different points in time -** and it is possible that another thread might change the parameter -** in between the times when *pCurrent and *pHighwater are written. 
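A sketch of a sqlite3_file_control() call. It uses SQLITE_FCNTL_LOCKSTATE, which is defined earlier in this header and reports the lock state of the named database file as an int; not every VFS implements the opcode, so a non-OK return simply means the request was not understood rather than that something is broken:

    #include <stdio.h>
    #include "sqlite3.h"

    /* Print the low-level lock state of the main database file, if the
    ** underlying VFS supports the SQLITE_FCNTL_LOCKSTATE opcode. */
    static void show_lock_state(sqlite3 *db){
      int eLock = 0;
      int rc = sqlite3_file_control(db, "main", SQLITE_FCNTL_LOCKSTATE, &eLock);
      if( rc==SQLITE_OK ){
        printf("main database lock state: %d\n", eLock);
      }else{
        printf("lock-state query not supported (rc=%d)\n", rc);
      }
    }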
-** -** See also: [sqlite3_db_status()] -*/ -SQLITE_API int sqlite3_status(int op, int *pCurrent, int *pHighwater, int resetFlag); - - -/* -** CAPI3REF: Status Parameters -** -** These integer constants designate various run-time status parameters -** that can be returned by [sqlite3_status()]. -** -** <dl> -** ^(<dt>SQLITE_STATUS_MEMORY_USED</dt> -** <dd>This parameter is the current amount of memory checked out -** using [sqlite3_malloc()], either directly or indirectly. The -** figure includes calls made to [sqlite3_malloc()] by the application -** and internal memory usage by the SQLite library. Scratch memory -** controlled by [SQLITE_CONFIG_SCRATCH] and auxiliary page-cache -** memory controlled by [SQLITE_CONFIG_PAGECACHE] is not included in -** this parameter. The amount returned is the sum of the allocation -** sizes as reported by the xSize method in [sqlite3_mem_methods].</dd>)^ -** -** ^(<dt>SQLITE_STATUS_MALLOC_SIZE</dt> -** <dd>This parameter records the largest memory allocation request -** handed to [sqlite3_malloc()] or [sqlite3_realloc()] (or their -** internal equivalents). Only the value returned in the -** *pHighwater parameter to [sqlite3_status()] is of interest. -** The value written into the *pCurrent parameter is undefined.</dd>)^ -** -** ^(<dt>SQLITE_STATUS_PAGECACHE_USED</dt> -** <dd>This parameter returns the number of pages used out of the -** [pagecache memory allocator] that was configured using -** [SQLITE_CONFIG_PAGECACHE]. The -** value returned is in pages, not in bytes.</dd>)^ -** -** ^(<dt>SQLITE_STATUS_PAGECACHE_OVERFLOW</dt> -** <dd>This parameter returns the number of bytes of page cache -** allocation which could not be statisfied by the [SQLITE_CONFIG_PAGECACHE] -** buffer and where forced to overflow to [sqlite3_malloc()]. The -** returned value includes allocations that overflowed because they -** where too large (they were larger than the "sz" parameter to -** [SQLITE_CONFIG_PAGECACHE]) and allocations that overflowed because -** no space was left in the page cache.</dd>)^ -** -** ^(<dt>SQLITE_STATUS_PAGECACHE_SIZE</dt> -** <dd>This parameter records the largest memory allocation request -** handed to [pagecache memory allocator]. Only the value returned in the -** *pHighwater parameter to [sqlite3_status()] is of interest. -** The value written into the *pCurrent parameter is undefined.</dd>)^ -** -** ^(<dt>SQLITE_STATUS_SCRATCH_USED</dt> -** <dd>This parameter returns the number of allocations used out of the -** [scratch memory allocator] configured using -** [SQLITE_CONFIG_SCRATCH]. The value returned is in allocations, not -** in bytes. Since a single thread may only have one scratch allocation -** outstanding at time, this parameter also reports the number of threads -** using scratch memory at the same time.</dd>)^ -** -** ^(<dt>SQLITE_STATUS_SCRATCH_OVERFLOW</dt> -** <dd>This parameter returns the number of bytes of scratch memory -** allocation which could not be statisfied by the [SQLITE_CONFIG_SCRATCH] -** buffer and where forced to overflow to [sqlite3_malloc()]. The values -** returned include overflows because the requested allocation was too -** larger (that is, because the requested allocation was larger than the -** "sz" parameter to [SQLITE_CONFIG_SCRATCH]) and because no scratch buffer -** slots were available. -** </dd>)^ -** -** ^(<dt>SQLITE_STATUS_SCRATCH_SIZE</dt> -** <dd>This parameter records the largest memory allocation request -** handed to [scratch memory allocator]. 
Only the value returned in the -** *pHighwater parameter to [sqlite3_status()] is of interest. -** The value written into the *pCurrent parameter is undefined.</dd>)^ -** -** ^(<dt>SQLITE_STATUS_PARSER_STACK</dt> -** <dd>This parameter records the deepest parser stack. It is only -** meaningful if SQLite is compiled with [YYTRACKMAXSTACKDEPTH].</dd>)^ -** </dl> -** -** New status parameters may be added from time to time. -*/ -#define SQLITE_STATUS_MEMORY_USED 0 -#define SQLITE_STATUS_PAGECACHE_USED 1 -#define SQLITE_STATUS_PAGECACHE_OVERFLOW 2 -#define SQLITE_STATUS_SCRATCH_USED 3 -#define SQLITE_STATUS_SCRATCH_OVERFLOW 4 -#define SQLITE_STATUS_MALLOC_SIZE 5 -#define SQLITE_STATUS_PARSER_STACK 6 -#define SQLITE_STATUS_PAGECACHE_SIZE 7 -#define SQLITE_STATUS_SCRATCH_SIZE 8 - -/* -** CAPI3REF: Database Connection Status -** -** ^This interface is used to retrieve runtime status information -** about a single [database connection]. ^The first argument is the -** database connection object to be interrogated. ^The second argument -** is an integer constant, taken from the set of -** [SQLITE_DBSTATUS_LOOKASIDE_USED | SQLITE_DBSTATUS_*] macros, that -** determiness the parameter to interrogate. The set of -** [SQLITE_DBSTATUS_LOOKASIDE_USED | SQLITE_DBSTATUS_*] macros is likely -** to grow in future releases of SQLite. -** -** ^The current value of the requested parameter is written into *pCur -** and the highest instantaneous value is written into *pHiwtr. ^If -** the resetFlg is true, then the highest instantaneous value is -** reset back down to the current value. -** -** See also: [sqlite3_status()] and [sqlite3_stmt_status()]. -*/ -SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int resetFlg); - -/* -** CAPI3REF: Status Parameters for database connections -** -** These constants are the available integer "verbs" that can be passed as -** the second argument to the [sqlite3_db_status()] interface. -** -** New verbs may be added in future releases of SQLite. Existing verbs -** might be discontinued. Applications should check the return code from -** [sqlite3_db_status()] to make sure that the call worked. -** The [sqlite3_db_status()] interface will return a non-zero error code -** if a discontinued or unsupported verb is invoked. -** -** <dl> -** ^(<dt>SQLITE_DBSTATUS_LOOKASIDE_USED</dt> -** <dd>This parameter returns the number of lookaside memory slots currently -** checked out.</dd>)^ -** -** <dt>SQLITE_DBSTATUS_CACHE_USED</dt> -** <dd>^This parameter returns the approximate number of of bytes of heap -** memory used by all pager caches associated with the database connection. -** ^The highwater mark associated with SQLITE_DBSTATUS_CACHE_USED is always 0. -** checked out.</dd>)^ -** </dl> -*/ -#define SQLITE_DBSTATUS_LOOKASIDE_USED 0 -#define SQLITE_DBSTATUS_CACHE_USED 1 -#define SQLITE_DBSTATUS_MAX 1 /* Largest defined DBSTATUS */ - - -/* -** CAPI3REF: Prepared Statement Status -** -** ^(Each prepared statement maintains various -** [SQLITE_STMTSTATUS_SORT | counters] that measure the number -** of times it has performed specific operations.)^ These counters can -** be used to monitor the performance characteristics of the prepared -** statements. For example, if the number of table steps greatly exceeds -** the number of table searches or result rows, that would tend to indicate -** that the prepared statement is using a full table scan rather than -** an index. 
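A sketch combining sqlite3_status() and sqlite3_db_status() with the opcodes defined above: read the global heap usage and one connection's lookaside usage. The reset flags are left at 0 so the highwater marks are not cleared:

    #include <stdio.h>
    #include "sqlite3.h"

    static void report_memory(sqlite3 *db){
      int cur = 0, hiwtr = 0;

      /* Global heap usage tracked by sqlite3_malloc()/sqlite3_free(). */
      sqlite3_status(SQLITE_STATUS_MEMORY_USED, &cur, &hiwtr, 0);
      printf("heap in use: %d bytes (peak %d)\n", cur, hiwtr);

      /* Per-connection lookaside slots currently checked out. */
      sqlite3_db_status(db, SQLITE_DBSTATUS_LOOKASIDE_USED, &cur, &hiwtr, 0);
      printf("lookaside slots in use: %d (peak %d)\n", cur, hiwtr);
    }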
-** -** ^(This interface is used to retrieve and reset counter values from -** a [prepared statement]. The first argument is the prepared statement -** object to be interrogated. The second argument -** is an integer code for a specific [SQLITE_STMTSTATUS_SORT | counter] -** to be interrogated.)^ -** ^The current value of the requested counter is returned. -** ^If the resetFlg is true, then the counter is reset to zero after this -** interface call returns. -** -** See also: [sqlite3_status()] and [sqlite3_db_status()]. -*/ -SQLITE_API int sqlite3_stmt_status(sqlite3_stmt*, int op,int resetFlg); - -/* -** CAPI3REF: Status Parameters for prepared statements -** -** These preprocessor macros define integer codes that name counter -** values associated with the [sqlite3_stmt_status()] interface. -** The meanings of the various counters are as follows: -** -** <dl> -** <dt>SQLITE_STMTSTATUS_FULLSCAN_STEP</dt> -** <dd>^This is the number of times that SQLite has stepped forward in -** a table as part of a full table scan. Large numbers for this counter -** may indicate opportunities for performance improvement through -** careful use of indices.</dd> -** -** <dt>SQLITE_STMTSTATUS_SORT</dt> -** <dd>^This is the number of sort operations that have occurred. -** A non-zero value in this counter may indicate an opportunity to -** improvement performance through careful use of indices.</dd> -** -** <dt>SQLITE_STMTSTATUS_AUTOINDEX</dt> -** <dd>^This is the number of rows inserted into transient indices that -** were created automatically in order to help joins run faster. -** A non-zero value in this counter may indicate an opportunity to -** improvement performance by adding permanent indices that do not -** need to be reinitialized each time the statement is run.</dd> -** -** </dl> -*/ -#define SQLITE_STMTSTATUS_FULLSCAN_STEP 1 -#define SQLITE_STMTSTATUS_SORT 2 -#define SQLITE_STMTSTATUS_AUTOINDEX 3 - -/* -** CAPI3REF: Custom Page Cache Object -** -** The sqlite3_pcache type is opaque. It is implemented by -** the pluggable module. The SQLite core has no knowledge of -** its size or internal structure and never deals with the -** sqlite3_pcache object except by holding and passing pointers -** to the object. -** -** See [sqlite3_pcache_methods] for additional information. -*/ -typedef struct sqlite3_pcache sqlite3_pcache; - -/* -** CAPI3REF: Application Defined Page Cache. -** KEYWORDS: {page cache} -** -** ^(The [sqlite3_config]([SQLITE_CONFIG_PCACHE], ...) interface can -** register an alternative page cache implementation by passing in an -** instance of the sqlite3_pcache_methods structure.)^ The majority of the -** heap memory used by SQLite is used by the page cache to cache data read -** from, or ready to be written to, the database file. By implementing a -** custom page cache using this API, an application can control more -** precisely the amount of memory consumed by SQLite, the way in which -** that memory is allocated and released, and the policies used to -** determine exactly which parts of a database file are cached and for -** how long. -** -** ^(The contents of the sqlite3_pcache_methods structure are copied to an -** internal buffer by SQLite within the call to [sqlite3_config]. Hence -** the application may discard the parameter after the call to -** [sqlite3_config()] returns.)^ -** -** ^The xInit() method is called once for each call to [sqlite3_initialize()] -** (usually only once during the lifetime of the process). 
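A sketch of sampling the statement counters defined above after a query has run, to spot full table scans or implicit sorts. The SQL text is whatever the caller supplies, and resetFlg is left at 0 so the counters keep accumulating across runs:

    #include <stdio.h>
    #include "sqlite3.h"

    /* Execute a query, discard its rows, and report whether it relied on
    ** a full table scan or an external sort. */
    static int profile_query(sqlite3 *db, const char *zSql){
      sqlite3_stmt *pStmt = 0;
      int rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0);
      if( rc!=SQLITE_OK ) return rc;
      while( sqlite3_step(pStmt)==SQLITE_ROW ){ /* rows are not needed here */ }
      printf("full-scan steps: %d, sorts: %d\n",
             sqlite3_stmt_status(pStmt, SQLITE_STMTSTATUS_FULLSCAN_STEP, 0),
             sqlite3_stmt_status(pStmt, SQLITE_STMTSTATUS_SORT, 0));
      return sqlite3_finalize(pStmt);
    }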
^(The xInit() -** method is passed a copy of the sqlite3_pcache_methods.pArg value.)^ -** ^The xInit() method can set up up global structures and/or any mutexes -** required by the custom page cache implementation. -** -** ^The xShutdown() method is called from within [sqlite3_shutdown()], -** if the application invokes this API. It can be used to clean up -** any outstanding resources before process shutdown, if required. -** -** ^SQLite holds a [SQLITE_MUTEX_RECURSIVE] mutex when it invokes -** the xInit method, so the xInit method need not be threadsafe. ^The -** xShutdown method is only called from [sqlite3_shutdown()] so it does -** not need to be threadsafe either. All other methods must be threadsafe -** in multithreaded applications. -** -** ^SQLite will never invoke xInit() more than once without an intervening -** call to xShutdown(). -** -** ^The xCreate() method is used to construct a new cache instance. SQLite -** will typically create one cache instance for each open database file, -** though this is not guaranteed. ^The -** first parameter, szPage, is the size in bytes of the pages that must -** be allocated by the cache. ^szPage will not be a power of two. ^szPage -** will the page size of the database file that is to be cached plus an -** increment (here called "R") of about 100 or 200. ^SQLite will use the -** extra R bytes on each page to store metadata about the underlying -** database page on disk. The value of R depends -** on the SQLite version, the target platform, and how SQLite was compiled. -** ^R is constant for a particular build of SQLite. ^The second argument to -** xCreate(), bPurgeable, is true if the cache being created will -** be used to cache database pages of a file stored on disk, or -** false if it is used for an in-memory database. ^The cache implementation -** does not have to do anything special based with the value of bPurgeable; -** it is purely advisory. ^On a cache where bPurgeable is false, SQLite will -** never invoke xUnpin() except to deliberately delete a page. -** ^In other words, a cache created with bPurgeable set to false will -** never contain any unpinned pages. -** -** ^(The xCachesize() method may be called at any time by SQLite to set the -** suggested maximum cache-size (number of pages stored by) the cache -** instance passed as the first argument. This is the value configured using -** the SQLite "[PRAGMA cache_size]" command.)^ ^As with the bPurgeable -** parameter, the implementation is not required to do anything with this -** value; it is advisory only. -** -** ^The xPagecount() method should return the number of pages currently -** stored in the cache. -** -** ^The xFetch() method is used to fetch a page and return a pointer to it. -** ^A 'page', in this context, is a buffer of szPage bytes aligned at an -** 8-byte boundary. ^The page to be fetched is determined by the key. ^The -** mimimum key value is 1. After it has been retrieved using xFetch, the page -** is considered to be "pinned". -** -** ^If the requested page is already in the page cache, then the page cache -** implementation must return a pointer to the page buffer with its content -** intact. 
^(If the requested page is not already in the cache, then the -** behavior of the cache implementation is determined by the value of the -** createFlag parameter passed to xFetch, according to the following table: -** -** <table border=1 width=85% align=center> -** <tr><th> createFlag <th> Behaviour when page is not already in cache -** <tr><td> 0 <td> Do not allocate a new page. Return NULL. -** <tr><td> 1 <td> Allocate a new page if it easy and convenient to do so. -** Otherwise return NULL. -** <tr><td> 2 <td> Make every effort to allocate a new page. Only return -** NULL if allocating a new page is effectively impossible. -** </table>)^ -** -** SQLite will normally invoke xFetch() with a createFlag of 0 or 1. If -** a call to xFetch() with createFlag==1 returns NULL, then SQLite will -** attempt to unpin one or more cache pages by spilling the content of -** pinned pages to disk and synching the operating system disk cache. After -** attempting to unpin pages, the xFetch() method will be invoked again with -** a createFlag of 2. -** -** ^xUnpin() is called by SQLite with a pointer to a currently pinned page -** as its second argument. ^(If the third parameter, discard, is non-zero, -** then the page should be evicted from the cache. In this case SQLite -** assumes that the next time the page is retrieved from the cache using -** the xFetch() method, it will be zeroed.)^ ^If the discard parameter is -** zero, then the page is considered to be unpinned. ^The cache implementation -** may choose to evict unpinned pages at any time. -** -** ^(The cache is not required to perform any reference counting. A single -** call to xUnpin() unpins the page regardless of the number of prior calls -** to xFetch().)^ -** -** ^The xRekey() method is used to change the key value associated with the -** page passed as the second argument from oldKey to newKey. ^If the cache -** previously contains an entry associated with newKey, it should be -** discarded. ^Any prior cache entry associated with newKey is guaranteed not -** to be pinned. -** -** ^When SQLite calls the xTruncate() method, the cache must discard all -** existing cache entries with page numbers (keys) greater than or equal -** to the value of the iLimit parameter passed to xTruncate(). ^If any -** of these pages are pinned, they are implicitly unpinned, meaning that -** they can be safely discarded. -** -** ^The xDestroy() method is used to delete a cache allocated by xCreate(). -** All resources associated with the specified cache should be freed. ^After -** calling the xDestroy() method, SQLite considers the [sqlite3_pcache*] -** handle invalid, and will not use it with any other sqlite3_pcache_methods -** functions. -*/ -typedef struct sqlite3_pcache_methods sqlite3_pcache_methods; -struct sqlite3_pcache_methods { - void *pArg; - int (*xInit)(void*); - void (*xShutdown)(void*); - sqlite3_pcache *(*xCreate)(int szPage, int bPurgeable); - void (*xCachesize)(sqlite3_pcache*, int nCachesize); - int (*xPagecount)(sqlite3_pcache*); - void *(*xFetch)(sqlite3_pcache*, unsigned key, int createFlag); - void (*xUnpin)(sqlite3_pcache*, void*, int discard); - void (*xRekey)(sqlite3_pcache*, void*, unsigned oldKey, unsigned newKey); - void (*xTruncate)(sqlite3_pcache*, unsigned iLimit); - void (*xDestroy)(sqlite3_pcache*); -}; - -/* -** CAPI3REF: Online Backup Object -** -** The sqlite3_backup object records state information about an ongoing -** online backup operation. 
^The sqlite3_backup object is created by -** a call to [sqlite3_backup_init()] and is destroyed by a call to -** [sqlite3_backup_finish()]. -** -** See Also: [Using the SQLite Online Backup API] -*/ -typedef struct sqlite3_backup sqlite3_backup; - -/* -** CAPI3REF: Online Backup API. -** -** The backup API copies the content of one database into another. -** It is useful either for creating backups of databases or -** for copying in-memory databases to or from persistent files. -** -** See Also: [Using the SQLite Online Backup API] -** -** ^Exclusive access is required to the destination database for the -** duration of the operation. ^However the source database is only -** read-locked while it is actually being read; it is not locked -** continuously for the entire backup operation. ^Thus, the backup may be -** performed on a live source database without preventing other users from -** reading or writing to the source database while the backup is underway. -** -** ^(To perform a backup operation: -** <ol> -** <li><b>sqlite3_backup_init()</b> is called once to initialize the -** backup, -** <li><b>sqlite3_backup_step()</b> is called one or more times to transfer -** the data between the two databases, and finally -** <li><b>sqlite3_backup_finish()</b> is called to release all resources -** associated with the backup operation. -** </ol>)^ -** There should be exactly one call to sqlite3_backup_finish() for each -** successful call to sqlite3_backup_init(). -** -** <b>sqlite3_backup_init()</b> -** -** ^The D and N arguments to sqlite3_backup_init(D,N,S,M) are the -** [database connection] associated with the destination database -** and the database name, respectively. -** ^The database name is "main" for the main database, "temp" for the -** temporary database, or the name specified after the AS keyword in -** an [ATTACH] statement for an attached database. -** ^The S and M arguments passed to -** sqlite3_backup_init(D,N,S,M) identify the [database connection] -** and database name of the source database, respectively. -** ^The source and destination [database connections] (parameters S and D) -** must be different or else sqlite3_backup_init(D,N,S,M) will file with -** an error. -** -** ^If an error occurs within sqlite3_backup_init(D,N,S,M), then NULL is -** returned and an error code and error message are store3d in the -** destination [database connection] D. -** ^The error code and message for the failed call to sqlite3_backup_init() -** can be retrieved using the [sqlite3_errcode()], [sqlite3_errmsg()], and/or -** [sqlite3_errmsg16()] functions. -** ^A successful call to sqlite3_backup_init() returns a pointer to an -** [sqlite3_backup] object. -** ^The [sqlite3_backup] object may be used with the sqlite3_backup_step() and -** sqlite3_backup_finish() functions to perform the specified backup -** operation. -** -** <b>sqlite3_backup_step()</b> -** -** ^Function sqlite3_backup_step(B,N) will copy up to N pages between -** the source and destination databases specified by [sqlite3_backup] object B. -** ^If N is negative, all remaining source pages are copied. -** ^If sqlite3_backup_step(B,N) successfully copies N pages and there -** are still more pages to be copied, then the function resturns [SQLITE_OK]. -** ^If sqlite3_backup_step(B,N) successfully finishes copying all pages -** from source to destination, then it returns [SQLITE_DONE]. -** ^If an error occurs while running sqlite3_backup_step(B,N), -** then an [error code] is returned. 
^As well as [SQLITE_OK] and -** [SQLITE_DONE], a call to sqlite3_backup_step() may return [SQLITE_READONLY], -** [SQLITE_NOMEM], [SQLITE_BUSY], [SQLITE_LOCKED], or an -** [SQLITE_IOERR_ACCESS | SQLITE_IOERR_XXX] extended error code. -** -** ^The sqlite3_backup_step() might return [SQLITE_READONLY] if the destination -** database was opened read-only or if -** the destination is an in-memory database with a different page size -** from the source database. -** -** ^If sqlite3_backup_step() cannot obtain a required file-system lock, then -** the [sqlite3_busy_handler | busy-handler function] -** is invoked (if one is specified). ^If the -** busy-handler returns non-zero before the lock is available, then -** [SQLITE_BUSY] is returned to the caller. ^In this case the call to -** sqlite3_backup_step() can be retried later. ^If the source -** [database connection] -** is being used to write to the source database when sqlite3_backup_step() -** is called, then [SQLITE_LOCKED] is returned immediately. ^Again, in this -** case the call to sqlite3_backup_step() can be retried later on. ^(If -** [SQLITE_IOERR_ACCESS | SQLITE_IOERR_XXX], [SQLITE_NOMEM], or -** [SQLITE_READONLY] is returned, then -** there is no point in retrying the call to sqlite3_backup_step(). These -** errors are considered fatal.)^ The application must accept -** that the backup operation has failed and pass the backup operation handle -** to the sqlite3_backup_finish() to release associated resources. -** -** ^The first call to sqlite3_backup_step() obtains an exclusive lock -** on the destination file. ^The exclusive lock is not released until either -** sqlite3_backup_finish() is called or the backup operation is complete -** and sqlite3_backup_step() returns [SQLITE_DONE]. ^Every call to -** sqlite3_backup_step() obtains a [shared lock] on the source database that -** lasts for the duration of the sqlite3_backup_step() call. -** ^Because the source database is not locked between calls to -** sqlite3_backup_step(), the source database may be modified mid-way -** through the backup process. ^If the source database is modified by an -** external process or via a database connection other than the one being -** used by the backup operation, then the backup will be automatically -** restarted by the next call to sqlite3_backup_step(). ^If the source -** database is modified by the using the same database connection as is used -** by the backup operation, then the backup database is automatically -** updated at the same time. -** -** <b>sqlite3_backup_finish()</b> -** -** When sqlite3_backup_step() has returned [SQLITE_DONE], or when the -** application wishes to abandon the backup operation, the application -** should destroy the [sqlite3_backup] by passing it to sqlite3_backup_finish(). -** ^The sqlite3_backup_finish() interfaces releases all -** resources associated with the [sqlite3_backup] object. -** ^If sqlite3_backup_step() has not yet returned [SQLITE_DONE], then any -** active write-transaction on the destination database is rolled back. -** The [sqlite3_backup] object is invalid -** and may not be used following a call to sqlite3_backup_finish(). -** -** ^The value returned by sqlite3_backup_finish is [SQLITE_OK] if no -** sqlite3_backup_step() errors occurred, regardless or whether or not -** sqlite3_backup_step() completed. 
-** ^If an out-of-memory condition or IO error occurred during any prior -** sqlite3_backup_step() call on the same [sqlite3_backup] object, then -** sqlite3_backup_finish() returns the corresponding [error code]. -** -** ^A return of [SQLITE_BUSY] or [SQLITE_LOCKED] from sqlite3_backup_step() -** is not a permanent error and does not affect the return value of -** sqlite3_backup_finish(). -** -** <b>sqlite3_backup_remaining(), sqlite3_backup_pagecount()</b> -** -** ^Each call to sqlite3_backup_step() sets two values inside -** the [sqlite3_backup] object: the number of pages still to be backed -** up and the total number of pages in the source databae file. -** The sqlite3_backup_remaining() and sqlite3_backup_pagecount() interfaces -** retrieve these two values, respectively. -** -** ^The values returned by these functions are only updated by -** sqlite3_backup_step(). ^If the source database is modified during a backup -** operation, then the values are not updated to account for any extra -** pages that need to be updated or the size of the source database file -** changing. -** -** <b>Concurrent Usage of Database Handles</b> -** -** ^The source [database connection] may be used by the application for other -** purposes while a backup operation is underway or being initialized. -** ^If SQLite is compiled and configured to support threadsafe database -** connections, then the source database connection may be used concurrently -** from within other threads. -** -** However, the application must guarantee that the destination -** [database connection] is not passed to any other API (by any thread) after -** sqlite3_backup_init() is called and before the corresponding call to -** sqlite3_backup_finish(). SQLite does not currently check to see -** if the application incorrectly accesses the destination [database connection] -** and so no error code is reported, but the operations may malfunction -** nevertheless. Use of the destination database connection while a -** backup is in progress might also also cause a mutex deadlock. -** -** If running in [shared cache mode], the application must -** guarantee that the shared cache used by the destination database -** is not accessed while the backup is running. In practice this means -** that the application must guarantee that the disk file being -** backed up to is not accessed by any connection within the process, -** not just the specific connection that was passed to sqlite3_backup_init(). -** -** The [sqlite3_backup] object itself is partially threadsafe. Multiple -** threads may safely make multiple concurrent calls to sqlite3_backup_step(). -** However, the sqlite3_backup_remaining() and sqlite3_backup_pagecount() -** APIs are not strictly speaking threadsafe. If they are invoked at the -** same time as another thread is invoking sqlite3_backup_step() it is -** possible that they return invalid values. 
-*/ -SQLITE_API sqlite3_backup *sqlite3_backup_init( - sqlite3 *pDest, /* Destination database handle */ - const char *zDestName, /* Destination database name */ - sqlite3 *pSource, /* Source database handle */ - const char *zSourceName /* Source database name */ -); -SQLITE_API int sqlite3_backup_step(sqlite3_backup *p, int nPage); -SQLITE_API int sqlite3_backup_finish(sqlite3_backup *p); -SQLITE_API int sqlite3_backup_remaining(sqlite3_backup *p); -SQLITE_API int sqlite3_backup_pagecount(sqlite3_backup *p); - -/* -** CAPI3REF: Unlock Notification -** -** ^When running in shared-cache mode, a database operation may fail with -** an [SQLITE_LOCKED] error if the required locks on the shared-cache or -** individual tables within the shared-cache cannot be obtained. See -** [SQLite Shared-Cache Mode] for a description of shared-cache locking. -** ^This API may be used to register a callback that SQLite will invoke -** when the connection currently holding the required lock relinquishes it. -** ^This API is only available if the library was compiled with the -** [SQLITE_ENABLE_UNLOCK_NOTIFY] C-preprocessor symbol defined. -** -** See Also: [Using the SQLite Unlock Notification Feature]. -** -** ^Shared-cache locks are released when a database connection concludes -** its current transaction, either by committing it or rolling it back. -** -** ^When a connection (known as the blocked connection) fails to obtain a -** shared-cache lock and SQLITE_LOCKED is returned to the caller, the -** identity of the database connection (the blocking connection) that -** has locked the required resource is stored internally. ^After an -** application receives an SQLITE_LOCKED error, it may call the -** sqlite3_unlock_notify() method with the blocked connection handle as -** the first argument to register for a callback that will be invoked -** when the blocking connections current transaction is concluded. ^The -** callback is invoked from within the [sqlite3_step] or [sqlite3_close] -** call that concludes the blocking connections transaction. -** -** ^(If sqlite3_unlock_notify() is called in a multi-threaded application, -** there is a chance that the blocking connection will have already -** concluded its transaction by the time sqlite3_unlock_notify() is invoked. -** If this happens, then the specified callback is invoked immediately, -** from within the call to sqlite3_unlock_notify().)^ -** -** ^If the blocked connection is attempting to obtain a write-lock on a -** shared-cache table, and more than one other connection currently holds -** a read-lock on the same table, then SQLite arbitrarily selects one of -** the other connections to use as the blocking connection. -** -** ^(There may be at most one unlock-notify callback registered by a -** blocked connection. If sqlite3_unlock_notify() is called when the -** blocked connection already has a registered unlock-notify callback, -** then the new callback replaces the old.)^ ^If sqlite3_unlock_notify() is -** called with a NULL pointer as its second argument, then any existing -** unlock-notify callback is cancelled. ^The blocked connections -** unlock-notify callback may also be canceled by closing the blocked -** connection using [sqlite3_close()]. -** -** The unlock-notify callback is not reentrant. If an application invokes -** any sqlite3_xxx API functions from within an unlock-notify callback, a -** crash or deadlock may be the result. -** -** ^Unless deadlock is detected (see below), sqlite3_unlock_notify() always -** returns SQLITE_OK. 
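A minimal sketch of the backup loop described in the documentation above, using the interfaces just declared: copy a fixed number of pages per step and sleep between steps so other connections can keep using the source database. The destination filename parameter and the 5-pages/25 ms pacing are arbitrary choices, not requirements of the API:

    #include "sqlite3.h"

    /* Copy the "main" database of pSrc into the file zDestFile.
    ** Returns SQLITE_OK on success or an SQLite error code. */
    static int backup_to_file(sqlite3 *pSrc, const char *zDestFile){
      sqlite3 *pDest = 0;
      sqlite3_backup *pBackup;
      int rc = sqlite3_open(zDestFile, &pDest);
      if( rc==SQLITE_OK ){
        pBackup = sqlite3_backup_init(pDest, "main", pSrc, "main");
        if( pBackup ){
          do{
            rc = sqlite3_backup_step(pBackup, 5);      /* copy up to 5 pages */
            if( rc==SQLITE_OK || rc==SQLITE_BUSY || rc==SQLITE_LOCKED ){
              sqlite3_sleep(25);                       /* let other users in */
            }
          }while( rc==SQLITE_OK || rc==SQLITE_BUSY || rc==SQLITE_LOCKED );
          sqlite3_backup_finish(pBackup);              /* releases the handle */
        }
        rc = sqlite3_errcode(pDest);                   /* error from init or step */
      }
      sqlite3_close(pDest);
      return rc;
    }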
-** -** <b>Callback Invocation Details</b> -** -** When an unlock-notify callback is registered, the application provides a -** single void* pointer that is passed to the callback when it is invoked. -** However, the signature of the callback function allows SQLite to pass -** it an array of void* context pointers. The first argument passed to -** an unlock-notify callback is a pointer to an array of void* pointers, -** and the second is the number of entries in the array. -** -** When a blocking connections transaction is concluded, there may be -** more than one blocked connection that has registered for an unlock-notify -** callback. ^If two or more such blocked connections have specified the -** same callback function, then instead of invoking the callback function -** multiple times, it is invoked once with the set of void* context pointers -** specified by the blocked connections bundled together into an array. -** This gives the application an opportunity to prioritize any actions -** related to the set of unblocked database connections. -** -** <b>Deadlock Detection</b> -** -** Assuming that after registering for an unlock-notify callback a -** database waits for the callback to be issued before taking any further -** action (a reasonable assumption), then using this API may cause the -** application to deadlock. For example, if connection X is waiting for -** connection Y's transaction to be concluded, and similarly connection -** Y is waiting on connection X's transaction, then neither connection -** will proceed and the system may remain deadlocked indefinitely. -** -** To avoid this scenario, the sqlite3_unlock_notify() performs deadlock -** detection. ^If a given call to sqlite3_unlock_notify() would put the -** system in a deadlocked state, then SQLITE_LOCKED is returned and no -** unlock-notify callback is registered. The system is said to be in -** a deadlocked state if connection A has registered for an unlock-notify -** callback on the conclusion of connection B's transaction, and connection -** B has itself registered for an unlock-notify callback when connection -** A's transaction is concluded. ^Indirect deadlock is also detected, so -** the system is also considered to be deadlocked if connection B has -** registered for an unlock-notify callback on the conclusion of connection -** C's transaction, where connection C is waiting on connection A. ^Any -** number of levels of indirection are allowed. -** -** <b>The "DROP TABLE" Exception</b> -** -** When a call to [sqlite3_step()] returns SQLITE_LOCKED, it is almost -** always appropriate to call sqlite3_unlock_notify(). There is however, -** one exception. When executing a "DROP TABLE" or "DROP INDEX" statement, -** SQLite checks if there are any currently executing SELECT statements -** that belong to the same connection. If there are, SQLITE_LOCKED is -** returned. In this case there is no "blocking connection", so invoking -** sqlite3_unlock_notify() results in the unlock-notify callback being -** invoked immediately. If the application then re-attempts the "DROP TABLE" -** or "DROP INDEX" query, an infinite loop might be the result. -** -** One way around this problem is to check the extended error code returned -** by an sqlite3_step() call. ^(If there is a blocking connection, then the -** extended error code is set to SQLITE_LOCKED_SHAREDCACHE. 
Otherwise, in -** the special "DROP TABLE/INDEX" case, the extended error code is just -** SQLITE_LOCKED.)^ -*/ -SQLITE_API int sqlite3_unlock_notify( - sqlite3 *pBlocked, /* Waiting connection */ - void (*xNotify)(void **apArg, int nArg), /* Callback function to invoke */ - void *pNotifyArg /* Argument to pass to xNotify */ -); - - -/* -** CAPI3REF: String Comparison -** -** ^The [sqlite3_strnicmp()] API allows applications and extensions to -** compare the contents of two buffers containing UTF-8 strings in a -** case-indendent fashion, using the same definition of case independence -** that SQLite uses internally when comparing identifiers. -*/ -SQLITE_API int sqlite3_strnicmp(const char *, const char *, int); - -/* -** CAPI3REF: Error Logging Interface -** -** ^The [sqlite3_log()] interface writes a message into the error log -** established by the [SQLITE_CONFIG_LOG] option to [sqlite3_config()]. -** ^If logging is enabled, the zFormat string and subsequent arguments are -** used with [sqlite3_snprintf()] to generate the final output string. -** -** The sqlite3_log() interface is intended for use by extensions such as -** virtual tables, collating functions, and SQL functions. While there is -** nothing to prevent an application from calling sqlite3_log(), doing so -** is considered bad form. -** -** The zFormat string must not be NULL. -** -** To avoid deadlocks and other threading problems, the sqlite3_log() routine -** will not use dynamically allocated memory. The log message is stored in -** a fixed-length buffer on the stack. If the log message is longer than -** a few hundred characters, it will be truncated to the length of the -** buffer. -*/ -SQLITE_API void sqlite3_log(int iErrCode, const char *zFormat, ...); - -/* -** Undo the hack that converts floating point types to integer for -** builds on processors without floating point support. -*/ -#ifdef SQLITE_OMIT_FLOATING_POINT -# undef double -#endif - -#if 0 -} /* End of the 'extern "C"' block */ -#endif -#endif - - -/************** End of sqlite3.h *********************************************/ -/************** Continuing where we left off in sqliteInt.h ******************/ +** a boolean expression that is usually true. These hints could, +** in theory, be used by the compiler to generate better code, but +** currently they are just comments for human readers. +*/ +#define likely(X) (X) +#define unlikely(X) (X) + /************** Include hash.h in the middle of sqliteInt.h ******************/ /************** Begin file hash.h ********************************************/ /* ** 2001 September 22 ** @@ -6271,11 +9478,11 @@ ** May you do good and not evil. ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ************************************************************************* -** This is the header file for the generic hash-table implemenation +** This is the header file for the generic hash-table implementation ** used in SQLite. */ #ifndef _SQLITE_HASH_H_ #define _SQLITE_HASH_H_ @@ -6321,19 +9528,19 @@ ** be opaque because it is used by macros. */ struct HashElem { HashElem *next, *prev; /* Next and previous elements in the table */ void *data; /* Data associated with this element */ - const char *pKey; int nKey; /* Key associated with this element */ + const char *pKey; /* Key associated with this element */ }; /* ** Access routines. To delete, insert a NULL pointer. 
*/ SQLITE_PRIVATE void sqlite3HashInit(Hash*); -SQLITE_PRIVATE void *sqlite3HashInsert(Hash*, const char *pKey, int nKey, void *pData); -SQLITE_PRIVATE void *sqlite3HashFind(const Hash*, const char *pKey, int nKey); +SQLITE_PRIVATE void *sqlite3HashInsert(Hash*, const char *pKey, void *pData); +SQLITE_PRIVATE void *sqlite3HashFind(const Hash*, const char *pKey); SQLITE_PRIVATE void sqlite3HashClear(Hash*); /* ** Macros for looping over all elements of a hash table. The idiom is ** like this: @@ -6361,167 +9568,177 @@ /************** End of hash.h ************************************************/ /************** Continuing where we left off in sqliteInt.h ******************/ /************** Include parse.h in the middle of sqliteInt.h *****************/ /************** Begin file parse.h *******************************************/ -#define TK_SEMI 1 -#define TK_EXPLAIN 2 -#define TK_QUERY 3 -#define TK_PLAN 4 -#define TK_BEGIN 5 -#define TK_TRANSACTION 6 -#define TK_DEFERRED 7 -#define TK_IMMEDIATE 8 -#define TK_EXCLUSIVE 9 -#define TK_COMMIT 10 -#define TK_END 11 -#define TK_ROLLBACK 12 -#define TK_SAVEPOINT 13 -#define TK_RELEASE 14 -#define TK_TO 15 -#define TK_TABLE 16 -#define TK_CREATE 17 -#define TK_IF 18 -#define TK_NOT 19 -#define TK_EXISTS 20 -#define TK_TEMP 21 -#define TK_LP 22 -#define TK_RP 23 -#define TK_AS 24 -#define TK_COMMA 25 -#define TK_ID 26 -#define TK_INDEXED 27 -#define TK_ABORT 28 -#define TK_ACTION 29 -#define TK_AFTER 30 -#define TK_ANALYZE 31 -#define TK_ASC 32 -#define TK_ATTACH 33 -#define TK_BEFORE 34 -#define TK_BY 35 -#define TK_CASCADE 36 -#define TK_CAST 37 -#define TK_COLUMNKW 38 -#define TK_CONFLICT 39 -#define TK_DATABASE 40 -#define TK_DESC 41 -#define TK_DETACH 42 -#define TK_EACH 43 -#define TK_FAIL 44 -#define TK_FOR 45 -#define TK_IGNORE 46 -#define TK_INITIALLY 47 -#define TK_INSTEAD 48 -#define TK_LIKE_KW 49 -#define TK_MATCH 50 -#define TK_NO 51 -#define TK_KEY 52 -#define TK_OF 53 -#define TK_OFFSET 54 -#define TK_PRAGMA 55 -#define TK_RAISE 56 -#define TK_REPLACE 57 -#define TK_RESTRICT 58 -#define TK_ROW 59 -#define TK_TRIGGER 60 -#define TK_VACUUM 61 -#define TK_VIEW 62 -#define TK_VIRTUAL 63 -#define TK_REINDEX 64 -#define TK_RENAME 65 -#define TK_CTIME_KW 66 -#define TK_ANY 67 -#define TK_OR 68 -#define TK_AND 69 -#define TK_IS 70 -#define TK_BETWEEN 71 -#define TK_IN 72 -#define TK_ISNULL 73 -#define TK_NOTNULL 74 -#define TK_NE 75 -#define TK_EQ 76 -#define TK_GT 77 -#define TK_LE 78 -#define TK_LT 79 -#define TK_GE 80 -#define TK_ESCAPE 81 -#define TK_BITAND 82 -#define TK_BITOR 83 -#define TK_LSHIFT 84 -#define TK_RSHIFT 85 -#define TK_PLUS 86 -#define TK_MINUS 87 -#define TK_STAR 88 -#define TK_SLASH 89 -#define TK_REM 90 -#define TK_CONCAT 91 -#define TK_COLLATE 92 -#define TK_BITNOT 93 -#define TK_STRING 94 -#define TK_JOIN_KW 95 -#define TK_CONSTRAINT 96 -#define TK_DEFAULT 97 -#define TK_NULL 98 -#define TK_PRIMARY 99 -#define TK_UNIQUE 100 -#define TK_CHECK 101 -#define TK_REFERENCES 102 -#define TK_AUTOINCR 103 -#define TK_ON 104 -#define TK_INSERT 105 -#define TK_DELETE 106 -#define TK_UPDATE 107 -#define TK_SET 108 -#define TK_DEFERRABLE 109 -#define TK_FOREIGN 110 -#define TK_DROP 111 -#define TK_UNION 112 -#define TK_ALL 113 -#define TK_EXCEPT 114 -#define TK_INTERSECT 115 -#define TK_SELECT 116 -#define TK_DISTINCT 117 -#define TK_DOT 118 -#define TK_FROM 119 -#define TK_JOIN 120 -#define TK_USING 121 -#define TK_ORDER 122 -#define TK_GROUP 123 -#define TK_HAVING 124 -#define TK_LIMIT 125 -#define TK_WHERE 126 -#define 
TK_INTO 127 -#define TK_VALUES 128 -#define TK_INTEGER 129 -#define TK_FLOAT 130 -#define TK_BLOB 131 -#define TK_REGISTER 132 -#define TK_VARIABLE 133 -#define TK_CASE 134 -#define TK_WHEN 135 -#define TK_THEN 136 -#define TK_ELSE 137 -#define TK_INDEX 138 -#define TK_ALTER 139 -#define TK_ADD 140 -#define TK_TO_TEXT 141 -#define TK_TO_BLOB 142 -#define TK_TO_NUMERIC 143 -#define TK_TO_INT 144 -#define TK_TO_REAL 145 -#define TK_ISNOT 146 -#define TK_END_OF_FILE 147 -#define TK_ILLEGAL 148 -#define TK_SPACE 149 +#define TK_SEMI 1 +#define TK_EXPLAIN 2 +#define TK_QUERY 3 +#define TK_PLAN 4 +#define TK_BEGIN 5 +#define TK_TRANSACTION 6 +#define TK_DEFERRED 7 +#define TK_IMMEDIATE 8 +#define TK_EXCLUSIVE 9 +#define TK_COMMIT 10 +#define TK_END 11 +#define TK_ROLLBACK 12 +#define TK_SAVEPOINT 13 +#define TK_RELEASE 14 +#define TK_TO 15 +#define TK_TABLE 16 +#define TK_CREATE 17 +#define TK_IF 18 +#define TK_NOT 19 +#define TK_EXISTS 20 +#define TK_TEMP 21 +#define TK_LP 22 +#define TK_RP 23 +#define TK_AS 24 +#define TK_WITHOUT 25 +#define TK_COMMA 26 +#define TK_ID 27 +#define TK_INDEXED 28 +#define TK_ABORT 29 +#define TK_ACTION 30 +#define TK_AFTER 31 +#define TK_ANALYZE 32 +#define TK_ASC 33 +#define TK_ATTACH 34 +#define TK_BEFORE 35 +#define TK_BY 36 +#define TK_CASCADE 37 +#define TK_CAST 38 +#define TK_COLUMNKW 39 +#define TK_CONFLICT 40 +#define TK_DATABASE 41 +#define TK_DESC 42 +#define TK_DETACH 43 +#define TK_EACH 44 +#define TK_FAIL 45 +#define TK_FOR 46 +#define TK_IGNORE 47 +#define TK_INITIALLY 48 +#define TK_INSTEAD 49 +#define TK_LIKE_KW 50 +#define TK_MATCH 51 +#define TK_NO 52 +#define TK_KEY 53 +#define TK_OF 54 +#define TK_OFFSET 55 +#define TK_PRAGMA 56 +#define TK_RAISE 57 +#define TK_RECURSIVE 58 +#define TK_REPLACE 59 +#define TK_RESTRICT 60 +#define TK_ROW 61 +#define TK_TRIGGER 62 +#define TK_VACUUM 63 +#define TK_VIEW 64 +#define TK_VIRTUAL 65 +#define TK_WITH 66 +#define TK_REINDEX 67 +#define TK_RENAME 68 +#define TK_CTIME_KW 69 +#define TK_ANY 70 +#define TK_OR 71 +#define TK_AND 72 +#define TK_IS 73 +#define TK_BETWEEN 74 +#define TK_IN 75 +#define TK_ISNULL 76 +#define TK_NOTNULL 77 +#define TK_NE 78 +#define TK_EQ 79 +#define TK_GT 80 +#define TK_LE 81 +#define TK_LT 82 +#define TK_GE 83 +#define TK_ESCAPE 84 +#define TK_BITAND 85 +#define TK_BITOR 86 +#define TK_LSHIFT 87 +#define TK_RSHIFT 88 +#define TK_PLUS 89 +#define TK_MINUS 90 +#define TK_STAR 91 +#define TK_SLASH 92 +#define TK_REM 93 +#define TK_CONCAT 94 +#define TK_COLLATE 95 +#define TK_BITNOT 96 +#define TK_STRING 97 +#define TK_JOIN_KW 98 +#define TK_CONSTRAINT 99 +#define TK_DEFAULT 100 +#define TK_NULL 101 +#define TK_PRIMARY 102 +#define TK_UNIQUE 103 +#define TK_CHECK 104 +#define TK_REFERENCES 105 +#define TK_AUTOINCR 106 +#define TK_ON 107 +#define TK_INSERT 108 +#define TK_DELETE 109 +#define TK_UPDATE 110 +#define TK_SET 111 +#define TK_DEFERRABLE 112 +#define TK_FOREIGN 113 +#define TK_DROP 114 +#define TK_UNION 115 +#define TK_ALL 116 +#define TK_EXCEPT 117 +#define TK_INTERSECT 118 +#define TK_SELECT 119 +#define TK_VALUES 120 +#define TK_DISTINCT 121 +#define TK_DOT 122 +#define TK_FROM 123 +#define TK_JOIN 124 +#define TK_USING 125 +#define TK_ORDER 126 +#define TK_GROUP 127 +#define TK_HAVING 128 +#define TK_LIMIT 129 +#define TK_WHERE 130 +#define TK_INTO 131 +#define TK_INTEGER 132 +#define TK_FLOAT 133 +#define TK_BLOB 134 +#define TK_VARIABLE 135 +#define TK_CASE 136 +#define TK_WHEN 137 +#define TK_THEN 138 +#define TK_ELSE 139 +#define TK_INDEX 140 +#define TK_ALTER 141 
+#define TK_ADD 142 +#define TK_TO_TEXT 143 +#define TK_TO_BLOB 144 +#define TK_TO_NUMERIC 145 +#define TK_TO_INT 146 +#define TK_TO_REAL 147 +#define TK_ISNOT 148 +#define TK_END_OF_FILE 149 #define TK_UNCLOSED_STRING 150 #define TK_FUNCTION 151 #define TK_COLUMN 152 #define TK_AGG_FUNCTION 153 #define TK_AGG_COLUMN 154 -#define TK_CONST_FUNC 155 -#define TK_UMINUS 156 -#define TK_UPLUS 157 +#define TK_UMINUS 155 +#define TK_UPLUS 156 +#define TK_REGISTER 157 +#define TK_ASTERISK 158 +#define TK_SPACE 159 +#define TK_ILLEGAL 160 + +/* The token codes above must all fit in 8 bits */ +#define TKFLG_MASK 0xff + +/* Flags that can be added to a token code when it is not +** being stored in a u8: */ +#define TKFLG_DONTFOLD 0x100 /* Omit constant folding optimizations */ /************** End of parse.h ***********************************************/ /************** Continuing where we left off in sqliteInt.h ******************/ #include <stdio.h> #include <stdlib.h> @@ -6566,11 +9783,11 @@ ** the default file format for new databases and the maximum file format ** that the library can read. */ #define SQLITE_MAX_FILE_FORMAT 4 #ifndef SQLITE_DEFAULT_FILE_FORMAT -# define SQLITE_DEFAULT_FILE_FORMAT 1 +# define SQLITE_DEFAULT_FILE_FORMAT 4 #endif /* ** Determine whether triggers are recursive by default. This can be ** changed at run-time using a pragma. @@ -6583,10 +9800,41 @@ ** Provide a default value for SQLITE_TEMP_STORE in case it is not specified ** on the command-line */ #ifndef SQLITE_TEMP_STORE # define SQLITE_TEMP_STORE 1 +# define SQLITE_TEMP_STORE_xc 1 /* Exclude from ctime.c */ +#endif + +/* +** If no value has been provided for SQLITE_MAX_WORKER_THREADS, or if +** SQLITE_TEMP_STORE is set to 3 (never use temporary files), set it +** to zero. +*/ +#if SQLITE_TEMP_STORE==3 || SQLITE_THREADSAFE==0 +# undef SQLITE_MAX_WORKER_THREADS +# define SQLITE_MAX_WORKER_THREADS 0 +#endif +#ifndef SQLITE_MAX_WORKER_THREADS +# define SQLITE_MAX_WORKER_THREADS 8 +#endif +#ifndef SQLITE_DEFAULT_WORKER_THREADS +# define SQLITE_DEFAULT_WORKER_THREADS 0 +#endif +#if SQLITE_DEFAULT_WORKER_THREADS>SQLITE_MAX_WORKER_THREADS +# undef SQLITE_MAX_WORKER_THREADS +# define SQLITE_MAX_WORKER_THREADS SQLITE_DEFAULT_WORKER_THREADS +#endif + +/* +** The default initial allocation for the pagecache when using separate +** pagecaches for each database connection. A positive number is the +** number of pages. A negative number N translations means that a buffer +** of -1024*N bytes is allocated and used for as many pages as it will hold. +*/ +#ifndef SQLITE_DEFAULT_PCACHE_INITSZ +# define SQLITE_DEFAULT_PCACHE_INITSZ 100 #endif /* ** GCC does not define the offsetof() macro so we'll have to do it ** ourselves. @@ -6593,10 +9841,21 @@ */ #ifndef offsetof #define offsetof(STRUCTURE,FIELD) ((int)((char*)&((STRUCTURE*)0)->FIELD)) #endif +/* +** Macros to compute minimum and maximum of two numbers. +*/ +#define MIN(A,B) ((A)<(B)?(A):(B)) +#define MAX(A,B) ((A)>(B)?(A):(B)) + +/* +** Swap two objects of type TYPE. +*/ +#define SWAP(TYPE,A,B) {TYPE t=A; A=B; B=t;} + /* ** Check to see if this machine uses EBCDIC. (Yes, believe it or ** not, there are still machines out there that use EBCDIC.) */ #if 'A' == '\301' @@ -6664,28 +9923,96 @@ ** is 0x00000000ffffffff. But because of quirks of some compilers, we ** have to specify the value in the less intuitive manner shown: */ #define SQLITE_MAX_U32 ((((u64)1)<<32)-1) +/* +** The datatype used to store estimates of the number of rows in a +** table or index. 
This is an unsigned integer type. For 99.9% of +** the world, a 32-bit integer is sufficient. But a 64-bit integer +** can be used at compile-time if desired. +*/ +#ifdef SQLITE_64BIT_STATS + typedef u64 tRowcnt; /* 64-bit only if requested at compile-time */ +#else + typedef u32 tRowcnt; /* 32-bit is the default */ +#endif + +/* +** Estimated quantities used for query planning are stored as 16-bit +** logarithms. For quantity X, the value stored is 10*log2(X). This +** gives a possible range of values of approximately 1.0e986 to 1e-986. +** But the allowed values are "grainy". Not every value is representable. +** For example, quantities 16 and 17 are both represented by a LogEst +** of 40. However, since LogEst quantities are suppose to be estimates, +** not exact values, this imprecision is not a problem. +** +** "LogEst" is short for "Logarithmic Estimate". +** +** Examples: +** 1 -> 0 20 -> 43 10000 -> 132 +** 2 -> 10 25 -> 46 25000 -> 146 +** 3 -> 16 100 -> 66 1000000 -> 199 +** 4 -> 20 1000 -> 99 1048576 -> 200 +** 10 -> 33 1024 -> 100 4294967296 -> 320 +** +** The LogEst can be negative to indicate fractional values. +** Examples: +** +** 0.5 -> -10 0.1 -> -33 0.0625 -> -40 +*/ +typedef INT16_TYPE LogEst; + +/* +** Set the SQLITE_PTRSIZE macro to the number of bytes in a pointer +*/ +#ifndef SQLITE_PTRSIZE +# if defined(__SIZEOF_POINTER__) +# define SQLITE_PTRSIZE __SIZEOF_POINTER__ +# elif defined(i386) || defined(__i386__) || defined(_M_IX86) || \ + defined(_M_ARM) || defined(__arm__) || defined(__x86) +# define SQLITE_PTRSIZE 4 +# else +# define SQLITE_PTRSIZE 8 +# endif +#endif + /* ** Macros to determine whether the machine is big or little endian, -** evaluated at runtime. +** and whether or not that determination is run-time or compile-time. +** +** For best performance, an attempt is made to guess at the byte-order +** using C-preprocessor macros. If that is unsuccessful, or if +** -DSQLITE_RUNTIME_BYTEORDER=1 is set, then byte-order is determined +** at run-time. */ -#ifdef SQLITE_AMALGAMATION -SQLITE_PRIVATE const int sqlite3one = 1; -#else -SQLITE_PRIVATE const int sqlite3one; -#endif -#if defined(i386) || defined(__i386__) || defined(_M_IX86)\ - || defined(__x86_64) || defined(__x86_64__) +#if (defined(i386) || defined(__i386__) || defined(_M_IX86) || \ + defined(__x86_64) || defined(__x86_64__) || defined(_M_X64) || \ + defined(_M_AMD64) || defined(_M_ARM) || defined(__x86) || \ + defined(__arm__)) && !defined(SQLITE_RUNTIME_BYTEORDER) +# define SQLITE_BYTEORDER 1234 # define SQLITE_BIGENDIAN 0 # define SQLITE_LITTLEENDIAN 1 # define SQLITE_UTF16NATIVE SQLITE_UTF16LE -#else +#endif +#if (defined(sparc) || defined(__ppc__)) \ + && !defined(SQLITE_RUNTIME_BYTEORDER) +# define SQLITE_BYTEORDER 4321 +# define SQLITE_BIGENDIAN 1 +# define SQLITE_LITTLEENDIAN 0 +# define SQLITE_UTF16NATIVE SQLITE_UTF16BE +#endif +#if !defined(SQLITE_BYTEORDER) +# ifdef SQLITE_AMALGAMATION + const int sqlite3one = 1; +# else + extern const int sqlite3one; +# endif +# define SQLITE_BYTEORDER 0 /* 0 means "unknown at compile-time" */ # define SQLITE_BIGENDIAN (*(char *)(&sqlite3one)==0) # define SQLITE_LITTLEENDIAN (*(char *)(&sqlite3one)==1) -# define SQLITE_UTF16NATIVE (SQLITE_BIGENDIAN?SQLITE_UTF16BE:SQLITE_UTF16LE) +# define SQLITE_UTF16NATIVE (SQLITE_BIGENDIAN?SQLITE_UTF16BE:SQLITE_UTF16LE) #endif /* ** Constants for the largest and smallest possible 64-bit signed integers. 
** These macros are designed to work correctly on both 32-bit and 64-bit @@ -6709,19 +10036,84 @@ ** Assert that the pointer X is aligned to an 8-byte boundary. This ** macro is used only within assert() to verify that the code gets ** all alignment restrictions correct. ** ** Except, if SQLITE_4_BYTE_ALIGNED_MALLOC is defined, then the -** underlying malloc() implemention might return us 4-byte aligned +** underlying malloc() implementation might return us 4-byte aligned ** pointers. In that case, only verify 4-byte alignment. */ #ifdef SQLITE_4_BYTE_ALIGNED_MALLOC # define EIGHT_BYTE_ALIGNMENT(X) ((((char*)(X) - (char*)0)&3)==0) #else # define EIGHT_BYTE_ALIGNMENT(X) ((((char*)(X) - (char*)0)&7)==0) #endif +/* +** Disable MMAP on platforms where it is known to not work +*/ +#if defined(__OpenBSD__) || defined(__QNXNTO__) +# undef SQLITE_MAX_MMAP_SIZE +# define SQLITE_MAX_MMAP_SIZE 0 +#endif + +/* +** Default maximum size of memory used by memory-mapped I/O in the VFS +*/ +#ifdef __APPLE__ +# include <TargetConditionals.h> +#endif +#ifndef SQLITE_MAX_MMAP_SIZE +# if defined(__linux__) \ + || defined(_WIN32) \ + || (defined(__APPLE__) && defined(__MACH__)) \ + || defined(__sun) \ + || defined(__FreeBSD__) \ + || defined(__DragonFly__) +# define SQLITE_MAX_MMAP_SIZE 0x7fff0000 /* 2147418112 */ +# else +# define SQLITE_MAX_MMAP_SIZE 0 +# endif +# define SQLITE_MAX_MMAP_SIZE_xc 1 /* exclude from ctime.c */ +#endif + +/* +** The default MMAP_SIZE is zero on all platforms. Or, even if a larger +** default MMAP_SIZE is specified at compile-time, make sure that it does +** not exceed the maximum mmap size. +*/ +#ifndef SQLITE_DEFAULT_MMAP_SIZE +# define SQLITE_DEFAULT_MMAP_SIZE 0 +# define SQLITE_DEFAULT_MMAP_SIZE_xc 1 /* Exclude from ctime.c */ +#endif +#if SQLITE_DEFAULT_MMAP_SIZE>SQLITE_MAX_MMAP_SIZE +# undef SQLITE_DEFAULT_MMAP_SIZE +# define SQLITE_DEFAULT_MMAP_SIZE SQLITE_MAX_MMAP_SIZE +#endif + +/* +** Only one of SQLITE_ENABLE_STAT3 or SQLITE_ENABLE_STAT4 can be defined. +** Priority is given to SQLITE_ENABLE_STAT4. If either are defined, also +** define SQLITE_ENABLE_STAT3_OR_STAT4 +*/ +#ifdef SQLITE_ENABLE_STAT4 +# undef SQLITE_ENABLE_STAT3 +# define SQLITE_ENABLE_STAT3_OR_STAT4 1 +#elif SQLITE_ENABLE_STAT3 +# define SQLITE_ENABLE_STAT3_OR_STAT4 1 +#elif SQLITE_ENABLE_STAT3_OR_STAT4 +# undef SQLITE_ENABLE_STAT3_OR_STAT4 +#endif + +/* +** SELECTTRACE_ENABLED will be either 1 or 0 depending on whether or not +** the Select query generator tracing logic is turned on. +*/ +#if defined(SQLITE_DEBUG) || defined(SQLITE_ENABLE_SELECTTRACE) +# define SELECTTRACE_ENABLED 1 +#else +# define SELECTTRACE_ENABLED 0 +#endif /* ** An instance of the following structure is used to store the busy-handler ** callback for a given sqlite handle. ** @@ -6759,15 +10151,24 @@ ** A convenience macro that returns the number of elements in ** an array. */ #define ArraySize(X) ((int)(sizeof(X)/sizeof(X[0]))) +/* +** Determine if the argument is a power of two +*/ +#define IsPowerOfTwo(X) (((X)&((X)-1))==0) + /* ** The following value as a destructor means to use sqlite3DbFree(). -** This is an internal extension to SQLITE_STATIC and SQLITE_TRANSIENT. +** The sqlite3DbFree() routine requires two parameters instead of the +** one parameter that destructors normally want. So we have to introduce +** this magic value that the code knows to handle differently. Any +** pointer will work here as long as it is distinct from SQLITE_STATIC +** and SQLITE_TRANSIENT. 
*/ -#define SQLITE_DYNAMIC ((sqlite3_destructor_type)sqlite3DbFree) +#define SQLITE_DYNAMIC ((sqlite3_destructor_type)sqlite3MallocSize) /* ** When SQLITE_OMIT_WSD is defined, it means that the target platform does ** not support Writable Static Data (WSD) such as global and static variables. ** All variables must either be on the stack or dynamically allocated from @@ -6783,12 +10184,12 @@ */ #ifdef SQLITE_OMIT_WSD #define SQLITE_WSD const #define GLOBAL(t,v) (*(t*)sqlite3_wsd_find((void*)&(v), sizeof(v))) #define sqlite3GlobalConfig GLOBAL(struct Sqlite3Config, sqlite3Config) -SQLITE_API int sqlite3_wsd_init(int N, int J); -SQLITE_API void *sqlite3_wsd_find(void *K, int L); +SQLITE_API int SQLITE_STDCALL sqlite3_wsd_init(int N, int J); +SQLITE_API void *SQLITE_STDCALL sqlite3_wsd_find(void *K, int L); #else #define SQLITE_WSD #define GLOBAL(t,v) v #define sqlite3GlobalConfig sqlite3Config #endif @@ -6825,10 +10226,11 @@ typedef struct Schema Schema; typedef struct Expr Expr; typedef struct ExprList ExprList; typedef struct ExprSpan ExprSpan; typedef struct FKey FKey; +typedef struct FuncDestructor FuncDestructor; typedef struct FuncDef FuncDef; typedef struct FuncDefHash FuncDefHash; typedef struct IdList IdList; typedef struct Index Index; typedef struct IndexSample IndexSample; @@ -6837,27 +10239,31 @@ typedef struct Lookaside Lookaside; typedef struct LookasideSlot LookasideSlot; typedef struct Module Module; typedef struct NameContext NameContext; typedef struct Parse Parse; +typedef struct PrintfArguments PrintfArguments; typedef struct RowSet RowSet; typedef struct Savepoint Savepoint; typedef struct Select Select; +typedef struct SQLiteThread SQLiteThread; +typedef struct SelectDest SelectDest; typedef struct SrcList SrcList; typedef struct StrAccum StrAccum; typedef struct Table Table; typedef struct TableLock TableLock; typedef struct Token Token; +typedef struct TreeView TreeView; typedef struct Trigger Trigger; typedef struct TriggerPrg TriggerPrg; typedef struct TriggerStep TriggerStep; typedef struct UnpackedRecord UnpackedRecord; typedef struct VTable VTable; +typedef struct VtabCtx VtabCtx; typedef struct Walker Walker; -typedef struct WherePlan WherePlan; typedef struct WhereInfo WhereInfo; -typedef struct WhereLevel WhereLevel; +typedef struct With With; /* ** Defer sourcing vdbe.h and btree.h until after the "u8" and ** "BusyHandler" typedefs. vdbe.h also requires a few of the opaque ** pointer types (i.e. FuncDef) defined above. @@ -6883,11 +10289,11 @@ #define _BTREE_H_ /* TODO: This definition is just included so other modules compile. It ** needs to be revisited. */ -#define SQLITE_N_BTREE_META 10 +#define SQLITE_N_BTREE_META 16 /* ** If defined as non-zero, auto-vacuum is enabled by default. Otherwise ** it must be turned on for each database using "PRAGMA auto_vacuum = 1". */ @@ -6903,25 +10309,14 @@ ** Forward declarations of structure */ typedef struct Btree Btree; typedef struct BtCursor BtCursor; typedef struct BtShared BtShared; -typedef struct BtreeMutexArray BtreeMutexArray; - -/* -** This structure records all of the Btrees that need to hold -** a mutex before we enter sqlite3VdbeExec(). The Btrees are -** are placed in aBtree[] in order of aBtree[]->pBt. That way, -** we can always lock and unlock them all quickly. 
-*/ -struct BtreeMutexArray { - int nMutex; - Btree *aBtree[SQLITE_MAX_ATTACHED+1]; -}; SQLITE_PRIVATE int sqlite3BtreeOpen( + sqlite3_vfs *pVfs, /* VFS to use with this b-tree */ const char *zFilename, /* Name of database file to open */ sqlite3 *db, /* Associated database connection */ Btree **ppBtree, /* Return open Btree* here */ int flags, /* Flags */ int vfsFlags /* Flags passed through to VFS open */ @@ -6931,34 +10326,37 @@ ** following values. ** ** NOTE: These values must match the corresponding PAGER_ values in ** pager.h. */ -#define BTREE_OMIT_JOURNAL 1 /* Do not use journal. No argument */ -#define BTREE_NO_READLOCK 2 /* Omit readlocks on readonly files */ -#define BTREE_MEMORY 4 /* In-memory DB. No argument */ -#define BTREE_READONLY 8 /* Open the database in read-only mode */ -#define BTREE_READWRITE 16 /* Open for both reading and writing */ -#define BTREE_CREATE 32 /* Create the database if it does not exist */ +#define BTREE_OMIT_JOURNAL 1 /* Do not create or use a rollback journal */ +#define BTREE_MEMORY 2 /* This is an in-memory DB */ +#define BTREE_SINGLE 4 /* The file contains at most 1 b-tree */ +#define BTREE_UNORDERED 8 /* Use of a hash implementation is OK */ SQLITE_PRIVATE int sqlite3BtreeClose(Btree*); SQLITE_PRIVATE int sqlite3BtreeSetCacheSize(Btree*,int); -SQLITE_PRIVATE int sqlite3BtreeSetSafetyLevel(Btree*,int,int); +SQLITE_PRIVATE int sqlite3BtreeSetSpillSize(Btree*,int); +#if SQLITE_MAX_MMAP_SIZE>0 +SQLITE_PRIVATE int sqlite3BtreeSetMmapLimit(Btree*,sqlite3_int64); +#endif +SQLITE_PRIVATE int sqlite3BtreeSetPagerFlags(Btree*,unsigned); SQLITE_PRIVATE int sqlite3BtreeSyncDisabled(Btree*); SQLITE_PRIVATE int sqlite3BtreeSetPageSize(Btree *p, int nPagesize, int nReserve, int eFix); SQLITE_PRIVATE int sqlite3BtreeGetPageSize(Btree*); SQLITE_PRIVATE int sqlite3BtreeMaxPageCount(Btree*,int); SQLITE_PRIVATE u32 sqlite3BtreeLastPage(Btree*); SQLITE_PRIVATE int sqlite3BtreeSecureDelete(Btree*,int); -SQLITE_PRIVATE int sqlite3BtreeGetReserve(Btree*); +SQLITE_PRIVATE int sqlite3BtreeGetOptimalReserve(Btree*); +SQLITE_PRIVATE int sqlite3BtreeGetReserveNoMutex(Btree *p); SQLITE_PRIVATE int sqlite3BtreeSetAutoVacuum(Btree *, int); SQLITE_PRIVATE int sqlite3BtreeGetAutoVacuum(Btree *); SQLITE_PRIVATE int sqlite3BtreeBeginTrans(Btree*,int); SQLITE_PRIVATE int sqlite3BtreeCommitPhaseOne(Btree*, const char *zMaster); -SQLITE_PRIVATE int sqlite3BtreeCommitPhaseTwo(Btree*); +SQLITE_PRIVATE int sqlite3BtreeCommitPhaseTwo(Btree*, int); SQLITE_PRIVATE int sqlite3BtreeCommit(Btree*); -SQLITE_PRIVATE int sqlite3BtreeRollback(Btree*); +SQLITE_PRIVATE int sqlite3BtreeRollback(Btree*,int,int); SQLITE_PRIVATE int sqlite3BtreeBeginStmt(Btree*,int); SQLITE_PRIVATE int sqlite3BtreeCreateTable(Btree*, int*, int flags); SQLITE_PRIVATE int sqlite3BtreeIsInTrans(Btree*); SQLITE_PRIVATE int sqlite3BtreeIsInReadTrans(Btree*); SQLITE_PRIVATE int sqlite3BtreeIsInBackup(Btree*); @@ -6972,22 +10370,31 @@ SQLITE_PRIVATE int sqlite3BtreeCopyFile(Btree *, Btree *); SQLITE_PRIVATE int sqlite3BtreeIncrVacuum(Btree *); /* The flags parameter to sqlite3BtreeCreateTable can be the bitwise OR -** of the following flags: +** of the flags shown below. +** +** Every SQLite table must have either BTREE_INTKEY or BTREE_BLOBKEY set. +** With BTREE_INTKEY, the table key is a 64-bit integer and arbitrary data +** is stored in the leaves. (BTREE_INTKEY is used for SQL tables.) With +** BTREE_BLOBKEY, the key is an arbitrary BLOB and no content is stored +** anywhere - the key is the content. 
(BTREE_BLOBKEY is used for SQL +** indices.) */ #define BTREE_INTKEY 1 /* Table has only 64-bit signed integer keys */ -#define BTREE_ZERODATA 2 /* Table has keys only - no data */ -#define BTREE_LEAFDATA 4 /* Data stored in leaves only. Implies INTKEY */ +#define BTREE_BLOBKEY 2 /* Table has keys only - no data */ SQLITE_PRIVATE int sqlite3BtreeDropTable(Btree*, int, int*); SQLITE_PRIVATE int sqlite3BtreeClearTable(Btree*, int, int*); -SQLITE_PRIVATE void sqlite3BtreeTripAllCursors(Btree*, int); +SQLITE_PRIVATE int sqlite3BtreeClearTableOfCursor(BtCursor*); +SQLITE_PRIVATE int sqlite3BtreeTripAllCursors(Btree*, int, int); SQLITE_PRIVATE void sqlite3BtreeGetMeta(Btree *pBtree, int idx, u32 *pValue); SQLITE_PRIVATE int sqlite3BtreeUpdateMeta(Btree*, int idx, u32 value); + +SQLITE_PRIVATE int sqlite3BtreeNewDb(Btree *p); /* ** The second parameter to sqlite3BtreeGetMeta or sqlite3BtreeUpdateMeta ** should be one of the following values. The integer values are assigned ** to constants so that the offset of the corresponding field in an @@ -6996,19 +10403,97 @@ ** offset = 36 + (idx * 4) ** ** For example, the free-page-count field is located at byte offset 36 of ** the database file header. The incr-vacuum-flag field is located at ** byte offset 64 (== 36+4*7). +** +** The BTREE_DATA_VERSION value is not really a value stored in the header. +** It is a read-only number computed by the pager. But we merge it with +** the header value access routines since its access pattern is the same. +** Call it a "virtual meta value". */ #define BTREE_FREE_PAGE_COUNT 0 #define BTREE_SCHEMA_VERSION 1 #define BTREE_FILE_FORMAT 2 #define BTREE_DEFAULT_CACHE_SIZE 3 #define BTREE_LARGEST_ROOT_PAGE 4 #define BTREE_TEXT_ENCODING 5 #define BTREE_USER_VERSION 6 #define BTREE_INCR_VACUUM 7 +#define BTREE_APPLICATION_ID 8 +#define BTREE_DATA_VERSION 15 /* A virtual meta-value */ + +/* +** Kinds of hints that can be passed into the sqlite3BtreeCursorHint() +** interface. +** +** BTREE_HINT_RANGE (arguments: Expr*, Mem*) +** +** The first argument is an Expr* (which is guaranteed to be constant for +** the lifetime of the cursor) that defines constraints on which rows +** might be fetched with this cursor. The Expr* tree may contain +** TK_REGISTER nodes that refer to values stored in the array of registers +** passed as the second parameter. In other words, if Expr.op==TK_REGISTER +** then the value of the node is the value in Mem[pExpr.iTable]. Any +** TK_COLUMN node in the expression tree refers to the Expr.iColumn-th +** column of the b-tree of the cursor. The Expr tree will not contain +** any function calls nor subqueries nor references to b-trees other than +** the cursor being hinted. +** +** The design of the _RANGE hint is aid b-tree implementations that try +** to prefetch content from remote machines - to provide those +** implementations with limits on what needs to be prefetched and thereby +** reduce network bandwidth. +** +** Note that BTREE_HINT_FLAGS with BTREE_BULKLOAD is the only hint used by +** standard SQLite. The other hints are provided for extentions that use +** the SQLite parser and code generator but substitute their own storage +** engine. +*/ +#define BTREE_HINT_RANGE 0 /* Range constraints on queries */ + +/* +** Values that may be OR'd together to form the argument to the +** BTREE_HINT_FLAGS hint for sqlite3BtreeCursorHint(): +** +** The BTREE_BULKLOAD flag is set on index cursors when the index is going +** to be filled with content that is already in sorted order. 
+** +** The BTREE_SEEK_EQ flag is set on cursors that will get OP_SeekGE or +** OP_SeekLE opcodes for a range search, but where the range of entries +** selected will all have the same key. In other words, the cursor will +** be used only for equality key searches. +** +*/ +#define BTREE_BULKLOAD 0x00000001 /* Used to full index in sorted order */ +#define BTREE_SEEK_EQ 0x00000002 /* EQ seeks only - no range seeks */ + +/* +** Flags passed as the third argument to sqlite3BtreeCursor(). +** +** For read-only cursors the wrFlag argument is always zero. For read-write +** cursors it may be set to either (BTREE_WRCSR|BTREE_FORDELETE) or just +** (BTREE_WRCSR). If the BTREE_FORDELETE bit is set, then the cursor will +** only be used by SQLite for the following: +** +** * to seek to and then delete specific entries, and/or +** +** * to read values that will be used to create keys that other +** BTREE_FORDELETE cursors will seek to and delete. +** +** The BTREE_FORDELETE flag is an optimization hint. It is not used by +** by this, the native b-tree engine of SQLite, but it is available to +** alternative storage engines that might be substituted in place of this +** b-tree system. For alternative storage engines in which a delete of +** the main table row automatically deletes corresponding index rows, +** the FORDELETE flag hint allows those alternative storage engines to +** skip a lot of work. Namely: FORDELETE cursors may treat all SEEK +** and DELETE operations as no-ops, and any READ operation against a +** FORDELETE cursor may return a null row: 0x01 0x00. +*/ +#define BTREE_WRCSR 0x00000004 /* read-write cursor */ +#define BTREE_FORDELETE 0x00000008 /* Cursor is for seek/delete only */ SQLITE_PRIVATE int sqlite3BtreeCursor( Btree*, /* BTree containing table to open */ int iTable, /* Index of root page */ int wrFlag, /* 1 for writing. 
0 for read-only */ @@ -7015,21 +10500,31 @@ struct KeyInfo*, /* First argument to compare function */ BtCursor *pCursor /* Space to write cursor structure */ ); SQLITE_PRIVATE int sqlite3BtreeCursorSize(void); SQLITE_PRIVATE void sqlite3BtreeCursorZero(BtCursor*); +SQLITE_PRIVATE void sqlite3BtreeCursorHintFlags(BtCursor*, unsigned); +#ifdef SQLITE_ENABLE_CURSOR_HINTS +SQLITE_PRIVATE void sqlite3BtreeCursorHint(BtCursor*, int, ...); +#endif SQLITE_PRIVATE int sqlite3BtreeCloseCursor(BtCursor*); SQLITE_PRIVATE int sqlite3BtreeMovetoUnpacked( BtCursor*, UnpackedRecord *pUnKey, i64 intKey, int bias, int *pRes ); -SQLITE_PRIVATE int sqlite3BtreeCursorHasMoved(BtCursor*, int*); -SQLITE_PRIVATE int sqlite3BtreeDelete(BtCursor*); +SQLITE_PRIVATE int sqlite3BtreeCursorHasMoved(BtCursor*); +SQLITE_PRIVATE int sqlite3BtreeCursorRestore(BtCursor*, int*); +SQLITE_PRIVATE int sqlite3BtreeDelete(BtCursor*, u8 flags); + +/* Allowed flags for the 2nd argument to sqlite3BtreeDelete() */ +#define BTREE_SAVEPOSITION 0x02 /* Leave cursor pointing at NEXT or PREV */ +#define BTREE_AUXDELETE 0x04 /* not the primary delete operation */ + SQLITE_PRIVATE int sqlite3BtreeInsert(BtCursor*, const void *pKey, i64 nKey, const void *pData, int nData, int nZero, int bias, int seekResult); SQLITE_PRIVATE int sqlite3BtreeFirst(BtCursor*, int *pRes); SQLITE_PRIVATE int sqlite3BtreeLast(BtCursor*, int *pRes); @@ -7036,23 +10531,25 @@ SQLITE_PRIVATE int sqlite3BtreeNext(BtCursor*, int *pRes); SQLITE_PRIVATE int sqlite3BtreeEof(BtCursor*); SQLITE_PRIVATE int sqlite3BtreePrevious(BtCursor*, int *pRes); SQLITE_PRIVATE int sqlite3BtreeKeySize(BtCursor*, i64 *pSize); SQLITE_PRIVATE int sqlite3BtreeKey(BtCursor*, u32 offset, u32 amt, void*); -SQLITE_PRIVATE const void *sqlite3BtreeKeyFetch(BtCursor*, int *pAmt); -SQLITE_PRIVATE const void *sqlite3BtreeDataFetch(BtCursor*, int *pAmt); +SQLITE_PRIVATE const void *sqlite3BtreeKeyFetch(BtCursor*, u32 *pAmt); +SQLITE_PRIVATE const void *sqlite3BtreeDataFetch(BtCursor*, u32 *pAmt); SQLITE_PRIVATE int sqlite3BtreeDataSize(BtCursor*, u32 *pSize); SQLITE_PRIVATE int sqlite3BtreeData(BtCursor*, u32 offset, u32 amt, void*); -SQLITE_PRIVATE void sqlite3BtreeSetCachedRowid(BtCursor*, sqlite3_int64); -SQLITE_PRIVATE sqlite3_int64 sqlite3BtreeGetCachedRowid(BtCursor*); SQLITE_PRIVATE char *sqlite3BtreeIntegrityCheck(Btree*, int *aRoot, int nRoot, int, int*); SQLITE_PRIVATE struct Pager *sqlite3BtreePager(Btree*); SQLITE_PRIVATE int sqlite3BtreePutData(BtCursor*, u32 offset, u32 amt, void*); -SQLITE_PRIVATE void sqlite3BtreeCacheOverflow(BtCursor *); +SQLITE_PRIVATE void sqlite3BtreeIncrblobCursor(BtCursor *); SQLITE_PRIVATE void sqlite3BtreeClearCursor(BtCursor *); +SQLITE_PRIVATE int sqlite3BtreeSetVersion(Btree *pBt, int iVersion); +SQLITE_PRIVATE int sqlite3BtreeCursorHasHint(BtCursor*, unsigned int mask); +SQLITE_PRIVATE int sqlite3BtreeIsReadonly(Btree *pBt); +SQLITE_PRIVATE int sqlite3HeaderSizeBtree(void); #ifndef NDEBUG SQLITE_PRIVATE int sqlite3BtreeCursorIsValid(BtCursor*); #endif @@ -7063,48 +10560,50 @@ #ifdef SQLITE_TEST SQLITE_PRIVATE int sqlite3BtreeCursorInfo(BtCursor*, int*, int); SQLITE_PRIVATE void sqlite3BtreeCursorList(Btree*); #endif +#ifndef SQLITE_OMIT_WAL +SQLITE_PRIVATE int sqlite3BtreeCheckpoint(Btree*, int, int *, int *); +#endif + /* ** If we are not using shared cache, then there is no need to ** use mutexes to access the BtShared structures. So make the ** Enter and Leave procedures no-ops. 
*/ #ifndef SQLITE_OMIT_SHARED_CACHE SQLITE_PRIVATE void sqlite3BtreeEnter(Btree*); SQLITE_PRIVATE void sqlite3BtreeEnterAll(sqlite3*); +SQLITE_PRIVATE int sqlite3BtreeSharable(Btree*); +SQLITE_PRIVATE void sqlite3BtreeEnterCursor(BtCursor*); #else # define sqlite3BtreeEnter(X) # define sqlite3BtreeEnterAll(X) +# define sqlite3BtreeSharable(X) 0 +# define sqlite3BtreeEnterCursor(X) #endif #if !defined(SQLITE_OMIT_SHARED_CACHE) && SQLITE_THREADSAFE SQLITE_PRIVATE void sqlite3BtreeLeave(Btree*); -SQLITE_PRIVATE void sqlite3BtreeEnterCursor(BtCursor*); SQLITE_PRIVATE void sqlite3BtreeLeaveCursor(BtCursor*); SQLITE_PRIVATE void sqlite3BtreeLeaveAll(sqlite3*); -SQLITE_PRIVATE void sqlite3BtreeMutexArrayEnter(BtreeMutexArray*); -SQLITE_PRIVATE void sqlite3BtreeMutexArrayLeave(BtreeMutexArray*); -SQLITE_PRIVATE void sqlite3BtreeMutexArrayInsert(BtreeMutexArray*, Btree*); #ifndef NDEBUG /* These routines are used inside assert() statements only. */ SQLITE_PRIVATE int sqlite3BtreeHoldsMutex(Btree*); SQLITE_PRIVATE int sqlite3BtreeHoldsAllMutexes(sqlite3*); +SQLITE_PRIVATE int sqlite3SchemaMutexHeld(sqlite3*,int,Schema*); #endif #else # define sqlite3BtreeLeave(X) -# define sqlite3BtreeEnterCursor(X) # define sqlite3BtreeLeaveCursor(X) # define sqlite3BtreeLeaveAll(X) -# define sqlite3BtreeMutexArrayEnter(X) -# define sqlite3BtreeMutexArrayLeave(X) -# define sqlite3BtreeMutexArrayInsert(X,Y) # define sqlite3BtreeHoldsMutex(X) 1 # define sqlite3BtreeHoldsAllMutexes(X) 1 +# define sqlite3SchemaMutexHeld(X,Y,Z) 1 #endif #endif /* _BTREE_H_ */ @@ -7129,10 +10628,11 @@ ** or VDBE. The VDBE implements an abstract machine that runs a ** simple program to access and modify the underlying database. */ #ifndef _SQLITE_VDBE_H_ #define _SQLITE_VDBE_H_ +/* #include <stdio.h> */ /* ** A single VDBE is an opaque structure named "Vdbe". Only routines ** in the source file sqliteVdbe.c are allowed to see the insides ** of this structure. @@ -7141,11 +10641,10 @@ /* ** The names of the following types declared in vdbeInt.h are required ** for the VdbeOp definition. 
*/ -typedef struct VdbeFunc VdbeFunc; typedef struct Mem Mem; typedef struct SubProgram SubProgram; /* ** A single instruction of the virtual machine has an opcode @@ -7158,32 +10657,39 @@ u8 opflags; /* Mask of the OPFLG_* flags in opcodes.h */ u8 p5; /* Fifth parameter is an unsigned character */ int p1; /* First operand */ int p2; /* Second parameter (often the jump destination) */ int p3; /* The third parameter */ - union { /* fourth parameter */ + union p4union { /* fourth parameter */ int i; /* Integer value if p4type==P4_INT32 */ void *p; /* Generic pointer */ char *z; /* Pointer to data for string (char array) types */ i64 *pI64; /* Used when p4type is P4_INT64 */ double *pReal; /* Used when p4type is P4_REAL */ FuncDef *pFunc; /* Used when p4type is P4_FUNCDEF */ - VdbeFunc *pVdbeFunc; /* Used when p4type is P4_VDBEFUNC */ + sqlite3_context *pCtx; /* Used when p4type is P4_FUNCCTX */ CollSeq *pColl; /* Used when p4type is P4_COLLSEQ */ Mem *pMem; /* Used when p4type is P4_MEM */ VTable *pVtab; /* Used when p4type is P4_VTAB */ KeyInfo *pKeyInfo; /* Used when p4type is P4_KEYINFO */ int *ai; /* Used when p4type is P4_INTARRAY */ SubProgram *pProgram; /* Used when p4type is P4_SUBPROGRAM */ +#ifdef SQLITE_ENABLE_CURSOR_HINTS + Expr *pExpr; /* Used when p4type is P4_EXPR */ +#endif + int (*xAdvance)(BtCursor *, int *); } p4; -#ifdef SQLITE_DEBUG +#ifdef SQLITE_ENABLE_EXPLAIN_COMMENTS char *zComment; /* Comment to improve readability */ #endif #ifdef VDBE_PROFILE - int cnt; /* Number of times this instruction was executed */ + u32 cnt; /* Number of times this instruction was executed */ u64 cycles; /* Total time spent executing this instruction */ #endif +#ifdef SQLITE_VDBE_COVERAGE + int iSrcLine; /* Source-code line that generated this opcode */ +#endif }; typedef struct VdbeOp VdbeOp; /* @@ -7192,12 +10698,13 @@ struct SubProgram { VdbeOp *aOp; /* Array of opcodes for sub-program */ int nOp; /* Elements in aOp[] */ int nMem; /* Number of memory cells required */ int nCsr; /* Number of cursors required */ - int nRef; /* Number of pointers to this structure */ + int nOnce; /* Number of OP_Once instructions */ void *token; /* id that may be used to recursive triggers */ + SubProgram *pNext; /* Next sub-program already visited */ }; /* ** A smaller version of VdbeOp used for the VdbeAddOpList() function because ** it takes up less space. 
@@ -7217,30 +10724,28 @@ #define P4_DYNAMIC (-1) /* Pointer to a string obtained from sqliteMalloc() */ #define P4_STATIC (-2) /* Pointer to a static string */ #define P4_COLLSEQ (-4) /* P4 is a pointer to a CollSeq structure */ #define P4_FUNCDEF (-5) /* P4 is a pointer to a FuncDef structure */ #define P4_KEYINFO (-6) /* P4 is a pointer to a KeyInfo structure */ -#define P4_VDBEFUNC (-7) /* P4 is a pointer to a VdbeFunc structure */ +#define P4_EXPR (-7) /* P4 is a pointer to an Expr tree */ #define P4_MEM (-8) /* P4 is a pointer to a Mem* structure */ -#define P4_TRANSIENT (-9) /* P4 is a pointer to a transient string */ +#define P4_TRANSIENT 0 /* P4 is a pointer to a transient string */ #define P4_VTAB (-10) /* P4 is a pointer to an sqlite3_vtab structure */ #define P4_MPRINTF (-11) /* P4 is a string obtained from sqlite3_mprintf() */ #define P4_REAL (-12) /* P4 is a 64-bit floating point value */ #define P4_INT64 (-13) /* P4 is a 64-bit signed integer */ #define P4_INT32 (-14) /* P4 is a 32-bit signed integer */ #define P4_INTARRAY (-15) /* P4 is a vector of 32-bit integers */ #define P4_SUBPROGRAM (-18) /* P4 is a pointer to a SubProgram structure */ - -/* When adding a P4 argument using P4_KEYINFO, a copy of the KeyInfo structure -** is made. That copy is freed when the Vdbe is finalized. But if the -** argument is P4_KEYINFO_HANDOFF, the passed in pointer is used. It still -** gets freed when the Vdbe is finalized so it still should be obtained -** from a single sqliteMalloc(). But no copy is made and the calling -** function should *not* try to free the KeyInfo. -*/ -#define P4_KEYINFO_HANDOFF (-16) -#define P4_KEYINFO_STATIC (-17) +#define P4_ADVANCE (-19) /* P4 is a pointer to BtreeNext() or BtreePrev() */ +#define P4_FUNCCTX (-20) /* P4 is a pointer to an sqlite3_context object */ + +/* Error message codes for OP_Halt */ +#define P5_ConstraintNotNull 1 +#define P5_ConstraintUnique 2 +#define P5_ConstraintCheck 3 +#define P5_ConstraintFK 4 /* ** The Vdbe.aColName array contains 5n Mem structures, where n is the ** number of columns of data returned by the statement. */ @@ -7272,256 +10777,349 @@ ** header file that defines a number for each opcode used by the VDBE. */ /************** Include opcodes.h in the middle of vdbe.h ********************/ /************** Begin file opcodes.h *****************************************/ /* Automatically generated. 
Do not edit */ -/* See the mkopcodeh.awk script for details */ -#define OP_Goto 1 -#define OP_Gosub 2 -#define OP_Return 3 -#define OP_Yield 4 -#define OP_HaltIfNull 5 -#define OP_Halt 6 -#define OP_Integer 7 -#define OP_Int64 8 -#define OP_Real 130 /* same as TK_FLOAT */ -#define OP_String8 94 /* same as TK_STRING */ -#define OP_String 9 -#define OP_Null 10 -#define OP_Blob 11 -#define OP_Variable 12 -#define OP_Move 13 -#define OP_Copy 14 -#define OP_SCopy 15 -#define OP_ResultRow 16 -#define OP_Concat 91 /* same as TK_CONCAT */ -#define OP_Add 86 /* same as TK_PLUS */ -#define OP_Subtract 87 /* same as TK_MINUS */ -#define OP_Multiply 88 /* same as TK_STAR */ -#define OP_Divide 89 /* same as TK_SLASH */ -#define OP_Remainder 90 /* same as TK_REM */ -#define OP_CollSeq 17 -#define OP_Function 18 -#define OP_BitAnd 82 /* same as TK_BITAND */ -#define OP_BitOr 83 /* same as TK_BITOR */ -#define OP_ShiftLeft 84 /* same as TK_LSHIFT */ -#define OP_ShiftRight 85 /* same as TK_RSHIFT */ -#define OP_AddImm 20 -#define OP_MustBeInt 21 -#define OP_RealAffinity 22 -#define OP_ToText 141 /* same as TK_TO_TEXT */ -#define OP_ToBlob 142 /* same as TK_TO_BLOB */ -#define OP_ToNumeric 143 /* same as TK_TO_NUMERIC*/ -#define OP_ToInt 144 /* same as TK_TO_INT */ -#define OP_ToReal 145 /* same as TK_TO_REAL */ -#define OP_Eq 76 /* same as TK_EQ */ -#define OP_Ne 75 /* same as TK_NE */ -#define OP_Lt 79 /* same as TK_LT */ -#define OP_Le 78 /* same as TK_LE */ -#define OP_Gt 77 /* same as TK_GT */ -#define OP_Ge 80 /* same as TK_GE */ -#define OP_Permutation 23 -#define OP_Compare 24 -#define OP_Jump 25 -#define OP_And 69 /* same as TK_AND */ -#define OP_Or 68 /* same as TK_OR */ -#define OP_Not 19 /* same as TK_NOT */ -#define OP_BitNot 93 /* same as TK_BITNOT */ -#define OP_If 26 -#define OP_IfNot 27 -#define OP_IsNull 73 /* same as TK_ISNULL */ -#define OP_NotNull 74 /* same as TK_NOTNULL */ -#define OP_Column 28 -#define OP_Affinity 29 -#define OP_MakeRecord 30 -#define OP_Count 31 -#define OP_Savepoint 32 -#define OP_AutoCommit 33 -#define OP_Transaction 34 -#define OP_ReadCookie 35 -#define OP_SetCookie 36 -#define OP_VerifyCookie 37 -#define OP_OpenRead 38 -#define OP_OpenWrite 39 -#define OP_OpenAutoindex 40 -#define OP_OpenEphemeral 41 -#define OP_OpenPseudo 42 -#define OP_Close 43 -#define OP_SeekLt 44 -#define OP_SeekLe 45 -#define OP_SeekGe 46 -#define OP_SeekGt 47 -#define OP_Seek 48 -#define OP_NotFound 49 -#define OP_Found 50 -#define OP_IsUnique 51 -#define OP_NotExists 52 -#define OP_Sequence 53 -#define OP_NewRowid 54 -#define OP_Insert 55 -#define OP_InsertInt 56 -#define OP_Delete 57 -#define OP_ResetCount 58 -#define OP_RowKey 59 -#define OP_RowData 60 -#define OP_Rowid 61 -#define OP_NullRow 62 -#define OP_Last 63 -#define OP_Sort 64 -#define OP_Rewind 65 -#define OP_Prev 66 -#define OP_Next 67 -#define OP_IdxInsert 70 -#define OP_IdxDelete 71 -#define OP_IdxRowid 72 -#define OP_IdxLT 81 -#define OP_IdxGE 92 -#define OP_Destroy 95 -#define OP_Clear 96 -#define OP_CreateIndex 97 -#define OP_CreateTable 98 -#define OP_ParseSchema 99 -#define OP_LoadAnalysis 100 -#define OP_DropTable 101 -#define OP_DropIndex 102 -#define OP_DropTrigger 103 -#define OP_IntegrityCk 104 -#define OP_RowSetAdd 105 -#define OP_RowSetRead 106 -#define OP_RowSetTest 107 -#define OP_Program 108 -#define OP_Param 109 -#define OP_FkCounter 110 -#define OP_FkIfZero 111 -#define OP_MemMax 112 -#define OP_IfPos 113 -#define OP_IfNeg 114 -#define OP_IfZero 115 -#define OP_AggStep 116 -#define OP_AggFinal 117 -#define 
OP_Vacuum 118 -#define OP_IncrVacuum 119 -#define OP_Expire 120 -#define OP_TableLock 121 -#define OP_VBegin 122 -#define OP_VCreate 123 -#define OP_VDestroy 124 -#define OP_VOpen 125 -#define OP_VFilter 126 -#define OP_VColumn 127 -#define OP_VNext 128 -#define OP_VRename 129 -#define OP_VUpdate 131 -#define OP_Pagecount 132 -#define OP_Trace 133 -#define OP_Noop 134 -#define OP_Explain 135 - -/* The following opcode values are never used */ -#define OP_NotUsed_136 136 -#define OP_NotUsed_137 137 -#define OP_NotUsed_138 138 -#define OP_NotUsed_139 139 -#define OP_NotUsed_140 140 - +/* See the tool/mkopcodeh.tcl script for details */ +#define OP_Savepoint 0 +#define OP_AutoCommit 1 +#define OP_Transaction 2 +#define OP_SorterNext 3 +#define OP_PrevIfOpen 4 +#define OP_NextIfOpen 5 +#define OP_Prev 6 +#define OP_Next 7 +#define OP_Checkpoint 8 +#define OP_JournalMode 9 +#define OP_Vacuum 10 +#define OP_VFilter 11 /* synopsis: iplan=r[P3] zplan='P4' */ +#define OP_VUpdate 12 /* synopsis: data=r[P3@P2] */ +#define OP_Goto 13 +#define OP_Gosub 14 +#define OP_Return 15 +#define OP_InitCoroutine 16 +#define OP_EndCoroutine 17 +#define OP_Yield 18 +#define OP_Not 19 /* same as TK_NOT, synopsis: r[P2]= !r[P1] */ +#define OP_HaltIfNull 20 /* synopsis: if r[P3]=null halt */ +#define OP_Halt 21 +#define OP_Integer 22 /* synopsis: r[P2]=P1 */ +#define OP_Int64 23 /* synopsis: r[P2]=P4 */ +#define OP_String 24 /* synopsis: r[P2]='P4' (len=P1) */ +#define OP_Null 25 /* synopsis: r[P2..P3]=NULL */ +#define OP_SoftNull 26 /* synopsis: r[P1]=NULL */ +#define OP_Blob 27 /* synopsis: r[P2]=P4 (len=P1) */ +#define OP_Variable 28 /* synopsis: r[P2]=parameter(P1,P4) */ +#define OP_Move 29 /* synopsis: r[P2@P3]=r[P1@P3] */ +#define OP_Copy 30 /* synopsis: r[P2@P3+1]=r[P1@P3+1] */ +#define OP_SCopy 31 /* synopsis: r[P2]=r[P1] */ +#define OP_IntCopy 32 /* synopsis: r[P2]=r[P1] */ +#define OP_ResultRow 33 /* synopsis: output=r[P1@P2] */ +#define OP_CollSeq 34 +#define OP_Function0 35 /* synopsis: r[P3]=func(r[P2@P5]) */ +#define OP_Function 36 /* synopsis: r[P3]=func(r[P2@P5]) */ +#define OP_AddImm 37 /* synopsis: r[P1]=r[P1]+P2 */ +#define OP_MustBeInt 38 +#define OP_RealAffinity 39 +#define OP_Cast 40 /* synopsis: affinity(r[P1]) */ +#define OP_Permutation 41 +#define OP_Compare 42 /* synopsis: r[P1@P3] <-> r[P2@P3] */ +#define OP_Jump 43 +#define OP_Once 44 +#define OP_If 45 +#define OP_IfNot 46 +#define OP_Column 47 /* synopsis: r[P3]=PX */ +#define OP_Affinity 48 /* synopsis: affinity(r[P1@P2]) */ +#define OP_MakeRecord 49 /* synopsis: r[P3]=mkrec(r[P1@P2]) */ +#define OP_Count 50 /* synopsis: r[P2]=count() */ +#define OP_ReadCookie 51 +#define OP_SetCookie 52 +#define OP_ReopenIdx 53 /* synopsis: root=P2 iDb=P3 */ +#define OP_OpenRead 54 /* synopsis: root=P2 iDb=P3 */ +#define OP_OpenWrite 55 /* synopsis: root=P2 iDb=P3 */ +#define OP_OpenAutoindex 56 /* synopsis: nColumn=P2 */ +#define OP_OpenEphemeral 57 /* synopsis: nColumn=P2 */ +#define OP_SorterOpen 58 +#define OP_SequenceTest 59 /* synopsis: if( cursor[P1].ctr++ ) pc = P2 */ +#define OP_OpenPseudo 60 /* synopsis: P3 columns in r[P2] */ +#define OP_Close 61 +#define OP_ColumnsUsed 62 +#define OP_SeekLT 63 /* synopsis: key=r[P3@P4] */ +#define OP_SeekLE 64 /* synopsis: key=r[P3@P4] */ +#define OP_SeekGE 65 /* synopsis: key=r[P3@P4] */ +#define OP_SeekGT 66 /* synopsis: key=r[P3@P4] */ +#define OP_NoConflict 67 /* synopsis: key=r[P3@P4] */ +#define OP_NotFound 68 /* synopsis: key=r[P3@P4] */ +#define OP_Found 69 /* synopsis: key=r[P3@P4] */ +#define 
OP_NotExists 70 /* synopsis: intkey=r[P3] */ +#define OP_Or 71 /* same as TK_OR, synopsis: r[P3]=(r[P1] || r[P2]) */ +#define OP_And 72 /* same as TK_AND, synopsis: r[P3]=(r[P1] && r[P2]) */ +#define OP_Sequence 73 /* synopsis: r[P2]=cursor[P1].ctr++ */ +#define OP_NewRowid 74 /* synopsis: r[P2]=rowid */ +#define OP_Insert 75 /* synopsis: intkey=r[P3] data=r[P2] */ +#define OP_IsNull 76 /* same as TK_ISNULL, synopsis: if r[P1]==NULL goto P2 */ +#define OP_NotNull 77 /* same as TK_NOTNULL, synopsis: if r[P1]!=NULL goto P2 */ +#define OP_Ne 78 /* same as TK_NE, synopsis: if r[P1]!=r[P3] goto P2 */ +#define OP_Eq 79 /* same as TK_EQ, synopsis: if r[P1]==r[P3] goto P2 */ +#define OP_Gt 80 /* same as TK_GT, synopsis: if r[P1]>r[P3] goto P2 */ +#define OP_Le 81 /* same as TK_LE, synopsis: if r[P1]<=r[P3] goto P2 */ +#define OP_Lt 82 /* same as TK_LT, synopsis: if r[P1]<r[P3] goto P2 */ +#define OP_Ge 83 /* same as TK_GE, synopsis: if r[P1]>=r[P3] goto P2 */ +#define OP_InsertInt 84 /* synopsis: intkey=P3 data=r[P2] */ +#define OP_BitAnd 85 /* same as TK_BITAND, synopsis: r[P3]=r[P1]&r[P2] */ +#define OP_BitOr 86 /* same as TK_BITOR, synopsis: r[P3]=r[P1]|r[P2] */ +#define OP_ShiftLeft 87 /* same as TK_LSHIFT, synopsis: r[P3]=r[P2]<<r[P1] */ +#define OP_ShiftRight 88 /* same as TK_RSHIFT, synopsis: r[P3]=r[P2]>>r[P1] */ +#define OP_Add 89 /* same as TK_PLUS, synopsis: r[P3]=r[P1]+r[P2] */ +#define OP_Subtract 90 /* same as TK_MINUS, synopsis: r[P3]=r[P2]-r[P1] */ +#define OP_Multiply 91 /* same as TK_STAR, synopsis: r[P3]=r[P1]*r[P2] */ +#define OP_Divide 92 /* same as TK_SLASH, synopsis: r[P3]=r[P2]/r[P1] */ +#define OP_Remainder 93 /* same as TK_REM, synopsis: r[P3]=r[P2]%r[P1] */ +#define OP_Concat 94 /* same as TK_CONCAT, synopsis: r[P3]=r[P2]+r[P1] */ +#define OP_Delete 95 +#define OP_BitNot 96 /* same as TK_BITNOT, synopsis: r[P1]= ~r[P1] */ +#define OP_String8 97 /* same as TK_STRING, synopsis: r[P2]='P4' */ +#define OP_ResetCount 98 +#define OP_SorterCompare 99 /* synopsis: if key(P1)!=trim(r[P3],P4) goto P2 */ +#define OP_SorterData 100 /* synopsis: r[P2]=data */ +#define OP_RowKey 101 /* synopsis: r[P2]=key */ +#define OP_RowData 102 /* synopsis: r[P2]=data */ +#define OP_Rowid 103 /* synopsis: r[P2]=rowid */ +#define OP_NullRow 104 +#define OP_Last 105 +#define OP_SorterSort 106 +#define OP_Sort 107 +#define OP_Rewind 108 +#define OP_SorterInsert 109 +#define OP_IdxInsert 110 /* synopsis: key=r[P2] */ +#define OP_IdxDelete 111 /* synopsis: key=r[P2@P3] */ +#define OP_Seek 112 /* synopsis: Move P3 to P1.rowid */ +#define OP_IdxRowid 113 /* synopsis: r[P2]=rowid */ +#define OP_IdxLE 114 /* synopsis: key=r[P3@P4] */ +#define OP_IdxGT 115 /* synopsis: key=r[P3@P4] */ +#define OP_IdxLT 116 /* synopsis: key=r[P3@P4] */ +#define OP_IdxGE 117 /* synopsis: key=r[P3@P4] */ +#define OP_Destroy 118 +#define OP_Clear 119 +#define OP_ResetSorter 120 +#define OP_CreateIndex 121 /* synopsis: r[P2]=root iDb=P1 */ +#define OP_CreateTable 122 /* synopsis: r[P2]=root iDb=P1 */ +#define OP_ParseSchema 123 +#define OP_LoadAnalysis 124 +#define OP_DropTable 125 +#define OP_DropIndex 126 +#define OP_DropTrigger 127 +#define OP_IntegrityCk 128 +#define OP_RowSetAdd 129 /* synopsis: rowset(P1)=r[P2] */ +#define OP_RowSetRead 130 /* synopsis: r[P3]=rowset(P1) */ +#define OP_RowSetTest 131 /* synopsis: if r[P3] in rowset(P1) goto P2 */ +#define OP_Program 132 +#define OP_Real 133 /* same as TK_FLOAT, synopsis: r[P2]=P4 */ +#define OP_Param 134 +#define OP_FkCounter 135 /* synopsis: fkctr[P1]+=P2 */ +#define 
OP_FkIfZero 136 /* synopsis: if fkctr[P1]==0 goto P2 */ +#define OP_MemMax 137 /* synopsis: r[P1]=max(r[P1],r[P2]) */ +#define OP_IfPos 138 /* synopsis: if r[P1]>0 then r[P1]-=P3, goto P2 */ +#define OP_OffsetLimit 139 /* synopsis: if r[P1]>0 then r[P2]=r[P1]+max(0,r[P3]) else r[P2]=(-1) */ +#define OP_IfNotZero 140 /* synopsis: if r[P1]!=0 then r[P1]-=P3, goto P2 */ +#define OP_DecrJumpZero 141 /* synopsis: if (--r[P1])==0 goto P2 */ +#define OP_JumpZeroIncr 142 /* synopsis: if (r[P1]++)==0 ) goto P2 */ +#define OP_AggStep0 143 /* synopsis: accum=r[P3] step(r[P2@P5]) */ +#define OP_AggStep 144 /* synopsis: accum=r[P3] step(r[P2@P5]) */ +#define OP_AggFinal 145 /* synopsis: accum=r[P1] N=P2 */ +#define OP_IncrVacuum 146 +#define OP_Expire 147 +#define OP_TableLock 148 /* synopsis: iDb=P1 root=P2 write=P3 */ +#define OP_VBegin 149 +#define OP_VCreate 150 +#define OP_VDestroy 151 +#define OP_VOpen 152 +#define OP_VColumn 153 /* synopsis: r[P3]=vcolumn(P2) */ +#define OP_VNext 154 +#define OP_VRename 155 +#define OP_Pagecount 156 +#define OP_MaxPgcnt 157 +#define OP_Init 158 /* synopsis: Start at P2 */ +#define OP_CursorHint 159 +#define OP_Noop 160 +#define OP_Explain 161 /* Properties such as "out2" or "jump" that are specified in ** comments following the "case" for each opcode in the vdbe.c ** are encoded into bitvectors as follows: */ -#define OPFLG_JUMP 0x0001 /* jump: P2 holds jmp target */ -#define OPFLG_OUT2_PRERELEASE 0x0002 /* out2-prerelease: */ -#define OPFLG_IN1 0x0004 /* in1: P1 is an input */ -#define OPFLG_IN2 0x0008 /* in2: P2 is an input */ -#define OPFLG_IN3 0x0010 /* in3: P3 is an input */ -#define OPFLG_OUT2 0x0020 /* out2: P2 is an output */ -#define OPFLG_OUT3 0x0040 /* out3: P3 is an output */ +#define OPFLG_JUMP 0x01 /* jump: P2 holds jmp target */ +#define OPFLG_IN1 0x02 /* in1: P1 is an input */ +#define OPFLG_IN2 0x04 /* in2: P2 is an input */ +#define OPFLG_IN3 0x08 /* in3: P3 is an input */ +#define OPFLG_OUT2 0x10 /* out2: P2 is an output */ +#define OPFLG_OUT3 0x20 /* out3: P3 is an output */ #define OPFLG_INITIALIZER {\ -/* 0 */ 0x00, 0x01, 0x05, 0x04, 0x04, 0x10, 0x00, 0x02,\ -/* 8 */ 0x02, 0x02, 0x02, 0x02, 0x00, 0x00, 0x24, 0x24,\ -/* 16 */ 0x00, 0x00, 0x00, 0x24, 0x04, 0x05, 0x04, 0x00,\ -/* 24 */ 0x00, 0x01, 0x05, 0x05, 0x00, 0x00, 0x00, 0x02,\ -/* 32 */ 0x00, 0x00, 0x00, 0x02, 0x10, 0x00, 0x00, 0x00,\ -/* 40 */ 0x00, 0x00, 0x00, 0x00, 0x11, 0x11, 0x11, 0x11,\ -/* 48 */ 0x08, 0x11, 0x11, 0x11, 0x11, 0x02, 0x02, 0x00,\ -/* 56 */ 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x01,\ -/* 64 */ 0x01, 0x01, 0x01, 0x01, 0x4c, 0x4c, 0x08, 0x00,\ -/* 72 */ 0x02, 0x05, 0x05, 0x15, 0x15, 0x15, 0x15, 0x15,\ -/* 80 */ 0x15, 0x01, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c, 0x4c,\ -/* 88 */ 0x4c, 0x4c, 0x4c, 0x4c, 0x01, 0x24, 0x02, 0x02,\ -/* 96 */ 0x00, 0x02, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00,\ -/* 104 */ 0x00, 0x0c, 0x45, 0x15, 0x01, 0x02, 0x00, 0x01,\ -/* 112 */ 0x08, 0x05, 0x05, 0x05, 0x00, 0x00, 0x00, 0x01,\ -/* 120 */ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00,\ -/* 128 */ 0x01, 0x00, 0x02, 0x00, 0x02, 0x00, 0x00, 0x00,\ -/* 136 */ 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x04, 0x04,\ -/* 144 */ 0x04, 0x04,} +/* 0 */ 0x00, 0x00, 0x00, 0x01, 0x01, 0x01, 0x01, 0x01,\ +/* 8 */ 0x00, 0x10, 0x00, 0x01, 0x00, 0x01, 0x01, 0x02,\ +/* 16 */ 0x01, 0x02, 0x03, 0x12, 0x08, 0x00, 0x10, 0x10,\ +/* 24 */ 0x10, 0x10, 0x00, 0x10, 0x10, 0x00, 0x00, 0x10,\ +/* 32 */ 0x10, 0x00, 0x00, 0x00, 0x00, 0x02, 0x03, 0x02,\ +/* 40 */ 0x02, 0x00, 0x00, 0x01, 0x01, 0x03, 0x03, 0x00,\ +/* 48 */ 0x00, 0x00, 
0x10, 0x10, 0x00, 0x00, 0x00, 0x00,\ +/* 56 */ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x09,\ +/* 64 */ 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x09, 0x26,\ +/* 72 */ 0x26, 0x10, 0x10, 0x00, 0x03, 0x03, 0x0b, 0x0b,\ +/* 80 */ 0x0b, 0x0b, 0x0b, 0x0b, 0x00, 0x26, 0x26, 0x26,\ +/* 88 */ 0x26, 0x26, 0x26, 0x26, 0x26, 0x26, 0x26, 0x00,\ +/* 96 */ 0x12, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10,\ +/* 104 */ 0x00, 0x01, 0x01, 0x01, 0x01, 0x04, 0x04, 0x00,\ +/* 112 */ 0x00, 0x10, 0x01, 0x01, 0x01, 0x01, 0x10, 0x00,\ +/* 120 */ 0x00, 0x10, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00,\ +/* 128 */ 0x00, 0x06, 0x23, 0x0b, 0x01, 0x10, 0x10, 0x00,\ +/* 136 */ 0x01, 0x04, 0x03, 0x1a, 0x03, 0x03, 0x03, 0x00,\ +/* 144 */ 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00,\ +/* 152 */ 0x00, 0x00, 0x01, 0x00, 0x10, 0x10, 0x01, 0x00,\ +/* 160 */ 0x00, 0x00,} /************** End of opcodes.h *********************************************/ /************** Continuing where we left off in vdbe.h ***********************/ /* ** Prototypes for the VDBE interface. See comments on the implementation ** for a description of what each of these routines does. */ -SQLITE_PRIVATE Vdbe *sqlite3VdbeCreate(sqlite3*); +SQLITE_PRIVATE Vdbe *sqlite3VdbeCreate(Parse*); SQLITE_PRIVATE int sqlite3VdbeAddOp0(Vdbe*,int); SQLITE_PRIVATE int sqlite3VdbeAddOp1(Vdbe*,int,int); SQLITE_PRIVATE int sqlite3VdbeAddOp2(Vdbe*,int,int,int); +SQLITE_PRIVATE int sqlite3VdbeGoto(Vdbe*,int); +SQLITE_PRIVATE int sqlite3VdbeLoadString(Vdbe*,int,const char*); +SQLITE_PRIVATE void sqlite3VdbeMultiLoad(Vdbe*,int,const char*,...); SQLITE_PRIVATE int sqlite3VdbeAddOp3(Vdbe*,int,int,int,int); SQLITE_PRIVATE int sqlite3VdbeAddOp4(Vdbe*,int,int,int,int,const char *zP4,int); +SQLITE_PRIVATE int sqlite3VdbeAddOp4Dup8(Vdbe*,int,int,int,int,const u8*,int); SQLITE_PRIVATE int sqlite3VdbeAddOp4Int(Vdbe*,int,int,int,int,int); -SQLITE_PRIVATE int sqlite3VdbeAddOpList(Vdbe*, int nOp, VdbeOpList const *aOp); -SQLITE_PRIVATE void sqlite3VdbeChangeP1(Vdbe*, int addr, int P1); -SQLITE_PRIVATE void sqlite3VdbeChangeP2(Vdbe*, int addr, int P2); -SQLITE_PRIVATE void sqlite3VdbeChangeP3(Vdbe*, int addr, int P3); +SQLITE_PRIVATE void sqlite3VdbeEndCoroutine(Vdbe*,int); +#if defined(SQLITE_DEBUG) && !defined(SQLITE_TEST_REALLOC_STRESS) +SQLITE_PRIVATE void sqlite3VdbeVerifyNoMallocRequired(Vdbe *p, int N); +#else +# define sqlite3VdbeVerifyNoMallocRequired(A,B) +#endif +SQLITE_PRIVATE VdbeOp *sqlite3VdbeAddOpList(Vdbe*, int nOp, VdbeOpList const *aOp, int iLineno); +SQLITE_PRIVATE void sqlite3VdbeAddParseSchemaOp(Vdbe*,int,char*); +SQLITE_PRIVATE void sqlite3VdbeChangeOpcode(Vdbe*, u32 addr, u8); +SQLITE_PRIVATE void sqlite3VdbeChangeP1(Vdbe*, u32 addr, int P1); +SQLITE_PRIVATE void sqlite3VdbeChangeP2(Vdbe*, u32 addr, int P2); +SQLITE_PRIVATE void sqlite3VdbeChangeP3(Vdbe*, u32 addr, int P3); SQLITE_PRIVATE void sqlite3VdbeChangeP5(Vdbe*, u8 P5); SQLITE_PRIVATE void sqlite3VdbeJumpHere(Vdbe*, int addr); -SQLITE_PRIVATE void sqlite3VdbeChangeToNoop(Vdbe*, int addr, int N); +SQLITE_PRIVATE int sqlite3VdbeChangeToNoop(Vdbe*, int addr); +SQLITE_PRIVATE int sqlite3VdbeDeletePriorOpcode(Vdbe*, u8 op); SQLITE_PRIVATE void sqlite3VdbeChangeP4(Vdbe*, int addr, const char *zP4, int N); +SQLITE_PRIVATE void sqlite3VdbeSetP4KeyInfo(Parse*, Index*); SQLITE_PRIVATE void sqlite3VdbeUsesBtree(Vdbe*, int); SQLITE_PRIVATE VdbeOp *sqlite3VdbeGetOp(Vdbe*, int); SQLITE_PRIVATE int sqlite3VdbeMakeLabel(Vdbe*); SQLITE_PRIVATE void sqlite3VdbeRunOnlyOnce(Vdbe*); SQLITE_PRIVATE void sqlite3VdbeDelete(Vdbe*); 
-SQLITE_PRIVATE void sqlite3VdbeMakeReady(Vdbe*,int,int,int,int,int,int); +SQLITE_PRIVATE void sqlite3VdbeClearObject(sqlite3*,Vdbe*); +SQLITE_PRIVATE void sqlite3VdbeMakeReady(Vdbe*,Parse*); SQLITE_PRIVATE int sqlite3VdbeFinalize(Vdbe*); SQLITE_PRIVATE void sqlite3VdbeResolveLabel(Vdbe*, int); SQLITE_PRIVATE int sqlite3VdbeCurrentAddr(Vdbe*); #ifdef SQLITE_DEBUG SQLITE_PRIVATE int sqlite3VdbeAssertMayAbort(Vdbe *, int); -SQLITE_PRIVATE void sqlite3VdbeTrace(Vdbe*,FILE*); #endif SQLITE_PRIVATE void sqlite3VdbeResetStepResult(Vdbe*); +SQLITE_PRIVATE void sqlite3VdbeRewind(Vdbe*); SQLITE_PRIVATE int sqlite3VdbeReset(Vdbe*); SQLITE_PRIVATE void sqlite3VdbeSetNumCols(Vdbe*,int); SQLITE_PRIVATE int sqlite3VdbeSetColName(Vdbe*, int, int, const char *, void(*)(void*)); SQLITE_PRIVATE void sqlite3VdbeCountChanges(Vdbe*); SQLITE_PRIVATE sqlite3 *sqlite3VdbeDb(Vdbe*); SQLITE_PRIVATE void sqlite3VdbeSetSql(Vdbe*, const char *z, int n, int); SQLITE_PRIVATE void sqlite3VdbeSwap(Vdbe*,Vdbe*); SQLITE_PRIVATE VdbeOp *sqlite3VdbeTakeOpArray(Vdbe*, int*, int*); -SQLITE_PRIVATE void sqlite3VdbeProgramDelete(sqlite3 *, SubProgram *, int); -SQLITE_PRIVATE sqlite3_value *sqlite3VdbeGetValue(Vdbe*, int, u8); +SQLITE_PRIVATE sqlite3_value *sqlite3VdbeGetBoundValue(Vdbe*, int, u8); SQLITE_PRIVATE void sqlite3VdbeSetVarmask(Vdbe*, int); #ifndef SQLITE_OMIT_TRACE SQLITE_PRIVATE char *sqlite3VdbeExpandSql(Vdbe*, const char*); #endif +SQLITE_PRIVATE int sqlite3MemCompare(const Mem*, const Mem*, const CollSeq*); -SQLITE_PRIVATE UnpackedRecord *sqlite3VdbeRecordUnpack(KeyInfo*,int,const void*,char*,int); -SQLITE_PRIVATE void sqlite3VdbeDeleteUnpackedRecord(UnpackedRecord*); +SQLITE_PRIVATE void sqlite3VdbeRecordUnpack(KeyInfo*,int,const void*,UnpackedRecord*); SQLITE_PRIVATE int sqlite3VdbeRecordCompare(int,const void*,UnpackedRecord*); +SQLITE_PRIVATE int sqlite3VdbeRecordCompareWithSkip(int, const void *, UnpackedRecord *, int); +SQLITE_PRIVATE UnpackedRecord *sqlite3VdbeAllocUnpackedRecord(KeyInfo *, char *, int, char **); +typedef int (*RecordCompare)(int,const void*,UnpackedRecord*); +SQLITE_PRIVATE RecordCompare sqlite3VdbeFindCompare(UnpackedRecord*); -#ifndef NDEBUG +#ifndef SQLITE_OMIT_TRIGGER +SQLITE_PRIVATE void sqlite3VdbeLinkSubProgram(Vdbe *, SubProgram *); +#endif + +/* Use SQLITE_ENABLE_COMMENTS to enable generation of extra comments on +** each VDBE opcode. +** +** Use the SQLITE_ENABLE_MODULE_COMMENTS macro to see some extra no-op +** comments in VDBE programs that show key decision points in the code +** generator. +*/ +#ifdef SQLITE_ENABLE_EXPLAIN_COMMENTS SQLITE_PRIVATE void sqlite3VdbeComment(Vdbe*, const char*, ...); # define VdbeComment(X) sqlite3VdbeComment X SQLITE_PRIVATE void sqlite3VdbeNoopComment(Vdbe*, const char*, ...); # define VdbeNoopComment(X) sqlite3VdbeNoopComment X +# ifdef SQLITE_ENABLE_MODULE_COMMENTS +# define VdbeModuleComment(X) sqlite3VdbeNoopComment X +# else +# define VdbeModuleComment(X) +# endif #else # define VdbeComment(X) # define VdbeNoopComment(X) +# define VdbeModuleComment(X) +#endif + +/* +** The VdbeCoverage macros are used to set a coverage testing point +** for VDBE branch instructions. The coverage testing points are line +** numbers in the sqlite3.c source file. VDBE branch coverage testing +** only works with an amalagmation build. That's ok since a VDBE branch +** coverage build designed for testing the test suite only. No application +** should ever ship with VDBE branch coverage measuring turned on. 
+** +** VdbeCoverage(v) // Mark the previously coded instruction +** // as a branch +** +** VdbeCoverageIf(v, conditional) // Mark previous if conditional true +** +** VdbeCoverageAlwaysTaken(v) // Previous branch is always taken +** +** VdbeCoverageNeverTaken(v) // Previous branch is never taken +** +** Every VDBE branch operation must be tagged with one of the macros above. +** If not, then when "make test" is run with -DSQLITE_VDBE_COVERAGE and +** -DSQLITE_DEBUG then an ALWAYS() will fail in the vdbeTakeBranch() +** routine in vdbe.c, alerting the developer to the missed tag. +*/ +#ifdef SQLITE_VDBE_COVERAGE +SQLITE_PRIVATE void sqlite3VdbeSetLineNumber(Vdbe*,int); +# define VdbeCoverage(v) sqlite3VdbeSetLineNumber(v,__LINE__) +# define VdbeCoverageIf(v,x) if(x)sqlite3VdbeSetLineNumber(v,__LINE__) +# define VdbeCoverageAlwaysTaken(v) sqlite3VdbeSetLineNumber(v,2); +# define VdbeCoverageNeverTaken(v) sqlite3VdbeSetLineNumber(v,1); +# define VDBE_OFFSET_LINENO(x) (__LINE__+x) +#else +# define VdbeCoverage(v) +# define VdbeCoverageIf(v,x) +# define VdbeCoverageAlwaysTaken(v) +# define VdbeCoverageNeverTaken(v) +# define VDBE_OFFSET_LINENO(x) 0 +#endif + +#ifdef SQLITE_ENABLE_STMT_SCANSTATUS +SQLITE_PRIVATE void sqlite3VdbeScanStatus(Vdbe*, int, int, int, LogEst, const char*); +#else +# define sqlite3VdbeScanStatus(a,b,c,d,e) #endif #endif /************** End of vdbe.h ************************************************/ @@ -7586,28 +11184,48 @@ ** Allowed values for the flags parameter to sqlite3PagerOpen(). ** ** NOTE: These values must match the corresponding BTREE_ values in btree.h. */ #define PAGER_OMIT_JOURNAL 0x0001 /* Do not use a rollback journal */ -#define PAGER_NO_READLOCK 0x0002 /* Omit readlocks on readonly files */ +#define PAGER_MEMORY 0x0002 /* In-memory database */ /* ** Valid values for the second argument to sqlite3PagerLockingMode(). */ #define PAGER_LOCKINGMODE_QUERY -1 #define PAGER_LOCKINGMODE_NORMAL 0 #define PAGER_LOCKINGMODE_EXCLUSIVE 1 /* -** Valid values for the second argument to sqlite3PagerJournalMode(). +** Numeric constants that encode the journalmode. */ -#define PAGER_JOURNALMODE_QUERY -1 +#define PAGER_JOURNALMODE_QUERY (-1) /* Query the value of journalmode */ #define PAGER_JOURNALMODE_DELETE 0 /* Commit by deleting journal file */ #define PAGER_JOURNALMODE_PERSIST 1 /* Commit by zeroing journal header */ #define PAGER_JOURNALMODE_OFF 2 /* Journal omitted. */ #define PAGER_JOURNALMODE_TRUNCATE 3 /* Commit by truncating journal */ #define PAGER_JOURNALMODE_MEMORY 4 /* In-memory journal file */ +#define PAGER_JOURNALMODE_WAL 5 /* Use write-ahead logging */ + +/* +** Flags that make up the mask passed to sqlite3PagerGet(). 
+*/ +#define PAGER_GET_NOCONTENT 0x01 /* Do not load data from disk */ +#define PAGER_GET_READONLY 0x02 /* Read-only page is acceptable */ + +/* +** Flags for sqlite3PagerSetFlags() +*/ +#define PAGER_SYNCHRONOUS_OFF 0x01 /* PRAGMA synchronous=OFF */ +#define PAGER_SYNCHRONOUS_NORMAL 0x02 /* PRAGMA synchronous=NORMAL */ +#define PAGER_SYNCHRONOUS_FULL 0x03 /* PRAGMA synchronous=FULL */ +#define PAGER_SYNCHRONOUS_EXTRA 0x04 /* PRAGMA synchronous=EXTRA */ +#define PAGER_SYNCHRONOUS_MASK 0x07 /* Mask for four values above */ +#define PAGER_FULLFSYNC 0x08 /* PRAGMA fullfsync=ON */ +#define PAGER_CKPT_FULLFSYNC 0x10 /* PRAGMA checkpoint_fullfsync=ON */ +#define PAGER_CACHESPILL 0x20 /* PRAGMA cache_spill=ON */ +#define PAGER_FLAGS_MASK 0x38 /* All above except SYNCHRONOUS */ /* ** The remainder of this file contains the declarations of the functions ** that make up the Pager sub-system API. See source code comments for ** a detailed description of each routine. @@ -7626,25 +11244,34 @@ SQLITE_PRIVATE int sqlite3PagerClose(Pager *pPager); SQLITE_PRIVATE int sqlite3PagerReadFileheader(Pager*, int, unsigned char*); /* Functions used to configure a Pager object. */ SQLITE_PRIVATE void sqlite3PagerSetBusyhandler(Pager*, int(*)(void *), void *); -SQLITE_PRIVATE int sqlite3PagerSetPagesize(Pager*, u16*, int); +SQLITE_PRIVATE int sqlite3PagerSetPagesize(Pager*, u32*, int); +#ifdef SQLITE_HAS_CODEC +SQLITE_PRIVATE void sqlite3PagerAlignReserve(Pager*,Pager*); +#endif SQLITE_PRIVATE int sqlite3PagerMaxPageCount(Pager*, int); SQLITE_PRIVATE void sqlite3PagerSetCachesize(Pager*, int); -SQLITE_PRIVATE void sqlite3PagerSetSafetyLevel(Pager*,int,int); +SQLITE_PRIVATE int sqlite3PagerSetSpillsize(Pager*, int); +SQLITE_PRIVATE void sqlite3PagerSetMmapLimit(Pager *, sqlite3_int64); +SQLITE_PRIVATE void sqlite3PagerShrink(Pager*); +SQLITE_PRIVATE void sqlite3PagerSetFlags(Pager*,unsigned); SQLITE_PRIVATE int sqlite3PagerLockingMode(Pager *, int); -SQLITE_PRIVATE int sqlite3PagerJournalMode(Pager *, int); +SQLITE_PRIVATE int sqlite3PagerSetJournalMode(Pager *, int); +SQLITE_PRIVATE int sqlite3PagerGetJournalMode(Pager*); +SQLITE_PRIVATE int sqlite3PagerOkToChangeJournalMode(Pager*); SQLITE_PRIVATE i64 sqlite3PagerJournalSizeLimit(Pager *, i64); SQLITE_PRIVATE sqlite3_backup **sqlite3PagerBackupPtr(Pager*); +SQLITE_PRIVATE int sqlite3PagerFlush(Pager*); /* Functions used to obtain and release page references. */ -SQLITE_PRIVATE int sqlite3PagerAcquire(Pager *pPager, Pgno pgno, DbPage **ppPage, int clrFlag); -#define sqlite3PagerGet(A,B,C) sqlite3PagerAcquire(A,B,C,0) +SQLITE_PRIVATE int sqlite3PagerGet(Pager *pPager, Pgno pgno, DbPage **ppPage, int clrFlag); SQLITE_PRIVATE DbPage *sqlite3PagerLookup(Pager *pPager, Pgno pgno); SQLITE_PRIVATE void sqlite3PagerRef(DbPage*); SQLITE_PRIVATE void sqlite3PagerUnref(DbPage*); +SQLITE_PRIVATE void sqlite3PagerUnrefNotNull(DbPage*); /* Operations on page references. */ SQLITE_PRIVATE int sqlite3PagerWrite(DbPage*); SQLITE_PRIVATE void sqlite3PagerDontWrite(DbPage*); SQLITE_PRIVATE int sqlite3PagerMovepage(Pager*,DbPage*,Pgno,int); @@ -7651,34 +11278,64 @@ SQLITE_PRIVATE int sqlite3PagerPageRefcount(DbPage*); SQLITE_PRIVATE void *sqlite3PagerGetData(DbPage *); SQLITE_PRIVATE void *sqlite3PagerGetExtra(DbPage *); /* Functions used to manage pager transactions and savepoints. 
*/ -SQLITE_PRIVATE int sqlite3PagerPagecount(Pager*, int*); +SQLITE_PRIVATE void sqlite3PagerPagecount(Pager*, int*); SQLITE_PRIVATE int sqlite3PagerBegin(Pager*, int exFlag, int); SQLITE_PRIVATE int sqlite3PagerCommitPhaseOne(Pager*,const char *zMaster, int); -SQLITE_PRIVATE int sqlite3PagerSync(Pager *pPager); +SQLITE_PRIVATE int sqlite3PagerExclusiveLock(Pager*); +SQLITE_PRIVATE int sqlite3PagerSync(Pager *pPager, const char *zMaster); SQLITE_PRIVATE int sqlite3PagerCommitPhaseTwo(Pager*); SQLITE_PRIVATE int sqlite3PagerRollback(Pager*); SQLITE_PRIVATE int sqlite3PagerOpenSavepoint(Pager *pPager, int n); SQLITE_PRIVATE int sqlite3PagerSavepoint(Pager *pPager, int op, int iSavepoint); SQLITE_PRIVATE int sqlite3PagerSharedLock(Pager *pPager); + +#ifndef SQLITE_OMIT_WAL +SQLITE_PRIVATE int sqlite3PagerCheckpoint(Pager *pPager, int, int*, int*); +SQLITE_PRIVATE int sqlite3PagerWalSupported(Pager *pPager); +SQLITE_PRIVATE int sqlite3PagerWalCallback(Pager *pPager); +SQLITE_PRIVATE int sqlite3PagerOpenWal(Pager *pPager, int *pisOpen); +SQLITE_PRIVATE int sqlite3PagerCloseWal(Pager *pPager); +# ifdef SQLITE_ENABLE_SNAPSHOT +SQLITE_PRIVATE int sqlite3PagerSnapshotGet(Pager *pPager, sqlite3_snapshot **ppSnapshot); +SQLITE_PRIVATE int sqlite3PagerSnapshotOpen(Pager *pPager, sqlite3_snapshot *pSnapshot); +# endif +#endif + +#ifdef SQLITE_ENABLE_ZIPVFS +SQLITE_PRIVATE int sqlite3PagerWalFramesize(Pager *pPager); +#endif /* Functions used to query pager state and configuration. */ SQLITE_PRIVATE u8 sqlite3PagerIsreadonly(Pager*); -SQLITE_PRIVATE int sqlite3PagerRefcount(Pager*); +SQLITE_PRIVATE u32 sqlite3PagerDataVersion(Pager*); +#ifdef SQLITE_DEBUG +SQLITE_PRIVATE int sqlite3PagerRefcount(Pager*); +#endif SQLITE_PRIVATE int sqlite3PagerMemUsed(Pager*); -SQLITE_PRIVATE const char *sqlite3PagerFilename(Pager*); -SQLITE_PRIVATE const sqlite3_vfs *sqlite3PagerVfs(Pager*); +SQLITE_PRIVATE const char *sqlite3PagerFilename(Pager*, int); +SQLITE_PRIVATE sqlite3_vfs *sqlite3PagerVfs(Pager*); SQLITE_PRIVATE sqlite3_file *sqlite3PagerFile(Pager*); +SQLITE_PRIVATE sqlite3_file *sqlite3PagerJrnlFile(Pager*); SQLITE_PRIVATE const char *sqlite3PagerJournalname(Pager*); SQLITE_PRIVATE int sqlite3PagerNosync(Pager*); SQLITE_PRIVATE void *sqlite3PagerTempSpace(Pager*); SQLITE_PRIVATE int sqlite3PagerIsMemdb(Pager*); +SQLITE_PRIVATE void sqlite3PagerCacheStat(Pager *, int, int, int *); +SQLITE_PRIVATE void sqlite3PagerClearCache(Pager *); +SQLITE_PRIVATE int sqlite3SectorSize(sqlite3_file *); /* Functions used to truncate the database file. */ SQLITE_PRIVATE void sqlite3PagerTruncateImage(Pager*,Pgno); + +SQLITE_PRIVATE void sqlite3PagerRekey(DbPage*, Pgno, u16); + +#if defined(SQLITE_HAS_CODEC) && !defined(SQLITE_OMIT_WAL) +SQLITE_PRIVATE void *sqlite3PagerCodec(DbPage *); +#endif /* Functions to support testing and debugging. */ #if !defined(NDEBUG) || defined(SQLITE_TEST) SQLITE_PRIVATE Pgno sqlite3PagerPagenumber(DbPage*); SQLITE_PRIVATE int sqlite3PagerIswriteable(DbPage*); @@ -7722,15 +11379,16 @@ /* ** Every page in the cache is controlled by an instance of the following ** structure. 
*/ struct PgHdr { - void *pData; /* Content of this page */ + sqlite3_pcache_page *pPage; /* Pcache object page handle */ + void *pData; /* Page data */ void *pExtra; /* Extra content */ PgHdr *pDirty; /* Transient list of dirty pages */ - Pgno pgno; /* Page number for this page */ Pager *pPager; /* The pager this page is part of */ + Pgno pgno; /* Page number for this page */ #ifdef SQLITE_CHECK_PAGES u32 pageHash; /* Hash of page content */ #endif u16 flags; /* PGHDR flags defined below */ @@ -7744,16 +11402,20 @@ PgHdr *pDirtyNext; /* Next element in list of dirty pages */ PgHdr *pDirtyPrev; /* Previous element in list of dirty pages */ }; /* Bit values for PgHdr.flags */ -#define PGHDR_DIRTY 0x002 /* Page has changed */ -#define PGHDR_NEED_SYNC 0x004 /* Fsync the rollback journal before - ** writing this page to the database */ -#define PGHDR_NEED_READ 0x008 /* Content is unread */ -#define PGHDR_REUSE_UNLIKELY 0x010 /* A hint that reuse is unlikely */ -#define PGHDR_DONT_WRITE 0x020 /* Do not write content to disk */ +#define PGHDR_CLEAN 0x001 /* Page not on the PCache.pDirty list */ +#define PGHDR_DIRTY 0x002 /* Page is on the PCache.pDirty list */ +#define PGHDR_WRITEABLE 0x004 /* Journaled and ready to modify */ +#define PGHDR_NEED_SYNC 0x008 /* Fsync the rollback journal before + ** writing this page to the database */ +#define PGHDR_NEED_READ 0x010 /* Content is unread */ +#define PGHDR_DONT_WRITE 0x020 /* Do not write content to disk */ +#define PGHDR_MMAP 0x040 /* This is an mmap page object */ + +#define PGHDR_WAL_APPEND 0x080 /* Appended to wal file */ /* Initialize and shutdown the page cache subsystem */ SQLITE_PRIVATE int sqlite3PcacheInitialize(void); SQLITE_PRIVATE void sqlite3PcacheShutdown(void); @@ -7764,31 +11426,33 @@ /* Create a new pager cache. ** Under memory stress, invoke xStress to try to make pages clean. ** Only clean and unpinned pages can be reclaimed. */ -SQLITE_PRIVATE void sqlite3PcacheOpen( +SQLITE_PRIVATE int sqlite3PcacheOpen( int szPage, /* Size of every page */ int szExtra, /* Extra space associated with each page */ int bPurgeable, /* True if pages are on backing store */ int (*xStress)(void*, PgHdr*), /* Call to try to make pages clean */ void *pStress, /* Argument to xStress */ PCache *pToInit /* Preallocated space for the PCache */ ); /* Modify the page-size after the cache has been created. */ -SQLITE_PRIVATE void sqlite3PcacheSetPageSize(PCache *, int); +SQLITE_PRIVATE int sqlite3PcacheSetPageSize(PCache *, int); /* Return the size in bytes of a PCache object. Used to preallocate ** storage space. */ SQLITE_PRIVATE int sqlite3PcacheSize(void); /* One release per successful fetch. Page is pinned until released. ** Reference counted. 
*/ -SQLITE_PRIVATE int sqlite3PcacheFetch(PCache*, Pgno, int createFlag, PgHdr**); +SQLITE_PRIVATE sqlite3_pcache_page *sqlite3PcacheFetch(PCache*, Pgno, int createFlag); +SQLITE_PRIVATE int sqlite3PcacheFetchStress(PCache*, Pgno, sqlite3_pcache_page**); +SQLITE_PRIVATE PgHdr *sqlite3PcacheFetchFinish(PCache*, Pgno, sqlite3_pcache_page *pPage); SQLITE_PRIVATE void sqlite3PcacheRelease(PgHdr*); SQLITE_PRIVATE void sqlite3PcacheDrop(PgHdr*); /* Remove page from cache */ SQLITE_PRIVATE void sqlite3PcacheMakeDirty(PgHdr*); /* Make sure page is marked dirty */ SQLITE_PRIVATE void sqlite3PcacheMakeClean(PgHdr*); /* Mark a single page as clean */ @@ -7840,10 +11504,20 @@ SQLITE_PRIVATE void sqlite3PcacheSetCachesize(PCache *, int); #ifdef SQLITE_TEST SQLITE_PRIVATE int sqlite3PcacheGetCachesize(PCache *); #endif +/* Set or get the suggested spill-size for the specified pager-cache. +** +** The spill-size is the minimum number of pages in cache before the cache +** will attempt to spill dirty pages by calling xStress. +*/ +SQLITE_PRIVATE int sqlite3PcacheSetSpillsize(PCache *, int); + +/* Free up as much memory as possible from the page cache */ +SQLITE_PRIVATE void sqlite3PcacheShrink(PCache*); + #ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT /* Try to return memory used by the pcache module to the main memory heap */ SQLITE_PRIVATE int sqlite3PcacheReleaseMemory(int); #endif @@ -7851,10 +11525,14 @@ SQLITE_PRIVATE void sqlite3PcacheStats(int*,int*,int*,int*); #endif SQLITE_PRIVATE void sqlite3PCacheSetDefault(void); +/* Return the header size */ +SQLITE_PRIVATE int sqlite3HeaderSizePcache(void); +SQLITE_PRIVATE int sqlite3HeaderSizePcache1(void); + #endif /* _PCACHE_H_ */ /************** End of pcache.h **********************************************/ /************** Continuing where we left off in sqliteInt.h ******************/ @@ -7881,88 +11559,75 @@ */ #ifndef _SQLITE_OS_H_ #define _SQLITE_OS_H_ /* -** Figure out if we are dealing with Unix, Windows, or some other -** operating system. After the following block of preprocess macros, -** all of SQLITE_OS_UNIX, SQLITE_OS_WIN, SQLITE_OS_OS2, and SQLITE_OS_OTHER -** will defined to either 1 or 0. One of the four will be 1. The other -** three will be 0. +** Attempt to automatically detect the operating system and setup the +** necessary pre-processor macros for it. +*/ +/************** Include os_setup.h in the middle of os.h *********************/ +/************** Begin file os_setup.h ****************************************/ +/* +** 2013 November 25 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This file contains pre-processor directives related to operating system +** detection and/or setup. +*/ +#ifndef _OS_SETUP_H_ +#define _OS_SETUP_H_ + +/* +** Figure out if we are dealing with Unix, Windows, or some other operating +** system. +** +** After the following block of preprocess macros, all of SQLITE_OS_UNIX, +** SQLITE_OS_WIN, and SQLITE_OS_OTHER will defined to either 1 or 0. One of +** the three will be 1. The other two will be 0. 
*/ #if defined(SQLITE_OS_OTHER) -# if SQLITE_OS_OTHER==1 -# undef SQLITE_OS_UNIX -# define SQLITE_OS_UNIX 0 -# undef SQLITE_OS_WIN -# define SQLITE_OS_WIN 0 -# undef SQLITE_OS_OS2 -# define SQLITE_OS_OS2 0 -# else -# undef SQLITE_OS_OTHER -# endif +# if SQLITE_OS_OTHER==1 +# undef SQLITE_OS_UNIX +# define SQLITE_OS_UNIX 0 +# undef SQLITE_OS_WIN +# define SQLITE_OS_WIN 0 +# else +# undef SQLITE_OS_OTHER +# endif #endif #if !defined(SQLITE_OS_UNIX) && !defined(SQLITE_OS_OTHER) -# define SQLITE_OS_OTHER 0 -# ifndef SQLITE_OS_WIN -# if defined(_WIN32) || defined(WIN32) || defined(__CYGWIN__) || defined(__MINGW32__) || defined(__BORLANDC__) -# define SQLITE_OS_WIN 1 -# define SQLITE_OS_UNIX 0 -# define SQLITE_OS_OS2 0 -# elif defined(__EMX__) || defined(_OS2) || defined(OS2) || defined(_OS2_) || defined(__OS2__) -# define SQLITE_OS_WIN 0 -# define SQLITE_OS_UNIX 0 -# define SQLITE_OS_OS2 1 -# else -# define SQLITE_OS_WIN 0 -# define SQLITE_OS_UNIX 1 -# define SQLITE_OS_OS2 0 +# define SQLITE_OS_OTHER 0 +# ifndef SQLITE_OS_WIN +# if defined(_WIN32) || defined(WIN32) || defined(__CYGWIN__) || \ + defined(__MINGW32__) || defined(__BORLANDC__) +# define SQLITE_OS_WIN 1 +# define SQLITE_OS_UNIX 0 +# else +# define SQLITE_OS_WIN 0 +# define SQLITE_OS_UNIX 1 +# endif +# else +# define SQLITE_OS_UNIX 0 +# endif +#else +# ifndef SQLITE_OS_WIN +# define SQLITE_OS_WIN 0 # endif -# else -# define SQLITE_OS_UNIX 0 -# define SQLITE_OS_OS2 0 -# endif -#else -# ifndef SQLITE_OS_WIN -# define SQLITE_OS_WIN 0 -# endif -#endif - -/* -** Determine if we are dealing with WindowsCE - which has a much -** reduced API. -*/ -#if defined(_WIN32_WCE) -# define SQLITE_OS_WINCE 1 -#else -# define SQLITE_OS_WINCE 0 -#endif - - -/* -** Define the maximum size of a temporary filename -*/ -#if SQLITE_OS_WIN -# include <windows.h> -# define SQLITE_TEMPNAME_SIZE (MAX_PATH+50) -#elif SQLITE_OS_OS2 -# if (__GNUC__ > 3 || __GNUC__ == 3 && __GNUC_MINOR__ >= 3) && defined(OS2_HIGH_MEMORY) -# include <os2safe.h> /* has to be included before os2.h for linking to work */ -# endif -# define INCL_DOSDATETIME -# define INCL_DOSFILEMGR -# define INCL_DOSERRORS -# define INCL_DOSMISC -# define INCL_DOSPROCESS -# define INCL_DOSMODULEMGR -# define INCL_DOSSEMAPHORES -# include <os2.h> -# include <uconv.h> -# define SQLITE_TEMPNAME_SIZE (CCHMAXPATHCOMP) -#else -# define SQLITE_TEMPNAME_SIZE 200 -#endif +#endif + +#endif /* _OS_SETUP_H_ */ + +/************** End of os_setup.h ********************************************/ +/************** Continuing where we left off in os.h *************************/ /* If the SET_FULLSYNC macro is not defined above, then make it ** a no-op */ #ifndef SET_FULLSYNC @@ -7971,11 +11636,11 @@ /* ** The default size of a disk sector */ #ifndef SQLITE_DEFAULT_SECTOR_SIZE -# define SQLITE_DEFAULT_SECTOR_SIZE 512 +# define SQLITE_DEFAULT_SECTOR_SIZE 4096 #endif /* ** Temporary files are named starting with this prefix followed by 16 random ** alphanumeric characters, and no file extension. They are stored in the @@ -8054,11 +11719,11 @@ ** SHARED_SIZE is the number of bytes available in the pool from which ** a random byte is selected for a shared lock. The pool of bytes for ** shared locks begins at SHARED_FIRST. ** ** The same locking strategy and -** byte ranges are used for Unix. This leaves open the possiblity of having +** byte ranges are used for Unix. This leaves open the possibility of having ** clients on win95, winNT, and unix all talking to the same shared file ** and all locking correctly. 
To do so would require that samba (or whatever ** tool is being used for file sharing) implements locks correctly between ** windows and unix. I'm guessing that isn't likely to happen, but by ** using the same locking range we are at least open to the possibility. @@ -8077,11 +11742,15 @@ ** the incompatibility right away, even running a full regression test. ** The default location of PENDING_BYTE is the first byte past the ** 1GB boundary. ** */ -#define PENDING_BYTE sqlite3PendingByte +#ifdef SQLITE_OMIT_WSD +# define PENDING_BYTE (0x40000000) +#else +# define PENDING_BYTE sqlite3PendingByte +#endif #define RESERVED_BYTE (PENDING_BYTE+1) #define SHARED_FIRST (PENDING_BYTE+2) #define SHARED_SIZE 510 /* @@ -8100,13 +11769,21 @@ SQLITE_PRIVATE int sqlite3OsFileSize(sqlite3_file*, i64 *pSize); SQLITE_PRIVATE int sqlite3OsLock(sqlite3_file*, int); SQLITE_PRIVATE int sqlite3OsUnlock(sqlite3_file*, int); SQLITE_PRIVATE int sqlite3OsCheckReservedLock(sqlite3_file *id, int *pResOut); SQLITE_PRIVATE int sqlite3OsFileControl(sqlite3_file*,int,void*); +SQLITE_PRIVATE void sqlite3OsFileControlHint(sqlite3_file*,int,void*); #define SQLITE_FCNTL_DB_UNCHANGED 0xca093fa0 SQLITE_PRIVATE int sqlite3OsSectorSize(sqlite3_file *id); SQLITE_PRIVATE int sqlite3OsDeviceCharacteristics(sqlite3_file *id); +SQLITE_PRIVATE int sqlite3OsShmMap(sqlite3_file *,int,int,int,void volatile **); +SQLITE_PRIVATE int sqlite3OsShmLock(sqlite3_file *id, int, int, int); +SQLITE_PRIVATE void sqlite3OsShmBarrier(sqlite3_file *id); +SQLITE_PRIVATE int sqlite3OsShmUnmap(sqlite3_file *id, int); +SQLITE_PRIVATE int sqlite3OsFetch(sqlite3_file *id, i64, int, void **); +SQLITE_PRIVATE int sqlite3OsUnfetch(sqlite3_file *, i64, void *); + /* ** Functions for accessing sqlite3_vfs methods */ SQLITE_PRIVATE int sqlite3OsOpen(sqlite3_vfs *, const char *, sqlite3_file*, int, int *); @@ -8119,11 +11796,11 @@ SQLITE_PRIVATE void (*sqlite3OsDlSym(sqlite3_vfs *, void *, const char *))(void); SQLITE_PRIVATE void sqlite3OsDlClose(sqlite3_vfs *, void *); #endif /* SQLITE_OMIT_LOAD_EXTENSION */ SQLITE_PRIVATE int sqlite3OsRandomness(sqlite3_vfs *, int, char *); SQLITE_PRIVATE int sqlite3OsSleep(sqlite3_vfs *, int); -SQLITE_PRIVATE int sqlite3OsCurrentTime(sqlite3_vfs *, double*); +SQLITE_PRIVATE int sqlite3OsCurrentTimeInt64(sqlite3_vfs *, sqlite3_int64*); /* ** Convenience functions for opening and closing files using ** sqlite3_malloc() to obtain space for the file-handle structure. */ @@ -8161,11 +11838,11 @@ /* ** Figure out what version of the code to use. The choices are ** ** SQLITE_MUTEX_OMIT No mutex logic. Not even stubs. The -** mutexes implemention cannot be overridden +** mutexes implementation cannot be overridden ** at start-time. ** ** SQLITE_MUTEX_NOOP For single-threaded applications. No ** mutual exclusion is provided. But this ** implementation can be overridden at @@ -8172,23 +11849,19 @@ ** start-time. ** ** SQLITE_MUTEX_PTHREADS For multi-threaded applications on Unix. ** ** SQLITE_MUTEX_W32 For multi-threaded applications on Win32. -** -** SQLITE_MUTEX_OS2 For multi-threaded applications on OS/2. */ #if !SQLITE_THREADSAFE # define SQLITE_MUTEX_OMIT #endif #if SQLITE_THREADSAFE && !defined(SQLITE_MUTEX_NOOP) # if SQLITE_OS_UNIX # define SQLITE_MUTEX_PTHREADS # elif SQLITE_OS_WIN # define SQLITE_MUTEX_W32 -# elif SQLITE_OS_OS2 -# define SQLITE_MUTEX_OS2 # else # define SQLITE_MUTEX_NOOP # endif #endif @@ -8196,18 +11869,21 @@ /* ** If this is a no-op implementation, implement everything as macros. 
*/ #define sqlite3_mutex_alloc(X) ((sqlite3_mutex*)8) #define sqlite3_mutex_free(X) -#define sqlite3_mutex_enter(X) +#define sqlite3_mutex_enter(X) #define sqlite3_mutex_try(X) SQLITE_OK -#define sqlite3_mutex_leave(X) -#define sqlite3_mutex_held(X) 1 -#define sqlite3_mutex_notheld(X) 1 +#define sqlite3_mutex_leave(X) +#define sqlite3_mutex_held(X) ((void)(X),1) +#define sqlite3_mutex_notheld(X) ((void)(X),1) #define sqlite3MutexAlloc(X) ((sqlite3_mutex*)8) #define sqlite3MutexInit() SQLITE_OK #define sqlite3MutexEnd() +#define MUTEX_LOGIC(X) +#else +#define MUTEX_LOGIC(X) X #endif /* defined(SQLITE_MUTEX_OMIT) */ /************** End of mutex.h ***********************************************/ /************** Continuing where we left off in sqliteInt.h ******************/ @@ -8220,50 +11896,53 @@ ** databases may be attached. */ struct Db { char *zName; /* Name of this database */ Btree *pBt; /* The B*Tree structure for this database file */ - u8 inTrans; /* 0: not writable. 1: Transaction. 2: Checkpoint */ u8 safety_level; /* How aggressive at syncing data to disk */ Schema *pSchema; /* Pointer to database schema (possibly shared) */ }; /* ** An instance of the following structure stores a database schema. ** -** If there are no virtual tables configured in this schema, the -** Schema.db variable is set to NULL. After the first virtual table -** has been added, it is set to point to the database connection -** used to create the connection. Once a virtual table has been -** added to the Schema structure and the Schema.db variable populated, -** only that database connection may use the Schema to prepare -** statements. +** Most Schema objects are associated with a Btree. The exception is +** the Schema for the TEMP databaes (sqlite3.aDb[1]) which is free-standing. +** In shared cache mode, a single Schema object can be shared by multiple +** Btrees that refer to the same underlying BtShared object. +** +** Schema objects are automatically deallocated when the last Btree that +** references them is destroyed. The TEMP Schema is manually freed by +** sqlite3_close(). +* +** A thread must be holding a mutex on the corresponding Btree in order +** to access Schema content. This implies that the thread must also be +** holding a mutex on the sqlite3 connection pointer that owns the Btree. +** For a TEMP Schema, only the connection mutex is required. */ struct Schema { int schema_cookie; /* Database schema version number for this file */ + int iGeneration; /* Generation counter. Incremented with each change */ Hash tblHash; /* All tables indexed by name */ Hash idxHash; /* All (named) indices indexed by name */ Hash trigHash; /* All triggers indexed by name */ Hash fkeyHash; /* All foreign keys by referenced table name */ Table *pSeqTab; /* The sqlite_sequence table used by AUTOINCREMENT */ u8 file_format; /* Schema format version for this file */ u8 enc; /* Text encoding used by this database */ - u16 flags; /* Flags associated with this schema */ + u16 schemaFlags; /* Flags associated with this schema */ int cache_size; /* Number of pages to use in the cache */ -#ifndef SQLITE_OMIT_VIRTUALTABLE - sqlite3 *db; /* "Owner" connection. See comment above */ -#endif }; /* ** These macros can be used to test, set, or clear bits in the ** Db.pSchema->flags field. 
*/ -#define DbHasProperty(D,I,P) (((D)->aDb[I].pSchema->flags&(P))==(P)) -#define DbHasAnyProperty(D,I,P) (((D)->aDb[I].pSchema->flags&(P))!=0) -#define DbSetProperty(D,I,P) (D)->aDb[I].pSchema->flags|=(P) -#define DbClearProperty(D,I,P) (D)->aDb[I].pSchema->flags&=~(P) +#define DbHasProperty(D,I,P) (((D)->aDb[I].pSchema->schemaFlags&(P))==(P)) +#define DbHasAnyProperty(D,I,P) (((D)->aDb[I].pSchema->schemaFlags&(P))!=0) +#define DbSetProperty(D,I,P) (D)->aDb[I].pSchema->schemaFlags|=(P) +#define DbClearProperty(D,I,P) (D)->aDb[I].pSchema->schemaFlags&=~(P) /* ** Allowed values for the DB.pSchema->flags field. ** ** The DB_SchemaLoaded flag is set after the database schema has been @@ -8279,11 +11958,11 @@ /* ** The number of different kinds of things that can be limited ** using the sqlite3_limit() interface. */ -#define SQLITE_N_LIMIT (SQLITE_LIMIT_TRIGGER_DEPTH+1) +#define SQLITE_N_LIMIT (SQLITE_LIMIT_WORKER_THREADS+1) /* ** Lookaside malloc is a set of fixed-size buffers that can be used ** to satisfy small transient memory allocation requests for objects ** associated with a particular database connection. The use of @@ -8302,15 +11981,16 @@ ** is shared by multiple database connections. Therefore, while parsing ** schema information, the Lookaside.bEnabled flag is cleared so that ** lookaside allocations are not used to construct the schema objects. */ struct Lookaside { + u32 bDisable; /* Only operate the lookaside when zero */ u16 sz; /* Size of each buffer in bytes */ - u8 bEnabled; /* False to disable new lookaside allocations */ u8 bMalloced; /* True if pStart obtained from sqlite3_malloc() */ int nOut; /* Number of buffers currently checked out */ int mxOut; /* Highwater mark for nOut */ + int anStat[3]; /* 0: hits. 1: size misses. 2: full misses */ LookasideSlot *pFree; /* List of available buffers */ void *pStart; /* First byte of available memory space */ void *pEnd; /* First byte past end of available space */ }; struct LookasideSlot { @@ -8324,72 +12004,97 @@ ** Collisions are on the FuncDef.pHash chain. */ struct FuncDefHash { FuncDef *a[23]; /* Hash table for functions */ }; + +#ifdef SQLITE_USER_AUTHENTICATION +/* +** Information held in the "sqlite3" database connection object and used +** to manage user authentication. +*/ +typedef struct sqlite3_userauth sqlite3_userauth; +struct sqlite3_userauth { + u8 authLevel; /* Current authentication level */ + int nAuthPW; /* Size of the zAuthPW in bytes */ + char *zAuthPW; /* Password used to authenticate */ + char *zAuthUser; /* User name used to authenticate */ +}; + +/* Allowed values for sqlite3_userauth.authLevel */ +#define UAUTH_Unknown 0 /* Authentication not yet checked */ +#define UAUTH_Fail 1 /* User authentication failed */ +#define UAUTH_User 2 /* Authenticated as a normal user */ +#define UAUTH_Admin 3 /* Authenticated as an administrator */ + +/* Functions used only by user authorization logic */ +SQLITE_PRIVATE int sqlite3UserAuthTable(const char*); +SQLITE_PRIVATE int sqlite3UserAuthCheckLogin(sqlite3*,const char*,u8*); +SQLITE_PRIVATE void sqlite3UserAuthInit(sqlite3*); +SQLITE_PRIVATE void sqlite3CryptFunc(sqlite3_context*,int,sqlite3_value**); + +#endif /* SQLITE_USER_AUTHENTICATION */ + +/* +** typedef for the authorization callback function. 
+*/ +#ifdef SQLITE_USER_AUTHENTICATION + typedef int (*sqlite3_xauth)(void*,int,const char*,const char*,const char*, + const char*, const char*); +#else + typedef int (*sqlite3_xauth)(void*,int,const char*,const char*,const char*, + const char*); +#endif + /* ** Each database connection is an instance of the following structure. -** -** The sqlite.lastRowid records the last insert rowid generated by an -** insert statement. Inserts on views do not affect its value. Each -** trigger has its own context, so that lastRowid can be updated inside -** triggers as usual. The previous value will be restored once the trigger -** exits. Upon entering a before or instead of trigger, lastRowid is no -** longer (since after version 2.8.12) reset to -1. -** -** The sqlite.nChange does not count changes within triggers and keeps no -** context. It is reset at start of sqlite3_exec. -** The sqlite.lsChange represents the number of changes made by the last -** insert, update, or delete statement. It remains constant throughout the -** length of a statement and is then updated by OP_SetCounts. It keeps a -** context stack just like lastRowid so that the count of changes -** within a trigger is not seen outside the trigger. Changes to views do not -** affect the value of lsChange. -** The sqlite.csChange keeps track of the number of current changes (since -** the last statement) and is used to update sqlite_lsChange. -** -** The member variables sqlite.errCode, sqlite.zErrMsg and sqlite.zErrMsg16 -** store the most recent error code and, if applicable, string. The -** internal function sqlite3Error() is used to set these variables -** consistently. */ struct sqlite3 { sqlite3_vfs *pVfs; /* OS Interface */ - int nDb; /* Number of backends currently in use */ + struct Vdbe *pVdbe; /* List of active virtual machines */ + CollSeq *pDfltColl; /* The default collating sequence (BINARY) */ + sqlite3_mutex *mutex; /* Connection mutex */ Db *aDb; /* All backends */ + int nDb; /* Number of backends currently in use */ int flags; /* Miscellaneous flags. See below */ - int openFlags; /* Flags passed to sqlite3_vfs.xOpen() */ + i64 lastRowid; /* ROWID of most recent insert (see above) */ + i64 szMmap; /* Default mmap_size setting */ + unsigned int openFlags; /* Flags passed to sqlite3_vfs.xOpen() */ int errCode; /* Most recent error code (SQLITE_*) */ int errMask; /* & result codes with this before returning */ + u16 dbOptFlags; /* Flags to enable/disable optimizations */ + u8 enc; /* Text encoding */ u8 autoCommit; /* The auto-commit flag. 
*/ u8 temp_store; /* 1: file 2: memory 0: default */ u8 mallocFailed; /* True if we have seen a malloc failure */ + u8 bBenignMalloc; /* Do not require OOMs if true */ u8 dfltLockMode; /* Default locking-mode for attached dbs */ - u8 dfltJournalMode; /* Default journal mode for attached dbs */ signed char nextAutovac; /* Autovac setting after VACUUM if >=0 */ u8 suppressErr; /* Do not issue error messages if true */ + u8 vtabOnConflict; /* Value to return for s3_vtab_on_conflict() */ + u8 isTransactionSavepoint; /* True if the outermost savepoint is a TS */ int nextPagesize; /* Pagesize after VACUUM if >0 */ - int nTable; /* Number of tables in the database */ - CollSeq *pDfltColl; /* The default collating sequence (BINARY) */ - i64 lastRowid; /* ROWID of most recent insert (see above) */ u32 magic; /* Magic number for detect library misuse */ int nChange; /* Value returned by sqlite3_changes() */ int nTotalChange; /* Value returned by sqlite3_total_changes() */ - sqlite3_mutex *mutex; /* Connection mutex */ int aLimit[SQLITE_N_LIMIT]; /* Limits */ + int nMaxSorterMmap; /* Maximum size of regions mapped by sorter */ struct sqlite3InitInfo { /* Information used during initialization */ - int iDb; /* When back is being initialized */ int newTnum; /* Rootpage of table being initialized */ + u8 iDb; /* Which db file is being initialized */ u8 busy; /* TRUE if currently initializing */ u8 orphanTrigger; /* Last statement is orphaned TEMP trigger */ + u8 imposterTable; /* Building an imposter table */ } init; + int nVdbeActive; /* Number of VDBEs currently running */ + int nVdbeRead; /* Number of active VDBEs that read or write */ + int nVdbeWrite; /* Number of active VDBEs that read and write */ + int nVdbeExec; /* Number of nested calls to VdbeExec() */ + int nVDestroy; /* Number of active OP_VDestroy operations */ int nExtension; /* Number of loaded extensions */ void **aExtension; /* Array of shared library handles */ - struct Vdbe *pVdbe; /* List of active virtual machines */ - int activeVdbeCnt; /* Number of VDBEs currently executing */ - int writeVdbeCnt; /* Number of active VDBEs that are writing */ void (*xTrace)(void*,const char*); /* Trace function */ void *pTraceArg; /* Argument to the trace function */ void (*xProfile)(void*,const char*,u64); /* Profiling function */ void *pProfileArg; /* Argument to profile function */ void *pCommitArg; /* Argument to xCommitCallback() */ @@ -8396,49 +12101,50 @@ int (*xCommitCallback)(void*); /* Invoked at every commit. */ void *pRollbackArg; /* Argument to xRollbackCallback() */ void (*xRollbackCallback)(void*); /* Invoked at every commit. 
*/ void *pUpdateArg; void (*xUpdateCallback)(void*,int, const char*,const char*,sqlite_int64); +#ifndef SQLITE_OMIT_WAL + int (*xWalCallback)(void *, sqlite3 *, const char *, int); + void *pWalArg; +#endif void(*xCollNeeded)(void*,sqlite3*,int eTextRep,const char*); void(*xCollNeeded16)(void*,sqlite3*,int eTextRep,const void*); void *pCollNeededArg; sqlite3_value *pErr; /* Most recent error message */ - char *zErrMsg; /* Most recent error message (UTF-8 encoded) */ - char *zErrMsg16; /* Most recent error message (UTF-16 encoded) */ union { volatile int isInterrupted; /* True if sqlite3_interrupt has been called */ double notUsed1; /* Spacer */ } u1; Lookaside lookaside; /* Lookaside malloc configuration */ #ifndef SQLITE_OMIT_AUTHORIZATION - int (*xAuth)(void*,int,const char*,const char*,const char*,const char*); - /* Access authorization function */ + sqlite3_xauth xAuth; /* Access authorization function */ void *pAuthArg; /* 1st argument to the access auth function */ #endif #ifndef SQLITE_OMIT_PROGRESS_CALLBACK int (*xProgress)(void *); /* The progress callback */ void *pProgressArg; /* Argument to the progress callback */ - int nProgressOps; /* Number of opcodes for progress callback */ + unsigned nProgressOps; /* Number of opcodes for progress callback */ #endif #ifndef SQLITE_OMIT_VIRTUALTABLE + int nVTrans; /* Allocated size of aVTrans */ Hash aModule; /* populated by sqlite3_create_module() */ - Table *pVTab; /* vtab with active Connect/Create method */ + VtabCtx *pVtabCtx; /* Context for active vtab connect/create */ VTable **aVTrans; /* Virtual tables with open transactions */ - int nVTrans; /* Allocated size of aVTrans */ VTable *pDisconnect; /* Disconnect these in next sqlite3_prepare() */ #endif FuncDefHash aFunc; /* Hash table of connection functions */ Hash aCollSeq; /* All collating sequences */ BusyHandler busyHandler; /* Busy callback */ - int busyTimeout; /* Busy handler timeout, in msec */ Db aDbStatic[2]; /* Static space for the 2 default backends */ Savepoint *pSavepoint; /* List of active savepoints */ + int busyTimeout; /* Busy handler timeout, in msec */ int nSavepoint; /* Number of non-transaction savepoints */ int nStatement; /* Number of nested statement-transactions */ - u8 isTransactionSavepoint; /* True if the outermost savepoint is a TS */ i64 nDeferredCons; /* Net deferred constraints this transaction. */ - + i64 nDeferredImmCons; /* Net deferred immediate constraints */ + int *pnBytesFreed; /* If not NULL, increment this in DbFree() */ #ifdef SQLITE_ENABLE_UNLOCK_NOTIFY /* The following variables are all protected by the STATIC_MASTER ** mutex, not by sqlite3.mutex. They are used by code in notify.c. ** ** When X.pUnlockConnection==Y, that means that X is waiting for Y to @@ -8452,56 +12158,94 @@ sqlite3 *pUnlockConnection; /* Connection to watch for unlock */ void *pUnlockArg; /* Argument to xUnlockNotify */ void (*xUnlockNotify)(void **, int); /* Unlock notify callback */ sqlite3 *pNextBlocked; /* Next in list of all blocked connections */ #endif +#ifdef SQLITE_USER_AUTHENTICATION + sqlite3_userauth auth; /* User authentication information */ +#endif }; /* ** A macro to discover the encoding of a database. */ -#define ENC(db) ((db)->aDb[0].pSchema->enc) +#define SCHEMA_ENC(db) ((db)->aDb[0].pSchema->enc) +#define ENC(db) ((db)->enc) /* ** Possible values for the sqlite3.flags. 
*/ -#define SQLITE_VdbeTrace 0x00000100 /* True to trace VDBE execution */ -#define SQLITE_InternChanges 0x00000200 /* Uncommitted Hash table changes */ -#define SQLITE_FullColNames 0x00000400 /* Show full column names on SELECT */ -#define SQLITE_ShortColNames 0x00000800 /* Show short columns names */ -#define SQLITE_CountRows 0x00001000 /* Count rows changed by INSERT, */ +#define SQLITE_VdbeTrace 0x00000001 /* True to trace VDBE execution */ +#define SQLITE_InternChanges 0x00000002 /* Uncommitted Hash table changes */ +#define SQLITE_FullColNames 0x00000004 /* Show full column names on SELECT */ +#define SQLITE_FullFSync 0x00000008 /* Use full fsync on the backend */ +#define SQLITE_CkptFullFSync 0x00000010 /* Use full fsync for checkpoint */ +#define SQLITE_CacheSpill 0x00000020 /* OK to spill pager cache */ +#define SQLITE_ShortColNames 0x00000040 /* Show short columns names */ +#define SQLITE_CountRows 0x00000080 /* Count rows changed by INSERT, */ /* DELETE, or UPDATE and return */ /* the count using a callback. */ -#define SQLITE_NullCallback 0x00002000 /* Invoke the callback once if the */ +#define SQLITE_NullCallback 0x00000100 /* Invoke the callback once if the */ /* result set is empty */ -#define SQLITE_SqlTrace 0x00004000 /* Debug print SQL as it executes */ -#define SQLITE_VdbeListing 0x00008000 /* Debug listings of VDBE programs */ -#define SQLITE_WriteSchema 0x00010000 /* OK to update SQLITE_MASTER */ -#define SQLITE_NoReadlock 0x00020000 /* Readlocks are omitted when - ** accessing read-only databases */ -#define SQLITE_IgnoreChecks 0x00040000 /* Do not enforce check constraints */ -#define SQLITE_ReadUncommitted 0x0080000 /* For shared-cache mode */ -#define SQLITE_LegacyFileFmt 0x00100000 /* Create new databases in format 1 */ -#define SQLITE_FullFSync 0x00200000 /* Use full fsync on the backend */ +#define SQLITE_SqlTrace 0x00000200 /* Debug print SQL as it executes */ +#define SQLITE_VdbeListing 0x00000400 /* Debug listings of VDBE programs */ +#define SQLITE_WriteSchema 0x00000800 /* OK to update SQLITE_MASTER */ +#define SQLITE_VdbeAddopTrace 0x00001000 /* Trace sqlite3VdbeAddOp() calls */ +#define SQLITE_IgnoreChecks 0x00002000 /* Do not enforce check constraints */ +#define SQLITE_ReadUncommitted 0x0004000 /* For shared-cache mode */ +#define SQLITE_LegacyFileFmt 0x00008000 /* Create new databases in format 1 */ +#define SQLITE_RecoveryMode 0x00010000 /* Ignore schema errors */ +#define SQLITE_ReverseOrder 0x00020000 /* Reverse unordered SELECTs */ +#define SQLITE_RecTriggers 0x00040000 /* Enable recursive triggers */ +#define SQLITE_ForeignKeys 0x00080000 /* Enforce foreign key constraints */ +#define SQLITE_AutoIndex 0x00100000 /* Enable automatic indexes */ +#define SQLITE_PreferBuiltin 0x00200000 /* Preference to built-in funcs */ #define SQLITE_LoadExtension 0x00400000 /* Enable load_extension */ -#define SQLITE_RecoveryMode 0x00800000 /* Ignore schema errors */ -#define SQLITE_ReverseOrder 0x01000000 /* Reverse unordered SELECTs */ -#define SQLITE_RecTriggers 0x02000000 /* Enable recursive triggers */ -#define SQLITE_ForeignKeys 0x04000000 /* Enforce foreign key constraints */ -#define SQLITE_AutoIndex 0x08000000 /* Enable automatic indexes */ +#define SQLITE_EnableTrigger 0x00800000 /* True to enable triggers */ +#define SQLITE_DeferFKs 0x01000000 /* Defer all FK constraints */ +#define SQLITE_QueryOnly 0x02000000 /* Disable database changes */ +#define SQLITE_VdbeEQP 0x04000000 /* Debug EXPLAIN QUERY PLAN */ +#define SQLITE_Vacuum 0x08000000 /* Currently 
in a VACUUM */ +#define SQLITE_CellSizeCk 0x10000000 /* Check btree cell sizes on load */ + /* -** Bits of the sqlite3.flags field that are used by the -** sqlite3_test_control(SQLITE_TESTCTRL_OPTIMIZATIONS,...) interface. -** These must be the low-order bits of the flags field. +** Bits of the sqlite3.dbOptFlags field that are used by the +** sqlite3_test_control(SQLITE_TESTCTRL_OPTIMIZATIONS,...) interface to +** selectively disable various optimizations. */ -#define SQLITE_QueryFlattener 0x01 /* Disable query flattening */ -#define SQLITE_ColumnCache 0x02 /* Disable the column cache */ -#define SQLITE_IndexSort 0x04 /* Disable indexes for sorting */ -#define SQLITE_IndexSearch 0x08 /* Disable indexes for searching */ -#define SQLITE_IndexCover 0x10 /* Disable index covering table */ -#define SQLITE_OptMask 0x1f /* Mask of all disablable opts */ +#define SQLITE_QueryFlattener 0x0001 /* Query flattening */ +#define SQLITE_ColumnCache 0x0002 /* Column cache */ +#define SQLITE_GroupByOrder 0x0004 /* GROUPBY cover of ORDERBY */ +#define SQLITE_FactorOutConst 0x0008 /* Constant factoring */ +/* not used 0x0010 // Was: SQLITE_IdxRealAsInt */ +#define SQLITE_DistinctOpt 0x0020 /* DISTINCT using indexes */ +#define SQLITE_CoverIdxScan 0x0040 /* Covering index scans */ +#define SQLITE_OrderByIdxJoin 0x0080 /* ORDER BY of joins via index */ +#define SQLITE_SubqCoroutine 0x0100 /* Evaluate subqueries as coroutines */ +#define SQLITE_Transitive 0x0200 /* Transitive constraints */ +#define SQLITE_OmitNoopJoin 0x0400 /* Omit unused tables in joins */ +#define SQLITE_Stat34 0x0800 /* Use STAT3 or STAT4 data */ +#define SQLITE_CursorHints 0x2000 /* Add OP_CursorHint opcodes */ +#define SQLITE_AllOpts 0xffff /* All optimizations */ + +/* +** Macros for testing whether or not optimizations are enabled or disabled. +*/ +#ifndef SQLITE_OMIT_BUILTIN_TEST +#define OptimizationDisabled(db, mask) (((db)->dbOptFlags&(mask))!=0) +#define OptimizationEnabled(db, mask) (((db)->dbOptFlags&(mask))==0) +#else +#define OptimizationDisabled(db, mask) 0 +#define OptimizationEnabled(db, mask) 1 +#endif + +/* +** Return true if it OK to factor constant expressions into the initialization +** code. The argument is a Parse object for the code generator. +*/ +#define ConstFactorOk(P) ((P)->okConstFactor) /* ** Possible values for the sqlite.magic field. ** The numbers are obtained at random and have no special meaning, other ** than being distinct from one another. @@ -8509,40 +12253,70 @@ #define SQLITE_MAGIC_OPEN 0xa029a697 /* Database is open */ #define SQLITE_MAGIC_CLOSED 0x9f3c2d33 /* Database is closed */ #define SQLITE_MAGIC_SICK 0x4b771290 /* Error and awaiting close */ #define SQLITE_MAGIC_BUSY 0xf03b7906 /* Database currently in use */ #define SQLITE_MAGIC_ERROR 0xb5357930 /* An SQLITE_MISUSE error occurred */ +#define SQLITE_MAGIC_ZOMBIE 0x64cffc7f /* Close with last statement close */ /* ** Each SQL function is defined by an instance of the following ** structure. A pointer to this structure is stored in the sqlite.aFunc ** hash table. When multiple functions have the same name, the hash table ** points to a linked list of these structures. */ struct FuncDef { i16 nArg; /* Number of arguments. 
-1 means unlimited */ - u8 iPrefEnc; /* Preferred text encoding (SQLITE_UTF8, 16LE, 16BE) */ - u8 flags; /* Some combination of SQLITE_FUNC_* */ + u16 funcFlags; /* Some combination of SQLITE_FUNC_* */ void *pUserData; /* User data parameter */ FuncDef *pNext; /* Next function with same name */ - void (*xFunc)(sqlite3_context*,int,sqlite3_value**); /* Regular function */ - void (*xStep)(sqlite3_context*,int,sqlite3_value**); /* Aggregate step */ - void (*xFinalize)(sqlite3_context*); /* Aggregate finalizer */ + void (*xSFunc)(sqlite3_context*,int,sqlite3_value**); /* func or agg-step */ + void (*xFinalize)(sqlite3_context*); /* Agg finalizer */ char *zName; /* SQL name of the function. */ FuncDef *pHash; /* Next with a different name but the same hash */ + FuncDestructor *pDestructor; /* Reference counted destructor function */ }; /* -** Possible values for FuncDef.flags +** This structure encapsulates a user-function destructor callback (as +** configured using create_function_v2()) and a reference counter. When +** create_function_v2() is called to create a function with a destructor, +** a single object of this type is allocated. FuncDestructor.nRef is set to +** the number of FuncDef objects created (either 1 or 3, depending on whether +** or not the specified encoding is SQLITE_ANY). The FuncDef.pDestructor +** member of each of the new FuncDef objects is set to point to the allocated +** FuncDestructor. +** +** Thereafter, when one of the FuncDef objects is deleted, the reference +** count on this object is decremented. When it reaches 0, the destructor +** is invoked and the FuncDestructor structure freed. */ -#define SQLITE_FUNC_LIKE 0x01 /* Candidate for the LIKE optimization */ -#define SQLITE_FUNC_CASE 0x02 /* Case-sensitive LIKE-type function */ -#define SQLITE_FUNC_EPHEM 0x04 /* Ephemeral. Delete with VDBE */ -#define SQLITE_FUNC_NEEDCOLL 0x08 /* sqlite3GetFuncCollSeq() might be called */ -#define SQLITE_FUNC_PRIVATE 0x10 /* Allowed for internal use only */ -#define SQLITE_FUNC_COUNT 0x20 /* Built-in count(*) aggregate */ -#define SQLITE_FUNC_COALESCE 0x40 /* Built-in coalesce() or ifnull() function */ +struct FuncDestructor { + int nRef; + void (*xDestroy)(void *); + void *pUserData; +}; + +/* +** Possible values for FuncDef.flags. Note that the _LENGTH and _TYPEOF +** values must correspond to OPFLAG_LENGTHARG and OPFLAG_TYPEOFARG. And +** SQLITE_FUNC_CONSTANT must be the same as SQLITE_DETERMINISTIC. There +** are assert() statements in the code to verify this. +*/ +#define SQLITE_FUNC_ENCMASK 0x0003 /* SQLITE_UTF8, SQLITE_UTF16BE or UTF16LE */ +#define SQLITE_FUNC_LIKE 0x0004 /* Candidate for the LIKE optimization */ +#define SQLITE_FUNC_CASE 0x0008 /* Case-sensitive LIKE-type function */ +#define SQLITE_FUNC_EPHEM 0x0010 /* Ephemeral. Delete with VDBE */ +#define SQLITE_FUNC_NEEDCOLL 0x0020 /* sqlite3GetFuncCollSeq() might be called*/ +#define SQLITE_FUNC_LENGTH 0x0040 /* Built-in length() function */ +#define SQLITE_FUNC_TYPEOF 0x0080 /* Built-in typeof() function */ +#define SQLITE_FUNC_COUNT 0x0100 /* Built-in count(*) aggregate */ +#define SQLITE_FUNC_COALESCE 0x0200 /* Built-in coalesce() or ifnull() */ +#define SQLITE_FUNC_UNLIKELY 0x0400 /* Built-in unlikely() function */ +#define SQLITE_FUNC_CONSTANT 0x0800 /* Constant inputs give a constant output */ +#define SQLITE_FUNC_MINMAX 0x1000 /* True for min() and max() aggregates */ +#define SQLITE_FUNC_SLOCHNG 0x2000 /* "Slow Change". 
Value constant during a + ** single query - might change over time */ /* ** The following three macros, FUNCTION(), LIKEFUNC() and AGGREGATE() are ** used to create the initializers for the FuncDef structures. ** @@ -8550,10 +12324,19 @@ ** Used to create a scalar function definition of a function zName ** implemented by C function xFunc that accepts nArg arguments. The ** value passed as iArg is cast to a (void*) and made available ** as the user-data (sqlite3_user_data()) for the function. If ** argument bNC is true, then the SQLITE_FUNC_NEEDCOLL flag is set. +** +** VFUNCTION(zName, nArg, iArg, bNC, xFunc) +** Like FUNCTION except it omits the SQLITE_FUNC_CONSTANT flag. +** +** DFUNCTION(zName, nArg, iArg, bNC, xFunc) +** Like FUNCTION except it omits the SQLITE_FUNC_CONSTANT flag and +** adds the SQLITE_FUNC_SLOCHNG flag. Used for date & time functions +** and functions like sqlite_version() that can change, but not during +** a single query. ** ** AGGREGATE(zName, nArg, iArg, bNC, xStep, xFinal) ** Used to create an aggregate function definition implemented by ** the C functions xStep and xFinal. The first four parameters ** are interpreted in the same way as the first 4 parameters to @@ -8566,20 +12349,33 @@ ** available as the function user-data (sqlite3_user_data()). The ** FuncDef.flags variable is set to the value passed as the flags ** parameter. */ #define FUNCTION(zName, nArg, iArg, bNC, xFunc) \ - {nArg, SQLITE_UTF8, bNC*SQLITE_FUNC_NEEDCOLL, \ - SQLITE_INT_TO_PTR(iArg), 0, xFunc, 0, 0, #zName, 0} + {nArg, SQLITE_FUNC_CONSTANT|SQLITE_UTF8|(bNC*SQLITE_FUNC_NEEDCOLL), \ + SQLITE_INT_TO_PTR(iArg), 0, xFunc, 0, #zName, 0, 0} +#define VFUNCTION(zName, nArg, iArg, bNC, xFunc) \ + {nArg, SQLITE_UTF8|(bNC*SQLITE_FUNC_NEEDCOLL), \ + SQLITE_INT_TO_PTR(iArg), 0, xFunc, 0, #zName, 0, 0} +#define DFUNCTION(zName, nArg, iArg, bNC, xFunc) \ + {nArg, SQLITE_FUNC_SLOCHNG|SQLITE_UTF8|(bNC*SQLITE_FUNC_NEEDCOLL), \ + SQLITE_INT_TO_PTR(iArg), 0, xFunc, 0, #zName, 0, 0} +#define FUNCTION2(zName, nArg, iArg, bNC, xFunc, extraFlags) \ + {nArg,SQLITE_FUNC_CONSTANT|SQLITE_UTF8|(bNC*SQLITE_FUNC_NEEDCOLL)|extraFlags,\ + SQLITE_INT_TO_PTR(iArg), 0, xFunc, 0, #zName, 0, 0} #define STR_FUNCTION(zName, nArg, pArg, bNC, xFunc) \ - {nArg, SQLITE_UTF8, bNC*SQLITE_FUNC_NEEDCOLL, \ - pArg, 0, xFunc, 0, 0, #zName, 0} + {nArg, SQLITE_FUNC_SLOCHNG|SQLITE_UTF8|(bNC*SQLITE_FUNC_NEEDCOLL), \ + pArg, 0, xFunc, 0, #zName, 0, 0} #define LIKEFUNC(zName, nArg, arg, flags) \ - {nArg, SQLITE_UTF8, flags, (void *)arg, 0, likeFunc, 0, 0, #zName, 0} + {nArg, SQLITE_FUNC_CONSTANT|SQLITE_UTF8|flags, \ + (void *)arg, 0, likeFunc, 0, #zName, 0, 0} #define AGGREGATE(zName, nArg, arg, nc, xStep, xFinal) \ - {nArg, SQLITE_UTF8, nc*SQLITE_FUNC_NEEDCOLL, \ - SQLITE_INT_TO_PTR(arg), 0, 0, xStep,xFinal,#zName,0} + {nArg, SQLITE_UTF8|(nc*SQLITE_FUNC_NEEDCOLL), \ + SQLITE_INT_TO_PTR(arg), 0, xStep,xFinal,#zName,0,0} +#define AGGREGATE2(zName, nArg, arg, nc, xStep, xFinal, extraFlags) \ + {nArg, SQLITE_UTF8|(nc*SQLITE_FUNC_NEEDCOLL)|extraFlags, \ + SQLITE_INT_TO_PTR(arg), 0, xStep,xFinal,#zName,0,0} /* ** All current savepoints are stored in a linked list starting at ** sqlite3.pSavepoint. The first element in the list is the most recently ** opened savepoint. Savepoints are added to the list by the vdbe @@ -8586,10 +12382,11 @@ ** OP_Savepoint instruction. */ struct Savepoint { char *zName; /* Savepoint name (nul-terminated) */ i64 nDeferredCons; /* Number of deferred fk violations */ + i64 nDeferredImmCons; /* Number of deferred imm fk. 
*/ Savepoint *pNext; /* Parent savepoint (if any) */ }; /* ** The following are used as the second parameter to sqlite3Savepoint(), @@ -8608,10 +12405,11 @@ struct Module { const sqlite3_module *pModule; /* Callback pointers */ const char *zName; /* Name passed to create_module() */ void *pAux; /* pAux passed to create_module() */ void (*xDestroy)(void *); /* Module destructor function */ + Table *pEpoTab; /* Eponymous table for this module */ }; /* ** information about each column of an SQL table is held in an instance ** of this structure. @@ -8620,97 +12418,86 @@ char *zName; /* Name of this column */ Expr *pDflt; /* Default value of this column */ char *zDflt; /* Original text of the default value */ char *zType; /* Data type for this column */ char *zColl; /* Collating sequence. If NULL, use the default */ - u8 notNull; /* True if there is a NOT NULL constraint */ - u8 isPrimKey; /* True if this column is part of the PRIMARY KEY */ + u8 notNull; /* An OE_ code for handling a NOT NULL constraint */ char affinity; /* One of the SQLITE_AFF_... values */ -#ifndef SQLITE_OMIT_VIRTUALTABLE - u8 isHidden; /* True if this column is 'hidden' */ -#endif + u8 szEst; /* Estimated size of value in this column. sizeof(INT)==1 */ + u8 colFlags; /* Boolean properties. See COLFLAG_ defines below */ }; +/* Allowed values for Column.colFlags: +*/ +#define COLFLAG_PRIMKEY 0x0001 /* Column is part of the primary key */ +#define COLFLAG_HIDDEN 0x0002 /* A hidden column in a virtual table */ + /* ** A "Collating Sequence" is defined by an instance of the following ** structure. Conceptually, a collating sequence consists of a name and ** a comparison routine that defines the order of that sequence. ** -** There may two separate implementations of the collation function, one -** that processes text in UTF-8 encoding (CollSeq.xCmp) and another that -** processes text encoded in UTF-16 (CollSeq.xCmp16), using the machine -** native byte order. When a collation sequence is invoked, SQLite selects -** the version that will require the least expensive encoding -** translations, if any. -** -** The CollSeq.pUser member variable is an extra parameter that passed in -** as the first argument to the UTF-8 comparison function, xCmp. -** CollSeq.pUser16 is the equivalent for the UTF-16 comparison function, -** xCmp16. -** -** If both CollSeq.xCmp and CollSeq.xCmp16 are NULL, it means that the +** If CollSeq.xCmp is NULL, it means that the ** collating sequence is undefined. Indices built on an undefined ** collating sequence may not be read or written. */ struct CollSeq { char *zName; /* Name of the collating sequence, UTF-8 encoded */ u8 enc; /* Text encoding handled by xCmp() */ - u8 type; /* One of the SQLITE_COLL_... values below */ void *pUser; /* First argument to xCmp() */ int (*xCmp)(void*,int, const void*, int, const void*); void (*xDel)(void*); /* Destructor for pUser */ }; -/* -** Allowed values of CollSeq.type: -*/ -#define SQLITE_COLL_BINARY 1 /* The default memcmp() collating sequence */ -#define SQLITE_COLL_NOCASE 2 /* The built-in NOCASE collating sequence */ -#define SQLITE_COLL_REVERSE 3 /* The built-in REVERSE collating sequence */ -#define SQLITE_COLL_USER 0 /* Any other user-defined collating sequence */ - /* ** A sort order can be either ASC or DESC. */ #define SQLITE_SO_ASC 0 /* Sort in ascending order */ #define SQLITE_SO_DESC 1 /* Sort in ascending order */ +#define SQLITE_SO_UNDEFINED -1 /* No sort order specified */ /* ** Column affinity types. 
** ** These used to have mnemonic name like 'i' for SQLITE_AFF_INTEGER and ** 't' for SQLITE_AFF_TEXT. But we can save a little space and improve ** the speed a little by numbering the values consecutively. ** -** But rather than start with 0 or 1, we begin with 'a'. That way, +** But rather than start with 0 or 1, we begin with 'A'. That way, ** when multiple affinity types are concatenated into a string and ** used as the P4 operand, they will be more readable. ** ** Note also that the numeric types are grouped together so that testing -** for a numeric type is a single comparison. +** for a numeric type is a single comparison. And the BLOB type is first. */ -#define SQLITE_AFF_TEXT 'a' -#define SQLITE_AFF_NONE 'b' -#define SQLITE_AFF_NUMERIC 'c' -#define SQLITE_AFF_INTEGER 'd' -#define SQLITE_AFF_REAL 'e' +#define SQLITE_AFF_BLOB 'A' +#define SQLITE_AFF_TEXT 'B' +#define SQLITE_AFF_NUMERIC 'C' +#define SQLITE_AFF_INTEGER 'D' +#define SQLITE_AFF_REAL 'E' #define sqlite3IsNumericAffinity(X) ((X)>=SQLITE_AFF_NUMERIC) /* ** The SQLITE_AFF_MASK values masks off the significant bits of an ** affinity value. */ -#define SQLITE_AFF_MASK 0x67 +#define SQLITE_AFF_MASK 0x47 /* ** Additional bit values that can be ORed with an affinity without ** changing the affinity. +** +** The SQLITE_NOTNULL flag is a combination of NULLEQ and JUMPIFNULL. +** It causes an assert() to fire if either operand to a comparison +** operator is NULL. It is added to certain comparison operators to +** prove that the operands are always NOT NULL. */ -#define SQLITE_JUMPIFNULL 0x08 /* jumps if either operand is NULL */ -#define SQLITE_STOREP2 0x10 /* Store result in reg[P2] rather than jump */ +#define SQLITE_JUMPIFNULL 0x10 /* jumps if either operand is NULL */ +#define SQLITE_STOREP2 0x20 /* Store result in reg[P2] rather than jump */ #define SQLITE_NULLEQ 0x80 /* NULL=NULL */ +#define SQLITE_NOTNULL 0x90 /* Assert that operands are never NULL */ /* ** An object of this type is created for each virtual table present in ** the database schema. ** @@ -8721,11 +12508,11 @@ ** implementation. sqlite3_vtab* handles can not be shared between ** database connections, even when the rest of the in-memory database ** schema is shared, as the implementation often stores the database ** connection handle passed to it via the xConnect() or xCreate() method ** during initialization internally. This database connection handle may -** then used by the virtual table implementation to access real tables +** then be used by the virtual table implementation to access real tables ** within the database. So that they appear as part of the callers ** transaction, these accesses need to be made via the same database ** connection as that used to execute SQL operations on the virtual table. ** ** All VTable objects that correspond to a single table in a shared @@ -8755,98 +12542,104 @@ struct VTable { sqlite3 *db; /* Database connection associated with this table */ Module *pMod; /* Pointer to module implementation */ sqlite3_vtab *pVtab; /* Pointer to vtab instance */ int nRef; /* Number of pointers to this structure */ + u8 bConstraint; /* True if constraints are supported */ + int iSavepoint; /* Depth of the SAVEPOINT stack */ VTable *pNext; /* Next in linked list (see above) */ }; /* -** Each SQL table is represented in memory by an instance of the -** following structure. -** -** Table.zName is the name of the table. The case of the original -** CREATE TABLE statement is stored, but case is not significant for -** comparisons. 
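The one-character SQLITE_AFF_ codes above are internal, but the affinities they name are observable from SQL: a value stored in a column is coerced according to the column's declared type. A quick self-contained check against an in-memory database (the table name t is illustrative only):

#include <stdio.h>
#include <sqlite3.h>

int affinity_demo(void){
  sqlite3 *db;
  sqlite3_stmt *pStmt;
  sqlite3_open(":memory:", &db);
  sqlite3_exec(db,
    "CREATE TABLE t(a TEXT, b INTEGER);"
    "INSERT INTO t VALUES('123','123');", 0, 0, 0);
  sqlite3_prepare_v2(db, "SELECT typeof(a), typeof(b) FROM t", -1, &pStmt, 0);
  if( sqlite3_step(pStmt)==SQLITE_ROW ){
    /* TEXT affinity keeps '123' as text; INTEGER affinity converts it,
    ** so this prints: text integer */
    printf("%s %s\n", (const char*)sqlite3_column_text(pStmt,0),
                      (const char*)sqlite3_column_text(pStmt,1));
  }
  sqlite3_finalize(pStmt);
  sqlite3_close(db);
  return 0;
}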
-** -** Table.nCol is the number of columns in this table. Table.aCol is a -** pointer to an array of Column structures, one for each column. -** -** If the table has an INTEGER PRIMARY KEY, then Table.iPKey is the index of -** the column that is that key. Otherwise Table.iPKey is negative. Note -** that the datatype of the PRIMARY KEY must be INTEGER for this field to -** be set. An INTEGER PRIMARY KEY is used as the rowid for each row of -** the table. If a table has no INTEGER PRIMARY KEY, then a random rowid -** is generated for each row of the table. TF_HasPrimaryKey is set if -** the table has any PRIMARY KEY, INTEGER or otherwise. -** -** Table.tnum is the page number for the root BTree page of the table in the -** database file. If Table.iDb is the index of the database table backend -** in sqlite.aDb[]. 0 is for the main database and 1 is for the file that -** holds temporary tables and indices. If TF_Ephemeral is set -** then the table is stored in a file that is automatically deleted -** when the VDBE cursor to the table is closed. In this case Table.tnum -** refers VDBE cursor number that holds the table open, not to the root -** page number. Transient tables are used to hold the results of a -** sub-query that appears instead of a real table name in the FROM clause -** of a SELECT statement. +** The schema for each SQL table and view is represented in memory +** by an instance of the following structure. */ struct Table { - sqlite3 *dbMem; /* DB connection used for lookaside allocations. */ char *zName; /* Name of the table or view */ - int iPKey; /* If not negative, use aCol[iPKey] as the primary key */ - int nCol; /* Number of columns in this table */ Column *aCol; /* Information about each column */ Index *pIndex; /* List of SQL indexes on this table. */ - int tnum; /* Root BTree node for this table (see note above) */ Select *pSelect; /* NULL for tables. Points to definition if a view. */ - u16 nRef; /* Number of pointers to this Table */ - u8 tabFlags; /* Mask of TF_* values */ - u8 keyConf; /* What to do in case of uniqueness conflict on iPKey */ FKey *pFKey; /* Linked list of all foreign keys in this table */ char *zColAff; /* String defining the affinity of each column */ -#ifndef SQLITE_OMIT_CHECK - Expr *pCheck; /* The AND of all CHECK constraints */ + ExprList *pCheck; /* All CHECK constraints */ + /* ... also used as column name list in a VIEW */ + int tnum; /* Root BTree page for this table */ + i16 iPKey; /* If not negative, use aCol[iPKey] as the rowid */ + i16 nCol; /* Number of columns in this table */ + u16 nRef; /* Number of pointers to this Table */ + LogEst nRowLogEst; /* Estimated rows in table - from sqlite_stat1 table */ + LogEst szTabRow; /* Estimated size of each table row in bytes */ +#ifdef SQLITE_ENABLE_COSTMULT + LogEst costMult; /* Cost multiplier for using this table */ #endif + u8 tabFlags; /* Mask of TF_* values */ + u8 keyConf; /* What to do in case of uniqueness conflict on iPKey */ #ifndef SQLITE_OMIT_ALTERTABLE int addColOffset; /* Offset in CREATE TABLE stmt to add a new column */ #endif #ifndef SQLITE_OMIT_VIRTUALTABLE - VTable *pVTable; /* List of VTable objects. */ int nModuleArg; /* Number of arguments to the module */ - char **azModuleArg; /* Text of all module args. [0] is module name */ + char **azModuleArg; /* 0: module 1: schema 2: vtab name 3...: args */ + VTable *pVTable; /* List of VTable objects. 
*/ #endif Trigger *pTrigger; /* List of triggers stored in pSchema */ Schema *pSchema; /* Schema that contains this table */ Table *pNextZombie; /* Next on the Parse.pZombieTab list */ }; /* -** Allowed values for Tabe.tabFlags. +** Allowed values for Table.tabFlags. +** +** TF_OOOHidden applies to tables or view that have hidden columns that are +** followed by non-hidden columns. Example: "CREATE VIRTUAL TABLE x USING +** vtab1(a HIDDEN, b);". Since "b" is a non-hidden column but "a" is hidden, +** the TF_OOOHidden attribute would apply in this case. Such tables require +** special handling during INSERT processing. */ #define TF_Readonly 0x01 /* Read-only system table */ #define TF_Ephemeral 0x02 /* An ephemeral table */ #define TF_HasPrimaryKey 0x04 /* Table has a primary key */ #define TF_Autoincrement 0x08 /* Integer primary key is autoincrement */ #define TF_Virtual 0x10 /* Is a virtual table */ -#define TF_NeedMetadata 0x20 /* aCol[].zType and aCol[].pColl missing */ - +#define TF_WithoutRowid 0x20 /* No rowid. PRIMARY KEY is the key */ +#define TF_NoVisibleRowid 0x40 /* No user-visible "rowid" column */ +#define TF_OOOHidden 0x80 /* Out-of-Order hidden columns */ /* ** Test to see whether or not a table is a virtual table. This is ** done as a macro so that it will be optimized out when virtual ** table support is omitted from the build. */ #ifndef SQLITE_OMIT_VIRTUALTABLE # define IsVirtual(X) (((X)->tabFlags & TF_Virtual)!=0) -# define IsHiddenColumn(X) ((X)->isHidden) #else # define IsVirtual(X) 0 -# define IsHiddenColumn(X) 0 #endif +/* +** Macros to determine if a column is hidden. IsOrdinaryHiddenColumn() +** only works for non-virtual tables (ordinary tables and views) and is +** always false unless SQLITE_ENABLE_HIDDEN_COLUMNS is defined. The +** IsHiddenColumn() macro is general purpose. +*/ +#if defined(SQLITE_ENABLE_HIDDEN_COLUMNS) +# define IsHiddenColumn(X) (((X)->colFlags & COLFLAG_HIDDEN)!=0) +# define IsOrdinaryHiddenColumn(X) (((X)->colFlags & COLFLAG_HIDDEN)!=0) +#elif !defined(SQLITE_OMIT_VIRTUALTABLE) +# define IsHiddenColumn(X) (((X)->colFlags & COLFLAG_HIDDEN)!=0) +# define IsOrdinaryHiddenColumn(X) 0 +#else +# define IsHiddenColumn(X) 0 +# define IsOrdinaryHiddenColumn(X) 0 +#endif + + +/* Does the table have a rowid */ +#define HasRowid(X) (((X)->tabFlags & TF_WithoutRowid)==0) +#define VisibleRowid(X) (((X)->tabFlags & TF_NoVisibleRowid)==0) + /* ** Each foreign key constraint is an instance of the following structure. ** ** A foreign key is associated with two tables. The "from" table is ** the table that contains the REFERENCES clause that creates the foreign @@ -8857,30 +12650,39 @@ ** a INTEGER PRIMARY KEY, ** b INTEGER CONSTRAINT fk1 REFERENCES ex2(x) ** ); ** ** For foreign key "fk1", the from-table is "ex1" and the to-table is "ex2". +** Equivalent names: +** +** from-table == child-table +** to-table == parent-table ** ** Each REFERENCES clause generates an instance of the following structure ** which is attached to the from-table. The to-table need not exist when ** the from-table is created. The existence of the to-table is not checked. +** +** The list of all parents for child Table X is held at X.pFKey. +** +** A list of all children for a table named Z (which might not even exist) +** is held in Schema.fkeyHash with a hash key of Z. */ struct FKey { Table *pFrom; /* Table containing the REFERENCES clause (aka: Child) */ - FKey *pNextFrom; /* Next foreign key in pFrom */ + FKey *pNextFrom; /* Next FKey with the same in pFrom. 
Next parent of pFrom */ char *zTo; /* Name of table that the key points to (aka: Parent) */ - FKey *pNextTo; /* Next foreign key on table named zTo */ - FKey *pPrevTo; /* Previous foreign key on table named zTo */ + FKey *pNextTo; /* Next with the same zTo. Next child of zTo. */ + FKey *pPrevTo; /* Previous with the same zTo */ int nCol; /* Number of columns in this key */ /* EV: R-30323-21917 */ - u8 isDeferred; /* True if constraint checking is deferred till COMMIT */ - u8 aAction[2]; /* ON DELETE and ON UPDATE actions, respectively */ - Trigger *apTrigger[2]; /* Triggers for aAction[] actions */ - struct sColMap { /* Mapping of columns in pFrom to columns in zTo */ - int iFrom; /* Index of column in pFrom */ - char *zCol; /* Name of column in zTo. If 0 use PRIMARY KEY */ - } aCol[1]; /* One entry for each of nCol column s */ + u8 isDeferred; /* True if constraint checking is deferred till COMMIT */ + u8 aAction[2]; /* ON DELETE and ON UPDATE actions, respectively */ + Trigger *apTrigger[2];/* Triggers for aAction[] actions */ + struct sColMap { /* Mapping of columns in pFrom to columns in zTo */ + int iFrom; /* Index of column in pFrom */ + char *zCol; /* Name of column in zTo. If NULL use PRIMARY KEY */ + } aCol[1]; /* One entry for each of nCol columns */ }; /* ** SQLite supports many different ways to resolve a constraint ** error. ROLLBACK processing means that a constraint violation @@ -8916,57 +12718,78 @@ #define OE_Restrict 6 /* OE_Abort for IMMEDIATE, OE_Rollback for DEFERRED */ #define OE_SetNull 7 /* Set the foreign key value to NULL */ #define OE_SetDflt 8 /* Set the foreign key value to its default */ #define OE_Cascade 9 /* Cascade the changes */ -#define OE_Default 99 /* Do whatever the default action is */ +#define OE_Default 10 /* Do whatever the default action is */ /* ** An instance of the following structure is passed as the first ** argument to sqlite3VdbeKeyCompare and is used to control the ** comparison of the two index keys. +** +** Note that aSortOrder[] and aColl[] have nField+1 slots. There +** are nField slots for the columns of an index then one extra slot +** for the rowid at the end. */ struct KeyInfo { + u32 nRef; /* Number of references to this KeyInfo object */ + u8 enc; /* Text encoding - one of the SQLITE_UTF* values */ + u16 nField; /* Number of key columns in the index */ + u16 nXField; /* Number of columns beyond the key columns */ sqlite3 *db; /* The database connection */ - u8 enc; /* Text encoding - one of the TEXT_Utf* values */ - u16 nField; /* Number of entries in aColl[] */ - u8 *aSortOrder; /* If defined an aSortOrder[i] is true, sort DESC */ + u8 *aSortOrder; /* Sort order for each column. */ CollSeq *aColl[1]; /* Collating sequence for each term of the key */ }; /* -** An instance of the following structure holds information about a -** single index record that has already been parsed out into individual -** values. +** This object holds a record which has been parsed out into individual +** fields, for the purposes of doing a comparison. ** ** A record is an object that contains one or more fields of data. ** Records are used to store the content of a table row and to store ** the key of an index. A blob encoding of a record is created by ** the OP_MakeRecord opcode of the VDBE and is disassembled by the ** OP_Column opcode. ** -** This structure holds a record that has already been disassembled -** into its constituent fields. +** An instance of this object serves as a "key" for doing a search on +** an index b+tree. 
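The ex1/ex2 pair in the foreign-key comment above, together with the OE_ action codes, maps directly onto SQL-level REFERENCES clauses and ON CONFLICT handling. In the sketch below, ON DELETE CASCADE and OR IGNORE are chosen only to show OE_Cascade and OE_Ignore in use, and foreign-key enforcement has to be switched on per connection:

#include <sqlite3.h>

int fkey_demo(sqlite3 *db){
  return sqlite3_exec(db,
    "PRAGMA foreign_keys=ON;"                       /* enforcement is opt-in */
    "CREATE TABLE ex2(x INTEGER PRIMARY KEY);"       /* parent (to-table)   */
    "CREATE TABLE ex1("                              /* child (from-table)  */
    "  a INTEGER PRIMARY KEY,"
    "  b INTEGER CONSTRAINT fk1 REFERENCES ex2(x) ON DELETE CASCADE"
    ");"
    "INSERT OR IGNORE INTO ex2(x) VALUES(1);"        /* OE_Ignore on conflict */
    "INSERT INTO ex1(a,b) VALUES(1,1);"
    "DELETE FROM ex2 WHERE x=1;",                    /* cascades into ex1 */
    0, 0, 0);
}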
The goal of the search is to find the entry that +** is closed to the key described by this object. This object might hold +** just a prefix of the key. The number of fields is given by +** pKeyInfo->nField. +** +** The r1 and r2 fields are the values to return if this key is less than +** or greater than a key in the btree, respectively. These are normally +** -1 and +1 respectively, but might be inverted to +1 and -1 if the b-tree +** is in DESC order. +** +** The key comparison functions actually return default_rc when they find +** an equals comparison. default_rc can be -1, 0, or +1. If there are +** multiple entries in the b-tree with the same key (when only looking +** at the first pKeyInfo->nFields,) then default_rc can be set to -1 to +** cause the search to find the last match, or +1 to cause the search to +** find the first match. +** +** The key comparison functions will set eqSeen to true if they ever +** get and equal results when comparing this structure to a b-tree record. +** When default_rc!=0, the search might end up on the record immediately +** before the first match or immediately after the last match. The +** eqSeen field will indicate whether or not an exact match exists in the +** b-tree. */ struct UnpackedRecord { KeyInfo *pKeyInfo; /* Collation and sort-order information */ - u16 nField; /* Number of entries in apMem[] */ - u16 flags; /* Boolean settings. UNPACKED_... below */ - i64 rowid; /* Used by UNPACKED_PREFIX_SEARCH */ Mem *aMem; /* Values */ + u16 nField; /* Number of entries in apMem[] */ + i8 default_rc; /* Comparison result if keys are equal */ + u8 errCode; /* Error detected by xRecordCompare (CORRUPT or NOMEM) */ + i8 r1; /* Value to return if (lhs > rhs) */ + i8 r2; /* Value to return if (rhs < lhs) */ + u8 eqSeen; /* True if an equality comparison has been seen */ }; -/* -** Allowed values of UnpackedRecord.flags -*/ -#define UNPACKED_NEED_FREE 0x0001 /* Memory is from sqlite3Malloc() */ -#define UNPACKED_NEED_DESTROY 0x0002 /* apMem[]s should all be destroyed */ -#define UNPACKED_IGNORE_ROWID 0x0004 /* Ignore trailing rowid on key1 */ -#define UNPACKED_INCRKEY 0x0008 /* Make this key an epsilon larger */ -#define UNPACKED_PREFIX_MATCH 0x0010 /* A prefix match is considered OK */ -#define UNPACKED_PREFIX_SEARCH 0x0020 /* A prefix match is considered OK */ /* ** Each SQL index is represented in memory by an ** instance of the following structure. ** @@ -8989,39 +12812,82 @@ ** must be unique and what to do if they are not. When Index.onError=OE_None, ** it means this is not a unique index. Otherwise it is a unique index ** and the value of Index.onError indicate the which conflict resolution ** algorithm to employ whenever an attempt is made to insert a non-unique ** element. +** +** While parsing a CREATE TABLE or CREATE INDEX statement in order to +** generate VDBE code (as opposed to parsing one read from an sqlite_master +** table as part of parsing an existing database schema), transient instances +** of this structure may be created. In this case the Index.tnum variable is +** used to store the address of a VDBE instruction, not a database page +** number (it cannot - the database page is not allocated until the VDBE +** program is executed). See convertToWithoutRowidTable() for details. */ struct Index { - char *zName; /* Name of this index */ - int nColumn; /* Number of columns in the table used by this index */ - int *aiColumn; /* Which columns are used by this index. 1st is 0 */ - unsigned *aiRowEst; /* Result of ANALYZE: Est. 
rows selected by each column */ - Table *pTable; /* The SQL table being indexed */ - int tnum; /* Page containing root of this index in database file */ - u8 onError; /* OE_Abort, OE_Ignore, OE_Replace, or OE_None */ - u8 autoIndex; /* True if is automatically created (ex: by UNIQUE) */ - char *zColAff; /* String defining the affinity of each column */ - Index *pNext; /* The next index associated with the same table */ - Schema *pSchema; /* Schema containing this index */ - u8 *aSortOrder; /* Array of size Index.nColumn. True==DESC, False==ASC */ - char **azColl; /* Array of collation sequence names for index */ - IndexSample *aSample; /* Array of SQLITE_INDEX_SAMPLES samples */ + char *zName; /* Name of this index */ + i16 *aiColumn; /* Which columns are used by this index. 1st is 0 */ + LogEst *aiRowLogEst; /* From ANALYZE: Est. rows selected by each column */ + Table *pTable; /* The SQL table being indexed */ + char *zColAff; /* String defining the affinity of each column */ + Index *pNext; /* The next index associated with the same table */ + Schema *pSchema; /* Schema containing this index */ + u8 *aSortOrder; /* for each column: True==DESC, False==ASC */ + const char **azColl; /* Array of collation sequence names for index */ + Expr *pPartIdxWhere; /* WHERE clause for partial indices */ + ExprList *aColExpr; /* Column expressions */ + int tnum; /* DB Page containing root of this index */ + LogEst szIdxRow; /* Estimated average row size in bytes */ + u16 nKeyCol; /* Number of columns forming the key */ + u16 nColumn; /* Number of columns stored in the index */ + u8 onError; /* OE_Abort, OE_Ignore, OE_Replace, or OE_None */ + unsigned idxType:2; /* 1==UNIQUE, 2==PRIMARY KEY, 0==CREATE INDEX */ + unsigned bUnordered:1; /* Use this index for == or IN queries only */ + unsigned uniqNotNull:1; /* True if UNIQUE and NOT NULL for all columns */ + unsigned isResized:1; /* True if resizeIndexObject() has been called */ + unsigned isCovering:1; /* True if this is a covering index */ + unsigned noSkipScan:1; /* Do not try to use skip-scan if true */ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + int nSample; /* Number of elements in aSample[] */ + int nSampleCol; /* Size of IndexSample.anEq[] and so on */ + tRowcnt *aAvgEq; /* Average nEq values for keys not in aSample */ + IndexSample *aSample; /* Samples of the left-most key */ + tRowcnt *aiRowEst; /* Non-logarithmic stat1 data for this index */ + tRowcnt nRowEst0; /* Non-logarithmic number of rows in the index */ +#endif }; /* -** Each sample stored in the sqlite_stat2 table is represented in memory -** using a structure of this type. +** Allowed values for Index.idxType +*/ +#define SQLITE_IDXTYPE_APPDEF 0 /* Created using CREATE INDEX */ +#define SQLITE_IDXTYPE_UNIQUE 1 /* Implements a UNIQUE constraint */ +#define SQLITE_IDXTYPE_PRIMARYKEY 2 /* Is the PRIMARY KEY for the table */ + +/* Return true if index X is a PRIMARY KEY index */ +#define IsPrimaryKeyIndex(X) ((X)->idxType==SQLITE_IDXTYPE_PRIMARYKEY) + +/* Return true if index X is a UNIQUE index */ +#define IsUniqueIndex(X) ((X)->onError!=OE_None) + +/* The Index.aiColumn[] values are normally positive integer. But +** there are some negative values that have special meaning: +*/ +#define XN_ROWID (-1) /* Indexed column is the rowid */ +#define XN_EXPR (-2) /* Indexed column is an expression */ + +/* +** Each sample stored in the sqlite_stat3 table is represented in memory +** using a structure of this type. 
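Several of the new Index fields above are filled in from SQL-level features: pPartIdxWhere from a partial index's WHERE clause, aColExpr (with XN_EXPR slots in aiColumn[]) from an index on expressions, and aiRowLogEst plus the IndexSample arrays from ANALYZE. A sketch of the SQL that exercises them, with made-up table and index names:

#include <sqlite3.h>

int index_features_demo(sqlite3 *db){
  return sqlite3_exec(db,
    "CREATE TABLE logs(id INTEGER PRIMARY KEY, level TEXT, msg TEXT);"
    /* Partial index: its WHERE clause ends up in Index.pPartIdxWhere. */
    "CREATE INDEX logs_errors ON logs(id) WHERE level='error';"
    /* Index on an expression: the expression is kept in Index.aColExpr
    ** and the matching aiColumn[] entry is XN_EXPR. */
    "CREATE INDEX logs_msg_lower ON logs(lower(msg));"
    /* ANALYZE populates sqlite_stat1 (and sqlite_stat3/4 when enabled),
    ** from which aiRowLogEst and the sample arrays are loaded. */
    "ANALYZE;",
    0, 0, 0);
}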
See documentation at the top of the +** analyze.c source file for additional information. */ struct IndexSample { - union { - char *z; /* Value if eType is SQLITE_TEXT or SQLITE_BLOB */ - double r; /* Value if eType is SQLITE_FLOAT or SQLITE_INTEGER */ - } u; - u8 eType; /* SQLITE_NULL, SQLITE_INTEGER ... etc. */ - u8 nByte; /* Size in byte of text or blob. */ + void *p; /* Pointer to sampled record */ + int n; /* Size of record in bytes */ + tRowcnt *anEq; /* Est. number of rows where the key equals this sample */ + tRowcnt *anLt; /* Est. number of rows where key is less than this sample */ + tRowcnt *anDLt; /* Est. number of distinct keys less than this sample */ }; /* ** Each token coming out of the lexer is an instance of ** this structure. Tokens are also used as part of an expression. @@ -9052,22 +12918,23 @@ u8 directMode; /* Direct rendering mode means take data directly ** from source tables rather than from accumulators */ u8 useSortingIdx; /* In direct mode, reference the sorting index rather ** than the source table */ int sortingIdx; /* Cursor number of the sorting index */ - ExprList *pGroupBy; /* The group by clause */ + int sortingIdxPTab; /* Cursor number of pseudo-table */ int nSortingColumn; /* Number of columns in the sorting index */ + int mnReg, mxReg; /* Range of registers allocated for aCol and aFunc */ + ExprList *pGroupBy; /* The group by clause */ struct AggInfo_col { /* For each column used in source tables */ Table *pTab; /* Source table */ int iTable; /* Cursor number of the source table */ int iColumn; /* Column number within the source table */ int iSorterColumn; /* Column number in the sorting index */ int iMem; /* Memory location that acts as accumulator */ Expr *pExpr; /* The original expression */ } *aCol; int nColumn; /* Number of used entries in aCol[] */ - int nColumnAlloc; /* Number of slots allocated for aCol[] */ int nAccumulator; /* Number of columns that show through to the output. ** Additional columns are used only as parameters to ** aggregate functions */ struct AggInfo_func { /* For each aggregate function */ Expr *pExpr; /* Expression encoding the function */ @@ -9074,11 +12941,10 @@ FuncDef *pFunc; /* The aggregate function implementation */ int iMem; /* Memory location that acts as accumulator */ int iDistinct; /* Ephemeral table used to enforce DISTINCT */ } *aFunc; int nFunc; /* Number of entries in aFunc[] */ - int nFuncAlloc; /* Number of slots allocated for aFunc[] */ }; /* ** The datatype ynVar is a signed integer, either 16-bit or 32-bit. ** Usually it is 16-bits. But if SQLITE_MAX_VARIABLE_NUMBER is greater @@ -9159,14 +13025,14 @@ ** allocated, regardless of whether or not EP_Reduced is set. */ struct Expr { u8 op; /* Operation performed by this node */ char affinity; /* The affinity of the column or 0 if not a column */ - u16 flags; /* Various flags. EP_* See below */ + u32 flags; /* Various flags. EP_* See below */ union { char *zToken; /* Token value. Zero terminated and dequoted */ - int iValue; /* Integer value if EP_IntValue */ + int iValue; /* Non-negative integer value if EP_IntValue */ } u; /* If the EP_TokenOnly flag is set in the Expr.flags mask, then no ** space is allocated for the fields below this point. An attempt to ** access them will result in a segfault or malfunction. 
@@ -9173,82 +13039,87 @@ *********************************************************************/ Expr *pLeft; /* Left subnode */ Expr *pRight; /* Right subnode */ union { - ExprList *pList; /* Function arguments or in "<expr> IN (<expr-list)" */ - Select *pSelect; /* Used for sub-selects and "<expr> IN (<select>)" */ + ExprList *pList; /* op = IN, EXISTS, SELECT, CASE, FUNCTION, BETWEEN */ + Select *pSelect; /* EP_xIsSelect and op = IN, EXISTS, SELECT */ } x; - CollSeq *pColl; /* The collation type of the column or 0 */ /* If the EP_Reduced flag is set in the Expr.flags mask, then no ** space is allocated for the fields below this point. An attempt to ** access them will result in a segfault or malfunction. *********************************************************************/ +#if SQLITE_MAX_EXPR_DEPTH>0 + int nHeight; /* Height of the tree headed by this node */ +#endif int iTable; /* TK_COLUMN: cursor number of table holding column ** TK_REGISTER: register number - ** TK_TRIGGER: 1 -> new, 0 -> old */ + ** TK_TRIGGER: 1 -> new, 0 -> old + ** EP_Unlikely: 134217728 times likelihood */ ynVar iColumn; /* TK_COLUMN: column index. -1 for rowid. ** TK_VARIABLE: variable number (always >= 1). */ i16 iAgg; /* Which entry in pAggInfo->aCol[] or ->aFunc[] */ i16 iRightJoinTable; /* If EP_FromJoin, the right table of the join */ - u8 flags2; /* Second set of flags. EP2_... */ - u8 op2; /* If a TK_REGISTER, the original value of Expr.op */ + u8 op2; /* TK_REGISTER: original value of Expr.op + ** TK_COLUMN: the value of p5 for OP_Column + ** TK_AGG_FUNCTION: nesting depth */ AggInfo *pAggInfo; /* Used by TK_AGG_COLUMN and TK_AGG_FUNCTION */ Table *pTab; /* Table for TK_COLUMN expressions. */ -#if SQLITE_MAX_EXPR_DEPTH>0 - int nHeight; /* Height of the tree headed by this node */ -#endif }; /* ** The following are the meanings of bits in the Expr.flags field. */ -#define EP_FromJoin 0x0001 /* Originated in ON or USING clause of a join */ -#define EP_Agg 0x0002 /* Contains one or more aggregate functions */ -#define EP_Resolved 0x0004 /* IDs have been resolved to COLUMNs */ -#define EP_Error 0x0008 /* Expression contains one or more errors */ -#define EP_Distinct 0x0010 /* Aggregate function with DISTINCT keyword */ -#define EP_VarSelect 0x0020 /* pSelect is correlated, not constant */ -#define EP_DblQuoted 0x0040 /* token.z was originally in "..." */ -#define EP_InfixFunc 0x0080 /* True for an infix function: LIKE, GLOB, etc */ -#define EP_ExpCollate 0x0100 /* Collating sequence specified explicitly */ -#define EP_FixedDest 0x0200 /* Result needed in a specific register */ -#define EP_IntValue 0x0400 /* Integer value contained in u.iValue */ -#define EP_xIsSelect 0x0800 /* x.pSelect is valid (otherwise x.pList is) */ - -#define EP_Reduced 0x1000 /* Expr struct is EXPR_REDUCEDSIZE bytes only */ -#define EP_TokenOnly 0x2000 /* Expr struct is EXPR_TOKENONLYSIZE bytes only */ -#define EP_Static 0x4000 /* Held in memory not obtained from malloc() */ - -/* -** The following are the meanings of bits in the Expr.flags2 field. -*/ -#define EP2_MallocedToken 0x0001 /* Need to sqlite3DbFree() Expr.zToken */ -#define EP2_Irreducible 0x0002 /* Cannot EXPRDUP_REDUCE this Expr */ - -/* -** The pseudo-routine sqlite3ExprSetIrreducible sets the EP2_Irreducible -** flag on an expression structure. This flag is used for VV&A only. The -** routine is implemented as a macro that only works when in debugging mode, -** so as not to burden production code. 
-*/ -#ifdef SQLITE_DEBUG -# define ExprSetIrreducible(X) (X)->flags2 |= EP2_Irreducible -#else -# define ExprSetIrreducible(X) -#endif +#define EP_FromJoin 0x000001 /* Originates in ON/USING clause of outer join */ +#define EP_Agg 0x000002 /* Contains one or more aggregate functions */ +#define EP_Resolved 0x000004 /* IDs have been resolved to COLUMNs */ +#define EP_Error 0x000008 /* Expression contains one or more errors */ +#define EP_Distinct 0x000010 /* Aggregate function with DISTINCT keyword */ +#define EP_VarSelect 0x000020 /* pSelect is correlated, not constant */ +#define EP_DblQuoted 0x000040 /* token.z was originally in "..." */ +#define EP_InfixFunc 0x000080 /* True for an infix function: LIKE, GLOB, etc */ +#define EP_Collate 0x000100 /* Tree contains a TK_COLLATE operator */ +#define EP_Generic 0x000200 /* Ignore COLLATE or affinity on this tree */ +#define EP_IntValue 0x000400 /* Integer value contained in u.iValue */ +#define EP_xIsSelect 0x000800 /* x.pSelect is valid (otherwise x.pList is) */ +#define EP_Skip 0x001000 /* COLLATE, AS, or UNLIKELY */ +#define EP_Reduced 0x002000 /* Expr struct EXPR_REDUCEDSIZE bytes only */ +#define EP_TokenOnly 0x004000 /* Expr struct EXPR_TOKENONLYSIZE bytes only */ +#define EP_Static 0x008000 /* Held in memory not obtained from malloc() */ +#define EP_MemToken 0x010000 /* Need to sqlite3DbFree() Expr.zToken */ +#define EP_NoReduce 0x020000 /* Cannot EXPRDUP_REDUCE this Expr */ +#define EP_Unlikely 0x040000 /* unlikely() or likelihood() function */ +#define EP_ConstFunc 0x080000 /* A SQLITE_FUNC_CONSTANT or _SLOCHNG function */ +#define EP_CanBeNull 0x100000 /* Can be null despite NOT NULL constraint */ +#define EP_Subquery 0x200000 /* Tree contains a TK_SELECT operator */ +#define EP_Alias 0x400000 /* Is an alias for a result set column */ + +/* +** Combinations of two or more EP_* flags +*/ +#define EP_Propagate (EP_Collate|EP_Subquery) /* Propagate these bits up tree */ /* ** These macros can be used to test, set, or clear bits in the ** Expr.flags field. */ -#define ExprHasProperty(E,P) (((E)->flags&(P))==(P)) -#define ExprHasAnyProperty(E,P) (((E)->flags&(P))!=0) +#define ExprHasProperty(E,P) (((E)->flags&(P))!=0) +#define ExprHasAllProperty(E,P) (((E)->flags&(P))==(P)) #define ExprSetProperty(E,P) (E)->flags|=(P) #define ExprClearProperty(E,P) (E)->flags&=~(P) + +/* The ExprSetVVAProperty() macro is used for Verification, Validation, +** and Accreditation only. It works like ExprSetProperty() during VVA +** processes but is a no-op for delivery. +*/ +#ifdef SQLITE_DEBUG +# define ExprSetVVAProperty(E,P) (E)->flags|=(P) +#else +# define ExprSetVVAProperty(E,P) +#endif /* ** Macros to determine the number of bytes required by a normal Expr ** struct, an Expr struct with the EP_Reduced flag set in Expr.flags ** and an Expr struct with the EP_TokenOnly flag set. @@ -9268,24 +13139,37 @@ ** name. An expr/name combination can be used in several ways, such ** as the list of "expr AS ID" fields following a "SELECT" or in the ** list of "ID = expr" items in an UPDATE. A list of expressions can ** also be used as the argument to a function, in which case the a.zName ** field is not used. +** +** By default the Expr.zSpan field holds a human-readable description of +** the expression that is used in the generation of error messages and +** column labels. In this case, Expr.zSpan is typically the text of a +** column expression as it exists in a SELECT statement. 
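The EP_Unlikely flag above refers to the unlikely() and likelihood() SQL functions, which return their first argument unchanged and serve only as hints about how selective the enclosing WHERE term is expected to be. A sketch, reusing the hypothetical logs table from the earlier index example:

#include <sqlite3.h>

int planner_hint_demo(sqlite3 *db){
  return sqlite3_exec(db,
    /* Hint that the term is rarely true. */
    "SELECT * FROM logs WHERE unlikely(level='error');"
    /* Same idea with an explicit probability between 0.0 and 1.0. */
    "SELECT * FROM logs WHERE likelihood(level='error', 0.05);",
    0, 0, 0);
}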
However, if +** the bSpanIsTab flag is set, then zSpan is overloaded to mean the name +** of the result column in the form: DATABASE.TABLE.COLUMN. This later +** form is used for name resolution with nested FROM clauses. */ struct ExprList { int nExpr; /* Number of expressions on the list */ - int nAlloc; /* Number of entries allocated below */ - int iECursor; /* VDBE Cursor associated with this ExprList */ - struct ExprList_item { - Expr *pExpr; /* The list of expressions */ - char *zName; /* Token associated with this expression */ - char *zSpan; /* Original text of the expression */ - u8 sortOrder; /* 1 for DESC or 0 for ASC */ - u8 done; /* A flag to indicate when processing is finished */ - u16 iCol; /* For ORDER BY, column number in result set */ - u16 iAlias; /* Index into Parse.aAlias[] for zName */ - } *a; /* One entry for each expression */ + struct ExprList_item { /* For each expression in the list */ + Expr *pExpr; /* The list of expressions */ + char *zName; /* Token associated with this expression */ + char *zSpan; /* Original text of the expression */ + u8 sortOrder; /* 1 for DESC or 0 for ASC */ + unsigned done :1; /* A flag to indicate when processing is finished */ + unsigned bSpanIsTab :1; /* zSpan holds DB.TABLE.COLUMN */ + unsigned reusable :1; /* Constant expression is reusable */ + union { + struct { + u16 iOrderByCol; /* For ORDER BY, column number in result set */ + u16 iAlias; /* Index into Parse.aAlias[] for zName */ + } x; + int iConstExprReg; /* Register in which Expr value is cached */ + } u; + } *a; /* Alloc a power of two greater or equal to nExpr */ }; /* ** An instance of this structure is used by the parser to record both ** the parse tree for an expression and the span of input text for an @@ -9316,11 +13200,10 @@ struct IdList_item { char *zName; /* Name of the identifier */ int idx; /* Index in some Table.aCol[] of a column named zName */ } *a; int nId; /* Number of identifiers on the list */ - int nAlloc; /* Number of entries allocated for a[] below */ }; /* ** The bitmask datatype defined below is used for various optimizations. ** @@ -9333,10 +13216,16 @@ /* ** The number of bits in a Bitmask. "BMS" means "BitMask Size". */ #define BMS ((int)(sizeof(Bitmask)*8)) +/* +** A bit in a Bitmask +*/ +#define MASKBIT(n) (((Bitmask)1)<<(n)) +#define MASKBIT32(n) (((unsigned int)1)<<(n)) + /* ** The following structure describes the FROM clause of a SELECT statement. ** Each table or subquery in the FROM clause is a separate element of ** the SrcList.a[] array. ** @@ -9353,27 +13242,43 @@ ** ** In the colUsed field, the high-order bit (bit 63) is set if the table ** contains more than 63 columns and the 64-th or later column is used. */ struct SrcList { - i16 nSrc; /* Number of tables or subqueries in the FROM clause */ - i16 nAlloc; /* Number of entries allocated in a[] below */ + int nSrc; /* Number of tables or subqueries in the FROM clause */ + u32 nAlloc; /* Number of entries allocated in a[] below */ struct SrcList_item { + Schema *pSchema; /* Schema to which this item is fixed */ char *zDatabase; /* Name of database holding this table */ char *zName; /* Name of the table */ char *zAlias; /* The "B" part of a "A AS B" phrase. 
zName is the "A" */ Table *pTab; /* An SQL table corresponding to zName */ Select *pSelect; /* A SELECT statement used in place of a table name */ - u8 isPopulated; /* Temporary table associated with SELECT is populated */ - u8 jointype; /* Type of join between this able and the previous */ - u8 notIndexed; /* True if there is a NOT INDEXED clause */ + int addrFillSub; /* Address of subroutine to manifest a subquery */ + int regReturn; /* Register holding return address of addrFillSub */ + int regResult; /* Registers holding results of a co-routine */ + struct { + u8 jointype; /* Type of join between this able and the previous */ + unsigned notIndexed :1; /* True if there is a NOT INDEXED clause */ + unsigned isIndexedBy :1; /* True if there is an INDEXED BY clause */ + unsigned isTabFunc :1; /* True if table-valued-function syntax */ + unsigned isCorrelated :1; /* True if sub-query is correlated */ + unsigned viaCoroutine :1; /* Implemented as a co-routine */ + unsigned isRecursive :1; /* True for recursive reference in WITH */ + } fg; +#ifndef SQLITE_OMIT_EXPLAIN + u8 iSelectId; /* If pSelect!=0, the id of the sub-select in EQP */ +#endif int iCursor; /* The VDBE cursor number used to access this table */ Expr *pOn; /* The ON clause of a join */ IdList *pUsing; /* The USING clause of a join */ Bitmask colUsed; /* Bit N (1<<N) set if column N of pTab is used */ - char *zIndex; /* Identifier from "INDEXED BY <zIndex>" clause */ - Index *pIndex; /* Index structure corresponding to zIndex, if any */ + union { + char *zIndexedBy; /* Identifier from "INDEXED BY <zIndex>" clause */ + ExprList *pFuncArg; /* Arguments to table-valued-function */ + } u1; + Index *pIBIndex; /* Index structure corresponding to u1.zIndexedBy */ } a[1]; /* One entry for each identifier on the list */ }; /* ** Permitted values of the SrcList.a.jointype field @@ -9385,113 +13290,36 @@ #define JT_RIGHT 0x0010 /* Right outer join */ #define JT_OUTER 0x0020 /* The "OUTER" keyword is present */ #define JT_ERROR 0x0040 /* unknown or unsupported join type */ -/* -** A WherePlan object holds information that describes a lookup -** strategy. -** -** This object is intended to be opaque outside of the where.c module. -** It is included here only so that that compiler will know how big it -** is. None of the fields in this object should be used outside of -** the where.c module. -** -** Within the union, pIdx is only used when wsFlags&WHERE_INDEXED is true. -** pTerm is only used when wsFlags&WHERE_MULTI_OR is true. And pVtabIdx -** is only used when wsFlags&WHERE_VIRTUALTABLE is true. It is never the -** case that more than one of these conditions is true. -*/ -struct WherePlan { - u32 wsFlags; /* WHERE_* flags that describe the strategy */ - u32 nEq; /* Number of == constraints */ - union { - Index *pIdx; /* Index when WHERE_INDEXED is true */ - struct WhereTerm *pTerm; /* WHERE clause term for OR-search */ - sqlite3_index_info *pVtabIdx; /* Virtual table index to use */ - } u; -}; - -/* -** For each nested loop in a WHERE clause implementation, the WhereInfo -** structure contains a single instance of this structure. This structure -** is intended to be private the the where.c module and should not be -** access or modified by other modules. -** -** The pIdxInfo field is used to help pick the best index on a -** virtual table. The pIdxInfo pointer contains indexing -** information for the i-th table in the FROM clause before reordering. -** All the pIdxInfo pointers are freed by whereInfoFree() in where.c. 
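The fg.isIndexedBy and fg.notIndexed bits, together with u1.zIndexedBy and pIBIndex above, record the INDEXED BY and NOT INDEXED clauses of a table reference. A self-contained sketch with hypothetical names:

#include <sqlite3.h>

int indexed_by_demo(sqlite3 *db){
  return sqlite3_exec(db,
    "CREATE TABLE IF NOT EXISTS t1(a INTEGER, b INTEGER);"
    "CREATE INDEX IF NOT EXISTS t1a ON t1(a);"
    "SELECT * FROM t1 INDEXED BY t1a WHERE a=10;"   /* must use index t1a */
    "SELECT * FROM t1 NOT INDEXED WHERE a=10;",     /* forces a table scan */
    0, 0, 0);
}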
-** All other information in the i-th WhereLevel object for the i-th table -** after FROM clause ordering. -*/ -struct WhereLevel { - WherePlan plan; /* query plan for this element of the FROM clause */ - int iLeftJoin; /* Memory cell used to implement LEFT OUTER JOIN */ - int iTabCur; /* The VDBE cursor used to access the table */ - int iIdxCur; /* The VDBE cursor used to access pIdx */ - int addrBrk; /* Jump here to break out of the loop */ - int addrNxt; /* Jump here to start the next IN combination */ - int addrCont; /* Jump here to continue with the next loop cycle */ - int addrFirst; /* First instruction of interior of the loop */ - u8 iFrom; /* Which entry in the FROM clause */ - u8 op, p5; /* Opcode and P5 of the opcode that ends the loop */ - int p1, p2; /* Operands of the opcode used to ends the loop */ - union { /* Information that depends on plan.wsFlags */ - struct { - int nIn; /* Number of entries in aInLoop[] */ - struct InLoop { - int iCur; /* The VDBE cursor used by this IN operator */ - int addrInTop; /* Top of the IN loop */ - } *aInLoop; /* Information about each nested IN operator */ - } in; /* Used when plan.wsFlags&WHERE_IN_ABLE */ - } u; - - /* The following field is really not part of the current level. But - ** we need a place to cache virtual table index information for each - ** virtual table in the FROM clause and the WhereLevel structure is - ** a convenient place since there is one WhereLevel for each FROM clause - ** element. - */ - sqlite3_index_info *pIdxInfo; /* Index info for n-th source table */ -}; - /* ** Flags appropriate for the wctrlFlags parameter of sqlite3WhereBegin() ** and the WhereInfo.wctrlFlags member. */ #define WHERE_ORDERBY_NORMAL 0x0000 /* No-op */ #define WHERE_ORDERBY_MIN 0x0001 /* ORDER BY processing for min() func */ #define WHERE_ORDERBY_MAX 0x0002 /* ORDER BY processing for max() func */ #define WHERE_ONEPASS_DESIRED 0x0004 /* Want to do one-pass UPDATE/DELETE */ #define WHERE_DUPLICATES_OK 0x0008 /* Ok to return a row more than once */ -#define WHERE_OMIT_OPEN 0x0010 /* Table cursors are already open */ -#define WHERE_OMIT_CLOSE 0x0020 /* Omit close of table & index cursors */ -#define WHERE_FORCE_TABLE 0x0040 /* Do not use an index-only search */ -#define WHERE_ONETABLE_ONLY 0x0080 /* Only code the 1st table in pTabList */ - -/* -** The WHERE clause processing routine has two halves. The -** first part does the start of the WHERE loop and the second -** half does the tail of the WHERE loop. An instance of -** this structure is returned by the first half and passed -** into the second half to give some continuity. 
-*/ -struct WhereInfo { - Parse *pParse; /* Parsing and code generating context */ - u16 wctrlFlags; /* Flags originally passed to sqlite3WhereBegin() */ - u8 okOnePass; /* Ok to use one-pass algorithm for UPDATE or DELETE */ - u8 untestedTerms; /* Not all WHERE terms resolved by outer loop */ - SrcList *pTabList; /* List of tables in the join */ - int iTop; /* The very beginning of the WHERE loop */ - int iContinue; /* Jump here to continue with next record */ - int iBreak; /* Jump here to break out of the loop */ - int nLevel; /* Number of nested loop */ - struct WhereClause *pWC; /* Decomposition of the WHERE clause */ - double savedNQueryLoop; /* pParse->nQueryLoop outside the WHERE loop */ - WhereLevel a[1]; /* Information about each nest loop in WHERE */ -}; +#define WHERE_OMIT_OPEN_CLOSE 0x0010 /* Table cursors are already open */ +#define WHERE_FORCE_TABLE 0x0020 /* Do not use an index-only search */ +#define WHERE_ONETABLE_ONLY 0x0040 /* Only code the 1st table in pTabList */ +#define WHERE_NO_AUTOINDEX 0x0080 /* Disallow automatic indexes */ +#define WHERE_GROUPBY 0x0100 /* pOrderBy is really a GROUP BY */ +#define WHERE_DISTINCTBY 0x0200 /* pOrderby is really a DISTINCT clause */ +#define WHERE_WANT_DISTINCT 0x0400 /* All output needs to be distinct */ +#define WHERE_SORTBYGROUP 0x0800 /* Support sqlite3WhereIsSorted() */ +#define WHERE_REOPEN_IDX 0x1000 /* Try to use OP_ReopenIdx */ +#define WHERE_ONEPASS_MULTIROW 0x2000 /* ONEPASS is ok with multiple rows */ + +/* Allowed return values from sqlite3WhereIsDistinct() +*/ +#define WHERE_DISTINCT_NOOP 0 /* DISTINCT keyword not used */ +#define WHERE_DISTINCT_UNIQUE 1 /* No duplicates */ +#define WHERE_DISTINCT_ORDERED 2 /* All duplicates are adjacent */ +#define WHERE_DISTINCT_UNORDERED 3 /* Duplicates are scattered */ /* ** A NameContext defines a context in which to resolve table and column ** names. The context consists of a list of tables (the pSrcList) field and ** a list of named expression (pEList). The named expression list may @@ -9513,21 +13341,33 @@ ** subqueries looking for a match. */ struct NameContext { Parse *pParse; /* The parser */ SrcList *pSrcList; /* One or more tables used to resolve names */ - ExprList *pEList; /* Optional list of named expressions */ + ExprList *pEList; /* Optional list of result-set columns */ + AggInfo *pAggInfo; /* Information about aggregates at this level */ + NameContext *pNext; /* Next outer name context. NULL for outermost */ int nRef; /* Number of names resolved by this context */ int nErr; /* Number of errors encountered while resolving names */ - u8 allowAgg; /* Aggregate functions allowed here */ - u8 hasAgg; /* True if aggregates are seen */ - u8 isCheck; /* True if resolving names in a CHECK constraint */ - int nDepth; /* Depth of subquery recursion. 1 for no recursion */ - AggInfo *pAggInfo; /* Information about aggregates at this level */ - NameContext *pNext; /* Next outer name context. NULL for outermost */ + u16 ncFlags; /* Zero or more NC_* flags defined below */ }; +/* +** Allowed values for the NameContext, ncFlags field. +** +** Note: NC_MinMaxAgg must have the same value as SF_MinMaxAgg and +** SQLITE_FUNC_MINMAX. 
+** +*/ +#define NC_AllowAgg 0x0001 /* Aggregate functions are allowed here */ +#define NC_HasAgg 0x0002 /* One or more aggregate functions seen */ +#define NC_IsCheck 0x0004 /* True if resolving names in a CHECK constraint */ +#define NC_InAggFunc 0x0008 /* True if analyzing arguments to an agg func */ +#define NC_PartIdx 0x0010 /* True if resolving a partial index WHERE */ +#define NC_IdxExpr 0x0020 /* True if resolving columns of CREATE INDEX */ +#define NC_MinMaxAgg 0x1000 /* min/max aggregates seen. See note above */ + /* ** An instance of the following structure contains all information ** needed to generate code for a single SELECT statement. ** ** nLimit is set to -1 if there is no LIMIT clause. nOffset is set to 0. @@ -9541,82 +13381,159 @@ ** the P4_KEYINFO and P2 parameters later. Neither the KeyInfo nor ** the number of columns in P2 can be computed at the same time ** as the OP_OpenEphm instruction is coded because not ** enough information about the compound query is known at that point. ** The KeyInfo for addrOpenTran[0] and [1] contains collating sequences -** for the result set. The KeyInfo for addrOpenTran[2] contains collating +** for the result set. The KeyInfo for addrOpenEphm[2] contains collating ** sequences for the ORDER BY clause. */ struct Select { ExprList *pEList; /* The fields of the result */ u8 op; /* One of: TK_UNION TK_ALL TK_INTERSECT TK_EXCEPT */ - char affinity; /* MakeRecord with this affinity for SRT_Set */ u16 selFlags; /* Various SF_* values */ + int iLimit, iOffset; /* Memory registers holding LIMIT & OFFSET counters */ +#if SELECTTRACE_ENABLED + char zSelName[12]; /* Symbolic name of this SELECT use for debugging */ +#endif + int addrOpenEphm[2]; /* OP_OpenEphem opcodes related to this select */ + u64 nSelectRow; /* Estimated number of result rows */ SrcList *pSrc; /* The FROM clause */ Expr *pWhere; /* The WHERE clause */ ExprList *pGroupBy; /* The GROUP BY clause */ Expr *pHaving; /* The HAVING clause */ ExprList *pOrderBy; /* The ORDER BY clause */ Select *pPrior; /* Prior select in a compound select statement */ Select *pNext; /* Next select to the left in a compound */ - Select *pRightmost; /* Right-most select in a compound select statement */ Expr *pLimit; /* LIMIT expression. NULL means not used. */ Expr *pOffset; /* OFFSET expression. NULL means not used. */ - int iLimit, iOffset; /* Memory registers holding LIMIT & OFFSET counters */ - int addrOpenEphm[3]; /* OP_OpenEphem opcodes related to this select */ + With *pWith; /* WITH clause attached to this select. Or NULL. */ }; /* ** Allowed values for Select.selFlags. The "SF" prefix stands for ** "Select Flag". 
*/ #define SF_Distinct 0x0001 /* Output should be DISTINCT */ -#define SF_Resolved 0x0002 /* Identifiers have been resolved */ -#define SF_Aggregate 0x0004 /* Contains aggregate functions */ -#define SF_UsesEphemeral 0x0008 /* Uses the OpenEphemeral opcode */ -#define SF_Expanded 0x0010 /* sqlite3SelectExpand() called on this */ -#define SF_HasTypeInfo 0x0020 /* FROM subqueries have Table metadata */ +#define SF_All 0x0002 /* Includes the ALL keyword */ +#define SF_Resolved 0x0004 /* Identifiers have been resolved */ +#define SF_Aggregate 0x0008 /* Contains aggregate functions */ +#define SF_UsesEphemeral 0x0010 /* Uses the OpenEphemeral opcode */ +#define SF_Expanded 0x0020 /* sqlite3SelectExpand() called on this */ +#define SF_HasTypeInfo 0x0040 /* FROM subqueries have Table metadata */ +#define SF_Compound 0x0080 /* Part of a compound query */ +#define SF_Values 0x0100 /* Synthesized from VALUES clause */ +#define SF_MultiValue 0x0200 /* Single VALUES term with multiple rows */ +#define SF_NestedFrom 0x0400 /* Part of a parenthesized FROM clause */ +#define SF_MaybeConvert 0x0800 /* Need convertCompoundSelectToSubquery() */ +#define SF_MinMaxAgg 0x1000 /* Aggregate containing min() or max() */ +#define SF_Recursive 0x2000 /* The recursive part of a recursive CTE */ +#define SF_Converted 0x4000 /* By convertCompoundSelectToSubquery() */ +#define SF_IncludeHidden 0x8000 /* Include hidden columns in output */ /* -** The results of a select can be distributed in several ways. The -** "SRT" prefix means "SELECT Result Type". +** The results of a SELECT can be distributed in several ways, as defined +** by one of the following macros. The "SRT" prefix means "SELECT Result +** Type". +** +** SRT_Union Store results as a key in a temporary index +** identified by pDest->iSDParm. +** +** SRT_Except Remove results from the temporary index pDest->iSDParm. +** +** SRT_Exists Store a 1 in memory cell pDest->iSDParm if the result +** set is not empty. +** +** SRT_Discard Throw the results away. This is used by SELECT +** statements within triggers whose only purpose is +** the side-effects of functions. +** +** All of the above are free to ignore their ORDER BY clause. Those that +** follow must honor the ORDER BY clause. +** +** SRT_Output Generate a row of output (using the OP_ResultRow +** opcode) for each row in the result set. +** +** SRT_Mem Only valid if the result is a single column. +** Store the first column of the first result row +** in register pDest->iSDParm then abandon the rest +** of the query. This destination implies "LIMIT 1". +** +** SRT_Set The result must be a single column. Store each +** row of result as the key in table pDest->iSDParm. +** Apply the affinity pDest->affSdst before storing +** results. Used to implement "IN (SELECT ...)". +** +** SRT_EphemTab Create an temporary table pDest->iSDParm and store +** the result there. The cursor is left open after +** returning. This is like SRT_Table except that +** this destination uses OP_OpenEphemeral to create +** the table first. +** +** SRT_Coroutine Generate a co-routine that returns a new row of +** results each time it is invoked. The entry point +** of the co-routine is stored in register pDest->iSDParm +** and the result row is stored in pDest->nDest registers +** starting with pDest->iSdst. +** +** SRT_Table Store results in temporary table pDest->iSDParm. +** SRT_Fifo This is like SRT_EphemTab except that the table +** is assumed to already be open. 
SRT_Fifo has +** the additional property of being able to ignore +** the ORDER BY clause. +** +** SRT_DistFifo Store results in a temporary table pDest->iSDParm. +** But also use temporary table pDest->iSDParm+1 as +** a record of all prior results and ignore any duplicate +** rows. Name means: "Distinct Fifo". +** +** SRT_Queue Store results in priority queue pDest->iSDParm (really +** an index). Append a sequence number so that all entries +** are distinct. +** +** SRT_DistQueue Store results in priority queue pDest->iSDParm only if +** the same record has never been stored before. The +** index at pDest->iSDParm+1 hold all prior stores. */ #define SRT_Union 1 /* Store result as keys in an index */ #define SRT_Except 2 /* Remove result from a UNION index */ #define SRT_Exists 3 /* Store 1 if the result is not empty */ #define SRT_Discard 4 /* Do not save the results anywhere */ +#define SRT_Fifo 5 /* Store result as data with an automatic rowid */ +#define SRT_DistFifo 6 /* Like SRT_Fifo, but unique results only */ +#define SRT_Queue 7 /* Store result in an queue */ +#define SRT_DistQueue 8 /* Like SRT_Queue, but unique results only */ /* The ORDER BY clause is ignored for all of the above */ -#define IgnorableOrderby(X) ((X->eDest)<=SRT_Discard) - -#define SRT_Output 5 /* Output each row of result */ -#define SRT_Mem 6 /* Store result in a memory cell */ -#define SRT_Set 7 /* Store results as keys in an index */ -#define SRT_Table 8 /* Store result as data with an automatic rowid */ -#define SRT_EphemTab 9 /* Create transient tab and store like SRT_Table */ -#define SRT_Coroutine 10 /* Generate a single row of result */ +#define IgnorableOrderby(X) ((X->eDest)<=SRT_DistQueue) + +#define SRT_Output 9 /* Output each row of result */ +#define SRT_Mem 10 /* Store result in a memory cell */ +#define SRT_Set 11 /* Store results as keys in an index */ +#define SRT_EphemTab 12 /* Create transient tab and store like SRT_Table */ +#define SRT_Coroutine 13 /* Generate a single row of result */ +#define SRT_Table 14 /* Store result as data with an automatic rowid */ /* -** A structure used to customize the behavior of sqlite3Select(). See -** comments above sqlite3Select() for details. +** An instance of this object describes where to put of the results of +** a SELECT statement. */ -typedef struct SelectDest SelectDest; struct SelectDest { - u8 eDest; /* How to dispose of the results */ - u8 affinity; /* Affinity used when eDest==SRT_Set */ - int iParm; /* A parameter used by the eDest disposal method */ - int iMem; /* Base register where results are written */ - int nMem; /* Number of registers allocated */ + u8 eDest; /* How to dispose of the results. On of SRT_* above. */ + char affSdst; /* Affinity used when eDest==SRT_Set */ + int iSDParm; /* A parameter used by the eDest disposal method */ + int iSdst; /* Base register where results are written */ + int nSdst; /* Number of registers allocated */ + ExprList *pOrderBy; /* Key columns for SRT_Queue and SRT_DistQueue */ }; /* ** During code generation of statements that do inserts into AUTOINCREMENT ** tables, the following information is attached to the Table.u.autoInc.p ** pointer of each autoincrement table to record some side information that ** the code generator needs. We have to keep per-table autoincrement -** information in case inserts are down within triggers. Triggers do not +** information in case inserts are done within triggers. 
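At the SQL level, the autoincrement bookkeeping described above is driven by the AUTOINCREMENT keyword and persisted in the sqlite_sequence table. A minimal sketch with a hypothetical table t2:

#include <sqlite3.h>

int autoinc_demo(sqlite3 *db){
  return sqlite3_exec(db,
    "CREATE TABLE t2(id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT);"
    "INSERT INTO t2(v) VALUES('first');"
    /* The largest rowid handed out so far is recorded here. */
    "SELECT seq FROM sqlite_sequence WHERE name='t2';",
    0, 0, 0);
}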
Triggers do not ** normally coordinate their activities, but we do need to coordinate the ** loading and saving of autoincrement information. */ struct AutoincInfo { AutoincInfo *pNext; /* Next info block in a list of them all */ @@ -9650,16 +13567,35 @@ ** statements). Similarly, the TriggerPrg.aColmask[1] variable is set to ** a mask of new.* columns used by the program. */ struct TriggerPrg { Trigger *pTrigger; /* Trigger this program was coded from */ - int orconf; /* Default ON CONFLICT policy */ + TriggerPrg *pNext; /* Next entry in Parse.pTriggerPrg list */ SubProgram *pProgram; /* Program implementing pTrigger/orconf */ + int orconf; /* Default ON CONFLICT policy */ u32 aColmask[2]; /* Masks of old.*, new.* columns accessed */ - TriggerPrg *pNext; /* Next entry in Parse.pTriggerPrg list */ }; +/* +** The yDbMask datatype for the bitmask of all attached databases. +*/ +#if SQLITE_MAX_ATTACHED>30 + typedef unsigned char yDbMask[(SQLITE_MAX_ATTACHED+9)/8]; +# define DbMaskTest(M,I) (((M)[(I)/8]&(1<<((I)&7)))!=0) +# define DbMaskZero(M) memset((M),0,sizeof(M)) +# define DbMaskSet(M,I) (M)[(I)/8]|=(1<<((I)&7)) +# define DbMaskAllZero(M) sqlite3DbMaskAllZero(M) +# define DbMaskNonZero(M) (sqlite3DbMaskAllZero(M)==0) +#else + typedef unsigned int yDbMask; +# define DbMaskTest(M,I) (((M)&(((yDbMask)1)<<(I)))!=0) +# define DbMaskZero(M) (M)=0 +# define DbMaskSet(M,I) (M)|=(((yDbMask)1)<<(I)) +# define DbMaskAllZero(M) (M)==0 +# define DbMaskNonZero(M) (M)!=0 +#endif + /* ** An SQL parser context. A copy of this structure is passed through ** the parser and down into all the parser action routine in order to ** carry around information that is global to the entire parse. ** @@ -9674,94 +13610,118 @@ ** compiled. Function sqlite3TableLock() is used to add entries to the ** list. */ struct Parse { sqlite3 *db; /* The main database structure */ - int rc; /* Return code from execution */ char *zErrMsg; /* An error message */ Vdbe *pVdbe; /* An engine for executing database bytecode */ + int rc; /* Return code from execution */ u8 colNamesSet; /* TRUE after OP_ColumnName has been issued to pVdbe */ - u8 nameClash; /* A permanent table name clashes with temp table name */ u8 checkSchema; /* Causes schema cookie check after an error */ u8 nested; /* Number of nested calls to the parser/code generator */ - u8 parseError; /* True after a parsing error. 
Ticket #1794 */ u8 nTempReg; /* Number of temporary registers in aTempReg[] */ - u8 nTempInUse; /* Number of aTempReg[] currently checked out */ + u8 isMultiWrite; /* True if statement may modify/insert multiple rows */ + u8 mayAbort; /* True if statement may throw an ABORT exception */ + u8 hasCompound; /* Need to invoke convertCompoundSelectToSubquery() */ + u8 okConstFactor; /* OK to factor out constants */ + u8 disableLookaside; /* Number of times lookaside has been disabled */ int aTempReg[8]; /* Holding area for temporary registers */ int nRangeReg; /* Size of the temporary register block */ int iRangeReg; /* First register in temporary register block */ int nErr; /* Number of errors seen */ int nTab; /* Number of previously allocated VDBE cursors */ int nMem; /* Number of memory cells used so far */ int nSet; /* Number of sets used so far */ + int nOnce; /* Number of OP_Once instructions so far */ + int nOpAlloc; /* Number of slots allocated for Vdbe.aOp[] */ + int szOpAlloc; /* Bytes of memory space allocated for Vdbe.aOp[] */ + int iFixedOp; /* Never back out opcodes iFixedOp-1 or earlier */ int ckBase; /* Base register of data during check constraints */ + int iSelfTab; /* Table of an index whose exprs are being coded */ int iCacheLevel; /* ColCache valid when aColCache[].iLevel<=iCacheLevel */ int iCacheCnt; /* Counter used to generate aColCache[].lru values */ - u8 nColCache; /* Number of entries in the column cache */ - u8 iColCache; /* Next entry of the cache to replace */ + int nLabel; /* Number of labels used */ + int *aLabel; /* Space to hold the labels */ struct yColCache { int iTable; /* Table cursor number */ - int iColumn; /* Table column number */ + i16 iColumn; /* Table column number */ u8 tempReg; /* iReg is a temp register that needs to be freed */ int iLevel; /* Nesting level */ int iReg; /* Reg with value of this column. 0 means none. 
*/ int lru; /* Least recently used entry has the smallest value */ } aColCache[SQLITE_N_COLCACHE]; /* One for each column cache entry */ - u32 writeMask; /* Start a write transaction on these databases */ - u32 cookieMask; /* Bitmask of schema verified databases */ - u8 isMultiWrite; /* True if statement may affect/insert multiple rows */ - u8 mayAbort; /* True if statement may throw an ABORT exception */ - int cookieGoto; /* Address of OP_Goto to cookie verifier subroutine */ + ExprList *pConstExpr;/* Constant expressions */ + Token constraintName;/* Name of the constraint currently being parsed */ + yDbMask writeMask; /* Start a write transaction on these databases */ + yDbMask cookieMask; /* Bitmask of schema verified databases */ int cookieValue[SQLITE_MAX_ATTACHED+2]; /* Values of cookies to verify */ + int regRowid; /* Register holding rowid of CREATE TABLE entry */ + int regRoot; /* Register holding root page number for new objects */ + int nMaxArg; /* Max args passed to user function by sub-program */ +#if SELECTTRACE_ENABLED + int nSelect; /* Number of SELECT statements seen */ + int nSelectIndent; /* How far to indent SELECTTRACE() output */ +#endif #ifndef SQLITE_OMIT_SHARED_CACHE int nTableLock; /* Number of locks in aTableLock */ TableLock *aTableLock; /* Required table locks for shared-cache mode */ #endif - int regRowid; /* Register holding rowid of CREATE TABLE entry */ - int regRoot; /* Register holding root page number for new objects */ AutoincInfo *pAinc; /* Information about AUTOINCREMENT counters */ - int nMaxArg; /* Max args passed to user function by sub-program */ /* Information used while coding trigger programs. */ Parse *pToplevel; /* Parse structure for main program (or NULL) */ Table *pTriggerTab; /* Table triggers are being coded for */ + int addrCrTab; /* Address of OP_CreateTable opcode on CREATE TABLE */ + u32 nQueryLoop; /* Est number of iterations of a query (10*log2(N)) */ u32 oldmask; /* Mask of old.* columns referenced */ u32 newmask; /* Mask of new.* columns referenced */ u8 eTriggerOp; /* TK_UPDATE, TK_INSERT or TK_DELETE */ u8 eOrconf; /* Default ON CONFLICT policy for trigger steps */ u8 disableTriggers; /* True to disable triggers */ - double nQueryLoop; /* Estimated number of iterations of a query */ - - /* Above is constant between recursions. Below is reset before and after - ** each recursion */ - - int nVar; /* Number of '?' variables seen in the SQL so far */ - int nVarExpr; /* Number of used slots in apVarExpr[] */ - int nVarExprAlloc; /* Number of allocated slots in apVarExpr[] */ - Expr **apVarExpr; /* Pointers to :aaa and $aaaa wildcard expressions */ - Vdbe *pReprepare; /* VM being reprepared (sqlite3Reprepare()) */ - int nAlias; /* Number of aliased result set columns */ - int nAliasAlloc; /* Number of allocated slots for aAlias[] */ - int *aAlias; /* Register used to hold aliased result */ - u8 explain; /* True if the EXPLAIN flag is found on the query */ - Token sNameToken; /* Token with unqualified schema object name */ - Token sLastToken; /* The last token parsed */ - const char *zTail; /* All SQL text past the last semicolon parsed */ - Table *pNewTable; /* A table being constructed by CREATE TABLE */ + + /************************************************************************ + ** Above is constant between recursions. Below is reset before and after + ** each recursion. The boundary between these two regions is determined + ** using offsetof(Parse,nVar) so the nVar field must be the first field + ** in the recursive region. 
+ ************************************************************************/ + + ynVar nVar; /* Number of '?' variables seen in the SQL so far */ + int nzVar; /* Number of available slots in azVar[] */ + u8 iPkSortOrder; /* ASC or DESC for INTEGER PRIMARY KEY */ + u8 explain; /* True if the EXPLAIN flag is found on the query */ +#ifndef SQLITE_OMIT_VIRTUALTABLE + u8 declareVtab; /* True if inside sqlite3_declare_vtab() */ + int nVtabLock; /* Number of virtual tables to lock */ +#endif + int nAlias; /* Number of aliased result set columns */ + int nHeight; /* Expression tree height of current sub-select */ +#ifndef SQLITE_OMIT_EXPLAIN + int iSelectId; /* ID of current select for EXPLAIN output */ + int iNextSelectId; /* Next available select ID for EXPLAIN output */ +#endif + char **azVar; /* Pointers to names of parameters */ + Vdbe *pReprepare; /* VM being reprepared (sqlite3Reprepare()) */ + const char *zTail; /* All SQL text past the last semicolon parsed */ + Table *pNewTable; /* A table being constructed by CREATE TABLE */ Trigger *pNewTrigger; /* Trigger under construct by a CREATE TRIGGER */ const char *zAuthContext; /* The 6th parameter to db->xAuth callbacks */ + Token sNameToken; /* Token with unqualified schema object name */ + Token sLastToken; /* The last token parsed */ #ifndef SQLITE_OMIT_VIRTUALTABLE - Token sArg; /* Complete text of a module argument */ - u8 declareVtab; /* True if inside sqlite3_declare_vtab() */ - int nVtabLock; /* Number of virtual tables to lock */ - Table **apVtabLock; /* Pointer to virtual tables needing locking */ + Token sArg; /* Complete text of a module argument */ + Table **apVtabLock; /* Pointer to virtual tables needing locking */ #endif - int nHeight; /* Expression tree height of current sub-select */ - Table *pZombieTab; /* List of Table objects to delete after code gen */ - TriggerPrg *pTriggerPrg; /* Linked list of coded triggers */ + Table *pZombieTab; /* List of Table objects to delete after code gen */ + TriggerPrg *pTriggerPrg; /* Linked list of coded triggers */ + With *pWith; /* Current WITH clause, or NULL */ + With *pWithToFree; /* Free this WITH object at the end of the parse */ }; +/* +** Return true if currently inside an sqlite3_declare_vtab() call. +*/ #ifdef SQLITE_OMIT_VIRTUALTABLE #define IN_DECLARE_VTAB 0 #else #define IN_DECLARE_VTAB (pParse->declareVtab) #endif @@ -9774,18 +13734,28 @@ const char *zAuthContext; /* Put saved Parse.zAuthContext here */ Parse *pParse; /* The Parse structure */ }; /* -** Bitfield flags for P5 value in OP_Insert and OP_Delete +** Bitfield flags for P5 value in various opcodes. 
*/ -#define OPFLAG_NCHANGE 0x01 /* Set to update db->nChange */ +#define OPFLAG_NCHANGE 0x01 /* OP_Insert: Set to update db->nChange */ + /* Also used in P2 (not P5) of OP_Delete */ +#define OPFLAG_EPHEM 0x01 /* OP_Column: Ephemeral output is ok */ #define OPFLAG_LASTROWID 0x02 /* Set to update db->lastRowid */ #define OPFLAG_ISUPDATE 0x04 /* This OP_Insert is an sql UPDATE */ #define OPFLAG_APPEND 0x08 /* This is likely to be an append */ #define OPFLAG_USESEEKRESULT 0x10 /* Try to avoid a seek in BtreeInsert() */ -#define OPFLAG_CLEARCACHE 0x20 /* Clear pseudo-table cache in OP_Column */ +#define OPFLAG_LENGTHARG 0x40 /* OP_Column only used for length() */ +#define OPFLAG_TYPEOFARG 0x80 /* OP_Column only used for typeof() */ +#define OPFLAG_BULKCSR 0x01 /* OP_Open** used to open bulk cursor */ +#define OPFLAG_SEEKEQ 0x02 /* OP_Open** cursor uses EQ seek only */ +#define OPFLAG_FORDELETE 0x08 /* OP_Open should use BTREE_FORDELETE */ +#define OPFLAG_P2ISREG 0x10 /* P2 to OP_Open** is a register number */ +#define OPFLAG_PERMUTE 0x01 /* OP_Compare: use the permutation */ +#define OPFLAG_SAVEPOSITION 0x02 /* OP_Delete: keep cursor position */ +#define OPFLAG_AUXDELETE 0x04 /* OP_Delete: index in a DELETE op */ /* * Each trigger present in the database schema is stored as an instance of * struct Trigger. * @@ -9839,24 +13809,24 @@ * * (op == TK_INSERT) * orconf -> stores the ON CONFLICT algorithm * pSelect -> If this is an INSERT INTO ... SELECT ... statement, then * this stores a pointer to the SELECT statement. Otherwise NULL. - * target -> A token holding the quoted name of the table to insert into. + * zTarget -> Dequoted name of the table to insert into. * pExprList -> If this is an INSERT INTO ... VALUES ... statement, then * this stores values to be inserted. Otherwise NULL. * pIdList -> If this is an INSERT INTO ... (<column-names>) VALUES ... * statement, then this stores the column-names to be * inserted into. * * (op == TK_DELETE) - * target -> A token holding the quoted name of the table to delete from. + * zTarget -> Dequoted name of the table to delete from. * pWhere -> The WHERE clause of the DELETE statement if one is specified. * Otherwise NULL. * * (op == TK_UPDATE) - * target -> A token holding the quoted name of the table to update rows of. + * zTarget -> Dequoted name of the table to update. * pWhere -> The WHERE clause of the UPDATE statement if one is specified. * Otherwise NULL. * pExprList -> A list of the columns to update and the expressions to update * them to. See sqlite3Update() documentation of "pChanges" * argument. @@ -9864,14 +13834,14 @@ */ struct TriggerStep { u8 op; /* One of TK_DELETE, TK_UPDATE, TK_INSERT, TK_SELECT */ u8 orconf; /* OE_Rollback etc. */ Trigger *pTrig; /* The trigger that this step is a part of */ - Select *pSelect; /* SELECT statment or RHS of INSERT INTO .. SELECT ... */ - Token target; /* Target table for DELETE, UPDATE, INSERT */ + Select *pSelect; /* SELECT statement or RHS of INSERT INTO SELECT ... */ + char *zTarget; /* Target table for DELETE, UPDATE, INSERT */ Expr *pWhere; /* The WHERE clause for DELETE or UPDATE steps */ - ExprList *pExprList; /* SET clause for UPDATE. VALUES clause for INSERT */ + ExprList *pExprList; /* SET clause for UPDATE. */ IdList *pIdList; /* Column names for INSERT */ TriggerStep *pNext; /* Next in the link-list */ TriggerStep *pLast; /* Last element in link-list. Valid for 1st elem only */ }; @@ -9881,10 +13851,12 @@ ** explicit. 
*/ typedef struct DbFixer DbFixer; struct DbFixer { Parse *pParse; /* The parsing context. Error messages written here */ + Schema *pSchema; /* Fix items to this schema */ + int bVarOnly; /* Check for variable references only */ const char *zDb; /* Make sure all objects are contained in this database */ const char *zType; /* Type of the container - used for error messages */ const Token *pName; /* Name of the container - used for error messages */ }; @@ -9894,26 +13866,33 @@ */ struct StrAccum { sqlite3 *db; /* Optional database for lookaside. Can be NULL */ char *zBase; /* A base allocation. Not from malloc. */ char *zText; /* The string collected so far */ - int nChar; /* Length of the string so far */ - int nAlloc; /* Amount of space allocated in zText */ - int mxAlloc; /* Maximum allowed string length */ - u8 mallocFailed; /* Becomes true if any memory allocation fails */ - u8 useMalloc; /* True if zText is enlargeable using realloc */ - u8 tooBig; /* Becomes true if string size exceeds limits */ + u32 nChar; /* Length of the string so far */ + u32 nAlloc; /* Amount of space allocated in zText */ + u32 mxAlloc; /* Maximum allowed allocation. 0 for no malloc usage */ + u8 accError; /* STRACCUM_NOMEM or STRACCUM_TOOBIG */ + u8 printfFlags; /* SQLITE_PRINTF flags below */ }; +#define STRACCUM_NOMEM 1 +#define STRACCUM_TOOBIG 2 +#define SQLITE_PRINTF_INTERNAL 0x01 /* Internal-use-only converters allowed */ +#define SQLITE_PRINTF_SQLFUNC 0x02 /* SQL function arguments to VXPrintf */ +#define SQLITE_PRINTF_MALLOCED 0x04 /* True if xText is allocated space */ + +#define isMalloced(X) (((X)->printfFlags & SQLITE_PRINTF_MALLOCED)!=0) + /* ** A pointer to this structure is used to communicate information ** from sqlite3Init and OP_ParseSchema into the sqlite3InitCallback. */ typedef struct { sqlite3 *db; /* The database being initialized */ - int iDb; /* 0 for main database. 1 for TEMP, 2.. for ATTACHed */ char **pzErrMsg; /* Error message stored here */ + int iDb; /* 0 for main database. 1 for TEMP, 2.. for ATTACHed */ int rc; /* Result code stored here */ } InitData; /* ** Structure containing global configuration data for the SQLite library. 
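/*
** EDITORIAL NOTE (not part of the patch above or below): a minimal,
** self-contained sketch of the accumulate-then-check pattern that the
** StrAccum fields shown in this hunk (nChar, nAlloc, mxAlloc, accError)
** support.  The DemoAccum type and demoAccumAppend() are hypothetical
** stand-ins, not the real sqlite3StrAccum* routines declared later in the
** header; the point is only that every append records STRACCUM_NOMEM- or
** STRACCUM_TOOBIG-style failures in a sticky error field, so callers can
** make one error check at the end instead of checking every append.
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct DemoAccum DemoAccum;
struct DemoAccum {
  char *zText;            /* Accumulated text (from malloc) */
  size_t nChar;           /* Bytes used so far */
  size_t nAlloc;          /* Bytes currently allocated */
  size_t mxAlloc;         /* Hard upper bound, like StrAccum.mxAlloc */
  unsigned char accError; /* 0, DEMO_NOMEM, or DEMO_TOOBIG (sticky) */
};
#define DEMO_NOMEM  1
#define DEMO_TOOBIG 2

static void demoAccumAppend(DemoAccum *p, const char *z){
  size_t n = strlen(z);
  if( p->accError ) return;                  /* Errors are sticky */
  if( p->nChar + n + 1 > p->mxAlloc ){ p->accError = DEMO_TOOBIG; return; }
  if( p->nChar + n + 1 > p->nAlloc ){
    size_t nNew = (p->nChar + n + 1) * 2;
    char *zNew;
    if( nNew > p->mxAlloc ) nNew = p->mxAlloc;
    zNew = realloc(p->zText, nNew);
    if( zNew==0 ){ p->accError = DEMO_NOMEM; return; }
    p->zText = zNew;
    p->nAlloc = nNew;
  }
  memcpy(p->zText + p->nChar, z, n+1);
  p->nChar += n;
}

int main(void){
  DemoAccum a = {0, 0, 0, 100, 0};
  demoAccumAppend(&a, "SELECT * FROM t1");
  demoAccumAppend(&a, " WHERE x=1");
  if( a.accError==0 ) printf("%s\n", a.zText); /* single check at the end */
  free(a.zText);
  return 0;
}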
@@ -9922,68 +13901,142 @@ */ struct Sqlite3Config { int bMemstat; /* True to enable memory status */ int bCoreMutex; /* True to enable core mutexing */ int bFullMutex; /* True to enable full mutexing */ + int bOpenUri; /* True to interpret filenames as URIs */ + int bUseCis; /* Use covering indices for full-scans */ int mxStrlen; /* Maximum string length */ + int neverCorrupt; /* Database is always well-formed */ int szLookaside; /* Default lookaside buffer size */ int nLookaside; /* Default lookaside buffer count */ sqlite3_mem_methods m; /* Low-level memory allocation interface */ sqlite3_mutex_methods mutex; /* Low-level mutex interface */ - sqlite3_pcache_methods pcache; /* Low-level page-cache interface */ + sqlite3_pcache_methods2 pcache2; /* Low-level page-cache interface */ void *pHeap; /* Heap storage space */ int nHeap; /* Size of pHeap[] */ int mnReq, mxReq; /* Min and max heap requests sizes */ + sqlite3_int64 szMmap; /* mmap() space per open file */ + sqlite3_int64 mxMmap; /* Maximum value for szMmap */ void *pScratch; /* Scratch memory */ int szScratch; /* Size of each scratch buffer */ int nScratch; /* Number of scratch buffers */ void *pPage; /* Page cache memory */ int szPage; /* Size of each page in pPage[] */ int nPage; /* Number of pages in pPage[] */ int mxParserStack; /* maximum depth of the parser stack */ int sharedCacheEnabled; /* true if shared-cache mode enabled */ + u32 szPma; /* Maximum Sorter PMA size */ /* The above might be initialized to non-zero. The following need to always ** initially be zero, however. */ int isInit; /* True after initialization has finished */ int inProgress; /* True while initialization in progress */ int isMutexInit; /* True after mutexes are initialized */ int isMallocInit; /* True after malloc is initialized */ int isPCacheInit; /* True after malloc is initialized */ - sqlite3_mutex *pInitMutex; /* Mutex used by sqlite3_initialize() */ int nRefInitMutex; /* Number of users of pInitMutex */ + sqlite3_mutex *pInitMutex; /* Mutex used by sqlite3_initialize() */ void (*xLog)(void*,int,const char*); /* Function for logging */ void *pLogArg; /* First argument to xLog() */ +#ifdef SQLITE_ENABLE_SQLLOG + void(*xSqllog)(void*,sqlite3*,const char*, int); + void *pSqllogArg; +#endif +#ifdef SQLITE_VDBE_COVERAGE + /* The following callback (if not NULL) is invoked on every VDBE branch + ** operation. Set the callback using SQLITE_TESTCTRL_VDBE_COVERAGE. + */ + void (*xVdbeBranch)(void*,int iSrcLine,u8 eThis,u8 eMx); /* Callback */ + void *pVdbeBranchArg; /* 1st argument */ +#endif +#ifndef SQLITE_OMIT_BUILTIN_TEST + int (*xTestCallback)(int); /* Invoked by sqlite3FaultSim() */ +#endif + int bLocaltimeFault; /* True to fail localtime() calls */ }; +/* +** This macro is used inside of assert() statements to indicate that +** the assert is only valid on a well-formed database. Instead of: +** +** assert( X ); +** +** One writes: +** +** assert( X || CORRUPT_DB ); +** +** CORRUPT_DB is true during normal operation. CORRUPT_DB does not indicate +** that the database is definitely corrupt, only that it might be corrupt. +** For most test cases, CORRUPT_DB is set to false using a special +** sqlite3_test_control(). This enables assert() statements to prove +** things that are always true for well-formed databases. +*/ +#define CORRUPT_DB (sqlite3Config.neverCorrupt==0) + /* ** Context pointer passed down through the tree-walk. */ struct Walker { + Parse *pParse; /* Parser context. 
*/ int (*xExprCallback)(Walker*, Expr*); /* Callback for expressions */ int (*xSelectCallback)(Walker*,Select*); /* Callback for SELECTs */ - Parse *pParse; /* Parser context. */ + void (*xSelectCallback2)(Walker*,Select*);/* Second callback for SELECTs */ + int walkerDepth; /* Number of subqueries */ + u8 eCode; /* A small processing code */ union { /* Extra data for callback */ NameContext *pNC; /* Naming context */ - int i; /* Integer value */ + int n; /* A counter */ + int iCur; /* A cursor number */ + SrcList *pSrcList; /* FROM clause */ + struct SrcCount *pSrcCount; /* Counting column references */ + struct CCurHint *pCCurHint; /* Used by codeCursorHint() */ + int *aiCol; /* array of column indexes */ } u; }; /* Forward declarations */ SQLITE_PRIVATE int sqlite3WalkExpr(Walker*, Expr*); SQLITE_PRIVATE int sqlite3WalkExprList(Walker*, ExprList*); SQLITE_PRIVATE int sqlite3WalkSelect(Walker*, Select*); SQLITE_PRIVATE int sqlite3WalkSelectExpr(Walker*, Select*); SQLITE_PRIVATE int sqlite3WalkSelectFrom(Walker*, Select*); +SQLITE_PRIVATE int sqlite3ExprWalkNoop(Walker*, Expr*); /* ** Return code from the parse-tree walking primitives and their ** callbacks. */ #define WRC_Continue 0 /* Continue down into children */ #define WRC_Prune 1 /* Omit children but continue walking siblings */ #define WRC_Abort 2 /* Abandon the tree walk */ +/* +** An instance of this structure represents a set of one or more CTEs +** (common table expressions) created by a single WITH clause. +*/ +struct With { + int nCte; /* Number of CTEs in the WITH clause */ + With *pOuter; /* Containing WITH clause, or NULL */ + struct Cte { /* For each CTE in the WITH clause.... */ + char *zName; /* Name of this CTE */ + ExprList *pCols; /* List of explicit column names, or NULL */ + Select *pSelect; /* The definition of this CTE */ + const char *zCteErr; /* Error message for circular references */ + } a[1]; +}; + +#ifdef SQLITE_DEBUG +/* +** An instance of the TreeView object is used for printing the content of +** data structures on sqlite3DebugPrintf() using a tree-like view. +*/ +struct TreeView { + int iLevel; /* Which level of the tree we are on */ + u8 bLine[100]; /* Draw vertical in column i if bLine[i] is true */ +}; +#endif /* SQLITE_DEBUG */ + /* ** Assuming zIn points to the first byte of a UTF-8 character, ** advance zIn to point to the first byte of the next UTF-8 character. */ #define SQLITE_SKIP_UTF8(zIn) { \ @@ -10004,18 +14057,25 @@ SQLITE_PRIVATE int sqlite3CantopenError(int); #define SQLITE_CORRUPT_BKPT sqlite3CorruptError(__LINE__) #define SQLITE_MISUSE_BKPT sqlite3MisuseError(__LINE__) #define SQLITE_CANTOPEN_BKPT sqlite3CantopenError(__LINE__) +/* +** FTS3 and FTS4 both require virtual table support +*/ +#if defined(SQLITE_OMIT_VIRTUALTABLE) +# undef SQLITE_ENABLE_FTS3 +# undef SQLITE_ENABLE_FTS4 +#endif /* ** FTS4 is really an extension for FTS3. It is enabled using the -** SQLITE_ENABLE_FTS3 macro. But to avoid confusion we also all -** the SQLITE_ENABLE_FTS4 macro to serve as an alisse for SQLITE_ENABLE_FTS3. +** SQLITE_ENABLE_FTS3 macro. But to avoid confusion we also call +** the SQLITE_ENABLE_FTS4 macro to serve as an alias for SQLITE_ENABLE_FTS3. */ #if defined(SQLITE_ENABLE_FTS4) && !defined(SQLITE_ENABLE_FTS3) -# define SQLITE_ENABLE_FTS3 +# define SQLITE_ENABLE_FTS3 1 #endif /* ** The ctype.h header is needed for non-ASCII systems. It is also ** needed by FTS3 when FTS3 is included in the amalgamation. 
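/*
** EDITORIAL NOTE (not part of the patch above or below): a self-contained
** sketch of the return-code protocol used by the Walker callbacks declared
** in the hunk above.  The DemoNode/DemoWalker types and demoWalk() are
** hypothetical stand-ins for Expr, Walker and sqlite3WalkExpr(); only the
** meanings of WRC_Continue, WRC_Prune and WRC_Abort are taken from the
** header comments.
*/
#include <stdio.h>

#define WRC_Continue 0   /* Continue down into children */
#define WRC_Prune    1   /* Omit children but continue walking siblings */
#define WRC_Abort    2   /* Abandon the tree walk */

typedef struct DemoNode DemoNode;
struct DemoNode {
  const char *zOp;              /* Operator name, for illustration only */
  DemoNode *pLeft, *pRight;     /* Child nodes */
};

typedef struct DemoWalker DemoWalker;
struct DemoWalker {
  int (*xCallback)(DemoWalker*, DemoNode*);  /* Per-node callback */
  int nSeen;                                 /* Plays the role of Walker.u.n */
};

/* Recursive walk that honors the WRC_* protocol */
static int demoWalk(DemoWalker *pWalker, DemoNode *p){
  int rc;
  if( p==0 ) return WRC_Continue;
  rc = pWalker->xCallback(pWalker, p);
  if( rc==WRC_Abort ) return WRC_Abort;
  if( rc==WRC_Continue ){
    if( demoWalk(pWalker, p->pLeft)==WRC_Abort ) return WRC_Abort;
    if( demoWalk(pWalker, p->pRight)==WRC_Abort ) return WRC_Abort;
  }
  return WRC_Continue;   /* WRC_Prune: children skipped, siblings still walked */
}

/* Example callback: count nodes and abandon the walk after five */
static int countNodes(DemoWalker *pWalker, DemoNode *p){
  (void)p;
  pWalker->nSeen++;
  return pWalker->nSeen>=5 ? WRC_Abort : WRC_Continue;
}

int main(void){
  DemoNode b = {"x", 0, 0}, c = {"y", 0, 0};
  DemoNode a = {"+", &b, &c};
  DemoWalker w = {countNodes, 0};
  demoWalk(&w, &a);
  printf("nodes visited: %d\n", w.nSeen);   /* prints 3 */
  return 0;
}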
@@ -10045,40 +14105,45 @@ # define sqlite3Isalpha(x) isalpha((unsigned char)(x)) # define sqlite3Isdigit(x) isdigit((unsigned char)(x)) # define sqlite3Isxdigit(x) isxdigit((unsigned char)(x)) # define sqlite3Tolower(x) tolower((unsigned char)(x)) #endif +#ifndef SQLITE_OMIT_COMPILEOPTION_DIAGS +SQLITE_PRIVATE int sqlite3IsIdChar(u8); +#endif /* ** Internal function prototypes */ -SQLITE_PRIVATE int sqlite3StrICmp(const char *, const char *); -SQLITE_PRIVATE int sqlite3IsNumber(const char*, int*, u8); +#define sqlite3StrICmp sqlite3_stricmp SQLITE_PRIVATE int sqlite3Strlen30(const char*); #define sqlite3StrNICmp sqlite3_strnicmp SQLITE_PRIVATE int sqlite3MallocInit(void); SQLITE_PRIVATE void sqlite3MallocEnd(void); -SQLITE_PRIVATE void *sqlite3Malloc(int); -SQLITE_PRIVATE void *sqlite3MallocZero(int); -SQLITE_PRIVATE void *sqlite3DbMallocZero(sqlite3*, int); -SQLITE_PRIVATE void *sqlite3DbMallocRaw(sqlite3*, int); +SQLITE_PRIVATE void *sqlite3Malloc(u64); +SQLITE_PRIVATE void *sqlite3MallocZero(u64); +SQLITE_PRIVATE void *sqlite3DbMallocZero(sqlite3*, u64); +SQLITE_PRIVATE void *sqlite3DbMallocRaw(sqlite3*, u64); +SQLITE_PRIVATE void *sqlite3DbMallocRawNN(sqlite3*, u64); SQLITE_PRIVATE char *sqlite3DbStrDup(sqlite3*,const char*); -SQLITE_PRIVATE char *sqlite3DbStrNDup(sqlite3*,const char*, int); -SQLITE_PRIVATE void *sqlite3Realloc(void*, int); -SQLITE_PRIVATE void *sqlite3DbReallocOrFree(sqlite3 *, void *, int); -SQLITE_PRIVATE void *sqlite3DbRealloc(sqlite3 *, void *, int); +SQLITE_PRIVATE char *sqlite3DbStrNDup(sqlite3*,const char*, u64); +SQLITE_PRIVATE void *sqlite3Realloc(void*, u64); +SQLITE_PRIVATE void *sqlite3DbReallocOrFree(sqlite3 *, void *, u64); +SQLITE_PRIVATE void *sqlite3DbRealloc(sqlite3 *, void *, u64); SQLITE_PRIVATE void sqlite3DbFree(sqlite3*, void*); SQLITE_PRIVATE int sqlite3MallocSize(void*); SQLITE_PRIVATE int sqlite3DbMallocSize(sqlite3*, void*); SQLITE_PRIVATE void *sqlite3ScratchMalloc(int); SQLITE_PRIVATE void sqlite3ScratchFree(void*); SQLITE_PRIVATE void *sqlite3PageMalloc(int); SQLITE_PRIVATE void sqlite3PageFree(void*); SQLITE_PRIVATE void sqlite3MemSetDefault(void); +#ifndef SQLITE_OMIT_BUILTIN_TEST SQLITE_PRIVATE void sqlite3BenignMallocHooks(void (*)(void), void (*)(void)); -SQLITE_PRIVATE int sqlite3MemoryAlarm(void (*)(void*, sqlite3_int64, int), void*, sqlite3_int64); +#endif +SQLITE_PRIVATE int sqlite3HeapNearlyFull(void); /* ** On systems with ample stack space and that support alloca(), make ** use of alloca() to obtain space for large automatic objects. By default, ** obtain space from malloc(). 
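/*
** EDITORIAL NOTE (not part of the patch above or below): a hedged sketch of
** the alloca()-or-malloc() switch described in the comment that ends the
** hunk above.  DEMO_USE_ALLOCA, demoStackAlloc() and demoStackFree() are
** hypothetical names, not the macros the header actually defines; they only
** illustrate the pattern of taking scratch space from the stack when a
** configuration option says the stack is large enough, and falling back to
** heap allocation (with a matching free) by default.
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifdef DEMO_USE_ALLOCA
# include <alloca.h>
# define demoStackAlloc(n)  alloca(n)   /* released automatically on return */
# define demoStackFree(p)               /* no-op for alloca() memory */
#else
# define demoStackAlloc(n)  malloc(n)   /* default: use the heap */
# define demoStackFree(p)   free(p)
#endif

int main(void){
  /* A "large automatic object": a scratch buffer sized at runtime */
  size_t n = 1000;
  char *zBuf = demoStackAlloc(n);
  if( zBuf==0 ) return 1;               /* only reachable on the malloc path */
  memset(zBuf, 'x', n-1);
  zBuf[n-1] = 0;
  printf("%zu bytes of scratch space\n", strlen(zBuf)+1);
  demoStackFree(zBuf);
  return 0;
}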
@@ -10103,207 +14168,303 @@ SQLITE_PRIVATE const sqlite3_mem_methods *sqlite3MemGetMemsys5(void); #endif #ifndef SQLITE_MUTEX_OMIT -SQLITE_PRIVATE sqlite3_mutex_methods *sqlite3DefaultMutex(void); +SQLITE_PRIVATE sqlite3_mutex_methods const *sqlite3DefaultMutex(void); +SQLITE_PRIVATE sqlite3_mutex_methods const *sqlite3NoopMutex(void); SQLITE_PRIVATE sqlite3_mutex *sqlite3MutexAlloc(int); SQLITE_PRIVATE int sqlite3MutexInit(void); SQLITE_PRIVATE int sqlite3MutexEnd(void); #endif +#if !defined(SQLITE_MUTEX_OMIT) && !defined(SQLITE_MUTEX_NOOP) +SQLITE_PRIVATE void sqlite3MemoryBarrier(void); +#else +# define sqlite3MemoryBarrier() +#endif -SQLITE_PRIVATE int sqlite3StatusValue(int); -SQLITE_PRIVATE void sqlite3StatusAdd(int, int); -SQLITE_PRIVATE void sqlite3StatusSet(int, int); +SQLITE_PRIVATE sqlite3_int64 sqlite3StatusValue(int); +SQLITE_PRIVATE void sqlite3StatusUp(int, int); +SQLITE_PRIVATE void sqlite3StatusDown(int, int); +SQLITE_PRIVATE void sqlite3StatusHighwater(int, int); + +/* Access to mutexes used by sqlite3_status() */ +SQLITE_PRIVATE sqlite3_mutex *sqlite3Pcache1Mutex(void); +SQLITE_PRIVATE sqlite3_mutex *sqlite3MallocMutex(void); #ifndef SQLITE_OMIT_FLOATING_POINT SQLITE_PRIVATE int sqlite3IsNaN(double); #else # define sqlite3IsNaN(X) 0 #endif -SQLITE_PRIVATE void sqlite3VXPrintf(StrAccum*, int, const char*, va_list); -#ifndef SQLITE_OMIT_TRACE +/* +** An instance of the following structure holds information about SQL +** functions arguments that are the parameters to the printf() function. +*/ +struct PrintfArguments { + int nArg; /* Total number of arguments */ + int nUsed; /* Number of arguments used so far */ + sqlite3_value **apArg; /* The argument values */ +}; + +SQLITE_PRIVATE void sqlite3VXPrintf(StrAccum*, const char*, va_list); SQLITE_PRIVATE void sqlite3XPrintf(StrAccum*, const char*, ...); -#endif SQLITE_PRIVATE char *sqlite3MPrintf(sqlite3*,const char*, ...); SQLITE_PRIVATE char *sqlite3VMPrintf(sqlite3*,const char*, va_list); -SQLITE_PRIVATE char *sqlite3MAppendf(sqlite3*,char*,const char*,...); -#if defined(SQLITE_TEST) || defined(SQLITE_DEBUG) +#if defined(SQLITE_DEBUG) || defined(SQLITE_HAVE_OS_TRACE) SQLITE_PRIVATE void sqlite3DebugPrintf(const char*, ...); #endif #if defined(SQLITE_TEST) SQLITE_PRIVATE void *sqlite3TestTextToPtr(const char*); #endif -SQLITE_PRIVATE void sqlite3SetString(char **, sqlite3*, const char*, ...); + +#if defined(SQLITE_DEBUG) +SQLITE_PRIVATE void sqlite3TreeViewExpr(TreeView*, const Expr*, u8); +SQLITE_PRIVATE void sqlite3TreeViewExprList(TreeView*, const ExprList*, u8, const char*); +SQLITE_PRIVATE void sqlite3TreeViewSelect(TreeView*, const Select*, u8); +SQLITE_PRIVATE void sqlite3TreeViewWith(TreeView*, const With*, u8); +#endif + + +SQLITE_PRIVATE void sqlite3SetString(char **, sqlite3*, const char*); SQLITE_PRIVATE void sqlite3ErrorMsg(Parse*, const char*, ...); SQLITE_PRIVATE int sqlite3Dequote(char*); +SQLITE_PRIVATE void sqlite3TokenInit(Token*,char*); SQLITE_PRIVATE int sqlite3KeywordCode(const unsigned char*, int); SQLITE_PRIVATE int sqlite3RunParser(Parse*, const char*, char **); SQLITE_PRIVATE void sqlite3FinishCoding(Parse*); SQLITE_PRIVATE int sqlite3GetTempReg(Parse*); SQLITE_PRIVATE void sqlite3ReleaseTempReg(Parse*,int); SQLITE_PRIVATE int sqlite3GetTempRange(Parse*,int); SQLITE_PRIVATE void sqlite3ReleaseTempRange(Parse*,int,int); +SQLITE_PRIVATE void sqlite3ClearTempRegCache(Parse*); SQLITE_PRIVATE Expr *sqlite3ExprAlloc(sqlite3*,int,const Token*,int); SQLITE_PRIVATE Expr *sqlite3Expr(sqlite3*,int,const 
char*); SQLITE_PRIVATE void sqlite3ExprAttachSubtrees(sqlite3*,Expr*,Expr*,Expr*); SQLITE_PRIVATE Expr *sqlite3PExpr(Parse*, int, Expr*, Expr*, const Token*); SQLITE_PRIVATE Expr *sqlite3ExprAnd(sqlite3*,Expr*, Expr*); SQLITE_PRIVATE Expr *sqlite3ExprFunction(Parse*,ExprList*, Token*); SQLITE_PRIVATE void sqlite3ExprAssignVarNumber(Parse*, Expr*); SQLITE_PRIVATE void sqlite3ExprDelete(sqlite3*, Expr*); SQLITE_PRIVATE ExprList *sqlite3ExprListAppend(Parse*,ExprList*,Expr*); +SQLITE_PRIVATE void sqlite3ExprListSetSortOrder(ExprList*,int); SQLITE_PRIVATE void sqlite3ExprListSetName(Parse*,ExprList*,Token*,int); SQLITE_PRIVATE void sqlite3ExprListSetSpan(Parse*,ExprList*,ExprSpan*); SQLITE_PRIVATE void sqlite3ExprListDelete(sqlite3*, ExprList*); +SQLITE_PRIVATE u32 sqlite3ExprListFlags(const ExprList*); SQLITE_PRIVATE int sqlite3Init(sqlite3*, char**); SQLITE_PRIVATE int sqlite3InitCallback(void*, int, char**, char**); SQLITE_PRIVATE void sqlite3Pragma(Parse*,Token*,Token*,Token*,int); -SQLITE_PRIVATE void sqlite3ResetInternalSchema(sqlite3*, int); -SQLITE_PRIVATE void sqlite3BeginParse(Parse*,int); +SQLITE_PRIVATE void sqlite3ResetAllSchemasOfConnection(sqlite3*); +SQLITE_PRIVATE void sqlite3ResetOneSchema(sqlite3*,int); +SQLITE_PRIVATE void sqlite3CollapseDatabaseArray(sqlite3*); SQLITE_PRIVATE void sqlite3CommitInternalChanges(sqlite3*); +SQLITE_PRIVATE void sqlite3DeleteColumnNames(sqlite3*,Table*); +SQLITE_PRIVATE int sqlite3ColumnsFromExprList(Parse*,ExprList*,i16*,Column**); SQLITE_PRIVATE Table *sqlite3ResultSetOfSelect(Parse*,Select*); SQLITE_PRIVATE void sqlite3OpenMasterTable(Parse *, int); +SQLITE_PRIVATE Index *sqlite3PrimaryKeyIndex(Table*); +SQLITE_PRIVATE i16 sqlite3ColumnOfIndex(Index*, i16); SQLITE_PRIVATE void sqlite3StartTable(Parse*,Token*,Token*,int,int,int,int); +#if SQLITE_ENABLE_HIDDEN_COLUMNS +SQLITE_PRIVATE void sqlite3ColumnPropertiesFromName(Table*, Column*); +#else +# define sqlite3ColumnPropertiesFromName(T,C) /* no-op */ +#endif SQLITE_PRIVATE void sqlite3AddColumn(Parse*,Token*); SQLITE_PRIVATE void sqlite3AddNotNull(Parse*, int); SQLITE_PRIVATE void sqlite3AddPrimaryKey(Parse*, ExprList*, int, int, int); SQLITE_PRIVATE void sqlite3AddCheckConstraint(Parse*, Expr*); SQLITE_PRIVATE void sqlite3AddColumnType(Parse*,Token*); SQLITE_PRIVATE void sqlite3AddDefaultValue(Parse*,ExprSpan*); SQLITE_PRIVATE void sqlite3AddCollateType(Parse*, Token*); -SQLITE_PRIVATE void sqlite3EndTable(Parse*,Token*,Token*,Select*); +SQLITE_PRIVATE void sqlite3EndTable(Parse*,Token*,Token*,u8,Select*); +SQLITE_PRIVATE int sqlite3ParseUri(const char*,const char*,unsigned int*, + sqlite3_vfs**,char**,char **); +SQLITE_PRIVATE Btree *sqlite3DbNameToBtree(sqlite3*,const char*); +SQLITE_PRIVATE int sqlite3CodeOnce(Parse *); + +#ifdef SQLITE_OMIT_BUILTIN_TEST +# define sqlite3FaultSim(X) SQLITE_OK +#else +SQLITE_PRIVATE int sqlite3FaultSim(int); +#endif SQLITE_PRIVATE Bitvec *sqlite3BitvecCreate(u32); SQLITE_PRIVATE int sqlite3BitvecTest(Bitvec*, u32); +SQLITE_PRIVATE int sqlite3BitvecTestNotNull(Bitvec*, u32); SQLITE_PRIVATE int sqlite3BitvecSet(Bitvec*, u32); SQLITE_PRIVATE void sqlite3BitvecClear(Bitvec*, u32, void*); SQLITE_PRIVATE void sqlite3BitvecDestroy(Bitvec*); SQLITE_PRIVATE u32 sqlite3BitvecSize(Bitvec*); +#ifndef SQLITE_OMIT_BUILTIN_TEST SQLITE_PRIVATE int sqlite3BitvecBuiltinTest(int,int*); +#endif SQLITE_PRIVATE RowSet *sqlite3RowSetInit(sqlite3*, void*, unsigned int); SQLITE_PRIVATE void sqlite3RowSetClear(RowSet*); SQLITE_PRIVATE void sqlite3RowSetInsert(RowSet*, i64); 
-SQLITE_PRIVATE int sqlite3RowSetTest(RowSet*, u8 iBatch, i64); +SQLITE_PRIVATE int sqlite3RowSetTest(RowSet*, int iBatch, i64); SQLITE_PRIVATE int sqlite3RowSetNext(RowSet*, i64*); -SQLITE_PRIVATE void sqlite3CreateView(Parse*,Token*,Token*,Token*,Select*,int,int); +SQLITE_PRIVATE void sqlite3CreateView(Parse*,Token*,Token*,Token*,ExprList*,Select*,int,int); #if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_VIRTUALTABLE) SQLITE_PRIVATE int sqlite3ViewGetColumnNames(Parse*,Table*); #else # define sqlite3ViewGetColumnNames(A,B) 0 #endif +#if SQLITE_MAX_ATTACHED>30 +SQLITE_PRIVATE int sqlite3DbMaskAllZero(yDbMask); +#endif SQLITE_PRIVATE void sqlite3DropTable(Parse*, SrcList*, int, int); -SQLITE_PRIVATE void sqlite3DeleteTable(Table*); +SQLITE_PRIVATE void sqlite3CodeDropTable(Parse*, Table*, int, int); +SQLITE_PRIVATE void sqlite3DeleteTable(sqlite3*, Table*); #ifndef SQLITE_OMIT_AUTOINCREMENT SQLITE_PRIVATE void sqlite3AutoincrementBegin(Parse *pParse); SQLITE_PRIVATE void sqlite3AutoincrementEnd(Parse *pParse); #else # define sqlite3AutoincrementBegin(X) # define sqlite3AutoincrementEnd(X) #endif -SQLITE_PRIVATE void sqlite3Insert(Parse*, SrcList*, ExprList*, Select*, IdList*, int); -SQLITE_PRIVATE void *sqlite3ArrayAllocate(sqlite3*,void*,int,int,int*,int*,int*); +SQLITE_PRIVATE void sqlite3Insert(Parse*, SrcList*, Select*, IdList*, int); +SQLITE_PRIVATE void *sqlite3ArrayAllocate(sqlite3*,void*,int,int*,int*); SQLITE_PRIVATE IdList *sqlite3IdListAppend(sqlite3*, IdList*, Token*); SQLITE_PRIVATE int sqlite3IdListIndex(IdList*,const char*); SQLITE_PRIVATE SrcList *sqlite3SrcListEnlarge(sqlite3*, SrcList*, int, int); SQLITE_PRIVATE SrcList *sqlite3SrcListAppend(sqlite3*, SrcList*, Token*, Token*); SQLITE_PRIVATE SrcList *sqlite3SrcListAppendFromTerm(Parse*, SrcList*, Token*, Token*, Token*, Select*, Expr*, IdList*); SQLITE_PRIVATE void sqlite3SrcListIndexedBy(Parse *, SrcList *, Token *); +SQLITE_PRIVATE void sqlite3SrcListFuncArgs(Parse*, SrcList*, ExprList*); SQLITE_PRIVATE int sqlite3IndexedByLookup(Parse *, struct SrcList_item *); SQLITE_PRIVATE void sqlite3SrcListShiftJoinType(SrcList*); SQLITE_PRIVATE void sqlite3SrcListAssignCursors(Parse*, SrcList*); SQLITE_PRIVATE void sqlite3IdListDelete(sqlite3*, IdList*); SQLITE_PRIVATE void sqlite3SrcListDelete(sqlite3*, SrcList*); +SQLITE_PRIVATE Index *sqlite3AllocateIndexObject(sqlite3*,i16,int,char**); SQLITE_PRIVATE Index *sqlite3CreateIndex(Parse*,Token*,Token*,SrcList*,ExprList*,int,Token*, - Token*, int, int); + Expr*, int, int); SQLITE_PRIVATE void sqlite3DropIndex(Parse*, SrcList*, int); SQLITE_PRIVATE int sqlite3Select(Parse*, Select*, SelectDest*); SQLITE_PRIVATE Select *sqlite3SelectNew(Parse*,ExprList*,SrcList*,Expr*,ExprList*, - Expr*,ExprList*,int,Expr*,Expr*); + Expr*,ExprList*,u16,Expr*,Expr*); SQLITE_PRIVATE void sqlite3SelectDelete(sqlite3*, Select*); SQLITE_PRIVATE Table *sqlite3SrcListLookup(Parse*, SrcList*); SQLITE_PRIVATE int sqlite3IsReadOnly(Parse*, Table*, int); SQLITE_PRIVATE void sqlite3OpenTable(Parse*, int iCur, int iDb, Table*, int); #if defined(SQLITE_ENABLE_UPDATE_DELETE_LIMIT) && !defined(SQLITE_OMIT_SUBQUERY) -SQLITE_PRIVATE Expr *sqlite3LimitWhere(Parse *, SrcList *, Expr *, ExprList *, Expr *, Expr *, char *); +SQLITE_PRIVATE Expr *sqlite3LimitWhere(Parse*,SrcList*,Expr*,ExprList*,Expr*,Expr*,char*); #endif SQLITE_PRIVATE void sqlite3DeleteFrom(Parse*, SrcList*, Expr*); SQLITE_PRIVATE void sqlite3Update(Parse*, SrcList*, ExprList*, Expr*, int); -SQLITE_PRIVATE WhereInfo *sqlite3WhereBegin(Parse*, 
SrcList*, Expr*, ExprList**, u16); +SQLITE_PRIVATE WhereInfo *sqlite3WhereBegin(Parse*,SrcList*,Expr*,ExprList*,ExprList*,u16,int); SQLITE_PRIVATE void sqlite3WhereEnd(WhereInfo*); -SQLITE_PRIVATE int sqlite3ExprCodeGetColumn(Parse*, Table*, int, int, int); +SQLITE_PRIVATE u64 sqlite3WhereOutputRowCount(WhereInfo*); +SQLITE_PRIVATE int sqlite3WhereIsDistinct(WhereInfo*); +SQLITE_PRIVATE int sqlite3WhereIsOrdered(WhereInfo*); +SQLITE_PRIVATE int sqlite3WhereIsSorted(WhereInfo*); +SQLITE_PRIVATE int sqlite3WhereContinueLabel(WhereInfo*); +SQLITE_PRIVATE int sqlite3WhereBreakLabel(WhereInfo*); +SQLITE_PRIVATE int sqlite3WhereOkOnePass(WhereInfo*, int*); +#define ONEPASS_OFF 0 /* Use of ONEPASS not allowed */ +#define ONEPASS_SINGLE 1 /* ONEPASS valid for a single row update */ +#define ONEPASS_MULTI 2 /* ONEPASS is valid for multiple rows */ +SQLITE_PRIVATE void sqlite3ExprCodeLoadIndexColumn(Parse*, Index*, int, int, int); +SQLITE_PRIVATE int sqlite3ExprCodeGetColumn(Parse*, Table*, int, int, int, u8); +SQLITE_PRIVATE void sqlite3ExprCodeGetColumnToReg(Parse*, Table*, int, int, int); +SQLITE_PRIVATE void sqlite3ExprCodeGetColumnOfTable(Vdbe*, Table*, int, int, int); SQLITE_PRIVATE void sqlite3ExprCodeMove(Parse*, int, int, int); -SQLITE_PRIVATE void sqlite3ExprCodeCopy(Parse*, int, int, int); SQLITE_PRIVATE void sqlite3ExprCacheStore(Parse*, int, int, int); SQLITE_PRIVATE void sqlite3ExprCachePush(Parse*); -SQLITE_PRIVATE void sqlite3ExprCachePop(Parse*, int); +SQLITE_PRIVATE void sqlite3ExprCachePop(Parse*); SQLITE_PRIVATE void sqlite3ExprCacheRemove(Parse*, int, int); SQLITE_PRIVATE void sqlite3ExprCacheClear(Parse*); SQLITE_PRIVATE void sqlite3ExprCacheAffinityChange(Parse*, int, int); -SQLITE_PRIVATE void sqlite3ExprHardCopy(Parse*,int,int); -SQLITE_PRIVATE int sqlite3ExprCode(Parse*, Expr*, int); +SQLITE_PRIVATE void sqlite3ExprCode(Parse*, Expr*, int); +SQLITE_PRIVATE void sqlite3ExprCodeCopy(Parse*, Expr*, int); +SQLITE_PRIVATE void sqlite3ExprCodeFactorable(Parse*, Expr*, int); +SQLITE_PRIVATE void sqlite3ExprCodeAtInit(Parse*, Expr*, int, u8); SQLITE_PRIVATE int sqlite3ExprCodeTemp(Parse*, Expr*, int*); SQLITE_PRIVATE int sqlite3ExprCodeTarget(Parse*, Expr*, int); -SQLITE_PRIVATE int sqlite3ExprCodeAndCache(Parse*, Expr*, int); -SQLITE_PRIVATE void sqlite3ExprCodeConstants(Parse*, Expr*); -SQLITE_PRIVATE int sqlite3ExprCodeExprList(Parse*, ExprList*, int, int); +SQLITE_PRIVATE void sqlite3ExprCodeAndCache(Parse*, Expr*, int); +SQLITE_PRIVATE int sqlite3ExprCodeExprList(Parse*, ExprList*, int, int, u8); +#define SQLITE_ECEL_DUP 0x01 /* Deep, not shallow copies */ +#define SQLITE_ECEL_FACTOR 0x02 /* Factor out constant terms */ +#define SQLITE_ECEL_REF 0x04 /* Use ExprList.u.x.iOrderByCol */ SQLITE_PRIVATE void sqlite3ExprIfTrue(Parse*, Expr*, int, int); SQLITE_PRIVATE void sqlite3ExprIfFalse(Parse*, Expr*, int, int); +SQLITE_PRIVATE void sqlite3ExprIfFalseDup(Parse*, Expr*, int, int); SQLITE_PRIVATE Table *sqlite3FindTable(sqlite3*,const char*, const char*); SQLITE_PRIVATE Table *sqlite3LocateTable(Parse*,int isView,const char*, const char*); +SQLITE_PRIVATE Table *sqlite3LocateTableItem(Parse*,int isView,struct SrcList_item *); SQLITE_PRIVATE Index *sqlite3FindIndex(sqlite3*,const char*, const char*); SQLITE_PRIVATE void sqlite3UnlinkAndDeleteTable(sqlite3*,int,const char*); SQLITE_PRIVATE void sqlite3UnlinkAndDeleteIndex(sqlite3*,int,const char*); SQLITE_PRIVATE void sqlite3Vacuum(Parse*); SQLITE_PRIVATE int sqlite3RunVacuum(char**, sqlite3*); SQLITE_PRIVATE char 
*sqlite3NameFromToken(sqlite3*, Token*); -SQLITE_PRIVATE int sqlite3ExprCompare(Expr*, Expr*); +SQLITE_PRIVATE int sqlite3ExprCompare(Expr*, Expr*, int); +SQLITE_PRIVATE int sqlite3ExprListCompare(ExprList*, ExprList*, int); +SQLITE_PRIVATE int sqlite3ExprImpliesExpr(Expr*, Expr*, int); SQLITE_PRIVATE void sqlite3ExprAnalyzeAggregates(NameContext*, Expr*); SQLITE_PRIVATE void sqlite3ExprAnalyzeAggList(NameContext*,ExprList*); +SQLITE_PRIVATE int sqlite3FunctionUsesThisSrc(Expr*, SrcList*); SQLITE_PRIVATE Vdbe *sqlite3GetVdbe(Parse*); +#ifndef SQLITE_OMIT_BUILTIN_TEST SQLITE_PRIVATE void sqlite3PrngSaveState(void); SQLITE_PRIVATE void sqlite3PrngRestoreState(void); -SQLITE_PRIVATE void sqlite3PrngResetState(void); -SQLITE_PRIVATE void sqlite3RollbackAll(sqlite3*); +#endif +SQLITE_PRIVATE void sqlite3RollbackAll(sqlite3*,int); SQLITE_PRIVATE void sqlite3CodeVerifySchema(Parse*, int); +SQLITE_PRIVATE void sqlite3CodeVerifyNamedSchema(Parse*, const char *zDb); SQLITE_PRIVATE void sqlite3BeginTransaction(Parse*, int); SQLITE_PRIVATE void sqlite3CommitTransaction(Parse*); SQLITE_PRIVATE void sqlite3RollbackTransaction(Parse*); SQLITE_PRIVATE void sqlite3Savepoint(Parse*, int, Token*); SQLITE_PRIVATE void sqlite3CloseSavepoints(sqlite3 *); +SQLITE_PRIVATE void sqlite3LeaveMutexAndCloseZombie(sqlite3*); SQLITE_PRIVATE int sqlite3ExprIsConstant(Expr*); SQLITE_PRIVATE int sqlite3ExprIsConstantNotJoin(Expr*); -SQLITE_PRIVATE int sqlite3ExprIsConstantOrFunction(Expr*); +SQLITE_PRIVATE int sqlite3ExprIsConstantOrFunction(Expr*, u8); +SQLITE_PRIVATE int sqlite3ExprIsTableConstant(Expr*,int); +#ifdef SQLITE_ENABLE_CURSOR_HINTS +SQLITE_PRIVATE int sqlite3ExprContainsSubquery(Expr*); +#endif SQLITE_PRIVATE int sqlite3ExprIsInteger(Expr*, int*); SQLITE_PRIVATE int sqlite3ExprCanBeNull(const Expr*); -SQLITE_PRIVATE void sqlite3ExprCodeIsNullJump(Vdbe*, const Expr*, int, int); SQLITE_PRIVATE int sqlite3ExprNeedsNoAffinityChange(const Expr*, char); SQLITE_PRIVATE int sqlite3IsRowid(const char*); -SQLITE_PRIVATE void sqlite3GenerateRowDelete(Parse*, Table*, int, int, int, Trigger *, int); -SQLITE_PRIVATE void sqlite3GenerateRowIndexDelete(Parse*, Table*, int, int*); -SQLITE_PRIVATE int sqlite3GenerateIndexKey(Parse*, Index*, int, int, int); -SQLITE_PRIVATE void sqlite3GenerateConstraintChecks(Parse*,Table*,int,int, - int*,int,int,int,int,int*); -SQLITE_PRIVATE void sqlite3CompleteInsertion(Parse*, Table*, int, int, int*, int, int, int); -SQLITE_PRIVATE int sqlite3OpenTableAndIndices(Parse*, Table*, int, int); +SQLITE_PRIVATE void sqlite3GenerateRowDelete( + Parse*,Table*,Trigger*,int,int,int,i16,u8,u8,u8,int); +SQLITE_PRIVATE void sqlite3GenerateRowIndexDelete(Parse*, Table*, int, int, int*, int); +SQLITE_PRIVATE int sqlite3GenerateIndexKey(Parse*, Index*, int, int, int, int*,Index*,int); +SQLITE_PRIVATE void sqlite3ResolvePartIdxLabel(Parse*,int); +SQLITE_PRIVATE void sqlite3GenerateConstraintChecks(Parse*,Table*,int*,int,int,int,int, + u8,u8,int,int*,int*); +SQLITE_PRIVATE void sqlite3CompleteInsertion(Parse*,Table*,int,int,int,int*,int,int,int); +SQLITE_PRIVATE int sqlite3OpenTableAndIndices(Parse*, Table*, int, u8, int, u8*, int*, int*); SQLITE_PRIVATE void sqlite3BeginWriteOperation(Parse*, int, int); SQLITE_PRIVATE void sqlite3MultiWrite(Parse*); SQLITE_PRIVATE void sqlite3MayAbort(Parse*); -SQLITE_PRIVATE void sqlite3HaltConstraint(Parse*, int, char*, int); +SQLITE_PRIVATE void sqlite3HaltConstraint(Parse*, int, int, char*, i8, u8); +SQLITE_PRIVATE void sqlite3UniqueConstraint(Parse*, int, Index*); 
+SQLITE_PRIVATE void sqlite3RowidConstraint(Parse*, int, Table*); SQLITE_PRIVATE Expr *sqlite3ExprDup(sqlite3*,Expr*,int); SQLITE_PRIVATE ExprList *sqlite3ExprListDup(sqlite3*,ExprList*,int); SQLITE_PRIVATE SrcList *sqlite3SrcListDup(sqlite3*,SrcList*,int); SQLITE_PRIVATE IdList *sqlite3IdListDup(sqlite3*,IdList*); SQLITE_PRIVATE Select *sqlite3SelectDup(sqlite3*,Select*,int); +#if SELECTTRACE_ENABLED +SQLITE_PRIVATE void sqlite3SelectSetName(Select*,const char*); +#else +# define sqlite3SelectSetName(A,B) +#endif SQLITE_PRIVATE void sqlite3FuncDefInsert(FuncDefHash*, FuncDef*); -SQLITE_PRIVATE FuncDef *sqlite3FindFunction(sqlite3*,const char*,int,int,u8,int); +SQLITE_PRIVATE FuncDef *sqlite3FindFunction(sqlite3*,const char*,int,int,u8,u8); SQLITE_PRIVATE void sqlite3RegisterBuiltinFunctions(sqlite3*); SQLITE_PRIVATE void sqlite3RegisterDateTimeFunctions(void); SQLITE_PRIVATE void sqlite3RegisterGlobalFunctions(void); SQLITE_PRIVATE int sqlite3SafetyCheckOk(sqlite3*); SQLITE_PRIVATE int sqlite3SafetyCheckSickOrOk(sqlite3*); @@ -10326,26 +14487,28 @@ SQLITE_PRIVATE void sqlite3CodeRowTriggerDirect(Parse *, Trigger *, Table *, int, int, int); void sqliteViewTriggers(Parse*, Table*, Expr*, int, ExprList*); SQLITE_PRIVATE void sqlite3DeleteTriggerStep(sqlite3*, TriggerStep*); SQLITE_PRIVATE TriggerStep *sqlite3TriggerSelectStep(sqlite3*,Select*); SQLITE_PRIVATE TriggerStep *sqlite3TriggerInsertStep(sqlite3*,Token*, IdList*, - ExprList*,Select*,u8); + Select*,u8); SQLITE_PRIVATE TriggerStep *sqlite3TriggerUpdateStep(sqlite3*,Token*,ExprList*, Expr*, u8); SQLITE_PRIVATE TriggerStep *sqlite3TriggerDeleteStep(sqlite3*,Token*, Expr*); SQLITE_PRIVATE void sqlite3DeleteTrigger(sqlite3*, Trigger*); SQLITE_PRIVATE void sqlite3UnlinkAndDeleteTrigger(sqlite3*,int,const char*); SQLITE_PRIVATE u32 sqlite3TriggerColmask(Parse*,Trigger*,ExprList*,int,int,Table*,int); # define sqlite3ParseToplevel(p) ((p)->pToplevel ? 
(p)->pToplevel : (p)) +# define sqlite3IsToplevel(p) ((p)->pToplevel==0) #else # define sqlite3TriggersExist(B,C,D,E,F) 0 # define sqlite3DeleteTrigger(A,B) # define sqlite3DropTriggerPtr(A,B) # define sqlite3UnlinkAndDeleteTrigger(A,B,C) # define sqlite3CodeRowTrigger(A,B,C,D,E,F,G,H,I) # define sqlite3CodeRowTriggerDirect(A,B,C,D,E,F) # define sqlite3TriggerList(X, Y) 0 # define sqlite3ParseToplevel(p) p +# define sqlite3IsToplevel(p) 1 # define sqlite3TriggerColmask(A,B,C,D,E,F,G) 0 #endif SQLITE_PRIVATE int sqlite3JoinType(Parse*, Token*, Token*, Token*); SQLITE_PRIVATE void sqlite3CreateForeignKey(Parse*, ExprList*, Token*, ExprList*, int); @@ -10362,151 +14525,188 @@ # define sqlite3AuthContextPush(a,b,c) # define sqlite3AuthContextPop(a) ((void)(a)) #endif SQLITE_PRIVATE void sqlite3Attach(Parse*, Expr*, Expr*, Expr*); SQLITE_PRIVATE void sqlite3Detach(Parse*, Expr*); -SQLITE_PRIVATE int sqlite3BtreeFactory(sqlite3 *db, const char *zFilename, - int omitJournal, int nCache, int flags, Btree **ppBtree); -SQLITE_PRIVATE int sqlite3FixInit(DbFixer*, Parse*, int, const char*, const Token*); +SQLITE_PRIVATE void sqlite3FixInit(DbFixer*, Parse*, int, const char*, const Token*); SQLITE_PRIVATE int sqlite3FixSrcList(DbFixer*, SrcList*); SQLITE_PRIVATE int sqlite3FixSelect(DbFixer*, Select*); SQLITE_PRIVATE int sqlite3FixExpr(DbFixer*, Expr*); SQLITE_PRIVATE int sqlite3FixExprList(DbFixer*, ExprList*); SQLITE_PRIVATE int sqlite3FixTriggerStep(DbFixer*, TriggerStep*); -SQLITE_PRIVATE int sqlite3AtoF(const char *z, double*); +SQLITE_PRIVATE int sqlite3AtoF(const char *z, double*, int, u8); SQLITE_PRIVATE int sqlite3GetInt32(const char *, int*); -SQLITE_PRIVATE int sqlite3FitsIn64Bits(const char *, int); +SQLITE_PRIVATE int sqlite3Atoi(const char*); SQLITE_PRIVATE int sqlite3Utf16ByteLen(const void *pData, int nChar); SQLITE_PRIVATE int sqlite3Utf8CharLen(const char *pData, int nByte); -SQLITE_PRIVATE int sqlite3Utf8Read(const u8*, const u8**); +SQLITE_PRIVATE u32 sqlite3Utf8Read(const u8**); +SQLITE_PRIVATE LogEst sqlite3LogEst(u64); +SQLITE_PRIVATE LogEst sqlite3LogEstAdd(LogEst,LogEst); +#ifndef SQLITE_OMIT_VIRTUALTABLE +SQLITE_PRIVATE LogEst sqlite3LogEstFromDouble(double); +#endif +SQLITE_PRIVATE u64 sqlite3LogEstToInt(LogEst); /* ** Routines to read and write variable-length integers. These used to ** be defined locally, but now we use the varint routines in the util.c -** file. Code should use the MACRO forms below, as the Varint32 versions -** are coded to assume the single byte case is already handled (which -** the MACRO form does). +** file. */ SQLITE_PRIVATE int sqlite3PutVarint(unsigned char*, u64); -SQLITE_PRIVATE int sqlite3PutVarint32(unsigned char*, u32); SQLITE_PRIVATE u8 sqlite3GetVarint(const unsigned char *, u64 *); SQLITE_PRIVATE u8 sqlite3GetVarint32(const unsigned char *, u32 *); SQLITE_PRIVATE int sqlite3VarintLen(u64 v); /* -** The header of a record consists of a sequence variable-length integers. -** These integers are almost always small and are encoded as a single byte. -** The following macros take advantage this fact to provide a fast encode -** and decode of the integers in a record header. It is faster for the common -** case where the integer is a single byte. It is a little slower when the -** integer is two or more bytes. But overall it is faster. 
-** -** The following expressions are equivalent: -** -** x = sqlite3GetVarint32( A, &B ); -** x = sqlite3PutVarint32( A, B ); -** -** x = getVarint32( A, B ); -** x = putVarint32( A, B ); -** +** The common case is for a varint to be a single byte. They following +** macros handle the common case without a procedure call, but then call +** the procedure for larger varints. */ -#define getVarint32(A,B) (u8)((*(A)<(u8)0x80) ? ((B) = (u32)*(A)),1 : sqlite3GetVarint32((A), (u32 *)&(B))) -#define putVarint32(A,B) (u8)(((u32)(B)<(u32)0x80) ? (*(A) = (unsigned char)(B)),1 : sqlite3PutVarint32((A), (B))) +#define getVarint32(A,B) \ + (u8)((*(A)<(u8)0x80)?((B)=(u32)*(A)),1:sqlite3GetVarint32((A),(u32 *)&(B))) +#define putVarint32(A,B) \ + (u8)(((u32)(B)<(u32)0x80)?(*(A)=(unsigned char)(B)),1:\ + sqlite3PutVarint((A),(B))) #define getVarint sqlite3GetVarint #define putVarint sqlite3PutVarint -SQLITE_PRIVATE const char *sqlite3IndexAffinityStr(Vdbe *, Index *); -SQLITE_PRIVATE void sqlite3TableAffinityStr(Vdbe *, Table *); +SQLITE_PRIVATE const char *sqlite3IndexAffinityStr(sqlite3*, Index*); +SQLITE_PRIVATE void sqlite3TableAffinity(Vdbe*, Table*, int); SQLITE_PRIVATE char sqlite3CompareAffinity(Expr *pExpr, char aff2); SQLITE_PRIVATE int sqlite3IndexAffinityOk(Expr *pExpr, char idx_affinity); SQLITE_PRIVATE char sqlite3ExprAffinity(Expr *pExpr); -SQLITE_PRIVATE int sqlite3Atoi64(const char*, i64*); -SQLITE_PRIVATE void sqlite3Error(sqlite3*, int, const char*,...); +SQLITE_PRIVATE int sqlite3Atoi64(const char*, i64*, int, u8); +SQLITE_PRIVATE int sqlite3DecOrHexToI64(const char*, i64*); +SQLITE_PRIVATE void sqlite3ErrorWithMsg(sqlite3*, int, const char*,...); +SQLITE_PRIVATE void sqlite3Error(sqlite3*,int); SQLITE_PRIVATE void *sqlite3HexToBlob(sqlite3*, const char *z, int n); +SQLITE_PRIVATE u8 sqlite3HexToInt(int h); SQLITE_PRIVATE int sqlite3TwoPartName(Parse *, Token *, Token *, Token **); + +#if defined(SQLITE_NEED_ERR_NAME) +SQLITE_PRIVATE const char *sqlite3ErrName(int); +#endif + SQLITE_PRIVATE const char *sqlite3ErrStr(int); SQLITE_PRIVATE int sqlite3ReadSchema(Parse *pParse); SQLITE_PRIVATE CollSeq *sqlite3FindCollSeq(sqlite3*,u8 enc, const char*,int); SQLITE_PRIVATE CollSeq *sqlite3LocateCollSeq(Parse *pParse, const char*zName); SQLITE_PRIVATE CollSeq *sqlite3ExprCollSeq(Parse *pParse, Expr *pExpr); -SQLITE_PRIVATE Expr *sqlite3ExprSetColl(Parse *pParse, Expr *, Token *); +SQLITE_PRIVATE Expr *sqlite3ExprAddCollateToken(Parse *pParse, Expr*, const Token*, int); +SQLITE_PRIVATE Expr *sqlite3ExprAddCollateString(Parse*,Expr*,const char*); +SQLITE_PRIVATE Expr *sqlite3ExprSkipCollate(Expr*); SQLITE_PRIVATE int sqlite3CheckCollSeq(Parse *, CollSeq *); SQLITE_PRIVATE int sqlite3CheckObjectName(Parse *, const char *); SQLITE_PRIVATE void sqlite3VdbeSetChanges(sqlite3 *, int); +SQLITE_PRIVATE int sqlite3AddInt64(i64*,i64); +SQLITE_PRIVATE int sqlite3SubInt64(i64*,i64); +SQLITE_PRIVATE int sqlite3MulInt64(i64*,i64); +SQLITE_PRIVATE int sqlite3AbsInt32(int); +#ifdef SQLITE_ENABLE_8_3_NAMES +SQLITE_PRIVATE void sqlite3FileSuffix3(const char*, char*); +#else +# define sqlite3FileSuffix3(X,Y) +#endif +SQLITE_PRIVATE u8 sqlite3GetBoolean(const char *z,u8); SQLITE_PRIVATE const void *sqlite3ValueText(sqlite3_value*, u8); SQLITE_PRIVATE int sqlite3ValueBytes(sqlite3_value*, u8); SQLITE_PRIVATE void sqlite3ValueSetStr(sqlite3_value*, int, const void *,u8, void(*)(void*)); +SQLITE_PRIVATE void sqlite3ValueSetNull(sqlite3_value*); SQLITE_PRIVATE void sqlite3ValueFree(sqlite3_value*); SQLITE_PRIVATE 
sqlite3_value *sqlite3ValueNew(sqlite3 *); SQLITE_PRIVATE char *sqlite3Utf16to8(sqlite3 *, const void*, int, u8); -#ifdef SQLITE_ENABLE_STAT2 -SQLITE_PRIVATE char *sqlite3Utf8to16(sqlite3 *, u8, char *, int, int *); -#endif SQLITE_PRIVATE int sqlite3ValueFromExpr(sqlite3 *, Expr *, u8, u8, sqlite3_value **); SQLITE_PRIVATE void sqlite3ValueApplyAffinity(sqlite3_value *, u8, u8); #ifndef SQLITE_AMALGAMATION SQLITE_PRIVATE const unsigned char sqlite3OpcodeProperty[]; +SQLITE_PRIVATE const char sqlite3StrBINARY[]; SQLITE_PRIVATE const unsigned char sqlite3UpperToLower[]; SQLITE_PRIVATE const unsigned char sqlite3CtypeMap[]; +SQLITE_PRIVATE const Token sqlite3IntTokens[]; SQLITE_PRIVATE SQLITE_WSD struct Sqlite3Config sqlite3Config; SQLITE_PRIVATE SQLITE_WSD FuncDefHash sqlite3GlobalFunctions; +#ifndef SQLITE_OMIT_WSD SQLITE_PRIVATE int sqlite3PendingByte; #endif -SQLITE_PRIVATE void sqlite3RootPageMoved(Db*, int, int); +#endif +SQLITE_PRIVATE void sqlite3RootPageMoved(sqlite3*, int, int, int); SQLITE_PRIVATE void sqlite3Reindex(Parse*, Token*, Token*); -SQLITE_PRIVATE void sqlite3AlterFunctions(sqlite3*); +SQLITE_PRIVATE void sqlite3AlterFunctions(void); SQLITE_PRIVATE void sqlite3AlterRenameTable(Parse*, SrcList*, Token*); SQLITE_PRIVATE int sqlite3GetToken(const unsigned char *, int *); SQLITE_PRIVATE void sqlite3NestedParse(Parse*, const char*, ...); SQLITE_PRIVATE void sqlite3ExpirePreparedStatements(sqlite3*); SQLITE_PRIVATE int sqlite3CodeSubselect(Parse *, Expr *, int, int); SQLITE_PRIVATE void sqlite3SelectPrep(Parse*, Select*, NameContext*); +SQLITE_PRIVATE void sqlite3SelectWrongNumTermsError(Parse *pParse, Select *p); +SQLITE_PRIVATE int sqlite3MatchSpanName(const char*, const char*, const char*, const char*); SQLITE_PRIVATE int sqlite3ResolveExprNames(NameContext*, Expr*); +SQLITE_PRIVATE int sqlite3ResolveExprListNames(NameContext*, ExprList*); SQLITE_PRIVATE void sqlite3ResolveSelectNames(Parse*, Select*, NameContext*); +SQLITE_PRIVATE void sqlite3ResolveSelfReference(Parse*,Table*,int,Expr*,ExprList*); SQLITE_PRIVATE int sqlite3ResolveOrderGroupBy(Parse*, Select*, ExprList*, const char*); SQLITE_PRIVATE void sqlite3ColumnDefault(Vdbe *, Table *, int, int); SQLITE_PRIVATE void sqlite3AlterFinishAddColumn(Parse *, Token *); SQLITE_PRIVATE void sqlite3AlterBeginAddColumn(Parse *, SrcList *); -SQLITE_PRIVATE CollSeq *sqlite3GetCollSeq(sqlite3*, u8, CollSeq *, const char*); -SQLITE_PRIVATE char sqlite3AffinityType(const char*); +SQLITE_PRIVATE CollSeq *sqlite3GetCollSeq(Parse*, u8, CollSeq *, const char*); +SQLITE_PRIVATE char sqlite3AffinityType(const char*, u8*); SQLITE_PRIVATE void sqlite3Analyze(Parse*, Token*, Token*); SQLITE_PRIVATE int sqlite3InvokeBusyHandler(BusyHandler*); SQLITE_PRIVATE int sqlite3FindDb(sqlite3*, Token*); SQLITE_PRIVATE int sqlite3FindDbName(sqlite3 *, const char *); SQLITE_PRIVATE int sqlite3AnalysisLoad(sqlite3*,int iDB); -SQLITE_PRIVATE void sqlite3DeleteIndexSamples(Index*); +SQLITE_PRIVATE void sqlite3DeleteIndexSamples(sqlite3*,Index*); SQLITE_PRIVATE void sqlite3DefaultRowEst(Index*); SQLITE_PRIVATE void sqlite3RegisterLikeFunctions(sqlite3*, int); SQLITE_PRIVATE int sqlite3IsLikeFunction(sqlite3*,Expr*,int*,char*); -SQLITE_PRIVATE void sqlite3MinimumFileFormat(Parse*, int, int); -SQLITE_PRIVATE void sqlite3SchemaFree(void *); +SQLITE_PRIVATE void sqlite3SchemaClear(void *); SQLITE_PRIVATE Schema *sqlite3SchemaGet(sqlite3 *, Btree *); SQLITE_PRIVATE int sqlite3SchemaToIndex(sqlite3 *db, Schema *); -SQLITE_PRIVATE KeyInfo *sqlite3IndexKeyinfo(Parse 
*, Index *); +SQLITE_PRIVATE KeyInfo *sqlite3KeyInfoAlloc(sqlite3*,int,int); +SQLITE_PRIVATE void sqlite3KeyInfoUnref(KeyInfo*); +SQLITE_PRIVATE KeyInfo *sqlite3KeyInfoRef(KeyInfo*); +SQLITE_PRIVATE KeyInfo *sqlite3KeyInfoOfIndex(Parse*, Index*); +#ifdef SQLITE_DEBUG +SQLITE_PRIVATE int sqlite3KeyInfoIsWriteable(KeyInfo*); +#endif SQLITE_PRIVATE int sqlite3CreateFunc(sqlite3 *, const char *, int, int, void *, void (*)(sqlite3_context*,int,sqlite3_value **), - void (*)(sqlite3_context*,int,sqlite3_value **), void (*)(sqlite3_context*)); + void (*)(sqlite3_context*,int,sqlite3_value **), void (*)(sqlite3_context*), + FuncDestructor *pDestructor +); +SQLITE_PRIVATE void sqlite3OomFault(sqlite3*); +SQLITE_PRIVATE void sqlite3OomClear(sqlite3*); SQLITE_PRIVATE int sqlite3ApiExit(sqlite3 *db, int); SQLITE_PRIVATE int sqlite3OpenTempDatabase(Parse *); -SQLITE_PRIVATE void sqlite3StrAccumInit(StrAccum*, char*, int, int); +SQLITE_PRIVATE void sqlite3StrAccumInit(StrAccum*, sqlite3*, char*, int, int); SQLITE_PRIVATE void sqlite3StrAccumAppend(StrAccum*,const char*,int); +SQLITE_PRIVATE void sqlite3StrAccumAppendAll(StrAccum*,const char*); +SQLITE_PRIVATE void sqlite3AppendChar(StrAccum*,int,char); SQLITE_PRIVATE char *sqlite3StrAccumFinish(StrAccum*); SQLITE_PRIVATE void sqlite3StrAccumReset(StrAccum*); SQLITE_PRIVATE void sqlite3SelectDestInit(SelectDest*,int,int); SQLITE_PRIVATE Expr *sqlite3CreateColumnExpr(sqlite3 *, SrcList *, int, int); SQLITE_PRIVATE void sqlite3BackupRestart(sqlite3_backup *); SQLITE_PRIVATE void sqlite3BackupUpdate(sqlite3_backup *, Pgno, const u8 *); +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 +SQLITE_PRIVATE void sqlite3AnalyzeFunctions(void); +SQLITE_PRIVATE int sqlite3Stat4ProbeSetValue(Parse*,Index*,UnpackedRecord**,Expr*,u8,int,int*); +SQLITE_PRIVATE int sqlite3Stat4ValueFromExpr(Parse*, Expr*, u8, sqlite3_value**); +SQLITE_PRIVATE void sqlite3Stat4ProbeFree(UnpackedRecord*); +SQLITE_PRIVATE int sqlite3Stat4Column(sqlite3*, const void*, int, int, sqlite3_value**); +#endif + /* ** The interface to the LEMON-generated parser */ -SQLITE_PRIVATE void *sqlite3ParserAlloc(void*(*)(size_t)); +SQLITE_PRIVATE void *sqlite3ParserAlloc(void*(*)(u64)); SQLITE_PRIVATE void sqlite3ParserFree(void*, void(*)(void*)); SQLITE_PRIVATE void sqlite3Parser(void*, int, Token, Parse*); #ifdef YYTRACKMAXSTACKDEPTH SQLITE_PRIVATE int sqlite3ParserStackPeak(void*); #endif @@ -10535,64 +14735,88 @@ # define sqlite3VtabCommit(X) # define sqlite3VtabInSync(db) 0 # define sqlite3VtabLock(X) # define sqlite3VtabUnlock(X) # define sqlite3VtabUnlockList(X) +# define sqlite3VtabSavepoint(X, Y, Z) SQLITE_OK +# define sqlite3GetVTable(X,Y) ((VTable*)0) #else -SQLITE_PRIVATE void sqlite3VtabClear(Table*); -SQLITE_PRIVATE int sqlite3VtabSync(sqlite3 *db, char **); +SQLITE_PRIVATE void sqlite3VtabClear(sqlite3 *db, Table*); +SQLITE_PRIVATE void sqlite3VtabDisconnect(sqlite3 *db, Table *p); +SQLITE_PRIVATE int sqlite3VtabSync(sqlite3 *db, Vdbe*); SQLITE_PRIVATE int sqlite3VtabRollback(sqlite3 *db); SQLITE_PRIVATE int sqlite3VtabCommit(sqlite3 *db); SQLITE_PRIVATE void sqlite3VtabLock(VTable *); SQLITE_PRIVATE void sqlite3VtabUnlock(VTable *); SQLITE_PRIVATE void sqlite3VtabUnlockList(sqlite3*); +SQLITE_PRIVATE int sqlite3VtabSavepoint(sqlite3 *, int, int); +SQLITE_PRIVATE void sqlite3VtabImportErrmsg(Vdbe*, sqlite3_vtab*); +SQLITE_PRIVATE VTable *sqlite3GetVTable(sqlite3*, Table*); # define sqlite3VtabInSync(db) ((db)->nVTrans>0 && (db)->aVTrans==0) #endif +SQLITE_PRIVATE int 
sqlite3VtabEponymousTableInit(Parse*,Module*); +SQLITE_PRIVATE void sqlite3VtabEponymousTableClear(sqlite3*,Module*); SQLITE_PRIVATE void sqlite3VtabMakeWritable(Parse*,Table*); -SQLITE_PRIVATE void sqlite3VtabBeginParse(Parse*, Token*, Token*, Token*); +SQLITE_PRIVATE void sqlite3VtabBeginParse(Parse*, Token*, Token*, Token*, int); SQLITE_PRIVATE void sqlite3VtabFinishParse(Parse*, Token*); SQLITE_PRIVATE void sqlite3VtabArgInit(Parse*); SQLITE_PRIVATE void sqlite3VtabArgExtend(Parse*, Token*); SQLITE_PRIVATE int sqlite3VtabCallCreate(sqlite3*, int, const char *, char **); SQLITE_PRIVATE int sqlite3VtabCallConnect(Parse*, Table*); SQLITE_PRIVATE int sqlite3VtabCallDestroy(sqlite3*, int, const char *); SQLITE_PRIVATE int sqlite3VtabBegin(sqlite3 *, VTable *); SQLITE_PRIVATE FuncDef *sqlite3VtabOverloadFunction(sqlite3 *,FuncDef*, int nArg, Expr*); SQLITE_PRIVATE void sqlite3InvalidFunction(sqlite3_context*,int,sqlite3_value**); +SQLITE_PRIVATE sqlite3_int64 sqlite3StmtCurrentTime(sqlite3_context*); SQLITE_PRIVATE int sqlite3VdbeParameterIndex(Vdbe*, const char*, int); SQLITE_PRIVATE int sqlite3TransferBindings(sqlite3_stmt *, sqlite3_stmt *); +SQLITE_PRIVATE void sqlite3ParserReset(Parse*); SQLITE_PRIVATE int sqlite3Reprepare(Vdbe*); SQLITE_PRIVATE void sqlite3ExprListCheckLength(Parse*, ExprList*, const char*); SQLITE_PRIVATE CollSeq *sqlite3BinaryCompareCollSeq(Parse *, Expr *, Expr *); SQLITE_PRIVATE int sqlite3TempInMemory(const sqlite3*); -SQLITE_PRIVATE VTable *sqlite3GetVTable(sqlite3*, Table*); +SQLITE_PRIVATE const char *sqlite3JournalModename(int); +#ifndef SQLITE_OMIT_WAL +SQLITE_PRIVATE int sqlite3Checkpoint(sqlite3*, int, int, int*, int*); +SQLITE_PRIVATE int sqlite3WalDefaultHook(void*,sqlite3*,const char*,int); +#endif +#ifndef SQLITE_OMIT_CTE +SQLITE_PRIVATE With *sqlite3WithAdd(Parse*,With*,Token*,ExprList*,Select*); +SQLITE_PRIVATE void sqlite3WithDelete(sqlite3*,With*); +SQLITE_PRIVATE void sqlite3WithPush(Parse*, With*, u8); +#else +#define sqlite3WithPush(x,y,z) +#define sqlite3WithDelete(x,y) +#endif /* Declarations for functions in fkey.c. All of these are replaced by ** no-op macros if OMIT_FOREIGN_KEY is defined. In this case no foreign ** key functionality is available. If OMIT_TRIGGER is defined but ** OMIT_FOREIGN_KEY is not, only some of the functions are no-oped. In ** this case foreign keys are parsed, but no other functionality is ** provided (enforcement of FK constraints requires the triggers sub-system). 
*/ #if !defined(SQLITE_OMIT_FOREIGN_KEY) && !defined(SQLITE_OMIT_TRIGGER) -SQLITE_PRIVATE void sqlite3FkCheck(Parse*, Table*, int, int); +SQLITE_PRIVATE void sqlite3FkCheck(Parse*, Table*, int, int, int*, int); SQLITE_PRIVATE void sqlite3FkDropTable(Parse*, SrcList *, Table*); -SQLITE_PRIVATE void sqlite3FkActions(Parse*, Table*, ExprList*, int); +SQLITE_PRIVATE void sqlite3FkActions(Parse*, Table*, ExprList*, int, int*, int); SQLITE_PRIVATE int sqlite3FkRequired(Parse*, Table*, int*, int); SQLITE_PRIVATE u32 sqlite3FkOldmask(Parse*, Table*); SQLITE_PRIVATE FKey *sqlite3FkReferences(Table *); #else - #define sqlite3FkActions(a,b,c,d) - #define sqlite3FkCheck(a,b,c,d) + #define sqlite3FkActions(a,b,c,d,e,f) + #define sqlite3FkCheck(a,b,c,d,e,f) #define sqlite3FkDropTable(a,b,c) - #define sqlite3FkOldmask(a,b) 0 - #define sqlite3FkRequired(a,b,c,d) 0 + #define sqlite3FkOldmask(a,b) 0 + #define sqlite3FkRequired(a,b,c,d) 0 #endif #ifndef SQLITE_OMIT_FOREIGN_KEY -SQLITE_PRIVATE void sqlite3FkDelete(Table*); +SQLITE_PRIVATE void sqlite3FkDelete(sqlite3 *, Table*); +SQLITE_PRIVATE int sqlite3FkLocateIndex(Parse*,Table*,FKey*,Index**,int**); #else - #define sqlite3FkDelete(a) + #define sqlite3FkDelete(a,b) + #define sqlite3FkLocateIndex(a,b,c,d,e) #endif /* ** Available fault injectors. Should be numbered beginning with 0. @@ -10611,33 +14835,45 @@ #else #define sqlite3BeginBenignMalloc() #define sqlite3EndBenignMalloc() #endif -#define IN_INDEX_ROWID 1 -#define IN_INDEX_EPH 2 -#define IN_INDEX_INDEX 3 -SQLITE_PRIVATE int sqlite3FindInIndex(Parse *, Expr *, int*); +/* +** Allowed return values from sqlite3FindInIndex() +*/ +#define IN_INDEX_ROWID 1 /* Search the rowid of the table */ +#define IN_INDEX_EPH 2 /* Search an ephemeral b-tree */ +#define IN_INDEX_INDEX_ASC 3 /* Existing index ASCENDING */ +#define IN_INDEX_INDEX_DESC 4 /* Existing index DESCENDING */ +#define IN_INDEX_NOOP 5 /* No table available. Use comparisons */ +/* +** Allowed flags for the 3rd parameter to sqlite3FindInIndex(). +*/ +#define IN_INDEX_NOOP_OK 0x0001 /* OK to return IN_INDEX_NOOP */ +#define IN_INDEX_MEMBERSHIP 0x0002 /* IN operator used for membership test */ +#define IN_INDEX_LOOP 0x0004 /* IN operator used as a loop */ +SQLITE_PRIVATE int sqlite3FindInIndex(Parse *, Expr *, u32, int*); #ifdef SQLITE_ENABLE_ATOMIC_WRITE SQLITE_PRIVATE int sqlite3JournalOpen(sqlite3_vfs *, const char *, sqlite3_file *, int, int); SQLITE_PRIVATE int sqlite3JournalSize(sqlite3_vfs *); SQLITE_PRIVATE int sqlite3JournalCreate(sqlite3_file *); +SQLITE_PRIVATE int sqlite3JournalExists(sqlite3_file *p); #else #define sqlite3JournalSize(pVfs) ((pVfs)->szOsFile) + #define sqlite3JournalExists(p) 1 #endif SQLITE_PRIVATE void sqlite3MemJournalOpen(sqlite3_file *); SQLITE_PRIVATE int sqlite3MemJournalSize(void); SQLITE_PRIVATE int sqlite3IsMemJournal(sqlite3_file *); +SQLITE_PRIVATE void sqlite3ExprSetHeightAndFlags(Parse *pParse, Expr *p); #if SQLITE_MAX_EXPR_DEPTH>0 -SQLITE_PRIVATE void sqlite3ExprSetHeight(Parse *pParse, Expr *p); SQLITE_PRIVATE int sqlite3SelectExprHeight(Select *); SQLITE_PRIVATE int sqlite3ExprCheckHeight(Parse*, int); #else - #define sqlite3ExprSetHeight(x,y) #define sqlite3SelectExprHeight(x) 0 #define sqlite3ExprCheckHeight(x,y) #endif SQLITE_PRIVATE u32 sqlite3Get4byte(const u8*); @@ -10663,11 +14899,11 @@ ** print I/O tracing messages. 
*/ #ifdef SQLITE_ENABLE_IOTRACE # define IOTRACE(A) if( sqlite3IoTrace ){ sqlite3IoTrace A; } SQLITE_PRIVATE void sqlite3VdbeIOTraceSql(Vdbe*); -SQLITE_PRIVATE void (*sqlite3IoTrace)(const char*,...); +SQLITE_API SQLITE_EXTERN void (SQLITE_CDECL *sqlite3IoTrace)(const char*,...); #else # define IOTRACE(A) # define sqlite3VdbeIOTraceSql(X) #endif @@ -10681,36 +14917,51 @@ ** a single bit set. ** ** sqlite3MemdebugHasType() returns true if any of the bits in its second ** argument match the type set by the previous sqlite3MemdebugSetType(). ** sqlite3MemdebugHasType() is intended for use inside assert() statements. -** For example: ** -** assert( sqlite3MemdebugHasType(p, MEMTYPE_HEAP) ); +** sqlite3MemdebugNoType() returns true if none of the bits in its second +** argument match the type set by the previous sqlite3MemdebugSetType(). ** ** Perhaps the most important point is the difference between MEMTYPE_HEAP -** and MEMTYPE_DB. If an allocation is MEMTYPE_DB, that means it might have -** been allocated by lookaside, except the allocation was too large or -** lookaside was already full. It is important to verify that allocations -** that might have been satisfied by lookaside are not passed back to -** non-lookaside free() routines. Asserts such as the example above are -** placed on the non-lookaside free() routines to verify this constraint. +** and MEMTYPE_LOOKASIDE. If an allocation is MEMTYPE_LOOKASIDE, that means +** it might have been allocated by lookaside, except the allocation was +** too large or lookaside was already full. It is important to verify +** that allocations that might have been satisfied by lookaside are not +** passed back to non-lookaside free() routines. Asserts such as the +** example above are placed on the non-lookaside free() routines to verify +** this constraint. ** ** All of this is no-op for a production build. It only comes into ** play when the SQLITE_MEMDEBUG compile-time option is used. */ #ifdef SQLITE_MEMDEBUG SQLITE_PRIVATE void sqlite3MemdebugSetType(void*,u8); SQLITE_PRIVATE int sqlite3MemdebugHasType(void*,u8); +SQLITE_PRIVATE int sqlite3MemdebugNoType(void*,u8); #else # define sqlite3MemdebugSetType(X,Y) /* no-op */ # define sqlite3MemdebugHasType(X,Y) 1 +# define sqlite3MemdebugNoType(X,Y) 1 #endif -#define MEMTYPE_HEAP 0x01 /* General heap allocations */ -#define MEMTYPE_DB 0x02 /* Associated with a database connection */ -#define MEMTYPE_SCRATCH 0x04 /* Scratch allocations */ -#define MEMTYPE_PCACHE 0x08 /* Page cache allocations */ +#define MEMTYPE_HEAP 0x01 /* General heap allocations */ +#define MEMTYPE_LOOKASIDE 0x02 /* Heap that might have been lookaside */ +#define MEMTYPE_SCRATCH 0x04 /* Scratch allocations */ +#define MEMTYPE_PCACHE 0x08 /* Page cache allocations */ + +/* +** Threading interface +*/ +#if SQLITE_MAX_WORKER_THREADS>0 +SQLITE_PRIVATE int sqlite3ThreadCreate(SQLiteThread**,void*(*)(void*),void*); +SQLITE_PRIVATE int sqlite3ThreadJoin(SQLiteThread*, void**); +#endif + +#if defined(SQLITE_ENABLE_DBSTAT_VTAB) || defined(SQLITE_TEST) +SQLITE_PRIVATE int sqlite3DbstatRegister(sqlite3*); +#endif #endif /* _SQLITEINT_H_ */ /************** End of sqliteInt.h *******************************************/ /************** Begin file global.c ******************************************/ @@ -10724,12 +14975,13 @@ ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. 
** ************************************************************************* ** -** This file contains definitions of global variables and contants. +** This file contains definitions of global variables and constants. */ +/* #include "sqliteInt.h" */ /* An array to map all upper-case characters into their corresponding ** lower-case character. ** ** SQLite only considers US-ASCII (or EBCDIC) characters. We do not @@ -10759,20 +15011,20 @@ 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, /* 1x */ 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, /* 2x */ 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, /* 3x */ 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, /* 4x */ 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, /* 5x */ - 96, 97, 66, 67, 68, 69, 70, 71, 72, 73,106,107,108,109,110,111, /* 6x */ - 112, 81, 82, 83, 84, 85, 86, 87, 88, 89,122,123,124,125,126,127, /* 7x */ + 96, 97, 98, 99,100,101,102,103,104,105,106,107,108,109,110,111, /* 6x */ + 112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127, /* 7x */ 128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143, /* 8x */ - 144,145,146,147,148,149,150,151,152,153,154,155,156,157,156,159, /* 9x */ + 144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159, /* 9x */ 160,161,162,163,164,165,166,167,168,169,170,171,140,141,142,175, /* Ax */ 176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191, /* Bx */ 192,129,130,131,132,133,134,135,136,137,202,203,204,205,206,207, /* Cx */ 208,145,146,147,148,149,150,151,152,153,218,219,220,221,222,223, /* Dx */ - 224,225,162,163,164,165,166,167,168,169,232,203,204,205,206,207, /* Ex */ - 239,240,241,242,243,244,245,246,247,248,249,219,220,221,222,255, /* Fx */ + 224,225,162,163,164,165,166,167,168,169,234,235,236,237,238,239, /* Ex */ + 240,241,242,243,244,245,246,247,248,249,250,251,252,253,254,255, /* Fx */ #endif }; /* ** The following 256 byte lookup table is used to support SQLites built-in @@ -10842,56 +15094,110 @@ 0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40, /* f0..f7 ........ */ 0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40 /* f8..ff ........ */ }; #endif +/* EVIDENCE-OF: R-02982-34736 In order to maintain full backwards +** compatibility for legacy applications, the URI filename capability is +** disabled by default. +** +** EVIDENCE-OF: R-38799-08373 URI filenames can be enabled or disabled +** using the SQLITE_USE_URI=1 or SQLITE_USE_URI=0 compile-time options. +** +** EVIDENCE-OF: R-43642-56306 By default, URI handling is globally +** disabled. The default value may be changed by compiling with the +** SQLITE_USE_URI symbol defined. +*/ +#ifndef SQLITE_USE_URI +# define SQLITE_USE_URI 0 +#endif +/* EVIDENCE-OF: R-38720-18127 The default setting is determined by the +** SQLITE_ALLOW_COVERING_INDEX_SCAN compile-time option, or is "on" if +** that compile-time option is omitted. +*/ +#ifndef SQLITE_ALLOW_COVERING_INDEX_SCAN +# define SQLITE_ALLOW_COVERING_INDEX_SCAN 1 +#endif + +/* The minimum PMA size is set to this value multiplied by the database +** page size in bytes. +*/ +#ifndef SQLITE_SORTER_PMASZ +# define SQLITE_SORTER_PMASZ 250 +#endif /* ** The following singleton contains the global configuration for ** the SQLite library. 
*/ SQLITE_PRIVATE SQLITE_WSD struct Sqlite3Config sqlite3Config = { SQLITE_DEFAULT_MEMSTATUS, /* bMemstat */ 1, /* bCoreMutex */ SQLITE_THREADSAFE==1, /* bFullMutex */ + SQLITE_USE_URI, /* bOpenUri */ + SQLITE_ALLOW_COVERING_INDEX_SCAN, /* bUseCis */ 0x7ffffffe, /* mxStrlen */ - 100, /* szLookaside */ + 0, /* neverCorrupt */ + 128, /* szLookaside */ 500, /* nLookaside */ {0,0,0,0,0,0,0,0}, /* m */ {0,0,0,0,0,0,0,0,0}, /* mutex */ - {0,0,0,0,0,0,0,0,0,0,0}, /* pcache */ + {0,0,0,0,0,0,0,0,0,0,0,0,0},/* pcache2 */ (void*)0, /* pHeap */ 0, /* nHeap */ 0, 0, /* mnHeap, mxHeap */ + SQLITE_DEFAULT_MMAP_SIZE, /* szMmap */ + SQLITE_MAX_MMAP_SIZE, /* mxMmap */ (void*)0, /* pScratch */ 0, /* szScratch */ 0, /* nScratch */ (void*)0, /* pPage */ 0, /* szPage */ - 0, /* nPage */ + SQLITE_DEFAULT_PCACHE_INITSZ, /* nPage */ 0, /* mxParserStack */ 0, /* sharedCacheEnabled */ + SQLITE_SORTER_PMASZ, /* szPma */ /* All the rest should always be initialized to zero */ 0, /* isInit */ 0, /* inProgress */ 0, /* isMutexInit */ 0, /* isMallocInit */ 0, /* isPCacheInit */ - 0, /* pInitMutex */ 0, /* nRefInitMutex */ + 0, /* pInitMutex */ 0, /* xLog */ 0, /* pLogArg */ +#ifdef SQLITE_ENABLE_SQLLOG + 0, /* xSqllog */ + 0, /* pSqllogArg */ +#endif +#ifdef SQLITE_VDBE_COVERAGE + 0, /* xVdbeBranch */ + 0, /* pVbeBranchArg */ +#endif +#ifndef SQLITE_OMIT_BUILTIN_TEST + 0, /* xTestCallback */ +#endif + 0 /* bLocaltimeFault */ }; - /* ** Hash table for global functions - functions common to all ** database connections. After initialization, this table is ** read-only. */ SQLITE_PRIVATE SQLITE_WSD FuncDefHash sqlite3GlobalFunctions; + +/* +** Constant tokens for values 0 and 1. +*/ +SQLITE_PRIVATE const Token sqlite3IntTokens[] = { + { "0", 1 }, + { "1", 1 } +}; + /* ** The value of the "pending" byte must be 0x40000000 (1 byte past the ** 1-gibabyte boundary) in a compatible database. SQLite never uses ** the database page that contains the pending byte. It never attempts @@ -10904,23 +15210,31 @@ ** than 1 GiB. The sqlite3_test_control() interface can be used to ** move the pending byte. ** ** IMPORTANT: Changing the pending byte to any value other than ** 0x40000000 results in an incompatible database file format! -** Changing the pending byte during operating results in undefined -** and dileterious behavior. +** Changing the pending byte during operation will result in undefined +** and incorrect behavior. */ +#ifndef SQLITE_OMIT_WSD SQLITE_PRIVATE int sqlite3PendingByte = 0x40000000; +#endif +/* #include "opcodes.h" */ /* ** Properties of opcodes. The OPFLG_INITIALIZER macro is ** created by mkopcodeh.awk during compilation. Data is obtained ** from the comments following the "case OP_xxxx:" statements in ** the vdbe.c file. */ SQLITE_PRIVATE const unsigned char sqlite3OpcodeProperty[] = OPFLG_INITIALIZER; +/* +** Name of the default collating sequence +*/ +SQLITE_PRIVATE const char sqlite3StrBINARY[] = "BINARY"; + /************** End of global.c **********************************************/ /************** Begin file ctime.c *******************************************/ /* ** 2010 February 23 ** @@ -10937,10 +15251,11 @@ ** SQLite was built with. */ #ifndef SQLITE_OMIT_COMPILEOPTION_DIAGS +/* #include "sqliteInt.h" */ /* ** An array of names of all compile-time options. This array should ** be sorted A-Z. ** @@ -10953,326 +15268,370 @@ /* These macros are provided to "stringify" the value of the define ** for those options in which the value is meaningful. 
*/ #define CTIMEOPT_VAL_(opt) #opt #define CTIMEOPT_VAL(opt) CTIMEOPT_VAL_(opt) -#ifdef SQLITE_32BIT_ROWID +#if SQLITE_32BIT_ROWID "32BIT_ROWID", #endif -#ifdef SQLITE_4_BYTE_ALIGNED_MALLOC +#if SQLITE_4_BYTE_ALIGNED_MALLOC "4_BYTE_ALIGNED_MALLOC", #endif -#ifdef SQLITE_CASE_SENSITIVE_LIKE +#if SQLITE_CASE_SENSITIVE_LIKE "CASE_SENSITIVE_LIKE", #endif -#ifdef SQLITE_CHECK_PAGES +#if SQLITE_CHECK_PAGES "CHECK_PAGES", #endif -#ifdef SQLITE_COVERAGE_TEST +#if SQLITE_COVERAGE_TEST "COVERAGE_TEST", #endif -#ifdef SQLITE_DEBUG +#if SQLITE_DEBUG "DEBUG", #endif -#ifdef SQLITE_DEFAULT_LOCKING_MODE +#if SQLITE_DEFAULT_LOCKING_MODE "DEFAULT_LOCKING_MODE=" CTIMEOPT_VAL(SQLITE_DEFAULT_LOCKING_MODE), #endif -#ifdef SQLITE_DISABLE_DIRSYNC +#if defined(SQLITE_DEFAULT_MMAP_SIZE) && !defined(SQLITE_DEFAULT_MMAP_SIZE_xc) + "DEFAULT_MMAP_SIZE=" CTIMEOPT_VAL(SQLITE_DEFAULT_MMAP_SIZE), +#endif +#if SQLITE_DISABLE_DIRSYNC "DISABLE_DIRSYNC", #endif -#ifdef SQLITE_DISABLE_LFS +#if SQLITE_DISABLE_LFS "DISABLE_LFS", #endif -#ifdef SQLITE_ENABLE_ATOMIC_WRITE +#if SQLITE_ENABLE_8_3_NAMES + "ENABLE_8_3_NAMES", +#endif +#if SQLITE_ENABLE_API_ARMOR + "ENABLE_API_ARMOR", +#endif +#if SQLITE_ENABLE_ATOMIC_WRITE "ENABLE_ATOMIC_WRITE", #endif -#ifdef SQLITE_ENABLE_CEROD +#if SQLITE_ENABLE_CEROD "ENABLE_CEROD", #endif -#ifdef SQLITE_ENABLE_COLUMN_METADATA +#if SQLITE_ENABLE_COLUMN_METADATA "ENABLE_COLUMN_METADATA", #endif -#ifdef SQLITE_ENABLE_EXPENSIVE_ASSERT +#if SQLITE_ENABLE_DBSTAT_VTAB + "ENABLE_DBSTAT_VTAB", +#endif +#if SQLITE_ENABLE_EXPENSIVE_ASSERT "ENABLE_EXPENSIVE_ASSERT", #endif -#ifdef SQLITE_ENABLE_FTS1 +#if SQLITE_ENABLE_FTS1 "ENABLE_FTS1", #endif -#ifdef SQLITE_ENABLE_FTS2 +#if SQLITE_ENABLE_FTS2 "ENABLE_FTS2", #endif -#ifdef SQLITE_ENABLE_FTS3 +#if SQLITE_ENABLE_FTS3 "ENABLE_FTS3", #endif -#ifdef SQLITE_ENABLE_FTS3_PARENTHESIS +#if SQLITE_ENABLE_FTS3_PARENTHESIS "ENABLE_FTS3_PARENTHESIS", #endif -#ifdef SQLITE_ENABLE_FTS4 +#if SQLITE_ENABLE_FTS4 "ENABLE_FTS4", #endif -#ifdef SQLITE_ENABLE_ICU +#if SQLITE_ENABLE_FTS5 + "ENABLE_FTS5", +#endif +#if SQLITE_ENABLE_ICU "ENABLE_ICU", #endif -#ifdef SQLITE_ENABLE_IOTRACE +#if SQLITE_ENABLE_IOTRACE "ENABLE_IOTRACE", #endif -#ifdef SQLITE_ENABLE_LOAD_EXTENSION +#if SQLITE_ENABLE_JSON1 + "ENABLE_JSON1", +#endif +#if SQLITE_ENABLE_LOAD_EXTENSION "ENABLE_LOAD_EXTENSION", #endif -#ifdef SQLITE_ENABLE_LOCKING_STYLE +#if SQLITE_ENABLE_LOCKING_STYLE "ENABLE_LOCKING_STYLE=" CTIMEOPT_VAL(SQLITE_ENABLE_LOCKING_STYLE), #endif -#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT +#if SQLITE_ENABLE_MEMORY_MANAGEMENT "ENABLE_MEMORY_MANAGEMENT", #endif -#ifdef SQLITE_ENABLE_MEMSYS3 +#if SQLITE_ENABLE_MEMSYS3 "ENABLE_MEMSYS3", #endif -#ifdef SQLITE_ENABLE_MEMSYS5 +#if SQLITE_ENABLE_MEMSYS5 "ENABLE_MEMSYS5", #endif -#ifdef SQLITE_ENABLE_OVERSIZE_CELL_CHECK +#if SQLITE_ENABLE_OVERSIZE_CELL_CHECK "ENABLE_OVERSIZE_CELL_CHECK", #endif -#ifdef SQLITE_ENABLE_RTREE +#if SQLITE_ENABLE_RTREE "ENABLE_RTREE", #endif -#ifdef SQLITE_ENABLE_STAT2 - "ENABLE_STAT2", +#if defined(SQLITE_ENABLE_STAT4) + "ENABLE_STAT4", +#elif defined(SQLITE_ENABLE_STAT3) + "ENABLE_STAT3", #endif -#ifdef SQLITE_ENABLE_UNLOCK_NOTIFY +#if SQLITE_ENABLE_UNLOCK_NOTIFY "ENABLE_UNLOCK_NOTIFY", #endif -#ifdef SQLITE_ENABLE_UPDATE_DELETE_LIMIT +#if SQLITE_ENABLE_UPDATE_DELETE_LIMIT "ENABLE_UPDATE_DELETE_LIMIT", #endif -#ifdef SQLITE_HAS_CODEC +#if SQLITE_HAS_CODEC "HAS_CODEC", #endif -#ifdef SQLITE_HAVE_ISNAN +#if HAVE_ISNAN || SQLITE_HAVE_ISNAN "HAVE_ISNAN", #endif -#ifdef SQLITE_HOMEGROWN_RECURSIVE_MUTEX +#if 
SQLITE_HOMEGROWN_RECURSIVE_MUTEX "HOMEGROWN_RECURSIVE_MUTEX", #endif -#ifdef SQLITE_IGNORE_AFP_LOCK_ERRORS +#if SQLITE_IGNORE_AFP_LOCK_ERRORS "IGNORE_AFP_LOCK_ERRORS", #endif -#ifdef SQLITE_IGNORE_FLOCK_LOCK_ERRORS +#if SQLITE_IGNORE_FLOCK_LOCK_ERRORS "IGNORE_FLOCK_LOCK_ERRORS", #endif #ifdef SQLITE_INT64_TYPE "INT64_TYPE", #endif -#ifdef SQLITE_LOCK_TRACE +#ifdef SQLITE_LIKE_DOESNT_MATCH_BLOBS + "LIKE_DOESNT_MATCH_BLOBS", +#endif +#if SQLITE_LOCK_TRACE "LOCK_TRACE", #endif -#ifdef SQLITE_MEMDEBUG +#if defined(SQLITE_MAX_MMAP_SIZE) && !defined(SQLITE_MAX_MMAP_SIZE_xc) + "MAX_MMAP_SIZE=" CTIMEOPT_VAL(SQLITE_MAX_MMAP_SIZE), +#endif +#ifdef SQLITE_MAX_SCHEMA_RETRY + "MAX_SCHEMA_RETRY=" CTIMEOPT_VAL(SQLITE_MAX_SCHEMA_RETRY), +#endif +#if SQLITE_MEMDEBUG "MEMDEBUG", #endif -#ifdef SQLITE_MIXED_ENDIAN_64BIT_FLOAT +#if SQLITE_MIXED_ENDIAN_64BIT_FLOAT "MIXED_ENDIAN_64BIT_FLOAT", #endif -#ifdef SQLITE_NO_SYNC +#if SQLITE_NO_SYNC "NO_SYNC", #endif -#ifdef SQLITE_OMIT_ALTERTABLE +#if SQLITE_OMIT_ALTERTABLE "OMIT_ALTERTABLE", #endif -#ifdef SQLITE_OMIT_ANALYZE +#if SQLITE_OMIT_ANALYZE "OMIT_ANALYZE", #endif -#ifdef SQLITE_OMIT_ATTACH +#if SQLITE_OMIT_ATTACH "OMIT_ATTACH", #endif -#ifdef SQLITE_OMIT_AUTHORIZATION +#if SQLITE_OMIT_AUTHORIZATION "OMIT_AUTHORIZATION", #endif -#ifdef SQLITE_OMIT_AUTOINCREMENT +#if SQLITE_OMIT_AUTOINCREMENT "OMIT_AUTOINCREMENT", #endif -#ifdef SQLITE_OMIT_AUTOINIT +#if SQLITE_OMIT_AUTOINIT "OMIT_AUTOINIT", #endif -#ifdef SQLITE_OMIT_AUTOMATIC_INDEX +#if SQLITE_OMIT_AUTOMATIC_INDEX "OMIT_AUTOMATIC_INDEX", #endif -#ifdef SQLITE_OMIT_AUTOVACUUM +#if SQLITE_OMIT_AUTORESET + "OMIT_AUTORESET", +#endif +#if SQLITE_OMIT_AUTOVACUUM "OMIT_AUTOVACUUM", #endif -#ifdef SQLITE_OMIT_BETWEEN_OPTIMIZATION +#if SQLITE_OMIT_BETWEEN_OPTIMIZATION "OMIT_BETWEEN_OPTIMIZATION", #endif -#ifdef SQLITE_OMIT_BLOB_LITERAL +#if SQLITE_OMIT_BLOB_LITERAL "OMIT_BLOB_LITERAL", #endif -#ifdef SQLITE_OMIT_BTREECOUNT +#if SQLITE_OMIT_BTREECOUNT "OMIT_BTREECOUNT", #endif -#ifdef SQLITE_OMIT_BUILTIN_TEST +#if SQLITE_OMIT_BUILTIN_TEST "OMIT_BUILTIN_TEST", #endif -#ifdef SQLITE_OMIT_CAST +#if SQLITE_OMIT_CAST "OMIT_CAST", #endif -#ifdef SQLITE_OMIT_CHECK +#if SQLITE_OMIT_CHECK "OMIT_CHECK", #endif -#ifdef SQLITE_OMIT_COMPILEOPTION_DIAGS - "OMIT_COMPILEOPTION_DIAGS", -#endif -#ifdef SQLITE_OMIT_COMPLETE +#if SQLITE_OMIT_COMPLETE "OMIT_COMPLETE", #endif -#ifdef SQLITE_OMIT_COMPOUND_SELECT +#if SQLITE_OMIT_COMPOUND_SELECT "OMIT_COMPOUND_SELECT", #endif -#ifdef SQLITE_OMIT_DATETIME_FUNCS +#if SQLITE_OMIT_CTE + "OMIT_CTE", +#endif +#if SQLITE_OMIT_DATETIME_FUNCS "OMIT_DATETIME_FUNCS", #endif -#ifdef SQLITE_OMIT_DECLTYPE +#if SQLITE_OMIT_DECLTYPE "OMIT_DECLTYPE", #endif -#ifdef SQLITE_OMIT_DEPRECATED +#if SQLITE_OMIT_DEPRECATED "OMIT_DEPRECATED", #endif -#ifdef SQLITE_OMIT_DISKIO +#if SQLITE_OMIT_DISKIO "OMIT_DISKIO", #endif -#ifdef SQLITE_OMIT_EXPLAIN +#if SQLITE_OMIT_EXPLAIN "OMIT_EXPLAIN", #endif -#ifdef SQLITE_OMIT_FLAG_PRAGMAS +#if SQLITE_OMIT_FLAG_PRAGMAS "OMIT_FLAG_PRAGMAS", #endif -#ifdef SQLITE_OMIT_FLOATING_POINT +#if SQLITE_OMIT_FLOATING_POINT "OMIT_FLOATING_POINT", #endif -#ifdef SQLITE_OMIT_FOREIGN_KEY +#if SQLITE_OMIT_FOREIGN_KEY "OMIT_FOREIGN_KEY", #endif -#ifdef SQLITE_OMIT_GET_TABLE +#if SQLITE_OMIT_GET_TABLE "OMIT_GET_TABLE", #endif -#ifdef SQLITE_OMIT_GLOBALRECOVER - "OMIT_GLOBALRECOVER", -#endif -#ifdef SQLITE_OMIT_INCRBLOB +#if SQLITE_OMIT_INCRBLOB "OMIT_INCRBLOB", #endif -#ifdef SQLITE_OMIT_INTEGRITY_CHECK +#if SQLITE_OMIT_INTEGRITY_CHECK "OMIT_INTEGRITY_CHECK", #endif -#ifdef 
SQLITE_OMIT_LIKE_OPTIMIZATION +#if SQLITE_OMIT_LIKE_OPTIMIZATION "OMIT_LIKE_OPTIMIZATION", #endif -#ifdef SQLITE_OMIT_LOAD_EXTENSION +#if SQLITE_OMIT_LOAD_EXTENSION "OMIT_LOAD_EXTENSION", #endif -#ifdef SQLITE_OMIT_LOCALTIME +#if SQLITE_OMIT_LOCALTIME "OMIT_LOCALTIME", #endif -#ifdef SQLITE_OMIT_LOOKASIDE +#if SQLITE_OMIT_LOOKASIDE "OMIT_LOOKASIDE", #endif -#ifdef SQLITE_OMIT_MEMORYDB +#if SQLITE_OMIT_MEMORYDB "OMIT_MEMORYDB", #endif -#ifdef SQLITE_OMIT_OR_OPTIMIZATION +#if SQLITE_OMIT_OR_OPTIMIZATION "OMIT_OR_OPTIMIZATION", #endif -#ifdef SQLITE_OMIT_PAGER_PRAGMAS +#if SQLITE_OMIT_PAGER_PRAGMAS "OMIT_PAGER_PRAGMAS", #endif -#ifdef SQLITE_OMIT_PRAGMA +#if SQLITE_OMIT_PRAGMA "OMIT_PRAGMA", #endif -#ifdef SQLITE_OMIT_PROGRESS_CALLBACK +#if SQLITE_OMIT_PROGRESS_CALLBACK "OMIT_PROGRESS_CALLBACK", #endif -#ifdef SQLITE_OMIT_QUICKBALANCE +#if SQLITE_OMIT_QUICKBALANCE "OMIT_QUICKBALANCE", #endif -#ifdef SQLITE_OMIT_REINDEX +#if SQLITE_OMIT_REINDEX "OMIT_REINDEX", #endif -#ifdef SQLITE_OMIT_SCHEMA_PRAGMAS +#if SQLITE_OMIT_SCHEMA_PRAGMAS "OMIT_SCHEMA_PRAGMAS", #endif -#ifdef SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS +#if SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS "OMIT_SCHEMA_VERSION_PRAGMAS", #endif -#ifdef SQLITE_OMIT_SHARED_CACHE +#if SQLITE_OMIT_SHARED_CACHE "OMIT_SHARED_CACHE", #endif -#ifdef SQLITE_OMIT_SUBQUERY +#if SQLITE_OMIT_SUBQUERY "OMIT_SUBQUERY", #endif -#ifdef SQLITE_OMIT_TCL_VARIABLE +#if SQLITE_OMIT_TCL_VARIABLE "OMIT_TCL_VARIABLE", #endif -#ifdef SQLITE_OMIT_TEMPDB +#if SQLITE_OMIT_TEMPDB "OMIT_TEMPDB", #endif -#ifdef SQLITE_OMIT_TRACE +#if SQLITE_OMIT_TRACE "OMIT_TRACE", #endif -#ifdef SQLITE_OMIT_TRIGGER +#if SQLITE_OMIT_TRIGGER "OMIT_TRIGGER", #endif -#ifdef SQLITE_OMIT_TRUNCATE_OPTIMIZATION +#if SQLITE_OMIT_TRUNCATE_OPTIMIZATION "OMIT_TRUNCATE_OPTIMIZATION", #endif -#ifdef SQLITE_OMIT_UTF16 +#if SQLITE_OMIT_UTF16 "OMIT_UTF16", #endif -#ifdef SQLITE_OMIT_VACUUM +#if SQLITE_OMIT_VACUUM "OMIT_VACUUM", #endif -#ifdef SQLITE_OMIT_VIEW +#if SQLITE_OMIT_VIEW "OMIT_VIEW", #endif -#ifdef SQLITE_OMIT_VIRTUALTABLE +#if SQLITE_OMIT_VIRTUALTABLE "OMIT_VIRTUALTABLE", #endif -#ifdef SQLITE_OMIT_WSD +#if SQLITE_OMIT_WAL + "OMIT_WAL", +#endif +#if SQLITE_OMIT_WSD "OMIT_WSD", #endif -#ifdef SQLITE_OMIT_XFER_OPT +#if SQLITE_OMIT_XFER_OPT "OMIT_XFER_OPT", #endif -#ifdef SQLITE_PERFORMANCE_TRACE +#if SQLITE_PERFORMANCE_TRACE "PERFORMANCE_TRACE", #endif -#ifdef SQLITE_PROXY_DEBUG +#if SQLITE_PROXY_DEBUG "PROXY_DEBUG", #endif -#ifdef SQLITE_SECURE_DELETE +#if SQLITE_RTREE_INT_ONLY + "RTREE_INT_ONLY", +#endif +#if SQLITE_SECURE_DELETE "SECURE_DELETE", #endif -#ifdef SQLITE_SMALL_STACK +#if SQLITE_SMALL_STACK "SMALL_STACK", #endif -#ifdef SQLITE_SOUNDEX +#if SQLITE_SOUNDEX "SOUNDEX", #endif -#ifdef SQLITE_TCL +#if SQLITE_SYSTEM_MALLOC + "SYSTEM_MALLOC", +#endif +#if SQLITE_TCL "TCL", #endif -#ifdef SQLITE_TEMP_STORE +#if defined(SQLITE_TEMP_STORE) && !defined(SQLITE_TEMP_STORE_xc) "TEMP_STORE=" CTIMEOPT_VAL(SQLITE_TEMP_STORE), #endif -#ifdef SQLITE_TEST +#if SQLITE_TEST "TEST", #endif -#ifdef SQLITE_THREADSAFE +#if defined(SQLITE_THREADSAFE) "THREADSAFE=" CTIMEOPT_VAL(SQLITE_THREADSAFE), #endif -#ifdef SQLITE_USE_ALLOCA +#if SQLITE_USE_ALLOCA "USE_ALLOCA", #endif -#ifdef SQLITE_ZERO_MALLOC +#if SQLITE_USER_AUTHENTICATION + "USER_AUTHENTICATION", +#endif +#if SQLITE_WIN32_MALLOC + "WIN32_MALLOC", +#endif +#if SQLITE_ZERO_MALLOC "ZERO_MALLOC" #endif }; /* @@ -11280,29 +15639,39 @@ ** was used and false if not. ** ** The name can optionally begin with "SQLITE_" but the "SQLITE_" prefix ** is not required for a match. 
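**
** As an illustration (a usage sketch added here, not text from the
** upstream amalgamation): a build compiled with -DSQLITE_THREADSAFE=1
** carries the entry "THREADSAFE=1" in azCompileOpt[], so both
**
**     sqlite3_compileoption_used("SQLITE_THREADSAFE")
**     sqlite3_compileoption_used("threadsafe")
**
** return 1: the optional "SQLITE_" prefix is stripped, the comparison is
** case-insensitive, and matching stops at the first non-identifier
** character (here the '=').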
*/ -SQLITE_API int sqlite3_compileoption_used(const char *zOptName){ +SQLITE_API int SQLITE_STDCALL sqlite3_compileoption_used(const char *zOptName){ int i, n; + +#if SQLITE_ENABLE_API_ARMOR + if( zOptName==0 ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif if( sqlite3StrNICmp(zOptName, "SQLITE_", 7)==0 ) zOptName += 7; n = sqlite3Strlen30(zOptName); /* Since ArraySize(azCompileOpt) is normally in single digits, a ** linear search is adequate. No need for a binary search. */ for(i=0; i<ArraySize(azCompileOpt); i++){ - if( (sqlite3StrNICmp(zOptName, azCompileOpt[i], n)==0) - && ( (azCompileOpt[i][n]==0) || (azCompileOpt[i][n]=='=') ) ) return 1; + if( sqlite3StrNICmp(zOptName, azCompileOpt[i], n)==0 + && sqlite3IsIdChar((unsigned char)azCompileOpt[i][n])==0 + ){ + return 1; + } } return 0; } /* ** Return the N-th compile-time option string. If N is out of range, ** return a NULL pointer. */ -SQLITE_API const char *sqlite3_compileoption_get(int N){ +SQLITE_API const char *SQLITE_STDCALL sqlite3_compileoption_get(int N){ if( N>=0 && N<ArraySize(azCompileOpt) ){ return azCompileOpt[N]; } return 0; } @@ -11324,19 +15693,585 @@ ************************************************************************* ** ** This module implements the sqlite3_status() interface and related ** functionality. */ +/* #include "sqliteInt.h" */ +/************** Include vdbeInt.h in the middle of status.c ******************/ +/************** Begin file vdbeInt.h *****************************************/ +/* +** 2003 September 6 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This is the header file for information that is private to the +** VDBE. This information used to all be at the top of the single +** source code file "vdbe.c". When that file became too big (over +** 6000 lines long) it was split up into several smaller files and +** this header information was factored out. +*/ +#ifndef _VDBEINT_H_ +#define _VDBEINT_H_ + +/* +** The maximum number of times that a statement will try to reparse +** itself before giving up and returning SQLITE_SCHEMA. +*/ +#ifndef SQLITE_MAX_SCHEMA_RETRY +# define SQLITE_MAX_SCHEMA_RETRY 50 +#endif + +/* +** VDBE_DISPLAY_P4 is true or false depending on whether or not the +** "explain" P4 display logic is enabled. +*/ +#if !defined(SQLITE_OMIT_EXPLAIN) || !defined(NDEBUG) \ + || defined(VDBE_PROFILE) || defined(SQLITE_DEBUG) +# define VDBE_DISPLAY_P4 1 +#else +# define VDBE_DISPLAY_P4 0 +#endif + +/* +** SQL is translated into a sequence of instructions to be +** executed by a virtual machine. Each instruction is an instance +** of the following structure. 
+*/ +typedef struct VdbeOp Op; + +/* +** Boolean values +*/ +typedef unsigned Bool; + +/* Opaque type used by code in vdbesort.c */ +typedef struct VdbeSorter VdbeSorter; + +/* Opaque type used by the explainer */ +typedef struct Explain Explain; + +/* Elements of the linked list at Vdbe.pAuxData */ +typedef struct AuxData AuxData; + +/* Types of VDBE cursors */ +#define CURTYPE_BTREE 0 +#define CURTYPE_SORTER 1 +#define CURTYPE_VTAB 2 +#define CURTYPE_PSEUDO 3 + +/* +** A VdbeCursor is an superclass (a wrapper) for various cursor objects: +** +** * A b-tree cursor +** - In the main database or in an ephemeral database +** - On either an index or a table +** * A sorter +** * A virtual table +** * A one-row "pseudotable" stored in a single register +*/ +typedef struct VdbeCursor VdbeCursor; +struct VdbeCursor { + u8 eCurType; /* One of the CURTYPE_* values above */ + i8 iDb; /* Index of cursor database in db->aDb[] (or -1) */ + u8 nullRow; /* True if pointing to a row with no data */ + u8 deferredMoveto; /* A call to sqlite3BtreeMoveto() is needed */ + u8 isTable; /* True for rowid tables. False for indexes */ +#ifdef SQLITE_DEBUG + u8 seekOp; /* Most recent seek operation on this cursor */ + u8 wrFlag; /* The wrFlag argument to sqlite3BtreeCursor() */ +#endif + Bool isEphemeral:1; /* True for an ephemeral table */ + Bool useRandomRowid:1;/* Generate new record numbers semi-randomly */ + Bool isOrdered:1; /* True if the underlying table is BTREE_UNORDERED */ + Pgno pgnoRoot; /* Root page of the open btree cursor */ + i16 nField; /* Number of fields in the header */ + u16 nHdrParsed; /* Number of header fields parsed so far */ + union { + BtCursor *pCursor; /* CURTYPE_BTREE. Btree cursor */ + sqlite3_vtab_cursor *pVCur; /* CURTYPE_VTAB. Vtab cursor */ + int pseudoTableReg; /* CURTYPE_PSEUDO. Reg holding content. */ + VdbeSorter *pSorter; /* CURTYPE_SORTER. Sorter object */ + } uc; + Btree *pBt; /* Separate file holding temporary table */ + KeyInfo *pKeyInfo; /* Info about index keys needed by index cursors */ + int seekResult; /* Result of previous sqlite3BtreeMoveto() */ + i64 seqCount; /* Sequence counter */ + i64 movetoTarget; /* Argument to the deferred sqlite3BtreeMoveto() */ + VdbeCursor *pAltCursor; /* Associated index cursor from which to read */ + int *aAltMap; /* Mapping from table to index column numbers */ +#ifdef SQLITE_ENABLE_COLUMN_USED_MASK + u64 maskUsed; /* Mask of columns used by this cursor */ +#endif + + /* Cached information about the header for the data record that the + ** cursor is currently pointing to. Only valid if cacheStatus matches + ** Vdbe.cacheCtr. Vdbe.cacheCtr will never take on the value of + ** CACHE_STALE and so setting cacheStatus=CACHE_STALE guarantees that + ** the cache is out of date. + ** + ** aRow might point to (ephemeral) data for the current row, or it might + ** be NULL. + */ + u32 cacheStatus; /* Cache is valid if this matches Vdbe.cacheCtr */ + u32 payloadSize; /* Total number of bytes in the record */ + u32 szRow; /* Byte available in aRow */ + u32 iHdrOffset; /* Offset to next unparsed byte of the header */ + const u8 *aRow; /* Data for the current row, if all on one page */ + u32 *aOffset; /* Pointer to aType[nField] */ + u32 aType[1]; /* Type values for all entries in the record */ + /* 2*nField extra array elements allocated for aType[], beyond the one + ** static element declared in the structure. 
nField total array slots for + ** aType[] and nField+1 array slots for aOffset[] */ +}; + +/* +** When a sub-program is executed (OP_Program), a structure of this type +** is allocated to store the current value of the program counter, as +** well as the current memory cell array and various other frame specific +** values stored in the Vdbe struct. When the sub-program is finished, +** these values are copied back to the Vdbe from the VdbeFrame structure, +** restoring the state of the VM to as it was before the sub-program +** began executing. +** +** The memory for a VdbeFrame object is allocated and managed by a memory +** cell in the parent (calling) frame. When the memory cell is deleted or +** overwritten, the VdbeFrame object is not freed immediately. Instead, it +** is linked into the Vdbe.pDelFrame list. The contents of the Vdbe.pDelFrame +** list is deleted when the VM is reset in VdbeHalt(). The reason for doing +** this instead of deleting the VdbeFrame immediately is to avoid recursive +** calls to sqlite3VdbeMemRelease() when the memory cells belonging to the +** child frame are released. +** +** The currently executing frame is stored in Vdbe.pFrame. Vdbe.pFrame is +** set to NULL if the currently executing frame is the main program. +*/ +typedef struct VdbeFrame VdbeFrame; +struct VdbeFrame { + Vdbe *v; /* VM this frame belongs to */ + VdbeFrame *pParent; /* Parent of this frame, or NULL if parent is main */ + Op *aOp; /* Program instructions for parent frame */ + i64 *anExec; /* Event counters from parent frame */ + Mem *aMem; /* Array of memory cells for parent frame */ + u8 *aOnceFlag; /* Array of OP_Once flags for parent frame */ + VdbeCursor **apCsr; /* Array of Vdbe cursors for parent frame */ + void *token; /* Copy of SubProgram.token */ + i64 lastRowid; /* Last insert rowid (sqlite3.lastRowid) */ + int nCursor; /* Number of entries in apCsr */ + int pc; /* Program Counter in parent (calling) frame */ + int nOp; /* Size of aOp array */ + int nMem; /* Number of entries in aMem */ + int nOnceFlag; /* Number of entries in aOnceFlag */ + int nChildMem; /* Number of memory cells for child frame */ + int nChildCsr; /* Number of cursors for child frame */ + int nChange; /* Statement changes (Vdbe.nChange) */ + int nDbChange; /* Value of db->nChange */ +}; + +#define VdbeFrameMem(p) ((Mem *)&((u8 *)p)[ROUND8(sizeof(VdbeFrame))]) + +/* +** A value for VdbeCursor.cacheValid that means the cache is always invalid. +*/ +#define CACHE_STALE 0 + +/* +** Internally, the vdbe manipulates nearly all SQL values as Mem +** structures. Each Mem struct may cache multiple representations (string, +** integer etc.) of the same value. +*/ +struct Mem { + union MemValue { + double r; /* Real value used when MEM_Real is set in flags */ + i64 i; /* Integer value used when MEM_Int is set in flags */ + int nZero; /* Used when bit MEM_Zero is set in flags */ + FuncDef *pDef; /* Used only when flags==MEM_Agg */ + RowSet *pRowSet; /* Used only when flags==MEM_RowSet */ + VdbeFrame *pFrame; /* Used when flags==MEM_Frame */ + } u; + u16 flags; /* Some combination of MEM_Null, MEM_Str, MEM_Dyn, etc. 
*/ + u8 enc; /* SQLITE_UTF8, SQLITE_UTF16BE, SQLITE_UTF16LE */ + u8 eSubtype; /* Subtype for this value */ + int n; /* Number of characters in string value, excluding '\0' */ + char *z; /* String or BLOB value */ + /* ShallowCopy only needs to copy the information above */ + char *zMalloc; /* Space to hold MEM_Str or MEM_Blob if szMalloc>0 */ + int szMalloc; /* Size of the zMalloc allocation */ + u32 uTemp; /* Transient storage for serial_type in OP_MakeRecord */ + sqlite3 *db; /* The associated database connection */ + void (*xDel)(void*);/* Destructor for Mem.z - only valid if MEM_Dyn */ +#ifdef SQLITE_DEBUG + Mem *pScopyFrom; /* This Mem is a shallow copy of pScopyFrom */ + void *pFiller; /* So that sizeof(Mem) is a multiple of 8 */ +#endif +}; + +/* +** Size of struct Mem not including the Mem.zMalloc member or anything that +** follows. +*/ +#define MEMCELLSIZE offsetof(Mem,zMalloc) + +/* One or more of the following flags are set to indicate the validOK +** representations of the value stored in the Mem struct. +** +** If the MEM_Null flag is set, then the value is an SQL NULL value. +** No other flags may be set in this case. +** +** If the MEM_Str flag is set then Mem.z points at a string representation. +** Usually this is encoded in the same unicode encoding as the main +** database (see below for exceptions). If the MEM_Term flag is also +** set, then the string is nul terminated. The MEM_Int and MEM_Real +** flags may coexist with the MEM_Str flag. +*/ +#define MEM_Null 0x0001 /* Value is NULL */ +#define MEM_Str 0x0002 /* Value is a string */ +#define MEM_Int 0x0004 /* Value is an integer */ +#define MEM_Real 0x0008 /* Value is a real number */ +#define MEM_Blob 0x0010 /* Value is a BLOB */ +#define MEM_AffMask 0x001f /* Mask of affinity bits */ +#define MEM_RowSet 0x0020 /* Value is a RowSet object */ +#define MEM_Frame 0x0040 /* Value is a VdbeFrame object */ +#define MEM_Undefined 0x0080 /* Value is undefined */ +#define MEM_Cleared 0x0100 /* NULL set by OP_Null, not from data */ +#define MEM_TypeMask 0x81ff /* Mask of type bits */ + + +/* Whenever Mem contains a valid string or blob representation, one of +** the following flags must be set to determine the memory management +** policy for Mem.z. The MEM_Term flag tells us whether or not the +** string is \000 or \u0000 terminated +*/ +#define MEM_Term 0x0200 /* String rep is nul terminated */ +#define MEM_Dyn 0x0400 /* Need to call Mem.xDel() on Mem.z */ +#define MEM_Static 0x0800 /* Mem.z points to a static string */ +#define MEM_Ephem 0x1000 /* Mem.z points to an ephemeral string */ +#define MEM_Agg 0x2000 /* Mem.z points to an agg function context */ +#define MEM_Zero 0x4000 /* Mem.i contains count of 0s appended to blob */ +#define MEM_Subtype 0x8000 /* Mem.eSubtype is valid */ +#ifdef SQLITE_OMIT_INCRBLOB + #undef MEM_Zero + #define MEM_Zero 0x0000 +#endif + +/* Return TRUE if Mem X contains dynamically allocated content - anything +** that needs to be deallocated to avoid a leak. +*/ +#define VdbeMemDynamic(X) \ + (((X)->flags&(MEM_Agg|MEM_Dyn|MEM_RowSet|MEM_Frame))!=0) + +/* +** Clear any existing type flags from a Mem and replace them with f +*/ +#define MemSetTypeFlag(p, f) \ + ((p)->flags = ((p)->flags&~(MEM_TypeMask|MEM_Zero))|f) + +/* +** Return true if a memory cell is not marked as invalid. This macro +** is for use inside assert() statements only. 
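**
** A typical call site (illustrative only; the real checks live in the
** opcode implementations in vdbe.c) looks like
**
**     assert( memIsValid(&aMem[pOp->p1]) );
**
** which fails in SQLITE_DEBUG builds whenever an opcode reads a register
** that still has the MEM_Undefined flag set.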
+*/ +#ifdef SQLITE_DEBUG +#define memIsValid(M) ((M)->flags & MEM_Undefined)==0 +#endif + +/* +** Each auxiliary data pointer stored by a user defined function +** implementation calling sqlite3_set_auxdata() is stored in an instance +** of this structure. All such structures associated with a single VM +** are stored in a linked list headed at Vdbe.pAuxData. All are destroyed +** when the VM is halted (if not before). +*/ +struct AuxData { + int iOp; /* Instruction number of OP_Function opcode */ + int iArg; /* Index of function argument. */ + void *pAux; /* Aux data pointer */ + void (*xDelete)(void *); /* Destructor for the aux data */ + AuxData *pNext; /* Next element in list */ +}; + +/* +** The "context" argument for an installable function. A pointer to an +** instance of this structure is the first argument to the routines used +** implement the SQL functions. +** +** There is a typedef for this structure in sqlite.h. So all routines, +** even the public interface to SQLite, can use a pointer to this structure. +** But this file is the only place where the internal details of this +** structure are known. +** +** This structure is defined inside of vdbeInt.h because it uses substructures +** (Mem) which are only defined there. +*/ +struct sqlite3_context { + Mem *pOut; /* The return value is stored here */ + FuncDef *pFunc; /* Pointer to function information */ + Mem *pMem; /* Memory cell used to store aggregate context */ + Vdbe *pVdbe; /* The VM that owns this context */ + int iOp; /* Instruction number of OP_Function */ + int isError; /* Error code returned by the function. */ + u8 skipFlag; /* Skip accumulator loading if true */ + u8 fErrorOrAux; /* isError!=0 or pVdbe->pAuxData modified */ + u8 argc; /* Number of arguments */ + sqlite3_value *argv[1]; /* Argument set */ +}; + +/* +** An Explain object accumulates indented output which is helpful +** in describing recursive data structures. +*/ +struct Explain { + Vdbe *pVdbe; /* Attach the explanation to this Vdbe */ + StrAccum str; /* The string being accumulated */ + int nIndent; /* Number of elements in aIndent */ + u16 aIndent[100]; /* Levels of indentation */ + char zBase[100]; /* Initial space */ +}; + +/* A bitfield type for use inside of structures. Always follow with :N where +** N is the number of bits. +*/ +typedef unsigned bft; /* Bit Field Type */ + +typedef struct ScanStatus ScanStatus; +struct ScanStatus { + int addrExplain; /* OP_Explain for loop */ + int addrLoop; /* Address of "loops" counter */ + int addrVisit; /* Address of "rows visited" counter */ + int iSelectID; /* The "Select-ID" for this loop */ + LogEst nEst; /* Estimated output rows per loop */ + char *zName; /* Name of table or index */ +}; + +/* +** An instance of the virtual machine. This structure contains the complete +** state of the virtual machine. +** +** The "sqlite3_stmt" structure pointer that is returned by sqlite3_prepare() +** is really a pointer to an instance of this structure. 
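**
** For illustration (a sketch, not part of the original comment): the
** public entry points simply cast between the two types, roughly
**
**     Vdbe *v = (Vdbe*)pStmt;         e.g. at the top of sqlite3_step()
**
** and the prepare step performs the reverse cast, (sqlite3_stmt*)pVdbe,
** when handing the statement back to the caller.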
+*/ +struct Vdbe { + sqlite3 *db; /* The database connection that owns this statement */ + Op *aOp; /* Space to hold the virtual machine's program */ + Mem *aMem; /* The memory locations */ + Mem **apArg; /* Arguments to currently executing user function */ + Mem *aColName; /* Column names to return */ + Mem *pResultSet; /* Pointer to an array of results */ + Parse *pParse; /* Parsing context used to create this Vdbe */ + int nMem; /* Number of memory locations currently allocated */ + int nOp; /* Number of instructions in the program */ + int nCursor; /* Number of slots in apCsr[] */ + u32 magic; /* Magic number for sanity checking */ + char *zErrMsg; /* Error message written here */ + Vdbe *pPrev,*pNext; /* Linked list of VDBEs with the same Vdbe.db */ + VdbeCursor **apCsr; /* One element of this array for each open cursor */ + Mem *aVar; /* Values for the OP_Variable opcode. */ + char **azVar; /* Name of variables */ + ynVar nVar; /* Number of entries in aVar[] */ + ynVar nzVar; /* Number of entries in azVar[] */ + u32 cacheCtr; /* VdbeCursor row cache generation counter */ + int pc; /* The program counter */ + int rc; /* Value to return */ +#ifdef SQLITE_DEBUG + int rcApp; /* errcode set by sqlite3_result_error_code() */ +#endif + u16 nResColumn; /* Number of columns in one row of the result set */ + u8 errorAction; /* Recovery action to do in case of an error */ + u8 minWriteFileFormat; /* Minimum file format for writable database files */ + bft explain:2; /* True if EXPLAIN present on SQL command */ + bft changeCntOn:1; /* True to update the change-counter */ + bft expired:1; /* True if the VM needs to be recompiled */ + bft runOnlyOnce:1; /* Automatically expire on reset */ + bft usesStmtJournal:1; /* True if uses a statement journal */ + bft readOnly:1; /* True for statements that do not write */ + bft bIsReader:1; /* True for statements that read */ + bft isPrepareV2:1; /* True if prepared with prepare_v2() */ + bft doingRerun:1; /* True if rerunning after an auto-reprepare */ + int nChange; /* Number of db changes made since last reset */ + yDbMask btreeMask; /* Bitmask of db->aDb[] entries referenced */ + yDbMask lockMask; /* Subset of btreeMask that requires a lock */ + int iStatement; /* Statement number (or 0 if has not opened stmt) */ + u32 aCounter[5]; /* Counters used by sqlite3_stmt_status() */ +#ifndef SQLITE_OMIT_TRACE + i64 startTime; /* Time when query started - used for profiling */ +#endif + i64 iCurrentTime; /* Value of julianday('now') for this statement */ + i64 nFkConstraint; /* Number of imm. FK constraints this VM */ + i64 nStmtDefCons; /* Number of def. constraints when stmt started */ + i64 nStmtDefImmCons; /* Number of def. 
imm constraints when stmt started */ + char *zSql; /* Text of the SQL statement that generated this */ + void *pFree; /* Free this when deleting the vdbe */ + VdbeFrame *pFrame; /* Parent frame */ + VdbeFrame *pDelFrame; /* List of frame objects to free on VM reset */ + int nFrame; /* Number of frames in pFrame list */ + u32 expmask; /* Binding to these vars invalidates VM */ + SubProgram *pProgram; /* Linked list of all sub-programs used by VM */ + int nOnceFlag; /* Size of array aOnceFlag[] */ + u8 *aOnceFlag; /* Flags for OP_Once */ + AuxData *pAuxData; /* Linked list of auxdata allocations */ +#ifdef SQLITE_ENABLE_STMT_SCANSTATUS + i64 *anExec; /* Number of times each op has been executed */ + int nScan; /* Entries in aScan[] */ + ScanStatus *aScan; /* Scan definitions for sqlite3_stmt_scanstatus() */ +#endif +}; + +/* +** The following are allowed values for Vdbe.magic +*/ +#define VDBE_MAGIC_INIT 0x26bceaa5 /* Building a VDBE program */ +#define VDBE_MAGIC_RUN 0xbdf20da3 /* VDBE is ready to execute */ +#define VDBE_MAGIC_HALT 0x519c2973 /* VDBE has completed execution */ +#define VDBE_MAGIC_DEAD 0xb606c3c8 /* The VDBE has been deallocated */ + +/* +** Function prototypes +*/ +SQLITE_PRIVATE void sqlite3VdbeError(Vdbe*, const char *, ...); +SQLITE_PRIVATE void sqlite3VdbeFreeCursor(Vdbe *, VdbeCursor*); +void sqliteVdbePopStack(Vdbe*,int); +SQLITE_PRIVATE int sqlite3VdbeCursorMoveto(VdbeCursor**, int*); +SQLITE_PRIVATE int sqlite3VdbeCursorRestore(VdbeCursor*); +#if defined(SQLITE_DEBUG) || defined(VDBE_PROFILE) +SQLITE_PRIVATE void sqlite3VdbePrintOp(FILE*, int, Op*); +#endif +SQLITE_PRIVATE u32 sqlite3VdbeSerialTypeLen(u32); +SQLITE_PRIVATE u8 sqlite3VdbeOneByteSerialTypeLen(u8); +SQLITE_PRIVATE u32 sqlite3VdbeSerialType(Mem*, int, u32*); +SQLITE_PRIVATE u32 sqlite3VdbeSerialPut(unsigned char*, Mem*, u32); +SQLITE_PRIVATE u32 sqlite3VdbeSerialGet(const unsigned char*, u32, Mem*); +SQLITE_PRIVATE void sqlite3VdbeDeleteAuxData(Vdbe*, int, int); + +int sqlite2BtreeKeyCompare(BtCursor *, const void *, int, int, int *); +SQLITE_PRIVATE int sqlite3VdbeIdxKeyCompare(sqlite3*,VdbeCursor*,UnpackedRecord*,int*); +SQLITE_PRIVATE int sqlite3VdbeIdxRowid(sqlite3*, BtCursor*, i64*); +SQLITE_PRIVATE int sqlite3VdbeExec(Vdbe*); +SQLITE_PRIVATE int sqlite3VdbeList(Vdbe*); +SQLITE_PRIVATE int sqlite3VdbeHalt(Vdbe*); +SQLITE_PRIVATE int sqlite3VdbeChangeEncoding(Mem *, int); +SQLITE_PRIVATE int sqlite3VdbeMemTooBig(Mem*); +SQLITE_PRIVATE int sqlite3VdbeMemCopy(Mem*, const Mem*); +SQLITE_PRIVATE void sqlite3VdbeMemShallowCopy(Mem*, const Mem*, int); +SQLITE_PRIVATE void sqlite3VdbeMemMove(Mem*, Mem*); +SQLITE_PRIVATE int sqlite3VdbeMemNulTerminate(Mem*); +SQLITE_PRIVATE int sqlite3VdbeMemSetStr(Mem*, const char*, int, u8, void(*)(void*)); +SQLITE_PRIVATE void sqlite3VdbeMemSetInt64(Mem*, i64); +#ifdef SQLITE_OMIT_FLOATING_POINT +# define sqlite3VdbeMemSetDouble sqlite3VdbeMemSetInt64 +#else +SQLITE_PRIVATE void sqlite3VdbeMemSetDouble(Mem*, double); +#endif +SQLITE_PRIVATE void sqlite3VdbeMemInit(Mem*,sqlite3*,u16); +SQLITE_PRIVATE void sqlite3VdbeMemSetNull(Mem*); +SQLITE_PRIVATE void sqlite3VdbeMemSetZeroBlob(Mem*,int); +SQLITE_PRIVATE void sqlite3VdbeMemSetRowSet(Mem*); +SQLITE_PRIVATE int sqlite3VdbeMemMakeWriteable(Mem*); +SQLITE_PRIVATE int sqlite3VdbeMemStringify(Mem*, u8, u8); +SQLITE_PRIVATE i64 sqlite3VdbeIntValue(Mem*); +SQLITE_PRIVATE int sqlite3VdbeMemIntegerify(Mem*); +SQLITE_PRIVATE double sqlite3VdbeRealValue(Mem*); +SQLITE_PRIVATE void sqlite3VdbeIntegerAffinity(Mem*); +SQLITE_PRIVATE 
int sqlite3VdbeMemRealify(Mem*); +SQLITE_PRIVATE int sqlite3VdbeMemNumerify(Mem*); +SQLITE_PRIVATE void sqlite3VdbeMemCast(Mem*,u8,u8); +SQLITE_PRIVATE int sqlite3VdbeMemFromBtree(BtCursor*,u32,u32,int,Mem*); +SQLITE_PRIVATE void sqlite3VdbeMemRelease(Mem *p); +SQLITE_PRIVATE int sqlite3VdbeMemFinalize(Mem*, FuncDef*); +SQLITE_PRIVATE const char *sqlite3OpcodeName(int); +SQLITE_PRIVATE int sqlite3VdbeMemGrow(Mem *pMem, int n, int preserve); +SQLITE_PRIVATE int sqlite3VdbeMemClearAndResize(Mem *pMem, int n); +SQLITE_PRIVATE int sqlite3VdbeCloseStatement(Vdbe *, int); +SQLITE_PRIVATE void sqlite3VdbeFrameDelete(VdbeFrame*); +SQLITE_PRIVATE int sqlite3VdbeFrameRestore(VdbeFrame *); +SQLITE_PRIVATE int sqlite3VdbeTransferError(Vdbe *p); + +SQLITE_PRIVATE int sqlite3VdbeSorterInit(sqlite3 *, int, VdbeCursor *); +SQLITE_PRIVATE void sqlite3VdbeSorterReset(sqlite3 *, VdbeSorter *); +SQLITE_PRIVATE void sqlite3VdbeSorterClose(sqlite3 *, VdbeCursor *); +SQLITE_PRIVATE int sqlite3VdbeSorterRowkey(const VdbeCursor *, Mem *); +SQLITE_PRIVATE int sqlite3VdbeSorterNext(sqlite3 *, const VdbeCursor *, int *); +SQLITE_PRIVATE int sqlite3VdbeSorterRewind(const VdbeCursor *, int *); +SQLITE_PRIVATE int sqlite3VdbeSorterWrite(const VdbeCursor *, Mem *); +SQLITE_PRIVATE int sqlite3VdbeSorterCompare(const VdbeCursor *, Mem *, int, int *); + +#if !defined(SQLITE_OMIT_SHARED_CACHE) +SQLITE_PRIVATE void sqlite3VdbeEnter(Vdbe*); +#else +# define sqlite3VdbeEnter(X) +#endif + +#if !defined(SQLITE_OMIT_SHARED_CACHE) && SQLITE_THREADSAFE>0 +SQLITE_PRIVATE void sqlite3VdbeLeave(Vdbe*); +#else +# define sqlite3VdbeLeave(X) +#endif + +#ifdef SQLITE_DEBUG +SQLITE_PRIVATE void sqlite3VdbeMemAboutToChange(Vdbe*,Mem*); +SQLITE_PRIVATE int sqlite3VdbeCheckMemInvariants(Mem*); +#endif + +#ifndef SQLITE_OMIT_FOREIGN_KEY +SQLITE_PRIVATE int sqlite3VdbeCheckFk(Vdbe *, int); +#else +# define sqlite3VdbeCheckFk(p,i) 0 +#endif + +SQLITE_PRIVATE int sqlite3VdbeMemTranslate(Mem*, u8); +#ifdef SQLITE_DEBUG +SQLITE_PRIVATE void sqlite3VdbePrintSql(Vdbe*); +SQLITE_PRIVATE void sqlite3VdbeMemPrettyPrint(Mem *pMem, char *zBuf); +#endif +SQLITE_PRIVATE int sqlite3VdbeMemHandleBom(Mem *pMem); + +#ifndef SQLITE_OMIT_INCRBLOB +SQLITE_PRIVATE int sqlite3VdbeMemExpandBlob(Mem *); + #define ExpandBlob(P) (((P)->flags&MEM_Zero)?sqlite3VdbeMemExpandBlob(P):0) +#else + #define sqlite3VdbeMemExpandBlob(x) SQLITE_OK + #define ExpandBlob(P) SQLITE_OK +#endif + +#endif /* !defined(_VDBEINT_H_) */ + +/************** End of vdbeInt.h *********************************************/ +/************** Continuing where we left off in status.c *********************/ /* ** Variables in which to record status information. */ +#if SQLITE_PTRSIZE>4 +typedef sqlite3_int64 sqlite3StatValueType; +#else +typedef u32 sqlite3StatValueType; +#endif typedef struct sqlite3StatType sqlite3StatType; static SQLITE_WSD struct sqlite3StatType { - int nowValue[9]; /* Current value */ - int mxValue[9]; /* Maximum value */ + sqlite3StatValueType nowValue[10]; /* Current value */ + sqlite3StatValueType mxValue[10]; /* Maximum value */ } sqlite3Stat = { {0,}, {0,} }; + +/* +** Elements of sqlite3Stat[] are protected by either the memory allocator +** mutex, or by the pcache1 mutex. The following array determines which. 
+*/ +static const char statMutex[] = { + 0, /* SQLITE_STATUS_MEMORY_USED */ + 1, /* SQLITE_STATUS_PAGECACHE_USED */ + 1, /* SQLITE_STATUS_PAGECACHE_OVERFLOW */ + 0, /* SQLITE_STATUS_SCRATCH_USED */ + 0, /* SQLITE_STATUS_SCRATCH_OVERFLOW */ + 0, /* SQLITE_STATUS_MALLOC_SIZE */ + 0, /* SQLITE_STATUS_PARSER_STACK */ + 1, /* SQLITE_STATUS_PAGECACHE_SIZE */ + 0, /* SQLITE_STATUS_SCRATCH_SIZE */ + 0, /* SQLITE_STATUS_MALLOC_COUNT */ +}; /* The "wsdStat" macro will resolve to the status information ** state vector. If writable static data is unsupported on the target, ** we have to locate the state vector at run-time. In the more common @@ -11350,107 +16285,290 @@ # define wsdStatInit # define wsdStat sqlite3Stat #endif /* -** Return the current value of a status parameter. +** Return the current value of a status parameter. The caller must +** be holding the appropriate mutex. */ -SQLITE_PRIVATE int sqlite3StatusValue(int op){ +SQLITE_PRIVATE sqlite3_int64 sqlite3StatusValue(int op){ wsdStatInit; assert( op>=0 && op<ArraySize(wsdStat.nowValue) ); + assert( op>=0 && op<ArraySize(statMutex) ); + assert( sqlite3_mutex_held(statMutex[op] ? sqlite3Pcache1Mutex() + : sqlite3MallocMutex()) ); return wsdStat.nowValue[op]; } /* -** Add N to the value of a status record. It is assumed that the -** caller holds appropriate locks. +** Add N to the value of a status record. The caller must hold the +** appropriate mutex. (Locking is checked by assert()). +** +** The StatusUp() routine can accept positive or negative values for N. +** The value of N is added to the current status value and the high-water +** mark is adjusted if necessary. +** +** The StatusDown() routine lowers the current value by N. The highwater +** mark is unchanged. N must be non-negative for StatusDown(). */ -SQLITE_PRIVATE void sqlite3StatusAdd(int op, int N){ +SQLITE_PRIVATE void sqlite3StatusUp(int op, int N){ wsdStatInit; assert( op>=0 && op<ArraySize(wsdStat.nowValue) ); + assert( op>=0 && op<ArraySize(statMutex) ); + assert( sqlite3_mutex_held(statMutex[op] ? sqlite3Pcache1Mutex() + : sqlite3MallocMutex()) ); wsdStat.nowValue[op] += N; if( wsdStat.nowValue[op]>wsdStat.mxValue[op] ){ wsdStat.mxValue[op] = wsdStat.nowValue[op]; } } +SQLITE_PRIVATE void sqlite3StatusDown(int op, int N){ + wsdStatInit; + assert( N>=0 ); + assert( op>=0 && op<ArraySize(statMutex) ); + assert( sqlite3_mutex_held(statMutex[op] ? sqlite3Pcache1Mutex() + : sqlite3MallocMutex()) ); + assert( op>=0 && op<ArraySize(wsdStat.nowValue) ); + wsdStat.nowValue[op] -= N; +} /* -** Set the value of a status to X. +** Adjust the highwater mark if necessary. +** The caller must hold the appropriate mutex. */ -SQLITE_PRIVATE void sqlite3StatusSet(int op, int X){ +SQLITE_PRIVATE void sqlite3StatusHighwater(int op, int X){ + sqlite3StatValueType newValue; wsdStatInit; + assert( X>=0 ); + newValue = (sqlite3StatValueType)X; assert( op>=0 && op<ArraySize(wsdStat.nowValue) ); - wsdStat.nowValue[op] = X; - if( wsdStat.nowValue[op]>wsdStat.mxValue[op] ){ - wsdStat.mxValue[op] = wsdStat.nowValue[op]; + assert( op>=0 && op<ArraySize(statMutex) ); + assert( sqlite3_mutex_held(statMutex[op] ? sqlite3Pcache1Mutex() + : sqlite3MallocMutex()) ); + assert( op==SQLITE_STATUS_MALLOC_SIZE + || op==SQLITE_STATUS_PAGECACHE_SIZE + || op==SQLITE_STATUS_SCRATCH_SIZE + || op==SQLITE_STATUS_PARSER_STACK ); + if( newValue>wsdStat.mxValue[op] ){ + wsdStat.mxValue[op] = newValue; } } /* ** Query status information. 
-** -** This implementation assumes that reading or writing an aligned -** 32-bit integer is an atomic operation. If that assumption is not true, -** then this routine is not threadsafe. */ -SQLITE_API int sqlite3_status(int op, int *pCurrent, int *pHighwater, int resetFlag){ +SQLITE_API int SQLITE_STDCALL sqlite3_status64( + int op, + sqlite3_int64 *pCurrent, + sqlite3_int64 *pHighwater, + int resetFlag +){ + sqlite3_mutex *pMutex; wsdStatInit; if( op<0 || op>=ArraySize(wsdStat.nowValue) ){ return SQLITE_MISUSE_BKPT; } +#ifdef SQLITE_ENABLE_API_ARMOR + if( pCurrent==0 || pHighwater==0 ) return SQLITE_MISUSE_BKPT; +#endif + pMutex = statMutex[op] ? sqlite3Pcache1Mutex() : sqlite3MallocMutex(); + sqlite3_mutex_enter(pMutex); *pCurrent = wsdStat.nowValue[op]; *pHighwater = wsdStat.mxValue[op]; if( resetFlag ){ wsdStat.mxValue[op] = wsdStat.nowValue[op]; } + sqlite3_mutex_leave(pMutex); + (void)pMutex; /* Prevent warning when SQLITE_THREADSAFE=0 */ return SQLITE_OK; +} +SQLITE_API int SQLITE_STDCALL sqlite3_status(int op, int *pCurrent, int *pHighwater, int resetFlag){ + sqlite3_int64 iCur, iHwtr; + int rc; +#ifdef SQLITE_ENABLE_API_ARMOR + if( pCurrent==0 || pHighwater==0 ) return SQLITE_MISUSE_BKPT; +#endif + rc = sqlite3_status64(op, &iCur, &iHwtr, resetFlag); + if( rc==0 ){ + *pCurrent = (int)iCur; + *pHighwater = (int)iHwtr; + } + return rc; } /* ** Query status information for a single database connection */ -SQLITE_API int sqlite3_db_status( +SQLITE_API int SQLITE_STDCALL sqlite3_db_status( sqlite3 *db, /* The database connection whose status is desired */ int op, /* Status verb */ int *pCurrent, /* Write current value here */ int *pHighwater, /* Write high-water mark here */ int resetFlag /* Reset high-water mark if true */ ){ + int rc = SQLITE_OK; /* Return code */ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) || pCurrent==0|| pHighwater==0 ){ + return SQLITE_MISUSE_BKPT; + } +#endif + sqlite3_mutex_enter(db->mutex); switch( op ){ case SQLITE_DBSTATUS_LOOKASIDE_USED: { *pCurrent = db->lookaside.nOut; *pHighwater = db->lookaside.mxOut; if( resetFlag ){ db->lookaside.mxOut = db->lookaside.nOut; } break; } + + case SQLITE_DBSTATUS_LOOKASIDE_HIT: + case SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE: + case SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL: { + testcase( op==SQLITE_DBSTATUS_LOOKASIDE_HIT ); + testcase( op==SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE ); + testcase( op==SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL ); + assert( (op-SQLITE_DBSTATUS_LOOKASIDE_HIT)>=0 ); + assert( (op-SQLITE_DBSTATUS_LOOKASIDE_HIT)<3 ); + *pCurrent = 0; + *pHighwater = db->lookaside.anStat[op - SQLITE_DBSTATUS_LOOKASIDE_HIT]; + if( resetFlag ){ + db->lookaside.anStat[op - SQLITE_DBSTATUS_LOOKASIDE_HIT] = 0; + } + break; + } /* ** Return an approximation for the amount of memory currently used ** by all pagers associated with the given database connection. The ** highwater mark is meaningless and is returned as zero. */ case SQLITE_DBSTATUS_CACHE_USED: { int totalUsed = 0; int i; + sqlite3BtreeEnterAll(db); for(i=0; i<db->nDb; i++){ Btree *pBt = db->aDb[i].pBt; if( pBt ){ Pager *pPager = sqlite3BtreePager(pBt); totalUsed += sqlite3PagerMemUsed(pPager); } } + sqlite3BtreeLeaveAll(db); *pCurrent = totalUsed; *pHighwater = 0; break; } + + /* + ** *pCurrent gets an accurate estimate of the amount of memory used + ** to store the schema for all databases (main, temp, and any ATTACHed + ** databases. *pHighwater is set to zero. 
+ */ + case SQLITE_DBSTATUS_SCHEMA_USED: { + int i; /* Used to iterate through schemas */ + int nByte = 0; /* Used to accumulate return value */ + + sqlite3BtreeEnterAll(db); + db->pnBytesFreed = &nByte; + for(i=0; i<db->nDb; i++){ + Schema *pSchema = db->aDb[i].pSchema; + if( ALWAYS(pSchema!=0) ){ + HashElem *p; + + nByte += sqlite3GlobalConfig.m.xRoundup(sizeof(HashElem)) * ( + pSchema->tblHash.count + + pSchema->trigHash.count + + pSchema->idxHash.count + + pSchema->fkeyHash.count + ); + nByte += sqlite3_msize(pSchema->tblHash.ht); + nByte += sqlite3_msize(pSchema->trigHash.ht); + nByte += sqlite3_msize(pSchema->idxHash.ht); + nByte += sqlite3_msize(pSchema->fkeyHash.ht); + + for(p=sqliteHashFirst(&pSchema->trigHash); p; p=sqliteHashNext(p)){ + sqlite3DeleteTrigger(db, (Trigger*)sqliteHashData(p)); + } + for(p=sqliteHashFirst(&pSchema->tblHash); p; p=sqliteHashNext(p)){ + sqlite3DeleteTable(db, (Table *)sqliteHashData(p)); + } + } + } + db->pnBytesFreed = 0; + sqlite3BtreeLeaveAll(db); + + *pHighwater = 0; + *pCurrent = nByte; + break; + } + + /* + ** *pCurrent gets an accurate estimate of the amount of memory used + ** to store all prepared statements. + ** *pHighwater is set to zero. + */ + case SQLITE_DBSTATUS_STMT_USED: { + struct Vdbe *pVdbe; /* Used to iterate through VMs */ + int nByte = 0; /* Used to accumulate return value */ + + db->pnBytesFreed = &nByte; + for(pVdbe=db->pVdbe; pVdbe; pVdbe=pVdbe->pNext){ + sqlite3VdbeClearObject(db, pVdbe); + sqlite3DbFree(db, pVdbe); + } + db->pnBytesFreed = 0; + + *pHighwater = 0; /* IMP: R-64479-57858 */ + *pCurrent = nByte; + + break; + } + + /* + ** Set *pCurrent to the total cache hits or misses encountered by all + ** pagers the database handle is connected to. *pHighwater is always set + ** to zero. + */ + case SQLITE_DBSTATUS_CACHE_HIT: + case SQLITE_DBSTATUS_CACHE_MISS: + case SQLITE_DBSTATUS_CACHE_WRITE:{ + int i; + int nRet = 0; + assert( SQLITE_DBSTATUS_CACHE_MISS==SQLITE_DBSTATUS_CACHE_HIT+1 ); + assert( SQLITE_DBSTATUS_CACHE_WRITE==SQLITE_DBSTATUS_CACHE_HIT+2 ); + + for(i=0; i<db->nDb; i++){ + if( db->aDb[i].pBt ){ + Pager *pPager = sqlite3BtreePager(db->aDb[i].pBt); + sqlite3PagerCacheStat(pPager, op, resetFlag, &nRet); + } + } + *pHighwater = 0; /* IMP: R-42420-56072 */ + /* IMP: R-54100-20147 */ + /* IMP: R-29431-39229 */ + *pCurrent = nRet; + break; + } + + /* Set *pCurrent to non-zero if there are unresolved deferred foreign + ** key constraints. Set *pCurrent to zero if all foreign key constraints + ** have been satisfied. The *pHighwater is always set to zero. + */ + case SQLITE_DBSTATUS_DEFERRED_FKS: { + *pHighwater = 0; /* IMP: R-11967-56545 */ + *pCurrent = db->nDeferredImmCons>0 || db->nDeferredCons>0; + break; + } + default: { - return SQLITE_ERROR; + rc = SQLITE_ERROR; } } - return SQLITE_OK; + sqlite3_mutex_leave(db->mutex); + return rc; } /************** End of status.c **********************************************/ /************** Begin file date.c ********************************************/ /* @@ -11469,26 +16587,26 @@ ** ** There is only one exported symbol in this file - the function ** sqlite3RegisterDateTimeFunctions() found at the bottom of the file. ** All other code has file scope. ** -** SQLite processes all times and dates as Julian Day numbers. The +** SQLite processes all times and dates as julian day numbers. The ** dates and times are stored as the number of days since noon ** in Greenwich on November 24, 4714 B.C. according to the Gregorian ** calendar system. 
** ** 1970-01-01 00:00:00 is JD 2440587.5 ** 2000-01-01 00:00:00 is JD 2451544.5 ** -** This implemention requires years to be expressed as a 4-digit number +** This implementation requires years to be expressed as a 4-digit number ** which means that only dates between 0000-01-01 and 9999-12-31 can ** be represented, even though julian day numbers allow a much wider ** range of dates. ** ** The Gregorian calendar system is used for all dates and times, ** even those that predate the Gregorian calendar. Historians usually -** use the Julian calendar for dates prior to 1582-10-15 and for some +** use the julian calendar for dates prior to 1582-10-15 and for some ** dates afterwards, depending on locale. Beware of this difference. ** ** The conversion algorithms are implemented based on descriptions ** in the following text: ** @@ -11496,30 +16614,17 @@ ** Astronomical Algorithms, 2nd Edition, 1998 ** ISBM 0-943396-61-1 ** Willmann-Bell, Inc ** Richmond, Virginia (USA) */ +/* #include "sqliteInt.h" */ +/* #include <stdlib.h> */ +/* #include <assert.h> */ #include <time.h> #ifndef SQLITE_OMIT_DATETIME_FUNCS -/* -** On recent Windows platforms, the localtime_s() function is available -** as part of the "Secure CRT". It is essentially equivalent to -** localtime_r() available under most POSIX platforms, except that the -** order of the parameters is reversed. -** -** See http://msdn.microsoft.com/en-us/library/a442x3ye(VS.80).aspx. -** -** If the user has not indicated to use localtime_r() or localtime_s() -** already, check for an MSVC build environment that provides -** localtime_s(). -*/ -#if !defined(HAVE_LOCALTIME_R) && !defined(HAVE_LOCALTIME_S) && \ - defined(_MSC_VER) && defined(_CRT_INSECURE_DEPRECATE) -#define HAVE_LOCALTIME_S 1 -#endif /* ** A structure for holding a single date and time. */ typedef struct DateTime DateTime; @@ -11531,68 +16636,79 @@ double s; /* Seconds */ char validYMD; /* True (1) if Y,M,D are valid */ char validHMS; /* True (1) if h,m,s are valid */ char validJD; /* True (1) if iJD is valid */ char validTZ; /* True (1) if tz is valid */ + char tzSet; /* Timezone was set explicitly */ }; /* -** Convert zDate into one or more integers. Additional arguments -** come in groups of 5 as follows: -** -** N number of digits in the integer -** min minimum allowed value of the integer -** max maximum allowed value of the integer -** nextC first character after the integer -** pVal where to write the integers value. -** -** Conversions continue until one with nextC==0 is encountered. +** Convert zDate into one or more integers according to the conversion +** specifier zFormat. +** +** zFormat[] contains 4 characters for each integer converted, except for +** the last integer which is specified by three characters. The meaning +** of a four-character format specifiers ABCD is: +** +** A: number of digits to convert. Always "2" or "4". +** B: minimum value. Always "0" or "1". +** C: maximum value, decoded as: +** a: 12 +** b: 14 +** c: 24 +** d: 31 +** e: 59 +** f: 9999 +** D: the separator character, or \000 to indicate this is the +** last number to convert. +** +** Example: To translate an ISO-8601 date YYYY-MM-DD, the format would +** be "40f-21a-20c". The "40f-" indicates the 4-digit year followed by "-". +** The "21a-" indicates the 2-digit month followed by "-". The "20c" indicates +** the 2-digit day which is the last integer in the set. +** ** The function returns the number of successful conversions. 
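/*
** A stand-alone sketch that decodes the compact getDigits() format strings
** described above, using the same 'a'..'f' maximum codes.  The specifier
** values fed to main() are the ones used by the date parsers further down
** (parseYyyyMmDd() and parseHhMmSs()); everything else here is illustrative.
*/
#include <stdio.h>

static void decodeSpec(const char *zFormat){
  static const int aMx[] = { 12, 14, 24, 31, 59, 9999 };   /* 'a'..'f' */
  for(;;){
    int nDigit = zFormat[0] - '0';        /* how many digits to read     */
    int mn     = zFormat[1] - '0';        /* minimum accepted value      */
    int mx     = aMx[zFormat[2] - 'a'];   /* maximum accepted value      */
    char sep   = zFormat[3];              /* separator, or 0 when last   */
    if( sep ){
      printf("%d digit(s), value %d..%d, then '%c'\n", nDigit, mn, mx, sep);
      zFormat += 4;
    }else{
      printf("%d digit(s), value %d..%d (last field)\n", nDigit, mn, mx);
      break;
    }
  }
}

int main(void){
  decodeSpec("40f-21a-21d");    /* YYYY-MM-DD */
  decodeSpec("20c:20e");        /* HH:MM      */
  return 0;
}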
*/ -static int getDigits(const char *zDate, ...){ +static int getDigits(const char *zDate, const char *zFormat, ...){ + /* The aMx[] array translates the 3rd character of each format + ** spec into a max size: a b c d e f */ + static const u16 aMx[] = { 12, 14, 24, 31, 59, 9999 }; va_list ap; - int val; - int N; - int min; - int max; - int nextC; - int *pVal; int cnt = 0; - va_start(ap, zDate); + char nextC; + va_start(ap, zFormat); do{ - N = va_arg(ap, int); - min = va_arg(ap, int); - max = va_arg(ap, int); - nextC = va_arg(ap, int); - pVal = va_arg(ap, int*); + char N = zFormat[0] - '0'; + char min = zFormat[1] - '0'; + int val = 0; + u16 max; + + assert( zFormat[2]>='a' && zFormat[2]<='f' ); + max = aMx[zFormat[2] - 'a']; + nextC = zFormat[3]; val = 0; while( N-- ){ if( !sqlite3Isdigit(*zDate) ){ goto end_getDigits; } val = val*10 + *zDate - '0'; zDate++; } - if( val<min || val>max || (nextC!=0 && nextC!=*zDate) ){ + if( val<(int)min || val>(int)max || (nextC!=0 && nextC!=*zDate) ){ goto end_getDigits; } - *pVal = val; + *va_arg(ap,int*) = val; zDate++; cnt++; + zFormat += 4; }while( nextC ); end_getDigits: va_end(ap); return cnt; } -/* -** Read text from z[] and convert into a floating point number. Return -** the number of digits converted. -*/ -#define getValue sqlite3AtoF - /* ** Parse a timezone extension on the end of a date-time. ** The extension is of the form: ** ** (+/-)HH:MM @@ -11623,17 +16739,18 @@ goto zulu_time; }else{ return c!=0; } zDate++; - if( getDigits(zDate, 2, 0, 14, ':', &nHr, 2, 0, 59, 0, &nMn)!=2 ){ + if( getDigits(zDate, "20b:20e", &nHr, &nMn)!=2 ){ return 1; } zDate += 5; p->tz = sgn*(nMn + nHr*60); zulu_time: while( sqlite3Isspace(*zDate) ){ zDate++; } + p->tzSet = 1; return *zDate!=0; } /* ** Parse times of the form HH:MM or HH:MM:SS or HH:MM:SS.FFFF. @@ -11643,17 +16760,17 @@ ** Return 1 if there is a parsing error and 0 on success. */ static int parseHhMmSs(const char *zDate, DateTime *p){ int h, m, s; double ms = 0.0; - if( getDigits(zDate, 2, 0, 24, ':', &h, 2, 0, 59, 0, &m)!=2 ){ + if( getDigits(zDate, "20c:20e", &h, &m)!=2 ){ return 1; } zDate += 5; if( *zDate==':' ){ zDate++; - if( getDigits(zDate, 2, 0, 59, 0, &s)!=1 ){ + if( getDigits(zDate, "20e", &s)!=1 ){ return 1; } zDate += 2; if( *zDate=='.' && sqlite3Isdigit(zDate[1]) ){ double rScale = 1.0; @@ -11737,11 +16854,11 @@ zDate++; neg = 1; }else{ neg = 0; } - if( getDigits(zDate,4,0,9999,'-',&Y,2,1,12,'-',&M,2,1,31,0,&D)!=3 ){ + if( getDigits(zDate, "40f-21a-21d", &Y, &M, &D)!=3 ){ return 1; } zDate += 10; while( sqlite3Isspace(*zDate) || 'T'==*(u8*)zDate ){ zDate++; } if( parseHhMmSs(zDate, p)==0 ){ @@ -11761,22 +16878,26 @@ } return 0; } /* -** Set the time to the current time reported by the VFS +** Set the time to the current time reported by the VFS. +** +** Return the number of errors. */ -static void setDateTimeToCurrent(sqlite3_context *context, DateTime *p){ - double r; - sqlite3 *db = sqlite3_context_db_handle(context); - sqlite3OsCurrentTime(db->pVfs, &r); - p->iJD = (sqlite3_int64)(r*86400000.0 + 0.5); - p->validJD = 1; +static int setDateTimeToCurrent(sqlite3_context *context, DateTime *p){ + p->iJD = sqlite3StmtCurrentTime(context); + if( p->iJD>0 ){ + p->validJD = 1; + return 0; + }else{ + return 1; + } } /* -** Attempt to parse the given string into a Julian Day Number. Return +** Attempt to parse the given string into a julian day number. Return ** the number of errors. 
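/*
** A stand-alone restatement of the julian-day-to-Gregorian conversion that
** computeYMD() performs further down (Meeus' algorithm), using the same
** millisecond representation iJD = julianday*86400000.  The function name
** jdToYmd() is illustrative only; the output should be 2000-01-01 for
** JD 2451544.5, matching the value quoted in the header comment above.
*/
#include <stdio.h>

static void jdToYmd(long long iJD, int *pY, int *pM, int *pD){
  int Z, A, B, C, D, E, X1;
  Z = (int)((iJD + 43200000)/86400000);
  A = (int)((Z - 1867216.25)/36524.25);
  A = Z + 1 + A - (A/4);
  B = A + 1524;
  C = (int)((B - 122.1)/365.25);
  D = (36525*C)/100;
  E = (int)((B - D)/30.6001);
  X1 = (int)(30.6001*E);
  *pD = B - D - X1;
  *pM = E<14 ? E-1 : E-13;
  *pY = *pM>2 ? C - 4716 : C - 4715;
}

int main(void){
  int Y, M, D;
  jdToYmd((long long)(2451544.5*86400000.0), &Y, &M, &D);
  printf("%04d-%02d-%02d\n", Y, M, D);
  return 0;
}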
** ** The following are acceptable forms for the input string: ** ** YYYY-MM-DD HH:MM:SS.FFF +/-HH:MM @@ -11792,21 +16913,18 @@ static int parseDateOrTime( sqlite3_context *context, const char *zDate, DateTime *p ){ - int isRealNum; /* Return from sqlite3IsNumber(). Not used */ + double r; if( parseYyyyMmDd(zDate,p)==0 ){ return 0; }else if( parseHhMmSs(zDate, p)==0 ){ return 0; }else if( sqlite3StrICmp(zDate,"now")==0){ - setDateTimeToCurrent(context, p); - return 0; - }else if( sqlite3IsNumber(zDate, &isRealNum, SQLITE_UTF8) ){ - double r; - getValue(zDate, &r); + return setDateTimeToCurrent(context, p); + }else if( sqlite3AtoF(zDate, &r, sqlite3Strlen30(zDate), SQLITE_UTF8) ){ p->iJD = (sqlite3_int64)(r*86400000.0 + 0.5); p->validJD = 1; return 0; } return 1; @@ -11826,11 +16944,11 @@ Z = (int)((p->iJD + 43200000)/86400000); A = (int)((Z - 1867216.25)/36524.25); A = Z + 1 + A - (A/4); B = A + 1524; C = (int)((B - 122.1)/365.25); - D = (36525*C)/100; + D = (36525*(C&32767))/100; E = (int)((B-D)/30.6001); X1 = (int)(30.6001*E); p->D = B - D - X1; p->M = E<14 ? E-1 : E-13; p->Y = p->M>2 ? C - 4716 : C - 4715; @@ -11870,23 +16988,102 @@ static void clearYMD_HMS_TZ(DateTime *p){ p->validYMD = 0; p->validHMS = 0; p->validTZ = 0; } + +/* +** On recent Windows platforms, the localtime_s() function is available +** as part of the "Secure CRT". It is essentially equivalent to +** localtime_r() available under most POSIX platforms, except that the +** order of the parameters is reversed. +** +** See http://msdn.microsoft.com/en-us/library/a442x3ye(VS.80).aspx. +** +** If the user has not indicated to use localtime_r() or localtime_s() +** already, check for an MSVC build environment that provides +** localtime_s(). +*/ +#if !HAVE_LOCALTIME_R && !HAVE_LOCALTIME_S \ + && defined(_MSC_VER) && defined(_CRT_INSECURE_DEPRECATE) +#undef HAVE_LOCALTIME_S +#define HAVE_LOCALTIME_S 1 +#endif + +#ifndef SQLITE_OMIT_LOCALTIME +/* +** The following routine implements the rough equivalent of localtime_r() +** using whatever operating-system specific localtime facility that +** is available. This routine returns 0 on success and +** non-zero on any kind of error. +** +** If the sqlite3GlobalConfig.bLocaltimeFault variable is true then this +** routine will always fail. +** +** EVIDENCE-OF: R-62172-00036 In this implementation, the standard C +** library function localtime_r() is used to assist in the calculation of +** local time. +*/ +static int osLocaltime(time_t *t, struct tm *pTm){ + int rc; +#if !HAVE_LOCALTIME_R && !HAVE_LOCALTIME_S + struct tm *pX; +#if SQLITE_THREADSAFE>0 + sqlite3_mutex *mutex = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER); +#endif + sqlite3_mutex_enter(mutex); + pX = localtime(t); +#ifndef SQLITE_OMIT_BUILTIN_TEST + if( sqlite3GlobalConfig.bLocaltimeFault ) pX = 0; +#endif + if( pX ) *pTm = *pX; + sqlite3_mutex_leave(mutex); + rc = pX==0; +#else +#ifndef SQLITE_OMIT_BUILTIN_TEST + if( sqlite3GlobalConfig.bLocaltimeFault ) return 1; +#endif +#if HAVE_LOCALTIME_R + rc = localtime_r(t, pTm)==0; +#else + rc = localtime_s(pTm, t); +#endif /* HAVE_LOCALTIME_R */ +#endif /* HAVE_LOCALTIME_R || HAVE_LOCALTIME_S */ + return rc; +} +#endif /* SQLITE_OMIT_LOCALTIME */ + #ifndef SQLITE_OMIT_LOCALTIME /* -** Compute the difference (in milliseconds) -** between localtime and UTC (a.k.a. GMT) -** for the time value p where p is in UTC. +** Compute the difference (in milliseconds) between localtime and UTC +** (a.k.a. GMT) for the time value p where p is in UTC. 
If no error occurs, +** return this value and set *pRc to SQLITE_OK. +** +** Or, if an error does occur, set *pRc to SQLITE_ERROR. The returned value +** is undefined in this case. */ -static sqlite3_int64 localtimeOffset(DateTime *p){ +static sqlite3_int64 localtimeOffset( + DateTime *p, /* Date at which to calculate offset */ + sqlite3_context *pCtx, /* Write error here if one occurs */ + int *pRc /* OUT: Error code. SQLITE_OK or ERROR */ +){ DateTime x, y; time_t t; + struct tm sLocal; + + /* Initialize the contents of sLocal to avoid a compiler warning. */ + memset(&sLocal, 0, sizeof(sLocal)); + x = *p; computeYMD_HMS(&x); if( x.Y<1971 || x.Y>=2038 ){ + /* EVIDENCE-OF: R-55269-29598 The localtime_r() C function normally only + ** works for years between 1970 and 2037. For dates outside this range, + ** SQLite attempts to map the year into an equivalent year within this + ** range, do the calculation, then map the year back. + */ x.Y = 2000; x.M = 1; x.D = 1; x.h = 0; x.m = 0; @@ -11897,51 +17094,27 @@ } x.tz = 0; x.validJD = 0; computeJD(&x); t = (time_t)(x.iJD/1000 - 21086676*(i64)10000); -#ifdef HAVE_LOCALTIME_R - { - struct tm sLocal; - localtime_r(&t, &sLocal); - y.Y = sLocal.tm_year + 1900; - y.M = sLocal.tm_mon + 1; - y.D = sLocal.tm_mday; - y.h = sLocal.tm_hour; - y.m = sLocal.tm_min; - y.s = sLocal.tm_sec; - } -#elif defined(HAVE_LOCALTIME_S) && HAVE_LOCALTIME_S - { - struct tm sLocal; - localtime_s(&sLocal, &t); - y.Y = sLocal.tm_year + 1900; - y.M = sLocal.tm_mon + 1; - y.D = sLocal.tm_mday; - y.h = sLocal.tm_hour; - y.m = sLocal.tm_min; - y.s = sLocal.tm_sec; - } -#else - { - struct tm *pTm; - sqlite3_mutex_enter(sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER)); - pTm = localtime(&t); - y.Y = pTm->tm_year + 1900; - y.M = pTm->tm_mon + 1; - y.D = pTm->tm_mday; - y.h = pTm->tm_hour; - y.m = pTm->tm_min; - y.s = pTm->tm_sec; - sqlite3_mutex_leave(sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER)); - } -#endif + if( osLocaltime(&t, &sLocal) ){ + sqlite3_result_error(pCtx, "local time unavailable", -1); + *pRc = SQLITE_ERROR; + return 0; + } + y.Y = sLocal.tm_year + 1900; + y.M = sLocal.tm_mon + 1; + y.D = sLocal.tm_mday; + y.h = sLocal.tm_hour; + y.m = sLocal.tm_min; + y.s = sLocal.tm_sec; y.validYMD = 1; y.validHMS = 1; y.validJD = 0; y.validTZ = 0; computeJD(&y); + *pRc = SQLITE_OK; return y.iJD - x.iJD; } #endif /* SQLITE_OMIT_LOCALTIME */ /* @@ -11961,13 +17134,16 @@ ** weekday N ** unixepoch ** localtime ** utc ** -** Return 0 on success and 1 if there is any kind of error. +** Return 0 on success and 1 if there is any kind of error. If the error +** is in a system call (i.e. localtime()), then an error message is written +** to context pCtx. If the error is an unrecognized modifier, no error is +** written to pCtx. */ -static int parseModifier(const char *zMod, DateTime *p){ +static int parseModifier(sqlite3_context *pCtx, const char *zMod, DateTime *p){ int rc = 1; int n; double r; char *z, zBuf[30]; z = zBuf; @@ -11983,13 +17159,12 @@ ** Assuming the current time value is UTC (a.k.a. GMT), shift it to ** show local time. 
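/*
** The constant 21086676*10000 used in localtimeOffset() above is simply the
** Unix epoch expressed in seconds of julian time: 2440587.5 days * 86400
** seconds/day = 210866760000.  A small stand-alone check of that conversion
** (POSIX gmtime_r() assumed; everything else is illustrative):
*/
#include <stdio.h>
#include <time.h>

int main(void){
  long long iJD = (long long)(2440587.5*86400000.0)  /* 1970-01-01 00:00 UTC  */
                + 86400000LL;                        /* ... plus one day      */
  time_t t = (time_t)(iJD/1000 - 21086676LL*10000);  /* same formula as above */
  struct tm sUtc;
  gmtime_r(&t, &sUtc);
  printf("%04d-%02d-%02d\n",                         /* prints 1970-01-02     */
         sUtc.tm_year+1900, sUtc.tm_mon+1, sUtc.tm_mday);
  return 0;
}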
*/ if( strcmp(z, "localtime")==0 ){ computeJD(p); - p->iJD += localtimeOffset(p); + p->iJD += localtimeOffset(p, pCtx, &rc); clearYMD_HMS_TZ(p); - rc = 0; } break; } #endif case 'u': { @@ -12004,17 +17179,23 @@ clearYMD_HMS_TZ(p); rc = 0; } #ifndef SQLITE_OMIT_LOCALTIME else if( strcmp(z, "utc")==0 ){ - sqlite3_int64 c1; - computeJD(p); - c1 = localtimeOffset(p); - p->iJD -= c1; - clearYMD_HMS_TZ(p); - p->iJD += c1 - localtimeOffset(p); - rc = 0; + if( p->tzSet==0 ){ + sqlite3_int64 c1; + computeJD(p); + c1 = localtimeOffset(p, pCtx, &rc); + if( rc==SQLITE_OK ){ + p->iJD -= c1; + clearYMD_HMS_TZ(p); + p->iJD += c1 - localtimeOffset(p, pCtx, &rc); + } + p->tzSet = 1; + }else{ + rc = SQLITE_OK; + } } #endif break; } case 'w': { @@ -12023,12 +17204,13 @@ ** ** Move the date to the same time on the next occurrence of ** weekday N where 0==Sunday, 1==Monday, and so forth. If the ** date is already on the appropriate weekday, this is a no-op. */ - if( strncmp(z, "weekday ", 8)==0 && getValue(&z[8],&r)>0 - && (n=(int)r)==r && n>=0 && r<7 ){ + if( strncmp(z, "weekday ", 8)==0 + && sqlite3AtoF(&z[8], &r, sqlite3Strlen30(&z[8]), SQLITE_UTF8) + && (n=(int)r)==r && n>=0 && r<7 ){ sqlite3_int64 Z; computeYMD_HMS(p); p->validTZ = 0; p->validJD = 0; computeJD(p); @@ -12079,12 +17261,15 @@ case '6': case '7': case '8': case '9': { double rRounder; - n = getValue(z, &r); - assert( n>=1 ); + for(n=1; z[n] && z[n]!=':' && !sqlite3Isspace(z[n]); n++){} + if( !sqlite3AtoF(z, &r, n, SQLITE_UTF8) ){ + rc = 1; + break; + } if( z[n]==':' ){ /* A modifier of the form (+|-)HH:MM:SS.FFF adds (or subtracts) the ** specified number of hours, minutes, seconds, and fractional seconds ** to the time. The ".FFF" may be omitted. The ":SS.FFF" may be ** omitted. @@ -12175,12 +17360,13 @@ int i; const unsigned char *z; int eType; memset(p, 0, sizeof(*p)); if( argc==0 ){ - setDateTimeToCurrent(context, p); - }else if( (eType = sqlite3_value_type(argv[0]))==SQLITE_FLOAT + return setDateTimeToCurrent(context, p); + } + if( (eType = sqlite3_value_type(argv[0]))==SQLITE_FLOAT || eType==SQLITE_INTEGER ){ p->iJD = (sqlite3_int64)(sqlite3_value_double(argv[0])*86400000.0 + 0.5); p->validJD = 1; }else{ z = sqlite3_value_text(argv[0]); @@ -12187,13 +17373,12 @@ if( !z || parseDateOrTime(context, (char*)z, p) ){ return 1; } } for(i=1; i<argc; i++){ - if( (z = sqlite3_value_text(argv[i]))==0 || parseModifier((char*)z, p) ){ - return 1; - } + z = sqlite3_value_text(argv[i]); + if( z==0 || parseModifier(context, (char*)z, p) ) return 1; } return 0; } @@ -12284,11 +17469,11 @@ ** ** %d day of month ** %f ** fractional seconds SS.SSS ** %H hour 00-24 ** %j day of year 000-366 -** %J ** Julian day number +** %J ** julian day number ** %m month 01-12 ** %M minute 00-59 ** %s seconds since 1970-01-01 ** %S seconds 00-59 ** %w day of week 0-6 sunday==0 @@ -12304,12 +17489,14 @@ DateTime x; u64 n; size_t i,j; char *z; sqlite3 *db; - const char *zFmt = (const char*)sqlite3_value_text(argv[0]); + const char *zFmt; char zBuf[100]; + if( argc==0 ) return; + zFmt = (const char*)sqlite3_value_text(argv[0]); if( zFmt==0 || isDate(context, argc-1, argv+1, &x) ) return; db = sqlite3_context_db_handle(context); for(i=0, n=1; zFmt[i]; i++, n++){ if( zFmt[i]=='%' ){ switch( zFmt[i+1] ){ @@ -12351,11 +17538,11 @@ z = zBuf; }else if( n>(u64)db->aLimit[SQLITE_LIMIT_LENGTH] ){ sqlite3_result_error_toobig(context); return; }else{ - z = sqlite3DbMallocRaw(db, (int)n); + z = sqlite3DbMallocRawNN(db, (int)n); if( z==0 ){ sqlite3_result_error_nomem(context); return; 
} } @@ -12488,43 +17675,33 @@ sqlite3_value **argv ){ time_t t; char *zFormat = (char *)sqlite3_user_data(context); sqlite3 *db; - double rT; + sqlite3_int64 iT; + struct tm *pTm; + struct tm sNow; char zBuf[20]; UNUSED_PARAMETER(argc); UNUSED_PARAMETER(argv); - db = sqlite3_context_db_handle(context); - sqlite3OsCurrentTime(db->pVfs, &rT); -#ifndef SQLITE_OMIT_FLOATING_POINT - t = 86400.0*(rT - 2440587.5) + 0.5; + iT = sqlite3StmtCurrentTime(context); + if( iT<=0 ) return; + t = iT/1000 - 10000*(sqlite3_int64)21086676; +#if HAVE_GMTIME_R + pTm = gmtime_r(&t, &sNow); #else - /* without floating point support, rT will have - ** already lost fractional day precision. - */ - t = 86400 * (rT - 2440587) - 43200; + sqlite3_mutex_enter(sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER)); + pTm = gmtime(&t); + if( pTm ) memcpy(&sNow, pTm, sizeof(sNow)); + sqlite3_mutex_leave(sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER)); #endif -#ifdef HAVE_GMTIME_R - { - struct tm sNow; - gmtime_r(&t, &sNow); + if( pTm ){ strftime(zBuf, 20, zFormat, &sNow); - } -#else - { - struct tm *pTm; - sqlite3_mutex_enter(sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER)); - pTm = gmtime(&t); - strftime(zBuf, 20, zFormat, pTm); - sqlite3_mutex_leave(sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER)); - } -#endif - - sqlite3_result_text(context, zBuf, -1, SQLITE_TRANSIENT); + sqlite3_result_text(context, zBuf, -1, SQLITE_TRANSIENT); + } } #endif /* ** This function registered all of the above C functions as SQL @@ -12532,18 +17709,18 @@ ** external linkage. */ SQLITE_PRIVATE void sqlite3RegisterDateTimeFunctions(void){ static SQLITE_WSD FuncDef aDateTimeFuncs[] = { #ifndef SQLITE_OMIT_DATETIME_FUNCS - FUNCTION(julianday, -1, 0, 0, juliandayFunc ), - FUNCTION(date, -1, 0, 0, dateFunc ), - FUNCTION(time, -1, 0, 0, timeFunc ), - FUNCTION(datetime, -1, 0, 0, datetimeFunc ), - FUNCTION(strftime, -1, 0, 0, strftimeFunc ), - FUNCTION(current_time, 0, 0, 0, ctimeFunc ), - FUNCTION(current_timestamp, 0, 0, 0, ctimestampFunc), - FUNCTION(current_date, 0, 0, 0, cdateFunc ), + DFUNCTION(julianday, -1, 0, 0, juliandayFunc ), + DFUNCTION(date, -1, 0, 0, dateFunc ), + DFUNCTION(time, -1, 0, 0, timeFunc ), + DFUNCTION(datetime, -1, 0, 0, datetimeFunc ), + DFUNCTION(strftime, -1, 0, 0, strftimeFunc ), + DFUNCTION(current_time, 0, 0, 0, ctimeFunc ), + DFUNCTION(current_timestamp, 0, 0, 0, ctimestampFunc), + DFUNCTION(current_date, 0, 0, 0, cdateFunc ), #else STR_FUNCTION(current_time, 0, "%H:%M:%S", 0, currentTimeFunc), STR_FUNCTION(current_date, 0, "%Y-%m-%d", 0, currentTimeFunc), STR_FUNCTION(current_timestamp, 0, "%Y-%m-%d %H:%M:%S", 0, currentTimeFunc), #endif @@ -12573,31 +17750,63 @@ ** ** This file contains OS interface code that is common to all ** architectures. */ #define _SQLITE_OS_C_ 1 +/* #include "sqliteInt.h" */ #undef _SQLITE_OS_C_ +/* +** If we compile with the SQLITE_TEST macro set, then the following block +** of code will give us the ability to simulate a disk I/O error. This +** is used for testing the I/O recovery logic. 
+*/ +#if defined(SQLITE_TEST) +SQLITE_API int sqlite3_io_error_hit = 0; /* Total number of I/O Errors */ +SQLITE_API int sqlite3_io_error_hardhit = 0; /* Number of non-benign errors */ +SQLITE_API int sqlite3_io_error_pending = 0; /* Count down to first I/O error */ +SQLITE_API int sqlite3_io_error_persist = 0; /* True if I/O errors persist */ +SQLITE_API int sqlite3_io_error_benign = 0; /* True if errors are benign */ +SQLITE_API int sqlite3_diskfull_pending = 0; +SQLITE_API int sqlite3_diskfull = 0; +#endif /* defined(SQLITE_TEST) */ + +/* +** When testing, also keep a count of the number of open files. +*/ +#if defined(SQLITE_TEST) +SQLITE_API int sqlite3_open_file_count = 0; +#endif /* defined(SQLITE_TEST) */ + /* ** The default SQLite sqlite3_vfs implementations do not allocate ** memory (actually, os_unix.c allocates a small amount of memory ** from within OsOpen()), but some third-party implementations may. ** So we test the effects of a malloc() failing and the sqlite3OsXXX() ** function returning SQLITE_IOERR_NOMEM using the DO_OS_MALLOC_TEST macro. ** -** The following functions are instrumented for malloc() failure +** The following functions are instrumented for malloc() failure ** testing: ** -** sqlite3OsOpen() ** sqlite3OsRead() ** sqlite3OsWrite() ** sqlite3OsSync() +** sqlite3OsFileSize() ** sqlite3OsLock() +** sqlite3OsCheckReservedLock() +** sqlite3OsFileControl() +** sqlite3OsShmMap() +** sqlite3OsOpen() +** sqlite3OsDelete() +** sqlite3OsAccess() +** sqlite3OsFullPathname() ** */ -#if defined(SQLITE_TEST) && (SQLITE_OS_WIN==0) - #define DO_OS_MALLOC_TEST(x) if (!x || !sqlite3IsMemJournal(x)) { \ +#if defined(SQLITE_TEST) +SQLITE_API int sqlite3_memdebug_vfs_oom_test = 1; + #define DO_OS_MALLOC_TEST(x) \ + if (sqlite3_memdebug_vfs_oom_test && (!x || !sqlite3IsMemJournal(x))) { \ void *pTstAlloc = sqlite3Malloc(10); \ if (!pTstAlloc) return SQLITE_IOERR_NOMEM; \ sqlite3_free(pTstAlloc); \ } #else @@ -12646,60 +17855,130 @@ } SQLITE_PRIVATE int sqlite3OsCheckReservedLock(sqlite3_file *id, int *pResOut){ DO_OS_MALLOC_TEST(id); return id->pMethods->xCheckReservedLock(id, pResOut); } + +/* +** Use sqlite3OsFileControl() when we are doing something that might fail +** and we need to know about the failures. Use sqlite3OsFileControlHint() +** when simply tossing information over the wall to the VFS and we do not +** really care if the VFS receives and understands the information since it +** is only a hint and can be safely ignored. The sqlite3OsFileControlHint() +** routine has no return value since the return value would be meaningless. +*/ SQLITE_PRIVATE int sqlite3OsFileControl(sqlite3_file *id, int op, void *pArg){ +#ifdef SQLITE_TEST + if( op!=SQLITE_FCNTL_COMMIT_PHASETWO ){ + /* Faults are not injected into COMMIT_PHASETWO because, assuming SQLite + ** is using a regular VFS, it is called after the corresponding + ** transaction has been committed. Injecting a fault at this point + ** confuses the test scripts - the COMMIT comand returns SQLITE_NOMEM + ** but the transaction is committed anyway. + ** + ** The core must call OsFileControl() though, not OsFileControlHint(), + ** as if a custom VFS (e.g. zipvfs) returns an error here, it probably + ** means the commit really has failed and an error should be returned + ** to the user. 
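/*
** File-control requests issued by an application through the public
** sqlite3_file_control() interface end up in a VFS xFileControl method via
** wrappers like sqlite3OsFileControl() above.  A minimal usage sketch; the
** helper name setChunkSize(), the open connection "db", and the 1MiB value
** are illustrative assumptions:
*/
#include "sqlite3.h"

static int setChunkSize(sqlite3 *db){
  int szChunk = 1024*1024;   /* grow the main database file in 1MiB steps */
  return sqlite3_file_control(db, "main", SQLITE_FCNTL_CHUNK_SIZE, &szChunk);
}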
*/ + DO_OS_MALLOC_TEST(id); + } +#endif return id->pMethods->xFileControl(id, op, pArg); } +SQLITE_PRIVATE void sqlite3OsFileControlHint(sqlite3_file *id, int op, void *pArg){ + (void)id->pMethods->xFileControl(id, op, pArg); +} + SQLITE_PRIVATE int sqlite3OsSectorSize(sqlite3_file *id){ int (*xSectorSize)(sqlite3_file*) = id->pMethods->xSectorSize; return (xSectorSize ? xSectorSize(id) : SQLITE_DEFAULT_SECTOR_SIZE); } SQLITE_PRIVATE int sqlite3OsDeviceCharacteristics(sqlite3_file *id){ return id->pMethods->xDeviceCharacteristics(id); } +SQLITE_PRIVATE int sqlite3OsShmLock(sqlite3_file *id, int offset, int n, int flags){ + return id->pMethods->xShmLock(id, offset, n, flags); +} +SQLITE_PRIVATE void sqlite3OsShmBarrier(sqlite3_file *id){ + id->pMethods->xShmBarrier(id); +} +SQLITE_PRIVATE int sqlite3OsShmUnmap(sqlite3_file *id, int deleteFlag){ + return id->pMethods->xShmUnmap(id, deleteFlag); +} +SQLITE_PRIVATE int sqlite3OsShmMap( + sqlite3_file *id, /* Database file handle */ + int iPage, + int pgsz, + int bExtend, /* True to extend file if necessary */ + void volatile **pp /* OUT: Pointer to mapping */ +){ + DO_OS_MALLOC_TEST(id); + return id->pMethods->xShmMap(id, iPage, pgsz, bExtend, pp); +} + +#if SQLITE_MAX_MMAP_SIZE>0 +/* The real implementation of xFetch and xUnfetch */ +SQLITE_PRIVATE int sqlite3OsFetch(sqlite3_file *id, i64 iOff, int iAmt, void **pp){ + DO_OS_MALLOC_TEST(id); + return id->pMethods->xFetch(id, iOff, iAmt, pp); +} +SQLITE_PRIVATE int sqlite3OsUnfetch(sqlite3_file *id, i64 iOff, void *p){ + return id->pMethods->xUnfetch(id, iOff, p); +} +#else +/* No-op stubs to use when memory-mapped I/O is disabled */ +SQLITE_PRIVATE int sqlite3OsFetch(sqlite3_file *id, i64 iOff, int iAmt, void **pp){ + *pp = 0; + return SQLITE_OK; +} +SQLITE_PRIVATE int sqlite3OsUnfetch(sqlite3_file *id, i64 iOff, void *p){ + return SQLITE_OK; +} +#endif /* ** The next group of routines are convenience wrappers around the ** VFS methods. */ SQLITE_PRIVATE int sqlite3OsOpen( - sqlite3_vfs *pVfs, - const char *zPath, - sqlite3_file *pFile, - int flags, + sqlite3_vfs *pVfs, + const char *zPath, + sqlite3_file *pFile, + int flags, int *pFlagsOut ){ int rc; DO_OS_MALLOC_TEST(0); - /* 0x7f3f is a mask of SQLITE_OPEN_ flags that are valid to be passed + /* 0x87f7f is a mask of SQLITE_OPEN_ flags that are valid to be passed ** down into the VFS layer. Some SQLITE_OPEN_ flags (for example, ** SQLITE_OPEN_FULLMUTEX or SQLITE_OPEN_SHAREDCACHE) are blocked before ** reaching the VFS. 
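/*
** The sqlite3OsFetch()/sqlite3OsUnfetch() wrappers above only do real work
** when memory-mapped I/O is compiled in (SQLITE_MAX_MMAP_SIZE>0) and enabled
** at runtime.  One way to enable it, sketched with an illustrative helper
** name, an arbitrary 256MiB limit, and an assumed open connection "db":
*/
#include "sqlite3.h"

static int enableMmap(sqlite3 *db){
  return sqlite3_exec(db, "PRAGMA mmap_size=268435456;", 0, 0, 0);
}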
*/ - rc = pVfs->xOpen(pVfs, zPath, pFile, flags & 0x7f3f, pFlagsOut); + rc = pVfs->xOpen(pVfs, zPath, pFile, flags & 0x87f7f, pFlagsOut); assert( rc==SQLITE_OK || pFile->pMethods==0 ); return rc; } SQLITE_PRIVATE int sqlite3OsDelete(sqlite3_vfs *pVfs, const char *zPath, int dirSync){ + DO_OS_MALLOC_TEST(0); + assert( dirSync==0 || dirSync==1 ); return pVfs->xDelete(pVfs, zPath, dirSync); } SQLITE_PRIVATE int sqlite3OsAccess( - sqlite3_vfs *pVfs, - const char *zPath, - int flags, + sqlite3_vfs *pVfs, + const char *zPath, + int flags, int *pResOut ){ DO_OS_MALLOC_TEST(0); return pVfs->xAccess(pVfs, zPath, flags, pResOut); } SQLITE_PRIVATE int sqlite3OsFullPathname( - sqlite3_vfs *pVfs, - const char *zPath, - int nPathOut, + sqlite3_vfs *pVfs, + const char *zPath, + int nPathOut, char *zPathOut ){ + DO_OS_MALLOC_TEST(0); zPathOut[0] = 0; return pVfs->xFullPathname(pVfs, zPath, nPathOut, zPathOut); } #ifndef SQLITE_OMIT_LOAD_EXTENSION SQLITE_PRIVATE void *sqlite3OsDlOpen(sqlite3_vfs *pVfs, const char *zPath){ @@ -12719,24 +17998,38 @@ return pVfs->xRandomness(pVfs, nByte, zBufOut); } SQLITE_PRIVATE int sqlite3OsSleep(sqlite3_vfs *pVfs, int nMicro){ return pVfs->xSleep(pVfs, nMicro); } -SQLITE_PRIVATE int sqlite3OsCurrentTime(sqlite3_vfs *pVfs, double *pTimeOut){ - return pVfs->xCurrentTime(pVfs, pTimeOut); +SQLITE_PRIVATE int sqlite3OsCurrentTimeInt64(sqlite3_vfs *pVfs, sqlite3_int64 *pTimeOut){ + int rc; + /* IMPLEMENTATION-OF: R-49045-42493 SQLite will use the xCurrentTimeInt64() + ** method to get the current date and time if that method is available + ** (if iVersion is 2 or greater and the function pointer is not NULL) and + ** will fall back to xCurrentTime() if xCurrentTimeInt64() is + ** unavailable. + */ + if( pVfs->iVersion>=2 && pVfs->xCurrentTimeInt64 ){ + rc = pVfs->xCurrentTimeInt64(pVfs, pTimeOut); + }else{ + double r; + rc = pVfs->xCurrentTime(pVfs, &r); + *pTimeOut = (sqlite3_int64)(r*86400000.0); + } + return rc; } SQLITE_PRIVATE int sqlite3OsOpenMalloc( - sqlite3_vfs *pVfs, - const char *zFile, - sqlite3_file **ppFile, + sqlite3_vfs *pVfs, + const char *zFile, + sqlite3_file **ppFile, int flags, int *pOutFlags ){ int rc = SQLITE_NOMEM; sqlite3_file *pFile; - pFile = (sqlite3_file *)sqlite3Malloc(pVfs->szOsFile); + pFile = (sqlite3_file *)sqlite3MallocZero(pVfs->szOsFile); if( pFile ){ rc = sqlite3OsOpen(pVfs, zFile, pFile, flags, pOutFlags); if( rc!=SQLITE_OK ){ sqlite3_free(pFile); }else{ @@ -12774,11 +18067,11 @@ /* ** Locate a VFS by name. If no name is given, simply return the ** first VFS on the list. */ -SQLITE_API sqlite3_vfs *sqlite3_vfs_find(const char *zVfs){ +SQLITE_API sqlite3_vfs *SQLITE_STDCALL sqlite3_vfs_find(const char *zVfs){ sqlite3_vfs *pVfs = 0; #if SQLITE_THREADSAFE sqlite3_mutex *mutex; #endif #ifndef SQLITE_OMIT_AUTOINIT @@ -12820,17 +18113,21 @@ /* ** Register a VFS with the system. It is harmless to register the same ** VFS multiple times. The new VFS becomes the default if makeDflt is ** true. 
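/*
** A sketch of the public VFS interfaces in this area: fetch the current
** default VFS, re-register it (harmless, per the comment above), and open a
** database through it by name.  The helper name and the lack of error
** handling are illustrative simplifications.
*/
#include "sqlite3.h"

static int openThroughDefaultVfs(const char *zFile, sqlite3 **pDb){
  sqlite3_vfs *pVfs = sqlite3_vfs_find(0);   /* 0 selects the default VFS */
  if( pVfs==0 ) return SQLITE_ERROR;
  sqlite3_vfs_register(pVfs, 1);             /* keep it as the default    */
  return sqlite3_open_v2(zFile, pDb,
                         SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE,
                         pVfs->zName);       /* select the VFS explicitly */
}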
*/ -SQLITE_API int sqlite3_vfs_register(sqlite3_vfs *pVfs, int makeDflt){ - sqlite3_mutex *mutex = 0; +SQLITE_API int SQLITE_STDCALL sqlite3_vfs_register(sqlite3_vfs *pVfs, int makeDflt){ + MUTEX_LOGIC(sqlite3_mutex *mutex;) #ifndef SQLITE_OMIT_AUTOINIT int rc = sqlite3_initialize(); if( rc ) return rc; #endif - mutex = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER); +#ifdef SQLITE_ENABLE_API_ARMOR + if( pVfs==0 ) return SQLITE_MISUSE_BKPT; +#endif + + MUTEX_LOGIC( mutex = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER); ) sqlite3_mutex_enter(mutex); vfsUnlink(pVfs); if( makeDflt || vfsList==0 ){ pVfs->pNext = vfsList; vfsList = pVfs; @@ -12844,11 +18141,11 @@ } /* ** Unregister a VFS so that it is no longer accessible. */ -SQLITE_API int sqlite3_vfs_unregister(sqlite3_vfs *pVfs){ +SQLITE_API int SQLITE_STDCALL sqlite3_vfs_unregister(sqlite3_vfs *pVfs){ #if SQLITE_THREADSAFE sqlite3_mutex *mutex = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER); #endif sqlite3_mutex_enter(mutex); vfsUnlink(pVfs); @@ -12882,10 +18179,11 @@ ** is completely recoverable simply by not carrying out the resize. The ** hash table will continue to function normally. So a malloc failure ** during a hash table resize is a benign fault. */ +/* #include "sqliteInt.h" */ #ifndef SQLITE_OMIT_BUILTIN_TEST /* ** Global variables. @@ -12963,10 +18261,11 @@ ** SQLITE_ZERO_MALLOC is defined. The allocation drivers implemented ** here always fail. SQLite will not operate with these drivers. These ** are merely placeholders. Real drivers must be substituted using ** sqlite3_config() before SQLite will operate. */ +/* #include "sqliteInt.h" */ /* ** This version of the memory allocator is the default. It is ** used when no other memory allocator is specified using compile-time ** macros. @@ -13023,19 +18322,109 @@ ** This file contains low-level memory allocation drivers for when ** SQLite will use the standard C-library malloc/realloc/free interface ** to obtain the memory it needs. ** ** This file contains implementations of the low-level memory allocation -** routines specified in the sqlite3_mem_methods object. +** routines specified in the sqlite3_mem_methods object. The content of +** this file is only used if SQLITE_SYSTEM_MALLOC is defined. The +** SQLITE_SYSTEM_MALLOC macro is defined automatically if neither the +** SQLITE_MEMDEBUG nor the SQLITE_WIN32_MALLOC macros are defined. The +** default configuration is to use memory allocation routines in this +** file. +** +** C-preprocessor macro summary: +** +** HAVE_MALLOC_USABLE_SIZE The configure script sets this symbol if +** the malloc_usable_size() interface exists +** on the target platform. Or, this symbol +** can be set manually, if desired. +** If an equivalent interface exists by +** a different name, using a separate -D +** option to rename it. +** +** SQLITE_WITHOUT_ZONEMALLOC Some older macs lack support for the zone +** memory allocator. Set this symbol to enable +** building on older macs. +** +** SQLITE_WITHOUT_MSIZE Set this symbol to disable the use of +** _msize() on windows systems. This might +** be necessary when compiling for Delphi, +** for example. */ +/* #include "sqliteInt.h" */ /* ** This version of the memory allocator is the default. It is ** used when no other memory allocator is specified using compile-time ** macros. */ #ifdef SQLITE_SYSTEM_MALLOC +#if defined(__APPLE__) && !defined(SQLITE_WITHOUT_ZONEMALLOC) + +/* +** Use the zone allocator available on apple products unless the +** SQLITE_WITHOUT_ZONEMALLOC symbol is defined. 
+*/ +#include <sys/sysctl.h> +#include <malloc/malloc.h> +#include <libkern/OSAtomic.h> +static malloc_zone_t* _sqliteZone_; +#define SQLITE_MALLOC(x) malloc_zone_malloc(_sqliteZone_, (x)) +#define SQLITE_FREE(x) malloc_zone_free(_sqliteZone_, (x)); +#define SQLITE_REALLOC(x,y) malloc_zone_realloc(_sqliteZone_, (x), (y)) +#define SQLITE_MALLOCSIZE(x) \ + (_sqliteZone_ ? _sqliteZone_->size(_sqliteZone_,x) : malloc_size(x)) + +#else /* if not __APPLE__ */ + +/* +** Use standard C library malloc and free on non-Apple systems. +** Also used by Apple systems if SQLITE_WITHOUT_ZONEMALLOC is defined. +*/ +#define SQLITE_MALLOC(x) malloc(x) +#define SQLITE_FREE(x) free(x) +#define SQLITE_REALLOC(x,y) realloc((x),(y)) + +/* +** The malloc.h header file is needed for malloc_usable_size() function +** on some systems (e.g. Linux). +*/ +#if HAVE_MALLOC_H && HAVE_MALLOC_USABLE_SIZE +# define SQLITE_USE_MALLOC_H 1 +# define SQLITE_USE_MALLOC_USABLE_SIZE 1 +/* +** The MSVCRT has malloc_usable_size(), but it is called _msize(). The +** use of _msize() is automatic, but can be disabled by compiling with +** -DSQLITE_WITHOUT_MSIZE. Using the _msize() function also requires +** the malloc.h header file. +*/ +#elif defined(_MSC_VER) && !defined(SQLITE_WITHOUT_MSIZE) +# define SQLITE_USE_MALLOC_H +# define SQLITE_USE_MSIZE +#endif + +/* +** Include the malloc.h header file, if necessary. Also set define macro +** SQLITE_MALLOCSIZE to the appropriate function name, which is _msize() +** for MSVC and malloc_usable_size() for most other systems (e.g. Linux). +** The memory size function can always be overridden manually by defining +** the macro SQLITE_MALLOCSIZE to the desired function name. +*/ +#if defined(SQLITE_USE_MALLOC_H) +# include <malloc.h> +# if defined(SQLITE_USE_MALLOC_USABLE_SIZE) +# if !defined(SQLITE_MALLOCSIZE) +# define SQLITE_MALLOCSIZE(x) malloc_usable_size(x) +# endif +# elif defined(SQLITE_USE_MSIZE) +# if !defined(SQLITE_MALLOCSIZE) +# define SQLITE_MALLOCSIZE _msize +# endif +# endif +#endif /* defined(SQLITE_USE_MALLOC_H) */ + +#endif /* __APPLE__ or not __APPLE__ */ /* ** Like malloc(), but remember the size of the allocation ** so that we can find it later using sqlite3MemSize(). ** @@ -13042,22 +18431,31 @@ ** For this low-level routine, we are guaranteed that nByte>0 because ** cases of nByte<=0 will be intercepted and dealt with by higher level ** routines. */ static void *sqlite3MemMalloc(int nByte){ +#ifdef SQLITE_MALLOCSIZE + void *p = SQLITE_MALLOC( nByte ); + if( p==0 ){ + testcase( sqlite3GlobalConfig.xLog!=0 ); + sqlite3_log(SQLITE_NOMEM, "failed to allocate %u bytes of memory", nByte); + } + return p; +#else sqlite3_int64 *p; assert( nByte>0 ); nByte = ROUND8(nByte); - p = malloc( nByte+8 ); + p = SQLITE_MALLOC( nByte+8 ); if( p ){ p[0] = nByte; p++; }else{ testcase( sqlite3GlobalConfig.xLog!=0 ); sqlite3_log(SQLITE_NOMEM, "failed to allocate %u bytes of memory", nByte); } return (void *)p; +#endif } /* ** Like free() but works for allocations obtained from sqlite3MemMalloc() ** or sqlite3MemRealloc(). @@ -13065,44 +18463,63 @@ ** For this low-level routine, we already know that pPrior!=0 since ** cases where pPrior==0 will have been intecepted and dealt with ** by higher-level routines. */ static void sqlite3MemFree(void *pPrior){ +#ifdef SQLITE_MALLOCSIZE + SQLITE_FREE(pPrior); +#else sqlite3_int64 *p = (sqlite3_int64*)pPrior; assert( pPrior!=0 ); p--; - free(p); + SQLITE_FREE(p); +#endif } /* ** Report the allocated size of a prior return from xMalloc() ** or xRealloc(). 
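/*
** When no malloc_usable_size()-style interface is available, the
** sqlite3MemMalloc() routine above remembers each allocation's size in an
** 8-byte prefix.  A stand-alone sketch of that technique (names are
** illustrative; the real code also rounds sizes and logs failures):
*/
#include <stdio.h>
#include <stdlib.h>

static void *demoMalloc(int nByte){
  long long *p = malloc(nByte + 8);  /* one extra 8-byte slot for the size */
  if( p==0 ) return 0;
  p[0] = nByte;                      /* remember the requested size        */
  return (void*)&p[1];               /* hand out memory just past the slot */
}
static int demoSize(void *pMem){
  return (int)((long long*)pMem)[-1];  /* read the size from the prefix */
}
static void demoFree(void *pMem){
  free((long long*)pMem - 1);          /* free from the true start      */
}

int main(void){
  void *p = demoMalloc(100);
  if( p ){
    printf("usable size = %d\n", demoSize(p));   /* prints 100 */
    demoFree(p);
  }
  return 0;
}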
*/ static int sqlite3MemSize(void *pPrior){ +#ifdef SQLITE_MALLOCSIZE + assert( pPrior!=0 ); + return (int)SQLITE_MALLOCSIZE(pPrior); +#else sqlite3_int64 *p; - if( pPrior==0 ) return 0; + assert( pPrior!=0 ); p = (sqlite3_int64*)pPrior; p--; return (int)p[0]; +#endif } /* ** Like realloc(). Resize an allocation previously obtained from ** sqlite3MemMalloc(). ** ** For this low-level interface, we know that pPrior!=0. Cases where ** pPrior==0 while have been intercepted by higher-level routine and -** redirected to xMalloc. Similarly, we know that nByte>0 becauses +** redirected to xMalloc. Similarly, we know that nByte>0 because ** cases where nByte<=0 will have been intercepted by higher-level ** routines and redirected to xFree. */ static void *sqlite3MemRealloc(void *pPrior, int nByte){ +#ifdef SQLITE_MALLOCSIZE + void *p = SQLITE_REALLOC(pPrior, nByte); + if( p==0 ){ + testcase( sqlite3GlobalConfig.xLog!=0 ); + sqlite3_log(SQLITE_NOMEM, + "failed memory resize %u to %u bytes", + SQLITE_MALLOCSIZE(pPrior), nByte); + } + return p; +#else sqlite3_int64 *p = (sqlite3_int64*)pPrior; assert( pPrior!=0 && nByte>0 ); - nByte = ROUND8(nByte); + assert( nByte==ROUND8(nByte) ); /* EV: R-46199-30249 */ p--; - p = realloc(p, nByte+8 ); + p = SQLITE_REALLOC(p, nByte+8 ); if( p ){ p[0] = nByte; p++; }else{ testcase( sqlite3GlobalConfig.xLog!=0 ); @@ -13109,10 +18526,11 @@ sqlite3_log(SQLITE_NOMEM, "failed memory resize %u to %u bytes", sqlite3MemSize(pPrior), nByte); } return (void*)p; +#endif } /* ** Round up a request size to the next valid allocation size. */ @@ -13122,10 +18540,38 @@ /* ** Initialize this module. */ static int sqlite3MemInit(void *NotUsed){ +#if defined(__APPLE__) && !defined(SQLITE_WITHOUT_ZONEMALLOC) + int cpuCount; + size_t len; + if( _sqliteZone_ ){ + return SQLITE_OK; + } + len = sizeof(cpuCount); + /* One usually wants to use hw.acctivecpu for MT decisions, but not here */ + sysctlbyname("hw.ncpu", &cpuCount, &len, NULL, 0); + if( cpuCount>1 ){ + /* defer MT decisions to system malloc */ + _sqliteZone_ = malloc_default_zone(); + }else{ + /* only 1 core, use our own zone to contention over global locks, + ** e.g. we have our own dedicated locks */ + bool success; + malloc_zone_t* newzone = malloc_create_zone(4096, 0); + malloc_set_zone_name(newzone, "Sqlite_Heap"); + do{ + success = OSAtomicCompareAndSwapPtrBarrier(NULL, newzone, + (void * volatile *)&_sqliteZone_); + }while(!_sqliteZone_); + if( !success ){ + /* somebody registered a zone first */ + malloc_destroy_zone(newzone); + } + } +#endif UNUSED_PARAMETER(NotUsed); return SQLITE_OK; } /* @@ -13179,10 +18625,11 @@ ** leaks and memory usage errors. ** ** This file contains implementations of the low-level memory allocation ** routines specified in the sqlite3_mem_methods object. */ +/* #include "sqliteInt.h" */ /* ** This version of the memory allocator is used only if the ** SQLITE_MEMDEBUG macro is defined */ @@ -13196,10 +18643,11 @@ extern void backtrace_symbols_fd(void*const*,int,int); #else # define backtrace(A,B) 1 # define backtrace_symbols_fd(A,B,C) #endif +/* #include <stdio.h> */ /* ** Each memory allocation looks like this: ** ** ------------------------------------------------------------------------ @@ -13337,11 +18785,11 @@ struct MemBlockHdr *pHdr; if( !p ){ return 0; } pHdr = sqlite3MemsysGetHeader(p); - return pHdr->iSize; + return (int)pHdr->iSize; } /* ** Initialize the memory allocation subsystem. 
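/*
** Any of these low-level allocators can be replaced by the application before
** sqlite3_initialize() runs.  A sketch of the registration step; the myXxx
** routines are hypothetical placeholders with the signatures required by
** sqlite3_mem_methods, and would be supplied by the application:
*/
#include "sqlite3.h"

extern void *myMalloc(int n);
extern void myFree(void *p);
extern void *myRealloc(void *p, int n);
extern int myMemSize(void *p);
extern int myRoundup(int n);
extern int myInit(void *pApp);
extern void myShutdown(void *pApp);

static int installAllocator(void){
  static sqlite3_mem_methods m = {
    myMalloc, myFree, myRealloc, myMemSize, myRoundup, myInit, myShutdown, 0
  };
  return sqlite3_config(SQLITE_CONFIG_MALLOC, &m);
}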
*/ @@ -13379,19 +18827,19 @@ static void randomFill(char *pBuf, int nByte){ unsigned int x, y, r; x = SQLITE_PTR_TO_INT(pBuf); y = nByte | 1; while( nByte >= 4 ){ - x = (x>>1) ^ (-(x&1) & 0xd0000001); + x = (x>>1) ^ (-(int)(x&1) & 0xd0000001); y = y*1103515245 + 12345; r = x ^ y; *(int*)pBuf = r; pBuf += 4; nByte -= 4; } while( nByte-- > 0 ){ - x = (x>>1) ^ (-(x&1) & 0xd0000001); + x = (x>>1) ^ (-(int)(x&1) & 0xd0000001); y = y*1103515245 + 12345; r = x ^ y; *(pBuf++) = r & 0xff; } } @@ -13482,13 +18930,13 @@ assert( mem.pLast==pHdr ); mem.pLast = pHdr->pPrev; } z = (char*)pBt; z -= pHdr->nTitle; - adjustStats(pHdr->iSize, -1); + adjustStats((int)pHdr->iSize, -1); randomFill(z, sizeof(void*)*pHdr->nBacktraceSlots + sizeof(*pHdr) + - pHdr->iSize + sizeof(int) + pHdr->nTitle); + (int)pHdr->iSize + sizeof(int) + pHdr->nTitle); free(z); sqlite3_mutex_leave(mem.mutex); } /* @@ -13502,16 +18950,17 @@ */ static void *sqlite3MemRealloc(void *pPrior, int nByte){ struct MemBlockHdr *pOldHdr; void *pNew; assert( mem.disallow==0 ); + assert( (nByte & 7)==0 ); /* EV: R-46199-30249 */ pOldHdr = sqlite3MemsysGetHeader(pPrior); pNew = sqlite3MemMalloc(nByte); if( pNew ){ - memcpy(pNew, pPrior, nByte<pOldHdr->iSize ? nByte : pOldHdr->iSize); + memcpy(pNew, pPrior, (int)(nByte<pOldHdr->iSize ? nByte : pOldHdr->iSize)); if( nByte>pOldHdr->iSize ){ - randomFill(&((char*)pNew)[pOldHdr->iSize], nByte - pOldHdr->iSize); + randomFill(&((char*)pNew)[pOldHdr->iSize], nByte - (int)pOldHdr->iSize); } sqlite3MemFree(pPrior); } return pNew; } @@ -13536,11 +18985,11 @@ /* ** Set the "type" of an allocation. */ SQLITE_PRIVATE void sqlite3MemdebugSetType(void *p, u8 eType){ - if( p ){ + if( p && sqlite3GlobalConfig.m.xMalloc==sqlite3MemMalloc ){ struct MemBlockHdr *pHdr; pHdr = sqlite3MemsysGetHeader(p); assert( pHdr->iForeGuard==FOREGUARD ); pHdr->eType = eType; } @@ -13551,31 +19000,46 @@ ** allocation p. Also return true if p==NULL. ** ** This routine is designed for use within an assert() statement, to ** verify the type of an allocation. For example: ** -** assert( sqlite3MemdebugHasType(p, MEMTYPE_DB) ); +** assert( sqlite3MemdebugHasType(p, MEMTYPE_HEAP) ); */ SQLITE_PRIVATE int sqlite3MemdebugHasType(void *p, u8 eType){ int rc = 1; - if( p ){ + if( p && sqlite3GlobalConfig.m.xMalloc==sqlite3MemMalloc ){ + struct MemBlockHdr *pHdr; + pHdr = sqlite3MemsysGetHeader(p); + assert( pHdr->iForeGuard==FOREGUARD ); /* Allocation is valid */ + if( (pHdr->eType&eType)==0 ){ + rc = 0; + } + } + return rc; +} + +/* +** Return TRUE if the mask of type in eType matches no bits of the type of the +** allocation p. Also return true if p==NULL. +** +** This routine is designed for use within an assert() statement, to +** verify the type of an allocation. For example: +** +** assert( sqlite3MemdebugNoType(p, MEMTYPE_LOOKASIDE) ); +*/ +SQLITE_PRIVATE int sqlite3MemdebugNoType(void *p, u8 eType){ + int rc = 1; + if( p && sqlite3GlobalConfig.m.xMalloc==sqlite3MemMalloc ){ struct MemBlockHdr *pHdr; pHdr = sqlite3MemsysGetHeader(p); assert( pHdr->iForeGuard==FOREGUARD ); /* Allocation is valid */ - assert( (pHdr->eType & (pHdr->eType-1))==0 ); /* Only one type bit set */ - if( (pHdr->eType&eType)==0 ){ - void **pBt; - pBt = (void**)pHdr; - pBt -= pHdr->nBacktraceSlots; - backtrace_symbols_fd(pBt, pHdr->nBacktrace, fileno(stderr)); - fprintf(stderr, "\n"); + if( (pHdr->eType&eType)!=0 ){ rc = 0; } } return rc; } - /* ** Set the number of backtrace levels kept for each allocation. ** A value of zero turns off backtracing. 
The number is always rounded ** up to a multiple of 2. @@ -13607,11 +19071,11 @@ SQLITE_PRIVATE void sqlite3MemdebugSync(){ struct MemBlockHdr *pHdr; for(pHdr=mem.pFirst; pHdr; pHdr=pHdr->pNext){ void **pBt = (void**)pHdr; pBt -= pHdr->nBacktraceSlots; - mem.xBacktrace(pHdr->iSize, pHdr->nBacktrace-1, &pBt[1]); + mem.xBacktrace((int)pHdr->iSize, pHdr->nBacktrace-1, &pBt[1]); } } /* ** Open the file indicated and write a log of all unfreed memory @@ -13696,10 +19160,11 @@ ** be changed. ** ** This version of the memory allocation subsystem is included ** in the build only if SQLITE_ENABLE_MEMSYS3 is defined. */ +/* #include "sqliteInt.h" */ /* ** This version of the memory allocator is only built into the library ** SQLITE_ENABLE_MEMSYS3 is defined. Defining this symbol does not ** mean that the library will use a memory-pool by default, just that @@ -14105,11 +19570,11 @@ ** Free an outstanding memory allocation. ** ** This function assumes that the necessary mutexes, if any, are ** already held by the caller. Hence "Unsafe". */ -void memsys3FreeUnsafe(void *pOld){ +static void memsys3FreeUnsafe(void *pOld){ Mem3Block *p = (Mem3Block*)pOld; int i; u32 size, x; assert( sqlite3_mutex_held(mem3.mutex) ); assert( p>mem3.aPool && p<&mem3.aPool[mem3.nPool] ); @@ -14148,11 +19613,11 @@ ** size returned omits the 8-byte header overhead. This only ** works for chunks that are currently checked out. */ static int memsys3Size(void *p){ Mem3Block *pBlock; - if( p==0 ) return 0; + assert( p!=0 ); pBlock = (Mem3Block*)p; assert( (pBlock[-1].u.hdr.size4x&1)!=0 ); return (pBlock[-1].u.hdr.size4x&~3)*2 - 4; } @@ -14180,21 +19645,21 @@ } /* ** Free memory. */ -void memsys3Free(void *pPrior){ +static void memsys3Free(void *pPrior){ assert( pPrior ); memsys3Enter(); memsys3FreeUnsafe(pPrior); memsys3Leave(); } /* ** Change the size of an existing memory allocation */ -void *memsys3Realloc(void *pPrior, int nBytes){ +static void *memsys3Realloc(void *pPrior, int nBytes){ int nOld; void *p; if( pPrior==0 ){ return sqlite3_malloc(nBytes); } @@ -14387,14 +19852,14 @@ ** This version of the memory allocation subsystem is included ** in the build only if SQLITE_ENABLE_MEMSYS5 is defined. ** ** This memory allocator uses the following algorithm: ** -** 1. All memory allocations sizes are rounded up to a power of 2. +** 1. All memory allocation sizes are rounded up to a power of 2. ** ** 2. If two adjacent free blocks are the halves of a larger block, -** then the two blocks are coalesed into the single larger block. +** then the two blocks are coalesced into the single larger block. ** ** 3. New memory is allocated from the first available free block. ** ** This algorithm is described in: J. M. Robson. "Bounds for Some Functions ** Concerning Dynamic Storage Allocation". Journal of the Association for @@ -14410,10 +19875,11 @@ ** N >= M*(1 + log2(n)/2) - n + 1 ** ** The sqlite3_status() logic tracks the maximum values of n and M so ** that an application can, at any time, verify this constraint. */ +/* #include "sqliteInt.h" */ /* ** This version of the memory allocator is used only when ** SQLITE_ENABLE_MEMSYS5 is defined. */ @@ -14463,10 +19929,11 @@ /* ** Mutex to control access to the memory allocation subsystem. 
*/ sqlite3_mutex *mutex; +#if defined(SQLITE_DEBUG) || defined(SQLITE_TEST) /* ** Performance statistics */ u64 nAlloc; /* Total number of calls to malloc */ u64 totalAlloc; /* Total of all malloc calls - includes internal frag */ @@ -14474,34 +19941,35 @@ u32 currentOut; /* Current checkout, including internal fragmentation */ u32 currentCount; /* Current number of distinct checkouts */ u32 maxOut; /* Maximum instantaneous currentOut */ u32 maxCount; /* Maximum instantaneous currentCount */ u32 maxRequest; /* Largest allocation (exclusive of internal frag) */ +#endif /* ** Lists of free blocks. aiFreelist[0] is a list of free blocks of ** size mem5.szAtom. aiFreelist[1] holds blocks of size szAtom*2. - ** and so forth. + ** aiFreelist[2] holds free blocks of size szAtom*4. And so forth. */ int aiFreelist[LOGMAX+1]; /* ** Space for tracking which blocks are checked out and the size ** of each block. One byte per block. */ u8 *aCtrl; -} mem5 = { 0 }; +} mem5; /* -** Access the static variable through a macro for SQLITE_OMIT_WSD +** Access the static variable through a macro for SQLITE_OMIT_WSD. */ #define mem5 GLOBAL(struct Mem5Global, mem5) /* ** Assuming mem5.zPool is divided up into an array of Mem5Link -** structures, return a pointer to the idx-th such lik. +** structures, return a pointer to the idx-th such link. */ #define MEM5LINK(idx) ((Mem5Link *)(&mem5.zPool[(idx)*mem5.szAtom])) /* ** Unlink the chunk at mem5.aPool[i] from list it is currently @@ -14544,60 +20012,37 @@ } mem5.aiFreelist[iLogsize] = i; } /* -** If the STATIC_MEM mutex is not already held, obtain it now. The mutex -** will already be held (obtained by code in malloc.c) if -** sqlite3GlobalConfig.bMemStat is true. +** Obtain or release the mutex needed to access global data structures. */ static void memsys5Enter(void){ sqlite3_mutex_enter(mem5.mutex); } static void memsys5Leave(void){ sqlite3_mutex_leave(mem5.mutex); } /* -** Return the size of an outstanding allocation, in bytes. The -** size returned omits the 8-byte header overhead. This only -** works for chunks that are currently checked out. +** Return the size of an outstanding allocation, in bytes. +** This only works for chunks that are currently checked out. */ static int memsys5Size(void *p){ - int iSize = 0; - if( p ){ - int i = ((u8 *)p-mem5.zPool)/mem5.szAtom; - assert( i>=0 && i<mem5.nBlock ); - iSize = mem5.szAtom * (1 << (mem5.aCtrl[i]&CTRL_LOGSIZE)); - } + int iSize, i; + assert( p!=0 ); + i = (int)(((u8 *)p-mem5.zPool)/mem5.szAtom); + assert( i>=0 && i<mem5.nBlock ); + iSize = mem5.szAtom * (1 << (mem5.aCtrl[i]&CTRL_LOGSIZE)); return iSize; } -/* -** Find the first entry on the freelist iLogsize. Unlink that -** entry and return its index. -*/ -static int memsys5UnlinkFirst(int iLogsize){ - int i; - int iFirst; - - assert( iLogsize>=0 && iLogsize<=LOGMAX ); - i = iFirst = mem5.aiFreelist[iLogsize]; - assert( iFirst>=0 ); - while( i>0 ){ - if( i<iFirst ) iFirst = i; - i = MEM5LINK(i)->next; - } - memsys5Unlink(iFirst, iLogsize); - return iFirst; -} - /* ** Return a block of memory of at least nBytes in size. ** Return NULL if unable. Return NULL if nBytes==0. ** -** The caller guarantees that nByte positive. +** The caller guarantees that nByte is positive. ** ** The caller has obtained a mutex prior to invoking this ** routine so there is never any chance that two or more ** threads can be in this routine at the same time. 
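/*
** Stand-alone illustration of the size-class computation performed at the top
** of memsys5MallocUnsafe() below: a request is rounded up to szAtom*2^k for
** the smallest k that fits, and k selects the freelist.  The szAtom and nByte
** values here are arbitrary examples.
*/
#include <stdio.h>

int main(void){
  int szAtom = 8;       /* smallest allocation unit */
  int nByte  = 100;     /* hypothetical request     */
  int iFullSz, iLogsize;
  for(iFullSz=szAtom, iLogsize=0; iFullSz<nByte; iFullSz*=2, iLogsize++){}
  printf("request %d -> block of %d bytes on freelist %d\n",
         nByte, iFullSz, iLogsize);      /* 100 -> 128 bytes, freelist 4 */
  return 0;
}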
*/ @@ -14608,37 +20053,37 @@ int iLogsize; /* Log2 of iFullSz/POW2_MIN */ /* nByte must be a positive */ assert( nByte>0 ); + /* No more than 1GiB per allocation */ + if( nByte > 0x40000000 ) return 0; + +#if defined(SQLITE_DEBUG) || defined(SQLITE_TEST) /* Keep track of the maximum allocation request. Even unfulfilled ** requests are counted */ if( (u32)nByte>mem5.maxRequest ){ mem5.maxRequest = nByte; } +#endif - /* Abort if the requested allocation size is larger than the largest - ** power of two that we can represent using 32-bit signed integers. - */ - if( nByte > 0x40000000 ){ - return 0; - } /* Round nByte up to the next valid power of two */ - for(iFullSz=mem5.szAtom, iLogsize=0; iFullSz<nByte; iFullSz *= 2, iLogsize++){} + for(iFullSz=mem5.szAtom,iLogsize=0; iFullSz<nByte; iFullSz*=2,iLogsize++){} /* Make sure mem5.aiFreelist[iLogsize] contains at least one free ** block. If not, then split a block of the next larger power of ** two in order to create a new free block of size iLogsize. */ - for(iBin=iLogsize; mem5.aiFreelist[iBin]<0 && iBin<=LOGMAX; iBin++){} + for(iBin=iLogsize; iBin<=LOGMAX && mem5.aiFreelist[iBin]<0; iBin++){} if( iBin>LOGMAX ){ testcase( sqlite3GlobalConfig.xLog!=0 ); sqlite3_log(SQLITE_NOMEM, "failed to allocate %u bytes", nByte); return 0; } - i = memsys5UnlinkFirst(iBin); + i = mem5.aiFreelist[iBin]; + memsys5Unlink(i, iBin); while( iBin>iLogsize ){ int newSize; iBin--; newSize = 1 << iBin; @@ -14645,18 +20090,26 @@ mem5.aCtrl[i+newSize] = CTRL_FREE | iBin; memsys5Link(i+newSize, iBin); } mem5.aCtrl[i] = iLogsize; +#if defined(SQLITE_DEBUG) || defined(SQLITE_TEST) /* Update allocator performance statistics. */ mem5.nAlloc++; mem5.totalAlloc += iFullSz; mem5.totalExcess += iFullSz - nByte; mem5.currentCount++; mem5.currentOut += iFullSz; if( mem5.maxCount<mem5.currentCount ) mem5.maxCount = mem5.currentCount; if( mem5.maxOut<mem5.currentOut ) mem5.maxOut = mem5.currentOut; +#endif + +#ifdef SQLITE_DEBUG + /* Make sure the allocated memory does not assume that it is set to zero + ** or retains a value from a previous allocation */ + memset(&mem5.zPool[i*mem5.szAtom], 0xAA, iFullSz); +#endif /* Return a pointer to the allocated memory. */ return (void*)&mem5.zPool[i*mem5.szAtom]; } @@ -14668,11 +20121,11 @@ int iBlock; /* Set iBlock to the index of the block pointed to by pOld in ** the array of mem5.szAtom byte blocks pointed to by mem5.zPool. */ - iBlock = ((u8 *)pOld-mem5.zPool)/mem5.szAtom; + iBlock = (int)(((u8 *)pOld-mem5.zPool)/mem5.szAtom); /* Check that the pointer pOld points to a valid, non-free block. 
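/*
** Stand-alone illustration of the buddy computation used in the coalescing
** loop of memsys5FreeUnsafe() just below: for a block of 2^iLogsize atoms
** starting at index iBlock, the buddy is the adjacent block of the same size,
** found by flipping bit iLogsize of the index.  The numbers are arbitrary
** examples.
*/
#include <stdio.h>

int main(void){
  int iLogsize = 3;              /* block spans 2^3 = 8 atoms    */
  int size = 1 << iLogsize;
  int iBlock = 16;               /* hypothetical starting index  */
  int iBuddy;
  if( (iBlock>>iLogsize) & 1 ){
    iBuddy = iBlock - size;      /* odd position: buddy precedes */
  }else{
    iBuddy = iBlock + size;      /* even position: buddy follows */
  }
  printf("buddy of block %d (size %d) is block %d\n", iBlock, size, iBuddy);
  /* prints 24; the two merge only if the buddy is also free and unsplit */
  return 0;
}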
*/ assert( iBlock>=0 && iBlock<mem5.nBlock ); assert( ((u8 *)pOld-mem5.zPool)%mem5.szAtom==0 ); assert( (mem5.aCtrl[iBlock] & CTRL_FREE)==0 ); @@ -14681,27 +20134,30 @@ size = 1<<iLogsize; assert( iBlock+size-1<(u32)mem5.nBlock ); mem5.aCtrl[iBlock] |= CTRL_FREE; mem5.aCtrl[iBlock+size-1] |= CTRL_FREE; + +#if defined(SQLITE_DEBUG) || defined(SQLITE_TEST) assert( mem5.currentCount>0 ); assert( mem5.currentOut>=(size*mem5.szAtom) ); mem5.currentCount--; mem5.currentOut -= size*mem5.szAtom; assert( mem5.currentOut>0 || mem5.currentCount==0 ); assert( mem5.currentCount>0 || mem5.currentOut==0 ); +#endif mem5.aCtrl[iBlock] = CTRL_FREE | iLogsize; while( ALWAYS(iLogsize<LOGMAX) ){ int iBuddy; if( (iBlock>>iLogsize) & 1 ){ iBuddy = iBlock - size; + assert( iBuddy>=0 ); }else{ iBuddy = iBlock + size; + if( iBuddy>=mem5.nBlock ) break; } - assert( iBuddy>=0 ); - if( (iBuddy+(1<<iLogsize))>mem5.nBlock ) break; if( mem5.aCtrl[iBuddy]!=(CTRL_FREE | iLogsize) ) break; memsys5Unlink(iBuddy, iLogsize); iLogsize++; if( iBuddy<iBlock ){ mem5.aCtrl[iBuddy] = CTRL_FREE | iLogsize; @@ -14711,15 +20167,22 @@ mem5.aCtrl[iBlock] = CTRL_FREE | iLogsize; mem5.aCtrl[iBuddy] = 0; } size *= 2; } + +#ifdef SQLITE_DEBUG + /* Overwrite freed memory with the 0x55 bit pattern to verify that it is + ** not used after being freed */ + memset(&mem5.zPool[iBlock*mem5.szAtom], 0x55, size); +#endif + memsys5Link(iBlock, iLogsize); } /* -** Allocate nBytes of memory +** Allocate nBytes of memory. */ static void *memsys5Malloc(int nBytes){ sqlite3_int64 *p = 0; if( nBytes>0 ){ memsys5Enter(); @@ -14756,26 +20219,24 @@ */ static void *memsys5Realloc(void *pPrior, int nBytes){ int nOld; void *p; assert( pPrior!=0 ); - assert( (nBytes&(nBytes-1))==0 ); + assert( (nBytes&(nBytes-1))==0 ); /* EV: R-46199-30249 */ assert( nBytes>=0 ); if( nBytes==0 ){ return 0; } nOld = memsys5Size(pPrior); if( nBytes<=nOld ){ return pPrior; } - memsys5Enter(); - p = memsys5MallocUnsafe(nBytes); + p = memsys5Malloc(nBytes); if( p ){ memcpy(p, pPrior, nOld); - memsys5FreeUnsafe(pPrior); + memsys5Free(pPrior); } - memsys5Leave(); return p; } /* ** Round up a request size to the next valid allocation size. If @@ -14803,11 +20264,11 @@ ** memsys5Log(8) -> 3 ** memsys5Log(9) -> 4 */ static int memsys5Log(int iValue){ int iLog; - for(iLog=0; (1<<iLog)<iValue; iLog++); + for(iLog=0; (iLog<(int)((sizeof(int)*8)-1)) && (1<<iLog)<iValue; iLog++); return iLog; } /* ** Initialize the memory allocator. @@ -14834,10 +20295,11 @@ nByte = sqlite3GlobalConfig.nHeap; zByte = (u8*)sqlite3GlobalConfig.pHeap; assert( zByte!=0 ); /* sqlite3_config() does not allow otherwise */ + /* boundaries on sqlite3GlobalConfig.mnReq are enforced in sqlite3_config() */ nMinLog = memsys5Log(sqlite3GlobalConfig.mnReq); mem5.szAtom = (1<<nMinLog); while( (int)sizeof(Mem5Link)>mem5.szAtom ){ mem5.szAtom = mem5.szAtom << 1; } @@ -14957,44 +20419,55 @@ ************************************************************************* ** This file contains the C functions that implement mutexes. ** ** This file contains code that is common across all mutex implementations. */ +/* #include "sqliteInt.h" */ #if defined(SQLITE_DEBUG) && !defined(SQLITE_MUTEX_OMIT) /* ** For debugging purposes, record when the mutex subsystem is initialized ** and uninitialized so that we can assert() if there is an attempt to ** allocate a mutex while the system is uninitialized. 
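/*
** memsys5 is only used when the application hands SQLite a fixed memory arena
** before initialization (the pHeap/nHeap/mnReq values read by memsys5Init()
** above).  A sketch of that configuration step; the buffer size and minimum
** request size are arbitrary examples, and a build with SQLITE_ENABLE_MEMSYS5
** is assumed:
*/
#include "sqlite3.h"

static sqlite3_int64 aHeap[262144];   /* 2MiB arena, 8-byte aligned */

static int useFixedHeap(void){
  /* arguments: buffer, its size in bytes, minimum allocation size */
  return sqlite3_config(SQLITE_CONFIG_HEAP, aHeap, (int)sizeof(aHeap), 64);
}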
*/ static SQLITE_WSD int mutexIsInit = 0; -#endif /* SQLITE_DEBUG */ +#endif /* SQLITE_DEBUG && !defined(SQLITE_MUTEX_OMIT) */ #ifndef SQLITE_MUTEX_OMIT /* ** Initialize the mutex system. */ SQLITE_PRIVATE int sqlite3MutexInit(void){ int rc = SQLITE_OK; - if( sqlite3GlobalConfig.bCoreMutex ){ - if( !sqlite3GlobalConfig.mutex.xMutexAlloc ){ - /* If the xMutexAlloc method has not been set, then the user did not - ** install a mutex implementation via sqlite3_config() prior to - ** sqlite3_initialize() being called. This block copies pointers to - ** the default implementation into the sqlite3GlobalConfig structure. - */ - sqlite3_mutex_methods *pFrom = sqlite3DefaultMutex(); - sqlite3_mutex_methods *pTo = &sqlite3GlobalConfig.mutex; - - memcpy(pTo, pFrom, offsetof(sqlite3_mutex_methods, xMutexAlloc)); - memcpy(&pTo->xMutexFree, &pFrom->xMutexFree, - sizeof(*pTo) - offsetof(sqlite3_mutex_methods, xMutexFree)); - pTo->xMutexAlloc = pFrom->xMutexAlloc; - } - rc = sqlite3GlobalConfig.mutex.xMutexInit(); - } + if( !sqlite3GlobalConfig.mutex.xMutexAlloc ){ + /* If the xMutexAlloc method has not been set, then the user did not + ** install a mutex implementation via sqlite3_config() prior to + ** sqlite3_initialize() being called. This block copies pointers to + ** the default implementation into the sqlite3GlobalConfig structure. + */ + sqlite3_mutex_methods const *pFrom; + sqlite3_mutex_methods *pTo = &sqlite3GlobalConfig.mutex; + + if( sqlite3GlobalConfig.bCoreMutex ){ + pFrom = sqlite3DefaultMutex(); + }else{ + pFrom = sqlite3NoopMutex(); + } + pTo->xMutexInit = pFrom->xMutexInit; + pTo->xMutexEnd = pFrom->xMutexEnd; + pTo->xMutexFree = pFrom->xMutexFree; + pTo->xMutexEnter = pFrom->xMutexEnter; + pTo->xMutexTry = pFrom->xMutexTry; + pTo->xMutexLeave = pFrom->xMutexLeave; + pTo->xMutexHeld = pFrom->xMutexHeld; + pTo->xMutexNotheld = pFrom->xMutexNotheld; + sqlite3MemoryBarrier(); + pTo->xMutexAlloc = pFrom->xMutexAlloc; + } + assert( sqlite3GlobalConfig.mutex.xMutexInit ); + rc = sqlite3GlobalConfig.mutex.xMutexInit(); #ifdef SQLITE_DEBUG GLOBAL(int, mutexIsInit) = 1; #endif @@ -15019,51 +20492,57 @@ } /* ** Retrieve a pointer to a static mutex or allocate a new dynamic one. */ -SQLITE_API sqlite3_mutex *sqlite3_mutex_alloc(int id){ +SQLITE_API sqlite3_mutex *SQLITE_STDCALL sqlite3_mutex_alloc(int id){ #ifndef SQLITE_OMIT_AUTOINIT - if( sqlite3_initialize() ) return 0; + if( id<=SQLITE_MUTEX_RECURSIVE && sqlite3_initialize() ) return 0; + if( id>SQLITE_MUTEX_RECURSIVE && sqlite3MutexInit() ) return 0; #endif + assert( sqlite3GlobalConfig.mutex.xMutexAlloc ); return sqlite3GlobalConfig.mutex.xMutexAlloc(id); } SQLITE_PRIVATE sqlite3_mutex *sqlite3MutexAlloc(int id){ if( !sqlite3GlobalConfig.bCoreMutex ){ return 0; } assert( GLOBAL(int, mutexIsInit) ); + assert( sqlite3GlobalConfig.mutex.xMutexAlloc ); return sqlite3GlobalConfig.mutex.xMutexAlloc(id); } /* ** Free a dynamic mutex. */ -SQLITE_API void sqlite3_mutex_free(sqlite3_mutex *p){ +SQLITE_API void SQLITE_STDCALL sqlite3_mutex_free(sqlite3_mutex *p){ if( p ){ + assert( sqlite3GlobalConfig.mutex.xMutexFree ); sqlite3GlobalConfig.mutex.xMutexFree(p); } } /* ** Obtain the mutex p. If some other thread already has the mutex, block ** until it can be obtained. */ -SQLITE_API void sqlite3_mutex_enter(sqlite3_mutex *p){ +SQLITE_API void SQLITE_STDCALL sqlite3_mutex_enter(sqlite3_mutex *p){ if( p ){ + assert( sqlite3GlobalConfig.mutex.xMutexEnter ); sqlite3GlobalConfig.mutex.xMutexEnter(p); } } /* ** Obtain the mutex p. 
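/*
** A minimal sketch of the dynamic-mutex lifecycle exposed by the routines
** above and below.  The helper name is illustrative; a caller can rely on
** the fact that passing a NULL mutex to enter/leave/free is a harmless
** no-op when mutexes are disabled.
*/
#include "sqlite3.h"

static void mutexDemo(void){
  sqlite3_mutex *p = sqlite3_mutex_alloc(SQLITE_MUTEX_FAST);
  sqlite3_mutex_enter(p);        /* blocks until the mutex is obtained  */
  /* ... work protected by the mutex goes here ... */
  sqlite3_mutex_leave(p);
  sqlite3_mutex_free(p);         /* only dynamic mutexes may be freed   */
}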
If successful, return SQLITE_OK. Otherwise, if another ** thread holds the mutex and it cannot be obtained, return SQLITE_BUSY. */ -SQLITE_API int sqlite3_mutex_try(sqlite3_mutex *p){ +SQLITE_API int SQLITE_STDCALL sqlite3_mutex_try(sqlite3_mutex *p){ int rc = SQLITE_OK; if( p ){ + assert( sqlite3GlobalConfig.mutex.xMutexTry ); return sqlite3GlobalConfig.mutex.xMutexTry(p); } return rc; } @@ -15071,30 +20550,33 @@ ** The sqlite3_mutex_leave() routine exits a mutex that was previously ** entered by the same thread. The behavior is undefined if the mutex ** is not currently entered. If a NULL pointer is passed as an argument ** this function is a no-op. */ -SQLITE_API void sqlite3_mutex_leave(sqlite3_mutex *p){ +SQLITE_API void SQLITE_STDCALL sqlite3_mutex_leave(sqlite3_mutex *p){ if( p ){ + assert( sqlite3GlobalConfig.mutex.xMutexLeave ); sqlite3GlobalConfig.mutex.xMutexLeave(p); } } #ifndef NDEBUG /* ** The sqlite3_mutex_held() and sqlite3_mutex_notheld() routine are ** intended for use inside assert() statements. */ -SQLITE_API int sqlite3_mutex_held(sqlite3_mutex *p){ +SQLITE_API int SQLITE_STDCALL sqlite3_mutex_held(sqlite3_mutex *p){ + assert( p==0 || sqlite3GlobalConfig.mutex.xMutexHeld ); return p==0 || sqlite3GlobalConfig.mutex.xMutexHeld(p); } -SQLITE_API int sqlite3_mutex_notheld(sqlite3_mutex *p){ +SQLITE_API int SQLITE_STDCALL sqlite3_mutex_notheld(sqlite3_mutex *p){ + assert( p==0 || sqlite3GlobalConfig.mutex.xMutexNotheld ); return p==0 || sqlite3GlobalConfig.mutex.xMutexNotheld(p); } #endif -#endif /* SQLITE_MUTEX_OMIT */ +#endif /* !defined(SQLITE_MUTEX_OMIT) */ /************** End of mutex.c ***********************************************/ /************** Begin file mutex_noop.c **************************************/ /* ** 2008 October 07 @@ -15121,69 +20603,77 @@ ** ** If compiled with SQLITE_DEBUG, then additional logic is inserted ** that does error checking on mutexes to make sure they are being ** called correctly. */ +/* #include "sqliteInt.h" */ +#ifndef SQLITE_MUTEX_OMIT -#if defined(SQLITE_MUTEX_NOOP) && !defined(SQLITE_DEBUG) +#ifndef SQLITE_DEBUG /* ** Stub routines for all mutex methods. ** ** This routines provide no mutual exclusion or error checking. 
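The wrappers in this hunk are the public face of the mutex subsystem. A short application-side sketch using the documented sqlite3_mutex_* interfaces (error handling abbreviated):

    #include "sqlite3.h"

    void with_app_mutex(void){
      sqlite3_mutex *m = sqlite3_mutex_alloc(SQLITE_MUTEX_RECURSIVE);
      if( m==0 ) return;                  /* allocation can fail           */
      sqlite3_mutex_enter(m);             /* blocks until obtained         */
      if( sqlite3_mutex_try(m)==SQLITE_OK ){
        /* A recursive mutex may be re-entered by the same thread, although
        ** some builds make sqlite3_mutex_try() always report SQLITE_BUSY. */
        sqlite3_mutex_leave(m);
      }
      sqlite3_mutex_leave(m);
      sqlite3_mutex_free(m);              /* only dynamic mutexes are freed */
    }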
*/ -static int noopMutexHeld(sqlite3_mutex *p){ return 1; } -static int noopMutexNotheld(sqlite3_mutex *p){ return 1; } static int noopMutexInit(void){ return SQLITE_OK; } static int noopMutexEnd(void){ return SQLITE_OK; } -static sqlite3_mutex *noopMutexAlloc(int id){ return (sqlite3_mutex*)8; } -static void noopMutexFree(sqlite3_mutex *p){ return; } -static void noopMutexEnter(sqlite3_mutex *p){ return; } -static int noopMutexTry(sqlite3_mutex *p){ return SQLITE_OK; } -static void noopMutexLeave(sqlite3_mutex *p){ return; } - -SQLITE_PRIVATE sqlite3_mutex_methods *sqlite3DefaultMutex(void){ - static sqlite3_mutex_methods sMutex = { +static sqlite3_mutex *noopMutexAlloc(int id){ + UNUSED_PARAMETER(id); + return (sqlite3_mutex*)8; +} +static void noopMutexFree(sqlite3_mutex *p){ UNUSED_PARAMETER(p); return; } +static void noopMutexEnter(sqlite3_mutex *p){ UNUSED_PARAMETER(p); return; } +static int noopMutexTry(sqlite3_mutex *p){ + UNUSED_PARAMETER(p); + return SQLITE_OK; +} +static void noopMutexLeave(sqlite3_mutex *p){ UNUSED_PARAMETER(p); return; } + +SQLITE_PRIVATE sqlite3_mutex_methods const *sqlite3NoopMutex(void){ + static const sqlite3_mutex_methods sMutex = { noopMutexInit, noopMutexEnd, noopMutexAlloc, noopMutexFree, noopMutexEnter, noopMutexTry, noopMutexLeave, - noopMutexHeld, - noopMutexNotheld + 0, + 0, }; return &sMutex; } -#endif /* defined(SQLITE_MUTEX_NOOP) && !defined(SQLITE_DEBUG) */ +#endif /* !SQLITE_DEBUG */ -#if defined(SQLITE_MUTEX_NOOP) && defined(SQLITE_DEBUG) +#ifdef SQLITE_DEBUG /* ** In this implementation, error checking is provided for testing ** and debugging purposes. The mutexes still do not provide any ** mutual exclusion. */ /* ** The mutex object */ -struct sqlite3_mutex { +typedef struct sqlite3_debug_mutex { int id; /* The mutex type */ int cnt; /* Number of entries without a matching leave */ -}; +} sqlite3_debug_mutex; /* ** The sqlite3_mutex_held() and sqlite3_mutex_notheld() routine are ** intended for use inside assert() statements. */ -static int debugMutexHeld(sqlite3_mutex *p){ +static int debugMutexHeld(sqlite3_mutex *pX){ + sqlite3_debug_mutex *p = (sqlite3_debug_mutex*)pX; return p==0 || p->cnt>0; } -static int debugMutexNotheld(sqlite3_mutex *p){ +static int debugMutexNotheld(sqlite3_mutex *pX){ + sqlite3_debug_mutex *p = (sqlite3_debug_mutex*)pX; return p==0 || p->cnt==0; } /* ** Initialize and deinitialize the mutex subsystem. @@ -15195,12 +20685,12 @@ ** The sqlite3_mutex_alloc() routine allocates a new ** mutex and returns a pointer to it. If it returns NULL ** that means that a mutex could not be allocated. */ static sqlite3_mutex *debugMutexAlloc(int id){ - static sqlite3_mutex aStatic[6]; - sqlite3_mutex *pNew = 0; + static sqlite3_debug_mutex aStatic[SQLITE_MUTEX_STATIC_VFS3 - 1]; + sqlite3_debug_mutex *pNew = 0; switch( id ){ case SQLITE_MUTEX_FAST: case SQLITE_MUTEX_RECURSIVE: { pNew = sqlite3Malloc(sizeof(*pNew)); if( pNew ){ @@ -15208,27 +20698,37 @@ pNew->cnt = 0; } break; } default: { - assert( id-2 >= 0 ); - assert( id-2 < (int)(sizeof(aStatic)/sizeof(aStatic[0])) ); +#ifdef SQLITE_ENABLE_API_ARMOR + if( id-2<0 || id-2>=ArraySize(aStatic) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif pNew = &aStatic[id-2]; pNew->id = id; break; } } - return pNew; + return (sqlite3_mutex*)pNew; } /* ** This routine deallocates a previously allocated mutex. 
*/ -static void debugMutexFree(sqlite3_mutex *p){ +static void debugMutexFree(sqlite3_mutex *pX){ + sqlite3_debug_mutex *p = (sqlite3_debug_mutex*)pX; assert( p->cnt==0 ); - assert( p->id==SQLITE_MUTEX_FAST || p->id==SQLITE_MUTEX_RECURSIVE ); - sqlite3_free(p); + if( p->id==SQLITE_MUTEX_RECURSIVE || p->id==SQLITE_MUTEX_FAST ){ + sqlite3_free(p); + }else{ +#ifdef SQLITE_ENABLE_API_ARMOR + (void)SQLITE_MISUSE_BKPT; +#endif + } } /* ** The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt ** to enter a mutex. If another thread is already within the mutex, @@ -15238,16 +20738,18 @@ ** be entered multiple times by the same thread. In such cases the, ** mutex must be exited an equal number of times before another thread ** can enter. If the same thread tries to enter any other kind of mutex ** more than once, the behavior is undefined. */ -static void debugMutexEnter(sqlite3_mutex *p){ - assert( p->id==SQLITE_MUTEX_RECURSIVE || debugMutexNotheld(p) ); +static void debugMutexEnter(sqlite3_mutex *pX){ + sqlite3_debug_mutex *p = (sqlite3_debug_mutex*)pX; + assert( p->id==SQLITE_MUTEX_RECURSIVE || debugMutexNotheld(pX) ); p->cnt++; } -static int debugMutexTry(sqlite3_mutex *p){ - assert( p->id==SQLITE_MUTEX_RECURSIVE || debugMutexNotheld(p) ); +static int debugMutexTry(sqlite3_mutex *pX){ + sqlite3_debug_mutex *p = (sqlite3_debug_mutex*)pX; + assert( p->id==SQLITE_MUTEX_RECURSIVE || debugMutexNotheld(pX) ); p->cnt++; return SQLITE_OK; } /* @@ -15254,18 +20756,19 @@ ** The sqlite3_mutex_leave() routine exits a mutex that was ** previously entered by the same thread. The behavior ** is undefined if the mutex is not currently entered or ** is not currently allocated. SQLite will never do either. */ -static void debugMutexLeave(sqlite3_mutex *p){ - assert( debugMutexHeld(p) ); +static void debugMutexLeave(sqlite3_mutex *pX){ + sqlite3_debug_mutex *p = (sqlite3_debug_mutex*)pX; + assert( debugMutexHeld(pX) ); p->cnt--; - assert( p->id==SQLITE_MUTEX_RECURSIVE || debugMutexNotheld(p) ); + assert( p->id==SQLITE_MUTEX_RECURSIVE || debugMutexNotheld(pX) ); } -SQLITE_PRIVATE sqlite3_mutex_methods *sqlite3DefaultMutex(void){ - static sqlite3_mutex_methods sMutex = { +SQLITE_PRIVATE sqlite3_mutex_methods const *sqlite3NoopMutex(void){ + static const sqlite3_mutex_methods sMutex = { debugMutexInit, debugMutexEnd, debugMutexAlloc, debugMutexFree, debugMutexEnter, @@ -15276,286 +20779,24 @@ debugMutexNotheld }; return &sMutex; } -#endif /* defined(SQLITE_MUTEX_NOOP) && defined(SQLITE_DEBUG) */ - -/************** End of mutex_noop.c ******************************************/ -/************** Begin file mutex_os2.c ***************************************/ -/* -** 2007 August 28 -** -** The author disclaims copyright to this source code. In place of -** a legal notice, here is a blessing: -** -** May you do good and not evil. -** May you find forgiveness for yourself and forgive others. -** May you share freely, never taking more than you give. -** -************************************************************************* -** This file contains the C functions that implement mutexes for OS/2 -*/ - -/* -** The code in this file is only used if SQLITE_MUTEX_OS2 is defined. -** See the mutex.h file for details. -*/ -#ifdef SQLITE_MUTEX_OS2 - -/********************** OS/2 Mutex Implementation ********************** -** -** This implementation of mutexes is built using the OS/2 API. -*/ - -/* -** The mutex object -** Each recursive mutex is an instance of the following structure. 
-*/ -struct sqlite3_mutex { - HMTX mutex; /* Mutex controlling the lock */ - int id; /* Mutex type */ - int nRef; /* Number of references */ - TID owner; /* Thread holding this mutex */ -}; - -#define OS2_MUTEX_INITIALIZER 0,0,0,0 - -/* -** Initialize and deinitialize the mutex subsystem. -*/ -static int os2MutexInit(void){ return SQLITE_OK; } -static int os2MutexEnd(void){ return SQLITE_OK; } - -/* -** The sqlite3_mutex_alloc() routine allocates a new -** mutex and returns a pointer to it. If it returns NULL -** that means that a mutex could not be allocated. -** SQLite will unwind its stack and return an error. The argument -** to sqlite3_mutex_alloc() is one of these integer constants: -** -** <ul> -** <li> SQLITE_MUTEX_FAST 0 -** <li> SQLITE_MUTEX_RECURSIVE 1 -** <li> SQLITE_MUTEX_STATIC_MASTER 2 -** <li> SQLITE_MUTEX_STATIC_MEM 3 -** <li> SQLITE_MUTEX_STATIC_PRNG 4 -** </ul> -** -** The first two constants cause sqlite3_mutex_alloc() to create -** a new mutex. The new mutex is recursive when SQLITE_MUTEX_RECURSIVE -** is used but not necessarily so when SQLITE_MUTEX_FAST is used. -** The mutex implementation does not need to make a distinction -** between SQLITE_MUTEX_RECURSIVE and SQLITE_MUTEX_FAST if it does -** not want to. But SQLite will only request a recursive mutex in -** cases where it really needs one. If a faster non-recursive mutex -** implementation is available on the host platform, the mutex subsystem -** might return such a mutex in response to SQLITE_MUTEX_FAST. -** -** The other allowed parameters to sqlite3_mutex_alloc() each return -** a pointer to a static preexisting mutex. Three static mutexes are -** used by the current version of SQLite. Future versions of SQLite -** may add additional static mutexes. Static mutexes are for internal -** use by SQLite only. Applications that use SQLite mutexes should -** use only the dynamic mutexes returned by SQLITE_MUTEX_FAST or -** SQLITE_MUTEX_RECURSIVE. -** -** Note that if one of the dynamic mutex parameters (SQLITE_MUTEX_FAST -** or SQLITE_MUTEX_RECURSIVE) is used then sqlite3_mutex_alloc() -** returns a different mutex on every call. But for the static -** mutex types, the same mutex is returned on every call that has -** the same type number. 
-*/ -static sqlite3_mutex *os2MutexAlloc(int iType){ - sqlite3_mutex *p = NULL; - switch( iType ){ - case SQLITE_MUTEX_FAST: - case SQLITE_MUTEX_RECURSIVE: { - p = sqlite3MallocZero( sizeof(*p) ); - if( p ){ - p->id = iType; - if( DosCreateMutexSem( 0, &p->mutex, 0, FALSE ) != NO_ERROR ){ - sqlite3_free( p ); - p = NULL; - } - } - break; - } - default: { - static volatile int isInit = 0; - static sqlite3_mutex staticMutexes[] = { - { OS2_MUTEX_INITIALIZER, }, - { OS2_MUTEX_INITIALIZER, }, - { OS2_MUTEX_INITIALIZER, }, - { OS2_MUTEX_INITIALIZER, }, - { OS2_MUTEX_INITIALIZER, }, - { OS2_MUTEX_INITIALIZER, }, - }; - if ( !isInit ){ - APIRET rc; - PTIB ptib; - PPIB ppib; - HMTX mutex; - char name[32]; - DosGetInfoBlocks( &ptib, &ppib ); - sqlite3_snprintf( sizeof(name), name, "\\SEM32\\SQLITE%04x", - ppib->pib_ulpid ); - while( !isInit ){ - mutex = 0; - rc = DosCreateMutexSem( name, &mutex, 0, FALSE); - if( rc == NO_ERROR ){ - unsigned int i; - if( !isInit ){ - for( i = 0; i < sizeof(staticMutexes)/sizeof(staticMutexes[0]); i++ ){ - DosCreateMutexSem( 0, &staticMutexes[i].mutex, 0, FALSE ); - } - isInit = 1; - } - DosCloseMutexSem( mutex ); - }else if( rc == ERROR_DUPLICATE_NAME ){ - DosSleep( 1 ); - }else{ - return p; - } - } - } - assert( iType-2 >= 0 ); - assert( iType-2 < sizeof(staticMutexes)/sizeof(staticMutexes[0]) ); - p = &staticMutexes[iType-2]; - p->id = iType; - break; - } - } - return p; -} - - -/* -** This routine deallocates a previously allocated mutex. -** SQLite is careful to deallocate every mutex that it allocates. -*/ -static void os2MutexFree(sqlite3_mutex *p){ - if( p==0 ) return; - assert( p->nRef==0 ); - assert( p->id==SQLITE_MUTEX_FAST || p->id==SQLITE_MUTEX_RECURSIVE ); - DosCloseMutexSem( p->mutex ); - sqlite3_free( p ); -} - -#ifdef SQLITE_DEBUG -/* -** The sqlite3_mutex_held() and sqlite3_mutex_notheld() routine are -** intended for use inside assert() statements. -*/ -static int os2MutexHeld(sqlite3_mutex *p){ - TID tid; - PID pid; - ULONG ulCount; - PTIB ptib; - if( p!=0 ) { - DosQueryMutexSem(p->mutex, &pid, &tid, &ulCount); - } else { - DosGetInfoBlocks(&ptib, NULL); - tid = ptib->tib_ptib2->tib2_ultid; - } - return p==0 || (p->nRef!=0 && p->owner==tid); -} -static int os2MutexNotheld(sqlite3_mutex *p){ - TID tid; - PID pid; - ULONG ulCount; - PTIB ptib; - if( p!= 0 ) { - DosQueryMutexSem(p->mutex, &pid, &tid, &ulCount); - } else { - DosGetInfoBlocks(&ptib, NULL); - tid = ptib->tib_ptib2->tib2_ultid; - } - return p==0 || p->nRef==0 || p->owner!=tid; -} -#endif - -/* -** The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt -** to enter a mutex. If another thread is already within the mutex, -** sqlite3_mutex_enter() will block and sqlite3_mutex_try() will return -** SQLITE_BUSY. The sqlite3_mutex_try() interface returns SQLITE_OK -** upon successful entry. Mutexes created using SQLITE_MUTEX_RECURSIVE can -** be entered multiple times by the same thread. In such cases the, -** mutex must be exited an equal number of times before another thread -** can enter. If the same thread tries to enter any other kind of mutex -** more than once, the behavior is undefined. 
-*/ -static void os2MutexEnter(sqlite3_mutex *p){ - TID tid; - PID holder1; - ULONG holder2; - if( p==0 ) return; - assert( p->id==SQLITE_MUTEX_RECURSIVE || os2MutexNotheld(p) ); - DosRequestMutexSem(p->mutex, SEM_INDEFINITE_WAIT); - DosQueryMutexSem(p->mutex, &holder1, &tid, &holder2); - p->owner = tid; - p->nRef++; -} -static int os2MutexTry(sqlite3_mutex *p){ - int rc; - TID tid; - PID holder1; - ULONG holder2; - if( p==0 ) return SQLITE_OK; - assert( p->id==SQLITE_MUTEX_RECURSIVE || os2MutexNotheld(p) ); - if( DosRequestMutexSem(p->mutex, SEM_IMMEDIATE_RETURN) == NO_ERROR) { - DosQueryMutexSem(p->mutex, &holder1, &tid, &holder2); - p->owner = tid; - p->nRef++; - rc = SQLITE_OK; - } else { - rc = SQLITE_BUSY; - } - - return rc; -} - -/* -** The sqlite3_mutex_leave() routine exits a mutex that was -** previously entered by the same thread. The behavior -** is undefined if the mutex is not currently entered or -** is not currently allocated. SQLite will never do either. -*/ -static void os2MutexLeave(sqlite3_mutex *p){ - TID tid; - PID holder1; - ULONG holder2; - if( p==0 ) return; - assert( p->nRef>0 ); - DosQueryMutexSem(p->mutex, &holder1, &tid, &holder2); - assert( p->owner==tid ); - p->nRef--; - assert( p->nRef==0 || p->id==SQLITE_MUTEX_RECURSIVE ); - DosReleaseMutexSem(p->mutex); -} - -SQLITE_PRIVATE sqlite3_mutex_methods *sqlite3DefaultMutex(void){ - static sqlite3_mutex_methods sMutex = { - os2MutexInit, - os2MutexEnd, - os2MutexAlloc, - os2MutexFree, - os2MutexEnter, - os2MutexTry, - os2MutexLeave, -#ifdef SQLITE_DEBUG - os2MutexHeld, - os2MutexNotheld -#endif - }; - - return &sMutex; -} -#endif /* SQLITE_MUTEX_OS2 */ - -/************** End of mutex_os2.c *******************************************/ +#endif /* SQLITE_DEBUG */ + +/* +** If compiled with SQLITE_MUTEX_NOOP, then the no-op mutex implementation +** is used regardless of the run-time threadsafety setting. +*/ +#ifdef SQLITE_MUTEX_NOOP +SQLITE_PRIVATE sqlite3_mutex_methods const *sqlite3DefaultMutex(void){ + return sqlite3NoopMutex(); +} +#endif /* defined(SQLITE_MUTEX_NOOP) */ +#endif /* !defined(SQLITE_MUTEX_OMIT) */ + +/************** End of mutex_noop.c ******************************************/ /************** Begin file mutex_unix.c **************************************/ /* ** 2007 August 28 ** ** The author disclaims copyright to this source code. In place of @@ -15566,10 +20807,11 @@ ** May you share freely, never taking more than you give. ** ************************************************************************* ** This file contains the C functions that implement mutexes for pthreads */ +/* #include "sqliteInt.h" */ /* ** The code in this file is only used if we are compiling threadsafe ** under unix with pthreads. ** @@ -15578,27 +20820,41 @@ */ #ifdef SQLITE_MUTEX_PTHREADS #include <pthread.h> +/* +** The sqlite3_mutex.id, sqlite3_mutex.nRef, and sqlite3_mutex.owner fields +** are necessary under two condidtions: (1) Debug builds and (2) using +** home-grown mutexes. Encapsulate these conditions into a single #define. +*/ +#if defined(SQLITE_DEBUG) || defined(SQLITE_HOMEGROWN_RECURSIVE_MUTEX) +# define SQLITE_MUTEX_NREF 1 +#else +# define SQLITE_MUTEX_NREF 0 +#endif /* ** Each recursive mutex is an instance of the following structure. 
*/ struct sqlite3_mutex { pthread_mutex_t mutex; /* Mutex controlling the lock */ +#if SQLITE_MUTEX_NREF || defined(SQLITE_ENABLE_API_ARMOR) int id; /* Mutex type */ - int nRef; /* Number of entrances */ - pthread_t owner; /* Thread that is within this mutex */ -#ifdef SQLITE_DEBUG +#endif +#if SQLITE_MUTEX_NREF + volatile int nRef; /* Number of entrances */ + volatile pthread_t owner; /* Thread that is within this mutex */ int trace; /* True to trace changes */ #endif }; -#ifdef SQLITE_DEBUG -#define SQLITE3_MUTEX_INITIALIZER { PTHREAD_MUTEX_INITIALIZER, 0, 0, (pthread_t)0, 0 } +#if SQLITE_MUTEX_NREF +#define SQLITE3_MUTEX_INITIALIZER {PTHREAD_MUTEX_INITIALIZER,0,0,(pthread_t)0,0} +#elif defined(SQLITE_ENABLE_API_ARMOR) +#define SQLITE3_MUTEX_INITIALIZER { PTHREAD_MUTEX_INITIALIZER, 0 } #else -#define SQLITE3_MUTEX_INITIALIZER { PTHREAD_MUTEX_INITIALIZER, 0, 0, (pthread_t)0 } +#define SQLITE3_MUTEX_INITIALIZER { PTHREAD_MUTEX_INITIALIZER } #endif /* ** The sqlite3_mutex_held() and sqlite3_mutex_notheld() routine are ** intended for use only inside assert() statements. On some platforms, @@ -15621,10 +20877,23 @@ } static int pthreadMutexNotheld(sqlite3_mutex *p){ return p->nRef==0 || pthread_equal(p->owner, pthread_self())==0; } #endif + +/* +** Try to provide a memory barrier operation, needed for initialization +** and also for the implementation of xShmBarrier in the VFS in cases +** where SQLite is compiled without mutexes. +*/ +SQLITE_PRIVATE void sqlite3MemoryBarrier(void){ +#if defined(SQLITE_MEMORY_BARRIER) + SQLITE_MEMORY_BARRIER; +#elif defined(__GNUC__) && GCC_VERSION>=4001000 + __sync_synchronize(); +#endif +} /* ** Initialize and deinitialize the mutex subsystem. */ static int pthreadMutexInit(void){ return SQLITE_OK; } @@ -15640,14 +20909,20 @@ ** <ul> ** <li> SQLITE_MUTEX_FAST ** <li> SQLITE_MUTEX_RECURSIVE ** <li> SQLITE_MUTEX_STATIC_MASTER ** <li> SQLITE_MUTEX_STATIC_MEM -** <li> SQLITE_MUTEX_STATIC_MEM2 +** <li> SQLITE_MUTEX_STATIC_OPEN ** <li> SQLITE_MUTEX_STATIC_PRNG ** <li> SQLITE_MUTEX_STATIC_LRU -** <li> SQLITE_MUTEX_STATIC_LRU2 +** <li> SQLITE_MUTEX_STATIC_PMEM +** <li> SQLITE_MUTEX_STATIC_APP1 +** <li> SQLITE_MUTEX_STATIC_APP2 +** <li> SQLITE_MUTEX_STATIC_APP3 +** <li> SQLITE_MUTEX_STATIC_VFS1 +** <li> SQLITE_MUTEX_STATIC_VFS2 +** <li> SQLITE_MUTEX_STATIC_VFS3 ** </ul> ** ** The first two constants cause sqlite3_mutex_alloc() to create ** a new mutex. The new mutex is recursive when SQLITE_MUTEX_RECURSIVE ** is used but not necessarily so when SQLITE_MUTEX_FAST is used. 
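As the comment above notes, SQLITE_MUTEX_RECURSIVE yields a genuinely recursive mutex. Where the platform provides them, the allocation code in the next hunk relies on PTHREAD_MUTEX_RECURSIVE; a standalone example of that POSIX facility (illustrative, not part of the diff):

    #include <pthread.h>

    int make_recursive(pthread_mutex_t *m){
      pthread_mutexattr_t attr;
      int rc = pthread_mutexattr_init(&attr);
      if( rc ) return rc;
      pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
      rc = pthread_mutex_init(m, &attr);
      pthread_mutexattr_destroy(&attr);
      return rc;
    }

    void demo(void){
      pthread_mutex_t m;
      if( make_recursive(&m)==0 ){
        pthread_mutex_lock(&m);
        pthread_mutex_lock(&m);     /* allowed: same thread, recursive type */
        pthread_mutex_unlock(&m);
        pthread_mutex_unlock(&m);
        pthread_mutex_destroy(&m);
      }
    }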
@@ -15677,10 +20952,16 @@ SQLITE3_MUTEX_INITIALIZER, SQLITE3_MUTEX_INITIALIZER, SQLITE3_MUTEX_INITIALIZER, SQLITE3_MUTEX_INITIALIZER, SQLITE3_MUTEX_INITIALIZER, + SQLITE3_MUTEX_INITIALIZER, + SQLITE3_MUTEX_INITIALIZER, + SQLITE3_MUTEX_INITIALIZER, + SQLITE3_MUTEX_INITIALIZER, + SQLITE3_MUTEX_INITIALIZER, + SQLITE3_MUTEX_INITIALIZER, SQLITE3_MUTEX_INITIALIZER }; sqlite3_mutex *p; switch( iType ){ case SQLITE_MUTEX_RECURSIVE: { @@ -15696,30 +20977,34 @@ pthread_mutexattr_init(&recursiveAttr); pthread_mutexattr_settype(&recursiveAttr, PTHREAD_MUTEX_RECURSIVE); pthread_mutex_init(&p->mutex, &recursiveAttr); pthread_mutexattr_destroy(&recursiveAttr); #endif - p->id = iType; } break; } case SQLITE_MUTEX_FAST: { p = sqlite3MallocZero( sizeof(*p) ); if( p ){ - p->id = iType; pthread_mutex_init(&p->mutex, 0); } break; } default: { - assert( iType-2 >= 0 ); - assert( iType-2 < ArraySize(staticMutexes) ); +#ifdef SQLITE_ENABLE_API_ARMOR + if( iType-2<0 || iType-2>=ArraySize(staticMutexes) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif p = &staticMutexes[iType-2]; - p->id = iType; break; } } +#if SQLITE_MUTEX_NREF || defined(SQLITE_ENABLE_API_ARMOR) + if( p ) p->id = iType; +#endif return p; } /* @@ -15727,13 +21012,22 @@ ** allocated mutex. SQLite is careful to deallocate every ** mutex that it allocates. */ static void pthreadMutexFree(sqlite3_mutex *p){ assert( p->nRef==0 ); - assert( p->id==SQLITE_MUTEX_FAST || p->id==SQLITE_MUTEX_RECURSIVE ); - pthread_mutex_destroy(&p->mutex); - sqlite3_free(p); +#if SQLITE_ENABLE_API_ARMOR + if( p->id==SQLITE_MUTEX_FAST || p->id==SQLITE_MUTEX_RECURSIVE ) +#endif + { + pthread_mutex_destroy(&p->mutex); + sqlite3_free(p); + } +#ifdef SQLITE_ENABLE_API_ARMOR + else{ + (void)SQLITE_MISUSE_BKPT; + } +#endif } /* ** The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt ** to enter a mutex. If another thread is already within the mutex, @@ -15772,12 +21066,15 @@ } #else /* Use the built-in recursive mutexes if they are available. */ pthread_mutex_lock(&p->mutex); +#if SQLITE_MUTEX_NREF + assert( p->nRef>0 || p->owner==0 ); p->owner = pthread_self(); p->nRef++; +#endif #endif #ifdef SQLITE_DEBUG if( p->trace ){ printf("enter mutex %p (%d) with nRef=%d\n", p, p->trace, p->nRef); @@ -15815,12 +21112,14 @@ } #else /* Use the built-in recursive mutexes if they are available. */ if( pthread_mutex_trylock(&p->mutex)==0 ){ +#if SQLITE_MUTEX_NREF p->owner = pthread_self(); p->nRef++; +#endif rc = SQLITE_OK; }else{ rc = SQLITE_BUSY; } #endif @@ -15839,11 +21138,14 @@ ** is undefined if the mutex is not currently entered or ** is not currently allocated. SQLite will never do either. 
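When SQLITE_HOMEGROWN_RECURSIVE_MUTEX is defined, the enter and leave paths above emulate recursion themselves using the owner and nRef fields rather than relying on PTHREAD_MUTEX_RECURSIVE. A rough sketch of that idea only, with hypothetical names; the real code adds assertions and debug tracing:

    #include <pthread.h>

    typedef struct {
      pthread_mutex_t mutex;     /* underlying non-recursive lock        */
      volatile int nRef;         /* entry count by the owning thread     */
      volatile pthread_t owner;  /* thread currently inside the mutex    */
    } rec_mutex;

    static void rec_enter(rec_mutex *p){
      pthread_t self = pthread_self();
      if( p->nRef>0 && pthread_equal(p->owner, self) ){
        p->nRef++;                        /* re-entry by the same thread  */
      }else{
        pthread_mutex_lock(&p->mutex);    /* first entry: take the lock   */
        p->owner = self;
        p->nRef = 1;
      }
    }

    static void rec_leave(rec_mutex *p){
      if( --p->nRef==0 ) pthread_mutex_unlock(&p->mutex);
    }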
*/ static void pthreadMutexLeave(sqlite3_mutex *p){ assert( pthreadMutexHeld(p) ); +#if SQLITE_MUTEX_NREF p->nRef--; + if( p->nRef==0 ) p->owner = 0; +#endif assert( p->nRef==0 || p->id==SQLITE_MUTEX_RECURSIVE ); #ifdef SQLITE_HOMEGROWN_RECURSIVE_MUTEX if( p->nRef==0 ){ pthread_mutex_unlock(&p->mutex); @@ -15857,12 +21159,12 @@ printf("leave mutex %p (%d) with nRef=%d\n", p, p->trace, p->nRef); } #endif } -SQLITE_PRIVATE sqlite3_mutex_methods *sqlite3DefaultMutex(void){ - static sqlite3_mutex_methods sMutex = { +SQLITE_PRIVATE sqlite3_mutex_methods const *sqlite3DefaultMutex(void){ + static const sqlite3_mutex_methods sMutex = { pthreadMutexInit, pthreadMutexEnd, pthreadMutexAlloc, pthreadMutexFree, pthreadMutexEnter, @@ -15878,11 +21180,11 @@ }; return &sMutex; } -#endif /* SQLITE_MUTEX_PTHREAD */ +#endif /* SQLITE_MUTEX_PTHREADS */ /************** End of mutex_unix.c ******************************************/ /************** Begin file mutex_w32.c ***************************************/ /* ** 2007 August 14 @@ -15893,70 +21195,347 @@ ** May you do good and not evil. ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ************************************************************************* -** This file contains the C functions that implement mutexes for win32 +** This file contains the C functions that implement mutexes for Win32. */ +/* #include "sqliteInt.h" */ + +#if SQLITE_OS_WIN +/* +** Include code that is common to all os_*.c files +*/ +/************** Include os_common.h in the middle of mutex_w32.c *************/ +/************** Begin file os_common.h ***************************************/ +/* +** 2004 May 22 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This file contains macros and a little bit of code that is common to +** all of the platform-specific files (os_*.c) and is #included into those +** files. +** +** This file should be #included by the os_*.c files only. It is not a +** general purpose header file. +*/ +#ifndef _OS_COMMON_H_ +#define _OS_COMMON_H_ + +/* +** At least two bugs have slipped in because we changed the MEMORY_DEBUG +** macro to SQLITE_DEBUG and some older makefiles have not yet made the +** switch. The following code should catch this problem at compile-time. +*/ +#ifdef MEMORY_DEBUG +# error "The MEMORY_DEBUG macro is obsolete. Use SQLITE_DEBUG instead." +#endif + +/* +** Macros for performance tracing. Normally turned off. Only works +** on i486 hardware. +*/ +#ifdef SQLITE_PERFORMANCE_TRACE + +/* +** hwtime.h contains inline assembler code for implementing +** high-performance timing routines. +*/ +/************** Include hwtime.h in the middle of os_common.h ****************/ +/************** Begin file hwtime.h ******************************************/ +/* +** 2008 May 27 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. 
+** +****************************************************************************** +** +** This file contains inline asm code for retrieving "high-performance" +** counters for x86 class CPUs. +*/ +#ifndef _HWTIME_H_ +#define _HWTIME_H_ + +/* +** The following routine only works on pentium-class (or newer) processors. +** It uses the RDTSC opcode to read the cycle count value out of the +** processor and returns that value. This can be used for high-res +** profiling. +*/ +#if (defined(__GNUC__) || defined(_MSC_VER)) && \ + (defined(i386) || defined(__i386__) || defined(_M_IX86)) + + #if defined(__GNUC__) + + __inline__ sqlite_uint64 sqlite3Hwtime(void){ + unsigned int lo, hi; + __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi)); + return (sqlite_uint64)hi << 32 | lo; + } + + #elif defined(_MSC_VER) + + __declspec(naked) __inline sqlite_uint64 __cdecl sqlite3Hwtime(void){ + __asm { + rdtsc + ret ; return value at EDX:EAX + } + } + + #endif + +#elif (defined(__GNUC__) && defined(__x86_64__)) + + __inline__ sqlite_uint64 sqlite3Hwtime(void){ + unsigned long val; + __asm__ __volatile__ ("rdtsc" : "=A" (val)); + return val; + } + +#elif (defined(__GNUC__) && defined(__ppc__)) + + __inline__ sqlite_uint64 sqlite3Hwtime(void){ + unsigned long long retval; + unsigned long junk; + __asm__ __volatile__ ("\n\ + 1: mftbu %1\n\ + mftb %L0\n\ + mftbu %0\n\ + cmpw %0,%1\n\ + bne 1b" + : "=r" (retval), "=r" (junk)); + return retval; + } + +#else + + #error Need implementation of sqlite3Hwtime() for your platform. + + /* + ** To compile without implementing sqlite3Hwtime() for your platform, + ** you can remove the above #error and use the following + ** stub function. You will lose timing support for many + ** of the debugging and testing utilities, but it should at + ** least compile and run. + */ +SQLITE_PRIVATE sqlite_uint64 sqlite3Hwtime(void){ return ((sqlite_uint64)0); } + +#endif + +#endif /* !defined(_HWTIME_H_) */ + +/************** End of hwtime.h **********************************************/ +/************** Continuing where we left off in os_common.h ******************/ + +static sqlite_uint64 g_start; +static sqlite_uint64 g_elapsed; +#define TIMER_START g_start=sqlite3Hwtime() +#define TIMER_END g_elapsed=sqlite3Hwtime()-g_start +#define TIMER_ELAPSED g_elapsed +#else +#define TIMER_START +#define TIMER_END +#define TIMER_ELAPSED ((sqlite_uint64)0) +#endif + +/* +** If we compile with the SQLITE_TEST macro set, then the following block +** of code will give us the ability to simulate a disk I/O error. This +** is used for testing the I/O recovery logic. 
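The hwtime.h block above defines sqlite3Hwtime() and the TIMER_* macros used by SQLITE_PERFORMANCE_TRACE builds. A sketch of timing a region with that counter; the wrapper function and the work() callback are illustrative, while sqlite_uint64 and sqlite3Hwtime() come from the amalgamation itself:

    /* Measure the processor cycles spent in a callback (RDTSC-based on
    ** x86/x86_64 builds; the usual RDTSC accuracy caveats apply). */
    static sqlite_uint64 time_region(void (*work)(void)){
      sqlite_uint64 t0 = sqlite3Hwtime();
      work();
      return sqlite3Hwtime() - t0;
    }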
+*/ +#if defined(SQLITE_TEST) +SQLITE_API extern int sqlite3_io_error_hit; +SQLITE_API extern int sqlite3_io_error_hardhit; +SQLITE_API extern int sqlite3_io_error_pending; +SQLITE_API extern int sqlite3_io_error_persist; +SQLITE_API extern int sqlite3_io_error_benign; +SQLITE_API extern int sqlite3_diskfull_pending; +SQLITE_API extern int sqlite3_diskfull; +#define SimulateIOErrorBenign(X) sqlite3_io_error_benign=(X) +#define SimulateIOError(CODE) \ + if( (sqlite3_io_error_persist && sqlite3_io_error_hit) \ + || sqlite3_io_error_pending-- == 1 ) \ + { local_ioerr(); CODE; } +static void local_ioerr(){ + IOTRACE(("IOERR\n")); + sqlite3_io_error_hit++; + if( !sqlite3_io_error_benign ) sqlite3_io_error_hardhit++; +} +#define SimulateDiskfullError(CODE) \ + if( sqlite3_diskfull_pending ){ \ + if( sqlite3_diskfull_pending == 1 ){ \ + local_ioerr(); \ + sqlite3_diskfull = 1; \ + sqlite3_io_error_hit = 1; \ + CODE; \ + }else{ \ + sqlite3_diskfull_pending--; \ + } \ + } +#else +#define SimulateIOErrorBenign(X) +#define SimulateIOError(A) +#define SimulateDiskfullError(A) +#endif /* defined(SQLITE_TEST) */ + +/* +** When testing, keep a count of the number of open files. +*/ +#if defined(SQLITE_TEST) +SQLITE_API extern int sqlite3_open_file_count; +#define OpenCounter(X) sqlite3_open_file_count+=(X) +#else +#define OpenCounter(X) +#endif /* defined(SQLITE_TEST) */ + +#endif /* !defined(_OS_COMMON_H_) */ + +/************** End of os_common.h *******************************************/ +/************** Continuing where we left off in mutex_w32.c ******************/ + +/* +** Include the header file for the Windows VFS. +*/ +/************** Include os_win.h in the middle of mutex_w32.c ****************/ +/************** Begin file os_win.h ******************************************/ +/* +** 2013 November 25 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This file contains code that is specific to Windows. +*/ +#ifndef _OS_WIN_H_ +#define _OS_WIN_H_ + +/* +** Include the primary Windows SDK header file. +*/ +#include "windows.h" + +#ifdef __CYGWIN__ +# include <sys/cygwin.h> +# include <errno.h> /* amalgamator: dontcache */ +#endif + +/* +** Determine if we are dealing with Windows NT. +** +** We ought to be able to determine if we are compiling for Windows 9x or +** Windows NT using the _WIN32_WINNT macro as follows: +** +** #if defined(_WIN32_WINNT) +** # define SQLITE_OS_WINNT 1 +** #else +** # define SQLITE_OS_WINNT 0 +** #endif +** +** However, Visual Studio 2005 does not set _WIN32_WINNT by default, as +** it ought to, so the above test does not work. We'll just assume that +** everything is Windows NT unless the programmer explicitly says otherwise +** by setting SQLITE_OS_WINNT to 0. +*/ +#if SQLITE_OS_WIN && !defined(SQLITE_OS_WINNT) +# define SQLITE_OS_WINNT 1 +#endif + +/* +** Determine if we are dealing with Windows CE - which has a much reduced +** API. +*/ +#if defined(_WIN32_WCE) +# define SQLITE_OS_WINCE 1 +#else +# define SQLITE_OS_WINCE 0 +#endif + +/* +** Determine if we are dealing with WinRT, which provides only a subset of +** the full Win32 API. 
+*/ +#if !defined(SQLITE_OS_WINRT) +# define SQLITE_OS_WINRT 0 +#endif + +/* +** For WinCE, some API function parameters do not appear to be declared as +** volatile. +*/ +#if SQLITE_OS_WINCE +# define SQLITE_WIN32_VOLATILE +#else +# define SQLITE_WIN32_VOLATILE volatile +#endif + +/* +** For some Windows sub-platforms, the _beginthreadex() / _endthreadex() +** functions are not available (e.g. those not using MSVC, Cygwin, etc). +*/ +#if SQLITE_OS_WIN && !SQLITE_OS_WINCE && !SQLITE_OS_WINRT && \ + SQLITE_THREADSAFE>0 && !defined(__CYGWIN__) +# define SQLITE_OS_WIN_THREADS 1 +#else +# define SQLITE_OS_WIN_THREADS 0 +#endif + +#endif /* _OS_WIN_H_ */ + +/************** End of os_win.h **********************************************/ +/************** Continuing where we left off in mutex_w32.c ******************/ +#endif /* ** The code in this file is only used if we are compiling multithreaded -** on a win32 system. +** on a Win32 system. */ #ifdef SQLITE_MUTEX_W32 /* ** Each recursive mutex is an instance of the following structure. */ struct sqlite3_mutex { CRITICAL_SECTION mutex; /* Mutex controlling the lock */ int id; /* Mutex type */ - int nRef; /* Number of enterances */ - DWORD owner; /* Thread holding this mutex */ #ifdef SQLITE_DEBUG - int trace; /* True to trace changes */ + volatile int nRef; /* Number of enterances */ + volatile DWORD owner; /* Thread holding this mutex */ + volatile int trace; /* True to trace changes */ #endif }; + +/* +** These are the initializer values used when declaring a "static" mutex +** on Win32. It should be noted that all mutexes require initialization +** on the Win32 platform. +*/ #define SQLITE_W32_MUTEX_INITIALIZER { 0 } + #ifdef SQLITE_DEBUG -#define SQLITE3_MUTEX_INITIALIZER { SQLITE_W32_MUTEX_INITIALIZER, 0, 0L, (DWORD)0, 0 } -#else -#define SQLITE3_MUTEX_INITIALIZER { SQLITE_W32_MUTEX_INITIALIZER, 0, 0L, (DWORD)0 } -#endif - -/* -** Return true (non-zero) if we are running under WinNT, Win2K, WinXP, -** or WinCE. Return false (zero) for Win95, Win98, or WinME. -** -** Here is an interesting observation: Win95, Win98, and WinME lack -** the LockFileEx() API. But we can still statically link against that -** API as long as we don't call it win running Win95/98/ME. A call to -** this routine is used to determine if the host is Win95/98/ME or -** WinNT/2K/XP so that we will know whether or not we can safely call -** the LockFileEx() API. -** -** mutexIsNT() is only used for the TryEnterCriticalSection() API call, -** which is only available if your application was compiled with -** _WIN32_WINNT defined to a value >= 0x0400. Currently, the only -** call to TryEnterCriticalSection() is #ifdef'ed out, so #ifdef -** this out as well. -*/ -#if 0 -#if SQLITE_OS_WINCE -# define mutexIsNT() (1) -#else - static int mutexIsNT(void){ - static int osType = 0; - if( osType==0 ){ - OSVERSIONINFO sInfo; - sInfo.dwOSVersionInfoSize = sizeof(sInfo); - GetVersionEx(&sInfo); - osType = sInfo.dwPlatformId==VER_PLATFORM_WIN32_NT ? 2 : 1; - } - return osType==2; - } -#endif /* SQLITE_OS_WINCE */ +#define SQLITE3_MUTEX_INITIALIZER { SQLITE_W32_MUTEX_INITIALIZER, 0, \ + 0L, (DWORD)0, 0 } +#else +#define SQLITE3_MUTEX_INITIALIZER { SQLITE_W32_MUTEX_INITIALIZER, 0 } #endif #ifdef SQLITE_DEBUG /* ** The sqlite3_mutex_held() and sqlite3_mutex_notheld() routine are @@ -15963,58 +21542,93 @@ ** intended for use only inside assert() statements. 
*/ static int winMutexHeld(sqlite3_mutex *p){ return p->nRef!=0 && p->owner==GetCurrentThreadId(); } + static int winMutexNotheld2(sqlite3_mutex *p, DWORD tid){ return p->nRef==0 || p->owner!=tid; } + static int winMutexNotheld(sqlite3_mutex *p){ - DWORD tid = GetCurrentThreadId(); + DWORD tid = GetCurrentThreadId(); return winMutexNotheld2(p, tid); } #endif +/* +** Try to provide a memory barrier operation, needed for initialization +** and also for the xShmBarrier method of the VFS in cases when SQLite is +** compiled without mutexes (SQLITE_THREADSAFE=0). +*/ +SQLITE_PRIVATE void sqlite3MemoryBarrier(void){ +#if defined(SQLITE_MEMORY_BARRIER) + SQLITE_MEMORY_BARRIER; +#elif defined(__GNUC__) + __sync_synchronize(); +#elif !defined(SQLITE_DISABLE_INTRINSIC) && \ + defined(_MSC_VER) && _MSC_VER>=1300 + _ReadWriteBarrier(); +#elif defined(MemoryBarrier) + MemoryBarrier(); +#endif +} /* ** Initialize and deinitialize the mutex subsystem. */ -static sqlite3_mutex winMutex_staticMutexes[6] = { +static sqlite3_mutex winMutex_staticMutexes[] = { + SQLITE3_MUTEX_INITIALIZER, + SQLITE3_MUTEX_INITIALIZER, + SQLITE3_MUTEX_INITIALIZER, + SQLITE3_MUTEX_INITIALIZER, + SQLITE3_MUTEX_INITIALIZER, + SQLITE3_MUTEX_INITIALIZER, SQLITE3_MUTEX_INITIALIZER, SQLITE3_MUTEX_INITIALIZER, SQLITE3_MUTEX_INITIALIZER, SQLITE3_MUTEX_INITIALIZER, SQLITE3_MUTEX_INITIALIZER, SQLITE3_MUTEX_INITIALIZER }; + static int winMutex_isInit = 0; -/* As winMutexInit() and winMutexEnd() are called as part -** of the sqlite3_initialize and sqlite3_shutdown() -** processing, the "interlocked" magic is probably not -** strictly necessary. +static int winMutex_isNt = -1; /* <0 means "need to query" */ + +/* As the winMutexInit() and winMutexEnd() functions are called as part +** of the sqlite3_initialize() and sqlite3_shutdown() processing, the +** "interlocked" magic used here is probably not strictly necessary. */ -static long winMutex_lock = 0; +static LONG SQLITE_WIN32_VOLATILE winMutex_lock = 0; -static int winMutexInit(void){ +SQLITE_API int SQLITE_STDCALL sqlite3_win32_is_nt(void); /* os_win.c */ +SQLITE_API void SQLITE_STDCALL sqlite3_win32_sleep(DWORD milliseconds); /* os_win.c */ + +static int winMutexInit(void){ /* The first to increment to 1 does actual initialization */ if( InterlockedCompareExchange(&winMutex_lock, 1, 0)==0 ){ int i; for(i=0; i<ArraySize(winMutex_staticMutexes); i++){ +#if SQLITE_OS_WINRT + InitializeCriticalSectionEx(&winMutex_staticMutexes[i].mutex, 0, 0); +#else InitializeCriticalSection(&winMutex_staticMutexes[i].mutex); +#endif } winMutex_isInit = 1; }else{ - /* Someone else is in the process of initing the static mutexes */ + /* Another thread is (in the process of) initializing the static + ** mutexes */ while( !winMutex_isInit ){ - Sleep(1); + sqlite3_win32_sleep(1); } } - return SQLITE_OK; + return SQLITE_OK; } -static int winMutexEnd(void){ - /* The first to decrement to 0 does actual shutdown +static int winMutexEnd(void){ + /* The first to decrement to 0 does actual shutdown ** (which should be the last to shutdown.) */ if( InterlockedCompareExchange(&winMutex_lock, 0, 1)==1 ){ if( winMutex_isInit==1 ){ int i; for(i=0; i<ArraySize(winMutex_staticMutexes); i++){ @@ -16021,11 +21635,11 @@ DeleteCriticalSection(&winMutex_staticMutexes[i].mutex); } winMutex_isInit = 0; } } - return SQLITE_OK; + return SQLITE_OK; } /* ** The sqlite3_mutex_alloc() routine allocates a new ** mutex and returns a pointer to it. 
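winMutexInit() and winMutexEnd() above guard one-time setup and teardown of the static critical sections with InterlockedCompareExchange(). A standalone sketch of that once-init idiom (names are illustrative, not taken from the source):

    #include <windows.h>

    static LONG volatile init_latch = 0;   /* 0 = not started, 1 = started */
    static int  volatile init_done  = 0;   /* set once setup has finished  */

    static void once_init(void (*setup)(void)){
      if( InterlockedCompareExchange(&init_latch, 1, 0)==0 ){
        setup();                 /* this thread won the race: do the work */
        init_done = 1;
      }else{
        while( !init_done ){     /* another thread is initializing        */
          Sleep(1);
        }
      }
    }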
If it returns NULL @@ -16036,14 +21650,20 @@ ** <ul> ** <li> SQLITE_MUTEX_FAST ** <li> SQLITE_MUTEX_RECURSIVE ** <li> SQLITE_MUTEX_STATIC_MASTER ** <li> SQLITE_MUTEX_STATIC_MEM -** <li> SQLITE_MUTEX_STATIC_MEM2 +** <li> SQLITE_MUTEX_STATIC_OPEN ** <li> SQLITE_MUTEX_STATIC_PRNG ** <li> SQLITE_MUTEX_STATIC_LRU -** <li> SQLITE_MUTEX_STATIC_LRU2 +** <li> SQLITE_MUTEX_STATIC_PMEM +** <li> SQLITE_MUTEX_STATIC_APP1 +** <li> SQLITE_MUTEX_STATIC_APP2 +** <li> SQLITE_MUTEX_STATIC_APP3 +** <li> SQLITE_MUTEX_STATIC_VFS1 +** <li> SQLITE_MUTEX_STATIC_VFS2 +** <li> SQLITE_MUTEX_STATIC_VFS3 ** </ul> ** ** The first two constants cause sqlite3_mutex_alloc() to create ** a new mutex. The new mutex is recursive when SQLITE_MUTEX_RECURSIVE ** is used but not necessarily so when SQLITE_MUTEX_FAST is used. @@ -16062,11 +21682,11 @@ ** use only the dynamic mutexes returned by SQLITE_MUTEX_FAST or ** SQLITE_MUTEX_RECURSIVE. ** ** Note that if one of the dynamic mutex parameters (SQLITE_MUTEX_FAST ** or SQLITE_MUTEX_RECURSIVE) is used then sqlite3_mutex_alloc() -** returns a different mutex on every call. But for the static +** returns a different mutex on every call. But for the static ** mutex types, the same mutex is returned on every call that has ** the same type number. */ static sqlite3_mutex *winMutexAlloc(int iType){ sqlite3_mutex *p; @@ -16073,22 +21693,39 @@ switch( iType ){ case SQLITE_MUTEX_FAST: case SQLITE_MUTEX_RECURSIVE: { p = sqlite3MallocZero( sizeof(*p) ); - if( p ){ + if( p ){ p->id = iType; +#ifdef SQLITE_DEBUG +#ifdef SQLITE_WIN32_MUTEX_TRACE_DYNAMIC + p->trace = 1; +#endif +#endif +#if SQLITE_OS_WINRT + InitializeCriticalSectionEx(&p->mutex, 0, 0); +#else InitializeCriticalSection(&p->mutex); +#endif } break; } default: { - assert( winMutex_isInit==1 ); - assert( iType-2 >= 0 ); - assert( iType-2 < ArraySize(winMutex_staticMutexes) ); +#ifdef SQLITE_ENABLE_API_ARMOR + if( iType-2<0 || iType-2>=ArraySize(winMutex_staticMutexes) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif p = &winMutex_staticMutexes[iType-2]; p->id = iType; +#ifdef SQLITE_DEBUG +#ifdef SQLITE_WIN32_MUTEX_TRACE_STATIC + p->trace = 1; +#endif +#endif break; } } return p; } @@ -16099,14 +21736,19 @@ ** allocated mutex. SQLite is careful to deallocate every ** mutex that it allocates. */ static void winMutexFree(sqlite3_mutex *p){ assert( p ); - assert( p->nRef==0 ); - assert( p->id==SQLITE_MUTEX_FAST || p->id==SQLITE_MUTEX_RECURSIVE ); - DeleteCriticalSection(&p->mutex); - sqlite3_free(p); + assert( p->nRef==0 && p->owner==0 ); + if( p->id==SQLITE_MUTEX_FAST || p->id==SQLITE_MUTEX_RECURSIVE ){ + DeleteCriticalSection(&p->mutex); + sqlite3_free(p); + }else{ +#ifdef SQLITE_ENABLE_API_ARMOR + (void)SQLITE_MISUSE_BKPT; +#endif + } } /* ** The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt ** to enter a mutex. If another thread is already within the mutex, @@ -16117,50 +21759,71 @@ ** mutex must be exited an equal number of times before another thread ** can enter. If the same thread tries to enter any other kind of mutex ** more than once, the behavior is undefined. 
*/ static void winMutexEnter(sqlite3_mutex *p){ - DWORD tid = GetCurrentThreadId(); +#if defined(SQLITE_DEBUG) || defined(SQLITE_TEST) + DWORD tid = GetCurrentThreadId(); +#endif +#ifdef SQLITE_DEBUG + assert( p ); assert( p->id==SQLITE_MUTEX_RECURSIVE || winMutexNotheld2(p, tid) ); +#else + assert( p ); +#endif + assert( winMutex_isInit==1 ); EnterCriticalSection(&p->mutex); - p->owner = tid; +#ifdef SQLITE_DEBUG + assert( p->nRef>0 || p->owner==0 ); + p->owner = tid; p->nRef++; -#ifdef SQLITE_DEBUG if( p->trace ){ - printf("enter mutex %p (%d) with nRef=%d\n", p, p->trace, p->nRef); + OSTRACE(("ENTER-MUTEX tid=%lu, mutex=%p (%d), nRef=%d\n", + tid, p, p->trace, p->nRef)); } #endif } + static int winMutexTry(sqlite3_mutex *p){ -#ifndef NDEBUG - DWORD tid = GetCurrentThreadId(); +#if defined(SQLITE_DEBUG) || defined(SQLITE_TEST) + DWORD tid = GetCurrentThreadId(); #endif int rc = SQLITE_BUSY; + assert( p ); assert( p->id==SQLITE_MUTEX_RECURSIVE || winMutexNotheld2(p, tid) ); /* ** The sqlite3_mutex_try() routine is very rarely used, and when it ** is used it is merely an optimization. So it is OK for it to always - ** fail. + ** fail. ** ** The TryEnterCriticalSection() interface is only available on WinNT. ** And some windows compilers complain if you try to use it without ** first doing some #defines that prevent SQLite from building on Win98. ** For that reason, we will omit this optimization for now. See ** ticket #2685. */ -#if 0 - if( mutexIsNT() && TryEnterCriticalSection(&p->mutex) ){ +#if defined(_WIN32_WINNT) && _WIN32_WINNT >= 0x0400 + assert( winMutex_isInit==1 ); + assert( winMutex_isNt>=-1 && winMutex_isNt<=1 ); + if( winMutex_isNt<0 ){ + winMutex_isNt = sqlite3_win32_is_nt(); + } + assert( winMutex_isNt==0 || winMutex_isNt==1 ); + if( winMutex_isNt && TryEnterCriticalSection(&p->mutex) ){ +#ifdef SQLITE_DEBUG p->owner = tid; p->nRef++; +#endif rc = SQLITE_OK; } #else UNUSED_PARAMETER(p); #endif #ifdef SQLITE_DEBUG - if( rc==SQLITE_OK && p->trace ){ - printf("enter mutex %p (%d) with nRef=%d\n", p, p->trace, p->nRef); + if( p->trace ){ + OSTRACE(("TRY-MUTEX tid=%lu, mutex=%p (%d), owner=%lu, nRef=%d, rc=%s\n", + tid, p, p->trace, p->owner, p->nRef, sqlite3ErrName(rc))); } #endif return rc; } @@ -16169,27 +21832,33 @@ ** previously entered by the same thread. The behavior ** is undefined if the mutex is not currently entered or ** is not currently allocated. SQLite will never do either. 
*/ static void winMutexLeave(sqlite3_mutex *p){ -#ifndef NDEBUG +#if defined(SQLITE_DEBUG) || defined(SQLITE_TEST) DWORD tid = GetCurrentThreadId(); #endif + assert( p ); +#ifdef SQLITE_DEBUG assert( p->nRef>0 ); assert( p->owner==tid ); p->nRef--; + if( p->nRef==0 ) p->owner = 0; assert( p->nRef==0 || p->id==SQLITE_MUTEX_RECURSIVE ); +#endif + assert( winMutex_isInit==1 ); LeaveCriticalSection(&p->mutex); #ifdef SQLITE_DEBUG if( p->trace ){ - printf("leave mutex %p (%d) with nRef=%d\n", p, p->trace, p->nRef); + OSTRACE(("LEAVE-MUTEX tid=%lu, mutex=%p (%d), nRef=%d\n", + tid, p, p->trace, p->nRef)); } #endif } -SQLITE_PRIVATE sqlite3_mutex_methods *sqlite3DefaultMutex(void){ - static sqlite3_mutex_methods sMutex = { +SQLITE_PRIVATE sqlite3_mutex_methods const *sqlite3DefaultMutex(void){ + static const sqlite3_mutex_methods sMutex = { winMutexInit, winMutexEnd, winMutexAlloc, winMutexFree, winMutexEnter, @@ -16201,13 +21870,13 @@ #else 0, 0 #endif }; - return &sMutex; } + #endif /* SQLITE_MUTEX_W32 */ /************** End of mutex_w32.c *******************************************/ /************** Begin file malloc.c ******************************************/ /* @@ -16222,138 +21891,169 @@ ** ************************************************************************* ** ** Memory allocation functions used throughout sqlite. */ - -/* -** This routine runs when the memory allocator sees that the -** total memory allocation is about to exceed the soft heap -** limit. -*/ -static void softHeapLimitEnforcer( - void *NotUsed, - sqlite3_int64 NotUsed2, - int allocSize -){ - UNUSED_PARAMETER2(NotUsed, NotUsed2); - sqlite3_release_memory(allocSize); -} - -/* -** Set the soft heap-size limit for the library. Passing a zero or -** negative value indicates no limit. -*/ -SQLITE_API void sqlite3_soft_heap_limit(int n){ - sqlite3_uint64 iLimit; - int overage; - if( n<0 ){ - iLimit = 0; - }else{ - iLimit = n; - } -#ifndef SQLITE_OMIT_AUTOINIT - sqlite3_initialize(); -#endif - if( iLimit>0 ){ - sqlite3MemoryAlarm(softHeapLimitEnforcer, 0, iLimit); - }else{ - sqlite3MemoryAlarm(0, 0, 0); - } - overage = (int)(sqlite3_memory_used() - (i64)n); - if( overage>0 ){ - sqlite3_release_memory(overage); - } -} +/* #include "sqliteInt.h" */ +/* #include <stdarg.h> */ /* ** Attempt to release up to n bytes of non-essential memory currently ** held by SQLite. An example of non-essential memory is memory used to ** cache database pages that are not currently in use. */ -SQLITE_API int sqlite3_release_memory(int n){ +SQLITE_API int SQLITE_STDCALL sqlite3_release_memory(int n){ #ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT - int nRet = 0; - nRet += sqlite3PcacheReleaseMemory(n-nRet); - return nRet; + return sqlite3PcacheReleaseMemory(n); #else + /* IMPLEMENTATION-OF: R-34391-24921 The sqlite3_release_memory() routine + ** is a no-op returning zero if SQLite is not compiled with + ** SQLITE_ENABLE_MEMORY_MANAGEMENT. */ UNUSED_PARAMETER(n); - return SQLITE_OK; + return 0; #endif } +/* +** An instance of the following object records the location of +** each unused scratch buffer. +*/ +typedef struct ScratchFreeslot { + struct ScratchFreeslot *pNext; /* Next unused scratch buffer */ +} ScratchFreeslot; + /* ** State information local to the memory allocation subsystem. */ static SQLITE_WSD struct Mem0Global { - /* Number of free pages for scratch and page-cache memory */ - u32 nScratchFree; - u32 nPageFree; - sqlite3_mutex *mutex; /* Mutex to serialize access */ - - /* - ** The alarm callback and its arguments. 
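An application-side sketch of the interface whose comment appears above. Unless SQLite is compiled with SQLITE_ENABLE_MEMORY_MANAGEMENT the call is a documented no-op that returns zero:

    #include "sqlite3.h"
    #include <stdio.h>

    void trim_page_cache(void){
      int freed = sqlite3_release_memory(1024*1024);  /* ask for up to 1 MiB */
      printf("released %d bytes\n", freed);
    }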
The mem0.mutex lock will - ** be held while the callback is running. Recursive calls into - ** the memory subsystem are allowed, but no new callbacks will be - ** issued. - */ - sqlite3_int64 alarmThreshold; - void (*alarmCallback)(void*, sqlite3_int64,int); - void *alarmArg; - - /* - ** Pointers to the end of sqlite3GlobalConfig.pScratch and - ** sqlite3GlobalConfig.pPage to a block of memory that records - ** which pages are available. - */ - u32 *aScratchFree; - u32 *aPageFree; -} mem0 = { 0, 0, 0, 0, 0, 0, 0, 0 }; + sqlite3_int64 alarmThreshold; /* The soft heap limit */ + + /* + ** Pointers to the end of sqlite3GlobalConfig.pScratch memory + ** (so that a range test can be used to determine if an allocation + ** being freed came from pScratch) and a pointer to the list of + ** unused scratch allocations. + */ + void *pScratchEnd; + ScratchFreeslot *pScratchFree; + u32 nScratchFree; + + /* + ** True if heap is nearly "full" where "full" is defined by the + ** sqlite3_soft_heap_limit() setting. + */ + int nearlyFull; +} mem0 = { 0, 0, 0, 0, 0, 0 }; #define mem0 GLOBAL(struct Mem0Global, mem0) + +/* +** Return the memory allocator mutex. sqlite3_status() needs it. +*/ +SQLITE_PRIVATE sqlite3_mutex *sqlite3MallocMutex(void){ + return mem0.mutex; +} + +#ifndef SQLITE_OMIT_DEPRECATED +/* +** Deprecated external interface. It used to set an alarm callback +** that was invoked when memory usage grew too large. Now it is a +** no-op. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_memory_alarm( + void(*xCallback)(void *pArg, sqlite3_int64 used,int N), + void *pArg, + sqlite3_int64 iThreshold +){ + (void)xCallback; + (void)pArg; + (void)iThreshold; + return SQLITE_OK; +} +#endif + +/* +** Set the soft heap-size limit for the library. Passing a zero or +** negative value indicates no limit. +*/ +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_soft_heap_limit64(sqlite3_int64 n){ + sqlite3_int64 priorLimit; + sqlite3_int64 excess; + sqlite3_int64 nUsed; +#ifndef SQLITE_OMIT_AUTOINIT + int rc = sqlite3_initialize(); + if( rc ) return -1; +#endif + sqlite3_mutex_enter(mem0.mutex); + priorLimit = mem0.alarmThreshold; + if( n<0 ){ + sqlite3_mutex_leave(mem0.mutex); + return priorLimit; + } + mem0.alarmThreshold = n; + nUsed = sqlite3StatusValue(SQLITE_STATUS_MEMORY_USED); + mem0.nearlyFull = (n>0 && n<=nUsed); + sqlite3_mutex_leave(mem0.mutex); + excess = sqlite3_memory_used() - n; + if( excess>0 ) sqlite3_release_memory((int)(excess & 0x7fffffff)); + return priorLimit; +} +SQLITE_API void SQLITE_STDCALL sqlite3_soft_heap_limit(int n){ + if( n<0 ) n = 0; + sqlite3_soft_heap_limit64(n); +} /* ** Initialize the memory allocation subsystem. 
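sqlite3_soft_heap_limit64() above replaces the old callback-based alarm: the limit is now enforced inside the allocator itself. A usage sketch relying only on the behavior shown above, where a negative argument leaves the limit unchanged and merely reports the prior value:

    #include "sqlite3.h"

    void set_soft_limit(void){
      sqlite3_int64 prior   = sqlite3_soft_heap_limit64(8*1024*1024); /* 8 MB  */
      sqlite3_int64 current = sqlite3_soft_heap_limit64(-1);          /* query */
      (void)prior;
      (void)current;
    }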
*/ SQLITE_PRIVATE int sqlite3MallocInit(void){ + int rc; if( sqlite3GlobalConfig.m.xMalloc==0 ){ sqlite3MemSetDefault(); } memset(&mem0, 0, sizeof(mem0)); - if( sqlite3GlobalConfig.bCoreMutex ){ - mem0.mutex = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MEM); - } + mem0.mutex = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MEM); if( sqlite3GlobalConfig.pScratch && sqlite3GlobalConfig.szScratch>=100 - && sqlite3GlobalConfig.nScratch>=0 ){ - int i; - sqlite3GlobalConfig.szScratch = ROUNDDOWN8(sqlite3GlobalConfig.szScratch-4); - mem0.aScratchFree = (u32*)&((char*)sqlite3GlobalConfig.pScratch) - [sqlite3GlobalConfig.szScratch*sqlite3GlobalConfig.nScratch]; - for(i=0; i<sqlite3GlobalConfig.nScratch; i++){ mem0.aScratchFree[i] = i; } - mem0.nScratchFree = sqlite3GlobalConfig.nScratch; + && sqlite3GlobalConfig.nScratch>0 ){ + int i, n, sz; + ScratchFreeslot *pSlot; + sz = ROUNDDOWN8(sqlite3GlobalConfig.szScratch); + sqlite3GlobalConfig.szScratch = sz; + pSlot = (ScratchFreeslot*)sqlite3GlobalConfig.pScratch; + n = sqlite3GlobalConfig.nScratch; + mem0.pScratchFree = pSlot; + mem0.nScratchFree = n; + for(i=0; i<n-1; i++){ + pSlot->pNext = (ScratchFreeslot*)(sz+(char*)pSlot); + pSlot = pSlot->pNext; + } + pSlot->pNext = 0; + mem0.pScratchEnd = (void*)&pSlot[1]; }else{ + mem0.pScratchEnd = 0; sqlite3GlobalConfig.pScratch = 0; sqlite3GlobalConfig.szScratch = 0; - } - if( sqlite3GlobalConfig.pPage && sqlite3GlobalConfig.szPage>=512 - && sqlite3GlobalConfig.nPage>=1 ){ - int i; - int overhead; - int sz = ROUNDDOWN8(sqlite3GlobalConfig.szPage); - int n = sqlite3GlobalConfig.nPage; - overhead = (4*n + sz - 1)/sz; - sqlite3GlobalConfig.nPage -= overhead; - mem0.aPageFree = (u32*)&((char*)sqlite3GlobalConfig.pPage) - [sqlite3GlobalConfig.szPage*sqlite3GlobalConfig.nPage]; - for(i=0; i<sqlite3GlobalConfig.nPage; i++){ mem0.aPageFree[i] = i; } - mem0.nPageFree = sqlite3GlobalConfig.nPage; - }else{ + sqlite3GlobalConfig.nScratch = 0; + } + if( sqlite3GlobalConfig.pPage==0 || sqlite3GlobalConfig.szPage<512 + || sqlite3GlobalConfig.nPage<=0 ){ sqlite3GlobalConfig.pPage = 0; sqlite3GlobalConfig.szPage = 0; } - return sqlite3GlobalConfig.m.xInit(sqlite3GlobalConfig.m.pAppData); + rc = sqlite3GlobalConfig.m.xInit(sqlite3GlobalConfig.m.pAppData); + if( rc!=SQLITE_OK ) memset(&mem0, 0, sizeof(mem0)); + return rc; +} + +/* +** Return true if the heap is currently under memory pressure - in other +** words if the amount of heap used is close to the limit set by +** sqlite3_soft_heap_limit(). +*/ +SQLITE_PRIVATE int sqlite3HeapNearlyFull(void){ + return mem0.nearlyFull; } /* ** Deinitialize the memory allocation subsystem. */ @@ -16365,78 +22065,35 @@ } /* ** Return the amount of memory currently checked out. */ -SQLITE_API sqlite3_int64 sqlite3_memory_used(void){ - int n, mx; - sqlite3_int64 res; - sqlite3_status(SQLITE_STATUS_MEMORY_USED, &n, &mx, 0); - res = (sqlite3_int64)n; /* Work around bug in Borland C. Ticket #3216 */ +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_memory_used(void){ + sqlite3_int64 res, mx; + sqlite3_status64(SQLITE_STATUS_MEMORY_USED, &res, &mx, 0); return res; } /* ** Return the maximum amount of memory that has ever been ** checked out since either the beginning of this process ** or since the most recent reset. */ -SQLITE_API sqlite3_int64 sqlite3_memory_highwater(int resetFlag){ - int n, mx; - sqlite3_int64 res; - sqlite3_status(SQLITE_STATUS_MEMORY_USED, &n, &mx, resetFlag); - res = (sqlite3_int64)mx; /* Work around bug in Borland C. 
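The scratch setup in sqlite3MallocInit() above threads a ScratchFreeslot pointer through the first bytes of every slot in the caller-supplied buffer. A standalone sketch of that free-list construction, with illustrative names and assuming each slot is at least sizeof(Slot) bytes:

    #include <stddef.h>

    typedef struct Slot { struct Slot *pNext; } Slot;

    /* Link n slots of sz bytes each inside buf and return the list head. */
    static Slot *link_slots(void *buf, int sz, int n){
      Slot *p = (Slot*)buf;
      Slot *head = p;
      int i;
      for(i=0; i<n-1; i++){
        p->pNext = (Slot*)((char*)p + sz);
        p = p->pNext;
      }
      p->pNext = 0;
      return head;
    }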
Ticket #3216 */ - return res; -} - -/* -** Change the alarm callback -*/ -SQLITE_PRIVATE int sqlite3MemoryAlarm( - void(*xCallback)(void *pArg, sqlite3_int64 used,int N), - void *pArg, - sqlite3_int64 iThreshold -){ - sqlite3_mutex_enter(mem0.mutex); - mem0.alarmCallback = xCallback; - mem0.alarmArg = pArg; - mem0.alarmThreshold = iThreshold; - sqlite3_mutex_leave(mem0.mutex); - return SQLITE_OK; -} - -#ifndef SQLITE_OMIT_DEPRECATED -/* -** Deprecated external interface. Internal/core SQLite code -** should call sqlite3MemoryAlarm. -*/ -SQLITE_API int sqlite3_memory_alarm( - void(*xCallback)(void *pArg, sqlite3_int64 used,int N), - void *pArg, - sqlite3_int64 iThreshold -){ - return sqlite3MemoryAlarm(xCallback, pArg, iThreshold); -} -#endif +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_memory_highwater(int resetFlag){ + sqlite3_int64 res, mx; + sqlite3_status64(SQLITE_STATUS_MEMORY_USED, &res, &mx, resetFlag); + return mx; +} /* ** Trigger the alarm */ static void sqlite3MallocAlarm(int nByte){ - void (*xCallback)(void*,sqlite3_int64,int); - sqlite3_int64 nowUsed; - void *pArg; - if( mem0.alarmCallback==0 ) return; - xCallback = mem0.alarmCallback; - nowUsed = sqlite3StatusValue(SQLITE_STATUS_MEMORY_USED); - pArg = mem0.alarmArg; - mem0.alarmCallback = 0; + if( mem0.alarmThreshold<=0 ) return; sqlite3_mutex_leave(mem0.mutex); - xCallback(pArg, nowUsed, nByte); + sqlite3_release_memory(nByte); sqlite3_mutex_enter(mem0.mutex); - mem0.alarmCallback = xCallback; - mem0.alarmArg = pArg; } /* ** Do a memory allocation with statistics and alarms. Assume the ** lock is already held. @@ -16444,59 +22101,72 @@ static int mallocWithAlarm(int n, void **pp){ int nFull; void *p; assert( sqlite3_mutex_held(mem0.mutex) ); nFull = sqlite3GlobalConfig.m.xRoundup(n); - sqlite3StatusSet(SQLITE_STATUS_MALLOC_SIZE, n); - if( mem0.alarmCallback!=0 ){ - int nUsed = sqlite3StatusValue(SQLITE_STATUS_MEMORY_USED); - if( nUsed+nFull >= mem0.alarmThreshold ){ + sqlite3StatusHighwater(SQLITE_STATUS_MALLOC_SIZE, n); + if( mem0.alarmThreshold>0 ){ + sqlite3_int64 nUsed = sqlite3StatusValue(SQLITE_STATUS_MEMORY_USED); + if( nUsed >= mem0.alarmThreshold - nFull ){ + mem0.nearlyFull = 1; sqlite3MallocAlarm(nFull); + }else{ + mem0.nearlyFull = 0; } } p = sqlite3GlobalConfig.m.xMalloc(nFull); - if( p==0 && mem0.alarmCallback ){ +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT + if( p==0 && mem0.alarmThreshold>0 ){ sqlite3MallocAlarm(nFull); p = sqlite3GlobalConfig.m.xMalloc(nFull); } +#endif if( p ){ nFull = sqlite3MallocSize(p); - sqlite3StatusAdd(SQLITE_STATUS_MEMORY_USED, nFull); + sqlite3StatusUp(SQLITE_STATUS_MEMORY_USED, nFull); + sqlite3StatusUp(SQLITE_STATUS_MALLOC_COUNT, 1); } *pp = p; return nFull; } /* ** Allocate memory. This routine is like sqlite3_malloc() except that it ** assumes the memory subsystem has already been initialized. */ -SQLITE_PRIVATE void *sqlite3Malloc(int n){ +SQLITE_PRIVATE void *sqlite3Malloc(u64 n){ void *p; - if( n<=0 || n>=0x7fffff00 ){ + if( n==0 || n>=0x7fffff00 ){ /* A memory allocation of a number of bytes which is near the maximum ** signed integer value might cause an integer overflow inside of the ** xMalloc(). Hence we limit the maximum size to 0x7fffff00, giving ** 255 bytes of overhead. SQLite itself will never use anything near ** this amount. 
The only way to reach the limit is with sqlite3_malloc() */ p = 0; }else if( sqlite3GlobalConfig.bMemstat ){ sqlite3_mutex_enter(mem0.mutex); - mallocWithAlarm(n, &p); + mallocWithAlarm((int)n, &p); sqlite3_mutex_leave(mem0.mutex); }else{ - p = sqlite3GlobalConfig.m.xMalloc(n); + p = sqlite3GlobalConfig.m.xMalloc((int)n); } + assert( EIGHT_BYTE_ALIGNMENT(p) ); /* IMP: R-11148-40995 */ return p; } /* ** This version of the memory allocation is for use by the application. ** First make sure the memory subsystem is initialized, then do the ** allocation. */ -SQLITE_API void *sqlite3_malloc(int n){ +SQLITE_API void *SQLITE_STDCALL sqlite3_malloc(int n){ +#ifndef SQLITE_OMIT_AUTOINIT + if( sqlite3_initialize() ) return 0; +#endif + return n<=0 ? 0 : sqlite3Malloc(n); +} +SQLITE_API void *SQLITE_STDCALL sqlite3_malloc64(sqlite3_uint64 n){ #ifndef SQLITE_OMIT_AUTOINIT if( sqlite3_initialize() ) return 0; #endif return sqlite3Malloc(n); } @@ -16522,105 +22192,92 @@ */ SQLITE_PRIVATE void *sqlite3ScratchMalloc(int n){ void *p; assert( n>0 ); -#if SQLITE_THREADSAFE==0 && !defined(NDEBUG) - /* Verify that no more than one scratch allocation per thread - ** is outstanding at one time. (This is only checked in the - ** single-threaded case since checking in the multi-threaded case - ** would be much more complicated.) */ - assert( scratchAllocOut==0 ); -#endif - - if( sqlite3GlobalConfig.szScratch<n ){ - goto scratch_overflow; - }else{ - sqlite3_mutex_enter(mem0.mutex); - if( mem0.nScratchFree==0 ){ - sqlite3_mutex_leave(mem0.mutex); - goto scratch_overflow; - }else{ - int i; - i = mem0.aScratchFree[--mem0.nScratchFree]; - i *= sqlite3GlobalConfig.szScratch; - sqlite3StatusAdd(SQLITE_STATUS_SCRATCH_USED, 1); - sqlite3StatusSet(SQLITE_STATUS_SCRATCH_SIZE, n); - sqlite3_mutex_leave(mem0.mutex); - p = (void*)&((char*)sqlite3GlobalConfig.pScratch)[i]; - assert( (((u8*)p - (u8*)0) & 7)==0 ); - } - } -#if SQLITE_THREADSAFE==0 && !defined(NDEBUG) - scratchAllocOut = p!=0; -#endif - - return p; - -scratch_overflow: - if( sqlite3GlobalConfig.bMemstat ){ - sqlite3_mutex_enter(mem0.mutex); - sqlite3StatusSet(SQLITE_STATUS_SCRATCH_SIZE, n); - n = mallocWithAlarm(n, &p); - if( p ) sqlite3StatusAdd(SQLITE_STATUS_SCRATCH_OVERFLOW, n); - sqlite3_mutex_leave(mem0.mutex); - }else{ - p = sqlite3GlobalConfig.m.xMalloc(n); - } - sqlite3MemdebugSetType(p, MEMTYPE_SCRATCH); -#if SQLITE_THREADSAFE==0 && !defined(NDEBUG) - scratchAllocOut = p!=0; -#endif - return p; + sqlite3_mutex_enter(mem0.mutex); + sqlite3StatusHighwater(SQLITE_STATUS_SCRATCH_SIZE, n); + if( mem0.nScratchFree && sqlite3GlobalConfig.szScratch>=n ){ + p = mem0.pScratchFree; + mem0.pScratchFree = mem0.pScratchFree->pNext; + mem0.nScratchFree--; + sqlite3StatusUp(SQLITE_STATUS_SCRATCH_USED, 1); + sqlite3_mutex_leave(mem0.mutex); + }else{ + sqlite3_mutex_leave(mem0.mutex); + p = sqlite3Malloc(n); + if( sqlite3GlobalConfig.bMemstat && p ){ + sqlite3_mutex_enter(mem0.mutex); + sqlite3StatusUp(SQLITE_STATUS_SCRATCH_OVERFLOW, sqlite3MallocSize(p)); + sqlite3_mutex_leave(mem0.mutex); + } + sqlite3MemdebugSetType(p, MEMTYPE_SCRATCH); + } + assert( sqlite3_mutex_notheld(mem0.mutex) ); + + +#if SQLITE_THREADSAFE==0 && !defined(NDEBUG) + /* EVIDENCE-OF: R-12970-05880 SQLite will not use more than one scratch + ** buffers per thread. + ** + ** This can only be checked in single-threaded mode. 
+ */ + assert( scratchAllocOut==0 ); + if( p ) scratchAllocOut++; +#endif + + return p; } SQLITE_PRIVATE void sqlite3ScratchFree(void *p){ if( p ){ #if SQLITE_THREADSAFE==0 && !defined(NDEBUG) - /* Verify that no more than one scratch allocation per thread + /* Verify that no more than two scratch allocation per thread ** is outstanding at one time. (This is only checked in the ** single-threaded case since checking in the multi-threaded case ** would be much more complicated.) */ - assert( scratchAllocOut==1 ); - scratchAllocOut = 0; + assert( scratchAllocOut>=1 && scratchAllocOut<=2 ); + scratchAllocOut--; #endif - if( sqlite3GlobalConfig.pScratch==0 - || p<sqlite3GlobalConfig.pScratch - || p>=(void*)mem0.aScratchFree ){ + if( SQLITE_WITHIN(p, sqlite3GlobalConfig.pScratch, mem0.pScratchEnd) ){ + /* Release memory from the SQLITE_CONFIG_SCRATCH allocation */ + ScratchFreeslot *pSlot; + pSlot = (ScratchFreeslot*)p; + sqlite3_mutex_enter(mem0.mutex); + pSlot->pNext = mem0.pScratchFree; + mem0.pScratchFree = pSlot; + mem0.nScratchFree++; + assert( mem0.nScratchFree <= (u32)sqlite3GlobalConfig.nScratch ); + sqlite3StatusDown(SQLITE_STATUS_SCRATCH_USED, 1); + sqlite3_mutex_leave(mem0.mutex); + }else{ + /* Release memory back to the heap */ assert( sqlite3MemdebugHasType(p, MEMTYPE_SCRATCH) ); + assert( sqlite3MemdebugNoType(p, (u8)~MEMTYPE_SCRATCH) ); sqlite3MemdebugSetType(p, MEMTYPE_HEAP); if( sqlite3GlobalConfig.bMemstat ){ int iSize = sqlite3MallocSize(p); sqlite3_mutex_enter(mem0.mutex); - sqlite3StatusAdd(SQLITE_STATUS_SCRATCH_OVERFLOW, -iSize); - sqlite3StatusAdd(SQLITE_STATUS_MEMORY_USED, -iSize); + sqlite3StatusDown(SQLITE_STATUS_SCRATCH_OVERFLOW, iSize); + sqlite3StatusDown(SQLITE_STATUS_MEMORY_USED, iSize); + sqlite3StatusDown(SQLITE_STATUS_MALLOC_COUNT, 1); sqlite3GlobalConfig.m.xFree(p); sqlite3_mutex_leave(mem0.mutex); }else{ sqlite3GlobalConfig.m.xFree(p); } - }else{ - int i; - i = (int)((u8*)p - (u8*)sqlite3GlobalConfig.pScratch); - i /= sqlite3GlobalConfig.szScratch; - assert( i>=0 && i<sqlite3GlobalConfig.nScratch ); - sqlite3_mutex_enter(mem0.mutex); - assert( mem0.nScratchFree<(u32)sqlite3GlobalConfig.nScratch ); - mem0.aScratchFree[mem0.nScratchFree++] = i; - sqlite3StatusAdd(SQLITE_STATUS_SCRATCH_USED, -1); - sqlite3_mutex_leave(mem0.mutex); } } } /* ** TRUE if p is a lookaside memory allocation from db */ #ifndef SQLITE_OMIT_LOOKASIDE static int isLookaside(sqlite3 *db, void *p){ - return db && p && p>=db->lookaside.pStart && p<db->lookaside.pEnd; + return SQLITE_WITHIN(p, db->lookaside.pStart, db->lookaside.pEnd); } #else #define isLookaside(A,B) 0 #endif @@ -16631,104 +22288,153 @@ SQLITE_PRIVATE int sqlite3MallocSize(void *p){ assert( sqlite3MemdebugHasType(p, MEMTYPE_HEAP) ); return sqlite3GlobalConfig.m.xSize(p); } SQLITE_PRIVATE int sqlite3DbMallocSize(sqlite3 *db, void *p){ - assert( db==0 || sqlite3_mutex_held(db->mutex) ); - if( isLookaside(db, p) ){ - return db->lookaside.sz; - }else{ - assert( sqlite3MemdebugHasType(p, - db ? 
(MEMTYPE_DB|MEMTYPE_HEAP) : MEMTYPE_HEAP) ); + assert( p!=0 ); + if( db==0 || !isLookaside(db,p) ){ +#if SQLITE_DEBUG + if( db==0 ){ + assert( sqlite3MemdebugNoType(p, (u8)~MEMTYPE_HEAP) ); + assert( sqlite3MemdebugHasType(p, MEMTYPE_HEAP) ); + }else{ + assert( sqlite3MemdebugHasType(p, (MEMTYPE_LOOKASIDE|MEMTYPE_HEAP)) ); + assert( sqlite3MemdebugNoType(p, (u8)~(MEMTYPE_LOOKASIDE|MEMTYPE_HEAP)) ); + } +#endif return sqlite3GlobalConfig.m.xSize(p); + }else{ + assert( sqlite3_mutex_held(db->mutex) ); + return db->lookaside.sz; } +} +SQLITE_API sqlite3_uint64 SQLITE_STDCALL sqlite3_msize(void *p){ + assert( sqlite3MemdebugNoType(p, (u8)~MEMTYPE_HEAP) ); + assert( sqlite3MemdebugHasType(p, MEMTYPE_HEAP) ); + return p ? sqlite3GlobalConfig.m.xSize(p) : 0; } /* ** Free memory previously obtained from sqlite3Malloc(). */ -SQLITE_API void sqlite3_free(void *p){ - if( p==0 ) return; +SQLITE_API void SQLITE_STDCALL sqlite3_free(void *p){ + if( p==0 ) return; /* IMP: R-49053-54554 */ assert( sqlite3MemdebugHasType(p, MEMTYPE_HEAP) ); + assert( sqlite3MemdebugNoType(p, (u8)~MEMTYPE_HEAP) ); if( sqlite3GlobalConfig.bMemstat ){ sqlite3_mutex_enter(mem0.mutex); - sqlite3StatusAdd(SQLITE_STATUS_MEMORY_USED, -sqlite3MallocSize(p)); + sqlite3StatusDown(SQLITE_STATUS_MEMORY_USED, sqlite3MallocSize(p)); + sqlite3StatusDown(SQLITE_STATUS_MALLOC_COUNT, 1); sqlite3GlobalConfig.m.xFree(p); sqlite3_mutex_leave(mem0.mutex); }else{ sqlite3GlobalConfig.m.xFree(p); } } + +/* +** Add the size of memory allocation "p" to the count in +** *db->pnBytesFreed. +*/ +static SQLITE_NOINLINE void measureAllocationSize(sqlite3 *db, void *p){ + *db->pnBytesFreed += sqlite3DbMallocSize(db,p); +} /* ** Free memory that might be associated with a particular database ** connection. */ SQLITE_PRIVATE void sqlite3DbFree(sqlite3 *db, void *p){ assert( db==0 || sqlite3_mutex_held(db->mutex) ); - if( isLookaside(db, p) ){ - LookasideSlot *pBuf = (LookasideSlot*)p; - pBuf->pNext = db->lookaside.pFree; - db->lookaside.pFree = pBuf; - db->lookaside.nOut--; - }else{ - assert( sqlite3MemdebugHasType(p, MEMTYPE_DB|MEMTYPE_HEAP) ); - sqlite3MemdebugSetType(p, MEMTYPE_HEAP); - sqlite3_free(p); - } + if( p==0 ) return; + if( db ){ + if( db->pnBytesFreed ){ + measureAllocationSize(db, p); + return; + } + if( isLookaside(db, p) ){ + LookasideSlot *pBuf = (LookasideSlot*)p; +#if SQLITE_DEBUG + /* Trash all content in the buffer being freed */ + memset(p, 0xaa, db->lookaside.sz); +#endif + pBuf->pNext = db->lookaside.pFree; + db->lookaside.pFree = pBuf; + db->lookaside.nOut--; + return; + } + } + assert( sqlite3MemdebugHasType(p, (MEMTYPE_LOOKASIDE|MEMTYPE_HEAP)) ); + assert( sqlite3MemdebugNoType(p, (u8)~(MEMTYPE_LOOKASIDE|MEMTYPE_HEAP)) ); + assert( db!=0 || sqlite3MemdebugNoType(p, MEMTYPE_LOOKASIDE) ); + sqlite3MemdebugSetType(p, MEMTYPE_HEAP); + sqlite3_free(p); } /* ** Change the size of an existing memory allocation */ -SQLITE_PRIVATE void *sqlite3Realloc(void *pOld, int nBytes){ - int nOld, nNew; +SQLITE_PRIVATE void *sqlite3Realloc(void *pOld, u64 nBytes){ + int nOld, nNew, nDiff; void *pNew; + assert( sqlite3MemdebugHasType(pOld, MEMTYPE_HEAP) ); + assert( sqlite3MemdebugNoType(pOld, (u8)~MEMTYPE_HEAP) ); if( pOld==0 ){ - return sqlite3Malloc(nBytes); + return sqlite3Malloc(nBytes); /* IMP: R-04300-56712 */ } - if( nBytes<=0 ){ - sqlite3_free(pOld); + if( nBytes==0 ){ + sqlite3_free(pOld); /* IMP: R-26507-47431 */ return 0; } if( nBytes>=0x7fffff00 ){ /* The 0x7ffff00 limit term is explained in comments on sqlite3Malloc() */ return 0; } 
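+  /* Size the old and new allocations.  When memory statistics are enabled
+  ** (sqlite3GlobalConfig.bMemstat), the SQLITE_STATUS_MEMORY_USED counter is
+  ** adjusted by the difference below, and sqlite3MallocAlarm() runs first if
+  ** the growth would reach mem0.alarmThreshold (the soft heap limit). */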
nOld = sqlite3MallocSize(pOld); - nNew = sqlite3GlobalConfig.m.xRoundup(nBytes); + /* IMPLEMENTATION-OF: R-46199-30249 SQLite guarantees that the second + ** argument to xRealloc is always a value returned by a prior call to + ** xRoundup. */ + nNew = sqlite3GlobalConfig.m.xRoundup((int)nBytes); if( nOld==nNew ){ pNew = pOld; }else if( sqlite3GlobalConfig.bMemstat ){ sqlite3_mutex_enter(mem0.mutex); - sqlite3StatusSet(SQLITE_STATUS_MALLOC_SIZE, nBytes); - if( sqlite3StatusValue(SQLITE_STATUS_MEMORY_USED)+nNew-nOld >= - mem0.alarmThreshold ){ - sqlite3MallocAlarm(nNew-nOld); + sqlite3StatusHighwater(SQLITE_STATUS_MALLOC_SIZE, (int)nBytes); + nDiff = nNew - nOld; + if( sqlite3StatusValue(SQLITE_STATUS_MEMORY_USED) >= + mem0.alarmThreshold-nDiff ){ + sqlite3MallocAlarm(nDiff); } - assert( sqlite3MemdebugHasType(pOld, MEMTYPE_HEAP) ); pNew = sqlite3GlobalConfig.m.xRealloc(pOld, nNew); - if( pNew==0 && mem0.alarmCallback ){ - sqlite3MallocAlarm(nBytes); + if( pNew==0 && mem0.alarmThreshold>0 ){ + sqlite3MallocAlarm((int)nBytes); pNew = sqlite3GlobalConfig.m.xRealloc(pOld, nNew); } if( pNew ){ nNew = sqlite3MallocSize(pNew); - sqlite3StatusAdd(SQLITE_STATUS_MEMORY_USED, nNew-nOld); + sqlite3StatusUp(SQLITE_STATUS_MEMORY_USED, nNew-nOld); } sqlite3_mutex_leave(mem0.mutex); }else{ pNew = sqlite3GlobalConfig.m.xRealloc(pOld, nNew); } + assert( EIGHT_BYTE_ALIGNMENT(pNew) ); /* IMP: R-11148-40995 */ return pNew; } /* ** The public interface to sqlite3Realloc. Make sure that the memory ** subsystem is initialized prior to invoking sqliteRealloc. */ -SQLITE_API void *sqlite3_realloc(void *pOld, int n){ +SQLITE_API void *SQLITE_STDCALL sqlite3_realloc(void *pOld, int n){ +#ifndef SQLITE_OMIT_AUTOINIT + if( sqlite3_initialize() ) return 0; +#endif + if( n<0 ) n = 0; /* IMP: R-26507-47431 */ + return sqlite3Realloc(pOld, n); +} +SQLITE_API void *SQLITE_STDCALL sqlite3_realloc64(void *pOld, sqlite3_uint64 n){ #ifndef SQLITE_OMIT_AUTOINIT if( sqlite3_initialize() ) return 0; #endif return sqlite3Realloc(pOld, n); } @@ -16735,33 +22441,48 @@ /* ** Allocate and zero memory. */ -SQLITE_PRIVATE void *sqlite3MallocZero(int n){ +SQLITE_PRIVATE void *sqlite3MallocZero(u64 n){ void *p = sqlite3Malloc(n); if( p ){ - memset(p, 0, n); + memset(p, 0, (size_t)n); } return p; } /* ** Allocate and zero memory. If the allocation fails, make ** the mallocFailed flag in the connection pointer. */ -SQLITE_PRIVATE void *sqlite3DbMallocZero(sqlite3 *db, int n){ - void *p = sqlite3DbMallocRaw(db, n); - if( p ){ - memset(p, 0, n); - } +SQLITE_PRIVATE void *sqlite3DbMallocZero(sqlite3 *db, u64 n){ + void *p; + testcase( db==0 ); + p = sqlite3DbMallocRaw(db, n); + if( p ) memset(p, 0, (size_t)n); + return p; +} + + +/* Finish the work of sqlite3DbMallocRawNN for the unusual and +** slower case when the allocation cannot be fulfilled using lookaside. +*/ +static SQLITE_NOINLINE void *dbMallocRawFinish(sqlite3 *db, u64 n){ + void *p; + assert( db!=0 ); + p = sqlite3Malloc(n); + if( !p ) sqlite3OomFault(db); + sqlite3MemdebugSetType(p, + (db->lookaside.bDisable==0) ? MEMTYPE_LOOKASIDE : MEMTYPE_HEAP); return p; } /* -** Allocate and zero memory. If the allocation fails, make -** the mallocFailed flag in the connection pointer. +** Allocate memory, either lookaside (if possible) or heap. +** If the allocation fails, set the mallocFailed flag in +** the connection pointer. ** ** If db!=0 and db->mallocFailed is true (indicating a prior malloc ** failure on the same database connection) then always return 0. 
** Hence for a particular database connection, once malloc starts ** failing, it fails consistently until mallocFailed is reset. @@ -16772,84 +22493,101 @@ ** int *b = (int*)sqlite3DbMallocRaw(db, 200); ** if( b ) a[10] = 9; ** ** In other words, if a subsequent malloc (ex: "b") worked, it is assumed ** that all prior mallocs (ex: "a") worked too. +** +** The sqlite3MallocRawNN() variant guarantees that the "db" parameter is +** not a NULL pointer. */ -SQLITE_PRIVATE void *sqlite3DbMallocRaw(sqlite3 *db, int n){ +SQLITE_PRIVATE void *sqlite3DbMallocRaw(sqlite3 *db, u64 n){ void *p; - assert( db==0 || sqlite3_mutex_held(db->mutex) ); + if( db ) return sqlite3DbMallocRawNN(db, n); + p = sqlite3Malloc(n); + sqlite3MemdebugSetType(p, MEMTYPE_HEAP); + return p; +} +SQLITE_PRIVATE void *sqlite3DbMallocRawNN(sqlite3 *db, u64 n){ #ifndef SQLITE_OMIT_LOOKASIDE - if( db ){ - LookasideSlot *pBuf; - if( db->mallocFailed ){ - return 0; - } - if( db->lookaside.bEnabled && n<=db->lookaside.sz - && (pBuf = db->lookaside.pFree)!=0 ){ + LookasideSlot *pBuf; + assert( db!=0 ); + assert( sqlite3_mutex_held(db->mutex) ); + assert( db->pnBytesFreed==0 ); + if( db->lookaside.bDisable==0 ){ + assert( db->mallocFailed==0 ); + if( n>db->lookaside.sz ){ + db->lookaside.anStat[1]++; + }else if( (pBuf = db->lookaside.pFree)==0 ){ + db->lookaside.anStat[2]++; + }else{ db->lookaside.pFree = pBuf->pNext; db->lookaside.nOut++; + db->lookaside.anStat[0]++; if( db->lookaside.nOut>db->lookaside.mxOut ){ db->lookaside.mxOut = db->lookaside.nOut; } return (void*)pBuf; } + }else if( db->mallocFailed ){ + return 0; } #else - if( db && db->mallocFailed ){ + assert( db!=0 ); + assert( sqlite3_mutex_held(db->mutex) ); + assert( db->pnBytesFreed==0 ); + if( db->mallocFailed ){ return 0; } #endif - p = sqlite3Malloc(n); - if( !p && db ){ - db->mallocFailed = 1; - } - sqlite3MemdebugSetType(p, - (db && db->lookaside.bEnabled) ? MEMTYPE_DB : MEMTYPE_HEAP); - return p; -} + return dbMallocRawFinish(db, n); +} + +/* Forward declaration */ +static SQLITE_NOINLINE void *dbReallocFinish(sqlite3 *db, void *p, u64 n); /* ** Resize the block of memory pointed to by p to n bytes. If the ** resize fails, set the mallocFailed flag in the connection object. */ -SQLITE_PRIVATE void *sqlite3DbRealloc(sqlite3 *db, void *p, int n){ +SQLITE_PRIVATE void *sqlite3DbRealloc(sqlite3 *db, void *p, u64 n){ + assert( db!=0 ); + if( p==0 ) return sqlite3DbMallocRawNN(db, n); + assert( sqlite3_mutex_held(db->mutex) ); + if( isLookaside(db,p) && n<=db->lookaside.sz ) return p; + return dbReallocFinish(db, p, n); +} +static SQLITE_NOINLINE void *dbReallocFinish(sqlite3 *db, void *p, u64 n){ void *pNew = 0; assert( db!=0 ); - assert( sqlite3_mutex_held(db->mutex) ); + assert( p!=0 ); if( db->mallocFailed==0 ){ - if( p==0 ){ - return sqlite3DbMallocRaw(db, n); - } if( isLookaside(db, p) ){ - if( n<=db->lookaside.sz ){ - return p; - } - pNew = sqlite3DbMallocRaw(db, n); + pNew = sqlite3DbMallocRawNN(db, n); if( pNew ){ memcpy(pNew, p, db->lookaside.sz); sqlite3DbFree(db, p); } }else{ - assert( sqlite3MemdebugHasType(p, MEMTYPE_DB|MEMTYPE_HEAP) ); + assert( sqlite3MemdebugHasType(p, (MEMTYPE_LOOKASIDE|MEMTYPE_HEAP)) ); + assert( sqlite3MemdebugNoType(p, (u8)~(MEMTYPE_LOOKASIDE|MEMTYPE_HEAP)) ); sqlite3MemdebugSetType(p, MEMTYPE_HEAP); - pNew = sqlite3_realloc(p, n); + pNew = sqlite3_realloc64(p, n); if( !pNew ){ - db->mallocFailed = 1; + sqlite3OomFault(db); } sqlite3MemdebugSetType(pNew, - db->lookaside.bEnabled ? 
MEMTYPE_DB : MEMTYPE_HEAP); + (db->lookaside.bDisable==0 ? MEMTYPE_LOOKASIDE : MEMTYPE_HEAP)); } } return pNew; } /* ** Attempt to reallocate p. If the reallocation fails, then free p ** and set the mallocFailed flag in the database connection. */ -SQLITE_PRIVATE void *sqlite3DbReallocOrFree(sqlite3 *db, void *p, int n){ +SQLITE_PRIVATE void *sqlite3DbReallocOrFree(sqlite3 *db, void *p, u64 n){ void *pNew; pNew = sqlite3DbRealloc(db, p, n); if( !pNew ){ sqlite3DbFree(db, p); } @@ -16875,40 +22613,73 @@ if( zNew ){ memcpy(zNew, z, n); } return zNew; } -SQLITE_PRIVATE char *sqlite3DbStrNDup(sqlite3 *db, const char *z, int n){ +SQLITE_PRIVATE char *sqlite3DbStrNDup(sqlite3 *db, const char *z, u64 n){ char *zNew; + assert( db!=0 ); if( z==0 ){ return 0; } assert( (n&0x7fffffff)==n ); - zNew = sqlite3DbMallocRaw(db, n+1); + zNew = sqlite3DbMallocRawNN(db, n+1); if( zNew ){ - memcpy(zNew, z, n); + memcpy(zNew, z, (size_t)n); zNew[n] = 0; } return zNew; } /* -** Create a string from the zFromat argument and the va_list that follows. -** Store the string in memory obtained from sqliteMalloc() and make *pz -** point to that string. +** Free any prior content in *pz and replace it with a copy of zNew. */ -SQLITE_PRIVATE void sqlite3SetString(char **pz, sqlite3 *db, const char *zFormat, ...){ - va_list ap; - char *z; - - va_start(ap, zFormat); - z = sqlite3VMPrintf(db, zFormat, ap); - va_end(ap); +SQLITE_PRIVATE void sqlite3SetString(char **pz, sqlite3 *db, const char *zNew){ sqlite3DbFree(db, *pz); - *pz = z; + *pz = sqlite3DbStrDup(db, zNew); } +/* +** Call this routine to record the fact that an OOM (out-of-memory) error +** has happened. This routine will set db->mallocFailed, and also +** temporarily disable the lookaside memory allocator and interrupt +** any running VDBEs. +*/ +SQLITE_PRIVATE void sqlite3OomFault(sqlite3 *db){ + if( db->mallocFailed==0 && db->bBenignMalloc==0 ){ + db->mallocFailed = 1; + if( db->nVdbeExec>0 ){ + db->u1.isInterrupted = 1; + } + db->lookaside.bDisable++; + } +} + +/* +** This routine reactivates the memory allocator and clears the +** db->mallocFailed flag as necessary. +** +** The memory allocator is not restarted if there are running +** VDBEs. +*/ +SQLITE_PRIVATE void sqlite3OomClear(sqlite3 *db){ + if( db->mallocFailed && db->nVdbeExec==0 ){ + db->mallocFailed = 0; + db->u1.isInterrupted = 0; + assert( db->lookaside.bDisable>0 ); + db->lookaside.bDisable--; + } +} + +/* +** Take actions at the end of an API call to indicate an OOM error +*/ +static SQLITE_NOINLINE int apiOomError(sqlite3 *db){ + sqlite3OomClear(db); + sqlite3Error(db, SQLITE_NOMEM); + return SQLITE_NOMEM; +} /* ** This function must be called before exiting any API function (i.e. ** returning control to the user) that has called sqlite3_malloc or ** sqlite3_realloc. @@ -16915,82 +22686,40 @@ ** ** The returned value is normally a copy of the second argument to this ** function. However, if a malloc() failure has occurred since the previous ** invocation SQLITE_NOMEM is returned instead. ** -** If the first argument, db, is not NULL and a malloc() error has occurred, -** then the connection error-code (the value returned by sqlite3_errcode()) -** is set to SQLITE_NOMEM. +** If an OOM as occurred, then the connection error-code (the value +** returned by sqlite3_errcode()) is set to SQLITE_NOMEM. */ SQLITE_PRIVATE int sqlite3ApiExit(sqlite3* db, int rc){ - /* If the db handle is not NULL, then we must hold the connection handle - ** mutex here. 
Otherwise the read (and possible write) of db->mallocFailed + /* If the db handle must hold the connection handle mutex here. + ** Otherwise the read (and possible write) of db->mallocFailed ** is unsafe, as is the call to sqlite3Error(). */ - assert( !db || sqlite3_mutex_held(db->mutex) ); - if( db && (db->mallocFailed || rc==SQLITE_IOERR_NOMEM) ){ - sqlite3Error(db, SQLITE_NOMEM, 0); - db->mallocFailed = 0; - rc = SQLITE_NOMEM; + assert( db!=0 ); + assert( sqlite3_mutex_held(db->mutex) ); + if( db->mallocFailed || rc==SQLITE_IOERR_NOMEM ){ + return apiOomError(db); } - return rc & (db ? db->errMask : 0xff); + return rc & db->errMask; } /************** End of malloc.c **********************************************/ /************** Begin file printf.c ******************************************/ /* ** The "printf" code that follows dates from the 1980's. It is in -** the public domain. The original comments are included here for -** completeness. They are very out-of-date but might be useful as -** an historical reference. Most of the "enhancements" have been backed -** out so that the functionality is now the same as standard printf(). +** the public domain. ** ************************************************************************** ** -** The following modules is an enhanced replacement for the "printf" subroutines -** found in the standard C library. The following enhancements are -** supported: -** -** + Additional functions. The standard set of "printf" functions -** includes printf, fprintf, sprintf, vprintf, vfprintf, and -** vsprintf. This module adds the following: -** -** * snprintf -- Works like sprintf, but has an extra argument -** which is the size of the buffer written to. -** -** * mprintf -- Similar to sprintf. Writes output to memory -** obtained from malloc. -** -** * xprintf -- Calls a function to dispose of output. -** -** * nprintf -- No output, but returns the number of characters -** that would have been output by printf. -** -** * A v- version (ex: vsnprintf) of every function is also -** supplied. -** -** + A few extensions to the formatting notation are supported: -** -** * The "=" flag (similar to "-") causes the output to be -** be centered in the appropriately sized field. -** -** * The %b field outputs an integer in binary notation. -** -** * The %c field now accepts a precision. The character output -** is repeated by the number of times the precision specifies. -** -** * The %' field works like %c, but takes as its character the -** next character of the format string, instead of the next -** argument. For example, printf("%.78'-") prints 78 minus -** signs, the same as printf("%.78c",'-'). -** -** + When compiled using GCC on a SPARC, this version of printf is -** faster than the library printf for SUN OS 4.1. -** -** + All functions are fully reentrant. -** +** This file contains code for a set of "printf"-like routines. These +** routines format strings much like the printf() from the standard C +** library, though the implementation here has enhancements to support +** SQLite. */ +/* #include "sqliteInt.h" */ /* ** Conversion types fall into various categories as defined by the ** following enumeration. */ @@ -17098,78 +22827,62 @@ ** always returned. 
*/ static char et_getdigit(LONGDOUBLE_TYPE *val, int *cnt){ int digit; LONGDOUBLE_TYPE d; - if( (*cnt)++ >= 16 ) return '0'; + if( (*cnt)<=0 ) return '0'; + (*cnt)--; digit = (int)*val; d = digit; digit += '0'; *val = (*val - d)*10.0; return (char)digit; } #endif /* SQLITE_OMIT_FLOATING_POINT */ /* -** Append N space characters to the given string buffer. +** Set the StrAccum object to an error mode. */ -static void appendSpace(StrAccum *pAccum, int N){ - static const char zSpaces[] = " "; - while( N>=(int)sizeof(zSpaces)-1 ){ - sqlite3StrAccumAppend(pAccum, zSpaces, sizeof(zSpaces)-1); - N -= sizeof(zSpaces)-1; - } - if( N>0 ){ - sqlite3StrAccumAppend(pAccum, zSpaces, N); - } -} +static void setStrAccumError(StrAccum *p, u8 eError){ + assert( eError==STRACCUM_NOMEM || eError==STRACCUM_TOOBIG ); + p->accError = eError; + p->nAlloc = 0; +} + +/* +** Extra argument values from a PrintfArguments object +*/ +static sqlite3_int64 getIntArg(PrintfArguments *p){ + if( p->nArg<=p->nUsed ) return 0; + return sqlite3_value_int64(p->apArg[p->nUsed++]); +} +static double getDoubleArg(PrintfArguments *p){ + if( p->nArg<=p->nUsed ) return 0.0; + return sqlite3_value_double(p->apArg[p->nUsed++]); +} +static char *getTextArg(PrintfArguments *p){ + if( p->nArg<=p->nUsed ) return 0; + return (char*)sqlite3_value_text(p->apArg[p->nUsed++]); +} + /* ** On machines with a small stack size, you can redefine the -** SQLITE_PRINT_BUF_SIZE to be less than 350. +** SQLITE_PRINT_BUF_SIZE to be something smaller, if desired. */ #ifndef SQLITE_PRINT_BUF_SIZE -# if defined(SQLITE_SMALL_STACK) -# define SQLITE_PRINT_BUF_SIZE 50 -# else -# define SQLITE_PRINT_BUF_SIZE 350 -# endif +# define SQLITE_PRINT_BUF_SIZE 70 #endif #define etBUFSIZE SQLITE_PRINT_BUF_SIZE /* Size of the output buffer */ /* -** The root program. All variations call this core. -** -** INPUTS: -** func This is a pointer to a function taking three arguments -** 1. A pointer to anything. Same as the "arg" parameter. -** 2. A pointer to the list of characters to be output -** (Note, this list is NOT null terminated.) -** 3. An integer number of characters to be output. -** (Note: This number might be zero.) -** -** arg This is the pointer to anything which will be passed as the -** first argument to "func". Use it for whatever you like. -** -** fmt This is the format string, as in the usual print. -** -** ap This is a pointer to a list of arguments. Same as in -** vfprint. -** -** OUTPUTS: -** The return value is the total number of characters sent to -** the function "func". Returns -1 on a error. -** -** Note that the order in which automatic variables are declared below -** seems to make a big difference in determining how fast this beast -** will run. +** Render a string given by "fmt" into the StrAccum object. */ SQLITE_PRIVATE void sqlite3VXPrintf( - StrAccum *pAccum, /* Accumulate results here */ - int useExtended, /* Allow extended %-conversions */ - const char *fmt, /* Format string */ - va_list ap /* arguments */ + StrAccum *pAccum, /* Accumulate results here */ + const char *fmt, /* Format string */ + va_list ap /* arguments */ ){ int c; /* Next character in the format string */ char *bufpt; /* Pointer to the conversion buffer */ int precision; /* Precision of the current field */ int length; /* Length of the field */ @@ -17182,36 +22895,49 @@ etByte flag_altform2; /* True if "!" 
flag is present */ etByte flag_zeropad; /* True if field width constant starts with zero */ etByte flag_long; /* True if "l" flag is present */ etByte flag_longlong; /* True if the "ll" flag is present */ etByte done; /* Loop termination flag */ + etByte xtype = 0; /* Conversion paradigm */ + u8 bArgList; /* True for SQLITE_PRINTF_SQLFUNC */ + u8 useIntern; /* Ok to use internal conversions (ex: %T) */ + char prefix; /* Prefix character. "+" or "-" or " " or '\0'. */ sqlite_uint64 longvalue; /* Value for integer types */ LONGDOUBLE_TYPE realvalue; /* Value for real types */ const et_info *infop; /* Pointer to the appropriate info structure */ - char buf[etBUFSIZE]; /* Conversion buffer */ - char prefix; /* Prefix character. "+" or "-" or " " or '\0'. */ - etByte xtype = 0; /* Conversion paradigm */ - char *zExtra; /* Extra memory used for etTCLESCAPE conversions */ + char *zOut; /* Rendering buffer */ + int nOut; /* Size of the rendering buffer */ + char *zExtra = 0; /* Malloced memory used by some conversion */ #ifndef SQLITE_OMIT_FLOATING_POINT int exp, e2; /* exponent of real numbers */ + int nsd; /* Number of significant digits returned */ double rounder; /* Used for rounding floating point values */ etByte flag_dp; /* True if decimal point should be shown */ etByte flag_rtz; /* True if trailing zeros should be removed */ - etByte flag_exp; /* True to force display of the exponent */ - int nsd; /* Number of significant digits returned */ #endif + PrintfArguments *pArgList = 0; /* Arguments for SQLITE_PRINTF_SQLFUNC */ + char buf[etBUFSIZE]; /* Conversion buffer */ - length = 0; bufpt = 0; + if( pAccum->printfFlags ){ + if( (bArgList = (pAccum->printfFlags & SQLITE_PRINTF_SQLFUNC))!=0 ){ + pArgList = va_arg(ap, PrintfArguments*); + } + useIntern = pAccum->printfFlags & SQLITE_PRINTF_INTERNAL; + }else{ + bArgList = useIntern = 0; + } for(; (c=(*fmt))!=0; ++fmt){ if( c!='%' ){ - int amt; bufpt = (char *)fmt; - amt = 1; - while( (c=(*++fmt))!='%' && c!=0 ) amt++; - sqlite3StrAccumAppend(pAccum, bufpt, amt); - if( c==0 ) break; +#if HAVE_STRCHRNUL + fmt = strchrnul(fmt, '%'); +#else + do{ fmt++; }while( *fmt && *fmt != '%' ); +#endif + sqlite3StrAccumAppend(pAccum, bufpt, (int)(fmt - bufpt)); + if( *fmt==0 ) break; } if( (c=(*++fmt))==0 ){ sqlite3StrAccumAppend(pAccum, "%", 1); break; } @@ -17229,44 +22955,70 @@ case '0': flag_zeropad = 1; break; default: done = 1; break; } }while( !done && (c=(*++fmt))!=0 ); /* Get the field width */ - width = 0; if( c=='*' ){ - width = va_arg(ap,int); + if( bArgList ){ + width = (int)getIntArg(pArgList); + }else{ + width = va_arg(ap,int); + } if( width<0 ){ flag_leftjustify = 1; - width = -width; + width = width >= -2147483647 ? -width : 0; } c = *++fmt; }else{ + unsigned wx = 0; while( c>='0' && c<='9' ){ - width = width*10 + c - '0'; + wx = wx*10 + c - '0'; c = *++fmt; } + testcase( wx>0x7fffffff ); + width = wx & 0x7fffffff; } - if( width > etBUFSIZE-10 ){ - width = etBUFSIZE-10; + assert( width>=0 ); +#ifdef SQLITE_PRINTF_PRECISION_LIMIT + if( width>SQLITE_PRINTF_PRECISION_LIMIT ){ + width = SQLITE_PRINTF_PRECISION_LIMIT; } +#endif + /* Get the precision */ if( c=='.' ){ - precision = 0; c = *++fmt; if( c=='*' ){ - precision = va_arg(ap,int); - if( precision<0 ) precision = -precision; + if( bArgList ){ + precision = (int)getIntArg(pArgList); + }else{ + precision = va_arg(ap,int); + } c = *++fmt; + if( precision<0 ){ + precision = precision >= -2147483647 ? 
-precision : -1; + } }else{ + unsigned px = 0; while( c>='0' && c<='9' ){ - precision = precision*10 + c - '0'; + px = px*10 + c - '0'; c = *++fmt; } + testcase( px>0x7fffffff ); + precision = px & 0x7fffffff; } }else{ precision = -1; } + assert( precision>=(-1) ); +#ifdef SQLITE_PRINTF_PRECISION_LIMIT + if( precision>SQLITE_PRINTF_PRECISION_LIMIT ){ + precision = SQLITE_PRINTF_PRECISION_LIMIT; + } +#endif + + /* Get the conversion type modifier */ if( c=='l' ){ flag_long = 1; c = *++fmt; if( c=='l' ){ @@ -17282,25 +23034,18 @@ infop = &fmtinfo[0]; xtype = etINVALID; for(idx=0; idx<ArraySize(fmtinfo); idx++){ if( c==fmtinfo[idx].fmttype ){ infop = &fmtinfo[idx]; - if( useExtended || (infop->flags & FLAG_INTERN)==0 ){ + if( useIntern || (infop->flags & FLAG_INTERN)==0 ){ xtype = infop->type; }else{ return; } break; } } - zExtra = 0; - - - /* Limit the precision to prevent overflowing buf[] during conversion */ - if( precision>etBUFSIZE-40 && (infop->flags & FLAG_STRING)==0 ){ - precision = etBUFSIZE-40; - } /* ** At this point, variables are initialized as follows: ** ** flag_alternateform TRUE if a '#' is present. @@ -17328,28 +23073,36 @@ /* Fall through into the next case */ case etORDINAL: case etRADIX: if( infop->flags & FLAG_SIGNED ){ i64 v; - if( flag_longlong ){ + if( bArgList ){ + v = getIntArg(pArgList); + }else if( flag_longlong ){ v = va_arg(ap,i64); }else if( flag_long ){ v = va_arg(ap,long int); }else{ v = va_arg(ap,int); } if( v<0 ){ - longvalue = -v; + if( v==SMALLEST_INT64 ){ + longvalue = ((u64)1)<<63; + }else{ + longvalue = -v; + } prefix = '-'; }else{ longvalue = v; if( flag_plussign ) prefix = '+'; else if( flag_blanksign ) prefix = ' '; else prefix = 0; } }else{ - if( flag_longlong ){ + if( bArgList ){ + longvalue = (u64)getIntArg(pArgList); + }else if( flag_longlong ){ longvalue = va_arg(ap,u64); }else if( flag_long ){ longvalue = va_arg(ap,unsigned long int); }else{ longvalue = va_arg(ap,unsigned int); @@ -17358,32 +23111,40 @@ } if( longvalue==0 ) flag_alternateform = 0; if( flag_zeropad && precision<width-(prefix!=0) ){ precision = width-(prefix!=0); } - bufpt = &buf[etBUFSIZE-1]; + if( precision<etBUFSIZE-10 ){ + nOut = etBUFSIZE; + zOut = buf; + }else{ + nOut = precision + 10; + zOut = zExtra = sqlite3Malloc( nOut ); + if( zOut==0 ){ + setStrAccumError(pAccum, STRACCUM_NOMEM); + return; + } + } + bufpt = &zOut[nOut-1]; if( xtype==etORDINAL ){ static const char zOrd[] = "thstndrd"; int x = (int)(longvalue % 10); if( x>=4 || (longvalue/10)%10==1 ){ x = 0; } - buf[etBUFSIZE-3] = zOrd[x*2]; - buf[etBUFSIZE-2] = zOrd[x*2+1]; - bufpt -= 2; + *(--bufpt) = zOrd[x*2+1]; + *(--bufpt) = zOrd[x*2]; } { - register const char *cset; /* Use registers for speed */ - register int base; - cset = &aDigits[infop->charset]; - base = infop->base; + const char *cset = &aDigits[infop->charset]; + u8 base = infop->base; do{ /* Convert to ascii */ *(--bufpt) = cset[longvalue%base]; longvalue = longvalue/base; }while( longvalue>0 ); } - length = (int)(&buf[etBUFSIZE-1]-bufpt); + length = (int)(&zOut[nOut-1]-bufpt); for(idx=precision-length; idx>0; idx--){ *(--bufpt) = '0'; /* Zero pad */ } if( prefix ) *(--bufpt) = prefix; /* Add sign */ if( flag_alternateform && infop->prefix ){ /* Add "0" or "0x" */ @@ -17390,69 +23151,64 @@ const char *pre; char x; pre = &aPrefix[infop->prefix]; for(; (x=(*pre))!=0; pre++) *(--bufpt) = x; } - length = (int)(&buf[etBUFSIZE-1]-bufpt); + length = (int)(&zOut[nOut-1]-bufpt); break; case etFLOAT: case etEXP: case etGENERIC: - realvalue = va_arg(ap,double); + 
if( bArgList ){ + realvalue = getDoubleArg(pArgList); + }else{ + realvalue = va_arg(ap,double); + } #ifdef SQLITE_OMIT_FLOATING_POINT length = 0; #else if( precision<0 ) precision = 6; /* Set default precision */ - if( precision>etBUFSIZE/2-10 ) precision = etBUFSIZE/2-10; if( realvalue<0.0 ){ realvalue = -realvalue; prefix = '-'; }else{ if( flag_plussign ) prefix = '+'; else if( flag_blanksign ) prefix = ' '; else prefix = 0; } if( xtype==etGENERIC && precision>0 ) precision--; -#if 0 - /* Rounding works like BSD when the constant 0.4999 is used. Wierd! */ - for(idx=precision, rounder=0.4999; idx>0; idx--, rounder*=0.1); -#else - /* It makes more sense to use 0.5 */ - for(idx=precision, rounder=0.5; idx>0; idx--, rounder*=0.1){} -#endif + testcase( precision>0xfff ); + for(idx=precision&0xfff, rounder=0.5; idx>0; idx--, rounder*=0.1){} if( xtype==etFLOAT ) realvalue += rounder; /* Normalize realvalue to within 10.0 > realvalue >= 1.0 */ exp = 0; if( sqlite3IsNaN((double)realvalue) ){ bufpt = "NaN"; length = 3; break; } if( realvalue>0.0 ){ - while( realvalue>=1e32 && exp<=350 ){ realvalue *= 1e-32; exp+=32; } - while( realvalue>=1e8 && exp<=350 ){ realvalue *= 1e-8; exp+=8; } - while( realvalue>=10.0 && exp<=350 ){ realvalue *= 0.1; exp++; } + LONGDOUBLE_TYPE scale = 1.0; + while( realvalue>=1e100*scale && exp<=350 ){ scale *= 1e100;exp+=100;} + while( realvalue>=1e10*scale && exp<=350 ){ scale *= 1e10; exp+=10; } + while( realvalue>=10.0*scale && exp<=350 ){ scale *= 10.0; exp++; } + realvalue /= scale; while( realvalue<1e-8 ){ realvalue *= 1e8; exp-=8; } while( realvalue<1.0 ){ realvalue *= 10.0; exp--; } if( exp>350 ){ - if( prefix=='-' ){ - bufpt = "-Inf"; - }else if( prefix=='+' ){ - bufpt = "+Inf"; - }else{ - bufpt = "Inf"; - } - length = sqlite3Strlen30(bufpt); + bufpt = buf; + buf[0] = prefix; + memcpy(buf+(prefix!=0),"Inf",4); + length = 3+(prefix!=0); break; } } bufpt = buf; /* ** If the field type is etGENERIC, then convert to either etEXP ** or etFLOAT, as appropriate. */ - flag_exp = xtype==etEXP; if( xtype!=etFLOAT ){ realvalue += rounder; if( realvalue>=10.0 ){ realvalue *= 0.1; exp++; } } if( xtype==etGENERIC ){ @@ -17462,18 +23218,27 @@ }else{ precision = precision - exp; xtype = etFLOAT; } }else{ - flag_rtz = 0; + flag_rtz = flag_altform2; } if( xtype==etEXP ){ e2 = 0; }else{ e2 = exp; } - nsd = 0; + if( MAX(e2,0)+(i64)precision+(i64)width > etBUFSIZE - 15 ){ + bufpt = zExtra + = sqlite3Malloc( MAX(e2,0)+(i64)precision+(i64)width+15 ); + if( bufpt==0 ){ + setStrAccumError(pAccum, STRACCUM_NOMEM); + return; + } + } + zOut = bufpt; + nsd = 16 + flag_altform2*10; flag_dp = (precision>0 ?1:0) | flag_alternateform | flag_altform2; /* The sign in front of the number */ if( prefix ){ *(bufpt++) = prefix; } @@ -17500,21 +23265,21 @@ *(bufpt++) = et_getdigit(&realvalue,&nsd); } /* Remove trailing zeros and the "." if no digits follow the "." */ if( flag_rtz && flag_dp ){ while( bufpt[-1]=='0' ) *(--bufpt) = 0; - assert( bufpt>buf ); + assert( bufpt>zOut ); if( bufpt[-1]=='.' ){ if( flag_altform2 ){ *(bufpt++) = '0'; }else{ *(--bufpt) = 0; } } } /* Add the "eNNN" suffix */ - if( flag_exp || xtype==etEXP ){ + if( xtype==etEXP ){ *(bufpt++) = aDigits[infop->charset]; if( exp<0 ){ *(bufpt++) = '-'; exp = -exp; }else{ *(bufpt++) = '+'; @@ -17529,12 +23294,12 @@ *bufpt = 0; /* The converted number is in buf[] and zero terminated. Output it. ** Note that the number is in the usual order, not reversed as with ** integer conversions. 
*/ - length = (int)(bufpt-buf); - bufpt = buf; + length = (int)(bufpt-zOut); + bufpt = zOut; /* Special case: Add leading zeros if the flag_zeropad flag is ** set and we are not left justified */ if( flag_zeropad && !flag_leftjustify && length < width){ int i; @@ -17547,32 +23312,47 @@ length = width; } #endif /* !defined(SQLITE_OMIT_FLOATING_POINT) */ break; case etSIZE: - *(va_arg(ap,int*)) = pAccum->nChar; + if( !bArgList ){ + *(va_arg(ap,int*)) = pAccum->nChar; + } length = width = 0; break; case etPERCENT: buf[0] = '%'; bufpt = buf; length = 1; break; case etCHARX: - c = va_arg(ap,int); - buf[0] = (char)c; - if( precision>=0 ){ - for(idx=1; idx<precision; idx++) buf[idx] = (char)c; - length = precision; + if( bArgList ){ + bufpt = getTextArg(pArgList); + c = bufpt ? bufpt[0] : 0; }else{ - length =1; + c = va_arg(ap,int); } + if( precision>1 ){ + width -= precision-1; + if( width>1 && !flag_leftjustify ){ + sqlite3AppendChar(pAccum, width-1, ' '); + width = 0; + } + sqlite3AppendChar(pAccum, precision-1, c); + } + length = 1; + buf[0] = c; bufpt = buf; break; case etSTRING: case etDYNSTRING: - bufpt = va_arg(ap,char*); + if( bArgList ){ + bufpt = getTextArg(pArgList); + xtype = etSTRING; + }else{ + bufpt = va_arg(ap,char*); + } if( bufpt==0 ){ bufpt = ""; }else if( xtype==etDYNSTRING ){ zExtra = bufpt; } @@ -17580,30 +23360,36 @@ for(length=0; length<precision && bufpt[length]; length++){} }else{ length = sqlite3Strlen30(bufpt); } break; - case etSQLESCAPE: - case etSQLESCAPE2: - case etSQLESCAPE3: { + case etSQLESCAPE: /* Escape ' characters */ + case etSQLESCAPE2: /* Escape ' and enclose in '...' */ + case etSQLESCAPE3: { /* Escape " characters */ int i, j, k, n, isnull; int needQuote; char ch; char q = ((xtype==etSQLESCAPE3)?'"':'\''); /* Quote character */ - char *escarg = va_arg(ap,char*); + char *escarg; + + if( bArgList ){ + escarg = getTextArg(pArgList); + }else{ + escarg = va_arg(ap,char*); + } isnull = escarg==0; if( isnull ) escarg = (xtype==etSQLESCAPE2 ? "NULL" : "(NULL)"); k = precision; for(i=n=0; k!=0 && (ch=escarg[i])!=0; i++, k--){ if( ch==q ) n++; } needQuote = !isnull && xtype==etSQLESCAPE2; - n += i + 1 + needQuote*2; + n += i + 3; if( n>etBUFSIZE ){ bufpt = zExtra = sqlite3Malloc( n ); if( bufpt==0 ){ - pAccum->mallocFailed = 1; + setStrAccumError(pAccum, STRACCUM_NOMEM); return; } }else{ bufpt = buf; } @@ -17622,26 +23408,28 @@ ** if( precision>=0 && precision<length ) length = precision; */ break; } case etTOKEN: { Token *pToken = va_arg(ap, Token*); - if( pToken ){ + assert( bArgList==0 ); + if( pToken && pToken->n ){ sqlite3StrAccumAppend(pAccum, (const char*)pToken->z, pToken->n); } length = width = 0; break; } case etSRCLIST: { SrcList *pSrc = va_arg(ap, SrcList*); int k = va_arg(ap, int); struct SrcList_item *pItem = &pSrc->a[k]; + assert( bArgList==0 ); assert( k>=0 && k<pSrc->nSrc ); if( pItem->zDatabase ){ - sqlite3StrAccumAppend(pAccum, pItem->zDatabase, -1); + sqlite3StrAccumAppendAll(pAccum, pItem->zDatabase); sqlite3StrAccumAppend(pAccum, ".", 1); } - sqlite3StrAccumAppend(pAccum, pItem->zName, -1); + sqlite3StrAccumAppendAll(pAccum, pItem->zName); length = width = 0; break; } default: { assert( xtype==etINVALID ); @@ -17651,97 +23439,149 @@ /* ** The text of the conversion is pointed to by "bufpt" and is ** "length" characters long. The field width is "width". Do ** the output. 
*/ - if( !flag_leftjustify ){ - register int nspace; - nspace = width-length; - if( nspace>0 ){ - appendSpace(pAccum, nspace); - } - } - if( length>0 ){ - sqlite3StrAccumAppend(pAccum, bufpt, length); - } - if( flag_leftjustify ){ - register int nspace; - nspace = width-length; - if( nspace>0 ){ - appendSpace(pAccum, nspace); - } - } + width -= length; + if( width>0 && !flag_leftjustify ) sqlite3AppendChar(pAccum, width, ' '); + sqlite3StrAccumAppend(pAccum, bufpt, length); + if( width>0 && flag_leftjustify ) sqlite3AppendChar(pAccum, width, ' '); + if( zExtra ){ - sqlite3_free(zExtra); + sqlite3DbFree(pAccum->db, zExtra); + zExtra = 0; } }/* End for loop over the format string */ } /* End of function */ /* -** Append N bytes of text from z to the StrAccum object. +** Enlarge the memory allocation on a StrAccum object so that it is +** able to accept at least N more bytes of text. +** +** Return the number of bytes of text that StrAccum is able to accept +** after the attempted enlargement. The value returned might be zero. +*/ +static int sqlite3StrAccumEnlarge(StrAccum *p, int N){ + char *zNew; + assert( p->nChar+(i64)N >= p->nAlloc ); /* Only called if really needed */ + if( p->accError ){ + testcase(p->accError==STRACCUM_TOOBIG); + testcase(p->accError==STRACCUM_NOMEM); + return 0; + } + if( p->mxAlloc==0 ){ + N = p->nAlloc - p->nChar - 1; + setStrAccumError(p, STRACCUM_TOOBIG); + return N; + }else{ + char *zOld = isMalloced(p) ? p->zText : 0; + i64 szNew = p->nChar; + assert( (p->zText==0 || p->zText==p->zBase)==!isMalloced(p) ); + szNew += N + 1; + if( szNew+p->nChar<=p->mxAlloc ){ + /* Force exponential buffer size growth as long as it does not overflow, + ** to avoid having to call this routine too often */ + szNew += p->nChar; + } + if( szNew > p->mxAlloc ){ + sqlite3StrAccumReset(p); + setStrAccumError(p, STRACCUM_TOOBIG); + return 0; + }else{ + p->nAlloc = (int)szNew; + } + if( p->db ){ + zNew = sqlite3DbRealloc(p->db, zOld, p->nAlloc); + }else{ + zNew = sqlite3_realloc64(zOld, p->nAlloc); + } + if( zNew ){ + assert( p->zText!=0 || p->nChar==0 ); + if( !isMalloced(p) && p->nChar>0 ) memcpy(zNew, p->zText, p->nChar); + p->zText = zNew; + p->nAlloc = sqlite3DbMallocSize(p->db, zNew); + p->printfFlags |= SQLITE_PRINTF_MALLOCED; + }else{ + sqlite3StrAccumReset(p); + setStrAccumError(p, STRACCUM_NOMEM); + return 0; + } + } + return N; +} + +/* +** Append N copies of character c to the given string buffer. +*/ +SQLITE_PRIVATE void sqlite3AppendChar(StrAccum *p, int N, char c){ + testcase( p->nChar + (i64)N > 0x7fffffff ); + if( p->nChar+(i64)N >= p->nAlloc && (N = sqlite3StrAccumEnlarge(p, N))<=0 ){ + return; + } + assert( (p->zText==p->zBase)==!isMalloced(p) ); + while( (N--)>0 ) p->zText[p->nChar++] = c; +} + +/* +** The StrAccum "p" is not large enough to accept N new bytes of z[]. +** So enlarge if first, then do the append. +** +** This is a helper routine to sqlite3StrAccumAppend() that does special-case +** work (enlarging the buffer) using tail recursion, so that the +** sqlite3StrAccumAppend() routine can use fast calling semantics. +*/ +static void SQLITE_NOINLINE enlargeAndAppend(StrAccum *p, const char *z, int N){ + N = sqlite3StrAccumEnlarge(p, N); + if( N>0 ){ + memcpy(&p->zText[p->nChar], z, N); + p->nChar += N; + } + assert( (p->zText==0 || p->zText==p->zBase)==!isMalloced(p) ); +} + +/* +** Append N bytes of text from z to the StrAccum object. Increase the +** size of the memory allocation for StrAccum if necessary. 
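+** The caller must pass N>=0.  To append a zero-terminated string whose
+** length is not already known, use sqlite3StrAccumAppendAll() instead.
+**
+** Illustrative use of the accumulator (with mx==0 the text is built entirely
+** in the caller's buffer; sqlite3StrAccumFinish() zero-terminates it, and an
+** overflow is recorded as STRACCUM_TOOBIG rather than allocating):
+**
+**     char zBuf[100];
+**     StrAccum acc;
+**     sqlite3StrAccumInit(&acc, 0, zBuf, sizeof(zBuf), 0);
+**     sqlite3StrAccumAppendAll(&acc, "hello ");
+**     sqlite3StrAccumAppendAll(&acc, "world");
+**     sqlite3StrAccumFinish(&acc);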
*/ SQLITE_PRIVATE void sqlite3StrAccumAppend(StrAccum *p, const char *z, int N){ assert( z!=0 || N==0 ); - if( p->tooBig | p->mallocFailed ){ - testcase(p->tooBig); - testcase(p->mallocFailed); - return; - } - if( N<0 ){ - N = sqlite3Strlen30(z); - } - if( N==0 || NEVER(z==0) ){ - return; - } + assert( p->zText!=0 || p->nChar==0 || p->accError ); + assert( N>=0 ); + assert( p->accError==0 || p->nAlloc==0 ); if( p->nChar+N >= p->nAlloc ){ - char *zNew; - if( !p->useMalloc ){ - p->tooBig = 1; - N = p->nAlloc - p->nChar - 1; - if( N<=0 ){ - return; - } - }else{ - i64 szNew = p->nChar; - szNew += N + 1; - if( szNew > p->mxAlloc ){ - sqlite3StrAccumReset(p); - p->tooBig = 1; - return; - }else{ - p->nAlloc = (int)szNew; - } - zNew = sqlite3DbMallocRaw(p->db, p->nAlloc ); - if( zNew ){ - memcpy(zNew, p->zText, p->nChar); - sqlite3StrAccumReset(p); - p->zText = zNew; - }else{ - p->mallocFailed = 1; - sqlite3StrAccumReset(p); - return; - } - } - } - memcpy(&p->zText[p->nChar], z, N); - p->nChar += N; -} + enlargeAndAppend(p,z,N); + }else{ + assert( p->zText ); + p->nChar += N; + memcpy(&p->zText[p->nChar-N], z, N); + } +} + +/* +** Append the complete text of zero-terminated string z[] to the p string. +*/ +SQLITE_PRIVATE void sqlite3StrAccumAppendAll(StrAccum *p, const char *z){ + sqlite3StrAccumAppend(p, z, sqlite3Strlen30(z)); +} + /* ** Finish off a string by making sure it is zero-terminated. ** Return a pointer to the resulting string. Return a NULL ** pointer if any kind of error was encountered. */ SQLITE_PRIVATE char *sqlite3StrAccumFinish(StrAccum *p){ if( p->zText ){ + assert( (p->zText==p->zBase)==!isMalloced(p) ); p->zText[p->nChar] = 0; - if( p->useMalloc && p->zText==p->zBase ){ + if( p->mxAlloc>0 && !isMalloced(p) ){ p->zText = sqlite3DbMallocRaw(p->db, p->nChar+1 ); if( p->zText ){ memcpy(p->zText, p->zBase, p->nChar+1); + p->printfFlags |= SQLITE_PRINTF_MALLOCED; }else{ - p->mallocFailed = 1; + setStrAccumError(p, STRACCUM_NOMEM); } } } return p->zText; } @@ -17748,28 +23588,40 @@ /* ** Reset an StrAccum string. Reclaim all malloced memory. */ SQLITE_PRIVATE void sqlite3StrAccumReset(StrAccum *p){ - if( p->zText!=p->zBase ){ + assert( (p->zText==0 || p->zText==p->zBase)==!isMalloced(p) ); + if( isMalloced(p) ){ sqlite3DbFree(p->db, p->zText); + p->printfFlags &= ~SQLITE_PRINTF_MALLOCED; } p->zText = 0; } /* -** Initialize a string accumulator +** Initialize a string accumulator. +** +** p: The accumulator to be initialized. +** db: Pointer to a database connection. May be NULL. Lookaside +** memory is used if not NULL. db->mallocFailed is set appropriately +** when not NULL. +** zBase: An initial buffer. May be NULL in which case the initial buffer +** is malloced. +** n: Size of zBase in bytes. If total space requirements never exceed +** n then no memory allocations ever occur. +** mx: Maximum number of bytes to accumulate. If mx==0 then no memory +** allocations will ever occur. */ -SQLITE_PRIVATE void sqlite3StrAccumInit(StrAccum *p, char *zBase, int n, int mx){ +SQLITE_PRIVATE void sqlite3StrAccumInit(StrAccum *p, sqlite3 *db, char *zBase, int n, int mx){ p->zText = p->zBase = zBase; - p->db = 0; + p->db = db; p->nChar = 0; p->nAlloc = n; p->mxAlloc = mx; - p->useMalloc = 1; - p->tooBig = 0; - p->mallocFailed = 0; + p->accError = 0; + p->printfFlags = 0; } /* ** Print into memory obtained from sqliteMalloc(). Use the internal ** %-conversion extensions. 
@@ -17777,17 +23629,17 @@ SQLITE_PRIVATE char *sqlite3VMPrintf(sqlite3 *db, const char *zFormat, va_list ap){ char *z; char zBase[SQLITE_PRINT_BUF_SIZE]; StrAccum acc; assert( db!=0 ); - sqlite3StrAccumInit(&acc, zBase, sizeof(zBase), + sqlite3StrAccumInit(&acc, db, zBase, sizeof(zBase), db->aLimit[SQLITE_LIMIT_LENGTH]); - acc.db = db; - sqlite3VXPrintf(&acc, 1, zFormat, ap); + acc.printfFlags = SQLITE_PRINTF_INTERNAL; + sqlite3VXPrintf(&acc, zFormat, ap); z = sqlite3StrAccumFinish(&acc); - if( acc.mallocFailed ){ - db->mallocFailed = 1; + if( acc.accError==STRACCUM_NOMEM ){ + sqlite3OomFault(db); } return z; } /* @@ -17801,50 +23653,39 @@ z = sqlite3VMPrintf(db, zFormat, ap); va_end(ap); return z; } -/* -** Like sqlite3MPrintf(), but call sqlite3DbFree() on zStr after formatting -** the string and before returnning. This routine is intended to be used -** to modify an existing string. For example: -** -** x = sqlite3MPrintf(db, x, "prefix %s suffix", x); -** -*/ -SQLITE_PRIVATE char *sqlite3MAppendf(sqlite3 *db, char *zStr, const char *zFormat, ...){ - va_list ap; - char *z; - va_start(ap, zFormat); - z = sqlite3VMPrintf(db, zFormat, ap); - va_end(ap); - sqlite3DbFree(db, zStr); - return z; -} - /* ** Print into memory obtained from sqlite3_malloc(). Omit the internal ** %-conversion extensions. */ -SQLITE_API char *sqlite3_vmprintf(const char *zFormat, va_list ap){ +SQLITE_API char *SQLITE_STDCALL sqlite3_vmprintf(const char *zFormat, va_list ap){ char *z; char zBase[SQLITE_PRINT_BUF_SIZE]; StrAccum acc; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( zFormat==0 ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif #ifndef SQLITE_OMIT_AUTOINIT if( sqlite3_initialize() ) return 0; #endif - sqlite3StrAccumInit(&acc, zBase, sizeof(zBase), SQLITE_MAX_LENGTH); - sqlite3VXPrintf(&acc, 0, zFormat, ap); + sqlite3StrAccumInit(&acc, 0, zBase, sizeof(zBase), SQLITE_MAX_LENGTH); + sqlite3VXPrintf(&acc, zFormat, ap); z = sqlite3StrAccumFinish(&acc); return z; } /* ** Print into memory obtained from sqlite3_malloc()(). Omit the internal ** %-conversion extensions. */ -SQLITE_API char *sqlite3_mprintf(const char *zFormat, ...){ +SQLITE_API char *SQLITE_CDECL sqlite3_mprintf(const char *zFormat, ...){ va_list ap; char *z; #ifndef SQLITE_OMIT_AUTOINIT if( sqlite3_initialize() ) return 0; #endif @@ -17857,25 +23698,38 @@ /* ** sqlite3_snprintf() works like snprintf() except that it ignores the ** current locale settings. This is important for SQLite because we ** are not able to use a "," as the decimal point in place of "." as ** specified by some locales. +** +** Oops: The first two arguments of sqlite3_snprintf() are backwards +** from the snprintf() standard. Unfortunately, it is too late to change +** this without breaking compatibility, so we just have to live with the +** mistake. +** +** sqlite3_vsnprintf() is the varargs version. 
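+**
+** Example (illustrative) -- note that the buffer size is the first argument:
+**
+**     char zBuf[50];
+**     sqlite3_snprintf(sizeof(zBuf), zBuf, "line %d of %s", iLine, zFile);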
*/ -SQLITE_API char *sqlite3_snprintf(int n, char *zBuf, const char *zFormat, ...){ - char *z; - va_list ap; +SQLITE_API char *SQLITE_STDCALL sqlite3_vsnprintf(int n, char *zBuf, const char *zFormat, va_list ap){ StrAccum acc; - - if( n<=0 ){ + if( n<=0 ) return zBuf; +#ifdef SQLITE_ENABLE_API_ARMOR + if( zBuf==0 || zFormat==0 ) { + (void)SQLITE_MISUSE_BKPT; + if( zBuf ) zBuf[0] = 0; return zBuf; } - sqlite3StrAccumInit(&acc, zBuf, n, 0); - acc.useMalloc = 0; +#endif + sqlite3StrAccumInit(&acc, 0, zBuf, n, 0); + sqlite3VXPrintf(&acc, zFormat, ap); + return sqlite3StrAccumFinish(&acc); +} +SQLITE_API char *SQLITE_CDECL sqlite3_snprintf(int n, char *zBuf, const char *zFormat, ...){ + char *z; + va_list ap; va_start(ap,zFormat); - sqlite3VXPrintf(&acc, 0, zFormat, ap); + z = sqlite3_vsnprintf(n, zBuf, zFormat, ap); va_end(ap); - z = sqlite3StrAccumFinish(&acc); return z; } /* ** This is the routine that actually formats the sqlite3_log() message. @@ -17883,68 +23737,560 @@ ** stack space on small-stack systems when logging is disabled. ** ** sqlite3_log() must render into a static buffer. It cannot dynamically ** allocate memory because it might be called while the memory allocator ** mutex is held. +** +** sqlite3VXPrintf() might ask for *temporary* memory allocations for +** certain format characters (%q) or for very large precisions or widths. +** Care must be taken that any sqlite3_log() calls that occur while the +** memory mutex is held do not use these mechanisms. */ static void renderLogMsg(int iErrCode, const char *zFormat, va_list ap){ StrAccum acc; /* String accumulator */ char zMsg[SQLITE_PRINT_BUF_SIZE*3]; /* Complete log message */ - sqlite3StrAccumInit(&acc, zMsg, sizeof(zMsg), 0); - acc.useMalloc = 0; - sqlite3VXPrintf(&acc, 0, zFormat, ap); + sqlite3StrAccumInit(&acc, 0, zMsg, sizeof(zMsg), 0); + sqlite3VXPrintf(&acc, zFormat, ap); sqlite3GlobalConfig.xLog(sqlite3GlobalConfig.pLogArg, iErrCode, sqlite3StrAccumFinish(&acc)); } /* ** Format and write a message to the log if logging is enabled. */ -SQLITE_API void sqlite3_log(int iErrCode, const char *zFormat, ...){ +SQLITE_API void SQLITE_CDECL sqlite3_log(int iErrCode, const char *zFormat, ...){ va_list ap; /* Vararg list */ if( sqlite3GlobalConfig.xLog ){ va_start(ap, zFormat); renderLogMsg(iErrCode, zFormat, ap); va_end(ap); } } -#if defined(SQLITE_DEBUG) +#if defined(SQLITE_DEBUG) || defined(SQLITE_HAVE_OS_TRACE) /* ** A version of printf() that understands %lld. Used for debugging. ** The printf() built into some versions of windows does not understand %lld ** and segfaults if you give it a long long int. */ SQLITE_PRIVATE void sqlite3DebugPrintf(const char *zFormat, ...){ va_list ap; StrAccum acc; char zBuf[500]; - sqlite3StrAccumInit(&acc, zBuf, sizeof(zBuf), 0); - acc.useMalloc = 0; + sqlite3StrAccumInit(&acc, 0, zBuf, sizeof(zBuf), 0); va_start(ap,zFormat); - sqlite3VXPrintf(&acc, 0, zFormat, ap); + sqlite3VXPrintf(&acc, zFormat, ap); va_end(ap); sqlite3StrAccumFinish(&acc); fprintf(stdout,"%s", zBuf); fflush(stdout); } #endif -#ifndef SQLITE_OMIT_TRACE + /* -** variable-argument wrapper around sqlite3VXPrintf(). +** variable-argument wrapper around sqlite3VXPrintf(). The bFlags argument +** can contain the bit SQLITE_PRINTF_INTERNAL enable internal formats. 
*/ SQLITE_PRIVATE void sqlite3XPrintf(StrAccum *p, const char *zFormat, ...){ va_list ap; va_start(ap,zFormat); - sqlite3VXPrintf(p, 1, zFormat, ap); + sqlite3VXPrintf(p, zFormat, ap); va_end(ap); } -#endif /************** End of printf.c **********************************************/ +/************** Begin file treeview.c ****************************************/ +/* +** 2015-06-08 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** +** This file contains C code to implement the TreeView debugging routines. +** These routines print a parse tree to standard output for debugging and +** analysis. +** +** The interfaces in this file is only available when compiling +** with SQLITE_DEBUG. +*/ +/* #include "sqliteInt.h" */ +#ifdef SQLITE_DEBUG + +/* +** Add a new subitem to the tree. The moreToFollow flag indicates that this +** is not the last item in the tree. +*/ +static TreeView *sqlite3TreeViewPush(TreeView *p, u8 moreToFollow){ + if( p==0 ){ + p = sqlite3_malloc64( sizeof(*p) ); + if( p==0 ) return 0; + memset(p, 0, sizeof(*p)); + }else{ + p->iLevel++; + } + assert( moreToFollow==0 || moreToFollow==1 ); + if( p->iLevel<sizeof(p->bLine) ) p->bLine[p->iLevel] = moreToFollow; + return p; +} + +/* +** Finished with one layer of the tree +*/ +static void sqlite3TreeViewPop(TreeView *p){ + if( p==0 ) return; + p->iLevel--; + if( p->iLevel<0 ) sqlite3_free(p); +} + +/* +** Generate a single line of output for the tree, with a prefix that contains +** all the appropriate tree lines +*/ +static void sqlite3TreeViewLine(TreeView *p, const char *zFormat, ...){ + va_list ap; + int i; + StrAccum acc; + char zBuf[500]; + sqlite3StrAccumInit(&acc, 0, zBuf, sizeof(zBuf), 0); + if( p ){ + for(i=0; i<p->iLevel && i<sizeof(p->bLine)-1; i++){ + sqlite3StrAccumAppend(&acc, p->bLine[i] ? "| " : " ", 4); + } + sqlite3StrAccumAppend(&acc, p->bLine[i] ? "|-- " : "'-- ", 4); + } + va_start(ap, zFormat); + sqlite3VXPrintf(&acc, zFormat, ap); + va_end(ap); + if( zBuf[acc.nChar-1]!='\n' ) sqlite3StrAccumAppend(&acc, "\n", 1); + sqlite3StrAccumFinish(&acc); + fprintf(stdout,"%s", zBuf); + fflush(stdout); +} + +/* +** Shorthand for starting a new tree item that consists of a single label +*/ +static void sqlite3TreeViewItem(TreeView *p, const char *zLabel,u8 moreFollows){ + p = sqlite3TreeViewPush(p, moreFollows); + sqlite3TreeViewLine(p, "%s", zLabel); +} + +/* +** Generate a human-readable description of a WITH clause. 
+*/ +SQLITE_PRIVATE void sqlite3TreeViewWith(TreeView *pView, const With *pWith, u8 moreToFollow){ + int i; + if( pWith==0 ) return; + if( pWith->nCte==0 ) return; + if( pWith->pOuter ){ + sqlite3TreeViewLine(pView, "WITH (0x%p, pOuter=0x%p)",pWith,pWith->pOuter); + }else{ + sqlite3TreeViewLine(pView, "WITH (0x%p)", pWith); + } + if( pWith->nCte>0 ){ + pView = sqlite3TreeViewPush(pView, 1); + for(i=0; i<pWith->nCte; i++){ + StrAccum x; + char zLine[1000]; + const struct Cte *pCte = &pWith->a[i]; + sqlite3StrAccumInit(&x, 0, zLine, sizeof(zLine), 0); + sqlite3XPrintf(&x, "%s", pCte->zName); + if( pCte->pCols && pCte->pCols->nExpr>0 ){ + char cSep = '('; + int j; + for(j=0; j<pCte->pCols->nExpr; j++){ + sqlite3XPrintf(&x, "%c%s", cSep, pCte->pCols->a[j].zName); + cSep = ','; + } + sqlite3XPrintf(&x, ")"); + } + sqlite3XPrintf(&x, " AS"); + sqlite3StrAccumFinish(&x); + sqlite3TreeViewItem(pView, zLine, i<pWith->nCte-1); + sqlite3TreeViewSelect(pView, pCte->pSelect, 0); + sqlite3TreeViewPop(pView); + } + sqlite3TreeViewPop(pView); + } +} + + +/* +** Generate a human-readable description of a the Select object. +*/ +SQLITE_PRIVATE void sqlite3TreeViewSelect(TreeView *pView, const Select *p, u8 moreToFollow){ + int n = 0; + int cnt = 0; + pView = sqlite3TreeViewPush(pView, moreToFollow); + if( p->pWith ){ + sqlite3TreeViewWith(pView, p->pWith, 1); + cnt = 1; + sqlite3TreeViewPush(pView, 1); + } + do{ + sqlite3TreeViewLine(pView, "SELECT%s%s (0x%p) selFlags=0x%x", + ((p->selFlags & SF_Distinct) ? " DISTINCT" : ""), + ((p->selFlags & SF_Aggregate) ? " agg_flag" : ""), p, p->selFlags + ); + if( cnt++ ) sqlite3TreeViewPop(pView); + if( p->pPrior ){ + n = 1000; + }else{ + n = 0; + if( p->pSrc && p->pSrc->nSrc ) n++; + if( p->pWhere ) n++; + if( p->pGroupBy ) n++; + if( p->pHaving ) n++; + if( p->pOrderBy ) n++; + if( p->pLimit ) n++; + if( p->pOffset ) n++; + } + sqlite3TreeViewExprList(pView, p->pEList, (n--)>0, "result-set"); + if( p->pSrc && p->pSrc->nSrc ){ + int i; + pView = sqlite3TreeViewPush(pView, (n--)>0); + sqlite3TreeViewLine(pView, "FROM"); + for(i=0; i<p->pSrc->nSrc; i++){ + struct SrcList_item *pItem = &p->pSrc->a[i]; + StrAccum x; + char zLine[100]; + sqlite3StrAccumInit(&x, 0, zLine, sizeof(zLine), 0); + sqlite3XPrintf(&x, "{%d,*}", pItem->iCursor); + if( pItem->zDatabase ){ + sqlite3XPrintf(&x, " %s.%s", pItem->zDatabase, pItem->zName); + }else if( pItem->zName ){ + sqlite3XPrintf(&x, " %s", pItem->zName); + } + if( pItem->pTab ){ + sqlite3XPrintf(&x, " tabname=%Q", pItem->pTab->zName); + } + if( pItem->zAlias ){ + sqlite3XPrintf(&x, " (AS %s)", pItem->zAlias); + } + if( pItem->fg.jointype & JT_LEFT ){ + sqlite3XPrintf(&x, " LEFT-JOIN"); + } + sqlite3StrAccumFinish(&x); + sqlite3TreeViewItem(pView, zLine, i<p->pSrc->nSrc-1); + if( pItem->pSelect ){ + sqlite3TreeViewSelect(pView, pItem->pSelect, 0); + } + if( pItem->fg.isTabFunc ){ + sqlite3TreeViewExprList(pView, pItem->u1.pFuncArg, 0, "func-args:"); + } + sqlite3TreeViewPop(pView); + } + sqlite3TreeViewPop(pView); + } + if( p->pWhere ){ + sqlite3TreeViewItem(pView, "WHERE", (n--)>0); + sqlite3TreeViewExpr(pView, p->pWhere, 0); + sqlite3TreeViewPop(pView); + } + if( p->pGroupBy ){ + sqlite3TreeViewExprList(pView, p->pGroupBy, (n--)>0, "GROUPBY"); + } + if( p->pHaving ){ + sqlite3TreeViewItem(pView, "HAVING", (n--)>0); + sqlite3TreeViewExpr(pView, p->pHaving, 0); + sqlite3TreeViewPop(pView); + } + if( p->pOrderBy ){ + sqlite3TreeViewExprList(pView, p->pOrderBy, (n--)>0, "ORDERBY"); + } + if( p->pLimit ){ + sqlite3TreeViewItem(pView, 
"LIMIT", (n--)>0); + sqlite3TreeViewExpr(pView, p->pLimit, 0); + sqlite3TreeViewPop(pView); + } + if( p->pOffset ){ + sqlite3TreeViewItem(pView, "OFFSET", (n--)>0); + sqlite3TreeViewExpr(pView, p->pOffset, 0); + sqlite3TreeViewPop(pView); + } + if( p->pPrior ){ + const char *zOp = "UNION"; + switch( p->op ){ + case TK_ALL: zOp = "UNION ALL"; break; + case TK_INTERSECT: zOp = "INTERSECT"; break; + case TK_EXCEPT: zOp = "EXCEPT"; break; + } + sqlite3TreeViewItem(pView, zOp, 1); + } + p = p->pPrior; + }while( p!=0 ); + sqlite3TreeViewPop(pView); +} + +/* +** Generate a human-readable explanation of an expression tree. +*/ +SQLITE_PRIVATE void sqlite3TreeViewExpr(TreeView *pView, const Expr *pExpr, u8 moreToFollow){ + const char *zBinOp = 0; /* Binary operator */ + const char *zUniOp = 0; /* Unary operator */ + char zFlgs[30]; + pView = sqlite3TreeViewPush(pView, moreToFollow); + if( pExpr==0 ){ + sqlite3TreeViewLine(pView, "nil"); + sqlite3TreeViewPop(pView); + return; + } + if( pExpr->flags ){ + sqlite3_snprintf(sizeof(zFlgs),zFlgs," flags=0x%x",pExpr->flags); + }else{ + zFlgs[0] = 0; + } + switch( pExpr->op ){ + case TK_AGG_COLUMN: { + sqlite3TreeViewLine(pView, "AGG{%d:%d}%s", + pExpr->iTable, pExpr->iColumn, zFlgs); + break; + } + case TK_COLUMN: { + if( pExpr->iTable<0 ){ + /* This only happens when coding check constraints */ + sqlite3TreeViewLine(pView, "COLUMN(%d)%s", pExpr->iColumn, zFlgs); + }else{ + sqlite3TreeViewLine(pView, "{%d:%d}%s", + pExpr->iTable, pExpr->iColumn, zFlgs); + } + break; + } + case TK_INTEGER: { + if( pExpr->flags & EP_IntValue ){ + sqlite3TreeViewLine(pView, "%d", pExpr->u.iValue); + }else{ + sqlite3TreeViewLine(pView, "%s", pExpr->u.zToken); + } + break; + } +#ifndef SQLITE_OMIT_FLOATING_POINT + case TK_FLOAT: { + sqlite3TreeViewLine(pView,"%s", pExpr->u.zToken); + break; + } +#endif + case TK_STRING: { + sqlite3TreeViewLine(pView,"%Q", pExpr->u.zToken); + break; + } + case TK_NULL: { + sqlite3TreeViewLine(pView,"NULL"); + break; + } +#ifndef SQLITE_OMIT_BLOB_LITERAL + case TK_BLOB: { + sqlite3TreeViewLine(pView,"%s", pExpr->u.zToken); + break; + } +#endif + case TK_VARIABLE: { + sqlite3TreeViewLine(pView,"VARIABLE(%s,%d)", + pExpr->u.zToken, pExpr->iColumn); + break; + } + case TK_REGISTER: { + sqlite3TreeViewLine(pView,"REGISTER(%d)", pExpr->iTable); + break; + } + case TK_ID: { + sqlite3TreeViewLine(pView,"ID \"%w\"", pExpr->u.zToken); + break; + } +#ifndef SQLITE_OMIT_CAST + case TK_CAST: { + /* Expressions of the form: CAST(pLeft AS token) */ + sqlite3TreeViewLine(pView,"CAST %Q", pExpr->u.zToken); + sqlite3TreeViewExpr(pView, pExpr->pLeft, 0); + break; + } +#endif /* SQLITE_OMIT_CAST */ + case TK_LT: zBinOp = "LT"; break; + case TK_LE: zBinOp = "LE"; break; + case TK_GT: zBinOp = "GT"; break; + case TK_GE: zBinOp = "GE"; break; + case TK_NE: zBinOp = "NE"; break; + case TK_EQ: zBinOp = "EQ"; break; + case TK_IS: zBinOp = "IS"; break; + case TK_ISNOT: zBinOp = "ISNOT"; break; + case TK_AND: zBinOp = "AND"; break; + case TK_OR: zBinOp = "OR"; break; + case TK_PLUS: zBinOp = "ADD"; break; + case TK_STAR: zBinOp = "MUL"; break; + case TK_MINUS: zBinOp = "SUB"; break; + case TK_REM: zBinOp = "REM"; break; + case TK_BITAND: zBinOp = "BITAND"; break; + case TK_BITOR: zBinOp = "BITOR"; break; + case TK_SLASH: zBinOp = "DIV"; break; + case TK_LSHIFT: zBinOp = "LSHIFT"; break; + case TK_RSHIFT: zBinOp = "RSHIFT"; break; + case TK_CONCAT: zBinOp = "CONCAT"; break; + case TK_DOT: zBinOp = "DOT"; break; + + case TK_UMINUS: zUniOp = "UMINUS"; break; + case TK_UPLUS: 
zUniOp = "UPLUS"; break; + case TK_BITNOT: zUniOp = "BITNOT"; break; + case TK_NOT: zUniOp = "NOT"; break; + case TK_ISNULL: zUniOp = "ISNULL"; break; + case TK_NOTNULL: zUniOp = "NOTNULL"; break; + + case TK_COLLATE: { + sqlite3TreeViewLine(pView, "COLLATE %Q", pExpr->u.zToken); + sqlite3TreeViewExpr(pView, pExpr->pLeft, 0); + break; + } + + case TK_AGG_FUNCTION: + case TK_FUNCTION: { + ExprList *pFarg; /* List of function arguments */ + if( ExprHasProperty(pExpr, EP_TokenOnly) ){ + pFarg = 0; + }else{ + pFarg = pExpr->x.pList; + } + if( pExpr->op==TK_AGG_FUNCTION ){ + sqlite3TreeViewLine(pView, "AGG_FUNCTION%d %Q", + pExpr->op2, pExpr->u.zToken); + }else{ + sqlite3TreeViewLine(pView, "FUNCTION %Q", pExpr->u.zToken); + } + if( pFarg ){ + sqlite3TreeViewExprList(pView, pFarg, 0, 0); + } + break; + } +#ifndef SQLITE_OMIT_SUBQUERY + case TK_EXISTS: { + sqlite3TreeViewLine(pView, "EXISTS-expr"); + sqlite3TreeViewSelect(pView, pExpr->x.pSelect, 0); + break; + } + case TK_SELECT: { + sqlite3TreeViewLine(pView, "SELECT-expr"); + sqlite3TreeViewSelect(pView, pExpr->x.pSelect, 0); + break; + } + case TK_IN: { + sqlite3TreeViewLine(pView, "IN"); + sqlite3TreeViewExpr(pView, pExpr->pLeft, 1); + if( ExprHasProperty(pExpr, EP_xIsSelect) ){ + sqlite3TreeViewSelect(pView, pExpr->x.pSelect, 0); + }else{ + sqlite3TreeViewExprList(pView, pExpr->x.pList, 0, 0); + } + break; + } +#endif /* SQLITE_OMIT_SUBQUERY */ + + /* + ** x BETWEEN y AND z + ** + ** This is equivalent to + ** + ** x>=y AND x<=z + ** + ** X is stored in pExpr->pLeft. + ** Y is stored in pExpr->pList->a[0].pExpr. + ** Z is stored in pExpr->pList->a[1].pExpr. + */ + case TK_BETWEEN: { + Expr *pX = pExpr->pLeft; + Expr *pY = pExpr->x.pList->a[0].pExpr; + Expr *pZ = pExpr->x.pList->a[1].pExpr; + sqlite3TreeViewLine(pView, "BETWEEN"); + sqlite3TreeViewExpr(pView, pX, 1); + sqlite3TreeViewExpr(pView, pY, 1); + sqlite3TreeViewExpr(pView, pZ, 0); + break; + } + case TK_TRIGGER: { + /* If the opcode is TK_TRIGGER, then the expression is a reference + ** to a column in the new.* or old.* pseudo-tables available to + ** trigger programs. In this case Expr.iTable is set to 1 for the + ** new.* pseudo-table, or 0 for the old.* pseudo-table. Expr.iColumn + ** is set to the column of the pseudo-table to read, or to -1 to + ** read the rowid field. + */ + sqlite3TreeViewLine(pView, "%s(%d)", + pExpr->iTable ? "NEW" : "OLD", pExpr->iColumn); + break; + } + case TK_CASE: { + sqlite3TreeViewLine(pView, "CASE"); + sqlite3TreeViewExpr(pView, pExpr->pLeft, 1); + sqlite3TreeViewExprList(pView, pExpr->x.pList, 0, 0); + break; + } +#ifndef SQLITE_OMIT_TRIGGER + case TK_RAISE: { + const char *zType = "unk"; + switch( pExpr->affinity ){ + case OE_Rollback: zType = "rollback"; break; + case OE_Abort: zType = "abort"; break; + case OE_Fail: zType = "fail"; break; + case OE_Ignore: zType = "ignore"; break; + } + sqlite3TreeViewLine(pView, "RAISE %s(%Q)", zType, pExpr->u.zToken); + break; + } +#endif + default: { + sqlite3TreeViewLine(pView, "op=%d", pExpr->op); + break; + } + } + if( zBinOp ){ + sqlite3TreeViewLine(pView, "%s%s", zBinOp, zFlgs); + sqlite3TreeViewExpr(pView, pExpr->pLeft, 1); + sqlite3TreeViewExpr(pView, pExpr->pRight, 0); + }else if( zUniOp ){ + sqlite3TreeViewLine(pView, "%s%s", zUniOp, zFlgs); + sqlite3TreeViewExpr(pView, pExpr->pLeft, 0); + } + sqlite3TreeViewPop(pView); +} + +/* +** Generate a human-readable explanation of an expression list. 
+*/ +SQLITE_PRIVATE void sqlite3TreeViewExprList( + TreeView *pView, + const ExprList *pList, + u8 moreToFollow, + const char *zLabel +){ + int i; + pView = sqlite3TreeViewPush(pView, moreToFollow); + if( zLabel==0 || zLabel[0]==0 ) zLabel = "LIST"; + if( pList==0 ){ + sqlite3TreeViewLine(pView, "%s (empty)", zLabel); + }else{ + sqlite3TreeViewLine(pView, "%s", zLabel); + for(i=0; i<pList->nExpr; i++){ + int j = pList->a[i].u.x.iOrderByCol; + if( j ){ + sqlite3TreeViewPush(pView, 0); + sqlite3TreeViewLine(pView, "iOrderByCol=%d", j); + } + sqlite3TreeViewExpr(pView, pList->a[i].pExpr, i<pList->nExpr-1); + if( j ) sqlite3TreeViewPop(pView); + } + } + sqlite3TreeViewPop(pView); +} + +#endif /* SQLITE_DEBUG */ + +/************** End of treeview.c ********************************************/ /************** Begin file random.c ******************************************/ /* ** 2001 September 15 ** ** The author disclaims copyright to this source code. In place of @@ -17959,10 +24305,11 @@ ** generator (PRNG) for SQLite. ** ** Random numbers are used by some of the database backends in order ** to generate random integer keys for tables or random filenames. */ +/* #include "sqliteInt.h" */ /* All threads share a single random number generator. ** This structure is the current state of the generator. */ @@ -17971,28 +24318,15 @@ unsigned char i, j; /* State variables */ unsigned char s[256]; /* State variables */ } sqlite3Prng; /* -** Get a single 8-bit random value from the RC4 PRNG. The Mutex -** must be held while executing this routine. -** -** Why not just use a library random generator like lrand48() for this? -** Because the OP_NewRowid opcode in the VDBE depends on having a very -** good source of random numbers. The lrand48() library function may -** well be good enough. But maybe not. Or maybe lrand48() has some -** subtle problems on some systems that could cause problems. It is hard -** to know. To minimize the risk of problems due to bad lrand48() -** implementations, SQLite uses this random number generator based -** on RC4, which we know works very well. -** -** (Later): Actually, OP_NewRowid does not depend on a good source of -** randomness any more. But we will leave this code in all the same. +** Return N random bytes. */ -static u8 randomByte(void){ +SQLITE_API void SQLITE_STDCALL sqlite3_randomness(int N, void *pBuf){ unsigned char t; - + unsigned char *zBuf = pBuf; /* The "wsdPrng" macro will resolve to the pseudo-random number generator ** state vector. If writable static data is unsupported on the target, ** we have to locate the state vector at run-time. In the more common ** case where writable static data is supported, wsdPrng can refer directly @@ -18003,10 +24337,28 @@ # define wsdPrng p[0] #else # define wsdPrng sqlite3Prng #endif +#if SQLITE_THREADSAFE + sqlite3_mutex *mutex; +#endif + +#ifndef SQLITE_OMIT_AUTOINIT + if( sqlite3_initialize() ) return; +#endif + +#if SQLITE_THREADSAFE + mutex = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_PRNG); +#endif + + sqlite3_mutex_enter(mutex); + if( N<=0 || pBuf==0 ){ + wsdPrng.isInit = 0; + sqlite3_mutex_leave(mutex); + return; + } /* Initialize the state of the random number generator once, ** the first time this routine is called. The seed value does ** not need to contain a lot of randomness since we are not ** trying to do secure encryption or anything like that... 
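**
** A hedged usage sketch of this public interface (the buffer name is
** illustrative only):
**
**     unsigned char aNonce[16];
**     sqlite3_randomness((int)sizeof(aNonce), aNonce);
**
** fills aNonce with 16 bytes from the generator, while
** sqlite3_randomness(0, 0) clears wsdPrng.isInit so that the generator
** re-seeds itself on its next use, as the code above shows.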
@@ -18031,33 +24383,20 @@ wsdPrng.s[i] = t; } wsdPrng.isInit = 1; } - /* Generate and return single random byte - */ - wsdPrng.i++; - t = wsdPrng.s[wsdPrng.i]; - wsdPrng.j += t; - wsdPrng.s[wsdPrng.i] = wsdPrng.s[wsdPrng.j]; - wsdPrng.s[wsdPrng.j] = t; - t += wsdPrng.s[wsdPrng.i]; - return wsdPrng.s[t]; -} - -/* -** Return N random bytes. -*/ -SQLITE_API void sqlite3_randomness(int N, void *pBuf){ - unsigned char *zBuf = pBuf; -#if SQLITE_THREADSAFE - sqlite3_mutex *mutex = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_PRNG); -#endif - sqlite3_mutex_enter(mutex); - while( N-- ){ - *(zBuf++) = randomByte(); - } + assert( N>0 ); + do{ + wsdPrng.i++; + t = wsdPrng.s[wsdPrng.i]; + wsdPrng.j += t; + wsdPrng.s[wsdPrng.i] = wsdPrng.s[wsdPrng.j]; + wsdPrng.s[wsdPrng.j] = t; + t += wsdPrng.s[wsdPrng.i]; + *(zBuf++) = wsdPrng.s[t]; + }while( --N ); sqlite3_mutex_leave(mutex); } #ifndef SQLITE_OMIT_BUILTIN_TEST /* @@ -18082,16 +24421,290 @@ &GLOBAL(struct sqlite3PrngType, sqlite3Prng), &GLOBAL(struct sqlite3PrngType, sqlite3SavedPrng), sizeof(sqlite3Prng) ); } -SQLITE_PRIVATE void sqlite3PrngResetState(void){ - GLOBAL(struct sqlite3PrngType, sqlite3Prng).isInit = 0; -} #endif /* SQLITE_OMIT_BUILTIN_TEST */ /************** End of random.c **********************************************/ +/************** Begin file threads.c *****************************************/ +/* +** 2012 July 21 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This file presents a simple cross-platform threading interface for +** use internally by SQLite. +** +** A "thread" can be created using sqlite3ThreadCreate(). This thread +** runs independently of its creator until it is joined using +** sqlite3ThreadJoin(), at which point it terminates. +** +** Threads do not have to be real. It could be that the work of the +** "thread" is done by the main thread at either the sqlite3ThreadCreate() +** or sqlite3ThreadJoin() call. This is, in fact, what happens in +** single threaded systems. Nothing in SQLite requires multiple threads. +** This interface exists so that applications that want to take advantage +** of multiple cores can do so, while also allowing applications to stay +** single-threaded if desired. 
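**
** A hedged usage sketch ("workerTask" is a hypothetical task routine of
** type void*(*)(void*), and the call site is assumed to be inside the
** library, where these SQLITE_PRIVATE interfaces are visible):
**
**     SQLiteThread *pThread;
**     void *pOut;
**     int rc = sqlite3ThreadCreate(&pThread, workerTask, pTaskArg);
**     if( rc==SQLITE_OK ) rc = sqlite3ThreadJoin(pThread, &pOut);
**
** sqlite3ThreadJoin() frees the SQLiteThread object in every case, so
** pThread must not be reused after the join.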
+*/ +/* #include "sqliteInt.h" */ +#if SQLITE_OS_WIN +/* # include "os_win.h" */ +#endif + +#if SQLITE_MAX_WORKER_THREADS>0 + +/********************************* Unix Pthreads ****************************/ +#if SQLITE_OS_UNIX && defined(SQLITE_MUTEX_PTHREADS) && SQLITE_THREADSAFE>0 + +#define SQLITE_THREADS_IMPLEMENTED 1 /* Prevent the single-thread code below */ +/* #include <pthread.h> */ + +/* A running thread */ +struct SQLiteThread { + pthread_t tid; /* Thread ID */ + int done; /* Set to true when thread finishes */ + void *pOut; /* Result returned by the thread */ + void *(*xTask)(void*); /* The thread routine */ + void *pIn; /* Argument to the thread */ +}; + +/* Create a new thread */ +SQLITE_PRIVATE int sqlite3ThreadCreate( + SQLiteThread **ppThread, /* OUT: Write the thread object here */ + void *(*xTask)(void*), /* Routine to run in a separate thread */ + void *pIn /* Argument passed into xTask() */ +){ + SQLiteThread *p; + int rc; + + assert( ppThread!=0 ); + assert( xTask!=0 ); + /* This routine is never used in single-threaded mode */ + assert( sqlite3GlobalConfig.bCoreMutex!=0 ); + + *ppThread = 0; + p = sqlite3Malloc(sizeof(*p)); + if( p==0 ) return SQLITE_NOMEM; + memset(p, 0, sizeof(*p)); + p->xTask = xTask; + p->pIn = pIn; + /* If the SQLITE_TESTCTRL_FAULT_INSTALL callback is registered to a + ** function that returns SQLITE_ERROR when passed the argument 200, that + ** forces worker threads to run sequentially and deterministically + ** for testing purposes. */ + if( sqlite3FaultSim(200) ){ + rc = 1; + }else{ + rc = pthread_create(&p->tid, 0, xTask, pIn); + } + if( rc ){ + p->done = 1; + p->pOut = xTask(pIn); + } + *ppThread = p; + return SQLITE_OK; +} + +/* Get the results of the thread */ +SQLITE_PRIVATE int sqlite3ThreadJoin(SQLiteThread *p, void **ppOut){ + int rc; + + assert( ppOut!=0 ); + if( NEVER(p==0) ) return SQLITE_NOMEM; + if( p->done ){ + *ppOut = p->pOut; + rc = SQLITE_OK; + }else{ + rc = pthread_join(p->tid, ppOut) ? SQLITE_ERROR : SQLITE_OK; + } + sqlite3_free(p); + return rc; +} + +#endif /* SQLITE_OS_UNIX && defined(SQLITE_MUTEX_PTHREADS) */ +/******************************** End Unix Pthreads *************************/ + + +/********************************* Win32 Threads ****************************/ +#if SQLITE_OS_WIN_THREADS + +#define SQLITE_THREADS_IMPLEMENTED 1 /* Prevent the single-thread code below */ +#include <process.h> + +/* A running thread */ +struct SQLiteThread { + void *tid; /* The thread handle */ + unsigned id; /* The thread identifier */ + void *(*xTask)(void*); /* The routine to run as a thread */ + void *pIn; /* Argument to xTask */ + void *pResult; /* Result of xTask */ +}; + +/* Thread procedure Win32 compatibility shim */ +static unsigned __stdcall sqlite3ThreadProc( + void *pArg /* IN: Pointer to the SQLiteThread structure */ +){ + SQLiteThread *p = (SQLiteThread *)pArg; + + assert( p!=0 ); +#if 0 + /* + ** This assert appears to trigger spuriously on certain + ** versions of Windows, possibly due to _beginthreadex() + ** and/or CreateThread() not fully setting their thread + ** ID parameter before starting the thread. 
+ */ + assert( p->id==GetCurrentThreadId() ); +#endif + assert( p->xTask!=0 ); + p->pResult = p->xTask(p->pIn); + + _endthreadex(0); + return 0; /* NOT REACHED */ +} + +/* Create a new thread */ +SQLITE_PRIVATE int sqlite3ThreadCreate( + SQLiteThread **ppThread, /* OUT: Write the thread object here */ + void *(*xTask)(void*), /* Routine to run in a separate thread */ + void *pIn /* Argument passed into xTask() */ +){ + SQLiteThread *p; + + assert( ppThread!=0 ); + assert( xTask!=0 ); + *ppThread = 0; + p = sqlite3Malloc(sizeof(*p)); + if( p==0 ) return SQLITE_NOMEM; + /* If the SQLITE_TESTCTRL_FAULT_INSTALL callback is registered to a + ** function that returns SQLITE_ERROR when passed the argument 200, that + ** forces worker threads to run sequentially and deterministically + ** (via the sqlite3FaultSim() term of the conditional) for testing + ** purposes. */ + if( sqlite3GlobalConfig.bCoreMutex==0 || sqlite3FaultSim(200) ){ + memset(p, 0, sizeof(*p)); + }else{ + p->xTask = xTask; + p->pIn = pIn; + p->tid = (void*)_beginthreadex(0, 0, sqlite3ThreadProc, p, 0, &p->id); + if( p->tid==0 ){ + memset(p, 0, sizeof(*p)); + } + } + if( p->xTask==0 ){ + p->id = GetCurrentThreadId(); + p->pResult = xTask(pIn); + } + *ppThread = p; + return SQLITE_OK; +} + +SQLITE_PRIVATE DWORD sqlite3Win32Wait(HANDLE hObject); /* os_win.c */ + +/* Get the results of the thread */ +SQLITE_PRIVATE int sqlite3ThreadJoin(SQLiteThread *p, void **ppOut){ + DWORD rc; + BOOL bRc; + + assert( ppOut!=0 ); + if( NEVER(p==0) ) return SQLITE_NOMEM; + if( p->xTask==0 ){ + /* assert( p->id==GetCurrentThreadId() ); */ + rc = WAIT_OBJECT_0; + assert( p->tid==0 ); + }else{ + assert( p->id!=0 && p->id!=GetCurrentThreadId() ); + rc = sqlite3Win32Wait((HANDLE)p->tid); + assert( rc!=WAIT_IO_COMPLETION ); + bRc = CloseHandle((HANDLE)p->tid); + assert( bRc ); + } + if( rc==WAIT_OBJECT_0 ) *ppOut = p->pResult; + sqlite3_free(p); + return (rc==WAIT_OBJECT_0) ? SQLITE_OK : SQLITE_ERROR; +} + +#endif /* SQLITE_OS_WIN_THREADS */ +/******************************** End Win32 Threads *************************/ + + +/********************************* Single-Threaded **************************/ +#ifndef SQLITE_THREADS_IMPLEMENTED +/* +** This implementation does not actually create a new thread. 
It does the +** work of the thread in the main thread, when either the thread is created +** or when it is joined +*/ + +/* A running thread */ +struct SQLiteThread { + void *(*xTask)(void*); /* The routine to run as a thread */ + void *pIn; /* Argument to xTask */ + void *pResult; /* Result of xTask */ +}; + +/* Create a new thread */ +SQLITE_PRIVATE int sqlite3ThreadCreate( + SQLiteThread **ppThread, /* OUT: Write the thread object here */ + void *(*xTask)(void*), /* Routine to run in a separate thread */ + void *pIn /* Argument passed into xTask() */ +){ + SQLiteThread *p; + + assert( ppThread!=0 ); + assert( xTask!=0 ); + *ppThread = 0; + p = sqlite3Malloc(sizeof(*p)); + if( p==0 ) return SQLITE_NOMEM; + if( (SQLITE_PTR_TO_INT(p)/17)&1 ){ + p->xTask = xTask; + p->pIn = pIn; + }else{ + p->xTask = 0; + p->pResult = xTask(pIn); + } + *ppThread = p; + return SQLITE_OK; +} + +/* Get the results of the thread */ +SQLITE_PRIVATE int sqlite3ThreadJoin(SQLiteThread *p, void **ppOut){ + + assert( ppOut!=0 ); + if( NEVER(p==0) ) return SQLITE_NOMEM; + if( p->xTask ){ + *ppOut = p->xTask(p->pIn); + }else{ + *ppOut = p->pResult; + } + sqlite3_free(p); + +#if defined(SQLITE_TEST) + { + void *pTstAlloc = sqlite3Malloc(10); + if (!pTstAlloc) return SQLITE_NOMEM; + sqlite3_free(pTstAlloc); + } +#endif + + return SQLITE_OK; +} + +#endif /* !defined(SQLITE_THREADS_IMPLEMENTED) */ +/****************************** End Single-Threaded *************************/ +#endif /* SQLITE_MAX_WORKER_THREADS>0 */ + +/************** End of threads.c *********************************************/ /************** Begin file utf.c *********************************************/ /* ** 2004 April 13 ** ** The author disclaims copyright to this source code. In place of @@ -18124,441 +24737,21 @@ ** BOM or Byte Order Mark: ** 0xff 0xfe little-endian utf-16 follows ** 0xfe 0xff big-endian utf-16 follows ** */ -/************** Include vdbeInt.h in the middle of utf.c *********************/ -/************** Begin file vdbeInt.h *****************************************/ -/* -** 2003 September 6 -** -** The author disclaims copyright to this source code. In place of -** a legal notice, here is a blessing: -** -** May you do good and not evil. -** May you find forgiveness for yourself and forgive others. -** May you share freely, never taking more than you give. -** -************************************************************************* -** This is the header file for information that is private to the -** VDBE. This information used to all be at the top of the single -** source code file "vdbe.c". When that file became too big (over -** 6000 lines long) it was split up into several smaller files and -** this header information was factored out. -*/ -#ifndef _VDBEINT_H_ -#define _VDBEINT_H_ - -/* -** SQL is translated into a sequence of instructions to be -** executed by a virtual machine. Each instruction is an instance -** of the following structure. -*/ -typedef struct VdbeOp Op; - -/* -** Boolean values -*/ -typedef unsigned char Bool; - -/* -** A cursor is a pointer into a single BTree within a database file. -** The cursor can seek to a BTree entry with a particular key, or -** loop over all entries of the Btree. You can also insert new BTree -** entries or retrieve the key or data from the entry that the cursor -** is currently pointing to. -** -** Every cursor that the virtual machine has open is represented by an -** instance of the following structure. 
-** -** If the VdbeCursor.isTriggerRow flag is set it means that this cursor is -** really a single row that represents the NEW or OLD pseudo-table of -** a row trigger. The data for the row is stored in VdbeCursor.pData and -** the rowid is in VdbeCursor.iKey. -*/ -struct VdbeCursor { - BtCursor *pCursor; /* The cursor structure of the backend */ - int iDb; /* Index of cursor database in db->aDb[] (or -1) */ - i64 lastRowid; /* Last rowid from a Next or NextIdx operation */ - Bool zeroed; /* True if zeroed out and ready for reuse */ - Bool rowidIsValid; /* True if lastRowid is valid */ - Bool atFirst; /* True if pointing to first entry */ - Bool useRandomRowid; /* Generate new record numbers semi-randomly */ - Bool nullRow; /* True if pointing to a row with no data */ - Bool deferredMoveto; /* A call to sqlite3BtreeMoveto() is needed */ - Bool isTable; /* True if a table requiring integer keys */ - Bool isIndex; /* True if an index containing keys only - no data */ - i64 movetoTarget; /* Argument to the deferred sqlite3BtreeMoveto() */ - Btree *pBt; /* Separate file holding temporary table */ - int pseudoTableReg; /* Register holding pseudotable content. */ - KeyInfo *pKeyInfo; /* Info about index keys needed by index cursors */ - int nField; /* Number of fields in the header */ - i64 seqCount; /* Sequence counter */ - sqlite3_vtab_cursor *pVtabCursor; /* The cursor for a virtual table */ - const sqlite3_module *pModule; /* Module for cursor pVtabCursor */ - - /* Result of last sqlite3BtreeMoveto() done by an OP_NotExists or - ** OP_IsUnique opcode on this cursor. */ - int seekResult; - - /* Cached information about the header for the data record that the - ** cursor is currently pointing to. Only valid if cacheStatus matches - ** Vdbe.cacheCtr. Vdbe.cacheCtr will never take on the value of - ** CACHE_STALE and so setting cacheStatus=CACHE_STALE guarantees that - ** the cache is out of date. - ** - ** aRow might point to (ephemeral) data for the current row, or it might - ** be NULL. - */ - u32 cacheStatus; /* Cache is valid if this matches Vdbe.cacheCtr */ - int payloadSize; /* Total number of bytes in the record */ - u32 *aType; /* Type values for all entries in the record */ - u32 *aOffset; /* Cached offsets to the start of each columns data */ - u8 *aRow; /* Data for the current row, if all on one page */ -}; -typedef struct VdbeCursor VdbeCursor; - -/* -** When a sub-program is executed (OP_Program), a structure of this type -** is allocated to store the current value of the program counter, as -** well as the current memory cell array and various other frame specific -** values stored in the Vdbe struct. When the sub-program is finished, -** these values are copied back to the Vdbe from the VdbeFrame structure, -** restoring the state of the VM to as it was before the sub-program -** began executing. -** -** Frames are stored in a linked list headed at Vdbe.pParent. Vdbe.pParent -** is the parent of the current frame, or zero if the current frame -** is the main Vdbe program. 
-*/ -typedef struct VdbeFrame VdbeFrame; -struct VdbeFrame { - Vdbe *v; /* VM this frame belongs to */ - int pc; /* Program Counter */ - Op *aOp; /* Program instructions */ - int nOp; /* Size of aOp array */ - Mem *aMem; /* Array of memory cells */ - int nMem; /* Number of entries in aMem */ - VdbeCursor **apCsr; /* Element of Vdbe cursors */ - u16 nCursor; /* Number of entries in apCsr */ - void *token; /* Copy of SubProgram.token */ - int nChildMem; /* Number of memory cells for child frame */ - int nChildCsr; /* Number of cursors for child frame */ - i64 lastRowid; /* Last insert rowid (sqlite3.lastRowid) */ - int nChange; /* Statement changes (Vdbe.nChanges) */ - VdbeFrame *pParent; /* Parent of this frame */ -}; - -#define VdbeFrameMem(p) ((Mem *)&((u8 *)p)[ROUND8(sizeof(VdbeFrame))]) - -/* -** A value for VdbeCursor.cacheValid that means the cache is always invalid. -*/ -#define CACHE_STALE 0 - -/* -** Internally, the vdbe manipulates nearly all SQL values as Mem -** structures. Each Mem struct may cache multiple representations (string, -** integer etc.) of the same value. A value (and therefore Mem structure) -** has the following properties: -** -** Each value has a manifest type. The manifest type of the value stored -** in a Mem struct is returned by the MemType(Mem*) macro. The type is -** one of SQLITE_NULL, SQLITE_INTEGER, SQLITE_REAL, SQLITE_TEXT or -** SQLITE_BLOB. -*/ -struct Mem { - union { - i64 i; /* Integer value. */ - int nZero; /* Used when bit MEM_Zero is set in flags */ - FuncDef *pDef; /* Used only when flags==MEM_Agg */ - RowSet *pRowSet; /* Used only when flags==MEM_RowSet */ - VdbeFrame *pFrame; /* Used when flags==MEM_Frame */ - } u; - double r; /* Real value */ - sqlite3 *db; /* The associated database connection */ - char *z; /* String or BLOB value */ - int n; /* Number of characters in string value, excluding '\0' */ - u16 flags; /* Some combination of MEM_Null, MEM_Str, MEM_Dyn, etc. */ - u8 type; /* One of SQLITE_NULL, SQLITE_TEXT, SQLITE_INTEGER, etc */ - u8 enc; /* SQLITE_UTF8, SQLITE_UTF16BE, SQLITE_UTF16LE */ - void (*xDel)(void *); /* If not null, call this function to delete Mem.z */ - char *zMalloc; /* Dynamic buffer allocated by sqlite3_malloc() */ -}; - -/* One or more of the following flags are set to indicate the validOK -** representations of the value stored in the Mem struct. -** -** If the MEM_Null flag is set, then the value is an SQL NULL value. -** No other flags may be set in this case. -** -** If the MEM_Str flag is set then Mem.z points at a string representation. -** Usually this is encoded in the same unicode encoding as the main -** database (see below for exceptions). If the MEM_Term flag is also -** set, then the string is nul terminated. The MEM_Int and MEM_Real -** flags may coexist with the MEM_Str flag. -** -** Multiple of these values can appear in Mem.flags. But only one -** at a time can appear in Mem.type. -*/ -#define MEM_Null 0x0001 /* Value is NULL */ -#define MEM_Str 0x0002 /* Value is a string */ -#define MEM_Int 0x0004 /* Value is an integer */ -#define MEM_Real 0x0008 /* Value is a real number */ -#define MEM_Blob 0x0010 /* Value is a BLOB */ -#define MEM_RowSet 0x0020 /* Value is a RowSet object */ -#define MEM_Frame 0x0040 /* Value is a VdbeFrame object */ -#define MEM_TypeMask 0x00ff /* Mask of type bits */ - -/* Whenever Mem contains a valid string or blob representation, one of -** the following flags must be set to determine the memory management -** policy for Mem.z. 
The MEM_Term flag tells us whether or not the -** string is \000 or \u0000 terminated -*/ -#define MEM_Term 0x0200 /* String rep is nul terminated */ -#define MEM_Dyn 0x0400 /* Need to call sqliteFree() on Mem.z */ -#define MEM_Static 0x0800 /* Mem.z points to a static string */ -#define MEM_Ephem 0x1000 /* Mem.z points to an ephemeral string */ -#define MEM_Agg 0x2000 /* Mem.z points to an agg function context */ -#define MEM_Zero 0x4000 /* Mem.i contains count of 0s appended to blob */ - -#ifdef SQLITE_OMIT_INCRBLOB - #undef MEM_Zero - #define MEM_Zero 0x0000 -#endif - - -/* -** Clear any existing type flags from a Mem and replace them with f -*/ -#define MemSetTypeFlag(p, f) \ - ((p)->flags = ((p)->flags&~(MEM_TypeMask|MEM_Zero))|f) - - -/* A VdbeFunc is just a FuncDef (defined in sqliteInt.h) that contains -** additional information about auxiliary information bound to arguments -** of the function. This is used to implement the sqlite3_get_auxdata() -** and sqlite3_set_auxdata() APIs. The "auxdata" is some auxiliary data -** that can be associated with a constant argument to a function. This -** allows functions such as "regexp" to compile their constant regular -** expression argument once and reused the compiled code for multiple -** invocations. -*/ -struct VdbeFunc { - FuncDef *pFunc; /* The definition of the function */ - int nAux; /* Number of entries allocated for apAux[] */ - struct AuxData { - void *pAux; /* Aux data for the i-th argument */ - void (*xDelete)(void *); /* Destructor for the aux data */ - } apAux[1]; /* One slot for each function argument */ -}; - -/* -** The "context" argument for a installable function. A pointer to an -** instance of this structure is the first argument to the routines used -** implement the SQL functions. -** -** There is a typedef for this structure in sqlite.h. So all routines, -** even the public interface to SQLite, can use a pointer to this structure. -** But this file is the only place where the internal details of this -** structure are known. -** -** This structure is defined inside of vdbeInt.h because it uses substructures -** (Mem) which are only defined there. -*/ -struct sqlite3_context { - FuncDef *pFunc; /* Pointer to function information. MUST BE FIRST */ - VdbeFunc *pVdbeFunc; /* Auxilary data, if created. */ - Mem s; /* The return value is stored here */ - Mem *pMem; /* Memory cell used to store aggregate context */ - int isError; /* Error code returned by the function. */ - CollSeq *pColl; /* Collating sequence */ -}; - -/* -** A Set structure is used for quick testing to see if a value -** is part of a small set. Sets are used to implement code like -** this: -** x.y IN ('hi','hoo','hum') -*/ -typedef struct Set Set; -struct Set { - Hash hash; /* A set is just a hash table */ - HashElem *prev; /* Previously accessed hash elemen */ -}; - -/* -** An instance of the virtual machine. This structure contains the complete -** state of the virtual machine. -** -** The "sqlite3_stmt" structure pointer that is returned by sqlite3_compile() -** is really a pointer to an instance of this structure. -** -** The Vdbe.inVtabMethod variable is set to non-zero for the duration of -** any virtual table method invocations made by the vdbe program. It is -** set to 2 for xDestroy method calls and 1 for all other methods. 
This -** variable is used for two purposes: to allow xDestroy methods to execute -** "DROP TABLE" statements and to prevent some nasty side effects of -** malloc failure when SQLite is invoked recursively by a virtual table -** method function. -*/ -struct Vdbe { - sqlite3 *db; /* The database connection that owns this statement */ - Vdbe *pPrev,*pNext; /* Linked list of VDBEs with the same Vdbe.db */ - int nOp; /* Number of instructions in the program */ - int nOpAlloc; /* Number of slots allocated for aOp[] */ - Op *aOp; /* Space to hold the virtual machine's program */ - int nLabel; /* Number of labels used */ - int nLabelAlloc; /* Number of slots allocated in aLabel[] */ - int *aLabel; /* Space to hold the labels */ - Mem **apArg; /* Arguments to currently executing user function */ - Mem *aColName; /* Column names to return */ - Mem *pResultSet; /* Pointer to an array of results */ - u16 nResColumn; /* Number of columns in one row of the result set */ - u16 nCursor; /* Number of slots in apCsr[] */ - VdbeCursor **apCsr; /* One element of this array for each open cursor */ - u8 errorAction; /* Recovery action to do in case of an error */ - u8 okVar; /* True if azVar[] has been initialized */ - ynVar nVar; /* Number of entries in aVar[] */ - Mem *aVar; /* Values for the OP_Variable opcode. */ - char **azVar; /* Name of variables */ - u32 magic; /* Magic number for sanity checking */ - int nMem; /* Number of memory locations currently allocated */ - Mem *aMem; /* The memory locations */ - u32 cacheCtr; /* VdbeCursor row cache generation counter */ - int pc; /* The program counter */ - int rc; /* Value to return */ - char *zErrMsg; /* Error message written here */ - u8 explain; /* True if EXPLAIN present on SQL command */ - u8 changeCntOn; /* True to update the change-counter */ - u8 expired; /* True if the VM needs to be recompiled */ - u8 runOnlyOnce; /* Automatically expire on reset */ - u8 minWriteFileFormat; /* Minimum file format for writable database files */ - u8 inVtabMethod; /* See comments above */ - u8 usesStmtJournal; /* True if uses a statement journal */ - u8 readOnly; /* True for read-only statements */ - u8 isPrepareV2; /* True if prepared with prepare_v2() */ - int nChange; /* Number of db changes made since last reset */ - int btreeMask; /* Bitmask of db->aDb[] entries referenced */ - i64 startTime; /* Time when query started - used for profiling */ - BtreeMutexArray aMutex; /* An array of Btree used here and needing locks */ - int aCounter[3]; /* Counters used by sqlite3_stmt_status() */ - char *zSql; /* Text of the SQL statement that generated this */ - void *pFree; /* Free this when deleting the vdbe */ - i64 nFkConstraint; /* Number of imm. FK constraints this VM */ - i64 nStmtDefCons; /* Number of def. 
constraints when stmt started */ - int iStatement; /* Statement number (or 0 if has not opened stmt) */ -#ifdef SQLITE_DEBUG - FILE *trace; /* Write an execution trace here, if not NULL */ -#endif - VdbeFrame *pFrame; /* Parent frame */ - int nFrame; /* Number of frames in pFrame list */ - u32 expmask; /* Binding to these vars invalidates VM */ -}; - -/* -** The following are allowed values for Vdbe.magic -*/ -#define VDBE_MAGIC_INIT 0x26bceaa5 /* Building a VDBE program */ -#define VDBE_MAGIC_RUN 0xbdf20da3 /* VDBE is ready to execute */ -#define VDBE_MAGIC_HALT 0x519c2973 /* VDBE has completed execution */ -#define VDBE_MAGIC_DEAD 0xb606c3c8 /* The VDBE has been deallocated */ - -/* -** Function prototypes -*/ -SQLITE_PRIVATE void sqlite3VdbeFreeCursor(Vdbe *, VdbeCursor*); -void sqliteVdbePopStack(Vdbe*,int); -SQLITE_PRIVATE int sqlite3VdbeCursorMoveto(VdbeCursor*); -#if defined(SQLITE_DEBUG) || defined(VDBE_PROFILE) -SQLITE_PRIVATE void sqlite3VdbePrintOp(FILE*, int, Op*); -#endif -SQLITE_PRIVATE u32 sqlite3VdbeSerialTypeLen(u32); -SQLITE_PRIVATE u32 sqlite3VdbeSerialType(Mem*, int); -SQLITE_PRIVATE u32 sqlite3VdbeSerialPut(unsigned char*, int, Mem*, int); -SQLITE_PRIVATE u32 sqlite3VdbeSerialGet(const unsigned char*, u32, Mem*); -SQLITE_PRIVATE void sqlite3VdbeDeleteAuxData(VdbeFunc*, int); - -int sqlite2BtreeKeyCompare(BtCursor *, const void *, int, int, int *); -SQLITE_PRIVATE int sqlite3VdbeIdxKeyCompare(VdbeCursor*,UnpackedRecord*,int*); -SQLITE_PRIVATE int sqlite3VdbeIdxRowid(sqlite3*, BtCursor *, i64 *); -SQLITE_PRIVATE int sqlite3MemCompare(const Mem*, const Mem*, const CollSeq*); -SQLITE_PRIVATE int sqlite3VdbeExec(Vdbe*); -SQLITE_PRIVATE int sqlite3VdbeList(Vdbe*); -SQLITE_PRIVATE int sqlite3VdbeHalt(Vdbe*); -SQLITE_PRIVATE int sqlite3VdbeChangeEncoding(Mem *, int); -SQLITE_PRIVATE int sqlite3VdbeMemTooBig(Mem*); -SQLITE_PRIVATE int sqlite3VdbeMemCopy(Mem*, const Mem*); -SQLITE_PRIVATE void sqlite3VdbeMemShallowCopy(Mem*, const Mem*, int); -SQLITE_PRIVATE void sqlite3VdbeMemMove(Mem*, Mem*); -SQLITE_PRIVATE int sqlite3VdbeMemNulTerminate(Mem*); -SQLITE_PRIVATE int sqlite3VdbeMemSetStr(Mem*, const char*, int, u8, void(*)(void*)); -SQLITE_PRIVATE void sqlite3VdbeMemSetInt64(Mem*, i64); -#ifdef SQLITE_OMIT_FLOATING_POINT -# define sqlite3VdbeMemSetDouble sqlite3VdbeMemSetInt64 -#else -SQLITE_PRIVATE void sqlite3VdbeMemSetDouble(Mem*, double); -#endif -SQLITE_PRIVATE void sqlite3VdbeMemSetNull(Mem*); -SQLITE_PRIVATE void sqlite3VdbeMemSetZeroBlob(Mem*,int); -SQLITE_PRIVATE void sqlite3VdbeMemSetRowSet(Mem*); -SQLITE_PRIVATE int sqlite3VdbeMemMakeWriteable(Mem*); -SQLITE_PRIVATE int sqlite3VdbeMemStringify(Mem*, int); -SQLITE_PRIVATE i64 sqlite3VdbeIntValue(Mem*); -SQLITE_PRIVATE int sqlite3VdbeMemIntegerify(Mem*); -SQLITE_PRIVATE double sqlite3VdbeRealValue(Mem*); -SQLITE_PRIVATE void sqlite3VdbeIntegerAffinity(Mem*); -SQLITE_PRIVATE int sqlite3VdbeMemRealify(Mem*); -SQLITE_PRIVATE int sqlite3VdbeMemNumerify(Mem*); -SQLITE_PRIVATE int sqlite3VdbeMemFromBtree(BtCursor*,int,int,int,Mem*); -SQLITE_PRIVATE void sqlite3VdbeMemRelease(Mem *p); -SQLITE_PRIVATE void sqlite3VdbeMemReleaseExternal(Mem *p); -SQLITE_PRIVATE int sqlite3VdbeMemFinalize(Mem*, FuncDef*); -SQLITE_PRIVATE const char *sqlite3OpcodeName(int); -SQLITE_PRIVATE int sqlite3VdbeMemGrow(Mem *pMem, int n, int preserve); -SQLITE_PRIVATE int sqlite3VdbeCloseStatement(Vdbe *, int); -SQLITE_PRIVATE void sqlite3VdbeFrameDelete(VdbeFrame*); -SQLITE_PRIVATE int sqlite3VdbeFrameRestore(VdbeFrame *); -SQLITE_PRIVATE void 
sqlite3VdbeMemStoreType(Mem *pMem); - -#ifndef SQLITE_OMIT_FOREIGN_KEY -SQLITE_PRIVATE int sqlite3VdbeCheckFk(Vdbe *, int); -#else -# define sqlite3VdbeCheckFk(p,i) 0 -#endif - -#ifndef SQLITE_OMIT_SHARED_CACHE -SQLITE_PRIVATE void sqlite3VdbeMutexArrayEnter(Vdbe *p); -#else -# define sqlite3VdbeMutexArrayEnter(p) -#endif - -SQLITE_PRIVATE int sqlite3VdbeMemTranslate(Mem*, u8); -#ifdef SQLITE_DEBUG -SQLITE_PRIVATE void sqlite3VdbePrintSql(Vdbe*); -SQLITE_PRIVATE void sqlite3VdbeMemPrettyPrint(Mem *pMem, char *zBuf); -#endif -SQLITE_PRIVATE int sqlite3VdbeMemHandleBom(Mem *pMem); - -#ifndef SQLITE_OMIT_INCRBLOB -SQLITE_PRIVATE int sqlite3VdbeMemExpandBlob(Mem *); -#else - #define sqlite3VdbeMemExpandBlob(x) SQLITE_OK -#endif - -#endif /* !defined(_VDBEINT_H_) */ - -/************** End of vdbeInt.h *********************************************/ -/************** Continuing where we left off in utf.c ************************/ - -#ifndef SQLITE_AMALGAMATION +/* #include "sqliteInt.h" */ +/* #include <assert.h> */ +/* #include "vdbeInt.h" */ + +#if !defined(SQLITE_AMALGAMATION) && SQLITE_BYTEORDER==0 /* ** The following constant value is used by the SQLITE_BIGENDIAN and ** SQLITE_LITTLEENDIAN macros. */ SQLITE_PRIVATE const int sqlite3one = 1; -#endif /* SQLITE_AMALGAMATION */ +#endif /* SQLITE_AMALGAMATION && SQLITE_BYTEORDER==0 */ /* ** This lookup table is used to help decode the first byte of ** a multi-byte UTF8 character. */ @@ -18659,12 +24852,12 @@ ** * Bytes in the range of 0x80 through 0xbf which occur as the first ** byte of a character are interpreted as single-byte characters ** and rendered as themselves even though they are technically ** invalid characters. ** -** * This routine accepts an infinite number of different UTF8 encodings -** for unicode values 0x80 and greater. It do not change over-length +** * This routine accepts over-length UTF8 encodings +** for unicode values 0x80 and greater. It does not change over-length ** encodings to 0xfffd as some systems recommend. */ #define READ_UTF8(zIn, zTerm, c) \ c = *(zIn++); \ if( c>=0xc0 ){ \ @@ -18674,30 +24867,28 @@ } \ if( c<0x80 \ || (c&0xFFFFF800)==0xD800 \ || (c&0xFFFFFFFE)==0xFFFE ){ c = 0xFFFD; } \ } -SQLITE_PRIVATE int sqlite3Utf8Read( - const unsigned char *zIn, /* First byte of UTF-8 character */ - const unsigned char **pzNext /* Write first byte past UTF-8 char here */ +SQLITE_PRIVATE u32 sqlite3Utf8Read( + const unsigned char **pz /* Pointer to string from which to read char */ ){ - int c; + unsigned int c; /* Same as READ_UTF8() above but without the zTerm parameter. ** For this routine, we assume the UTF8 string is always zero-terminated. */ - c = *(zIn++); + c = *((*pz)++); if( c>=0xc0 ){ c = sqlite3Utf8Trans1[c-0xc0]; - while( (*zIn & 0xc0)==0x80 ){ - c = (c<<6) + (0x3f & *(zIn++)); + while( (*(*pz) & 0xc0)==0x80 ){ + c = (c<<6) + (0x3f & *((*pz)++)); } if( c<0x80 || (c&0xFFFFF800)==0xD800 || (c&0xFFFFFFFE)==0xFFFE ){ c = 0xFFFD; } } - *pzNext = zIn; return c; } @@ -18712,11 +24903,11 @@ /* ** This routine transforms the internal text encoding used by pMem to ** desiredEnc. It is an error if the string is already of the desired ** encoding, or if *pMem does not contain a string value. 
*/ -SQLITE_PRIVATE int sqlite3VdbeMemTranslate(Mem *pMem, u8 desiredEnc){ +SQLITE_PRIVATE SQLITE_NOINLINE int sqlite3VdbeMemTranslate(Mem *pMem, u8 desiredEnc){ int len; /* Maximum length of output string in bytes */ unsigned char *zOut; /* Output buffer */ unsigned char *zIn; /* Input iterator */ unsigned char *zTerm; /* End of input */ unsigned char *z; /* Output iterator */ @@ -18794,19 +24985,17 @@ if( pMem->enc==SQLITE_UTF8 ){ if( desiredEnc==SQLITE_UTF16LE ){ /* UTF-8 -> UTF-16 Little-endian */ while( zIn<zTerm ){ - /* c = sqlite3Utf8Read(zIn, zTerm, (const u8**)&zIn); */ READ_UTF8(zIn, zTerm, c); WRITE_UTF16LE(z, c); } }else{ assert( desiredEnc==SQLITE_UTF16BE ); /* UTF-8 -> UTF-16 Big-endian */ while( zIn<zTerm ){ - /* c = sqlite3Utf8Read(zIn, zTerm, (const u8**)&zIn); */ READ_UTF8(zIn, zTerm, c); WRITE_UTF16BE(z, c); } } pMem->n = (int)(z - zOut); @@ -18829,16 +25018,17 @@ pMem->n = (int)(z - zOut); } *z = 0; assert( (pMem->n+(desiredEnc==SQLITE_UTF8?1:2))<=len ); + c = pMem->flags; sqlite3VdbeMemRelease(pMem); - pMem->flags &= ~(MEM_Static|MEM_Dyn|MEM_Ephem); + pMem->flags = MEM_Str|MEM_Term|(c&(MEM_AffMask|MEM_Subtype)); pMem->enc = desiredEnc; - pMem->flags |= (MEM_Term|MEM_Dyn); pMem->z = (char*)zOut; pMem->zMalloc = pMem->z; + pMem->szMalloc = sqlite3DbMallocSize(pMem->db, pMem->z); translate_out: #if defined(TRANSLATE_TRACE) && defined(SQLITE_DEBUG) { char zBuf[100]; @@ -18921,20 +25111,20 @@ ** Translate UTF-8 to UTF-8. ** ** This has the effect of making sure that the string is well-formed ** UTF-8. Miscoded characters are removed. ** -** The translation is done in-place (since it is impossible for the -** correct UTF-8 encoding to be longer than a malformed encoding). +** The translation is done in-place and aborted if the output +** overruns the input. */ SQLITE_PRIVATE int sqlite3Utf8To8(unsigned char *zIn){ unsigned char *zOut = zIn; unsigned char *zStart = zIn; u32 c; - while( zIn[0] ){ - c = sqlite3Utf8Read(zIn, (const u8**)&zIn); + while( zIn[0] && zOut<=zIn ){ + c = sqlite3Utf8Read((const u8**)&zIn); if( c!=0xfffd ){ WRITE_UTF8(zOut, c); } } *zOut = 0; @@ -18960,41 +25150,14 @@ sqlite3VdbeMemRelease(&m); m.z = 0; } assert( (m.flags & MEM_Term)!=0 || db->mallocFailed ); assert( (m.flags & MEM_Str)!=0 || db->mallocFailed ); - assert( (m.flags & MEM_Dyn)!=0 || db->mallocFailed ); assert( m.z || db->mallocFailed ); return m.z; } -/* -** Convert a UTF-8 string to the UTF-16 encoding specified by parameter -** enc. A pointer to the new string is returned, and the value of *pnOut -** is set to the length of the returned string in bytes. The call should -** arrange to call sqlite3DbFree() on the returned pointer when it is -** no longer required. -** -** If a malloc failure occurs, NULL is returned and the db.mallocFailed -** flag set. -*/ -#ifdef SQLITE_ENABLE_STAT2 -SQLITE_PRIVATE char *sqlite3Utf8to16(sqlite3 *db, u8 enc, char *z, int n, int *pnOut){ - Mem m; - memset(&m, 0, sizeof(m)); - m.db = db; - sqlite3VdbeMemSetStr(&m, z, n, SQLITE_UTF8, SQLITE_STATIC); - if( sqlite3VdbeMemTranslate(&m, enc) ){ - assert( db->mallocFailed ); - return 0; - } - assert( m.z==m.zMalloc ); - *pnOut = m.n; - return m.z; -} -#endif - /* ** zIn is a UTF-16 encoded unicode string at least nChar characters long. ** Return the number of bytes in the first nChar unicode characters ** in pZ. nChar must be non-negative. 
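**
** A hedged worked example: for the UTF-16LE encoding of "abc" and
** nChar==2 the return value is 4, because each of the first two
** characters occupies a single 2-byte code unit.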
*/ @@ -19035,11 +25198,11 @@ WRITE_UTF8(z, i); n = (int)(z-zBuf); assert( n>0 && n<=4 ); z[0] = 0; z = zBuf; - c = sqlite3Utf8Read(z, (const u8**)&z); + c = sqlite3Utf8Read((const u8**)&z); t = i; if( i>=0xD800 && i<=0xDFFF ) t = 0xFFFD; if( (i&0xFFFFFFFE)==0xFFFE ) t = 0xFFFD; assert( c==t ); assert( (z-zBuf)==n ); @@ -19089,21 +25252,41 @@ ** ** This file contains functions for allocating memory, comparing ** strings, and stuff like that. ** */ -#ifdef SQLITE_HAVE_ISNAN +/* #include "sqliteInt.h" */ +/* #include <stdarg.h> */ +#if HAVE_ISNAN || SQLITE_HAVE_ISNAN # include <math.h> #endif /* ** Routine needed to support the testcase() macro. */ #ifdef SQLITE_COVERAGE_TEST SQLITE_PRIVATE void sqlite3Coverage(int x){ - static int dummy = 0; - dummy += x; + static unsigned dummy = 0; + dummy += (unsigned)x; +} +#endif + +/* +** Give a callback to the test harness that can be used to simulate faults +** in places where it is difficult or expensive to do so purely by means +** of inputs. +** +** The intent of the integer argument is to let the fault simulator know +** which of multiple sqlite3FaultSim() calls has been hit. +** +** Return whatever integer value the test callback returns, or return +** SQLITE_OK if no test callback is installed. +*/ +#ifndef SQLITE_OMIT_BUILTIN_TEST +SQLITE_PRIVATE int sqlite3FaultSim(int iTest){ + int (*xCallback)(int) = sqlite3GlobalConfig.xTestCallback; + return xCallback ? xCallback(iTest) : SQLITE_OK; } #endif #ifndef SQLITE_OMIT_FLOATING_POINT /* @@ -19112,11 +25295,11 @@ ** Use the math library isnan() function if compiled with SQLITE_HAVE_ISNAN. ** Otherwise, we have our own implementation that works on most systems. */ SQLITE_PRIVATE int sqlite3IsNaN(double x){ int rc; /* The value return */ -#if !defined(SQLITE_HAVE_ISNAN) +#if !SQLITE_HAVE_ISNAN && !HAVE_ISNAN /* ** Systems that support the isnan() library function should probably ** make use of it by compiling with -DSQLITE_HAVE_ISNAN. But we have ** found that many systems do not have a working isnan() function so ** this implementation is provided as an alternative. @@ -19142,13 +25325,13 @@ # error SQLite will not work correctly with the -ffast-math option of GCC. #endif volatile double y = x; volatile double z = y; rc = (y!=z); -#else /* if defined(SQLITE_HAVE_ISNAN) */ +#else /* if HAVE_ISNAN */ rc = isnan(x); -#endif /* SQLITE_HAVE_ISNAN */ +#endif /* HAVE_ISNAN */ testcase( rc ); return rc; } #endif /* SQLITE_OMIT_FLOATING_POINT */ @@ -19159,14 +25342,21 @@ ** The value returned will never be negative. Nor will it ever be greater ** than the actual length of the string. For very long strings (greater ** than 1GiB) the value returned might be less than the true string length. */ SQLITE_PRIVATE int sqlite3Strlen30(const char *z){ - const char *z2 = z; if( z==0 ) return 0; - while( *z2 ){ z2++; } - return 0x3fffffff & (int)(z2 - z); + return 0x3fffffff & (int)strlen(z); +} + +/* +** Set the current error code to err_code and clear any prior error message. +*/ +SQLITE_PRIVATE void sqlite3Error(sqlite3 *db, int err_code){ + assert( db!=0 ); + db->errCode = err_code; + if( db->pErr ) sqlite3ValueSetNull(db->pErr); } /* ** Set the most recent error code and error string for the sqlite ** handle "db". The error code is set to "err_code". @@ -19186,23 +25376,22 @@ ** ** To clear the most recent error for sqlite handle "db", sqlite3Error ** should be called with err_code set to SQLITE_OK and zFormat set ** to NULL. 
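**
** A hedged sketch of how the two routines divide this work after the
** change below (zTab is illustrative only):
**
**     sqlite3ErrorWithMsg(db, SQLITE_ERROR, "no such table: %s", zTab);
**     sqlite3Error(db, SQLITE_OK);
**
** The first call records an error code together with a formatted
** message on the database handle; the second clears the error state
** without allocating a message.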
*/ -SQLITE_PRIVATE void sqlite3Error(sqlite3 *db, int err_code, const char *zFormat, ...){ - if( db && (db->pErr || (db->pErr = sqlite3ValueNew(db))!=0) ){ - db->errCode = err_code; - if( zFormat ){ - char *z; - va_list ap; - va_start(ap, zFormat); - z = sqlite3VMPrintf(db, zFormat, ap); - va_end(ap); - sqlite3ValueSetStr(db->pErr, -1, z, SQLITE_UTF8, SQLITE_DYNAMIC); - }else{ - sqlite3ValueSetStr(db->pErr, 0, 0, SQLITE_UTF8, SQLITE_STATIC); - } +SQLITE_PRIVATE void sqlite3ErrorWithMsg(sqlite3 *db, int err_code, const char *zFormat, ...){ + assert( db!=0 ); + db->errCode = err_code; + if( zFormat==0 ){ + sqlite3Error(db, err_code); + }else if( db->pErr || (db->pErr = sqlite3ValueNew(db))!=0 ){ + char *z; + va_list ap; + va_start(ap, zFormat); + z = sqlite3VMPrintf(db, zFormat, ap); + va_end(ap); + sqlite3ValueSetStr(db->pErr, -1, z, SQLITE_UTF8, SQLITE_DYNAMIC); } } /* ** Add an error message to pParse->zErrMsg and increment pParse->nErr. @@ -19212,16 +25401,16 @@ ** %z A string that should be freed after use ** %d Insert an integer ** %T Insert a token ** %S Insert the first element of a SrcList ** -** This function should be used to report any error that occurs whilst +** This function should be used to report any error that occurs while ** compiling an SQL statement (i.e. within sqlite3_prepare()). The ** last thing the sqlite3_prepare() function does is copy the error ** stored by this function into the database handle using sqlite3Error(). -** Function sqlite3Error() should be used during statement execution -** (sqlite3_step() etc.). +** Functions sqlite3Error() or sqlite3ErrorWithMsg() should be used +** during statement execution (sqlite3_step() etc.). */ SQLITE_PRIVATE void sqlite3ErrorMsg(Parse *pParse, const char *zFormat, ...){ char *zMsg; va_list ap; sqlite3 *db = pParse->db; @@ -19250,11 +25439,11 @@ ** The return value is -1 if no dequoting occurs or the length of the ** dequoted string, exclusive of the zero terminator, if dequoting does ** occur. ** ** 2002-Feb-14: This routine is extended to remove MS-Access style -** brackets from around identifers. For example: "[a-b-c]" becomes +** brackets from around identifiers. For example: "[a-b-c]" becomes ** "a-b-c". */ SQLITE_PRIVATE int sqlite3Dequote(char *z){ char quote; int i, j; @@ -19265,11 +25454,12 @@ case '"': break; case '`': break; /* For MySQL compatibility */ case '[': quote = ']'; break; /* For MS SqlServer compatibility */ default: return -1; } - for(i=1, j=0; ALWAYS(z[i]); i++){ + for(i=1, j=0;; i++){ + assert( z[i] ); if( z[i]==quote ){ if( z[i+1]==quote ){ z[j++] = quote; i++; }else{ @@ -19280,149 +25470,175 @@ } } z[j] = 0; return j; } + +/* +** Generate a Token object from a string +*/ +SQLITE_PRIVATE void sqlite3TokenInit(Token *p, char *z){ + p->z = z; + p->n = sqlite3Strlen30(z); +} /* Convenient short-hand */ #define UpperToLower sqlite3UpperToLower /* ** Some systems have stricmp(). Others have strcasecmp(). Because ** there is no consistency, we will define our own. +** +** IMPLEMENTATION-OF: R-30243-02494 The sqlite3_stricmp() and +** sqlite3_strnicmp() APIs allow applications and extensions to compare +** the contents of two buffers containing UTF-8 strings in a +** case-independent fashion, using the same definition of "case +** independence" that SQLite uses internally when comparing identifiers. 
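**
** A hedged illustration of that behavior:
**
**     sqlite3_stricmp("ABC", "abc")            == 0
**     sqlite3_strnicmp("ABCdef", "abcXYZ", 3)  == 0
**     sqlite3_stricmp("abc", 0)                >  0
**
** NULL pointers are accepted and compare less than any non-NULL
** string, as the code below shows.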
*/ -SQLITE_PRIVATE int sqlite3StrICmp(const char *zLeft, const char *zRight){ +SQLITE_API int SQLITE_STDCALL sqlite3_stricmp(const char *zLeft, const char *zRight){ register unsigned char *a, *b; + if( zLeft==0 ){ + return zRight ? -1 : 0; + }else if( zRight==0 ){ + return 1; + } a = (unsigned char *)zLeft; b = (unsigned char *)zRight; while( *a!=0 && UpperToLower[*a]==UpperToLower[*b]){ a++; b++; } return UpperToLower[*a] - UpperToLower[*b]; } -SQLITE_API int sqlite3_strnicmp(const char *zLeft, const char *zRight, int N){ +SQLITE_API int SQLITE_STDCALL sqlite3_strnicmp(const char *zLeft, const char *zRight, int N){ register unsigned char *a, *b; + if( zLeft==0 ){ + return zRight ? -1 : 0; + }else if( zRight==0 ){ + return 1; + } a = (unsigned char *)zLeft; b = (unsigned char *)zRight; while( N-- > 0 && *a!=0 && UpperToLower[*a]==UpperToLower[*b]){ a++; b++; } return N<0 ? 0 : UpperToLower[*a] - UpperToLower[*b]; } /* -** Return TRUE if z is a pure numeric string. Return FALSE and leave -** *realnum unchanged if the string contains any character which is not -** part of a number. -** -** If the string is pure numeric, set *realnum to TRUE if the string -** contains the '.' character or an "E+000" style exponentiation suffix. -** Otherwise set *realnum to FALSE. Note that just becaue *realnum is -** false does not mean that the number can be successfully converted into -** an integer - it might be too big. -** -** An empty string is considered non-numeric. -*/ -SQLITE_PRIVATE int sqlite3IsNumber(const char *z, int *realnum, u8 enc){ - int incr = (enc==SQLITE_UTF8?1:2); - if( enc==SQLITE_UTF16BE ) z++; - if( *z=='-' || *z=='+' ) z += incr; - if( !sqlite3Isdigit(*z) ){ - return 0; - } - z += incr; - *realnum = 0; - while( sqlite3Isdigit(*z) ){ z += incr; } -#ifndef SQLITE_OMIT_FLOATING_POINT - if( *z=='.' ){ - z += incr; - if( !sqlite3Isdigit(*z) ) return 0; - while( sqlite3Isdigit(*z) ){ z += incr; } - *realnum = 1; - } - if( *z=='e' || *z=='E' ){ - z += incr; - if( *z=='+' || *z=='-' ) z += incr; - if( !sqlite3Isdigit(*z) ) return 0; - while( sqlite3Isdigit(*z) ){ z += incr; } - *realnum = 1; - } -#endif - return *z==0; -} - -/* -** The string z[] is an ASCII representation of a real number. -** Convert this string to a double. -** -** This routine assumes that z[] really is a valid number. If it -** is not, the result is undefined. -** -** This routine is used instead of the library atof() function because -** the library atof() might want to use "," as the decimal point instead -** of "." depending on how locale is set. But that would cause problems -** for SQL. So this routine always uses "." regardless of locale. -*/ -SQLITE_PRIVATE int sqlite3AtoF(const char *z, double *pResult){ -#ifndef SQLITE_OMIT_FLOATING_POINT - const char *zBegin = z; +** The string z[] is an text representation of a real number. +** Convert this string to a double and write it into *pResult. +** +** The string z[] is length bytes in length (bytes, not characters) and +** uses the encoding enc. The string is not necessarily zero-terminated. +** +** Return TRUE if the result is a valid real number (or integer) and FALSE +** if the string is empty or contains extraneous text. Valid numbers +** are in one of these formats: +** +** [+-]digits[E[+-]digits] +** [+-]digits.[digits][E[+-]digits] +** [+-].digits[E[+-]digits] +** +** Leading and trailing whitespace is ignored for the purpose of determining +** validity. 
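**
** A hedged worked example: for the 8-byte UTF-8 input " 3.25e1 " this
** routine returns true and writes 32.5 into *pResult; for the input
** "3.25x" it returns false but still writes 3.25, as described below.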
+** +** If some prefix of the input string is a valid number, this routine +** returns FALSE but it still converts the prefix and writes the result +** into *pResult. +*/ +SQLITE_PRIVATE int sqlite3AtoF(const char *z, double *pResult, int length, u8 enc){ +#ifndef SQLITE_OMIT_FLOATING_POINT + int incr; + const char *zEnd = z + length; /* sign * significand * (10 ^ (esign * exponent)) */ - int sign = 1; /* sign of significand */ - i64 s = 0; /* significand */ - int d = 0; /* adjust exponent for shifting decimal point */ - int esign = 1; /* sign of exponent */ - int e = 0; /* exponent */ + int sign = 1; /* sign of significand */ + i64 s = 0; /* significand */ + int d = 0; /* adjust exponent for shifting decimal point */ + int esign = 1; /* sign of exponent */ + int e = 0; /* exponent */ + int eValid = 1; /* True exponent is either not used or is well-formed */ double result; int nDigits = 0; + int nonNum = 0; + + assert( enc==SQLITE_UTF8 || enc==SQLITE_UTF16LE || enc==SQLITE_UTF16BE ); + *pResult = 0.0; /* Default return value, in case of an error */ + + if( enc==SQLITE_UTF8 ){ + incr = 1; + }else{ + int i; + incr = 2; + assert( SQLITE_UTF16LE==2 && SQLITE_UTF16BE==3 ); + for(i=3-enc; i<length && z[i]==0; i+=2){} + nonNum = i<length; + zEnd = z+i+enc-3; + z += (enc&1); + } /* skip leading spaces */ - while( sqlite3Isspace(*z) ) z++; + while( z<zEnd && sqlite3Isspace(*z) ) z+=incr; + if( z>=zEnd ) return 0; + /* get sign of significand */ if( *z=='-' ){ sign = -1; - z++; + z+=incr; }else if( *z=='+' ){ - z++; + z+=incr; } + /* skip leading zeroes */ - while( z[0]=='0' ) z++, nDigits++; + while( z<zEnd && z[0]=='0' ) z+=incr, nDigits++; /* copy max significant digits to significand */ - while( sqlite3Isdigit(*z) && s<((LARGEST_INT64-9)/10) ){ + while( z<zEnd && sqlite3Isdigit(*z) && s<((LARGEST_INT64-9)/10) ){ s = s*10 + (*z - '0'); - z++, nDigits++; + z+=incr, nDigits++; } + /* skip non-significant significand digits ** (increase exponent by d to shift decimal left) */ - while( sqlite3Isdigit(*z) ) z++, nDigits++, d++; + while( z<zEnd && sqlite3Isdigit(*z) ) z+=incr, nDigits++, d++; + if( z>=zEnd ) goto do_atof_calc; /* if decimal point is present */ if( *z=='.' ){ - z++; + z+=incr; /* copy digits from after decimal to significand ** (decrease exponent by d to shift decimal right) */ - while( sqlite3Isdigit(*z) && s<((LARGEST_INT64-9)/10) ){ + while( z<zEnd && sqlite3Isdigit(*z) && s<((LARGEST_INT64-9)/10) ){ s = s*10 + (*z - '0'); - z++, nDigits++, d--; + z+=incr, nDigits++, d--; } /* skip non-significant digits */ - while( sqlite3Isdigit(*z) ) z++, nDigits++; + while( z<zEnd && sqlite3Isdigit(*z) ) z+=incr, nDigits++; } + if( z>=zEnd ) goto do_atof_calc; /* if exponent is present */ if( *z=='e' || *z=='E' ){ - z++; + z+=incr; + eValid = 0; + if( z>=zEnd ) goto do_atof_calc; /* get sign of exponent */ if( *z=='-' ){ esign = -1; - z++; + z+=incr; }else if( *z=='+' ){ - z++; + z+=incr; } /* copy digits to exponent */ - while( sqlite3Isdigit(*z) ){ - e = e*10 + (*z - '0'); - z++; + while( z<zEnd && sqlite3Isdigit(*z) ){ + e = e<10000 ? (e*10 + (*z - '0')) : 10000; + z+=incr; + eValid = 1; } } + /* skip trailing spaces */ + if( nDigits && eValid ){ + while( z<zEnd && sqlite3Isspace(*z) ) z+=incr; + } + +do_atof_calc: /* adjust exponent by d, and update sign */ e = (e*esign) + d; if( e<0 ) { esign = -1; e *= -1; @@ -19447,11 +25663,11 @@ s = sign<0 ? -s : s; /* if exponent, scale significand as appropriate ** and store in result. 
*/ if( e ){ - double scale = 1.0; + LONGDOUBLE_TYPE scale = 1.0; /* attempt to handle extremely small/large numbers better */ if( e>307 && e<342 ){ while( e%308 ) { scale *= 1.0e+1; e -= 1; } if( esign<0 ){ result = s / scale; @@ -19458,10 +25674,16 @@ result /= 1.0e+308; }else{ result = s * scale; result *= 1.0e+308; } + }else if( e>=342 ){ + if( esign<0 ){ + result = 0.0*s; + }else{ + result = 1e308*1e308*s; /* Infinity */ + } }else{ /* 1.0e+22 is the largest power of 10 than can be ** represented exactly. */ while( e%22 ) { scale *= 1.0e+1; e -= 1; } while( e>0 ) { scale *= 1.0e+22; e -= 22; } @@ -19477,138 +25699,176 @@ } /* store the result */ *pResult = result; - /* return number of characters used */ - return (int)(z - zBegin); + /* return true if number and no extra non-whitespace chracters after */ + return z>=zEnd && nDigits>0 && eValid && nonNum==0; #else - return sqlite3Atoi64(z, pResult); + return !sqlite3Atoi64(z, pResult, length, enc); #endif /* SQLITE_OMIT_FLOATING_POINT */ } /* ** Compare the 19-character string zNum against the text representation ** value 2^63: 9223372036854775808. Return negative, zero, or positive ** if zNum is less than, equal to, or greater than the string. +** Note that zNum must contain exactly 19 characters. ** ** Unlike memcmp() this routine is guaranteed to return the difference ** in the values of the last digit if the only difference is in the ** last digit. So, for example, ** -** compare2pow63("9223372036854775800") +** compare2pow63("9223372036854775800", 1) ** ** will return -8. */ -static int compare2pow63(const char *zNum){ - int c; - c = memcmp(zNum,"922337203685477580",18)*10; +static int compare2pow63(const char *zNum, int incr){ + int c = 0; + int i; + /* 012345678901234567 */ + const char *pow63 = "922337203685477580"; + for(i=0; c==0 && i<18; i++){ + c = (zNum[i*incr]-pow63[i])*10; + } if( c==0 ){ - c = zNum[18] - '8'; + c = zNum[18*incr] - '8'; testcase( c==(-1) ); testcase( c==0 ); testcase( c==(+1) ); } return c; } - -/* -** Return TRUE if zNum is a 64-bit signed integer and write -** the value of the integer into *pNum. If zNum is not an integer -** or is an integer that is too large to be expressed with 64 bits, -** then return false. -** -** When this routine was originally written it dealt with only -** 32-bit numbers. At that time, it was much faster than the -** atoi() library routine in RedHat 7.2. -*/ -SQLITE_PRIVATE int sqlite3Atoi64(const char *zNum, i64 *pNum){ - i64 v = 0; - int neg; - int i, c; - const char *zStart; - while( sqlite3Isspace(*zNum) ) zNum++; - if( *zNum=='-' ){ - neg = 1; - zNum++; - }else if( *zNum=='+' ){ - neg = 0; - zNum++; - }else{ - neg = 0; - } - zStart = zNum; - while( zNum[0]=='0' ){ zNum++; } /* Skip over leading zeros. Ticket #2454 */ - for(i=0; (c=zNum[i])>='0' && c<='9'; i++){ - v = v*10 + c - '0'; - } - *pNum = neg ? -v : v; - testcase( i==18 ); - testcase( i==19 ); - testcase( i==20 ); - if( c!=0 || (i==0 && zStart==zNum) || i>19 ){ - /* zNum is empty or contains non-numeric text or is longer - ** than 19 digits (thus guaranting that it is too large) */ - return 0; - }else if( i<19 ){ - /* Less than 19 digits, so we know that it fits in 64 bits */ - return 1; - }else{ - /* 19-digit numbers must be no larger than 9223372036854775807 if positive - ** or 9223372036854775808 if negative. Note that 9223372036854665808 - ** is 2^63. */ - return compare2pow63(zNum)<neg; - } -} - -/* -** The string zNum represents an unsigned integer. 
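**
** For example, compare2pow63("9223372036854775807", 1) returns -1 and
** compare2pow63("9223372036854775808", 1) returns 0; the callers use that
** sign to decide whether a 19-digit literal still fits in a signed 64-bit
** integer.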
The zNum string -** consists of one or more digit characters and is terminated by -** a zero character. Any stray characters in zNum result in undefined -** behavior. -** -** If the unsigned integer that zNum represents will fit in a -** 64-bit signed integer, return TRUE. Otherwise return FALSE. -** -** If the negFlag parameter is true, that means that zNum really represents -** a negative number. (The leading "-" is omitted from zNum.) This -** parameter is needed to determine a boundary case. A string -** of "9223373036854775808" returns false if negFlag is false or true -** if negFlag is true. -** -** Leading zeros are ignored. -*/ -SQLITE_PRIVATE int sqlite3FitsIn64Bits(const char *zNum, int negFlag){ - int i; - int neg = 0; - - assert( zNum[0]>='0' && zNum[0]<='9' ); /* zNum is an unsigned number */ - - if( negFlag ) neg = 1-neg; - while( *zNum=='0' ){ - zNum++; /* Skip leading zeros. Ticket #2454 */ - } - for(i=0; zNum[i]; i++){ assert( zNum[i]>='0' && zNum[i]<='9' ); } - testcase( i==18 ); - testcase( i==19 ); - testcase( i==20 ); - if( i<19 ){ - /* Guaranteed to fit if less than 19 digits */ - return 1; - }else if( i>19 ){ - /* Guaranteed to be too big if greater than 19 digits */ - return 0; - }else{ - /* Compare against 2^63. */ - return compare2pow63(zNum)<neg; +/* +** Convert zNum to a 64-bit signed integer. zNum must be decimal. This +** routine does *not* accept hexadecimal notation. +** +** If the zNum value is representable as a 64-bit twos-complement +** integer, then write that value into *pNum and return 0. +** +** If zNum is exactly 9223372036854775808, return 2. This special +** case is broken out because while 9223372036854775808 cannot be a +** signed 64-bit integer, its negative -9223372036854775808 can be. +** +** If zNum is too big for a 64-bit integer and is not +** 9223372036854775808 or if zNum contains any non-numeric text, +** then return 1. +** +** length is the number of bytes in the string (bytes, not characters). +** The string is not necessarily zero-terminated. The encoding is +** given by enc. +*/ +SQLITE_PRIVATE int sqlite3Atoi64(const char *zNum, i64 *pNum, int length, u8 enc){ + int incr; + u64 u = 0; + int neg = 0; /* assume positive */ + int i; + int c = 0; + int nonNum = 0; + const char *zStart; + const char *zEnd = zNum + length; + assert( enc==SQLITE_UTF8 || enc==SQLITE_UTF16LE || enc==SQLITE_UTF16BE ); + if( enc==SQLITE_UTF8 ){ + incr = 1; + }else{ + incr = 2; + assert( SQLITE_UTF16LE==2 && SQLITE_UTF16BE==3 ); + for(i=3-enc; i<length && zNum[i]==0; i+=2){} + nonNum = i<length; + zEnd = zNum+i+enc-3; + zNum += (enc&1); + } + while( zNum<zEnd && sqlite3Isspace(*zNum) ) zNum+=incr; + if( zNum<zEnd ){ + if( *zNum=='-' ){ + neg = 1; + zNum+=incr; + }else if( *zNum=='+' ){ + zNum+=incr; + } + } + zStart = zNum; + while( zNum<zEnd && zNum[0]=='0' ){ zNum+=incr; } /* Skip leading zeros. */ + for(i=0; &zNum[i]<zEnd && (c=zNum[i])>='0' && c<='9'; i+=incr){ + u = u*10 + c - '0'; + } + if( u>LARGEST_INT64 ){ + *pNum = neg ? 
SMALLEST_INT64 : LARGEST_INT64; + }else if( neg ){ + *pNum = -(i64)u; + }else{ + *pNum = (i64)u; + } + testcase( i==18 ); + testcase( i==19 ); + testcase( i==20 ); + if( (c!=0 && &zNum[i]<zEnd) || (i==0 && zStart==zNum) + || i>19*incr || nonNum ){ + /* zNum is empty or contains non-numeric text or is longer + ** than 19 digits (thus guaranteeing that it is too large) */ + return 1; + }else if( i<19*incr ){ + /* Less than 19 digits, so we know that it fits in 64 bits */ + assert( u<=LARGEST_INT64 ); + return 0; + }else{ + /* zNum is a 19-digit numbers. Compare it against 9223372036854775808. */ + c = compare2pow63(zNum, incr); + if( c<0 ){ + /* zNum is less than 9223372036854775808 so it fits */ + assert( u<=LARGEST_INT64 ); + return 0; + }else if( c>0 ){ + /* zNum is greater than 9223372036854775808 so it overflows */ + return 1; + }else{ + /* zNum is exactly 9223372036854775808. Fits if negative. The + ** special case 2 overflow if positive */ + assert( u-1==LARGEST_INT64 ); + return neg ? 0 : 2; + } + } +} + +/* +** Transform a UTF-8 integer literal, in either decimal or hexadecimal, +** into a 64-bit signed integer. This routine accepts hexadecimal literals, +** whereas sqlite3Atoi64() does not. +** +** Returns: +** +** 0 Successful transformation. Fits in a 64-bit signed integer. +** 1 Integer too large for a 64-bit signed integer or is malformed +** 2 Special case of 9223372036854775808 +*/ +SQLITE_PRIVATE int sqlite3DecOrHexToI64(const char *z, i64 *pOut){ +#ifndef SQLITE_OMIT_HEX_INTEGER + if( z[0]=='0' + && (z[1]=='x' || z[1]=='X') + && sqlite3Isxdigit(z[2]) + ){ + u64 u = 0; + int i, k; + for(i=2; z[i]=='0'; i++){} + for(k=i; sqlite3Isxdigit(z[k]); k++){ + u = u*16 + sqlite3HexToInt(z[k]); + } + memcpy(pOut, &u, 8); + return (z[k]==0 && k-i<=16) ? 0 : 1; + }else +#endif /* SQLITE_OMIT_HEX_INTEGER */ + { + return sqlite3Atoi64(z, pOut, sqlite3Strlen30(z), SQLITE_UTF8); } } /* ** If zNum represents an integer that will fit in 32-bits, then set ** *pValue to that integer and return true. Otherwise return false. +** +** This routine accepts both decimal and hexadecimal notation for integers. ** ** Any non-numeric characters that following zNum are ignored. ** This is different from sqlite3Atoi64() which requires the ** input number to be zero-terminated. */ @@ -19620,10 +25880,29 @@ neg = 1; zNum++; }else if( zNum[0]=='+' ){ zNum++; } +#ifndef SQLITE_OMIT_HEX_INTEGER + else if( zNum[0]=='0' + && (zNum[1]=='x' || zNum[1]=='X') + && sqlite3Isxdigit(zNum[2]) + ){ + u32 u = 0; + zNum += 2; + while( zNum[0]=='0' ) zNum++; + for(i=0; sqlite3Isxdigit(zNum[i]) && i<8; i++){ + u = u*16 + sqlite3HexToInt(zNum[i]); + } + if( (u&0x80000000)==0 && sqlite3Isxdigit(zNum[i])==0 ){ + memcpy(pValue, &u, 4); + return 1; + }else{ + return 0; + } + } +#endif while( zNum[0]=='0' ) zNum++; for(i=0; i<11 && (c = zNum[i] - '0')>=0 && c<=9; i++){ v = v*10 + c; } @@ -19644,10 +25923,20 @@ v = -v; } *pValue = (int)v; return 1; } + +/* +** Return a 32-bit integer value extracted from a string. If the +** string is not an integer, just return 0. +*/ +SQLITE_PRIVATE int sqlite3Atoi(const char *z){ + int x = 0; + if( z ) sqlite3GetInt32(z, &x); + return x; +} /* ** The variable-length integer encoding is as follows: ** ** KEY: @@ -19674,11 +25963,11 @@ ** A variable-length integer consists of the lower 7 bits of each byte ** for all bytes that have the 8th bit set and one byte with the 8th ** bit clear. Except, if we get to the 9th byte, it stores the full ** 8 bits and is the last byte. 
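**
** For example, the value 300 (hex 0x012C) is encoded as the two bytes
** 0x82 0x2C: the first byte holds the upper seven bits with the high bit
** set (more bytes follow), the second holds the low seven bits with the
** high bit clear (last byte).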
*/ -SQLITE_PRIVATE int sqlite3PutVarint(unsigned char *p, u64 v){ +static int SQLITE_NOINLINE putVarint64(unsigned char *p, u64 v){ int i, j, n; u8 buf[10]; if( v & (((u64)0xff000000)<<32) ){ p[8] = (u8)v; v >>= 8; @@ -19698,32 +25987,21 @@ for(i=0, j=n-1; j>=0; j--, i++){ p[i] = buf[j]; } return n; } - -/* -** This routine is a faster version of sqlite3PutVarint() that only -** works for 32-bit positive integers and which is optimized for -** the common case of small integers. A MACRO version, putVarint32, -** is provided which inlines the single-byte case. All code should use -** the MACRO version as this function assumes the single-byte case has -** already been handled. -*/ -SQLITE_PRIVATE int sqlite3PutVarint32(unsigned char *p, u32 v){ -#ifndef putVarint32 - if( (v & ~0x7f)==0 ){ - p[0] = v; +SQLITE_PRIVATE int sqlite3PutVarint(unsigned char *p, u64 v){ + if( v<=0x7f ){ + p[0] = v&0x7f; return 1; } -#endif - if( (v & ~0x3fff)==0 ){ - p[0] = (u8)((v>>7) | 0x80); - p[1] = (u8)(v & 0x7f); + if( v<=0x3fff ){ + p[0] = ((v>>7)&0x7f)|0x80; + p[1] = v&0x7f; return 2; } - return sqlite3PutVarint(p, v); + return putVarint64(p,v); } /* ** Bitmasks used by sqlite3GetVarint(). These precomputed constants ** are defined here rather than simply putting the constant expressions @@ -19812,11 +26090,12 @@ a = a<<14; a |= *p; /* a: p0<<28 | p2<<14 | p4 (unmasked) */ if (!(a&0x80)) { - /* we can skip these cause they were (effectively) done above in calc'ing s */ + /* we can skip these cause they were (effectively) done above + ** while calculating s */ /* a &= (0x7f<<28)|(0x7f<<14)|(0x7f); */ /* b &= (0x7f<<14)|(0x7f); */ b = b<<7; a |= b; s = s>>18; @@ -20033,51 +26312,73 @@ /* ** Return the number of bytes that will be needed to store the given ** 64-bit integer. */ SQLITE_PRIVATE int sqlite3VarintLen(u64 v){ - int i = 0; - do{ - i++; - v >>= 7; - }while( v!=0 && ALWAYS(i<9) ); + int i; + for(i=1; (v >>= 7)!=0; i++){ assert( i<9 ); } return i; } /* ** Read or write a four-byte big-endian integer value. */ SQLITE_PRIVATE u32 sqlite3Get4byte(const u8 *p){ - return (p[0]<<24) | (p[1]<<16) | (p[2]<<8) | p[3]; +#if SQLITE_BYTEORDER==4321 + u32 x; + memcpy(&x,p,4); + return x; +#elif SQLITE_BYTEORDER==1234 && !defined(SQLITE_DISABLE_INTRINSIC) \ + && defined(__GNUC__) && GCC_VERSION>=4003000 + u32 x; + memcpy(&x,p,4); + return __builtin_bswap32(x); +#elif SQLITE_BYTEORDER==1234 && !defined(SQLITE_DISABLE_INTRINSIC) \ + && defined(_MSC_VER) && _MSC_VER>=1300 + u32 x; + memcpy(&x,p,4); + return _byteswap_ulong(x); +#else + testcase( p[0]&0x80 ); + return ((unsigned)p[0]<<24) | (p[1]<<16) | (p[2]<<8) | p[3]; +#endif } SQLITE_PRIVATE void sqlite3Put4byte(unsigned char *p, u32 v){ +#if SQLITE_BYTEORDER==4321 + memcpy(p,&v,4); +#elif SQLITE_BYTEORDER==1234 && defined(__GNUC__) && GCC_VERSION>=4003000 + u32 x = __builtin_bswap32(v); + memcpy(p,&x,4); +#elif SQLITE_BYTEORDER==1234 && defined(_MSC_VER) && _MSC_VER>=1300 + u32 x = _byteswap_ulong(v); + memcpy(p,&x,4); +#else p[0] = (u8)(v>>24); p[1] = (u8)(v>>16); p[2] = (u8)(v>>8); p[3] = (u8)v; +#endif } -#if !defined(SQLITE_OMIT_BLOB_LITERAL) || defined(SQLITE_HAS_CODEC) /* ** Translate a single byte of Hex into an integer. 
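** For example, sqlite3HexToInt('7') returns 7, and sqlite3HexToInt('a') and
** sqlite3HexToInt('A') both return 10.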
** This routine only works if h really is a valid hexadecimal ** character: 0..9a..fA..F */ -static u8 hexToInt(int h){ +SQLITE_PRIVATE u8 sqlite3HexToInt(int h){ assert( (h>='0' && h<='9') || (h>='a' && h<='f') || (h>='A' && h<='F') ); #ifdef SQLITE_ASCII h += 9*(1&(h>>6)); #endif #ifdef SQLITE_EBCDIC h += 9*(1&~(h>>4)); #endif return (u8)(h & 0xf); } -#endif /* !SQLITE_OMIT_BLOB_LITERAL || SQLITE_HAS_CODEC */ #if !defined(SQLITE_OMIT_BLOB_LITERAL) || defined(SQLITE_HAS_CODEC) /* ** Convert a BLOB literal of the form "x'hhhhhh'" into its binary ** value. Return a pointer to its binary value. Space to hold the @@ -20086,15 +26387,15 @@ */ SQLITE_PRIVATE void *sqlite3HexToBlob(sqlite3 *db, const char *z, int n){ char *zBlob; int i; - zBlob = (char *)sqlite3DbMallocRaw(db, n/2 + 1); + zBlob = (char *)sqlite3DbMallocRawNN(db, n/2 + 1); n--; if( zBlob ){ for(i=0; i<n; i+=2){ - zBlob[i/2] = (hexToInt(z[i])<<4) | hexToInt(z[i+1]); + zBlob[i/2] = (sqlite3HexToInt(z[i])<<4) | sqlite3HexToInt(z[i+1]); } zBlob[i/2] = 0; } return zBlob; } @@ -20154,10 +26455,196 @@ return 0; }else{ return 1; } } + +/* +** Attempt to add, substract, or multiply the 64-bit signed value iB against +** the other 64-bit signed integer at *pA and store the result in *pA. +** Return 0 on success. Or if the operation would have resulted in an +** overflow, leave *pA unchanged and return 1. +*/ +SQLITE_PRIVATE int sqlite3AddInt64(i64 *pA, i64 iB){ + i64 iA = *pA; + testcase( iA==0 ); testcase( iA==1 ); + testcase( iB==-1 ); testcase( iB==0 ); + if( iB>=0 ){ + testcase( iA>0 && LARGEST_INT64 - iA == iB ); + testcase( iA>0 && LARGEST_INT64 - iA == iB - 1 ); + if( iA>0 && LARGEST_INT64 - iA < iB ) return 1; + }else{ + testcase( iA<0 && -(iA + LARGEST_INT64) == iB + 1 ); + testcase( iA<0 && -(iA + LARGEST_INT64) == iB + 2 ); + if( iA<0 && -(iA + LARGEST_INT64) > iB + 1 ) return 1; + } + *pA += iB; + return 0; +} +SQLITE_PRIVATE int sqlite3SubInt64(i64 *pA, i64 iB){ + testcase( iB==SMALLEST_INT64+1 ); + if( iB==SMALLEST_INT64 ){ + testcase( (*pA)==(-1) ); testcase( (*pA)==0 ); + if( (*pA)>=0 ) return 1; + *pA -= iB; + return 0; + }else{ + return sqlite3AddInt64(pA, -iB); + } +} +#define TWOPOWER32 (((i64)1)<<32) +#define TWOPOWER31 (((i64)1)<<31) +SQLITE_PRIVATE int sqlite3MulInt64(i64 *pA, i64 iB){ + i64 iA = *pA; + i64 iA1, iA0, iB1, iB0, r; + + iA1 = iA/TWOPOWER32; + iA0 = iA % TWOPOWER32; + iB1 = iB/TWOPOWER32; + iB0 = iB % TWOPOWER32; + if( iA1==0 ){ + if( iB1==0 ){ + *pA *= iB; + return 0; + } + r = iA0*iB1; + }else if( iB1==0 ){ + r = iA1*iB0; + }else{ + /* If both iA1 and iB1 are non-zero, overflow will result */ + return 1; + } + testcase( r==(-TWOPOWER31)-1 ); + testcase( r==(-TWOPOWER31) ); + testcase( r==TWOPOWER31 ); + testcase( r==TWOPOWER31-1 ); + if( r<(-TWOPOWER31) || r>=TWOPOWER31 ) return 1; + r *= TWOPOWER32; + if( sqlite3AddInt64(&r, iA0*iB0) ) return 1; + *pA = r; + return 0; +} + +/* +** Compute the absolute value of a 32-bit signed integer, of possible. Or +** if the integer has a value of -2147483648, return +2147483647 +*/ +SQLITE_PRIVATE int sqlite3AbsInt32(int x){ + if( x>=0 ) return x; + if( x==(int)0x80000000 ) return 0x7fffffff; + return -x; +} + +#ifdef SQLITE_ENABLE_8_3_NAMES +/* +** If SQLITE_ENABLE_8_3_NAMES is set at compile-time and if the database +** filename in zBaseFilename is a URI with the "8_3_names=1" parameter and +** if filename in z[] has a suffix (a.k.a. 
"extension") that is longer than +** three characters, then shorten the suffix on z[] to be the last three +** characters of the original suffix. +** +** If SQLITE_ENABLE_8_3_NAMES is set to 2 at compile-time, then always +** do the suffix shortening regardless of URI parameter. +** +** Examples: +** +** test.db-journal => test.nal +** test.db-wal => test.wal +** test.db-shm => test.shm +** test.db-mj7f3319fa => test.9fa +*/ +SQLITE_PRIVATE void sqlite3FileSuffix3(const char *zBaseFilename, char *z){ +#if SQLITE_ENABLE_8_3_NAMES<2 + if( sqlite3_uri_boolean(zBaseFilename, "8_3_names", 0) ) +#endif + { + int i, sz; + sz = sqlite3Strlen30(z); + for(i=sz-1; i>0 && z[i]!='/' && z[i]!='.'; i--){} + if( z[i]=='.' && ALWAYS(sz>i+4) ) memmove(&z[i+1], &z[sz-3], 4); + } +} +#endif + +/* +** Find (an approximate) sum of two LogEst values. This computation is +** not a simple "+" operator because LogEst is stored as a logarithmic +** value. +** +*/ +SQLITE_PRIVATE LogEst sqlite3LogEstAdd(LogEst a, LogEst b){ + static const unsigned char x[] = { + 10, 10, /* 0,1 */ + 9, 9, /* 2,3 */ + 8, 8, /* 4,5 */ + 7, 7, 7, /* 6,7,8 */ + 6, 6, 6, /* 9,10,11 */ + 5, 5, 5, /* 12-14 */ + 4, 4, 4, 4, /* 15-18 */ + 3, 3, 3, 3, 3, 3, /* 19-24 */ + 2, 2, 2, 2, 2, 2, 2, /* 25-31 */ + }; + if( a>=b ){ + if( a>b+49 ) return a; + if( a>b+31 ) return a+1; + return a+x[a-b]; + }else{ + if( b>a+49 ) return b; + if( b>a+31 ) return b+1; + return b+x[b-a]; + } +} + +/* +** Convert an integer into a LogEst. In other words, compute an +** approximation for 10*log2(x). +*/ +SQLITE_PRIVATE LogEst sqlite3LogEst(u64 x){ + static LogEst a[] = { 0, 2, 3, 5, 6, 7, 8, 9 }; + LogEst y = 40; + if( x<8 ){ + if( x<2 ) return 0; + while( x<8 ){ y -= 10; x <<= 1; } + }else{ + while( x>255 ){ y += 40; x >>= 4; } + while( x>15 ){ y += 10; x >>= 1; } + } + return a[x&7] + y - 10; +} + +#ifndef SQLITE_OMIT_VIRTUALTABLE +/* +** Convert a double into a LogEst +** In other words, compute an approximation for 10*log2(x). +*/ +SQLITE_PRIVATE LogEst sqlite3LogEstFromDouble(double x){ + u64 a; + LogEst e; + assert( sizeof(x)==8 && sizeof(a)==8 ); + if( x<=1 ) return 0; + if( x<=2000000000 ) return sqlite3LogEst((u64)x); + memcpy(&a, &x, 8); + e = (a>>52) - 1022; + return e*10; +} +#endif /* SQLITE_OMIT_VIRTUALTABLE */ + +/* +** Convert a LogEst into an integer. +*/ +SQLITE_PRIVATE u64 sqlite3LogEstToInt(LogEst x){ + u64 n; + if( x<10 ) return 1; + n = x%10; + x /= 10; + if( n>=5 ) n -= 2; + else if( n>=1 ) n -= 1; + if( x>=3 ){ + return x>60 ? (u64)LARGEST_INT64 : (n+8)<<(x-3); + } + return (n+8)>>(3-x); +} /************** End of util.c ************************************************/ /************** Begin file hash.c ********************************************/ /* ** 2001 September 22 @@ -20171,10 +26658,12 @@ ** ************************************************************************* ** This is the implementation of generic hash-tables ** used in SQLite. */ +/* #include "sqliteInt.h" */ +/* #include <assert.h> */ /* Turn bulk memory into a hash table object by initializing the ** fields of the Hash structure. ** ** "pNew" is a pointer to the hash table that is to be initialized. @@ -20209,16 +26698,15 @@ } /* ** The hashing function. 
*/ -static unsigned int strHash(const char *z, int nKey){ - int h = 0; - assert( nKey>=0 ); - while( nKey > 0 ){ - h = (h<<3) ^ h ^ sqlite3UpperToLower[(unsigned char)*z++]; - nKey--; +static unsigned int strHash(const char *z){ + unsigned int h = 0; + unsigned char c; + while( (c = (unsigned char)*z++)!=0 ){ + h = (h<<3) ^ h ^ sqlite3UpperToLower[c]; } return h; } @@ -20270,11 +26758,15 @@ if( new_size==pH->htsize ) return 0; #endif /* The inability to allocates space for a larger hash table is ** a performance hit but it is not a fatal error. So mark the - ** allocation as a benign. + ** allocation as a benign. Use sqlite3Malloc()/memset(0) instead of + ** sqlite3MallocZero() to make the allocation, as sqlite3MallocZero() + ** only zeroes the requested number of bytes whereas this module will + ** use the actual amount of space allocated for the hash table (which + ** may be larger than the requested amount). */ sqlite3BeginBenignMalloc(); new_ht = (struct _ht *)sqlite3Malloc( new_size*sizeof(struct _ht) ); sqlite3EndBenignMalloc(); @@ -20282,40 +26774,45 @@ sqlite3_free(pH->ht); pH->ht = new_ht; pH->htsize = new_size = sqlite3MallocSize(new_ht)/sizeof(struct _ht); memset(new_ht, 0, new_size*sizeof(struct _ht)); for(elem=pH->first, pH->first=0; elem; elem = next_elem){ - unsigned int h = strHash(elem->pKey, elem->nKey) % new_size; + unsigned int h = strHash(elem->pKey) % new_size; next_elem = elem->next; insertElement(pH, &new_ht[h], elem); } return 1; } /* This function (for internal use only) locates an element in an -** hash table that matches the given key. The hash for this key has -** already been computed and is passed as the 4th parameter. +** hash table that matches the given key. The hash for this key is +** also computed and returned in the *pH parameter. */ -static HashElem *findElementGivenHash( +static HashElem *findElementWithHash( const Hash *pH, /* The pH to be searched */ const char *pKey, /* The key we are searching for */ - int nKey, /* Bytes in key (not counting zero terminator) */ - unsigned int h /* The hash for this key. */ + unsigned int *pHash /* Write the hash value here */ ){ HashElem *elem; /* Used to loop thru the element list */ int count; /* Number of elements left to test */ + unsigned int h; /* The computed hash */ if( pH->ht ){ - struct _ht *pEntry = &pH->ht[h]; + struct _ht *pEntry; + h = strHash(pKey) % pH->htsize; + pEntry = &pH->ht[h]; elem = pEntry->chain; count = pEntry->count; }else{ + h = 0; elem = pH->first; count = pH->count; } - while( count-- && ALWAYS(elem) ){ - if( elem->nKey==nKey && sqlite3StrNICmp(elem->pKey,pKey,nKey)==0 ){ + *pHash = h; + while( count-- ){ + assert( elem!=0 ); + if( sqlite3StrICmp(elem->pKey,pKey)==0 ){ return elem; } elem = elem->next; } return 0; @@ -20346,38 +26843,32 @@ pEntry->count--; assert( pEntry->count>=0 ); } sqlite3_free( elem ); pH->count--; - if( pH->count<=0 ){ + if( pH->count==0 ){ assert( pH->first==0 ); assert( pH->count==0 ); sqlite3HashClear(pH); } } /* Attempt to locate an element of the hash table pH with a key -** that matches pKey,nKey. Return the data for this element if it is +** that matches pKey. Return the data for this element if it is ** found, or NULL if there is no match. 
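**
** For example (a sketch; "h" and "pData" are hypothetical), after
**
**       sqlite3HashInsert(&h, "rowid", pData);
**
** a later sqlite3HashFind(&h, "ROWID") returns pData, since both the hash
** computation and the key comparison fold character case.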
*/ -SQLITE_PRIVATE void *sqlite3HashFind(const Hash *pH, const char *pKey, int nKey){ +SQLITE_PRIVATE void *sqlite3HashFind(const Hash *pH, const char *pKey){ HashElem *elem; /* The element that matches key */ unsigned int h; /* A hash on key */ assert( pH!=0 ); assert( pKey!=0 ); - assert( nKey>=0 ); - if( pH->ht ){ - h = strHash(pKey, nKey) % pH->htsize; - }else{ - h = 0; - } - elem = findElementGivenHash(pH, pKey, nKey, h); + elem = findElementWithHash(pH, pKey, &h); return elem ? elem->data : 0; } -/* Insert an element into the hash table pH. The key is pKey,nKey +/* Insert an element into the hash table pH. The key is pKey ** and the data is "data". ** ** If no element exists with a matching key, then a new ** element is created and NULL is returned. ** @@ -20387,217 +26878,229 @@ ** the new data is returned and the hash table is unchanged. ** ** If the "data" parameter to this function is NULL, then the ** element corresponding to "key" is removed from the hash table. */ -SQLITE_PRIVATE void *sqlite3HashInsert(Hash *pH, const char *pKey, int nKey, void *data){ +SQLITE_PRIVATE void *sqlite3HashInsert(Hash *pH, const char *pKey, void *data){ unsigned int h; /* the hash of the key modulo hash table size */ HashElem *elem; /* Used to loop thru the element list */ HashElem *new_elem; /* New element added to the pH */ assert( pH!=0 ); assert( pKey!=0 ); - assert( nKey>=0 ); - if( pH->htsize ){ - h = strHash(pKey, nKey) % pH->htsize; - }else{ - h = 0; - } - elem = findElementGivenHash(pH,pKey,nKey,h); + elem = findElementWithHash(pH,pKey,&h); if( elem ){ void *old_data = elem->data; if( data==0 ){ removeElementGivenHash(pH,elem,h); }else{ elem->data = data; elem->pKey = pKey; - assert(nKey==elem->nKey); } return old_data; } if( data==0 ) return 0; new_elem = (HashElem*)sqlite3Malloc( sizeof(HashElem) ); if( new_elem==0 ) return data; new_elem->pKey = pKey; - new_elem->nKey = nKey; new_elem->data = data; pH->count++; if( pH->count>=10 && pH->count > 2*pH->htsize ){ if( rehash(pH, pH->count*2) ){ assert( pH->htsize>0 ); - h = strHash(pKey, nKey) % pH->htsize; + h = strHash(pKey) % pH->htsize; } } - if( pH->ht ){ - insertElement(pH, &pH->ht[h], new_elem); - }else{ - insertElement(pH, 0, new_elem); - } + insertElement(pH, pH->ht ? &pH->ht[h] : 0, new_elem); return 0; } /************** End of hash.c ************************************************/ /************** Begin file opcodes.c *****************************************/ /* Automatically generated. Do not edit */ -/* See the mkopcodec.awk script for details. */ -#if !defined(SQLITE_OMIT_EXPLAIN) || !defined(NDEBUG) || defined(VDBE_PROFILE) || defined(SQLITE_DEBUG) +/* See the tool/mkopcodec.tcl script for details. 
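** With the table below, for example, sqlite3OpcodeName(13) returns "Goto".
** When SQLITE_ENABLE_EXPLAIN_COMMENTS (or SQLITE_DEBUG) is defined, OpHelp()
** appends the synopsis text after an embedded NUL, so the returned pointer
** still reads as just "Goto" through ordinary C string handling.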
*/ +#if !defined(SQLITE_OMIT_EXPLAIN) \ + || defined(VDBE_PROFILE) \ + || defined(SQLITE_DEBUG) +#if defined(SQLITE_ENABLE_EXPLAIN_COMMENTS) || defined(SQLITE_DEBUG) +# define OpHelp(X) "\0" X +#else +# define OpHelp(X) +#endif SQLITE_PRIVATE const char *sqlite3OpcodeName(int i){ - static const char *const azName[] = { "?", - /* 1 */ "Goto", - /* 2 */ "Gosub", - /* 3 */ "Return", - /* 4 */ "Yield", - /* 5 */ "HaltIfNull", - /* 6 */ "Halt", - /* 7 */ "Integer", - /* 8 */ "Int64", - /* 9 */ "String", - /* 10 */ "Null", - /* 11 */ "Blob", - /* 12 */ "Variable", - /* 13 */ "Move", - /* 14 */ "Copy", - /* 15 */ "SCopy", - /* 16 */ "ResultRow", - /* 17 */ "CollSeq", - /* 18 */ "Function", - /* 19 */ "Not", - /* 20 */ "AddImm", - /* 21 */ "MustBeInt", - /* 22 */ "RealAffinity", - /* 23 */ "Permutation", - /* 24 */ "Compare", - /* 25 */ "Jump", - /* 26 */ "If", - /* 27 */ "IfNot", - /* 28 */ "Column", - /* 29 */ "Affinity", - /* 30 */ "MakeRecord", - /* 31 */ "Count", - /* 32 */ "Savepoint", - /* 33 */ "AutoCommit", - /* 34 */ "Transaction", - /* 35 */ "ReadCookie", - /* 36 */ "SetCookie", - /* 37 */ "VerifyCookie", - /* 38 */ "OpenRead", - /* 39 */ "OpenWrite", - /* 40 */ "OpenAutoindex", - /* 41 */ "OpenEphemeral", - /* 42 */ "OpenPseudo", - /* 43 */ "Close", - /* 44 */ "SeekLt", - /* 45 */ "SeekLe", - /* 46 */ "SeekGe", - /* 47 */ "SeekGt", - /* 48 */ "Seek", - /* 49 */ "NotFound", - /* 50 */ "Found", - /* 51 */ "IsUnique", - /* 52 */ "NotExists", - /* 53 */ "Sequence", - /* 54 */ "NewRowid", - /* 55 */ "Insert", - /* 56 */ "InsertInt", - /* 57 */ "Delete", - /* 58 */ "ResetCount", - /* 59 */ "RowKey", - /* 60 */ "RowData", - /* 61 */ "Rowid", - /* 62 */ "NullRow", - /* 63 */ "Last", - /* 64 */ "Sort", - /* 65 */ "Rewind", - /* 66 */ "Prev", - /* 67 */ "Next", - /* 68 */ "Or", - /* 69 */ "And", - /* 70 */ "IdxInsert", - /* 71 */ "IdxDelete", - /* 72 */ "IdxRowid", - /* 73 */ "IsNull", - /* 74 */ "NotNull", - /* 75 */ "Ne", - /* 76 */ "Eq", - /* 77 */ "Gt", - /* 78 */ "Le", - /* 79 */ "Lt", - /* 80 */ "Ge", - /* 81 */ "IdxLT", - /* 82 */ "BitAnd", - /* 83 */ "BitOr", - /* 84 */ "ShiftLeft", - /* 85 */ "ShiftRight", - /* 86 */ "Add", - /* 87 */ "Subtract", - /* 88 */ "Multiply", - /* 89 */ "Divide", - /* 90 */ "Remainder", - /* 91 */ "Concat", - /* 92 */ "IdxGE", - /* 93 */ "BitNot", - /* 94 */ "String8", - /* 95 */ "Destroy", - /* 96 */ "Clear", - /* 97 */ "CreateIndex", - /* 98 */ "CreateTable", - /* 99 */ "ParseSchema", - /* 100 */ "LoadAnalysis", - /* 101 */ "DropTable", - /* 102 */ "DropIndex", - /* 103 */ "DropTrigger", - /* 104 */ "IntegrityCk", - /* 105 */ "RowSetAdd", - /* 106 */ "RowSetRead", - /* 107 */ "RowSetTest", - /* 108 */ "Program", - /* 109 */ "Param", - /* 110 */ "FkCounter", - /* 111 */ "FkIfZero", - /* 112 */ "MemMax", - /* 113 */ "IfPos", - /* 114 */ "IfNeg", - /* 115 */ "IfZero", - /* 116 */ "AggStep", - /* 117 */ "AggFinal", - /* 118 */ "Vacuum", - /* 119 */ "IncrVacuum", - /* 120 */ "Expire", - /* 121 */ "TableLock", - /* 122 */ "VBegin", - /* 123 */ "VCreate", - /* 124 */ "VDestroy", - /* 125 */ "VOpen", - /* 126 */ "VFilter", - /* 127 */ "VColumn", - /* 128 */ "VNext", - /* 129 */ "VRename", - /* 130 */ "Real", - /* 131 */ "VUpdate", - /* 132 */ "Pagecount", - /* 133 */ "Trace", - /* 134 */ "Noop", - /* 135 */ "Explain", - /* 136 */ "NotUsed_136", - /* 137 */ "NotUsed_137", - /* 138 */ "NotUsed_138", - /* 139 */ "NotUsed_139", - /* 140 */ "NotUsed_140", - /* 141 */ "ToText", - /* 142 */ "ToBlob", - /* 143 */ "ToNumeric", - /* 144 */ "ToInt", - /* 145 */ "ToReal", + 
static const char *const azName[] = { + /* 0 */ "Savepoint" OpHelp(""), + /* 1 */ "AutoCommit" OpHelp(""), + /* 2 */ "Transaction" OpHelp(""), + /* 3 */ "SorterNext" OpHelp(""), + /* 4 */ "PrevIfOpen" OpHelp(""), + /* 5 */ "NextIfOpen" OpHelp(""), + /* 6 */ "Prev" OpHelp(""), + /* 7 */ "Next" OpHelp(""), + /* 8 */ "Checkpoint" OpHelp(""), + /* 9 */ "JournalMode" OpHelp(""), + /* 10 */ "Vacuum" OpHelp(""), + /* 11 */ "VFilter" OpHelp("iplan=r[P3] zplan='P4'"), + /* 12 */ "VUpdate" OpHelp("data=r[P3@P2]"), + /* 13 */ "Goto" OpHelp(""), + /* 14 */ "Gosub" OpHelp(""), + /* 15 */ "Return" OpHelp(""), + /* 16 */ "InitCoroutine" OpHelp(""), + /* 17 */ "EndCoroutine" OpHelp(""), + /* 18 */ "Yield" OpHelp(""), + /* 19 */ "Not" OpHelp("r[P2]= !r[P1]"), + /* 20 */ "HaltIfNull" OpHelp("if r[P3]=null halt"), + /* 21 */ "Halt" OpHelp(""), + /* 22 */ "Integer" OpHelp("r[P2]=P1"), + /* 23 */ "Int64" OpHelp("r[P2]=P4"), + /* 24 */ "String" OpHelp("r[P2]='P4' (len=P1)"), + /* 25 */ "Null" OpHelp("r[P2..P3]=NULL"), + /* 26 */ "SoftNull" OpHelp("r[P1]=NULL"), + /* 27 */ "Blob" OpHelp("r[P2]=P4 (len=P1)"), + /* 28 */ "Variable" OpHelp("r[P2]=parameter(P1,P4)"), + /* 29 */ "Move" OpHelp("r[P2@P3]=r[P1@P3]"), + /* 30 */ "Copy" OpHelp("r[P2@P3+1]=r[P1@P3+1]"), + /* 31 */ "SCopy" OpHelp("r[P2]=r[P1]"), + /* 32 */ "IntCopy" OpHelp("r[P2]=r[P1]"), + /* 33 */ "ResultRow" OpHelp("output=r[P1@P2]"), + /* 34 */ "CollSeq" OpHelp(""), + /* 35 */ "Function0" OpHelp("r[P3]=func(r[P2@P5])"), + /* 36 */ "Function" OpHelp("r[P3]=func(r[P2@P5])"), + /* 37 */ "AddImm" OpHelp("r[P1]=r[P1]+P2"), + /* 38 */ "MustBeInt" OpHelp(""), + /* 39 */ "RealAffinity" OpHelp(""), + /* 40 */ "Cast" OpHelp("affinity(r[P1])"), + /* 41 */ "Permutation" OpHelp(""), + /* 42 */ "Compare" OpHelp("r[P1@P3] <-> r[P2@P3]"), + /* 43 */ "Jump" OpHelp(""), + /* 44 */ "Once" OpHelp(""), + /* 45 */ "If" OpHelp(""), + /* 46 */ "IfNot" OpHelp(""), + /* 47 */ "Column" OpHelp("r[P3]=PX"), + /* 48 */ "Affinity" OpHelp("affinity(r[P1@P2])"), + /* 49 */ "MakeRecord" OpHelp("r[P3]=mkrec(r[P1@P2])"), + /* 50 */ "Count" OpHelp("r[P2]=count()"), + /* 51 */ "ReadCookie" OpHelp(""), + /* 52 */ "SetCookie" OpHelp(""), + /* 53 */ "ReopenIdx" OpHelp("root=P2 iDb=P3"), + /* 54 */ "OpenRead" OpHelp("root=P2 iDb=P3"), + /* 55 */ "OpenWrite" OpHelp("root=P2 iDb=P3"), + /* 56 */ "OpenAutoindex" OpHelp("nColumn=P2"), + /* 57 */ "OpenEphemeral" OpHelp("nColumn=P2"), + /* 58 */ "SorterOpen" OpHelp(""), + /* 59 */ "SequenceTest" OpHelp("if( cursor[P1].ctr++ ) pc = P2"), + /* 60 */ "OpenPseudo" OpHelp("P3 columns in r[P2]"), + /* 61 */ "Close" OpHelp(""), + /* 62 */ "ColumnsUsed" OpHelp(""), + /* 63 */ "SeekLT" OpHelp("key=r[P3@P4]"), + /* 64 */ "SeekLE" OpHelp("key=r[P3@P4]"), + /* 65 */ "SeekGE" OpHelp("key=r[P3@P4]"), + /* 66 */ "SeekGT" OpHelp("key=r[P3@P4]"), + /* 67 */ "NoConflict" OpHelp("key=r[P3@P4]"), + /* 68 */ "NotFound" OpHelp("key=r[P3@P4]"), + /* 69 */ "Found" OpHelp("key=r[P3@P4]"), + /* 70 */ "NotExists" OpHelp("intkey=r[P3]"), + /* 71 */ "Or" OpHelp("r[P3]=(r[P1] || r[P2])"), + /* 72 */ "And" OpHelp("r[P3]=(r[P1] && r[P2])"), + /* 73 */ "Sequence" OpHelp("r[P2]=cursor[P1].ctr++"), + /* 74 */ "NewRowid" OpHelp("r[P2]=rowid"), + /* 75 */ "Insert" OpHelp("intkey=r[P3] data=r[P2]"), + /* 76 */ "IsNull" OpHelp("if r[P1]==NULL goto P2"), + /* 77 */ "NotNull" OpHelp("if r[P1]!=NULL goto P2"), + /* 78 */ "Ne" OpHelp("if r[P1]!=r[P3] goto P2"), + /* 79 */ "Eq" OpHelp("if r[P1]==r[P3] goto P2"), + /* 80 */ "Gt" OpHelp("if r[P1]>r[P3] goto P2"), + /* 81 */ "Le" OpHelp("if 
r[P1]<=r[P3] goto P2"), + /* 82 */ "Lt" OpHelp("if r[P1]<r[P3] goto P2"), + /* 83 */ "Ge" OpHelp("if r[P1]>=r[P3] goto P2"), + /* 84 */ "InsertInt" OpHelp("intkey=P3 data=r[P2]"), + /* 85 */ "BitAnd" OpHelp("r[P3]=r[P1]&r[P2]"), + /* 86 */ "BitOr" OpHelp("r[P3]=r[P1]|r[P2]"), + /* 87 */ "ShiftLeft" OpHelp("r[P3]=r[P2]<<r[P1]"), + /* 88 */ "ShiftRight" OpHelp("r[P3]=r[P2]>>r[P1]"), + /* 89 */ "Add" OpHelp("r[P3]=r[P1]+r[P2]"), + /* 90 */ "Subtract" OpHelp("r[P3]=r[P2]-r[P1]"), + /* 91 */ "Multiply" OpHelp("r[P3]=r[P1]*r[P2]"), + /* 92 */ "Divide" OpHelp("r[P3]=r[P2]/r[P1]"), + /* 93 */ "Remainder" OpHelp("r[P3]=r[P2]%r[P1]"), + /* 94 */ "Concat" OpHelp("r[P3]=r[P2]+r[P1]"), + /* 95 */ "Delete" OpHelp(""), + /* 96 */ "BitNot" OpHelp("r[P1]= ~r[P1]"), + /* 97 */ "String8" OpHelp("r[P2]='P4'"), + /* 98 */ "ResetCount" OpHelp(""), + /* 99 */ "SorterCompare" OpHelp("if key(P1)!=trim(r[P3],P4) goto P2"), + /* 100 */ "SorterData" OpHelp("r[P2]=data"), + /* 101 */ "RowKey" OpHelp("r[P2]=key"), + /* 102 */ "RowData" OpHelp("r[P2]=data"), + /* 103 */ "Rowid" OpHelp("r[P2]=rowid"), + /* 104 */ "NullRow" OpHelp(""), + /* 105 */ "Last" OpHelp(""), + /* 106 */ "SorterSort" OpHelp(""), + /* 107 */ "Sort" OpHelp(""), + /* 108 */ "Rewind" OpHelp(""), + /* 109 */ "SorterInsert" OpHelp(""), + /* 110 */ "IdxInsert" OpHelp("key=r[P2]"), + /* 111 */ "IdxDelete" OpHelp("key=r[P2@P3]"), + /* 112 */ "Seek" OpHelp("Move P3 to P1.rowid"), + /* 113 */ "IdxRowid" OpHelp("r[P2]=rowid"), + /* 114 */ "IdxLE" OpHelp("key=r[P3@P4]"), + /* 115 */ "IdxGT" OpHelp("key=r[P3@P4]"), + /* 116 */ "IdxLT" OpHelp("key=r[P3@P4]"), + /* 117 */ "IdxGE" OpHelp("key=r[P3@P4]"), + /* 118 */ "Destroy" OpHelp(""), + /* 119 */ "Clear" OpHelp(""), + /* 120 */ "ResetSorter" OpHelp(""), + /* 121 */ "CreateIndex" OpHelp("r[P2]=root iDb=P1"), + /* 122 */ "CreateTable" OpHelp("r[P2]=root iDb=P1"), + /* 123 */ "ParseSchema" OpHelp(""), + /* 124 */ "LoadAnalysis" OpHelp(""), + /* 125 */ "DropTable" OpHelp(""), + /* 126 */ "DropIndex" OpHelp(""), + /* 127 */ "DropTrigger" OpHelp(""), + /* 128 */ "IntegrityCk" OpHelp(""), + /* 129 */ "RowSetAdd" OpHelp("rowset(P1)=r[P2]"), + /* 130 */ "RowSetRead" OpHelp("r[P3]=rowset(P1)"), + /* 131 */ "RowSetTest" OpHelp("if r[P3] in rowset(P1) goto P2"), + /* 132 */ "Program" OpHelp(""), + /* 133 */ "Real" OpHelp("r[P2]=P4"), + /* 134 */ "Param" OpHelp(""), + /* 135 */ "FkCounter" OpHelp("fkctr[P1]+=P2"), + /* 136 */ "FkIfZero" OpHelp("if fkctr[P1]==0 goto P2"), + /* 137 */ "MemMax" OpHelp("r[P1]=max(r[P1],r[P2])"), + /* 138 */ "IfPos" OpHelp("if r[P1]>0 then r[P1]-=P3, goto P2"), + /* 139 */ "OffsetLimit" OpHelp("if r[P1]>0 then r[P2]=r[P1]+max(0,r[P3]) else r[P2]=(-1)"), + /* 140 */ "IfNotZero" OpHelp("if r[P1]!=0 then r[P1]-=P3, goto P2"), + /* 141 */ "DecrJumpZero" OpHelp("if (--r[P1])==0 goto P2"), + /* 142 */ "JumpZeroIncr" OpHelp("if (r[P1]++)==0 ) goto P2"), + /* 143 */ "AggStep0" OpHelp("accum=r[P3] step(r[P2@P5])"), + /* 144 */ "AggStep" OpHelp("accum=r[P3] step(r[P2@P5])"), + /* 145 */ "AggFinal" OpHelp("accum=r[P1] N=P2"), + /* 146 */ "IncrVacuum" OpHelp(""), + /* 147 */ "Expire" OpHelp(""), + /* 148 */ "TableLock" OpHelp("iDb=P1 root=P2 write=P3"), + /* 149 */ "VBegin" OpHelp(""), + /* 150 */ "VCreate" OpHelp(""), + /* 151 */ "VDestroy" OpHelp(""), + /* 152 */ "VOpen" OpHelp(""), + /* 153 */ "VColumn" OpHelp("r[P3]=vcolumn(P2)"), + /* 154 */ "VNext" OpHelp(""), + /* 155 */ "VRename" OpHelp(""), + /* 156 */ "Pagecount" OpHelp(""), + /* 157 */ "MaxPgcnt" OpHelp(""), + /* 158 */ "Init" OpHelp("Start at 
P2"), + /* 159 */ "CursorHint" OpHelp(""), + /* 160 */ "Noop" OpHelp(""), + /* 161 */ "Explain" OpHelp(""), }; return azName[i]; } #endif /************** End of opcodes.c *********************************************/ -/************** Begin file os_os2.c ******************************************/ +/************** Begin file os_unix.c *****************************************/ /* -** 2006 Feb 14 +** 2004 May 22 ** ** The author disclaims copyright to this source code. In place of ** a legal notice, here is a blessing: ** ** May you do good and not evil. @@ -20604,53 +27107,268 @@ ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ****************************************************************************** ** -** This file contains code that is specific to OS/2. -*/ - - -#if SQLITE_OS_OS2 - -/* -** A Note About Memory Allocation: -** -** This driver uses malloc()/free() directly rather than going through -** the SQLite-wrappers sqlite3_malloc()/sqlite3_free(). Those wrappers -** are designed for use on embedded systems where memory is scarce and -** malloc failures happen frequently. OS/2 does not typically run on -** embedded systems, and when it does the developers normally have bigger -** problems to worry about than running out of memory. So there is not -** a compelling need to use the wrappers. -** -** But there is a good reason to not use the wrappers. If we use the -** wrappers then we will get simulated malloc() failures within this -** driver. And that causes all kinds of problems for our tests. We -** could enhance SQLite to deal with simulated malloc failures within -** the OS driver, but the code to deal with those failure would not -** be exercised on Linux (which does not need to malloc() in the driver) -** and so we would have difficulty writing coverage tests for that -** code. Better to leave the code out, we think. -** -** The point of this discussion is as follows: When creating a new -** OS layer for an embedded system, if you use this file as an example, -** avoid the use of malloc()/free(). Those routines work ok on OS/2 -** desktops but not so well in embedded systems. -*/ - -/* -** Macros used to determine whether or not to use threads. -*/ -#if defined(SQLITE_THREADSAFE) && SQLITE_THREADSAFE -# define SQLITE_OS2_THREADS 1 -#endif +** This file contains the VFS implementation for unix-like operating systems +** include Linux, MacOSX, *BSD, QNX, VxWorks, AIX, HPUX, and others. +** +** There are actually several different VFS implementations in this file. +** The differences are in the way that file locking is done. The default +** implementation uses Posix Advisory Locks. Alternative implementations +** use flock(), dot-files, various proprietary locking schemas, or simply +** skip locking all together. +** +** This source file is organized into divisions where the logic for various +** subfunctions is contained within the appropriate division. PLEASE +** KEEP THE STRUCTURE OF THIS FILE INTACT. New code should be placed +** in the correct division and should be clearly labeled. +** +** The layout of divisions is as follows: +** +** * General-purpose declarations and utility functions. +** * Unique file ID logic used by VxWorks. 
+** * Various locking primitive implementations (all except proxy locking): +** + for Posix Advisory Locks +** + for no-op locks +** + for dot-file locks +** + for flock() locking +** + for named semaphore locks (VxWorks only) +** + for AFP filesystem locks (MacOSX only) +** * sqlite3_file methods not associated with locking. +** * Definitions of sqlite3_io_methods objects for all locking +** methods plus "finder" functions for each locking method. +** * sqlite3_vfs method implementations. +** * Locking primitives for the proxy uber-locking-method. (MacOSX only) +** * Definitions of sqlite3_vfs objects for all locking methods +** plus implementations of sqlite3_os_init() and sqlite3_os_end(). +*/ +/* #include "sqliteInt.h" */ +#if SQLITE_OS_UNIX /* This file is used on unix only */ + +/* +** There are various methods for file locking used for concurrency +** control: +** +** 1. POSIX locking (the default), +** 2. No locking, +** 3. Dot-file locking, +** 4. flock() locking, +** 5. AFP locking (OSX only), +** 6. Named POSIX semaphores (VXWorks only), +** 7. proxy locking. (OSX only) +** +** Styles 4, 5, and 7 are only available of SQLITE_ENABLE_LOCKING_STYLE +** is defined to 1. The SQLITE_ENABLE_LOCKING_STYLE also enables automatic +** selection of the appropriate locking style based on the filesystem +** where the database is located. +*/ +#if !defined(SQLITE_ENABLE_LOCKING_STYLE) +# if defined(__APPLE__) +# define SQLITE_ENABLE_LOCKING_STYLE 1 +# else +# define SQLITE_ENABLE_LOCKING_STYLE 0 +# endif +#endif + +/* +** standard include files. +*/ +#include <sys/types.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <unistd.h> +/* #include <time.h> */ +#include <sys/time.h> +#include <errno.h> +#if !defined(SQLITE_OMIT_WAL) || SQLITE_MAX_MMAP_SIZE>0 +# include <sys/mman.h> +#endif + +#if SQLITE_ENABLE_LOCKING_STYLE +# include <sys/ioctl.h> +# include <sys/file.h> +# include <sys/param.h> +#endif /* SQLITE_ENABLE_LOCKING_STYLE */ + +#if defined(__APPLE__) && ((__MAC_OS_X_VERSION_MIN_REQUIRED > 1050) || \ + (__IPHONE_OS_VERSION_MIN_REQUIRED > 2000)) +# if (!defined(TARGET_OS_EMBEDDED) || (TARGET_OS_EMBEDDED==0)) \ + && (!defined(TARGET_IPHONE_SIMULATOR) || (TARGET_IPHONE_SIMULATOR==0)) +# define HAVE_GETHOSTUUID 1 +# else +# warning "gethostuuid() is disabled." +# endif +#endif + + +#if OS_VXWORKS +/* # include <sys/ioctl.h> */ +# include <semaphore.h> +# include <limits.h> +#endif /* OS_VXWORKS */ + +#if defined(__APPLE__) || SQLITE_ENABLE_LOCKING_STYLE +# include <sys/mount.h> +#endif + +#ifdef HAVE_UTIME +# include <utime.h> +#endif + +/* +** Allowed values of unixFile.fsFlags +*/ +#define SQLITE_FSFLAGS_IS_MSDOS 0x1 + +/* +** If we are to be thread-safe, include the pthreads header and define +** the SQLITE_UNIX_THREADS macro. +*/ +#if SQLITE_THREADSAFE +/* # include <pthread.h> */ +# define SQLITE_UNIX_THREADS 1 +#endif + +/* +** Default permissions when creating a new file +*/ +#ifndef SQLITE_DEFAULT_FILE_PERMISSIONS +# define SQLITE_DEFAULT_FILE_PERMISSIONS 0644 +#endif + +/* +** Default permissions when creating auto proxy dir +*/ +#ifndef SQLITE_DEFAULT_PROXYDIR_PERMISSIONS +# define SQLITE_DEFAULT_PROXYDIR_PERMISSIONS 0755 +#endif + +/* +** Maximum supported path-length. +*/ +#define MAX_PATHNAME 512 + +/* +** Maximum supported symbolic links +*/ +#define SQLITE_MAX_SYMLINKS 100 + +/* Always cast the getpid() return type for compatibility with +** kernel modules in VxWorks. 
*/ +#define osGetpid(X) (pid_t)getpid() + +/* +** Only set the lastErrno if the error code is a real error and not +** a normal expected return code of SQLITE_BUSY or SQLITE_OK +*/ +#define IS_LOCK_ERROR(x) ((x != SQLITE_OK) && (x != SQLITE_BUSY)) + +/* Forward references */ +typedef struct unixShm unixShm; /* Connection shared memory */ +typedef struct unixShmNode unixShmNode; /* Shared memory instance */ +typedef struct unixInodeInfo unixInodeInfo; /* An i-node */ +typedef struct UnixUnusedFd UnixUnusedFd; /* An unused file descriptor */ + +/* +** Sometimes, after a file handle is closed by SQLite, the file descriptor +** cannot be closed immediately. In these cases, instances of the following +** structure are used to store the file descriptor while waiting for an +** opportunity to either close or reuse it. +*/ +struct UnixUnusedFd { + int fd; /* File descriptor to close */ + int flags; /* Flags this file descriptor was opened with */ + UnixUnusedFd *pNext; /* Next unused file descriptor on same file */ +}; + +/* +** The unixFile structure is subclass of sqlite3_file specific to the unix +** VFS implementations. +*/ +typedef struct unixFile unixFile; +struct unixFile { + sqlite3_io_methods const *pMethod; /* Always the first entry */ + sqlite3_vfs *pVfs; /* The VFS that created this unixFile */ + unixInodeInfo *pInode; /* Info about locks on this inode */ + int h; /* The file descriptor */ + unsigned char eFileLock; /* The type of lock held on this fd */ + unsigned short int ctrlFlags; /* Behavioral bits. UNIXFILE_* flags */ + int lastErrno; /* The unix errno from last I/O error */ + void *lockingContext; /* Locking style specific state */ + UnixUnusedFd *pUnused; /* Pre-allocated UnixUnusedFd */ + const char *zPath; /* Name of the file */ + unixShm *pShm; /* Shared memory segment information */ + int szChunk; /* Configured by FCNTL_CHUNK_SIZE */ +#if SQLITE_MAX_MMAP_SIZE>0 + int nFetchOut; /* Number of outstanding xFetch refs */ + sqlite3_int64 mmapSize; /* Usable size of mapping at pMapRegion */ + sqlite3_int64 mmapSizeActual; /* Actual size of mapping at pMapRegion */ + sqlite3_int64 mmapSizeMax; /* Configured FCNTL_MMAP_SIZE value */ + void *pMapRegion; /* Memory mapped region */ +#endif +#ifdef __QNXNTO__ + int sectorSize; /* Device sector size */ + int deviceCharacteristics; /* Precomputed device characteristics */ +#endif +#if SQLITE_ENABLE_LOCKING_STYLE + int openFlags; /* The flags specified at open() */ +#endif +#if SQLITE_ENABLE_LOCKING_STYLE || defined(__APPLE__) + unsigned fsFlags; /* cached details from statfs() */ +#endif +#if OS_VXWORKS + struct vxworksFileId *pId; /* Unique file ID */ +#endif +#ifdef SQLITE_DEBUG + /* The next group of variables are used to track whether or not the + ** transaction counter in bytes 24-27 of database files are updated + ** whenever any part of the database changes. An assertion fault will + ** occur if a file is updated without also updating the transaction + ** counter. This test is made to avoid new problems similar to the + ** one described by ticket #3584. + */ + unsigned char transCntrChng; /* True if the transaction counter changed */ + unsigned char dbUpdate; /* True if any part of database file changed */ + unsigned char inNormalWrite; /* True if in a normal write operation */ + +#endif + +#ifdef SQLITE_TEST + /* In test mode, increase the size of this structure a bit so that + ** it is larger than the struct CrashFile defined in test6.c. 
+ */ + char aPadding[32]; +#endif +}; + +/* This variable holds the process id (pid) from when the xRandomness() +** method was called. If xOpen() is called from a different process id, +** indicating that a fork() has occurred, the PRNG will be reset. +*/ +static pid_t randomnessPid = 0; + +/* +** Allowed values for the unixFile.ctrlFlags bitmask: +*/ +#define UNIXFILE_EXCL 0x01 /* Connections from one process only */ +#define UNIXFILE_RDONLY 0x02 /* Connection is read only */ +#define UNIXFILE_PERSIST_WAL 0x04 /* Persistent WAL mode */ +#ifndef SQLITE_DISABLE_DIRSYNC +# define UNIXFILE_DIRSYNC 0x08 /* Directory sync needed */ +#else +# define UNIXFILE_DIRSYNC 0x00 +#endif +#define UNIXFILE_PSOW 0x10 /* SQLITE_IOCAP_POWERSAFE_OVERWRITE */ +#define UNIXFILE_DELETE 0x20 /* Delete on close */ +#define UNIXFILE_URI 0x40 /* Filename might have query parameters */ +#define UNIXFILE_NOLOCK 0x80 /* Do no file locking */ /* ** Include code that is common to all os_*.c files */ -/************** Include os_common.h in the middle of os_os2.c ****************/ +/************** Include os_common.h in the middle of os_unix.c ***************/ /************** Begin file os_common.h ***************************************/ /* ** 2004 May 22 ** ** The author disclaims copyright to this source code. In place of @@ -20679,39 +27397,18 @@ */ #ifdef MEMORY_DEBUG # error "The MEMORY_DEBUG macro is obsolete. Use SQLITE_DEBUG instead." #endif -#ifdef SQLITE_DEBUG -SQLITE_PRIVATE int sqlite3OSTrace = 0; -#define OSTRACE1(X) if( sqlite3OSTrace ) sqlite3DebugPrintf(X) -#define OSTRACE2(X,Y) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y) -#define OSTRACE3(X,Y,Z) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y,Z) -#define OSTRACE4(X,Y,Z,A) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y,Z,A) -#define OSTRACE5(X,Y,Z,A,B) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y,Z,A,B) -#define OSTRACE6(X,Y,Z,A,B,C) \ - if(sqlite3OSTrace) sqlite3DebugPrintf(X,Y,Z,A,B,C) -#define OSTRACE7(X,Y,Z,A,B,C,D) \ - if(sqlite3OSTrace) sqlite3DebugPrintf(X,Y,Z,A,B,C,D) -#else -#define OSTRACE1(X) -#define OSTRACE2(X,Y) -#define OSTRACE3(X,Y,Z) -#define OSTRACE4(X,Y,Z,A) -#define OSTRACE5(X,Y,Z,A,B) -#define OSTRACE6(X,Y,Z,A,B,C) -#define OSTRACE7(X,Y,Z,A,B,C,D) -#endif - /* ** Macros for performance tracing. Normally turned off. Only works ** on i486 hardware. */ #ifdef SQLITE_PERFORMANCE_TRACE -/* -** hwtime.h contains inline assembler code for implementing +/* +** hwtime.h contains inline assembler code for implementing ** high-performance timing routines. */ /************** Include hwtime.h in the middle of os_common.h ****************/ /************** Begin file hwtime.h ******************************************/ /* @@ -20817,18 +27514,18 @@ /* ** If we compile with the SQLITE_TEST macro set, then the following block ** of code will give us the ability to simulate a disk I/O error. This ** is used for testing the I/O recovery logic. 
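**
** For example, an xRead implementation typically places
**
**       SimulateIOError( return SQLITE_IOERR_READ );
**
** near its start; when the countdown in sqlite3_io_error_pending reaches the
** chosen call, the macro runs its argument and the read fails on cue.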
*/ -#ifdef SQLITE_TEST -SQLITE_API int sqlite3_io_error_hit = 0; /* Total number of I/O Errors */ -SQLITE_API int sqlite3_io_error_hardhit = 0; /* Number of non-benign errors */ -SQLITE_API int sqlite3_io_error_pending = 0; /* Count down to first I/O error */ -SQLITE_API int sqlite3_io_error_persist = 0; /* True if I/O errors persist */ -SQLITE_API int sqlite3_io_error_benign = 0; /* True if errors are benign */ -SQLITE_API int sqlite3_diskfull_pending = 0; -SQLITE_API int sqlite3_diskfull = 0; +#if defined(SQLITE_TEST) +SQLITE_API extern int sqlite3_io_error_hit; +SQLITE_API extern int sqlite3_io_error_hardhit; +SQLITE_API extern int sqlite3_io_error_pending; +SQLITE_API extern int sqlite3_io_error_persist; +SQLITE_API extern int sqlite3_io_error_benign; +SQLITE_API extern int sqlite3_diskfull_pending; +SQLITE_API extern int sqlite3_diskfull; #define SimulateIOErrorBenign(X) sqlite3_io_error_benign=(X) #define SimulateIOError(CODE) \ if( (sqlite3_io_error_persist && sqlite3_io_error_hit) \ || sqlite3_io_error_pending-- == 1 ) \ { local_ioerr(); CODE; } @@ -20850,1557 +27547,21 @@ } #else #define SimulateIOErrorBenign(X) #define SimulateIOError(A) #define SimulateDiskfullError(A) -#endif - -/* -** When testing, keep a count of the number of open files. -*/ -#ifdef SQLITE_TEST -SQLITE_API int sqlite3_open_file_count = 0; -#define OpenCounter(X) sqlite3_open_file_count+=(X) -#else -#define OpenCounter(X) -#endif - -#endif /* !defined(_OS_COMMON_H_) */ - -/************** End of os_common.h *******************************************/ -/************** Continuing where we left off in os_os2.c *********************/ - -/* -** The os2File structure is subclass of sqlite3_file specific for the OS/2 -** protability layer. -*/ -typedef struct os2File os2File; -struct os2File { - const sqlite3_io_methods *pMethod; /* Always the first entry */ - HFILE h; /* Handle for accessing the file */ - char* pathToDel; /* Name of file to delete on close, NULL if not */ - unsigned char locktype; /* Type of lock currently held on this file */ -}; - -#define LOCK_TIMEOUT 10L /* the default locking timeout */ - -/***************************************************************************** -** The next group of routines implement the I/O methods specified -** by the sqlite3_io_methods object. -******************************************************************************/ - -/* -** Close a file. -*/ -static int os2Close( sqlite3_file *id ){ - APIRET rc = NO_ERROR; - os2File *pFile; - if( id && (pFile = (os2File*)id) != 0 ){ - OSTRACE2( "CLOSE %d\n", pFile->h ); - rc = DosClose( pFile->h ); - pFile->locktype = NO_LOCK; - if( pFile->pathToDel != NULL ){ - rc = DosForceDelete( (PSZ)pFile->pathToDel ); - free( pFile->pathToDel ); - pFile->pathToDel = NULL; - } - id = 0; - OpenCounter( -1 ); - } - - return rc == NO_ERROR ? SQLITE_OK : SQLITE_IOERR; -} - -/* -** Read data from a file into a buffer. Return SQLITE_OK if all -** bytes were read successfully and SQLITE_IOERR if anything goes -** wrong. 
-*/ -static int os2Read( - sqlite3_file *id, /* File to read from */ - void *pBuf, /* Write content into this buffer */ - int amt, /* Number of bytes to read */ - sqlite3_int64 offset /* Begin reading at this offset */ -){ - ULONG fileLocation = 0L; - ULONG got; - os2File *pFile = (os2File*)id; - assert( id!=0 ); - SimulateIOError( return SQLITE_IOERR_READ ); - OSTRACE3( "READ %d lock=%d\n", pFile->h, pFile->locktype ); - if( DosSetFilePtr(pFile->h, offset, FILE_BEGIN, &fileLocation) != NO_ERROR ){ - return SQLITE_IOERR; - } - if( DosRead( pFile->h, pBuf, amt, &got ) != NO_ERROR ){ - return SQLITE_IOERR_READ; - } - if( got == (ULONG)amt ) - return SQLITE_OK; - else { - /* Unread portions of the input buffer must be zero-filled */ - memset(&((char*)pBuf)[got], 0, amt-got); - return SQLITE_IOERR_SHORT_READ; - } -} - -/* -** Write data from a buffer into a file. Return SQLITE_OK on success -** or some other error code on failure. -*/ -static int os2Write( - sqlite3_file *id, /* File to write into */ - const void *pBuf, /* The bytes to be written */ - int amt, /* Number of bytes to write */ - sqlite3_int64 offset /* Offset into the file to begin writing at */ -){ - ULONG fileLocation = 0L; - APIRET rc = NO_ERROR; - ULONG wrote; - os2File *pFile = (os2File*)id; - assert( id!=0 ); - SimulateIOError( return SQLITE_IOERR_WRITE ); - SimulateDiskfullError( return SQLITE_FULL ); - OSTRACE3( "WRITE %d lock=%d\n", pFile->h, pFile->locktype ); - if( DosSetFilePtr(pFile->h, offset, FILE_BEGIN, &fileLocation) != NO_ERROR ){ - return SQLITE_IOERR; - } - assert( amt>0 ); - while( amt > 0 && - ( rc = DosWrite( pFile->h, (PVOID)pBuf, amt, &wrote ) ) == NO_ERROR && - wrote > 0 - ){ - amt -= wrote; - pBuf = &((char*)pBuf)[wrote]; - } - - return ( rc != NO_ERROR || amt > (int)wrote ) ? SQLITE_FULL : SQLITE_OK; -} - -/* -** Truncate an open file to a specified size -*/ -static int os2Truncate( sqlite3_file *id, i64 nByte ){ - APIRET rc = NO_ERROR; - os2File *pFile = (os2File*)id; - OSTRACE3( "TRUNCATE %d %lld\n", pFile->h, nByte ); - SimulateIOError( return SQLITE_IOERR_TRUNCATE ); - rc = DosSetFileSize( pFile->h, nByte ); - return rc == NO_ERROR ? SQLITE_OK : SQLITE_IOERR_TRUNCATE; -} - -#ifdef SQLITE_TEST -/* -** Count the number of fullsyncs and normal syncs. This is used to test -** that syncs and fullsyncs are occuring at the right times. -*/ -SQLITE_API int sqlite3_sync_count = 0; -SQLITE_API int sqlite3_fullsync_count = 0; -#endif - -/* -** Make sure all writes to a particular file are committed to disk. -*/ -static int os2Sync( sqlite3_file *id, int flags ){ - os2File *pFile = (os2File*)id; - OSTRACE3( "SYNC %d lock=%d\n", pFile->h, pFile->locktype ); -#ifdef SQLITE_TEST - if( flags & SQLITE_SYNC_FULL){ - sqlite3_fullsync_count++; - } - sqlite3_sync_count++; -#endif - /* If we compiled with the SQLITE_NO_SYNC flag, then syncing is a - ** no-op - */ -#ifdef SQLITE_NO_SYNC - UNUSED_PARAMETER(pFile); - return SQLITE_OK; -#else - return DosResetBuffer( pFile->h ) == NO_ERROR ? 
SQLITE_OK : SQLITE_IOERR; -#endif -} - -/* -** Determine the current size of a file in bytes -*/ -static int os2FileSize( sqlite3_file *id, sqlite3_int64 *pSize ){ - APIRET rc = NO_ERROR; - FILESTATUS3 fsts3FileInfo; - memset(&fsts3FileInfo, 0, sizeof(fsts3FileInfo)); - assert( id!=0 ); - SimulateIOError( return SQLITE_IOERR_FSTAT ); - rc = DosQueryFileInfo( ((os2File*)id)->h, FIL_STANDARD, &fsts3FileInfo, sizeof(FILESTATUS3) ); - if( rc == NO_ERROR ){ - *pSize = fsts3FileInfo.cbFile; - return SQLITE_OK; - }else{ - return SQLITE_IOERR_FSTAT; - } -} - -/* -** Acquire a reader lock. -*/ -static int getReadLock( os2File *pFile ){ - FILELOCK LockArea, - UnlockArea; - APIRET res; - memset(&LockArea, 0, sizeof(LockArea)); - memset(&UnlockArea, 0, sizeof(UnlockArea)); - LockArea.lOffset = SHARED_FIRST; - LockArea.lRange = SHARED_SIZE; - UnlockArea.lOffset = 0L; - UnlockArea.lRange = 0L; - res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, LOCK_TIMEOUT, 1L ); - OSTRACE3( "GETREADLOCK %d res=%d\n", pFile->h, res ); - return res; -} - -/* -** Undo a readlock -*/ -static int unlockReadLock( os2File *id ){ - FILELOCK LockArea, - UnlockArea; - APIRET res; - memset(&LockArea, 0, sizeof(LockArea)); - memset(&UnlockArea, 0, sizeof(UnlockArea)); - LockArea.lOffset = 0L; - LockArea.lRange = 0L; - UnlockArea.lOffset = SHARED_FIRST; - UnlockArea.lRange = SHARED_SIZE; - res = DosSetFileLocks( id->h, &UnlockArea, &LockArea, LOCK_TIMEOUT, 1L ); - OSTRACE3( "UNLOCK-READLOCK file handle=%d res=%d?\n", id->h, res ); - return res; -} - -/* -** Lock the file with the lock specified by parameter locktype - one -** of the following: -** -** (1) SHARED_LOCK -** (2) RESERVED_LOCK -** (3) PENDING_LOCK -** (4) EXCLUSIVE_LOCK -** -** Sometimes when requesting one lock state, additional lock states -** are inserted in between. The locking might fail on one of the later -** transitions leaving the lock state different from what it started but -** still short of its goal. The following chart shows the allowed -** transitions and the inserted intermediate states: -** -** UNLOCKED -> SHARED -** SHARED -> RESERVED -** SHARED -> (PENDING) -> EXCLUSIVE -** RESERVED -> (PENDING) -> EXCLUSIVE -** PENDING -> EXCLUSIVE -** -** This routine will only increase a lock. The os2Unlock() routine -** erases all locks at once and returns us immediately to locking level 0. -** It is not possible to lower the locking level one step at a time. You -** must go straight to locking level 0. -*/ -static int os2Lock( sqlite3_file *id, int locktype ){ - int rc = SQLITE_OK; /* Return code from subroutines */ - APIRET res = NO_ERROR; /* Result of an OS/2 lock call */ - int newLocktype; /* Set pFile->locktype to this value before exiting */ - int gotPendingLock = 0;/* True if we acquired a PENDING lock this time */ - FILELOCK LockArea, - UnlockArea; - os2File *pFile = (os2File*)id; - memset(&LockArea, 0, sizeof(LockArea)); - memset(&UnlockArea, 0, sizeof(UnlockArea)); - assert( pFile!=0 ); - OSTRACE4( "LOCK %d %d was %d\n", pFile->h, locktype, pFile->locktype ); - - /* If there is already a lock of this type or more restrictive on the - ** os2File, do nothing. Don't use the end_lock: exit path, as - ** sqlite3_mutex_enter() hasn't been called yet. 
- */ - if( pFile->locktype>=locktype ){ - OSTRACE3( "LOCK %d %d ok (already held)\n", pFile->h, locktype ); - return SQLITE_OK; - } - - /* Make sure the locking sequence is correct - */ - assert( pFile->locktype!=NO_LOCK || locktype==SHARED_LOCK ); - assert( locktype!=PENDING_LOCK ); - assert( locktype!=RESERVED_LOCK || pFile->locktype==SHARED_LOCK ); - - /* Lock the PENDING_LOCK byte if we need to acquire a PENDING lock or - ** a SHARED lock. If we are acquiring a SHARED lock, the acquisition of - ** the PENDING_LOCK byte is temporary. - */ - newLocktype = pFile->locktype; - if( pFile->locktype==NO_LOCK - || (locktype==EXCLUSIVE_LOCK && pFile->locktype==RESERVED_LOCK) - ){ - LockArea.lOffset = PENDING_BYTE; - LockArea.lRange = 1L; - UnlockArea.lOffset = 0L; - UnlockArea.lRange = 0L; - - /* wait longer than LOCK_TIMEOUT here not to have to try multiple times */ - res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, 100L, 0L ); - if( res == NO_ERROR ){ - gotPendingLock = 1; - OSTRACE3( "LOCK %d pending lock boolean set. res=%d\n", pFile->h, res ); - } - } - - /* Acquire a shared lock - */ - if( locktype==SHARED_LOCK && res == NO_ERROR ){ - assert( pFile->locktype==NO_LOCK ); - res = getReadLock(pFile); - if( res == NO_ERROR ){ - newLocktype = SHARED_LOCK; - } - OSTRACE3( "LOCK %d acquire shared lock. res=%d\n", pFile->h, res ); - } - - /* Acquire a RESERVED lock - */ - if( locktype==RESERVED_LOCK && res == NO_ERROR ){ - assert( pFile->locktype==SHARED_LOCK ); - LockArea.lOffset = RESERVED_BYTE; - LockArea.lRange = 1L; - UnlockArea.lOffset = 0L; - UnlockArea.lRange = 0L; - res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, LOCK_TIMEOUT, 0L ); - if( res == NO_ERROR ){ - newLocktype = RESERVED_LOCK; - } - OSTRACE3( "LOCK %d acquire reserved lock. res=%d\n", pFile->h, res ); - } - - /* Acquire a PENDING lock - */ - if( locktype==EXCLUSIVE_LOCK && res == NO_ERROR ){ - newLocktype = PENDING_LOCK; - gotPendingLock = 0; - OSTRACE2( "LOCK %d acquire pending lock. pending lock boolean unset.\n", pFile->h ); - } - - /* Acquire an EXCLUSIVE lock - */ - if( locktype==EXCLUSIVE_LOCK && res == NO_ERROR ){ - assert( pFile->locktype>=SHARED_LOCK ); - res = unlockReadLock(pFile); - OSTRACE2( "unreadlock = %d\n", res ); - LockArea.lOffset = SHARED_FIRST; - LockArea.lRange = SHARED_SIZE; - UnlockArea.lOffset = 0L; - UnlockArea.lRange = 0L; - res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, LOCK_TIMEOUT, 0L ); - if( res == NO_ERROR ){ - newLocktype = EXCLUSIVE_LOCK; - }else{ - OSTRACE2( "OS/2 error-code = %d\n", res ); - getReadLock(pFile); - } - OSTRACE3( "LOCK %d acquire exclusive lock. res=%d\n", pFile->h, res ); - } - - /* If we are holding a PENDING lock that ought to be released, then - ** release it now. - */ - if( gotPendingLock && locktype==SHARED_LOCK ){ - int r; - LockArea.lOffset = 0L; - LockArea.lRange = 0L; - UnlockArea.lOffset = PENDING_BYTE; - UnlockArea.lRange = 1L; - r = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, LOCK_TIMEOUT, 0L ); - OSTRACE3( "LOCK %d unlocking pending/is shared. r=%d\n", pFile->h, r ); - } - - /* Update the state of the lock has held in the file descriptor then - ** return the appropriate result code. 
- */ - if( res == NO_ERROR ){ - rc = SQLITE_OK; - }else{ - OSTRACE4( "LOCK FAILED %d trying for %d but got %d\n", pFile->h, - locktype, newLocktype ); - rc = SQLITE_BUSY; - } - pFile->locktype = newLocktype; - OSTRACE3( "LOCK %d now %d\n", pFile->h, pFile->locktype ); - return rc; -} - -/* -** This routine checks if there is a RESERVED lock held on the specified -** file by this or any other process. If such a lock is held, return -** non-zero, otherwise zero. -*/ -static int os2CheckReservedLock( sqlite3_file *id, int *pOut ){ - int r = 0; - os2File *pFile = (os2File*)id; - assert( pFile!=0 ); - if( pFile->locktype>=RESERVED_LOCK ){ - r = 1; - OSTRACE3( "TEST WR-LOCK %d %d (local)\n", pFile->h, r ); - }else{ - FILELOCK LockArea, - UnlockArea; - APIRET rc = NO_ERROR; - memset(&LockArea, 0, sizeof(LockArea)); - memset(&UnlockArea, 0, sizeof(UnlockArea)); - LockArea.lOffset = RESERVED_BYTE; - LockArea.lRange = 1L; - UnlockArea.lOffset = 0L; - UnlockArea.lRange = 0L; - rc = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, LOCK_TIMEOUT, 0L ); - OSTRACE3( "TEST WR-LOCK %d lock reserved byte rc=%d\n", pFile->h, rc ); - if( rc == NO_ERROR ){ - APIRET rcu = NO_ERROR; /* return code for unlocking */ - LockArea.lOffset = 0L; - LockArea.lRange = 0L; - UnlockArea.lOffset = RESERVED_BYTE; - UnlockArea.lRange = 1L; - rcu = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, LOCK_TIMEOUT, 0L ); - OSTRACE3( "TEST WR-LOCK %d unlock reserved byte r=%d\n", pFile->h, rcu ); - } - r = !(rc == NO_ERROR); - OSTRACE3( "TEST WR-LOCK %d %d (remote)\n", pFile->h, r ); - } - *pOut = r; - return SQLITE_OK; -} - -/* -** Lower the locking level on file descriptor id to locktype. locktype -** must be either NO_LOCK or SHARED_LOCK. -** -** If the locking level of the file descriptor is already at or below -** the requested locking level, this routine is a no-op. -** -** It is not possible for this routine to fail if the second argument -** is NO_LOCK. If the second argument is SHARED_LOCK then this routine -** might return SQLITE_IOERR; -*/ -static int os2Unlock( sqlite3_file *id, int locktype ){ - int type; - os2File *pFile = (os2File*)id; - APIRET rc = SQLITE_OK; - APIRET res = NO_ERROR; - FILELOCK LockArea, - UnlockArea; - memset(&LockArea, 0, sizeof(LockArea)); - memset(&UnlockArea, 0, sizeof(UnlockArea)); - assert( pFile!=0 ); - assert( locktype<=SHARED_LOCK ); - OSTRACE4( "UNLOCK %d to %d was %d\n", pFile->h, locktype, pFile->locktype ); - type = pFile->locktype; - if( type>=EXCLUSIVE_LOCK ){ - LockArea.lOffset = 0L; - LockArea.lRange = 0L; - UnlockArea.lOffset = SHARED_FIRST; - UnlockArea.lRange = SHARED_SIZE; - res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, LOCK_TIMEOUT, 0L ); - OSTRACE3( "UNLOCK %d exclusive lock res=%d\n", pFile->h, res ); - if( locktype==SHARED_LOCK && getReadLock(pFile) != NO_ERROR ){ - /* This should never happen. 
We should always be able to - ** reacquire the read lock */ - OSTRACE3( "UNLOCK %d to %d getReadLock() failed\n", pFile->h, locktype ); - rc = SQLITE_IOERR_UNLOCK; - } - } - if( type>=RESERVED_LOCK ){ - LockArea.lOffset = 0L; - LockArea.lRange = 0L; - UnlockArea.lOffset = RESERVED_BYTE; - UnlockArea.lRange = 1L; - res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, LOCK_TIMEOUT, 0L ); - OSTRACE3( "UNLOCK %d reserved res=%d\n", pFile->h, res ); - } - if( locktype==NO_LOCK && type>=SHARED_LOCK ){ - res = unlockReadLock(pFile); - OSTRACE5( "UNLOCK %d is %d want %d res=%d\n", pFile->h, type, locktype, res ); - } - if( type>=PENDING_LOCK ){ - LockArea.lOffset = 0L; - LockArea.lRange = 0L; - UnlockArea.lOffset = PENDING_BYTE; - UnlockArea.lRange = 1L; - res = DosSetFileLocks( pFile->h, &UnlockArea, &LockArea, LOCK_TIMEOUT, 0L ); - OSTRACE3( "UNLOCK %d pending res=%d\n", pFile->h, res ); - } - pFile->locktype = locktype; - OSTRACE3( "UNLOCK %d now %d\n", pFile->h, pFile->locktype ); - return rc; -} - -/* -** Control and query of the open file handle. -*/ -static int os2FileControl(sqlite3_file *id, int op, void *pArg){ - switch( op ){ - case SQLITE_FCNTL_LOCKSTATE: { - *(int*)pArg = ((os2File*)id)->locktype; - OSTRACE3( "FCNTL_LOCKSTATE %d lock=%d\n", ((os2File*)id)->h, ((os2File*)id)->locktype ); - return SQLITE_OK; - } - } - return SQLITE_ERROR; -} - -/* -** Return the sector size in bytes of the underlying block device for -** the specified file. This is almost always 512 bytes, but may be -** larger for some devices. -** -** SQLite code assumes this function cannot fail. It also assumes that -** if two files are created in the same file-system directory (i.e. -** a database and its journal file) that the sector size will be the -** same for both. -*/ -static int os2SectorSize(sqlite3_file *id){ - return SQLITE_DEFAULT_SECTOR_SIZE; -} - -/* -** Return a vector of device characteristics. -*/ -static int os2DeviceCharacteristics(sqlite3_file *id){ - return 0; -} - - -/* -** Character set conversion objects used by conversion routines. -*/ -static UconvObject ucUtf8 = NULL; /* convert between UTF-8 and UCS-2 */ -static UconvObject uclCp = NULL; /* convert between local codepage and UCS-2 */ - -/* -** Helper function to initialize the conversion objects from and to UTF-8. -*/ -static void initUconvObjects( void ){ - if( UniCreateUconvObject( UTF_8, &ucUtf8 ) != ULS_SUCCESS ) - ucUtf8 = NULL; - if ( UniCreateUconvObject( (UniChar *)L"@path=yes", &uclCp ) != ULS_SUCCESS ) - uclCp = NULL; -} - -/* -** Helper function to free the conversion objects from and to UTF-8. -*/ -static void freeUconvObjects( void ){ - if ( ucUtf8 ) - UniFreeUconvObject( ucUtf8 ); - if ( uclCp ) - UniFreeUconvObject( uclCp ); - ucUtf8 = NULL; - uclCp = NULL; -} - -/* -** Helper function to convert UTF-8 filenames to local OS/2 codepage. -** The two-step process: first convert the incoming UTF-8 string -** into UCS-2 and then from UCS-2 to the current codepage. -** The returned char pointer has to be freed. 
-*/ -static char *convertUtf8PathToCp( const char *in ){ - UniChar tempPath[CCHMAXPATH]; - char *out = (char *)calloc( CCHMAXPATH, 1 ); - - if( !out ) - return NULL; - - if( !ucUtf8 || !uclCp ) - initUconvObjects(); - - /* determine string for the conversion of UTF-8 which is CP1208 */ - if( UniStrToUcs( ucUtf8, tempPath, (char *)in, CCHMAXPATH ) != ULS_SUCCESS ) - return out; /* if conversion fails, return the empty string */ - - /* conversion for current codepage which can be used for paths */ - UniStrFromUcs( uclCp, out, tempPath, CCHMAXPATH ); - - return out; -} - -/* -** Helper function to convert filenames from local codepage to UTF-8. -** The two-step process: first convert the incoming codepage-specific -** string into UCS-2 and then from UCS-2 to the codepage of UTF-8. -** The returned char pointer has to be freed. -** -** This function is non-static to be able to use this in shell.c and -** similar applications that take command line arguments. -*/ -char *convertCpPathToUtf8( const char *in ){ - UniChar tempPath[CCHMAXPATH]; - char *out = (char *)calloc( CCHMAXPATH, 1 ); - - if( !out ) - return NULL; - - if( !ucUtf8 || !uclCp ) - initUconvObjects(); - - /* conversion for current codepage which can be used for paths */ - if( UniStrToUcs( uclCp, tempPath, (char *)in, CCHMAXPATH ) != ULS_SUCCESS ) - return out; /* if conversion fails, return the empty string */ - - /* determine string for the conversion of UTF-8 which is CP1208 */ - UniStrFromUcs( ucUtf8, out, tempPath, CCHMAXPATH ); - - return out; -} - -/* -** This vector defines all the methods that can operate on an -** sqlite3_file for os2. -*/ -static const sqlite3_io_methods os2IoMethod = { - 1, /* iVersion */ - os2Close, - os2Read, - os2Write, - os2Truncate, - os2Sync, - os2FileSize, - os2Lock, - os2Unlock, - os2CheckReservedLock, - os2FileControl, - os2SectorSize, - os2DeviceCharacteristics -}; - -/*************************************************************************** -** Here ends the I/O methods that form the sqlite3_io_methods object. -** -** The next block of code implements the VFS methods. -****************************************************************************/ - -/* -** Create a temporary file name in zBuf. zBuf must be big enough to -** hold at pVfs->mxPathname characters. -*/ -static int getTempname(int nBuf, char *zBuf ){ - static const unsigned char zChars[] = - "abcdefghijklmnopqrstuvwxyz" - "ABCDEFGHIJKLMNOPQRSTUVWXYZ" - "0123456789"; - int i, j; - char zTempPathBuf[3]; - PSZ zTempPath = (PSZ)&zTempPathBuf; - if( sqlite3_temp_directory ){ - zTempPath = sqlite3_temp_directory; - }else{ - if( DosScanEnv( (PSZ)"TEMP", &zTempPath ) ){ - if( DosScanEnv( (PSZ)"TMP", &zTempPath ) ){ - if( DosScanEnv( (PSZ)"TMPDIR", &zTempPath ) ){ - ULONG ulDriveNum = 0, ulDriveMap = 0; - DosQueryCurrentDisk( &ulDriveNum, &ulDriveMap ); - sprintf( (char*)zTempPath, "%c:", (char)( 'A' + ulDriveNum - 1 ) ); - } - } - } - } - /* Strip off a trailing slashes or backslashes, otherwise we would get * - * multiple (back)slashes which causes DosOpen() to fail. * - * Trailing spaces are not allowed, either. 
*/ - j = sqlite3Strlen30(zTempPath); - while( j > 0 && ( zTempPath[j-1] == '\\' || zTempPath[j-1] == '/' - || zTempPath[j-1] == ' ' ) ){ - j--; - } - zTempPath[j] = '\0'; - if( !sqlite3_temp_directory ){ - char *zTempPathUTF = convertCpPathToUtf8( zTempPath ); - sqlite3_snprintf( nBuf-30, zBuf, - "%s\\"SQLITE_TEMP_FILE_PREFIX, zTempPathUTF ); - free( zTempPathUTF ); - }else{ - sqlite3_snprintf( nBuf-30, zBuf, - "%s\\"SQLITE_TEMP_FILE_PREFIX, zTempPath ); - } - j = sqlite3Strlen30( zBuf ); - sqlite3_randomness( 20, &zBuf[j] ); - for( i = 0; i < 20; i++, j++ ){ - zBuf[j] = (char)zChars[ ((unsigned char)zBuf[j])%(sizeof(zChars)-1) ]; - } - zBuf[j] = 0; - OSTRACE2( "TEMP FILENAME: %s\n", zBuf ); - return SQLITE_OK; -} - - -/* -** Turn a relative pathname into a full pathname. Write the full -** pathname into zFull[]. zFull[] will be at least pVfs->mxPathname -** bytes in size. -*/ -static int os2FullPathname( - sqlite3_vfs *pVfs, /* Pointer to vfs object */ - const char *zRelative, /* Possibly relative input path */ - int nFull, /* Size of output buffer in bytes */ - char *zFull /* Output buffer */ -){ - char *zRelativeCp = convertUtf8PathToCp( zRelative ); - char zFullCp[CCHMAXPATH] = "\0"; - char *zFullUTF; - APIRET rc = DosQueryPathInfo( zRelativeCp, FIL_QUERYFULLNAME, zFullCp, - CCHMAXPATH ); - free( zRelativeCp ); - zFullUTF = convertCpPathToUtf8( zFullCp ); - sqlite3_snprintf( nFull, zFull, zFullUTF ); - free( zFullUTF ); - return rc == NO_ERROR ? SQLITE_OK : SQLITE_IOERR; -} - - -/* -** Open a file. -*/ -static int os2Open( - sqlite3_vfs *pVfs, /* Not used */ - const char *zName, /* Name of the file */ - sqlite3_file *id, /* Write the SQLite file handle here */ - int flags, /* Open mode flags */ - int *pOutFlags /* Status return flags */ -){ - HFILE h; - ULONG ulFileAttribute = FILE_NORMAL; - ULONG ulOpenFlags = 0; - ULONG ulOpenMode = 0; - os2File *pFile = (os2File*)id; - APIRET rc = NO_ERROR; - ULONG ulAction; - char *zNameCp; - char zTmpname[CCHMAXPATH+1]; /* Buffer to hold name of temp file */ - - /* If the second argument to this function is NULL, generate a - ** temporary file name to use - */ - if( !zName ){ - int rc = getTempname(CCHMAXPATH+1, zTmpname); - if( rc!=SQLITE_OK ){ - return rc; - } - zName = zTmpname; - } - - - memset( pFile, 0, sizeof(*pFile) ); - - OSTRACE2( "OPEN want %d\n", flags ); - - if( flags & SQLITE_OPEN_READWRITE ){ - ulOpenMode |= OPEN_ACCESS_READWRITE; - OSTRACE1( "OPEN read/write\n" ); - }else{ - ulOpenMode |= OPEN_ACCESS_READONLY; - OSTRACE1( "OPEN read only\n" ); - } - - if( flags & SQLITE_OPEN_CREATE ){ - ulOpenFlags |= OPEN_ACTION_OPEN_IF_EXISTS | OPEN_ACTION_CREATE_IF_NEW; - OSTRACE1( "OPEN open new/create\n" ); - }else{ - ulOpenFlags |= OPEN_ACTION_OPEN_IF_EXISTS | OPEN_ACTION_FAIL_IF_NEW; - OSTRACE1( "OPEN open existing\n" ); - } - - if( flags & SQLITE_OPEN_MAIN_DB ){ - ulOpenMode |= OPEN_SHARE_DENYNONE; - OSTRACE1( "OPEN share read/write\n" ); - }else{ - ulOpenMode |= OPEN_SHARE_DENYWRITE; - OSTRACE1( "OPEN share read only\n" ); - } - - if( flags & SQLITE_OPEN_DELETEONCLOSE ){ - char pathUtf8[CCHMAXPATH]; -#ifdef NDEBUG /* when debugging we want to make sure it is deleted */ - ulFileAttribute = FILE_HIDDEN; -#endif - os2FullPathname( pVfs, zName, CCHMAXPATH, pathUtf8 ); - pFile->pathToDel = convertUtf8PathToCp( pathUtf8 ); - OSTRACE1( "OPEN hidden/delete on close file attributes\n" ); - }else{ - pFile->pathToDel = NULL; - OSTRACE1( "OPEN normal file attribute\n" ); - } - - /* always open in random access mode for possibly better speed */ - 
ulOpenMode |= OPEN_FLAGS_RANDOM; - ulOpenMode |= OPEN_FLAGS_FAIL_ON_ERROR; - ulOpenMode |= OPEN_FLAGS_NOINHERIT; - - zNameCp = convertUtf8PathToCp( zName ); - rc = DosOpen( (PSZ)zNameCp, - &h, - &ulAction, - 0L, - ulFileAttribute, - ulOpenFlags, - ulOpenMode, - (PEAOP2)NULL ); - free( zNameCp ); - if( rc != NO_ERROR ){ - OSTRACE7( "OPEN Invalid handle rc=%d: zName=%s, ulAction=%#lx, ulAttr=%#lx, ulFlags=%#lx, ulMode=%#lx\n", - rc, zName, ulAction, ulFileAttribute, ulOpenFlags, ulOpenMode ); - if( pFile->pathToDel ) - free( pFile->pathToDel ); - pFile->pathToDel = NULL; - if( flags & SQLITE_OPEN_READWRITE ){ - OSTRACE2( "OPEN %d Invalid handle\n", ((flags | SQLITE_OPEN_READONLY) & ~SQLITE_OPEN_READWRITE) ); - return os2Open( pVfs, zName, id, - ((flags | SQLITE_OPEN_READONLY) & ~SQLITE_OPEN_READWRITE), - pOutFlags ); - }else{ - return SQLITE_CANTOPEN; - } - } - - if( pOutFlags ){ - *pOutFlags = flags & SQLITE_OPEN_READWRITE ? SQLITE_OPEN_READWRITE : SQLITE_OPEN_READONLY; - } - - pFile->pMethod = &os2IoMethod; - pFile->h = h; - OpenCounter(+1); - OSTRACE3( "OPEN %d pOutFlags=%d\n", pFile->h, pOutFlags ); - return SQLITE_OK; -} - -/* -** Delete the named file. -*/ -static int os2Delete( - sqlite3_vfs *pVfs, /* Not used on os2 */ - const char *zFilename, /* Name of file to delete */ - int syncDir /* Not used on os2 */ -){ - APIRET rc = NO_ERROR; - char *zFilenameCp = convertUtf8PathToCp( zFilename ); - SimulateIOError( return SQLITE_IOERR_DELETE ); - rc = DosDelete( (PSZ)zFilenameCp ); - free( zFilenameCp ); - OSTRACE2( "DELETE \"%s\"\n", zFilename ); - return rc == NO_ERROR ? SQLITE_OK : SQLITE_IOERR_DELETE; -} - -/* -** Check the existance and status of a file. -*/ -static int os2Access( - sqlite3_vfs *pVfs, /* Not used on os2 */ - const char *zFilename, /* Name of file to check */ - int flags, /* Type of test to make on this file */ - int *pOut /* Write results here */ -){ - FILESTATUS3 fsts3ConfigInfo; - APIRET rc = NO_ERROR; - char *zFilenameCp = convertUtf8PathToCp( zFilename ); - - memset( &fsts3ConfigInfo, 0, sizeof(fsts3ConfigInfo) ); - rc = DosQueryPathInfo( (PSZ)zFilenameCp, FIL_STANDARD, - &fsts3ConfigInfo, sizeof(FILESTATUS3) ); - free( zFilenameCp ); - OSTRACE4( "ACCESS fsts3ConfigInfo.attrFile=%d flags=%d rc=%d\n", - fsts3ConfigInfo.attrFile, flags, rc ); - switch( flags ){ - case SQLITE_ACCESS_READ: - case SQLITE_ACCESS_EXISTS: - rc = (rc == NO_ERROR); - OSTRACE3( "ACCESS %s access of read and exists rc=%d\n", zFilename, rc ); - break; - case SQLITE_ACCESS_READWRITE: - rc = (rc == NO_ERROR) && ( (fsts3ConfigInfo.attrFile & FILE_READONLY) == 0 ); - OSTRACE3( "ACCESS %s access of read/write rc=%d\n", zFilename, rc ); - break; - default: - assert( !"Invalid flags argument" ); - } - *pOut = rc; - return SQLITE_OK; -} - - -#ifndef SQLITE_OMIT_LOAD_EXTENSION -/* -** Interfaces for opening a shared library, finding entry points -** within the shared library, and closing the shared library. -*/ -/* -** Interfaces for opening a shared library, finding entry points -** within the shared library, and closing the shared library. -*/ -static void *os2DlOpen(sqlite3_vfs *pVfs, const char *zFilename){ - UCHAR loadErr[256]; - HMODULE hmod; - APIRET rc; - char *zFilenameCp = convertUtf8PathToCp(zFilename); - rc = DosLoadModule((PSZ)loadErr, sizeof(loadErr), zFilenameCp, &hmod); - free(zFilenameCp); - return rc != NO_ERROR ? 0 : (void*)hmod; -} -/* -** A no-op since the error code is returned on the DosLoadModule call. -** os2Dlopen returns zero if DosLoadModule is not successful. 
-*/ -static void os2DlError(sqlite3_vfs *pVfs, int nBuf, char *zBufOut){ -/* no-op */ -} -static void *os2DlSym(sqlite3_vfs *pVfs, void *pHandle, const char *zSymbol){ - PFN pfn; - APIRET rc; - rc = DosQueryProcAddr((HMODULE)pHandle, 0L, zSymbol, &pfn); - if( rc != NO_ERROR ){ - /* if the symbol itself was not found, search again for the same - * symbol with an extra underscore, that might be needed depending - * on the calling convention */ - char _zSymbol[256] = "_"; - strncat(_zSymbol, zSymbol, 255); - rc = DosQueryProcAddr((HMODULE)pHandle, 0L, _zSymbol, &pfn); - } - return rc != NO_ERROR ? 0 : (void*)pfn; -} -static void os2DlClose(sqlite3_vfs *pVfs, void *pHandle){ - DosFreeModule((HMODULE)pHandle); -} -#else /* if SQLITE_OMIT_LOAD_EXTENSION is defined: */ - #define os2DlOpen 0 - #define os2DlError 0 - #define os2DlSym 0 - #define os2DlClose 0 -#endif - - -/* -** Write up to nBuf bytes of randomness into zBuf. -*/ -static int os2Randomness(sqlite3_vfs *pVfs, int nBuf, char *zBuf ){ - int n = 0; -#if defined(SQLITE_TEST) - n = nBuf; - memset(zBuf, 0, nBuf); -#else - int sizeofULong = sizeof(ULONG); - if( (int)sizeof(DATETIME) <= nBuf - n ){ - DATETIME x; - DosGetDateTime(&x); - memcpy(&zBuf[n], &x, sizeof(x)); - n += sizeof(x); - } - - if( sizeofULong <= nBuf - n ){ - PPIB ppib; - DosGetInfoBlocks(NULL, &ppib); - memcpy(&zBuf[n], &ppib->pib_ulpid, sizeofULong); - n += sizeofULong; - } - - if( sizeofULong <= nBuf - n ){ - PTIB ptib; - DosGetInfoBlocks(&ptib, NULL); - memcpy(&zBuf[n], &ptib->tib_ptib2->tib2_ultid, sizeofULong); - n += sizeofULong; - } - - /* if we still haven't filled the buffer yet the following will */ - /* grab everything once instead of making several calls for a single item */ - if( sizeofULong <= nBuf - n ){ - ULONG ulSysInfo[QSV_MAX]; - DosQuerySysInfo(1L, QSV_MAX, ulSysInfo, sizeofULong * QSV_MAX); - - memcpy(&zBuf[n], &ulSysInfo[QSV_MS_COUNT - 1], sizeofULong); - n += sizeofULong; - - if( sizeofULong <= nBuf - n ){ - memcpy(&zBuf[n], &ulSysInfo[QSV_TIMER_INTERVAL - 1], sizeofULong); - n += sizeofULong; - } - if( sizeofULong <= nBuf - n ){ - memcpy(&zBuf[n], &ulSysInfo[QSV_TIME_LOW - 1], sizeofULong); - n += sizeofULong; - } - if( sizeofULong <= nBuf - n ){ - memcpy(&zBuf[n], &ulSysInfo[QSV_TIME_HIGH - 1], sizeofULong); - n += sizeofULong; - } - if( sizeofULong <= nBuf - n ){ - memcpy(&zBuf[n], &ulSysInfo[QSV_TOTAVAILMEM - 1], sizeofULong); - n += sizeofULong; - } - } -#endif - - return n; -} - -/* -** Sleep for a little while. Return the amount of time slept. -** The argument is the number of microseconds we want to sleep. -** The return value is the number of microseconds of sleep actually -** requested from the underlying operating system, a number which -** might be greater than or equal to the argument, but not less -** than the argument. -*/ -static int os2Sleep( sqlite3_vfs *pVfs, int microsec ){ - DosSleep( (microsec/1000) ); - return microsec; -} - -/* -** The following variable, if set to a non-zero value, becomes the result -** returned from sqlite3OsCurrentTime(). This is used for testing. -*/ -#ifdef SQLITE_TEST -SQLITE_API int sqlite3_current_time = 0; -#endif - -/* -** Find the current time (in Universal Coordinated Time). Write the -** current time and date as a Julian Day number into *prNow and -** return 0. Return 1 if the time and date cannot be found. 
-*/ -int os2CurrentTime( sqlite3_vfs *pVfs, double *prNow ){ - double now; - SHORT minute; /* needs to be able to cope with negative timezone offset */ - USHORT second, hour, - day, month, year; - DATETIME dt; - DosGetDateTime( &dt ); - second = (USHORT)dt.seconds; - minute = (SHORT)dt.minutes + dt.timezone; - hour = (USHORT)dt.hours; - day = (USHORT)dt.day; - month = (USHORT)dt.month; - year = (USHORT)dt.year; - - /* Calculations from http://www.astro.keele.ac.uk/~rno/Astronomy/hjd.html - http://www.astro.keele.ac.uk/~rno/Astronomy/hjd-0.1.c */ - /* Calculate the Julian days */ - now = day - 32076 + - 1461*(year + 4800 + (month - 14)/12)/4 + - 367*(month - 2 - (month - 14)/12*12)/12 - - 3*((year + 4900 + (month - 14)/12)/100)/4; - - /* Add the fractional hours, mins and seconds */ - now += (hour + 12.0)/24.0; - now += minute/1440.0; - now += second/86400.0; - *prNow = now; -#ifdef SQLITE_TEST - if( sqlite3_current_time ){ - *prNow = sqlite3_current_time/86400.0 + 2440587.5; - } -#endif - return 0; -} - -static int os2GetLastError(sqlite3_vfs *pVfs, int nBuf, char *zBuf){ - return 0; -} - -/* -** Initialize and deinitialize the operating system interface. -*/ -SQLITE_API int sqlite3_os_init(void){ - static sqlite3_vfs os2Vfs = { - 1, /* iVersion */ - sizeof(os2File), /* szOsFile */ - CCHMAXPATH, /* mxPathname */ - 0, /* pNext */ - "os2", /* zName */ - 0, /* pAppData */ - - os2Open, /* xOpen */ - os2Delete, /* xDelete */ - os2Access, /* xAccess */ - os2FullPathname, /* xFullPathname */ - os2DlOpen, /* xDlOpen */ - os2DlError, /* xDlError */ - os2DlSym, /* xDlSym */ - os2DlClose, /* xDlClose */ - os2Randomness, /* xRandomness */ - os2Sleep, /* xSleep */ - os2CurrentTime, /* xCurrentTime */ - os2GetLastError /* xGetLastError */ - }; - sqlite3_vfs_register(&os2Vfs, 1); - initUconvObjects(); - return SQLITE_OK; -} -SQLITE_API int sqlite3_os_end(void){ - freeUconvObjects(); - return SQLITE_OK; -} - -#endif /* SQLITE_OS_OS2 */ - -/************** End of os_os2.c **********************************************/ -/************** Begin file os_unix.c *****************************************/ -/* -** 2004 May 22 -** -** The author disclaims copyright to this source code. In place of -** a legal notice, here is a blessing: -** -** May you do good and not evil. -** May you find forgiveness for yourself and forgive others. -** May you share freely, never taking more than you give. -** -****************************************************************************** -** -** This file contains the VFS implementation for unix-like operating systems -** include Linux, MacOSX, *BSD, QNX, VxWorks, AIX, HPUX, and others. -** -** There are actually several different VFS implementations in this file. -** The differences are in the way that file locking is done. The default -** implementation uses Posix Advisory Locks. Alternative implementations -** use flock(), dot-files, various proprietary locking schemas, or simply -** skip locking all together. -** -** This source file is organized into divisions where the logic for various -** subfunctions is contained within the appropriate division. PLEASE -** KEEP THE STRUCTURE OF THIS FILE INTACT. New code should be placed -** in the correct division and should be clearly labeled. -** -** The layout of divisions is as follows: -** -** * General-purpose declarations and utility functions. -** * Unique file ID logic used by VxWorks. 
-** * Various locking primitive implementations (all except proxy locking): -** + for Posix Advisory Locks -** + for no-op locks -** + for dot-file locks -** + for flock() locking -** + for named semaphore locks (VxWorks only) -** + for AFP filesystem locks (MacOSX only) -** * sqlite3_file methods not associated with locking. -** * Definitions of sqlite3_io_methods objects for all locking -** methods plus "finder" functions for each locking method. -** * sqlite3_vfs method implementations. -** * Locking primitives for the proxy uber-locking-method. (MacOSX only) -** * Definitions of sqlite3_vfs objects for all locking methods -** plus implementations of sqlite3_os_init() and sqlite3_os_end(). -*/ -#if SQLITE_OS_UNIX /* This file is used on unix only */ - -/* -** There are various methods for file locking used for concurrency -** control: -** -** 1. POSIX locking (the default), -** 2. No locking, -** 3. Dot-file locking, -** 4. flock() locking, -** 5. AFP locking (OSX only), -** 6. Named POSIX semaphores (VXWorks only), -** 7. proxy locking. (OSX only) -** -** Styles 4, 5, and 7 are only available of SQLITE_ENABLE_LOCKING_STYLE -** is defined to 1. The SQLITE_ENABLE_LOCKING_STYLE also enables automatic -** selection of the appropriate locking style based on the filesystem -** where the database is located. -*/ -#if !defined(SQLITE_ENABLE_LOCKING_STYLE) -# if defined(__APPLE__) -# define SQLITE_ENABLE_LOCKING_STYLE 1 -# else -# define SQLITE_ENABLE_LOCKING_STYLE 0 -# endif -#endif - -/* -** Define the OS_VXWORKS pre-processor macro to 1 if building on -** vxworks, or 0 otherwise. -*/ -#ifndef OS_VXWORKS -# if defined(__RTP__) || defined(_WRS_KERNEL) -# define OS_VXWORKS 1 -# else -# define OS_VXWORKS 0 -# endif -#endif - -/* -** These #defines should enable >2GB file support on Posix if the -** underlying operating system supports it. If the OS lacks -** large file support, these should be no-ops. -** -** Large file support can be disabled using the -DSQLITE_DISABLE_LFS switch -** on the compiler command line. This is necessary if you are compiling -** on a recent machine (ex: RedHat 7.2) but you want your code to work -** on an older machine (ex: RedHat 6.0). If you compile on RedHat 7.2 -** without this option, LFS is enable. But LFS does not exist in the kernel -** in RedHat 6.0, so the code won't work. Hence, for maximum binary -** portability you should omit LFS. -** -** The previous paragraph was written in 2005. (This paragraph is written -** on 2008-11-28.) These days, all Linux kernels support large files, so -** you should probably leave LFS enabled. But some embedded platforms might -** lack LFS in which case the SQLITE_DISABLE_LFS macro might still be useful. -*/ -#ifndef SQLITE_DISABLE_LFS -# define _LARGE_FILE 1 -# ifndef _FILE_OFFSET_BITS -# define _FILE_OFFSET_BITS 64 -# endif -# define _LARGEFILE_SOURCE 1 -#endif - -/* -** standard include files. 
-*/ -#include <sys/types.h> -#include <sys/stat.h> -#include <fcntl.h> -#include <unistd.h> -#include <sys/time.h> -#include <errno.h> - -#if SQLITE_ENABLE_LOCKING_STYLE -# include <sys/ioctl.h> -# if OS_VXWORKS -# include <semaphore.h> -# include <limits.h> -# else -# include <sys/file.h> -# include <sys/param.h> -# endif -#endif /* SQLITE_ENABLE_LOCKING_STYLE */ - -#if defined(__APPLE__) || (SQLITE_ENABLE_LOCKING_STYLE && !OS_VXWORKS) -# include <sys/mount.h> -#endif - -/* -** Allowed values of unixFile.fsFlags -*/ -#define SQLITE_FSFLAGS_IS_MSDOS 0x1 - -/* -** If we are to be thread-safe, include the pthreads header and define -** the SQLITE_UNIX_THREADS macro. -*/ -#if SQLITE_THREADSAFE -# define SQLITE_UNIX_THREADS 1 -#endif - -/* -** Default permissions when creating a new file -*/ -#ifndef SQLITE_DEFAULT_FILE_PERMISSIONS -# define SQLITE_DEFAULT_FILE_PERMISSIONS 0644 -#endif - -/* - ** Default permissions when creating auto proxy dir - */ -#ifndef SQLITE_DEFAULT_PROXYDIR_PERMISSIONS -# define SQLITE_DEFAULT_PROXYDIR_PERMISSIONS 0755 -#endif - -/* -** Maximum supported path-length. -*/ -#define MAX_PATHNAME 512 - -/* -** Only set the lastErrno if the error code is a real error and not -** a normal expected return code of SQLITE_BUSY or SQLITE_OK -*/ -#define IS_LOCK_ERROR(x) ((x != SQLITE_OK) && (x != SQLITE_BUSY)) - - -/* -** Sometimes, after a file handle is closed by SQLite, the file descriptor -** cannot be closed immediately. In these cases, instances of the following -** structure are used to store the file descriptor while waiting for an -** opportunity to either close or reuse it. -*/ -typedef struct UnixUnusedFd UnixUnusedFd; -struct UnixUnusedFd { - int fd; /* File descriptor to close */ - int flags; /* Flags this file descriptor was opened with */ - UnixUnusedFd *pNext; /* Next unused file descriptor on same file */ -}; - -/* -** The unixFile structure is subclass of sqlite3_file specific to the unix -** VFS implementations. -*/ -typedef struct unixFile unixFile; -struct unixFile { - sqlite3_io_methods const *pMethod; /* Always the first entry */ - struct unixOpenCnt *pOpen; /* Info about all open fd's on this inode */ - struct unixLockInfo *pLock; /* Info about locks on this inode */ - int h; /* The file descriptor */ - int dirfd; /* File descriptor for the directory */ - unsigned char locktype; /* The type of lock held on this fd */ - int lastErrno; /* The unix errno from the last I/O error */ - void *lockingContext; /* Locking style specific state */ - UnixUnusedFd *pUnused; /* Pre-allocated UnixUnusedFd */ - int fileFlags; /* Miscellanous flags */ -#if SQLITE_ENABLE_LOCKING_STYLE - int openFlags; /* The flags specified at open() */ -#endif -#if SQLITE_ENABLE_LOCKING_STYLE || defined(__APPLE__) - unsigned fsFlags; /* cached details from statfs() */ -#endif -#if SQLITE_THREADSAFE && defined(__linux__) - pthread_t tid; /* The thread that "owns" this unixFile */ -#endif -#if OS_VXWORKS - int isDelete; /* Delete on close if true */ - struct vxworksFileId *pId; /* Unique file ID */ -#endif -#ifndef NDEBUG - /* The next group of variables are used to track whether or not the - ** transaction counter in bytes 24-27 of database files are updated - ** whenever any part of the database changes. An assertion fault will - ** occur if a file is updated without also updating the transaction - ** counter. This test is made to avoid new problems similar to the - ** one described by ticket #3584. 
- */ - unsigned char transCntrChng; /* True if the transaction counter changed */ - unsigned char dbUpdate; /* True if any part of database file changed */ - unsigned char inNormalWrite; /* True if in a normal write operation */ -#endif -#ifdef SQLITE_TEST - /* In test mode, increase the size of this structure a bit so that - ** it is larger than the struct CrashFile defined in test6.c. - */ - char aPadding[32]; -#endif -}; - -/* -** The following macros define bits in unixFile.fileFlags -*/ -#define SQLITE_WHOLE_FILE_LOCKING 0x0001 /* Use whole-file locking */ - -/* -** Include code that is common to all os_*.c files -*/ -/************** Include os_common.h in the middle of os_unix.c ***************/ -/************** Begin file os_common.h ***************************************/ -/* -** 2004 May 22 -** -** The author disclaims copyright to this source code. In place of -** a legal notice, here is a blessing: -** -** May you do good and not evil. -** May you find forgiveness for yourself and forgive others. -** May you share freely, never taking more than you give. -** -****************************************************************************** -** -** This file contains macros and a little bit of code that is common to -** all of the platform-specific files (os_*.c) and is #included into those -** files. -** -** This file should be #included by the os_*.c files only. It is not a -** general purpose header file. -*/ -#ifndef _OS_COMMON_H_ -#define _OS_COMMON_H_ - -/* -** At least two bugs have slipped in because we changed the MEMORY_DEBUG -** macro to SQLITE_DEBUG and some older makefiles have not yet made the -** switch. The following code should catch this problem at compile-time. -*/ -#ifdef MEMORY_DEBUG -# error "The MEMORY_DEBUG macro is obsolete. Use SQLITE_DEBUG instead." -#endif - -#ifdef SQLITE_DEBUG -SQLITE_PRIVATE int sqlite3OSTrace = 0; -#define OSTRACE1(X) if( sqlite3OSTrace ) sqlite3DebugPrintf(X) -#define OSTRACE2(X,Y) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y) -#define OSTRACE3(X,Y,Z) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y,Z) -#define OSTRACE4(X,Y,Z,A) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y,Z,A) -#define OSTRACE5(X,Y,Z,A,B) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y,Z,A,B) -#define OSTRACE6(X,Y,Z,A,B,C) \ - if(sqlite3OSTrace) sqlite3DebugPrintf(X,Y,Z,A,B,C) -#define OSTRACE7(X,Y,Z,A,B,C,D) \ - if(sqlite3OSTrace) sqlite3DebugPrintf(X,Y,Z,A,B,C,D) -#else -#define OSTRACE1(X) -#define OSTRACE2(X,Y) -#define OSTRACE3(X,Y,Z) -#define OSTRACE4(X,Y,Z,A) -#define OSTRACE5(X,Y,Z,A,B) -#define OSTRACE6(X,Y,Z,A,B,C) -#define OSTRACE7(X,Y,Z,A,B,C,D) -#endif - -/* -** Macros for performance tracing. Normally turned off. Only works -** on i486 hardware. -*/ -#ifdef SQLITE_PERFORMANCE_TRACE - -/* -** hwtime.h contains inline assembler code for implementing -** high-performance timing routines. -*/ -/************** Include hwtime.h in the middle of os_common.h ****************/ -/************** Begin file hwtime.h ******************************************/ -/* -** 2008 May 27 -** -** The author disclaims copyright to this source code. In place of -** a legal notice, here is a blessing: -** -** May you do good and not evil. -** May you find forgiveness for yourself and forgive others. -** May you share freely, never taking more than you give. -** -****************************************************************************** -** -** This file contains inline asm code for retrieving "high-performance" -** counters for x86 class CPUs. 
-*/ -#ifndef _HWTIME_H_ -#define _HWTIME_H_ - -/* -** The following routine only works on pentium-class (or newer) processors. -** It uses the RDTSC opcode to read the cycle count value out of the -** processor and returns that value. This can be used for high-res -** profiling. -*/ -#if (defined(__GNUC__) || defined(_MSC_VER)) && \ - (defined(i386) || defined(__i386__) || defined(_M_IX86)) - - #if defined(__GNUC__) - - __inline__ sqlite_uint64 sqlite3Hwtime(void){ - unsigned int lo, hi; - __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi)); - return (sqlite_uint64)hi << 32 | lo; - } - - #elif defined(_MSC_VER) - - __declspec(naked) __inline sqlite_uint64 __cdecl sqlite3Hwtime(void){ - __asm { - rdtsc - ret ; return value at EDX:EAX - } - } - - #endif - -#elif (defined(__GNUC__) && defined(__x86_64__)) - - __inline__ sqlite_uint64 sqlite3Hwtime(void){ - unsigned long val; - __asm__ __volatile__ ("rdtsc" : "=A" (val)); - return val; - } - -#elif (defined(__GNUC__) && defined(__ppc__)) - - __inline__ sqlite_uint64 sqlite3Hwtime(void){ - unsigned long long retval; - unsigned long junk; - __asm__ __volatile__ ("\n\ - 1: mftbu %1\n\ - mftb %L0\n\ - mftbu %0\n\ - cmpw %0,%1\n\ - bne 1b" - : "=r" (retval), "=r" (junk)); - return retval; - } - -#else - - #error Need implementation of sqlite3Hwtime() for your platform. - - /* - ** To compile without implementing sqlite3Hwtime() for your platform, - ** you can remove the above #error and use the following - ** stub function. You will lose timing support for many - ** of the debugging and testing utilities, but it should at - ** least compile and run. - */ -SQLITE_PRIVATE sqlite_uint64 sqlite3Hwtime(void){ return ((sqlite_uint64)0); } - -#endif - -#endif /* !defined(_HWTIME_H_) */ - -/************** End of hwtime.h **********************************************/ -/************** Continuing where we left off in os_common.h ******************/ - -static sqlite_uint64 g_start; -static sqlite_uint64 g_elapsed; -#define TIMER_START g_start=sqlite3Hwtime() -#define TIMER_END g_elapsed=sqlite3Hwtime()-g_start -#define TIMER_ELAPSED g_elapsed -#else -#define TIMER_START -#define TIMER_END -#define TIMER_ELAPSED ((sqlite_uint64)0) -#endif - -/* -** If we compile with the SQLITE_TEST macro set, then the following block -** of code will give us the ability to simulate a disk I/O error. This -** is used for testing the I/O recovery logic. 
-*/ -#ifdef SQLITE_TEST -SQLITE_API int sqlite3_io_error_hit = 0; /* Total number of I/O Errors */ -SQLITE_API int sqlite3_io_error_hardhit = 0; /* Number of non-benign errors */ -SQLITE_API int sqlite3_io_error_pending = 0; /* Count down to first I/O error */ -SQLITE_API int sqlite3_io_error_persist = 0; /* True if I/O errors persist */ -SQLITE_API int sqlite3_io_error_benign = 0; /* True if errors are benign */ -SQLITE_API int sqlite3_diskfull_pending = 0; -SQLITE_API int sqlite3_diskfull = 0; -#define SimulateIOErrorBenign(X) sqlite3_io_error_benign=(X) -#define SimulateIOError(CODE) \ - if( (sqlite3_io_error_persist && sqlite3_io_error_hit) \ - || sqlite3_io_error_pending-- == 1 ) \ - { local_ioerr(); CODE; } -static void local_ioerr(){ - IOTRACE(("IOERR\n")); - sqlite3_io_error_hit++; - if( !sqlite3_io_error_benign ) sqlite3_io_error_hardhit++; -} -#define SimulateDiskfullError(CODE) \ - if( sqlite3_diskfull_pending ){ \ - if( sqlite3_diskfull_pending == 1 ){ \ - local_ioerr(); \ - sqlite3_diskfull = 1; \ - sqlite3_io_error_hit = 1; \ - CODE; \ - }else{ \ - sqlite3_diskfull_pending--; \ - } \ - } -#else -#define SimulateIOErrorBenign(X) -#define SimulateIOError(A) -#define SimulateDiskfullError(A) -#endif - -/* -** When testing, keep a count of the number of open files. -*/ -#ifdef SQLITE_TEST -SQLITE_API int sqlite3_open_file_count = 0; -#define OpenCounter(X) sqlite3_open_file_count+=(X) -#else -#define OpenCounter(X) -#endif +#endif /* defined(SQLITE_TEST) */ + +/* +** When testing, keep a count of the number of open files. +*/ +#if defined(SQLITE_TEST) +SQLITE_API extern int sqlite3_open_file_count; +#define OpenCounter(X) sqlite3_open_file_count+=(X) +#else +#define OpenCounter(X) +#endif /* defined(SQLITE_TEST) */ #endif /* !defined(_OS_COMMON_H_) */ /************** End of os_common.h *******************************************/ /************** Continuing where we left off in os_unix.c ********************/ @@ -22420,20 +27581,10 @@ #endif #ifndef O_BINARY # define O_BINARY 0 #endif -/* -** The DJGPP compiler environment looks mostly like Unix, but it -** lacks the fcntl() system call. So redefine fcntl() to be something -** that always succeeds. This means that locking does not occur under -** DJGPP. But it is DOS - what did you expect? -*/ -#ifdef __DJGPP__ -# define fcntl(A,B,C) 0 -#endif - /* ** The threadid macro resolves to the thread-id or to 0. Used for ** testing and debugging only. */ #if SQLITE_THREADSAFE @@ -22440,14 +27591,366 @@ #define threadid pthread_self() #else #define threadid 0 #endif +/* +** HAVE_MREMAP defaults to true on Linux and false everywhere else. +*/ +#if !defined(HAVE_MREMAP) +# if defined(__linux__) && defined(_GNU_SOURCE) +# define HAVE_MREMAP 1 +# else +# define HAVE_MREMAP 0 +# endif +#endif + +/* +** Explicitly call the 64-bit version of lseek() on Android. Otherwise, lseek() +** is the 32-bit version, even if _FILE_OFFSET_BITS=64 is defined. +*/ +#ifdef __ANDROID__ +# define lseek lseek64 +#endif + +/* +** Different Unix systems declare open() in different ways. Same use +** open(const char*,int,mode_t). Others use open(const char*,int,...). +** The difference is important when using a pointer to the function. +** +** The safest way to deal with the problem is to always use this wrapper +** which always has the same well-defined interface. 
+*/ +static int posixOpen(const char *zFile, int flags, int mode){ + return open(zFile, flags, mode); +} + +/* Forward reference */ +static int openDirectory(const char*, int*); +static int unixGetpagesize(void); + +/* +** Many system calls are accessed through pointer-to-functions so that +** they may be overridden at runtime to facilitate fault injection during +** testing and sandboxing. The following array holds the names and pointers +** to all overrideable system calls. +*/ +static struct unix_syscall { + const char *zName; /* Name of the system call */ + sqlite3_syscall_ptr pCurrent; /* Current value of the system call */ + sqlite3_syscall_ptr pDefault; /* Default value */ +} aSyscall[] = { + { "open", (sqlite3_syscall_ptr)posixOpen, 0 }, +#define osOpen ((int(*)(const char*,int,int))aSyscall[0].pCurrent) + + { "close", (sqlite3_syscall_ptr)close, 0 }, +#define osClose ((int(*)(int))aSyscall[1].pCurrent) + + { "access", (sqlite3_syscall_ptr)access, 0 }, +#define osAccess ((int(*)(const char*,int))aSyscall[2].pCurrent) + + { "getcwd", (sqlite3_syscall_ptr)getcwd, 0 }, +#define osGetcwd ((char*(*)(char*,size_t))aSyscall[3].pCurrent) + + { "stat", (sqlite3_syscall_ptr)stat, 0 }, +#define osStat ((int(*)(const char*,struct stat*))aSyscall[4].pCurrent) + +/* +** The DJGPP compiler environment looks mostly like Unix, but it +** lacks the fcntl() system call. So redefine fcntl() to be something +** that always succeeds. This means that locking does not occur under +** DJGPP. But it is DOS - what did you expect? +*/ +#ifdef __DJGPP__ + { "fstat", 0, 0 }, +#define osFstat(a,b,c) 0 +#else + { "fstat", (sqlite3_syscall_ptr)fstat, 0 }, +#define osFstat ((int(*)(int,struct stat*))aSyscall[5].pCurrent) +#endif + + { "ftruncate", (sqlite3_syscall_ptr)ftruncate, 0 }, +#define osFtruncate ((int(*)(int,off_t))aSyscall[6].pCurrent) + + { "fcntl", (sqlite3_syscall_ptr)fcntl, 0 }, +#define osFcntl ((int(*)(int,int,...))aSyscall[7].pCurrent) + + { "read", (sqlite3_syscall_ptr)read, 0 }, +#define osRead ((ssize_t(*)(int,void*,size_t))aSyscall[8].pCurrent) + +#if defined(USE_PREAD) || SQLITE_ENABLE_LOCKING_STYLE + { "pread", (sqlite3_syscall_ptr)pread, 0 }, +#else + { "pread", (sqlite3_syscall_ptr)0, 0 }, +#endif +#define osPread ((ssize_t(*)(int,void*,size_t,off_t))aSyscall[9].pCurrent) + +#if defined(USE_PREAD64) + { "pread64", (sqlite3_syscall_ptr)pread64, 0 }, +#else + { "pread64", (sqlite3_syscall_ptr)0, 0 }, +#endif +#define osPread64 ((ssize_t(*)(int,void*,size_t,off_t))aSyscall[10].pCurrent) + + { "write", (sqlite3_syscall_ptr)write, 0 }, +#define osWrite ((ssize_t(*)(int,const void*,size_t))aSyscall[11].pCurrent) + +#if defined(USE_PREAD) || SQLITE_ENABLE_LOCKING_STYLE + { "pwrite", (sqlite3_syscall_ptr)pwrite, 0 }, +#else + { "pwrite", (sqlite3_syscall_ptr)0, 0 }, +#endif +#define osPwrite ((ssize_t(*)(int,const void*,size_t,off_t))\ + aSyscall[12].pCurrent) + +#if defined(USE_PREAD64) + { "pwrite64", (sqlite3_syscall_ptr)pwrite64, 0 }, +#else + { "pwrite64", (sqlite3_syscall_ptr)0, 0 }, +#endif +#define osPwrite64 ((ssize_t(*)(int,const void*,size_t,off_t))\ + aSyscall[13].pCurrent) + + { "fchmod", (sqlite3_syscall_ptr)fchmod, 0 }, +#define osFchmod ((int(*)(int,mode_t))aSyscall[14].pCurrent) + +#if defined(HAVE_POSIX_FALLOCATE) && HAVE_POSIX_FALLOCATE + { "fallocate", (sqlite3_syscall_ptr)posix_fallocate, 0 }, +#else + { "fallocate", (sqlite3_syscall_ptr)0, 0 }, +#endif +#define osFallocate ((int(*)(int,off_t,off_t))aSyscall[15].pCurrent) + + { "unlink", (sqlite3_syscall_ptr)unlink, 0 }, 
+#define osUnlink ((int(*)(const char*))aSyscall[16].pCurrent) + + { "openDirectory", (sqlite3_syscall_ptr)openDirectory, 0 }, +#define osOpenDirectory ((int(*)(const char*,int*))aSyscall[17].pCurrent) + + { "mkdir", (sqlite3_syscall_ptr)mkdir, 0 }, +#define osMkdir ((int(*)(const char*,mode_t))aSyscall[18].pCurrent) + + { "rmdir", (sqlite3_syscall_ptr)rmdir, 0 }, +#define osRmdir ((int(*)(const char*))aSyscall[19].pCurrent) + +#if defined(HAVE_FCHOWN) + { "fchown", (sqlite3_syscall_ptr)fchown, 0 }, +#else + { "fchown", (sqlite3_syscall_ptr)0, 0 }, +#endif +#define osFchown ((int(*)(int,uid_t,gid_t))aSyscall[20].pCurrent) + + { "geteuid", (sqlite3_syscall_ptr)geteuid, 0 }, +#define osGeteuid ((uid_t(*)(void))aSyscall[21].pCurrent) + +#if !defined(SQLITE_OMIT_WAL) || SQLITE_MAX_MMAP_SIZE>0 + { "mmap", (sqlite3_syscall_ptr)mmap, 0 }, +#else + { "mmap", (sqlite3_syscall_ptr)0, 0 }, +#endif +#define osMmap ((void*(*)(void*,size_t,int,int,int,off_t))aSyscall[22].pCurrent) + +#if !defined(SQLITE_OMIT_WAL) || SQLITE_MAX_MMAP_SIZE>0 + { "munmap", (sqlite3_syscall_ptr)munmap, 0 }, +#else + { "munmap", (sqlite3_syscall_ptr)0, 0 }, +#endif +#define osMunmap ((void*(*)(void*,size_t))aSyscall[23].pCurrent) + +#if HAVE_MREMAP && (!defined(SQLITE_OMIT_WAL) || SQLITE_MAX_MMAP_SIZE>0) + { "mremap", (sqlite3_syscall_ptr)mremap, 0 }, +#else + { "mremap", (sqlite3_syscall_ptr)0, 0 }, +#endif +#define osMremap ((void*(*)(void*,size_t,size_t,int,...))aSyscall[24].pCurrent) + +#if !defined(SQLITE_OMIT_WAL) || SQLITE_MAX_MMAP_SIZE>0 + { "getpagesize", (sqlite3_syscall_ptr)unixGetpagesize, 0 }, +#else + { "getpagesize", (sqlite3_syscall_ptr)0, 0 }, +#endif +#define osGetpagesize ((int(*)(void))aSyscall[25].pCurrent) + +#if defined(HAVE_READLINK) + { "readlink", (sqlite3_syscall_ptr)readlink, 0 }, +#else + { "readlink", (sqlite3_syscall_ptr)0, 0 }, +#endif +#define osReadlink ((ssize_t(*)(const char*,char*,size_t))aSyscall[26].pCurrent) + +#if defined(HAVE_LSTAT) + { "lstat", (sqlite3_syscall_ptr)lstat, 0 }, +#else + { "lstat", (sqlite3_syscall_ptr)0, 0 }, +#endif +#define osLstat ((int(*)(const char*,struct stat*))aSyscall[27].pCurrent) + +}; /* End of the overrideable system calls */ + + +/* +** On some systems, calls to fchown() will trigger a message in a security +** log if they come from non-root processes. So avoid calling fchown() if +** we are not running as root. +*/ +static int robustFchown(int fd, uid_t uid, gid_t gid){ +#if defined(HAVE_FCHOWN) + return osGeteuid() ? 0 : osFchown(fd,uid,gid); +#else + return 0; +#endif +} + +/* +** This is the xSetSystemCall() method of sqlite3_vfs for all of the +** "unix" VFSes. Return SQLITE_OK opon successfully updating the +** system call pointer, or SQLITE_NOTFOUND if there is no configurable +** system call named zName. +*/ +static int unixSetSystemCall( + sqlite3_vfs *pNotUsed, /* The VFS pointer. Not used */ + const char *zName, /* Name of system call to override */ + sqlite3_syscall_ptr pNewFunc /* Pointer to new system call value */ +){ + unsigned int i; + int rc = SQLITE_NOTFOUND; + + UNUSED_PARAMETER(pNotUsed); + if( zName==0 ){ + /* If no zName is given, restore all system calls to their default + ** settings and return NULL + */ + rc = SQLITE_OK; + for(i=0; i<sizeof(aSyscall)/sizeof(aSyscall[0]); i++){ + if( aSyscall[i].pDefault ){ + aSyscall[i].pCurrent = aSyscall[i].pDefault; + } + } + }else{ + /* If zName is specified, operate on only the one system call + ** specified. 
+ */ + for(i=0; i<sizeof(aSyscall)/sizeof(aSyscall[0]); i++){ + if( strcmp(zName, aSyscall[i].zName)==0 ){ + if( aSyscall[i].pDefault==0 ){ + aSyscall[i].pDefault = aSyscall[i].pCurrent; + } + rc = SQLITE_OK; + if( pNewFunc==0 ) pNewFunc = aSyscall[i].pDefault; + aSyscall[i].pCurrent = pNewFunc; + break; + } + } + } + return rc; +} + +/* +** Return the value of a system call. Return NULL if zName is not a +** recognized system call name. NULL is also returned if the system call +** is currently undefined. +*/ +static sqlite3_syscall_ptr unixGetSystemCall( + sqlite3_vfs *pNotUsed, + const char *zName +){ + unsigned int i; + + UNUSED_PARAMETER(pNotUsed); + for(i=0; i<sizeof(aSyscall)/sizeof(aSyscall[0]); i++){ + if( strcmp(zName, aSyscall[i].zName)==0 ) return aSyscall[i].pCurrent; + } + return 0; +} + +/* +** Return the name of the first system call after zName. If zName==NULL +** then return the name of the first system call. Return NULL if zName +** is the last system call or if zName is not the name of a valid +** system call. +*/ +static const char *unixNextSystemCall(sqlite3_vfs *p, const char *zName){ + int i = -1; + + UNUSED_PARAMETER(p); + if( zName ){ + for(i=0; i<ArraySize(aSyscall)-1; i++){ + if( strcmp(zName, aSyscall[i].zName)==0 ) break; + } + } + for(i++; i<ArraySize(aSyscall); i++){ + if( aSyscall[i].pCurrent!=0 ) return aSyscall[i].zName; + } + return 0; +} + +/* +** Do not accept any file descriptor less than this value, in order to avoid +** opening database file using file descriptors that are commonly used for +** standard input, output, and error. +*/ +#ifndef SQLITE_MINIMUM_FILE_DESCRIPTOR +# define SQLITE_MINIMUM_FILE_DESCRIPTOR 3 +#endif + +/* +** Invoke open(). Do so multiple times, until it either succeeds or +** fails for some reason other than EINTR. +** +** If the file creation mode "m" is 0 then set it to the default for +** SQLite. The default is SQLITE_DEFAULT_FILE_PERMISSIONS (normally +** 0644) as modified by the system umask. If m is not 0, then +** make the file creation mode be exactly m ignoring the umask. +** +** The m parameter will be non-zero only when creating -wal, -journal, +** and -shm files. We want those files to have *exactly* the same +** permissions as their original database, unadulterated by the umask. +** In that way, if a database file is -rw-rw-rw or -rw-rw-r-, and a +** transaction crashes and leaves behind hot journals, then any +** process that is able to write to the database will also be able to +** recover the hot journals. +*/ +static int robust_open(const char *z, int f, mode_t m){ + int fd; + mode_t m2 = m ? m : SQLITE_DEFAULT_FILE_PERMISSIONS; + while(1){ +#if defined(O_CLOEXEC) + fd = osOpen(z,f|O_CLOEXEC,m2); +#else + fd = osOpen(z,f,m2); +#endif + if( fd<0 ){ + if( errno==EINTR ) continue; + break; + } + if( fd>=SQLITE_MINIMUM_FILE_DESCRIPTOR ) break; + osClose(fd); + sqlite3_log(SQLITE_WARNING, + "attempt to open \"%s\" as file descriptor %d", z, fd); + fd = -1; + if( osOpen("/dev/null", f, m)<0 ) break; + } + if( fd>=0 ){ + if( m!=0 ){ + struct stat statbuf; + if( osFstat(fd, &statbuf)==0 + && statbuf.st_size==0 + && (statbuf.st_mode&0777)!=m + ){ + osFchmod(fd, m); + } + } +#if defined(FD_CLOEXEC) && (!defined(O_CLOEXEC) || O_CLOEXEC==0) + osFcntl(fd, F_SETFD, osFcntl(fd, F_GETFD, 0) | FD_CLOEXEC); +#endif + } + return fd; +} /* ** Helper functions to obtain and relinquish the global mutex. 
The -** global mutex is used to protect the unixOpenCnt, unixLockInfo and +** global mutex is used to protect the unixInodeInfo and ** vxworksFileId objects used by this file, all of which may be ** shared by multiple threads. ** ** Function unixMutexHeld() is used to assert() that the global mutex ** is held when required. This function is only used as part of assert() @@ -22456,30 +27959,30 @@ ** unixEnterMutex() ** assert( unixMutexHeld() ); ** unixEnterLeave() */ static void unixEnterMutex(void){ - sqlite3_mutex_enter(sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER)); + sqlite3_mutex_enter(sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_VFS1)); } static void unixLeaveMutex(void){ - sqlite3_mutex_leave(sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER)); + sqlite3_mutex_leave(sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_VFS1)); } #ifdef SQLITE_DEBUG static int unixMutexHeld(void) { - return sqlite3_mutex_held(sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER)); + return sqlite3_mutex_held(sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_VFS1)); } #endif -#ifdef SQLITE_DEBUG +#ifdef SQLITE_HAVE_OS_TRACE /* ** Helper function for printing out trace information from debugging -** binaries. This returns the string represetation of the supplied +** binaries. This returns the string representation of the supplied ** integer lock-type. */ -static const char *locktypeName(int locktype){ - switch( locktype ){ +static const char *azFileLock(int eFileLock){ + switch( eFileLock ){ case NO_LOCK: return "NONE"; case SHARED_LOCK: return "SHARED"; case RESERVED_LOCK: return "RESERVED"; case PENDING_LOCK: return "PENDING"; case EXCLUSIVE_LOCK: return "EXCLUSIVE"; @@ -22504,11 +28007,11 @@ if( op==F_GETLK ){ zOpName = "GETLK"; }else if( op==F_SETLK ){ zOpName = "SETLK"; }else{ - s = fcntl(fd, op, p); + s = osFcntl(fd, op, p); sqlite3DebugPrintf("fcntl unknown %d %d %d\n", fd, op, s); return s; } if( p->l_type==F_RDLCK ){ zType = "RDLCK"; @@ -22518,19 +28021,19 @@ zType = "UNLCK"; }else{ assert( 0 ); } assert( p->l_whence==SEEK_SET ); - s = fcntl(fd, op, p); + s = osFcntl(fd, op, p); savedErrno = errno; sqlite3DebugPrintf("fcntl %d %d %s %s %d %d %d %d\n", threadid, fd, zOpName, zType, (int)p->l_start, (int)p->l_len, (int)p->l_pid, s); if( s==(-1) && op==F_SETLK && (p->l_type==F_RDLCK || p->l_type==F_WRLCK) ){ struct flock l2; l2 = *p; - fcntl(fd, F_GETLK, &l2); + osFcntl(fd, F_GETLK, &l2); if( l2.l_type==F_RDLCK ){ zType = "RDLCK"; }else if( l2.l_type==F_WRLCK ){ zType = "WRLCK"; }else if( l2.l_type==F_UNLCK ){ @@ -22542,14 +28045,35 @@ zType, (int)l2.l_start, (int)l2.l_len, (int)l2.l_pid); } errno = savedErrno; return s; } -#define fcntl lockTrace +#undef osFcntl +#define osFcntl lockTrace #endif /* SQLITE_LOCK_TRACE */ - +/* +** Retry ftruncate() calls that fail due to EINTR +** +** All calls to ftruncate() within this file should be made through +** this wrapper. On the Android platform, bypassing the logic below +** could lead to a corrupt database. +*/ +static int robust_ftruncate(int h, sqlite3_int64 sz){ + int rc; +#ifdef __ANDROID__ + /* On Android, ftruncate() always uses 32-bit offsets, even if + ** _FILE_OFFSET_BITS=64 is defined. This means it is unsafe to attempt to + ** truncate a file to any size larger than 2GiB. Silently ignore any + ** such attempts. */ + if( sz>(sqlite3_int64)0x7FFFFFFF ){ + rc = SQLITE_OK; + }else +#endif + do{ rc = osFtruncate(h,sz); }while( rc<0 && errno==EINTR ); + return rc; +} /* ** This routine translates a standard POSIX errno code into something ** useful to the clients of the sqlite3 functions. 
Specifically, it is ** intended to translate a variety of "try again" errors into SQLITE_BUSY @@ -22558,64 +28082,32 @@ ** ** Errors during initialization of locks, or file system support for locks, ** should handle ENOLCK, ENOTSUP, EOPNOTSUPP separately. */ static int sqliteErrorFromPosixError(int posixError, int sqliteIOErr) { + assert( (sqliteIOErr == SQLITE_IOERR_LOCK) || + (sqliteIOErr == SQLITE_IOERR_UNLOCK) || + (sqliteIOErr == SQLITE_IOERR_RDLOCK) || + (sqliteIOErr == SQLITE_IOERR_CHECKRESERVEDLOCK) ); switch (posixError) { - case 0: - return SQLITE_OK; - + case EACCES: case EAGAIN: case ETIMEDOUT: case EBUSY: case EINTR: case ENOLCK: /* random NFS retry error, unless during file system support * introspection, in which it actually means what it says */ return SQLITE_BUSY; - case EACCES: - /* EACCES is like EAGAIN during locking operations, but not any other time*/ - if( (sqliteIOErr == SQLITE_IOERR_LOCK) || - (sqliteIOErr == SQLITE_IOERR_UNLOCK) || - (sqliteIOErr == SQLITE_IOERR_RDLOCK) || - (sqliteIOErr == SQLITE_IOERR_CHECKRESERVEDLOCK) ){ - return SQLITE_BUSY; - } - /* else fall through */ case EPERM: return SQLITE_PERM; - case EDEADLK: - return SQLITE_IOERR_BLOCKED; - -#if EOPNOTSUPP!=ENOTSUP - case EOPNOTSUPP: - /* something went terribly awry, unless during file system support - * introspection, in which it actually means what it says */ -#endif -#ifdef ENOTSUP - case ENOTSUP: - /* invalid fd, unless during file system support introspection, in which - * it actually means what it says */ -#endif - case EIO: - case EBADF: - case EINVAL: - case ENOTCONN: - case ENODEV: - case ENXIO: - case ENOENT: - case ESTALE: - case ENOSYS: - /* these should force the client to close the file and reconnect */ - default: return sqliteIOErr; } } - /****************************************************************************** ****************** Begin Unique File ID Utility Used By VxWorks *************** ** @@ -22699,11 +28191,11 @@ struct vxworksFileId *pCandidate; /* For looping over existing file IDs */ int n; /* Length of zAbsoluteName string */ assert( zAbsoluteName[0]=='/' ); n = (int)strlen(zAbsoluteName); - pNew = sqlite3_malloc( sizeof(*pNew) + (n+1) ); + pNew = sqlite3_malloc64( sizeof(*pNew) + (n+1) ); if( pNew==0 ) return 0; pNew->zCanonicalName = (char*)&pNew[1]; memcpy(pNew->zCanonicalName, zAbsoluteName, n+1); n = vxworksSimplifyName(pNew->zCanonicalName, n); @@ -22812,17 +28304,16 @@ ** ** But wait: there are yet more problems with POSIX advisory locks. ** ** If you close a file descriptor that points to a file that has locks, ** all locks on that file that are owned by the current process are -** released. To work around this problem, each unixFile structure contains -** a pointer to an unixOpenCnt structure. There is one unixOpenCnt structure -** per open inode, which means that multiple unixFile can point to a single -** unixOpenCnt. When an attempt is made to close an unixFile, if there are +** released. To work around this problem, each unixInodeInfo object +** maintains a count of the number of pending locks on tha inode. +** When an attempt is made to close an unixFile, if there are ** other unixFile open on the same inode that are holding locks, the call ** to close() the file descriptor is deferred until all of the locks clear. -** The unixOpenCnt structure keeps a list of file descriptors that need to +** The unixInodeInfo structure keeps a list of file descriptors that need to ** be closed and that list is walked (and cleared) when the last lock ** clears. 
** ** Yet another problem: LinuxThreads do not play well with posix locks. ** @@ -22833,50 +28324,23 @@ ** if the appliation uses the newer Native Posix Thread Library (NPTL) ** on linux - with NPTL a lock created by thread A can override locks ** in thread B. But there is no way to know at compile-time which ** threading library is being used. So there is no way to know at ** compile-time whether or not thread A can override locks on thread B. -** We have to do a run-time check to discover the behavior of the +** One has to do a run-time check to discover the behavior of the ** current process. ** -** On systems where thread A is unable to modify locks created by -** thread B, we have to keep track of which thread created each -** lock. Hence there is an extra field in the key to the unixLockInfo -** structure to record this information. And on those systems it -** is illegal to begin a transaction in one thread and finish it -** in another. For this latter restriction, there is no work-around. -** It is a limitation of LinuxThreads. -*/ - -/* -** Set or check the unixFile.tid field. This field is set when an unixFile -** is first opened. All subsequent uses of the unixFile verify that the -** same thread is operating on the unixFile. Some operating systems do -** not allow locks to be overridden by other threads and that restriction -** means that sqlite3* database handles cannot be moved from one thread -** to another while locks are held. -** -** Version 3.3.1 (2006-01-15): unixFile can be moved from one thread to -** another as long as we are running on a system that supports threads -** overriding each others locks (which is now the most common behavior) -** or if no locks are held. But the unixFile.pLock field needs to be -** recomputed because its key includes the thread-id. See the -** transferOwnership() function below for additional information -*/ -#if SQLITE_THREADSAFE && defined(__linux__) -# define SET_THREADID(X) (X)->tid = pthread_self() -# define CHECK_THREADID(X) (threadsOverrideEachOthersLocks==0 && \ - !pthread_equal((X)->tid, pthread_self())) -#else -# define SET_THREADID(X) -# define CHECK_THREADID(X) 0 -#endif +** SQLite used to support LinuxThreads. But support for LinuxThreads +** was dropped beginning with version 3.7.0. SQLite will still work with +** LinuxThreads provided that (1) there is no more than one connection +** per database file in the same process and (2) database connections +** do not move across threads. +*/ /* ** An instance of the following structure serves as the key used -** to locate a particular unixOpenCnt structure given its inode. This -** is the same as the unixLockKey except that the thread ID is omitted. +** to locate a particular unixInodeInfo object. */ struct unixFileId { dev_t dev; /* Device number */ #if OS_VXWORKS struct vxworksFileId *pId; /* Unique file ID for vxworks. */ @@ -22883,271 +28347,216 @@ #else ino_t ino; /* Inode number */ #endif }; -/* -** An instance of the following structure serves as the key used -** to locate a particular unixLockInfo structure given its inode. -** -** If threads cannot override each others locks (LinuxThreads), then we -** set the unixLockKey.tid field to the thread ID. If threads can override -** each others locks (Posix and NPTL) then tid is always set to zero. -** tid is omitted if we compile without threading support or on an OS -** other than linux. 
-*/ -struct unixLockKey { - struct unixFileId fid; /* Unique identifier for the file */ -#if SQLITE_THREADSAFE && defined(__linux__) - pthread_t tid; /* Thread ID of lock owner. Zero if not using LinuxThreads */ -#endif -}; - /* ** An instance of the following structure is allocated for each open ** inode. Or, on LinuxThreads, there is one of these structures for ** each inode opened by each thread. ** ** A single inode can have multiple file descriptors, so each unixFile ** structure contains a pointer to an instance of this object and this ** object keeps a count of the number of unixFile pointing to it. */ -struct unixLockInfo { - struct unixLockKey lockKey; /* The lookup key */ - int cnt; /* Number of SHARED locks held */ - int locktype; /* One of SHARED_LOCK, RESERVED_LOCK etc. */ - int nRef; /* Number of pointers to this structure */ -#if defined(SQLITE_ENABLE_LOCKING_STYLE) - unsigned long long sharedByte; /* for AFP simulated shared lock */ -#endif - struct unixLockInfo *pNext; /* List of all unixLockInfo objects */ - struct unixLockInfo *pPrev; /* .... doubly linked */ -}; - -/* -** An instance of the following structure is allocated for each open -** inode. This structure keeps track of the number of locks on that -** inode. If a close is attempted against an inode that is holding -** locks, the close is deferred until all locks clear by adding the -** file descriptor to be closed to the pending list. -** -** TODO: Consider changing this so that there is only a single file -** descriptor for each open file, even when it is opened multiple times. -** The close() system call would only occur when the last database -** using the file closes. -*/ -struct unixOpenCnt { - struct unixFileId fileId; /* The lookup key */ - int nRef; /* Number of pointers to this structure */ - int nLock; /* Number of outstanding locks */ - UnixUnusedFd *pUnused; /* Unused file descriptors to close */ -#if OS_VXWORKS - sem_t *pSem; /* Named POSIX semaphore */ - char aSemName[MAX_PATHNAME+2]; /* Name of that semaphore */ -#endif - struct unixOpenCnt *pNext, *pPrev; /* List of all unixOpenCnt objects */ -}; - -/* -** Lists of all unixLockInfo and unixOpenCnt objects. These used to be hash -** tables. But the number of objects is rarely more than a dozen and -** never exceeds a few thousand. And lookup is not on a critical -** path so a simple linked list will suffice. -*/ -static struct unixLockInfo *lockList = 0; -static struct unixOpenCnt *openList = 0; - -/* -** This variable remembers whether or not threads can override each others -** locks. -** -** 0: No. Threads cannot override each others locks. (LinuxThreads) -** 1: Yes. Threads can override each others locks. (Posix & NLPT) -** -1: We don't know yet. -** -** On some systems, we know at compile-time if threads can override each -** others locks. On those systems, the SQLITE_THREAD_OVERRIDE_LOCK macro -** will be set appropriately. On other systems, we have to check at -** runtime. On these latter systems, SQLTIE_THREAD_OVERRIDE_LOCK is -** undefined. -** -** This variable normally has file scope only. But during testing, we make -** it a global so that the test code can change its value in order to verify -** that the right stuff happens in either case. 
-*/ -#if SQLITE_THREADSAFE && defined(__linux__) -# ifndef SQLITE_THREAD_OVERRIDE_LOCK -# define SQLITE_THREAD_OVERRIDE_LOCK -1 -# endif -# ifdef SQLITE_TEST -int threadsOverrideEachOthersLocks = SQLITE_THREAD_OVERRIDE_LOCK; -# else -static int threadsOverrideEachOthersLocks = SQLITE_THREAD_OVERRIDE_LOCK; -# endif -#endif - -/* -** This structure holds information passed into individual test -** threads by the testThreadLockingBehavior() routine. -*/ -struct threadTestData { - int fd; /* File to be locked */ - struct flock lock; /* The locking operation */ - int result; /* Result of the locking operation */ -}; - -#if SQLITE_THREADSAFE && defined(__linux__) -/* -** This function is used as the main routine for a thread launched by -** testThreadLockingBehavior(). It tests whether the shared-lock obtained -** by the main thread in testThreadLockingBehavior() conflicts with a -** hypothetical write-lock obtained by this thread on the same file. -** -** The write-lock is not actually acquired, as this is not possible if -** the file is open in read-only mode (see ticket #3472). -*/ -static void *threadLockingTest(void *pArg){ - struct threadTestData *pData = (struct threadTestData*)pArg; - pData->result = fcntl(pData->fd, F_GETLK, &pData->lock); - return pArg; -} -#endif /* SQLITE_THREADSAFE && defined(__linux__) */ - - -#if SQLITE_THREADSAFE && defined(__linux__) -/* -** This procedure attempts to determine whether or not threads -** can override each others locks then sets the -** threadsOverrideEachOthersLocks variable appropriately. -*/ -static void testThreadLockingBehavior(int fd_orig){ - int fd; - int rc; - struct threadTestData d; - struct flock l; - pthread_t t; - - fd = dup(fd_orig); - if( fd<0 ) return; - memset(&l, 0, sizeof(l)); - l.l_type = F_RDLCK; - l.l_len = 1; - l.l_start = 0; - l.l_whence = SEEK_SET; - rc = fcntl(fd_orig, F_SETLK, &l); - if( rc!=0 ) return; - memset(&d, 0, sizeof(d)); - d.fd = fd; - d.lock = l; - d.lock.l_type = F_WRLCK; - if( pthread_create(&t, 0, threadLockingTest, &d)==0 ){ - pthread_join(t, 0); - } - close(fd); - if( d.result!=0 ) return; - threadsOverrideEachOthersLocks = (d.lock.l_type==F_UNLCK); -} -#endif /* SQLITE_THREADSAFE && defined(__linux__) */ - -/* -** Release a unixLockInfo structure previously allocated by findLockInfo(). -** -** The mutex entered using the unixEnterMutex() function must be held -** when this function is called. -*/ -static void releaseLockInfo(struct unixLockInfo *pLock){ - assert( unixMutexHeld() ); - if( pLock ){ - pLock->nRef--; - if( pLock->nRef==0 ){ - if( pLock->pPrev ){ - assert( pLock->pPrev->pNext==pLock ); - pLock->pPrev->pNext = pLock->pNext; - }else{ - assert( lockList==pLock ); - lockList = pLock->pNext; - } - if( pLock->pNext ){ - assert( pLock->pNext->pPrev==pLock ); - pLock->pNext->pPrev = pLock->pPrev; - } - sqlite3_free(pLock); - } - } -} - -/* -** Release a unixOpenCnt structure previously allocated by findLockInfo(). -** -** The mutex entered using the unixEnterMutex() function must be held -** when this function is called. 
-*/ -static void releaseOpenCnt(struct unixOpenCnt *pOpen){ - assert( unixMutexHeld() ); - if( pOpen ){ - pOpen->nRef--; - if( pOpen->nRef==0 ){ - if( pOpen->pPrev ){ - assert( pOpen->pPrev->pNext==pOpen ); - pOpen->pPrev->pNext = pOpen->pNext; - }else{ - assert( openList==pOpen ); - openList = pOpen->pNext; - } - if( pOpen->pNext ){ - assert( pOpen->pNext->pPrev==pOpen ); - pOpen->pNext->pPrev = pOpen->pPrev; - } -#if SQLITE_THREADSAFE && defined(__linux__) - assert( !pOpen->pUnused || threadsOverrideEachOthersLocks==0 ); -#endif - - /* If pOpen->pUnused is not null, then memory and file-descriptors - ** are leaked. - ** - ** This will only happen if, under Linuxthreads, the user has opened - ** a transaction in one thread, then attempts to close the database - ** handle from another thread (without first unlocking the db file). - ** This is a misuse. */ - sqlite3_free(pOpen); - } - } -} - -/* -** Given a file descriptor, locate unixLockInfo and unixOpenCnt structures that -** describes that file descriptor. Create new ones if necessary. The -** return values might be uninitialized if an error occurs. +struct unixInodeInfo { + struct unixFileId fileId; /* The lookup key */ + int nShared; /* Number of SHARED locks held */ + unsigned char eFileLock; /* One of SHARED_LOCK, RESERVED_LOCK etc. */ + unsigned char bProcessLock; /* An exclusive process lock is held */ + int nRef; /* Number of pointers to this structure */ + unixShmNode *pShmNode; /* Shared memory associated with this inode */ + int nLock; /* Number of outstanding file locks */ + UnixUnusedFd *pUnused; /* Unused file descriptors to close */ + unixInodeInfo *pNext; /* List of all unixInodeInfo objects */ + unixInodeInfo *pPrev; /* .... doubly linked */ +#if SQLITE_ENABLE_LOCKING_STYLE + unsigned long long sharedByte; /* for AFP simulated shared lock */ +#endif +#if OS_VXWORKS + sem_t *pSem; /* Named POSIX semaphore */ + char aSemName[MAX_PATHNAME+2]; /* Name of that semaphore */ +#endif +}; + +/* +** A lists of all unixInodeInfo objects. +*/ +static unixInodeInfo *inodeList = 0; + +/* +** +** This function - unixLogErrorAtLine(), is only ever called via the macro +** unixLogError(). +** +** It is invoked after an error occurs in an OS function and errno has been +** set. It logs a message using sqlite3_log() containing the current value of +** errno and, if possible, the human-readable equivalent from strerror() or +** strerror_r(). +** +** The first argument passed to the macro should be the error code that +** will be returned to SQLite (e.g. SQLITE_IOERR_DELETE, SQLITE_CANTOPEN). +** The two subsequent arguments should be the name of the OS function that +** failed (e.g. "unlink", "open") and the associated file-system path, +** if any. +*/ +#define unixLogError(a,b,c) unixLogErrorAtLine(a,b,c,__LINE__) +static int unixLogErrorAtLine( + int errcode, /* SQLite error code */ + const char *zFunc, /* Name of OS function that failed */ + const char *zPath, /* File path associated with error */ + int iLine /* Source line number where error occurred */ +){ + char *zErr; /* Message from strerror() or equivalent */ + int iErrno = errno; /* Saved syscall error number */ + + /* If this is not a threadsafe build (SQLITE_THREADSAFE==0), then use + ** the strerror() function to obtain the human-readable error message + ** equivalent to errno. Otherwise, use strerror_r(). 
+ */ +#if SQLITE_THREADSAFE && defined(HAVE_STRERROR_R) + char aErr[80]; + memset(aErr, 0, sizeof(aErr)); + zErr = aErr; + + /* If STRERROR_R_CHAR_P (set by autoconf scripts) or __USE_GNU is defined, + ** assume that the system provides the GNU version of strerror_r() that + ** returns a pointer to a buffer containing the error message. That pointer + ** may point to aErr[], or it may point to some static storage somewhere. + ** Otherwise, assume that the system provides the POSIX version of + ** strerror_r(), which always writes an error message into aErr[]. + ** + ** If the code incorrectly assumes that it is the POSIX version that is + ** available, the error message will often be an empty string. Not a + ** huge problem. Incorrectly concluding that the GNU version is available + ** could lead to a segfault though. + */ +#if defined(STRERROR_R_CHAR_P) || defined(__USE_GNU) + zErr = +# endif + strerror_r(iErrno, aErr, sizeof(aErr)-1); + +#elif SQLITE_THREADSAFE + /* This is a threadsafe build, but strerror_r() is not available. */ + zErr = ""; +#else + /* Non-threadsafe build, use strerror(). */ + zErr = strerror(iErrno); +#endif + + if( zPath==0 ) zPath = ""; + sqlite3_log(errcode, + "os_unix.c:%d: (%d) %s(%s) - %s", + iLine, iErrno, zFunc, zPath, zErr + ); + + return errcode; +} + +/* +** Close a file descriptor. +** +** We assume that close() almost always works, since it is only in a +** very sick application or on a very sick platform that it might fail. +** If it does fail, simply leak the file descriptor, but do log the +** error. +** +** Note that it is not safe to retry close() after EINTR since the +** file descriptor might have already been reused by another thread. +** So we don't even try to recover from an EINTR. Just log the error +** and move on. +*/ +static void robust_close(unixFile *pFile, int h, int lineno){ + if( osClose(h) ){ + unixLogErrorAtLine(SQLITE_IOERR_CLOSE, "close", + pFile ? pFile->zPath : 0, lineno); + } +} + +/* +** Set the pFile->lastErrno. Do this in a subroutine as that provides +** a convenient place to set a breakpoint. +*/ +static void storeLastErrno(unixFile *pFile, int error){ + pFile->lastErrno = error; +} + +/* +** Close all file descriptors accumuated in the unixInodeInfo->pUnused list. +*/ +static void closePendingFds(unixFile *pFile){ + unixInodeInfo *pInode = pFile->pInode; + UnixUnusedFd *p; + UnixUnusedFd *pNext; + for(p=pInode->pUnused; p; p=pNext){ + pNext = p->pNext; + robust_close(pFile, p->fd, __LINE__); + sqlite3_free(p); + } + pInode->pUnused = 0; +} + +/* +** Release a unixInodeInfo structure previously allocated by findInodeInfo(). +** +** The mutex entered using the unixEnterMutex() function must be held +** when this function is called. +*/ +static void releaseInodeInfo(unixFile *pFile){ + unixInodeInfo *pInode = pFile->pInode; + assert( unixMutexHeld() ); + if( ALWAYS(pInode) ){ + pInode->nRef--; + if( pInode->nRef==0 ){ + assert( pInode->pShmNode==0 ); + closePendingFds(pFile); + if( pInode->pPrev ){ + assert( pInode->pPrev->pNext==pInode ); + pInode->pPrev->pNext = pInode->pNext; + }else{ + assert( inodeList==pInode ); + inodeList = pInode->pNext; + } + if( pInode->pNext ){ + assert( pInode->pNext->pPrev==pInode ); + pInode->pNext->pPrev = pInode->pPrev; + } + sqlite3_free(pInode); + } + } +} + +/* +** Given a file descriptor, locate the unixInodeInfo object that +** describes that file descriptor. Create a new one if necessary. The +** return value might be uninitialized if an error occurs. 
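
The GNU-versus-POSIX strerror_r() dance above can be captured in a small wrapper.  A
hedged sketch, reusing the same STRERROR_R_CHAR_P convention the autoconf test sets;
the function name safe_strerror() is illustrative:

    #include <stddef.h>
    #include <string.h>

    /* Copy/return a human-readable message for err using the caller's buffer.
    ** POSIX strerror_r() returns an int and writes into buf; the GNU variant
    ** returns a char* that may or may not point at buf. */
    static const char *safe_strerror(int err, char *buf, size_t n){
      buf[0] = 0;
    #if defined(STRERROR_R_CHAR_P) || defined(__USE_GNU)
      return strerror_r(err, buf, n);   /* GNU: use whatever pointer comes back */
    #else
      strerror_r(err, buf, n);          /* POSIX: the message lands in buf */
      return buf;
    #endif
    }
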
** ** The mutex entered using the unixEnterMutex() function must be held ** when this function is called. ** ** Return an appropriate error code. */ -static int findLockInfo( +static int findInodeInfo( unixFile *pFile, /* Unix file with file desc used in the key */ - struct unixLockInfo **ppLock, /* Return the unixLockInfo structure here */ - struct unixOpenCnt **ppOpen /* Return the unixOpenCnt structure here */ + unixInodeInfo **ppInode /* Return the unixInodeInfo object here */ ){ int rc; /* System call return code */ int fd; /* The file descriptor for pFile */ - struct unixLockKey lockKey; /* Lookup key for the unixLockInfo structure */ - struct unixFileId fileId; /* Lookup key for the unixOpenCnt struct */ + struct unixFileId fileId; /* Lookup key for the unixInodeInfo */ struct stat statbuf; /* Low-level file information */ - struct unixLockInfo *pLock = 0;/* Candidate unixLockInfo object */ - struct unixOpenCnt *pOpen; /* Candidate unixOpenCnt object */ + unixInodeInfo *pInode = 0; /* Candidate unixInodeInfo object */ assert( unixMutexHeld() ); /* Get low-level information about the file that we can used to ** create a unique name for the file. */ fd = pFile->h; - rc = fstat(fd, &statbuf); + rc = osFstat(fd, &statbuf); if( rc!=0 ){ - pFile->lastErrno = errno; -#ifdef EOVERFLOW + storeLastErrno(pFile, errno); +#if defined(EOVERFLOW) && defined(SQLITE_DISABLE_LFS) if( pFile->lastErrno==EOVERFLOW ) return SQLITE_NOLFS; #endif return SQLITE_IOERR; } @@ -23161,139 +28570,97 @@ ** in the header of every SQLite database. In this way, if there ** is a race condition such that another thread has already populated ** the first page of the database, no damage is done. */ if( statbuf.st_size==0 && (pFile->fsFlags & SQLITE_FSFLAGS_IS_MSDOS)!=0 ){ - rc = write(fd, "S", 1); - if( rc!=1 ){ - pFile->lastErrno = errno; - return SQLITE_IOERR; - } - rc = fstat(fd, &statbuf); - if( rc!=0 ){ - pFile->lastErrno = errno; - return SQLITE_IOERR; - } - } -#endif - - memset(&lockKey, 0, sizeof(lockKey)); - lockKey.fid.dev = statbuf.st_dev; -#if OS_VXWORKS - lockKey.fid.pId = pFile->pId; -#else - lockKey.fid.ino = statbuf.st_ino; -#endif -#if SQLITE_THREADSAFE && defined(__linux__) - if( threadsOverrideEachOthersLocks<0 ){ - testThreadLockingBehavior(fd); - } - lockKey.tid = threadsOverrideEachOthersLocks ? 
0 : pthread_self(); -#endif - fileId = lockKey.fid; - if( ppLock!=0 ){ - pLock = lockList; - while( pLock && memcmp(&lockKey, &pLock->lockKey, sizeof(lockKey)) ){ - pLock = pLock->pNext; - } - if( pLock==0 ){ - pLock = sqlite3_malloc( sizeof(*pLock) ); - if( pLock==0 ){ - rc = SQLITE_NOMEM; - goto exit_findlockinfo; - } - memcpy(&pLock->lockKey,&lockKey,sizeof(lockKey)); - pLock->nRef = 1; - pLock->cnt = 0; - pLock->locktype = 0; -#if defined(SQLITE_ENABLE_LOCKING_STYLE) - pLock->sharedByte = 0; -#endif - pLock->pNext = lockList; - pLock->pPrev = 0; - if( lockList ) lockList->pPrev = pLock; - lockList = pLock; - }else{ - pLock->nRef++; - } - *ppLock = pLock; - } - if( ppOpen!=0 ){ - pOpen = openList; - while( pOpen && memcmp(&fileId, &pOpen->fileId, sizeof(fileId)) ){ - pOpen = pOpen->pNext; - } - if( pOpen==0 ){ - pOpen = sqlite3_malloc( sizeof(*pOpen) ); - if( pOpen==0 ){ - releaseLockInfo(pLock); - rc = SQLITE_NOMEM; - goto exit_findlockinfo; - } - memset(pOpen, 0, sizeof(*pOpen)); - pOpen->fileId = fileId; - pOpen->nRef = 1; - pOpen->pNext = openList; - if( openList ) openList->pPrev = pOpen; - openList = pOpen; - }else{ - pOpen->nRef++; - } - *ppOpen = pOpen; - } - -exit_findlockinfo: - return rc; -} - -/* -** If we are currently in a different thread than the thread that the -** unixFile argument belongs to, then transfer ownership of the unixFile -** over to the current thread. -** -** A unixFile is only owned by a thread on systems that use LinuxThreads. -** -** Ownership transfer is only allowed if the unixFile is currently unlocked. -** If the unixFile is locked and an ownership is wrong, then return -** SQLITE_MISUSE. SQLITE_OK is returned if everything works. -*/ -#if SQLITE_THREADSAFE && defined(__linux__) -static int transferOwnership(unixFile *pFile){ - int rc; - pthread_t hSelf; - if( threadsOverrideEachOthersLocks ){ - /* Ownership transfers not needed on this system */ - return SQLITE_OK; - } - hSelf = pthread_self(); - if( pthread_equal(pFile->tid, hSelf) ){ - /* We are still in the same thread */ - OSTRACE1("No-transfer, same thread\n"); - return SQLITE_OK; - } - if( pFile->locktype!=NO_LOCK ){ - /* We cannot change ownership while we are holding a lock! 
*/ - return SQLITE_MISUSE_BKPT; - } - OSTRACE4("Transfer ownership of %d from %d to %d\n", - pFile->h, pFile->tid, hSelf); - pFile->tid = hSelf; - if (pFile->pLock != NULL) { - releaseLockInfo(pFile->pLock); - rc = findLockInfo(pFile, &pFile->pLock, 0); - OSTRACE5("LOCK %d is now %s(%s,%d)\n", pFile->h, - locktypeName(pFile->locktype), - locktypeName(pFile->pLock->locktype), pFile->pLock->cnt); - return rc; - } else { - return SQLITE_OK; - } -} -#else /* if not SQLITE_THREADSAFE */ - /* On single-threaded builds, ownership transfer is a no-op */ -# define transferOwnership(X) SQLITE_OK -#endif /* SQLITE_THREADSAFE */ + do{ rc = osWrite(fd, "S", 1); }while( rc<0 && errno==EINTR ); + if( rc!=1 ){ + storeLastErrno(pFile, errno); + return SQLITE_IOERR; + } + rc = osFstat(fd, &statbuf); + if( rc!=0 ){ + storeLastErrno(pFile, errno); + return SQLITE_IOERR; + } + } +#endif + + memset(&fileId, 0, sizeof(fileId)); + fileId.dev = statbuf.st_dev; +#if OS_VXWORKS + fileId.pId = pFile->pId; +#else + fileId.ino = statbuf.st_ino; +#endif + pInode = inodeList; + while( pInode && memcmp(&fileId, &pInode->fileId, sizeof(fileId)) ){ + pInode = pInode->pNext; + } + if( pInode==0 ){ + pInode = sqlite3_malloc64( sizeof(*pInode) ); + if( pInode==0 ){ + return SQLITE_NOMEM; + } + memset(pInode, 0, sizeof(*pInode)); + memcpy(&pInode->fileId, &fileId, sizeof(fileId)); + pInode->nRef = 1; + pInode->pNext = inodeList; + pInode->pPrev = 0; + if( inodeList ) inodeList->pPrev = pInode; + inodeList = pInode; + }else{ + pInode->nRef++; + } + *ppInode = pInode; + return SQLITE_OK; +} + +/* +** Return TRUE if pFile has been renamed or unlinked since it was first opened. +*/ +static int fileHasMoved(unixFile *pFile){ +#if OS_VXWORKS + return pFile->pInode!=0 && pFile->pId!=pFile->pInode->fileId.pId; +#else + struct stat buf; + return pFile->pInode!=0 && + (osStat(pFile->zPath, &buf)!=0 || buf.st_ino!=pFile->pInode->fileId.ino); +#endif +} + + +/* +** Check a unixFile that is a database. Verify the following: +** +** (1) There is exactly one hard link on the file +** (2) The file is not a symbolic link +** (3) The file has not been renamed or unlinked +** +** Issue sqlite3_log(SQLITE_WARNING,...) messages if anything is not right. +*/ +static void verifyDbFile(unixFile *pFile){ + struct stat buf; + int rc; + rc = osFstat(pFile->h, &buf); + if( rc!=0 ){ + sqlite3_log(SQLITE_WARNING, "cannot fstat db file %s", pFile->zPath); + return; + } + if( buf.st_nlink==0 && (pFile->ctrlFlags & UNIXFILE_DELETE)==0 ){ + sqlite3_log(SQLITE_WARNING, "file unlinked while open: %s", pFile->zPath); + return; + } + if( buf.st_nlink>1 ){ + sqlite3_log(SQLITE_WARNING, "multiple links to file: %s", pFile->zPath); + return; + } + if( fileHasMoved(pFile) ){ + sqlite3_log(SQLITE_WARNING, "file renamed while open: %s", pFile->zPath); + return; + } +} /* ** This routine checks if there is a RESERVED lock held on the specified ** file by this or any other process. If such a lock is held, set *pResOut @@ -23306,45 +28673,90 @@ unixFile *pFile = (unixFile*)id; SimulateIOError( return SQLITE_IOERR_CHECKRESERVEDLOCK; ); assert( pFile ); - unixEnterMutex(); /* Because pFile->pLock is shared across threads */ + assert( pFile->eFileLock<=SHARED_LOCK ); + unixEnterMutex(); /* Because pFile->pInode is shared across threads */ /* Check if a thread in this process holds such a lock */ - if( pFile->pLock->locktype>SHARED_LOCK ){ + if( pFile->pInode->eFileLock>SHARED_LOCK ){ reserved = 1; } /* Otherwise see if some other process holds it. 
*/ #ifndef __DJGPP__ - if( !reserved ){ + if( !reserved && !pFile->pInode->bProcessLock ){ struct flock lock; lock.l_whence = SEEK_SET; lock.l_start = RESERVED_BYTE; lock.l_len = 1; lock.l_type = F_WRLCK; - if (-1 == fcntl(pFile->h, F_GETLK, &lock)) { - int tErrno = errno; - rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_CHECKRESERVEDLOCK); - pFile->lastErrno = tErrno; + if( osFcntl(pFile->h, F_GETLK, &lock) ){ + rc = SQLITE_IOERR_CHECKRESERVEDLOCK; + storeLastErrno(pFile, errno); } else if( lock.l_type!=F_UNLCK ){ reserved = 1; } } #endif unixLeaveMutex(); - OSTRACE4("TEST WR-LOCK %d %d %d (unix)\n", pFile->h, rc, reserved); + OSTRACE(("TEST WR-LOCK %d %d %d (unix)\n", pFile->h, rc, reserved)); *pResOut = reserved; return rc; } /* -** Lock the file with the lock specified by parameter locktype - one +** Attempt to set a system-lock on the file pFile. The lock is +** described by pLock. +** +** If the pFile was opened read/write from unix-excl, then the only lock +** ever obtained is an exclusive lock, and it is obtained exactly once +** the first time any lock is attempted. All subsequent system locking +** operations become no-ops. Locking operations still happen internally, +** in order to coordinate access between separate database connections +** within this process, but all of that is handled in memory and the +** operating system does not participate. +** +** This function is a pass-through to fcntl(F_SETLK) if pFile is using +** any VFS other than "unix-excl" or if pFile is opened on "unix-excl" +** and is read-only. +** +** Zero is returned if the call completes successfully, or -1 if a call +** to fcntl() fails. In this case, errno is set appropriately (by fcntl()). +*/ +static int unixFileLock(unixFile *pFile, struct flock *pLock){ + int rc; + unixInodeInfo *pInode = pFile->pInode; + assert( unixMutexHeld() ); + assert( pInode!=0 ); + if( (pFile->ctrlFlags & (UNIXFILE_EXCL|UNIXFILE_RDONLY))==UNIXFILE_EXCL ){ + if( pInode->bProcessLock==0 ){ + struct flock lock; + assert( pInode->nLock==0 ); + lock.l_whence = SEEK_SET; + lock.l_start = SHARED_FIRST; + lock.l_len = SHARED_SIZE; + lock.l_type = F_WRLCK; + rc = osFcntl(pFile->h, F_SETLK, &lock); + if( rc<0 ) return rc; + pInode->bProcessLock = 1; + pInode->nLock++; + }else{ + rc = 0; + } + }else{ + rc = osFcntl(pFile->h, F_SETLK, pLock); + } + return rc; +} + +/* +** Lock the file with the lock specified by parameter eFileLock - one ** of the following: ** ** (1) SHARED_LOCK ** (2) RESERVED_LOCK ** (3) PENDING_LOCK @@ -23363,11 +28775,11 @@ ** PENDING -> EXCLUSIVE ** ** This routine will only increase a lock. Use the sqlite3OsUnlock() ** routine to lower a locking level. */ -static int unixLock(sqlite3_file *id, int locktype){ +static int unixLock(sqlite3_file *id, int eFileLock){ /* The following describes the implementation of the various locks and ** lock transitions in terms of the POSIX advisory shared and exclusive ** lock primitives (called read-locks and write-locks below, to avoid ** confusion with SQLite lock names). The algorithms are complicated ** slightly in order to be compatible with windows systems simultaneously @@ -23404,74 +28816,66 @@ ** locking a random byte from a range, concurrent SHARED locks may exist ** even if the locking primitive used is always a write-lock. 
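
The fcntl() calls that follow implement this mapping with byte-range locks.  Condensed
into a standalone sketch of just the SHARED path, assuming the PENDING_BYTE,
SHARED_FIRST and SHARED_SIZE offsets defined elsewhere in this file (error handling
reduced to early returns; the function name is illustrative):

    #include <fcntl.h>
    #include <string.h>

    static int sketchSharedLock(int fd){
      struct flock lk;
      memset(&lk, 0, sizeof(lk));
      lk.l_whence = SEEK_SET;

      lk.l_type  = F_RDLCK;            /* 1: momentary lock on the PENDING byte */
      lk.l_start = PENDING_BYTE;
      lk.l_len   = 1;
      if( fcntl(fd, F_SETLK, &lk)<0 ) return -1;

      lk.l_type  = F_RDLCK;            /* 2: read-lock the shared range */
      lk.l_start = SHARED_FIRST;
      lk.l_len   = SHARED_SIZE;
      if( fcntl(fd, F_SETLK, &lk)<0 ) return -1;

      lk.l_type  = F_UNLCK;            /* 3: drop the PENDING byte again */
      lk.l_start = PENDING_BYTE;
      lk.l_len   = 1;
      return fcntl(fd, F_SETLK, &lk);
    }
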
*/ int rc = SQLITE_OK; unixFile *pFile = (unixFile*)id; - struct unixLockInfo *pLock = pFile->pLock; + unixInodeInfo *pInode; struct flock lock; - int s = 0; int tErrno = 0; assert( pFile ); - OSTRACE7("LOCK %d %s was %s(%s,%d) pid=%d (unix)\n", pFile->h, - locktypeName(locktype), locktypeName(pFile->locktype), - locktypeName(pLock->locktype), pLock->cnt , getpid()); + OSTRACE(("LOCK %d %s was %s(%s,%d) pid=%d (unix)\n", pFile->h, + azFileLock(eFileLock), azFileLock(pFile->eFileLock), + azFileLock(pFile->pInode->eFileLock), pFile->pInode->nShared, + osGetpid(0))); /* If there is already a lock of this type or more restrictive on the ** unixFile, do nothing. Don't use the end_lock: exit path, as ** unixEnterMutex() hasn't been called yet. */ - if( pFile->locktype>=locktype ){ - OSTRACE3("LOCK %d %s ok (already held) (unix)\n", pFile->h, - locktypeName(locktype)); + if( pFile->eFileLock>=eFileLock ){ + OSTRACE(("LOCK %d %s ok (already held) (unix)\n", pFile->h, + azFileLock(eFileLock))); return SQLITE_OK; } /* Make sure the locking sequence is correct. ** (1) We never move from unlocked to anything higher than shared lock. ** (2) SQLite never explicitly requests a pendig lock. ** (3) A shared lock is always held when a reserve lock is requested. */ - assert( pFile->locktype!=NO_LOCK || locktype==SHARED_LOCK ); - assert( locktype!=PENDING_LOCK ); - assert( locktype!=RESERVED_LOCK || pFile->locktype==SHARED_LOCK ); + assert( pFile->eFileLock!=NO_LOCK || eFileLock==SHARED_LOCK ); + assert( eFileLock!=PENDING_LOCK ); + assert( eFileLock!=RESERVED_LOCK || pFile->eFileLock==SHARED_LOCK ); - /* This mutex is needed because pFile->pLock is shared across threads + /* This mutex is needed because pFile->pInode is shared across threads */ unixEnterMutex(); - - /* Make sure the current thread owns the pFile. - */ - rc = transferOwnership(pFile); - if( rc!=SQLITE_OK ){ - unixLeaveMutex(); - return rc; - } - pLock = pFile->pLock; + pInode = pFile->pInode; /* If some thread using this PID has a lock via a different unixFile* ** handle that precludes the requested lock, return BUSY. */ - if( (pFile->locktype!=pLock->locktype && - (pLock->locktype>=PENDING_LOCK || locktype>SHARED_LOCK)) + if( (pFile->eFileLock!=pInode->eFileLock && + (pInode->eFileLock>=PENDING_LOCK || eFileLock>SHARED_LOCK)) ){ rc = SQLITE_BUSY; goto end_lock; } /* If a SHARED lock is requested, and some thread using this PID already ** has a SHARED or RESERVED lock, then increment reference counts and ** return SQLITE_OK. */ - if( locktype==SHARED_LOCK && - (pLock->locktype==SHARED_LOCK || pLock->locktype==RESERVED_LOCK) ){ - assert( locktype==SHARED_LOCK ); - assert( pFile->locktype==0 ); - assert( pLock->cnt>0 ); - pFile->locktype = SHARED_LOCK; - pLock->cnt++; - pFile->pOpen->nLock++; + if( eFileLock==SHARED_LOCK && + (pInode->eFileLock==SHARED_LOCK || pInode->eFileLock==RESERVED_LOCK) ){ + assert( eFileLock==SHARED_LOCK ); + assert( pFile->eFileLock==0 ); + assert( pInode->nShared>0 ); + pFile->eFileLock = SHARED_LOCK; + pInode->nShared++; + pInode->nLock++; goto end_lock; } /* A PENDING lock is needed before acquiring a SHARED lock and before @@ -23478,175 +28882,140 @@ ** acquiring an EXCLUSIVE lock. For the SHARED lock, the PENDING will ** be released. 
*/ lock.l_len = 1L; lock.l_whence = SEEK_SET; - if( locktype==SHARED_LOCK - || (locktype==EXCLUSIVE_LOCK && pFile->locktype<PENDING_LOCK) + if( eFileLock==SHARED_LOCK + || (eFileLock==EXCLUSIVE_LOCK && pFile->eFileLock<PENDING_LOCK) ){ - lock.l_type = (locktype==SHARED_LOCK?F_RDLCK:F_WRLCK); + lock.l_type = (eFileLock==SHARED_LOCK?F_RDLCK:F_WRLCK); lock.l_start = PENDING_BYTE; - s = fcntl(pFile->h, F_SETLK, &lock); - if( s==(-1) ){ + if( unixFileLock(pFile, &lock) ){ tErrno = errno; rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK); - if( IS_LOCK_ERROR(rc) ){ - pFile->lastErrno = tErrno; + if( rc!=SQLITE_BUSY ){ + storeLastErrno(pFile, tErrno); } goto end_lock; } } /* If control gets to this point, then actually go ahead and make ** operating system calls for the specified lock. */ - if( locktype==SHARED_LOCK ){ - assert( pLock->cnt==0 ); - assert( pLock->locktype==0 ); + if( eFileLock==SHARED_LOCK ){ + assert( pInode->nShared==0 ); + assert( pInode->eFileLock==0 ); + assert( rc==SQLITE_OK ); /* Now get the read-lock */ lock.l_start = SHARED_FIRST; lock.l_len = SHARED_SIZE; - if( (s = fcntl(pFile->h, F_SETLK, &lock))==(-1) ){ + if( unixFileLock(pFile, &lock) ){ tErrno = errno; + rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK); } + /* Drop the temporary PENDING lock */ lock.l_start = PENDING_BYTE; lock.l_len = 1L; lock.l_type = F_UNLCK; - if( fcntl(pFile->h, F_SETLK, &lock)!=0 ){ - if( s != -1 ){ - /* This could happen with a network mount */ - tErrno = errno; - rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_UNLOCK); - if( IS_LOCK_ERROR(rc) ){ - pFile->lastErrno = tErrno; - } - goto end_lock; - } - } - if( s==(-1) ){ - rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK); - if( IS_LOCK_ERROR(rc) ){ - pFile->lastErrno = tErrno; - } - }else{ - pFile->locktype = SHARED_LOCK; - pFile->pOpen->nLock++; - pLock->cnt = 1; - } - }else if( locktype==EXCLUSIVE_LOCK && pLock->cnt>1 ){ + if( unixFileLock(pFile, &lock) && rc==SQLITE_OK ){ + /* This could happen with a network mount */ + tErrno = errno; + rc = SQLITE_IOERR_UNLOCK; + } + + if( rc ){ + if( rc!=SQLITE_BUSY ){ + storeLastErrno(pFile, tErrno); + } + goto end_lock; + }else{ + pFile->eFileLock = SHARED_LOCK; + pInode->nLock++; + pInode->nShared = 1; + } + }else if( eFileLock==EXCLUSIVE_LOCK && pInode->nShared>1 ){ /* We are trying for an exclusive lock but another thread in this ** same process is still holding a shared lock. */ rc = SQLITE_BUSY; }else{ /* The request was for a RESERVED or EXCLUSIVE lock. It is ** assumed that there is a SHARED or greater lock on the file ** already. 
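
The RESERVED and EXCLUSIVE branches that follow differ only in which region receives
the write-lock (EXCLUSIVE additionally relies on the PENDING byte obtained earlier in
unixLock()).  A reduced sketch under the same assumptions as the sketch above:

    static int sketchEscalate(int fd, int wantExclusive){
      struct flock lk;
      memset(&lk, 0, sizeof(lk));
      lk.l_whence = SEEK_SET;
      lk.l_type   = F_WRLCK;
      if( wantExclusive ){
        lk.l_start = SHARED_FIRST;     /* EXCLUSIVE: write-lock the whole range */
        lk.l_len   = SHARED_SIZE;
      }else{
        lk.l_start = RESERVED_BYTE;    /* RESERVED: write-lock one dedicated byte */
        lk.l_len   = 1;
      }
      return fcntl(fd, F_SETLK, &lk);  /* <0 with EACCES/EAGAIN means busy */
    }
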
*/ - assert( 0!=pFile->locktype ); + assert( 0!=pFile->eFileLock ); lock.l_type = F_WRLCK; - switch( locktype ){ - case RESERVED_LOCK: - lock.l_start = RESERVED_BYTE; - break; - case EXCLUSIVE_LOCK: - lock.l_start = SHARED_FIRST; - lock.l_len = SHARED_SIZE; - break; - default: - assert(0); - } - s = fcntl(pFile->h, F_SETLK, &lock); - if( s==(-1) ){ + + assert( eFileLock==RESERVED_LOCK || eFileLock==EXCLUSIVE_LOCK ); + if( eFileLock==RESERVED_LOCK ){ + lock.l_start = RESERVED_BYTE; + lock.l_len = 1L; + }else{ + lock.l_start = SHARED_FIRST; + lock.l_len = SHARED_SIZE; + } + + if( unixFileLock(pFile, &lock) ){ tErrno = errno; rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK); - if( IS_LOCK_ERROR(rc) ){ - pFile->lastErrno = tErrno; + if( rc!=SQLITE_BUSY ){ + storeLastErrno(pFile, tErrno); } } } -#ifndef NDEBUG +#ifdef SQLITE_DEBUG /* Set up the transaction-counter change checking flags when ** transitioning from a SHARED to a RESERVED lock. The change ** from SHARED to RESERVED marks the beginning of a normal ** write operation (not a hot journal rollback). */ if( rc==SQLITE_OK - && pFile->locktype<=SHARED_LOCK - && locktype==RESERVED_LOCK + && pFile->eFileLock<=SHARED_LOCK + && eFileLock==RESERVED_LOCK ){ pFile->transCntrChng = 0; pFile->dbUpdate = 0; pFile->inNormalWrite = 1; } #endif if( rc==SQLITE_OK ){ - pFile->locktype = locktype; - pLock->locktype = locktype; - }else if( locktype==EXCLUSIVE_LOCK ){ - pFile->locktype = PENDING_LOCK; - pLock->locktype = PENDING_LOCK; + pFile->eFileLock = eFileLock; + pInode->eFileLock = eFileLock; + }else if( eFileLock==EXCLUSIVE_LOCK ){ + pFile->eFileLock = PENDING_LOCK; + pInode->eFileLock = PENDING_LOCK; } end_lock: unixLeaveMutex(); - OSTRACE4("LOCK %d %s %s (unix)\n", pFile->h, locktypeName(locktype), - rc==SQLITE_OK ? "ok" : "failed"); - return rc; -} - -/* -** Close all file descriptors accumuated in the unixOpenCnt->pUnused list. -** If all such file descriptors are closed without error, the list is -** cleared and SQLITE_OK returned. -** -** Otherwise, if an error occurs, then successfully closed file descriptor -** entries are removed from the list, and SQLITE_IOERR_CLOSE returned. -** not deleted and SQLITE_IOERR_CLOSE returned. -*/ -static int closePendingFds(unixFile *pFile){ - int rc = SQLITE_OK; - struct unixOpenCnt *pOpen = pFile->pOpen; - UnixUnusedFd *pError = 0; - UnixUnusedFd *p; - UnixUnusedFd *pNext; - for(p=pOpen->pUnused; p; p=pNext){ - pNext = p->pNext; - if( close(p->fd) ){ - pFile->lastErrno = errno; - rc = SQLITE_IOERR_CLOSE; - p->pNext = pError; - pError = p; - }else{ - sqlite3_free(p); - } - } - pOpen->pUnused = pError; + OSTRACE(("LOCK %d %s %s (unix)\n", pFile->h, azFileLock(eFileLock), + rc==SQLITE_OK ? "ok" : "failed")); return rc; } /* ** Add the file descriptor used by file handle pFile to the corresponding ** pUnused list. */ static void setPendingFd(unixFile *pFile){ - struct unixOpenCnt *pOpen = pFile->pOpen; + unixInodeInfo *pInode = pFile->pInode; UnixUnusedFd *p = pFile->pUnused; - p->pNext = pOpen->pUnused; - pOpen->pUnused = p; + p->pNext = pInode->pUnused; + pInode->pUnused = p; pFile->h = -1; pFile->pUnused = 0; } /* -** Lower the locking level on file descriptor pFile to locktype. locktype +** Lower the locking level on file descriptor pFile to eFileLock. eFileLock ** must be either NO_LOCK or SHARED_LOCK. ** ** If the locking level of the file descriptor is already at or below ** the requested locking level, this routine is a no-op. 
** @@ -23654,51 +29023,40 @@ ** the byte range is divided into 2 parts and the first part is unlocked then ** set to a read lock, then the other part is simply unlocked. This works ** around a bug in BSD NFS lockd (also seen on MacOSX 10.3+) that fails to ** remove the write lock on a region when a read lock is set. */ -static int _posixUnlock(sqlite3_file *id, int locktype, int handleNFSUnlock){ +static int posixUnlock(sqlite3_file *id, int eFileLock, int handleNFSUnlock){ unixFile *pFile = (unixFile*)id; - struct unixLockInfo *pLock; + unixInodeInfo *pInode; struct flock lock; int rc = SQLITE_OK; - int h; - int tErrno; /* Error code from system call errors */ assert( pFile ); - OSTRACE7("UNLOCK %d %d was %d(%d,%d) pid=%d (unix)\n", pFile->h, locktype, - pFile->locktype, pFile->pLock->locktype, pFile->pLock->cnt, getpid()); - - assert( locktype<=SHARED_LOCK ); - if( pFile->locktype<=locktype ){ - return SQLITE_OK; - } - if( CHECK_THREADID(pFile) ){ - return SQLITE_MISUSE_BKPT; + OSTRACE(("UNLOCK %d %d was %d(%d,%d) pid=%d (unix)\n", pFile->h, eFileLock, + pFile->eFileLock, pFile->pInode->eFileLock, pFile->pInode->nShared, + osGetpid(0))); + + assert( eFileLock<=SHARED_LOCK ); + if( pFile->eFileLock<=eFileLock ){ + return SQLITE_OK; } unixEnterMutex(); - h = pFile->h; - pLock = pFile->pLock; - assert( pLock->cnt!=0 ); - if( pFile->locktype>SHARED_LOCK ){ - assert( pLock->locktype==pFile->locktype ); - SimulateIOErrorBenign(1); - SimulateIOError( h=(-1) ) - SimulateIOErrorBenign(0); - -#ifndef NDEBUG + pInode = pFile->pInode; + assert( pInode->nShared!=0 ); + if( pFile->eFileLock>SHARED_LOCK ){ + assert( pInode->eFileLock==pFile->eFileLock ); + +#ifdef SQLITE_DEBUG /* When reducing a lock such that other processes can start ** reading the database file again, make sure that the ** transaction counter was updated if any part of the database ** file changed. If the transaction counter is not updated, ** other connections to the same file might not realize that ** the file has changed and hence might not know to flush their ** cache. The use of a stale cache can lead to database corruption. */ - assert( pFile->inNormalWrite==0 - || pFile->dbUpdate==0 - || pFile->transCntrChng==1 ); pFile->inNormalWrite = 0; #endif /* downgrading to a shared lock on NFS involves clearing the write lock ** before establishing the readlock - to avoid a race condition we downgrade @@ -23707,139 +29065,139 @@ ** 1: [WWWWW] ** 2: [....W] ** 3: [RRRRW] ** 4: [RRRR.] 
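
The numbered picture above corresponds to three fcntl() calls performed back to back,
so the shared range is never left entirely unlocked.  A condensed sketch under the same
offset and header assumptions as the earlier sketches (error paths shortened):

    static int sketchNfsDowngrade(int fd){
      struct flock lk;
      off_t divSize = SHARED_SIZE - 1;
      memset(&lk, 0, sizeof(lk));
      lk.l_whence = SEEK_SET;

      lk.l_type  = F_UNLCK;                 /* step 2: unlock all but the last byte */
      lk.l_start = SHARED_FIRST;
      lk.l_len   = divSize;
      if( fcntl(fd, F_SETLK, &lk)<0 ) return -1;

      lk.l_type  = F_RDLCK;                 /* step 3: re-take it as a read lock */
      if( fcntl(fd, F_SETLK, &lk)<0 ) return -1;

      lk.l_type  = F_UNLCK;                 /* step 4: release the final byte */
      lk.l_start = SHARED_FIRST + divSize;
      lk.l_len   = SHARED_SIZE - divSize;
      return fcntl(fd, F_SETLK, &lk);
    }
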
*/ - if( locktype==SHARED_LOCK ){ + if( eFileLock==SHARED_LOCK ){ +#if !defined(__APPLE__) || !SQLITE_ENABLE_LOCKING_STYLE + (void)handleNFSUnlock; + assert( handleNFSUnlock==0 ); +#endif +#if defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE if( handleNFSUnlock ){ + int tErrno; /* Error code from system call errors */ off_t divSize = SHARED_SIZE - 1; lock.l_type = F_UNLCK; lock.l_whence = SEEK_SET; lock.l_start = SHARED_FIRST; lock.l_len = divSize; - if( fcntl(h, F_SETLK, &lock)==(-1) ){ + if( unixFileLock(pFile, &lock)==(-1) ){ tErrno = errno; - rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_UNLOCK); - if( IS_LOCK_ERROR(rc) ){ - pFile->lastErrno = tErrno; - } + rc = SQLITE_IOERR_UNLOCK; + storeLastErrno(pFile, tErrno); goto end_unlock; } lock.l_type = F_RDLCK; lock.l_whence = SEEK_SET; lock.l_start = SHARED_FIRST; lock.l_len = divSize; - if( fcntl(h, F_SETLK, &lock)==(-1) ){ + if( unixFileLock(pFile, &lock)==(-1) ){ tErrno = errno; rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_RDLOCK); if( IS_LOCK_ERROR(rc) ){ - pFile->lastErrno = tErrno; + storeLastErrno(pFile, tErrno); } goto end_unlock; } lock.l_type = F_UNLCK; lock.l_whence = SEEK_SET; lock.l_start = SHARED_FIRST+divSize; lock.l_len = SHARED_SIZE-divSize; - if( fcntl(h, F_SETLK, &lock)==(-1) ){ + if( unixFileLock(pFile, &lock)==(-1) ){ tErrno = errno; - rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_UNLOCK); - if( IS_LOCK_ERROR(rc) ){ - pFile->lastErrno = tErrno; - } + rc = SQLITE_IOERR_UNLOCK; + storeLastErrno(pFile, tErrno); goto end_unlock; } - }else{ + }else +#endif /* defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE */ + { lock.l_type = F_RDLCK; lock.l_whence = SEEK_SET; lock.l_start = SHARED_FIRST; lock.l_len = SHARED_SIZE; - if( fcntl(h, F_SETLK, &lock)==(-1) ){ - tErrno = errno; - rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_RDLOCK); - if( IS_LOCK_ERROR(rc) ){ - pFile->lastErrno = tErrno; - } + if( unixFileLock(pFile, &lock) ){ + /* In theory, the call to unixFileLock() cannot fail because another + ** process is holding an incompatible lock. If it does, this + ** indicates that the other process is not following the locking + ** protocol. If this happens, return SQLITE_IOERR_RDLOCK. Returning + ** SQLITE_BUSY would confuse the upper layer (in practice it causes + ** an assert to fail). */ + rc = SQLITE_IOERR_RDLOCK; + storeLastErrno(pFile, errno); goto end_unlock; } } } lock.l_type = F_UNLCK; lock.l_whence = SEEK_SET; lock.l_start = PENDING_BYTE; lock.l_len = 2L; assert( PENDING_BYTE+1==RESERVED_BYTE ); - if( fcntl(h, F_SETLK, &lock)!=(-1) ){ - pLock->locktype = SHARED_LOCK; + if( unixFileLock(pFile, &lock)==0 ){ + pInode->eFileLock = SHARED_LOCK; }else{ - tErrno = errno; - rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_UNLOCK); - if( IS_LOCK_ERROR(rc) ){ - pFile->lastErrno = tErrno; - } + rc = SQLITE_IOERR_UNLOCK; + storeLastErrno(pFile, errno); goto end_unlock; } } - if( locktype==NO_LOCK ){ - struct unixOpenCnt *pOpen; - + if( eFileLock==NO_LOCK ){ /* Decrement the shared lock counter. Release the lock using an ** OS call only when all threads in this same process have released ** the lock. 
*/ - pLock->cnt--; - if( pLock->cnt==0 ){ + pInode->nShared--; + if( pInode->nShared==0 ){ lock.l_type = F_UNLCK; lock.l_whence = SEEK_SET; lock.l_start = lock.l_len = 0L; - SimulateIOErrorBenign(1); - SimulateIOError( h=(-1) ) - SimulateIOErrorBenign(0); - if( fcntl(h, F_SETLK, &lock)!=(-1) ){ - pLock->locktype = NO_LOCK; + if( unixFileLock(pFile, &lock)==0 ){ + pInode->eFileLock = NO_LOCK; }else{ - tErrno = errno; - rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_UNLOCK); - if( IS_LOCK_ERROR(rc) ){ - pFile->lastErrno = tErrno; - } - pLock->locktype = NO_LOCK; - pFile->locktype = NO_LOCK; + rc = SQLITE_IOERR_UNLOCK; + storeLastErrno(pFile, errno); + pInode->eFileLock = NO_LOCK; + pFile->eFileLock = NO_LOCK; } } /* Decrement the count of locks against this same file. When the ** count reaches zero, close any other file descriptors whose close ** was deferred because of outstanding locks. */ - pOpen = pFile->pOpen; - pOpen->nLock--; - assert( pOpen->nLock>=0 ); - if( pOpen->nLock==0 ){ - int rc2 = closePendingFds(pFile); - if( rc==SQLITE_OK ){ - rc = rc2; - } + pInode->nLock--; + assert( pInode->nLock>=0 ); + if( pInode->nLock==0 ){ + closePendingFds(pFile); } } - + end_unlock: unixLeaveMutex(); - if( rc==SQLITE_OK ) pFile->locktype = locktype; + if( rc==SQLITE_OK ) pFile->eFileLock = eFileLock; return rc; } /* -** Lower the locking level on file descriptor pFile to locktype. locktype +** Lower the locking level on file descriptor pFile to eFileLock. eFileLock ** must be either NO_LOCK or SHARED_LOCK. ** ** If the locking level of the file descriptor is already at or below ** the requested locking level, this routine is a no-op. */ -static int unixUnlock(sqlite3_file *id, int locktype){ - return _posixUnlock(id, locktype, 0); +static int unixUnlock(sqlite3_file *id, int eFileLock){ +#if SQLITE_MAX_MMAP_SIZE>0 + assert( eFileLock==SHARED_LOCK || ((unixFile *)id)->nFetchOut==0 ); +#endif + return posixUnlock(id, eFileLock, 0); } + +#if SQLITE_MAX_MMAP_SIZE>0 +static int unixMapfile(unixFile *pFd, i64 nByte); +static void unixUnmapfile(unixFile *pFd); +#endif /* ** This function performs the parts of the "close file" operation ** common to all locking schemes. It closes the directory and file ** handles, if they are valid, and sets all fields of the unixFile @@ -23849,66 +29207,65 @@ ** even on VxWorks. A mutex will be acquired on VxWorks by the ** vxworksReleaseFileId() routine. 
*/ static int closeUnixFile(sqlite3_file *id){ unixFile *pFile = (unixFile*)id; - if( pFile ){ - if( pFile->dirfd>=0 ){ - int err = close(pFile->dirfd); - if( err ){ - pFile->lastErrno = errno; - return SQLITE_IOERR_DIR_CLOSE; - }else{ - pFile->dirfd=-1; - } - } - if( pFile->h>=0 ){ - int err = close(pFile->h); - if( err ){ - pFile->lastErrno = errno; - return SQLITE_IOERR_CLOSE; - } - } +#if SQLITE_MAX_MMAP_SIZE>0 + unixUnmapfile(pFile); +#endif + if( pFile->h>=0 ){ + robust_close(pFile, pFile->h, __LINE__); + pFile->h = -1; + } #if OS_VXWORKS - if( pFile->pId ){ - if( pFile->isDelete ){ - unlink(pFile->pId->zCanonicalName); - } - vxworksReleaseFileId(pFile->pId); - pFile->pId = 0; - } -#endif - OSTRACE2("CLOSE %-3d\n", pFile->h); - OpenCounter(-1); - sqlite3_free(pFile->pUnused); - memset(pFile, 0, sizeof(unixFile)); - } + if( pFile->pId ){ + if( pFile->ctrlFlags & UNIXFILE_DELETE ){ + osUnlink(pFile->pId->zCanonicalName); + } + vxworksReleaseFileId(pFile->pId); + pFile->pId = 0; + } +#endif +#ifdef SQLITE_UNLINK_AFTER_CLOSE + if( pFile->ctrlFlags & UNIXFILE_DELETE ){ + osUnlink(pFile->zPath); + sqlite3_free(*(char**)&pFile->zPath); + pFile->zPath = 0; + } +#endif + OSTRACE(("CLOSE %-3d\n", pFile->h)); + OpenCounter(-1); + sqlite3_free(pFile->pUnused); + memset(pFile, 0, sizeof(unixFile)); return SQLITE_OK; } /* ** Close a file. */ static int unixClose(sqlite3_file *id){ int rc = SQLITE_OK; - if( id ){ - unixFile *pFile = (unixFile *)id; - unixUnlock(id, NO_LOCK); - unixEnterMutex(); - if( pFile->pOpen && pFile->pOpen->nLock ){ - /* If there are outstanding locks, do not actually close the file just - ** yet because that would clear those locks. Instead, add the file - ** descriptor to pOpen->pUnused list. It will be automatically closed - ** when the last lock is cleared. - */ - setPendingFd(pFile); - } - releaseLockInfo(pFile->pLock); - releaseOpenCnt(pFile->pOpen); - rc = closeUnixFile(id); - unixLeaveMutex(); - } + unixFile *pFile = (unixFile *)id; + verifyDbFile(pFile); + unixUnlock(id, NO_LOCK); + unixEnterMutex(); + + /* unixFile.pInode is always valid here. Otherwise, a different close + ** routine (e.g. nolockClose()) would be called instead. + */ + assert( pFile->pInode->nLock>0 || pFile->pInode->bProcessLock==0 ); + if( ALWAYS(pFile->pInode) && pFile->pInode->nLock ){ + /* If there are outstanding locks, do not actually close the file just + ** yet because that would clear those locks. Instead, add the file + ** descriptor to pInode->pUnused list. It will be automatically closed + ** when the last lock is cleared. + */ + setPendingFd(pFile); + } + releaseInodeInfo(pFile); + rc = closeUnixFile(id); + unixLeaveMutex(); return rc; } /************** End of the posix advisory lock implementation ***************** ******************************************************************************/ @@ -23955,13 +29312,13 @@ ******************************************************************************/ /****************************************************************************** ************************* Begin dot-file Locking ****************************** ** -** The dotfile locking implementation uses the existance of separate lock -** files in order to control access to the database. This works on just -** about every filesystem imaginable. But there are serious downsides: +** The dotfile locking implementation uses the existence of separate lock +** files (really a directory) to control access to the database. This works +** on just about every filesystem imaginable. 
But there are serious downsides: ** ** (1) There is zero concurrency. A single reader blocks all other ** connections from reading or writing the database. ** ** (2) An application crash or power loss can leave stale lock files @@ -23968,19 +29325,19 @@ ** sitting around that need to be cleared manually. ** ** Nevertheless, a dotlock is an appropriate locking mode for use if no ** other locking strategy is available. ** -** Dotfile locking works by creating a file in the same directory as the -** database and with the same name but with a ".lock" extension added. -** The existance of a lock file implies an EXCLUSIVE lock. All other lock -** types (SHARED, RESERVED, PENDING) are mapped into EXCLUSIVE. +** Dotfile locking works by creating a subdirectory in the same directory as +** the database and with the same name but with a ".lock" extension added. +** The existence of a lock directory implies an EXCLUSIVE lock. All other +** lock types (SHARED, RESERVED, PENDING) are mapped into EXCLUSIVE. */ /* ** The file suffix added to the data base filename in order to create the -** lock file. +** lock directory. */ #define DOTLOCK_SUFFIX ".lock" /* ** This routine checks if there is a RESERVED lock held on the specified @@ -23998,28 +29355,18 @@ unixFile *pFile = (unixFile*)id; SimulateIOError( return SQLITE_IOERR_CHECKRESERVEDLOCK; ); assert( pFile ); - - /* Check if a thread in this process holds such a lock */ - if( pFile->locktype>SHARED_LOCK ){ - /* Either this connection or some other connection in the same process - ** holds a lock on the file. No need to check further. */ - reserved = 1; - }else{ - /* The lock is held if and only if the lockfile exists */ - const char *zLockFile = (const char*)pFile->lockingContext; - reserved = access(zLockFile, 0)==0; - } - OSTRACE4("TEST WR-LOCK %d %d %d (dotlock)\n", pFile->h, rc, reserved); + reserved = osAccess((const char*)pFile->lockingContext, 0)==0; + OSTRACE(("TEST WR-LOCK %d %d %d (dotlock)\n", pFile->h, rc, reserved)); *pResOut = reserved; return rc; } /* -** Lock the file with the lock specified by parameter locktype - one +** Lock the file with the lock specified by parameter eFileLock - one ** of the following: ** ** (1) SHARED_LOCK ** (2) RESERVED_LOCK ** (3) PENDING_LOCK @@ -24041,114 +29388,109 @@ ** routine to lower a locking level. ** ** With dotfile locking, we really only support state (4): EXCLUSIVE. ** But we track the other locking levels internally. */ -static int dotlockLock(sqlite3_file *id, int locktype) { +static int dotlockLock(sqlite3_file *id, int eFileLock) { unixFile *pFile = (unixFile*)id; - int fd; char *zLockFile = (char *)pFile->lockingContext; int rc = SQLITE_OK; /* If we have any lock, then the lock file already exists. All we have ** to do is adjust our internal record of the lock level. 
*/ - if( pFile->locktype > NO_LOCK ){ - pFile->locktype = locktype; -#if !OS_VXWORKS + if( pFile->eFileLock > NO_LOCK ){ + pFile->eFileLock = eFileLock; /* Always update the timestamp on the old file */ +#ifdef HAVE_UTIME + utime(zLockFile, NULL); +#else utimes(zLockFile, NULL); #endif return SQLITE_OK; } /* grab an exclusive lock */ - fd = open(zLockFile,O_RDONLY|O_CREAT|O_EXCL,0600); - if( fd<0 ){ - /* failed to open/create the file, someone else may have stolen the lock */ + rc = osMkdir(zLockFile, 0777); + if( rc<0 ){ + /* failed to open/create the lock directory */ int tErrno = errno; if( EEXIST == tErrno ){ rc = SQLITE_BUSY; } else { rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK); - if( IS_LOCK_ERROR(rc) ){ - pFile->lastErrno = tErrno; + if( rc!=SQLITE_BUSY ){ + storeLastErrno(pFile, tErrno); } } return rc; } - if( close(fd) ){ - pFile->lastErrno = errno; - rc = SQLITE_IOERR_CLOSE; - } /* got it, set the type and return ok */ - pFile->locktype = locktype; + pFile->eFileLock = eFileLock; return rc; } /* -** Lower the locking level on file descriptor pFile to locktype. locktype +** Lower the locking level on file descriptor pFile to eFileLock. eFileLock ** must be either NO_LOCK or SHARED_LOCK. ** ** If the locking level of the file descriptor is already at or below ** the requested locking level, this routine is a no-op. ** ** When the locking level reaches NO_LOCK, delete the lock file. */ -static int dotlockUnlock(sqlite3_file *id, int locktype) { +static int dotlockUnlock(sqlite3_file *id, int eFileLock) { unixFile *pFile = (unixFile*)id; char *zLockFile = (char *)pFile->lockingContext; + int rc; assert( pFile ); - OSTRACE5("UNLOCK %d %d was %d pid=%d (dotlock)\n", pFile->h, locktype, - pFile->locktype, getpid()); - assert( locktype<=SHARED_LOCK ); + OSTRACE(("UNLOCK %d %d was %d pid=%d (dotlock)\n", pFile->h, eFileLock, + pFile->eFileLock, osGetpid(0))); + assert( eFileLock<=SHARED_LOCK ); /* no-op if possible */ - if( pFile->locktype==locktype ){ + if( pFile->eFileLock==eFileLock ){ return SQLITE_OK; } /* To downgrade to shared, simply update our internal notion of the ** lock state. No need to mess with the file on disk. */ - if( locktype==SHARED_LOCK ){ - pFile->locktype = SHARED_LOCK; + if( eFileLock==SHARED_LOCK ){ + pFile->eFileLock = SHARED_LOCK; return SQLITE_OK; } /* To fully unlock the database, delete the lock file */ - assert( locktype==NO_LOCK ); - if( unlink(zLockFile) ){ - int rc = 0; + assert( eFileLock==NO_LOCK ); + rc = osRmdir(zLockFile); + if( rc<0 ){ int tErrno = errno; - if( ENOENT != tErrno ){ - rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_UNLOCK); - } - if( IS_LOCK_ERROR(rc) ){ - pFile->lastErrno = tErrno; + if( tErrno==ENOENT ){ + rc = SQLITE_OK; + }else{ + rc = SQLITE_IOERR_UNLOCK; + storeLastErrno(pFile, tErrno); } return rc; } - pFile->locktype = NO_LOCK; + pFile->eFileLock = NO_LOCK; return SQLITE_OK; } /* ** Close a file. Make sure the lock has been released before closing. 
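
Stripped of the unixFile bookkeeping, dotlockLock()/dotlockUnlock() above reduce to a
mkdir()/rmdir() pair on the "<database>.lock" path.  A standalone sketch of the same
idea (function names illustrative, error handling minimal):

    #include <errno.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Try to take the exclusive dot-lock.  Returns 0 on success, 1 if another
    ** process already holds it, -1 on any other error. */
    static int sketchDotLock(const char *zLockDir){
      if( mkdir(zLockDir, 0777)==0 ) return 0;   /* directory created: lock is ours */
      if( errno==EEXIST ) return 1;              /* already exists: somebody holds it */
      return -1;
    }

    /* Drop the dot-lock.  A missing directory is treated as already unlocked. */
    static int sketchDotUnlock(const char *zLockDir){
      if( rmdir(zLockDir)==0 || errno==ENOENT ) return 0;
      return -1;
    }
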
*/ static int dotlockClose(sqlite3_file *id) { - int rc; - if( id ){ - unixFile *pFile = (unixFile*)id; - dotlockUnlock(id, NO_LOCK); - sqlite3_free(pFile->lockingContext); - } - rc = closeUnixFile(id); - return rc; + unixFile *pFile = (unixFile*)id; + assert( id!=0 ); + dotlockUnlock(id, NO_LOCK); + sqlite3_free(pFile->lockingContext); + return closeUnixFile(id); } /****************** End of the dot-file lock implementation ******************* ******************************************************************************/ /****************************************************************************** @@ -24161,14 +29503,27 @@ ** a single exclusive lock. In other words, SHARED, RESERVED, and ** PENDING locks are the same thing as an EXCLUSIVE lock. SQLite ** still works when you do this, but concurrency is reduced since ** only a single process can be reading the database at a time. ** -** Omit this section if SQLITE_ENABLE_LOCKING_STYLE is turned off or if -** compiling for VXWORKS. +** Omit this section if SQLITE_ENABLE_LOCKING_STYLE is turned off */ -#if SQLITE_ENABLE_LOCKING_STYLE && !OS_VXWORKS +#if SQLITE_ENABLE_LOCKING_STYLE + +/* +** Retry flock() calls that fail with EINTR +*/ +#ifdef EINTR +static int robust_flock(int fd, int op){ + int rc; + do{ rc = flock(fd,op); }while( rc<0 && errno==EINTR ); + return rc; +} +#else +# define robust_flock(a,b) flock(a,b) +#endif + /* ** This routine checks if there is a RESERVED lock held on the specified ** file by this or any other process. If such a lock is held, set *pResOut ** to a non-zero value otherwise *pResOut is set to zero. The return value @@ -24182,42 +29537,40 @@ SimulateIOError( return SQLITE_IOERR_CHECKRESERVEDLOCK; ); assert( pFile ); /* Check if a thread in this process holds such a lock */ - if( pFile->locktype>SHARED_LOCK ){ + if( pFile->eFileLock>SHARED_LOCK ){ reserved = 1; } /* Otherwise see if some other process holds it. */ if( !reserved ){ /* attempt to get the lock */ - int lrc = flock(pFile->h, LOCK_EX | LOCK_NB); + int lrc = robust_flock(pFile->h, LOCK_EX | LOCK_NB); if( !lrc ){ /* got the lock, unlock it */ - lrc = flock(pFile->h, LOCK_UN); + lrc = robust_flock(pFile->h, LOCK_UN); if ( lrc ) { int tErrno = errno; /* unlock failed with an error */ - lrc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_UNLOCK); - if( IS_LOCK_ERROR(lrc) ){ - pFile->lastErrno = tErrno; - rc = lrc; - } + lrc = SQLITE_IOERR_UNLOCK; + storeLastErrno(pFile, tErrno); + rc = lrc; } } else { int tErrno = errno; reserved = 1; /* someone else might have it reserved */ lrc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK); if( IS_LOCK_ERROR(lrc) ){ - pFile->lastErrno = tErrno; + storeLastErrno(pFile, tErrno); rc = lrc; } } } - OSTRACE4("TEST WR-LOCK %d %d %d (flock)\n", pFile->h, rc, reserved); + OSTRACE(("TEST WR-LOCK %d %d %d (flock)\n", pFile->h, rc, reserved)); #ifdef SQLITE_IGNORE_FLOCK_LOCK_ERRORS if( (rc & SQLITE_IOERR) == SQLITE_IOERR ){ rc = SQLITE_OK; reserved=1; @@ -24226,11 +29579,11 @@ *pResOut = reserved; return rc; } /* -** Lock the file with the lock specified by parameter locktype - one +** Lock the file with the lock specified by parameter eFileLock - one ** of the following: ** ** (1) SHARED_LOCK ** (2) RESERVED_LOCK ** (3) PENDING_LOCK @@ -24254,38 +29607,38 @@ ** access the file. ** ** This routine will only increase a lock. Use the sqlite3OsUnlock() ** routine to lower a locking level. 
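
The flock() reserved-lock probe above amounts to "try to take the whole-file lock; if
that succeeds nobody else holds it".  A reduced sketch using the same retry-on-EINTR
idea as robust_flock() (function name illustrative):

    #include <errno.h>
    #include <sys/file.h>

    /* Return 1 if some other descriptor currently holds the flock() lock on fd,
    ** 0 if it is free (the lock is probed and immediately released). */
    static int sketchFlockHeldElsewhere(int fd){
      int rc;
      do{ rc = flock(fd, LOCK_EX|LOCK_NB); }while( rc<0 && errno==EINTR );
      if( rc<0 ) return 1;                    /* could not get it: assume held */
      do{ rc = flock(fd, LOCK_UN); }while( rc<0 && errno==EINTR );
      return 0;                               /* we got it, so it was free */
    }
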
*/ -static int flockLock(sqlite3_file *id, int locktype) { +static int flockLock(sqlite3_file *id, int eFileLock) { int rc = SQLITE_OK; unixFile *pFile = (unixFile*)id; assert( pFile ); /* if we already have a lock, it is exclusive. ** Just adjust level and punt on outta here. */ - if (pFile->locktype > NO_LOCK) { - pFile->locktype = locktype; + if (pFile->eFileLock > NO_LOCK) { + pFile->eFileLock = eFileLock; return SQLITE_OK; } /* grab an exclusive lock */ - if (flock(pFile->h, LOCK_EX | LOCK_NB)) { + if (robust_flock(pFile->h, LOCK_EX | LOCK_NB)) { int tErrno = errno; /* didn't get, must be busy */ rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK); if( IS_LOCK_ERROR(rc) ){ - pFile->lastErrno = tErrno; + storeLastErrno(pFile, tErrno); } } else { /* got it, set the type and return ok */ - pFile->locktype = locktype; + pFile->eFileLock = eFileLock; } - OSTRACE4("LOCK %d %s %s (flock)\n", pFile->h, locktypeName(locktype), - rc==SQLITE_OK ? "ok" : "failed"); + OSTRACE(("LOCK %d %s %s (flock)\n", pFile->h, azFileLock(eFileLock), + rc==SQLITE_OK ? "ok" : "failed")); #ifdef SQLITE_IGNORE_FLOCK_LOCK_ERRORS if( (rc & SQLITE_IOERR) == SQLITE_IOERR ){ rc = SQLITE_BUSY; } #endif /* SQLITE_IGNORE_FLOCK_LOCK_ERRORS */ @@ -24292,63 +29645,53 @@ return rc; } /* -** Lower the locking level on file descriptor pFile to locktype. locktype +** Lower the locking level on file descriptor pFile to eFileLock. eFileLock ** must be either NO_LOCK or SHARED_LOCK. ** ** If the locking level of the file descriptor is already at or below ** the requested locking level, this routine is a no-op. */ -static int flockUnlock(sqlite3_file *id, int locktype) { +static int flockUnlock(sqlite3_file *id, int eFileLock) { unixFile *pFile = (unixFile*)id; assert( pFile ); - OSTRACE5("UNLOCK %d %d was %d pid=%d (flock)\n", pFile->h, locktype, - pFile->locktype, getpid()); - assert( locktype<=SHARED_LOCK ); + OSTRACE(("UNLOCK %d %d was %d pid=%d (flock)\n", pFile->h, eFileLock, + pFile->eFileLock, osGetpid(0))); + assert( eFileLock<=SHARED_LOCK ); /* no-op if possible */ - if( pFile->locktype==locktype ){ + if( pFile->eFileLock==eFileLock ){ return SQLITE_OK; } /* shared can just be set because we always have an exclusive */ - if (locktype==SHARED_LOCK) { - pFile->locktype = locktype; + if (eFileLock==SHARED_LOCK) { + pFile->eFileLock = eFileLock; return SQLITE_OK; } /* no, really, unlock. */ - int rc = flock(pFile->h, LOCK_UN); - if (rc) { - int r, tErrno = errno; - r = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_UNLOCK); - if( IS_LOCK_ERROR(r) ){ - pFile->lastErrno = tErrno; - } + if( robust_flock(pFile->h, LOCK_UN) ){ #ifdef SQLITE_IGNORE_FLOCK_LOCK_ERRORS - if( (r & SQLITE_IOERR) == SQLITE_IOERR ){ - r = SQLITE_BUSY; - } + return SQLITE_OK; #endif /* SQLITE_IGNORE_FLOCK_LOCK_ERRORS */ - - return r; - } else { - pFile->locktype = NO_LOCK; + return SQLITE_IOERR_UNLOCK; + }else{ + pFile->eFileLock = NO_LOCK; return SQLITE_OK; } } /* ** Close a file. */ static int flockClose(sqlite3_file *id) { - if( id ){ - flockUnlock(id, NO_LOCK); - } + assert( id!=0 ); + flockUnlock(id, NO_LOCK); return closeUnixFile(id); } #endif /* SQLITE_ENABLE_LOCKING_STYLE && !OS_VXWORK */ @@ -24371,51 +29714,50 @@ ** This routine checks if there is a RESERVED lock held on the specified ** file by this or any other process. If such a lock is held, set *pResOut ** to a non-zero value otherwise *pResOut is set to zero. The return value ** is set to SQLITE_OK unless an I/O error occurs during lock checking. 
*/ -static int semCheckReservedLock(sqlite3_file *id, int *pResOut) { +static int semXCheckReservedLock(sqlite3_file *id, int *pResOut) { int rc = SQLITE_OK; int reserved = 0; unixFile *pFile = (unixFile*)id; SimulateIOError( return SQLITE_IOERR_CHECKRESERVEDLOCK; ); assert( pFile ); /* Check if a thread in this process holds such a lock */ - if( pFile->locktype>SHARED_LOCK ){ + if( pFile->eFileLock>SHARED_LOCK ){ reserved = 1; } /* Otherwise see if some other process holds it. */ if( !reserved ){ - sem_t *pSem = pFile->pOpen->pSem; - struct stat statBuf; + sem_t *pSem = pFile->pInode->pSem; if( sem_trywait(pSem)==-1 ){ int tErrno = errno; if( EAGAIN != tErrno ){ rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_CHECKRESERVEDLOCK); - pFile->lastErrno = tErrno; + storeLastErrno(pFile, tErrno); } else { /* someone else has the lock when we are in NO_LOCK */ - reserved = (pFile->locktype < SHARED_LOCK); + reserved = (pFile->eFileLock < SHARED_LOCK); } }else{ /* we could have it if we want it */ sem_post(pSem); } } - OSTRACE4("TEST WR-LOCK %d %d %d (sem)\n", pFile->h, rc, reserved); + OSTRACE(("TEST WR-LOCK %d %d %d (sem)\n", pFile->h, rc, reserved)); *pResOut = reserved; return rc; } /* -** Lock the file with the lock specified by parameter locktype - one +** Lock the file with the lock specified by parameter eFileLock - one ** of the following: ** ** (1) SHARED_LOCK ** (2) RESERVED_LOCK ** (3) PENDING_LOCK @@ -24439,20 +29781,19 @@ ** access the file. ** ** This routine will only increase a lock. Use the sqlite3OsUnlock() ** routine to lower a locking level. */ -static int semLock(sqlite3_file *id, int locktype) { +static int semXLock(sqlite3_file *id, int eFileLock) { unixFile *pFile = (unixFile*)id; - int fd; - sem_t *pSem = pFile->pOpen->pSem; + sem_t *pSem = pFile->pInode->pSem; int rc = SQLITE_OK; /* if we already have a lock, it is exclusive. ** Just adjust level and punt on outta here. */ - if (pFile->locktype > NO_LOCK) { - pFile->locktype = locktype; + if (pFile->eFileLock > NO_LOCK) { + pFile->eFileLock = eFileLock; rc = SQLITE_OK; goto sem_end_lock; } /* lock semaphore now but bail out when already locked. */ @@ -24460,68 +29801,67 @@ rc = SQLITE_BUSY; goto sem_end_lock; } /* got it, set the type and return ok */ - pFile->locktype = locktype; + pFile->eFileLock = eFileLock; sem_end_lock: return rc; } /* -** Lower the locking level on file descriptor pFile to locktype. locktype +** Lower the locking level on file descriptor pFile to eFileLock. eFileLock ** must be either NO_LOCK or SHARED_LOCK. ** ** If the locking level of the file descriptor is already at or below ** the requested locking level, this routine is a no-op. 
*/ -static int semUnlock(sqlite3_file *id, int locktype) { +static int semXUnlock(sqlite3_file *id, int eFileLock) { unixFile *pFile = (unixFile*)id; - sem_t *pSem = pFile->pOpen->pSem; + sem_t *pSem = pFile->pInode->pSem; assert( pFile ); assert( pSem ); - OSTRACE5("UNLOCK %d %d was %d pid=%d (sem)\n", pFile->h, locktype, - pFile->locktype, getpid()); - assert( locktype<=SHARED_LOCK ); + OSTRACE(("UNLOCK %d %d was %d pid=%d (sem)\n", pFile->h, eFileLock, + pFile->eFileLock, osGetpid(0))); + assert( eFileLock<=SHARED_LOCK ); /* no-op if possible */ - if( pFile->locktype==locktype ){ + if( pFile->eFileLock==eFileLock ){ return SQLITE_OK; } /* shared can just be set because we always have an exclusive */ - if (locktype==SHARED_LOCK) { - pFile->locktype = locktype; + if (eFileLock==SHARED_LOCK) { + pFile->eFileLock = eFileLock; return SQLITE_OK; } /* no, really unlock. */ if ( sem_post(pSem)==-1 ) { int rc, tErrno = errno; rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_UNLOCK); if( IS_LOCK_ERROR(rc) ){ - pFile->lastErrno = tErrno; + storeLastErrno(pFile, tErrno); } return rc; } - pFile->locktype = NO_LOCK; + pFile->eFileLock = NO_LOCK; return SQLITE_OK; } /* ** Close a file. */ -static int semClose(sqlite3_file *id) { +static int semXClose(sqlite3_file *id) { if( id ){ unixFile *pFile = (unixFile*)id; - semUnlock(id, NO_LOCK); + semXUnlock(id, NO_LOCK); assert( pFile ); unixEnterMutex(); - releaseLockInfo(pFile->pLock); - releaseOpenCnt(pFile->pOpen); + releaseInodeInfo(pFile); unixLeaveMutex(); closeUnixFile(id); } return SQLITE_OK; } @@ -24586,27 +29926,27 @@ pb.startEndFlag = 0; pb.offset = offset; pb.length = length; pb.fd = pFile->h; - OSTRACE6("AFPSETLOCK [%s] for %d%s in range %llx:%llx\n", + OSTRACE(("AFPSETLOCK [%s] for %d%s in range %llx:%llx\n", (setLockFlag?"ON":"OFF"), pFile->h, (pb.fd==-1?"[testval-1]":""), - offset, length); + offset, length)); err = fsctl(path, afpfsByteRangeLock2FSCTL, &pb, 0); if ( err==-1 ) { int rc; int tErrno = errno; - OSTRACE4("AFPSETLOCK failed to fsctl() '%s' %d %s\n", - path, tErrno, strerror(tErrno)); + OSTRACE(("AFPSETLOCK failed to fsctl() '%s' %d %s\n", + path, tErrno, strerror(tErrno))); #ifdef SQLITE_IGNORE_AFP_LOCK_ERRORS rc = SQLITE_BUSY; #else rc = sqliteErrorFromPosixError(tErrno, setLockFlag ? SQLITE_IOERR_LOCK : SQLITE_IOERR_UNLOCK); #endif /* SQLITE_IGNORE_AFP_LOCK_ERRORS */ if( IS_LOCK_ERROR(rc) ){ - pFile->lastErrno = tErrno; + storeLastErrno(pFile, tErrno); } return rc; } else { return SQLITE_OK; } @@ -24620,23 +29960,24 @@ */ static int afpCheckReservedLock(sqlite3_file *id, int *pResOut){ int rc = SQLITE_OK; int reserved = 0; unixFile *pFile = (unixFile*)id; + afpLockingContext *context; SimulateIOError( return SQLITE_IOERR_CHECKRESERVEDLOCK; ); assert( pFile ); - afpLockingContext *context = (afpLockingContext *) pFile->lockingContext; + context = (afpLockingContext *) pFile->lockingContext; if( context->reserved ){ *pResOut = 1; return SQLITE_OK; } - unixEnterMutex(); /* Because pFile->pLock is shared across threads */ + unixEnterMutex(); /* Because pFile->pInode is shared across threads */ /* Check if a thread in this process holds such a lock */ - if( pFile->pLock->locktype>SHARED_LOCK ){ + if( pFile->pInode->eFileLock>SHARED_LOCK ){ reserved = 1; } /* Otherwise see if some other process holds it. 
*/ @@ -24655,18 +29996,18 @@ rc=lrc; } } unixLeaveMutex(); - OSTRACE4("TEST WR-LOCK %d %d %d (afp)\n", pFile->h, rc, reserved); + OSTRACE(("TEST WR-LOCK %d %d %d (afp)\n", pFile->h, rc, reserved)); *pResOut = reserved; return rc; } /* -** Lock the file with the lock specified by parameter locktype - one +** Lock the file with the lock specified by parameter eFileLock - one ** of the following: ** ** (1) SHARED_LOCK ** (2) RESERVED_LOCK ** (3) PENDING_LOCK @@ -24685,84 +30026,76 @@ ** PENDING -> EXCLUSIVE ** ** This routine will only increase a lock. Use the sqlite3OsUnlock() ** routine to lower a locking level. */ -static int afpLock(sqlite3_file *id, int locktype){ +static int afpLock(sqlite3_file *id, int eFileLock){ int rc = SQLITE_OK; unixFile *pFile = (unixFile*)id; - struct unixLockInfo *pLock = pFile->pLock; + unixInodeInfo *pInode = pFile->pInode; afpLockingContext *context = (afpLockingContext *) pFile->lockingContext; assert( pFile ); - OSTRACE7("LOCK %d %s was %s(%s,%d) pid=%d (afp)\n", pFile->h, - locktypeName(locktype), locktypeName(pFile->locktype), - locktypeName(pLock->locktype), pLock->cnt , getpid()); + OSTRACE(("LOCK %d %s was %s(%s,%d) pid=%d (afp)\n", pFile->h, + azFileLock(eFileLock), azFileLock(pFile->eFileLock), + azFileLock(pInode->eFileLock), pInode->nShared , osGetpid(0))); /* If there is already a lock of this type or more restrictive on the ** unixFile, do nothing. Don't use the afp_end_lock: exit path, as ** unixEnterMutex() hasn't been called yet. */ - if( pFile->locktype>=locktype ){ - OSTRACE3("LOCK %d %s ok (already held) (afp)\n", pFile->h, - locktypeName(locktype)); + if( pFile->eFileLock>=eFileLock ){ + OSTRACE(("LOCK %d %s ok (already held) (afp)\n", pFile->h, + azFileLock(eFileLock))); return SQLITE_OK; } /* Make sure the locking sequence is correct ** (1) We never move from unlocked to anything higher than shared lock. ** (2) SQLite never explicitly requests a pendig lock. ** (3) A shared lock is always held when a reserve lock is requested. */ - assert( pFile->locktype!=NO_LOCK || locktype==SHARED_LOCK ); - assert( locktype!=PENDING_LOCK ); - assert( locktype!=RESERVED_LOCK || pFile->locktype==SHARED_LOCK ); + assert( pFile->eFileLock!=NO_LOCK || eFileLock==SHARED_LOCK ); + assert( eFileLock!=PENDING_LOCK ); + assert( eFileLock!=RESERVED_LOCK || pFile->eFileLock==SHARED_LOCK ); - /* This mutex is needed because pFile->pLock is shared across threads + /* This mutex is needed because pFile->pInode is shared across threads */ unixEnterMutex(); - - /* Make sure the current thread owns the pFile. - */ - rc = transferOwnership(pFile); - if( rc!=SQLITE_OK ){ - unixLeaveMutex(); - return rc; - } - pLock = pFile->pLock; + pInode = pFile->pInode; /* If some thread using this PID has a lock via a different unixFile* ** handle that precludes the requested lock, return BUSY. */ - if( (pFile->locktype!=pLock->locktype && - (pLock->locktype>=PENDING_LOCK || locktype>SHARED_LOCK)) + if( (pFile->eFileLock!=pInode->eFileLock && + (pInode->eFileLock>=PENDING_LOCK || eFileLock>SHARED_LOCK)) ){ rc = SQLITE_BUSY; goto afp_end_lock; } /* If a SHARED lock is requested, and some thread using this PID already ** has a SHARED or RESERVED lock, then increment reference counts and ** return SQLITE_OK. 
*/ - if( locktype==SHARED_LOCK && - (pLock->locktype==SHARED_LOCK || pLock->locktype==RESERVED_LOCK) ){ - assert( locktype==SHARED_LOCK ); - assert( pFile->locktype==0 ); - assert( pLock->cnt>0 ); - pFile->locktype = SHARED_LOCK; - pLock->cnt++; - pFile->pOpen->nLock++; + if( eFileLock==SHARED_LOCK && + (pInode->eFileLock==SHARED_LOCK || pInode->eFileLock==RESERVED_LOCK) ){ + assert( eFileLock==SHARED_LOCK ); + assert( pFile->eFileLock==0 ); + assert( pInode->nShared>0 ); + pFile->eFileLock = SHARED_LOCK; + pInode->nShared++; + pInode->nLock++; goto afp_end_lock; } /* A PENDING lock is needed before acquiring a SHARED lock and before ** acquiring an EXCLUSIVE lock. For the SHARED lock, the PENDING will ** be released. */ - if( locktype==SHARED_LOCK - || (locktype==EXCLUSIVE_LOCK && pFile->locktype<PENDING_LOCK) + if( eFileLock==SHARED_LOCK + || (eFileLock==EXCLUSIVE_LOCK && pFile->eFileLock<PENDING_LOCK) ){ int failed; failed = afpSetLock(context->dbPath, pFile, PENDING_BYTE, 1, 1); if (failed) { rc = failed; @@ -24771,76 +30104,76 @@ } /* If control gets to this point, then actually go ahead and make ** operating system calls for the specified lock. */ - if( locktype==SHARED_LOCK ){ - int lrc1, lrc2, lrc1Errno; + if( eFileLock==SHARED_LOCK ){ + int lrc1, lrc2, lrc1Errno = 0; long lk, mask; - assert( pLock->cnt==0 ); - assert( pLock->locktype==0 ); + assert( pInode->nShared==0 ); + assert( pInode->eFileLock==0 ); mask = (sizeof(long)==8) ? LARGEST_INT64 : 0x7fffffff; /* Now get the read-lock SHARED_LOCK */ /* note that the quality of the randomness doesn't matter that much */ lk = random(); - pLock->sharedByte = (lk & mask)%(SHARED_SIZE - 1); + pInode->sharedByte = (lk & mask)%(SHARED_SIZE - 1); lrc1 = afpSetLock(context->dbPath, pFile, - SHARED_FIRST+pLock->sharedByte, 1, 1); + SHARED_FIRST+pInode->sharedByte, 1, 1); if( IS_LOCK_ERROR(lrc1) ){ lrc1Errno = pFile->lastErrno; } /* Drop the temporary PENDING lock */ lrc2 = afpSetLock(context->dbPath, pFile, PENDING_BYTE, 1, 0); if( IS_LOCK_ERROR(lrc1) ) { - pFile->lastErrno = lrc1Errno; + storeLastErrno(pFile, lrc1Errno); rc = lrc1; goto afp_end_lock; } else if( IS_LOCK_ERROR(lrc2) ){ rc = lrc2; goto afp_end_lock; } else if( lrc1 != SQLITE_OK ) { rc = lrc1; } else { - pFile->locktype = SHARED_LOCK; - pFile->pOpen->nLock++; - pLock->cnt = 1; + pFile->eFileLock = SHARED_LOCK; + pInode->nLock++; + pInode->nShared = 1; } - }else if( locktype==EXCLUSIVE_LOCK && pLock->cnt>1 ){ + }else if( eFileLock==EXCLUSIVE_LOCK && pInode->nShared>1 ){ /* We are trying for an exclusive lock but another thread in this ** same process is still holding a shared lock. */ rc = SQLITE_BUSY; }else{ /* The request was for a RESERVED or EXCLUSIVE lock. It is ** assumed that there is a SHARED or greater lock on the file ** already. */ int failed = 0; - assert( 0!=pFile->locktype ); - if (locktype >= RESERVED_LOCK && pFile->locktype < RESERVED_LOCK) { + assert( 0!=pFile->eFileLock ); + if (eFileLock >= RESERVED_LOCK && pFile->eFileLock < RESERVED_LOCK) { /* Acquire a RESERVED lock */ failed = afpSetLock(context->dbPath, pFile, RESERVED_BYTE, 1,1); if( !failed ){ context->reserved = 1; } } - if (!failed && locktype == EXCLUSIVE_LOCK) { + if (!failed && eFileLock == EXCLUSIVE_LOCK) { /* Acquire an EXCLUSIVE lock */ /* Remove the shared lock before trying the range. 
we'll need to ** reestablish the shared lock if we can't get the afpUnlock */ if( !(failed = afpSetLock(context->dbPath, pFile, SHARED_FIRST + - pLock->sharedByte, 1, 0)) ){ + pInode->sharedByte, 1, 0)) ){ int failed2 = SQLITE_OK; /* now attemmpt to get the exclusive lock range */ failed = afpSetLock(context->dbPath, pFile, SHARED_FIRST, SHARED_SIZE, 1); if( failed && (failed2 = afpSetLock(context->dbPath, pFile, - SHARED_FIRST + pLock->sharedByte, 1, 1)) ){ + SHARED_FIRST + pInode->sharedByte, 1, 1)) ){ /* Can't reestablish the shared lock. Sqlite can't deal, this is ** a critical I/O error */ rc = ((failed & SQLITE_IOERR) == SQLITE_IOERR) ? failed2 : SQLITE_IOERR_LOCK; @@ -24854,62 +30187,60 @@ rc = failed; } } if( rc==SQLITE_OK ){ - pFile->locktype = locktype; - pLock->locktype = locktype; - }else if( locktype==EXCLUSIVE_LOCK ){ - pFile->locktype = PENDING_LOCK; - pLock->locktype = PENDING_LOCK; + pFile->eFileLock = eFileLock; + pInode->eFileLock = eFileLock; + }else if( eFileLock==EXCLUSIVE_LOCK ){ + pFile->eFileLock = PENDING_LOCK; + pInode->eFileLock = PENDING_LOCK; } afp_end_lock: unixLeaveMutex(); - OSTRACE4("LOCK %d %s %s (afp)\n", pFile->h, locktypeName(locktype), - rc==SQLITE_OK ? "ok" : "failed"); + OSTRACE(("LOCK %d %s %s (afp)\n", pFile->h, azFileLock(eFileLock), + rc==SQLITE_OK ? "ok" : "failed")); return rc; } /* -** Lower the locking level on file descriptor pFile to locktype. locktype +** Lower the locking level on file descriptor pFile to eFileLock. eFileLock ** must be either NO_LOCK or SHARED_LOCK. ** ** If the locking level of the file descriptor is already at or below ** the requested locking level, this routine is a no-op. */ -static int afpUnlock(sqlite3_file *id, int locktype) { +static int afpUnlock(sqlite3_file *id, int eFileLock) { int rc = SQLITE_OK; unixFile *pFile = (unixFile*)id; - struct unixLockInfo *pLock; + unixInodeInfo *pInode; afpLockingContext *context = (afpLockingContext *) pFile->lockingContext; int skipShared = 0; #ifdef SQLITE_TEST int h = pFile->h; #endif assert( pFile ); - OSTRACE7("UNLOCK %d %d was %d(%d,%d) pid=%d (afp)\n", pFile->h, locktype, - pFile->locktype, pFile->pLock->locktype, pFile->pLock->cnt, getpid()); - - assert( locktype<=SHARED_LOCK ); - if( pFile->locktype<=locktype ){ - return SQLITE_OK; - } - if( CHECK_THREADID(pFile) ){ - return SQLITE_MISUSE_BKPT; + OSTRACE(("UNLOCK %d %d was %d(%d,%d) pid=%d (afp)\n", pFile->h, eFileLock, + pFile->eFileLock, pFile->pInode->eFileLock, pFile->pInode->nShared, + osGetpid(0))); + + assert( eFileLock<=SHARED_LOCK ); + if( pFile->eFileLock<=eFileLock ){ + return SQLITE_OK; } unixEnterMutex(); - pLock = pFile->pLock; - assert( pLock->cnt!=0 ); - if( pFile->locktype>SHARED_LOCK ){ - assert( pLock->locktype==pFile->locktype ); + pInode = pFile->pInode; + assert( pInode->nShared!=0 ); + if( pFile->eFileLock>SHARED_LOCK ){ + assert( pInode->eFileLock==pFile->eFileLock ); SimulateIOErrorBenign(1); SimulateIOError( h=(-1) ) SimulateIOErrorBenign(0); -#ifndef NDEBUG +#ifdef SQLITE_DEBUG /* When reducing a lock such that other processes can start ** reading the database file again, make sure that the ** transaction counter was updated if any part of the database ** file changed. 
If the transaction counter is not updated, ** other connections to the same file might not realize that @@ -24920,92 +30251,88 @@ || pFile->dbUpdate==0 || pFile->transCntrChng==1 ); pFile->inNormalWrite = 0; #endif - if( pFile->locktype==EXCLUSIVE_LOCK ){ + if( pFile->eFileLock==EXCLUSIVE_LOCK ){ rc = afpSetLock(context->dbPath, pFile, SHARED_FIRST, SHARED_SIZE, 0); - if( rc==SQLITE_OK && (locktype==SHARED_LOCK || pLock->cnt>1) ){ + if( rc==SQLITE_OK && (eFileLock==SHARED_LOCK || pInode->nShared>1) ){ /* only re-establish the shared lock if necessary */ - int sharedLockByte = SHARED_FIRST+pLock->sharedByte; + int sharedLockByte = SHARED_FIRST+pInode->sharedByte; rc = afpSetLock(context->dbPath, pFile, sharedLockByte, 1, 1); } else { skipShared = 1; } } - if( rc==SQLITE_OK && pFile->locktype>=PENDING_LOCK ){ + if( rc==SQLITE_OK && pFile->eFileLock>=PENDING_LOCK ){ rc = afpSetLock(context->dbPath, pFile, PENDING_BYTE, 1, 0); } - if( rc==SQLITE_OK && pFile->locktype>=RESERVED_LOCK && context->reserved ){ + if( rc==SQLITE_OK && pFile->eFileLock>=RESERVED_LOCK && context->reserved ){ rc = afpSetLock(context->dbPath, pFile, RESERVED_BYTE, 1, 0); if( !rc ){ context->reserved = 0; } } - if( rc==SQLITE_OK && (locktype==SHARED_LOCK || pLock->cnt>1)){ - pLock->locktype = SHARED_LOCK; + if( rc==SQLITE_OK && (eFileLock==SHARED_LOCK || pInode->nShared>1)){ + pInode->eFileLock = SHARED_LOCK; } } - if( rc==SQLITE_OK && locktype==NO_LOCK ){ + if( rc==SQLITE_OK && eFileLock==NO_LOCK ){ /* Decrement the shared lock counter. Release the lock using an ** OS call only when all threads in this same process have released ** the lock. */ - unsigned long long sharedLockByte = SHARED_FIRST+pLock->sharedByte; - pLock->cnt--; - if( pLock->cnt==0 ){ + unsigned long long sharedLockByte = SHARED_FIRST+pInode->sharedByte; + pInode->nShared--; + if( pInode->nShared==0 ){ SimulateIOErrorBenign(1); SimulateIOError( h=(-1) ) SimulateIOErrorBenign(0); if( !skipShared ){ rc = afpSetLock(context->dbPath, pFile, sharedLockByte, 1, 0); } if( !rc ){ - pLock->locktype = NO_LOCK; - pFile->locktype = NO_LOCK; + pInode->eFileLock = NO_LOCK; + pFile->eFileLock = NO_LOCK; } } if( rc==SQLITE_OK ){ - struct unixOpenCnt *pOpen = pFile->pOpen; - - pOpen->nLock--; - assert( pOpen->nLock>=0 ); - if( pOpen->nLock==0 ){ - rc = closePendingFds(pFile); + pInode->nLock--; + assert( pInode->nLock>=0 ); + if( pInode->nLock==0 ){ + closePendingFds(pFile); } } } unixLeaveMutex(); - if( rc==SQLITE_OK ) pFile->locktype = locktype; + if( rc==SQLITE_OK ) pFile->eFileLock = eFileLock; return rc; } /* ** Close a file & cleanup AFP specific locking context */ static int afpClose(sqlite3_file *id) { int rc = SQLITE_OK; - if( id ){ - unixFile *pFile = (unixFile*)id; - afpUnlock(id, NO_LOCK); - unixEnterMutex(); - if( pFile->pOpen && pFile->pOpen->nLock ){ - /* If there are outstanding locks, do not actually close the file just - ** yet because that would clear those locks. Instead, add the file - ** descriptor to pOpen->aPending. It will be automatically closed when - ** the last lock is cleared. - */ - setPendingFd(pFile); - } - releaseLockInfo(pFile->pLock); - releaseOpenCnt(pFile->pOpen); - sqlite3_free(pFile->lockingContext); - rc = closeUnixFile(id); - unixLeaveMutex(); - } + unixFile *pFile = (unixFile*)id; + assert( id!=0 ); + afpUnlock(id, NO_LOCK); + unixEnterMutex(); + if( pFile->pInode && pFile->pInode->nLock ){ + /* If there are outstanding locks, do not actually close the file just + ** yet because that would clear those locks. 
Instead, add the file + ** descriptor to pInode->aPending. It will be automatically closed when + ** the last lock is cleared. + */ + setPendingFd(pFile); + } + releaseInodeInfo(pFile); + sqlite3_free(pFile->lockingContext); + rc = closeUnixFile(id); + unixLeaveMutex(); return rc; } #endif /* defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE */ /* @@ -25020,18 +30347,18 @@ /****************************************************************************** *************************** Begin NFS Locking ********************************/ #if defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE /* - ** Lower the locking level on file descriptor pFile to locktype. locktype + ** Lower the locking level on file descriptor pFile to eFileLock. eFileLock ** must be either NO_LOCK or SHARED_LOCK. ** ** If the locking level of the file descriptor is already at or below ** the requested locking level, this routine is a no-op. */ -static int nfsUnlock(sqlite3_file *id, int locktype){ - return _posixUnlock(id, locktype, 1); +static int nfsUnlock(sqlite3_file *id, int eFileLock){ + return posixUnlock(id, eFileLock, 1); } #endif /* defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE */ /* ** The code above is the NFS lock implementation. The code is specific @@ -25056,47 +30383,58 @@ ** bytes into pBuf. Return the number of bytes actually read. ** ** NB: If you define USE_PREAD or USE_PREAD64, then it might also ** be necessary to define _XOPEN_SOURCE to be 500. This varies from ** one system to another. Since SQLite does not define USE_PREAD -** any any form by default, we will not attempt to define _XOPEN_SOURCE. +** in any form by default, we will not attempt to define _XOPEN_SOURCE. ** See tickets #2741 and #2681. ** ** To avoid stomping the errno value on a failed read the lastErrno value ** is set before returning. */ static int seekAndRead(unixFile *id, sqlite3_int64 offset, void *pBuf, int cnt){ int got; + int prior = 0; #if (!defined(USE_PREAD) && !defined(USE_PREAD64)) i64 newOffset; #endif TIMER_START; + assert( cnt==(cnt&0x1ffff) ); + assert( id->h>2 ); + do{ #if defined(USE_PREAD) - got = pread(id->h, pBuf, cnt, offset); - SimulateIOError( got = -1 ); + got = osPread(id->h, pBuf, cnt, offset); + SimulateIOError( got = -1 ); #elif defined(USE_PREAD64) - got = pread64(id->h, pBuf, cnt, offset); - SimulateIOError( got = -1 ); + got = osPread64(id->h, pBuf, cnt, offset); + SimulateIOError( got = -1 ); #else - newOffset = lseek(id->h, offset, SEEK_SET); - SimulateIOError( newOffset-- ); - if( newOffset!=offset ){ - if( newOffset == -1 ){ - ((unixFile*)id)->lastErrno = errno; - }else{ - ((unixFile*)id)->lastErrno = 0; - } - return -1; - } - got = read(id->h, pBuf, cnt); -#endif + newOffset = lseek(id->h, offset, SEEK_SET); + SimulateIOError( newOffset = -1 ); + if( newOffset<0 ){ + storeLastErrno((unixFile*)id, errno); + return -1; + } + got = osRead(id->h, pBuf, cnt); +#endif + if( got==cnt ) break; + if( got<0 ){ + if( errno==EINTR ){ got = 1; continue; } + prior = 0; + storeLastErrno((unixFile*)id, errno); + break; + }else if( got>0 ){ + cnt -= got; + offset += got; + prior += got; + pBuf = (void*)(got + (char*)pBuf); + } + }while( got>0 ); TIMER_END; - if( got<0 ){ - ((unixFile*)id)->lastErrno = errno; - } - OSTRACE5("READ %-3d %5d %7lld %llu\n", id->h, got, offset, TIMER_ELAPSED); - return got; + OSTRACE(("READ %-3d %5d %7lld %llu\n", + id->h, got+prior, offset-prior, TIMER_ELAPSED)); + return got+prior; } /* ** Read data from a file into a buffer. 
Return SQLITE_OK if all ** bytes were read successfully and SQLITE_IOERR if anything goes @@ -25109,68 +30447,108 @@ sqlite3_int64 offset ){ unixFile *pFile = (unixFile *)id; int got; assert( id ); + assert( offset>=0 ); + assert( amt>0 ); /* If this is a database file (not a journal, master-journal or temp ** file), the bytes in the locking range should never be read or written. */ +#if 0 assert( pFile->pUnused==0 || offset>=PENDING_BYTE+512 || offset+amt<=PENDING_BYTE ); +#endif + +#if SQLITE_MAX_MMAP_SIZE>0 + /* Deal with as much of this read request as possible by transfering + ** data from the memory mapping using memcpy(). */ + if( offset<pFile->mmapSize ){ + if( offset+amt <= pFile->mmapSize ){ + memcpy(pBuf, &((u8 *)(pFile->pMapRegion))[offset], amt); + return SQLITE_OK; + }else{ + int nCopy = pFile->mmapSize - offset; + memcpy(pBuf, &((u8 *)(pFile->pMapRegion))[offset], nCopy); + pBuf = &((u8 *)pBuf)[nCopy]; + amt -= nCopy; + offset += nCopy; + } + } +#endif got = seekAndRead(pFile, offset, pBuf, amt); if( got==amt ){ return SQLITE_OK; }else if( got<0 ){ /* lastErrno set by seekAndRead */ return SQLITE_IOERR_READ; }else{ - pFile->lastErrno = 0; /* not a system error */ + storeLastErrno(pFile, 0); /* not a system error */ /* Unread parts of the buffer must be zero-filled */ memset(&((char*)pBuf)[got], 0, amt-got); return SQLITE_IOERR_SHORT_READ; } } + +/* +** Attempt to seek the file-descriptor passed as the first argument to +** absolute offset iOff, then attempt to write nBuf bytes of data from +** pBuf to it. If an error occurs, return -1 and set *piErrno. Otherwise, +** return the actual number of bytes written (which may be less than +** nBuf). +*/ +static int seekAndWriteFd( + int fd, /* File descriptor to write to */ + i64 iOff, /* File offset to begin writing at */ + const void *pBuf, /* Copy data from this buffer to the file */ + int nBuf, /* Size of buffer pBuf in bytes */ + int *piErrno /* OUT: Error number if error occurs */ +){ + int rc = 0; /* Value returned by system call */ + + assert( nBuf==(nBuf&0x1ffff) ); + assert( fd>2 ); + assert( piErrno!=0 ); + nBuf &= 0x1ffff; + TIMER_START; + +#if defined(USE_PREAD) + do{ rc = (int)osPwrite(fd, pBuf, nBuf, iOff); }while( rc<0 && errno==EINTR ); +#elif defined(USE_PREAD64) + do{ rc = (int)osPwrite64(fd, pBuf, nBuf, iOff);}while( rc<0 && errno==EINTR); +#else + do{ + i64 iSeek = lseek(fd, iOff, SEEK_SET); + SimulateIOError( iSeek = -1 ); + if( iSeek<0 ){ + rc = -1; + break; + } + rc = osWrite(fd, pBuf, nBuf); + }while( rc<0 && errno==EINTR ); +#endif + + TIMER_END; + OSTRACE(("WRITE %-3d %5d %7lld %llu\n", fd, rc, iOff, TIMER_ELAPSED)); + + if( rc<0 ) *piErrno = errno; + return rc; +} + /* ** Seek to the offset in id->offset then read cnt bytes into pBuf. ** Return the number of bytes actually read. Update the offset. ** ** To avoid stomping the errno value on a failed write the lastErrno value ** is set before returning. 
*/ static int seekAndWrite(unixFile *id, i64 offset, const void *pBuf, int cnt){ - int got; -#if (!defined(USE_PREAD) && !defined(USE_PREAD64)) - i64 newOffset; -#endif - TIMER_START; -#if defined(USE_PREAD) - got = pwrite(id->h, pBuf, cnt, offset); -#elif defined(USE_PREAD64) - got = pwrite64(id->h, pBuf, cnt, offset); -#else - newOffset = lseek(id->h, offset, SEEK_SET); - if( newOffset!=offset ){ - if( newOffset == -1 ){ - ((unixFile*)id)->lastErrno = errno; - }else{ - ((unixFile*)id)->lastErrno = 0; - } - return -1; - } - got = write(id->h, pBuf, cnt); -#endif - TIMER_END; - if( got<0 ){ - ((unixFile*)id)->lastErrno = errno; - } - - OSTRACE5("WRITE %-3d %5d %7lld %llu\n", id->h, got, offset, TIMER_ELAPSED); - return got; + return seekAndWriteFd(id->h, offset, pBuf, cnt, &id->lastErrno); } /* ** Write data from a buffer into a file. Return SQLITE_OK on success @@ -25187,16 +30565,18 @@ assert( id ); assert( amt>0 ); /* If this is a database file (not a journal, master-journal or temp ** file), the bytes in the locking range should never be read or written. */ +#if 0 assert( pFile->pUnused==0 || offset>=PENDING_BYTE+512 || offset+amt<=PENDING_BYTE ); +#endif -#ifndef NDEBUG +#ifdef SQLITE_DEBUG /* If we are doing a normal write to a database file (as opposed to ** doing a hot-journal rollback or a write to some file other than a ** normal database file) then record the fact that the database ** has changed. If the transaction counter is modified, record that ** fact too. @@ -25214,26 +30594,45 @@ } } } #endif - while( amt>0 && (wrote = seekAndWrite(pFile, offset, pBuf, amt))>0 ){ +#if defined(SQLITE_MMAP_READWRITE) && SQLITE_MAX_MMAP_SIZE>0 + /* Deal with as much of this write request as possible by transfering + ** data from the memory mapping using memcpy(). */ + if( offset<pFile->mmapSize ){ + if( offset+amt <= pFile->mmapSize ){ + memcpy(&((u8 *)(pFile->pMapRegion))[offset], pBuf, amt); + return SQLITE_OK; + }else{ + int nCopy = pFile->mmapSize - offset; + memcpy(&((u8 *)(pFile->pMapRegion))[offset], pBuf, nCopy); + pBuf = &((u8 *)pBuf)[nCopy]; + amt -= nCopy; + offset += nCopy; + } + } +#endif + + while( (wrote = seekAndWrite(pFile, offset, pBuf, amt))<amt && wrote>0 ){ amt -= wrote; offset += wrote; pBuf = &((char*)pBuf)[wrote]; } SimulateIOError(( wrote=(-1), amt=1 )); SimulateDiskfullError(( wrote=0, amt=1 )); - if( amt>0 ){ - if( wrote<0 ){ + + if( amt>wrote ){ + if( wrote<0 && pFile->lastErrno!=ENOSPC ){ /* lastErrno set by seekAndWrite */ return SQLITE_IOERR_WRITE; }else{ - pFile->lastErrno = 0; /* not a system error */ + storeLastErrno(pFile, 0); /* not a system error */ return SQLITE_FULL; } } + return SQLITE_OK; } #ifdef SQLITE_TEST /* @@ -25244,15 +30643,15 @@ SQLITE_API int sqlite3_fullsync_count = 0; #endif /* ** We do not trust systems to provide a working fdatasync(). Some do. -** Others do no. To be safe, we will stick with the (slower) fsync(). -** If you know that your system does support fdatasync() correctly, -** then simply compile with -Dfdatasync=fdatasync +** Others do no. To be safe, we will stick with the (slightly slower) +** fsync(). 
If you know that your system does support fdatasync() correctly, +** then simply compile with -Dfdatasync=fdatasync or -DHAVE_FDATASYNC */ -#if !defined(fdatasync) && !defined(__linux__) +#if !defined(fdatasync) && !HAVE_FDATASYNC # define fdatasync fsync #endif /* ** Define HAVE_FULLFSYNC to 0 or 1 depending on whether or not @@ -25316,17 +30715,22 @@ if( fullSync ) sqlite3_fullsync_count++; sqlite3_sync_count++; #endif /* If we compiled with the SQLITE_NO_SYNC flag, then syncing is a - ** no-op + ** no-op. But go ahead and call fstat() to validate the file + ** descriptor as we need a method to provoke a failure during + ** coverate testing. */ #ifdef SQLITE_NO_SYNC - rc = SQLITE_OK; + { + struct stat buf; + rc = osFstat(fd, &buf); + } #elif HAVE_FULLFSYNC if( fullSync ){ - rc = fcntl(fd, F_FULLFSYNC, 0); + rc = osFcntl(fd, F_FULLFSYNC, 0); }else{ rc = 1; } /* If the FULLFSYNC failed, fall back to attempting an fsync(). ** It shouldn't be possible for fullfsync to fail on the local @@ -25355,10 +30759,55 @@ if( OS_VXWORKS && rc!= -1 ){ rc = 0; } return rc; } + +/* +** Open a file descriptor to the directory containing file zFilename. +** If successful, *pFd is set to the opened file descriptor and +** SQLITE_OK is returned. If an error occurs, either SQLITE_NOMEM +** or SQLITE_CANTOPEN is returned and *pFd is set to an undefined +** value. +** +** The directory file descriptor is used for only one thing - to +** fsync() a directory to make sure file creation and deletion events +** are flushed to disk. Such fsyncs are not needed on newer +** journaling filesystems, but are required on older filesystems. +** +** This routine can be overridden using the xSetSysCall interface. +** The ability to override this routine was added in support of the +** chromium sandbox. Opening a directory is a security risk (we are +** told) so making it overrideable allows the chromium sandbox to +** replace this routine with a harmless no-op. To make this routine +** a no-op, replace it with a stub that returns SQLITE_OK but leaves +** *pFd set to a negative number. +** +** If SQLITE_OK is returned, the caller is responsible for closing +** the file descriptor *pFd using close(). +*/ +static int openDirectory(const char *zFilename, int *pFd){ + int ii; + int fd = -1; + char zDirname[MAX_PATHNAME+1]; + + sqlite3_snprintf(MAX_PATHNAME, zDirname, "%s", zFilename); + for(ii=(int)strlen(zDirname); ii>0 && zDirname[ii]!='/'; ii--); + if( ii>0 ){ + zDirname[ii] = '\0'; + }else{ + if( zDirname[0]!='/' ) zDirname[0] = '.'; + zDirname[1] = 0; + } + fd = robust_open(zDirname, O_RDONLY|O_BINARY, 0); + if( fd>=0 ){ + OSTRACE(("OPENDIR %-3d %s\n", fd, zDirname)); + } + *pFd = fd; + if( fd>=0 ) return SQLITE_OK; + return unixLogError(SQLITE_CANTOPEN_BKPT, "openDirectory", zDirname); +} /* ** Make sure all writes to a particular file are committed to disk. ** ** If dataOnly==0 then both the file itself and its metadata (file @@ -25389,70 +30838,82 @@ ** line is to test that doing so does not cause any problems. 
*/ SimulateDiskfullError( return SQLITE_FULL ); assert( pFile ); - OSTRACE2("SYNC %-3d\n", pFile->h); + OSTRACE(("SYNC %-3d\n", pFile->h)); rc = full_fsync(pFile->h, isFullsync, isDataOnly); SimulateIOError( rc=1 ); if( rc ){ - pFile->lastErrno = errno; - return SQLITE_IOERR_FSYNC; - } - if( pFile->dirfd>=0 ){ - int err; - OSTRACE4("DIRSYNC %-3d (have_fullfsync=%d fullsync=%d)\n", pFile->dirfd, - HAVE_FULLFSYNC, isFullsync); -#ifndef SQLITE_DISABLE_DIRSYNC - /* The directory sync is only attempted if full_fsync is - ** turned off or unavailable. If a full_fsync occurred above, - ** then the directory sync is superfluous. - */ - if( (!HAVE_FULLFSYNC || !isFullsync) && full_fsync(pFile->dirfd,0,0) ){ - /* - ** We have received multiple reports of fsync() returning - ** errors when applied to directories on certain file systems. - ** A failed directory sync is not a big deal. So it seems - ** better to ignore the error. Ticket #1657 - */ - /* pFile->lastErrno = errno; */ - /* return SQLITE_IOERR; */ - } -#endif - err = close(pFile->dirfd); /* Only need to sync once, so close the */ - if( err==0 ){ /* directory when we are done */ - pFile->dirfd = -1; - }else{ - pFile->lastErrno = errno; - rc = SQLITE_IOERR_DIR_CLOSE; - } + storeLastErrno(pFile, errno); + return unixLogError(SQLITE_IOERR_FSYNC, "full_fsync", pFile->zPath); + } + + /* Also fsync the directory containing the file if the DIRSYNC flag + ** is set. This is a one-time occurrence. Many systems (examples: AIX) + ** are unable to fsync a directory, so ignore errors on the fsync. + */ + if( pFile->ctrlFlags & UNIXFILE_DIRSYNC ){ + int dirfd; + OSTRACE(("DIRSYNC %s (have_fullfsync=%d fullsync=%d)\n", pFile->zPath, + HAVE_FULLFSYNC, isFullsync)); + rc = osOpenDirectory(pFile->zPath, &dirfd); + if( rc==SQLITE_OK ){ + full_fsync(dirfd, 0, 0); + robust_close(pFile, dirfd, __LINE__); + }else{ + assert( rc==SQLITE_CANTOPEN ); + rc = SQLITE_OK; + } + pFile->ctrlFlags &= ~UNIXFILE_DIRSYNC; } return rc; } /* ** Truncate an open file to a specified size */ static int unixTruncate(sqlite3_file *id, i64 nByte){ + unixFile *pFile = (unixFile *)id; int rc; - assert( id ); + assert( pFile ); SimulateIOError( return SQLITE_IOERR_TRUNCATE ); - rc = ftruncate(((unixFile*)id)->h, (off_t)nByte); + + /* If the user has configured a chunk-size for this file, truncate the + ** file so that it consists of an integer number of chunks (i.e. the + ** actual file size after the operation may be larger than the requested + ** size). + */ + if( pFile->szChunk>0 ){ + nByte = ((nByte + pFile->szChunk - 1)/pFile->szChunk) * pFile->szChunk; + } + + rc = robust_ftruncate(pFile->h, nByte); if( rc ){ - ((unixFile*)id)->lastErrno = errno; - return SQLITE_IOERR_TRUNCATE; + storeLastErrno(pFile, errno); + return unixLogError(SQLITE_IOERR_TRUNCATE, "ftruncate", pFile->zPath); }else{ -#ifndef NDEBUG +#ifdef SQLITE_DEBUG /* If we are doing a normal write to a database file (as opposed to ** doing a hot-journal rollback or a write to some file other than a ** normal database file) and we truncate the file to zero length, ** that effectively updates the change counter. This might happen ** when restoring a database using the backup API from a zero-length ** source. 
*/ - if( ((unixFile*)id)->inNormalWrite && nByte==0 ){ - ((unixFile*)id)->transCntrChng = 1; + if( pFile->inNormalWrite && nByte==0 ){ + pFile->transCntrChng = 1; + } +#endif + +#if SQLITE_MAX_MMAP_SIZE>0 + /* If the file was just truncated to a size smaller than the currently + ** mapped region, reduce the effective mapping size as well. SQLite will + ** use read() and write() to access data beyond this point from now on. + */ + if( nByte<pFile->mmapSize ){ + pFile->mmapSize = nByte; } #endif return SQLITE_OK; } @@ -25463,19 +30924,19 @@ */ static int unixFileSize(sqlite3_file *id, i64 *pSize){ int rc; struct stat buf; assert( id ); - rc = fstat(((unixFile*)id)->h, &buf); + rc = osFstat(((unixFile*)id)->h, &buf); SimulateIOError( rc=1 ); if( rc!=0 ){ - ((unixFile*)id)->lastErrno = errno; + storeLastErrno((unixFile*)id, errno); return SQLITE_IOERR_FSTAT; } *pSize = buf.st_size; - /* When opening a zero-size database, the findLockInfo() procedure + /* When opening a zero-size database, the findInodeInfo() procedure ** writes a single byte into that file in order to work around a bug ** in the OS-X msdos filesystem. In order to avoid problems with upper ** layers, we need to report this file size as zero even though it is ** really 1. Ticket #3260. */ @@ -25491,25 +30952,166 @@ ** proxying locking division. */ static int proxyFileControl(sqlite3_file*,int,void*); #endif +/* +** This function is called to handle the SQLITE_FCNTL_SIZE_HINT +** file-control operation. Enlarge the database to nBytes in size +** (rounded up to the next chunk-size). If the database is already +** nBytes or larger, this routine is a no-op. +*/ +static int fcntlSizeHint(unixFile *pFile, i64 nByte){ + if( pFile->szChunk>0 ){ + i64 nSize; /* Required file size */ + struct stat buf; /* Used to hold return values of fstat() */ + + if( osFstat(pFile->h, &buf) ){ + return SQLITE_IOERR_FSTAT; + } + + nSize = ((nByte+pFile->szChunk-1) / pFile->szChunk) * pFile->szChunk; + if( nSize>(i64)buf.st_size ){ + +#if defined(HAVE_POSIX_FALLOCATE) && HAVE_POSIX_FALLOCATE + /* The code below is handling the return value of osFallocate() + ** correctly. posix_fallocate() is defined to "returns zero on success, + ** or an error number on failure". See the manpage for details. */ + int err; + do{ + err = osFallocate(pFile->h, buf.st_size, nSize-buf.st_size); + }while( err==EINTR ); + if( err ) return SQLITE_IOERR_WRITE; +#else + /* If the OS does not have posix_fallocate(), fake it. Write a + ** single byte to the last byte in each block that falls entirely + ** within the extended region. Then, if required, a single byte + ** at offset (nSize-1), to set the size of the file correctly. + ** This is a similar technique to that used by glibc on systems + ** that do not have a real fallocate() call. 
+ */ + int nBlk = buf.st_blksize; /* File-system block size */ + int nWrite = 0; /* Number of bytes written by seekAndWrite */ + i64 iWrite; /* Next offset to write to */ + + iWrite = (buf.st_size/nBlk)*nBlk + nBlk - 1; + assert( iWrite>=buf.st_size ); + assert( ((iWrite+1)%nBlk)==0 ); + for(/*no-op*/; iWrite<nSize+nBlk-1; iWrite+=nBlk ){ + if( iWrite>=nSize ) iWrite = nSize - 1; + nWrite = seekAndWrite(pFile, iWrite, "", 1); + if( nWrite!=1 ) return SQLITE_IOERR_WRITE; + } +#endif + } + } + +#if SQLITE_MAX_MMAP_SIZE>0 + if( pFile->mmapSizeMax>0 && nByte>pFile->mmapSize ){ + int rc; + if( pFile->szChunk<=0 ){ + if( robust_ftruncate(pFile->h, nByte) ){ + storeLastErrno(pFile, errno); + return unixLogError(SQLITE_IOERR_TRUNCATE, "ftruncate", pFile->zPath); + } + } + + rc = unixMapfile(pFile, nByte); + return rc; + } +#endif + + return SQLITE_OK; +} + +/* +** If *pArg is initially negative then this is a query. Set *pArg to +** 1 or 0 depending on whether or not bit mask of pFile->ctrlFlags is set. +** +** If *pArg is 0 or 1, then clear or set the mask bit of pFile->ctrlFlags. +*/ +static void unixModeBit(unixFile *pFile, unsigned char mask, int *pArg){ + if( *pArg<0 ){ + *pArg = (pFile->ctrlFlags & mask)!=0; + }else if( (*pArg)==0 ){ + pFile->ctrlFlags &= ~mask; + }else{ + pFile->ctrlFlags |= mask; + } +} + +/* Forward declaration */ +static int unixGetTempname(int nBuf, char *zBuf); /* ** Information and control of an open file handle. */ static int unixFileControl(sqlite3_file *id, int op, void *pArg){ + unixFile *pFile = (unixFile*)id; switch( op ){ case SQLITE_FCNTL_LOCKSTATE: { - *(int*)pArg = ((unixFile*)id)->locktype; + *(int*)pArg = pFile->eFileLock; + return SQLITE_OK; + } + case SQLITE_FCNTL_LAST_ERRNO: { + *(int*)pArg = pFile->lastErrno; + return SQLITE_OK; + } + case SQLITE_FCNTL_CHUNK_SIZE: { + pFile->szChunk = *(int *)pArg; + return SQLITE_OK; + } + case SQLITE_FCNTL_SIZE_HINT: { + int rc; + SimulateIOErrorBenign(1); + rc = fcntlSizeHint(pFile, *(i64 *)pArg); + SimulateIOErrorBenign(0); + return rc; + } + case SQLITE_FCNTL_PERSIST_WAL: { + unixModeBit(pFile, UNIXFILE_PERSIST_WAL, (int*)pArg); + return SQLITE_OK; + } + case SQLITE_FCNTL_POWERSAFE_OVERWRITE: { + unixModeBit(pFile, UNIXFILE_PSOW, (int*)pArg); + return SQLITE_OK; + } + case SQLITE_FCNTL_VFSNAME: { + *(char**)pArg = sqlite3_mprintf("%s", pFile->pVfs->zName); + return SQLITE_OK; + } + case SQLITE_FCNTL_TEMPFILENAME: { + char *zTFile = sqlite3_malloc64( pFile->pVfs->mxPathname ); + if( zTFile ){ + unixGetTempname(pFile->pVfs->mxPathname, zTFile); + *(char**)pArg = zTFile; + } return SQLITE_OK; } - case SQLITE_LAST_ERRNO: { - *(int*)pArg = ((unixFile*)id)->lastErrno; + case SQLITE_FCNTL_HAS_MOVED: { + *(int*)pArg = fileHasMoved(pFile); return SQLITE_OK; } -#ifndef NDEBUG +#if SQLITE_MAX_MMAP_SIZE>0 + case SQLITE_FCNTL_MMAP_SIZE: { + i64 newLimit = *(i64*)pArg; + int rc = SQLITE_OK; + if( newLimit>sqlite3GlobalConfig.mxMmap ){ + newLimit = sqlite3GlobalConfig.mxMmap; + } + *(i64*)pArg = pFile->mmapSizeMax; + if( newLimit>=0 && newLimit!=pFile->mmapSizeMax && pFile->nFetchOut==0 ){ + pFile->mmapSizeMax = newLimit; + if( pFile->mmapSize>0 ){ + unixUnmapfile(pFile); + rc = unixMapfile(pFile, -1); + } + } + return rc; + } +#endif +#ifdef SQLITE_DEBUG /* The pager calls this method to signal that it has done ** a rollback and that the database is therefore unchanged and ** it hence it is OK for the transaction change counter to be ** unchanged. 
*/ @@ -25517,17 +31119,17 @@ ((unixFile*)id)->dbUpdate = 0; return SQLITE_OK; } #endif #if SQLITE_ENABLE_LOCKING_STYLE && defined(__APPLE__) - case SQLITE_SET_LOCKPROXYFILE: - case SQLITE_GET_LOCKPROXYFILE: { + case SQLITE_FCNTL_SET_LOCKPROXYFILE: + case SQLITE_FCNTL_GET_LOCKPROXYFILE: { return proxyFileControl(id,op,pArg); } #endif /* SQLITE_ENABLE_LOCKING_STYLE && defined(__APPLE__) */ } - return SQLITE_ERROR; + return SQLITE_NOTFOUND; } /* ** Return the sector size in bytes of the underlying block device for ** the specified file. This is almost always 512 bytes, but may be @@ -25536,21 +31138,1059 @@ ** SQLite code assumes this function cannot fail. It also assumes that ** if two files are created in the same file-system directory (i.e. ** a database and its journal file) that the sector size will be the ** same for both. */ +#ifndef __QNXNTO__ static int unixSectorSize(sqlite3_file *NotUsed){ UNUSED_PARAMETER(NotUsed); return SQLITE_DEFAULT_SECTOR_SIZE; } - -/* -** Return the device characteristics for the file. This is always 0 for unix. -*/ -static int unixDeviceCharacteristics(sqlite3_file *NotUsed){ - UNUSED_PARAMETER(NotUsed); - return 0; +#endif + +/* +** The following version of unixSectorSize() is optimized for QNX. +*/ +#ifdef __QNXNTO__ +#include <sys/dcmd_blk.h> +#include <sys/statvfs.h> +static int unixSectorSize(sqlite3_file *id){ + unixFile *pFile = (unixFile*)id; + if( pFile->sectorSize == 0 ){ + struct statvfs fsInfo; + + /* Set defaults for non-supported filesystems */ + pFile->sectorSize = SQLITE_DEFAULT_SECTOR_SIZE; + pFile->deviceCharacteristics = 0; + if( fstatvfs(pFile->h, &fsInfo) == -1 ) { + return pFile->sectorSize; + } + + if( !strcmp(fsInfo.f_basetype, "tmp") ) { + pFile->sectorSize = fsInfo.f_bsize; + pFile->deviceCharacteristics = + SQLITE_IOCAP_ATOMIC4K | /* All ram filesystem writes are atomic */ + SQLITE_IOCAP_SAFE_APPEND | /* growing the file does not occur until + ** the write succeeds */ + SQLITE_IOCAP_SEQUENTIAL | /* The ram filesystem has no write behind + ** so it is ordered */ + 0; + }else if( strstr(fsInfo.f_basetype, "etfs") ){ + pFile->sectorSize = fsInfo.f_bsize; + pFile->deviceCharacteristics = + /* etfs cluster size writes are atomic */ + (pFile->sectorSize / 512 * SQLITE_IOCAP_ATOMIC512) | + SQLITE_IOCAP_SAFE_APPEND | /* growing the file does not occur until + ** the write succeeds */ + SQLITE_IOCAP_SEQUENTIAL | /* The ram filesystem has no write behind + ** so it is ordered */ + 0; + }else if( !strcmp(fsInfo.f_basetype, "qnx6") ){ + pFile->sectorSize = fsInfo.f_bsize; + pFile->deviceCharacteristics = + SQLITE_IOCAP_ATOMIC | /* All filesystem writes are atomic */ + SQLITE_IOCAP_SAFE_APPEND | /* growing the file does not occur until + ** the write succeeds */ + SQLITE_IOCAP_SEQUENTIAL | /* The ram filesystem has no write behind + ** so it is ordered */ + 0; + }else if( !strcmp(fsInfo.f_basetype, "qnx4") ){ + pFile->sectorSize = fsInfo.f_bsize; + pFile->deviceCharacteristics = + /* full bitset of atomics from max sector size and smaller */ + ((pFile->sectorSize / 512 * SQLITE_IOCAP_ATOMIC512) << 1) - 2 | + SQLITE_IOCAP_SEQUENTIAL | /* The ram filesystem has no write behind + ** so it is ordered */ + 0; + }else if( strstr(fsInfo.f_basetype, "dos") ){ + pFile->sectorSize = fsInfo.f_bsize; + pFile->deviceCharacteristics = + /* full bitset of atomics from max sector size and smaller */ + ((pFile->sectorSize / 512 * SQLITE_IOCAP_ATOMIC512) << 1) - 2 | + SQLITE_IOCAP_SEQUENTIAL | /* The ram filesystem has no write behind + ** so it is ordered */ + 
0; + }else{ + pFile->deviceCharacteristics = + SQLITE_IOCAP_ATOMIC512 | /* blocks are atomic */ + SQLITE_IOCAP_SAFE_APPEND | /* growing the file does not occur until + ** the write succeeds */ + 0; + } + } + /* Last chance verification. If the sector size isn't a multiple of 512 + ** then it isn't valid.*/ + if( pFile->sectorSize % 512 != 0 ){ + pFile->deviceCharacteristics = 0; + pFile->sectorSize = SQLITE_DEFAULT_SECTOR_SIZE; + } + return pFile->sectorSize; +} +#endif /* __QNXNTO__ */ + +/* +** Return the device characteristics for the file. +** +** This VFS is set up to return SQLITE_IOCAP_POWERSAFE_OVERWRITE by default. +** However, that choice is controversial since technically the underlying +** file system does not always provide powersafe overwrites. (In other +** words, after a power-loss event, parts of the file that were never +** written might end up being altered.) However, non-PSOW behavior is very, +** very rare. And asserting PSOW makes a large reduction in the amount +** of required I/O for journaling, since a lot of padding is eliminated. +** Hence, while POWERSAFE_OVERWRITE is on by default, there is a file-control +** available to turn it off and URI query parameter available to turn it off. +*/ +static int unixDeviceCharacteristics(sqlite3_file *id){ + unixFile *p = (unixFile*)id; + int rc = 0; +#ifdef __QNXNTO__ + if( p->sectorSize==0 ) unixSectorSize(id); + rc = p->deviceCharacteristics; +#endif + if( p->ctrlFlags & UNIXFILE_PSOW ){ + rc |= SQLITE_IOCAP_POWERSAFE_OVERWRITE; + } + return rc; +} + +#if !defined(SQLITE_OMIT_WAL) || SQLITE_MAX_MMAP_SIZE>0 + +/* +** Return the system page size. +** +** This function should not be called directly by other code in this file. +** Instead, it should be called via macro osGetpagesize(). +*/ +static int unixGetpagesize(void){ +#if OS_VXWORKS + return 1024; +#elif defined(_BSD_SOURCE) + return getpagesize(); +#else + return (int)sysconf(_SC_PAGESIZE); +#endif +} + +#endif /* !defined(SQLITE_OMIT_WAL) || SQLITE_MAX_MMAP_SIZE>0 */ + +#ifndef SQLITE_OMIT_WAL + +/* +** Object used to represent an shared memory buffer. +** +** When multiple threads all reference the same wal-index, each thread +** has its own unixShm object, but they all point to a single instance +** of this unixShmNode object. In other words, each wal-index is opened +** only once per process. +** +** Each unixShmNode object is connected to a single unixInodeInfo object. +** We could coalesce this object into unixInodeInfo, but that would mean +** every open file that does not use shared memory (in other words, most +** open files) would have to carry around this extra information. So +** the unixInodeInfo object contains a pointer to this unixShmNode object +** and the unixShmNode object is created only when needed. +** +** unixMutexHeld() must be true when creating or destroying +** this object or while reading or writing the following fields: +** +** nRef +** +** The following fields are read-only after the object is created: +** +** fid +** zFilename +** +** Either unixShmNode.mutex must be held or unixShmNode.nRef==0 and +** unixMutexHeld() is true when reading or writing any other field +** in this structure. 
+*/ +struct unixShmNode { + unixInodeInfo *pInode; /* unixInodeInfo that owns this SHM node */ + sqlite3_mutex *mutex; /* Mutex to access this object */ + char *zFilename; /* Name of the mmapped file */ + int h; /* Open file descriptor */ + int szRegion; /* Size of shared-memory regions */ + u16 nRegion; /* Size of array apRegion */ + u8 isReadonly; /* True if read-only */ + char **apRegion; /* Array of mapped shared-memory regions */ + int nRef; /* Number of unixShm objects pointing to this */ + unixShm *pFirst; /* All unixShm objects pointing to this */ +#ifdef SQLITE_DEBUG + u8 exclMask; /* Mask of exclusive locks held */ + u8 sharedMask; /* Mask of shared locks held */ + u8 nextShmId; /* Next available unixShm.id value */ +#endif +}; + +/* +** Structure used internally by this VFS to record the state of an +** open shared memory connection. +** +** The following fields are initialized when this object is created and +** are read-only thereafter: +** +** unixShm.pFile +** unixShm.id +** +** All other fields are read/write. The unixShm.pFile->mutex must be held +** while accessing any read/write fields. +*/ +struct unixShm { + unixShmNode *pShmNode; /* The underlying unixShmNode object */ + unixShm *pNext; /* Next unixShm with the same unixShmNode */ + u8 hasMutex; /* True if holding the unixShmNode mutex */ + u8 id; /* Id of this connection within its unixShmNode */ + u16 sharedMask; /* Mask of shared locks held */ + u16 exclMask; /* Mask of exclusive locks held */ +}; + +/* +** Constants used for locking +*/ +#define UNIX_SHM_BASE ((22+SQLITE_SHM_NLOCK)*4) /* first lock byte */ +#define UNIX_SHM_DMS (UNIX_SHM_BASE+SQLITE_SHM_NLOCK) /* deadman switch */ + +/* +** Apply posix advisory locks for all bytes from ofst through ofst+n-1. +** +** Locks block if the mask is exactly UNIX_SHM_C and are non-blocking +** otherwise. +*/ +static int unixShmSystemLock( + unixFile *pFile, /* Open connection to the WAL file */ + int lockType, /* F_UNLCK, F_RDLCK, or F_WRLCK */ + int ofst, /* First byte of the locking range */ + int n /* Number of bytes to lock */ +){ + unixShmNode *pShmNode; /* Apply locks to this open shared-memory segment */ + struct flock f; /* The posix advisory locking structure */ + int rc = SQLITE_OK; /* Result code form fcntl() */ + + /* Access to the unixShmNode object is serialized by the caller */ + pShmNode = pFile->pInode->pShmNode; + assert( sqlite3_mutex_held(pShmNode->mutex) || pShmNode->nRef==0 ); + + /* Shared locks never span more than one byte */ + assert( n==1 || lockType!=F_RDLCK ); + + /* Locks are within range */ + assert( n>=1 && n<=SQLITE_SHM_NLOCK ); + + if( pShmNode->h>=0 ){ + /* Initialize the locking parameters */ + memset(&f, 0, sizeof(f)); + f.l_type = lockType; + f.l_whence = SEEK_SET; + f.l_start = ofst; + f.l_len = n; + + rc = osFcntl(pShmNode->h, F_SETLK, &f); + rc = (rc!=(-1)) ? SQLITE_OK : SQLITE_BUSY; + } + + /* Update the global lock state and do debug tracing */ +#ifdef SQLITE_DEBUG + { u16 mask; + OSTRACE(("SHM-LOCK ")); + mask = ofst>31 ? 
0xffff : (1<<(ofst+n)) - (1<<ofst); + if( rc==SQLITE_OK ){ + if( lockType==F_UNLCK ){ + OSTRACE(("unlock %d ok", ofst)); + pShmNode->exclMask &= ~mask; + pShmNode->sharedMask &= ~mask; + }else if( lockType==F_RDLCK ){ + OSTRACE(("read-lock %d ok", ofst)); + pShmNode->exclMask &= ~mask; + pShmNode->sharedMask |= mask; + }else{ + assert( lockType==F_WRLCK ); + OSTRACE(("write-lock %d ok", ofst)); + pShmNode->exclMask |= mask; + pShmNode->sharedMask &= ~mask; + } + }else{ + if( lockType==F_UNLCK ){ + OSTRACE(("unlock %d failed", ofst)); + }else if( lockType==F_RDLCK ){ + OSTRACE(("read-lock failed")); + }else{ + assert( lockType==F_WRLCK ); + OSTRACE(("write-lock %d failed", ofst)); + } + } + OSTRACE((" - afterwards %03x,%03x\n", + pShmNode->sharedMask, pShmNode->exclMask)); + } +#endif + + return rc; +} + +/* +** Return the minimum number of 32KB shm regions that should be mapped at +** a time, assuming that each mapping must be an integer multiple of the +** current system page-size. +** +** Usually, this is 1. The exception seems to be systems that are configured +** to use 64KB pages - in this case each mapping must cover at least two +** shm regions. +*/ +static int unixShmRegionPerMap(void){ + int shmsz = 32*1024; /* SHM region size */ + int pgsz = osGetpagesize(); /* System page size */ + assert( ((pgsz-1)&pgsz)==0 ); /* Page size must be a power of 2 */ + if( pgsz<shmsz ) return 1; + return pgsz/shmsz; +} + +/* +** Purge the unixShmNodeList list of all entries with unixShmNode.nRef==0. +** +** This is not a VFS shared-memory method; it is a utility function called +** by VFS shared-memory methods. +*/ +static void unixShmPurge(unixFile *pFd){ + unixShmNode *p = pFd->pInode->pShmNode; + assert( unixMutexHeld() ); + if( p && ALWAYS(p->nRef==0) ){ + int nShmPerMap = unixShmRegionPerMap(); + int i; + assert( p->pInode==pFd->pInode ); + sqlite3_mutex_free(p->mutex); + for(i=0; i<p->nRegion; i+=nShmPerMap){ + if( p->h>=0 ){ + osMunmap(p->apRegion[i], p->szRegion); + }else{ + sqlite3_free(p->apRegion[i]); + } + } + sqlite3_free(p->apRegion); + if( p->h>=0 ){ + robust_close(pFd, p->h, __LINE__); + p->h = -1; + } + p->pInode->pShmNode = 0; + sqlite3_free(p); + } +} + +/* +** Open a shared-memory area associated with open database file pDbFd. +** This particular implementation uses mmapped files. +** +** The file used to implement shared-memory is in the same directory +** as the open database file and has the same name as the open database +** file with the "-shm" suffix added. For example, if the database file +** is "/home/user1/config.db" then the file that is created and mmapped +** for shared memory will be called "/home/user1/config.db-shm". +** +** Another approach to is to use files in /dev/shm or /dev/tmp or an +** some other tmpfs mount. But if a file in a different directory +** from the database file is used, then differing access permissions +** or a chroot() might cause two different processes on the same +** database to end up using different files for shared memory - +** meaning that their memory would not really be shared - resulting +** in database corruption. Nevertheless, this tmpfs file usage +** can be enabled at compile-time using -DSQLITE_SHM_DIRECTORY="/dev/shm" +** or the equivalent. The use of the SQLITE_SHM_DIRECTORY compile-time +** option results in an incompatible build of SQLite; builds of SQLite +** that with differing SQLITE_SHM_DIRECTORY settings attempt to use the +** same database file at the same time, database corruption will likely +** result. 
The SQLITE_SHM_DIRECTORY compile-time option is considered +** "unsupported" and may go away in a future SQLite release. +** +** When opening a new shared-memory file, if no other instances of that +** file are currently open, in this process or in other processes, then +** the file must be truncated to zero length or have its header cleared. +** +** If the original database file (pDbFd) is using the "unix-excl" VFS +** that means that an exclusive lock is held on the database file and +** that no other processes are able to read or write the database. In +** that case, we do not really need shared memory. No shared memory +** file is created. The shared memory will be simulated with heap memory. +*/ +static int unixOpenSharedMemory(unixFile *pDbFd){ + struct unixShm *p = 0; /* The connection to be opened */ + struct unixShmNode *pShmNode; /* The underlying mmapped file */ + int rc; /* Result code */ + unixInodeInfo *pInode; /* The inode of fd */ + char *zShmFilename; /* Name of the file used for SHM */ + int nShmFilename; /* Size of the SHM filename in bytes */ + + /* Allocate space for the new unixShm object. */ + p = sqlite3_malloc64( sizeof(*p) ); + if( p==0 ) return SQLITE_NOMEM; + memset(p, 0, sizeof(*p)); + assert( pDbFd->pShm==0 ); + + /* Check to see if a unixShmNode object already exists. Reuse an existing + ** one if present. Create a new one if necessary. + */ + unixEnterMutex(); + pInode = pDbFd->pInode; + pShmNode = pInode->pShmNode; + if( pShmNode==0 ){ + struct stat sStat; /* fstat() info for database file */ +#ifndef SQLITE_SHM_DIRECTORY + const char *zBasePath = pDbFd->zPath; +#endif + + /* Call fstat() to figure out the permissions on the database file. If + ** a new *-shm file is created, an attempt will be made to create it + ** with the same permissions. + */ + if( osFstat(pDbFd->h, &sStat) ){ + rc = SQLITE_IOERR_FSTAT; + goto shm_open_err; + } + +#ifdef SQLITE_SHM_DIRECTORY + nShmFilename = sizeof(SQLITE_SHM_DIRECTORY) + 31; +#else + nShmFilename = 6 + (int)strlen(zBasePath); +#endif + pShmNode = sqlite3_malloc64( sizeof(*pShmNode) + nShmFilename ); + if( pShmNode==0 ){ + rc = SQLITE_NOMEM; + goto shm_open_err; + } + memset(pShmNode, 0, sizeof(*pShmNode)+nShmFilename); + zShmFilename = pShmNode->zFilename = (char*)&pShmNode[1]; +#ifdef SQLITE_SHM_DIRECTORY + sqlite3_snprintf(nShmFilename, zShmFilename, + SQLITE_SHM_DIRECTORY "/sqlite-shm-%x-%x", + (u32)sStat.st_ino, (u32)sStat.st_dev); +#else + sqlite3_snprintf(nShmFilename, zShmFilename, "%s-shm", zBasePath); + sqlite3FileSuffix3(pDbFd->zPath, zShmFilename); +#endif + pShmNode->h = -1; + pDbFd->pInode->pShmNode = pShmNode; + pShmNode->pInode = pDbFd->pInode; + pShmNode->mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_FAST); + if( pShmNode->mutex==0 ){ + rc = SQLITE_NOMEM; + goto shm_open_err; + } + + if( pInode->bProcessLock==0 ){ + int openFlags = O_RDWR | O_CREAT; + if( sqlite3_uri_boolean(pDbFd->zPath, "readonly_shm", 0) ){ + openFlags = O_RDONLY; + pShmNode->isReadonly = 1; + } + pShmNode->h = robust_open(zShmFilename, openFlags, (sStat.st_mode&0777)); + if( pShmNode->h<0 ){ + rc = unixLogError(SQLITE_CANTOPEN_BKPT, "open", zShmFilename); + goto shm_open_err; + } + + /* If this process is running as root, make sure that the SHM file + ** is owned by the same user that owns the original database. Otherwise, + ** the original owner will not be able to connect. + */ + robustFchown(pShmNode->h, sStat.st_uid, sStat.st_gid); + + /* Check to see if another process is holding the dead-man switch. 
+ ** If not, truncate the file to zero length. + */ + rc = SQLITE_OK; + if( unixShmSystemLock(pDbFd, F_WRLCK, UNIX_SHM_DMS, 1)==SQLITE_OK ){ + if( robust_ftruncate(pShmNode->h, 0) ){ + rc = unixLogError(SQLITE_IOERR_SHMOPEN, "ftruncate", zShmFilename); + } + } + if( rc==SQLITE_OK ){ + rc = unixShmSystemLock(pDbFd, F_RDLCK, UNIX_SHM_DMS, 1); + } + if( rc ) goto shm_open_err; + } + } + + /* Make the new connection a child of the unixShmNode */ + p->pShmNode = pShmNode; +#ifdef SQLITE_DEBUG + p->id = pShmNode->nextShmId++; +#endif + pShmNode->nRef++; + pDbFd->pShm = p; + unixLeaveMutex(); + + /* The reference count on pShmNode has already been incremented under + ** the cover of the unixEnterMutex() mutex and the pointer from the + ** new (struct unixShm) object to the pShmNode has been set. All that is + ** left to do is to link the new object into the linked list starting + ** at pShmNode->pFirst. This must be done while holding the pShmNode->mutex + ** mutex. + */ + sqlite3_mutex_enter(pShmNode->mutex); + p->pNext = pShmNode->pFirst; + pShmNode->pFirst = p; + sqlite3_mutex_leave(pShmNode->mutex); + return SQLITE_OK; + + /* Jump here on any error */ +shm_open_err: + unixShmPurge(pDbFd); /* This call frees pShmNode if required */ + sqlite3_free(p); + unixLeaveMutex(); + return rc; +} + +/* +** This function is called to obtain a pointer to region iRegion of the +** shared-memory associated with the database file fd. Shared-memory regions +** are numbered starting from zero. Each shared-memory region is szRegion +** bytes in size. +** +** If an error occurs, an error code is returned and *pp is set to NULL. +** +** Otherwise, if the bExtend parameter is 0 and the requested shared-memory +** region has not been allocated (by any client, including one running in a +** separate process), then *pp is set to NULL and SQLITE_OK returned. If +** bExtend is non-zero and the requested shared-memory region has not yet +** been allocated, it is allocated by this function. +** +** If the shared-memory region has already been allocated or is allocated by +** this call as described above, then it is mapped into this processes +** address space (if it is not already), *pp is set to point to the mapped +** memory and SQLITE_OK returned. +*/ +static int unixShmMap( + sqlite3_file *fd, /* Handle open on database file */ + int iRegion, /* Region to retrieve */ + int szRegion, /* Size of regions */ + int bExtend, /* True to extend file if necessary */ + void volatile **pp /* OUT: Mapped memory */ +){ + unixFile *pDbFd = (unixFile*)fd; + unixShm *p; + unixShmNode *pShmNode; + int rc = SQLITE_OK; + int nShmPerMap = unixShmRegionPerMap(); + int nReqRegion; + + /* If the shared-memory file has not yet been opened, open it now. */ + if( pDbFd->pShm==0 ){ + rc = unixOpenSharedMemory(pDbFd); + if( rc!=SQLITE_OK ) return rc; + } + + p = pDbFd->pShm; + pShmNode = p->pShmNode; + sqlite3_mutex_enter(pShmNode->mutex); + assert( szRegion==pShmNode->szRegion || pShmNode->nRegion==0 ); + assert( pShmNode->pInode==pDbFd->pInode ); + assert( pShmNode->h>=0 || pDbFd->pInode->bProcessLock==1 ); + assert( pShmNode->h<0 || pDbFd->pInode->bProcessLock==0 ); + + /* Minimum number of regions required to be mapped. 
*/ + nReqRegion = ((iRegion+nShmPerMap) / nShmPerMap) * nShmPerMap; + + if( pShmNode->nRegion<nReqRegion ){ + char **apNew; /* New apRegion[] array */ + int nByte = nReqRegion*szRegion; /* Minimum required file size */ + struct stat sStat; /* Used by fstat() */ + + pShmNode->szRegion = szRegion; + + if( pShmNode->h>=0 ){ + /* The requested region is not mapped into this processes address space. + ** Check to see if it has been allocated (i.e. if the wal-index file is + ** large enough to contain the requested region). + */ + if( osFstat(pShmNode->h, &sStat) ){ + rc = SQLITE_IOERR_SHMSIZE; + goto shmpage_out; + } + + if( sStat.st_size<nByte ){ + /* The requested memory region does not exist. If bExtend is set to + ** false, exit early. *pp will be set to NULL and SQLITE_OK returned. + */ + if( !bExtend ){ + goto shmpage_out; + } + + /* Alternatively, if bExtend is true, extend the file. Do this by + ** writing a single byte to the end of each (OS) page being + ** allocated or extended. Technically, we need only write to the + ** last page in order to extend the file. But writing to all new + ** pages forces the OS to allocate them immediately, which reduces + ** the chances of SIGBUS while accessing the mapped region later on. + */ + else{ + static const int pgsz = 4096; + int iPg; + + /* Write to the last byte of each newly allocated or extended page */ + assert( (nByte % pgsz)==0 ); + for(iPg=(sStat.st_size/pgsz); iPg<(nByte/pgsz); iPg++){ + int x = 0; + if( seekAndWriteFd(pShmNode->h, iPg*pgsz + pgsz-1, "", 1, &x)!=1 ){ + const char *zFile = pShmNode->zFilename; + rc = unixLogError(SQLITE_IOERR_SHMSIZE, "write", zFile); + goto shmpage_out; + } + } + } + } + } + + /* Map the requested memory region into this processes address space. */ + apNew = (char **)sqlite3_realloc( + pShmNode->apRegion, nReqRegion*sizeof(char *) + ); + if( !apNew ){ + rc = SQLITE_IOERR_NOMEM; + goto shmpage_out; + } + pShmNode->apRegion = apNew; + while( pShmNode->nRegion<nReqRegion ){ + int nMap = szRegion*nShmPerMap; + int i; + void *pMem; + if( pShmNode->h>=0 ){ + pMem = osMmap(0, nMap, + pShmNode->isReadonly ? PROT_READ : PROT_READ|PROT_WRITE, + MAP_SHARED, pShmNode->h, szRegion*(i64)pShmNode->nRegion + ); + if( pMem==MAP_FAILED ){ + rc = unixLogError(SQLITE_IOERR_SHMMAP, "mmap", pShmNode->zFilename); + goto shmpage_out; + } + }else{ + pMem = sqlite3_malloc64(szRegion); + if( pMem==0 ){ + rc = SQLITE_NOMEM; + goto shmpage_out; + } + memset(pMem, 0, szRegion); + } + + for(i=0; i<nShmPerMap; i++){ + pShmNode->apRegion[pShmNode->nRegion+i] = &((char*)pMem)[szRegion*i]; + } + pShmNode->nRegion += nShmPerMap; + } + } + +shmpage_out: + if( pShmNode->nRegion>iRegion ){ + *pp = pShmNode->apRegion[iRegion]; + }else{ + *pp = 0; + } + if( pShmNode->isReadonly && rc==SQLITE_OK ) rc = SQLITE_READONLY; + sqlite3_mutex_leave(pShmNode->mutex); + return rc; +} + +/* +** Change the lock state for a shared-memory segment. +** +** Note that the relationship between SHAREd and EXCLUSIVE locks is a little +** different here than in posix. In xShmLock(), one can go from unlocked +** to shared and back or from unlocked to exclusive and back. But one may +** not go from shared to exclusive or from exclusive to shared. 
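+**
+** For example, a caller holding no lock on slot i may take and later
+** release a SHARED lock with (informal usage sketch; "fd" stands for any
+** open sqlite3_file on the database):
+**
+**     unixShmLock(fd, i, 1, SQLITE_SHM_LOCK   | SQLITE_SHM_SHARED);
+**     unixShmLock(fd, i, 1, SQLITE_SHM_UNLOCK | SQLITE_SHM_SHARED);
+**
+** but it may not upgrade that SHARED lock in place; it must UNLOCK first
+** and then request SQLITE_SHM_EXCLUSIVE in a separate call.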
+*/ +static int unixShmLock( + sqlite3_file *fd, /* Database file holding the shared memory */ + int ofst, /* First lock to acquire or release */ + int n, /* Number of locks to acquire or release */ + int flags /* What to do with the lock */ +){ + unixFile *pDbFd = (unixFile*)fd; /* Connection holding shared memory */ + unixShm *p = pDbFd->pShm; /* The shared memory being locked */ + unixShm *pX; /* For looping over all siblings */ + unixShmNode *pShmNode = p->pShmNode; /* The underlying file iNode */ + int rc = SQLITE_OK; /* Result code */ + u16 mask; /* Mask of locks to take or release */ + + assert( pShmNode==pDbFd->pInode->pShmNode ); + assert( pShmNode->pInode==pDbFd->pInode ); + assert( ofst>=0 && ofst+n<=SQLITE_SHM_NLOCK ); + assert( n>=1 ); + assert( flags==(SQLITE_SHM_LOCK | SQLITE_SHM_SHARED) + || flags==(SQLITE_SHM_LOCK | SQLITE_SHM_EXCLUSIVE) + || flags==(SQLITE_SHM_UNLOCK | SQLITE_SHM_SHARED) + || flags==(SQLITE_SHM_UNLOCK | SQLITE_SHM_EXCLUSIVE) ); + assert( n==1 || (flags & SQLITE_SHM_EXCLUSIVE)!=0 ); + assert( pShmNode->h>=0 || pDbFd->pInode->bProcessLock==1 ); + assert( pShmNode->h<0 || pDbFd->pInode->bProcessLock==0 ); + + mask = (1<<(ofst+n)) - (1<<ofst); + assert( n>1 || mask==(1<<ofst) ); + sqlite3_mutex_enter(pShmNode->mutex); + if( flags & SQLITE_SHM_UNLOCK ){ + u16 allMask = 0; /* Mask of locks held by siblings */ + + /* See if any siblings hold this same lock */ + for(pX=pShmNode->pFirst; pX; pX=pX->pNext){ + if( pX==p ) continue; + assert( (pX->exclMask & (p->exclMask|p->sharedMask))==0 ); + allMask |= pX->sharedMask; + } + + /* Unlock the system-level locks */ + if( (mask & allMask)==0 ){ + rc = unixShmSystemLock(pDbFd, F_UNLCK, ofst+UNIX_SHM_BASE, n); + }else{ + rc = SQLITE_OK; + } + + /* Undo the local locks */ + if( rc==SQLITE_OK ){ + p->exclMask &= ~mask; + p->sharedMask &= ~mask; + } + }else if( flags & SQLITE_SHM_SHARED ){ + u16 allShared = 0; /* Union of locks held by connections other than "p" */ + + /* Find out which shared locks are already held by sibling connections. + ** If any sibling already holds an exclusive lock, go ahead and return + ** SQLITE_BUSY. + */ + for(pX=pShmNode->pFirst; pX; pX=pX->pNext){ + if( (pX->exclMask & mask)!=0 ){ + rc = SQLITE_BUSY; + break; + } + allShared |= pX->sharedMask; + } + + /* Get shared locks at the system level, if necessary */ + if( rc==SQLITE_OK ){ + if( (allShared & mask)==0 ){ + rc = unixShmSystemLock(pDbFd, F_RDLCK, ofst+UNIX_SHM_BASE, n); + }else{ + rc = SQLITE_OK; + } + } + + /* Get the local shared locks */ + if( rc==SQLITE_OK ){ + p->sharedMask |= mask; + } + }else{ + /* Make sure no sibling connections hold locks that will block this + ** lock. If any do, return SQLITE_BUSY right away. + */ + for(pX=pShmNode->pFirst; pX; pX=pX->pNext){ + if( (pX->exclMask & mask)!=0 || (pX->sharedMask & mask)!=0 ){ + rc = SQLITE_BUSY; + break; + } + } + + /* Get the exclusive locks at the system level. Then if successful + ** also mark the local connection as being locked. + */ + if( rc==SQLITE_OK ){ + rc = unixShmSystemLock(pDbFd, F_WRLCK, ofst+UNIX_SHM_BASE, n); + if( rc==SQLITE_OK ){ + assert( (p->sharedMask & mask)==0 ); + p->exclMask |= mask; + } + } + } + sqlite3_mutex_leave(pShmNode->mutex); + OSTRACE(("SHM-LOCK shmid-%d, pid-%d got %03x,%03x\n", + p->id, osGetpid(0), p->sharedMask, p->exclMask)); + return rc; +} + +/* +** Implement a memory barrier or memory fence on shared memory. +** +** All loads and stores begun before the barrier must complete before +** any load or store begun after the barrier. 
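+**
+** A rough sketch of how the upper layer is expected to use this (the
+** details live in the WAL code, not here): write the first copy of a
+** doubly-written header, call xShmBarrier(), then write the second copy,
+** so that a concurrent reader never observes the two copies updated in
+** the opposite order.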
+*/ +static void unixShmBarrier( + sqlite3_file *fd /* Database file holding the shared memory */ +){ + UNUSED_PARAMETER(fd); + sqlite3MemoryBarrier(); /* compiler-defined memory barrier */ + unixEnterMutex(); /* Also mutex, for redundancy */ + unixLeaveMutex(); +} + +/* +** Close a connection to shared-memory. Delete the underlying +** storage if deleteFlag is true. +** +** If there is no shared memory associated with the connection then this +** routine is a harmless no-op. +*/ +static int unixShmUnmap( + sqlite3_file *fd, /* The underlying database file */ + int deleteFlag /* Delete shared-memory if true */ +){ + unixShm *p; /* The connection to be closed */ + unixShmNode *pShmNode; /* The underlying shared-memory file */ + unixShm **pp; /* For looping over sibling connections */ + unixFile *pDbFd; /* The underlying database file */ + + pDbFd = (unixFile*)fd; + p = pDbFd->pShm; + if( p==0 ) return SQLITE_OK; + pShmNode = p->pShmNode; + + assert( pShmNode==pDbFd->pInode->pShmNode ); + assert( pShmNode->pInode==pDbFd->pInode ); + + /* Remove connection p from the set of connections associated + ** with pShmNode */ + sqlite3_mutex_enter(pShmNode->mutex); + for(pp=&pShmNode->pFirst; (*pp)!=p; pp = &(*pp)->pNext){} + *pp = p->pNext; + + /* Free the connection p */ + sqlite3_free(p); + pDbFd->pShm = 0; + sqlite3_mutex_leave(pShmNode->mutex); + + /* If pShmNode->nRef has reached 0, then close the underlying + ** shared-memory file, too */ + unixEnterMutex(); + assert( pShmNode->nRef>0 ); + pShmNode->nRef--; + if( pShmNode->nRef==0 ){ + if( deleteFlag && pShmNode->h>=0 ){ + osUnlink(pShmNode->zFilename); + } + unixShmPurge(pDbFd); + } + unixLeaveMutex(); + + return SQLITE_OK; +} + + +#else +# define unixShmMap 0 +# define unixShmLock 0 +# define unixShmBarrier 0 +# define unixShmUnmap 0 +#endif /* #ifndef SQLITE_OMIT_WAL */ + +#if SQLITE_MAX_MMAP_SIZE>0 +/* +** If it is currently memory mapped, unmap file pFd. +*/ +static void unixUnmapfile(unixFile *pFd){ + assert( pFd->nFetchOut==0 ); + if( pFd->pMapRegion ){ + osMunmap(pFd->pMapRegion, pFd->mmapSizeActual); + pFd->pMapRegion = 0; + pFd->mmapSize = 0; + pFd->mmapSizeActual = 0; + } +} + +/* +** Attempt to set the size of the memory mapping maintained by file +** descriptor pFd to nNew bytes. Any existing mapping is discarded. +** +** If successful, this function sets the following variables: +** +** unixFile.pMapRegion +** unixFile.mmapSize +** unixFile.mmapSizeActual +** +** If unsuccessful, an error message is logged via sqlite3_log() and +** the three variables above are zeroed. In this case SQLite should +** continue accessing the database using the xRead() and xWrite() +** methods. 
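+**
+** In other words (informal summary of the code below): on success,
+**
+**     pFd->pMapRegion!=0, pFd->mmapSize==nNew, pFd->mmapSizeActual==nNew
+**
+** and on failure all three are zero and pFd->mmapSizeMax is also set to
+** zero so that no further mapping attempts are made.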
+*/ +static void unixRemapfile( + unixFile *pFd, /* File descriptor object */ + i64 nNew /* Required mapping size */ +){ + const char *zErr = "mmap"; + int h = pFd->h; /* File descriptor open on db file */ + u8 *pOrig = (u8 *)pFd->pMapRegion; /* Pointer to current file mapping */ + i64 nOrig = pFd->mmapSizeActual; /* Size of pOrig region in bytes */ + u8 *pNew = 0; /* Location of new mapping */ + int flags = PROT_READ; /* Flags to pass to mmap() */ + + assert( pFd->nFetchOut==0 ); + assert( nNew>pFd->mmapSize ); + assert( nNew<=pFd->mmapSizeMax ); + assert( nNew>0 ); + assert( pFd->mmapSizeActual>=pFd->mmapSize ); + assert( MAP_FAILED!=0 ); + +#ifdef SQLITE_MMAP_READWRITE + if( (pFd->ctrlFlags & UNIXFILE_RDONLY)==0 ) flags |= PROT_WRITE; +#endif + + if( pOrig ){ +#if HAVE_MREMAP + i64 nReuse = pFd->mmapSize; +#else + const int szSyspage = osGetpagesize(); + i64 nReuse = (pFd->mmapSize & ~(szSyspage-1)); +#endif + u8 *pReq = &pOrig[nReuse]; + + /* Unmap any pages of the existing mapping that cannot be reused. */ + if( nReuse!=nOrig ){ + osMunmap(pReq, nOrig-nReuse); + } + +#if HAVE_MREMAP + pNew = osMremap(pOrig, nReuse, nNew, MREMAP_MAYMOVE); + zErr = "mremap"; +#else + pNew = osMmap(pReq, nNew-nReuse, flags, MAP_SHARED, h, nReuse); + if( pNew!=MAP_FAILED ){ + if( pNew!=pReq ){ + osMunmap(pNew, nNew - nReuse); + pNew = 0; + }else{ + pNew = pOrig; + } + } +#endif + + /* The attempt to extend the existing mapping failed. Free it. */ + if( pNew==MAP_FAILED || pNew==0 ){ + osMunmap(pOrig, nReuse); + } + } + + /* If pNew is still NULL, try to create an entirely new mapping. */ + if( pNew==0 ){ + pNew = osMmap(0, nNew, flags, MAP_SHARED, h, 0); + } + + if( pNew==MAP_FAILED ){ + pNew = 0; + nNew = 0; + unixLogError(SQLITE_OK, zErr, pFd->zPath); + + /* If the mmap() above failed, assume that all subsequent mmap() calls + ** will probably fail too. Fall back to using xRead/xWrite exclusively + ** in this case. */ + pFd->mmapSizeMax = 0; + } + pFd->pMapRegion = (void *)pNew; + pFd->mmapSize = pFd->mmapSizeActual = nNew; +} + +/* +** Memory map or remap the file opened by file-descriptor pFd (if the file +** is already mapped, the existing mapping is replaced by the new). Or, if +** there already exists a mapping for this file, and there are still +** outstanding xFetch() references to it, this function is a no-op. +** +** If parameter nByte is non-negative, then it is the requested size of +** the mapping to create. Otherwise, if nByte is less than zero, then the +** requested size is the size of the file on disk. The actual size of the +** created mapping is either the requested size or the value configured +** using SQLITE_FCNTL_MMAP_LIMIT, whichever is smaller. +** +** SQLITE_OK is returned if no error occurs (even if the mapping is not +** recreated as a result of outstanding references) or an SQLite error +** code otherwise. 
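+**
+** For example (informal): unixMapfile(pFd, -1) maps the smaller of the
+** current file size and pFd->mmapSizeMax, while unixMapfile(pFd, nReq)
+** with nReq>0 maps the smaller of nReq and pFd->mmapSizeMax.  Here
+** "nReq" is just an illustrative name for the second argument.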
+*/ +static int unixMapfile(unixFile *pFd, i64 nMap){ + assert( nMap>=0 || pFd->nFetchOut==0 ); + assert( nMap>0 || (pFd->mmapSize==0 && pFd->pMapRegion==0) ); + if( pFd->nFetchOut>0 ) return SQLITE_OK; + + if( nMap<0 ){ + struct stat statbuf; /* Low-level file information */ + if( osFstat(pFd->h, &statbuf) ){ + return SQLITE_IOERR_FSTAT; + } + nMap = statbuf.st_size; + } + if( nMap>pFd->mmapSizeMax ){ + nMap = pFd->mmapSizeMax; + } + + assert( nMap>0 || (pFd->mmapSize==0 && pFd->pMapRegion==0) ); + if( nMap!=pFd->mmapSize ){ + unixRemapfile(pFd, nMap); + } + + return SQLITE_OK; +} +#endif /* SQLITE_MAX_MMAP_SIZE>0 */ + +/* +** If possible, return a pointer to a mapping of file fd starting at offset +** iOff. The mapping must be valid for at least nAmt bytes. +** +** If such a pointer can be obtained, store it in *pp and return SQLITE_OK. +** Or, if one cannot but no error occurs, set *pp to 0 and return SQLITE_OK. +** Finally, if an error does occur, return an SQLite error code. The final +** value of *pp is undefined in this case. +** +** If this function does return a pointer, the caller must eventually +** release the reference by calling unixUnfetch(). +*/ +static int unixFetch(sqlite3_file *fd, i64 iOff, int nAmt, void **pp){ +#if SQLITE_MAX_MMAP_SIZE>0 + unixFile *pFd = (unixFile *)fd; /* The underlying database file */ +#endif + *pp = 0; + +#if SQLITE_MAX_MMAP_SIZE>0 + if( pFd->mmapSizeMax>0 ){ + if( pFd->pMapRegion==0 ){ + int rc = unixMapfile(pFd, -1); + if( rc!=SQLITE_OK ) return rc; + } + if( pFd->mmapSize >= iOff+nAmt ){ + *pp = &((u8 *)pFd->pMapRegion)[iOff]; + pFd->nFetchOut++; + } + } +#endif + return SQLITE_OK; +} + +/* +** If the third argument is non-NULL, then this function releases a +** reference obtained by an earlier call to unixFetch(). The second +** argument passed to this function must be the same as the corresponding +** argument that was passed to the unixFetch() invocation. +** +** Or, if the third argument is NULL, then this function is being called +** to inform the VFS layer that, according to POSIX, any existing mapping +** may now be invalid and should be unmapped. +*/ +static int unixUnfetch(sqlite3_file *fd, i64 iOff, void *p){ +#if SQLITE_MAX_MMAP_SIZE>0 + unixFile *pFd = (unixFile *)fd; /* The underlying database file */ + UNUSED_PARAMETER(iOff); + + /* If p==0 (unmap the entire file) then there must be no outstanding + ** xFetch references. Or, if p!=0 (meaning it is an xFetch reference), + ** then there must be at least one outstanding. */ + assert( (p==0)==(pFd->nFetchOut==0) ); + + /* If p!=0, it must match the iOff value. */ + assert( p==0 || p==&((u8 *)pFd->pMapRegion)[iOff] ); + + if( p ){ + pFd->nFetchOut--; + }else{ + unixUnmapfile(pFd); + } + + assert( pFd->nFetchOut>=0 ); +#else + UNUSED_PARAMETER(fd); + UNUSED_PARAMETER(p); + UNUSED_PARAMETER(iOff); +#endif + return SQLITE_OK; } /* ** Here ends the implementation of all sqlite3_file methods. ** @@ -25568,11 +32208,11 @@ ** Most finder functions return a pointer to a fixed sqlite3_io_methods ** object. The only interesting finder-function is autolockIoFinder, which ** looks at the filesystem type and tries to guess the best locking ** strategy from that. ** -** For finder-funtion F, two objects are created: +** For finder-function F, two objects are created: ** ** (1) The real finder-function named "FImpt()". ** ** (2) A constant pointer to this function named just "F". ** @@ -25589,13 +32229,13 @@ ** methods CLOSE, LOCK, UNLOCK, CKRESLOCK. 
** ** * An I/O method finder function called FINDER that returns a pointer ** to the METHOD object in the previous bullet. */ -#define IOMETHODS(FINDER, METHOD, CLOSE, LOCK, UNLOCK, CKLOCK) \ +#define IOMETHODS(FINDER,METHOD,VERSION,CLOSE,LOCK,UNLOCK,CKLOCK,SHMMAP) \ static const sqlite3_io_methods METHOD = { \ - 1, /* iVersion */ \ + VERSION, /* iVersion */ \ CLOSE, /* xClose */ \ unixRead, /* xRead */ \ unixWrite, /* xWrite */ \ unixTruncate, /* xTruncate */ \ unixSync, /* xSync */ \ @@ -25603,11 +32243,17 @@ LOCK, /* xLock */ \ UNLOCK, /* xUnlock */ \ CKLOCK, /* xCheckReservedLock */ \ unixFileControl, /* xFileControl */ \ unixSectorSize, /* xSectorSize */ \ - unixDeviceCharacteristics /* xDeviceCapabilities */ \ + unixDeviceCharacteristics, /* xDeviceCapabilities */ \ + SHMMAP, /* xShmMap */ \ + unixShmLock, /* xShmLock */ \ + unixShmBarrier, /* xShmBarrier */ \ + unixShmUnmap, /* xShmUnmap */ \ + unixFetch, /* xFetch */ \ + unixUnfetch, /* xUnfetch */ \ }; \ static const sqlite3_io_methods *FINDER##Impl(const char *z, unixFile *p){ \ UNUSED_PARAMETER(z); UNUSED_PARAMETER(p); \ return &METHOD; \ } \ @@ -25620,62 +32266,74 @@ ** are also created. */ IOMETHODS( posixIoFinder, /* Finder function name */ posixIoMethods, /* sqlite3_io_methods object name */ + 3, /* shared memory and mmap are enabled */ unixClose, /* xClose method */ unixLock, /* xLock method */ unixUnlock, /* xUnlock method */ - unixCheckReservedLock /* xCheckReservedLock method */ + unixCheckReservedLock, /* xCheckReservedLock method */ + unixShmMap /* xShmMap method */ ) IOMETHODS( nolockIoFinder, /* Finder function name */ nolockIoMethods, /* sqlite3_io_methods object name */ + 3, /* shared memory is disabled */ nolockClose, /* xClose method */ nolockLock, /* xLock method */ nolockUnlock, /* xUnlock method */ - nolockCheckReservedLock /* xCheckReservedLock method */ + nolockCheckReservedLock, /* xCheckReservedLock method */ + 0 /* xShmMap method */ ) IOMETHODS( dotlockIoFinder, /* Finder function name */ dotlockIoMethods, /* sqlite3_io_methods object name */ + 1, /* shared memory is disabled */ dotlockClose, /* xClose method */ dotlockLock, /* xLock method */ dotlockUnlock, /* xUnlock method */ - dotlockCheckReservedLock /* xCheckReservedLock method */ + dotlockCheckReservedLock, /* xCheckReservedLock method */ + 0 /* xShmMap method */ ) -#if SQLITE_ENABLE_LOCKING_STYLE && !OS_VXWORKS +#if SQLITE_ENABLE_LOCKING_STYLE IOMETHODS( flockIoFinder, /* Finder function name */ flockIoMethods, /* sqlite3_io_methods object name */ + 1, /* shared memory is disabled */ flockClose, /* xClose method */ flockLock, /* xLock method */ flockUnlock, /* xUnlock method */ - flockCheckReservedLock /* xCheckReservedLock method */ + flockCheckReservedLock, /* xCheckReservedLock method */ + 0 /* xShmMap method */ ) #endif #if OS_VXWORKS IOMETHODS( semIoFinder, /* Finder function name */ semIoMethods, /* sqlite3_io_methods object name */ - semClose, /* xClose method */ - semLock, /* xLock method */ - semUnlock, /* xUnlock method */ - semCheckReservedLock /* xCheckReservedLock method */ + 1, /* shared memory is disabled */ + semXClose, /* xClose method */ + semXLock, /* xLock method */ + semXUnlock, /* xUnlock method */ + semXCheckReservedLock, /* xCheckReservedLock method */ + 0 /* xShmMap method */ ) #endif #if defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE IOMETHODS( afpIoFinder, /* Finder function name */ afpIoMethods, /* sqlite3_io_methods object name */ + 1, /* shared memory is disabled */ afpClose, /* xClose method */ afpLock, /* xLock 
method */ afpUnlock, /* xUnlock method */ - afpCheckReservedLock /* xCheckReservedLock method */ + afpCheckReservedLock, /* xCheckReservedLock method */ + 0 /* xShmMap method */ ) #endif /* ** The proxy locking method is a "super-method" in the sense that it @@ -25692,26 +32350,30 @@ static int proxyUnlock(sqlite3_file*, int); static int proxyCheckReservedLock(sqlite3_file*, int*); IOMETHODS( proxyIoFinder, /* Finder function name */ proxyIoMethods, /* sqlite3_io_methods object name */ + 1, /* shared memory is disabled */ proxyClose, /* xClose method */ proxyLock, /* xLock method */ proxyUnlock, /* xUnlock method */ - proxyCheckReservedLock /* xCheckReservedLock method */ + proxyCheckReservedLock, /* xCheckReservedLock method */ + 0 /* xShmMap method */ ) #endif /* nfs lockd on OSX 10.3+ doesn't clear write locks when a read lock is set */ #if defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE IOMETHODS( nfsIoFinder, /* Finder function name */ nfsIoMethods, /* sqlite3_io_methods object name */ + 1, /* shared memory is disabled */ unixClose, /* xClose method */ unixLock, /* xLock method */ nfsUnlock, /* xUnlock method */ - unixCheckReservedLock /* xCheckReservedLock method */ + unixCheckReservedLock, /* xCheckReservedLock method */ + 0 /* xShmMap method */ ) #endif #if defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE /* @@ -25762,11 +32424,11 @@ */ lockInfo.l_len = 1; lockInfo.l_start = 0; lockInfo.l_whence = SEEK_SET; lockInfo.l_type = F_RDLCK; - if( fcntl(pNew->h, F_GETLK, &lockInfo)!=-1 ) { + if( osFcntl(pNew->h, F_GETLK, &lockInfo)!=-1 ) { if( strcmp(fsInfo.f_fstypename, "nfs")==0 ){ return &nfsIoMethods; } else { return &posixIoMethods; } @@ -25777,19 +32439,17 @@ static const sqlite3_io_methods *(*const autolockIoFinder)(const char*,unixFile*) = autolockIoFinderImpl; #endif /* defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE */ -#if OS_VXWORKS && SQLITE_ENABLE_LOCKING_STYLE -/* -** This "finder" function attempts to determine the best locking strategy -** for the database file "filePath". It then returns the sqlite3_io_methods -** object that implements that strategy. -** -** This is for VXWorks only. +#if OS_VXWORKS +/* +** This "finder" function for VxWorks checks to see if posix advisory +** locking works. If it does, then that is what is used. If it does not +** work, then fallback to named semaphore locking. */ -static const sqlite3_io_methods *autolockIoFinderImpl( +static const sqlite3_io_methods *vxworksIoFinderImpl( const char *filePath, /* name of the database file */ unixFile *pNew /* the open file object */ ){ struct flock lockInfo; @@ -25804,23 +32464,23 @@ */ lockInfo.l_len = 1; lockInfo.l_start = 0; lockInfo.l_whence = SEEK_SET; lockInfo.l_type = F_RDLCK; - if( fcntl(pNew->h, F_GETLK, &lockInfo)!=-1 ) { + if( osFcntl(pNew->h, F_GETLK, &lockInfo)!=-1 ) { return &posixIoMethods; }else{ return &semIoMethods; } } static const sqlite3_io_methods - *(*const autolockIoFinder)(const char*,unixFile*) = autolockIoFinderImpl; + *(*const vxworksIoFinder)(const char*,unixFile*) = vxworksIoFinderImpl; -#endif /* OS_VXWORKS && SQLITE_ENABLE_LOCKING_STYLE */ +#endif /* OS_VXWORKS */ /* -** An abstract type for a pointer to a IO method finder function: +** An abstract type for a pointer to an IO method finder function: */ typedef const sqlite3_io_methods *(*finder_type)(const char*,unixFile*); /**************************************************************************** @@ -25834,43 +32494,59 @@ ** Initialize the contents of the unixFile structure pointed to by pId. 
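**
** The ctrlFlags argument (introduced by this change) is a bitmask of the
** UNIXFILE_* values.  For instance, a hypothetical read-only URI open of
** a main database that still wants POSIX locking but no directory syncing
** would pass something like UNIXFILE_RDONLY|UNIXFILE_URI, while a
** temporary file that must never take locks would include UNIXFILE_NOLOCK.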
*/ static int fillInUnixFile( sqlite3_vfs *pVfs, /* Pointer to vfs object */ int h, /* Open file descriptor of file being opened */ - int dirfd, /* Directory file descriptor */ sqlite3_file *pId, /* Write to the unixFile structure here */ const char *zFilename, /* Name of the file being opened */ - int noLock, /* Omit locking if true */ - int isDelete /* Delete on close if true */ + int ctrlFlags /* Zero or more UNIXFILE_* values */ ){ const sqlite3_io_methods *pLockingStyle; unixFile *pNew = (unixFile *)pId; int rc = SQLITE_OK; - assert( pNew->pLock==NULL ); - assert( pNew->pOpen==NULL ); - - /* Parameter isDelete is only used on vxworks. Express this explicitly - ** here to prevent compiler warnings about unused parameters. - */ - UNUSED_PARAMETER(isDelete); - - OSTRACE3("OPEN %-3d %s\n", h, zFilename); + assert( pNew->pInode==NULL ); + + /* Usually the path zFilename should not be a relative pathname. The + ** exception is when opening the proxy "conch" file in builds that + ** include the special Apple locking styles. + */ +#if defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE + assert( zFilename==0 || zFilename[0]=='/' + || pVfs->pAppData==(void*)&autolockIoFinder ); +#else + assert( zFilename==0 || zFilename[0]=='/' ); +#endif + + /* No locking occurs in temporary files */ + assert( zFilename!=0 || (ctrlFlags & UNIXFILE_NOLOCK)!=0 ); + + OSTRACE(("OPEN %-3d %s\n", h, zFilename)); pNew->h = h; - pNew->dirfd = dirfd; - SET_THREADID(pNew); - pNew->fileFlags = 0; + pNew->pVfs = pVfs; + pNew->zPath = zFilename; + pNew->ctrlFlags = (u8)ctrlFlags; +#if SQLITE_MAX_MMAP_SIZE>0 + pNew->mmapSizeMax = sqlite3GlobalConfig.szMmap; +#endif + if( sqlite3_uri_boolean(((ctrlFlags & UNIXFILE_URI) ? zFilename : 0), + "psow", SQLITE_POWERSAFE_OVERWRITE) ){ + pNew->ctrlFlags |= UNIXFILE_PSOW; + } + if( strcmp(pVfs->zName,"unix-excl")==0 ){ + pNew->ctrlFlags |= UNIXFILE_EXCL; + } #if OS_VXWORKS pNew->pId = vxworksFindFileId(zFilename); if( pNew->pId==0 ){ - noLock = 1; + ctrlFlags |= UNIXFILE_NOLOCK; rc = SQLITE_NOMEM; } #endif - if( noLock ){ + if( ctrlFlags & UNIXFILE_NOLOCK ){ pLockingStyle = &nolockIoMethods; }else{ pLockingStyle = (**(finder_type*)pVfs->pAppData)(zFilename, pNew); #if SQLITE_ENABLE_LOCKING_STYLE /* Cache zFilename in the locking context (AFP and dotlock override) for @@ -25884,31 +32560,31 @@ #if defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE || pLockingStyle == &nfsIoMethods #endif ){ unixEnterMutex(); - rc = findLockInfo(pNew, &pNew->pLock, &pNew->pOpen); + rc = findInodeInfo(pNew, &pNew->pInode); if( rc!=SQLITE_OK ){ - /* If an error occured in findLockInfo(), close the file descriptor - ** immediately, before releasing the mutex. findLockInfo() may fail + /* If an error occurred in findInodeInfo(), close the file descriptor + ** immediately, before releasing the mutex. findInodeInfo() may fail ** in two scenarios: ** ** (a) A call to fstat() failed. ** (b) A malloc failed. ** ** Scenario (b) may only occur if the process is holding no other ** file descriptors open on the same file. If there were other file ** descriptors on this file, then no malloc would be required by - ** findLockInfo(). If this is the case, it is quite safe to close + ** findInodeInfo(). If this is the case, it is quite safe to close ** handle h - as it is guaranteed that no posix locks will be released ** by doing so. ** ** If scenario (a) caused the error then things are not so safe. 
The ** implicit assumption here is that if fstat() fails, things are in ** such bad shape that dropping a lock or two doesn't matter much. */ - close(h); + robust_close(pNew, h, __LINE__); h = -1; } unixLeaveMutex(); } @@ -25916,11 +32592,11 @@ else if( pLockingStyle == &afpIoMethods ){ /* AFP locking uses the file path so it needs to be included in ** the afpLockingContext. */ afpLockingContext *pCtx; - pNew->lockingContext = pCtx = sqlite3_malloc( sizeof(*pCtx) ); + pNew->lockingContext = pCtx = sqlite3_malloc64( sizeof(*pCtx) ); if( pCtx==0 ){ rc = SQLITE_NOMEM; }else{ /* NB: zFilename exists and remains valid until the file is closed ** according to requirement F11141. So we do not need to make a @@ -25927,14 +32603,14 @@ ** copy of the filename. */ pCtx->dbPath = zFilename; pCtx->reserved = 0; srandomdev(); unixEnterMutex(); - rc = findLockInfo(pNew, &pNew->pLock, &pNew->pOpen); + rc = findInodeInfo(pNew, &pNew->pInode); if( rc!=SQLITE_OK ){ sqlite3_free(pNew->lockingContext); - close(h); + robust_close(pNew, h, __LINE__); h = -1; } unixLeaveMutex(); } } @@ -25944,12 +32620,13 @@ /* Dotfile locking uses the file path so it needs to be included in ** the dotlockLockingContext */ char *zLockFile; int nFilename; + assert( zFilename!=0 ); nFilename = (int)strlen(zFilename) + 6; - zLockFile = (char *)sqlite3_malloc(nFilename); + zLockFile = (char *)sqlite3_malloc64(nFilename); if( zLockFile==0 ){ rc = SQLITE_NOMEM; }else{ sqlite3_snprintf(nFilename, zLockFile, "%s" DOTLOCK_SUFFIX, zFilename); } @@ -25960,137 +32637,101 @@ else if( pLockingStyle == &semIoMethods ){ /* Named semaphore locking uses the file path so it needs to be ** included in the semLockingContext */ unixEnterMutex(); - rc = findLockInfo(pNew, &pNew->pLock, &pNew->pOpen); - if( (rc==SQLITE_OK) && (pNew->pOpen->pSem==NULL) ){ - char *zSemName = pNew->pOpen->aSemName; + rc = findInodeInfo(pNew, &pNew->pInode); + if( (rc==SQLITE_OK) && (pNew->pInode->pSem==NULL) ){ + char *zSemName = pNew->pInode->aSemName; int n; sqlite3_snprintf(MAX_PATHNAME, zSemName, "/%s.sem", pNew->pId->zCanonicalName); for( n=1; zSemName[n]; n++ ) if( zSemName[n]=='/' ) zSemName[n] = '_'; - pNew->pOpen->pSem = sem_open(zSemName, O_CREAT, 0666, 1); - if( pNew->pOpen->pSem == SEM_FAILED ){ + pNew->pInode->pSem = sem_open(zSemName, O_CREAT, 0666, 1); + if( pNew->pInode->pSem == SEM_FAILED ){ rc = SQLITE_NOMEM; - pNew->pOpen->aSemName[0] = '\0'; + pNew->pInode->aSemName[0] = '\0'; } } unixLeaveMutex(); } #endif - pNew->lastErrno = 0; + storeLastErrno(pNew, 0); #if OS_VXWORKS if( rc!=SQLITE_OK ){ - if( h>=0 ) close(h); + if( h>=0 ) robust_close(pNew, h, __LINE__); h = -1; - unlink(zFilename); - isDelete = 0; + osUnlink(zFilename); + pNew->ctrlFlags |= UNIXFILE_DELETE; } - pNew->isDelete = isDelete; #endif if( rc!=SQLITE_OK ){ - if( dirfd>=0 ) close(dirfd); /* silent leak if fail, already in error */ - if( h>=0 ) close(h); + if( h>=0 ) robust_close(pNew, h, __LINE__); }else{ pNew->pMethod = pLockingStyle; OpenCounter(+1); + verifyDbFile(pNew); } return rc; } /* -** Open a file descriptor to the directory containing file zFilename. -** If successful, *pFd is set to the opened file descriptor and -** SQLITE_OK is returned. If an error occurs, either SQLITE_NOMEM -** or SQLITE_CANTOPEN is returned and *pFd is set to an undefined -** value. -** -** If SQLITE_OK is returned, the caller is responsible for closing -** the file descriptor *pFd using close(). +** Return the name of a directory in which to put temporary files. 
+** If no suitable temporary file directory can be found, return NULL. */ -static int openDirectory(const char *zFilename, int *pFd){ - int ii; - int fd = -1; - char zDirname[MAX_PATHNAME+1]; - - sqlite3_snprintf(MAX_PATHNAME, zDirname, "%s", zFilename); - for(ii=(int)strlen(zDirname); ii>1 && zDirname[ii]!='/'; ii--); - if( ii>0 ){ - zDirname[ii] = '\0'; - fd = open(zDirname, O_RDONLY|O_BINARY, 0); - if( fd>=0 ){ -#ifdef FD_CLOEXEC - fcntl(fd, F_SETFD, fcntl(fd, F_GETFD, 0) | FD_CLOEXEC); -#endif - OSTRACE3("OPENDIR %-3d %s\n", fd, zDirname); - } - } - *pFd = fd; - return (fd>=0?SQLITE_OK:SQLITE_CANTOPEN_BKPT); +static const char *unixTempFileDir(void){ + static const char *azDirs[] = { + 0, + 0, + "/var/tmp", + "/usr/tmp", + "/tmp", + "." + }; + unsigned int i; + struct stat buf; + const char *zDir = sqlite3_temp_directory; + + if( !azDirs[0] ) azDirs[0] = getenv("SQLITE_TMPDIR"); + if( !azDirs[1] ) azDirs[1] = getenv("TMPDIR"); + for(i=0; i<sizeof(azDirs)/sizeof(azDirs[0]); zDir=azDirs[i++]){ + if( zDir==0 ) continue; + if( osStat(zDir, &buf) ) continue; + if( !S_ISDIR(buf.st_mode) ) continue; + if( osAccess(zDir, 07) ) continue; + break; + } + return zDir; } /* ** Create a temporary file name in zBuf. zBuf must be allocated ** by the calling process and must be big enough to hold at least ** pVfs->mxPathname bytes. */ -static int getTempname(int nBuf, char *zBuf){ - static const char *azDirs[] = { - 0, - 0, - "/var/tmp", - "/usr/tmp", - "/tmp", - ".", - }; - static const unsigned char zChars[] = - "abcdefghijklmnopqrstuvwxyz" - "ABCDEFGHIJKLMNOPQRSTUVWXYZ" - "0123456789"; - unsigned int i, j; - struct stat buf; - const char *zDir = "."; +static int unixGetTempname(int nBuf, char *zBuf){ + const char *zDir; + int iLimit = 0; /* It's odd to simulate an io-error here, but really this is just ** using the io-error infrastructure to test that SQLite handles this ** function failing. */ SimulateIOError( return SQLITE_IOERR ); - azDirs[0] = sqlite3_temp_directory; - if (NULL == azDirs[1]) { - azDirs[1] = getenv("TMPDIR"); - } - - for(i=0; i<sizeof(azDirs)/sizeof(azDirs[0]); i++){ - if( azDirs[i]==0 ) continue; - if( stat(azDirs[i], &buf) ) continue; - if( !S_ISDIR(buf.st_mode) ) continue; - if( access(azDirs[i], 07) ) continue; - zDir = azDirs[i]; - break; - } - - /* Check that the output buffer is large enough for the temporary file - ** name. If it is not, return SQLITE_ERROR. - */ - if( (strlen(zDir) + strlen(SQLITE_TEMP_FILE_PREFIX) + 17) >= (size_t)nBuf ){ - return SQLITE_ERROR; - } - + zDir = unixTempFileDir(); do{ - sqlite3_snprintf(nBuf-17, zBuf, "%s/"SQLITE_TEMP_FILE_PREFIX, zDir); - j = (int)strlen(zBuf); - sqlite3_randomness(15, &zBuf[j]); - for(i=0; i<15; i++, j++){ - zBuf[j] = (char)zChars[ ((unsigned char)zBuf[j])%(sizeof(zChars)-1) ]; - } - zBuf[j] = 0; - }while( access(zBuf,0)==0 ); + u64 r; + sqlite3_randomness(sizeof(r), &r); + assert( nBuf>2 ); + zBuf[nBuf-2] = 0; + sqlite3_snprintf(nBuf, zBuf, "%s/"SQLITE_TEMP_FILE_PREFIX"%llx%c", + zDir, r, 0); + if( zBuf[nBuf-2]!=0 || (iLimit++)>10 ) return SQLITE_ERROR; + }while( osAccess(zBuf,0)==0 ); return SQLITE_OK; } #if SQLITE_ENABLE_LOCKING_STYLE && defined(__APPLE__) /* @@ -26133,23 +32774,23 @@ ** For this reason, if an error occurs in the stat() call here, it is ** ignored and -1 is returned. The caller will try to open a new file ** descriptor on the same path, fail, and return an error to SQLite. 
** ** Even if a subsequent open() call does succeed, the consequences of - ** not searching for a resusable file descriptor are not dire. */ - if( 0==stat(zPath, &sStat) ){ - struct unixOpenCnt *pOpen; + ** not searching for a reusable file descriptor are not dire. */ + if( 0==osStat(zPath, &sStat) ){ + unixInodeInfo *pInode; unixEnterMutex(); - pOpen = openList; - while( pOpen && (pOpen->fileId.dev!=sStat.st_dev - || pOpen->fileId.ino!=sStat.st_ino) ){ - pOpen = pOpen->pNext; + pInode = inodeList; + while( pInode && (pInode->fileId.dev!=sStat.st_dev + || pInode->fileId.ino!=sStat.st_ino) ){ + pInode = pInode->pNext; } - if( pOpen ){ + if( pInode ){ UnixUnusedFd **pp; - for(pp=&pOpen->pUnused; *pp && (*pp)->flags!=flags; pp=&((*pp)->pNext)); + for(pp=&pInode->pUnused; *pp && (*pp)->flags!=flags; pp=&((*pp)->pNext)); pUnused = *pp; if( pUnused ){ *pp = pUnused->pNext; } } @@ -26156,10 +32797,89 @@ unixLeaveMutex(); } #endif /* if !OS_VXWORKS */ return pUnused; } + +/* +** This function is called by unixOpen() to determine the unix permissions +** to create new files with. If no error occurs, then SQLITE_OK is returned +** and a value suitable for passing as the third argument to open(2) is +** written to *pMode. If an IO error occurs, an SQLite error code is +** returned and the value of *pMode is not modified. +** +** In most cases, this routine sets *pMode to 0, which will become +** an indication to robust_open() to create the file using +** SQLITE_DEFAULT_FILE_PERMISSIONS adjusted by the umask. +** But if the file being opened is a WAL or regular journal file, then +** this function queries the file-system for the permissions on the +** corresponding database file and sets *pMode to this value. Whenever +** possible, WAL and journal files are created using the same permissions +** as the associated database file. +** +** If the SQLITE_ENABLE_8_3_NAMES option is enabled, then the +** original filename is unavailable. But 8_3_NAMES is only used for +** FAT filesystems and permissions do not matter there, so just use +** the default permissions. +*/ +static int findCreateFileMode( + const char *zPath, /* Path of file (possibly) being created */ + int flags, /* Flags passed as 4th argument to xOpen() */ + mode_t *pMode, /* OUT: Permissions to open file with */ + uid_t *pUid, /* OUT: uid to set on the file */ + gid_t *pGid /* OUT: gid to set on the file */ +){ + int rc = SQLITE_OK; /* Return Code */ + *pMode = 0; + *pUid = 0; + *pGid = 0; + if( flags & (SQLITE_OPEN_WAL|SQLITE_OPEN_MAIN_JOURNAL) ){ + char zDb[MAX_PATHNAME+1]; /* Database file path */ + int nDb; /* Number of valid bytes in zDb */ + struct stat sStat; /* Output of stat() on database file */ + + /* zPath is a path to a WAL or journal file. The following block derives + ** the path to the associated database file from zPath. This block handles + ** the following naming conventions: + ** + ** "<path to db>-journal" + ** "<path to db>-wal" + ** "<path to db>-journalNN" + ** "<path to db>-walNN" + ** + ** where NN is a decimal number. The NN naming schemes are + ** used by the test_multiplex.c module. + */ + nDb = sqlite3Strlen30(zPath) - 1; + while( zPath[nDb]!='-' ){ +#ifndef SQLITE_ENABLE_8_3_NAMES + /* In the normal case (8+3 filenames disabled) the journal filename + ** is guaranteed to contain a '-' character. */ + assert( nDb>0 ); + assert( sqlite3Isalnum(zPath[nDb]) ); +#else + /* If 8+3 names are possible, then the journal file might not contain + ** a '-' character. So check for that case and return early. 
*/ + if( nDb==0 || zPath[nDb]=='.' ) return SQLITE_OK; +#endif + nDb--; + } + memcpy(zDb, zPath, nDb); + zDb[nDb] = '\0'; + + if( 0==osStat(zDb, &sStat) ){ + *pMode = sStat.st_mode & 0777; + *pUid = sStat.st_uid; + *pGid = sStat.st_gid; + }else{ + rc = SQLITE_IOERR_FSTAT; + } + }else if( flags & SQLITE_OPEN_DELETEONCLOSE ){ + *pMode = 0600; + } + return rc; +} /* ** Open the file zPath. ** ** Previously, the SQLite OS layer used three functions in place of this @@ -26188,37 +32908,42 @@ int flags, /* Input flags to control the opening */ int *pOutFlags /* Output flags returned to SQLite core */ ){ unixFile *p = (unixFile *)pFile; int fd = -1; /* File descriptor returned by open() */ - int dirfd = -1; /* Directory file descriptor */ int openFlags = 0; /* Flags to pass to open() */ int eType = flags&0xFFFFFF00; /* Type of file to open */ int noLock; /* True to omit locking primitives */ int rc = SQLITE_OK; /* Function Return Code */ + int ctrlFlags = 0; /* UNIXFILE_* flags */ int isExclusive = (flags & SQLITE_OPEN_EXCLUSIVE); int isDelete = (flags & SQLITE_OPEN_DELETEONCLOSE); int isCreate = (flags & SQLITE_OPEN_CREATE); int isReadonly = (flags & SQLITE_OPEN_READONLY); int isReadWrite = (flags & SQLITE_OPEN_READWRITE); #if SQLITE_ENABLE_LOCKING_STYLE int isAutoProxy = (flags & SQLITE_OPEN_AUTOPROXY); #endif +#if defined(__APPLE__) || SQLITE_ENABLE_LOCKING_STYLE + struct statfs fsInfo; +#endif /* If creating a master or main-file journal, this function will open ** a file-descriptor on the directory too. The first time unixSync() ** is called the directory file descriptor will be fsync()ed and close()d. */ - int isOpenDirectory = (isCreate && - (eType==SQLITE_OPEN_MASTER_JOURNAL || eType==SQLITE_OPEN_MAIN_JOURNAL) - ); + int syncDir = (isCreate && ( + eType==SQLITE_OPEN_MASTER_JOURNAL + || eType==SQLITE_OPEN_MAIN_JOURNAL + || eType==SQLITE_OPEN_WAL + )); /* If argument zPath is a NULL pointer, this function is required to open ** a temporary file. Use this buffer to store the file name in. */ - char zTmpname[MAX_PATHNAME+1]; + char zTmpname[MAX_PATHNAME+2]; const char *zName = zPath; /* Check the following statements are true: ** ** (a) Exactly one of the READWRITE and READONLY flags must be set, and @@ -26229,45 +32954,66 @@ assert((isReadonly==0 || isReadWrite==0) && (isReadWrite || isReadonly)); assert(isCreate==0 || isReadWrite); assert(isExclusive==0 || isCreate); assert(isDelete==0 || isCreate); - /* The main DB, main journal, and master journal are never automatically - ** deleted. Nor are they ever temporary files. */ + /* The main DB, main journal, WAL file and master journal are never + ** automatically deleted. Nor are they ever temporary files. */ assert( (!isDelete && zName) || eType!=SQLITE_OPEN_MAIN_DB ); assert( (!isDelete && zName) || eType!=SQLITE_OPEN_MAIN_JOURNAL ); assert( (!isDelete && zName) || eType!=SQLITE_OPEN_MASTER_JOURNAL ); + assert( (!isDelete && zName) || eType!=SQLITE_OPEN_WAL ); /* Assert that the upper layer has set one of the "file-type" flags. */ assert( eType==SQLITE_OPEN_MAIN_DB || eType==SQLITE_OPEN_TEMP_DB || eType==SQLITE_OPEN_MAIN_JOURNAL || eType==SQLITE_OPEN_TEMP_JOURNAL || eType==SQLITE_OPEN_SUBJOURNAL || eType==SQLITE_OPEN_MASTER_JOURNAL - || eType==SQLITE_OPEN_TRANSIENT_DB + || eType==SQLITE_OPEN_TRANSIENT_DB || eType==SQLITE_OPEN_WAL ); + + /* Detect a pid change and reset the PRNG. There is a race condition + ** here such that two or more threads all trying to open databases at + ** the same instant might all reset the PRNG. 
But multiple resets + ** are harmless. + */ + if( randomnessPid!=osGetpid(0) ){ + randomnessPid = osGetpid(0); + sqlite3_randomness(0,0); + } memset(p, 0, sizeof(unixFile)); if( eType==SQLITE_OPEN_MAIN_DB ){ UnixUnusedFd *pUnused; pUnused = findReusableFd(zName, flags); if( pUnused ){ fd = pUnused->fd; }else{ - pUnused = sqlite3_malloc(sizeof(*pUnused)); + pUnused = sqlite3_malloc64(sizeof(*pUnused)); if( !pUnused ){ return SQLITE_NOMEM; } } p->pUnused = pUnused; + + /* Database filenames are double-zero terminated if they are not + ** URIs with parameters. Hence, they can always be passed into + ** sqlite3_uri_parameter(). */ + assert( (flags & SQLITE_OPEN_URI) || zName[strlen(zName)+1]==0 ); + }else if( !zName ){ /* If zName is NULL, the upper layer is requesting a temp file. */ - assert(isDelete && !isOpenDirectory); - rc = getTempname(MAX_PATHNAME+1, zTmpname); + assert(isDelete && !syncDir); + rc = unixGetTempname(pVfs->mxPathname, zTmpname); if( rc!=SQLITE_OK ){ return rc; } zName = zTmpname; + + /* Generated temporary filenames are always double-zero terminated + ** for use by sqlite3_uri_parameter(). */ + assert( zName[strlen(zName)+1]==0 ); } /* Determine the value of the flags parameter passed to POSIX function ** open(). These must be calculated even if open() is not called, as ** they may be stored as part of the file handle and used by the @@ -26277,25 +33023,43 @@ if( isCreate ) openFlags |= O_CREAT; if( isExclusive ) openFlags |= (O_EXCL|O_NOFOLLOW); openFlags |= (O_LARGEFILE|O_BINARY); if( fd<0 ){ - mode_t openMode = (isDelete?0600:SQLITE_DEFAULT_FILE_PERMISSIONS); - fd = open(zName, openFlags, openMode); - OSTRACE4("OPENX %-3d %s 0%o\n", fd, zName, openFlags); - if( fd<0 && errno!=EISDIR && isReadWrite && !isExclusive ){ + mode_t openMode; /* Permissions to create file with */ + uid_t uid; /* Userid for the file */ + gid_t gid; /* Groupid for the file */ + rc = findCreateFileMode(zName, flags, &openMode, &uid, &gid); + if( rc!=SQLITE_OK ){ + assert( !p->pUnused ); + assert( eType==SQLITE_OPEN_WAL || eType==SQLITE_OPEN_MAIN_JOURNAL ); + return rc; + } + fd = robust_open(zName, openFlags, openMode); + OSTRACE(("OPENX %-3d %s 0%o\n", fd, zName, openFlags)); + assert( !isExclusive || (openFlags & O_CREAT)!=0 ); + if( fd<0 && errno!=EISDIR && isReadWrite ){ /* Failed to open the file for read/write access. Try read-only. */ flags &= ~(SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE); openFlags &= ~(O_RDWR|O_CREAT); flags |= SQLITE_OPEN_READONLY; openFlags |= O_RDONLY; - fd = open(zName, openFlags, openMode); + isReadonly = 1; + fd = robust_open(zName, openFlags, openMode); } if( fd<0 ){ - rc = SQLITE_CANTOPEN_BKPT; + rc = unixLogError(SQLITE_CANTOPEN_BKPT, "open", zName); goto open_finished; } + + /* If this process is running as root and if creating a new rollback + ** journal or WAL file, set the ownership of the journal or WAL to be + ** the same as the original database. 
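+  ** For example (hypothetical scenario): if the database file is owned
+  ** by uid 1000 but this server process runs as root, the freshly
+  ** created "-wal" file is handed back to uid 1000 so that the non-root
+  ** owner can still open the database later.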
+ */ + if( flags & (SQLITE_OPEN_WAL|SQLITE_OPEN_MAIN_JOURNAL) ){ + robustFchown(fd, uid, gid); + } } assert( fd>=0 ); if( pOutFlags ){ *pOutFlags = flags; } @@ -26306,53 +33070,50 @@ } if( isDelete ){ #if OS_VXWORKS zPath = zName; +#elif defined(SQLITE_UNLINK_AFTER_CLOSE) + zPath = sqlite3_mprintf("%s", zName); + if( zPath==0 ){ + robust_close(p, fd, __LINE__); + return SQLITE_NOMEM; + } #else - unlink(zName); + osUnlink(zName); #endif } #if SQLITE_ENABLE_LOCKING_STYLE else{ p->openFlags = openFlags; } #endif - if( isOpenDirectory ){ - rc = openDirectory(zPath, &dirfd); - if( rc!=SQLITE_OK ){ - /* It is safe to close fd at this point, because it is guaranteed not - ** to be open on a database file. If it were open on a database file, - ** it would not be safe to close as this would release any locks held - ** on the file by this process. */ - assert( eType!=SQLITE_OPEN_MAIN_DB ); - close(fd); /* silently leak if fail, already in error */ - goto open_finished; - } - } - -#ifdef FD_CLOEXEC - fcntl(fd, F_SETFD, fcntl(fd, F_GETFD, 0) | FD_CLOEXEC); -#endif - noLock = eType!=SQLITE_OPEN_MAIN_DB; #if defined(__APPLE__) || SQLITE_ENABLE_LOCKING_STYLE - struct statfs fsInfo; if( fstatfs(fd, &fsInfo) == -1 ){ - ((unixFile*)pFile)->lastErrno = errno; - if( dirfd>=0 ) close(dirfd); /* silently leak if fail, in error */ - close(fd); /* silently leak if fail, in error */ + storeLastErrno(p, errno); + robust_close(p, fd, __LINE__); return SQLITE_IOERR_ACCESS; } if (0 == strncmp("msdos", fsInfo.f_fstypename, 5)) { ((unixFile*)pFile)->fsFlags |= SQLITE_FSFLAGS_IS_MSDOS; } + if (0 == strncmp("exfat", fsInfo.f_fstypename, 5)) { + ((unixFile*)pFile)->fsFlags |= SQLITE_FSFLAGS_IS_MSDOS; + } #endif - + + /* Set up appropriate ctrlFlags */ + if( isDelete ) ctrlFlags |= UNIXFILE_DELETE; + if( isReadonly ) ctrlFlags |= UNIXFILE_RDONLY; + if( noLock ) ctrlFlags |= UNIXFILE_NOLOCK; + if( syncDir ) ctrlFlags |= UNIXFILE_DIRSYNC; + if( flags & SQLITE_OPEN_URI ) ctrlFlags |= UNIXFILE_URI; + #if SQLITE_ENABLE_LOCKING_STYLE #if SQLITE_PREFER_PROXY_LOCKING isAutoProxy = 1; #endif if( isAutoProxy && (zPath!=NULL) && (!noLock) && pVfs->xOpen ){ @@ -26362,31 +33123,14 @@ /* SQLITE_FORCE_PROXY_LOCKING==1 means force always use proxy, 0 means ** never use proxy, NULL means use proxy for non-local files only. */ if( envforce!=NULL ){ useProxy = atoi(envforce)>0; }else{ - struct statfs fsInfo; - if( statfs(zPath, &fsInfo) == -1 ){ - /* In theory, the close(fd) call is sub-optimal. If the file opened - ** with fd is a database file, and there are other connections open - ** on that file that are currently holding advisory locks on it, - ** then the call to close() will cancel those locks. In practice, - ** we're assuming that statfs() doesn't fail very often. At least - ** not while other file descriptors opened by the same process on - ** the same file are working. */ - p->lastErrno = errno; - if( dirfd>=0 ){ - close(dirfd); /* silently leak if fail, in error */ - } - close(fd); /* silently leak if fail, in error */ - rc = SQLITE_IOERR_ACCESS; - goto open_finished; - } useProxy = !(fsInfo.f_flags&MNT_LOCAL); } if( useProxy ){ - rc = fillInUnixFile(pVfs, fd, dirfd, pFile, zPath, noLock, isDelete); + rc = fillInUnixFile(pVfs, fd, pFile, zPath, ctrlFlags); if( rc==SQLITE_OK ){ rc = proxyTransformUnixFile((unixFile*)pFile, ":auto:"); if( rc!=SQLITE_OK ){ /* Use unixClose to clean up the resources added in fillInUnixFile ** and clear all the structure's references. 
Specifically, @@ -26399,11 +33143,12 @@ goto open_finished; } } #endif - rc = fillInUnixFile(pVfs, fd, dirfd, pFile, zPath, noLock, isDelete); + rc = fillInUnixFile(pVfs, fd, pFile, zPath, ctrlFlags); + open_finished: if( rc!=SQLITE_OK ){ sqlite3_free(p->pUnused); } return rc; @@ -26420,35 +33165,42 @@ int dirSync /* If true, fsync() directory after deleting file */ ){ int rc = SQLITE_OK; UNUSED_PARAMETER(NotUsed); SimulateIOError(return SQLITE_IOERR_DELETE); - unlink(zPath); -#ifndef SQLITE_DISABLE_DIRSYNC - if( dirSync ){ - int fd; - rc = openDirectory(zPath, &fd); - if( rc==SQLITE_OK ){ + if( osUnlink(zPath)==(-1) ){ + if( errno==ENOENT #if OS_VXWORKS - if( fsync(fd)==-1 ) -#else - if( fsync(fd) ) + || osAccess(zPath,0)!=0 #endif - { - rc = SQLITE_IOERR_DIR_FSYNC; + ){ + rc = SQLITE_IOERR_DELETE_NOENT; + }else{ + rc = unixLogError(SQLITE_IOERR_DELETE, "unlink", zPath); + } + return rc; + } +#ifndef SQLITE_DISABLE_DIRSYNC + if( (dirSync & 1)!=0 ){ + int fd; + rc = osOpenDirectory(zPath, &fd); + if( rc==SQLITE_OK ){ + if( full_fsync(fd,0,0) ){ + rc = unixLogError(SQLITE_IOERR_DIR_FSYNC, "fsync", zPath); } - if( close(fd)&&!rc ){ - rc = SQLITE_IOERR_DIR_CLOSE; - } + robust_close(0, fd, __LINE__); + }else{ + assert( rc==SQLITE_CANTOPEN ); + rc = SQLITE_OK; } } #endif return rc; } /* -** Test the existance of or access permissions of file zPath. The +** Test the existence of or access permissions of file zPath. The ** test performed depends on the value of flags: ** ** SQLITE_ACCESS_EXISTS: Return 1 if the file exists ** SQLITE_ACCESS_READWRITE: Return 1 if the file is read and writable. ** SQLITE_ACCESS_READONLY: Return 1 if the file is readable. @@ -26459,31 +33211,53 @@ sqlite3_vfs *NotUsed, /* The VFS containing this xAccess method */ const char *zPath, /* Path of the file to examine */ int flags, /* What do we want to learn about the zPath file? */ int *pResOut /* Write result boolean here */ ){ - int amode = 0; UNUSED_PARAMETER(NotUsed); SimulateIOError( return SQLITE_IOERR_ACCESS; ); - switch( flags ){ - case SQLITE_ACCESS_EXISTS: - amode = F_OK; - break; - case SQLITE_ACCESS_READWRITE: - amode = W_OK|R_OK; - break; - case SQLITE_ACCESS_READ: - amode = R_OK; - break; - - default: - assert(!"Invalid flags argument"); - } - *pResOut = (access(zPath, amode)==0); + assert( pResOut!=0 ); + + /* The spec says there are three possible values for flags. But only + ** two of them are actually used */ + assert( flags==SQLITE_ACCESS_EXISTS || flags==SQLITE_ACCESS_READWRITE ); + + if( flags==SQLITE_ACCESS_EXISTS ){ + struct stat buf; + *pResOut = (0==osStat(zPath, &buf) && buf.st_size>0); + }else{ + *pResOut = osAccess(zPath, W_OK|R_OK)==0; + } return SQLITE_OK; } +/* +** +*/ +static int mkFullPathname( + const char *zPath, /* Input path */ + char *zOut, /* Output buffer */ + int nOut /* Allocated size of buffer zOut */ +){ + int nPath = sqlite3Strlen30(zPath); + int iOff = 0; + if( zPath[0]!='/' ){ + if( osGetcwd(zOut, nOut-2)==0 ){ + return unixLogError(SQLITE_CANTOPEN_BKPT, "getcwd", zPath); + } + iOff = sqlite3Strlen30(zOut); + zOut[iOff++] = '/'; + } + if( (iOff+nPath+1)>nOut ){ + /* SQLite assumes that xFullPathname() nul-terminates the output buffer + ** even if it returns an error. */ + zOut[iOff] = '\0'; + return SQLITE_CANTOPEN_BKPT; + } + sqlite3_snprintf(nOut-iOff, &zOut[iOff], "%s", zPath); + return SQLITE_OK; +} /* ** Turn a relative pathname into a full pathname. The relative path ** is stored as a nul-terminated string in the buffer pointed to by ** zPath. 
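The hunk below rewrites unixFullPathname() in terms of the new
mkFullPathname() helper added above, and follows symbolic links when both
HAVE_READLINK and HAVE_LSTAT are defined.  Stripped of SQLite's own buffers
and error codes, the core idea of the helper can be sketched roughly as
follows (an illustrative standalone snippet; the function name and the
error convention here are invented for the example):

  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* Make zPath absolute by prefixing the current working directory when
  ** it is relative.  Returns 0 on success, non-zero on error or if the
  ** result does not fit in the nOut-byte buffer zOut. */
  static int sketchFullPathname(const char *zPath, char *zOut, int nOut){
    int n;
    if( zPath[0]=='/' ){
      n = snprintf(zOut, (size_t)nOut, "%s", zPath);
    }else{
      char zCwd[4096];
      if( getcwd(zCwd, sizeof(zCwd))==0 ) return 1;   /* cwd unavailable */
      n = snprintf(zOut, (size_t)nOut, "%s/%s", zCwd, zPath);
    }
    return (n<0 || n>=nOut);        /* non-zero if truncated or failed */
  }

A caller might invoke it as sketchFullPathname("data/app.db", zBuf,
sizeof(zBuf)) with a local char zBuf[4096] and then use zBuf only when the
return value is zero.
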
@@ -26496,33 +33270,85 @@ sqlite3_vfs *pVfs, /* Pointer to vfs object */ const char *zPath, /* Possibly relative input path */ int nOut, /* Size of output buffer in bytes */ char *zOut /* Output buffer */ ){ +#if !defined(HAVE_READLINK) || !defined(HAVE_LSTAT) + return mkFullPathname(zPath, zOut, nOut); +#else + int rc = SQLITE_OK; + int nByte; + int nLink = 1; /* Number of symbolic links followed so far */ + const char *zIn = zPath; /* Input path for each iteration of loop */ + char *zDel = 0; + + assert( pVfs->mxPathname==MAX_PATHNAME ); + UNUSED_PARAMETER(pVfs); /* It's odd to simulate an io-error here, but really this is just ** using the io-error infrastructure to test that SQLite handles this ** function failing. This function could fail if, for example, the ** current working directory has been unlinked. */ SimulateIOError( return SQLITE_ERROR ); - assert( pVfs->mxPathname==MAX_PATHNAME ); - UNUSED_PARAMETER(pVfs); - - zOut[nOut-1] = '\0'; - if( zPath[0]=='/' ){ - sqlite3_snprintf(nOut, zOut, "%s", zPath); - }else{ - int nCwd; - if( getcwd(zOut, nOut-1)==0 ){ - return SQLITE_CANTOPEN_BKPT; - } - nCwd = (int)strlen(zOut); - sqlite3_snprintf(nOut-nCwd, &zOut[nCwd], "/%s", zPath); - } - return SQLITE_OK; + do { + + /* Call stat() on path zIn. Set bLink to true if the path is a symbolic + ** link, or false otherwise. */ + int bLink = 0; + struct stat buf; + if( osLstat(zIn, &buf)!=0 ){ + if( errno!=ENOENT ){ + rc = unixLogError(SQLITE_CANTOPEN_BKPT, "lstat", zIn); + } + }else{ + bLink = S_ISLNK(buf.st_mode); + } + + if( bLink ){ + if( zDel==0 ){ + zDel = sqlite3_malloc(nOut); + if( zDel==0 ) rc = SQLITE_NOMEM; + }else if( ++nLink>SQLITE_MAX_SYMLINKS ){ + rc = SQLITE_CANTOPEN_BKPT; + } + + if( rc==SQLITE_OK ){ + nByte = osReadlink(zIn, zDel, nOut-1); + if( nByte<0 ){ + rc = unixLogError(SQLITE_CANTOPEN_BKPT, "readlink", zIn); + }else{ + if( zDel[0]!='/' ){ + int n; + for(n = sqlite3Strlen30(zIn); n>0 && zIn[n-1]!='/'; n--); + if( nByte+n+1>nOut ){ + rc = SQLITE_CANTOPEN_BKPT; + }else{ + memmove(&zDel[n], zDel, nByte+1); + memcpy(zDel, zIn, n); + nByte += n; + } + } + zDel[nByte] = '\0'; + } + } + + zIn = zDel; + } + + assert( rc!=SQLITE_OK || zIn!=zOut || zIn[0]=='/' ); + if( rc==SQLITE_OK && zIn!=zOut ){ + rc = mkFullPathname(zIn, zOut, nOut); + } + if( bLink==0 ) break; + zIn = zOut; + }while( rc==SQLITE_OK ); + + sqlite3_free(zDel); + return rc; +#endif /* HAVE_READLINK && HAVE_LSTAT */ } #ifndef SQLITE_OMIT_LOAD_EXTENSION /* @@ -26541,11 +33367,11 @@ ** message is available, it is written to zBufOut. If no error message ** is available, zBufOut is left unmodified and SQLite uses a default ** error message. */ static void unixDlError(sqlite3_vfs *NotUsed, int nBuf, char *zBufOut){ - char *zErr; + const char *zErr; UNUSED_PARAMETER(NotUsed); unixEnterMutex(); zErr = dlerror(); if( zErr ){ sqlite3_snprintf(nBuf, zBufOut, "%s", zErr); @@ -26604,25 +33430,25 @@ ** When testing, initializing zBuf[] to zero is all we do. That means ** that we always use the same random number sequence. This makes the ** tests repeatable. 
*/ memset(zBuf, 0, nBuf); -#if !defined(SQLITE_TEST) + randomnessPid = osGetpid(0); +#if !defined(SQLITE_TEST) && !defined(SQLITE_OMIT_RANDOMNESS) { - int pid, fd; - fd = open("/dev/urandom", O_RDONLY); + int fd, got; + fd = robust_open("/dev/urandom", O_RDONLY, 0); if( fd<0 ){ time_t t; time(&t); memcpy(zBuf, &t, sizeof(t)); - pid = getpid(); - memcpy(&zBuf[sizeof(t)], &pid, sizeof(pid)); - assert( sizeof(t)+sizeof(pid)<=(size_t)nBuf ); - nBuf = sizeof(t) + sizeof(pid); + memcpy(&zBuf[sizeof(t)], &randomnessPid, sizeof(randomnessPid)); + assert( sizeof(t)+sizeof(randomnessPid)<=(size_t)nBuf ); + nBuf = sizeof(t) + sizeof(randomnessPid); }else{ - nBuf = read(fd, zBuf, nBuf); - close(fd); + do{ got = osRead(fd, zBuf, nBuf); }while( got<0 && errno==EINTR ); + robust_close(0, fd, __LINE__); } } #endif return nBuf; } @@ -26664,43 +33490,65 @@ */ #ifdef SQLITE_TEST SQLITE_API int sqlite3_current_time = 0; /* Fake system time in seconds since 1970. */ #endif +/* +** Find the current time (in Universal Coordinated Time). Write into *piNow +** the current time and date as a Julian Day number times 86_400_000. In +** other words, write into *piNow the number of milliseconds since the Julian +** epoch of noon in Greenwich on November 24, 4714 B.C according to the +** proleptic Gregorian calendar. +** +** On success, return SQLITE_OK. Return SQLITE_ERROR if the time and date +** cannot be found. +*/ +static int unixCurrentTimeInt64(sqlite3_vfs *NotUsed, sqlite3_int64 *piNow){ + static const sqlite3_int64 unixEpoch = 24405875*(sqlite3_int64)8640000; + int rc = SQLITE_OK; +#if defined(NO_GETTOD) + time_t t; + time(&t); + *piNow = ((sqlite3_int64)t)*1000 + unixEpoch; +#elif OS_VXWORKS + struct timespec sNow; + clock_gettime(CLOCK_REALTIME, &sNow); + *piNow = unixEpoch + 1000*(sqlite3_int64)sNow.tv_sec + sNow.tv_nsec/1000000; +#else + struct timeval sNow; + (void)gettimeofday(&sNow, 0); /* Cannot fail given valid arguments */ + *piNow = unixEpoch + 1000*(sqlite3_int64)sNow.tv_sec + sNow.tv_usec/1000; +#endif + +#ifdef SQLITE_TEST + if( sqlite3_current_time ){ + *piNow = 1000*(sqlite3_int64)sqlite3_current_time + unixEpoch; + } +#endif + UNUSED_PARAMETER(NotUsed); + return rc; +} + +#ifndef SQLITE_OMIT_DEPRECATED /* ** Find the current time (in Universal Coordinated Time). Write the ** current time and date as a Julian Day number into *prNow and ** return 0. Return 1 if the time and date cannot be found. */ static int unixCurrentTime(sqlite3_vfs *NotUsed, double *prNow){ -#if defined(SQLITE_OMIT_FLOATING_POINT) - time_t t; - time(&t); - *prNow = (((sqlite3_int64)t)/8640 + 24405875)/10; -#elif defined(NO_GETTOD) - time_t t; - time(&t); - *prNow = t/86400.0 + 2440587.5; -#elif OS_VXWORKS - struct timespec sNow; - clock_gettime(CLOCK_REALTIME, &sNow); - *prNow = 2440587.5 + sNow.tv_sec/86400.0 + sNow.tv_nsec/86400000000000.0; -#else - struct timeval sNow; - gettimeofday(&sNow, 0); - *prNow = 2440587.5 + sNow.tv_sec/86400.0 + sNow.tv_usec/86400000000.0; -#endif - -#ifdef SQLITE_TEST - if( sqlite3_current_time ){ - *prNow = sqlite3_current_time/86400.0 + 2440587.5; - } -#endif + sqlite3_int64 i = 0; + int rc; UNUSED_PARAMETER(NotUsed); - return 0; + rc = unixCurrentTimeInt64(0, &i); + *prNow = i/86400000.0; + return rc; } +#else +# define unixCurrentTime 0 +#endif +#ifndef SQLITE_OMIT_DEPRECATED /* ** We added the xGetLastError() method with the intention of providing ** better low-level error messages when operating-system problems come up ** during SQLite operation. 
But so far, none of that has been implemented ** in the core. So this routine is never called. For now, it is merely @@ -26710,10 +33558,14 @@ UNUSED_PARAMETER(NotUsed); UNUSED_PARAMETER(NotUsed2); UNUSED_PARAMETER(NotUsed3); return 0; } +#else +# define unixGetLastError 0 +#endif + /* ************************ End of sqlite3_vfs methods *************************** ******************************************************************************/ @@ -26739,11 +33591,11 @@ ** with _IOWR('z', 23, struct ByteRangeLockPB2) to track the same 5 states. ** To simulate a F_RDLCK on the shared range, on AFP a randomly selected ** address in the shared range is taken for a SHARED lock, the entire ** shared range is taken for an EXCLUSIVE lock): ** -** PENDING_BYTE 0x40000000 +** PENDING_BYTE 0x40000000 ** RESERVED_BYTE 0x40000001 ** SHARED_RANGE 0x40000002 -> 0x40000200 ** ** This works well on the local file system, but shows a nearly 100x ** slowdown in read performance on AFP because the AFP client disables @@ -26765,13 +33617,14 @@ ** Using proxy locks ** ----------------- ** ** C APIs ** -** sqlite3_file_control(db, dbname, SQLITE_SET_LOCKPROXYFILE, +** sqlite3_file_control(db, dbname, SQLITE_FCNTL_SET_LOCKPROXYFILE, ** <proxy_path> | ":auto:"); -** sqlite3_file_control(db, dbname, SQLITE_GET_LOCKPROXYFILE, &<proxy_path>); +** sqlite3_file_control(db, dbname, SQLITE_FCNTL_GET_LOCKPROXYFILE, +** &<proxy_path>); ** ** ** SQL pragmas ** ** PRAGMA [database.]lock_proxy_file=<proxy_path> | :auto: @@ -26808,11 +33661,11 @@ ** by taking an sqlite-style shared lock on the conch file, reading the ** contents and comparing the host's unique host ID (see below) and lock ** proxy path against the values stored in the conch. The conch file is ** stored in the same directory as the database file and the file name ** is patterned after the database file name as ".<databasename>-conch". -** If the conch file does not exist, or it's contents do not match the +** If the conch file does not exist, or its contents do not match the ** host ID and/or proxy path, then the lock is escalated to an exclusive ** lock and the conch file contents is updated with the host ID and proxy ** path and the lock is downgraded to a shared lock again. If the conch ** is held by another process (with a shared lock), the exclusive lock ** will fail and SQLITE_BUSY is returned. @@ -26860,11 +33713,11 @@ ** ** As mentioned above, when compiled with SQLITE_PREFER_PROXY_LOCKING, ** setting the environment variable SQLITE_FORCE_PROXY_LOCKING to 1 will ** force proxy locking to be used for every database file opened, and 0 ** will force automatic proxy locking to be disabled for all database -** files (explicity calling the SQLITE_SET_LOCKPROXYFILE pragma or +** files (explicitly calling the SQLITE_FCNTL_SET_LOCKPROXYFILE pragma or ** sqlite_file_control API is not affected by SQLITE_FORCE_PROXY_LOCKING). 
*/ /* ** Proxy locking is only available on MacOSX @@ -26881,10 +33734,11 @@ char *conchFilePath; /* Name of the conch file */ unixFile *lockProxy; /* Open proxy lock file */ char *lockProxyPath; /* Name of the proxy lock file */ char *dbPath; /* Name of the open file */ int conchHeld; /* 1 if the conch is held, -1 if lockless */ + int nFails; /* Number of conch taking failures */ void *oldLockingContext; /* Original lockingcontext to restore on close */ sqlite3_io_methods const *pOldMethod; /* Original I/O methods for close */ }; /* @@ -26901,12 +33755,12 @@ len = strlcpy(lPath, LOCKPROXYDIR, maxLen); #else # ifdef _CS_DARWIN_USER_TEMP_DIR { if( !confstr(_CS_DARWIN_USER_TEMP_DIR, lPath, maxLen) ){ - OSTRACE4("GETLOCKPATH failed %s errno=%d pid=%d\n", - lPath, errno, getpid()); + OSTRACE(("GETLOCKPATH failed %s errno=%d pid=%d\n", + lPath, errno, osGetpid(0))); return SQLITE_IOERR_LOCK; } len = strlcat(lPath, "sqliteplocks", maxLen); } # else @@ -26918,17 +33772,17 @@ len = strlcat(lPath, "/", maxLen); } /* transform the db path to a unique cache name */ dbLen = (int)strlen(dbPath); - for( i=0; i<dbLen && (i+len+7)<maxLen; i++){ + for( i=0; i<dbLen && (i+len+7)<(int)maxLen; i++){ char c = dbPath[i]; lPath[i+len] = (c=='/')?'_':c; } lPath[i+len]='\0'; strlcat(lPath, ":auto:", maxLen); - OSTRACE3("GETLOCKPATH proxy lock path=%s pid=%d\n", lPath, getpid()); + OSTRACE(("GETLOCKPATH proxy lock path=%s pid=%d\n", lPath, osGetpid(0))); return SQLITE_OK; } /* ** Creates the lock file and any missing directories in lockPath @@ -26946,25 +33800,25 @@ if( lockPath[i] == '/' && (i - start > 0) ){ /* only mkdir if leaf dir != "." or "/" or ".." */ if( i-start>2 || (i-start==1 && buf[start] != '.' && buf[start] != '/') || (i-start==2 && buf[start] != '.' && buf[start+1] != '.') ){ buf[i]='\0'; - if( mkdir(buf, SQLITE_DEFAULT_PROXYDIR_PERMISSIONS) ){ + if( osMkdir(buf, SQLITE_DEFAULT_PROXYDIR_PERMISSIONS) ){ int err=errno; if( err!=EEXIST ) { - OSTRACE5("CREATELOCKPATH FAILED creating %s, " + OSTRACE(("CREATELOCKPATH FAILED creating %s, " "'%s' proxy lock path=%s pid=%d\n", - buf, strerror(err), lockPath, getpid()); + buf, strerror(err), lockPath, osGetpid(0))); return err; } } } start=i+1; } buf[i] = lockPath[i]; } - OSTRACE3("CREATELOCKPATH proxy lock path=%s pid=%d\n", lockPath, getpid()); + OSTRACE(("CREATELOCKPATH proxy lock path=%s pid=%d\n",lockPath,osGetpid(0))); return 0; } /* ** Create a new VFS file descriptor (stored in memory obtained from @@ -26977,11 +33831,10 @@ const char *path, /* path for the new unixFile */ unixFile **ppFile, /* unixFile created and returned by ref */ int islockfile /* if non zero missing dirs will be created */ ) { int fd = -1; - int dirfd = -1; unixFile *pNew; int rc = SQLITE_OK; int openFlags = O_RDWR | O_CREAT; sqlite3_vfs dummyVfs; int terrno = 0; @@ -26995,27 +33848,27 @@ */ pUnused = findReusableFd(path, openFlags); if( pUnused ){ fd = pUnused->fd; }else{ - pUnused = sqlite3_malloc(sizeof(*pUnused)); + pUnused = sqlite3_malloc64(sizeof(*pUnused)); if( !pUnused ){ return SQLITE_NOMEM; } } if( fd<0 ){ - fd = open(path, openFlags, SQLITE_DEFAULT_FILE_PERMISSIONS); + fd = robust_open(path, openFlags, 0); terrno = errno; if( fd<0 && errno==ENOENT && islockfile ){ if( proxyCreateLockPath(path) == SQLITE_OK ){ - fd = open(path, openFlags, SQLITE_DEFAULT_FILE_PERMISSIONS); + fd = robust_open(path, openFlags, 0); } } } if( fd<0 ){ openFlags = O_RDONLY; - fd = open(path, openFlags, SQLITE_DEFAULT_FILE_PERMISSIONS); + fd = robust_open(path, openFlags, 0); terrno = errno; 
} if( fd<0 ){ if( islockfile ){ return SQLITE_BUSY; @@ -27028,29 +33881,31 @@ default: return SQLITE_CANTOPEN_BKPT; } } - pNew = (unixFile *)sqlite3_malloc(sizeof(*pNew)); + pNew = (unixFile *)sqlite3_malloc64(sizeof(*pNew)); if( pNew==NULL ){ rc = SQLITE_NOMEM; goto end_create_proxy; } memset(pNew, 0, sizeof(unixFile)); pNew->openFlags = openFlags; + memset(&dummyVfs, 0, sizeof(dummyVfs)); dummyVfs.pAppData = (void*)&autolockIoFinder; + dummyVfs.zName = "dummy"; pUnused->fd = fd; pUnused->flags = openFlags; pNew->pUnused = pUnused; - rc = fillInUnixFile(&dummyVfs, fd, dirfd, (sqlite3_file*)pNew, path, 0, 0); + rc = fillInUnixFile(&dummyVfs, fd, (sqlite3_file*)pNew, path, 0); if( rc==SQLITE_OK ){ *ppFile = pNew; return SQLITE_OK; } end_create_proxy: - close(fd); /* silently leak fd if error, we're already in error */ + robust_close(pNew, fd, __LINE__); sqlite3_free(pNew); sqlite3_free(pUnused); return rc; } @@ -27058,26 +33913,36 @@ /* simulate multiple hosts by creating unique hostid file paths */ SQLITE_API int sqlite3_hostid_num = 0; #endif #define PROXY_HOSTIDLEN 16 /* conch file host id length */ + +#ifdef HAVE_GETHOSTUUID +/* Not always defined in the headers as it ought to be */ +extern int gethostuuid(uuid_t id, const struct timespec *wait); +#endif /* get the host ID via gethostuuid(), pHostID must point to PROXY_HOSTIDLEN ** bytes of writable memory. */ static int proxyGetHostID(unsigned char *pHostID, int *pError){ - struct timespec timeout = {1, 0}; /* 1 sec timeout */ - assert(PROXY_HOSTIDLEN == sizeof(uuid_t)); memset(pHostID, 0, PROXY_HOSTIDLEN); - if( gethostuuid(pHostID, &timeout) ){ - int err = errno; - if( pError ){ - *pError = err; +#ifdef HAVE_GETHOSTUUID + { + struct timespec timeout = {1, 0}; /* 1 sec timeout */ + if( gethostuuid(pHostID, &timeout) ){ + int err = errno; + if( pError ){ + *pError = err; + } + return SQLITE_IOERR; } - return SQLITE_IOERR; } +#else + UNUSED_PARAMETER(pError); +#endif #ifdef SQLITE_TEST /* simulate multiple hosts by creating unique hostid file paths */ if( sqlite3_hostid_num != 0){ pHostID[0] = (char)(pHostID[0] + (char)(sqlite3_hostid_num & 0xFF)); } @@ -27108,49 +33973,50 @@ size_t readLen = 0; size_t pathLen = 0; char errmsg[64] = ""; int fd = -1; int rc = -1; + UNUSED_PARAMETER(myHostID); /* create a new path by replace the trailing '-conch' with '-break' */ pathLen = strlcpy(tPath, cPath, MAXPATHLEN); if( pathLen>MAXPATHLEN || pathLen<6 || (strlcpy(&tPath[pathLen-5], "break", 6) != 5) ){ - sprintf(errmsg, "path error (len %d)", (int)pathLen); + sqlite3_snprintf(sizeof(errmsg),errmsg,"path error (len %d)",(int)pathLen); goto end_breaklock; } /* read the conch content */ - readLen = pread(conchFile->h, buf, PROXY_MAXCONCHLEN, 0); + readLen = osPread(conchFile->h, buf, PROXY_MAXCONCHLEN, 0); if( readLen<PROXY_PATHINDEX ){ - sprintf(errmsg, "read error (len %d)", (int)readLen); + sqlite3_snprintf(sizeof(errmsg),errmsg,"read error (len %d)",(int)readLen); goto end_breaklock; } /* write it out to the temporary break file */ - fd = open(tPath, (O_RDWR|O_CREAT|O_EXCL), SQLITE_DEFAULT_FILE_PERMISSIONS); + fd = robust_open(tPath, (O_RDWR|O_CREAT|O_EXCL), 0); if( fd<0 ){ - sprintf(errmsg, "create failed (%d)", errno); + sqlite3_snprintf(sizeof(errmsg), errmsg, "create failed (%d)", errno); goto end_breaklock; } - if( pwrite(fd, buf, readLen, 0) != readLen ){ - sprintf(errmsg, "write failed (%d)", errno); + if( osPwrite(fd, buf, readLen, 0) != (ssize_t)readLen ){ + sqlite3_snprintf(sizeof(errmsg), errmsg, "write failed (%d)", errno); goto 
end_breaklock; } if( rename(tPath, cPath) ){ - sprintf(errmsg, "rename failed (%d)", errno); + sqlite3_snprintf(sizeof(errmsg), errmsg, "rename failed (%d)", errno); goto end_breaklock; } rc = 0; fprintf(stderr, "broke stale lock on %s\n", cPath); - close(conchFile->h); + robust_close(pFile, conchFile->h, __LINE__); conchFile->h = fd; conchFile->openFlags = O_RDWR | O_CREAT; end_breaklock: if( rc ){ if( fd>=0 ){ - unlink(tPath); - close(fd); + osUnlink(tPath); + robust_close(pFile, fd, __LINE__); } fprintf(stderr, "failed to break stale lock on %s, %s\n", cPath, errmsg); } return rc; } @@ -27163,10 +34029,11 @@ unixFile *conchFile = pCtx->conchFile; int rc = SQLITE_OK; int nTries = 0; struct timespec conchModTime; + memset(&conchModTime, 0, sizeof(conchModTime)); do { rc = conchFile->pMethod->xLock((sqlite3_file*)conchFile, lockType); nTries ++; if( rc==SQLITE_BUSY ){ /* If the lock failed (busy): @@ -27174,12 +34041,12 @@ * 2nd try: fail if the mod time changed or host id is different, wait * 10 sec and try again * 3rd try: break the lock unless the mod time has changed. */ struct stat buf; - if( fstat(conchFile->h, &buf) ){ - pFile->lastErrno = errno; + if( osFstat(conchFile->h, &buf) ){ + storeLastErrno(pFile, errno); return SQLITE_IOERR_LOCK; } if( nTries==1 ){ conchModTime = buf.st_mtimespec; @@ -27193,13 +34060,13 @@ return SQLITE_BUSY; } if( nTries==2 ){ char tBuf[PROXY_MAXCONCHLEN]; - int len = pread(conchFile->h, tBuf, PROXY_MAXCONCHLEN, 0); + int len = osPread(conchFile->h, tBuf, PROXY_MAXCONCHLEN, 0); if( len<0 ){ - pFile->lastErrno = errno; + storeLastErrno(pFile, errno); return SQLITE_IOERR_LOCK; } if( len>PROXY_PATHINDEX && tBuf[0]==(char)PROXY_CONCHVERSION){ /* don't break the lock if the host id doesn't match */ if( 0!=memcmp(&tBuf[PROXY_HEADERLEN], myHostID, PROXY_HOSTIDLEN) ){ @@ -27215,11 +34082,11 @@ assert( nTries==3 ); if( 0==proxyBreakConchLock(pFile, myHostID) ){ rc = SQLITE_OK; if( lockType==EXCLUSIVE_LOCK ){ - rc = conchFile->pMethod->xLock((sqlite3_file*)conchFile, SHARED_LOCK); + rc = conchFile->pMethod->xLock((sqlite3_file*)conchFile, SHARED_LOCK); } if( !rc ){ rc = conchFile->pMethod->xLock((sqlite3_file*)conchFile, lockType); } } @@ -27252,16 +34119,17 @@ int hostIdMatch = 0; int readLen = 0; int tryOldLockPath = 0; int forceNewLockPath = 0; - OSTRACE4("TAKECONCH %d for %s pid=%d\n", conchFile->h, - (pCtx->lockProxyPath ? pCtx->lockProxyPath : ":auto:"), getpid()); + OSTRACE(("TAKECONCH %d for %s pid=%d\n", conchFile->h, + (pCtx->lockProxyPath ? pCtx->lockProxyPath : ":auto:"), + osGetpid(0))); rc = proxyGetHostID(myHostID, &pError); if( (rc&0xff)==SQLITE_IOERR ){ - pFile->lastErrno = pError; + storeLastErrno(pFile, pError); goto end_takeconch; } rc = proxyConchLock(pFile, myHostID, SHARED_LOCK); if( rc!=SQLITE_OK ){ goto end_takeconch; @@ -27268,11 +34136,11 @@ } /* read the existing conch file */ readLen = seekAndRead((unixFile*)conchFile, 0, readBuf, PROXY_MAXCONCHLEN); if( readLen<0 ){ /* I/O error: lastErrno set by seekAndRead */ - pFile->lastErrno = conchFile->lastErrno; + storeLastErrno(pFile, conchFile->lastErrno); rc = SQLITE_IOERR_READ; goto end_takeconch; }else if( readLen<=(PROXY_HEADERLEN+PROXY_HOSTIDLEN) || readBuf[0]!=(char)PROXY_CONCHVERSION ){ /* a short read or version format mismatch means we need to create a new @@ -27333,49 +34201,53 @@ ** has a shared lock already), if the host id matches, use the big ** stick. 
*/ futimes(conchFile->h, NULL); if( hostIdMatch && !createConch ){ - if( conchFile->pLock && conchFile->pLock->cnt>1 ){ + if( conchFile->pInode && conchFile->pInode->nShared>1 ){ /* We are trying for an exclusive lock but another thread in this ** same process is still holding a shared lock. */ rc = SQLITE_BUSY; } else { rc = proxyConchLock(pFile, myHostID, EXCLUSIVE_LOCK); } }else{ - rc = conchFile->pMethod->xLock((sqlite3_file*)conchFile, EXCLUSIVE_LOCK); + rc = proxyConchLock(pFile, myHostID, EXCLUSIVE_LOCK); } if( rc==SQLITE_OK ){ char writeBuffer[PROXY_MAXCONCHLEN]; int writeSize = 0; writeBuffer[0] = (char)PROXY_CONCHVERSION; memcpy(&writeBuffer[PROXY_HEADERLEN], myHostID, PROXY_HOSTIDLEN); if( pCtx->lockProxyPath!=NULL ){ - strlcpy(&writeBuffer[PROXY_PATHINDEX], pCtx->lockProxyPath, MAXPATHLEN); + strlcpy(&writeBuffer[PROXY_PATHINDEX], pCtx->lockProxyPath, + MAXPATHLEN); }else{ strlcpy(&writeBuffer[PROXY_PATHINDEX], tempLockPath, MAXPATHLEN); } writeSize = PROXY_PATHINDEX + strlen(&writeBuffer[PROXY_PATHINDEX]); - ftruncate(conchFile->h, writeSize); + robust_ftruncate(conchFile->h, writeSize); rc = unixWrite((sqlite3_file *)conchFile, writeBuffer, writeSize, 0); - fsync(conchFile->h); + full_fsync(conchFile->h,0,0); /* If we created a new conch file (not just updated the contents of a ** valid conch file), try to match the permissions of the database */ if( rc==SQLITE_OK && createConch ){ struct stat buf; - int err = fstat(pFile->h, &buf); + int err = osFstat(pFile->h, &buf); if( err==0 ){ mode_t cmode = buf.st_mode&(S_IRUSR|S_IWUSR | S_IRGRP|S_IWGRP | S_IROTH|S_IWOTH); /* try to match the database file R/W permissions, ignore failure */ #ifndef SQLITE_PROXY_DEBUG - fchmod(conchFile->h, cmode); + osFchmod(conchFile->h, cmode); #else - if( fchmod(conchFile->h, cmode)!=0 ){ + do{ + rc = osFchmod(conchFile->h, cmode); + }while( rc==(-1) && errno==EINTR ); + if( rc!=0 ){ int code = errno; fprintf(stderr, "fchmod %o FAILED with %d %s\n", cmode, code, strerror(code)); } else { fprintf(stderr, "fchmod %o SUCCEDED\n",cmode); @@ -27389,26 +34261,19 @@ } } conchFile->pMethod->xUnlock((sqlite3_file*)conchFile, SHARED_LOCK); end_takeconch: - OSTRACE2("TRANSPROXY: CLOSE %d\n", pFile->h); + OSTRACE(("TRANSPROXY: CLOSE %d\n", pFile->h)); if( rc==SQLITE_OK && pFile->openFlags ){ + int fd; if( pFile->h>=0 ){ -#ifdef STRICT_CLOSE_ERROR - if( close(pFile->h) ){ - pFile->lastErrno = errno; - return SQLITE_IOERR_CLOSE; - } -#else - close(pFile->h); /* silently leak fd if fail */ -#endif + robust_close(pFile, pFile->h, __LINE__); } pFile->h = -1; - int fd = open(pCtx->dbPath, pFile->openFlags, - SQLITE_DEFAULT_FILE_PERMISSIONS); - OSTRACE2("TRANSPROXY: OPEN %d\n", fd); + fd = robust_open(pCtx->dbPath, pFile->openFlags, 0); + OSTRACE(("TRANSPROXY: OPEN %d\n", fd)); if( fd>=0 ){ pFile->h = fd; }else{ rc=SQLITE_CANTOPEN_BKPT; /* SQLITE_BUSY? proxyTakeConch called during locking */ @@ -27446,41 +34311,43 @@ afpCtx->dbPath = pCtx->lockProxyPath; } } else { conchFile->pMethod->xUnlock((sqlite3_file*)conchFile, NO_LOCK); } - OSTRACE3("TAKECONCH %d %s\n", conchFile->h, rc==SQLITE_OK?"ok":"failed"); + OSTRACE(("TAKECONCH %d %s\n", conchFile->h, + rc==SQLITE_OK?"ok":"failed")); return rc; - } while (1); /* in case we need to retry the :auto: lock file - we should never get here except via the 'continue' call. */ + } while (1); /* in case we need to retry the :auto: lock file - + ** we should never get here except via the 'continue' call. */ } } /* ** If pFile holds a lock on a conch file, then release that lock. 
*/ static int proxyReleaseConch(unixFile *pFile){ - int rc; /* Subroutine return code */ + int rc = SQLITE_OK; /* Subroutine return code */ proxyLockingContext *pCtx; /* The locking context for the proxy lock */ unixFile *conchFile; /* Name of the conch file */ pCtx = (proxyLockingContext *)pFile->lockingContext; conchFile = pCtx->conchFile; - OSTRACE4("RELEASECONCH %d for %s pid=%d\n", conchFile->h, + OSTRACE(("RELEASECONCH %d for %s pid=%d\n", conchFile->h, (pCtx->lockProxyPath ? pCtx->lockProxyPath : ":auto:"), - getpid()); + osGetpid(0))); if( pCtx->conchHeld>0 ){ rc = conchFile->pMethod->xUnlock((sqlite3_file*)conchFile, NO_LOCK); } pCtx->conchHeld = 0; - OSTRACE3("RELEASECONCH %d %s\n", conchFile->h, - (rc==SQLITE_OK ? "ok" : "failed")); + OSTRACE(("RELEASECONCH %d %s\n", conchFile->h, + (rc==SQLITE_OK ? "ok" : "failed"))); return rc; } /* ** Given the name of a database file, compute the name of its conch file. -** Store the conch filename in memory obtained from sqlite3_malloc(). +** Store the conch filename in memory obtained from sqlite3_malloc64(). ** Make *pConchPath point to the new name. Return SQLITE_OK on success ** or SQLITE_NOMEM if unable to obtain memory. ** ** The caller is responsible for ensuring that the allocated memory ** space is eventually freed. @@ -27492,11 +34359,11 @@ int len = (int)strlen(dbPath); /* Length of database filename - dbPath */ char *conchPath; /* buffer in which to construct conch name */ /* Allocate space for the conch filename and initialize the name to ** the name of the original database file. */ - *pConchPath = conchPath = (char *)sqlite3_malloc(len + 8); + *pConchPath = conchPath = (char *)sqlite3_malloc64(len + 8); if( conchPath==0 ){ return SQLITE_NOMEM; } memcpy(conchPath, dbPath, len+1); @@ -27527,11 +34394,11 @@ static int switchLockProxyPath(unixFile *pFile, const char *path) { proxyLockingContext *pCtx = (proxyLockingContext*)pFile->lockingContext; char *oldPath = pCtx->lockProxyPath; int rc = SQLITE_OK; - if( pFile->locktype!=NO_LOCK ){ + if( pFile->eFileLock!=NO_LOCK ){ return SQLITE_BUSY; } /* nothing to do if the path is NULL, :auto: or matches the existing path */ if( !path || path[0]=='\0' || !strcmp(path, ":auto:") || @@ -27564,11 +34431,12 @@ #if defined(__APPLE__) if( pFile->pMethod == &afpIoMethods ){ /* afp style keeps a reference to the db path in the filePath field ** of the struct */ assert( (int)strlen((char*)pFile->lockingContext)<=MAXPATHLEN ); - strlcpy(dbPath, ((afpLockingContext *)pFile->lockingContext)->dbPath, MAXPATHLEN); + strlcpy(dbPath, ((afpLockingContext *)pFile->lockingContext)->dbPath, + MAXPATHLEN); } else #endif if( pFile->pMethod == &dotlockIoMethods ){ /* dot lock style uses the locking context to store the dot lock ** file path */ @@ -27594,24 +34462,24 @@ proxyLockingContext *pCtx; char dbPath[MAXPATHLEN+1]; /* Name of the database file */ char *lockPath=NULL; int rc = SQLITE_OK; - if( pFile->locktype!=NO_LOCK ){ + if( pFile->eFileLock!=NO_LOCK ){ return SQLITE_BUSY; } proxyGetDbPathForUnixFile(pFile, dbPath); if( !path || path[0]=='\0' || !strcmp(path, ":auto:") ){ lockPath=NULL; }else{ lockPath=(char *)path; } - OSTRACE4("TRANSPROXY %d for %s pid=%d\n", pFile->h, - (lockPath ? lockPath : ":auto:"), getpid()); + OSTRACE(("TRANSPROXY %d for %s pid=%d\n", pFile->h, + (lockPath ? 
lockPath : ":auto:"), osGetpid(0))); - pCtx = sqlite3_malloc( sizeof(*pCtx) ); + pCtx = sqlite3_malloc64( sizeof(*pCtx) ); if( pCtx==0 ){ return SQLITE_NOMEM; } memset(pCtx, 0, sizeof(*pCtx)); @@ -27626,11 +34494,11 @@ */ struct statfs fsInfo; struct stat conchInfo; int goLockless = 0; - if( stat(pCtx->conchFilePath, &conchInfo) == -1 ) { + if( osStat(pCtx->conchFilePath, &conchInfo) == -1 ) { int err = errno; if( (err==ENOENT) && (statfs(dbPath, &fsInfo) != -1) ){ goLockless = (fsInfo.f_flags&MNT_RDONLY) == MNT_RDONLY; } } @@ -27661,16 +34529,16 @@ }else{ if( pCtx->conchFile ){ pCtx->conchFile->pMethod->xClose((sqlite3_file *)pCtx->conchFile); sqlite3_free(pCtx->conchFile); } - sqlite3_free(pCtx->lockProxyPath); + sqlite3DbFree(0, pCtx->lockProxyPath); sqlite3_free(pCtx->conchFilePath); sqlite3_free(pCtx); } - OSTRACE3("TRANSPROXY %d %s\n", pFile->h, - (rc==SQLITE_OK ? "ok" : "failed")); + OSTRACE(("TRANSPROXY %d %s\n", pFile->h, + (rc==SQLITE_OK ? "ok" : "failed"))); return rc; } /* @@ -27677,11 +34545,11 @@ ** This routine handles sqlite3_file_control() calls that are specific ** to proxy locking. */ static int proxyFileControl(sqlite3_file *id, int op, void *pArg){ switch( op ){ - case SQLITE_GET_LOCKPROXYFILE: { + case SQLITE_FCNTL_GET_LOCKPROXYFILE: { unixFile *pFile = (unixFile*)id; if( pFile->pMethod == &proxyIoMethods ){ proxyLockingContext *pCtx = (proxyLockingContext*)pFile->lockingContext; proxyTakeConch(pFile); if( pCtx->lockProxyPath ){ @@ -27692,17 +34560,20 @@ } else { *(const char **)pArg = NULL; } return SQLITE_OK; } - case SQLITE_SET_LOCKPROXYFILE: { + case SQLITE_FCNTL_SET_LOCKPROXYFILE: { unixFile *pFile = (unixFile*)id; int rc = SQLITE_OK; int isProxyStyle = (pFile->pMethod == &proxyIoMethods); if( pArg==NULL || (const char *)pArg==0 ){ if( isProxyStyle ){ - /* turn off proxy locking - not supported */ + /* turn off proxy locking - not supported. If support is added for + ** switching proxy locking mode off then it will need to fail if + ** the journal mode is WAL mode. + */ rc = SQLITE_ERROR /*SQLITE_PROTOCOL? SQLITE_MISUSE?*/; }else{ /* turn off proxy locking - already off - NOOP */ rc = SQLITE_OK; } @@ -27761,11 +34632,11 @@ } return rc; } /* -** Lock the file with the lock specified by parameter locktype - one +** Lock the file with the lock specified by parameter eFileLock - one ** of the following: ** ** (1) SHARED_LOCK ** (2) RESERVED_LOCK ** (3) PENDING_LOCK @@ -27784,43 +34655,43 @@ ** PENDING -> EXCLUSIVE ** ** This routine will only increase a lock. Use the sqlite3OsUnlock() ** routine to lower a locking level. */ -static int proxyLock(sqlite3_file *id, int locktype) { +static int proxyLock(sqlite3_file *id, int eFileLock) { unixFile *pFile = (unixFile*)id; int rc = proxyTakeConch(pFile); if( rc==SQLITE_OK ){ proxyLockingContext *pCtx = (proxyLockingContext *)pFile->lockingContext; if( pCtx->conchHeld>0 ){ unixFile *proxy = pCtx->lockProxy; - rc = proxy->pMethod->xLock((sqlite3_file*)proxy, locktype); - pFile->locktype = proxy->locktype; + rc = proxy->pMethod->xLock((sqlite3_file*)proxy, eFileLock); + pFile->eFileLock = proxy->eFileLock; }else{ /* conchHeld < 0 is lockless */ } } return rc; } /* -** Lower the locking level on file descriptor pFile to locktype. locktype +** Lower the locking level on file descriptor pFile to eFileLock. eFileLock ** must be either NO_LOCK or SHARED_LOCK. ** ** If the locking level of the file descriptor is already at or below ** the requested locking level, this routine is a no-op. 
*/ -static int proxyUnlock(sqlite3_file *id, int locktype) { +static int proxyUnlock(sqlite3_file *id, int eFileLock) { unixFile *pFile = (unixFile*)id; int rc = proxyTakeConch(pFile); if( rc==SQLITE_OK ){ proxyLockingContext *pCtx = (proxyLockingContext *)pFile->lockingContext; if( pCtx->conchHeld>0 ){ unixFile *proxy = pCtx->lockProxy; - rc = proxy->pMethod->xUnlock((sqlite3_file*)proxy, locktype); - pFile->locktype = proxy->locktype; + rc = proxy->pMethod->xUnlock((sqlite3_file*)proxy, eFileLock); + pFile->eFileLock = proxy->eFileLock; }else{ /* conchHeld < 0 is lockless */ } } return rc; @@ -27828,11 +34699,11 @@ /* ** Close a file that uses proxy locks. */ static int proxyClose(sqlite3_file *id) { - if( id ){ + if( ALWAYS(id) ){ unixFile *pFile = (unixFile*)id; proxyLockingContext *pCtx = (proxyLockingContext *)pFile->lockingContext; unixFile *lockProxy = pCtx->lockProxy; unixFile *conchFile = pCtx->conchFile; int rc = SQLITE_OK; @@ -27852,13 +34723,13 @@ } rc = conchFile->pMethod->xClose((sqlite3_file*)conchFile); if( rc ) return rc; sqlite3_free(conchFile); } - sqlite3_free(pCtx->lockProxyPath); + sqlite3DbFree(0, pCtx->lockProxyPath); sqlite3_free(pCtx->conchFilePath); - sqlite3_free(pCtx->dbPath); + sqlite3DbFree(0, pCtx->dbPath); /* restore the original locking context and pMethod then close it */ pFile->lockingContext = pCtx->oldLockingContext; pFile->pMethod = pCtx->pOldMethod; sqlite3_free(pCtx); return pFile->pMethod->xClose(id); @@ -27889,11 +34760,11 @@ ** This routine is called once during SQLite initialization and by a ** single thread. The memory allocation and mutex subsystems have not ** necessarily been initialized when this routine is called, and so they ** should not be used. */ -SQLITE_API int sqlite3_os_init(void){ +SQLITE_API int SQLITE_STDCALL sqlite3_os_init(void){ /* ** The following macro defines an initializer for an sqlite3_vfs object. ** The name of the VFS is NAME. The pAppData is a pointer to a pointer ** to the "finder" function. (pAppData is a pointer to a pointer because ** silly C90 rules prohibit a void* from being cast to a function pointer @@ -27911,11 +34782,11 @@ ** more than that; it looks at the filesystem type that hosts the ** database file and tries to choose an locking method appropriate for ** that filesystem time. */ #define UNIXVFS(VFSNAME, FINDER) { \ - 1, /* iVersion */ \ + 3, /* iVersion */ \ sizeof(unixFile), /* szOsFile */ \ MAX_PATHNAME, /* mxPathname */ \ 0, /* pNext */ \ VFSNAME, /* zName */ \ (void*)&FINDER, /* pAppData */ \ @@ -27928,11 +34799,15 @@ unixDlSym, /* xDlSym */ \ unixDlClose, /* xDlClose */ \ unixRandomness, /* xRandomness */ \ unixSleep, /* xSleep */ \ unixCurrentTime, /* xCurrentTime */ \ - unixGetLastError /* xGetLastError */ \ + unixGetLastError, /* xGetLastError */ \ + unixCurrentTimeInt64, /* xCurrentTimeInt64 */ \ + unixSetSystemCall, /* xSetSystemCall */ \ + unixGetSystemCall, /* xGetSystemCall */ \ + unixNextSystemCall, /* xNextSystemCall */ \ } /* ** All default VFSes for unix are contained in the following array. ** @@ -27939,33 +34814,40 @@ ** Note that the sqlite3_vfs.pNext field of the VFS object is modified ** by the SQLite core when the VFS is registered. So the following ** array cannot be const. 
*/ static sqlite3_vfs aVfs[] = { -#if SQLITE_ENABLE_LOCKING_STYLE && (OS_VXWORKS || defined(__APPLE__)) +#if SQLITE_ENABLE_LOCKING_STYLE && defined(__APPLE__) UNIXVFS("unix", autolockIoFinder ), +#elif OS_VXWORKS + UNIXVFS("unix", vxworksIoFinder ), #else UNIXVFS("unix", posixIoFinder ), #endif UNIXVFS("unix-none", nolockIoFinder ), UNIXVFS("unix-dotfile", dotlockIoFinder ), + UNIXVFS("unix-excl", posixIoFinder ), #if OS_VXWORKS UNIXVFS("unix-namedsem", semIoFinder ), #endif -#if SQLITE_ENABLE_LOCKING_STYLE +#if SQLITE_ENABLE_LOCKING_STYLE || OS_VXWORKS UNIXVFS("unix-posix", posixIoFinder ), -#if !OS_VXWORKS +#endif +#if SQLITE_ENABLE_LOCKING_STYLE UNIXVFS("unix-flock", flockIoFinder ), -#endif #endif #if SQLITE_ENABLE_LOCKING_STYLE && defined(__APPLE__) UNIXVFS("unix-afp", afpIoFinder ), UNIXVFS("unix-nfs", nfsIoFinder ), UNIXVFS("unix-proxy", proxyIoFinder ), #endif }; unsigned int i; /* Loop counter */ + + /* Double-check that the aSyscall[] array has been constructed + ** correctly. See ticket [bb3a86e890c8e96ab] */ + assert( ArraySize(aSyscall)==28 ); /* Register all VFSes defined in the aVfs[] array */ for(i=0; i<(sizeof(aVfs)/sizeof(sqlite3_vfs)); i++){ sqlite3_vfs_register(&aVfs[i], i==0); } @@ -27977,11 +34859,11 @@ ** ** Some operating systems might need to do some cleanup in this routine, ** to release dynamically allocated objects. But not on unix. ** This routine is a no-op for unix. */ -SQLITE_API int sqlite3_os_end(void){ +SQLITE_API int SQLITE_STDCALL sqlite3_os_end(void){ return SQLITE_OK; } #endif /* SQLITE_OS_UNIX */ @@ -27997,53 +34879,14 @@ ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ****************************************************************************** ** -** This file contains code that is specific to windows. -*/ -#if SQLITE_OS_WIN /* This file is used for windows only */ - - -/* -** A Note About Memory Allocation: -** -** This driver uses malloc()/free() directly rather than going through -** the SQLite-wrappers sqlite3_malloc()/sqlite3_free(). Those wrappers -** are designed for use on embedded systems where memory is scarce and -** malloc failures happen frequently. Win32 does not typically run on -** embedded systems, and when it does the developers normally have bigger -** problems to worry about than running out of memory. So there is not -** a compelling need to use the wrappers. -** -** But there is a good reason to not use the wrappers. If we use the -** wrappers then we will get simulated malloc() failures within this -** driver. And that causes all kinds of problems for our tests. We -** could enhance SQLite to deal with simulated malloc failures within -** the OS driver, but the code to deal with those failure would not -** be exercised on Linux (which does not need to malloc() in the driver) -** and so we would have difficulty writing coverage tests for that -** code. Better to leave the code out, we think. -** -** The point of this discussion is as follows: When creating a new -** OS layer for an embedded system, if you use this file as an example, -** avoid the use of malloc()/free(). Those routines work ok on windows -** desktops but not so well in embedded systems. -*/ - -#include <winbase.h> - -#ifdef __CYGWIN__ -# include <sys/cygwin.h> -#endif - -/* -** Macros used to determine whether or not to use threads. -*/ -#if defined(THREADSAFE) && THREADSAFE -# define SQLITE_W32_THREADS 1 -#endif +** This file contains code that is specific to Windows. 
+*/ +/* #include "sqliteInt.h" */ +#if SQLITE_OS_WIN /* This file is used for Windows only */ /* ** Include code that is common to all os_*.c files */ /************** Include os_common.h in the middle of os_win.c ****************/ @@ -28077,39 +34920,18 @@ */ #ifdef MEMORY_DEBUG # error "The MEMORY_DEBUG macro is obsolete. Use SQLITE_DEBUG instead." #endif -#ifdef SQLITE_DEBUG -SQLITE_PRIVATE int sqlite3OSTrace = 0; -#define OSTRACE1(X) if( sqlite3OSTrace ) sqlite3DebugPrintf(X) -#define OSTRACE2(X,Y) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y) -#define OSTRACE3(X,Y,Z) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y,Z) -#define OSTRACE4(X,Y,Z,A) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y,Z,A) -#define OSTRACE5(X,Y,Z,A,B) if( sqlite3OSTrace ) sqlite3DebugPrintf(X,Y,Z,A,B) -#define OSTRACE6(X,Y,Z,A,B,C) \ - if(sqlite3OSTrace) sqlite3DebugPrintf(X,Y,Z,A,B,C) -#define OSTRACE7(X,Y,Z,A,B,C,D) \ - if(sqlite3OSTrace) sqlite3DebugPrintf(X,Y,Z,A,B,C,D) -#else -#define OSTRACE1(X) -#define OSTRACE2(X,Y) -#define OSTRACE3(X,Y,Z) -#define OSTRACE4(X,Y,Z,A) -#define OSTRACE5(X,Y,Z,A,B) -#define OSTRACE6(X,Y,Z,A,B,C) -#define OSTRACE7(X,Y,Z,A,B,C,D) -#endif - /* ** Macros for performance tracing. Normally turned off. Only works ** on i486 hardware. */ #ifdef SQLITE_PERFORMANCE_TRACE -/* -** hwtime.h contains inline assembler code for implementing +/* +** hwtime.h contains inline assembler code for implementing ** high-performance timing routines. */ /************** Include hwtime.h in the middle of os_common.h ****************/ /************** Begin file hwtime.h ******************************************/ /* @@ -28215,18 +35037,18 @@ /* ** If we compile with the SQLITE_TEST macro set, then the following block ** of code will give us the ability to simulate a disk I/O error. This ** is used for testing the I/O recovery logic. */ -#ifdef SQLITE_TEST -SQLITE_API int sqlite3_io_error_hit = 0; /* Total number of I/O Errors */ -SQLITE_API int sqlite3_io_error_hardhit = 0; /* Number of non-benign errors */ -SQLITE_API int sqlite3_io_error_pending = 0; /* Count down to first I/O error */ -SQLITE_API int sqlite3_io_error_persist = 0; /* True if I/O errors persist */ -SQLITE_API int sqlite3_io_error_benign = 0; /* True if errors are benign */ -SQLITE_API int sqlite3_diskfull_pending = 0; -SQLITE_API int sqlite3_diskfull = 0; +#if defined(SQLITE_TEST) +SQLITE_API extern int sqlite3_io_error_hit; +SQLITE_API extern int sqlite3_io_error_hardhit; +SQLITE_API extern int sqlite3_io_error_pending; +SQLITE_API extern int sqlite3_io_error_persist; +SQLITE_API extern int sqlite3_io_error_benign; +SQLITE_API extern int sqlite3_diskfull_pending; +SQLITE_API extern int sqlite3_diskfull; #define SimulateIOErrorBenign(X) sqlite3_io_error_benign=(X) #define SimulateIOError(CODE) \ if( (sqlite3_io_error_persist && sqlite3_io_error_hit) \ || sqlite3_io_error_pending-- == 1 ) \ { local_ioerr(); CODE; } @@ -28248,41 +35070,247 @@ } #else #define SimulateIOErrorBenign(X) #define SimulateIOError(A) #define SimulateDiskfullError(A) -#endif +#endif /* defined(SQLITE_TEST) */ /* ** When testing, keep a count of the number of open files. 
*/ -#ifdef SQLITE_TEST -SQLITE_API int sqlite3_open_file_count = 0; +#if defined(SQLITE_TEST) +SQLITE_API extern int sqlite3_open_file_count; #define OpenCounter(X) sqlite3_open_file_count+=(X) #else #define OpenCounter(X) -#endif +#endif /* defined(SQLITE_TEST) */ #endif /* !defined(_OS_COMMON_H_) */ /************** End of os_common.h *******************************************/ /************** Continuing where we left off in os_win.c *********************/ /* -** Some microsoft compilers lack this definition. +** Include the header file for the Windows VFS. +*/ +/* #include "os_win.h" */ + +/* +** Compiling and using WAL mode requires several APIs that are only +** available in Windows platforms based on the NT kernel. +*/ +#if !SQLITE_OS_WINNT && !defined(SQLITE_OMIT_WAL) +# error "WAL mode requires support from the Windows NT kernel, compile\ + with SQLITE_OMIT_WAL." +#endif + +#if !SQLITE_OS_WINNT && SQLITE_MAX_MMAP_SIZE>0 +# error "Memory mapped files require support from the Windows NT kernel,\ + compile with SQLITE_MAX_MMAP_SIZE=0." +#endif + +/* +** Are most of the Win32 ANSI APIs available (i.e. with certain exceptions +** based on the sub-platform)? +*/ +#if !SQLITE_OS_WINCE && !SQLITE_OS_WINRT && !defined(SQLITE_WIN32_NO_ANSI) +# define SQLITE_WIN32_HAS_ANSI +#endif + +/* +** Are most of the Win32 Unicode APIs available (i.e. with certain exceptions +** based on the sub-platform)? +*/ +#if (SQLITE_OS_WINCE || SQLITE_OS_WINNT || SQLITE_OS_WINRT) && \ + !defined(SQLITE_WIN32_NO_WIDE) +# define SQLITE_WIN32_HAS_WIDE +#endif + +/* +** Make sure at least one set of Win32 APIs is available. +*/ +#if !defined(SQLITE_WIN32_HAS_ANSI) && !defined(SQLITE_WIN32_HAS_WIDE) +# error "At least one of SQLITE_WIN32_HAS_ANSI and SQLITE_WIN32_HAS_WIDE\ + must be defined." +#endif + +/* +** Define the required Windows SDK version constants if they are not +** already available. +*/ +#ifndef NTDDI_WIN8 +# define NTDDI_WIN8 0x06020000 +#endif + +#ifndef NTDDI_WINBLUE +# define NTDDI_WINBLUE 0x06030000 +#endif + +#ifndef NTDDI_WINTHRESHOLD +# define NTDDI_WINTHRESHOLD 0x06040000 +#endif + +/* +** Check to see if the GetVersionEx[AW] functions are deprecated on the +** target system. GetVersionEx was first deprecated in Win8.1. +*/ +#ifndef SQLITE_WIN32_GETVERSIONEX +# if defined(NTDDI_VERSION) && NTDDI_VERSION >= NTDDI_WINBLUE +# define SQLITE_WIN32_GETVERSIONEX 0 /* GetVersionEx() is deprecated */ +# else +# define SQLITE_WIN32_GETVERSIONEX 1 /* GetVersionEx() is current */ +# endif +#endif + +/* +** Check to see if the CreateFileMappingA function is supported on the +** target system. It is unavailable when using "mincore.lib" on Win10. +** When compiling for Windows 10, always assume "mincore.lib" is in use. +*/ +#ifndef SQLITE_WIN32_CREATEFILEMAPPINGA +# if defined(NTDDI_VERSION) && NTDDI_VERSION >= NTDDI_WINTHRESHOLD +# define SQLITE_WIN32_CREATEFILEMAPPINGA 0 +# else +# define SQLITE_WIN32_CREATEFILEMAPPINGA 1 +# endif +#endif + +/* +** This constant should already be defined (in the "WinDef.h" SDK file). +*/ +#ifndef MAX_PATH +# define MAX_PATH (260) +#endif + +/* +** Maximum pathname length (in chars) for Win32. This should normally be +** MAX_PATH. +*/ +#ifndef SQLITE_WIN32_MAX_PATH_CHARS +# define SQLITE_WIN32_MAX_PATH_CHARS (MAX_PATH) +#endif + +/* +** This constant should already be defined (in the "WinNT.h" SDK file). +*/ +#ifndef UNICODE_STRING_MAX_CHARS +# define UNICODE_STRING_MAX_CHARS (32767) +#endif + +/* +** Maximum pathname length (in chars) for WinNT. 
This should normally be +** UNICODE_STRING_MAX_CHARS. +*/ +#ifndef SQLITE_WINNT_MAX_PATH_CHARS +# define SQLITE_WINNT_MAX_PATH_CHARS (UNICODE_STRING_MAX_CHARS) +#endif + +/* +** Maximum pathname length (in bytes) for Win32. The MAX_PATH macro is in +** characters, so we allocate 4 bytes per character assuming worst-case of +** 4-bytes-per-character for UTF8. +*/ +#ifndef SQLITE_WIN32_MAX_PATH_BYTES +# define SQLITE_WIN32_MAX_PATH_BYTES (SQLITE_WIN32_MAX_PATH_CHARS*4) +#endif + +/* +** Maximum pathname length (in bytes) for WinNT. This should normally be +** UNICODE_STRING_MAX_CHARS * sizeof(WCHAR). +*/ +#ifndef SQLITE_WINNT_MAX_PATH_BYTES +# define SQLITE_WINNT_MAX_PATH_BYTES \ + (sizeof(WCHAR) * SQLITE_WINNT_MAX_PATH_CHARS) +#endif + +/* +** Maximum error message length (in chars) for WinRT. +*/ +#ifndef SQLITE_WIN32_MAX_ERRMSG_CHARS +# define SQLITE_WIN32_MAX_ERRMSG_CHARS (1024) +#endif + +/* +** Returns non-zero if the character should be treated as a directory +** separator. +*/ +#ifndef winIsDirSep +# define winIsDirSep(a) (((a) == '/') || ((a) == '\\')) +#endif + +/* +** This macro is used when a local variable is set to a value that is +** [sometimes] not used by the code (e.g. via conditional compilation). +*/ +#ifndef UNUSED_VARIABLE_VALUE +# define UNUSED_VARIABLE_VALUE(x) (void)(x) +#endif + +/* +** Returns the character that should be used as the directory separator. +*/ +#ifndef winGetDirSep +# define winGetDirSep() '\\' +#endif + +/* +** Do we need to manually define the Win32 file mapping APIs for use with WAL +** mode or memory mapped files (e.g. these APIs are available in the Windows +** CE SDK; however, they are not present in the header file)? +*/ +#if SQLITE_WIN32_FILEMAPPING_API && \ + (!defined(SQLITE_OMIT_WAL) || SQLITE_MAX_MMAP_SIZE>0) +/* +** Two of the file mapping APIs are different under WinRT. Figure out which +** set we need. +*/ +#if SQLITE_OS_WINRT +WINBASEAPI HANDLE WINAPI CreateFileMappingFromApp(HANDLE, \ + LPSECURITY_ATTRIBUTES, ULONG, ULONG64, LPCWSTR); + +WINBASEAPI LPVOID WINAPI MapViewOfFileFromApp(HANDLE, ULONG, ULONG64, SIZE_T); +#else +#if defined(SQLITE_WIN32_HAS_ANSI) +WINBASEAPI HANDLE WINAPI CreateFileMappingA(HANDLE, LPSECURITY_ATTRIBUTES, \ + DWORD, DWORD, DWORD, LPCSTR); +#endif /* defined(SQLITE_WIN32_HAS_ANSI) */ + +#if defined(SQLITE_WIN32_HAS_WIDE) +WINBASEAPI HANDLE WINAPI CreateFileMappingW(HANDLE, LPSECURITY_ATTRIBUTES, \ + DWORD, DWORD, DWORD, LPCWSTR); +#endif /* defined(SQLITE_WIN32_HAS_WIDE) */ + +WINBASEAPI LPVOID WINAPI MapViewOfFile(HANDLE, DWORD, DWORD, DWORD, SIZE_T); +#endif /* SQLITE_OS_WINRT */ + +/* +** These file mapping APIs are common to both Win32 and WinRT. +*/ + +WINBASEAPI BOOL WINAPI FlushViewOfFile(LPCVOID, SIZE_T); +WINBASEAPI BOOL WINAPI UnmapViewOfFile(LPCVOID); +#endif /* SQLITE_WIN32_FILEMAPPING_API */ + +/* +** Some Microsoft compilers lack this definition. */ #ifndef INVALID_FILE_ATTRIBUTES -# define INVALID_FILE_ATTRIBUTES ((DWORD)-1) +# define INVALID_FILE_ATTRIBUTES ((DWORD)-1) #endif -/* -** Determine if we are dealing with WindowsCE - which has a much -** reduced API. 
-*/ -#if SQLITE_OS_WINCE -# define AreFileApisANSI() 1 -# define FormatMessageW(a,b,c,d,e,f,g) 0 +#ifndef FILE_FLAG_MASK +# define FILE_FLAG_MASK (0xFF3C0000) +#endif + +#ifndef FILE_ATTRIBUTE_MASK +# define FILE_ATTRIBUTE_MASK (0x0003FFF7) +#endif + +#ifndef SQLITE_OMIT_WAL +/* Forward references to structures used for WAL */ +typedef struct winShm winShm; /* A connection to shared-memory */ +typedef struct winShmNode winShmNode; /* A region of shared-memory */ #endif /* ** WinCE lacks native support for file locking so we have to fake it ** with some code of our own. @@ -28300,49 +35328,1099 @@ ** The winFile structure is a subclass of sqlite3_file* specific to the win32 ** portability layer. */ typedef struct winFile winFile; struct winFile { - const sqlite3_io_methods *pMethod;/* Must be first */ + const sqlite3_io_methods *pMethod; /*** Must be first ***/ + sqlite3_vfs *pVfs; /* The VFS used to open this file */ HANDLE h; /* Handle for accessing the file */ - unsigned char locktype; /* Type of lock currently held on this file */ + u8 locktype; /* Type of lock currently held on this file */ short sharedLockByte; /* Randomly chosen byte used as a shared lock */ + u8 ctrlFlags; /* Flags. See WINFILE_* below */ DWORD lastErrno; /* The Windows errno from the last I/O error */ - DWORD sectorSize; /* Sector size of the device file is on */ +#ifndef SQLITE_OMIT_WAL + winShm *pShm; /* Instance of shared memory on this file */ +#endif + const char *zPath; /* Full pathname of this file */ + int szChunk; /* Chunk size configured by FCNTL_CHUNK_SIZE */ #if SQLITE_OS_WINCE - WCHAR *zDeleteOnClose; /* Name of file to delete when closing */ - HANDLE hMutex; /* Mutex used to control access to shared lock */ + LPWSTR zDeleteOnClose; /* Name of file to delete when closing */ + HANDLE hMutex; /* Mutex used to control access to shared lock */ HANDLE hShared; /* Shared memory segment used for locking */ winceLock local; /* Locks obtained by this instance of winFile */ winceLock *shared; /* Global shared lock memory for the file */ #endif +#if SQLITE_MAX_MMAP_SIZE>0 + int nFetchOut; /* Number of outstanding xFetch references */ + HANDLE hMap; /* Handle for accessing memory mapping */ + void *pMapRegion; /* Area memory mapped */ + sqlite3_int64 mmapSize; /* Usable size of mapped region */ + sqlite3_int64 mmapSizeActual; /* Actual size of mapped region */ + sqlite3_int64 mmapSizeMax; /* Configured FCNTL_MMAP_SIZE value */ +#endif }; /* -** Forward prototypes. +** Allowed values for winFile.ctrlFlags */ -static int getSectorSize( - sqlite3_vfs *pVfs, - const char *zRelative /* UTF-8 file name */ -); +#define WINFILE_RDONLY 0x02 /* Connection is read only */ +#define WINFILE_PERSIST_WAL 0x04 /* Persistent WAL mode */ +#define WINFILE_PSOW 0x10 /* SQLITE_IOCAP_POWERSAFE_OVERWRITE */ + +/* + * The size of the buffer used by sqlite3_win32_write_debug(). + */ +#ifndef SQLITE_WIN32_DBG_BUF_SIZE +# define SQLITE_WIN32_DBG_BUF_SIZE ((int)(4096-sizeof(DWORD))) +#endif + +/* + * The value used with sqlite3_win32_set_directory() to specify that + * the data directory should be changed. + */ +#ifndef SQLITE_WIN32_DATA_DIRECTORY_TYPE +# define SQLITE_WIN32_DATA_DIRECTORY_TYPE (1) +#endif + +/* + * The value used with sqlite3_win32_set_directory() to specify that + * the temporary directory should be changed. 
+ */ +#ifndef SQLITE_WIN32_TEMP_DIRECTORY_TYPE +# define SQLITE_WIN32_TEMP_DIRECTORY_TYPE (2) +#endif + +/* + * If compiled with SQLITE_WIN32_MALLOC on Windows, we will use the + * various Win32 API heap functions instead of our own. + */ +#ifdef SQLITE_WIN32_MALLOC + +/* + * If this is non-zero, an isolated heap will be created by the native Win32 + * allocator subsystem; otherwise, the default process heap will be used. This + * setting has no effect when compiling for WinRT. By default, this is enabled + * and an isolated heap will be created to store all allocated data. + * + ****************************************************************************** + * WARNING: It is important to note that when this setting is non-zero and the + * winMemShutdown function is called (e.g. by the sqlite3_shutdown + * function), all data that was allocated using the isolated heap will + * be freed immediately and any attempt to access any of that freed + * data will almost certainly result in an immediate access violation. + ****************************************************************************** + */ +#ifndef SQLITE_WIN32_HEAP_CREATE +# define SQLITE_WIN32_HEAP_CREATE (TRUE) +#endif + +/* + * The initial size of the Win32-specific heap. This value may be zero. + */ +#ifndef SQLITE_WIN32_HEAP_INIT_SIZE +# define SQLITE_WIN32_HEAP_INIT_SIZE ((SQLITE_DEFAULT_CACHE_SIZE) * \ + (SQLITE_DEFAULT_PAGE_SIZE) + 4194304) +#endif + +/* + * The maximum size of the Win32-specific heap. This value may be zero. + */ +#ifndef SQLITE_WIN32_HEAP_MAX_SIZE +# define SQLITE_WIN32_HEAP_MAX_SIZE (0) +#endif + +/* + * The extra flags to use in calls to the Win32 heap APIs. This value may be + * zero for the default behavior. + */ +#ifndef SQLITE_WIN32_HEAP_FLAGS +# define SQLITE_WIN32_HEAP_FLAGS (0) +#endif + + +/* +** The winMemData structure stores information required by the Win32-specific +** sqlite3_mem_methods implementation. +*/ +typedef struct winMemData winMemData; +struct winMemData { +#ifndef NDEBUG + u32 magic1; /* Magic number to detect structure corruption. */ +#endif + HANDLE hHeap; /* The handle to our heap. */ + BOOL bOwned; /* Do we own the heap (i.e. destroy it on shutdown)? */ +#ifndef NDEBUG + u32 magic2; /* Magic number to detect structure corruption. */ +#endif +}; + +#ifndef NDEBUG +#define WINMEM_MAGIC1 0x42b2830b +#define WINMEM_MAGIC2 0xbd4d7cf4 +#endif + +static struct winMemData win_mem_data = { +#ifndef NDEBUG + WINMEM_MAGIC1, +#endif + NULL, FALSE +#ifndef NDEBUG + ,WINMEM_MAGIC2 +#endif +}; + +#ifndef NDEBUG +#define winMemAssertMagic1() assert( win_mem_data.magic1==WINMEM_MAGIC1 ) +#define winMemAssertMagic2() assert( win_mem_data.magic2==WINMEM_MAGIC2 ) +#define winMemAssertMagic() winMemAssertMagic1(); winMemAssertMagic2(); +#else +#define winMemAssertMagic() +#endif + +#define winMemGetDataPtr() &win_mem_data +#define winMemGetHeap() win_mem_data.hHeap +#define winMemGetOwned() win_mem_data.bOwned + +static void *winMemMalloc(int nBytes); +static void winMemFree(void *pPrior); +static void *winMemRealloc(void *pPrior, int nBytes); +static int winMemSize(void *p); +static int winMemRoundup(int n); +static int winMemInit(void *pAppData); +static void winMemShutdown(void *pAppData); + +SQLITE_PRIVATE const sqlite3_mem_methods *sqlite3MemGetWin32(void); +#endif /* SQLITE_WIN32_MALLOC */ /* ** The following variable is (normally) set once and never changes -** thereafter. It records whether the operating system is Win95 +** thereafter. 
It records whether the operating system is Win9x ** or WinNT. ** ** 0: Operating system unknown. -** 1: Operating system is Win95. +** 1: Operating system is Win9x. ** 2: Operating system is WinNT. ** ** In order to facilitate testing on a WinNT system, the test fixture ** can manually set this value to 1 to emulate Win98 behavior. */ #ifdef SQLITE_TEST -SQLITE_API int sqlite3_os_type = 0; -#else -static int sqlite3_os_type = 0; +SQLITE_API LONG SQLITE_WIN32_VOLATILE sqlite3_os_type = 0; +#else +static LONG SQLITE_WIN32_VOLATILE sqlite3_os_type = 0; +#endif + +#ifndef SYSCALL +# define SYSCALL sqlite3_syscall_ptr +#endif + +/* +** This function is not available on Windows CE or WinRT. + */ + +#if SQLITE_OS_WINCE || SQLITE_OS_WINRT +# define osAreFileApisANSI() 1 +#endif + +/* +** Many system calls are accessed through pointer-to-functions so that +** they may be overridden at runtime to facilitate fault injection during +** testing and sandboxing. The following array holds the names and pointers +** to all overrideable system calls. +*/ +static struct win_syscall { + const char *zName; /* Name of the system call */ + sqlite3_syscall_ptr pCurrent; /* Current value of the system call */ + sqlite3_syscall_ptr pDefault; /* Default value */ +} aSyscall[] = { +#if !SQLITE_OS_WINCE && !SQLITE_OS_WINRT + { "AreFileApisANSI", (SYSCALL)AreFileApisANSI, 0 }, +#else + { "AreFileApisANSI", (SYSCALL)0, 0 }, +#endif + +#ifndef osAreFileApisANSI +#define osAreFileApisANSI ((BOOL(WINAPI*)(VOID))aSyscall[0].pCurrent) +#endif + +#if SQLITE_OS_WINCE && defined(SQLITE_WIN32_HAS_WIDE) + { "CharLowerW", (SYSCALL)CharLowerW, 0 }, +#else + { "CharLowerW", (SYSCALL)0, 0 }, +#endif + +#define osCharLowerW ((LPWSTR(WINAPI*)(LPWSTR))aSyscall[1].pCurrent) + +#if SQLITE_OS_WINCE && defined(SQLITE_WIN32_HAS_WIDE) + { "CharUpperW", (SYSCALL)CharUpperW, 0 }, +#else + { "CharUpperW", (SYSCALL)0, 0 }, +#endif + +#define osCharUpperW ((LPWSTR(WINAPI*)(LPWSTR))aSyscall[2].pCurrent) + + { "CloseHandle", (SYSCALL)CloseHandle, 0 }, + +#define osCloseHandle ((BOOL(WINAPI*)(HANDLE))aSyscall[3].pCurrent) + +#if defined(SQLITE_WIN32_HAS_ANSI) + { "CreateFileA", (SYSCALL)CreateFileA, 0 }, +#else + { "CreateFileA", (SYSCALL)0, 0 }, +#endif + +#define osCreateFileA ((HANDLE(WINAPI*)(LPCSTR,DWORD,DWORD, \ + LPSECURITY_ATTRIBUTES,DWORD,DWORD,HANDLE))aSyscall[4].pCurrent) + +#if !SQLITE_OS_WINRT && defined(SQLITE_WIN32_HAS_WIDE) + { "CreateFileW", (SYSCALL)CreateFileW, 0 }, +#else + { "CreateFileW", (SYSCALL)0, 0 }, +#endif + +#define osCreateFileW ((HANDLE(WINAPI*)(LPCWSTR,DWORD,DWORD, \ + LPSECURITY_ATTRIBUTES,DWORD,DWORD,HANDLE))aSyscall[5].pCurrent) + +#if !SQLITE_OS_WINRT && defined(SQLITE_WIN32_HAS_ANSI) && \ + (!defined(SQLITE_OMIT_WAL) || SQLITE_MAX_MMAP_SIZE>0) && \ + SQLITE_WIN32_CREATEFILEMAPPINGA + { "CreateFileMappingA", (SYSCALL)CreateFileMappingA, 0 }, +#else + { "CreateFileMappingA", (SYSCALL)0, 0 }, +#endif + +#define osCreateFileMappingA ((HANDLE(WINAPI*)(HANDLE,LPSECURITY_ATTRIBUTES, \ + DWORD,DWORD,DWORD,LPCSTR))aSyscall[6].pCurrent) + +#if SQLITE_OS_WINCE || (!SQLITE_OS_WINRT && defined(SQLITE_WIN32_HAS_WIDE) && \ + (!defined(SQLITE_OMIT_WAL) || SQLITE_MAX_MMAP_SIZE>0)) + { "CreateFileMappingW", (SYSCALL)CreateFileMappingW, 0 }, +#else + { "CreateFileMappingW", (SYSCALL)0, 0 }, +#endif + +#define osCreateFileMappingW ((HANDLE(WINAPI*)(HANDLE,LPSECURITY_ATTRIBUTES, \ + DWORD,DWORD,DWORD,LPCWSTR))aSyscall[7].pCurrent) + +#if !SQLITE_OS_WINRT && defined(SQLITE_WIN32_HAS_WIDE) + { "CreateMutexW", (SYSCALL)CreateMutexW, 0 
}, +#else + { "CreateMutexW", (SYSCALL)0, 0 }, +#endif + +#define osCreateMutexW ((HANDLE(WINAPI*)(LPSECURITY_ATTRIBUTES,BOOL, \ + LPCWSTR))aSyscall[8].pCurrent) + +#if defined(SQLITE_WIN32_HAS_ANSI) + { "DeleteFileA", (SYSCALL)DeleteFileA, 0 }, +#else + { "DeleteFileA", (SYSCALL)0, 0 }, +#endif + +#define osDeleteFileA ((BOOL(WINAPI*)(LPCSTR))aSyscall[9].pCurrent) + +#if defined(SQLITE_WIN32_HAS_WIDE) + { "DeleteFileW", (SYSCALL)DeleteFileW, 0 }, +#else + { "DeleteFileW", (SYSCALL)0, 0 }, +#endif + +#define osDeleteFileW ((BOOL(WINAPI*)(LPCWSTR))aSyscall[10].pCurrent) + +#if SQLITE_OS_WINCE + { "FileTimeToLocalFileTime", (SYSCALL)FileTimeToLocalFileTime, 0 }, +#else + { "FileTimeToLocalFileTime", (SYSCALL)0, 0 }, +#endif + +#define osFileTimeToLocalFileTime ((BOOL(WINAPI*)(CONST FILETIME*, \ + LPFILETIME))aSyscall[11].pCurrent) + +#if SQLITE_OS_WINCE + { "FileTimeToSystemTime", (SYSCALL)FileTimeToSystemTime, 0 }, +#else + { "FileTimeToSystemTime", (SYSCALL)0, 0 }, +#endif + +#define osFileTimeToSystemTime ((BOOL(WINAPI*)(CONST FILETIME*, \ + LPSYSTEMTIME))aSyscall[12].pCurrent) + + { "FlushFileBuffers", (SYSCALL)FlushFileBuffers, 0 }, + +#define osFlushFileBuffers ((BOOL(WINAPI*)(HANDLE))aSyscall[13].pCurrent) + +#if defined(SQLITE_WIN32_HAS_ANSI) + { "FormatMessageA", (SYSCALL)FormatMessageA, 0 }, +#else + { "FormatMessageA", (SYSCALL)0, 0 }, +#endif + +#define osFormatMessageA ((DWORD(WINAPI*)(DWORD,LPCVOID,DWORD,DWORD,LPSTR, \ + DWORD,va_list*))aSyscall[14].pCurrent) + +#if defined(SQLITE_WIN32_HAS_WIDE) + { "FormatMessageW", (SYSCALL)FormatMessageW, 0 }, +#else + { "FormatMessageW", (SYSCALL)0, 0 }, +#endif + +#define osFormatMessageW ((DWORD(WINAPI*)(DWORD,LPCVOID,DWORD,DWORD,LPWSTR, \ + DWORD,va_list*))aSyscall[15].pCurrent) + +#if !defined(SQLITE_OMIT_LOAD_EXTENSION) + { "FreeLibrary", (SYSCALL)FreeLibrary, 0 }, +#else + { "FreeLibrary", (SYSCALL)0, 0 }, +#endif + +#define osFreeLibrary ((BOOL(WINAPI*)(HMODULE))aSyscall[16].pCurrent) + + { "GetCurrentProcessId", (SYSCALL)GetCurrentProcessId, 0 }, + +#define osGetCurrentProcessId ((DWORD(WINAPI*)(VOID))aSyscall[17].pCurrent) + +#if !SQLITE_OS_WINCE && defined(SQLITE_WIN32_HAS_ANSI) + { "GetDiskFreeSpaceA", (SYSCALL)GetDiskFreeSpaceA, 0 }, +#else + { "GetDiskFreeSpaceA", (SYSCALL)0, 0 }, +#endif + +#define osGetDiskFreeSpaceA ((BOOL(WINAPI*)(LPCSTR,LPDWORD,LPDWORD,LPDWORD, \ + LPDWORD))aSyscall[18].pCurrent) + +#if !SQLITE_OS_WINCE && !SQLITE_OS_WINRT && defined(SQLITE_WIN32_HAS_WIDE) + { "GetDiskFreeSpaceW", (SYSCALL)GetDiskFreeSpaceW, 0 }, +#else + { "GetDiskFreeSpaceW", (SYSCALL)0, 0 }, +#endif + +#define osGetDiskFreeSpaceW ((BOOL(WINAPI*)(LPCWSTR,LPDWORD,LPDWORD,LPDWORD, \ + LPDWORD))aSyscall[19].pCurrent) + +#if defined(SQLITE_WIN32_HAS_ANSI) + { "GetFileAttributesA", (SYSCALL)GetFileAttributesA, 0 }, +#else + { "GetFileAttributesA", (SYSCALL)0, 0 }, +#endif + +#define osGetFileAttributesA ((DWORD(WINAPI*)(LPCSTR))aSyscall[20].pCurrent) + +#if !SQLITE_OS_WINRT && defined(SQLITE_WIN32_HAS_WIDE) + { "GetFileAttributesW", (SYSCALL)GetFileAttributesW, 0 }, +#else + { "GetFileAttributesW", (SYSCALL)0, 0 }, +#endif + +#define osGetFileAttributesW ((DWORD(WINAPI*)(LPCWSTR))aSyscall[21].pCurrent) + +#if defined(SQLITE_WIN32_HAS_WIDE) + { "GetFileAttributesExW", (SYSCALL)GetFileAttributesExW, 0 }, +#else + { "GetFileAttributesExW", (SYSCALL)0, 0 }, +#endif + +#define osGetFileAttributesExW ((BOOL(WINAPI*)(LPCWSTR,GET_FILEEX_INFO_LEVELS, \ + LPVOID))aSyscall[22].pCurrent) + +#if !SQLITE_OS_WINRT + { "GetFileSize", (SYSCALL)GetFileSize, 0 
}, +#else + { "GetFileSize", (SYSCALL)0, 0 }, +#endif + +#define osGetFileSize ((DWORD(WINAPI*)(HANDLE,LPDWORD))aSyscall[23].pCurrent) + +#if !SQLITE_OS_WINCE && defined(SQLITE_WIN32_HAS_ANSI) + { "GetFullPathNameA", (SYSCALL)GetFullPathNameA, 0 }, +#else + { "GetFullPathNameA", (SYSCALL)0, 0 }, +#endif + +#define osGetFullPathNameA ((DWORD(WINAPI*)(LPCSTR,DWORD,LPSTR, \ + LPSTR*))aSyscall[24].pCurrent) + +#if !SQLITE_OS_WINCE && !SQLITE_OS_WINRT && defined(SQLITE_WIN32_HAS_WIDE) + { "GetFullPathNameW", (SYSCALL)GetFullPathNameW, 0 }, +#else + { "GetFullPathNameW", (SYSCALL)0, 0 }, +#endif + +#define osGetFullPathNameW ((DWORD(WINAPI*)(LPCWSTR,DWORD,LPWSTR, \ + LPWSTR*))aSyscall[25].pCurrent) + + { "GetLastError", (SYSCALL)GetLastError, 0 }, + +#define osGetLastError ((DWORD(WINAPI*)(VOID))aSyscall[26].pCurrent) + +#if !defined(SQLITE_OMIT_LOAD_EXTENSION) +#if SQLITE_OS_WINCE + /* The GetProcAddressA() routine is only available on Windows CE. */ + { "GetProcAddressA", (SYSCALL)GetProcAddressA, 0 }, +#else + /* All other Windows platforms expect GetProcAddress() to take + ** an ANSI string regardless of the _UNICODE setting */ + { "GetProcAddressA", (SYSCALL)GetProcAddress, 0 }, +#endif +#else + { "GetProcAddressA", (SYSCALL)0, 0 }, +#endif + +#define osGetProcAddressA ((FARPROC(WINAPI*)(HMODULE, \ + LPCSTR))aSyscall[27].pCurrent) + +#if !SQLITE_OS_WINRT + { "GetSystemInfo", (SYSCALL)GetSystemInfo, 0 }, +#else + { "GetSystemInfo", (SYSCALL)0, 0 }, +#endif + +#define osGetSystemInfo ((VOID(WINAPI*)(LPSYSTEM_INFO))aSyscall[28].pCurrent) + + { "GetSystemTime", (SYSCALL)GetSystemTime, 0 }, + +#define osGetSystemTime ((VOID(WINAPI*)(LPSYSTEMTIME))aSyscall[29].pCurrent) + +#if !SQLITE_OS_WINCE + { "GetSystemTimeAsFileTime", (SYSCALL)GetSystemTimeAsFileTime, 0 }, +#else + { "GetSystemTimeAsFileTime", (SYSCALL)0, 0 }, +#endif + +#define osGetSystemTimeAsFileTime ((VOID(WINAPI*)( \ + LPFILETIME))aSyscall[30].pCurrent) + +#if defined(SQLITE_WIN32_HAS_ANSI) + { "GetTempPathA", (SYSCALL)GetTempPathA, 0 }, +#else + { "GetTempPathA", (SYSCALL)0, 0 }, +#endif + +#define osGetTempPathA ((DWORD(WINAPI*)(DWORD,LPSTR))aSyscall[31].pCurrent) + +#if !SQLITE_OS_WINRT && defined(SQLITE_WIN32_HAS_WIDE) + { "GetTempPathW", (SYSCALL)GetTempPathW, 0 }, +#else + { "GetTempPathW", (SYSCALL)0, 0 }, +#endif + +#define osGetTempPathW ((DWORD(WINAPI*)(DWORD,LPWSTR))aSyscall[32].pCurrent) + +#if !SQLITE_OS_WINRT + { "GetTickCount", (SYSCALL)GetTickCount, 0 }, +#else + { "GetTickCount", (SYSCALL)0, 0 }, +#endif + +#define osGetTickCount ((DWORD(WINAPI*)(VOID))aSyscall[33].pCurrent) + +#if defined(SQLITE_WIN32_HAS_ANSI) && SQLITE_WIN32_GETVERSIONEX + { "GetVersionExA", (SYSCALL)GetVersionExA, 0 }, +#else + { "GetVersionExA", (SYSCALL)0, 0 }, +#endif + +#define osGetVersionExA ((BOOL(WINAPI*)( \ + LPOSVERSIONINFOA))aSyscall[34].pCurrent) + +#if !SQLITE_OS_WINRT && defined(SQLITE_WIN32_HAS_WIDE) && \ + SQLITE_WIN32_GETVERSIONEX + { "GetVersionExW", (SYSCALL)GetVersionExW, 0 }, +#else + { "GetVersionExW", (SYSCALL)0, 0 }, +#endif + +#define osGetVersionExW ((BOOL(WINAPI*)( \ + LPOSVERSIONINFOW))aSyscall[35].pCurrent) + + { "HeapAlloc", (SYSCALL)HeapAlloc, 0 }, + +#define osHeapAlloc ((LPVOID(WINAPI*)(HANDLE,DWORD, \ + SIZE_T))aSyscall[36].pCurrent) + +#if !SQLITE_OS_WINRT + { "HeapCreate", (SYSCALL)HeapCreate, 0 }, +#else + { "HeapCreate", (SYSCALL)0, 0 }, +#endif + +#define osHeapCreate ((HANDLE(WINAPI*)(DWORD,SIZE_T, \ + SIZE_T))aSyscall[37].pCurrent) + +#if !SQLITE_OS_WINRT + { "HeapDestroy", (SYSCALL)HeapDestroy, 0 }, +#else 
+ { "HeapDestroy", (SYSCALL)0, 0 }, +#endif + +#define osHeapDestroy ((BOOL(WINAPI*)(HANDLE))aSyscall[38].pCurrent) + + { "HeapFree", (SYSCALL)HeapFree, 0 }, + +#define osHeapFree ((BOOL(WINAPI*)(HANDLE,DWORD,LPVOID))aSyscall[39].pCurrent) + + { "HeapReAlloc", (SYSCALL)HeapReAlloc, 0 }, + +#define osHeapReAlloc ((LPVOID(WINAPI*)(HANDLE,DWORD,LPVOID, \ + SIZE_T))aSyscall[40].pCurrent) + + { "HeapSize", (SYSCALL)HeapSize, 0 }, + +#define osHeapSize ((SIZE_T(WINAPI*)(HANDLE,DWORD, \ + LPCVOID))aSyscall[41].pCurrent) + +#if !SQLITE_OS_WINRT + { "HeapValidate", (SYSCALL)HeapValidate, 0 }, +#else + { "HeapValidate", (SYSCALL)0, 0 }, +#endif + +#define osHeapValidate ((BOOL(WINAPI*)(HANDLE,DWORD, \ + LPCVOID))aSyscall[42].pCurrent) + +#if !SQLITE_OS_WINCE && !SQLITE_OS_WINRT + { "HeapCompact", (SYSCALL)HeapCompact, 0 }, +#else + { "HeapCompact", (SYSCALL)0, 0 }, +#endif + +#define osHeapCompact ((UINT(WINAPI*)(HANDLE,DWORD))aSyscall[43].pCurrent) + +#if defined(SQLITE_WIN32_HAS_ANSI) && !defined(SQLITE_OMIT_LOAD_EXTENSION) + { "LoadLibraryA", (SYSCALL)LoadLibraryA, 0 }, +#else + { "LoadLibraryA", (SYSCALL)0, 0 }, +#endif + +#define osLoadLibraryA ((HMODULE(WINAPI*)(LPCSTR))aSyscall[44].pCurrent) + +#if !SQLITE_OS_WINRT && defined(SQLITE_WIN32_HAS_WIDE) && \ + !defined(SQLITE_OMIT_LOAD_EXTENSION) + { "LoadLibraryW", (SYSCALL)LoadLibraryW, 0 }, +#else + { "LoadLibraryW", (SYSCALL)0, 0 }, +#endif + +#define osLoadLibraryW ((HMODULE(WINAPI*)(LPCWSTR))aSyscall[45].pCurrent) + +#if !SQLITE_OS_WINRT + { "LocalFree", (SYSCALL)LocalFree, 0 }, +#else + { "LocalFree", (SYSCALL)0, 0 }, +#endif + +#define osLocalFree ((HLOCAL(WINAPI*)(HLOCAL))aSyscall[46].pCurrent) + +#if !SQLITE_OS_WINCE && !SQLITE_OS_WINRT + { "LockFile", (SYSCALL)LockFile, 0 }, +#else + { "LockFile", (SYSCALL)0, 0 }, +#endif + +#ifndef osLockFile +#define osLockFile ((BOOL(WINAPI*)(HANDLE,DWORD,DWORD,DWORD, \ + DWORD))aSyscall[47].pCurrent) +#endif + +#if !SQLITE_OS_WINCE + { "LockFileEx", (SYSCALL)LockFileEx, 0 }, +#else + { "LockFileEx", (SYSCALL)0, 0 }, +#endif + +#ifndef osLockFileEx +#define osLockFileEx ((BOOL(WINAPI*)(HANDLE,DWORD,DWORD,DWORD,DWORD, \ + LPOVERLAPPED))aSyscall[48].pCurrent) +#endif + +#if SQLITE_OS_WINCE || (!SQLITE_OS_WINRT && \ + (!defined(SQLITE_OMIT_WAL) || SQLITE_MAX_MMAP_SIZE>0)) + { "MapViewOfFile", (SYSCALL)MapViewOfFile, 0 }, +#else + { "MapViewOfFile", (SYSCALL)0, 0 }, +#endif + +#define osMapViewOfFile ((LPVOID(WINAPI*)(HANDLE,DWORD,DWORD,DWORD, \ + SIZE_T))aSyscall[49].pCurrent) + + { "MultiByteToWideChar", (SYSCALL)MultiByteToWideChar, 0 }, + +#define osMultiByteToWideChar ((int(WINAPI*)(UINT,DWORD,LPCSTR,int,LPWSTR, \ + int))aSyscall[50].pCurrent) + + { "QueryPerformanceCounter", (SYSCALL)QueryPerformanceCounter, 0 }, + +#define osQueryPerformanceCounter ((BOOL(WINAPI*)( \ + LARGE_INTEGER*))aSyscall[51].pCurrent) + + { "ReadFile", (SYSCALL)ReadFile, 0 }, + +#define osReadFile ((BOOL(WINAPI*)(HANDLE,LPVOID,DWORD,LPDWORD, \ + LPOVERLAPPED))aSyscall[52].pCurrent) + + { "SetEndOfFile", (SYSCALL)SetEndOfFile, 0 }, + +#define osSetEndOfFile ((BOOL(WINAPI*)(HANDLE))aSyscall[53].pCurrent) + +#if !SQLITE_OS_WINRT + { "SetFilePointer", (SYSCALL)SetFilePointer, 0 }, +#else + { "SetFilePointer", (SYSCALL)0, 0 }, +#endif + +#define osSetFilePointer ((DWORD(WINAPI*)(HANDLE,LONG,PLONG, \ + DWORD))aSyscall[54].pCurrent) + +#if !SQLITE_OS_WINRT + { "Sleep", (SYSCALL)Sleep, 0 }, +#else + { "Sleep", (SYSCALL)0, 0 }, +#endif + +#define osSleep ((VOID(WINAPI*)(DWORD))aSyscall[55].pCurrent) + + { "SystemTimeToFileTime", 
(SYSCALL)SystemTimeToFileTime, 0 }, + +#define osSystemTimeToFileTime ((BOOL(WINAPI*)(CONST SYSTEMTIME*, \ + LPFILETIME))aSyscall[56].pCurrent) + +#if !SQLITE_OS_WINCE && !SQLITE_OS_WINRT + { "UnlockFile", (SYSCALL)UnlockFile, 0 }, +#else + { "UnlockFile", (SYSCALL)0, 0 }, +#endif + +#ifndef osUnlockFile +#define osUnlockFile ((BOOL(WINAPI*)(HANDLE,DWORD,DWORD,DWORD, \ + DWORD))aSyscall[57].pCurrent) +#endif + +#if !SQLITE_OS_WINCE + { "UnlockFileEx", (SYSCALL)UnlockFileEx, 0 }, +#else + { "UnlockFileEx", (SYSCALL)0, 0 }, +#endif + +#define osUnlockFileEx ((BOOL(WINAPI*)(HANDLE,DWORD,DWORD,DWORD, \ + LPOVERLAPPED))aSyscall[58].pCurrent) + +#if SQLITE_OS_WINCE || !defined(SQLITE_OMIT_WAL) || SQLITE_MAX_MMAP_SIZE>0 + { "UnmapViewOfFile", (SYSCALL)UnmapViewOfFile, 0 }, +#else + { "UnmapViewOfFile", (SYSCALL)0, 0 }, +#endif + +#define osUnmapViewOfFile ((BOOL(WINAPI*)(LPCVOID))aSyscall[59].pCurrent) + + { "WideCharToMultiByte", (SYSCALL)WideCharToMultiByte, 0 }, + +#define osWideCharToMultiByte ((int(WINAPI*)(UINT,DWORD,LPCWSTR,int,LPSTR,int, \ + LPCSTR,LPBOOL))aSyscall[60].pCurrent) + + { "WriteFile", (SYSCALL)WriteFile, 0 }, + +#define osWriteFile ((BOOL(WINAPI*)(HANDLE,LPCVOID,DWORD,LPDWORD, \ + LPOVERLAPPED))aSyscall[61].pCurrent) + +#if SQLITE_OS_WINRT + { "CreateEventExW", (SYSCALL)CreateEventExW, 0 }, +#else + { "CreateEventExW", (SYSCALL)0, 0 }, +#endif + +#define osCreateEventExW ((HANDLE(WINAPI*)(LPSECURITY_ATTRIBUTES,LPCWSTR, \ + DWORD,DWORD))aSyscall[62].pCurrent) + +#if !SQLITE_OS_WINRT + { "WaitForSingleObject", (SYSCALL)WaitForSingleObject, 0 }, +#else + { "WaitForSingleObject", (SYSCALL)0, 0 }, +#endif + +#define osWaitForSingleObject ((DWORD(WINAPI*)(HANDLE, \ + DWORD))aSyscall[63].pCurrent) + +#if !SQLITE_OS_WINCE + { "WaitForSingleObjectEx", (SYSCALL)WaitForSingleObjectEx, 0 }, +#else + { "WaitForSingleObjectEx", (SYSCALL)0, 0 }, +#endif + +#define osWaitForSingleObjectEx ((DWORD(WINAPI*)(HANDLE,DWORD, \ + BOOL))aSyscall[64].pCurrent) + +#if SQLITE_OS_WINRT + { "SetFilePointerEx", (SYSCALL)SetFilePointerEx, 0 }, +#else + { "SetFilePointerEx", (SYSCALL)0, 0 }, +#endif + +#define osSetFilePointerEx ((BOOL(WINAPI*)(HANDLE,LARGE_INTEGER, \ + PLARGE_INTEGER,DWORD))aSyscall[65].pCurrent) + +#if SQLITE_OS_WINRT + { "GetFileInformationByHandleEx", (SYSCALL)GetFileInformationByHandleEx, 0 }, +#else + { "GetFileInformationByHandleEx", (SYSCALL)0, 0 }, +#endif + +#define osGetFileInformationByHandleEx ((BOOL(WINAPI*)(HANDLE, \ + FILE_INFO_BY_HANDLE_CLASS,LPVOID,DWORD))aSyscall[66].pCurrent) + +#if SQLITE_OS_WINRT && (!defined(SQLITE_OMIT_WAL) || SQLITE_MAX_MMAP_SIZE>0) + { "MapViewOfFileFromApp", (SYSCALL)MapViewOfFileFromApp, 0 }, +#else + { "MapViewOfFileFromApp", (SYSCALL)0, 0 }, +#endif + +#define osMapViewOfFileFromApp ((LPVOID(WINAPI*)(HANDLE,ULONG,ULONG64, \ + SIZE_T))aSyscall[67].pCurrent) + +#if SQLITE_OS_WINRT + { "CreateFile2", (SYSCALL)CreateFile2, 0 }, +#else + { "CreateFile2", (SYSCALL)0, 0 }, +#endif + +#define osCreateFile2 ((HANDLE(WINAPI*)(LPCWSTR,DWORD,DWORD,DWORD, \ + LPCREATEFILE2_EXTENDED_PARAMETERS))aSyscall[68].pCurrent) + +#if SQLITE_OS_WINRT && !defined(SQLITE_OMIT_LOAD_EXTENSION) + { "LoadPackagedLibrary", (SYSCALL)LoadPackagedLibrary, 0 }, +#else + { "LoadPackagedLibrary", (SYSCALL)0, 0 }, +#endif + +#define osLoadPackagedLibrary ((HMODULE(WINAPI*)(LPCWSTR, \ + DWORD))aSyscall[69].pCurrent) + +#if SQLITE_OS_WINRT + { "GetTickCount64", (SYSCALL)GetTickCount64, 0 }, +#else + { "GetTickCount64", (SYSCALL)0, 0 }, +#endif + +#define osGetTickCount64 
((ULONGLONG(WINAPI*)(VOID))aSyscall[70].pCurrent) + +#if SQLITE_OS_WINRT + { "GetNativeSystemInfo", (SYSCALL)GetNativeSystemInfo, 0 }, +#else + { "GetNativeSystemInfo", (SYSCALL)0, 0 }, +#endif + +#define osGetNativeSystemInfo ((VOID(WINAPI*)( \ + LPSYSTEM_INFO))aSyscall[71].pCurrent) + +#if defined(SQLITE_WIN32_HAS_ANSI) + { "OutputDebugStringA", (SYSCALL)OutputDebugStringA, 0 }, +#else + { "OutputDebugStringA", (SYSCALL)0, 0 }, +#endif + +#define osOutputDebugStringA ((VOID(WINAPI*)(LPCSTR))aSyscall[72].pCurrent) + +#if defined(SQLITE_WIN32_HAS_WIDE) + { "OutputDebugStringW", (SYSCALL)OutputDebugStringW, 0 }, +#else + { "OutputDebugStringW", (SYSCALL)0, 0 }, +#endif + +#define osOutputDebugStringW ((VOID(WINAPI*)(LPCWSTR))aSyscall[73].pCurrent) + + { "GetProcessHeap", (SYSCALL)GetProcessHeap, 0 }, + +#define osGetProcessHeap ((HANDLE(WINAPI*)(VOID))aSyscall[74].pCurrent) + +#if SQLITE_OS_WINRT && (!defined(SQLITE_OMIT_WAL) || SQLITE_MAX_MMAP_SIZE>0) + { "CreateFileMappingFromApp", (SYSCALL)CreateFileMappingFromApp, 0 }, +#else + { "CreateFileMappingFromApp", (SYSCALL)0, 0 }, +#endif + +#define osCreateFileMappingFromApp ((HANDLE(WINAPI*)(HANDLE, \ + LPSECURITY_ATTRIBUTES,ULONG,ULONG64,LPCWSTR))aSyscall[75].pCurrent) + +/* +** NOTE: On some sub-platforms, the InterlockedCompareExchange "function" +** is really just a macro that uses a compiler intrinsic (e.g. x64). +** So do not try to make this is into a redefinable interface. +*/ +#if defined(InterlockedCompareExchange) + { "InterlockedCompareExchange", (SYSCALL)0, 0 }, + +#define osInterlockedCompareExchange InterlockedCompareExchange +#else + { "InterlockedCompareExchange", (SYSCALL)InterlockedCompareExchange, 0 }, + +#define osInterlockedCompareExchange ((LONG(WINAPI*)(LONG \ + SQLITE_WIN32_VOLATILE*, LONG,LONG))aSyscall[76].pCurrent) +#endif /* defined(InterlockedCompareExchange) */ + +#if !SQLITE_OS_WINCE && !SQLITE_OS_WINRT && SQLITE_WIN32_USE_UUID + { "UuidCreate", (SYSCALL)UuidCreate, 0 }, +#else + { "UuidCreate", (SYSCALL)0, 0 }, +#endif + +#define osUuidCreate ((RPC_STATUS(RPC_ENTRY*)(UUID*))aSyscall[77].pCurrent) + +#if !SQLITE_OS_WINCE && !SQLITE_OS_WINRT && SQLITE_WIN32_USE_UUID + { "UuidCreateSequential", (SYSCALL)UuidCreateSequential, 0 }, +#else + { "UuidCreateSequential", (SYSCALL)0, 0 }, +#endif + +#define osUuidCreateSequential \ + ((RPC_STATUS(RPC_ENTRY*)(UUID*))aSyscall[78].pCurrent) + +#if !defined(SQLITE_NO_SYNC) && SQLITE_MAX_MMAP_SIZE>0 + { "FlushViewOfFile", (SYSCALL)FlushViewOfFile, 0 }, +#else + { "FlushViewOfFile", (SYSCALL)0, 0 }, +#endif + +#define osFlushViewOfFile \ + ((BOOL(WINAPI*)(LPCVOID,SIZE_T))aSyscall[79].pCurrent) + +}; /* End of the overrideable system calls */ + +/* +** This is the xSetSystemCall() method of sqlite3_vfs for all of the +** "win32" VFSes. Return SQLITE_OK opon successfully updating the +** system call pointer, or SQLITE_NOTFOUND if there is no configurable +** system call named zName. +*/ +static int winSetSystemCall( + sqlite3_vfs *pNotUsed, /* The VFS pointer. 
Not used */ + const char *zName, /* Name of system call to override */ + sqlite3_syscall_ptr pNewFunc /* Pointer to new system call value */ +){ + unsigned int i; + int rc = SQLITE_NOTFOUND; + + UNUSED_PARAMETER(pNotUsed); + if( zName==0 ){ + /* If no zName is given, restore all system calls to their default + ** settings and return NULL + */ + rc = SQLITE_OK; + for(i=0; i<sizeof(aSyscall)/sizeof(aSyscall[0]); i++){ + if( aSyscall[i].pDefault ){ + aSyscall[i].pCurrent = aSyscall[i].pDefault; + } + } + }else{ + /* If zName is specified, operate on only the one system call + ** specified. + */ + for(i=0; i<sizeof(aSyscall)/sizeof(aSyscall[0]); i++){ + if( strcmp(zName, aSyscall[i].zName)==0 ){ + if( aSyscall[i].pDefault==0 ){ + aSyscall[i].pDefault = aSyscall[i].pCurrent; + } + rc = SQLITE_OK; + if( pNewFunc==0 ) pNewFunc = aSyscall[i].pDefault; + aSyscall[i].pCurrent = pNewFunc; + break; + } + } + } + return rc; +} + +/* +** Return the value of a system call. Return NULL if zName is not a +** recognized system call name. NULL is also returned if the system call +** is currently undefined. +*/ +static sqlite3_syscall_ptr winGetSystemCall( + sqlite3_vfs *pNotUsed, + const char *zName +){ + unsigned int i; + + UNUSED_PARAMETER(pNotUsed); + for(i=0; i<sizeof(aSyscall)/sizeof(aSyscall[0]); i++){ + if( strcmp(zName, aSyscall[i].zName)==0 ) return aSyscall[i].pCurrent; + } + return 0; +} + +/* +** Return the name of the first system call after zName. If zName==NULL +** then return the name of the first system call. Return NULL if zName +** is the last system call or if zName is not the name of a valid +** system call. +*/ +static const char *winNextSystemCall(sqlite3_vfs *p, const char *zName){ + int i = -1; + + UNUSED_PARAMETER(p); + if( zName ){ + for(i=0; i<ArraySize(aSyscall)-1; i++){ + if( strcmp(zName, aSyscall[i].zName)==0 ) break; + } + } + for(i++; i<ArraySize(aSyscall); i++){ + if( aSyscall[i].pCurrent!=0 ) return aSyscall[i].zName; + } + return 0; +} + +#ifdef SQLITE_WIN32_MALLOC +/* +** If a Win32 native heap has been configured, this function will attempt to +** compact it. Upon success, SQLITE_OK will be returned. Upon failure, one +** of SQLITE_NOMEM, SQLITE_ERROR, or SQLITE_NOTFOUND will be returned. The +** "pnLargest" argument, if non-zero, will be used to return the size of the +** largest committed free block in the heap, in bytes. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_win32_compact_heap(LPUINT pnLargest){ + int rc = SQLITE_OK; + UINT nLargest = 0; + HANDLE hHeap; + + winMemAssertMagic(); + hHeap = winMemGetHeap(); + assert( hHeap!=0 ); + assert( hHeap!=INVALID_HANDLE_VALUE ); +#if !SQLITE_OS_WINRT && defined(SQLITE_WIN32_MALLOC_VALIDATE) + assert( osHeapValidate(hHeap, SQLITE_WIN32_HEAP_FLAGS, NULL) ); +#endif +#if !SQLITE_OS_WINCE && !SQLITE_OS_WINRT + if( (nLargest=osHeapCompact(hHeap, SQLITE_WIN32_HEAP_FLAGS))==0 ){ + DWORD lastErrno = osGetLastError(); + if( lastErrno==NO_ERROR ){ + sqlite3_log(SQLITE_NOMEM, "failed to HeapCompact (no space), heap=%p", + (void*)hHeap); + rc = SQLITE_NOMEM; + }else{ + sqlite3_log(SQLITE_ERROR, "failed to HeapCompact (%lu), heap=%p", + osGetLastError(), (void*)hHeap); + rc = SQLITE_ERROR; + } + } +#else + sqlite3_log(SQLITE_NOTFOUND, "failed to HeapCompact, heap=%p", + (void*)hHeap); + rc = SQLITE_NOTFOUND; +#endif + if( pnLargest ) *pnLargest = nLargest; + return rc; +} + +/* +** If a Win32 native heap has been configured, this function will attempt to +** destroy and recreate it. 
If the Win32 native heap is not isolated and/or +** the sqlite3_memory_used() function does not return zero, SQLITE_BUSY will +** be returned and no changes will be made to the Win32 native heap. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_win32_reset_heap(){ + int rc; + MUTEX_LOGIC( sqlite3_mutex *pMaster; ) /* The main static mutex */ + MUTEX_LOGIC( sqlite3_mutex *pMem; ) /* The memsys static mutex */ + MUTEX_LOGIC( pMaster = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER); ) + MUTEX_LOGIC( pMem = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MEM); ) + sqlite3_mutex_enter(pMaster); + sqlite3_mutex_enter(pMem); + winMemAssertMagic(); + if( winMemGetHeap()!=NULL && winMemGetOwned() && sqlite3_memory_used()==0 ){ + /* + ** At this point, there should be no outstanding memory allocations on + ** the heap. Also, since both the master and memsys locks are currently + ** being held by us, no other function (i.e. from another thread) should + ** be able to even access the heap. Attempt to destroy and recreate our + ** isolated Win32 native heap now. + */ + assert( winMemGetHeap()!=NULL ); + assert( winMemGetOwned() ); + assert( sqlite3_memory_used()==0 ); + winMemShutdown(winMemGetDataPtr()); + assert( winMemGetHeap()==NULL ); + assert( !winMemGetOwned() ); + assert( sqlite3_memory_used()==0 ); + rc = winMemInit(winMemGetDataPtr()); + assert( rc!=SQLITE_OK || winMemGetHeap()!=NULL ); + assert( rc!=SQLITE_OK || winMemGetOwned() ); + assert( rc!=SQLITE_OK || sqlite3_memory_used()==0 ); + }else{ + /* + ** The Win32 native heap cannot be modified because it may be in use. + */ + rc = SQLITE_BUSY; + } + sqlite3_mutex_leave(pMem); + sqlite3_mutex_leave(pMaster); + return rc; +} +#endif /* SQLITE_WIN32_MALLOC */ + +/* +** This function outputs the specified (ANSI) string to the Win32 debugger +** (if available). +*/ + +SQLITE_API void SQLITE_STDCALL sqlite3_win32_write_debug(const char *zBuf, int nBuf){ + char zDbgBuf[SQLITE_WIN32_DBG_BUF_SIZE]; + int nMin = MIN(nBuf, (SQLITE_WIN32_DBG_BUF_SIZE - 1)); /* may be negative. */ + if( nMin<-1 ) nMin = -1; /* all negative values become -1. */ + assert( nMin==-1 || nMin==0 || nMin<SQLITE_WIN32_DBG_BUF_SIZE ); +#if defined(SQLITE_WIN32_HAS_ANSI) + if( nMin>0 ){ + memset(zDbgBuf, 0, SQLITE_WIN32_DBG_BUF_SIZE); + memcpy(zDbgBuf, zBuf, nMin); + osOutputDebugStringA(zDbgBuf); + }else{ + osOutputDebugStringA(zBuf); + } +#elif defined(SQLITE_WIN32_HAS_WIDE) + memset(zDbgBuf, 0, SQLITE_WIN32_DBG_BUF_SIZE); + if ( osMultiByteToWideChar( + osAreFileApisANSI() ? CP_ACP : CP_OEMCP, 0, zBuf, + nMin, (LPWSTR)zDbgBuf, SQLITE_WIN32_DBG_BUF_SIZE/sizeof(WCHAR))<=0 ){ + return; + } + osOutputDebugStringW((LPCWSTR)zDbgBuf); +#else + if( nMin>0 ){ + memset(zDbgBuf, 0, SQLITE_WIN32_DBG_BUF_SIZE); + memcpy(zDbgBuf, zBuf, nMin); + fprintf(stderr, "%s", zDbgBuf); + }else{ + fprintf(stderr, "%s", zBuf); + } +#endif +} + +/* +** The following routine suspends the current thread for at least ms +** milliseconds. This is equivalent to the Win32 Sleep() interface. 
+*/ +#if SQLITE_OS_WINRT +static HANDLE sleepObj = NULL; +#endif + +SQLITE_API void SQLITE_STDCALL sqlite3_win32_sleep(DWORD milliseconds){ +#if SQLITE_OS_WINRT + if ( sleepObj==NULL ){ + sleepObj = osCreateEventExW(NULL, NULL, CREATE_EVENT_MANUAL_RESET, + SYNCHRONIZE); + } + assert( sleepObj!=NULL ); + osWaitForSingleObjectEx(sleepObj, milliseconds, FALSE); +#else + osSleep(milliseconds); +#endif +} + +#if SQLITE_MAX_WORKER_THREADS>0 && !SQLITE_OS_WINCE && !SQLITE_OS_WINRT && \ + SQLITE_THREADSAFE>0 +SQLITE_PRIVATE DWORD sqlite3Win32Wait(HANDLE hObject){ + DWORD rc; + while( (rc = osWaitForSingleObjectEx(hObject, INFINITE, + TRUE))==WAIT_IO_COMPLETION ){} + return rc; +} #endif /* ** Return true (non-zero) if we are running under WinNT, Win2K, WinXP, ** or WinCE. Return false (zero) for Win95, Win98, or WinME. @@ -28352,161 +36430,660 @@ ** API as long as we don't call it when running Win95/98/ME. A call to ** this routine is used to determine if the host is Win95/98/ME or ** WinNT/2K/XP so that we will know whether or not we can safely call ** the LockFileEx() API. */ -#if SQLITE_OS_WINCE -# define isNT() (1) -#else - static int isNT(void){ - if( sqlite3_os_type==0 ){ - OSVERSIONINFO sInfo; - sInfo.dwOSVersionInfoSize = sizeof(sInfo); - GetVersionEx(&sInfo); - sqlite3_os_type = sInfo.dwPlatformId==VER_PLATFORM_WIN32_NT ? 2 : 1; - } - return sqlite3_os_type==2; - } -#endif /* SQLITE_OS_WINCE */ - -/* -** Convert a UTF-8 string to microsoft unicode (UTF-16?). + +#if !SQLITE_WIN32_GETVERSIONEX +# define osIsNT() (1) +#elif SQLITE_OS_WINCE || SQLITE_OS_WINRT || !defined(SQLITE_WIN32_HAS_ANSI) +# define osIsNT() (1) +#elif !defined(SQLITE_WIN32_HAS_WIDE) +# define osIsNT() (0) +#else +# define osIsNT() ((sqlite3_os_type==2) || sqlite3_win32_is_nt()) +#endif + +/* +** This function determines if the machine is running a version of Windows +** based on the NT kernel. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_win32_is_nt(void){ +#if SQLITE_OS_WINRT + /* + ** NOTE: The WinRT sub-platform is always assumed to be based on the NT + ** kernel. + */ + return 1; +#elif SQLITE_WIN32_GETVERSIONEX + if( osInterlockedCompareExchange(&sqlite3_os_type, 0, 0)==0 ){ +#if defined(SQLITE_WIN32_HAS_ANSI) + OSVERSIONINFOA sInfo; + sInfo.dwOSVersionInfoSize = sizeof(sInfo); + osGetVersionExA(&sInfo); + osInterlockedCompareExchange(&sqlite3_os_type, + (sInfo.dwPlatformId == VER_PLATFORM_WIN32_NT) ? 2 : 1, 0); +#elif defined(SQLITE_WIN32_HAS_WIDE) + OSVERSIONINFOW sInfo; + sInfo.dwOSVersionInfoSize = sizeof(sInfo); + osGetVersionExW(&sInfo); + osInterlockedCompareExchange(&sqlite3_os_type, + (sInfo.dwPlatformId == VER_PLATFORM_WIN32_NT) ? 2 : 1, 0); +#endif + } + return osInterlockedCompareExchange(&sqlite3_os_type, 2, 2)==2; +#elif SQLITE_TEST + return osInterlockedCompareExchange(&sqlite3_os_type, 2, 2)==2; +#else + /* + ** NOTE: All sub-platforms where the GetVersionEx[AW] functions are + ** deprecated are always assumed to be based on the NT kernel. + */ + return 1; +#endif +} + +#ifdef SQLITE_WIN32_MALLOC +/* +** Allocate nBytes of memory. 
+*/ +static void *winMemMalloc(int nBytes){ + HANDLE hHeap; + void *p; + + winMemAssertMagic(); + hHeap = winMemGetHeap(); + assert( hHeap!=0 ); + assert( hHeap!=INVALID_HANDLE_VALUE ); +#if !SQLITE_OS_WINRT && defined(SQLITE_WIN32_MALLOC_VALIDATE) + assert( osHeapValidate(hHeap, SQLITE_WIN32_HEAP_FLAGS, NULL) ); +#endif + assert( nBytes>=0 ); + p = osHeapAlloc(hHeap, SQLITE_WIN32_HEAP_FLAGS, (SIZE_T)nBytes); + if( !p ){ + sqlite3_log(SQLITE_NOMEM, "failed to HeapAlloc %u bytes (%lu), heap=%p", + nBytes, osGetLastError(), (void*)hHeap); + } + return p; +} + +/* +** Free memory. +*/ +static void winMemFree(void *pPrior){ + HANDLE hHeap; + + winMemAssertMagic(); + hHeap = winMemGetHeap(); + assert( hHeap!=0 ); + assert( hHeap!=INVALID_HANDLE_VALUE ); +#if !SQLITE_OS_WINRT && defined(SQLITE_WIN32_MALLOC_VALIDATE) + assert( osHeapValidate(hHeap, SQLITE_WIN32_HEAP_FLAGS, pPrior) ); +#endif + if( !pPrior ) return; /* Passing NULL to HeapFree is undefined. */ + if( !osHeapFree(hHeap, SQLITE_WIN32_HEAP_FLAGS, pPrior) ){ + sqlite3_log(SQLITE_NOMEM, "failed to HeapFree block %p (%lu), heap=%p", + pPrior, osGetLastError(), (void*)hHeap); + } +} + +/* +** Change the size of an existing memory allocation +*/ +static void *winMemRealloc(void *pPrior, int nBytes){ + HANDLE hHeap; + void *p; + + winMemAssertMagic(); + hHeap = winMemGetHeap(); + assert( hHeap!=0 ); + assert( hHeap!=INVALID_HANDLE_VALUE ); +#if !SQLITE_OS_WINRT && defined(SQLITE_WIN32_MALLOC_VALIDATE) + assert( osHeapValidate(hHeap, SQLITE_WIN32_HEAP_FLAGS, pPrior) ); +#endif + assert( nBytes>=0 ); + if( !pPrior ){ + p = osHeapAlloc(hHeap, SQLITE_WIN32_HEAP_FLAGS, (SIZE_T)nBytes); + }else{ + p = osHeapReAlloc(hHeap, SQLITE_WIN32_HEAP_FLAGS, pPrior, (SIZE_T)nBytes); + } + if( !p ){ + sqlite3_log(SQLITE_NOMEM, "failed to %s %u bytes (%lu), heap=%p", + pPrior ? "HeapReAlloc" : "HeapAlloc", nBytes, osGetLastError(), + (void*)hHeap); + } + return p; +} + +/* +** Return the size of an outstanding allocation, in bytes. +*/ +static int winMemSize(void *p){ + HANDLE hHeap; + SIZE_T n; + + winMemAssertMagic(); + hHeap = winMemGetHeap(); + assert( hHeap!=0 ); + assert( hHeap!=INVALID_HANDLE_VALUE ); +#if !SQLITE_OS_WINRT && defined(SQLITE_WIN32_MALLOC_VALIDATE) + assert( osHeapValidate(hHeap, SQLITE_WIN32_HEAP_FLAGS, p) ); +#endif + if( !p ) return 0; + n = osHeapSize(hHeap, SQLITE_WIN32_HEAP_FLAGS, p); + if( n==(SIZE_T)-1 ){ + sqlite3_log(SQLITE_NOMEM, "failed to HeapSize block %p (%lu), heap=%p", + p, osGetLastError(), (void*)hHeap); + return 0; + } + return (int)n; +} + +/* +** Round up a request size to the next valid allocation size. +*/ +static int winMemRoundup(int n){ + return n; +} + +/* +** Initialize this module. 
+*/ +static int winMemInit(void *pAppData){ + winMemData *pWinMemData = (winMemData *)pAppData; + + if( !pWinMemData ) return SQLITE_ERROR; + assert( pWinMemData->magic1==WINMEM_MAGIC1 ); + assert( pWinMemData->magic2==WINMEM_MAGIC2 ); + +#if !SQLITE_OS_WINRT && SQLITE_WIN32_HEAP_CREATE + if( !pWinMemData->hHeap ){ + DWORD dwInitialSize = SQLITE_WIN32_HEAP_INIT_SIZE; + DWORD dwMaximumSize = (DWORD)sqlite3GlobalConfig.nHeap; + if( dwMaximumSize==0 ){ + dwMaximumSize = SQLITE_WIN32_HEAP_MAX_SIZE; + }else if( dwInitialSize>dwMaximumSize ){ + dwInitialSize = dwMaximumSize; + } + pWinMemData->hHeap = osHeapCreate(SQLITE_WIN32_HEAP_FLAGS, + dwInitialSize, dwMaximumSize); + if( !pWinMemData->hHeap ){ + sqlite3_log(SQLITE_NOMEM, + "failed to HeapCreate (%lu), flags=%u, initSize=%lu, maxSize=%lu", + osGetLastError(), SQLITE_WIN32_HEAP_FLAGS, dwInitialSize, + dwMaximumSize); + return SQLITE_NOMEM; + } + pWinMemData->bOwned = TRUE; + assert( pWinMemData->bOwned ); + } +#else + pWinMemData->hHeap = osGetProcessHeap(); + if( !pWinMemData->hHeap ){ + sqlite3_log(SQLITE_NOMEM, + "failed to GetProcessHeap (%lu)", osGetLastError()); + return SQLITE_NOMEM; + } + pWinMemData->bOwned = FALSE; + assert( !pWinMemData->bOwned ); +#endif + assert( pWinMemData->hHeap!=0 ); + assert( pWinMemData->hHeap!=INVALID_HANDLE_VALUE ); +#if !SQLITE_OS_WINRT && defined(SQLITE_WIN32_MALLOC_VALIDATE) + assert( osHeapValidate(pWinMemData->hHeap, SQLITE_WIN32_HEAP_FLAGS, NULL) ); +#endif + return SQLITE_OK; +} + +/* +** Deinitialize this module. +*/ +static void winMemShutdown(void *pAppData){ + winMemData *pWinMemData = (winMemData *)pAppData; + + if( !pWinMemData ) return; + assert( pWinMemData->magic1==WINMEM_MAGIC1 ); + assert( pWinMemData->magic2==WINMEM_MAGIC2 ); + + if( pWinMemData->hHeap ){ + assert( pWinMemData->hHeap!=INVALID_HANDLE_VALUE ); +#if !SQLITE_OS_WINRT && defined(SQLITE_WIN32_MALLOC_VALIDATE) + assert( osHeapValidate(pWinMemData->hHeap, SQLITE_WIN32_HEAP_FLAGS, NULL) ); +#endif + if( pWinMemData->bOwned ){ + if( !osHeapDestroy(pWinMemData->hHeap) ){ + sqlite3_log(SQLITE_NOMEM, "failed to HeapDestroy (%lu), heap=%p", + osGetLastError(), (void*)pWinMemData->hHeap); + } + pWinMemData->bOwned = FALSE; + } + pWinMemData->hHeap = NULL; + } +} + +/* +** Populate the low-level memory allocation function pointers in +** sqlite3GlobalConfig.m with pointers to the routines in this file. The +** arguments specify the block of memory to manage. +** +** This routine is only called by sqlite3_config(), and therefore +** is not required to be threadsafe (it is not). +*/ +SQLITE_PRIVATE const sqlite3_mem_methods *sqlite3MemGetWin32(void){ + static const sqlite3_mem_methods winMemMethods = { + winMemMalloc, + winMemFree, + winMemRealloc, + winMemSize, + winMemRoundup, + winMemInit, + winMemShutdown, + &win_mem_data + }; + return &winMemMethods; +} + +SQLITE_PRIVATE void sqlite3MemSetDefault(void){ + sqlite3_config(SQLITE_CONFIG_MALLOC, sqlite3MemGetWin32()); +} +#endif /* SQLITE_WIN32_MALLOC */ + +/* +** Convert a UTF-8 string to Microsoft Unicode (UTF-16?). ** ** Space to hold the returned string is obtained from malloc. 
*/ -static WCHAR *utf8ToUnicode(const char *zFilename){ +static LPWSTR winUtf8ToUnicode(const char *zFilename){ int nChar; - WCHAR *zWideFilename; + LPWSTR zWideFilename; - nChar = MultiByteToWideChar(CP_UTF8, 0, zFilename, -1, NULL, 0); - zWideFilename = malloc( nChar*sizeof(zWideFilename[0]) ); + nChar = osMultiByteToWideChar(CP_UTF8, 0, zFilename, -1, NULL, 0); + if( nChar==0 ){ + return 0; + } + zWideFilename = sqlite3MallocZero( nChar*sizeof(zWideFilename[0]) ); if( zWideFilename==0 ){ return 0; } - nChar = MultiByteToWideChar(CP_UTF8, 0, zFilename, -1, zWideFilename, nChar); + nChar = osMultiByteToWideChar(CP_UTF8, 0, zFilename, -1, zWideFilename, + nChar); if( nChar==0 ){ - free(zWideFilename); + sqlite3_free(zWideFilename); zWideFilename = 0; } return zWideFilename; } /* -** Convert microsoft unicode to UTF-8. Space to hold the returned string is -** obtained from malloc(). +** Convert Microsoft Unicode to UTF-8. Space to hold the returned string is +** obtained from sqlite3_malloc(). */ -static char *unicodeToUtf8(const WCHAR *zWideFilename){ +static char *winUnicodeToUtf8(LPCWSTR zWideFilename){ int nByte; char *zFilename; - nByte = WideCharToMultiByte(CP_UTF8, 0, zWideFilename, -1, 0, 0, 0, 0); - zFilename = malloc( nByte ); + nByte = osWideCharToMultiByte(CP_UTF8, 0, zWideFilename, -1, 0, 0, 0, 0); + if( nByte == 0 ){ + return 0; + } + zFilename = sqlite3MallocZero( nByte ); if( zFilename==0 ){ return 0; } - nByte = WideCharToMultiByte(CP_UTF8, 0, zWideFilename, -1, zFilename, nByte, - 0, 0); + nByte = osWideCharToMultiByte(CP_UTF8, 0, zWideFilename, -1, zFilename, nByte, + 0, 0); if( nByte == 0 ){ - free(zFilename); + sqlite3_free(zFilename); zFilename = 0; } return zFilename; } /* -** Convert an ansi string to microsoft unicode, based on the +** Convert an ANSI string to Microsoft Unicode, based on the ** current codepage settings for file apis. -** +** ** Space to hold the returned string is obtained -** from malloc. +** from sqlite3_malloc. */ -static WCHAR *mbcsToUnicode(const char *zFilename){ +static LPWSTR winMbcsToUnicode(const char *zFilename){ int nByte; - WCHAR *zMbcsFilename; - int codepage = AreFileApisANSI() ? CP_ACP : CP_OEMCP; + LPWSTR zMbcsFilename; + int codepage = osAreFileApisANSI() ? CP_ACP : CP_OEMCP; - nByte = MultiByteToWideChar(codepage, 0, zFilename, -1, NULL,0)*sizeof(WCHAR); - zMbcsFilename = malloc( nByte*sizeof(zMbcsFilename[0]) ); + nByte = osMultiByteToWideChar(codepage, 0, zFilename, -1, NULL, + 0)*sizeof(WCHAR); + if( nByte==0 ){ + return 0; + } + zMbcsFilename = sqlite3MallocZero( nByte*sizeof(zMbcsFilename[0]) ); if( zMbcsFilename==0 ){ return 0; } - nByte = MultiByteToWideChar(codepage, 0, zFilename, -1, zMbcsFilename, nByte); + nByte = osMultiByteToWideChar(codepage, 0, zFilename, -1, zMbcsFilename, + nByte); if( nByte==0 ){ - free(zMbcsFilename); + sqlite3_free(zMbcsFilename); zMbcsFilename = 0; } return zMbcsFilename; } /* -** Convert microsoft unicode to multibyte character string, based on the -** user's Ansi codepage. +** Convert Microsoft Unicode to multi-byte character string, based on the +** user's ANSI codepage. ** ** Space to hold the returned string is obtained from -** malloc(). +** sqlite3_malloc(). */ -static char *unicodeToMbcs(const WCHAR *zWideFilename){ +static char *winUnicodeToMbcs(LPCWSTR zWideFilename){ int nByte; char *zFilename; - int codepage = AreFileApisANSI() ? CP_ACP : CP_OEMCP; + int codepage = osAreFileApisANSI() ? 
CP_ACP : CP_OEMCP; - nByte = WideCharToMultiByte(codepage, 0, zWideFilename, -1, 0, 0, 0, 0); - zFilename = malloc( nByte ); + nByte = osWideCharToMultiByte(codepage, 0, zWideFilename, -1, 0, 0, 0, 0); + if( nByte == 0 ){ + return 0; + } + zFilename = sqlite3MallocZero( nByte ); if( zFilename==0 ){ return 0; } - nByte = WideCharToMultiByte(codepage, 0, zWideFilename, -1, zFilename, nByte, - 0, 0); + nByte = osWideCharToMultiByte(codepage, 0, zWideFilename, -1, zFilename, + nByte, 0, 0); if( nByte == 0 ){ - free(zFilename); + sqlite3_free(zFilename); zFilename = 0; } return zFilename; } /* ** Convert multibyte character string to UTF-8. Space to hold the -** returned string is obtained from malloc(). +** returned string is obtained from sqlite3_malloc(). */ -SQLITE_API char *sqlite3_win32_mbcs_to_utf8(const char *zFilename){ +SQLITE_API char *SQLITE_STDCALL sqlite3_win32_mbcs_to_utf8(const char *zFilename){ char *zFilenameUtf8; - WCHAR *zTmpWide; + LPWSTR zTmpWide; - zTmpWide = mbcsToUnicode(zFilename); + zTmpWide = winMbcsToUnicode(zFilename); if( zTmpWide==0 ){ return 0; } - zFilenameUtf8 = unicodeToUtf8(zTmpWide); - free(zTmpWide); + zFilenameUtf8 = winUnicodeToUtf8(zTmpWide); + sqlite3_free(zTmpWide); return zFilenameUtf8; } /* -** Convert UTF-8 to multibyte character string. Space to hold the -** returned string is obtained from malloc(). +** Convert UTF-8 to multibyte character string. Space to hold the +** returned string is obtained from sqlite3_malloc(). */ -static char *utf8ToMbcs(const char *zFilename){ +SQLITE_API char *SQLITE_STDCALL sqlite3_win32_utf8_to_mbcs(const char *zFilename){ char *zFilenameMbcs; - WCHAR *zTmpWide; + LPWSTR zTmpWide; - zTmpWide = utf8ToUnicode(zFilename); + zTmpWide = winUtf8ToUnicode(zFilename); if( zTmpWide==0 ){ return 0; } - zFilenameMbcs = unicodeToMbcs(zTmpWide); - free(zTmpWide); + zFilenameMbcs = winUnicodeToMbcs(zTmpWide); + sqlite3_free(zTmpWide); return zFilenameMbcs; } + +/* +** This function sets the data directory or the temporary directory based on +** the provided arguments. The type argument must be 1 in order to set the +** data directory or 2 in order to set the temporary directory. The zValue +** argument is the name of the directory to use. The return value will be +** SQLITE_OK if successful. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_win32_set_directory(DWORD type, LPCWSTR zValue){ + char **ppDirectory = 0; +#ifndef SQLITE_OMIT_AUTOINIT + int rc = sqlite3_initialize(); + if( rc ) return rc; +#endif + if( type==SQLITE_WIN32_DATA_DIRECTORY_TYPE ){ + ppDirectory = &sqlite3_data_directory; + }else if( type==SQLITE_WIN32_TEMP_DIRECTORY_TYPE ){ + ppDirectory = &sqlite3_temp_directory; + } + assert( !ppDirectory || type==SQLITE_WIN32_DATA_DIRECTORY_TYPE + || type==SQLITE_WIN32_TEMP_DIRECTORY_TYPE + ); + assert( !ppDirectory || sqlite3MemdebugHasType(*ppDirectory, MEMTYPE_HEAP) ); + if( ppDirectory ){ + char *zValueUtf8 = 0; + if( zValue && zValue[0] ){ + zValueUtf8 = winUnicodeToUtf8(zValue); + if ( zValueUtf8==0 ){ + return SQLITE_NOMEM; + } + } + sqlite3_free(*ppDirectory); + *ppDirectory = zValueUtf8; + return SQLITE_OK; + } + return SQLITE_ERROR; +} + +/* +** The return value of winGetLastErrorMsg +** is zero if the error message fits in the buffer, or non-zero +** otherwise (if the message was truncated). +*/ +static int winGetLastErrorMsg(DWORD lastErrno, int nBuf, char *zBuf){ + /* FormatMessage returns 0 on failure. 
Otherwise it + ** returns the number of TCHARs written to the output + ** buffer, excluding the terminating null char. + */ + DWORD dwLen = 0; + char *zOut = 0; + + if( osIsNT() ){ +#if SQLITE_OS_WINRT + WCHAR zTempWide[SQLITE_WIN32_MAX_ERRMSG_CHARS+1]; + dwLen = osFormatMessageW(FORMAT_MESSAGE_FROM_SYSTEM | + FORMAT_MESSAGE_IGNORE_INSERTS, + NULL, + lastErrno, + 0, + zTempWide, + SQLITE_WIN32_MAX_ERRMSG_CHARS, + 0); +#else + LPWSTR zTempWide = NULL; + dwLen = osFormatMessageW(FORMAT_MESSAGE_ALLOCATE_BUFFER | + FORMAT_MESSAGE_FROM_SYSTEM | + FORMAT_MESSAGE_IGNORE_INSERTS, + NULL, + lastErrno, + 0, + (LPWSTR) &zTempWide, + 0, + 0); +#endif + if( dwLen > 0 ){ + /* allocate a buffer and convert to UTF8 */ + sqlite3BeginBenignMalloc(); + zOut = winUnicodeToUtf8(zTempWide); + sqlite3EndBenignMalloc(); +#if !SQLITE_OS_WINRT + /* free the system buffer allocated by FormatMessage */ + osLocalFree(zTempWide); +#endif + } + } +#ifdef SQLITE_WIN32_HAS_ANSI + else{ + char *zTemp = NULL; + dwLen = osFormatMessageA(FORMAT_MESSAGE_ALLOCATE_BUFFER | + FORMAT_MESSAGE_FROM_SYSTEM | + FORMAT_MESSAGE_IGNORE_INSERTS, + NULL, + lastErrno, + 0, + (LPSTR) &zTemp, + 0, + 0); + if( dwLen > 0 ){ + /* allocate a buffer and convert to UTF8 */ + sqlite3BeginBenignMalloc(); + zOut = sqlite3_win32_mbcs_to_utf8(zTemp); + sqlite3EndBenignMalloc(); + /* free the system buffer allocated by FormatMessage */ + osLocalFree(zTemp); + } + } +#endif + if( 0 == dwLen ){ + sqlite3_snprintf(nBuf, zBuf, "OsError 0x%lx (%lu)", lastErrno, lastErrno); + }else{ + /* copy a maximum of nBuf chars to output buffer */ + sqlite3_snprintf(nBuf, zBuf, "%s", zOut); + /* free the UTF8 buffer */ + sqlite3_free(zOut); + } + return 0; +} + +/* +** +** This function - winLogErrorAtLine() - is only ever called via the macro +** winLogError(). +** +** This routine is invoked after an error occurs in an OS function. +** It logs a message using sqlite3_log() containing the current value of +** error code and, if possible, the human-readable equivalent from +** FormatMessage. +** +** The first argument passed to the macro should be the error code that +** will be returned to SQLite (e.g. SQLITE_IOERR_DELETE, SQLITE_CANTOPEN). +** The two subsequent arguments should be the name of the OS function that +** failed and the associated file-system path, if any. +*/ +#define winLogError(a,b,c,d) winLogErrorAtLine(a,b,c,d,__LINE__) +static int winLogErrorAtLine( + int errcode, /* SQLite error code */ + DWORD lastErrno, /* Win32 last error */ + const char *zFunc, /* Name of OS function that failed */ + const char *zPath, /* File path associated with error */ + int iLine /* Source line number where error occurred */ +){ + char zMsg[500]; /* Human readable error text */ + int i; /* Loop counter */ + + zMsg[0] = 0; + winGetLastErrorMsg(lastErrno, sizeof(zMsg), zMsg); + assert( errcode!=SQLITE_OK ); + if( zPath==0 ) zPath = ""; + for(i=0; zMsg[i] && zMsg[i]!='\r' && zMsg[i]!='\n'; i++){} + zMsg[i] = 0; + sqlite3_log(errcode, + "os_win.c:%d: (%lu) %s(%s) - %s", + iLine, lastErrno, zFunc, zPath, zMsg + ); + + return errcode; +} + +/* +** The number of times that a ReadFile(), WriteFile(), and DeleteFile() +** will be retried following a locking error - probably caused by +** antivirus software. Also the initial delay before the first retry. +** The delay increases linearly with each retry. 
+*/ +#ifndef SQLITE_WIN32_IOERR_RETRY +# define SQLITE_WIN32_IOERR_RETRY 10 +#endif +#ifndef SQLITE_WIN32_IOERR_RETRY_DELAY +# define SQLITE_WIN32_IOERR_RETRY_DELAY 25 +#endif +static int winIoerrRetry = SQLITE_WIN32_IOERR_RETRY; +static int winIoerrRetryDelay = SQLITE_WIN32_IOERR_RETRY_DELAY; + +/* +** The "winIoerrCanRetry1" macro is used to determine if a particular I/O +** error code obtained via GetLastError() is eligible to be retried. It +** must accept the error code DWORD as its only argument and should return +** non-zero if the error code is transient in nature and the operation +** responsible for generating the original error might succeed upon being +** retried. The argument to this macro should be a variable. +** +** Additionally, a macro named "winIoerrCanRetry2" may be defined. If it +** is defined, it will be consulted only when the macro "winIoerrCanRetry1" +** returns zero. The "winIoerrCanRetry2" macro is completely optional and +** may be used to include additional error codes in the set that should +** result in the failing I/O operation being retried by the caller. If +** defined, the "winIoerrCanRetry2" macro must exhibit external semantics +** identical to those of the "winIoerrCanRetry1" macro. +*/ +#if !defined(winIoerrCanRetry1) +#define winIoerrCanRetry1(a) (((a)==ERROR_ACCESS_DENIED) || \ + ((a)==ERROR_SHARING_VIOLATION) || \ + ((a)==ERROR_LOCK_VIOLATION) || \ + ((a)==ERROR_DEV_NOT_EXIST) || \ + ((a)==ERROR_NETNAME_DELETED) || \ + ((a)==ERROR_SEM_TIMEOUT) || \ + ((a)==ERROR_NETWORK_UNREACHABLE)) +#endif + +/* +** If a ReadFile() or WriteFile() error occurs, invoke this routine +** to see if it should be retried. Return TRUE to retry. Return FALSE +** to give up with an error. +*/ +static int winRetryIoerr(int *pnRetry, DWORD *pError){ + DWORD e = osGetLastError(); + if( *pnRetry>=winIoerrRetry ){ + if( pError ){ + *pError = e; + } + return 0; + } + if( winIoerrCanRetry1(e) ){ + sqlite3_win32_sleep(winIoerrRetryDelay*(1+*pnRetry)); + ++*pnRetry; + return 1; + } +#if defined(winIoerrCanRetry2) + else if( winIoerrCanRetry2(e) ){ + sqlite3_win32_sleep(winIoerrRetryDelay*(1+*pnRetry)); + ++*pnRetry; + return 1; + } +#endif + if( pError ){ + *pError = e; + } + return 0; +} + +/* +** Log a I/O error retry episode. +*/ +static void winLogIoerr(int nRetry, int lineno){ + if( nRetry ){ + sqlite3_log(SQLITE_NOTICE, + "delayed %dms for lock/sharing conflict at line %d", + winIoerrRetryDelay*nRetry*(nRetry+1)/2, lineno + ); + } +} #if SQLITE_OS_WINCE /************************************************************************* ** This section contains code for WinCE only. */ +#if !defined(SQLITE_MSVC_LOCALTIME_API) || !SQLITE_MSVC_LOCALTIME_API /* -** WindowsCE does not have a localtime() function. So create a -** substitute. +** The MSVC CRT on Windows CE may not have a localtime() function. So +** create a substitute. 
*/ +/* #include <time.h> */ struct tm *__cdecl localtime(const time_t *t) { static struct tm y; FILETIME uTm, lTm; SYSTEMTIME pTm; @@ -28513,38 +37090,32 @@ sqlite3_int64 t64; t64 = *t; t64 = (t64 + 11644473600)*10000000; uTm.dwLowDateTime = (DWORD)(t64 & 0xFFFFFFFF); uTm.dwHighDateTime= (DWORD)(t64 >> 32); - FileTimeToLocalFileTime(&uTm,&lTm); - FileTimeToSystemTime(&lTm,&pTm); + osFileTimeToLocalFileTime(&uTm,&lTm); + osFileTimeToSystemTime(&lTm,&pTm); y.tm_year = pTm.wYear - 1900; y.tm_mon = pTm.wMonth - 1; y.tm_wday = pTm.wDayOfWeek; y.tm_mday = pTm.wDay; y.tm_hour = pTm.wHour; y.tm_min = pTm.wMinute; y.tm_sec = pTm.wSecond; return &y; } - -/* This will never be called, but defined to make the code compile */ -#define GetTempPathA(a,b) - -#define LockFile(a,b,c,d,e) winceLockFile(&a, b, c, d, e) -#define UnlockFile(a,b,c,d,e) winceUnlockFile(&a, b, c, d, e) -#define LockFileEx(a,b,c,d,e,f) winceLockFileEx(&a, b, c, d, e, f) +#endif #define HANDLE_TO_WINFILE(a) (winFile*)&((char*)a)[-(int)offsetof(winFile,h)] /* ** Acquire a lock on the handle h */ static void winceMutexAcquire(HANDLE h){ DWORD dwErr; do { - dwErr = WaitForSingleObject(h, INFINITE); + dwErr = osWaitForSingleObject(h, INFINITE); } while (dwErr != WAIT_OBJECT_0 && dwErr != WAIT_ABANDONED); } /* ** Release a lock acquired by winceMutexAcquire() */ @@ -28552,80 +37123,99 @@ /* ** Create the mutex and shared memory used for locking in the file ** descriptor pFile */ -static BOOL winceCreateLock(const char *zFilename, winFile *pFile){ - WCHAR *zTok; - WCHAR *zName = utf8ToUnicode(zFilename); +static int winceCreateLock(const char *zFilename, winFile *pFile){ + LPWSTR zTok; + LPWSTR zName; + DWORD lastErrno; + BOOL bLogged = FALSE; BOOL bInit = TRUE; + + zName = winUtf8ToUnicode(zFilename); + if( zName==0 ){ + /* out of memory */ + return SQLITE_IOERR_NOMEM; + } /* Initialize the local lockdata */ - ZeroMemory(&pFile->local, sizeof(pFile->local)); + memset(&pFile->local, 0, sizeof(pFile->local)); /* Replace the backslashes from the filename and lowercase it ** to derive a mutex name. */ - zTok = CharLowerW(zName); + zTok = osCharLowerW(zName); for (;*zTok;zTok++){ if (*zTok == '\\') *zTok = '_'; } /* Create/open the named mutex */ - pFile->hMutex = CreateMutexW(NULL, FALSE, zName); + pFile->hMutex = osCreateMutexW(NULL, FALSE, zName); if (!pFile->hMutex){ - pFile->lastErrno = GetLastError(); - free(zName); - return FALSE; + pFile->lastErrno = osGetLastError(); + sqlite3_free(zName); + return winLogError(SQLITE_IOERR, pFile->lastErrno, + "winceCreateLock1", zFilename); } /* Acquire the mutex before continuing */ winceMutexAcquire(pFile->hMutex); - - /* Since the names of named mutexes, semaphores, file mappings etc are + + /* Since the names of named mutexes, semaphores, file mappings etc are ** case-sensitive, take advantage of that by uppercasing the mutex name ** and using that as the shared filemapping name. 
*/ - CharUpperW(zName); - pFile->hShared = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL, - PAGE_READWRITE, 0, sizeof(winceLock), - zName); + osCharUpperW(zName); + pFile->hShared = osCreateFileMappingW(INVALID_HANDLE_VALUE, NULL, + PAGE_READWRITE, 0, sizeof(winceLock), + zName); - /* Set a flag that indicates we're the first to create the memory so it + /* Set a flag that indicates we're the first to create the memory so it ** must be zero-initialized */ - if (GetLastError() == ERROR_ALREADY_EXISTS){ + lastErrno = osGetLastError(); + if (lastErrno == ERROR_ALREADY_EXISTS){ bInit = FALSE; } - free(zName); + sqlite3_free(zName); /* If we succeeded in making the shared memory handle, map it. */ - if (pFile->hShared){ - pFile->shared = (winceLock*)MapViewOfFile(pFile->hShared, + if( pFile->hShared ){ + pFile->shared = (winceLock*)osMapViewOfFile(pFile->hShared, FILE_MAP_READ|FILE_MAP_WRITE, 0, 0, sizeof(winceLock)); /* If mapping failed, close the shared memory handle and erase it */ - if (!pFile->shared){ - pFile->lastErrno = GetLastError(); - CloseHandle(pFile->hShared); + if( !pFile->shared ){ + pFile->lastErrno = osGetLastError(); + winLogError(SQLITE_IOERR, pFile->lastErrno, + "winceCreateLock2", zFilename); + bLogged = TRUE; + osCloseHandle(pFile->hShared); pFile->hShared = NULL; } } /* If shared memory could not be created, then close the mutex and fail */ - if (pFile->hShared == NULL){ + if( pFile->hShared==NULL ){ + if( !bLogged ){ + pFile->lastErrno = lastErrno; + winLogError(SQLITE_IOERR, pFile->lastErrno, + "winceCreateLock3", zFilename); + bLogged = TRUE; + } winceMutexRelease(pFile->hMutex); - CloseHandle(pFile->hMutex); + osCloseHandle(pFile->hMutex); pFile->hMutex = NULL; - return FALSE; + return SQLITE_IOERR; } - + /* Initialize the shared memory if we're supposed to */ - if (bInit) { - ZeroMemory(pFile->shared, sizeof(winceLock)); + if( bInit ){ + memset(pFile->shared, 0, sizeof(winceLock)); } winceMutexRelease(pFile->hMutex); - return TRUE; + return SQLITE_OK; } /* ** Destroy the part of winFile that deals with wince locks */ @@ -28648,25 +37238,25 @@ if (pFile->local.bExclusive){ pFile->shared->bExclusive = FALSE; } /* De-reference and close our copy of the shared memory handle */ - UnmapViewOfFile(pFile->shared); - CloseHandle(pFile->hShared); + osUnmapViewOfFile(pFile->shared); + osCloseHandle(pFile->hShared); /* Done with the mutex */ - winceMutexRelease(pFile->hMutex); - CloseHandle(pFile->hMutex); + winceMutexRelease(pFile->hMutex); + osCloseHandle(pFile->hMutex); pFile->hMutex = NULL; } } -/* -** An implementation of the LockFile() API of windows for wince +/* +** An implementation of the LockFile() API of Windows for CE */ static BOOL winceLockFile( - HANDLE *phFile, + LPHANDLE phFile, DWORD dwFileOffsetLow, DWORD dwFileOffsetHigh, DWORD nNumberOfBytesToLockLow, DWORD nNumberOfBytesToLockHigh ){ @@ -28700,21 +37290,23 @@ bReturn = TRUE; } } /* Want a pending lock? */ - else if (dwFileOffsetLow == (DWORD)PENDING_BYTE && nNumberOfBytesToLockLow == 1){ + else if (dwFileOffsetLow == (DWORD)PENDING_BYTE + && nNumberOfBytesToLockLow == 1){ /* If no pending lock has been acquired, then acquire it */ if (pFile->shared->bPending == 0) { pFile->shared->bPending = TRUE; pFile->local.bPending = TRUE; bReturn = TRUE; } } /* Want a reserved lock? 
*/ - else if (dwFileOffsetLow == (DWORD)RESERVED_BYTE && nNumberOfBytesToLockLow == 1){ + else if (dwFileOffsetLow == (DWORD)RESERVED_BYTE + && nNumberOfBytesToLockLow == 1){ if (pFile->shared->bReserved == 0) { pFile->shared->bReserved = TRUE; pFile->local.bReserved = TRUE; bReturn = TRUE; } @@ -28723,14 +37315,14 @@ winceMutexRelease(pFile->hMutex); return bReturn; } /* -** An implementation of the UnlockFile API of windows for wince +** An implementation of the UnlockFile API of Windows for CE */ static BOOL winceUnlockFile( - HANDLE *phFile, + LPHANDLE phFile, DWORD dwFileOffsetLow, DWORD dwFileOffsetHigh, DWORD nNumberOfBytesToUnlockLow, DWORD nNumberOfBytesToUnlockHigh ){ @@ -28753,11 +37345,12 @@ bReturn = TRUE; } /* Did we just have a reader lock? */ else if (pFile->local.nReaders){ - assert(nNumberOfBytesToUnlockLow == (DWORD)SHARED_SIZE || nNumberOfBytesToUnlockLow == 1); + assert(nNumberOfBytesToUnlockLow == (DWORD)SHARED_SIZE + || nNumberOfBytesToUnlockLow == 1); pFile->local.nReaders --; if (pFile->local.nReaders == 0) { pFile->shared->nReaders --; } @@ -28764,19 +37357,21 @@ bReturn = TRUE; } } /* Releasing a pending lock */ - else if (dwFileOffsetLow == (DWORD)PENDING_BYTE && nNumberOfBytesToUnlockLow == 1){ + else if (dwFileOffsetLow == (DWORD)PENDING_BYTE + && nNumberOfBytesToUnlockLow == 1){ if (pFile->local.bPending){ pFile->local.bPending = FALSE; pFile->shared->bPending = FALSE; bReturn = TRUE; } } /* Releasing a reserved lock */ - else if (dwFileOffsetLow == (DWORD)RESERVED_BYTE && nNumberOfBytesToUnlockLow == 1){ + else if (dwFileOffsetLow == (DWORD)RESERVED_BYTE + && nNumberOfBytesToUnlockLow == 1){ if (pFile->local.bReserved) { pFile->local.bReserved = FALSE; pFile->shared->bReserved = FALSE; bReturn = TRUE; } @@ -28783,49 +37378,162 @@ } winceMutexRelease(pFile->hMutex); return bReturn; } - -/* -** An implementation of the LockFileEx() API of windows for wince -*/ -static BOOL winceLockFileEx( - HANDLE *phFile, - DWORD dwFlags, - DWORD dwReserved, - DWORD nNumberOfBytesToLockLow, - DWORD nNumberOfBytesToLockHigh, - LPOVERLAPPED lpOverlapped -){ - UNUSED_PARAMETER(dwReserved); - UNUSED_PARAMETER(nNumberOfBytesToLockHigh); - - /* If the caller wants a shared read lock, forward this call - ** to winceLockFile */ - if (lpOverlapped->Offset == (DWORD)SHARED_FIRST && - dwFlags == 1 && - nNumberOfBytesToLockLow == (DWORD)SHARED_SIZE){ - return winceLockFile(phFile, SHARED_FIRST, 0, 1, 0); - } - return FALSE; -} /* ** End of the special code for wince *****************************************************************************/ #endif /* SQLITE_OS_WINCE */ + +/* +** Lock a file region. +*/ +static BOOL winLockFile( + LPHANDLE phFile, + DWORD flags, + DWORD offsetLow, + DWORD offsetHigh, + DWORD numBytesLow, + DWORD numBytesHigh +){ +#if SQLITE_OS_WINCE + /* + ** NOTE: Windows CE is handled differently here due its lack of the Win32 + ** API LockFile. + */ + return winceLockFile(phFile, offsetLow, offsetHigh, + numBytesLow, numBytesHigh); +#else + if( osIsNT() ){ + OVERLAPPED ovlp; + memset(&ovlp, 0, sizeof(OVERLAPPED)); + ovlp.Offset = offsetLow; + ovlp.OffsetHigh = offsetHigh; + return osLockFileEx(*phFile, flags, 0, numBytesLow, numBytesHigh, &ovlp); + }else{ + return osLockFile(*phFile, offsetLow, offsetHigh, numBytesLow, + numBytesHigh); + } +#endif +} + +/* +** Unlock a file region. 
+ */ +static BOOL winUnlockFile( + LPHANDLE phFile, + DWORD offsetLow, + DWORD offsetHigh, + DWORD numBytesLow, + DWORD numBytesHigh +){ +#if SQLITE_OS_WINCE + /* + ** NOTE: Windows CE is handled differently here due its lack of the Win32 + ** API UnlockFile. + */ + return winceUnlockFile(phFile, offsetLow, offsetHigh, + numBytesLow, numBytesHigh); +#else + if( osIsNT() ){ + OVERLAPPED ovlp; + memset(&ovlp, 0, sizeof(OVERLAPPED)); + ovlp.Offset = offsetLow; + ovlp.OffsetHigh = offsetHigh; + return osUnlockFileEx(*phFile, 0, numBytesLow, numBytesHigh, &ovlp); + }else{ + return osUnlockFile(*phFile, offsetLow, offsetHigh, numBytesLow, + numBytesHigh); + } +#endif +} /***************************************************************************** ** The next group of routines implement the I/O methods specified ** by the sqlite3_io_methods object. ******************************************************************************/ +/* +** Some Microsoft compilers lack this definition. +*/ +#ifndef INVALID_SET_FILE_POINTER +# define INVALID_SET_FILE_POINTER ((DWORD)-1) +#endif + +/* +** Move the current position of the file handle passed as the first +** argument to offset iOffset within the file. If successful, return 0. +** Otherwise, set pFile->lastErrno and return non-zero. +*/ +static int winSeekFile(winFile *pFile, sqlite3_int64 iOffset){ +#if !SQLITE_OS_WINRT + LONG upperBits; /* Most sig. 32 bits of new offset */ + LONG lowerBits; /* Least sig. 32 bits of new offset */ + DWORD dwRet; /* Value returned by SetFilePointer() */ + DWORD lastErrno; /* Value returned by GetLastError() */ + + OSTRACE(("SEEK file=%p, offset=%lld\n", pFile->h, iOffset)); + + upperBits = (LONG)((iOffset>>32) & 0x7fffffff); + lowerBits = (LONG)(iOffset & 0xffffffff); + + /* API oddity: If successful, SetFilePointer() returns a dword + ** containing the lower 32-bits of the new file-offset. Or, if it fails, + ** it returns INVALID_SET_FILE_POINTER. However according to MSDN, + ** INVALID_SET_FILE_POINTER may also be a valid new offset. So to determine + ** whether an error has actually occurred, it is also necessary to call + ** GetLastError(). + */ + dwRet = osSetFilePointer(pFile->h, lowerBits, &upperBits, FILE_BEGIN); + + if( (dwRet==INVALID_SET_FILE_POINTER + && ((lastErrno = osGetLastError())!=NO_ERROR)) ){ + pFile->lastErrno = lastErrno; + winLogError(SQLITE_IOERR_SEEK, pFile->lastErrno, + "winSeekFile", pFile->zPath); + OSTRACE(("SEEK file=%p, rc=SQLITE_IOERR_SEEK\n", pFile->h)); + return 1; + } + + OSTRACE(("SEEK file=%p, rc=SQLITE_OK\n", pFile->h)); + return 0; +#else + /* + ** Same as above, except that this implementation works for WinRT. + */ + + LARGE_INTEGER x; /* The new offset */ + BOOL bRet; /* Value returned by SetFilePointerEx() */ + + x.QuadPart = iOffset; + bRet = osSetFilePointerEx(pFile->h, x, 0, FILE_BEGIN); + + if(!bRet){ + pFile->lastErrno = osGetLastError(); + winLogError(SQLITE_IOERR_SEEK, pFile->lastErrno, + "winSeekFile", pFile->zPath); + OSTRACE(("SEEK file=%p, rc=SQLITE_IOERR_SEEK\n", pFile->h)); + return 1; + } + + OSTRACE(("SEEK file=%p, rc=SQLITE_OK\n", pFile->h)); + return 0; +#endif +} + +#if SQLITE_MAX_MMAP_SIZE>0 +/* Forward references to VFS helper methods used for memory mapped files */ +static int winMapfile(winFile*, sqlite3_int64); +static int winUnmapfile(winFile*); +#endif + /* ** Close a file. ** ** It is reported that an attempt to close a handle might sometimes -** fail. This is a very unreasonable result, but windows is notorious +** fail. 
This is a very unreasonable result, but Windows is notorious ** for being unreasonable so I do not doubt that it might happen. If ** the close fails, we pause for 100 milliseconds and try again. As ** many as MX_CLOSE_ATTEMPT attempts to close the handle are made before ** giving up and returning an error. */ @@ -28833,39 +37541,50 @@ static int winClose(sqlite3_file *id){ int rc, cnt = 0; winFile *pFile = (winFile*)id; assert( id!=0 ); - OSTRACE2("CLOSE %d\n", pFile->h); +#ifndef SQLITE_OMIT_WAL + assert( pFile->pShm==0 ); +#endif + assert( pFile->h!=NULL && pFile->h!=INVALID_HANDLE_VALUE ); + OSTRACE(("CLOSE pid=%lu, pFile=%p, file=%p\n", + osGetCurrentProcessId(), pFile, pFile->h)); + +#if SQLITE_MAX_MMAP_SIZE>0 + winUnmapfile(pFile); +#endif + do{ - rc = CloseHandle(pFile->h); - }while( rc==0 && ++cnt < MX_CLOSE_ATTEMPT && (Sleep(100), 1) ); + rc = osCloseHandle(pFile->h); + /* SimulateIOError( rc=0; cnt=MX_CLOSE_ATTEMPT; ); */ + }while( rc==0 && ++cnt < MX_CLOSE_ATTEMPT && (sqlite3_win32_sleep(100), 1) ); #if SQLITE_OS_WINCE #define WINCE_DELETION_ATTEMPTS 3 winceDestroyLock(pFile); if( pFile->zDeleteOnClose ){ int cnt = 0; while( - DeleteFileW(pFile->zDeleteOnClose)==0 - && GetFileAttributesW(pFile->zDeleteOnClose)!=0xffffffff + osDeleteFileW(pFile->zDeleteOnClose)==0 + && osGetFileAttributesW(pFile->zDeleteOnClose)!=0xffffffff && cnt++ < WINCE_DELETION_ATTEMPTS ){ - Sleep(100); /* Wait a little before trying again */ + sqlite3_win32_sleep(100); /* Wait a little before trying again */ } - free(pFile->zDeleteOnClose); + sqlite3_free(pFile->zDeleteOnClose); } #endif + if( rc ){ + pFile->h = NULL; + } OpenCounter(-1); - return rc ? SQLITE_OK : SQLITE_IOERR; -} - -/* -** Some microsoft compilers lack this definition. -*/ -#ifndef INVALID_SET_FILE_POINTER -# define INVALID_SET_FILE_POINTER ((DWORD)-1) -#endif + OSTRACE(("CLOSE pid=%lu, pFile=%p, file=%p, rc=%s\n", + osGetCurrentProcessId(), pFile, pFile->h, rc ? "ok" : "failed")); + return rc ? SQLITE_OK + : winLogError(SQLITE_IOERR_CLOSE, osGetLastError(), + "winClose", pFile->zPath); +} /* ** Read data from a file into a buffer. Return SQLITE_OK if all ** bytes were read successfully and SQLITE_IOERR if anything goes ** wrong. @@ -28874,104 +37593,236 @@ sqlite3_file *id, /* File to read from */ void *pBuf, /* Write content into this buffer */ int amt, /* Number of bytes to read */ sqlite3_int64 offset /* Begin reading at this offset */ ){ - LONG upperBits = (LONG)((offset>>32) & 0x7fffffff); - LONG lowerBits = (LONG)(offset & 0xffffffff); - DWORD rc; - winFile *pFile = (winFile*)id; - DWORD error; - DWORD got; +#if !SQLITE_OS_WINCE && !defined(SQLITE_WIN32_NO_OVERLAPPED) + OVERLAPPED overlapped; /* The offset for ReadFile. */ +#endif + winFile *pFile = (winFile*)id; /* file handle */ + DWORD nRead; /* Number of bytes actually read from file */ + int nRetry = 0; /* Number of retrys */ assert( id!=0 ); + assert( amt>0 ); + assert( offset>=0 ); SimulateIOError(return SQLITE_IOERR_READ); - OSTRACE3("READ %d lock=%d\n", pFile->h, pFile->locktype); - rc = SetFilePointer(pFile->h, lowerBits, &upperBits, FILE_BEGIN); - if( rc==INVALID_SET_FILE_POINTER && (error=GetLastError())!=NO_ERROR ){ - pFile->lastErrno = error; + OSTRACE(("READ pid=%lu, pFile=%p, file=%p, buffer=%p, amount=%d, " + "offset=%lld, lock=%d\n", osGetCurrentProcessId(), pFile, + pFile->h, pBuf, amt, offset, pFile->locktype)); + +#if SQLITE_MAX_MMAP_SIZE>0 + /* Deal with as much of this read request as possible by transfering + ** data from the memory mapping using memcpy(). 
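+ ** For example, with mmapSize==8192 a 100-byte read at offset 8150 copies
+ ** the first 42 bytes out of the mapping here and falls through to the
+ ** ReadFile() path below for the remaining 58 bytes at offset 8192.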
*/ + if( offset<pFile->mmapSize ){ + if( offset+amt <= pFile->mmapSize ){ + memcpy(pBuf, &((u8 *)(pFile->pMapRegion))[offset], amt); + OSTRACE(("READ-MMAP pid=%lu, pFile=%p, file=%p, rc=SQLITE_OK\n", + osGetCurrentProcessId(), pFile, pFile->h)); + return SQLITE_OK; + }else{ + int nCopy = (int)(pFile->mmapSize - offset); + memcpy(pBuf, &((u8 *)(pFile->pMapRegion))[offset], nCopy); + pBuf = &((u8 *)pBuf)[nCopy]; + amt -= nCopy; + offset += nCopy; + } + } +#endif + +#if SQLITE_OS_WINCE || defined(SQLITE_WIN32_NO_OVERLAPPED) + if( winSeekFile(pFile, offset) ){ + OSTRACE(("READ pid=%lu, pFile=%p, file=%p, rc=SQLITE_FULL\n", + osGetCurrentProcessId(), pFile, pFile->h)); return SQLITE_FULL; } - if( !ReadFile(pFile->h, pBuf, amt, &got, 0) ){ - pFile->lastErrno = GetLastError(); - return SQLITE_IOERR_READ; + while( !osReadFile(pFile->h, pBuf, amt, &nRead, 0) ){ +#else + memset(&overlapped, 0, sizeof(OVERLAPPED)); + overlapped.Offset = (LONG)(offset & 0xffffffff); + overlapped.OffsetHigh = (LONG)((offset>>32) & 0x7fffffff); + while( !osReadFile(pFile->h, pBuf, amt, &nRead, &overlapped) && + osGetLastError()!=ERROR_HANDLE_EOF ){ +#endif + DWORD lastErrno; + if( winRetryIoerr(&nRetry, &lastErrno) ) continue; + pFile->lastErrno = lastErrno; + OSTRACE(("READ pid=%lu, pFile=%p, file=%p, rc=SQLITE_IOERR_READ\n", + osGetCurrentProcessId(), pFile, pFile->h)); + return winLogError(SQLITE_IOERR_READ, pFile->lastErrno, + "winRead", pFile->zPath); } - if( got==(DWORD)amt ){ - return SQLITE_OK; - }else{ + winLogIoerr(nRetry, __LINE__); + if( nRead<(DWORD)amt ){ /* Unread parts of the buffer must be zero-filled */ - memset(&((char*)pBuf)[got], 0, amt-got); + memset(&((char*)pBuf)[nRead], 0, amt-nRead); + OSTRACE(("READ pid=%lu, pFile=%p, file=%p, rc=SQLITE_IOERR_SHORT_READ\n", + osGetCurrentProcessId(), pFile, pFile->h)); return SQLITE_IOERR_SHORT_READ; } + + OSTRACE(("READ pid=%lu, pFile=%p, file=%p, rc=SQLITE_OK\n", + osGetCurrentProcessId(), pFile, pFile->h)); + return SQLITE_OK; } /* ** Write data from a buffer into a file. Return SQLITE_OK on success ** or some other error code on failure. 
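** Any part of the request that cannot be served from the memory mapping is
** written with WriteFile() in a loop: after a partial write the remaining
** byte count and buffer pointer (nRem and aRem below) are advanced and the
** call is retried until everything has been written.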
*/ static int winWrite( - sqlite3_file *id, /* File to write into */ - const void *pBuf, /* The bytes to be written */ - int amt, /* Number of bytes to write */ - sqlite3_int64 offset /* Offset into the file to begin writing at */ + sqlite3_file *id, /* File to write into */ + const void *pBuf, /* The bytes to be written */ + int amt, /* Number of bytes to write */ + sqlite3_int64 offset /* Offset into the file to begin writing at */ ){ - LONG upperBits = (LONG)((offset>>32) & 0x7fffffff); - LONG lowerBits = (LONG)(offset & 0xffffffff); - DWORD rc; - winFile *pFile = (winFile*)id; - DWORD error; - DWORD wrote = 0; - - assert( id!=0 ); + int rc = 0; /* True if error has occurred, else false */ + winFile *pFile = (winFile*)id; /* File handle */ + int nRetry = 0; /* Number of retries */ + + assert( amt>0 ); + assert( pFile ); SimulateIOError(return SQLITE_IOERR_WRITE); SimulateDiskfullError(return SQLITE_FULL); - OSTRACE3("WRITE %d lock=%d\n", pFile->h, pFile->locktype); - rc = SetFilePointer(pFile->h, lowerBits, &upperBits, FILE_BEGIN); - if( rc==INVALID_SET_FILE_POINTER && (error=GetLastError())!=NO_ERROR ){ - pFile->lastErrno = error; - return SQLITE_FULL; - } - assert( amt>0 ); - while( - amt>0 - && (rc = WriteFile(pFile->h, pBuf, amt, &wrote, 0))!=0 - && wrote>0 - ){ - amt -= wrote; - pBuf = &((char*)pBuf)[wrote]; - } - if( !rc || amt>(int)wrote ){ - pFile->lastErrno = GetLastError(); - return SQLITE_FULL; - } + + OSTRACE(("WRITE pid=%lu, pFile=%p, file=%p, buffer=%p, amount=%d, " + "offset=%lld, lock=%d\n", osGetCurrentProcessId(), pFile, + pFile->h, pBuf, amt, offset, pFile->locktype)); + +#if defined(SQLITE_MMAP_READWRITE) && SQLITE_MAX_MMAP_SIZE>0 + /* Deal with as much of this write request as possible by transfering + ** data from the memory mapping using memcpy(). */ + if( offset<pFile->mmapSize ){ + if( offset+amt <= pFile->mmapSize ){ + memcpy(&((u8 *)(pFile->pMapRegion))[offset], pBuf, amt); + OSTRACE(("WRITE-MMAP pid=%lu, pFile=%p, file=%p, rc=SQLITE_OK\n", + osGetCurrentProcessId(), pFile, pFile->h)); + return SQLITE_OK; + }else{ + int nCopy = (int)(pFile->mmapSize - offset); + memcpy(&((u8 *)(pFile->pMapRegion))[offset], pBuf, nCopy); + pBuf = &((u8 *)pBuf)[nCopy]; + amt -= nCopy; + offset += nCopy; + } + } +#endif + +#if SQLITE_OS_WINCE || defined(SQLITE_WIN32_NO_OVERLAPPED) + rc = winSeekFile(pFile, offset); + if( rc==0 ){ +#else + { +#endif +#if !SQLITE_OS_WINCE && !defined(SQLITE_WIN32_NO_OVERLAPPED) + OVERLAPPED overlapped; /* The offset for WriteFile. 
*/ +#endif + u8 *aRem = (u8 *)pBuf; /* Data yet to be written */ + int nRem = amt; /* Number of bytes yet to be written */ + DWORD nWrite; /* Bytes written by each WriteFile() call */ + DWORD lastErrno = NO_ERROR; /* Value returned by GetLastError() */ + +#if !SQLITE_OS_WINCE && !defined(SQLITE_WIN32_NO_OVERLAPPED) + memset(&overlapped, 0, sizeof(OVERLAPPED)); + overlapped.Offset = (LONG)(offset & 0xffffffff); + overlapped.OffsetHigh = (LONG)((offset>>32) & 0x7fffffff); +#endif + + while( nRem>0 ){ +#if SQLITE_OS_WINCE || defined(SQLITE_WIN32_NO_OVERLAPPED) + if( !osWriteFile(pFile->h, aRem, nRem, &nWrite, 0) ){ +#else + if( !osWriteFile(pFile->h, aRem, nRem, &nWrite, &overlapped) ){ +#endif + if( winRetryIoerr(&nRetry, &lastErrno) ) continue; + break; + } + assert( nWrite==0 || nWrite<=(DWORD)nRem ); + if( nWrite==0 || nWrite>(DWORD)nRem ){ + lastErrno = osGetLastError(); + break; + } +#if !SQLITE_OS_WINCE && !defined(SQLITE_WIN32_NO_OVERLAPPED) + offset += nWrite; + overlapped.Offset = (LONG)(offset & 0xffffffff); + overlapped.OffsetHigh = (LONG)((offset>>32) & 0x7fffffff); +#endif + aRem += nWrite; + nRem -= nWrite; + } + if( nRem>0 ){ + pFile->lastErrno = lastErrno; + rc = 1; + } + } + + if( rc ){ + if( ( pFile->lastErrno==ERROR_HANDLE_DISK_FULL ) + || ( pFile->lastErrno==ERROR_DISK_FULL )){ + OSTRACE(("WRITE pid=%lu, pFile=%p, file=%p, rc=SQLITE_FULL\n", + osGetCurrentProcessId(), pFile, pFile->h)); + return winLogError(SQLITE_FULL, pFile->lastErrno, + "winWrite1", pFile->zPath); + } + OSTRACE(("WRITE pid=%lu, pFile=%p, file=%p, rc=SQLITE_IOERR_WRITE\n", + osGetCurrentProcessId(), pFile, pFile->h)); + return winLogError(SQLITE_IOERR_WRITE, pFile->lastErrno, + "winWrite2", pFile->zPath); + }else{ + winLogIoerr(nRetry, __LINE__); + } + OSTRACE(("WRITE pid=%lu, pFile=%p, file=%p, rc=SQLITE_OK\n", + osGetCurrentProcessId(), pFile, pFile->h)); return SQLITE_OK; } /* ** Truncate an open file to a specified size */ static int winTruncate(sqlite3_file *id, sqlite3_int64 nByte){ - LONG upperBits = (LONG)((nByte>>32) & 0x7fffffff); - LONG lowerBits = (LONG)(nByte & 0xffffffff); - DWORD rc; - winFile *pFile = (winFile*)id; - DWORD error; - - assert( id!=0 ); - OSTRACE3("TRUNCATE %d %lld\n", pFile->h, nByte); + winFile *pFile = (winFile*)id; /* File handle object */ + int rc = SQLITE_OK; /* Return code for this function */ + DWORD lastErrno; + + assert( pFile ); SimulateIOError(return SQLITE_IOERR_TRUNCATE); - rc = SetFilePointer(pFile->h, lowerBits, &upperBits, FILE_BEGIN); - if( rc==INVALID_SET_FILE_POINTER && (error=GetLastError())!=NO_ERROR ){ - pFile->lastErrno = error; - return SQLITE_IOERR_TRUNCATE; - } - /* SetEndOfFile will fail if nByte is negative */ - if( !SetEndOfFile(pFile->h) ){ - pFile->lastErrno = GetLastError(); - return SQLITE_IOERR_TRUNCATE; - } - return SQLITE_OK; + OSTRACE(("TRUNCATE pid=%lu, pFile=%p, file=%p, size=%lld, lock=%d\n", + osGetCurrentProcessId(), pFile, pFile->h, nByte, pFile->locktype)); + + /* If the user has configured a chunk-size for this file, truncate the + ** file so that it consists of an integer number of chunks (i.e. the + ** actual file size after the operation may be larger than the requested + ** size). + */ + if( pFile->szChunk>0 ){ + nByte = ((nByte + pFile->szChunk - 1)/pFile->szChunk) * pFile->szChunk; + } + + /* SetEndOfFile() returns non-zero when successful, or zero when it fails. 
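+ ** A failure reporting ERROR_USER_MAPPED_FILE is deliberately tolerated
+ ** below, since SetEndOfFile() refuses to shrink a file that still has an
+ ** active memory-mapped view over the truncated region.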
*/ + if( winSeekFile(pFile, nByte) ){ + rc = winLogError(SQLITE_IOERR_TRUNCATE, pFile->lastErrno, + "winTruncate1", pFile->zPath); + }else if( 0==osSetEndOfFile(pFile->h) && + ((lastErrno = osGetLastError())!=ERROR_USER_MAPPED_FILE) ){ + pFile->lastErrno = lastErrno; + rc = winLogError(SQLITE_IOERR_TRUNCATE, pFile->lastErrno, + "winTruncate2", pFile->zPath); + } + +#if SQLITE_MAX_MMAP_SIZE>0 + /* If the file was truncated to a size smaller than the currently + ** mapped region, reduce the effective mapping size as well. SQLite will + ** use read() and write() to access data beyond this point from now on. + */ + if( pFile->pMapRegion && nByte<pFile->mmapSize ){ + pFile->mmapSize = nByte; + } +#endif + + OSTRACE(("TRUNCATE pid=%lu, pFile=%p, file=%p, rc=%s\n", + osGetCurrentProcessId(), pFile, pFile->h, sqlite3ErrName(rc))); + return rc; } #ifdef SQLITE_TEST /* ** Count the number of fullsyncs and normal syncs. This is used to test @@ -28984,116 +37835,224 @@ /* ** Make sure all writes to a particular file are committed to disk. */ static int winSync(sqlite3_file *id, int flags){ #ifndef SQLITE_NO_SYNC + /* + ** Used only when SQLITE_NO_SYNC is not defined. + */ + BOOL rc; +#endif +#if !defined(NDEBUG) || !defined(SQLITE_NO_SYNC) || \ + defined(SQLITE_HAVE_OS_TRACE) + /* + ** Used when SQLITE_NO_SYNC is not defined and by the assert() and/or + ** OSTRACE() macros. + */ winFile *pFile = (winFile*)id; - - assert( id!=0 ); - OSTRACE3("SYNC %d lock=%d\n", pFile->h, pFile->locktype); #else UNUSED_PARAMETER(id); #endif + + assert( pFile ); + /* Check that one of SQLITE_SYNC_NORMAL or FULL was passed */ + assert((flags&0x0F)==SQLITE_SYNC_NORMAL + || (flags&0x0F)==SQLITE_SYNC_FULL + ); + + /* Unix cannot, but some systems may return SQLITE_FULL from here. This + ** line is to test that doing so does not cause any problems. 
+ */ + SimulateDiskfullError( return SQLITE_FULL ); + + OSTRACE(("SYNC pid=%lu, pFile=%p, file=%p, flags=%x, lock=%d\n", + osGetCurrentProcessId(), pFile, pFile->h, flags, + pFile->locktype)); + #ifndef SQLITE_TEST UNUSED_PARAMETER(flags); #else - if( flags & SQLITE_SYNC_FULL ){ + if( (flags&0x0F)==SQLITE_SYNC_FULL ){ sqlite3_fullsync_count++; } sqlite3_sync_count++; #endif + /* If we compiled with the SQLITE_NO_SYNC flag, then syncing is a ** no-op */ #ifdef SQLITE_NO_SYNC - return SQLITE_OK; + OSTRACE(("SYNC-NOP pid=%lu, pFile=%p, file=%p, rc=SQLITE_OK\n", + osGetCurrentProcessId(), pFile, pFile->h)); + return SQLITE_OK; #else - if( FlushFileBuffers(pFile->h) ){ +#if SQLITE_MAX_MMAP_SIZE>0 + if( pFile->pMapRegion ){ + if( osFlushViewOfFile(pFile->pMapRegion, 0) ){ + OSTRACE(("SYNC-MMAP pid=%lu, pFile=%p, pMapRegion=%p, " + "rc=SQLITE_OK\n", osGetCurrentProcessId(), + pFile, pFile->pMapRegion)); + }else{ + pFile->lastErrno = osGetLastError(); + OSTRACE(("SYNC-MMAP pid=%lu, pFile=%p, pMapRegion=%p, " + "rc=SQLITE_IOERR_MMAP\n", osGetCurrentProcessId(), + pFile, pFile->pMapRegion)); + return winLogError(SQLITE_IOERR_MMAP, pFile->lastErrno, + "winSync1", pFile->zPath); + } + } +#endif + rc = osFlushFileBuffers(pFile->h); + SimulateIOError( rc=FALSE ); + if( rc ){ + OSTRACE(("SYNC pid=%lu, pFile=%p, file=%p, rc=SQLITE_OK\n", + osGetCurrentProcessId(), pFile, pFile->h)); return SQLITE_OK; }else{ - pFile->lastErrno = GetLastError(); - return SQLITE_IOERR; + pFile->lastErrno = osGetLastError(); + OSTRACE(("SYNC pid=%lu, pFile=%p, file=%p, rc=SQLITE_IOERR_FSYNC\n", + osGetCurrentProcessId(), pFile, pFile->h)); + return winLogError(SQLITE_IOERR_FSYNC, pFile->lastErrno, + "winSync2", pFile->zPath); } #endif } /* ** Determine the current size of a file in bytes */ static int winFileSize(sqlite3_file *id, sqlite3_int64 *pSize){ - DWORD upperBits; - DWORD lowerBits; winFile *pFile = (winFile*)id; - DWORD error; + int rc = SQLITE_OK; assert( id!=0 ); + assert( pSize!=0 ); SimulateIOError(return SQLITE_IOERR_FSTAT); - lowerBits = GetFileSize(pFile->h, &upperBits); - if( (lowerBits == INVALID_FILE_SIZE) - && ((error = GetLastError()) != NO_ERROR) ) - { - pFile->lastErrno = error; - return SQLITE_IOERR_FSTAT; - } - *pSize = (((sqlite3_int64)upperBits)<<32) + lowerBits; - return SQLITE_OK; + OSTRACE(("SIZE file=%p, pSize=%p\n", pFile->h, pSize)); + +#if SQLITE_OS_WINRT + { + FILE_STANDARD_INFO info; + if( osGetFileInformationByHandleEx(pFile->h, FileStandardInfo, + &info, sizeof(info)) ){ + *pSize = info.EndOfFile.QuadPart; + }else{ + pFile->lastErrno = osGetLastError(); + rc = winLogError(SQLITE_IOERR_FSTAT, pFile->lastErrno, + "winFileSize", pFile->zPath); + } + } +#else + { + DWORD upperBits; + DWORD lowerBits; + DWORD lastErrno; + + lowerBits = osGetFileSize(pFile->h, &upperBits); + *pSize = (((sqlite3_int64)upperBits)<<32) + lowerBits; + if( (lowerBits == INVALID_FILE_SIZE) + && ((lastErrno = osGetLastError())!=NO_ERROR) ){ + pFile->lastErrno = lastErrno; + rc = winLogError(SQLITE_IOERR_FSTAT, pFile->lastErrno, + "winFileSize", pFile->zPath); + } + } +#endif + OSTRACE(("SIZE file=%p, pSize=%p, *pSize=%lld, rc=%s\n", + pFile->h, pSize, *pSize, sqlite3ErrName(rc))); + return rc; } /* ** LOCKFILE_FAIL_IMMEDIATELY is undefined on some Windows systems. */ #ifndef LOCKFILE_FAIL_IMMEDIATELY # define LOCKFILE_FAIL_IMMEDIATELY 1 #endif +#ifndef LOCKFILE_EXCLUSIVE_LOCK +# define LOCKFILE_EXCLUSIVE_LOCK 2 +#endif + +/* +** Historically, SQLite has used both the LockFile and LockFileEx functions. 
+** When the LockFile function was used, it was always expected to fail +** immediately if the lock could not be obtained. Also, it always expected to +** obtain an exclusive lock. These flags are used with the LockFileEx function +** and reflect those expectations; therefore, they should not be changed. +*/ +#ifndef SQLITE_LOCKFILE_FLAGS +# define SQLITE_LOCKFILE_FLAGS (LOCKFILE_FAIL_IMMEDIATELY | \ + LOCKFILE_EXCLUSIVE_LOCK) +#endif + +/* +** Currently, SQLite never calls the LockFileEx function without wanting the +** call to fail immediately if the lock cannot be obtained. +*/ +#ifndef SQLITE_LOCKFILEEX_FLAGS +# define SQLITE_LOCKFILEEX_FLAGS (LOCKFILE_FAIL_IMMEDIATELY) +#endif + /* ** Acquire a reader lock. ** Different API routines are called depending on whether or not this -** is Win95 or WinNT. +** is Win9x or WinNT. */ -static int getReadLock(winFile *pFile){ +static int winGetReadLock(winFile *pFile){ int res; - if( isNT() ){ - OVERLAPPED ovlp; - ovlp.Offset = SHARED_FIRST; - ovlp.OffsetHigh = 0; - ovlp.hEvent = 0; - res = LockFileEx(pFile->h, LOCKFILE_FAIL_IMMEDIATELY, - 0, SHARED_SIZE, 0, &ovlp); -/* isNT() is 1 if SQLITE_OS_WINCE==1, so this else is never executed. -*/ -#if SQLITE_OS_WINCE==0 - }else{ + OSTRACE(("READ-LOCK file=%p, lock=%d\n", pFile->h, pFile->locktype)); + if( osIsNT() ){ +#if SQLITE_OS_WINCE + /* + ** NOTE: Windows CE is handled differently here due its lack of the Win32 + ** API LockFileEx. + */ + res = winceLockFile(&pFile->h, SHARED_FIRST, 0, 1, 0); +#else + res = winLockFile(&pFile->h, SQLITE_LOCKFILEEX_FLAGS, SHARED_FIRST, 0, + SHARED_SIZE, 0); +#endif + } +#ifdef SQLITE_WIN32_HAS_ANSI + else{ int lk; sqlite3_randomness(sizeof(lk), &lk); pFile->sharedLockByte = (short)((lk & 0x7fffffff)%(SHARED_SIZE - 1)); - res = LockFile(pFile->h, SHARED_FIRST+pFile->sharedLockByte, 0, 1, 0); + res = winLockFile(&pFile->h, SQLITE_LOCKFILE_FLAGS, + SHARED_FIRST+pFile->sharedLockByte, 0, 1, 0); + } #endif - } if( res == 0 ){ - pFile->lastErrno = GetLastError(); + pFile->lastErrno = osGetLastError(); + /* No need to log a failure to lock */ } + OSTRACE(("READ-LOCK file=%p, result=%d\n", pFile->h, res)); return res; } /* ** Undo a readlock */ -static int unlockReadLock(winFile *pFile){ +static int winUnlockReadLock(winFile *pFile){ int res; - if( isNT() ){ - res = UnlockFile(pFile->h, SHARED_FIRST, 0, SHARED_SIZE, 0); -/* isNT() is 1 if SQLITE_OS_WINCE==1, so this else is never executed. -*/ -#if SQLITE_OS_WINCE==0 - }else{ - res = UnlockFile(pFile->h, SHARED_FIRST + pFile->sharedLockByte, 0, 1, 0); + DWORD lastErrno; + OSTRACE(("READ-UNLOCK file=%p, lock=%d\n", pFile->h, pFile->locktype)); + if( osIsNT() ){ + res = winUnlockFile(&pFile->h, SHARED_FIRST, 0, SHARED_SIZE, 0); + } +#ifdef SQLITE_WIN32_HAS_ANSI + else{ + res = winUnlockFile(&pFile->h, SHARED_FIRST+pFile->sharedLockByte, 0, 1, 0); + } #endif + if( res==0 && ((lastErrno = osGetLastError())!=ERROR_NOT_LOCKED) ){ + pFile->lastErrno = lastErrno; + winLogError(SQLITE_IOERR_UNLOCK, pFile->lastErrno, + "winUnlockReadLock", pFile->zPath); } - if( res == 0 ){ - pFile->lastErrno = GetLastError(); - } + OSTRACE(("READ-UNLOCK file=%p, result=%d\n", pFile->h, res)); return res; } /* ** Lock the file with the lock specified by parameter locktype - one @@ -29121,27 +38080,34 @@ ** It is not possible to lower the locking level one step at a time. You ** must go straight to locking level 0. 
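** For example, a writing transaction normally steps through SHARED_LOCK,
** RESERVED_LOCK and (implicitly) PENDING_LOCK on its way to EXCLUSIVE_LOCK
** by successive calls to this routine, and later falls back to SHARED_LOCK
** or NO_LOCK with a single call to winUnlock().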
*/ static int winLock(sqlite3_file *id, int locktype){ int rc = SQLITE_OK; /* Return code from subroutines */ - int res = 1; /* Result of a windows lock call */ + int res = 1; /* Result of a Windows lock call */ int newLocktype; /* Set pFile->locktype to this value before exiting */ int gotPendingLock = 0;/* True if we acquired a PENDING lock this time */ winFile *pFile = (winFile*)id; - DWORD error = NO_ERROR; + DWORD lastErrno = NO_ERROR; assert( id!=0 ); - OSTRACE5("LOCK %d %d was %d(%d)\n", - pFile->h, locktype, pFile->locktype, pFile->sharedLockByte); + OSTRACE(("LOCK file=%p, oldLock=%d(%d), newLock=%d\n", + pFile->h, pFile->locktype, pFile->sharedLockByte, locktype)); /* If there is already a lock of this type or more restrictive on the ** OsFile, do nothing. Don't use the end_lock: exit path, as ** sqlite3OsEnterMutex() hasn't been called yet. */ if( pFile->locktype>=locktype ){ + OSTRACE(("LOCK-HELD file=%p, rc=SQLITE_OK\n", pFile->h)); return SQLITE_OK; } + + /* Do not allow any kind of write-lock on a read-only database + */ + if( (pFile->ctrlFlags & WINFILE_RDONLY)!=0 && locktype>=RESERVED_LOCK ){ + return SQLITE_IOERR_LOCK; + } /* Make sure the locking sequence is correct */ assert( pFile->locktype!=NO_LOCK || locktype==SHARED_LOCK ); assert( locktype!=PENDING_LOCK ); @@ -29155,44 +38121,57 @@ if( (pFile->locktype==NO_LOCK) || ( (locktype==EXCLUSIVE_LOCK) && (pFile->locktype==RESERVED_LOCK)) ){ int cnt = 3; - while( cnt-->0 && (res = LockFile(pFile->h, PENDING_BYTE, 0, 1, 0))==0 ){ - /* Try 3 times to get the pending lock. The pending lock might be - ** held by another reader process who will release it momentarily. + while( cnt-->0 && (res = winLockFile(&pFile->h, SQLITE_LOCKFILE_FLAGS, + PENDING_BYTE, 0, 1, 0))==0 ){ + /* Try 3 times to get the pending lock. This is needed to work + ** around problems caused by indexing and/or anti-virus software on + ** Windows systems. + ** If you are using this code as a model for alternative VFSes, do not + ** copy this retry logic. It is a hack intended for Windows only. */ - OSTRACE2("could not get a PENDING lock. 
cnt=%d\n", cnt); - Sleep(1); + lastErrno = osGetLastError(); + OSTRACE(("LOCK-PENDING-FAIL file=%p, count=%d, result=%d\n", + pFile->h, cnt, res)); + if( lastErrno==ERROR_INVALID_HANDLE ){ + pFile->lastErrno = lastErrno; + rc = SQLITE_IOERR_LOCK; + OSTRACE(("LOCK-FAIL file=%p, count=%d, rc=%s\n", + pFile->h, cnt, sqlite3ErrName(rc))); + return rc; + } + if( cnt ) sqlite3_win32_sleep(1); } gotPendingLock = res; if( !res ){ - error = GetLastError(); + lastErrno = osGetLastError(); } } /* Acquire a shared lock */ if( locktype==SHARED_LOCK && res ){ assert( pFile->locktype==NO_LOCK ); - res = getReadLock(pFile); + res = winGetReadLock(pFile); if( res ){ newLocktype = SHARED_LOCK; }else{ - error = GetLastError(); + lastErrno = osGetLastError(); } } /* Acquire a RESERVED lock */ if( locktype==RESERVED_LOCK && res ){ assert( pFile->locktype==SHARED_LOCK ); - res = LockFile(pFile->h, RESERVED_BYTE, 0, 1, 0); + res = winLockFile(&pFile->h, SQLITE_LOCKFILE_FLAGS, RESERVED_BYTE, 0, 1, 0); if( res ){ newLocktype = RESERVED_LOCK; }else{ - error = GetLastError(); + lastErrno = osGetLastError(); } } /* Acquire a PENDING lock */ @@ -29203,66 +38182,72 @@ /* Acquire an EXCLUSIVE lock */ if( locktype==EXCLUSIVE_LOCK && res ){ assert( pFile->locktype>=SHARED_LOCK ); - res = unlockReadLock(pFile); - OSTRACE2("unreadlock = %d\n", res); - res = LockFile(pFile->h, SHARED_FIRST, 0, SHARED_SIZE, 0); + res = winUnlockReadLock(pFile); + res = winLockFile(&pFile->h, SQLITE_LOCKFILE_FLAGS, SHARED_FIRST, 0, + SHARED_SIZE, 0); if( res ){ newLocktype = EXCLUSIVE_LOCK; }else{ - error = GetLastError(); - OSTRACE2("error-code = %d\n", error); - getReadLock(pFile); + lastErrno = osGetLastError(); + winGetReadLock(pFile); } } /* If we are holding a PENDING lock that ought to be released, then ** release it now. */ if( gotPendingLock && locktype==SHARED_LOCK ){ - UnlockFile(pFile->h, PENDING_BYTE, 0, 1, 0); + winUnlockFile(&pFile->h, PENDING_BYTE, 0, 1, 0); } /* Update the state of the lock has held in the file descriptor then ** return the appropriate result code. */ if( res ){ rc = SQLITE_OK; }else{ - OSTRACE4("LOCK FAILED %d trying for %d but got %d\n", pFile->h, - locktype, newLocktype); - pFile->lastErrno = error; + pFile->lastErrno = lastErrno; rc = SQLITE_BUSY; + OSTRACE(("LOCK-FAIL file=%p, wanted=%d, got=%d\n", + pFile->h, locktype, newLocktype)); } pFile->locktype = (u8)newLocktype; + OSTRACE(("LOCK file=%p, lock=%d, rc=%s\n", + pFile->h, pFile->locktype, sqlite3ErrName(rc))); return rc; } /* ** This routine checks if there is a RESERVED lock held on the specified ** file by this or any other process. If such a lock is held, return ** non-zero, otherwise zero. 
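** The check first consults this process's own lock state.  Failing that, it
** probes by briefly locking and then unlocking the RESERVED byte: if the
** probe cannot obtain the lock, another process must be holding RESERVED.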
*/ static int winCheckReservedLock(sqlite3_file *id, int *pResOut){ - int rc; + int res; winFile *pFile = (winFile*)id; + SimulateIOError( return SQLITE_IOERR_CHECKRESERVEDLOCK; ); + OSTRACE(("TEST-WR-LOCK file=%p, pResOut=%p\n", pFile->h, pResOut)); + assert( id!=0 ); if( pFile->locktype>=RESERVED_LOCK ){ - rc = 1; - OSTRACE3("TEST WR-LOCK %d %d (local)\n", pFile->h, rc); + res = 1; + OSTRACE(("TEST-WR-LOCK file=%p, result=%d (local)\n", pFile->h, res)); }else{ - rc = LockFile(pFile->h, RESERVED_BYTE, 0, 1, 0); - if( rc ){ - UnlockFile(pFile->h, RESERVED_BYTE, 0, 1, 0); - } - rc = !rc; - OSTRACE3("TEST WR-LOCK %d %d (remote)\n", pFile->h, rc); - } - *pResOut = rc; + res = winLockFile(&pFile->h, SQLITE_LOCKFILEEX_FLAGS,RESERVED_BYTE,0,1,0); + if( res ){ + winUnlockFile(&pFile->h, RESERVED_BYTE, 0, 1, 0); + } + res = !res; + OSTRACE(("TEST-WR-LOCK file=%p, result=%d (remote)\n", pFile->h, res)); + } + *pResOut = res; + OSTRACE(("TEST-WR-LOCK file=%p, pResOut=%p, *pResOut=%d, rc=SQLITE_OK\n", + pFile->h, pResOut, *pResOut)); return SQLITE_OK; } /* ** Lower the locking level on file descriptor id to locktype. locktype @@ -29279,49 +38264,170 @@ int type; winFile *pFile = (winFile*)id; int rc = SQLITE_OK; assert( pFile!=0 ); assert( locktype<=SHARED_LOCK ); - OSTRACE5("UNLOCK %d to %d was %d(%d)\n", pFile->h, locktype, - pFile->locktype, pFile->sharedLockByte); + OSTRACE(("UNLOCK file=%p, oldLock=%d(%d), newLock=%d\n", + pFile->h, pFile->locktype, pFile->sharedLockByte, locktype)); type = pFile->locktype; if( type>=EXCLUSIVE_LOCK ){ - UnlockFile(pFile->h, SHARED_FIRST, 0, SHARED_SIZE, 0); - if( locktype==SHARED_LOCK && !getReadLock(pFile) ){ + winUnlockFile(&pFile->h, SHARED_FIRST, 0, SHARED_SIZE, 0); + if( locktype==SHARED_LOCK && !winGetReadLock(pFile) ){ /* This should never happen. We should always be able to ** reacquire the read lock */ - rc = SQLITE_IOERR_UNLOCK; + rc = winLogError(SQLITE_IOERR_UNLOCK, osGetLastError(), + "winUnlock", pFile->zPath); } } if( type>=RESERVED_LOCK ){ - UnlockFile(pFile->h, RESERVED_BYTE, 0, 1, 0); + winUnlockFile(&pFile->h, RESERVED_BYTE, 0, 1, 0); } if( locktype==NO_LOCK && type>=SHARED_LOCK ){ - unlockReadLock(pFile); + winUnlockReadLock(pFile); } if( type>=PENDING_LOCK ){ - UnlockFile(pFile->h, PENDING_BYTE, 0, 1, 0); + winUnlockFile(&pFile->h, PENDING_BYTE, 0, 1, 0); } pFile->locktype = (u8)locktype; + OSTRACE(("UNLOCK file=%p, lock=%d, rc=%s\n", + pFile->h, pFile->locktype, sqlite3ErrName(rc))); return rc; } +/* +** If *pArg is initially negative then this is a query. Set *pArg to +** 1 or 0 depending on whether or not bit mask of pFile->ctrlFlags is set. +** +** If *pArg is 0 or 1, then clear or set the mask bit of pFile->ctrlFlags. +*/ +static void winModeBit(winFile *pFile, unsigned char mask, int *pArg){ + if( *pArg<0 ){ + *pArg = (pFile->ctrlFlags & mask)!=0; + }else if( (*pArg)==0 ){ + pFile->ctrlFlags &= ~mask; + }else{ + pFile->ctrlFlags |= mask; + } +} + +/* Forward references to VFS helper methods used for temporary files */ +static int winGetTempname(sqlite3_vfs *, char **); +static int winIsDir(const void *); +static BOOL winIsDriveLetterAndColon(const char *); + /* ** Control and query of the open file handle. 
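** For example, a SQLITE_FCNTL_PERSIST_WAL opcode with *(int*)pArg set to -1
** merely reports whether the WINFILE_PERSIST_WAL bit is set, while a value
** of 0 or 1 clears or sets that bit (see winModeBit() above).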
*/ static int winFileControl(sqlite3_file *id, int op, void *pArg){ + winFile *pFile = (winFile*)id; + OSTRACE(("FCNTL file=%p, op=%d, pArg=%p\n", pFile->h, op, pArg)); switch( op ){ case SQLITE_FCNTL_LOCKSTATE: { - *(int*)pArg = ((winFile*)id)->locktype; + *(int*)pArg = pFile->locktype; + OSTRACE(("FCNTL file=%p, rc=SQLITE_OK\n", pFile->h)); return SQLITE_OK; } case SQLITE_LAST_ERRNO: { - *(int*)pArg = (int)((winFile*)id)->lastErrno; + *(int*)pArg = (int)pFile->lastErrno; + OSTRACE(("FCNTL file=%p, rc=SQLITE_OK\n", pFile->h)); + return SQLITE_OK; + } + case SQLITE_FCNTL_CHUNK_SIZE: { + pFile->szChunk = *(int *)pArg; + OSTRACE(("FCNTL file=%p, rc=SQLITE_OK\n", pFile->h)); + return SQLITE_OK; + } + case SQLITE_FCNTL_SIZE_HINT: { + if( pFile->szChunk>0 ){ + sqlite3_int64 oldSz; + int rc = winFileSize(id, &oldSz); + if( rc==SQLITE_OK ){ + sqlite3_int64 newSz = *(sqlite3_int64*)pArg; + if( newSz>oldSz ){ + SimulateIOErrorBenign(1); + rc = winTruncate(id, newSz); + SimulateIOErrorBenign(0); + } + } + OSTRACE(("FCNTL file=%p, rc=%s\n", pFile->h, sqlite3ErrName(rc))); + return rc; + } + OSTRACE(("FCNTL file=%p, rc=SQLITE_OK\n", pFile->h)); + return SQLITE_OK; + } + case SQLITE_FCNTL_PERSIST_WAL: { + winModeBit(pFile, WINFILE_PERSIST_WAL, (int*)pArg); + OSTRACE(("FCNTL file=%p, rc=SQLITE_OK\n", pFile->h)); + return SQLITE_OK; + } + case SQLITE_FCNTL_POWERSAFE_OVERWRITE: { + winModeBit(pFile, WINFILE_PSOW, (int*)pArg); + OSTRACE(("FCNTL file=%p, rc=SQLITE_OK\n", pFile->h)); + return SQLITE_OK; + } + case SQLITE_FCNTL_VFSNAME: { + *(char**)pArg = sqlite3_mprintf("%s", pFile->pVfs->zName); + OSTRACE(("FCNTL file=%p, rc=SQLITE_OK\n", pFile->h)); + return SQLITE_OK; + } + case SQLITE_FCNTL_WIN32_AV_RETRY: { + int *a = (int*)pArg; + if( a[0]>0 ){ + winIoerrRetry = a[0]; + }else{ + a[0] = winIoerrRetry; + } + if( a[1]>0 ){ + winIoerrRetryDelay = a[1]; + }else{ + a[1] = winIoerrRetryDelay; + } + OSTRACE(("FCNTL file=%p, rc=SQLITE_OK\n", pFile->h)); + return SQLITE_OK; + } +#ifdef SQLITE_TEST + case SQLITE_FCNTL_WIN32_SET_HANDLE: { + LPHANDLE phFile = (LPHANDLE)pArg; + HANDLE hOldFile = pFile->h; + pFile->h = *phFile; + *phFile = hOldFile; + OSTRACE(("FCNTL oldFile=%p, newFile=%p, rc=SQLITE_OK\n", + hOldFile, pFile->h)); return SQLITE_OK; } +#endif + case SQLITE_FCNTL_TEMPFILENAME: { + char *zTFile = 0; + int rc = winGetTempname(pFile->pVfs, &zTFile); + if( rc==SQLITE_OK ){ + *(char**)pArg = zTFile; + } + OSTRACE(("FCNTL file=%p, rc=%s\n", pFile->h, sqlite3ErrName(rc))); + return rc; + } +#if SQLITE_MAX_MMAP_SIZE>0 + case SQLITE_FCNTL_MMAP_SIZE: { + i64 newLimit = *(i64*)pArg; + int rc = SQLITE_OK; + if( newLimit>sqlite3GlobalConfig.mxMmap ){ + newLimit = sqlite3GlobalConfig.mxMmap; + } + *(i64*)pArg = pFile->mmapSizeMax; + if( newLimit>=0 && newLimit!=pFile->mmapSizeMax && pFile->nFetchOut==0 ){ + pFile->mmapSizeMax = newLimit; + if( pFile->mmapSize>0 ){ + winUnmapfile(pFile); + rc = winMapfile(pFile, -1); + } + } + OSTRACE(("FCNTL file=%p, rc=%s\n", pFile->h, sqlite3ErrName(rc))); + return rc; + } +#endif } - return SQLITE_ERROR; + OSTRACE(("FCNTL file=%p, rc=SQLITE_NOTFOUND\n", pFile->h)); + return SQLITE_NOTFOUND; } /* ** Return the sector size in bytes of the underlying block device for ** the specified file. This is almost always 512 bytes, but may be @@ -29331,253 +38437,1402 @@ ** if two files are created in the same file-system directory (i.e. ** a database and its journal file) that the sector size will be the ** same for both. 
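** The implementation below does not query the device at all; it simply
** reports the compile-time constant SQLITE_DEFAULT_SECTOR_SIZE for every
** file.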
*/ static int winSectorSize(sqlite3_file *id){ - assert( id!=0 ); - return (int)(((winFile*)id)->sectorSize); + (void)id; + return SQLITE_DEFAULT_SECTOR_SIZE; } /* ** Return a vector of device characteristics. */ static int winDeviceCharacteristics(sqlite3_file *id){ - UNUSED_PARAMETER(id); - return 0; -} + winFile *p = (winFile*)id; + return SQLITE_IOCAP_UNDELETABLE_WHEN_OPEN | + ((p->ctrlFlags & WINFILE_PSOW)?SQLITE_IOCAP_POWERSAFE_OVERWRITE:0); +} + +/* +** Windows will only let you create file view mappings +** on allocation size granularity boundaries. +** During sqlite3_os_init() we do a GetSystemInfo() +** to get the granularity size. +*/ +static SYSTEM_INFO winSysInfo; + +#ifndef SQLITE_OMIT_WAL + +/* +** Helper functions to obtain and relinquish the global mutex. The +** global mutex is used to protect the winLockInfo objects used by +** this file, all of which may be shared by multiple threads. +** +** Function winShmMutexHeld() is used to assert() that the global mutex +** is held when required. This function is only used as part of assert() +** statements. e.g. +** +** winShmEnterMutex() +** assert( winShmMutexHeld() ); +** winShmLeaveMutex() +*/ +static void winShmEnterMutex(void){ + sqlite3_mutex_enter(sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_VFS1)); +} +static void winShmLeaveMutex(void){ + sqlite3_mutex_leave(sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_VFS1)); +} +#ifndef NDEBUG +static int winShmMutexHeld(void) { + return sqlite3_mutex_held(sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_VFS1)); +} +#endif + +/* +** Object used to represent a single file opened and mmapped to provide +** shared memory. When multiple threads all reference the same +** log-summary, each thread has its own winFile object, but they all +** point to a single instance of this object. In other words, each +** log-summary is opened only once per process. +** +** winShmMutexHeld() must be true when creating or destroying +** this object or while reading or writing the following fields: +** +** nRef +** pNext +** +** The following fields are read-only after the object is created: +** +** fid +** zFilename +** +** Either winShmNode.mutex must be held or winShmNode.nRef==0 and +** winShmMutexHeld() is true when reading or writing any other field +** in this structure. +** +*/ +struct winShmNode { + sqlite3_mutex *mutex; /* Mutex to access this object */ + char *zFilename; /* Name of the file */ + winFile hFile; /* File handle from winOpen */ + + int szRegion; /* Size of shared-memory regions */ + int nRegion; /* Size of array apRegion */ + struct ShmRegion { + HANDLE hMap; /* File handle from CreateFileMapping */ + void *pMap; + } *aRegion; + DWORD lastErrno; /* The Windows errno from the last I/O error */ + + int nRef; /* Number of winShm objects pointing to this */ + winShm *pFirst; /* All winShm objects pointing to this */ + winShmNode *pNext; /* Next in list of all winShmNode objects */ +#if defined(SQLITE_DEBUG) || defined(SQLITE_HAVE_OS_TRACE) + u8 nextShmId; /* Next available winShm.id value */ +#endif +}; + +/* +** A global array of all winShmNode objects. +** +** The winShmMutexHeld() must be true while reading or writing this list. +*/ +static winShmNode *winShmNodeList = 0; + +/* +** Structure used internally by this VFS to record the state of an +** open shared memory connection. +** +** The following fields are initialized when this object is created and +** are read-only thereafter: +** +** winShm.pShmNode +** winShm.id +** +** All other fields are read/write. 
The winShm.pShmNode->mutex must be held +** while accessing any read/write fields. +*/ +struct winShm { + winShmNode *pShmNode; /* The underlying winShmNode object */ + winShm *pNext; /* Next winShm with the same winShmNode */ + u8 hasMutex; /* True if holding the winShmNode mutex */ + u16 sharedMask; /* Mask of shared locks held */ + u16 exclMask; /* Mask of exclusive locks held */ +#if defined(SQLITE_DEBUG) || defined(SQLITE_HAVE_OS_TRACE) + u8 id; /* Id of this connection with its winShmNode */ +#endif +}; + +/* +** Constants used for locking +*/ +#define WIN_SHM_BASE ((22+SQLITE_SHM_NLOCK)*4) /* first lock byte */ +#define WIN_SHM_DMS (WIN_SHM_BASE+SQLITE_SHM_NLOCK) /* deadman switch */ + +/* +** Apply advisory locks for all n bytes beginning at ofst. +*/ +#define _SHM_UNLCK 1 +#define _SHM_RDLCK 2 +#define _SHM_WRLCK 3 +static int winShmSystemLock( + winShmNode *pFile, /* Apply locks to this open shared-memory segment */ + int lockType, /* _SHM_UNLCK, _SHM_RDLCK, or _SHM_WRLCK */ + int ofst, /* Offset to first byte to be locked/unlocked */ + int nByte /* Number of bytes to lock or unlock */ +){ + int rc = 0; /* Result code form Lock/UnlockFileEx() */ + + /* Access to the winShmNode object is serialized by the caller */ + assert( sqlite3_mutex_held(pFile->mutex) || pFile->nRef==0 ); + + OSTRACE(("SHM-LOCK file=%p, lock=%d, offset=%d, size=%d\n", + pFile->hFile.h, lockType, ofst, nByte)); + + /* Release/Acquire the system-level lock */ + if( lockType==_SHM_UNLCK ){ + rc = winUnlockFile(&pFile->hFile.h, ofst, 0, nByte, 0); + }else{ + /* Initialize the locking parameters */ + DWORD dwFlags = LOCKFILE_FAIL_IMMEDIATELY; + if( lockType == _SHM_WRLCK ) dwFlags |= LOCKFILE_EXCLUSIVE_LOCK; + rc = winLockFile(&pFile->hFile.h, dwFlags, ofst, 0, nByte, 0); + } + + if( rc!= 0 ){ + rc = SQLITE_OK; + }else{ + pFile->lastErrno = osGetLastError(); + rc = SQLITE_BUSY; + } + + OSTRACE(("SHM-LOCK file=%p, func=%s, errno=%lu, rc=%s\n", + pFile->hFile.h, (lockType == _SHM_UNLCK) ? "winUnlockFile" : + "winLockFile", pFile->lastErrno, sqlite3ErrName(rc))); + + return rc; +} + +/* Forward references to VFS methods */ +static int winOpen(sqlite3_vfs*,const char*,sqlite3_file*,int,int*); +static int winDelete(sqlite3_vfs *,const char*,int); + +/* +** Purge the winShmNodeList list of all entries with winShmNode.nRef==0. +** +** This is not a VFS shared-memory method; it is a utility function called +** by VFS shared-memory methods. +*/ +static void winShmPurge(sqlite3_vfs *pVfs, int deleteFlag){ + winShmNode **pp; + winShmNode *p; + assert( winShmMutexHeld() ); + OSTRACE(("SHM-PURGE pid=%lu, deleteFlag=%d\n", + osGetCurrentProcessId(), deleteFlag)); + pp = &winShmNodeList; + while( (p = *pp)!=0 ){ + if( p->nRef==0 ){ + int i; + if( p->mutex ){ sqlite3_mutex_free(p->mutex); } + for(i=0; i<p->nRegion; i++){ + BOOL bRc = osUnmapViewOfFile(p->aRegion[i].pMap); + OSTRACE(("SHM-PURGE-UNMAP pid=%lu, region=%d, rc=%s\n", + osGetCurrentProcessId(), i, bRc ? "ok" : "failed")); + UNUSED_VARIABLE_VALUE(bRc); + bRc = osCloseHandle(p->aRegion[i].hMap); + OSTRACE(("SHM-PURGE-CLOSE pid=%lu, region=%d, rc=%s\n", + osGetCurrentProcessId(), i, bRc ? 
"ok" : "failed")); + UNUSED_VARIABLE_VALUE(bRc); + } + if( p->hFile.h!=NULL && p->hFile.h!=INVALID_HANDLE_VALUE ){ + SimulateIOErrorBenign(1); + winClose((sqlite3_file *)&p->hFile); + SimulateIOErrorBenign(0); + } + if( deleteFlag ){ + SimulateIOErrorBenign(1); + sqlite3BeginBenignMalloc(); + winDelete(pVfs, p->zFilename, 0); + sqlite3EndBenignMalloc(); + SimulateIOErrorBenign(0); + } + *pp = p->pNext; + sqlite3_free(p->aRegion); + sqlite3_free(p); + }else{ + pp = &p->pNext; + } + } +} + +/* +** Open the shared-memory area associated with database file pDbFd. +** +** When opening a new shared-memory file, if no other instances of that +** file are currently open, in this process or in other processes, then +** the file must be truncated to zero length or have its header cleared. +*/ +static int winOpenSharedMemory(winFile *pDbFd){ + struct winShm *p; /* The connection to be opened */ + struct winShmNode *pShmNode = 0; /* The underlying mmapped file */ + int rc; /* Result code */ + struct winShmNode *pNew; /* Newly allocated winShmNode */ + int nName; /* Size of zName in bytes */ + + assert( pDbFd->pShm==0 ); /* Not previously opened */ + + /* Allocate space for the new sqlite3_shm object. Also speculatively + ** allocate space for a new winShmNode and filename. + */ + p = sqlite3MallocZero( sizeof(*p) ); + if( p==0 ) return SQLITE_IOERR_NOMEM; + nName = sqlite3Strlen30(pDbFd->zPath); + pNew = sqlite3MallocZero( sizeof(*pShmNode) + nName + 17 ); + if( pNew==0 ){ + sqlite3_free(p); + return SQLITE_IOERR_NOMEM; + } + pNew->zFilename = (char*)&pNew[1]; + sqlite3_snprintf(nName+15, pNew->zFilename, "%s-shm", pDbFd->zPath); + sqlite3FileSuffix3(pDbFd->zPath, pNew->zFilename); + + /* Look to see if there is an existing winShmNode that can be used. + ** If no matching winShmNode currently exists, create a new one. + */ + winShmEnterMutex(); + for(pShmNode = winShmNodeList; pShmNode; pShmNode=pShmNode->pNext){ + /* TBD need to come up with better match here. Perhaps + ** use FILE_ID_BOTH_DIR_INFO Structure. + */ + if( sqlite3StrICmp(pShmNode->zFilename, pNew->zFilename)==0 ) break; + } + if( pShmNode ){ + sqlite3_free(pNew); + }else{ + pShmNode = pNew; + pNew = 0; + ((winFile*)(&pShmNode->hFile))->h = INVALID_HANDLE_VALUE; + pShmNode->pNext = winShmNodeList; + winShmNodeList = pShmNode; + + pShmNode->mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_FAST); + if( pShmNode->mutex==0 ){ + rc = SQLITE_IOERR_NOMEM; + goto shm_open_err; + } + + rc = winOpen(pDbFd->pVfs, + pShmNode->zFilename, /* Name of the file (UTF-8) */ + (sqlite3_file*)&pShmNode->hFile, /* File handle here */ + SQLITE_OPEN_WAL | SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, + 0); + if( SQLITE_OK!=rc ){ + goto shm_open_err; + } + + /* Check to see if another process is holding the dead-man switch. + ** If not, truncate the file to zero length. 
+ */ + if( winShmSystemLock(pShmNode, _SHM_WRLCK, WIN_SHM_DMS, 1)==SQLITE_OK ){ + rc = winTruncate((sqlite3_file *)&pShmNode->hFile, 0); + if( rc!=SQLITE_OK ){ + rc = winLogError(SQLITE_IOERR_SHMOPEN, osGetLastError(), + "winOpenShm", pDbFd->zPath); + } + } + if( rc==SQLITE_OK ){ + winShmSystemLock(pShmNode, _SHM_UNLCK, WIN_SHM_DMS, 1); + rc = winShmSystemLock(pShmNode, _SHM_RDLCK, WIN_SHM_DMS, 1); + } + if( rc ) goto shm_open_err; + } + + /* Make the new connection a child of the winShmNode */ + p->pShmNode = pShmNode; +#if defined(SQLITE_DEBUG) || defined(SQLITE_HAVE_OS_TRACE) + p->id = pShmNode->nextShmId++; +#endif + pShmNode->nRef++; + pDbFd->pShm = p; + winShmLeaveMutex(); + + /* The reference count on pShmNode has already been incremented under + ** the cover of the winShmEnterMutex() mutex and the pointer from the + ** new (struct winShm) object to the pShmNode has been set. All that is + ** left to do is to link the new object into the linked list starting + ** at pShmNode->pFirst. This must be done while holding the pShmNode->mutex + ** mutex. + */ + sqlite3_mutex_enter(pShmNode->mutex); + p->pNext = pShmNode->pFirst; + pShmNode->pFirst = p; + sqlite3_mutex_leave(pShmNode->mutex); + return SQLITE_OK; + + /* Jump here on any error */ +shm_open_err: + winShmSystemLock(pShmNode, _SHM_UNLCK, WIN_SHM_DMS, 1); + winShmPurge(pDbFd->pVfs, 0); /* This call frees pShmNode if required */ + sqlite3_free(p); + sqlite3_free(pNew); + winShmLeaveMutex(); + return rc; +} + +/* +** Close a connection to shared-memory. Delete the underlying +** storage if deleteFlag is true. +*/ +static int winShmUnmap( + sqlite3_file *fd, /* Database holding shared memory */ + int deleteFlag /* Delete after closing if true */ +){ + winFile *pDbFd; /* Database holding shared-memory */ + winShm *p; /* The connection to be closed */ + winShmNode *pShmNode; /* The underlying shared-memory file */ + winShm **pp; /* For looping over sibling connections */ + + pDbFd = (winFile*)fd; + p = pDbFd->pShm; + if( p==0 ) return SQLITE_OK; + pShmNode = p->pShmNode; + + /* Remove connection p from the set of connections associated + ** with pShmNode */ + sqlite3_mutex_enter(pShmNode->mutex); + for(pp=&pShmNode->pFirst; (*pp)!=p; pp = &(*pp)->pNext){} + *pp = p->pNext; + + /* Free the connection p */ + sqlite3_free(p); + pDbFd->pShm = 0; + sqlite3_mutex_leave(pShmNode->mutex); + + /* If pShmNode->nRef has reached 0, then close the underlying + ** shared-memory file, too */ + winShmEnterMutex(); + assert( pShmNode->nRef>0 ); + pShmNode->nRef--; + if( pShmNode->nRef==0 ){ + winShmPurge(pDbFd->pVfs, deleteFlag); + } + winShmLeaveMutex(); + + return SQLITE_OK; +} + +/* +** Change the lock state for a shared-memory segment. 
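+** The ofst and n arguments select a contiguous range of the
+** SQLITE_SHM_NLOCK locking slots, and flags must combine exactly one of
+** SQLITE_SHM_LOCK or SQLITE_SHM_UNLOCK with exactly one of
+** SQLITE_SHM_SHARED or SQLITE_SHM_EXCLUSIVE, as enforced by the assert()
+** statements below.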
+*/ +static int winShmLock( + sqlite3_file *fd, /* Database file holding the shared memory */ + int ofst, /* First lock to acquire or release */ + int n, /* Number of locks to acquire or release */ + int flags /* What to do with the lock */ +){ + winFile *pDbFd = (winFile*)fd; /* Connection holding shared memory */ + winShm *p = pDbFd->pShm; /* The shared memory being locked */ + winShm *pX; /* For looping over all siblings */ + winShmNode *pShmNode = p->pShmNode; + int rc = SQLITE_OK; /* Result code */ + u16 mask; /* Mask of locks to take or release */ + + assert( ofst>=0 && ofst+n<=SQLITE_SHM_NLOCK ); + assert( n>=1 ); + assert( flags==(SQLITE_SHM_LOCK | SQLITE_SHM_SHARED) + || flags==(SQLITE_SHM_LOCK | SQLITE_SHM_EXCLUSIVE) + || flags==(SQLITE_SHM_UNLOCK | SQLITE_SHM_SHARED) + || flags==(SQLITE_SHM_UNLOCK | SQLITE_SHM_EXCLUSIVE) ); + assert( n==1 || (flags & SQLITE_SHM_EXCLUSIVE)!=0 ); + + mask = (u16)((1U<<(ofst+n)) - (1U<<ofst)); + assert( n>1 || mask==(1<<ofst) ); + sqlite3_mutex_enter(pShmNode->mutex); + if( flags & SQLITE_SHM_UNLOCK ){ + u16 allMask = 0; /* Mask of locks held by siblings */ + + /* See if any siblings hold this same lock */ + for(pX=pShmNode->pFirst; pX; pX=pX->pNext){ + if( pX==p ) continue; + assert( (pX->exclMask & (p->exclMask|p->sharedMask))==0 ); + allMask |= pX->sharedMask; + } + + /* Unlock the system-level locks */ + if( (mask & allMask)==0 ){ + rc = winShmSystemLock(pShmNode, _SHM_UNLCK, ofst+WIN_SHM_BASE, n); + }else{ + rc = SQLITE_OK; + } + + /* Undo the local locks */ + if( rc==SQLITE_OK ){ + p->exclMask &= ~mask; + p->sharedMask &= ~mask; + } + }else if( flags & SQLITE_SHM_SHARED ){ + u16 allShared = 0; /* Union of locks held by connections other than "p" */ + + /* Find out which shared locks are already held by sibling connections. + ** If any sibling already holds an exclusive lock, go ahead and return + ** SQLITE_BUSY. + */ + for(pX=pShmNode->pFirst; pX; pX=pX->pNext){ + if( (pX->exclMask & mask)!=0 ){ + rc = SQLITE_BUSY; + break; + } + allShared |= pX->sharedMask; + } + + /* Get shared locks at the system level, if necessary */ + if( rc==SQLITE_OK ){ + if( (allShared & mask)==0 ){ + rc = winShmSystemLock(pShmNode, _SHM_RDLCK, ofst+WIN_SHM_BASE, n); + }else{ + rc = SQLITE_OK; + } + } + + /* Get the local shared locks */ + if( rc==SQLITE_OK ){ + p->sharedMask |= mask; + } + }else{ + /* Make sure no sibling connections hold locks that will block this + ** lock. If any do, return SQLITE_BUSY right away. + */ + for(pX=pShmNode->pFirst; pX; pX=pX->pNext){ + if( (pX->exclMask & mask)!=0 || (pX->sharedMask & mask)!=0 ){ + rc = SQLITE_BUSY; + break; + } + } + + /* Get the exclusive locks at the system level. Then if successful + ** also mark the local connection as being locked. + */ + if( rc==SQLITE_OK ){ + rc = winShmSystemLock(pShmNode, _SHM_WRLCK, ofst+WIN_SHM_BASE, n); + if( rc==SQLITE_OK ){ + assert( (p->sharedMask & mask)==0 ); + p->exclMask |= mask; + } + } + } + sqlite3_mutex_leave(pShmNode->mutex); + OSTRACE(("SHM-LOCK pid=%lu, id=%d, sharedMask=%03x, exclMask=%03x, rc=%s\n", + osGetCurrentProcessId(), p->id, p->sharedMask, p->exclMask, + sqlite3ErrName(rc))); + return rc; +} + +/* +** Implement a memory barrier or memory fence on shared memory. +** +** All loads and stores begun before the barrier must complete before +** any load or store begun after the barrier. 
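+** Here the barrier is provided by sqlite3MemoryBarrier() together with an
+** acquire-and-release of the static VFS mutex, the latter serving as a
+** redundant fallback where no true memory-barrier primitive is available.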
+*/ +static void winShmBarrier( + sqlite3_file *fd /* Database holding the shared memory */ +){ + UNUSED_PARAMETER(fd); + sqlite3MemoryBarrier(); /* compiler-defined memory barrier */ + winShmEnterMutex(); /* Also mutex, for redundancy */ + winShmLeaveMutex(); +} + +/* +** This function is called to obtain a pointer to region iRegion of the +** shared-memory associated with the database file fd. Shared-memory regions +** are numbered starting from zero. Each shared-memory region is szRegion +** bytes in size. +** +** If an error occurs, an error code is returned and *pp is set to NULL. +** +** Otherwise, if the isWrite parameter is 0 and the requested shared-memory +** region has not been allocated (by any client, including one running in a +** separate process), then *pp is set to NULL and SQLITE_OK returned. If +** isWrite is non-zero and the requested shared-memory region has not yet +** been allocated, it is allocated by this function. +** +** If the shared-memory region has already been allocated or is allocated by +** this call as described above, then it is mapped into this processes +** address space (if it is not already), *pp is set to point to the mapped +** memory and SQLITE_OK returned. +*/ +static int winShmMap( + sqlite3_file *fd, /* Handle open on database file */ + int iRegion, /* Region to retrieve */ + int szRegion, /* Size of regions */ + int isWrite, /* True to extend file if necessary */ + void volatile **pp /* OUT: Mapped memory */ +){ + winFile *pDbFd = (winFile*)fd; + winShm *pShm = pDbFd->pShm; + winShmNode *pShmNode; + int rc = SQLITE_OK; + + if( !pShm ){ + rc = winOpenSharedMemory(pDbFd); + if( rc!=SQLITE_OK ) return rc; + pShm = pDbFd->pShm; + } + pShmNode = pShm->pShmNode; + + sqlite3_mutex_enter(pShmNode->mutex); + assert( szRegion==pShmNode->szRegion || pShmNode->nRegion==0 ); + + if( pShmNode->nRegion<=iRegion ){ + struct ShmRegion *apNew; /* New aRegion[] array */ + int nByte = (iRegion+1)*szRegion; /* Minimum required file size */ + sqlite3_int64 sz; /* Current size of wal-index file */ + + pShmNode->szRegion = szRegion; + + /* The requested region is not mapped into this processes address space. + ** Check to see if it has been allocated (i.e. if the wal-index file is + ** large enough to contain the requested region). + */ + rc = winFileSize((sqlite3_file *)&pShmNode->hFile, &sz); + if( rc!=SQLITE_OK ){ + rc = winLogError(SQLITE_IOERR_SHMSIZE, osGetLastError(), + "winShmMap1", pDbFd->zPath); + goto shmpage_out; + } + + if( sz<nByte ){ + /* The requested memory region does not exist. If isWrite is set to + ** zero, exit early. *pp will be set to NULL and SQLITE_OK returned. + ** + ** Alternatively, if isWrite is non-zero, use ftruncate() to allocate + ** the requested memory region. + */ + if( !isWrite ) goto shmpage_out; + rc = winTruncate((sqlite3_file *)&pShmNode->hFile, nByte); + if( rc!=SQLITE_OK ){ + rc = winLogError(SQLITE_IOERR_SHMSIZE, osGetLastError(), + "winShmMap2", pDbFd->zPath); + goto shmpage_out; + } + } + + /* Map the requested memory region into this processes address space. 
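+ ** MapViewOfFile() requires the mapped file offset to be a multiple of the
+ ** system allocation granularity, so the offset is rounded down by
+ ** iOffsetShift here and the same shift is added back when the region
+ ** pointer is handed out at shmpage_out below.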
*/ + apNew = (struct ShmRegion *)sqlite3_realloc64( + pShmNode->aRegion, (iRegion+1)*sizeof(apNew[0]) + ); + if( !apNew ){ + rc = SQLITE_IOERR_NOMEM; + goto shmpage_out; + } + pShmNode->aRegion = apNew; + + while( pShmNode->nRegion<=iRegion ){ + HANDLE hMap = NULL; /* file-mapping handle */ + void *pMap = 0; /* Mapped memory region */ + +#if SQLITE_OS_WINRT + hMap = osCreateFileMappingFromApp(pShmNode->hFile.h, + NULL, PAGE_READWRITE, nByte, NULL + ); +#elif defined(SQLITE_WIN32_HAS_WIDE) + hMap = osCreateFileMappingW(pShmNode->hFile.h, + NULL, PAGE_READWRITE, 0, nByte, NULL + ); +#elif defined(SQLITE_WIN32_HAS_ANSI) && SQLITE_WIN32_CREATEFILEMAPPINGA + hMap = osCreateFileMappingA(pShmNode->hFile.h, + NULL, PAGE_READWRITE, 0, nByte, NULL + ); +#endif + OSTRACE(("SHM-MAP-CREATE pid=%lu, region=%d, size=%d, rc=%s\n", + osGetCurrentProcessId(), pShmNode->nRegion, nByte, + hMap ? "ok" : "failed")); + if( hMap ){ + int iOffset = pShmNode->nRegion*szRegion; + int iOffsetShift = iOffset % winSysInfo.dwAllocationGranularity; +#if SQLITE_OS_WINRT + pMap = osMapViewOfFileFromApp(hMap, FILE_MAP_WRITE | FILE_MAP_READ, + iOffset - iOffsetShift, szRegion + iOffsetShift + ); +#else + pMap = osMapViewOfFile(hMap, FILE_MAP_WRITE | FILE_MAP_READ, + 0, iOffset - iOffsetShift, szRegion + iOffsetShift + ); +#endif + OSTRACE(("SHM-MAP-MAP pid=%lu, region=%d, offset=%d, size=%d, rc=%s\n", + osGetCurrentProcessId(), pShmNode->nRegion, iOffset, + szRegion, pMap ? "ok" : "failed")); + } + if( !pMap ){ + pShmNode->lastErrno = osGetLastError(); + rc = winLogError(SQLITE_IOERR_SHMMAP, pShmNode->lastErrno, + "winShmMap3", pDbFd->zPath); + if( hMap ) osCloseHandle(hMap); + goto shmpage_out; + } + + pShmNode->aRegion[pShmNode->nRegion].pMap = pMap; + pShmNode->aRegion[pShmNode->nRegion].hMap = hMap; + pShmNode->nRegion++; + } + } + +shmpage_out: + if( pShmNode->nRegion>iRegion ){ + int iOffset = iRegion*szRegion; + int iOffsetShift = iOffset % winSysInfo.dwAllocationGranularity; + char *p = (char *)pShmNode->aRegion[iRegion].pMap; + *pp = (void *)&p[iOffsetShift]; + }else{ + *pp = 0; + } + sqlite3_mutex_leave(pShmNode->mutex); + return rc; +} + +#else +# define winShmMap 0 +# define winShmLock 0 +# define winShmBarrier 0 +# define winShmUnmap 0 +#endif /* #ifndef SQLITE_OMIT_WAL */ + +/* +** Cleans up the mapped region of the specified file, if any. 
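+** Both the mapped view (pMapRegion) and the file-mapping handle (hMap) are
+** released, and the recorded mapping sizes are reset to zero so that later
+** reads and writes fall back to the ordinary ReadFile()/WriteFile() paths.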
+*/ +#if SQLITE_MAX_MMAP_SIZE>0 +static int winUnmapfile(winFile *pFile){ + assert( pFile!=0 ); + OSTRACE(("UNMAP-FILE pid=%lu, pFile=%p, hMap=%p, pMapRegion=%p, " + "mmapSize=%lld, mmapSizeActual=%lld, mmapSizeMax=%lld\n", + osGetCurrentProcessId(), pFile, pFile->hMap, pFile->pMapRegion, + pFile->mmapSize, pFile->mmapSizeActual, pFile->mmapSizeMax)); + if( pFile->pMapRegion ){ + if( !osUnmapViewOfFile(pFile->pMapRegion) ){ + pFile->lastErrno = osGetLastError(); + OSTRACE(("UNMAP-FILE pid=%lu, pFile=%p, pMapRegion=%p, " + "rc=SQLITE_IOERR_MMAP\n", osGetCurrentProcessId(), pFile, + pFile->pMapRegion)); + return winLogError(SQLITE_IOERR_MMAP, pFile->lastErrno, + "winUnmapfile1", pFile->zPath); + } + pFile->pMapRegion = 0; + pFile->mmapSize = 0; + pFile->mmapSizeActual = 0; + } + if( pFile->hMap!=NULL ){ + if( !osCloseHandle(pFile->hMap) ){ + pFile->lastErrno = osGetLastError(); + OSTRACE(("UNMAP-FILE pid=%lu, pFile=%p, hMap=%p, rc=SQLITE_IOERR_MMAP\n", + osGetCurrentProcessId(), pFile, pFile->hMap)); + return winLogError(SQLITE_IOERR_MMAP, pFile->lastErrno, + "winUnmapfile2", pFile->zPath); + } + pFile->hMap = NULL; + } + OSTRACE(("UNMAP-FILE pid=%lu, pFile=%p, rc=SQLITE_OK\n", + osGetCurrentProcessId(), pFile)); + return SQLITE_OK; +} + +/* +** Memory map or remap the file opened by file-descriptor pFd (if the file +** is already mapped, the existing mapping is replaced by the new). Or, if +** there already exists a mapping for this file, and there are still +** outstanding xFetch() references to it, this function is a no-op. +** +** If parameter nByte is non-negative, then it is the requested size of +** the mapping to create. Otherwise, if nByte is less than zero, then the +** requested size is the size of the file on disk. The actual size of the +** created mapping is either the requested size or the value configured +** using SQLITE_FCNTL_MMAP_SIZE, whichever is smaller. +** +** SQLITE_OK is returned if no error occurs (even if the mapping is not +** recreated as a result of outstanding references) or an SQLite error +** code otherwise. 
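+** For example, a call of winMapfile(pFd, -1) maps up to mmapSizeMax bytes
+** of the file's current on-disk size, rounded down to a multiple of the
+** system page size (see the winSysInfo.dwPageSize mask below).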
+*/ +static int winMapfile(winFile *pFd, sqlite3_int64 nByte){ + sqlite3_int64 nMap = nByte; + int rc; + + assert( nMap>=0 || pFd->nFetchOut==0 ); + OSTRACE(("MAP-FILE pid=%lu, pFile=%p, size=%lld\n", + osGetCurrentProcessId(), pFd, nByte)); + + if( pFd->nFetchOut>0 ) return SQLITE_OK; + + if( nMap<0 ){ + rc = winFileSize((sqlite3_file*)pFd, &nMap); + if( rc ){ + OSTRACE(("MAP-FILE pid=%lu, pFile=%p, rc=SQLITE_IOERR_FSTAT\n", + osGetCurrentProcessId(), pFd)); + return SQLITE_IOERR_FSTAT; + } + } + if( nMap>pFd->mmapSizeMax ){ + nMap = pFd->mmapSizeMax; + } + nMap &= ~(sqlite3_int64)(winSysInfo.dwPageSize - 1); + + if( nMap==0 && pFd->mmapSize>0 ){ + winUnmapfile(pFd); + } + if( nMap!=pFd->mmapSize ){ + void *pNew = 0; + DWORD protect = PAGE_READONLY; + DWORD flags = FILE_MAP_READ; + + winUnmapfile(pFd); +#ifdef SQLITE_MMAP_READWRITE + if( (pFd->ctrlFlags & WINFILE_RDONLY)==0 ){ + protect = PAGE_READWRITE; + flags |= FILE_MAP_WRITE; + } +#endif +#if SQLITE_OS_WINRT + pFd->hMap = osCreateFileMappingFromApp(pFd->h, NULL, protect, nMap, NULL); +#elif defined(SQLITE_WIN32_HAS_WIDE) + pFd->hMap = osCreateFileMappingW(pFd->h, NULL, protect, + (DWORD)((nMap>>32) & 0xffffffff), + (DWORD)(nMap & 0xffffffff), NULL); +#elif defined(SQLITE_WIN32_HAS_ANSI) && SQLITE_WIN32_CREATEFILEMAPPINGA + pFd->hMap = osCreateFileMappingA(pFd->h, NULL, protect, + (DWORD)((nMap>>32) & 0xffffffff), + (DWORD)(nMap & 0xffffffff), NULL); +#endif + if( pFd->hMap==NULL ){ + pFd->lastErrno = osGetLastError(); + rc = winLogError(SQLITE_IOERR_MMAP, pFd->lastErrno, + "winMapfile1", pFd->zPath); + /* Log the error, but continue normal operation using xRead/xWrite */ + OSTRACE(("MAP-FILE-CREATE pid=%lu, pFile=%p, rc=%s\n", + osGetCurrentProcessId(), pFd, sqlite3ErrName(rc))); + return SQLITE_OK; + } + assert( (nMap % winSysInfo.dwPageSize)==0 ); + assert( sizeof(SIZE_T)==sizeof(sqlite3_int64) || nMap<=0xffffffff ); +#if SQLITE_OS_WINRT + pNew = osMapViewOfFileFromApp(pFd->hMap, flags, 0, (SIZE_T)nMap); +#else + pNew = osMapViewOfFile(pFd->hMap, flags, 0, 0, (SIZE_T)nMap); +#endif + if( pNew==NULL ){ + osCloseHandle(pFd->hMap); + pFd->hMap = NULL; + pFd->lastErrno = osGetLastError(); + rc = winLogError(SQLITE_IOERR_MMAP, pFd->lastErrno, + "winMapfile2", pFd->zPath); + /* Log the error, but continue normal operation using xRead/xWrite */ + OSTRACE(("MAP-FILE-MAP pid=%lu, pFile=%p, rc=%s\n", + osGetCurrentProcessId(), pFd, sqlite3ErrName(rc))); + return SQLITE_OK; + } + pFd->pMapRegion = pNew; + pFd->mmapSize = nMap; + pFd->mmapSizeActual = nMap; + } + + OSTRACE(("MAP-FILE pid=%lu, pFile=%p, rc=SQLITE_OK\n", + osGetCurrentProcessId(), pFd)); + return SQLITE_OK; +} +#endif /* SQLITE_MAX_MMAP_SIZE>0 */ + +/* +** If possible, return a pointer to a mapping of file fd starting at offset +** iOff. The mapping must be valid for at least nAmt bytes. +** +** If such a pointer can be obtained, store it in *pp and return SQLITE_OK. +** Or, if one cannot but no error occurs, set *pp to 0 and return SQLITE_OK. +** Finally, if an error does occur, return an SQLite error code. The final +** value of *pp is undefined in this case. +** +** If this function does return a pointer, the caller must eventually +** release the reference by calling winUnfetch(). 
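+** A minimal caller sketch (for illustration only; the real callers are in
+** the pager layer):
+**
+**   void *p = 0;
+**   if( winFetch(fd, iOff, nAmt, &p)==SQLITE_OK && p ){
+**     ...read up to nAmt bytes starting at p...
+**     winUnfetch(fd, iOff, p);
+**   }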
+*/
+static int winFetch(sqlite3_file *fd, i64 iOff, int nAmt, void **pp){
+#if SQLITE_MAX_MMAP_SIZE>0
+  winFile *pFd = (winFile*)fd;   /* The underlying database file */
+#endif
+  *pp = 0;
+
+  OSTRACE(("FETCH pid=%lu, pFile=%p, offset=%lld, amount=%d, pp=%p\n",
+           osGetCurrentProcessId(), fd, iOff, nAmt, pp));
+
+#if SQLITE_MAX_MMAP_SIZE>0
+  if( pFd->mmapSizeMax>0 ){
+    if( pFd->pMapRegion==0 ){
+      int rc = winMapfile(pFd, -1);
+      if( rc!=SQLITE_OK ){
+        OSTRACE(("FETCH pid=%lu, pFile=%p, rc=%s\n",
+                 osGetCurrentProcessId(), pFd, sqlite3ErrName(rc)));
+        return rc;
+      }
+    }
+    if( pFd->mmapSize >= iOff+nAmt ){
+      *pp = &((u8 *)pFd->pMapRegion)[iOff];
+      pFd->nFetchOut++;
+    }
+  }
+#endif
+
+  OSTRACE(("FETCH pid=%lu, pFile=%p, pp=%p, *pp=%p, rc=SQLITE_OK\n",
+           osGetCurrentProcessId(), fd, pp, *pp));
+  return SQLITE_OK;
+}
+
+/*
+** If the third argument is non-NULL, then this function releases a
+** reference obtained by an earlier call to winFetch(). The second
+** argument passed to this function must be the same as the corresponding
+** argument that was passed to the winFetch() invocation.
+**
+** Or, if the third argument is NULL, then this function is being called
+** to inform the VFS layer that, according to POSIX, any existing mapping
+** may now be invalid and should be unmapped.
+*/
+static int winUnfetch(sqlite3_file *fd, i64 iOff, void *p){
+#if SQLITE_MAX_MMAP_SIZE>0
+  winFile *pFd = (winFile*)fd;   /* The underlying database file */
+
+  /* If p==0 (unmap the entire file) then there must be no outstanding
+  ** xFetch references. Or, if p!=0 (meaning it is an xFetch reference),
+  ** then there must be at least one outstanding. */
+  assert( (p==0)==(pFd->nFetchOut==0) );
+
+  /* If p!=0, it must match the iOff value. */
+  assert( p==0 || p==&((u8 *)pFd->pMapRegion)[iOff] );
+
+  OSTRACE(("UNFETCH pid=%lu, pFile=%p, offset=%lld, p=%p\n",
+           osGetCurrentProcessId(), pFd, iOff, p));
+
+  if( p ){
+    pFd->nFetchOut--;
+  }else{
+    /* FIXME: If Windows truly always prevents truncating or deleting a
+    ** file while a mapping is held, then the following winUnmapfile() call
+    ** is unnecessary and can be omitted - potentially improving
+    ** performance. */
+    winUnmapfile(pFd);
+  }
+
+  assert( pFd->nFetchOut>=0 );
+#endif
+
+  OSTRACE(("UNFETCH pid=%lu, pFile=%p, rc=SQLITE_OK\n",
+           osGetCurrentProcessId(), fd));
+  return SQLITE_OK;
+}
+
+/*
+** Here ends the implementation of all sqlite3_file methods.
+**
+********************** End sqlite3_file Methods *******************************
+******************************************************************************/

 /*
 ** This vector defines all the methods that can operate on an
 ** sqlite3_file for win32.
*/ static const sqlite3_io_methods winIoMethod = { - 1, /* iVersion */ - winClose, - winRead, - winWrite, - winTruncate, - winSync, - winFileSize, - winLock, - winUnlock, - winCheckReservedLock, - winFileControl, - winSectorSize, - winDeviceCharacteristics + 3, /* iVersion */ + winClose, /* xClose */ + winRead, /* xRead */ + winWrite, /* xWrite */ + winTruncate, /* xTruncate */ + winSync, /* xSync */ + winFileSize, /* xFileSize */ + winLock, /* xLock */ + winUnlock, /* xUnlock */ + winCheckReservedLock, /* xCheckReservedLock */ + winFileControl, /* xFileControl */ + winSectorSize, /* xSectorSize */ + winDeviceCharacteristics, /* xDeviceCharacteristics */ + winShmMap, /* xShmMap */ + winShmLock, /* xShmLock */ + winShmBarrier, /* xShmBarrier */ + winShmUnmap, /* xShmUnmap */ + winFetch, /* xFetch */ + winUnfetch /* xUnfetch */ }; -/*************************************************************************** -** Here ends the I/O methods that form the sqlite3_io_methods object. +/**************************************************************************** +**************************** sqlite3_vfs methods **************************** ** -** The next block of code implements the VFS methods. -****************************************************************************/ +** This division contains the implementation of methods on the +** sqlite3_vfs object. +*/ + +#if defined(__CYGWIN__) +/* +** Convert a filename from whatever the underlying operating system +** supports for filenames into UTF-8. Space to hold the result is +** obtained from malloc and must be freed by the calling function. +*/ +static char *winConvertToUtf8Filename(const void *zFilename){ + char *zConverted = 0; + if( osIsNT() ){ + zConverted = winUnicodeToUtf8(zFilename); + } +#ifdef SQLITE_WIN32_HAS_ANSI + else{ + zConverted = sqlite3_win32_mbcs_to_utf8(zFilename); + } +#endif + /* caller will handle out of memory */ + return zConverted; +} +#endif /* ** Convert a UTF-8 filename into whatever form the underlying ** operating system wants filenames in. Space to hold the result ** is obtained from malloc and must be freed by the calling ** function. */ -static void *convertUtf8Filename(const char *zFilename){ +static void *winConvertFromUtf8Filename(const char *zFilename){ void *zConverted = 0; - if( isNT() ){ - zConverted = utf8ToUnicode(zFilename); -/* isNT() is 1 if SQLITE_OS_WINCE==1, so this else is never executed. -*/ -#if SQLITE_OS_WINCE==0 - }else{ - zConverted = utf8ToMbcs(zFilename); + if( osIsNT() ){ + zConverted = winUtf8ToUnicode(zFilename); + } +#ifdef SQLITE_WIN32_HAS_ANSI + else{ + zConverted = sqlite3_win32_utf8_to_mbcs(zFilename); + } #endif - } /* caller will handle out of memory */ return zConverted; } /* -** Create a temporary file name in zBuf. zBuf must be big enough to -** hold at pVfs->mxPathname characters. +** This function returns non-zero if the specified UTF-8 string buffer +** ends with a directory separator character or one was successfully +** added to it. */ -static int getTempname(int nBuf, char *zBuf){ +static int winMakeEndInDirSep(int nBuf, char *zBuf){ + if( zBuf ){ + int nLen = sqlite3Strlen30(zBuf); + if( nLen>0 ){ + if( winIsDirSep(zBuf[nLen-1]) ){ + return 1; + }else if( nLen+1<nBuf ){ + zBuf[nLen] = winGetDirSep(); + zBuf[nLen+1] = '\0'; + return 1; + } + } + } + return 0; +} + +/* +** Create a temporary file name and store the resulting pointer into pzBuf. +** The pointer returned in pzBuf must be freed via sqlite3_free(). 
+*/ +static int winGetTempname(sqlite3_vfs *pVfs, char **pzBuf){ static char zChars[] = "abcdefghijklmnopqrstuvwxyz" "ABCDEFGHIJKLMNOPQRSTUVWXYZ" "0123456789"; size_t i, j; - char zTempPath[MAX_PATH+1]; + int nPre = sqlite3Strlen30(SQLITE_TEMP_FILE_PREFIX); + int nMax, nBuf, nDir, nLen; + char *zBuf; + + /* It's odd to simulate an io-error here, but really this is just + ** using the io-error infrastructure to test that SQLite handles this + ** function failing. + */ + SimulateIOError( return SQLITE_IOERR ); + + /* Allocate a temporary buffer to store the fully qualified file + ** name for the temporary file. If this fails, we cannot continue. + */ + nMax = pVfs->mxPathname; nBuf = nMax + 2; + zBuf = sqlite3MallocZero( nBuf ); + if( !zBuf ){ + OSTRACE(("TEMP-FILENAME rc=SQLITE_IOERR_NOMEM\n")); + return SQLITE_IOERR_NOMEM; + } + + /* Figure out the effective temporary directory. First, check if one + ** has been explicitly set by the application; otherwise, use the one + ** configured by the operating system. + */ + nDir = nMax - (nPre + 15); + assert( nDir>0 ); if( sqlite3_temp_directory ){ - sqlite3_snprintf(MAX_PATH-30, zTempPath, "%s", sqlite3_temp_directory); - }else if( isNT() ){ + int nDirLen = sqlite3Strlen30(sqlite3_temp_directory); + if( nDirLen>0 ){ + if( !winIsDirSep(sqlite3_temp_directory[nDirLen-1]) ){ + nDirLen++; + } + if( nDirLen>nDir ){ + sqlite3_free(zBuf); + OSTRACE(("TEMP-FILENAME rc=SQLITE_ERROR\n")); + return winLogError(SQLITE_ERROR, 0, "winGetTempname1", 0); + } + sqlite3_snprintf(nMax, zBuf, "%s", sqlite3_temp_directory); + } + } +#if defined(__CYGWIN__) + else{ + static const char *azDirs[] = { + 0, /* getenv("SQLITE_TMPDIR") */ + 0, /* getenv("TMPDIR") */ + 0, /* getenv("TMP") */ + 0, /* getenv("TEMP") */ + 0, /* getenv("USERPROFILE") */ + "/var/tmp", + "/usr/tmp", + "/tmp", + ".", + 0 /* List terminator */ + }; + unsigned int i; + const char *zDir = 0; + + if( !azDirs[0] ) azDirs[0] = getenv("SQLITE_TMPDIR"); + if( !azDirs[1] ) azDirs[1] = getenv("TMPDIR"); + if( !azDirs[2] ) azDirs[2] = getenv("TMP"); + if( !azDirs[3] ) azDirs[3] = getenv("TEMP"); + if( !azDirs[4] ) azDirs[4] = getenv("USERPROFILE"); + for(i=0; i<sizeof(azDirs)/sizeof(azDirs[0]); zDir=azDirs[i++]){ + void *zConverted; + if( zDir==0 ) continue; + /* If the path starts with a drive letter followed by the colon + ** character, assume it is already a native Win32 path; otherwise, + ** it must be converted to a native Win32 path via the Cygwin API + ** prior to using it. + */ + if( winIsDriveLetterAndColon(zDir) ){ + zConverted = winConvertFromUtf8Filename(zDir); + if( !zConverted ){ + sqlite3_free(zBuf); + OSTRACE(("TEMP-FILENAME rc=SQLITE_IOERR_NOMEM\n")); + return SQLITE_IOERR_NOMEM; + } + if( winIsDir(zConverted) ){ + sqlite3_snprintf(nMax, zBuf, "%s", zDir); + sqlite3_free(zConverted); + break; + } + sqlite3_free(zConverted); + }else{ + zConverted = sqlite3MallocZero( nMax+1 ); + if( !zConverted ){ + sqlite3_free(zBuf); + OSTRACE(("TEMP-FILENAME rc=SQLITE_IOERR_NOMEM\n")); + return SQLITE_IOERR_NOMEM; + } + if( cygwin_conv_path( + osIsNT() ? CCP_POSIX_TO_WIN_W : CCP_POSIX_TO_WIN_A, zDir, + zConverted, nMax+1)<0 ){ + sqlite3_free(zConverted); + sqlite3_free(zBuf); + OSTRACE(("TEMP-FILENAME rc=SQLITE_IOERR_CONVPATH\n")); + return winLogError(SQLITE_IOERR_CONVPATH, (DWORD)errno, + "winGetTempname2", zDir); + } + if( winIsDir(zConverted) ){ + /* At this point, we know the candidate directory exists and should + ** be used. 
However, we may need to convert the string containing + ** its name into UTF-8 (i.e. if it is UTF-16 right now). + */ + char *zUtf8 = winConvertToUtf8Filename(zConverted); + if( !zUtf8 ){ + sqlite3_free(zConverted); + sqlite3_free(zBuf); + OSTRACE(("TEMP-FILENAME rc=SQLITE_IOERR_NOMEM\n")); + return SQLITE_IOERR_NOMEM; + } + sqlite3_snprintf(nMax, zBuf, "%s", zUtf8); + sqlite3_free(zUtf8); + sqlite3_free(zConverted); + break; + } + sqlite3_free(zConverted); + } + } + } +#elif !SQLITE_OS_WINRT && !defined(__CYGWIN__) + else if( osIsNT() ){ char *zMulti; - WCHAR zWidePath[MAX_PATH]; - GetTempPathW(MAX_PATH-30, zWidePath); - zMulti = unicodeToUtf8(zWidePath); + LPWSTR zWidePath = sqlite3MallocZero( nMax*sizeof(WCHAR) ); + if( !zWidePath ){ + sqlite3_free(zBuf); + OSTRACE(("TEMP-FILENAME rc=SQLITE_IOERR_NOMEM\n")); + return SQLITE_IOERR_NOMEM; + } + if( osGetTempPathW(nMax, zWidePath)==0 ){ + sqlite3_free(zWidePath); + sqlite3_free(zBuf); + OSTRACE(("TEMP-FILENAME rc=SQLITE_IOERR_GETTEMPPATH\n")); + return winLogError(SQLITE_IOERR_GETTEMPPATH, osGetLastError(), + "winGetTempname2", 0); + } + zMulti = winUnicodeToUtf8(zWidePath); if( zMulti ){ - sqlite3_snprintf(MAX_PATH-30, zTempPath, "%s", zMulti); - free(zMulti); - }else{ - return SQLITE_NOMEM; - } -/* isNT() is 1 if SQLITE_OS_WINCE==1, so this else is never executed. -** Since the ASCII version of these Windows API do not exist for WINCE, -** it's important to not reference them for WINCE builds. -*/ -#if SQLITE_OS_WINCE==0 - }else{ + sqlite3_snprintf(nMax, zBuf, "%s", zMulti); + sqlite3_free(zMulti); + sqlite3_free(zWidePath); + }else{ + sqlite3_free(zWidePath); + sqlite3_free(zBuf); + OSTRACE(("TEMP-FILENAME rc=SQLITE_IOERR_NOMEM\n")); + return SQLITE_IOERR_NOMEM; + } + } +#ifdef SQLITE_WIN32_HAS_ANSI + else{ char *zUtf8; - char zMbcsPath[MAX_PATH]; - GetTempPathA(MAX_PATH-30, zMbcsPath); + char *zMbcsPath = sqlite3MallocZero( nMax ); + if( !zMbcsPath ){ + sqlite3_free(zBuf); + OSTRACE(("TEMP-FILENAME rc=SQLITE_IOERR_NOMEM\n")); + return SQLITE_IOERR_NOMEM; + } + if( osGetTempPathA(nMax, zMbcsPath)==0 ){ + sqlite3_free(zBuf); + OSTRACE(("TEMP-FILENAME rc=SQLITE_IOERR_GETTEMPPATH\n")); + return winLogError(SQLITE_IOERR_GETTEMPPATH, osGetLastError(), + "winGetTempname3", 0); + } zUtf8 = sqlite3_win32_mbcs_to_utf8(zMbcsPath); if( zUtf8 ){ - sqlite3_snprintf(MAX_PATH-30, zTempPath, "%s", zUtf8); - free(zUtf8); + sqlite3_snprintf(nMax, zBuf, "%s", zUtf8); + sqlite3_free(zUtf8); }else{ - return SQLITE_NOMEM; - } -#endif - } - for(i=sqlite3Strlen30(zTempPath); i>0 && zTempPath[i-1]=='\\'; i--){} - zTempPath[i] = 0; - sqlite3_snprintf(nBuf-30, zBuf, - "%s\\"SQLITE_TEMP_FILE_PREFIX, zTempPath); + sqlite3_free(zBuf); + OSTRACE(("TEMP-FILENAME rc=SQLITE_IOERR_NOMEM\n")); + return SQLITE_IOERR_NOMEM; + } + } +#endif /* SQLITE_WIN32_HAS_ANSI */ +#endif /* !SQLITE_OS_WINRT */ + + /* + ** Check to make sure the temporary directory ends with an appropriate + ** separator. If it does not and there is not enough space left to add + ** one, fail. + */ + if( !winMakeEndInDirSep(nDir+1, zBuf) ){ + sqlite3_free(zBuf); + OSTRACE(("TEMP-FILENAME rc=SQLITE_ERROR\n")); + return winLogError(SQLITE_ERROR, 0, "winGetTempname4", 0); + } + + /* + ** Check that the output buffer is large enough for the temporary file + ** name in the following format: + ** + ** "<temporary_directory>/etilqs_XXXXXXXXXXXXXXX\0\0" + ** + ** If not, return SQLITE_ERROR. 
The number 17 is used here in order to + ** account for the space used by the 15 character random suffix and the + ** two trailing NUL characters. The final directory separator character + ** has already added if it was not already present. + */ + nLen = sqlite3Strlen30(zBuf); + if( (nLen + nPre + 17) > nBuf ){ + sqlite3_free(zBuf); + OSTRACE(("TEMP-FILENAME rc=SQLITE_ERROR\n")); + return winLogError(SQLITE_ERROR, 0, "winGetTempname5", 0); + } + + sqlite3_snprintf(nBuf-16-nLen, zBuf+nLen, SQLITE_TEMP_FILE_PREFIX); + j = sqlite3Strlen30(zBuf); - sqlite3_randomness(20, &zBuf[j]); - for(i=0; i<20; i++, j++){ + sqlite3_randomness(15, &zBuf[j]); + for(i=0; i<15; i++, j++){ zBuf[j] = (char)zChars[ ((unsigned char)zBuf[j])%(sizeof(zChars)-1) ]; } zBuf[j] = 0; - OSTRACE2("TEMP FILENAME: %s\n", zBuf); - return SQLITE_OK; + zBuf[j+1] = 0; + *pzBuf = zBuf; + + OSTRACE(("TEMP-FILENAME name=%s, rc=SQLITE_OK\n", zBuf)); + return SQLITE_OK; } /* -** The return value of getLastErrorMsg -** is zero if the error message fits in the buffer, or non-zero -** otherwise (if the message was truncated). -*/ -static int getLastErrorMsg(int nBuf, char *zBuf){ - /* FormatMessage returns 0 on failure. Otherwise it - ** returns the number of TCHARs written to the output - ** buffer, excluding the terminating null char. - */ - DWORD error = GetLastError(); - DWORD dwLen = 0; - char *zOut = 0; - - if( isNT() ){ - WCHAR *zTempWide = NULL; - dwLen = FormatMessageW(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS, - NULL, - error, - 0, - (LPWSTR) &zTempWide, - 0, - 0); - if( dwLen > 0 ){ - /* allocate a buffer and convert to UTF8 */ - zOut = unicodeToUtf8(zTempWide); - /* free the system buffer allocated by FormatMessage */ - LocalFree(zTempWide); - } -/* isNT() is 1 if SQLITE_OS_WINCE==1, so this else is never executed. -** Since the ASCII version of these Windows API do not exist for WINCE, -** it's important to not reference them for WINCE builds. -*/ +** Return TRUE if the named file is really a directory. Return false if +** it is something other than a directory, or if there is any kind of memory +** allocation failure. +*/ +static int winIsDir(const void *zConverted){ + DWORD attr; + int rc = 0; + DWORD lastErrno; + + if( osIsNT() ){ + int cnt = 0; + WIN32_FILE_ATTRIBUTE_DATA sAttrData; + memset(&sAttrData, 0, sizeof(sAttrData)); + while( !(rc = osGetFileAttributesExW((LPCWSTR)zConverted, + GetFileExInfoStandard, + &sAttrData)) && winRetryIoerr(&cnt, &lastErrno) ){} + if( !rc ){ + return 0; /* Invalid name? */ + } + attr = sAttrData.dwFileAttributes; #if SQLITE_OS_WINCE==0 }else{ - char *zTemp = NULL; - dwLen = FormatMessageA(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS, - NULL, - error, - 0, - (LPSTR) &zTemp, - 0, - 0); - if( dwLen > 0 ){ - /* allocate a buffer and convert to UTF8 */ - zOut = sqlite3_win32_mbcs_to_utf8(zTemp); - /* free the system buffer allocated by FormatMessage */ - LocalFree(zTemp); - } + attr = osGetFileAttributesA((char*)zConverted); #endif } - if( 0 == dwLen ){ - sqlite3_snprintf(nBuf, zBuf, "OsError 0x%x (%u)", error, error); - }else{ - /* copy a maximum of nBuf chars to output buffer */ - sqlite3_snprintf(nBuf, zBuf, "%s", zOut); - /* free the UTF8 buffer */ - free(zOut); - } - return 0; + return (attr!=INVALID_FILE_ATTRIBUTES) && (attr&FILE_ATTRIBUTE_DIRECTORY); } /* ** Open a file. 
*/ static int winOpen( - sqlite3_vfs *pVfs, /* Not used */ + sqlite3_vfs *pVfs, /* Used to get maximum path name length */ const char *zName, /* Name of the file (UTF-8) */ sqlite3_file *id, /* Write the SQLite file handle here */ int flags, /* Open mode flags */ int *pOutFlags /* Status return flags */ ){ HANDLE h; + DWORD lastErrno = 0; DWORD dwDesiredAccess; DWORD dwShareMode; DWORD dwCreationDisposition; DWORD dwFlagsAndAttributes = 0; #if SQLITE_OS_WINCE int isTemp = 0; #endif winFile *pFile = (winFile*)id; - void *zConverted; /* Filename in OS encoding */ - const char *zUtf8Name = zName; /* Filename in UTF-8 encoding */ - char zTmpname[MAX_PATH+1]; /* Buffer used to create temp filename */ - - assert( id!=0 ); - UNUSED_PARAMETER(pVfs); - - /* If the second argument to this function is NULL, generate a - ** temporary file name to use + void *zConverted; /* Filename in OS encoding */ + const char *zUtf8Name = zName; /* Filename in UTF-8 encoding */ + int cnt = 0; + + /* If argument zPath is a NULL pointer, this function is required to open + ** a temporary file. Use this buffer to store the file name in. + */ + char *zTmpname = 0; /* For temporary filename, if necessary. */ + + int rc = SQLITE_OK; /* Function Return Code */ +#if !defined(NDEBUG) || SQLITE_OS_WINCE + int eType = flags&0xFFFFFF00; /* Type of file to open */ +#endif + + int isExclusive = (flags & SQLITE_OPEN_EXCLUSIVE); + int isDelete = (flags & SQLITE_OPEN_DELETEONCLOSE); + int isCreate = (flags & SQLITE_OPEN_CREATE); + int isReadonly = (flags & SQLITE_OPEN_READONLY); + int isReadWrite = (flags & SQLITE_OPEN_READWRITE); + +#ifndef NDEBUG + int isOpenJournal = (isCreate && ( + eType==SQLITE_OPEN_MASTER_JOURNAL + || eType==SQLITE_OPEN_MAIN_JOURNAL + || eType==SQLITE_OPEN_WAL + )); +#endif + + OSTRACE(("OPEN name=%s, pFile=%p, flags=%x, pOutFlags=%p\n", + zUtf8Name, id, flags, pOutFlags)); + + /* Check the following statements are true: + ** + ** (a) Exactly one of the READWRITE and READONLY flags must be set, and + ** (b) if CREATE is set, then READWRITE must also be set, and + ** (c) if EXCLUSIVE is set, then CREATE must also be set. + ** (d) if DELETEONCLOSE is set, then CREATE must also be set. + */ + assert((isReadonly==0 || isReadWrite==0) && (isReadWrite || isReadonly)); + assert(isCreate==0 || isReadWrite); + assert(isExclusive==0 || isCreate); + assert(isDelete==0 || isCreate); + + /* The main DB, main journal, WAL file and master journal are never + ** automatically deleted. Nor are they ever temporary files. */ + assert( (!isDelete && zName) || eType!=SQLITE_OPEN_MAIN_DB ); + assert( (!isDelete && zName) || eType!=SQLITE_OPEN_MAIN_JOURNAL ); + assert( (!isDelete && zName) || eType!=SQLITE_OPEN_MASTER_JOURNAL ); + assert( (!isDelete && zName) || eType!=SQLITE_OPEN_WAL ); + + /* Assert that the upper layer has set one of the "file-type" flags. 
*/ + assert( eType==SQLITE_OPEN_MAIN_DB || eType==SQLITE_OPEN_TEMP_DB + || eType==SQLITE_OPEN_MAIN_JOURNAL || eType==SQLITE_OPEN_TEMP_JOURNAL + || eType==SQLITE_OPEN_SUBJOURNAL || eType==SQLITE_OPEN_MASTER_JOURNAL + || eType==SQLITE_OPEN_TRANSIENT_DB || eType==SQLITE_OPEN_WAL + ); + + assert( pFile!=0 ); + memset(pFile, 0, sizeof(winFile)); + pFile->h = INVALID_HANDLE_VALUE; + +#if SQLITE_OS_WINRT + if( !zUtf8Name && !sqlite3_temp_directory ){ + sqlite3_log(SQLITE_ERROR, + "sqlite3_temp_directory variable should be set for WinRT"); + } +#endif + + /* If the second argument to this function is NULL, generate a + ** temporary file name to use */ if( !zUtf8Name ){ - int rc = getTempname(MAX_PATH+1, zTmpname); + assert( isDelete && !isOpenJournal ); + rc = winGetTempname(pVfs, &zTmpname); if( rc!=SQLITE_OK ){ + OSTRACE(("OPEN name=%s, rc=%s", zUtf8Name, sqlite3ErrName(rc))); return rc; } zUtf8Name = zTmpname; } + /* Database filenames are double-zero terminated if they are not + ** URIs with parameters. Hence, they can always be passed into + ** sqlite3_uri_parameter(). + */ + assert( (eType!=SQLITE_OPEN_MAIN_DB) || (flags & SQLITE_OPEN_URI) || + zUtf8Name[sqlite3Strlen30(zUtf8Name)+1]==0 ); + /* Convert the filename to the system encoding. */ - zConverted = convertUtf8Filename(zUtf8Name); + zConverted = winConvertFromUtf8Filename(zUtf8Name); if( zConverted==0 ){ - return SQLITE_NOMEM; + sqlite3_free(zTmpname); + OSTRACE(("OPEN name=%s, rc=SQLITE_IOERR_NOMEM", zUtf8Name)); + return SQLITE_IOERR_NOMEM; } - if( flags & SQLITE_OPEN_READWRITE ){ + if( winIsDir(zConverted) ){ + sqlite3_free(zConverted); + sqlite3_free(zTmpname); + OSTRACE(("OPEN name=%s, rc=SQLITE_CANTOPEN_ISDIR", zUtf8Name)); + return SQLITE_CANTOPEN_ISDIR; + } + + if( isReadWrite ){ dwDesiredAccess = GENERIC_READ | GENERIC_WRITE; }else{ dwDesiredAccess = GENERIC_READ; } - /* SQLITE_OPEN_EXCLUSIVE is used to make sure that a new file is - ** created. SQLite doesn't use it to indicate "exclusive access" + + /* SQLITE_OPEN_EXCLUSIVE is used to make sure that a new file is + ** created. SQLite doesn't use it to indicate "exclusive access" ** as it is usually understood. */ - assert(!(flags & SQLITE_OPEN_EXCLUSIVE) || (flags & SQLITE_OPEN_CREATE)); - if( flags & SQLITE_OPEN_EXCLUSIVE ){ + if( isExclusive ){ /* Creates a new file, only if it does not already exist. */ /* If the file exists, it fails. */ dwCreationDisposition = CREATE_NEW; - }else if( flags & SQLITE_OPEN_CREATE ){ + }else if( isCreate ){ /* Open existing file, or create if it doesn't exist */ dwCreationDisposition = OPEN_ALWAYS; }else{ /* Opens a file, only if it exists. */ dwCreationDisposition = OPEN_EXISTING; } + dwShareMode = FILE_SHARE_READ | FILE_SHARE_WRITE; - if( flags & SQLITE_OPEN_DELETEONCLOSE ){ + + if( isDelete ){ #if SQLITE_OS_WINCE dwFlagsAndAttributes = FILE_ATTRIBUTE_HIDDEN; isTemp = 1; #else dwFlagsAndAttributes = FILE_ATTRIBUTE_TEMPORARY @@ -29590,174 +39845,368 @@ /* Reports from the internet are that performance is always ** better if FILE_FLAG_RANDOM_ACCESS is used. Ticket #2699. */ #if SQLITE_OS_WINCE dwFlagsAndAttributes |= FILE_FLAG_RANDOM_ACCESS; #endif - if( isNT() ){ - h = CreateFileW((WCHAR*)zConverted, - dwDesiredAccess, - dwShareMode, - NULL, - dwCreationDisposition, - dwFlagsAndAttributes, - NULL - ); -/* isNT() is 1 if SQLITE_OS_WINCE==1, so this else is never executed. -** Since the ASCII version of these Windows API do not exist for WINCE, -** it's important to not reference them for WINCE builds. 
-*/ -#if SQLITE_OS_WINCE==0 - }else{ - h = CreateFileA((char*)zConverted, - dwDesiredAccess, - dwShareMode, - NULL, - dwCreationDisposition, - dwFlagsAndAttributes, - NULL - ); + + if( osIsNT() ){ +#if SQLITE_OS_WINRT + CREATEFILE2_EXTENDED_PARAMETERS extendedParameters; + extendedParameters.dwSize = sizeof(CREATEFILE2_EXTENDED_PARAMETERS); + extendedParameters.dwFileAttributes = + dwFlagsAndAttributes & FILE_ATTRIBUTE_MASK; + extendedParameters.dwFileFlags = dwFlagsAndAttributes & FILE_FLAG_MASK; + extendedParameters.dwSecurityQosFlags = SECURITY_ANONYMOUS; + extendedParameters.lpSecurityAttributes = NULL; + extendedParameters.hTemplateFile = NULL; + while( (h = osCreateFile2((LPCWSTR)zConverted, + dwDesiredAccess, + dwShareMode, + dwCreationDisposition, + &extendedParameters))==INVALID_HANDLE_VALUE && + winRetryIoerr(&cnt, &lastErrno) ){ + /* Noop */ + } +#else + while( (h = osCreateFileW((LPCWSTR)zConverted, + dwDesiredAccess, + dwShareMode, NULL, + dwCreationDisposition, + dwFlagsAndAttributes, + NULL))==INVALID_HANDLE_VALUE && + winRetryIoerr(&cnt, &lastErrno) ){ + /* Noop */ + } #endif } +#ifdef SQLITE_WIN32_HAS_ANSI + else{ + while( (h = osCreateFileA((LPCSTR)zConverted, + dwDesiredAccess, + dwShareMode, NULL, + dwCreationDisposition, + dwFlagsAndAttributes, + NULL))==INVALID_HANDLE_VALUE && + winRetryIoerr(&cnt, &lastErrno) ){ + /* Noop */ + } + } +#endif + winLogIoerr(cnt, __LINE__); + + OSTRACE(("OPEN file=%p, name=%s, access=%lx, rc=%s\n", h, zUtf8Name, + dwDesiredAccess, (h==INVALID_HANDLE_VALUE) ? "failed" : "ok")); + if( h==INVALID_HANDLE_VALUE ){ - free(zConverted); - if( flags & SQLITE_OPEN_READWRITE ){ - return winOpen(pVfs, zName, id, - ((flags|SQLITE_OPEN_READONLY)&~SQLITE_OPEN_READWRITE), pOutFlags); + pFile->lastErrno = lastErrno; + winLogError(SQLITE_CANTOPEN, pFile->lastErrno, "winOpen", zUtf8Name); + sqlite3_free(zConverted); + sqlite3_free(zTmpname); + if( isReadWrite && !isExclusive ){ + return winOpen(pVfs, zName, id, + ((flags|SQLITE_OPEN_READONLY) & + ~(SQLITE_OPEN_CREATE|SQLITE_OPEN_READWRITE)), + pOutFlags); }else{ return SQLITE_CANTOPEN_BKPT; } } + if( pOutFlags ){ - if( flags & SQLITE_OPEN_READWRITE ){ + if( isReadWrite ){ *pOutFlags = SQLITE_OPEN_READWRITE; }else{ *pOutFlags = SQLITE_OPEN_READONLY; } } - memset(pFile, 0, sizeof(*pFile)); - pFile->pMethod = &winIoMethod; - pFile->h = h; - pFile->lastErrno = NO_ERROR; - pFile->sectorSize = getSectorSize(pVfs, zUtf8Name); + + OSTRACE(("OPEN file=%p, name=%s, access=%lx, pOutFlags=%p, *pOutFlags=%d, " + "rc=%s\n", h, zUtf8Name, dwDesiredAccess, pOutFlags, pOutFlags ? + *pOutFlags : 0, (h==INVALID_HANDLE_VALUE) ? 
"failed" : "ok")); + #if SQLITE_OS_WINCE - if( (flags & (SQLITE_OPEN_READWRITE|SQLITE_OPEN_MAIN_DB)) == - (SQLITE_OPEN_READWRITE|SQLITE_OPEN_MAIN_DB) - && !winceCreateLock(zName, pFile) + if( isReadWrite && eType==SQLITE_OPEN_MAIN_DB + && (rc = winceCreateLock(zName, pFile))!=SQLITE_OK ){ - CloseHandle(h); - free(zConverted); - return SQLITE_CANTOPEN_BKPT; + osCloseHandle(h); + sqlite3_free(zConverted); + sqlite3_free(zTmpname); + OSTRACE(("OPEN-CE-LOCK name=%s, rc=%s\n", zName, sqlite3ErrName(rc))); + return rc; } if( isTemp ){ pFile->zDeleteOnClose = zConverted; }else #endif { - free(zConverted); + sqlite3_free(zConverted); } + + sqlite3_free(zTmpname); + pFile->pMethod = &winIoMethod; + pFile->pVfs = pVfs; + pFile->h = h; + if( isReadonly ){ + pFile->ctrlFlags |= WINFILE_RDONLY; + } + if( sqlite3_uri_boolean(zName, "psow", SQLITE_POWERSAFE_OVERWRITE) ){ + pFile->ctrlFlags |= WINFILE_PSOW; + } + pFile->lastErrno = NO_ERROR; + pFile->zPath = zName; +#if SQLITE_MAX_MMAP_SIZE>0 + pFile->hMap = NULL; + pFile->pMapRegion = 0; + pFile->mmapSize = 0; + pFile->mmapSizeActual = 0; + pFile->mmapSizeMax = sqlite3GlobalConfig.szMmap; +#endif + OpenCounter(+1); - return SQLITE_OK; + return rc; } /* ** Delete the named file. ** -** Note that windows does not allow a file to be deleted if some other +** Note that Windows does not allow a file to be deleted if some other ** process has it open. Sometimes a virus scanner or indexing program ** will open a journal file shortly after it is created in order to do ** whatever it does. While this other process is holding the ** file open, we will be unable to delete it. To work around this ** problem, we delay 100 milliseconds and try to delete again. Up ** to MX_DELETION_ATTEMPTs deletion attempts are run before giving ** up and returning an error. */ -#define MX_DELETION_ATTEMPTS 5 static int winDelete( sqlite3_vfs *pVfs, /* Not used on win32 */ const char *zFilename, /* Name of file to delete */ int syncDir /* Not used on win32 */ ){ int cnt = 0; - DWORD rc; - DWORD error = 0; - void *zConverted = convertUtf8Filename(zFilename); + int rc; + DWORD attr; + DWORD lastErrno = 0; + void *zConverted; UNUSED_PARAMETER(pVfs); UNUSED_PARAMETER(syncDir); - if( zConverted==0 ){ - return SQLITE_NOMEM; - } + SimulateIOError(return SQLITE_IOERR_DELETE); - if( isNT() ){ - do{ - DeleteFileW(zConverted); - }while( ( ((rc = GetFileAttributesW(zConverted)) != INVALID_FILE_ATTRIBUTES) - || ((error = GetLastError()) == ERROR_ACCESS_DENIED)) - && (++cnt < MX_DELETION_ATTEMPTS) - && (Sleep(100), 1) ); -/* isNT() is 1 if SQLITE_OS_WINCE==1, so this else is never executed. -** Since the ASCII version of these Windows API do not exist for WINCE, -** it's important to not reference them for WINCE builds. -*/ -#if SQLITE_OS_WINCE==0 - }else{ - do{ - DeleteFileA(zConverted); - }while( ( ((rc = GetFileAttributesA(zConverted)) != INVALID_FILE_ATTRIBUTES) - || ((error = GetLastError()) == ERROR_ACCESS_DENIED)) - && (++cnt < MX_DELETION_ATTEMPTS) - && (Sleep(100), 1) ); -#endif - } - free(zConverted); - OSTRACE2("DELETE \"%s\"\n", zFilename); - return ( (rc == INVALID_FILE_ATTRIBUTES) - && (error == ERROR_FILE_NOT_FOUND)) ? SQLITE_OK : SQLITE_IOERR_DELETE; -} - -/* -** Check the existance and status of a file. 
+ OSTRACE(("DELETE name=%s, syncDir=%d\n", zFilename, syncDir)); + + zConverted = winConvertFromUtf8Filename(zFilename); + if( zConverted==0 ){ + OSTRACE(("DELETE name=%s, rc=SQLITE_IOERR_NOMEM\n", zFilename)); + return SQLITE_IOERR_NOMEM; + } + if( osIsNT() ){ + do { +#if SQLITE_OS_WINRT + WIN32_FILE_ATTRIBUTE_DATA sAttrData; + memset(&sAttrData, 0, sizeof(sAttrData)); + if ( osGetFileAttributesExW(zConverted, GetFileExInfoStandard, + &sAttrData) ){ + attr = sAttrData.dwFileAttributes; + }else{ + lastErrno = osGetLastError(); + if( lastErrno==ERROR_FILE_NOT_FOUND + || lastErrno==ERROR_PATH_NOT_FOUND ){ + rc = SQLITE_IOERR_DELETE_NOENT; /* Already gone? */ + }else{ + rc = SQLITE_ERROR; + } + break; + } +#else + attr = osGetFileAttributesW(zConverted); +#endif + if ( attr==INVALID_FILE_ATTRIBUTES ){ + lastErrno = osGetLastError(); + if( lastErrno==ERROR_FILE_NOT_FOUND + || lastErrno==ERROR_PATH_NOT_FOUND ){ + rc = SQLITE_IOERR_DELETE_NOENT; /* Already gone? */ + }else{ + rc = SQLITE_ERROR; + } + break; + } + if ( attr&FILE_ATTRIBUTE_DIRECTORY ){ + rc = SQLITE_ERROR; /* Files only. */ + break; + } + if ( osDeleteFileW(zConverted) ){ + rc = SQLITE_OK; /* Deleted OK. */ + break; + } + if ( !winRetryIoerr(&cnt, &lastErrno) ){ + rc = SQLITE_ERROR; /* No more retries. */ + break; + } + } while(1); + } +#ifdef SQLITE_WIN32_HAS_ANSI + else{ + do { + attr = osGetFileAttributesA(zConverted); + if ( attr==INVALID_FILE_ATTRIBUTES ){ + lastErrno = osGetLastError(); + if( lastErrno==ERROR_FILE_NOT_FOUND + || lastErrno==ERROR_PATH_NOT_FOUND ){ + rc = SQLITE_IOERR_DELETE_NOENT; /* Already gone? */ + }else{ + rc = SQLITE_ERROR; + } + break; + } + if ( attr&FILE_ATTRIBUTE_DIRECTORY ){ + rc = SQLITE_ERROR; /* Files only. */ + break; + } + if ( osDeleteFileA(zConverted) ){ + rc = SQLITE_OK; /* Deleted OK. */ + break; + } + if ( !winRetryIoerr(&cnt, &lastErrno) ){ + rc = SQLITE_ERROR; /* No more retries. */ + break; + } + } while(1); + } +#endif + if( rc && rc!=SQLITE_IOERR_DELETE_NOENT ){ + rc = winLogError(SQLITE_IOERR_DELETE, lastErrno, "winDelete", zFilename); + }else{ + winLogIoerr(cnt, __LINE__); + } + sqlite3_free(zConverted); + OSTRACE(("DELETE name=%s, rc=%s\n", zFilename, sqlite3ErrName(rc))); + return rc; +} + +/* +** Check the existence and status of a file. */ static int winAccess( sqlite3_vfs *pVfs, /* Not used on win32 */ const char *zFilename, /* Name of file to check */ int flags, /* Type of test to make on this file */ int *pResOut /* OUT: Result */ ){ DWORD attr; int rc = 0; - void *zConverted = convertUtf8Filename(zFilename); + DWORD lastErrno = 0; + void *zConverted; UNUSED_PARAMETER(pVfs); + + SimulateIOError( return SQLITE_IOERR_ACCESS; ); + OSTRACE(("ACCESS name=%s, flags=%x, pResOut=%p\n", + zFilename, flags, pResOut)); + + zConverted = winConvertFromUtf8Filename(zFilename); if( zConverted==0 ){ - return SQLITE_NOMEM; - } - if( isNT() ){ - attr = GetFileAttributesW((WCHAR*)zConverted); -/* isNT() is 1 if SQLITE_OS_WINCE==1, so this else is never executed. -** Since the ASCII version of these Windows API do not exist for WINCE, -** it's important to not reference them for WINCE builds. 
-*/ -#if SQLITE_OS_WINCE==0 - }else{ - attr = GetFileAttributesA((char*)zConverted); -#endif - } - free(zConverted); + OSTRACE(("ACCESS name=%s, rc=SQLITE_IOERR_NOMEM\n", zFilename)); + return SQLITE_IOERR_NOMEM; + } + if( osIsNT() ){ + int cnt = 0; + WIN32_FILE_ATTRIBUTE_DATA sAttrData; + memset(&sAttrData, 0, sizeof(sAttrData)); + while( !(rc = osGetFileAttributesExW((LPCWSTR)zConverted, + GetFileExInfoStandard, + &sAttrData)) && winRetryIoerr(&cnt, &lastErrno) ){} + if( rc ){ + /* For an SQLITE_ACCESS_EXISTS query, treat a zero-length file + ** as if it does not exist. + */ + if( flags==SQLITE_ACCESS_EXISTS + && sAttrData.nFileSizeHigh==0 + && sAttrData.nFileSizeLow==0 ){ + attr = INVALID_FILE_ATTRIBUTES; + }else{ + attr = sAttrData.dwFileAttributes; + } + }else{ + winLogIoerr(cnt, __LINE__); + if( lastErrno!=ERROR_FILE_NOT_FOUND && lastErrno!=ERROR_PATH_NOT_FOUND ){ + sqlite3_free(zConverted); + return winLogError(SQLITE_IOERR_ACCESS, lastErrno, "winAccess", + zFilename); + }else{ + attr = INVALID_FILE_ATTRIBUTES; + } + } + } +#ifdef SQLITE_WIN32_HAS_ANSI + else{ + attr = osGetFileAttributesA((char*)zConverted); + } +#endif + sqlite3_free(zConverted); switch( flags ){ case SQLITE_ACCESS_READ: case SQLITE_ACCESS_EXISTS: rc = attr!=INVALID_FILE_ATTRIBUTES; break; case SQLITE_ACCESS_READWRITE: - rc = (attr & FILE_ATTRIBUTE_READONLY)==0; + rc = attr!=INVALID_FILE_ATTRIBUTES && + (attr & FILE_ATTRIBUTE_READONLY)==0; break; default: assert(!"Invalid flags argument"); } *pResOut = rc; + OSTRACE(("ACCESS name=%s, pResOut=%p, *pResOut=%d, rc=SQLITE_OK\n", + zFilename, pResOut, *pResOut)); return SQLITE_OK; } +/* +** Returns non-zero if the specified path name starts with a drive letter +** followed by a colon character. +*/ +static BOOL winIsDriveLetterAndColon( + const char *zPathname +){ + return ( sqlite3Isalpha(zPathname[0]) && zPathname[1]==':' ); +} + +/* +** Returns non-zero if the specified path name should be used verbatim. If +** non-zero is returned from this function, the calling function must simply +** use the provided path name verbatim -OR- resolve it into a full path name +** using the GetFullPathName Win32 API function (if available). +*/ +static BOOL winIsVerbatimPathname( + const char *zPathname +){ + /* + ** If the path name starts with a forward slash or a backslash, it is either + ** a legal UNC name, a volume relative path, or an absolute path name in the + ** "Unix" format on Windows. There is no easy way to differentiate between + ** the final two cases; therefore, we return the safer return value of TRUE + ** so that callers of this function will simply use it verbatim. + */ + if ( winIsDirSep(zPathname[0]) ){ + return TRUE; + } + + /* + ** If the path name starts with a letter and a colon it is either a volume + ** relative path or an absolute path. Callers of this function must not + ** attempt to treat it as a relative path name (i.e. they should simply use + ** it verbatim). + */ + if ( winIsDriveLetterAndColon(zPathname) ){ + return TRUE; + } + + /* + ** If we get to this point, the path name should almost certainly be a purely + ** relative one (i.e. not a UNC name, not absolute, and not volume relative). + */ + return FALSE; +} /* ** Turn a relative pathname into a full pathname. Write the full ** pathname into zOut[]. zOut[] will be at least pVfs->mxPathname ** bytes in size. 
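
The two helpers above encode a simple rule: a Windows path name that begins
with a directory separator, or with a drive letter followed by a colon, is
used verbatim, while anything else is treated as a relative name to be
resolved (against sqlite3_data_directory if it is set, otherwise against the
current directory via GetFullPathName). The standalone sketch below restates
that rule outside of SQLite for illustration; the "sketch*" names are
hypothetical and do not exist in the SQLite sources.

#include <ctype.h>
#include <stdio.h>

/* Return non-zero if c is a Windows directory separator. */
static int sketchIsDirSep(char c){ return c=='/' || c=='\\'; }

/* Classify a path the same way winIsVerbatimPathname() does:
** separator-prefixed names and "X:" names are used verbatim,
** everything else is a purely relative name. */
static int sketchIsVerbatimPath(const char *zPath){
  if( sketchIsDirSep(zPath[0]) ) return 1;     /* UNC, volume-relative, or "Unix"-style absolute */
  if( isalpha((unsigned char)zPath[0]) && zPath[1]==':' ) return 1;  /* drive letter plus colon */
  return 0;                                    /* purely relative name */
}

int main(void){
  const char *azSample[] = {
    "C:\\data\\test.db",           /* absolute, drive letter */
    "\\\\server\\share\\test.db",  /* UNC name */
    "/tmp/test.db",                /* "Unix"-style absolute */
    "subdir/test.db"               /* relative */
  };
  unsigned int i;
  for(i=0; i<sizeof(azSample)/sizeof(azSample[0]); i++){
    printf("%-26s -> %s\n", azSample[i],
           sketchIsVerbatimPath(azSample[i]) ? "use verbatim" : "resolve relative");
  }
  return 0;
}

Compiled on its own, the sketch prints which of the sample names a
winFullPathname()-style resolver would take verbatim and which it would
resolve as relative.
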
@@ -29766,187 +40215,244 @@ sqlite3_vfs *pVfs, /* Pointer to vfs object */ const char *zRelative, /* Possibly relative input path */ int nFull, /* Size of output buffer in bytes */ char *zFull /* Output buffer */ ){ - -#if defined(__CYGWIN__) - UNUSED_PARAMETER(nFull); - cygwin_conv_to_full_win32_path(zRelative, zFull); - return SQLITE_OK; -#endif - -#if SQLITE_OS_WINCE - UNUSED_PARAMETER(nFull); - /* WinCE has no concept of a relative pathname, or so I am told. */ - sqlite3_snprintf(pVfs->mxPathname, zFull, "%s", zRelative); - return SQLITE_OK; -#endif - -#if !SQLITE_OS_WINCE && !defined(__CYGWIN__) - int nByte; + +#if defined(__CYGWIN__) + SimulateIOError( return SQLITE_ERROR ); + UNUSED_PARAMETER(nFull); + assert( nFull>=pVfs->mxPathname ); + if ( sqlite3_data_directory && !winIsVerbatimPathname(zRelative) ){ + /* + ** NOTE: We are dealing with a relative path name and the data + ** directory has been set. Therefore, use it as the basis + ** for converting the relative path name to an absolute + ** one by prepending the data directory and a slash. + */ + char *zOut = sqlite3MallocZero( pVfs->mxPathname+1 ); + if( !zOut ){ + return SQLITE_IOERR_NOMEM; + } + if( cygwin_conv_path( + (osIsNT() ? CCP_POSIX_TO_WIN_W : CCP_POSIX_TO_WIN_A) | + CCP_RELATIVE, zRelative, zOut, pVfs->mxPathname+1)<0 ){ + sqlite3_free(zOut); + return winLogError(SQLITE_CANTOPEN_CONVPATH, (DWORD)errno, + "winFullPathname1", zRelative); + }else{ + char *zUtf8 = winConvertToUtf8Filename(zOut); + if( !zUtf8 ){ + sqlite3_free(zOut); + return SQLITE_IOERR_NOMEM; + } + sqlite3_snprintf(MIN(nFull, pVfs->mxPathname), zFull, "%s%c%s", + sqlite3_data_directory, winGetDirSep(), zUtf8); + sqlite3_free(zUtf8); + sqlite3_free(zOut); + } + }else{ + char *zOut = sqlite3MallocZero( pVfs->mxPathname+1 ); + if( !zOut ){ + return SQLITE_IOERR_NOMEM; + } + if( cygwin_conv_path( + (osIsNT() ? CCP_POSIX_TO_WIN_W : CCP_POSIX_TO_WIN_A), + zRelative, zOut, pVfs->mxPathname+1)<0 ){ + sqlite3_free(zOut); + return winLogError(SQLITE_CANTOPEN_CONVPATH, (DWORD)errno, + "winFullPathname2", zRelative); + }else{ + char *zUtf8 = winConvertToUtf8Filename(zOut); + if( !zUtf8 ){ + sqlite3_free(zOut); + return SQLITE_IOERR_NOMEM; + } + sqlite3_snprintf(MIN(nFull, pVfs->mxPathname), zFull, "%s", zUtf8); + sqlite3_free(zUtf8); + sqlite3_free(zOut); + } + } + return SQLITE_OK; +#endif + +#if (SQLITE_OS_WINCE || SQLITE_OS_WINRT) && !defined(__CYGWIN__) + SimulateIOError( return SQLITE_ERROR ); + /* WinCE has no concept of a relative pathname, or so I am told. */ + /* WinRT has no way to convert a relative path to an absolute one. */ + if ( sqlite3_data_directory && !winIsVerbatimPathname(zRelative) ){ + /* + ** NOTE: We are dealing with a relative path name and the data + ** directory has been set. Therefore, use it as the basis + ** for converting the relative path name to an absolute + ** one by prepending the data directory and a backslash. 
+ */ + sqlite3_snprintf(MIN(nFull, pVfs->mxPathname), zFull, "%s%c%s", + sqlite3_data_directory, winGetDirSep(), zRelative); + }else{ + sqlite3_snprintf(MIN(nFull, pVfs->mxPathname), zFull, "%s", zRelative); + } + return SQLITE_OK; +#endif + +#if !SQLITE_OS_WINCE && !SQLITE_OS_WINRT && !defined(__CYGWIN__) + DWORD nByte; void *zConverted; char *zOut; - UNUSED_PARAMETER(nFull); - zConverted = convertUtf8Filename(zRelative); - if( isNT() ){ - WCHAR *zTemp; - nByte = GetFullPathNameW((WCHAR*)zConverted, 0, 0, 0) + 3; - zTemp = malloc( nByte*sizeof(zTemp[0]) ); + + /* If this path name begins with "/X:", where "X" is any alphabetic + ** character, discard the initial "/" from the pathname. + */ + if( zRelative[0]=='/' && winIsDriveLetterAndColon(zRelative+1) ){ + zRelative++; + } + + /* It's odd to simulate an io-error here, but really this is just + ** using the io-error infrastructure to test that SQLite handles this + ** function failing. This function could fail if, for example, the + ** current working directory has been unlinked. + */ + SimulateIOError( return SQLITE_ERROR ); + if ( sqlite3_data_directory && !winIsVerbatimPathname(zRelative) ){ + /* + ** NOTE: We are dealing with a relative path name and the data + ** directory has been set. Therefore, use it as the basis + ** for converting the relative path name to an absolute + ** one by prepending the data directory and a backslash. + */ + sqlite3_snprintf(MIN(nFull, pVfs->mxPathname), zFull, "%s%c%s", + sqlite3_data_directory, winGetDirSep(), zRelative); + return SQLITE_OK; + } + zConverted = winConvertFromUtf8Filename(zRelative); + if( zConverted==0 ){ + return SQLITE_IOERR_NOMEM; + } + if( osIsNT() ){ + LPWSTR zTemp; + nByte = osGetFullPathNameW((LPCWSTR)zConverted, 0, 0, 0); + if( nByte==0 ){ + sqlite3_free(zConverted); + return winLogError(SQLITE_CANTOPEN_FULLPATH, osGetLastError(), + "winFullPathname1", zRelative); + } + nByte += 3; + zTemp = sqlite3MallocZero( nByte*sizeof(zTemp[0]) ); if( zTemp==0 ){ - free(zConverted); - return SQLITE_NOMEM; - } - GetFullPathNameW((WCHAR*)zConverted, nByte, zTemp, 0); - free(zConverted); - zOut = unicodeToUtf8(zTemp); - free(zTemp); -/* isNT() is 1 if SQLITE_OS_WINCE==1, so this else is never executed. -** Since the ASCII version of these Windows API do not exist for WINCE, -** it's important to not reference them for WINCE builds. 
-*/ -#if SQLITE_OS_WINCE==0 - }else{ + sqlite3_free(zConverted); + return SQLITE_IOERR_NOMEM; + } + nByte = osGetFullPathNameW((LPCWSTR)zConverted, nByte, zTemp, 0); + if( nByte==0 ){ + sqlite3_free(zConverted); + sqlite3_free(zTemp); + return winLogError(SQLITE_CANTOPEN_FULLPATH, osGetLastError(), + "winFullPathname2", zRelative); + } + sqlite3_free(zConverted); + zOut = winUnicodeToUtf8(zTemp); + sqlite3_free(zTemp); + } +#ifdef SQLITE_WIN32_HAS_ANSI + else{ char *zTemp; - nByte = GetFullPathNameA((char*)zConverted, 0, 0, 0) + 3; - zTemp = malloc( nByte*sizeof(zTemp[0]) ); + nByte = osGetFullPathNameA((char*)zConverted, 0, 0, 0); + if( nByte==0 ){ + sqlite3_free(zConverted); + return winLogError(SQLITE_CANTOPEN_FULLPATH, osGetLastError(), + "winFullPathname3", zRelative); + } + nByte += 3; + zTemp = sqlite3MallocZero( nByte*sizeof(zTemp[0]) ); if( zTemp==0 ){ - free(zConverted); - return SQLITE_NOMEM; + sqlite3_free(zConverted); + return SQLITE_IOERR_NOMEM; } - GetFullPathNameA((char*)zConverted, nByte, zTemp, 0); - free(zConverted); + nByte = osGetFullPathNameA((char*)zConverted, nByte, zTemp, 0); + if( nByte==0 ){ + sqlite3_free(zConverted); + sqlite3_free(zTemp); + return winLogError(SQLITE_CANTOPEN_FULLPATH, osGetLastError(), + "winFullPathname4", zRelative); + } + sqlite3_free(zConverted); zOut = sqlite3_win32_mbcs_to_utf8(zTemp); - free(zTemp); + sqlite3_free(zTemp); + } #endif - } if( zOut ){ - sqlite3_snprintf(pVfs->mxPathname, zFull, "%s", zOut); - free(zOut); + sqlite3_snprintf(MIN(nFull, pVfs->mxPathname), zFull, "%s", zOut); + sqlite3_free(zOut); return SQLITE_OK; }else{ - return SQLITE_NOMEM; - } -#endif -} - -/* -** Get the sector size of the device used to store -** file. -*/ -static int getSectorSize( - sqlite3_vfs *pVfs, - const char *zRelative /* UTF-8 file name */ -){ - DWORD bytesPerSector = SQLITE_DEFAULT_SECTOR_SIZE; - /* GetDiskFreeSpace is not supported under WINCE */ -#if SQLITE_OS_WINCE - UNUSED_PARAMETER(pVfs); - UNUSED_PARAMETER(zRelative); -#else - char zFullpath[MAX_PATH+1]; - int rc; - DWORD dwRet = 0; - DWORD dwDummy; - - /* - ** We need to get the full path name of the file - ** to get the drive letter to look up the sector - ** size. - */ - rc = winFullPathname(pVfs, zRelative, MAX_PATH, zFullpath); - if( rc == SQLITE_OK ) - { - void *zConverted = convertUtf8Filename(zFullpath); - if( zConverted ){ - if( isNT() ){ - /* trim path to just drive reference */ - WCHAR *p = zConverted; - for(;*p;p++){ - if( *p == '\\' ){ - *p = '\0'; - break; - } - } - dwRet = GetDiskFreeSpaceW((WCHAR*)zConverted, - &dwDummy, - &bytesPerSector, - &dwDummy, - &dwDummy); - }else{ - /* trim path to just drive reference */ - char *p = (char *)zConverted; - for(;*p;p++){ - if( *p == '\\' ){ - *p = '\0'; - break; - } - } - dwRet = GetDiskFreeSpaceA((char*)zConverted, - &dwDummy, - &bytesPerSector, - &dwDummy, - &dwDummy); - } - free(zConverted); - } - if( !dwRet ){ - bytesPerSector = SQLITE_DEFAULT_SECTOR_SIZE; - } - } -#endif - return (int) bytesPerSector; + return SQLITE_IOERR_NOMEM; + } +#endif } #ifndef SQLITE_OMIT_LOAD_EXTENSION /* ** Interfaces for opening a shared library, finding entry points ** within the shared library, and closing the shared library. */ -/* -** Interfaces for opening a shared library, finding entry points -** within the shared library, and closing the shared library. 
-*/ static void *winDlOpen(sqlite3_vfs *pVfs, const char *zFilename){ HANDLE h; - void *zConverted = convertUtf8Filename(zFilename); +#if defined(__CYGWIN__) + int nFull = pVfs->mxPathname+1; + char *zFull = sqlite3MallocZero( nFull ); + void *zConverted = 0; + if( zFull==0 ){ + OSTRACE(("DLOPEN name=%s, handle=%p\n", zFilename, (void*)0)); + return 0; + } + if( winFullPathname(pVfs, zFilename, nFull, zFull)!=SQLITE_OK ){ + sqlite3_free(zFull); + OSTRACE(("DLOPEN name=%s, handle=%p\n", zFilename, (void*)0)); + return 0; + } + zConverted = winConvertFromUtf8Filename(zFull); + sqlite3_free(zFull); +#else + void *zConverted = winConvertFromUtf8Filename(zFilename); UNUSED_PARAMETER(pVfs); +#endif if( zConverted==0 ){ + OSTRACE(("DLOPEN name=%s, handle=%p\n", zFilename, (void*)0)); return 0; } - if( isNT() ){ - h = LoadLibraryW((WCHAR*)zConverted); -/* isNT() is 1 if SQLITE_OS_WINCE==1, so this else is never executed. -** Since the ASCII version of these Windows API do not exist for WINCE, -** it's important to not reference them for WINCE builds. -*/ -#if SQLITE_OS_WINCE==0 - }else{ - h = LoadLibraryA((char*)zConverted); + if( osIsNT() ){ +#if SQLITE_OS_WINRT + h = osLoadPackagedLibrary((LPCWSTR)zConverted, 0); +#else + h = osLoadLibraryW((LPCWSTR)zConverted); #endif } - free(zConverted); +#ifdef SQLITE_WIN32_HAS_ANSI + else{ + h = osLoadLibraryA((char*)zConverted); + } +#endif + OSTRACE(("DLOPEN name=%s, handle=%p\n", zFilename, (void*)h)); + sqlite3_free(zConverted); return (void*)h; } static void winDlError(sqlite3_vfs *pVfs, int nBuf, char *zBufOut){ UNUSED_PARAMETER(pVfs); - getLastErrorMsg(nBuf, zBufOut); -} -void (*winDlSym(sqlite3_vfs *pVfs, void *pHandle, const char *zSymbol))(void){ - UNUSED_PARAMETER(pVfs); -#if SQLITE_OS_WINCE - /* The GetProcAddressA() routine is only available on wince. */ - return (void(*)(void))GetProcAddressA((HANDLE)pHandle, zSymbol); -#else - /* All other windows platforms expect GetProcAddress() to take - ** an Ansi string regardless of the _UNICODE setting */ - return (void(*)(void))GetProcAddress((HANDLE)pHandle, zSymbol); -#endif -} -void winDlClose(sqlite3_vfs *pVfs, void *pHandle){ - UNUSED_PARAMETER(pVfs); - FreeLibrary((HANDLE)pHandle); + winGetLastErrorMsg(osGetLastError(), nBuf, zBufOut); +} +static void (*winDlSym(sqlite3_vfs *pVfs,void *pH,const char *zSym))(void){ + FARPROC proc; + UNUSED_PARAMETER(pVfs); + proc = osGetProcAddressA((HANDLE)pH, zSym); + OSTRACE(("DLSYM handle=%p, symbol=%s, address=%p\n", + (void*)pH, zSym, (void*)proc)); + return (void(*)(void))proc; +} +static void winDlClose(sqlite3_vfs *pVfs, void *pHandle){ + UNUSED_PARAMETER(pVfs); + osFreeLibrary((HANDLE)pHandle); + OSTRACE(("DLCLOSE handle=%p\n", (void*)pHandle)); } #else /* if SQLITE_OMIT_LOAD_EXTENSION is defined: */ #define winDlOpen 0 #define winDlError 0 #define winDlSym 0 @@ -29958,114 +40464,150 @@ ** Write up to nBuf bytes of randomness into zBuf. 
*/ static int winRandomness(sqlite3_vfs *pVfs, int nBuf, char *zBuf){ int n = 0; UNUSED_PARAMETER(pVfs); -#if defined(SQLITE_TEST) +#if defined(SQLITE_TEST) || defined(SQLITE_OMIT_RANDOMNESS) n = nBuf; memset(zBuf, 0, nBuf); #else if( sizeof(SYSTEMTIME)<=nBuf-n ){ SYSTEMTIME x; - GetSystemTime(&x); + osGetSystemTime(&x); memcpy(&zBuf[n], &x, sizeof(x)); n += sizeof(x); } if( sizeof(DWORD)<=nBuf-n ){ - DWORD pid = GetCurrentProcessId(); + DWORD pid = osGetCurrentProcessId(); memcpy(&zBuf[n], &pid, sizeof(pid)); n += sizeof(pid); } +#if SQLITE_OS_WINRT + if( sizeof(ULONGLONG)<=nBuf-n ){ + ULONGLONG cnt = osGetTickCount64(); + memcpy(&zBuf[n], &cnt, sizeof(cnt)); + n += sizeof(cnt); + } +#else if( sizeof(DWORD)<=nBuf-n ){ - DWORD cnt = GetTickCount(); + DWORD cnt = osGetTickCount(); memcpy(&zBuf[n], &cnt, sizeof(cnt)); n += sizeof(cnt); } +#endif if( sizeof(LARGE_INTEGER)<=nBuf-n ){ LARGE_INTEGER i; - QueryPerformanceCounter(&i); + osQueryPerformanceCounter(&i); memcpy(&zBuf[n], &i, sizeof(i)); n += sizeof(i); } +#if !SQLITE_OS_WINCE && !SQLITE_OS_WINRT && SQLITE_WIN32_USE_UUID + if( sizeof(UUID)<=nBuf-n ){ + UUID id; + memset(&id, 0, sizeof(UUID)); + osUuidCreate(&id); + memcpy(&zBuf[n], &id, sizeof(UUID)); + n += sizeof(UUID); + } + if( sizeof(UUID)<=nBuf-n ){ + UUID id; + memset(&id, 0, sizeof(UUID)); + osUuidCreateSequential(&id); + memcpy(&zBuf[n], &id, sizeof(UUID)); + n += sizeof(UUID); + } #endif +#endif /* defined(SQLITE_TEST) || defined(SQLITE_ZERO_PRNG_SEED) */ return n; } /* ** Sleep for a little while. Return the amount of time slept. */ static int winSleep(sqlite3_vfs *pVfs, int microsec){ - Sleep((microsec+999)/1000); + sqlite3_win32_sleep((microsec+999)/1000); UNUSED_PARAMETER(pVfs); return ((microsec+999)/1000)*1000; } /* -** The following variable, if set to a non-zero value, becomes the result -** returned from sqlite3OsCurrentTime(). This is used for testing. +** The following variable, if set to a non-zero value, is interpreted as +** the number of seconds since 1970 and is used to set the result of +** sqlite3OsCurrentTime() during testing. */ +#ifdef SQLITE_TEST +SQLITE_API int sqlite3_current_time = 0; /* Fake system time in seconds since 1970. */ +#endif + +/* +** Find the current time (in Universal Coordinated Time). Write into *piNow +** the current time and date as a Julian Day number times 86_400_000. In +** other words, write into *piNow the number of milliseconds since the Julian +** epoch of noon in Greenwich on November 24, 4714 B.C according to the +** proleptic Gregorian calendar. +** +** On success, return SQLITE_OK. Return SQLITE_ERROR if the time and date +** cannot be found. +*/ +static int winCurrentTimeInt64(sqlite3_vfs *pVfs, sqlite3_int64 *piNow){ + /* FILETIME structure is a 64-bit value representing the number of + 100-nanosecond intervals since January 1, 1601 (= JD 2305813.5). + */ + FILETIME ft; + static const sqlite3_int64 winFiletimeEpoch = 23058135*(sqlite3_int64)8640000; +#ifdef SQLITE_TEST + static const sqlite3_int64 unixEpoch = 24405875*(sqlite3_int64)8640000; +#endif + /* 2^32 - to avoid use of LL and warnings in gcc */ + static const sqlite3_int64 max32BitValue = + (sqlite3_int64)2000000000 + (sqlite3_int64)2000000000 + + (sqlite3_int64)294967296; + +#if SQLITE_OS_WINCE + SYSTEMTIME time; + osGetSystemTime(&time); + /* if SystemTimeToFileTime() fails, it returns zero. 
*/ + if (!osSystemTimeToFileTime(&time,&ft)){ + return SQLITE_ERROR; + } +#else + osGetSystemTimeAsFileTime( &ft ); +#endif + + *piNow = winFiletimeEpoch + + ((((sqlite3_int64)ft.dwHighDateTime)*max32BitValue) + + (sqlite3_int64)ft.dwLowDateTime)/(sqlite3_int64)10000; + #ifdef SQLITE_TEST -SQLITE_API int sqlite3_current_time = 0; + if( sqlite3_current_time ){ + *piNow = 1000*(sqlite3_int64)sqlite3_current_time + unixEpoch; + } #endif + UNUSED_PARAMETER(pVfs); + return SQLITE_OK; +} /* ** Find the current time (in Universal Coordinated Time). Write the ** current time and date as a Julian Day number into *prNow and ** return 0. Return 1 if the time and date cannot be found. */ -int winCurrentTime(sqlite3_vfs *pVfs, double *prNow){ - FILETIME ft; - /* FILETIME structure is a 64-bit value representing the number of - 100-nanosecond intervals since January 1, 1601 (= JD 2305813.5). - */ - sqlite3_int64 timeW; /* Whole days */ - sqlite3_int64 timeF; /* Fractional Days */ - - /* Number of 100-nanosecond intervals in a single day */ - static const sqlite3_int64 ntuPerDay = - 10000000*(sqlite3_int64)86400; - - /* Number of 100-nanosecond intervals in half of a day */ - static const sqlite3_int64 ntuPerHalfDay = - 10000000*(sqlite3_int64)43200; - - /* 2^32 - to avoid use of LL and warnings in gcc */ - static const sqlite3_int64 max32BitValue = - (sqlite3_int64)2000000000 + (sqlite3_int64)2000000000 + (sqlite3_int64)294967296; - -#if SQLITE_OS_WINCE - SYSTEMTIME time; - GetSystemTime(&time); - /* if SystemTimeToFileTime() fails, it returns zero. */ - if (!SystemTimeToFileTime(&time,&ft)){ - return 1; - } -#else - GetSystemTimeAsFileTime( &ft ); -#endif - UNUSED_PARAMETER(pVfs); - timeW = (((sqlite3_int64)ft.dwHighDateTime)*max32BitValue) + (sqlite3_int64)ft.dwLowDateTime; - timeF = timeW % ntuPerDay; /* fractional days (100-nanoseconds) */ - timeW = timeW / ntuPerDay; /* whole days */ - timeW = timeW + 2305813; /* add whole days (from 2305813.5) */ - timeF = timeF + ntuPerHalfDay; /* add half a day (from 2305813.5) */ - timeW = timeW + (timeF/ntuPerDay); /* add whole day if half day made one */ - timeF = timeF % ntuPerDay; /* compute new fractional days */ - *prNow = (double)timeW + ((double)timeF / (double)ntuPerDay); -#ifdef SQLITE_TEST - if( sqlite3_current_time ){ - *prNow = ((double)sqlite3_current_time + (double)43200) / (double)86400 + (double)2440587; - } -#endif - return 0; +static int winCurrentTime(sqlite3_vfs *pVfs, double *prNow){ + int rc; + sqlite3_int64 i; + rc = winCurrentTimeInt64(pVfs, &i); + if( !rc ){ + *prNow = i/86400000.0; + } + return rc; } /* ** The idea is that this function works like a combination of -** GetLastError() and FormatMessage() on windows (or errno and -** strerror_r() on unix). After an error is returned by an OS +** GetLastError() and FormatMessage() on Windows (or errno and +** strerror_r() on Unix). After an error is returned by an OS ** function, SQLite calls this function with zBuf pointing to ** a buffer of nBuf bytes. The OS layer should populate the ** buffer with a nul-terminated UTF-8 encoded error message ** describing the last IO error to have occurred within the calling ** thread. @@ -30090,43 +40632,98 @@ ** by sqlite into the error message available to the user using ** sqlite3_errmsg(), possibly making IO errors easier to debug. 
*/ static int winGetLastError(sqlite3_vfs *pVfs, int nBuf, char *zBuf){ UNUSED_PARAMETER(pVfs); - return getLastErrorMsg(nBuf, zBuf); + return winGetLastErrorMsg(osGetLastError(), nBuf, zBuf); } /* ** Initialize and deinitialize the operating system interface. */ -SQLITE_API int sqlite3_os_init(void){ +SQLITE_API int SQLITE_STDCALL sqlite3_os_init(void){ static sqlite3_vfs winVfs = { - 1, /* iVersion */ - sizeof(winFile), /* szOsFile */ - MAX_PATH, /* mxPathname */ - 0, /* pNext */ - "win32", /* zName */ - 0, /* pAppData */ - - winOpen, /* xOpen */ - winDelete, /* xDelete */ - winAccess, /* xAccess */ - winFullPathname, /* xFullPathname */ - winDlOpen, /* xDlOpen */ - winDlError, /* xDlError */ - winDlSym, /* xDlSym */ - winDlClose, /* xDlClose */ - winRandomness, /* xRandomness */ - winSleep, /* xSleep */ - winCurrentTime, /* xCurrentTime */ - winGetLastError /* xGetLastError */ + 3, /* iVersion */ + sizeof(winFile), /* szOsFile */ + SQLITE_WIN32_MAX_PATH_BYTES, /* mxPathname */ + 0, /* pNext */ + "win32", /* zName */ + 0, /* pAppData */ + winOpen, /* xOpen */ + winDelete, /* xDelete */ + winAccess, /* xAccess */ + winFullPathname, /* xFullPathname */ + winDlOpen, /* xDlOpen */ + winDlError, /* xDlError */ + winDlSym, /* xDlSym */ + winDlClose, /* xDlClose */ + winRandomness, /* xRandomness */ + winSleep, /* xSleep */ + winCurrentTime, /* xCurrentTime */ + winGetLastError, /* xGetLastError */ + winCurrentTimeInt64, /* xCurrentTimeInt64 */ + winSetSystemCall, /* xSetSystemCall */ + winGetSystemCall, /* xGetSystemCall */ + winNextSystemCall, /* xNextSystemCall */ }; +#if defined(SQLITE_WIN32_HAS_WIDE) + static sqlite3_vfs winLongPathVfs = { + 3, /* iVersion */ + sizeof(winFile), /* szOsFile */ + SQLITE_WINNT_MAX_PATH_BYTES, /* mxPathname */ + 0, /* pNext */ + "win32-longpath", /* zName */ + 0, /* pAppData */ + winOpen, /* xOpen */ + winDelete, /* xDelete */ + winAccess, /* xAccess */ + winFullPathname, /* xFullPathname */ + winDlOpen, /* xDlOpen */ + winDlError, /* xDlError */ + winDlSym, /* xDlSym */ + winDlClose, /* xDlClose */ + winRandomness, /* xRandomness */ + winSleep, /* xSleep */ + winCurrentTime, /* xCurrentTime */ + winGetLastError, /* xGetLastError */ + winCurrentTimeInt64, /* xCurrentTimeInt64 */ + winSetSystemCall, /* xSetSystemCall */ + winGetSystemCall, /* xGetSystemCall */ + winNextSystemCall, /* xNextSystemCall */ + }; +#endif + + /* Double-check that the aSyscall[] array has been constructed + ** correctly. See ticket [bb3a86e890c8e96ab] */ + assert( ArraySize(aSyscall)==80 ); + + /* get memory map allocation granularity */ + memset(&winSysInfo, 0, sizeof(SYSTEM_INFO)); +#if SQLITE_OS_WINRT + osGetNativeSystemInfo(&winSysInfo); +#else + osGetSystemInfo(&winSysInfo); +#endif + assert( winSysInfo.dwAllocationGranularity>0 ); + assert( winSysInfo.dwPageSize>0 ); sqlite3_vfs_register(&winVfs, 1); - return SQLITE_OK; + +#if defined(SQLITE_WIN32_HAS_WIDE) + sqlite3_vfs_register(&winLongPathVfs, 0); +#endif + + return SQLITE_OK; } -SQLITE_API int sqlite3_os_end(void){ + +SQLITE_API int SQLITE_STDCALL sqlite3_os_end(void){ +#if SQLITE_OS_WINRT + if( sleepObj!=NULL ){ + osCloseHandle(sleepObj); + sleepObj = NULL; + } +#endif return SQLITE_OK; } #endif /* SQLITE_OS_WIN */ @@ -30166,17 +40763,19 @@ ** sometimes grow into tens of thousands or larger. The size of the ** Bitvec object is the number of pages in the database file at the ** start of a transaction, and is thus usually less than a few thousand, ** but can be as large as 2 billion for a really big database. 
*/ +/* #include "sqliteInt.h" */ /* Size of the Bitvec structure in bytes. */ -#define BITVEC_SZ (sizeof(void*)*128) /* 512 on 32bit. 1024 on 64bit */ +#define BITVEC_SZ 512 /* Round the union size down to the nearest pointer boundary, since that's how ** it will be aligned within the Bitvec struct. */ -#define BITVEC_USIZE (((BITVEC_SZ-(3*sizeof(u32)))/sizeof(Bitvec*))*sizeof(Bitvec*)) +#define BITVEC_USIZE \ + (((BITVEC_SZ-(3*sizeof(u32)))/sizeof(Bitvec*))*sizeof(Bitvec*)) /* Type of the array "element" for the bitmap representation. ** Should be a power of 2, and ideally, evenly divide into BITVEC_USIZE. ** Setting this to the "natural word" size of your CPU may improve ** performance. */ @@ -30203,11 +40802,11 @@ /* ** A bitmap is an instance of the following structure. ** -** This bitmap records the existance of zero or more bits +** This bitmap records the existence of zero or more bits ** with values between 1 and iSize, inclusive. ** ** There are three possible representations of the bitmap. ** If iSize<=BITVEC_NBIT, then Bitvec.u.aBitmap[] is a straight ** bitmap. The least significant bit is bit 1. @@ -30257,14 +40856,14 @@ /* ** Check to see if the i-th bit is set. Return true or false. ** If p is NULL (if the bitmap has not been created) or if ** i is out of range, then return false. */ -SQLITE_PRIVATE int sqlite3BitvecTest(Bitvec *p, u32 i){ - if( p==0 ) return 0; - if( i>p->iSize || i==0 ) return 0; +SQLITE_PRIVATE int sqlite3BitvecTestNotNull(Bitvec *p, u32 i){ + assert( p!=0 ); i--; + if( i>=p->iSize ) return 0; while( p->iDivisor ){ u32 bin = i/p->iDivisor; i = i%p->iDivisor; p = p->u.apSub[bin]; if (!p) { @@ -30279,10 +40878,13 @@ if( p->u.aHash[h]==i ) return 1; h = (h+1) % BITVEC_NINT; } return 0; } +} +SQLITE_PRIVATE int sqlite3BitvecTest(Bitvec *p, u32 i){ + return p!=0 && sqlite3BitvecTestNotNull(p,i); } /* ** Set the i-th bit. Return 0 on success and an error code if ** anything goes wrong. @@ -30471,14 +41073,13 @@ void *pTmpSpace; /* Allocate the Bitvec to be tested and a linear array of ** bits to act as the reference */ pBitvec = sqlite3BitvecCreate( sz ); - pV = sqlite3_malloc( (sz+7)/8 + 1 ); - pTmpSpace = sqlite3_malloc(BITVEC_SZ); + pV = sqlite3MallocZero( (sz+7)/8 + 1 ); + pTmpSpace = sqlite3_malloc64(BITVEC_SZ); if( pBitvec==0 || pV==0 || pTmpSpace==0 ) goto bitvec_end; - memset(pV, 0, (sz+7)/8 + 1); /* NULL pBitvec tests */ sqlite3BitvecSet(0, 1); sqlite3BitvecClear(0, 1, pTmpSpace); @@ -30553,147 +41154,144 @@ ** May you share freely, never taking more than you give. ** ************************************************************************* ** This file implements that page cache. */ +/* #include "sqliteInt.h" */ /* ** A complete page cache is an instance of this structure. 
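/*
** A minimal standalone sketch of the "straight bitmap" representation
** described in the Bitvec comments above: indexes are 1-based and bit 1 is
** the least significant bit of the first array element. The 32-bit element
** width and array size are illustrative only; the hash and sub-bitmap
** representations used for larger sizes are omitted.
*/
#include <stdio.h>
typedef unsigned int u32;
static void bitSet(u32 *a, u32 i){ i--; a[i/32] |= (u32)1<<(i%32); }
static int  bitTest(const u32 *a, u32 i){ i--; return (a[i/32]>>(i%32))&1; }
int main(void){
  u32 aBitmap[4] = {0};           /* room for bits 1..128 */
  bitSet(aBitmap, 1);
  bitSet(aBitmap, 100);
  printf("%d %d %d\n", bitTest(aBitmap,1), bitTest(aBitmap,2), bitTest(aBitmap,100));
  return 0;                       /* prints: 1 0 1 */
}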
*/ struct PCache { PgHdr *pDirty, *pDirtyTail; /* List of dirty pages in LRU order */ PgHdr *pSynced; /* Last synced page in dirty page list */ - int nRef; /* Number of referenced pages */ - int nMax; /* Configured cache size */ + int nRefSum; /* Sum of ref counts over all pages */ + int szCache; /* Configured cache size */ + int szSpill; /* Size before spilling occurs */ int szPage; /* Size of every page in this cache */ int szExtra; /* Size of extra space for each page */ - int bPurgeable; /* True if pages are on backing store */ + u8 bPurgeable; /* True if pages are on backing store */ + u8 eCreate; /* eCreate value for for xFetch() */ int (*xStress)(void*,PgHdr*); /* Call to try make a page clean */ void *pStress; /* Argument to xStress */ sqlite3_pcache *pCache; /* Pluggable cache module */ - PgHdr *pPage1; /* Reference to page 1 */ }; -/* -** Some of the assert() macros in this code are too expensive to run -** even during normal debugging. Use them only rarely on long-running -** tests. Enable the expensive asserts using the -** -DSQLITE_ENABLE_EXPENSIVE_ASSERT=1 compile-time option. -*/ -#ifdef SQLITE_ENABLE_EXPENSIVE_ASSERT -# define expensive_assert(X) assert(X) -#else -# define expensive_assert(X) -#endif - /********************************** Linked List Management ********************/ -#if !defined(NDEBUG) && defined(SQLITE_ENABLE_EXPENSIVE_ASSERT) -/* -** Check that the pCache->pSynced variable is set correctly. If it -** is not, either fail an assert or return zero. Otherwise, return -** non-zero. This is only used in debugging builds, as follows: -** -** expensive_assert( pcacheCheckSynced(pCache) ); -*/ -static int pcacheCheckSynced(PCache *pCache){ - PgHdr *p; - for(p=pCache->pDirtyTail; p!=pCache->pSynced; p=p->pDirtyPrev){ - assert( p->nRef || (p->flags&PGHDR_NEED_SYNC) ); - } - return (p==0 || p->nRef || (p->flags&PGHDR_NEED_SYNC)==0); -} -#endif /* !NDEBUG && SQLITE_ENABLE_EXPENSIVE_ASSERT */ - -/* -** Remove page pPage from the list of dirty pages. -*/ -static void pcacheRemoveFromDirtyList(PgHdr *pPage){ - PCache *p = pPage->pCache; - - assert( pPage->pDirtyNext || pPage==p->pDirtyTail ); - assert( pPage->pDirtyPrev || pPage==p->pDirty ); - - /* Update the PCache1.pSynced variable if necessary. */ - if( p->pSynced==pPage ){ - PgHdr *pSynced = pPage->pDirtyPrev; - while( pSynced && (pSynced->flags&PGHDR_NEED_SYNC) ){ - pSynced = pSynced->pDirtyPrev; - } - p->pSynced = pSynced; - } - - if( pPage->pDirtyNext ){ - pPage->pDirtyNext->pDirtyPrev = pPage->pDirtyPrev; - }else{ - assert( pPage==p->pDirtyTail ); - p->pDirtyTail = pPage->pDirtyPrev; - } - if( pPage->pDirtyPrev ){ - pPage->pDirtyPrev->pDirtyNext = pPage->pDirtyNext; - }else{ - assert( pPage==p->pDirty ); - p->pDirty = pPage->pDirtyNext; - } - pPage->pDirtyNext = 0; - pPage->pDirtyPrev = 0; - - expensive_assert( pcacheCheckSynced(p) ); -} - -/* -** Add page pPage to the head of the dirty list (PCache1.pDirty is set to -** pPage). 
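/*
** A minimal sketch of the bit-flag dispatch adopted by the new
** pcacheManageDirtyList() below: bit 0x01 removes an element, bit 0x02
** adds it at the front, and passing both bits (0x03) moves it to the
** front. The list type and helper here are stand-ins, not SQLite code;
** as in the original, the caller only requests a removal when the element
** is known to be on the list.
*/
#define EX_REMOVE 1
#define EX_ADD    2
#define EX_FRONT  3   /* EX_REMOVE|EX_ADD */
typedef struct ExNode ExNode;
struct ExNode { ExNode *pPrev, *pNext; };
static ExNode *exHead = 0;
static void exManageList(ExNode *p, unsigned char addRemove){
  if( addRemove & EX_REMOVE ){
    if( p->pPrev ) p->pPrev->pNext = p->pNext; else exHead = p->pNext;
    if( p->pNext ) p->pNext->pPrev = p->pPrev;
    p->pPrev = p->pNext = 0;
  }
  if( addRemove & EX_ADD ){
    p->pNext = exHead;
    if( exHead ) exHead->pPrev = p;
    p->pPrev = 0;
    exHead = p;
  }
}
int main(void){
  ExNode a = {0,0}, b = {0,0};
  exManageList(&a, EX_ADD);
  exManageList(&b, EX_ADD);        /* list is now: b, a */
  exManageList(&a, EX_FRONT);      /* list is now: a, b */
  return exHead==&a ? 0 : 1;
}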
-*/ -static void pcacheAddToDirtyList(PgHdr *pPage){ - PCache *p = pPage->pCache; - - assert( pPage->pDirtyNext==0 && pPage->pDirtyPrev==0 && p->pDirty!=pPage ); - - pPage->pDirtyNext = p->pDirty; - if( pPage->pDirtyNext ){ - assert( pPage->pDirtyNext->pDirtyPrev==0 ); - pPage->pDirtyNext->pDirtyPrev = pPage; - } - p->pDirty = pPage; - if( !p->pDirtyTail ){ - p->pDirtyTail = pPage; - } - if( !p->pSynced && 0==(pPage->flags&PGHDR_NEED_SYNC) ){ - p->pSynced = pPage; - } - expensive_assert( pcacheCheckSynced(p) ); +/* Allowed values for second argument to pcacheManageDirtyList() */ +#define PCACHE_DIRTYLIST_REMOVE 1 /* Remove pPage from dirty list */ +#define PCACHE_DIRTYLIST_ADD 2 /* Add pPage to the dirty list */ +#define PCACHE_DIRTYLIST_FRONT 3 /* Move pPage to the front of the list */ + +/* +** Manage pPage's participation on the dirty list. Bits of the addRemove +** argument determines what operation to do. The 0x01 bit means first +** remove pPage from the dirty list. The 0x02 means add pPage back to +** the dirty list. Doing both moves pPage to the front of the dirty list. +*/ +static void pcacheManageDirtyList(PgHdr *pPage, u8 addRemove){ + PCache *p = pPage->pCache; + + if( addRemove & PCACHE_DIRTYLIST_REMOVE ){ + assert( pPage->pDirtyNext || pPage==p->pDirtyTail ); + assert( pPage->pDirtyPrev || pPage==p->pDirty ); + + /* Update the PCache1.pSynced variable if necessary. */ + if( p->pSynced==pPage ){ + PgHdr *pSynced = pPage->pDirtyPrev; + while( pSynced && (pSynced->flags&PGHDR_NEED_SYNC) ){ + pSynced = pSynced->pDirtyPrev; + } + p->pSynced = pSynced; + } + + if( pPage->pDirtyNext ){ + pPage->pDirtyNext->pDirtyPrev = pPage->pDirtyPrev; + }else{ + assert( pPage==p->pDirtyTail ); + p->pDirtyTail = pPage->pDirtyPrev; + } + if( pPage->pDirtyPrev ){ + pPage->pDirtyPrev->pDirtyNext = pPage->pDirtyNext; + }else{ + assert( pPage==p->pDirty ); + p->pDirty = pPage->pDirtyNext; + if( p->pDirty==0 && p->bPurgeable ){ + assert( p->eCreate==1 ); + p->eCreate = 2; + } + } + pPage->pDirtyNext = 0; + pPage->pDirtyPrev = 0; + } + if( addRemove & PCACHE_DIRTYLIST_ADD ){ + assert( pPage->pDirtyNext==0 && pPage->pDirtyPrev==0 && p->pDirty!=pPage ); + + pPage->pDirtyNext = p->pDirty; + if( pPage->pDirtyNext ){ + assert( pPage->pDirtyNext->pDirtyPrev==0 ); + pPage->pDirtyNext->pDirtyPrev = pPage; + }else{ + p->pDirtyTail = pPage; + if( p->bPurgeable ){ + assert( p->eCreate==2 ); + p->eCreate = 1; + } + } + p->pDirty = pPage; + if( !p->pSynced && 0==(pPage->flags&PGHDR_NEED_SYNC) ){ + p->pSynced = pPage; + } + } } /* ** Wrapper around the pluggable caches xUnpin method. If the cache is ** being used for an in-memory database, this function is a no-op. */ static void pcacheUnpin(PgHdr *p){ - PCache *pCache = p->pCache; - if( pCache->bPurgeable ){ - if( p->pgno==1 ){ - pCache->pPage1 = 0; - } - sqlite3GlobalConfig.pcache.xUnpin(pCache->pCache, p, 0); + if( p->pCache->bPurgeable ){ + sqlite3GlobalConfig.pcache2.xUnpin(p->pCache->pCache, p->pPage, 0); + } +} + +/* +** Compute the number of pages of cache requested. p->szCache is the +** cache size requested by the "PRAGMA cache_size" statement. +*/ +static int numberOfCachePages(PCache *p){ + if( p->szCache>=0 ){ + /* IMPLEMENTATION-OF: R-42059-47211 If the argument N is positive then the + ** suggested cache size is set to N. */ + return p->szCache; + }else{ + /* IMPLEMENTATION-OF: R-61436-13639 If the argument N is negative, then + ** the number of cache pages is adjusted to use approximately abs(N*1024) + ** bytes of memory. 
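/*
** A worked example of the negative-N branch described above, using the
** formula in the return statement that follows. Assumed values: a
** "PRAGMA cache_size = -2000" request (about 2000 KiB), a 4096-byte page,
** and 88 bytes of per-page extra space.
*/
#include <stdio.h>
typedef long long i64;
int main(void){
  int szCache = -2000;      /* PRAGMA cache_size = -2000 */
  int szPage  = 4096;       /* hypothetical page size */
  int szExtra = 88;         /* hypothetical per-page extra space */
  int nPage = (int)((-1024*(i64)szCache)/(szPage+szExtra));
  printf("%d pages\n", nPage);   /* prints: 489 */
  return 0;
}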
*/ + return (int)((-1024*(i64)p->szCache)/(p->szPage+p->szExtra)); } } /*************************************************** General Interfaces ****** ** ** Initialize and shutdown the page cache subsystem. Neither of these ** functions are threadsafe. */ SQLITE_PRIVATE int sqlite3PcacheInitialize(void){ - if( sqlite3GlobalConfig.pcache.xInit==0 ){ + if( sqlite3GlobalConfig.pcache2.xInit==0 ){ + /* IMPLEMENTATION-OF: R-26801-64137 If the xInit() method is NULL, then the + ** built-in default page cache is used instead of the application defined + ** page cache. */ sqlite3PCacheSetDefault(); } - return sqlite3GlobalConfig.pcache.xInit(sqlite3GlobalConfig.pcache.pArg); + return sqlite3GlobalConfig.pcache2.xInit(sqlite3GlobalConfig.pcache2.pArg); } SQLITE_PRIVATE void sqlite3PcacheShutdown(void){ - if( sqlite3GlobalConfig.pcache.xShutdown ){ - sqlite3GlobalConfig.pcache.xShutdown(sqlite3GlobalConfig.pcache.pArg); + if( sqlite3GlobalConfig.pcache2.xShutdown ){ + /* IMPLEMENTATION-OF: R-26000-56589 The xShutdown() method may be NULL. */ + sqlite3GlobalConfig.pcache2.xShutdown(sqlite3GlobalConfig.pcache2.pArg); } } /* ** Return the size in bytes of a PCache object. @@ -30704,86 +41302,128 @@ ** Create a new PCache object. Storage space to hold the object ** has already been allocated and is passed in as the p pointer. ** The caller discovers how much space needs to be allocated by ** calling sqlite3PcacheSize(). */ -SQLITE_PRIVATE void sqlite3PcacheOpen( +SQLITE_PRIVATE int sqlite3PcacheOpen( int szPage, /* Size of every page */ int szExtra, /* Extra space associated with each page */ int bPurgeable, /* True if pages are on backing store */ int (*xStress)(void*,PgHdr*),/* Call to try to make pages clean */ void *pStress, /* Argument to xStress */ PCache *p /* Preallocated space for the PCache */ ){ memset(p, 0, sizeof(PCache)); - p->szPage = szPage; + p->szPage = 1; p->szExtra = szExtra; p->bPurgeable = bPurgeable; + p->eCreate = 2; p->xStress = xStress; p->pStress = pStress; - p->nMax = 100; + p->szCache = 100; + p->szSpill = 1; + return sqlite3PcacheSetPageSize(p, szPage); } /* ** Change the page size for PCache object. The caller must ensure that there ** are no outstanding page references when this function is called. */ -SQLITE_PRIVATE void sqlite3PcacheSetPageSize(PCache *pCache, int szPage){ - assert( pCache->nRef==0 && pCache->pDirty==0 ); - if( pCache->pCache ){ - sqlite3GlobalConfig.pcache.xDestroy(pCache->pCache); - pCache->pCache = 0; - pCache->pPage1 = 0; - } - pCache->szPage = szPage; +SQLITE_PRIVATE int sqlite3PcacheSetPageSize(PCache *pCache, int szPage){ + assert( pCache->nRefSum==0 && pCache->pDirty==0 ); + if( pCache->szPage ){ + sqlite3_pcache *pNew; + pNew = sqlite3GlobalConfig.pcache2.xCreate( + szPage, pCache->szExtra + ROUND8(sizeof(PgHdr)), + pCache->bPurgeable + ); + if( pNew==0 ) return SQLITE_NOMEM; + sqlite3GlobalConfig.pcache2.xCachesize(pNew, numberOfCachePages(pCache)); + if( pCache->pCache ){ + sqlite3GlobalConfig.pcache2.xDestroy(pCache->pCache); + } + pCache->pCache = pNew; + pCache->szPage = szPage; + } + return SQLITE_OK; } /* ** Try to obtain a page from the cache. +** +** This routine returns a pointer to an sqlite3_pcache_page object if +** such an object is already in cache, or if a new one is created. +** This routine returns a NULL pointer if the object was not in cache +** and could not be created. +** +** The createFlags should be 0 to check for existing pages and should +** be 3 (not 1, but 3) to try to create a new page. 
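/*
** A small standalone illustration of why the create flag is 3 rather
** than 1: sqlite3PcacheFetch() below computes
**
**     eCreate = createFlag & pCache->eCreate;
**
** where pCache->eCreate is 1 ("create only if inexpensive") or 2 ("create
** even if difficult"). Masking 3 against either value preserves the
** cache's current mode, while a createFlag of 0 always yields 0.
*/
#include <stdio.h>
int main(void){
  int createFlag, eCreate;
  for(createFlag=0; createFlag<=3; createFlag+=3){
    for(eCreate=1; eCreate<=2; eCreate++){
      printf("createFlag=%d & eCreate=%d -> %d\n",
             createFlag, eCreate, createFlag & eCreate);
    }
  }
  return 0;    /* 0,0,1,2: flag 0 never creates; flag 3 follows the cache */
}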
+** +** If the createFlag is 0, then NULL is always returned if the page +** is not already in the cache. If createFlag is 1, then a new page +** is created only if that can be done without spilling dirty pages +** and without exceeding the cache size limit. +** +** The caller needs to invoke sqlite3PcacheFetchFinish() to properly +** initialize the sqlite3_pcache_page object and convert it into a +** PgHdr object. The sqlite3PcacheFetch() and sqlite3PcacheFetchFinish() +** routines are split this way for performance reasons. When separated +** they can both (usually) operate without having to push values to +** the stack on entry and pop them back off on exit, which saves a +** lot of pushing and popping. */ -SQLITE_PRIVATE int sqlite3PcacheFetch( +SQLITE_PRIVATE sqlite3_pcache_page *sqlite3PcacheFetch( PCache *pCache, /* Obtain the page from this cache */ Pgno pgno, /* Page number to obtain */ - int createFlag, /* If true, create page if it does not exist already */ - PgHdr **ppPage /* Write the page here */ + int createFlag /* If true, create page if it does not exist already */ ){ - PgHdr *pPage = 0; int eCreate; assert( pCache!=0 ); - assert( createFlag==1 || createFlag==0 ); + assert( pCache->pCache!=0 ); + assert( createFlag==3 || createFlag==0 ); assert( pgno>0 ); - /* If the pluggable cache (sqlite3_pcache*) has not been allocated, - ** allocate it now. + /* eCreate defines what to do if the page does not exist. + ** 0 Do not allocate a new page. (createFlag==0) + ** 1 Allocate a new page if doing so is inexpensive. + ** (createFlag==1 AND bPurgeable AND pDirty) + ** 2 Allocate a new page even it doing so is difficult. + ** (createFlag==1 AND !(bPurgeable AND pDirty) */ - if( !pCache->pCache && createFlag ){ - sqlite3_pcache *p; - int nByte; - nByte = pCache->szPage + pCache->szExtra + sizeof(PgHdr); - p = sqlite3GlobalConfig.pcache.xCreate(nByte, pCache->bPurgeable); - if( !p ){ - return SQLITE_NOMEM; - } - sqlite3GlobalConfig.pcache.xCachesize(p, pCache->nMax); - pCache->pCache = p; - } - - eCreate = createFlag * (1 + (!pCache->bPurgeable || !pCache->pDirty)); - if( pCache->pCache ){ - pPage = sqlite3GlobalConfig.pcache.xFetch(pCache->pCache, pgno, eCreate); - } - - if( !pPage && eCreate==1 ){ - PgHdr *pPg; - + eCreate = createFlag & pCache->eCreate; + assert( eCreate==0 || eCreate==1 || eCreate==2 ); + assert( createFlag==0 || pCache->eCreate==eCreate ); + assert( createFlag==0 || eCreate==1+(!pCache->bPurgeable||!pCache->pDirty) ); + return sqlite3GlobalConfig.pcache2.xFetch(pCache->pCache, pgno, eCreate); +} + +/* +** If the sqlite3PcacheFetch() routine is unable to allocate a new +** page because new clean pages are available for reuse and the cache +** size limit has been reached, then this routine can be invoked to +** try harder to allocate a page. This routine might invoke the stress +** callback to spill dirty pages to the journal. It will then try to +** allocate the new page and will only fail to allocate a new page on +** an OOM error. +** +** This routine should be invoked only after sqlite3PcacheFetch() fails. +*/ +SQLITE_PRIVATE int sqlite3PcacheFetchStress( + PCache *pCache, /* Obtain the page from this cache */ + Pgno pgno, /* Page number to obtain */ + sqlite3_pcache_page **ppPage /* Write result here */ +){ + PgHdr *pPg; + if( pCache->eCreate==2 ) return 0; + + if( sqlite3PcachePagecount(pCache)>pCache->szSpill ){ /* Find a dirty page to write-out and recycle. 
First try to find a ** page that does not require a journal-sync (one with PGHDR_NEED_SYNC ** cleared), but if that is not possible settle for any other ** unreferenced dirty page. */ - expensive_assert( pcacheCheckSynced(pCache) ); for(pPg=pCache->pSynced; pPg && (pPg->nRef || (pPg->flags&PGHDR_NEED_SYNC)); pPg=pPg->pDirtyPrev ); pCache->pSynced = pPg; @@ -30790,59 +41430,93 @@ if( !pPg ){ for(pPg=pCache->pDirtyTail; pPg && pPg->nRef; pPg=pPg->pDirtyPrev); } if( pPg ){ int rc; +#ifdef SQLITE_LOG_CACHE_SPILL + sqlite3_log(SQLITE_FULL, + "spill page %d making room for %d - cache used: %d/%d", + pPg->pgno, pgno, + sqlite3GlobalConfig.pcache.xPagecount(pCache->pCache), + numberOfCachePages(pCache)); +#endif rc = pCache->xStress(pCache->pStress, pPg); if( rc!=SQLITE_OK && rc!=SQLITE_BUSY ){ return rc; } } - - pPage = sqlite3GlobalConfig.pcache.xFetch(pCache->pCache, pgno, 2); - } - - if( pPage ){ - if( !pPage->pData ){ - memset(pPage, 0, sizeof(PgHdr) + pCache->szExtra); - pPage->pExtra = (void*)&pPage[1]; - pPage->pData = (void *)&((char *)pPage)[sizeof(PgHdr) + pCache->szExtra]; - pPage->pCache = pCache; - pPage->pgno = pgno; - } - assert( pPage->pCache==pCache ); - assert( pPage->pgno==pgno ); - assert( pPage->pExtra==(void *)&pPage[1] ); - - if( 0==pPage->nRef ){ - pCache->nRef++; - } - pPage->nRef++; - if( pgno==1 ){ - pCache->pPage1 = pPage; - } - } - *ppPage = pPage; - return (pPage==0 && eCreate) ? SQLITE_NOMEM : SQLITE_OK; + } + *ppPage = sqlite3GlobalConfig.pcache2.xFetch(pCache->pCache, pgno, 2); + return *ppPage==0 ? SQLITE_NOMEM : SQLITE_OK; +} + +/* +** This is a helper routine for sqlite3PcacheFetchFinish() +** +** In the uncommon case where the page being fetched has not been +** initialized, this routine is invoked to do the initialization. +** This routine is broken out into a separate function since it +** requires extra stack manipulation that can be avoided in the common +** case. +*/ +static SQLITE_NOINLINE PgHdr *pcacheFetchFinishWithInit( + PCache *pCache, /* Obtain the page from this cache */ + Pgno pgno, /* Page number obtained */ + sqlite3_pcache_page *pPage /* Page obtained by prior PcacheFetch() call */ +){ + PgHdr *pPgHdr; + assert( pPage!=0 ); + pPgHdr = (PgHdr*)pPage->pExtra; + assert( pPgHdr->pPage==0 ); + memset(pPgHdr, 0, sizeof(PgHdr)); + pPgHdr->pPage = pPage; + pPgHdr->pData = pPage->pBuf; + pPgHdr->pExtra = (void *)&pPgHdr[1]; + memset(pPgHdr->pExtra, 0, pCache->szExtra); + pPgHdr->pCache = pCache; + pPgHdr->pgno = pgno; + pPgHdr->flags = PGHDR_CLEAN; + return sqlite3PcacheFetchFinish(pCache,pgno,pPage); +} + +/* +** This routine converts the sqlite3_pcache_page object returned by +** sqlite3PcacheFetch() into an initialized PgHdr object. This routine +** must be called after sqlite3PcacheFetch() in order to get a usable +** result. +*/ +SQLITE_PRIVATE PgHdr *sqlite3PcacheFetchFinish( + PCache *pCache, /* Obtain the page from this cache */ + Pgno pgno, /* Page number obtained */ + sqlite3_pcache_page *pPage /* Page obtained by prior PcacheFetch() call */ +){ + PgHdr *pPgHdr; + + assert( pPage!=0 ); + pPgHdr = (PgHdr *)pPage->pExtra; + + if( !pPgHdr->pPage ){ + return pcacheFetchFinishWithInit(pCache, pgno, pPage); + } + pCache->nRefSum++; + pPgHdr->nRef++; + return pPgHdr; } /* ** Decrement the reference count on a page. If the page is clean and the -** reference count drops to 0, then it is made elible for recycling. +** reference count drops to 0, then it is made eligible for recycling. 
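/*
** A conceptual sketch of the three-step calling pattern described in the
** comments above (fetch, then stress on failure, then finish). This is an
** illustration of how a caller such as the pager drives these routines,
** not a copy of the pager code; it assumes the internal declarations above
** are in scope, as they are within the amalgamation.
*/
static int examplePcacheGet(PCache *pCache, Pgno pgno, PgHdr **ppPg){
  sqlite3_pcache_page *pBase;
  int rc;
  pBase = sqlite3PcacheFetch(pCache, pgno, 3);           /* step 1: cheap attempt */
  if( pBase==0 ){
    rc = sqlite3PcacheFetchStress(pCache, pgno, &pBase); /* step 2: spill, retry */
    if( pBase==0 ) return rc ? rc : SQLITE_NOMEM;
  }
  *ppPg = sqlite3PcacheFetchFinish(pCache, pgno, pBase); /* step 3: build PgHdr */
  return SQLITE_OK;
}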
*/ -SQLITE_PRIVATE void sqlite3PcacheRelease(PgHdr *p){ +SQLITE_PRIVATE void SQLITE_NOINLINE sqlite3PcacheRelease(PgHdr *p){ assert( p->nRef>0 ); - p->nRef--; - if( p->nRef==0 ){ - PCache *pCache = p->pCache; - pCache->nRef--; - if( (p->flags&PGHDR_DIRTY)==0 ){ + p->pCache->nRefSum--; + if( (--p->nRef)==0 ){ + if( p->flags&PGHDR_CLEAN ){ pcacheUnpin(p); - }else{ + }else if( p->pDirtyPrev!=0 ){ /* Move the page to the head of the dirty list. */ - pcacheRemoveFromDirtyList(p); - pcacheAddToDirtyList(p); + pcacheManageDirtyList(p, PCACHE_DIRTYLIST_FRONT); } } } /* @@ -30849,52 +41523,53 @@ ** Increase the reference count of a supplied page by 1. */ SQLITE_PRIVATE void sqlite3PcacheRef(PgHdr *p){ assert(p->nRef>0); p->nRef++; + p->pCache->nRefSum++; } /* ** Drop a page from the cache. There must be exactly one reference to the ** page. This function deletes that reference, so after it returns the ** page pointed to by p is invalid. */ SQLITE_PRIVATE void sqlite3PcacheDrop(PgHdr *p){ - PCache *pCache; assert( p->nRef==1 ); if( p->flags&PGHDR_DIRTY ){ - pcacheRemoveFromDirtyList(p); - } - pCache = p->pCache; - pCache->nRef--; - if( p->pgno==1 ){ - pCache->pPage1 = 0; - } - sqlite3GlobalConfig.pcache.xUnpin(pCache->pCache, p, 1); + pcacheManageDirtyList(p, PCACHE_DIRTYLIST_REMOVE); + } + p->pCache->nRefSum--; + sqlite3GlobalConfig.pcache2.xUnpin(p->pCache->pCache, p->pPage, 1); } /* ** Make sure the page is marked as dirty. If it isn't dirty already, ** make it so. */ SQLITE_PRIVATE void sqlite3PcacheMakeDirty(PgHdr *p){ - p->flags &= ~PGHDR_DONT_WRITE; assert( p->nRef>0 ); - if( 0==(p->flags & PGHDR_DIRTY) ){ - p->flags |= PGHDR_DIRTY; - pcacheAddToDirtyList( p); + if( p->flags & (PGHDR_CLEAN|PGHDR_DONT_WRITE) ){ + p->flags &= ~PGHDR_DONT_WRITE; + if( p->flags & PGHDR_CLEAN ){ + p->flags ^= (PGHDR_DIRTY|PGHDR_CLEAN); + assert( (p->flags & (PGHDR_DIRTY|PGHDR_CLEAN))==PGHDR_DIRTY ); + pcacheManageDirtyList(p, PCACHE_DIRTYLIST_ADD); + } } } /* ** Make sure the page is marked as clean. If it isn't clean already, ** make it so. */ SQLITE_PRIVATE void sqlite3PcacheMakeClean(PgHdr *p){ if( (p->flags & PGHDR_DIRTY) ){ - pcacheRemoveFromDirtyList(p); - p->flags &= ~(PGHDR_DIRTY|PGHDR_NEED_SYNC); + assert( (p->flags & PGHDR_CLEAN)==0 ); + pcacheManageDirtyList(p, PCACHE_DIRTYLIST_REMOVE); + p->flags &= ~(PGHDR_DIRTY|PGHDR_NEED_SYNC|PGHDR_WRITEABLE); + p->flags |= PGHDR_CLEAN; if( p->nRef==0 ){ pcacheUnpin(p); } } } @@ -30925,15 +41600,14 @@ */ SQLITE_PRIVATE void sqlite3PcacheMove(PgHdr *p, Pgno newPgno){ PCache *pCache = p->pCache; assert( p->nRef>0 ); assert( newPgno>0 ); - sqlite3GlobalConfig.pcache.xRekey(pCache->pCache, p, p->pgno, newPgno); + sqlite3GlobalConfig.pcache2.xRekey(pCache->pCache, p->pPage, p->pgno,newPgno); p->pgno = newPgno; if( (p->flags&PGHDR_DIRTY) && (p->flags&PGHDR_NEED_SYNC) ){ - pcacheRemoveFromDirtyList(p); - pcacheAddToDirtyList(p); + pcacheManageDirtyList(p, PCACHE_DIRTYLIST_FRONT); } } /* ** Drop every cache entry whose page number is greater than "pgno". 
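/*
** A small demonstration of the flag-toggling idiom used in
** sqlite3PcacheMakeDirty() above: when the "clean" and "dirty" states are
** represented by two mutually exclusive bits, XOR-ing with both bits flips
** a page from one state to the other in a single operation. The flag
** values here are illustrative, not the real PGHDR_* values.
*/
#include <stdio.h>
#define EX_CLEAN 0x001
#define EX_DIRTY 0x002
int main(void){
  unsigned flags = EX_CLEAN;               /* page starts out clean */
  flags ^= (EX_CLEAN|EX_DIRTY);            /* now dirty, clean bit cleared */
  printf("dirty=%d clean=%d\n", !!(flags&EX_DIRTY), !!(flags&EX_CLEAN));
  flags ^= (EX_CLEAN|EX_DIRTY);            /* and back to clean */
  printf("dirty=%d clean=%d\n", !!(flags&EX_DIRTY), !!(flags&EX_CLEAN));
  return 0;                                /* prints: 1 0 then 0 1 */
}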
The @@ -30958,25 +41632,29 @@ if( ALWAYS(p->pgno>pgno) ){ assert( p->flags&PGHDR_DIRTY ); sqlite3PcacheMakeClean(p); } } - if( pgno==0 && pCache->pPage1 ){ - memset(pCache->pPage1->pData, 0, pCache->szPage); - pgno = 1; + if( pgno==0 && pCache->nRefSum ){ + sqlite3_pcache_page *pPage1; + pPage1 = sqlite3GlobalConfig.pcache2.xFetch(pCache->pCache,1,0); + if( ALWAYS(pPage1) ){ /* Page 1 is always available in cache, because + ** pCache->nRefSum>0 */ + memset(pPage1->pBuf, 0, pCache->szPage); + pgno = 1; + } } - sqlite3GlobalConfig.pcache.xTruncate(pCache->pCache, pgno+1); + sqlite3GlobalConfig.pcache2.xTruncate(pCache->pCache, pgno+1); } } /* ** Close a cache. */ SQLITE_PRIVATE void sqlite3PcacheClose(PCache *pCache){ - if( pCache->pCache ){ - sqlite3GlobalConfig.pcache.xDestroy(pCache->pCache); - } + assert( pCache->pCache!=0 ); + sqlite3GlobalConfig.pcache2.xDestroy(pCache->pCache); } /* ** Discard the contents of the cache. */ @@ -31064,14 +41742,17 @@ } return pcacheSortDirtyList(pCache->pDirty); } /* -** Return the total number of referenced pages held by the cache. +** Return the total number of references to all pages held by the cache. +** +** This is not the total number of pages referenced, but the sum of the +** reference count for all pages. */ SQLITE_PRIVATE int sqlite3PcacheRefCount(PCache *pCache){ - return pCache->nRef; + return pCache->nRefSum; } /* ** Return the number of references to the page supplied as an argument. */ @@ -31081,35 +41762,66 @@ /* ** Return the total number of pages in the cache. */ SQLITE_PRIVATE int sqlite3PcachePagecount(PCache *pCache){ - int nPage = 0; - if( pCache->pCache ){ - nPage = sqlite3GlobalConfig.pcache.xPagecount(pCache->pCache); - } - return nPage; + assert( pCache->pCache!=0 ); + return sqlite3GlobalConfig.pcache2.xPagecount(pCache->pCache); } #ifdef SQLITE_TEST /* ** Get the suggested cache-size value. */ SQLITE_PRIVATE int sqlite3PcacheGetCachesize(PCache *pCache){ - return pCache->nMax; + return numberOfCachePages(pCache); } #endif /* ** Set the suggested cache-size value. */ SQLITE_PRIVATE void sqlite3PcacheSetCachesize(PCache *pCache, int mxPage){ - pCache->nMax = mxPage; - if( pCache->pCache ){ - sqlite3GlobalConfig.pcache.xCachesize(pCache->pCache, mxPage); + assert( pCache->pCache!=0 ); + pCache->szCache = mxPage; + sqlite3GlobalConfig.pcache2.xCachesize(pCache->pCache, + numberOfCachePages(pCache)); +} + +/* +** Set the suggested cache-spill value. Make no changes if if the +** argument is zero. Return the effective cache-spill size, which will +** be the larger of the szSpill and szCache. +*/ +SQLITE_PRIVATE int sqlite3PcacheSetSpillsize(PCache *p, int mxPage){ + int res; + assert( p->pCache!=0 ); + if( mxPage ){ + if( mxPage<0 ){ + mxPage = (int)((-1024*(i64)mxPage)/(p->szPage+p->szExtra)); + } + p->szSpill = mxPage; } + res = numberOfCachePages(p); + if( res<p->szSpill ) res = p->szSpill; + return res; } + +/* +** Free up as much memory as possible from the page cache. +*/ +SQLITE_PRIVATE void sqlite3PcacheShrink(PCache *pCache){ + assert( pCache->pCache!=0 ); + sqlite3GlobalConfig.pcache2.xShrink(pCache->pCache); +} + +/* +** Return the size of the header added by this middleware layer +** in the page-cache hierarchy. +*/ +SQLITE_PRIVATE int sqlite3HeaderSizePcache(void){ return ROUND8(sizeof(PgHdr)); } + #if defined(SQLITE_CHECK_PAGES) || defined(SQLITE_DEBUG) /* ** For all dirty pages currently in the cache, invoke the specified ** callback. 
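/*
** A worked example of the spill-size arithmetic in
** sqlite3PcacheSetSpillsize() above, with hypothetical values: a negative
** argument is converted to a page count the same way as a negative
** cache_size, and the effective result is the larger of the spill
** threshold and the configured cache size.
*/
#include <stdio.h>
typedef long long i64;
int main(void){
  int mxPage  = -1000;                     /* e.g. PRAGMA cache_spill = -1000 */
  int szPage  = 4096, szExtra = 88;        /* hypothetical sizes */
  int szCache = 489;                       /* pages from PRAGMA cache_size */
  int szSpill = (int)((-1024*(i64)mxPage)/(szPage+szExtra));
  int res = szCache>szSpill ? szCache : szSpill;
  printf("szSpill=%d effective=%d\n", szSpill, res);  /* 244 and 489 */
  return 0;
}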
This is only used if the SQLITE_CHECK_PAGES macro is @@ -31138,83 +41850,203 @@ ************************************************************************* ** ** This file implements the default page cache implementation (the ** sqlite3_pcache interface). It also contains part of the implementation ** of the SQLITE_CONFIG_PAGECACHE and sqlite3_release_memory() features. -** If the default page cache implementation is overriden, then neither of +** If the default page cache implementation is overridden, then neither of ** these two features are available. +** +** A Page cache line looks like this: +** +** ------------------------------------------------------------- +** | database page content | PgHdr1 | MemPage | PgHdr | +** ------------------------------------------------------------- +** +** The database page content is up front (so that buffer overreads tend to +** flow harmlessly into the PgHdr1, MemPage, and PgHdr extensions). MemPage +** is the extension added by the btree.c module containing information such +** as the database page number and how that database page is used. PgHdr +** is added by the pcache.c layer and contains information used to keep track +** of which pages are "dirty". PgHdr1 is an extension added by this +** module (pcache1.c). The PgHdr1 header is a subclass of sqlite3_pcache_page. +** PgHdr1 contains information needed to look up a page by its page number. +** The superclass sqlite3_pcache_page.pBuf points to the start of the +** database page content and sqlite3_pcache_page.pExtra points to PgHdr. +** +** The size of the extension (MemPage+PgHdr+PgHdr1) can be determined at +** runtime using sqlite3_config(SQLITE_CONFIG_PCACHE_HDRSZ, &size). The +** sizes of the extensions sum to 272 bytes on x64 for 3.8.10, but this +** size can vary according to architecture, compile-time options, and +** SQLite library version number. +** +** If SQLITE_PCACHE_SEPARATE_HEADER is defined, then the extension is obtained +** using a separate memory allocation from the database page content. This +** seeks to overcome the "clownshoe" problem (also called "internal +** fragmentation" in academic literature) of allocating a few bytes more +** than a power of two with the memory allocator rounding up to the next +** power of two, and leaving the rounded-up space unused. +** +** This module tracks pointers to PgHdr1 objects. Only pcache.c communicates +** with this module. Information is passed back and forth as PgHdr1 pointers. +** +** The pcache.c and pager.c modules deal pointers to PgHdr objects. +** The btree.c module deals with pointers to MemPage objects. +** +** SOURCE OF PAGE CACHE MEMORY: +** +** Memory for a page might come from any of three sources: +** +** (1) The general-purpose memory allocator - sqlite3Malloc() +** (2) Global page-cache memory provided using sqlite3_config() with +** SQLITE_CONFIG_PAGECACHE. +** (3) PCache-local bulk allocation. +** +** The third case is a chunk of heap memory (defaulting to 100 pages worth) +** that is allocated when the page cache is created. The size of the local +** bulk allocation can be adjusted using +** +** sqlite3_config(SQLITE_CONFIG_PAGECACHE, (void*)0, 0, N). +** +** If N is positive, then N pages worth of memory are allocated using a single +** sqlite3Malloc() call and that memory is used for the first N pages allocated. +** Or if N is negative, then -1024*N bytes of memory are allocated and used +** for as many pages as can be accomodated. +** +** Only one of (2) or (3) can be used. 
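/*
** A minimal sketch of reserving start-time page-cache memory with
** SQLITE_CONFIG_PAGECACHE, which is source (2) in the list above. The slot
** size and slot count are arbitrary illustrative numbers; in practice the
** slot size should cover the largest database page size plus the header
** overhead reported by sqlite3_config(SQLITE_CONFIG_PCACHE_HDRSZ, &sz).
*/
#include <sqlite3.h>
#include <stdio.h>
static char aPageCacheBuf[200*4352];       /* 200 slots of 4352 bytes each */
int main(void){
  int rc = sqlite3_config(SQLITE_CONFIG_PAGECACHE, aPageCacheBuf, 4352, 200);
  if( rc!=SQLITE_OK ) fprintf(stderr, "config failed: %d\n", rc);
  rc = sqlite3_initialize();
  /* ... open and use databases; page allocations try aPageCacheBuf first ... */
  sqlite3_shutdown();
  return rc;
}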
Once the memory available to (2) or +** (3) is exhausted, subsequent allocations fail over to the general-purpose +** memory allocator (1). +** +** Earlier versions of SQLite used only methods (1) and (2). But experiments +** show that method (3) with N==100 provides about a 5% performance boost for +** common workloads. */ - +/* #include "sqliteInt.h" */ typedef struct PCache1 PCache1; typedef struct PgHdr1 PgHdr1; typedef struct PgFreeslot PgFreeslot; - -/* Pointers to structures of this type are cast and returned as -** opaque sqlite3_pcache* handles -*/ -struct PCache1 { - /* Cache configuration parameters. Page size (szPage) and the purgeable - ** flag (bPurgeable) are set when the cache is created. nMax may be - ** modified at any time by a call to the pcache1CacheSize() method. - ** The global mutex must be held when accessing nMax. - */ - int szPage; /* Size of allocated pages in bytes */ - int bPurgeable; /* True if cache is purgeable */ - unsigned int nMin; /* Minimum number of pages reserved */ - unsigned int nMax; /* Configured "cache_size" value */ - - /* Hash table of all pages. The following variables may only be accessed - ** when the accessor is holding the global mutex (see pcache1EnterMutex() - ** and pcache1LeaveMutex()). - */ - unsigned int nRecyclable; /* Number of pages in the LRU list */ - unsigned int nPage; /* Total number of pages in apHash */ - unsigned int nHash; /* Number of slots in apHash[] */ - PgHdr1 **apHash; /* Hash table for fast lookup by key */ - - unsigned int iMaxKey; /* Largest key seen since xTruncate() */ -}; +typedef struct PGroup PGroup; /* ** Each cache entry is represented by an instance of the following -** structure. A buffer of PgHdr1.pCache->szPage bytes is allocated -** directly before this structure in memory (see the PGHDR1_TO_PAGE() -** macro below). +** structure. Unless SQLITE_PCACHE_SEPARATE_HEADER is defined, a buffer of +** PgHdr1.pCache->szPage bytes is allocated directly before this structure +** in memory. */ struct PgHdr1 { + sqlite3_pcache_page page; /* Base class. Must be first. pBuf & pExtra */ unsigned int iKey; /* Key value (page number) */ + u8 isPinned; /* Page in use, not on the LRU list */ + u8 isBulkLocal; /* This page from bulk local storage */ + u8 isAnchor; /* This is the PGroup.lru element */ PgHdr1 *pNext; /* Next in hash table chain */ PCache1 *pCache; /* Cache that currently owns this page */ PgHdr1 *pLruNext; /* Next in LRU list of unpinned pages */ PgHdr1 *pLruPrev; /* Previous in LRU list of unpinned pages */ }; +/* Each page cache (or PCache) belongs to a PGroup. A PGroup is a set +** of one or more PCaches that are able to recycle each other's unpinned +** pages when they are under memory pressure. A PGroup is an instance of +** the following object. +** +** This page cache implementation works in one of two modes: +** +** (1) Every PCache is the sole member of its own PGroup. There is +** one PGroup per PCache. +** +** (2) There is a single global PGroup that all PCaches are a member +** of. +** +** Mode 1 uses more memory (since PCache instances are not able to rob +** unused pages from other PCaches) but it also operates without a mutex, +** and is therefore often faster. Mode 2 requires a mutex in order to be +** threadsafe, but recycles pages more efficiently. +** +** For mode (1), PGroup.mutex is NULL. For mode (2) there is only a single +** PGroup which is the pcache1.grp global variable and its mutex is +** SQLITE_MUTEX_STATIC_LRU. 
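/*
** A standalone sketch of the "anchor" (sentinel) technique used by the
** PGroup.lru field declared below: the list is circular and the anchor is
** always on it, so linking and unlinking need no NULL checks, and an empty
** list is the state where the anchor points at itself. The element type
** here is a stand-in for PgHdr1.
*/
#include <stdio.h>
typedef struct ExElem ExElem;
struct ExElem { ExElem *pPrev, *pNext; int isAnchor; int id; };
static void exListInit(ExElem *pAnchor){
  pAnchor->pPrev = pAnchor->pNext = pAnchor;   /* empty: anchor points at itself */
  pAnchor->isAnchor = 1;
}
static void exPushFront(ExElem *pAnchor, ExElem *p){
  p->pNext = pAnchor->pNext;                   /* insert just after the anchor */
  p->pPrev = pAnchor;
  pAnchor->pNext->pPrev = p;
  pAnchor->pNext = p;
}
int main(void){
  ExElem lru = {0,0,0,0}, a = {0,0,0,1}, b = {0,0,0,2};
  exListInit(&lru);
  exPushFront(&lru, &a);
  exPushFront(&lru, &b);
  /* lru.pPrev is the least-recently-used end; eviction walks from there and
  ** stops at the anchor (isAnchor!=0), as in pcache1EnforceMaxPage(). */
  printf("LRU id=%d empty=%d\n", lru.pPrev->id, lru.pPrev==&lru);  /* 1 0 */
  return 0;
}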
+*/ +struct PGroup { + sqlite3_mutex *mutex; /* MUTEX_STATIC_LRU or NULL */ + unsigned int nMaxPage; /* Sum of nMax for purgeable caches */ + unsigned int nMinPage; /* Sum of nMin for purgeable caches */ + unsigned int mxPinned; /* nMaxpage + 10 - nMinPage */ + unsigned int nCurrentPage; /* Number of purgeable pages allocated */ + PgHdr1 lru; /* The beginning and end of the LRU list */ +}; + +/* Each page cache is an instance of the following object. Every +** open database file (including each in-memory database and each +** temporary or transient database) has a single page cache which +** is an instance of this object. +** +** Pointers to structures of this type are cast and returned as +** opaque sqlite3_pcache* handles. +*/ +struct PCache1 { + /* Cache configuration parameters. Page size (szPage) and the purgeable + ** flag (bPurgeable) are set when the cache is created. nMax may be + ** modified at any time by a call to the pcache1Cachesize() method. + ** The PGroup mutex must be held when accessing nMax. + */ + PGroup *pGroup; /* PGroup this cache belongs to */ + int szPage; /* Size of database content section */ + int szExtra; /* sizeof(MemPage)+sizeof(PgHdr) */ + int szAlloc; /* Total size of one pcache line */ + int bPurgeable; /* True if cache is purgeable */ + unsigned int nMin; /* Minimum number of pages reserved */ + unsigned int nMax; /* Configured "cache_size" value */ + unsigned int n90pct; /* nMax*9/10 */ + unsigned int iMaxKey; /* Largest key seen since xTruncate() */ + + /* Hash table of all pages. The following variables may only be accessed + ** when the accessor is holding the PGroup mutex. + */ + unsigned int nRecyclable; /* Number of pages in the LRU list */ + unsigned int nPage; /* Total number of pages in apHash */ + unsigned int nHash; /* Number of slots in apHash[] */ + PgHdr1 **apHash; /* Hash table for fast lookup by key */ + PgHdr1 *pFree; /* List of unused pcache-local pages */ + void *pBulk; /* Bulk memory used by pcache-local */ +}; + /* -** Free slots in the allocator used to divide up the buffer provided using -** the SQLITE_CONFIG_PAGECACHE mechanism. +** Free slots in the allocator used to divide up the global page cache +** buffer provided using the SQLITE_CONFIG_PAGECACHE mechanism. */ struct PgFreeslot { PgFreeslot *pNext; /* Next free slot */ }; /* ** Global data used by this cache. */ static SQLITE_WSD struct PCacheGlobal { - sqlite3_mutex *mutex; /* static mutex MUTEX_STATIC_LRU */ - - int nMaxPage; /* Sum of nMaxPage for purgeable caches */ - int nMinPage; /* Sum of nMinPage for purgeable caches */ - int nCurrentPage; /* Number of purgeable pages allocated */ - PgHdr1 *pLruHead, *pLruTail; /* LRU list of unpinned pages */ - - /* Variables related to SQLITE_CONFIG_PAGECACHE settings. */ - int szSlot; /* Size of each free slot */ - void *pStart, *pEnd; /* Bounds of pagecache malloc range */ - PgFreeslot *pFree; /* Free page blocks */ - int isInit; /* True if initialized */ + PGroup grp; /* The global PGroup for mode (2) */ + + /* Variables related to SQLITE_CONFIG_PAGECACHE settings. The + ** szSlot, nSlot, pStart, pEnd, nReserve, and isInit values are all + ** fixed at sqlite3_initialize() time and do not require mutex protection. + ** The nFreeSlot and pFree values do require mutex protection. 
+ */ + int isInit; /* True if initialized */ + int separateCache; /* Use a new PGroup for each PCache */ + int nInitPage; /* Initial bulk allocation size */ + int szSlot; /* Size of each free slot */ + int nSlot; /* The number of pcache slots */ + int nReserve; /* Try to keep nFreeSlot above this */ + void *pStart, *pEnd; /* Bounds of global page cache memory */ + /* Above requires no mutex. Use mutex below for variable that follow. */ + sqlite3_mutex *mutex; /* Mutex for accessing the following: */ + PgFreeslot *pFree; /* Free page blocks */ + int nFreeSlot; /* Number of unused pcache slots */ + /* The following value requires a mutex to change. We skip the mutex on + ** reading because (1) most platforms read a 32-bit integer atomically and + ** (2) even if an incorrect value is read, no great harm is done since this + ** is really just an optimization. */ + int bUnderPressure; /* True if low on PAGECACHE memory */ } pcache1_g; /* ** All code in this file should access the global structure above via the ** alias "pcache1". This ensures that the WSD emulation is used when @@ -31221,197 +42053,326 @@ ** compiling for systems that do not support real WSD. */ #define pcache1 (GLOBAL(struct PCacheGlobal, pcache1_g)) /* -** When a PgHdr1 structure is allocated, the associated PCache1.szPage -** bytes of data are located directly before it in memory (i.e. the total -** size of the allocation is sizeof(PgHdr1)+PCache1.szPage byte). The -** PGHDR1_TO_PAGE() macro takes a pointer to a PgHdr1 structure as -** an argument and returns a pointer to the associated block of szPage -** bytes. The PAGE_TO_PGHDR1() macro does the opposite: its argument is -** a pointer to a block of szPage bytes of data and the return value is -** a pointer to the associated PgHdr1 structure. -** -** assert( PGHDR1_TO_PAGE(PAGE_TO_PGHDR1(pCache, X))==X ); -*/ -#define PGHDR1_TO_PAGE(p) (void*)(((char*)p) - p->pCache->szPage) -#define PAGE_TO_PGHDR1(c, p) (PgHdr1*)(((char*)p) + c->szPage) - -/* -** Macros to enter and leave the global LRU mutex. -*/ -#define pcache1EnterMutex() sqlite3_mutex_enter(pcache1.mutex) -#define pcache1LeaveMutex() sqlite3_mutex_leave(pcache1.mutex) +** Macros to enter and leave the PCache LRU mutex. +*/ +#if !defined(SQLITE_ENABLE_MEMORY_MANAGEMENT) || SQLITE_THREADSAFE==0 +# define pcache1EnterMutex(X) assert((X)->mutex==0) +# define pcache1LeaveMutex(X) assert((X)->mutex==0) +# define PCACHE1_MIGHT_USE_GROUP_MUTEX 0 +#else +# define pcache1EnterMutex(X) sqlite3_mutex_enter((X)->mutex) +# define pcache1LeaveMutex(X) sqlite3_mutex_leave((X)->mutex) +# define PCACHE1_MIGHT_USE_GROUP_MUTEX 1 +#endif /******************************************************************************/ /******** Page Allocation/SQLITE_CONFIG_PCACHE Related Functions **************/ + /* ** This function is called during initialization if a static buffer is ** supplied to use for the page-cache by passing the SQLITE_CONFIG_PAGECACHE ** verb to sqlite3_config(). Parameter pBuf points to an allocation large ** enough to contain 'n' buffers of 'sz' bytes each. +** +** This routine is called from sqlite3_initialize() and so it is guaranteed +** to be serialized already. There is no need for further mutexing. */ SQLITE_PRIVATE void sqlite3PCacheBufferSetup(void *pBuf, int sz, int n){ if( pcache1.isInit ){ PgFreeslot *p; + if( pBuf==0 ) sz = n = 0; sz = ROUNDDOWN8(sz); pcache1.szSlot = sz; + pcache1.nSlot = pcache1.nFreeSlot = n; + pcache1.nReserve = n>90 ? 
10 : (n/10 + 1); pcache1.pStart = pBuf; pcache1.pFree = 0; + pcache1.bUnderPressure = 0; while( n-- ){ p = (PgFreeslot*)pBuf; p->pNext = pcache1.pFree; pcache1.pFree = p; pBuf = (void*)&((char*)pBuf)[sz]; } pcache1.pEnd = pBuf; } } + +/* +** Try to initialize the pCache->pFree and pCache->pBulk fields. Return +** true if pCache->pFree ends up containing one or more free pages. +*/ +static int pcache1InitBulk(PCache1 *pCache){ + i64 szBulk; + char *zBulk; + if( pcache1.nInitPage==0 ) return 0; + /* Do not bother with a bulk allocation if the cache size very small */ + if( pCache->nMax<3 ) return 0; + sqlite3BeginBenignMalloc(); + if( pcache1.nInitPage>0 ){ + szBulk = pCache->szAlloc * (i64)pcache1.nInitPage; + }else{ + szBulk = -1024 * (i64)pcache1.nInitPage; + } + if( szBulk > pCache->szAlloc*(i64)pCache->nMax ){ + szBulk = pCache->szAlloc*pCache->nMax; + } + zBulk = pCache->pBulk = sqlite3Malloc( szBulk ); + sqlite3EndBenignMalloc(); + if( zBulk ){ + int nBulk = sqlite3MallocSize(zBulk)/pCache->szAlloc; + int i; + for(i=0; i<nBulk; i++){ + PgHdr1 *pX = (PgHdr1*)&zBulk[pCache->szPage]; + pX->page.pBuf = zBulk; + pX->page.pExtra = &pX[1]; + pX->isBulkLocal = 1; + pX->isAnchor = 0; + pX->pNext = pCache->pFree; + pCache->pFree = pX; + zBulk += pCache->szAlloc; + } + } + return pCache->pFree!=0; +} /* ** Malloc function used within this file to allocate space from the buffer ** configured using sqlite3_config(SQLITE_CONFIG_PAGECACHE) option. If no ** such buffer exists or there is no space left in it, this function falls ** back to sqlite3Malloc(). +** +** Multiple threads can run this routine at the same time. Global variables +** in pcache1 need to be protected via mutex. */ static void *pcache1Alloc(int nByte){ - void *p; - assert( sqlite3_mutex_held(pcache1.mutex) ); - if( nByte<=pcache1.szSlot && pcache1.pFree ){ - assert( pcache1.isInit ); + void *p = 0; + assert( sqlite3_mutex_notheld(pcache1.grp.mutex) ); + if( nByte<=pcache1.szSlot ){ + sqlite3_mutex_enter(pcache1.mutex); p = (PgHdr1 *)pcache1.pFree; - pcache1.pFree = pcache1.pFree->pNext; - sqlite3StatusSet(SQLITE_STATUS_PAGECACHE_SIZE, nByte); - sqlite3StatusAdd(SQLITE_STATUS_PAGECACHE_USED, 1); - }else{ - - /* Allocate a new buffer using sqlite3Malloc. Before doing so, exit the - ** global pcache mutex and unlock the pager-cache object pCache. This is - ** so that if the attempt to allocate a new buffer causes the the - ** configured soft-heap-limit to be breached, it will be possible to - ** reclaim memory from this pager-cache. + if( p ){ + pcache1.pFree = pcache1.pFree->pNext; + pcache1.nFreeSlot--; + pcache1.bUnderPressure = pcache1.nFreeSlot<pcache1.nReserve; + assert( pcache1.nFreeSlot>=0 ); + sqlite3StatusHighwater(SQLITE_STATUS_PAGECACHE_SIZE, nByte); + sqlite3StatusUp(SQLITE_STATUS_PAGECACHE_USED, 1); + } + sqlite3_mutex_leave(pcache1.mutex); + } + if( p==0 ){ + /* Memory is not available in the SQLITE_CONFIG_PAGECACHE pool. Get + ** it from sqlite3Malloc instead. */ - pcache1LeaveMutex(); p = sqlite3Malloc(nByte); - pcache1EnterMutex(); +#ifndef SQLITE_DISABLE_PAGECACHE_OVERFLOW_STATS if( p ){ int sz = sqlite3MallocSize(p); - sqlite3StatusAdd(SQLITE_STATUS_PAGECACHE_OVERFLOW, sz); + sqlite3_mutex_enter(pcache1.mutex); + sqlite3StatusHighwater(SQLITE_STATUS_PAGECACHE_SIZE, nByte); + sqlite3StatusUp(SQLITE_STATUS_PAGECACHE_OVERFLOW, sz); + sqlite3_mutex_leave(pcache1.mutex); } +#endif sqlite3MemdebugSetType(p, MEMTYPE_PCACHE); } return p; } /* ** Free an allocated buffer obtained from pcache1Alloc(). 
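/*
** A worked example of the bulk-allocation sizing in pcache1InitBulk()
** above, using hypothetical numbers: a full cache line of
** 4096 (page) + 136 (extra) + 72 (rounded header) = 4304 bytes, the default
** of 100 initial pages, and a configured maximum of 2000 pages. The single
** sqlite3Malloc() request is capped so it never exceeds the whole cache.
*/
#include <stdio.h>
typedef long long i64;
int main(void){
  int szAlloc   = 4304;     /* hypothetical bytes per cache line */
  int nInitPage = 100;      /* bulk request, in pages */
  int nMax      = 2000;     /* configured cache_size, in pages */
  i64 szBulk = (i64)szAlloc*nInitPage;                 /* 430400 bytes */
  if( szBulk > (i64)szAlloc*nMax ) szBulk = (i64)szAlloc*nMax;
  printf("one bulk allocation of %lld bytes = %lld cache lines\n",
         szBulk, szBulk/szAlloc);
  return 0;
}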
*/ static void pcache1Free(void *p){ - assert( sqlite3_mutex_held(pcache1.mutex) ); + int nFreed = 0; if( p==0 ) return; - if( p>=pcache1.pStart && p<pcache1.pEnd ){ + if( SQLITE_WITHIN(p, pcache1.pStart, pcache1.pEnd) ){ PgFreeslot *pSlot; - sqlite3StatusAdd(SQLITE_STATUS_PAGECACHE_USED, -1); + sqlite3_mutex_enter(pcache1.mutex); + sqlite3StatusDown(SQLITE_STATUS_PAGECACHE_USED, 1); pSlot = (PgFreeslot*)p; pSlot->pNext = pcache1.pFree; pcache1.pFree = pSlot; + pcache1.nFreeSlot++; + pcache1.bUnderPressure = pcache1.nFreeSlot<pcache1.nReserve; + assert( pcache1.nFreeSlot<=pcache1.nSlot ); + sqlite3_mutex_leave(pcache1.mutex); + }else{ + assert( sqlite3MemdebugHasType(p, MEMTYPE_PCACHE) ); + sqlite3MemdebugSetType(p, MEMTYPE_HEAP); +#ifndef SQLITE_DISABLE_PAGECACHE_OVERFLOW_STATS + nFreed = sqlite3MallocSize(p); + sqlite3_mutex_enter(pcache1.mutex); + sqlite3StatusDown(SQLITE_STATUS_PAGECACHE_OVERFLOW, nFreed); + sqlite3_mutex_leave(pcache1.mutex); +#endif + sqlite3_free(p); + } +} + +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT +/* +** Return the size of a pcache allocation +*/ +static int pcache1MemSize(void *p){ + if( p>=pcache1.pStart && p<pcache1.pEnd ){ + return pcache1.szSlot; }else{ int iSize; assert( sqlite3MemdebugHasType(p, MEMTYPE_PCACHE) ); sqlite3MemdebugSetType(p, MEMTYPE_HEAP); iSize = sqlite3MallocSize(p); - sqlite3StatusAdd(SQLITE_STATUS_PAGECACHE_OVERFLOW, -iSize); - sqlite3_free(p); + sqlite3MemdebugSetType(p, MEMTYPE_PCACHE); + return iSize; } } +#endif /* SQLITE_ENABLE_MEMORY_MANAGEMENT */ /* ** Allocate a new page object initially associated with cache pCache. */ -static PgHdr1 *pcache1AllocPage(PCache1 *pCache){ - int nByte = sizeof(PgHdr1) + pCache->szPage; - void *pPg = pcache1Alloc(nByte); - PgHdr1 *p; - if( pPg ){ - p = PAGE_TO_PGHDR1(pCache, pPg); - if( pCache->bPurgeable ){ - pcache1.nCurrentPage++; - } - }else{ - p = 0; +static PgHdr1 *pcache1AllocPage(PCache1 *pCache, int benignMalloc){ + PgHdr1 *p = 0; + void *pPg; + + assert( sqlite3_mutex_held(pCache->pGroup->mutex) ); + if( pCache->pFree || (pCache->nPage==0 && pcache1InitBulk(pCache)) ){ + p = pCache->pFree; + pCache->pFree = p->pNext; + p->pNext = 0; + }else{ +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT + /* The group mutex must be released before pcache1Alloc() is called. This + ** is because it might call sqlite3_release_memory(), which assumes that + ** this mutex is not held. */ + assert( pcache1.separateCache==0 ); + assert( pCache->pGroup==&pcache1.grp ); + pcache1LeaveMutex(pCache->pGroup); +#endif + if( benignMalloc ){ sqlite3BeginBenignMalloc(); } +#ifdef SQLITE_PCACHE_SEPARATE_HEADER + pPg = pcache1Alloc(pCache->szPage); + p = sqlite3Malloc(sizeof(PgHdr1) + pCache->szExtra); + if( !pPg || !p ){ + pcache1Free(pPg); + sqlite3_free(p); + pPg = 0; + } +#else + pPg = pcache1Alloc(pCache->szAlloc); + p = (PgHdr1 *)&((u8 *)pPg)[pCache->szPage]; +#endif + if( benignMalloc ){ sqlite3EndBenignMalloc(); } +#ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT + pcache1EnterMutex(pCache->pGroup); +#endif + if( pPg==0 ) return 0; + p->page.pBuf = pPg; + p->page.pExtra = &p[1]; + p->isBulkLocal = 0; + p->isAnchor = 0; + } + if( pCache->bPurgeable ){ + pCache->pGroup->nCurrentPage++; } return p; } /* ** Free a page object allocated by pcache1AllocPage(). -** -** The pointer is allowed to be NULL, which is prudent. But it turns out -** that the current implementation happens to never call this routine -** with a NULL pointer, so we mark the NULL test with ALWAYS(). 
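/*
** A minimal sketch of the single-allocation layout used by
** pcache1AllocPage() above when SQLITE_PCACHE_SEPARATE_HEADER is not
** defined: one buffer holds the page content followed by the header, so
** the header is located by stepping szPage bytes into the allocation and
** the extra space follows the header. The header type and sizes here are
** stand-ins for PgHdr1 and the real page geometry.
*/
#include <stdio.h>
#include <stdlib.h>
typedef struct ExPgHdr { void *pBuf; void *pExtra; } ExPgHdr;
int main(void){
  int szPage = 4096, szExtra = 136;
  int szAlloc = szPage + (int)sizeof(ExPgHdr) + szExtra;  /* one cache line */
  unsigned char *pPg = malloc(szAlloc);
  if( pPg ){
    ExPgHdr *p = (ExPgHdr*)&pPg[szPage];   /* header sits just past the content */
    p->pBuf = pPg;                         /* points back at the page content */
    p->pExtra = (void*)&p[1];              /* extra space follows the header */
    printf("content=%p header=%p extra=%p\n", (void*)pPg, (void*)p, p->pExtra);
    free(pPg);
  }
  return 0;
}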
*/ static void pcache1FreePage(PgHdr1 *p){ - if( ALWAYS(p) ){ - if( p->pCache->bPurgeable ){ - pcache1.nCurrentPage--; - } - pcache1Free(PGHDR1_TO_PAGE(p)); + PCache1 *pCache; + assert( p!=0 ); + pCache = p->pCache; + assert( sqlite3_mutex_held(p->pCache->pGroup->mutex) ); + if( p->isBulkLocal ){ + p->pNext = pCache->pFree; + pCache->pFree = p; + }else{ + pcache1Free(p->page.pBuf); +#ifdef SQLITE_PCACHE_SEPARATE_HEADER + sqlite3_free(p); +#endif + } + if( pCache->bPurgeable ){ + pCache->pGroup->nCurrentPage--; } } /* ** Malloc function used by SQLite to obtain space from the buffer configured ** using sqlite3_config(SQLITE_CONFIG_PAGECACHE) option. If no such buffer ** exists, this function falls back to sqlite3Malloc(). */ SQLITE_PRIVATE void *sqlite3PageMalloc(int sz){ - void *p; - pcache1EnterMutex(); - p = pcache1Alloc(sz); - pcache1LeaveMutex(); - return p; + return pcache1Alloc(sz); } /* ** Free an allocated buffer obtained from sqlite3PageMalloc(). */ SQLITE_PRIVATE void sqlite3PageFree(void *p){ - pcache1EnterMutex(); pcache1Free(p); - pcache1LeaveMutex(); +} + + +/* +** Return true if it desirable to avoid allocating a new page cache +** entry. +** +** If memory was allocated specifically to the page cache using +** SQLITE_CONFIG_PAGECACHE but that memory has all been used, then +** it is desirable to avoid allocating a new page cache entry because +** presumably SQLITE_CONFIG_PAGECACHE was suppose to be sufficient +** for all page cache needs and we should not need to spill the +** allocation onto the heap. +** +** Or, the heap is used for all page cache memory but the heap is +** under memory pressure, then again it is desirable to avoid +** allocating a new page cache entry in order to avoid stressing +** the heap even further. +*/ +static int pcache1UnderMemoryPressure(PCache1 *pCache){ + if( pcache1.nSlot && (pCache->szPage+pCache->szExtra)<=pcache1.szSlot ){ + return pcache1.bUnderPressure; + }else{ + return sqlite3HeapNearlyFull(); + } } /******************************************************************************/ /******** General Implementation Functions ************************************/ /* ** This function is used to resize the hash table used by the cache passed ** as the first argument. ** -** The global mutex must be held when this function is called. +** The PCache mutex must be held when this function is called. */ -static int pcache1ResizeHash(PCache1 *p){ +static void pcache1ResizeHash(PCache1 *p){ PgHdr1 **apNew; unsigned int nNew; unsigned int i; - assert( sqlite3_mutex_held(pcache1.mutex) ); + assert( sqlite3_mutex_held(p->pGroup->mutex) ); nNew = p->nHash*2; if( nNew<256 ){ nNew = 256; } - pcache1LeaveMutex(); + pcache1LeaveMutex(p->pGroup); if( p->nHash ){ sqlite3BeginBenignMalloc(); } - apNew = (PgHdr1 **)sqlite3_malloc(sizeof(PgHdr1 *)*nNew); + apNew = (PgHdr1 **)sqlite3MallocZero(sizeof(PgHdr1 *)*nNew); if( p->nHash ){ sqlite3EndBenignMalloc(); } - pcache1EnterMutex(); + pcache1EnterMutex(p->pGroup); if( apNew ){ - memset(apNew, 0, sizeof(PgHdr1 *)*nNew); for(i=0; i<p->nHash; i++){ PgHdr1 *pPage; PgHdr1 *pNext = p->apHash[i]; while( (pPage = pNext)!=0 ){ unsigned int h = pPage->iKey % nNew; @@ -31422,97 +42383,105 @@ } sqlite3_free(p->apHash); p->apHash = apNew; p->nHash = nNew; } - - return (p->apHash ? SQLITE_OK : SQLITE_NOMEM); } /* ** This function is used internally to remove the page pPage from the -** global LRU list, if is part of it. If pPage is not part of the global +** PGroup LRU list, if is part of it. 
If pPage is not part of the PGroup ** LRU list, then this function is a no-op. ** -** The global mutex must be held when this function is called. +** The PGroup mutex must be held when this function is called. */ -static void pcache1PinPage(PgHdr1 *pPage){ - assert( sqlite3_mutex_held(pcache1.mutex) ); - if( pPage && (pPage->pLruNext || pPage==pcache1.pLruTail) ){ - if( pPage->pLruPrev ){ - pPage->pLruPrev->pLruNext = pPage->pLruNext; - } - if( pPage->pLruNext ){ - pPage->pLruNext->pLruPrev = pPage->pLruPrev; - } - if( pcache1.pLruHead==pPage ){ - pcache1.pLruHead = pPage->pLruNext; - } - if( pcache1.pLruTail==pPage ){ - pcache1.pLruTail = pPage->pLruPrev; - } - pPage->pLruNext = 0; - pPage->pLruPrev = 0; - pPage->pCache->nRecyclable--; - } +static PgHdr1 *pcache1PinPage(PgHdr1 *pPage){ + PCache1 *pCache; + + assert( pPage!=0 ); + assert( pPage->isPinned==0 ); + pCache = pPage->pCache; + assert( pPage->pLruNext ); + assert( pPage->pLruPrev ); + assert( sqlite3_mutex_held(pCache->pGroup->mutex) ); + pPage->pLruPrev->pLruNext = pPage->pLruNext; + pPage->pLruNext->pLruPrev = pPage->pLruPrev; + pPage->pLruNext = 0; + pPage->pLruPrev = 0; + pPage->isPinned = 1; + assert( pPage->isAnchor==0 ); + assert( pCache->pGroup->lru.isAnchor==1 ); + pCache->nRecyclable--; + return pPage; } /* ** Remove the page supplied as an argument from the hash table ** (PCache1.apHash structure) that it is currently stored in. +** Also free the page if freePage is true. ** -** The global mutex must be held when this function is called. +** The PGroup mutex must be held when this function is called. */ -static void pcache1RemoveFromHash(PgHdr1 *pPage){ +static void pcache1RemoveFromHash(PgHdr1 *pPage, int freeFlag){ unsigned int h; PCache1 *pCache = pPage->pCache; PgHdr1 **pp; + assert( sqlite3_mutex_held(pCache->pGroup->mutex) ); h = pPage->iKey % pCache->nHash; for(pp=&pCache->apHash[h]; (*pp)!=pPage; pp=&(*pp)->pNext); *pp = (*pp)->pNext; pCache->nPage--; + if( freeFlag ) pcache1FreePage(pPage); } /* -** If there are currently more than pcache.nMaxPage pages allocated, try -** to recycle pages to reduce the number allocated to pcache.nMaxPage. +** If there are currently more than nMaxPage pages allocated, try +** to recycle pages to reduce the number allocated to nMaxPage. */ -static void pcache1EnforceMaxPage(void){ - assert( sqlite3_mutex_held(pcache1.mutex) ); - while( pcache1.nCurrentPage>pcache1.nMaxPage && pcache1.pLruTail ){ - PgHdr1 *p = pcache1.pLruTail; +static void pcache1EnforceMaxPage(PCache1 *pCache){ + PGroup *pGroup = pCache->pGroup; + PgHdr1 *p; + assert( sqlite3_mutex_held(pGroup->mutex) ); + while( pGroup->nCurrentPage>pGroup->nMaxPage + && (p=pGroup->lru.pLruPrev)->isAnchor==0 + ){ + assert( p->pCache->pGroup==pGroup ); + assert( p->isPinned==0 ); pcache1PinPage(p); - pcache1RemoveFromHash(p); - pcache1FreePage(p); + pcache1RemoveFromHash(p, 1); + } + if( pCache->nPage==0 && pCache->pBulk ){ + sqlite3_free(pCache->pBulk); + pCache->pBulk = pCache->pFree = 0; } } /* ** Discard all pages from cache pCache with a page number (key value) ** greater than or equal to iLimit. Any pinned pages that meet this ** criteria are unpinned before they are discarded. ** -** The global mutex must be held when this function is called. +** The PCache mutex must be held when this function is called. 
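/*
** A standalone sketch of the pointer-to-pointer unlink idiom used by
** pcache1RemoveFromHash() above: the loop advances a pointer to the link
** that refers to the current node, so removing the head of a hash chain
** needs no special case. The node type is a stand-in for PgHdr1.
*/
#include <stdio.h>
typedef struct ExHashNode ExHashNode;
struct ExHashNode { int key; ExHashNode *pNext; };
static void exUnlink(ExHashNode **pHead, ExHashNode *pTarget){
  ExHashNode **pp;
  for(pp=pHead; (*pp)!=pTarget; pp=&(*pp)->pNext){}  /* find the link to pTarget */
  *pp = (*pp)->pNext;                                /* splice it out */
}
int main(void){
  ExHashNode c = {3,0}, b = {2,&c}, a = {1,&b};
  ExHashNode *pHead = &a;
  exUnlink(&pHead, &b);
  printf("%d -> %d\n", pHead->key, pHead->pNext->key);  /* prints: 1 -> 3 */
  return 0;
}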
*/ static void pcache1TruncateUnsafe( - PCache1 *pCache, - unsigned int iLimit + PCache1 *pCache, /* The cache to truncate */ + unsigned int iLimit /* Drop pages with this pgno or larger */ ){ - TESTONLY( unsigned int nPage = 0; ) /* Used to assert pCache->nPage is correct */ + TESTONLY( unsigned int nPage = 0; ) /* To assert pCache->nPage is correct */ unsigned int h; - assert( sqlite3_mutex_held(pcache1.mutex) ); + assert( sqlite3_mutex_held(pCache->pGroup->mutex) ); for(h=0; h<pCache->nHash; h++){ PgHdr1 **pp = &pCache->apHash[h]; PgHdr1 *pPage; while( (pPage = *pp)!=0 ){ if( pPage->iKey>=iLimit ){ pCache->nPage--; *pp = pPage->pNext; - pcache1PinPage(pPage); + if( !pPage->isPinned ) pcache1PinPage(pPage); pcache1FreePage(pPage); }else{ pp = &pPage->pNext; TESTONLY( nPage++; ) } @@ -31529,13 +42498,50 @@ */ static int pcache1Init(void *NotUsed){ UNUSED_PARAMETER(NotUsed); assert( pcache1.isInit==0 ); memset(&pcache1, 0, sizeof(pcache1)); + + + /* + ** The pcache1.separateCache variable is true if each PCache has its own + ** private PGroup (mode-1). pcache1.separateCache is false if the single + ** PGroup in pcache1.grp is used for all page caches (mode-2). + ** + ** * Always use a unified cache (mode-2) if ENABLE_MEMORY_MANAGEMENT + ** + ** * Use a unified cache in single-threaded applications that have + ** configured a start-time buffer for use as page-cache memory using + ** sqlite3_config(SQLITE_CONFIG_PAGECACHE, pBuf, sz, N) with non-NULL + ** pBuf argument. + ** + ** * Otherwise use separate caches (mode-1) + */ +#if defined(SQLITE_ENABLE_MEMORY_MANAGEMENT) + pcache1.separateCache = 0; +#elif SQLITE_THREADSAFE + pcache1.separateCache = sqlite3GlobalConfig.pPage==0 + || sqlite3GlobalConfig.bCoreMutex>0; +#else + pcache1.separateCache = sqlite3GlobalConfig.pPage==0; +#endif + +#if SQLITE_THREADSAFE if( sqlite3GlobalConfig.bCoreMutex ){ - pcache1.mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_LRU); + pcache1.grp.mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_LRU); + pcache1.mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_PMEM); } +#endif + if( pcache1.separateCache + && sqlite3GlobalConfig.nPage!=0 + && sqlite3GlobalConfig.pPage==0 + ){ + pcache1.nInitPage = sqlite3GlobalConfig.nPage; + }else{ + pcache1.nInitPage = 0; + } + pcache1.grp.mxPinned = 10; pcache1.isInit = 1; return SQLITE_OK; } /* @@ -31547,28 +42553,55 @@ UNUSED_PARAMETER(NotUsed); assert( pcache1.isInit!=0 ); memset(&pcache1, 0, sizeof(pcache1)); } +/* forward declaration */ +static void pcache1Destroy(sqlite3_pcache *p); + /* ** Implementation of the sqlite3_pcache.xCreate method. ** ** Allocate a new cache. 
*/ -static sqlite3_pcache *pcache1Create(int szPage, int bPurgeable){ - PCache1 *pCache; +static sqlite3_pcache *pcache1Create(int szPage, int szExtra, int bPurgeable){ + PCache1 *pCache; /* The newly created page cache */ + PGroup *pGroup; /* The group the new page cache will belong to */ + int sz; /* Bytes of memory required to allocate the new cache */ - pCache = (PCache1 *)sqlite3_malloc(sizeof(PCache1)); + assert( (szPage & (szPage-1))==0 && szPage>=512 && szPage<=65536 ); + assert( szExtra < 300 ); + + sz = sizeof(PCache1) + sizeof(PGroup)*pcache1.separateCache; + pCache = (PCache1 *)sqlite3MallocZero(sz); if( pCache ){ - memset(pCache, 0, sizeof(PCache1)); + if( pcache1.separateCache ){ + pGroup = (PGroup*)&pCache[1]; + pGroup->mxPinned = 10; + }else{ + pGroup = &pcache1.grp; + } + if( pGroup->lru.isAnchor==0 ){ + pGroup->lru.isAnchor = 1; + pGroup->lru.pLruPrev = pGroup->lru.pLruNext = &pGroup->lru; + } + pCache->pGroup = pGroup; pCache->szPage = szPage; + pCache->szExtra = szExtra; + pCache->szAlloc = szPage + szExtra + ROUND8(sizeof(PgHdr1)); pCache->bPurgeable = (bPurgeable ? 1 : 0); + pcache1EnterMutex(pGroup); + pcache1ResizeHash(pCache); if( bPurgeable ){ pCache->nMin = 10; - pcache1EnterMutex(); - pcache1.nMinPage += pCache->nMin; - pcache1LeaveMutex(); + pGroup->nMinPage += pCache->nMin; + pGroup->mxPinned = pGroup->nMaxPage + 10 - pGroup->nMinPage; + } + pcache1LeaveMutex(pGroup); + if( pCache->nHash==0 ){ + pcache1Destroy((sqlite3_pcache*)pCache); + pCache = 0; } } return (sqlite3_pcache *)pCache; } @@ -31578,29 +42611,130 @@ ** Configure the cache_size limit for a cache. */ static void pcache1Cachesize(sqlite3_pcache *p, int nMax){ PCache1 *pCache = (PCache1 *)p; if( pCache->bPurgeable ){ - pcache1EnterMutex(); - pcache1.nMaxPage += (nMax - pCache->nMax); + PGroup *pGroup = pCache->pGroup; + pcache1EnterMutex(pGroup); + pGroup->nMaxPage += (nMax - pCache->nMax); + pGroup->mxPinned = pGroup->nMaxPage + 10 - pGroup->nMinPage; pCache->nMax = nMax; - pcache1EnforceMaxPage(); - pcache1LeaveMutex(); + pCache->n90pct = pCache->nMax*9/10; + pcache1EnforceMaxPage(pCache); + pcache1LeaveMutex(pGroup); + } +} + +/* +** Implementation of the sqlite3_pcache.xShrink method. +** +** Free up as much memory as possible. +*/ +static void pcache1Shrink(sqlite3_pcache *p){ + PCache1 *pCache = (PCache1*)p; + if( pCache->bPurgeable ){ + PGroup *pGroup = pCache->pGroup; + int savedMaxPage; + pcache1EnterMutex(pGroup); + savedMaxPage = pGroup->nMaxPage; + pGroup->nMaxPage = 0; + pcache1EnforceMaxPage(pCache); + pGroup->nMaxPage = savedMaxPage; + pcache1LeaveMutex(pGroup); } } /* ** Implementation of the sqlite3_pcache.xPagecount method. */ static int pcache1Pagecount(sqlite3_pcache *p){ int n; - pcache1EnterMutex(); - n = ((PCache1 *)p)->nPage; - pcache1LeaveMutex(); + PCache1 *pCache = (PCache1*)p; + pcache1EnterMutex(pCache->pGroup); + n = pCache->nPage; + pcache1LeaveMutex(pCache->pGroup); return n; } + +/* +** Implement steps 3, 4, and 5 of the pcache1Fetch() algorithm described +** in the header of the pcache1Fetch() procedure. +** +** This steps are broken out into a separate procedure because they are +** usually not needed, and by avoiding the stack initialization required +** for these steps, the main pcache1Fetch() procedure can run faster. 
+*/ +static SQLITE_NOINLINE PgHdr1 *pcache1FetchStage2( + PCache1 *pCache, + unsigned int iKey, + int createFlag +){ + unsigned int nPinned; + PGroup *pGroup = pCache->pGroup; + PgHdr1 *pPage = 0; + + /* Step 3: Abort if createFlag is 1 but the cache is nearly full */ + assert( pCache->nPage >= pCache->nRecyclable ); + nPinned = pCache->nPage - pCache->nRecyclable; + assert( pGroup->mxPinned == pGroup->nMaxPage + 10 - pGroup->nMinPage ); + assert( pCache->n90pct == pCache->nMax*9/10 ); + if( createFlag==1 && ( + nPinned>=pGroup->mxPinned + || nPinned>=pCache->n90pct + || (pcache1UnderMemoryPressure(pCache) && pCache->nRecyclable<nPinned) + )){ + return 0; + } + + if( pCache->nPage>=pCache->nHash ) pcache1ResizeHash(pCache); + assert( pCache->nHash>0 && pCache->apHash ); + + /* Step 4. Try to recycle a page. */ + if( pCache->bPurgeable + && !pGroup->lru.pLruPrev->isAnchor + && ((pCache->nPage+1>=pCache->nMax) || pcache1UnderMemoryPressure(pCache)) + ){ + PCache1 *pOther; + pPage = pGroup->lru.pLruPrev; + assert( pPage->isPinned==0 ); + pcache1RemoveFromHash(pPage, 0); + pcache1PinPage(pPage); + pOther = pPage->pCache; + if( pOther->szAlloc != pCache->szAlloc ){ + pcache1FreePage(pPage); + pPage = 0; + }else{ + pGroup->nCurrentPage -= (pOther->bPurgeable - pCache->bPurgeable); + } + } + + /* Step 5. If a usable page buffer has still not been found, + ** attempt to allocate a new one. + */ + if( !pPage ){ + pPage = pcache1AllocPage(pCache, createFlag==1); + } + + if( pPage ){ + unsigned int h = iKey % pCache->nHash; + pCache->nPage++; + pPage->iKey = iKey; + pPage->pNext = pCache->apHash[h]; + pPage->pCache = pCache; + pPage->pLruPrev = 0; + pPage->pLruNext = 0; + pPage->isPinned = 1; + *(void **)pPage->page.pExtra = 0; + pCache->apHash[h] = pPage; + if( iKey>pCache->iMaxKey ){ + pCache->iMaxKey = iKey; + } + } + return pPage; +} + /* ** Implementation of the sqlite3_pcache.xFetch method. ** ** Fetch a page by key value. ** @@ -31610,11 +42744,11 @@ ** means to try really hard to allocate a new page. ** ** For a non-purgeable cache (a cache used as the storage for an in-memory ** database) there is really no difference between createFlag 1 and 2. So ** the calling function (pcache.c) will never have a createFlag of 1 on -** a non-purgable cache. +** a non-purgeable cache. ** ** There are three different approaches to obtaining space for a page, ** depending on the value of parameter createFlag (which may be 0, 1 or 2). ** ** 1. Regardless of the value of createFlag, the cache is searched for a @@ -31621,18 +42755,20 @@ ** copy of the requested page. If one is found, it is returned. ** ** 2. If createFlag==0 and the page is not already in the cache, NULL is ** returned. ** -** 3. If createFlag is 1, and the page is not already in the cache, -** and if either of the following are true, return NULL: +** 3. If createFlag is 1, and the page is not already in the cache, then +** return NULL (do not allocate a new page) if any of the following +** conditions are true: ** ** (a) the number of pages pinned by the cache is greater than ** PCache1.nMax, or +** ** (b) the number of pages pinned by the cache is greater than ** the sum of nMax for all purgeable caches, less the sum of -** nMin for all other purgeable caches. +** nMin for all other purgeable caches, or ** ** 4. 
If none of the first three conditions apply and the cache is marked ** as purgeable, and if one of the following is true: ** ** (a) The number of pages allocated for the cache is already @@ -31639,152 +42775,151 @@ ** PCache1.nMax, or ** ** (b) The number of pages allocated for all purgeable caches is ** already equal to or greater than the sum of nMax for all ** purgeable caches, +** +** (c) The system is under memory pressure and wants to avoid +** unnecessary pages cache entry allocations ** ** then attempt to recycle a page from the LRU list. If it is the right ** size, return the recycled buffer. Otherwise, free the buffer and ** proceed to step 5. ** ** 5. Otherwise, allocate and return a new page buffer. +** +** There are two versions of this routine. pcache1FetchWithMutex() is +** the general case. pcache1FetchNoMutex() is a faster implementation for +** the common case where pGroup->mutex is NULL. The pcache1Fetch() wrapper +** invokes the appropriate routine. */ -static void *pcache1Fetch(sqlite3_pcache *p, unsigned int iKey, int createFlag){ - unsigned int nPinned; +static PgHdr1 *pcache1FetchNoMutex( + sqlite3_pcache *p, + unsigned int iKey, + int createFlag +){ PCache1 *pCache = (PCache1 *)p; PgHdr1 *pPage = 0; - assert( pCache->bPurgeable || createFlag!=1 ); - pcache1EnterMutex(); - if( createFlag==1 ) sqlite3BeginBenignMalloc(); - - /* Search the hash table for an existing entry. */ - if( pCache->nHash>0 ){ - unsigned int h = iKey % pCache->nHash; - for(pPage=pCache->apHash[h]; pPage&&pPage->iKey!=iKey; pPage=pPage->pNext); - } - - if( pPage || createFlag==0 ){ - pcache1PinPage(pPage); - goto fetch_out; - } - - /* Step 3 of header comment. */ - nPinned = pCache->nPage - pCache->nRecyclable; - if( createFlag==1 && ( - nPinned>=(pcache1.nMaxPage+pCache->nMin-pcache1.nMinPage) - || nPinned>=(pCache->nMax * 9 / 10) - )){ - goto fetch_out; - } - - if( pCache->nPage>=pCache->nHash && pcache1ResizeHash(pCache) ){ - goto fetch_out; - } - - /* Step 4. Try to recycle a page buffer if appropriate. */ - if( pCache->bPurgeable && pcache1.pLruTail && ( - (pCache->nPage+1>=pCache->nMax) || pcache1.nCurrentPage>=pcache1.nMaxPage - )){ - pPage = pcache1.pLruTail; - pcache1RemoveFromHash(pPage); - pcache1PinPage(pPage); - if( pPage->pCache->szPage!=pCache->szPage ){ - pcache1FreePage(pPage); - pPage = 0; - }else{ - pcache1.nCurrentPage -= (pPage->pCache->bPurgeable - pCache->bPurgeable); - } - } - - /* Step 5. If a usable page buffer has still not been found, - ** attempt to allocate a new one. - */ - if( !pPage ){ - pPage = pcache1AllocPage(pCache); - } - + /* Step 1: Search the hash table for an existing entry. */ + pPage = pCache->apHash[iKey % pCache->nHash]; + while( pPage && pPage->iKey!=iKey ){ pPage = pPage->pNext; } + + /* Step 2: If the page was found in the hash table, then return it. + ** If the page was not in the hash table and createFlag is 0, abort. + ** Otherwise (page not in hash and createFlag!=0) continue with + ** subsequent steps to try to create the page. */ if( pPage ){ - unsigned int h = iKey % pCache->nHash; - pCache->nPage++; - pPage->iKey = iKey; - pPage->pNext = pCache->apHash[h]; - pPage->pCache = pCache; - pPage->pLruPrev = 0; - pPage->pLruNext = 0; - *(void **)(PGHDR1_TO_PAGE(pPage)) = 0; - pCache->apHash[h] = pPage; - } - -fetch_out: - if( pPage && iKey>pCache->iMaxKey ){ - pCache->iMaxKey = iKey; - } - if( createFlag==1 ) sqlite3EndBenignMalloc(); - pcache1LeaveMutex(); - return (pPage ? 
PGHDR1_TO_PAGE(pPage) : 0); + if( !pPage->isPinned ){ + return pcache1PinPage(pPage); + }else{ + return pPage; + } + }else if( createFlag ){ + /* Steps 3, 4, and 5 implemented by this subroutine */ + return pcache1FetchStage2(pCache, iKey, createFlag); + }else{ + return 0; + } +} +#if PCACHE1_MIGHT_USE_GROUP_MUTEX +static PgHdr1 *pcache1FetchWithMutex( + sqlite3_pcache *p, + unsigned int iKey, + int createFlag +){ + PCache1 *pCache = (PCache1 *)p; + PgHdr1 *pPage; + + pcache1EnterMutex(pCache->pGroup); + pPage = pcache1FetchNoMutex(p, iKey, createFlag); + assert( pPage==0 || pCache->iMaxKey>=iKey ); + pcache1LeaveMutex(pCache->pGroup); + return pPage; +} +#endif +static sqlite3_pcache_page *pcache1Fetch( + sqlite3_pcache *p, + unsigned int iKey, + int createFlag +){ +#if PCACHE1_MIGHT_USE_GROUP_MUTEX || defined(SQLITE_DEBUG) + PCache1 *pCache = (PCache1 *)p; +#endif + + assert( offsetof(PgHdr1,page)==0 ); + assert( pCache->bPurgeable || createFlag!=1 ); + assert( pCache->bPurgeable || pCache->nMin==0 ); + assert( pCache->bPurgeable==0 || pCache->nMin==10 ); + assert( pCache->nMin==0 || pCache->bPurgeable ); + assert( pCache->nHash>0 ); +#if PCACHE1_MIGHT_USE_GROUP_MUTEX + if( pCache->pGroup->mutex ){ + return (sqlite3_pcache_page*)pcache1FetchWithMutex(p, iKey, createFlag); + }else +#endif + { + return (sqlite3_pcache_page*)pcache1FetchNoMutex(p, iKey, createFlag); + } } /* ** Implementation of the sqlite3_pcache.xUnpin method. ** ** Mark a page as unpinned (eligible for asynchronous recycling). */ -static void pcache1Unpin(sqlite3_pcache *p, void *pPg, int reuseUnlikely){ +static void pcache1Unpin( + sqlite3_pcache *p, + sqlite3_pcache_page *pPg, + int reuseUnlikely +){ PCache1 *pCache = (PCache1 *)p; - PgHdr1 *pPage = PAGE_TO_PGHDR1(pCache, pPg); + PgHdr1 *pPage = (PgHdr1 *)pPg; + PGroup *pGroup = pCache->pGroup; assert( pPage->pCache==pCache ); - pcache1EnterMutex(); + pcache1EnterMutex(pGroup); /* It is an error to call this function if the page is already - ** part of the global LRU list. + ** part of the PGroup LRU list. */ assert( pPage->pLruPrev==0 && pPage->pLruNext==0 ); - assert( pcache1.pLruHead!=pPage && pcache1.pLruTail!=pPage ); - - if( reuseUnlikely || pcache1.nCurrentPage>pcache1.nMaxPage ){ - pcache1RemoveFromHash(pPage); - pcache1FreePage(pPage); - }else{ - /* Add the page to the global LRU list. Normally, the page is added to - ** the head of the list (last page to be recycled). However, if the - ** reuseUnlikely flag passed to this function is true, the page is added - ** to the tail of the list (first page to be recycled). - */ - if( pcache1.pLruHead ){ - pcache1.pLruHead->pLruPrev = pPage; - pPage->pLruNext = pcache1.pLruHead; - pcache1.pLruHead = pPage; - }else{ - pcache1.pLruTail = pPage; - pcache1.pLruHead = pPage; - } + assert( pPage->isPinned==1 ); + + if( reuseUnlikely || pGroup->nCurrentPage>pGroup->nMaxPage ){ + pcache1RemoveFromHash(pPage, 1); + }else{ + /* Add the page to the PGroup LRU list. */ + PgHdr1 **ppFirst = &pGroup->lru.pLruNext; + pPage->pLruPrev = &pGroup->lru; + (pPage->pLruNext = *ppFirst)->pLruPrev = pPage; + *ppFirst = pPage; pCache->nRecyclable++; + pPage->isPinned = 0; } - pcache1LeaveMutex(); + pcache1LeaveMutex(pCache->pGroup); } /* ** Implementation of the sqlite3_pcache.xRekey method. 
*/ static void pcache1Rekey( sqlite3_pcache *p, - void *pPg, + sqlite3_pcache_page *pPg, unsigned int iOld, unsigned int iNew ){ PCache1 *pCache = (PCache1 *)p; - PgHdr1 *pPage = PAGE_TO_PGHDR1(pCache, pPg); + PgHdr1 *pPage = (PgHdr1 *)pPg; PgHdr1 **pp; unsigned int h; assert( pPage->iKey==iOld ); assert( pPage->pCache==pCache ); - pcache1EnterMutex(); + pcache1EnterMutex(pCache->pGroup); h = iOld%pCache->nHash; pp = &pCache->apHash[h]; while( (*pp)!=pPage ){ pp = &(*pp)->pNext; @@ -31797,11 +42932,11 @@ pCache->apHash[h] = pPage; if( iNew>pCache->iMaxKey ){ pCache->iMaxKey = iNew; } - pcache1LeaveMutex(); + pcache1LeaveMutex(pCache->pGroup); } /* ** Implementation of the sqlite3_pcache.xTruncate method. ** @@ -31809,31 +42944,37 @@ ** or greater than parameter iLimit. Any pinned pages with a page number ** equal to or greater than iLimit are implicitly unpinned. */ static void pcache1Truncate(sqlite3_pcache *p, unsigned int iLimit){ PCache1 *pCache = (PCache1 *)p; - pcache1EnterMutex(); + pcache1EnterMutex(pCache->pGroup); if( iLimit<=pCache->iMaxKey ){ pcache1TruncateUnsafe(pCache, iLimit); pCache->iMaxKey = iLimit-1; } - pcache1LeaveMutex(); + pcache1LeaveMutex(pCache->pGroup); } /* ** Implementation of the sqlite3_pcache.xDestroy method. ** ** Destroy a cache allocated using pcache1Create(). */ static void pcache1Destroy(sqlite3_pcache *p){ PCache1 *pCache = (PCache1 *)p; - pcache1EnterMutex(); + PGroup *pGroup = pCache->pGroup; + assert( pCache->bPurgeable || (pCache->nMax==0 && pCache->nMin==0) ); + pcache1EnterMutex(pGroup); pcache1TruncateUnsafe(pCache, 0); - pcache1.nMaxPage -= pCache->nMax; - pcache1.nMinPage -= pCache->nMin; - pcache1EnforceMaxPage(); - pcache1LeaveMutex(); + assert( pGroup->nMaxPage >= pCache->nMax ); + pGroup->nMaxPage -= pCache->nMax; + assert( pGroup->nMinPage >= pCache->nMin ); + pGroup->nMinPage -= pCache->nMin; + pGroup->mxPinned = pGroup->nMaxPage + 10 - pGroup->nMinPage; + pcache1EnforceMaxPage(pCache); + pcache1LeaveMutex(pGroup); + sqlite3_free(pCache->pBulk); sqlite3_free(pCache->apHash); sqlite3_free(pCache); } /* @@ -31840,11 +42981,12 @@ ** This function is called during initialization (sqlite3_initialize()) to ** install the default pluggable cache module, assuming the user has not ** already provided an alternative. */ SQLITE_PRIVATE void sqlite3PCacheSetDefault(void){ - static sqlite3_pcache_methods defaultMethods = { + static const sqlite3_pcache_methods2 defaultMethods = { + 1, /* iVersion */ 0, /* pArg */ pcache1Init, /* xInit */ pcache1Shutdown, /* xShutdown */ pcache1Create, /* xCreate */ pcache1Cachesize, /* xCachesize */ @@ -31851,13 +42993,27 @@ pcache1Pagecount, /* xPagecount */ pcache1Fetch, /* xFetch */ pcache1Unpin, /* xUnpin */ pcache1Rekey, /* xRekey */ pcache1Truncate, /* xTruncate */ - pcache1Destroy /* xDestroy */ + pcache1Destroy, /* xDestroy */ + pcache1Shrink /* xShrink */ }; - sqlite3_config(SQLITE_CONFIG_PCACHE, &defaultMethods); + sqlite3_config(SQLITE_CONFIG_PCACHE2, &defaultMethods); +} + +/* +** Return the size of the header on each page of this PCACHE implementation. +*/ +SQLITE_PRIVATE int sqlite3HeaderSizePcache1(void){ return ROUND8(sizeof(PgHdr1)); } + +/* +** Return the global mutex used by this PCACHE implementation. The +** sqlite3_status() routine needs access to this mutex. 
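sqlite3PCacheSetDefault() above installs pcache1 through the same public hook that an application can use to substitute its own page cache. A hedged sketch follows; myCacheMethods and the callbacks behind it are assumed to exist and are not part of this code.

    #include "sqlite3.h"

    /* A hypothetical replacement cache: a sqlite3_pcache_methods2 structure
    ** whose callbacks (xInit, xShutdown, xCreate, xFetch, xUnpin, ...) are
    ** supplied elsewhere by the application. */
    extern sqlite3_pcache_methods2 myCacheMethods;

    int installCustomPageCache(void){
      /* Like other sqlite3_config() calls, this must happen before the
      ** library is initialized; otherwise SQLITE_MISUSE is returned. */
      return sqlite3_config(SQLITE_CONFIG_PCACHE2, &myCacheMethods);
    }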
+*/ +SQLITE_PRIVATE sqlite3_mutex *sqlite3Pcache1Mutex(void){ + return pcache1.mutex; } #ifdef SQLITE_ENABLE_MEMORY_MANAGEMENT /* ** This function is called to free superfluous dynamically allocated memory @@ -31868,20 +43024,28 @@ ** been released, the function returns. The return value is the total number ** of bytes of memory released. */ SQLITE_PRIVATE int sqlite3PcacheReleaseMemory(int nReq){ int nFree = 0; - if( pcache1.pStart==0 ){ + assert( sqlite3_mutex_notheld(pcache1.grp.mutex) ); + assert( sqlite3_mutex_notheld(pcache1.mutex) ); + if( sqlite3GlobalConfig.nPage==0 ){ PgHdr1 *p; - pcache1EnterMutex(); - while( (nReq<0 || nFree<nReq) && (p=pcache1.pLruTail) ){ - nFree += sqlite3MallocSize(PGHDR1_TO_PAGE(p)); + pcache1EnterMutex(&pcache1.grp); + while( (nReq<0 || nFree<nReq) + && (p=pcache1.grp.lru.pLruPrev)!=0 + && p->isAnchor==0 + ){ + nFree += pcache1MemSize(p->page.pBuf); +#ifdef SQLITE_PCACHE_SEPARATE_HEADER + nFree += sqlite3MemSize(p); +#endif + assert( p->isPinned==0 ); pcache1PinPage(p); - pcache1RemoveFromHash(p); - pcache1FreePage(p); + pcache1RemoveFromHash(p, 1); } - pcache1LeaveMutex(); + pcache1LeaveMutex(&pcache1.grp); } return nFree; } #endif /* SQLITE_ENABLE_MEMORY_MANAGEMENT */ @@ -31896,16 +43060,17 @@ int *pnMin, /* OUT: Sum of PCache1.nMin for purgeable caches */ int *pnRecyclable /* OUT: Total number of pages available for recycling */ ){ PgHdr1 *p; int nRecyclable = 0; - for(p=pcache1.pLruHead; p; p=p->pLruNext){ + for(p=pcache1.grp.lru.pLruNext; p && !p->isAnchor; p=p->pLruNext){ + assert( p->isPinned==0 ); nRecyclable++; } - *pnCurrent = pcache1.nCurrentPage; - *pnMax = pcache1.nMaxPage; - *pnMin = pcache1.nMinPage; + *pnCurrent = pcache1.grp.nCurrentPage; + *pnMax = (int)pcache1.grp.nMaxPage; + *pnMin = (int)pcache1.grp.nMinPage; *pnRecyclable = nRecyclable; } #endif /************** End of pcache1.c *********************************************/ @@ -31960,20 +43125,21 @@ ** a non-zero batch number, it will see all prior INSERTs. ** ** No INSERTs may occurs after a SMALLEST. An assertion will fail if ** that is attempted. ** -** The cost of an INSERT is roughly constant. (Sometime new memory +** The cost of an INSERT is roughly constant. (Sometimes new memory ** has to be allocated on an INSERT.) The cost of a TEST with a new ** batch number is O(NlogN) where N is the number of elements in the RowSet. ** The cost of a TEST using the same batch number is O(logN). The cost ** of the first SMALLEST is O(NlogN). Second and subsequent SMALLEST ** primitives are constant time. The cost of DESTROY is O(N). ** ** There is an added cost of O(N) when switching between TEST and ** SMALLEST primitives. */ +/* #include "sqliteInt.h" */ /* ** Target size for allocation chunks. */ @@ -31985,10 +43151,15 @@ #define ROWSET_ENTRY_PER_CHUNK \ ((ROWSET_ALLOCATION_SIZE-8)/sizeof(struct RowSetEntry)) /* ** Each entry in a RowSet is an instance of the following object. +** +** This same object is reused to store a linked list of trees of RowSetEntry +** objects. In that alternative use, pRight points to the next entry +** in the list, pLeft points to the tree, and v is unused. The +** RowSet.pForest value points to the head of this forest list. 
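The INSERT, TEST and SMALLEST primitives described above combine roughly as in the following sketch. It is illustrative only: RowSet is private to the amalgamation, pRowSet is assumed to have been obtained from sqlite3RowSetInit() as described just below, and error handling is omitted.

    /* Per the assert()s in rowSetToList() and sqlite3RowSetTest() below, a
    ** given RowSet is used either for TEST or for SMALLEST, not both. */
    static void rowSetDrainDemo(RowSet *pRowSet){
      i64 iRowid;
      sqlite3RowSetInsert(pRowSet, 42);      /* INSERT: any order is fine */
      sqlite3RowSetInsert(pRowSet, 7);
      while( sqlite3RowSetNext(pRowSet, &iRowid) ){
        /* SMALLEST: visits 7 then 42; no INSERTs may follow this loop */
      }
    }

    static int rowSetMembershipDemo(RowSet *pRowSet){
      sqlite3RowSetInsert(pRowSet, 42);
      /* TEST: nonzero iff 42 was inserted before batch 1 began */
      return sqlite3RowSetTest(pRowSet, 1, 42);
    }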
*/ struct RowSetEntry { i64 v; /* ROWID value for this entry */ struct RowSetEntry *pRight; /* Right subtree (larger entries) or list */ struct RowSetEntry *pLeft; /* Left subtree (smaller entries) */ @@ -32014,16 +43185,22 @@ struct RowSetChunk *pChunk; /* List of all chunk allocations */ sqlite3 *db; /* The database connection */ struct RowSetEntry *pEntry; /* List of entries using pRight */ struct RowSetEntry *pLast; /* Last entry on the pEntry list */ struct RowSetEntry *pFresh; /* Source of new entry objects */ - struct RowSetEntry *pTree; /* Binary tree of entries */ + struct RowSetEntry *pForest; /* List of binary trees of entries */ u16 nFresh; /* Number of objects on pFresh */ - u8 isSorted; /* True if pEntry is sorted */ - u8 iBatch; /* Current insert batch */ + u16 rsFlags; /* Various flags */ + int iBatch; /* Current insert batch */ }; +/* +** Allowed values for RowSet.rsFlags +*/ +#define ROWSET_SORTED 0x01 /* True if RowSet.pEntry is sorted */ +#define ROWSET_NEXT 0x02 /* True if sqlite3RowSetNext() has been called */ + /* ** Turn bulk memory into a RowSet object. N bytes of memory ** are available at pSpace. The db pointer is used as a memory context ** for any subsequent allocations that need to occur. ** Return a pointer to the new RowSet object. @@ -32040,14 +43217,14 @@ p = pSpace; p->pChunk = 0; p->db = db; p->pEntry = 0; p->pLast = 0; - p->pTree = 0; + p->pForest = 0; p->pFresh = (struct RowSetEntry*)(ROUND8(sizeof(*p)) + (char*)p); p->nFresh = (u16)((N - ROUND8(sizeof(*p)))/sizeof(struct RowSetEntry)); - p->isSorted = 1; + p->rsFlags = ROWSET_SORTED; p->iBatch = 0; return p; } /* @@ -32063,12 +43240,37 @@ } p->pChunk = 0; p->nFresh = 0; p->pEntry = 0; p->pLast = 0; - p->pTree = 0; - p->isSorted = 1; + p->pForest = 0; + p->rsFlags = ROWSET_SORTED; +} + +/* +** Allocate a new RowSetEntry object that is associated with the +** given RowSet. Return a pointer to the new and completely uninitialized +** objected. +** +** In an OOM situation, the RowSet.db->mallocFailed flag is set and this +** routine returns NULL. +*/ +static struct RowSetEntry *rowSetEntryAlloc(RowSet *p){ + assert( p!=0 ); + if( p->nFresh==0 ){ + struct RowSetChunk *pNew; + pNew = sqlite3DbMallocRawNN(p->db, sizeof(*pNew)); + if( pNew==0 ){ + return 0; + } + pNew->pNextChunk = p->pChunk; + p->pChunk = pNew; + p->pFresh = pNew->aEntry; + p->nFresh = ROWSET_ENTRY_PER_CHUNK; + } + p->nFresh--; + return p->pFresh++; } /* ** Insert a new value into a RowSet. ** @@ -32076,34 +43278,25 @@ ** memory allocation fails. 
*/ SQLITE_PRIVATE void sqlite3RowSetInsert(RowSet *p, i64 rowid){ struct RowSetEntry *pEntry; /* The new entry */ struct RowSetEntry *pLast; /* The last prior entry */ - assert( p!=0 ); - if( p->nFresh==0 ){ - struct RowSetChunk *pNew; - pNew = sqlite3DbMallocRaw(p->db, sizeof(*pNew)); - if( pNew==0 ){ - return; - } - pNew->pNextChunk = p->pChunk; - p->pChunk = pNew; - p->pFresh = pNew->aEntry; - p->nFresh = ROWSET_ENTRY_PER_CHUNK; - } - pEntry = p->pFresh++; - p->nFresh--; + + /* This routine is never called after sqlite3RowSetNext() */ + assert( p!=0 && (p->rsFlags & ROWSET_NEXT)==0 ); + + pEntry = rowSetEntryAlloc(p); + if( pEntry==0 ) return; pEntry->v = rowid; pEntry->pRight = 0; pLast = p->pLast; if( pLast ){ - if( p->isSorted && rowid<=pLast->v ){ - p->isSorted = 0; + if( (p->rsFlags & ROWSET_SORTED)!=0 && rowid<=pLast->v ){ + p->rsFlags &= ~ROWSET_SORTED; } pLast->pRight = pEntry; }else{ - assert( p->pEntry==0 ); /* Fires if INSERT after SMALLEST */ p->pEntry = pEntry; } p->pLast = pEntry; } @@ -32111,11 +43304,11 @@ ** Merge two lists of RowSetEntry objects. Remove duplicates. ** ** The input lists are connected via pRight pointers and are ** assumed to each already be in sorted order. */ -static struct RowSetEntry *rowSetMerge( +static struct RowSetEntry *rowSetEntryMerge( struct RowSetEntry *pA, /* First sorted list to be merged */ struct RowSetEntry *pB /* Second sorted list to be merged */ ){ struct RowSetEntry head; struct RowSetEntry *pTail; @@ -32145,36 +43338,33 @@ } return head.pRight; } /* -** Sort all elements on the pEntry list of the RowSet into ascending order. +** Sort all elements on the list of RowSetEntry objects into order of +** increasing v. */ -static void rowSetSort(RowSet *p){ +static struct RowSetEntry *rowSetEntrySort(struct RowSetEntry *pIn){ unsigned int i; - struct RowSetEntry *pEntry; - struct RowSetEntry *aBucket[40]; + struct RowSetEntry *pNext, *aBucket[40]; - assert( p->isSorted==0 ); memset(aBucket, 0, sizeof(aBucket)); - while( p->pEntry ){ - pEntry = p->pEntry; - p->pEntry = pEntry->pRight; - pEntry->pRight = 0; + while( pIn ){ + pNext = pIn->pRight; + pIn->pRight = 0; for(i=0; aBucket[i]; i++){ - pEntry = rowSetMerge(aBucket[i], pEntry); + pIn = rowSetEntryMerge(aBucket[i], pIn); aBucket[i] = 0; } - aBucket[i] = pEntry; + aBucket[i] = pIn; + pIn = pNext; } - pEntry = 0; + pIn = 0; for(i=0; i<sizeof(aBucket)/sizeof(aBucket[0]); i++){ - pEntry = rowSetMerge(pEntry, aBucket[i]); + pIn = rowSetEntryMerge(pIn, aBucket[i]); } - p->pEntry = pEntry; - p->pLast = 0; - p->isSorted = 1; + return pIn; } /* ** The input, pIn, is a binary tree (or subtree) of RowSetEntry objects. @@ -32264,24 +43454,41 @@ } return p; } /* -** Convert the list in p->pEntry into a sorted list if it is not -** sorted already. If there is a binary tree on p->pTree, then -** convert it into a list too and merge it into the p->pEntry list. +** Take all the entries on p->pEntry and on the trees in p->pForest and +** sort them all together into one big ordered list on p->pEntry. +** +** This routine should only be called once in the life of a RowSet. 
*/ static void rowSetToList(RowSet *p){ - if( !p->isSorted ){ - rowSetSort(p); - } - if( p->pTree ){ - struct RowSetEntry *pHead, *pTail; - rowSetTreeToList(p->pTree, &pHead, &pTail); - p->pTree = 0; - p->pEntry = rowSetMerge(p->pEntry, pHead); - } + + /* This routine is called only once */ + assert( p!=0 && (p->rsFlags & ROWSET_NEXT)==0 ); + + if( (p->rsFlags & ROWSET_SORTED)==0 ){ + p->pEntry = rowSetEntrySort(p->pEntry); + } + + /* While this module could theoretically support it, sqlite3RowSetNext() + ** is never called after sqlite3RowSetText() for the same RowSet. So + ** there is never a forest to deal with. Should this change, simply + ** remove the assert() and the #if 0. */ + assert( p->pForest==0 ); +#if 0 + while( p->pForest ){ + struct RowSetEntry *pTree = p->pForest->pLeft; + if( pTree ){ + struct RowSetEntry *pHead, *pTail; + rowSetTreeToList(pTree, &pHead, &pTail); + p->pEntry = rowSetEntryMerge(p->pEntry, pHead); + } + p->pForest = p->pForest->pRight; + } +#endif + p->rsFlags |= ROWSET_NEXT; /* Verify this routine is never called again */ } /* ** Extract the smallest element from the RowSet. ** Write the element into *pRowid. Return 1 on success. Return @@ -32289,11 +43496,16 @@ ** ** After this routine has been called, the sqlite3RowSetInsert() ** routine may not be called again. */ SQLITE_PRIVATE int sqlite3RowSetNext(RowSet *p, i64 *pRowid){ - rowSetToList(p); + assert( p!=0 ); + + /* Merge the forest into a single sorted list on first call */ + if( (p->rsFlags & ROWSET_NEXT)==0 ) rowSetToList(p); + + /* Return the next entry on the list */ if( p->pEntry ){ *pRowid = p->pEntry->v; p->pEntry = p->pEntry->pRight; if( p->pEntry==0 ){ sqlite3RowSetClear(p); @@ -32303,32 +43515,72 @@ return 0; } } /* -** Check to see if element iRowid was inserted into the the rowset as +** Check to see if element iRowid was inserted into the rowset as ** part of any insert batch prior to iBatch. Return 1 or 0. +** +** If this is the first test of a new batch and if there exist entries +** on pRowSet->pEntry, then sort those entries into the forest at +** pRowSet->pForest so that they can be tested. 
*/ -SQLITE_PRIVATE int sqlite3RowSetTest(RowSet *pRowSet, u8 iBatch, sqlite3_int64 iRowid){ - struct RowSetEntry *p; +SQLITE_PRIVATE int sqlite3RowSetTest(RowSet *pRowSet, int iBatch, sqlite3_int64 iRowid){ + struct RowSetEntry *p, *pTree; + + /* This routine is never called after sqlite3RowSetNext() */ + assert( pRowSet!=0 && (pRowSet->rsFlags & ROWSET_NEXT)==0 ); + + /* Sort entries into the forest on the first test of a new batch + */ if( iBatch!=pRowSet->iBatch ){ - if( pRowSet->pEntry ){ - rowSetToList(pRowSet); - pRowSet->pTree = rowSetListToTree(pRowSet->pEntry); + p = pRowSet->pEntry; + if( p ){ + struct RowSetEntry **ppPrevTree = &pRowSet->pForest; + if( (pRowSet->rsFlags & ROWSET_SORTED)==0 ){ + p = rowSetEntrySort(p); + } + for(pTree = pRowSet->pForest; pTree; pTree=pTree->pRight){ + ppPrevTree = &pTree->pRight; + if( pTree->pLeft==0 ){ + pTree->pLeft = rowSetListToTree(p); + break; + }else{ + struct RowSetEntry *pAux, *pTail; + rowSetTreeToList(pTree->pLeft, &pAux, &pTail); + pTree->pLeft = 0; + p = rowSetEntryMerge(pAux, p); + } + } + if( pTree==0 ){ + *ppPrevTree = pTree = rowSetEntryAlloc(pRowSet); + if( pTree ){ + pTree->v = 0; + pTree->pRight = 0; + pTree->pLeft = rowSetListToTree(p); + } + } pRowSet->pEntry = 0; pRowSet->pLast = 0; + pRowSet->rsFlags |= ROWSET_SORTED; } pRowSet->iBatch = iBatch; } - p = pRowSet->pTree; - while( p ){ - if( p->v<iRowid ){ - p = p->pRight; - }else if( p->v>iRowid ){ - p = p->pLeft; - }else{ - return 1; + + /* Test to see if the iRowid value appears anywhere in the forest. + ** Return 1 if it does and 0 if not. + */ + for(pTree = pRowSet->pForest; pTree; pTree=pTree->pRight){ + p = pTree->pLeft; + while( p ){ + if( p->v<iRowid ){ + p = p->pRight; + }else if( p->v>iRowid ){ + p = p->pLeft; + }else{ + return 1; + } } } return 0; } @@ -32353,13 +43605,169 @@ ** locking to prevent two processes from writing the same database ** file simultaneously, or one process from reading the database while ** another is writing. */ #ifndef SQLITE_OMIT_DISKIO +/* #include "sqliteInt.h" */ +/************** Include wal.h in the middle of pager.c ***********************/ +/************** Begin file wal.h *********************************************/ +/* +** 2010 February 1 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This header file defines the interface to the write-ahead logging +** system. Refer to the comments below and the header comment attached to +** the implementation of each function in log.c for further details. 
+*/ -/* -******************** NOTES ON THE DESIGN OF THE PAGER ************************ +#ifndef _WAL_H_ +#define _WAL_H_ + +/* #include "sqliteInt.h" */ + +/* Additional values that can be added to the sync_flags argument of +** sqlite3WalFrames(): +*/ +#define WAL_SYNC_TRANSACTIONS 0x20 /* Sync at the end of each transaction */ +#define SQLITE_SYNC_MASK 0x13 /* Mask off the SQLITE_SYNC_* values */ + +#ifdef SQLITE_OMIT_WAL +# define sqlite3WalOpen(x,y,z) 0 +# define sqlite3WalLimit(x,y) +# define sqlite3WalClose(w,x,y,z) 0 +# define sqlite3WalBeginReadTransaction(y,z) 0 +# define sqlite3WalEndReadTransaction(z) +# define sqlite3WalDbsize(y) 0 +# define sqlite3WalBeginWriteTransaction(y) 0 +# define sqlite3WalEndWriteTransaction(x) 0 +# define sqlite3WalUndo(x,y,z) 0 +# define sqlite3WalSavepoint(y,z) +# define sqlite3WalSavepointUndo(y,z) 0 +# define sqlite3WalFrames(u,v,w,x,y,z) 0 +# define sqlite3WalCheckpoint(r,s,t,u,v,w,x,y,z) 0 +# define sqlite3WalCallback(z) 0 +# define sqlite3WalExclusiveMode(y,z) 0 +# define sqlite3WalHeapMemory(z) 0 +# define sqlite3WalFramesize(z) 0 +# define sqlite3WalFindFrame(x,y,z) 0 +# define sqlite3WalFile(x) 0 +#else + +#define WAL_SAVEPOINT_NDATA 4 + +/* Connection to a write-ahead log (WAL) file. +** There is one object of this type for each pager. +*/ +typedef struct Wal Wal; + +/* Open and close a connection to a write-ahead log. */ +SQLITE_PRIVATE int sqlite3WalOpen(sqlite3_vfs*, sqlite3_file*, const char *, int, i64, Wal**); +SQLITE_PRIVATE int sqlite3WalClose(Wal *pWal, int sync_flags, int, u8 *); + +/* Set the limiting size of a WAL file. */ +SQLITE_PRIVATE void sqlite3WalLimit(Wal*, i64); + +/* Used by readers to open (lock) and close (unlock) a snapshot. A +** snapshot is like a read-transaction. It is the state of the database +** at an instant in time. sqlite3WalOpenSnapshot gets a read lock and +** preserves the current state even if the other threads or processes +** write to or checkpoint the WAL. sqlite3WalCloseSnapshot() closes the +** transaction and releases the lock. +*/ +SQLITE_PRIVATE int sqlite3WalBeginReadTransaction(Wal *pWal, int *); +SQLITE_PRIVATE void sqlite3WalEndReadTransaction(Wal *pWal); + +/* Read a page from the write-ahead log, if it is present. */ +SQLITE_PRIVATE int sqlite3WalFindFrame(Wal *, Pgno, u32 *); +SQLITE_PRIVATE int sqlite3WalReadFrame(Wal *, u32, int, u8 *); + +/* If the WAL is not empty, return the size of the database. */ +SQLITE_PRIVATE Pgno sqlite3WalDbsize(Wal *pWal); + +/* Obtain or release the WRITER lock. */ +SQLITE_PRIVATE int sqlite3WalBeginWriteTransaction(Wal *pWal); +SQLITE_PRIVATE int sqlite3WalEndWriteTransaction(Wal *pWal); + +/* Undo any frames written (but not committed) to the log */ +SQLITE_PRIVATE int sqlite3WalUndo(Wal *pWal, int (*xUndo)(void *, Pgno), void *pUndoCtx); + +/* Return an integer that records the current (uncommitted) write +** position in the WAL */ +SQLITE_PRIVATE void sqlite3WalSavepoint(Wal *pWal, u32 *aWalData); + +/* Move the write position of the WAL back to iFrame. Called in +** response to a ROLLBACK TO command. */ +SQLITE_PRIVATE int sqlite3WalSavepointUndo(Wal *pWal, u32 *aWalData); + +/* Write a frame or frames to the log. 
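The read-path half of the interface declared above is small enough to sketch. The helper below is not part of this header; it only shows the expected calling shape for a single page lookup, with busy handling and cache invalidation left out.

    static int walReadPageSketch(Wal *pWal, Pgno pgno, int pgsz, u8 *aOut){
      int changed = 0;   /* out-param of BeginReadTransaction; see wal.c */
      u32 iFrame = 0;    /* frame containing pgno, or 0 if not in the WAL */
      int rc = sqlite3WalBeginReadTransaction(pWal, &changed);
      if( rc==SQLITE_OK ){
        rc = sqlite3WalFindFrame(pWal, pgno, &iFrame);
        if( rc==SQLITE_OK && iFrame ){
          rc = sqlite3WalReadFrame(pWal, iFrame, pgsz, aOut);
        }
        sqlite3WalEndReadTransaction(pWal);
      }
      return rc;  /* when iFrame==0 the caller reads the database file instead */
    }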
*/ +SQLITE_PRIVATE int sqlite3WalFrames(Wal *pWal, int, PgHdr *, Pgno, int, int); + +/* Copy pages from the log to the database file */ +SQLITE_PRIVATE int sqlite3WalCheckpoint( + Wal *pWal, /* Write-ahead log connection */ + int eMode, /* One of PASSIVE, FULL and RESTART */ + int (*xBusy)(void*), /* Function to call when busy */ + void *pBusyArg, /* Context argument for xBusyHandler */ + int sync_flags, /* Flags to sync db file with (or 0) */ + int nBuf, /* Size of buffer nBuf */ + u8 *zBuf, /* Temporary buffer to use */ + int *pnLog, /* OUT: Number of frames in WAL */ + int *pnCkpt /* OUT: Number of backfilled frames in WAL */ +); + +/* Return the value to pass to a sqlite3_wal_hook callback, the +** number of frames in the WAL at the point of the last commit since +** sqlite3WalCallback() was called. If no commits have occurred since +** the last call, then return 0. +*/ +SQLITE_PRIVATE int sqlite3WalCallback(Wal *pWal); + +/* Tell the wal layer that an EXCLUSIVE lock has been obtained (or released) +** by the pager layer on the database file. +*/ +SQLITE_PRIVATE int sqlite3WalExclusiveMode(Wal *pWal, int op); + +/* Return true if the argument is non-NULL and the WAL module is using +** heap-memory for the wal-index. Otherwise, if the argument is NULL or the +** WAL module is using shared-memory, return false. +*/ +SQLITE_PRIVATE int sqlite3WalHeapMemory(Wal *pWal); + +#ifdef SQLITE_ENABLE_SNAPSHOT +SQLITE_PRIVATE int sqlite3WalSnapshotGet(Wal *pWal, sqlite3_snapshot **ppSnapshot); +SQLITE_PRIVATE void sqlite3WalSnapshotOpen(Wal *pWal, sqlite3_snapshot *pSnapshot); +#endif + +#ifdef SQLITE_ENABLE_ZIPVFS +/* If the WAL file is not empty, return the number of bytes of content +** stored in each frame (i.e. the db page-size when the WAL was created). +*/ +SQLITE_PRIVATE int sqlite3WalFramesize(Wal *pWal); +#endif + +/* Return the sqlite3_file object for the WAL file */ +SQLITE_PRIVATE sqlite3_file *sqlite3WalFile(Wal *pWal); + +#endif /* ifndef SQLITE_OMIT_WAL */ +#endif /* _WAL_H_ */ + +/************** End of wal.h *************************************************/ +/************** Continuing where we left off in pager.c **********************/ + + +/******************* NOTES ON THE DESIGN OF THE PAGER ************************ +** +** This comment block describes invariants that hold when using a rollback +** journal. These invariants do not apply for journal_mode=WAL, +** journal_mode=MEMORY, or journal_mode=OFF. ** ** Within this comment block, a page is deemed to have been synced ** automatically as soon as it is written when PRAGMA synchronous=OFF. ** Otherwise, the page is not synced until the xSync method of the VFS ** is called successfully on the file containing the page. @@ -32389,11 +43797,11 @@ ** both the content in the database when the rollback journal was written ** and the content in the database at the beginning of the current ** transaction. ** ** (3) Writes to the database file are an integer multiple of the page size -** in length and are aligned to a page boundary. +** in length and are aligned on a page boundary. ** ** (4) Reads from the database file are either aligned on a page boundary and ** an integer multiple of the page size in length or are taken from the ** first 100 bytes of the database file. ** @@ -32403,17 +43811,17 @@ ** (6) If a master journal file is used, then all writes to the database file ** are synced prior to the master journal being deleted. 
** ** Definition: Two databases (or the same database at two points it time) ** are said to be "logically equivalent" if they give the same answer to -** all queries. Note in particular the the content of freelist leaf -** pages can be changed arbitarily without effecting the logical equivalence +** all queries. Note in particular the content of freelist leaf +** pages can be changed arbitrarily without affecting the logical equivalence ** of the database. ** ** (7) At any time, if any subset, including the empty set and the total set, ** of the unsynced changes to a rollback journal are removed and the -** journal is rolled back, the resulting database file will be logical +** journal is rolled back, the resulting database file will be logically ** equivalent to the database file at the beginning of the transaction. ** ** (8) When a transaction is rolled back, the xTruncate method of the VFS ** is called to restore the database file to the same size it was at ** the beginning of the transaction. (In some VFSes, the xTruncate @@ -32420,11 +43828,12 @@ ** method is a no-op, but that does not change the fact the SQLite will ** invoke it.) ** ** (9) Whenever the database file is modified, at least one bit in the range ** of bytes from 24 through 39 inclusive will be changed prior to releasing -** the EXCLUSIVE lock. +** the EXCLUSIVE lock, thus signaling other connections on the same +** database to flush their caches. ** ** (10) The pattern of bits in bytes 24 through 39 shall not repeat in less ** than one billion transactions. ** ** (11) A database file is well-formed at the beginning and at the conclusion @@ -32433,11 +43842,12 @@ ** (12) An EXCLUSIVE lock is held on the database file when writing to ** the database file. ** ** (13) A SHARED lock is held on the database file while reading any ** content out of the database file. -*/ +** +******************************************************************************/ /* ** Macros for troubleshooting. Normally turned off */ #if 0 @@ -32458,62 +43868,283 @@ */ #define PAGERID(p) ((int)(p->fd)) #define FILEHANDLEID(fd) ((int)fd) /* -** The page cache as a whole is always in one of the following -** states: -** -** PAGER_UNLOCK The page cache is not currently reading or -** writing the database file. There is no -** data held in memory. This is the initial -** state. -** -** PAGER_SHARED The page cache is reading the database. -** Writing is not permitted. There can be -** multiple readers accessing the same database -** file at the same time. -** -** PAGER_RESERVED This process has reserved the database for writing -** but has not yet made any changes. Only one process -** at a time can reserve the database. The original -** database file has not been modified so other -** processes may still be reading the on-disk -** database file. -** -** PAGER_EXCLUSIVE The page cache is writing the database. -** Access is exclusive. No other processes or -** threads can be reading or writing while one -** process is writing. -** -** PAGER_SYNCED The pager moves to this state from PAGER_EXCLUSIVE -** after all dirty pages have been written to the -** database file and the file has been synced to -** disk. All that remains to do is to remove or -** truncate the journal file and the transaction -** will be committed. -** -** The page cache comes up in PAGER_UNLOCK. The first time a -** sqlite3PagerGet() occurs, the state transitions to PAGER_SHARED. 
-** After all pages have been released using sqlite_page_unref(), -** the state transitions back to PAGER_UNLOCK. The first time -** that sqlite3PagerWrite() is called, the state transitions to -** PAGER_RESERVED. (Note that sqlite3PagerWrite() can only be -** called on an outstanding page which means that the pager must -** be in PAGER_SHARED before it transitions to PAGER_RESERVED.) -** PAGER_RESERVED means that there is an open rollback journal. -** The transition to PAGER_EXCLUSIVE occurs before any changes -** are made to the database file, though writes to the rollback -** journal occurs with just PAGER_RESERVED. After an sqlite3PagerRollback() -** or sqlite3PagerCommitPhaseTwo(), the state can go back to PAGER_SHARED, -** or it can stay at PAGER_EXCLUSIVE if we are in exclusive access mode. -*/ -#define PAGER_UNLOCK 0 -#define PAGER_SHARED 1 /* same as SHARED_LOCK */ -#define PAGER_RESERVED 2 /* same as RESERVED_LOCK */ -#define PAGER_EXCLUSIVE 4 /* same as EXCLUSIVE_LOCK */ -#define PAGER_SYNCED 5 +** The Pager.eState variable stores the current 'state' of a pager. A +** pager may be in any one of the seven states shown in the following +** state diagram. +** +** OPEN <------+------+ +** | | | +** V | | +** +---------> READER-------+ | +** | | | +** | V | +** |<-------WRITER_LOCKED------> ERROR +** | | ^ +** | V | +** |<------WRITER_CACHEMOD-------->| +** | | | +** | V | +** |<-------WRITER_DBMOD---------->| +** | | | +** | V | +** +<------WRITER_FINISHED-------->+ +** +** +** List of state transitions and the C [function] that performs each: +** +** OPEN -> READER [sqlite3PagerSharedLock] +** READER -> OPEN [pager_unlock] +** +** READER -> WRITER_LOCKED [sqlite3PagerBegin] +** WRITER_LOCKED -> WRITER_CACHEMOD [pager_open_journal] +** WRITER_CACHEMOD -> WRITER_DBMOD [syncJournal] +** WRITER_DBMOD -> WRITER_FINISHED [sqlite3PagerCommitPhaseOne] +** WRITER_*** -> READER [pager_end_transaction] +** +** WRITER_*** -> ERROR [pager_error] +** ERROR -> OPEN [pager_unlock] +** +** +** OPEN: +** +** The pager starts up in this state. Nothing is guaranteed in this +** state - the file may or may not be locked and the database size is +** unknown. The database may not be read or written. +** +** * No read or write transaction is active. +** * Any lock, or no lock at all, may be held on the database file. +** * The dbSize, dbOrigSize and dbFileSize variables may not be trusted. +** +** READER: +** +** In this state all the requirements for reading the database in +** rollback (non-WAL) mode are met. Unless the pager is (or recently +** was) in exclusive-locking mode, a user-level read transaction is +** open. The database size is known in this state. +** +** A connection running with locking_mode=normal enters this state when +** it opens a read-transaction on the database and returns to state +** OPEN after the read-transaction is completed. However a connection +** running in locking_mode=exclusive (including temp databases) remains in +** this state even after the read-transaction is closed. The only way +** a locking_mode=exclusive connection can transition from READER to OPEN +** is via the ERROR state (see below). +** +** * A read transaction may be active (but a write-transaction cannot). +** * A SHARED or greater lock is held on the database file. +** * The dbSize variable may be trusted (even if a user-level read +** transaction is not active). The dbOrigSize and dbFileSize variables +** may not be trusted at this point. 
+** * If the database is a WAL database, then the WAL connection is open. +** * Even if a read-transaction is not open, it is guaranteed that +** there is no hot-journal in the file-system. +** +** WRITER_LOCKED: +** +** The pager moves to this state from READER when a write-transaction +** is first opened on the database. In WRITER_LOCKED state, all locks +** required to start a write-transaction are held, but no actual +** modifications to the cache or database have taken place. +** +** In rollback mode, a RESERVED or (if the transaction was opened with +** BEGIN EXCLUSIVE) EXCLUSIVE lock is obtained on the database file when +** moving to this state, but the journal file is not written to or opened +** to in this state. If the transaction is committed or rolled back while +** in WRITER_LOCKED state, all that is required is to unlock the database +** file. +** +** IN WAL mode, WalBeginWriteTransaction() is called to lock the log file. +** If the connection is running with locking_mode=exclusive, an attempt +** is made to obtain an EXCLUSIVE lock on the database file. +** +** * A write transaction is active. +** * If the connection is open in rollback-mode, a RESERVED or greater +** lock is held on the database file. +** * If the connection is open in WAL-mode, a WAL write transaction +** is open (i.e. sqlite3WalBeginWriteTransaction() has been successfully +** called). +** * The dbSize, dbOrigSize and dbFileSize variables are all valid. +** * The contents of the pager cache have not been modified. +** * The journal file may or may not be open. +** * Nothing (not even the first header) has been written to the journal. +** +** WRITER_CACHEMOD: +** +** A pager moves from WRITER_LOCKED state to this state when a page is +** first modified by the upper layer. In rollback mode the journal file +** is opened (if it is not already open) and a header written to the +** start of it. The database file on disk has not been modified. +** +** * A write transaction is active. +** * A RESERVED or greater lock is held on the database file. +** * The journal file is open and the first header has been written +** to it, but the header has not been synced to disk. +** * The contents of the page cache have been modified. +** +** WRITER_DBMOD: +** +** The pager transitions from WRITER_CACHEMOD into WRITER_DBMOD state +** when it modifies the contents of the database file. WAL connections +** never enter this state (since they do not modify the database file, +** just the log file). +** +** * A write transaction is active. +** * An EXCLUSIVE or greater lock is held on the database file. +** * The journal file is open and the first header has been written +** and synced to disk. +** * The contents of the page cache have been modified (and possibly +** written to disk). +** +** WRITER_FINISHED: +** +** It is not possible for a WAL connection to enter this state. +** +** A rollback-mode pager changes to WRITER_FINISHED state from WRITER_DBMOD +** state after the entire transaction has been successfully written into the +** database file. In this state the transaction may be committed simply +** by finalizing the journal file. Once in WRITER_FINISHED state, it is +** not possible to modify the database further. At this point, the upper +** layer must either commit or rollback the transaction. +** +** * A write transaction is active. +** * An EXCLUSIVE or greater lock is held on the database file. +** * All writing and syncing of journal and database data has finished. 
+** If no error occurred, all that remains is to finalize the journal to +** commit the transaction. If an error did occur, the caller will need +** to rollback the transaction. +** +** ERROR: +** +** The ERROR state is entered when an IO or disk-full error (including +** SQLITE_IOERR_NOMEM) occurs at a point in the code that makes it +** difficult to be sure that the in-memory pager state (cache contents, +** db size etc.) are consistent with the contents of the file-system. +** +** Temporary pager files may enter the ERROR state, but in-memory pagers +** cannot. +** +** For example, if an IO error occurs while performing a rollback, +** the contents of the page-cache may be left in an inconsistent state. +** At this point it would be dangerous to change back to READER state +** (as usually happens after a rollback). Any subsequent readers might +** report database corruption (due to the inconsistent cache), and if +** they upgrade to writers, they may inadvertently corrupt the database +** file. To avoid this hazard, the pager switches into the ERROR state +** instead of READER following such an error. +** +** Once it has entered the ERROR state, any attempt to use the pager +** to read or write data returns an error. Eventually, once all +** outstanding transactions have been abandoned, the pager is able to +** transition back to OPEN state, discarding the contents of the +** page-cache and any other in-memory state at the same time. Everything +** is reloaded from disk (and, if necessary, hot-journal rollback peformed) +** when a read-transaction is next opened on the pager (transitioning +** the pager into READER state). At that point the system has recovered +** from the error. +** +** Specifically, the pager jumps into the ERROR state if: +** +** 1. An error occurs while attempting a rollback. This happens in +** function sqlite3PagerRollback(). +** +** 2. An error occurs while attempting to finalize a journal file +** following a commit in function sqlite3PagerCommitPhaseTwo(). +** +** 3. An error occurs while attempting to write to the journal or +** database file in function pagerStress() in order to free up +** memory. +** +** In other cases, the error is returned to the b-tree layer. The b-tree +** layer then attempts a rollback operation. If the error condition +** persists, the pager enters the ERROR state via condition (1) above. +** +** Condition (3) is necessary because it can be triggered by a read-only +** statement executed within a transaction. In this case, if the error +** code were simply returned to the user, the b-tree layer would not +** automatically attempt a rollback, as it assumes that an error in a +** read-only statement cannot leave the pager in an internally inconsistent +** state. +** +** * The Pager.errCode variable is set to something other than SQLITE_OK. +** * There are one or more outstanding references to pages (after the +** last reference is dropped the pager should move back to OPEN state). +** * The pager is not an in-memory pager. +** +** +** Notes: +** +** * A pager is never in WRITER_DBMOD or WRITER_FINISHED state if the +** connection is open in WAL mode. A WAL connection is always in one +** of the first four states. +** +** * Normally, a connection open in exclusive mode is never in PAGER_OPEN +** state. There are two exceptions: immediately after exclusive-mode has +** been turned on (and before any read or write transactions are +** executed), and when the pager is leaving the "error state". +** +** * See also: assert_pager_state(). 
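The transition diagram above can be restated as a small table. The helper below is a sketch only; the PAGER_* constants it uses are defined immediately below, and the authoritative checks live in assert_pager_state() as noted above.

    /* Editorial sketch: the legal eState transitions drawn above. */
    static int pagerTransitionIsLegal(int eOld, int eNew){
      switch( eOld ){
        case PAGER_OPEN:
          return eNew==PAGER_READER;
        case PAGER_READER:
          return eNew==PAGER_OPEN || eNew==PAGER_WRITER_LOCKED;
        case PAGER_WRITER_LOCKED:
          return eNew==PAGER_WRITER_CACHEMOD || eNew==PAGER_READER
              || eNew==PAGER_ERROR;
        case PAGER_WRITER_CACHEMOD:
          return eNew==PAGER_WRITER_DBMOD || eNew==PAGER_READER
              || eNew==PAGER_ERROR;
        case PAGER_WRITER_DBMOD:
          return eNew==PAGER_WRITER_FINISHED || eNew==PAGER_READER
              || eNew==PAGER_ERROR;
        case PAGER_WRITER_FINISHED:
          return eNew==PAGER_READER || eNew==PAGER_ERROR;
        case PAGER_ERROR:
          return eNew==PAGER_OPEN;
      }
      return 0;
    }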
+*/ +#define PAGER_OPEN 0 +#define PAGER_READER 1 +#define PAGER_WRITER_LOCKED 2 +#define PAGER_WRITER_CACHEMOD 3 +#define PAGER_WRITER_DBMOD 4 +#define PAGER_WRITER_FINISHED 5 +#define PAGER_ERROR 6 + +/* +** The Pager.eLock variable is almost always set to one of the +** following locking-states, according to the lock currently held on +** the database file: NO_LOCK, SHARED_LOCK, RESERVED_LOCK or EXCLUSIVE_LOCK. +** This variable is kept up to date as locks are taken and released by +** the pagerLockDb() and pagerUnlockDb() wrappers. +** +** If the VFS xLock() or xUnlock() returns an error other than SQLITE_BUSY +** (i.e. one of the SQLITE_IOERR subtypes), it is not clear whether or not +** the operation was successful. In these circumstances pagerLockDb() and +** pagerUnlockDb() take a conservative approach - eLock is always updated +** when unlocking the file, and only updated when locking the file if the +** VFS call is successful. This way, the Pager.eLock variable may be set +** to a less exclusive (lower) value than the lock that is actually held +** at the system level, but it is never set to a more exclusive value. +** +** This is usually safe. If an xUnlock fails or appears to fail, there may +** be a few redundant xLock() calls or a lock may be held for longer than +** required, but nothing really goes wrong. +** +** The exception is when the database file is unlocked as the pager moves +** from ERROR to OPEN state. At this point there may be a hot-journal file +** in the file-system that needs to be rolled back (as part of an OPEN->SHARED +** transition, by the same pager or any other). If the call to xUnlock() +** fails at this point and the pager is left holding an EXCLUSIVE lock, this +** can confuse the call to xCheckReservedLock() call made later as part +** of hot-journal detection. +** +** xCheckReservedLock() is defined as returning true "if there is a RESERVED +** lock held by this process or any others". So xCheckReservedLock may +** return true because the caller itself is holding an EXCLUSIVE lock (but +** doesn't know it because of a previous error in xUnlock). If this happens +** a hot-journal may be mistaken for a journal being created by an active +** transaction in another process, causing SQLite to read from the database +** without rolling it back. +** +** To work around this, if a call to xUnlock() fails when unlocking the +** database in the ERROR state, Pager.eLock is set to UNKNOWN_LOCK. It +** is only changed back to a real locking state after a successful call +** to xLock(EXCLUSIVE). Also, the code to do the OPEN->SHARED state transition +** omits the check for a hot-journal if Pager.eLock is set to UNKNOWN_LOCK +** lock. Instead, it assumes a hot-journal exists and obtains an EXCLUSIVE +** lock on the database file before attempting to roll it back. See function +** PagerSharedLock() for more detail. +** +** Pager.eLock may only be set to UNKNOWN_LOCK when the pager is in +** PAGER_OPEN state. +*/ +#define UNKNOWN_LOCK (EXCLUSIVE_LOCK+1) /* ** A macro used for invoking the codec if there is one */ #ifdef SQLITE_HAS_CODEC @@ -32532,10 +44163,24 @@ ** returns a value larger than this, then MAX_SECTOR_SIZE is used instead. ** This could conceivably cause corruption following a power failure on ** such a system. This is currently an undocumented limit. 
*/ #define MAX_SECTOR_SIZE 0x10000 + +/* +** If the option SQLITE_EXTRA_DURABLE option is set at compile-time, then +** SQLite will do extra fsync() operations when synchronous==FULL to help +** ensure that transactions are durable across a power failure. Most +** applications are happy as long as transactions are consistent across +** a power failure, and are perfectly willing to lose the last transaction +** in exchange for the extra performance of avoiding directory syncs. +** And so the default SQLITE_EXTRA_DURABLE setting is off. +*/ +#ifndef SQLITE_EXTRA_DURABLE +# define SQLITE_EXTRA_DURABLE 0 +#endif + /* ** An instance of the following structure is allocated for each active ** savepoint and statement transaction in the system. All such structures ** are stored in the Pager.aSavepoint[] array, which is allocated and @@ -32553,40 +44198,45 @@ i64 iOffset; /* Starting offset in main journal */ i64 iHdrOffset; /* See above */ Bitvec *pInSavepoint; /* Set of pages in this savepoint */ Pgno nOrig; /* Original number of pages in file */ Pgno iSubRec; /* Index of first record in sub-journal */ +#ifndef SQLITE_OMIT_WAL + u32 aWalData[WAL_SAVEPOINT_NDATA]; /* WAL savepoint context */ +#endif }; /* -** A open page cache is an instance of the following structure. -** -** errCode -** -** Pager.errCode may be set to SQLITE_IOERR, SQLITE_CORRUPT, or -** or SQLITE_FULL. Once one of the first three errors occurs, it persists -** and is returned as the result of every major pager API call. The -** SQLITE_FULL return code is slightly different. It persists only until the -** next successful rollback is performed on the pager cache. Also, -** SQLITE_FULL does not affect the sqlite3PagerGet() and sqlite3PagerLookup() -** APIs, they may still be used successfully. -** -** dbSizeValid, dbSize, dbOrigSize, dbFileSize -** -** Managing the size of the database file in pages is a little complicated. -** The variable Pager.dbSize contains the number of pages that the database -** image currently contains. As the database image grows or shrinks this -** variable is updated. The variable Pager.dbFileSize contains the number -** of pages in the database file. This may be different from Pager.dbSize -** if some pages have been appended to the database image but not yet written -** out from the cache to the actual file on disk. Or if the image has been -** truncated by an incremental-vacuum operation. The Pager.dbOrigSize variable -** contains the number of pages in the database image when the current -** transaction was opened. The contents of all three of these variables is -** only guaranteed to be correct if the boolean Pager.dbSizeValid is true. -** -** TODO: Under what conditions is dbSizeValid set? Cleared? +** Bits of the Pager.doNotSpill flag. See further description below. +*/ +#define SPILLFLAG_OFF 0x01 /* Never spill cache. Set via pragma */ +#define SPILLFLAG_ROLLBACK 0x02 /* Current rolling back, so do not spill */ +#define SPILLFLAG_NOSYNC 0x04 /* Spill is ok, but do not sync */ + +/* +** An open page cache is an instance of struct Pager. A description of +** some of the more important member variables follows: +** +** eState +** +** The current 'state' of the pager object. See the comment and state +** diagram above for a description of the pager state. +** +** eLock +** +** For a real on-disk database, the current lock held on the database file - +** NO_LOCK, SHARED_LOCK, RESERVED_LOCK or EXCLUSIVE_LOCK. 
+** +** For a temporary or in-memory database (neither of which require any +** locks), this variable is always set to EXCLUSIVE_LOCK. Since such +** databases always have Pager.exclusiveMode==1, this tricks the pager +** logic into thinking that it already has all the locks it will ever +** need (and no reason to release them). +** +** In some (obscure) circumstances, this variable may also be set to +** UNKNOWN_LOCK. See the comment above the #define of UNKNOWN_LOCK for +** details. ** ** changeCountDone ** ** This boolean variable is used to make sure that the change-counter ** (the 4-byte header field at byte offset 24 of the database file) is @@ -32601,100 +44251,162 @@ ** ** This mechanism means that when running in exclusive mode, a connection ** need only update the change-counter once, for the first transaction ** committed. ** -** dbModified -** -** The dbModified flag is set whenever a database page is dirtied. -** It is cleared at the end of each transaction. -** -** It is used when committing or otherwise ending a transaction. If -** the dbModified flag is clear then less work has to be done. -** -** journalStarted -** -** This flag is set whenever the the main journal is opened and -** initialized -** -** The point of this flag is that it must be set after the -** first journal header in a journal file has been synced to disk. -** After this has happened, new pages appended to the database -** do not need the PGHDR_NEED_SYNC flag set, as they do not need -** to wait for a journal sync before they can be written out to -** the database file (see function pager_write()). -** ** setMaster ** -** This variable is used to ensure that the master journal file name -** (if any) is only written into the journal file once. -** -** When committing a transaction, the master journal file name (if any) -** may be written into the journal file while the pager is still in -** PAGER_RESERVED state (see CommitPhaseOne() for the action). It -** then attempts to upgrade to an exclusive lock. If this attempt -** fails, then SQLITE_BUSY may be returned to the user and the user -** may attempt to commit the transaction again later (calling -** CommitPhaseOne() again). This flag is used to ensure that the -** master journal name is only written to the journal file the first -** time CommitPhaseOne() is called. -** -** doNotSync -** -** When enabled, cache spills are prohibited and the journal file cannot -** be synced. This variable is set and cleared by sqlite3PagerWrite() -** in order to prevent a journal sync from happening in between the -** journalling of two pages on the same sector. -** -** needSync -** -** TODO: It might be easier to set this variable in writeJournalHdr() -** and writeMasterJournal() only. Change its meaning to "unsynced data -** has been written to the journal". +** When PagerCommitPhaseOne() is called to commit a transaction, it may +** (or may not) specify a master-journal name to be written into the +** journal file before it is synced to disk. +** +** Whether or not a journal file contains a master-journal pointer affects +** the way in which the journal file is finalized after the transaction is +** committed or rolled back when running in "journal_mode=PERSIST" mode. +** If a journal file does not contain a master-journal pointer, it is +** finalized by overwriting the first journal header with zeroes. 
If +** it does contain a master-journal pointer the journal file is finalized +** by truncating it to zero bytes, just as if the connection were +** running in "journal_mode=truncate" mode. +** +** Journal files that contain master journal pointers cannot be finalized +** simply by overwriting the first journal-header with zeroes, as the +** master journal pointer could interfere with hot-journal rollback of any +** subsequently interrupted transaction that reuses the journal file. +** +** The flag is cleared as soon as the journal file is finalized (either +** by PagerCommitPhaseTwo or PagerRollback). If an IO error prevents the +** journal file from being successfully finalized, the setMaster flag +** is cleared anyway (and the pager will move to ERROR state). +** +** doNotSpill +** +** This variables control the behavior of cache-spills (calls made by +** the pcache module to the pagerStress() routine to write cached data +** to the file-system in order to free up memory). +** +** When bits SPILLFLAG_OFF or SPILLFLAG_ROLLBACK of doNotSpill are set, +** writing to the database from pagerStress() is disabled altogether. +** The SPILLFLAG_ROLLBACK case is done in a very obscure case that +** comes up during savepoint rollback that requires the pcache module +** to allocate a new page to prevent the journal file from being written +** while it is being traversed by code in pager_playback(). The SPILLFLAG_OFF +** case is a user preference. +** +** If the SPILLFLAG_NOSYNC bit is set, writing to the database from +** pagerStress() is permitted, but syncing the journal file is not. +** This flag is set by sqlite3PagerWrite() when the file-system sector-size +** is larger than the database page-size in order to prevent a journal sync +** from happening in between the journalling of two pages on the same sector. ** ** subjInMemory ** ** This is a boolean variable. If true, then any required sub-journal ** is opened as an in-memory journal file. If false, then in-memory ** sub-journals are only used for in-memory pager files. +** +** This variable is updated by the upper layer each time a new +** write-transaction is opened. +** +** dbSize, dbOrigSize, dbFileSize +** +** Variable dbSize is set to the number of pages in the database file. +** It is valid in PAGER_READER and higher states (all states except for +** OPEN and ERROR). +** +** dbSize is set based on the size of the database file, which may be +** larger than the size of the database (the value stored at offset +** 28 of the database header by the btree). If the size of the file +** is not an integer multiple of the page-size, the value stored in +** dbSize is rounded down (i.e. a 5KB file with 2K page-size has dbSize==2). +** Except, any file that is greater than 0 bytes in size is considered +** to have at least one page. (i.e. a 1KB file with 2K page-size leads +** to dbSize==1). +** +** During a write-transaction, if pages with page-numbers greater than +** dbSize are modified in the cache, dbSize is updated accordingly. +** Similarly, if the database is truncated using PagerTruncateImage(), +** dbSize is updated. +** +** Variables dbOrigSize and dbFileSize are valid in states +** PAGER_WRITER_LOCKED and higher. dbOrigSize is a copy of the dbSize +** variable at the start of the transaction. It is used during rollback, +** and to determine whether or not pages need to be journalled before +** being modified. +** +** Throughout a write-transaction, dbFileSize contains the size of +** the file on disk in pages. 
It is set to a copy of dbSize when the +** write-transaction is first opened, and updated when VFS calls are made +** to write or truncate the database file on disk. +** +** The only reason the dbFileSize variable is required is to suppress +** unnecessary calls to xTruncate() after committing a transaction. If, +** when a transaction is committed, the dbFileSize variable indicates +** that the database file is larger than the database image (Pager.dbSize), +** pager_truncate() is called. The pager_truncate() call uses xFilesize() +** to measure the database file on disk, and then truncates it if required. +** dbFileSize is not used when rolling back a transaction. In this case +** pager_truncate() is called unconditionally (which means there may be +** a call to xFilesize() that is not strictly required). In either case, +** pager_truncate() may cause the file to become smaller or larger. +** +** dbHintSize +** +** The dbHintSize variable is used to limit the number of calls made to +** the VFS xFileControl(FCNTL_SIZE_HINT) method. +** +** dbHintSize is set to a copy of the dbSize variable when a +** write-transaction is opened (at the same time as dbFileSize and +** dbOrigSize). If the xFileControl(FCNTL_SIZE_HINT) method is called, +** dbHintSize is increased to the number of pages that correspond to the +** size-hint passed to the method call. See pager_write_pagelist() for +** details. +** +** errCode +** +** The Pager.errCode variable is only ever used in PAGER_ERROR state. It +** is set to zero in all other states. In PAGER_ERROR state, Pager.errCode +** is always set to SQLITE_FULL, SQLITE_IOERR or one of the SQLITE_IOERR_XXX +** sub-codes. */ struct Pager { sqlite3_vfs *pVfs; /* OS functions to use for IO */ u8 exclusiveMode; /* Boolean. True if locking_mode==EXCLUSIVE */ - u8 journalMode; /* On of the PAGER_JOURNALMODE_* values */ + u8 journalMode; /* One of the PAGER_JOURNALMODE_* values */ u8 useJournal; /* Use a rollback journal on this file */ - u8 noReadlock; /* Do not bother to obtain readlocks */ u8 noSync; /* Do not sync the journal if true */ u8 fullSync; /* Do extra syncs of the journal for robustness */ - u8 sync_flags; /* One of SYNC_NORMAL or SYNC_FULL */ - u8 tempFile; /* zFilename is a temporary file */ + u8 extraSync; /* sync directory after journal delete */ + u8 ckptSyncFlags; /* SYNC_NORMAL or SYNC_FULL for checkpoint */ + u8 walSyncFlags; /* SYNC_NORMAL or SYNC_FULL for wal writes */ + u8 syncFlags; /* SYNC_NORMAL or SYNC_FULL otherwise */ + u8 tempFile; /* zFilename is a temporary or immutable file */ + u8 noLock; /* Do not lock (except in WAL mode) */ u8 readOnly; /* True for a read-only database */ u8 memDb; /* True to inhibit all file I/O */ - /* The following block contains those class members that are dynamically - ** modified during normal operations. The other variables in this structure - ** are either constant throughout the lifetime of the pager, or else - ** used to store configuration parameters that affect the way the pager - ** operates. - ** - ** The 'state' variable is described in more detail along with the - ** descriptions of the values it may take - PAGER_UNLOCK etc. Many of the - ** other variables in this block are described in the comment directly - ** above this class definition. + /************************************************************************** + ** The following block contains those class members that change during + ** routine operation. 
Class members not in this block are either fixed + ** when the pager is first created or else only change when there is a + ** significant mode change (such as changing the page_size, locking_mode, + ** or the journal_mode). From another view, these class members describe + ** the "state" of the pager, while other class members describe the + ** "configuration" of the pager. */ - u8 state; /* PAGER_UNLOCK, _SHARED, _RESERVED, etc. */ - u8 dbModified; /* True if there are any changes to the Db */ - u8 needSync; /* True if an fsync() is needed on the journal */ - u8 journalStarted; /* True if header of journal is synced */ + u8 eState; /* Pager state (OPEN, READER, WRITER_LOCKED..) */ + u8 eLock; /* Current lock held on database file */ u8 changeCountDone; /* Set after incrementing the change-counter */ u8 setMaster; /* True if a m-j name has been written to jrnl */ - u8 doNotSync; /* Boolean. While true, do not spill the cache */ - u8 dbSizeValid; /* Set when dbSize is correct */ + u8 doNotSpill; /* Do not spill the cache when non-zero */ u8 subjInMemory; /* True to use in-memory sub-journals */ + u8 bUseFetch; /* True to use xFetch() */ + u8 hasHeldSharedLock; /* True if a shared lock has ever been held */ Pgno dbSize; /* Number of pages in the database */ Pgno dbOrigSize; /* dbSize before the current transaction */ Pgno dbFileSize; /* Number of pages in the database file */ + Pgno dbHintSize; /* Value passed to FCNTL_SIZE_HINT call */ int errCode; /* One of several kinds of errors */ int nRec; /* Pages journalled since last j-header written */ u32 cksumInit; /* Quasi-random value added to every checksum */ u32 nSubRec; /* Number of records written to sub-journal */ Bitvec *pInJournal; /* One bit for each page in the database file */ @@ -32701,28 +44413,37 @@ sqlite3_file *fd; /* File descriptor for database */ sqlite3_file *jfd; /* File descriptor for main journal */ sqlite3_file *sjfd; /* File descriptor for sub-journal */ i64 journalOff; /* Current write offset in the journal file */ i64 journalHdr; /* Byte offset to previous journal header */ - i64 journalSizeLimit; /* Size limit for persistent journal files */ + sqlite3_backup *pBackup; /* Pointer to list of ongoing backup processes */ PagerSavepoint *aSavepoint; /* Array of active savepoints */ int nSavepoint; /* Number of elements in aSavepoint[] */ + u32 iDataVersion; /* Changes whenever database content changes */ char dbFileVers[16]; /* Changes whenever database file changes */ - u32 sectorSize; /* Assumed sector size during rollback */ + + int nMmapOut; /* Number of mmap pages currently outstanding */ + sqlite3_int64 szMmap; /* Desired maximum mmap size */ + PgHdr *pMmapFreelist; /* List of free mmap page headers (pDirty) */ + /* + ** End of the routinely-changing class members + ***************************************************************************/ u16 nExtra; /* Add this many bytes to each in-memory page */ i16 nReserve; /* Number of unused bytes at end of each page */ u32 vfsFlags; /* Flags for sqlite3_vfs.xOpen() */ + u32 sectorSize; /* Assumed sector size during rollback */ int pageSize; /* Number of bytes in a page */ Pgno mxPgno; /* Maximum allowed size of the database */ + i64 journalSizeLimit; /* Size limit for persistent journal files */ char *zFilename; /* Name of the database file */ char *zJournal; /* Name of the journal file */ int (*xBusyHandler)(void*); /* Function to call when busy */ void *pBusyHandlerArg; /* Context argument for xBusyHandler */ + int aStat[3]; /* Total cache hits, misses and writes */ 
#ifdef SQLITE_TEST - int nHit, nMiss; /* Cache hits and missing */ - int nRead, nWrite; /* Database pages read/written */ + int nRead; /* Database pages read */ #endif void (*xReiniter)(DbPage*); /* Call this routine when reloading pages */ #ifdef SQLITE_HAS_CODEC void *(*xCodec)(void*,void*,Pgno,int); /* Routine for en/decoding data */ void (*xCodecSizeChng)(void*,int,int); /* Notify of page size changes */ @@ -32729,13 +44450,25 @@ void (*xCodecFree)(void*); /* Destructor for the codec */ void *pCodec; /* First argument to xCodec... methods */ #endif char *pTmpSpace; /* Pager.pageSize bytes of space for tmp use */ PCache *pPCache; /* Pointer to page cache object */ - sqlite3_backup *pBackup; /* Pointer to list of ongoing backup processes */ +#ifndef SQLITE_OMIT_WAL + Wal *pWal; /* Write-ahead log used by "journal_mode=wal" */ + char *zWal; /* File name for write-ahead log */ +#endif }; +/* +** Indexes for use with Pager.aStat[]. The Pager.aStat[] array contains +** the values accessed by passing SQLITE_DBSTATUS_CACHE_HIT, CACHE_MISS +** or CACHE_WRITE to sqlite3_db_status(). +*/ +#define PAGER_STAT_HIT 0 +#define PAGER_STAT_MISS 1 +#define PAGER_STAT_WRITE 2 + /* ** The following global variables hold counters used for ** testing purposes only. These variables do not exist in ** a non-testing build. These variables are not thread-safe. */ @@ -32799,30 +44532,241 @@ # define MEMDB 0 #else # define MEMDB pPager->memDb #endif +/* +** The macro USEFETCH is true if we are allowed to use the xFetch and xUnfetch +** interfaces to access the database using memory-mapped I/O. +*/ +#if SQLITE_MAX_MMAP_SIZE>0 +# define USEFETCH(x) ((x)->bUseFetch) +#else +# define USEFETCH(x) 0 +#endif + /* ** The maximum legal page number is (2^31 - 1). */ #define PAGER_MAX_PGNO 2147483647 +/* +** The argument to this macro is a file descriptor (type sqlite3_file*). +** Return 0 if it is not open, or non-zero (but not 1) if it is. +** +** This is so that expressions can be written as: +** +** if( isOpen(pPager->jfd) ){ ... +** +** instead of +** +** if( pPager->jfd->pMethods ){ ... +*/ +#define isOpen(pFd) ((pFd)->pMethods!=0) + +/* +** Return true if this pager uses a write-ahead log instead of the usual +** rollback journal. Otherwise false. +*/ +#ifndef SQLITE_OMIT_WAL +static int pagerUseWal(Pager *pPager){ + return (pPager->pWal!=0); +} +#else +# define pagerUseWal(x) 0 +# define pagerRollbackWal(x) 0 +# define pagerWalFrames(v,w,x,y) 0 +# define pagerOpenWalIfPresent(z) SQLITE_OK +# define pagerBeginReadTransaction(z) SQLITE_OK +#endif + #ifndef NDEBUG /* ** Usage: ** ** assert( assert_pager_state(pPager) ); +** +** This function runs many asserts to try to find inconsistencies in +** the internal state of the Pager object. */ -static int assert_pager_state(Pager *pPager){ +static int assert_pager_state(Pager *p){ + Pager *pPager = p; - /* A temp-file is always in PAGER_EXCLUSIVE or PAGER_SYNCED state. */ - assert( pPager->tempFile==0 || pPager->state>=PAGER_EXCLUSIVE ); + /* State must be valid. */ + assert( p->eState==PAGER_OPEN + || p->eState==PAGER_READER + || p->eState==PAGER_WRITER_LOCKED + || p->eState==PAGER_WRITER_CACHEMOD + || p->eState==PAGER_WRITER_DBMOD + || p->eState==PAGER_WRITER_FINISHED + || p->eState==PAGER_ERROR + ); - /* The changeCountDone flag is always set for temp-files */ - assert( pPager->tempFile==0 || pPager->changeCountDone ); + /* Regardless of the current state, a temp-file connection always behaves + ** as if it has an exclusive lock on the database file. 
It never updates + ** the change-counter field, so the changeCountDone flag is always set. + */ + assert( p->tempFile==0 || p->eLock==EXCLUSIVE_LOCK ); + assert( p->tempFile==0 || pPager->changeCountDone ); + + /* If the useJournal flag is clear, the journal-mode must be "OFF". + ** And if the journal-mode is "OFF", the journal file must not be open. + */ + assert( p->journalMode==PAGER_JOURNALMODE_OFF || p->useJournal ); + assert( p->journalMode!=PAGER_JOURNALMODE_OFF || !isOpen(p->jfd) ); + + /* Check that MEMDB implies noSync. And an in-memory journal. Since + ** this means an in-memory pager performs no IO at all, it cannot encounter + ** either SQLITE_IOERR or SQLITE_FULL during rollback or while finalizing + ** a journal file. (although the in-memory journal implementation may + ** return SQLITE_IOERR_NOMEM while the journal file is being written). It + ** is therefore not possible for an in-memory pager to enter the ERROR + ** state. + */ + if( MEMDB ){ + assert( p->noSync ); + assert( p->journalMode==PAGER_JOURNALMODE_OFF + || p->journalMode==PAGER_JOURNALMODE_MEMORY + ); + assert( p->eState!=PAGER_ERROR && p->eState!=PAGER_OPEN ); + assert( pagerUseWal(p)==0 ); + } + + /* If changeCountDone is set, a RESERVED lock or greater must be held + ** on the file. + */ + assert( pPager->changeCountDone==0 || pPager->eLock>=RESERVED_LOCK ); + assert( p->eLock!=PENDING_LOCK ); + + switch( p->eState ){ + case PAGER_OPEN: + assert( !MEMDB ); + assert( pPager->errCode==SQLITE_OK ); + assert( sqlite3PcacheRefCount(pPager->pPCache)==0 || pPager->tempFile ); + break; + + case PAGER_READER: + assert( pPager->errCode==SQLITE_OK ); + assert( p->eLock!=UNKNOWN_LOCK ); + assert( p->eLock>=SHARED_LOCK ); + break; + + case PAGER_WRITER_LOCKED: + assert( p->eLock!=UNKNOWN_LOCK ); + assert( pPager->errCode==SQLITE_OK ); + if( !pagerUseWal(pPager) ){ + assert( p->eLock>=RESERVED_LOCK ); + } + assert( pPager->dbSize==pPager->dbOrigSize ); + assert( pPager->dbOrigSize==pPager->dbFileSize ); + assert( pPager->dbOrigSize==pPager->dbHintSize ); + assert( pPager->setMaster==0 ); + break; + + case PAGER_WRITER_CACHEMOD: + assert( p->eLock!=UNKNOWN_LOCK ); + assert( pPager->errCode==SQLITE_OK ); + if( !pagerUseWal(pPager) ){ + /* It is possible that if journal_mode=wal here that neither the + ** journal file nor the WAL file are open. This happens during + ** a rollback transaction that switches from journal_mode=off + ** to journal_mode=wal. + */ + assert( p->eLock>=RESERVED_LOCK ); + assert( isOpen(p->jfd) + || p->journalMode==PAGER_JOURNALMODE_OFF + || p->journalMode==PAGER_JOURNALMODE_WAL + ); + } + assert( pPager->dbOrigSize==pPager->dbFileSize ); + assert( pPager->dbOrigSize==pPager->dbHintSize ); + break; + + case PAGER_WRITER_DBMOD: + assert( p->eLock==EXCLUSIVE_LOCK ); + assert( pPager->errCode==SQLITE_OK ); + assert( !pagerUseWal(pPager) ); + assert( p->eLock>=EXCLUSIVE_LOCK ); + assert( isOpen(p->jfd) + || p->journalMode==PAGER_JOURNALMODE_OFF + || p->journalMode==PAGER_JOURNALMODE_WAL + ); + assert( pPager->dbOrigSize<=pPager->dbHintSize ); + break; + + case PAGER_WRITER_FINISHED: + assert( p->eLock==EXCLUSIVE_LOCK ); + assert( pPager->errCode==SQLITE_OK ); + assert( !pagerUseWal(pPager) ); + assert( isOpen(p->jfd) + || p->journalMode==PAGER_JOURNALMODE_OFF + || p->journalMode==PAGER_JOURNALMODE_WAL + ); + break; + + case PAGER_ERROR: + /* There must be at least one outstanding reference to the pager if + ** in ERROR state. Otherwise the pager should have already dropped + ** back to OPEN state. 
+ */ + assert( pPager->errCode!=SQLITE_OK ); + assert( sqlite3PcacheRefCount(pPager->pPCache)>0 ); + break; + } return 1; +} +#endif /* ifndef NDEBUG */ + +#ifdef SQLITE_DEBUG +/* +** Return a pointer to a human readable string in a static buffer +** containing the state of the Pager object passed as an argument. This +** is intended to be used within debuggers. For example, as an alternative +** to "print *pPager" in gdb: +** +** (gdb) printf "%s", print_pager_state(pPager) +*/ +static char *print_pager_state(Pager *p){ + static char zRet[1024]; + + sqlite3_snprintf(1024, zRet, + "Filename: %s\n" + "State: %s errCode=%d\n" + "Lock: %s\n" + "Locking mode: locking_mode=%s\n" + "Journal mode: journal_mode=%s\n" + "Backing store: tempFile=%d memDb=%d useJournal=%d\n" + "Journal: journalOff=%lld journalHdr=%lld\n" + "Size: dbsize=%d dbOrigSize=%d dbFileSize=%d\n" + , p->zFilename + , p->eState==PAGER_OPEN ? "OPEN" : + p->eState==PAGER_READER ? "READER" : + p->eState==PAGER_WRITER_LOCKED ? "WRITER_LOCKED" : + p->eState==PAGER_WRITER_CACHEMOD ? "WRITER_CACHEMOD" : + p->eState==PAGER_WRITER_DBMOD ? "WRITER_DBMOD" : + p->eState==PAGER_WRITER_FINISHED ? "WRITER_FINISHED" : + p->eState==PAGER_ERROR ? "ERROR" : "?error?" + , (int)p->errCode + , p->eLock==NO_LOCK ? "NO_LOCK" : + p->eLock==RESERVED_LOCK ? "RESERVED" : + p->eLock==EXCLUSIVE_LOCK ? "EXCLUSIVE" : + p->eLock==SHARED_LOCK ? "SHARED" : + p->eLock==UNKNOWN_LOCK ? "UNKNOWN" : "?error?" + , p->exclusiveMode ? "exclusive" : "normal" + , p->journalMode==PAGER_JOURNALMODE_MEMORY ? "memory" : + p->journalMode==PAGER_JOURNALMODE_OFF ? "off" : + p->journalMode==PAGER_JOURNALMODE_DELETE ? "delete" : + p->journalMode==PAGER_JOURNALMODE_PERSIST ? "persist" : + p->journalMode==PAGER_JOURNALMODE_TRUNCATE ? "truncate" : + p->journalMode==PAGER_JOURNALMODE_WAL ? "wal" : "?error?" + , (int)p->tempFile, (int)p->memDb, (int)p->useJournal + , p->journalOff, p->journalHdr + , (int)p->dbSize, (int)p->dbOrigSize, (int)p->dbFileSize + ); + + return zRet; } #endif /* ** Return true if it is necessary to write page *pPg into the sub-journal. @@ -32832,28 +44776,31 @@ ** * The page-number is less than or equal to PagerSavepoint.nOrig, and ** * The bit corresponding to the page-number is not set in ** PagerSavepoint.pInSavepoint. */ static int subjRequiresPage(PgHdr *pPg){ - Pgno pgno = pPg->pgno; Pager *pPager = pPg->pPager; + PagerSavepoint *p; + Pgno pgno = pPg->pgno; int i; for(i=0; i<pPager->nSavepoint; i++){ - PagerSavepoint *p = &pPager->aSavepoint[i]; - if( p->nOrig>=pgno && 0==sqlite3BitvecTest(p->pInSavepoint, pgno) ){ + p = &pPager->aSavepoint[i]; + if( p->nOrig>=pgno && 0==sqlite3BitvecTestNotNull(p->pInSavepoint, pgno) ){ return 1; } } return 0; } +#ifdef SQLITE_DEBUG /* ** Return true if the page is already in the journal file. */ -static int pageInJournal(PgHdr *pPg){ - return sqlite3BitvecTest(pPg->pPager->pInJournal, pPg->pgno); +static int pageInJournal(Pager *pPager, PgHdr *pPg){ + return sqlite3BitvecTest(pPager->pInJournal, pPg->pgno); } +#endif /* ** Read a 32-bit integer from the given file descriptor. Store the integer ** that is read in *pRes. Return SQLITE_OK if everything worked, or an ** error code is something goes wrong. @@ -32871,10 +44818,11 @@ /* ** Write a 32-bit integer into a string buffer in big-endian byte order. */ #define put32bits(A,B) sqlite3Put4byte((u8*)A,B) + /* ** Write a 32-bit integer into the given file descriptor. Return SQLITE_OK ** on success or an error code is something goes wrong. 
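**
** (For illustration: put32bits()/sqlite3Put4byte() store the value in
** big-endian order, most significant byte first, so writing the value
** 0x11223344 produces the on-disk bytes 0x11 0x22 0x33 0x44, and
** read32bits() applied to those four bytes recovers 0x11223344.  The
** 32-bit fields of the journal header and of the master-journal trailer
** are all written this way.)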
*/ @@ -32883,31 +44831,57 @@ put32bits(ac, val); return sqlite3OsWrite(fd, ac, 4, offset); } /* -** The argument to this macro is a file descriptor (type sqlite3_file*). -** Return 0 if it is not open, or non-zero (but not 1) if it is. -** -** This is so that expressions can be written as: -** -** if( isOpen(pPager->jfd) ){ ... -** -** instead of -** -** if( pPager->jfd->pMethods ){ ... -*/ -#define isOpen(pFd) ((pFd)->pMethods) +** Unlock the database file to level eLock, which must be either NO_LOCK +** or SHARED_LOCK. Regardless of whether or not the call to xUnlock() +** succeeds, set the Pager.eLock variable to match the (attempted) new lock. +** +** Except, if Pager.eLock is set to UNKNOWN_LOCK when this function is +** called, do not modify it. See the comment above the #define of +** UNKNOWN_LOCK for an explanation of this. +*/ +static int pagerUnlockDb(Pager *pPager, int eLock){ + int rc = SQLITE_OK; + + assert( !pPager->exclusiveMode || pPager->eLock==eLock ); + assert( eLock==NO_LOCK || eLock==SHARED_LOCK ); + assert( eLock!=NO_LOCK || pagerUseWal(pPager)==0 ); + if( isOpen(pPager->fd) ){ + assert( pPager->eLock>=eLock ); + rc = pPager->noLock ? SQLITE_OK : sqlite3OsUnlock(pPager->fd, eLock); + if( pPager->eLock!=UNKNOWN_LOCK ){ + pPager->eLock = (u8)eLock; + } + IOTRACE(("UNLOCK %p %d\n", pPager, eLock)) + } + return rc; +} /* -** If file pFd is open, call sqlite3OsUnlock() on it. +** Lock the database file to level eLock, which must be either SHARED_LOCK, +** RESERVED_LOCK or EXCLUSIVE_LOCK. If the caller is successful, set the +** Pager.eLock variable to the new locking state. +** +** Except, if Pager.eLock is set to UNKNOWN_LOCK when this function is +** called, do not modify it unless the new locking state is EXCLUSIVE_LOCK. +** See the comment above the #define of UNKNOWN_LOCK for an explanation +** of this. */ -static int osUnlock(sqlite3_file *pFd, int eLock){ - if( !isOpen(pFd) ){ - return SQLITE_OK; +static int pagerLockDb(Pager *pPager, int eLock){ + int rc = SQLITE_OK; + + assert( eLock==SHARED_LOCK || eLock==RESERVED_LOCK || eLock==EXCLUSIVE_LOCK ); + if( pPager->eLock<eLock || pPager->eLock==UNKNOWN_LOCK ){ + rc = pPager->noLock ? SQLITE_OK : sqlite3OsLock(pPager->fd, eLock); + if( rc==SQLITE_OK && (pPager->eLock!=UNKNOWN_LOCK||eLock==EXCLUSIVE_LOCK) ){ + pPager->eLock = (u8)eLock; + IOTRACE(("LOCK %p %d\n", pPager, eLock)) + } } - return sqlite3OsUnlock(pFd, eLock); + return rc; } /* ** This function determines whether or not the atomic-write optimization ** can be used with this pager. The optimization can be used if: @@ -32979,17 +44953,18 @@ ** that the page is either dirty or still matches the calculated page-hash. */ #define CHECK_PAGE(x) checkPage(x) static void checkPage(PgHdr *pPg){ Pager *pPager = pPg->pPager; - assert( !pPg->pageHash || pPager->errCode - || (pPg->flags&PGHDR_DIRTY) || pPg->pageHash==pager_pagehash(pPg) ); + assert( pPager->eState!=PAGER_ERROR ); + assert( (pPg->flags&PGHDR_DIRTY) || pPg->pageHash==pager_pagehash(pPg) ); } #else #define pager_datahash(X,Y) 0 #define pager_pagehash(X) 0 +#define pager_set_pagehash(X) #define CHECK_PAGE(x) #endif /* SQLITE_CHECK_PAGES */ /* ** When this is called the journal file for pager pPager must be open. 
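**
** (For reference, the master-journal name that this routine looks for is
** stored at the very end of the journal file in the following layout,
** with all integers big-endian and szJ denoting the total journal size:
**
**     offset szJ-16-len   the master-journal name itself (len bytes)
**     offset szJ-16       len, the length of the name in bytes
**     offset szJ-12       cksum, the byte-sum checksum of the name
**     offset szJ-8        the 8-byte journal magic
**
** This summary is inferred from the reads performed below and from the
** matching writes in writeMasterJournal(); the code is authoritative.)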
@@ -33027,10 +45002,11 @@ if( SQLITE_OK!=(rc = sqlite3OsFileSize(pJrnl, &szJ)) || szJ<16 || SQLITE_OK!=(rc = read32bits(pJrnl, szJ-16, &len)) || len>=nMaster + || len==0 || SQLITE_OK!=(rc = read32bits(pJrnl, szJ-12, &cksum)) || SQLITE_OK!=(rc = sqlite3OsRead(pJrnl, aMagic, 8, szJ-8)) || memcmp(aMagic, aJournalMagic, 8) || SQLITE_OK!=(rc = sqlite3OsRead(pJrnl, zMaster, len, szJ-16-len)) ){ @@ -33114,11 +45090,11 @@ }else{ static const char zeroHdr[28] = {0}; rc = sqlite3OsWrite(pPager->jfd, zeroHdr, sizeof(zeroHdr), 0); } if( rc==SQLITE_OK && !pPager->noSync ){ - rc = sqlite3OsSync(pPager->jfd, SQLITE_SYNC_DATAONLY|pPager->sync_flags); + rc = sqlite3OsSync(pPager->jfd, SQLITE_SYNC_DATAONLY|pPager->syncFlags); } /* At this point the transaction is committed but the write lock ** is still held on the file. If there is a size limit configured for ** the persistent journal and the journal file currently consumes more @@ -33152,11 +45128,11 @@ ** Followed by (JOURNAL_HDR_SZ - 28) bytes of unused space. */ static int writeJournalHdr(Pager *pPager){ int rc = SQLITE_OK; /* Return code */ char *zHeader = pPager->pTmpSpace; /* Temporary space used to build header */ - u32 nHeader = pPager->pageSize; /* Size of buffer pointed to by zHeader */ + u32 nHeader = (u32)pPager->pageSize;/* Size of buffer pointed to by zHeader */ u32 nWrite; /* Bytes of header sector written */ int ii; /* Loop counter */ assert( isOpen(pPager->jfd) ); /* Journal file must be open. */ @@ -33195,20 +45171,20 @@ ** ** * When the SQLITE_IOCAP_SAFE_APPEND flag is set. This guarantees ** that garbage data is never appended to the journal file. */ assert( isOpen(pPager->fd) || pPager->noSync ); - if( (pPager->noSync) || (pPager->journalMode==PAGER_JOURNALMODE_MEMORY) + if( pPager->noSync || (pPager->journalMode==PAGER_JOURNALMODE_MEMORY) || (sqlite3OsDeviceCharacteristics(pPager->fd)&SQLITE_IOCAP_SAFE_APPEND) ){ memcpy(zHeader, aJournalMagic, sizeof(aJournalMagic)); put32bits(&zHeader[sizeof(aJournalMagic)], 0xffffffff); }else{ memset(zHeader, 0, sizeof(aJournalMagic)+4); } - /* The random check-hash initialiser */ + /* The random check-hash initializer */ sqlite3_randomness(sizeof(pPager->cksumInit), &pPager->cksumInit); put32bits(&zHeader[sizeof(aJournalMagic)+4], pPager->cksumInit); /* The initial database size */ put32bits(&zHeader[sizeof(aJournalMagic)+8], pPager->dbOrigSize); /* The assumed sector size for this process */ @@ -33319,18 +45295,25 @@ } if( pPager->journalOff==0 ){ u32 iPageSize; /* Page-size field of journal header */ u32 iSectorSize; /* Sector-size field of journal header */ - u16 iPageSize16; /* Copy of iPageSize in 16-bit variable */ /* Read the page-size and sector-size journal header fields. */ if( SQLITE_OK!=(rc = read32bits(pPager->jfd, iHdrOff+20, &iSectorSize)) || SQLITE_OK!=(rc = read32bits(pPager->jfd, iHdrOff+24, &iPageSize)) ){ return rc; } + + /* Versions of SQLite prior to 3.5.8 set the page-size field of the + ** journal header to zero. In this case, assume that the Pager.pageSize + ** variable is already set to the correct page size. + */ + if( iPageSize==0 ){ + iPageSize = pPager->pageSize; + } /* Check that the values read from the page-size and sector-size fields ** are within range. To be 'in range', both values need to be a power ** of two greater than or equal to 512 or 32, and not greater than their ** respective compile time maximum limits. @@ -33349,14 +45332,12 @@ /* Update the page-size to match the value read from the journal. 
** Use a testcase() macro to make sure that malloc failure within ** PagerSetPagesize() is tested. */ - iPageSize16 = (u16)iPageSize; - rc = sqlite3PagerSetPagesize(pPager, &iPageSize16, -1); + rc = sqlite3PagerSetPagesize(pPager, &iPageSize, -1); testcase( rc!=SQLITE_OK ); - assert( rc!=SQLITE_OK || iPageSize16==(u16)iPageSize ); /* Update the assumed sector-size to match the value used by ** the process that created this journal. If this journal was ** created by a process other than this one, then this routine ** is being called from within pager_playback(). The local value @@ -33394,18 +45375,20 @@ int nMaster; /* Length of string zMaster */ i64 iHdrOff; /* Offset of header in journal file */ i64 jrnlSize; /* Size of journal file on disk */ u32 cksum = 0; /* Checksum of string zMaster */ - if( !zMaster || pPager->setMaster + assert( pPager->setMaster==0 ); + assert( !pagerUseWal(pPager) ); + + if( !zMaster || pPager->journalMode==PAGER_JOURNALMODE_MEMORY - || pPager->journalMode==PAGER_JOURNALMODE_OFF + || !isOpen(pPager->jfd) ){ return SQLITE_OK; } pPager->setMaster = 1; - assert( isOpen(pPager->jfd) ); assert( pPager->journalHdr <= pPager->journalOff ); /* Calculate the length in bytes and the checksum of zMaster */ for(nMaster=0; zMaster[nMaster]; nMaster++){ cksum += zMaster[nMaster]; @@ -33425,16 +45408,16 @@ */ if( (0 != (rc = write32bits(pPager->jfd, iHdrOff, PAGER_MJ_PGNO(pPager)))) || (0 != (rc = sqlite3OsWrite(pPager->jfd, zMaster, nMaster, iHdrOff+4))) || (0 != (rc = write32bits(pPager->jfd, iHdrOff+4+nMaster, nMaster))) || (0 != (rc = write32bits(pPager->jfd, iHdrOff+4+nMaster+4, cksum))) - || (0 != (rc = sqlite3OsWrite(pPager->jfd, aJournalMagic, 8, iHdrOff+4+nMaster+8))) + || (0 != (rc = sqlite3OsWrite(pPager->jfd, aJournalMagic, 8, + iHdrOff+4+nMaster+8))) ){ return rc; } pPager->journalOff += (nMaster+20); - pPager->needSync = !pPager->noSync; /* If the pager is in peristent-journal mode, then the physical ** journal-file may extend past the end of the master-journal name ** and 8 bytes of magic data just written to the file. This is ** dangerous because the code to rollback a hot-journal file @@ -33451,36 +45434,24 @@ } return rc; } /* -** Find a page in the hash table given its page number. Return -** a pointer to the page or NULL if the requested page is not -** already in memory. -*/ -static PgHdr *pager_lookup(Pager *pPager, Pgno pgno){ - PgHdr *p; /* Return value */ - - /* It is not possible for a call to PcacheFetch() with createFlag==0 to - ** fail, since no attempt to allocate dynamic memory will be made. - */ - (void)sqlite3PcacheFetch(pPager->pPCache, pgno, 0, &p); - return p; -} - -/* -** Unless the pager is in error-state, discard all in-memory pages. If -** the pager is in error-state, then this call is a no-op. -** -** TODO: Why can we not reset the pager while in error state? +** Discard the entire contents of the in-memory page-cache. */ static void pager_reset(Pager *pPager){ - if( SQLITE_OK==pPager->errCode ){ - sqlite3BackupRestart(pPager->pBackup); - sqlite3PcacheClear(pPager->pPCache); - pPager->dbSizeValid = 0; - } + pPager->iDataVersion++; + sqlite3BackupRestart(pPager->pBackup); + sqlite3PcacheClear(pPager->pPCache); +} + +/* +** Return the pPager->iDataVersion value +*/ +SQLITE_PRIVATE u32 sqlite3PagerDataVersion(Pager *pPager){ + assert( pPager->eState>PAGER_OPEN ); + return pPager->iDataVersion; } /* ** Free all structures in the Pager.aSavepoint[] array and set both ** Pager.aSavepoint and Pager.nSavepoint to zero. 
Close the sub-journal @@ -33519,76 +45490,113 @@ } return rc; } /* -** Unlock the database file. This function is a no-op if the pager -** is in exclusive mode. +** This function is a no-op if the pager is in exclusive mode and not +** in the ERROR state. Otherwise, it switches the pager to PAGER_OPEN +** state. ** -** If the pager is currently in error state, discard the contents of -** the cache and reset the Pager structure internal state. If there is -** an open journal-file, then the next time a shared-lock is obtained -** on the pager file (by this or any other process), it will be -** treated as a hot-journal and rolled back. +** If the pager is not in exclusive-access mode, the database file is +** completely unlocked. If the file is unlocked and the file-system does +** not exhibit the UNDELETABLE_WHEN_OPEN property, the journal file is +** closed (if it is open). +** +** If the pager is in ERROR state when this function is called, the +** contents of the pager cache are discarded before switching back to +** the OPEN state. Regardless of whether the pager is in exclusive-mode +** or not, any journal file left in the file-system will be treated +** as a hot-journal and rolled back the next time a read-transaction +** is opened (by this or by any other connection). */ static void pager_unlock(Pager *pPager){ - if( !pPager->exclusiveMode ){ - int rc; /* Return code */ - - /* Always close the journal file when dropping the database lock. - ** Otherwise, another connection with journal_mode=delete might - ** delete the file out from under us. - */ - sqlite3OsClose(pPager->jfd); - sqlite3BitvecDestroy(pPager->pInJournal); - pPager->pInJournal = 0; - releaseAllSavepoints(pPager); - - /* If the file is unlocked, somebody else might change it. The - ** values stored in Pager.dbSize etc. might become invalid if - ** this happens. One can argue that this doesn't need to be cleared - ** until the change-counter check fails in PagerSharedLock(). - ** Clearing the page size cache here is being conservative. - */ - pPager->dbSizeValid = 0; - - rc = osUnlock(pPager->fd, NO_LOCK); - if( rc ){ - pPager->errCode = rc; - } - IOTRACE(("UNLOCK %p\n", pPager)) - - /* If Pager.errCode is set, the contents of the pager cache cannot be - ** trusted. Now that the pager file is unlocked, the contents of the - ** cache can be discarded and the error code safely cleared. - */ - if( pPager->errCode ){ - if( rc==SQLITE_OK ){ - pPager->errCode = SQLITE_OK; - } - pager_reset(pPager); - } - + + assert( pPager->eState==PAGER_READER + || pPager->eState==PAGER_OPEN + || pPager->eState==PAGER_ERROR + ); + + sqlite3BitvecDestroy(pPager->pInJournal); + pPager->pInJournal = 0; + releaseAllSavepoints(pPager); + + if( pagerUseWal(pPager) ){ + assert( !isOpen(pPager->jfd) ); + sqlite3WalEndReadTransaction(pPager->pWal); + pPager->eState = PAGER_OPEN; + }else if( !pPager->exclusiveMode ){ + int rc; /* Error code returned by pagerUnlockDb() */ + int iDc = isOpen(pPager->fd)?sqlite3OsDeviceCharacteristics(pPager->fd):0; + + /* If the operating system support deletion of open files, then + ** close the journal file when dropping the database lock. Otherwise + ** another connection with journal_mode=delete might delete the file + ** out from under us. 
+ */ + assert( (PAGER_JOURNALMODE_MEMORY & 5)!=1 ); + assert( (PAGER_JOURNALMODE_OFF & 5)!=1 ); + assert( (PAGER_JOURNALMODE_WAL & 5)!=1 ); + assert( (PAGER_JOURNALMODE_DELETE & 5)!=1 ); + assert( (PAGER_JOURNALMODE_TRUNCATE & 5)==1 ); + assert( (PAGER_JOURNALMODE_PERSIST & 5)==1 ); + if( 0==(iDc & SQLITE_IOCAP_UNDELETABLE_WHEN_OPEN) + || 1!=(pPager->journalMode & 5) + ){ + sqlite3OsClose(pPager->jfd); + } + + /* If the pager is in the ERROR state and the call to unlock the database + ** file fails, set the current lock to UNKNOWN_LOCK. See the comment + ** above the #define for UNKNOWN_LOCK for an explanation of why this + ** is necessary. + */ + rc = pagerUnlockDb(pPager, NO_LOCK); + if( rc!=SQLITE_OK && pPager->eState==PAGER_ERROR ){ + pPager->eLock = UNKNOWN_LOCK; + } + + /* The pager state may be changed from PAGER_ERROR to PAGER_OPEN here + ** without clearing the error code. This is intentional - the error + ** code is cleared and the cache reset in the block below. + */ + assert( pPager->errCode || pPager->eState!=PAGER_ERROR ); pPager->changeCountDone = 0; - pPager->state = PAGER_UNLOCK; - pPager->dbModified = 0; + pPager->eState = PAGER_OPEN; } + + /* If Pager.errCode is set, the contents of the pager cache cannot be + ** trusted. Now that there are no outstanding references to the pager, + ** it can safely move back to PAGER_OPEN state. This happens in both + ** normal and exclusive-locking mode. + */ + if( pPager->errCode ){ + assert( !MEMDB ); + pager_reset(pPager); + pPager->changeCountDone = pPager->tempFile; + pPager->eState = PAGER_OPEN; + pPager->errCode = SQLITE_OK; + if( USEFETCH(pPager) ) sqlite3OsUnfetch(pPager->fd, 0, 0); + } + + pPager->journalOff = 0; + pPager->journalHdr = 0; + pPager->setMaster = 0; } /* -** This function should be called when an IOERR, CORRUPT or FULL error -** may have occurred. The first argument is a pointer to the pager -** structure, the second the error-code about to be returned by a pager -** API function. The value returned is a copy of the second argument -** to this function. -** -** If the second argument is SQLITE_IOERR, SQLITE_CORRUPT, or SQLITE_FULL -** the error becomes persistent. Until the persisten error is cleared, -** subsequent API calls on this Pager will immediately return the same -** error code. -** -** A persistent error indicates that the contents of the pager-cache +** This function is called whenever an IOERR or FULL error that requires +** the pager to transition into the ERROR state may ahve occurred. +** The first argument is a pointer to the pager structure, the second +** the error-code about to be returned by a pager API function. The +** value returned is a copy of the second argument to this function. +** +** If the second argument is SQLITE_FULL, SQLITE_IOERR or one of the +** IOERR sub-codes, the pager enters the ERROR state and the error code +** is stored in Pager.errCode. While the pager remains in the ERROR state, +** all major API calls on the Pager will immediately return Pager.errCode. +** +** The ERROR state indicates that the contents of the pager-cache ** cannot be trusted. This state can be cleared by completely discarding ** the contents of the pager-cache. If a transaction was active when ** the persistent error occurred, then the rollback journal may need ** to be replayed to restore the contents of the database file (as if ** it were a hot-journal). 
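**
** (In practice this means the pager enters the ERROR state only through
** this function, and leaves it only in pager_unlock(), which discards the
** page cache, clears Pager.errCode and returns to OPEN state once the last
** outstanding page reference has been released.)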
@@ -33601,49 +45609,27 @@ pPager->errCode==SQLITE_OK || (pPager->errCode & 0xff)==SQLITE_IOERR ); if( rc2==SQLITE_FULL || rc2==SQLITE_IOERR ){ pPager->errCode = rc; + pPager->eState = PAGER_ERROR; } return rc; } -/* -** Execute a rollback if a transaction is active and unlock the -** database file. -** -** If the pager has already entered the error state, do not attempt -** the rollback at this time. Instead, pager_unlock() is called. The -** call to pager_unlock() will discard all in-memory pages, unlock -** the database file and clear the error state. If this means that -** there is a hot-journal left in the file-system, the next connection -** to obtain a shared lock on the pager (which may be this one) will -** roll it back. -** -** If the pager has not already entered the error state, but an IO or -** malloc error occurs during a rollback, then this will itself cause -** the pager to enter the error state. Which will be cleared by the -** call to pager_unlock(), as described above. -*/ -static void pagerUnlockAndRollback(Pager *pPager){ - if( pPager->errCode==SQLITE_OK && pPager->state>=PAGER_RESERVED ){ - sqlite3BeginBenignMalloc(); - sqlite3PagerRollback(pPager); - sqlite3EndBenignMalloc(); - } - pager_unlock(pPager); -} +static int pager_truncate(Pager *pPager, Pgno nPage); /* ** This routine ends a transaction. A transaction is usually ended by ** either a COMMIT or a ROLLBACK operation. This routine may be called ** after rollback of a hot-journal, or if an error occurs while opening ** the journal file or writing the very first journal-header of a ** database transaction. ** -** If the pager is in PAGER_SHARED or PAGER_UNLOCK state when this -** routine is called, it is a no-op (returns SQLITE_OK). +** This routine is never called in PAGER_ERROR state. If it is called +** in PAGER_NONE or PAGER_SHARED state and the lock held is less +** exclusive than a RESERVED lock, it is a no-op. ** ** Otherwise, any active savepoints are released. ** ** If the journal file is open, then it is "finalized". Once a journal ** file has been finalized it is not possible to use it to roll back a @@ -33670,17 +45656,13 @@ ** If the pager is running in exclusive mode, this method of finalizing ** the journal file is never used. Instead, if the journalMode is ** DELETE and the pager is in exclusive mode, the method described under ** journalMode==PERSIST is used instead. ** -** After the journal is finalized, if running in non-exclusive mode, the -** pager moves to PAGER_SHARED state (and downgrades the lock on the -** database file accordingly). -** -** If the pager is running in exclusive mode and is in PAGER_SYNCED state, -** it moves to PAGER_EXCLUSIVE. No locks are downgraded when running in -** exclusive mode. +** After the journal is finalized, the pager moves to PAGER_READER state. +** If running in non-exclusive rollback mode, the lock on the file is +** downgraded to a SHARED_LOCK. ** ** SQLITE_OK is returned if no error occurs. If an error occurs during ** any of the IO operations to finalize the journal file or unlock the ** database then the IO error code is returned to the user. If the ** operation to finalize the journal file fails, then the code still @@ -33687,21 +45669,37 @@ ** tries to unlock the database file if not in exclusive mode. If the ** unlock operation fails as well, then the first error code related ** to the first error encountered (the journal finalization one) is ** returned. 
*/ -static int pager_end_transaction(Pager *pPager, int hasMaster){ +static int pager_end_transaction(Pager *pPager, int hasMaster, int bCommit){ int rc = SQLITE_OK; /* Error code from journal finalization operation */ int rc2 = SQLITE_OK; /* Error code from db file unlock operation */ - if( pPager->state<PAGER_RESERVED ){ + /* Do nothing if the pager does not have an open write transaction + ** or at least a RESERVED lock. This function may be called when there + ** is no write-transaction active but a RESERVED or greater lock is + ** held under two circumstances: + ** + ** 1. After a successful hot-journal rollback, it is called with + ** eState==PAGER_NONE and eLock==EXCLUSIVE_LOCK. + ** + ** 2. If a connection with locking_mode=exclusive holding an EXCLUSIVE + ** lock switches back to locking_mode=normal and then executes a + ** read-transaction, this function is called with eState==PAGER_READER + ** and eLock==EXCLUSIVE_LOCK when the read-transaction is closed. + */ + assert( assert_pager_state(pPager) ); + assert( pPager->eState!=PAGER_ERROR ); + if( pPager->eState<PAGER_WRITER_LOCKED && pPager->eLock<RESERVED_LOCK ){ return SQLITE_OK; } + releaseAllSavepoints(pPager); - assert( isOpen(pPager->jfd) || pPager->pInJournal==0 ); if( isOpen(pPager->jfd) ){ + assert( !pagerUseWal(pPager) ); /* Finalize the journal file. */ if( sqlite3IsMemJournal(pPager->jfd) ){ assert( pPager->journalMode==PAGER_JOURNALMODE_MEMORY ); sqlite3OsClose(pPager->jfd); @@ -33708,64 +45706,126 @@ }else if( pPager->journalMode==PAGER_JOURNALMODE_TRUNCATE ){ if( pPager->journalOff==0 ){ rc = SQLITE_OK; }else{ rc = sqlite3OsTruncate(pPager->jfd, 0); + if( rc==SQLITE_OK && pPager->fullSync ){ + /* Make sure the new file size is written into the inode right away. + ** Otherwise the journal might resurrect following a power loss and + ** cause the last transaction to roll back. See + ** https://bugzilla.mozilla.org/show_bug.cgi?id=1072773 + */ + rc = sqlite3OsSync(pPager->jfd, pPager->syncFlags); + } } pPager->journalOff = 0; - pPager->journalStarted = 0; - }else if( pPager->exclusiveMode - || pPager->journalMode==PAGER_JOURNALMODE_PERSIST + }else if( pPager->journalMode==PAGER_JOURNALMODE_PERSIST + || (pPager->exclusiveMode && pPager->journalMode!=PAGER_JOURNALMODE_WAL) ){ rc = zeroJournalHdr(pPager, hasMaster); - pager_error(pPager, rc); pPager->journalOff = 0; - pPager->journalStarted = 0; }else{ /* This branch may be executed with Pager.journalMode==MEMORY if ** a hot-journal was just rolled back. In this case the journal ** file should be closed and deleted. If this connection writes to - ** the database file, it will do so using an in-memory journal. */ + ** the database file, it will do so using an in-memory journal. 
+ */ + int bDelete = (!pPager->tempFile && sqlite3JournalExists(pPager->jfd)); assert( pPager->journalMode==PAGER_JOURNALMODE_DELETE || pPager->journalMode==PAGER_JOURNALMODE_MEMORY + || pPager->journalMode==PAGER_JOURNALMODE_WAL ); sqlite3OsClose(pPager->jfd); - if( !pPager->tempFile ){ - rc = sqlite3OsDelete(pPager->pVfs, pPager->zJournal, 0); + if( bDelete ){ + rc = sqlite3OsDelete(pPager->pVfs, pPager->zJournal, pPager->extraSync); } } + } #ifdef SQLITE_CHECK_PAGES - sqlite3PcacheIterateDirty(pPager->pPCache, pager_set_pagehash); + sqlite3PcacheIterateDirty(pPager->pPCache, pager_set_pagehash); + if( pPager->dbSize==0 && sqlite3PcacheRefCount(pPager->pPCache)>0 ){ + PgHdr *p = sqlite3PagerLookup(pPager, 1); + if( p ){ + p->pageHash = 0; + sqlite3PagerUnrefNotNull(p); + } + } #endif - } + sqlite3BitvecDestroy(pPager->pInJournal); pPager->pInJournal = 0; pPager->nRec = 0; sqlite3PcacheCleanAll(pPager->pPCache); - - if( !pPager->exclusiveMode ){ - rc2 = osUnlock(pPager->fd, SHARED_LOCK); - pPager->state = PAGER_SHARED; - pPager->changeCountDone = 0; - }else if( pPager->state==PAGER_SYNCED ){ - pPager->state = PAGER_EXCLUSIVE; - } - pPager->setMaster = 0; - pPager->needSync = 0; - pPager->dbModified = 0; - - /* TODO: Is this optimal? Why is the db size invalidated here - ** when the database file is not unlocked? */ - pPager->dbOrigSize = 0; sqlite3PcacheTruncate(pPager->pPCache, pPager->dbSize); - if( !MEMDB ){ - pPager->dbSizeValid = 0; + + if( pagerUseWal(pPager) ){ + /* Drop the WAL write-lock, if any. Also, if the connection was in + ** locking_mode=exclusive mode but is no longer, drop the EXCLUSIVE + ** lock held on the database file. + */ + rc2 = sqlite3WalEndWriteTransaction(pPager->pWal); + assert( rc2==SQLITE_OK ); + }else if( rc==SQLITE_OK && bCommit && pPager->dbFileSize>pPager->dbSize ){ + /* This branch is taken when committing a transaction in rollback-journal + ** mode if the database file on disk is larger than the database image. + ** At this point the journal has been finalized and the transaction + ** successfully committed, but the EXCLUSIVE lock is still held on the + ** file. So it is safe to truncate the database file to its minimum + ** required size. */ + assert( pPager->eLock==EXCLUSIVE_LOCK ); + rc = pager_truncate(pPager, pPager->dbSize); } + + if( rc==SQLITE_OK && bCommit && isOpen(pPager->fd) ){ + rc = sqlite3OsFileControl(pPager->fd, SQLITE_FCNTL_COMMIT_PHASETWO, 0); + if( rc==SQLITE_NOTFOUND ) rc = SQLITE_OK; + } + + if( !pPager->exclusiveMode + && (!pagerUseWal(pPager) || sqlite3WalExclusiveMode(pPager->pWal, 0)) + ){ + rc2 = pagerUnlockDb(pPager, SHARED_LOCK); + pPager->changeCountDone = 0; + } + pPager->eState = PAGER_READER; + pPager->setMaster = 0; return (rc==SQLITE_OK?rc2:rc); } + +/* +** Execute a rollback if a transaction is active and unlock the +** database file. +** +** If the pager has already entered the ERROR state, do not attempt +** the rollback at this time. Instead, pager_unlock() is called. The +** call to pager_unlock() will discard all in-memory pages, unlock +** the database file and move the pager back to OPEN state. If this +** means that there is a hot-journal left in the file-system, the next +** connection to obtain a shared lock on the pager (which may be this one) +** will roll it back. +** +** If the pager has not already entered the ERROR state, but an IO or +** malloc error occurs during a rollback, then this will itself cause +** the pager to enter the ERROR state. 
Which will be cleared by the +** call to pager_unlock(), as described above. +*/ +static void pagerUnlockAndRollback(Pager *pPager){ + if( pPager->eState!=PAGER_ERROR && pPager->eState!=PAGER_OPEN ){ + assert( assert_pager_state(pPager) ); + if( pPager->eState>=PAGER_WRITER_LOCKED ){ + sqlite3BeginBenignMalloc(); + sqlite3PagerRollback(pPager); + sqlite3EndBenignMalloc(); + }else if( !pPager->exclusiveMode ){ + assert( pPager->eState==PAGER_READER ); + pager_end_transaction(pPager, 0, 0); + } + } + pager_unlock(pPager); +} /* ** Parameter aData must point to a buffer of pPager->pageSize bytes ** of data. Compute and return a checksum based ont the contents of the ** page of data and the current value of pPager->cksumInit. @@ -33792,19 +45852,47 @@ i -= 200; } return cksum; } +/* +** Report the current page size and number of reserved bytes back +** to the codec. +*/ +#ifdef SQLITE_HAS_CODEC +static void pagerReportSize(Pager *pPager){ + if( pPager->xCodecSizeChng ){ + pPager->xCodecSizeChng(pPager->pCodec, pPager->pageSize, + (int)pPager->nReserve); + } +} +#else +# define pagerReportSize(X) /* No-op if we do not support a codec */ +#endif + +#ifdef SQLITE_HAS_CODEC +/* +** Make sure the number of reserved bits is the same in the destination +** pager as it is in the source. This comes up when a VACUUM changes the +** number of reserved bits to the "optimal" amount. +*/ +SQLITE_PRIVATE void sqlite3PagerAlignReserve(Pager *pDest, Pager *pSrc){ + if( pDest->nReserve!=pSrc->nReserve ){ + pDest->nReserve = pSrc->nReserve; + pagerReportSize(pDest); + } +} +#endif + /* ** Read a single page from either the journal file (if isMainJrnl==1) or ** from the sub-journal (if isMainJrnl==0) and playback that page. ** The page begins at offset *pOffset into the file. The *pOffset ** value is increased to the start of the next page in the journal. ** -** The isMainJrnl flag is true if this is the main rollback journal and -** false for the statement journal. The main rollback journal uses -** checksums - the statement journal does not. +** The main rollback journal uses checksums - the statement journal does +** not. ** ** If the page number of the page record read from the (sub-)journal file ** is greater than the current value of Pager.dbSize, then playback is ** skipped and SQLITE_OK is returned. ** @@ -33852,10 +45940,22 @@ assert( isMainJrnl || pDone ); /* pDone always used on sub-journals */ assert( isSavepnt || pDone==0 ); /* pDone never used on non-savepoint */ aData = pPager->pTmpSpace; assert( aData ); /* Temp storage must have already been allocated */ + assert( pagerUseWal(pPager)==0 || (!isMainJrnl && isSavepnt) ); + + /* Either the state is greater than PAGER_WRITER_CACHEMOD (a transaction + ** or savepoint rollback done at the request of the caller) or this is + ** a hot-journal rollback. If it is a hot-journal rollback, the pager + ** is in state OPEN and holds an EXCLUSIVE lock. Hot-journal rollback + ** only reads from the main journal, not the sub-journal. + */ + assert( pPager->eState>=PAGER_WRITER_CACHEMOD + || (pPager->eState==PAGER_OPEN && pPager->eLock==EXCLUSIVE_LOCK) + ); + assert( pPager->eState>=PAGER_WRITER_CACHEMOD || isMainJrnl ); /* Read the page number and page data from the journal or sub-journal ** file. Return an error code to the caller if an IO error occurs. */ jfd = isMainJrnl ? 
pPager->jfd : pPager->sjfd; @@ -33883,17 +45983,25 @@ if( !isSavepnt && pager_cksum(pPager, (u8*)aData)!=cksum ){ return SQLITE_DONE; } } + /* If this page has already been played back before during the current + ** rollback, then don't bother to play it back again. + */ if( pDone && (rc = sqlite3BitvecSet(pDone, pgno))!=SQLITE_OK ){ return rc; } - assert( pPager->state==PAGER_RESERVED || pPager->state>=PAGER_EXCLUSIVE ); + /* When playing back page 1, restore the nReserve setting + */ + if( pgno==1 && pPager->nReserve!=((u8*)aData)[20] ){ + pPager->nReserve = ((u8*)aData)[20]; + pagerReportSize(pPager); + } - /* If the pager is in RESERVED state, then there must be a copy of this + /* If the pager is in CACHEMOD state, then there must be a copy of this ** page in the pager cache. In this case just update the pager cache, ** not the database file. The page is left marked dirty in this case. ** ** An exception to the above rule: If the database is in no-sync mode ** and a page is moved during an incremental vacuum then the page may @@ -33900,12 +46008,15 @@ ** not be in the pager cache. Later: if a malloc() or IO error occurs ** during a Movepage() call, then the page may not be in the cache ** either. So the condition described in the above paragraph is not ** assert()able. ** - ** If in EXCLUSIVE state, then we update the pager cache if it exists - ** and the main file. The page is then marked not dirty. + ** If in WRITER_DBMOD, WRITER_FINISHED or OPEN state, then we update the + ** pager cache if it exists and the main file. The page is then marked + ** not dirty. Since this code is only executed in PAGER_OPEN state for + ** a hot-journal rollback, it is guaranteed that the page-cache is empty + ** if the pager is in OPEN state. ** ** Ticket #1171: The statement journal might contain page content that is ** different from the page content at the start of the transaction. ** This occurs when a page is changed prior to the start of a statement ** then changed again within the statement. When rolling back such a @@ -33921,28 +46032,34 @@ ** ** 2008-04-14: When attempting to vacuum a corrupt database file, it ** is possible to fail a statement on a database that does not yet exist. ** Do not attempt to write if database file has never been opened. */ - pPg = pager_lookup(pPager, pgno); + if( pagerUseWal(pPager) ){ + pPg = 0; + }else{ + pPg = sqlite3PagerLookup(pPager, pgno); + } assert( pPg || !MEMDB ); + assert( pPager->eState!=PAGER_OPEN || pPg==0 ); PAGERTRACE(("PLAYBACK %d page %d hash(%08x) %s\n", PAGERID(pPager), pgno, pager_datahash(pPager->pageSize, (u8*)aData), (isMainJrnl?"main-journal":"sub-journal") )); if( isMainJrnl ){ isSynced = pPager->noSync || (*pOffset <= pPager->journalHdr); }else{ isSynced = (pPg==0 || 0==(pPg->flags & PGHDR_NEED_SYNC)); } - if( (pPager->state>=PAGER_EXCLUSIVE) - && isOpen(pPager->fd) + if( isOpen(pPager->fd) + && (pPager->eState>=PAGER_WRITER_DBMOD || pPager->eState==PAGER_OPEN) && isSynced ){ i64 ofst = (pgno-1)*(i64)pPager->pageSize; testcase( !isSavepnt && pPg!=0 && (pPg->flags&PGHDR_NEED_SYNC)!=0 ); - rc = sqlite3OsWrite(pPager->fd, (u8*)aData, pPager->pageSize, ofst); + assert( !pagerUseWal(pPager) ); + rc = sqlite3OsWrite(pPager->fd, (u8 *)aData, pPager->pageSize, ofst); if( pgno>pPager->dbFileSize ){ pPager->dbFileSize = pgno; } if( pPager->pBackup ){ CODEC1(pPager, aData, pgno, 3, rc=SQLITE_NOMEM); @@ -33965,13 +46082,16 @@ ** the data just read from the sub-journal. 
Mark the page as dirty ** and if the pager requires a journal-sync, then mark the page as ** requiring a journal-sync before it is written. */ assert( isSavepnt ); - if( (rc = sqlite3PagerAcquire(pPager, pgno, &pPg, 1))!=SQLITE_OK ){ - return rc; - } + assert( (pPager->doNotSpill & SPILLFLAG_ROLLBACK)==0 ); + pPager->doNotSpill |= SPILLFLAG_ROLLBACK; + rc = sqlite3PagerGet(pPager, pgno, &pPg, 1); + assert( (pPager->doNotSpill & SPILLFLAG_ROLLBACK)!=0 ); + pPager->doNotSpill &= ~SPILLFLAG_ROLLBACK; + if( rc!=SQLITE_OK ) return rc; pPg->flags &= ~PGHDR_NEED_READ; sqlite3PcacheMakeDirty(pPg); } if( pPg ){ /* No page should ever be explicitly rolled back that is in use, except @@ -34002,15 +46122,15 @@ ** the PGHDR_NEED_SYNC flag will not be set. It could then potentially ** be written out into the database file before its journal file ** segment is synced. If a crash occurs during or following this, ** database corruption may ensue. */ + assert( !pagerUseWal(pPager) ); sqlite3PcacheMakeClean(pPg); } -#ifdef SQLITE_CHECK_PAGES - pPg->pageHash = pager_pagehash(pPg); -#endif + pager_set_pagehash(pPg); + /* If this was page 1, then restore the value of Pager.dbFileVers. ** Do this before any decoding. */ if( pgno==1 ){ memcpy(&pPager->dbFileVers, &((u8*)pData)[24],sizeof(pPager->dbFileVers)); } @@ -34070,10 +46190,13 @@ int rc; /* Return code */ sqlite3_file *pMaster; /* Malloc'd master-journal file descriptor */ sqlite3_file *pJournal; /* Malloc'd child-journal file descriptor */ char *zMasterJournal = 0; /* Contents of master journal file */ i64 nMasterJournal; /* Size of master journal file */ + char *zJournal; /* Pointer to one journal within MJ file */ + char *zMasterPtr; /* Space to hold MJ filename from a journal file */ + int nMasterPtr; /* Amount of space allocated to zMasterPtr[] */ /* Allocate space for both the pJournal and pMaster file descriptors. ** If successful, open the master journal file for reading. */ pMaster = (sqlite3_file *)sqlite3MallocZero(pVfs->szOsFile * 2); @@ -34084,93 +46207,88 @@ const int flags = (SQLITE_OPEN_READONLY|SQLITE_OPEN_MASTER_JOURNAL); rc = sqlite3OsOpen(pVfs, zMaster, pMaster, flags, 0); } if( rc!=SQLITE_OK ) goto delmaster_out; - rc = sqlite3OsFileSize(pMaster, &nMasterJournal); - if( rc!=SQLITE_OK ) goto delmaster_out; - - if( nMasterJournal>0 ){ - char *zJournal; - char *zMasterPtr = 0; - int nMasterPtr = pVfs->mxPathname+1; - - /* Load the entire master journal file into space obtained from - ** sqlite3_malloc() and pointed to by zMasterJournal. - */ - zMasterJournal = sqlite3Malloc((int)nMasterJournal + nMasterPtr + 1); - if( !zMasterJournal ){ - rc = SQLITE_NOMEM; - goto delmaster_out; - } - zMasterPtr = &zMasterJournal[nMasterJournal+1]; - rc = sqlite3OsRead(pMaster, zMasterJournal, (int)nMasterJournal, 0); - if( rc!=SQLITE_OK ) goto delmaster_out; - zMasterJournal[nMasterJournal] = 0; - - zJournal = zMasterJournal; - while( (zJournal-zMasterJournal)<nMasterJournal ){ - int exists; - rc = sqlite3OsAccess(pVfs, zJournal, SQLITE_ACCESS_EXISTS, &exists); - if( rc!=SQLITE_OK ){ - goto delmaster_out; - } - if( exists ){ - /* One of the journals pointed to by the master journal exists. - ** Open it and check if it points at the master journal. If - ** so, return without deleting the master journal file. 
- */ - int c; - int flags = (SQLITE_OPEN_READONLY|SQLITE_OPEN_MAIN_JOURNAL); - rc = sqlite3OsOpen(pVfs, zJournal, pJournal, flags, 0); - if( rc!=SQLITE_OK ){ - goto delmaster_out; - } - - rc = readMasterJournal(pJournal, zMasterPtr, nMasterPtr); - sqlite3OsClose(pJournal); - if( rc!=SQLITE_OK ){ - goto delmaster_out; - } - - c = zMasterPtr[0]!=0 && strcmp(zMasterPtr, zMaster)==0; - if( c ){ - /* We have a match. Do not delete the master journal file. */ - goto delmaster_out; - } - } - zJournal += (sqlite3Strlen30(zJournal)+1); - } - } - + /* Load the entire master journal file into space obtained from + ** sqlite3_malloc() and pointed to by zMasterJournal. Also obtain + ** sufficient space (in zMasterPtr) to hold the names of master + ** journal files extracted from regular rollback-journals. + */ + rc = sqlite3OsFileSize(pMaster, &nMasterJournal); + if( rc!=SQLITE_OK ) goto delmaster_out; + nMasterPtr = pVfs->mxPathname+1; + zMasterJournal = sqlite3Malloc(nMasterJournal + nMasterPtr + 1); + if( !zMasterJournal ){ + rc = SQLITE_NOMEM; + goto delmaster_out; + } + zMasterPtr = &zMasterJournal[nMasterJournal+1]; + rc = sqlite3OsRead(pMaster, zMasterJournal, (int)nMasterJournal, 0); + if( rc!=SQLITE_OK ) goto delmaster_out; + zMasterJournal[nMasterJournal] = 0; + + zJournal = zMasterJournal; + while( (zJournal-zMasterJournal)<nMasterJournal ){ + int exists; + rc = sqlite3OsAccess(pVfs, zJournal, SQLITE_ACCESS_EXISTS, &exists); + if( rc!=SQLITE_OK ){ + goto delmaster_out; + } + if( exists ){ + /* One of the journals pointed to by the master journal exists. + ** Open it and check if it points at the master journal. If + ** so, return without deleting the master journal file. + */ + int c; + int flags = (SQLITE_OPEN_READONLY|SQLITE_OPEN_MAIN_JOURNAL); + rc = sqlite3OsOpen(pVfs, zJournal, pJournal, flags, 0); + if( rc!=SQLITE_OK ){ + goto delmaster_out; + } + + rc = readMasterJournal(pJournal, zMasterPtr, nMasterPtr); + sqlite3OsClose(pJournal); + if( rc!=SQLITE_OK ){ + goto delmaster_out; + } + + c = zMasterPtr[0]!=0 && strcmp(zMasterPtr, zMaster)==0; + if( c ){ + /* We have a match. Do not delete the master journal file. */ + goto delmaster_out; + } + } + zJournal += (sqlite3Strlen30(zJournal)+1); + } + + sqlite3OsClose(pMaster); rc = sqlite3OsDelete(pVfs, zMaster, 0); delmaster_out: - if( zMasterJournal ){ - sqlite3_free(zMasterJournal); - } + sqlite3_free(zMasterJournal); if( pMaster ){ sqlite3OsClose(pMaster); assert( !isOpen(pJournal) ); + sqlite3_free(pMaster); } - sqlite3_free(pMaster); return rc; } /* ** This function is used to change the actual size of the database ** file in the file-system. This only happens when committing a transaction, ** or rolling back a transaction (including rolling back a hot-journal). ** -** If the main database file is not open, or an exclusive lock is not -** held, this function is a no-op. Otherwise, the size of the file is -** changed to nPage pages (nPage*pPager->pageSize bytes). If the file -** on disk is currently larger than nPage pages, then use the VFS +** If the main database file is not open, or the pager is not in either +** DBMOD or OPEN state, this function is a no-op. Otherwise, the size +** of the file is changed to nPage pages (nPage*pPager->pageSize bytes). +** If the file on disk is currently larger than nPage pages, then use the VFS ** xTruncate() method to truncate it. ** -** Or, it might might be the case that the file on disk is smaller than +** Or, it might be the case that the file on disk is smaller than ** nPage pages. 
Some operating system implementations can get confused if ** you try to truncate a file to some size that is larger than it ** currently is, so detect this case and write a single zero byte to ** the end of the new file instead. ** @@ -34177,59 +46295,91 @@ ** If successful, return SQLITE_OK. If an IO error occurs while modifying ** the database file, return the error code to the caller. */ static int pager_truncate(Pager *pPager, Pgno nPage){ int rc = SQLITE_OK; - if( pPager->state>=PAGER_EXCLUSIVE && isOpen(pPager->fd) ){ + assert( pPager->eState!=PAGER_ERROR ); + assert( pPager->eState!=PAGER_READER ); + + if( isOpen(pPager->fd) + && (pPager->eState>=PAGER_WRITER_DBMOD || pPager->eState==PAGER_OPEN) + ){ i64 currentSize, newSize; + int szPage = pPager->pageSize; + assert( pPager->eLock==EXCLUSIVE_LOCK ); /* TODO: Is it safe to use Pager.dbFileSize here? */ rc = sqlite3OsFileSize(pPager->fd, ¤tSize); - newSize = pPager->pageSize*(i64)nPage; + newSize = szPage*(i64)nPage; if( rc==SQLITE_OK && currentSize!=newSize ){ if( currentSize>newSize ){ rc = sqlite3OsTruncate(pPager->fd, newSize); - }else{ - rc = sqlite3OsWrite(pPager->fd, "", 1, newSize-1); + }else if( (currentSize+szPage)<=newSize ){ + char *pTmp = pPager->pTmpSpace; + memset(pTmp, 0, szPage); + testcase( (newSize-szPage) == currentSize ); + testcase( (newSize-szPage) > currentSize ); + rc = sqlite3OsWrite(pPager->fd, pTmp, szPage, newSize-szPage); } if( rc==SQLITE_OK ){ pPager->dbFileSize = nPage; } } } return rc; } + +/* +** Return a sanitized version of the sector-size of OS file pFile. The +** return value is guaranteed to lie between 32 and MAX_SECTOR_SIZE. +*/ +SQLITE_PRIVATE int sqlite3SectorSize(sqlite3_file *pFile){ + int iRet = sqlite3OsSectorSize(pFile); + if( iRet<32 ){ + iRet = 512; + }else if( iRet>MAX_SECTOR_SIZE ){ + assert( MAX_SECTOR_SIZE>=512 ); + iRet = MAX_SECTOR_SIZE; + } + return iRet; +} /* ** Set the value of the Pager.sectorSize variable for the given ** pager based on the value returned by the xSectorSize method -** of the open database file. The sector size will be used used +** of the open database file. The sector size will be used ** to determine the size and alignment of journal header and ** master journal pointers within created journal files. ** ** For temporary files the effective sector size is always 512 bytes. ** ** Otherwise, for non-temporary files, the effective sector size is ** the value returned by the xSectorSize() method rounded up to 32 if ** it is less than 32, or rounded down to MAX_SECTOR_SIZE if it ** is greater than MAX_SECTOR_SIZE. +** +** If the file has the SQLITE_IOCAP_POWERSAFE_OVERWRITE property, then set +** the effective sector size to its minimum value (512). The purpose of +** pPager->sectorSize is to define the "blast radius" of bytes that +** might change if a crash occurs while writing to a single byte in +** that range. But with POWERSAFE_OVERWRITE, the blast radius is zero +** (that is what POWERSAFE_OVERWRITE means), so we minimize the sector +** size. For backwards compatibility of the rollback journal file format, +** we cannot reduce the effective sector size below 512. */ static void setSectorSize(Pager *pPager){ assert( isOpen(pPager->fd) || pPager->tempFile ); - if( !pPager->tempFile ){ + if( pPager->tempFile + || (sqlite3OsDeviceCharacteristics(pPager->fd) & + SQLITE_IOCAP_POWERSAFE_OVERWRITE)!=0 + ){ /* Sector size doesn't matter for temporary files. Also, the file ** may not have been opened yet, in which case the OsSectorSize() - ** call will segfault. 
- */ - pPager->sectorSize = sqlite3OsSectorSize(pPager->fd); - } - if( pPager->sectorSize<32 ){ + ** call will segfault. */ pPager->sectorSize = 512; - } - if( pPager->sectorSize>MAX_SECTOR_SIZE ){ - assert( MAX_SECTOR_SIZE>=512 ); - pPager->sectorSize = MAX_SECTOR_SIZE; + }else{ + pPager->sectorSize = sqlite3SectorSize(pPager->fd); } } /* ** Playback the journal and thus restore the database file to @@ -34296,17 +46446,18 @@ Pgno mxPg = 0; /* Size of the original file in pages */ int rc; /* Result code of a subroutine */ int res = 1; /* Value returned by sqlite3OsAccess() */ char *zMaster = 0; /* Name of master journal file if any */ int needPagerReset; /* True to reset page prior to first page rollback */ + int nPlayback = 0; /* Total number of pages restored from journal */ /* Figure out how many records are in the journal. Abort early if ** the journal is empty. */ assert( isOpen(pPager->jfd) ); rc = sqlite3OsFileSize(pPager->jfd, &szJ); - if( rc!=SQLITE_OK || szJ==0 ){ + if( rc!=SQLITE_OK ){ goto end_playback; } /* Read the master journal name from the journal, if it is present. ** If a master journal file name is specified, but the file is not @@ -34336,11 +46487,11 @@ ** occurs. */ while( 1 ){ /* Read the next journal header from the journal file. If there are ** not enough bytes left in the journal file for a complete header, or - ** it is corrupted, then a process must of failed while writing it. + ** it is corrupted, then a process must have failed while writing it. ** This indicates nothing more needs to be rolled back. */ rc = readJournalHdr(pPager, isHot, szJ, &nRec, &mxPg); if( rc!=SQLITE_OK ){ if( rc==SQLITE_DONE ){ @@ -34396,13 +46547,14 @@ if( needPagerReset ){ pager_reset(pPager); needPagerReset = 0; } rc = pager_playback_one_page(pPager,&pPager->journalOff,0,1,0); - if( rc!=SQLITE_OK ){ + if( rc==SQLITE_OK ){ + nPlayback++; + }else{ if( rc==SQLITE_DONE ){ - rc = SQLITE_OK; pPager->journalOff = szJ; break; }else if( rc==SQLITE_IOERR_SHORT_READ ){ /* If the journal has been truncated, simply stop reading and ** processing the journal. This might happen if the journal was @@ -34429,14 +46581,15 @@ /* Following a rollback, the database file should be back in its original ** state prior to the start of the transaction, so invoke the ** SQLITE_FCNTL_DB_UNCHANGED file-control method to disable the ** assertion that the transaction counter was modified. */ - assert( - pPager->fd->pMethods==0 || - sqlite3OsFileControl(pPager->fd,SQLITE_FCNTL_DB_UNCHANGED,0)>=SQLITE_OK - ); +#ifdef SQLITE_DEBUG + if( pPager->fd->pMethods ){ + sqlite3OsFileControlHint(pPager->fd,SQLITE_FCNTL_DB_UNCHANGED,0); + } +#endif /* If this playback is happening automatically as a result of an IO or ** malloc error that occurred after the change-counter was updated but ** before the transaction was committed, then the change-counter ** modification may just have been reverted. 
If this happens in exclusive @@ -34450,33 +46603,405 @@ if( rc==SQLITE_OK ){ zMaster = pPager->pTmpSpace; rc = readMasterJournal(pPager->jfd, zMaster, pPager->pVfs->mxPathname+1); testcase( rc!=SQLITE_OK ); } - if( rc==SQLITE_OK && pPager->noSync==0 && pPager->state>=PAGER_EXCLUSIVE ){ - rc = sqlite3OsSync(pPager->fd, pPager->sync_flags); + if( rc==SQLITE_OK + && (pPager->eState>=PAGER_WRITER_DBMOD || pPager->eState==PAGER_OPEN) + ){ + rc = sqlite3PagerSync(pPager, 0); } if( rc==SQLITE_OK ){ - rc = pager_end_transaction(pPager, zMaster[0]!='\0'); + rc = pager_end_transaction(pPager, zMaster[0]!='\0', 0); testcase( rc!=SQLITE_OK ); } if( rc==SQLITE_OK && zMaster[0] && res ){ /* If there was a master journal and this routine will return success, ** see if it is possible to delete the master journal. */ rc = pager_delmaster(pPager, zMaster); testcase( rc!=SQLITE_OK ); } + if( isHot && nPlayback ){ + sqlite3_log(SQLITE_NOTICE_RECOVER_ROLLBACK, "recovered %d pages from %s", + nPlayback, pPager->zJournal); + } /* The Pager.sectorSize variable may have been updated while rolling ** back a journal created by a process with a different sector size ** value. Reset it to the correct value for this process. */ setSectorSize(pPager); return rc; } + +/* +** Read the content for page pPg out of the database file and into +** pPg->pData. A shared lock or greater must be held on the database +** file before this function is called. +** +** If page 1 is read, then the value of Pager.dbFileVers[] is set to +** the value read from the database file. +** +** If an IO error occurs, then the IO error is returned to the caller. +** Otherwise, SQLITE_OK is returned. +*/ +static int readDbPage(PgHdr *pPg, u32 iFrame){ + Pager *pPager = pPg->pPager; /* Pager object associated with page pPg */ + Pgno pgno = pPg->pgno; /* Page number to read */ + int rc = SQLITE_OK; /* Return code */ + int pgsz = pPager->pageSize; /* Number of bytes to read */ + + assert( pPager->eState>=PAGER_READER && !MEMDB ); + assert( isOpen(pPager->fd) ); + +#ifndef SQLITE_OMIT_WAL + if( iFrame ){ + /* Try to pull the page from the write-ahead log. */ + rc = sqlite3WalReadFrame(pPager->pWal, iFrame, pgsz, pPg->pData); + }else +#endif + { + i64 iOffset = (pgno-1)*(i64)pPager->pageSize; + rc = sqlite3OsRead(pPager->fd, pPg->pData, pgsz, iOffset); + if( rc==SQLITE_IOERR_SHORT_READ ){ + rc = SQLITE_OK; + } + } + + if( pgno==1 ){ + if( rc ){ + /* If the read is unsuccessful, set the dbFileVers[] to something + ** that will never be a valid file version. dbFileVers[] is a copy + ** of bytes 24..39 of the database. Bytes 28..31 should always be + ** zero or the size of the database in page. Bytes 32..35 and 35..39 + ** should be page numbers which are never 0xffffffff. So filling + ** pPager->dbFileVers[] with all 0xff bytes should suffice. + ** + ** For an encrypted database, the situation is more complex: bytes + ** 24..39 of the database are white noise. But the probability of + ** white noise equaling 16 bytes of 0xff is vanishingly small so + ** we should still be ok. 
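
readDbPage(), continued just below, caches bytes 24..39 of page 1 in Pager.dbFileVers so that a later read can detect whether another connection modified the file behind this one's back (a failed read stores all 0xff bytes, a value that can never match a real header). A minimal sketch of that change-detection idea, with a hypothetical MiniPager standing in for the real Pager object:

    #include <string.h>

    typedef struct {
      unsigned char dbFileVers[16];   /* cached copy of bytes 24..39 of page 1 */
    } MiniPager;

    /* Return non-zero if the header window differs from the cached copy,
    ** meaning another writer changed the database and any locally cached
    ** pages should be discarded.  The cache is refreshed as a side effect. */
    static int db_changed(MiniPager *p, const unsigned char *aPage1){
      if( memcmp(p->dbFileVers, &aPage1[24], sizeof(p->dbFileVers))!=0 ){
        memcpy(p->dbFileVers, &aPage1[24], sizeof(p->dbFileVers));
        return 1;
      }
      return 0;
    }
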
+ */ + memset(pPager->dbFileVers, 0xff, sizeof(pPager->dbFileVers)); + }else{ + u8 *dbFileVers = &((u8*)pPg->pData)[24]; + memcpy(&pPager->dbFileVers, dbFileVers, sizeof(pPager->dbFileVers)); + } + } + CODEC1(pPager, pPg->pData, pgno, 3, rc = SQLITE_NOMEM); + + PAGER_INCR(sqlite3_pager_readdb_count); + PAGER_INCR(pPager->nRead); + IOTRACE(("PGIN %p %d\n", pPager, pgno)); + PAGERTRACE(("FETCH %d page %d hash(%08x)\n", + PAGERID(pPager), pgno, pager_pagehash(pPg))); + + return rc; +} + +/* +** Update the value of the change-counter at offsets 24 and 92 in +** the header and the sqlite version number at offset 96. +** +** This is an unconditional update. See also the pager_incr_changecounter() +** routine which only updates the change-counter if the update is actually +** needed, as determined by the pPager->changeCountDone state variable. +*/ +static void pager_write_changecounter(PgHdr *pPg){ + u32 change_counter; + + /* Increment the value just read and write it back to byte 24. */ + change_counter = sqlite3Get4byte((u8*)pPg->pPager->dbFileVers)+1; + put32bits(((char*)pPg->pData)+24, change_counter); + + /* Also store the SQLite version number in bytes 96..99 and in + ** bytes 92..95 store the change counter for which the version number + ** is valid. */ + put32bits(((char*)pPg->pData)+92, change_counter); + put32bits(((char*)pPg->pData)+96, SQLITE_VERSION_NUMBER); +} + +#ifndef SQLITE_OMIT_WAL +/* +** This function is invoked once for each page that has already been +** written into the log file when a WAL transaction is rolled back. +** Parameter iPg is the page number of said page. The pCtx argument +** is actually a pointer to the Pager structure. +** +** If page iPg is present in the cache, and has no outstanding references, +** it is discarded. Otherwise, if there are one or more outstanding +** references, the page content is reloaded from the database. If the +** attempt to reload content from the database is required and fails, +** return an SQLite error code. Otherwise, SQLITE_OK. +*/ +static int pagerUndoCallback(void *pCtx, Pgno iPg){ + int rc = SQLITE_OK; + Pager *pPager = (Pager *)pCtx; + PgHdr *pPg; + + assert( pagerUseWal(pPager) ); + pPg = sqlite3PagerLookup(pPager, iPg); + if( pPg ){ + if( sqlite3PcachePageRefcount(pPg)==1 ){ + sqlite3PcacheDrop(pPg); + }else{ + u32 iFrame = 0; + rc = sqlite3WalFindFrame(pPager->pWal, pPg->pgno, &iFrame); + if( rc==SQLITE_OK ){ + rc = readDbPage(pPg, iFrame); + } + if( rc==SQLITE_OK ){ + pPager->xReiniter(pPg); + } + sqlite3PagerUnrefNotNull(pPg); + } + } + + /* Normally, if a transaction is rolled back, any backup processes are + ** updated as data is copied out of the rollback journal and into the + ** database. This is not generally possible with a WAL database, as + ** rollback involves simply truncating the log file. Therefore, if one + ** or more frames have already been written to the log (and therefore + ** also copied into the backup databases) as part of this transaction, + ** the backups must be restarted. + */ + sqlite3BackupRestart(pPager->pBackup); + + return rc; +} + +/* +** This function is called to rollback a transaction on a WAL database. 
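
pager_write_changecounter() above bumps the 32-bit change counter at byte offset 24 of page 1, mirrors it at offset 92, and records the writing library version at offset 96, all big-endian. The following self-contained sketch shows those writes with hypothetical helpers; 3007001 merely stands in for SQLITE_VERSION_NUMBER and is not taken from the patch.

    #include <stdint.h>

    static uint32_t get4be(const uint8_t *p){
      return ((uint32_t)p[0]<<24)|((uint32_t)p[1]<<16)|((uint32_t)p[2]<<8)|p[3];
    }
    static void put4be(uint8_t *p, uint32_t v){
      p[0] = (uint8_t)(v>>24); p[1] = (uint8_t)(v>>16);
      p[2] = (uint8_t)(v>>8);  p[3] = (uint8_t)(v);
    }

    /* Bump the change counter on an in-memory copy of page 1 (at least
    ** 100 bytes long, the size of the database header). */
    static void bump_change_counter(uint8_t *aPage1){
      uint32_t counter = get4be(&aPage1[24]) + 1;
      put4be(&aPage1[24], counter);    /* file change counter              */
      put4be(&aPage1[92], counter);    /* "version valid for" counter      */
      put4be(&aPage1[96], 3007001);    /* stand-in for the library version */
    }
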
+*/ +static int pagerRollbackWal(Pager *pPager){ + int rc; /* Return Code */ + PgHdr *pList; /* List of dirty pages to revert */ + + /* For all pages in the cache that are currently dirty or have already + ** been written (but not committed) to the log file, do one of the + ** following: + ** + ** + Discard the cached page (if refcount==0), or + ** + Reload page content from the database (if refcount>0). + */ + pPager->dbSize = pPager->dbOrigSize; + rc = sqlite3WalUndo(pPager->pWal, pagerUndoCallback, (void *)pPager); + pList = sqlite3PcacheDirtyList(pPager->pPCache); + while( pList && rc==SQLITE_OK ){ + PgHdr *pNext = pList->pDirty; + rc = pagerUndoCallback((void *)pPager, pList->pgno); + pList = pNext; + } + + return rc; +} + +/* +** This function is a wrapper around sqlite3WalFrames(). As well as logging +** the contents of the list of pages headed by pList (connected by pDirty), +** this function notifies any active backup processes that the pages have +** changed. +** +** The list of pages passed into this routine is always sorted by page number. +** Hence, if page 1 appears anywhere on the list, it will be the first page. +*/ +static int pagerWalFrames( + Pager *pPager, /* Pager object */ + PgHdr *pList, /* List of frames to log */ + Pgno nTruncate, /* Database size after this commit */ + int isCommit /* True if this is a commit */ +){ + int rc; /* Return code */ + int nList; /* Number of pages in pList */ + PgHdr *p; /* For looping over pages */ + + assert( pPager->pWal ); + assert( pList ); +#ifdef SQLITE_DEBUG + /* Verify that the page list is in accending order */ + for(p=pList; p && p->pDirty; p=p->pDirty){ + assert( p->pgno < p->pDirty->pgno ); + } +#endif + + assert( pList->pDirty==0 || isCommit ); + if( isCommit ){ + /* If a WAL transaction is being committed, there is no point in writing + ** any pages with page numbers greater than nTruncate into the WAL file. + ** They will never be read by any client. So remove them from the pDirty + ** list here. */ + PgHdr **ppNext = &pList; + nList = 0; + for(p=pList; (*ppNext = p)!=0; p=p->pDirty){ + if( p->pgno<=nTruncate ){ + ppNext = &p->pDirty; + nList++; + } + } + assert( pList ); + }else{ + nList = 1; + } + pPager->aStat[PAGER_STAT_WRITE] += nList; + + if( pList->pgno==1 ) pager_write_changecounter(pList); + rc = sqlite3WalFrames(pPager->pWal, + pPager->pageSize, pList, nTruncate, isCommit, pPager->walSyncFlags + ); + if( rc==SQLITE_OK && pPager->pBackup ){ + for(p=pList; p; p=p->pDirty){ + sqlite3BackupUpdate(pPager->pBackup, p->pgno, (u8 *)p->pData); + } + } + +#ifdef SQLITE_CHECK_PAGES + pList = sqlite3PcacheDirtyList(pPager->pPCache); + for(p=pList; p; p=p->pDirty){ + pager_set_pagehash(p); + } +#endif + + return rc; +} + +/* +** Begin a read transaction on the WAL. +** +** This routine used to be called "pagerOpenSnapshot()" because it essentially +** makes a snapshot of the database at the current point in time and preserves +** that snapshot for use by the reader in spite of concurrently changes by +** other writers or checkpointers. +*/ +static int pagerBeginReadTransaction(Pager *pPager){ + int rc; /* Return code */ + int changed = 0; /* True if cache must be reset */ + + assert( pagerUseWal(pPager) ); + assert( pPager->eState==PAGER_OPEN || pPager->eState==PAGER_READER ); + + /* sqlite3WalEndReadTransaction() was not called for the previous + ** transaction in locking_mode=EXCLUSIVE. So call it now. If we + ** are in locking_mode=NORMAL and EndRead() was previously called, + ** the duplicate call is harmless. 
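
pagerWalFrames() above prunes pages numbered beyond nTruncate from the dirty list before a commit, using a pointer-to-pointer walk so the list head and every link can be rewritten in place during a single pass. Here is a self-contained sketch of that list surgery on a simplified node type (Node and prune_list are illustrative names only):

    typedef struct Node Node;
    struct Node { unsigned pgno; Node *pDirty; };

    /* Unlink every node whose pgno exceeds nTruncate.  Returns the new head
    ** and reports the number of surviving nodes via *pnKept. */
    static Node *prune_list(Node *pList, unsigned nTruncate, int *pnKept){
      Node **ppNext = &pList;   /* link that will point at the next kept node */
      Node *p;
      int nKept = 0;
      for(p=pList; p; p=p->pDirty){
        if( p->pgno<=nTruncate ){
          *ppNext = p;            /* keep this node                  */
          ppNext = &p->pDirty;    /* its link is the next to rewrite */
          nKept++;
        }
      }
      *ppNext = 0;                /* terminate the filtered list     */
      *pnKept = nKept;
      return pList;
    }
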
+ */ + sqlite3WalEndReadTransaction(pPager->pWal); + + rc = sqlite3WalBeginReadTransaction(pPager->pWal, &changed); + if( rc!=SQLITE_OK || changed ){ + pager_reset(pPager); + if( USEFETCH(pPager) ) sqlite3OsUnfetch(pPager->fd, 0, 0); + } + + return rc; +} +#endif + +/* +** This function is called as part of the transition from PAGER_OPEN +** to PAGER_READER state to determine the size of the database file +** in pages (assuming the page size currently stored in Pager.pageSize). +** +** If no error occurs, SQLITE_OK is returned and the size of the database +** in pages is stored in *pnPage. Otherwise, an error code (perhaps +** SQLITE_IOERR_FSTAT) is returned and *pnPage is left unmodified. +*/ +static int pagerPagecount(Pager *pPager, Pgno *pnPage){ + Pgno nPage; /* Value to return via *pnPage */ + + /* Query the WAL sub-system for the database size. The WalDbsize() + ** function returns zero if the WAL is not open (i.e. Pager.pWal==0), or + ** if the database size is not available. The database size is not + ** available from the WAL sub-system if the log file is empty or + ** contains no valid committed transactions. + */ + assert( pPager->eState==PAGER_OPEN ); + assert( pPager->eLock>=SHARED_LOCK ); + nPage = sqlite3WalDbsize(pPager->pWal); + + /* If the number of pages in the database is not available from the + ** WAL sub-system, determine the page counte based on the size of + ** the database file. If the size of the database file is not an + ** integer multiple of the page-size, round up the result. + */ + if( nPage==0 ){ + i64 n = 0; /* Size of db file in bytes */ + assert( isOpen(pPager->fd) || pPager->tempFile ); + if( isOpen(pPager->fd) ){ + int rc = sqlite3OsFileSize(pPager->fd, &n); + if( rc!=SQLITE_OK ){ + return rc; + } + } + nPage = (Pgno)((n+pPager->pageSize-1) / pPager->pageSize); + } + + /* If the current number of pages in the file is greater than the + ** configured maximum pager number, increase the allowed limit so + ** that the file can be read. + */ + if( nPage>pPager->mxPgno ){ + pPager->mxPgno = (Pgno)nPage; + } + + *pnPage = nPage; + return SQLITE_OK; +} + +#ifndef SQLITE_OMIT_WAL +/* +** Check if the *-wal file that corresponds to the database opened by pPager +** exists if the database is not empy, or verify that the *-wal file does +** not exist (by deleting it) if the database file is empty. +** +** If the database is not empty and the *-wal file exists, open the pager +** in WAL mode. If the database is empty or if no *-wal file exists and +** if no error occurs, make sure Pager.journalMode is not set to +** PAGER_JOURNALMODE_WAL. +** +** Return SQLITE_OK or an error code. +** +** The caller must hold a SHARED lock on the database file to call this +** function. Because an EXCLUSIVE lock on the db file is required to delete +** a WAL on a none-empty database, this ensures there is no race condition +** between the xAccess() below and an xDelete() being executed by some +** other connection. 
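
When the WAL cannot report a size, pagerPagecount() above derives the page count from the file size, rounding a partial trailing page up so that (for example) a 1-byte file counts as a one-page database. That arithmetic, sketched on its own with 64-bit math to avoid overflow on large files:

    #include <stdint.h>
    #include <assert.h>

    /* ceil(nByte / pageSize): a 5000-byte file with 4096-byte pages is
    ** two pages, and any non-empty file is at least one page. */
    static uint32_t pages_from_size(int64_t nByte, int pageSize){
      assert( pageSize>0 && nByte>=0 );
      return (uint32_t)((nByte + pageSize - 1) / pageSize);
    }
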
+*/ +static int pagerOpenWalIfPresent(Pager *pPager){ + int rc = SQLITE_OK; + assert( pPager->eState==PAGER_OPEN ); + assert( pPager->eLock>=SHARED_LOCK ); + + if( !pPager->tempFile ){ + int isWal; /* True if WAL file exists */ + Pgno nPage; /* Size of the database file */ + + rc = pagerPagecount(pPager, &nPage); + if( rc ) return rc; + if( nPage==0 ){ + rc = sqlite3OsDelete(pPager->pVfs, pPager->zWal, 0); + if( rc==SQLITE_IOERR_DELETE_NOENT ) rc = SQLITE_OK; + isWal = 0; + }else{ + rc = sqlite3OsAccess( + pPager->pVfs, pPager->zWal, SQLITE_ACCESS_EXISTS, &isWal + ); + } + if( rc==SQLITE_OK ){ + if( isWal ){ + testcase( sqlite3PcachePagecount(pPager->pPCache)==0 ); + rc = sqlite3PagerOpenWal(pPager, 0); + }else if( pPager->journalMode==PAGER_JOURNALMODE_WAL ){ + pPager->journalMode = PAGER_JOURNALMODE_DELETE; + } + } + } + return rc; +} +#endif + /* ** Playback savepoint pSavepoint. Or, if pSavepoint==NULL, then playback ** the entire master journal file. The case pSavepoint==NULL occurs when ** a ROLLBACK TO command is invoked on a SAVEPOINT that is a transaction ** savepoint. @@ -34515,11 +47040,12 @@ i64 szJ; /* Effective size of the main journal */ i64 iHdrOff; /* End of first segment of main-journal records */ int rc = SQLITE_OK; /* Return code */ Bitvec *pDone = 0; /* Bitvec to ensure pages played back only once */ - assert( pPager->state>=PAGER_SHARED ); + assert( pPager->eState!=PAGER_ERROR ); + assert( pPager->eState>=PAGER_WRITER_LOCKED ); /* Allocate a bitvec to use to store the set of pages rolled back */ if( pSavepoint ){ pDone = sqlite3BitvecCreate(pSavepoint->nOrig); if( !pDone ){ @@ -34529,26 +47055,32 @@ /* Set the database size back to the value it was before the savepoint ** being reverted was opened. */ pPager->dbSize = pSavepoint ? pSavepoint->nOrig : pPager->dbOrigSize; + pPager->changeCountDone = pPager->tempFile; + + if( !pSavepoint && pagerUseWal(pPager) ){ + return pagerRollbackWal(pPager); + } /* Use pPager->journalOff as the effective size of the main rollback ** journal. The actual file might be larger than this in ** PAGER_JOURNALMODE_TRUNCATE or PAGER_JOURNALMODE_PERSIST. But anything ** past pPager->journalOff is off-limits to us. */ szJ = pPager->journalOff; + assert( pagerUseWal(pPager)==0 || szJ==0 ); /* Begin by rolling back records from the main journal starting at ** PagerSavepoint.iOffset and continuing to the next journal header. ** There might be records in the main journal that have a page number ** greater than the current database size (pPager->dbSize) but those ** will be skipped automatically. Pages are added to pDone as they ** are played back. */ - if( pSavepoint ){ + if( pSavepoint && !pagerUseWal(pPager) ){ iHdrOff = pSavepoint->iHdrOffset ? pSavepoint->iHdrOffset : szJ; pPager->journalOff = pSavepoint->iOffset; while( rc==SQLITE_OK && pPager->journalOff<iHdrOff ){ rc = pager_playback_one_page(pPager, &pPager->journalOff, pDone, 1, 1); } @@ -34582,44 +47114,91 @@ for(ii=0; rc==SQLITE_OK && ii<nJRec && pPager->journalOff<szJ; ii++){ rc = pager_playback_one_page(pPager, &pPager->journalOff, pDone, 1, 1); } assert( rc!=SQLITE_DONE ); } - assert( rc!=SQLITE_OK || pPager->journalOff==szJ ); + assert( rc!=SQLITE_OK || pPager->journalOff>=szJ ); /* Finally, rollback pages from the sub-journal. Page that were ** previously rolled back out of the main journal (and are hence in pDone) ** will be skipped. Out-of-range pages are also skipped. 
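
The savepoint playback code beginning in this hunk allocates the pDone bitvec so that a page appearing in both the main journal and the sub-journal is restored only once. The sketch below captures that "play back at most once" guard with a plain bitmap; the real sqlite3Bitvec is sparser and scales to very large page counts, so treat this only as an illustration of the idea.

    #include <stdlib.h>

    typedef struct { unsigned nBit; unsigned char *a; } MiniBitvec;

    static MiniBitvec *bitvec_create(unsigned nBit){
      MiniBitvec *p = malloc(sizeof(*p));
      if( p==0 ) return 0;
      p->nBit = nBit;
      p->a = calloc((nBit+7)/8, 1);
      if( p->a==0 ){ free(p); return 0; }
      return p;
    }

    /* Return 1 the first time page pgno is seen, 0 on any later call
    ** (and 0 for out-of-range page numbers, which are simply skipped). */
    static int bitvec_test_and_set(MiniBitvec *p, unsigned pgno){
      unsigned i = pgno - 1;               /* page numbers start at 1 */
      if( pgno==0 || i>=p->nBit ) return 0;
      if( p->a[i/8] & (1u<<(i%8)) ) return 0;
      p->a[i/8] |= (unsigned char)(1u<<(i%8));
      return 1;
    }

    static void bitvec_destroy(MiniBitvec *p){
      if( p ){ free(p->a); free(p); }
    }
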
*/ if( pSavepoint ){ u32 ii; /* Loop counter */ - i64 offset = pSavepoint->iSubRec*(4+pPager->pageSize); + i64 offset = (i64)pSavepoint->iSubRec*(4+pPager->pageSize); + + if( pagerUseWal(pPager) ){ + rc = sqlite3WalSavepointUndo(pPager->pWal, pSavepoint->aWalData); + } for(ii=pSavepoint->iSubRec; rc==SQLITE_OK && ii<pPager->nSubRec; ii++){ - assert( offset==ii*(4+pPager->pageSize) ); + assert( offset==(i64)ii*(4+pPager->pageSize) ); rc = pager_playback_one_page(pPager, &offset, pDone, 0, 1); } assert( rc!=SQLITE_DONE ); } sqlite3BitvecDestroy(pDone); if( rc==SQLITE_OK ){ pPager->journalOff = szJ; } + return rc; } /* -** Change the maximum number of in-memory pages that are allowed. +** Change the maximum number of in-memory pages that are allowed +** before attempting to recycle clean and unused pages. */ SQLITE_PRIVATE void sqlite3PagerSetCachesize(Pager *pPager, int mxPage){ sqlite3PcacheSetCachesize(pPager->pPCache, mxPage); } /* -** Adjust the robustness of the database to damage due to OS crashes -** or power failures by changing the number of syncs()s when writing -** the rollback journal. There are three levels: +** Change the maximum number of in-memory pages that are allowed +** before attempting to spill pages to journal. +*/ +SQLITE_PRIVATE int sqlite3PagerSetSpillsize(Pager *pPager, int mxPage){ + return sqlite3PcacheSetSpillsize(pPager->pPCache, mxPage); +} + +/* +** Invoke SQLITE_FCNTL_MMAP_SIZE based on the current value of szMmap. +*/ +static void pagerFixMaplimit(Pager *pPager){ +#if SQLITE_MAX_MMAP_SIZE>0 + sqlite3_file *fd = pPager->fd; + if( isOpen(fd) && fd->pMethods->iVersion>=3 ){ + sqlite3_int64 sz; + sz = pPager->szMmap; + pPager->bUseFetch = (sz>0); + sqlite3OsFileControlHint(pPager->fd, SQLITE_FCNTL_MMAP_SIZE, &sz); + } +#endif +} + +/* +** Change the maximum size of any memory mapping made of the database file. +*/ +SQLITE_PRIVATE void sqlite3PagerSetMmapLimit(Pager *pPager, sqlite3_int64 szMmap){ + pPager->szMmap = szMmap; + pagerFixMaplimit(pPager); +} + +/* +** Free as much memory as possible from the pager. +*/ +SQLITE_PRIVATE void sqlite3PagerShrink(Pager *pPager){ + sqlite3PcacheShrink(pPager->pPCache); +} + +/* +** Adjust settings of the pager to those specified in the pgFlags parameter. +** +** The "level" in pgFlags & PAGER_SYNCHRONOUS_MASK sets the robustness +** of the database to damage due to OS crashes or power failures by +** changing the number of syncs()s when writing the journals. +** There are three levels: ** ** OFF sqlite3OsSync() is never called. This is the default ** for temporary and transient files. ** ** NORMAL The journal is synced once before writes begin on the @@ -34634,20 +47213,68 @@ ** of the journal header - being written in between the two ** syncs). If we assume that writing a ** single disk sector is atomic, then this mode provides ** assurance that the journal will not be corrupted to the ** point of causing damage to the database during rollback. +** +** The above is for a rollback-journal mode. For WAL mode, OFF continues +** to mean that no syncs ever occur. NORMAL means that the WAL is synced +** prior to the start of checkpoint and that the database file is synced +** at the conclusion of the checkpoint if the entire content of the WAL +** was written back into the database. But no sync operations occur for +** an ordinary commit in NORMAL mode with WAL. FULL means that the WAL +** file is synced following each commit operation, in addition to the +** syncs associated with NORMAL. 
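
sqlite3PagerSetFlags(), in the next hunk, folds the synchronous level described above together with the fullfsync options into the noSync/fullSync booleans and the flags later handed to xSync. The sketch below shows the shape of that decoding; the SYNC_* and FLAG_FULLFSYNC values are made up for the example and deliberately do not reproduce the real PAGER_* bit assignments.

    /* Hypothetical flag layout, for illustration only. */
    #define SYNC_OFF        1
    #define SYNC_NORMAL     2
    #define SYNC_FULL       3
    #define SYNC_MASK       7
    #define FLAG_FULLFSYNC  0x08

    typedef struct {
      int noSync;      /* never call xSync                       */
      int fullSync;    /* extra sync around journal headers      */
      int syncFlags;   /* 0 = none, 1 = normal fsync, 2 = full   */
    } SyncSettings;

    static void decode_sync_flags(unsigned pgFlags, int isTempFile, SyncSettings *p){
      unsigned level = pgFlags & SYNC_MASK;
      p->noSync   = isTempFile || level==SYNC_OFF;
      p->fullSync = !isTempFile && level>=SYNC_FULL;
      if( p->noSync ){
        p->syncFlags = 0;               /* temp files and OFF never sync   */
      }else if( pgFlags & FLAG_FULLFSYNC ){
        p->syncFlags = 2;               /* MacOS-style F_FULLFSYNC request */
      }else{
        p->syncFlags = 1;               /* ordinary fsync()                */
      }
    }
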
+** +** Do not confuse synchronous=FULL with SQLITE_SYNC_FULL. The +** SQLITE_SYNC_FULL macro means to use the MacOSX-style full-fsync +** using fcntl(F_FULLFSYNC). SQLITE_SYNC_NORMAL means to do an +** ordinary fsync() call. There is no difference between SQLITE_SYNC_FULL +** and SQLITE_SYNC_NORMAL on platforms other than MacOSX. But the +** synchronous=FULL versus synchronous=NORMAL setting determines when +** the xSync primitive is called and is relevant to all platforms. ** ** Numeric values associated with these states are OFF==1, NORMAL=2, ** and FULL=3. */ #ifndef SQLITE_OMIT_PAGER_PRAGMAS -SQLITE_PRIVATE void sqlite3PagerSetSafetyLevel(Pager *pPager, int level, int bFullFsync){ - pPager->noSync = (level==1 || pPager->tempFile) ?1:0; - pPager->fullSync = (level==3 && !pPager->tempFile) ?1:0; - pPager->sync_flags = (bFullFsync?SQLITE_SYNC_FULL:SQLITE_SYNC_NORMAL); - if( pPager->noSync ) pPager->needSync = 0; +SQLITE_PRIVATE void sqlite3PagerSetFlags( + Pager *pPager, /* The pager to set safety level for */ + unsigned pgFlags /* Various flags */ +){ + unsigned level = pgFlags & PAGER_SYNCHRONOUS_MASK; + if( pPager->tempFile ){ + pPager->noSync = 1; + pPager->fullSync = 0; + pPager->extraSync = 0; + }else{ + pPager->noSync = level==PAGER_SYNCHRONOUS_OFF ?1:0; + pPager->fullSync = level>=PAGER_SYNCHRONOUS_FULL ?1:0; + pPager->extraSync = level==PAGER_SYNCHRONOUS_EXTRA ?1:0; + } + if( pPager->noSync ){ + pPager->syncFlags = 0; + pPager->ckptSyncFlags = 0; + }else if( pgFlags & PAGER_FULLFSYNC ){ + pPager->syncFlags = SQLITE_SYNC_FULL; + pPager->ckptSyncFlags = SQLITE_SYNC_FULL; + }else if( pgFlags & PAGER_CKPT_FULLFSYNC ){ + pPager->syncFlags = SQLITE_SYNC_NORMAL; + pPager->ckptSyncFlags = SQLITE_SYNC_FULL; + }else{ + pPager->syncFlags = SQLITE_SYNC_NORMAL; + pPager->ckptSyncFlags = SQLITE_SYNC_NORMAL; + } + pPager->walSyncFlags = pPager->syncFlags; + if( pPager->fullSync ){ + pPager->walSyncFlags |= WAL_SYNC_TRANSACTIONS; + } + if( pgFlags & PAGER_CACHESPILL ){ + pPager->doNotSpill &= ~SPILLFLAG_OFF; + }else{ + pPager->doNotSpill |= SPILLFLAG_OFF; + } } #endif /* ** The following global variable is incremented whenever the library @@ -34714,37 +47341,29 @@ */ SQLITE_PRIVATE void sqlite3PagerSetBusyhandler( Pager *pPager, /* Pager object */ int (*xBusyHandler)(void *), /* Pointer to busy-handler function */ void *pBusyHandlerArg /* Argument to pass to xBusyHandler */ -){ +){ pPager->xBusyHandler = xBusyHandler; pPager->pBusyHandlerArg = pBusyHandlerArg; -} - -/* -** Report the current page size and number of reserved bytes back -** to the codec. -*/ -#ifdef SQLITE_HAS_CODEC -static void pagerReportSize(Pager *pPager){ - if( pPager->xCodecSizeChng ){ - pPager->xCodecSizeChng(pPager->pCodec, pPager->pageSize, - (int)pPager->nReserve); - } -} -#else -# define pagerReportSize(X) /* No-op if we do not support a codec */ -#endif + + if( isOpen(pPager->fd) ){ + void **ap = (void **)&pPager->xBusyHandler; + assert( ((int(*)(void *))(ap[0]))==xBusyHandler ); + assert( ap[1]==pBusyHandlerArg ); + sqlite3OsFileControlHint(pPager->fd, SQLITE_FCNTL_BUSYHANDLER, (void *)ap); + } +} /* ** Change the page size used by the Pager object. The new page size ** is passed in *pPageSize. ** ** If the pager is in the error state when this function is called, it ** is a no-op. The value returned is the error state error code (i.e. -** one of SQLITE_IOERR, SQLITE_CORRUPT or SQLITE_FULL). +** one of SQLITE_IOERR, an SQLITE_IOERR_xxx sub-code or SQLITE_FULL). 
** ** Otherwise, if all of the following are true: ** ** * the new page size (value of *pPageSize) is valid (a power ** of two between 512 and SQLITE_MAX_PAGE_SIZE, inclusive), and @@ -34764,36 +47383,61 @@ ** If the page size is not changed, either because one of the enumerated ** conditions above is not true, the pager was in error state when this ** function was called, or because the memory allocation attempt failed, ** then *pPageSize is set to the old, retained page size before returning. */ -SQLITE_PRIVATE int sqlite3PagerSetPagesize(Pager *pPager, u16 *pPageSize, int nReserve){ - int rc = pPager->errCode; - - if( rc==SQLITE_OK ){ - u16 pageSize = *pPageSize; - assert( pageSize==0 || (pageSize>=512 && pageSize<=SQLITE_MAX_PAGE_SIZE) ); - if( (pPager->memDb==0 || pPager->dbSize==0) - && sqlite3PcacheRefCount(pPager->pPCache)==0 - && pageSize && pageSize!=pPager->pageSize - ){ - char *pNew = (char *)sqlite3PageMalloc(pageSize); - if( !pNew ){ - rc = SQLITE_NOMEM; - }else{ - pager_reset(pPager); - pPager->pageSize = pageSize; - sqlite3PageFree(pPager->pTmpSpace); - pPager->pTmpSpace = pNew; - sqlite3PcacheSetPageSize(pPager->pPCache, pageSize); - } - } - *pPageSize = (u16)pPager->pageSize; +SQLITE_PRIVATE int sqlite3PagerSetPagesize(Pager *pPager, u32 *pPageSize, int nReserve){ + int rc = SQLITE_OK; + + /* It is not possible to do a full assert_pager_state() here, as this + ** function may be called from within PagerOpen(), before the state + ** of the Pager object is internally consistent. + ** + ** At one point this function returned an error if the pager was in + ** PAGER_ERROR state. But since PAGER_ERROR state guarantees that + ** there is at least one outstanding page reference, this function + ** is a no-op for that case anyhow. + */ + + u32 pageSize = *pPageSize; + assert( pageSize==0 || (pageSize>=512 && pageSize<=SQLITE_MAX_PAGE_SIZE) ); + if( (pPager->memDb==0 || pPager->dbSize==0) + && sqlite3PcacheRefCount(pPager->pPCache)==0 + && pageSize && pageSize!=(u32)pPager->pageSize + ){ + char *pNew = NULL; /* New temp space */ + i64 nByte = 0; + + if( pPager->eState>PAGER_OPEN && isOpen(pPager->fd) ){ + rc = sqlite3OsFileSize(pPager->fd, &nByte); + } + if( rc==SQLITE_OK ){ + pNew = (char *)sqlite3PageMalloc(pageSize); + if( !pNew ) rc = SQLITE_NOMEM; + } + + if( rc==SQLITE_OK ){ + pager_reset(pPager); + rc = sqlite3PcacheSetPageSize(pPager->pPCache, pageSize); + } + if( rc==SQLITE_OK ){ + sqlite3PageFree(pPager->pTmpSpace); + pPager->pTmpSpace = pNew; + pPager->dbSize = (Pgno)((nByte+pageSize-1)/pageSize); + pPager->pageSize = pageSize; + }else{ + sqlite3PageFree(pNew); + } + } + + *pPageSize = pPager->pageSize; + if( rc==SQLITE_OK ){ if( nReserve<0 ) nReserve = pPager->nReserve; assert( nReserve>=0 && nReserve<1000 ); pPager->nReserve = (i16)nReserve; pagerReportSize(pPager); + pagerFixMaplimit(pPager); } return rc; } /* @@ -34814,16 +47458,15 @@ ** maximum page count below the current size of the database. ** ** Regardless of mxPage, return the current maximum page count. 
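
The page-size change above is only honored when the requested size is a power of two between 512 and SQLITE_MAX_PAGE_SIZE. That constraint is cheap to test with the usual bit trick, sketched here with 65536 standing in for SQLITE_MAX_PAGE_SIZE:

    /* True if pageSize is a power of two in the range [512, 65536]. */
    static int is_valid_page_size(unsigned pageSize){
      return pageSize>=512 && pageSize<=65536
          && (pageSize & (pageSize-1))==0;
    }

For instance, 1024 and 4096 pass, while 0, 513 and 3000 are all rejected.
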
*/ SQLITE_PRIVATE int sqlite3PagerMaxPageCount(Pager *pPager, int mxPage){ - int nPage; if( mxPage>0 ){ pPager->mxPgno = mxPage; } - sqlite3PagerPagecount(pPager, &nPage); - assert( pPager->mxPgno>=nPage ); + assert( pPager->eState!=PAGER_OPEN ); /* Called only by OP_MaxPgcnt */ + assert( pPager->mxPgno>=pPager->dbSize ); /* OP_MaxPgcnt enforces this */ return pPager->mxPgno; } /* ** The following set of routines are used to disable the simulated @@ -34865,10 +47508,17 @@ */ SQLITE_PRIVATE int sqlite3PagerReadFileheader(Pager *pPager, int N, unsigned char *pDest){ int rc = SQLITE_OK; memset(pDest, 0, N); assert( isOpen(pPager->fd) || pPager->tempFile ); + + /* This routine is only called by btree immediately after creating + ** the Pager object. There has not been an opportunity to transition + ** to WAL mode yet. + */ + assert( !pagerUseWal(pPager) ); + if( isOpen(pPager->fd) ){ IOTRACE(("DBHDR %p 0 %d\n", pPager, N)) rc = sqlite3OsRead(pPager->fd, pDest, N, 0); if( rc==SQLITE_IOERR_SHORT_READ ){ rc = SQLITE_OK; @@ -34876,62 +47526,20 @@ } return rc; } /* -** Return the total number of pages in the database file associated -** with pPager. Normally, this is calculated as (<db file size>/<page-size>). +** This function may only be called when a read-transaction is open on +** the pager. It returns the total number of pages in the database. +** ** However, if the file is between 1 and <page-size> bytes in size, then ** this is considered a 1 page file. -** -** If the pager is in error state when this function is called, then the -** error state error code is returned and *pnPage left unchanged. Or, -** if the file system has to be queried for the size of the file and -** the query attempt returns an IO error, the IO error code is returned -** and *pnPage is left unchanged. -** -** Otherwise, if everything is successful, then SQLITE_OK is returned -** and *pnPage is set to the number of pages in the database. -*/ -SQLITE_PRIVATE int sqlite3PagerPagecount(Pager *pPager, int *pnPage){ - Pgno nPage; /* Value to return via *pnPage */ - - /* Determine the number of pages in the file. Store this in nPage. */ - if( pPager->dbSizeValid ){ - nPage = pPager->dbSize; - }else{ - int rc; /* Error returned by OsFileSize() */ - i64 n = 0; /* File size in bytes returned by OsFileSize() */ - - assert( isOpen(pPager->fd) || pPager->tempFile ); - if( isOpen(pPager->fd) && (0 != (rc = sqlite3OsFileSize(pPager->fd, &n))) ){ - pager_error(pPager, rc); - return rc; - } - if( n>0 && n<pPager->pageSize ){ - nPage = 1; - }else{ - nPage = (Pgno)(n / pPager->pageSize); - } - if( pPager->state!=PAGER_UNLOCK ){ - pPager->dbSize = nPage; - pPager->dbFileSize = nPage; - pPager->dbSizeValid = 1; - } - } - - /* If the current number of pages in the file is greater than the - ** configured maximum pager number, increase the allowed limit so - ** that the file can be read. - */ - if( nPage>pPager->mxPgno ){ - pPager->mxPgno = (Pgno)nPage; - } - - /* Set the output variable and return SQLITE_OK */ - *pnPage = nPage; - return SQLITE_OK; +*/ +SQLITE_PRIVATE void sqlite3PagerPagecount(Pager *pPager, int *pnPage){ + assert( pPager->eState>=PAGER_READER ); + assert( pPager->eState!=PAGER_WRITER_FINISHED ); + *pnPage = (int)pPager->dbSize; } /* ** Try to obtain a lock of type locktype on the database file. If @@ -34948,42 +47556,23 @@ ** variable to locktype before returning. 
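
pager_wait_on_lock(), whose body follows below, simply retries the lock for as long as the lock attempt reports "busy" and the registered busy-handler keeps returning non-zero. The loop shape is easy to reproduce in isolation; try_lock() here is a stub standing in for the real VFS locking call.

    #define MY_OK    0
    #define MY_BUSY  5

    /* Stub lock primitive for illustration; a real VFS xLock() call
    ** would go here and could genuinely return MY_BUSY. */
    static int try_lock(void *pFile, int lockType){
      (void)pFile; (void)lockType;
      return MY_OK;
    }

    /* Retry while the lock is busy and the busy handler asks us to keep
    ** trying (returns non-zero).  Any other result ends the loop at once. */
    static int wait_on_lock(void *pFile, int lockType,
                            int (*xBusy)(void*), void *pBusyArg){
      int rc;
      do{
        rc = try_lock(pFile, lockType);
      }while( rc==MY_BUSY && xBusy(pBusyArg) );
      return rc;
    }
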
*/ static int pager_wait_on_lock(Pager *pPager, int locktype){ int rc; /* Return code */ - /* The OS lock values must be the same as the Pager lock values */ - assert( PAGER_SHARED==SHARED_LOCK ); - assert( PAGER_RESERVED==RESERVED_LOCK ); - assert( PAGER_EXCLUSIVE==EXCLUSIVE_LOCK ); - - /* If the file is currently unlocked then the size must be unknown. It - ** must not have been modified at this point. - */ - assert( pPager->state>=PAGER_SHARED || pPager->dbSizeValid==0 ); - assert( pPager->state>=PAGER_SHARED || pPager->dbModified==0 ); - /* Check that this is either a no-op (because the requested lock is - ** already held, or one of the transistions that the busy-handler + ** already held), or one of the transitions that the busy-handler ** may be invoked during, according to the comment above ** sqlite3PagerSetBusyhandler(). */ - assert( (pPager->state>=locktype) - || (pPager->state==PAGER_UNLOCK && locktype==PAGER_SHARED) - || (pPager->state==PAGER_RESERVED && locktype==PAGER_EXCLUSIVE) + assert( (pPager->eLock>=locktype) + || (pPager->eLock==NO_LOCK && locktype==SHARED_LOCK) + || (pPager->eLock==RESERVED_LOCK && locktype==EXCLUSIVE_LOCK) ); - if( pPager->state>=locktype ){ - rc = SQLITE_OK; - }else{ - do { - rc = sqlite3OsLock(pPager->fd, locktype); - }while( rc==SQLITE_BUSY && pPager->xBusyHandler(pPager->pBusyHandlerArg) ); - if( rc==SQLITE_OK ){ - pPager->state = (u8)locktype; - IOTRACE(("LOCK %p %d\n", pPager, locktype)) - } - } + do { + rc = pagerLockDb(pPager, locktype); + }while( rc==SQLITE_BUSY && pPager->xBusyHandler(pPager->pBusyHandlerArg) ); return rc; } /* ** Function assertTruncateConstraint(pPager) checks that one of the @@ -34998,11 +47587,11 @@ ** ** If the condition asserted by this function were not true, and the ** dirty page were to be discarded from the cache via the pagerStress() ** routine, pagerStress() would not write the current page content to ** the database file. If a savepoint transaction were rolled back after -** this happened, the correct behaviour would be to restore the current +** this happened, the correct behavior would be to restore the current ** content of the page. However, since this content is not present in either ** the database file or the portion of the rollback journal and ** sub-journal rolled back the content could not be restored and the ** database image would become corrupt. It is therefore fortunate that ** this circumstance cannot arise. @@ -35022,18 +47611,32 @@ /* ** Truncate the in-memory database file image to nPage pages. This ** function does not actually modify the database file on disk. It ** just sets the internal state of the pager object so that the ** truncation will be done when the current transaction is committed. +** +** This function is only called right before committing a transaction. +** Once this function has been called, the transaction must either be +** rolled back or committed. It is not safe to call this function and +** then continue writing to the database. 
*/ SQLITE_PRIVATE void sqlite3PagerTruncateImage(Pager *pPager, Pgno nPage){ - assert( pPager->dbSizeValid ); assert( pPager->dbSize>=nPage ); - assert( pPager->state>=PAGER_RESERVED ); + assert( pPager->eState>=PAGER_WRITER_CACHEMOD ); pPager->dbSize = nPage; - assertTruncateConstraint(pPager); + + /* At one point the code here called assertTruncateConstraint() to + ** ensure that all pages being truncated away by this operation are, + ** if one or more savepoints are open, present in the savepoint + ** journal so that they can be restored if the savepoint is rolled + ** back. This is no longer necessary as this function is now only + ** called right before committing a transaction. So although the + ** Pager object may still have open savepoints (Pager.nSavepoint!=0), + ** they cannot be rolled back. So the assertTruncateConstraint() call + ** is no longer correct. */ } + /* ** This function is called before attempting a hot-journal rollback. It ** syncs the journal file to disk, then sets pPager->journalHdr to the ** size of the journal file so that the pager_playback() routine knows @@ -35055,10 +47658,85 @@ if( rc==SQLITE_OK ){ rc = sqlite3OsFileSize(pPager->jfd, &pPager->journalHdr); } return rc; } + +/* +** Obtain a reference to a memory mapped page object for page number pgno. +** The new object will use the pointer pData, obtained from xFetch(). +** If successful, set *ppPage to point to the new page reference +** and return SQLITE_OK. Otherwise, return an SQLite error code and set +** *ppPage to zero. +** +** Page references obtained by calling this function should be released +** by calling pagerReleaseMapPage(). +*/ +static int pagerAcquireMapPage( + Pager *pPager, /* Pager object */ + Pgno pgno, /* Page number */ + void *pData, /* xFetch()'d data for this page */ + PgHdr **ppPage /* OUT: Acquired page object */ +){ + PgHdr *p; /* Memory mapped page to return */ + + if( pPager->pMmapFreelist ){ + *ppPage = p = pPager->pMmapFreelist; + pPager->pMmapFreelist = p->pDirty; + p->pDirty = 0; + memset(p->pExtra, 0, pPager->nExtra); + }else{ + *ppPage = p = (PgHdr *)sqlite3MallocZero(sizeof(PgHdr) + pPager->nExtra); + if( p==0 ){ + sqlite3OsUnfetch(pPager->fd, (i64)(pgno-1) * pPager->pageSize, pData); + return SQLITE_NOMEM; + } + p->pExtra = (void *)&p[1]; + p->flags = PGHDR_MMAP; + p->nRef = 1; + p->pPager = pPager; + } + + assert( p->pExtra==(void *)&p[1] ); + assert( p->pPage==0 ); + assert( p->flags==PGHDR_MMAP ); + assert( p->pPager==pPager ); + assert( p->nRef==1 ); + + p->pgno = pgno; + p->pData = pData; + pPager->nMmapOut++; + + return SQLITE_OK; +} + +/* +** Release a reference to page pPg. pPg must have been returned by an +** earlier call to pagerAcquireMapPage(). +*/ +static void pagerReleaseMapPage(PgHdr *pPg){ + Pager *pPager = pPg->pPager; + pPager->nMmapOut--; + pPg->pDirty = pPager->pMmapFreelist; + pPager->pMmapFreelist = pPg; + + assert( pPager->fd->pMethods->iVersion>=3 ); + sqlite3OsUnfetch(pPager->fd, (i64)(pPg->pgno-1)*pPager->pageSize, pPg->pData); +} + +/* +** Free all PgHdr objects stored in the Pager.pMmapFreelist list. +*/ +static void pagerFreeMapHdrs(Pager *pPager){ + PgHdr *p; + PgHdr *pNext; + for(p=pPager->pMmapFreelist; p; p=pNext){ + pNext = p->pDirty; + sqlite3_free(p); + } +} + /* ** Shutdown the page cache. Free all memory and close all files. ** ** If a transaction was in progress when this routine is called, that @@ -35071,35 +47749,49 @@ ** is made to roll it back. 
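
The memory-mapped page handles introduced above are recycled through a singly linked free list (threaded through the pDirty pointer) rather than being freed and reallocated on every fetch. The same push/pop pattern, sketched on a simplified header type:

    #include <stdlib.h>

    typedef struct MapPg MapPg;
    struct MapPg { void *pData; MapPg *pNext; };

    typedef struct { MapPg *pFree; } MapCache;

    /* Reuse a header from the free list when possible, else allocate one. */
    static MapPg *map_pg_get(MapCache *c, void *pData){
      MapPg *p = c->pFree;
      if( p ){
        c->pFree = p->pNext;
      }else{
        p = malloc(sizeof(*p));
        if( p==0 ) return 0;
      }
      p->pData = pData;
      p->pNext = 0;
      return p;
    }

    /* Return a header to the free list instead of freeing it. */
    static void map_pg_put(MapCache *c, MapPg *p){
      p->pNext = c->pFree;
      c->pFree = p;
    }

    /* Release everything, e.g. when the pager is closed. */
    static void map_pg_free_all(MapCache *c){
      MapPg *p, *pNext;
      for(p=c->pFree; p; p=pNext){ pNext = p->pNext; free(p); }
      c->pFree = 0;
    }
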
If an error occurs during the rollback ** a hot journal may be left in the filesystem but no error is returned ** to the caller. */ SQLITE_PRIVATE int sqlite3PagerClose(Pager *pPager){ + u8 *pTmp = (u8 *)pPager->pTmpSpace; + + assert( assert_pager_state(pPager) ); disable_simulated_io_errors(); sqlite3BeginBenignMalloc(); - pPager->errCode = 0; + pagerFreeMapHdrs(pPager); + /* pPager->errCode = 0; */ pPager->exclusiveMode = 0; +#ifndef SQLITE_OMIT_WAL + sqlite3WalClose(pPager->pWal, pPager->ckptSyncFlags, pPager->pageSize, pTmp); + pPager->pWal = 0; +#endif pager_reset(pPager); if( MEMDB ){ pager_unlock(pPager); }else{ - /* Set Pager.journalHdr to -1 for the benefit of the pager_playback() - ** call which may be made from within pagerUnlockAndRollback(). If it - ** is not -1, then the unsynced portion of an open journal file may - ** be played back into the database. If a power failure occurs while - ** this is happening, the database may become corrupt. + /* If it is open, sync the journal file before calling UnlockAndRollback. + ** If this is not done, then an unsynced portion of the open journal + ** file may be played back into the database. If a power failure occurs + ** while this is happening, the database could become corrupt. + ** + ** If an error occurs while trying to sync the journal, shift the pager + ** into the ERROR state. This causes UnlockAndRollback to unlock the + ** database and close the journal file without attempting to roll it + ** back or finalize it. The next database user will have to do hot-journal + ** rollback before accessing the database file. */ if( isOpen(pPager->jfd) ){ - pPager->errCode = pagerSyncHotJournal(pPager); + pager_error(pPager, pagerSyncHotJournal(pPager)); } pagerUnlockAndRollback(pPager); } sqlite3EndBenignMalloc(); enable_simulated_io_errors(); PAGERTRACE(("CLOSE %d\n", PAGERID(pPager))); IOTRACE(("CLOSE %p\n", pPager)) + sqlite3OsClose(pPager->jfd); sqlite3OsClose(pPager->fd); - sqlite3PageFree(pPager->pTmpSpace); + sqlite3PageFree(pTmp); sqlite3PcacheClose(pPager->pPCache); #ifdef SQLITE_HAS_CODEC if( pPager->xCodecFree ) pPager->xCodecFree(pPager->pCodec); #endif @@ -35130,13 +47822,13 @@ /* ** Sync the journal. In other words, make sure all the pages that have ** been written to the journal have actually reached the surface of the ** disk and can be restored in the event of a hot-journal rollback. ** -** If the Pager.needSync flag is not set, then this function is a -** no-op. Otherwise, the actions required depend on the journal-mode -** and the device characteristics of the the file-system, as follows: +** If the Pager.noSync flag is set, then this function is a no-op. +** Otherwise, the actions required depend on the journal-mode and the +** device characteristics of the file-system, as follows: ** ** * If the journal file is an in-memory journal file, no action need ** be taken. ** ** * Otherwise, if the device does not support the SAFE_APPEND property, @@ -35156,22 +47848,29 @@ ** <update nRec field> ** } ** if( NOT SEQUENTIAL ) xSync(<journal file>); ** } ** -** The Pager.needSync flag is never be set for temporary files, or any -** file operating in no-sync mode (Pager.noSync set to non-zero). -** ** If successful, this routine clears the PGHDR_NEED_SYNC flag of every ** page currently held in memory before returning SQLITE_OK. If an IO ** error is encountered, then the IO error code is returned to the caller. 
*/ -static int syncJournal(Pager *pPager){ - if( pPager->needSync ){ +static int syncJournal(Pager *pPager, int newHdr){ + int rc; /* Return code */ + + assert( pPager->eState==PAGER_WRITER_CACHEMOD + || pPager->eState==PAGER_WRITER_DBMOD + ); + assert( assert_pager_state(pPager) ); + assert( !pagerUseWal(pPager) ); + + rc = sqlite3PagerExclusiveLock(pPager); + if( rc!=SQLITE_OK ) return rc; + + if( !pPager->noSync ){ assert( !pPager->tempFile ); - if( pPager->journalMode!=PAGER_JOURNALMODE_MEMORY ){ - int rc; /* Return code */ + if( isOpen(pPager->jfd) && pPager->journalMode!=PAGER_JOURNALMODE_MEMORY ){ const int iDc = sqlite3OsDeviceCharacteristics(pPager->fd); assert( isOpen(pPager->jfd) ); if( 0==(iDc&SQLITE_IOCAP_SAFE_APPEND) ){ /* This block deals with an obscure problem. If the last connection @@ -35196,14 +47895,14 @@ ** as a temporary buffer to inspect the first couple of bytes of ** the potential journal header. */ i64 iNextHdrOffset; u8 aMagic[8]; - u8 zHeader[sizeof(aJournalMagic)+4]; + u8 zHeader[sizeof(aJournalMagic)+4]; - memcpy(zHeader, aJournalMagic, sizeof(aJournalMagic)); - put32bits(&zHeader[sizeof(aJournalMagic)], pPager->nRec); + memcpy(zHeader, aJournalMagic, sizeof(aJournalMagic)); + put32bits(&zHeader[sizeof(aJournalMagic)], pPager->nRec); iNextHdrOffset = journalHdrOffset(pPager); rc = sqlite3OsRead(pPager->jfd, aMagic, 8, iNextHdrOffset); if( rc==SQLITE_OK && 0==memcmp(aMagic, aJournalMagic, 8) ){ static const u8 zerobyte = 0; @@ -35225,22 +47924,29 @@ ** and never needs to be updated. */ if( pPager->fullSync && 0==(iDc&SQLITE_IOCAP_SEQUENTIAL) ){ PAGERTRACE(("SYNC journal of %d\n", PAGERID(pPager))); IOTRACE(("JSYNC %p\n", pPager)) - rc = sqlite3OsSync(pPager->jfd, pPager->sync_flags); + rc = sqlite3OsSync(pPager->jfd, pPager->syncFlags); if( rc!=SQLITE_OK ) return rc; } IOTRACE(("JHDR %p %lld\n", pPager, pPager->journalHdr)); rc = sqlite3OsWrite( pPager->jfd, zHeader, sizeof(zHeader), pPager->journalHdr - ); + ); if( rc!=SQLITE_OK ) return rc; } if( 0==(iDc&SQLITE_IOCAP_SEQUENTIAL) ){ PAGERTRACE(("SYNC journal of %d\n", PAGERID(pPager))); IOTRACE(("JSYNC %p\n", pPager)) - rc = sqlite3OsSync(pPager->jfd, pPager->sync_flags| - (pPager->sync_flags==SQLITE_SYNC_FULL?SQLITE_SYNC_DATAONLY:0) + rc = sqlite3OsSync(pPager->jfd, pPager->syncFlags| + (pPager->syncFlags==SQLITE_SYNC_FULL?SQLITE_SYNC_DATAONLY:0) ); if( rc!=SQLITE_OK ) return rc; } + + pPager->journalHdr = pPager->journalOff; + if( newHdr && 0==(iDc&SQLITE_IOCAP_SAFE_APPEND) ){ + pPager->nRec = 0; + rc = writeJournalHdr(pPager); + if( rc!=SQLITE_OK ) return rc; + } @@ -35247,16 +47953,17 @@ - } - - /* The journal file was just successfully synced. Set Pager.needSync - ** to zero and clear the PGHDR_NEED_SYNC flag on all pagess. - */ - pPager->needSync = 0; - pPager->journalStarted = 1; - pPager->journalHdr = pPager->journalOff; - sqlite3PcacheClearSyncFlags(pPager->pPCache); + }else{ + pPager->journalHdr = pPager->journalOff; + } } + /* Unless the pager is in noSync mode, the journal file was just + ** successfully synced. Either way, clear the PGHDR_NEED_SYNC flag on + ** all pages. + */ + sqlite3PcacheClearSyncFlags(pPager->pPCache); + pPager->eState = PAGER_WRITER_DBMOD; + assert( assert_pager_state(pPager) ); return SQLITE_OK; } /* ** The argument is the first in a linked list of dirty pages connected @@ -35288,44 +47995,39 @@ ** ** If everything is successful, SQLITE_OK is returned. If an IO error ** occurs, an IO error code is returned. 
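
pager_write_pagelist(), in the next hunk, walks the dirty list (sorted by page number) and writes each page at byte offset (pgno-1)*pageSize in the database file. The loop below sketches that layout using POSIX pwrite() as a stand-in for the VFS xWrite method; it is not how the patch itself performs the write.

    #define _XOPEN_SOURCE 500
    #include <sys/types.h>
    #include <unistd.h>
    #include <errno.h>
    #include <stdint.h>

    typedef struct DirtyPg DirtyPg;
    struct DirtyPg { uint32_t pgno; void *pData; DirtyPg *pDirty; };

    /* Write every page in the list to file descriptor fd.  Returns 0 on
    ** success or a non-zero error code for the first failed write. */
    static int write_page_list(int fd, DirtyPg *pList, int pageSize){
      DirtyPg *p;
      for(p=pList; p; p=p->pDirty){
        off_t ofst = (off_t)(p->pgno - 1) * pageSize;
        if( pwrite(fd, p->pData, (size_t)pageSize, ofst)!=(ssize_t)pageSize ){
          return errno ? errno : -1;
        }
      }
      return 0;
    }
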
Or, if the EXCLUSIVE lock cannot ** be obtained, SQLITE_BUSY is returned. */ -static int pager_write_pagelist(PgHdr *pList){ - Pager *pPager; /* Pager object */ - int rc; /* Return code */ - - if( NEVER(pList==0) ) return SQLITE_OK; - pPager = pList->pPager; - - /* At this point there may be either a RESERVED or EXCLUSIVE lock on the - ** database file. If there is already an EXCLUSIVE lock, the following - ** call is a no-op. - ** - ** Moving the lock from RESERVED to EXCLUSIVE actually involves going - ** through an intermediate state PENDING. A PENDING lock prevents new - ** readers from attaching to the database but is unsufficient for us to - ** write. The idea of a PENDING lock is to prevent new readers from - ** coming in while we wait for existing readers to clear. - ** - ** While the pager is in the RESERVED state, the original database file - ** is unchanged and we can rollback without having to playback the - ** journal into the original database file. Once we transition to - ** EXCLUSIVE, it means the database file has been changed and any rollback - ** will require a journal playback. - */ - assert( pPager->state>=PAGER_RESERVED ); - rc = pager_wait_on_lock(pPager, EXCLUSIVE_LOCK); +static int pager_write_pagelist(Pager *pPager, PgHdr *pList){ + int rc = SQLITE_OK; /* Return code */ + + /* This function is only called for rollback pagers in WRITER_DBMOD state. */ + assert( !pagerUseWal(pPager) ); + assert( pPager->eState==PAGER_WRITER_DBMOD ); + assert( pPager->eLock==EXCLUSIVE_LOCK ); /* If the file is a temp-file has not yet been opened, open it now. It ** is not possible for rc to be other than SQLITE_OK if this branch ** is taken, as pager_wait_on_lock() is a no-op for temp-files. */ if( !isOpen(pPager->fd) ){ assert( pPager->tempFile && rc==SQLITE_OK ); rc = pagerOpentemp(pPager, pPager->fd, pPager->vfsFlags); } + + /* Before the first write, give the VFS a hint of what the final + ** file size will be. + */ + assert( rc!=SQLITE_OK || isOpen(pPager->fd) ); + if( rc==SQLITE_OK + && pPager->dbHintSize<pPager->dbSize + && (pList->pDirty || pList->pgno>pPager->dbHintSize) + ){ + sqlite3_int64 szFile = pPager->pageSize * (sqlite3_int64)pPager->dbSize; + sqlite3OsFileControlHint(pPager->fd, SQLITE_FCNTL_SIZE_HINT, &szFile); + pPager->dbHintSize = pPager->dbSize; + } while( rc==SQLITE_OK && pList ){ Pgno pgno = pList->pgno; /* If there are dirty pages in the page cache with page numbers greater @@ -35338,10 +48040,13 @@ */ if( pgno<=pPager->dbSize && 0==(pList->flags&PGHDR_DONT_WRITE) ){ i64 offset = (pgno-1)*(i64)pPager->pageSize; /* Offset to write */ char *pData; /* Data to write */ + assert( (pList->flags&PGHDR_NEED_SYNC)==0 ); + if( pList->pgno==1 ) pager_write_changecounter(pList); + /* Encode the database */ CODEC2(pPager, pList->pData, pgno, 6, return SQLITE_NOMEM, pData); /* Write out the page data. */ rc = sqlite3OsWrite(pPager->fd, pData, pPager->pageSize, offset); @@ -35354,35 +48059,51 @@ memcpy(&pPager->dbFileVers, &pData[24], sizeof(pPager->dbFileVers)); } if( pgno>pPager->dbFileSize ){ pPager->dbFileSize = pgno; } + pPager->aStat[PAGER_STAT_WRITE]++; /* Update any backup objects copying the contents of this pager. 
*/ sqlite3BackupUpdate(pPager->pBackup, pgno, (u8*)pList->pData); PAGERTRACE(("STORE %d page %d hash(%08x)\n", PAGERID(pPager), pgno, pager_pagehash(pList))); IOTRACE(("PGOUT %p %d\n", pPager, pgno)); PAGER_INCR(sqlite3_pager_writedb_count); - PAGER_INCR(pPager->nWrite); }else{ PAGERTRACE(("NOSTORE %d page %d\n", PAGERID(pPager), pgno)); } -#ifdef SQLITE_CHECK_PAGES - pList->pageHash = pager_pagehash(pList); -#endif + pager_set_pagehash(pList); pList = pList->pDirty; } return rc; } + +/* +** Ensure that the sub-journal file is open. If it is already open, this +** function is a no-op. +** +** SQLITE_OK is returned if everything goes according to plan. An +** SQLITE_IOERR_XXX error code is returned if a call to sqlite3OsOpen() +** fails. +*/ +static int openSubJournal(Pager *pPager){ + int rc = SQLITE_OK; + if( !isOpen(pPager->sjfd) ){ + if( pPager->journalMode==PAGER_JOURNALMODE_MEMORY || pPager->subjInMemory ){ + sqlite3MemJournalOpen(pPager->sjfd); + }else{ + rc = pagerOpentemp(pPager, pPager->sjfd, SQLITE_OPEN_SUBJOURNAL); + } + } + return rc; +} /* ** Append a record of the current state of page pPg to the sub-journal. -** It is the callers responsibility to use subjRequiresPage() to check -** that it is really required before calling this function. ** ** If successful, set the bit corresponding to pPg->pgno in the bitvecs ** for all open savepoints before returning. ** ** This function returns SQLITE_OK if everything is successful, an IO @@ -35391,32 +48112,51 @@ ** bitvec. */ static int subjournalPage(PgHdr *pPg){ int rc = SQLITE_OK; Pager *pPager = pPg->pPager; - if( isOpen(pPager->sjfd) ){ - void *pData = pPg->pData; - i64 offset = pPager->nSubRec*(4+pPager->pageSize); - char *pData2; - - CODEC2(pPager, pData, pPg->pgno, 7, return SQLITE_NOMEM, pData2); - PAGERTRACE(("STMT-JOURNAL %d page %d\n", PAGERID(pPager), pPg->pgno)); - - assert( pageInJournal(pPg) || pPg->pgno>pPager->dbOrigSize ); - rc = write32bits(pPager->sjfd, offset, pPg->pgno); - if( rc==SQLITE_OK ){ - rc = sqlite3OsWrite(pPager->sjfd, pData2, pPager->pageSize, offset+4); + if( pPager->journalMode!=PAGER_JOURNALMODE_OFF ){ + + /* Open the sub-journal, if it has not already been opened */ + assert( pPager->useJournal ); + assert( isOpen(pPager->jfd) || pagerUseWal(pPager) ); + assert( isOpen(pPager->sjfd) || pPager->nSubRec==0 ); + assert( pagerUseWal(pPager) + || pageInJournal(pPager, pPg) + || pPg->pgno>pPager->dbOrigSize + ); + rc = openSubJournal(pPager); + + /* If the sub-journal was opened successfully (or was already open), + ** write the journal record into the file. */ + if( rc==SQLITE_OK ){ + void *pData = pPg->pData; + i64 offset = (i64)pPager->nSubRec*(4+pPager->pageSize); + char *pData2; + + CODEC2(pPager, pData, pPg->pgno, 7, return SQLITE_NOMEM, pData2); + PAGERTRACE(("STMT-JOURNAL %d page %d\n", PAGERID(pPager), pPg->pgno)); + rc = write32bits(pPager->sjfd, offset, pPg->pgno); + if( rc==SQLITE_OK ){ + rc = sqlite3OsWrite(pPager->sjfd, pData2, pPager->pageSize, offset+4); + } } } if( rc==SQLITE_OK ){ pPager->nSubRec++; assert( pPager->nSavepoint>0 ); rc = addToSavepointBitvecs(pPager, pPg->pgno); } return rc; } - +static int subjournalPageIfRequired(PgHdr *pPg){ + if( subjRequiresPage(pPg) ){ + return subjournalPage(pPg); + }else{ + return SQLITE_OK; + } +} /* ** This function is called by the pcache layer when it has reached some ** soft memory limit. The first argument is a pointer to a Pager object ** (cast as a void*). 
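
subjournalPage() above appends fixed-size records to the sub-journal: a 4-byte big-endian page number followed by the page image, so record i starts at offset i*(4+pageSize); note the patch also widens that multiplication to 64 bits. A sketch of the same append into a plain memory buffer standing in for the sub-journal file:

    #include <stdint.h>
    #include <string.h>

    /* Append record iRec (page number plus page image) into buf, which the
    ** caller has sized to at least (iRec+1)*(4+pageSize) bytes. */
    static void subj_append(uint8_t *buf, uint32_t iRec, int pageSize,
                            uint32_t pgno, const uint8_t *aPage){
      uint64_t ofst = (uint64_t)iRec * (4 + (uint64_t)pageSize);
      uint8_t *p = &buf[ofst];
      p[0] = (uint8_t)(pgno>>24); p[1] = (uint8_t)(pgno>>16);
      p[2] = (uint8_t)(pgno>>8);  p[3] = (uint8_t)pgno;
      memcpy(p+4, aPage, (size_t)pageSize);
    }
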
The pager is always 'purgeable' (not an in-memory @@ -35440,89 +48180,88 @@ int rc = SQLITE_OK; assert( pPg->pPager==pPager ); assert( pPg->flags&PGHDR_DIRTY ); - /* The doNotSync flag is set by the sqlite3PagerWrite() function while it - ** is journalling a set of two or more database pages that are stored - ** on the same disk sector. Syncing the journal is not allowed while - ** this is happening as it is important that all members of such a - ** set of pages are synced to disk together. So, if the page this function - ** is trying to make clean will require a journal sync and the doNotSync - ** flag is set, return without doing anything. The pcache layer will - ** just have to go ahead and allocate a new page buffer instead of - ** reusing pPg. + /* The doNotSpill NOSYNC bit is set during times when doing a sync of + ** journal (and adding a new header) is not allowed. This occurs + ** during calls to sqlite3PagerWrite() while trying to journal multiple + ** pages belonging to the same sector. ** - ** Similarly, if the pager has already entered the error state, do not - ** try to write the contents of pPg to disk. + ** The doNotSpill ROLLBACK and OFF bits inhibits all cache spilling + ** regardless of whether or not a sync is required. This is set during + ** a rollback or by user request, respectively. + ** + ** Spilling is also prohibited when in an error state since that could + ** lead to database corruption. In the current implementation it + ** is impossible for sqlite3PcacheFetch() to be called with createFlag==3 + ** while in the error state, hence it is impossible for this routine to + ** be called in the error state. Nevertheless, we include a NEVER() + ** test for the error state as a safeguard against future changes. */ - if( NEVER(pPager->errCode) - || (pPager->doNotSync && pPg->flags&PGHDR_NEED_SYNC) + if( NEVER(pPager->errCode) ) return SQLITE_OK; + testcase( pPager->doNotSpill & SPILLFLAG_ROLLBACK ); + testcase( pPager->doNotSpill & SPILLFLAG_OFF ); + testcase( pPager->doNotSpill & SPILLFLAG_NOSYNC ); + if( pPager->doNotSpill + && ((pPager->doNotSpill & (SPILLFLAG_ROLLBACK|SPILLFLAG_OFF))!=0 + || (pPg->flags & PGHDR_NEED_SYNC)!=0) ){ return SQLITE_OK; } - /* Sync the journal file if required. */ - if( pPg->flags&PGHDR_NEED_SYNC ){ - rc = syncJournal(pPager); - if( rc==SQLITE_OK && pPager->fullSync && - !(pPager->journalMode==PAGER_JOURNALMODE_MEMORY) && - !(sqlite3OsDeviceCharacteristics(pPager->fd)&SQLITE_IOCAP_SAFE_APPEND) + pPg->pDirty = 0; + if( pagerUseWal(pPager) ){ + /* Write a single frame for this page to the log. */ + rc = subjournalPageIfRequired(pPg); + if( rc==SQLITE_OK ){ + rc = pagerWalFrames(pPager, pPg, 0, 0); + } + }else{ + + /* Sync the journal file if required. */ + if( pPg->flags&PGHDR_NEED_SYNC + || pPager->eState==PAGER_WRITER_CACHEMOD ){ - pPager->nRec = 0; - rc = writeJournalHdr(pPager); - } - } - - /* If the page number of this page is larger than the current size of - ** the database image, it may need to be written to the sub-journal. - ** This is because the call to pager_write_pagelist() below will not - ** actually write data to the file in this case. - ** - ** Consider the following sequence of events: - ** - ** BEGIN; - ** <journal page X> - ** <modify page X> - ** SAVEPOINT sp; - ** <shrink database file to Y pages> - ** pagerStress(page X) - ** ROLLBACK TO sp; - ** - ** If (X>Y), then when pagerStress is called page X will not be written - ** out to the database file, but will be dropped from the cache. 
Then, - ** following the "ROLLBACK TO sp" statement, reading page X will read - ** data from the database file. This will be the copy of page X as it - ** was when the transaction started, not as it was when "SAVEPOINT sp" - ** was executed. - ** - ** The solution is to write the current data for page X into the - ** sub-journal file now (if it is not already there), so that it will - ** be restored to its current value when the "ROLLBACK TO sp" is - ** executed. - */ - if( NEVER( - rc==SQLITE_OK && pPg->pgno>pPager->dbSize && subjRequiresPage(pPg) - ) ){ - rc = subjournalPage(pPg); - } - - /* Write the contents of the page out to the database file. */ - if( rc==SQLITE_OK ){ - pPg->pDirty = 0; - rc = pager_write_pagelist(pPg); + rc = syncJournal(pPager, 1); + } + + /* Write the contents of the page out to the database file. */ + if( rc==SQLITE_OK ){ + assert( (pPg->flags&PGHDR_NEED_SYNC)==0 ); + rc = pager_write_pagelist(pPager, pPg); + } } /* Mark the page as clean. */ if( rc==SQLITE_OK ){ PAGERTRACE(("STRESS %d page %d\n", PAGERID(pPager), pPg->pgno)); sqlite3PcacheMakeClean(pPg); } - return pager_error(pPager, rc); + return pager_error(pPager, rc); } +/* +** Flush all unreferenced dirty pages to disk. +*/ +SQLITE_PRIVATE int sqlite3PagerFlush(Pager *pPager){ + int rc = pPager->errCode; + if( !MEMDB ){ + PgHdr *pList = sqlite3PcacheDirtyList(pPager->pPCache); + assert( assert_pager_state(pPager) ); + while( rc==SQLITE_OK && pList ){ + PgHdr *pNext = pList->pDirty; + if( pList->nRef==0 ){ + rc = pagerStress((void*)pPager, pList); + } + pList = pNext; + } + } + + return rc; +} /* ** Allocate and initialize a new Pager object and put a pointer to it ** in *ppPager. The pager should eventually be freed by passing it ** to sqlite3PagerClose(). @@ -35538,11 +48277,11 @@ ** along with each page reference. This space is available to the user ** via the sqlite3PagerGetExtra() API. ** ** The flags argument is used to specify properties that affect the ** operation of the pager. It should be passed some bitwise combination -** of the PAGER_OMIT_JOURNAL and PAGER_NO_READLOCK flags. +** of the PAGER_* flags. ** ** The vfsFlags parameter is a bitmask to pass to the flags parameter ** of the xOpen() method of the supplied VFS when opening files. ** ** If the pager object is allocated and the specified file opened @@ -35569,13 +48308,14 @@ int readOnly = 0; /* True if this is a read-only file */ int journalFileSize; /* Bytes to allocate for each journal fd */ char *zPathname = 0; /* Full path to database file */ int nPathname = 0; /* Number of bytes in zPathname */ int useJournal = (flags & PAGER_OMIT_JOURNAL)==0; /* False to omit journal */ - int noReadlock = (flags & PAGER_NO_READLOCK)!=0; /* True to omit read-lock */ int pcacheSize = sqlite3PcacheSize(); /* Bytes to allocate for PCache */ - u16 szPageDflt = SQLITE_DEFAULT_PAGE_SIZE; /* Default page size */ + u32 szPageDflt = SQLITE_DEFAULT_PAGE_SIZE; /* Default page size */ + const char *zUri = 0; /* URI args to copy */ + int nUri = 0; /* Number of bytes of URI args at *zUri */ /* Figure out how much space is required for each journal file-handle ** (there are two of them, the main journal and the sub-journal). This ** is the maximum space required for an in-memory journal file handle ** and a regular journal file-handle. Note that a "regular journal-handle" @@ -35589,33 +48329,44 @@ journalFileSize = ROUND8(sqlite3MemJournalSize()); } /* Set the output variable to NULL in case an error occurs. 
*/ *ppPager = 0; + +#ifndef SQLITE_OMIT_MEMORYDB + if( flags & PAGER_MEMORY ){ + memDb = 1; + if( zFilename && zFilename[0] ){ + zPathname = sqlite3DbStrDup(0, zFilename); + if( zPathname==0 ) return SQLITE_NOMEM; + nPathname = sqlite3Strlen30(zPathname); + zFilename = 0; + } + } +#endif /* Compute and store the full pathname in an allocated buffer pointed ** to by zPathname, length nPathname. Or, if this is a temporary file, ** leave both nPathname and zPathname set to 0. */ if( zFilename && zFilename[0] ){ + const char *z; nPathname = pVfs->mxPathname+1; - zPathname = sqlite3Malloc(nPathname*2); + zPathname = sqlite3DbMallocRaw(0, nPathname*2); if( zPathname==0 ){ return SQLITE_NOMEM; } -#ifndef SQLITE_OMIT_MEMORYDB - if( strcmp(zFilename,":memory:")==0 ){ - memDb = 1; - zPathname[0] = 0; - }else -#endif - { - zPathname[0] = 0; /* Make sure initialized even if FullPathname() fails */ - rc = sqlite3OsFullPathname(pVfs, zFilename, nPathname, zPathname); - } - + zPathname[0] = 0; /* Make sure initialized even if FullPathname() fails */ + rc = sqlite3OsFullPathname(pVfs, zFilename, nPathname, zPathname); nPathname = sqlite3Strlen30(zPathname); + z = zUri = &zFilename[sqlite3Strlen30(zFilename)+1]; + while( *z ){ + z += sqlite3Strlen30(z)+1; + z += sqlite3Strlen30(z)+1; + } + nUri = (int)(&z[1] - zUri); + assert( nUri>=0 ); if( rc==SQLITE_OK && nPathname+8>pVfs->mxPathname ){ /* This branch is taken when the journal path required by ** the database being opened will be more than pVfs->mxPathname ** bytes in length. This means the database cannot be opened, ** as it will not be possible to open the journal file or even @@ -35622,11 +48373,11 @@ ** check for a hot-journal before reading. */ rc = SQLITE_CANTOPEN_BKPT; } if( rc!=SQLITE_OK ){ - sqlite3_free(zPathname); + sqlite3DbFree(0, zPathname); return rc; } } /* Allocate memory for the Pager structure, PCache object, the @@ -35644,16 +48395,19 @@ pPtr = (u8 *)sqlite3MallocZero( ROUND8(sizeof(*pPager)) + /* Pager structure */ ROUND8(pcacheSize) + /* PCache object */ ROUND8(pVfs->szOsFile) + /* The main db file */ journalFileSize * 2 + /* The two journal files */ - nPathname + 1 + /* zFilename */ - nPathname + 8 + 1 /* zJournal */ + nPathname + 1 + nUri + /* zFilename */ + nPathname + 8 + 2 /* zJournal */ +#ifndef SQLITE_OMIT_WAL + + nPathname + 4 + 2 /* zWal */ +#endif ); assert( EIGHT_BYTE_ALIGNMENT(SQLITE_INT_TO_PTR(journalFileSize)) ); if( !pPtr ){ - sqlite3_free(zPathname); + sqlite3DbFree(0, zPathname); return SQLITE_NOMEM; } pPager = (Pager*)(pPtr); pPager->pPCache = (PCache*)(pPtr += ROUND8(sizeof(*pPager))); pPager->fd = (sqlite3_file*)(pPtr += ROUND8(pcacheSize)); @@ -35662,25 +48416,34 @@ pPager->zFilename = (char*)(pPtr += journalFileSize); assert( EIGHT_BYTE_ALIGNMENT(pPager->jfd) ); /* Fill in the Pager.zFilename and Pager.zJournal buffers, if required. 
*/ if( zPathname ){ - pPager->zJournal = (char*)(pPtr += nPathname + 1); + assert( nPathname>0 ); + pPager->zJournal = (char*)(pPtr += nPathname + 1 + nUri); memcpy(pPager->zFilename, zPathname, nPathname); + if( nUri ) memcpy(&pPager->zFilename[nPathname+1], zUri, nUri); memcpy(pPager->zJournal, zPathname, nPathname); - memcpy(&pPager->zJournal[nPathname], "-journal", 8); - if( pPager->zFilename[0]==0 ) pPager->zJournal[0] = 0; - sqlite3_free(zPathname); + memcpy(&pPager->zJournal[nPathname], "-journal\000", 8+2); + sqlite3FileSuffix3(pPager->zFilename, pPager->zJournal); +#ifndef SQLITE_OMIT_WAL + pPager->zWal = &pPager->zJournal[nPathname+8+1]; + memcpy(pPager->zWal, zPathname, nPathname); + memcpy(&pPager->zWal[nPathname], "-wal\000", 4+1); + sqlite3FileSuffix3(pPager->zFilename, pPager->zWal); +#endif + sqlite3DbFree(0, zPathname); } pPager->pVfs = pVfs; pPager->vfsFlags = vfsFlags; /* Open the pager file. */ - if( zFilename && zFilename[0] && !memDb ){ + if( zFilename && zFilename[0] ){ int fout = 0; /* VFS flags returned by xOpen() */ rc = sqlite3OsOpen(pVfs, pPager->zFilename, pPager->fd, vfsFlags, &fout); + assert( !memDb ); readOnly = (fout&SQLITE_OPEN_READONLY); /* If the file was successfully opened for read/write access, ** choose a default page size in case we have to create the ** database file. The default page size is the maximum of: @@ -35687,46 +48450,59 @@ ** ** + SQLITE_DEFAULT_PAGE_SIZE, ** + The value returned by sqlite3OsSectorSize() ** + The largest page size that can be written atomically. */ - if( rc==SQLITE_OK && !readOnly ){ - setSectorSize(pPager); - assert(SQLITE_DEFAULT_PAGE_SIZE<=SQLITE_MAX_DEFAULT_PAGE_SIZE); - if( szPageDflt<pPager->sectorSize ){ - if( pPager->sectorSize>SQLITE_MAX_DEFAULT_PAGE_SIZE ){ - szPageDflt = SQLITE_MAX_DEFAULT_PAGE_SIZE; - }else{ - szPageDflt = (u16)pPager->sectorSize; - } - } + if( rc==SQLITE_OK ){ + int iDc = sqlite3OsDeviceCharacteristics(pPager->fd); + if( !readOnly ){ + setSectorSize(pPager); + assert(SQLITE_DEFAULT_PAGE_SIZE<=SQLITE_MAX_DEFAULT_PAGE_SIZE); + if( szPageDflt<pPager->sectorSize ){ + if( pPager->sectorSize>SQLITE_MAX_DEFAULT_PAGE_SIZE ){ + szPageDflt = SQLITE_MAX_DEFAULT_PAGE_SIZE; + }else{ + szPageDflt = (u32)pPager->sectorSize; + } + } #ifdef SQLITE_ENABLE_ATOMIC_WRITE - { - int iDc = sqlite3OsDeviceCharacteristics(pPager->fd); - int ii; - assert(SQLITE_IOCAP_ATOMIC512==(512>>8)); - assert(SQLITE_IOCAP_ATOMIC64K==(65536>>8)); - assert(SQLITE_MAX_DEFAULT_PAGE_SIZE<=65536); - for(ii=szPageDflt; ii<=SQLITE_MAX_DEFAULT_PAGE_SIZE; ii=ii*2){ - if( iDc&(SQLITE_IOCAP_ATOMIC|(ii>>8)) ){ - szPageDflt = ii; + { + int ii; + assert(SQLITE_IOCAP_ATOMIC512==(512>>8)); + assert(SQLITE_IOCAP_ATOMIC64K==(65536>>8)); + assert(SQLITE_MAX_DEFAULT_PAGE_SIZE<=65536); + for(ii=szPageDflt; ii<=SQLITE_MAX_DEFAULT_PAGE_SIZE; ii=ii*2){ + if( iDc&(SQLITE_IOCAP_ATOMIC|(ii>>8)) ){ + szPageDflt = ii; + } } } +#endif } -#endif + pPager->noLock = sqlite3_uri_boolean(zFilename, "nolock", 0); + if( (iDc & SQLITE_IOCAP_IMMUTABLE)!=0 + || sqlite3_uri_boolean(zFilename, "immutable", 0) ){ + vfsFlags |= SQLITE_OPEN_READONLY; + goto act_like_temp_file; + } } }else{ /* If a temporary file is requested, it is not opened immediately. ** In this case we accept the default page size and delay actually ** opening the file until the first call to OsWrite(). ** ** This branch is also run for an in-memory database. An in-memory ** database is the same as a temp-file that is never written out to ** disk and uses an in-memory rollback journal. 
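/* Illustrative sketch (editorial aside, not part of this diff): the hunk
** above makes the pager honour the "nolock" and "immutable" URI query
** parameters via sqlite3_uri_boolean(), treating an immutable database
** like a read-only temp file.  An application supplies such parameters by
** opening the database with a URI filename; demoOpenImmutable is a
** hypothetical helper.
*/
#include "sqlite3.h"

static int demoOpenImmutable(const char *zUri, sqlite3 **ppDb){
  /* zUri might be "file:data.db?immutable=1".  SQLITE_OPEN_URI enables
  ** URI filename interpretation for this connection. */
  return sqlite3_open_v2(zUri, ppDb,
                         SQLITE_OPEN_READONLY|SQLITE_OPEN_URI, 0);
}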
+ ** + ** This branch also runs for files marked as immutable. */ +act_like_temp_file: tempFile = 1; - pPager->state = PAGER_EXCLUSIVE; + pPager->eState = PAGER_READER; /* Pretend we already have a lock */ + pPager->eLock = EXCLUSIVE_LOCK; /* Pretend we are in EXCLUSIVE mode */ + pPager->noLock = 1; /* Do no locking */ readOnly = (vfsFlags&SQLITE_OPEN_READONLY); } /* The following call to PagerSetPagesize() serves to set the value of ** Pager.pageSize and to allocate the Pager.pTmpSpace buffer. @@ -35735,55 +48511,67 @@ assert( pPager->memDb==0 ); rc = sqlite3PagerSetPagesize(pPager, &szPageDflt, -1); testcase( rc!=SQLITE_OK ); } - /* If an error occurred in either of the blocks above, free the - ** Pager structure and close the file. + /* Initialize the PCache object. */ + if( rc==SQLITE_OK ){ + assert( nExtra<1000 ); + nExtra = ROUND8(nExtra); + rc = sqlite3PcacheOpen(szPageDflt, nExtra, !memDb, + !memDb?pagerStress:0, (void *)pPager, pPager->pPCache); + } + + /* If an error occurred above, free the Pager structure and close the file. */ if( rc!=SQLITE_OK ){ - assert( !pPager->pTmpSpace ); sqlite3OsClose(pPager->fd); + sqlite3PageFree(pPager->pTmpSpace); sqlite3_free(pPager); return rc; } - /* Initialize the PCache object. */ - assert( nExtra<1000 ); - nExtra = ROUND8(nExtra); - sqlite3PcacheOpen(szPageDflt, nExtra, !memDb, - !memDb?pagerStress:0, (void *)pPager, pPager->pPCache); - PAGERTRACE(("OPEN %d %s\n", FILEHANDLEID(pPager->fd), pPager->zFilename)); IOTRACE(("OPEN %p %s\n", pPager, pPager->zFilename)) pPager->useJournal = (u8)useJournal; - pPager->noReadlock = (noReadlock && readOnly) ?1:0; /* pPager->stmtOpen = 0; */ /* pPager->stmtInUse = 0; */ /* pPager->nRef = 0; */ - pPager->dbSizeValid = (u8)memDb; /* pPager->stmtSize = 0; */ /* pPager->stmtJSize = 0; */ /* pPager->nPage = 0; */ pPager->mxPgno = SQLITE_MAX_PAGE_COUNT; /* pPager->state = PAGER_UNLOCK; */ - assert( pPager->state == (tempFile ? PAGER_EXCLUSIVE : PAGER_UNLOCK) ); /* pPager->errMask = 0; */ pPager->tempFile = (u8)tempFile; assert( tempFile==PAGER_LOCKINGMODE_NORMAL || tempFile==PAGER_LOCKINGMODE_EXCLUSIVE ); assert( PAGER_LOCKINGMODE_EXCLUSIVE==1 ); pPager->exclusiveMode = (u8)tempFile; pPager->changeCountDone = pPager->tempFile; pPager->memDb = (u8)memDb; pPager->readOnly = (u8)readOnly; - /* pPager->needSync = 0; */ assert( useJournal || pPager->tempFile ); pPager->noSync = pPager->tempFile; - pPager->fullSync = pPager->noSync ?0:1; - pPager->sync_flags = SQLITE_SYNC_NORMAL; + if( pPager->noSync ){ + assert( pPager->fullSync==0 ); + assert( pPager->extraSync==0 ); + assert( pPager->syncFlags==0 ); + assert( pPager->walSyncFlags==0 ); + assert( pPager->ckptSyncFlags==0 ); + }else{ + pPager->fullSync = 1; +#if SQLITE_EXTRA_DURABLE + pPager->extraSync = 1; +#else + pPager->extraSync = 0; +#endif + pPager->syncFlags = SQLITE_SYNC_NORMAL; + pPager->walSyncFlags = SQLITE_SYNC_NORMAL | WAL_SYNC_TRANSACTIONS; + pPager->ckptSyncFlags = SQLITE_SYNC_NORMAL; + } /* pPager->pFirst = 0; */ /* pPager->pFirstSynced = 0; */ /* pPager->pLast = 0; */ pPager->nExtra = (u16)nExtra; pPager->journalSizeLimit = SQLITE_DEFAULT_JOURNAL_SIZE_LIMIT; @@ -35796,15 +48584,40 @@ } /* pPager->xBusyHandler = 0; */ /* pPager->pBusyHandlerArg = 0; */ pPager->xReiniter = xReinit; /* memset(pPager->aHash, 0, sizeof(pPager->aHash)); */ + /* pPager->szMmap = SQLITE_DEFAULT_MMAP_SIZE // will be set by btree.c */ *ppPager = pPager; return SQLITE_OK; } + +/* Verify that the database file has not be deleted or renamed out from +** under the pager. 
Return SQLITE_OK if the database is still were it ought +** to be on disk. Return non-zero (SQLITE_READONLY_DBMOVED or some other error +** code from sqlite3OsAccess()) if the database has gone missing. +*/ +static int databaseIsUnmoved(Pager *pPager){ + int bHasMoved = 0; + int rc; + + if( pPager->tempFile ) return SQLITE_OK; + if( pPager->dbSize==0 ) return SQLITE_OK; + assert( pPager->zFilename && pPager->zFilename[0] ); + rc = sqlite3OsFileControl(pPager->fd, SQLITE_FCNTL_HAS_MOVED, &bHasMoved); + if( rc==SQLITE_NOTFOUND ){ + /* If the HAS_MOVED file-control is unimplemented, assume that the file + ** has not been moved. That is the historical behavior of SQLite: prior to + ** version 3.8.3, it never checked */ + rc = SQLITE_OK; + }else if( rc==SQLITE_OK && bHasMoved ){ + rc = SQLITE_READONLY_DBMOVED; + } + return rc; +} /* ** This function is called after transitioning from PAGER_UNLOCK to ** PAGER_SHARED state. It tests if there is a hot journal present in @@ -35836,23 +48649,28 @@ ** to determine whether or not a hot-journal file exists, the IO error ** code is returned and the value of *pExists is undefined. */ static int hasHotJournal(Pager *pPager, int *pExists){ sqlite3_vfs * const pVfs = pPager->pVfs; - int rc; /* Return code */ - int exists; /* True if a journal file is present */ + int rc = SQLITE_OK; /* Return code */ + int exists = 1; /* True if a journal file is present */ + int jrnlOpen = !!isOpen(pPager->jfd); - assert( pPager!=0 ); assert( pPager->useJournal ); assert( isOpen(pPager->fd) ); - assert( !isOpen(pPager->jfd) ); - assert( pPager->state <= PAGER_SHARED ); + assert( pPager->eState==PAGER_OPEN ); + + assert( jrnlOpen==0 || ( sqlite3OsDeviceCharacteristics(pPager->jfd) & + SQLITE_IOCAP_UNDELETABLE_WHEN_OPEN + )); *pExists = 0; - rc = sqlite3OsAccess(pVfs, pPager->zJournal, SQLITE_ACCESS_EXISTS, &exists); + if( !jrnlOpen ){ + rc = sqlite3OsAccess(pVfs, pPager->zJournal, SQLITE_ACCESS_EXISTS, &exists); + } if( rc==SQLITE_OK && exists ){ - int locked; /* True if some process holds a RESERVED lock */ + int locked = 0; /* True if some process holds a RESERVED lock */ /* Race condition here: Another process might have been holding the ** the RESERVED lock and have a journal open at the sqlite3OsAccess() ** call above, but then delete the journal and drop the lock before ** we get to the following sqlite3OsCheckReservedLock() call. If that @@ -35860,47 +48678,53 @@ ** in fact there is none. This results in a false-positive which will ** be dealt with by the playback routine. Ticket #3883. */ rc = sqlite3OsCheckReservedLock(pPager->fd, &locked); if( rc==SQLITE_OK && !locked ){ - int nPage; - - /* Check the size of the database file. If it consists of 0 pages, - ** then delete the journal file. See the header comment above for - ** the reasoning here. Delete the obsolete journal file under - ** a RESERVED lock to avoid race conditions and to avoid violating - ** [H33020]. - */ - rc = sqlite3PagerPagecount(pPager, &nPage); - if( rc==SQLITE_OK ){ - if( nPage==0 ){ + Pgno nPage; /* Number of pages in database file */ + + rc = pagerPagecount(pPager, &nPage); + if( rc==SQLITE_OK ){ + /* If the database is zero pages in size, that means that either (1) the + ** journal is a remnant from a prior database with the same name where + ** the database file but not the journal was deleted, or (2) the initial + ** transaction that populates a new database is being rolled back. + ** In either case, the journal file can be deleted. 
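/* Illustrative sketch (editorial aside, not part of this diff): the same
** SQLITE_FCNTL_HAS_MOVED check performed by databaseIsUnmoved() above can
** be issued from application code through sqlite3_file_control().  The
** helper name demoCheckMoved is hypothetical.
*/
#include "sqlite3.h"

static int demoCheckMoved(sqlite3 *db, int *pHasMoved){
  *pHasMoved = 0;
  /* On success, *pHasMoved is set to 1 if the "main" database file has
  ** been renamed or deleted since it was opened.  A VFS that does not
  ** implement the opcode returns SQLITE_NOTFOUND, which a caller may
  ** reasonably treat as "not moved". */
  return sqlite3_file_control(db, "main", SQLITE_FCNTL_HAS_MOVED, pHasMoved);
}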
However, take care + ** not to delete the journal file if it is already open due to + ** journal_mode=PERSIST. + */ + if( nPage==0 && !jrnlOpen ){ sqlite3BeginBenignMalloc(); - if( sqlite3OsLock(pPager->fd, RESERVED_LOCK)==SQLITE_OK ){ + if( pagerLockDb(pPager, RESERVED_LOCK)==SQLITE_OK ){ sqlite3OsDelete(pVfs, pPager->zJournal, 0); - sqlite3OsUnlock(pPager->fd, SHARED_LOCK); + if( !pPager->exclusiveMode ) pagerUnlockDb(pPager, SHARED_LOCK); } sqlite3EndBenignMalloc(); }else{ /* The journal file exists and no other connection has a reserved ** or greater lock on the database file. Now check that there is ** at least one non-zero bytes at the start of the journal file. ** If there is, then we consider this journal to be hot. If not, ** it can be ignored. */ - int f = SQLITE_OPEN_READONLY|SQLITE_OPEN_MAIN_JOURNAL; - rc = sqlite3OsOpen(pVfs, pPager->zJournal, pPager->jfd, f, &f); + if( !jrnlOpen ){ + int f = SQLITE_OPEN_READONLY|SQLITE_OPEN_MAIN_JOURNAL; + rc = sqlite3OsOpen(pVfs, pPager->zJournal, pPager->jfd, f, &f); + } if( rc==SQLITE_OK ){ u8 first = 0; rc = sqlite3OsRead(pPager->jfd, (void *)&first, 1, 0); if( rc==SQLITE_IOERR_SHORT_READ ){ rc = SQLITE_OK; } - sqlite3OsClose(pPager->jfd); + if( !jrnlOpen ){ + sqlite3OsClose(pPager->jfd); + } *pExists = (first!=0); }else if( rc==SQLITE_CANTOPEN ){ /* If we cannot open the rollback journal file in order to see if - ** its has a zero header, that might be due to an I/O error, or + ** it has a zero header, that might be due to an I/O error, or ** it might be due to the race condition described above and in ** ticket #3883. Either way, assume that the journal is hot. ** This might be a false positive. But if it is, then the ** automatic journal playback and recovery mechanism will deal ** with it under an EXCLUSIVE lock where we do not need to @@ -35915,80 +48739,19 @@ } return rc; } -/* -** Read the content for page pPg out of the database file and into -** pPg->pData. A shared lock or greater must be held on the database -** file before this function is called. -** -** If page 1 is read, then the value of Pager.dbFileVers[] is set to -** the value read from the database file. -** -** If an IO error occurs, then the IO error is returned to the caller. -** Otherwise, SQLITE_OK is returned. -*/ -static int readDbPage(PgHdr *pPg){ - Pager *pPager = pPg->pPager; /* Pager object associated with page pPg */ - Pgno pgno = pPg->pgno; /* Page number to read */ - int rc; /* Return code */ - i64 iOffset; /* Byte offset of file to read from */ - - assert( pPager->state>=PAGER_SHARED && !MEMDB ); - assert( isOpen(pPager->fd) ); - - if( NEVER(!isOpen(pPager->fd)) ){ - assert( pPager->tempFile ); - memset(pPg->pData, 0, pPager->pageSize); - return SQLITE_OK; - } - iOffset = (pgno-1)*(i64)pPager->pageSize; - rc = sqlite3OsRead(pPager->fd, pPg->pData, pPager->pageSize, iOffset); - if( rc==SQLITE_IOERR_SHORT_READ ){ - rc = SQLITE_OK; - } - if( pgno==1 ){ - if( rc ){ - /* If the read is unsuccessful, set the dbFileVers[] to something - ** that will never be a valid file version. dbFileVers[] is a copy - ** of bytes 24..39 of the database. Bytes 28..31 should always be - ** zero. Bytes 32..35 and 35..39 should be page numbers which are - ** never 0xffffffff. So filling pPager->dbFileVers[] with all 0xff - ** bytes should suffice. - ** - ** For an encrypted database, the situation is more complex: bytes - ** 24..39 of the database are white noise. 
But the probability of - ** white noising equaling 16 bytes of 0xff is vanishingly small so - ** we should still be ok. - */ - memset(pPager->dbFileVers, 0xff, sizeof(pPager->dbFileVers)); - }else{ - u8 *dbFileVers = &((u8*)pPg->pData)[24]; - memcpy(&pPager->dbFileVers, dbFileVers, sizeof(pPager->dbFileVers)); - } - } - CODEC1(pPager, pPg->pData, pgno, 3, rc = SQLITE_NOMEM); - - PAGER_INCR(sqlite3_pager_readdb_count); - PAGER_INCR(pPager->nRead); - IOTRACE(("PGIN %p %d\n", pPager, pgno)); - PAGERTRACE(("FETCH %d page %d hash(%08x)\n", - PAGERID(pPager), pgno, pager_pagehash(pPg))); - - return rc; -} - /* ** This function is called to obtain a shared lock on the database file. -** It is illegal to call sqlite3PagerAcquire() until after this function +** It is illegal to call sqlite3PagerGet() until after this function ** has been successfully called. If a shared-lock is already held when ** this function is called, it is a no-op. ** ** The following operations are also performed by this function. ** -** 1) If the pager is currently in PAGER_UNLOCK state (no lock held +** 1) If the pager is currently in PAGER_OPEN state (no lock held ** on the database file), then an attempt is made to obtain a ** SHARED lock on the database file. Immediately after obtaining ** the SHARED lock, the file-system is checked for a hot-journal, ** which is played back if present. Following any hot-journal ** rollback, the contents of the cache are validated by checking @@ -35999,68 +48762,53 @@ ** no outstanding references to any pages, and is in the error state, ** then an attempt is made to clear the error state by discarding ** the contents of the page cache and rolling back any open journal ** file. ** -** If the operation described by (2) above is not attempted, and if the -** pager is in an error state other than SQLITE_FULL when this is called, -** the error state error code is returned. It is permitted to read the -** database when in SQLITE_FULL error state. -** -** Otherwise, if everything is successful, SQLITE_OK is returned. If an -** IO error occurs while locking the database, checking for a hot-journal -** file or rolling back a journal file, the IO error code is returned. +** If everything is successful, SQLITE_OK is returned. If an IO error +** occurs while locking the database, checking for a hot-journal file or +** rolling back a journal file, the IO error code is returned. */ SQLITE_PRIVATE int sqlite3PagerSharedLock(Pager *pPager){ int rc = SQLITE_OK; /* Return code */ - int isErrorReset = 0; /* True if recovering from error state */ /* This routine is only called from b-tree and only when there are no - ** outstanding pages */ + ** outstanding pages. This implies that the pager state should either + ** be OPEN or READER. READER is only possible if the pager is or was in + ** exclusive access mode. + */ assert( sqlite3PcacheRefCount(pPager->pPCache)==0 ); + assert( assert_pager_state(pPager) ); + assert( pPager->eState==PAGER_OPEN || pPager->eState==PAGER_READER ); if( NEVER(MEMDB && pPager->errCode) ){ return pPager->errCode; } - /* If this database is in an error-state, now is a chance to clear - ** the error. Discard the contents of the pager-cache and rollback - ** any hot journal in the file-system. 
- */ - if( pPager->errCode ){ - if( isOpen(pPager->jfd) || pPager->zJournal ){ - isErrorReset = 1; - } - pPager->errCode = SQLITE_OK; - pager_reset(pPager); - } - - if( pPager->state==PAGER_UNLOCK || isErrorReset ){ - sqlite3_vfs * const pVfs = pPager->pVfs; - int isHotJournal = 0; + if( !pagerUseWal(pPager) && pPager->eState==PAGER_OPEN ){ + int bHotJournal = 1; /* True if there exists a hot journal-file */ + assert( !MEMDB ); - assert( sqlite3PcacheRefCount(pPager->pPCache)==0 ); - if( pPager->noReadlock ){ - assert( pPager->readOnly ); - pPager->state = PAGER_SHARED; - }else{ - rc = pager_wait_on_lock(pPager, SHARED_LOCK); - if( rc!=SQLITE_OK ){ - assert( pPager->state==PAGER_UNLOCK ); - return pager_error(pPager, rc); - } - } - assert( pPager->state>=SHARED_LOCK ); + + rc = pager_wait_on_lock(pPager, SHARED_LOCK); + if( rc!=SQLITE_OK ){ + assert( pPager->eLock==NO_LOCK || pPager->eLock==UNKNOWN_LOCK ); + goto failed; + } /* If a journal file exists, and there is no RESERVED lock on the ** database file, then it either needs to be played back or deleted. */ - if( !isErrorReset ){ - assert( pPager->state <= PAGER_SHARED ); - rc = hasHotJournal(pPager, &isHotJournal); - if( rc!=SQLITE_OK ){ + if( pPager->eLock<=SHARED_LOCK ){ + rc = hasHotJournal(pPager, &bHotJournal); + } + if( rc!=SQLITE_OK ){ + goto failed; + } + if( bHotJournal ){ + if( pPager->readOnly ){ + rc = SQLITE_READONLY_ROLLBACK; goto failed; } - } - if( isErrorReset || isHotJournal ){ + /* Get an EXCLUSIVE lock on the database file. At this point it is ** important that a RESERVED lock is not obtained on the way to the ** EXCLUSIVE lock. If it were, another process might open the ** database file, detect the RESERVED lock, and conclude that the ** database is safe to read while this process is still rolling the @@ -36068,62 +48816,49 @@ ** ** Because the intermediate RESERVED lock is not requested, any ** other process attempting to access the database file will get to ** this point in the code and fail to obtain its own EXCLUSIVE lock ** on the database file. - */ - if( pPager->state<EXCLUSIVE_LOCK ){ - rc = sqlite3OsLock(pPager->fd, EXCLUSIVE_LOCK); - if( rc!=SQLITE_OK ){ - rc = pager_error(pPager, rc); - goto failed; - } - pPager->state = PAGER_EXCLUSIVE; - } - - /* Open the journal for read/write access. This is because in - ** exclusive-access mode the file descriptor will be kept open and - ** possibly used for a transaction later on. On some systems, the - ** OsTruncate() call used in exclusive-access mode also requires - ** a read/write file handle. - */ - if( !isOpen(pPager->jfd) ){ - int res; - rc = sqlite3OsAccess(pVfs,pPager->zJournal,SQLITE_ACCESS_EXISTS,&res); - if( rc==SQLITE_OK ){ - if( res ){ - int fout = 0; - int f = SQLITE_OPEN_READWRITE|SQLITE_OPEN_MAIN_JOURNAL; - assert( !pPager->tempFile ); - rc = sqlite3OsOpen(pVfs, pPager->zJournal, pPager->jfd, f, &fout); - assert( rc!=SQLITE_OK || isOpen(pPager->jfd) ); - if( rc==SQLITE_OK && fout&SQLITE_OPEN_READONLY ){ - rc = SQLITE_CANTOPEN_BKPT; - sqlite3OsClose(pPager->jfd); - } - }else{ - /* If the journal does not exist, it usually means that some - ** other connection managed to get in and roll it back before - ** this connection obtained the exclusive lock above. Or, it - ** may mean that the pager was in the error-state when this - ** function was called and the journal file does not exist. 
*/ - rc = pager_end_transaction(pPager, 0); - } - } - } - if( rc!=SQLITE_OK ){ - goto failed; - } - - /* Reset the journal status fields to indicates that we have no - ** rollback journal at this time. */ - pPager->journalStarted = 0; - pPager->journalOff = 0; - pPager->setMaster = 0; - pPager->journalHdr = 0; - - /* Make sure the journal file has been synced to disk. */ + ** + ** Unless the pager is in locking_mode=exclusive mode, the lock is + ** downgraded to SHARED_LOCK before this function returns. + */ + rc = pagerLockDb(pPager, EXCLUSIVE_LOCK); + if( rc!=SQLITE_OK ){ + goto failed; + } + + /* If it is not already open and the file exists on disk, open the + ** journal for read/write access. Write access is required because + ** in exclusive-access mode the file descriptor will be kept open + ** and possibly used for a transaction later on. Also, write-access + ** is usually required to finalize the journal in journal_mode=persist + ** mode (and also for journal_mode=truncate on some systems). + ** + ** If the journal does not exist, it usually means that some + ** other connection managed to get in and roll it back before + ** this connection obtained the exclusive lock above. Or, it + ** may mean that the pager was in the error-state when this + ** function was called and the journal file does not exist. + */ + if( !isOpen(pPager->jfd) ){ + sqlite3_vfs * const pVfs = pPager->pVfs; + int bExists; /* True if journal file exists */ + rc = sqlite3OsAccess( + pVfs, pPager->zJournal, SQLITE_ACCESS_EXISTS, &bExists); + if( rc==SQLITE_OK && bExists ){ + int fout = 0; + int f = SQLITE_OPEN_READWRITE|SQLITE_OPEN_MAIN_JOURNAL; + assert( !pPager->tempFile ); + rc = sqlite3OsOpen(pVfs, pPager->zJournal, pPager->jfd, f, &fout); + assert( rc!=SQLITE_OK || isOpen(pPager->jfd) ); + if( rc==SQLITE_OK && fout&SQLITE_OPEN_READONLY ){ + rc = SQLITE_CANTOPEN_BKPT; + sqlite3OsClose(pPager->jfd); + } + } + } /* Playback and delete the journal. Drop the database write ** lock and reacquire the read lock. Purge the cache before ** playing back the hot-journal so that we don't end up with ** an inconsistent cache. Sync the hot journal before playing @@ -36130,71 +48865,121 @@ ** it back since the process that crashed and left the hot journal ** probably did not sync it and we are required to always sync ** the journal before playing it back. */ if( isOpen(pPager->jfd) ){ + assert( rc==SQLITE_OK ); rc = pagerSyncHotJournal(pPager); if( rc==SQLITE_OK ){ rc = pager_playback(pPager, 1); - } - if( rc!=SQLITE_OK ){ - rc = pager_error(pPager, rc); - goto failed; - } - } - assert( (pPager->state==PAGER_SHARED) - || (pPager->exclusiveMode && pPager->state>PAGER_SHARED) + pPager->eState = PAGER_OPEN; + } + }else if( !pPager->exclusiveMode ){ + pagerUnlockDb(pPager, SHARED_LOCK); + } + + if( rc!=SQLITE_OK ){ + /* This branch is taken if an error occurs while trying to open + ** or roll back a hot-journal while holding an EXCLUSIVE lock. The + ** pager_unlock() routine will be called before returning to unlock + ** the file. If the unlock attempt fails, then Pager.eLock must be + ** set to UNKNOWN_LOCK (see the comment above the #define for + ** UNKNOWN_LOCK above for an explanation). + ** + ** In order to get pager_unlock() to do this, set Pager.eState to + ** PAGER_ERROR now. This is not actually counted as a transition + ** to ERROR state in the state diagram at the top of this file, + ** since we know that the same call to pager_unlock() will very + ** shortly transition the pager object to the OPEN state. 
Calling + ** assert_pager_state() would fail now, as it should not be possible + ** to be in ERROR state when there are zero outstanding page + ** references. + */ + pager_error(pPager, rc); + goto failed; + } + + assert( pPager->eState==PAGER_OPEN ); + assert( (pPager->eLock==SHARED_LOCK) + || (pPager->exclusiveMode && pPager->eLock>SHARED_LOCK) ); } - if( pPager->pBackup || sqlite3PcachePagecount(pPager->pPCache)>0 ){ - /* The shared-lock has just been acquired on the database file - ** and there are already pages in the cache (from a previous - ** read or write transaction). Check to see if the database - ** has been modified. If the database has changed, flush the - ** cache. + if( !pPager->tempFile && pPager->hasHeldSharedLock ){ + /* The shared-lock has just been acquired then check to + ** see if the database has been modified. If the database has changed, + ** flush the cache. The hasHeldSharedLock flag prevents this from + ** occurring on the very first access to a file, in order to save a + ** single unnecessary sqlite3OsRead() call at the start-up. ** - ** Database changes is detected by looking at 15 bytes beginning + ** Database changes are detected by looking at 15 bytes beginning ** at offset 24 into the file. The first 4 of these 16 bytes are ** a 32-bit counter that is incremented with each change. The ** other bytes change randomly with each file change when ** a codec is in use. ** ** There is a vanishingly small chance that a change will not be ** detected. The chance of an undetected change is so small that ** it can be neglected. */ - int nPage; + Pgno nPage = 0; char dbFileVers[sizeof(pPager->dbFileVers)]; - sqlite3PagerPagecount(pPager, &nPage); - if( pPager->errCode ){ - rc = pPager->errCode; - goto failed; - } + rc = pagerPagecount(pPager, &nPage); + if( rc ) goto failed; if( nPage>0 ){ IOTRACE(("CKVERS %p %d\n", pPager, sizeof(dbFileVers))); rc = sqlite3OsRead(pPager->fd, &dbFileVers, sizeof(dbFileVers), 24); - if( rc!=SQLITE_OK ){ + if( rc!=SQLITE_OK && rc!=SQLITE_IOERR_SHORT_READ ){ goto failed; } }else{ memset(dbFileVers, 0, sizeof(dbFileVers)); } if( memcmp(pPager->dbFileVers, dbFileVers, sizeof(dbFileVers))!=0 ){ pager_reset(pPager); + + /* Unmap the database file. It is possible that external processes + ** may have truncated the database file and then extended it back + ** to its original size while this process was not holding a lock. + ** In this case there may exist a Pager.pMap mapping that appears + ** to be the right size but is not actually valid. Avoid this + ** possibility by unmapping the db here. */ + if( USEFETCH(pPager) ){ + sqlite3OsUnfetch(pPager->fd, 0, 0); + } } } - assert( pPager->exclusiveMode || pPager->state==PAGER_SHARED ); + + /* If there is a WAL file in the file-system, open this database in WAL + ** mode. Otherwise, the following function call is a no-op. + */ + rc = pagerOpenWalIfPresent(pPager); +#ifndef SQLITE_OMIT_WAL + assert( pPager->pWal==0 || rc==SQLITE_OK ); +#endif + } + + if( pagerUseWal(pPager) ){ + assert( rc==SQLITE_OK ); + rc = pagerBeginReadTransaction(pPager); + } + + if( pPager->eState==PAGER_OPEN && rc==SQLITE_OK ){ + rc = pagerPagecount(pPager, &pPager->dbSize); } failed: if( rc!=SQLITE_OK ){ - /* pager_unlock() is a no-op for exclusive mode and in-memory databases. 
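/* Illustrative sketch (editorial aside, not part of this diff): the cache
** validation above compares the 16 bytes at offset 24 of the database
** header (Pager.dbFileVers), the first four of which are the big-endian
** "file change counter" defined by the SQLite file format.  This
** hypothetical helper reads that counter with plain stdio.
*/
#include <stdio.h>

static int demoReadChangeCounter(const char *zDbFile, unsigned *pCounter){
  unsigned char a[4];
  FILE *f = fopen(zDbFile, "rb");
  if( f==0 ) return 1;
  if( fseek(f, 24, SEEK_SET)!=0 || fread(a, 1, sizeof(a), f)!=sizeof(a) ){
    fclose(f);
    return 1;
  }
  fclose(f);
  *pCounter = ((unsigned)a[0]<<24) | ((unsigned)a[1]<<16)
            | ((unsigned)a[2]<<8)  |  (unsigned)a[3];
  return 0;
}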
*/ + assert( !MEMDB ); pager_unlock(pPager); + assert( pPager->eState==PAGER_OPEN ); + }else{ + pPager->eState = PAGER_READER; + pPager->hasHeldSharedLock = 1; } return rc; } /* @@ -36204,13 +48989,11 @@ ** Except, in locking_mode=EXCLUSIVE when there is nothing to in ** the rollback journal, the unlock is not performed and there is ** nothing to rollback, so this routine is a no-op. */ static void pagerUnlockIfUnused(Pager *pPager){ - if( (sqlite3PcacheRefCount(pPager->pPCache)==0) - && (!pPager->exclusiveMode || pPager->journalOff>0) - ){ + if( pPager->nMmapOut==0 && (sqlite3PcacheRefCount(pPager->pPCache)==0) ){ pagerUnlockAndRollback(pPager); } } /* @@ -36234,11 +49017,11 @@ ** requested page is not already stored in the cache, then no ** actual disk read occurs. In this case the memory image of the ** page is initialized to all zeros. ** ** If noContent is true, it means that we do not care about the contents -** of the page. This occurs in two seperate scenarios: +** of the page. This occurs in two scenarios: ** ** a) When reading a free-list leaf page from the database, and ** ** b) When a savepoint is being rolled back and we need to load ** a new page into the cache to be filled with the data read @@ -36261,76 +49044,134 @@ ** just returns 0. This routine acquires a read-lock the first time it ** has to go to disk, and could also playback an old journal if necessary. ** Since Lookup() never goes to disk, it never has to deal with locks ** or journal files. */ -SQLITE_PRIVATE int sqlite3PagerAcquire( +SQLITE_PRIVATE int sqlite3PagerGet( Pager *pPager, /* The pager open on the database file */ Pgno pgno, /* Page number to fetch */ DbPage **ppPage, /* Write a pointer to the page here */ - int noContent /* Do not bother reading content from disk if true */ + int flags /* PAGER_GET_XXX flags */ ){ - int rc; - PgHdr *pPg; + int rc = SQLITE_OK; + PgHdr *pPg = 0; + u32 iFrame = 0; /* Frame to read from WAL file */ + const int noContent = (flags & PAGER_GET_NOCONTENT); - assert( assert_pager_state(pPager) ); - assert( pPager->state>PAGER_UNLOCK ); + /* It is acceptable to use a read-only (mmap) page for any page except + ** page 1 if there is no write-transaction open or the ACQUIRE_READONLY + ** flag was specified by the caller. And so long as the db is not a + ** temporary or in-memory database. */ + const int bMmapOk = (pgno>1 && USEFETCH(pPager) + && (pPager->eState==PAGER_READER || (flags & PAGER_GET_READONLY)) +#ifdef SQLITE_HAS_CODEC + && pPager->xCodec==0 +#endif + ); - if( pgno==0 ){ + /* Optimization note: Adding the "pgno<=1" term before "pgno==0" here + ** allows the compiler optimizer to reuse the results of the "pgno>1" + ** test in the previous statement, and avoid testing pgno==0 in the + ** common case where pgno is large. */ + if( pgno<=1 && pgno==0 ){ return SQLITE_CORRUPT_BKPT; } + assert( pPager->eState>=PAGER_READER ); + assert( assert_pager_state(pPager) ); + assert( noContent==0 || bMmapOk==0 ); + + assert( pPager->hasHeldSharedLock==1 ); /* If the pager is in the error state, return an error immediately. ** Otherwise, request the page from the PCache layer. 
*/ - if( pPager->errCode!=SQLITE_OK && pPager->errCode!=SQLITE_FULL ){ + if( pPager->errCode!=SQLITE_OK ){ rc = pPager->errCode; }else{ - rc = sqlite3PcacheFetch(pPager->pPCache, pgno, 1, ppPage); + if( bMmapOk && pagerUseWal(pPager) ){ + rc = sqlite3WalFindFrame(pPager->pWal, pgno, &iFrame); + if( rc!=SQLITE_OK ) goto pager_acquire_err; + } + + if( bMmapOk && iFrame==0 ){ + void *pData = 0; + + rc = sqlite3OsFetch(pPager->fd, + (i64)(pgno-1) * pPager->pageSize, pPager->pageSize, &pData + ); + + if( rc==SQLITE_OK && pData ){ + if( pPager->eState>PAGER_READER ){ + pPg = sqlite3PagerLookup(pPager, pgno); + } + if( pPg==0 ){ + rc = pagerAcquireMapPage(pPager, pgno, pData, &pPg); + }else{ + sqlite3OsUnfetch(pPager->fd, (i64)(pgno-1)*pPager->pageSize, pData); + } + if( pPg ){ + assert( rc==SQLITE_OK ); + *ppPage = pPg; + return SQLITE_OK; + } + } + if( rc!=SQLITE_OK ){ + goto pager_acquire_err; + } + } + + { + sqlite3_pcache_page *pBase; + pBase = sqlite3PcacheFetch(pPager->pPCache, pgno, 3); + if( pBase==0 ){ + rc = sqlite3PcacheFetchStress(pPager->pPCache, pgno, &pBase); + if( rc!=SQLITE_OK ) goto pager_acquire_err; + if( pBase==0 ){ + pPg = *ppPage = 0; + rc = SQLITE_NOMEM; + goto pager_acquire_err; + } + } + pPg = *ppPage = sqlite3PcacheFetchFinish(pPager->pPCache, pgno, pBase); + assert( pPg!=0 ); + } } if( rc!=SQLITE_OK ){ /* Either the call to sqlite3PcacheFetch() returned an error or the ** pager was already in the error-state when this function was called. ** Set pPg to 0 and jump to the exception handler. */ pPg = 0; goto pager_acquire_err; } - assert( (*ppPage)->pgno==pgno ); - assert( (*ppPage)->pPager==pPager || (*ppPage)->pPager==0 ); + assert( pPg==(*ppPage) ); + assert( pPg->pgno==pgno ); + assert( pPg->pPager==pPager || pPg->pPager==0 ); - if( (*ppPage)->pPager && !noContent ){ + if( pPg->pPager && !noContent ){ /* In this case the pcache already contains an initialized copy of ** the page. Return without further ado. */ assert( pgno<=PAGER_MAX_PGNO && pgno!=PAGER_MJ_PGNO(pPager) ); - PAGER_INCR(pPager->nHit); + pPager->aStat[PAGER_STAT_HIT]++; return SQLITE_OK; }else{ /* The pager cache has created a new page. Its content needs to ** be initialized. */ - int nMax; - PAGER_INCR(pPager->nMiss); - pPg = *ppPage; pPg->pPager = pPager; /* The maximum page number is 2^31. Return SQLITE_CORRUPT if a page ** number greater than this, or the unused locking-page, is requested. */ if( pgno>PAGER_MAX_PGNO || pgno==PAGER_MJ_PGNO(pPager) ){ rc = SQLITE_CORRUPT_BKPT; goto pager_acquire_err; } - rc = sqlite3PagerPagecount(pPager, &nMax); - if( rc!=SQLITE_OK ){ - goto pager_acquire_err; - } - - if( MEMDB || nMax<(int)pgno || noContent || !isOpen(pPager->fd) ){ + if( MEMDB || pPager->dbSize<pgno || noContent || !isOpen(pPager->fd) ){ if( pgno>pPager->mxPgno ){ - rc = SQLITE_FULL; - goto pager_acquire_err; + rc = SQLITE_FULL; + goto pager_acquire_err; } if( noContent ){ /* Failure to set the bits in the InJournal bit-vectors is benign. ** It merely means that we might do some extra work to journal a ** page that does not need to be journaled. 
Nevertheless, be sure @@ -36347,19 +49188,22 @@ sqlite3EndBenignMalloc(); } memset(pPg->pData, 0, pPager->pageSize); IOTRACE(("ZERO %p %d\n", pPager, pgno)); }else{ + if( pagerUseWal(pPager) && bMmapOk==0 ){ + rc = sqlite3WalFindFrame(pPager->pWal, pgno, &iFrame); + if( rc!=SQLITE_OK ) goto pager_acquire_err; + } assert( pPg->pPager==pPager ); - rc = readDbPage(pPg); + pPager->aStat[PAGER_STAT_MISS]++; + rc = readDbPage(pPg, iFrame); if( rc!=SQLITE_OK ){ goto pager_acquire_err; } } -#ifdef SQLITE_CHECK_PAGES - pPg->pageHash = pager_pagehash(pPg); -#endif + pager_set_pagehash(pPg); } return SQLITE_OK; pager_acquire_err: @@ -36374,28 +49218,27 @@ } /* ** Acquire a page if it is already in the in-memory cache. Do ** not read the page from disk. Return a pointer to the page, -** or 0 if the page is not in cache. Also, return 0 if the -** pager is in PAGER_UNLOCK state when this function is called, -** or if the pager is in an error state other than SQLITE_FULL. +** or 0 if the page is not in cache. ** ** See also sqlite3PagerGet(). The difference between this routine ** and sqlite3PagerGet() is that _get() will go to the disk and read ** in the page if the page is not already in cache. This routine ** returns NULL if the page is not in cache or if a disk I/O error ** has ever happened. */ SQLITE_PRIVATE DbPage *sqlite3PagerLookup(Pager *pPager, Pgno pgno){ - PgHdr *pPg = 0; + sqlite3_pcache_page *pPage; assert( pPager!=0 ); assert( pgno!=0 ); assert( pPager->pPCache!=0 ); - assert( pPager->state > PAGER_UNLOCK ); - sqlite3PcacheFetch(pPager->pPCache, pgno, 0, &pPg); - return pPg; + pPage = sqlite3PcacheFetch(pPager->pPCache, pgno, 0); + assert( pPage==0 || pPager->hasHeldSharedLock ); + if( pPage==0 ) return 0; + return sqlite3PcacheFetchFinish(pPager->pPCache, pgno, pPage); } /* ** Release a page reference. ** @@ -36402,37 +49245,23 @@ ** If the number of references to the page drop to zero, then the ** page is added to the LRU list. When all references to all pages ** are released, a rollback occurs and the lock on the database is ** removed. */ -SQLITE_PRIVATE void sqlite3PagerUnref(DbPage *pPg){ - if( pPg ){ - Pager *pPager = pPg->pPager; +SQLITE_PRIVATE void sqlite3PagerUnrefNotNull(DbPage *pPg){ + Pager *pPager; + assert( pPg!=0 ); + pPager = pPg->pPager; + if( pPg->flags & PGHDR_MMAP ){ + pagerReleaseMapPage(pPg); + }else{ sqlite3PcacheRelease(pPg); - pagerUnlockIfUnused(pPager); - } -} - -/* -** If the main journal file has already been opened, ensure that the -** sub-journal file is open too. If the main journal is not open, -** this function is a no-op. -** -** SQLITE_OK is returned if everything goes according to plan. -** An SQLITE_IOERR_XXX error code is returned if a call to -** sqlite3OsOpen() fails. -*/ -static int openSubJournal(Pager *pPager){ - int rc = SQLITE_OK; - if( isOpen(pPager->jfd) && !isOpen(pPager->sjfd) ){ - if( pPager->journalMode==PAGER_JOURNALMODE_MEMORY || pPager->subjInMemory ){ - sqlite3MemJournalOpen(pPager->sjfd); - }else{ - rc = pagerOpentemp(pPager, pPager->sjfd, SQLITE_OPEN_SUBJOURNAL); - } - } - return rc; + } + pagerUnlockIfUnused(pPager); +} +SQLITE_PRIVATE void sqlite3PagerUnref(DbPage *pPg){ + if( pPg ) sqlite3PagerUnrefNotNull(pPg); } /* ** This function is called at the start of every write transaction. ** There must already be a RESERVED or EXCLUSIVE lock on the database @@ -36455,76 +49284,77 @@ ** SQLITE_NOMEM if the attempt to allocate Pager.pInJournal fails, or ** an IO error code if opening or writing the journal file fails. 
*/ static int pager_open_journal(Pager *pPager){ int rc = SQLITE_OK; /* Return code */ - int nPage; /* Size of database file */ sqlite3_vfs * const pVfs = pPager->pVfs; /* Local cache of vfs pointer */ - assert( pPager->state>=PAGER_RESERVED ); - assert( pPager->useJournal ); - assert( pPager->journalMode!=PAGER_JOURNALMODE_OFF ); + assert( pPager->eState==PAGER_WRITER_LOCKED ); + assert( assert_pager_state(pPager) ); assert( pPager->pInJournal==0 ); /* If already in the error state, this function is a no-op. But on ** the other hand, this routine is never called if we are already in ** an error state. */ if( NEVER(pPager->errCode) ) return pPager->errCode; - testcase( pPager->dbSizeValid==0 ); - rc = sqlite3PagerPagecount(pPager, &nPage); - if( rc ) return rc; - pPager->pInJournal = sqlite3BitvecCreate(nPage); - if( pPager->pInJournal==0 ){ - return SQLITE_NOMEM; - } - - /* Open the journal file if it is not already open. */ - if( !isOpen(pPager->jfd) ){ - if( pPager->journalMode==PAGER_JOURNALMODE_MEMORY ){ - sqlite3MemJournalOpen(pPager->jfd); - }else{ - const int flags = /* VFS flags to open journal file */ - SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE| - (pPager->tempFile ? - (SQLITE_OPEN_DELETEONCLOSE|SQLITE_OPEN_TEMP_JOURNAL): - (SQLITE_OPEN_MAIN_JOURNAL) - ); + if( !pagerUseWal(pPager) && pPager->journalMode!=PAGER_JOURNALMODE_OFF ){ + pPager->pInJournal = sqlite3BitvecCreate(pPager->dbSize); + if( pPager->pInJournal==0 ){ + return SQLITE_NOMEM; + } + + /* Open the journal file if it is not already open. */ + if( !isOpen(pPager->jfd) ){ + if( pPager->journalMode==PAGER_JOURNALMODE_MEMORY ){ + sqlite3MemJournalOpen(pPager->jfd); + }else{ + const int flags = /* VFS flags to open journal file */ + SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE| + (pPager->tempFile ? + (SQLITE_OPEN_DELETEONCLOSE|SQLITE_OPEN_TEMP_JOURNAL): + (SQLITE_OPEN_MAIN_JOURNAL) + ); + + /* Verify that the database still has the same name as it did when + ** it was originally opened. */ + rc = databaseIsUnmoved(pPager); + if( rc==SQLITE_OK ){ #ifdef SQLITE_ENABLE_ATOMIC_WRITE - rc = sqlite3JournalOpen( - pVfs, pPager->zJournal, pPager->jfd, flags, jrnlBufferSize(pPager) - ); + rc = sqlite3JournalOpen( + pVfs, pPager->zJournal, pPager->jfd, flags, jrnlBufferSize(pPager) + ); #else - rc = sqlite3OsOpen(pVfs, pPager->zJournal, pPager->jfd, flags, 0); + rc = sqlite3OsOpen(pVfs, pPager->zJournal, pPager->jfd, flags, 0); #endif - } - assert( rc!=SQLITE_OK || isOpen(pPager->jfd) ); - } - - - /* Write the first journal header to the journal file and open - ** the sub-journal if necessary. - */ - if( rc==SQLITE_OK ){ - /* TODO: Check if all of these are really required. */ - pPager->dbOrigSize = pPager->dbSize; - pPager->journalStarted = 0; - pPager->needSync = 0; - pPager->nRec = 0; - pPager->journalOff = 0; - pPager->setMaster = 0; - pPager->journalHdr = 0; - rc = writeJournalHdr(pPager); - } - if( rc==SQLITE_OK && pPager->nSavepoint ){ - rc = openSubJournal(pPager); + } + } + assert( rc!=SQLITE_OK || isOpen(pPager->jfd) ); + } + + + /* Write the first journal header to the journal file and open + ** the sub-journal if necessary. + */ + if( rc==SQLITE_OK ){ + /* TODO: Check if all of these are really required. 
*/ + pPager->nRec = 0; + pPager->journalOff = 0; + pPager->setMaster = 0; + pPager->journalHdr = 0; + rc = writeJournalHdr(pPager); + } } if( rc!=SQLITE_OK ){ sqlite3BitvecDestroy(pPager->pInJournal); pPager->pInJournal = 0; + }else{ + assert( pPager->eState==PAGER_WRITER_LOCKED ); + pPager->eState = PAGER_WRITER_CACHEMOD; } + return rc; } /* ** Begin a write-transaction on the specified pager object. If a @@ -36533,18 +49363,10 @@ ** If the exFlag argument is false, then acquire at least a RESERVED ** lock on the database file. If exFlag is true, then acquire at least ** an EXCLUSIVE lock. If such a lock is already held, no locking ** functions need be called. ** -** If this is not a temporary or in-memory file and, the journal file is -** opened if it has not been already. For a temporary file, the opening -** of the journal file is deferred until there is an actual need to -** write to the journal. TODO: Why handle temporary files differently? -** -** If the journal file is opened (or if it is already open), then a -** journal-header is written to the start of it. -** ** If the subjInMemory argument is non-zero, then any sub-journal opened ** within this transaction will be opened as an in-memory file. This ** has no effect if the sub-journal is already opened (as it may be when ** running in exclusive mode) or if the transaction does not require a ** sub-journal. If the subjInMemory argument is zero, then any required @@ -36551,57 +49373,124 @@ ** sub-journal is implemented in-memory if pPager is an in-memory database, ** or using a temporary file otherwise. */ SQLITE_PRIVATE int sqlite3PagerBegin(Pager *pPager, int exFlag, int subjInMemory){ int rc = SQLITE_OK; - assert( pPager->state!=PAGER_UNLOCK ); + + if( pPager->errCode ) return pPager->errCode; + assert( pPager->eState>=PAGER_READER && pPager->eState<PAGER_ERROR ); pPager->subjInMemory = (u8)subjInMemory; - if( pPager->state==PAGER_SHARED ){ + + if( ALWAYS(pPager->eState==PAGER_READER) ){ assert( pPager->pInJournal==0 ); - assert( !MEMDB && !pPager->tempFile ); - - /* Obtain a RESERVED lock on the database file. If the exFlag parameter - ** is true, then immediately upgrade this to an EXCLUSIVE lock. The - ** busy-handler callback can be used when upgrading to the EXCLUSIVE - ** lock, but not when obtaining the RESERVED lock. - */ - rc = sqlite3OsLock(pPager->fd, RESERVED_LOCK); - if( rc==SQLITE_OK ){ - pPager->state = PAGER_RESERVED; - if( exFlag ){ + + if( pagerUseWal(pPager) ){ + /* If the pager is configured to use locking_mode=exclusive, and an + ** exclusive lock on the database is not already held, obtain it now. + */ + if( pPager->exclusiveMode && sqlite3WalExclusiveMode(pPager->pWal, -1) ){ + rc = pagerLockDb(pPager, EXCLUSIVE_LOCK); + if( rc!=SQLITE_OK ){ + return rc; + } + (void)sqlite3WalExclusiveMode(pPager->pWal, 1); + } + + /* Grab the write lock on the log file. If successful, upgrade to + ** PAGER_RESERVED state. Otherwise, return an error code to the caller. + ** The busy-handler is not invoked if another connection already + ** holds the write-lock. If possible, the upper layer will call it. + */ + rc = sqlite3WalBeginWriteTransaction(pPager->pWal); + }else{ + /* Obtain a RESERVED lock on the database file. If the exFlag parameter + ** is true, then immediately upgrade this to an EXCLUSIVE lock. The + ** busy-handler callback can be used when upgrading to the EXCLUSIVE + ** lock, but not when obtaining the RESERVED lock. 
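/* Illustrative sketch (editorial aside, not part of this diff): at the SQL
** level, a rollback-journal write transaction that starts out with a
** RESERVED lock is what "BEGIN IMMEDIATE" requests, and "BEGIN EXCLUSIVE"
** asks for the stronger lock up front.  Setting a busy timeout lets lock
** acquisition retry for a while instead of failing immediately with
** SQLITE_BUSY; demoBeginImmediate is a hypothetical helper.
*/
#include "sqlite3.h"

static int demoBeginImmediate(sqlite3 *db){
  int rc = sqlite3_busy_timeout(db, 2000);   /* Retry for up to 2 seconds */
  if( rc==SQLITE_OK ){
    rc = sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0);
  }
  return rc;
}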
+ */ + rc = pagerLockDb(pPager, RESERVED_LOCK); + if( rc==SQLITE_OK && exFlag ){ rc = pager_wait_on_lock(pPager, EXCLUSIVE_LOCK); } } - /* No need to open the journal file at this time. It will be - ** opened before it is written to. If we defer opening the journal, - ** we might save the work of creating a file if the transaction - ** ends up being a no-op. - */ - }else if( isOpen(pPager->jfd) && pPager->journalOff==0 ){ - /* This happens when the pager was in exclusive-access mode the last - ** time a (read or write) transaction was successfully concluded - ** by this connection. Instead of deleting the journal file it was - ** kept open and either was truncated to 0 bytes or its header was - ** overwritten with zeros. - */ - assert( pPager->nRec==0 ); - assert( pPager->dbOrigSize==0 ); - assert( pPager->pInJournal==0 ); - rc = pager_open_journal(pPager); + if( rc==SQLITE_OK ){ + /* Change to WRITER_LOCKED state. + ** + ** WAL mode sets Pager.eState to PAGER_WRITER_LOCKED or CACHEMOD + ** when it has an open transaction, but never to DBMOD or FINISHED. + ** This is because in those states the code to roll back savepoint + ** transactions may copy data from the sub-journal into the database + ** file as well as into the page cache. Which would be incorrect in + ** WAL mode. + */ + pPager->eState = PAGER_WRITER_LOCKED; + pPager->dbHintSize = pPager->dbSize; + pPager->dbFileSize = pPager->dbSize; + pPager->dbOrigSize = pPager->dbSize; + pPager->journalOff = 0; + } + + assert( rc==SQLITE_OK || pPager->eState==PAGER_READER ); + assert( rc!=SQLITE_OK || pPager->eState==PAGER_WRITER_LOCKED ); + assert( assert_pager_state(pPager) ); } PAGERTRACE(("TRANSACTION %d\n", PAGERID(pPager))); - if( rc!=SQLITE_OK ){ - assert( !pPager->dbModified ); - /* Ignore any IO error that occurs within pager_end_transaction(). The - ** purpose of this call is to reset the internal state of the pager - ** sub-system. It doesn't matter if the journal-file is not properly - ** finalized at this point (since it is not a valid journal file anyway). - */ - pager_end_transaction(pPager, 0); - } + return rc; +} + +/* +** Write page pPg onto the end of the rollback journal. +*/ +static SQLITE_NOINLINE int pagerAddPageToRollbackJournal(PgHdr *pPg){ + Pager *pPager = pPg->pPager; + int rc; + u32 cksum; + char *pData2; + i64 iOff = pPager->journalOff; + + /* We should never write to the journal file the page that + ** contains the database locks. The following assert verifies + ** that we do not. */ + assert( pPg->pgno!=PAGER_MJ_PGNO(pPager) ); + + assert( pPager->journalHdr<=pPager->journalOff ); + CODEC2(pPager, pPg->pData, pPg->pgno, 7, return SQLITE_NOMEM, pData2); + cksum = pager_cksum(pPager, (u8*)pData2); + + /* Even if an IO or diskfull error occurs while journalling the + ** page in the block above, set the need-sync flag for the page. + ** Otherwise, when the transaction is rolled back, the logic in + ** playback_one_page() will think that the page needs to be restored + ** in the database file. And if an IO error occurs while doing so, + ** then corruption may follow. 
+ */ + pPg->flags |= PGHDR_NEED_SYNC; + + rc = write32bits(pPager->jfd, iOff, pPg->pgno); + if( rc!=SQLITE_OK ) return rc; + rc = sqlite3OsWrite(pPager->jfd, pData2, pPager->pageSize, iOff+4); + if( rc!=SQLITE_OK ) return rc; + rc = write32bits(pPager->jfd, iOff+pPager->pageSize+4, cksum); + if( rc!=SQLITE_OK ) return rc; + + IOTRACE(("JOUT %p %d %lld %d\n", pPager, pPg->pgno, + pPager->journalOff, pPager->pageSize)); + PAGER_INCR(sqlite3_pager_writej_count); + PAGERTRACE(("JOURNAL %d page %d needSync=%d hash(%08x)\n", + PAGERID(pPager), pPg->pgno, + ((pPg->flags&PGHDR_NEED_SYNC)?1:0), pager_pagehash(pPg))); + + pPager->journalOff += 8 + pPager->pageSize; + pPager->nRec++; + assert( pPager->pInJournal!=0 ); + rc = sqlite3BitvecSet(pPager->pInJournal, pPg->pgno); + testcase( rc==SQLITE_NOMEM ); + assert( rc==SQLITE_OK || rc==SQLITE_NOMEM ); + rc |= addToSavepointBitvecs(pPager, pPg->pgno); + assert( rc==SQLITE_OK || rc==SQLITE_NOMEM ); return rc; } /* ** Mark a single data page as writeable. The page is written into the @@ -36609,151 +49498,180 @@ ** one of the journals, the corresponding bit is set in the ** Pager.pInJournal bitvec and the PagerSavepoint.pInSavepoint bitvecs ** of any open savepoints as appropriate. */ static int pager_write(PgHdr *pPg){ - void *pData = pPg->pData; Pager *pPager = pPg->pPager; int rc = SQLITE_OK; - /* This routine is not called unless a transaction has already been - ** started. - */ - assert( pPager->state>=PAGER_RESERVED ); - - /* If an error has been previously detected, report the same error - ** again. - */ - if( NEVER(pPager->errCode) ) return pPager->errCode; - - /* Higher-level routines never call this function if database is not - ** writable. But check anyway, just for robustness. */ - if( NEVER(pPager->readOnly) ) return SQLITE_PERM; - - assert( !pPager->setMaster ); - + /* This routine is not called unless a write-transaction has already + ** been started. The journal file may or may not be open at this point. + ** It is never called in the ERROR state. + */ + assert( pPager->eState==PAGER_WRITER_LOCKED + || pPager->eState==PAGER_WRITER_CACHEMOD + || pPager->eState==PAGER_WRITER_DBMOD + ); + assert( assert_pager_state(pPager) ); + assert( pPager->errCode==0 ); + assert( pPager->readOnly==0 ); CHECK_PAGE(pPg); - /* Mark the page as dirty. If the page has already been written - ** to the journal then we can return right away. + /* The journal file needs to be opened. Higher level routines have already + ** obtained the necessary locks to begin the write-transaction, but the + ** rollback journal might not yet be open. Open it now if this is the case. + ** + ** This is done before calling sqlite3PcacheMakeDirty() on the page. + ** Otherwise, if it were done after calling sqlite3PcacheMakeDirty(), then + ** an error might occur and the pager would end up in WRITER_LOCKED state + ** with pages marked as dirty in the cache. */ + if( pPager->eState==PAGER_WRITER_LOCKED ){ + rc = pager_open_journal(pPager); + if( rc!=SQLITE_OK ) return rc; + } + assert( pPager->eState>=PAGER_WRITER_CACHEMOD ); + assert( assert_pager_state(pPager) ); + + /* Mark the page that is about to be modified as dirty. */ sqlite3PcacheMakeDirty(pPg); - if( pageInJournal(pPg) && !subjRequiresPage(pPg) ){ - pPager->dbModified = 1; - }else{ - - /* If we get this far, it means that the page needs to be - ** written to the transaction journal or the ckeckpoint journal - ** or both. 
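/* For reference, the rollback-journal record appended by
** pagerAddPageToRollbackJournal() above has this layout (offsets are
** relative to the iOff value sampled from Pager.journalOff on entry):
**
**      iOff + 0                4-byte big-endian page number
**      iOff + 4                pageSize bytes of page content (pData2)
**      iOff + 4 + pageSize     4-byte checksum from pager_cksum()
**
** which is why Pager.journalOff advances by 8 + pageSize per record. */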
- ** - ** Higher level routines should have already started a transaction, - ** which means they have acquired the necessary locks but the rollback - ** journal might not yet be open. - */ - rc = sqlite3PagerBegin(pPager, 0, pPager->subjInMemory); - if( rc!=SQLITE_OK ){ - return rc; - } - if( !isOpen(pPager->jfd) && pPager->journalMode!=PAGER_JOURNALMODE_OFF ){ - assert( pPager->useJournal ); - rc = pager_open_journal(pPager); - if( rc!=SQLITE_OK ) return rc; - } - pPager->dbModified = 1; - - /* The transaction journal now exists and we have a RESERVED or an - ** EXCLUSIVE lock on the main database file. Write the current page to - ** the transaction journal if it is not there already. - */ - if( !pageInJournal(pPg) && isOpen(pPager->jfd) ){ - if( pPg->pgno<=pPager->dbOrigSize ){ - u32 cksum; - char *pData2; - - /* We should never write to the journal file the page that - ** contains the database locks. The following assert verifies - ** that we do not. */ - assert( pPg->pgno!=PAGER_MJ_PGNO(pPager) ); - - assert( pPager->journalHdr <= pPager->journalOff ); - CODEC2(pPager, pData, pPg->pgno, 7, return SQLITE_NOMEM, pData2); - cksum = pager_cksum(pPager, (u8*)pData2); - rc = write32bits(pPager->jfd, pPager->journalOff, pPg->pgno); - if( rc==SQLITE_OK ){ - rc = sqlite3OsWrite(pPager->jfd, pData2, pPager->pageSize, - pPager->journalOff + 4); - pPager->journalOff += pPager->pageSize+4; - } - if( rc==SQLITE_OK ){ - rc = write32bits(pPager->jfd, pPager->journalOff, cksum); - pPager->journalOff += 4; - } - IOTRACE(("JOUT %p %d %lld %d\n", pPager, pPg->pgno, - pPager->journalOff, pPager->pageSize)); - PAGER_INCR(sqlite3_pager_writej_count); - PAGERTRACE(("JOURNAL %d page %d needSync=%d hash(%08x)\n", - PAGERID(pPager), pPg->pgno, - ((pPg->flags&PGHDR_NEED_SYNC)?1:0), pager_pagehash(pPg))); - - /* Even if an IO or diskfull error occurred while journalling the - ** page in the block above, set the need-sync flag for the page. - ** Otherwise, when the transaction is rolled back, the logic in - ** playback_one_page() will think that the page needs to be restored - ** in the database file. And if an IO error occurs while doing so, - ** then corruption may follow. - */ - if( !pPager->noSync ){ - pPg->flags |= PGHDR_NEED_SYNC; - pPager->needSync = 1; - } - - /* An error has occurred writing to the journal file. The - ** transaction will be rolled back by the layer above. - */ - if( rc!=SQLITE_OK ){ - return rc; - } - - pPager->nRec++; - assert( pPager->pInJournal!=0 ); - rc = sqlite3BitvecSet(pPager->pInJournal, pPg->pgno); - testcase( rc==SQLITE_NOMEM ); - assert( rc==SQLITE_OK || rc==SQLITE_NOMEM ); - rc |= addToSavepointBitvecs(pPager, pPg->pgno); - if( rc!=SQLITE_OK ){ - assert( rc==SQLITE_NOMEM ); - return rc; - } - }else{ - if( !pPager->journalStarted && !pPager->noSync ){ - pPg->flags |= PGHDR_NEED_SYNC; - pPager->needSync = 1; - } - PAGERTRACE(("APPEND %d page %d needSync=%d\n", - PAGERID(pPager), pPg->pgno, - ((pPg->flags&PGHDR_NEED_SYNC)?1:0))); - } - } - - /* If the statement journal is open and the page is not in it, - ** then write the current page to the statement journal. Note that - ** the statement journal format differs from the standard journal format - ** in that it omits the checksums and the header. - */ - if( subjRequiresPage(pPg) ){ - rc = subjournalPage(pPg); - } - } - - /* Update the database size and return. 
- */ - assert( pPager->state>=PAGER_SHARED ); + + /* If a rollback journal is in use, them make sure the page that is about + ** to change is in the rollback journal, or if the page is a new page off + ** then end of the file, make sure it is marked as PGHDR_NEED_SYNC. + */ + assert( (pPager->pInJournal!=0) == isOpen(pPager->jfd) ); + if( pPager->pInJournal!=0 + && sqlite3BitvecTestNotNull(pPager->pInJournal, pPg->pgno)==0 + ){ + assert( pagerUseWal(pPager)==0 ); + if( pPg->pgno<=pPager->dbOrigSize ){ + rc = pagerAddPageToRollbackJournal(pPg); + if( rc!=SQLITE_OK ){ + return rc; + } + }else{ + if( pPager->eState!=PAGER_WRITER_DBMOD ){ + pPg->flags |= PGHDR_NEED_SYNC; + } + PAGERTRACE(("APPEND %d page %d needSync=%d\n", + PAGERID(pPager), pPg->pgno, + ((pPg->flags&PGHDR_NEED_SYNC)?1:0))); + } + } + + /* The PGHDR_DIRTY bit is set above when the page was added to the dirty-list + ** and before writing the page into the rollback journal. Wait until now, + ** after the page has been successfully journalled, before setting the + ** PGHDR_WRITEABLE bit that indicates that the page can be safely modified. + */ + pPg->flags |= PGHDR_WRITEABLE; + + /* If the statement journal is open and the page is not in it, + ** then write the page into the statement journal. + */ + if( pPager->nSavepoint>0 ){ + rc = subjournalPageIfRequired(pPg); + } + + /* Update the database size and return. */ if( pPager->dbSize<pPg->pgno ){ pPager->dbSize = pPg->pgno; } return rc; } + +/* +** This is a variant of sqlite3PagerWrite() that runs when the sector size +** is larger than the page size. SQLite makes the (reasonable) assumption that +** all bytes of a sector are written together by hardware. Hence, all bytes of +** a sector need to be journalled in case of a power loss in the middle of +** a write. +** +** Usually, the sector size is less than or equal to the page size, in which +** case pages can be individually written. This routine only runs in the +** exceptional case where the page size is smaller than the sector size. +*/ +static SQLITE_NOINLINE int pagerWriteLargeSector(PgHdr *pPg){ + int rc = SQLITE_OK; /* Return code */ + Pgno nPageCount; /* Total number of pages in database file */ + Pgno pg1; /* First page of the sector pPg is located on. */ + int nPage = 0; /* Number of pages starting at pg1 to journal */ + int ii; /* Loop counter */ + int needSync = 0; /* True if any page has PGHDR_NEED_SYNC */ + Pager *pPager = pPg->pPager; /* The pager that owns pPg */ + Pgno nPagePerSector = (pPager->sectorSize/pPager->pageSize); + + /* Set the doNotSpill NOSYNC bit to 1. This is because we cannot allow + ** a journal header to be written between the pages journaled by + ** this function. + */ + assert( !MEMDB ); + assert( (pPager->doNotSpill & SPILLFLAG_NOSYNC)==0 ); + pPager->doNotSpill |= SPILLFLAG_NOSYNC; + + /* This trick assumes that both the page-size and sector-size are + ** an integer power of 2. It sets variable pg1 to the identifier + ** of the first page of the sector pPg is located on. 
+ */ + pg1 = ((pPg->pgno-1) & ~(nPagePerSector-1)) + 1; + + nPageCount = pPager->dbSize; + if( pPg->pgno>nPageCount ){ + nPage = (pPg->pgno - pg1)+1; + }else if( (pg1+nPagePerSector-1)>nPageCount ){ + nPage = nPageCount+1-pg1; + }else{ + nPage = nPagePerSector; + } + assert(nPage>0); + assert(pg1<=pPg->pgno); + assert((pg1+nPage)>pPg->pgno); + + for(ii=0; ii<nPage && rc==SQLITE_OK; ii++){ + Pgno pg = pg1+ii; + PgHdr *pPage; + if( pg==pPg->pgno || !sqlite3BitvecTest(pPager->pInJournal, pg) ){ + if( pg!=PAGER_MJ_PGNO(pPager) ){ + rc = sqlite3PagerGet(pPager, pg, &pPage, 0); + if( rc==SQLITE_OK ){ + rc = pager_write(pPage); + if( pPage->flags&PGHDR_NEED_SYNC ){ + needSync = 1; + } + sqlite3PagerUnrefNotNull(pPage); + } + } + }else if( (pPage = sqlite3PagerLookup(pPager, pg))!=0 ){ + if( pPage->flags&PGHDR_NEED_SYNC ){ + needSync = 1; + } + sqlite3PagerUnrefNotNull(pPage); + } + } + + /* If the PGHDR_NEED_SYNC flag is set for any of the nPage pages + ** starting at pg1, then it needs to be set for all of them. Because + ** writing to any of these nPage pages may damage the others, the + ** journal file must contain sync()ed copies of all of them + ** before any of them can be written out to the database file. + */ + if( rc==SQLITE_OK && needSync ){ + assert( !MEMDB ); + for(ii=0; ii<nPage; ii++){ + PgHdr *pPage = sqlite3PagerLookup(pPager, pg1+ii); + if( pPage ){ + pPage->flags |= PGHDR_NEED_SYNC; + sqlite3PagerUnrefNotNull(pPage); + } + } + } + + assert( (pPager->doNotSpill & SPILLFLAG_NOSYNC)!=0 ); + pPager->doNotSpill &= ~SPILLFLAG_NOSYNC; + return rc; +} /* ** Mark a data page as writeable. This routine must be called before ** making changes to a page. The caller must check the return value ** of this function and be careful not to change any page data unless @@ -36765,107 +49683,35 @@ ** must have been written to the journal file before returning. ** ** If an error occurs, SQLITE_NOMEM or an IO error code is returned ** as appropriate. Otherwise, SQLITE_OK. */ -SQLITE_PRIVATE int sqlite3PagerWrite(DbPage *pDbPage){ - int rc = SQLITE_OK; - - PgHdr *pPg = pDbPage; +SQLITE_PRIVATE int sqlite3PagerWrite(PgHdr *pPg){ Pager *pPager = pPg->pPager; - Pgno nPagePerSector = (pPager->sectorSize/pPager->pageSize); - - if( nPagePerSector>1 ){ - Pgno nPageCount; /* Total number of pages in database file */ - Pgno pg1; /* First page of the sector pPg is located on. */ - int nPage; /* Number of pages starting at pg1 to journal */ - int ii; /* Loop counter */ - int needSync = 0; /* True if any page has PGHDR_NEED_SYNC */ - - /* Set the doNotSync flag to 1. This is because we cannot allow a journal - ** header to be written between the pages journaled by this function. - */ - assert( !MEMDB ); - assert( pPager->doNotSync==0 ); - pPager->doNotSync = 1; - - /* This trick assumes that both the page-size and sector-size are - ** an integer power of 2. It sets variable pg1 to the identifier - ** of the first page of the sector pPg is located on. 
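/* A stand-alone restatement of the alignment arithmetic used by
** pagerWriteLargeSector() above, for illustration only.  It assumes, as
** that code does, that both the page size and the sector size are
** integer powers of two.
*/
static unsigned int exampleFirstPageOfSector(
  unsigned int pgno,            /* Page number being written */
  unsigned int nPagePerSector   /* sectorSize / pageSize */
){
  return ((pgno-1) & ~(nPagePerSector-1)) + 1;
}
/* E.g. with pageSize=1024 and sectorSize=4096, nPagePerSector is 4 and
** exampleFirstPageOfSector(7,4)==5, so pages 5..8 are journalled as a
** unit before any one of them may be written to the database file. */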
- */ - pg1 = ((pPg->pgno-1) & ~(nPagePerSector-1)) + 1; - - rc = sqlite3PagerPagecount(pPager, (int *)&nPageCount); - if( rc ) return rc; - if( pPg->pgno>nPageCount ){ - nPage = (pPg->pgno - pg1)+1; - }else if( (pg1+nPagePerSector-1)>nPageCount ){ - nPage = nPageCount+1-pg1; - }else{ - nPage = nPagePerSector; - } - assert(nPage>0); - assert(pg1<=pPg->pgno); - assert((pg1+nPage)>pPg->pgno); - - for(ii=0; ii<nPage && rc==SQLITE_OK; ii++){ - Pgno pg = pg1+ii; - PgHdr *pPage; - if( pg==pPg->pgno || !sqlite3BitvecTest(pPager->pInJournal, pg) ){ - if( pg!=PAGER_MJ_PGNO(pPager) ){ - rc = sqlite3PagerGet(pPager, pg, &pPage); - if( rc==SQLITE_OK ){ - rc = pager_write(pPage); - if( pPage->flags&PGHDR_NEED_SYNC ){ - needSync = 1; - assert(pPager->needSync); - } - sqlite3PagerUnref(pPage); - } - } - }else if( (pPage = pager_lookup(pPager, pg))!=0 ){ - if( pPage->flags&PGHDR_NEED_SYNC ){ - needSync = 1; - } - sqlite3PagerUnref(pPage); - } - } - - /* If the PGHDR_NEED_SYNC flag is set for any of the nPage pages - ** starting at pg1, then it needs to be set for all of them. Because - ** writing to any of these nPage pages may damage the others, the - ** journal file must contain sync()ed copies of all of them - ** before any of them can be written out to the database file. - */ - if( rc==SQLITE_OK && needSync ){ - assert( !MEMDB && pPager->noSync==0 ); - for(ii=0; ii<nPage; ii++){ - PgHdr *pPage = pager_lookup(pPager, pg1+ii); - if( pPage ){ - pPage->flags |= PGHDR_NEED_SYNC; - sqlite3PagerUnref(pPage); - } - } - assert(pPager->needSync); - } - - assert( pPager->doNotSync==1 ); - pPager->doNotSync = 0; - }else{ - rc = pager_write(pDbPage); - } - return rc; + assert( (pPg->flags & PGHDR_MMAP)==0 ); + assert( pPager->eState>=PAGER_WRITER_LOCKED ); + assert( assert_pager_state(pPager) ); + if( pPager->errCode ){ + return pPager->errCode; + }else if( (pPg->flags & PGHDR_WRITEABLE)!=0 && pPager->dbSize>=pPg->pgno ){ + if( pPager->nSavepoint ) return subjournalPageIfRequired(pPg); + return SQLITE_OK; + }else if( pPager->sectorSize > (u32)pPager->pageSize ){ + return pagerWriteLargeSector(pPg); + }else{ + return pager_write(pPg); + } } /* ** Return TRUE if the page given in the argument was previously passed ** to sqlite3PagerWrite(). In other words, return TRUE if it is ok ** to change the content of the page. */ #ifndef NDEBUG SQLITE_PRIVATE int sqlite3PagerIswriteable(DbPage *pPg){ - return pPg->flags&PGHDR_DIRTY; + return pPg->flags & PGHDR_WRITEABLE; } #endif /* ** A call to this routine tells the pager that it is not necessary to @@ -36885,20 +49731,25 @@ Pager *pPager = pPg->pPager; if( (pPg->flags&PGHDR_DIRTY) && pPager->nSavepoint==0 ){ PAGERTRACE(("DONT_WRITE page %d of %d\n", pPg->pgno, PAGERID(pPager))); IOTRACE(("CLEAN %p %d\n", pPager, pPg->pgno)) pPg->flags |= PGHDR_DONT_WRITE; -#ifdef SQLITE_CHECK_PAGES - pPg->pageHash = pager_pagehash(pPg); -#endif + pPg->flags &= ~PGHDR_WRITEABLE; + pager_set_pagehash(pPg); } } /* ** This routine is called to increment the value of the database file ** change-counter, stored as a 4-byte big-endian integer starting at -** byte offset 24 of the pager file. +** byte offset 24 of the pager file. The secondary change counter at +** 92 is also updated, as is the SQLite version number at offset 96. +** +** But this only happens if the pPager->changeCountDone flag is false. +** To avoid excess churning of page 1, the update only happens once. +** See also the pager_write_changecounter() routine that does an +** unconditional update of the change counters. 
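/* An illustrative caller of the sqlite3PagerWrite() interface defined
** above.  The page number and the 16-byte memset are arbitrary and the
** helper name is hypothetical; error handling is kept to the essentials.
*/
static int exampleModifyPage(Pager *pPager, Pgno pgno){
  DbPage *pPg = 0;
  int rc = sqlite3PagerGet(pPager, pgno, &pPg, 0);
  if( rc==SQLITE_OK ){
    rc = sqlite3PagerWrite(pPg);                 /* journal the page first */
    if( rc==SQLITE_OK ){
      memset(sqlite3PagerGetData(pPg), 0, 16);   /* now safe to modify */
    }
    sqlite3PagerUnref(pPg);
  }
  return rc;
}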
** ** If the isDirectMode flag is zero, then this is done by calling ** sqlite3PagerWrite() on page 1, then modifying the contents of the ** page data. In this case the file will be updated when the current ** transaction is committed. @@ -36909,10 +49760,15 @@ ** by writing an updated version of page 1 using a call to the ** sqlite3OsWrite() function. */ static int pager_incr_changecounter(Pager *pPager, int isDirectMode){ int rc = SQLITE_OK; + + assert( pPager->eState==PAGER_WRITER_CACHEMOD + || pPager->eState==PAGER_WRITER_DBMOD + ); + assert( assert_pager_state(pPager) ); /* Declare and initialize constant integer 'isDirect'. If the ** atomic-write optimization is enabled in this build, then isDirect ** is initialized to the value passed as the isDirectMode parameter ** to this function. Otherwise, it is always set to zero. @@ -36928,19 +49784,17 @@ UNUSED_PARAMETER(isDirectMode); #else # define DIRECT_MODE isDirectMode #endif - assert( pPager->state>=PAGER_RESERVED ); - if( !pPager->changeCountDone && pPager->dbSize>0 ){ + if( !pPager->changeCountDone && ALWAYS(pPager->dbSize>0) ){ PgHdr *pPgHdr; /* Reference to page 1 */ - u32 change_counter; /* Initial value of change-counter field */ assert( !pPager->tempFile && isOpen(pPager->fd) ); /* Open page 1 of the file for writing. */ - rc = sqlite3PagerGet(pPager, 1, &pPgHdr); + rc = sqlite3PagerGet(pPager, 1, &pPgHdr, 0); assert( pPgHdr==0 || rc==SQLITE_OK ); /* If page one was fetched successfully, and this function is not ** operating in direct-mode, make page 1 writable. When not in ** direct mode, page 1 is always held in cache and hence the PagerGet() @@ -36949,24 +49803,28 @@ if( !DIRECT_MODE && ALWAYS(rc==SQLITE_OK) ){ rc = sqlite3PagerWrite(pPgHdr); } if( rc==SQLITE_OK ){ - /* Increment the value just read and write it back to byte 24. */ - change_counter = sqlite3Get4byte((u8*)pPager->dbFileVers); - change_counter++; - put32bits(((char*)pPgHdr->pData)+24, change_counter); - - /* Also store the SQLite version number in bytes 96..99 */ - put32bits(((char*)pPgHdr->pData)+96, SQLITE_VERSION_NUMBER); + /* Actually do the update of the change counter */ + pager_write_changecounter(pPgHdr); /* If running in direct mode, write the contents of page 1 to the file. */ if( DIRECT_MODE ){ - const void *zBuf = pPgHdr->pData; + const void *zBuf; assert( pPager->dbFileSize>0 ); - rc = sqlite3OsWrite(pPager->fd, zBuf, pPager->pageSize, 0); + CODEC2(pPager, pPgHdr->pData, 1, 6, rc=SQLITE_NOMEM, zBuf); + if( rc==SQLITE_OK ){ + rc = sqlite3OsWrite(pPager->fd, zBuf, pPager->pageSize, 0); + pPager->aStat[PAGER_STAT_WRITE]++; + } if( rc==SQLITE_OK ){ + /* Update the pager's copy of the change-counter. Otherwise, the + ** next time a read transaction is opened the cache will be + ** flushed (as the change-counter values will not match). */ + const void *pCopy = (const void *)&((const char *)zBuf)[24]; + memcpy(&pPager->dbFileVers, pCopy, sizeof(pPager->dbFileVers)); pPager->changeCountDone = 1; } }else{ pPager->changeCountDone = 1; } @@ -36977,23 +49835,54 @@ } return rc; } /* -** Sync the pager file to disk. This is a no-op for in-memory files +** Sync the database file to disk. This is a no-op for in-memory databases ** or pages with the Pager.noSync flag set. ** -** If successful, or called on a pager for which it is a no-op, this +** If successful, or if called on a pager for which it is a no-op, this ** function returns SQLITE_OK. Otherwise, an IO error code is returned. 
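/* Rough sketch of the page-1 header fields touched when the change
** counter is bumped, with offsets as given in the comment introducing
** pager_incr_changecounter() above.  The real work is done by the
** pager_write_changecounter() routine defined earlier in this file; the
** helper below is illustrative only.
*/
static void exampleWriteChangeCounter(unsigned char *aPage1, u32 iCounter){
  put32bits(aPage1+24, iCounter);               /* main change counter    */
  put32bits(aPage1+92, iCounter);               /* secondary counter      */
  put32bits(aPage1+96, SQLITE_VERSION_NUMBER);  /* writing library version*/
}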
*/ -SQLITE_PRIVATE int sqlite3PagerSync(Pager *pPager){ - int rc; /* Return code */ - assert( !MEMDB ); - if( pPager->noSync ){ - rc = SQLITE_OK; - }else{ - rc = sqlite3OsSync(pPager->fd, pPager->sync_flags); +SQLITE_PRIVATE int sqlite3PagerSync(Pager *pPager, const char *zMaster){ + int rc = SQLITE_OK; + + if( isOpen(pPager->fd) ){ + void *pArg = (void*)zMaster; + rc = sqlite3OsFileControl(pPager->fd, SQLITE_FCNTL_SYNC, pArg); + if( rc==SQLITE_NOTFOUND ) rc = SQLITE_OK; + } + if( rc==SQLITE_OK && !pPager->noSync ){ + assert( !MEMDB ); + rc = sqlite3OsSync(pPager->fd, pPager->syncFlags); + } + return rc; +} + +/* +** This function may only be called while a write-transaction is active in +** rollback. If the connection is in WAL mode, this call is a no-op. +** Otherwise, if the connection does not already have an EXCLUSIVE lock on +** the database file, an attempt is made to obtain one. +** +** If the EXCLUSIVE lock is already held or the attempt to obtain it is +** successful, or the connection is in WAL mode, SQLITE_OK is returned. +** Otherwise, either SQLITE_BUSY or an SQLITE_IOERR_XXX error code is +** returned. +*/ +SQLITE_PRIVATE int sqlite3PagerExclusiveLock(Pager *pPager){ + int rc = pPager->errCode; + assert( assert_pager_state(pPager) ); + if( rc==SQLITE_OK ){ + assert( pPager->eState==PAGER_WRITER_CACHEMOD + || pPager->eState==PAGER_WRITER_DBMOD + || pPager->eState==PAGER_WRITER_LOCKED + ); + assert( assert_pager_state(pPager) ); + if( 0==pagerUseWal(pPager) ){ + rc = pager_wait_on_lock(pPager, EXCLUSIVE_LOCK); + } } return rc; } /* @@ -37027,153 +49916,157 @@ const char *zMaster, /* If not NULL, the master journal name */ int noSync /* True to omit the xSync on the db file */ ){ int rc = SQLITE_OK; /* Return code */ - /* The dbOrigSize is never set if journal_mode=OFF */ - assert( pPager->journalMode!=PAGER_JOURNALMODE_OFF || pPager->dbOrigSize==0 ); + assert( pPager->eState==PAGER_WRITER_LOCKED + || pPager->eState==PAGER_WRITER_CACHEMOD + || pPager->eState==PAGER_WRITER_DBMOD + || pPager->eState==PAGER_ERROR + ); + assert( assert_pager_state(pPager) ); /* If a prior error occurred, report that error again. */ if( NEVER(pPager->errCode) ) return pPager->errCode; PAGERTRACE(("DATABASE SYNC: File=%s zMaster=%s nSize=%d\n", pPager->zFilename, zMaster, pPager->dbSize)); - if( MEMDB && pPager->dbModified ){ + /* If no database changes have been made, return early. */ + if( pPager->eState<PAGER_WRITER_CACHEMOD ) return SQLITE_OK; + + if( MEMDB ){ /* If this is an in-memory db, or no pages have been written to, or this ** function has already been called, it is mostly a no-op. However, any ** backup in progress needs to be restarted. */ sqlite3BackupRestart(pPager->pBackup); - }else if( pPager->state!=PAGER_SYNCED && pPager->dbModified ){ - - /* The following block updates the change-counter. Exactly how it - ** does this depends on whether or not the atomic-update optimization - ** was enabled at compile time, and if this transaction meets the - ** runtime criteria to use the operation: - ** - ** * The file-system supports the atomic-write property for - ** blocks of size page-size, and - ** * This commit is not part of a multi-file transaction, and - ** * Exactly one page has been modified and store in the journal file. - ** - ** If the optimization was not enabled at compile time, then the - ** pager_incr_changecounter() function is called to update the change - ** counter in 'indirect-mode'. 
If the optimization is compiled in but - ** is not applicable to this transaction, call sqlite3JournalCreate() - ** to make sure the journal file has actually been created, then call - ** pager_incr_changecounter() to update the change-counter in indirect - ** mode. - ** - ** Otherwise, if the optimization is both enabled and applicable, - ** then call pager_incr_changecounter() to update the change-counter - ** in 'direct' mode. In this case the journal file will never be - ** created for this transaction. - */ -#ifdef SQLITE_ENABLE_ATOMIC_WRITE - PgHdr *pPg; - assert( isOpen(pPager->jfd) || pPager->journalMode==PAGER_JOURNALMODE_OFF ); - if( !zMaster && isOpen(pPager->jfd) - && pPager->journalOff==jrnlBufferSize(pPager) - && pPager->dbSize>=pPager->dbFileSize - && (0==(pPg = sqlite3PcacheDirtyList(pPager->pPCache)) || 0==pPg->pDirty) - ){ - /* Update the db file change counter via the direct-write method. The - ** following call will modify the in-memory representation of page 1 - ** to include the updated change counter and then write page 1 - ** directly to the database file. Because of the atomic-write - ** property of the host file-system, this is safe. - */ - rc = pager_incr_changecounter(pPager, 1); - }else{ - rc = sqlite3JournalCreate(pPager->jfd); - if( rc==SQLITE_OK ){ - rc = pager_incr_changecounter(pPager, 0); - } - } -#else - rc = pager_incr_changecounter(pPager, 0); -#endif - if( rc!=SQLITE_OK ) goto commit_phase_one_exit; - - /* If this transaction has made the database smaller, then all pages - ** being discarded by the truncation must be written to the journal - ** file. This can only happen in auto-vacuum mode. - ** - ** Before reading the pages with page numbers larger than the - ** current value of Pager.dbSize, set dbSize back to the value - ** that it took at the start of the transaction. Otherwise, the - ** calls to sqlite3PagerGet() return zeroed pages instead of - ** reading data from the database file. - ** - ** When journal_mode==OFF the dbOrigSize is always zero, so this - ** block never runs if journal_mode=OFF. - */ -#ifndef SQLITE_OMIT_AUTOVACUUM - if( pPager->dbSize<pPager->dbOrigSize - && ALWAYS(pPager->journalMode!=PAGER_JOURNALMODE_OFF) - ){ - Pgno i; /* Iterator variable */ - const Pgno iSkip = PAGER_MJ_PGNO(pPager); /* Pending lock page */ - const Pgno dbSize = pPager->dbSize; /* Database image size */ - pPager->dbSize = pPager->dbOrigSize; - for( i=dbSize+1; i<=pPager->dbOrigSize; i++ ){ - if( !sqlite3BitvecTest(pPager->pInJournal, i) && i!=iSkip ){ - PgHdr *pPage; /* Page to journal */ - rc = sqlite3PagerGet(pPager, i, &pPage); - if( rc!=SQLITE_OK ) goto commit_phase_one_exit; - rc = sqlite3PagerWrite(pPage); - sqlite3PagerUnref(pPage); - if( rc!=SQLITE_OK ) goto commit_phase_one_exit; - } - } - pPager->dbSize = dbSize; - } -#endif - - /* Write the master journal name into the journal file. If a master - ** journal file name has already been written to the journal file, - ** or if zMaster is NULL (no master journal), then this call is a no-op. - */ - rc = writeMasterJournal(pPager, zMaster); - if( rc!=SQLITE_OK ) goto commit_phase_one_exit; - - /* Sync the journal file. If the atomic-update optimization is being - ** used, this call will not create the journal file or perform any - ** real IO. - */ - rc = syncJournal(pPager); - if( rc!=SQLITE_OK ) goto commit_phase_one_exit; - - /* Write all dirty pages to the database file. 
*/ - rc = pager_write_pagelist(sqlite3PcacheDirtyList(pPager->pPCache)); - if( rc!=SQLITE_OK ){ - assert( rc!=SQLITE_IOERR_BLOCKED ); - goto commit_phase_one_exit; - } - sqlite3PcacheCleanAll(pPager->pPCache); - - /* If the file on disk is not the same size as the database image, - ** then use pager_truncate to grow or shrink the file here. - */ - if( pPager->dbSize!=pPager->dbFileSize ){ - Pgno nNew = pPager->dbSize - (pPager->dbSize==PAGER_MJ_PGNO(pPager)); - assert( pPager->state>=PAGER_EXCLUSIVE ); - rc = pager_truncate(pPager, nNew); - if( rc!=SQLITE_OK ) goto commit_phase_one_exit; - } - - /* Finally, sync the database file. */ - if( !pPager->noSync && !noSync ){ - rc = sqlite3OsSync(pPager->fd, pPager->sync_flags); - } - IOTRACE(("DBSYNC %p\n", pPager)) - - pPager->state = PAGER_SYNCED; + }else{ + if( pagerUseWal(pPager) ){ + PgHdr *pList = sqlite3PcacheDirtyList(pPager->pPCache); + PgHdr *pPageOne = 0; + if( pList==0 ){ + /* Must have at least one page for the WAL commit flag. + ** Ticket [2d1a5c67dfc2363e44f29d9bbd57f] 2011-05-18 */ + rc = sqlite3PagerGet(pPager, 1, &pPageOne, 0); + pList = pPageOne; + pList->pDirty = 0; + } + assert( rc==SQLITE_OK ); + if( ALWAYS(pList) ){ + rc = pagerWalFrames(pPager, pList, pPager->dbSize, 1); + } + sqlite3PagerUnref(pPageOne); + if( rc==SQLITE_OK ){ + sqlite3PcacheCleanAll(pPager->pPCache); + } + }else{ + /* The following block updates the change-counter. Exactly how it + ** does this depends on whether or not the atomic-update optimization + ** was enabled at compile time, and if this transaction meets the + ** runtime criteria to use the operation: + ** + ** * The file-system supports the atomic-write property for + ** blocks of size page-size, and + ** * This commit is not part of a multi-file transaction, and + ** * Exactly one page has been modified and store in the journal file. + ** + ** If the optimization was not enabled at compile time, then the + ** pager_incr_changecounter() function is called to update the change + ** counter in 'indirect-mode'. If the optimization is compiled in but + ** is not applicable to this transaction, call sqlite3JournalCreate() + ** to make sure the journal file has actually been created, then call + ** pager_incr_changecounter() to update the change-counter in indirect + ** mode. + ** + ** Otherwise, if the optimization is both enabled and applicable, + ** then call pager_incr_changecounter() to update the change-counter + ** in 'direct' mode. In this case the journal file will never be + ** created for this transaction. + */ + #ifdef SQLITE_ENABLE_ATOMIC_WRITE + PgHdr *pPg; + assert( isOpen(pPager->jfd) + || pPager->journalMode==PAGER_JOURNALMODE_OFF + || pPager->journalMode==PAGER_JOURNALMODE_WAL + ); + if( !zMaster && isOpen(pPager->jfd) + && pPager->journalOff==jrnlBufferSize(pPager) + && pPager->dbSize>=pPager->dbOrigSize + && (0==(pPg = sqlite3PcacheDirtyList(pPager->pPCache)) || 0==pPg->pDirty) + ){ + /* Update the db file change counter via the direct-write method. The + ** following call will modify the in-memory representation of page 1 + ** to include the updated change counter and then write page 1 + ** directly to the database file. Because of the atomic-write + ** property of the host file-system, this is safe. 
+ */ + rc = pager_incr_changecounter(pPager, 1); + }else{ + rc = sqlite3JournalCreate(pPager->jfd); + if( rc==SQLITE_OK ){ + rc = pager_incr_changecounter(pPager, 0); + } + } + #else + rc = pager_incr_changecounter(pPager, 0); + #endif + if( rc!=SQLITE_OK ) goto commit_phase_one_exit; + + /* Write the master journal name into the journal file. If a master + ** journal file name has already been written to the journal file, + ** or if zMaster is NULL (no master journal), then this call is a no-op. + */ + rc = writeMasterJournal(pPager, zMaster); + if( rc!=SQLITE_OK ) goto commit_phase_one_exit; + + /* Sync the journal file and write all dirty pages to the database. + ** If the atomic-update optimization is being used, this sync will not + ** create the journal file or perform any real IO. + ** + ** Because the change-counter page was just modified, unless the + ** atomic-update optimization is used it is almost certain that the + ** journal requires a sync here. However, in locking_mode=exclusive + ** on a system under memory pressure it is just possible that this is + ** not the case. In this case it is likely enough that the redundant + ** xSync() call will be changed to a no-op by the OS anyhow. + */ + rc = syncJournal(pPager, 0); + if( rc!=SQLITE_OK ) goto commit_phase_one_exit; + + rc = pager_write_pagelist(pPager,sqlite3PcacheDirtyList(pPager->pPCache)); + if( rc!=SQLITE_OK ){ + assert( rc!=SQLITE_IOERR_BLOCKED ); + goto commit_phase_one_exit; + } + sqlite3PcacheCleanAll(pPager->pPCache); + + /* If the file on disk is smaller than the database image, use + ** pager_truncate to grow the file here. This can happen if the database + ** image was extended as part of the current transaction and then the + ** last page in the db image moved to the free-list. In this case the + ** last page is never written out to disk, leaving the database file + ** undersized. Fix this now if it is the case. */ + if( pPager->dbSize>pPager->dbFileSize ){ + Pgno nNew = pPager->dbSize - (pPager->dbSize==PAGER_MJ_PGNO(pPager)); + assert( pPager->eState==PAGER_WRITER_DBMOD ); + rc = pager_truncate(pPager, nNew); + if( rc!=SQLITE_OK ) goto commit_phase_one_exit; + } + + /* Finally, sync the database file. */ + if( !noSync ){ + rc = sqlite3PagerSync(pPager, zMaster); + } + IOTRACE(("DBSYNC %p\n", pPager)) + } } commit_phase_one_exit: + if( rc==SQLITE_OK && !pagerUseWal(pPager) ){ + pPager->eState = PAGER_WRITER_FINISHED; + } return rc; } /* @@ -37197,15 +50090,15 @@ /* This routine should not be called if a prior error has occurred. ** But if (due to a coding error elsewhere in the system) it does get ** called, just return the same error code without doing anything. */ if( NEVER(pPager->errCode) ) return pPager->errCode; - /* This function should not be called if the pager is not in at least - ** PAGER_RESERVED state. And indeed SQLite never does this. But it is - ** nice to have this defensive test here anyway. - */ - if( NEVER(pPager->state<PAGER_RESERVED) ) return SQLITE_ERROR; + assert( pPager->eState==PAGER_WRITER_LOCKED + || pPager->eState==PAGER_WRITER_FINISHED + || (pagerUseWal(pPager) && pPager->eState==PAGER_WRITER_CACHEMOD) + ); + assert( assert_pager_state(pPager) ); /* An optimization. If the database was not actually modified during ** this transaction, the pager is running in exclusive-mode and is ** using persistent journals, then this function is a no-op. ** @@ -37214,99 +50107,94 @@ ** a hot-journal during hot-journal rollback, 0 changes will be made ** to the database file. 
So there is no need to zero the journal ** header. Since the pager is in exclusive mode, there is no need ** to drop any locks either. */ - if( pPager->dbModified==0 && pPager->exclusiveMode + if( pPager->eState==PAGER_WRITER_LOCKED + && pPager->exclusiveMode && pPager->journalMode==PAGER_JOURNALMODE_PERSIST ){ - assert( pPager->journalOff==JOURNAL_HDR_SZ(pPager) ); + assert( pPager->journalOff==JOURNAL_HDR_SZ(pPager) || !pPager->journalOff ); + pPager->eState = PAGER_READER; return SQLITE_OK; } PAGERTRACE(("COMMIT %d\n", PAGERID(pPager))); - assert( pPager->state==PAGER_SYNCED || MEMDB || !pPager->dbModified ); - rc = pager_end_transaction(pPager, pPager->setMaster); + pPager->iDataVersion++; + rc = pager_end_transaction(pPager, pPager->setMaster, 1); return pager_error(pPager, rc); } /* -** Rollback all changes. The database falls back to PAGER_SHARED mode. +** If a write transaction is open, then all changes made within the +** transaction are reverted and the current write-transaction is closed. +** The pager falls back to PAGER_READER state if successful, or PAGER_ERROR +** state if an error occurs. ** -** This function performs two tasks: +** If the pager is already in PAGER_ERROR state when this function is called, +** it returns Pager.errCode immediately. No work is performed in this case. +** +** Otherwise, in rollback mode, this function performs two functions: ** ** 1) It rolls back the journal file, restoring all database file and ** in-memory cache pages to the state they were in when the transaction ** was opened, and +** ** 2) It finalizes the journal file, so that it is not used for hot ** rollback at any point in the future. ** -** subject to the following qualifications: -** -** * If the journal file is not yet open when this function is called, -** then only (2) is performed. In this case there is no journal file -** to roll back. -** -** * If in an error state other than SQLITE_FULL, then task (1) is -** performed. If successful, task (2). Regardless of the outcome -** of either, the error state error code is returned to the caller -** (i.e. either SQLITE_IOERR or SQLITE_CORRUPT). -** -** * If the pager is in PAGER_RESERVED state, then attempt (1). Whether -** or not (1) is succussful, also attempt (2). If successful, return -** SQLITE_OK. Otherwise, enter the error state and return the first -** error code encountered. -** -** In this case there is no chance that the database was written to. -** So is safe to finalize the journal file even if the playback -** (operation 1) failed. However the pager must enter the error state -** as the contents of the in-memory cache are now suspect. -** -** * Finally, if in PAGER_EXCLUSIVE state, then attempt (1). Only -** attempt (2) if (1) is successful. Return SQLITE_OK if successful, -** otherwise enter the error state and return the error code from the -** failing operation. -** -** In this case the database file may have been written to. So if the -** playback operation did not succeed it would not be safe to finalize -** the journal file. It needs to be left in the file-system so that -** some other process can use it to restore the database state (by -** hot-journal rollback). +** Finalization of the journal file (task 2) is only performed if the +** rollback is successful. +** +** In WAL mode, all cache-entries containing data modified within the +** current transaction are either expelled from the cache or reverted to +** their pre-transaction state by re-reading data from the database or +** WAL files. 
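/* Taken together, the commit path above is driven by the b-tree layer
** roughly as follows.  This is a simplified sketch using function names
** as they appear elsewhere in this file; the real callers also handle
** multi-file transactions and busy-handler retries.
*/
static int exampleCommit(Pager *pPager){
  int rc = sqlite3PagerCommitPhaseOne(pPager, 0, 0); /* journal + db sync */
  if( rc==SQLITE_OK ){
    rc = sqlite3PagerCommitPhaseTwo(pPager);         /* finalize journal  */
  }
  if( rc!=SQLITE_OK ){
    sqlite3PagerRollback(pPager);                    /* undo on failure   */
  }
  return rc;
}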
The WAL transaction is then closed. */ SQLITE_PRIVATE int sqlite3PagerRollback(Pager *pPager){ int rc = SQLITE_OK; /* Return code */ PAGERTRACE(("ROLLBACK %d\n", PAGERID(pPager))); - if( !pPager->dbModified || !isOpen(pPager->jfd) ){ - rc = pager_end_transaction(pPager, pPager->setMaster); - }else if( pPager->errCode && pPager->errCode!=SQLITE_FULL ){ - if( pPager->state>=PAGER_EXCLUSIVE ){ - pager_playback(pPager, 0); - } - rc = pPager->errCode; - }else{ - if( pPager->state==PAGER_RESERVED ){ - int rc2; - rc = pager_playback(pPager, 0); - rc2 = pager_end_transaction(pPager, pPager->setMaster); - if( rc==SQLITE_OK ){ - rc = rc2; - } - }else{ - rc = pager_playback(pPager, 0); - } - - if( !MEMDB ){ - pPager->dbSizeValid = 0; - } - - /* If an error occurs during a ROLLBACK, we can no longer trust the pager - ** cache. So call pager_error() on the way out to make any error - ** persistent. - */ - rc = pager_error(pPager, rc); - } - return rc; + + /* PagerRollback() is a no-op if called in READER or OPEN state. If + ** the pager is already in the ERROR state, the rollback is not + ** attempted here. Instead, the error code is returned to the caller. + */ + assert( assert_pager_state(pPager) ); + if( pPager->eState==PAGER_ERROR ) return pPager->errCode; + if( pPager->eState<=PAGER_READER ) return SQLITE_OK; + + if( pagerUseWal(pPager) ){ + int rc2; + rc = sqlite3PagerSavepoint(pPager, SAVEPOINT_ROLLBACK, -1); + rc2 = pager_end_transaction(pPager, pPager->setMaster, 0); + if( rc==SQLITE_OK ) rc = rc2; + }else if( !isOpen(pPager->jfd) || pPager->eState==PAGER_WRITER_LOCKED ){ + int eState = pPager->eState; + rc = pager_end_transaction(pPager, 0, 0); + if( !MEMDB && eState>PAGER_WRITER_LOCKED ){ + /* This can happen using journal_mode=off. Move the pager to the error + ** state to indicate that the contents of the cache may not be trusted. + ** Any active readers will get SQLITE_ABORT. + */ + pPager->errCode = SQLITE_ABORT; + pPager->eState = PAGER_ERROR; + return rc; + } + }else{ + rc = pager_playback(pPager, 0); + } + + assert( pPager->eState==PAGER_READER || rc!=SQLITE_OK ); + assert( rc==SQLITE_OK || rc==SQLITE_FULL || rc==SQLITE_CORRUPT + || rc==SQLITE_NOMEM || (rc&0xFF)==SQLITE_IOERR + || rc==SQLITE_CANTOPEN + ); + + /* If an error occurs during a ROLLBACK, we can no longer trust the pager + ** cache. So call pager_error() on the way out to make any error persistent. + */ + return pager_error(pPager, rc); } /* ** Return TRUE if the database file is opened read-only. Return FALSE ** if the database is (in theory) writable. @@ -37313,25 +50201,29 @@ */ SQLITE_PRIVATE u8 sqlite3PagerIsreadonly(Pager *pPager){ return pPager->readOnly; } +#ifdef SQLITE_DEBUG /* -** Return the number of references to the pager. +** Return the sum of the reference counts for all pages held by pPager. */ SQLITE_PRIVATE int sqlite3PagerRefcount(Pager *pPager){ return sqlite3PcacheRefCount(pPager->pPCache); } +#endif /* ** Return the approximate number of bytes of memory currently ** used by the pager and its associated cache. */ SQLITE_PRIVATE int sqlite3PagerMemUsed(Pager *pPager){ - int perPageSize = pPager->pageSize + pPager->nExtra + 20; + int perPageSize = pPager->pageSize + pPager->nExtra + sizeof(PgHdr) + + 5*sizeof(void*); return perPageSize*sqlite3PcachePagecount(pPager->pPCache) - + sqlite3MallocSize(pPager); + + sqlite3MallocSize(pPager) + + pPager->pageSize; } /* ** Return the number of references to the specified page. 
*/ @@ -37346,21 +50238,45 @@ SQLITE_PRIVATE int *sqlite3PagerStats(Pager *pPager){ static int a[11]; a[0] = sqlite3PcacheRefCount(pPager->pPCache); a[1] = sqlite3PcachePagecount(pPager->pPCache); a[2] = sqlite3PcacheGetCachesize(pPager->pPCache); - a[3] = pPager->dbSizeValid ? (int) pPager->dbSize : -1; - a[4] = pPager->state; + a[3] = pPager->eState==PAGER_OPEN ? -1 : (int) pPager->dbSize; + a[4] = pPager->eState; a[5] = pPager->errCode; - a[6] = pPager->nHit; - a[7] = pPager->nMiss; + a[6] = pPager->aStat[PAGER_STAT_HIT]; + a[7] = pPager->aStat[PAGER_STAT_MISS]; a[8] = 0; /* Used to be pPager->nOvfl */ a[9] = pPager->nRead; - a[10] = pPager->nWrite; + a[10] = pPager->aStat[PAGER_STAT_WRITE]; return a; } #endif + +/* +** Parameter eStat must be either SQLITE_DBSTATUS_CACHE_HIT or +** SQLITE_DBSTATUS_CACHE_MISS. Before returning, *pnVal is incremented by the +** current cache hit or miss count, according to the value of eStat. If the +** reset parameter is non-zero, the cache hit or miss count is zeroed before +** returning. +*/ +SQLITE_PRIVATE void sqlite3PagerCacheStat(Pager *pPager, int eStat, int reset, int *pnVal){ + + assert( eStat==SQLITE_DBSTATUS_CACHE_HIT + || eStat==SQLITE_DBSTATUS_CACHE_MISS + || eStat==SQLITE_DBSTATUS_CACHE_WRITE + ); + + assert( SQLITE_DBSTATUS_CACHE_HIT+1==SQLITE_DBSTATUS_CACHE_MISS ); + assert( SQLITE_DBSTATUS_CACHE_HIT+2==SQLITE_DBSTATUS_CACHE_WRITE ); + assert( PAGER_STAT_HIT==0 && PAGER_STAT_MISS==1 && PAGER_STAT_WRITE==2 ); + + *pnVal += pPager->aStat[eStat - SQLITE_DBSTATUS_CACHE_HIT]; + if( reset ){ + pPager->aStat[eStat - SQLITE_DBSTATUS_CACHE_HIT] = 0; + } +} /* ** Return true if this is an in-memory pager. */ SQLITE_PRIVATE int sqlite3PagerIsMemdb(Pager *pPager){ @@ -37375,58 +50291,66 @@ ** ** If a memory allocation fails, SQLITE_NOMEM is returned. If an error ** occurs while opening the sub-journal file, then an IO error code is ** returned. Otherwise, SQLITE_OK. */ -SQLITE_PRIVATE int sqlite3PagerOpenSavepoint(Pager *pPager, int nSavepoint){ +static SQLITE_NOINLINE int pagerOpenSavepoint(Pager *pPager, int nSavepoint){ int rc = SQLITE_OK; /* Return code */ int nCurrent = pPager->nSavepoint; /* Current number of savepoints */ - - if( nSavepoint>nCurrent && pPager->useJournal ){ - int ii; /* Iterator variable */ - PagerSavepoint *aNew; /* New Pager.aSavepoint array */ - int nPage; /* Size of database file */ - - rc = sqlite3PagerPagecount(pPager, &nPage); - if( rc ) return rc; - - /* Grow the Pager.aSavepoint array using realloc(). Return SQLITE_NOMEM - ** if the allocation fails. Otherwise, zero the new portion in case a - ** malloc failure occurs while populating it in the for(...) loop below. - */ - aNew = (PagerSavepoint *)sqlite3Realloc( - pPager->aSavepoint, sizeof(PagerSavepoint)*nSavepoint - ); - if( !aNew ){ - return SQLITE_NOMEM; - } - memset(&aNew[nCurrent], 0, (nSavepoint-nCurrent) * sizeof(PagerSavepoint)); - pPager->aSavepoint = aNew; - pPager->nSavepoint = nSavepoint; - - /* Populate the PagerSavepoint structures just allocated. */ - for(ii=nCurrent; ii<nSavepoint; ii++){ - aNew[ii].nOrig = nPage; - if( isOpen(pPager->jfd) && pPager->journalOff>0 ){ - aNew[ii].iOffset = pPager->journalOff; - }else{ - aNew[ii].iOffset = JOURNAL_HDR_SZ(pPager); - } - aNew[ii].iSubRec = pPager->nSubRec; - aNew[ii].pInSavepoint = sqlite3BitvecCreate(nPage); - if( !aNew[ii].pInSavepoint ){ - return SQLITE_NOMEM; - } - } - - /* Open the sub-journal, if it is not already opened. 
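/* Illustrative use of the statistics interface above: read and then clear
** the cache-hit counter.  The constant is one of the SQLITE_DBSTATUS_*
** values accepted by sqlite3PagerCacheStat(); the helper name is
** hypothetical.
*/
static int exampleReadHitCount(Pager *pPager){
  int nHit = 0;
  sqlite3PagerCacheStat(pPager, SQLITE_DBSTATUS_CACHE_HIT, 1, &nHit);
  return nHit;   /* the counter has been added to nHit and reset to zero */
}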
*/ - rc = openSubJournal(pPager); - assertTruncateConstraint(pPager); - } - - return rc; -} + int ii; /* Iterator variable */ + PagerSavepoint *aNew; /* New Pager.aSavepoint array */ + + assert( pPager->eState>=PAGER_WRITER_LOCKED ); + assert( assert_pager_state(pPager) ); + assert( nSavepoint>nCurrent && pPager->useJournal ); + + /* Grow the Pager.aSavepoint array using realloc(). Return SQLITE_NOMEM + ** if the allocation fails. Otherwise, zero the new portion in case a + ** malloc failure occurs while populating it in the for(...) loop below. + */ + aNew = (PagerSavepoint *)sqlite3Realloc( + pPager->aSavepoint, sizeof(PagerSavepoint)*nSavepoint + ); + if( !aNew ){ + return SQLITE_NOMEM; + } + memset(&aNew[nCurrent], 0, (nSavepoint-nCurrent) * sizeof(PagerSavepoint)); + pPager->aSavepoint = aNew; + + /* Populate the PagerSavepoint structures just allocated. */ + for(ii=nCurrent; ii<nSavepoint; ii++){ + aNew[ii].nOrig = pPager->dbSize; + if( isOpen(pPager->jfd) && pPager->journalOff>0 ){ + aNew[ii].iOffset = pPager->journalOff; + }else{ + aNew[ii].iOffset = JOURNAL_HDR_SZ(pPager); + } + aNew[ii].iSubRec = pPager->nSubRec; + aNew[ii].pInSavepoint = sqlite3BitvecCreate(pPager->dbSize); + if( !aNew[ii].pInSavepoint ){ + return SQLITE_NOMEM; + } + if( pagerUseWal(pPager) ){ + sqlite3WalSavepoint(pPager->pWal, aNew[ii].aWalData); + } + pPager->nSavepoint = ii+1; + } + assert( pPager->nSavepoint==nSavepoint ); + assertTruncateConstraint(pPager); + return rc; +} +SQLITE_PRIVATE int sqlite3PagerOpenSavepoint(Pager *pPager, int nSavepoint){ + assert( pPager->eState>=PAGER_WRITER_LOCKED ); + assert( assert_pager_state(pPager) ); + + if( nSavepoint>pPager->nSavepoint && pPager->useJournal ){ + return pagerOpenSavepoint(pPager, nSavepoint); + }else{ + return SQLITE_OK; + } +} + /* ** This function is called to rollback or release (commit) a savepoint. ** The savepoint to release or rollback need not be the most recently ** created savepoint. @@ -37455,16 +50379,16 @@ ** This function may return SQLITE_NOMEM if a memory allocation fails, ** or an IO error code if an IO error occurs while rolling back a ** savepoint. If no errors occur, SQLITE_OK is returned. */ SQLITE_PRIVATE int sqlite3PagerSavepoint(Pager *pPager, int op, int iSavepoint){ - int rc = SQLITE_OK; + int rc = pPager->errCode; /* Return code */ assert( op==SAVEPOINT_RELEASE || op==SAVEPOINT_ROLLBACK ); assert( iSavepoint>=0 || op==SAVEPOINT_ROLLBACK ); - if( iSavepoint<pPager->nSavepoint ){ + if( rc==SQLITE_OK && iSavepoint<pPager->nSavepoint ){ int ii; /* Iterator variable */ int nNew; /* Number of remaining savepoints after this op. */ /* Figure out how many savepoints will still be active after this ** operation. Store this value in nNew. Then free resources associated @@ -37491,31 +50415,38 @@ /* Else this is a rollback operation, playback the specified savepoint. ** If this is a temp-file, it is possible that the journal file has ** not yet been opened. In this case there have been no changes to ** the database file, so the playback operation can be skipped. */ - else if( isOpen(pPager->jfd) ){ + else if( pagerUseWal(pPager) || isOpen(pPager->jfd) ){ PagerSavepoint *pSavepoint = (nNew==0)?0:&pPager->aSavepoint[nNew-1]; rc = pagerPlaybackSavepoint(pPager, pSavepoint); assert(rc!=SQLITE_DONE); } - } + return rc; } /* ** Return the full pathname of the database file. +** +** Except, if the pager is in-memory only, then return an empty string if +** nullIfMemDb is true. 
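/* Illustrative savepoint sequence against the interface above, assuming a
** zero-based savepoint index and an already-open write transaction; error
** handling is elided and the helper name is hypothetical.
*/
static int exampleSavepoint(Pager *pPager){
  int rc = sqlite3PagerOpenSavepoint(pPager, 1);     /* open savepoint 0 */
  if( rc==SQLITE_OK ){
    /* ... pages written here have their pre-images sub-journalled ... */
    rc = sqlite3PagerSavepoint(pPager, SAVEPOINT_ROLLBACK, 0);
  }
  if( rc==SQLITE_OK ){
    rc = sqlite3PagerSavepoint(pPager, SAVEPOINT_RELEASE, 0);
  }
  return rc;
}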
This routine is called with nullIfMemDb==1 when +** used to report the filename to the user, for compatibility with legacy +** behavior. But when the Btree needs to know the filename for matching to +** shared cache, it uses nullIfMemDb==0 so that in-memory databases can +** participate in shared-cache. */ -SQLITE_PRIVATE const char *sqlite3PagerFilename(Pager *pPager){ - return pPager->zFilename; +SQLITE_PRIVATE const char *sqlite3PagerFilename(Pager *pPager, int nullIfMemDb){ + return (nullIfMemDb && pPager->memDb) ? "" : pPager->zFilename; } /* ** Return the VFS structure for the pager. */ -SQLITE_PRIVATE const sqlite3_vfs *sqlite3PagerVfs(Pager *pPager){ +SQLITE_PRIVATE sqlite3_vfs *sqlite3PagerVfs(Pager *pPager){ return pPager->pVfs; } /* ** Return the file handle for the database file associated @@ -37523,10 +50454,22 @@ ** not yet been opened. */ SQLITE_PRIVATE sqlite3_file *sqlite3PagerFile(Pager *pPager){ return pPager->fd; } + +/* +** Return the file handle for the journal file (if it exists). +** This will be either the rollback journal or the WAL file. +*/ +SQLITE_PRIVATE sqlite3_file *sqlite3PagerJrnlFile(Pager *pPager){ +#if SQLITE_OMIT_WAL + return pPager->jfd; +#else + return pPager->pWal ? sqlite3WalFile(pPager->pWal) : pPager->jfd; +#endif +} /* ** Return the full pathname of the journal file. */ SQLITE_PRIVATE const char *sqlite3PagerJournalname(Pager *pPager){ @@ -37543,11 +50486,11 @@ #ifdef SQLITE_HAS_CODEC /* ** Set or retrieve the codec for this pager */ -static void sqlite3PagerSetCodec( +SQLITE_PRIVATE void sqlite3PagerSetCodec( Pager *pPager, void *(*xCodec)(void*,void*,Pgno,int), void (*xCodecSizeChng)(void*,int,int), void (*xCodecFree)(void*), void *pCodec @@ -37557,14 +50500,34 @@ pPager->xCodecSizeChng = xCodecSizeChng; pPager->xCodecFree = xCodecFree; pPager->pCodec = pCodec; pagerReportSize(pPager); } -static void *sqlite3PagerGetCodec(Pager *pPager){ +SQLITE_PRIVATE void *sqlite3PagerGetCodec(Pager *pPager){ return pPager->pCodec; } -#endif + +/* +** This function is called by the wal module when writing page content +** into the log file. +** +** This function returns a pointer to a buffer containing the encrypted +** page content. If a malloc fails, this function may return NULL. +*/ +SQLITE_PRIVATE void *sqlite3PagerCodec(PgHdr *pPg){ + void *aData = 0; + CODEC2(pPg->pPager, pPg->pData, pPg->pgno, 6, return 0, aData); + return aData; +} + +/* +** Return the current pager state +*/ +SQLITE_PRIVATE int sqlite3PagerState(Pager *pPager){ + return pPager->eState; +} +#endif /* SQLITE_HAS_CODEC */ #ifndef SQLITE_OMIT_AUTOVACUUM /* ** Move the page pPg to location pgno in the file. ** @@ -37595,10 +50558,14 @@ Pgno needSyncPgno = 0; /* Old value of pPg->pgno, if sync is required */ int rc; /* Return code */ Pgno origPgno; /* The original page number */ assert( pPg->nRef>0 ); + assert( pPager->eState==PAGER_WRITER_CACHEMOD + || pPager->eState==PAGER_WRITER_DBMOD + ); + assert( assert_pager_state(pPager) ); /* In order to be able to rollback, an in-memory database must journal ** the page we are moving from. */ if( MEMDB ){ @@ -37622,13 +50589,12 @@ ** ** subjournalPage() may need to allocate space to store pPg->pgno into ** one or more savepoint bitvecs. This is the reason this function ** may return SQLITE_NOMEM. 
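/* The nullIfMemDb flag described above in practice (illustrative only):
** for an in-memory pager the two calls below return different strings,
** while for an ordinary database file they return the same zFilename.
*/
static void exampleFilenames(Pager *pPager){
  const char *zForUser  = sqlite3PagerFilename(pPager, 1); /* "" if MEMDB */
  const char *zForCache = sqlite3PagerFilename(pPager, 0); /* zFilename   */
  (void)zForUser; (void)zForCache;
}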
*/ - if( pPg->flags&PGHDR_DIRTY - && subjRequiresPage(pPg) - && SQLITE_OK!=(rc = subjournalPage(pPg)) + if( (pPg->flags & PGHDR_DIRTY)!=0 + && SQLITE_OK!=(rc = subjournalPageIfRequired(pPg)) ){ return rc; } PAGERTRACE(("MOVE %d page %d (needSync=%d) moves to %d\n", @@ -37642,88 +50608,92 @@ ** the journal needs to be sync()ed before database page pPg->pgno ** can be written to. The caller has already promised not to write to it. */ if( (pPg->flags&PGHDR_NEED_SYNC) && !isCommit ){ needSyncPgno = pPg->pgno; - assert( pageInJournal(pPg) || pPg->pgno>pPager->dbOrigSize ); + assert( pPager->journalMode==PAGER_JOURNALMODE_OFF || + pageInJournal(pPager, pPg) || pPg->pgno>pPager->dbOrigSize ); assert( pPg->flags&PGHDR_DIRTY ); - assert( pPager->needSync ); } /* If the cache contains a page with page-number pgno, remove it - ** from its hash chain. Also, if the PgHdr.needSync was set for + ** from its hash chain. Also, if the PGHDR_NEED_SYNC flag was set for ** page pgno before the 'move' operation, it needs to be retained ** for the page moved there. */ pPg->flags &= ~PGHDR_NEED_SYNC; - pPgOld = pager_lookup(pPager, pgno); + pPgOld = sqlite3PagerLookup(pPager, pgno); assert( !pPgOld || pPgOld->nRef==1 ); if( pPgOld ){ pPg->flags |= (pPgOld->flags&PGHDR_NEED_SYNC); if( MEMDB ){ /* Do not discard pages from an in-memory database since we might ** need to rollback later. Just move the page out of the way. */ - assert( pPager->dbSizeValid ); sqlite3PcacheMove(pPgOld, pPager->dbSize+1); }else{ sqlite3PcacheDrop(pPgOld); } } origPgno = pPg->pgno; sqlite3PcacheMove(pPg, pgno); sqlite3PcacheMakeDirty(pPg); - pPager->dbModified = 1; + + /* For an in-memory database, make sure the original page continues + ** to exist, in case the transaction needs to roll back. Use pPgOld + ** as the original page since it has already been allocated. + */ + if( MEMDB ){ + assert( pPgOld ); + sqlite3PcacheMove(pPgOld, origPgno); + sqlite3PagerUnrefNotNull(pPgOld); + } if( needSyncPgno ){ /* If needSyncPgno is non-zero, then the journal file needs to be ** sync()ed before any data is written to database file page needSyncPgno. ** Currently, no such page exists in the page-cache and the ** "is journaled" bitvec flag has been set. This needs to be remedied by - ** loading the page into the pager-cache and setting the PgHdr.needSync + ** loading the page into the pager-cache and setting the PGHDR_NEED_SYNC ** flag. ** ** If the attempt to load the page into the page-cache fails, (due ** to a malloc() or IO failure), clear the bit in the pInJournal[] ** array. Otherwise, if the page is loaded and written again in ** this transaction, it may be written to the database file before ** it is synced into the journal file. This way, it may end up in ** the journal file twice, but that is not a problem. - ** - ** The sqlite3PagerGet() call may cause the journal to sync. So make - ** sure the Pager.needSync flag is set too. 
*/ PgHdr *pPgHdr; - assert( pPager->needSync ); - rc = sqlite3PagerGet(pPager, needSyncPgno, &pPgHdr); + rc = sqlite3PagerGet(pPager, needSyncPgno, &pPgHdr, 0); if( rc!=SQLITE_OK ){ if( needSyncPgno<=pPager->dbOrigSize ){ assert( pPager->pTmpSpace!=0 ); sqlite3BitvecClear(pPager->pInJournal, needSyncPgno, pPager->pTmpSpace); } return rc; } - pPager->needSync = 1; - assert( pPager->noSync==0 && !MEMDB ); pPgHdr->flags |= PGHDR_NEED_SYNC; sqlite3PcacheMakeDirty(pPgHdr); - sqlite3PagerUnref(pPgHdr); - } - - /* - ** For an in-memory database, make sure the original page continues - ** to exist, in case the transaction needs to roll back. Use pPgOld - ** as the original page since it has already been allocated. - */ - if( MEMDB ){ - sqlite3PcacheMove(pPgOld, origPgno); - sqlite3PagerUnref(pPgOld); + sqlite3PagerUnrefNotNull(pPgHdr); } return SQLITE_OK; } #endif + +/* +** The page handle passed as the first argument refers to a dirty page +** with a page number other than iNew. This function changes the page's +** page number to iNew and sets the value of the PgHdr.flags field to +** the value passed as the third parameter. +*/ +SQLITE_PRIVATE void sqlite3PagerRekey(DbPage *pPg, Pgno iNew, u16 flags){ + assert( pPg->pgno!=iNew ); + pPg->flags = flags; + sqlite3PcacheMove(pPg, iNew); +} /* ** Return a pointer to the data for the specified page. */ SQLITE_PRIVATE void *sqlite3PagerGetData(DbPage *pPg){ @@ -37753,67 +50723,150 @@ assert( eMode==PAGER_LOCKINGMODE_QUERY || eMode==PAGER_LOCKINGMODE_NORMAL || eMode==PAGER_LOCKINGMODE_EXCLUSIVE ); assert( PAGER_LOCKINGMODE_QUERY<0 ); assert( PAGER_LOCKINGMODE_NORMAL>=0 && PAGER_LOCKINGMODE_EXCLUSIVE>=0 ); - if( eMode>=0 && !pPager->tempFile ){ + assert( pPager->exclusiveMode || 0==sqlite3WalHeapMemory(pPager->pWal) ); + if( eMode>=0 && !pPager->tempFile && !sqlite3WalHeapMemory(pPager->pWal) ){ pPager->exclusiveMode = (u8)eMode; } return (int)pPager->exclusiveMode; } /* -** Get/set the journal-mode for this pager. Parameter eMode must be one of: +** Set the journal-mode for this pager. Parameter eMode must be one of: ** -** PAGER_JOURNALMODE_QUERY ** PAGER_JOURNALMODE_DELETE ** PAGER_JOURNALMODE_TRUNCATE ** PAGER_JOURNALMODE_PERSIST ** PAGER_JOURNALMODE_OFF ** PAGER_JOURNALMODE_MEMORY +** PAGER_JOURNALMODE_WAL ** -** If the parameter is not _QUERY, then the journal_mode is set to the -** value specified if the change is allowed. The change is disallowed -** for the following reasons: +** The journalmode is set to the value specified if the change is allowed. +** The change may be disallowed for the following reasons: ** ** * An in-memory database can only have its journal_mode set to _OFF ** or _MEMORY. ** -** * The journal mode may not be changed while a transaction is active. +** * Temporary databases cannot have _WAL journalmode. ** ** The returned indicate the current (possibly updated) journal-mode. */ -SQLITE_PRIVATE int sqlite3PagerJournalMode(Pager *pPager, int eMode){ - assert( eMode==PAGER_JOURNALMODE_QUERY - || eMode==PAGER_JOURNALMODE_DELETE +SQLITE_PRIVATE int sqlite3PagerSetJournalMode(Pager *pPager, int eMode){ + u8 eOld = pPager->journalMode; /* Prior journalmode */ + +#ifdef SQLITE_DEBUG + /* The print_pager_state() routine is intended to be used by the debugger + ** only. We invoke it once here to suppress a compiler warning. 
*/ + print_pager_state(pPager); +#endif + + + /* The eMode parameter is always valid */ + assert( eMode==PAGER_JOURNALMODE_DELETE || eMode==PAGER_JOURNALMODE_TRUNCATE || eMode==PAGER_JOURNALMODE_PERSIST || eMode==PAGER_JOURNALMODE_OFF + || eMode==PAGER_JOURNALMODE_WAL || eMode==PAGER_JOURNALMODE_MEMORY ); - assert( PAGER_JOURNALMODE_QUERY<0 ); - if( eMode>=0 - && (!MEMDB || eMode==PAGER_JOURNALMODE_MEMORY - || eMode==PAGER_JOURNALMODE_OFF) - && !pPager->dbModified - && (!isOpen(pPager->jfd) || 0==pPager->journalOff) - ){ - if( isOpen(pPager->jfd) ){ + + /* This routine is only called from the OP_JournalMode opcode, and + ** the logic there will never allow a temporary file to be changed + ** to WAL mode. + */ + assert( pPager->tempFile==0 || eMode!=PAGER_JOURNALMODE_WAL ); + + /* Do not allow the journalmode of an in-memory database to be set to + ** anything other than MEMORY or OFF + */ + if( MEMDB ){ + assert( eOld==PAGER_JOURNALMODE_MEMORY || eOld==PAGER_JOURNALMODE_OFF ); + if( eMode!=PAGER_JOURNALMODE_MEMORY && eMode!=PAGER_JOURNALMODE_OFF ){ + eMode = eOld; + } + } + + if( eMode!=eOld ){ + + /* Change the journal mode. */ + assert( pPager->eState!=PAGER_ERROR ); + pPager->journalMode = (u8)eMode; + + /* When transitioning from TRUNCATE or PERSIST to any other journal + ** mode except WAL, unless the pager is in locking_mode=exclusive mode, + ** delete the journal file. + */ + assert( (PAGER_JOURNALMODE_TRUNCATE & 5)==1 ); + assert( (PAGER_JOURNALMODE_PERSIST & 5)==1 ); + assert( (PAGER_JOURNALMODE_DELETE & 5)==0 ); + assert( (PAGER_JOURNALMODE_MEMORY & 5)==4 ); + assert( (PAGER_JOURNALMODE_OFF & 5)==0 ); + assert( (PAGER_JOURNALMODE_WAL & 5)==5 ); + + assert( isOpen(pPager->fd) || pPager->exclusiveMode ); + if( !pPager->exclusiveMode && (eOld & 5)==1 && (eMode & 1)==0 ){ + + /* In this case we would like to delete the journal file. If it is + ** not possible, then that is not a problem. Deleting the journal file + ** here is an optimization only. + ** + ** Before deleting the journal file, obtain a RESERVED lock on the + ** database file. This ensures that the journal file is not deleted + ** while it is in use by some other client. + */ + sqlite3OsClose(pPager->jfd); + if( pPager->eLock>=RESERVED_LOCK ){ + sqlite3OsDelete(pPager->pVfs, pPager->zJournal, 0); + }else{ + int rc = SQLITE_OK; + int state = pPager->eState; + assert( state==PAGER_OPEN || state==PAGER_READER ); + if( state==PAGER_OPEN ){ + rc = sqlite3PagerSharedLock(pPager); + } + if( pPager->eState==PAGER_READER ){ + assert( rc==SQLITE_OK ); + rc = pagerLockDb(pPager, RESERVED_LOCK); + } + if( rc==SQLITE_OK ){ + sqlite3OsDelete(pPager->pVfs, pPager->zJournal, 0); + } + if( rc==SQLITE_OK && state==PAGER_READER ){ + pagerUnlockDb(pPager, SHARED_LOCK); + }else if( state==PAGER_OPEN ){ + pager_unlock(pPager); + } + assert( state==pPager->eState ); + } + }else if( eMode==PAGER_JOURNALMODE_OFF ){ sqlite3OsClose(pPager->jfd); } - assert( (PAGER_JOURNALMODE_TRUNCATE & 1)==1 ); - assert( (PAGER_JOURNALMODE_PERSIST & 1)==1 ); - assert( (PAGER_JOURNALMODE_DELETE & 1)==0 ); - assert( (PAGER_JOURNALMODE_MEMORY & 1)==0 ); - assert( (PAGER_JOURNALMODE_OFF & 1)==0 ); - if( (pPager->journalMode & 1)==1 && (eMode & 1)==0 - && !pPager->exclusiveMode ){ - sqlite3OsDelete(pPager->pVfs, pPager->zJournal, 0); - } - pPager->journalMode = (u8)eMode; - } + } + + /* Return the new journal mode */ + return (int)pPager->journalMode; +} + +/* +** Return the current journal mode.
+*/ +SQLITE_PRIVATE int sqlite3PagerGetJournalMode(Pager *pPager){ return (int)pPager->journalMode; } + +/* +** Return TRUE if the pager is in a state where it is OK to change the +** journalmode. Journalmode changes can only happen when the database +** is unmodified. +*/ +SQLITE_PRIVATE int sqlite3PagerOkToChangeJournalMode(Pager *pPager){ + assert( assert_pager_state(pPager) ); + if( pPager->eState>=PAGER_WRITER_CACHEMOD ) return 0; + if( NEVER(isOpen(pPager->jfd) && pPager->journalOff>0) ) return 0; + return 1; +} /* ** Get/set the size-limit used for persistent journal files. ** ** Setting the size limit to -1 means no limit is enforced. @@ -37820,10 +50873,11 @@ ** An attempt to set a limit smaller than -1 is a no-op. */ SQLITE_PRIVATE i64 sqlite3PagerJournalSizeLimit(Pager *pPager, i64 iLimit){ if( iLimit>=-1 ){ pPager->journalSizeLimit = iLimit; + sqlite3WalLimit(pPager->pWal, iLimit); } return pPager->journalSizeLimit; } /* @@ -37833,14 +50887,3672 @@ ** sqlite3BackupUpdate() only. */ SQLITE_PRIVATE sqlite3_backup **sqlite3PagerBackupPtr(Pager *pPager){ return &pPager->pBackup; } + +#ifndef SQLITE_OMIT_VACUUM +/* +** Unless this is an in-memory or temporary database, clear the pager cache. +*/ +SQLITE_PRIVATE void sqlite3PagerClearCache(Pager *pPager){ + if( !MEMDB && pPager->tempFile==0 ) pager_reset(pPager); +} +#endif + +#ifndef SQLITE_OMIT_WAL +/* +** This function is called when the user invokes "PRAGMA wal_checkpoint", +** "PRAGMA wal_blocking_checkpoint" or calls the sqlite3_wal_checkpoint() +** or wal_blocking_checkpoint() API functions. +** +** Parameter eMode is one of SQLITE_CHECKPOINT_PASSIVE, FULL or RESTART. +*/ +SQLITE_PRIVATE int sqlite3PagerCheckpoint(Pager *pPager, int eMode, int *pnLog, int *pnCkpt){ + int rc = SQLITE_OK; + if( pPager->pWal ){ + rc = sqlite3WalCheckpoint(pPager->pWal, eMode, + (eMode==SQLITE_CHECKPOINT_PASSIVE ? 0 : pPager->xBusyHandler), + pPager->pBusyHandlerArg, + pPager->ckptSyncFlags, pPager->pageSize, (u8 *)pPager->pTmpSpace, + pnLog, pnCkpt + ); + } + return rc; +} + +SQLITE_PRIVATE int sqlite3PagerWalCallback(Pager *pPager){ + return sqlite3WalCallback(pPager->pWal); +} + +/* +** Return true if the underlying VFS for the given pager supports the +** primitives necessary for write-ahead logging. +*/ +SQLITE_PRIVATE int sqlite3PagerWalSupported(Pager *pPager){ + const sqlite3_io_methods *pMethods = pPager->fd->pMethods; + return pPager->exclusiveMode || (pMethods->iVersion>=2 && pMethods->xShmMap); +} + +/* +** Attempt to take an exclusive lock on the database file. If a PENDING lock +** is obtained instead, immediately release it. +*/ +static int pagerExclusiveLock(Pager *pPager){ + int rc; /* Return code */ + + assert( pPager->eLock==SHARED_LOCK || pPager->eLock==EXCLUSIVE_LOCK ); + rc = pagerLockDb(pPager, EXCLUSIVE_LOCK); + if( rc!=SQLITE_OK ){ + /* If the attempt to grab the exclusive lock failed, release the + ** pending lock that may have been obtained instead. */ + pagerUnlockDb(pPager, SHARED_LOCK); + } + + return rc; +} + +/* +** Call sqlite3WalOpen() to open the WAL handle. If the pager is in +** exclusive-locking mode when this function is called, take an EXCLUSIVE +** lock on the database file and use heap-memory to store the wal-index +** in. Otherwise, use the normal shared-memory. 
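+**
+** Using heap memory for the wal-index is only safe here because the
+** EXCLUSIVE lock prevents any other connection from reading or writing
+** the database, so no other process ever needs to see this wal-index.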
+*/ +static int pagerOpenWal(Pager *pPager){ + int rc = SQLITE_OK; + + assert( pPager->pWal==0 && pPager->tempFile==0 ); + assert( pPager->eLock==SHARED_LOCK || pPager->eLock==EXCLUSIVE_LOCK ); + + /* If the pager is already in exclusive-mode, the WAL module will use + ** heap-memory for the wal-index instead of the VFS shared-memory + ** implementation. Take the exclusive lock now, before opening the WAL + ** file, to make sure this is safe. + */ + if( pPager->exclusiveMode ){ + rc = pagerExclusiveLock(pPager); + } + + /* Open the connection to the log file. If this operation fails, + ** (e.g. due to malloc() failure), return an error code. + */ + if( rc==SQLITE_OK ){ + rc = sqlite3WalOpen(pPager->pVfs, + pPager->fd, pPager->zWal, pPager->exclusiveMode, + pPager->journalSizeLimit, &pPager->pWal + ); + } + pagerFixMaplimit(pPager); + + return rc; +} + + +/* +** The caller must be holding a SHARED lock on the database file to call +** this function. +** +** If the pager passed as the first argument is open on a real database +** file (not a temp file or an in-memory database), and the WAL file +** is not already open, make an attempt to open it now. If successful, +** return SQLITE_OK. If an error occurs or the VFS used by the pager does +** not support the xShmXXX() methods, return an error code. *pbOpen is +** not modified in either case. +** +** If the pager is open on a temp-file (or in-memory database), or if +** the WAL file is already open, set *pbOpen to 1 and return SQLITE_OK +** without doing anything. +*/ +SQLITE_PRIVATE int sqlite3PagerOpenWal( + Pager *pPager, /* Pager object */ + int *pbOpen /* OUT: Set to true if call is a no-op */ +){ + int rc = SQLITE_OK; /* Return code */ + + assert( assert_pager_state(pPager) ); + assert( pPager->eState==PAGER_OPEN || pbOpen ); + assert( pPager->eState==PAGER_READER || !pbOpen ); + assert( pbOpen==0 || *pbOpen==0 ); + assert( pbOpen!=0 || (!pPager->tempFile && !pPager->pWal) ); + + if( !pPager->tempFile && !pPager->pWal ){ + if( !sqlite3PagerWalSupported(pPager) ) return SQLITE_CANTOPEN; + + /* Close any rollback journal previously open */ + sqlite3OsClose(pPager->jfd); + + rc = pagerOpenWal(pPager); + if( rc==SQLITE_OK ){ + pPager->journalMode = PAGER_JOURNALMODE_WAL; + pPager->eState = PAGER_OPEN; + } + }else{ + *pbOpen = 1; + } + + return rc; +} + +/* +** This function is called to close the connection to the log file prior +** to switching from WAL to rollback mode. +** +** Before closing the log file, this function attempts to take an +** EXCLUSIVE lock on the database file. If this cannot be obtained, an +** error (SQLITE_BUSY) is returned and the log connection is not closed. +** If successful, the EXCLUSIVE lock is not released before returning. +*/ +SQLITE_PRIVATE int sqlite3PagerCloseWal(Pager *pPager){ + int rc = SQLITE_OK; + + assert( pPager->journalMode==PAGER_JOURNALMODE_WAL ); + + /* If the log file is not already open, but does exist in the file-system, + ** it may need to be checkpointed before the connection can switch to + ** rollback mode. Open it now so this can happen. + */ + if( !pPager->pWal ){ + int logexists = 0; + rc = pagerLockDb(pPager, SHARED_LOCK); + if( rc==SQLITE_OK ){ + rc = sqlite3OsAccess( + pPager->pVfs, pPager->zWal, SQLITE_ACCESS_EXISTS, &logexists + ); + } + if( rc==SQLITE_OK && logexists ){ + rc = pagerOpenWal(pPager); + } + } + + /* Checkpoint and close the log. Because an EXCLUSIVE lock is held on + ** the database file, the log and log-summary files will be deleted. 
+ */ + if( rc==SQLITE_OK && pPager->pWal ){ + rc = pagerExclusiveLock(pPager); + if( rc==SQLITE_OK ){ + rc = sqlite3WalClose(pPager->pWal, pPager->ckptSyncFlags, + pPager->pageSize, (u8*)pPager->pTmpSpace); + pPager->pWal = 0; + pagerFixMaplimit(pPager); + } + } + return rc; +} + +#ifdef SQLITE_ENABLE_SNAPSHOT +/* +** If this is a WAL database, obtain a snapshot handle for the snapshot +** currently open. Otherwise, return an error. +*/ +SQLITE_PRIVATE int sqlite3PagerSnapshotGet(Pager *pPager, sqlite3_snapshot **ppSnapshot){ + int rc = SQLITE_ERROR; + if( pPager->pWal ){ + rc = sqlite3WalSnapshotGet(pPager->pWal, ppSnapshot); + } + return rc; +} + +/* +** If this is a WAL database, store a pointer to pSnapshot. Next time a +** read transaction is opened, attempt to read from the snapshot it +** identifies. If this is not a WAL database, return an error. +*/ +SQLITE_PRIVATE int sqlite3PagerSnapshotOpen(Pager *pPager, sqlite3_snapshot *pSnapshot){ + int rc = SQLITE_OK; + if( pPager->pWal ){ + sqlite3WalSnapshotOpen(pPager->pWal, pSnapshot); + }else{ + rc = SQLITE_ERROR; + } + return rc; +} +#endif /* SQLITE_ENABLE_SNAPSHOT */ +#endif /* !SQLITE_OMIT_WAL */ + +#ifdef SQLITE_ENABLE_ZIPVFS +/* +** A read-lock must be held on the pager when this function is called. If +** the pager is in WAL mode and the WAL file currently contains one or more +** frames, return the size in bytes of the page images stored within the +** WAL frames. Otherwise, if this is not a WAL database or the WAL file +** is empty, return 0. +*/ +SQLITE_PRIVATE int sqlite3PagerWalFramesize(Pager *pPager){ + assert( pPager->eState>=PAGER_READER ); + return sqlite3WalFramesize(pPager->pWal); +} +#endif + #endif /* SQLITE_OMIT_DISKIO */ /************** End of pager.c ***********************************************/ +/************** Begin file wal.c *********************************************/ +/* +** 2010 February 1 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** +** This file contains the implementation of a write-ahead log (WAL) used in +** "journal_mode=WAL" mode. +** +** WRITE-AHEAD LOG (WAL) FILE FORMAT +** +** A WAL file consists of a header followed by zero or more "frames". +** Each frame records the revised content of a single page from the +** database file. All changes to the database are recorded by writing +** frames into the WAL. Transactions commit when a frame is written that +** contains a commit marker. A single WAL can and usually does record +** multiple transactions. Periodically, the content of the WAL is +** transferred back into the database file in an operation called a +** "checkpoint". +** +** A single WAL file can be used multiple times. In other words, the +** WAL can fill up with frames and then be checkpointed and then new +** frames can overwrite the old ones. A WAL always grows from beginning +** toward the end. Checksums and counters attached to each frame are +** used to determine which frames within the WAL are valid and which +** are leftovers from prior checkpoints. +** +** The WAL header is 32 bytes in size and consists of the following eight +** big-endian 32-bit unsigned integer values: +** +** 0: Magic number. 0x377f0682 or 0x377f0683 +** 4: File format version. 
Currently 3007000 +** 8: Database page size. Example: 1024 +** 12: Checkpoint sequence number +** 16: Salt-1, random integer incremented with each checkpoint +** 20: Salt-2, a different random integer changing with each ckpt +** 24: Checksum-1 (first part of checksum for first 24 bytes of header). +** 28: Checksum-2 (second part of checksum for first 24 bytes of header). +** +** Immediately following the wal-header are zero or more frames. Each +** frame consists of a 24-byte frame-header followed by a <page-size> bytes +** of page data. The frame-header is six big-endian 32-bit unsigned +** integer values, as follows: +** +** 0: Page number. +** 4: For commit records, the size of the database image in pages +** after the commit. For all other records, zero. +** 8: Salt-1 (copied from the header) +** 12: Salt-2 (copied from the header) +** 16: Checksum-1. +** 20: Checksum-2. +** +** A frame is considered valid if and only if the following conditions are +** true: +** +** (1) The salt-1 and salt-2 values in the frame-header match +** salt values in the wal-header +** +** (2) The checksum values in the final 8 bytes of the frame-header +** exactly match the checksum computed consecutively on the +** WAL header and the first 8 bytes and the content of all frames +** up to and including the current frame. +** +** The checksum is computed using 32-bit big-endian integers if the +** magic number in the first 4 bytes of the WAL is 0x377f0683 and it +** is computed using little-endian if the magic number is 0x377f0682. +** The checksum values are always stored in the frame header in a +** big-endian format regardless of which byte order is used to compute +** the checksum. The checksum is computed by interpreting the input as +** an even number of unsigned 32-bit integers: x[0] through x[N]. The +** algorithm used for the checksum is as follows: +** +** for i from 0 to n-1 step 2: +** s0 += x[i] + s1; +** s1 += x[i+1] + s0; +** endfor +** +** Note that s0 and s1 are both weighted checksums using fibonacci weights +** in reverse order (the largest fibonacci weight occurs on the first element +** of the sequence being summed.) The s1 value spans all 32-bit +** terms of the sequence whereas s0 omits the final term. +** +** On a checkpoint, the WAL is first VFS.xSync-ed, then valid content of the +** WAL is transferred into the database, then the database is VFS.xSync-ed. +** The VFS.xSync operations serve as write barriers - all writes launched +** before the xSync must complete before any write that launches after the +** xSync begins. +** +** After each checkpoint, the salt-1 value is incremented and the salt-2 +** value is randomized. This prevents old and new frames in the WAL from +** being considered valid at the same time and being checkpointing together +** following a crash. +** +** READER ALGORITHM +** +** To read a page from the database (call it page number P), a reader +** first checks the WAL to see if it contains page P. If so, then the +** last valid instance of page P that is a followed by a commit frame +** or is a commit frame itself becomes the value read. If the WAL +** contains no copies of page P that are valid and which are a commit +** frame or are followed by a commit frame, then page P is read from +** the database file. +** +** To start a read transaction, the reader records the index of the last +** valid frame in the WAL. The reader uses this recorded "mxFrame" value +** for all subsequent read operations. 
New transactions can be appended +** to the WAL, but as long as the reader uses its original mxFrame value +** and ignores the newly appended content, it will see a consistent snapshot +** of the database from a single point in time. This technique allows +** multiple concurrent readers to view different versions of the database +** content simultaneously. +** +** The reader algorithm in the previous paragraphs works correctly, but +** because frames for page P can appear anywhere within the WAL, the +** reader has to scan the entire WAL looking for page P frames. If the +** WAL is large (multiple megabytes is typical) that scan can be slow, +** and read performance suffers. To overcome this problem, a separate +** data structure called the wal-index is maintained to expedite the +** search for frames of a particular page. +** +** WAL-INDEX FORMAT +** +** Conceptually, the wal-index is shared memory, though VFS implementations +** might choose to implement the wal-index using a mmapped file. Because +** the wal-index is shared memory, SQLite does not support journal_mode=WAL +** on a network filesystem. All users of the database must be able to +** share memory. +** +** The wal-index is transient. After a crash, the wal-index can (and should +** be) reconstructed from the original WAL file. In fact, the VFS is required +** to either truncate or zero the header of the wal-index when the last +** connection to it closes. Because the wal-index is transient, it can +** use an architecture-specific format; it does not have to be cross-platform. +** Hence, unlike the database and WAL file formats which store all values +** as big endian, the wal-index can store multi-byte values in the native +** byte order of the host computer. +** +** The purpose of the wal-index is to answer this question quickly: Given +** a page number P and a maximum frame index M, return the index of the +** last frame in the wal before frame M for page P in the WAL, or return +** NULL if there are no frames for page P in the WAL prior to M. +** +** The wal-index consists of a header region, followed by an one or +** more index blocks. +** +** The wal-index header contains the total number of frames within the WAL +** in the mxFrame field. +** +** Each index block except for the first contains information on +** HASHTABLE_NPAGE frames. The first index block contains information on +** HASHTABLE_NPAGE_ONE frames. The values of HASHTABLE_NPAGE_ONE and +** HASHTABLE_NPAGE are selected so that together the wal-index header and +** first index block are the same size as all other index blocks in the +** wal-index. +** +** Each index block contains two sections, a page-mapping that contains the +** database page number associated with each wal frame, and a hash-table +** that allows readers to query an index block for a specific page number. +** The page-mapping is an array of HASHTABLE_NPAGE (or HASHTABLE_NPAGE_ONE +** for the first index block) 32-bit page numbers. The first entry in the +** first index-block contains the database page number corresponding to the +** first frame in the WAL file. The first entry in the second index block +** in the WAL file corresponds to the (HASHTABLE_NPAGE_ONE+1)th frame in +** the log, and so on. +** +** The last index block in a wal-index usually contains less than the full +** complement of HASHTABLE_NPAGE (or HASHTABLE_NPAGE_ONE) page-numbers, +** depending on the contents of the WAL file. 
This does not change the +** allocated size of the page-mapping array - the page-mapping array merely +** contains unused entries. +** +** Even without using the hash table, the last frame for page P +** can be found by scanning the page-mapping sections of each index block +** starting with the last index block and moving toward the first, and +** within each index block, starting at the end and moving toward the +** beginning. The first entry that equals P corresponds to the frame +** holding the content for that page. +** +** The hash table consists of HASHTABLE_NSLOT 16-bit unsigned integers. +** HASHTABLE_NSLOT = 2*HASHTABLE_NPAGE, and there is one entry in the +** hash table for each page number in the mapping section, so the hash +** table is never more than half full. The expected number of collisions +** prior to finding a match is 1. Each entry of the hash table is an +** 1-based index of an entry in the mapping section of the same +** index block. Let K be the 1-based index of the largest entry in +** the mapping section. (For index blocks other than the last, K will +** always be exactly HASHTABLE_NPAGE (4096) and for the last index block +** K will be (mxFrame%HASHTABLE_NPAGE).) Unused slots of the hash table +** contain a value of 0. +** +** To look for page P in the hash table, first compute a hash iKey on +** P as follows: +** +** iKey = (P * 383) % HASHTABLE_NSLOT +** +** Then start scanning entries of the hash table, starting with iKey +** (wrapping around to the beginning when the end of the hash table is +** reached) until an unused hash slot is found. Let the first unused slot +** be at index iUnused. (iUnused might be less than iKey if there was +** wrap-around.) Because the hash table is never more than half full, +** the search is guaranteed to eventually hit an unused entry. Let +** iMax be the value between iKey and iUnused, closest to iUnused, +** where aHash[iMax]==P. If there is no iMax entry (if there exists +** no hash slot such that aHash[i]==p) then page P is not in the +** current index block. Otherwise the iMax-th mapping entry of the +** current index block corresponds to the last entry that references +** page P. +** +** A hash search begins with the last index block and moves toward the +** first index block, looking for entries corresponding to page P. On +** average, only two or three slots in each index block need to be +** examined in order to either find the last entry for page P, or to +** establish that no such entry exists in the block. Each index block +** holds over 4000 entries. So two or three index blocks are sufficient +** to cover a typical 10 megabyte WAL file, assuming 1K pages. 8 or 10 +** comparisons (on average) suffice to either locate a frame in the +** WAL or to establish that the frame does not exist in the WAL. This +** is much faster than scanning the entire 10MB WAL. +** +** Note that entries are added in order of increasing K. Hence, one +** reader might be using some value K0 and a second reader that started +** at a later time (after additional transactions were added to the WAL +** and to the wal-index) might be using a different value K1, where K1>K0. +** Both readers can use the same hash table and mapping section to get +** the correct result. 
There may be entries in the hash table with +** K>K0 but to the first reader, those entries will appear to be unused +** slots in the hash table and so the first reader will get an answer as +** if no values greater than K0 had ever been inserted into the hash table +** in the first place - which is what reader one wants. Meanwhile, the +** second reader using K1 will see additional values that were inserted +** later, which is exactly what reader two wants. +** +** When a rollback occurs, the value of K is decreased. Hash table entries +** that correspond to frames greater than the new K value are removed +** from the hash table at this point. +*/ +#ifndef SQLITE_OMIT_WAL + +/* #include "wal.h" */ + +/* +** Trace output macros +*/ +#if defined(SQLITE_TEST) && defined(SQLITE_DEBUG) +SQLITE_PRIVATE int sqlite3WalTrace = 0; +# define WALTRACE(X) if(sqlite3WalTrace) sqlite3DebugPrintf X +#else +# define WALTRACE(X) +#endif + +/* +** The maximum (and only) versions of the wal and wal-index formats +** that may be interpreted by this version of SQLite. +** +** If a client begins recovering a WAL file and finds that (a) the checksum +** values in the wal-header are correct and (b) the version field is not +** WAL_MAX_VERSION, recovery fails and SQLite returns SQLITE_CANTOPEN. +** +** Similarly, if a client successfully reads a wal-index header (i.e. the +** checksum test is successful) and finds that the version field is not +** WALINDEX_MAX_VERSION, then no read-transaction is opened and SQLite +** returns SQLITE_CANTOPEN. +*/ +#define WAL_MAX_VERSION 3007000 +#define WALINDEX_MAX_VERSION 3007000 + +/* +** Indices of various locking bytes. WAL_NREADER is the number +** of available reader locks and should be at least 3. The default +** is SQLITE_SHM_NLOCK==8 and WAL_NREADER==5. +*/ +#define WAL_WRITE_LOCK 0 +#define WAL_ALL_BUT_WRITE 1 +#define WAL_CKPT_LOCK 1 +#define WAL_RECOVER_LOCK 2 +#define WAL_READ_LOCK(I) (3+(I)) +#define WAL_NREADER (SQLITE_SHM_NLOCK-3) + + +/* Object declarations */ +typedef struct WalIndexHdr WalIndexHdr; +typedef struct WalIterator WalIterator; +typedef struct WalCkptInfo WalCkptInfo; + + +/* +** The following object holds a copy of the wal-index header content. +** +** The actual header in the wal-index consists of two copies of this +** object followed by one instance of the WalCkptInfo object. +** For all versions of SQLite through 3.10.0 and probably beyond, +** the locking bytes (WalCkptInfo.aLock) start at offset 120 and +** the total header size is 136 bytes. +** +** The szPage value can be any power of 2 between 512 and 32768, inclusive. +** Or it can be 1 to represent a 65536-byte page. The latter case was +** added in 3.7.1 when support for 64K pages was added. +*/ +struct WalIndexHdr { + u32 iVersion; /* Wal-index version */ + u32 unused; /* Unused (padding) field */ + u32 iChange; /* Counter incremented each transaction */ + u8 isInit; /* 1 when initialized */ + u8 bigEndCksum; /* True if checksums in WAL are big-endian */ + u16 szPage; /* Database page size in bytes. 1==64K */ + u32 mxFrame; /* Index of last valid frame in the WAL */ + u32 nPage; /* Size of database in pages */ + u32 aFrameCksum[2]; /* Checksum of last frame in log */ + u32 aSalt[2]; /* Two salt values copied from WAL header */ + u32 aCksum[2]; /* Checksum over all prior fields */ +}; + +/* +** A copy of the following object occurs in the wal-index immediately +** following the second copy of the WalIndexHdr. This object stores +** information used by checkpoint. 
+** +** nBackfill is the number of frames in the WAL that have been written +** back into the database. (We call the act of moving content from WAL to +** database "backfilling".) The nBackfill number is never greater than +** WalIndexHdr.mxFrame. nBackfill can only be increased by threads +** holding the WAL_CKPT_LOCK lock (which includes a recovery thread). +** However, a WAL_WRITE_LOCK thread can move the value of nBackfill from +** mxFrame back to zero when the WAL is reset. +** +** nBackfillAttempted is the largest value of nBackfill that a checkpoint +** has attempted to achieve. Normally nBackfill==nBackfillAttempted, however +** the nBackfillAttempted is set before any backfilling is done and the +** nBackfill is only set after all backfilling completes. So if a checkpoint +** crashes, nBackfillAttempted might be larger than nBackfill. The +** WalIndexHdr.mxFrame must never be less than nBackfillAttempted. +** +** The aLock[] field is a set of bytes used for locking. These bytes should +** never be read or written. +** +** There is one entry in aReadMark[] for each reader lock. If a reader +** holds read-lock K, then the value in aReadMark[K] is no greater than +** the mxFrame for that reader. The value READMARK_NOT_USED (0xffffffff) +** for any aReadMark[] means that entry is unused. aReadMark[0] is +** a special case; its value is never used and it exists as a place-holder +** to avoid having to offset aReadMark[] indexes by one. Readers holding +** WAL_READ_LOCK(0) always ignore the entire WAL and read all content +** directly from the database. +** +** The value of aReadMark[K] may only be changed by a thread that +** is holding an exclusive lock on WAL_READ_LOCK(K). Thus, the value of +** aReadMark[K] cannot be changed while a reader is using that mark, +** since the reader will be holding a shared lock on WAL_READ_LOCK(K). +** +** The checkpointer may only transfer frames from WAL to database where +** the frame numbers are less than or equal to every aReadMark[] that is +** in use (that is, every aReadMark[j] for which there is a corresponding +** WAL_READ_LOCK(j)). New readers (usually) pick the aReadMark[] with the +** largest value and will increase an unused aReadMark[] to mxFrame if there +** is not already an aReadMark[] equal to mxFrame. The exception to the +** previous sentence is when nBackfill equals mxFrame (meaning that everything +** in the WAL has been backfilled into the database) then new readers +** will choose aReadMark[0] which has value 0 and hence such readers will +** get all their content directly from the database file and ignore +** the WAL. +** +** Writers normally append new frames to the end of the WAL. However, +** if nBackfill equals mxFrame (meaning that all WAL content has been +** written back into the database) and if no readers are using the WAL +** (in other words, if there are no WAL_READ_LOCK(i) where i>0) then +** the writer will first "reset" the WAL back to the beginning and start +** writing new content beginning at frame 1. +** +** We assume that 32-bit loads are atomic and so no locks are needed in +** order to read from any aReadMark[] entries.
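+**
+** As an illustrative example (not a requirement of the format): if
+** mxFrame==1000 and nBackfill==750, a new reader will normally find or
+** create an aReadMark[] slot equal to 1000 and take a shared lock on the
+** corresponding WAL_READ_LOCK(), while a concurrent checkpointer may only
+** backfill frames whose numbers do not exceed the smallest aReadMark[]
+** currently in use.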
+*/ +struct WalCkptInfo { + u32 nBackfill; /* Number of WAL frames backfilled into DB */ + u32 aReadMark[WAL_NREADER]; /* Reader marks */ + u8 aLock[SQLITE_SHM_NLOCK]; /* Reserved space for locks */ + u32 nBackfillAttempted; /* WAL frames perhaps written, or maybe not */ + u32 notUsed0; /* Available for future enhancements */ +}; +#define READMARK_NOT_USED 0xffffffff + + +/* A block of WALINDEX_LOCK_RESERVED bytes beginning at +** WALINDEX_LOCK_OFFSET is reserved for locks. Since some systems +** only support mandatory file-locks, we do not read or write data +** from the region of the file on which locks are applied. +*/ +#define WALINDEX_LOCK_OFFSET (sizeof(WalIndexHdr)*2+offsetof(WalCkptInfo,aLock)) +#define WALINDEX_HDR_SIZE (sizeof(WalIndexHdr)*2+sizeof(WalCkptInfo)) + +/* Size of header before each frame in wal */ +#define WAL_FRAME_HDRSIZE 24 + +/* Size of write ahead log header, including checksum. */ +/* #define WAL_HDRSIZE 24 */ +#define WAL_HDRSIZE 32 + +/* WAL magic value. Either this value, or the same value with the least +** significant bit also set (WAL_MAGIC | 0x00000001) is stored in 32-bit +** big-endian format in the first 4 bytes of a WAL file. +** +** If the LSB is set, then the checksums for each frame within the WAL +** file are calculated by treating all data as an array of 32-bit +** big-endian words. Otherwise, they are calculated by interpreting +** all data as 32-bit little-endian words. +*/ +#define WAL_MAGIC 0x377f0682 + +/* +** Return the offset of frame iFrame in the write-ahead log file, +** assuming a database page size of szPage bytes. The offset returned +** is to the start of the write-ahead log frame-header. +*/ +#define walFrameOffset(iFrame, szPage) ( \ + WAL_HDRSIZE + ((iFrame)-1)*(i64)((szPage)+WAL_FRAME_HDRSIZE) \ +) + +/* +** An open write-ahead log file is represented by an instance of the +** following object. +*/ +struct Wal { + sqlite3_vfs *pVfs; /* The VFS used to create pDbFd */ + sqlite3_file *pDbFd; /* File handle for the database file */ + sqlite3_file *pWalFd; /* File handle for WAL file */ + u32 iCallback; /* Value to pass to log callback (or 0) */ + i64 mxWalSize; /* Truncate WAL to this size upon reset */ + int nWiData; /* Size of array apWiData */ + int szFirstBlock; /* Size of first block written to WAL file */ + volatile u32 **apWiData; /* Pointer to wal-index content in memory */ + u32 szPage; /* Database page size */ + i16 readLock; /* Which read lock is being held. -1 for none */ + u8 syncFlags; /* Flags to use to sync header writes */ + u8 exclusiveMode; /* Non-zero if connection is in exclusive mode */ + u8 writeLock; /* True if in a write transaction */ + u8 ckptLock; /* True if holding a checkpoint lock */ + u8 readOnly; /* WAL_RDWR, WAL_RDONLY, or WAL_SHM_RDONLY */ + u8 truncateOnCommit; /* True to truncate WAL file on commit */ + u8 syncHeader; /* Fsync the WAL header if true */ + u8 padToSectorBoundary; /* Pad transactions out to the next sector */ + WalIndexHdr hdr; /* Wal-index header for current transaction */ + u32 minFrame; /* Ignore wal frames before this one */ + u32 iReCksum; /* On commit, recalculate checksums from here */ + const char *zWalName; /* Name of WAL file */ + u32 nCkpt; /* Checkpoint sequence counter in the wal-header */ +#ifdef SQLITE_DEBUG + u8 lockError; /* True if a locking error has occurred */ +#endif +#ifdef SQLITE_ENABLE_SNAPSHOT + WalIndexHdr *pSnapshot; /* Start transaction here if not NULL */ +#endif +}; + +/* +** Candidate values for Wal.exclusiveMode. 
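+**
+** WAL_NORMAL_MODE uses the VFS shared-memory primitives for the wal-index
+** and takes real shared-memory locks. WAL_EXCLUSIVE_MODE is used when the
+** pager holds an exclusive lock on the database file, so the walLock
+** routines below become no-ops. WAL_HEAPMEMORY_MODE additionally keeps the
+** wal-index in connection-private heap memory rather than shared memory.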
+*/ +#define WAL_NORMAL_MODE 0 +#define WAL_EXCLUSIVE_MODE 1 +#define WAL_HEAPMEMORY_MODE 2 + +/* +** Possible values for WAL.readOnly +*/ +#define WAL_RDWR 0 /* Normal read/write connection */ +#define WAL_RDONLY 1 /* The WAL file is readonly */ +#define WAL_SHM_RDONLY 2 /* The SHM file is readonly */ + +/* +** Each page of the wal-index mapping contains a hash-table made up of +** an array of HASHTABLE_NSLOT elements of the following type. +*/ +typedef u16 ht_slot; + +/* +** This structure is used to implement an iterator that loops through +** all frames in the WAL in database page order. Where two or more frames +** correspond to the same database page, the iterator visits only the +** frame most recently written to the WAL (in other words, the frame with +** the largest index). +** +** The internals of this structure are only accessed by: +** +** walIteratorInit() - Create a new iterator, +** walIteratorNext() - Step an iterator, +** walIteratorFree() - Free an iterator. +** +** This functionality is used by the checkpoint code (see walCheckpoint()). +*/ +struct WalIterator { + int iPrior; /* Last result returned from the iterator */ + int nSegment; /* Number of entries in aSegment[] */ + struct WalSegment { + int iNext; /* Next slot in aIndex[] not yet returned */ + ht_slot *aIndex; /* i0, i1, i2... such that aPgno[iN] ascend */ + u32 *aPgno; /* Array of page numbers. */ + int nEntry; /* Nr. of entries in aPgno[] and aIndex[] */ + int iZero; /* Frame number associated with aPgno[0] */ + } aSegment[1]; /* One for every 32KB page in the wal-index */ +}; + +/* +** Define the parameters of the hash tables in the wal-index file. There +** is a hash-table following every HASHTABLE_NPAGE page numbers in the +** wal-index. +** +** Changing any of these constants will alter the wal-index format and +** create incompatibilities. +*/ +#define HASHTABLE_NPAGE 4096 /* Must be power of 2 */ +#define HASHTABLE_HASH_1 383 /* Should be prime */ +#define HASHTABLE_NSLOT (HASHTABLE_NPAGE*2) /* Must be a power of 2 */ + +/* +** The block of page numbers associated with the first hash-table in a +** wal-index is smaller than usual. This is so that there is a complete +** hash-table on each aligned 32KB page of the wal-index. +*/ +#define HASHTABLE_NPAGE_ONE (HASHTABLE_NPAGE - (WALINDEX_HDR_SIZE/sizeof(u32))) + +/* The wal-index is divided into pages of WALINDEX_PGSZ bytes each. */ +#define WALINDEX_PGSZ ( \ + sizeof(ht_slot)*HASHTABLE_NSLOT + HASHTABLE_NPAGE*sizeof(u32) \ +) + +/* +** Obtain a pointer to the iPage'th page of the wal-index. The wal-index +** is broken into pages of WALINDEX_PGSZ bytes. Wal-index pages are +** numbered from zero. +** +** If this call is successful, *ppPage is set to point to the wal-index +** page and SQLITE_OK is returned. If an error (an OOM or VFS error) occurs, +** then an SQLite error code is returned and *ppPage is set to 0. 
+*/ +static int walIndexPage(Wal *pWal, int iPage, volatile u32 **ppPage){ + int rc = SQLITE_OK; + + /* Enlarge the pWal->apWiData[] array if required */ + if( pWal->nWiData<=iPage ){ + int nByte = sizeof(u32*)*(iPage+1); + volatile u32 **apNew; + apNew = (volatile u32 **)sqlite3_realloc64((void *)pWal->apWiData, nByte); + if( !apNew ){ + *ppPage = 0; + return SQLITE_NOMEM; + } + memset((void*)&apNew[pWal->nWiData], 0, + sizeof(u32*)*(iPage+1-pWal->nWiData)); + pWal->apWiData = apNew; + pWal->nWiData = iPage+1; + } + + /* Request a pointer to the required page from the VFS */ + if( pWal->apWiData[iPage]==0 ){ + if( pWal->exclusiveMode==WAL_HEAPMEMORY_MODE ){ + pWal->apWiData[iPage] = (u32 volatile *)sqlite3MallocZero(WALINDEX_PGSZ); + if( !pWal->apWiData[iPage] ) rc = SQLITE_NOMEM; + }else{ + rc = sqlite3OsShmMap(pWal->pDbFd, iPage, WALINDEX_PGSZ, + pWal->writeLock, (void volatile **)&pWal->apWiData[iPage] + ); + if( rc==SQLITE_READONLY ){ + pWal->readOnly |= WAL_SHM_RDONLY; + rc = SQLITE_OK; + } + } + } + + *ppPage = pWal->apWiData[iPage]; + assert( iPage==0 || *ppPage || rc!=SQLITE_OK ); + return rc; +} + +/* +** Return a pointer to the WalCkptInfo structure in the wal-index. +*/ +static volatile WalCkptInfo *walCkptInfo(Wal *pWal){ + assert( pWal->nWiData>0 && pWal->apWiData[0] ); + return (volatile WalCkptInfo*)&(pWal->apWiData[0][sizeof(WalIndexHdr)/2]); +} + +/* +** Return a pointer to the WalIndexHdr structure in the wal-index. +*/ +static volatile WalIndexHdr *walIndexHdr(Wal *pWal){ + assert( pWal->nWiData>0 && pWal->apWiData[0] ); + return (volatile WalIndexHdr*)pWal->apWiData[0]; +} + +/* +** The argument to this macro must be of type u32. On a little-endian +** architecture, it returns the u32 value that results from interpreting +** the 4 bytes as a big-endian value. On a big-endian architecture, it +** returns the value that would be produced by interpreting the 4 bytes +** of the input value as a little-endian integer. +*/ +#define BYTESWAP32(x) ( \ + (((x)&0x000000FF)<<24) + (((x)&0x0000FF00)<<8) \ + + (((x)&0x00FF0000)>>8) + (((x)&0xFF000000)>>24) \ +) + +/* +** Generate or extend an 8 byte checksum based on the data in +** array aByte[] and the initial values of aIn[0] and aIn[1] (or +** initial values of 0 and 0 if aIn==NULL). +** +** The checksum is written back into aOut[] before returning. +** +** nByte must be a positive multiple of 8. +*/ +static void walChecksumBytes( + int nativeCksum, /* True for native byte-order, false for non-native */ + u8 *a, /* Content to be checksummed */ + int nByte, /* Bytes of content in a[]. Must be a multiple of 8. */ + const u32 *aIn, /* Initial checksum value input */ + u32 *aOut /* OUT: Final checksum value output */ +){ + u32 s1, s2; + u32 *aData = (u32 *)a; + u32 *aEnd = (u32 *)&a[nByte]; + + if( aIn ){ + s1 = aIn[0]; + s2 = aIn[1]; + }else{ + s1 = s2 = 0; + } + + assert( nByte>=8 ); + assert( (nByte&0x00000007)==0 ); + + if( nativeCksum ){ + do { + s1 += *aData++ + s2; + s2 += *aData++ + s1; + }while( aData<aEnd ); + }else{ + do { + s1 += BYTESWAP32(aData[0]) + s2; + s2 += BYTESWAP32(aData[1]) + s1; + aData += 2; + }while( aData<aEnd ); + } + + aOut[0] = s1; + aOut[1] = s2; +} + +static void walShmBarrier(Wal *pWal){ + if( pWal->exclusiveMode!=WAL_HEAPMEMORY_MODE ){ + sqlite3OsShmBarrier(pWal->pDbFd); + } +} + +/* +** Write the header information in pWal->hdr into the wal-index. +** +** The checksum on pWal->hdr is updated before it is written. 
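+**
+** The header is stored twice in the wal-index. The copy at aHdr[1] is
+** written first, then a memory barrier, then the copy at aHdr[0], so a
+** reader that finds the two copies identical knows it has read a
+** consistent header.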
+*/ +static void walIndexWriteHdr(Wal *pWal){ + volatile WalIndexHdr *aHdr = walIndexHdr(pWal); + const int nCksum = offsetof(WalIndexHdr, aCksum); + + assert( pWal->writeLock ); + pWal->hdr.isInit = 1; + pWal->hdr.iVersion = WALINDEX_MAX_VERSION; + walChecksumBytes(1, (u8*)&pWal->hdr, nCksum, 0, pWal->hdr.aCksum); + memcpy((void*)&aHdr[1], (const void*)&pWal->hdr, sizeof(WalIndexHdr)); + walShmBarrier(pWal); + memcpy((void*)&aHdr[0], (const void*)&pWal->hdr, sizeof(WalIndexHdr)); +} + +/* +** This function encodes a single frame header and writes it to a buffer +** supplied by the caller. A frame-header is made up of a series of +** 4-byte big-endian integers, as follows: +** +** 0: Page number. +** 4: For commit records, the size of the database image in pages +** after the commit. For all other records, zero. +** 8: Salt-1 (copied from the wal-header) +** 12: Salt-2 (copied from the wal-header) +** 16: Checksum-1. +** 20: Checksum-2. +*/ +static void walEncodeFrame( + Wal *pWal, /* The write-ahead log */ + u32 iPage, /* Database page number for frame */ + u32 nTruncate, /* New db size (or 0 for non-commit frames) */ + u8 *aData, /* Pointer to page data */ + u8 *aFrame /* OUT: Write encoded frame here */ +){ + int nativeCksum; /* True for native byte-order checksums */ + u32 *aCksum = pWal->hdr.aFrameCksum; + assert( WAL_FRAME_HDRSIZE==24 ); + sqlite3Put4byte(&aFrame[0], iPage); + sqlite3Put4byte(&aFrame[4], nTruncate); + if( pWal->iReCksum==0 ){ + memcpy(&aFrame[8], pWal->hdr.aSalt, 8); + + nativeCksum = (pWal->hdr.bigEndCksum==SQLITE_BIGENDIAN); + walChecksumBytes(nativeCksum, aFrame, 8, aCksum, aCksum); + walChecksumBytes(nativeCksum, aData, pWal->szPage, aCksum, aCksum); + + sqlite3Put4byte(&aFrame[16], aCksum[0]); + sqlite3Put4byte(&aFrame[20], aCksum[1]); + }else{ + memset(&aFrame[8], 0, 16); + } +} + +/* +** Check to see if the frame with header in aFrame[] and content +** in aData[] is valid. If it is a valid frame, fill *piPage and +** *pnTruncate and return true. Return false if the frame is not valid. +*/ +static int walDecodeFrame( + Wal *pWal, /* The write-ahead log */ + u32 *piPage, /* OUT: Database page number for frame */ + u32 *pnTruncate, /* OUT: New db size (or 0 if not commit) */ + u8 *aData, /* Pointer to page data (for checksum) */ + u8 *aFrame /* Frame data */ +){ + int nativeCksum; /* True for native byte-order checksums */ + u32 *aCksum = pWal->hdr.aFrameCksum; + u32 pgno; /* Page number of the frame */ + assert( WAL_FRAME_HDRSIZE==24 ); + + /* A frame is only valid if the salt values in the frame-header + ** match the salt values in the wal-header. + */ + if( memcmp(&pWal->hdr.aSalt, &aFrame[8], 8)!=0 ){ + return 0; + } + + /* A frame is only valid if the page number is greater than zero. + */ + pgno = sqlite3Get4byte(&aFrame[0]); + if( pgno==0 ){ + return 0; + } + + /* A frame is only valid if a checksum of the WAL header, + ** all prior frames, the first 16 bytes of this frame-header, + ** and the frame-data matches the checksum in the last 8 + ** bytes of this frame-header. + */ + nativeCksum = (pWal->hdr.bigEndCksum==SQLITE_BIGENDIAN); + walChecksumBytes(nativeCksum, aFrame, 8, aCksum, aCksum); + walChecksumBytes(nativeCksum, aData, pWal->szPage, aCksum, aCksum); + if( aCksum[0]!=sqlite3Get4byte(&aFrame[16]) + || aCksum[1]!=sqlite3Get4byte(&aFrame[20]) + ){ + /* Checksum failed. */ + return 0; + } + + /* If we reach this point, the frame is valid. Return the page number + ** and the new database size.
+ */ + *piPage = pgno; + *pnTruncate = sqlite3Get4byte(&aFrame[4]); + return 1; +} + + +#if defined(SQLITE_TEST) && defined(SQLITE_DEBUG) +/* +** Names of locks. This routine is used to provide debugging output and is not +** a part of an ordinary build. +*/ +static const char *walLockName(int lockIdx){ + if( lockIdx==WAL_WRITE_LOCK ){ + return "WRITE-LOCK"; + }else if( lockIdx==WAL_CKPT_LOCK ){ + return "CKPT-LOCK"; + }else if( lockIdx==WAL_RECOVER_LOCK ){ + return "RECOVER-LOCK"; + }else{ + static char zName[15]; + sqlite3_snprintf(sizeof(zName), zName, "READ-LOCK[%d]", + lockIdx-WAL_READ_LOCK(0)); + return zName; + } +} +#endif /*defined(SQLITE_TEST) || defined(SQLITE_DEBUG) */ + + +/* +** Set or release locks on the WAL. Locks are either shared or exclusive. +** A lock cannot be moved directly between shared and exclusive - it must go +** through the unlocked state first. +** +** In locking_mode=EXCLUSIVE, all of these routines become no-ops. +*/ +static int walLockShared(Wal *pWal, int lockIdx){ + int rc; + if( pWal->exclusiveMode ) return SQLITE_OK; + rc = sqlite3OsShmLock(pWal->pDbFd, lockIdx, 1, + SQLITE_SHM_LOCK | SQLITE_SHM_SHARED); + WALTRACE(("WAL%p: acquire SHARED-%s %s\n", pWal, + walLockName(lockIdx), rc ? "failed" : "ok")); + VVA_ONLY( pWal->lockError = (u8)(rc!=SQLITE_OK && rc!=SQLITE_BUSY); ) + return rc; +} +static void walUnlockShared(Wal *pWal, int lockIdx){ + if( pWal->exclusiveMode ) return; + (void)sqlite3OsShmLock(pWal->pDbFd, lockIdx, 1, + SQLITE_SHM_UNLOCK | SQLITE_SHM_SHARED); + WALTRACE(("WAL%p: release SHARED-%s\n", pWal, walLockName(lockIdx))); +} +static int walLockExclusive(Wal *pWal, int lockIdx, int n){ + int rc; + if( pWal->exclusiveMode ) return SQLITE_OK; + rc = sqlite3OsShmLock(pWal->pDbFd, lockIdx, n, + SQLITE_SHM_LOCK | SQLITE_SHM_EXCLUSIVE); + WALTRACE(("WAL%p: acquire EXCLUSIVE-%s cnt=%d %s\n", pWal, + walLockName(lockIdx), n, rc ? "failed" : "ok")); + VVA_ONLY( pWal->lockError = (u8)(rc!=SQLITE_OK && rc!=SQLITE_BUSY); ) + return rc; +} +static void walUnlockExclusive(Wal *pWal, int lockIdx, int n){ + if( pWal->exclusiveMode ) return; + (void)sqlite3OsShmLock(pWal->pDbFd, lockIdx, n, + SQLITE_SHM_UNLOCK | SQLITE_SHM_EXCLUSIVE); + WALTRACE(("WAL%p: release EXCLUSIVE-%s cnt=%d\n", pWal, + walLockName(lockIdx), n)); +} + +/* +** Compute a hash on a page number. The resulting hash value must land +** between 0 and (HASHTABLE_NSLOT-1). The walHashNext() function advances +** the hash to the next value in the event of a collision. +*/ +static int walHash(u32 iPage){ + assert( iPage>0 ); + assert( (HASHTABLE_NSLOT & (HASHTABLE_NSLOT-1))==0 ); + return (iPage*HASHTABLE_HASH_1) & (HASHTABLE_NSLOT-1); +} +static int walNextHash(int iPriorHash){ + return (iPriorHash+1)&(HASHTABLE_NSLOT-1); +} + +/* +** Return pointers to the hash table and page number array stored on +** page iHash of the wal-index. The wal-index is broken into 32KB pages +** numbered starting from 0. +** +** Set output variable *paHash to point to the start of the hash table +** in the wal-index file. Set *piZero to one less than the frame +** number of the first frame indexed by this hash table. If a +** slot in the hash table is set to N, it refers to frame number +** (*piZero+N) in the log. +** +** Finally, set *paPgno so that *paPgno[1] is the page number of the +** first frame indexed by the hash table, frame (*piZero+1). 
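+**
+** For example, with HASHTABLE_NPAGE==4096 and the 136-byte wal-index header
+** (so HASHTABLE_NPAGE_ONE==4062), frame 5000 is indexed by wal-index page 1:
+** walFramePage(5000)==1, *piZero is set to 4062, and the page number for
+** frame 5000 is found at (*paPgno)[5000-4062], entry 938 of that block's
+** page-mapping array.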
+*/ +static int walHashGet( + Wal *pWal, /* WAL handle */ + int iHash, /* Find the iHash'th table */ + volatile ht_slot **paHash, /* OUT: Pointer to hash index */ + volatile u32 **paPgno, /* OUT: Pointer to page number array */ + u32 *piZero /* OUT: Frame associated with *paPgno[0] */ +){ + int rc; /* Return code */ + volatile u32 *aPgno; + + rc = walIndexPage(pWal, iHash, &aPgno); + assert( rc==SQLITE_OK || iHash>0 ); + + if( rc==SQLITE_OK ){ + u32 iZero; + volatile ht_slot *aHash; + + aHash = (volatile ht_slot *)&aPgno[HASHTABLE_NPAGE]; + if( iHash==0 ){ + aPgno = &aPgno[WALINDEX_HDR_SIZE/sizeof(u32)]; + iZero = 0; + }else{ + iZero = HASHTABLE_NPAGE_ONE + (iHash-1)*HASHTABLE_NPAGE; + } + + *paPgno = &aPgno[-1]; + *paHash = aHash; + *piZero = iZero; + } + return rc; +} + +/* +** Return the number of the wal-index page that contains the hash-table +** and page-number array that contain entries corresponding to WAL frame +** iFrame. The wal-index is broken up into 32KB pages. Wal-index pages +** are numbered starting from 0. +*/ +static int walFramePage(u32 iFrame){ + int iHash = (iFrame+HASHTABLE_NPAGE-HASHTABLE_NPAGE_ONE-1) / HASHTABLE_NPAGE; + assert( (iHash==0 || iFrame>HASHTABLE_NPAGE_ONE) + && (iHash>=1 || iFrame<=HASHTABLE_NPAGE_ONE) + && (iHash<=1 || iFrame>(HASHTABLE_NPAGE_ONE+HASHTABLE_NPAGE)) + && (iHash>=2 || iFrame<=HASHTABLE_NPAGE_ONE+HASHTABLE_NPAGE) + && (iHash<=2 || iFrame>(HASHTABLE_NPAGE_ONE+2*HASHTABLE_NPAGE)) + ); + return iHash; +} + +/* +** Return the page number associated with frame iFrame in this WAL. +*/ +static u32 walFramePgno(Wal *pWal, u32 iFrame){ + int iHash = walFramePage(iFrame); + if( iHash==0 ){ + return pWal->apWiData[0][WALINDEX_HDR_SIZE/sizeof(u32) + iFrame - 1]; + } + return pWal->apWiData[iHash][(iFrame-1-HASHTABLE_NPAGE_ONE)%HASHTABLE_NPAGE]; +} + +/* +** Remove entries from the hash table that point to WAL slots greater +** than pWal->hdr.mxFrame. +** +** This function is called whenever pWal->hdr.mxFrame is decreased due +** to a rollback or savepoint. +** +** At most only the hash table containing pWal->hdr.mxFrame needs to be +** updated. Any later hash tables will be automatically cleared when +** pWal->hdr.mxFrame advances to the point where those hash tables are +** actually needed. +*/ +static void walCleanupHash(Wal *pWal){ + volatile ht_slot *aHash = 0; /* Pointer to hash table to clear */ + volatile u32 *aPgno = 0; /* Page number array for hash table */ + u32 iZero = 0; /* frame == (aHash[x]+iZero) */ + int iLimit = 0; /* Zero values greater than this */ + int nByte; /* Number of bytes to zero in aPgno[] */ + int i; /* Used to iterate through aHash[] */ + + assert( pWal->writeLock ); + testcase( pWal->hdr.mxFrame==HASHTABLE_NPAGE_ONE-1 ); + testcase( pWal->hdr.mxFrame==HASHTABLE_NPAGE_ONE ); + testcase( pWal->hdr.mxFrame==HASHTABLE_NPAGE_ONE+1 ); + + if( pWal->hdr.mxFrame==0 ) return; + + /* Obtain pointers to the hash-table and page-number array containing + ** the entry that corresponds to frame pWal->hdr.mxFrame. It is guaranteed + ** that the page said hash-table and array reside on is already mapped. + */ + assert( pWal->nWiData>walFramePage(pWal->hdr.mxFrame) ); + assert( pWal->apWiData[walFramePage(pWal->hdr.mxFrame)] ); + walHashGet(pWal, walFramePage(pWal->hdr.mxFrame), &aHash, &aPgno, &iZero); + + /* Zero all hash-table entries that correspond to frame numbers greater + ** than pWal->hdr.mxFrame. 
+ */ + iLimit = pWal->hdr.mxFrame - iZero; + assert( iLimit>0 ); + for(i=0; i<HASHTABLE_NSLOT; i++){ + if( aHash[i]>iLimit ){ + aHash[i] = 0; + } + } + + /* Zero the entries in the aPgno array that correspond to frames with + ** frame numbers greater than pWal->hdr.mxFrame. + */ + nByte = (int)((char *)aHash - (char *)&aPgno[iLimit+1]); + memset((void *)&aPgno[iLimit+1], 0, nByte); + +#ifdef SQLITE_ENABLE_EXPENSIVE_ASSERT + /* Verify that the every entry in the mapping region is still reachable + ** via the hash table even after the cleanup. + */ + if( iLimit ){ + int j; /* Loop counter */ + int iKey; /* Hash key */ + for(j=1; j<=iLimit; j++){ + for(iKey=walHash(aPgno[j]); aHash[iKey]; iKey=walNextHash(iKey)){ + if( aHash[iKey]==j ) break; + } + assert( aHash[iKey]==j ); + } + } +#endif /* SQLITE_ENABLE_EXPENSIVE_ASSERT */ +} + + +/* +** Set an entry in the wal-index that will map database page number +** pPage into WAL frame iFrame. +*/ +static int walIndexAppend(Wal *pWal, u32 iFrame, u32 iPage){ + int rc; /* Return code */ + u32 iZero = 0; /* One less than frame number of aPgno[1] */ + volatile u32 *aPgno = 0; /* Page number array */ + volatile ht_slot *aHash = 0; /* Hash table */ + + rc = walHashGet(pWal, walFramePage(iFrame), &aHash, &aPgno, &iZero); + + /* Assuming the wal-index file was successfully mapped, populate the + ** page number array and hash table entry. + */ + if( rc==SQLITE_OK ){ + int iKey; /* Hash table key */ + int idx; /* Value to write to hash-table slot */ + int nCollide; /* Number of hash collisions */ + + idx = iFrame - iZero; + assert( idx <= HASHTABLE_NSLOT/2 + 1 ); + + /* If this is the first entry to be added to this hash-table, zero the + ** entire hash table and aPgno[] array before proceeding. + */ + if( idx==1 ){ + int nByte = (int)((u8 *)&aHash[HASHTABLE_NSLOT] - (u8 *)&aPgno[1]); + memset((void*)&aPgno[1], 0, nByte); + } + + /* If the entry in aPgno[] is already set, then the previous writer + ** must have exited unexpectedly in the middle of a transaction (after + ** writing one or more dirty pages to the WAL to free up memory). + ** Remove the remnants of that writers uncommitted transaction from + ** the hash-table before writing any new entries. + */ + if( aPgno[idx] ){ + walCleanupHash(pWal); + assert( !aPgno[idx] ); + } + + /* Write the aPgno[] array entry and the hash-table slot. */ + nCollide = idx; + for(iKey=walHash(iPage); aHash[iKey]; iKey=walNextHash(iKey)){ + if( (nCollide--)==0 ) return SQLITE_CORRUPT_BKPT; + } + aPgno[idx] = iPage; + aHash[iKey] = (ht_slot)idx; + +#ifdef SQLITE_ENABLE_EXPENSIVE_ASSERT + /* Verify that the number of entries in the hash table exactly equals + ** the number of entries in the mapping region. + */ + { + int i; /* Loop counter */ + int nEntry = 0; /* Number of entries in the hash table */ + for(i=0; i<HASHTABLE_NSLOT; i++){ if( aHash[i] ) nEntry++; } + assert( nEntry==idx ); + } + + /* Verify that the every entry in the mapping region is reachable + ** via the hash table. This turns out to be a really, really expensive + ** thing to check, so only do this occasionally - not on every + ** iteration. + */ + if( (idx&0x3ff)==0 ){ + int i; /* Loop counter */ + for(i=1; i<=idx; i++){ + for(iKey=walHash(aPgno[i]); aHash[iKey]; iKey=walNextHash(iKey)){ + if( aHash[iKey]==i ) break; + } + assert( aHash[iKey]==i ); + } + } +#endif /* SQLITE_ENABLE_EXPENSIVE_ASSERT */ + } + + + return rc; +} + + +/* +** Recover the wal-index by reading the write-ahead log file. 
+** +** This routine first tries to establish an exclusive lock on the +** wal-index to prevent other threads/processes from doing anything +** with the WAL or wal-index while recovery is running. The +** WAL_RECOVER_LOCK is also held so that other threads will know +** that this thread is running recovery. If unable to establish +** the necessary locks, this routine returns SQLITE_BUSY. +*/ +static int walIndexRecover(Wal *pWal){ + int rc; /* Return Code */ + i64 nSize; /* Size of log file */ + u32 aFrameCksum[2] = {0, 0}; + int iLock; /* Lock offset to lock for checkpoint */ + int nLock; /* Number of locks to hold */ + + /* Obtain an exclusive lock on all byte in the locking range not already + ** locked by the caller. The caller is guaranteed to have locked the + ** WAL_WRITE_LOCK byte, and may have also locked the WAL_CKPT_LOCK byte. + ** If successful, the same bytes that are locked here are unlocked before + ** this function returns. + */ + assert( pWal->ckptLock==1 || pWal->ckptLock==0 ); + assert( WAL_ALL_BUT_WRITE==WAL_WRITE_LOCK+1 ); + assert( WAL_CKPT_LOCK==WAL_ALL_BUT_WRITE ); + assert( pWal->writeLock ); + iLock = WAL_ALL_BUT_WRITE + pWal->ckptLock; + nLock = SQLITE_SHM_NLOCK - iLock; + rc = walLockExclusive(pWal, iLock, nLock); + if( rc ){ + return rc; + } + WALTRACE(("WAL%p: recovery begin...\n", pWal)); + + memset(&pWal->hdr, 0, sizeof(WalIndexHdr)); + + rc = sqlite3OsFileSize(pWal->pWalFd, &nSize); + if( rc!=SQLITE_OK ){ + goto recovery_error; + } + + if( nSize>WAL_HDRSIZE ){ + u8 aBuf[WAL_HDRSIZE]; /* Buffer to load WAL header into */ + u8 *aFrame = 0; /* Malloc'd buffer to load entire frame */ + int szFrame; /* Number of bytes in buffer aFrame[] */ + u8 *aData; /* Pointer to data part of aFrame buffer */ + int iFrame; /* Index of last frame read */ + i64 iOffset; /* Next offset to read from log file */ + int szPage; /* Page size according to the log */ + u32 magic; /* Magic value read from WAL header */ + u32 version; /* Magic value read from WAL header */ + int isValid; /* True if this frame is valid */ + + /* Read in the WAL header. */ + rc = sqlite3OsRead(pWal->pWalFd, aBuf, WAL_HDRSIZE, 0); + if( rc!=SQLITE_OK ){ + goto recovery_error; + } + + /* If the database page size is not a power of two, or is greater than + ** SQLITE_MAX_PAGE_SIZE, conclude that the WAL file contains no valid + ** data. Similarly, if the 'magic' value is invalid, ignore the whole + ** WAL file. + */ + magic = sqlite3Get4byte(&aBuf[0]); + szPage = sqlite3Get4byte(&aBuf[8]); + if( (magic&0xFFFFFFFE)!=WAL_MAGIC + || szPage&(szPage-1) + || szPage>SQLITE_MAX_PAGE_SIZE + || szPage<512 + ){ + goto finished; + } + pWal->hdr.bigEndCksum = (u8)(magic&0x00000001); + pWal->szPage = szPage; + pWal->nCkpt = sqlite3Get4byte(&aBuf[12]); + memcpy(&pWal->hdr.aSalt, &aBuf[16], 8); + + /* Verify that the WAL header checksum is correct */ + walChecksumBytes(pWal->hdr.bigEndCksum==SQLITE_BIGENDIAN, + aBuf, WAL_HDRSIZE-2*4, 0, pWal->hdr.aFrameCksum + ); + if( pWal->hdr.aFrameCksum[0]!=sqlite3Get4byte(&aBuf[24]) + || pWal->hdr.aFrameCksum[1]!=sqlite3Get4byte(&aBuf[28]) + ){ + goto finished; + } + + /* Verify that the version number on the WAL format is one that + ** are able to understand */ + version = sqlite3Get4byte(&aBuf[4]); + if( version!=WAL_MAX_VERSION ){ + rc = SQLITE_CANTOPEN_BKPT; + goto finished; + } + + /* Malloc a buffer to read frames into. 
*/ + szFrame = szPage + WAL_FRAME_HDRSIZE; + aFrame = (u8 *)sqlite3_malloc64(szFrame); + if( !aFrame ){ + rc = SQLITE_NOMEM; + goto recovery_error; + } + aData = &aFrame[WAL_FRAME_HDRSIZE]; + + /* Read all frames from the log file. */ + iFrame = 0; + for(iOffset=WAL_HDRSIZE; (iOffset+szFrame)<=nSize; iOffset+=szFrame){ + u32 pgno; /* Database page number for frame */ + u32 nTruncate; /* dbsize field from frame header */ + + /* Read and decode the next log frame. */ + iFrame++; + rc = sqlite3OsRead(pWal->pWalFd, aFrame, szFrame, iOffset); + if( rc!=SQLITE_OK ) break; + isValid = walDecodeFrame(pWal, &pgno, &nTruncate, aData, aFrame); + if( !isValid ) break; + rc = walIndexAppend(pWal, iFrame, pgno); + if( rc!=SQLITE_OK ) break; + + /* If nTruncate is non-zero, this is a commit record. */ + if( nTruncate ){ + pWal->hdr.mxFrame = iFrame; + pWal->hdr.nPage = nTruncate; + pWal->hdr.szPage = (u16)((szPage&0xff00) | (szPage>>16)); + testcase( szPage<=32768 ); + testcase( szPage>=65536 ); + aFrameCksum[0] = pWal->hdr.aFrameCksum[0]; + aFrameCksum[1] = pWal->hdr.aFrameCksum[1]; + } + } + + sqlite3_free(aFrame); + } + +finished: + if( rc==SQLITE_OK ){ + volatile WalCkptInfo *pInfo; + int i; + pWal->hdr.aFrameCksum[0] = aFrameCksum[0]; + pWal->hdr.aFrameCksum[1] = aFrameCksum[1]; + walIndexWriteHdr(pWal); + + /* Reset the checkpoint-header. This is safe because this thread is + ** currently holding locks that exclude all other readers, writers and + ** checkpointers. + */ + pInfo = walCkptInfo(pWal); + pInfo->nBackfill = 0; + pInfo->nBackfillAttempted = pWal->hdr.mxFrame; + pInfo->aReadMark[0] = 0; + for(i=1; i<WAL_NREADER; i++) pInfo->aReadMark[i] = READMARK_NOT_USED; + if( pWal->hdr.mxFrame ) pInfo->aReadMark[1] = pWal->hdr.mxFrame; + + /* If more than one frame was recovered from the log file, report an + ** event via sqlite3_log(). This is to help with identifying performance + ** problems caused by applications routinely shutting down without + ** checkpointing the log file. + */ + if( pWal->hdr.nPage ){ + sqlite3_log(SQLITE_NOTICE_RECOVER_WAL, + "recovered %d frames from WAL file %s", + pWal->hdr.mxFrame, pWal->zWalName + ); + } + } + +recovery_error: + WALTRACE(("WAL%p: recovery %s\n", pWal, rc ? "failed" : "ok")); + walUnlockExclusive(pWal, iLock, nLock); + return rc; +} + +/* +** Close an open wal-index. +*/ +static void walIndexClose(Wal *pWal, int isDelete){ + if( pWal->exclusiveMode==WAL_HEAPMEMORY_MODE ){ + int i; + for(i=0; i<pWal->nWiData; i++){ + sqlite3_free((void *)pWal->apWiData[i]); + pWal->apWiData[i] = 0; + } + }else{ + sqlite3OsShmUnmap(pWal->pDbFd, isDelete); + } +} + +/* +** Open a connection to the WAL file zWalName. The database file must +** already be opened on connection pDbFd. The buffer that zWalName points +** to must remain valid for the lifetime of the returned Wal* handle. +** +** A SHARED lock should be held on the database file when this function +** is called. The purpose of this SHARED lock is to prevent any other +** client from unlinking the WAL or wal-index file. If another process +** were to do this just after this client opened one of these files, the +** system would be badly broken. +** +** If the log file is successfully opened, SQLITE_OK is returned and +** *ppWal is set to point to a new WAL handle. If an error occurs, +** an SQLite error code is returned and *ppWal is left unmodified. 
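+**
+** The returned handle and its VFS file object share a single allocation:
+** the Wal structure is followed in memory by pVfs->szOsFile bytes that
+** serve as the sqlite3_file for the WAL file itself. A rough standalone
+** sketch of this pattern (hypothetical names, plain C):
+**
+**     typedef struct Handle Handle;
+**     struct Handle { void *pFile; };
+**
+**     static Handle *allocHandle(size_t szOsFile){
+**       Handle *p = malloc( sizeof(Handle) + szOsFile );
+**       if( p ){
+**         memset(p, 0, sizeof(Handle) + szOsFile);
+**         p->pFile = (void*)&p[1];
+**       }
+**       return p;
+**     }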
+*/ +SQLITE_PRIVATE int sqlite3WalOpen( + sqlite3_vfs *pVfs, /* vfs module to open wal and wal-index */ + sqlite3_file *pDbFd, /* The open database file */ + const char *zWalName, /* Name of the WAL file */ + int bNoShm, /* True to run in heap-memory mode */ + i64 mxWalSize, /* Truncate WAL to this size on reset */ + Wal **ppWal /* OUT: Allocated Wal handle */ +){ + int rc; /* Return Code */ + Wal *pRet; /* Object to allocate and return */ + int flags; /* Flags passed to OsOpen() */ + + assert( zWalName && zWalName[0] ); + assert( pDbFd ); + + /* In the amalgamation, the os_unix.c and os_win.c source files come before + ** this source file. Verify that the #defines of the locking byte offsets + ** in os_unix.c and os_win.c agree with the WALINDEX_LOCK_OFFSET value. + ** For that matter, if the lock offset ever changes from its initial design + ** value of 120, we need to know that so there is an assert() to check it. + */ + assert( 120==WALINDEX_LOCK_OFFSET ); + assert( 136==WALINDEX_HDR_SIZE ); +#ifdef WIN_SHM_BASE + assert( WIN_SHM_BASE==WALINDEX_LOCK_OFFSET ); +#endif +#ifdef UNIX_SHM_BASE + assert( UNIX_SHM_BASE==WALINDEX_LOCK_OFFSET ); +#endif + + + /* Allocate an instance of struct Wal to return. */ + *ppWal = 0; + pRet = (Wal*)sqlite3MallocZero(sizeof(Wal) + pVfs->szOsFile); + if( !pRet ){ + return SQLITE_NOMEM; + } + + pRet->pVfs = pVfs; + pRet->pWalFd = (sqlite3_file *)&pRet[1]; + pRet->pDbFd = pDbFd; + pRet->readLock = -1; + pRet->mxWalSize = mxWalSize; + pRet->zWalName = zWalName; + pRet->syncHeader = 1; + pRet->padToSectorBoundary = 1; + pRet->exclusiveMode = (bNoShm ? WAL_HEAPMEMORY_MODE: WAL_NORMAL_MODE); + + /* Open file handle on the write-ahead log file. */ + flags = (SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE|SQLITE_OPEN_WAL); + rc = sqlite3OsOpen(pVfs, zWalName, pRet->pWalFd, flags, &flags); + if( rc==SQLITE_OK && flags&SQLITE_OPEN_READONLY ){ + pRet->readOnly = WAL_RDONLY; + } + + if( rc!=SQLITE_OK ){ + walIndexClose(pRet, 0); + sqlite3OsClose(pRet->pWalFd); + sqlite3_free(pRet); + }else{ + int iDC = sqlite3OsDeviceCharacteristics(pDbFd); + if( iDC & SQLITE_IOCAP_SEQUENTIAL ){ pRet->syncHeader = 0; } + if( iDC & SQLITE_IOCAP_POWERSAFE_OVERWRITE ){ + pRet->padToSectorBoundary = 0; + } + *ppWal = pRet; + WALTRACE(("WAL%d: opened\n", pRet)); + } + return rc; +} + +/* +** Change the size to which the WAL file is trucated on each reset. +*/ +SQLITE_PRIVATE void sqlite3WalLimit(Wal *pWal, i64 iLimit){ + if( pWal ) pWal->mxWalSize = iLimit; +} + +/* +** Find the smallest page number out of all pages held in the WAL that +** has not been returned by any prior invocation of this method on the +** same WalIterator object. Write into *piFrame the frame index where +** that page was last written into the WAL. Write into *piPage the page +** number. +** +** Return 0 on success. If there are no pages in the WAL with a page +** number larger than *piPage, then return 1. 
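+**
+** Conceptually this is a "smallest key greater than the previous result"
+** scan across several ascending arrays. A rough standalone sketch
+** (hypothetical names; u32 means an unsigned 32-bit integer):
+**
+**     u32 nextLargest(u32 **a, int *n, int *iNext, int nSeg, u32 iPrior){
+**       u32 iRet = 0xFFFFFFFF;
+**       int i;
+**       for(i=0; i<nSeg; i++){
+**         while( iNext[i]<n[i] && a[i][iNext[i]]<=iPrior ) iNext[i]++;
+**         if( iNext[i]<n[i] && a[i][iNext[i]]<iRet ) iRet = a[i][iNext[i]];
+**       }
+**       return iRet;
+**     }
+**
+** The routine below additionally records, for the winning page number,
+** the frame index taken from the newest segment that contains that page.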
+*/ +static int walIteratorNext( + WalIterator *p, /* Iterator */ + u32 *piPage, /* OUT: The page number of the next page */ + u32 *piFrame /* OUT: Wal frame index of next page */ +){ + u32 iMin; /* Result pgno must be greater than iMin */ + u32 iRet = 0xFFFFFFFF; /* 0xffffffff is never a valid page number */ + int i; /* For looping through segments */ + + iMin = p->iPrior; + assert( iMin<0xffffffff ); + for(i=p->nSegment-1; i>=0; i--){ + struct WalSegment *pSegment = &p->aSegment[i]; + while( pSegment->iNext<pSegment->nEntry ){ + u32 iPg = pSegment->aPgno[pSegment->aIndex[pSegment->iNext]]; + if( iPg>iMin ){ + if( iPg<iRet ){ + iRet = iPg; + *piFrame = pSegment->iZero + pSegment->aIndex[pSegment->iNext]; + } + break; + } + pSegment->iNext++; + } + } + + *piPage = p->iPrior = iRet; + return (iRet==0xFFFFFFFF); +} + +/* +** This function merges two sorted lists into a single sorted list. +** +** aLeft[] and aRight[] are arrays of indices. The sort key is +** aContent[aLeft[]] and aContent[aRight[]]. Upon entry, the following +** is guaranteed for all J<K: +** +** aContent[aLeft[J]] < aContent[aLeft[K]] +** aContent[aRight[J]] < aContent[aRight[K]] +** +** This routine overwrites aRight[] with a new (probably longer) sequence +** of indices such that the aRight[] contains every index that appears in +** either aLeft[] or the old aRight[] and such that the second condition +** above is still met. +** +** The aContent[aLeft[X]] values will be unique for all X. And the +** aContent[aRight[X]] values will be unique too. But there might be +** one or more combinations of X and Y such that +** +** aLeft[X]!=aRight[Y] && aContent[aLeft[X]] == aContent[aRight[Y]] +** +** When that happens, omit the aLeft[X] and use the aRight[Y] index. +*/ +static void walMerge( + const u32 *aContent, /* Pages in wal - keys for the sort */ + ht_slot *aLeft, /* IN: Left hand input list */ + int nLeft, /* IN: Elements in array *paLeft */ + ht_slot **paRight, /* IN/OUT: Right hand input list */ + int *pnRight, /* IN/OUT: Elements in *paRight */ + ht_slot *aTmp /* Temporary buffer */ +){ + int iLeft = 0; /* Current index in aLeft */ + int iRight = 0; /* Current index in aRight */ + int iOut = 0; /* Current index in output buffer */ + int nRight = *pnRight; + ht_slot *aRight = *paRight; + + assert( nLeft>0 && nRight>0 ); + while( iRight<nRight || iLeft<nLeft ){ + ht_slot logpage; + Pgno dbpage; + + if( (iLeft<nLeft) + && (iRight>=nRight || aContent[aLeft[iLeft]]<aContent[aRight[iRight]]) + ){ + logpage = aLeft[iLeft++]; + }else{ + logpage = aRight[iRight++]; + } + dbpage = aContent[logpage]; + + aTmp[iOut++] = logpage; + if( iLeft<nLeft && aContent[aLeft[iLeft]]==dbpage ) iLeft++; + + assert( iLeft>=nLeft || aContent[aLeft[iLeft]]>dbpage ); + assert( iRight>=nRight || aContent[aRight[iRight]]>dbpage ); + } + + *paRight = aLeft; + *pnRight = iOut; + memcpy(aLeft, aTmp, sizeof(aTmp[0])*iOut); +} + +/* +** Sort the elements in list aList using aContent[] as the sort key. +** Remove elements with duplicate keys, preferring to keep the +** larger aList[] values. +** +** The aList[] entries are indices into aContent[]. The values in +** aList[] are to be sorted so that for all J<K: +** +** aContent[aList[J]] < aContent[aList[K]] +** +** For any X and Y such that +** +** aContent[aList[X]] == aContent[aList[Y]] +** +** Keep the larger of the two values aList[X] and aList[Y] and discard +** the smaller. 
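+**
+** The pairwise merge step used repeatedly by this routine (walMerge(),
+** above) can be sketched in isolation roughly as follows (hypothetical
+** names; within each input list the keys are distinct and ascending):
+**
+**     static int mergeDedup(const u32 *aKey, const int *aL, int nL,
+**                           const int *aR, int nR, int *aOut){
+**       int iL=0, iR=0, nOut=0;
+**       while( iL<nL || iR<nR ){
+**         int take;
+**         if( iL<nL && (iR>=nR || aKey[aL[iL]]<aKey[aR[iR]]) ){
+**           take = aL[iL++];
+**         }else{
+**           take = aR[iR++];
+**         }
+**         aOut[nOut++] = take;
+**         if( iL<nL && aKey[aL[iL]]==aKey[take] ) iL++;
+**       }
+**       return nOut;
+**     }
+**
+** When both inputs contain the same key, the entry from the right-hand
+** (more recent) list is emitted and the left-hand duplicate is skipped.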
+*/ +static void walMergesort( + const u32 *aContent, /* Pages in wal */ + ht_slot *aBuffer, /* Buffer of at least *pnList items to use */ + ht_slot *aList, /* IN/OUT: List to sort */ + int *pnList /* IN/OUT: Number of elements in aList[] */ +){ + struct Sublist { + int nList; /* Number of elements in aList */ + ht_slot *aList; /* Pointer to sub-list content */ + }; + + const int nList = *pnList; /* Size of input list */ + int nMerge = 0; /* Number of elements in list aMerge */ + ht_slot *aMerge = 0; /* List to be merged */ + int iList; /* Index into input list */ + u32 iSub = 0; /* Index into aSub array */ + struct Sublist aSub[13]; /* Array of sub-lists */ + + memset(aSub, 0, sizeof(aSub)); + assert( nList<=HASHTABLE_NPAGE && nList>0 ); + assert( HASHTABLE_NPAGE==(1<<(ArraySize(aSub)-1)) ); + + for(iList=0; iList<nList; iList++){ + nMerge = 1; + aMerge = &aList[iList]; + for(iSub=0; iList & (1<<iSub); iSub++){ + struct Sublist *p; + assert( iSub<ArraySize(aSub) ); + p = &aSub[iSub]; + assert( p->aList && p->nList<=(1<<iSub) ); + assert( p->aList==&aList[iList&~((2<<iSub)-1)] ); + walMerge(aContent, p->aList, p->nList, &aMerge, &nMerge, aBuffer); + } + aSub[iSub].aList = aMerge; + aSub[iSub].nList = nMerge; + } + + for(iSub++; iSub<ArraySize(aSub); iSub++){ + if( nList & (1<<iSub) ){ + struct Sublist *p; + assert( iSub<ArraySize(aSub) ); + p = &aSub[iSub]; + assert( p->nList<=(1<<iSub) ); + assert( p->aList==&aList[nList&~((2<<iSub)-1)] ); + walMerge(aContent, p->aList, p->nList, &aMerge, &nMerge, aBuffer); + } + } + assert( aMerge==aList ); + *pnList = nMerge; + +#ifdef SQLITE_DEBUG + { + int i; + for(i=1; i<*pnList; i++){ + assert( aContent[aList[i]] > aContent[aList[i-1]] ); + } + } +#endif +} + +/* +** Free an iterator allocated by walIteratorInit(). +*/ +static void walIteratorFree(WalIterator *p){ + sqlite3_free(p); +} + +/* +** Construct a WalInterator object that can be used to loop over all +** pages in the WAL in ascending order. The caller must hold the checkpoint +** lock. +** +** On success, make *pp point to the newly allocated WalInterator object +** return SQLITE_OK. Otherwise, return an error code. If this routine +** returns an error, the value of *pp is undefined. +** +** The calling routine should invoke walIteratorFree() to destroy the +** WalIterator object when it has finished with it. +*/ +static int walIteratorInit(Wal *pWal, WalIterator **pp){ + WalIterator *p; /* Return value */ + int nSegment; /* Number of segments to merge */ + u32 iLast; /* Last frame in log */ + int nByte; /* Number of bytes to allocate */ + int i; /* Iterator variable */ + ht_slot *aTmp; /* Temp space used by merge-sort */ + int rc = SQLITE_OK; /* Return Code */ + + /* This routine only runs while holding the checkpoint lock. And + ** it only runs if there is actually content in the log (mxFrame>0). + */ + assert( pWal->ckptLock && pWal->hdr.mxFrame>0 ); + iLast = pWal->hdr.mxFrame; + + /* Allocate space for the WalIterator object. */ + nSegment = walFramePage(iLast) + 1; + nByte = sizeof(WalIterator) + + (nSegment-1)*sizeof(struct WalSegment) + + iLast*sizeof(ht_slot); + p = (WalIterator *)sqlite3_malloc64(nByte); + if( !p ){ + return SQLITE_NOMEM; + } + memset(p, 0, nByte); + p->nSegment = nSegment; + + /* Allocate temporary space used by the merge-sort routine. This block + ** of memory will be freed before this function returns. 
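+ **
+ ** Only enough scratch space for one segment is required: each hash-table
+ ** segment indexes at most HASHTABLE_NPAGE frames and the segments are
+ ** sorted one at a time, hence the min(iLast, HASHTABLE_NPAGE) sizing of
+ ** the allocation below.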
+ */ + aTmp = (ht_slot *)sqlite3_malloc64( + sizeof(ht_slot) * (iLast>HASHTABLE_NPAGE?HASHTABLE_NPAGE:iLast) + ); + if( !aTmp ){ + rc = SQLITE_NOMEM; + } + + for(i=0; rc==SQLITE_OK && i<nSegment; i++){ + volatile ht_slot *aHash; + u32 iZero; + volatile u32 *aPgno; + + rc = walHashGet(pWal, i, &aHash, &aPgno, &iZero); + if( rc==SQLITE_OK ){ + int j; /* Counter variable */ + int nEntry; /* Number of entries in this segment */ + ht_slot *aIndex; /* Sorted index for this segment */ + + aPgno++; + if( (i+1)==nSegment ){ + nEntry = (int)(iLast - iZero); + }else{ + nEntry = (int)((u32*)aHash - (u32*)aPgno); + } + aIndex = &((ht_slot *)&p->aSegment[p->nSegment])[iZero]; + iZero++; + + for(j=0; j<nEntry; j++){ + aIndex[j] = (ht_slot)j; + } + walMergesort((u32 *)aPgno, aTmp, aIndex, &nEntry); + p->aSegment[i].iZero = iZero; + p->aSegment[i].nEntry = nEntry; + p->aSegment[i].aIndex = aIndex; + p->aSegment[i].aPgno = (u32 *)aPgno; + } + } + sqlite3_free(aTmp); + + if( rc!=SQLITE_OK ){ + walIteratorFree(p); + } + *pp = p; + return rc; +} + +/* +** Attempt to obtain the exclusive WAL lock defined by parameters lockIdx and +** n. If the attempt fails and parameter xBusy is not NULL, then it is a +** busy-handler function. Invoke it and retry the lock until either the +** lock is successfully obtained or the busy-handler returns 0. +*/ +static int walBusyLock( + Wal *pWal, /* WAL connection */ + int (*xBusy)(void*), /* Function to call when busy */ + void *pBusyArg, /* Context argument for xBusyHandler */ + int lockIdx, /* Offset of first byte to lock */ + int n /* Number of bytes to lock */ +){ + int rc; + do { + rc = walLockExclusive(pWal, lockIdx, n); + }while( xBusy && rc==SQLITE_BUSY && xBusy(pBusyArg) ); + return rc; +} + +/* +** The cache of the wal-index header must be valid to call this function. +** Return the page-size in bytes used by the database. +*/ +static int walPagesize(Wal *pWal){ + return (pWal->hdr.szPage&0xfe00) + ((pWal->hdr.szPage&0x0001)<<16); +} + +/* +** The following is guaranteed when this function is called: +** +** a) the WRITER lock is held, +** b) the entire log file has been checkpointed, and +** c) any existing readers are reading exclusively from the database +** file - there are no readers that may attempt to read a frame from +** the log file. +** +** This function updates the shared-memory structures so that the next +** client to write to the database (which may be this one) does so by +** writing frames into the start of the log file. +** +** The value of parameter salt1 is used as the aSalt[1] value in the +** new wal-index header. It should be passed a pseudo-random value (i.e. +** one obtained from sqlite3_randomness()). +*/ +static void walRestartHdr(Wal *pWal, u32 salt1){ + volatile WalCkptInfo *pInfo = walCkptInfo(pWal); + int i; /* Loop counter */ + u32 *aSalt = pWal->hdr.aSalt; /* Big-endian salt values */ + pWal->nCkpt++; + pWal->hdr.mxFrame = 0; + sqlite3Put4byte((u8*)&aSalt[0], 1 + sqlite3Get4byte((u8*)&aSalt[0])); + memcpy(&pWal->hdr.aSalt[1], &salt1, 4); + walIndexWriteHdr(pWal); + pInfo->nBackfill = 0; + pInfo->nBackfillAttempted = 0; + pInfo->aReadMark[1] = 0; + for(i=2; i<WAL_NREADER; i++) pInfo->aReadMark[i] = READMARK_NOT_USED; + assert( pInfo->aReadMark[0]==0 ); +} + +/* +** Copy as much content as we can from the WAL back into the database file +** in response to an sqlite3_wal_checkpoint() request or the equivalent. +** +** The amount of information copies from WAL to database might be limited +** by active readers. 
This routine will never overwrite a database page +** that a concurrent reader might be using. +** +** All I/O barrier operations (a.k.a fsyncs) occur in this routine when +** SQLite is in WAL-mode in synchronous=NORMAL. That means that if +** checkpoints are always run by a background thread or background +** process, foreground threads will never block on a lengthy fsync call. +** +** Fsync is called on the WAL before writing content out of the WAL and +** into the database. This ensures that if the new content is persistent +** in the WAL and can be recovered following a power-loss or hard reset. +** +** Fsync is also called on the database file if (and only if) the entire +** WAL content is copied into the database file. This second fsync makes +** it safe to delete the WAL since the new content will persist in the +** database file. +** +** This routine uses and updates the nBackfill field of the wal-index header. +** This is the only routine that will increase the value of nBackfill. +** (A WAL reset or recovery will revert nBackfill to zero, but not increase +** its value.) +** +** The caller must be holding sufficient locks to ensure that no other +** checkpoint is running (in any other thread or process) at the same +** time. +*/ +static int walCheckpoint( + Wal *pWal, /* Wal connection */ + int eMode, /* One of PASSIVE, FULL or RESTART */ + int (*xBusy)(void*), /* Function to call when busy */ + void *pBusyArg, /* Context argument for xBusyHandler */ + int sync_flags, /* Flags for OsSync() (or 0) */ + u8 *zBuf /* Temporary buffer to use */ +){ + int rc = SQLITE_OK; /* Return code */ + int szPage; /* Database page-size */ + WalIterator *pIter = 0; /* Wal iterator context */ + u32 iDbpage = 0; /* Next database page to write */ + u32 iFrame = 0; /* Wal frame containing data for iDbpage */ + u32 mxSafeFrame; /* Max frame that can be backfilled */ + u32 mxPage; /* Max database page to write */ + int i; /* Loop counter */ + volatile WalCkptInfo *pInfo; /* The checkpoint status information */ + + szPage = walPagesize(pWal); + testcase( szPage<=32768 ); + testcase( szPage>=65536 ); + pInfo = walCkptInfo(pWal); + if( pInfo->nBackfill<pWal->hdr.mxFrame ){ + + /* Allocate the iterator */ + rc = walIteratorInit(pWal, &pIter); + if( rc!=SQLITE_OK ){ + return rc; + } + assert( pIter ); + + /* EVIDENCE-OF: R-62920-47450 The busy-handler callback is never invoked + ** in the SQLITE_CHECKPOINT_PASSIVE mode. */ + assert( eMode!=SQLITE_CHECKPOINT_PASSIVE || xBusy==0 ); + + /* Compute in mxSafeFrame the index of the last frame of the WAL that is + ** safe to write into the database. Frames beyond mxSafeFrame might + ** overwrite database pages that are in use by active readers and thus + ** cannot be backfilled from the WAL. + */ + mxSafeFrame = pWal->hdr.mxFrame; + mxPage = pWal->hdr.nPage; + for(i=1; i<WAL_NREADER; i++){ + /* Thread-sanitizer reports that the following is an unsafe read, + ** as some other thread may be in the process of updating the value + ** of the aReadMark[] slot. The assumption here is that if that is + ** happening, the other client may only be increasing the value, + ** not decreasing it. So assuming either that either the "old" or + ** "new" version of the value is read, and not some arbitrary value + ** that would never be written by a real client, things are still + ** safe. 
*/ + u32 y = pInfo->aReadMark[i]; + if( mxSafeFrame>y ){ + assert( y<=pWal->hdr.mxFrame ); + rc = walBusyLock(pWal, xBusy, pBusyArg, WAL_READ_LOCK(i), 1); + if( rc==SQLITE_OK ){ + pInfo->aReadMark[i] = (i==1 ? mxSafeFrame : READMARK_NOT_USED); + walUnlockExclusive(pWal, WAL_READ_LOCK(i), 1); + }else if( rc==SQLITE_BUSY ){ + mxSafeFrame = y; + xBusy = 0; + }else{ + goto walcheckpoint_out; + } + } + } + + if( pInfo->nBackfill<mxSafeFrame + && (rc = walBusyLock(pWal, xBusy, pBusyArg, WAL_READ_LOCK(0),1))==SQLITE_OK + ){ + i64 nSize; /* Current size of database file */ + u32 nBackfill = pInfo->nBackfill; + + pInfo->nBackfillAttempted = mxSafeFrame; + + /* Sync the WAL to disk */ + if( sync_flags ){ + rc = sqlite3OsSync(pWal->pWalFd, sync_flags); + } + + /* If the database may grow as a result of this checkpoint, hint + ** about the eventual size of the db file to the VFS layer. + */ + if( rc==SQLITE_OK ){ + i64 nReq = ((i64)mxPage * szPage); + rc = sqlite3OsFileSize(pWal->pDbFd, &nSize); + if( rc==SQLITE_OK && nSize<nReq ){ + sqlite3OsFileControlHint(pWal->pDbFd, SQLITE_FCNTL_SIZE_HINT, &nReq); + } + } + + + /* Iterate through the contents of the WAL, copying data to the db file */ + while( rc==SQLITE_OK && 0==walIteratorNext(pIter, &iDbpage, &iFrame) ){ + i64 iOffset; + assert( walFramePgno(pWal, iFrame)==iDbpage ); + if( iFrame<=nBackfill || iFrame>mxSafeFrame || iDbpage>mxPage ){ + continue; + } + iOffset = walFrameOffset(iFrame, szPage) + WAL_FRAME_HDRSIZE; + /* testcase( IS_BIG_INT(iOffset) ); // requires a 4GiB WAL file */ + rc = sqlite3OsRead(pWal->pWalFd, zBuf, szPage, iOffset); + if( rc!=SQLITE_OK ) break; + iOffset = (iDbpage-1)*(i64)szPage; + testcase( IS_BIG_INT(iOffset) ); + rc = sqlite3OsWrite(pWal->pDbFd, zBuf, szPage, iOffset); + if( rc!=SQLITE_OK ) break; + } + + /* If work was actually accomplished... */ + if( rc==SQLITE_OK ){ + if( mxSafeFrame==walIndexHdr(pWal)->mxFrame ){ + i64 szDb = pWal->hdr.nPage*(i64)szPage; + testcase( IS_BIG_INT(szDb) ); + rc = sqlite3OsTruncate(pWal->pDbFd, szDb); + if( rc==SQLITE_OK && sync_flags ){ + rc = sqlite3OsSync(pWal->pDbFd, sync_flags); + } + } + if( rc==SQLITE_OK ){ + pInfo->nBackfill = mxSafeFrame; + } + } + + /* Release the reader lock held while backfilling */ + walUnlockExclusive(pWal, WAL_READ_LOCK(0), 1); + } + + if( rc==SQLITE_BUSY ){ + /* Reset the return code so as not to report a checkpoint failure + ** just because there are active readers. */ + rc = SQLITE_OK; + } + } + + /* If this is an SQLITE_CHECKPOINT_RESTART or TRUNCATE operation, and the + ** entire wal file has been copied into the database file, then block + ** until all readers have finished using the wal file. This ensures that + ** the next process to write to the database restarts the wal file. + */ + if( rc==SQLITE_OK && eMode!=SQLITE_CHECKPOINT_PASSIVE ){ + assert( pWal->writeLock ); + if( pInfo->nBackfill<pWal->hdr.mxFrame ){ + rc = SQLITE_BUSY; + }else if( eMode>=SQLITE_CHECKPOINT_RESTART ){ + u32 salt1; + sqlite3_randomness(4, &salt1); + assert( pInfo->nBackfill==pWal->hdr.mxFrame ); + rc = walBusyLock(pWal, xBusy, pBusyArg, WAL_READ_LOCK(1), WAL_NREADER-1); + if( rc==SQLITE_OK ){ + if( eMode==SQLITE_CHECKPOINT_TRUNCATE ){ + /* IMPLEMENTATION-OF: R-44699-57140 This mode works the same way as + ** SQLITE_CHECKPOINT_RESTART with the addition that it also + ** truncates the log file to zero bytes just prior to a + ** successful return. 
+ ** + ** In theory, it might be safe to do this without updating the + ** wal-index header in shared memory, as all subsequent reader or + ** writer clients should see that the entire log file has been + ** checkpointed and behave accordingly. This seems unsafe though, + ** as it would leave the system in a state where the contents of + ** the wal-index header do not match the contents of the + ** file-system. To avoid this, update the wal-index header to + ** indicate that the log file contains zero valid frames. */ + walRestartHdr(pWal, salt1); + rc = sqlite3OsTruncate(pWal->pWalFd, 0); + } + walUnlockExclusive(pWal, WAL_READ_LOCK(1), WAL_NREADER-1); + } + } + } + + walcheckpoint_out: + walIteratorFree(pIter); + return rc; +} + +/* +** If the WAL file is currently larger than nMax bytes in size, truncate +** it to exactly nMax bytes. If an error occurs while doing so, ignore it. +*/ +static void walLimitSize(Wal *pWal, i64 nMax){ + i64 sz; + int rx; + sqlite3BeginBenignMalloc(); + rx = sqlite3OsFileSize(pWal->pWalFd, &sz); + if( rx==SQLITE_OK && (sz > nMax ) ){ + rx = sqlite3OsTruncate(pWal->pWalFd, nMax); + } + sqlite3EndBenignMalloc(); + if( rx ){ + sqlite3_log(rx, "cannot limit WAL size: %s", pWal->zWalName); + } +} + +/* +** Close a connection to a log file. +*/ +SQLITE_PRIVATE int sqlite3WalClose( + Wal *pWal, /* Wal to close */ + int sync_flags, /* Flags to pass to OsSync() (or 0) */ + int nBuf, + u8 *zBuf /* Buffer of at least nBuf bytes */ +){ + int rc = SQLITE_OK; + if( pWal ){ + int isDelete = 0; /* True to unlink wal and wal-index files */ + + /* If an EXCLUSIVE lock can be obtained on the database file (using the + ** ordinary, rollback-mode locking methods, this guarantees that the + ** connection associated with this log file is the only connection to + ** the database. In this case checkpoint the database and unlink both + ** the wal and wal-index files. + ** + ** The EXCLUSIVE lock is not released before returning. + */ + rc = sqlite3OsLock(pWal->pDbFd, SQLITE_LOCK_EXCLUSIVE); + if( rc==SQLITE_OK ){ + if( pWal->exclusiveMode==WAL_NORMAL_MODE ){ + pWal->exclusiveMode = WAL_EXCLUSIVE_MODE; + } + rc = sqlite3WalCheckpoint( + pWal, SQLITE_CHECKPOINT_PASSIVE, 0, 0, sync_flags, nBuf, zBuf, 0, 0 + ); + if( rc==SQLITE_OK ){ + int bPersist = -1; + sqlite3OsFileControlHint( + pWal->pDbFd, SQLITE_FCNTL_PERSIST_WAL, &bPersist + ); + if( bPersist!=1 ){ + /* Try to delete the WAL file if the checkpoint completed and + ** fsyned (rc==SQLITE_OK) and if we are not in persistent-wal + ** mode (!bPersist) */ + isDelete = 1; + }else if( pWal->mxWalSize>=0 ){ + /* Try to truncate the WAL file to zero bytes if the checkpoint + ** completed and fsynced (rc==SQLITE_OK) and we are in persistent + ** WAL mode (bPersist) and if the PRAGMA journal_size_limit is a + ** non-negative value (pWal->mxWalSize>=0). Note that we truncate + ** to zero bytes as truncating to the journal_size_limit might + ** leave a corrupt WAL file on disk. */ + walLimitSize(pWal, 0); + } + } + } + + walIndexClose(pWal, isDelete); + sqlite3OsClose(pWal->pWalFd); + if( isDelete ){ + sqlite3BeginBenignMalloc(); + sqlite3OsDelete(pWal->pVfs, pWal->zWalName, 0); + sqlite3EndBenignMalloc(); + } + WALTRACE(("WAL%p: closed\n", pWal)); + sqlite3_free((void *)pWal->apWiData); + sqlite3_free(pWal); + } + return rc; +} + +/* +** Try to read the wal-index header. Return 0 on success and 1 if +** there is a problem. +** +** The wal-index is in shared memory. 
Another thread or process might +** be writing the header at the same time this procedure is trying to +** read it, which might result in inconsistency. A dirty read is detected +** by verifying that both copies of the header are the same and also by +** a checksum on the header. +** +** If and only if the read is consistent and the header is different from +** pWal->hdr, then pWal->hdr is updated to the content of the new header +** and *pChanged is set to 1. +** +** If the checksum cannot be verified return non-zero. If the header +** is read successfully and the checksum verified, return zero. +*/ +static int walIndexTryHdr(Wal *pWal, int *pChanged){ + u32 aCksum[2]; /* Checksum on the header content */ + WalIndexHdr h1, h2; /* Two copies of the header content */ + WalIndexHdr volatile *aHdr; /* Header in shared memory */ + + /* The first page of the wal-index must be mapped at this point. */ + assert( pWal->nWiData>0 && pWal->apWiData[0] ); + + /* Read the header. This might happen concurrently with a write to the + ** same area of shared memory on a different CPU in a SMP, + ** meaning it is possible that an inconsistent snapshot is read + ** from the file. If this happens, return non-zero. + ** + ** There are two copies of the header at the beginning of the wal-index. + ** When reading, read [0] first then [1]. Writes are in the reverse order. + ** Memory barriers are used to prevent the compiler or the hardware from + ** reordering the reads and writes. + */ + aHdr = walIndexHdr(pWal); + memcpy(&h1, (void *)&aHdr[0], sizeof(h1)); + walShmBarrier(pWal); + memcpy(&h2, (void *)&aHdr[1], sizeof(h2)); + + if( memcmp(&h1, &h2, sizeof(h1))!=0 ){ + return 1; /* Dirty read */ + } + if( h1.isInit==0 ){ + return 1; /* Malformed header - probably all zeros */ + } + walChecksumBytes(1, (u8*)&h1, sizeof(h1)-sizeof(h1.aCksum), 0, aCksum); + if( aCksum[0]!=h1.aCksum[0] || aCksum[1]!=h1.aCksum[1] ){ + return 1; /* Checksum does not match */ + } + + if( memcmp(&pWal->hdr, &h1, sizeof(WalIndexHdr)) ){ + *pChanged = 1; + memcpy(&pWal->hdr, &h1, sizeof(WalIndexHdr)); + pWal->szPage = (pWal->hdr.szPage&0xfe00) + ((pWal->hdr.szPage&0x0001)<<16); + testcase( pWal->szPage<=32768 ); + testcase( pWal->szPage>=65536 ); + } + + /* The header was successfully read. Return zero. */ + return 0; +} + +/* +** Read the wal-index header from the wal-index and into pWal->hdr. +** If the wal-header appears to be corrupt, try to reconstruct the +** wal-index from the WAL before returning. +** +** Set *pChanged to 1 if the wal-index header value in pWal->hdr is +** changed by this operation. If pWal->hdr is unchanged, set *pChanged +** to 0. +** +** If the wal-index header is successfully read, return SQLITE_OK. +** Otherwise an SQLite error code. +*/ +static int walIndexReadHdr(Wal *pWal, int *pChanged){ + int rc; /* Return code */ + int badHdr; /* True if a header read failed */ + volatile u32 *page0; /* Chunk of wal-index containing header */ + + /* Ensure that page 0 of the wal-index (the page that contains the + ** wal-index header) is mapped. Return early if an error occurs here. + */ + assert( pChanged ); + rc = walIndexPage(pWal, 0, &page0); + if( rc!=SQLITE_OK ){ + return rc; + }; + assert( page0 || pWal->writeLock==0 ); + + /* If the first page of the wal-index has been mapped, try to read the + ** wal-index header immediately, without holding any lock. This usually + ** works, but may fail if the wal-index header is corrupt or currently + ** being modified by another thread or process. + */ + badHdr = (page0 ? 
walIndexTryHdr(pWal, pChanged) : 1); + + /* If the first attempt failed, it might have been due to a race + ** with a writer. So get a WRITE lock and try again. + */ + assert( badHdr==0 || pWal->writeLock==0 ); + if( badHdr ){ + if( pWal->readOnly & WAL_SHM_RDONLY ){ + if( SQLITE_OK==(rc = walLockShared(pWal, WAL_WRITE_LOCK)) ){ + walUnlockShared(pWal, WAL_WRITE_LOCK); + rc = SQLITE_READONLY_RECOVERY; + } + }else if( SQLITE_OK==(rc = walLockExclusive(pWal, WAL_WRITE_LOCK, 1)) ){ + pWal->writeLock = 1; + if( SQLITE_OK==(rc = walIndexPage(pWal, 0, &page0)) ){ + badHdr = walIndexTryHdr(pWal, pChanged); + if( badHdr ){ + /* If the wal-index header is still malformed even while holding + ** a WRITE lock, it can only mean that the header is corrupted and + ** needs to be reconstructed. So run recovery to do exactly that. + */ + rc = walIndexRecover(pWal); + *pChanged = 1; + } + } + pWal->writeLock = 0; + walUnlockExclusive(pWal, WAL_WRITE_LOCK, 1); + } + } + + /* If the header is read successfully, check the version number to make + ** sure the wal-index was not constructed with some future format that + ** this version of SQLite cannot understand. + */ + if( badHdr==0 && pWal->hdr.iVersion!=WALINDEX_MAX_VERSION ){ + rc = SQLITE_CANTOPEN_BKPT; + } + + return rc; +} + +/* +** This is the value that walTryBeginRead returns when it needs to +** be retried. +*/ +#define WAL_RETRY (-1) + +/* +** Attempt to start a read transaction. This might fail due to a race or +** other transient condition. When that happens, it returns WAL_RETRY to +** indicate to the caller that it is safe to retry immediately. +** +** On success return SQLITE_OK. On a permanent failure (such an +** I/O error or an SQLITE_BUSY because another process is running +** recovery) return a positive error code. +** +** The useWal parameter is true to force the use of the WAL and disable +** the case where the WAL is bypassed because it has been completely +** checkpointed. If useWal==0 then this routine calls walIndexReadHdr() +** to make a copy of the wal-index header into pWal->hdr. If the +** wal-index header has changed, *pChanged is set to 1 (as an indication +** to the caller that the local paget cache is obsolete and needs to be +** flushed.) When useWal==1, the wal-index header is assumed to already +** be loaded and the pChanged parameter is unused. +** +** The caller must set the cnt parameter to the number of prior calls to +** this routine during the current read attempt that returned WAL_RETRY. +** This routine will start taking more aggressive measures to clear the +** race conditions after multiple WAL_RETRY returns, and after an excessive +** number of errors will ultimately return SQLITE_PROTOCOL. The +** SQLITE_PROTOCOL return indicates that some other process has gone rogue +** and is not honoring the locking protocol. There is a vanishingly small +** chance that SQLITE_PROTOCOL could be returned because of a run of really +** bad luck when there is lots of contention for the wal-index, but that +** possibility is so small that it can be safely neglected, we believe. +** +** On success, this routine obtains a read lock on +** WAL_READ_LOCK(pWal->readLock). The pWal->readLock integer is +** in the range 0 <= pWal->readLock < WAL_NREADER. If pWal->readLock==(-1) +** that means the Wal does not hold any read lock. The reader must not +** access any database page that is modified by a WAL frame up to and +** including frame number aReadMark[pWal->readLock]. 
The reader will +** use WAL frames up to and including pWal->hdr.mxFrame if pWal->readLock>0 +** Or if pWal->readLock==0, then the reader will ignore the WAL +** completely and get all content directly from the database file. +** If the useWal parameter is 1 then the WAL will never be ignored and +** this routine will always set pWal->readLock>0 on success. +** When the read transaction is completed, the caller must release the +** lock on WAL_READ_LOCK(pWal->readLock) and set pWal->readLock to -1. +** +** This routine uses the nBackfill and aReadMark[] fields of the header +** to select a particular WAL_READ_LOCK() that strives to let the +** checkpoint process do as much work as possible. This routine might +** update values of the aReadMark[] array in the header, but if it does +** so it takes care to hold an exclusive lock on the corresponding +** WAL_READ_LOCK() while changing values. +*/ +static int walTryBeginRead(Wal *pWal, int *pChanged, int useWal, int cnt){ + volatile WalCkptInfo *pInfo; /* Checkpoint information in wal-index */ + u32 mxReadMark; /* Largest aReadMark[] value */ + int mxI; /* Index of largest aReadMark[] value */ + int i; /* Loop counter */ + int rc = SQLITE_OK; /* Return code */ + u32 mxFrame; /* Wal frame to lock to */ + + assert( pWal->readLock<0 ); /* Not currently locked */ + + /* Take steps to avoid spinning forever if there is a protocol error. + ** + ** Circumstances that cause a RETRY should only last for the briefest + ** instances of time. No I/O or other system calls are done while the + ** locks are held, so the locks should not be held for very long. But + ** if we are unlucky, another process that is holding a lock might get + ** paged out or take a page-fault that is time-consuming to resolve, + ** during the few nanoseconds that it is holding the lock. In that case, + ** it might take longer than normal for the lock to free. + ** + ** After 5 RETRYs, we begin calling sqlite3OsSleep(). The first few + ** calls to sqlite3OsSleep() have a delay of 1 microsecond. Really this + ** is more of a scheduler yield than an actual delay. But on the 10th + ** an subsequent retries, the delays start becoming longer and longer, + ** so that on the 100th (and last) RETRY we delay for 323 milliseconds. + ** The total delay time before giving up is less than 10 seconds. + */ + if( cnt>5 ){ + int nDelay = 1; /* Pause time in microseconds */ + if( cnt>100 ){ + VVA_ONLY( pWal->lockError = 1; ) + return SQLITE_PROTOCOL; + } + if( cnt>=10 ) nDelay = (cnt-9)*(cnt-9)*39; + sqlite3OsSleep(pWal->pVfs, nDelay); + } + + if( !useWal ){ + rc = walIndexReadHdr(pWal, pChanged); + if( rc==SQLITE_BUSY ){ + /* If there is not a recovery running in another thread or process + ** then convert BUSY errors to WAL_RETRY. If recovery is known to + ** be running, convert BUSY to BUSY_RECOVERY. There is a race here + ** which might cause WAL_RETRY to be returned even if BUSY_RECOVERY + ** would be technically correct. But the race is benign since with + ** WAL_RETRY this routine will be called again and will probably be + ** right on the second iteration. + */ + if( pWal->apWiData[0]==0 ){ + /* This branch is taken when the xShmMap() method returns SQLITE_BUSY. + ** We assume this is a transient condition, so return WAL_RETRY. The + ** xShmMap() implementation used by the default unix and win32 VFS + ** modules may return SQLITE_BUSY due to a race condition in the + ** code that determines whether or not the shared-memory region + ** must be zeroed before the requested page is returned. 
+ */ + rc = WAL_RETRY; + }else if( SQLITE_OK==(rc = walLockShared(pWal, WAL_RECOVER_LOCK)) ){ + walUnlockShared(pWal, WAL_RECOVER_LOCK); + rc = WAL_RETRY; + }else if( rc==SQLITE_BUSY ){ + rc = SQLITE_BUSY_RECOVERY; + } + } + if( rc!=SQLITE_OK ){ + return rc; + } + } + + pInfo = walCkptInfo(pWal); + if( !useWal && pInfo->nBackfill==pWal->hdr.mxFrame +#ifdef SQLITE_ENABLE_SNAPSHOT + && (pWal->pSnapshot==0 || pWal->hdr.mxFrame==0 + || 0==memcmp(&pWal->hdr, pWal->pSnapshot, sizeof(WalIndexHdr))) +#endif + ){ + /* The WAL has been completely backfilled (or it is empty). + ** and can be safely ignored. + */ + rc = walLockShared(pWal, WAL_READ_LOCK(0)); + walShmBarrier(pWal); + if( rc==SQLITE_OK ){ + if( memcmp((void *)walIndexHdr(pWal), &pWal->hdr, sizeof(WalIndexHdr)) ){ + /* It is not safe to allow the reader to continue here if frames + ** may have been appended to the log before READ_LOCK(0) was obtained. + ** When holding READ_LOCK(0), the reader ignores the entire log file, + ** which implies that the database file contains a trustworthy + ** snapshot. Since holding READ_LOCK(0) prevents a checkpoint from + ** happening, this is usually correct. + ** + ** However, if frames have been appended to the log (or if the log + ** is wrapped and written for that matter) before the READ_LOCK(0) + ** is obtained, that is not necessarily true. A checkpointer may + ** have started to backfill the appended frames but crashed before + ** it finished. Leaving a corrupt image in the database file. + */ + walUnlockShared(pWal, WAL_READ_LOCK(0)); + return WAL_RETRY; + } + pWal->readLock = 0; + return SQLITE_OK; + }else if( rc!=SQLITE_BUSY ){ + return rc; + } + } + + /* If we get this far, it means that the reader will want to use + ** the WAL to get at content from recent commits. The job now is + ** to select one of the aReadMark[] entries that is closest to + ** but not exceeding pWal->hdr.mxFrame and lock that entry. + */ + mxReadMark = 0; + mxI = 0; + mxFrame = pWal->hdr.mxFrame; +#ifdef SQLITE_ENABLE_SNAPSHOT + if( pWal->pSnapshot && pWal->pSnapshot->mxFrame<mxFrame ){ + mxFrame = pWal->pSnapshot->mxFrame; + } +#endif + for(i=1; i<WAL_NREADER; i++){ + u32 thisMark = pInfo->aReadMark[i]; + if( mxReadMark<=thisMark && thisMark<=mxFrame ){ + assert( thisMark!=READMARK_NOT_USED ); + mxReadMark = thisMark; + mxI = i; + } + } + if( (pWal->readOnly & WAL_SHM_RDONLY)==0 + && (mxReadMark<mxFrame || mxI==0) + ){ + for(i=1; i<WAL_NREADER; i++){ + rc = walLockExclusive(pWal, WAL_READ_LOCK(i), 1); + if( rc==SQLITE_OK ){ + mxReadMark = pInfo->aReadMark[i] = mxFrame; + mxI = i; + walUnlockExclusive(pWal, WAL_READ_LOCK(i), 1); + break; + }else if( rc!=SQLITE_BUSY ){ + return rc; + } + } + } + if( mxI==0 ){ + assert( rc==SQLITE_BUSY || (pWal->readOnly & WAL_SHM_RDONLY)!=0 ); + return rc==SQLITE_BUSY ? WAL_RETRY : SQLITE_READONLY_CANTLOCK; + } + + rc = walLockShared(pWal, WAL_READ_LOCK(mxI)); + if( rc ){ + return rc==SQLITE_BUSY ? WAL_RETRY : rc; + } + /* Now that the read-lock has been obtained, check that neither the + ** value in the aReadMark[] array or the contents of the wal-index + ** header have changed. + ** + ** It is necessary to check that the wal-index header did not change + ** between the time it was read and when the shared-lock was obtained + ** on WAL_READ_LOCK(mxI) was obtained to account for the possibility + ** that the log file may have been wrapped by a writer, or that frames + ** that occur later in the log than pWal->hdr.mxFrame may have been + ** copied into the database by a checkpointer. 
If either of these things + ** happened, then reading the database with the current value of + ** pWal->hdr.mxFrame risks reading a corrupted snapshot. So, retry + ** instead. + ** + ** Before checking that the live wal-index header has not changed + ** since it was read, set Wal.minFrame to the first frame in the wal + ** file that has not yet been checkpointed. This client will not need + ** to read any frames earlier than minFrame from the wal file - they + ** can be safely read directly from the database file. + ** + ** Because a ShmBarrier() call is made between taking the copy of + ** nBackfill and checking that the wal-header in shared-memory still + ** matches the one cached in pWal->hdr, it is guaranteed that the + ** checkpointer that set nBackfill was not working with a wal-index + ** header newer than that cached in pWal->hdr. If it were, that could + ** cause a problem. The checkpointer could omit to checkpoint + ** a version of page X that lies before pWal->minFrame (call that version + ** A) on the basis that there is a newer version (version B) of the same + ** page later in the wal file. But if version B happens to like past + ** frame pWal->hdr.mxFrame - then the client would incorrectly assume + ** that it can read version A from the database file. However, since + ** we can guarantee that the checkpointer that set nBackfill could not + ** see any pages past pWal->hdr.mxFrame, this problem does not come up. + */ + pWal->minFrame = pInfo->nBackfill+1; + walShmBarrier(pWal); + if( pInfo->aReadMark[mxI]!=mxReadMark + || memcmp((void *)walIndexHdr(pWal), &pWal->hdr, sizeof(WalIndexHdr)) + ){ + walUnlockShared(pWal, WAL_READ_LOCK(mxI)); + return WAL_RETRY; + }else{ + assert( mxReadMark<=pWal->hdr.mxFrame ); + pWal->readLock = (i16)mxI; + } + return rc; +} + +/* +** Begin a read transaction on the database. +** +** This routine used to be called sqlite3OpenSnapshot() and with good reason: +** it takes a snapshot of the state of the WAL and wal-index for the current +** instant in time. The current thread will continue to use this snapshot. +** Other threads might append new content to the WAL and wal-index but +** that extra content is ignored by the current thread. +** +** If the database contents have changes since the previous read +** transaction, then *pChanged is set to 1 before returning. The +** Pager layer will use this to know that is cache is stale and +** needs to be flushed. +*/ +SQLITE_PRIVATE int sqlite3WalBeginReadTransaction(Wal *pWal, int *pChanged){ + int rc; /* Return code */ + int cnt = 0; /* Number of TryBeginRead attempts */ + +#ifdef SQLITE_ENABLE_SNAPSHOT + int bChanged = 0; + WalIndexHdr *pSnapshot = pWal->pSnapshot; + if( pSnapshot && memcmp(pSnapshot, &pWal->hdr, sizeof(WalIndexHdr))!=0 ){ + bChanged = 1; + } +#endif + + do{ + rc = walTryBeginRead(pWal, pChanged, 0, ++cnt); + }while( rc==WAL_RETRY ); + testcase( (rc&0xff)==SQLITE_BUSY ); + testcase( (rc&0xff)==SQLITE_IOERR ); + testcase( rc==SQLITE_PROTOCOL ); + testcase( rc==SQLITE_OK ); + +#ifdef SQLITE_ENABLE_SNAPSHOT + if( rc==SQLITE_OK ){ + if( pSnapshot && memcmp(pSnapshot, &pWal->hdr, sizeof(WalIndexHdr))!=0 ){ + /* At this point the client has a lock on an aReadMark[] slot holding + ** a value equal to or smaller than pSnapshot->mxFrame, but pWal->hdr + ** is populated with the wal-index header corresponding to the head + ** of the wal file. Verify that pSnapshot is still valid before + ** continuing. 
Reasons why pSnapshot might no longer be valid: + ** + ** (1) The WAL file has been reset since the snapshot was taken. + ** In this case, the salt will have changed. + ** + ** (2) A checkpoint as been attempted that wrote frames past + ** pSnapshot->mxFrame into the database file. Note that the + ** checkpoint need not have completed for this to cause problems. + */ + volatile WalCkptInfo *pInfo = walCkptInfo(pWal); + + assert( pWal->readLock>0 || pWal->hdr.mxFrame==0 ); + assert( pInfo->aReadMark[pWal->readLock]<=pSnapshot->mxFrame ); + + /* It is possible that there is a checkpointer thread running + ** concurrent with this code. If this is the case, it may be that the + ** checkpointer has already determined that it will checkpoint + ** snapshot X, where X is later in the wal file than pSnapshot, but + ** has not yet set the pInfo->nBackfillAttempted variable to indicate + ** its intent. To avoid the race condition this leads to, ensure that + ** there is no checkpointer process by taking a shared CKPT lock + ** before checking pInfo->nBackfillAttempted. */ + rc = walLockShared(pWal, WAL_CKPT_LOCK); + + if( rc==SQLITE_OK ){ + /* Check that the wal file has not been wrapped. Assuming that it has + ** not, also check that no checkpointer has attempted to checkpoint any + ** frames beyond pSnapshot->mxFrame. If either of these conditions are + ** true, return SQLITE_BUSY_SNAPSHOT. Otherwise, overwrite pWal->hdr + ** with *pSnapshot and set *pChanged as appropriate for opening the + ** snapshot. */ + if( !memcmp(pSnapshot->aSalt, pWal->hdr.aSalt, sizeof(pWal->hdr.aSalt)) + && pSnapshot->mxFrame>=pInfo->nBackfillAttempted + ){ + assert( pWal->readLock>0 ); + memcpy(&pWal->hdr, pSnapshot, sizeof(WalIndexHdr)); + *pChanged = bChanged; + }else{ + rc = SQLITE_BUSY_SNAPSHOT; + } + + /* Release the shared CKPT lock obtained above. */ + walUnlockShared(pWal, WAL_CKPT_LOCK); + } + + + if( rc!=SQLITE_OK ){ + sqlite3WalEndReadTransaction(pWal); + } + } + } +#endif + return rc; +} + +/* +** Finish with a read transaction. All this does is release the +** read-lock. +*/ +SQLITE_PRIVATE void sqlite3WalEndReadTransaction(Wal *pWal){ + sqlite3WalEndWriteTransaction(pWal); + if( pWal->readLock>=0 ){ + walUnlockShared(pWal, WAL_READ_LOCK(pWal->readLock)); + pWal->readLock = -1; + } +} + +/* +** Search the wal file for page pgno. If found, set *piRead to the frame that +** contains the page. Otherwise, if pgno is not in the wal file, set *piRead +** to zero. +** +** Return SQLITE_OK if successful, or an error code if an error occurs. If an +** error does occur, the final value of *piRead is undefined. +*/ +SQLITE_PRIVATE int sqlite3WalFindFrame( + Wal *pWal, /* WAL handle */ + Pgno pgno, /* Database page number to read data for */ + u32 *piRead /* OUT: Frame number (or zero) */ +){ + u32 iRead = 0; /* If !=0, WAL frame to return data from */ + u32 iLast = pWal->hdr.mxFrame; /* Last page in WAL for this reader */ + int iHash; /* Used to loop through N hash tables */ + int iMinHash; + + /* This routine is only be called from within a read transaction. */ + assert( pWal->readLock>=0 || pWal->lockError ); + + /* If the "last page" field of the wal-index header snapshot is 0, then + ** no data will be read from the wal under any circumstances. Return early + ** in this case as an optimization. Likewise, if pWal->readLock==0, + ** then the WAL is ignored by the reader so return early, as if the + ** WAL were empty. 
+ */ + if( iLast==0 || pWal->readLock==0 ){ + *piRead = 0; + return SQLITE_OK; + } + + /* Search the hash table or tables for an entry matching page number + ** pgno. Each iteration of the following for() loop searches one + ** hash table (each hash table indexes up to HASHTABLE_NPAGE frames). + ** + ** This code might run concurrently to the code in walIndexAppend() + ** that adds entries to the wal-index (and possibly to this hash + ** table). This means the value just read from the hash + ** slot (aHash[iKey]) may have been added before or after the + ** current read transaction was opened. Values added after the + ** read transaction was opened may have been written incorrectly - + ** i.e. these slots may contain garbage data. However, we assume + ** that any slots written before the current read transaction was + ** opened remain unmodified. + ** + ** For the reasons above, the if(...) condition featured in the inner + ** loop of the following block is more stringent that would be required + ** if we had exclusive access to the hash-table: + ** + ** (aPgno[iFrame]==pgno): + ** This condition filters out normal hash-table collisions. + ** + ** (iFrame<=iLast): + ** This condition filters out entries that were added to the hash + ** table after the current read-transaction had started. + */ + iMinHash = walFramePage(pWal->minFrame); + for(iHash=walFramePage(iLast); iHash>=iMinHash && iRead==0; iHash--){ + volatile ht_slot *aHash; /* Pointer to hash table */ + volatile u32 *aPgno; /* Pointer to array of page numbers */ + u32 iZero; /* Frame number corresponding to aPgno[0] */ + int iKey; /* Hash slot index */ + int nCollide; /* Number of hash collisions remaining */ + int rc; /* Error code */ + + rc = walHashGet(pWal, iHash, &aHash, &aPgno, &iZero); + if( rc!=SQLITE_OK ){ + return rc; + } + nCollide = HASHTABLE_NSLOT; + for(iKey=walHash(pgno); aHash[iKey]; iKey=walNextHash(iKey)){ + u32 iFrame = aHash[iKey] + iZero; + if( iFrame<=iLast && iFrame>=pWal->minFrame && aPgno[aHash[iKey]]==pgno ){ + assert( iFrame>iRead || CORRUPT_DB ); + iRead = iFrame; + } + if( (nCollide--)==0 ){ + return SQLITE_CORRUPT_BKPT; + } + } + } + +#ifdef SQLITE_ENABLE_EXPENSIVE_ASSERT + /* If expensive assert() statements are available, do a linear search + ** of the wal-index file content. Make sure the results agree with the + ** result obtained using the hash indexes above. */ + { + u32 iRead2 = 0; + u32 iTest; + assert( pWal->minFrame>0 ); + for(iTest=iLast; iTest>=pWal->minFrame; iTest--){ + if( walFramePgno(pWal, iTest)==pgno ){ + iRead2 = iTest; + break; + } + } + assert( iRead==iRead2 ); + } +#endif + + *piRead = iRead; + return SQLITE_OK; +} + +/* +** Read the contents of frame iRead from the wal file into buffer pOut +** (which is nOut bytes in size). Return SQLITE_OK if successful, or an +** error code otherwise. +*/ +SQLITE_PRIVATE int sqlite3WalReadFrame( + Wal *pWal, /* WAL handle */ + u32 iRead, /* Frame to read */ + int nOut, /* Size of buffer pOut in bytes */ + u8 *pOut /* Buffer to write page data to */ +){ + int sz; + i64 iOffset; + sz = pWal->hdr.szPage; + sz = (sz&0xfe00) + ((sz&0x0001)<<16); + testcase( sz<=32768 ); + testcase( sz>=65536 ); + iOffset = walFrameOffset(iRead, sz) + WAL_FRAME_HDRSIZE; + /* testcase( IS_BIG_INT(iOffset) ); // requires a 4GiB WAL */ + return sqlite3OsRead(pWal->pWalFd, pOut, (nOut>sz ? sz : nOut), iOffset); +} + +/* +** Return the size of the database in pages (or zero, if unknown). 
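+**
+** The value returned comes from pWal->hdr.nPage, the snapshot of the
+** wal-index header taken when the current read transaction was opened,
+** so it reflects the database size as of that snapshot.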
+*/ +SQLITE_PRIVATE Pgno sqlite3WalDbsize(Wal *pWal){ + if( pWal && ALWAYS(pWal->readLock>=0) ){ + return pWal->hdr.nPage; + } + return 0; +} + + +/* +** This function starts a write transaction on the WAL. +** +** A read transaction must have already been started by a prior call +** to sqlite3WalBeginReadTransaction(). +** +** If another thread or process has written into the database since +** the read transaction was started, then it is not possible for this +** thread to write as doing so would cause a fork. So this routine +** returns SQLITE_BUSY in that case and no write transaction is started. +** +** There can only be a single writer active at a time. +*/ +SQLITE_PRIVATE int sqlite3WalBeginWriteTransaction(Wal *pWal){ + int rc; + + /* Cannot start a write transaction without first holding a read + ** transaction. */ + assert( pWal->readLock>=0 ); + assert( pWal->writeLock==0 && pWal->iReCksum==0 ); + + if( pWal->readOnly ){ + return SQLITE_READONLY; + } + + /* Only one writer allowed at a time. Get the write lock. Return + ** SQLITE_BUSY if unable. + */ + rc = walLockExclusive(pWal, WAL_WRITE_LOCK, 1); + if( rc ){ + return rc; + } + pWal->writeLock = 1; + + /* If another connection has written to the database file since the + ** time the read transaction on this connection was started, then + ** the write is disallowed. + */ + if( memcmp(&pWal->hdr, (void *)walIndexHdr(pWal), sizeof(WalIndexHdr))!=0 ){ + walUnlockExclusive(pWal, WAL_WRITE_LOCK, 1); + pWal->writeLock = 0; + rc = SQLITE_BUSY_SNAPSHOT; + } + + return rc; +} + +/* +** End a write transaction. The commit has already been done. This +** routine merely releases the lock. +*/ +SQLITE_PRIVATE int sqlite3WalEndWriteTransaction(Wal *pWal){ + if( pWal->writeLock ){ + walUnlockExclusive(pWal, WAL_WRITE_LOCK, 1); + pWal->writeLock = 0; + pWal->iReCksum = 0; + pWal->truncateOnCommit = 0; + } + return SQLITE_OK; +} + +/* +** If any data has been written (but not committed) to the log file, this +** function moves the write-pointer back to the start of the transaction. +** +** Additionally, the callback function is invoked for each frame written +** to the WAL since the start of the transaction. If the callback returns +** other than SQLITE_OK, it is not invoked again and the error code is +** returned to the caller. +** +** Otherwise, if the callback function does not return an error, this +** function returns SQLITE_OK. +*/ +SQLITE_PRIVATE int sqlite3WalUndo(Wal *pWal, int (*xUndo)(void *, Pgno), void *pUndoCtx){ + int rc = SQLITE_OK; + if( ALWAYS(pWal->writeLock) ){ + Pgno iMax = pWal->hdr.mxFrame; + Pgno iFrame; + + /* Restore the clients cache of the wal-index header to the state it + ** was in before the client began writing to the database. + */ + memcpy(&pWal->hdr, (void *)walIndexHdr(pWal), sizeof(WalIndexHdr)); + + for(iFrame=pWal->hdr.mxFrame+1; + ALWAYS(rc==SQLITE_OK) && iFrame<=iMax; + iFrame++ + ){ + /* This call cannot fail. Unless the page for which the page number + ** is passed as the second argument is (a) in the cache and + ** (b) has an outstanding reference, then xUndo is either a no-op + ** (if (a) is false) or simply expels the page from the cache (if (b) + ** is false). + ** + ** If the upper layer is doing a rollback, it is guaranteed that there + ** are no outstanding references to any page other than page 1. And + ** page 1 is never written to the log until the transaction is + ** committed. As a result, the call to xUndo may not fail. 
+ */ + assert( walFramePgno(pWal, iFrame)!=1 ); + rc = xUndo(pUndoCtx, walFramePgno(pWal, iFrame)); + } + if( iMax!=pWal->hdr.mxFrame ) walCleanupHash(pWal); + } + return rc; +} + +/* +** Argument aWalData must point to an array of WAL_SAVEPOINT_NDATA u32 +** values. This function populates the array with values required to +** "rollback" the write position of the WAL handle back to the current +** point in the event of a savepoint rollback (via WalSavepointUndo()). +*/ +SQLITE_PRIVATE void sqlite3WalSavepoint(Wal *pWal, u32 *aWalData){ + assert( pWal->writeLock ); + aWalData[0] = pWal->hdr.mxFrame; + aWalData[1] = pWal->hdr.aFrameCksum[0]; + aWalData[2] = pWal->hdr.aFrameCksum[1]; + aWalData[3] = pWal->nCkpt; +} + +/* +** Move the write position of the WAL back to the point identified by +** the values in the aWalData[] array. aWalData must point to an array +** of WAL_SAVEPOINT_NDATA u32 values that has been previously populated +** by a call to WalSavepoint(). +*/ +SQLITE_PRIVATE int sqlite3WalSavepointUndo(Wal *pWal, u32 *aWalData){ + int rc = SQLITE_OK; + + assert( pWal->writeLock ); + assert( aWalData[3]!=pWal->nCkpt || aWalData[0]<=pWal->hdr.mxFrame ); + + if( aWalData[3]!=pWal->nCkpt ){ + /* This savepoint was opened immediately after the write-transaction + ** was started. Right after that, the writer decided to wrap around + ** to the start of the log. Update the savepoint values to match. + */ + aWalData[0] = 0; + aWalData[3] = pWal->nCkpt; + } + + if( aWalData[0]<pWal->hdr.mxFrame ){ + pWal->hdr.mxFrame = aWalData[0]; + pWal->hdr.aFrameCksum[0] = aWalData[1]; + pWal->hdr.aFrameCksum[1] = aWalData[2]; + walCleanupHash(pWal); + } + + return rc; +} + +/* +** This function is called just before writing a set of frames to the log +** file (see sqlite3WalFrames()). It checks to see if, instead of appending +** to the current log file, it is possible to overwrite the start of the +** existing log file with the new frames (i.e. "reset" the log). If so, +** it sets pWal->hdr.mxFrame to 0. Otherwise, pWal->hdr.mxFrame is left +** unchanged. +** +** SQLITE_OK is returned if no error is encountered (regardless of whether +** or not pWal->hdr.mxFrame is modified). An SQLite error code is returned +** if an error occurs. +*/ +static int walRestartLog(Wal *pWal){ + int rc = SQLITE_OK; + int cnt; + + if( pWal->readLock==0 ){ + volatile WalCkptInfo *pInfo = walCkptInfo(pWal); + assert( pInfo->nBackfill==pWal->hdr.mxFrame ); + if( pInfo->nBackfill>0 ){ + u32 salt1; + sqlite3_randomness(4, &salt1); + rc = walLockExclusive(pWal, WAL_READ_LOCK(1), WAL_NREADER-1); + if( rc==SQLITE_OK ){ + /* If all readers are using WAL_READ_LOCK(0) (in other words if no + ** readers are currently using the WAL), then the transactions + ** frames will overwrite the start of the existing log. Update the + ** wal-index header to reflect this. + ** + ** In theory it would be Ok to update the cache of the header only + ** at this point. But updating the actual wal-index header is also + ** safe and means there is no special case for sqlite3WalUndo() + ** to handle if this transaction is rolled back. 
*/ + walRestartHdr(pWal, salt1); + walUnlockExclusive(pWal, WAL_READ_LOCK(1), WAL_NREADER-1); + }else if( rc!=SQLITE_BUSY ){ + return rc; + } + } + walUnlockShared(pWal, WAL_READ_LOCK(0)); + pWal->readLock = -1; + cnt = 0; + do{ + int notUsed; + rc = walTryBeginRead(pWal, ¬Used, 1, ++cnt); + }while( rc==WAL_RETRY ); + assert( (rc&0xff)!=SQLITE_BUSY ); /* BUSY not possible when useWal==1 */ + testcase( (rc&0xff)==SQLITE_IOERR ); + testcase( rc==SQLITE_PROTOCOL ); + testcase( rc==SQLITE_OK ); + } + return rc; +} + +/* +** Information about the current state of the WAL file and where +** the next fsync should occur - passed from sqlite3WalFrames() into +** walWriteToLog(). +*/ +typedef struct WalWriter { + Wal *pWal; /* The complete WAL information */ + sqlite3_file *pFd; /* The WAL file to which we write */ + sqlite3_int64 iSyncPoint; /* Fsync at this offset */ + int syncFlags; /* Flags for the fsync */ + int szPage; /* Size of one page */ +} WalWriter; + +/* +** Write iAmt bytes of content into the WAL file beginning at iOffset. +** Do a sync when crossing the p->iSyncPoint boundary. +** +** In other words, if iSyncPoint is in between iOffset and iOffset+iAmt, +** first write the part before iSyncPoint, then sync, then write the +** rest. +*/ +static int walWriteToLog( + WalWriter *p, /* WAL to write to */ + void *pContent, /* Content to be written */ + int iAmt, /* Number of bytes to write */ + sqlite3_int64 iOffset /* Start writing at this offset */ +){ + int rc; + if( iOffset<p->iSyncPoint && iOffset+iAmt>=p->iSyncPoint ){ + int iFirstAmt = (int)(p->iSyncPoint - iOffset); + rc = sqlite3OsWrite(p->pFd, pContent, iFirstAmt, iOffset); + if( rc ) return rc; + iOffset += iFirstAmt; + iAmt -= iFirstAmt; + pContent = (void*)(iFirstAmt + (char*)pContent); + assert( p->syncFlags & (SQLITE_SYNC_NORMAL|SQLITE_SYNC_FULL) ); + rc = sqlite3OsSync(p->pFd, p->syncFlags & SQLITE_SYNC_MASK); + if( iAmt==0 || rc ) return rc; + } + rc = sqlite3OsWrite(p->pFd, pContent, iAmt, iOffset); + return rc; +} + +/* +** Write out a single frame of the WAL +*/ +static int walWriteOneFrame( + WalWriter *p, /* Where to write the frame */ + PgHdr *pPage, /* The page of the frame to be written */ + int nTruncate, /* The commit flag. Usually 0. >0 for commit */ + sqlite3_int64 iOffset /* Byte offset at which to write */ +){ + int rc; /* Result code from subfunctions */ + void *pData; /* Data actually written */ + u8 aFrame[WAL_FRAME_HDRSIZE]; /* Buffer to assemble frame-header in */ +#if defined(SQLITE_HAS_CODEC) + if( (pData = sqlite3PagerCodec(pPage))==0 ) return SQLITE_NOMEM; +#else + pData = pPage->pData; +#endif + walEncodeFrame(p->pWal, pPage->pgno, nTruncate, pData, aFrame); + rc = walWriteToLog(p, aFrame, sizeof(aFrame), iOffset); + if( rc ) return rc; + /* Write the page data */ + rc = walWriteToLog(p, pData, p->szPage, iOffset+sizeof(aFrame)); + return rc; +} + +/* +** This function is called as part of committing a transaction within which +** one or more frames have been overwritten. It updates the checksums for +** all frames written to the wal file by the current transaction starting +** with the earliest to have been overwritten. +** +** SQLITE_OK is returned if successful, or an SQLite error code otherwise. 
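+**
+** The frame checksums form a cumulative chain: each frame's stored checksum
+** is the running (s1,s2) pair after its header and page data are folded in,
+** seeded by the previous frame (or by the WAL header for frame 1). That is
+** why overwriting any earlier frame forces this routine to re-checksum every
+** later frame of the transaction. A minimal standalone sketch of the
+** checksum step, with an illustrative helper name and assuming native byte
+** order and an input length that is a multiple of 8 bytes:
+**
+**   static void walCksumSketch(const unsigned int *aData, int nWord,
+**                              unsigned int *pS1, unsigned int *pS2){
+**     int i;
+**     for(i=0; i<nWord; i+=2){
+**       *pS1 += aData[i]   + *pS2;
+**       *pS2 += aData[i+1] + *pS1;
+**     }
+**   }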
+*/ +static int walRewriteChecksums(Wal *pWal, u32 iLast){ + const int szPage = pWal->szPage;/* Database page size */ + int rc = SQLITE_OK; /* Return code */ + u8 *aBuf; /* Buffer to load data from wal file into */ + u8 aFrame[WAL_FRAME_HDRSIZE]; /* Buffer to assemble frame-headers in */ + u32 iRead; /* Next frame to read from wal file */ + i64 iCksumOff; + + aBuf = sqlite3_malloc(szPage + WAL_FRAME_HDRSIZE); + if( aBuf==0 ) return SQLITE_NOMEM; + + /* Find the checksum values to use as input for the recalculating the + ** first checksum. If the first frame is frame 1 (implying that the current + ** transaction restarted the wal file), these values must be read from the + ** wal-file header. Otherwise, read them from the frame header of the + ** previous frame. */ + assert( pWal->iReCksum>0 ); + if( pWal->iReCksum==1 ){ + iCksumOff = 24; + }else{ + iCksumOff = walFrameOffset(pWal->iReCksum-1, szPage) + 16; + } + rc = sqlite3OsRead(pWal->pWalFd, aBuf, sizeof(u32)*2, iCksumOff); + pWal->hdr.aFrameCksum[0] = sqlite3Get4byte(aBuf); + pWal->hdr.aFrameCksum[1] = sqlite3Get4byte(&aBuf[sizeof(u32)]); + + iRead = pWal->iReCksum; + pWal->iReCksum = 0; + for(; rc==SQLITE_OK && iRead<=iLast; iRead++){ + i64 iOff = walFrameOffset(iRead, szPage); + rc = sqlite3OsRead(pWal->pWalFd, aBuf, szPage+WAL_FRAME_HDRSIZE, iOff); + if( rc==SQLITE_OK ){ + u32 iPgno, nDbSize; + iPgno = sqlite3Get4byte(aBuf); + nDbSize = sqlite3Get4byte(&aBuf[4]); + + walEncodeFrame(pWal, iPgno, nDbSize, &aBuf[WAL_FRAME_HDRSIZE], aFrame); + rc = sqlite3OsWrite(pWal->pWalFd, aFrame, sizeof(aFrame), iOff); + } + } + + sqlite3_free(aBuf); + return rc; +} + +/* +** Write a set of frames to the log. The caller must hold the write-lock +** on the log file (obtained using sqlite3WalBeginWriteTransaction()). +*/ +SQLITE_PRIVATE int sqlite3WalFrames( + Wal *pWal, /* Wal handle to write to */ + int szPage, /* Database page-size in bytes */ + PgHdr *pList, /* List of dirty pages to write */ + Pgno nTruncate, /* Database size after this commit */ + int isCommit, /* True if this is a commit */ + int sync_flags /* Flags to pass to OsSync() (or 0) */ +){ + int rc; /* Used to catch return codes */ + u32 iFrame; /* Next frame address */ + PgHdr *p; /* Iterator to run through pList with. */ + PgHdr *pLast = 0; /* Last frame in list */ + int nExtra = 0; /* Number of extra copies of last page */ + int szFrame; /* The size of a single frame */ + i64 iOffset; /* Next byte to write in WAL file */ + WalWriter w; /* The writer */ + u32 iFirst = 0; /* First frame that may be overwritten */ + WalIndexHdr *pLive; /* Pointer to shared header */ + + assert( pList ); + assert( pWal->writeLock ); + + /* If this frame set completes a transaction, then nTruncate>0. If + ** nTruncate==0 then this frame set does not complete the transaction. */ + assert( (isCommit!=0)==(nTruncate!=0) ); + +#if defined(SQLITE_TEST) && defined(SQLITE_DEBUG) + { int cnt; for(cnt=0, p=pList; p; p=p->pDirty, cnt++){} + WALTRACE(("WAL%p: frame write begin. %d frames. mxFrame=%d. %s\n", + pWal, cnt, pWal->hdr.mxFrame, isCommit ? "Commit" : "Spill")); + } +#endif + + pLive = (WalIndexHdr*)walIndexHdr(pWal); + if( memcmp(&pWal->hdr, (void *)pLive, sizeof(WalIndexHdr))!=0 ){ + iFirst = pLive->mxFrame+1; + } + + /* See if it is possible to write these frames into the start of the + ** log file, instead of appending to it at pWal->hdr.mxFrame. 
+ */ + if( SQLITE_OK!=(rc = walRestartLog(pWal)) ){ + return rc; + } + + /* If this is the first frame written into the log, write the WAL + ** header to the start of the WAL file. See comments at the top of + ** this source file for a description of the WAL header format. + */ + iFrame = pWal->hdr.mxFrame; + if( iFrame==0 ){ + u8 aWalHdr[WAL_HDRSIZE]; /* Buffer to assemble wal-header in */ + u32 aCksum[2]; /* Checksum for wal-header */ + + sqlite3Put4byte(&aWalHdr[0], (WAL_MAGIC | SQLITE_BIGENDIAN)); + sqlite3Put4byte(&aWalHdr[4], WAL_MAX_VERSION); + sqlite3Put4byte(&aWalHdr[8], szPage); + sqlite3Put4byte(&aWalHdr[12], pWal->nCkpt); + if( pWal->nCkpt==0 ) sqlite3_randomness(8, pWal->hdr.aSalt); + memcpy(&aWalHdr[16], pWal->hdr.aSalt, 8); + walChecksumBytes(1, aWalHdr, WAL_HDRSIZE-2*4, 0, aCksum); + sqlite3Put4byte(&aWalHdr[24], aCksum[0]); + sqlite3Put4byte(&aWalHdr[28], aCksum[1]); + + pWal->szPage = szPage; + pWal->hdr.bigEndCksum = SQLITE_BIGENDIAN; + pWal->hdr.aFrameCksum[0] = aCksum[0]; + pWal->hdr.aFrameCksum[1] = aCksum[1]; + pWal->truncateOnCommit = 1; + + rc = sqlite3OsWrite(pWal->pWalFd, aWalHdr, sizeof(aWalHdr), 0); + WALTRACE(("WAL%p: wal-header write %s\n", pWal, rc ? "failed" : "ok")); + if( rc!=SQLITE_OK ){ + return rc; + } + + /* Sync the header (unless SQLITE_IOCAP_SEQUENTIAL is true or unless + ** all syncing is turned off by PRAGMA synchronous=OFF). Otherwise + ** an out-of-order write following a WAL restart could result in + ** database corruption. See the ticket: + ** + ** http://localhost:591/sqlite/info/ff5be73dee + */ + if( pWal->syncHeader && sync_flags ){ + rc = sqlite3OsSync(pWal->pWalFd, sync_flags & SQLITE_SYNC_MASK); + if( rc ) return rc; + } + } + assert( (int)pWal->szPage==szPage ); + + /* Setup information needed to write frames into the WAL */ + w.pWal = pWal; + w.pFd = pWal->pWalFd; + w.iSyncPoint = 0; + w.syncFlags = sync_flags; + w.szPage = szPage; + iOffset = walFrameOffset(iFrame+1, szPage); + szFrame = szPage + WAL_FRAME_HDRSIZE; + + /* Write all frames into the log file exactly once */ + for(p=pList; p; p=p->pDirty){ + int nDbSize; /* 0 normally. Positive == commit flag */ + + /* Check if this page has already been written into the wal file by + ** the current transaction. If so, overwrite the existing frame and + ** set Wal.writeLock to WAL_WRITELOCK_RECKSUM - indicating that + ** checksums must be recomputed when the transaction is committed. */ + if( iFirst && (p->pDirty || isCommit==0) ){ + u32 iWrite = 0; + VVA_ONLY(rc =) sqlite3WalFindFrame(pWal, p->pgno, &iWrite); + assert( rc==SQLITE_OK || iWrite==0 ); + if( iWrite>=iFirst ){ + i64 iOff = walFrameOffset(iWrite, szPage) + WAL_FRAME_HDRSIZE; + void *pData; + if( pWal->iReCksum==0 || iWrite<pWal->iReCksum ){ + pWal->iReCksum = iWrite; + } +#if defined(SQLITE_HAS_CODEC) + if( (pData = sqlite3PagerCodec(p))==0 ) return SQLITE_NOMEM; +#else + pData = p->pData; +#endif + rc = sqlite3OsWrite(pWal->pWalFd, pData, szPage, iOff); + if( rc ) return rc; + p->flags &= ~PGHDR_WAL_APPEND; + continue; + } + } + + iFrame++; + assert( iOffset==walFrameOffset(iFrame, szPage) ); + nDbSize = (isCommit && p->pDirty==0) ? nTruncate : 0; + rc = walWriteOneFrame(&w, p, nDbSize, iOffset); + if( rc ) return rc; + pLast = p; + iOffset += szFrame; + p->flags |= PGHDR_WAL_APPEND; + } + + /* Recalculate checksums within the wal file if required. 
*/ + if( isCommit && pWal->iReCksum ){ + rc = walRewriteChecksums(pWal, iFrame); + if( rc ) return rc; + } + + /* If this is the end of a transaction, then we might need to pad + ** the transaction and/or sync the WAL file. + ** + ** Padding and syncing only occur if this set of frames complete a + ** transaction and if PRAGMA synchronous=FULL. If synchronous==NORMAL + ** or synchronous==OFF, then no padding or syncing are needed. + ** + ** If SQLITE_IOCAP_POWERSAFE_OVERWRITE is defined, then padding is not + ** needed and only the sync is done. If padding is needed, then the + ** final frame is repeated (with its commit mark) until the next sector + ** boundary is crossed. Only the part of the WAL prior to the last + ** sector boundary is synced; the part of the last frame that extends + ** past the sector boundary is written after the sync. + */ + if( isCommit && (sync_flags & WAL_SYNC_TRANSACTIONS)!=0 ){ + if( pWal->padToSectorBoundary ){ + int sectorSize = sqlite3SectorSize(pWal->pWalFd); + w.iSyncPoint = ((iOffset+sectorSize-1)/sectorSize)*sectorSize; + while( iOffset<w.iSyncPoint ){ + rc = walWriteOneFrame(&w, pLast, nTruncate, iOffset); + if( rc ) return rc; + iOffset += szFrame; + nExtra++; + } + }else{ + rc = sqlite3OsSync(w.pFd, sync_flags & SQLITE_SYNC_MASK); + } + } + + /* If this frame set completes the first transaction in the WAL and + ** if PRAGMA journal_size_limit is set, then truncate the WAL to the + ** journal size limit, if possible. + */ + if( isCommit && pWal->truncateOnCommit && pWal->mxWalSize>=0 ){ + i64 sz = pWal->mxWalSize; + if( walFrameOffset(iFrame+nExtra+1, szPage)>pWal->mxWalSize ){ + sz = walFrameOffset(iFrame+nExtra+1, szPage); + } + walLimitSize(pWal, sz); + pWal->truncateOnCommit = 0; + } + + /* Append data to the wal-index. It is not necessary to lock the + ** wal-index to do this as the SQLITE_SHM_WRITE lock held on the wal-index + ** guarantees that there are no other writers, and no data that may + ** be in use by existing readers is being overwritten. + */ + iFrame = pWal->hdr.mxFrame; + for(p=pList; p && rc==SQLITE_OK; p=p->pDirty){ + if( (p->flags & PGHDR_WAL_APPEND)==0 ) continue; + iFrame++; + rc = walIndexAppend(pWal, iFrame, p->pgno); + } + while( rc==SQLITE_OK && nExtra>0 ){ + iFrame++; + nExtra--; + rc = walIndexAppend(pWal, iFrame, pLast->pgno); + } + + if( rc==SQLITE_OK ){ + /* Update the private copy of the header. */ + pWal->hdr.szPage = (u16)((szPage&0xff00) | (szPage>>16)); + testcase( szPage<=32768 ); + testcase( szPage>=65536 ); + pWal->hdr.mxFrame = iFrame; + if( isCommit ){ + pWal->hdr.iChange++; + pWal->hdr.nPage = nTruncate; + } + /* If this is a commit, update the wal-index header too. */ + if( isCommit ){ + walIndexWriteHdr(pWal); + pWal->iCallback = iFrame; + } + } + + WALTRACE(("WAL%p: frame write %s\n", pWal, rc ? "failed" : "ok")); + return rc; +} + +/* +** This routine is called to implement sqlite3_wal_checkpoint() and +** related interfaces. +** +** Obtain a CHECKPOINT lock and then backfill as much information as +** we can from WAL into the database. +** +** If parameter xBusy is not NULL, it is a pointer to a busy-handler +** callback. In this case this function runs a blocking checkpoint. 
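+**
+** From the application's side this logic is reached through
+** sqlite3_wal_checkpoint_v2() or "PRAGMA wal_checkpoint". A minimal sketch
+** of a blocking checkpoint, assuming an open database handle "db" and using
+** sqlite3_busy_timeout() to supply the busy-handler:
+**
+**   int rc, nLog = 0, nCkpt = 0;
+**   sqlite3_busy_timeout(db, 2000);
+**   rc = sqlite3_wal_checkpoint_v2(db, "main",
+**            SQLITE_CHECKPOINT_TRUNCATE, &nLog, &nCkpt);
+**
+** On success nLog and nCkpt report the number of frames in the WAL and the
+** number of frames checkpointed, matching pnLog/pnCkpt below.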
+*/ +SQLITE_PRIVATE int sqlite3WalCheckpoint( + Wal *pWal, /* Wal connection */ + int eMode, /* PASSIVE, FULL, RESTART, or TRUNCATE */ + int (*xBusy)(void*), /* Function to call when busy */ + void *pBusyArg, /* Context argument for xBusyHandler */ + int sync_flags, /* Flags to sync db file with (or 0) */ + int nBuf, /* Size of temporary buffer */ + u8 *zBuf, /* Temporary buffer to use */ + int *pnLog, /* OUT: Number of frames in WAL */ + int *pnCkpt /* OUT: Number of backfilled frames in WAL */ +){ + int rc; /* Return code */ + int isChanged = 0; /* True if a new wal-index header is loaded */ + int eMode2 = eMode; /* Mode to pass to walCheckpoint() */ + int (*xBusy2)(void*) = xBusy; /* Busy handler for eMode2 */ + + assert( pWal->ckptLock==0 ); + assert( pWal->writeLock==0 ); + + /* EVIDENCE-OF: R-62920-47450 The busy-handler callback is never invoked + ** in the SQLITE_CHECKPOINT_PASSIVE mode. */ + assert( eMode!=SQLITE_CHECKPOINT_PASSIVE || xBusy==0 ); + + if( pWal->readOnly ) return SQLITE_READONLY; + WALTRACE(("WAL%p: checkpoint begins\n", pWal)); + + /* IMPLEMENTATION-OF: R-62028-47212 All calls obtain an exclusive + ** "checkpoint" lock on the database file. */ + rc = walLockExclusive(pWal, WAL_CKPT_LOCK, 1); + if( rc ){ + /* EVIDENCE-OF: R-10421-19736 If any other process is running a + ** checkpoint operation at the same time, the lock cannot be obtained and + ** SQLITE_BUSY is returned. + ** EVIDENCE-OF: R-53820-33897 Even if there is a busy-handler configured, + ** it will not be invoked in this case. + */ + testcase( rc==SQLITE_BUSY ); + testcase( xBusy!=0 ); + return rc; + } + pWal->ckptLock = 1; + + /* IMPLEMENTATION-OF: R-59782-36818 The SQLITE_CHECKPOINT_FULL, RESTART and + ** TRUNCATE modes also obtain the exclusive "writer" lock on the database + ** file. + ** + ** EVIDENCE-OF: R-60642-04082 If the writer lock cannot be obtained + ** immediately, and a busy-handler is configured, it is invoked and the + ** writer lock retried until either the busy-handler returns 0 or the + ** lock is successfully obtained. + */ + if( eMode!=SQLITE_CHECKPOINT_PASSIVE ){ + rc = walBusyLock(pWal, xBusy, pBusyArg, WAL_WRITE_LOCK, 1); + if( rc==SQLITE_OK ){ + pWal->writeLock = 1; + }else if( rc==SQLITE_BUSY ){ + eMode2 = SQLITE_CHECKPOINT_PASSIVE; + xBusy2 = 0; + rc = SQLITE_OK; + } + } + + /* Read the wal-index header. */ + if( rc==SQLITE_OK ){ + rc = walIndexReadHdr(pWal, &isChanged); + if( isChanged && pWal->pDbFd->pMethods->iVersion>=3 ){ + sqlite3OsUnfetch(pWal->pDbFd, 0, 0); + } + } + + /* Copy data from the log to the database file. */ + if( rc==SQLITE_OK ){ + + if( pWal->hdr.mxFrame && walPagesize(pWal)!=nBuf ){ + rc = SQLITE_CORRUPT_BKPT; + }else{ + rc = walCheckpoint(pWal, eMode2, xBusy2, pBusyArg, sync_flags, zBuf); + } + + /* If no error occurred, set the output variables. */ + if( rc==SQLITE_OK || rc==SQLITE_BUSY ){ + if( pnLog ) *pnLog = (int)pWal->hdr.mxFrame; + if( pnCkpt ) *pnCkpt = (int)(walCkptInfo(pWal)->nBackfill); + } + } + + if( isChanged ){ + /* If a new wal-index header was loaded before the checkpoint was + ** performed, then the pager-cache associated with pWal is now + ** out of date. So zero the cached wal-index header to ensure that + ** next time the pager opens a snapshot on this database it knows that + ** the cache needs to be reset. + */ + memset(&pWal->hdr, 0, sizeof(WalIndexHdr)); + } + + /* Release the locks. 
*/ + sqlite3WalEndWriteTransaction(pWal); + walUnlockExclusive(pWal, WAL_CKPT_LOCK, 1); + pWal->ckptLock = 0; + WALTRACE(("WAL%p: checkpoint %s\n", pWal, rc ? "failed" : "ok")); + return (rc==SQLITE_OK && eMode!=eMode2 ? SQLITE_BUSY : rc); +} + +/* Return the value to pass to a sqlite3_wal_hook callback, the +** number of frames in the WAL at the point of the last commit since +** sqlite3WalCallback() was called. If no commits have occurred since +** the last call, then return 0. +*/ +SQLITE_PRIVATE int sqlite3WalCallback(Wal *pWal){ + u32 ret = 0; + if( pWal ){ + ret = pWal->iCallback; + pWal->iCallback = 0; + } + return (int)ret; +} + +/* +** This function is called to change the WAL subsystem into or out +** of locking_mode=EXCLUSIVE. +** +** If op is zero, then attempt to change from locking_mode=EXCLUSIVE +** into locking_mode=NORMAL. This means that we must acquire a lock +** on the pWal->readLock byte. If the WAL is already in locking_mode=NORMAL +** or if the acquisition of the lock fails, then return 0. If the +** transition out of exclusive-mode is successful, return 1. This +** operation must occur while the pager is still holding the exclusive +** lock on the main database file. +** +** If op is one, then change from locking_mode=NORMAL into +** locking_mode=EXCLUSIVE. This means that the pWal->readLock must +** be released. Return 1 if the transition is made and 0 if the +** WAL is already in exclusive-locking mode - meaning that this +** routine is a no-op. The pager must already hold the exclusive lock +** on the main database file before invoking this operation. +** +** If op is negative, then do a dry-run of the op==1 case but do +** not actually change anything. The pager uses this to see if it +** should acquire the database exclusive lock prior to invoking +** the op==1 case. +*/ +SQLITE_PRIVATE int sqlite3WalExclusiveMode(Wal *pWal, int op){ + int rc; + assert( pWal->writeLock==0 ); + assert( pWal->exclusiveMode!=WAL_HEAPMEMORY_MODE || op==-1 ); + + /* pWal->readLock is usually set, but might be -1 if there was a + ** prior error while attempting to acquire are read-lock. This cannot + ** happen if the connection is actually in exclusive mode (as no xShmLock + ** locks are taken in this case). Nor should the pager attempt to + ** upgrade to exclusive-mode following such an error. + */ + assert( pWal->readLock>=0 || pWal->lockError ); + assert( pWal->readLock>=0 || (op<=0 && pWal->exclusiveMode==0) ); + + if( op==0 ){ + if( pWal->exclusiveMode ){ + pWal->exclusiveMode = 0; + if( walLockShared(pWal, WAL_READ_LOCK(pWal->readLock))!=SQLITE_OK ){ + pWal->exclusiveMode = 1; + } + rc = pWal->exclusiveMode==0; + }else{ + /* Already in locking_mode=NORMAL */ + rc = 0; + } + }else if( op>0 ){ + assert( pWal->exclusiveMode==0 ); + assert( pWal->readLock>=0 ); + walUnlockShared(pWal, WAL_READ_LOCK(pWal->readLock)); + pWal->exclusiveMode = 1; + rc = 1; + }else{ + rc = pWal->exclusiveMode==0; + } + return rc; +} + +/* +** Return true if the argument is non-NULL and the WAL module is using +** heap-memory for the wal-index. Otherwise, if the argument is NULL or the +** WAL module is using shared-memory, return false. +*/ +SQLITE_PRIVATE int sqlite3WalHeapMemory(Wal *pWal){ + return (pWal && pWal->exclusiveMode==WAL_HEAPMEMORY_MODE ); +} + +#ifdef SQLITE_ENABLE_SNAPSHOT +/* Create a snapshot object. The content of a snapshot is opaque to +** every other subsystem, so the WAL module can put whatever it needs +** in the object. 
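+**
+** At the application level the object is obtained and reused roughly as in
+** the sketch below, assuming a handle "db" on a WAL-mode database, a build
+** compiled with SQLITE_ENABLE_SNAPSHOT, and a read transaction already open
+** when the snapshot is taken:
+**
+**   int rc;
+**   sqlite3_snapshot *pSnap = 0;
+**   rc = sqlite3_snapshot_get(db, "main", &pSnap);
+**
+** and later, immediately after "BEGIN" and before any rows are read:
+**
+**   rc = sqlite3_snapshot_open(db, "main", pSnap);
+**   ...
+**   sqlite3_snapshot_free(pSnap);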
+*/ +SQLITE_PRIVATE int sqlite3WalSnapshotGet(Wal *pWal, sqlite3_snapshot **ppSnapshot){ + int rc = SQLITE_OK; + WalIndexHdr *pRet; + + assert( pWal->readLock>=0 && pWal->writeLock==0 ); + + pRet = (WalIndexHdr*)sqlite3_malloc(sizeof(WalIndexHdr)); + if( pRet==0 ){ + rc = SQLITE_NOMEM; + }else{ + memcpy(pRet, &pWal->hdr, sizeof(WalIndexHdr)); + *ppSnapshot = (sqlite3_snapshot*)pRet; + } + + return rc; +} + +/* Try to open on pSnapshot when the next read-transaction starts +*/ +SQLITE_PRIVATE void sqlite3WalSnapshotOpen(Wal *pWal, sqlite3_snapshot *pSnapshot){ + pWal->pSnapshot = (WalIndexHdr*)pSnapshot; +} +#endif /* SQLITE_ENABLE_SNAPSHOT */ + +#ifdef SQLITE_ENABLE_ZIPVFS +/* +** If the argument is not NULL, it points to a Wal object that holds a +** read-lock. This function returns the database page-size if it is known, +** or zero if it is not (or if pWal is NULL). +*/ +SQLITE_PRIVATE int sqlite3WalFramesize(Wal *pWal){ + assert( pWal==0 || pWal->readLock>=0 ); + return (pWal ? pWal->szPage : 0); +} +#endif + +/* Return the sqlite3_file object for the WAL file +*/ +SQLITE_PRIVATE sqlite3_file *sqlite3WalFile(Wal *pWal){ + return pWal->pWalFd; +} + +#endif /* #ifndef SQLITE_OMIT_WAL */ + +/************** End of wal.c *************************************************/ /************** Begin file btmutex.c *****************************************/ /* ** 2007 August 27 ** ** The author disclaims copyright to this source code. In place of @@ -37868,11 +54580,11 @@ ** May you do good and not evil. ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ************************************************************************* -** This file implements a external (disk-based) database using BTrees. +** This file implements an external (disk-based) database using BTrees. ** For a detailed discussion of BTrees, refer to ** ** Donald E. Knuth, THE ART OF COMPUTER PROGRAMMING, Volume 3: ** "Sorting And Searching", pages 473-480. Addison-Wesley ** Publishing Company, Reading, Massachusetts. @@ -37905,27 +54617,27 @@ ** ** FORMAT DETAILS ** ** The file is divided into pages. The first page is called page 1, ** the second is page 2, and so forth. A page number of zero indicates -** "no such page". The page size can be any power of 2 between 512 and 32768. +** "no such page". The page size can be any power of 2 between 512 and 65536. ** Each page can be either a btree page, a freelist page, an overflow ** page, or a pointer-map page. ** ** The first page is always a btree page. The first 100 bytes of the first ** page contain a special header (the "file header") that describes the file. ** The format of the file header is as follows: ** ** OFFSET SIZE DESCRIPTION ** 0 16 Header string: "SQLite format 3\000" -** 16 2 Page size in bytes. +** 16 2 Page size in bytes. 
(1 means 65536) ** 18 1 File format write version ** 19 1 File format read version ** 20 1 Bytes of unused space at the end of each page -** 21 1 Max embedded payload fraction -** 22 1 Min embedded payload fraction -** 23 1 Min leaf payload fraction +** 21 1 Max embedded payload fraction (must be 64) +** 22 1 Min embedded payload fraction (must be 32) +** 23 1 Min leaf payload fraction (must be 32) ** 24 4 File change counter ** 28 4 Reserved for future use ** 32 4 First freelist page ** 36 4 Number of freelist pages in the file ** 40 60 15 4-byte meta values passed to higher layers @@ -37935,13 +54647,14 @@ ** 48 4 Size of page cache ** 52 4 Largest root-page (auto/incr_vacuum) ** 56 4 1=UTF-8 2=UTF16le 3=UTF16be ** 60 4 User version ** 64 4 Incremental vacuum mode -** 68 4 unused -** 72 4 unused -** 76 4 unused +** 68 4 Application-ID +** 72 20 unused +** 92 4 The version-valid-for number +** 96 4 SQLITE_VERSION_NUMBER ** ** All of the integer values are big-endian (most significant byte first). ** ** The file change counter is incremented when the database is changed ** This counter allows other processes to know when the file has changed @@ -37993,11 +54706,11 @@ ** 7 1 number of fragmented free bytes ** 8 4 Right child (the Ptr(N) value). Omitted on leaves. ** ** The flags define the format of this btree page. The leaf flag means that ** this page has no children. The zerodata flag means that this page carries -** only keys and no data. The intkey flag means that the key is a integer +** only keys and no data. The intkey flag means that the key is an integer ** which is stored in the key size entry of the cell header rather than in ** the payload area. ** ** The cell pointer array begins on the first byte after the page header. ** The cell pointer array contains zero or more 2-byte numbers which are @@ -38071,16 +54784,17 @@ ** SIZE DESCRIPTION ** 4 Page number of next trunk page ** 4 Number of leaf pointers on this page ** * zero or more pages numbers of leaves */ +/* #include "sqliteInt.h" */ /* The following value is the maximum cell size assuming a maximum page ** size give above. */ -#define MX_CELL_SIZE(pBt) (pBt->pageSize-8) +#define MX_CELL_SIZE(pBt) ((int)(pBt->pageSize-8)) /* The maximum number of cells on a single page of the database. This ** assumes a minimum cell size of 6 bytes (4 bytes for the cell itself ** plus 2 bytes for the index to the cell in the page header). Such ** small cells will be rare, but they are possible. @@ -38088,10 +54802,11 @@ #define MX_CELL(pBt) ((pBt->pageSize-8)/6) /* Forward declarations */ typedef struct MemPage MemPage; typedef struct BtLock BtLock; +typedef struct CellInfo CellInfo; /* ** This is a magic string that appears at the beginning of every ** SQLite database in order to identify the file as a real database. ** @@ -38130,28 +54845,34 @@ ** stored in MemPage.pBt->mutex. */ struct MemPage { u8 isInit; /* True if previously initialized. MUST BE FIRST! */ u8 nOverflow; /* Number of overflow cell bodies in aCell[] */ - u8 intKey; /* True if intkey flag is set */ - u8 leaf; /* True if leaf flag is set */ - u8 hasData; /* True if this page stores data */ + u8 intKey; /* True if table b-trees. False for index b-trees */ + u8 intKeyLeaf; /* True if the leaf of an intKey table */ + u8 leaf; /* True if a leaf page */ u8 hdrOffset; /* 100 for page 1. 0 otherwise */ u8 childPtrSize; /* 0 if leaf==1. 
4 if leaf==0 */ + u8 max1bytePayload; /* min(maxLocal,127) */ + u8 bBusy; /* Prevent endless loops on corrupt database files */ u16 maxLocal; /* Copy of BtShared.maxLocal or BtShared.maxLeaf */ u16 minLocal; /* Copy of BtShared.minLocal or BtShared.minLeaf */ u16 cellOffset; /* Index in aData of first cell pointer */ u16 nFree; /* Number of free bytes on the page */ u16 nCell; /* Number of cells on this page, local and ovfl */ u16 maskPage; /* Mask for page offset */ - struct _OvflCell { /* Cells that will not fit on aData[] */ - u8 *pCell; /* Pointers to the body of the overflow cell */ - u16 idx; /* Insert this cell before idx-th non-overflow cell */ - } aOvfl[5]; + u16 aiOvfl[5]; /* Insert the i-th overflow cell before the aiOvfl-th + ** non-overflow cell */ + u8 *apOvfl[5]; /* Pointers to the body of overflow cells */ BtShared *pBt; /* Pointer to BtShared that this page is part of */ u8 *aData; /* Pointer to disk image of the page data */ + u8 *aDataEnd; /* One byte past the end of usable data */ + u8 *aCellIdx; /* The cell index area */ + u8 *aDataOfst; /* Same as aData for leaves. aData+4 for interior */ DbPage *pDbPage; /* Pager page handle */ + u16 (*xCellSize)(MemPage*,u8*); /* cellSizePtr method */ + void (*xParseCell)(MemPage*,u8*,CellInfo*); /* btreeParseCell method */ Pgno pgno; /* Page number for this page */ }; /* ** The in-memory image of a disk page has the auxiliary information appended @@ -38194,21 +54915,23 @@ ** the BtShared object. ** ** All fields in this structure are accessed under sqlite3.mutex. ** The pBt pointer itself may not be changed while there exists cursors ** in the referenced BtShared that point back to this Btree since those -** cursors have to do go through this Btree to find their BtShared and +** cursors have to go through this Btree to find their BtShared and ** they often do so without holding sqlite3.mutex. */ struct Btree { sqlite3 *db; /* The database connection holding this btree */ BtShared *pBt; /* Sharable content of this btree */ u8 inTrans; /* TRANS_NONE, TRANS_READ or TRANS_WRITE */ u8 sharable; /* True if we can share pBt with another db */ u8 locked; /* True if db currently has pBt locked */ + u8 hasIncrblobCur; /* True if there are one or more Incrblob cursors */ int wantToLock; /* Number of nested calls to sqlite3BtreeEnter() */ int nBackup; /* Number of backup operations reading this btree */ + u32 iDataVersion; /* Combines with pBt->pPager->iDataVersion */ Btree *pNext; /* List of other sharable Btrees from the same db */ Btree *pPrev; /* Back pointer of the same list */ #ifndef SQLITE_OMIT_SHARED_CACHE BtLock lock; /* Object used to lock page 1 */ #endif @@ -38226,11 +54949,11 @@ #define TRANS_WRITE 2 /* ** An instance of this object represents a single database file. ** -** A single database file can be in use as the same time by two +** A single database file can be in use at the same time by two ** or more database connections. When two or more connections are ** sharing the same database file, each connection has it own ** private Btree object for the file and each of those Btrees points ** to this one BtShared object. BtShared.nRef is the number of ** connections currently sharing this database file. 
@@ -38263,56 +54986,64 @@ struct BtShared { Pager *pPager; /* The page cache */ sqlite3 *db; /* Database connection currently using this Btree */ BtCursor *pCursor; /* A list of all open cursors */ MemPage *pPage1; /* First page of the database */ - u8 readOnly; /* True if the underlying file is readonly */ - u8 pageSizeFixed; /* True if the page size can no longer be changed */ - u8 secureDelete; /* True if secure_delete is enabled */ - u8 initiallyEmpty; /* Database is empty at start of transaction */ + u8 openFlags; /* Flags to sqlite3BtreeOpen() */ #ifndef SQLITE_OMIT_AUTOVACUUM u8 autoVacuum; /* True if auto-vacuum is enabled */ u8 incrVacuum; /* True if incr-vacuum is enabled */ + u8 bDoTruncate; /* True to truncate db on commit */ #endif - u16 pageSize; /* Total number of bytes on a page */ - u16 usableSize; /* Number of usable bytes on each page */ + u8 inTransaction; /* Transaction state */ + u8 max1bytePayload; /* Maximum first byte of cell for a 1-byte payload */ +#ifdef SQLITE_HAS_CODEC + u8 optimalReserve; /* Desired amount of reserved space per page */ +#endif + u16 btsFlags; /* Boolean parameters. See BTS_* macros below */ u16 maxLocal; /* Maximum local payload in non-LEAFDATA tables */ u16 minLocal; /* Minimum local payload in non-LEAFDATA tables */ u16 maxLeaf; /* Maximum local payload in a LEAFDATA table */ u16 minLeaf; /* Minimum local payload in a LEAFDATA table */ - u8 inTransaction; /* Transaction state */ + u32 pageSize; /* Total number of bytes on a page */ + u32 usableSize; /* Number of usable bytes on each page */ int nTransaction; /* Number of open transactions (read + write) */ u32 nPage; /* Number of pages in the database */ void *pSchema; /* Pointer to space allocated by sqlite3BtreeSchema() */ void (*xFreeSchema)(void*); /* Destructor for BtShared.pSchema */ - sqlite3_mutex *mutex; /* Non-recursive mutex required to access this struct */ + sqlite3_mutex *mutex; /* Non-recursive mutex required to access this object */ Bitvec *pHasContent; /* Set of pages moved to free-list this transaction */ #ifndef SQLITE_OMIT_SHARED_CACHE int nRef; /* Number of references to this structure */ BtShared *pNext; /* Next on a list of sharable BtShared structs */ BtLock *pLock; /* List of locks held on this shared-btree struct */ Btree *pWriter; /* Btree with currently open write transaction */ - u8 isExclusive; /* True if pWriter has an EXCLUSIVE lock on the db */ - u8 isPending; /* If waiting for read-locks to clear */ #endif - u8 *pTmpSpace; /* BtShared.pageSize bytes of space for tmp use */ + u8 *pTmpSpace; /* Temp space sufficient to hold a single cell */ }; +/* +** Allowed values for BtShared.btsFlags +*/ +#define BTS_READ_ONLY 0x0001 /* Underlying file is readonly */ +#define BTS_PAGESIZE_FIXED 0x0002 /* Page size can no longer be changed */ +#define BTS_SECURE_DELETE 0x0004 /* PRAGMA secure_delete is enabled */ +#define BTS_INITIALLY_EMPTY 0x0008 /* Database was empty at trans start */ +#define BTS_NO_WAL 0x0010 /* Do not open write-ahead-log files */ +#define BTS_EXCLUSIVE 0x0020 /* pWriter has an exclusive lock */ +#define BTS_PENDING 0x0040 /* Waiting for read-locks to clear */ + /* ** An instance of the following structure is used to hold information ** about a cell. The parseCellPtr() function fills in this structure ** based on information extract from the raw disk page. 
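**
** For orientation: within a cell the payload size and, for table b-trees,
** the rowid are stored as SQLite varints - big-endian values of 1 to 9
** bytes that carry 7 bits per byte with the high bit as a continuation
** flag, the 9th byte (if present) contributing all 8 of its bits. A
** decoding sketch, with an illustrative name only:
**
**   static int varintGetSketch(const unsigned char *p,
**                              unsigned long long *pVal){
**     unsigned long long v = 0;
**     int i;
**     for(i=0; i<8; i++){
**       v = (v<<7) | (p[i] & 0x7f);
**       if( (p[i] & 0x80)==0 ){ *pVal = v; return i+1; }
**     }
**     *pVal = (v<<8) | p[8];
**     return 9;
**   }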
*/ -typedef struct CellInfo CellInfo; struct CellInfo { - u8 *pCell; /* Pointer to the start of cell content */ - i64 nKey; /* The key for INTKEY tables, or number of bytes in key */ - u32 nData; /* Number of bytes of data */ - u32 nPayload; /* Total amount of payload */ - u16 nHeader; /* Size of the cell content header in bytes */ - u16 nLocal; /* Amount of payload held locally */ - u16 iOverflow; /* Offset to overflow page number. Zero if no overflow */ + i64 nKey; /* The key for INTKEY tables, or nPayload otherwise */ + u8 *pPayload; /* Pointer to the start of payload */ + u32 nPayload; /* Bytes of payload */ + u16 nLocal; /* Amount of payload held locally, not on overflow */ u16 nSize; /* Size of the cell content on the main b-tree page */ }; /* ** Maximum depth of an SQLite B-Tree structure. Any B-Tree deeper than @@ -38330,70 +55061,94 @@ ** b-tree within a database file. ** ** The entry is identified by its MemPage and the index in ** MemPage.aCell[] of the entry. ** -** A single database file can shared by two more database connections, +** A single database file can be shared by two more database connections, ** but cursors cannot be shared. Each cursor is associated with a ** particular database connection identified BtCursor.pBtree.db. ** ** Fields in this structure are accessed under the BtShared.mutex ** found at self->pBt->mutex. +** +** skipNext meaning: +** eState==SKIPNEXT && skipNext>0: Next sqlite3BtreeNext() is no-op. +** eState==SKIPNEXT && skipNext<0: Next sqlite3BtreePrevious() is no-op. +** eState==FAULT: Cursor fault with skipNext as error code. */ struct BtCursor { Btree *pBtree; /* The Btree to which this cursor belongs */ BtShared *pBt; /* The BtShared this cursor points to */ - BtCursor *pNext, *pPrev; /* Forms a linked list of all cursors */ - struct KeyInfo *pKeyInfo; /* Argument passed to comparison function */ - Pgno pgnoRoot; /* The root page of this tree */ - sqlite3_int64 cachedRowid; /* Next rowid cache. 0 means not valid */ + BtCursor *pNext; /* Forms a linked list of all cursors */ + Pgno *aOverflow; /* Cache of overflow page locations */ CellInfo info; /* A parse of the cell we are pointing at */ - u8 wrFlag; /* True if writable */ - u8 atLast; /* Cursor pointing to the last entry */ - u8 validNKey; /* True if info.nKey is valid */ + i64 nKey; /* Size of pKey, or last integer key */ + void *pKey; /* Saved key that was cursor last known position */ + Pgno pgnoRoot; /* The root page of this tree */ + int nOvflAlloc; /* Allocated size of aOverflow[] array */ + int skipNext; /* Prev() is noop if negative. Next() is noop if positive. + ** Error code if eState==CURSOR_FAULT */ + u8 curFlags; /* zero or more BTCF_* flags defined below */ + u8 curPagerFlags; /* Flags to send to sqlite3PagerGet() */ u8 eState; /* One of the CURSOR_XXX constants (see below) */ - void *pKey; /* Saved key that was cursor's last known position */ - i64 nKey; /* Size of pKey, or last integer key */ - int skipNext; /* Prev() is noop if negative. Next() is noop if positive */ -#ifndef SQLITE_OMIT_INCRBLOB - u8 isIncrblobHandle; /* True if this cursor is an incr. io handle */ - Pgno *aOverflow; /* Cache of overflow page locations */ -#endif - i16 iPage; /* Index of current page in apPage */ - MemPage *apPage[BTCURSOR_MAX_DEPTH]; /* Pages from root to current page */ + u8 hints; /* As configured by CursorSetHints() */ + /* All fields above are zeroed when the cursor is allocated. See + ** sqlite3BtreeCursorZero(). Fields that follow must be manually + ** initialized. 
*/ + i8 iPage; /* Index of current page in apPage */ + u8 curIntKey; /* Value of apPage[0]->intKey */ + struct KeyInfo *pKeyInfo; /* Argument passed to comparison function */ + void *padding1; /* Make object size a multiple of 16 */ u16 aiIdx[BTCURSOR_MAX_DEPTH]; /* Current index in apPage[i] */ + MemPage *apPage[BTCURSOR_MAX_DEPTH]; /* Pages from root to current page */ }; +/* +** Legal values for BtCursor.curFlags +*/ +#define BTCF_WriteFlag 0x01 /* True if a write cursor */ +#define BTCF_ValidNKey 0x02 /* True if info.nKey is valid */ +#define BTCF_ValidOvfl 0x04 /* True if aOverflow is valid */ +#define BTCF_AtLast 0x08 /* Cursor is pointing ot the last entry */ +#define BTCF_Incrblob 0x10 /* True if an incremental I/O handle */ +#define BTCF_Multiple 0x20 /* Maybe another cursor on the same btree */ + /* ** Potential values for BtCursor.eState. ** -** CURSOR_VALID: -** Cursor points to a valid entry. getPayload() etc. may be called. -** ** CURSOR_INVALID: ** Cursor does not point to a valid entry. This can happen (for example) ** because the table is empty or because BtreeCursorFirst() has not been ** called. ** +** CURSOR_VALID: +** Cursor points to a valid entry. getPayload() etc. may be called. +** +** CURSOR_SKIPNEXT: +** Cursor is valid except that the Cursor.skipNext field is non-zero +** indicating that the next sqlite3BtreeNext() or sqlite3BtreePrevious() +** operation should be a no-op. +** ** CURSOR_REQUIRESEEK: ** The table that this cursor was opened on still exists, but has been ** modified since the cursor was last used. The cursor position is saved ** in variables BtCursor.pKey and BtCursor.nKey. When a cursor is in ** this state, restoreCursorPosition() can be called to attempt to ** seek the cursor to the saved position. ** ** CURSOR_FAULT: -** A unrecoverable error (an I/O error or a malloc failure) has occurred +** An unrecoverable error (an I/O error or a malloc failure) has occurred ** on a different connection that shares the BtShared cache with this ** cursor. The error has left the cache in an inconsistent state. ** Do nothing else with this cursor. Any attempt to use the cursor -** should return the error code stored in BtCursor.skip +** should return the error code stored in BtCursor.skipNext */ #define CURSOR_INVALID 0 #define CURSOR_VALID 1 -#define CURSOR_REQUIRESEEK 2 -#define CURSOR_FAULT 3 +#define CURSOR_SKIPNEXT 2 +#define CURSOR_REQUIRESEEK 3 +#define CURSOR_FAULT 4 /* ** The database page the PENDING_BYTE occupies. This page is never used. */ # define PENDING_BYTE_PAGE(pBt) PAGER_MJ_PGNO(pBt) @@ -38477,31 +55232,57 @@ /* ** This structure is passed around through all the sanity checking routines ** in order to keep track of some global state information. +** +** The aRef[] array is allocated so that there is 1 bit for each page in +** the database. As the integrity-check proceeds, for each page used in +** the database the corresponding bit is set. This allows integrity-check to +** detect pages that are used twice and orphaned pages (both of which +** indicate corruption). */ typedef struct IntegrityCk IntegrityCk; struct IntegrityCk { BtShared *pBt; /* The tree being checked out */ Pager *pPager; /* The associated pager. 
Also accessible by pBt->pPager */ + u8 *aPgRef; /* 1 bit per page in the db (see above) */ Pgno nPage; /* Number of pages in the database */ - int *anRef; /* Number of times each page is referenced */ int mxErr; /* Stop accumulating errors when this reaches zero */ int nErr; /* Number of messages written to zErrMsg so far */ int mallocFailed; /* A memory allocation error has occurred */ + const char *zPfx; /* Error message prefix */ + int v1, v2; /* Values for up to two %d fields in zPfx */ StrAccum errMsg; /* Accumulate the error message text here */ + u32 *heap; /* Min-heap used for analyzing cell coverage */ }; /* -** Read or write a two- and four-byte big-endian integer values. +** Routines to read or write a two- and four-byte big-endian integer values. */ #define get2byte(x) ((x)[0]<<8 | (x)[1]) #define put2byte(p,v) ((p)[0] = (u8)((v)>>8), (p)[1] = (u8)(v)) #define get4byte sqlite3Get4byte #define put4byte sqlite3Put4byte +/* +** get2byteAligned(), unlike get2byte(), requires that its argument point to a +** two-byte aligned address. get2bytea() is only used for accessing the +** cell addresses in a btree header. +*/ +#if SQLITE_BYTEORDER==4321 +# define get2byteAligned(x) (*(u16*)(x)) +#elif SQLITE_BYTEORDER==1234 && !defined(SQLITE_DISABLE_INTRINSIC) \ + && GCC_VERSION>=4008000 +# define get2byteAligned(x) __builtin_bswap16(*(u16*)(x)) +#elif SQLITE_BYTEORDER==1234 && !defined(SQLITE_DISABLE_INTRINSIC) \ + && defined(_MSC_VER) && _MSC_VER>=1300 +# define get2byteAligned(x) _byteswap_ushort(*(u16*)(x)) +#else +# define get2byteAligned(x) ((x)[0]<<8 | (x)[1]) +#endif + /************** End of btreeInt.h ********************************************/ /************** Continuing where we left off in btmutex.c ********************/ #ifndef SQLITE_OMIT_SHARED_CACHE #if SQLITE_THREADSAFE @@ -38522,20 +55303,24 @@ /* ** Release the BtShared mutex associated with B-Tree handle p and ** clear the p->locked boolean. */ -static void unlockBtreeMutex(Btree *p){ +static void SQLITE_NOINLINE unlockBtreeMutex(Btree *p){ + BtShared *pBt = p->pBt; assert( p->locked==1 ); - assert( sqlite3_mutex_held(p->pBt->mutex) ); + assert( sqlite3_mutex_held(pBt->mutex) ); assert( sqlite3_mutex_held(p->db->mutex) ); - assert( p->db==p->pBt->db ); + assert( p->db==pBt->db ); - sqlite3_mutex_leave(p->pBt->mutex); + sqlite3_mutex_leave(pBt->mutex); p->locked = 0; } +/* Forward reference */ +static void SQLITE_NOINLINE btreeLockCarefully(Btree *p); + /* ** Enter a mutex on the given BTree object. ** ** If the object is not sharable, then no mutex is ever required ** and this routine is a no-op. The underlying mutex is non-recursive. @@ -38549,12 +55334,10 @@ ** p, then first unlock all of the others on p->pNext, then wait ** for the lock to become available on p, then relock all of the ** subsequent Btrees that desire a lock. */ SQLITE_PRIVATE void sqlite3BtreeEnter(Btree *p){ - Btree *pLater; - /* Some basic sanity checking on the Btree. The list of Btrees ** connected by pNext and pPrev should be in sorted order by ** Btree.pBt value. All elements of the list should belong to ** the same connection. Only shared Btrees are on the list. */ assert( p->pNext==0 || p->pNext->pBt>p->pBt ); @@ -38575,13 +55358,24 @@ assert( (p->locked==0 && p->sharable) || p->pBt->db==p->db ); if( !p->sharable ) return; p->wantToLock++; if( p->locked ) return; + btreeLockCarefully(p); +} + +/* This is a helper function for sqlite3BtreeLock(). 
By moving +** complex, but seldom used logic, out of sqlite3BtreeLock() and +** into this routine, we avoid unnecessary stack pointer changes +** and thus help the sqlite3BtreeLock() routine to run much faster +** in the common case. +*/ +static void SQLITE_NOINLINE btreeLockCarefully(Btree *p){ + Btree *pLater; /* In most cases, we should be able to acquire the lock we - ** want without having to go throught the ascending lock + ** want without having to go through the ascending lock ** procedure that follows. Just be sure not to block. */ if( sqlite3_mutex_try(p->pBt->mutex)==SQLITE_OK ){ p->pBt->db = p->db; p->locked = 1; @@ -38606,15 +55400,17 @@ if( pLater->wantToLock ){ lockBtreeMutex(pLater); } } } + /* ** Exit the recursive mutex on a Btree. */ SQLITE_PRIVATE void sqlite3BtreeLeave(Btree *p){ + assert( sqlite3_mutex_held(p->db->mutex) ); if( p->sharable ){ assert( p->wantToLock>0 ); p->wantToLock--; if( p->wantToLock==0 ){ unlockBtreeMutex(p); @@ -38637,25 +55433,10 @@ return (p->sharable==0 || p->locked); } #endif - -#ifndef SQLITE_OMIT_INCRBLOB -/* -** Enter and leave a mutex on a Btree given a cursor owned by that -** Btree. These entry points are used by incremental I/O and can be -** omitted if that module is not used. -*/ -SQLITE_PRIVATE void sqlite3BtreeEnterCursor(BtCursor *pCur){ - sqlite3BtreeEnter(pCur->pBtree); -} -SQLITE_PRIVATE void sqlite3BtreeLeaveCursor(BtCursor *pCur){ - sqlite3BtreeLeave(pCur->pBtree); -} -#endif /* SQLITE_OMIT_INCRBLOB */ - /* ** Enter the mutex on every Btree associated with a database ** connection. This is needed (for example) prior to parsing ** a statement since we will be comparing table and column names @@ -38669,49 +55450,24 @@ ** two or more btrees in common both try to lock all their btrees ** at the same instant. */ SQLITE_PRIVATE void sqlite3BtreeEnterAll(sqlite3 *db){ int i; - Btree *p, *pLater; + Btree *p; assert( sqlite3_mutex_held(db->mutex) ); for(i=0; i<db->nDb; i++){ p = db->aDb[i].pBt; - assert( !p || (p->locked==0 && p->sharable) || p->pBt->db==p->db ); - if( p && p->sharable ){ - p->wantToLock++; - if( !p->locked ){ - assert( p->wantToLock==1 ); - while( p->pPrev ) p = p->pPrev; - /* Reason for ALWAYS: There must be at least on unlocked Btree in - ** the chain. Otherwise the !p->locked test above would have failed */ - while( p->locked && ALWAYS(p->pNext) ) p = p->pNext; - for(pLater = p->pNext; pLater; pLater=pLater->pNext){ - if( pLater->locked ){ - unlockBtreeMutex(pLater); - } - } - while( p ){ - lockBtreeMutex(p); - p = p->pNext; - } - } - } + if( p ) sqlite3BtreeEnter(p); } } SQLITE_PRIVATE void sqlite3BtreeLeaveAll(sqlite3 *db){ int i; Btree *p; assert( sqlite3_mutex_held(db->mutex) ); for(i=0; i<db->nDb; i++){ p = db->aDb[i].pBt; - if( p && p->sharable ){ - assert( p->wantToLock>0 ); - p->wantToLock--; - if( p->wantToLock==0 ){ - unlockBtreeMutex(p); - } - } + if( p ) sqlite3BtreeLeave(p); } } #ifndef NDEBUG /* @@ -38735,101 +55491,46 @@ } return 1; } #endif /* NDEBUG */ -/* -** Add a new Btree pointer to a BtreeMutexArray. -** if the pointer can possibly be shared with -** another database connection. -** -** The pointers are kept in sorted order by pBtree->pBt. That -** way when we go to enter all the mutexes, we can enter them -** in order without every having to backup and retry and without -** worrying about deadlock. -** -** The number of shared btrees will always be small (usually 0 or 1) -** so an insertion sort is an adequate algorithm here. 
-*/ -SQLITE_PRIVATE void sqlite3BtreeMutexArrayInsert(BtreeMutexArray *pArray, Btree *pBtree){ - int i, j; - BtShared *pBt; - if( pBtree==0 || pBtree->sharable==0 ) return; #ifndef NDEBUG - { - for(i=0; i<pArray->nMutex; i++){ - assert( pArray->aBtree[i]!=pBtree ); - } - } -#endif - assert( pArray->nMutex>=0 ); - assert( pArray->nMutex<ArraySize(pArray->aBtree)-1 ); - pBt = pBtree->pBt; - for(i=0; i<pArray->nMutex; i++){ - assert( pArray->aBtree[i]!=pBtree ); - if( pArray->aBtree[i]->pBt>pBt ){ - for(j=pArray->nMutex; j>i; j--){ - pArray->aBtree[j] = pArray->aBtree[j-1]; - } - pArray->aBtree[i] = pBtree; - pArray->nMutex++; - return; - } - } - pArray->aBtree[pArray->nMutex++] = pBtree; -} - -/* -** Enter the mutex of every btree in the array. This routine is -** called at the beginning of sqlite3VdbeExec(). The mutexes are -** exited at the end of the same function. -*/ -SQLITE_PRIVATE void sqlite3BtreeMutexArrayEnter(BtreeMutexArray *pArray){ - int i; - for(i=0; i<pArray->nMutex; i++){ - Btree *p = pArray->aBtree[i]; - /* Some basic sanity checking */ - assert( i==0 || pArray->aBtree[i-1]->pBt<p->pBt ); - assert( !p->locked || p->wantToLock>0 ); - - /* We should already hold a lock on the database connection */ - assert( sqlite3_mutex_held(p->db->mutex) ); - - /* The Btree is sharable because only sharable Btrees are entered - ** into the array in the first place. */ - assert( p->sharable ); - - p->wantToLock++; - if( !p->locked ){ - lockBtreeMutex(p); - } - } -} - -/* -** Leave the mutex of every btree in the group. -*/ -SQLITE_PRIVATE void sqlite3BtreeMutexArrayLeave(BtreeMutexArray *pArray){ - int i; - for(i=0; i<pArray->nMutex; i++){ - Btree *p = pArray->aBtree[i]; - /* Some basic sanity checking */ - assert( i==0 || pArray->aBtree[i-1]->pBt<p->pBt ); - assert( p->locked ); - assert( p->wantToLock>0 ); - - /* We should already hold a lock on the database connection */ - assert( sqlite3_mutex_held(p->db->mutex) ); - - p->wantToLock--; - if( p->wantToLock==0 ){ - unlockBtreeMutex(p); - } - } -} - -#else +/* +** Return true if the correct mutexes are held for accessing the +** db->aDb[iDb].pSchema structure. The mutexes required for schema +** access are: +** +** (1) The mutex on db +** (2) if iDb!=1, then the mutex on db->aDb[iDb].pBt. +** +** If pSchema is not NULL, then iDb is computed from pSchema and +** db using sqlite3SchemaToIndex(). +*/ +SQLITE_PRIVATE int sqlite3SchemaMutexHeld(sqlite3 *db, int iDb, Schema *pSchema){ + Btree *p; + assert( db!=0 ); + if( pSchema ) iDb = sqlite3SchemaToIndex(db, pSchema); + assert( iDb>=0 && iDb<db->nDb ); + if( !sqlite3_mutex_held(db->mutex) ) return 0; + if( iDb==1 ) return 1; + p = db->aDb[iDb].pBt; + assert( p!=0 ); + return p->sharable==0 || p->locked==1; +} +#endif /* NDEBUG */ + +#else /* SQLITE_THREADSAFE>0 above. SQLITE_THREADSAFE==0 below */ +/* +** The following are special cases for mutex enter routines for use +** in single threaded applications that use shared cache. Except for +** these two routines, all mutex operations are no-ops in that case and +** are null #defines in btree.h. +** +** If shared cache is disabled, then all btree mutex routines, including +** the ones below, are no-ops and are null #defines in btree.h. 
+*/ + SQLITE_PRIVATE void sqlite3BtreeEnter(Btree *p){ p->pBt->db = p->db; } SQLITE_PRIVATE void sqlite3BtreeEnterAll(sqlite3 *db){ int i; @@ -38839,10 +55540,29 @@ p->pBt->db = p->db; } } } #endif /* if SQLITE_THREADSAFE */ + +#ifndef SQLITE_OMIT_INCRBLOB +/* +** Enter a mutex on a Btree given a cursor owned by that Btree. +** +** These entry points are used by incremental I/O only. Enter() is required +** any time OMIT_SHARED_CACHE is not defined, regardless of whether or not +** the build is threadsafe. Leave() is only required by threadsafe builds. +*/ +SQLITE_PRIVATE void sqlite3BtreeEnterCursor(BtCursor *pCur){ + sqlite3BtreeEnter(pCur->pBtree); +} +# if SQLITE_THREADSAFE +SQLITE_PRIVATE void sqlite3BtreeLeaveCursor(BtCursor *pCur){ + sqlite3BtreeLeave(pCur->pBtree); +} +# endif +#endif /* ifndef SQLITE_OMIT_INCRBLOB */ + #endif /* ifndef SQLITE_OMIT_SHARED_CACHE */ /************** End of btmutex.c *********************************************/ /************** Begin file btree.c *******************************************/ /* @@ -38854,14 +55574,15 @@ ** May you do good and not evil. ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ************************************************************************* -** This file implements a external (disk-based) database using BTrees. +** This file implements an external (disk-based) database using BTrees. ** See the header comment on "btreeInt.h" for additional information. ** Including a description of file format and an overview of operation. */ +/* #include "btreeInt.h" */ /* ** The header string that appears at the beginning of every ** SQLite database. */ @@ -38876,11 +55597,39 @@ # define TRACE(X) if(sqlite3BtreeTrace){printf X;fflush(stdout);} #else # define TRACE(X) #endif +/* +** Extract a 2-byte big-endian integer from an array of unsigned bytes. +** But if the value is zero, make it 65536. +** +** This routine is used to extract the "offset to cell content area" value +** from the header of a btree page. If the page size is 65536 and the page +** is empty, the offset should be 65536, but the 2-byte value stores zero. +** This routine makes the necessary adjustment to 65536. +*/ +#define get2byteNotZero(X) (((((int)get2byte(X))-1)&0xffff)+1) +/* +** Values passed as the 5th argument to allocateBtreePage() +*/ +#define BTALLOC_ANY 0 /* Allocate any page */ +#define BTALLOC_EXACT 1 /* Allocate exact page if possible */ +#define BTALLOC_LE 2 /* Allocate any page <= the parameter */ + +/* +** Macro IfNotOmitAV(x) returns (x) if SQLITE_OMIT_AUTOVACUUM is not +** defined, or 0 if it is. For example: +** +** bIncrVacuum = IfNotOmitAV(pBtShared->incrVacuum); +*/ +#ifndef SQLITE_OMIT_AUTOVACUUM +#define IfNotOmitAV(expr) (expr) +#else +#define IfNotOmitAV(expr) 0 +#endif #ifndef SQLITE_OMIT_SHARED_CACHE /* ** A list of BtShared objects that are eligible for participation ** in shared cache. This variable has file scope during normal builds, @@ -38902,11 +55651,11 @@ ** ** This routine has no effect on existing database connections. ** The shared cache setting effects only future calls to ** sqlite3_open(), sqlite3_open16(), or sqlite3_open_v2(). 
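**
** A minimal usage sketch (the file name is illustrative): enabling shared
** cache before opening two connections on the same database file lets both
** connections share a single BtShared object and its page cache:
**
**   sqlite3 *db1 = 0, *db2 = 0;
**   sqlite3_enable_shared_cache(1);
**   sqlite3_open("app.db", &db1);
**   sqlite3_open("app.db", &db2);
**
** The second open attaches to the BtShared created for the first, subject
** to the per-table locking rules described later in this file.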
*/ -SQLITE_API int sqlite3_enable_shared_cache(int enable){ +SQLITE_API int SQLITE_STDCALL sqlite3_enable_shared_cache(int enable){ sqlite3GlobalConfig.sharedCacheEnabled = enable; return SQLITE_OK; } #endif @@ -38978,11 +55727,11 @@ /* If the client is reading or writing an index and the schema is ** not loaded, then it is too difficult to actually check to see if ** the correct locks are held. So do not bother - just return true. ** This case does not come up very often anyhow. */ - if( isIndex && (!pSchema || (pSchema->flags&DB_SchemaLoaded)==0) ){ + if( isIndex && (!pSchema || (pSchema->schemaFlags&DB_SchemaLoaded)==0) ){ return 1; } /* Figure out the root-page that the lock should be held on. For table ** b-trees, this is just the root page of the b-tree being read or @@ -38991,10 +55740,16 @@ if( isIndex ){ HashElem *p; for(p=sqliteHashFirst(&pSchema->idxHash); p; p=sqliteHashNext(p)){ Index *pIdx = (Index *)sqliteHashData(p); if( pIdx->tnum==(int)iRoot ){ + if( iTab ){ + /* Two or more indexes share the same root page. There must + ** be imposter tables. So just return true. The assert is not + ** useful in that case. */ + return 1; + } iTab = pIdx->pTable->tnum; } } }else{ iTab = iRoot; @@ -39078,11 +55833,11 @@ } /* If some other connection is holding an exclusive lock, the ** requested lock may not be obtained. */ - if( pBt->pWriter!=p && pBt->isExclusive ){ + if( pBt->pWriter!=p && (pBt->btsFlags & BTS_EXCLUSIVE)!=0 ){ sqlite3ConnectionBlocked(p->db, pBt->pWriter->db); return SQLITE_LOCKED_SHAREDCACHE; } for(pIter=pBt->pLock; pIter; pIter=pIter->pNext){ @@ -39099,11 +55854,11 @@ assert( eLock==READ_LOCK || pIter->pBtree==p || pIter->eLock==READ_LOCK); if( pIter->pBtree!=p && pIter->iTable==iTab && pIter->eLock!=eLock ){ sqlite3ConnectionBlocked(p->db, pIter->pBtree->db); if( eLock==WRITE_LOCK ){ assert( p==pBt->pWriter ); - pBt->isPending = 1; + pBt->btsFlags |= BTS_PENDING; } return SQLITE_LOCKED_SHAREDCACHE; } } return SQLITE_OK; @@ -39187,11 +55942,11 @@ /* ** Release all the table locks (locks obtained via calls to ** the setSharedCacheTableLock() procedure) held by Btree object p. ** ** This function assumes that Btree p has an open read or write -** transaction. If it does not, then the BtShared.isPending variable +** transaction. If it does not, then the BTS_PENDING flag ** may be incorrectly cleared. */ static void clearAllSharedCacheTableLocks(Btree *p){ BtShared *pBt = p->pBt; BtLock **ppIter = &pBt->pLock; @@ -39200,11 +55955,11 @@ assert( p->sharable || 0==*ppIter ); assert( p->inTrans>0 ); while( *ppIter ){ BtLock *pLock = *ppIter; - assert( pBt->isExclusive==0 || pBt->pWriter==pLock->pBtree ); + assert( (pBt->btsFlags & BTS_EXCLUSIVE)==0 || pBt->pWriter==pLock->pBtree ); assert( pLock->pBtree->inTrans>=pLock->eLock ); if( pLock->pBtree==p ){ *ppIter = pLock->pNext; assert( pLock->iTable!=1 || pLock==&p->lock ); if( pLock->iTable!=1 ){ @@ -39213,26 +55968,25 @@ }else{ ppIter = &pLock->pNext; } } - assert( pBt->isPending==0 || pBt->pWriter ); + assert( (pBt->btsFlags & BTS_PENDING)==0 || pBt->pWriter ); if( pBt->pWriter==p ){ pBt->pWriter = 0; - pBt->isExclusive = 0; - pBt->isPending = 0; + pBt->btsFlags &= ~(BTS_EXCLUSIVE|BTS_PENDING); }else if( pBt->nTransaction==2 ){ /* This function is called when Btree p is concluding its ** transaction. If there currently exists a writer, and p is not ** that writer, then the number of locks held by connections other ** than the writer must be about to drop to zero. In this case - ** set the isPending flag to 0. 
+ ** set the BTS_PENDING flag to 0. ** - ** If there is not currently a writer, then BtShared.isPending must + ** If there is not currently a writer, then BTS_PENDING must ** be zero already. So this next line is harmless in that case. */ - pBt->isPending = 0; + pBt->btsFlags &= ~BTS_PENDING; } } /* ** This function changes all write-locks held by Btree p into read-locks. @@ -39240,12 +55994,11 @@ static void downgradeAllSharedCacheTableLocks(Btree *p){ BtShared *pBt = p->pBt; if( pBt->pWriter==p ){ BtLock *pLock; pBt->pWriter = 0; - pBt->isExclusive = 0; - pBt->isPending = 0; + pBt->btsFlags &= ~(BTS_EXCLUSIVE|BTS_PENDING); for(pLock=pBt->pLock; pLock; pLock=pLock->pNext){ assert( pLock->eLock==READ_LOCK || pLock->pBtree==p ); pLock->eLock = READ_LOCK; } } @@ -39262,22 +56015,21 @@ */ #ifdef SQLITE_DEBUG static int cursorHoldsMutex(BtCursor *p){ return sqlite3_mutex_held(p->pBt->mutex); } +static int cursorOwnsBtShared(BtCursor *p){ + assert( cursorHoldsMutex(p) ); + return (p->pBtree->db==p->pBt->db); +} #endif - -#ifndef SQLITE_OMIT_INCRBLOB /* -** Invalidate the overflow page-list cache for cursor pCur, if any. +** Invalidate the overflow cache of the cursor passed as the first argument. +** on the shared btree structure pBt. */ -static void invalidateOverflowCache(BtCursor *pCur){ - assert( cursorHoldsMutex(pCur) ); - sqlite3_free(pCur->aOverflow); - pCur->aOverflow = 0; -} +#define invalidateOverflowCache(pCur) (pCur->curFlags &= ~BTCF_ValidOvfl) /* ** Invalidate the overflow page-list cache for all cursors opened ** on the shared btree structure pBt. */ @@ -39287,10 +56039,11 @@ for(p=pBt->pCursor; p; p=p->pNext){ invalidateOverflowCache(p); } } +#ifndef SQLITE_OMIT_INCRBLOB /* ** This function is called before modifying the contents of a table ** to invalidate any incrblob cursors that are open on the ** row or one of the rows being modified. ** @@ -39306,23 +56059,25 @@ Btree *pBtree, /* The database file to check */ i64 iRow, /* The rowid that might be changing */ int isClearTable /* True if all rows are being deleted */ ){ BtCursor *p; - BtShared *pBt = pBtree->pBt; + if( pBtree->hasIncrblobCur==0 ) return; assert( sqlite3BtreeHoldsMutex(pBtree) ); - for(p=pBt->pCursor; p; p=p->pNext){ - if( p->isIncrblobHandle && (isClearTable || p->info.nKey==iRow) ){ - p->eState = CURSOR_INVALID; + pBtree->hasIncrblobCur = 0; + for(p=pBtree->pBt->pCursor; p; p=p->pNext){ + if( (p->curFlags & BTCF_Incrblob)!=0 ){ + pBtree->hasIncrblobCur = 1; + if( isClearTable || p->info.nKey==iRow ){ + p->eState = CURSOR_INVALID; + } } } } #else - /* Stub functions when INCRBLOB is omitted */ - #define invalidateOverflowCache(x) - #define invalidateAllOverflowCache(x) + /* Stub function when INCRBLOB is omitted */ #define invalidateIncrblobCursors(x,y,z) #endif /* SQLITE_OMIT_INCRBLOB */ /* ** Set bit pgno of the BtShared.pHasContent bitvec. This is called @@ -39394,19 +56149,36 @@ sqlite3BitvecDestroy(pBt->pHasContent); pBt->pHasContent = 0; } /* -** Save the current cursor position in the variables BtCursor.nKey -** and BtCursor.pKey. The cursor's state is set to CURSOR_REQUIRESEEK. +** Release all of the apPage[] pages for a cursor. +*/ +static void btreeReleaseAllCursorPages(BtCursor *pCur){ + int i; + for(i=0; i<=pCur->iPage; i++){ + releasePage(pCur->apPage[i]); + pCur->apPage[i] = 0; + } + pCur->iPage = -1; +} + +/* +** The cursor passed as the only argument must point to a valid entry +** when this function is called (i.e. have eState==CURSOR_VALID). 
This +** function saves the current cursor key in variables pCur->nKey and +** pCur->pKey. SQLITE_OK is returned if successful or an SQLite error +** code otherwise. ** -** The caller must ensure that the cursor is valid (has eState==CURSOR_VALID) -** prior to calling this routine. +** If the cursor is open on an intkey table, then the integer key +** (the rowid) is stored in pCur->nKey and pCur->pKey is left set to +** NULL. If the cursor is open on a non-intkey table, then pCur->pKey is +** set to point to a malloced buffer pCur->nKey bytes in size containing +** the key. */ -static int saveCursorPosition(BtCursor *pCur){ +static int saveCursorKey(BtCursor *pCur){ int rc; - assert( CURSOR_VALID==pCur->eState ); assert( 0==pCur->pKey ); assert( cursorHoldsMutex(pCur) ); rc = sqlite3BtreeKeySize(pCur, &pCur->nKey); @@ -39414,14 +56186,13 @@ /* If this is an intKey table, then the above call to BtreeKeySize() ** stores the integer key in pCur->nKey. In this case this value is ** all that is required. Otherwise, if pCur is not open on an intKey ** table, then malloc space for and store the pCur->nKey bytes of key - ** data. - */ - if( 0==pCur->apPage[0]->intKey ){ - void *pKey = sqlite3Malloc( (int)pCur->nKey ); + ** data. */ + if( 0==pCur->curIntKey ){ + void *pKey = sqlite3Malloc( pCur->nKey ); if( pKey ){ rc = sqlite3BtreeKey(pCur, 0, (int)pCur->nKey, pKey); if( rc==SQLITE_OK ){ pCur->pKey = pKey; }else{ @@ -39429,44 +56200,104 @@ } }else{ rc = SQLITE_NOMEM; } } - assert( !pCur->apPage[0]->intKey || !pCur->pKey ); - - if( rc==SQLITE_OK ){ - int i; - for(i=0; i<=pCur->iPage; i++){ - releasePage(pCur->apPage[i]); - pCur->apPage[i] = 0; - } - pCur->iPage = -1; + assert( !pCur->curIntKey || !pCur->pKey ); + return rc; +} + +/* +** Save the current cursor position in the variables BtCursor.nKey +** and BtCursor.pKey. The cursor's state is set to CURSOR_REQUIRESEEK. +** +** The caller must ensure that the cursor is valid (has eState==CURSOR_VALID) +** prior to calling this routine. +*/ +static int saveCursorPosition(BtCursor *pCur){ + int rc; + + assert( CURSOR_VALID==pCur->eState || CURSOR_SKIPNEXT==pCur->eState ); + assert( 0==pCur->pKey ); + assert( cursorHoldsMutex(pCur) ); + + if( pCur->eState==CURSOR_SKIPNEXT ){ + pCur->eState = CURSOR_VALID; + }else{ + pCur->skipNext = 0; + } + + rc = saveCursorKey(pCur); + if( rc==SQLITE_OK ){ + btreeReleaseAllCursorPages(pCur); pCur->eState = CURSOR_REQUIRESEEK; } - invalidateOverflowCache(pCur); + pCur->curFlags &= ~(BTCF_ValidNKey|BTCF_ValidOvfl|BTCF_AtLast); return rc; } +/* Forward reference */ +static int SQLITE_NOINLINE saveCursorsOnList(BtCursor*,Pgno,BtCursor*); + /* ** Save the positions of all cursors (except pExcept) that are open on -** the table with root-page iRoot. Usually, this is called just before cursor -** pExcept is used to modify the table (BtreeDelete() or BtreeInsert()). +** the table with root-page iRoot. "Saving the cursor position" means that +** the location in the btree is remembered in such a way that it can be +** moved back to the same spot after the btree has been modified. This +** routine is called just before cursor pExcept is used to modify the +** table, for example in BtreeDelete() or BtreeInsert(). +** +** If there are two or more cursors on the same btree, then all such +** cursors should have their BTCF_Multiple flag set. The btreeCursor() +** routine enforces that rule. This routine only needs to be called in +** the uncommon case when pExpect has the BTCF_Multiple flag set. 
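+**
+** (Illustrative aside, not part of the SQLite sources: as a concrete
+** example, if two cursors are open on the same table and one of them is
+** about to be used for a BtreeInsert() or BtreeDelete(), the write path
+** calls this routine first; saveCursorsOnList() then turns every other
+** cursor on the same root page that is in a valid state into a saved key
+** with eState==CURSOR_REQUIRESEEK, so it can be re-positioned after the
+** pages it referenced have been rebalanced.)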
+** +** If pExpect!=NULL and if no other cursors are found on the same root-page, +** then the BTCF_Multiple flag on pExpect is cleared, to avoid another +** pointless call to this routine. +** +** Implementation note: This routine merely checks to see if any cursors +** need to be saved. It calls out to saveCursorsOnList() in the (unusual) +** event that cursors are in need to being saved. */ static int saveAllCursors(BtShared *pBt, Pgno iRoot, BtCursor *pExcept){ BtCursor *p; assert( sqlite3_mutex_held(pBt->mutex) ); assert( pExcept==0 || pExcept->pBt==pBt ); for(p=pBt->pCursor; p; p=p->pNext){ - if( p!=pExcept && (0==iRoot || p->pgnoRoot==iRoot) && - p->eState==CURSOR_VALID ){ - int rc = saveCursorPosition(p); - if( SQLITE_OK!=rc ){ - return rc; + if( p!=pExcept && (0==iRoot || p->pgnoRoot==iRoot) ) break; + } + if( p ) return saveCursorsOnList(p, iRoot, pExcept); + if( pExcept ) pExcept->curFlags &= ~BTCF_Multiple; + return SQLITE_OK; +} + +/* This helper routine to saveAllCursors does the actual work of saving +** the cursors if and when a cursor is found that actually requires saving. +** The common case is that no cursors need to be saved, so this routine is +** broken out from its caller to avoid unnecessary stack pointer movement. +*/ +static int SQLITE_NOINLINE saveCursorsOnList( + BtCursor *p, /* The first cursor that needs saving */ + Pgno iRoot, /* Only save cursor with this iRoot. Save all if zero */ + BtCursor *pExcept /* Do not save this cursor */ +){ + do{ + if( p!=pExcept && (0==iRoot || p->pgnoRoot==iRoot) ){ + if( p->eState==CURSOR_VALID || p->eState==CURSOR_SKIPNEXT ){ + int rc = saveCursorPosition(p); + if( SQLITE_OK!=rc ){ + return rc; + } + }else{ + testcase( p->iPage>0 ); + btreeReleaseAllCursorPages(p); } } - } + p = p->pNext; + }while( p ); return SQLITE_OK; } /* ** Clear the current cursor position. @@ -39490,23 +56321,30 @@ int bias, /* Bias search to the high end */ int *pRes /* Write search results here */ ){ int rc; /* Status code */ UnpackedRecord *pIdxKey; /* Unpacked index key */ - char aSpace[150]; /* Temp space for pIdxKey - to avoid a malloc */ + char aSpace[200]; /* Temp space for pIdxKey - to avoid a malloc */ + char *pFree = 0; if( pKey ){ assert( nKey==(i64)(int)nKey ); - pIdxKey = sqlite3VdbeRecordUnpack(pCur->pKeyInfo, (int)nKey, pKey, - aSpace, sizeof(aSpace)); + pIdxKey = sqlite3VdbeAllocUnpackedRecord( + pCur->pKeyInfo, aSpace, sizeof(aSpace), &pFree + ); if( pIdxKey==0 ) return SQLITE_NOMEM; + sqlite3VdbeRecordUnpack(pCur->pKeyInfo, (int)nKey, pKey, pIdxKey); + if( pIdxKey->nField==0 ){ + sqlite3DbFree(pCur->pKeyInfo->db, pFree); + return SQLITE_CORRUPT_BKPT; + } }else{ pIdxKey = 0; } rc = sqlite3BtreeMovetoUnpacked(pCur, pIdxKey, nKey, bias, pRes); - if( pKey ){ - sqlite3VdbeDeleteUnpackedRecord(pIdxKey); + if( pFree ){ + sqlite3DbFree(pCur->pKeyInfo->db, pFree); } return rc; } /* @@ -39516,21 +56354,26 @@ ** at most one effective restoreCursorPosition() call after each ** saveCursorPosition(). 
*/ static int btreeRestoreCursorPosition(BtCursor *pCur){ int rc; - assert( cursorHoldsMutex(pCur) ); + int skipNext; + assert( cursorOwnsBtShared(pCur) ); assert( pCur->eState>=CURSOR_REQUIRESEEK ); if( pCur->eState==CURSOR_FAULT ){ return pCur->skipNext; } pCur->eState = CURSOR_INVALID; - rc = btreeMoveto(pCur, pCur->pKey, pCur->nKey, 0, &pCur->skipNext); + rc = btreeMoveto(pCur, pCur->pKey, pCur->nKey, 0, &skipNext); if( rc==SQLITE_OK ){ sqlite3_free(pCur->pKey); pCur->pKey = 0; assert( pCur->eState==CURSOR_VALID || pCur->eState==CURSOR_INVALID ); + pCur->skipNext |= skipNext; + if( pCur->skipNext && pCur->eState==CURSOR_VALID ){ + pCur->eState = CURSOR_SKIPNEXT; + } } return rc; } #define restoreCursorPosition(p) \ @@ -39537,43 +56380,92 @@ (p->eState>=CURSOR_REQUIRESEEK ? \ btreeRestoreCursorPosition(p) : \ SQLITE_OK) /* -** Determine whether or not a cursor has moved from the position it -** was last placed at. Cursors can move when the row they are pointing -** at is deleted out from under them. +** Determine whether or not a cursor has moved from the position where +** it was last placed, or has been invalidated for any other reason. +** Cursors can move when the row they are pointing at is deleted out +** from under them, for example. Cursor might also move if a btree +** is rebalanced. ** -** This routine returns an error code if something goes wrong. The -** integer *pHasMoved is set to one if the cursor has moved and 0 if not. +** Calling this routine with a NULL cursor pointer returns false. +** +** Use the separate sqlite3BtreeCursorRestore() routine to restore a cursor +** back to where it ought to be if this routine returns true. */ -SQLITE_PRIVATE int sqlite3BtreeCursorHasMoved(BtCursor *pCur, int *pHasMoved){ +SQLITE_PRIVATE int sqlite3BtreeCursorHasMoved(BtCursor *pCur){ + return pCur->eState!=CURSOR_VALID; +} + +/* +** This routine restores a cursor back to its original position after it +** has been moved by some outside activity (such as a btree rebalance or +** a row having been deleted out from under the cursor). +** +** On success, the *pDifferentRow parameter is false if the cursor is left +** pointing at exactly the same row. *pDifferntRow is the row the cursor +** was pointing to has been deleted, forcing the cursor to point to some +** nearby row. +** +** This routine should only be called for a cursor that just returned +** TRUE from sqlite3BtreeCursorHasMoved(). +*/ +SQLITE_PRIVATE int sqlite3BtreeCursorRestore(BtCursor *pCur, int *pDifferentRow){ int rc; + assert( pCur!=0 ); + assert( pCur->eState!=CURSOR_VALID ); rc = restoreCursorPosition(pCur); if( rc ){ - *pHasMoved = 1; + *pDifferentRow = 1; return rc; } - if( pCur->eState!=CURSOR_VALID || pCur->skipNext!=0 ){ - *pHasMoved = 1; + if( pCur->eState!=CURSOR_VALID ){ + *pDifferentRow = 1; }else{ - *pHasMoved = 0; + assert( pCur->skipNext==0 ); + *pDifferentRow = 0; } return SQLITE_OK; } + +#ifdef SQLITE_ENABLE_CURSOR_HINTS +/* +** Provide hints to the cursor. The particular hint given (and the type +** and number of the varargs parameters) is determined by the eHintType +** parameter. See the definitions of the BTREE_HINT_* macros for details. +*/ +SQLITE_PRIVATE void sqlite3BtreeCursorHint(BtCursor *pCur, int eHintType, ...){ + /* Used only by system that substitute their own storage engine */ +} +#endif + +/* +** Provide flag hints to the cursor. 
+*/ +SQLITE_PRIVATE void sqlite3BtreeCursorHintFlags(BtCursor *pCur, unsigned x){ + assert( x==BTREE_SEEK_EQ || x==BTREE_BULKLOAD || x==0 ); + pCur->hints = x; +} + #ifndef SQLITE_OMIT_AUTOVACUUM /* ** Given a page number of a regular database page, return the page ** number for the pointer-map page that contains the entry for the ** input page number. +** +** Return 0 (not a valid page) for pgno==1 since there is +** no pointer map associated with page 1. The integrity_check logic +** requires that ptrmapPageno(*,1)!=1. */ static Pgno ptrmapPageno(BtShared *pBt, Pgno pgno){ int nPagesPerMapPage; Pgno iPtrMap, ret; assert( sqlite3_mutex_held(pBt->mutex) ); + if( pgno<2 ) return 0; nPagesPerMapPage = (pBt->usableSize/5)+1; iPtrMap = (pgno-2)/nPagesPerMapPage; ret = (iPtrMap*nPagesPerMapPage) + 2; if( ret==PENDING_BYTE_PAGE(pBt) ){ ret++; @@ -39608,20 +56500,21 @@ if( key==0 ){ *pRC = SQLITE_CORRUPT_BKPT; return; } iPtrmap = PTRMAP_PAGENO(pBt, key); - rc = sqlite3PagerGet(pBt->pPager, iPtrmap, &pDbPage); + rc = sqlite3PagerGet(pBt->pPager, iPtrmap, &pDbPage, 0); if( rc!=SQLITE_OK ){ *pRC = rc; return; } offset = PTRMAP_PTROFFSET(iPtrmap, key); if( offset<0 ){ *pRC = SQLITE_CORRUPT_BKPT; goto ptrmap_exit; } + assert( offset <= (int)pBt->usableSize-5 ); pPtrmap = (u8 *)sqlite3PagerGetData(pDbPage); if( eType!=pPtrmap[offset] || get4byte(&pPtrmap[offset+1])!=parent ){ TRACE(("PTRMAP_UPDATE: %d->(%d,%d)\n", key, eType, parent)); *pRC= rc = sqlite3PagerWrite(pDbPage); @@ -39650,17 +56543,22 @@ int rc; assert( sqlite3_mutex_held(pBt->mutex) ); iPtrmap = PTRMAP_PAGENO(pBt, key); - rc = sqlite3PagerGet(pBt->pPager, iPtrmap, &pDbPage); + rc = sqlite3PagerGet(pBt->pPager, iPtrmap, &pDbPage, 0); if( rc!=0 ){ return rc; } pPtrmap = (u8 *)sqlite3PagerGetData(pDbPage); offset = PTRMAP_PTROFFSET(iPtrmap, key); + if( offset<0 ){ + sqlite3PagerUnref(pDbPage); + return SQLITE_CORRUPT_BKPT; + } + assert( offset <= (int)pBt->usableSize-5 ); assert( pEType!=0 ); *pEType = pPtrmap[offset]; if( pPgno ) *pPgno = get4byte(&pPtrmap[offset+1]); sqlite3PagerUnref(pDbPage); @@ -39676,192 +56574,291 @@ /* ** Given a btree page and a cell index (0 means the first cell on ** the page, 1 means the second cell, and so forth) return a pointer ** to the cell content. +** +** findCellPastPtr() does the same except it skips past the initial +** 4-byte child pointer found on interior pages, if there is one. ** ** This routine works only for pages that do not contain overflow cells. */ #define findCell(P,I) \ - ((P)->aData + ((P)->maskPage & get2byte(&(P)->aData[(P)->cellOffset+2*(I)]))) + ((P)->aData + ((P)->maskPage & get2byteAligned(&(P)->aCellIdx[2*(I)]))) +#define findCellPastPtr(P,I) \ + ((P)->aDataOfst + ((P)->maskPage & get2byteAligned(&(P)->aCellIdx[2*(I)]))) + /* -** This a more complex version of findCell() that works for -** pages that do contain overflow cells. +** This is common tail processing for btreeParseCellPtr() and +** btreeParseCellPtrIndex() for the case when the cell does not fit entirely +** on a single B-tree page. Make necessary adjustments to the CellInfo +** structure. */ -static u8 *findOverflowCell(MemPage *pPage, int iCell){ - int i; +static SQLITE_NOINLINE void btreeParseCellAdjustSizeForOverflow( + MemPage *pPage, /* Page containing the cell */ + u8 *pCell, /* Pointer to the cell text. */ + CellInfo *pInfo /* Fill in this structure */ +){ + /* If the payload will not fit completely on the local page, we have + ** to decide how much to store locally and how much to spill onto + ** overflow pages. 
The strategy is to minimize the amount of unused + ** space on overflow pages while keeping the amount of local storage + ** in between minLocal and maxLocal. + ** + ** Warning: changing the way overflow payload is distributed in any + ** way will result in an incompatible file format. + */ + int minLocal; /* Minimum amount of payload held locally */ + int maxLocal; /* Maximum amount of payload held locally */ + int surplus; /* Overflow payload available for local storage */ + + minLocal = pPage->minLocal; + maxLocal = pPage->maxLocal; + surplus = minLocal + (pInfo->nPayload - minLocal)%(pPage->pBt->usableSize-4); + testcase( surplus==maxLocal ); + testcase( surplus==maxLocal+1 ); + if( surplus <= maxLocal ){ + pInfo->nLocal = (u16)surplus; + }else{ + pInfo->nLocal = (u16)minLocal; + } + pInfo->nSize = (u16)(&pInfo->pPayload[pInfo->nLocal] - pCell) + 4; +} + +/* +** The following routines are implementations of the MemPage.xParseCell() +** method. +** +** Parse a cell content block and fill in the CellInfo structure. +** +** btreeParseCellPtr() => table btree leaf nodes +** btreeParseCellNoPayload() => table btree internal nodes +** btreeParseCellPtrIndex() => index btree nodes +** +** There is also a wrapper function btreeParseCell() that works for +** all MemPage types and that references the cell by index rather than +** by pointer. +*/ +static void btreeParseCellPtrNoPayload( + MemPage *pPage, /* Page containing the cell */ + u8 *pCell, /* Pointer to the cell text. */ + CellInfo *pInfo /* Fill in this structure */ +){ assert( sqlite3_mutex_held(pPage->pBt->mutex) ); - for(i=pPage->nOverflow-1; i>=0; i--){ - int k; - struct _OvflCell *pOvfl; - pOvfl = &pPage->aOvfl[i]; - k = pOvfl->idx; - if( k<=iCell ){ - if( k==iCell ){ - return pOvfl->pCell; - } - iCell--; - } - } - return findCell(pPage, iCell); -} - -/* -** Parse a cell content block and fill in the CellInfo structure. There -** are two versions of this function. btreeParseCell() takes a -** cell index as the second argument and btreeParseCellPtr() -** takes a pointer to the body of the cell as its second argument. -** -** Within this file, the parseCell() macro can be called instead of -** btreeParseCellPtr(). Using some compilers, this will be faster. -*/ + assert( pPage->leaf==0 ); + assert( pPage->childPtrSize==4 ); +#ifndef SQLITE_DEBUG + UNUSED_PARAMETER(pPage); +#endif + pInfo->nSize = 4 + getVarint(&pCell[4], (u64*)&pInfo->nKey); + pInfo->nPayload = 0; + pInfo->nLocal = 0; + pInfo->pPayload = 0; + return; +} static void btreeParseCellPtr( MemPage *pPage, /* Page containing the cell */ u8 *pCell, /* Pointer to the cell text. */ CellInfo *pInfo /* Fill in this structure */ ){ - u16 n; /* Number bytes in cell content header */ + u8 *pIter; /* For scanning through pCell */ + u32 nPayload; /* Number of bytes of cell payload */ + u64 iKey; /* Extracted Key value */ + + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + assert( pPage->leaf==0 || pPage->leaf==1 ); + assert( pPage->intKeyLeaf ); + assert( pPage->childPtrSize==0 ); + pIter = pCell; + + /* The next block of code is equivalent to: + ** + ** pIter += getVarint32(pIter, nPayload); + ** + ** The code is inlined to avoid a function call. 
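+  **
+  ** (Illustrative aside, not part of the SQLite sources: as a worked
+  ** example, a payload length of 300 is stored as the two varint bytes
+  ** 0x82 0x2C.  The loop below keeps the low 7 bits of 0x82 (giving 2),
+  ** sees the high bit set, shifts left by 7 and ORs in the low 7 bits
+  ** of 0x2C (44), yielding (2<<7)|44 == 300, and then stops because
+  ** 0x2C has its high bit clear.)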
+ */ + nPayload = *pIter; + if( nPayload>=0x80 ){ + u8 *pEnd = &pIter[8]; + nPayload &= 0x7f; + do{ + nPayload = (nPayload<<7) | (*++pIter & 0x7f); + }while( (*pIter)>=0x80 && pIter<pEnd ); + } + pIter++; + + /* The next block of code is equivalent to: + ** + ** pIter += getVarint(pIter, (u64*)&pInfo->nKey); + ** + ** The code is inlined to avoid a function call. + */ + iKey = *pIter; + if( iKey>=0x80 ){ + u8 *pEnd = &pIter[7]; + iKey &= 0x7f; + while(1){ + iKey = (iKey<<7) | (*++pIter & 0x7f); + if( (*pIter)<0x80 ) break; + if( pIter>=pEnd ){ + iKey = (iKey<<8) | *++pIter; + break; + } + } + } + pIter++; + + pInfo->nKey = *(i64*)&iKey; + pInfo->nPayload = nPayload; + pInfo->pPayload = pIter; + testcase( nPayload==pPage->maxLocal ); + testcase( nPayload==pPage->maxLocal+1 ); + if( nPayload<=pPage->maxLocal ){ + /* This is the (easy) common case where the entire payload fits + ** on the local page. No overflow is required. + */ + pInfo->nSize = nPayload + (u16)(pIter - pCell); + if( pInfo->nSize<4 ) pInfo->nSize = 4; + pInfo->nLocal = (u16)nPayload; + }else{ + btreeParseCellAdjustSizeForOverflow(pPage, pCell, pInfo); + } +} +static void btreeParseCellPtrIndex( + MemPage *pPage, /* Page containing the cell */ + u8 *pCell, /* Pointer to the cell text. */ + CellInfo *pInfo /* Fill in this structure */ +){ + u8 *pIter; /* For scanning through pCell */ u32 nPayload; /* Number of bytes of cell payload */ assert( sqlite3_mutex_held(pPage->pBt->mutex) ); - - pInfo->pCell = pCell; assert( pPage->leaf==0 || pPage->leaf==1 ); - n = pPage->childPtrSize; - assert( n==4-4*pPage->leaf ); - if( pPage->intKey ){ - if( pPage->hasData ){ - n += getVarint32(&pCell[n], nPayload); - }else{ - nPayload = 0; - } - n += getVarint(&pCell[n], (u64*)&pInfo->nKey); - pInfo->nData = nPayload; - }else{ - pInfo->nData = 0; - n += getVarint32(&pCell[n], nPayload); - pInfo->nKey = nPayload; - } + assert( pPage->intKeyLeaf==0 ); + pIter = pCell + pPage->childPtrSize; + nPayload = *pIter; + if( nPayload>=0x80 ){ + u8 *pEnd = &pIter[8]; + nPayload &= 0x7f; + do{ + nPayload = (nPayload<<7) | (*++pIter & 0x7f); + }while( *(pIter)>=0x80 && pIter<pEnd ); + } + pIter++; + pInfo->nKey = nPayload; pInfo->nPayload = nPayload; - pInfo->nHeader = n; + pInfo->pPayload = pIter; testcase( nPayload==pPage->maxLocal ); testcase( nPayload==pPage->maxLocal+1 ); - if( likely(nPayload<=pPage->maxLocal) ){ + if( nPayload<=pPage->maxLocal ){ /* This is the (easy) common case where the entire payload fits ** on the local page. No overflow is required. */ - int nSize; /* Total size of cell content in bytes */ - nSize = nPayload + n; + pInfo->nSize = nPayload + (u16)(pIter - pCell); + if( pInfo->nSize<4 ) pInfo->nSize = 4; pInfo->nLocal = (u16)nPayload; - pInfo->iOverflow = 0; - if( (nSize & ~3)==0 ){ - nSize = 4; /* Minimum cell size is 4 */ - } - pInfo->nSize = (u16)nSize; - }else{ - /* If the payload will not fit completely on the local page, we have - ** to decide how much to store locally and how much to spill onto - ** overflow pages. The strategy is to minimize the amount of unused - ** space on overflow pages while keeping the amount of local storage - ** in between minLocal and maxLocal. - ** - ** Warning: changing the way overflow payload is distributed in any - ** way will result in an incompatible file format. 
- */ - int minLocal; /* Minimum amount of payload held locally */ - int maxLocal; /* Maximum amount of payload held locally */ - int surplus; /* Overflow payload available for local storage */ - - minLocal = pPage->minLocal; - maxLocal = pPage->maxLocal; - surplus = minLocal + (nPayload - minLocal)%(pPage->pBt->usableSize - 4); - testcase( surplus==maxLocal ); - testcase( surplus==maxLocal+1 ); - if( surplus <= maxLocal ){ - pInfo->nLocal = (u16)surplus; - }else{ - pInfo->nLocal = (u16)minLocal; - } - pInfo->iOverflow = (u16)(pInfo->nLocal + n); - pInfo->nSize = pInfo->iOverflow + 4; - } -} -#define parseCell(pPage, iCell, pInfo) \ - btreeParseCellPtr((pPage), findCell((pPage), (iCell)), (pInfo)) + }else{ + btreeParseCellAdjustSizeForOverflow(pPage, pCell, pInfo); + } +} static void btreeParseCell( MemPage *pPage, /* Page containing the cell */ int iCell, /* The cell index. First cell is 0 */ CellInfo *pInfo /* Fill in this structure */ ){ - parseCell(pPage, iCell, pInfo); + pPage->xParseCell(pPage, findCell(pPage, iCell), pInfo); } /* +** The following routines are implementations of the MemPage.xCellSize +** method. +** ** Compute the total number of bytes that a Cell needs in the cell ** data area of the btree-page. The return number includes the cell ** data header and the local payload, but not any overflow page or ** the space used by the cell pointer. +** +** cellSizePtrNoPayload() => table internal nodes +** cellSizePtr() => all index nodes & table leaf nodes */ static u16 cellSizePtr(MemPage *pPage, u8 *pCell){ - u8 *pIter = &pCell[pPage->childPtrSize]; - u32 nSize; + u8 *pIter = pCell + pPage->childPtrSize; /* For looping over bytes of pCell */ + u8 *pEnd; /* End mark for a varint */ + u32 nSize; /* Size value to return */ #ifdef SQLITE_DEBUG /* The value returned by this function should always be the same as ** the (CellInfo.nSize) value found by doing a full parse of the ** cell. If SQLITE_DEBUG is defined, an assert() at the bottom of ** this function verifies that this invariant is not violated. */ CellInfo debuginfo; - btreeParseCellPtr(pPage, pCell, &debuginfo); + pPage->xParseCell(pPage, pCell, &debuginfo); #endif + nSize = *pIter; + if( nSize>=0x80 ){ + pEnd = &pIter[8]; + nSize &= 0x7f; + do{ + nSize = (nSize<<7) | (*++pIter & 0x7f); + }while( *(pIter)>=0x80 && pIter<pEnd ); + } + pIter++; if( pPage->intKey ){ - u8 *pEnd; - if( pPage->hasData ){ - pIter += getVarint32(pIter, nSize); - }else{ - nSize = 0; - } - /* pIter now points at the 64-bit integer key value, a variable length ** integer. The following block moves pIter to point at the first byte ** past the end of the key value. */ pEnd = &pIter[9]; while( (*pIter++)&0x80 && pIter<pEnd ); - }else{ - pIter += getVarint32(pIter, nSize); } - testcase( nSize==pPage->maxLocal ); testcase( nSize==pPage->maxLocal+1 ); - if( nSize>pPage->maxLocal ){ + if( nSize<=pPage->maxLocal ){ + nSize += (u32)(pIter - pCell); + if( nSize<4 ) nSize = 4; + }else{ int minLocal = pPage->minLocal; nSize = minLocal + (nSize - minLocal) % (pPage->pBt->usableSize - 4); testcase( nSize==pPage->maxLocal ); testcase( nSize==pPage->maxLocal+1 ); if( nSize>pPage->maxLocal ){ nSize = minLocal; } - nSize += 4; - } - nSize += (u32)(pIter - pCell); - - /* The minimum size of any cell is 4 bytes. 
*/ - if( nSize<4 ){ - nSize = 4; - } - - assert( nSize==debuginfo.nSize ); + nSize += 4 + (u16)(pIter - pCell); + } + assert( nSize==debuginfo.nSize || CORRUPT_DB ); return (u16)nSize; } +static u16 cellSizePtrNoPayload(MemPage *pPage, u8 *pCell){ + u8 *pIter = pCell + 4; /* For looping over bytes of pCell */ + u8 *pEnd; /* End mark for a varint */ + +#ifdef SQLITE_DEBUG + /* The value returned by this function should always be the same as + ** the (CellInfo.nSize) value found by doing a full parse of the + ** cell. If SQLITE_DEBUG is defined, an assert() at the bottom of + ** this function verifies that this invariant is not violated. */ + CellInfo debuginfo; + pPage->xParseCell(pPage, pCell, &debuginfo); +#else + UNUSED_PARAMETER(pPage); +#endif + + assert( pPage->childPtrSize==4 ); + pEnd = pIter + 9; + while( (*pIter++)&0x80 && pIter<pEnd ); + assert( debuginfo.nSize==(u16)(pIter - pCell) || CORRUPT_DB ); + return (u16)(pIter - pCell); +} + #ifdef SQLITE_DEBUG /* This variation on cellSizePtr() is used inside of assert() statements ** only. */ static u16 cellSize(MemPage *pPage, int iCell){ - return cellSizePtr(pPage, findCell(pPage, iCell)); + return pPage->xCellSize(pPage, findCell(pPage, iCell)); } #endif #ifndef SQLITE_OMIT_AUTOVACUUM /* @@ -39871,14 +56868,13 @@ */ static void ptrmapPutOvflPtr(MemPage *pPage, u8 *pCell, int *pRC){ CellInfo info; if( *pRC ) return; assert( pCell!=0 ); - btreeParseCellPtr(pPage, pCell, &info); - assert( (info.nData+(pPage->intKey?0:info.nKey))==info.nPayload ); - if( info.iOverflow ){ - Pgno ovfl = get4byte(&pCell[info.iOverflow]); + pPage->xParseCell(pPage, pCell, &info); + if( info.nLocal<info.nPayload ){ + Pgno ovfl = get4byte(&pCell[info.nSize-4]); ptrmapPut(pPage->pBt, ovfl, PTRMAP_OVERFLOW1, pPage->pgno, pRC); } } #endif @@ -39886,74 +56882,78 @@ /* ** Defragment the page given. All Cells are moved to the ** end of the page and all free space is collected into one ** big FreeBlk that occurs in between the header and cell ** pointer array and the cell content area. +** +** EVIDENCE-OF: R-44582-60138 SQLite may from time to time reorganize a +** b-tree page so that there are no freeblocks or fragment bytes, all +** unused bytes are contained in the unallocated space region, and all +** cells are packed tightly at the end of the page. 
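+**
+** (Illustrative aside, not part of the SQLite sources: to make the
+** rebuild concrete, assume a 1024-byte page with no reserved space that
+** holds two 100-byte cells at offsets 500 and 800.  The loop below packs
+** them against the top of the page, moving the cell named by the first
+** pointer-array entry to offset 924 and the second to offset 824, and
+** then writes 824, the new start of the cell content area, into the
+** two-byte integer at header offset 5.)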
*/ static int defragmentPage(MemPage *pPage){ int i; /* Loop counter */ - int pc; /* Address of a i-th cell */ + int pc; /* Address of the i-th cell */ int hdr; /* Offset to the page header */ int size; /* Size of a cell */ int usableSize; /* Number of usable bytes on a page */ int cellOffset; /* Offset to the cell pointer array */ int cbrk; /* Offset to the cell content area */ int nCell; /* Number of cells on the page */ unsigned char *data; /* The page data */ unsigned char *temp; /* Temp area for cell content */ + unsigned char *src; /* Source of content */ int iCellFirst; /* First allowable cell index */ int iCellLast; /* Last possible cell index */ assert( sqlite3PagerIswriteable(pPage->pDbPage) ); assert( pPage->pBt!=0 ); assert( pPage->pBt->usableSize <= SQLITE_MAX_PAGE_SIZE ); assert( pPage->nOverflow==0 ); assert( sqlite3_mutex_held(pPage->pBt->mutex) ); - temp = sqlite3PagerTempSpace(pPage->pBt->pPager); - data = pPage->aData; + temp = 0; + src = data = pPage->aData; hdr = pPage->hdrOffset; cellOffset = pPage->cellOffset; nCell = pPage->nCell; assert( nCell==get2byte(&data[hdr+3]) ); usableSize = pPage->pBt->usableSize; - cbrk = get2byte(&data[hdr+5]); - memcpy(&temp[cbrk], &data[cbrk], usableSize - cbrk); cbrk = usableSize; iCellFirst = cellOffset + 2*nCell; iCellLast = usableSize - 4; for(i=0; i<nCell; i++){ u8 *pAddr; /* The i-th cell pointer */ pAddr = &data[cellOffset + i*2]; pc = get2byte(pAddr); testcase( pc==iCellFirst ); testcase( pc==iCellLast ); -#if !defined(SQLITE_ENABLE_OVERSIZE_CELL_CHECK) /* These conditions have already been verified in btreeInitPage() - ** if SQLITE_ENABLE_OVERSIZE_CELL_CHECK is defined + ** if PRAGMA cell_size_check=ON. */ if( pc<iCellFirst || pc>iCellLast ){ return SQLITE_CORRUPT_BKPT; } -#endif assert( pc>=iCellFirst && pc<=iCellLast ); - size = cellSizePtr(pPage, &temp[pc]); + size = pPage->xCellSize(pPage, &src[pc]); cbrk -= size; -#if defined(SQLITE_ENABLE_OVERSIZE_CELL_CHECK) - if( cbrk<iCellFirst ){ - return SQLITE_CORRUPT_BKPT; - } -#else if( cbrk<iCellFirst || pc+size>usableSize ){ return SQLITE_CORRUPT_BKPT; } -#endif assert( cbrk+size<=usableSize && cbrk>=iCellFirst ); testcase( cbrk+size==usableSize ); testcase( pc+size==usableSize ); - memcpy(&data[cbrk], &temp[pc], size); put2byte(pAddr, cbrk); + if( temp==0 ){ + int x; + if( cbrk==pc ) continue; + temp = sqlite3PagerTempSpace(pPage->pBt->pPager); + x = get2byte(&data[hdr+5]); + memcpy(&temp[x], &data[x], (cbrk+size) - x); + src = temp; + } + memcpy(&data[cbrk], &src[pc], size); } assert( cbrk>=iCellFirst ); put2byte(&data[hdr+5], cbrk); data[hdr+1] = 0; data[hdr+2] = 0; @@ -39963,10 +56963,74 @@ if( cbrk-iCellFirst!=pPage->nFree ){ return SQLITE_CORRUPT_BKPT; } return SQLITE_OK; } + +/* +** Search the free-list on page pPg for space to store a cell nByte bytes in +** size. If one can be found, return a pointer to the space and remove it +** from the free-list. +** +** If no suitable space can be found on the free-list, return NULL. +** +** This function may detect corruption within pPg. If corruption is +** detected then *pRc is set to SQLITE_CORRUPT and NULL is returned. +** +** Slots on the free list that are between 1 and 3 bytes larger than nByte +** will be ignored if adding the extra space to the fragmentation count +** causes the fragmentation count to exceed 60. 
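+**
+** (Illustrative aside, not part of the SQLite sources: for a 12-byte
+** request, a 14-byte freeblock is consumed whole: the slot is unlinked
+** and its 2 surplus bytes are added to the fragment counter at header
+** offset 7.  A 20-byte freeblock instead stays on the list, is trimmed
+** to 8 bytes, and the 12-byte allocation is carved from its tail.)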
+*/ +static u8 *pageFindSlot(MemPage *pPg, int nByte, int *pRc){ + const int hdr = pPg->hdrOffset; + u8 * const aData = pPg->aData; + int iAddr = hdr + 1; + int pc = get2byte(&aData[iAddr]); + int x; + int usableSize = pPg->pBt->usableSize; + + assert( pc>0 ); + do{ + int size; /* Size of the free slot */ + /* EVIDENCE-OF: R-06866-39125 Freeblocks are always connected in order of + ** increasing offset. */ + if( pc>usableSize-4 || pc<iAddr+4 ){ + *pRc = SQLITE_CORRUPT_BKPT; + return 0; + } + /* EVIDENCE-OF: R-22710-53328 The third and fourth bytes of each + ** freeblock form a big-endian integer which is the size of the freeblock + ** in bytes, including the 4-byte header. */ + size = get2byte(&aData[pc+2]); + if( (x = size - nByte)>=0 ){ + testcase( x==4 ); + testcase( x==3 ); + if( pc < pPg->cellOffset+2*pPg->nCell || size+pc > usableSize ){ + *pRc = SQLITE_CORRUPT_BKPT; + return 0; + }else if( x<4 ){ + /* EVIDENCE-OF: R-11498-58022 In a well-formed b-tree page, the total + ** number of bytes in fragments may not exceed 60. */ + if( aData[hdr+7]>57 ) return 0; + + /* Remove the slot from the free-list. Update the number of + ** fragmented bytes within the page. */ + memcpy(&aData[iAddr], &aData[pc], 2); + aData[hdr+7] += (u8)x; + }else{ + /* The slot remains on the free-list. Reduce its size to account + ** for the portion used by the new allocation. */ + put2byte(&aData[pc+2], x); + } + return &aData[pc + x]; + } + iAddr = pc; + pc = get2byte(&aData[pc]); + }while( pc ); + + return 0; +} /* ** Allocate nByte bytes of space from within the B-Tree page passed ** as the first argument. Write into *pIdx the index into pPage->aData[] ** of the first byte of allocated space. Return either SQLITE_OK or @@ -39980,81 +57044,67 @@ ** also end up needing a new cell pointer. */ static int allocateSpace(MemPage *pPage, int nByte, int *pIdx){ const int hdr = pPage->hdrOffset; /* Local cache of pPage->hdrOffset */ u8 * const data = pPage->aData; /* Local cache of pPage->aData */ - int nFrag; /* Number of fragmented bytes on pPage */ int top; /* First byte of cell content area */ + int rc = SQLITE_OK; /* Integer return code */ int gap; /* First byte of gap between cell pointers and cell content */ - int rc; /* Integer return code */ - int usableSize; /* Usable size of the page */ assert( sqlite3PagerIswriteable(pPage->pDbPage) ); assert( pPage->pBt ); assert( sqlite3_mutex_held(pPage->pBt->mutex) ); assert( nByte>=0 ); /* Minimum cell size is 4 */ assert( pPage->nFree>=nByte ); assert( pPage->nOverflow==0 ); - usableSize = pPage->pBt->usableSize; - assert( nByte < usableSize-8 ); + assert( nByte < (int)(pPage->pBt->usableSize-8) ); - nFrag = data[hdr+7]; assert( pPage->cellOffset == hdr + 12 - 4*pPage->leaf ); gap = pPage->cellOffset + 2*pPage->nCell; + assert( gap<=65536 ); + /* EVIDENCE-OF: R-29356-02391 If the database uses a 65536-byte page size + ** and the reserved space is zero (the usual value for reserved space) + ** then the cell content offset of an empty page wants to be 65536. + ** However, that integer is too large to be stored in a 2-byte unsigned + ** integer, so a value of 0 is used in its place. 
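+  **
+  ** (Illustrative aside, not part of the SQLite sources: this is why
+  ** the code below accepts top==0 when the usable size is 65536, and
+  ** why the defragmentation branch re-reads the value with
+  ** get2byteNotZero(), which maps a stored 0 back to 65536.)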
*/ top = get2byte(&data[hdr+5]); - if( gap>top ) return SQLITE_CORRUPT_BKPT; + assert( top<=(int)pPage->pBt->usableSize ); /* Prevent by getAndInitPage() */ + if( gap>top ){ + if( top==0 && pPage->pBt->usableSize==65536 ){ + top = 65536; + }else{ + return SQLITE_CORRUPT_BKPT; + } + } + + /* If there is enough space between gap and top for one more cell pointer + ** array entry offset, and if the freelist is not empty, then search the + ** freelist looking for a free slot big enough to satisfy the request. + */ testcase( gap+2==top ); testcase( gap+1==top ); testcase( gap==top ); - - if( nFrag>=60 ){ - /* Always defragment highly fragmented pages */ - rc = defragmentPage(pPage); - if( rc ) return rc; - top = get2byte(&data[hdr+5]); - }else if( gap+2<=top ){ - /* Search the freelist looking for a free slot big enough to satisfy - ** the request. The allocation is made from the first free slot in - ** the list that is large enough to accomadate it. - */ - int pc, addr; - for(addr=hdr+1; (pc = get2byte(&data[addr]))>0; addr=pc){ - int size; /* Size of the free slot */ - if( pc>usableSize-4 || pc<addr+4 ){ - return SQLITE_CORRUPT_BKPT; - } - size = get2byte(&data[pc+2]); - if( size>=nByte ){ - int x = size - nByte; - testcase( x==4 ); - testcase( x==3 ); - if( x<4 ){ - /* Remove the slot from the free-list. Update the number of - ** fragmented bytes within the page. */ - memcpy(&data[addr], &data[pc], 2); - data[hdr+7] = (u8)(nFrag + x); - }else if( size+pc > usableSize ){ - return SQLITE_CORRUPT_BKPT; - }else{ - /* The slot remains on the free-list. Reduce its size to account - ** for the portion used by the new allocation. */ - put2byte(&data[pc+2], x); - } - *pIdx = pc + x; - return SQLITE_OK; - } - } - } - - /* Check to make sure there is enough space in the gap to satisfy - ** the allocation. If not, defragment. + if( (data[hdr+2] || data[hdr+1]) && gap+2<=top ){ + u8 *pSpace = pageFindSlot(pPage, nByte, &rc); + if( pSpace ){ + assert( pSpace>=data && (pSpace - data)<65536 ); + *pIdx = (int)(pSpace - data); + return SQLITE_OK; + }else if( rc ){ + return rc; + } + } + + /* The request could not be fulfilled using a freelist slot. Check + ** to see if defragmentation is necessary. */ testcase( gap+2+nByte==top ); if( gap+2+nByte>top ){ + assert( pPage->nCell>0 || CORRUPT_DB ); rc = defragmentPage(pPage); if( rc ) return rc; - top = get2byte(&data[hdr+5]); + top = get2byteNotZero(&data[hdr+5]); assert( gap+nByte<=top ); } /* Allocate memory from the gap in between the cell pointer array @@ -40063,101 +57113,112 @@ ** is no way that the allocation can extend off the end of the page. ** The assert() below verifies the previous sentence. */ top -= nByte; put2byte(&data[hdr+5], top); - assert( top+nByte <= pPage->pBt->usableSize ); + assert( top+nByte <= (int)pPage->pBt->usableSize ); *pIdx = top; return SQLITE_OK; } /* ** Return a section of the pPage->aData to the freelist. -** The first byte of the new free block is pPage->aDisk[start] -** and the size of the block is "size" bytes. +** The first byte of the new free block is pPage->aData[iStart] +** and the size of the block is iSize bytes. ** -** Most of the effort here is involved in coalesing adjacent -** free blocks into a single big free block. +** Adjacent freeblocks are coalesced. +** +** Note that even though the freeblock list was checked by btreeInitPage(), +** that routine will not detect overlap between cells or freeblocks. Nor +** does it detect cells or freeblocks that encrouch into the reserved bytes +** at the end of the page. 
So do additional corruption checks inside this +** routine and return SQLITE_CORRUPT if any problems are found. */ -static int freeSpace(MemPage *pPage, int start, int size){ - int addr, pbegin, hdr; - int iLast; /* Largest possible freeblock offset */ - unsigned char *data = pPage->aData; +static int freeSpace(MemPage *pPage, u16 iStart, u16 iSize){ + u16 iPtr; /* Address of ptr to next freeblock */ + u16 iFreeBlk; /* Address of the next freeblock */ + u8 hdr; /* Page header size. 0 or 100 */ + u8 nFrag = 0; /* Reduction in fragmentation */ + u16 iOrigSize = iSize; /* Original value of iSize */ + u32 iLast = pPage->pBt->usableSize-4; /* Largest possible freeblock offset */ + u32 iEnd = iStart + iSize; /* First byte past the iStart buffer */ + unsigned char *data = pPage->aData; /* Page content */ assert( pPage->pBt!=0 ); assert( sqlite3PagerIswriteable(pPage->pDbPage) ); - assert( start>=pPage->hdrOffset+6+pPage->childPtrSize ); - assert( (start + size)<=pPage->pBt->usableSize ); + assert( CORRUPT_DB || iStart>=pPage->hdrOffset+6+pPage->childPtrSize ); + assert( CORRUPT_DB || iEnd <= pPage->pBt->usableSize ); assert( sqlite3_mutex_held(pPage->pBt->mutex) ); - assert( size>=0 ); /* Minimum cell size is 4 */ - - if( pPage->pBt->secureDelete ){ - /* Overwrite deleted information with zeros when the secure_delete - ** option is enabled */ - memset(&data[start], 0, size); - } - - /* Add the space back into the linked list of freeblocks. Note that - ** even though the freeblock list was checked by btreeInitPage(), - ** btreeInitPage() did not detect overlapping cells or - ** freeblocks that overlapped cells. Nor does it detect when the - ** cell content area exceeds the value in the page header. If these - ** situations arise, then subsequent insert operations might corrupt - ** the freelist. So we do need to check for corruption while scanning - ** the freelist. + assert( iSize>=4 ); /* Minimum cell size is 4 */ + assert( iStart<=iLast ); + + /* Overwrite deleted information with zeros when the secure_delete + ** option is enabled */ + if( pPage->pBt->btsFlags & BTS_SECURE_DELETE ){ + memset(&data[iStart], 0, iSize); + } + + /* The list of freeblocks must be in ascending order. Find the + ** spot on the list where iStart should be inserted. */ hdr = pPage->hdrOffset; - addr = hdr + 1; - iLast = pPage->pBt->usableSize - 4; - assert( start<=iLast ); - while( (pbegin = get2byte(&data[addr]))<start && pbegin>0 ){ - if( pbegin<addr+4 ){ - return SQLITE_CORRUPT_BKPT; - } - addr = pbegin; - } - if( pbegin>iLast ){ - return SQLITE_CORRUPT_BKPT; - } - assert( pbegin>addr || pbegin==0 ); - put2byte(&data[addr], start); - put2byte(&data[start], pbegin); - put2byte(&data[start+2], size); - pPage->nFree = pPage->nFree + (u16)size; - - /* Coalesce adjacent free blocks */ - addr = hdr + 1; - while( (pbegin = get2byte(&data[addr]))>0 ){ - int pnext, psize, x; - assert( pbegin>addr ); - assert( pbegin<=pPage->pBt->usableSize-4 ); - pnext = get2byte(&data[pbegin]); - psize = get2byte(&data[pbegin+2]); - if( pbegin + psize + 3 >= pnext && pnext>0 ){ - int frag = pnext - (pbegin+psize); - if( (frag<0) || (frag>(int)data[hdr+7]) ){ - return SQLITE_CORRUPT_BKPT; - } - data[hdr+7] -= (u8)frag; - x = get2byte(&data[pnext]); - put2byte(&data[pbegin], x); - x = pnext + get2byte(&data[pnext+2]) - pbegin; - put2byte(&data[pbegin+2], x); - }else{ - addr = pbegin; - } - } - - /* If the cell content area begins with a freeblock, remove it. 
*/ - if( data[hdr+1]==data[hdr+5] && data[hdr+2]==data[hdr+6] ){ - int top; - pbegin = get2byte(&data[hdr+1]); - memcpy(&data[hdr+1], &data[pbegin], 2); - top = get2byte(&data[hdr+5]) + get2byte(&data[pbegin+2]); - put2byte(&data[hdr+5], top); - } - assert( sqlite3PagerIswriteable(pPage->pDbPage) ); + iPtr = hdr + 1; + if( data[iPtr+1]==0 && data[iPtr]==0 ){ + iFreeBlk = 0; /* Shortcut for the case when the freelist is empty */ + }else{ + while( (iFreeBlk = get2byte(&data[iPtr]))>0 && iFreeBlk<iStart ){ + if( iFreeBlk<iPtr+4 ) return SQLITE_CORRUPT_BKPT; + iPtr = iFreeBlk; + } + if( iFreeBlk>iLast ) return SQLITE_CORRUPT_BKPT; + assert( iFreeBlk>iPtr || iFreeBlk==0 ); + + /* At this point: + ** iFreeBlk: First freeblock after iStart, or zero if none + ** iPtr: The address of a pointer to iFreeBlk + ** + ** Check to see if iFreeBlk should be coalesced onto the end of iStart. + */ + if( iFreeBlk && iEnd+3>=iFreeBlk ){ + nFrag = iFreeBlk - iEnd; + if( iEnd>iFreeBlk ) return SQLITE_CORRUPT_BKPT; + iEnd = iFreeBlk + get2byte(&data[iFreeBlk+2]); + if( iEnd > pPage->pBt->usableSize ) return SQLITE_CORRUPT_BKPT; + iSize = iEnd - iStart; + iFreeBlk = get2byte(&data[iFreeBlk]); + } + + /* If iPtr is another freeblock (that is, if iPtr is not the freelist + ** pointer in the page header) then check to see if iStart should be + ** coalesced onto the end of iPtr. + */ + if( iPtr>hdr+1 ){ + int iPtrEnd = iPtr + get2byte(&data[iPtr+2]); + if( iPtrEnd+3>=iStart ){ + if( iPtrEnd>iStart ) return SQLITE_CORRUPT_BKPT; + nFrag += iStart - iPtrEnd; + iSize = iEnd - iPtr; + iStart = iPtr; + } + } + if( nFrag>data[hdr+7] ) return SQLITE_CORRUPT_BKPT; + data[hdr+7] -= nFrag; + } + if( iStart==get2byte(&data[hdr+5]) ){ + /* The new freeblock is at the beginning of the cell content area, + ** so just extend the cell content area rather than create another + ** freelist entry */ + if( iPtr!=hdr+1 ) return SQLITE_CORRUPT_BKPT; + put2byte(&data[hdr+1], iFreeBlk); + put2byte(&data[hdr+5], iEnd); + }else{ + /* Insert the new freeblock into the freelist */ + put2byte(&data[iPtr], iStart); + put2byte(&data[iStart], iFreeBlk); + put2byte(&data[iStart+2], iSize); + } + pPage->nFree += iOrigSize; return SQLITE_OK; } /* ** Decode the flags byte (the first byte of the header) for a page @@ -40177,24 +57238,48 @@ assert( pPage->hdrOffset==(pPage->pgno==1 ? 100 : 0) ); assert( sqlite3_mutex_held(pPage->pBt->mutex) ); pPage->leaf = (u8)(flagByte>>3); assert( PTF_LEAF == 1<<3 ); flagByte &= ~PTF_LEAF; pPage->childPtrSize = 4-4*pPage->leaf; + pPage->xCellSize = cellSizePtr; pBt = pPage->pBt; if( flagByte==(PTF_LEAFDATA | PTF_INTKEY) ){ + /* EVIDENCE-OF: R-03640-13415 A value of 5 means the page is an interior + ** table b-tree page. */ + assert( (PTF_LEAFDATA|PTF_INTKEY)==5 ); + /* EVIDENCE-OF: R-20501-61796 A value of 13 means the page is a leaf + ** table b-tree page. */ + assert( (PTF_LEAFDATA|PTF_INTKEY|PTF_LEAF)==13 ); pPage->intKey = 1; - pPage->hasData = pPage->leaf; + if( pPage->leaf ){ + pPage->intKeyLeaf = 1; + pPage->xParseCell = btreeParseCellPtr; + }else{ + pPage->intKeyLeaf = 0; + pPage->xCellSize = cellSizePtrNoPayload; + pPage->xParseCell = btreeParseCellPtrNoPayload; + } pPage->maxLocal = pBt->maxLeaf; pPage->minLocal = pBt->minLeaf; }else if( flagByte==PTF_ZERODATA ){ + /* EVIDENCE-OF: R-27225-53936 A value of 2 means the page is an interior + ** index b-tree page. */ + assert( (PTF_ZERODATA)==2 ); + /* EVIDENCE-OF: R-16571-11615 A value of 10 means the page is a leaf + ** index b-tree page. 
*/ + assert( (PTF_ZERODATA|PTF_LEAF)==10 ); pPage->intKey = 0; - pPage->hasData = 0; + pPage->intKeyLeaf = 0; + pPage->xParseCell = btreeParseCellPtrIndex; pPage->maxLocal = pBt->maxLocal; pPage->minLocal = pBt->minLocal; }else{ + /* EVIDENCE-OF: R-47608-56469 Any other value for the b-tree page type is + ** an error. */ return SQLITE_CORRUPT_BKPT; } + pPage->max1bytePayload = pBt->max1bytePayload; return SQLITE_OK; } /* ** Initialize the auxiliary information for a disk block. @@ -40206,10 +57291,11 @@ ** we failed to detect any corruption. */ static int btreeInitPage(MemPage *pPage){ assert( pPage->pBt!=0 ); + assert( pPage->pBt->db!=0 ); assert( sqlite3_mutex_held(pPage->pBt->mutex) ); assert( pPage->pgno==sqlite3PagerPagenumber(pPage->pDbPage) ); assert( pPage == sqlite3PagerGetExtra(pPage->pDbPage) ); assert( pPage->aData == sqlite3PagerGetData(pPage->pDbPage) ); @@ -40216,34 +57302,49 @@ if( !pPage->isInit ){ u16 pc; /* Address of a freeblock within pPage->aData[] */ u8 hdr; /* Offset to beginning of page header */ u8 *data; /* Equal to pPage->aData */ BtShared *pBt; /* The main btree structure */ - u16 usableSize; /* Amount of usable space on each page */ + int usableSize; /* Amount of usable space on each page */ u16 cellOffset; /* Offset from start of page to first cell pointer */ - u16 nFree; /* Number of unused bytes on the page */ - u16 top; /* First byte of the cell content area */ + int nFree; /* Number of unused bytes on the page */ + int top; /* First byte of the cell content area */ int iCellFirst; /* First allowable cell or freeblock offset */ int iCellLast; /* Last possible cell or freeblock offset */ pBt = pPage->pBt; hdr = pPage->hdrOffset; data = pPage->aData; + /* EVIDENCE-OF: R-28594-02890 The one-byte flag at offset 0 indicating + ** the b-tree page type. */ if( decodeFlags(pPage, data[hdr]) ) return SQLITE_CORRUPT_BKPT; - assert( pBt->pageSize>=512 && pBt->pageSize<=32768 ); - pPage->maskPage = pBt->pageSize - 1; + assert( pBt->pageSize>=512 && pBt->pageSize<=65536 ); + pPage->maskPage = (u16)(pBt->pageSize - 1); pPage->nOverflow = 0; usableSize = pBt->usableSize; - pPage->cellOffset = cellOffset = hdr + 12 - 4*pPage->leaf; - top = get2byte(&data[hdr+5]); + pPage->cellOffset = cellOffset = hdr + 8 + pPage->childPtrSize; + pPage->aDataEnd = &data[usableSize]; + pPage->aCellIdx = &data[cellOffset]; + pPage->aDataOfst = &data[pPage->childPtrSize]; + /* EVIDENCE-OF: R-58015-48175 The two-byte integer at offset 5 designates + ** the start of the cell content area. A zero value for this integer is + ** interpreted as 65536. */ + top = get2byteNotZero(&data[hdr+5]); + /* EVIDENCE-OF: R-37002-32774 The two-byte integer at offset 3 gives the + ** number of cells on the page. */ pPage->nCell = get2byte(&data[hdr+3]); if( pPage->nCell>MX_CELL(pBt) ){ /* To many cells for a single page. The page must be corrupt */ return SQLITE_CORRUPT_BKPT; } testcase( pPage->nCell==MX_CELL(pBt) ); + /* EVIDENCE-OF: R-24089-57979 If a page contains no cells (which is only + ** possible for a root page of a table that contains no rows) then the + ** offset to the cell content area will equal the page size minus the + ** bytes of reserved space. */ + assert( pPage->nCell>0 || top==usableSize || CORRUPT_DB ); /* A malformed database page might cause us to read past the end ** of page when parsing a cell. ** ** The following block of code checks early to see if a cell extends @@ -40250,47 +57351,52 @@ ** past the end of a page boundary and causes SQLITE_CORRUPT to be ** returned if it does. 
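+**
+** (Illustrative aside, not part of the SQLite sources: in this diff the
+** early cell-overflow scan changes from a compile-time option, the old
+** SQLITE_ENABLE_OVERSIZE_CELL_CHECK guard, to a runtime test of the
+** SQLITE_CellSizeCk flag, so it now runs only when PRAGMA
+** cell_size_check is turned on.)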
*/ iCellFirst = cellOffset + 2*pPage->nCell; iCellLast = usableSize - 4; -#if defined(SQLITE_ENABLE_OVERSIZE_CELL_CHECK) - { + if( pBt->db->flags & SQLITE_CellSizeCk ){ int i; /* Index into the cell pointer array */ int sz; /* Size of a cell */ if( !pPage->leaf ) iCellLast--; for(i=0; i<pPage->nCell; i++){ - pc = get2byte(&data[cellOffset+i*2]); + pc = get2byteAligned(&data[cellOffset+i*2]); testcase( pc==iCellFirst ); testcase( pc==iCellLast ); if( pc<iCellFirst || pc>iCellLast ){ return SQLITE_CORRUPT_BKPT; } - sz = cellSizePtr(pPage, &data[pc]); + sz = pPage->xCellSize(pPage, &data[pc]); testcase( pc+sz==usableSize ); if( pc+sz>usableSize ){ return SQLITE_CORRUPT_BKPT; } } if( !pPage->leaf ) iCellLast++; } -#endif - /* Compute the total free space on the page */ + /* Compute the total free space on the page + ** EVIDENCE-OF: R-23588-34450 The two-byte integer at offset 1 gives the + ** start of the first freeblock on the page, or is zero if there are no + ** freeblocks. */ pc = get2byte(&data[hdr+1]); - nFree = data[hdr+7] + top; + nFree = data[hdr+7] + top; /* Init nFree to non-freeblock free space */ while( pc>0 ){ u16 next, size; if( pc<iCellFirst || pc>iCellLast ){ - /* Start of free block is off the page */ + /* EVIDENCE-OF: R-55530-52930 In a well-formed b-tree page, there will + ** always be at least one cell before the first freeblock. + ** + ** Or, the freeblock is off the end of the page + */ return SQLITE_CORRUPT_BKPT; } next = get2byte(&data[pc]); size = get2byte(&data[pc+2]); if( (next>0 && next<=pc+size+3) || pc+size>usableSize ){ /* Free blocks must be in ascending order. And the last byte of - ** the free-block must lie on the database page. */ + ** the free-block must lie on the database page. */ return SQLITE_CORRUPT_BKPT; } nFree = nFree + size; pc = next; } @@ -40324,25 +57430,27 @@ assert( sqlite3PagerPagenumber(pPage->pDbPage)==pPage->pgno ); assert( sqlite3PagerGetExtra(pPage->pDbPage) == (void*)pPage ); assert( sqlite3PagerGetData(pPage->pDbPage) == data ); assert( sqlite3PagerIswriteable(pPage->pDbPage) ); assert( sqlite3_mutex_held(pBt->mutex) ); - if( pBt->secureDelete ){ + if( pBt->btsFlags & BTS_SECURE_DELETE ){ memset(&data[hdr], 0, pBt->usableSize - hdr); } data[hdr] = (char)flags; - first = hdr + 8 + 4*((flags&PTF_LEAF)==0 ?1:0); + first = hdr + ((flags&PTF_LEAF)==0 ? 12 : 8); memset(&data[hdr+1], 0, 4); data[hdr+7] = 0; put2byte(&data[hdr+5], pBt->usableSize); - pPage->nFree = pBt->usableSize - first; + pPage->nFree = (u16)(pBt->usableSize - first); decodeFlags(pPage, flags); - pPage->hdrOffset = hdr; pPage->cellOffset = first; + pPage->aDataEnd = &data[pBt->usableSize]; + pPage->aCellIdx = &data[first]; + pPage->aDataOfst = &data[pPage->childPtrSize]; pPage->nOverflow = 0; - assert( pBt->pageSize>=512 && pBt->pageSize<=32768 ); - pPage->maskPage = pBt->pageSize - 1; + assert( pBt->pageSize>=512 && pBt->pageSize<=65536 ); + pPage->maskPage = (u16)(pBt->pageSize - 1); pPage->nCell = 0; pPage->isInit = 1; } @@ -40350,40 +57458,44 @@ ** Convert a DbPage obtained from the pager into a MemPage used by ** the btree layer. */ static MemPage *btreePageFromDbPage(DbPage *pDbPage, Pgno pgno, BtShared *pBt){ MemPage *pPage = (MemPage*)sqlite3PagerGetExtra(pDbPage); - pPage->aData = sqlite3PagerGetData(pDbPage); - pPage->pDbPage = pDbPage; - pPage->pBt = pBt; - pPage->pgno = pgno; - pPage->hdrOffset = pPage->pgno==1 ? 
100 : 0; + if( pgno!=pPage->pgno ){ + pPage->aData = sqlite3PagerGetData(pDbPage); + pPage->pDbPage = pDbPage; + pPage->pBt = pBt; + pPage->pgno = pgno; + pPage->hdrOffset = pgno==1 ? 100 : 0; + } + assert( pPage->aData==sqlite3PagerGetData(pDbPage) ); return pPage; } /* ** Get a page from the pager. Initialize the MemPage.pBt and -** MemPage.aData elements if needed. +** MemPage.aData elements if needed. See also: btreeGetUnusedPage(). ** -** If the noContent flag is set, it means that we do not care about -** the content of the page at this time. So do not go to the disk +** If the PAGER_GET_NOCONTENT flag is set, it means that we do not care +** about the content of the page at this time. So do not go to the disk ** to fetch the content. Just fill in the content with zeros for now. ** If in the future we call sqlite3PagerWrite() on this page, that ** means we have started to be concerned about content and the disk ** read should occur at that point. */ static int btreeGetPage( BtShared *pBt, /* The btree */ Pgno pgno, /* Number of the page to fetch */ MemPage **ppPage, /* Return the page in this parameter */ - int noContent /* Do not load page content if true */ + int flags /* PAGER_GET_NOCONTENT or PAGER_GET_READONLY */ ){ int rc; DbPage *pDbPage; + assert( flags==0 || flags==PAGER_GET_NOCONTENT || flags==PAGER_GET_READONLY ); assert( sqlite3_mutex_held(pBt->mutex) ); - rc = sqlite3PagerAcquire(pBt->pPager, pgno, (DbPage**)&pDbPage, noContent); + rc = sqlite3PagerGet(pBt->pPager, pgno, (DbPage**)&pDbPage, flags); if( rc ) return rc; *ppPage = btreePageFromDbPage(pDbPage, pgno, pBt); return SQLITE_OK; } @@ -40410,56 +57522,122 @@ return pBt->nPage; } SQLITE_PRIVATE u32 sqlite3BtreeLastPage(Btree *p){ assert( sqlite3BtreeHoldsMutex(p) ); assert( ((p->pBt->nPage)&0x8000000)==0 ); - return (int)btreePagecount(p->pBt); + return btreePagecount(p->pBt); } /* -** Get a page from the pager and initialize it. This routine is just a -** convenience wrapper around separate calls to btreeGetPage() and -** btreeInitPage(). +** Get a page from the pager and initialize it. ** -** If an error occurs, then the value *ppPage is set to is undefined. It +** If pCur!=0 then the page is being fetched as part of a moveToChild() +** call. Do additional sanity checking on the page in this case. +** And if the fetch fails, this routine must decrement pCur->iPage. +** +** The page is fetched as read-write unless pCur is not NULL and is +** a read-only cursor. +** +** If an error occurs, then *ppPage is undefined. It ** may remain unchanged, or it may be set to an invalid value. 
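+**
+** (Illustrative aside, not part of the SQLite sources: the assert()
+** below requires pCur->iPage>0, i.e. the caller has already pushed a
+** new level onto the cursor's page stack before calling in; the
+** getAndInitPage_error path undoes that by decrementing pCur->iPage,
+** so the stack stays balanced when the fetch fails.)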
*/ static int getAndInitPage( - BtShared *pBt, /* The database file */ - Pgno pgno, /* Number of the page to get */ - MemPage **ppPage /* Write the page pointer here */ + BtShared *pBt, /* The database file */ + Pgno pgno, /* Number of the page to get */ + MemPage **ppPage, /* Write the page pointer here */ + BtCursor *pCur, /* Cursor to receive the page, or NULL */ + int bReadOnly /* True for a read-only page */ ){ int rc; + DbPage *pDbPage; assert( sqlite3_mutex_held(pBt->mutex) ); + assert( pCur==0 || ppPage==&pCur->apPage[pCur->iPage] ); + assert( pCur==0 || bReadOnly==pCur->curPagerFlags ); + assert( pCur==0 || pCur->iPage>0 ); - if( pgno<=0 || pgno>btreePagecount(pBt) ){ - return SQLITE_CORRUPT_BKPT; + if( pgno>btreePagecount(pBt) ){ + rc = SQLITE_CORRUPT_BKPT; + goto getAndInitPage_error; } - rc = btreeGetPage(pBt, pgno, ppPage, 0); - if( rc==SQLITE_OK ){ + rc = sqlite3PagerGet(pBt->pPager, pgno, (DbPage**)&pDbPage, bReadOnly); + if( rc ){ + goto getAndInitPage_error; + } + *ppPage = (MemPage*)sqlite3PagerGetExtra(pDbPage); + if( (*ppPage)->isInit==0 ){ + btreePageFromDbPage(pDbPage, pgno, pBt); rc = btreeInitPage(*ppPage); if( rc!=SQLITE_OK ){ releasePage(*ppPage); + goto getAndInitPage_error; } } + assert( (*ppPage)->pgno==pgno ); + assert( (*ppPage)->aData==sqlite3PagerGetData(pDbPage) ); + + /* If obtaining a child page for a cursor, we must verify that the page is + ** compatible with the root page. */ + if( pCur && ((*ppPage)->nCell<1 || (*ppPage)->intKey!=pCur->curIntKey) ){ + rc = SQLITE_CORRUPT_BKPT; + releasePage(*ppPage); + goto getAndInitPage_error; + } + return SQLITE_OK; + +getAndInitPage_error: + if( pCur ) pCur->iPage--; + testcase( pgno==0 ); + assert( pgno!=0 || rc==SQLITE_CORRUPT ); return rc; } /* ** Release a MemPage. This should be called once for each prior ** call to btreeGetPage. */ +static void releasePageNotNull(MemPage *pPage){ + assert( pPage->aData ); + assert( pPage->pBt ); + assert( pPage->pDbPage!=0 ); + assert( sqlite3PagerGetExtra(pPage->pDbPage) == (void*)pPage ); + assert( sqlite3PagerGetData(pPage->pDbPage)==pPage->aData ); + assert( sqlite3_mutex_held(pPage->pBt->mutex) ); + sqlite3PagerUnrefNotNull(pPage->pDbPage); +} static void releasePage(MemPage *pPage){ - if( pPage ){ - assert( pPage->aData ); - assert( pPage->pBt ); - assert( sqlite3PagerGetExtra(pPage->pDbPage) == (void*)pPage ); - assert( sqlite3PagerGetData(pPage->pDbPage)==pPage->aData ); - assert( sqlite3_mutex_held(pPage->pBt->mutex) ); - sqlite3PagerUnref(pPage->pDbPage); - } -} + if( pPage ) releasePageNotNull(pPage); +} + +/* +** Get an unused page. +** +** This works just like btreeGetPage() with the addition: +** +** * If the page is already in use for some other purpose, immediately +** release it and return an SQLITE_CURRUPT error. 
+** * Make sure the isInit flag is clear +*/ +static int btreeGetUnusedPage( + BtShared *pBt, /* The btree */ + Pgno pgno, /* Number of the page to fetch */ + MemPage **ppPage, /* Return the page in this parameter */ + int flags /* PAGER_GET_NOCONTENT or PAGER_GET_READONLY */ +){ + int rc = btreeGetPage(pBt, pgno, ppPage, flags); + if( rc==SQLITE_OK ){ + if( sqlite3PagerPageRefcount((*ppPage)->pDbPage)>1 ){ + releasePage(*ppPage); + *ppPage = 0; + return SQLITE_CORRUPT_BKPT; + } + (*ppPage)->isInit = 0; + }else{ + *ppPage = 0; + } + return rc; +} + /* ** During a rollback, when the pager reloads information into the cache ** so that the cache is restored to its original state at the start of ** the transaction, for each page restored this routine is called. @@ -40498,53 +57676,73 @@ /* ** Open a database file. ** ** zFilename is the name of the database file. If zFilename is NULL -** a new database with a random name is created. This randomly named -** database file will be deleted when sqlite3BtreeClose() is called. +** then an ephemeral database is created. The ephemeral database might +** be exclusively in memory, or it might use a disk-based memory cache. +** Either way, the ephemeral database will be automatically deleted +** when sqlite3BtreeClose() is called. +** ** If zFilename is ":memory:" then an in-memory database is created ** that is automatically destroyed when it is closed. +** +** The "flags" parameter is a bitmask that might contain bits like +** BTREE_OMIT_JOURNAL and/or BTREE_MEMORY. ** ** If the database is already opened in the same database connection ** and we are in shared cache mode, then the open will fail with an ** SQLITE_CONSTRAINT error. We cannot allow two or more BtShared ** objects in the same database connection since doing so will lead ** to problems with locking. */ SQLITE_PRIVATE int sqlite3BtreeOpen( + sqlite3_vfs *pVfs, /* VFS to use for this b-tree */ const char *zFilename, /* Name of the file containing the BTree database */ sqlite3 *db, /* Associated database handle */ Btree **ppBtree, /* Pointer to new Btree object written here */ int flags, /* Options */ int vfsFlags /* Flags passed through to sqlite3_vfs.xOpen() */ ){ - sqlite3_vfs *pVfs; /* The VFS to use for this btree */ BtShared *pBt = 0; /* Shared part of btree structure */ Btree *p; /* Handle to return */ sqlite3_mutex *mutexOpen = 0; /* Prevents a race condition. Ticket #3537 */ int rc = SQLITE_OK; /* Result code from this function */ u8 nReserve; /* Byte of unused space on each page */ unsigned char zDbHeader[100]; /* Database header content */ + + /* True if opening an ephemeral, temporary database */ + const int isTempDb = zFilename==0 || zFilename[0]==0; /* Set the variable isMemdb to true for an in-memory database, or - ** false for a file-based database. This symbol is only required if - ** either of the shared-data or autovacuum features are compiled - ** into the library. + ** false for a file-based database. 
*/ -#if !defined(SQLITE_OMIT_SHARED_CACHE) || !defined(SQLITE_OMIT_AUTOVACUUM) - #ifdef SQLITE_OMIT_MEMORYDB - const int isMemdb = 0; - #else - const int isMemdb = zFilename && !strcmp(zFilename, ":memory:"); - #endif +#ifdef SQLITE_OMIT_MEMORYDB + const int isMemdb = 0; +#else + const int isMemdb = (zFilename && strcmp(zFilename, ":memory:")==0) + || (isTempDb && sqlite3TempInMemory(db)) + || (vfsFlags & SQLITE_OPEN_MEMORY)!=0; #endif assert( db!=0 ); + assert( pVfs!=0 ); assert( sqlite3_mutex_held(db->mutex) ); + assert( (flags&0xff)==flags ); /* flags fit in 8 bits */ - pVfs = db->pVfs; + /* Only a BTREE_SINGLE database can be BTREE_UNORDERED */ + assert( (flags & BTREE_UNORDERED)==0 || (flags & BTREE_SINGLE)!=0 ); + + /* A BTREE_SINGLE database is always a temporary and/or ephemeral */ + assert( (flags & BTREE_SINGLE)==0 || isTempDb ); + + if( isMemdb ){ + flags |= BTREE_MEMORY; + } + if( (vfsFlags & SQLITE_OPEN_MAIN_DB)!=0 && (isMemdb || isTempDb) ){ + vfsFlags = (vfsFlags & ~SQLITE_OPEN_MAIN_DB) | SQLITE_OPEN_TEMP_DB; + } p = sqlite3MallocZero(sizeof(Btree)); if( !p ){ return SQLITE_NOMEM; } p->inTrans = TRANS_NONE; @@ -40557,28 +57755,42 @@ #if !defined(SQLITE_OMIT_SHARED_CACHE) && !defined(SQLITE_OMIT_DISKIO) /* ** If this Btree is a candidate for shared cache, try to find an ** existing BtShared object that we can share with */ - if( isMemdb==0 && zFilename && zFilename[0] ){ + if( isTempDb==0 && (isMemdb==0 || (vfsFlags&SQLITE_OPEN_URI)!=0) ){ if( vfsFlags & SQLITE_OPEN_SHAREDCACHE ){ + int nFilename = sqlite3Strlen30(zFilename)+1; int nFullPathname = pVfs->mxPathname+1; - char *zFullPathname = sqlite3Malloc(nFullPathname); - sqlite3_mutex *mutexShared; + char *zFullPathname = sqlite3Malloc(MAX(nFullPathname,nFilename)); + MUTEX_LOGIC( sqlite3_mutex *mutexShared; ) + p->sharable = 1; if( !zFullPathname ){ sqlite3_free(p); return SQLITE_NOMEM; } - sqlite3OsFullPathname(pVfs, zFilename, nFullPathname, zFullPathname); + if( isMemdb ){ + memcpy(zFullPathname, zFilename, nFilename); + }else{ + rc = sqlite3OsFullPathname(pVfs, zFilename, + nFullPathname, zFullPathname); + if( rc ){ + sqlite3_free(zFullPathname); + sqlite3_free(p); + return rc; + } + } +#if SQLITE_THREADSAFE mutexOpen = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_OPEN); sqlite3_mutex_enter(mutexOpen); mutexShared = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER); sqlite3_mutex_enter(mutexShared); +#endif for(pBt=GLOBAL(BtShared*,sqlite3SharedCacheList); pBt; pBt=pBt->pNext){ assert( pBt->nRef>0 ); - if( 0==strcmp(zFullPathname, sqlite3PagerFilename(pBt->pPager)) + if( 0==strcmp(zFullPathname, sqlite3PagerFilename(pBt->pPager, 0)) && sqlite3PagerVfs(pBt->pPager)==pVfs ){ int iDb; for(iDb=db->nDb-1; iDb>=0; iDb--){ Btree *pExisting = db->aDb[iDb].pBt; if( pExisting && pExisting->pBt==pBt ){ @@ -40613,12 +57825,12 @@ /* ** The following asserts make sure that structures used by the btree are ** the right size. This is to guard against size changes that result ** when compiling on a different architecture. 
*/ - assert( sizeof(i64)==8 || sizeof(i64)==4 ); - assert( sizeof(u64)==8 || sizeof(u64)==4 ); + assert( sizeof(i64)==8 ); + assert( sizeof(u64)==8 ); assert( sizeof(u32)==4 ); assert( sizeof(u16)==2 ); assert( sizeof(Pgno)==4 ); pBt = sqlite3MallocZero( sizeof(*pBt) ); @@ -40627,26 +57839,31 @@ goto btree_open_out; } rc = sqlite3PagerOpen(pVfs, &pBt->pPager, zFilename, EXTRA_SIZE, flags, vfsFlags, pageReinit); if( rc==SQLITE_OK ){ + sqlite3PagerSetMmapLimit(pBt->pPager, db->szMmap); rc = sqlite3PagerReadFileheader(pBt->pPager,sizeof(zDbHeader),zDbHeader); } if( rc!=SQLITE_OK ){ goto btree_open_out; } + pBt->openFlags = (u8)flags; pBt->db = db; sqlite3PagerSetBusyhandler(pBt->pPager, btreeInvokeBusyHandler, pBt); p->pBt = pBt; pBt->pCursor = 0; pBt->pPage1 = 0; - pBt->readOnly = sqlite3PagerIsreadonly(pBt->pPager); + if( sqlite3PagerIsreadonly(pBt->pPager) ) pBt->btsFlags |= BTS_READ_ONLY; #ifdef SQLITE_SECURE_DELETE - pBt->secureDelete = 1; + pBt->btsFlags |= BTS_SECURE_DELETE; #endif - pBt->pageSize = get2byte(&zDbHeader[16]); + /* EVIDENCE-OF: R-51873-39618 The page size for a database file is + ** determined by the 2-byte integer located at an offset of 16 bytes from + ** the beginning of the database file. */ + pBt->pageSize = (zDbHeader[16]<<8) | (zDbHeader[17]<<16); if( pBt->pageSize<512 || pBt->pageSize>SQLITE_MAX_PAGE_SIZE || ((pBt->pageSize-1)&pBt->pageSize)!=0 ){ pBt->pageSize = 0; #ifndef SQLITE_OMIT_AUTOVACUUM /* If the magic name ":memory:" will create an in-memory database, then @@ -40660,12 +57877,15 @@ pBt->incrVacuum = (SQLITE_DEFAULT_AUTOVACUUM==2 ? 1 : 0); } #endif nReserve = 0; }else{ + /* EVIDENCE-OF: R-37497-42412 The size of the reserved region is + ** determined by the one-byte unsigned integer found at an offset of 20 + ** into the database file header. */ nReserve = zDbHeader[20]; - pBt->pageSizeFixed = 1; + pBt->btsFlags |= BTS_PAGESIZE_FIXED; #ifndef SQLITE_OMIT_AUTOVACUUM pBt->autoVacuum = (get4byte(&zDbHeader[36 + 4*4])?1:0); pBt->incrVacuum = (get4byte(&zDbHeader[36 + 7*4])?1:0); #endif } @@ -40676,18 +57896,17 @@ #if !defined(SQLITE_OMIT_SHARED_CACHE) && !defined(SQLITE_OMIT_DISKIO) /* Add the new BtShared object to the linked list sharable BtShareds. */ if( p->sharable ){ - sqlite3_mutex *mutexShared; + MUTEX_LOGIC( sqlite3_mutex *mutexShared; ) pBt->nRef = 1; - mutexShared = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER); + MUTEX_LOGIC( mutexShared = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER);) if( SQLITE_THREADSAFE && sqlite3GlobalConfig.bCoreMutex ){ pBt->mutex = sqlite3MutexAlloc(SQLITE_MUTEX_FAST); if( pBt->mutex==0 ){ rc = SQLITE_NOMEM; - db->mallocFailed = 0; goto btree_open_out; } } sqlite3_mutex_enter(mutexShared); pBt->pNext = GLOBAL(BtShared*,sqlite3SharedCacheList); @@ -40736,10 +57955,18 @@ sqlite3PagerClose(pBt->pPager); } sqlite3_free(pBt); sqlite3_free(p); *ppBtree = 0; + }else{ + /* If the B-Tree was successfully opened, set the pager-cache size to the + ** default value. Except, when opening on an existing shared pager-cache, + ** do not change the pager-cache size. + */ + if( sqlite3BtreeSchema(p, 0, 0)==0 ){ + sqlite3PagerSetCachesize(p->pBt->pPager, SQLITE_DEFAULT_CACHE_SIZE); + } } if( mutexOpen ){ assert( sqlite3_mutex_held(mutexOpen) ); sqlite3_mutex_leave(mutexOpen); } @@ -40752,16 +57979,16 @@ ** true if the BtShared.nRef counter reaches zero and return ** false if it is still positive. 
*/ static int removeFromSharingList(BtShared *pBt){ #ifndef SQLITE_OMIT_SHARED_CACHE - sqlite3_mutex *pMaster; + MUTEX_LOGIC( sqlite3_mutex *pMaster; ) BtShared *pList; int removed = 0; assert( sqlite3_mutex_notheld(pBt->mutex) ); - pMaster = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER); + MUTEX_LOGIC( pMaster = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER); ) sqlite3_mutex_enter(pMaster); pBt->nRef--; if( pBt->nRef<=0 ){ if( GLOBAL(BtShared*,sqlite3SharedCacheList)==pBt ){ GLOBAL(BtShared*,sqlite3SharedCacheList) = pBt->pNext; @@ -40786,24 +58013,48 @@ #endif } /* ** Make sure pBt->pTmpSpace points to an allocation of -** MX_CELL_SIZE(pBt) bytes. +** MX_CELL_SIZE(pBt) bytes with a 4-byte prefix for a left-child +** pointer. */ static void allocateTempSpace(BtShared *pBt){ if( !pBt->pTmpSpace ){ pBt->pTmpSpace = sqlite3PageMalloc( pBt->pageSize ); + + /* One of the uses of pBt->pTmpSpace is to format cells before + ** inserting them into a leaf page (function fillInCell()). If + ** a cell is less than 4 bytes in size, it is rounded up to 4 bytes + ** by the various routines that manipulate binary cells. Which + ** can mean that fillInCell() only initializes the first 2 or 3 + ** bytes of pTmpSpace, but that the first 4 bytes are copied from + ** it into a database page. This is not actually a problem, but it + ** does cause a valgrind error when the 1 or 2 bytes of unitialized + ** data is passed to system call write(). So to avoid this error, + ** zero the first 4 bytes of temp space here. + ** + ** Also: Provide four bytes of initialized space before the + ** beginning of pTmpSpace as an area available to prepend the + ** left-child pointer to the beginning of a cell. + */ + if( pBt->pTmpSpace ){ + memset(pBt->pTmpSpace, 0, 8); + pBt->pTmpSpace += 4; + } } } /* ** Free the pBt->pTmpSpace allocation */ static void freeTempSpace(BtShared *pBt){ - sqlite3PageFree( pBt->pTmpSpace); - pBt->pTmpSpace = 0; + if( pBt->pTmpSpace ){ + pBt->pTmpSpace -= 4; + sqlite3PageFree(pBt->pTmpSpace); + pBt->pTmpSpace = 0; + } } /* ** Close an open database and invalidate all cursors. */ @@ -40825,11 +58076,11 @@ /* Rollback any active transaction and free the handle structure. ** The call to sqlite3BtreeRollback() drops any table-locks held by ** this handle. */ - sqlite3BtreeRollback(p); + sqlite3BtreeRollback(p, SQLITE_OK, 0); sqlite3BtreeLeave(p); /* If there are still other outstanding references to the shared-btree ** structure, return now. The remainder of this procedure cleans ** up the shared-btree. @@ -40844,11 +58095,11 @@ assert( !pBt->pCursor ); sqlite3PagerClose(pBt->pPager); if( pBt->xFreeSchema && pBt->pSchema ){ pBt->xFreeSchema(pBt->pSchema); } - sqlite3_free(pBt->pSchema); + sqlite3DbFree(0, pBt->pSchema); freeTempSpace(pBt); sqlite3_free(pBt); } #ifndef SQLITE_OMIT_SHARED_CACHE @@ -40861,23 +58112,15 @@ sqlite3_free(p); return SQLITE_OK; } /* -** Change the limit on the number of pages allowed in the cache. -** -** The maximum number of cache pages is set to the absolute -** value of mxPage. If mxPage is negative, the pager will -** operate asynchronously - it will not stop to do fsync()s -** to insure data is written to the disk surface before -** continuing. Transactions still work if synchronous is off, -** and the database cannot be corrupted if this program -** crashes. But if the operating system crashes or there is -** an abrupt power failure when synchronous is off, the database -** could be left in an inconsistent and unrecoverable state. 
-** Synchronous is on by default so database corruption is not -** normally a worry. +** Change the "soft" limit on the number of pages in the cache. +** Unused and unmodified pages will be recycled when the number of +** pages in the cache exceeds this soft limit. But the size of the +** cache is allowed to grow larger than this limit if it contains +** dirty pages or pages still in active use. */ SQLITE_PRIVATE int sqlite3BtreeSetCacheSize(Btree *p, int mxPage){ BtShared *pBt = p->pBt; assert( sqlite3_mutex_held(p->db->mutex) ); sqlite3BtreeEnter(p); @@ -40884,24 +58127,62 @@ sqlite3PagerSetCachesize(pBt->pPager, mxPage); sqlite3BtreeLeave(p); return SQLITE_OK; } +/* +** Change the "spill" limit on the number of pages in the cache. +** If the number of pages exceeds this limit during a write transaction, +** the pager might attempt to "spill" pages to the journal early in +** order to free up memory. +** +** The value returned is the current spill size. If zero is passed +** as an argument, no changes are made to the spill size setting, so +** using mxPage of 0 is a way to query the current spill size. +*/ +SQLITE_PRIVATE int sqlite3BtreeSetSpillSize(Btree *p, int mxPage){ + BtShared *pBt = p->pBt; + int res; + assert( sqlite3_mutex_held(p->db->mutex) ); + sqlite3BtreeEnter(p); + res = sqlite3PagerSetSpillsize(pBt->pPager, mxPage); + sqlite3BtreeLeave(p); + return res; +} + +#if SQLITE_MAX_MMAP_SIZE>0 +/* +** Change the limit on the amount of the database file that may be +** memory mapped. +*/ +SQLITE_PRIVATE int sqlite3BtreeSetMmapLimit(Btree *p, sqlite3_int64 szMmap){ + BtShared *pBt = p->pBt; + assert( sqlite3_mutex_held(p->db->mutex) ); + sqlite3BtreeEnter(p); + sqlite3PagerSetMmapLimit(pBt->pPager, szMmap); + sqlite3BtreeLeave(p); + return SQLITE_OK; +} +#endif /* SQLITE_MAX_MMAP_SIZE>0 */ + /* ** Change the way data is synced to disk in order to increase or decrease ** how well the database resists damage due to OS crashes and power ** failures. Level 1 is the same as asynchronous (no syncs() occur and ** there is a high probability of damage) Level 2 is the default. There ** is a very low but non-zero probability of damage. Level 3 reduces the ** probability of damage to near zero but with a write performance reduction. */ #ifndef SQLITE_OMIT_PAGER_PRAGMAS -SQLITE_PRIVATE int sqlite3BtreeSetSafetyLevel(Btree *p, int level, int fullSync){ +SQLITE_PRIVATE int sqlite3BtreeSetPagerFlags( + Btree *p, /* The btree to set the safety level on */ + unsigned pgFlags /* Various PAGER_* flags */ +){ BtShared *pBt = p->pBt; assert( sqlite3_mutex_held(p->db->mutex) ); sqlite3BtreeEnter(p); - sqlite3PagerSetSafetyLevel(pBt->pPager, level, fullSync); + sqlite3PagerSetFlags(pBt->pPager, pgFlags); sqlite3BtreeLeave(p); return SQLITE_OK; } #endif @@ -40918,11 +58199,10 @@ rc = sqlite3PagerNosync(pBt->pPager); sqlite3BtreeLeave(p); return rc; } -#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) || !defined(SQLITE_OMIT_VACUUM) /* ** Change the default pages size and the number of reserved bytes per page. ** Or, if the page size has already been fixed, return SQLITE_READONLY ** without changing anything. ** @@ -40936,19 +58216,22 @@ ** at the beginning of a page. ** ** If parameter nReserve is less than zero, then the number of reserved ** bytes per page is left unchanged. ** -** If the iFix!=0 then the pageSizeFixed flag is set so that the page size +** If the iFix!=0 then the BTS_PAGESIZE_FIXED flag is set so that the page size ** and autovacuum mode can no longer be changed. 
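The setter functions in this area (cache size, spill size, mmap limit, pager flags, page size) are normally reached through PRAGMA statements rather than called directly. A hedged sketch of the user-visible counterparts; the specific values are arbitrary examples, and page_size only takes effect before the database is created or after a VACUUM rebuild:

    #include <sqlite3.h>

    /* Illustrative only: PRAGMAs that funnel into sqlite3BtreeSetCacheSize(),
    ** sqlite3BtreeSetMmapLimit() and sqlite3BtreeSetPageSize() for "main". */
    static void tune_btree(sqlite3 *db){
      sqlite3_exec(db, "PRAGMA cache_size=-2000;",    0, 0, 0); /* ~2000 KiB of cache */
      sqlite3_exec(db, "PRAGMA mmap_size=268435456;", 0, 0, 0); /* 256 MiB mmap limit */
      sqlite3_exec(db, "PRAGMA page_size=8192;",      0, 0, 0);
      sqlite3_exec(db, "VACUUM;",                     0, 0, 0); /* rebuild to apply it */
    }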
*/ SQLITE_PRIVATE int sqlite3BtreeSetPageSize(Btree *p, int pageSize, int nReserve, int iFix){ int rc = SQLITE_OK; BtShared *pBt = p->pBt; assert( nReserve>=-1 && nReserve<=255 ); sqlite3BtreeEnter(p); - if( pBt->pageSizeFixed ){ +#if SQLITE_HAS_CODEC + if( nReserve>pBt->optimalReserve ) pBt->optimalReserve = (u8)nReserve; +#endif + if( pBt->btsFlags & BTS_PAGESIZE_FIXED ){ sqlite3BtreeLeave(p); return SQLITE_READONLY; } if( nReserve<0 ){ nReserve = pBt->pageSize - pBt->usableSize; @@ -40955,17 +58238,17 @@ } assert( nReserve>=0 && nReserve<=255 ); if( pageSize>=512 && pageSize<=SQLITE_MAX_PAGE_SIZE && ((pageSize-1)&pageSize)==0 ){ assert( (pageSize & 7)==0 ); - assert( !pBt->pPage1 && !pBt->pCursor ); - pBt->pageSize = (u16)pageSize; + assert( !pBt->pCursor ); + pBt->pageSize = (u32)pageSize; freeTempSpace(pBt); } rc = sqlite3PagerSetPagesize(pBt->pPager, &pBt->pageSize, nReserve); pBt->usableSize = pBt->pageSize - (u16)nReserve; - if( iFix ) pBt->pageSizeFixed = 1; + if( iFix ) pBt->btsFlags |= BTS_PAGESIZE_FIXED; sqlite3BtreeLeave(p); return rc; } /* @@ -40972,23 +58255,49 @@ ** Return the currently defined page size */ SQLITE_PRIVATE int sqlite3BtreeGetPageSize(Btree *p){ return p->pBt->pageSize; } + +/* +** This function is similar to sqlite3BtreeGetReserve(), except that it +** may only be called if it is guaranteed that the b-tree mutex is already +** held. +** +** This is useful in one special case in the backup API code where it is +** known that the shared b-tree mutex is held, but the mutex on the +** database handle that owns *p is not. In this case if sqlite3BtreeEnter() +** were to be called, it might collide with some other operation on the +** database handle that owns *p, causing undefined behavior. +*/ +SQLITE_PRIVATE int sqlite3BtreeGetReserveNoMutex(Btree *p){ + int n; + assert( sqlite3_mutex_held(p->pBt->mutex) ); + n = p->pBt->pageSize - p->pBt->usableSize; + return n; +} /* ** Return the number of bytes of space at the end of every page that ** are intentually left unused. This is the "reserved" space that is ** sometimes used by extensions. +** +** If SQLITE_HAS_MUTEX is defined then the number returned is the +** greater of the current reserved space and the maximum requested +** reserve space. */ -SQLITE_PRIVATE int sqlite3BtreeGetReserve(Btree *p){ +SQLITE_PRIVATE int sqlite3BtreeGetOptimalReserve(Btree *p){ int n; sqlite3BtreeEnter(p); - n = p->pBt->pageSize - p->pBt->usableSize; + n = sqlite3BtreeGetReserveNoMutex(p); +#ifdef SQLITE_HAS_CODEC + if( n<p->pBt->optimalReserve ) n = p->pBt->optimalReserve; +#endif sqlite3BtreeLeave(p); return n; } + /* ** Set the maximum page count for a database if mxPage is positive. ** No changes are made if mxPage is 0 or negative. ** Regardless of the value of mxPage, return the maximum page count. @@ -41000,26 +58309,26 @@ sqlite3BtreeLeave(p); return n; } /* -** Set the secureDelete flag if newFlag is 0 or 1. If newFlag is -1, -** then make no changes. Always return the value of the secureDelete +** Set the BTS_SECURE_DELETE flag if newFlag is 0 or 1. If newFlag is -1, +** then make no changes. Always return the value of the BTS_SECURE_DELETE ** setting after the change. */ SQLITE_PRIVATE int sqlite3BtreeSecureDelete(Btree *p, int newFlag){ int b; if( p==0 ) return 0; sqlite3BtreeEnter(p); if( newFlag>=0 ){ - p->pBt->secureDelete = (newFlag!=0) ? 
1 : 0; + p->pBt->btsFlags &= ~BTS_SECURE_DELETE; + if( newFlag ) p->pBt->btsFlags |= BTS_SECURE_DELETE; } - b = p->pBt->secureDelete; + b = (p->pBt->btsFlags & BTS_SECURE_DELETE)!=0; sqlite3BtreeLeave(p); return b; } -#endif /* !defined(SQLITE_OMIT_PAGER_PRAGMAS) || !defined(SQLITE_OMIT_VACUUM) */ /* ** Change the 'auto-vacuum' property of the database. If the 'autoVacuum' ** parameter is non-zero, then auto-vacuum mode is enabled. If zero, it ** is disabled. The default value for the auto-vacuum property is @@ -41032,11 +58341,11 @@ BtShared *pBt = p->pBt; int rc = SQLITE_OK; u8 av = (u8)autoVacuum; sqlite3BtreeEnter(p); - if( pBt->pageSizeFixed && (av ?1:0)!=pBt->autoVacuum ){ + if( (pBt->btsFlags & BTS_PAGESIZE_FIXED)!=0 && (av ?1:0)!=pBt->autoVacuum ){ rc = SQLITE_READONLY; }else{ pBt->autoVacuum = av ?1:0; pBt->incrVacuum = av==2 ?1:0; } @@ -41091,71 +58400,119 @@ /* Do some checking to help insure the file we opened really is ** a valid database file. */ nPage = nPageHeader = get4byte(28+(u8*)pPage1->aData); - if( (rc = sqlite3PagerPagecount(pBt->pPager, &nPageFile))!=SQLITE_OK ){; - goto page1_init_failed; - } - if( nPage==0 ){ + sqlite3PagerPagecount(pBt->pPager, &nPageFile); + if( nPage==0 || memcmp(24+(u8*)pPage1->aData, 92+(u8*)pPage1->aData,4)!=0 ){ nPage = nPageFile; } if( nPage>0 ){ - int pageSize; - int usableSize; + u32 pageSize; + u32 usableSize; u8 *page1 = pPage1->aData; rc = SQLITE_NOTADB; + /* EVIDENCE-OF: R-43737-39999 Every valid SQLite database file begins + ** with the following 16 bytes (in hex): 53 51 4c 69 74 65 20 66 6f 72 6d + ** 61 74 20 33 00. */ if( memcmp(page1, zMagicHeader, 16)!=0 ){ goto page1_init_failed; } + +#ifdef SQLITE_OMIT_WAL if( page1[18]>1 ){ - pBt->readOnly = 1; + pBt->btsFlags |= BTS_READ_ONLY; } if( page1[19]>1 ){ goto page1_init_failed; } +#else + if( page1[18]>2 ){ + pBt->btsFlags |= BTS_READ_ONLY; + } + if( page1[19]>2 ){ + goto page1_init_failed; + } - /* The maximum embedded fraction must be exactly 25%. And the minimum - ** embedded fraction must be 12.5% for both leaf-data and non-leaf-data. + /* If the write version is set to 2, this database should be accessed + ** in WAL mode. If the log is not already open, open it now. Then + ** return SQLITE_OK and return without populating BtShared.pPage1. + ** The caller detects this and calls this function again. This is + ** required as the version of page 1 currently in the page1 buffer + ** may not be the latest version - there may be a newer one in the log + ** file. + */ + if( page1[19]==2 && (pBt->btsFlags & BTS_NO_WAL)==0 ){ + int isOpen = 0; + rc = sqlite3PagerOpenWal(pBt->pPager, &isOpen); + if( rc!=SQLITE_OK ){ + goto page1_init_failed; + }else if( isOpen==0 ){ + releasePage(pPage1); + return SQLITE_OK; + } + rc = SQLITE_NOTADB; + } +#endif + + /* EVIDENCE-OF: R-15465-20813 The maximum and minimum embedded payload + ** fractions and the leaf payload fraction values must be 64, 32, and 32. + ** ** The original design allowed these amounts to vary, but as of ** version 3.6.0, we require them to be fixed. */ if( memcmp(&page1[21], "\100\040\040",3)!=0 ){ goto page1_init_failed; } - pageSize = get2byte(&page1[16]); - if( ((pageSize-1)&pageSize)!=0 || pageSize<512 || - (SQLITE_MAX_PAGE_SIZE<32768 && pageSize>SQLITE_MAX_PAGE_SIZE) + /* EVIDENCE-OF: R-51873-39618 The page size for a database file is + ** determined by the 2-byte integer located at an offset of 16 bytes from + ** the beginning of the database file. 
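The page-size field decoded just below (and earlier from zDbHeader in sqlite3BtreeOpen()) is stored big-endian at offset 16, with the special case that a page size of 65536 is recorded as the two bytes 0x00 0x01; the expression (page1[16]<<8)|(page1[17]<<16) covers both cases at once. Here is a standalone sketch of that decoding together with the validity rules quoted in the surrounding comments; it is a re-derivation for illustration, not the SQLite routine itself:

    #include <stdint.h>

    /* Sketch: decode and validate the page size and reserved-space byte from
    ** a raw database header.  Returns 0 if acceptable, -1 if the header
    ** would be rejected.  hdr must point at the first 100 bytes of the file. */
    static int decode_page_size(const uint8_t *hdr,
                                uint32_t *pPageSize, uint32_t *pUsable){
      /* 0x10 0x00 -> 4096;  0x00 0x01 -> 65536 */
      uint32_t pageSize = ((uint32_t)hdr[16]<<8) | ((uint32_t)hdr[17]<<16);
      if( ((pageSize-1)&pageSize)!=0 ) return -1;  /* power of two required     */
      if( pageSize<512 || pageSize>65536 ) return -1;
      uint32_t usable = pageSize - hdr[20];        /* reserved bytes, offset 20 */
      if( usable<480 ) return -1;                  /* reserved region too large */
      *pPageSize = pageSize;
      *pUsable = usable;
      return 0;
    }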
*/ + pageSize = (page1[16]<<8) | (page1[17]<<16); + /* EVIDENCE-OF: R-25008-21688 The size of a page is a power of two + ** between 512 and 65536 inclusive. */ + if( ((pageSize-1)&pageSize)!=0 + || pageSize>SQLITE_MAX_PAGE_SIZE + || pageSize<=256 ){ goto page1_init_failed; } assert( (pageSize & 7)==0 ); + /* EVIDENCE-OF: R-59310-51205 The "reserved space" size in the 1-byte + ** integer at offset 20 is the number of bytes of space at the end of + ** each page to reserve for extensions. + ** + ** EVIDENCE-OF: R-37497-42412 The size of the reserved region is + ** determined by the one-byte unsigned integer found at an offset of 20 + ** into the database file header. */ usableSize = pageSize - page1[20]; - if( pageSize!=pBt->pageSize ){ + if( (u32)pageSize!=pBt->pageSize ){ /* After reading the first page of the database assuming a page size ** of BtShared.pageSize, we have discovered that the page-size is ** actually pageSize. Unlock the database, leave pBt->pPage1 at ** zero and return SQLITE_OK. The caller will call this function ** again with the correct page-size. */ releasePage(pPage1); - pBt->usableSize = (u16)usableSize; - pBt->pageSize = (u16)pageSize; + pBt->usableSize = usableSize; + pBt->pageSize = pageSize; freeTempSpace(pBt); rc = sqlite3PagerSetPagesize(pBt->pPager, &pBt->pageSize, pageSize-usableSize); return rc; } - if( nPageHeader>nPageFile ){ + if( (pBt->db->flags & SQLITE_RecoveryMode)==0 && nPage>nPageFile ){ rc = SQLITE_CORRUPT_BKPT; goto page1_init_failed; } + /* EVIDENCE-OF: R-28312-64704 However, the usable size is not allowed to + ** be less than 480. In other words, if the page size is 512, then the + ** reserved space size cannot exceed 32. */ if( usableSize<480 ){ goto page1_init_failed; } - pBt->pageSize = (u16)pageSize; - pBt->usableSize = (u16)usableSize; + pBt->pageSize = pageSize; + pBt->usableSize = usableSize; #ifndef SQLITE_OMIT_AUTOVACUUM pBt->autoVacuum = (get4byte(&page1[36 + 4*4])?1:0); pBt->incrVacuum = (get4byte(&page1[36 + 7*4])?1:0); #endif } @@ -41167,18 +58524,23 @@ ** 2-byte pointer to the cell ** 4-byte child pointer ** 9-byte nKey value ** 4-byte nData value ** 4-byte overflow page pointer - ** So a cell consists of a 2-byte poiner, a header which is as much as + ** So a cell consists of a 2-byte pointer, a header which is as much as ** 17 bytes long, 0 to N bytes of payload, and an optional 4 byte overflow ** page pointer. */ - pBt->maxLocal = (pBt->usableSize-12)*64/255 - 23; - pBt->minLocal = (pBt->usableSize-12)*32/255 - 23; - pBt->maxLeaf = pBt->usableSize - 35; - pBt->minLeaf = (pBt->usableSize-12)*32/255 - 23; + pBt->maxLocal = (u16)((pBt->usableSize-12)*64/255 - 23); + pBt->minLocal = (u16)((pBt->usableSize-12)*32/255 - 23); + pBt->maxLeaf = (u16)(pBt->usableSize - 35); + pBt->minLeaf = (u16)((pBt->usableSize-12)*32/255 - 23); + if( pBt->maxLocal>127 ){ + pBt->max1bytePayload = 127; + }else{ + pBt->max1bytePayload = (u8)pBt->maxLocal; + } assert( pBt->maxLeaf + 23 <= MX_CELL_SIZE(pBt) ); pBt->pPage1 = pPage1; pBt->nPage = nPage; return SQLITE_OK; @@ -41186,10 +58548,34 @@ releasePage(pPage1); pBt->pPage1 = 0; return rc; } +#ifndef NDEBUG +/* +** Return the number of cursors open on pBt. This is for use +** in assert() expressions, so it is only compiled if NDEBUG is not +** defined. +** +** Only write cursors are counted if wrOnly is true. If wrOnly is +** false then all cursors are counted. +** +** For the purposes of this routine, a cursor is any cursor that +** is capable of reading or writing to the database. 
Cursors that +** have been tripped into the CURSOR_FAULT state are not counted. +*/ +static int countValidCursors(BtShared *pBt, int wrOnly){ + BtCursor *pCur; + int r = 0; + for(pCur=pBt->pCursor; pCur; pCur=pCur->pNext){ + if( (wrOnly==0 || (pCur->curFlags & BTCF_WriteFlag)!=0) + && pCur->eState!=CURSOR_FAULT ) r++; + } + return r; +} +#endif + /* ** If there are no outstanding cursors and we are not in the middle ** of a transaction but there is a read lock on the database, then ** this routine unrefs the first page of the database file which ** has the effect of releasing the read lock. @@ -41196,17 +58582,17 @@ ** ** If there is a transaction in progress, this routine is a no-op. */ static void unlockBtreeIfUnused(BtShared *pBt){ assert( sqlite3_mutex_held(pBt->mutex) ); - assert( pBt->pCursor==0 || pBt->inTransaction>TRANS_NONE ); + assert( countValidCursors(pBt,0)==0 || pBt->inTransaction>TRANS_NONE ); if( pBt->inTransaction==TRANS_NONE && pBt->pPage1!=0 ){ - assert( pBt->pPage1->aData ); + MemPage *pPage1 = pBt->pPage1; + assert( pPage1->aData ); assert( sqlite3PagerRefcount(pBt->pPager)==1 ); - assert( pBt->pPage1->aData ); - releasePage(pBt->pPage1); pBt->pPage1 = 0; + releasePageNotNull(pPage1); } } /* ** If pBt points to an empty file then convert that empty file @@ -41227,21 +58613,22 @@ data = pP1->aData; rc = sqlite3PagerWrite(pP1->pDbPage); if( rc ) return rc; memcpy(data, zMagicHeader, sizeof(zMagicHeader)); assert( sizeof(zMagicHeader)==16 ); - put2byte(&data[16], pBt->pageSize); + data[16] = (u8)((pBt->pageSize>>8)&0xff); + data[17] = (u8)((pBt->pageSize>>16)&0xff); data[18] = 1; data[19] = 1; assert( pBt->usableSize<=pBt->pageSize && pBt->usableSize+255>=pBt->pageSize); data[20] = (u8)(pBt->pageSize - pBt->usableSize); data[21] = 64; data[22] = 32; data[23] = 32; memset(&data[24], 0, 100-24); zeroPage(pP1, PTF_INTKEY|PTF_LEAF|PTF_LEAFDATA ); - pBt->pageSizeFixed = 1; + pBt->btsFlags |= BTS_PAGESIZE_FIXED; #ifndef SQLITE_OMIT_AUTOVACUUM assert( pBt->autoVacuum==1 || pBt->autoVacuum==0 ); assert( pBt->incrVacuum==1 || pBt->incrVacuum==0 ); put4byte(&data[36 + 4*4], pBt->autoVacuum); put4byte(&data[36 + 7*4], pBt->incrVacuum); @@ -41248,10 +58635,24 @@ #endif pBt->nPage = 1; data[31] = 1; return SQLITE_OK; } + +/* +** Initialize the first page of the database file (creating a database +** consisting of a single page and no schema objects). Return SQLITE_OK +** if successful, or an SQLite error code otherwise. +*/ +SQLITE_PRIVATE int sqlite3BtreeNewDb(Btree *p){ + int rc; + sqlite3BtreeEnter(p); + p->pBt->nPage = 0; + rc = newDatabase(p->pBt); + sqlite3BtreeLeave(p); + return rc; +} /* ** Attempt to start a new transaction. A write-transaction ** is started if the second argument is nonzero, otherwise a read- ** transaction. If the second argument is 2 or more and exclusive @@ -41285,11 +58686,10 @@ ** no progress. By returning SQLITE_BUSY and not invoking the busy callback ** when A already has a read lock, we encourage A to give up and let B ** proceed. */ SQLITE_PRIVATE int sqlite3BtreeBeginTrans(Btree *p, int wrflag){ - sqlite3 *pBlock = 0; BtShared *pBt = p->pBt; int rc = SQLITE_OK; sqlite3BtreeEnter(p); btreeIntegrity(p); @@ -41299,47 +58699,54 @@ ** is requested, this is a no-op. 
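newDatabase() above writes the fixed part of the 100-byte file header for a brand new database: the magic string, the page-size bytes at offset 16, the read/write format versions, the reserved-space byte, the fixed 64/32/32 payload fractions, and the page count. As a worked illustration, here is a small sketch for a 4096-byte page with no reserved space; it is not the SQLite code and omits the auto-vacuum fields at offsets 52 and 64:

    #include <stdint.h>
    #include <string.h>

    /* Sketch: the core header fields that newDatabase() fills in. */
    static void sketch_new_header(uint8_t *data /* at least 100 bytes */){
      memset(data, 0, 100);
      memcpy(data, "SQLite format 3", 16);          /* magic string, incl. NUL       */
      data[16] = 4096>>8;  data[17] = 0;            /* page size (65536 -> 0x00 0x01) */
      data[18] = 1;  data[19] = 1;                  /* write / read format versions  */
      data[20] = 0;                                 /* reserved bytes per page       */
      data[21] = 64; data[22] = 32; data[23] = 32;  /* payload fractions, fixed      */
      data[31] = 1;                                 /* db size in pages (big-endian) */
    }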
*/ if( p->inTrans==TRANS_WRITE || (p->inTrans==TRANS_READ && !wrflag) ){ goto trans_begun; } + assert( pBt->inTransaction==TRANS_WRITE || IfNotOmitAV(pBt->bDoTruncate)==0 ); /* Write transactions are not possible on a read-only database */ - if( pBt->readOnly && wrflag ){ + if( (pBt->btsFlags & BTS_READ_ONLY)!=0 && wrflag ){ rc = SQLITE_READONLY; goto trans_begun; } #ifndef SQLITE_OMIT_SHARED_CACHE - /* If another database handle has already opened a write transaction - ** on this shared-btree structure and a second write transaction is - ** requested, return SQLITE_LOCKED. - */ - if( (wrflag && pBt->inTransaction==TRANS_WRITE) || pBt->isPending ){ - pBlock = pBt->pWriter->db; - }else if( wrflag>1 ){ - BtLock *pIter; - for(pIter=pBt->pLock; pIter; pIter=pIter->pNext){ - if( pIter->pBtree!=p ){ - pBlock = pIter->pBtree->db; - break; - } - } - } - if( pBlock ){ - sqlite3ConnectionBlocked(p->db, pBlock); - rc = SQLITE_LOCKED_SHAREDCACHE; - goto trans_begun; + { + sqlite3 *pBlock = 0; + /* If another database handle has already opened a write transaction + ** on this shared-btree structure and a second write transaction is + ** requested, return SQLITE_LOCKED. + */ + if( (wrflag && pBt->inTransaction==TRANS_WRITE) + || (pBt->btsFlags & BTS_PENDING)!=0 + ){ + pBlock = pBt->pWriter->db; + }else if( wrflag>1 ){ + BtLock *pIter; + for(pIter=pBt->pLock; pIter; pIter=pIter->pNext){ + if( pIter->pBtree!=p ){ + pBlock = pIter->pBtree->db; + break; + } + } + } + if( pBlock ){ + sqlite3ConnectionBlocked(p->db, pBlock); + rc = SQLITE_LOCKED_SHAREDCACHE; + goto trans_begun; + } } #endif /* Any read-only or read-write transaction implies a read-lock on ** page 1. So if some other shared-cache client already has a write-lock ** on page 1, the transaction cannot be opened. */ rc = querySharedCacheTableLock(p, MASTER_ROOT, READ_LOCK); if( SQLITE_OK!=rc ) goto trans_begun; - pBt->initiallyEmpty = pBt->nPage==0; + pBt->btsFlags &= ~BTS_INITIALLY_EMPTY; + if( pBt->nPage==0 ) pBt->btsFlags |= BTS_INITIALLY_EMPTY; do { /* Call lockBtree() until either pBt->pPage1 is populated or ** lockBtree() returns something other than SQLITE_OK. lockBtree() ** may return SQLITE_OK but leave pBt->pPage1 set to 0 if after ** reading page 1 it discovers that the page-size of the database @@ -41347,11 +58754,11 @@ ** pBt->pageSize to the page-size of the file on disk. 
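The pBlock logic above is where a second writer on the same shared cache is turned away with SQLITE_LOCKED_SHAREDCACHE. Roughly how that surfaces through the public API, as a sketch under the assumption that both handles share one cache (the exact error code text may vary):

    #include <sqlite3.h>
    #include <stdio.h>

    /* Illustrative only: once pA holds the write transaction on the shared
    ** BtShared, pB's attempt to start one is refused with SQLITE_LOCKED
    ** rather than SQLITE_BUSY. */
    static void write_lock_conflict(void){
      sqlite3 *pA = 0, *pB = 0;
      sqlite3_enable_shared_cache(1);
      sqlite3_open("shared.db", &pA);
      sqlite3_open("shared.db", &pB);
      sqlite3_exec(pA, "BEGIN IMMEDIATE;", 0, 0, 0);
      int rc = sqlite3_exec(pB, "BEGIN IMMEDIATE;", 0, 0, 0);
      printf("second writer: rc=%d (%s)\n", rc, sqlite3_errmsg(pB));
      sqlite3_exec(pA, "COMMIT;", 0, 0, 0);
      sqlite3_close(pB);
      sqlite3_close(pA);
    }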
*/ while( pBt->pPage1==0 && SQLITE_OK==(rc = lockBtree(pBt)) ); if( rc==SQLITE_OK && wrflag ){ - if( pBt->readOnly ){ + if( (pBt->btsFlags & BTS_READ_ONLY)!=0 ){ rc = SQLITE_READONLY; }else{ rc = sqlite3PagerBegin(pBt->pPager,wrflag>1,sqlite3TempInMemory(p->db)); if( rc==SQLITE_OK ){ rc = newDatabase(pBt); @@ -41360,19 +58767,19 @@ } if( rc!=SQLITE_OK ){ unlockBtreeIfUnused(pBt); } - }while( rc==SQLITE_BUSY && pBt->inTransaction==TRANS_NONE && + }while( (rc&0xFF)==SQLITE_BUSY && pBt->inTransaction==TRANS_NONE && btreeInvokeBusyHandler(pBt) ); if( rc==SQLITE_OK ){ if( p->inTrans==TRANS_NONE ){ pBt->nTransaction++; #ifndef SQLITE_OMIT_SHARED_CACHE if( p->sharable ){ - assert( p->lock.pBtree==p && p->lock.iTable==1 ); + assert( p->lock.pBtree==p && p->lock.iTable==1 ); p->lock.eLock = READ_LOCK; p->lock.pNext = pBt->pLock; pBt->pLock = &p->lock; } #endif @@ -41379,17 +58786,32 @@ } p->inTrans = (wrflag?TRANS_WRITE:TRANS_READ); if( p->inTrans>pBt->inTransaction ){ pBt->inTransaction = p->inTrans; } -#ifndef SQLITE_OMIT_SHARED_CACHE if( wrflag ){ + MemPage *pPage1 = pBt->pPage1; +#ifndef SQLITE_OMIT_SHARED_CACHE assert( !pBt->pWriter ); pBt->pWriter = p; - pBt->isExclusive = (u8)(wrflag>1); + pBt->btsFlags &= ~BTS_EXCLUSIVE; + if( wrflag>1 ) pBt->btsFlags |= BTS_EXCLUSIVE; +#endif + + /* If the db-size header field is incorrect (as it may be if an old + ** client has been writing the database file), update it now. Doing + ** this sooner rather than later means the database size can safely + ** re-read the database size from page 1 if a savepoint or transaction + ** rollback occurs within the transaction. + */ + if( pBt->nPage!=get4byte(&pPage1->aData[28]) ){ + rc = sqlite3PagerWrite(pPage1->pDbPage); + if( rc==SQLITE_OK ){ + put4byte(&pPage1->aData[28], pBt->nPage); + } + } } -#endif } trans_begun: if( rc==SQLITE_OK && wrflag ){ @@ -41473,24 +58895,27 @@ put4byte(pPage->aData, iTo); }else{ u8 isInitOrig = pPage->isInit; int i; int nCell; + int rc; - btreeInitPage(pPage); + rc = btreeInitPage(pPage); + if( rc ) return rc; nCell = pPage->nCell; for(i=0; i<nCell; i++){ u8 *pCell = findCell(pPage, i); if( eType==PTRMAP_OVERFLOW1 ){ CellInfo info; - btreeParseCellPtr(pPage, pCell, &info); - if( info.iOverflow ){ - if( iFrom==get4byte(&pCell[info.iOverflow]) ){ - put4byte(&pCell[info.iOverflow], iTo); - break; - } + pPage->xParseCell(pPage, pCell, &info); + if( info.nLocal<info.nPayload + && pCell+info.nSize-1<=pPage->aData+pPage->maskPage + && iFrom==get4byte(pCell+info.nSize-4) + ){ + put4byte(pCell+info.nSize-4, iTo); + break; } }else{ if( get4byte(pCell)==iFrom ){ put4byte(pCell, iTo); break; @@ -41596,28 +59021,27 @@ /* Forward declaration required by incrVacuumStep(). */ static int allocateBtreePage(BtShared *, MemPage **, Pgno *, Pgno, u8); /* -** Perform a single step of an incremental-vacuum. If successful, -** return SQLITE_OK. If there is no work to do (and therefore no -** point in calling this function again), return SQLITE_DONE. -** -** More specificly, this function attempts to re-organize the -** database so that the last page of the file currently in use -** is no longer in use. -** -** If the nFin parameter is non-zero, this function assumes -** that the caller will keep calling incrVacuumStep() until -** it returns SQLITE_DONE or an error, and that nFin is the -** number of pages the database file will contain after this -** process is complete. 
If nFin is zero, it is assumed that -** incrVacuumStep() will be called a finite amount of times -** which may or may not empty the freelist. A full autovacuum -** has nFin>0. A "PRAGMA incremental_vacuum" has nFin==0. -*/ -static int incrVacuumStep(BtShared *pBt, Pgno nFin, Pgno iLastPg){ +** Perform a single step of an incremental-vacuum. If successful, return +** SQLITE_OK. If there is no work to do (and therefore no point in +** calling this function again), return SQLITE_DONE. Or, if an error +** occurs, return some other error code. +** +** More specifically, this function attempts to re-organize the database so +** that the last page of the file currently in use is no longer in use. +** +** Parameter nFin is the number of pages that this database would contain +** were this function called until it returns SQLITE_DONE. +** +** If the bCommit parameter is non-zero, this function assumes that the +** caller will keep calling incrVacuumStep() until it returns SQLITE_DONE +** or an error. bCommit is passed true for an auto-vacuum-on-commit +** operation, or false for an incremental vacuum. +*/ +static int incrVacuumStep(BtShared *pBt, Pgno nFin, Pgno iLastPg, int bCommit){ Pgno nFreeList; /* Number of pages still on the free-list */ int rc; assert( sqlite3_mutex_held(pBt->mutex) ); assert( iLastPg>nFin ); @@ -41638,85 +59062,98 @@ if( eType==PTRMAP_ROOTPAGE ){ return SQLITE_CORRUPT_BKPT; } if( eType==PTRMAP_FREEPAGE ){ - if( nFin==0 ){ + if( bCommit==0 ){ /* Remove the page from the files free-list. This is not required - ** if nFin is non-zero. In that case, the free-list will be + ** if bCommit is non-zero. In that case, the free-list will be ** truncated to zero after this function returns, so it doesn't ** matter if it still contains some garbage entries. */ Pgno iFreePg; MemPage *pFreePg; - rc = allocateBtreePage(pBt, &pFreePg, &iFreePg, iLastPg, 1); + rc = allocateBtreePage(pBt, &pFreePg, &iFreePg, iLastPg, BTALLOC_EXACT); if( rc!=SQLITE_OK ){ return rc; } assert( iFreePg==iLastPg ); releasePage(pFreePg); } } else { Pgno iFreePg; /* Index of free page to move pLastPg to */ MemPage *pLastPg; + u8 eMode = BTALLOC_ANY; /* Mode parameter for allocateBtreePage() */ + Pgno iNear = 0; /* nearby parameter for allocateBtreePage() */ rc = btreeGetPage(pBt, iLastPg, &pLastPg, 0); if( rc!=SQLITE_OK ){ return rc; } - /* If nFin is zero, this loop runs exactly once and page pLastPg + /* If bCommit is zero, this loop runs exactly once and page pLastPg ** is swapped with the first free page pulled off the free list. ** - ** On the other hand, if nFin is greater than zero, then keep + ** On the other hand, if bCommit is greater than zero, then keep ** looping until a free-page located within the first nFin pages ** of the file is found. 
*/ + if( bCommit==0 ){ + eMode = BTALLOC_LE; + iNear = nFin; + } do { MemPage *pFreePg; - rc = allocateBtreePage(pBt, &pFreePg, &iFreePg, 0, 0); + rc = allocateBtreePage(pBt, &pFreePg, &iFreePg, iNear, eMode); if( rc!=SQLITE_OK ){ releasePage(pLastPg); return rc; } releasePage(pFreePg); - }while( nFin!=0 && iFreePg>nFin ); + }while( bCommit && iFreePg>nFin ); assert( iFreePg<iLastPg ); - rc = sqlite3PagerWrite(pLastPg->pDbPage); - if( rc==SQLITE_OK ){ - rc = relocatePage(pBt, pLastPg, eType, iPtrPage, iFreePg, nFin!=0); - } + rc = relocatePage(pBt, pLastPg, eType, iPtrPage, iFreePg, bCommit); releasePage(pLastPg); if( rc!=SQLITE_OK ){ return rc; } } } - if( nFin==0 ){ - iLastPg--; - while( iLastPg==PENDING_BYTE_PAGE(pBt)||PTRMAP_ISPAGE(pBt, iLastPg) ){ - if( PTRMAP_ISPAGE(pBt, iLastPg) ){ - MemPage *pPg; - rc = btreeGetPage(pBt, iLastPg, &pPg, 0); - if( rc!=SQLITE_OK ){ - return rc; - } - rc = sqlite3PagerWrite(pPg->pDbPage); - releasePage(pPg); - if( rc!=SQLITE_OK ){ - return rc; - } - } - iLastPg--; - } - sqlite3PagerTruncateImage(pBt->pPager, iLastPg); + if( bCommit==0 ){ + do { + iLastPg--; + }while( iLastPg==PENDING_BYTE_PAGE(pBt) || PTRMAP_ISPAGE(pBt, iLastPg) ); + pBt->bDoTruncate = 1; pBt->nPage = iLastPg; } return SQLITE_OK; } + +/* +** The database opened by the first argument is an auto-vacuum database +** nOrig pages in size containing nFree free pages. Return the expected +** size of the database in pages following an auto-vacuum operation. +*/ +static Pgno finalDbSize(BtShared *pBt, Pgno nOrig, Pgno nFree){ + int nEntry; /* Number of entries on one ptrmap page */ + Pgno nPtrmap; /* Number of PtrMap pages to be freed */ + Pgno nFin; /* Return value */ + + nEntry = pBt->usableSize/5; + nPtrmap = (nFree-nOrig+PTRMAP_PAGENO(pBt, nOrig)+nEntry)/nEntry; + nFin = nOrig - nFree - nPtrmap; + if( nOrig>PENDING_BYTE_PAGE(pBt) && nFin<PENDING_BYTE_PAGE(pBt) ){ + nFin--; + } + while( PTRMAP_ISPAGE(pBt, nFin) || nFin==PENDING_BYTE_PAGE(pBt) ){ + nFin--; + } + + return nFin; +} /* ** A write-transaction must be opened before calling this function. ** It performs a single unit of work towards an incremental vacuum. ** @@ -41731,44 +59168,55 @@ sqlite3BtreeEnter(p); assert( pBt->inTransaction==TRANS_WRITE && p->inTrans==TRANS_WRITE ); if( !pBt->autoVacuum ){ rc = SQLITE_DONE; }else{ - invalidateAllOverflowCache(pBt); - rc = incrVacuumStep(pBt, 0, btreePagecount(pBt)); - if( rc==SQLITE_OK ){ - rc = sqlite3PagerWrite(pBt->pPage1->pDbPage); - put4byte(&pBt->pPage1->aData[28], pBt->nPage); + Pgno nOrig = btreePagecount(pBt); + Pgno nFree = get4byte(&pBt->pPage1->aData[36]); + Pgno nFin = finalDbSize(pBt, nOrig, nFree); + + if( nOrig<nFin ){ + rc = SQLITE_CORRUPT_BKPT; + }else if( nFree>0 ){ + rc = saveAllCursors(pBt, 0, 0); + if( rc==SQLITE_OK ){ + invalidateAllOverflowCache(pBt); + rc = incrVacuumStep(pBt, nFin, nOrig, 0); + } + if( rc==SQLITE_OK ){ + rc = sqlite3PagerWrite(pBt->pPage1->pDbPage); + put4byte(&pBt->pPage1->aData[28], pBt->nPage); + } + }else{ + rc = SQLITE_DONE; } } sqlite3BtreeLeave(p); return rc; } /* ** This routine is called prior to sqlite3PagerCommit when a transaction -** is commited for an auto-vacuum database. +** is committed for an auto-vacuum database. ** ** If SQLITE_OK is returned, then *pnTrunc is set to the number of pages ** the database file should be truncated to during the commit process. ** i.e. the database has been reorganized so that only the first *pnTrunc ** pages are in use. 
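finalDbSize() above estimates how large the file will be once nFree pages, plus the pointer-map pages that become unnecessary with them, are released. Here is a rough, self-contained sketch of that arithmetic; it deliberately ignores the pending-byte page and the exact ptrmap positions, which the real function corrects for in its trailing adjustments:

    #include <stdint.h>

    /* Sketch: order-of-magnitude version of finalDbSize().  Each pointer-map
    ** entry is 5 bytes (type + parent page number), so one ptrmap page covers
    ** usableSize/5 pages. */
    static uint32_t approx_final_size(uint32_t usableSize, uint32_t nOrig,
                                      uint32_t nFree){
      uint32_t nEntry  = usableSize/5;                /* pages per ptrmap page  */
      uint32_t nPtrmap = (nFree + nEntry - 1)/nEntry; /* ptrmap pages freed too */
      return nOrig - nFree - nPtrmap;   /* e.g. (4096, 10000, 1000) -> 8998 */
    }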
*/ static int autoVacuumCommit(BtShared *pBt){ int rc = SQLITE_OK; Pager *pPager = pBt->pPager; - VVA_ONLY( int nRef = sqlite3PagerRefcount(pPager) ); + VVA_ONLY( int nRef = sqlite3PagerRefcount(pPager); ) assert( sqlite3_mutex_held(pBt->mutex) ); invalidateAllOverflowCache(pBt); assert(pBt->autoVacuum); if( !pBt->incrVacuum ){ Pgno nFin; /* Number of pages in database after autovacuuming */ Pgno nFree; /* Number of pages on the freelist initially */ - Pgno nPtrmap; /* Number of PtrMap pages to be freed */ Pgno iFree; /* The next page to be freed */ - int nEntry; /* Number of entries on one ptrmap page */ Pgno nOrig; /* Database size before freeing */ nOrig = btreePagecount(pBt); if( PTRMAP_ISPAGE(pBt, nOrig) || nOrig==PENDING_BYTE_PAGE(pBt) ){ /* It is not possible to create a database for which the final page @@ -41777,38 +59225,32 @@ */ return SQLITE_CORRUPT_BKPT; } nFree = get4byte(&pBt->pPage1->aData[36]); - nEntry = pBt->usableSize/5; - nPtrmap = (nFree-nOrig+PTRMAP_PAGENO(pBt, nOrig)+nEntry)/nEntry; - nFin = nOrig - nFree - nPtrmap; - if( nOrig>PENDING_BYTE_PAGE(pBt) && nFin<PENDING_BYTE_PAGE(pBt) ){ - nFin--; - } - while( PTRMAP_ISPAGE(pBt, nFin) || nFin==PENDING_BYTE_PAGE(pBt) ){ - nFin--; - } + nFin = finalDbSize(pBt, nOrig, nFree); if( nFin>nOrig ) return SQLITE_CORRUPT_BKPT; - + if( nFin<nOrig ){ + rc = saveAllCursors(pBt, 0, 0); + } for(iFree=nOrig; iFree>nFin && rc==SQLITE_OK; iFree--){ - rc = incrVacuumStep(pBt, nFin, iFree); + rc = incrVacuumStep(pBt, nFin, iFree, 1); } if( (rc==SQLITE_DONE || rc==SQLITE_OK) && nFree>0 ){ rc = sqlite3PagerWrite(pBt->pPage1->pDbPage); put4byte(&pBt->pPage1->aData[32], 0); put4byte(&pBt->pPage1->aData[36], 0); put4byte(&pBt->pPage1->aData[28], nFin); - sqlite3PagerTruncateImage(pBt->pPager, nFin); + pBt->bDoTruncate = 1; pBt->nPage = nFin; } if( rc!=SQLITE_OK ){ sqlite3PagerRollback(pPager); } } - assert( nRef==sqlite3PagerRefcount(pPager) ); + assert( nRef>=sqlite3PagerRefcount(pPager) ); return rc; } #else /* ifndef SQLITE_OMIT_AUTOVACUUM */ # define setChildPtrmaps(x) SQLITE_OK @@ -41851,10 +59293,13 @@ if( rc!=SQLITE_OK ){ sqlite3BtreeLeave(p); return rc; } } + if( pBt->bDoTruncate ){ + sqlite3PagerTruncateImage(pBt->pPager, pBt->nPage); + } #endif rc = sqlite3PagerCommitPhaseOne(pBt->pPager, zMaster, 0); sqlite3BtreeLeave(p); } return rc; @@ -41864,14 +59309,17 @@ ** This function is called from both BtreeCommitPhaseTwo() and BtreeRollback() ** at the conclusion of a transaction. */ static void btreeEndTransaction(Btree *p){ BtShared *pBt = p->pBt; + sqlite3 *db = p->db; assert( sqlite3BtreeHoldsMutex(p) ); - btreeClearHasContent(pBt); - if( p->inTrans>TRANS_NONE && p->db->activeVdbeCnt>1 ){ +#ifndef SQLITE_OMIT_AUTOVACUUM + pBt->bDoTruncate = 0; +#endif + if( p->inTrans>TRANS_NONE && db->nVdbeRead>1 ){ /* If there are other active statements that belong to this database ** handle, downgrade to a read-only transaction. The other statements ** may still be reading from the database. */ downgradeAllSharedCacheTableLocks(p); p->inTrans = TRANS_READ; @@ -41906,33 +59354,47 @@ ** routine did all the work of writing information out to disk and flushing the ** contents so that they are written onto the disk platter. All this ** routine has to do is delete or truncate or zero the header in the ** the rollback journal (which causes the transaction to commit) and ** drop locks. 
+** +** Normally, if an error occurs while the pager layer is attempting to +** finalize the underlying journal file, this function returns an error and +** the upper layer will attempt a rollback. However, if the second argument +** is non-zero then this b-tree transaction is part of a multi-file +** transaction. In this case, the transaction has already been committed +** (by deleting a master journal file) and the caller will ignore this +** functions return code. So, even if an error occurs in the pager layer, +** reset the b-tree objects internal state to indicate that the write +** transaction has been closed. This is quite safe, as the pager will have +** transitioned to the error state. ** ** This will release the write lock on the database file. If there ** are no active cursors, it also releases the read lock. */ -SQLITE_PRIVATE int sqlite3BtreeCommitPhaseTwo(Btree *p){ - BtShared *pBt = p->pBt; +SQLITE_PRIVATE int sqlite3BtreeCommitPhaseTwo(Btree *p, int bCleanup){ + if( p->inTrans==TRANS_NONE ) return SQLITE_OK; sqlite3BtreeEnter(p); btreeIntegrity(p); /* If the handle has a write-transaction open, commit the shared-btrees ** transaction and set the shared state to TRANS_READ. */ if( p->inTrans==TRANS_WRITE ){ int rc; + BtShared *pBt = p->pBt; assert( pBt->inTransaction==TRANS_WRITE ); assert( pBt->nTransaction>0 ); rc = sqlite3PagerCommitPhaseTwo(pBt->pPager); - if( rc!=SQLITE_OK ){ + if( rc!=SQLITE_OK && bCleanup==0 ){ sqlite3BtreeLeave(p); return rc; } + p->iDataVersion--; /* Compensate for pPager->iDataVersion++; */ pBt->inTransaction = TRANS_READ; + btreeClearHasContent(pBt); } btreeEndTransaction(p); sqlite3BtreeLeave(p); return SQLITE_OK; @@ -41944,98 +59406,104 @@ SQLITE_PRIVATE int sqlite3BtreeCommit(Btree *p){ int rc; sqlite3BtreeEnter(p); rc = sqlite3BtreeCommitPhaseOne(p, 0); if( rc==SQLITE_OK ){ - rc = sqlite3BtreeCommitPhaseTwo(p); + rc = sqlite3BtreeCommitPhaseTwo(p, 0); } sqlite3BtreeLeave(p); return rc; } -#ifndef NDEBUG -/* -** Return the number of write-cursors open on this handle. This is for use -** in assert() expressions, so it is only compiled if NDEBUG is not -** defined. -** -** For the purposes of this routine, a write-cursor is any cursor that -** is capable of writing to the databse. That means the cursor was -** originally opened for writing and the cursor has not be disabled -** by having its state changed to CURSOR_FAULT. -*/ -static int countWriteCursors(BtShared *pBt){ - BtCursor *pCur; - int r = 0; - for(pCur=pBt->pCursor; pCur; pCur=pCur->pNext){ - if( pCur->wrFlag && pCur->eState!=CURSOR_FAULT ) r++; - } - return r; -} -#endif - /* ** This routine sets the state to CURSOR_FAULT and the error -** code to errCode for every cursor on BtShared that pBtree -** references. -** -** Every cursor is tripped, including cursors that belong -** to other database connections that happen to be sharing -** the cache with pBtree. -** -** This routine gets called when a rollback occurs. -** All cursors using the same cache must be tripped -** to prevent them from trying to use the btree after -** the rollback. The rollback may have deleted tables -** or moved root pages, so it is not sufficient to -** save the state of the cursor. The cursor must be -** invalidated. -*/ -SQLITE_PRIVATE void sqlite3BtreeTripAllCursors(Btree *pBtree, int errCode){ +** code to errCode for every cursor on any BtShared that pBtree +** references. Or if the writeOnly flag is set to 1, then only +** trip write cursors and leave read cursors unchanged. 
+** +** Every cursor is a candidate to be tripped, including cursors +** that belong to other database connections that happen to be +** sharing the cache with pBtree. +** +** This routine gets called when a rollback occurs. If the writeOnly +** flag is true, then only write-cursors need be tripped - read-only +** cursors save their current positions so that they may continue +** following the rollback. Or, if writeOnly is false, all cursors are +** tripped. In general, writeOnly is false if the transaction being +** rolled back modified the database schema. In this case b-tree root +** pages may be moved or deleted from the database altogether, making +** it unsafe for read cursors to continue. +** +** If the writeOnly flag is true and an error is encountered while +** saving the current position of a read-only cursor, all cursors, +** including all read-cursors are tripped. +** +** SQLITE_OK is returned if successful, or if an error occurs while +** saving a cursor position, an SQLite error code. +*/ +SQLITE_PRIVATE int sqlite3BtreeTripAllCursors(Btree *pBtree, int errCode, int writeOnly){ BtCursor *p; - sqlite3BtreeEnter(pBtree); - for(p=pBtree->pBt->pCursor; p; p=p->pNext){ - int i; - sqlite3BtreeClearCursor(p); - p->eState = CURSOR_FAULT; - p->skipNext = errCode; - for(i=0; i<=p->iPage; i++){ - releasePage(p->apPage[i]); - p->apPage[i] = 0; - } - } - sqlite3BtreeLeave(pBtree); + int rc = SQLITE_OK; + + assert( (writeOnly==0 || writeOnly==1) && BTCF_WriteFlag==1 ); + if( pBtree ){ + sqlite3BtreeEnter(pBtree); + for(p=pBtree->pBt->pCursor; p; p=p->pNext){ + int i; + if( writeOnly && (p->curFlags & BTCF_WriteFlag)==0 ){ + if( p->eState==CURSOR_VALID || p->eState==CURSOR_SKIPNEXT ){ + rc = saveCursorPosition(p); + if( rc!=SQLITE_OK ){ + (void)sqlite3BtreeTripAllCursors(pBtree, rc, 0); + break; + } + } + }else{ + sqlite3BtreeClearCursor(p); + p->eState = CURSOR_FAULT; + p->skipNext = errCode; + } + for(i=0; i<=p->iPage; i++){ + releasePage(p->apPage[i]); + p->apPage[i] = 0; + } + } + sqlite3BtreeLeave(pBtree); + } + return rc; } /* -** Rollback the transaction in progress. All cursors will be -** invalided by this operation. Any attempt to use a cursor -** that was open at the beginning of this operation will result -** in an error. +** Rollback the transaction in progress. +** +** If tripCode is not SQLITE_OK then cursors will be invalidated (tripped). +** Only write cursors are tripped if writeOnly is true but all cursors are +** tripped if writeOnly is false. Any attempt to use +** a tripped cursor will result in an error. ** ** This will release the write lock on the database file. If there ** are no active cursors, it also releases the read lock. */ -SQLITE_PRIVATE int sqlite3BtreeRollback(Btree *p){ +SQLITE_PRIVATE int sqlite3BtreeRollback(Btree *p, int tripCode, int writeOnly){ int rc; BtShared *pBt = p->pBt; MemPage *pPage1; + assert( writeOnly==1 || writeOnly==0 ); + assert( tripCode==SQLITE_ABORT_ROLLBACK || tripCode==SQLITE_OK ); sqlite3BtreeEnter(p); - rc = saveAllCursors(pBt, 0, 0); -#ifndef SQLITE_OMIT_SHARED_CACHE - if( rc!=SQLITE_OK ){ - /* This is a horrible situation. An IO or malloc() error occurred whilst - ** trying to save cursor positions. If this is an automatic rollback (as - ** the result of a constraint, malloc() failure or IO error) then - ** the cache may be internally inconsistent (not contain valid trees) so - ** we cannot simply return the error to the caller. Instead, abort - ** all queries that may be using any of the cursors that failed to save. 
- */ - sqlite3BtreeTripAllCursors(p, rc); - } -#endif + if( tripCode==SQLITE_OK ){ + rc = tripCode = saveAllCursors(pBt, 0, 0); + if( rc ) writeOnly = 0; + }else{ + rc = SQLITE_OK; + } + if( tripCode ){ + int rc2 = sqlite3BtreeTripAllCursors(p, tripCode, writeOnly); + assert( rc==SQLITE_OK || (writeOnly==0 && rc2==SQLITE_OK) ); + if( rc2!=SQLITE_OK ) rc = rc2; + } btreeIntegrity(p); if( p->inTrans==TRANS_WRITE ){ int rc2; @@ -42054,21 +59522,22 @@ if( nPage==0 ) sqlite3PagerPagecount(pBt->pPager, &nPage); testcase( pBt->nPage!=nPage ); pBt->nPage = nPage; releasePage(pPage1); } - assert( countWriteCursors(pBt)==0 ); + assert( countValidCursors(pBt, 1)==0 ); pBt->inTransaction = TRANS_READ; + btreeClearHasContent(pBt); } btreeEndTransaction(p); sqlite3BtreeLeave(p); return rc; } /* -** Start a statement subtransaction. The subtransaction can can be rolled +** Start a statement subtransaction. The subtransaction can be rolled ** back independently of the main transaction. You must start a transaction ** before starting a subtransaction. The subtransaction is ended automatically ** if the main transaction commits or rolls back. ** ** Statement subtransactions are used around individual SQL statements @@ -42086,11 +59555,11 @@ SQLITE_PRIVATE int sqlite3BtreeBeginStmt(Btree *p, int iStatement){ int rc; BtShared *pBt = p->pBt; sqlite3BtreeEnter(p); assert( p->inTrans==TRANS_WRITE ); - assert( pBt->readOnly==0 ); + assert( (pBt->btsFlags & BTS_READ_ONLY)==0 ); assert( iStatement>0 ); assert( iStatement>p->db->nSavepoint ); assert( pBt->inTransaction==TRANS_WRITE ); /* At the pager level, a statement transaction is a savepoint with ** an index greater than all savepoints created explicitly using @@ -42121,16 +59590,20 @@ assert( op==SAVEPOINT_RELEASE || op==SAVEPOINT_ROLLBACK ); assert( iSavepoint>=0 || (iSavepoint==-1 && op==SAVEPOINT_ROLLBACK) ); sqlite3BtreeEnter(p); rc = sqlite3PagerSavepoint(pBt->pPager, op, iSavepoint); if( rc==SQLITE_OK ){ - if( iSavepoint<0 && pBt->initiallyEmpty ) pBt->nPage = 0; + if( iSavepoint<0 && (pBt->btsFlags & BTS_INITIALLY_EMPTY)!=0 ){ + pBt->nPage = 0; + } rc = newDatabase(pBt); pBt->nPage = get4byte(28 + pBt->pPage1->aData); - if( pBt->nPage==0 ){ - sqlite3PagerPagecount(pBt->pPager, (int*)&pBt->nPage); - } + + /* The database size was written into the offset 28 of the header + ** when the transaction started, so we know that the value at offset + ** 28 is nonzero. */ + assert( pBt->nPage>0 ); } sqlite3BtreeLeave(p); } return rc; } @@ -42140,17 +59613,17 @@ ** iTable. If a read-only cursor is requested, it is assumed that ** the caller already has at least a read-only transaction open ** on the database already. If a write-cursor is requested, then ** the caller is assumed to have an open write transaction. ** -** If wrFlag==0, then the cursor can only be used for reading. -** If wrFlag==1, then the cursor can be used for reading or for -** writing if other conditions for writing are also met. These -** are the conditions that must be met in order for writing to -** be allowed: +** If the BTREE_WRCSR bit of wrFlag is clear, then the cursor can only +** be used for reading. If the BTREE_WRCSR bit is set, then the cursor +** can be used for reading or for writing if other conditions for writing +** are also met. 
These are the conditions that must be met in order +** for writing to be allowed: ** -** 1: The cursor must have been opened with wrFlag==1 +** 1: The cursor must have been opened with wrFlag containing BTREE_WRCSR ** ** 2: Other database connections that share the same pager cache ** but which are not in the READ_UNCOMMITTED state may not have ** cursors open with wrFlag==0 on the same table. Otherwise ** the changes made by this write cursor would be visible to @@ -42157,10 +59630,20 @@ ** the read cursors in the other database connection. ** ** 3: The database must be writable (not on read-only media) ** ** 4: There must be an active transaction. +** +** The BTREE_FORDELETE bit of wrFlag may optionally be set if BTREE_WRCSR +** is set. If FORDELETE is set, that is a hint to the implementation that +** this cursor will only be used to seek to and delete entries of an index +** as part of a larger DELETE statement. The FORDELETE hint is not used by +** this implementation. But in a hypothetical alternative storage engine +** in which index entries are automatically deleted when corresponding table +** rows are deleted, the FORDELETE flag is a hint that all SEEK and DELETE +** operations on this cursor can be no-ops and all READ operations can +** return a null row (2-bytes: 0x01 0x00). ** ** No checking is done to make sure that page iTable really is the ** root page of a b-tree. If it is not, then the cursor acquired ** will not work correctly. ** @@ -42173,48 +59656,60 @@ int wrFlag, /* 1 to write. 0 read-only */ struct KeyInfo *pKeyInfo, /* First arg to comparison function */ BtCursor *pCur /* Space for new cursor */ ){ BtShared *pBt = p->pBt; /* Shared b-tree handle */ + BtCursor *pX; /* Looping over other all cursors */ assert( sqlite3BtreeHoldsMutex(p) ); - assert( wrFlag==0 || wrFlag==1 ); + assert( wrFlag==0 + || wrFlag==BTREE_WRCSR + || wrFlag==(BTREE_WRCSR|BTREE_FORDELETE) + ); /* The following assert statements verify that if this is a sharable ** b-tree database, the connection is holding the required table locks, ** and that no other connection has any open cursor that conflicts with ** this lock. */ - assert( hasSharedCacheTableLock(p, iTable, pKeyInfo!=0, wrFlag+1) ); + assert( hasSharedCacheTableLock(p, iTable, pKeyInfo!=0, (wrFlag?2:1)) ); assert( wrFlag==0 || !hasReadConflicts(p, iTable) ); /* Assert that the caller has opened the required transaction. */ assert( p->inTrans>TRANS_NONE ); assert( wrFlag==0 || p->inTrans==TRANS_WRITE ); assert( pBt->pPage1 && pBt->pPage1->aData ); + assert( wrFlag==0 || (pBt->btsFlags & BTS_READ_ONLY)==0 ); - if( NEVER(wrFlag && pBt->readOnly) ){ - return SQLITE_READONLY; + if( wrFlag ){ + allocateTempSpace(pBt); + if( pBt->pTmpSpace==0 ) return SQLITE_NOMEM; } if( iTable==1 && btreePagecount(pBt)==0 ){ - return SQLITE_EMPTY; + assert( wrFlag==0 ); + iTable = 0; } /* Now that no other errors can occur, finish filling in the BtCursor ** variables and link the cursor into the BtShared list. */ pCur->pgnoRoot = (Pgno)iTable; pCur->iPage = -1; pCur->pKeyInfo = pKeyInfo; pCur->pBtree = p; pCur->pBt = pBt; - pCur->wrFlag = (u8)wrFlag; + pCur->curFlags = wrFlag ? BTCF_WriteFlag : 0; + pCur->curPagerFlags = wrFlag ? 0 : PAGER_GET_READONLY; + /* If there are two or more cursors on the same btree, then all such + ** cursors *must* have the BTCF_Multiple flag set. 
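Condition 2 in the list above is the one that "PRAGMA read_uncommitted" relaxes: a shared-cache reader that sets it gives up its table-level read locks, so write cursors on the same table in another connection no longer conflict with it, at the price of possibly seeing uncommitted changes. A one-line sketch:

    #include <sqlite3.h>

    /* Illustrative only: put a shared-cache connection into the
    ** READ_UNCOMMITTED state referred to in condition 2 above. */
    static void allow_dirty_reads(sqlite3 *pReader){
      sqlite3_exec(pReader, "PRAGMA read_uncommitted=1;", 0, 0, 0);
    }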
*/ + for(pX=pBt->pCursor; pX; pX=pX->pNext){ + if( pX->pgnoRoot==(Pgno)iTable ){ + pX->curFlags |= BTCF_Multiple; + pCur->curFlags |= BTCF_Multiple; + } + } pCur->pNext = pBt->pCursor; - if( pCur->pNext ){ - pCur->pNext->pPrev = pCur; - } pBt->pCursor = pCur; pCur->eState = CURSOR_INVALID; - pCur->cachedRowid = 0; return SQLITE_OK; } SQLITE_PRIVATE int sqlite3BtreeCursor( Btree *p, /* The btree */ int iTable, /* Root page of table to open */ @@ -42221,13 +59716,17 @@ int wrFlag, /* 1 to write. 0 read-only */ struct KeyInfo *pKeyInfo, /* First arg to xCompare() */ BtCursor *pCur /* Write new cursor here */ ){ int rc; - sqlite3BtreeEnter(p); - rc = btreeCursor(p, iTable, wrFlag, pKeyInfo, pCur); - sqlite3BtreeLeave(p); + if( iTable<1 ){ + rc = SQLITE_CORRUPT_BKPT; + }else{ + sqlite3BtreeEnter(p); + rc = btreeCursor(p, iTable, wrFlag, pKeyInfo, pCur); + sqlite3BtreeLeave(p); + } return rc; } /* ** Return the size of a BtCursor object in bytes. @@ -42251,40 +59750,10 @@ */ SQLITE_PRIVATE void sqlite3BtreeCursorZero(BtCursor *p){ memset(p, 0, offsetof(BtCursor, iPage)); } -/* -** Set the cached rowid value of every cursor in the same database file -** as pCur and having the same root page number as pCur. The value is -** set to iRowid. -** -** Only positive rowid values are considered valid for this cache. -** The cache is initialized to zero, indicating an invalid cache. -** A btree will work fine with zero or negative rowids. We just cannot -** cache zero or negative rowids, which means tables that use zero or -** negative rowids might run a little slower. But in practice, zero -** or negative rowids are very uncommon so this should not be a problem. -*/ -SQLITE_PRIVATE void sqlite3BtreeSetCachedRowid(BtCursor *pCur, sqlite3_int64 iRowid){ - BtCursor *p; - for(p=pCur->pBt->pCursor; p; p=p->pNext){ - if( p->pgnoRoot==pCur->pgnoRoot ) p->cachedRowid = iRowid; - } - assert( pCur->cachedRowid==iRowid ); -} - -/* -** Return the cached rowid for the given cursor. A negative or zero -** return value indicates that the rowid cache is invalid and should be -** ignored. If the rowid cache has never before been set, then a -** zero is returned. -*/ -SQLITE_PRIVATE sqlite3_int64 sqlite3BtreeGetCachedRowid(BtCursor *pCur){ - return pCur->cachedRowid; -} - /* ** Close a cursor. The read lock on the database file is released ** when the last cursor is closed. */ SQLITE_PRIVATE int sqlite3BtreeCloseCursor(BtCursor *pCur){ @@ -42292,23 +59761,28 @@ if( pBtree ){ int i; BtShared *pBt = pCur->pBt; sqlite3BtreeEnter(pBtree); sqlite3BtreeClearCursor(pCur); - if( pCur->pPrev ){ - pCur->pPrev->pNext = pCur->pNext; - }else{ + assert( pBt->pCursor!=0 ); + if( pBt->pCursor==pCur ){ pBt->pCursor = pCur->pNext; - } - if( pCur->pNext ){ - pCur->pNext->pPrev = pCur->pPrev; + }else{ + BtCursor *pPrev = pBt->pCursor; + do{ + if( pPrev->pNext==pCur ){ + pPrev->pNext = pCur->pNext; + break; + } + pPrev = pPrev->pNext; + }while( ALWAYS(pPrev) ); } for(i=0; i<=pCur->iPage; i++){ releasePage(pCur->apPage[i]); } unlockBtreeIfUnused(pBt); - invalidateOverflowCache(pCur); + sqlite3_free(pCur->aOverflow); /* sqlite3_free(pCur); */ sqlite3BtreeLeave(pBtree); } return SQLITE_OK; } @@ -42318,51 +59792,31 @@ ** BtCursor.info structure. If it is not already valid, call ** btreeParseCell() to fill it in. ** ** BtCursor.info is a cache of the information in the current cell. ** Using this cache reduces the number of calls to btreeParseCell(). 
-** -** 2007-06-25: There is a bug in some versions of MSVC that cause the -** compiler to crash when getCellInfo() is implemented as a macro. -** But there is a measureable speed advantage to using the macro on gcc -** (when less compiler optimizations like -Os or -O0 are used and the -** compiler is not doing agressive inlining.) So we use a real function -** for MSVC and a macro for everything else. Ticket #2457. */ #ifndef NDEBUG static void assertCellInfo(BtCursor *pCur){ CellInfo info; int iPage = pCur->iPage; memset(&info, 0, sizeof(info)); btreeParseCell(pCur->apPage[iPage], pCur->aiIdx[iPage], &info); - assert( memcmp(&info, &pCur->info, sizeof(info))==0 ); + assert( CORRUPT_DB || memcmp(&info, &pCur->info, sizeof(info))==0 ); } #else #define assertCellInfo(x) #endif -#ifdef _MSC_VER - /* Use a real function in MSVC to work around bugs in that compiler. */ - static void getCellInfo(BtCursor *pCur){ - if( pCur->info.nSize==0 ){ - int iPage = pCur->iPage; - btreeParseCell(pCur->apPage[iPage],pCur->aiIdx[iPage],&pCur->info); - pCur->validNKey = 1; - }else{ - assertCellInfo(pCur); - } - } -#else /* if not _MSC_VER */ - /* Use a macro in all other compilers so that the function is inlined */ -#define getCellInfo(pCur) \ - if( pCur->info.nSize==0 ){ \ - int iPage = pCur->iPage; \ - btreeParseCell(pCur->apPage[iPage],pCur->aiIdx[iPage],&pCur->info); \ - pCur->validNKey = 1; \ - }else{ \ - assertCellInfo(pCur); \ - } -#endif /* _MSC_VER */ +static SQLITE_NOINLINE void getCellInfo(BtCursor *pCur){ + if( pCur->info.nSize==0 ){ + int iPage = pCur->iPage; + pCur->curFlags |= BTCF_ValidNKey; + btreeParseCell(pCur->apPage[iPage],pCur->aiIdx[iPage],&pCur->info); + }else{ + assertCellInfo(pCur); + } +} #ifndef NDEBUG /* The next routine used only within assert() statements */ /* ** Return true if the given BtCursor is valid. A valid cursor is one ** that is currently pointing to a row in a (non-empty) table. @@ -42385,17 +59839,13 @@ ** ** This routine cannot fail. It always returns SQLITE_OK. */ SQLITE_PRIVATE int sqlite3BtreeKeySize(BtCursor *pCur, i64 *pSize){ assert( cursorHoldsMutex(pCur) ); - assert( pCur->eState==CURSOR_INVALID || pCur->eState==CURSOR_VALID ); - if( pCur->eState!=CURSOR_VALID ){ - *pSize = 0; - }else{ - getCellInfo(pCur); - *pSize = pCur->info.nKey; - } + assert( pCur->eState==CURSOR_VALID ); + getCellInfo(pCur); + *pSize = pCur->info.nKey; return SQLITE_OK; } /* ** Set *pSize to the number of bytes of data in the entry the @@ -42408,14 +59858,17 @@ ** Failure is not possible. This function always returns SQLITE_OK. ** It might just as well be a procedure (returning void) but we continue ** to return an integer result code for historical reasons. */ SQLITE_PRIVATE int sqlite3BtreeDataSize(BtCursor *pCur, u32 *pSize){ - assert( cursorHoldsMutex(pCur) ); + assert( cursorOwnsBtShared(pCur) ); assert( pCur->eState==CURSOR_VALID ); + assert( pCur->iPage>=0 ); + assert( pCur->iPage<BTCURSOR_MAX_DEPTH ); + assert( pCur->apPage[pCur->iPage]->intKeyLeaf==1 ); getCellInfo(pCur); - *pSize = pCur->info.nData; + *pSize = pCur->info.nPayload; return SQLITE_OK; } /* ** Given the page number of an overflow page in the database (parameter @@ -42475,11 +59928,11 @@ } #endif assert( next==0 || rc==SQLITE_DONE ); if( rc==SQLITE_OK ){ - rc = btreeGetPage(pBt, ovfl, &pPage, 0); + rc = btreeGetPage(pBt, ovfl, &pPage, (ppPage==0) ? 
PAGER_GET_READONLY : 0); assert( rc==SQLITE_OK || pPage==0 ); if( rc==SQLITE_OK ){ next = get4byte(pPage->aData); } } @@ -42525,26 +59978,28 @@ return SQLITE_OK; } /* ** This function is used to read or overwrite payload information -** for the entry that the pCur cursor is pointing to. If the eOp -** parameter is 0, this is a read operation (data copied into -** buffer pBuf). If it is non-zero, a write (data copied from -** buffer pBuf). +** for the entry that the pCur cursor is pointing to. The eOp +** argument is interpreted as follows: +** +** 0: The operation is a read. Populate the overflow cache. +** 1: The operation is a write. Populate the overflow cache. +** 2: The operation is a read. Do not populate the overflow cache. ** ** A total of "amt" bytes are read or written beginning at "offset". ** Data is read to or from the buffer pBuf. ** ** The content being read or written might appear on the main page ** or be scattered out on multiple overflow pages. ** -** If the BtCursor.isIncrblobHandle flag is set, and the current -** cursor entry uses one or more overflow pages, this function -** allocates space for and lazily popluates the overflow page-list -** cache array (BtCursor.aOverflow). Subsequent calls use this -** cache to make seeking to the supplied offset more efficient. +** If the current cursor entry uses one or more overflow pages and the +** eOp argument is not 2, this function may allocate space for and lazily +** populates the overflow page-list cache array (BtCursor.aOverflow). +** Subsequent calls use this cache to make seeking to the supplied offset +** more efficient. ** ** Once an overflow page-list cache has been allocated, it may be ** invalidated if some other cursor writes to the same table, or if ** the cursor is moved to a different row. Additionally, in auto-vacuum ** mode, the following events may invalidate an overflow page-list cache. @@ -42560,27 +60015,32 @@ unsigned char *pBuf, /* Write the bytes into this buffer */ int eOp /* zero to read. non-zero to write. */ ){ unsigned char *aPayload; int rc = SQLITE_OK; - u32 nKey; int iIdx = 0; MemPage *pPage = pCur->apPage[pCur->iPage]; /* Btree page of current entry */ BtShared *pBt = pCur->pBt; /* Btree this cursor belongs to */ +#ifdef SQLITE_DIRECT_OVERFLOW_READ + unsigned char * const pBufStart = pBuf; + int bEnd; /* True if reading to end of data */ +#endif assert( pPage ); assert( pCur->eState==CURSOR_VALID ); assert( pCur->aiIdx[pCur->iPage]<pPage->nCell ); assert( cursorHoldsMutex(pCur) ); + assert( eOp!=2 || offset==0 ); /* Always start from beginning for eOp==2 */ getCellInfo(pCur); - aPayload = pCur->info.pCell + pCur->info.nHeader; - nKey = (pPage->intKey ? 0 : (int)pCur->info.nKey); + aPayload = pCur->info.pPayload; +#ifdef SQLITE_DIRECT_OVERFLOW_READ + bEnd = offset+amt==pCur->info.nPayload; +#endif + assert( offset+amt <= pCur->info.nPayload ); - if( NEVER(offset+amt > nKey+pCur->info.nData) - || &aPayload[pCur->info.nLocal] > &pPage->aData[pBt->usableSize] - ){ + if( &aPayload[pCur->info.nLocal] > &pPage->aData[pBt->usableSize] ){ /* Trying to read or write past the end of the data is an error */ return SQLITE_CORRUPT_BKPT; } /* Check if data must be read/written to/from the btree page itself. 
*/ @@ -42587,96 +60047,153 @@ if( offset<pCur->info.nLocal ){ int a = amt; if( a+offset>pCur->info.nLocal ){ a = pCur->info.nLocal - offset; } - rc = copyPayload(&aPayload[offset], pBuf, a, eOp, pPage->pDbPage); + rc = copyPayload(&aPayload[offset], pBuf, a, (eOp & 0x01), pPage->pDbPage); offset = 0; pBuf += a; amt -= a; }else{ offset -= pCur->info.nLocal; } + if( rc==SQLITE_OK && amt>0 ){ const u32 ovflSize = pBt->usableSize - 4; /* Bytes content per ovfl page */ Pgno nextPage; nextPage = get4byte(&aPayload[pCur->info.nLocal]); -#ifndef SQLITE_OMIT_INCRBLOB - /* If the isIncrblobHandle flag is set and the BtCursor.aOverflow[] - ** has not been allocated, allocate it now. The array is sized at - ** one entry for each overflow page in the overflow chain. The - ** page number of the first overflow page is stored in aOverflow[0], - ** etc. A value of 0 in the aOverflow[] array means "not yet known" - ** (the cache is lazily populated). + /* If the BtCursor.aOverflow[] has not been allocated, allocate it now. + ** Except, do not allocate aOverflow[] for eOp==2. + ** + ** The aOverflow[] array is sized at one entry for each overflow page + ** in the overflow chain. The page number of the first overflow page is + ** stored in aOverflow[0], etc. A value of 0 in the aOverflow[] array + ** means "not yet known" (the cache is lazily populated). */ - if( pCur->isIncrblobHandle && !pCur->aOverflow ){ + if( eOp!=2 && (pCur->curFlags & BTCF_ValidOvfl)==0 ){ int nOvfl = (pCur->info.nPayload-pCur->info.nLocal+ovflSize-1)/ovflSize; - pCur->aOverflow = (Pgno *)sqlite3MallocZero(sizeof(Pgno)*nOvfl); - /* nOvfl is always positive. If it were zero, fetchPayload would have - ** been used instead of this routine. */ - if( ALWAYS(nOvfl) && !pCur->aOverflow ){ - rc = SQLITE_NOMEM; + if( nOvfl>pCur->nOvflAlloc ){ + Pgno *aNew = (Pgno*)sqlite3Realloc( + pCur->aOverflow, nOvfl*2*sizeof(Pgno) + ); + if( aNew==0 ){ + rc = SQLITE_NOMEM; + }else{ + pCur->nOvflAlloc = nOvfl*2; + pCur->aOverflow = aNew; + } + } + if( rc==SQLITE_OK ){ + memset(pCur->aOverflow, 0, nOvfl*sizeof(Pgno)); + pCur->curFlags |= BTCF_ValidOvfl; } } /* If the overflow page-list cache has been allocated and the ** entry for the first required overflow page is valid, skip ** directly to it. */ - if( pCur->aOverflow && pCur->aOverflow[offset/ovflSize] ){ + if( (pCur->curFlags & BTCF_ValidOvfl)!=0 + && pCur->aOverflow[offset/ovflSize] + ){ iIdx = (offset/ovflSize); nextPage = pCur->aOverflow[iIdx]; offset = (offset%ovflSize); } -#endif for( ; rc==SQLITE_OK && amt>0 && nextPage; iIdx++){ -#ifndef SQLITE_OMIT_INCRBLOB /* If required, populate the overflow page-list cache. */ - if( pCur->aOverflow ){ - assert(!pCur->aOverflow[iIdx] || pCur->aOverflow[iIdx]==nextPage); + if( (pCur->curFlags & BTCF_ValidOvfl)!=0 ){ + assert( pCur->aOverflow[iIdx]==0 + || pCur->aOverflow[iIdx]==nextPage + || CORRUPT_DB ); pCur->aOverflow[iIdx] = nextPage; } -#endif if( offset>=ovflSize ){ /* The only reason to read this page is to obtain the page ** number for the next page in the overflow chain. The page ** data is not required. So first try to lookup the overflow ** page-list cache, if any, then fall back to the getOverflowPage() ** function. + ** + ** Note that the aOverflow[] array must be allocated because eOp!=2 + ** here. If eOp==2, then offset==0 and this branch is never taken. 
*/ -#ifndef SQLITE_OMIT_INCRBLOB - if( pCur->aOverflow && pCur->aOverflow[iIdx+1] ){ + assert( eOp!=2 ); + assert( pCur->curFlags & BTCF_ValidOvfl ); + assert( pCur->pBtree->db==pBt->db ); + if( pCur->aOverflow[iIdx+1] ){ nextPage = pCur->aOverflow[iIdx+1]; - } else -#endif + }else{ rc = getOverflowPage(pBt, nextPage, 0, &nextPage); + } offset -= ovflSize; }else{ /* Need to read this page properly. It contains some of the ** range of data that is being read (eOp==0) or written (eOp!=0). */ - DbPage *pDbPage; +#ifdef SQLITE_DIRECT_OVERFLOW_READ + sqlite3_file *fd; +#endif int a = amt; - rc = sqlite3PagerGet(pBt->pPager, nextPage, &pDbPage); - if( rc==SQLITE_OK ){ - aPayload = sqlite3PagerGetData(pDbPage); - nextPage = get4byte(aPayload); - if( a + offset > ovflSize ){ - a = ovflSize - offset; - } - rc = copyPayload(&aPayload[offset+4], pBuf, a, eOp, pDbPage); - sqlite3PagerUnref(pDbPage); - offset = 0; - amt -= a; - pBuf += a; - } + if( a + offset > ovflSize ){ + a = ovflSize - offset; + } + +#ifdef SQLITE_DIRECT_OVERFLOW_READ + /* If all the following are true: + ** + ** 1) this is a read operation, and + ** 2) data is required from the start of this overflow page, and + ** 3) the database is file-backed, and + ** 4) there is no open write-transaction, and + ** 5) the database is not a WAL database, + ** 6) all data from the page is being read. + ** 7) at least 4 bytes have already been read into the output buffer + ** + ** then data can be read directly from the database file into the + ** output buffer, bypassing the page-cache altogether. This speeds + ** up loading large records that span many overflow pages. + */ + if( (eOp&0x01)==0 /* (1) */ + && offset==0 /* (2) */ + && (bEnd || a==ovflSize) /* (6) */ + && pBt->inTransaction==TRANS_READ /* (4) */ + && (fd = sqlite3PagerFile(pBt->pPager))->pMethods /* (3) */ + && pBt->pPage1->aData[19]==0x01 /* (5) */ + && &pBuf[-4]>=pBufStart /* (7) */ + ){ + u8 aSave[4]; + u8 *aWrite = &pBuf[-4]; + assert( aWrite>=pBufStart ); /* hence (7) */ + memcpy(aSave, aWrite, 4); + rc = sqlite3OsRead(fd, aWrite, a+4, (i64)pBt->pageSize*(nextPage-1)); + nextPage = get4byte(aWrite); + memcpy(aWrite, aSave, 4); + }else +#endif + + { + DbPage *pDbPage; + rc = sqlite3PagerGet(pBt->pPager, nextPage, &pDbPage, + ((eOp&0x01)==0 ? PAGER_GET_READONLY : 0) + ); + if( rc==SQLITE_OK ){ + aPayload = sqlite3PagerGetData(pDbPage); + nextPage = get4byte(aPayload); + rc = copyPayload(&aPayload[offset+4], pBuf, a, (eOp&0x01), pDbPage); + sqlite3PagerUnref(pDbPage); + offset = 0; + } + } + amt -= a; + pBuf += a; } } } if( rc==SQLITE_OK && amt>0 ){ @@ -42685,11 +60202,11 @@ return rc; } /* ** Read part of the key associated with cursor pCur. Exactly -** "amt" bytes will be transfered into pBuf[]. The transfer +** "amt" bytes will be transferred into pBuf[]. The transfer ** begins at "offset". ** ** The caller must ensure that pCur is pointing to a valid row ** in the table. ** @@ -42721,11 +60238,11 @@ if ( pCur->eState==CURSOR_INVALID ){ return SQLITE_ABORT; } #endif - assert( cursorHoldsMutex(pCur) ); + assert( cursorOwnsBtShared(pCur) ); rc = restoreCursorPosition(pCur); if( rc==SQLITE_OK ){ assert( pCur->eState==CURSOR_VALID ); assert( pCur->iPage>=0 && pCur->apPage[pCur->iPage] ); assert( pCur->aiIdx[pCur->iPage]<pCur->apPage[pCur->iPage]->nCell ); @@ -42735,14 +60252,14 @@ } /* ** Return a pointer to payload information from the entry that the ** pCur cursor is pointing to. 
The pointer is to the beginning of -** the key if skipKey==0 and it points to the beginning of data if -** skipKey==1. The number of bytes of available key/data is written -** into *pAmt. If *pAmt==0, then the value returned will not be -** a valid pointer. +** the key if index btrees (pPage->intKey==0) and is the data for +** table btrees (pPage->intKey==1). The number of bytes of available +** key/data is written into *pAmt. If *pAmt==0, then the value +** returned will not be a valid pointer. ** ** This routine is an optimization. It is common for the entire key ** and data to fit on the local page and for there to be no overflow ** pages. When that is so, this routine can be used to access the ** key and data without making a copy. If the key and/or data spills @@ -42751,45 +60268,27 @@ ** ** The pointer returned by this routine looks directly into the cached ** page of the database. The data might change or move the next time ** any btree routine is called. */ -static const unsigned char *fetchPayload( +static const void *fetchPayload( BtCursor *pCur, /* Cursor pointing to entry to read from */ - int *pAmt, /* Write the number of available bytes here */ - int skipKey /* read beginning at data if this is true */ + u32 *pAmt /* Write the number of available bytes here */ ){ - unsigned char *aPayload; - MemPage *pPage; - u32 nKey; - u32 nLocal; - + u32 amt; assert( pCur!=0 && pCur->iPage>=0 && pCur->apPage[pCur->iPage]); assert( pCur->eState==CURSOR_VALID ); - assert( cursorHoldsMutex(pCur) ); - pPage = pCur->apPage[pCur->iPage]; - assert( pCur->aiIdx[pCur->iPage]<pPage->nCell ); - if( NEVER(pCur->info.nSize==0) ){ - btreeParseCell(pCur->apPage[pCur->iPage], pCur->aiIdx[pCur->iPage], - &pCur->info); - } - aPayload = pCur->info.pCell; - aPayload += pCur->info.nHeader; - if( pPage->intKey ){ - nKey = 0; - }else{ - nKey = (int)pCur->info.nKey; - } - if( skipKey ){ - aPayload += nKey; - nLocal = pCur->info.nLocal - nKey; - }else{ - nLocal = pCur->info.nLocal; - assert( nLocal<=nKey ); - } - *pAmt = nLocal; - return aPayload; + assert( sqlite3_mutex_held(pCur->pBtree->db->mutex) ); + assert( cursorOwnsBtShared(pCur) ); + assert( pCur->aiIdx[pCur->iPage]<pCur->apPage[pCur->iPage]->nCell ); + assert( pCur->info.nSize>0 ); + assert( pCur->info.pPayload>pCur->apPage[pCur->iPage]->aData || CORRUPT_DB ); + assert( pCur->info.pPayload<pCur->apPage[pCur->iPage]->aDataEnd ||CORRUPT_DB); + amt = (int)(pCur->apPage[pCur->iPage]->aDataEnd - pCur->info.pPayload); + if( pCur->info.nLocal<amt ) amt = pCur->info.nLocal; + *pAmt = amt; + return (void*)pCur->info.pPayload; } /* ** For the entry that cursor pCur is point to, return as @@ -42803,27 +60302,15 @@ ** this routine. ** ** These routines is used to get quick access to key and data ** in the common case where no overflow pages are used. 
*/ -SQLITE_PRIVATE const void *sqlite3BtreeKeyFetch(BtCursor *pCur, int *pAmt){ - const void *p = 0; - assert( sqlite3_mutex_held(pCur->pBtree->db->mutex) ); - assert( cursorHoldsMutex(pCur) ); - if( ALWAYS(pCur->eState==CURSOR_VALID) ){ - p = (const void*)fetchPayload(pCur, pAmt, 0); - } - return p; -} -SQLITE_PRIVATE const void *sqlite3BtreeDataFetch(BtCursor *pCur, int *pAmt){ - const void *p = 0; - assert( sqlite3_mutex_held(pCur->pBtree->db->mutex) ); - assert( cursorHoldsMutex(pCur) ); - if( ALWAYS(pCur->eState==CURSOR_VALID) ){ - p = (const void*)fetchPayload(pCur, pAmt, 1); - } - return p; +SQLITE_PRIVATE const void *sqlite3BtreeKeyFetch(BtCursor *pCur, u32 *pAmt){ + return fetchPayload(pCur, pAmt); +} +SQLITE_PRIVATE const void *sqlite3BtreeDataFetch(BtCursor *pCur, u32 *pAmt){ + return fetchPayload(pCur, pAmt); } /* ** Move the cursor down to a new child page. The newPgno argument is the @@ -42833,44 +60320,38 @@ ** the new child page does not match the flags field of the parent (i.e. ** if an intkey page appears to be the parent of a non-intkey page, or ** vice-versa). */ static int moveToChild(BtCursor *pCur, u32 newPgno){ - int rc; - int i = pCur->iPage; - MemPage *pNewPage; BtShared *pBt = pCur->pBt; - assert( cursorHoldsMutex(pCur) ); + assert( cursorOwnsBtShared(pCur) ); assert( pCur->eState==CURSOR_VALID ); assert( pCur->iPage<BTCURSOR_MAX_DEPTH ); + assert( pCur->iPage>=0 ); if( pCur->iPage>=(BTCURSOR_MAX_DEPTH-1) ){ return SQLITE_CORRUPT_BKPT; } - rc = getAndInitPage(pBt, newPgno, &pNewPage); - if( rc ) return rc; - pCur->apPage[i+1] = pNewPage; - pCur->aiIdx[i+1] = 0; + pCur->info.nSize = 0; + pCur->curFlags &= ~(BTCF_ValidNKey|BTCF_ValidOvfl); pCur->iPage++; - - pCur->info.nSize = 0; - pCur->validNKey = 0; - if( pNewPage->nCell<1 || pNewPage->intKey!=pCur->apPage[i]->intKey ){ - return SQLITE_CORRUPT_BKPT; - } - return SQLITE_OK; + pCur->aiIdx[pCur->iPage] = 0; + return getAndInitPage(pBt, newPgno, &pCur->apPage[pCur->iPage], + pCur, pCur->curPagerFlags); } -#ifndef NDEBUG +#if SQLITE_DEBUG /* ** Page pParent is an internal (non-leaf) tree page. This function ** asserts that page number iChild is the left-child if the iIdx'th ** cell in page pParent. Or, if iIdx is equal to the total number of ** cells in pParent, that page number iChild is the right-child of ** the page. */ static void assertParentIndex(MemPage *pParent, int iIdx, Pgno iChild){ + if( CORRUPT_DB ) return; /* The conditions tested below might not be true + ** in a corrupt database */ assert( iIdx<=pParent->nCell ); if( iIdx==pParent->nCell ){ assert( get4byte(&pParent->aData[pParent->hdrOffset+8])==iChild ); }else{ assert( get4byte(findCell(pParent, iIdx))==iChild ); @@ -42887,23 +60368,23 @@ ** to the page we are coming from. If we are coming from the ** right-most child page then pCur->idx is set to one more than ** the largest cell index. 
*/ static void moveToParent(BtCursor *pCur){ - assert( cursorHoldsMutex(pCur) ); + assert( cursorOwnsBtShared(pCur) ); assert( pCur->eState==CURSOR_VALID ); assert( pCur->iPage>0 ); assert( pCur->apPage[pCur->iPage] ); assertParentIndex( pCur->apPage[pCur->iPage-1], pCur->aiIdx[pCur->iPage-1], pCur->apPage[pCur->iPage]->pgno ); - releasePage(pCur->apPage[pCur->iPage]); - pCur->iPage--; + testcase( pCur->aiIdx[pCur->iPage-1] > pCur->apPage[pCur->iPage-1]->nCell ); pCur->info.nSize = 0; - pCur->validNKey = 0; + pCur->curFlags &= ~(BTCF_ValidNKey|BTCF_ValidOvfl); + releasePageNotNull(pCur->apPage[pCur->iPage--]); } /* ** Move the cursor to point to the root page of its b-tree structure. ** @@ -42926,14 +60407,12 @@ ** b-tree). */ static int moveToRoot(BtCursor *pCur){ MemPage *pRoot; int rc = SQLITE_OK; - Btree *p = pCur->pBtree; - BtShared *pBt = p->pBt; - assert( cursorHoldsMutex(pCur) ); + assert( cursorOwnsBtShared(pCur) ); assert( CURSOR_INVALID < CURSOR_REQUIRESEEK ); assert( CURSOR_VALID < CURSOR_REQUIRESEEK ); assert( CURSOR_FAULT > CURSOR_REQUIRESEEK ); if( pCur->eState>=CURSOR_REQUIRESEEK ){ if( pCur->eState==CURSOR_FAULT ){ @@ -42942,56 +60421,60 @@ } sqlite3BtreeClearCursor(pCur); } if( pCur->iPage>=0 ){ - int i; - for(i=1; i<=pCur->iPage; i++){ - releasePage(pCur->apPage[i]); + while( pCur->iPage ){ + assert( pCur->apPage[pCur->iPage]!=0 ); + releasePageNotNull(pCur->apPage[pCur->iPage--]); } - pCur->iPage = 0; + }else if( pCur->pgnoRoot==0 ){ + pCur->eState = CURSOR_INVALID; + return SQLITE_OK; }else{ - rc = getAndInitPage(pBt, pCur->pgnoRoot, &pCur->apPage[0]); + assert( pCur->iPage==(-1) ); + rc = getAndInitPage(pCur->pBtree->pBt, pCur->pgnoRoot, &pCur->apPage[0], + 0, pCur->curPagerFlags); if( rc!=SQLITE_OK ){ pCur->eState = CURSOR_INVALID; return rc; } pCur->iPage = 0; - - /* If pCur->pKeyInfo is not NULL, then the caller that opened this cursor - ** expected to open it on an index b-tree. Otherwise, if pKeyInfo is - ** NULL, the caller expects a table b-tree. If this is not the case, - ** return an SQLITE_CORRUPT error. */ - assert( pCur->apPage[0]->intKey==1 || pCur->apPage[0]->intKey==0 ); - if( (pCur->pKeyInfo==0)!=pCur->apPage[0]->intKey ){ - return SQLITE_CORRUPT_BKPT; - } - } - - /* Assert that the root page is of the correct type. This must be the - ** case as the call to this function that loaded the root-page (either - ** this call or a previous invocation) would have detected corruption - ** if the assumption were not true, and it is not possible for the flags - ** byte to have been modified while this cursor is holding a reference - ** to the page. */ + pCur->curIntKey = pCur->apPage[0]->intKey; + } pRoot = pCur->apPage[0]; assert( pRoot->pgno==pCur->pgnoRoot ); - assert( pRoot->isInit && (pCur->pKeyInfo==0)==pRoot->intKey ); + + /* If pCur->pKeyInfo is not NULL, then the caller that opened this cursor + ** expected to open it on an index b-tree. Otherwise, if pKeyInfo is + ** NULL, the caller expects a table b-tree. If this is not the case, + ** return an SQLITE_CORRUPT error. + ** + ** Earlier versions of SQLite assumed that this test could not fail + ** if the root page was already loaded when this function was called (i.e. + ** if pCur->iPage>=0). But this is not so if the database is corrupted + ** in such a way that page pRoot is linked into a second b-tree table + ** (or the freelist). 
*/ + assert( pRoot->intKey==1 || pRoot->intKey==0 ); + if( pRoot->isInit==0 || (pCur->pKeyInfo==0)!=pRoot->intKey ){ + return SQLITE_CORRUPT_BKPT; + } pCur->aiIdx[0] = 0; pCur->info.nSize = 0; - pCur->atLast = 0; - pCur->validNKey = 0; + pCur->curFlags &= ~(BTCF_AtLast|BTCF_ValidNKey|BTCF_ValidOvfl); - if( pRoot->nCell==0 && !pRoot->leaf ){ + if( pRoot->nCell>0 ){ + pCur->eState = CURSOR_VALID; + }else if( !pRoot->leaf ){ Pgno subpage; if( pRoot->pgno!=1 ) return SQLITE_CORRUPT_BKPT; subpage = get4byte(&pRoot->aData[pRoot->hdrOffset+8]); pCur->eState = CURSOR_VALID; rc = moveToChild(pCur, subpage); }else{ - pCur->eState = ((pRoot->nCell>0)?CURSOR_VALID:CURSOR_INVALID); + pCur->eState = CURSOR_INVALID; } return rc; } /* @@ -43004,11 +60487,11 @@ static int moveToLeftmost(BtCursor *pCur){ Pgno pgno; int rc = SQLITE_OK; MemPage *pPage; - assert( cursorHoldsMutex(pCur) ); + assert( cursorOwnsBtShared(pCur) ); assert( pCur->eState==CURSOR_VALID ); while( rc==SQLITE_OK && !(pPage = pCur->apPage[pCur->iPage])->leaf ){ assert( pCur->aiIdx[pCur->iPage]<pPage->nCell ); pgno = get4byte(findCell(pPage, pCur->aiIdx[pCur->iPage])); rc = moveToChild(pCur, pgno); @@ -43029,40 +60512,38 @@ static int moveToRightmost(BtCursor *pCur){ Pgno pgno; int rc = SQLITE_OK; MemPage *pPage = 0; - assert( cursorHoldsMutex(pCur) ); + assert( cursorOwnsBtShared(pCur) ); assert( pCur->eState==CURSOR_VALID ); - while( rc==SQLITE_OK && !(pPage = pCur->apPage[pCur->iPage])->leaf ){ + while( !(pPage = pCur->apPage[pCur->iPage])->leaf ){ pgno = get4byte(&pPage->aData[pPage->hdrOffset+8]); pCur->aiIdx[pCur->iPage] = pPage->nCell; rc = moveToChild(pCur, pgno); + if( rc ) return rc; } - if( rc==SQLITE_OK ){ - pCur->aiIdx[pCur->iPage] = pPage->nCell-1; - pCur->info.nSize = 0; - pCur->validNKey = 0; - } - return rc; + pCur->aiIdx[pCur->iPage] = pPage->nCell-1; + assert( pCur->info.nSize==0 ); + assert( (pCur->curFlags & BTCF_ValidNKey)==0 ); + return SQLITE_OK; } /* Move the cursor to the first entry in the table. Return SQLITE_OK ** on success. Set *pRes to 0 if the cursor actually points to something ** or set *pRes to 1 if the table is empty. */ SQLITE_PRIVATE int sqlite3BtreeFirst(BtCursor *pCur, int *pRes){ int rc; - assert( cursorHoldsMutex(pCur) ); + assert( cursorOwnsBtShared(pCur) ); assert( sqlite3_mutex_held(pCur->pBtree->db->mutex) ); rc = moveToRoot(pCur); if( rc==SQLITE_OK ){ if( pCur->eState==CURSOR_INVALID ){ - assert( pCur->apPage[pCur->iPage]->nCell==0 ); + assert( pCur->pgnoRoot==0 || pCur->apPage[pCur->iPage]->nCell==0 ); *pRes = 1; - rc = SQLITE_OK; }else{ assert( pCur->apPage[pCur->iPage]->nCell>0 ); *pRes = 0; rc = moveToLeftmost(pCur); } @@ -43075,15 +60556,15 @@ ** or set *pRes to 1 if the table is empty. */ SQLITE_PRIVATE int sqlite3BtreeLast(BtCursor *pCur, int *pRes){ int rc; - assert( cursorHoldsMutex(pCur) ); + assert( cursorOwnsBtShared(pCur) ); assert( sqlite3_mutex_held(pCur->pBtree->db->mutex) ); /* If the cursor already points to the last entry, this is a no-op. */ - if( CURSOR_VALID==pCur->eState && pCur->atLast ){ + if( CURSOR_VALID==pCur->eState && (pCur->curFlags & BTCF_AtLast)!=0 ){ #ifdef SQLITE_DEBUG /* This block serves to assert() that the cursor really does point ** to the last entry in the b-tree. 
*/ int ii; for(ii=0; ii<pCur->iPage; ii++){ @@ -43096,17 +60577,22 @@ } rc = moveToRoot(pCur); if( rc==SQLITE_OK ){ if( CURSOR_INVALID==pCur->eState ){ - assert( pCur->apPage[pCur->iPage]->nCell==0 ); + assert( pCur->pgnoRoot==0 || pCur->apPage[pCur->iPage]->nCell==0 ); *pRes = 1; }else{ assert( pCur->eState==CURSOR_VALID ); *pRes = 0; rc = moveToRightmost(pCur); - pCur->atLast = rc==SQLITE_OK ?1:0; + if( rc==SQLITE_OK ){ + pCur->curFlags |= BTCF_AtLast; + }else{ + pCur->curFlags &= ~BTCF_AtLast; + } + } } return rc; } @@ -43135,58 +60621,73 @@ ** exactly matches intKey/pIdxKey. ** ** *pRes>0 The cursor is left pointing at an entry that ** is larger than intKey/pIdxKey. ** +** For index tables, the pIdxKey->eqSeen field is set to 1 if there +** exists an entry in the table that exactly matches pIdxKey. */ SQLITE_PRIVATE int sqlite3BtreeMovetoUnpacked( BtCursor *pCur, /* The cursor to be moved */ UnpackedRecord *pIdxKey, /* Unpacked index key */ i64 intKey, /* The table key */ int biasRight, /* If true, bias the search to the high end */ int *pRes /* Write search results here */ ){ int rc; + RecordCompare xRecordCompare; - assert( cursorHoldsMutex(pCur) ); + assert( cursorOwnsBtShared(pCur) ); assert( sqlite3_mutex_held(pCur->pBtree->db->mutex) ); assert( pRes ); assert( (pIdxKey==0)==(pCur->pKeyInfo==0) ); /* If the cursor is already positioned at the point we are trying ** to move to, then just return without doing any work */ - if( pCur->eState==CURSOR_VALID && pCur->validNKey - && pCur->apPage[0]->intKey + if( pCur->eState==CURSOR_VALID && (pCur->curFlags & BTCF_ValidNKey)!=0 + && pCur->curIntKey ){ if( pCur->info.nKey==intKey ){ *pRes = 0; return SQLITE_OK; } - if( pCur->atLast && pCur->info.nKey<intKey ){ + if( (pCur->curFlags & BTCF_AtLast)!=0 && pCur->info.nKey<intKey ){ *pRes = -1; return SQLITE_OK; } } + + if( pIdxKey ){ + xRecordCompare = sqlite3VdbeFindCompare(pIdxKey); + pIdxKey->errCode = 0; + assert( pIdxKey->default_rc==1 + || pIdxKey->default_rc==0 + || pIdxKey->default_rc==-1 + ); + }else{ + xRecordCompare = 0; /* All keys are integers */ + } rc = moveToRoot(pCur); if( rc ){ return rc; } - assert( pCur->apPage[pCur->iPage] ); - assert( pCur->apPage[pCur->iPage]->isInit ); - assert( pCur->apPage[pCur->iPage]->nCell>0 || pCur->eState==CURSOR_INVALID ); + assert( pCur->pgnoRoot==0 || pCur->apPage[pCur->iPage] ); + assert( pCur->pgnoRoot==0 || pCur->apPage[pCur->iPage]->isInit ); + assert( pCur->eState==CURSOR_INVALID || pCur->apPage[pCur->iPage]->nCell>0 ); if( pCur->eState==CURSOR_INVALID ){ *pRes = -1; - assert( pCur->apPage[pCur->iPage]->nCell==0 ); + assert( pCur->pgnoRoot==0 || pCur->apPage[pCur->iPage]->nCell==0 ); return SQLITE_OK; } - assert( pCur->apPage[0]->intKey || pIdxKey ); + assert( pCur->apPage[0]->intKey==pCur->curIntKey ); + assert( pCur->curIntKey || pIdxKey ); for(;;){ - int lwr, upr; + int lwr, upr, idx, c; Pgno chldPg; MemPage *pPage = pCur->apPage[pCur->iPage]; - int c; + u8 *pCell; /* Pointer to current cell in pPage */ /* pPage->nCell must be greater than zero. If this is the root-page ** the cursor would have been INVALID above and this for(;;) loop ** not run. If this is not the root-page, then the moveToChild() routine ** would have already detected db corruption. Similarly, pPage must @@ -43194,125 +60695,152 @@ ** a moveToChild() or moveToRoot() call would have detected corruption. 
*/ assert( pPage->nCell>0 ); assert( pPage->intKey==(pIdxKey==0) ); lwr = 0; upr = pPage->nCell-1; - if( biasRight ){ - pCur->aiIdx[pCur->iPage] = (u16)upr; - }else{ - pCur->aiIdx[pCur->iPage] = (u16)((upr+lwr)/2); - } - for(;;){ - int idx = pCur->aiIdx[pCur->iPage]; /* Index of current cell in pPage */ - u8 *pCell; /* Pointer to current cell in pPage */ - - pCur->info.nSize = 0; - pCell = findCell(pPage, idx) + pPage->childPtrSize; - if( pPage->intKey ){ + assert( biasRight==0 || biasRight==1 ); + idx = upr>>(1-biasRight); /* idx = biasRight ? upr : (lwr+upr)/2; */ + pCur->aiIdx[pCur->iPage] = (u16)idx; + if( xRecordCompare==0 ){ + for(;;){ i64 nCellKey; - if( pPage->hasData ){ - u32 dummy; - pCell += getVarint32(pCell, dummy); + pCell = findCellPastPtr(pPage, idx); + if( pPage->intKeyLeaf ){ + while( 0x80 <= *(pCell++) ){ + if( pCell>=pPage->aDataEnd ) return SQLITE_CORRUPT_BKPT; + } } getVarint(pCell, (u64*)&nCellKey); - if( nCellKey==intKey ){ - c = 0; - }else if( nCellKey<intKey ){ - c = -1; - }else{ - assert( nCellKey>intKey ); - c = +1; - } - pCur->validNKey = 1; - pCur->info.nKey = nCellKey; - }else{ - /* The maximum supported page-size is 32768 bytes. This means that + if( nCellKey<intKey ){ + lwr = idx+1; + if( lwr>upr ){ c = -1; break; } + }else if( nCellKey>intKey ){ + upr = idx-1; + if( lwr>upr ){ c = +1; break; } + }else{ + assert( nCellKey==intKey ); + pCur->curFlags |= BTCF_ValidNKey; + pCur->info.nKey = nCellKey; + pCur->aiIdx[pCur->iPage] = (u16)idx; + if( !pPage->leaf ){ + lwr = idx; + goto moveto_next_layer; + }else{ + *pRes = 0; + rc = SQLITE_OK; + goto moveto_finish; + } + } + assert( lwr+upr>=0 ); + idx = (lwr+upr)>>1; /* idx = (lwr+upr)/2; */ + } + }else{ + for(;;){ + int nCell; /* Size of the pCell cell in bytes */ + pCell = findCellPastPtr(pPage, idx); + + /* The maximum supported page-size is 65536 bytes. This means that ** the maximum number of record bytes stored on an index B-Tree - ** page is at most 8198 bytes, which may be stored as a 2-byte + ** page is less than 16384 bytes and may be stored as a 2-byte ** varint. This information is used to attempt to avoid parsing ** the entire cell by checking for the cases where the record is ** stored entirely within the b-tree page by inspecting the first ** 2 bytes of the cell. */ - int nCell = pCell[0]; - if( !(nCell & 0x80) && nCell<=pPage->maxLocal ){ + nCell = pCell[0]; + if( nCell<=pPage->max1bytePayload ){ /* This branch runs if the record-size field of the cell is a ** single byte varint and the record fits entirely on the main ** b-tree page. */ - c = sqlite3VdbeRecordCompare(nCell, (void*)&pCell[1], pIdxKey); + testcase( pCell+nCell+1==pPage->aDataEnd ); + c = xRecordCompare(nCell, (void*)&pCell[1], pIdxKey); }else if( !(pCell[1] & 0x80) && (nCell = ((nCell&0x7f)<<7) + pCell[1])<=pPage->maxLocal ){ /* The record-size field is a 2 byte varint and the record ** fits entirely on the main b-tree page. */ - c = sqlite3VdbeRecordCompare(nCell, (void*)&pCell[2], pIdxKey); + testcase( pCell+nCell+2==pPage->aDataEnd ); + c = xRecordCompare(nCell, (void*)&pCell[2], pIdxKey); }else{ /* The record flows over onto one or more overflow pages. In ** this case the whole cell needs to be parsed, a buffer allocated ** and accessPayload() used to retrieve the record into the - ** buffer before VdbeRecordCompare() can be called. */ + ** buffer before VdbeRecordCompare() can be called. + ** + ** If the record is corrupt, the xRecordCompare routine may read + ** up to two varints past the end of the buffer. 
An extra 18 + ** bytes of padding is allocated at the end of the buffer in + ** case this happens. */ void *pCellKey; u8 * const pCellBody = pCell - pPage->childPtrSize; - btreeParseCellPtr(pPage, pCellBody, &pCur->info); + pPage->xParseCell(pPage, pCellBody, &pCur->info); nCell = (int)pCur->info.nKey; - pCellKey = sqlite3Malloc( nCell ); + testcase( nCell<0 ); /* True if key size is 2^32 or more */ + testcase( nCell==0 ); /* Invalid key size: 0x80 0x80 0x00 */ + testcase( nCell==1 ); /* Invalid key size: 0x80 0x80 0x01 */ + testcase( nCell==2 ); /* Minimum legal index key size */ + if( nCell<2 ){ + rc = SQLITE_CORRUPT_BKPT; + goto moveto_finish; + } + pCellKey = sqlite3Malloc( nCell+18 ); if( pCellKey==0 ){ rc = SQLITE_NOMEM; goto moveto_finish; } - rc = accessPayload(pCur, 0, nCell, (unsigned char*)pCellKey, 0); + pCur->aiIdx[pCur->iPage] = (u16)idx; + rc = accessPayload(pCur, 0, nCell, (unsigned char*)pCellKey, 2); if( rc ){ sqlite3_free(pCellKey); goto moveto_finish; } - c = sqlite3VdbeRecordCompare(nCell, pCellKey, pIdxKey); + c = xRecordCompare(nCell, pCellKey, pIdxKey); sqlite3_free(pCellKey); } - } - if( c==0 ){ - if( pPage->intKey && !pPage->leaf ){ - lwr = idx; - upr = lwr - 1; - break; + assert( + (pIdxKey->errCode!=SQLITE_CORRUPT || c==0) + && (pIdxKey->errCode!=SQLITE_NOMEM || pCur->pBtree->db->mallocFailed) + ); + if( c<0 ){ + lwr = idx+1; + }else if( c>0 ){ + upr = idx-1; }else{ + assert( c==0 ); *pRes = 0; rc = SQLITE_OK; + pCur->aiIdx[pCur->iPage] = (u16)idx; + if( pIdxKey->errCode ) rc = SQLITE_CORRUPT; goto moveto_finish; } - } - if( c<0 ){ - lwr = idx+1; - }else{ - upr = idx-1; - } - if( lwr>upr ){ - break; - } - pCur->aiIdx[pCur->iPage] = (u16)((lwr+upr)/2); - } - assert( lwr==upr+1 ); + if( lwr>upr ) break; + assert( lwr+upr>=0 ); + idx = (lwr+upr)>>1; /* idx = (lwr+upr)/2 */ + } + } + assert( lwr==upr+1 || (pPage->intKey && !pPage->leaf) ); assert( pPage->isInit ); if( pPage->leaf ){ - chldPg = 0; - }else if( lwr>=pPage->nCell ){ + assert( pCur->aiIdx[pCur->iPage]<pCur->apPage[pCur->iPage]->nCell ); + pCur->aiIdx[pCur->iPage] = (u16)idx; + *pRes = c; + rc = SQLITE_OK; + goto moveto_finish; + } +moveto_next_layer: + if( lwr>=pPage->nCell ){ chldPg = get4byte(&pPage->aData[pPage->hdrOffset+8]); }else{ chldPg = get4byte(findCell(pPage, lwr)); } - if( chldPg==0 ){ - assert( pCur->aiIdx[pCur->iPage]<pCur->apPage[pCur->iPage]->nCell ); - *pRes = c; - rc = SQLITE_OK; - goto moveto_finish; - } pCur->aiIdx[pCur->iPage] = (u16)lwr; - pCur->info.nSize = 0; - pCur->validNKey = 0; rc = moveToChild(pCur, chldPg); - if( rc ) goto moveto_finish; + if( rc ) break; } moveto_finish: + pCur->info.nSize = 0; + pCur->curFlags &= ~(BTCF_ValidNKey|BTCF_ValidOvfl); return rc; } /* @@ -43333,47 +60861,71 @@ /* ** Advance the cursor to the next entry in the database. If ** successful then set *pRes=0. If the cursor ** was already pointing to the last entry in the database before ** this routine was called, then set *pRes=1. +** +** The main entry point is sqlite3BtreeNext(). That routine is optimized +** for the common case of merely incrementing the cell counter BtCursor.aiIdx +** to the next cell on the current page. The (slower) btreeNext() helper +** routine is called when it is necessary to move to a different page or +** to restore the cursor. +** +** The calling function will set *pRes to 0 or 1. The initial *pRes value +** will be 1 if the cursor being stepped corresponds to an SQL index and +** if this routine could have been skipped if that SQL index had been +** a unique index. 
Otherwise the caller will have set *pRes to zero. +** Zero is the common case. The btree implementation is free to use the +** initial *pRes value as a hint to improve performance, but the current +** SQLite btree implementation does not. (Note that the comdb2 btree +** implementation does use this hint, however.) */ -SQLITE_PRIVATE int sqlite3BtreeNext(BtCursor *pCur, int *pRes){ +static SQLITE_NOINLINE int btreeNext(BtCursor *pCur, int *pRes){ int rc; int idx; MemPage *pPage; - assert( cursorHoldsMutex(pCur) ); - rc = restoreCursorPosition(pCur); - if( rc!=SQLITE_OK ){ - return rc; - } - assert( pRes!=0 ); - if( CURSOR_INVALID==pCur->eState ){ - *pRes = 1; - return SQLITE_OK; - } - if( pCur->skipNext>0 ){ - pCur->skipNext = 0; - *pRes = 0; - return SQLITE_OK; - } - pCur->skipNext = 0; + assert( cursorOwnsBtShared(pCur) ); + assert( pCur->skipNext==0 || pCur->eState!=CURSOR_VALID ); + assert( *pRes==0 ); + if( pCur->eState!=CURSOR_VALID ){ + assert( (pCur->curFlags & BTCF_ValidOvfl)==0 ); + rc = restoreCursorPosition(pCur); + if( rc!=SQLITE_OK ){ + return rc; + } + if( CURSOR_INVALID==pCur->eState ){ + *pRes = 1; + return SQLITE_OK; + } + if( pCur->skipNext ){ + assert( pCur->eState==CURSOR_VALID || pCur->eState==CURSOR_SKIPNEXT ); + pCur->eState = CURSOR_VALID; + if( pCur->skipNext>0 ){ + pCur->skipNext = 0; + return SQLITE_OK; + } + pCur->skipNext = 0; + } + } pPage = pCur->apPage[pCur->iPage]; idx = ++pCur->aiIdx[pCur->iPage]; assert( pPage->isInit ); - assert( idx<=pPage->nCell ); - pCur->info.nSize = 0; - pCur->validNKey = 0; + /* If the database file is corrupt, it is possible for the value of idx + ** to be invalid here. This can only occur if a second cursor modifies + ** the page while cursor pCur is holding a reference to it. Which can + ** only happen if the database is corrupt in such a way as to link the + ** page into more than one b-tree structure. */ + testcase( idx>pPage->nCell ); + if( idx>=pPage->nCell ){ if( !pPage->leaf ){ rc = moveToChild(pCur, get4byte(&pPage->aData[pPage->hdrOffset+8])); if( rc ) return rc; - rc = moveToLeftmost(pCur); - *pRes = 0; - return rc; + return moveToLeftmost(pCur); } do{ if( pCur->iPage==0 ){ *pRes = 1; pCur->eState = CURSOR_INVALID; @@ -43380,62 +60932,101 @@ return SQLITE_OK; } moveToParent(pCur); pPage = pCur->apPage[pCur->iPage]; }while( pCur->aiIdx[pCur->iPage]>=pPage->nCell ); - *pRes = 0; if( pPage->intKey ){ - rc = sqlite3BtreeNext(pCur, pRes); + return sqlite3BtreeNext(pCur, pRes); }else{ - rc = SQLITE_OK; + return SQLITE_OK; } - return rc; } + if( pPage->leaf ){ + return SQLITE_OK; + }else{ + return moveToLeftmost(pCur); + } +} +SQLITE_PRIVATE int sqlite3BtreeNext(BtCursor *pCur, int *pRes){ + MemPage *pPage; + assert( cursorOwnsBtShared(pCur) ); + assert( pRes!=0 ); + assert( *pRes==0 || *pRes==1 ); + assert( pCur->skipNext==0 || pCur->eState!=CURSOR_VALID ); + pCur->info.nSize = 0; + pCur->curFlags &= ~(BTCF_ValidNKey|BTCF_ValidOvfl); *pRes = 0; + if( pCur->eState!=CURSOR_VALID ) return btreeNext(pCur, pRes); + pPage = pCur->apPage[pCur->iPage]; + if( (++pCur->aiIdx[pCur->iPage])>=pPage->nCell ){ + pCur->aiIdx[pCur->iPage]--; + return btreeNext(pCur, pRes); + } if( pPage->leaf ){ return SQLITE_OK; + }else{ + return moveToLeftmost(pCur); } - rc = moveToLeftmost(pCur); - return rc; } - /* ** Step the cursor to the back to the previous entry in the database. If ** successful then set *pRes=0. If the cursor ** was already pointing to the first entry in the database before ** this routine was called, then set *pRes=1. 
+** +** The main entry point is sqlite3BtreePrevious(). That routine is optimized +** for the common case of merely decrementing the cell counter BtCursor.aiIdx +** to the previous cell on the current page. The (slower) btreePrevious() +** helper routine is called when it is necessary to move to a different page +** or to restore the cursor. +** +** The calling function will set *pRes to 0 or 1. The initial *pRes value +** will be 1 if the cursor being stepped corresponds to an SQL index and +** if this routine could have been skipped if that SQL index had been +** a unique index. Otherwise the caller will have set *pRes to zero. +** Zero is the common case. The btree implementation is free to use the +** initial *pRes value as a hint to improve performance, but the current +** SQLite btree implementation does not. (Note that the comdb2 btree +** implementation does use this hint, however.) */ -SQLITE_PRIVATE int sqlite3BtreePrevious(BtCursor *pCur, int *pRes){ +static SQLITE_NOINLINE int btreePrevious(BtCursor *pCur, int *pRes){ int rc; MemPage *pPage; - assert( cursorHoldsMutex(pCur) ); - rc = restoreCursorPosition(pCur); - if( rc!=SQLITE_OK ){ - return rc; - } - pCur->atLast = 0; - if( CURSOR_INVALID==pCur->eState ){ - *pRes = 1; - return SQLITE_OK; - } - if( pCur->skipNext<0 ){ - pCur->skipNext = 0; - *pRes = 0; - return SQLITE_OK; - } - pCur->skipNext = 0; + assert( cursorOwnsBtShared(pCur) ); + assert( pRes!=0 ); + assert( *pRes==0 ); + assert( pCur->skipNext==0 || pCur->eState!=CURSOR_VALID ); + assert( (pCur->curFlags & (BTCF_AtLast|BTCF_ValidOvfl|BTCF_ValidNKey))==0 ); + assert( pCur->info.nSize==0 ); + if( pCur->eState!=CURSOR_VALID ){ + rc = restoreCursorPosition(pCur); + if( rc!=SQLITE_OK ){ + return rc; + } + if( CURSOR_INVALID==pCur->eState ){ + *pRes = 1; + return SQLITE_OK; + } + if( pCur->skipNext ){ + assert( pCur->eState==CURSOR_VALID || pCur->eState==CURSOR_SKIPNEXT ); + pCur->eState = CURSOR_VALID; + if( pCur->skipNext<0 ){ + pCur->skipNext = 0; + return SQLITE_OK; + } + pCur->skipNext = 0; + } + } pPage = pCur->apPage[pCur->iPage]; assert( pPage->isInit ); if( !pPage->leaf ){ int idx = pCur->aiIdx[pCur->iPage]; rc = moveToChild(pCur, get4byte(findCell(pPage, idx))); - if( rc ){ - return rc; - } + if( rc ) return rc; rc = moveToRightmost(pCur); }else{ while( pCur->aiIdx[pCur->iPage]==0 ){ if( pCur->iPage==0 ){ pCur->eState = CURSOR_INVALID; @@ -43442,23 +61033,39 @@ *pRes = 1; return SQLITE_OK; } moveToParent(pCur); } - pCur->info.nSize = 0; - pCur->validNKey = 0; + assert( pCur->info.nSize==0 ); + assert( (pCur->curFlags & (BTCF_ValidNKey|BTCF_ValidOvfl))==0 ); pCur->aiIdx[pCur->iPage]--; pPage = pCur->apPage[pCur->iPage]; if( pPage->intKey && !pPage->leaf ){ rc = sqlite3BtreePrevious(pCur, pRes); }else{ rc = SQLITE_OK; } } + return rc; +} +SQLITE_PRIVATE int sqlite3BtreePrevious(BtCursor *pCur, int *pRes){ + assert( cursorOwnsBtShared(pCur) ); + assert( pRes!=0 ); + assert( *pRes==0 || *pRes==1 ); + assert( pCur->skipNext==0 || pCur->eState!=CURSOR_VALID ); *pRes = 0; - return rc; + pCur->curFlags &= ~(BTCF_AtLast|BTCF_ValidOvfl|BTCF_ValidNKey); + pCur->info.nSize = 0; + if( pCur->eState!=CURSOR_VALID + || pCur->aiIdx[pCur->iPage]==0 + || pCur->apPage[pCur->iPage]->leaf==0 + ){ + return btreePrevious(pCur, pRes); + } + pCur->aiIdx[pCur->iPage]--; + return SQLITE_OK; } /* ** Allocate a new page from the database file. ** @@ -43466,28 +61073,29 @@ ** has already been called on the new page.) 
The new page has also ** been referenced and the calling routine is responsible for calling ** sqlite3PagerUnref() on the new page when it is done. ** ** SQLITE_OK is returned on success. Any other return value indicates -** an error. *ppPage and *pPgno are undefined in the event of an error. -** Do not invoke sqlite3PagerUnref() on *ppPage if an error is returned. +** an error. *ppPage is set to NULL in the event of an error. ** -** If the "nearby" parameter is not 0, then a (feeble) effort is made to +** If the "nearby" parameter is not 0, then an effort is made to ** locate a page close to the page number "nearby". This can be used in an ** attempt to keep related pages close to each other in the database file, ** which in turn can make database access faster. ** -** If the "exact" parameter is not 0, and the page-number nearby exists -** anywhere on the free-list, then it is guarenteed to be returned. This -** is only used by auto-vacuum databases when allocating a new table. +** If the eMode parameter is BTALLOC_EXACT and the nearby page exists +** anywhere on the free-list, then it is guaranteed to be returned. If +** eMode is BTALLOC_LT then the page returned will be less than or equal +** to nearby if any such page exists. If eMode is BTALLOC_ANY then there +** are no restrictions on which page is returned. */ static int allocateBtreePage( - BtShared *pBt, - MemPage **ppPage, - Pgno *pPgno, - Pgno nearby, - u8 exact + BtShared *pBt, /* The btree */ + MemPage **ppPage, /* Store pointer to the allocated page here */ + Pgno *pPgno, /* Store the page number here */ + Pgno nearby, /* Search for a page near this one */ + u8 eMode /* BTALLOC_EXACT, BTALLOC_LT, or BTALLOC_ANY */ ){ MemPage *pPage1; int rc; u32 n; /* Number of pages on the freelist */ u32 k; /* Number of leaves on the trunk of the freelist */ @@ -43494,37 +61102,44 @@ MemPage *pTrunk = 0; MemPage *pPrevTrunk = 0; Pgno mxPage; /* Total size of the database file */ assert( sqlite3_mutex_held(pBt->mutex) ); + assert( eMode==BTALLOC_ANY || (nearby>0 && IfNotOmitAV(pBt->autoVacuum)) ); pPage1 = pBt->pPage1; mxPage = btreePagecount(pBt); + /* EVIDENCE-OF: R-05119-02637 The 4-byte big-endian integer at offset 36 + ** stores stores the total number of pages on the freelist. */ n = get4byte(&pPage1->aData[36]); testcase( n==mxPage-1 ); if( n>=mxPage ){ return SQLITE_CORRUPT_BKPT; } if( n>0 ){ /* There are pages on the freelist. Reuse one of those pages. */ Pgno iTrunk; u8 searchList = 0; /* If the free-list must be searched for 'nearby' */ + u32 nSearch = 0; /* Count of the number of search attempts */ - /* If the 'exact' parameter was true and a query of the pointer-map + /* If eMode==BTALLOC_EXACT and a query of the pointer-map ** shows that the page 'nearby' is somewhere on the free-list, then ** the entire-list will be searched for that page. */ #ifndef SQLITE_OMIT_AUTOVACUUM - if( exact && nearby<=mxPage ){ - u8 eType; - assert( nearby>0 ); - assert( pBt->autoVacuum ); - rc = ptrmapGet(pBt, nearby, &eType, 0); - if( rc ) return rc; - if( eType==PTRMAP_FREEPAGE ){ - searchList = 1; - } - *pPgno = nearby; + if( eMode==BTALLOC_EXACT ){ + if( nearby<=mxPage ){ + u8 eType; + assert( nearby>0 ); + assert( pBt->autoVacuum ); + rc = ptrmapGet(pBt, nearby, &eType, 0); + if( rc ) return rc; + if( eType==PTRMAP_FREEPAGE ){ + searchList = 1; + } + } + }else if( eMode==BTALLOC_LE ){ + searchList = 1; } #endif /* Decrement the free-list count by 1. Set iTrunk to the index of the ** first free-list trunk page. iPrevTrunk is initially 1. 
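
For orientation between the two allocateBtreePage() hunks, here is a minimal self-contained sketch — not part of this checkin, using hypothetical demo_* names rather than SQLite APIs — of the two page-1 header fields that the code above and below consults. The offsets come from the EVIDENCE-OF comments quoted in the diff: the 4-byte big-endian integer at byte offset 36 of the 100-byte database header is the total number of pages on the freelist, and the integer at offset 32 is the page number of the first freelist trunk page (zero when the freelist is empty). In the hunks themselves the same reads appear as get4byte(&pPage1->aData[36]) and get4byte(&pPage1->aData[32]).

  #include <stdint.h>

  /* Sketch only (demo_* helpers are assumptions, not SQLite APIs): decode the
  ** two freelist fields of the 100-byte database header held on page 1. */
  static uint32_t demo_get4byte(const unsigned char *p){
    return ((uint32_t)p[0]<<24) | ((uint32_t)p[1]<<16)
         | ((uint32_t)p[2]<<8)  | (uint32_t)p[3];
  }

  typedef struct DemoFreelistHdr {
    uint32_t firstTrunk;   /* header offset 32: first trunk page, 0 if none */
    uint32_t nFreePages;   /* header offset 36: pages on the freelist       */
  } DemoFreelistHdr;

  static DemoFreelistHdr demo_read_freelist_hdr(const unsigned char *aPage1){
    DemoFreelistHdr h;
    h.firstTrunk = demo_get4byte(&aPage1[32]);
    h.nFreePages = demo_get4byte(&aPage1[36]);
    return h;
  }
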
@@ -43533,30 +61148,40 @@ if( rc ) return rc; put4byte(&pPage1->aData[36], n-1); /* The code within this loop is run only once if the 'searchList' variable ** is not true. Otherwise, it runs once for each trunk-page on the - ** free-list until the page 'nearby' is located. + ** free-list until the page 'nearby' is located (eMode==BTALLOC_EXACT) + ** or until a page less than 'nearby' is located (eMode==BTALLOC_LT) */ do { pPrevTrunk = pTrunk; if( pPrevTrunk ){ + /* EVIDENCE-OF: R-01506-11053 The first integer on a freelist trunk page + ** is the page number of the next freelist trunk page in the list or + ** zero if this is the last freelist trunk page. */ iTrunk = get4byte(&pPrevTrunk->aData[0]); }else{ + /* EVIDENCE-OF: R-59841-13798 The 4-byte big-endian integer at offset 32 + ** stores the page number of the first page of the freelist, or zero if + ** the freelist is empty. */ iTrunk = get4byte(&pPage1->aData[32]); } testcase( iTrunk==mxPage ); - if( iTrunk>mxPage ){ + if( iTrunk>mxPage || nSearch++ > n ){ rc = SQLITE_CORRUPT_BKPT; }else{ - rc = btreeGetPage(pBt, iTrunk, &pTrunk, 0); + rc = btreeGetUnusedPage(pBt, iTrunk, &pTrunk, 0); } if( rc ){ pTrunk = 0; goto end_allocate_page; } - + assert( pTrunk!=0 ); + assert( pTrunk->aData!=0 ); + /* EVIDENCE-OF: R-13523-04394 The second integer on a freelist trunk page + ** is the number of leaf page pointers to follow. */ k = get4byte(&pTrunk->aData[4]); if( k==0 && !searchList ){ /* The trunk has no leaves and the list is not being searched. ** So extract the trunk page itself and use it as the newly ** allocated page */ @@ -43573,15 +61198,17 @@ }else if( k>(u32)(pBt->usableSize/4 - 2) ){ /* Value of k is out of range. Database corruption */ rc = SQLITE_CORRUPT_BKPT; goto end_allocate_page; #ifndef SQLITE_OMIT_AUTOVACUUM - }else if( searchList && nearby==iTrunk ){ + }else if( searchList + && (nearby==iTrunk || (iTrunk<nearby && eMode==BTALLOC_LE)) + ){ /* The list is being searched and this trunk page is the page ** to allocate, regardless of whether it has leaves. */ - assert( *pPgno==iTrunk ); + *pPgno = iTrunk; *ppPage = pTrunk; searchList = 0; rc = sqlite3PagerWrite(pTrunk->pDbPage); if( rc ){ goto end_allocate_page; @@ -43588,10 +61215,14 @@ } if( k==0 ){ if( !pPrevTrunk ){ memcpy(&pPage1->aData[32], &pTrunk->aData[0], 4); }else{ + rc = sqlite3PagerWrite(pPrevTrunk->pDbPage); + if( rc!=SQLITE_OK ){ + goto end_allocate_page; + } memcpy(&pPrevTrunk->aData[0], &pTrunk->aData[0], 4); } }else{ /* The trunk page is required by the caller but it contains ** pointers to free-list leaves. 
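
Continuing the aside, the next sketch — again illustrative only, with a caller-supplied xFetch callback standing in for the pager layer rather than any real SQLite interface — walks a freelist trunk chain using only the layout stated by the EVIDENCE-OF comments in the hunk above: the first 4-byte big-endian integer on a trunk page is the page number of the next trunk page (zero on the last trunk), the second is the number of leaf page pointers that follow, and the leaf page numbers themselves start at byte offset 8. The loop guard mirrors the new nSearch check added above, which bails out once more trunk pages have been visited than the freelist claims to contain.

  #include <stdint.h>

  /* Hypothetical page-fetch callback: returns a pointer to the raw content of
  ** page pgno, or NULL if it cannot be read. Stands in for the pager layer. */
  typedef const unsigned char *(*DemoFetchPage)(void *pCtx, uint32_t pgno);

  static uint32_t demo_get4byte(const unsigned char *p){  /* as in the previous sketch */
    return ((uint32_t)p[0]<<24) | ((uint32_t)p[1]<<16)
         | ((uint32_t)p[2]<<8)  | (uint32_t)p[3];
  }

  /* Count the leaf pages reachable from the trunk chain starting at page
  ** iFirstTrunk. nFreePages is the freelist size taken from the header and
  ** serves as a guard against cycles in a corrupt chain. */
  static uint64_t demo_count_freelist_leaves(
    void *pCtx,
    DemoFetchPage xFetch,
    uint32_t iFirstTrunk,
    uint32_t nFreePages
  ){
    uint64_t nLeaf = 0;
    uint32_t iTrunk = iFirstTrunk;
    uint32_t nVisited = 0;
    while( iTrunk!=0 ){
      const unsigned char *aData;
      if( nVisited++ > nFreePages ) break;   /* corrupt or cyclic chain */
      aData = xFetch(pCtx, iTrunk);
      if( aData==0 ) break;                  /* page could not be read  */
      nLeaf += demo_get4byte(&aData[4]);     /* leaf count at offset 4  */
      iTrunk = demo_get4byte(&aData[0]);     /* next trunk at offset 0  */
    }
    return nLeaf;
  }
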
The first leaf becomes a trunk @@ -43602,11 +61233,11 @@ if( iNewTrunk>mxPage ){ rc = SQLITE_CORRUPT_BKPT; goto end_allocate_page; } testcase( iNewTrunk==mxPage ); - rc = btreeGetPage(pBt, iNewTrunk, &pNewTrunk, 0); + rc = btreeGetUnusedPage(pBt, iNewTrunk, &pNewTrunk, 0); if( rc!=SQLITE_OK ){ goto end_allocate_page; } rc = sqlite3PagerWrite(pNewTrunk->pDbPage); if( rc!=SQLITE_OK ){ @@ -43634,26 +61265,30 @@ }else if( k>0 ){ /* Extract a leaf from the trunk */ u32 closest; Pgno iPage; unsigned char *aData = pTrunk->aData; - rc = sqlite3PagerWrite(pTrunk->pDbPage); - if( rc ){ - goto end_allocate_page; - } if( nearby>0 ){ u32 i; - int dist; closest = 0; - dist = get4byte(&aData[8]) - nearby; - if( dist<0 ) dist = -dist; - for(i=1; i<k; i++){ - int d2 = get4byte(&aData[8+i*4]) - nearby; - if( d2<0 ) d2 = -d2; - if( d2<dist ){ - closest = i; - dist = d2; + if( eMode==BTALLOC_LE ){ + for(i=0; i<k; i++){ + iPage = get4byte(&aData[8+i*4]); + if( iPage<=nearby ){ + closest = i; + break; + } + } + }else{ + int dist; + dist = sqlite3AbsInt32(get4byte(&aData[8]) - nearby); + for(i=1; i<k; i++){ + int d2 = sqlite3AbsInt32(get4byte(&aData[8+i*4]) - nearby); + if( d2<dist ){ + closest = i; + dist = d2; + } } } }else{ closest = 0; } @@ -43663,38 +61298,60 @@ if( iPage>mxPage ){ rc = SQLITE_CORRUPT_BKPT; goto end_allocate_page; } testcase( iPage==mxPage ); - if( !searchList || iPage==nearby ){ + if( !searchList + || (iPage==nearby || (iPage<nearby && eMode==BTALLOC_LE)) + ){ int noContent; *pPgno = iPage; TRACE(("ALLOCATE: %d was leaf %d of %d on trunk %d" ": %d more free pages\n", *pPgno, closest+1, k, pTrunk->pgno, n-1)); + rc = sqlite3PagerWrite(pTrunk->pDbPage); + if( rc ) goto end_allocate_page; if( closest<k-1 ){ memcpy(&aData[8+closest*4], &aData[4+k*4], 4); } put4byte(&aData[4], k-1); - assert( sqlite3PagerIswriteable(pTrunk->pDbPage) ); - noContent = !btreeGetHasContent(pBt, *pPgno); - rc = btreeGetPage(pBt, *pPgno, ppPage, noContent); + noContent = !btreeGetHasContent(pBt, *pPgno)? PAGER_GET_NOCONTENT : 0; + rc = btreeGetUnusedPage(pBt, *pPgno, ppPage, noContent); if( rc==SQLITE_OK ){ rc = sqlite3PagerWrite((*ppPage)->pDbPage); if( rc!=SQLITE_OK ){ releasePage(*ppPage); + *ppPage = 0; } } searchList = 0; } } releasePage(pPrevTrunk); pPrevTrunk = 0; }while( searchList ); }else{ - /* There are no pages on the freelist, so create a new page at the - ** end of the file */ + /* There are no pages on the freelist, so append a new page to the + ** database image. + ** + ** Normally, new pages allocated by this block can be requested from the + ** pager layer with the 'no-content' flag set. This prevents the pager + ** from trying to read the pages content from disk. However, if the + ** current transaction has already run one or more incremental-vacuum + ** steps, then the page we are about to allocate may contain content + ** that is required in the event of a rollback. In this case, do + ** not set the no-content flag. This causes the pager to load and journal + ** the current page content before overwriting it. + ** + ** Note that the pager will not actually attempt to load or journal + ** content for any page that really does lie past the end of the database + ** file on disk. So the effects of disabling the no-content optimization + ** here are confined to those pages that lie between the end of the + ** database image and the end of the database file. + */ + int bNoContent = (0==IfNotOmitAV(pBt->bDoTruncate))? 
PAGER_GET_NOCONTENT:0; + rc = sqlite3PagerWrite(pBt->pPage1->pDbPage); if( rc ) return rc; pBt->nPage++; if( pBt->nPage==PENDING_BYTE_PAGE(pBt) ) pBt->nPage++; @@ -43705,11 +61362,11 @@ ** becomes a new pointer-map page, the second is used by the caller. */ MemPage *pPg = 0; TRACE(("ALLOCATE: %d from end of file (pointer-map page)\n", pBt->nPage)); assert( pBt->nPage!=PENDING_BYTE_PAGE(pBt) ); - rc = btreeGetPage(pBt, pBt->nPage, &pPg, 1); + rc = btreeGetUnusedPage(pBt, pBt->nPage, &pPg, bNoContent); if( rc==SQLITE_OK ){ rc = sqlite3PagerWrite(pPg->pDbPage); releasePage(pPg); } if( rc ) return rc; @@ -43719,33 +61376,27 @@ #endif put4byte(28 + (u8*)pBt->pPage1->aData, pBt->nPage); *pPgno = pBt->nPage; assert( *pPgno!=PENDING_BYTE_PAGE(pBt) ); - rc = btreeGetPage(pBt, *pPgno, ppPage, 1); + rc = btreeGetUnusedPage(pBt, *pPgno, ppPage, bNoContent); if( rc ) return rc; rc = sqlite3PagerWrite((*ppPage)->pDbPage); if( rc!=SQLITE_OK ){ releasePage(*ppPage); + *ppPage = 0; } TRACE(("ALLOCATE: %d from end of file\n", *pPgno)); } assert( *pPgno!=PENDING_BYTE_PAGE(pBt) ); end_allocate_page: releasePage(pTrunk); releasePage(pPrevTrunk); - if( rc==SQLITE_OK ){ - if( sqlite3PagerPageRefcount((*ppPage)->pDbPage)>1 ){ - releasePage(*ppPage); - return SQLITE_CORRUPT_BKPT; - } - (*ppPage)->isInit = 0; - }else{ - *ppPage = 0; - } + assert( rc!=SQLITE_OK || sqlite3PagerPageRefcount((*ppPage)->pDbPage)<=1 ); + assert( rc!=SQLITE_OK || (*ppPage)->isInit==0 ); return rc; } /* ** This function is used to add page iPage to the database file free-list. @@ -43766,13 +61417,14 @@ MemPage *pPage; /* Page being freed. May be NULL. */ int rc; /* Return Code */ int nFree; /* Initial number of pages on free-list */ assert( sqlite3_mutex_held(pBt->mutex) ); - assert( iPage>1 ); + assert( CORRUPT_DB || iPage>1 ); assert( !pMemPage || pMemPage->pgno==iPage ); + if( iPage<2 ) return SQLITE_CORRUPT_BKPT; if( pMemPage ){ pPage = pMemPage; sqlite3PagerRef(pPage->pDbPage); }else{ pPage = btreePageLookup(pBt, iPage); @@ -43782,11 +61434,11 @@ rc = sqlite3PagerWrite(pPage1->pDbPage); if( rc ) goto freepage_out; nFree = get4byte(&pPage1->aData[36]); put4byte(&pPage1->aData[36], nFree+1); - if( pBt->secureDelete ){ + if( pBt->btsFlags & BTS_SECURE_DELETE ){ /* If the secure_delete option is enabled, then ** always fully overwrite deleted information with zeros. */ if( (!pPage && ((rc = btreeGetPage(pBt, iPage, &pPage, 0))!=0) ) || ((rc = sqlite3PagerWrite(pPage->pDbPage))!=0) @@ -43838,16 +61490,21 @@ ** to maintain backwards compatibility with older versions of SQLite, ** we will continue to restrict the number of entries to usableSize/4 - 8 ** for now. At some point in the future (once everyone has upgraded ** to 3.6.0 or later) we should consider fixing the conditional above ** to read "usableSize/4-2" instead of "usableSize/4-8". + ** + ** EVIDENCE-OF: R-19920-11576 However, newer versions of SQLite still + ** avoid using the last six entries in the freelist trunk page array in + ** order that database files created by newer versions of SQLite can be + ** read by older versions of SQLite. 
*/ rc = sqlite3PagerWrite(pTrunk->pDbPage); if( rc==SQLITE_OK ){ put4byte(&pTrunk->aData[4], nLeaf+1); put4byte(&pTrunk->aData[8+nLeaf*4], iPage); - if( pPage && !pBt->secureDelete ){ + if( pPage && (pBt->btsFlags & BTS_SECURE_DELETE)==0 ){ sqlite3PagerDontWrite(pPage->pDbPage); } rc = btreeSetHasContent(pBt, iPage); } TRACE(("FREE-PAGE: %d leaf on trunk page %d\n",pPage->pgno,pTrunk->pgno)); @@ -43886,30 +61543,42 @@ *pRC = freePage2(pPage->pBt, pPage, pPage->pgno); } } /* -** Free any overflow pages associated with the given Cell. +** Free any overflow pages associated with the given Cell. Write the +** local Cell size (the number of bytes on the original page, omitting +** overflow) into *pnSize. */ -static int clearCell(MemPage *pPage, unsigned char *pCell){ +static int clearCell( + MemPage *pPage, /* The page that contains the Cell */ + unsigned char *pCell, /* First byte of the Cell */ + u16 *pnSize /* Write the size of the Cell here */ +){ BtShared *pBt = pPage->pBt; CellInfo info; Pgno ovflPgno; int rc; int nOvfl; - u16 ovflPageSize; + u32 ovflPageSize; assert( sqlite3_mutex_held(pPage->pBt->mutex) ); - btreeParseCellPtr(pPage, pCell, &info); - if( info.iOverflow==0 ){ + pPage->xParseCell(pPage, pCell, &info); + *pnSize = info.nSize; + if( info.nLocal==info.nPayload ){ return SQLITE_OK; /* No overflow pages. Return without doing anything */ } - ovflPgno = get4byte(&pCell[info.iOverflow]); + if( pCell+info.nSize-1 > pPage->aData+pPage->maskPage ){ + return SQLITE_CORRUPT_BKPT; /* Cell extends past end of page */ + } + ovflPgno = get4byte(pCell + info.nSize - 4); assert( pBt->usableSize > 4 ); ovflPageSize = pBt->usableSize - 4; nOvfl = (info.nPayload - info.nLocal + ovflPageSize - 1)/ovflPageSize; - assert( ovflPgno==0 || nOvfl>0 ); + assert( nOvfl>0 || + (CORRUPT_DB && (info.nPayload + ovflPageSize)<ovflPageSize) + ); while( nOvfl-- ){ Pgno iNext = 0; MemPage *pOvfl = 0; if( ovflPgno<2 || ovflPgno>btreePagecount(pBt) ){ /* 0 is not a legal page number and page 1 cannot be an @@ -43978,54 +61647,84 @@ unsigned char *pPrior; unsigned char *pPayload; BtShared *pBt = pPage->pBt; Pgno pgnoOvfl = 0; int nHeader; - CellInfo info; assert( sqlite3_mutex_held(pPage->pBt->mutex) ); /* pPage is not necessarily writeable since pCell might be auxiliary ** buffer space that is separate from the pPage buffer area */ assert( pCell<pPage->aData || pCell>=&pPage->aData[pBt->pageSize] || sqlite3PagerIswriteable(pPage->pDbPage) ); /* Fill in the header. 
*/ - nHeader = 0; - if( !pPage->leaf ){ - nHeader += 4; - } - if( pPage->hasData ){ - nHeader += putVarint(&pCell[nHeader], nData+nZero); + nHeader = pPage->childPtrSize; + nPayload = nData + nZero; + if( pPage->intKeyLeaf ){ + nHeader += putVarint32(&pCell[nHeader], nPayload); }else{ - nData = nZero = 0; + assert( nData==0 ); + assert( nZero==0 ); } nHeader += putVarint(&pCell[nHeader], *(u64*)&nKey); - btreeParseCellPtr(pPage, pCell, &info); - assert( info.nHeader==nHeader ); - assert( info.nKey==nKey ); - assert( info.nData==(u32)(nData+nZero) ); - /* Fill in the payload */ - nPayload = nData + nZero; + /* Fill in the payload size */ if( pPage->intKey ){ pSrc = pData; nSrc = nData; nData = 0; }else{ - if( NEVER(nKey>0x7fffffff || pKey==0) ){ - return SQLITE_CORRUPT_BKPT; - } - nPayload += (int)nKey; + assert( nKey<=0x7fffffff && pKey!=0 ); + nPayload = (int)nKey; pSrc = pKey; nSrc = (int)nKey; } - *pnSize = info.nSize; - spaceLeft = info.nLocal; + if( nPayload<=pPage->maxLocal ){ + n = nHeader + nPayload; + testcase( n==3 ); + testcase( n==4 ); + if( n<4 ) n = 4; + *pnSize = n; + spaceLeft = nPayload; + pPrior = pCell; + }else{ + int mn = pPage->minLocal; + n = mn + (nPayload - mn) % (pPage->pBt->usableSize - 4); + testcase( n==pPage->maxLocal ); + testcase( n==pPage->maxLocal+1 ); + if( n > pPage->maxLocal ) n = mn; + spaceLeft = n; + *pnSize = n + nHeader + 4; + pPrior = &pCell[nHeader+n]; + } pPayload = &pCell[nHeader]; - pPrior = &pCell[info.iOverflow]; + /* At this point variables should be set as follows: + ** + ** nPayload Total payload size in bytes + ** pPayload Begin writing payload here + ** spaceLeft Space available at pPayload. If nPayload>spaceLeft, + ** that means content must spill into overflow pages. + ** *pnSize Size of the local cell (not counting overflow pages) + ** pPrior Where to write the pgno of the first overflow page + ** + ** Use a call to btreeParseCellPtr() to verify that the values above + ** were computed correctly. + */ +#if SQLITE_DEBUG + { + CellInfo info; + pPage->xParseCell(pPage, pCell, &info); + assert( nHeader==(int)(info.pPayload - pCell) ); + assert( info.nKey==nKey ); + assert( *pnSize == info.nSize ); + assert( spaceLeft == info.nLocal ); + } +#endif + + /* Write the payload into the local Cell and any extra into overflow pages */ while( nPayload>0 ){ if( spaceLeft==0 ){ #ifndef SQLITE_OMIT_AUTOVACUUM Pgno pgnoPtrmap = pgnoOvfl; /* Overflow page pointer-map entry page */ if( pBt->autoVacuum ){ @@ -44043,11 +61742,11 @@ ** for that page now. ** ** If this is the first overflow page, then write a partial entry ** to the pointer-map. If we write nothing to this pointer-map slot, ** then the optimistic overflow chain processing in clearCell() - ** may misinterpret the uninitialised values and delete the + ** may misinterpret the uninitialized values and delete the ** wrong pages from the database. */ if( pBt->autoVacuum && rc==SQLITE_OK ){ u8 eType = (pgnoPtrmap?PTRMAP_OVERFLOW2:PTRMAP_OVERFLOW1); ptrmapPut(pBt, pgnoOvfl, eType, pgnoPtrmap, &rc); @@ -44118,63 +61817,62 @@ ** removes the reference to the cell from pPage. ** ** "sz" must be the number of bytes in the cell. */ static void dropCell(MemPage *pPage, int idx, int sz, int *pRC){ - int i; /* Loop counter */ - int pc; /* Offset to cell content of cell being deleted */ + u32 pc; /* Offset to cell content of cell being deleted */ u8 *data; /* pPage->aData */ u8 *ptr; /* Used to move bytes around within data[] */ int rc; /* The return code */ int hdr; /* Beginning of the header. 0 most pages. 
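/* Editor's sketch, not part of the patch: the local-payload calculation that
** fillInCell() now performs inline.  Given the total payload it decides how
** many bytes stay on the b-tree page and how many spill to overflow pages.
** The minLocal/maxLocal/usableSize values below are assumed, not taken from
** a real database. */
#include <stdio.h>

static int localPayload(int nPayload, int minLocal, int maxLocal, int usableSize){
  int n;
  if( nPayload<=maxLocal ) return nPayload;       /* everything fits locally */
  n = minLocal + (nPayload - minLocal) % (usableSize - 4);
  if( n>maxLocal ) n = minLocal;                  /* fall back to the minimum
                                                  ** if that exceeds the limit */
  return n;
}

int main(void){
  int usableSize = 1024, minLocal = 23, maxLocal = 489;   /* assumed values */
  printf("payload  400 -> local %d\n", localPayload( 400, minLocal, maxLocal, usableSize));
  printf("payload 5000 -> local %d\n", localPayload(5000, minLocal, maxLocal, usableSize));
  return 0;
}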
100 page 1 */ if( *pRC ) return; assert( idx>=0 && idx<pPage->nCell ); - assert( sz==cellSize(pPage, idx) ); + assert( CORRUPT_DB || sz==cellSize(pPage, idx) ); assert( sqlite3PagerIswriteable(pPage->pDbPage) ); assert( sqlite3_mutex_held(pPage->pBt->mutex) ); data = pPage->aData; - ptr = &data[pPage->cellOffset + 2*idx]; + ptr = &pPage->aCellIdx[2*idx]; pc = get2byte(ptr); hdr = pPage->hdrOffset; testcase( pc==get2byte(&data[hdr+5]) ); testcase( pc+sz==pPage->pBt->usableSize ); - if( pc < get2byte(&data[hdr+5]) || pc+sz > pPage->pBt->usableSize ){ + if( pc < (u32)get2byte(&data[hdr+5]) || pc+sz > pPage->pBt->usableSize ){ *pRC = SQLITE_CORRUPT_BKPT; return; } rc = freeSpace(pPage, pc, sz); if( rc ){ *pRC = rc; return; } - for(i=idx+1; i<pPage->nCell; i++, ptr+=2){ - ptr[0] = ptr[2]; - ptr[1] = ptr[3]; - } pPage->nCell--; - put2byte(&data[hdr+3], pPage->nCell); - pPage->nFree += 2; + if( pPage->nCell==0 ){ + memset(&data[hdr+1], 0, 4); + data[hdr+7] = 0; + put2byte(&data[hdr+5], pPage->pBt->usableSize); + pPage->nFree = pPage->pBt->usableSize - pPage->hdrOffset + - pPage->childPtrSize - 8; + }else{ + memmove(ptr, ptr+2, 2*(pPage->nCell - idx)); + put2byte(&data[hdr+3], pPage->nCell); + pPage->nFree += 2; + } } /* ** Insert a new cell on pPage at cell index "i". pCell points to the ** content of the cell. ** ** If the cell content will fit on the page, then put it there. If it ** will not fit, then make a copy of the cell content into pTemp if ** pTemp is not null. Regardless of pTemp, allocate a new entry -** in pPage->aOvfl[] and make it point to the cell content (either +** in pPage->apOvfl[] and make it point to the cell content (either ** in pTemp or the original pCell) and also record its index. ** Allocating a new entry in pPage->aCell[] implies that ** pPage->nOverflow is incremented. -** -** If nSkip is non-zero, then do not copy the first nSkip bytes of the -** cell. The caller will overwrite them after this function returns. If -** nSkip is non-zero, then pCell may not point to an invalid memory location -** (but pCell+nSkip is always valid). */ static void insertCell( MemPage *pPage, /* Page into which we are copying */ int i, /* New cell becomes the i-th cell of the page */ u8 *pCell, /* Content of the new cell */ @@ -44183,71 +61881,75 @@ Pgno iChild, /* If non-zero, replace first 4 bytes with this value */ int *pRC /* Read and write return code from here */ ){ int idx = 0; /* Where to write new cell content in data[] */ int j; /* Loop counter */ - int end; /* First byte past the last cell pointer in data[] */ - int ins; /* Index in data[] where new cell pointer is inserted */ - int cellOffset; /* Address of first cell pointer in data[] */ u8 *data; /* The content of the whole page */ - u8 *ptr; /* Used for moving information around in data[] */ - - int nSkip = (iChild ? 4 : 0); + u8 *pIns; /* The point in pPage->aCellIdx[] where no cell inserted */ if( *pRC ) return; assert( i>=0 && i<=pPage->nCell+pPage->nOverflow ); - assert( pPage->nCell<=MX_CELL(pPage->pBt) && MX_CELL(pPage->pBt)<=5460 ); - assert( pPage->nOverflow<=ArraySize(pPage->aOvfl) ); + assert( MX_CELL(pPage->pBt)<=10921 ); + assert( pPage->nCell<=MX_CELL(pPage->pBt) || CORRUPT_DB ); + assert( pPage->nOverflow<=ArraySize(pPage->apOvfl) ); + assert( ArraySize(pPage->apOvfl)==ArraySize(pPage->aiOvfl) ); assert( sqlite3_mutex_held(pPage->pBt->mutex) ); /* The cell should normally be sized correctly. 
However, when moving a ** malformed cell from a leaf page to an interior page, if the cell size ** wanted to be less than 4 but got rounded up to 4 on the leaf, then size ** might be less than 8 (leaf-size + pointer) on the interior node. Hence ** the term after the || in the following assert(). */ - assert( sz==cellSizePtr(pPage, pCell) || (sz==8 && iChild>0) ); + assert( sz==pPage->xCellSize(pPage, pCell) || (sz==8 && iChild>0) ); if( pPage->nOverflow || sz+2>pPage->nFree ){ if( pTemp ){ - memcpy(pTemp+nSkip, pCell+nSkip, sz-nSkip); + memcpy(pTemp, pCell, sz); pCell = pTemp; } if( iChild ){ put4byte(pCell, iChild); } j = pPage->nOverflow++; - assert( j<(int)(sizeof(pPage->aOvfl)/sizeof(pPage->aOvfl[0])) ); - pPage->aOvfl[j].pCell = pCell; - pPage->aOvfl[j].idx = (u16)i; + assert( j<(int)(sizeof(pPage->apOvfl)/sizeof(pPage->apOvfl[0])) ); + pPage->apOvfl[j] = pCell; + pPage->aiOvfl[j] = (u16)i; + + /* When multiple overflows occur, they are always sequential and in + ** sorted order. This invariants arise because multiple overflows can + ** only occur when inserting divider cells into the parent page during + ** balancing, and the dividers are adjacent and sorted. + */ + assert( j==0 || pPage->aiOvfl[j-1]<(u16)i ); /* Overflows in sorted order */ + assert( j==0 || i==pPage->aiOvfl[j-1]+1 ); /* Overflows are sequential */ }else{ int rc = sqlite3PagerWrite(pPage->pDbPage); if( rc!=SQLITE_OK ){ *pRC = rc; return; } assert( sqlite3PagerIswriteable(pPage->pDbPage) ); data = pPage->aData; - cellOffset = pPage->cellOffset; - end = cellOffset + 2*pPage->nCell; - ins = cellOffset + 2*i; + assert( &data[pPage->cellOffset]==pPage->aCellIdx ); rc = allocateSpace(pPage, sz, &idx); if( rc ){ *pRC = rc; return; } - /* The allocateSpace() routine guarantees the following two properties - ** if it returns success */ - assert( idx >= end+2 ); - assert( idx+sz <= pPage->pBt->usableSize ); - pPage->nCell++; + /* The allocateSpace() routine guarantees the following properties + ** if it returns successfully */ + assert( idx >= 0 ); + assert( idx >= pPage->cellOffset+2*pPage->nCell+2 || CORRUPT_DB ); + assert( idx+sz <= (int)pPage->pBt->usableSize ); pPage->nFree -= (u16)(2 + sz); - memcpy(&data[idx+nSkip], pCell+nSkip, sz-nSkip); + memcpy(&data[idx], pCell, sz); if( iChild ){ put4byte(&data[idx], iChild); } - for(j=end, ptr=&data[j]; j>ins; j-=2, ptr-=2){ - ptr[0] = ptr[-2]; - ptr[1] = ptr[-1]; - } - put2byte(&data[ins], idx); - put2byte(&data[pPage->hdrOffset+3], pPage->nCell); + pIns = pPage->aCellIdx + i*2; + memmove(pIns+2, pIns, 2*(pPage->nCell - i)); + put2byte(pIns, idx); + pPage->nCell++; + /* increment the cell count */ + if( (++data[pPage->hdrOffset+4])==0 ) data[pPage->hdrOffset+3]++; + assert( get2byte(&data[pPage->hdrOffset+3])==pPage->nCell ); #ifndef SQLITE_OMIT_AUTOVACUUM if( pPage->pBt->autoVacuum ){ /* The cell may contain a pointer to an overflow page. If so, write ** the entry for the overflow page into the pointer map. */ @@ -44256,47 +61958,332 @@ #endif } } /* -** Add a list of cells to a page. The page should be initially empty. -** The cells are guaranteed to fit on the page. 
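/* Editor's sketch, not part of the patch: insertCell() above bumps the page's
** cell count by incrementing the low byte of the big-endian 16-bit header
** field and carrying into the high byte on wrap-around, which is equivalent
** to put2byte(p, get2byte(p)+1) without re-reading the field. */
#include <stdio.h>

int main(void){
  unsigned char hdr[2] = {0x01, 0xFF};       /* big-endian count = 0x01FF */
  if( (++hdr[1])==0 ) hdr[0]++;              /* byte-wise increment       */
  unsigned count = (hdr[0]<<8) | hdr[1];     /* same as get2byte(hdr)     */
  printf("count = 0x%04X\n", count);         /* -> 0x0200                 */
  return 0;
}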
-*/ -static void assemblePage( - MemPage *pPage, /* The page to be assemblied */ - int nCell, /* The number of cells to add to this page */ - u8 **apCell, /* Pointers to cell bodies */ - u16 *aSize /* Sizes of the cells */ -){ - int i; /* Loop counter */ - u8 *pCellptr; /* Address of next cell pointer */ - int cellbody; /* Address of next cell body */ - u8 * const data = pPage->aData; /* Pointer to data for pPage */ - const int hdr = pPage->hdrOffset; /* Offset of header on pPage */ - const int nUsable = pPage->pBt->usableSize; /* Usable size of page */ - - assert( pPage->nOverflow==0 ); - assert( sqlite3_mutex_held(pPage->pBt->mutex) ); - assert( nCell>=0 && nCell<=MX_CELL(pPage->pBt) && MX_CELL(pPage->pBt)<=5460 ); - assert( sqlite3PagerIswriteable(pPage->pDbPage) ); - - /* Check that the page has just been zeroed by zeroPage() */ - assert( pPage->nCell==0 ); - assert( get2byte(&data[hdr+5])==nUsable ); - - pCellptr = &data[pPage->cellOffset + nCell*2]; - cellbody = nUsable; - for(i=nCell-1; i>=0; i--){ - pCellptr -= 2; - cellbody -= aSize[i]; - put2byte(pCellptr, cellbody); - memcpy(&data[cellbody], apCell[i], aSize[i]); - } - put2byte(&data[hdr+3], nCell); - put2byte(&data[hdr+5], cellbody); - pPage->nFree -= (nCell*2 + nUsable - cellbody); - pPage->nCell = (u16)nCell; +** A CellArray object contains a cache of pointers and sizes for a +** consecutive sequence of cells that might be held multiple pages. +*/ +typedef struct CellArray CellArray; +struct CellArray { + int nCell; /* Number of cells in apCell[] */ + MemPage *pRef; /* Reference page */ + u8 **apCell; /* All cells begin balanced */ + u16 *szCell; /* Local size of all cells in apCell[] */ +}; + +/* +** Make sure the cell sizes at idx, idx+1, ..., idx+N-1 have been +** computed. +*/ +static void populateCellCache(CellArray *p, int idx, int N){ + assert( idx>=0 && idx+N<=p->nCell ); + while( N>0 ){ + assert( p->apCell[idx]!=0 ); + if( p->szCell[idx]==0 ){ + p->szCell[idx] = p->pRef->xCellSize(p->pRef, p->apCell[idx]); + }else{ + assert( CORRUPT_DB || + p->szCell[idx]==p->pRef->xCellSize(p->pRef, p->apCell[idx]) ); + } + idx++; + N--; + } +} + +/* +** Return the size of the Nth element of the cell array +*/ +static SQLITE_NOINLINE u16 computeCellSize(CellArray *p, int N){ + assert( N>=0 && N<p->nCell ); + assert( p->szCell[N]==0 ); + p->szCell[N] = p->pRef->xCellSize(p->pRef, p->apCell[N]); + return p->szCell[N]; +} +static u16 cachedCellSize(CellArray *p, int N){ + assert( N>=0 && N<p->nCell ); + if( p->szCell[N] ) return p->szCell[N]; + return computeCellSize(p, N); +} + +/* +** Array apCell[] contains pointers to nCell b-tree page cells. The +** szCell[] array contains the size in bytes of each cell. This function +** replaces the current contents of page pPg with the contents of the cell +** array. +** +** Some of the cells in apCell[] may currently be stored in pPg. This +** function works around problems caused by this by making a copy of any +** such cells before overwriting the page data. +** +** The MemPage.nFree field is invalidated by this function. It is the +** responsibility of the caller to set it correctly. 
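/* Editor's sketch, not part of the patch: the lazy size cache used by the
** CellArray introduced above.  A stored size of zero means "not computed
** yet"; the first request computes and stores the size, later requests reuse
** the stored value.  measureCell() is a hypothetical stand-in for
** MemPage.xCellSize(). */
#include <stdio.h>
#include <string.h>

#define NCELL 4

static unsigned short measureCell(int i){    /* hypothetical size function */
  printf("  measuring cell %d\n", i);
  return (unsigned short)(10 + i);
}

int main(void){
  unsigned short szCell[NCELL];
  memset(szCell, 0, sizeof(szCell));         /* 0 == size not yet known */
  for(int pass=0; pass<2; pass++){
    for(int i=0; i<NCELL; i++){
      if( szCell[i]==0 ) szCell[i] = measureCell(i);   /* computed once only */
    }
  }
  printf("sizes: %d %d %d %d\n", szCell[0], szCell[1], szCell[2], szCell[3]);
  return 0;
}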
+*/ +static int rebuildPage( + MemPage *pPg, /* Edit this page */ + int nCell, /* Final number of cells on page */ + u8 **apCell, /* Array of cells */ + u16 *szCell /* Array of cell sizes */ +){ + const int hdr = pPg->hdrOffset; /* Offset of header on pPg */ + u8 * const aData = pPg->aData; /* Pointer to data for pPg */ + const int usableSize = pPg->pBt->usableSize; + u8 * const pEnd = &aData[usableSize]; + int i; + u8 *pCellptr = pPg->aCellIdx; + u8 *pTmp = sqlite3PagerTempSpace(pPg->pBt->pPager); + u8 *pData; + + i = get2byte(&aData[hdr+5]); + memcpy(&pTmp[i], &aData[i], usableSize - i); + + pData = pEnd; + for(i=0; i<nCell; i++){ + u8 *pCell = apCell[i]; + if( SQLITE_WITHIN(pCell,aData,pEnd) ){ + pCell = &pTmp[pCell - aData]; + } + pData -= szCell[i]; + put2byte(pCellptr, (pData - aData)); + pCellptr += 2; + if( pData < pCellptr ) return SQLITE_CORRUPT_BKPT; + memcpy(pData, pCell, szCell[i]); + assert( szCell[i]==pPg->xCellSize(pPg, pCell) || CORRUPT_DB ); + testcase( szCell[i]!=pPg->xCellSize(pPg,pCell) ); + } + + /* The pPg->nFree field is now set incorrectly. The caller will fix it. */ + pPg->nCell = nCell; + pPg->nOverflow = 0; + + put2byte(&aData[hdr+1], 0); + put2byte(&aData[hdr+3], pPg->nCell); + put2byte(&aData[hdr+5], pData - aData); + aData[hdr+7] = 0x00; + return SQLITE_OK; +} + +/* +** Array apCell[] contains nCell pointers to b-tree cells. Array szCell +** contains the size in bytes of each such cell. This function attempts to +** add the cells stored in the array to page pPg. If it cannot (because +** the page needs to be defragmented before the cells will fit), non-zero +** is returned. Otherwise, if the cells are added successfully, zero is +** returned. +** +** Argument pCellptr points to the first entry in the cell-pointer array +** (part of page pPg) to populate. After cell apCell[0] is written to the +** page body, a 16-bit offset is written to pCellptr. And so on, for each +** cell in the array. It is the responsibility of the caller to ensure +** that it is safe to overwrite this part of the cell-pointer array. +** +** When this function is called, *ppData points to the start of the +** content area on page pPg. If the size of the content area is extended, +** *ppData is updated to point to the new start of the content area +** before returning. +** +** Finally, argument pBegin points to the byte immediately following the +** end of the space required by this page for the cell-pointer area (for +** all cells - not just those inserted by the current call). If the content +** area must be extended to before this point in order to accomodate all +** cells in apCell[], then the cells do not fit and non-zero is returned. +*/ +static int pageInsertArray( + MemPage *pPg, /* Page to add cells to */ + u8 *pBegin, /* End of cell-pointer array */ + u8 **ppData, /* IN/OUT: Page content -area pointer */ + u8 *pCellptr, /* Pointer to cell-pointer area */ + int iFirst, /* Index of first cell to add */ + int nCell, /* Number of cells to add to pPg */ + CellArray *pCArray /* Array of cells */ +){ + int i; + u8 *aData = pPg->aData; + u8 *pData = *ppData; + int iEnd = iFirst + nCell; + assert( CORRUPT_DB || pPg->hdrOffset==0 ); /* Never called on page 1 */ + for(i=iFirst; i<iEnd; i++){ + int sz, rc; + u8 *pSlot; + sz = cachedCellSize(pCArray, i); + if( (aData[1]==0 && aData[2]==0) || (pSlot = pageFindSlot(pPg,sz,&rc))==0 ){ + pData -= sz; + if( pData<pBegin ) return 1; + pSlot = pData; + } + /* pSlot and pCArray->apCell[i] will never overlap on a well-formed + ** database. 
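/* Editor's sketch, not part of the patch: the packing pattern rebuildPage()
** uses.  Cell bodies are written downward from the end of the page while the
** 2-byte cell-pointer array grows upward from the header, so the gap between
** the two is the page's free space.  Page and cell sizes are assumed. */
#include <stdio.h>
#include <string.h>

int main(void){
  unsigned char page[256];
  const int szCell[] = {30, 45, 12};
  int nCell = 3;
  int pData = sizeof(page);          /* content area starts at the page end  */
  int pCellptr = 8;                  /* pointer array after an 8-byte header */

  memset(page, 0, sizeof(page));
  for(int i=0; i<nCell; i++){
    pData -= szCell[i];                             /* body of cell i goes here */
    page[pCellptr]   = (unsigned char)(pData>>8);   /* big-endian offset        */
    page[pCellptr+1] = (unsigned char)(pData&0xff);
    pCellptr += 2;
  }
  printf("content starts at %d, pointer array ends at %d, gap %d bytes\n",
         pData, pCellptr, pData - pCellptr);
  return 0;
}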
But they might for a corrupt database. Hence use memmove() + ** since memcpy() sends SIGABORT with overlapping buffers on OpenBSD */ + assert( (pSlot+sz)<=pCArray->apCell[i] + || pSlot>=(pCArray->apCell[i]+sz) + || CORRUPT_DB ); + memmove(pSlot, pCArray->apCell[i], sz); + put2byte(pCellptr, (pSlot - aData)); + pCellptr += 2; + } + *ppData = pData; + return 0; +} + +/* +** Array apCell[] contains nCell pointers to b-tree cells. Array szCell +** contains the size in bytes of each such cell. This function adds the +** space associated with each cell in the array that is currently stored +** within the body of pPg to the pPg free-list. The cell-pointers and other +** fields of the page are not updated. +** +** This function returns the total number of cells added to the free-list. +*/ +static int pageFreeArray( + MemPage *pPg, /* Page to edit */ + int iFirst, /* First cell to delete */ + int nCell, /* Cells to delete */ + CellArray *pCArray /* Array of cells */ +){ + u8 * const aData = pPg->aData; + u8 * const pEnd = &aData[pPg->pBt->usableSize]; + u8 * const pStart = &aData[pPg->hdrOffset + 8 + pPg->childPtrSize]; + int nRet = 0; + int i; + int iEnd = iFirst + nCell; + u8 *pFree = 0; + int szFree = 0; + + for(i=iFirst; i<iEnd; i++){ + u8 *pCell = pCArray->apCell[i]; + if( SQLITE_WITHIN(pCell, pStart, pEnd) ){ + int sz; + /* No need to use cachedCellSize() here. The sizes of all cells that + ** are to be freed have already been computing while deciding which + ** cells need freeing */ + sz = pCArray->szCell[i]; assert( sz>0 ); + if( pFree!=(pCell + sz) ){ + if( pFree ){ + assert( pFree>aData && (pFree - aData)<65536 ); + freeSpace(pPg, (u16)(pFree - aData), szFree); + } + pFree = pCell; + szFree = sz; + if( pFree+sz>pEnd ) return 0; + }else{ + pFree = pCell; + szFree += sz; + } + nRet++; + } + } + if( pFree ){ + assert( pFree>aData && (pFree - aData)<65536 ); + freeSpace(pPg, (u16)(pFree - aData), szFree); + } + return nRet; +} + +/* +** apCell[] and szCell[] contains pointers to and sizes of all cells in the +** pages being balanced. The current page, pPg, has pPg->nCell cells starting +** with apCell[iOld]. After balancing, this page should hold nNew cells +** starting at apCell[iNew]. +** +** This routine makes the necessary adjustments to pPg so that it contains +** the correct cells after being balanced. +** +** The pPg->nFree field is invalid when this function returns. It is the +** responsibility of the caller to set it correctly. 
+*/ +static int editPage( + MemPage *pPg, /* Edit this page */ + int iOld, /* Index of first cell currently on page */ + int iNew, /* Index of new first cell on page */ + int nNew, /* Final number of cells on page */ + CellArray *pCArray /* Array of cells and sizes */ +){ + u8 * const aData = pPg->aData; + const int hdr = pPg->hdrOffset; + u8 *pBegin = &pPg->aCellIdx[nNew * 2]; + int nCell = pPg->nCell; /* Cells stored on pPg */ + u8 *pData; + u8 *pCellptr; + int i; + int iOldEnd = iOld + pPg->nCell + pPg->nOverflow; + int iNewEnd = iNew + nNew; + +#ifdef SQLITE_DEBUG + u8 *pTmp = sqlite3PagerTempSpace(pPg->pBt->pPager); + memcpy(pTmp, aData, pPg->pBt->usableSize); +#endif + + /* Remove cells from the start and end of the page */ + if( iOld<iNew ){ + int nShift = pageFreeArray(pPg, iOld, iNew-iOld, pCArray); + memmove(pPg->aCellIdx, &pPg->aCellIdx[nShift*2], nCell*2); + nCell -= nShift; + } + if( iNewEnd < iOldEnd ){ + nCell -= pageFreeArray(pPg, iNewEnd, iOldEnd - iNewEnd, pCArray); + } + + pData = &aData[get2byteNotZero(&aData[hdr+5])]; + if( pData<pBegin ) goto editpage_fail; + + /* Add cells to the start of the page */ + if( iNew<iOld ){ + int nAdd = MIN(nNew,iOld-iNew); + assert( (iOld-iNew)<nNew || nCell==0 || CORRUPT_DB ); + pCellptr = pPg->aCellIdx; + memmove(&pCellptr[nAdd*2], pCellptr, nCell*2); + if( pageInsertArray( + pPg, pBegin, &pData, pCellptr, + iNew, nAdd, pCArray + ) ) goto editpage_fail; + nCell += nAdd; + } + + /* Add any overflow cells */ + for(i=0; i<pPg->nOverflow; i++){ + int iCell = (iOld + pPg->aiOvfl[i]) - iNew; + if( iCell>=0 && iCell<nNew ){ + pCellptr = &pPg->aCellIdx[iCell * 2]; + memmove(&pCellptr[2], pCellptr, (nCell - iCell) * 2); + nCell++; + if( pageInsertArray( + pPg, pBegin, &pData, pCellptr, + iCell+iNew, 1, pCArray + ) ) goto editpage_fail; + } + } + + /* Append cells to the end of the page */ + pCellptr = &pPg->aCellIdx[nCell*2]; + if( pageInsertArray( + pPg, pBegin, &pData, pCellptr, + iNew+nCell, nNew-nCell, pCArray + ) ) goto editpage_fail; + + pPg->nCell = nNew; + pPg->nOverflow = 0; + + put2byte(&aData[hdr+3], pPg->nCell); + put2byte(&aData[hdr+5], pData - aData); + +#ifdef SQLITE_DEBUG + for(i=0; i<nNew && !CORRUPT_DB; i++){ + u8 *pCell = pCArray->apCell[i+iNew]; + int iOff = get2byteAligned(&pPg->aCellIdx[i*2]); + if( pCell>=aData && pCell<&aData[pPg->pBt->usableSize] ){ + pCell = &pTmp[pCell - aData]; + } + assert( 0==memcmp(pCell, &aData[iOff], + pCArray->pRef->xCellSize(pCArray->pRef, pCArray->apCell[i+iNew])) ); + } +#endif + + return SQLITE_OK; + editpage_fail: + /* Unable to edit this page. Rebuild it from scratch instead. */ + populateCellCache(pCArray, iNew, nNew); + return rebuildPage(pPg, nNew, &pCArray->apCell[iNew], &pCArray->szCell[iNew]); } /* ** The following parameters determine how many adjacent pages get involved ** in a balancing operation. NN is the number of neighbors on either side @@ -44345,11 +62332,12 @@ assert( sqlite3_mutex_held(pPage->pBt->mutex) ); assert( sqlite3PagerIswriteable(pParent->pDbPage) ); assert( pPage->nOverflow==1 ); - if( pPage->nCell<=0 ) return SQLITE_CORRUPT_BKPT; + /* This error condition is now caught prior to reaching this function */ + if( NEVER(pPage->nCell==0) ) return SQLITE_CORRUPT_BKPT; /* Allocate a new page. This page will become the right-sibling of ** pPage. Make the parent page writable, so that the new divider cell ** may be inserted. If both these operations are successful, proceed. 
*/ @@ -44356,18 +62344,20 @@ rc = allocateBtreePage(pBt, &pNew, &pgnoNew, 0, 0); if( rc==SQLITE_OK ){ u8 *pOut = &pSpace[4]; - u8 *pCell = pPage->aOvfl[0].pCell; - u16 szCell = cellSizePtr(pPage, pCell); + u8 *pCell = pPage->apOvfl[0]; + u16 szCell = pPage->xCellSize(pPage, pCell); u8 *pStop; assert( sqlite3PagerIswriteable(pNew->pDbPage) ); assert( pPage->aData[0]==(PTF_INTKEY|PTF_LEAFDATA|PTF_LEAF) ); zeroPage(pNew, PTF_INTKEY|PTF_LEAFDATA|PTF_LEAF); - assemblePage(pNew, 1, &pCell, &szCell); + rc = rebuildPage(pNew, 1, &pCell, &szCell); + if( NEVER(rc) ) return rc; + pNew->nFree = pBt->usableSize - pNew->cellOffset - 2 - szCell; /* If this is an auto-vacuum database, update the pointer map ** with entries for the new page, and any pointer from the ** cell on the page to an overflow page. If either of these ** operations fails, the return code is set, but the contents @@ -44435,13 +62425,13 @@ for(j=0; j<pPage->nCell; j++){ CellInfo info; u8 *z; z = findCell(pPage, j); - btreeParseCellPtr(pPage, z, &info); - if( info.iOverflow ){ - Pgno ovfl = get4byte(&z[info.iOverflow]); + pPage->xParseCell(pPage, z, &info); + if( info.nLocal<info.nPayload ){ + Pgno ovfl = get4byte(&z[info.nSize-4]); ptrmapGet(pBt, ovfl, &e, &n); assert( n==pPage->pgno && e==PTRMAP_OVERFLOW1 ); } if( !pPage->leaf ){ Pgno child = get4byte(z); @@ -44466,11 +62456,11 @@ ** parent page stored in the pointer map is page pTo. If pFrom contained ** any cells with overflow page pointers, then the corresponding pointer ** map entries are also updated so that the parent page is page pTo. ** ** If pFrom is currently carrying any overflow cells (entries in the -** MemPage.aOvfl[] array), they are not copied to pTo. +** MemPage.apOvfl[] array), they are not copied to pTo. ** ** Before returning, page pTo is reinitialized using btreeInitPage(). ** ** The performance of this function is not critical. It is only used by ** the balance_shallower() and balance_deeper() procedures, neither of @@ -44487,11 +62477,11 @@ int iData; assert( pFrom->isInit ); assert( pFrom->nFree>=iToHdr ); - assert( get2byte(&aFrom[iFromHdr+5])<=pBt->usableSize ); + assert( get2byte(&aFrom[iFromHdr+5]) <= (int)pBt->usableSize ); /* Copy the b-tree node content from page pFrom to page pTo. */ iData = get2byte(&aFrom[iFromHdr+5]); memcpy(&aTo[iData], &aFrom[iData], pBt->usableSize-iData); memcpy(&aTo[iToHdr], &aFrom[iFromHdr], pFrom->cellOffset + 2*pFrom->nCell); @@ -44559,14 +62549,14 @@ */ static int balance_nonroot( MemPage *pParent, /* Parent page of siblings being balanced */ int iParentIdx, /* Index of "the page" in pParent */ u8 *aOvflSpace, /* page-size bytes of space for parent ovfl */ - int isRoot /* True if pParent is a root-page */ + int isRoot, /* True if pParent is a root-page */ + int bBulk /* True if this call is part of a bulk load */ ){ BtShared *pBt; /* The whole database */ - int nCell = 0; /* Number of cells in apCell[] */ int nMaxCells = 0; /* Allocated size of apCell, szCell, aFrom. */ int nNew = 0; /* Number of pages in apNew[] */ int nOld; /* Number of pages in apOld[] */ int i, j, k; /* Loop counters */ int nxDiv; /* Next divider slot in pParent->aCell[] */ @@ -44573,26 +62563,31 @@ int rc = SQLITE_OK; /* The return code */ u16 leafCorrection; /* 4 if pPage is a leaf. 
0 if not */ int leafData; /* True if pPage is a leaf of a LEAFDATA tree */ int usableSpace; /* Bytes in pPage beyond the header */ int pageFlags; /* Value of pPage->aData[0] */ - int subtotal; /* Subtotal of bytes in cells on one page */ int iSpace1 = 0; /* First unused byte of aSpace1[] */ int iOvflSpace = 0; /* First unused byte of aOvflSpace[] */ int szScratch; /* Size of scratch memory requested */ MemPage *apOld[NB]; /* pPage and up to two siblings */ - MemPage *apCopy[NB]; /* Private copies of apOld[] pages */ MemPage *apNew[NB+2]; /* pPage and up to NB siblings after balancing */ u8 *pRight; /* Location in parent of right-sibling pointer */ u8 *apDiv[NB-1]; /* Divider cells in pParent */ - int cntNew[NB+2]; /* Index in aCell[] of cell after i-th page */ - int szNew[NB+2]; /* Combined size of cells place on i-th page */ - u8 **apCell = 0; /* All cells begin balanced */ - u16 *szCell; /* Local size of all cells in apCell[] */ + int cntNew[NB+2]; /* Index in b.paCell[] of cell after i-th page */ + int cntOld[NB+2]; /* Old index in b.apCell[] */ + int szNew[NB+2]; /* Combined size of cells placed on i-th page */ u8 *aSpace1; /* Space for copies of dividers cells */ Pgno pgno; /* Temp var to store a page number in */ + u8 abDone[NB+2]; /* True after i'th new page is populated */ + Pgno aPgno[NB+2]; /* Page numbers of new pages before shuffling */ + Pgno aPgOrder[NB+2]; /* Copy of aPgno[] used for sorting pages */ + u16 aPgFlags[NB+2]; /* flags field of new pages before shuffling */ + CellArray b; /* Parsed information on cells being balanced */ + memset(abDone, 0, sizeof(abDone)); + b.nCell = 0; + b.apCell = 0; pBt = pParent->pBt; assert( sqlite3_mutex_held(pBt->mutex) ); assert( sqlite3PagerIswriteable(pParent->pDbPage) ); #if 0 @@ -44603,11 +62598,11 @@ ** this overflow cell is present, it must be the cell with ** index iParentIdx. This scenario comes about when this function ** is called (indirectly) from sqlite3BtreeDelete(). */ assert( pParent->nOverflow==0 || pParent->nOverflow==1 ); - assert( pParent->nOverflow==0 || pParent->aOvfl[0].idx==iParentIdx ); + assert( pParent->nOverflow==0 || pParent->aiOvfl[0]==iParentIdx ); if( !aOvflSpace ){ return SQLITE_NOMEM; } @@ -44623,62 +62618,64 @@ ** have already been removed. 
*/ i = pParent->nOverflow + pParent->nCell; if( i<2 ){ nxDiv = 0; - nOld = i+1; }else{ - nOld = 3; + assert( bBulk==0 || bBulk==1 ); if( iParentIdx==0 ){ nxDiv = 0; }else if( iParentIdx==i ){ - nxDiv = i-2; + nxDiv = i-2+bBulk; }else{ nxDiv = iParentIdx-1; } - i = 2; + i = 2-bBulk; } + nOld = i+1; if( (i+nxDiv-pParent->nOverflow)==pParent->nCell ){ pRight = &pParent->aData[pParent->hdrOffset+8]; }else{ pRight = findCell(pParent, i+nxDiv-pParent->nOverflow); } pgno = get4byte(pRight); while( 1 ){ - rc = getAndInitPage(pBt, pgno, &apOld[i]); + rc = getAndInitPage(pBt, pgno, &apOld[i], 0, 0); if( rc ){ memset(apOld, 0, (i+1)*sizeof(MemPage*)); goto balance_cleanup; } nMaxCells += 1+apOld[i]->nCell+apOld[i]->nOverflow; if( (i--)==0 ) break; - if( i+nxDiv==pParent->aOvfl[0].idx && pParent->nOverflow ){ - apDiv[i] = pParent->aOvfl[0].pCell; + if( i+nxDiv==pParent->aiOvfl[0] && pParent->nOverflow ){ + apDiv[i] = pParent->apOvfl[0]; pgno = get4byte(apDiv[i]); - szNew[i] = cellSizePtr(pParent, apDiv[i]); + szNew[i] = pParent->xCellSize(pParent, apDiv[i]); pParent->nOverflow = 0; }else{ apDiv[i] = findCell(pParent, i+nxDiv-pParent->nOverflow); pgno = get4byte(apDiv[i]); - szNew[i] = cellSizePtr(pParent, apDiv[i]); + szNew[i] = pParent->xCellSize(pParent, apDiv[i]); /* Drop the cell from the parent page. apDiv[i] still points to ** the cell within the parent, even though it has been dropped. ** This is safe because dropping a cell only overwrites the first ** four bytes of it, and this function does not need the first ** four bytes of the divider cell. So the pointer is safe to use ** later on. ** - ** Unless SQLite is compiled in secure-delete mode. In this case, + ** But not if we are in secure-delete mode. In secure-delete mode, ** the dropCell() routine will overwrite the entire cell with zeroes. ** In this case, temporarily copy the cell into the aOvflSpace[] ** buffer. It will be copied out again as soon as the aSpace[] buffer ** is allocated. */ - if( pBt->secureDelete ){ - int iOff = SQLITE_PTR_TO_INT(apDiv[i]) - SQLITE_PTR_TO_INT(pParent->aData); - if( (iOff+szNew[i])>pBt->usableSize ){ + if( pBt->btsFlags & BTS_SECURE_DELETE ){ + int iOff; + + iOff = SQLITE_PTR_TO_INT(apDiv[i]) - SQLITE_PTR_TO_INT(pParent->aData); + if( (iOff+szNew[i])>(int)pBt->usableSize ){ rc = SQLITE_CORRUPT_BKPT; memset(apOld, 0, (i+1)*sizeof(MemPage*)); goto balance_cleanup; }else{ memcpy(&aOvflSpace[iOff], apDiv[i], szNew[i]); @@ -44694,130 +62691,212 @@ nMaxCells = (nMaxCells + 3)&~3; /* ** Allocate space for memory structures */ - k = pBt->pageSize + ROUND8(sizeof(MemPage)); szScratch = - nMaxCells*sizeof(u8*) /* apCell */ - + nMaxCells*sizeof(u16) /* szCell */ - + pBt->pageSize /* aSpace1 */ - + k*nOld; /* Page copies (apCopy) */ - apCell = sqlite3ScratchMalloc( szScratch ); - if( apCell==0 ){ + nMaxCells*sizeof(u8*) /* b.apCell */ + + nMaxCells*sizeof(u16) /* b.szCell */ + + pBt->pageSize; /* aSpace1 */ + + /* EVIDENCE-OF: R-28375-38319 SQLite will never request a scratch buffer + ** that is more than 6 times the database page size. */ + assert( szScratch<=6*(int)pBt->pageSize ); + b.apCell = sqlite3ScratchMalloc( szScratch ); + if( b.apCell==0 ){ rc = SQLITE_NOMEM; goto balance_cleanup; } - szCell = (u16*)&apCell[nMaxCells]; - aSpace1 = (u8*)&szCell[nMaxCells]; + b.szCell = (u16*)&b.apCell[nMaxCells]; + aSpace1 = (u8*)&b.szCell[nMaxCells]; assert( EIGHT_BYTE_ALIGNMENT(aSpace1) ); /* ** Load pointers to all cells on sibling pages and the divider cells - ** into the local apCell[] array. 
Make copies of the divider cells - ** into space obtained from aSpace1[] and remove the the divider Cells - ** from pParent. + ** into the local b.apCell[] array. Make copies of the divider cells + ** into space obtained from aSpace1[]. The divider cells have already + ** been removed from pParent. ** ** If the siblings are on leaf pages, then the child pointers of the ** divider cells are stripped from the cells before they are copied - ** into aSpace1[]. In this way, all cells in apCell[] are without + ** into aSpace1[]. In this way, all cells in b.apCell[] are without ** child pointers. If siblings are not leaves, then all cell in - ** apCell[] include child pointers. Either way, all cells in apCell[] + ** b.apCell[] include child pointers. Either way, all cells in b.apCell[] ** are alike. ** ** leafCorrection: 4 if pPage is a leaf. 0 if pPage is not a leaf. ** leafData: 1 if pPage holds key+data and pParent holds only keys. */ - leafCorrection = apOld[0]->leaf*4; - leafData = apOld[0]->hasData; + b.pRef = apOld[0]; + leafCorrection = b.pRef->leaf*4; + leafData = b.pRef->intKeyLeaf; for(i=0; i<nOld; i++){ - int limit; - - /* Before doing anything else, take a copy of the i'th original sibling - ** The rest of this function will use data from the copies rather - ** that the original pages since the original pages will be in the - ** process of being overwritten. */ - MemPage *pOld = apCopy[i] = (MemPage*)&aSpace1[pBt->pageSize + k*i]; - memcpy(pOld, apOld[i], sizeof(MemPage)); - pOld->aData = (void*)&pOld[1]; - memcpy(pOld->aData, apOld[i]->aData, pBt->pageSize); - - limit = pOld->nCell+pOld->nOverflow; - for(j=0; j<limit; j++){ - assert( nCell<nMaxCells ); - apCell[nCell] = findOverflowCell(pOld, j); - szCell[nCell] = cellSizePtr(pOld, apCell[nCell]); - nCell++; - } + MemPage *pOld = apOld[i]; + int limit = pOld->nCell; + u8 *aData = pOld->aData; + u16 maskPage = pOld->maskPage; + u8 *piCell = aData + pOld->cellOffset; + u8 *piEnd; + + /* Verify that all sibling pages are of the same "type" (table-leaf, + ** table-interior, index-leaf, or index-interior). + */ + if( pOld->aData[0]!=apOld[0]->aData[0] ){ + rc = SQLITE_CORRUPT_BKPT; + goto balance_cleanup; + } + + /* Load b.apCell[] with pointers to all cells in pOld. If pOld + ** constains overflow cells, include them in the b.apCell[] array + ** in the correct spot. + ** + ** Note that when there are multiple overflow cells, it is always the + ** case that they are sequential and adjacent. This invariant arises + ** because multiple overflows can only occurs when inserting divider + ** cells into a parent on a prior balance, and divider cells are always + ** adjacent and are inserted in order. There is an assert() tagged + ** with "NOTE 1" in the overflow cell insertion loop to prove this + ** invariant. + ** + ** This must be done in advance. Once the balance starts, the cell + ** offset section of the btree page will be overwritten and we will no + ** long be able to find the cells if a pointer to each cell is not saved + ** first. 
+ */ + memset(&b.szCell[b.nCell], 0, sizeof(b.szCell[0])*(limit+pOld->nOverflow)); + if( pOld->nOverflow>0 ){ + limit = pOld->aiOvfl[0]; + for(j=0; j<limit; j++){ + b.apCell[b.nCell] = aData + (maskPage & get2byteAligned(piCell)); + piCell += 2; + b.nCell++; + } + for(k=0; k<pOld->nOverflow; k++){ + assert( k==0 || pOld->aiOvfl[k-1]+1==pOld->aiOvfl[k] );/* NOTE 1 */ + b.apCell[b.nCell] = pOld->apOvfl[k]; + b.nCell++; + } + } + piEnd = aData + pOld->cellOffset + 2*pOld->nCell; + while( piCell<piEnd ){ + assert( b.nCell<nMaxCells ); + b.apCell[b.nCell] = aData + (maskPage & get2byteAligned(piCell)); + piCell += 2; + b.nCell++; + } + + cntOld[i] = b.nCell; if( i<nOld-1 && !leafData){ u16 sz = (u16)szNew[i]; u8 *pTemp; - assert( nCell<nMaxCells ); - szCell[nCell] = sz; + assert( b.nCell<nMaxCells ); + b.szCell[b.nCell] = sz; pTemp = &aSpace1[iSpace1]; iSpace1 += sz; - assert( sz<=pBt->pageSize/4 ); - assert( iSpace1<=pBt->pageSize ); + assert( sz<=pBt->maxLocal+23 ); + assert( iSpace1 <= (int)pBt->pageSize ); memcpy(pTemp, apDiv[i], sz); - apCell[nCell] = pTemp+leafCorrection; + b.apCell[b.nCell] = pTemp+leafCorrection; assert( leafCorrection==0 || leafCorrection==4 ); - szCell[nCell] = szCell[nCell] - leafCorrection; + b.szCell[b.nCell] = b.szCell[b.nCell] - leafCorrection; if( !pOld->leaf ){ assert( leafCorrection==0 ); assert( pOld->hdrOffset==0 ); /* The right pointer of the child page pOld becomes the left ** pointer of the divider cell */ - memcpy(apCell[nCell], &pOld->aData[8], 4); + memcpy(b.apCell[b.nCell], &pOld->aData[8], 4); }else{ assert( leafCorrection==4 ); - if( szCell[nCell]<4 ){ - /* Do not allow any cells smaller than 4 bytes. */ - szCell[nCell] = 4; + while( b.szCell[b.nCell]<4 ){ + /* Do not allow any cells smaller than 4 bytes. If a smaller cell + ** does exist, pad it with 0x00 bytes. */ + assert( b.szCell[b.nCell]==3 || CORRUPT_DB ); + assert( b.apCell[b.nCell]==&aSpace1[iSpace1-3] || CORRUPT_DB ); + aSpace1[iSpace1++] = 0x00; + b.szCell[b.nCell]++; } } - nCell++; + b.nCell++; } } /* - ** Figure out the number of pages needed to hold all nCell cells. + ** Figure out the number of pages needed to hold all b.nCell cells. ** Store this number in "k". Also compute szNew[] which is the total ** size of all cells on the i-th page and cntNew[] which is the index - ** in apCell[] of the cell that divides page i from page i+1. - ** cntNew[k] should equal nCell. + ** in b.apCell[] of the cell that divides page i from page i+1. + ** cntNew[k] should equal b.nCell. ** ** Values computed by this block: ** ** k: The total number of sibling pages ** szNew[i]: Spaced used on the i-th sibling page. - ** cntNew[i]: Index in apCell[] and szCell[] for the first cell to + ** cntNew[i]: Index in b.apCell[] and b.szCell[] for the first cell to ** the right of the i-th sibling page. ** usableSpace: Number of bytes of space available on each sibling. 
** */ usableSpace = pBt->usableSize - 12 + leafCorrection; - for(subtotal=k=i=0; i<nCell; i++){ - assert( i<nMaxCells ); - subtotal += szCell[i] + 2; - if( subtotal > usableSpace ){ - szNew[k] = subtotal - szCell[i]; - cntNew[k] = i; - if( leafData ){ i--; } - subtotal = 0; - k++; - if( k>NB+1 ){ rc = SQLITE_CORRUPT_BKPT; goto balance_cleanup; } - } - } - szNew[k] = subtotal; - cntNew[k] = nCell; - k++; + for(i=0; i<nOld; i++){ + MemPage *p = apOld[i]; + szNew[i] = usableSpace - p->nFree; + if( szNew[i]<0 ){ rc = SQLITE_CORRUPT_BKPT; goto balance_cleanup; } + for(j=0; j<p->nOverflow; j++){ + szNew[i] += 2 + p->xCellSize(p, p->apOvfl[j]); + } + cntNew[i] = cntOld[i]; + } + k = nOld; + for(i=0; i<k; i++){ + int sz; + while( szNew[i]>usableSpace ){ + if( i+1>=k ){ + k = i+2; + if( k>NB+2 ){ rc = SQLITE_CORRUPT_BKPT; goto balance_cleanup; } + szNew[k-1] = 0; + cntNew[k-1] = b.nCell; + } + sz = 2 + cachedCellSize(&b, cntNew[i]-1); + szNew[i] -= sz; + if( !leafData ){ + if( cntNew[i]<b.nCell ){ + sz = 2 + cachedCellSize(&b, cntNew[i]); + }else{ + sz = 0; + } + } + szNew[i+1] += sz; + cntNew[i]--; + } + while( cntNew[i]<b.nCell ){ + sz = 2 + cachedCellSize(&b, cntNew[i]); + if( szNew[i]+sz>usableSpace ) break; + szNew[i] += sz; + cntNew[i]++; + if( !leafData ){ + if( cntNew[i]<b.nCell ){ + sz = 2 + cachedCellSize(&b, cntNew[i]); + }else{ + sz = 0; + } + } + szNew[i+1] -= sz; + } + if( cntNew[i]>=b.nCell ){ + k = i+1; + }else if( cntNew[i] <= (i>0 ? cntNew[i-1] : 0) ){ + rc = SQLITE_CORRUPT_BKPT; + goto balance_cleanup; + } + } /* ** The packing computed by the previous block is biased toward the siblings - ** on the left side. The left siblings are always nearly full, while the - ** right-most sibling might be nearly empty. This block of code attempts - ** to adjust the packing of siblings to get a better balance. + ** on the left side (siblings with smaller keys). The left siblings are + ** always nearly full, while the right-most sibling might be nearly empty. + ** The next block of code attempts to adjust the packing of siblings to + ** get a better balance. ** ** This adjustment is more than an optimization. The packing above might ** be so out of balance as to be illegal. For example, the right-most ** sibling might be completely empty. This adjustment is not optional. */ @@ -44827,42 +62906,50 @@ int r; /* Index of right-most cell in left sibling */ int d; /* Index of first cell to the left of right sibling */ r = cntNew[i-1] - 1; d = r + 1 - leafData; - assert( d<nMaxCells ); - assert( r<nMaxCells ); - while( szRight==0 || szRight+szCell[d]+2<=szLeft-(szCell[r]+2) ){ - szRight += szCell[d] + 2; - szLeft -= szCell[r] + 2; - cntNew[i-1]--; - r = cntNew[i-1] - 1; - d = r + 1 - leafData; - } + (void)cachedCellSize(&b, d); + do{ + assert( d<nMaxCells ); + assert( r<nMaxCells ); + (void)cachedCellSize(&b, r); + if( szRight!=0 + && (bBulk || szRight+b.szCell[d]+2 > szLeft-(b.szCell[r]+2)) ){ + break; + } + szRight += b.szCell[d] + 2; + szLeft -= b.szCell[r] + 2; + cntNew[i-1] = r; + r--; + d--; + }while( r>=0 ); szNew[i] = szRight; szNew[i-1] = szLeft; + if( cntNew[i-1] <= (i>1 ? cntNew[i-2] : 0) ){ + rc = SQLITE_CORRUPT_BKPT; + goto balance_cleanup; + } } - /* Either we found one or more cells (cntnew[0])>0) or pPage is - ** a virtual root page. A virtual root page is when the real root - ** page is page 1 and we are the only child of that page. 
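/* Editor's sketch, not part of the patch: the re-packing idea described in
** the comments above, simplified to two sibling leaf pages.  A first pass
** packs cells left-to-right, which can leave the right sibling nearly empty;
** the adjustment pass then moves trailing cells from the left sibling to the
** right one until the two are roughly even.  Cell sizes are assumed, and the
** 2-byte cell-pointer cost per cell is included as in the real code. */
#include <stdio.h>

int main(void){
  const int szCell[] = {40, 40, 40, 40, 40, 40, 40, 10};   /* 8 cells */
  int nCell = 8;
  int szLeft = 0, szRight = 0;
  int split;                      /* index of first cell on the right page */

  for(int i=0; i<nCell; i++) szLeft += szCell[i] + 2;   /* left-biased packing */
  split = nCell;

  /* move cells right while the right page stays no larger than the left */
  while( split>1
      && (szRight==0
          || szRight + szCell[split-1] + 2 <= szLeft - (szCell[split-1] + 2)) ){
    split--;
    szRight += szCell[split] + 2;
    szLeft  -= szCell[split] + 2;
  }
  printf("split at cell %d: left=%d bytes, right=%d bytes\n",
         split, szLeft, szRight);
  return 0;
}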
+ /* Sanity check: For a non-corrupt database file one of the follwing + ** must be true: + ** (1) We found one or more cells (cntNew[0])>0), or + ** (2) pPage is a virtual root page. A virtual root page is when + ** the real root page is page 1 and we are the only child of + ** that page. */ - assert( cntNew[0]>0 || (pParent->pgno==1 && pParent->nCell==0) ); - - TRACE(("BALANCE: old: %d %d %d ", - apOld[0]->pgno, - nOld>=2 ? apOld[1]->pgno : 0, - nOld>=3 ? apOld[2]->pgno : 0 + assert( cntNew[0]>0 || (pParent->pgno==1 && pParent->nCell==0) || CORRUPT_DB); + TRACE(("BALANCE: old: %d(nc=%d) %d(nc=%d) %d(nc=%d)\n", + apOld[0]->pgno, apOld[0]->nCell, + nOld>=2 ? apOld[1]->pgno : 0, nOld>=2 ? apOld[1]->nCell : 0, + nOld>=3 ? apOld[2]->pgno : 0, nOld>=3 ? apOld[2]->nCell : 0 )); /* ** Allocate k new pages. Reuse old pages where possible. */ - if( apOld[0]->pgno<=1 ){ - rc = SQLITE_CORRUPT_BKPT; - goto balance_cleanup; - } pageFlags = apOld[0]->aData[0]; for(i=0; i<k; i++){ MemPage *pNew; if( i<nOld ){ pNew = apNew[i] = apOld[i]; @@ -44870,14 +62957,16 @@ rc = sqlite3PagerWrite(pNew->pDbPage); nNew++; if( rc ) goto balance_cleanup; }else{ assert( i>0 ); - rc = allocateBtreePage(pBt, &pNew, &pgno, pgno, 0); + rc = allocateBtreePage(pBt, &pNew, &pgno, (bBulk ? 1 : pgno), 0); if( rc ) goto balance_cleanup; + zeroPage(pNew, pageFlags); apNew[i] = pNew; nNew++; + cntOld[i] = b.nCell; /* Set the pointer-map entry for the new sibling page. */ if( ISAUTOVACUUM ){ ptrmapPut(pBt, pNew->pgno, PTRMAP_BTREE, pParent->pgno, &rc); if( rc!=SQLITE_OK ){ @@ -44885,141 +62974,253 @@ } } } } - /* Free any old pages that were not reused as new pages. - */ - while( i<nOld ){ - freePage(apOld[i], &rc); - if( rc ) goto balance_cleanup; - releasePage(apOld[i]); - apOld[i] = 0; - i++; - } - /* - ** Put the new pages in accending order. This helps to - ** keep entries in the disk file in order so that a scan - ** of the table is a linear scan through the file. That - ** in turn helps the operating system to deliver pages - ** from the disk more rapidly. - ** - ** An O(n^2) insertion sort algorithm is used, but since - ** n is never more than NB (a small constant), that should - ** not be a problem. - ** - ** When NB==3, this one optimization makes the database - ** about 25% faster for large insertions and deletions. - */ - for(i=0; i<k-1; i++){ - int minV = apNew[i]->pgno; - int minI = i; - for(j=i+1; j<k; j++){ - if( apNew[j]->pgno<(unsigned)minV ){ - minI = j; - minV = apNew[j]->pgno; - } - } - if( minI>i ){ - int t; - MemPage *pT; - t = apNew[i]->pgno; - pT = apNew[i]; - apNew[i] = apNew[minI]; - apNew[minI] = pT; - } - } - TRACE(("new: %d(%d) %d(%d) %d(%d) %d(%d) %d(%d)\n", - apNew[0]->pgno, szNew[0], + ** Reassign page numbers so that the new pages are in ascending order. + ** This helps to keep entries in the disk file in order so that a scan + ** of the table is closer to a linear scan through the file. That in turn + ** helps the operating system to deliver pages from the disk more rapidly. + ** + ** An O(n^2) insertion sort algorithm is used, but since n is never more + ** than (NB+2) (a small constant), that should not be a problem. + ** + ** When NB==3, this one optimization makes the database about 25% faster + ** for large insertions and deletions. + */ + for(i=0; i<nNew; i++){ + aPgOrder[i] = aPgno[i] = apNew[i]->pgno; + aPgFlags[i] = apNew[i]->pDbPage->flags; + for(j=0; j<i; j++){ + if( aPgno[j]==aPgno[i] ){ + /* This branch is taken if the set of sibling pages somehow contains + ** duplicate entries. 
This can happen if the database is corrupt. + ** It would be simpler to detect this as part of the loop below, but + ** we do the detection here in order to avoid populating the pager + ** cache with two separate objects associated with the same + ** page number. */ + assert( CORRUPT_DB ); + rc = SQLITE_CORRUPT_BKPT; + goto balance_cleanup; + } + } + } + for(i=0; i<nNew; i++){ + int iBest = 0; /* aPgno[] index of page number to use */ + for(j=1; j<nNew; j++){ + if( aPgOrder[j]<aPgOrder[iBest] ) iBest = j; + } + pgno = aPgOrder[iBest]; + aPgOrder[iBest] = 0xffffffff; + if( iBest!=i ){ + if( iBest>i ){ + sqlite3PagerRekey(apNew[iBest]->pDbPage, pBt->nPage+iBest+1, 0); + } + sqlite3PagerRekey(apNew[i]->pDbPage, pgno, aPgFlags[iBest]); + apNew[i]->pgno = pgno; + } + } + + TRACE(("BALANCE: new: %d(%d nc=%d) %d(%d nc=%d) %d(%d nc=%d) " + "%d(%d nc=%d) %d(%d nc=%d)\n", + apNew[0]->pgno, szNew[0], cntNew[0], nNew>=2 ? apNew[1]->pgno : 0, nNew>=2 ? szNew[1] : 0, + nNew>=2 ? cntNew[1] - cntNew[0] - !leafData : 0, nNew>=3 ? apNew[2]->pgno : 0, nNew>=3 ? szNew[2] : 0, + nNew>=3 ? cntNew[2] - cntNew[1] - !leafData : 0, nNew>=4 ? apNew[3]->pgno : 0, nNew>=4 ? szNew[3] : 0, - nNew>=5 ? apNew[4]->pgno : 0, nNew>=5 ? szNew[4] : 0)); + nNew>=4 ? cntNew[3] - cntNew[2] - !leafData : 0, + nNew>=5 ? apNew[4]->pgno : 0, nNew>=5 ? szNew[4] : 0, + nNew>=5 ? cntNew[4] - cntNew[3] - !leafData : 0 + )); assert( sqlite3PagerIswriteable(pParent->pDbPage) ); put4byte(pRight, apNew[nNew-1]->pgno); - /* - ** Evenly distribute the data in apCell[] across the new pages. - ** Insert divider cells into pParent as necessary. + /* If the sibling pages are not leaves, ensure that the right-child pointer + ** of the right-most new sibling page is set to the value that was + ** originally in the same field of the right-most old sibling page. */ + if( (pageFlags & PTF_LEAF)==0 && nOld!=nNew ){ + MemPage *pOld = (nNew>nOld ? apNew : apOld)[nOld-1]; + memcpy(&apNew[nNew-1]->aData[8], &pOld->aData[8], 4); + } + + /* Make any required updates to pointer map entries associated with + ** cells stored on sibling pages following the balance operation. Pointer + ** map entries associated with divider cells are set by the insertCell() + ** routine. The associated pointer map entries are: + ** + ** a) if the cell contains a reference to an overflow chain, the + ** entry associated with the first page in the overflow chain, and + ** + ** b) if the sibling pages are not leaves, the child page associated + ** with the cell. + ** + ** If the sibling pages are not leaves, then the pointer map entry + ** associated with the right-child of each sibling may also need to be + ** updated. This happens below, after the sibling pages have been + ** populated, not here. */ - j = 0; - for(i=0; i<nNew; i++){ - /* Assemble the new sibling page. */ + if( ISAUTOVACUUM ){ + MemPage *pNew = apNew[0]; + u8 *aOld = pNew->aData; + int cntOldNext = pNew->nCell + pNew->nOverflow; + int usableSize = pBt->usableSize; + int iNew = 0; + int iOld = 0; + + for(i=0; i<b.nCell; i++){ + u8 *pCell = b.apCell[i]; + if( i==cntOldNext ){ + MemPage *pOld = (++iOld)<nNew ? apNew[iOld] : apOld[iOld]; + cntOldNext += pOld->nCell + pOld->nOverflow + !leafData; + aOld = pOld->aData; + } + if( i==cntNew[iNew] ){ + pNew = apNew[++iNew]; + if( !leafData ) continue; + } + + /* Cell pCell is destined for new sibling page pNew. Originally, it + ** was either part of sibling page iOld (possibly an overflow cell), + ** or else the divider cell to the left of sibling page iOld. 
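/* Editor's sketch, not part of the patch: the page-number shuffling performed
** above.  The balancer keeps the sibling pages it already has, but reassigns
** their page numbers (via sqlite3PagerRekey in the real code) so that, left
** to right, the siblings use the available numbers in ascending order, which
** keeps a table scan closer to a forward scan of the file.  The page numbers
** below are assumed values. */
#include <stdio.h>

int main(void){
  unsigned aPgOrder[] = {37, 5, 12};     /* page numbers as allocated       */
  unsigned assigned[3];
  int nNew = 3;

  for(int i=0; i<nNew; i++){
    int iBest = 0;
    for(int j=1; j<nNew; j++){
      if( aPgOrder[j]<aPgOrder[iBest] ) iBest = j;
    }
    assigned[i] = aPgOrder[iBest];       /* i-th sibling gets i-th smallest */
    aPgOrder[iBest] = 0xffffffff;        /* mark as used                    */
  }
  printf("sibling page numbers, left to right: %u %u %u\n",
         assigned[0], assigned[1], assigned[2]);   /* -> 5 12 37 */
  return 0;
}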
So, + ** if sibling page iOld had the same page number as pNew, and if + ** pCell really was a part of sibling page iOld (not a divider or + ** overflow cell), we can skip updating the pointer map entries. */ + if( iOld>=nNew + || pNew->pgno!=aPgno[iOld] + || !SQLITE_WITHIN(pCell,aOld,&aOld[usableSize]) + ){ + if( !leafCorrection ){ + ptrmapPut(pBt, get4byte(pCell), PTRMAP_BTREE, pNew->pgno, &rc); + } + if( cachedCellSize(&b,i)>pNew->minLocal ){ + ptrmapPutOvflPtr(pNew, pCell, &rc); + } + if( rc ) goto balance_cleanup; + } + } + } + + /* Insert new divider cells into pParent. */ + for(i=0; i<nNew-1; i++){ + u8 *pCell; + u8 *pTemp; + int sz; MemPage *pNew = apNew[i]; - assert( j<nMaxCells ); - zeroPage(pNew, pageFlags); - assemblePage(pNew, cntNew[i]-j, &apCell[j], &szCell[j]); - assert( pNew->nCell>0 || (nNew==1 && cntNew[0]==0) ); - assert( pNew->nOverflow==0 ); - j = cntNew[i]; - /* If the sibling page assembled above was not the right-most sibling, - ** insert a divider cell into the parent page. - */ - assert( i<nNew-1 || j==nCell ); - if( j<nCell ){ - u8 *pCell; - u8 *pTemp; - int sz; - - assert( j<nMaxCells ); - pCell = apCell[j]; - sz = szCell[j] + leafCorrection; - pTemp = &aOvflSpace[iOvflSpace]; - if( !pNew->leaf ){ - memcpy(&pNew->aData[8], pCell, 4); - }else if( leafData ){ - /* If the tree is a leaf-data tree, and the siblings are leaves, - ** then there is no divider cell in apCell[]. Instead, the divider - ** cell consists of the integer key for the right-most cell of - ** the sibling-page assembled above only. - */ - CellInfo info; - j--; - btreeParseCellPtr(pNew, apCell[j], &info); - pCell = pTemp; - sz = 4 + putVarint(&pCell[4], info.nKey); - pTemp = 0; - }else{ - pCell -= 4; - /* Obscure case for non-leaf-data trees: If the cell at pCell was - ** previously stored on a leaf node, and its reported size was 4 - ** bytes, then it may actually be smaller than this - ** (see btreeParseCellPtr(), 4 bytes is the minimum size of - ** any cell). But it is important to pass the correct size to - ** insertCell(), so reparse the cell now. - ** - ** Note that this can never happen in an SQLite data file, as all - ** cells are at least 4 bytes. It only happens in b-trees used - ** to evaluate "IN (SELECT ...)" and similar clauses. - */ - if( szCell[j]==4 ){ - assert(leafCorrection==4); - sz = cellSizePtr(pParent, pCell); - } - } - iOvflSpace += sz; - assert( sz<=pBt->pageSize/4 ); - assert( iOvflSpace<=pBt->pageSize ); - insertCell(pParent, nxDiv, pCell, sz, pTemp, pNew->pgno, &rc); - if( rc!=SQLITE_OK ) goto balance_cleanup; - assert( sqlite3PagerIswriteable(pParent->pDbPage) ); - - j++; - nxDiv++; - } - } - assert( j==nCell ); + assert( j<nMaxCells ); + assert( b.apCell[j]!=0 ); + pCell = b.apCell[j]; + sz = b.szCell[j] + leafCorrection; + pTemp = &aOvflSpace[iOvflSpace]; + if( !pNew->leaf ){ + memcpy(&pNew->aData[8], pCell, 4); + }else if( leafData ){ + /* If the tree is a leaf-data tree, and the siblings are leaves, + ** then there is no divider cell in b.apCell[]. Instead, the divider + ** cell consists of the integer key for the right-most cell of + ** the sibling-page assembled above only. 
+ */ + CellInfo info; + j--; + pNew->xParseCell(pNew, b.apCell[j], &info); + pCell = pTemp; + sz = 4 + putVarint(&pCell[4], info.nKey); + pTemp = 0; + }else{ + pCell -= 4; + /* Obscure case for non-leaf-data trees: If the cell at pCell was + ** previously stored on a leaf node, and its reported size was 4 + ** bytes, then it may actually be smaller than this + ** (see btreeParseCellPtr(), 4 bytes is the minimum size of + ** any cell). But it is important to pass the correct size to + ** insertCell(), so reparse the cell now. + ** + ** Note that this can never happen in an SQLite data file, as all + ** cells are at least 4 bytes. It only happens in b-trees used + ** to evaluate "IN (SELECT ...)" and similar clauses. + */ + if( b.szCell[j]==4 ){ + assert(leafCorrection==4); + sz = pParent->xCellSize(pParent, pCell); + } + } + iOvflSpace += sz; + assert( sz<=pBt->maxLocal+23 ); + assert( iOvflSpace <= (int)pBt->pageSize ); + insertCell(pParent, nxDiv+i, pCell, sz, pTemp, pNew->pgno, &rc); + if( rc!=SQLITE_OK ) goto balance_cleanup; + assert( sqlite3PagerIswriteable(pParent->pDbPage) ); + } + + /* Now update the actual sibling pages. The order in which they are updated + ** is important, as this code needs to avoid disrupting any page from which + ** cells may still to be read. In practice, this means: + ** + ** (1) If cells are moving left (from apNew[iPg] to apNew[iPg-1]) + ** then it is not safe to update page apNew[iPg] until after + ** the left-hand sibling apNew[iPg-1] has been updated. + ** + ** (2) If cells are moving right (from apNew[iPg] to apNew[iPg+1]) + ** then it is not safe to update page apNew[iPg] until after + ** the right-hand sibling apNew[iPg+1] has been updated. + ** + ** If neither of the above apply, the page is safe to update. + ** + ** The iPg value in the following loop starts at nNew-1 goes down + ** to 0, then back up to nNew-1 again, thus making two passes over + ** the pages. On the initial downward pass, only condition (1) above + ** needs to be tested because (2) will always be true from the previous + ** step. On the upward pass, both conditions are always true, so the + ** upwards pass simply processes pages that were missed on the downward + ** pass. + */ + for(i=1-nNew; i<nNew; i++){ + int iPg = i<0 ? -i : i; + assert( iPg>=0 && iPg<nNew ); + if( abDone[iPg] ) continue; /* Skip pages already processed */ + if( i>=0 /* On the upwards pass, or... */ + || cntOld[iPg-1]>=cntNew[iPg-1] /* Condition (1) is true */ + ){ + int iNew; + int iOld; + int nNewCell; + + /* Verify condition (1): If cells are moving left, update iPg + ** only after iPg-1 has already been updated. */ + assert( iPg==0 || cntOld[iPg-1]>=cntNew[iPg-1] || abDone[iPg-1] ); + + /* Verify condition (2): If cells are moving right, update iPg + ** only after iPg+1 has already been updated. */ + assert( cntNew[iPg]>=cntOld[iPg] || abDone[iPg+1] ); + + if( iPg==0 ){ + iNew = iOld = 0; + nNewCell = cntNew[0]; + }else{ + iOld = iPg<nOld ? 
(cntOld[iPg-1] + !leafData) : b.nCell; + iNew = cntNew[iPg-1] + !leafData; + nNewCell = cntNew[iPg] - iNew; + } + + rc = editPage(apNew[iPg], iOld, iNew, nNewCell, &b); + if( rc ) goto balance_cleanup; + abDone[iPg]++; + apNew[iPg]->nFree = usableSpace-szNew[iPg]; + assert( apNew[iPg]->nOverflow==0 ); + assert( apNew[iPg]->nCell==nNewCell ); + } + } + + /* All pages have been processed exactly once */ + assert( memcmp(abDone, "\01\01\01\01\01", nNew)==0 ); + assert( nOld>0 ); assert( nNew>0 ); - if( (pageFlags & PTF_LEAF)==0 ){ - u8 *zChild = &apCopy[nOld-1]->aData[8]; - memcpy(&apNew[nNew-1]->aData[8], zChild, 4); - } if( isRoot && pParent->nCell==0 && pParent->hdrOffset<=apNew[0]->nFree ){ /* The root page of the b-tree now contains no cells. The only sibling ** page is the right-child of the parent. Copy the contents of the ** child page into the parent, decreasing the overall height of the @@ -45028,134 +63229,60 @@ ** ** If this is an auto-vacuum database, the call to copyNodeContent() ** sets all pointer-map entries corresponding to database image pages ** for which the pointer is stored within the content being copied. ** - ** The second assert below verifies that the child page is defragmented - ** (it must be, as it was just reconstructed using assemblePage()). This - ** is important if the parent page happens to be page 1 of the database - ** image. */ - assert( nNew==1 ); + ** It is critical that the child page be defragmented before being + ** copied into the parent, because if the parent is page 1 then it will + ** by smaller than the child due to the database header, and so all the + ** free space needs to be up front. + */ + assert( nNew==1 || CORRUPT_DB ); + rc = defragmentPage(apNew[0]); + testcase( rc!=SQLITE_OK ); assert( apNew[0]->nFree == - (get2byte(&apNew[0]->aData[5])-apNew[0]->cellOffset-apNew[0]->nCell*2) + (get2byte(&apNew[0]->aData[5])-apNew[0]->cellOffset-apNew[0]->nCell*2) + || rc!=SQLITE_OK ); copyNodeContent(apNew[0], pParent, &rc); freePage(apNew[0], &rc); - }else if( ISAUTOVACUUM ){ - /* Fix the pointer-map entries for all the cells that were shifted around. - ** There are several different types of pointer-map entries that need to - ** be dealt with by this routine. Some of these have been set already, but - ** many have not. The following is a summary: - ** - ** 1) The entries associated with new sibling pages that were not - ** siblings when this function was called. These have already - ** been set. We don't need to worry about old siblings that were - ** moved to the free-list - the freePage() code has taken care - ** of those. - ** - ** 2) The pointer-map entries associated with the first overflow - ** page in any overflow chains used by new divider cells. These - ** have also already been taken care of by the insertCell() code. - ** - ** 3) If the sibling pages are not leaves, then the child pages of - ** cells stored on the sibling pages may need to be updated. - ** - ** 4) If the sibling pages are not internal intkey nodes, then any - ** overflow pages used by these cells may need to be updated - ** (internal intkey nodes never contain pointers to overflow pages). - ** - ** 5) If the sibling pages are not leaves, then the pointer-map - ** entries for the right-child pages of each sibling may need - ** to be updated. - ** - ** Cases 1 and 2 are dealt with above by other code. The next - ** block deals with cases 3 and 4 and the one after that, case 5. 
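/* Editor's sketch, not part of the patch: the order in which the sibling
** update loop above visits the new pages.  i runs from 1-nNew up to nNew-1
** and iPg is its absolute value, giving a downward pass followed by an
** upward pass; the abDone[] flags ensure each page is rewritten exactly
** once, on whichever pass it first becomes safe to update. */
#include <stdio.h>

int main(void){
  int nNew = 3;
  unsigned char abDone[5] = {0};

  for(int i=1-nNew; i<nNew; i++){
    int iPg = i<0 ? -i : i;
    if( abDone[iPg] ) continue;
    /* the real balance also tests whether cells still need to be read from a
    ** neighbouring page before committing to update iPg on this pass */
    printf("pass %s: update sibling %d\n", i<0 ? "down" : "up", iPg);
    abDone[iPg] = 1;
  }
  return 0;
}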
Since - ** setting a pointer map entry is a relatively expensive operation, this - ** code only sets pointer map entries for child or overflow pages that have - ** actually moved between pages. */ - MemPage *pNew = apNew[0]; - MemPage *pOld = apCopy[0]; - int nOverflow = pOld->nOverflow; - int iNextOld = pOld->nCell + nOverflow; - int iOverflow = (nOverflow ? pOld->aOvfl[0].idx : -1); - j = 0; /* Current 'old' sibling page */ - k = 0; /* Current 'new' sibling page */ - for(i=0; i<nCell; i++){ - int isDivider = 0; - while( i==iNextOld ){ - /* Cell i is the cell immediately following the last cell on old - ** sibling page j. If the siblings are not leaf pages of an - ** intkey b-tree, then cell i was a divider cell. */ - pOld = apCopy[++j]; - iNextOld = i + !leafData + pOld->nCell + pOld->nOverflow; - if( pOld->nOverflow ){ - nOverflow = pOld->nOverflow; - iOverflow = i + !leafData + pOld->aOvfl[0].idx; - } - isDivider = !leafData; - } - - assert(nOverflow>0 || iOverflow<i ); - assert(nOverflow<2 || pOld->aOvfl[0].idx==pOld->aOvfl[1].idx-1); - assert(nOverflow<3 || pOld->aOvfl[1].idx==pOld->aOvfl[2].idx-1); - if( i==iOverflow ){ - isDivider = 1; - if( (--nOverflow)>0 ){ - iOverflow++; - } - } - - if( i==cntNew[k] ){ - /* Cell i is the cell immediately following the last cell on new - ** sibling page k. If the siblings are not leaf pages of an - ** intkey b-tree, then cell i is a divider cell. */ - pNew = apNew[++k]; - if( !leafData ) continue; - } - assert( j<nOld ); - assert( k<nNew ); - - /* If the cell was originally divider cell (and is not now) or - ** an overflow cell, or if the cell was located on a different sibling - ** page before the balancing, then the pointer map entries associated - ** with any child or overflow pages need to be updated. */ - if( isDivider || pOld->pgno!=pNew->pgno ){ - if( !leafCorrection ){ - ptrmapPut(pBt, get4byte(apCell[i]), PTRMAP_BTREE, pNew->pgno, &rc); - } - if( szCell[i]>pNew->minLocal ){ - ptrmapPutOvflPtr(pNew, apCell[i], &rc); - } - } - } - - if( !leafCorrection ){ - for(i=0; i<nNew; i++){ - u32 key = get4byte(&apNew[i]->aData[8]); - ptrmapPut(pBt, key, PTRMAP_BTREE, apNew[i]->pgno, &rc); - } - } + }else if( ISAUTOVACUUM && !leafCorrection ){ + /* Fix the pointer map entries associated with the right-child of each + ** sibling page. All other pointer map entries have already been taken + ** care of. */ + for(i=0; i<nNew; i++){ + u32 key = get4byte(&apNew[i]->aData[8]); + ptrmapPut(pBt, key, PTRMAP_BTREE, apNew[i]->pgno, &rc); + } + } + + assert( pParent->isInit ); + TRACE(("BALANCE: finished: old=%d new=%d cells=%d\n", + nOld, nNew, b.nCell)); + + /* Free any old pages that were not reused as new pages. + */ + for(i=nNew; i<nOld; i++){ + freePage(apOld[i], &rc); + } #if 0 + if( ISAUTOVACUUM && rc==SQLITE_OK && apNew[0]->isInit ){ /* The ptrmapCheckPages() contains assert() statements that verify that ** all pointer map pages are set correctly. This is helpful while ** debugging. This is usually disabled because a corrupt database may ** cause an assert() statement to fail. */ ptrmapCheckPages(apNew, nNew); ptrmapCheckPages(&pParent, 1); + } #endif - } - - assert( pParent->isInit ); - TRACE(("BALANCE: finished: old=%d new=%d cells=%d\n", - nOld, nNew, nCell)); /* ** Cleanup before returning. 
*/ balance_cleanup: - sqlite3ScratchFree(apCell); + sqlite3ScratchFree(b.apCell); for(i=0; i<nOld; i++){ releasePage(apOld[i]); } for(i=0; i<nNew; i++){ releasePage(apNew[i]); @@ -45215,11 +63342,14 @@ assert( pChild->nCell==pRoot->nCell ); TRACE(("BALANCE: copy root %d into %d\n", pRoot->pgno, pChild->pgno)); /* Copy the overflow cells from pRoot to pChild */ - memcpy(pChild->aOvfl, pRoot->aOvfl, pRoot->nOverflow*sizeof(pRoot->aOvfl[0])); + memcpy(pChild->aiOvfl, pRoot->aiOvfl, + pRoot->nOverflow*sizeof(pRoot->aiOvfl[0])); + memcpy(pChild->apOvfl, pRoot->apOvfl, + pRoot->nOverflow*sizeof(pRoot->apOvfl[0])); pChild->nOverflow = pRoot->nOverflow; /* Zero the contents of pRoot. Then install pChild as the right-child. */ zeroPage(pRoot, pChild->aData[0] & ~PTF_LEAF); put4byte(&pRoot->aData[pRoot->hdrOffset+8], pgnoChild); @@ -45242,12 +63372,12 @@ int rc = SQLITE_OK; const int nMin = pCur->pBt->usableSize * 2 / 3; u8 aBalanceQuickSpace[13]; u8 *pFree = 0; - TESTONLY( int balance_quick_called = 0 ); - TESTONLY( int balance_deeper_called = 0 ); + VVA_ONLY( int balance_quick_called = 0 ); + VVA_ONLY( int balance_deeper_called = 0 ); do { int iPage = pCur->iPage; MemPage *pPage = pCur->apPage[iPage]; @@ -45256,11 +63386,12 @@ /* The root page of the b-tree is overfull. In this case call the ** balance_deeper() function to create a new child for the root-page ** and copy the current contents of the root-page to it. The ** next iteration of the do-loop will balance the child page. */ - assert( (balance_deeper_called++)==0 ); + assert( balance_deeper_called==0 ); + VVA_ONLY( balance_deeper_called++ ); rc = balance_deeper(pPage, &pCur->apPage[1]); if( rc==SQLITE_OK ){ pCur->iPage = 1; pCur->aiIdx[0] = 0; pCur->aiIdx[1] = 0; @@ -45276,30 +63407,31 @@ int const iIdx = pCur->aiIdx[iPage-1]; rc = sqlite3PagerWrite(pParent->pDbPage); if( rc==SQLITE_OK ){ #ifndef SQLITE_OMIT_QUICKBALANCE - if( pPage->hasData + if( pPage->intKeyLeaf && pPage->nOverflow==1 - && pPage->aOvfl[0].idx==pPage->nCell + && pPage->aiOvfl[0]==pPage->nCell && pParent->pgno!=1 && pParent->nCell==iIdx ){ /* Call balance_quick() to create a new sibling of pPage on which ** to store the overflow cell. balance_quick() inserts a new cell ** into pParent, which may cause pParent overflow. If this - ** happens, the next interation of the do-loop will balance pParent + ** happens, the next iteration of the do-loop will balance pParent ** use either balance_nonroot() or balance_deeper(). Until this ** happens, the overflow cell is stored in the aBalanceQuickSpace[] ** buffer. ** ** The purpose of the following assert() is to check that only a ** single call to balance_quick() is made for each call to this ** function. If this were not verified, a subtle bug involving reuse ** of the aBalanceQuickSpace[] might sneak in. */ - assert( (balance_quick_called++)==0 ); + assert( balance_quick_called==0 ); + VVA_ONLY( balance_quick_called++ ); rc = balance_quick(pParent, pPage, aBalanceQuickSpace); }else #endif { /* In this case, call balance_nonroot() to redistribute cells @@ -45318,11 +63450,12 @@ ** the previous call, as the overflow cell data will have been ** copied either into the body of a database page or into the new ** pSpace buffer passed to the latter call to balance_nonroot(). 
*/ u8 *pSpace = sqlite3PageMalloc(pCur->pBt->pageSize); - rc = balance_nonroot(pParent, iIdx, pSpace, iPage==1); + rc = balance_nonroot(pParent, iIdx, pSpace, iPage==1, + pCur->hints&BTREE_BULKLOAD); if( pFree ){ /* If pFree is not NULL, it points to the pSpace buffer used ** by a previous call to balance_nonroot(). Its contents are ** now stored either on real database pages or within the ** new pSpace buffer, so it may be safely freed here. */ @@ -45339,10 +63472,11 @@ pPage->nOverflow = 0; /* The next iteration of the do-loop balances the parent page. */ releasePage(pPage); pCur->iPage--; + assert( pCur->iPage>=0 ); } }while( rc==SQLITE_OK ); if( pFree ){ sqlite3PageFree(pFree); @@ -45362,11 +63496,11 @@ ** ** If the seekResult parameter is non-zero, then a successful call to ** MovetoUnpacked() to seek cursor pCur to (pKey, nKey) has already ** been performed. seekResult is the search result returned (a negative ** number if pCur points at an entry that is smaller than (pKey, nKey), or -** a positive value if pCur points at an etry that is larger than +** a positive value if pCur points at an entry that is larger than ** (pKey, nKey)). ** ** If the seekResult parameter is non-zero, then the caller guarantees that ** cursor pCur is pointing at the existing copy of a row that is to be ** overwritten. If the seekResult parameter is 0, then cursor pCur may @@ -45394,28 +63528,23 @@ if( pCur->eState==CURSOR_FAULT ){ assert( pCur->skipNext!=SQLITE_OK ); return pCur->skipNext; } - assert( cursorHoldsMutex(pCur) ); - assert( pCur->wrFlag && pBt->inTransaction==TRANS_WRITE && !pBt->readOnly ); + assert( cursorOwnsBtShared(pCur) ); + assert( (pCur->curFlags & BTCF_WriteFlag)!=0 + && pBt->inTransaction==TRANS_WRITE + && (pBt->btsFlags & BTS_READ_ONLY)==0 ); assert( hasSharedCacheTableLock(p, pCur->pgnoRoot, pCur->pKeyInfo!=0, 2) ); /* Assert that the caller has been consistent. If this cursor was opened ** expecting an index b-tree, then the caller should be inserting blob ** keys with no associated data. If the cursor was opened expecting an ** intkey table, the caller should be inserting integer keys with a ** blob of associated data. */ assert( (pKey==0)==(pCur->pKeyInfo==0) ); - /* If this is an insert into a table b-tree, invalidate any incrblob - ** cursors open on the row being replaced (assuming this is a replace - ** operation - if it is not, the following is a no-op). */ - if( pCur->pKeyInfo==0 ){ - invalidateIncrblobCursors(p, nKey, 0); - } - /* Save the positions of any other cursors open on this table. ** ** In some cases, the call to btreeMoveto() below is a no-op. For ** example, when inserting data into a table with auto-generated integer ** keys, the VDBE layer invokes sqlite3BtreeLast() to figure out the @@ -45423,13 +63552,32 @@ ** data into the intkey B-Tree. In this case btreeMoveto() recognizes ** that the cursor is already where it needs to be and returns without ** doing any work. To avoid thwarting these optimizations, it is important ** not to clear the cursor here. 
*/ - rc = saveAllCursors(pBt, pCur->pgnoRoot, pCur); - if( rc ) return rc; - if( !loc ){ + if( pCur->curFlags & BTCF_Multiple ){ + rc = saveAllCursors(pBt, pCur->pgnoRoot, pCur); + if( rc ) return rc; + } + + if( pCur->pKeyInfo==0 ){ + assert( pKey==0 ); + /* If this is an insert into a table b-tree, invalidate any incrblob + ** cursors open on the row being replaced */ + invalidateIncrblobCursors(p, nKey, 0); + + /* If the cursor is currently on the last row and we are appending a + ** new row onto the end, set the "loc" to avoid an unnecessary + ** btreeMoveto() call */ + if( (pCur->curFlags&BTCF_ValidNKey)!=0 && nKey>0 + && pCur->info.nKey==nKey-1 ){ + loc = -1; + }else if( loc==0 ){ + rc = sqlite3BtreeMovetoUnpacked(pCur, 0, nKey, appendBias, &loc); + if( rc ) return rc; + } + }else if( loc==0 ){ rc = btreeMoveto(pCur, pKey, nKey, appendBias, &loc); if( rc ) return rc; } assert( pCur->eState==CURSOR_VALID || (pCur->eState==CURSOR_INVALID && loc) ); @@ -45439,17 +63587,16 @@ TRACE(("INSERT: table=%d nkey=%lld ndata=%d page=%d %s\n", pCur->pgnoRoot, nKey, nData, pPage->pgno, loc==0 ? "overwrite" : "new entry")); assert( pPage->isInit ); - allocateTempSpace(pBt); newCell = pBt->pTmpSpace; - if( newCell==0 ) return SQLITE_NOMEM; + assert( newCell!=0 ); rc = fillInCell(pPage, newCell, pKey, nKey, pData, nData, nZero, &szNew); if( rc ) goto end_insert; - assert( szNew==cellSizePtr(pPage, newCell) ); - assert( szNew<=MX_CELL_SIZE(pBt) ); + assert( szNew==pPage->xCellSize(pPage, newCell) ); + assert( szNew <= MX_CELL_SIZE(pBt) ); idx = pCur->aiIdx[pCur->iPage]; if( loc==0 ){ u16 szOld; assert( idx<pPage->nCell ); rc = sqlite3PagerWrite(pPage->pDbPage); @@ -45458,12 +63605,11 @@ } oldCell = findCell(pPage, idx); if( !pPage->leaf ){ memcpy(newCell, oldCell, 4); } - szOld = cellSizePtr(pPage, oldCell); - rc = clearCell(pPage, oldCell); + rc = clearCell(pPage, oldCell, &szOld); dropCell(pPage, idx, szOld, &rc); if( rc ) goto end_insert; }else if( loc<0 && pPage->nCell>0 ){ assert( pPage->leaf ); idx = ++pCur->aiIdx[pCur->iPage]; @@ -45471,13 +63617,13 @@ assert( pPage->leaf ); } insertCell(pPage, idx, newCell, szNew, 0, 0, &rc); assert( rc!=SQLITE_OK || pPage->nCell>0 || pPage->nOverflow>0 ); - /* If no error has occured and pPage has an overflow cell, call balance() + /* If no error has occurred and pPage has an overflow cell, call balance() ** to redistribute the cells within the tree. Since balance() may move - ** the cursor, zero the BtCursor.info.nSize and BtCursor.validNKey + ** the cursor, zero the BtCursor.info.nSize and BTCF_ValidNKey ** variables. ** ** Previous versions of SQLite called moveToRoot() to move the cursor ** back to the root page as balance() used to invalidate the contents ** of BtCursor.apPage[] and BtCursor.aiIdx[]. Instead of doing that, @@ -45492,12 +63638,12 @@ ** entry in the table, and the next row inserted has an integer key ** larger than the largest existing key, it is possible to insert the ** row without seeking the cursor. This can be a big performance boost. */ pCur->info.nSize = 0; - pCur->validNKey = 0; if( rc==SQLITE_OK && pPage->nOverflow ){ + pCur->curFlags &= ~(BTCF_ValidNKey); rc = balance(pCur); /* Must make sure nOverflow is reset to zero even if the balance() ** fails. Internal data structure corruption will result otherwise. ** Also, set the cursor state to invalid. This stops saveCursorPosition() @@ -45510,40 +63656,47 @@ end_insert: return rc; } /* -** Delete the entry that the cursor is pointing to. 
The cursor -** is left pointing at a arbitrary location. +** Delete the entry that the cursor is pointing to. +** +** If the BTREE_SAVEPOSITION bit of the flags parameter is zero, then +** the cursor is left pointing at an arbitrary location after the delete. +** But if that bit is set, then the cursor is left in a state such that +** the next call to BtreeNext() or BtreePrev() moves it to the same row +** as it would have been on if the call to BtreeDelete() had been omitted. +** +** The BTREE_AUXDELETE bit of flags indicates that is one of several deletes +** associated with a single table entry and its indexes. Only one of those +** deletes is considered the "primary" delete. The primary delete occurs +** on a cursor that is not a BTREE_FORDELETE cursor. All but one delete +** operation on non-FORDELETE cursors is tagged with the AUXDELETE flag. +** The BTREE_AUXDELETE bit is a hint that is not used by this implementation, +** but which might be used by alternative storage engines. */ -SQLITE_PRIVATE int sqlite3BtreeDelete(BtCursor *pCur){ +SQLITE_PRIVATE int sqlite3BtreeDelete(BtCursor *pCur, u8 flags){ Btree *p = pCur->pBtree; BtShared *pBt = p->pBt; int rc; /* Return code */ MemPage *pPage; /* Page to delete cell from */ unsigned char *pCell; /* Pointer to cell to delete */ int iCellIdx; /* Index of cell to delete */ int iCellDepth; /* Depth of node containing pCell */ + u16 szCell; /* Size of the cell being deleted */ + int bSkipnext = 0; /* Leaf cursor in SKIPNEXT state */ + u8 bPreserve = flags & BTREE_SAVEPOSITION; /* Keep cursor valid */ - assert( cursorHoldsMutex(pCur) ); + assert( cursorOwnsBtShared(pCur) ); assert( pBt->inTransaction==TRANS_WRITE ); - assert( !pBt->readOnly ); - assert( pCur->wrFlag ); + assert( (pBt->btsFlags & BTS_READ_ONLY)==0 ); + assert( pCur->curFlags & BTCF_WriteFlag ); assert( hasSharedCacheTableLock(p, pCur->pgnoRoot, pCur->pKeyInfo!=0, 2) ); assert( !hasReadConflicts(p, pCur->pgnoRoot) ); - - if( NEVER(pCur->aiIdx[pCur->iPage]>=pCur->apPage[pCur->iPage]->nCell) - || NEVER(pCur->eState!=CURSOR_VALID) - ){ - return SQLITE_ERROR; /* Something has gone awry. */ - } - - /* If this is a delete operation to remove a row from a table b-tree, - ** invalidate any incrblob cursors open on the row being deleted. */ - if( pCur->pKeyInfo==0 ){ - invalidateIncrblobCursors(p, pCur->info.nKey, 0); - } + assert( pCur->aiIdx[pCur->iPage]<pCur->apPage[pCur->iPage]->nCell ); + assert( pCur->eState==CURSOR_VALID ); + assert( (flags & ~(BTREE_SAVEPOSITION | BTREE_AUXDELETE))==0 ); iCellDepth = pCur->iPage; iCellIdx = pCur->aiIdx[iCellDepth]; pPage = pCur->apPage[iCellDepth]; pCell = findCell(pPage, iCellIdx); @@ -45554,26 +63707,57 @@ ** from the internal node. The 'previous' entry is used for this instead ** of the 'next' entry, as the previous entry is always a part of the ** sub-tree headed by the child page of the cell being deleted. This makes ** balancing the tree following the delete operation easier. */ if( !pPage->leaf ){ - int notUsed; + int notUsed = 0; rc = sqlite3BtreePrevious(pCur, ¬Used); if( rc ) return rc; } /* Save the positions of any other cursors open on this table before - ** making any modifications. Make the page containing the entry to be - ** deleted writable. Then free any overflow pages associated with the - ** entry and finally remove the cell itself from within the page. - */ - rc = saveAllCursors(pBt, pCur->pgnoRoot, pCur); - if( rc ) return rc; + ** making any modifications. 
*/ + if( pCur->curFlags & BTCF_Multiple ){ + rc = saveAllCursors(pBt, pCur->pgnoRoot, pCur); + if( rc ) return rc; + } + + /* If this is a delete operation to remove a row from a table b-tree, + ** invalidate any incrblob cursors open on the row being deleted. */ + if( pCur->pKeyInfo==0 ){ + invalidateIncrblobCursors(p, pCur->info.nKey, 0); + } + + /* If the bPreserve flag is set to true, then the cursor position must + ** be preserved following this delete operation. If the current delete + ** will cause a b-tree rebalance, then this is done by saving the cursor + ** key and leaving the cursor in CURSOR_REQUIRESEEK state before + ** returning. + ** + ** Or, if the current delete will not cause a rebalance, then the cursor + ** will be left in CURSOR_SKIPNEXT state pointing to the entry immediately + ** before or after the deleted entry. In this case set bSkipnext to true. */ + if( bPreserve ){ + if( !pPage->leaf + || (pPage->nFree+cellSizePtr(pPage,pCell)+2)>(int)(pBt->usableSize*2/3) + ){ + /* A b-tree rebalance will be required after deleting this entry. + ** Save the cursor key. */ + rc = saveCursorKey(pCur); + if( rc ) return rc; + }else{ + bSkipnext = 1; + } + } + + /* Make the page containing the entry to be deleted writable. Then free any + ** overflow pages associated with the entry and finally remove the cell + ** itself from within the page. */ rc = sqlite3PagerWrite(pPage->pDbPage); if( rc ) return rc; - rc = clearCell(pPage, pCell); - dropCell(pPage, iCellIdx, cellSizePtr(pPage, pCell), &rc); + rc = clearCell(pPage, pCell, &szCell); + dropCell(pPage, iCellIdx, szCell, &rc); if( rc ) return rc; /* If the cell deleted was not located on a leaf page, then the cursor ** is currently pointing to the largest entry in the sub-tree headed ** by the child-page of the cell that was just deleted from an internal @@ -45584,16 +63768,15 @@ int nCell; Pgno n = pCur->apPage[iCellDepth+1]->pgno; unsigned char *pTmp; pCell = findCell(pLeaf, pLeaf->nCell-1); - nCell = cellSizePtr(pLeaf, pCell); - assert( MX_CELL_SIZE(pBt)>=nCell ); - - allocateTempSpace(pBt); + if( pCell<&pLeaf->aData[4] ) return SQLITE_CORRUPT_BKPT; + nCell = pLeaf->xCellSize(pLeaf, pCell); + assert( MX_CELL_SIZE(pBt) >= nCell ); pTmp = pBt->pTmpSpace; - + assert( pTmp!=0 ); rc = sqlite3PagerWrite(pLeaf->pDbPage); insertCell(pPage, iCellIdx, pCell-4, nCell+4, pTmp, n, &rc); dropCell(pLeaf, pLeaf->nCell-1, nCell, &rc); if( rc ) return rc; } @@ -45620,11 +63803,27 @@ } rc = balance(pCur); } if( rc==SQLITE_OK ){ - moveToRoot(pCur); + if( bSkipnext ){ + assert( bPreserve && (pCur->iPage==iCellDepth || CORRUPT_DB) ); + assert( pPage==pCur->apPage[pCur->iPage] || CORRUPT_DB ); + assert( (pPage->nCell>0 || CORRUPT_DB) && iCellIdx<=pPage->nCell ); + pCur->eState = CURSOR_SKIPNEXT; + if( iCellIdx>=pPage->nCell ){ + pCur->skipNext = -1; + pCur->aiIdx[iCellDepth] = pPage->nCell-1; + }else{ + pCur->skipNext = 1; + } + }else{ + rc = moveToRoot(pCur); + if( bPreserve ){ + pCur->eState = CURSOR_REQUIRESEEK; + } + } } return rc; } /* @@ -45636,19 +63835,20 @@ ** flags might not work: ** ** BTREE_INTKEY|BTREE_LEAFDATA Used for SQL tables with rowid keys ** BTREE_ZERODATA Used for SQL indices */ -static int btreeCreateTable(Btree *p, int *piTable, int flags){ +static int btreeCreateTable(Btree *p, int *piTable, int createTabFlags){ BtShared *pBt = p->pBt; MemPage *pRoot; Pgno pgnoRoot; int rc; + int ptfFlags; /* Page-type flage for the root page of new table */ assert( sqlite3BtreeHoldsMutex(p) ); assert( pBt->inTransaction==TRANS_WRITE ); - 
assert( !pBt->readOnly ); + assert( (pBt->btsFlags & BTS_READ_ONLY)==0 ); #ifdef SQLITE_OMIT_AUTOVACUUM rc = allocateBtreePage(pBt, &pRoot, &pgnoRoot, 1, 0); if( rc ){ return rc; @@ -45677,17 +63877,18 @@ */ while( pgnoRoot==PTRMAP_PAGENO(pBt, pgnoRoot) || pgnoRoot==PENDING_BYTE_PAGE(pBt) ){ pgnoRoot++; } - assert( pgnoRoot>=3 ); + assert( pgnoRoot>=3 || CORRUPT_DB ); + testcase( pgnoRoot<3 ); /* Allocate a page. The page that currently resides at pgnoRoot will ** be moved to the allocated page (unless the allocated page happens ** to reside at pgnoRoot). */ - rc = allocateBtreePage(pBt, &pPageMove, &pgnoMove, pgnoRoot, 1); + rc = allocateBtreePage(pBt, &pPageMove, &pgnoMove, pgnoRoot, BTALLOC_EXACT); if( rc!=SQLITE_OK ){ return rc; } if( pgnoMove!=pgnoRoot ){ @@ -45698,11 +63899,18 @@ ** is already journaled. */ u8 eType = 0; Pgno iPtrPage = 0; + /* Save the positions of any open cursors. This is required in + ** case they are holding a reference to an xFetch reference + ** corresponding to page pgnoRoot. */ + rc = saveAllCursors(pBt, 0, 0); releasePage(pPageMove); + if( rc!=SQLITE_OK ){ + return rc; + } /* Move the page currently at pgnoRoot to pgnoMove. */ rc = btreeGetPage(pBt, pgnoRoot, &pRoot, 0); if( rc!=SQLITE_OK ){ return rc; @@ -45759,12 +63967,18 @@ rc = allocateBtreePage(pBt, &pRoot, &pgnoRoot, 1, 0); if( rc ) return rc; } #endif assert( sqlite3PagerIswriteable(pRoot->pDbPage) ); - zeroPage(pRoot, flags | PTF_LEAF); + if( createTabFlags & BTREE_INTKEY ){ + ptfFlags = PTF_INTKEY | PTF_LEAFDATA | PTF_LEAF; + }else{ + ptfFlags = PTF_ZERODATA | PTF_LEAF; + } + zeroPage(pRoot, ptfFlags); sqlite3PagerUnref(pRoot->pDbPage); + assert( (pBt->openFlags & BTREE_SINGLE)==0 || pgnoRoot==2 ); *piTable = (int)pgnoRoot; return SQLITE_OK; } SQLITE_PRIVATE int sqlite3BtreeCreateTable(Btree *p, int *piTable, int flags){ int rc; @@ -45786,41 +64000,50 @@ ){ MemPage *pPage; int rc; unsigned char *pCell; int i; + int hdr; + u16 szCell; assert( sqlite3_mutex_held(pBt->mutex) ); if( pgno>btreePagecount(pBt) ){ return SQLITE_CORRUPT_BKPT; } - - rc = getAndInitPage(pBt, pgno, &pPage); + rc = getAndInitPage(pBt, pgno, &pPage, 0, 0); if( rc ) return rc; + if( pPage->bBusy ){ + rc = SQLITE_CORRUPT_BKPT; + goto cleardatabasepage_out; + } + pPage->bBusy = 1; + hdr = pPage->hdrOffset; for(i=0; i<pPage->nCell; i++){ pCell = findCell(pPage, i); if( !pPage->leaf ){ rc = clearDatabasePage(pBt, get4byte(pCell), 1, pnChange); if( rc ) goto cleardatabasepage_out; } - rc = clearCell(pPage, pCell); + rc = clearCell(pPage, pCell, &szCell); if( rc ) goto cleardatabasepage_out; } if( !pPage->leaf ){ - rc = clearDatabasePage(pBt, get4byte(&pPage->aData[8]), 1, pnChange); + rc = clearDatabasePage(pBt, get4byte(&pPage->aData[hdr+8]), 1, pnChange); if( rc ) goto cleardatabasepage_out; }else if( pnChange ){ - assert( pPage->intKey ); + assert( pPage->intKey || CORRUPT_DB ); + testcase( !pPage->intKey ); *pnChange += pPage->nCell; } if( freePageFlag ){ freePage(pPage, &rc); }else if( (rc = sqlite3PagerWrite(pPage->pDbPage))==0 ){ - zeroPage(pPage, pPage->aData[0] | PTF_LEAF); + zeroPage(pPage, pPage->aData[hdr] | PTF_LEAF); } cleardatabasepage_out: + pPage->bBusy = 0; releasePage(pPage); return rc; } /* @@ -45840,22 +64063,31 @@ int rc; BtShared *pBt = p->pBt; sqlite3BtreeEnter(p); assert( p->inTrans==TRANS_WRITE ); - /* Invalidate all incrblob cursors open on table iTable (assuming iTable - ** is the root of a table b-tree - if it is not, the following call is - ** a no-op). 
*/ - invalidateIncrblobCursors(p, 0, 1); - rc = saveAllCursors(pBt, (Pgno)iTable, 0); + if( SQLITE_OK==rc ){ + /* Invalidate all incrblob cursors open on table iTable (assuming iTable + ** is the root of a table b-tree - if it is not, the following call is + ** a no-op). */ + invalidateIncrblobCursors(p, 0, 1); rc = clearDatabasePage(pBt, (Pgno)iTable, 0, pnChange); } sqlite3BtreeLeave(p); return rc; } + +/* +** Delete all information from the single table that pCur is open on. +** +** This routine only work for pCur on an ephemeral table. +*/ +SQLITE_PRIVATE int sqlite3BtreeClearTableOfCursor(BtCursor *pCur){ + return sqlite3BtreeClearTable(pCur->pBtree, pCur->pgnoRoot, 0); +} /* ** Erase all information in a table and add the root of the table to ** the freelist. Except, the root of the principle table (the one on ** page 1) is never added to the freelist. @@ -45893,10 +64125,18 @@ */ if( NEVER(pBt->pCursor) ){ sqlite3ConnectionBlocked(p->db, pBt->pCursor->pBtree->db); return SQLITE_LOCKED_SHAREDCACHE; } + + /* + ** It is illegal to drop the sqlite_master table on page 1. But again, + ** this error is caught long before reaching this point. + */ + if( NEVER(iTable<2) ){ + return SQLITE_CORRUPT_BKPT; + } rc = btreeGetPage(pBt, (Pgno)iTable, &pPage, 0); if( rc ) return rc; rc = sqlite3BtreeClearTable(p, iTable, 0); if( rc ){ @@ -45904,80 +64144,71 @@ return rc; } *piMoved = 0; - if( iTable>1 ){ -#ifdef SQLITE_OMIT_AUTOVACUUM - freePage(pPage, &rc); - releasePage(pPage); -#else - if( pBt->autoVacuum ){ - Pgno maxRootPgno; - sqlite3BtreeGetMeta(p, BTREE_LARGEST_ROOT_PAGE, &maxRootPgno); - - if( iTable==maxRootPgno ){ - /* If the table being dropped is the table with the largest root-page - ** number in the database, put the root page on the free list. - */ - freePage(pPage, &rc); - releasePage(pPage); - if( rc!=SQLITE_OK ){ - return rc; - } - }else{ - /* The table being dropped does not have the largest root-page - ** number in the database. So move the page that does into the - ** gap left by the deleted root-page. - */ - MemPage *pMove; - releasePage(pPage); - rc = btreeGetPage(pBt, maxRootPgno, &pMove, 0); - if( rc!=SQLITE_OK ){ - return rc; - } - rc = relocatePage(pBt, pMove, PTRMAP_ROOTPAGE, 0, iTable, 0); - releasePage(pMove); - if( rc!=SQLITE_OK ){ - return rc; - } - pMove = 0; - rc = btreeGetPage(pBt, maxRootPgno, &pMove, 0); - freePage(pMove, &rc); - releasePage(pMove); - if( rc!=SQLITE_OK ){ - return rc; - } - *piMoved = maxRootPgno; - } - - /* Set the new 'max-root-page' value in the database header. This - ** is the old value less one, less one more if that happens to - ** be a root-page number, less one again if that is the - ** PENDING_BYTE_PAGE. - */ - maxRootPgno--; - while( maxRootPgno==PENDING_BYTE_PAGE(pBt) - || PTRMAP_ISPAGE(pBt, maxRootPgno) ){ - maxRootPgno--; - } - assert( maxRootPgno!=PENDING_BYTE_PAGE(pBt) ); - - rc = sqlite3BtreeUpdateMeta(p, 4, maxRootPgno); - }else{ - freePage(pPage, &rc); - releasePage(pPage); - } -#endif - }else{ - /* If sqlite3BtreeDropTable was called on page 1. - ** This really never should happen except in a corrupt - ** database. 
- */ - zeroPage(pPage, PTF_INTKEY|PTF_LEAF ); - releasePage(pPage); - } +#ifdef SQLITE_OMIT_AUTOVACUUM + freePage(pPage, &rc); + releasePage(pPage); +#else + if( pBt->autoVacuum ){ + Pgno maxRootPgno; + sqlite3BtreeGetMeta(p, BTREE_LARGEST_ROOT_PAGE, &maxRootPgno); + + if( iTable==maxRootPgno ){ + /* If the table being dropped is the table with the largest root-page + ** number in the database, put the root page on the free list. + */ + freePage(pPage, &rc); + releasePage(pPage); + if( rc!=SQLITE_OK ){ + return rc; + } + }else{ + /* The table being dropped does not have the largest root-page + ** number in the database. So move the page that does into the + ** gap left by the deleted root-page. + */ + MemPage *pMove; + releasePage(pPage); + rc = btreeGetPage(pBt, maxRootPgno, &pMove, 0); + if( rc!=SQLITE_OK ){ + return rc; + } + rc = relocatePage(pBt, pMove, PTRMAP_ROOTPAGE, 0, iTable, 0); + releasePage(pMove); + if( rc!=SQLITE_OK ){ + return rc; + } + pMove = 0; + rc = btreeGetPage(pBt, maxRootPgno, &pMove, 0); + freePage(pMove, &rc); + releasePage(pMove); + if( rc!=SQLITE_OK ){ + return rc; + } + *piMoved = maxRootPgno; + } + + /* Set the new 'max-root-page' value in the database header. This + ** is the old value less one, less one more if that happens to + ** be a root-page number, less one again if that is the + ** PENDING_BYTE_PAGE. + */ + maxRootPgno--; + while( maxRootPgno==PENDING_BYTE_PAGE(pBt) + || PTRMAP_ISPAGE(pBt, maxRootPgno) ){ + maxRootPgno--; + } + assert( maxRootPgno!=PENDING_BYTE_PAGE(pBt) ); + + rc = sqlite3BtreeUpdateMeta(p, 4, maxRootPgno); + }else{ + freePage(pPage, &rc); + releasePage(pPage); + } +#endif return rc; } SQLITE_PRIVATE int sqlite3BtreeDropTable(Btree *p, int iTable, int *piMoved){ int rc; sqlite3BtreeEnter(p); @@ -45997,10 +64228,17 @@ ** is read-only, the others are read/write. ** ** The schema layer numbers meta values differently. At the schema ** layer (and the SetCookie and ReadCookie opcodes) the number of ** free pages is not visible. So Cookie[0] is the same as Meta[1]. +** +** This routine treats Meta[BTREE_DATA_VERSION] as a special case. Instead +** of reading the value out of the header, it instead loads the "DataVersion" +** from the pager. The BTREE_DATA_VERSION value is not actually stored in the +** database file. It is a number computed by the pager. But its access +** pattern is the same as header meta values, and so it is convenient to +** read it from this routine. */ SQLITE_PRIVATE void sqlite3BtreeGetMeta(Btree *p, int idx, u32 *pMeta){ BtShared *pBt = p->pBt; sqlite3BtreeEnter(p); @@ -46007,16 +64245,22 @@ assert( p->inTrans>TRANS_NONE ); assert( SQLITE_OK==querySharedCacheTableLock(p, MASTER_ROOT, READ_LOCK) ); assert( pBt->pPage1 ); assert( idx>=0 && idx<=15 ); - *pMeta = get4byte(&pBt->pPage1->aData[36 + idx*4]); + if( idx==BTREE_DATA_VERSION ){ + *pMeta = sqlite3PagerDataVersion(pBt->pPager) + p->iDataVersion; + }else{ + *pMeta = get4byte(&pBt->pPage1->aData[36 + idx*4]); + } /* If auto-vacuum is disabled in this build and this is an auto-vacuum ** database, mark the database as read-only. */ #ifdef SQLITE_OMIT_AUTOVACUUM - if( idx==BTREE_LARGEST_ROOT_PAGE && *pMeta>0 ) pBt->readOnly = 1; + if( idx==BTREE_LARGEST_ROOT_PAGE && *pMeta>0 ){ + pBt->btsFlags |= BTS_READ_ONLY; + } #endif sqlite3BtreeLeave(p); } @@ -46058,10 +64302,15 @@ ** corruption) an SQLite error code is returned. 
*/ SQLITE_PRIVATE int sqlite3BtreeCount(BtCursor *pCur, i64 *pnEntry){ i64 nEntry = 0; /* Value to return in *pnEntry */ int rc; /* Return code */ + + if( pCur->pgnoRoot==0 ){ + *pnEntry = 0; + return SQLITE_OK; + } rc = moveToRoot(pCur); /* Unless an error occurs, the following loop runs one iteration for each ** page in the B-Tree structure (not including overflow pages). */ @@ -46091,11 +64340,11 @@ if( pPage->leaf ){ do { if( pCur->iPage==0 ){ /* All pages of the b-tree have been visited. Return successfully. */ *pnEntry = nEntry; - return SQLITE_OK; + return moveToRoot(pCur); } moveToParent(pCur); }while ( pCur->aiIdx[pCur->iPage]>=pCur->apPage[pCur->iPage]->nCell ); pCur->aiIdx[pCur->iPage]++; @@ -46130,11 +64379,10 @@ /* ** Append a message to the error message string. */ static void checkAppendMsg( IntegrityCk *pCheck, - char *zMsg1, const char *zFormat, ... ){ va_list ap; if( !pCheck->mxErr ) return; @@ -46142,41 +64390,61 @@ pCheck->nErr++; va_start(ap, zFormat); if( pCheck->errMsg.nChar ){ sqlite3StrAccumAppend(&pCheck->errMsg, "\n", 1); } - if( zMsg1 ){ - sqlite3StrAccumAppend(&pCheck->errMsg, zMsg1, -1); + if( pCheck->zPfx ){ + sqlite3XPrintf(&pCheck->errMsg, pCheck->zPfx, pCheck->v1, pCheck->v2); } - sqlite3VXPrintf(&pCheck->errMsg, 1, zFormat, ap); + sqlite3VXPrintf(&pCheck->errMsg, zFormat, ap); va_end(ap); - if( pCheck->errMsg.mallocFailed ){ + if( pCheck->errMsg.accError==STRACCUM_NOMEM ){ pCheck->mallocFailed = 1; } } #endif /* SQLITE_OMIT_INTEGRITY_CHECK */ #ifndef SQLITE_OMIT_INTEGRITY_CHECK + +/* +** Return non-zero if the bit in the IntegrityCk.aPgRef[] array that +** corresponds to page iPg is already set. +*/ +static int getPageReferenced(IntegrityCk *pCheck, Pgno iPg){ + assert( iPg<=pCheck->nPage && sizeof(pCheck->aPgRef[0])==1 ); + return (pCheck->aPgRef[iPg/8] & (1 << (iPg & 0x07))); +} + +/* +** Set the bit in the IntegrityCk.aPgRef[] array that corresponds to page iPg. +*/ +static void setPageReferenced(IntegrityCk *pCheck, Pgno iPg){ + assert( iPg<=pCheck->nPage && sizeof(pCheck->aPgRef[0])==1 ); + pCheck->aPgRef[iPg/8] |= (1 << (iPg & 0x07)); +} + + /* ** Add 1 to the reference count for page iPage. If this is the second ** reference to the page, add an error message to pCheck->zErrMsg. -** Return 1 if there are 2 ore more references to the page and 0 if +** Return 1 if there are 2 or more references to the page and 0 if ** if this is the first reference to the page. ** ** Also check that the page number is in bounds. 
*/ -static int checkRef(IntegrityCk *pCheck, Pgno iPage, char *zContext){ +static int checkRef(IntegrityCk *pCheck, Pgno iPage){ if( iPage==0 ) return 1; if( iPage>pCheck->nPage ){ - checkAppendMsg(pCheck, zContext, "invalid page number %d", iPage); + checkAppendMsg(pCheck, "invalid page number %d", iPage); return 1; } - if( pCheck->anRef[iPage]==1 ){ - checkAppendMsg(pCheck, zContext, "2nd reference to page %d", iPage); + if( getPageReferenced(pCheck, iPage) ){ + checkAppendMsg(pCheck, "2nd reference to page %d", iPage); return 1; } - return (pCheck->anRef[iPage]++)>1; + setPageReferenced(pCheck, iPage); + return 0; } #ifndef SQLITE_OMIT_AUTOVACUUM /* ** Check that the entry in the pointer-map for page iChild maps to @@ -46185,26 +64453,25 @@ */ static void checkPtrmap( IntegrityCk *pCheck, /* Integrity check context */ Pgno iChild, /* Child page number */ u8 eType, /* Expected pointer map type */ - Pgno iParent, /* Expected pointer map parent page number */ - char *zContext /* Context description (used for error msg) */ + Pgno iParent /* Expected pointer map parent page number */ ){ int rc; u8 ePtrmapType; Pgno iPtrmapParent; rc = ptrmapGet(pCheck->pBt, iChild, &ePtrmapType, &iPtrmapParent); if( rc!=SQLITE_OK ){ if( rc==SQLITE_NOMEM || rc==SQLITE_IOERR_NOMEM ) pCheck->mallocFailed = 1; - checkAppendMsg(pCheck, zContext, "Failed to read ptrmap key=%d", iChild); + checkAppendMsg(pCheck, "Failed to read ptrmap key=%d", iChild); return; } if( ePtrmapType!=eType || iPtrmapParent!=iParent ){ - checkAppendMsg(pCheck, zContext, + checkAppendMsg(pCheck, "Bad ptr map entry key=%d expected=(%d,%d) got=(%d,%d)", iChild, eType, iParent, ePtrmapType, iPtrmapParent); } } #endif @@ -46215,51 +64482,50 @@ */ static void checkList( IntegrityCk *pCheck, /* Integrity checking context */ int isFreeList, /* True for a freelist. 
False for overflow page list */ int iPage, /* Page number for first page in the list */ - int N, /* Expected number of pages in the list */ - char *zContext /* Context for error messages */ + int N /* Expected number of pages in the list */ ){ int i; int expected = N; int iFirst = iPage; while( N-- > 0 && pCheck->mxErr ){ DbPage *pOvflPage; unsigned char *pOvflData; if( iPage<1 ){ - checkAppendMsg(pCheck, zContext, + checkAppendMsg(pCheck, "%d of %d pages missing from overflow list starting at %d", N+1, expected, iFirst); break; } - if( checkRef(pCheck, iPage, zContext) ) break; - if( sqlite3PagerGet(pCheck->pPager, (Pgno)iPage, &pOvflPage) ){ - checkAppendMsg(pCheck, zContext, "failed to get page %d", iPage); + if( checkRef(pCheck, iPage) ) break; + if( sqlite3PagerGet(pCheck->pPager, (Pgno)iPage, &pOvflPage, 0) ){ + checkAppendMsg(pCheck, "failed to get page %d", iPage); break; } pOvflData = (unsigned char *)sqlite3PagerGetData(pOvflPage); if( isFreeList ){ int n = get4byte(&pOvflData[4]); #ifndef SQLITE_OMIT_AUTOVACUUM if( pCheck->pBt->autoVacuum ){ - checkPtrmap(pCheck, iPage, PTRMAP_FREEPAGE, 0, zContext); + checkPtrmap(pCheck, iPage, PTRMAP_FREEPAGE, 0); } #endif - if( n>pCheck->pBt->usableSize/4-2 ){ - checkAppendMsg(pCheck, zContext, + if( n>(int)pCheck->pBt->usableSize/4-2 ){ + checkAppendMsg(pCheck, "freelist leaf count too big on page %d", iPage); N--; }else{ for(i=0; i<n; i++){ Pgno iFreePage = get4byte(&pOvflData[8+i*4]); #ifndef SQLITE_OMIT_AUTOVACUUM if( pCheck->pBt->autoVacuum ){ - checkPtrmap(pCheck, iFreePage, PTRMAP_FREEPAGE, 0, zContext); + checkPtrmap(pCheck, iFreePage, PTRMAP_FREEPAGE, 0); } #endif - checkRef(pCheck, iFreePage, zContext); + checkRef(pCheck, iFreePage); } N -= n; } } #ifndef SQLITE_OMIT_AUTOVACUUM @@ -46268,19 +64534,74 @@ ** page in this overflow list, check that the pointer-map entry for ** the following page matches iPage. */ if( pCheck->pBt->autoVacuum && N>0 ){ i = get4byte(pOvflData); - checkPtrmap(pCheck, i, PTRMAP_OVERFLOW2, iPage, zContext); + checkPtrmap(pCheck, i, PTRMAP_OVERFLOW2, iPage); } } #endif iPage = get4byte(pOvflData); sqlite3PagerUnref(pOvflPage); + + if( isFreeList && N<(iPage!=0) ){ + checkAppendMsg(pCheck, "free-page count in header is too small"); + } } } #endif /* SQLITE_OMIT_INTEGRITY_CHECK */ + +/* +** An implementation of a min-heap. +** +** aHeap[0] is the number of elements on the heap. aHeap[1] is the +** root element. The daughter nodes of aHeap[N] are aHeap[N*2] +** and aHeap[N*2+1]. +** +** The heap property is this: Every node is less than or equal to both +** of its daughter nodes. A consequence of the heap property is that the +** root node aHeap[1] is always the minimum value currently in the heap. +** +** The btreeHeapInsert() routine inserts an unsigned 32-bit number onto +** the heap, preserving the heap property. The btreeHeapPull() routine +** removes the root element from the heap (the minimum value in the heap) +** and then moves other nodes around as necessary to preserve the heap +** property. +** +** This heap is used for cell overlap and coverage testing. Each u32 +** entry represents the span of a cell or freeblock on a btree page. +** The upper 16 bits are the index of the first byte of a range and the +** lower 16 bits are the index of the last byte of that range. 
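+** For example, a cell whose content occupies bytes 100 through 131 of the
+** page would be encoded as (100<<16)|131, i.e. 0x00640083.  Because entries
+** are pulled from the heap in increasing order, an overlap is detected by
+** comparing each entry's start address against the previous entry's end
+** address (see checkTreePage() below).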
+*/ +static void btreeHeapInsert(u32 *aHeap, u32 x){ + u32 j, i = ++aHeap[0]; + aHeap[i] = x; + while( (j = i/2)>0 && aHeap[j]>aHeap[i] ){ + x = aHeap[j]; + aHeap[j] = aHeap[i]; + aHeap[i] = x; + i = j; + } +} +static int btreeHeapPull(u32 *aHeap, u32 *pOut){ + u32 j, i, x; + if( (x = aHeap[0])==0 ) return 0; + *pOut = aHeap[1]; + aHeap[1] = aHeap[x]; + aHeap[x] = 0xffffffff; + aHeap[0]--; + i = 1; + while( (j = i*2)<=aHeap[0] ){ + if( aHeap[j]>aHeap[j+1] ) j++; + if( aHeap[i]<aHeap[j] ) break; + x = aHeap[i]; + aHeap[i] = aHeap[j]; + aHeap[j] = x; + i = j; + } + return 1; +} #ifndef SQLITE_OMIT_INTEGRITY_CHECK /* ** Do various sanity checks on a single page of a tree. Return ** the tree depth. Root pages return 0. Parents of root pages @@ -46288,225 +64609,261 @@ ** ** These checks are done: ** ** 1. Make sure that cells and freeblocks do not overlap ** but combine to completely cover the page. -** NO 2. Make sure cell keys are in order. -** NO 3. Make sure no key is less than or equal to zLowerBound. -** NO 4. Make sure no key is greater than or equal to zUpperBound. -** 5. Check the integrity of overflow pages. -** 6. Recursively call checkTreePage on all children. -** 7. Verify that the depth of all children is the same. -** 8. Make sure this page is at least 33% full or else it is -** the root of the tree. +** 2. Make sure integer cell keys are in order. +** 3. Check the integrity of overflow pages. +** 4. Recursively call checkTreePage on all children. +** 5. Verify that the depth of all children is the same. */ static int checkTreePage( IntegrityCk *pCheck, /* Context for the sanity check */ int iPage, /* Page number of the page to check */ - char *zParentContext, /* Parent context */ - i64 *pnParentMinKey, - i64 *pnParentMaxKey + i64 *piMinKey, /* Write minimum integer primary key here */ + i64 maxKey /* Error if integer primary key greater than this */ ){ - MemPage *pPage; - int i, rc, depth, d2, pgno, cnt; - int hdr, cellStart; - int nCell; - u8 *data; - BtShared *pBt; - int usableSize; - char zContext[100]; - char *hit = 0; - i64 nMinKey = 0; - i64 nMaxKey = 0; - - sqlite3_snprintf(sizeof(zContext), zContext, "Page %d: ", iPage); + MemPage *pPage = 0; /* The page being analyzed */ + int i; /* Loop counter */ + int rc; /* Result code from subroutine call */ + int depth = -1, d2; /* Depth of a subtree */ + int pgno; /* Page number */ + int nFrag; /* Number of fragmented bytes on the page */ + int hdr; /* Offset to the page header */ + int cellStart; /* Offset to the start of the cell pointer array */ + int nCell; /* Number of cells */ + int doCoverageCheck = 1; /* True if cell coverage checking should be done */ + int keyCanBeEqual = 1; /* True if IPK can be equal to maxKey + ** False if IPK must be strictly less than maxKey */ + u8 *data; /* Page content */ + u8 *pCell; /* Cell content */ + u8 *pCellIdx; /* Next element of the cell pointer array */ + BtShared *pBt; /* The BtShared object that owns pPage */ + u32 pc; /* Address of a cell */ + u32 usableSize; /* Usable size of the page */ + u32 contentOffset; /* Offset to the start of the cell content area */ + u32 *heap = 0; /* Min-heap used for checking cell coverage */ + u32 x, prev = 0; /* Next and previous entry on the min-heap */ + const char *saved_zPfx = pCheck->zPfx; + int saved_v1 = pCheck->v1; + int saved_v2 = pCheck->v2; + u8 savedIsInit = 0; /* Check that the page exists */ pBt = pCheck->pBt; usableSize = pBt->usableSize; if( iPage==0 ) return 0; - if( checkRef(pCheck, iPage, zParentContext) ) return 0; + if( 
checkRef(pCheck, iPage) ) return 0; + pCheck->zPfx = "Page %d: "; + pCheck->v1 = iPage; if( (rc = btreeGetPage(pBt, (Pgno)iPage, &pPage, 0))!=0 ){ - checkAppendMsg(pCheck, zContext, + checkAppendMsg(pCheck, "unable to get the page. error code=%d", rc); - return 0; + goto end_of_check; } /* Clear MemPage.isInit to make sure the corruption detection code in ** btreeInitPage() is executed. */ + savedIsInit = pPage->isInit; pPage->isInit = 0; if( (rc = btreeInitPage(pPage))!=0 ){ assert( rc==SQLITE_CORRUPT ); /* The only possible error from InitPage */ - checkAppendMsg(pCheck, zContext, + checkAppendMsg(pCheck, "btreeInitPage() returns error code %d", rc); - releasePage(pPage); - return 0; + goto end_of_check; + } + data = pPage->aData; + hdr = pPage->hdrOffset; + + /* Set up for cell analysis */ + pCheck->zPfx = "On tree page %d cell %d: "; + contentOffset = get2byteNotZero(&data[hdr+5]); + assert( contentOffset<=usableSize ); /* Enforced by btreeInitPage() */ + + /* EVIDENCE-OF: R-37002-32774 The two-byte integer at offset 3 gives the + ** number of cells on the page. */ + nCell = get2byte(&data[hdr+3]); + assert( pPage->nCell==nCell ); + + /* EVIDENCE-OF: R-23882-45353 The cell pointer array of a b-tree page + ** immediately follows the b-tree page header. */ + cellStart = hdr + 12 - 4*pPage->leaf; + assert( pPage->aCellIdx==&data[cellStart] ); + pCellIdx = &data[cellStart + 2*(nCell-1)]; + + if( !pPage->leaf ){ + /* Analyze the right-child page of internal pages */ + pgno = get4byte(&data[hdr+8]); +#ifndef SQLITE_OMIT_AUTOVACUUM + if( pBt->autoVacuum ){ + pCheck->zPfx = "On page %d at right child: "; + checkPtrmap(pCheck, pgno, PTRMAP_BTREE, iPage); + } +#endif + depth = checkTreePage(pCheck, pgno, &maxKey, maxKey); + keyCanBeEqual = 0; + }else{ + /* For leaf pages, the coverage check will occur in the same loop + ** as the other cell checks, so initialize the heap. */ + heap = pCheck->heap; + heap[0] = 0; } - /* Check out all the cells. - */ - depth = 0; - for(i=0; i<pPage->nCell && pCheck->mxErr; i++){ - u8 *pCell; - u32 sz; + /* EVIDENCE-OF: R-02776-14802 The cell pointer array consists of K 2-byte + ** integer offsets to the cell contents. */ + for(i=nCell-1; i>=0 && pCheck->mxErr; i--){ CellInfo info; - /* Check payload overflow pages - */ - sqlite3_snprintf(sizeof(zContext), zContext, - "On tree page %d cell %d: ", iPage, i); - pCell = findCell(pPage,i); - btreeParseCellPtr(pPage, pCell, &info); - sz = info.nData; - if( !pPage->intKey ) sz += (int)info.nKey; - /* For intKey pages, check that the keys are in order. 
- */ - else if( i==0 ) nMinKey = nMaxKey = info.nKey; - else{ - if( info.nKey <= nMaxKey ){ - checkAppendMsg(pCheck, zContext, - "Rowid %lld out of order (previous was %lld)", info.nKey, nMaxKey); - } - nMaxKey = info.nKey; - } - assert( sz==info.nPayload ); - if( (sz>info.nLocal) - && (&pCell[info.iOverflow]<=&pPage->aData[pBt->usableSize]) - ){ - int nPage = (sz - info.nLocal + usableSize - 5)/(usableSize - 4); - Pgno pgnoOvfl = get4byte(&pCell[info.iOverflow]); + /* Check cell size */ + pCheck->v2 = i; + assert( pCellIdx==&data[cellStart + i*2] ); + pc = get2byteAligned(pCellIdx); + pCellIdx -= 2; + if( pc<contentOffset || pc>usableSize-4 ){ + checkAppendMsg(pCheck, "Offset %d out of range %d..%d", + pc, contentOffset, usableSize-4); + doCoverageCheck = 0; + continue; + } + pCell = &data[pc]; + pPage->xParseCell(pPage, pCell, &info); + if( pc+info.nSize>usableSize ){ + checkAppendMsg(pCheck, "Extends off end of page"); + doCoverageCheck = 0; + continue; + } + + /* Check for integer primary key out of range */ + if( pPage->intKey ){ + if( keyCanBeEqual ? (info.nKey > maxKey) : (info.nKey >= maxKey) ){ + checkAppendMsg(pCheck, "Rowid %lld out of order", info.nKey); + } + maxKey = info.nKey; + } + + /* Check the content overflow list */ + if( info.nPayload>info.nLocal ){ + int nPage; /* Number of pages on the overflow chain */ + Pgno pgnoOvfl; /* First page of the overflow chain */ + assert( pc + info.nSize - 4 <= usableSize ); + nPage = (info.nPayload - info.nLocal + usableSize - 5)/(usableSize - 4); + pgnoOvfl = get4byte(&pCell[info.nSize - 4]); #ifndef SQLITE_OMIT_AUTOVACUUM if( pBt->autoVacuum ){ - checkPtrmap(pCheck, pgnoOvfl, PTRMAP_OVERFLOW1, iPage, zContext); + checkPtrmap(pCheck, pgnoOvfl, PTRMAP_OVERFLOW1, iPage); } #endif - checkList(pCheck, 0, pgnoOvfl, nPage, zContext); + checkList(pCheck, 0, pgnoOvfl, nPage); } - /* Check sanity of left child page. - */ if( !pPage->leaf ){ + /* Check sanity of left child page for internal pages */ pgno = get4byte(pCell); #ifndef SQLITE_OMIT_AUTOVACUUM if( pBt->autoVacuum ){ - checkPtrmap(pCheck, pgno, PTRMAP_BTREE, iPage, zContext); - } -#endif - d2 = checkTreePage(pCheck, pgno, zContext, &nMinKey, i==0 ? NULL : &nMaxKey); - if( i>0 && d2!=depth ){ - checkAppendMsg(pCheck, zContext, "Child page depth differs"); - } - depth = d2; - } - } - - if( !pPage->leaf ){ - pgno = get4byte(&pPage->aData[pPage->hdrOffset+8]); - sqlite3_snprintf(sizeof(zContext), zContext, - "On page %d at right child: ", iPage); -#ifndef SQLITE_OMIT_AUTOVACUUM - if( pBt->autoVacuum ){ - checkPtrmap(pCheck, pgno, PTRMAP_BTREE, iPage, zContext); - } -#endif - checkTreePage(pCheck, pgno, zContext, NULL, !pPage->nCell ? NULL : &nMaxKey); - } - - /* For intKey leaf pages, check that the min/max keys are in order - ** with any left/parent/right pages. 
- */ - if( pPage->leaf && pPage->intKey ){ - /* if we are a left child page */ - if( pnParentMinKey ){ - /* if we are the left most child page */ - if( !pnParentMaxKey ){ - if( nMaxKey > *pnParentMinKey ){ - checkAppendMsg(pCheck, zContext, - "Rowid %lld out of order (max larger than parent min of %lld)", - nMaxKey, *pnParentMinKey); - } - }else{ - if( nMinKey <= *pnParentMinKey ){ - checkAppendMsg(pCheck, zContext, - "Rowid %lld out of order (min less than parent min of %lld)", - nMinKey, *pnParentMinKey); - } - if( nMaxKey > *pnParentMaxKey ){ - checkAppendMsg(pCheck, zContext, - "Rowid %lld out of order (max larger than parent max of %lld)", - nMaxKey, *pnParentMaxKey); - } - *pnParentMinKey = nMaxKey; - } - /* else if we're a right child page */ - } else if( pnParentMaxKey ){ - if( nMinKey <= *pnParentMaxKey ){ - checkAppendMsg(pCheck, zContext, - "Rowid %lld out of order (min less than parent max of %lld)", - nMinKey, *pnParentMaxKey); - } - } - } + checkPtrmap(pCheck, pgno, PTRMAP_BTREE, iPage); + } +#endif + d2 = checkTreePage(pCheck, pgno, &maxKey, maxKey); + keyCanBeEqual = 0; + if( d2!=depth ){ + checkAppendMsg(pCheck, "Child page depth differs"); + depth = d2; + } + }else{ + /* Populate the coverage-checking heap for leaf pages */ + btreeHeapInsert(heap, (pc<<16)|(pc+info.nSize-1)); + } + } + *piMinKey = maxKey; /* Check for complete coverage of the page */ - data = pPage->aData; - hdr = pPage->hdrOffset; - hit = sqlite3PageMalloc( pBt->pageSize ); - if( hit==0 ){ - pCheck->mallocFailed = 1; - }else{ - u16 contentOffset = get2byte(&data[hdr+5]); - assert( contentOffset<=usableSize ); /* Enforced by btreeInitPage() */ - memset(hit+contentOffset, 0, usableSize-contentOffset); - memset(hit, 1, contentOffset); - nCell = get2byte(&data[hdr+3]); - cellStart = hdr + 12 - 4*pPage->leaf; - for(i=0; i<nCell; i++){ - int pc = get2byte(&data[cellStart+i*2]); - u16 size = 1024; - int j; - if( pc<=usableSize-4 ){ - size = cellSizePtr(pPage, &data[pc]); - } - if( (pc+size-1)>=usableSize ){ - checkAppendMsg(pCheck, 0, - "Corruption detected in cell %d on page %d",i,iPage); - }else{ - for(j=pc+size-1; j>=pc; j--) hit[j]++; - } - } + pCheck->zPfx = 0; + if( doCoverageCheck && pCheck->mxErr>0 ){ + /* For leaf pages, the min-heap has already been initialized and the + ** cells have already been inserted. But for internal pages, that has + ** not yet been done, so do it now */ + if( !pPage->leaf ){ + heap = pCheck->heap; + heap[0] = 0; + for(i=nCell-1; i>=0; i--){ + u32 size; + pc = get2byteAligned(&data[cellStart+i*2]); + size = pPage->xCellSize(pPage, &data[pc]); + btreeHeapInsert(heap, (pc<<16)|(pc+size-1)); + } + } + /* Add the freeblocks to the min-heap + ** + ** EVIDENCE-OF: R-20690-50594 The second field of the b-tree page header + ** is the offset of the first freeblock, or zero if there are no + ** freeblocks on the page. + */ i = get2byte(&data[hdr+1]); while( i>0 ){ int size, j; - assert( i<=usableSize-4 ); /* Enforced by btreeInitPage() */ + assert( (u32)i<=usableSize-4 ); /* Enforced by btreeInitPage() */ size = get2byte(&data[i+2]); - assert( i+size<=usableSize ); /* Enforced by btreeInitPage() */ - for(j=i+size-1; j>=i; j--) hit[j]++; + assert( (u32)(i+size)<=usableSize ); /* Enforced by btreeInitPage() */ + btreeHeapInsert(heap, (((u32)i)<<16)|(i+size-1)); + /* EVIDENCE-OF: R-58208-19414 The first 2 bytes of a freeblock are a + ** big-endian integer which is the offset in the b-tree page of the next + ** freeblock in the chain, or zero if the freeblock is the last on the + ** chain. 
*/ j = get2byte(&data[i]); + /* EVIDENCE-OF: R-06866-39125 Freeblocks are always connected in order of + ** increasing offset. */ assert( j==0 || j>i+size ); /* Enforced by btreeInitPage() */ - assert( j<=usableSize-4 ); /* Enforced by btreeInitPage() */ + assert( (u32)j<=usableSize-4 ); /* Enforced by btreeInitPage() */ i = j; } - for(i=cnt=0; i<usableSize; i++){ - if( hit[i]==0 ){ - cnt++; - }else if( hit[i]>1 ){ - checkAppendMsg(pCheck, 0, - "Multiple uses for byte %d of page %d", i, iPage); + /* Analyze the min-heap looking for overlap between cells and/or + ** freeblocks, and counting the number of untracked bytes in nFrag. + ** + ** Each min-heap entry is of the form: (start_address<<16)|end_address. + ** There is an implied first entry the covers the page header, the cell + ** pointer index, and the gap between the cell pointer index and the start + ** of cell content. + ** + ** The loop below pulls entries from the min-heap in order and compares + ** the start_address against the previous end_address. If there is an + ** overlap, that means bytes are used multiple times. If there is a gap, + ** that gap is added to the fragmentation count. + */ + nFrag = 0; + prev = contentOffset - 1; /* Implied first min-heap entry */ + while( btreeHeapPull(heap,&x) ){ + if( (prev&0xffff)>=(x>>16) ){ + checkAppendMsg(pCheck, + "Multiple uses for byte %u of page %d", x>>16, iPage); break; + }else{ + nFrag += (x>>16) - (prev&0xffff) - 1; + prev = x; } } - if( cnt!=data[hdr+7] ){ - checkAppendMsg(pCheck, 0, + nFrag += usableSize - (prev&0xffff) - 1; + /* EVIDENCE-OF: R-43263-13491 The total number of bytes in all fragments + ** is stored in the fifth field of the b-tree page header. + ** EVIDENCE-OF: R-07161-27322 The one-byte integer at offset 7 gives the + ** number of fragmented free bytes within the cell content area. 
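+    ** For example, a single unused byte lying between the end of one cell
+    ** and the start of the next contributes exactly 1 to nFrag, so a page
+    ** with three such one-byte gaps should report 3 at header offset 7.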
+ */ + if( heap[0]==0 && nFrag!=data[hdr+7] ){ + checkAppendMsg(pCheck, "Fragmentation of %d bytes reported as %d on page %d", - cnt, data[hdr+7], iPage); + nFrag, data[hdr+7], iPage); } } - sqlite3PageFree(hit); + +end_of_check: + if( !doCoverageCheck ) pPage->isInit = savedIsInit; releasePage(pPage); + pCheck->zPfx = saved_zPfx; + pCheck->v1 = saved_v1; + pCheck->v2 = saved_v2; return depth+1; } #endif /* SQLITE_OMIT_INTEGRITY_CHECK */ #ifndef SQLITE_OMIT_INTEGRITY_CHECK @@ -46529,116 +64886,124 @@ int nRoot, /* Number of entries in aRoot[] */ int mxErr, /* Stop reporting errors after this many */ int *pnErr /* Write number of errors seen to this variable */ ){ Pgno i; - int nRef; IntegrityCk sCheck; BtShared *pBt = p->pBt; + int savedDbFlags = pBt->db->flags; char zErr[100]; + VVA_ONLY( int nRef ); sqlite3BtreeEnter(p); assert( p->inTrans>TRANS_NONE && pBt->inTransaction>TRANS_NONE ); - nRef = sqlite3PagerRefcount(pBt->pPager); + VVA_ONLY( nRef = sqlite3PagerRefcount(pBt->pPager) ); + assert( nRef>=0 ); sCheck.pBt = pBt; sCheck.pPager = pBt->pPager; sCheck.nPage = btreePagecount(sCheck.pBt); sCheck.mxErr = mxErr; sCheck.nErr = 0; sCheck.mallocFailed = 0; - *pnErr = 0; + sCheck.zPfx = 0; + sCheck.v1 = 0; + sCheck.v2 = 0; + sCheck.aPgRef = 0; + sCheck.heap = 0; + sqlite3StrAccumInit(&sCheck.errMsg, 0, zErr, sizeof(zErr), SQLITE_MAX_LENGTH); + sCheck.errMsg.printfFlags = SQLITE_PRINTF_INTERNAL; if( sCheck.nPage==0 ){ - sqlite3BtreeLeave(p); - return 0; - } - sCheck.anRef = sqlite3Malloc( (sCheck.nPage+1)*sizeof(sCheck.anRef[0]) ); - if( !sCheck.anRef ){ - *pnErr = 1; - sqlite3BtreeLeave(p); - return 0; - } - for(i=0; i<=sCheck.nPage; i++){ sCheck.anRef[i] = 0; } + goto integrity_ck_cleanup; + } + + sCheck.aPgRef = sqlite3MallocZero((sCheck.nPage / 8)+ 1); + if( !sCheck.aPgRef ){ + sCheck.mallocFailed = 1; + goto integrity_ck_cleanup; + } + sCheck.heap = (u32*)sqlite3PageMalloc( pBt->pageSize ); + if( sCheck.heap==0 ){ + sCheck.mallocFailed = 1; + goto integrity_ck_cleanup; + } + i = PENDING_BYTE_PAGE(pBt); - if( i<=sCheck.nPage ){ - sCheck.anRef[i] = 1; - } - sqlite3StrAccumInit(&sCheck.errMsg, zErr, sizeof(zErr), 20000); + if( i<=sCheck.nPage ) setPageReferenced(&sCheck, i); /* Check the integrity of the freelist */ + sCheck.zPfx = "Main freelist: "; checkList(&sCheck, 1, get4byte(&pBt->pPage1->aData[32]), - get4byte(&pBt->pPage1->aData[36]), "Main freelist: "); + get4byte(&pBt->pPage1->aData[36])); + sCheck.zPfx = 0; /* Check all the tables. */ + testcase( pBt->db->flags & SQLITE_CellSizeCk ); + pBt->db->flags &= ~SQLITE_CellSizeCk; for(i=0; (int)i<nRoot && sCheck.mxErr; i++){ + i64 notUsed; if( aRoot[i]==0 ) continue; #ifndef SQLITE_OMIT_AUTOVACUUM if( pBt->autoVacuum && aRoot[i]>1 ){ - checkPtrmap(&sCheck, aRoot[i], PTRMAP_ROOTPAGE, 0, 0); + checkPtrmap(&sCheck, aRoot[i], PTRMAP_ROOTPAGE, 0); } #endif - checkTreePage(&sCheck, aRoot[i], "List of tree roots: ", NULL, NULL); + checkTreePage(&sCheck, aRoot[i], ¬Used, LARGEST_INT64); } + pBt->db->flags = savedDbFlags; /* Make sure every page in the file is referenced */ for(i=1; i<=sCheck.nPage && sCheck.mxErr; i++){ #ifdef SQLITE_OMIT_AUTOVACUUM - if( sCheck.anRef[i]==0 ){ - checkAppendMsg(&sCheck, 0, "Page %d is never used", i); + if( getPageReferenced(&sCheck, i)==0 ){ + checkAppendMsg(&sCheck, "Page %d is never used", i); } #else /* If the database supports auto-vacuum, make sure no tables contain ** references to pointer-map pages. 
*/ - if( sCheck.anRef[i]==0 && + if( getPageReferenced(&sCheck, i)==0 && (PTRMAP_PAGENO(pBt, i)!=i || !pBt->autoVacuum) ){ - checkAppendMsg(&sCheck, 0, "Page %d is never used", i); + checkAppendMsg(&sCheck, "Page %d is never used", i); } - if( sCheck.anRef[i]!=0 && + if( getPageReferenced(&sCheck, i)!=0 && (PTRMAP_PAGENO(pBt, i)==i && pBt->autoVacuum) ){ - checkAppendMsg(&sCheck, 0, "Pointer map page %d is referenced", i); + checkAppendMsg(&sCheck, "Pointer map page %d is referenced", i); } #endif } - /* Make sure this analysis did not leave any unref() pages. - ** This is an internal consistency check; an integrity check - ** of the integrity check. - */ - if( NEVER(nRef != sqlite3PagerRefcount(pBt->pPager)) ){ - checkAppendMsg(&sCheck, 0, - "Outstanding page count goes from %d to %d during this analysis", - nRef, sqlite3PagerRefcount(pBt->pPager) - ); - } - /* Clean up and report errors. */ - sqlite3BtreeLeave(p); - sqlite3_free(sCheck.anRef); +integrity_ck_cleanup: + sqlite3PageFree(sCheck.heap); + sqlite3_free(sCheck.aPgRef); if( sCheck.mallocFailed ){ sqlite3StrAccumReset(&sCheck.errMsg); - *pnErr = sCheck.nErr+1; - return 0; + sCheck.nErr++; } *pnErr = sCheck.nErr; if( sCheck.nErr==0 ) sqlite3StrAccumReset(&sCheck.errMsg); + /* Make sure this analysis did not leave any unref() pages. */ + assert( nRef==sqlite3PagerRefcount(pBt->pPager) ); + sqlite3BtreeLeave(p); return sqlite3StrAccumFinish(&sCheck.errMsg); } #endif /* SQLITE_OMIT_INTEGRITY_CHECK */ /* -** Return the full pathname of the underlying database file. +** Return the full pathname of the underlying database file. Return +** an empty string if the database is in-memory or a TEMP database. ** ** The pager filename is invariant as long as the pager is ** open so it is safe to access without the BtShared mutex. */ SQLITE_PRIVATE const char *sqlite3BtreeGetFilename(Btree *p){ assert( p->pBt->pPager!=0 ); - return sqlite3PagerFilename(p->pBt->pPager); + return sqlite3PagerFilename(p->pBt->pPager, 1); } /* ** Return the pathname of the journal file for this database. The return ** value of this routine is the same regardless of whether the journal file @@ -46657,10 +65022,35 @@ */ SQLITE_PRIVATE int sqlite3BtreeIsInTrans(Btree *p){ assert( p==0 || sqlite3_mutex_held(p->db->mutex) ); return (p && (p->inTrans==TRANS_WRITE)); } + +#ifndef SQLITE_OMIT_WAL +/* +** Run a checkpoint on the Btree passed as the first argument. +** +** Return SQLITE_LOCKED if this or any other connection has an open +** transaction on the shared-cache the argument Btree is connected to. +** +** Parameter eMode is one of SQLITE_CHECKPOINT_PASSIVE, FULL or RESTART. +*/ +SQLITE_PRIVATE int sqlite3BtreeCheckpoint(Btree *p, int eMode, int *pnLog, int *pnCkpt){ + int rc = SQLITE_OK; + if( p ){ + BtShared *pBt = p->pBt; + sqlite3BtreeEnter(p); + if( pBt->inTransaction!=TRANS_NONE ){ + rc = SQLITE_LOCKED; + }else{ + rc = sqlite3PagerCheckpoint(pBt->pPager, eMode, pnLog, pnCkpt); + } + sqlite3BtreeLeave(p); + } + return rc; +} +#endif /* ** Return non-zero if a read (or write) transaction is active. */ SQLITE_PRIVATE int sqlite3BtreeIsInReadTrans(Btree *p){ @@ -46690,18 +65080,18 @@ ** allocated, a null pointer is returned. If the blob has already been ** allocated, it is returned as normal. ** ** Just before the shared-btree is closed, the function passed as the ** xFree argument when the memory allocation was made is invoked on the -** blob of allocated memory. This function should not call sqlite3_free() +** blob of allocated memory. 
The xFree function should not call sqlite3_free() ** on the memory, the btree layer does that. */ SQLITE_PRIVATE void *sqlite3BtreeSchema(Btree *p, int nBytes, void(*xFree)(void *)){ BtShared *pBt = p->pBt; sqlite3BtreeEnter(p); if( !pBt->pSchema && nBytes ){ - pBt->pSchema = sqlite3MallocZero(nBytes); + pBt->pSchema = sqlite3DbMallocZero(0, nBytes); pBt->xFreeSchema = xFree; } sqlite3BtreeLeave(p); return pBt->pSchema; } @@ -46758,57 +65148,124 @@ ** parameters that attempt to write past the end of the existing data, ** no modifications are made and SQLITE_CORRUPT is returned. */ SQLITE_PRIVATE int sqlite3BtreePutData(BtCursor *pCsr, u32 offset, u32 amt, void *z){ int rc; - assert( cursorHoldsMutex(pCsr) ); + assert( cursorOwnsBtShared(pCsr) ); assert( sqlite3_mutex_held(pCsr->pBtree->db->mutex) ); - assert( pCsr->isIncrblobHandle ); + assert( pCsr->curFlags & BTCF_Incrblob ); rc = restoreCursorPosition(pCsr); if( rc!=SQLITE_OK ){ return rc; } assert( pCsr->eState!=CURSOR_REQUIRESEEK ); if( pCsr->eState!=CURSOR_VALID ){ return SQLITE_ABORT; } + + /* Save the positions of all other cursors open on this table. This is + ** required in case any of them are holding references to an xFetch + ** version of the b-tree page modified by the accessPayload call below. + ** + ** Note that pCsr must be open on a INTKEY table and saveCursorPosition() + ** and hence saveAllCursors() cannot fail on a BTREE_INTKEY table, hence + ** saveAllCursors can only return SQLITE_OK. + */ + VVA_ONLY(rc =) saveAllCursors(pCsr->pBt, pCsr->pgnoRoot, pCsr); + assert( rc==SQLITE_OK ); /* Check some assumptions: ** (a) the cursor is open for writing, ** (b) there is a read/write transaction open, ** (c) the connection holds a write-lock on the table (if required), ** (d) there are no conflicting read-locks, and ** (e) the cursor points at a valid row of an intKey table. */ - if( !pCsr->wrFlag ){ + if( (pCsr->curFlags & BTCF_WriteFlag)==0 ){ return SQLITE_READONLY; } - assert( !pCsr->pBt->readOnly && pCsr->pBt->inTransaction==TRANS_WRITE ); + assert( (pCsr->pBt->btsFlags & BTS_READ_ONLY)==0 + && pCsr->pBt->inTransaction==TRANS_WRITE ); assert( hasSharedCacheTableLock(pCsr->pBtree, pCsr->pgnoRoot, 0, 2) ); assert( !hasReadConflicts(pCsr->pBtree, pCsr->pgnoRoot) ); assert( pCsr->apPage[pCsr->iPage]->intKey ); return accessPayload(pCsr, offset, amt, (unsigned char *)z, 1); } /* -** Set a flag on this cursor to cache the locations of pages from the -** overflow list for the current row. This is used by cursors opened -** for incremental blob IO only. -** -** This function sets a flag only. The actual page location cache -** (stored in BtCursor.aOverflow[]) is allocated and used by function -** accessPayload() (the worker function for sqlite3BtreeData() and -** sqlite3BtreePutData()). +** Mark this cursor as an incremental blob cursor. */ -SQLITE_PRIVATE void sqlite3BtreeCacheOverflow(BtCursor *pCur){ - assert( cursorHoldsMutex(pCur) ); - assert( sqlite3_mutex_held(pCur->pBtree->db->mutex) ); - assert(!pCur->isIncrblobHandle); - assert(!pCur->aOverflow); - pCur->isIncrblobHandle = 1; +SQLITE_PRIVATE void sqlite3BtreeIncrblobCursor(BtCursor *pCur){ + pCur->curFlags |= BTCF_Incrblob; + pCur->pBtree->hasIncrblobCur = 1; +} +#endif + +/* +** Set both the "read version" (single byte at byte offset 18) and +** "write version" (single byte at byte offset 19) fields in the database +** header to iVersion. 
+*/ +SQLITE_PRIVATE int sqlite3BtreeSetVersion(Btree *pBtree, int iVersion){ + BtShared *pBt = pBtree->pBt; + int rc; /* Return code */ + + assert( iVersion==1 || iVersion==2 ); + + /* If setting the version fields to 1, do not automatically open the + ** WAL connection, even if the version fields are currently set to 2. + */ + pBt->btsFlags &= ~BTS_NO_WAL; + if( iVersion==1 ) pBt->btsFlags |= BTS_NO_WAL; + + rc = sqlite3BtreeBeginTrans(pBtree, 0); + if( rc==SQLITE_OK ){ + u8 *aData = pBt->pPage1->aData; + if( aData[18]!=(u8)iVersion || aData[19]!=(u8)iVersion ){ + rc = sqlite3BtreeBeginTrans(pBtree, 2); + if( rc==SQLITE_OK ){ + rc = sqlite3PagerWrite(pBt->pPage1->pDbPage); + if( rc==SQLITE_OK ){ + aData[18] = (u8)iVersion; + aData[19] = (u8)iVersion; + } + } + } + } + + pBt->btsFlags &= ~BTS_NO_WAL; + return rc; +} + +/* +** Return true if the cursor has a hint specified. This routine is +** only used from within assert() statements +*/ +SQLITE_PRIVATE int sqlite3BtreeCursorHasHint(BtCursor *pCsr, unsigned int mask){ + return (pCsr->hints & mask)!=0; +} + +/* +** Return true if the given Btree is read-only. +*/ +SQLITE_PRIVATE int sqlite3BtreeIsReadonly(Btree *p){ + return (p->pBt->btsFlags & BTS_READ_ONLY)!=0; +} + +/* +** Return the size of the header added to each page by this module. +*/ +SQLITE_PRIVATE int sqlite3HeaderSizeBtree(void){ return ROUND8(sizeof(MemPage)); } + +#if !defined(SQLITE_OMIT_SHARED_CACHE) +/* +** Return true if the Btree passed as the only argument is sharable. +*/ +SQLITE_PRIVATE int sqlite3BtreeSharable(Btree *p){ + return p->sharable; } #endif /************** End of btree.c ***********************************************/ /************** Begin file backup.c ******************************************/ @@ -46824,16 +65281,12 @@ ** ************************************************************************* ** This file contains the implementation of the sqlite3_backup_XXX() ** API functions and the related features. */ - -/* Macro to find the minimum of two numeric values. -*/ -#ifndef MIN -# define MIN(x,y) ((x)<(y)?(x):(y)) -#endif +/* #include "sqliteInt.h" */ +/* #include "btreeInt.h" */ /* ** Structure allocated for each backup operation. */ struct sqlite3_backup { @@ -46903,49 +65356,81 @@ if( i==1 ){ Parse *pParse; int rc = 0; pParse = sqlite3StackAllocZero(pErrorDb, sizeof(*pParse)); if( pParse==0 ){ - sqlite3Error(pErrorDb, SQLITE_NOMEM, "out of memory"); + sqlite3ErrorWithMsg(pErrorDb, SQLITE_NOMEM, "out of memory"); rc = SQLITE_NOMEM; }else{ pParse->db = pDb; if( sqlite3OpenTempDatabase(pParse) ){ - sqlite3Error(pErrorDb, pParse->rc, "%s", pParse->zErrMsg); + sqlite3ErrorWithMsg(pErrorDb, pParse->rc, "%s", pParse->zErrMsg); rc = SQLITE_ERROR; } sqlite3DbFree(pErrorDb, pParse->zErrMsg); + sqlite3ParserReset(pParse); sqlite3StackFree(pErrorDb, pParse); } if( rc ){ return 0; } } if( i<0 ){ - sqlite3Error(pErrorDb, SQLITE_ERROR, "unknown database %s", zDb); + sqlite3ErrorWithMsg(pErrorDb, SQLITE_ERROR, "unknown database %s", zDb); return 0; } return pDb->aDb[i].pBt; } + +/* +** Attempt to set the page size of the destination to match the page size +** of the source. +*/ +static int setDestPgsz(sqlite3_backup *p){ + int rc; + rc = sqlite3BtreeSetPageSize(p->pDest,sqlite3BtreeGetPageSize(p->pSrc),-1,0); + return rc; +} + +/* +** Check that there is no open read-transaction on the b-tree passed as the +** second argument. If there is not, return SQLITE_OK. 
Otherwise, if there +** is an open read-transaction, return SQLITE_ERROR and leave an error +** message in database handle db. +*/ +static int checkReadTransaction(sqlite3 *db, Btree *p){ + if( sqlite3BtreeIsInReadTrans(p) ){ + sqlite3ErrorWithMsg(db, SQLITE_ERROR, "destination database is in use"); + return SQLITE_ERROR; + } + return SQLITE_OK; +} /* ** Create an sqlite3_backup process to copy the contents of zSrcDb from ** connection handle pSrcDb to zDestDb in pDestDb. If successful, return ** a pointer to the new sqlite3_backup object. ** ** If an error occurs, NULL is returned and an error code and error message ** stored in database handle pDestDb. */ -SQLITE_API sqlite3_backup *sqlite3_backup_init( +SQLITE_API sqlite3_backup *SQLITE_STDCALL sqlite3_backup_init( sqlite3* pDestDb, /* Database to write to */ const char *zDestDb, /* Name of database within pDestDb */ sqlite3* pSrcDb, /* Database connection to read from */ const char *zSrcDb /* Name of database within pSrcDb */ ){ sqlite3_backup *p; /* Value to return */ + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(pSrcDb)||!sqlite3SafetyCheckOk(pDestDb) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif /* Lock the source database handle. The destination database ** handle is not locked in this routine, but it is locked in ** sqlite3_backup_step(). The user is required to ensure that no ** other thread accesses the destination handle for the duration @@ -46955,37 +65440,43 @@ */ sqlite3_mutex_enter(pSrcDb->mutex); sqlite3_mutex_enter(pDestDb->mutex); if( pSrcDb==pDestDb ){ - sqlite3Error( + sqlite3ErrorWithMsg( pDestDb, SQLITE_ERROR, "source and destination must be distinct" ); p = 0; }else { - /* Allocate space for a new sqlite3_backup object */ - p = (sqlite3_backup *)sqlite3_malloc(sizeof(sqlite3_backup)); + /* Allocate space for a new sqlite3_backup object... + ** EVIDENCE-OF: R-64852-21591 The sqlite3_backup object is created by a + ** call to sqlite3_backup_init() and is destroyed by a call to + ** sqlite3_backup_finish(). */ + p = (sqlite3_backup *)sqlite3MallocZero(sizeof(sqlite3_backup)); if( !p ){ - sqlite3Error(pDestDb, SQLITE_NOMEM, 0); + sqlite3Error(pDestDb, SQLITE_NOMEM); } } /* If the allocation succeeded, populate the new object. */ if( p ){ - memset(p, 0, sizeof(sqlite3_backup)); p->pSrc = findBtree(pDestDb, pSrcDb, zSrcDb); p->pDest = findBtree(pDestDb, pDestDb, zDestDb); p->pDestDb = pDestDb; p->pSrcDb = pSrcDb; p->iNext = 1; p->isAttached = 0; - if( 0==p->pSrc || 0==p->pDest ){ - /* One (or both) of the named databases did not exist. An error has - ** already been written into the pDestDb handle. All that is left - ** to do here is free the sqlite3_backup structure. - */ + if( 0==p->pSrc || 0==p->pDest + || setDestPgsz(p)==SQLITE_NOMEM + || checkReadTransaction(pDestDb, p->pDest)!=SQLITE_OK + ){ + /* One (or both) of the named databases did not exist or an OOM + ** error was hit. Or there is a transaction open on the destination + ** database. The error has already been written into the pDestDb + ** handle. All that is left to do here is free the sqlite3_backup + ** structure. */ sqlite3_free(p); p = 0; } } if( p ){ @@ -47009,41 +65500,73 @@ /* ** Parameter zSrcData points to a buffer containing the data for ** page iSrcPg from the source database. Copy this data into the ** destination database. 
*/ -static int backupOnePage(sqlite3_backup *p, Pgno iSrcPg, const u8 *zSrcData){ +static int backupOnePage( + sqlite3_backup *p, /* Backup handle */ + Pgno iSrcPg, /* Source database page to backup */ + const u8 *zSrcData, /* Source database page data */ + int bUpdate /* True for an update, false otherwise */ +){ Pager * const pDestPager = sqlite3BtreePager(p->pDest); const int nSrcPgsz = sqlite3BtreeGetPageSize(p->pSrc); int nDestPgsz = sqlite3BtreeGetPageSize(p->pDest); const int nCopy = MIN(nSrcPgsz, nDestPgsz); const i64 iEnd = (i64)iSrcPg*(i64)nSrcPgsz; - +#ifdef SQLITE_HAS_CODEC + /* Use BtreeGetReserveNoMutex() for the source b-tree, as although it is + ** guaranteed that the shared-mutex is held by this thread, handle + ** p->pSrc may not actually be the owner. */ + int nSrcReserve = sqlite3BtreeGetReserveNoMutex(p->pSrc); + int nDestReserve = sqlite3BtreeGetOptimalReserve(p->pDest); +#endif int rc = SQLITE_OK; i64 iOff; + assert( sqlite3BtreeGetReserveNoMutex(p->pSrc)>=0 ); assert( p->bDestLocked ); assert( !isFatalError(p->rc) ); assert( iSrcPg!=PENDING_BYTE_PAGE(p->pSrc->pBt) ); assert( zSrcData ); /* Catch the case where the destination is an in-memory database and the ** page sizes of the source and destination differ. */ - if( nSrcPgsz!=nDestPgsz && sqlite3PagerIsMemdb(sqlite3BtreePager(p->pDest)) ){ + if( nSrcPgsz!=nDestPgsz && sqlite3PagerIsMemdb(pDestPager) ){ + rc = SQLITE_READONLY; + } + +#ifdef SQLITE_HAS_CODEC + /* Backup is not possible if the page size of the destination is changing + ** and a codec is in use. + */ + if( nSrcPgsz!=nDestPgsz && sqlite3PagerGetCodec(pDestPager)!=0 ){ rc = SQLITE_READONLY; } + + /* Backup is not possible if the number of bytes of reserve space differ + ** between source and destination. If there is a difference, try to + ** fix the destination to agree with the source. If that is not possible, + ** then the backup cannot proceed. + */ + if( nSrcReserve!=nDestReserve ){ + u32 newPgsz = nSrcPgsz; + rc = sqlite3PagerSetPagesize(pDestPager, &newPgsz, nSrcReserve); + if( rc==SQLITE_OK && newPgsz!=nSrcPgsz ) rc = SQLITE_READONLY; + } +#endif /* This loop runs once for each destination page spanned by the source ** page. For each iteration, variable iOff is set to the byte offset ** of the destination page. */ for(iOff=iEnd-(i64)nSrcPgsz; rc==SQLITE_OK && iOff<iEnd; iOff+=nDestPgsz){ DbPage *pDestPg = 0; Pgno iDest = (Pgno)(iOff/nDestPgsz)+1; if( iDest==PENDING_BYTE_PAGE(p->pDest->pBt) ) continue; - if( SQLITE_OK==(rc = sqlite3PagerGet(pDestPager, iDest, &pDestPg)) + if( SQLITE_OK==(rc = sqlite3PagerGet(pDestPager, iDest, &pDestPg, 0)) && SQLITE_OK==(rc = sqlite3PagerWrite(pDestPg)) ){ const u8 *zIn = &zSrcData[iOff%nSrcPgsz]; u8 *zDestData = sqlite3PagerGetData(pDestPg); u8 *zOut = &zDestData[iOff%nDestPgsz]; @@ -47055,10 +65578,13 @@ ** cached parse of the page). MemPage.isInit is marked ** "MUST BE FIRST" for this purpose. */ memcpy(zOut, zIn, nCopy); ((u8 *)sqlite3PagerGetExtra(pDestPg))[0] = 0; + if( iOff==0 && bUpdate==0 ){ + sqlite3Put4byte(&zOut[28], sqlite3BtreeLastPage(p->pSrc)); + } } sqlite3PagerUnref(pDestPg); } return rc; @@ -47095,13 +65621,19 @@ } /* ** Copy nPage pages from the source b-tree to the destination. 
*/ -SQLITE_API int sqlite3_backup_step(sqlite3_backup *p, int nPage){ +SQLITE_API int SQLITE_STDCALL sqlite3_backup_step(sqlite3_backup *p, int nPage){ int rc; + int destMode; /* Destination journal mode */ + int pgszSrc = 0; /* Source page size */ + int pgszDest = 0; /* Destination page size */ +#ifdef SQLITE_ENABLE_API_ARMOR + if( p==0 ) return SQLITE_MISUSE_BKPT; +#endif sqlite3_mutex_enter(p->pSrcDb->mutex); sqlite3BtreeEnter(p->pSrc); if( p->pDestDb ){ sqlite3_mutex_enter(p->pDestDb->mutex); } @@ -47137,10 +65669,19 @@ */ if( rc==SQLITE_OK && 0==sqlite3BtreeIsInReadTrans(p->pSrc) ){ rc = sqlite3BtreeBeginTrans(p->pSrc, 0); bCloseTrans = 1; } + + /* Do not allow backup if the destination database is in WAL mode + ** and the page sizes are different between source and destination */ + pgszSrc = sqlite3BtreeGetPageSize(p->pSrc); + pgszDest = sqlite3BtreeGetPageSize(p->pDest); + destMode = sqlite3PagerGetJournalMode(sqlite3BtreePager(p->pDest)); + if( SQLITE_OK==rc && destMode==PAGER_JOURNALMODE_WAL && pgszSrc!=pgszDest ){ + rc = SQLITE_READONLY; + } /* Now that there is a read-lock on the source database, query the ** source pager for the number of pages in the database. */ nSrcPage = (int)sqlite3BtreeLastPage(p->pSrc); @@ -47147,13 +65688,13 @@ assert( nSrcPage>=0 ); for(ii=0; (nPage<0 || ii<nPage) && p->iNext<=(Pgno)nSrcPage && !rc; ii++){ const Pgno iSrcPg = p->iNext; /* Source page number */ if( iSrcPg!=PENDING_BYTE_PAGE(p->pSrc->pBt) ){ DbPage *pSrcPg; /* Source page object */ - rc = sqlite3PagerGet(pSrcPager, iSrcPg, &pSrcPg); + rc = sqlite3PagerGet(pSrcPager, iSrcPg, &pSrcPg,PAGER_GET_READONLY); if( rc==SQLITE_OK ){ - rc = backupOnePage(p, iSrcPg, sqlite3PagerGetData(pSrcPg)); + rc = backupOnePage(p, iSrcPg, sqlite3PagerGetData(pSrcPg), 0); sqlite3PagerUnref(pSrcPg); } } p->iNext++; } @@ -47170,92 +65711,133 @@ /* Update the schema version field in the destination database. This ** is to make sure that the schema-version really does change in ** the case where the source and destination databases have the ** same schema version. */ - if( rc==SQLITE_DONE - && (rc = sqlite3BtreeUpdateMeta(p->pDest,1,p->iDestSchema+1))==SQLITE_OK - ){ - const int nSrcPagesize = sqlite3BtreeGetPageSize(p->pSrc); - const int nDestPagesize = sqlite3BtreeGetPageSize(p->pDest); - int nDestTruncate; - - if( p->pDestDb ){ - sqlite3ResetInternalSchema(p->pDestDb, 0); - } - - /* Set nDestTruncate to the final number of pages in the destination - ** database. The complication here is that the destination page - ** size may be different to the source page size. - ** - ** If the source page size is smaller than the destination page size, - ** round up. In this case the call to sqlite3OsTruncate() below will - ** fix the size of the file. However it is important to call - ** sqlite3PagerTruncateImage() here so that any pages in the - ** destination file that lie beyond the nDestTruncate page mark are - ** journalled by PagerCommitPhaseOne() before they are destroyed - ** by the file truncation. 
- */ - if( nSrcPagesize<nDestPagesize ){ - int ratio = nDestPagesize/nSrcPagesize; - nDestTruncate = (nSrcPage+ratio-1)/ratio; - if( nDestTruncate==(int)PENDING_BYTE_PAGE(p->pDest->pBt) ){ - nDestTruncate--; - } - }else{ - nDestTruncate = nSrcPage * (nSrcPagesize/nDestPagesize); - } - sqlite3PagerTruncateImage(pDestPager, nDestTruncate); - - if( nSrcPagesize<nDestPagesize ){ - /* If the source page-size is smaller than the destination page-size, - ** two extra things may need to happen: - ** - ** * The destination may need to be truncated, and - ** - ** * Data stored on the pages immediately following the - ** pending-byte page in the source database may need to be - ** copied into the destination database. - */ - const i64 iSize = (i64)nSrcPagesize * (i64)nSrcPage; - sqlite3_file * const pFile = sqlite3PagerFile(pDestPager); - - assert( pFile ); - assert( (i64)nDestTruncate*(i64)nDestPagesize >= iSize || ( - nDestTruncate==(int)(PENDING_BYTE_PAGE(p->pDest->pBt)-1) - && iSize>=PENDING_BYTE && iSize<=PENDING_BYTE+nDestPagesize - )); - if( SQLITE_OK==(rc = sqlite3PagerCommitPhaseOne(pDestPager, 0, 1)) - && SQLITE_OK==(rc = backupTruncateFile(pFile, iSize)) - && SQLITE_OK==(rc = sqlite3PagerSync(pDestPager)) - ){ + if( rc==SQLITE_DONE ){ + if( nSrcPage==0 ){ + rc = sqlite3BtreeNewDb(p->pDest); + nSrcPage = 1; + } + if( rc==SQLITE_OK || rc==SQLITE_DONE ){ + rc = sqlite3BtreeUpdateMeta(p->pDest,1,p->iDestSchema+1); + } + if( rc==SQLITE_OK ){ + if( p->pDestDb ){ + sqlite3ResetAllSchemasOfConnection(p->pDestDb); + } + if( destMode==PAGER_JOURNALMODE_WAL ){ + rc = sqlite3BtreeSetVersion(p->pDest, 2); + } + } + if( rc==SQLITE_OK ){ + int nDestTruncate; + /* Set nDestTruncate to the final number of pages in the destination + ** database. The complication here is that the destination page + ** size may be different to the source page size. + ** + ** If the source page size is smaller than the destination page size, + ** round up. In this case the call to sqlite3OsTruncate() below will + ** fix the size of the file. However it is important to call + ** sqlite3PagerTruncateImage() here so that any pages in the + ** destination file that lie beyond the nDestTruncate page mark are + ** journalled by PagerCommitPhaseOne() before they are destroyed + ** by the file truncation. + */ + assert( pgszSrc==sqlite3BtreeGetPageSize(p->pSrc) ); + assert( pgszDest==sqlite3BtreeGetPageSize(p->pDest) ); + if( pgszSrc<pgszDest ){ + int ratio = pgszDest/pgszSrc; + nDestTruncate = (nSrcPage+ratio-1)/ratio; + if( nDestTruncate==(int)PENDING_BYTE_PAGE(p->pDest->pBt) ){ + nDestTruncate--; + } + }else{ + nDestTruncate = nSrcPage * (pgszSrc/pgszDest); + } + assert( nDestTruncate>0 ); + + if( pgszSrc<pgszDest ){ + /* If the source page-size is smaller than the destination page-size, + ** two extra things may need to happen: + ** + ** * The destination may need to be truncated, and + ** + ** * Data stored on the pages immediately following the + ** pending-byte page in the source database may need to be + ** copied into the destination database. 
+ */ + const i64 iSize = (i64)pgszSrc * (i64)nSrcPage; + sqlite3_file * const pFile = sqlite3PagerFile(pDestPager); + Pgno iPg; + int nDstPage; i64 iOff; - i64 iEnd = MIN(PENDING_BYTE + nDestPagesize, iSize); + i64 iEnd; + + assert( pFile ); + assert( nDestTruncate==0 + || (i64)nDestTruncate*(i64)pgszDest >= iSize || ( + nDestTruncate==(int)(PENDING_BYTE_PAGE(p->pDest->pBt)-1) + && iSize>=PENDING_BYTE && iSize<=PENDING_BYTE+pgszDest + )); + + /* This block ensures that all data required to recreate the original + ** database has been stored in the journal for pDestPager and the + ** journal synced to disk. So at this point we may safely modify + ** the database file in any way, knowing that if a power failure + ** occurs, the original database will be reconstructed from the + ** journal file. */ + sqlite3PagerPagecount(pDestPager, &nDstPage); + for(iPg=nDestTruncate; rc==SQLITE_OK && iPg<=(Pgno)nDstPage; iPg++){ + if( iPg!=PENDING_BYTE_PAGE(p->pDest->pBt) ){ + DbPage *pPg; + rc = sqlite3PagerGet(pDestPager, iPg, &pPg, 0); + if( rc==SQLITE_OK ){ + rc = sqlite3PagerWrite(pPg); + sqlite3PagerUnref(pPg); + } + } + } + if( rc==SQLITE_OK ){ + rc = sqlite3PagerCommitPhaseOne(pDestPager, 0, 1); + } + + /* Write the extra pages and truncate the database file as required */ + iEnd = MIN(PENDING_BYTE + pgszDest, iSize); for( - iOff=PENDING_BYTE+nSrcPagesize; + iOff=PENDING_BYTE+pgszSrc; rc==SQLITE_OK && iOff<iEnd; - iOff+=nSrcPagesize + iOff+=pgszSrc ){ PgHdr *pSrcPg = 0; - const Pgno iSrcPg = (Pgno)((iOff/nSrcPagesize)+1); - rc = sqlite3PagerGet(pSrcPager, iSrcPg, &pSrcPg); + const Pgno iSrcPg = (Pgno)((iOff/pgszSrc)+1); + rc = sqlite3PagerGet(pSrcPager, iSrcPg, &pSrcPg, 0); if( rc==SQLITE_OK ){ u8 *zData = sqlite3PagerGetData(pSrcPg); - rc = sqlite3OsWrite(pFile, zData, nSrcPagesize, iOff); + rc = sqlite3OsWrite(pFile, zData, pgszSrc, iOff); } sqlite3PagerUnref(pSrcPg); } - } - }else{ - rc = sqlite3PagerCommitPhaseOne(pDestPager, 0, 0); - } - - /* Finish committing the transaction to the destination database. */ - if( SQLITE_OK==rc - && SQLITE_OK==(rc = sqlite3BtreeCommitPhaseTwo(p->pDest)) - ){ - rc = SQLITE_DONE; + if( rc==SQLITE_OK ){ + rc = backupTruncateFile(pFile, iSize); + } + + /* Sync the database file to disk. */ + if( rc==SQLITE_OK ){ + rc = sqlite3PagerSync(pDestPager, 0); + } + }else{ + sqlite3PagerTruncateImage(pDestPager, nDestTruncate); + rc = sqlite3PagerCommitPhaseOne(pDestPager, 0, 0); + } + + /* Finish committing the transaction to the destination database. */ + if( SQLITE_OK==rc + && SQLITE_OK==(rc = sqlite3BtreeCommitPhaseTwo(p->pDest, 0)) + ){ + rc = SQLITE_DONE; + } } } /* If bCloseTrans is true, then this function opened a read transaction ** on the source database. Close the read transaction here. There is @@ -47263,14 +65845,17 @@ ** "committing" a read-only transaction cannot fail. */ if( bCloseTrans ){ TESTONLY( int rc2 ); TESTONLY( rc2 = ) sqlite3BtreeCommitPhaseOne(p->pSrc, 0); - TESTONLY( rc2 |= ) sqlite3BtreeCommitPhaseTwo(p->pSrc); + TESTONLY( rc2 |= ) sqlite3BtreeCommitPhaseTwo(p->pSrc, 0); assert( rc2==SQLITE_OK ); } + if( rc==SQLITE_IOERR_NOMEM ){ + rc = SQLITE_NOMEM; + } p->rc = rc; } if( p->pDestDb ){ sqlite3_mutex_leave(p->pDestDb->mutex); } @@ -47280,20 +65865,20 @@ } /* ** Release all resources associated with an sqlite3_backup* handle. 
*/ -SQLITE_API int sqlite3_backup_finish(sqlite3_backup *p){ +SQLITE_API int SQLITE_STDCALL sqlite3_backup_finish(sqlite3_backup *p){ sqlite3_backup **pp; /* Ptr to head of pagers backup list */ - sqlite3_mutex *mutex; /* Mutex to protect source database */ + sqlite3 *pSrcDb; /* Source database connection */ int rc; /* Value to return */ /* Enter the mutexes */ if( p==0 ) return SQLITE_OK; - sqlite3_mutex_enter(p->pSrcDb->mutex); + pSrcDb = p->pSrcDb; + sqlite3_mutex_enter(pSrcDb->mutex); sqlite3BtreeEnter(p->pSrc); - mutex = p->pSrcDb->mutex; if( p->pDestDb ){ sqlite3_mutex_enter(p->pDestDb->mutex); } /* Detach this backup from the source pager. */ @@ -47307,41 +65892,56 @@ } *pp = p->pNext; } /* If a transaction is still open on the Btree, roll it back. */ - sqlite3BtreeRollback(p->pDest); + sqlite3BtreeRollback(p->pDest, SQLITE_OK, 0); /* Set the error code of the destination database handle. */ rc = (p->rc==SQLITE_DONE) ? SQLITE_OK : p->rc; - sqlite3Error(p->pDestDb, rc, 0); - - /* Exit the mutexes and free the backup context structure. */ if( p->pDestDb ){ - sqlite3_mutex_leave(p->pDestDb->mutex); + sqlite3Error(p->pDestDb, rc); + + /* Exit the mutexes and free the backup context structure. */ + sqlite3LeaveMutexAndCloseZombie(p->pDestDb); } sqlite3BtreeLeave(p->pSrc); if( p->pDestDb ){ + /* EVIDENCE-OF: R-64852-21591 The sqlite3_backup object is created by a + ** call to sqlite3_backup_init() and is destroyed by a call to + ** sqlite3_backup_finish(). */ sqlite3_free(p); } - sqlite3_mutex_leave(mutex); + sqlite3LeaveMutexAndCloseZombie(pSrcDb); return rc; } /* ** Return the number of pages still to be backed up as of the most recent ** call to sqlite3_backup_step(). */ -SQLITE_API int sqlite3_backup_remaining(sqlite3_backup *p){ +SQLITE_API int SQLITE_STDCALL sqlite3_backup_remaining(sqlite3_backup *p){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( p==0 ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif return p->nRemaining; } /* ** Return the total number of pages in the source database as of the most ** recent call to sqlite3_backup_step(). */ -SQLITE_API int sqlite3_backup_pagecount(sqlite3_backup *p){ +SQLITE_API int SQLITE_STDCALL sqlite3_backup_pagecount(sqlite3_backup *p){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( p==0 ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif return p->nPagecount; } /* ** This function is called after the contents of page iPage of the @@ -47353,26 +65953,37 @@ ** ** It is assumed that the mutex associated with the BtShared object ** corresponding to the source database is held when this function is ** called. */ -SQLITE_PRIVATE void sqlite3BackupUpdate(sqlite3_backup *pBackup, Pgno iPage, const u8 *aData){ - sqlite3_backup *p; /* Iterator variable */ - for(p=pBackup; p; p=p->pNext){ +static SQLITE_NOINLINE void backupUpdate( + sqlite3_backup *p, + Pgno iPage, + const u8 *aData +){ + assert( p!=0 ); + do{ assert( sqlite3_mutex_held(p->pSrc->pBt->mutex) ); if( !isFatalError(p->rc) && iPage<p->iNext ){ /* The backup process p has already copied page iPage. But now it ** has been modified by a transaction on the source pager. Copy ** the new data into the backup. 
*/ - int rc = backupOnePage(p, iPage, aData); + int rc; + assert( p->pDestDb ); + sqlite3_mutex_enter(p->pDestDb->mutex); + rc = backupOnePage(p, iPage, aData, 1); + sqlite3_mutex_leave(p->pDestDb->mutex); assert( rc!=SQLITE_BUSY && rc!=SQLITE_LOCKED ); if( rc!=SQLITE_OK ){ p->rc = rc; } } - } + }while( (p = p->pNext)!=0 ); +} +SQLITE_PRIVATE void sqlite3BackupUpdate(sqlite3_backup *pBackup, Pgno iPage, const u8 *aData){ + if( pBackup ) backupUpdate(pBackup, iPage, aData); } /* ** Restart the backup process. This is called when the pager layer ** detects that the database has been modified by an external database @@ -47401,13 +66012,23 @@ ** goes wrong, the transaction on pTo is rolled back. If successful, the ** transaction is committed before returning. */ SQLITE_PRIVATE int sqlite3BtreeCopyFile(Btree *pTo, Btree *pFrom){ int rc; + sqlite3_file *pFd; /* File descriptor for database pTo */ sqlite3_backup b; sqlite3BtreeEnter(pTo); sqlite3BtreeEnter(pFrom); + + assert( sqlite3BtreeIsInTrans(pTo) ); + pFd = sqlite3PagerFile(sqlite3BtreePager(pTo)); + if( pFd->pMethods ){ + i64 nByte = sqlite3BtreeGetPageSize(pFrom)*(i64)sqlite3BtreeLastPage(pFrom); + rc = sqlite3OsFileControl(pFd, SQLITE_FCNTL_OVERWRITE, &nByte); + if( rc==SQLITE_NOTFOUND ) rc = SQLITE_OK; + if( rc ) goto copy_finished; + } /* Set up an sqlite3_backup object. sqlite3_backup.pDestDb must be set ** to 0. This is used by the implementations of sqlite3_backup_step() ** and sqlite3_backup_finish() to detect that they are being called ** from this function, not directly by the user. @@ -47415,10 +66036,14 @@ memset(&b, 0, sizeof(b)); b.pSrcDb = pFrom->db; b.pSrc = pFrom; b.pDest = pTo; b.iNext = 1; + +#ifdef SQLITE_HAS_CODEC + sqlite3PagerAlignReserve(sqlite3BtreePager(pTo), sqlite3BtreePager(pFrom)); +#endif /* 0x7FFFFFFF is the hard limit for the number of pages in a database ** file. By passing this as the number of pages to copy to ** sqlite3_backup_step(), we can guarantee that the copy finishes ** within a single call (unless an error occurs). The assert() statement @@ -47427,13 +66052,17 @@ */ sqlite3_backup_step(&b, 0x7FFFFFFF); assert( b.rc!=SQLITE_OK ); rc = sqlite3_backup_finish(&b); if( rc==SQLITE_OK ){ - pTo->pBt->pageSizeFixed = 0; + pTo->pBt->btsFlags &= ~BTS_PAGESIZE_FIXED; + }else{ + sqlite3PagerClearCache(sqlite3BtreePager(b.pDest)); } + assert( sqlite3BtreeIsInTrans(pTo)==0 ); +copy_finished: sqlite3BtreeLeave(pFrom); sqlite3BtreeLeave(pTo); return rc; } #endif /* SQLITE_OMIT_VACUUM */ @@ -47455,16 +66084,59 @@ ** This file contains code use to manipulate "Mem" structure. A "Mem" ** stores a single value in the VDBE. Mem is an opaque structure visible ** only within the VDBE. Interface routines refer to a Mem using the ** name sqlite_value */ +/* #include "sqliteInt.h" */ +/* #include "vdbeInt.h" */ +#ifdef SQLITE_DEBUG /* -** Call sqlite3VdbeMemExpandBlob() on the supplied value (type Mem*) -** P if required. +** Check invariants on a Mem object. +** +** This routine is intended for use inside of assert() statements, like +** this: assert( sqlite3VdbeCheckMemInvariants(pMem) ); */ -#define expandBlob(P) (((P)->flags&MEM_Zero)?sqlite3VdbeMemExpandBlob(P):0) +SQLITE_PRIVATE int sqlite3VdbeCheckMemInvariants(Mem *p){ + /* If MEM_Dyn is set then Mem.xDel!=0. + ** Mem.xDel is might not be initialized if MEM_Dyn is clear. + */ + assert( (p->flags & MEM_Dyn)==0 || p->xDel!=0 ); + + /* MEM_Dyn may only be set if Mem.szMalloc==0. 
In this way we + ** ensure that if Mem.szMalloc>0 then it is safe to do + ** Mem.z = Mem.zMalloc without having to check Mem.flags&MEM_Dyn. + ** That saves a few cycles in inner loops. */ + assert( (p->flags & MEM_Dyn)==0 || p->szMalloc==0 ); + + /* Cannot be both MEM_Int and MEM_Real at the same time */ + assert( (p->flags & (MEM_Int|MEM_Real))!=(MEM_Int|MEM_Real) ); + + /* The szMalloc field holds the correct memory allocation size */ + assert( p->szMalloc==0 + || p->szMalloc==sqlite3DbMallocSize(p->db,p->zMalloc) ); + + /* If p holds a string or blob, the Mem.z must point to exactly + ** one of the following: + ** + ** (1) Memory in Mem.zMalloc and managed by the Mem object + ** (2) Memory to be freed using Mem.xDel + ** (3) An ephemeral string or blob + ** (4) A static string or blob + */ + if( (p->flags & (MEM_Str|MEM_Blob)) && p->n>0 ){ + assert( + ((p->szMalloc>0 && p->z==p->zMalloc)? 1 : 0) + + ((p->flags&MEM_Dyn)!=0 ? 1 : 0) + + ((p->flags&MEM_Ephem)!=0 ? 1 : 0) + + ((p->flags&MEM_Static)!=0 ? 1 : 0) == 1 + ); + } + return 1; +} +#endif + /* ** If pMem is an object with a valid string representation, this routine ** ensures the internal encoding for the string representation is ** 'desiredEnc', one of SQLITE_UTF8, SQLITE_UTF16LE or SQLITE_UTF16BE. @@ -47476,11 +66148,13 @@ ** SQLITE_OK is returned if the conversion is successful (or not required). ** SQLITE_NOMEM may be returned if a malloc() fails during conversion ** between formats. */ SQLITE_PRIVATE int sqlite3VdbeChangeEncoding(Mem *pMem, int desiredEnc){ +#ifndef SQLITE_OMIT_UTF16 int rc; +#endif assert( (pMem->flags&MEM_RowSet)==0 ); assert( desiredEnc==SQLITE_UTF8 || desiredEnc==SQLITE_UTF16LE || desiredEnc==SQLITE_UTF16BE ); if( !(pMem->flags&MEM_Str) || pMem->enc==desiredEnc ){ return SQLITE_OK; @@ -47501,80 +66175,110 @@ #endif } /* ** Make sure pMem->z points to a writable allocation of at least -** n bytes. -** -** If the memory cell currently contains string or blob data -** and the third argument passed to this function is true, the -** current content of the cell is preserved. Otherwise, it may -** be discarded. -** -** This function sets the MEM_Dyn flag and clears any xDel callback. -** It also clears MEM_Ephem and MEM_Static. If the preserve flag is -** not set, Mem.n is zeroed. -*/ -SQLITE_PRIVATE int sqlite3VdbeMemGrow(Mem *pMem, int n, int preserve){ - assert( 1 >= - ((pMem->zMalloc && pMem->zMalloc==pMem->z) ? 1 : 0) + - (((pMem->flags&MEM_Dyn)&&pMem->xDel) ? 1 : 0) + - ((pMem->flags&MEM_Ephem) ? 1 : 0) + - ((pMem->flags&MEM_Static) ? 1 : 0) - ); +** min(n,32) bytes. +** +** If the bPreserve argument is true, then copy of the content of +** pMem->z into the new allocation. pMem must be either a string or +** blob if bPreserve is true. If bPreserve is false, any prior content +** in pMem->z is discarded. +*/ +SQLITE_PRIVATE SQLITE_NOINLINE int sqlite3VdbeMemGrow(Mem *pMem, int n, int bPreserve){ + assert( sqlite3VdbeCheckMemInvariants(pMem) ); assert( (pMem->flags&MEM_RowSet)==0 ); + testcase( pMem->db==0 ); - if( n<32 ) n = 32; - if( sqlite3DbMallocSize(pMem->db, pMem->zMalloc)<n ){ - if( preserve && pMem->z==pMem->zMalloc ){ + /* If the bPreserve flag is set to true, then the memory cell must already + ** contain a valid string or blob value. 
*/ + assert( bPreserve==0 || pMem->flags&(MEM_Blob|MEM_Str) ); + testcase( bPreserve && pMem->z==0 ); + + assert( pMem->szMalloc==0 + || pMem->szMalloc==sqlite3DbMallocSize(pMem->db, pMem->zMalloc) ); + if( pMem->szMalloc<n ){ + if( n<32 ) n = 32; + if( bPreserve && pMem->szMalloc>0 && pMem->z==pMem->zMalloc ){ pMem->z = pMem->zMalloc = sqlite3DbReallocOrFree(pMem->db, pMem->z, n); - preserve = 0; + bPreserve = 0; }else{ - sqlite3DbFree(pMem->db, pMem->zMalloc); + if( pMem->szMalloc>0 ) sqlite3DbFree(pMem->db, pMem->zMalloc); pMem->zMalloc = sqlite3DbMallocRaw(pMem->db, n); } + if( pMem->zMalloc==0 ){ + sqlite3VdbeMemSetNull(pMem); + pMem->z = 0; + pMem->szMalloc = 0; + return SQLITE_NOMEM; + }else{ + pMem->szMalloc = sqlite3DbMallocSize(pMem->db, pMem->zMalloc); + } } - if( pMem->z && preserve && pMem->zMalloc && pMem->z!=pMem->zMalloc ){ + if( bPreserve && pMem->z && pMem->z!=pMem->zMalloc ){ memcpy(pMem->zMalloc, pMem->z, pMem->n); } - if( pMem->flags&MEM_Dyn && pMem->xDel ){ + if( (pMem->flags&MEM_Dyn)!=0 ){ + assert( pMem->xDel!=0 && pMem->xDel!=SQLITE_DYNAMIC ); pMem->xDel((void *)(pMem->z)); } pMem->z = pMem->zMalloc; - if( pMem->z==0 ){ - pMem->flags = MEM_Null; - }else{ - pMem->flags &= ~(MEM_Ephem|MEM_Static); - } - pMem->xDel = 0; - return (pMem->z ? SQLITE_OK : SQLITE_NOMEM); + pMem->flags &= ~(MEM_Dyn|MEM_Ephem|MEM_Static); + return SQLITE_OK; } /* -** Make the given Mem object MEM_Dyn. In other words, make it so -** that any TEXT or BLOB content is stored in memory obtained from -** malloc(). In this way, we know that the memory is safe to be -** overwritten or altered. +** Change the pMem->zMalloc allocation to be at least szNew bytes. +** If pMem->zMalloc already meets or exceeds the requested size, this +** routine is a no-op. +** +** Any prior string or blob content in the pMem object may be discarded. +** The pMem->xDel destructor is called, if it exists. Though MEM_Str +** and MEM_Blob values may be discarded, MEM_Int, MEM_Real, and MEM_Null +** values are preserved. +** +** Return SQLITE_OK on success or an error code (probably SQLITE_NOMEM) +** if unable to complete the resizing. +*/ +SQLITE_PRIVATE int sqlite3VdbeMemClearAndResize(Mem *pMem, int szNew){ + assert( szNew>0 ); + assert( (pMem->flags & MEM_Dyn)==0 || pMem->szMalloc==0 ); + if( pMem->szMalloc<szNew ){ + return sqlite3VdbeMemGrow(pMem, szNew, 0); + } + assert( (pMem->flags & MEM_Dyn)==0 ); + pMem->z = pMem->zMalloc; + pMem->flags &= (MEM_Null|MEM_Int|MEM_Real); + return SQLITE_OK; +} + +/* +** Change pMem so that its MEM_Str or MEM_Blob value is stored in +** MEM.zMalloc, where it can be safely written. ** ** Return SQLITE_OK on success or SQLITE_NOMEM if malloc fails. */ SQLITE_PRIVATE int sqlite3VdbeMemMakeWriteable(Mem *pMem){ int f; assert( pMem->db==0 || sqlite3_mutex_held(pMem->db->mutex) ); assert( (pMem->flags&MEM_RowSet)==0 ); - expandBlob(pMem); + ExpandBlob(pMem); f = pMem->flags; - if( (f&(MEM_Str|MEM_Blob)) && pMem->z!=pMem->zMalloc ){ + if( (f&(MEM_Str|MEM_Blob)) && (pMem->szMalloc==0 || pMem->z!=pMem->zMalloc) ){ if( sqlite3VdbeMemGrow(pMem, pMem->n + 2, 1) ){ return SQLITE_NOMEM; } pMem->z[pMem->n] = 0; pMem->z[pMem->n+1] = 0; pMem->flags |= MEM_Term; } + pMem->flags &= ~MEM_Ephem; +#ifdef SQLITE_DEBUG + pMem->pScopyFrom = 0; +#endif return SQLITE_OK; } /* @@ -47604,43 +66308,53 @@ } return SQLITE_OK; } #endif - /* -** Make sure the given Mem is \u0000 terminated. +** It is already known that pMem contains an unterminated string. +** Add the zero terminator. 
*/ -SQLITE_PRIVATE int sqlite3VdbeMemNulTerminate(Mem *pMem){ - assert( pMem->db==0 || sqlite3_mutex_held(pMem->db->mutex) ); - if( (pMem->flags & MEM_Term)!=0 || (pMem->flags & MEM_Str)==0 ){ - return SQLITE_OK; /* Nothing to do */ - } +static SQLITE_NOINLINE int vdbeMemAddTerminator(Mem *pMem){ if( sqlite3VdbeMemGrow(pMem, pMem->n+2, 1) ){ return SQLITE_NOMEM; } pMem->z[pMem->n] = 0; pMem->z[pMem->n+1] = 0; pMem->flags |= MEM_Term; return SQLITE_OK; } + +/* +** Make sure the given Mem is \u0000 terminated. +*/ +SQLITE_PRIVATE int sqlite3VdbeMemNulTerminate(Mem *pMem){ + assert( pMem->db==0 || sqlite3_mutex_held(pMem->db->mutex) ); + testcase( (pMem->flags & (MEM_Term|MEM_Str))==(MEM_Term|MEM_Str) ); + testcase( (pMem->flags & (MEM_Term|MEM_Str))==0 ); + if( (pMem->flags & (MEM_Term|MEM_Str))!=MEM_Str ){ + return SQLITE_OK; /* Nothing to do */ + }else{ + return vdbeMemAddTerminator(pMem); + } +} /* ** Add MEM_Str to the set of representations for the given Mem. Numbers ** are converted using sqlite3_snprintf(). Converting a BLOB to a string ** is a no-op. ** -** Existing representations MEM_Int and MEM_Real are *not* invalidated. +** Existing representations MEM_Int and MEM_Real are invalidated if +** bForce is true but are retained if bForce is false. ** ** A MEM_Null value will never be passed to this function. This function is ** used for converting values to text for returning to the user (i.e. via ** sqlite3_value_text()), or for ensuring that values to be used as btree ** keys are strings. In the former case a NULL pointer is returned the -** user and the later is an internal programming error. +** user and the latter is an internal programming error. */ -SQLITE_PRIVATE int sqlite3VdbeMemStringify(Mem *pMem, int enc){ - int rc = SQLITE_OK; +SQLITE_PRIVATE int sqlite3VdbeMemStringify(Mem *pMem, u8 enc, u8 bForce){ int fg = pMem->flags; const int nByte = 32; assert( pMem->db==0 || sqlite3_mutex_held(pMem->db->mutex) ); assert( !(fg&MEM_Zero) ); @@ -47648,31 +66362,32 @@ assert( fg&(MEM_Int|MEM_Real) ); assert( (pMem->flags&MEM_RowSet)==0 ); assert( EIGHT_BYTE_ALIGNMENT(pMem) ); - if( sqlite3VdbeMemGrow(pMem, nByte, 0) ){ + if( sqlite3VdbeMemClearAndResize(pMem, nByte) ){ return SQLITE_NOMEM; } - /* For a Real or Integer, use sqlite3_mprintf() to produce the UTF-8 + /* For a Real or Integer, use sqlite3_snprintf() to produce the UTF-8 ** string representation of the value. Then, if the required encoding ** is UTF-16le or UTF-16be do a translation. ** ** FIX ME: It would be better if sqlite3_snprintf() could do UTF-16. */ if( fg & MEM_Int ){ sqlite3_snprintf(nByte, pMem->z, "%lld", pMem->u.i); }else{ assert( fg & MEM_Real ); - sqlite3_snprintf(nByte, pMem->z, "%!.15g", pMem->r); + sqlite3_snprintf(nByte, pMem->z, "%!.15g", pMem->u.r); } pMem->n = sqlite3Strlen30(pMem->z); pMem->enc = SQLITE_UTF8; pMem->flags |= MEM_Str|MEM_Term; + if( bForce ) pMem->flags &= ~(MEM_Int|MEM_Real); sqlite3VdbeChangeEncoding(pMem, enc); - return rc; + return SQLITE_OK; } /* ** Memory cell pMem contains the context of an aggregate function. ** This routine calls the finalize method for that function. 
The @@ -47683,78 +66398,100 @@ */ SQLITE_PRIVATE int sqlite3VdbeMemFinalize(Mem *pMem, FuncDef *pFunc){ int rc = SQLITE_OK; if( ALWAYS(pFunc && pFunc->xFinalize) ){ sqlite3_context ctx; + Mem t; assert( (pMem->flags & MEM_Null)!=0 || pFunc==pMem->u.pDef ); assert( pMem->db==0 || sqlite3_mutex_held(pMem->db->mutex) ); memset(&ctx, 0, sizeof(ctx)); - ctx.s.flags = MEM_Null; - ctx.s.db = pMem->db; + memset(&t, 0, sizeof(t)); + t.flags = MEM_Null; + t.db = pMem->db; + ctx.pOut = &t; ctx.pMem = pMem; ctx.pFunc = pFunc; - pFunc->xFinalize(&ctx); - assert( 0==(pMem->flags&MEM_Dyn) && !pMem->xDel ); - sqlite3DbFree(pMem->db, pMem->zMalloc); - memcpy(pMem, &ctx.s, sizeof(ctx.s)); + pFunc->xFinalize(&ctx); /* IMP: R-24505-23230 */ + assert( (pMem->flags & MEM_Dyn)==0 ); + if( pMem->szMalloc>0 ) sqlite3DbFree(pMem->db, pMem->zMalloc); + memcpy(pMem, &t, sizeof(t)); rc = ctx.isError; } return rc; } /* -** If the memory cell contains a string value that must be freed by -** invoking an external callback, free it now. Calling this function -** does not free any Mem.zMalloc buffer. +** If the memory cell contains a value that must be freed by +** invoking the external callback in Mem.xDel, then this routine +** will free that value. It also sets Mem.flags to MEM_Null. +** +** This is a helper routine for sqlite3VdbeMemSetNull() and +** for sqlite3VdbeMemRelease(). Use those other routines as the +** entry point for releasing Mem resources. */ -SQLITE_PRIVATE void sqlite3VdbeMemReleaseExternal(Mem *p){ +static SQLITE_NOINLINE void vdbeMemClearExternAndSetNull(Mem *p){ assert( p->db==0 || sqlite3_mutex_held(p->db->mutex) ); - testcase( p->flags & MEM_Agg ); - testcase( p->flags & MEM_Dyn ); - testcase( p->flags & MEM_RowSet ); - testcase( p->flags & MEM_Frame ); - if( p->flags&(MEM_Agg|MEM_Dyn|MEM_RowSet|MEM_Frame) ){ - if( p->flags&MEM_Agg ){ - sqlite3VdbeMemFinalize(p, p->u.pDef); - assert( (p->flags & MEM_Agg)==0 ); - sqlite3VdbeMemRelease(p); - }else if( p->flags&MEM_Dyn && p->xDel ){ - assert( (p->flags&MEM_RowSet)==0 ); - p->xDel((void *)p->z); - p->xDel = 0; - }else if( p->flags&MEM_RowSet ){ - sqlite3RowSetClear(p->u.pRowSet); - }else if( p->flags&MEM_Frame ){ - sqlite3VdbeMemSetNull(p); - } - } + assert( VdbeMemDynamic(p) ); + if( p->flags&MEM_Agg ){ + sqlite3VdbeMemFinalize(p, p->u.pDef); + assert( (p->flags & MEM_Agg)==0 ); + testcase( p->flags & MEM_Dyn ); + } + if( p->flags&MEM_Dyn ){ + assert( (p->flags&MEM_RowSet)==0 ); + assert( p->xDel!=SQLITE_DYNAMIC && p->xDel!=0 ); + p->xDel((void *)p->z); + }else if( p->flags&MEM_RowSet ){ + sqlite3RowSetClear(p->u.pRowSet); + }else if( p->flags&MEM_Frame ){ + VdbeFrame *pFrame = p->u.pFrame; + pFrame->pParent = pFrame->v->pDelFrame; + pFrame->v->pDelFrame = pFrame; + } + p->flags = MEM_Null; } /* -** Release any memory held by the Mem. This may leave the Mem in an -** inconsistent state, for example with (Mem.z==0) and -** (Mem.type==SQLITE_TEXT). +** Release memory held by the Mem p, both external memory cleared +** by p->xDel and memory in p->zMalloc. +** +** This is a helper routine invoked by sqlite3VdbeMemRelease() in +** the unusual case where there really is memory in p that needs +** to be freed. +*/ +static SQLITE_NOINLINE void vdbeMemClear(Mem *p){ + if( VdbeMemDynamic(p) ){ + vdbeMemClearExternAndSetNull(p); + } + if( p->szMalloc ){ + sqlite3DbFree(p->db, p->zMalloc); + p->szMalloc = 0; + } + p->z = 0; +} + +/* +** Release any memory resources held by the Mem. 
Both the memory that is +** free by Mem.xDel and the Mem.zMalloc allocation are freed. +** +** Use this routine prior to clean up prior to abandoning a Mem, or to +** reset a Mem back to its minimum memory utilization. +** +** Use sqlite3VdbeMemSetNull() to release just the Mem.xDel space +** prior to inserting new content into the Mem. */ SQLITE_PRIVATE void sqlite3VdbeMemRelease(Mem *p){ - sqlite3VdbeMemReleaseExternal(p); - sqlite3DbFree(p->db, p->zMalloc); - p->z = 0; - p->zMalloc = 0; - p->xDel = 0; + assert( sqlite3VdbeCheckMemInvariants(p) ); + if( VdbeMemDynamic(p) || p->szMalloc ){ + vdbeMemClear(p); + } } /* ** Convert a 64-bit IEEE double into a 64-bit signed integer. -** If the double is too large, return 0x8000000000000000. -** -** Most systems appear to do this simply by assigning -** variables and without the extra range tests. But -** there are reports that windows throws an expection -** if the floating point value is out of range. (See ticket #2880.) -** Because we do not completely understand the problem, we will -** take the conservative approach and always do range tests -** before attempting the conversion. +** If the double is out of range of a 64-bit signed integer then +** return the closest available 64-bit signed integer. */ static i64 doubleToInt64(double r){ #ifdef SQLITE_OMIT_FLOATING_POINT /* When floating-point is omitted, double and int64 are the same thing */ return r; @@ -47767,18 +66504,14 @@ ** larger than a 32-bit integer constant. */ static const i64 maxInt = LARGEST_INT64; static const i64 minInt = SMALLEST_INT64; - if( r<(double)minInt ){ - return minInt; - }else if( r>(double)maxInt ){ - /* minInt is correct here - not maxInt. It turns out that assigning - ** a very large positive number to an integer results in a very large - ** negative integer. This makes no sense, but it is what x86 hardware - ** does so for compatibility we will do the same in software. */ - return minInt; + if( r<=(double)minInt ){ + return minInt; + }else if( r>=(double)maxInt ){ + return maxInt; }else{ return (i64)r; } #endif } @@ -47787,11 +66520,11 @@ ** Return some kind of integer value which is the best we can do ** at representing the value that *pMem describes as an integer. ** If pMem is an integer, then the value is exact. If pMem is ** a floating-point then the value returned is the integer part. ** If pMem is a string or blob, then we make an attempt to convert -** it into a integer and return that. If pMem represents an +** it into an integer and return that. If pMem represents an ** an SQL-NULL value, return 0. ** ** If pMem represents a string value, its encoding might be changed. 
*/ SQLITE_PRIVATE i64 sqlite3VdbeIntValue(Mem *pMem){ @@ -47800,20 +66533,15 @@ assert( EIGHT_BYTE_ALIGNMENT(pMem) ); flags = pMem->flags; if( flags & MEM_Int ){ return pMem->u.i; }else if( flags & MEM_Real ){ - return doubleToInt64(pMem->r); + return doubleToInt64(pMem->u.r); }else if( flags & (MEM_Str|MEM_Blob) ){ - i64 value; - pMem->flags |= MEM_Str; - if( sqlite3VdbeChangeEncoding(pMem, SQLITE_UTF8) - || sqlite3VdbeMemNulTerminate(pMem) ){ - return 0; - } - assert( pMem->z ); - sqlite3Atoi64(pMem->z, &value); + i64 value = 0; + assert( pMem->z || pMem->n==0 ); + sqlite3Atoi64(pMem->z, &value, pMem->n, pMem->enc); return value; }else{ return 0; } } @@ -47826,24 +66554,17 @@ */ SQLITE_PRIVATE double sqlite3VdbeRealValue(Mem *pMem){ assert( pMem->db==0 || sqlite3_mutex_held(pMem->db->mutex) ); assert( EIGHT_BYTE_ALIGNMENT(pMem) ); if( pMem->flags & MEM_Real ){ - return pMem->r; + return pMem->u.r; }else if( pMem->flags & MEM_Int ){ return (double)pMem->u.i; }else if( pMem->flags & (MEM_Str|MEM_Blob) ){ /* (double)0 In case of SQLITE_OMIT_FLOATING_POINT... */ double val = (double)0; - pMem->flags |= MEM_Str; - if( sqlite3VdbeChangeEncoding(pMem, SQLITE_UTF8) - || sqlite3VdbeMemNulTerminate(pMem) ){ - /* (double)0 In case of SQLITE_OMIT_FLOATING_POINT... */ - return (double)0; - } - assert( pMem->z ); - sqlite3AtoF(pMem->z, &val); + sqlite3AtoF(pMem->z, &val, pMem->n, pMem->enc); return val; }else{ /* (double)0 In case of SQLITE_OMIT_FLOATING_POINT... */ return (double)0; } @@ -47852,32 +66573,31 @@ /* ** The MEM structure is already a MEM_Real. Try to also make it a ** MEM_Int if we can. */ SQLITE_PRIVATE void sqlite3VdbeIntegerAffinity(Mem *pMem){ + i64 ix; assert( pMem->flags & MEM_Real ); assert( (pMem->flags & MEM_RowSet)==0 ); assert( pMem->db==0 || sqlite3_mutex_held(pMem->db->mutex) ); assert( EIGHT_BYTE_ALIGNMENT(pMem) ); - pMem->u.i = doubleToInt64(pMem->r); + ix = doubleToInt64(pMem->u.r); /* Only mark the value as an integer if ** ** (1) the round-trip conversion real->int->real is a no-op, and ** (2) The integer is neither the largest nor the smallest ** possible integer (ticket #3922) ** ** The second and third terms in the following conditional enforces ** the second condition under the assumption that addition overflow causes - ** values to wrap around. On x86 hardware, the third term is always - ** true and could be omitted. But we leave it in because other - ** architectures might behave differently. + ** values to wrap around. */ - if( pMem->r==(double)pMem->u.i && pMem->u.i>SMALLEST_INT64 - && ALWAYS(pMem->u.i<LARGEST_INT64) ){ - pMem->flags |= MEM_Int; + if( pMem->u.r==ix && ix>SMALLEST_INT64 && ix<LARGEST_INT64 ){ + pMem->u.i = ix; + MemSetTypeFlag(pMem, MEM_Int); } } /* ** Convert pMem to type integer. Invalidate any prior representations. @@ -47898,11 +66618,11 @@ */ SQLITE_PRIVATE int sqlite3VdbeMemRealify(Mem *pMem){ assert( pMem->db==0 || sqlite3_mutex_held(pMem->db->mutex) ); assert( EIGHT_BYTE_ALIGNMENT(pMem) ); - pMem->r = sqlite3VdbeRealValue(pMem); + pMem->u.r = sqlite3VdbeRealValue(pMem); MemSetTypeFlag(pMem, MEM_Real); return SQLITE_OK; } /* @@ -47912,88 +66632,154 @@ ** Every effort is made to force the conversion, even if the input ** is a string that does not look completely like a number. Convert ** as much of the string as we can and ignore the rest. 
*/ SQLITE_PRIVATE int sqlite3VdbeMemNumerify(Mem *pMem){ - int rc; - assert( (pMem->flags & (MEM_Int|MEM_Real|MEM_Null))==0 ); - assert( (pMem->flags & (MEM_Blob|MEM_Str))!=0 ); - assert( pMem->db==0 || sqlite3_mutex_held(pMem->db->mutex) ); - rc = sqlite3VdbeChangeEncoding(pMem, SQLITE_UTF8); - if( rc ) return rc; - rc = sqlite3VdbeMemNulTerminate(pMem); - if( rc ) return rc; - if( sqlite3Atoi64(pMem->z, &pMem->u.i) ){ - MemSetTypeFlag(pMem, MEM_Int); - }else{ - pMem->r = sqlite3VdbeRealValue(pMem); - MemSetTypeFlag(pMem, MEM_Real); - sqlite3VdbeIntegerAffinity(pMem); - } + if( (pMem->flags & (MEM_Int|MEM_Real|MEM_Null))==0 ){ + assert( (pMem->flags & (MEM_Blob|MEM_Str))!=0 ); + assert( pMem->db==0 || sqlite3_mutex_held(pMem->db->mutex) ); + if( 0==sqlite3Atoi64(pMem->z, &pMem->u.i, pMem->n, pMem->enc) ){ + MemSetTypeFlag(pMem, MEM_Int); + }else{ + pMem->u.r = sqlite3VdbeRealValue(pMem); + MemSetTypeFlag(pMem, MEM_Real); + sqlite3VdbeIntegerAffinity(pMem); + } + } + assert( (pMem->flags & (MEM_Int|MEM_Real|MEM_Null))!=0 ); + pMem->flags &= ~(MEM_Str|MEM_Blob); return SQLITE_OK; } + +/* +** Cast the datatype of the value in pMem according to the affinity +** "aff". Casting is different from applying affinity in that a cast +** is forced. In other words, the value is converted into the desired +** affinity even if that results in loss of data. This routine is +** used (for example) to implement the SQL "cast()" operator. +*/ +SQLITE_PRIVATE void sqlite3VdbeMemCast(Mem *pMem, u8 aff, u8 encoding){ + if( pMem->flags & MEM_Null ) return; + switch( aff ){ + case SQLITE_AFF_BLOB: { /* Really a cast to BLOB */ + if( (pMem->flags & MEM_Blob)==0 ){ + sqlite3ValueApplyAffinity(pMem, SQLITE_AFF_TEXT, encoding); + assert( pMem->flags & MEM_Str || pMem->db->mallocFailed ); + MemSetTypeFlag(pMem, MEM_Blob); + }else{ + pMem->flags &= ~(MEM_TypeMask&~MEM_Blob); + } + break; + } + case SQLITE_AFF_NUMERIC: { + sqlite3VdbeMemNumerify(pMem); + break; + } + case SQLITE_AFF_INTEGER: { + sqlite3VdbeMemIntegerify(pMem); + break; + } + case SQLITE_AFF_REAL: { + sqlite3VdbeMemRealify(pMem); + break; + } + default: { + assert( aff==SQLITE_AFF_TEXT ); + assert( MEM_Str==(MEM_Blob>>3) ); + pMem->flags |= (pMem->flags&MEM_Blob)>>3; + sqlite3ValueApplyAffinity(pMem, SQLITE_AFF_TEXT, encoding); + assert( pMem->flags & MEM_Str || pMem->db->mallocFailed ); + pMem->flags &= ~(MEM_Int|MEM_Real|MEM_Blob|MEM_Zero); + break; + } + } +} + +/* +** Initialize bulk memory to be a consistent Mem object. +** +** The minimum amount of initialization feasible is performed. +*/ +SQLITE_PRIVATE void sqlite3VdbeMemInit(Mem *pMem, sqlite3 *db, u16 flags){ + assert( (flags & ~MEM_TypeMask)==0 ); + pMem->flags = flags; + pMem->db = db; + pMem->szMalloc = 0; +} + /* ** Delete any previous value and set the value stored in *pMem to NULL. +** +** This routine calls the Mem.xDel destructor to dispose of values that +** require the destructor. But it preserves the Mem.zMalloc memory allocation. +** To free all resources, use sqlite3VdbeMemRelease(), which both calls this +** routine to invoke the destructor and deallocates Mem.zMalloc. +** +** Use this routine to reset the Mem prior to insert a new value. +** +** Use sqlite3VdbeMemRelease() to complete erase the Mem prior to abandoning it. 
*/ SQLITE_PRIVATE void sqlite3VdbeMemSetNull(Mem *pMem){ - if( pMem->flags & MEM_Frame ){ - sqlite3VdbeFrameDelete(pMem->u.pFrame); - } - if( pMem->flags & MEM_RowSet ){ - sqlite3RowSetClear(pMem->u.pRowSet); - } - MemSetTypeFlag(pMem, MEM_Null); - pMem->type = SQLITE_NULL; + if( VdbeMemDynamic(pMem) ){ + vdbeMemClearExternAndSetNull(pMem); + }else{ + pMem->flags = MEM_Null; + } +} +SQLITE_PRIVATE void sqlite3ValueSetNull(sqlite3_value *p){ + sqlite3VdbeMemSetNull((Mem*)p); } /* ** Delete any previous value and set the value to be a BLOB of length ** n containing all zeros. */ SQLITE_PRIVATE void sqlite3VdbeMemSetZeroBlob(Mem *pMem, int n){ sqlite3VdbeMemRelease(pMem); pMem->flags = MEM_Blob|MEM_Zero; - pMem->type = SQLITE_BLOB; pMem->n = 0; if( n<0 ) n = 0; pMem->u.nZero = n; pMem->enc = SQLITE_UTF8; - -#ifdef SQLITE_OMIT_INCRBLOB - sqlite3VdbeMemGrow(pMem, n, 0); - if( pMem->z ){ - pMem->n = n; - memset(pMem->z, 0, n); - } -#endif + pMem->z = 0; +} + +/* +** The pMem is known to contain content that needs to be destroyed prior +** to a value change. So invoke the destructor, then set the value to +** a 64-bit integer. +*/ +static SQLITE_NOINLINE void vdbeReleaseAndSetInt64(Mem *pMem, i64 val){ + sqlite3VdbeMemSetNull(pMem); + pMem->u.i = val; + pMem->flags = MEM_Int; } /* ** Delete any previous value and set the value stored in *pMem to val, ** manifest type INTEGER. */ SQLITE_PRIVATE void sqlite3VdbeMemSetInt64(Mem *pMem, i64 val){ - sqlite3VdbeMemRelease(pMem); - pMem->u.i = val; - pMem->flags = MEM_Int; - pMem->type = SQLITE_INTEGER; + if( VdbeMemDynamic(pMem) ){ + vdbeReleaseAndSetInt64(pMem, val); + }else{ + pMem->u.i = val; + pMem->flags = MEM_Int; + } } #ifndef SQLITE_OMIT_FLOATING_POINT /* ** Delete any previous value and set the value stored in *pMem to val, ** manifest type REAL. */ SQLITE_PRIVATE void sqlite3VdbeMemSetDouble(Mem *pMem, double val){ - if( sqlite3IsNaN(val) ){ - sqlite3VdbeMemSetNull(pMem); - }else{ - sqlite3VdbeMemRelease(pMem); - pMem->r = val; + sqlite3VdbeMemSetNull(pMem); + if( !sqlite3IsNaN(val) ){ + pMem->u.r = val; pMem->flags = MEM_Real; - pMem->type = SQLITE_FLOAT; } } #endif /* @@ -48003,17 +66789,18 @@ SQLITE_PRIVATE void sqlite3VdbeMemSetRowSet(Mem *pMem){ sqlite3 *db = pMem->db; assert( db!=0 ); assert( (pMem->flags & MEM_RowSet)==0 ); sqlite3VdbeMemRelease(pMem); - pMem->zMalloc = sqlite3DbMallocRaw(db, 64); + pMem->zMalloc = sqlite3DbMallocRawNN(db, 64); if( db->mallocFailed ){ pMem->flags = MEM_Null; + pMem->szMalloc = 0; }else{ assert( pMem->zMalloc ); - pMem->u.pRowSet = sqlite3RowSetInit(db, pMem->zMalloc, - sqlite3DbMallocSize(db, pMem->zMalloc)); + pMem->szMalloc = sqlite3DbMallocSize(db, pMem->zMalloc); + pMem->u.pRowSet = sqlite3RowSetInit(db, pMem->zMalloc, pMem->szMalloc); assert( pMem->u.pRowSet!=0 ); pMem->flags = MEM_RowSet; } } @@ -48031,26 +66818,49 @@ return n>p->db->aLimit[SQLITE_LIMIT_LENGTH]; } return 0; } +#ifdef SQLITE_DEBUG /* -** Size of struct Mem not including the Mem.zMalloc member. +** This routine prepares a memory cell for modification by breaking +** its link to a shallow copy and by marking any current shallow +** copies of this cell as invalid. +** +** This is used for testing and debugging only - to make sure shallow +** copies are not misused. 
*/ -#define MEMCELLSIZE (size_t)(&(((Mem *)0)->zMalloc)) +SQLITE_PRIVATE void sqlite3VdbeMemAboutToChange(Vdbe *pVdbe, Mem *pMem){ + int i; + Mem *pX; + for(i=1, pX=&pVdbe->aMem[1]; i<=pVdbe->nMem; i++, pX++){ + if( pX->pScopyFrom==pMem ){ + pX->flags |= MEM_Undefined; + pX->pScopyFrom = 0; + } + } + pMem->pScopyFrom = 0; +} +#endif /* SQLITE_DEBUG */ + /* ** Make an shallow copy of pFrom into pTo. Prior contents of ** pTo are freed. The pFrom->z field is not duplicated. If ** pFrom->z is used, then pTo->z points to the same thing as pFrom->z ** and flags gets srcType (either MEM_Ephem or MEM_Static). */ +static SQLITE_NOINLINE void vdbeClrCopy(Mem *pTo, const Mem *pFrom, int eType){ + vdbeMemClearExternAndSetNull(pTo); + assert( !VdbeMemDynamic(pTo) ); + sqlite3VdbeMemShallowCopy(pTo, pFrom, eType); +} SQLITE_PRIVATE void sqlite3VdbeMemShallowCopy(Mem *pTo, const Mem *pFrom, int srcType){ assert( (pFrom->flags & MEM_RowSet)==0 ); - sqlite3VdbeMemReleaseExternal(pTo); + assert( pTo->db==pFrom->db ); + if( VdbeMemDynamic(pTo) ){ vdbeClrCopy(pTo,pFrom,srcType); return; } memcpy(pTo, pFrom, MEMCELLSIZE); - pTo->xDel = 0; if( (pFrom->flags&MEM_Static)==0 ){ pTo->flags &= ~(MEM_Dyn|MEM_Static|MEM_Ephem); assert( srcType==MEM_Ephem || srcType==MEM_Static ); pTo->flags |= srcType; } @@ -48061,15 +66871,18 @@ ** freed before the copy is made. */ SQLITE_PRIVATE int sqlite3VdbeMemCopy(Mem *pTo, const Mem *pFrom){ int rc = SQLITE_OK; + /* The pFrom==0 case in the following assert() is when an sqlite3_value + ** from sqlite3_value_dup() is used as the argument + ** to sqlite3_result_value(). */ + assert( pTo->db==pFrom->db || pFrom->db==0 ); assert( (pFrom->flags & MEM_RowSet)==0 ); - sqlite3VdbeMemReleaseExternal(pTo); + if( VdbeMemDynamic(pTo) ) vdbeMemClearExternAndSetNull(pTo); memcpy(pTo, pFrom, MEMCELLSIZE); pTo->flags &= ~MEM_Dyn; - if( pTo->flags&(MEM_Str|MEM_Blob) ){ if( 0==(pFrom->flags&MEM_Static) ){ pTo->flags |= MEM_Ephem; rc = sqlite3VdbeMemMakeWriteable(pTo); } @@ -48090,12 +66903,11 @@ assert( pFrom->db==0 || pTo->db==0 || pFrom->db==pTo->db ); sqlite3VdbeMemRelease(pTo); memcpy(pTo, pFrom, sizeof(Mem)); pFrom->flags = MEM_Null; - pFrom->xDel = 0; - pFrom->zMalloc = 0; + pFrom->szMalloc = 0; } /* ** Change the value of a Mem to be a string or a BLOB. ** @@ -48138,11 +66950,12 @@ } flags = (enc==0?MEM_Blob:MEM_Str); if( nByte<0 ){ assert( enc!=0 ); if( enc==SQLITE_UTF8 ){ - for(nByte=0; nByte<=iLimit && z[nByte]; nByte++){} + nByte = sqlite3Strlen30(z); + if( nByte>iLimit ) nByte = iLimit+1; }else{ for(nByte=0; nByte<=iLimit && (z[nByte] | z[nByte+1]); nByte+=2){} } flags |= MEM_Term; } @@ -48157,18 +66970,21 @@ nAlloc += (enc==SQLITE_UTF8?1:2); } if( nByte>iLimit ){ return SQLITE_TOOBIG; } - if( sqlite3VdbeMemGrow(pMem, nAlloc, 0) ){ + testcase( nAlloc==0 ); + testcase( nAlloc==31 ); + testcase( nAlloc==32 ); + if( sqlite3VdbeMemClearAndResize(pMem, MAX(nAlloc,32)) ){ return SQLITE_NOMEM; } memcpy(pMem->z, z, nAlloc); }else if( xDel==SQLITE_DYNAMIC ){ sqlite3VdbeMemRelease(pMem); pMem->zMalloc = pMem->z = (char *)z; - pMem->xDel = 0; + pMem->szMalloc = sqlite3DbMallocSize(pMem->db, pMem->zMalloc); }else{ sqlite3VdbeMemRelease(pMem); pMem->z = (char *)z; pMem->xDel = xDel; flags |= ((xDel==SQLITE_STATIC)?MEM_Static:MEM_Dyn); @@ -48175,11 +66991,10 @@ } pMem->n = nByte; pMem->flags = flags; pMem->enc = (enc==0 ? SQLITE_UTF8 : enc); - pMem->type = (enc==0 ? 
SQLITE_BLOB : SQLITE_TEXT); #ifndef SQLITE_OMIT_UTF16 if( pMem->enc!=SQLITE_UTF8 && sqlite3VdbeMemHandleBom(pMem) ){ return SQLITE_NOMEM; } @@ -48190,153 +67005,65 @@ } return SQLITE_OK; } -/* -** Compare the values contained by the two memory cells, returning -** negative, zero or positive if pMem1 is less than, equal to, or greater -** than pMem2. Sorting order is NULL's first, followed by numbers (integers -** and reals) sorted numerically, followed by text ordered by the collating -** sequence pColl and finally blob's ordered by memcmp(). -** -** Two NULL values are considered equal by this function. -*/ -SQLITE_PRIVATE int sqlite3MemCompare(const Mem *pMem1, const Mem *pMem2, const CollSeq *pColl){ - int rc; - int f1, f2; - int combined_flags; - - f1 = pMem1->flags; - f2 = pMem2->flags; - combined_flags = f1|f2; - assert( (combined_flags & MEM_RowSet)==0 ); - - /* If one value is NULL, it is less than the other. If both values - ** are NULL, return 0. - */ - if( combined_flags&MEM_Null ){ - return (f2&MEM_Null) - (f1&MEM_Null); - } - - /* If one value is a number and the other is not, the number is less. - ** If both are numbers, compare as reals if one is a real, or as integers - ** if both values are integers. - */ - if( combined_flags&(MEM_Int|MEM_Real) ){ - if( !(f1&(MEM_Int|MEM_Real)) ){ - return 1; - } - if( !(f2&(MEM_Int|MEM_Real)) ){ - return -1; - } - if( (f1 & f2 & MEM_Int)==0 ){ - double r1, r2; - if( (f1&MEM_Real)==0 ){ - r1 = (double)pMem1->u.i; - }else{ - r1 = pMem1->r; - } - if( (f2&MEM_Real)==0 ){ - r2 = (double)pMem2->u.i; - }else{ - r2 = pMem2->r; - } - if( r1<r2 ) return -1; - if( r1>r2 ) return 1; - return 0; - }else{ - assert( f1&MEM_Int ); - assert( f2&MEM_Int ); - if( pMem1->u.i < pMem2->u.i ) return -1; - if( pMem1->u.i > pMem2->u.i ) return 1; - return 0; - } - } - - /* If one value is a string and the other is a blob, the string is less. - ** If both are strings, compare using the collating functions. - */ - if( combined_flags&MEM_Str ){ - if( (f1 & MEM_Str)==0 ){ - return 1; - } - if( (f2 & MEM_Str)==0 ){ - return -1; - } - - assert( pMem1->enc==pMem2->enc ); - assert( pMem1->enc==SQLITE_UTF8 || - pMem1->enc==SQLITE_UTF16LE || pMem1->enc==SQLITE_UTF16BE ); - - /* The collation sequence must be defined at this point, even if - ** the user deletes the collation sequence after the vdbe program is - ** compiled (this was not always the case). - */ - assert( !pColl || pColl->xCmp ); - - if( pColl ){ - if( pMem1->enc==pColl->enc ){ - /* The strings are already in the correct encoding. Call the - ** comparison function directly */ - return pColl->xCmp(pColl->pUser,pMem1->n,pMem1->z,pMem2->n,pMem2->z); - }else{ - const void *v1, *v2; - int n1, n2; - Mem c1; - Mem c2; - memset(&c1, 0, sizeof(c1)); - memset(&c2, 0, sizeof(c2)); - sqlite3VdbeMemShallowCopy(&c1, pMem1, MEM_Ephem); - sqlite3VdbeMemShallowCopy(&c2, pMem2, MEM_Ephem); - v1 = sqlite3ValueText((sqlite3_value*)&c1, pColl->enc); - n1 = v1==0 ? 0 : c1.n; - v2 = sqlite3ValueText((sqlite3_value*)&c2, pColl->enc); - n2 = v2==0 ? 0 : c2.n; - rc = pColl->xCmp(pColl->pUser, n1, v1, n2, v2); - sqlite3VdbeMemRelease(&c1); - sqlite3VdbeMemRelease(&c2); - return rc; - } - } - /* If a NULL pointer was passed as the collate function, fall through - ** to the blob case and use memcmp(). */ - } - - /* Both values must be blobs. Compare using memcmp(). 
*/ - rc = memcmp(pMem1->z, pMem2->z, (pMem1->n>pMem2->n)?pMem2->n:pMem1->n); - if( rc==0 ){ - rc = pMem1->n - pMem2->n; - } - return rc; -} - /* ** Move data out of a btree key or data field and into a Mem structure. ** The data or key is taken from the entry that pCur is currently pointing ** to. offset and amt determine what portion of the data or key to retrieve. ** key is true to get the key or false to get data. The result is written ** into the pMem element. ** -** The pMem structure is assumed to be uninitialized. Any prior content -** is overwritten without being freed. +** The pMem object must have been initialized. This routine will use +** pMem->zMalloc to hold the content from the btree, if possible. New +** pMem->zMalloc space will be allocated if necessary. The calling routine +** is responsible for making sure that the pMem object is eventually +** destroyed. ** ** If this routine fails for any reason (malloc returns NULL or unable ** to read from the disk) then the pMem is left in an inconsistent state. */ +static SQLITE_NOINLINE int vdbeMemFromBtreeResize( + BtCursor *pCur, /* Cursor pointing at record to retrieve. */ + u32 offset, /* Offset from the start of data to return bytes from. */ + u32 amt, /* Number of bytes to return. */ + int key, /* If true, retrieve from the btree key, not data. */ + Mem *pMem /* OUT: Return data in this Mem structure. */ +){ + int rc; + pMem->flags = MEM_Null; + if( SQLITE_OK==(rc = sqlite3VdbeMemClearAndResize(pMem, amt+2)) ){ + if( key ){ + rc = sqlite3BtreeKey(pCur, offset, amt, pMem->z); + }else{ + rc = sqlite3BtreeData(pCur, offset, amt, pMem->z); + } + if( rc==SQLITE_OK ){ + pMem->z[amt] = 0; + pMem->z[amt+1] = 0; + pMem->flags = MEM_Blob|MEM_Term; + pMem->n = (int)amt; + }else{ + sqlite3VdbeMemRelease(pMem); + } + } + return rc; +} SQLITE_PRIVATE int sqlite3VdbeMemFromBtree( BtCursor *pCur, /* Cursor pointing at record to retrieve. */ - int offset, /* Offset from the start of data to return bytes from. */ - int amt, /* Number of bytes to return. */ + u32 offset, /* Offset from the start of data to return bytes from. */ + u32 amt, /* Number of bytes to return. */ int key, /* If true, retrieve from the btree key, not data. */ Mem *pMem /* OUT: Return data in this Mem structure. */ ){ char *zData; /* Data from the btree layer */ - int available = 0; /* Number of bytes available on the local btree page */ + u32 available = 0; /* Number of bytes available on the local btree page */ int rc = SQLITE_OK; /* Return code */ assert( sqlite3BtreeCursorIsValid(pCur) ); + assert( !VdbeMemDynamic(pMem) ); /* Note: the calls to BtreeKeyFetch() and DataFetch() below assert() ** that both the BtShared and database handle mutexes are held. 
*/ assert( (pMem->flags & MEM_RowSet)==0 ); if( key ){ @@ -48344,33 +67071,59 @@ }else{ zData = (char *)sqlite3BtreeDataFetch(pCur, &available); } assert( zData!=0 ); - if( offset+amt<=available && (pMem->flags&MEM_Dyn)==0 ){ - sqlite3VdbeMemRelease(pMem); + if( offset+amt<=available ){ pMem->z = &zData[offset]; pMem->flags = MEM_Blob|MEM_Ephem; - }else if( SQLITE_OK==(rc = sqlite3VdbeMemGrow(pMem, amt+2, 0)) ){ - pMem->flags = MEM_Blob|MEM_Dyn|MEM_Term; - pMem->enc = 0; - pMem->type = SQLITE_BLOB; - if( key ){ - rc = sqlite3BtreeKey(pCur, offset, amt, pMem->z); - }else{ - rc = sqlite3BtreeData(pCur, offset, amt, pMem->z); - } - pMem->z[amt] = 0; - pMem->z[amt+1] = 0; - if( rc!=SQLITE_OK ){ - sqlite3VdbeMemRelease(pMem); - } - } - pMem->n = amt; + pMem->n = (int)amt; + }else{ + rc = vdbeMemFromBtreeResize(pCur, offset, amt, key, pMem); + } return rc; } + +/* +** The pVal argument is known to be a value other than NULL. +** Convert it into a string with encoding enc and return a pointer +** to a zero-terminated version of that string. +*/ +static SQLITE_NOINLINE const void *valueToText(sqlite3_value* pVal, u8 enc){ + assert( pVal!=0 ); + assert( pVal->db==0 || sqlite3_mutex_held(pVal->db->mutex) ); + assert( (enc&3)==(enc&~SQLITE_UTF16_ALIGNED) ); + assert( (pVal->flags & MEM_RowSet)==0 ); + assert( (pVal->flags & (MEM_Null))==0 ); + if( pVal->flags & (MEM_Blob|MEM_Str) ){ + pVal->flags |= MEM_Str; + if( pVal->flags & MEM_Zero ){ + sqlite3VdbeMemExpandBlob(pVal); + } + if( pVal->enc != (enc & ~SQLITE_UTF16_ALIGNED) ){ + sqlite3VdbeChangeEncoding(pVal, enc & ~SQLITE_UTF16_ALIGNED); + } + if( (enc & SQLITE_UTF16_ALIGNED)!=0 && 1==(1&SQLITE_PTR_TO_INT(pVal->z)) ){ + assert( (pVal->flags & (MEM_Ephem|MEM_Static))!=0 ); + if( sqlite3VdbeMemMakeWriteable(pVal)!=SQLITE_OK ){ + return 0; + } + } + sqlite3VdbeMemNulTerminate(pVal); /* IMP: R-31275-44060 */ + }else{ + sqlite3VdbeMemStringify(pVal, enc, 0); + assert( 0==(1&SQLITE_PTR_TO_INT(pVal->z)) ); + } + assert(pVal->enc==(enc & ~SQLITE_UTF16_ALIGNED) || pVal->db==0 + || pVal->db->mallocFailed ); + if( pVal->enc==(enc & ~SQLITE_UTF16_ALIGNED) ){ + return pVal->z; + }else{ + return 0; + } +} /* This function is only available internally, it is not part of the ** external API. It works in a similar way to sqlite3_value_text(), ** except the data returned is in the encoding specified by the second ** parameter, which must be one of SQLITE_UTF16BE, SQLITE_UTF16LE or @@ -48380,56 +67133,338 @@ ** If that is the case, then the result must be aligned on an even byte ** boundary. 
*/ SQLITE_PRIVATE const void *sqlite3ValueText(sqlite3_value* pVal, u8 enc){ if( !pVal ) return 0; - assert( pVal->db==0 || sqlite3_mutex_held(pVal->db->mutex) ); assert( (enc&3)==(enc&~SQLITE_UTF16_ALIGNED) ); assert( (pVal->flags & MEM_RowSet)==0 ); - + if( (pVal->flags&(MEM_Str|MEM_Term))==(MEM_Str|MEM_Term) && pVal->enc==enc ){ + return pVal->z; + } if( pVal->flags&MEM_Null ){ return 0; } - assert( (MEM_Blob>>3) == MEM_Str ); - pVal->flags |= (pVal->flags & MEM_Blob)>>3; - expandBlob(pVal); - if( pVal->flags&MEM_Str ){ - sqlite3VdbeChangeEncoding(pVal, enc & ~SQLITE_UTF16_ALIGNED); - if( (enc & SQLITE_UTF16_ALIGNED)!=0 && 1==(1&SQLITE_PTR_TO_INT(pVal->z)) ){ - assert( (pVal->flags & (MEM_Ephem|MEM_Static))!=0 ); - if( sqlite3VdbeMemMakeWriteable(pVal)!=SQLITE_OK ){ - return 0; - } - } - sqlite3VdbeMemNulTerminate(pVal); - }else{ - assert( (pVal->flags&MEM_Blob)==0 ); - sqlite3VdbeMemStringify(pVal, enc); - assert( 0==(1&SQLITE_PTR_TO_INT(pVal->z)) ); - } - assert(pVal->enc==(enc & ~SQLITE_UTF16_ALIGNED) || pVal->db==0 - || pVal->db->mallocFailed ); - if( pVal->enc==(enc & ~SQLITE_UTF16_ALIGNED) ){ - return pVal->z; - }else{ - return 0; - } + return valueToText(pVal, enc); } /* ** Create a new sqlite3_value object. */ SQLITE_PRIVATE sqlite3_value *sqlite3ValueNew(sqlite3 *db){ Mem *p = sqlite3DbMallocZero(db, sizeof(*p)); if( p ){ p->flags = MEM_Null; - p->type = SQLITE_NULL; p->db = db; } return p; } + +/* +** Context object passed by sqlite3Stat4ProbeSetValue() through to +** valueNew(). See comments above valueNew() for details. +*/ +struct ValueNewStat4Ctx { + Parse *pParse; + Index *pIdx; + UnpackedRecord **ppRec; + int iVal; +}; + +/* +** Allocate and return a pointer to a new sqlite3_value object. If +** the second argument to this function is NULL, the object is allocated +** by calling sqlite3ValueNew(). +** +** Otherwise, if the second argument is non-zero, then this function is +** being called indirectly by sqlite3Stat4ProbeSetValue(). If it has not +** already been allocated, allocate the UnpackedRecord structure that +** that function will return to its caller here. Then return a pointer to +** an sqlite3_value within the UnpackedRecord.a[] array. +*/ +static sqlite3_value *valueNew(sqlite3 *db, struct ValueNewStat4Ctx *p){ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + if( p ){ + UnpackedRecord *pRec = p->ppRec[0]; + + if( pRec==0 ){ + Index *pIdx = p->pIdx; /* Index being probed */ + int nByte; /* Bytes of space to allocate */ + int i; /* Counter variable */ + int nCol = pIdx->nColumn; /* Number of index columns including rowid */ + + nByte = sizeof(Mem) * nCol + ROUND8(sizeof(UnpackedRecord)); + pRec = (UnpackedRecord*)sqlite3DbMallocZero(db, nByte); + if( pRec ){ + pRec->pKeyInfo = sqlite3KeyInfoOfIndex(p->pParse, pIdx); + if( pRec->pKeyInfo ){ + assert( pRec->pKeyInfo->nField+pRec->pKeyInfo->nXField==nCol ); + assert( pRec->pKeyInfo->enc==ENC(db) ); + pRec->aMem = (Mem *)((u8*)pRec + ROUND8(sizeof(UnpackedRecord))); + for(i=0; i<nCol; i++){ + pRec->aMem[i].flags = MEM_Null; + pRec->aMem[i].db = db; + } + }else{ + sqlite3DbFree(db, pRec); + pRec = 0; + } + } + if( pRec==0 ) return 0; + p->ppRec[0] = pRec; + } + + pRec->nField = p->iVal+1; + return &pRec->aMem[p->iVal]; + } +#else + UNUSED_PARAMETER(p); +#endif /* defined(SQLITE_ENABLE_STAT3_OR_STAT4) */ + return sqlite3ValueNew(db); +} + +/* +** The expression object indicated by the second argument is guaranteed +** to be a scalar SQL function. 
If +** +** * all function arguments are SQL literals, +** * one of the SQLITE_FUNC_CONSTANT or _SLOCHNG function flags is set, and +** * the SQLITE_FUNC_NEEDCOLL function flag is not set, +** +** then this routine attempts to invoke the SQL function. Assuming no +** error occurs, output parameter (*ppVal) is set to point to a value +** object containing the result before returning SQLITE_OK. +** +** Affinity aff is applied to the result of the function before returning. +** If the result is a text value, the sqlite3_value object uses encoding +** enc. +** +** If the conditions above are not met, this function returns SQLITE_OK +** and sets (*ppVal) to NULL. Or, if an error occurs, (*ppVal) is set to +** NULL and an SQLite error code returned. +*/ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 +static int valueFromFunction( + sqlite3 *db, /* The database connection */ + Expr *p, /* The expression to evaluate */ + u8 enc, /* Encoding to use */ + u8 aff, /* Affinity to use */ + sqlite3_value **ppVal, /* Write the new value here */ + struct ValueNewStat4Ctx *pCtx /* Second argument for valueNew() */ +){ + sqlite3_context ctx; /* Context object for function invocation */ + sqlite3_value **apVal = 0; /* Function arguments */ + int nVal = 0; /* Size of apVal[] array */ + FuncDef *pFunc = 0; /* Function definition */ + sqlite3_value *pVal = 0; /* New value */ + int rc = SQLITE_OK; /* Return code */ + int nName; /* Size of function name in bytes */ + ExprList *pList = 0; /* Function arguments */ + int i; /* Iterator variable */ + + assert( pCtx!=0 ); + assert( (p->flags & EP_TokenOnly)==0 ); + pList = p->x.pList; + if( pList ) nVal = pList->nExpr; + nName = sqlite3Strlen30(p->u.zToken); + pFunc = sqlite3FindFunction(db, p->u.zToken, nName, nVal, enc, 0); + assert( pFunc ); + if( (pFunc->funcFlags & (SQLITE_FUNC_CONSTANT|SQLITE_FUNC_SLOCHNG))==0 + || (pFunc->funcFlags & SQLITE_FUNC_NEEDCOLL) + ){ + return SQLITE_OK; + } + + if( pList ){ + apVal = (sqlite3_value**)sqlite3DbMallocZero(db, sizeof(apVal[0]) * nVal); + if( apVal==0 ){ + rc = SQLITE_NOMEM; + goto value_from_function_out; + } + for(i=0; i<nVal; i++){ + rc = sqlite3ValueFromExpr(db, pList->a[i].pExpr, enc, aff, &apVal[i]); + if( apVal[i]==0 || rc!=SQLITE_OK ) goto value_from_function_out; + } + } + + pVal = valueNew(db, pCtx); + if( pVal==0 ){ + rc = SQLITE_NOMEM; + goto value_from_function_out; + } + + assert( pCtx->pParse->rc==SQLITE_OK ); + memset(&ctx, 0, sizeof(ctx)); + ctx.pOut = pVal; + ctx.pFunc = pFunc; + pFunc->xSFunc(&ctx, nVal, apVal); + if( ctx.isError ){ + rc = ctx.isError; + sqlite3ErrorMsg(pCtx->pParse, "%s", sqlite3_value_text(pVal)); + }else{ + sqlite3ValueApplyAffinity(pVal, aff, SQLITE_UTF8); + assert( rc==SQLITE_OK ); + rc = sqlite3VdbeChangeEncoding(pVal, enc); + if( rc==SQLITE_OK && sqlite3VdbeMemTooBig(pVal) ){ + rc = SQLITE_TOOBIG; + pCtx->pParse->nErr++; + } + } + pCtx->pParse->rc = rc; + + value_from_function_out: + if( rc!=SQLITE_OK ){ + pVal = 0; + } + if( apVal ){ + for(i=0; i<nVal; i++){ + sqlite3ValueFree(apVal[i]); + } + sqlite3DbFree(db, apVal); + } + + *ppVal = pVal; + return rc; +} +#else +# define valueFromFunction(a,b,c,d,e,f) SQLITE_OK +#endif /* defined(SQLITE_ENABLE_STAT3_OR_STAT4) */ + +/* +** Extract a value from the supplied expression in the manner described +** above sqlite3ValueFromExpr(). Allocate the sqlite3_value object +** using valueNew(). +** +** If pCtx is NULL and an error occurs after the sqlite3_value object +** has been allocated, it is freed before returning. 
Or, if pCtx is not +** NULL, it is assumed that the caller will free any allocated object +** in all cases. +*/ +static int valueFromExpr( + sqlite3 *db, /* The database connection */ + Expr *pExpr, /* The expression to evaluate */ + u8 enc, /* Encoding to use */ + u8 affinity, /* Affinity to use */ + sqlite3_value **ppVal, /* Write the new value here */ + struct ValueNewStat4Ctx *pCtx /* Second argument for valueNew() */ +){ + int op; + char *zVal = 0; + sqlite3_value *pVal = 0; + int negInt = 1; + const char *zNeg = ""; + int rc = SQLITE_OK; + + if( !pExpr ){ + *ppVal = 0; + return SQLITE_OK; + } + while( (op = pExpr->op)==TK_UPLUS ) pExpr = pExpr->pLeft; + if( NEVER(op==TK_REGISTER) ) op = pExpr->op2; + + /* Compressed expressions only appear when parsing the DEFAULT clause + ** on a table column definition, and hence only when pCtx==0. This + ** check ensures that an EP_TokenOnly expression is never passed down + ** into valueFromFunction(). */ + assert( (pExpr->flags & EP_TokenOnly)==0 || pCtx==0 ); + + if( op==TK_CAST ){ + u8 aff = sqlite3AffinityType(pExpr->u.zToken,0); + rc = valueFromExpr(db, pExpr->pLeft, enc, aff, ppVal, pCtx); + testcase( rc!=SQLITE_OK ); + if( *ppVal ){ + sqlite3VdbeMemCast(*ppVal, aff, SQLITE_UTF8); + sqlite3ValueApplyAffinity(*ppVal, affinity, SQLITE_UTF8); + } + return rc; + } + + /* Handle negative integers in a single step. This is needed in the + ** case when the value is -9223372036854775808. + */ + if( op==TK_UMINUS + && (pExpr->pLeft->op==TK_INTEGER || pExpr->pLeft->op==TK_FLOAT) ){ + pExpr = pExpr->pLeft; + op = pExpr->op; + negInt = -1; + zNeg = "-"; + } + + if( op==TK_STRING || op==TK_FLOAT || op==TK_INTEGER ){ + pVal = valueNew(db, pCtx); + if( pVal==0 ) goto no_mem; + if( ExprHasProperty(pExpr, EP_IntValue) ){ + sqlite3VdbeMemSetInt64(pVal, (i64)pExpr->u.iValue*negInt); + }else{ + zVal = sqlite3MPrintf(db, "%s%s", zNeg, pExpr->u.zToken); + if( zVal==0 ) goto no_mem; + sqlite3ValueSetStr(pVal, -1, zVal, SQLITE_UTF8, SQLITE_DYNAMIC); + } + if( (op==TK_INTEGER || op==TK_FLOAT ) && affinity==SQLITE_AFF_BLOB ){ + sqlite3ValueApplyAffinity(pVal, SQLITE_AFF_NUMERIC, SQLITE_UTF8); + }else{ + sqlite3ValueApplyAffinity(pVal, affinity, SQLITE_UTF8); + } + if( pVal->flags & (MEM_Int|MEM_Real) ) pVal->flags &= ~MEM_Str; + if( enc!=SQLITE_UTF8 ){ + rc = sqlite3VdbeChangeEncoding(pVal, enc); + } + }else if( op==TK_UMINUS ) { + /* This branch happens for multiple negative signs. 
Ex: -(-5) */ + if( SQLITE_OK==sqlite3ValueFromExpr(db,pExpr->pLeft,enc,affinity,&pVal) + && pVal!=0 + ){ + sqlite3VdbeMemNumerify(pVal); + if( pVal->flags & MEM_Real ){ + pVal->u.r = -pVal->u.r; + }else if( pVal->u.i==SMALLEST_INT64 ){ + pVal->u.r = -(double)SMALLEST_INT64; + MemSetTypeFlag(pVal, MEM_Real); + }else{ + pVal->u.i = -pVal->u.i; + } + sqlite3ValueApplyAffinity(pVal, affinity, enc); + } + }else if( op==TK_NULL ){ + pVal = valueNew(db, pCtx); + if( pVal==0 ) goto no_mem; + } +#ifndef SQLITE_OMIT_BLOB_LITERAL + else if( op==TK_BLOB ){ + int nVal; + assert( pExpr->u.zToken[0]=='x' || pExpr->u.zToken[0]=='X' ); + assert( pExpr->u.zToken[1]=='\'' ); + pVal = valueNew(db, pCtx); + if( !pVal ) goto no_mem; + zVal = &pExpr->u.zToken[2]; + nVal = sqlite3Strlen30(zVal)-1; + assert( zVal[nVal]=='\'' ); + sqlite3VdbeMemSetStr(pVal, sqlite3HexToBlob(db, zVal, nVal), nVal/2, + 0, SQLITE_DYNAMIC); + } +#endif + +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + else if( op==TK_FUNCTION && pCtx!=0 ){ + rc = valueFromFunction(db, pExpr, enc, affinity, &pVal, pCtx); + } +#endif + + *ppVal = pVal; + return rc; + +no_mem: + sqlite3OomFault(db); + sqlite3DbFree(db, zVal); + assert( *ppVal==0 ); +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + if( pCtx==0 ) sqlite3ValueFree(pVal); +#else + assert( pCtx==0 ); sqlite3ValueFree(pVal); +#endif + return SQLITE_NOMEM; +} /* ** Create a new sqlite3_value object, containing the value of pExpr. ** ** This only works for very simple expressions that consist of one constant @@ -48444,77 +67479,270 @@ Expr *pExpr, /* The expression to evaluate */ u8 enc, /* Encoding to use */ u8 affinity, /* Affinity to use */ sqlite3_value **ppVal /* Write the new value here */ ){ - int op; - char *zVal = 0; + return valueFromExpr(db, pExpr, enc, affinity, ppVal, 0); +} + +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 +/* +** The implementation of the sqlite_record() function. This function accepts +** a single argument of any type. The return value is a formatted database +** record (a blob) containing the argument value. +** +** This is used to convert the value stored in the 'sample' column of the +** sqlite_stat3 table to the record format SQLite uses internally. +*/ +static void recordFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const int file_format = 1; + u32 iSerial; /* Serial type */ + int nSerial; /* Bytes of space for iSerial as varint */ + u32 nVal; /* Bytes of space required for argv[0] */ + int nRet; + sqlite3 *db; + u8 *aRet; + + UNUSED_PARAMETER( argc ); + iSerial = sqlite3VdbeSerialType(argv[0], file_format, &nVal); + nSerial = sqlite3VarintLen(iSerial); + db = sqlite3_context_db_handle(context); + + nRet = 1 + nSerial + nVal; + aRet = sqlite3DbMallocRawNN(db, nRet); + if( aRet==0 ){ + sqlite3_result_error_nomem(context); + }else{ + aRet[0] = nSerial+1; + putVarint32(&aRet[1], iSerial); + sqlite3VdbeSerialPut(&aRet[1+nSerial], argv[0], iSerial); + sqlite3_result_blob(context, aRet, nRet, SQLITE_TRANSIENT); + sqlite3DbFree(db, aRet); + } +} + +/* +** Register built-in functions used to help read ANALYZE data. 
+*/ +SQLITE_PRIVATE void sqlite3AnalyzeFunctions(void){ + static SQLITE_WSD FuncDef aAnalyzeTableFuncs[] = { + FUNCTION(sqlite_record, 1, 0, 0, recordFunc), + }; + int i; + FuncDefHash *pHash = &GLOBAL(FuncDefHash, sqlite3GlobalFunctions); + FuncDef *aFunc = (FuncDef*)&GLOBAL(FuncDef, aAnalyzeTableFuncs); + for(i=0; i<ArraySize(aAnalyzeTableFuncs); i++){ + sqlite3FuncDefInsert(pHash, &aFunc[i]); + } +} + +/* +** Attempt to extract a value from pExpr and use it to construct *ppVal. +** +** If pAlloc is not NULL, then an UnpackedRecord object is created for +** pAlloc if one does not exist and the new value is added to the +** UnpackedRecord object. +** +** A value is extracted in the following cases: +** +** * (pExpr==0). In this case the value is assumed to be an SQL NULL, +** +** * The expression is a bound variable, and this is a reprepare, or +** +** * The expression is a literal value. +** +** On success, *ppVal is made to point to the extracted value. The caller +** is responsible for ensuring that the value is eventually freed. +*/ +static int stat4ValueFromExpr( + Parse *pParse, /* Parse context */ + Expr *pExpr, /* The expression to extract a value from */ + u8 affinity, /* Affinity to use */ + struct ValueNewStat4Ctx *pAlloc,/* How to allocate space. Or NULL */ + sqlite3_value **ppVal /* OUT: New value object (or NULL) */ +){ + int rc = SQLITE_OK; sqlite3_value *pVal = 0; + sqlite3 *db = pParse->db; + + /* Skip over any TK_COLLATE nodes */ + pExpr = sqlite3ExprSkipCollate(pExpr); if( !pExpr ){ - *ppVal = 0; - return SQLITE_OK; - } - op = pExpr->op; - if( op==TK_REGISTER ){ - op = pExpr->op2; /* This only happens with SQLITE_ENABLE_STAT2 */ - } - - if( op==TK_STRING || op==TK_FLOAT || op==TK_INTEGER ){ - pVal = sqlite3ValueNew(db); - if( pVal==0 ) goto no_mem; - if( ExprHasProperty(pExpr, EP_IntValue) ){ - sqlite3VdbeMemSetInt64(pVal, (i64)pExpr->u.iValue); - }else{ - zVal = sqlite3DbStrDup(db, pExpr->u.zToken); - if( zVal==0 ) goto no_mem; - sqlite3ValueSetStr(pVal, -1, zVal, SQLITE_UTF8, SQLITE_DYNAMIC); - if( op==TK_FLOAT ) pVal->type = SQLITE_FLOAT; - } - if( (op==TK_INTEGER || op==TK_FLOAT ) && affinity==SQLITE_AFF_NONE ){ - sqlite3ValueApplyAffinity(pVal, SQLITE_AFF_NUMERIC, SQLITE_UTF8); - }else{ - sqlite3ValueApplyAffinity(pVal, affinity, SQLITE_UTF8); - } - if( enc!=SQLITE_UTF8 ){ - sqlite3VdbeChangeEncoding(pVal, enc); - } - }else if( op==TK_UMINUS ) { - if( SQLITE_OK==sqlite3ValueFromExpr(db,pExpr->pLeft,enc,affinity,&pVal) ){ - pVal->u.i = -1 * pVal->u.i; - /* (double)-1 In case of SQLITE_OMIT_FLOATING_POINT... 
*/ - pVal->r = (double)-1 * pVal->r; - } - } -#ifndef SQLITE_OMIT_BLOB_LITERAL - else if( op==TK_BLOB ){ - int nVal; - assert( pExpr->u.zToken[0]=='x' || pExpr->u.zToken[0]=='X' ); - assert( pExpr->u.zToken[1]=='\'' ); - pVal = sqlite3ValueNew(db); - if( !pVal ) goto no_mem; - zVal = &pExpr->u.zToken[2]; - nVal = sqlite3Strlen30(zVal)-1; - assert( zVal[nVal]=='\'' ); - sqlite3VdbeMemSetStr(pVal, sqlite3HexToBlob(db, zVal, nVal), nVal/2, - 0, SQLITE_DYNAMIC); - } -#endif - - if( pVal ){ - sqlite3VdbeMemStoreType(pVal); - } + pVal = valueNew(db, pAlloc); + if( pVal ){ + sqlite3VdbeMemSetNull((Mem*)pVal); + } + }else if( pExpr->op==TK_VARIABLE + || NEVER(pExpr->op==TK_REGISTER && pExpr->op2==TK_VARIABLE) + ){ + Vdbe *v; + int iBindVar = pExpr->iColumn; + sqlite3VdbeSetVarmask(pParse->pVdbe, iBindVar); + if( (v = pParse->pReprepare)!=0 ){ + pVal = valueNew(db, pAlloc); + if( pVal ){ + rc = sqlite3VdbeMemCopy((Mem*)pVal, &v->aVar[iBindVar-1]); + if( rc==SQLITE_OK ){ + sqlite3ValueApplyAffinity(pVal, affinity, ENC(db)); + } + pVal->db = pParse->db; + } + } + }else{ + rc = valueFromExpr(db, pExpr, ENC(db), affinity, &pVal, pAlloc); + } + + assert( pVal==0 || pVal->db==db ); *ppVal = pVal; + return rc; +} + +/* +** This function is used to allocate and populate UnpackedRecord +** structures intended to be compared against sample index keys stored +** in the sqlite_stat4 table. +** +** A single call to this function attempts to populates field iVal (leftmost +** is 0 etc.) of the unpacked record with a value extracted from expression +** pExpr. Extraction of values is possible if: +** +** * (pExpr==0). In this case the value is assumed to be an SQL NULL, +** +** * The expression is a bound variable, and this is a reprepare, or +** +** * The sqlite3ValueFromExpr() function is able to extract a value +** from the expression (i.e. the expression is a literal value). +** +** If a value can be extracted, the affinity passed as the 5th argument +** is applied to it before it is copied into the UnpackedRecord. Output +** parameter *pbOk is set to true if a value is extracted, or false +** otherwise. +** +** When this function is called, *ppRec must either point to an object +** allocated by an earlier call to this function, or must be NULL. If it +** is NULL and a value can be successfully extracted, a new UnpackedRecord +** is allocated (and *ppRec set to point to it) before returning. +** +** Unless an error is encountered, SQLITE_OK is returned. It is not an +** error if a value cannot be extracted from pExpr. If an error does +** occur, an SQLite error code is returned. +*/ +SQLITE_PRIVATE int sqlite3Stat4ProbeSetValue( + Parse *pParse, /* Parse context */ + Index *pIdx, /* Index being probed */ + UnpackedRecord **ppRec, /* IN/OUT: Probe record */ + Expr *pExpr, /* The expression to extract a value from */ + u8 affinity, /* Affinity to use */ + int iVal, /* Array element to populate */ + int *pbOk /* OUT: True if value was extracted */ +){ + int rc; + sqlite3_value *pVal = 0; + struct ValueNewStat4Ctx alloc; + + alloc.pParse = pParse; + alloc.pIdx = pIdx; + alloc.ppRec = ppRec; + alloc.iVal = iVal; + + rc = stat4ValueFromExpr(pParse, pExpr, affinity, &alloc, &pVal); + assert( pVal==0 || pVal->db==pParse->db ); + *pbOk = (pVal!=0); + return rc; +} + +/* +** Attempt to extract a value from expression pExpr using the methods +** as described for sqlite3Stat4ProbeSetValue() above. +** +** If successful, set *ppVal to point to a new value object and return +** SQLITE_OK. 
If no value can be extracted, but no other error occurs +** (e.g. OOM), return SQLITE_OK and set *ppVal to NULL. Or, if an error +** does occur, return an SQLite error code. The final value of *ppVal +** is undefined in this case. +*/ +SQLITE_PRIVATE int sqlite3Stat4ValueFromExpr( + Parse *pParse, /* Parse context */ + Expr *pExpr, /* The expression to extract a value from */ + u8 affinity, /* Affinity to use */ + sqlite3_value **ppVal /* OUT: New value object (or NULL) */ +){ + return stat4ValueFromExpr(pParse, pExpr, affinity, 0, ppVal); +} + +/* +** Extract the iCol-th column from the nRec-byte record in pRec. Write +** the column value into *ppVal. If *ppVal is initially NULL then a new +** sqlite3_value object is allocated. +** +** If *ppVal is initially NULL then the caller is responsible for +** ensuring that the value written into *ppVal is eventually freed. +*/ +SQLITE_PRIVATE int sqlite3Stat4Column( + sqlite3 *db, /* Database handle */ + const void *pRec, /* Pointer to buffer containing record */ + int nRec, /* Size of buffer pRec in bytes */ + int iCol, /* Column to extract */ + sqlite3_value **ppVal /* OUT: Extracted value */ +){ + u32 t; /* a column type code */ + int nHdr; /* Size of the header in the record */ + int iHdr; /* Next unread header byte */ + int iField; /* Next unread data byte */ + int szField; /* Size of the current data field */ + int i; /* Column index */ + u8 *a = (u8*)pRec; /* Typecast byte array */ + Mem *pMem = *ppVal; /* Write result into this Mem object */ + + assert( iCol>0 ); + iHdr = getVarint32(a, nHdr); + if( nHdr>nRec || iHdr>=nHdr ) return SQLITE_CORRUPT_BKPT; + iField = nHdr; + for(i=0; i<=iCol; i++){ + iHdr += getVarint32(&a[iHdr], t); + testcase( iHdr==nHdr ); + testcase( iHdr==nHdr+1 ); + if( iHdr>nHdr ) return SQLITE_CORRUPT_BKPT; + szField = sqlite3VdbeSerialTypeLen(t); + iField += szField; + } + testcase( iField==nRec ); + testcase( iField==nRec+1 ); + if( iField>nRec ) return SQLITE_CORRUPT_BKPT; + if( pMem==0 ){ + pMem = *ppVal = sqlite3ValueNew(db); + if( pMem==0 ) return SQLITE_NOMEM; + } + sqlite3VdbeSerialGet(&a[iField-szField], t, pMem); + pMem->enc = ENC(db); return SQLITE_OK; - -no_mem: - db->mallocFailed = 1; - sqlite3DbFree(db, zVal); - sqlite3ValueFree(pVal); - *ppVal = 0; - return SQLITE_NOMEM; -} +} + +/* +** Unless it is NULL, the argument must be an UnpackedRecord object returned +** by an earlier call to sqlite3Stat4ProbeSetValue(). This call deletes +** the object. +*/ +SQLITE_PRIVATE void sqlite3Stat4ProbeFree(UnpackedRecord *pRec){ + if( pRec ){ + int i; + int nCol = pRec->pKeyInfo->nField+pRec->pKeyInfo->nXField; + Mem *aMem = pRec->aMem; + sqlite3 *db = aMem[0].db; + for(i=0; i<nCol; i++){ + sqlite3VdbeMemRelease(&aMem[i]); + } + sqlite3KeyInfoUnref(pRec->pKeyInfo); + sqlite3DbFree(db, pRec); + } +} +#endif /* ifdef SQLITE_ENABLE_STAT4 */ /* ** Change the string value of an sqlite3_value object */ SQLITE_PRIVATE void sqlite3ValueSetStr( @@ -48535,23 +67763,32 @@ sqlite3VdbeMemRelease((Mem *)v); sqlite3DbFree(((Mem*)v)->db, v); } /* -** Return the number of bytes in the sqlite3_value object assuming -** that it uses the encoding "enc" +** The sqlite3ValueBytes() routine returns the number of bytes in the +** sqlite3_value object assuming that it uses the encoding "enc". +** The valueBytes() routine is a helper function. */ +static SQLITE_NOINLINE int valueBytes(sqlite3_value *pVal, u8 enc){ + return valueToText(pVal, enc)!=0 ? 
pVal->n : 0; +} SQLITE_PRIVATE int sqlite3ValueBytes(sqlite3_value *pVal, u8 enc){ Mem *p = (Mem*)pVal; - if( (p->flags & MEM_Blob)!=0 || sqlite3ValueText(pVal, enc) ){ + assert( (p->flags & MEM_Null)==0 || (p->flags & (MEM_Str|MEM_Blob))==0 ); + if( (p->flags & MEM_Str)!=0 && pVal->enc==enc ){ + return p->n; + } + if( (p->flags & MEM_Blob)!=0 ){ if( p->flags & MEM_Zero ){ return p->n + p->u.nZero; }else{ return p->n; } } - return 0; + if( p->flags & MEM_Null ) return 0; + return valueBytes(pVal, enc); } /************** End of vdbemem.c *********************************************/ /************** Begin file vdbeaux.c *****************************************/ /* @@ -48564,31 +67801,20 @@ ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ************************************************************************* ** This file contains code used for creating, destroying, and populating -** a VDBE (or an "sqlite3_stmt" as it is known to the outside world.) Prior -** to version 2.8.7, all this code was combined into the vdbe.c source file. -** But that file was getting too big so this subroutines were split out. -*/ - - - -/* -** When debugging the code generator in a symbolic debugger, one can -** set the sqlite3VdbeAddopTrace to 1 and all opcodes will be printed -** as they are added to the instruction stream. -*/ -#ifdef SQLITE_DEBUG -SQLITE_PRIVATE int sqlite3VdbeAddopTrace = 0; -#endif - +** a VDBE (or an "sqlite3_stmt" as it is known to the outside world.) +*/ +/* #include "sqliteInt.h" */ +/* #include "vdbeInt.h" */ /* ** Create a new virtual database engine. */ -SQLITE_PRIVATE Vdbe *sqlite3VdbeCreate(sqlite3 *db){ +SQLITE_PRIVATE Vdbe *sqlite3VdbeCreate(Parse *pParse){ + sqlite3 *db = pParse->db; Vdbe *p; p = sqlite3DbMallocZero(db, sizeof(Vdbe) ); if( p==0 ) return 0; p->db = db; if( db->pVdbe ){ @@ -48596,20 +67822,36 @@ } p->pNext = db->pVdbe; p->pPrev = 0; db->pVdbe = p; p->magic = VDBE_MAGIC_INIT; + p->pParse = pParse; + assert( pParse->aLabel==0 ); + assert( pParse->nLabel==0 ); + assert( pParse->nOpAlloc==0 ); + assert( pParse->szOpAlloc==0 ); return p; } + +/* +** Change the error string stored in Vdbe.zErrMsg +*/ +SQLITE_PRIVATE void sqlite3VdbeError(Vdbe *p, const char *zFormat, ...){ + va_list ap; + sqlite3DbFree(p->db, p->zErrMsg); + va_start(ap, zFormat); + p->zErrMsg = sqlite3VMPrintf(p->db, zFormat, ap); + va_end(ap); +} /* ** Remember the SQL string for a prepared statement. */ SQLITE_PRIVATE void sqlite3VdbeSetSql(Vdbe *p, const char *z, int n, int isPrepareV2){ assert( isPrepareV2==1 || isPrepareV2==0 ); if( p==0 ) return; -#ifdef SQLITE_OMIT_TRACE +#if defined(SQLITE_OMIT_TRACE) && !defined(SQLITE_ENABLE_SQLLOG) if( !isPrepareV2 ) return; #endif assert( p->zSql==0 ); p->zSql = sqlite3DbStrNDup(p->db, z, n); p->isPrepareV2 = (u8)isPrepareV2; @@ -48616,13 +67858,13 @@ } /* ** Return the SQL associated with a prepared statement */ -SQLITE_API const char *sqlite3_sql(sqlite3_stmt *pStmt){ +SQLITE_API const char *SQLITE_STDCALL sqlite3_sql(sqlite3_stmt *pStmt){ Vdbe *p = (Vdbe *)pStmt; - return (p && p->isPrepareV2) ? p->zSql : 0; + return p ? p->zSql : 0; } /* ** Swap all content between two VDBE structures. 
*/ @@ -48642,38 +67884,59 @@ pA->zSql = pB->zSql; pB->zSql = zTmp; pB->isPrepareV2 = pA->isPrepareV2; } -#ifdef SQLITE_DEBUG -/* -** Turn tracing on or off -*/ -SQLITE_PRIVATE void sqlite3VdbeTrace(Vdbe *p, FILE *trace){ - p->trace = trace; -} -#endif - -/* -** Resize the Vdbe.aOp array so that it is at least one op larger than -** it was. +/* +** Resize the Vdbe.aOp array so that it is at least nOp elements larger +** than its current size. nOp is guaranteed to be less than or equal +** to 1024/sizeof(Op). ** ** If an out-of-memory error occurs while resizing the array, return -** SQLITE_NOMEM. In this case Vdbe.aOp and Vdbe.nOpAlloc remain +** SQLITE_NOMEM. In this case Vdbe.aOp and Parse.nOpAlloc remain ** unchanged (this is so that any opcodes already allocated can be ** correctly deallocated along with the rest of the Vdbe). */ -static int growOpArray(Vdbe *p){ +static int growOpArray(Vdbe *v, int nOp){ VdbeOp *pNew; + Parse *p = v->pParse; + + /* The SQLITE_TEST_REALLOC_STRESS compile-time option is designed to force + ** more frequent reallocs and hence provide more opportunities for + ** simulated OOM faults. SQLITE_TEST_REALLOC_STRESS is generally used + ** during testing only. With SQLITE_TEST_REALLOC_STRESS grow the op array + ** by the minimum* amount required until the size reaches 512. Normal + ** operation (without SQLITE_TEST_REALLOC_STRESS) is to double the current + ** size of the op array or add 1KB of space, whichever is smaller. */ +#ifdef SQLITE_TEST_REALLOC_STRESS + int nNew = (p->nOpAlloc>=512 ? p->nOpAlloc*2 : p->nOpAlloc+nOp); +#else int nNew = (p->nOpAlloc ? p->nOpAlloc*2 : (int)(1024/sizeof(Op))); - pNew = sqlite3DbRealloc(p->db, p->aOp, nNew*sizeof(Op)); + UNUSED_PARAMETER(nOp); +#endif + + assert( nOp<=(1024/sizeof(Op)) ); + assert( nNew>=(p->nOpAlloc+nOp) ); + pNew = sqlite3DbRealloc(p->db, v->aOp, nNew*sizeof(Op)); if( pNew ){ - p->nOpAlloc = sqlite3DbMallocSize(p->db, pNew)/sizeof(Op); - p->aOp = pNew; + p->szOpAlloc = sqlite3DbMallocSize(p->db, pNew); + p->nOpAlloc = p->szOpAlloc/sizeof(Op); + v->aOp = pNew; } return (pNew ? SQLITE_OK : SQLITE_NOMEM); } + +#ifdef SQLITE_DEBUG +/* This routine is just a convenient place to set a breakpoint that will +** fire after each opcode is inserted and displayed using +** "PRAGMA vdbe_addoptrace=on". +*/ +static void test_addop_breakpoint(void){ + static int n = 0; + n++; +} +#endif /* ** Add a new instruction to the list of instructions current in the ** VDBE. Return the address of the new instruction. ** @@ -48687,21 +67950,25 @@ ** ** Use the sqlite3VdbeResolveLabel() function to fix an address and ** the sqlite3VdbeChangeP4() function to change the value of the P4 ** operand. 
*/ +static SQLITE_NOINLINE int growOp3(Vdbe *p, int op, int p1, int p2, int p3){ + assert( p->pParse->nOpAlloc<=p->nOp ); + if( growOpArray(p, 1) ) return 1; + assert( p->pParse->nOpAlloc>p->nOp ); + return sqlite3VdbeAddOp3(p, op, p1, p2, p3); +} SQLITE_PRIVATE int sqlite3VdbeAddOp3(Vdbe *p, int op, int p1, int p2, int p3){ int i; VdbeOp *pOp; i = p->nOp; assert( p->magic==VDBE_MAGIC_INIT ); - assert( op>0 && op<0xff ); - if( p->nOpAlloc<=i ){ - if( growOpArray(p) ){ - return 1; - } + assert( op>=0 && op<0xff ); + if( p->pParse->nOpAlloc<=i ){ + return growOp3(p, op, p1, p2, p3); } p->nOp++; pOp = &p->aOp[i]; pOp->opcode = (u8)op; pOp->p5 = 0; @@ -48708,18 +67975,34 @@ pOp->p1 = p1; pOp->p2 = p2; pOp->p3 = p3; pOp->p4.p = 0; pOp->p4type = P4_NOTUSED; - p->expired = 0; -#ifdef SQLITE_DEBUG +#ifdef SQLITE_ENABLE_EXPLAIN_COMMENTS pOp->zComment = 0; - if( sqlite3VdbeAddopTrace ) sqlite3VdbePrintOp(0, i, &p->aOp[i]); +#endif +#ifdef SQLITE_DEBUG + if( p->db->flags & SQLITE_VdbeAddopTrace ){ + int jj, kk; + Parse *pParse = p->pParse; + for(jj=kk=0; jj<SQLITE_N_COLCACHE; jj++){ + struct yColCache *x = pParse->aColCache + jj; + if( x->iLevel>pParse->iCacheLevel || x->iReg==0 ) continue; + printf(" r[%d]={%d:%d}", x->iReg, x->iTable, x->iColumn); + kk++; + } + if( kk ) printf("\n"); + sqlite3VdbePrintOp(0, i, &p->aOp[i]); + test_addop_breakpoint(); + } #endif #ifdef VDBE_PROFILE pOp->cycles = 0; pOp->cnt = 0; +#endif +#ifdef SQLITE_VDBE_COVERAGE + pOp->iSrcLine = 0; #endif return i; } SQLITE_PRIVATE int sqlite3VdbeAddOp0(Vdbe *p, int op){ return sqlite3VdbeAddOp3(p, op, 0, 0, 0); @@ -48729,10 +68012,47 @@ } SQLITE_PRIVATE int sqlite3VdbeAddOp2(Vdbe *p, int op, int p1, int p2){ return sqlite3VdbeAddOp3(p, op, p1, p2, 0); } +/* Generate code for an unconditional jump to instruction iDest +*/ +SQLITE_PRIVATE int sqlite3VdbeGoto(Vdbe *p, int iDest){ + return sqlite3VdbeAddOp3(p, OP_Goto, 0, iDest, 0); +} + +/* Generate code to cause the string zStr to be loaded into +** register iDest +*/ +SQLITE_PRIVATE int sqlite3VdbeLoadString(Vdbe *p, int iDest, const char *zStr){ + return sqlite3VdbeAddOp4(p, OP_String8, 0, iDest, 0, zStr, 0); +} + +/* +** Generate code that initializes multiple registers to string or integer +** constants. The registers begin with iDest and increase consecutively. +** One register is initialized for each characgter in zTypes[]. For each +** "s" character in zTypes[], the register is a string if the argument is +** not NULL, or OP_Null if the value is a null pointer. For each "i" character +** in zTypes[], the register is initialized to an integer. +*/ +SQLITE_PRIVATE void sqlite3VdbeMultiLoad(Vdbe *p, int iDest, const char *zTypes, ...){ + va_list ap; + int i; + char c; + va_start(ap, zTypes); + for(i=0; (c = zTypes[i])!=0; i++){ + if( c=='s' ){ + const char *z = va_arg(ap, const char*); + sqlite3VdbeAddOp4(p, z==0 ? OP_Null : OP_String8, 0, iDest++, 0, z, 0); + }else{ + assert( c=='i' ); + sqlite3VdbeAddOp2(p, OP_Integer, va_arg(ap, int), iDest++); + } + } + va_end(ap); +} /* ** Add an opcode that includes the p4 value as a pointer. */ SQLITE_PRIVATE int sqlite3VdbeAddOp4( @@ -48746,10 +68066,42 @@ ){ int addr = sqlite3VdbeAddOp3(p, op, p1, p2, p3); sqlite3VdbeChangeP4(p, addr, zP4, p4type); return addr; } + +/* +** Add an opcode that includes the p4 value with a P4_INT64 or +** P4_REAL type. 
+*/ +SQLITE_PRIVATE int sqlite3VdbeAddOp4Dup8( + Vdbe *p, /* Add the opcode to this VM */ + int op, /* The new opcode */ + int p1, /* The P1 operand */ + int p2, /* The P2 operand */ + int p3, /* The P3 operand */ + const u8 *zP4, /* The P4 operand */ + int p4type /* P4 operand type */ +){ + char *p4copy = sqlite3DbMallocRawNN(sqlite3VdbeDb(p), 8); + if( p4copy ) memcpy(p4copy, zP4, 8); + return sqlite3VdbeAddOp4(p, op, p1, p2, p3, p4copy, p4type); +} + +/* +** Add an OP_ParseSchema opcode. This routine is broken out from +** sqlite3VdbeAddOp4() since it needs to also needs to mark all btrees +** as having been used. +** +** The zWhere string must have been obtained from sqlite3_malloc(). +** This routine will take ownership of the allocated memory. +*/ +SQLITE_PRIVATE void sqlite3VdbeAddParseSchemaOp(Vdbe *p, int iDb, char *zWhere){ + int j; + sqlite3VdbeAddOp4(p, OP_ParseSchema, iDb, 0, 0, zWhere, P4_DYNAMIC); + for(j=0; j<p->db->nDb; j++) sqlite3VdbeUsesBtree(p, j); +} /* ** Add an opcode that includes the p4 value as an integer. */ SQLITE_PRIVATE int sqlite3VdbeAddOp4Int( @@ -48762,10 +68114,25 @@ ){ int addr = sqlite3VdbeAddOp3(p, op, p1, p2, p3); sqlite3VdbeChangeP4(p, addr, SQLITE_INT_TO_PTR(p4), P4_INT32); return addr; } + +/* Insert the end of a co-routine +*/ +SQLITE_PRIVATE void sqlite3VdbeEndCoroutine(Vdbe *v, int regYield){ + sqlite3VdbeAddOp1(v, OP_EndCoroutine, regYield); + + /* Clear the temporary register cache, thereby ensuring that each + ** co-routine has its own independent set of registers, because co-routines + ** might expect their registers to be preserved across an OP_Yield, and + ** that could cause problems if two or more co-routines are using the same + ** temporary register. + */ + v->pParse->nTempReg = 0; + v->pParse->nRangeReg = 0; +} /* ** Create a new symbolic label for an instruction that has yet to be ** coded. The symbolic label is really just a negative number. The ** label can be used as the P2 value of an operation. Later, when @@ -48777,38 +68144,39 @@ ** always negative and P2 values are suppose to be non-negative. ** Hence, a negative P2 value is a label that has yet to be resolved. ** ** Zero is returned if a malloc() fails. */ -SQLITE_PRIVATE int sqlite3VdbeMakeLabel(Vdbe *p){ - int i; - i = p->nLabel++; - assert( p->magic==VDBE_MAGIC_INIT ); - if( i>=p->nLabelAlloc ){ - int n = p->nLabelAlloc*2 + 5; - p->aLabel = sqlite3DbReallocOrFree(p->db, p->aLabel, - n*sizeof(p->aLabel[0])); - p->nLabelAlloc = sqlite3DbMallocSize(p->db, p->aLabel)/sizeof(p->aLabel[0]); +SQLITE_PRIVATE int sqlite3VdbeMakeLabel(Vdbe *v){ + Parse *p = v->pParse; + int i = p->nLabel++; + assert( v->magic==VDBE_MAGIC_INIT ); + if( (i & (i-1))==0 ){ + p->aLabel = sqlite3DbReallocOrFree(p->db, p->aLabel, + (i*2+1)*sizeof(p->aLabel[0])); } if( p->aLabel ){ p->aLabel[i] = -1; } - return -1-i; + return ADDR(i); } /* ** Resolve label "x" to be the address of the next instruction to ** be inserted. The parameter "x" must have been obtained from ** a prior call to sqlite3VdbeMakeLabel(). */ -SQLITE_PRIVATE void sqlite3VdbeResolveLabel(Vdbe *p, int x){ - int j = -1-x; - assert( p->magic==VDBE_MAGIC_INIT ); - assert( j>=0 && j<p->nLabel ); +SQLITE_PRIVATE void sqlite3VdbeResolveLabel(Vdbe *v, int x){ + Parse *p = v->pParse; + int j = ADDR(x); + assert( v->magic==VDBE_MAGIC_INIT ); + assert( j<p->nLabel ); + assert( j>=0 ); if( p->aLabel ){ - p->aLabel[j] = p->nOp; + p->aLabel[j] = v->nOp; } + p->iFixedOp = v->nOp - 1; } /* ** Mark the VDBE as one that can only be run one time. 
*/ @@ -48896,10 +68264,11 @@ ** * OP_HaltIfNull with P1=SQLITE_CONSTRAINT and P2=OE_Abort. ** * OP_Destroy ** * OP_VUpdate ** * OP_VRename ** * OP_FkCounter with P2==0 (immediate foreign key constraint) +** * OP_CreateTable and OP_InitCoroutine (for CREATE TABLE AS SELECT ...) ** ** Then check that the value of Parse.mayAbort is true if an ** ABORT may be thrown, or false otherwise. Return true if it does ** match, or false otherwise. This function is intended to be used as ** part of an assert statement in the compiler. Similar to: @@ -48906,87 +68275,136 @@ ** ** assert( sqlite3VdbeAssertMayAbort(pParse->pVdbe, pParse->mayAbort) ); */ SQLITE_PRIVATE int sqlite3VdbeAssertMayAbort(Vdbe *v, int mayAbort){ int hasAbort = 0; + int hasFkCounter = 0; + int hasCreateTable = 0; + int hasInitCoroutine = 0; Op *pOp; VdbeOpIter sIter; memset(&sIter, 0, sizeof(sIter)); sIter.v = v; while( (pOp = opIterNext(&sIter))!=0 ){ int opcode = pOp->opcode; if( opcode==OP_Destroy || opcode==OP_VUpdate || opcode==OP_VRename -#ifndef SQLITE_OMIT_FOREIGN_KEY - || (opcode==OP_FkCounter && pOp->p1==0 && pOp->p2==1) -#endif || ((opcode==OP_Halt || opcode==OP_HaltIfNull) - && (pOp->p1==SQLITE_CONSTRAINT && pOp->p2==OE_Abort)) + && ((pOp->p1&0xff)==SQLITE_CONSTRAINT && pOp->p2==OE_Abort)) ){ hasAbort = 1; break; } + if( opcode==OP_CreateTable ) hasCreateTable = 1; + if( opcode==OP_InitCoroutine ) hasInitCoroutine = 1; +#ifndef SQLITE_OMIT_FOREIGN_KEY + if( opcode==OP_FkCounter && pOp->p1==0 && pOp->p2==1 ){ + hasFkCounter = 1; + } +#endif } sqlite3DbFree(v->db, sIter.apSub); - /* Return true if hasAbort==mayAbort. Or if a malloc failure occured. + /* Return true if hasAbort==mayAbort. Or if a malloc failure occurred. ** If malloc failed, then the while() loop above may not have iterated ** through all opcodes and hasAbort may be set incorrectly. Return ** true for this case to prevent the assert() in the callers frame ** from failing. */ - return ( v->db->mallocFailed || hasAbort==mayAbort ); + return ( v->db->mallocFailed || hasAbort==mayAbort || hasFkCounter + || (hasCreateTable && hasInitCoroutine) ); } #endif /* SQLITE_DEBUG - the sqlite3AssertMayAbort() function */ /* -** Loop through the program looking for P2 values that are negative -** on jump instructions. Each such value is a label. Resolve the -** label by setting the P2 value to its correct non-zero value. -** -** This routine is called once after all opcodes have been inserted. -** -** Variable *pMaxFuncArgs is set to the maximum value of any P2 argument -** to an OP_Function, OP_AggStep or OP_VFilter opcode. This is used by -** sqlite3VdbeMakeReady() to size the Vdbe.apArg[] array. -** -** The Op.opflags field is set on all opcodes. +** This routine is called after all opcodes have been inserted. It loops +** through all the opcodes and fixes up some details. +** +** (1) For each jump instruction with a negative P2 value (a label) +** resolve the P2 value to an actual address. +** +** (2) Compute the maximum number of arguments used by any SQL function +** and store that value in *pMaxFuncArgs. +** +** (3) Update the Vdbe.readOnly and Vdbe.bIsReader flags to accurately +** indicate what the prepared statement actually does. +** +** (4) Initialize the p4.xAdvance pointer on opcodes that use it. +** +** (5) Reclaim the memory allocated for storing labels. 
*/ static void resolveP2Values(Vdbe *p, int *pMaxFuncArgs){ int i; int nMaxArgs = *pMaxFuncArgs; Op *pOp; - int *aLabel = p->aLabel; + Parse *pParse = p->pParse; + int *aLabel = pParse->aLabel; p->readOnly = 1; + p->bIsReader = 0; for(pOp=p->aOp, i=p->nOp-1; i>=0; i--, pOp++){ u8 opcode = pOp->opcode; - pOp->opflags = sqlite3OpcodeProperty[opcode]; - if( opcode==OP_Function || opcode==OP_AggStep ){ - if( pOp->p5>nMaxArgs ) nMaxArgs = pOp->p5; - }else if( opcode==OP_Transaction && pOp->p2!=0 ){ - p->readOnly = 0; + /* NOTE: Be sure to update mkopcodeh.tcl when adding or removing + ** cases from this switch! */ + switch( opcode ){ + case OP_Transaction: { + if( pOp->p2!=0 ) p->readOnly = 0; + /* fall thru */ + } + case OP_AutoCommit: + case OP_Savepoint: { + p->bIsReader = 1; + break; + } +#ifndef SQLITE_OMIT_WAL + case OP_Checkpoint: +#endif + case OP_Vacuum: + case OP_JournalMode: { + p->readOnly = 0; + p->bIsReader = 1; + break; + } #ifndef SQLITE_OMIT_VIRTUALTABLE - }else if( opcode==OP_VUpdate ){ - if( pOp->p2>nMaxArgs ) nMaxArgs = pOp->p2; - }else if( opcode==OP_VFilter ){ - int n; - assert( p->nOp - i >= 3 ); - assert( pOp[-1].opcode==OP_Integer ); - n = pOp[-1].p1; - if( n>nMaxArgs ) nMaxArgs = n; + case OP_VUpdate: { + if( pOp->p2>nMaxArgs ) nMaxArgs = pOp->p2; + break; + } + case OP_VFilter: { + int n; + assert( p->nOp - i >= 3 ); + assert( pOp[-1].opcode==OP_Integer ); + n = pOp[-1].p1; + if( n>nMaxArgs ) nMaxArgs = n; + break; + } #endif + case OP_Next: + case OP_NextIfOpen: + case OP_SorterNext: { + pOp->p4.xAdvance = sqlite3BtreeNext; + pOp->p4type = P4_ADVANCE; + break; + } + case OP_Prev: + case OP_PrevIfOpen: { + pOp->p4.xAdvance = sqlite3BtreePrevious; + pOp->p4type = P4_ADVANCE; + break; + } } + pOp->opflags = sqlite3OpcodeProperty[opcode]; if( (pOp->opflags & OPFLG_JUMP)!=0 && pOp->p2<0 ){ - assert( -1-pOp->p2<p->nLabel ); - pOp->p2 = aLabel[-1-pOp->p2]; + assert( ADDR(pOp->p2)<pParse->nLabel ); + pOp->p2 = aLabel[ADDR(pOp->p2)]; } } - sqlite3DbFree(p->db, p->aLabel); - p->aLabel = 0; - + sqlite3DbFree(p->db, pParse->aLabel); + pParse->aLabel = 0; + pParse->nLabel = 0; *pMaxFuncArgs = nMaxArgs; + assert( p->bIsReader!=0 || DbMaskAllZero(p->btreeMask) ); } /* ** Return the address of the next instruction to be inserted. */ @@ -48993,10 +68411,24 @@ SQLITE_PRIVATE int sqlite3VdbeCurrentAddr(Vdbe *p){ assert( p->magic==VDBE_MAGIC_INIT ); return p->nOp; } +/* +** Verify that at least N opcode slots are available in p without +** having to malloc for more space (except when compiled using +** SQLITE_TEST_REALLOC_STRESS). This interface is used during testing +** to verify that certain calls to sqlite3VdbeAddOpList() can never +** fail due to a OOM fault and hence that the return value from +** sqlite3VdbeAddOpList() will always be non-NULL. +*/ +#if defined(SQLITE_DEBUG) && !defined(SQLITE_TEST_REALLOC_STRESS) +SQLITE_PRIVATE void sqlite3VdbeVerifyNoMallocRequired(Vdbe *p, int N){ + assert( p->nOp + N <= p->pParse->nOpAlloc ); +} +#endif + /* ** This function returns a pointer to the array of opcodes associated with ** the Vdbe passed as the first argument. It is the callers responsibility ** to arrange for the returned array to be eventually freed using the ** vdbeFreeOpArray() function. 
@@ -49009,163 +68441,187 @@ SQLITE_PRIVATE VdbeOp *sqlite3VdbeTakeOpArray(Vdbe *p, int *pnOp, int *pnMaxArg){ VdbeOp *aOp = p->aOp; assert( aOp && !p->db->mallocFailed ); /* Check that sqlite3VdbeUsesBtree() was not called on this VM */ - assert( p->aMutex.nMutex==0 ); + assert( DbMaskAllZero(p->btreeMask) ); resolveP2Values(p, pnMaxArg); *pnOp = p->nOp; p->aOp = 0; return aOp; } /* -** Add a whole list of operations to the operation stack. Return the -** address of the first operation added. +** Add a whole list of operations to the operation stack. Return a +** pointer to the first operation inserted. +** +** Non-zero P2 arguments to jump instructions are automatically adjusted +** so that the jump target is relative to the first operation inserted. */ -SQLITE_PRIVATE int sqlite3VdbeAddOpList(Vdbe *p, int nOp, VdbeOpList const *aOp){ - int addr; +SQLITE_PRIVATE VdbeOp *sqlite3VdbeAddOpList( + Vdbe *p, /* Add opcodes to the prepared statement */ + int nOp, /* Number of opcodes to add */ + VdbeOpList const *aOp, /* The opcodes to be added */ + int iLineno /* Source-file line number of first opcode */ +){ + int i; + VdbeOp *pOut, *pFirst; + assert( nOp>0 ); assert( p->magic==VDBE_MAGIC_INIT ); - if( p->nOp + nOp > p->nOpAlloc && growOpArray(p) ){ + if( p->nOp + nOp > p->pParse->nOpAlloc && growOpArray(p, nOp) ){ return 0; } - addr = p->nOp; - if( ALWAYS(nOp>0) ){ - int i; - VdbeOpList const *pIn = aOp; - for(i=0; i<nOp; i++, pIn++){ - int p2 = pIn->p2; - VdbeOp *pOut = &p->aOp[i+addr]; - pOut->opcode = pIn->opcode; - pOut->p1 = pIn->p1; - if( p2<0 && (sqlite3OpcodeProperty[pOut->opcode] & OPFLG_JUMP)!=0 ){ - pOut->p2 = addr + ADDR(p2); - }else{ - pOut->p2 = p2; - } - pOut->p3 = pIn->p3; - pOut->p4type = P4_NOTUSED; - pOut->p4.p = 0; - pOut->p5 = 0; + pFirst = pOut = &p->aOp[p->nOp]; + for(i=0; i<nOp; i++, aOp++, pOut++){ + pOut->opcode = aOp->opcode; + pOut->p1 = aOp->p1; + pOut->p2 = aOp->p2; + assert( aOp->p2>=0 ); + if( (sqlite3OpcodeProperty[aOp->opcode] & OPFLG_JUMP)!=0 && aOp->p2>0 ){ + pOut->p2 += p->nOp; + } + pOut->p3 = aOp->p3; + pOut->p4type = P4_NOTUSED; + pOut->p4.p = 0; + pOut->p5 = 0; +#ifdef SQLITE_ENABLE_EXPLAIN_COMMENTS + pOut->zComment = 0; +#endif +#ifdef SQLITE_VDBE_COVERAGE + pOut->iSrcLine = iLineno+i; +#else + (void)iLineno; +#endif #ifdef SQLITE_DEBUG - pOut->zComment = 0; - if( sqlite3VdbeAddopTrace ){ - sqlite3VdbePrintOp(0, i+addr, &p->aOp[i+addr]); - } -#endif - } - p->nOp += nOp; - } - return addr; -} - -/* -** Change the value of the P1 operand for a specific instruction. -** This routine is useful when a large program is loaded from a -** static array using sqlite3VdbeAddOpList but we want to make a -** few minor changes to the program. -*/ -SQLITE_PRIVATE void sqlite3VdbeChangeP1(Vdbe *p, int addr, int val){ - assert( p!=0 ); - assert( addr>=0 ); - if( p->nOp>addr ){ - p->aOp[addr].p1 = val; - } -} - -/* -** Change the value of the P2 operand for a specific instruction. -** This routine is useful for setting a jump destination. -*/ -SQLITE_PRIVATE void sqlite3VdbeChangeP2(Vdbe *p, int addr, int val){ - assert( p!=0 ); - assert( addr>=0 ); - if( p->nOp>addr ){ - p->aOp[addr].p2 = val; - } -} - -/* -** Change the value of the P3 operand for a specific instruction. -*/ -SQLITE_PRIVATE void sqlite3VdbeChangeP3(Vdbe *p, int addr, int val){ - assert( p!=0 ); - assert( addr>=0 ); - if( p->nOp>addr ){ - p->aOp[addr].p3 = val; - } -} - -/* -** Change the value of the P5 operand for the most recently -** added operation. 
-*/ -SQLITE_PRIVATE void sqlite3VdbeChangeP5(Vdbe *p, u8 val){ - assert( p!=0 ); - if( p->aOp ){ - assert( p->nOp>0 ); - p->aOp[p->nOp-1].p5 = val; - } + if( p->db->flags & SQLITE_VdbeAddopTrace ){ + sqlite3VdbePrintOp(0, i+p->nOp, &p->aOp[i+p->nOp]); + } +#endif + } + p->nOp += nOp; + return pFirst; +} + +#if defined(SQLITE_ENABLE_STMT_SCANSTATUS) +/* +** Add an entry to the array of counters managed by sqlite3_stmt_scanstatus(). +*/ +SQLITE_PRIVATE void sqlite3VdbeScanStatus( + Vdbe *p, /* VM to add scanstatus() to */ + int addrExplain, /* Address of OP_Explain (or 0) */ + int addrLoop, /* Address of loop counter */ + int addrVisit, /* Address of rows visited counter */ + LogEst nEst, /* Estimated number of output rows */ + const char *zName /* Name of table or index being scanned */ +){ + int nByte = (p->nScan+1) * sizeof(ScanStatus); + ScanStatus *aNew; + aNew = (ScanStatus*)sqlite3DbRealloc(p->db, p->aScan, nByte); + if( aNew ){ + ScanStatus *pNew = &aNew[p->nScan++]; + pNew->addrExplain = addrExplain; + pNew->addrLoop = addrLoop; + pNew->addrVisit = addrVisit; + pNew->nEst = nEst; + pNew->zName = sqlite3DbStrDup(p->db, zName); + p->aScan = aNew; + } +} +#endif + + +/* +** Change the value of the opcode, or P1, P2, P3, or P5 operands +** for a specific instruction. +*/ +SQLITE_PRIVATE void sqlite3VdbeChangeOpcode(Vdbe *p, u32 addr, u8 iNewOpcode){ + sqlite3VdbeGetOp(p,addr)->opcode = iNewOpcode; +} +SQLITE_PRIVATE void sqlite3VdbeChangeP1(Vdbe *p, u32 addr, int val){ + sqlite3VdbeGetOp(p,addr)->p1 = val; +} +SQLITE_PRIVATE void sqlite3VdbeChangeP2(Vdbe *p, u32 addr, int val){ + sqlite3VdbeGetOp(p,addr)->p2 = val; +} +SQLITE_PRIVATE void sqlite3VdbeChangeP3(Vdbe *p, u32 addr, int val){ + sqlite3VdbeGetOp(p,addr)->p3 = val; +} +SQLITE_PRIVATE void sqlite3VdbeChangeP5(Vdbe *p, u8 p5){ + if( !p->db->mallocFailed ) p->aOp[p->nOp-1].p5 = p5; } /* ** Change the P2 operand of instruction addr so that it points to ** the address of the next instruction to be coded. */ SQLITE_PRIVATE void sqlite3VdbeJumpHere(Vdbe *p, int addr){ + p->pParse->iFixedOp = p->nOp - 1; sqlite3VdbeChangeP2(p, addr, p->nOp); } /* ** If the input FuncDef structure is ephemeral, then free it. If ** the FuncDef is not ephermal, then do nothing. */ static void freeEphemeralFunction(sqlite3 *db, FuncDef *pDef){ - if( ALWAYS(pDef) && (pDef->flags & SQLITE_FUNC_EPHEM)!=0 ){ + if( ALWAYS(pDef) && (pDef->funcFlags & SQLITE_FUNC_EPHEM)!=0 ){ sqlite3DbFree(db, pDef); } } + +static void vdbeFreeOpArray(sqlite3 *, Op *, int); /* ** Delete a P4 value if necessary. 
*/ static void freeP4(sqlite3 *db, int p4type, void *p4){ if( p4 ){ + assert( db ); switch( p4type ){ + case P4_FUNCCTX: { + freeEphemeralFunction(db, ((sqlite3_context*)p4)->pFunc); + /* Fall through into the next case */ + } case P4_REAL: case P4_INT64: - case P4_MPRINTF: case P4_DYNAMIC: - case P4_KEYINFO: - case P4_INTARRAY: - case P4_KEYINFO_HANDOFF: { + case P4_INTARRAY: { sqlite3DbFree(db, p4); break; } - case P4_VDBEFUNC: { - VdbeFunc *pVdbeFunc = (VdbeFunc *)p4; - freeEphemeralFunction(db, pVdbeFunc->pFunc); - sqlite3VdbeDeleteAuxData(pVdbeFunc, 0); - sqlite3DbFree(db, pVdbeFunc); + case P4_KEYINFO: { + if( db->pnBytesFreed==0 ) sqlite3KeyInfoUnref((KeyInfo*)p4); + break; + } +#ifdef SQLITE_ENABLE_CURSOR_HINTS + case P4_EXPR: { + sqlite3ExprDelete(db, (Expr*)p4); + break; + } +#endif + case P4_MPRINTF: { + if( db->pnBytesFreed==0 ) sqlite3_free(p4); break; } case P4_FUNCDEF: { freeEphemeralFunction(db, (FuncDef*)p4); break; } case P4_MEM: { - sqlite3ValueFree((sqlite3_value*)p4); + if( db->pnBytesFreed==0 ){ + sqlite3ValueFree((sqlite3_value*)p4); + }else{ + Mem *p = (Mem*)p4; + if( p->szMalloc ) sqlite3DbFree(db, p->zMalloc); + sqlite3DbFree(db, p); + } break; } case P4_VTAB : { - sqlite3VtabUnlock((VTable *)p4); - break; - } - case P4_SUBPROGRAM : { - sqlite3VdbeProgramDelete(db, (SubProgram *)p4, 1); + if( db->pnBytesFreed==0 ) sqlite3VtabUnlock((VTable *)p4); break; } } } } @@ -49177,62 +68633,53 @@ */ static void vdbeFreeOpArray(sqlite3 *db, Op *aOp, int nOp){ if( aOp ){ Op *pOp; for(pOp=aOp; pOp<&aOp[nOp]; pOp++){ - freeP4(db, pOp->p4type, pOp->p4.p); -#ifdef SQLITE_DEBUG + if( pOp->p4type ) freeP4(db, pOp->p4type, pOp->p4.p); +#ifdef SQLITE_ENABLE_EXPLAIN_COMMENTS sqlite3DbFree(db, pOp->zComment); #endif } } sqlite3DbFree(db, aOp); } /* -** Decrement the ref-count on the SubProgram structure passed as the -** second argument. If the ref-count reaches zero, free the structure. -** -** The array of VDBE opcodes stored as SubProgram.aOp is freed if -** either the ref-count reaches zero or parameter freeop is non-zero. -** -** Since the array of opcodes pointed to by SubProgram.aOp may directly -** or indirectly contain a reference to the SubProgram structure itself. -** By passing a non-zero freeop parameter, the caller may ensure that all -** SubProgram structures and their aOp arrays are freed, even when there -** are such circular references. -*/ -SQLITE_PRIVATE void sqlite3VdbeProgramDelete(sqlite3 *db, SubProgram *p, int freeop){ - if( p ){ - assert( p->nRef>0 ); - if( freeop || p->nRef==1 ){ - Op *aOp = p->aOp; - p->aOp = 0; - vdbeFreeOpArray(db, aOp, p->nOp); - p->nOp = 0; - } - p->nRef--; - if( p->nRef==0 ){ - sqlite3DbFree(db, p); - } - } -} - - -/* -** Change N opcodes starting at addr to No-ops. -*/ -SQLITE_PRIVATE void sqlite3VdbeChangeToNoop(Vdbe *p, int addr, int N){ - if( p->aOp ){ - VdbeOp *pOp = &p->aOp[addr]; - sqlite3 *db = p->db; - while( N-- ){ - freeP4(db, pOp->p4type, pOp->p4.p); - memset(pOp, 0, sizeof(pOp[0])); - pOp->opcode = OP_Noop; - pOp++; - } +** Link the SubProgram object passed as the second argument into the linked +** list at Vdbe.pSubProgram. This list is used to delete all sub-program +** objects when the VM is no longer required. 
+*/ +SQLITE_PRIVATE void sqlite3VdbeLinkSubProgram(Vdbe *pVdbe, SubProgram *p){ + p->pNext = pVdbe->pProgram; + pVdbe->pProgram = p; +} + +/* +** Change the opcode at addr into OP_Noop +*/ +SQLITE_PRIVATE int sqlite3VdbeChangeToNoop(Vdbe *p, int addr){ + VdbeOp *pOp; + if( p->db->mallocFailed ) return 0; + assert( addr>=0 && addr<p->nOp ); + pOp = &p->aOp[addr]; + freeP4(p->db, pOp->p4type, pOp->p4.p); + pOp->p4type = P4_NOTUSED; + pOp->p4.z = 0; + pOp->opcode = OP_Noop; + return 1; +} + +/* +** If the last opcode is "op" and it is not a jump destination, +** then remove it. Return true if and only if an opcode was removed. +*/ +SQLITE_PRIVATE int sqlite3VdbeDeletePriorOpcode(Vdbe *p, u8 op){ + if( (p->nOp-1)>(p->pParse->iFixedOp) && p->aOp[p->nOp-1].opcode==op ){ + return sqlite3VdbeChangeToNoop(p, p->nOp-1); + }else{ + return 0; } } /* ** Change the value of the P4 operand for a specific instruction. @@ -49242,253 +68689,420 @@ ** ** If n>=0 then the P4 operand is dynamic, meaning that a copy of ** the string is made into memory obtained from sqlite3_malloc(). ** A value of n==0 means copy bytes of zP4 up to and including the ** first null byte. If n>0 then copy n+1 bytes of zP4. -** -** If n==P4_KEYINFO it means that zP4 is a pointer to a KeyInfo structure. -** A copy is made of the KeyInfo structure into memory obtained from -** sqlite3_malloc, to be freed when the Vdbe is finalized. -** n==P4_KEYINFO_HANDOFF indicates that zP4 points to a KeyInfo structure -** stored in memory that the caller has obtained from sqlite3_malloc. The -** caller should not free the allocation, it will be freed when the Vdbe is -** finalized. ** ** Other values of n (P4_STATIC, P4_COLLSEQ etc.) indicate that zP4 points ** to a string or structure that is guaranteed to exist for the lifetime of ** the Vdbe. In these cases we can just copy the pointer. ** ** If addr<0 then change P4 on the most recently inserted instruction. */ +static void SQLITE_NOINLINE vdbeChangeP4Full( + Vdbe *p, + Op *pOp, + const char *zP4, + int n +){ + if( pOp->p4type ){ + freeP4(p->db, pOp->p4type, pOp->p4.p); + pOp->p4type = 0; + pOp->p4.p = 0; + } + if( n<0 ){ + sqlite3VdbeChangeP4(p, (int)(pOp - p->aOp), zP4, n); + }else{ + if( n==0 ) n = sqlite3Strlen30(zP4); + pOp->p4.z = sqlite3DbStrNDup(p->db, zP4, n); + pOp->p4type = P4_DYNAMIC; + } +} SQLITE_PRIVATE void sqlite3VdbeChangeP4(Vdbe *p, int addr, const char *zP4, int n){ Op *pOp; sqlite3 *db; assert( p!=0 ); db = p->db; assert( p->magic==VDBE_MAGIC_INIT ); - if( p->aOp==0 || db->mallocFailed ){ - if ( n!=P4_KEYINFO && n!=P4_VTAB ) { - freeP4(db, n, (void*)*(char**)&zP4); - } + assert( p->aOp!=0 || db->mallocFailed ); + if( db->mallocFailed ){ + if( n!=P4_VTAB ) freeP4(db, n, (void*)*(char**)&zP4); return; } assert( p->nOp>0 ); assert( addr<p->nOp ); if( addr<0 ){ addr = p->nOp - 1; } pOp = &p->aOp[addr]; - freeP4(db, pOp->p4type, pOp->p4.p); - pOp->p4.p = 0; + if( n>=0 || pOp->p4type ){ + vdbeChangeP4Full(p, pOp, zP4, n); + return; + } if( n==P4_INT32 ){ /* Note: this cast is safe, because the origin data point was an int ** that was cast to a (const char *). 
*/ pOp->p4.i = SQLITE_PTR_TO_INT(zP4); pOp->p4type = P4_INT32; - }else if( zP4==0 ){ - pOp->p4.p = 0; - pOp->p4type = P4_NOTUSED; - }else if( n==P4_KEYINFO ){ - KeyInfo *pKeyInfo; - int nField, nByte; - - nField = ((KeyInfo*)zP4)->nField; - nByte = sizeof(*pKeyInfo) + (nField-1)*sizeof(pKeyInfo->aColl[0]) + nField; - pKeyInfo = sqlite3Malloc( nByte ); - pOp->p4.pKeyInfo = pKeyInfo; - if( pKeyInfo ){ - u8 *aSortOrder; - memcpy((char*)pKeyInfo, zP4, nByte - nField); - aSortOrder = pKeyInfo->aSortOrder; - if( aSortOrder ){ - pKeyInfo->aSortOrder = (unsigned char*)&pKeyInfo->aColl[nField]; - memcpy(pKeyInfo->aSortOrder, aSortOrder, nField); - } - pOp->p4type = P4_KEYINFO; - }else{ - p->db->mallocFailed = 1; - pOp->p4type = P4_NOTUSED; - } - }else if( n==P4_KEYINFO_HANDOFF ){ - pOp->p4.p = (void*)zP4; - pOp->p4type = P4_KEYINFO; - }else if( n==P4_VTAB ){ - pOp->p4.p = (void*)zP4; - pOp->p4type = P4_VTAB; - sqlite3VtabLock((VTable *)zP4); - assert( ((VTable *)zP4)->db==p->db ); - }else if( n<0 ){ + }else if( zP4!=0 ){ + assert( n<0 ); pOp->p4.p = (void*)zP4; pOp->p4type = (signed char)n; - }else{ - if( n==0 ) n = sqlite3Strlen30(zP4); - pOp->p4.z = sqlite3DbStrNDup(p->db, zP4, n); - pOp->p4type = P4_DYNAMIC; + if( n==P4_VTAB ) sqlite3VtabLock((VTable*)zP4); } } -#ifndef NDEBUG /* -** Change the comment on the the most recently coded instruction. Or +** Set the P4 on the most recently added opcode to the KeyInfo for the +** index given. +*/ +SQLITE_PRIVATE void sqlite3VdbeSetP4KeyInfo(Parse *pParse, Index *pIdx){ + Vdbe *v = pParse->pVdbe; + assert( v!=0 ); + assert( pIdx!=0 ); + sqlite3VdbeChangeP4(v, -1, (char*)sqlite3KeyInfoOfIndex(pParse, pIdx), + P4_KEYINFO); +} + +#ifdef SQLITE_ENABLE_EXPLAIN_COMMENTS +/* +** Change the comment on the most recently coded instruction. Or ** insert a No-op and add the comment to that new instruction. This ** makes the code easier to read during debugging. None of this happens ** in a production build. */ -SQLITE_PRIVATE void sqlite3VdbeComment(Vdbe *p, const char *zFormat, ...){ - va_list ap; - if( !p ) return; +static void vdbeVComment(Vdbe *p, const char *zFormat, va_list ap){ assert( p->nOp>0 || p->aOp==0 ); assert( p->aOp==0 || p->aOp[p->nOp-1].zComment==0 || p->db->mallocFailed ); if( p->nOp ){ - char **pz = &p->aOp[p->nOp-1].zComment; + assert( p->aOp ); + sqlite3DbFree(p->db, p->aOp[p->nOp-1].zComment); + p->aOp[p->nOp-1].zComment = sqlite3VMPrintf(p->db, zFormat, ap); + } +} +SQLITE_PRIVATE void sqlite3VdbeComment(Vdbe *p, const char *zFormat, ...){ + va_list ap; + if( p ){ va_start(ap, zFormat); - sqlite3DbFree(p->db, *pz); - *pz = sqlite3VMPrintf(p->db, zFormat, ap); + vdbeVComment(p, zFormat, ap); va_end(ap); } } SQLITE_PRIVATE void sqlite3VdbeNoopComment(Vdbe *p, const char *zFormat, ...){ va_list ap; - if( !p ) return; - sqlite3VdbeAddOp0(p, OP_Noop); - assert( p->nOp>0 || p->aOp==0 ); - assert( p->aOp==0 || p->aOp[p->nOp-1].zComment==0 || p->db->mallocFailed ); - if( p->nOp ){ - char **pz = &p->aOp[p->nOp-1].zComment; + if( p ){ + sqlite3VdbeAddOp0(p, OP_Noop); va_start(ap, zFormat); - sqlite3DbFree(p->db, *pz); - *pz = sqlite3VMPrintf(p->db, zFormat, ap); + vdbeVComment(p, zFormat, ap); va_end(ap); } } #endif /* NDEBUG */ +#ifdef SQLITE_VDBE_COVERAGE +/* +** Set the value if the iSrcLine field for the previously coded instruction. +*/ +SQLITE_PRIVATE void sqlite3VdbeSetLineNumber(Vdbe *v, int iLine){ + sqlite3VdbeGetOp(v,-1)->iSrcLine = iLine; +} +#endif /* SQLITE_VDBE_COVERAGE */ + /* ** Return the opcode for a given address. 
If the address is -1, then ** return the most recently inserted opcode. ** ** If a memory allocation error has occurred prior to the calling of this ** routine, then a pointer to a dummy VdbeOp will be returned. That opcode -** is readable and writable, but it has no effect. The return of a dummy -** opcode allows the call to continue functioning after a OOM fault without -** having to check to see if the return from this routine is a valid pointer. -** -** About the #ifdef SQLITE_OMIT_TRACE: Normally, this routine is never called -** unless p->nOp>0. This is because in the absense of SQLITE_OMIT_TRACE, -** an OP_Trace instruction is always inserted by sqlite3VdbeGet() as soon as -** a new VDBE is created. So we are free to set addr to p->nOp-1 without -** having to double-check to make sure that the result is non-negative. But -** if SQLITE_OMIT_TRACE is defined, the OP_Trace is omitted and we do need to -** check the value of p->nOp-1 before continuing. +** is readable but not writable, though it is cast to a writable value. +** The return of a dummy opcode allows the call to continue functioning +** after an OOM fault without having to check to see if the return from +** this routine is a valid pointer. But because the dummy.opcode is 0, +** dummy will never be written to. This is verified by code inspection and +** by running with Valgrind. */ SQLITE_PRIVATE VdbeOp *sqlite3VdbeGetOp(Vdbe *p, int addr){ - static VdbeOp dummy; + /* C89 specifies that the constant "dummy" will be initialized to all + ** zeros, which is correct. MSVC generates a warning, nevertheless. */ + static VdbeOp dummy; /* Ignore the MSVC warning about no initializer */ assert( p->magic==VDBE_MAGIC_INIT ); if( addr<0 ){ -#ifdef SQLITE_OMIT_TRACE - if( p->nOp==0 ) return &dummy; -#endif addr = p->nOp - 1; } assert( (addr>=0 && addr<p->nOp) || p->db->mallocFailed ); if( p->db->mallocFailed ){ - return &dummy; + return (VdbeOp*)&dummy; }else{ return &p->aOp[addr]; } } -#if !defined(SQLITE_OMIT_EXPLAIN) || !defined(NDEBUG) \ - || defined(VDBE_PROFILE) || defined(SQLITE_DEBUG) +#if defined(SQLITE_ENABLE_EXPLAIN_COMMENTS) +/* +** Return an integer value for one of the parameters to the opcode pOp +** determined by character c. +*/ +static int translateP(char c, const Op *pOp){ + if( c=='1' ) return pOp->p1; + if( c=='2' ) return pOp->p2; + if( c=='3' ) return pOp->p3; + if( c=='4' ) return pOp->p4.i; + return pOp->p5; +} + +/* +** Compute a string for the "comment" field of a VDBE opcode listing. +** +** The Synopsis: field in comments in the vdbe.c source file gets converted +** to an extra string that is appended to the sqlite3OpcodeName(). In the +** absence of other comments, this synopsis becomes the comment on the opcode. 
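+** For example (using a hypothetical synopsis and the translation rules
+** listed just below): a synopsis of "r[P2]=r[P1]" on an opcode whose p1
+** is 3 and whose p2 is 7 would be rendered as "r[7]=r[3]".
+**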
+** Some translation occurs: +** +** "PX" -> "r[X]" +** "PX@PY" -> "r[X..X+Y-1]" or "r[x]" if y is 0 or 1 +** "PX@PY+1" -> "r[X..X+Y]" or "r[x]" if y is 0 +** "PY..PY" -> "r[X..Y]" or "r[x]" if y<=x +*/ +static int displayComment( + const Op *pOp, /* The opcode to be commented */ + const char *zP4, /* Previously obtained value for P4 */ + char *zTemp, /* Write result here */ + int nTemp /* Space available in zTemp[] */ +){ + const char *zOpName; + const char *zSynopsis; + int nOpName; + int ii, jj; + zOpName = sqlite3OpcodeName(pOp->opcode); + nOpName = sqlite3Strlen30(zOpName); + if( zOpName[nOpName+1] ){ + int seenCom = 0; + char c; + zSynopsis = zOpName += nOpName + 1; + for(ii=jj=0; jj<nTemp-1 && (c = zSynopsis[ii])!=0; ii++){ + if( c=='P' ){ + c = zSynopsis[++ii]; + if( c=='4' ){ + sqlite3_snprintf(nTemp-jj, zTemp+jj, "%s", zP4); + }else if( c=='X' ){ + sqlite3_snprintf(nTemp-jj, zTemp+jj, "%s", pOp->zComment); + seenCom = 1; + }else{ + int v1 = translateP(c, pOp); + int v2; + sqlite3_snprintf(nTemp-jj, zTemp+jj, "%d", v1); + if( strncmp(zSynopsis+ii+1, "@P", 2)==0 ){ + ii += 3; + jj += sqlite3Strlen30(zTemp+jj); + v2 = translateP(zSynopsis[ii], pOp); + if( strncmp(zSynopsis+ii+1,"+1",2)==0 ){ + ii += 2; + v2++; + } + if( v2>1 ){ + sqlite3_snprintf(nTemp-jj, zTemp+jj, "..%d", v1+v2-1); + } + }else if( strncmp(zSynopsis+ii+1, "..P3", 4)==0 && pOp->p3==0 ){ + ii += 4; + } + } + jj += sqlite3Strlen30(zTemp+jj); + }else{ + zTemp[jj++] = c; + } + } + if( !seenCom && jj<nTemp-5 && pOp->zComment ){ + sqlite3_snprintf(nTemp-jj, zTemp+jj, "; %s", pOp->zComment); + jj += sqlite3Strlen30(zTemp+jj); + } + if( jj<nTemp ) zTemp[jj] = 0; + }else if( pOp->zComment ){ + sqlite3_snprintf(nTemp, zTemp, "%s", pOp->zComment); + jj = sqlite3Strlen30(zTemp); + }else{ + zTemp[0] = 0; + jj = 0; + } + return jj; +} +#endif /* SQLITE_DEBUG */ + +#if VDBE_DISPLAY_P4 && defined(SQLITE_ENABLE_CURSOR_HINTS) +/* +** Translate the P4.pExpr value for an OP_CursorHint opcode into text +** that can be displayed in the P4 column of EXPLAIN output. 
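+** For example (illustrative): a hint expression comparing column 2 of the
+** cursor's table against the integer 10 with the TK_LT operator is shown
+** as "LT(c2,10)", and a rowid comparison against register 5 is shown as
+** "EQ(rowid,r[5])".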
+*/ +static void displayP4Expr(StrAccum *p, Expr *pExpr){ + const char *zOp = 0; + switch( pExpr->op ){ + case TK_STRING: + sqlite3XPrintf(p, "%Q", pExpr->u.zToken); + break; + case TK_INTEGER: + sqlite3XPrintf(p, "%d", pExpr->u.iValue); + break; + case TK_NULL: + sqlite3XPrintf(p, "NULL"); + break; + case TK_REGISTER: { + sqlite3XPrintf(p, "r[%d]", pExpr->iTable); + break; + } + case TK_COLUMN: { + if( pExpr->iColumn<0 ){ + sqlite3XPrintf(p, "rowid"); + }else{ + sqlite3XPrintf(p, "c%d", (int)pExpr->iColumn); + } + break; + } + case TK_LT: zOp = "LT"; break; + case TK_LE: zOp = "LE"; break; + case TK_GT: zOp = "GT"; break; + case TK_GE: zOp = "GE"; break; + case TK_NE: zOp = "NE"; break; + case TK_EQ: zOp = "EQ"; break; + case TK_IS: zOp = "IS"; break; + case TK_ISNOT: zOp = "ISNOT"; break; + case TK_AND: zOp = "AND"; break; + case TK_OR: zOp = "OR"; break; + case TK_PLUS: zOp = "ADD"; break; + case TK_STAR: zOp = "MUL"; break; + case TK_MINUS: zOp = "SUB"; break; + case TK_REM: zOp = "REM"; break; + case TK_BITAND: zOp = "BITAND"; break; + case TK_BITOR: zOp = "BITOR"; break; + case TK_SLASH: zOp = "DIV"; break; + case TK_LSHIFT: zOp = "LSHIFT"; break; + case TK_RSHIFT: zOp = "RSHIFT"; break; + case TK_CONCAT: zOp = "CONCAT"; break; + case TK_UMINUS: zOp = "MINUS"; break; + case TK_UPLUS: zOp = "PLUS"; break; + case TK_BITNOT: zOp = "BITNOT"; break; + case TK_NOT: zOp = "NOT"; break; + case TK_ISNULL: zOp = "ISNULL"; break; + case TK_NOTNULL: zOp = "NOTNULL"; break; + + default: + sqlite3XPrintf(p, "%s", "expr"); + break; + } + + if( zOp ){ + sqlite3XPrintf(p, "%s(", zOp); + displayP4Expr(p, pExpr->pLeft); + if( pExpr->pRight ){ + sqlite3StrAccumAppend(p, ",", 1); + displayP4Expr(p, pExpr->pRight); + } + sqlite3StrAccumAppend(p, ")", 1); + } +} +#endif /* VDBE_DISPLAY_P4 && defined(SQLITE_ENABLE_CURSOR_HINTS) */ + + +#if VDBE_DISPLAY_P4 /* ** Compute a string that describes the P4 parameter for an opcode. ** Use zTemp for any required temporary buffer space. */ static char *displayP4(Op *pOp, char *zTemp, int nTemp){ char *zP4 = zTemp; + StrAccum x; assert( nTemp>=20 ); + sqlite3StrAccumInit(&x, 0, zTemp, nTemp, 0); switch( pOp->p4type ){ - case P4_KEYINFO_STATIC: case P4_KEYINFO: { - int i, j; + int j; KeyInfo *pKeyInfo = pOp->p4.pKeyInfo; - sqlite3_snprintf(nTemp, zTemp, "keyinfo(%d", pKeyInfo->nField); - i = sqlite3Strlen30(zTemp); + assert( pKeyInfo->aSortOrder!=0 ); + sqlite3XPrintf(&x, "k(%d", pKeyInfo->nField); for(j=0; j<pKeyInfo->nField; j++){ CollSeq *pColl = pKeyInfo->aColl[j]; - if( pColl ){ - int n = sqlite3Strlen30(pColl->zName); - if( i+n>nTemp-6 ){ - memcpy(&zTemp[i],",...",4); - break; - } - zTemp[i++] = ','; - if( pKeyInfo->aSortOrder && pKeyInfo->aSortOrder[j] ){ - zTemp[i++] = '-'; - } - memcpy(&zTemp[i], pColl->zName,n+1); - i += n; - }else if( i+4<nTemp-6 ){ - memcpy(&zTemp[i],",nil",4); - i += 4; - } - } - zTemp[i++] = ')'; - zTemp[i] = 0; - assert( i<nTemp ); - break; - } + const char *zColl = pColl ? pColl->zName : ""; + if( strcmp(zColl, "BINARY")==0 ) zColl = "B"; + sqlite3XPrintf(&x, ",%s%s", pKeyInfo->aSortOrder[j] ? 
"-" : "", zColl); + } + sqlite3StrAccumAppend(&x, ")", 1); + break; + } +#ifdef SQLITE_ENABLE_CURSOR_HINTS + case P4_EXPR: { + displayP4Expr(&x, pOp->p4.pExpr); + break; + } +#endif case P4_COLLSEQ: { CollSeq *pColl = pOp->p4.pColl; - sqlite3_snprintf(nTemp, zTemp, "collseq(%.20s)", pColl->zName); + sqlite3XPrintf(&x, "(%.20s)", pColl->zName); break; } case P4_FUNCDEF: { FuncDef *pDef = pOp->p4.pFunc; - sqlite3_snprintf(nTemp, zTemp, "%s(%d)", pDef->zName, pDef->nArg); + sqlite3XPrintf(&x, "%s(%d)", pDef->zName, pDef->nArg); break; } +#ifdef SQLITE_DEBUG + case P4_FUNCCTX: { + FuncDef *pDef = pOp->p4.pCtx->pFunc; + sqlite3XPrintf(&x, "%s(%d)", pDef->zName, pDef->nArg); + break; + } +#endif case P4_INT64: { - sqlite3_snprintf(nTemp, zTemp, "%lld", *pOp->p4.pI64); + sqlite3XPrintf(&x, "%lld", *pOp->p4.pI64); break; } case P4_INT32: { - sqlite3_snprintf(nTemp, zTemp, "%d", pOp->p4.i); + sqlite3XPrintf(&x, "%d", pOp->p4.i); break; } case P4_REAL: { - sqlite3_snprintf(nTemp, zTemp, "%.16g", *pOp->p4.pReal); + sqlite3XPrintf(&x, "%.16g", *pOp->p4.pReal); break; } case P4_MEM: { Mem *pMem = pOp->p4.pMem; - assert( (pMem->flags & MEM_Null)==0 ); if( pMem->flags & MEM_Str ){ zP4 = pMem->z; }else if( pMem->flags & MEM_Int ){ - sqlite3_snprintf(nTemp, zTemp, "%lld", pMem->u.i); + sqlite3XPrintf(&x, "%lld", pMem->u.i); }else if( pMem->flags & MEM_Real ){ - sqlite3_snprintf(nTemp, zTemp, "%.16g", pMem->r); + sqlite3XPrintf(&x, "%.16g", pMem->u.r); + }else if( pMem->flags & MEM_Null ){ + zP4 = "NULL"; }else{ assert( pMem->flags & MEM_Blob ); zP4 = "(blob)"; } break; } #ifndef SQLITE_OMIT_VIRTUALTABLE case P4_VTAB: { sqlite3_vtab *pVtab = pOp->p4.pVtab->pVtab; - sqlite3_snprintf(nTemp, zTemp, "vtab:%p:%p", pVtab, pVtab->pModule); + sqlite3XPrintf(&x, "vtab:%p", pVtab); break; } #endif case P4_INTARRAY: { - sqlite3_snprintf(nTemp, zTemp, "intarray"); + int i; + int *ai = pOp->p4.ai; + int n = ai[0]; /* The first element of an INTARRAY is always the + ** count of the number of elements to follow */ + for(i=1; i<n; i++){ + sqlite3XPrintf(&x, ",%d", ai[i]); + } + zTemp[0] = '['; + sqlite3StrAccumAppend(&x, "]", 1); break; } case P4_SUBPROGRAM: { - sqlite3_snprintf(nTemp, zTemp, "program"); + sqlite3XPrintf(&x, "program"); + break; + } + case P4_ADVANCE: { + zTemp[0] = 0; break; } default: { zP4 = pOp->p4.z; if( zP4==0 ){ @@ -49495,47 +69109,118 @@ zP4 = zTemp; zTemp[0] = 0; } } } + sqlite3StrAccumFinish(&x); assert( zP4!=0 ); return zP4; } -#endif +#endif /* VDBE_DISPLAY_P4 */ /* ** Declare to the Vdbe that the BTree object at db->aDb[i] is used. +** +** The prepared statements need to know in advance the complete set of +** attached databases that will be use. A mask of these databases +** is maintained in p->btreeMask. The p->lockMask value is the subset of +** p->btreeMask of databases that will require a lock. 
*/ SQLITE_PRIVATE void sqlite3VdbeUsesBtree(Vdbe *p, int i){ - int mask; - assert( i>=0 && i<p->db->nDb && i<sizeof(u32)*8 ); + assert( i>=0 && i<p->db->nDb && i<(int)sizeof(yDbMask)*8 ); assert( i<(int)sizeof(p->btreeMask)*8 ); - mask = ((u32)1)<<i; - if( (p->btreeMask & mask)==0 ){ - p->btreeMask |= mask; - sqlite3BtreeMutexArrayInsert(&p->aMutex, p->db->aDb[i].pBt); + DbMaskSet(p->btreeMask, i); + if( i!=1 && sqlite3BtreeSharable(p->db->aDb[i].pBt) ){ + DbMaskSet(p->lockMask, i); } } +#if !defined(SQLITE_OMIT_SHARED_CACHE) +/* +** If SQLite is compiled to support shared-cache mode and to be threadsafe, +** this routine obtains the mutex associated with each BtShared structure +** that may be accessed by the VM passed as an argument. In doing so it also +** sets the BtShared.db member of each of the BtShared structures, ensuring +** that the correct busy-handler callback is invoked if required. +** +** If SQLite is not threadsafe but does support shared-cache mode, then +** sqlite3BtreeEnter() is invoked to set the BtShared.db variables +** of all of BtShared structures accessible via the database handle +** associated with the VM. +** +** If SQLite is not threadsafe and does not support shared-cache mode, this +** function is a no-op. +** +** The p->btreeMask field is a bitmask of all btrees that the prepared +** statement p will ever use. Let N be the number of bits in p->btreeMask +** corresponding to btrees that use shared cache. Then the runtime of +** this routine is N*N. But as N is rarely more than 1, this should not +** be a problem. +*/ +SQLITE_PRIVATE void sqlite3VdbeEnter(Vdbe *p){ + int i; + sqlite3 *db; + Db *aDb; + int nDb; + if( DbMaskAllZero(p->lockMask) ) return; /* The common case */ + db = p->db; + aDb = db->aDb; + nDb = db->nDb; + for(i=0; i<nDb; i++){ + if( i!=1 && DbMaskTest(p->lockMask,i) && ALWAYS(aDb[i].pBt!=0) ){ + sqlite3BtreeEnter(aDb[i].pBt); + } + } +} +#endif + +#if !defined(SQLITE_OMIT_SHARED_CACHE) && SQLITE_THREADSAFE>0 +/* +** Unlock all of the btrees previously locked by a call to sqlite3VdbeEnter(). +*/ +static SQLITE_NOINLINE void vdbeLeave(Vdbe *p){ + int i; + sqlite3 *db; + Db *aDb; + int nDb; + db = p->db; + aDb = db->aDb; + nDb = db->nDb; + for(i=0; i<nDb; i++){ + if( i!=1 && DbMaskTest(p->lockMask,i) && ALWAYS(aDb[i].pBt!=0) ){ + sqlite3BtreeLeave(aDb[i].pBt); + } + } +} +SQLITE_PRIVATE void sqlite3VdbeLeave(Vdbe *p){ + if( DbMaskAllZero(p->lockMask) ) return; /* The common case */ + vdbeLeave(p); +} +#endif #if defined(VDBE_PROFILE) || defined(SQLITE_DEBUG) /* ** Print a single opcode. This routine is used for debugging only. */ SQLITE_PRIVATE void sqlite3VdbePrintOp(FILE *pOut, int pc, Op *pOp){ char *zP4; char zPtr[50]; - static const char *zFormat1 = "%4d %-13s %4d %4d %4d %-4s %.2X %s\n"; + char zCom[100]; + static const char *zFormat1 = "%4d %-13s %4d %4d %4d %-13s %.2X %s\n"; if( pOut==0 ) pOut = stdout; zP4 = displayP4(pOp, zPtr, sizeof(zPtr)); +#ifdef SQLITE_ENABLE_EXPLAIN_COMMENTS + displayComment(pOp, zP4, zCom, sizeof(zCom)); +#else + zCom[0] = 0; +#endif + /* NB: The sqlite3OpcodeName() function is implemented by code created + ** by the mkopcodeh.awk and mkopcodec.awk scripts which extract the + ** information from the vdbe.c source text */ fprintf(pOut, zFormat1, pc, sqlite3OpcodeName(pOp->opcode), pOp->p1, pOp->p2, pOp->p3, zP4, pOp->p5, -#ifdef SQLITE_DEBUG - pOp->zComment ? 
pOp->zComment : "" -#else - "" -#endif + zCom ); fflush(pOut); } #endif @@ -49542,15 +69227,21 @@ /* ** Release an array of N Mem elements */ static void releaseMemArray(Mem *p, int N){ if( p && N ){ - Mem *pEnd; + Mem *pEnd = &p[N]; sqlite3 *db = p->db; - u8 malloc_failed = db->mallocFailed; - for(pEnd=&p[N]; p<pEnd; p++){ + if( db->pnBytesFreed ){ + do{ + if( p->szMalloc ) sqlite3DbFree(db, p->zMalloc); + }while( (++p)<pEnd ); + return; + } + do{ assert( (&p[1])==pEnd || p[0].db==p[1].db ); + assert( sqlite3VdbeCheckMemInvariants(p) ); /* This block is really an inlined version of sqlite3VdbeMemRelease() ** that takes advantage of the fact that the memory cell value is ** being set to NULL after releasing any dynamic resources. ** @@ -49560,20 +69251,23 @@ ** sqlite3MemRelease() were called from here. With -O2, this jumps ** to 6.6 percent. The test case is inserting 1000 rows into a table ** with no indexes using a single prepared INSERT statement, bind() ** and reset(). Inserts are grouped into a transaction. */ + testcase( p->flags & MEM_Agg ); + testcase( p->flags & MEM_Dyn ); + testcase( p->flags & MEM_Frame ); + testcase( p->flags & MEM_RowSet ); if( p->flags&(MEM_Agg|MEM_Dyn|MEM_Frame|MEM_RowSet) ){ sqlite3VdbeMemRelease(p); - }else if( p->zMalloc ){ + }else if( p->szMalloc ){ sqlite3DbFree(db, p->zMalloc); - p->zMalloc = 0; + p->szMalloc = 0; } - p->flags = MEM_Null; - } - db->mallocFailed = malloc_failed; + p->flags = MEM_Undefined; + }while( (++p)<pEnd ); } } /* ** Delete a VdbeFrame object and its contents. VdbeFrame objects are @@ -49614,11 +69308,11 @@ SubProgram **apSub = 0; /* Array of sub-vdbes */ Mem *pSub = 0; /* Memory cell hold array of subprogs */ sqlite3 *db = p->db; /* The database connection */ int i; /* Loop counter */ int rc = SQLITE_OK; /* Return code */ - Mem *pMem = p->pResultSet = &p->aMem[1]; /* First Mem of result set */ + Mem *pMem = &p->aMem[1]; /* First Mem of result set */ assert( p->explain ); assert( p->magic==VDBE_MAGIC_RUN ); assert( p->rc==SQLITE_OK || p->rc==SQLITE_BUSY || p->rc==SQLITE_NOMEM ); @@ -49625,15 +69319,16 @@ /* Even though this opcode does not use dynamic strings for ** the result, result columns may become dynamic if the user calls ** sqlite3_column_text16(), causing a translation to UTF-16 encoding. */ releaseMemArray(pMem, 8); + p->pResultSet = 0; if( p->rc==SQLITE_NOMEM ){ /* This happens if a malloc() inside a call to sqlite3_column_text() or ** sqlite3_column_text16() failed. */ - db->mallocFailed = 1; + sqlite3OomFault(db); return SQLITE_ERROR; } /* When the number of output rows reaches nRow, that means the ** listing has finished and sqlite3_step() should return SQLITE_DONE. @@ -49668,13 +69363,13 @@ p->rc = SQLITE_OK; rc = SQLITE_DONE; }else if( db->u1.isInterrupted ){ p->rc = SQLITE_INTERRUPT; rc = SQLITE_ERROR; - sqlite3SetString(&p->zErrMsg, db, "%s", sqlite3ErrStr(p->rc)); + sqlite3VdbeError(p, sqlite3ErrStr(p->rc)); }else{ - char *z; + char *zP4; Op *pOp; if( i<p->nOp ){ /* The output line number is small enough that we are still in the ** main program. 
*/ pOp = &p->aOp[i]; @@ -49688,19 +69383,17 @@ } pOp = &apSub[j]->aOp[i]; } if( p->explain==1 ){ pMem->flags = MEM_Int; - pMem->type = SQLITE_INTEGER; pMem->u.i = i; /* Program counter */ pMem++; pMem->flags = MEM_Static|MEM_Str|MEM_Term; - pMem->z = (char*)sqlite3OpcodeName(pOp->opcode); /* Opcode */ + pMem->z = (char*)sqlite3OpcodeName(pOp->opcode); /* Opcode */ assert( pMem->z!=0 ); pMem->n = sqlite3Strlen30(pMem->z); - pMem->type = SQLITE_TEXT; pMem->enc = SQLITE_UTF8; pMem++; /* When an OP_Program opcode is encounter (the only opcode that has ** a P4_SUBPROGRAM argument), expand the size of the array of subprograms @@ -49711,11 +69404,11 @@ int nByte = (nSub+1)*sizeof(SubProgram*); int j; for(j=0; j<nSub; j++){ if( apSub[j]==pOp->p4.pProgram ) break; } - if( j==nSub && SQLITE_OK==sqlite3VdbeMemGrow(pSub, nByte, 1) ){ + if( j==nSub && SQLITE_OK==sqlite3VdbeMemGrow(pSub, nByte, nSub!=0) ){ apSub = (SubProgram **)pSub->z; apSub[nSub++] = pOp->p4.pProgram; pSub->flags |= MEM_Blob; pSub->n = nSub*sizeof(SubProgram*); } @@ -49722,69 +69415,61 @@ } } pMem->flags = MEM_Int; pMem->u.i = pOp->p1; /* P1 */ - pMem->type = SQLITE_INTEGER; pMem++; pMem->flags = MEM_Int; pMem->u.i = pOp->p2; /* P2 */ - pMem->type = SQLITE_INTEGER; + pMem++; + + pMem->flags = MEM_Int; + pMem->u.i = pOp->p3; /* P3 */ pMem++; - if( p->explain==1 ){ - pMem->flags = MEM_Int; - pMem->u.i = pOp->p3; /* P3 */ - pMem->type = SQLITE_INTEGER; - pMem++; - } - - if( sqlite3VdbeMemGrow(pMem, 32, 0) ){ /* P4 */ + if( sqlite3VdbeMemClearAndResize(pMem, 100) ){ /* P4 */ assert( p->db->mallocFailed ); return SQLITE_ERROR; } - pMem->flags = MEM_Dyn|MEM_Str|MEM_Term; - z = displayP4(pOp, pMem->z, 32); - if( z!=pMem->z ){ - sqlite3VdbeMemSetStr(pMem, z, -1, SQLITE_UTF8, 0); + pMem->flags = MEM_Str|MEM_Term; + zP4 = displayP4(pOp, pMem->z, pMem->szMalloc); + if( zP4!=pMem->z ){ + sqlite3VdbeMemSetStr(pMem, zP4, -1, SQLITE_UTF8, 0); }else{ assert( pMem->z!=0 ); pMem->n = sqlite3Strlen30(pMem->z); pMem->enc = SQLITE_UTF8; } - pMem->type = SQLITE_TEXT; - pMem++; - - if( p->explain==1 ){ - if( sqlite3VdbeMemGrow(pMem, 4, 0) ){ - assert( p->db->mallocFailed ); - return SQLITE_ERROR; - } - pMem->flags = MEM_Dyn|MEM_Str|MEM_Term; - pMem->n = 2; - sqlite3_snprintf(3, pMem->z, "%.2x", pOp->p5); /* P5 */ - pMem->type = SQLITE_TEXT; - pMem->enc = SQLITE_UTF8; - pMem++; - -#ifdef SQLITE_DEBUG - if( pOp->zComment ){ - pMem->flags = MEM_Str|MEM_Term; - pMem->z = pOp->zComment; - pMem->n = sqlite3Strlen30(pMem->z); - pMem->enc = SQLITE_UTF8; - pMem->type = SQLITE_TEXT; - }else -#endif - { - pMem->flags = MEM_Null; /* Comment */ - pMem->type = SQLITE_NULL; - } - } - - p->nResColumn = 8 - 5*(p->explain-1); + pMem++; + + if( p->explain==1 ){ + if( sqlite3VdbeMemClearAndResize(pMem, 4) ){ + assert( p->db->mallocFailed ); + return SQLITE_ERROR; + } + pMem->flags = MEM_Str|MEM_Term; + pMem->n = 2; + sqlite3_snprintf(3, pMem->z, "%.2x", pOp->p5); /* P5 */ + pMem->enc = SQLITE_UTF8; + pMem++; + +#ifdef SQLITE_ENABLE_EXPLAIN_COMMENTS + if( sqlite3VdbeMemClearAndResize(pMem, 500) ){ + assert( p->db->mallocFailed ); + return SQLITE_ERROR; + } + pMem->flags = MEM_Str|MEM_Term; + pMem->n = displayComment(pOp, zP4, pMem->z, 500); + pMem->enc = SQLITE_UTF8; +#else + pMem->flags = MEM_Null; /* Comment */ +#endif + } + + p->nResColumn = 8 - 4*(p->explain-1); + p->pResultSet = &p->aMem[1]; p->rc = SQLITE_OK; rc = SQLITE_ROW; } return rc; } @@ -49793,19 +69478,21 @@ #ifdef SQLITE_DEBUG /* ** Print the SQL that was used to generate a VDBE program. 
*/ SQLITE_PRIVATE void sqlite3VdbePrintSql(Vdbe *p){ - int nOp = p->nOp; - VdbeOp *pOp; - if( nOp<1 ) return; - pOp = &p->aOp[0]; - if( pOp->opcode==OP_Trace && pOp->p4.z!=0 ){ - const char *z = pOp->p4.z; - while( sqlite3Isspace(*z) ) z++; - printf("SQL: [%s]\n", z); - } + const char *z = 0; + if( p->zSql ){ + z = p->zSql; + }else if( p->nOp>=1 ){ + const VdbeOp *pOp = &p->aOp[0]; + if( pOp->opcode==OP_Init && pOp->p4.z!=0 ){ + z = pOp->p4.z; + while( sqlite3Isspace(*z) ) z++; + } + } + if( z ) printf("SQL: [%s]\n", z); } #endif #if !defined(SQLITE_OMIT_TRACE) && defined(SQLITE_ENABLE_IOTRACE) /* @@ -49815,11 +69502,11 @@ int nOp = p->nOp; VdbeOp *pOp; if( sqlite3IoTrace==0 ) return; if( nOp<1 ) return; pOp = &p->aOp[0]; - if( pOp->opcode==OP_Trace && pOp->p4.z!=0 ){ + if( pOp->opcode==OP_Init && pOp->p4.z!=0 ){ int i, j; char z[1000]; sqlite3_snprintf(sizeof(z), z, "%s", pOp->p4.z); for(i=0; sqlite3Isspace(z[i]); i++){} for(j=0; z[i]; i++){ @@ -49835,79 +69522,61 @@ sqlite3IoTrace("SQL %s\n", z); } } #endif /* !SQLITE_OMIT_TRACE && SQLITE_ENABLE_IOTRACE */ -/* -** Allocate space from a fixed size buffer and return a pointer to -** that space. If insufficient space is available, return NULL. -** -** The pBuf parameter is the initial value of a pointer which will -** receive the new memory. pBuf is normally NULL. If pBuf is not -** NULL, it means that memory space has already been allocated and that -** this routine should not allocate any new memory. When pBuf is not -** NULL simply return pBuf. Only allocate new memory space when pBuf -** is NULL. -** -** nByte is the number of bytes of space needed. -** -** *ppFrom points to available space and pEnd points to the end of the -** available space. When space is allocated, *ppFrom is advanced past -** the end of the allocated space. -** -** *pnByte is a counter of the number of bytes of space that have failed -** to allocate. If there is insufficient space in *ppFrom to satisfy the -** request, then increment *pnByte by the amount of the request. +/* An instance of this object describes bulk memory available for use +** by subcomponents of a prepared statement. Space is allocated out +** of a ReusableSpace object by the allocSpace() routine below. +*/ +struct ReusableSpace { + u8 *pSpace; /* Available memory */ + int nFree; /* Bytes of available memory */ + int nNeeded; /* Total bytes that could not be allocated */ +}; + +/* Try to allocate nByte bytes of 8-byte aligned bulk memory for pBuf +** from the ReusableSpace object. Return a pointer to the allocated +** memory on success. If insufficient memory is available in the +** ReusableSpace object, increase the ReusableSpace.nNeeded +** value by the amount needed and return NULL. +** +** If pBuf is not initially NULL, that means that the memory has already +** been allocated by a prior call to this routine, so just return a copy +** of pBuf and leave ReusableSpace unchanged. +** +** This allocator is employed to repurpose unused slots at the end of the +** opcode array of prepared state for other memory needs of the prepared +** statement. 
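+**
+** Sketch of the intended calling pattern (a condensed form of the loop in
+** sqlite3VdbeMakeReady() below):
+**
+**     x.pSpace = unused tail of the opcode array;  x.nFree = its size;
+**     do{
+**       x.nNeeded = 0;
+**       p->aMem = allocSpace(&x, p->aMem, nMem*sizeof(Mem));
+**       ...                             /* more buffers, same pattern */
+**       if( x.nNeeded==0 ) break;       /* everything fit in the tail */
+**       x.pSpace = sqlite3DbMallocZero(db, x.nNeeded); /* fresh memory */
+**       x.nFree = x.nNeeded;
+**     }while( !db->mallocFailed );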
*/ static void *allocSpace( - void *pBuf, /* Where return pointer will be stored */ - int nByte, /* Number of bytes to allocate */ - u8 **ppFrom, /* IN/OUT: Allocate from *ppFrom */ - u8 *pEnd, /* Pointer to 1 byte past the end of *ppFrom buffer */ - int *pnByte /* If allocation cannot be made, increment *pnByte */ + struct ReusableSpace *p, /* Bulk memory available for allocation */ + void *pBuf, /* Pointer to a prior allocation */ + int nByte /* Bytes of memory needed */ ){ - assert( EIGHT_BYTE_ALIGNMENT(*ppFrom) ); - if( pBuf ) return pBuf; - nByte = ROUND8(nByte); - if( &(*ppFrom)[nByte] <= pEnd ){ - pBuf = (void*)*ppFrom; - *ppFrom += nByte; - }else{ - *pnByte += nByte; - } + assert( EIGHT_BYTE_ALIGNMENT(p->pSpace) ); + if( pBuf==0 ){ + nByte = ROUND8(nByte); + if( nByte <= p->nFree ){ + p->nFree -= nByte; + pBuf = &p->pSpace[p->nFree]; + }else{ + p->nNeeded += nByte; + } + } + assert( EIGHT_BYTE_ALIGNMENT(pBuf) ); return pBuf; } /* -** Prepare a virtual machine for execution. This involves things such -** as allocating stack space and initializing the program counter. -** After the VDBE has be prepped, it can be executed by one or more -** calls to sqlite3VdbeExec(). -** -** This is the only way to move a VDBE from VDBE_MAGIC_INIT to -** VDBE_MAGIC_RUN. -** -** This function may be called more than once on a single virtual machine. -** The first call is made while compiling the SQL statement. Subsequent -** calls are made as part of the process of resetting a statement to be -** re-executed (from a call to sqlite3_reset()). The nVar, nMem, nCursor -** and isExplain parameters are only passed correct values the first time -** the function is called. On subsequent calls, from sqlite3_reset(), nVar -** is passed -1 and nMem, nCursor and isExplain are all passed zero. +** Rewind the VDBE back to the beginning in preparation for +** running it. */ -SQLITE_PRIVATE void sqlite3VdbeMakeReady( - Vdbe *p, /* The VDBE */ - int nVar, /* Number of '?' see in the SQL statement */ - int nMem, /* Number of memory cells to allocate */ - int nCursor, /* Number of cursors to allocate */ - int nArg, /* Maximum number of args in SubPrograms */ - int isExplain, /* True if the EXPLAIN keywords is present */ - int usesStmtJournal /* True to set Vdbe.usesStmtJournal */ -){ - int n; - sqlite3 *db = p->db; - +SQLITE_PRIVATE void sqlite3VdbeRewind(Vdbe *p){ +#if defined(SQLITE_DEBUG) || defined(VDBE_PROFILE) + int i; +#endif assert( p!=0 ); assert( p->magic==VDBE_MAGIC_INIT ); /* There should be at least one opcode. */ @@ -49914,106 +69583,156 @@ assert( p->nOp>0 ); /* Set the magic to VDBE_MAGIC_RUN sooner rather than later. */ p->magic = VDBE_MAGIC_RUN; +#ifdef SQLITE_DEBUG + for(i=1; i<p->nMem; i++){ + assert( p->aMem[i].db==p->db ); + } +#endif + p->pc = -1; + p->rc = SQLITE_OK; + p->errorAction = OE_Abort; + p->nChange = 0; + p->cacheCtr = 1; + p->minWriteFileFormat = 255; + p->iStatement = 0; + p->nFkConstraint = 0; +#ifdef VDBE_PROFILE + for(i=0; i<p->nOp; i++){ + p->aOp[i].cnt = 0; + p->aOp[i].cycles = 0; + } +#endif +} + +/* +** Prepare a virtual machine for execution for the first time after +** creating the virtual machine. This involves things such +** as allocating registers and initializing the program counter. +** After the VDBE has be prepped, it can be executed by one or more +** calls to sqlite3VdbeExec(). +** +** This function may be called exactly once on each virtual machine. +** After this routine is called the VM has been "packaged" and is ready +** to run. 
After this routine is called, further calls to +** sqlite3VdbeAddOp() functions are prohibited. This routine disconnects +** the Vdbe from the Parse object that helped generate it so that the +** the Vdbe becomes an independent entity and the Parse object can be +** destroyed. +** +** Use the sqlite3VdbeRewind() procedure to restore a virtual machine back +** to its initial state after it has been run. +*/ +SQLITE_PRIVATE void sqlite3VdbeMakeReady( + Vdbe *p, /* The VDBE */ + Parse *pParse /* Parsing context */ +){ + sqlite3 *db; /* The database connection */ + int nVar; /* Number of parameters */ + int nMem; /* Number of VM memory registers */ + int nCursor; /* Number of cursors required */ + int nArg; /* Number of arguments in subprograms */ + int nOnce; /* Number of OP_Once instructions */ + int n; /* Loop counter */ + struct ReusableSpace x; /* Reusable bulk memory */ + + assert( p!=0 ); + assert( p->nOp>0 ); + assert( pParse!=0 ); + assert( p->magic==VDBE_MAGIC_INIT ); + assert( pParse==p->pParse ); + db = p->db; + assert( db->mallocFailed==0 ); + nVar = pParse->nVar; + nMem = pParse->nMem; + nCursor = pParse->nTab; + nArg = pParse->nMaxArg; + nOnce = pParse->nOnce; + if( nOnce==0 ) nOnce = 1; /* Ensure at least one byte in p->aOnceFlag[] */ + /* For each cursor required, also allocate a memory cell. Memory ** cells (nMem+1-nCursor)..nMem, inclusive, will never be used by - ** the vdbe program. Instead they are used to allocate space for + ** the vdbe program. Instead they are used to allocate memory for ** VdbeCursor/BtCursor structures. The blob of memory associated with ** cursor 0 is stored in memory cell nMem. Memory cell (nMem-1) ** stores the blob of memory associated with cursor 1, etc. ** ** See also: allocateCursor(). */ nMem += nCursor; - /* Allocate space for memory registers, SQL variables, VDBE cursors and - ** an array to marshal SQL function arguments in. This is only done the - ** first time this function is called for a given VDBE, not when it is - ** being called from sqlite3_reset() to reset the virtual machine. - */ - if( nVar>=0 && ALWAYS(db->mallocFailed==0) ){ - u8 *zCsr = (u8 *)&p->aOp[p->nOp]; /* Memory avaliable for alloation */ - u8 *zEnd = (u8 *)&p->aOp[p->nOpAlloc]; /* First byte past available mem */ - int nByte; /* How much extra memory needed */ - - resolveP2Values(p, &nArg); - p->usesStmtJournal = (u8)usesStmtJournal; - if( isExplain && nMem<10 ){ - nMem = 10; - } - memset(zCsr, 0, zEnd-zCsr); - zCsr += (zCsr - (u8*)0)&7; - assert( EIGHT_BYTE_ALIGNMENT(zCsr) ); - - /* Memory for registers, parameters, cursor, etc, is allocated in two - ** passes. On the first pass, we try to reuse unused space at the - ** end of the opcode array. If we are unable to satisfy all memory - ** requirements by reusing the opcode array tail, then the second - ** pass will fill in the rest using a fresh allocation. - ** - ** This two-pass approach that reuses as much memory as possible from - ** the leftover space at the end of the opcode array can significantly - ** reduce the amount of memory held by a prepared statement. 
- */ - do { - nByte = 0; - p->aMem = allocSpace(p->aMem, nMem*sizeof(Mem), &zCsr, zEnd, &nByte); - p->aVar = allocSpace(p->aVar, nVar*sizeof(Mem), &zCsr, zEnd, &nByte); - p->apArg = allocSpace(p->apArg, nArg*sizeof(Mem*), &zCsr, zEnd, &nByte); - p->azVar = allocSpace(p->azVar, nVar*sizeof(char*), &zCsr, zEnd, &nByte); - p->apCsr = allocSpace(p->apCsr, nCursor*sizeof(VdbeCursor*), - &zCsr, zEnd, &nByte); - if( nByte ){ - p->pFree = sqlite3DbMallocZero(db, nByte); - } - zCsr = p->pFree; - zEnd = &zCsr[nByte]; - }while( nByte && !db->mallocFailed ); - - p->nCursor = (u16)nCursor; - if( p->aVar ){ - p->nVar = (ynVar)nVar; - for(n=0; n<nVar; n++){ - p->aVar[n].flags = MEM_Null; - p->aVar[n].db = db; - } - } - if( p->aMem ){ - p->aMem--; /* aMem[] goes from 1..nMem */ - p->nMem = nMem; /* not from 0..nMem-1 */ - for(n=1; n<=nMem; n++){ - p->aMem[n].flags = MEM_Null; - p->aMem[n].db = db; - } - } - } -#ifdef SQLITE_DEBUG - for(n=1; n<p->nMem; n++){ - assert( p->aMem[n].db==db ); - } -#endif - - p->pc = -1; - p->rc = SQLITE_OK; - p->errorAction = OE_Abort; - p->explain |= isExplain; - p->magic = VDBE_MAGIC_RUN; - p->nChange = 0; - p->cacheCtr = 1; - p->minWriteFileFormat = 255; - p->iStatement = 0; -#ifdef VDBE_PROFILE - { - int i; - for(i=0; i<p->nOp; i++){ - p->aOp[i].cnt = 0; - p->aOp[i].cycles = 0; - } - } -#endif + /* Figure out how much reusable memory is available at the end of the + ** opcode array. This extra memory will be reallocated for other elements + ** of the prepared statement. + */ + n = ROUND8(sizeof(Op)*p->nOp); /* Bytes of opcode memory used */ + x.pSpace = &((u8*)p->aOp)[n]; /* Unused opcode memory */ + assert( EIGHT_BYTE_ALIGNMENT(x.pSpace) ); + x.nFree = ROUNDDOWN8(pParse->szOpAlloc - n); /* Bytes of unused memory */ + assert( x.nFree>=0 ); + if( x.nFree>0 ){ + memset(x.pSpace, 0, x.nFree); + assert( EIGHT_BYTE_ALIGNMENT(&x.pSpace[x.nFree]) ); + } + + resolveP2Values(p, &nArg); + p->usesStmtJournal = (u8)(pParse->isMultiWrite && pParse->mayAbort); + if( pParse->explain && nMem<10 ){ + nMem = 10; + } + p->expired = 0; + + /* Memory for registers, parameters, cursor, etc, is allocated in one or two + ** passes. On the first pass, we try to reuse unused memory at the + ** end of the opcode array. If we are unable to satisfy all memory + ** requirements by reusing the opcode array tail, then the second + ** pass will fill in the remainder using a fresh memory allocation. + ** + ** This two-pass approach that reuses as much memory as possible from + ** the leftover memory at the end of the opcode array. This can significantly + ** reduce the amount of memory held by a prepared statement. 
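+  **
+  ** Worked example (illustrative numbers): if the opcode array leaves 240
+  ** free bytes and the statement needs buffers of 160, 96, and 64 bytes,
+  ** the first pass satisfies the 160- and 64-byte requests from the tail
+  ** (leaving 16 free bytes) and records nNeeded==96; the second pass then
+  ** makes a single fresh 96-byte allocation for the remaining buffer.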
+ */ + do { + x.nNeeded = 0; + p->aMem = allocSpace(&x, p->aMem, nMem*sizeof(Mem)); + p->aVar = allocSpace(&x, p->aVar, nVar*sizeof(Mem)); + p->apArg = allocSpace(&x, p->apArg, nArg*sizeof(Mem*)); + p->apCsr = allocSpace(&x, p->apCsr, nCursor*sizeof(VdbeCursor*)); + p->aOnceFlag = allocSpace(&x, p->aOnceFlag, nOnce); +#ifdef SQLITE_ENABLE_STMT_SCANSTATUS + p->anExec = allocSpace(&x, p->anExec, p->nOp*sizeof(i64)); +#endif + if( x.nNeeded==0 ) break; + x.pSpace = p->pFree = sqlite3DbMallocZero(db, x.nNeeded); + x.nFree = x.nNeeded; + }while( !db->mallocFailed ); + + p->nCursor = nCursor; + p->nOnceFlag = nOnce; + if( p->aVar ){ + p->nVar = (ynVar)nVar; + for(n=0; n<nVar; n++){ + p->aVar[n].flags = MEM_Null; + p->aVar[n].db = db; + } + } + p->nzVar = pParse->nzVar; + p->azVar = pParse->azVar; + pParse->nzVar = 0; + pParse->azVar = 0; + if( p->aMem ){ + p->aMem--; /* aMem[] goes from 1..nMem */ + p->nMem = nMem; /* not from 0..nMem-1 */ + for(n=1; n<=nMem; n++){ + p->aMem[n].flags = MEM_Undefined; + p->aMem[n].db = db; + } + } + p->explain = pParse->explain; + sqlite3VdbeRewind(p); } /* ** Close a VDBE cursor and release all the resources that cursor ** happens to hold. @@ -50020,43 +69739,78 @@ */ SQLITE_PRIVATE void sqlite3VdbeFreeCursor(Vdbe *p, VdbeCursor *pCx){ if( pCx==0 ){ return; } - if( pCx->pBt ){ - sqlite3BtreeClose(pCx->pBt); - /* The pCx->pCursor will be close automatically, if it exists, by - ** the call above. */ - }else if( pCx->pCursor ){ - sqlite3BtreeCloseCursor(pCx->pCursor); - } + assert( pCx->pBt==0 || pCx->eCurType==CURTYPE_BTREE ); + switch( pCx->eCurType ){ + case CURTYPE_SORTER: { + sqlite3VdbeSorterClose(p->db, pCx); + break; + } + case CURTYPE_BTREE: { + if( pCx->pBt ){ + sqlite3BtreeClose(pCx->pBt); + /* The pCx->pCursor will be close automatically, if it exists, by + ** the call above. */ + }else{ + assert( pCx->uc.pCursor!=0 ); + sqlite3BtreeCloseCursor(pCx->uc.pCursor); + } + break; + } #ifndef SQLITE_OMIT_VIRTUALTABLE - if( pCx->pVtabCursor ){ - sqlite3_vtab_cursor *pVtabCursor = pCx->pVtabCursor; - const sqlite3_module *pModule = pCx->pModule; - p->inVtabMethod = 1; - pModule->xClose(pVtabCursor); - p->inVtabMethod = 0; - } -#endif + case CURTYPE_VTAB: { + sqlite3_vtab_cursor *pVCur = pCx->uc.pVCur; + const sqlite3_module *pModule = pVCur->pVtab->pModule; + assert( pVCur->pVtab->nRef>0 ); + pVCur->pVtab->nRef--; + pModule->xClose(pVCur); + break; + } +#endif + } +} + +/* +** Close all cursors in the current frame. +*/ +static void closeCursorsInFrame(Vdbe *p){ + if( p->apCsr ){ + int i; + for(i=0; i<p->nCursor; i++){ + VdbeCursor *pC = p->apCsr[i]; + if( pC ){ + sqlite3VdbeFreeCursor(p, pC); + p->apCsr[i] = 0; + } + } + } } /* ** Copy the values stored in the VdbeFrame structure to its Vdbe. This ** is used, for example, when a trigger sub-program is halted to restore ** control to the main program. */ SQLITE_PRIVATE int sqlite3VdbeFrameRestore(VdbeFrame *pFrame){ Vdbe *v = pFrame->v; + closeCursorsInFrame(v); +#ifdef SQLITE_ENABLE_STMT_SCANSTATUS + v->anExec = pFrame->anExec; +#endif + v->aOnceFlag = pFrame->aOnceFlag; + v->nOnceFlag = pFrame->nOnceFlag; v->aOp = pFrame->aOp; v->nOp = pFrame->nOp; v->aMem = pFrame->aMem; v->nMem = pFrame->nMem; v->apCsr = pFrame->apCsr; v->nCursor = pFrame->nCursor; v->db->lastRowid = pFrame->lastRowid; v->nChange = pFrame->nChange; + v->db->nChange = pFrame->nDbChange; return pFrame->pc; } /* ** Close all cursors. 
@@ -50066,48 +69820,46 @@ ** pointers to VdbeFrame objects, which may in turn contain pointers to ** open cursors. */ static void closeAllCursors(Vdbe *p){ if( p->pFrame ){ - VdbeFrame *pFrame = p->pFrame; + VdbeFrame *pFrame; for(pFrame=p->pFrame; pFrame->pParent; pFrame=pFrame->pParent); sqlite3VdbeFrameRestore(pFrame); - } - p->pFrame = 0; - p->nFrame = 0; - - if( p->apCsr ){ - int i; - for(i=0; i<p->nCursor; i++){ - VdbeCursor *pC = p->apCsr[i]; - if( pC ){ - sqlite3VdbeFreeCursor(p, pC); - p->apCsr[i] = 0; - } - } - } + p->pFrame = 0; + p->nFrame = 0; + } + assert( p->nFrame==0 ); + closeCursorsInFrame(p); if( p->aMem ){ releaseMemArray(&p->aMem[1], p->nMem); } + while( p->pDelFrame ){ + VdbeFrame *pDel = p->pDelFrame; + p->pDelFrame = pDel->pParent; + sqlite3VdbeFrameDelete(pDel); + } + + /* Delete any auxdata allocations made by the VM */ + if( p->pAuxData ) sqlite3VdbeDeleteAuxData(p, -1, 0); + assert( p->pAuxData==0 ); } /* -** Clean up the VM after execution. -** -** This routine will automatically close any cursors, lists, and/or -** sorters that were left open. It also deletes the values of -** variables in the aVar[] array. +** Clean up the VM after a single run. */ static void Cleanup(Vdbe *p){ sqlite3 *db = p->db; #ifdef SQLITE_DEBUG /* Execute assert() statements to ensure that the Vdbe.apCsr[] and ** Vdbe.aMem[] arrays have already been cleaned up. */ int i; - for(i=0; i<p->nCursor; i++) assert( p->apCsr==0 || p->apCsr[i]==0 ); - for(i=1; i<=p->nMem; i++) assert( p->aMem==0 || p->aMem[i].flags==MEM_Null ); + if( p->apCsr ) for(i=0; i<p->nCursor; i++) assert( p->apCsr[i]==0 ); + if( p->aMem ){ + for(i=1; i<=p->nMem; i++) assert( p->aMem[i].flags==MEM_Undefined ); + } #endif sqlite3DbFree(db, p->zErrMsg); p->zErrMsg = 0; p->pResultSet = 0; @@ -50192,34 +69944,37 @@ ** virtual module tables written in this transaction. This has to ** be done before determining whether a master journal file is ** required, as an xSync() callback may add an attached database ** to the transaction. */ - rc = sqlite3VtabSync(db, &p->zErrMsg); - if( rc!=SQLITE_OK ){ - return rc; - } + rc = sqlite3VtabSync(db, p); /* This loop determines (a) if the commit hook should be invoked and ** (b) how many database files have open write transactions, not ** including the temp database. (b) is important because if more than ** one database file has an open write transaction, a master journal ** file is required for an atomic commit. */ - for(i=0; i<db->nDb; i++){ + for(i=0; rc==SQLITE_OK && i<db->nDb; i++){ Btree *pBt = db->aDb[i].pBt; if( sqlite3BtreeIsInTrans(pBt) ){ needXcommit = 1; if( i!=1 ) nTrans++; + sqlite3BtreeEnter(pBt); + rc = sqlite3PagerExclusiveLock(sqlite3BtreePager(pBt)); + sqlite3BtreeLeave(pBt); } + } + if( rc!=SQLITE_OK ){ + return rc; } /* If there are any write-transactions at all, invoke the commit hook */ if( needXcommit && db->xCommitCallback ){ rc = db->xCommitCallback(db->pCommitArg); if( rc ){ - return SQLITE_CONSTRAINT; + return SQLITE_CONSTRAINT_COMMITHOOK; } } /* The simple case - no more than one database file (not counting the ** TEMP database) has a transaction active. There is no need for the @@ -50246,21 +70001,21 @@ ** but could happen. In this case abandon processing and return the error. 
*/ for(i=0; rc==SQLITE_OK && i<db->nDb; i++){ Btree *pBt = db->aDb[i].pBt; if( pBt ){ - rc = sqlite3BtreeCommitPhaseTwo(pBt); + rc = sqlite3BtreeCommitPhaseTwo(pBt, 0); } } if( rc==SQLITE_OK ){ sqlite3VtabCommit(db); } } /* The complex case - There is a multi-file write-transaction active. ** This requires a master journal file to ensure the transaction is - ** committed atomicly. + ** committed atomically. */ #ifndef SQLITE_OMIT_DISKIO else{ sqlite3_vfs *pVfs = db->pVfs; int needSync = 0; @@ -50267,20 +70022,36 @@ char *zMaster = 0; /* File-name for the master journal */ char const *zMainFile = sqlite3BtreeGetFilename(db->aDb[0].pBt); sqlite3_file *pMaster = 0; i64 offset = 0; int res; + int retryCount = 0; + int nMainFile; /* Select a master journal file name */ + nMainFile = sqlite3Strlen30(zMainFile); + zMaster = sqlite3MPrintf(db, "%s-mjXXXXXX9XXz", zMainFile); + if( zMaster==0 ) return SQLITE_NOMEM; do { u32 iRandom; - sqlite3DbFree(db, zMaster); + if( retryCount ){ + if( retryCount>100 ){ + sqlite3_log(SQLITE_FULL, "MJ delete: %s", zMaster); + sqlite3OsDelete(pVfs, zMaster, 0); + break; + }else if( retryCount==1 ){ + sqlite3_log(SQLITE_FULL, "MJ collide: %s", zMaster); + } + } + retryCount++; sqlite3_randomness(sizeof(iRandom), &iRandom); - zMaster = sqlite3MPrintf(db, "%s-mj%08X", zMainFile, iRandom&0x7fffffff); - if( !zMaster ){ - return SQLITE_NOMEM; - } + sqlite3_snprintf(13, &zMaster[nMainFile], "-mj%06X9%02X", + (iRandom>>8)&0xffffff, iRandom&0xff); + /* The antipenultimate character of the master journal name must + ** be "9" to avoid name collisions when using 8+3 filenames. */ + assert( zMaster[sqlite3Strlen30(zMaster)-3]=='9' ); + sqlite3FileSuffix3(zMainFile, zMaster); rc = sqlite3OsAccess(pVfs, zMaster, SQLITE_ACCESS_EXISTS, &res); }while( rc==SQLITE_OK && res ); if( rc==SQLITE_OK ){ /* Open the master journal. */ rc = sqlite3OsOpenMalloc(pVfs, zMaster, &pMaster, @@ -50301,13 +70072,14 @@ */ for(i=0; i<db->nDb; i++){ Btree *pBt = db->aDb[i].pBt; if( sqlite3BtreeIsInTrans(pBt) ){ char const *zFile = sqlite3BtreeGetJournalname(pBt); - if( zFile==0 || zFile[0]==0 ){ + if( zFile==0 ){ continue; /* Ignore TEMP and :memory: databases */ } + assert( zFile[0]!=0 ); if( !needSync && !sqlite3BtreeSyncDisabled(pBt) ){ needSync = 1; } rc = sqlite3OsWrite(pMaster, zFile, sqlite3Strlen30(zFile)+1, offset); offset += sqlite3Strlen30(zFile)+1; @@ -50348,20 +70120,21 @@ if( pBt ){ rc = sqlite3BtreeCommitPhaseOne(pBt, zMaster); } } sqlite3OsCloseFree(pMaster); + assert( rc!=SQLITE_BUSY ); if( rc!=SQLITE_OK ){ sqlite3DbFree(db, zMaster); return rc; } /* Delete the master journal file. This commits the transaction. After ** doing this the directory is synced again before any individual ** transaction files are deleted. */ - rc = sqlite3OsDelete(pVfs, zMaster, 1); + rc = sqlite3OsDelete(pVfs, zMaster, needSync); sqlite3DbFree(db, zMaster); zMaster = 0; if( rc ){ return rc; } @@ -50376,11 +70149,11 @@ disable_simulated_io_errors(); sqlite3BeginBenignMalloc(); for(i=0; i<db->nDb; i++){ Btree *pBt = db->aDb[i].pBt; if( pBt ){ - sqlite3BtreeCommitPhaseTwo(pBt); + sqlite3BtreeCommitPhaseTwo(pBt, 1); } } sqlite3EndBenignMalloc(); enable_simulated_io_errors(); @@ -50390,11 +70163,11 @@ return rc; } /* -** This routine checks that the sqlite3.activeVdbeCnt count variable +** This routine checks that the sqlite3.nVdbeActive count variable ** matches the number of vdbe's in the list sqlite3.pVdbe that are ** currently active. An assertion fails if the two counts do not match. 
** This is an internal self-check only - it is not an essential processing ** step. ** @@ -50403,57 +70176,34 @@ #ifndef NDEBUG static void checkActiveVdbeCnt(sqlite3 *db){ Vdbe *p; int cnt = 0; int nWrite = 0; + int nRead = 0; p = db->pVdbe; while( p ){ - if( p->magic==VDBE_MAGIC_RUN && p->pc>=0 ){ + if( sqlite3_stmt_busy((sqlite3_stmt*)p) ){ cnt++; if( p->readOnly==0 ) nWrite++; + if( p->bIsReader ) nRead++; } p = p->pNext; } - assert( cnt==db->activeVdbeCnt ); - assert( nWrite==db->writeVdbeCnt ); + assert( cnt==db->nVdbeActive ); + assert( nWrite==db->nVdbeWrite ); + assert( nRead==db->nVdbeRead ); } #else #define checkActiveVdbeCnt(x) #endif -/* -** For every Btree that in database connection db which -** has been modified, "trip" or invalidate each cursor in -** that Btree might have been modified so that the cursor -** can never be used again. This happens when a rollback -*** occurs. We have to trip all the other cursors, even -** cursor from other VMs in different database connections, -** so that none of them try to use the data at which they -** were pointing and which now may have been changed due -** to the rollback. -** -** Remember that a rollback can delete tables complete and -** reorder rootpages. So it is not sufficient just to save -** the state of the cursor. We have to invalidate the cursor -** so that it is never used again. -*/ -static void invalidateCursorsOnModifiedBtrees(sqlite3 *db){ - int i; - for(i=0; i<db->nDb; i++){ - Btree *p = db->aDb[i].pBt; - if( p && sqlite3BtreeIsInTrans(p) ){ - sqlite3BtreeTripAllCursors(p, SQLITE_ABORT); - } - } -} - /* ** If the Vdbe passed as the first argument opened a statement-transaction, ** close it now. Argument eOp must be either SAVEPOINT_ROLLBACK or ** SAVEPOINT_RELEASE. If it is SAVEPOINT_ROLLBACK, then the statement ** transaction is rolled back. If eOp is SAVEPOINT_RELEASE, then the -** statement transaction is commtted. +** statement transaction is committed. ** ** If an IO error occurs, an SQLITE_IOERR_XXX error code is returned. ** Otherwise SQLITE_OK. */ SQLITE_PRIVATE int sqlite3VdbeCloseStatement(Vdbe *p, int eOp){ @@ -50460,11 +70210,11 @@ sqlite3 *const db = p->db; int rc = SQLITE_OK; /* If p->iStatement is greater than zero, then this Vdbe opened a ** statement transaction that should be closed here. The only exception - ** is that an IO error may have occured, causing an emergency rollback. + ** is that an IO error may have occurred, causing an emergency rollback. ** In this case (db->nStatement==0), and there is nothing to do. */ if( db->nStatement && p->iStatement ){ int i; const int iSavepoint = p->iStatement-1; @@ -50488,65 +70238,50 @@ } } } db->nStatement--; p->iStatement = 0; + + if( rc==SQLITE_OK ){ + if( eOp==SAVEPOINT_ROLLBACK ){ + rc = sqlite3VtabSavepoint(db, SAVEPOINT_ROLLBACK, iSavepoint); + } + if( rc==SQLITE_OK ){ + rc = sqlite3VtabSavepoint(db, SAVEPOINT_RELEASE, iSavepoint); + } + } /* If the statement transaction is being rolled back, also restore the ** database handles deferred constraint counter to the value it had when ** the statement transaction was opened. */ if( eOp==SAVEPOINT_ROLLBACK ){ db->nDeferredCons = p->nStmtDefCons; + db->nDeferredImmCons = p->nStmtDefImmCons; } } return rc; } -/* -** If SQLite is compiled to support shared-cache mode and to be threadsafe, -** this routine obtains the mutex associated with each BtShared structure -** that may be accessed by the VM passed as an argument. 
In doing so it -** sets the BtShared.db member of each of the BtShared structures, ensuring -** that the correct busy-handler callback is invoked if required. -** -** If SQLite is not threadsafe but does support shared-cache mode, then -** sqlite3BtreeEnterAll() is invoked to set the BtShared.db variables -** of all of BtShared structures accessible via the database handle -** associated with the VM. Of course only a subset of these structures -** will be accessed by the VM, and we could use Vdbe.btreeMask to figure -** that subset out, but there is no advantage to doing so. -** -** If SQLite is not threadsafe and does not support shared-cache mode, this -** function is a no-op. -*/ -#ifndef SQLITE_OMIT_SHARED_CACHE -SQLITE_PRIVATE void sqlite3VdbeMutexArrayEnter(Vdbe *p){ -#if SQLITE_THREADSAFE - sqlite3BtreeMutexArrayEnter(&p->aMutex); -#else - sqlite3BtreeEnterAll(p->db); -#endif -} -#endif - /* ** This function is called when a transaction opened by the database ** handle associated with the VM passed as an argument is about to be ** committed. If there are outstanding deferred foreign key constraint ** violations, return SQLITE_ERROR. Otherwise, SQLITE_OK. ** ** If there are outstanding FK violations and this function returns -** SQLITE_ERROR, set the result of the VM to SQLITE_CONSTRAINT and write -** an error message to it. Then return SQLITE_ERROR. +** SQLITE_ERROR, set the result of the VM to SQLITE_CONSTRAINT_FOREIGNKEY +** and write an error message to it. Then return SQLITE_ERROR. */ #ifndef SQLITE_OMIT_FOREIGN_KEY SQLITE_PRIVATE int sqlite3VdbeCheckFk(Vdbe *p, int deferred){ sqlite3 *db = p->db; - if( (deferred && db->nDeferredCons>0) || (!deferred && p->nFkConstraint>0) ){ - p->rc = SQLITE_CONSTRAINT; + if( (deferred && (db->nDeferredCons+db->nDeferredImmCons)>0) + || (!deferred && p->nFkConstraint>0) + ){ + p->rc = SQLITE_CONSTRAINT_FOREIGNKEY; p->errorAction = OE_Abort; - sqlite3SetString(&p->zErrMsg, db, "foreign key constraint failed"); + sqlite3VdbeError(p, "FOREIGN KEY constraint failed"); return SQLITE_ERROR; } return SQLITE_OK; } #endif @@ -50582,48 +70317,58 @@ ** Then the internal cache might have been left in an inconsistent ** state. We need to rollback the statement transaction, if there is ** one, or the complete transaction if there is no statement transaction. */ - if( p->db->mallocFailed ){ + if( db->mallocFailed ){ p->rc = SQLITE_NOMEM; } + if( p->aOnceFlag ) memset(p->aOnceFlag, 0, p->nOnceFlag); closeAllCursors(p); if( p->magic!=VDBE_MAGIC_RUN ){ return SQLITE_OK; } checkActiveVdbeCnt(db); - /* No commit or rollback needed if the program never started */ - if( p->pc>=0 ){ + /* No commit or rollback needed if the program never started or if the + ** SQL statement does not read or write a database file. */ + if( p->pc>=0 && p->bIsReader ){ int mrc; /* Primary error code from p->rc */ int eStatementOp = 0; int isSpecialError; /* Set to true if a 'special' error */ /* Lock all btrees used by the statement */ - sqlite3VdbeMutexArrayEnter(p); + sqlite3VdbeEnter(p); /* Check for one of the special errors */ mrc = p->rc & 0xff; - assert( p->rc!=SQLITE_IOERR_BLOCKED ); /* This error no longer exists */ isSpecialError = mrc==SQLITE_NOMEM || mrc==SQLITE_IOERR || mrc==SQLITE_INTERRUPT || mrc==SQLITE_FULL; if( isSpecialError ){ - /* If the query was read-only, we need do no rollback at all. Otherwise, - ** proceed with the special handling. + /* If the query was read-only and the error code is SQLITE_INTERRUPT, + ** no rollback is necessary. 
Otherwise, at least a savepoint + ** transaction must be rolled back to restore the database to a + ** consistent state. + ** + ** Even if the statement is read-only, it is important to perform + ** a statement or transaction rollback operation. If the error + ** occurred while writing to the journal, sub-journal or database + ** file as part of an effort to free up cache space (see function + ** pagerStress() in pager.c), the rollback is required to restore + ** the pager to a consistent state. */ if( !p->readOnly || mrc!=SQLITE_INTERRUPT ){ if( (mrc==SQLITE_NOMEM || mrc==SQLITE_FULL) && p->usesStmtJournal ){ eStatementOp = SAVEPOINT_ROLLBACK; }else{ /* We are forced to roll back the active transaction. Before doing ** so, abort any other statements this handle currently has active. */ - invalidateCursorsOnModifiedBtrees(db); - sqlite3RollbackAll(db); + sqlite3RollbackAll(db, SQLITE_ABORT_ROLLBACK); sqlite3CloseSavepoints(db); db->autoCommit = 1; + p->nChange = 0; } } } /* Check for immediate foreign key violations. */ @@ -50637,66 +70382,76 @@ ** Note: This block also runs if one of the special errors handled ** above has occurred. */ if( !sqlite3VtabInSync(db) && db->autoCommit - && db->writeVdbeCnt==(p->readOnly==0) + && db->nVdbeWrite==(p->readOnly==0) ){ if( p->rc==SQLITE_OK || (p->errorAction==OE_Fail && !isSpecialError) ){ - if( sqlite3VdbeCheckFk(p, 1) ){ - sqlite3BtreeMutexArrayLeave(&p->aMutex); - return SQLITE_ERROR; - } - /* The auto-commit flag is true, the vdbe program was successful - ** or hit an 'OR FAIL' constraint and there are no deferred foreign - ** key constraints to hold up the transaction. This means a commit - ** is required. */ - rc = vdbeCommit(db, p); - if( rc==SQLITE_BUSY ){ - sqlite3BtreeMutexArrayLeave(&p->aMutex); + rc = sqlite3VdbeCheckFk(p, 1); + if( rc!=SQLITE_OK ){ + if( NEVER(p->readOnly) ){ + sqlite3VdbeLeave(p); + return SQLITE_ERROR; + } + rc = SQLITE_CONSTRAINT_FOREIGNKEY; + }else{ + /* The auto-commit flag is true, the vdbe program was successful + ** or hit an 'OR FAIL' constraint and there are no deferred foreign + ** key constraints to hold up the transaction. This means a commit + ** is required. */ + rc = vdbeCommit(db, p); + } + if( rc==SQLITE_BUSY && p->readOnly ){ + sqlite3VdbeLeave(p); return SQLITE_BUSY; }else if( rc!=SQLITE_OK ){ p->rc = rc; - sqlite3RollbackAll(db); + sqlite3RollbackAll(db, SQLITE_OK); + p->nChange = 0; }else{ db->nDeferredCons = 0; + db->nDeferredImmCons = 0; + db->flags &= ~SQLITE_DeferFKs; sqlite3CommitInternalChanges(db); } }else{ - sqlite3RollbackAll(db); + sqlite3RollbackAll(db, SQLITE_OK); + p->nChange = 0; } db->nStatement = 0; }else if( eStatementOp==0 ){ if( p->rc==SQLITE_OK || p->errorAction==OE_Fail ){ eStatementOp = SAVEPOINT_RELEASE; }else if( p->errorAction==OE_Abort ){ eStatementOp = SAVEPOINT_ROLLBACK; }else{ - invalidateCursorsOnModifiedBtrees(db); - sqlite3RollbackAll(db); + sqlite3RollbackAll(db, SQLITE_ABORT_ROLLBACK); sqlite3CloseSavepoints(db); db->autoCommit = 1; + p->nChange = 0; } } /* If eStatementOp is non-zero, then a statement transaction needs to ** be committed or rolled back. Call sqlite3VdbeCloseStatement() to ** do so. If this operation returns an error, and the current statement ** error code is SQLITE_OK or SQLITE_CONSTRAINT, then promote the ** current statement error code. - ** - ** Note that sqlite3VdbeCloseStatement() can only fail if eStatementOp - ** is SAVEPOINT_ROLLBACK. But if p->rc==SQLITE_OK then eStatementOp - ** must be SAVEPOINT_RELEASE. 
Hence the NEVER(p->rc==SQLITE_OK) in - ** the following code. */ if( eStatementOp ){ rc = sqlite3VdbeCloseStatement(p, eStatementOp); - if( rc && (NEVER(p->rc==SQLITE_OK) || p->rc==SQLITE_CONSTRAINT) ){ - p->rc = rc; - sqlite3DbFree(db, p->zErrMsg); - p->zErrMsg = 0; + if( rc ){ + if( p->rc==SQLITE_OK || (p->rc&0xff)==SQLITE_CONSTRAINT ){ + p->rc = rc; + sqlite3DbFree(db, p->zErrMsg); + p->zErrMsg = 0; + } + sqlite3RollbackAll(db, SQLITE_ABORT_ROLLBACK); + sqlite3CloseSavepoints(db); + db->autoCommit = 1; + p->nChange = 0; } } /* If this was an INSERT, UPDATE or DELETE and no statement transaction ** has been rolled back, update the database connection change-counter. @@ -50707,32 +70462,27 @@ }else{ sqlite3VdbeSetChanges(db, 0); } p->nChange = 0; } - - /* Rollback or commit any schema changes that occurred. */ - if( p->rc!=SQLITE_OK && db->flags&SQLITE_InternChanges ){ - sqlite3ResetInternalSchema(db, 0); - db->flags = (db->flags | SQLITE_InternChanges); - } /* Release the locks */ - sqlite3BtreeMutexArrayLeave(&p->aMutex); + sqlite3VdbeLeave(p); } /* We have successfully halted and closed the VM. Record this fact. */ if( p->pc>=0 ){ - db->activeVdbeCnt--; - if( !p->readOnly ){ - db->writeVdbeCnt--; - } - assert( db->activeVdbeCnt>=db->writeVdbeCnt ); + db->nVdbeActive--; + if( !p->readOnly ) db->nVdbeWrite--; + if( p->bIsReader ) db->nVdbeRead--; + assert( db->nVdbeActive>=db->nVdbeRead ); + assert( db->nVdbeRead>=db->nVdbeWrite ); + assert( db->nVdbeWrite>=0 ); } p->magic = VDBE_MAGIC_HALT; checkActiveVdbeCnt(db); - if( p->db->mallocFailed ){ + if( db->mallocFailed ){ p->rc = SQLITE_NOMEM; } /* If the auto-commit flag is set to true, then any locks that were held ** by connection db have now been released. Call sqlite3ConnectionUnlocked() @@ -50740,12 +70490,12 @@ */ if( db->autoCommit ){ sqlite3ConnectionUnlocked(db); } - assert( db->activeVdbeCnt>0 || db->autoCommit==0 || db->nStatement==0 ); - return SQLITE_OK; + assert( db->nVdbeActive>0 || db->autoCommit==0 || db->nStatement==0 ); + return (p->rc==SQLITE_BUSY ? SQLITE_BUSY : SQLITE_OK); } /* ** Each VDBE holds the result of the most recent sqlite3_step() call @@ -50753,10 +70503,56 @@ */ SQLITE_PRIVATE void sqlite3VdbeResetStepResult(Vdbe *p){ p->rc = SQLITE_OK; } +/* +** Copy the error code and error message belonging to the VDBE passed +** as the first argument to its database handle (so that they will be +** returned by calls to sqlite3_errcode() and sqlite3_errmsg()). +** +** This function does not clear the VDBE error code or message, just +** copies them to the database handle. +*/ +SQLITE_PRIVATE int sqlite3VdbeTransferError(Vdbe *p){ + sqlite3 *db = p->db; + int rc = p->rc; + if( p->zErrMsg ){ + db->bBenignMalloc++; + sqlite3BeginBenignMalloc(); + if( db->pErr==0 ) db->pErr = sqlite3ValueNew(db); + sqlite3ValueSetStr(db->pErr, -1, p->zErrMsg, SQLITE_UTF8, SQLITE_TRANSIENT); + sqlite3EndBenignMalloc(); + db->bBenignMalloc--; + db->errCode = rc; + }else{ + sqlite3Error(db, rc); + } + return rc; +} + +#ifdef SQLITE_ENABLE_SQLLOG +/* +** If an SQLITE_CONFIG_SQLLOG hook is registered and the VM has been run, +** invoke it. 
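
sqlite3VdbeTransferError(), introduced above, is the step that makes a statement's failure visible through the connection-level sqlite3_errcode()/sqlite3_errmsg() accessors. A small sketch of that observable behaviour, assuming a statement prepared with sqlite3_prepare_v2() and an invented table with a UNIQUE column:

  #include <sqlite3.h>
  #include <stdio.h>

  int main(void){
    sqlite3 *db;
    sqlite3_stmt *pStmt;
    int rc;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE t(x UNIQUE); INSERT INTO t VALUES(1);",
                 0, 0, 0);
    sqlite3_prepare_v2(db, "INSERT INTO t VALUES(1)", -1, &pStmt, 0);

    rc = sqlite3_step(pStmt);   /* fails: UNIQUE constraint violated */
    /* With the v2 prepare interface the statement's error code and
    ** message are propagated to the connection, so the handle-level
    ** accessors report the same failure. */
    printf("step rc=%d errcode=%d msg=%s\n",
           rc, sqlite3_errcode(db), sqlite3_errmsg(db));

    sqlite3_finalize(pStmt);
    sqlite3_close(db);
    return 0;
  }
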
+*/ +static void vdbeInvokeSqllog(Vdbe *v){ + if( sqlite3GlobalConfig.xSqllog && v->rc==SQLITE_OK && v->zSql && v->pc>=0 ){ + char *zExpanded = sqlite3VdbeExpandSql(v, v->zSql); + assert( v->db->init.busy==0 ); + if( zExpanded ){ + sqlite3GlobalConfig.xSqllog( + sqlite3GlobalConfig.pSqllogArg, v->db, zExpanded, 1 + ); + sqlite3DbFree(v->db, zExpanded); + } + } +} +#else +# define vdbeInvokeSqllog(x) +#endif + /* ** Clean up a VDBE after execution but do not delete the VDBE just yet. ** Write any error messages into *pzErrMsg. Return the result code. ** ** After this routine is run, the VDBE should be ready to be executed @@ -50780,30 +70576,21 @@ ** and error message from the VDBE into the main database structure. But ** if the VDBE has just been set to run but has not actually executed any ** instructions yet, leave the main database error information unchanged. */ if( p->pc>=0 ){ - if( p->zErrMsg ){ - sqlite3BeginBenignMalloc(); - sqlite3ValueSetStr(db->pErr,-1,p->zErrMsg,SQLITE_UTF8,SQLITE_TRANSIENT); - sqlite3EndBenignMalloc(); - db->errCode = p->rc; - sqlite3DbFree(db, p->zErrMsg); - p->zErrMsg = 0; - }else if( p->rc ){ - sqlite3Error(db, p->rc, 0); - }else{ - sqlite3Error(db, SQLITE_OK, 0); - } + vdbeInvokeSqllog(p); + sqlite3VdbeTransferError(p); + sqlite3DbFree(db, p->zErrMsg); + p->zErrMsg = 0; if( p->runOnlyOnce ) p->expired = 1; }else if( p->rc && p->expired ){ /* The expired flag was set on the VDBE before the first call ** to sqlite3_step(). For consistency (since sqlite3_step() was ** called), set the database error in this case as well. */ - sqlite3Error(db, p->rc, 0); - sqlite3ValueSetStr(db->pErr, -1, p->zErrMsg, SQLITE_UTF8, SQLITE_TRANSIENT); + sqlite3ErrorWithMsg(db, p->rc, p->zErrMsg ? "%s" : 0, p->zErrMsg); sqlite3DbFree(db, p->zErrMsg); p->zErrMsg = 0; } /* Reclaim all memory used by the VDBE @@ -50820,22 +70607,35 @@ fprintf(out, "---- "); for(i=0; i<p->nOp; i++){ fprintf(out, "%02x", p->aOp[i].opcode); } fprintf(out, "\n"); + if( p->zSql ){ + char c, pc = 0; + fprintf(out, "-- "); + for(i=0; (c = p->zSql[i])!=0; i++){ + if( pc=='\n' ) fprintf(out, "-- "); + putc(c, out); + pc = c; + } + if( pc!='\n' ) fprintf(out, "\n"); + } for(i=0; i<p->nOp; i++){ - fprintf(out, "%6d %10lld %8lld ", + char zHdr[100]; + sqlite3_snprintf(sizeof(zHdr), zHdr, "%6u %12llu %8llu ", p->aOp[i].cnt, p->aOp[i].cycles, p->aOp[i].cnt>0 ? p->aOp[i].cycles/p->aOp[i].cnt : 0 ); + fprintf(out, "%s", zHdr); sqlite3VdbePrintOp(out, i, &p->aOp[i]); } fclose(out); } } #endif + p->iCurrentTime = 0; p->magic = VDBE_MAGIC_INIT; return p->rc & db->errMask; } /* @@ -50851,56 +70651,154 @@ sqlite3VdbeDelete(p); return rc; } /* -** Call the destructor for each auxdata entry in pVdbeFunc for which -** the corresponding bit in mask is clear. Auxdata entries beyond 31 -** are always destroyed. To destroy all auxdata entries, call this -** routine with mask==0. +** If parameter iOp is less than zero, then invoke the destructor for +** all auxiliary data pointers currently cached by the VM passed as +** the first argument. +** +** Or, if iOp is greater than or equal to zero, then the destructor is +** only invoked for those auxiliary data pointers created by the user +** function invoked by the OP_Function opcode at instruction iOp of +** VM pVdbe, and only then if: +** +** * the associated function parameter is the 32nd or later (counting +** from left to right), or +** +** * the corresponding bit in argument mask is clear (where the first +** function parameter corresponds to bit 0 etc.). 
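
The rewritten sqlite3VdbeDeleteAuxData() above tears down the cache behind sqlite3_get_auxdata()/sqlite3_set_auxdata(). A rough sketch of how an application-defined SQL function typically uses that cache; the scaled() function, the table, and the choice to cache a plain integer are all invented for illustration (real code would cache something expensive, such as a compiled regular expression):

  #include <sqlite3.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Toy SQL function scaled(K,X) returning K*X.  The integer value of K is
  ** cached with sqlite3_set_auxdata() so that, when K is a constant in the
  ** SQL text, it is converted only once per statement instead of once per
  ** row.  free() is the destructor the VDBE later invokes when the cached
  ** value is discarded. */
  static void scaledFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    int *pK = (int*)sqlite3_get_auxdata(ctx, 0);
    (void)argc;
    if( pK==0 ){
      pK = (int*)malloc(sizeof(int));
      if( pK==0 ){ sqlite3_result_error_nomem(ctx); return; }
      *pK = sqlite3_value_int(argv[0]);
      sqlite3_set_auxdata(ctx, 0, pK, free);
    }
    sqlite3_result_int(ctx, (*pK) * sqlite3_value_int(argv[1]));
  }

  int main(void){
    sqlite3 *db;
    sqlite3_stmt *pStmt;
    sqlite3_open(":memory:", &db);
    sqlite3_create_function(db, "scaled", 2, SQLITE_UTF8, 0, scaledFunc, 0, 0);
    sqlite3_exec(db, "CREATE TABLE t(x); INSERT INTO t VALUES(1),(2),(3);",
                 0, 0, 0);
    sqlite3_prepare_v2(db, "SELECT scaled(10, x) FROM t", -1, &pStmt, 0);
    while( sqlite3_step(pStmt)==SQLITE_ROW ){
      printf("%d\n", sqlite3_column_int(pStmt, 0));   /* 10, 20, 30 */
    }
    sqlite3_finalize(pStmt);
    sqlite3_close(db);
    return 0;
  }
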
*/ -SQLITE_PRIVATE void sqlite3VdbeDeleteAuxData(VdbeFunc *pVdbeFunc, int mask){ - int i; - for(i=0; i<pVdbeFunc->nAux; i++){ - struct AuxData *pAux = &pVdbeFunc->apAux[i]; - if( (i>31 || !(mask&(((u32)1)<<i))) && pAux->pAux ){ +SQLITE_PRIVATE void sqlite3VdbeDeleteAuxData(Vdbe *pVdbe, int iOp, int mask){ + AuxData **pp = &pVdbe->pAuxData; + while( *pp ){ + AuxData *pAux = *pp; + if( (iOp<0) + || (pAux->iOp==iOp && (pAux->iArg>31 || !(mask & MASKBIT32(pAux->iArg)))) + ){ + testcase( pAux->iArg==31 ); if( pAux->xDelete ){ pAux->xDelete(pAux->pAux); } - pAux->pAux = 0; + *pp = pAux->pNext; + sqlite3DbFree(pVdbe->db, pAux); + }else{ + pp= &pAux->pNext; } } } + +/* +** Free all memory associated with the Vdbe passed as the second argument, +** except for object itself, which is preserved. +** +** The difference between this function and sqlite3VdbeDelete() is that +** VdbeDelete() also unlinks the Vdbe from the list of VMs associated with +** the database connection and frees the object itself. +*/ +SQLITE_PRIVATE void sqlite3VdbeClearObject(sqlite3 *db, Vdbe *p){ + SubProgram *pSub, *pNext; + int i; + assert( p->db==0 || p->db==db ); + releaseMemArray(p->aVar, p->nVar); + releaseMemArray(p->aColName, p->nResColumn*COLNAME_N); + for(pSub=p->pProgram; pSub; pSub=pNext){ + pNext = pSub->pNext; + vdbeFreeOpArray(db, pSub->aOp, pSub->nOp); + sqlite3DbFree(db, pSub); + } + for(i=p->nzVar-1; i>=0; i--) sqlite3DbFree(db, p->azVar[i]); + sqlite3DbFree(db, p->azVar); + vdbeFreeOpArray(db, p->aOp, p->nOp); + sqlite3DbFree(db, p->aColName); + sqlite3DbFree(db, p->zSql); + sqlite3DbFree(db, p->pFree); +#ifdef SQLITE_ENABLE_STMT_SCANSTATUS + for(i=0; i<p->nScan; i++){ + sqlite3DbFree(db, p->aScan[i].zName); + } + sqlite3DbFree(db, p->aScan); +#endif +} /* ** Delete an entire VDBE. */ SQLITE_PRIVATE void sqlite3VdbeDelete(Vdbe *p){ sqlite3 *db; if( NEVER(p==0) ) return; db = p->db; + assert( sqlite3_mutex_held(db->mutex) ); + sqlite3VdbeClearObject(db, p); if( p->pPrev ){ p->pPrev->pNext = p->pNext; }else{ assert( db->pVdbe==p ); db->pVdbe = p->pNext; } if( p->pNext ){ p->pNext->pPrev = p->pPrev; } - releaseMemArray(p->aVar, p->nVar); - releaseMemArray(p->aColName, p->nResColumn*COLNAME_N); - vdbeFreeOpArray(db, p->aOp, p->nOp); - sqlite3DbFree(db, p->aLabel); - sqlite3DbFree(db, p->aColName); - sqlite3DbFree(db, p->zSql); p->magic = VDBE_MAGIC_DEAD; - sqlite3DbFree(db, p->pFree); p->db = 0; sqlite3DbFree(db, p); } + +/* +** The cursor "p" has a pending seek operation that has not yet been +** carried out. Seek the cursor now. If an error occurs, return +** the appropriate error code. +*/ +static int SQLITE_NOINLINE handleDeferredMoveto(VdbeCursor *p){ + int res, rc; +#ifdef SQLITE_TEST + extern int sqlite3_search_count; +#endif + assert( p->deferredMoveto ); + assert( p->isTable ); + assert( p->eCurType==CURTYPE_BTREE ); + rc = sqlite3BtreeMovetoUnpacked(p->uc.pCursor, 0, p->movetoTarget, 0, &res); + if( rc ) return rc; + if( res!=0 ) return SQLITE_CORRUPT_BKPT; +#ifdef SQLITE_TEST + sqlite3_search_count++; +#endif + p->deferredMoveto = 0; + p->cacheStatus = CACHE_STALE; + return SQLITE_OK; +} + +/* +** Something has moved cursor "p" out of place. Maybe the row it was +** pointed to was deleted out from under it. Or maybe the btree was +** rebalanced. Whatever the cause, try to restore "p" to the place it +** is supposed to be pointing. If the row was deleted out from under the +** cursor, set the cursor to point to a NULL row. 
+*/ +static int SQLITE_NOINLINE handleMovedCursor(VdbeCursor *p){ + int isDifferentRow, rc; + assert( p->eCurType==CURTYPE_BTREE ); + assert( p->uc.pCursor!=0 ); + assert( sqlite3BtreeCursorHasMoved(p->uc.pCursor) ); + rc = sqlite3BtreeCursorRestore(p->uc.pCursor, &isDifferentRow); + p->cacheStatus = CACHE_STALE; + if( isDifferentRow ) p->nullRow = 1; + return rc; +} + +/* +** Check to ensure that the cursor is valid. Restore the cursor +** if need be. Return any I/O error from the restore operation. +*/ +SQLITE_PRIVATE int sqlite3VdbeCursorRestore(VdbeCursor *p){ + assert( p->eCurType==CURTYPE_BTREE ); + if( sqlite3BtreeCursorHasMoved(p->uc.pCursor) ){ + return handleMovedCursor(p); + } + return SQLITE_OK; +} /* ** Make sure the cursor p is ready to read or write the row to which it ** was last positioned. Return an error code if an OOM fault or I/O error ** prevents us from positioning the cursor to its correct position. @@ -50911,37 +70809,24 @@ ** a NULL row. ** ** If the cursor is already pointing to the correct row and that row has ** not been deleted out from under the cursor, then this routine is a no-op. */ -SQLITE_PRIVATE int sqlite3VdbeCursorMoveto(VdbeCursor *p){ - if( p->deferredMoveto ){ - int res, rc; -#ifdef SQLITE_TEST - extern int sqlite3_search_count; -#endif - assert( p->isTable ); - rc = sqlite3BtreeMovetoUnpacked(p->pCursor, 0, p->movetoTarget, 0, &res); - if( rc ) return rc; - p->lastRowid = p->movetoTarget; - p->rowidIsValid = ALWAYS(res==0) ?1:0; - if( NEVER(res<0) ){ - rc = sqlite3BtreeNext(p->pCursor, &res); - if( rc ) return rc; - } -#ifdef SQLITE_TEST - sqlite3_search_count++; -#endif - p->deferredMoveto = 0; - p->cacheStatus = CACHE_STALE; - }else if( ALWAYS(p->pCursor) ){ - int hasMoved; - int rc = sqlite3BtreeCursorHasMoved(p->pCursor, &hasMoved); - if( rc ) return rc; - if( hasMoved ){ - p->cacheStatus = CACHE_STALE; - p->nullRow = 1; +SQLITE_PRIVATE int sqlite3VdbeCursorMoveto(VdbeCursor **pp, int *piCol){ + VdbeCursor *p = *pp; + if( p->eCurType==CURTYPE_BTREE ){ + if( p->deferredMoveto ){ + int iMap; + if( p->aAltMap && (iMap = p->aAltMap[1+*piCol])>0 ){ + *pp = p->pAltCursor; + *piCol = iMap - 1; + return SQLITE_OK; + } + return handleDeferredMoveto(p); + } + if( sqlite3BtreeCursorHasMoved(p->uc.pCursor) ){ + return handleMovedCursor(p); } } return SQLITE_OK; } @@ -50961,11 +70846,11 @@ ** ** In an SQLite index record, the serial type is stored directly before ** the blob of data that it corresponds to. In a table record, all serial ** types are stored at the start of the record, and the blobs of data at ** the end. Hence these functions allow the caller to handle the -** serial-type and data blob seperately. +** serial-type and data blob separately. ** ** The following table describes the various storage classes for data: ** ** serial type bytes of data type ** -------------- --------------- --------------- @@ -50988,55 +70873,94 @@ */ /* ** Return the serial-type for the value stored in pMem. */ -SQLITE_PRIVATE u32 sqlite3VdbeSerialType(Mem *pMem, int file_format){ +SQLITE_PRIVATE u32 sqlite3VdbeSerialType(Mem *pMem, int file_format, u32 *pLen){ int flags = pMem->flags; - int n; + u32 n; + assert( pLen!=0 ); if( flags&MEM_Null ){ + *pLen = 0; return 0; } if( flags&MEM_Int ){ /* Figure out whether to use 1, 2, 4, 6 or 8 bytes. */ # define MAX_6BYTE ((((i64)0x00008000)<<32)-1) i64 i = pMem->u.i; u64 u; - if( file_format>=4 && (i&1)==i ){ - return 8+(u32)i; - } - u = i<0 ? 
-i : i; - if( u<=127 ) return 1; - if( u<=32767 ) return 2; - if( u<=8388607 ) return 3; - if( u<=2147483647 ) return 4; - if( u<=MAX_6BYTE ) return 5; + if( i<0 ){ + u = ~i; + }else{ + u = i; + } + if( u<=127 ){ + if( (i&1)==i && file_format>=4 ){ + *pLen = 0; + return 8+(u32)u; + }else{ + *pLen = 1; + return 1; + } + } + if( u<=32767 ){ *pLen = 2; return 2; } + if( u<=8388607 ){ *pLen = 3; return 3; } + if( u<=2147483647 ){ *pLen = 4; return 4; } + if( u<=MAX_6BYTE ){ *pLen = 6; return 5; } + *pLen = 8; return 6; } if( flags&MEM_Real ){ + *pLen = 8; return 7; } assert( pMem->db->mallocFailed || flags&(MEM_Str|MEM_Blob) ); - n = pMem->n; + assert( pMem->n>=0 ); + n = (u32)pMem->n; if( flags & MEM_Zero ){ n += pMem->u.nZero; } - assert( n>=0 ); + *pLen = n; return ((n*2) + 12 + ((flags&MEM_Str)!=0)); } + +/* +** The sizes for serial types less than 128 +*/ +static const u8 sqlite3SmallTypeSizes[] = { + /* 0 1 2 3 4 5 6 7 8 9 */ +/* 0 */ 0, 1, 2, 3, 4, 6, 8, 8, 0, 0, +/* 10 */ 0, 0, 0, 0, 1, 1, 2, 2, 3, 3, +/* 20 */ 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, +/* 30 */ 9, 9, 10, 10, 11, 11, 12, 12, 13, 13, +/* 40 */ 14, 14, 15, 15, 16, 16, 17, 17, 18, 18, +/* 50 */ 19, 19, 20, 20, 21, 21, 22, 22, 23, 23, +/* 60 */ 24, 24, 25, 25, 26, 26, 27, 27, 28, 28, +/* 70 */ 29, 29, 30, 30, 31, 31, 32, 32, 33, 33, +/* 80 */ 34, 34, 35, 35, 36, 36, 37, 37, 38, 38, +/* 90 */ 39, 39, 40, 40, 41, 41, 42, 42, 43, 43, +/* 100 */ 44, 44, 45, 45, 46, 46, 47, 47, 48, 48, +/* 110 */ 49, 49, 50, 50, 51, 51, 52, 52, 53, 53, +/* 120 */ 54, 54, 55, 55, 56, 56, 57, 57 +}; /* ** Return the length of the data corresponding to the supplied serial-type. */ SQLITE_PRIVATE u32 sqlite3VdbeSerialTypeLen(u32 serial_type){ - if( serial_type>=12 ){ + if( serial_type>=128 ){ return (serial_type-12)/2; }else{ - static const u8 aSize[] = { 0, 1, 2, 3, 4, 6, 8, 8, 0, 0, 0, 0 }; - return aSize[serial_type]; + assert( serial_type<12 + || sqlite3SmallTypeSizes[serial_type]==(serial_type - 12)/2 ); + return sqlite3SmallTypeSizes[serial_type]; } +} +SQLITE_PRIVATE u8 sqlite3VdbeOneByteSerialTypeLen(u8 serial_type){ + assert( serial_type<128 ); + return sqlite3SmallTypeSizes[serial_type]; } /* ** If we are on an architecture with mixed-endian floating ** points (ex: ARM7) then swap the lower 4 bytes with the @@ -51093,386 +71017,1034 @@ /* ** Write the serialized data blob for the value stored in pMem into ** buf. It is assumed that the caller has allocated sufficient space. ** Return the number of bytes written. ** -** nBuf is the amount of space left in buf[]. nBuf must always be -** large enough to hold the entire field. Except, if the field is -** a blob with a zero-filled tail, then buf[] might be just the right -** size to hold everything except for the zero-filled tail. If buf[] -** is only big enough to hold the non-zero prefix, then only write that -** prefix into buf[]. But if buf[] is large enough to hold both the -** prefix and the tail then write the prefix and set the tail to all -** zeros. +** nBuf is the amount of space left in buf[]. The caller is responsible +** for allocating enough space to buf[] to hold the entire field, exclusive +** of the pMem->u.nZero bytes for a MEM_Zero value. ** ** Return the number of bytes actually written into buf[]. The number ** of bytes in the zero-filled tail is included in the return value only ** if those bytes were zeroed in buf[]. 
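
The serial-type rules handled by sqlite3VdbeSerialType(), sqlite3VdbeSerialTypeLen() and sqlite3VdbeSerialPut() above are easiest to see on a concrete record. A small worked example with the values (NULL, 1000, 'hi') chosen arbitrarily; decoding each field with sqlite3VdbeSerialGet() and the listed serial types reproduces the original three values:

  /* Record image for the row (NULL, 1000, 'hi'):
  **
  **   header:  04        header size, including this varint (4 bytes)
  **            00        serial type 0  -> NULL, no data bytes
  **            02        serial type 2  -> 2-byte big-endian integer
  **            11        serial type 17 -> text, (17-13)/2 == 2 bytes
  **   data:    03 e8     1000
  **            68 69     'h', 'i'
  */
  static const unsigned char aExampleRecord[] = {
    0x04, 0x00, 0x02, 0x11,        /* header */
    0x03, 0xe8, 'h', 'i'           /* data   */
  };
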
*/ -SQLITE_PRIVATE u32 sqlite3VdbeSerialPut(u8 *buf, int nBuf, Mem *pMem, int file_format){ - u32 serial_type = sqlite3VdbeSerialType(pMem, file_format); +SQLITE_PRIVATE u32 sqlite3VdbeSerialPut(u8 *buf, Mem *pMem, u32 serial_type){ u32 len; /* Integer and Real */ if( serial_type<=7 && serial_type>0 ){ u64 v; u32 i; if( serial_type==7 ){ - assert( sizeof(v)==sizeof(pMem->r) ); - memcpy(&v, &pMem->r, sizeof(v)); + assert( sizeof(v)==sizeof(pMem->u.r) ); + memcpy(&v, &pMem->u.r, sizeof(v)); swapMixedEndianFloat(v); }else{ v = pMem->u.i; } - len = i = sqlite3VdbeSerialTypeLen(serial_type); - assert( len<=(u32)nBuf ); - while( i-- ){ - buf[i] = (u8)(v&0xFF); + len = i = sqlite3SmallTypeSizes[serial_type]; + assert( i>0 ); + do{ + buf[--i] = (u8)(v&0xFF); v >>= 8; - } + }while( i ); return len; } /* String or blob */ if( serial_type>=12 ){ assert( pMem->n + ((pMem->flags & MEM_Zero)?pMem->u.nZero:0) == (int)sqlite3VdbeSerialTypeLen(serial_type) ); - assert( pMem->n<=nBuf ); len = pMem->n; - memcpy(buf, pMem->z, len); - if( pMem->flags & MEM_Zero ){ - len += pMem->u.nZero; - assert( nBuf>=0 ); - if( len > (u32)nBuf ){ - len = (u32)nBuf; - } - memset(&buf[pMem->n], 0, len-pMem->n); - } + if( len>0 ) memcpy(buf, pMem->z, len); return len; } /* NULL or constants 0 or 1 */ return 0; } +/* Input "x" is a sequence of unsigned characters that represent a +** big-endian integer. Return the equivalent native integer +*/ +#define ONE_BYTE_INT(x) ((i8)(x)[0]) +#define TWO_BYTE_INT(x) (256*(i8)((x)[0])|(x)[1]) +#define THREE_BYTE_INT(x) (65536*(i8)((x)[0])|((x)[1]<<8)|(x)[2]) +#define FOUR_BYTE_UINT(x) (((u32)(x)[0]<<24)|((x)[1]<<16)|((x)[2]<<8)|(x)[3]) +#define FOUR_BYTE_INT(x) (16777216*(i8)((x)[0])|((x)[1]<<16)|((x)[2]<<8)|(x)[3]) + /* ** Deserialize the data blob pointed to by buf as serial type serial_type ** and store the result in pMem. Return the number of bytes read. +** +** This function is implemented as two separate routines for performance. +** The few cases that require local variables are broken out into a separate +** routine so that in most cases the overhead of moving the stack pointer +** is avoided. */ +static u32 SQLITE_NOINLINE serialGet( + const unsigned char *buf, /* Buffer to deserialize from */ + u32 serial_type, /* Serial type to deserialize */ + Mem *pMem /* Memory cell to write value into */ +){ + u64 x = FOUR_BYTE_UINT(buf); + u32 y = FOUR_BYTE_UINT(buf+4); + x = (x<<32) + y; + if( serial_type==6 ){ + /* EVIDENCE-OF: R-29851-52272 Value is a big-endian 64-bit + ** twos-complement integer. */ + pMem->u.i = *(i64*)&x; + pMem->flags = MEM_Int; + testcase( pMem->u.i<0 ); + }else{ + /* EVIDENCE-OF: R-57343-49114 Value is a big-endian IEEE 754-2008 64-bit + ** floating point number. */ +#if !defined(NDEBUG) && !defined(SQLITE_OMIT_FLOATING_POINT) + /* Verify that integers and floating point values use the same + ** byte order. Or, that if SQLITE_MIXED_ENDIAN_64BIT_FLOAT is + ** defined that 64-bit floating point values really are mixed + ** endian. + */ + static const u64 t1 = ((u64)0x3ff00000)<<32; + static const double r1 = 1.0; + u64 t2 = t1; + swapMixedEndianFloat(t2); + assert( sizeof(r1)==sizeof(t2) && memcmp(&r1, &t2, sizeof(r1))==0 ); +#endif + assert( sizeof(x)==8 && sizeof(pMem->u.r)==8 ); + swapMixedEndianFloat(x); + memcpy(&pMem->u.r, &x, sizeof(x)); + pMem->flags = sqlite3IsNaN(pMem->u.r) ? 
MEM_Null : MEM_Real; + } + return 8; +} SQLITE_PRIVATE u32 sqlite3VdbeSerialGet( const unsigned char *buf, /* Buffer to deserialize from */ u32 serial_type, /* Serial type to deserialize */ Mem *pMem /* Memory cell to write value into */ ){ switch( serial_type ){ case 10: /* Reserved for future use */ case 11: /* Reserved for future use */ - case 0: { /* NULL */ + case 0: { /* Null */ + /* EVIDENCE-OF: R-24078-09375 Value is a NULL. */ pMem->flags = MEM_Null; break; } - case 1: { /* 1-byte signed integer */ - pMem->u.i = (signed char)buf[0]; + case 1: { + /* EVIDENCE-OF: R-44885-25196 Value is an 8-bit twos-complement + ** integer. */ + pMem->u.i = ONE_BYTE_INT(buf); pMem->flags = MEM_Int; + testcase( pMem->u.i<0 ); return 1; } case 2: { /* 2-byte signed integer */ - pMem->u.i = (((signed char)buf[0])<<8) | buf[1]; + /* EVIDENCE-OF: R-49794-35026 Value is a big-endian 16-bit + ** twos-complement integer. */ + pMem->u.i = TWO_BYTE_INT(buf); pMem->flags = MEM_Int; + testcase( pMem->u.i<0 ); return 2; } case 3: { /* 3-byte signed integer */ - pMem->u.i = (((signed char)buf[0])<<16) | (buf[1]<<8) | buf[2]; + /* EVIDENCE-OF: R-37839-54301 Value is a big-endian 24-bit + ** twos-complement integer. */ + pMem->u.i = THREE_BYTE_INT(buf); pMem->flags = MEM_Int; + testcase( pMem->u.i<0 ); return 3; } case 4: { /* 4-byte signed integer */ - pMem->u.i = (buf[0]<<24) | (buf[1]<<16) | (buf[2]<<8) | buf[3]; + /* EVIDENCE-OF: R-01849-26079 Value is a big-endian 32-bit + ** twos-complement integer. */ + pMem->u.i = FOUR_BYTE_INT(buf); +#ifdef __HP_cc + /* Work around a sign-extension bug in the HP compiler for HP/UX */ + if( buf[0]&0x80 ) pMem->u.i |= 0xffffffff80000000LL; +#endif pMem->flags = MEM_Int; + testcase( pMem->u.i<0 ); return 4; } case 5: { /* 6-byte signed integer */ - u64 x = (((signed char)buf[0])<<8) | buf[1]; - u32 y = (buf[2]<<24) | (buf[3]<<16) | (buf[4]<<8) | buf[5]; - x = (x<<32) | y; - pMem->u.i = *(i64*)&x; + /* EVIDENCE-OF: R-50385-09674 Value is a big-endian 48-bit + ** twos-complement integer. */ + pMem->u.i = FOUR_BYTE_UINT(buf+2) + (((i64)1)<<32)*TWO_BYTE_INT(buf); pMem->flags = MEM_Int; + testcase( pMem->u.i<0 ); return 6; } case 6: /* 8-byte signed integer */ case 7: { /* IEEE floating point */ - u64 x; - u32 y; -#if !defined(NDEBUG) && !defined(SQLITE_OMIT_FLOATING_POINT) - /* Verify that integers and floating point values use the same - ** byte order. Or, that if SQLITE_MIXED_ENDIAN_64BIT_FLOAT is - ** defined that 64-bit floating point values really are mixed - ** endian. - */ - static const u64 t1 = ((u64)0x3ff00000)<<32; - static const double r1 = 1.0; - u64 t2 = t1; - swapMixedEndianFloat(t2); - assert( sizeof(r1)==sizeof(t2) && memcmp(&r1, &t2, sizeof(r1))==0 ); -#endif - - x = (buf[0]<<24) | (buf[1]<<16) | (buf[2]<<8) | buf[3]; - y = (buf[4]<<24) | (buf[5]<<16) | (buf[6]<<8) | buf[7]; - x = (x<<32) | y; - if( serial_type==6 ){ - pMem->u.i = *(i64*)&x; - pMem->flags = MEM_Int; - }else{ - assert( sizeof(x)==8 && sizeof(pMem->r)==8 ); - swapMixedEndianFloat(x); - memcpy(&pMem->r, &x, sizeof(x)); - pMem->flags = sqlite3IsNaN(pMem->r) ? MEM_Null : MEM_Real; - } - return 8; + /* These use local variables, so do them in a separate routine + ** to avoid having to move the frame pointer in the common case */ + return serialGet(buf,serial_type,pMem); } case 8: /* Integer 0 */ case 9: { /* Integer 1 */ + /* EVIDENCE-OF: R-12976-22893 Value is the integer 0. */ + /* EVIDENCE-OF: R-18143-12121 Value is the integer 1. 
*/ pMem->u.i = serial_type-8; pMem->flags = MEM_Int; return 0; } default: { - u32 len = (serial_type-12)/2; + /* EVIDENCE-OF: R-14606-31564 Value is a BLOB that is (N-12)/2 bytes in + ** length. + ** EVIDENCE-OF: R-28401-00140 Value is a string in the text encoding and + ** (N-13)/2 bytes in length. */ + static const u16 aFlag[] = { MEM_Blob|MEM_Ephem, MEM_Str|MEM_Ephem }; pMem->z = (char *)buf; - pMem->n = len; - pMem->xDel = 0; - if( serial_type&0x01 ){ - pMem->flags = MEM_Str | MEM_Ephem; - }else{ - pMem->flags = MEM_Blob | MEM_Ephem; - } - return len; + pMem->n = (serial_type-12)/2; + pMem->flags = aFlag[serial_type&1]; + return pMem->n; } } return 0; } - - -/* -** Given the nKey-byte encoding of a record in pKey[], parse the -** record into a UnpackedRecord structure. Return a pointer to -** that structure. -** -** The calling function might provide szSpace bytes of memory -** space at pSpace. This space can be used to hold the returned -** VDbeParsedRecord structure if it is large enough. If it is -** not big enough, space is obtained from sqlite3_malloc(). -** -** The returned structure should be closed by a call to -** sqlite3VdbeDeleteUnpackedRecord(). -*/ -SQLITE_PRIVATE UnpackedRecord *sqlite3VdbeRecordUnpack( - KeyInfo *pKeyInfo, /* Information about the record format */ - int nKey, /* Size of the binary record */ - const void *pKey, /* The binary record */ - char *pSpace, /* Unaligned space available to hold the object */ - int szSpace /* Size of pSpace[] in bytes */ -){ - const unsigned char *aKey = (const unsigned char *)pKey; - UnpackedRecord *p; /* The unpacked record that we will return */ - int nByte; /* Memory space needed to hold p, in bytes */ - int d; - u32 idx; - u16 u; /* Unsigned loop counter */ - u32 szHdr; - Mem *pMem; - int nOff; /* Increase pSpace by this much to 8-byte align it */ - - /* - ** We want to shift the pointer pSpace up such that it is 8-byte aligned. +/* +** This routine is used to allocate sufficient space for an UnpackedRecord +** structure large enough to be used with sqlite3VdbeRecordUnpack() if +** the first argument is a pointer to KeyInfo structure pKeyInfo. +** +** The space is either allocated using sqlite3DbMallocRaw() or from within +** the unaligned buffer passed via the second and third arguments (presumably +** stack space). If the former, then *ppFree is set to a pointer that should +** be eventually freed by the caller using sqlite3DbFree(). Or, if the +** allocation comes from the pSpace/szSpace buffer, *ppFree is set to NULL +** before returning. +** +** If an OOM error occurs, NULL is returned. +*/ +SQLITE_PRIVATE UnpackedRecord *sqlite3VdbeAllocUnpackedRecord( + KeyInfo *pKeyInfo, /* Description of the record */ + char *pSpace, /* Unaligned space available */ + int szSpace, /* Size of pSpace[] in bytes */ + char **ppFree /* OUT: Caller should free this pointer */ +){ + UnpackedRecord *p; /* Unpacked record to return */ + int nOff; /* Increment pSpace by nOff to align it */ + int nByte; /* Number of bytes required for *p */ + + /* We want to shift the pointer pSpace up such that it is 8-byte aligned. ** Thus, we need to calculate a value, nOff, between 0 and 7, to shift ** it by. If pSpace is already 8-byte aligned, nOff should be zero. 
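
The nOff computation that follows implements the 8-byte alignment adjustment described above. A tiny standalone sketch of the same arithmetic (buffer name invented) showing that the shift is 0 when the address is already aligned and otherwise rounds it up to the next multiple of 8:

  #include <stdint.h>
  #include <stdio.h>

  int main(void){
    char buf[32];
    uintptr_t a = (uintptr_t)buf;
    int nOff = (int)((8 - (a & 7)) & 7);   /* 0 when already 8-byte aligned */
    printf("addr %% 8 = %d, nOff = %d, adjusted addr %% 8 = %d\n",
           (int)(a & 7), nOff, (int)((a + nOff) & 7));
    return 0;
  }
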
*/ nOff = (8 - (SQLITE_PTR_TO_INT(pSpace) & 7)) & 7; - pSpace += nOff; - szSpace -= nOff; nByte = ROUND8(sizeof(UnpackedRecord)) + sizeof(Mem)*(pKeyInfo->nField+1); - if( nByte>szSpace ){ - p = sqlite3DbMallocRaw(pKeyInfo->db, nByte); - if( p==0 ) return 0; - p->flags = UNPACKED_NEED_FREE | UNPACKED_NEED_DESTROY; + if( nByte>szSpace+nOff ){ + p = (UnpackedRecord *)sqlite3DbMallocRaw(pKeyInfo->db, nByte); + *ppFree = (char *)p; + if( !p ) return 0; }else{ - p = (UnpackedRecord*)pSpace; - p->flags = UNPACKED_NEED_DESTROY; + p = (UnpackedRecord*)&pSpace[nOff]; + *ppFree = 0; } + + p->aMem = (Mem*)&((char*)p)[ROUND8(sizeof(UnpackedRecord))]; + assert( pKeyInfo->aSortOrder!=0 ); p->pKeyInfo = pKeyInfo; p->nField = pKeyInfo->nField + 1; - p->aMem = pMem = (Mem*)&((char*)p)[ROUND8(sizeof(UnpackedRecord))]; + return p; +} + +/* +** Given the nKey-byte encoding of a record in pKey[], populate the +** UnpackedRecord structure indicated by the fourth argument with the +** contents of the decoded record. +*/ +SQLITE_PRIVATE void sqlite3VdbeRecordUnpack( + KeyInfo *pKeyInfo, /* Information about the record format */ + int nKey, /* Size of the binary record */ + const void *pKey, /* The binary record */ + UnpackedRecord *p /* Populate this structure before returning. */ +){ + const unsigned char *aKey = (const unsigned char *)pKey; + int d; + u32 idx; /* Offset in aKey[] to read from */ + u16 u; /* Unsigned loop counter */ + u32 szHdr; + Mem *pMem = p->aMem; + + p->default_rc = 0; assert( EIGHT_BYTE_ALIGNMENT(pMem) ); idx = getVarint32(aKey, szHdr); d = szHdr; u = 0; - while( idx<szHdr && u<p->nField && d<=nKey ){ + while( idx<szHdr && d<=nKey ){ u32 serial_type; idx += getVarint32(&aKey[idx], serial_type); pMem->enc = pKeyInfo->enc; pMem->db = pKeyInfo->db; - pMem->flags = 0; - pMem->zMalloc = 0; + /* pMem->flags = 0; // sqlite3VdbeSerialGet() will set this for us */ + pMem->szMalloc = 0; d += sqlite3VdbeSerialGet(&aKey[d], serial_type, pMem); pMem++; - u++; + if( (++u)>=p->nField ) break; } assert( u<=pKeyInfo->nField + 1 ); p->nField = u; - return (void*)p; -} - -/* -** This routine destroys a UnpackedRecord object. -*/ -SQLITE_PRIVATE void sqlite3VdbeDeleteUnpackedRecord(UnpackedRecord *p){ - int i; - Mem *pMem; - - assert( p!=0 ); - assert( p->flags & UNPACKED_NEED_DESTROY ); - for(i=0, pMem=p->aMem; i<p->nField; i++, pMem++){ - /* The unpacked record is always constructed by the - ** sqlite3VdbeUnpackRecord() function above, which makes all - ** strings and blobs static. And none of the elements are - ** ever transformed, so there is never anything to delete. - */ - if( NEVER(pMem->zMalloc) ) sqlite3VdbeMemRelease(pMem); - } - if( p->flags & UNPACKED_NEED_FREE ){ - sqlite3DbFree(p->pKeyInfo->db, p); - } -} - -/* -** This function compares the two table rows or index records -** specified by {nKey1, pKey1} and pPKey2. It returns a negative, zero -** or positive integer if key1 is less than, equal to or -** greater than key2. The {nKey1, pKey1} key must be a blob -** created by th OP_MakeRecord opcode of the VDBE. The pPKey2 -** key must be a parsed key such as obtained from -** sqlite3VdbeParseRecord. -** -** Key1 and Key2 do not have to contain the same number of fields. -** The key with fewer fields is usually compares less than the -** longer key. However if the UNPACKED_INCRKEY flags in pPKey2 is set -** and the common prefixes are equal, then key1 is less than key2. 
-** Or if the UNPACKED_MATCH_PREFIX flag is set and the prefixes are -** equal, then the keys are considered to be equal and -** the parts beyond the common prefix are ignored. -** -** If the UNPACKED_IGNORE_ROWID flag is set, then the last byte of -** the header of pKey1 is ignored. It is assumed that pKey1 is -** an index key, and thus ends with a rowid value. The last byte -** of the header will therefore be the serial type of the rowid: -** one of 1, 2, 3, 4, 5, 6, 8, or 9 - the integer serial types. -** The serial type of the final rowid will always be a single byte. -** By ignoring this last byte of the header, we force the comparison -** to ignore the rowid at the end of key1. -*/ -SQLITE_PRIVATE int sqlite3VdbeRecordCompare( +} + +#if SQLITE_DEBUG +/* +** This function compares two index or table record keys in the same way +** as the sqlite3VdbeRecordCompare() routine. Unlike VdbeRecordCompare(), +** this function deserializes and compares values using the +** sqlite3VdbeSerialGet() and sqlite3MemCompare() functions. It is used +** in assert() statements to ensure that the optimized code in +** sqlite3VdbeRecordCompare() returns results with these two primitives. +** +** Return true if the result of comparison is equivalent to desiredResult. +** Return false if there is a disagreement. +*/ +static int vdbeRecordCompareDebug( int nKey1, const void *pKey1, /* Left key */ - UnpackedRecord *pPKey2 /* Right key */ + const UnpackedRecord *pPKey2, /* Right key */ + int desiredResult /* Correct answer */ ){ - int d1; /* Offset into aKey[] of next data element */ + u32 d1; /* Offset into aKey[] of next data element */ u32 idx1; /* Offset into aKey[] of next header element */ u32 szHdr1; /* Number of bytes in header */ int i = 0; - int nField; int rc = 0; const unsigned char *aKey1 = (const unsigned char *)pKey1; KeyInfo *pKeyInfo; Mem mem1; pKeyInfo = pPKey2->pKeyInfo; + if( pKeyInfo->db==0 ) return 1; mem1.enc = pKeyInfo->enc; mem1.db = pKeyInfo->db; /* mem1.flags = 0; // Will be initialized by sqlite3VdbeSerialGet() */ - VVA_ONLY( mem1.zMalloc = 0; ) /* Only needed by assert() statements */ + VVA_ONLY( mem1.szMalloc = 0; ) /* Only needed by assert() statements */ /* Compilers may complain that mem1.u.i is potentially uninitialized. ** We could initialize it, as shown here, to silence those complaints. - ** But in fact, mem1.u.i will never actually be used initialized, and doing + ** But in fact, mem1.u.i will never actually be used uninitialized, and doing ** the unnecessary initialization has a measurable negative performance ** impact, since this routine is a very high runner. And so, we choose ** to ignore the compiler warnings and leave this variable uninitialized. */ /* mem1.u.i = 0; // not needed, here to silence compiler warning */ idx1 = getVarint32(aKey1, szHdr1); + if( szHdr1>98307 ) return SQLITE_CORRUPT; d1 = szHdr1; - if( pPKey2->flags & UNPACKED_IGNORE_ROWID ){ - szHdr1--; - } - nField = pKeyInfo->nField; - while( idx1<szHdr1 && i<pPKey2->nField ){ + assert( pKeyInfo->nField+pKeyInfo->nXField>=pPKey2->nField || CORRUPT_DB ); + assert( pKeyInfo->aSortOrder!=0 ); + assert( pKeyInfo->nField>0 ); + assert( idx1<=szHdr1 || CORRUPT_DB ); + do{ u32 serial_type1; /* Read the serial types for the next element in each key. */ idx1 += getVarint32( aKey1+idx1, serial_type1 ); - if( d1>=nKey1 && sqlite3VdbeSerialTypeLen(serial_type1)>0 ) break; + + /* Verify that there is enough key space remaining to avoid + ** a buffer overread. 
The "d1+serial_type1+2" subexpression will + ** always be greater than or equal to the amount of required key space. + ** Use that approximation to avoid the more expensive call to + ** sqlite3VdbeSerialTypeLen() in the common case. + */ + if( d1+serial_type1+2>(u32)nKey1 + && d1+sqlite3VdbeSerialTypeLen(serial_type1)>(u32)nKey1 + ){ + break; + } /* Extract the values to be compared. */ d1 += sqlite3VdbeSerialGet(&aKey1[d1], serial_type1, &mem1); /* Do the comparison */ - rc = sqlite3MemCompare(&mem1, &pPKey2->aMem[i], - i<nField ? pKeyInfo->aColl[i] : 0); + rc = sqlite3MemCompare(&mem1, &pPKey2->aMem[i], pKeyInfo->aColl[i]); if( rc!=0 ){ - assert( mem1.zMalloc==0 ); /* See comment below */ - - /* Invert the result if we are using DESC sort order. */ - if( pKeyInfo->aSortOrder && i<nField && pKeyInfo->aSortOrder[i] ){ - rc = -rc; - } - - /* If the PREFIX_SEARCH flag is set and all fields except the final - ** rowid field were equal, then clear the PREFIX_SEARCH flag and set - ** pPKey2->rowid to the value of the rowid field in (pKey1, nKey1). - ** This is used by the OP_IsUnique opcode. - */ - if( (pPKey2->flags & UNPACKED_PREFIX_SEARCH) && i==(pPKey2->nField-1) ){ - assert( idx1==szHdr1 && rc ); - assert( mem1.flags & MEM_Int ); - pPKey2->flags &= ~UNPACKED_PREFIX_SEARCH; - pPKey2->rowid = mem1.u.i; - } - - return rc; + assert( mem1.szMalloc==0 ); /* See comment below */ + if( pKeyInfo->aSortOrder[i] ){ + rc = -rc; /* Invert the result for DESC sort order. */ + } + goto debugCompareEnd; } i++; - } + }while( idx1<szHdr1 && i<pPKey2->nField ); /* No memory allocation is ever used on mem1. Prove this using ** the following assert(). If the assert() fails, it indicates a ** memory leak and a need to call sqlite3VdbeMemRelease(&mem1). */ - assert( mem1.zMalloc==0 ); + assert( mem1.szMalloc==0 ); /* rc==0 here means that one of the keys ran out of fields and - ** all the fields up to that point were equal. If the UNPACKED_INCRKEY - ** flag is set, then break the tie by treating key2 as larger. - ** If the UPACKED_PREFIX_MATCH flag is set, then keys with common prefixes - ** are considered to be equal. Otherwise, the longer key is the - ** larger. As it happens, the pPKey2 will always be the longer - ** if there is a difference. - */ - assert( rc==0 ); - if( pPKey2->flags & UNPACKED_INCRKEY ){ - rc = -1; - }else if( pPKey2->flags & UNPACKED_PREFIX_MATCH ){ - /* Leave rc==0 */ - }else if( idx1<szHdr1 ){ - rc = 1; - } - return rc; -} - + ** all the fields up to that point were equal. Return the default_rc + ** value. */ + rc = pPKey2->default_rc; + +debugCompareEnd: + if( desiredResult==0 && rc==0 ) return 1; + if( desiredResult<0 && rc<0 ) return 1; + if( desiredResult>0 && rc>0 ) return 1; + if( CORRUPT_DB ) return 1; + if( pKeyInfo->db->mallocFailed ) return 1; + return 0; +} +#endif + +#if SQLITE_DEBUG +/* +** Count the number of fields (a.k.a. columns) in the record given by +** pKey,nKey. The verify that this count is less than or equal to the +** limit given by pKeyInfo->nField + pKeyInfo->nXField. +** +** If this constraint is not satisfied, it means that the high-speed +** vdbeRecordCompareInt() and vdbeRecordCompareString() routines will +** not work correctly. If this assert() ever fires, it probably means +** that the KeyInfo.nField or KeyInfo.nXField values were computed +** incorrectly. 
+*/ +static void vdbeAssertFieldCountWithinLimits( + int nKey, const void *pKey, /* The record to verify */ + const KeyInfo *pKeyInfo /* Compare size with this KeyInfo */ +){ + int nField = 0; + u32 szHdr; + u32 idx; + u32 notUsed; + const unsigned char *aKey = (const unsigned char*)pKey; + + if( CORRUPT_DB ) return; + idx = getVarint32(aKey, szHdr); + assert( nKey>=0 ); + assert( szHdr<=(u32)nKey ); + while( idx<szHdr ){ + idx += getVarint32(aKey+idx, notUsed); + nField++; + } + assert( nField <= pKeyInfo->nField+pKeyInfo->nXField ); +} +#else +# define vdbeAssertFieldCountWithinLimits(A,B,C) +#endif + +/* +** Both *pMem1 and *pMem2 contain string values. Compare the two values +** using the collation sequence pColl. As usual, return a negative , zero +** or positive value if *pMem1 is less than, equal to or greater than +** *pMem2, respectively. Similar in spirit to "rc = (*pMem1) - (*pMem2);". +*/ +static int vdbeCompareMemString( + const Mem *pMem1, + const Mem *pMem2, + const CollSeq *pColl, + u8 *prcErr /* If an OOM occurs, set to SQLITE_NOMEM */ +){ + if( pMem1->enc==pColl->enc ){ + /* The strings are already in the correct encoding. Call the + ** comparison function directly */ + return pColl->xCmp(pColl->pUser,pMem1->n,pMem1->z,pMem2->n,pMem2->z); + }else{ + int rc; + const void *v1, *v2; + int n1, n2; + Mem c1; + Mem c2; + sqlite3VdbeMemInit(&c1, pMem1->db, MEM_Null); + sqlite3VdbeMemInit(&c2, pMem1->db, MEM_Null); + sqlite3VdbeMemShallowCopy(&c1, pMem1, MEM_Ephem); + sqlite3VdbeMemShallowCopy(&c2, pMem2, MEM_Ephem); + v1 = sqlite3ValueText((sqlite3_value*)&c1, pColl->enc); + n1 = v1==0 ? 0 : c1.n; + v2 = sqlite3ValueText((sqlite3_value*)&c2, pColl->enc); + n2 = v2==0 ? 0 : c2.n; + rc = pColl->xCmp(pColl->pUser, n1, v1, n2, v2); + if( (v1==0 || v2==0) && prcErr ) *prcErr = SQLITE_NOMEM; + sqlite3VdbeMemRelease(&c1); + sqlite3VdbeMemRelease(&c2); + return rc; + } +} + +/* +** Compare two blobs. Return negative, zero, or positive if the first +** is less than, equal to, or greater than the second, respectively. +** If one blob is a prefix of the other, then the shorter is the lessor. +*/ +static SQLITE_NOINLINE int sqlite3BlobCompare(const Mem *pB1, const Mem *pB2){ + int c = memcmp(pB1->z, pB2->z, pB1->n>pB2->n ? pB2->n : pB1->n); + if( c ) return c; + return pB1->n - pB2->n; +} + +/* +** Do a comparison between a 64-bit signed integer and a 64-bit floating-point +** number. Return negative, zero, or positive if the first (i64) is less than, +** equal to, or greater than the second (double). +*/ +static int sqlite3IntFloatCompare(i64 i, double r){ + if( sizeof(LONGDOUBLE_TYPE)>8 ){ + LONGDOUBLE_TYPE x = (LONGDOUBLE_TYPE)i; + if( x<r ) return -1; + if( x>r ) return +1; + return 0; + }else{ + i64 y; + double s; + if( r<-9223372036854775808.0 ) return +1; + if( r>9223372036854775807.0 ) return -1; + y = (i64)r; + if( i<y ) return -1; + if( i>y ){ + if( y==SMALLEST_INT64 && r>0.0 ) return -1; + return +1; + } + s = (double)i; + if( s<r ) return -1; + if( s>r ) return +1; + return 0; + } +} + +/* +** Compare the values contained by the two memory cells, returning +** negative, zero or positive if pMem1 is less than, equal to, or greater +** than pMem2. Sorting order is NULL's first, followed by numbers (integers +** and reals) sorted numerically, followed by text ordered by the collating +** sequence pColl and finally blob's ordered by memcmp(). +** +** Two NULL values are considered equal by this function. 
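
sqlite3MemCompare() defines the cross-type ordering used throughout these comparison routines: NULLs first, then numeric values, then text under the applicable collation, then blobs by memcmp(). A minimal sketch of how that ordering surfaces through ORDER BY, with an invented single-column table:

  #include <sqlite3.h>
  #include <stdio.h>

  static int show(void *pArg, int nCol, char **azVal, char **azCol){
    printf("%s\n", azVal[0] ? azVal[0] : "NULL");
    (void)pArg; (void)nCol; (void)azCol;
    return 0;
  }

  int main(void){
    sqlite3 *db;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db,
      "CREATE TABLE t(v);"
      "INSERT INTO t VALUES (x'41'), ('abc'), (2), (1.5), (NULL);",
      0, 0, 0);
    /* Prints NULL, 1.5, 2, abc, A: NULLs first, then numbers,
    ** then text (BINARY collation), then blobs. */
    sqlite3_exec(db, "SELECT v FROM t ORDER BY v", show, 0, 0);
    sqlite3_close(db);
    return 0;
  }
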
+*/ +SQLITE_PRIVATE int sqlite3MemCompare(const Mem *pMem1, const Mem *pMem2, const CollSeq *pColl){ + int f1, f2; + int combined_flags; + + f1 = pMem1->flags; + f2 = pMem2->flags; + combined_flags = f1|f2; + assert( (combined_flags & MEM_RowSet)==0 ); + + /* If one value is NULL, it is less than the other. If both values + ** are NULL, return 0. + */ + if( combined_flags&MEM_Null ){ + return (f2&MEM_Null) - (f1&MEM_Null); + } + + /* At least one of the two values is a number + */ + if( combined_flags&(MEM_Int|MEM_Real) ){ + if( (f1 & f2 & MEM_Int)!=0 ){ + if( pMem1->u.i < pMem2->u.i ) return -1; + if( pMem1->u.i > pMem2->u.i ) return +1; + return 0; + } + if( (f1 & f2 & MEM_Real)!=0 ){ + if( pMem1->u.r < pMem2->u.r ) return -1; + if( pMem1->u.r > pMem2->u.r ) return +1; + return 0; + } + if( (f1&MEM_Int)!=0 ){ + if( (f2&MEM_Real)!=0 ){ + return sqlite3IntFloatCompare(pMem1->u.i, pMem2->u.r); + }else{ + return -1; + } + } + if( (f1&MEM_Real)!=0 ){ + if( (f2&MEM_Int)!=0 ){ + return -sqlite3IntFloatCompare(pMem2->u.i, pMem1->u.r); + }else{ + return -1; + } + } + return +1; + } + + /* If one value is a string and the other is a blob, the string is less. + ** If both are strings, compare using the collating functions. + */ + if( combined_flags&MEM_Str ){ + if( (f1 & MEM_Str)==0 ){ + return 1; + } + if( (f2 & MEM_Str)==0 ){ + return -1; + } + + assert( pMem1->enc==pMem2->enc || pMem1->db->mallocFailed ); + assert( pMem1->enc==SQLITE_UTF8 || + pMem1->enc==SQLITE_UTF16LE || pMem1->enc==SQLITE_UTF16BE ); + + /* The collation sequence must be defined at this point, even if + ** the user deletes the collation sequence after the vdbe program is + ** compiled (this was not always the case). + */ + assert( !pColl || pColl->xCmp ); + + if( pColl ){ + return vdbeCompareMemString(pMem1, pMem2, pColl, 0); + } + /* If a NULL pointer was passed as the collate function, fall through + ** to the blob case and use memcmp(). */ + } + + /* Both values must be blobs. Compare using memcmp(). */ + return sqlite3BlobCompare(pMem1, pMem2); +} + + +/* +** The first argument passed to this function is a serial-type that +** corresponds to an integer - all values between 1 and 9 inclusive +** except 7. The second points to a buffer containing an integer value +** serialized according to serial_type. This function deserializes +** and returns the value. +*/ +static i64 vdbeRecordDecodeInt(u32 serial_type, const u8 *aKey){ + u32 y; + assert( CORRUPT_DB || (serial_type>=1 && serial_type<=9 && serial_type!=7) ); + switch( serial_type ){ + case 0: + case 1: + testcase( aKey[0]&0x80 ); + return ONE_BYTE_INT(aKey); + case 2: + testcase( aKey[0]&0x80 ); + return TWO_BYTE_INT(aKey); + case 3: + testcase( aKey[0]&0x80 ); + return THREE_BYTE_INT(aKey); + case 4: { + testcase( aKey[0]&0x80 ); + y = FOUR_BYTE_UINT(aKey); + return (i64)*(int*)&y; + } + case 5: { + testcase( aKey[0]&0x80 ); + return FOUR_BYTE_UINT(aKey+2) + (((i64)1)<<32)*TWO_BYTE_INT(aKey); + } + case 6: { + u64 x = FOUR_BYTE_UINT(aKey); + testcase( aKey[0]&0x80 ); + x = (x<<32) | FOUR_BYTE_UINT(aKey+4); + return (i64)*(i64*)&x; + } + } + + return (serial_type - 8); +} + +/* +** This function compares the two table rows or index records +** specified by {nKey1, pKey1} and pPKey2. It returns a negative, zero +** or positive integer if key1 is less than, equal to or +** greater than key2. The {nKey1, pKey1} key must be a blob +** created by the OP_MakeRecord opcode of the VDBE. The pPKey2 +** key must be a parsed key such as obtained from +** sqlite3VdbeParseRecord. 
+** +** If argument bSkip is non-zero, it is assumed that the caller has already +** determined that the first fields of the keys are equal. +** +** Key1 and Key2 do not have to contain the same number of fields. If all +** fields that appear in both keys are equal, then pPKey2->default_rc is +** returned. +** +** If database corruption is discovered, set pPKey2->errCode to +** SQLITE_CORRUPT and return 0. If an OOM error is encountered, +** pPKey2->errCode is set to SQLITE_NOMEM and, if it is not NULL, the +** malloc-failed flag set on database handle (pPKey2->pKeyInfo->db). +*/ +SQLITE_PRIVATE int sqlite3VdbeRecordCompareWithSkip( + int nKey1, const void *pKey1, /* Left key */ + UnpackedRecord *pPKey2, /* Right key */ + int bSkip /* If true, skip the first field */ +){ + u32 d1; /* Offset into aKey[] of next data element */ + int i; /* Index of next field to compare */ + u32 szHdr1; /* Size of record header in bytes */ + u32 idx1; /* Offset of first type in header */ + int rc = 0; /* Return value */ + Mem *pRhs = pPKey2->aMem; /* Next field of pPKey2 to compare */ + KeyInfo *pKeyInfo = pPKey2->pKeyInfo; + const unsigned char *aKey1 = (const unsigned char *)pKey1; + Mem mem1; + + /* If bSkip is true, then the caller has already determined that the first + ** two elements in the keys are equal. Fix the various stack variables so + ** that this routine begins comparing at the second field. */ + if( bSkip ){ + u32 s1; + idx1 = 1 + getVarint32(&aKey1[1], s1); + szHdr1 = aKey1[0]; + d1 = szHdr1 + sqlite3VdbeSerialTypeLen(s1); + i = 1; + pRhs++; + }else{ + idx1 = getVarint32(aKey1, szHdr1); + d1 = szHdr1; + if( d1>(unsigned)nKey1 ){ + pPKey2->errCode = (u8)SQLITE_CORRUPT_BKPT; + return 0; /* Corruption */ + } + i = 0; + } + + VVA_ONLY( mem1.szMalloc = 0; ) /* Only needed by assert() statements */ + assert( pPKey2->pKeyInfo->nField+pPKey2->pKeyInfo->nXField>=pPKey2->nField + || CORRUPT_DB ); + assert( pPKey2->pKeyInfo->aSortOrder!=0 ); + assert( pPKey2->pKeyInfo->nField>0 ); + assert( idx1<=szHdr1 || CORRUPT_DB ); + do{ + u32 serial_type; + + /* RHS is an integer */ + if( pRhs->flags & MEM_Int ){ + serial_type = aKey1[idx1]; + testcase( serial_type==12 ); + if( serial_type>=10 ){ + rc = +1; + }else if( serial_type==0 ){ + rc = -1; + }else if( serial_type==7 ){ + sqlite3VdbeSerialGet(&aKey1[d1], serial_type, &mem1); + rc = -sqlite3IntFloatCompare(pRhs->u.i, mem1.u.r); + }else{ + i64 lhs = vdbeRecordDecodeInt(serial_type, &aKey1[d1]); + i64 rhs = pRhs->u.i; + if( lhs<rhs ){ + rc = -1; + }else if( lhs>rhs ){ + rc = +1; + } + } + } + + /* RHS is real */ + else if( pRhs->flags & MEM_Real ){ + serial_type = aKey1[idx1]; + if( serial_type>=10 ){ + /* Serial types 12 or greater are strings and blobs (greater than + ** numbers). Types 10 and 11 are currently "reserved for future + ** use", so it doesn't really matter what the results of comparing + ** them to numberic values are. 
*/ + rc = +1; + }else if( serial_type==0 ){ + rc = -1; + }else{ + sqlite3VdbeSerialGet(&aKey1[d1], serial_type, &mem1); + if( serial_type==7 ){ + if( mem1.u.r<pRhs->u.r ){ + rc = -1; + }else if( mem1.u.r>pRhs->u.r ){ + rc = +1; + } + }else{ + rc = sqlite3IntFloatCompare(mem1.u.i, pRhs->u.r); + } + } + } + + /* RHS is a string */ + else if( pRhs->flags & MEM_Str ){ + getVarint32(&aKey1[idx1], serial_type); + testcase( serial_type==12 ); + if( serial_type<12 ){ + rc = -1; + }else if( !(serial_type & 0x01) ){ + rc = +1; + }else{ + mem1.n = (serial_type - 12) / 2; + testcase( (d1+mem1.n)==(unsigned)nKey1 ); + testcase( (d1+mem1.n+1)==(unsigned)nKey1 ); + if( (d1+mem1.n) > (unsigned)nKey1 ){ + pPKey2->errCode = (u8)SQLITE_CORRUPT_BKPT; + return 0; /* Corruption */ + }else if( pKeyInfo->aColl[i] ){ + mem1.enc = pKeyInfo->enc; + mem1.db = pKeyInfo->db; + mem1.flags = MEM_Str; + mem1.z = (char*)&aKey1[d1]; + rc = vdbeCompareMemString( + &mem1, pRhs, pKeyInfo->aColl[i], &pPKey2->errCode + ); + }else{ + int nCmp = MIN(mem1.n, pRhs->n); + rc = memcmp(&aKey1[d1], pRhs->z, nCmp); + if( rc==0 ) rc = mem1.n - pRhs->n; + } + } + } + + /* RHS is a blob */ + else if( pRhs->flags & MEM_Blob ){ + getVarint32(&aKey1[idx1], serial_type); + testcase( serial_type==12 ); + if( serial_type<12 || (serial_type & 0x01) ){ + rc = -1; + }else{ + int nStr = (serial_type - 12) / 2; + testcase( (d1+nStr)==(unsigned)nKey1 ); + testcase( (d1+nStr+1)==(unsigned)nKey1 ); + if( (d1+nStr) > (unsigned)nKey1 ){ + pPKey2->errCode = (u8)SQLITE_CORRUPT_BKPT; + return 0; /* Corruption */ + }else{ + int nCmp = MIN(nStr, pRhs->n); + rc = memcmp(&aKey1[d1], pRhs->z, nCmp); + if( rc==0 ) rc = nStr - pRhs->n; + } + } + } + + /* RHS is null */ + else{ + serial_type = aKey1[idx1]; + rc = (serial_type!=0); + } + + if( rc!=0 ){ + if( pKeyInfo->aSortOrder[i] ){ + rc = -rc; + } + assert( vdbeRecordCompareDebug(nKey1, pKey1, pPKey2, rc) ); + assert( mem1.szMalloc==0 ); /* See comment below */ + return rc; + } + + i++; + pRhs++; + d1 += sqlite3VdbeSerialTypeLen(serial_type); + idx1 += sqlite3VarintLen(serial_type); + }while( idx1<(unsigned)szHdr1 && i<pPKey2->nField && d1<=(unsigned)nKey1 ); + + /* No memory allocation is ever used on mem1. Prove this using + ** the following assert(). If the assert() fails, it indicates a + ** memory leak and a need to call sqlite3VdbeMemRelease(&mem1). */ + assert( mem1.szMalloc==0 ); + + /* rc==0 here means that one or both of the keys ran out of fields and + ** all the fields up to that point were equal. Return the default_rc + ** value. */ + assert( CORRUPT_DB + || vdbeRecordCompareDebug(nKey1, pKey1, pPKey2, pPKey2->default_rc) + || pKeyInfo->db->mallocFailed + ); + pPKey2->eqSeen = 1; + return pPKey2->default_rc; +} +SQLITE_PRIVATE int sqlite3VdbeRecordCompare( + int nKey1, const void *pKey1, /* Left key */ + UnpackedRecord *pPKey2 /* Right key */ +){ + return sqlite3VdbeRecordCompareWithSkip(nKey1, pKey1, pPKey2, 0); +} + + +/* +** This function is an optimized version of sqlite3VdbeRecordCompare() +** that (a) the first field of pPKey2 is an integer, and (b) the +** size-of-header varint at the start of (pKey1/nKey1) fits in a single +** byte (i.e. is less than 128). +** +** To avoid concerns about buffer overreads, this routine is only used +** on schemas where the maximum valid header size is 63 bytes or less. 
+*/ +static int vdbeRecordCompareInt( + int nKey1, const void *pKey1, /* Left key */ + UnpackedRecord *pPKey2 /* Right key */ +){ + const u8 *aKey = &((const u8*)pKey1)[*(const u8*)pKey1 & 0x3F]; + int serial_type = ((const u8*)pKey1)[1]; + int res; + u32 y; + u64 x; + i64 v = pPKey2->aMem[0].u.i; + i64 lhs; + + vdbeAssertFieldCountWithinLimits(nKey1, pKey1, pPKey2->pKeyInfo); + assert( (*(u8*)pKey1)<=0x3F || CORRUPT_DB ); + switch( serial_type ){ + case 1: { /* 1-byte signed integer */ + lhs = ONE_BYTE_INT(aKey); + testcase( lhs<0 ); + break; + } + case 2: { /* 2-byte signed integer */ + lhs = TWO_BYTE_INT(aKey); + testcase( lhs<0 ); + break; + } + case 3: { /* 3-byte signed integer */ + lhs = THREE_BYTE_INT(aKey); + testcase( lhs<0 ); + break; + } + case 4: { /* 4-byte signed integer */ + y = FOUR_BYTE_UINT(aKey); + lhs = (i64)*(int*)&y; + testcase( lhs<0 ); + break; + } + case 5: { /* 6-byte signed integer */ + lhs = FOUR_BYTE_UINT(aKey+2) + (((i64)1)<<32)*TWO_BYTE_INT(aKey); + testcase( lhs<0 ); + break; + } + case 6: { /* 8-byte signed integer */ + x = FOUR_BYTE_UINT(aKey); + x = (x<<32) | FOUR_BYTE_UINT(aKey+4); + lhs = *(i64*)&x; + testcase( lhs<0 ); + break; + } + case 8: + lhs = 0; + break; + case 9: + lhs = 1; + break; + + /* This case could be removed without changing the results of running + ** this code. Including it causes gcc to generate a faster switch + ** statement (since the range of switch targets now starts at zero and + ** is contiguous) but does not cause any duplicate code to be generated + ** (as gcc is clever enough to combine the two like cases). Other + ** compilers might be similar. */ + case 0: case 7: + return sqlite3VdbeRecordCompare(nKey1, pKey1, pPKey2); + + default: + return sqlite3VdbeRecordCompare(nKey1, pKey1, pPKey2); + } + + if( v>lhs ){ + res = pPKey2->r1; + }else if( v<lhs ){ + res = pPKey2->r2; + }else if( pPKey2->nField>1 ){ + /* The first fields of the two keys are equal. Compare the trailing + ** fields. */ + res = sqlite3VdbeRecordCompareWithSkip(nKey1, pKey1, pPKey2, 1); + }else{ + /* The first fields of the two keys are equal and there are no trailing + ** fields. Return pPKey2->default_rc in this case. */ + res = pPKey2->default_rc; + pPKey2->eqSeen = 1; + } + + assert( vdbeRecordCompareDebug(nKey1, pKey1, pPKey2, res) ); + return res; +} + +/* +** This function is an optimized version of sqlite3VdbeRecordCompare() +** that (a) the first field of pPKey2 is a string, that (b) the first field +** uses the collation sequence BINARY and (c) that the size-of-header varint +** at the start of (pKey1/nKey1) fits in a single byte. 
+*/ +static int vdbeRecordCompareString( + int nKey1, const void *pKey1, /* Left key */ + UnpackedRecord *pPKey2 /* Right key */ +){ + const u8 *aKey1 = (const u8*)pKey1; + int serial_type; + int res; + + assert( pPKey2->aMem[0].flags & MEM_Str ); + vdbeAssertFieldCountWithinLimits(nKey1, pKey1, pPKey2->pKeyInfo); + getVarint32(&aKey1[1], serial_type); + if( serial_type<12 ){ + res = pPKey2->r1; /* (pKey1/nKey1) is a number or a null */ + }else if( !(serial_type & 0x01) ){ + res = pPKey2->r2; /* (pKey1/nKey1) is a blob */ + }else{ + int nCmp; + int nStr; + int szHdr = aKey1[0]; + + nStr = (serial_type-12) / 2; + if( (szHdr + nStr) > nKey1 ){ + pPKey2->errCode = (u8)SQLITE_CORRUPT_BKPT; + return 0; /* Corruption */ + } + nCmp = MIN( pPKey2->aMem[0].n, nStr ); + res = memcmp(&aKey1[szHdr], pPKey2->aMem[0].z, nCmp); + + if( res==0 ){ + res = nStr - pPKey2->aMem[0].n; + if( res==0 ){ + if( pPKey2->nField>1 ){ + res = sqlite3VdbeRecordCompareWithSkip(nKey1, pKey1, pPKey2, 1); + }else{ + res = pPKey2->default_rc; + pPKey2->eqSeen = 1; + } + }else if( res>0 ){ + res = pPKey2->r2; + }else{ + res = pPKey2->r1; + } + }else if( res>0 ){ + res = pPKey2->r2; + }else{ + res = pPKey2->r1; + } + } + + assert( vdbeRecordCompareDebug(nKey1, pKey1, pPKey2, res) + || CORRUPT_DB + || pPKey2->pKeyInfo->db->mallocFailed + ); + return res; +} + +/* +** Return a pointer to an sqlite3VdbeRecordCompare() compatible function +** suitable for comparing serialized records to the unpacked record passed +** as the only argument. +*/ +SQLITE_PRIVATE RecordCompare sqlite3VdbeFindCompare(UnpackedRecord *p){ + /* varintRecordCompareInt() and varintRecordCompareString() both assume + ** that the size-of-header varint that occurs at the start of each record + ** fits in a single byte (i.e. is 127 or less). varintRecordCompareInt() + ** also assumes that it is safe to overread a buffer by at least the + ** maximum possible legal header size plus 8 bytes. Because there is + ** guaranteed to be at least 74 (but not 136) bytes of padding following each + ** buffer passed to varintRecordCompareInt() this makes it convenient to + ** limit the size of the header to 64 bytes in cases where the first field + ** is an integer. + ** + ** The easiest way to enforce this limit is to consider only records with + ** 13 fields or less. If the first field is an integer, the maximum legal + ** header size is (12*5 + 1 + 1) bytes. */ + if( (p->pKeyInfo->nField + p->pKeyInfo->nXField)<=13 ){ + int flags = p->aMem[0].flags; + if( p->pKeyInfo->aSortOrder[0] ){ + p->r1 = 1; + p->r2 = -1; + }else{ + p->r1 = -1; + p->r2 = 1; + } + if( (flags & MEM_Int) ){ + return vdbeRecordCompareInt; + } + testcase( flags & MEM_Real ); + testcase( flags & MEM_Null ); + testcase( flags & MEM_Blob ); + if( (flags & (MEM_Real|MEM_Null|MEM_Blob))==0 && p->pKeyInfo->aColl[0]==0 ){ + assert( flags & MEM_Str ); + return vdbeRecordCompareString; + } + } + + return sqlite3VdbeRecordCompare; +} /* ** pCur points at an index entry created using the OP_MakeRecord opcode. ** Read the rowid (the last field in the record) and store it in *rowid. ** Return SQLITE_OK if everything works, or an error code otherwise. @@ -51486,25 +72058,23 @@ u32 szHdr; /* Size of the header */ u32 typeRowid; /* Serial type of the rowid */ u32 lenRowid; /* Size of the rowid */ Mem m, v; - UNUSED_PARAMETER(db); - /* Get the size of the index entry. Only indices entries of less ** than 2GiB are support - anything large must be database corruption. 
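
sqlite3VdbeIdxRowid() relies on index entries being ordinary records whose final field is the rowid. As a small worked example (index, values and rowid invented), the key stored for a row with a='hi' and rowid 5 in an index on column a would be:

  /* Index b-tree key for the record ('hi', 5); the last field is the
  ** rowid that sqlite3VdbeIdxRowid() reads back:
  **
  **   header:  03        header size (3 bytes, including this varint)
  **            11        serial type 17 -> 2-byte text value
  **            01        serial type 1  -> 1-byte signed integer (rowid)
  **   data:    68 69     'h', 'i'
  **            05        rowid 5
  */
  static const unsigned char aExampleIndexKey[] = {
    0x03, 0x11, 0x01, 'h', 'i', 0x05
  };
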
** Any corruption is detected in sqlite3BtreeParseCellPtr(), though, so ** this code can safely assume that nCellKey is 32-bits */ assert( sqlite3BtreeCursorIsValid(pCur) ); - rc = sqlite3BtreeKeySize(pCur, &nCellKey); + VVA_ONLY(rc =) sqlite3BtreeKeySize(pCur, &nCellKey); assert( rc==SQLITE_OK ); /* pCur is always valid so KeySize cannot fail */ assert( (nCellKey & SQLITE_MAX_U32)==(u64)nCellKey ); /* Read in the complete content of the index entry */ - memset(&m, 0, sizeof(m)); - rc = sqlite3VdbeMemFromBtree(pCur, 0, (int)nCellKey, 1, &m); + sqlite3VdbeMemInit(&m, db, 0); + rc = sqlite3VdbeMemFromBtree(pCur, 0, (u32)nCellKey, 1, &m); if( rc ){ return rc; } /* The index entry must begin with a header size */ @@ -51527,11 +72097,11 @@ testcase( typeRowid==8 ); testcase( typeRowid==9 ); if( unlikely(typeRowid<1 || typeRowid>9 || typeRowid==7) ){ goto idx_rowid_corruption; } - lenRowid = sqlite3VdbeSerialTypeLen(typeRowid); + lenRowid = sqlite3SmallTypeSizes[typeRowid]; testcase( (u32)m.n==szHdr+lenRowid ); if( unlikely((u32)m.n<szHdr+lenRowid) ){ goto idx_rowid_corruption; } @@ -51542,11 +72112,11 @@ return SQLITE_OK; /* Jump here if database corruption is detected after m has been ** allocated. Free the m object and return SQLITE_CORRUPT. */ idx_rowid_corruption: - testcase( m.zMalloc!=0 ); + testcase( m.szMalloc!=0 ); sqlite3VdbeMemRelease(&m); return SQLITE_CORRUPT_BKPT; } /* @@ -51559,34 +72129,36 @@ ** omits the rowid at the end. The rowid at the end of the index entry ** is ignored as well. Hence, this routine only compares the prefixes ** of the keys prior to the final rowid, not the entire key. */ SQLITE_PRIVATE int sqlite3VdbeIdxKeyCompare( - VdbeCursor *pC, /* The cursor to compare against */ - UnpackedRecord *pUnpacked, /* Unpacked version of key to compare against */ - int *res /* Write the comparison result here */ + sqlite3 *db, /* Database connection */ + VdbeCursor *pC, /* The cursor to compare against */ + UnpackedRecord *pUnpacked, /* Unpacked version of key */ + int *res /* Write the comparison result here */ ){ i64 nCellKey = 0; int rc; - BtCursor *pCur = pC->pCursor; + BtCursor *pCur; Mem m; + assert( pC->eCurType==CURTYPE_BTREE ); + pCur = pC->uc.pCursor; assert( sqlite3BtreeCursorIsValid(pCur) ); - rc = sqlite3BtreeKeySize(pCur, &nCellKey); + VVA_ONLY(rc =) sqlite3BtreeKeySize(pCur, &nCellKey); assert( rc==SQLITE_OK ); /* pCur is always valid so KeySize cannot fail */ - /* nCellKey will always be between 0 and 0xffffffff because of the say + /* nCellKey will always be between 0 and 0xffffffff because of the way ** that btreeParseCellPtr() and sqlite3GetVarint32() are implemented */ if( nCellKey<=0 || nCellKey>0x7fffffff ){ *res = 0; return SQLITE_CORRUPT_BKPT; } - memset(&m, 0, sizeof(m)); - rc = sqlite3VdbeMemFromBtree(pC->pCursor, 0, (int)nCellKey, 1, &m); + sqlite3VdbeMemInit(&m, db, 0); + rc = sqlite3VdbeMemFromBtree(pCur, 0, (u32)nCellKey, 1, &m); if( rc ){ return rc; } - assert( pUnpacked->flags & UNPACKED_IGNORE_ROWID ); *res = sqlite3VdbeRecordCompare(m.n, m.z, pUnpacked); sqlite3VdbeMemRelease(&m); return SQLITE_OK; } @@ -51638,20 +72210,19 @@ ** 0 instead. Unless it is NULL, apply affinity aff (one of the SQLITE_AFF_* ** constants) to the value before returning it. ** ** The returned value must be freed by the caller using sqlite3ValueFree(). 
*/ -SQLITE_PRIVATE sqlite3_value *sqlite3VdbeGetValue(Vdbe *v, int iVar, u8 aff){ +SQLITE_PRIVATE sqlite3_value *sqlite3VdbeGetBoundValue(Vdbe *v, int iVar, u8 aff){ assert( iVar>0 ); if( v ){ Mem *pMem = &v->aVar[iVar-1]; if( 0==(pMem->flags & MEM_Null) ){ sqlite3_value *pRet = sqlite3ValueNew(v->db); if( pRet ){ sqlite3VdbeMemCopy((Mem *)pRet, pMem); sqlite3ValueApplyAffinity(pRet, aff, SQLITE_UTF8); - sqlite3VdbeMemStoreType((Mem *)pRet); } return pRet; } } return 0; @@ -51669,10 +72240,27 @@ }else{ v->expmask |= ((u32)1 << (iVar-1)); } } +#ifndef SQLITE_OMIT_VIRTUALTABLE +/* +** Transfer error message text from an sqlite3_vtab.zErrMsg (text stored +** in memory obtained from sqlite3_malloc) into a Vdbe.zErrMsg (text stored +** in memory obtained from sqlite3DbMalloc). +*/ +SQLITE_PRIVATE void sqlite3VtabImportErrmsg(Vdbe *p, sqlite3_vtab *pVtab){ + if( pVtab->zErrMsg ){ + sqlite3 *db = p->db; + sqlite3DbFree(db, p->zErrMsg); + p->zErrMsg = sqlite3DbStrDup(db, pVtab->zErrMsg); + sqlite3_free(pVtab->zErrMsg); + pVtab->zErrMsg = 0; + } +} +#endif /* SQLITE_OMIT_VIRTUALTABLE */ + /************** End of vdbeaux.c *********************************************/ /************** Begin file vdbeapi.c *****************************************/ /* ** 2004 May 26 ** @@ -51686,10 +72274,12 @@ ************************************************************************* ** ** This file contains code use to implement APIs that are part of the ** VDBE. */ +/* #include "sqliteInt.h" */ +/* #include "vdbeInt.h" */ #ifndef SQLITE_OMIT_DEPRECATED /* ** Return TRUE (non-zero) of the statement supplied as an argument needs ** to be recompiled. A statement needs to be recompiled whenever the @@ -51696,11 +72286,11 @@ ** execution environment changes in a way that would alter the program ** that sqlite3_prepare() generates. For example, if new functions or ** collating sequences are registered or if an authorizer function is ** added or changed. */ -SQLITE_API int sqlite3_expired(sqlite3_stmt *pStmt){ +SQLITE_API int SQLITE_STDCALL sqlite3_expired(sqlite3_stmt *pStmt){ Vdbe *p = (Vdbe*)pStmt; return p==0 || p->expired; } #endif @@ -51724,37 +72314,59 @@ }else{ return vdbeSafety(p); } } +#ifndef SQLITE_OMIT_TRACE +/* +** Invoke the profile callback. This routine is only called if we already +** know that the profile callback is defined and needs to be invoked. +*/ +static SQLITE_NOINLINE void invokeProfileCallback(sqlite3 *db, Vdbe *p){ + sqlite3_int64 iNow; + assert( p->startTime>0 ); + assert( db->xProfile!=0 ); + assert( db->init.busy==0 ); + assert( p->zSql!=0 ); + sqlite3OsCurrentTimeInt64(db->pVfs, &iNow); + db->xProfile(db->pProfileArg, p->zSql, (iNow - p->startTime)*1000000); + p->startTime = 0; +} +/* +** The checkProfileCallback(DB,P) macro checks to see if a profile callback +** is needed, and it invokes the callback if it is needed. +*/ +# define checkProfileCallback(DB,P) \ + if( ((P)->startTime)>0 ){ invokeProfileCallback(DB,P); } +#else +# define checkProfileCallback(DB,P) /*no-op*/ +#endif + /* ** The following routine destroys a virtual machine that is created by ** the sqlite3_compile() routine. The integer returned is an SQLITE_ ** success/failure code that describes the result of executing the virtual ** machine. ** ** This routine sets the error code and string returned by ** sqlite3_errcode(), sqlite3_errmsg() and sqlite3_errmsg16(). 
*/ -SQLITE_API int sqlite3_finalize(sqlite3_stmt *pStmt){ +SQLITE_API int SQLITE_STDCALL sqlite3_finalize(sqlite3_stmt *pStmt){ int rc; if( pStmt==0 ){ + /* IMPLEMENTATION-OF: R-57228-12904 Invoking sqlite3_finalize() on a NULL + ** pointer is a harmless no-op. */ rc = SQLITE_OK; }else{ Vdbe *v = (Vdbe*)pStmt; sqlite3 *db = v->db; -#if SQLITE_THREADSAFE - sqlite3_mutex *mutex; -#endif if( vdbeSafety(v) ) return SQLITE_MISUSE_BKPT; -#if SQLITE_THREADSAFE - mutex = v->db->mutex; -#endif - sqlite3_mutex_enter(mutex); + sqlite3_mutex_enter(db->mutex); + checkProfileCallback(db, v); rc = sqlite3VdbeFinalize(v); rc = sqlite3ApiExit(db, rc); - sqlite3_mutex_leave(mutex); + sqlite3LeaveMutexAndCloseZombie(db); } return rc; } /* @@ -51763,30 +72375,32 @@ ** the prior execution is returned. ** ** This routine sets the error code and string returned by ** sqlite3_errcode(), sqlite3_errmsg() and sqlite3_errmsg16(). */ -SQLITE_API int sqlite3_reset(sqlite3_stmt *pStmt){ +SQLITE_API int SQLITE_STDCALL sqlite3_reset(sqlite3_stmt *pStmt){ int rc; if( pStmt==0 ){ rc = SQLITE_OK; }else{ Vdbe *v = (Vdbe*)pStmt; - sqlite3_mutex_enter(v->db->mutex); + sqlite3 *db = v->db; + sqlite3_mutex_enter(db->mutex); + checkProfileCallback(db, v); rc = sqlite3VdbeReset(v); - sqlite3VdbeMakeReady(v, -1, 0, 0, 0, 0, 0); - assert( (rc & (v->db->errMask))==rc ); - rc = sqlite3ApiExit(v->db, rc); - sqlite3_mutex_leave(v->db->mutex); + sqlite3VdbeRewind(v); + assert( (rc & (db->errMask))==rc ); + rc = sqlite3ApiExit(db, rc); + sqlite3_mutex_leave(db->mutex); } return rc; } /* ** Set all the parameters in the compiled SQL statement to NULL. */ -SQLITE_API int sqlite3_clear_bindings(sqlite3_stmt *pStmt){ +SQLITE_API int SQLITE_STDCALL sqlite3_clear_bindings(sqlite3_stmt *pStmt){ int i; int rc = SQLITE_OK; Vdbe *p = (Vdbe*)pStmt; #if SQLITE_THREADSAFE sqlite3_mutex *mutex = ((Vdbe*)pStmt)->db->mutex; @@ -51806,180 +72420,351 @@ /**************************** sqlite3_value_ ******************************* ** The following routines extract information from a Mem or sqlite3_value ** structure. */ -SQLITE_API const void *sqlite3_value_blob(sqlite3_value *pVal){ +SQLITE_API const void *SQLITE_STDCALL sqlite3_value_blob(sqlite3_value *pVal){ Mem *p = (Mem*)pVal; if( p->flags & (MEM_Blob|MEM_Str) ){ - sqlite3VdbeMemExpandBlob(p); - p->flags &= ~MEM_Str; + if( sqlite3VdbeMemExpandBlob(p)!=SQLITE_OK ){ + assert( p->flags==MEM_Null && p->z==0 ); + return 0; + } p->flags |= MEM_Blob; - return p->z; + return p->n ? 
p->z : 0; }else{ return sqlite3_value_text(pVal); } } -SQLITE_API int sqlite3_value_bytes(sqlite3_value *pVal){ +SQLITE_API int SQLITE_STDCALL sqlite3_value_bytes(sqlite3_value *pVal){ return sqlite3ValueBytes(pVal, SQLITE_UTF8); } -SQLITE_API int sqlite3_value_bytes16(sqlite3_value *pVal){ +SQLITE_API int SQLITE_STDCALL sqlite3_value_bytes16(sqlite3_value *pVal){ return sqlite3ValueBytes(pVal, SQLITE_UTF16NATIVE); } -SQLITE_API double sqlite3_value_double(sqlite3_value *pVal){ +SQLITE_API double SQLITE_STDCALL sqlite3_value_double(sqlite3_value *pVal){ return sqlite3VdbeRealValue((Mem*)pVal); } -SQLITE_API int sqlite3_value_int(sqlite3_value *pVal){ +SQLITE_API int SQLITE_STDCALL sqlite3_value_int(sqlite3_value *pVal){ return (int)sqlite3VdbeIntValue((Mem*)pVal); } -SQLITE_API sqlite_int64 sqlite3_value_int64(sqlite3_value *pVal){ +SQLITE_API sqlite_int64 SQLITE_STDCALL sqlite3_value_int64(sqlite3_value *pVal){ return sqlite3VdbeIntValue((Mem*)pVal); } -SQLITE_API const unsigned char *sqlite3_value_text(sqlite3_value *pVal){ +SQLITE_API unsigned int SQLITE_STDCALL sqlite3_value_subtype(sqlite3_value *pVal){ + Mem *pMem = (Mem*)pVal; + return ((pMem->flags & MEM_Subtype) ? pMem->eSubtype : 0); +} +SQLITE_API const unsigned char *SQLITE_STDCALL sqlite3_value_text(sqlite3_value *pVal){ return (const unsigned char *)sqlite3ValueText(pVal, SQLITE_UTF8); } #ifndef SQLITE_OMIT_UTF16 -SQLITE_API const void *sqlite3_value_text16(sqlite3_value* pVal){ +SQLITE_API const void *SQLITE_STDCALL sqlite3_value_text16(sqlite3_value* pVal){ return sqlite3ValueText(pVal, SQLITE_UTF16NATIVE); } -SQLITE_API const void *sqlite3_value_text16be(sqlite3_value *pVal){ +SQLITE_API const void *SQLITE_STDCALL sqlite3_value_text16be(sqlite3_value *pVal){ return sqlite3ValueText(pVal, SQLITE_UTF16BE); } -SQLITE_API const void *sqlite3_value_text16le(sqlite3_value *pVal){ +SQLITE_API const void *SQLITE_STDCALL sqlite3_value_text16le(sqlite3_value *pVal){ return sqlite3ValueText(pVal, SQLITE_UTF16LE); } #endif /* SQLITE_OMIT_UTF16 */ -SQLITE_API int sqlite3_value_type(sqlite3_value* pVal){ - return pVal->type; +/* EVIDENCE-OF: R-12793-43283 Every value in SQLite has one of five +** fundamental datatypes: 64-bit signed integer 64-bit IEEE floating +** point number string BLOB NULL +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_value_type(sqlite3_value* pVal){ + static const u8 aType[] = { + SQLITE_BLOB, /* 0x00 */ + SQLITE_NULL, /* 0x01 */ + SQLITE_TEXT, /* 0x02 */ + SQLITE_NULL, /* 0x03 */ + SQLITE_INTEGER, /* 0x04 */ + SQLITE_NULL, /* 0x05 */ + SQLITE_INTEGER, /* 0x06 */ + SQLITE_NULL, /* 0x07 */ + SQLITE_FLOAT, /* 0x08 */ + SQLITE_NULL, /* 0x09 */ + SQLITE_FLOAT, /* 0x0a */ + SQLITE_NULL, /* 0x0b */ + SQLITE_INTEGER, /* 0x0c */ + SQLITE_NULL, /* 0x0d */ + SQLITE_INTEGER, /* 0x0e */ + SQLITE_NULL, /* 0x0f */ + SQLITE_BLOB, /* 0x10 */ + SQLITE_NULL, /* 0x11 */ + SQLITE_TEXT, /* 0x12 */ + SQLITE_NULL, /* 0x13 */ + SQLITE_INTEGER, /* 0x14 */ + SQLITE_NULL, /* 0x15 */ + SQLITE_INTEGER, /* 0x16 */ + SQLITE_NULL, /* 0x17 */ + SQLITE_FLOAT, /* 0x18 */ + SQLITE_NULL, /* 0x19 */ + SQLITE_FLOAT, /* 0x1a */ + SQLITE_NULL, /* 0x1b */ + SQLITE_INTEGER, /* 0x1c */ + SQLITE_NULL, /* 0x1d */ + SQLITE_INTEGER, /* 0x1e */ + SQLITE_NULL, /* 0x1f */ + }; + return aType[pVal->flags&MEM_AffMask]; } + +/* Make a copy of an sqlite3_value object +*/ +SQLITE_API sqlite3_value *SQLITE_STDCALL sqlite3_value_dup(const sqlite3_value *pOrig){ + sqlite3_value *pNew; + if( pOrig==0 ) return 0; + pNew = sqlite3_malloc( sizeof(*pNew) ); + if( pNew==0 ) return 
0; + memset(pNew, 0, sizeof(*pNew)); + memcpy(pNew, pOrig, MEMCELLSIZE); + pNew->flags &= ~MEM_Dyn; + pNew->db = 0; + if( pNew->flags&(MEM_Str|MEM_Blob) ){ + pNew->flags &= ~(MEM_Static|MEM_Dyn); + pNew->flags |= MEM_Ephem; + if( sqlite3VdbeMemMakeWriteable(pNew)!=SQLITE_OK ){ + sqlite3ValueFree(pNew); + pNew = 0; + } + } + return pNew; +} + +/* Destroy an sqlite3_value object previously obtained from +** sqlite3_value_dup(). +*/ +SQLITE_API void SQLITE_STDCALL sqlite3_value_free(sqlite3_value *pOld){ + sqlite3ValueFree(pOld); +} + /**************************** sqlite3_result_ ******************************* ** The following routines are used by user-defined functions to specify ** the function result. ** -** The setStrOrError() funtion calls sqlite3VdbeMemSetStr() to store the +** The setStrOrError() function calls sqlite3VdbeMemSetStr() to store the ** result as a string or blob but if the string or blob is too large, it ** then sets the error code to SQLITE_TOOBIG +** +** The invokeValueDestructor(P,X) routine invokes destructor function X() +** on value P is not going to be used and need to be destroyed. */ static void setResultStrOrError( sqlite3_context *pCtx, /* Function context */ const char *z, /* String pointer */ int n, /* Bytes in string, or negative */ u8 enc, /* Encoding of z. 0 for BLOBs */ void (*xDel)(void*) /* Destructor function */ ){ - if( sqlite3VdbeMemSetStr(&pCtx->s, z, n, enc, xDel)==SQLITE_TOOBIG ){ + if( sqlite3VdbeMemSetStr(pCtx->pOut, z, n, enc, xDel)==SQLITE_TOOBIG ){ sqlite3_result_error_toobig(pCtx); } } -SQLITE_API void sqlite3_result_blob( +static int invokeValueDestructor( + const void *p, /* Value to destroy */ + void (*xDel)(void*), /* The destructor */ + sqlite3_context *pCtx /* Set a SQLITE_TOOBIG error if no NULL */ +){ + assert( xDel!=SQLITE_DYNAMIC ); + if( xDel==0 ){ + /* noop */ + }else if( xDel==SQLITE_TRANSIENT ){ + /* noop */ + }else{ + xDel((void*)p); + } + if( pCtx ) sqlite3_result_error_toobig(pCtx); + return SQLITE_TOOBIG; +} +SQLITE_API void SQLITE_STDCALL sqlite3_result_blob( sqlite3_context *pCtx, const void *z, int n, void (*xDel)(void *) ){ assert( n>=0 ); - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); setResultStrOrError(pCtx, z, n, 0, xDel); } -SQLITE_API void sqlite3_result_double(sqlite3_context *pCtx, double rVal){ - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - sqlite3VdbeMemSetDouble(&pCtx->s, rVal); -} -SQLITE_API void sqlite3_result_error(sqlite3_context *pCtx, const char *z, int n){ - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - pCtx->isError = SQLITE_ERROR; - sqlite3VdbeMemSetStr(&pCtx->s, z, n, SQLITE_UTF8, SQLITE_TRANSIENT); -} -#ifndef SQLITE_OMIT_UTF16 -SQLITE_API void sqlite3_result_error16(sqlite3_context *pCtx, const void *z, int n){ - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - pCtx->isError = SQLITE_ERROR; - sqlite3VdbeMemSetStr(&pCtx->s, z, n, SQLITE_UTF16NATIVE, SQLITE_TRANSIENT); -} -#endif -SQLITE_API void sqlite3_result_int(sqlite3_context *pCtx, int iVal){ - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - sqlite3VdbeMemSetInt64(&pCtx->s, (i64)iVal); -} -SQLITE_API void sqlite3_result_int64(sqlite3_context *pCtx, i64 iVal){ - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - sqlite3VdbeMemSetInt64(&pCtx->s, iVal); -} -SQLITE_API void sqlite3_result_null(sqlite3_context *pCtx){ - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - sqlite3VdbeMemSetNull(&pCtx->s); -} -SQLITE_API void sqlite3_result_text( +SQLITE_API void 
SQLITE_STDCALL sqlite3_result_blob64( + sqlite3_context *pCtx, + const void *z, + sqlite3_uint64 n, + void (*xDel)(void *) +){ + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + assert( xDel!=SQLITE_DYNAMIC ); + if( n>0x7fffffff ){ + (void)invokeValueDestructor(z, xDel, pCtx); + }else{ + setResultStrOrError(pCtx, z, (int)n, 0, xDel); + } +} +SQLITE_API void SQLITE_STDCALL sqlite3_result_double(sqlite3_context *pCtx, double rVal){ + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + sqlite3VdbeMemSetDouble(pCtx->pOut, rVal); +} +SQLITE_API void SQLITE_STDCALL sqlite3_result_error(sqlite3_context *pCtx, const char *z, int n){ + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + pCtx->isError = SQLITE_ERROR; + pCtx->fErrorOrAux = 1; + sqlite3VdbeMemSetStr(pCtx->pOut, z, n, SQLITE_UTF8, SQLITE_TRANSIENT); +} +#ifndef SQLITE_OMIT_UTF16 +SQLITE_API void SQLITE_STDCALL sqlite3_result_error16(sqlite3_context *pCtx, const void *z, int n){ + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + pCtx->isError = SQLITE_ERROR; + pCtx->fErrorOrAux = 1; + sqlite3VdbeMemSetStr(pCtx->pOut, z, n, SQLITE_UTF16NATIVE, SQLITE_TRANSIENT); +} +#endif +SQLITE_API void SQLITE_STDCALL sqlite3_result_int(sqlite3_context *pCtx, int iVal){ + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + sqlite3VdbeMemSetInt64(pCtx->pOut, (i64)iVal); +} +SQLITE_API void SQLITE_STDCALL sqlite3_result_int64(sqlite3_context *pCtx, i64 iVal){ + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + sqlite3VdbeMemSetInt64(pCtx->pOut, iVal); +} +SQLITE_API void SQLITE_STDCALL sqlite3_result_null(sqlite3_context *pCtx){ + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + sqlite3VdbeMemSetNull(pCtx->pOut); +} +SQLITE_API void SQLITE_STDCALL sqlite3_result_subtype(sqlite3_context *pCtx, unsigned int eSubtype){ + Mem *pOut = pCtx->pOut; + assert( sqlite3_mutex_held(pOut->db->mutex) ); + pOut->eSubtype = eSubtype & 0xff; + pOut->flags |= MEM_Subtype; +} +SQLITE_API void SQLITE_STDCALL sqlite3_result_text( sqlite3_context *pCtx, const char *z, int n, void (*xDel)(void *) ){ - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - setResultStrOrError(pCtx, z, n, SQLITE_UTF8, xDel); -} -#ifndef SQLITE_OMIT_UTF16 -SQLITE_API void sqlite3_result_text16( - sqlite3_context *pCtx, - const void *z, - int n, - void (*xDel)(void *) -){ - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - setResultStrOrError(pCtx, z, n, SQLITE_UTF16NATIVE, xDel); -} -SQLITE_API void sqlite3_result_text16be( - sqlite3_context *pCtx, - const void *z, - int n, - void (*xDel)(void *) -){ - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - setResultStrOrError(pCtx, z, n, SQLITE_UTF16BE, xDel); -} -SQLITE_API void sqlite3_result_text16le( - sqlite3_context *pCtx, - const void *z, - int n, - void (*xDel)(void *) -){ - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - setResultStrOrError(pCtx, z, n, SQLITE_UTF16LE, xDel); -} -#endif /* SQLITE_OMIT_UTF16 */ -SQLITE_API void sqlite3_result_value(sqlite3_context *pCtx, sqlite3_value *pValue){ - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - sqlite3VdbeMemCopy(&pCtx->s, pValue); -} -SQLITE_API void sqlite3_result_zeroblob(sqlite3_context *pCtx, int n){ - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - sqlite3VdbeMemSetZeroBlob(&pCtx->s, n); -} -SQLITE_API void sqlite3_result_error_code(sqlite3_context *pCtx, int errCode){ - pCtx->isError = errCode; - if( pCtx->s.flags & MEM_Null ){ - sqlite3VdbeMemSetStr(&pCtx->s, sqlite3ErrStr(errCode), -1, - SQLITE_UTF8, SQLITE_STATIC); - } -} - -/* Force an 
SQLITE_TOOBIG error. */ -SQLITE_API void sqlite3_result_error_toobig(sqlite3_context *pCtx){ - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - pCtx->isError = SQLITE_TOOBIG; - sqlite3VdbeMemSetStr(&pCtx->s, "string or blob too big", -1, - SQLITE_UTF8, SQLITE_STATIC); -} - -/* An SQLITE_NOMEM error. */ -SQLITE_API void sqlite3_result_error_nomem(sqlite3_context *pCtx){ - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - sqlite3VdbeMemSetNull(&pCtx->s); - pCtx->isError = SQLITE_NOMEM; - pCtx->s.db->mallocFailed = 1; -} + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + setResultStrOrError(pCtx, z, n, SQLITE_UTF8, xDel); +} +SQLITE_API void SQLITE_STDCALL sqlite3_result_text64( + sqlite3_context *pCtx, + const char *z, + sqlite3_uint64 n, + void (*xDel)(void *), + unsigned char enc +){ + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + assert( xDel!=SQLITE_DYNAMIC ); + if( enc==SQLITE_UTF16 ) enc = SQLITE_UTF16NATIVE; + if( n>0x7fffffff ){ + (void)invokeValueDestructor(z, xDel, pCtx); + }else{ + setResultStrOrError(pCtx, z, (int)n, enc, xDel); + } +} +#ifndef SQLITE_OMIT_UTF16 +SQLITE_API void SQLITE_STDCALL sqlite3_result_text16( + sqlite3_context *pCtx, + const void *z, + int n, + void (*xDel)(void *) +){ + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + setResultStrOrError(pCtx, z, n, SQLITE_UTF16NATIVE, xDel); +} +SQLITE_API void SQLITE_STDCALL sqlite3_result_text16be( + sqlite3_context *pCtx, + const void *z, + int n, + void (*xDel)(void *) +){ + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + setResultStrOrError(pCtx, z, n, SQLITE_UTF16BE, xDel); +} +SQLITE_API void SQLITE_STDCALL sqlite3_result_text16le( + sqlite3_context *pCtx, + const void *z, + int n, + void (*xDel)(void *) +){ + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + setResultStrOrError(pCtx, z, n, SQLITE_UTF16LE, xDel); +} +#endif /* SQLITE_OMIT_UTF16 */ +SQLITE_API void SQLITE_STDCALL sqlite3_result_value(sqlite3_context *pCtx, sqlite3_value *pValue){ + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + sqlite3VdbeMemCopy(pCtx->pOut, pValue); +} +SQLITE_API void SQLITE_STDCALL sqlite3_result_zeroblob(sqlite3_context *pCtx, int n){ + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + sqlite3VdbeMemSetZeroBlob(pCtx->pOut, n); +} +SQLITE_API int SQLITE_STDCALL sqlite3_result_zeroblob64(sqlite3_context *pCtx, u64 n){ + Mem *pOut = pCtx->pOut; + assert( sqlite3_mutex_held(pOut->db->mutex) ); + if( n>(u64)pOut->db->aLimit[SQLITE_LIMIT_LENGTH] ){ + return SQLITE_TOOBIG; + } + sqlite3VdbeMemSetZeroBlob(pCtx->pOut, (int)n); + return SQLITE_OK; +} +SQLITE_API void SQLITE_STDCALL sqlite3_result_error_code(sqlite3_context *pCtx, int errCode){ + pCtx->isError = errCode; + pCtx->fErrorOrAux = 1; +#ifdef SQLITE_DEBUG + if( pCtx->pVdbe ) pCtx->pVdbe->rcApp = errCode; +#endif + if( pCtx->pOut->flags & MEM_Null ){ + sqlite3VdbeMemSetStr(pCtx->pOut, sqlite3ErrStr(errCode), -1, + SQLITE_UTF8, SQLITE_STATIC); + } +} + +/* Force an SQLITE_TOOBIG error. */ +SQLITE_API void SQLITE_STDCALL sqlite3_result_error_toobig(sqlite3_context *pCtx){ + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + pCtx->isError = SQLITE_TOOBIG; + pCtx->fErrorOrAux = 1; + sqlite3VdbeMemSetStr(pCtx->pOut, "string or blob too big", -1, + SQLITE_UTF8, SQLITE_STATIC); +} + +/* An SQLITE_NOMEM error. 
*/ +SQLITE_API void SQLITE_STDCALL sqlite3_result_error_nomem(sqlite3_context *pCtx){ + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); + sqlite3VdbeMemSetNull(pCtx->pOut); + pCtx->isError = SQLITE_NOMEM; + pCtx->fErrorOrAux = 1; + sqlite3OomFault(pCtx->pOut->db); +} + +/* +** This function is called after a transaction has been committed. It +** invokes callbacks registered with sqlite3_wal_hook() as required. +*/ +static int doWalCallbacks(sqlite3 *db){ + int rc = SQLITE_OK; +#ifndef SQLITE_OMIT_WAL + int i; + for(i=0; i<db->nDb; i++){ + Btree *pBt = db->aDb[i].pBt; + if( pBt ){ + int nEntry; + sqlite3BtreeEnter(pBt); + nEntry = sqlite3PagerWalCallback(sqlite3BtreePager(pBt)); + sqlite3BtreeLeave(pBt); + if( db->xWalCallback && nEntry>0 && rc==SQLITE_OK ){ + rc = db->xWalCallback(db->pWalArg, db, db->aDb[i].zName, nEntry); + } + } + } +#endif + return rc; +} + /* ** Execute the statement pStmt, either until a row of data is ready, the ** statement is completely executed or an error occurs. ** @@ -51992,13 +72777,35 @@ sqlite3 *db; int rc; assert(p); if( p->magic!=VDBE_MAGIC_RUN ){ - sqlite3_log(SQLITE_MISUSE, - "attempt to step a halted statement: [%s]", p->zSql); - return SQLITE_MISUSE_BKPT; + /* We used to require that sqlite3_reset() be called before retrying + ** sqlite3_step() after any error or after SQLITE_DONE. But beginning + ** with version 3.7.0, we changed this so that sqlite3_reset() would + ** be called automatically instead of throwing the SQLITE_MISUSE error. + ** This "automatic-reset" change is not technically an incompatibility, + ** since any application that receives an SQLITE_MISUSE is broken by + ** definition. + ** + ** Nevertheless, some published applications that were originally written + ** for version 3.6.23 or earlier do in fact depend on SQLITE_MISUSE + ** returns, and those were broken by the automatic-reset change. As a + ** a work-around, the SQLITE_OMIT_AUTORESET compile-time restores the + ** legacy behavior of returning SQLITE_MISUSE for cases where the + ** previous sqlite3_step() returned something other than a SQLITE_LOCKED + ** or SQLITE_BUSY error. + */ +#ifdef SQLITE_OMIT_AUTORESET + if( (rc = p->rc&0xff)==SQLITE_BUSY || rc==SQLITE_LOCKED ){ + sqlite3_reset((sqlite3_stmt*)p); + }else{ + return SQLITE_MISUSE_BKPT; + } +#else + sqlite3_reset((sqlite3_stmt*)p); +#endif } /* Check that malloc() has not failed. If it has, return early. */ db = p->db; if( db->mallocFailed ){ @@ -52014,50 +72821,57 @@ if( p->pc<0 ){ /* If there are no other statements currently running, then ** reset the interrupt flag. This prevents a call to sqlite3_interrupt ** from interrupting a statement that has not yet started. 
*/ - if( db->activeVdbeCnt==0 ){ + if( db->nVdbeActive==0 ){ db->u1.isInterrupted = 0; } - assert( db->writeVdbeCnt>0 || db->autoCommit==0 || db->nDeferredCons==0 ); + assert( db->nVdbeWrite>0 || db->autoCommit==0 + || (db->nDeferredCons==0 && db->nDeferredImmCons==0) + ); #ifndef SQLITE_OMIT_TRACE - if( db->xProfile && !db->init.busy ){ - double rNow; - sqlite3OsCurrentTime(db->pVfs, &rNow); - p->startTime = (u64)((rNow - (int)rNow)*3600.0*24.0*1000000000.0); + if( db->xProfile && !db->init.busy && p->zSql ){ + sqlite3OsCurrentTimeInt64(db->pVfs, &p->startTime); + }else{ + assert( p->startTime==0 ); } #endif - db->activeVdbeCnt++; - if( p->readOnly==0 ) db->writeVdbeCnt++; + db->nVdbeActive++; + if( p->readOnly==0 ) db->nVdbeWrite++; + if( p->bIsReader ) db->nVdbeRead++; p->pc = 0; } +#ifdef SQLITE_DEBUG + p->rcApp = SQLITE_OK; +#endif #ifndef SQLITE_OMIT_EXPLAIN if( p->explain ){ rc = sqlite3VdbeList(p); }else #endif /* SQLITE_OMIT_EXPLAIN */ { + db->nVdbeExec++; rc = sqlite3VdbeExec(p); + db->nVdbeExec--; } #ifndef SQLITE_OMIT_TRACE - /* Invoke the profile callback if there is one - */ - if( rc!=SQLITE_ROW && db->xProfile && !db->init.busy && p->zSql ){ - double rNow; - u64 elapseTime; - - sqlite3OsCurrentTime(db->pVfs, &rNow); - elapseTime = (u64)((rNow - (int)rNow)*3600.0*24.0*1000000000.0); - elapseTime -= p->startTime; - db->xProfile(db->pProfileArg, p->zSql, elapseTime); - } -#endif + /* If the statement completed successfully, invoke the profile callback */ + if( rc!=SQLITE_ROW ) checkProfileCallback(db, p); +#endif + + if( rc==SQLITE_DONE ){ + assert( p->rc==SQLITE_OK ); + p->rc = doWalCallbacks(db); + if( p->rc!=SQLITE_OK ){ + rc = SQLITE_ERROR; + } + } db->errCode = rc; if( SQLITE_NOMEM==sqlite3ApiExit(p->db, p->rc) ){ p->rc = SQLITE_NOMEM; } @@ -52068,29 +72882,29 @@ ** be one of the values in the first assert() below. Variable p->rc ** contains the value that would be returned if sqlite3_finalize() ** were called on statement p. */ assert( rc==SQLITE_ROW || rc==SQLITE_DONE || rc==SQLITE_ERROR - || rc==SQLITE_BUSY || rc==SQLITE_MISUSE + || (rc&0xff)==SQLITE_BUSY || rc==SQLITE_MISUSE ); - assert( p->rc!=SQLITE_ROW && p->rc!=SQLITE_DONE ); + assert( (p->rc!=SQLITE_ROW && p->rc!=SQLITE_DONE) || p->rc==p->rcApp ); if( p->isPrepareV2 && rc!=SQLITE_ROW && rc!=SQLITE_DONE ){ /* If this statement was prepared using sqlite3_prepare_v2(), and an - ** error has occured, then return the error code in p->rc to the + ** error has occurred, then return the error code in p->rc to the ** caller. Set the error code in the database handle to the same value. */ - rc = db->errCode = p->rc; + rc = sqlite3VdbeTransferError(p); } return (rc&db->errMask); } /* ** This is the top-level implementation of sqlite3_step(). Call ** sqlite3Step() to do most of the work. If a schema error occurs, ** call sqlite3Reprepare() and try again. 
*/ -SQLITE_API int sqlite3_step(sqlite3_stmt *pStmt){ +SQLITE_API int SQLITE_STDCALL sqlite3_step(sqlite3_stmt *pStmt){ int rc = SQLITE_OK; /* Result from sqlite3Step() */ int rc2 = SQLITE_OK; /* Result from sqlite3Reprepare() */ Vdbe *v = (Vdbe*)pStmt; /* the prepared statement */ int cnt = 0; /* Counter to prevent infinite loop of reprepares */ sqlite3 *db; /* The database connection */ @@ -52098,17 +72912,21 @@ if( vdbeSafetyNotNull(v) ){ return SQLITE_MISUSE_BKPT; } db = v->db; sqlite3_mutex_enter(db->mutex); + v->doingRerun = 0; while( (rc = sqlite3Step(v))==SQLITE_SCHEMA - && cnt++ < 5 - && (rc2 = rc = sqlite3Reprepare(v))==SQLITE_OK ){ + && cnt++ < SQLITE_MAX_SCHEMA_RETRY ){ + int savedPc = v->pc; + rc2 = rc = sqlite3Reprepare(v); + if( rc!=SQLITE_OK) break; sqlite3_reset(pStmt); - v->expired = 0; + if( savedPc>=0 ) v->doingRerun = 1; + assert( v->expired==0 ); } - if( rc2!=SQLITE_OK && ALWAYS(v->isPrepareV2) && ALWAYS(db->pErr) ){ + if( rc2!=SQLITE_OK ){ /* This case occurs after failing to recompile an sql statement. ** The error message from the SQL compiler has already been loaded ** into the database handle. This block copies the error message ** from the database handle into the statement and sets the statement ** program counter to 0 to ensure that when the statement is @@ -52127,27 +72945,57 @@ } rc = sqlite3ApiExit(db, rc); sqlite3_mutex_leave(db->mutex); return rc; } + /* ** Extract the user data from a sqlite3_context structure and return a ** pointer to it. */ -SQLITE_API void *sqlite3_user_data(sqlite3_context *p){ +SQLITE_API void *SQLITE_STDCALL sqlite3_user_data(sqlite3_context *p){ assert( p && p->pFunc ); return p->pFunc->pUserData; } /* ** Extract the user data from a sqlite3_context structure and return a ** pointer to it. +** +** IMPLEMENTATION-OF: R-46798-50301 The sqlite3_context_db_handle() interface +** returns a copy of the pointer to the database connection (the 1st +** parameter) of the sqlite3_create_function() and +** sqlite3_create_function16() routines that originally registered the +** application defined function. */ -SQLITE_API sqlite3 *sqlite3_context_db_handle(sqlite3_context *p){ - assert( p && p->pFunc ); - return p->s.db; +SQLITE_API sqlite3 *SQLITE_STDCALL sqlite3_context_db_handle(sqlite3_context *p){ + assert( p && p->pOut ); + return p->pOut->db; +} + +/* +** Return the current time for a statement. If the current time +** is requested more than once within the same run of a single prepared +** statement, the exact same time is returned for each invocation regardless +** of the amount of time that elapses between invocations. In other words, +** the time returned is always the time of the first call. +*/ +SQLITE_PRIVATE sqlite3_int64 sqlite3StmtCurrentTime(sqlite3_context *p){ + int rc; +#ifndef SQLITE_ENABLE_STAT3_OR_STAT4 + sqlite3_int64 *piTime = &p->pVdbe->iCurrentTime; + assert( p->pVdbe!=0 ); +#else + sqlite3_int64 iTime = 0; + sqlite3_int64 *piTime = p->pVdbe!=0 ? &p->pVdbe->iCurrentTime : &iTime; +#endif + if( *piTime==0 ){ + rc = sqlite3OsCurrentTimeInt64(p->pOut->db->pVfs, piTime); + if( rc ) *piTime = 0; + } + return *piTime; } /* ** The following is the implementation of an SQL function that always ** fails with an error message stating that the function is used in the @@ -52169,86 +73017,106 @@ sqlite3_result_error(context, zErr, -1); sqlite3_free(zErr); } /* -** Allocate or return the aggregate context for a user function. A new -** context is allocated on the first call. 
Subsequent calls return the -** same context that was returned on prior calls. +** Create a new aggregate context for p and return a pointer to +** its pMem->z element. */ -SQLITE_API void *sqlite3_aggregate_context(sqlite3_context *p, int nByte){ - Mem *pMem; - assert( p && p->pFunc && p->pFunc->xStep ); - assert( sqlite3_mutex_held(p->s.db->mutex) ); - pMem = p->pMem; - testcase( nByte<0 ); - if( (pMem->flags & MEM_Agg)==0 ){ - if( nByte<=0 ){ - sqlite3VdbeMemReleaseExternal(pMem); - pMem->flags = MEM_Null; - pMem->z = 0; - }else{ - sqlite3VdbeMemGrow(pMem, nByte, 0); - pMem->flags = MEM_Agg; - pMem->u.pDef = p->pFunc; - if( pMem->z ){ - memset(pMem->z, 0, nByte); - } +static SQLITE_NOINLINE void *createAggContext(sqlite3_context *p, int nByte){ + Mem *pMem = p->pMem; + assert( (pMem->flags & MEM_Agg)==0 ); + if( nByte<=0 ){ + sqlite3VdbeMemSetNull(pMem); + pMem->z = 0; + }else{ + sqlite3VdbeMemClearAndResize(pMem, nByte); + pMem->flags = MEM_Agg; + pMem->u.pDef = p->pFunc; + if( pMem->z ){ + memset(pMem->z, 0, nByte); } } return (void*)pMem->z; } /* -** Return the auxilary data pointer, if any, for the iArg'th argument to +** Allocate or return the aggregate context for a user function. A new +** context is allocated on the first call. Subsequent calls return the +** same context that was returned on prior calls. +*/ +SQLITE_API void *SQLITE_STDCALL sqlite3_aggregate_context(sqlite3_context *p, int nByte){ + assert( p && p->pFunc && p->pFunc->xFinalize ); + assert( sqlite3_mutex_held(p->pOut->db->mutex) ); + testcase( nByte<0 ); + if( (p->pMem->flags & MEM_Agg)==0 ){ + return createAggContext(p, nByte); + }else{ + return (void*)p->pMem->z; + } +} + +/* +** Return the auxiliary data pointer, if any, for the iArg'th argument to ** the user-function defined by pCtx. */ -SQLITE_API void *sqlite3_get_auxdata(sqlite3_context *pCtx, int iArg){ - VdbeFunc *pVdbeFunc; - - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - pVdbeFunc = pCtx->pVdbeFunc; - if( !pVdbeFunc || iArg>=pVdbeFunc->nAux || iArg<0 ){ - return 0; - } - return pVdbeFunc->apAux[iArg].pAux; +SQLITE_API void *SQLITE_STDCALL sqlite3_get_auxdata(sqlite3_context *pCtx, int iArg){ + AuxData *pAuxData; + + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); +#if SQLITE_ENABLE_STAT3_OR_STAT4 + if( pCtx->pVdbe==0 ) return 0; +#else + assert( pCtx->pVdbe!=0 ); +#endif + for(pAuxData=pCtx->pVdbe->pAuxData; pAuxData; pAuxData=pAuxData->pNext){ + if( pAuxData->iOp==pCtx->iOp && pAuxData->iArg==iArg ) break; + } + + return (pAuxData ? pAuxData->pAux : 0); } /* -** Set the auxilary data pointer and delete function, for the iArg'th +** Set the auxiliary data pointer and delete function, for the iArg'th ** argument to the user-function defined by pCtx. Any previous value is ** deleted by calling the delete function specified when it was set. */ -SQLITE_API void sqlite3_set_auxdata( +SQLITE_API void SQLITE_STDCALL sqlite3_set_auxdata( sqlite3_context *pCtx, int iArg, void *pAux, void (*xDelete)(void*) ){ - struct AuxData *pAuxData; - VdbeFunc *pVdbeFunc; + AuxData *pAuxData; + Vdbe *pVdbe = pCtx->pVdbe; + + assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) ); if( iArg<0 ) goto failed; - - assert( sqlite3_mutex_held(pCtx->s.db->mutex) ); - pVdbeFunc = pCtx->pVdbeFunc; - if( !pVdbeFunc || pVdbeFunc->nAux<=iArg ){ - int nAux = (pVdbeFunc ? 
pVdbeFunc->nAux : 0); - int nMalloc = sizeof(VdbeFunc) + sizeof(struct AuxData)*iArg; - pVdbeFunc = sqlite3DbRealloc(pCtx->s.db, pVdbeFunc, nMalloc); - if( !pVdbeFunc ){ - goto failed; - } - pCtx->pVdbeFunc = pVdbeFunc; - memset(&pVdbeFunc->apAux[nAux], 0, sizeof(struct AuxData)*(iArg+1-nAux)); - pVdbeFunc->nAux = iArg+1; - pVdbeFunc->pFunc = pCtx->pFunc; - } - - pAuxData = &pVdbeFunc->apAux[iArg]; - if( pAuxData->pAux && pAuxData->xDelete ){ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + if( pVdbe==0 ) goto failed; +#else + assert( pVdbe!=0 ); +#endif + + for(pAuxData=pVdbe->pAuxData; pAuxData; pAuxData=pAuxData->pNext){ + if( pAuxData->iOp==pCtx->iOp && pAuxData->iArg==iArg ) break; + } + if( pAuxData==0 ){ + pAuxData = sqlite3DbMallocZero(pVdbe->db, sizeof(AuxData)); + if( !pAuxData ) goto failed; + pAuxData->iOp = pCtx->iOp; + pAuxData->iArg = iArg; + pAuxData->pNext = pVdbe->pAuxData; + pVdbe->pAuxData = pAuxData; + if( pCtx->fErrorOrAux==0 ){ + pCtx->isError = 0; + pCtx->fErrorOrAux = 1; + } + }else if( pAuxData->xDelete ){ pAuxData->xDelete(pAuxData->pAux); } + pAuxData->pAux = pAux; pAuxData->xDelete = xDelete; return; failed: @@ -52257,82 +73125,99 @@ } } #ifndef SQLITE_OMIT_DEPRECATED /* -** Return the number of times the Step function of a aggregate has been +** Return the number of times the Step function of an aggregate has been ** called. ** ** This function is deprecated. Do not use it for new code. It is ** provide only to avoid breaking legacy code. New aggregate function ** implementations should keep their own counts within their aggregate ** context. */ -SQLITE_API int sqlite3_aggregate_count(sqlite3_context *p){ - assert( p && p->pMem && p->pFunc && p->pFunc->xStep ); +SQLITE_API int SQLITE_STDCALL sqlite3_aggregate_count(sqlite3_context *p){ + assert( p && p->pMem && p->pFunc && p->pFunc->xFinalize ); return p->pMem->n; } #endif /* ** Return the number of columns in the result set for the statement pStmt. */ -SQLITE_API int sqlite3_column_count(sqlite3_stmt *pStmt){ +SQLITE_API int SQLITE_STDCALL sqlite3_column_count(sqlite3_stmt *pStmt){ Vdbe *pVm = (Vdbe *)pStmt; return pVm ? pVm->nResColumn : 0; } /* ** Return the number of values available from the current row of the ** currently executing statement pStmt. */ -SQLITE_API int sqlite3_data_count(sqlite3_stmt *pStmt){ +SQLITE_API int SQLITE_STDCALL sqlite3_data_count(sqlite3_stmt *pStmt){ Vdbe *pVm = (Vdbe *)pStmt; if( pVm==0 || pVm->pResultSet==0 ) return 0; return pVm->nResColumn; } +/* +** Return a pointer to static memory containing an SQL NULL value. +*/ +static const Mem *columnNullValue(void){ + /* Even though the Mem structure contains an element + ** of type i64, on certain architectures (x86) with certain compiler + ** switches (-Os), gcc may align this Mem object on a 4-byte boundary + ** instead of an 8-byte one. This all works fine, except that when + ** running with SQLITE_DEBUG defined the SQLite code sometimes assert()s + ** that a Mem structure is located on an 8-byte boundary. To prevent + ** these assert()s from failing, when building with SQLITE_DEBUG defined + ** using gcc, we force nullMem to be 8-byte aligned using the magical + ** __attribute__((aligned(8))) macro. 
*/ + static const Mem nullMem +#if defined(SQLITE_DEBUG) && defined(__GNUC__) + __attribute__((aligned(8))) +#endif + = { + /* .u = */ {0}, + /* .flags = */ (u16)MEM_Null, + /* .enc = */ (u8)0, + /* .eSubtype = */ (u8)0, + /* .n = */ (int)0, + /* .z = */ (char*)0, + /* .zMalloc = */ (char*)0, + /* .szMalloc = */ (int)0, + /* .uTemp = */ (u32)0, + /* .db = */ (sqlite3*)0, + /* .xDel = */ (void(*)(void*))0, +#ifdef SQLITE_DEBUG + /* .pScopyFrom = */ (Mem*)0, + /* .pFiller = */ (void*)0, +#endif + }; + return &nullMem; +} /* ** Check to see if column iCol of the given statement is valid. If ** it is, return a pointer to the Mem for the value of that column. ** If iCol is not valid, return a pointer to a Mem which has a value ** of NULL. */ static Mem *columnMem(sqlite3_stmt *pStmt, int i){ Vdbe *pVm; - int vals; Mem *pOut; pVm = (Vdbe *)pStmt; if( pVm && pVm->pResultSet!=0 && i<pVm->nResColumn && i>=0 ){ sqlite3_mutex_enter(pVm->db->mutex); - vals = sqlite3_data_count(pStmt); pOut = &pVm->pResultSet[i]; }else{ - /* If the value passed as the second argument is out of range, return - ** a pointer to the following static Mem object which contains the - ** value SQL NULL. Even though the Mem structure contains an element - ** of type i64, on certain architecture (x86) with certain compiler - ** switches (-Os), gcc may align this Mem object on a 4-byte boundary - ** instead of an 8-byte one. This all works fine, except that when - ** running with SQLITE_DEBUG defined the SQLite code sometimes assert()s - ** that a Mem structure is located on an 8-byte boundary. To prevent - ** this assert() from failing, when building with SQLITE_DEBUG defined - ** using gcc, force nullMem to be 8-byte aligned using the magical - ** __attribute__((aligned(8))) macro. */ - static const Mem nullMem -#if defined(SQLITE_DEBUG) && defined(__GNUC__) - __attribute__((aligned(8))) -#endif - = {{0}, (double)0, 0, "", 0, MEM_Null, SQLITE_NULL, 0, 0, 0 }; - if( pVm && ALWAYS(pVm->db) ){ sqlite3_mutex_enter(pVm->db->mutex); - sqlite3Error(pVm->db, SQLITE_RANGE, 0); + sqlite3Error(pVm->db, SQLITE_RANGE); } - pOut = (Mem*)&nullMem; + pOut = (Mem*)columnNullValue(); } return pOut; } /* @@ -52349,12 +73234,11 @@ ** sqlite3_column_text() ** sqlite3_column_text16() ** sqlite3_column_real() ** sqlite3_column_bytes() ** sqlite3_column_bytes16() -** -** But not for sqlite3_column_blob(), which never calls malloc(). +** sqiite3_column_blob() */ static void columnMallocFailure(sqlite3_stmt *pStmt) { /* If malloc() failed during an encoding conversion within an ** sqlite3_column_XXX API, then set the return code of the statement to @@ -52370,79 +73254,72 @@ /**************************** sqlite3_column_ ******************************* ** The following routines are used to access elements of the current row ** in the result set. */ -SQLITE_API const void *sqlite3_column_blob(sqlite3_stmt *pStmt, int i){ +SQLITE_API const void *SQLITE_STDCALL sqlite3_column_blob(sqlite3_stmt *pStmt, int i){ const void *val; val = sqlite3_value_blob( columnMem(pStmt,i) ); /* Even though there is no encoding conversion, value_blob() might ** need to call malloc() to expand the result of a zeroblob() ** expression. 
*/ columnMallocFailure(pStmt); return val; } -SQLITE_API int sqlite3_column_bytes(sqlite3_stmt *pStmt, int i){ +SQLITE_API int SQLITE_STDCALL sqlite3_column_bytes(sqlite3_stmt *pStmt, int i){ int val = sqlite3_value_bytes( columnMem(pStmt,i) ); columnMallocFailure(pStmt); return val; } -SQLITE_API int sqlite3_column_bytes16(sqlite3_stmt *pStmt, int i){ +SQLITE_API int SQLITE_STDCALL sqlite3_column_bytes16(sqlite3_stmt *pStmt, int i){ int val = sqlite3_value_bytes16( columnMem(pStmt,i) ); columnMallocFailure(pStmt); return val; } -SQLITE_API double sqlite3_column_double(sqlite3_stmt *pStmt, int i){ +SQLITE_API double SQLITE_STDCALL sqlite3_column_double(sqlite3_stmt *pStmt, int i){ double val = sqlite3_value_double( columnMem(pStmt,i) ); columnMallocFailure(pStmt); return val; } -SQLITE_API int sqlite3_column_int(sqlite3_stmt *pStmt, int i){ +SQLITE_API int SQLITE_STDCALL sqlite3_column_int(sqlite3_stmt *pStmt, int i){ int val = sqlite3_value_int( columnMem(pStmt,i) ); columnMallocFailure(pStmt); return val; } -SQLITE_API sqlite_int64 sqlite3_column_int64(sqlite3_stmt *pStmt, int i){ +SQLITE_API sqlite_int64 SQLITE_STDCALL sqlite3_column_int64(sqlite3_stmt *pStmt, int i){ sqlite_int64 val = sqlite3_value_int64( columnMem(pStmt,i) ); columnMallocFailure(pStmt); return val; } -SQLITE_API const unsigned char *sqlite3_column_text(sqlite3_stmt *pStmt, int i){ +SQLITE_API const unsigned char *SQLITE_STDCALL sqlite3_column_text(sqlite3_stmt *pStmt, int i){ const unsigned char *val = sqlite3_value_text( columnMem(pStmt,i) ); columnMallocFailure(pStmt); return val; } -SQLITE_API sqlite3_value *sqlite3_column_value(sqlite3_stmt *pStmt, int i){ +SQLITE_API sqlite3_value *SQLITE_STDCALL sqlite3_column_value(sqlite3_stmt *pStmt, int i){ Mem *pOut = columnMem(pStmt, i); if( pOut->flags&MEM_Static ){ pOut->flags &= ~MEM_Static; pOut->flags |= MEM_Ephem; } columnMallocFailure(pStmt); return (sqlite3_value *)pOut; } #ifndef SQLITE_OMIT_UTF16 -SQLITE_API const void *sqlite3_column_text16(sqlite3_stmt *pStmt, int i){ +SQLITE_API const void *SQLITE_STDCALL sqlite3_column_text16(sqlite3_stmt *pStmt, int i){ const void *val = sqlite3_value_text16( columnMem(pStmt,i) ); columnMallocFailure(pStmt); return val; } #endif /* SQLITE_OMIT_UTF16 */ -SQLITE_API int sqlite3_column_type(sqlite3_stmt *pStmt, int i){ +SQLITE_API int SQLITE_STDCALL sqlite3_column_type(sqlite3_stmt *pStmt, int i){ int iType = sqlite3_value_type( columnMem(pStmt,i) ); columnMallocFailure(pStmt); return iType; } -/* The following function is experimental and subject to change or -** removal */ -/*int sqlite3_column_numeric_type(sqlite3_stmt *pStmt, int i){ -** return sqlite3_value_numeric_type( columnMem(pStmt,i) ); -**} -*/ - /* ** Convert the N-th element of pStmt->pColName[] into a string using ** xFunc() then return that string. If N is out of range, return 0. ** ** There are up to 5 names for each column. useType determines which @@ -52461,15 +73338,23 @@ sqlite3_stmt *pStmt, int N, const void *(*xFunc)(Mem*), int useType ){ - const void *ret = 0; - Vdbe *p = (Vdbe *)pStmt; + const void *ret; + Vdbe *p; int n; - sqlite3 *db = p->db; - + sqlite3 *db; +#ifdef SQLITE_ENABLE_API_ARMOR + if( pStmt==0 ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif + ret = 0; + p = (Vdbe *)pStmt; + db = p->db; assert( db!=0 ); n = sqlite3_column_count(pStmt); if( N<n && N>=0 ){ N += useType*n; sqlite3_mutex_enter(db->mutex); @@ -52477,11 +73362,11 @@ ret = xFunc(&p->aColName[N]); /* A malloc may have failed inside of the xFunc() call. 
If this ** is the case, clear the mallocFailed flag and return NULL. */ if( db->mallocFailed ){ - db->mallocFailed = 0; + sqlite3OomClear(db); ret = 0; } sqlite3_mutex_leave(db->mutex); } return ret; @@ -52489,16 +73374,16 @@ /* ** Return the name of the Nth column of the result set returned by SQL ** statement pStmt. */ -SQLITE_API const char *sqlite3_column_name(sqlite3_stmt *pStmt, int N){ +SQLITE_API const char *SQLITE_STDCALL sqlite3_column_name(sqlite3_stmt *pStmt, int N){ return columnName( pStmt, N, (const void*(*)(Mem*))sqlite3_value_text, COLNAME_NAME); } #ifndef SQLITE_OMIT_UTF16 -SQLITE_API const void *sqlite3_column_name16(sqlite3_stmt *pStmt, int N){ +SQLITE_API const void *SQLITE_STDCALL sqlite3_column_name16(sqlite3_stmt *pStmt, int N){ return columnName( pStmt, N, (const void*(*)(Mem*))sqlite3_value_text16, COLNAME_NAME); } #endif @@ -52514,16 +73399,16 @@ #ifndef SQLITE_OMIT_DECLTYPE /* ** Return the column declaration type (if applicable) of the 'i'th column ** of the result set of SQL statement pStmt. */ -SQLITE_API const char *sqlite3_column_decltype(sqlite3_stmt *pStmt, int N){ +SQLITE_API const char *SQLITE_STDCALL sqlite3_column_decltype(sqlite3_stmt *pStmt, int N){ return columnName( pStmt, N, (const void*(*)(Mem*))sqlite3_value_text, COLNAME_DECLTYPE); } #ifndef SQLITE_OMIT_UTF16 -SQLITE_API const void *sqlite3_column_decltype16(sqlite3_stmt *pStmt, int N){ +SQLITE_API const void *SQLITE_STDCALL sqlite3_column_decltype16(sqlite3_stmt *pStmt, int N){ return columnName( pStmt, N, (const void*(*)(Mem*))sqlite3_value_text16, COLNAME_DECLTYPE); } #endif /* SQLITE_OMIT_UTF16 */ #endif /* SQLITE_OMIT_DECLTYPE */ @@ -52530,50 +73415,50 @@ #ifdef SQLITE_ENABLE_COLUMN_METADATA /* ** Return the name of the database from which a result column derives. ** NULL is returned if the result column is an expression or constant or -** anything else which is not an unabiguous reference to a database column. +** anything else which is not an unambiguous reference to a database column. */ -SQLITE_API const char *sqlite3_column_database_name(sqlite3_stmt *pStmt, int N){ +SQLITE_API const char *SQLITE_STDCALL sqlite3_column_database_name(sqlite3_stmt *pStmt, int N){ return columnName( pStmt, N, (const void*(*)(Mem*))sqlite3_value_text, COLNAME_DATABASE); } #ifndef SQLITE_OMIT_UTF16 -SQLITE_API const void *sqlite3_column_database_name16(sqlite3_stmt *pStmt, int N){ +SQLITE_API const void *SQLITE_STDCALL sqlite3_column_database_name16(sqlite3_stmt *pStmt, int N){ return columnName( pStmt, N, (const void*(*)(Mem*))sqlite3_value_text16, COLNAME_DATABASE); } #endif /* SQLITE_OMIT_UTF16 */ /* ** Return the name of the table from which a result column derives. ** NULL is returned if the result column is an expression or constant or -** anything else which is not an unabiguous reference to a database column. +** anything else which is not an unambiguous reference to a database column. 
*/ -SQLITE_API const char *sqlite3_column_table_name(sqlite3_stmt *pStmt, int N){ +SQLITE_API const char *SQLITE_STDCALL sqlite3_column_table_name(sqlite3_stmt *pStmt, int N){ return columnName( pStmt, N, (const void*(*)(Mem*))sqlite3_value_text, COLNAME_TABLE); } #ifndef SQLITE_OMIT_UTF16 -SQLITE_API const void *sqlite3_column_table_name16(sqlite3_stmt *pStmt, int N){ +SQLITE_API const void *SQLITE_STDCALL sqlite3_column_table_name16(sqlite3_stmt *pStmt, int N){ return columnName( pStmt, N, (const void*(*)(Mem*))sqlite3_value_text16, COLNAME_TABLE); } #endif /* SQLITE_OMIT_UTF16 */ /* ** Return the name of the table column from which a result column derives. ** NULL is returned if the result column is an expression or constant or -** anything else which is not an unabiguous reference to a database column. +** anything else which is not an unambiguous reference to a database column. */ -SQLITE_API const char *sqlite3_column_origin_name(sqlite3_stmt *pStmt, int N){ +SQLITE_API const char *SQLITE_STDCALL sqlite3_column_origin_name(sqlite3_stmt *pStmt, int N){ return columnName( pStmt, N, (const void*(*)(Mem*))sqlite3_value_text, COLNAME_COLUMN); } #ifndef SQLITE_OMIT_UTF16 -SQLITE_API const void *sqlite3_column_origin_name16(sqlite3_stmt *pStmt, int N){ +SQLITE_API const void *SQLITE_STDCALL sqlite3_column_origin_name16(sqlite3_stmt *pStmt, int N){ return columnName( pStmt, N, (const void*(*)(Mem*))sqlite3_value_text16, COLNAME_COLUMN); } #endif /* SQLITE_OMIT_UTF16 */ #endif /* SQLITE_ENABLE_COLUMN_METADATA */ @@ -52599,29 +73484,35 @@ if( vdbeSafetyNotNull(p) ){ return SQLITE_MISUSE_BKPT; } sqlite3_mutex_enter(p->db->mutex); if( p->magic!=VDBE_MAGIC_RUN || p->pc>=0 ){ - sqlite3Error(p->db, SQLITE_MISUSE, 0); + sqlite3Error(p->db, SQLITE_MISUSE); sqlite3_mutex_leave(p->db->mutex); sqlite3_log(SQLITE_MISUSE, "bind on a busy prepared statement: [%s]", p->zSql); return SQLITE_MISUSE_BKPT; } if( i<1 || i>p->nVar ){ - sqlite3Error(p->db, SQLITE_RANGE, 0); + sqlite3Error(p->db, SQLITE_RANGE); sqlite3_mutex_leave(p->db->mutex); return SQLITE_RANGE; } i--; pVar = &p->aVar[i]; sqlite3VdbeMemRelease(pVar); pVar->flags = MEM_Null; - sqlite3Error(p->db, SQLITE_OK, 0); + sqlite3Error(p->db, SQLITE_OK); /* If the bit corresponding to this variable in Vdbe.expmask is set, then ** binding a new value to this variable invalidates the current query plan. + ** + ** IMPLEMENTATION-OF: R-48440-37595 If the specific value bound to host + ** parameter in the WHERE clause might influence the choice of query plan + ** for a statement, then the statement will be automatically recompiled, + ** as if there had been a schema change, on the first sqlite3_step() call + ** following any change to the bindings of that parameter. */ if( p->isPrepareV2 && ((i<32 && p->expmask & ((u32)1 << i)) || p->expmask==0xffffffff) ){ p->expired = 1; @@ -52650,92 +73541,124 @@ pVar = &p->aVar[i-1]; rc = sqlite3VdbeMemSetStr(pVar, zData, nData, encoding, xDel); if( rc==SQLITE_OK && encoding!=0 ){ rc = sqlite3VdbeChangeEncoding(pVar, ENC(p->db)); } - sqlite3Error(p->db, rc, 0); + sqlite3Error(p->db, rc); rc = sqlite3ApiExit(p->db, rc); } sqlite3_mutex_leave(p->db->mutex); + }else if( xDel!=SQLITE_STATIC && xDel!=SQLITE_TRANSIENT ){ + xDel((void*)zData); } return rc; } /* ** Bind a blob value to an SQL statement variable. 
*/ -SQLITE_API int sqlite3_bind_blob( +SQLITE_API int SQLITE_STDCALL sqlite3_bind_blob( sqlite3_stmt *pStmt, int i, const void *zData, int nData, void (*xDel)(void*) ){ return bindText(pStmt, i, zData, nData, xDel, 0); } -SQLITE_API int sqlite3_bind_double(sqlite3_stmt *pStmt, int i, double rValue){ +SQLITE_API int SQLITE_STDCALL sqlite3_bind_blob64( + sqlite3_stmt *pStmt, + int i, + const void *zData, + sqlite3_uint64 nData, + void (*xDel)(void*) +){ + assert( xDel!=SQLITE_DYNAMIC ); + if( nData>0x7fffffff ){ + return invokeValueDestructor(zData, xDel, 0); + }else{ + return bindText(pStmt, i, zData, (int)nData, xDel, 0); + } +} +SQLITE_API int SQLITE_STDCALL sqlite3_bind_double(sqlite3_stmt *pStmt, int i, double rValue){ int rc; Vdbe *p = (Vdbe *)pStmt; rc = vdbeUnbind(p, i); if( rc==SQLITE_OK ){ sqlite3VdbeMemSetDouble(&p->aVar[i-1], rValue); sqlite3_mutex_leave(p->db->mutex); } return rc; } -SQLITE_API int sqlite3_bind_int(sqlite3_stmt *p, int i, int iValue){ +SQLITE_API int SQLITE_STDCALL sqlite3_bind_int(sqlite3_stmt *p, int i, int iValue){ return sqlite3_bind_int64(p, i, (i64)iValue); } -SQLITE_API int sqlite3_bind_int64(sqlite3_stmt *pStmt, int i, sqlite_int64 iValue){ +SQLITE_API int SQLITE_STDCALL sqlite3_bind_int64(sqlite3_stmt *pStmt, int i, sqlite_int64 iValue){ int rc; Vdbe *p = (Vdbe *)pStmt; rc = vdbeUnbind(p, i); if( rc==SQLITE_OK ){ sqlite3VdbeMemSetInt64(&p->aVar[i-1], iValue); sqlite3_mutex_leave(p->db->mutex); } return rc; } -SQLITE_API int sqlite3_bind_null(sqlite3_stmt *pStmt, int i){ +SQLITE_API int SQLITE_STDCALL sqlite3_bind_null(sqlite3_stmt *pStmt, int i){ int rc; Vdbe *p = (Vdbe*)pStmt; rc = vdbeUnbind(p, i); if( rc==SQLITE_OK ){ sqlite3_mutex_leave(p->db->mutex); } return rc; } -SQLITE_API int sqlite3_bind_text( +SQLITE_API int SQLITE_STDCALL sqlite3_bind_text( sqlite3_stmt *pStmt, int i, const char *zData, int nData, void (*xDel)(void*) ){ return bindText(pStmt, i, zData, nData, xDel, SQLITE_UTF8); +} +SQLITE_API int SQLITE_STDCALL sqlite3_bind_text64( + sqlite3_stmt *pStmt, + int i, + const char *zData, + sqlite3_uint64 nData, + void (*xDel)(void*), + unsigned char enc +){ + assert( xDel!=SQLITE_DYNAMIC ); + if( nData>0x7fffffff ){ + return invokeValueDestructor(zData, xDel, 0); + }else{ + if( enc==SQLITE_UTF16 ) enc = SQLITE_UTF16NATIVE; + return bindText(pStmt, i, zData, (int)nData, xDel, enc); + } } #ifndef SQLITE_OMIT_UTF16 -SQLITE_API int sqlite3_bind_text16( +SQLITE_API int SQLITE_STDCALL sqlite3_bind_text16( sqlite3_stmt *pStmt, int i, const void *zData, int nData, void (*xDel)(void*) ){ return bindText(pStmt, i, zData, nData, xDel, SQLITE_UTF16NATIVE); } #endif /* SQLITE_OMIT_UTF16 */ -SQLITE_API int sqlite3_bind_value(sqlite3_stmt *pStmt, int i, const sqlite3_value *pValue){ +SQLITE_API int SQLITE_STDCALL sqlite3_bind_value(sqlite3_stmt *pStmt, int i, const sqlite3_value *pValue){ int rc; - switch( pValue->type ){ + switch( sqlite3_value_type((sqlite3_value*)pValue) ){ case SQLITE_INTEGER: { rc = sqlite3_bind_int64(pStmt, i, pValue->u.i); break; } case SQLITE_FLOAT: { - rc = sqlite3_bind_double(pStmt, i, pValue->r); + rc = sqlite3_bind_double(pStmt, i, pValue->u.r); break; } case SQLITE_BLOB: { if( pValue->flags & MEM_Zero ){ rc = sqlite3_bind_zeroblob(pStmt, i, pValue->u.nZero); @@ -52754,68 +73677,55 @@ break; } } return rc; } -SQLITE_API int sqlite3_bind_zeroblob(sqlite3_stmt *pStmt, int i, int n){ +SQLITE_API int SQLITE_STDCALL sqlite3_bind_zeroblob(sqlite3_stmt *pStmt, int i, int n){ int rc; Vdbe *p = (Vdbe *)pStmt; rc = vdbeUnbind(p, i); if( 
rc==SQLITE_OK ){ sqlite3VdbeMemSetZeroBlob(&p->aVar[i-1], n); sqlite3_mutex_leave(p->db->mutex); } return rc; } +SQLITE_API int SQLITE_STDCALL sqlite3_bind_zeroblob64(sqlite3_stmt *pStmt, int i, sqlite3_uint64 n){ + int rc; + Vdbe *p = (Vdbe *)pStmt; + sqlite3_mutex_enter(p->db->mutex); + if( n>(u64)p->db->aLimit[SQLITE_LIMIT_LENGTH] ){ + rc = SQLITE_TOOBIG; + }else{ + assert( (n & 0x7FFFFFFF)==n ); + rc = sqlite3_bind_zeroblob(pStmt, i, n); + } + rc = sqlite3ApiExit(p->db, rc); + sqlite3_mutex_leave(p->db->mutex); + return rc; +} /* ** Return the number of wildcards that can be potentially bound to. ** This routine is added to support DBD::SQLite. */ -SQLITE_API int sqlite3_bind_parameter_count(sqlite3_stmt *pStmt){ +SQLITE_API int SQLITE_STDCALL sqlite3_bind_parameter_count(sqlite3_stmt *pStmt){ Vdbe *p = (Vdbe*)pStmt; return p ? p->nVar : 0; } -/* -** Create a mapping from variable numbers to variable names -** in the Vdbe.azVar[] array, if such a mapping does not already -** exist. -*/ -static void createVarMap(Vdbe *p){ - if( !p->okVar ){ - int j; - Op *pOp; - sqlite3_mutex_enter(p->db->mutex); - /* The race condition here is harmless. If two threads call this - ** routine on the same Vdbe at the same time, they both might end - ** up initializing the Vdbe.azVar[] array. That is a little extra - ** work but it results in the same answer. - */ - for(j=0, pOp=p->aOp; j<p->nOp; j++, pOp++){ - if( pOp->opcode==OP_Variable ){ - assert( pOp->p1>0 && pOp->p1<=p->nVar ); - p->azVar[pOp->p1-1] = pOp->p4.z; - } - } - p->okVar = 1; - sqlite3_mutex_leave(p->db->mutex); - } -} - /* ** Return the name of a wildcard parameter. Return NULL if the index ** is out of range or if the wildcard is unnamed. ** ** The result is always UTF-8. */ -SQLITE_API const char *sqlite3_bind_parameter_name(sqlite3_stmt *pStmt, int i){ +SQLITE_API const char *SQLITE_STDCALL sqlite3_bind_parameter_name(sqlite3_stmt *pStmt, int i){ Vdbe *p = (Vdbe*)pStmt; - if( p==0 || i<1 || i>p->nVar ){ + if( p==0 || i<1 || i>p->nzVar ){ return 0; } - createVarMap(p); return p->azVar[i-1]; } /* ** Given a wildcard parameter name, return the index of the variable @@ -52825,22 +73735,21 @@ SQLITE_PRIVATE int sqlite3VdbeParameterIndex(Vdbe *p, const char *zName, int nName){ int i; if( p==0 ){ return 0; } - createVarMap(p); if( zName ){ - for(i=0; i<p->nVar; i++){ + for(i=0; i<p->nzVar; i++){ const char *z = p->azVar[i]; - if( z && memcmp(z,zName,nName)==0 && z[nName]==0 ){ + if( z && strncmp(z,zName,nName)==0 && z[nName]==0 ){ return i+1; } } } return 0; } -SQLITE_API int sqlite3_bind_parameter_index(sqlite3_stmt *pStmt, const char *zName){ +SQLITE_API int SQLITE_STDCALL sqlite3_bind_parameter_index(sqlite3_stmt *pStmt, const char *zName){ return sqlite3VdbeParameterIndex((Vdbe*)pStmt, zName, sqlite3Strlen30(zName)); } /* ** Transfer all bindings from the first statement over to the second. @@ -52862,19 +73771,19 @@ #ifndef SQLITE_OMIT_DEPRECATED /* ** Deprecated external interface. Internal/core SQLite code ** should call sqlite3TransferBindings. ** -** Is is misuse to call this routine with statements from different +** It is misuse to call this routine with statements from different ** database connections. But as this is a deprecated interface, we ** will not bother to check for that condition. ** ** If the two statements contain a different number of bindings, then ** an SQLITE_ERROR is returned. Nothing else can go wrong, so otherwise ** SQLITE_OK is returned. 
*/ -SQLITE_API int sqlite3_transfer_bindings(sqlite3_stmt *pFromStmt, sqlite3_stmt *pToStmt){ +SQLITE_API int SQLITE_STDCALL sqlite3_transfer_bindings(sqlite3_stmt *pFromStmt, sqlite3_stmt *pToStmt){ Vdbe *pFrom = (Vdbe*)pFromStmt; Vdbe *pTo = (Vdbe*)pToStmt; if( pFrom->nVar!=pTo->nVar ){ return SQLITE_ERROR; } @@ -52892,22 +73801,44 @@ ** Return the sqlite3* database handle to which the prepared statement given ** in the argument belongs. This is the same database handle that was ** the first argument to the sqlite3_prepare() that was used to create ** the statement in the first place. */ -SQLITE_API sqlite3 *sqlite3_db_handle(sqlite3_stmt *pStmt){ +SQLITE_API sqlite3 *SQLITE_STDCALL sqlite3_db_handle(sqlite3_stmt *pStmt){ return pStmt ? ((Vdbe*)pStmt)->db : 0; } + +/* +** Return true if the prepared statement is guaranteed to not modify the +** database. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_stmt_readonly(sqlite3_stmt *pStmt){ + return pStmt ? ((Vdbe*)pStmt)->readOnly : 1; +} + +/* +** Return true if the prepared statement is in need of being reset. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_stmt_busy(sqlite3_stmt *pStmt){ + Vdbe *v = (Vdbe*)pStmt; + return v!=0 && v->pc>=0 && v->magic==VDBE_MAGIC_RUN; +} /* ** Return a pointer to the next prepared statement after pStmt associated ** with database connection pDb. If pStmt is NULL, return the first ** prepared statement for the database connection. Return NULL if there ** are no more. */ -SQLITE_API sqlite3_stmt *sqlite3_next_stmt(sqlite3 *pDb, sqlite3_stmt *pStmt){ +SQLITE_API sqlite3_stmt *SQLITE_STDCALL sqlite3_next_stmt(sqlite3 *pDb, sqlite3_stmt *pStmt){ sqlite3_stmt *pNext; +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(pDb) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif sqlite3_mutex_enter(pDb->mutex); if( pStmt==0 ){ pNext = (sqlite3_stmt*)pDb->pVdbe; }else{ pNext = (sqlite3_stmt*)((Vdbe*)pStmt)->pNext; @@ -52917,17 +73848,93 @@ } /* ** Return the value of a status counter for a prepared statement */ -SQLITE_API int sqlite3_stmt_status(sqlite3_stmt *pStmt, int op, int resetFlag){ +SQLITE_API int SQLITE_STDCALL sqlite3_stmt_status(sqlite3_stmt *pStmt, int op, int resetFlag){ Vdbe *pVdbe = (Vdbe*)pStmt; - int v = pVdbe->aCounter[op-1]; - if( resetFlag ) pVdbe->aCounter[op-1] = 0; - return v; + u32 v; +#ifdef SQLITE_ENABLE_API_ARMOR + if( !pStmt ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif + v = pVdbe->aCounter[op]; + if( resetFlag ) pVdbe->aCounter[op] = 0; + return (int)v; } +#ifdef SQLITE_ENABLE_STMT_SCANSTATUS +/* +** Return status data for a single loop within query pStmt. 
+*/ +SQLITE_API int SQLITE_STDCALL sqlite3_stmt_scanstatus( + sqlite3_stmt *pStmt, /* Prepared statement being queried */ + int idx, /* Index of loop to report on */ + int iScanStatusOp, /* Which metric to return */ + void *pOut /* OUT: Write the answer here */ +){ + Vdbe *p = (Vdbe*)pStmt; + ScanStatus *pScan; + if( idx<0 || idx>=p->nScan ) return 1; + pScan = &p->aScan[idx]; + switch( iScanStatusOp ){ + case SQLITE_SCANSTAT_NLOOP: { + *(sqlite3_int64*)pOut = p->anExec[pScan->addrLoop]; + break; + } + case SQLITE_SCANSTAT_NVISIT: { + *(sqlite3_int64*)pOut = p->anExec[pScan->addrVisit]; + break; + } + case SQLITE_SCANSTAT_EST: { + double r = 1.0; + LogEst x = pScan->nEst; + while( x<100 ){ + x += 10; + r *= 0.5; + } + *(double*)pOut = r*sqlite3LogEstToInt(x); + break; + } + case SQLITE_SCANSTAT_NAME: { + *(const char**)pOut = pScan->zName; + break; + } + case SQLITE_SCANSTAT_EXPLAIN: { + if( pScan->addrExplain ){ + *(const char**)pOut = p->aOp[ pScan->addrExplain ].p4.z; + }else{ + *(const char**)pOut = 0; + } + break; + } + case SQLITE_SCANSTAT_SELECTID: { + if( pScan->addrExplain ){ + *(int*)pOut = p->aOp[ pScan->addrExplain ].p1; + }else{ + *(int*)pOut = -1; + } + break; + } + default: { + return 1; + } + } + return 0; +} + +/* +** Zero all counters associated with the sqlite3_stmt_scanstatus() data. +*/ +SQLITE_API void SQLITE_STDCALL sqlite3_stmt_scanstatus_reset(sqlite3_stmt *pStmt){ + Vdbe *p = (Vdbe*)pStmt; + memset(p->anExec, 0, p->nOp * sizeof(i64)); +} +#endif /* SQLITE_ENABLE_STMT_SCANSTATUS */ + /************** End of vdbeapi.c *********************************************/ /************** Begin file vdbetrace.c ***************************************/ /* ** 2009 November 25 ** @@ -52940,11 +73947,15 @@ ** ************************************************************************* ** ** This file contains code used to insert the values of host parameters ** (aka "wildcards") into the SQL text output by sqlite3_trace(). +** +** The Vdbe parse-tree explainer is also found here. */ +/* #include "sqliteInt.h" */ +/* #include "vdbeInt.h" */ #ifndef SQLITE_OMIT_TRACE /* ** zSql is a zero-terminated string of UTF-8 SQL text. Return the number of @@ -52970,21 +73981,29 @@ } return nTotal; } /* -** Return a pointer to a string in memory obtained form sqlite3DbMalloc() which -** holds a copy of zRawSql but with host parameters expanded to their -** current bindings. +** This function returns a pointer to a nul-terminated string in memory +** obtained from sqlite3DbMalloc(). If sqlite3.nVdbeExec is 1, then the +** string contains a copy of zRawSql but with host parameters expanded to +** their current bindings. Or, if sqlite3.nVdbeExec is greater than 1, +** then the returned string holds a copy of zRawSql with "-- " prepended +** to each line of text. +** +** If the SQLITE_TRACE_SIZE_LIMIT macro is defined to an integer, then +** then long strings and blobs are truncated to that many bytes. This +** can be used to prevent unreasonably large trace strings when dealing +** with large (multi-megabyte) strings and blobs. ** ** The calling function is responsible for making sure the memory returned ** is eventually freed. ** ** ALGORITHM: Scan the input string looking for host parameters in any of ** these forms: ?, ?N, $A, @A, :A. Take care to avoid text within ** string literals, quoted identifier names, and comments. 
For text forms, -** the host parameter index is found by scanning the perpared +** the host parameter index is found by scanning the prepared ** statement for the corresponding OP_Variable opcode. Once the host ** parameter index is known, locate the value in p->aVar[]. Then render ** the value as a literal in place of the host parameter name. */ SQLITE_PRIVATE char *sqlite3VdbeExpandSql( @@ -53000,70 +74019,106 @@ Mem *pVar; /* Value of a host parameter */ StrAccum out; /* Accumulate the output here */ char zBase[100]; /* Initial working space */ db = p->db; - sqlite3StrAccumInit(&out, zBase, sizeof(zBase), + sqlite3StrAccumInit(&out, db, zBase, sizeof(zBase), db->aLimit[SQLITE_LIMIT_LENGTH]); - out.db = db; - while( zRawSql[0] ){ - n = findNextHostParameter(zRawSql, &nToken); - assert( n>0 ); - sqlite3StrAccumAppend(&out, zRawSql, n); - zRawSql += n; - assert( zRawSql[0] || nToken==0 ); - if( nToken==0 ) break; - if( zRawSql[0]=='?' ){ - if( nToken>1 ){ - assert( sqlite3Isdigit(zRawSql[1]) ); - sqlite3GetInt32(&zRawSql[1], &idx); - }else{ - idx = nextIndex; - } - }else{ - assert( zRawSql[0]==':' || zRawSql[0]=='$' || zRawSql[0]=='@' ); - testcase( zRawSql[0]==':' ); - testcase( zRawSql[0]=='$' ); - testcase( zRawSql[0]=='@' ); - idx = sqlite3VdbeParameterIndex(p, zRawSql, nToken); - assert( idx>0 ); - } - zRawSql += nToken; - nextIndex = idx + 1; - assert( idx>0 && idx<=p->nVar ); - pVar = &p->aVar[idx-1]; - if( pVar->flags & MEM_Null ){ - sqlite3StrAccumAppend(&out, "NULL", 4); - }else if( pVar->flags & MEM_Int ){ - sqlite3XPrintf(&out, "%lld", pVar->u.i); - }else if( pVar->flags & MEM_Real ){ - sqlite3XPrintf(&out, "%!.15g", pVar->r); - }else if( pVar->flags & MEM_Str ){ + if( db->nVdbeExec>1 ){ + while( *zRawSql ){ + const char *zStart = zRawSql; + while( *(zRawSql++)!='\n' && *zRawSql ); + sqlite3StrAccumAppend(&out, "-- ", 3); + assert( (zRawSql - zStart) > 0 ); + sqlite3StrAccumAppend(&out, zStart, (int)(zRawSql-zStart)); + } + }else if( p->nVar==0 ){ + sqlite3StrAccumAppend(&out, zRawSql, sqlite3Strlen30(zRawSql)); + }else{ + while( zRawSql[0] ){ + n = findNextHostParameter(zRawSql, &nToken); + assert( n>0 ); + sqlite3StrAccumAppend(&out, zRawSql, n); + zRawSql += n; + assert( zRawSql[0] || nToken==0 ); + if( nToken==0 ) break; + if( zRawSql[0]=='?' 
){ + if( nToken>1 ){ + assert( sqlite3Isdigit(zRawSql[1]) ); + sqlite3GetInt32(&zRawSql[1], &idx); + }else{ + idx = nextIndex; + } + }else{ + assert( zRawSql[0]==':' || zRawSql[0]=='$' || + zRawSql[0]=='@' || zRawSql[0]=='#' ); + testcase( zRawSql[0]==':' ); + testcase( zRawSql[0]=='$' ); + testcase( zRawSql[0]=='@' ); + testcase( zRawSql[0]=='#' ); + idx = sqlite3VdbeParameterIndex(p, zRawSql, nToken); + assert( idx>0 ); + } + zRawSql += nToken; + nextIndex = idx + 1; + assert( idx>0 && idx<=p->nVar ); + pVar = &p->aVar[idx-1]; + if( pVar->flags & MEM_Null ){ + sqlite3StrAccumAppend(&out, "NULL", 4); + }else if( pVar->flags & MEM_Int ){ + sqlite3XPrintf(&out, "%lld", pVar->u.i); + }else if( pVar->flags & MEM_Real ){ + sqlite3XPrintf(&out, "%!.15g", pVar->u.r); + }else if( pVar->flags & MEM_Str ){ + int nOut; /* Number of bytes of the string text to include in output */ #ifndef SQLITE_OMIT_UTF16 - u8 enc = ENC(db); - if( enc!=SQLITE_UTF8 ){ + u8 enc = ENC(db); Mem utf8; - memset(&utf8, 0, sizeof(utf8)); - utf8.db = db; - sqlite3VdbeMemSetStr(&utf8, pVar->z, pVar->n, enc, SQLITE_STATIC); - sqlite3VdbeChangeEncoding(&utf8, SQLITE_UTF8); - sqlite3XPrintf(&out, "'%.*q'", utf8.n, utf8.z); - sqlite3VdbeMemRelease(&utf8); - }else + if( enc!=SQLITE_UTF8 ){ + memset(&utf8, 0, sizeof(utf8)); + utf8.db = db; + sqlite3VdbeMemSetStr(&utf8, pVar->z, pVar->n, enc, SQLITE_STATIC); + sqlite3VdbeChangeEncoding(&utf8, SQLITE_UTF8); + pVar = &utf8; + } #endif - { - sqlite3XPrintf(&out, "'%.*q'", pVar->n, pVar->z); - } - }else if( pVar->flags & MEM_Zero ){ - sqlite3XPrintf(&out, "zeroblob(%d)", pVar->u.nZero); - }else{ - assert( pVar->flags & MEM_Blob ); - sqlite3StrAccumAppend(&out, "x'", 2); - for(i=0; i<pVar->n; i++){ - sqlite3XPrintf(&out, "%02x", pVar->z[i]&0xff); - } - sqlite3StrAccumAppend(&out, "'", 1); + nOut = pVar->n; +#ifdef SQLITE_TRACE_SIZE_LIMIT + if( nOut>SQLITE_TRACE_SIZE_LIMIT ){ + nOut = SQLITE_TRACE_SIZE_LIMIT; + while( nOut<pVar->n && (pVar->z[nOut]&0xc0)==0x80 ){ nOut++; } + } +#endif + sqlite3XPrintf(&out, "'%.*q'", nOut, pVar->z); +#ifdef SQLITE_TRACE_SIZE_LIMIT + if( nOut<pVar->n ){ + sqlite3XPrintf(&out, "/*+%d bytes*/", pVar->n-nOut); + } +#endif +#ifndef SQLITE_OMIT_UTF16 + if( enc!=SQLITE_UTF8 ) sqlite3VdbeMemRelease(&utf8); +#endif + }else if( pVar->flags & MEM_Zero ){ + sqlite3XPrintf(&out, "zeroblob(%d)", pVar->u.nZero); + }else{ + int nOut; /* Number of bytes of the blob to include in output */ + assert( pVar->flags & MEM_Blob ); + sqlite3StrAccumAppend(&out, "x'", 2); + nOut = pVar->n; +#ifdef SQLITE_TRACE_SIZE_LIMIT + if( nOut>SQLITE_TRACE_SIZE_LIMIT ) nOut = SQLITE_TRACE_SIZE_LIMIT; +#endif + for(i=0; i<nOut; i++){ + sqlite3XPrintf(&out, "%02x", pVar->z[i]&0xff); + } + sqlite3StrAccumAppend(&out, "'", 1); +#ifdef SQLITE_TRACE_SIZE_LIMIT + if( nOut<pVar->n ){ + sqlite3XPrintf(&out, "/*+%d bytes*/", pVar->n-nOut); + } +#endif + } } } return sqlite3StrAccumFinish(&out); } @@ -53080,44 +74135,36 @@ ** May you do good and not evil. ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ************************************************************************* -** The code in this file implements execution method of the -** Virtual Database Engine (VDBE). A separate file ("vdbeaux.c") -** handles housekeeping details such as creating and deleting -** VDBE instances. This file is solely interested in executing -** the VDBE program. 
-** -** In the external interface, an "sqlite3_stmt*" is an opaque pointer -** to a VDBE. -** -** The SQL parser generates a program which is then executed by -** the VDBE to do the work of the SQL statement. VDBE programs are -** similar in form to assembly language. The program consists of -** a linear sequence of operations. Each operation has an opcode -** and 5 operands. Operands P1, P2, and P3 are integers. Operand P4 -** is a null-terminated string. Operand P5 is an unsigned character. -** Few opcodes use all 5 operands. -** -** Computation results are stored on a set of registers numbered beginning -** with 1 and going up to Vdbe.nMem. Each register can store -** either an integer, a null-terminated string, a floating point -** number, or the SQL "NULL" value. An implicit conversion from one -** type to the other occurs as necessary. -** -** Most of the code in this file is taken up by the sqlite3VdbeExec() -** function which does the work of interpreting a VDBE program. -** But other routines are also provided to help in building up -** a program instruction by instruction. +** The code in this file implements the function that runs the +** bytecode of a prepared statement. ** ** Various scripts scan this source file in order to generate HTML ** documentation, headers files, or other derived files. The formatting ** of the code in this file is, therefore, important. See other comments ** in this file for details. If in doubt, do not deviate from existing ** commenting and indentation practices when changing or adding code. */ +/* #include "sqliteInt.h" */ +/* #include "vdbeInt.h" */ + +/* +** Invoke this macro on memory cells just prior to changing the +** value of the cell. This macro verifies that shallow copies are +** not misused. A shallow copy of a string or blob just copies a +** pointer to the string or blob, not the content. If the original +** is changed while the copy is still in use, the string or blob might +** be changed out from under the copy. This macro verifies that nothing +** like that ever happens. +*/ +#ifdef SQLITE_DEBUG +# define memAboutToChange(P,M) sqlite3VdbeMemAboutToChange(P,M) +#else +# define memAboutToChange(P,M) +#endif /* ** The following global variable is incremented every time a cursor ** moves, either by the OP_SeekXX, OP_Next, or OP_Prev opcodes. The test ** procedures use this information to make sure that indices are @@ -53128,12 +74175,12 @@ SQLITE_API int sqlite3_search_count = 0; #endif /* ** When this global variable is positive, it gets decremented once before -** each instruction in the VDBE. When reaches zero, the u1.isInterrupted -** field of the sqlite3 structure is set in order to simulate and interrupt. +** each instruction in the VDBE. When it reaches zero, the u1.isInterrupted +** field of the sqlite3 structure is set in order to simulate an interrupt. ** ** This facility is used for testing purposes only. It does not function ** in an ordinary build. */ #ifdef SQLITE_TEST @@ -53166,11 +74213,11 @@ } } #endif /* -** The next global variable is incremented each type the OP_Found opcode +** The next global variable is incremented each time the OP_Found opcode ** is executed. This is used to test whether or not the foreign key ** operation implemented using OP_FkIsZero is working. This variable ** has no function other than to help verify the correct operation of the ** library. 
*/ @@ -53186,16 +74233,50 @@ # define UPDATE_MAX_BLOBSIZE(P) updateMaxBlobsize(P) #else # define UPDATE_MAX_BLOBSIZE(P) #endif +/* +** Invoke the VDBE coverage callback, if that callback is defined. This +** feature is used for test suite validation only and does not appear an +** production builds. +** +** M is an integer, 2 or 3, that indices how many different ways the +** branch can go. It is usually 2. "I" is the direction the branch +** goes. 0 means falls through. 1 means branch is taken. 2 means the +** second alternative branch is taken. +** +** iSrcLine is the source code line (from the __LINE__ macro) that +** generated the VDBE instruction. This instrumentation assumes that all +** source code is in a single file (the amalgamation). Special values 1 +** and 2 for the iSrcLine parameter mean that this particular branch is +** always taken or never taken, respectively. +*/ +#if !defined(SQLITE_VDBE_COVERAGE) +# define VdbeBranchTaken(I,M) +#else +# define VdbeBranchTaken(I,M) vdbeTakeBranch(pOp->iSrcLine,I,M) + static void vdbeTakeBranch(int iSrcLine, u8 I, u8 M){ + if( iSrcLine<=2 && ALWAYS(iSrcLine>0) ){ + M = iSrcLine; + /* Assert the truth of VdbeCoverageAlwaysTaken() and + ** VdbeCoverageNeverTaken() */ + assert( (M & I)==I ); + }else{ + if( sqlite3GlobalConfig.xVdbeBranch==0 ) return; /*NO_TEST*/ + sqlite3GlobalConfig.xVdbeBranch(sqlite3GlobalConfig.pVdbeBranchArg, + iSrcLine,I,M); + } + } +#endif + /* ** Convert the given register into a string if it isn't one ** already. Return non-zero if a malloc() fails. */ #define Stringify(P, enc) \ - if(((P)->flags&(MEM_Str|MEM_Blob))==0 && sqlite3VdbeMemStringify(P,enc)) \ + if(((P)->flags&(MEM_Str|MEM_Blob))==0 && sqlite3VdbeMemStringify(P,enc,0)) \ { goto no_mem; } /* ** An ephemeral string value (signified by the MEM_Ephem flag) contains ** a pointer to a dynamically allocated string where some other entity @@ -53203,56 +74284,29 @@ ** does not control the string, it might be deleted without the register ** knowing it. ** ** This routine converts an ephemeral string into a dynamically allocated ** string that the register itself controls. In other words, it -** converts an MEM_Ephem string into an MEM_Dyn string. +** converts an MEM_Ephem string into a string with P.z==P.zMalloc. */ #define Deephemeralize(P) \ if( ((P)->flags&MEM_Ephem)!=0 \ && sqlite3VdbeMemMakeWriteable(P) ){ goto no_mem;} -/* -** Call sqlite3VdbeMemExpandBlob() on the supplied value (type Mem*) -** P if required. -*/ -#define ExpandBlob(P) (((P)->flags&MEM_Zero)?sqlite3VdbeMemExpandBlob(P):0) - -/* -** Argument pMem points at a register that will be passed to a -** user-defined function or returned to the user as the result of a query. -** This routine sets the pMem->type variable used by the sqlite3_value_*() -** routines. -*/ -SQLITE_PRIVATE void sqlite3VdbeMemStoreType(Mem *pMem){ - int flags = pMem->flags; - if( flags & MEM_Null ){ - pMem->type = SQLITE_NULL; - } - else if( flags & MEM_Int ){ - pMem->type = SQLITE_INTEGER; - } - else if( flags & MEM_Real ){ - pMem->type = SQLITE_FLOAT; - } - else if( flags & MEM_Str ){ - pMem->type = SQLITE_TEXT; - }else{ - pMem->type = SQLITE_BLOB; - } -} +/* Return true if the cursor was opened using the OP_OpenSorter opcode. */ +#define isSorter(x) ((x)->eCurType==CURTYPE_SORTER) /* ** Allocate VdbeCursor number iCur. Return a pointer to it. Return NULL ** if we run out of memory. 
*/ static VdbeCursor *allocateCursor( Vdbe *p, /* The virtual machine */ int iCur, /* Index of the new VdbeCursor */ int nField, /* Number of fields in the table or index */ - int iDb, /* When database the cursor belongs to, or -1 */ - int isBtreeCursor /* True for B-Tree. False for pseudo-table or vtab */ + int iDb, /* Database the cursor belongs to, or -1 */ + u8 eCurType /* Type of the new cursor */ ){ /* Find the memory cell that will be used to store the blob of memory ** required for this VdbeCursor structure. It is convenient to use a ** vdbe memory cell to manage the memory allocation required for a ** VdbeCursor structure for the following reasons: @@ -53273,31 +74327,29 @@ Mem *pMem = &p->aMem[p->nMem-iCur]; int nByte; VdbeCursor *pCx = 0; nByte = - ROUND8(sizeof(VdbeCursor)) + - (isBtreeCursor?sqlite3BtreeCursorSize():0) + - 2*nField*sizeof(u32); + ROUND8(sizeof(VdbeCursor)) + 2*sizeof(u32)*nField + + (eCurType==CURTYPE_BTREE?sqlite3BtreeCursorSize():0); assert( iCur<p->nCursor ); if( p->apCsr[iCur] ){ sqlite3VdbeFreeCursor(p, p->apCsr[iCur]); p->apCsr[iCur] = 0; } - if( SQLITE_OK==sqlite3VdbeMemGrow(pMem, nByte, 0) ){ + if( SQLITE_OK==sqlite3VdbeMemClearAndResize(pMem, nByte) ){ p->apCsr[iCur] = pCx = (VdbeCursor*)pMem->z; memset(pCx, 0, sizeof(VdbeCursor)); + pCx->eCurType = eCurType; pCx->iDb = iDb; pCx->nField = nField; - if( nField ){ - pCx->aType = (u32 *)&pMem->z[ROUND8(sizeof(VdbeCursor))]; - } - if( isBtreeCursor ){ - pCx->pCursor = (BtCursor*) - &pMem->z[ROUND8(sizeof(VdbeCursor))+2*nField*sizeof(u32)]; - sqlite3BtreeCursorZero(pCx->pCursor); + pCx->aOffset = &pCx->aType[nField]; + if( eCurType==CURTYPE_BTREE ){ + pCx->uc.pCursor = (BtCursor*) + &pMem->z[ROUND8(sizeof(VdbeCursor))+2*sizeof(u32)*nField]; + sqlite3BtreeCursorZero(pCx->uc.pCursor); } } return pCx; } @@ -53304,39 +74356,33 @@ /* ** Try to convert a value into a numeric representation if we can ** do so without loss of information. In other words, if the string ** looks like a number, convert it into a number. If it does not ** look like a number, leave it alone. +** +** If the bTryForInt flag is true, then extra effort is made to give +** an integer representation. Strings that look like floating point +** values but which have no fractional component (example: '48.00') +** will have a MEM_Int representation when bTryForInt is true. +** +** If bTryForInt is false, then if the input string contains a decimal +** point or exponential notation, the result is only MEM_Real, even +** if there is an exact integer representation of the quantity. 
*/ -static void applyNumericAffinity(Mem *pRec){ - if( (pRec->flags & (MEM_Real|MEM_Int))==0 ){ - int realnum; - u8 enc = pRec->enc; - sqlite3VdbeMemNulTerminate(pRec); - if( (pRec->flags&MEM_Str) && sqlite3IsNumber(pRec->z, &realnum, enc) ){ - i64 value; - char *zUtf8 = pRec->z; -#ifndef SQLITE_OMIT_UTF16 - if( enc!=SQLITE_UTF8 ){ - assert( pRec->db ); - zUtf8 = sqlite3Utf16to8(pRec->db, pRec->z, pRec->n, enc); - if( !zUtf8 ) return; - } -#endif - if( !realnum && sqlite3Atoi64(zUtf8, &value) ){ - pRec->u.i = value; - MemSetTypeFlag(pRec, MEM_Int); - }else{ - sqlite3AtoF(zUtf8, &pRec->r); - MemSetTypeFlag(pRec, MEM_Real); - } -#ifndef SQLITE_OMIT_UTF16 - if( enc!=SQLITE_UTF8 ){ - sqlite3DbFree(pRec->db, zUtf8); - } -#endif - } +static void applyNumericAffinity(Mem *pRec, int bTryForInt){ + double rValue; + i64 iValue; + u8 enc = pRec->enc; + assert( (pRec->flags & (MEM_Str|MEM_Int|MEM_Real))==MEM_Str ); + if( sqlite3AtoF(pRec->z, &rValue, pRec->n, enc)==0 ) return; + if( 0==sqlite3Atoi64(pRec->z, &iValue, pRec->n, enc) ){ + pRec->u.i = iValue; + pRec->flags |= MEM_Int; + }else{ + pRec->u.r = rValue; + pRec->flags |= MEM_Real; + if( bTryForInt ) sqlite3VdbeIntegerAffinity(pRec); } } /* ** Processing is determine by the affinity parameter: @@ -53351,50 +74397,54 @@ ** an integer representation is more space efficient on disk. ** ** SQLITE_AFF_TEXT: ** Convert pRec to a text representation. ** -** SQLITE_AFF_NONE: +** SQLITE_AFF_BLOB: ** No-op. pRec is unchanged. */ static void applyAffinity( Mem *pRec, /* The value to apply affinity to */ char affinity, /* The affinity to be applied */ u8 enc /* Use this text encoding */ ){ - if( affinity==SQLITE_AFF_TEXT ){ + if( affinity>=SQLITE_AFF_NUMERIC ){ + assert( affinity==SQLITE_AFF_INTEGER || affinity==SQLITE_AFF_REAL + || affinity==SQLITE_AFF_NUMERIC ); + if( (pRec->flags & MEM_Int)==0 ){ + if( (pRec->flags & MEM_Real)==0 ){ + if( pRec->flags & MEM_Str ) applyNumericAffinity(pRec,1); + }else{ + sqlite3VdbeIntegerAffinity(pRec); + } + } + }else if( affinity==SQLITE_AFF_TEXT ){ /* Only attempt the conversion to TEXT if there is an integer or real ** representation (blob and NULL do not get converted) but no string ** representation. */ if( 0==(pRec->flags&MEM_Str) && (pRec->flags&(MEM_Real|MEM_Int)) ){ - sqlite3VdbeMemStringify(pRec, enc); + sqlite3VdbeMemStringify(pRec, enc, 1); } pRec->flags &= ~(MEM_Real|MEM_Int); - }else if( affinity!=SQLITE_AFF_NONE ){ - assert( affinity==SQLITE_AFF_INTEGER || affinity==SQLITE_AFF_REAL - || affinity==SQLITE_AFF_NUMERIC ); - applyNumericAffinity(pRec); - if( pRec->flags & MEM_Real ){ - sqlite3VdbeIntegerAffinity(pRec); - } } } /* ** Try to convert the type of a function argument or a result column ** into a numeric representation. Use either INTEGER or REAL whichever ** is appropriate. But only do the conversion if it is possible without ** loss of information and return the revised type of the argument. -** -** This is an EXPERIMENTAL api and is subject to change or removal. */ -SQLITE_API int sqlite3_value_numeric_type(sqlite3_value *pVal){ - Mem *pMem = (Mem*)pVal; - applyNumericAffinity(pMem); - sqlite3VdbeMemStoreType(pMem); - return pMem->type; +SQLITE_API int SQLITE_STDCALL sqlite3_value_numeric_type(sqlite3_value *pVal){ + int eType = sqlite3_value_type(pVal); + if( eType==SQLITE_TEXT ){ + Mem *pMem = (Mem*)pVal; + applyNumericAffinity(pMem, 0); + eType = sqlite3_value_type(pVal); + } + return eType; } /* ** Exported version of applyAffinity(). 
This one works on sqlite3_value*, ** not the internal Mem* type. @@ -53404,10 +74454,45 @@ u8 affinity, u8 enc ){ applyAffinity((Mem *)pVal, affinity, enc); } + +/* +** pMem currently only holds a string type (or maybe a BLOB that we can +** interpret as a string if we want to). Compute its corresponding +** numeric type, if has one. Set the pMem->u.r and pMem->u.i fields +** accordingly. +*/ +static u16 SQLITE_NOINLINE computeNumericType(Mem *pMem){ + assert( (pMem->flags & (MEM_Int|MEM_Real))==0 ); + assert( (pMem->flags & (MEM_Str|MEM_Blob))!=0 ); + if( sqlite3AtoF(pMem->z, &pMem->u.r, pMem->n, pMem->enc)==0 ){ + return 0; + } + if( sqlite3Atoi64(pMem->z, &pMem->u.i, pMem->n, pMem->enc)==SQLITE_OK ){ + return MEM_Int; + } + return MEM_Real; +} + +/* +** Return the numeric type for pMem, either MEM_Int or MEM_Real or both or +** none. +** +** Unlike applyNumericAffinity(), this routine does not modify pMem->flags. +** But it does set pMem->u.r and pMem->u.i appropriately. +*/ +static u16 numericType(Mem *pMem){ + if( pMem->flags & (MEM_Int|MEM_Real) ){ + return pMem->flags & (MEM_Int|MEM_Real); + } + if( pMem->flags & (MEM_Str|MEM_Blob) ){ + return computeNumericType(pMem); + } + return 0; +} #ifdef SQLITE_DEBUG /* ** Write a nice string representation of the contents of cell pMem ** into buffer zBuf, length nBuf. @@ -53492,39 +74577,41 @@ #ifdef SQLITE_DEBUG /* ** Print the value of a register for tracing purposes: */ -static void memTracePrint(FILE *out, Mem *p){ - if( p->flags & MEM_Null ){ - fprintf(out, " NULL"); +static void memTracePrint(Mem *p){ + if( p->flags & MEM_Undefined ){ + printf(" undefined"); + }else if( p->flags & MEM_Null ){ + printf(" NULL"); }else if( (p->flags & (MEM_Int|MEM_Str))==(MEM_Int|MEM_Str) ){ - fprintf(out, " si:%lld", p->u.i); + printf(" si:%lld", p->u.i); }else if( p->flags & MEM_Int ){ - fprintf(out, " i:%lld", p->u.i); + printf(" i:%lld", p->u.i); #ifndef SQLITE_OMIT_FLOATING_POINT }else if( p->flags & MEM_Real ){ - fprintf(out, " r:%g", p->r); + printf(" r:%g", p->u.r); #endif }else if( p->flags & MEM_RowSet ){ - fprintf(out, " (rowset)"); + printf(" (rowset)"); }else{ char zBuf[200]; sqlite3VdbeMemPrettyPrint(p, zBuf); - fprintf(out, " "); - fprintf(out, "%s", zBuf); - } -} -static void registerTrace(FILE *out, int iReg, Mem *p){ - fprintf(out, "REG[%d] = ", iReg); - memTracePrint(out, p); - fprintf(out, "\n"); + printf(" %s", zBuf); + } + if( p->flags & MEM_Subtype ) printf(" subtype=0x%02x", p->eSubtype); +} +static void registerTrace(int iReg, Mem *p){ + printf("REG[%d] = ", iReg); + memTracePrint(p); + printf("\n"); } #endif #ifdef SQLITE_DEBUG -# define REGISTER_TRACE(R,M) if(p->trace)registerTrace(p->trace,R,M) +# define REGISTER_TRACE(R,M) if(db->flags&SQLITE_VdbeTrace)registerTrace(R,M) #else # define REGISTER_TRACE(R,M) #endif @@ -53625,40 +74712,10 @@ /************** End of hwtime.h **********************************************/ /************** Continuing where we left off in vdbe.c ***********************/ #endif -/* -** The CHECK_FOR_INTERRUPT macro defined here looks to see if the -** sqlite3_interrupt() routine has been called. If it has been, then -** processing of the VDBE program is interrupted. -** -** This macro added to every instruction that does a jump in order to -** implement a loop. This test used to be on every single instruction, -** but that meant we more testing that we needed. By only testing the -** flag on jump instructions, we get a (small) speed improvement. 
-*/ -#define CHECK_FOR_INTERRUPT \ - if( db->u1.isInterrupted ) goto abort_due_to_interrupt; - -#ifdef SQLITE_DEBUG -static int fileExists(sqlite3 *db, const char *zFile){ - int res = 0; - int rc = SQLITE_OK; -#ifdef SQLITE_TEST - /* If we are currently testing IO errors, then do not call OsAccess() to - ** test for the presence of zFile. This is because any IO error that - ** occurs here will not be reported, causing the test to fail. - */ - extern int sqlite3_io_error_pending; - if( sqlite3_io_error_pending<=0 ) -#endif - rc = sqlite3OsAccess(db->pVfs, zFile, SQLITE_ACCESS_EXISTS, &res); - return (res && rc==SQLITE_OK); -} -#endif - #ifndef NDEBUG /* ** This function is only called from within an assert() expression. It ** checks that the sqlite3.nTransaction variable is correctly set to ** the number of non-transaction savepoints currently in the @@ -53676,521 +74733,134 @@ return 1; } #endif /* -** Execute as much of a VDBE program as we can then return. -** -** sqlite3VdbeMakeReady() must be called before this routine in order to -** close the program with a final OP_Halt and to set up the callbacks -** and the error message pointer. -** -** Whenever a row or result data is available, this routine will either -** invoke the result callback (if there is one) or return with -** SQLITE_ROW. -** -** If an attempt is made to open a locked database, then this routine -** will either invoke the busy callback (if there is one) or it will -** return SQLITE_BUSY. -** -** If an error occurs, an error message is written to memory obtained -** from sqlite3_malloc() and p->zErrMsg is made to point to that memory. -** The error code is stored in p->rc and this routine returns SQLITE_ERROR. -** -** If the callback ever returns non-zero, then the program exits -** immediately. There will be no error message but the p->rc field is -** set to SQLITE_ABORT and this routine will return SQLITE_ERROR. -** -** A memory allocation error causes p->rc to be set to SQLITE_NOMEM and this -** routine to return SQLITE_ERROR. -** -** Other fatal errors return SQLITE_ERROR. -** -** After this routine has finished, sqlite3VdbeFinalize() should be -** used to clean up the mess that was left behind. +** Return the register of pOp->p2 after first preparing it to be +** overwritten with an integer value. +*/ +static SQLITE_NOINLINE Mem *out2PrereleaseWithClear(Mem *pOut){ + sqlite3VdbeMemSetNull(pOut); + pOut->flags = MEM_Int; + return pOut; +} +static Mem *out2Prerelease(Vdbe *p, VdbeOp *pOp){ + Mem *pOut; + assert( pOp->p2>0 ); + assert( pOp->p2<=(p->nMem-p->nCursor) ); + pOut = &p->aMem[pOp->p2]; + memAboutToChange(p, pOut); + if( VdbeMemDynamic(pOut) ){ + return out2PrereleaseWithClear(pOut); + }else{ + pOut->flags = MEM_Int; + return pOut; + } +} + + +/* +** Execute as much of a VDBE program as we can. +** This is the core of sqlite3_step(). 
*/ SQLITE_PRIVATE int sqlite3VdbeExec( Vdbe *p /* The VDBE */ ){ - int pc=0; /* The program counter */ Op *aOp = p->aOp; /* Copy of p->aOp */ - Op *pOp; /* Current operation */ + Op *pOp = aOp; /* Current operation */ +#if defined(SQLITE_DEBUG) || defined(VDBE_PROFILE) + Op *pOrigOp; /* Value of pOp at the top of the loop */ +#endif +#ifdef SQLITE_DEBUG + int nExtraDelete = 0; /* Verifies FORDELETE and AUXDELETE flags */ +#endif int rc = SQLITE_OK; /* Value to return */ sqlite3 *db = p->db; /* The database */ - u8 resetSchemaOnFault = 0; /* Reset schema after an error if true */ + u8 resetSchemaOnFault = 0; /* Reset schema after an error if positive */ u8 encoding = ENC(db); /* The database encoding */ + int iCompare = 0; /* Result of last OP_Compare operation */ + unsigned nVmStep = 0; /* Number of virtual machine steps */ #ifndef SQLITE_OMIT_PROGRESS_CALLBACK - int checkProgress; /* True if progress callbacks are enabled */ - int nProgressOps = 0; /* Opcodes executed since progress callback. */ + unsigned nProgressLimit = 0;/* Invoke xProgress() when nVmStep reaches this */ #endif Mem *aMem = p->aMem; /* Copy of p->aMem */ Mem *pIn1 = 0; /* 1st input operand */ Mem *pIn2 = 0; /* 2nd input operand */ Mem *pIn3 = 0; /* 3rd input operand */ Mem *pOut = 0; /* Output operand */ - int iCompare = 0; /* Result of last OP_Compare operation */ int *aPermute = 0; /* Permutation of columns for OP_Compare */ + i64 lastRowid = db->lastRowid; /* Saved value of the last insert ROWID */ #ifdef VDBE_PROFILE u64 start; /* CPU clock count at start of opcode */ - int origPc; /* Program counter at start of opcode */ #endif - /******************************************************************** - ** Automatically generated code - ** - ** The following union is automatically generated by the - ** vdbe-compress.tcl script. The purpose of this union is to - ** reduce the amount of stack space required by this function. - ** See comments in the vdbe-compress.tcl script for details. 
- */ - union vdbeExecUnion { - struct OP_Yield_stack_vars { - int pcDest; - } aa; - struct OP_Variable_stack_vars { - int p1; /* Variable to copy from */ - int p2; /* Register to copy to */ - int n; /* Number of values left to copy */ - Mem *pVar; /* Value being transferred */ - } ab; - struct OP_Move_stack_vars { - char *zMalloc; /* Holding variable for allocated memory */ - int n; /* Number of registers left to copy */ - int p1; /* Register to copy from */ - int p2; /* Register to copy to */ - } ac; - struct OP_ResultRow_stack_vars { - Mem *pMem; - int i; - } ad; - struct OP_Concat_stack_vars { - i64 nByte; - } ae; - struct OP_Remainder_stack_vars { - int flags; /* Combined MEM_* flags from both inputs */ - i64 iA; /* Integer value of left operand */ - i64 iB; /* Integer value of right operand */ - double rA; /* Real value of left operand */ - double rB; /* Real value of right operand */ - } af; - struct OP_Function_stack_vars { - int i; - Mem *pArg; - sqlite3_context ctx; - sqlite3_value **apVal; - int n; - } ag; - struct OP_ShiftRight_stack_vars { - i64 a; - i64 b; - } ah; - struct OP_Ge_stack_vars { - int res; /* Result of the comparison of pIn1 against pIn3 */ - char affinity; /* Affinity to use for comparison */ - u16 flags1; /* Copy of initial value of pIn1->flags */ - u16 flags3; /* Copy of initial value of pIn3->flags */ - } ai; - struct OP_Compare_stack_vars { - int n; - int i; - int p1; - int p2; - const KeyInfo *pKeyInfo; - int idx; - CollSeq *pColl; /* Collating sequence to use on this term */ - int bRev; /* True for DESCENDING sort order */ - } aj; - struct OP_Or_stack_vars { - int v1; /* Left operand: 0==FALSE, 1==TRUE, 2==UNKNOWN or NULL */ - int v2; /* Right operand: 0==FALSE, 1==TRUE, 2==UNKNOWN or NULL */ - } ak; - struct OP_IfNot_stack_vars { - int c; - } al; - struct OP_Column_stack_vars { - u32 payloadSize; /* Number of bytes in the record */ - i64 payloadSize64; /* Number of bytes in the record */ - int p1; /* P1 value of the opcode */ - int p2; /* column number to retrieve */ - VdbeCursor *pC; /* The VDBE cursor */ - char *zRec; /* Pointer to complete record-data */ - BtCursor *pCrsr; /* The BTree cursor */ - u32 *aType; /* aType[i] holds the numeric type of the i-th column */ - u32 *aOffset; /* aOffset[i] is offset to start of data for i-th column */ - int nField; /* number of fields in the record */ - int len; /* The length of the serialized data for the column */ - int i; /* Loop counter */ - char *zData; /* Part of the record being decoded */ - Mem *pDest; /* Where to write the extracted value */ - Mem sMem; /* For storing the record being decoded */ - u8 *zIdx; /* Index into header */ - u8 *zEndHdr; /* Pointer to first byte after the header */ - u32 offset; /* Offset into the data */ - u32 szField; /* Number of bytes in the content of a field */ - int szHdr; /* Size of the header size field at start of record */ - int avail; /* Number of bytes of available data */ - Mem *pReg; /* PseudoTable input register */ - } am; - struct OP_Affinity_stack_vars { - const char *zAffinity; /* The affinity to be applied */ - char cAff; /* A single character of affinity */ - } an; - struct OP_MakeRecord_stack_vars { - u8 *zNewRecord; /* A buffer to hold the data for the new record */ - Mem *pRec; /* The new record */ - u64 nData; /* Number of bytes of data space */ - int nHdr; /* Number of bytes of header space */ - i64 nByte; /* Data space required for this record */ - int nZero; /* Number of zero bytes at the end of the record */ - int nVarint; /* Number of bytes in a varint 
*/ - u32 serial_type; /* Type field */ - Mem *pData0; /* First field to be combined into the record */ - Mem *pLast; /* Last field of the record */ - int nField; /* Number of fields in the record */ - char *zAffinity; /* The affinity string for the record */ - int file_format; /* File format to use for encoding */ - int i; /* Space used in zNewRecord[] */ - int len; /* Length of a field */ - } ao; - struct OP_Count_stack_vars { - i64 nEntry; - BtCursor *pCrsr; - } ap; - struct OP_Savepoint_stack_vars { - int p1; /* Value of P1 operand */ - char *zName; /* Name of savepoint */ - int nName; - Savepoint *pNew; - Savepoint *pSavepoint; - Savepoint *pTmp; - int iSavepoint; - int ii; - } aq; - struct OP_AutoCommit_stack_vars { - int desiredAutoCommit; - int iRollback; - int turnOnAC; - } ar; - struct OP_Transaction_stack_vars { - Btree *pBt; - } as; - struct OP_ReadCookie_stack_vars { - int iMeta; - int iDb; - int iCookie; - } at; - struct OP_SetCookie_stack_vars { - Db *pDb; - } au; - struct OP_VerifyCookie_stack_vars { - int iMeta; - Btree *pBt; - } av; - struct OP_OpenWrite_stack_vars { - int nField; - KeyInfo *pKeyInfo; - int p2; - int iDb; - int wrFlag; - Btree *pX; - VdbeCursor *pCur; - Db *pDb; - } aw; - struct OP_OpenEphemeral_stack_vars { - VdbeCursor *pCx; - } ax; - struct OP_OpenPseudo_stack_vars { - VdbeCursor *pCx; - } ay; - struct OP_SeekGt_stack_vars { - int res; - int oc; - VdbeCursor *pC; - UnpackedRecord r; - int nField; - i64 iKey; /* The rowid we are to seek to */ - } az; - struct OP_Seek_stack_vars { - VdbeCursor *pC; - } ba; - struct OP_Found_stack_vars { - int alreadyExists; - VdbeCursor *pC; - int res; - UnpackedRecord *pIdxKey; - UnpackedRecord r; - char aTempRec[ROUND8(sizeof(UnpackedRecord)) + sizeof(Mem)*3 + 7]; - } bb; - struct OP_IsUnique_stack_vars { - u16 ii; - VdbeCursor *pCx; - BtCursor *pCrsr; - u16 nField; - Mem *aMx; - UnpackedRecord r; /* B-Tree index search key */ - i64 R; /* Rowid stored in register P3 */ - } bc; - struct OP_NotExists_stack_vars { - VdbeCursor *pC; - BtCursor *pCrsr; - int res; - u64 iKey; - } bd; - struct OP_NewRowid_stack_vars { - i64 v; /* The new rowid */ - VdbeCursor *pC; /* Cursor of table to get the new rowid */ - int res; /* Result of an sqlite3BtreeLast() */ - int cnt; /* Counter to limit the number of searches */ - Mem *pMem; /* Register holding largest rowid for AUTOINCREMENT */ - VdbeFrame *pFrame; /* Root frame of VDBE */ - } be; - struct OP_InsertInt_stack_vars { - Mem *pData; /* MEM cell holding data for the record to be inserted */ - Mem *pKey; /* MEM cell holding key for the record */ - i64 iKey; /* The integer ROWID or key for the record to be inserted */ - VdbeCursor *pC; /* Cursor to table into which insert is written */ - int nZero; /* Number of zero-bytes to append */ - int seekResult; /* Result of prior seek or 0 if no USESEEKRESULT flag */ - const char *zDb; /* database name - used by the update hook */ - const char *zTbl; /* Table name - used by the opdate hook */ - int op; /* Opcode for update hook: SQLITE_UPDATE or SQLITE_INSERT */ - } bf; - struct OP_Delete_stack_vars { - i64 iKey; - VdbeCursor *pC; - } bg; - struct OP_RowData_stack_vars { - VdbeCursor *pC; - BtCursor *pCrsr; - u32 n; - i64 n64; - } bh; - struct OP_Rowid_stack_vars { - VdbeCursor *pC; - i64 v; - sqlite3_vtab *pVtab; - const sqlite3_module *pModule; - } bi; - struct OP_NullRow_stack_vars { - VdbeCursor *pC; - } bj; - struct OP_Last_stack_vars { - VdbeCursor *pC; - BtCursor *pCrsr; - int res; - } bk; - struct OP_Rewind_stack_vars { - VdbeCursor 
*pC; - BtCursor *pCrsr; - int res; - } bl; - struct OP_Next_stack_vars { - VdbeCursor *pC; - BtCursor *pCrsr; - int res; - } bm; - struct OP_IdxInsert_stack_vars { - VdbeCursor *pC; - BtCursor *pCrsr; - int nKey; - const char *zKey; - } bn; - struct OP_IdxDelete_stack_vars { - VdbeCursor *pC; - BtCursor *pCrsr; - int res; - UnpackedRecord r; - } bo; - struct OP_IdxRowid_stack_vars { - BtCursor *pCrsr; - VdbeCursor *pC; - i64 rowid; - } bp; - struct OP_IdxGE_stack_vars { - VdbeCursor *pC; - int res; - UnpackedRecord r; - } bq; - struct OP_Destroy_stack_vars { - int iMoved; - int iCnt; - Vdbe *pVdbe; - int iDb; - } br; - struct OP_Clear_stack_vars { - int nChange; - } bs; - struct OP_CreateTable_stack_vars { - int pgno; - int flags; - Db *pDb; - } bt; - struct OP_ParseSchema_stack_vars { - int iDb; - const char *zMaster; - char *zSql; - InitData initData; - } bu; - struct OP_IntegrityCk_stack_vars { - int nRoot; /* Number of tables to check. (Number of root pages.) */ - int *aRoot; /* Array of rootpage numbers for tables to be checked */ - int j; /* Loop counter */ - int nErr; /* Number of errors reported */ - char *z; /* Text of the error report */ - Mem *pnErr; /* Register keeping track of errors remaining */ - } bv; - struct OP_RowSetRead_stack_vars { - i64 val; - } bw; - struct OP_RowSetTest_stack_vars { - int iSet; - int exists; - } bx; - struct OP_Program_stack_vars { - int nMem; /* Number of memory registers for sub-program */ - int nByte; /* Bytes of runtime space required for sub-program */ - Mem *pRt; /* Register to allocate runtime space */ - Mem *pMem; /* Used to iterate through memory cells */ - Mem *pEnd; /* Last memory cell in new array */ - VdbeFrame *pFrame; /* New vdbe frame to execute in */ - SubProgram *pProgram; /* Sub-program to execute */ - void *t; /* Token identifying trigger */ - } by; - struct OP_Param_stack_vars { - VdbeFrame *pFrame; - Mem *pIn; - } bz; - struct OP_MemMax_stack_vars { - Mem *pIn1; - VdbeFrame *pFrame; - } ca; - struct OP_AggStep_stack_vars { - int n; - int i; - Mem *pMem; - Mem *pRec; - sqlite3_context ctx; - sqlite3_value **apVal; - } cb; - struct OP_AggFinal_stack_vars { - Mem *pMem; - } cc; - struct OP_IncrVacuum_stack_vars { - Btree *pBt; - } cd; - struct OP_VBegin_stack_vars { - VTable *pVTab; - } ce; - struct OP_VOpen_stack_vars { - VdbeCursor *pCur; - sqlite3_vtab_cursor *pVtabCursor; - sqlite3_vtab *pVtab; - sqlite3_module *pModule; - } cf; - struct OP_VFilter_stack_vars { - int nArg; - int iQuery; - const sqlite3_module *pModule; - Mem *pQuery; - Mem *pArgc; - sqlite3_vtab_cursor *pVtabCursor; - sqlite3_vtab *pVtab; - VdbeCursor *pCur; - int res; - int i; - Mem **apArg; - } cg; - struct OP_VColumn_stack_vars { - sqlite3_vtab *pVtab; - const sqlite3_module *pModule; - Mem *pDest; - sqlite3_context sContext; - } ch; - struct OP_VNext_stack_vars { - sqlite3_vtab *pVtab; - const sqlite3_module *pModule; - int res; - VdbeCursor *pCur; - } ci; - struct OP_VRename_stack_vars { - sqlite3_vtab *pVtab; - Mem *pName; - } cj; - struct OP_VUpdate_stack_vars { - sqlite3_vtab *pVtab; - sqlite3_module *pModule; - int nArg; - int i; - sqlite_int64 rowid; - Mem **apArg; - Mem *pX; - } ck; - struct OP_Trace_stack_vars { - char *zTrace; - } cl; - } u; - /* End automatically generated code - ********************************************************************/ + /*** INSERT STACK UNION HERE ***/ assert( p->magic==VDBE_MAGIC_RUN ); /* sqlite3_step() verifies this */ - sqlite3VdbeMutexArrayEnter(p); + sqlite3VdbeEnter(p); if( p->rc==SQLITE_NOMEM ){ /* This 
happens if a malloc() inside a call to sqlite3_column_text() or ** sqlite3_column_text16() failed. */ goto no_mem; } - assert( p->rc==SQLITE_OK || p->rc==SQLITE_BUSY ); + assert( p->rc==SQLITE_OK || (p->rc&0xff)==SQLITE_BUSY ); + assert( p->bIsReader || p->readOnly!=0 ); p->rc = SQLITE_OK; + p->iCurrentTime = 0; assert( p->explain==0 ); p->pResultSet = 0; db->busyHandler.nBusy = 0; - CHECK_FOR_INTERRUPT; + if( db->u1.isInterrupted ) goto abort_due_to_interrupt; sqlite3VdbeIOTraceSql(p); #ifndef SQLITE_OMIT_PROGRESS_CALLBACK - checkProgress = db->xProgress!=0; + if( db->xProgress ){ + u32 iPrior = p->aCounter[SQLITE_STMTSTATUS_VM_STEP]; + assert( 0 < db->nProgressOps ); + nProgressLimit = db->nProgressOps - (iPrior % db->nProgressOps); + } #endif #ifdef SQLITE_DEBUG sqlite3BeginBenignMalloc(); - if( p->pc==0 - && ((p->db->flags & SQLITE_VdbeListing) || fileExists(db, "vdbe_explain")) + if( p->pc==0 + && (p->db->flags & (SQLITE_VdbeListing|SQLITE_VdbeEQP|SQLITE_VdbeTrace))!=0 ){ int i; - printf("VDBE Program Listing:\n"); + int once = 1; sqlite3VdbePrintSql(p); - for(i=0; i<p->nOp; i++){ - sqlite3VdbePrintOp(stdout, i, &aOp[i]); + if( p->db->flags & SQLITE_VdbeListing ){ + printf("VDBE Program Listing:\n"); + for(i=0; i<p->nOp; i++){ + sqlite3VdbePrintOp(stdout, i, &aOp[i]); + } } - } - if( fileExists(db, "vdbe_trace") ){ - p->trace = stdout; + if( p->db->flags & SQLITE_VdbeEQP ){ + for(i=0; i<p->nOp; i++){ + if( aOp[i].opcode==OP_Explain ){ + if( once ) printf("VDBE Query Plan:\n"); + printf("%s\n", aOp[i].p4.z); + once = 0; + } + } + } + if( p->db->flags & SQLITE_VdbeTrace ) printf("VDBE Trace:\n"); } sqlite3EndBenignMalloc(); #endif - for(pc=p->pc; rc==SQLITE_OK; pc++){ - assert( pc>=0 && pc<p->nOp ); - if( db->mallocFailed ) goto no_mem; + for(pOp=&aOp[p->pc]; rc==SQLITE_OK; pOp++){ + assert( pOp>=aOp && pOp<&aOp[p->nOp]); #ifdef VDBE_PROFILE - origPc = pc; start = sqlite3Hwtime(); #endif - pOp = &aOp[pc]; + nVmStep++; +#ifdef SQLITE_ENABLE_STMT_SCANSTATUS + if( p->anExec ) p->anExec[(int)(pOp-aOp)]++; +#endif /* Only allow tracing if SQLITE_DEBUG is defined. */ #ifdef SQLITE_DEBUG - if( p->trace ){ - if( pc==0 ){ - printf("VDBE Execution Trace:\n"); - sqlite3VdbePrintSql(p); - } - sqlite3VdbePrintOp(p->trace, pc, pOp); - } - if( p->trace==0 && pc==0 ){ - sqlite3BeginBenignMalloc(); - if( fileExists(db, "vdbe_sqltrace") ){ - sqlite3VdbePrintSql(p); - } - sqlite3EndBenignMalloc(); + if( db->flags & SQLITE_VdbeTrace ){ + sqlite3VdbePrintOp(stdout, (int)(pOp - aOp), pOp); } #endif /* Check to see if we need to simulate an interrupt. This only happens @@ -54203,70 +74873,47 @@ sqlite3_interrupt(db); } } #endif -#ifndef SQLITE_OMIT_PROGRESS_CALLBACK - /* Call the progress callback if it is configured and the required number - ** of VDBE ops have been executed (either since this invocation of - ** sqlite3VdbeExec() or since last time the progress callback was called). - ** If the progress callback returns non-zero, exit the virtual machine with - ** a return code SQLITE_ABORT. - */ - if( checkProgress ){ - if( db->nProgressOps==nProgressOps ){ - int prc; - prc = db->xProgress(db->pProgressArg); - if( prc!=0 ){ - rc = SQLITE_INTERRUPT; - goto vdbe_error_halt; - } - nProgressOps = 0; - } - nProgressOps++; - } -#endif - - /* On any opcode with the "out2-prerelase" tag, free any - ** external allocations out of mem[p2] and set mem[p2] to be - ** an undefined integer. Opcodes will either fill in the integer - ** value or convert mem[p2] to a different type. 
- */ - assert( pOp->opflags==sqlite3OpcodeProperty[pOp->opcode] ); - if( pOp->opflags & OPFLG_OUT2_PRERELEASE ){ - assert( pOp->p2>0 ); - assert( pOp->p2<=p->nMem ); - pOut = &aMem[pOp->p2]; - sqlite3VdbeMemReleaseExternal(pOut); - pOut->flags = MEM_Int; - } - /* Sanity checking on other operands */ #ifdef SQLITE_DEBUG + assert( pOp->opflags==sqlite3OpcodeProperty[pOp->opcode] ); if( (pOp->opflags & OPFLG_IN1)!=0 ){ assert( pOp->p1>0 ); - assert( pOp->p1<=p->nMem ); + assert( pOp->p1<=(p->nMem-p->nCursor) ); + assert( memIsValid(&aMem[pOp->p1]) ); + assert( sqlite3VdbeCheckMemInvariants(&aMem[pOp->p1]) ); REGISTER_TRACE(pOp->p1, &aMem[pOp->p1]); } if( (pOp->opflags & OPFLG_IN2)!=0 ){ assert( pOp->p2>0 ); - assert( pOp->p2<=p->nMem ); + assert( pOp->p2<=(p->nMem-p->nCursor) ); + assert( memIsValid(&aMem[pOp->p2]) ); + assert( sqlite3VdbeCheckMemInvariants(&aMem[pOp->p2]) ); REGISTER_TRACE(pOp->p2, &aMem[pOp->p2]); } if( (pOp->opflags & OPFLG_IN3)!=0 ){ assert( pOp->p3>0 ); - assert( pOp->p3<=p->nMem ); + assert( pOp->p3<=(p->nMem-p->nCursor) ); + assert( memIsValid(&aMem[pOp->p3]) ); + assert( sqlite3VdbeCheckMemInvariants(&aMem[pOp->p3]) ); REGISTER_TRACE(pOp->p3, &aMem[pOp->p3]); } if( (pOp->opflags & OPFLG_OUT2)!=0 ){ assert( pOp->p2>0 ); - assert( pOp->p2<=p->nMem ); + assert( pOp->p2<=(p->nMem-p->nCursor) ); + memAboutToChange(p, &aMem[pOp->p2]); } if( (pOp->opflags & OPFLG_OUT3)!=0 ){ assert( pOp->p3>0 ); - assert( pOp->p3<=p->nMem ); + assert( pOp->p3<=(p->nMem-p->nCursor) ); + memAboutToChange(p, &aMem[pOp->p3]); } +#endif +#if defined(SQLITE_DEBUG) || defined(VDBE_PROFILE) + pOrigOp = pOp; #endif switch( pOp->opcode ){ /***************************************************************************** @@ -54287,11 +74934,11 @@ ** case statement is followed by a comment of the form "/# same as ... #/" ** that comment is used to determine the particular value of the opcode. ** ** Other keywords in the comment that follows each case are used to ** construct the OPFLG_INITIALIZER value that initializes opcodeProperty[]. -** Keywords include: in1, in2, in3, out2_prerelease, out2, out3. See +** Keywords include: in1, in2, in3, out2, out3. See ** the mkopcodeh.awk script for additional information. ** ** Documentation about VDBE opcodes is generated by scanning this file ** for lines of that contain "Opcode:". That line and all subsequent ** comment lines are used in the generation of the opcode.html documentation @@ -54308,74 +74955,170 @@ ** ** An unconditional jump to address P2. ** The next instruction executed will be ** the one at index P2 from the beginning of ** the program. +** +** The P1 parameter is not actually used by this opcode. However, it +** is sometimes set to 1 instead of 0 as a hint to the command-line shell +** that this Goto is the bottom of a loop and that the lines from P2 down +** to the current line should be indented for EXPLAIN output. */ case OP_Goto: { /* jump */ - CHECK_FOR_INTERRUPT; - pc = pOp->p2 - 1; +jump_to_p2_and_check_for_interrupt: + pOp = &aOp[pOp->p2 - 1]; + + /* Opcodes that are used as the bottom of a loop (OP_Next, OP_Prev, + ** OP_VNext, OP_RowSetNext, or OP_SorterNext) all jump here upon + ** completion. Check to see if sqlite3_interrupt() has been called + ** or if the progress callback needs to be invoked. + ** + ** This code uses unstructured "goto" statements and does not look clean. + ** But that is not due to sloppy coding habits. 
The code is written this + ** way for performance, to avoid having to run the interrupt and progress + ** checks on every opcode. This helps sqlite3_step() to run about 1.5% + ** faster according to "valgrind --tool=cachegrind" */ +check_for_interrupt: + if( db->u1.isInterrupted ) goto abort_due_to_interrupt; +#ifndef SQLITE_OMIT_PROGRESS_CALLBACK + /* Call the progress callback if it is configured and the required number + ** of VDBE ops have been executed (either since this invocation of + ** sqlite3VdbeExec() or since last time the progress callback was called). + ** If the progress callback returns non-zero, exit the virtual machine with + ** a return code SQLITE_ABORT. + */ + if( db->xProgress!=0 && nVmStep>=nProgressLimit ){ + assert( db->nProgressOps!=0 ); + nProgressLimit = nVmStep + db->nProgressOps - (nVmStep%db->nProgressOps); + if( db->xProgress(db->pProgressArg) ){ + rc = SQLITE_INTERRUPT; + goto vdbe_error_halt; + } + } +#endif + break; } /* Opcode: Gosub P1 P2 * * * ** ** Write the current address onto register P1 ** and then jump to address P2. */ -case OP_Gosub: { /* jump, in1 */ +case OP_Gosub: { /* jump */ + assert( pOp->p1>0 && pOp->p1<=(p->nMem-p->nCursor) ); pIn1 = &aMem[pOp->p1]; - assert( (pIn1->flags & MEM_Dyn)==0 ); + assert( VdbeMemDynamic(pIn1)==0 ); + memAboutToChange(p, pIn1); pIn1->flags = MEM_Int; - pIn1->u.i = pc; + pIn1->u.i = (int)(pOp-aOp); REGISTER_TRACE(pOp->p1, pIn1); - pc = pOp->p2 - 1; + + /* Most jump operations do a goto to this spot in order to update + ** the pOp pointer. */ +jump_to_p2: + pOp = &aOp[pOp->p2 - 1]; break; } /* Opcode: Return P1 * * * * ** -** Jump to the next instruction after the address in register P1. +** Jump to the next instruction after the address in register P1. After +** the jump, register P1 becomes undefined. */ case OP_Return: { /* in1 */ pIn1 = &aMem[pOp->p1]; - assert( pIn1->flags & MEM_Int ); - pc = (int)pIn1->u.i; + assert( pIn1->flags==MEM_Int ); + pOp = &aOp[pIn1->u.i]; + pIn1->flags = MEM_Undefined; break; } -/* Opcode: Yield P1 * * * * +/* Opcode: InitCoroutine P1 P2 P3 * * ** -** Swap the program counter with the value in register P1. +** Set up register P1 so that it will Yield to the coroutine +** located at address P3. +** +** If P2!=0 then the coroutine implementation immediately follows +** this opcode. So jump over the coroutine implementation to +** address P2. +** +** See also: EndCoroutine */ -case OP_Yield: { /* in1 */ -#if 0 /* local variables moved into u.aa */ +case OP_InitCoroutine: { /* jump */ + assert( pOp->p1>0 && pOp->p1<=(p->nMem-p->nCursor) ); + assert( pOp->p2>=0 && pOp->p2<p->nOp ); + assert( pOp->p3>=0 && pOp->p3<p->nOp ); + pOut = &aMem[pOp->p1]; + assert( !VdbeMemDynamic(pOut) ); + pOut->u.i = pOp->p3 - 1; + pOut->flags = MEM_Int; + if( pOp->p2 ) goto jump_to_p2; + break; +} + +/* Opcode: EndCoroutine P1 * * * * +** +** The instruction at the address in register P1 is a Yield. +** Jump to the P2 parameter of that Yield. +** After the jump, register P1 becomes undefined. +** +** See also: InitCoroutine +*/ +case OP_EndCoroutine: { /* in1 */ + VdbeOp *pCaller; + pIn1 = &aMem[pOp->p1]; + assert( pIn1->flags==MEM_Int ); + assert( pIn1->u.i>=0 && pIn1->u.i<p->nOp ); + pCaller = &aOp[pIn1->u.i]; + assert( pCaller->opcode==OP_Yield ); + assert( pCaller->p2>=0 && pCaller->p2<p->nOp ); + pOp = &aOp[pCaller->p2 - 1]; + pIn1->flags = MEM_Undefined; + break; +} + +/* Opcode: Yield P1 P2 * * * +** +** Swap the program counter with the value in register P1. 
This +** has the effect of yielding to a coroutine. +** +** If the coroutine that is launched by this instruction ends with +** Yield or Return then continue to the next instruction. But if +** the coroutine launched by this instruction ends with +** EndCoroutine, then jump to P2 rather than continuing with the +** next instruction. +** +** See also: InitCoroutine +*/ +case OP_Yield: { /* in1, jump */ int pcDest; -#endif /* local variables moved into u.aa */ pIn1 = &aMem[pOp->p1]; - assert( (pIn1->flags & MEM_Dyn)==0 ); + assert( VdbeMemDynamic(pIn1)==0 ); pIn1->flags = MEM_Int; - u.aa.pcDest = (int)pIn1->u.i; - pIn1->u.i = pc; + pcDest = (int)pIn1->u.i; + pIn1->u.i = (int)(pOp - aOp); REGISTER_TRACE(pOp->p1, pIn1); - pc = u.aa.pcDest; + pOp = &aOp[pcDest]; break; } -/* Opcode: HaltIfNull P1 P2 P3 P4 * +/* Opcode: HaltIfNull P1 P2 P3 P4 P5 +** Synopsis: if r[P3]=null halt ** -** Check the value in register P3. If is is NULL then Halt using +** Check the value in register P3. If it is NULL then Halt using ** parameter P1, P2, and P4 as if this were a Halt instruction. If the ** value in register P3 is not NULL, then this routine is a no-op. +** The P5 parameter should be 1. */ case OP_HaltIfNull: { /* in3 */ pIn3 = &aMem[pOp->p3]; if( (pIn3->flags & MEM_Null)==0 ) break; /* Fall through into OP_Halt */ } -/* Opcode: Halt P1 P2 * P4 * +/* Opcode: Halt P1 P2 * P4 P5 ** ** Exit immediately. All open cursors, etc are closed ** automatically. ** ** P1 is the result code returned by sqlite3_exec(), sqlite3_reset(), @@ -54385,114 +75128,156 @@ ** if P2==OE_Fail. Do the rollback if P2==OE_Rollback. If P2==OE_Abort, ** then back out all changes that have occurred during this execution of the ** VDBE, but do not rollback the transaction. ** ** If P4 is not null then it is an error message string. +** +** P5 is a value between 0 and 4, inclusive, that modifies the P4 string. +** +** 0: (no change) +** 1: NOT NULL contraint failed: P4 +** 2: UNIQUE constraint failed: P4 +** 3: CHECK constraint failed: P4 +** 4: FOREIGN KEY constraint failed: P4 +** +** If P5 is not zero and P4 is NULL, then everything after the ":" is +** omitted. ** ** There is an implied "Halt 0 0 0" instruction inserted at the very end of ** every program. So a jump past the last instruction of the program ** is the same as executing Halt. */ case OP_Halt: { + const char *zType; + const char *zLogFmt; + VdbeFrame *pFrame; + int pcx; + + pcx = (int)(pOp - aOp); if( pOp->p1==SQLITE_OK && p->pFrame ){ /* Halt the sub-program. Return control to the parent frame. */ - VdbeFrame *pFrame = p->pFrame; + pFrame = p->pFrame; p->pFrame = pFrame->pParent; p->nFrame--; sqlite3VdbeSetChanges(db, p->nChange); - pc = sqlite3VdbeFrameRestore(pFrame); + pcx = sqlite3VdbeFrameRestore(pFrame); + lastRowid = db->lastRowid; if( pOp->p2==OE_Ignore ){ - /* Instruction pc is the OP_Program that invoked the sub-program + /* Instruction pcx is the OP_Program that invoked the sub-program ** currently being halted. If the p2 instruction of this OP_Halt ** instruction is set to OE_Ignore, then the sub-program is throwing ** an IGNORE exception. In this case jump to the address specified ** as the p2 of the calling OP_Program. 
*/ - pc = p->aOp[pc].p2-1; + pcx = p->aOp[pcx].p2-1; } aOp = p->aOp; aMem = p->aMem; + pOp = &aOp[pcx]; break; } - p->rc = pOp->p1; p->errorAction = (u8)pOp->p2; - p->pc = pc; - if( pOp->p4.z ){ - assert( p->rc!=SQLITE_OK ); - sqlite3SetString(&p->zErrMsg, db, "%s", pOp->p4.z); - testcase( sqlite3GlobalConfig.xLog!=0 ); - sqlite3_log(pOp->p1, "abort at %d in [%s]: %s", pc, p->zSql, pOp->p4.z); - }else if( p->rc ){ - testcase( sqlite3GlobalConfig.xLog!=0 ); - sqlite3_log(pOp->p1, "constraint failed at %d in [%s]", pc, p->zSql); + p->pc = pcx; + if( p->rc ){ + if( pOp->p5 ){ + static const char * const azType[] = { "NOT NULL", "UNIQUE", "CHECK", + "FOREIGN KEY" }; + assert( pOp->p5>=1 && pOp->p5<=4 ); + testcase( pOp->p5==1 ); + testcase( pOp->p5==2 ); + testcase( pOp->p5==3 ); + testcase( pOp->p5==4 ); + zType = azType[pOp->p5-1]; + }else{ + zType = 0; + } + assert( zType!=0 || pOp->p4.z!=0 ); + zLogFmt = "abort at %d in [%s]: %s"; + if( zType && pOp->p4.z ){ + sqlite3VdbeError(p, "%s constraint failed: %s", zType, pOp->p4.z); + }else if( pOp->p4.z ){ + sqlite3VdbeError(p, "%s", pOp->p4.z); + }else{ + sqlite3VdbeError(p, "%s constraint failed", zType); + } + sqlite3_log(pOp->p1, zLogFmt, pcx, p->zSql, p->zErrMsg); } rc = sqlite3VdbeHalt(p); assert( rc==SQLITE_BUSY || rc==SQLITE_OK || rc==SQLITE_ERROR ); if( rc==SQLITE_BUSY ){ p->rc = rc = SQLITE_BUSY; }else{ - assert( rc==SQLITE_OK || p->rc==SQLITE_CONSTRAINT ); - assert( rc==SQLITE_OK || db->nDeferredCons>0 ); + assert( rc==SQLITE_OK || (p->rc&0xff)==SQLITE_CONSTRAINT ); + assert( rc==SQLITE_OK || db->nDeferredCons>0 || db->nDeferredImmCons>0 ); rc = p->rc ? SQLITE_ERROR : SQLITE_DONE; } goto vdbe_return; } /* Opcode: Integer P1 P2 * * * +** Synopsis: r[P2]=P1 ** ** The 32-bit integer value P1 is written into register P2. */ -case OP_Integer: { /* out2-prerelease */ +case OP_Integer: { /* out2 */ + pOut = out2Prerelease(p, pOp); pOut->u.i = pOp->p1; break; } /* Opcode: Int64 * P2 * P4 * +** Synopsis: r[P2]=P4 ** ** P4 is a pointer to a 64-bit integer value. ** Write that value into register P2. */ -case OP_Int64: { /* out2-prerelease */ +case OP_Int64: { /* out2 */ + pOut = out2Prerelease(p, pOp); assert( pOp->p4.pI64!=0 ); pOut->u.i = *pOp->p4.pI64; break; } #ifndef SQLITE_OMIT_FLOATING_POINT /* Opcode: Real * P2 * P4 * +** Synopsis: r[P2]=P4 ** ** P4 is a pointer to a 64-bit floating point value. ** Write that value into register P2. */ -case OP_Real: { /* same as TK_FLOAT, out2-prerelease */ +case OP_Real: { /* same as TK_FLOAT, out2 */ + pOut = out2Prerelease(p, pOp); pOut->flags = MEM_Real; assert( !sqlite3IsNaN(*pOp->p4.pReal) ); - pOut->r = *pOp->p4.pReal; + pOut->u.r = *pOp->p4.pReal; break; } #endif /* Opcode: String8 * P2 * P4 * +** Synopsis: r[P2]='P4' ** ** P4 points to a nul terminated UTF-8 string. This opcode is transformed -** into an OP_String before it is executed for the first time. +** into a String opcode before it is executed for the first time. During +** this transformation, the length of string P4 is computed and stored +** as the P1 parameter. 
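The String8 comment above describes an opcode that rewrites itself on first execution so that the strlen() of its literal is computed only once and cached in P1. The following standalone sketch (illustrative names only, not code from this checkin) shows that self-rewriting pattern in miniature:

    #include <stdio.h>
    #include <string.h>

    enum { OP_STRING8, OP_STRING };

    typedef struct Instr {
      int opcode;
      int p1;           /* cached string length, once known */
      const char *p4;   /* the string literal */
    } Instr;

    static void exec(Instr *pOp){
      if( pOp->opcode==OP_STRING8 ){
        pOp->p1 = (int)strlen(pOp->p4);  /* compute the length exactly once */
        pOp->opcode = OP_STRING;         /* self-rewrite for later executions */
      }
      printf("string of %d bytes: %s\n", pOp->p1, pOp->p4);
    }

    int main(void){
      Instr op = { OP_STRING8, 0, "hello" };
      exec(&op);   /* first execution converts the opcode */
      exec(&op);   /* subsequent executions skip strlen() */
      return 0;
    }
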
*/ -case OP_String8: { /* same as TK_STRING, out2-prerelease */ +case OP_String8: { /* same as TK_STRING, out2 */ assert( pOp->p4.z!=0 ); + pOut = out2Prerelease(p, pOp); pOp->opcode = OP_String; pOp->p1 = sqlite3Strlen30(pOp->p4.z); #ifndef SQLITE_OMIT_UTF16 if( encoding!=SQLITE_UTF8 ){ rc = sqlite3VdbeMemSetStr(pOut, pOp->p4.z, -1, SQLITE_UTF8, SQLITE_STATIC); if( rc==SQLITE_TOOBIG ) goto too_big; if( SQLITE_OK!=sqlite3VdbeChangeEncoding(pOut, encoding) ) goto no_mem; - assert( pOut->zMalloc==pOut->z ); - assert( pOut->flags & MEM_Dyn ); - pOut->zMalloc = 0; + assert( pOut->szMalloc>0 && pOut->zMalloc==pOut->z ); + assert( VdbeMemDynamic(pOut)==0 ); + pOut->szMalloc = 0; pOut->flags |= MEM_Static; - pOut->flags &= ~MEM_Dyn; if( pOp->p4type==P4_DYNAMIC ){ sqlite3DbFree(db, pOp->p4.z); } pOp->p4type = P4_DYNAMIC; pOp->p4.z = pOut->z; @@ -54503,143 +75288,194 @@ goto too_big; } /* Fall through to the next case, OP_String */ } -/* Opcode: String P1 P2 * P4 * +/* Opcode: String P1 P2 P3 P4 P5 +** Synopsis: r[P2]='P4' (len=P1) ** ** The string value P4 of length P1 (bytes) is stored in register P2. +** +** If P5!=0 and the content of register P3 is greater than zero, then +** the datatype of the register P2 is converted to BLOB. The content is +** the same sequence of bytes, it is merely interpreted as a BLOB instead +** of a string, as if it had been CAST. */ -case OP_String: { /* out2-prerelease */ +case OP_String: { /* out2 */ assert( pOp->p4.z!=0 ); + pOut = out2Prerelease(p, pOp); pOut->flags = MEM_Str|MEM_Static|MEM_Term; pOut->z = pOp->p4.z; pOut->n = pOp->p1; pOut->enc = encoding; UPDATE_MAX_BLOBSIZE(pOut); +#ifndef SQLITE_LIKE_DOESNT_MATCH_BLOBS + if( pOp->p5 ){ + assert( pOp->p3>0 ); + assert( pOp->p3<=(p->nMem-p->nCursor) ); + pIn3 = &aMem[pOp->p3]; + assert( pIn3->flags & MEM_Int ); + if( pIn3->u.i ) pOut->flags = MEM_Blob|MEM_Static|MEM_Term; + } +#endif break; } -/* Opcode: Null * P2 * * * +/* Opcode: Null P1 P2 P3 * * +** Synopsis: r[P2..P3]=NULL ** -** Write a NULL into register P2. +** Write a NULL into registers P2. If P3 greater than P2, then also write +** NULL into register P3 and every register in between P2 and P3. If P3 +** is less than P2 (typically P3 is zero) then only register P2 is +** set to NULL. +** +** If the P1 value is non-zero, then also set the MEM_Cleared flag so that +** NULL values will not compare equal even if SQLITE_NULLEQ is set on +** OP_Ne or OP_Eq. */ -case OP_Null: { /* out2-prerelease */ - pOut->flags = MEM_Null; +case OP_Null: { /* out2 */ + int cnt; + u16 nullFlag; + pOut = out2Prerelease(p, pOp); + cnt = pOp->p3-pOp->p2; + assert( pOp->p3<=(p->nMem-p->nCursor) ); + pOut->flags = nullFlag = pOp->p1 ? (MEM_Null|MEM_Cleared) : MEM_Null; + while( cnt>0 ){ + pOut++; + memAboutToChange(p, pOut); + sqlite3VdbeMemSetNull(pOut); + pOut->flags = nullFlag; + cnt--; + } break; } +/* Opcode: SoftNull P1 * * * * +** Synopsis: r[P1]=NULL +** +** Set register P1 to have the value NULL as seen by the OP_MakeRecord +** instruction, but do not free any string or blob memory associated with +** the register, so that if the value was a string or blob that was +** previously copied using OP_SCopy, the copies will continue to be valid. +*/ +case OP_SoftNull: { + assert( pOp->p1>0 && pOp->p1<=(p->nMem-p->nCursor) ); + pOut = &aMem[pOp->p1]; + pOut->flags = (pOut->flags|MEM_Null)&~MEM_Undefined; + break; +} -/* Opcode: Blob P1 P2 * P4 +/* Opcode: Blob P1 P2 * P4 * +** Synopsis: r[P2]=P4 (len=P1) ** ** P4 points to a blob of data P1 bytes long. 
Store this -** blob in register P2. This instruction is not coded directly -** by the compiler. Instead, the compiler layer specifies -** an OP_HexBlob opcode, with the hex string representation of -** the blob as P4. This opcode is transformed to an OP_Blob -** the first time it is executed. +** blob in register P2. */ -case OP_Blob: { /* out2-prerelease */ +case OP_Blob: { /* out2 */ assert( pOp->p1 <= SQLITE_MAX_LENGTH ); + pOut = out2Prerelease(p, pOp); sqlite3VdbeMemSetStr(pOut, pOp->p4.z, pOp->p1, 0, 0); pOut->enc = encoding; UPDATE_MAX_BLOBSIZE(pOut); break; } -/* Opcode: Variable P1 P2 P3 P4 * +/* Opcode: Variable P1 P2 * P4 * +** Synopsis: r[P2]=parameter(P1,P4) ** -** Transfer the values of bound parameters P1..P1+P3-1 into registers -** P2..P2+P3-1. +** Transfer the values of bound parameter P1 into register P2 ** -** If the parameter is named, then its name appears in P4 and P3==1. +** If the parameter is named, then its name appears in P4. ** The P4 value is used by sqlite3_bind_parameter_name(). */ -case OP_Variable: { -#if 0 /* local variables moved into u.ab */ - int p1; /* Variable to copy from */ - int p2; /* Register to copy to */ - int n; /* Number of values left to copy */ +case OP_Variable: { /* out2 */ Mem *pVar; /* Value being transferred */ -#endif /* local variables moved into u.ab */ - - u.ab.p1 = pOp->p1 - 1; - u.ab.p2 = pOp->p2; - u.ab.n = pOp->p3; - assert( u.ab.p1>=0 && u.ab.p1+u.ab.n<=p->nVar ); - assert( u.ab.p2>=1 && u.ab.p2+u.ab.n-1<=p->nMem ); - assert( pOp->p4.z==0 || pOp->p3==1 || pOp->p3==0 ); - - while( u.ab.n-- > 0 ){ - u.ab.pVar = &p->aVar[u.ab.p1++]; - if( sqlite3VdbeMemTooBig(u.ab.pVar) ){ - goto too_big; - } - pOut = &aMem[u.ab.p2++]; - sqlite3VdbeMemReleaseExternal(pOut); - pOut->flags = MEM_Null; - sqlite3VdbeMemShallowCopy(pOut, u.ab.pVar, MEM_Static); - UPDATE_MAX_BLOBSIZE(pOut); - } + + assert( pOp->p1>0 && pOp->p1<=p->nVar ); + assert( pOp->p4.z==0 || pOp->p4.z==p->azVar[pOp->p1-1] ); + pVar = &p->aVar[pOp->p1 - 1]; + if( sqlite3VdbeMemTooBig(pVar) ){ + goto too_big; + } + pOut = out2Prerelease(p, pOp); + sqlite3VdbeMemShallowCopy(pOut, pVar, MEM_Static); + UPDATE_MAX_BLOBSIZE(pOut); break; } /* Opcode: Move P1 P2 P3 * * +** Synopsis: r[P2@P3]=r[P1@P3] ** -** Move the values in register P1..P1+P3-1 over into -** registers P2..P2+P3-1. Registers P1..P1+P1-1 are +** Move the P3 values in register P1..P1+P3-1 over into +** registers P2..P2+P3-1. Registers P1..P1+P3-1 are ** left holding a NULL. It is an error for register ranges -** P1..P1+P3-1 and P2..P2+P3-1 to overlap. +** P1..P1+P3-1 and P2..P2+P3-1 to overlap. It is an error +** for P3 to be less than 1. 
*/ case OP_Move: { -#if 0 /* local variables moved into u.ac */ - char *zMalloc; /* Holding variable for allocated memory */ int n; /* Number of registers left to copy */ int p1; /* Register to copy from */ int p2; /* Register to copy to */ -#endif /* local variables moved into u.ac */ - - u.ac.n = pOp->p3; - u.ac.p1 = pOp->p1; - u.ac.p2 = pOp->p2; - assert( u.ac.n>0 && u.ac.p1>0 && u.ac.p2>0 ); - assert( u.ac.p1+u.ac.n<=u.ac.p2 || u.ac.p2+u.ac.n<=u.ac.p1 ); - - pIn1 = &aMem[u.ac.p1]; - pOut = &aMem[u.ac.p2]; - while( u.ac.n-- ){ - assert( pOut<=&aMem[p->nMem] ); - assert( pIn1<=&aMem[p->nMem] ); - u.ac.zMalloc = pOut->zMalloc; - pOut->zMalloc = 0; + + n = pOp->p3; + p1 = pOp->p1; + p2 = pOp->p2; + assert( n>0 && p1>0 && p2>0 ); + assert( p1+n<=p2 || p2+n<=p1 ); + + pIn1 = &aMem[p1]; + pOut = &aMem[p2]; + do{ + assert( pOut<=&aMem[(p->nMem-p->nCursor)] ); + assert( pIn1<=&aMem[(p->nMem-p->nCursor)] ); + assert( memIsValid(pIn1) ); + memAboutToChange(p, pOut); sqlite3VdbeMemMove(pOut, pIn1); - pIn1->zMalloc = u.ac.zMalloc; - REGISTER_TRACE(u.ac.p2++, pOut); +#ifdef SQLITE_DEBUG + if( pOut->pScopyFrom>=&aMem[p1] && pOut->pScopyFrom<pOut ){ + pOut->pScopyFrom += pOp->p2 - p1; + } +#endif + Deephemeralize(pOut); + REGISTER_TRACE(p2++, pOut); pIn1++; pOut++; - } + }while( --n ); break; } -/* Opcode: Copy P1 P2 * * * +/* Opcode: Copy P1 P2 P3 * * +** Synopsis: r[P2@P3+1]=r[P1@P3+1] ** -** Make a copy of register P1 into register P2. +** Make a copy of registers P1..P1+P3 into registers P2..P2+P3. ** ** This instruction makes a deep copy of the value. A duplicate ** is made of any string or blob constant. See also OP_SCopy. */ -case OP_Copy: { /* in1, out2 */ +case OP_Copy: { + int n; + + n = pOp->p3; pIn1 = &aMem[pOp->p1]; pOut = &aMem[pOp->p2]; assert( pOut!=pIn1 ); - sqlite3VdbeMemShallowCopy(pOut, pIn1, MEM_Ephem); - Deephemeralize(pOut); - REGISTER_TRACE(pOp->p2, pOut); + while( 1 ){ + sqlite3VdbeMemShallowCopy(pOut, pIn1, MEM_Ephem); + Deephemeralize(pOut); +#ifdef SQLITE_DEBUG + pOut->pScopyFrom = 0; +#endif + REGISTER_TRACE(pOp->p2+pOp->p3-n, pOut); + if( (n--)==0 ) break; + pOut++; + pIn1++; + } break; } /* Opcode: SCopy P1 P2 * * * +** Synopsis: r[P2]=r[P1] ** ** Make a shallow copy of register P1 into register P2. ** ** This instruction makes a shallow copy of the value. If the value ** is a string or blob, then the copy is only a pointer to the @@ -54647,35 +75483,64 @@ ** Worse, if the original is deallocated, the copy becomes invalid. ** Thus the program must guarantee that the original will not change ** during the lifetime of the copy. Use OP_Copy to make a complete ** copy. */ -case OP_SCopy: { /* in1, out2 */ +case OP_SCopy: { /* out2 */ pIn1 = &aMem[pOp->p1]; pOut = &aMem[pOp->p2]; assert( pOut!=pIn1 ); sqlite3VdbeMemShallowCopy(pOut, pIn1, MEM_Ephem); - REGISTER_TRACE(pOp->p2, pOut); +#ifdef SQLITE_DEBUG + if( pOut->pScopyFrom==0 ) pOut->pScopyFrom = pIn1; +#endif + break; +} + +/* Opcode: IntCopy P1 P2 * * * +** Synopsis: r[P2]=r[P1] +** +** Transfer the integer value held in register P1 into register P2. +** +** This is an optimized version of SCopy that works only for integer +** values. +*/ +case OP_IntCopy: { /* out2 */ + pIn1 = &aMem[pOp->p1]; + assert( (pIn1->flags & MEM_Int)!=0 ); + pOut = &aMem[pOp->p2]; + sqlite3VdbeMemSetInt64(pOut, pIn1->u.i); break; } /* Opcode: ResultRow P1 P2 * * * +** Synopsis: output=r[P1@P2] ** ** The registers P1 through P1+P2-1 contain a single row of ** results. 
This opcode causes the sqlite3_step() call to terminate ** with an SQLITE_ROW return code and it sets up the sqlite3_stmt -** structure to provide access to the top P1 values as the result -** row. +** structure to provide access to the r(P1)..r(P1+P2-1) values as +** the result row. */ case OP_ResultRow: { -#if 0 /* local variables moved into u.ad */ Mem *pMem; int i; -#endif /* local variables moved into u.ad */ assert( p->nResColumn==pOp->p2 ); assert( pOp->p1>0 ); - assert( pOp->p1+pOp->p2<=p->nMem+1 ); + assert( pOp->p1+pOp->p2<=(p->nMem-p->nCursor)+1 ); + +#ifndef SQLITE_OMIT_PROGRESS_CALLBACK + /* Run the progress counter just before returning. + */ + if( db->xProgress!=0 + && nVmStep>=nProgressLimit + && db->xProgress(db->pProgressArg)!=0 + ){ + rc = SQLITE_INTERRUPT; + goto vdbe_error_halt; + } +#endif /* If this statement has violated immediate foreign key constraints, do ** not return the number of rows modified. And do not RELEASE the statement ** transaction. It needs to be rolled back. */ if( SQLITE_OK!=(rc = sqlite3VdbeCheckFk(p, 0)) ){ @@ -54682,12 +75547,12 @@ assert( db->flags&SQLITE_CountRows ); assert( p->usesStmtJournal ); break; } - /* If the SQLITE_CountRows flag is set in sqlite3.flags mask, then - ** DML statements invoke this opcode to return the number of rows + /* If the SQLITE_CountRows flag is set in sqlite3.flags mask, then + ** DML statements invoke this opcode to return the number of rows ** modified to the user. This is the only way that a VM that ** opens a statement transaction may invoke this opcode. ** ** In case this is such a statement, close any statement transaction ** opened by this VM before returning control to the user. This is to @@ -54708,28 +75573,32 @@ /* Invalidate all ephemeral cursor row caches */ p->cacheCtr = (p->cacheCtr + 2)|1; /* Make sure the results of the current row are \000 terminated ** and have an assigned type. The results are de-ephemeralized as - ** as side effect. + ** a side effect. */ - u.ad.pMem = p->pResultSet = &aMem[pOp->p1]; - for(u.ad.i=0; u.ad.i<pOp->p2; u.ad.i++){ - sqlite3VdbeMemNulTerminate(&u.ad.pMem[u.ad.i]); - sqlite3VdbeMemStoreType(&u.ad.pMem[u.ad.i]); - REGISTER_TRACE(pOp->p1+u.ad.i, &u.ad.pMem[u.ad.i]); + pMem = p->pResultSet = &aMem[pOp->p1]; + for(i=0; i<pOp->p2; i++){ + assert( memIsValid(&pMem[i]) ); + Deephemeralize(&pMem[i]); + assert( (pMem[i].flags & MEM_Ephem)==0 + || (pMem[i].flags & (MEM_Str|MEM_Blob))==0 ); + sqlite3VdbeMemNulTerminate(&pMem[i]); + REGISTER_TRACE(pOp->p1+i, &pMem[i]); } if( db->mallocFailed ) goto no_mem; /* Return SQLITE_ROW */ - p->pc = pc + 1; + p->pc = (int)(pOp - aOp) + 1; rc = SQLITE_ROW; goto vdbe_return; } /* Opcode: Concat P1 P2 P3 * * +** Synopsis: r[P3]=r[P2]+r[P1] ** ** Add the text in register P1 onto the end of the text in ** register P2 and store the result in register P3. ** If either the P1 or P2 text are NULL then store NULL in P3. ** @@ -54738,13 +75607,11 @@ ** It is illegal for P1 and P3 to be the same register. Sometimes, ** if P3 is the same register as P2, the implementation is able ** to avoid a memcpy(). 
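The OP_Concat implementation just below sizes the result buffer for both operands plus two terminator bytes, copies the P2 operand first and appends P1 after it. A minimal standalone sketch of that buffer handling, under the assumption that both inputs are plain byte strings:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Concatenate z1 onto the end of z2, the order used by OP_Concat. */
    static char *concat(const char *z2, int n2, const char *z1, int n1){
      int nByte = n2 + n1;
      char *z = malloc((size_t)nByte + 2);   /* payload plus two terminators */
      if( z==0 ) return 0;
      memcpy(z, z2, (size_t)n2);             /* P2 contributes the prefix */
      memcpy(&z[n2], z1, (size_t)n1);        /* P1 is appended */
      z[nByte] = 0;
      z[nByte+1] = 0;                        /* two zero bytes, as in the opcode */
      return z;
    }

    int main(void){
      char *z = concat("Hello, ", 7, "world", 5);
      if( z ){ printf("%s\n", z); free(z); }
      return 0;
    }
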
*/ case OP_Concat: { /* same as TK_CONCAT, in1, in2, out3 */ -#if 0 /* local variables moved into u.ae */ i64 nByte; -#endif /* local variables moved into u.ae */ pIn1 = &aMem[pOp->p1]; pIn2 = &aMem[pOp->p2]; pOut = &aMem[pOp->p3]; assert( pIn1!=pOut ); @@ -54753,145 +75620,147 @@ break; } if( ExpandBlob(pIn1) || ExpandBlob(pIn2) ) goto no_mem; Stringify(pIn1, encoding); Stringify(pIn2, encoding); - u.ae.nByte = pIn1->n + pIn2->n; - if( u.ae.nByte>db->aLimit[SQLITE_LIMIT_LENGTH] ){ + nByte = pIn1->n + pIn2->n; + if( nByte>db->aLimit[SQLITE_LIMIT_LENGTH] ){ goto too_big; } - MemSetTypeFlag(pOut, MEM_Str); - if( sqlite3VdbeMemGrow(pOut, (int)u.ae.nByte+2, pOut==pIn2) ){ + if( sqlite3VdbeMemGrow(pOut, (int)nByte+2, pOut==pIn2) ){ goto no_mem; } + MemSetTypeFlag(pOut, MEM_Str); if( pOut!=pIn2 ){ memcpy(pOut->z, pIn2->z, pIn2->n); } memcpy(&pOut->z[pIn2->n], pIn1->z, pIn1->n); - pOut->z[u.ae.nByte] = 0; - pOut->z[u.ae.nByte+1] = 0; + pOut->z[nByte]=0; + pOut->z[nByte+1] = 0; pOut->flags |= MEM_Term; - pOut->n = (int)u.ae.nByte; + pOut->n = (int)nByte; pOut->enc = encoding; UPDATE_MAX_BLOBSIZE(pOut); break; } /* Opcode: Add P1 P2 P3 * * +** Synopsis: r[P3]=r[P1]+r[P2] ** ** Add the value in register P1 to the value in register P2 ** and store the result in register P3. ** If either input is NULL, the result is NULL. */ /* Opcode: Multiply P1 P2 P3 * * +** Synopsis: r[P3]=r[P1]*r[P2] ** ** ** Multiply the value in register P1 by the value in register P2 ** and store the result in register P3. ** If either input is NULL, the result is NULL. */ /* Opcode: Subtract P1 P2 P3 * * +** Synopsis: r[P3]=r[P2]-r[P1] ** ** Subtract the value in register P1 from the value in register P2 ** and store the result in register P3. ** If either input is NULL, the result is NULL. */ /* Opcode: Divide P1 P2 P3 * * +** Synopsis: r[P3]=r[P2]/r[P1] ** ** Divide the value in register P1 by the value in register P2 ** and store the result in register P3 (P3=P2/P1). If the value in ** register P1 is zero, then the result is NULL. If either input is ** NULL, the result is NULL. */ /* Opcode: Remainder P1 P2 P3 * * +** Synopsis: r[P3]=r[P2]%r[P1] ** -** Compute the remainder after integer division of the value in -** register P1 by the value in register P2 and store the result in P3. -** If the value in register P2 is zero the result is NULL. +** Compute the remainder after integer register P2 is divided by +** register P1 and store the result in register P3. +** If the value in register P1 is zero the result is NULL. ** If either operand is NULL, the result is NULL. 
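In the rewritten arithmetic opcodes below, 64-bit integer addition, subtraction and multiplication no longer wrap on overflow: when sqlite3AddInt64(), sqlite3SubInt64() or sqlite3MulInt64() report that the exact result does not fit, the operation jumps to fp_math and is redone in floating point. A small standalone sketch of that overflow-then-fallback idea, using a hand-rolled check in place of the SQLite helpers:

    #include <stdio.h>
    #include <stdint.h>

    /* Add two signed 64-bit values; return 0 on success or 1 if the exact
    ** result does not fit, mirroring the sqlite3AddInt64() convention. */
    static int add_int64(int64_t *pA, int64_t b){
      int64_t a = *pA;
      if( b>=0 ? a > INT64_MAX - b : a < INT64_MIN - b ) return 1;
      *pA = a + b;
      return 0;
    }

    /* Add the way the rewritten OP_Add behaves: exact integer math when it
    ** fits, otherwise fall back to double precision instead of wrapping. */
    static double vm_add(int64_t a, int64_t b, int *pIsInt){
      int64_t r = a;
      if( add_int64(&r, b)==0 ){ *pIsInt = 1; return (double)r; }
      *pIsInt = 0;
      return (double)a + (double)b;
    }

    int main(void){
      int isInt;
      double r = vm_add(INT64_MAX, 1, &isInt);
      printf("%s result: %.17g\n", isInt ? "integer" : "real", r);
      return 0;
    }
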
*/ case OP_Add: /* same as TK_PLUS, in1, in2, out3 */ case OP_Subtract: /* same as TK_MINUS, in1, in2, out3 */ case OP_Multiply: /* same as TK_STAR, in1, in2, out3 */ case OP_Divide: /* same as TK_SLASH, in1, in2, out3 */ case OP_Remainder: { /* same as TK_REM, in1, in2, out3 */ -#if 0 /* local variables moved into u.af */ - int flags; /* Combined MEM_* flags from both inputs */ + char bIntint; /* Started out as two integer operands */ + u16 flags; /* Combined MEM_* flags from both inputs */ + u16 type1; /* Numeric type of left operand */ + u16 type2; /* Numeric type of right operand */ i64 iA; /* Integer value of left operand */ i64 iB; /* Integer value of right operand */ double rA; /* Real value of left operand */ double rB; /* Real value of right operand */ -#endif /* local variables moved into u.af */ pIn1 = &aMem[pOp->p1]; - applyNumericAffinity(pIn1); + type1 = numericType(pIn1); pIn2 = &aMem[pOp->p2]; - applyNumericAffinity(pIn2); + type2 = numericType(pIn2); pOut = &aMem[pOp->p3]; - u.af.flags = pIn1->flags | pIn2->flags; - if( (u.af.flags & MEM_Null)!=0 ) goto arithmetic_result_is_null; - if( (pIn1->flags & pIn2->flags & MEM_Int)==MEM_Int ){ - u.af.iA = pIn1->u.i; - u.af.iB = pIn2->u.i; + flags = pIn1->flags | pIn2->flags; + if( (flags & MEM_Null)!=0 ) goto arithmetic_result_is_null; + if( (type1 & type2 & MEM_Int)!=0 ){ + iA = pIn1->u.i; + iB = pIn2->u.i; + bIntint = 1; switch( pOp->opcode ){ - case OP_Add: u.af.iB += u.af.iA; break; - case OP_Subtract: u.af.iB -= u.af.iA; break; - case OP_Multiply: u.af.iB *= u.af.iA; break; + case OP_Add: if( sqlite3AddInt64(&iB,iA) ) goto fp_math; break; + case OP_Subtract: if( sqlite3SubInt64(&iB,iA) ) goto fp_math; break; + case OP_Multiply: if( sqlite3MulInt64(&iB,iA) ) goto fp_math; break; case OP_Divide: { - if( u.af.iA==0 ) goto arithmetic_result_is_null; - /* Dividing the largest possible negative 64-bit integer (1<<63) by - ** -1 returns an integer too large to store in a 64-bit data-type. On - ** some architectures, the value overflows to (1<<63). On others, - ** a SIGFPE is issued. The following statement normalizes this - ** behavior so that all architectures behave as if integer - ** overflow occurred. - */ - if( u.af.iA==-1 && u.af.iB==SMALLEST_INT64 ) u.af.iA = 1; - u.af.iB /= u.af.iA; + if( iA==0 ) goto arithmetic_result_is_null; + if( iA==-1 && iB==SMALLEST_INT64 ) goto fp_math; + iB /= iA; break; } default: { - if( u.af.iA==0 ) goto arithmetic_result_is_null; - if( u.af.iA==-1 ) u.af.iA = 1; - u.af.iB %= u.af.iA; + if( iA==0 ) goto arithmetic_result_is_null; + if( iA==-1 ) iA = 1; + iB %= iA; break; } } - pOut->u.i = u.af.iB; + pOut->u.i = iB; MemSetTypeFlag(pOut, MEM_Int); }else{ - u.af.rA = sqlite3VdbeRealValue(pIn1); - u.af.rB = sqlite3VdbeRealValue(pIn2); + bIntint = 0; +fp_math: + rA = sqlite3VdbeRealValue(pIn1); + rB = sqlite3VdbeRealValue(pIn2); switch( pOp->opcode ){ - case OP_Add: u.af.rB += u.af.rA; break; - case OP_Subtract: u.af.rB -= u.af.rA; break; - case OP_Multiply: u.af.rB *= u.af.rA; break; + case OP_Add: rB += rA; break; + case OP_Subtract: rB -= rA; break; + case OP_Multiply: rB *= rA; break; case OP_Divide: { /* (double)0 In case of SQLITE_OMIT_FLOATING_POINT... 
*/ - if( u.af.rA==(double)0 ) goto arithmetic_result_is_null; - u.af.rB /= u.af.rA; + if( rA==(double)0 ) goto arithmetic_result_is_null; + rB /= rA; break; } default: { - u.af.iA = (i64)u.af.rA; - u.af.iB = (i64)u.af.rB; - if( u.af.iA==0 ) goto arithmetic_result_is_null; - if( u.af.iA==-1 ) u.af.iA = 1; - u.af.rB = (double)(u.af.iB % u.af.iA); + iA = (i64)rA; + iB = (i64)rB; + if( iA==0 ) goto arithmetic_result_is_null; + if( iA==-1 ) iA = 1; + rB = (double)(iB % iA); break; } } #ifdef SQLITE_OMIT_FLOATING_POINT - pOut->u.i = u.af.rB; + pOut->u.i = rB; MemSetTypeFlag(pOut, MEM_Int); #else - if( sqlite3IsNaN(u.af.rB) ){ + if( sqlite3IsNaN(rB) ){ goto arithmetic_result_is_null; } - pOut->r = u.af.rB; + pOut->u.r = rB; MemSetTypeFlag(pOut, MEM_Real); - if( (u.af.flags & MEM_Real)==0 ){ + if( ((type1|type2)&MEM_Real)==0 && !bIntint ){ sqlite3VdbeIntegerAffinity(pOut); } #endif } break; @@ -54899,29 +75768,37 @@ arithmetic_result_is_null: sqlite3VdbeMemSetNull(pOut); break; } -/* Opcode: CollSeq * * P4 +/* Opcode: CollSeq P1 * * P4 ** ** P4 is a pointer to a CollSeq struct. If the next call to a user function ** or aggregate calls sqlite3GetFuncCollSeq(), this collation sequence will ** be returned. This is used by the built-in min(), max() and nullif() ** functions. +** +** If P1 is not zero, then it is a register that a subsequent min() or +** max() aggregate will set to 1 if the current row is not the minimum or +** maximum. The P1 register is initialized to 0 by this instruction. ** ** The interface used by the implementation of the aforementioned functions ** to retrieve the collation sequence set by this opcode is not available -** publicly, only to user functions defined in func.c. +** publicly. Only built-in functions have access to this feature. */ case OP_CollSeq: { assert( pOp->p4type==P4_COLLSEQ ); + if( pOp->p1 ){ + sqlite3VdbeMemSetInt64(&aMem[pOp->p1], 0); + } break; } -/* Opcode: Function P1 P2 P3 P4 P5 +/* Opcode: Function0 P1 P2 P3 P4 P5 +** Synopsis: r[P3]=func(r[P2@P5]) ** -** Invoke a user function (P4 is a pointer to a Function structure that +** Invoke a user function (P4 is a pointer to a FuncDef object that ** defines the function) with P5 arguments taken from register P2 and ** successors. The result of the function is stored in register P3. ** Register P3 must not be one of the function inputs. ** ** P1 is a 32-bit bitmask indicating whether or not each argument to the @@ -54929,121 +75806,131 @@ ** argument was constant then bit 0 of P1 is set. This is used to determine ** whether meta data associated with a user function argument using the ** sqlite3_set_auxdata() API may be safely retained until the next ** invocation of this opcode. ** -** See also: AggStep and AggFinal +** See also: Function, AggStep, AggFinal */ +/* Opcode: Function P1 P2 P3 P4 P5 +** Synopsis: r[P3]=func(r[P2@P5]) +** +** Invoke a user function (P4 is a pointer to an sqlite3_context object that +** contains a pointer to the function to be run) with P5 arguments taken +** from register P2 and successors. The result of the function is stored +** in register P3. Register P3 must not be one of the function inputs. +** +** P1 is a 32-bit bitmask indicating whether or not each argument to the +** function was determined to be constant at compile time. If the first +** argument was constant then bit 0 of P1 is set. 
This is used to determine +** whether meta data associated with a user function argument using the +** sqlite3_set_auxdata() API may be safely retained until the next +** invocation of this opcode. +** +** SQL functions are initially coded as OP_Function0 with P4 pointing +** to a FuncDef object. But on first evaluation, the P4 operand is +** automatically converted into an sqlite3_context object and the operation +** changed to this OP_Function opcode. In this way, the initialization of +** the sqlite3_context object occurs only once, rather than once for each +** evaluation of the function. +** +** See also: Function0, AggStep, AggFinal +*/ +case OP_Function0: { + int n; + sqlite3_context *pCtx; + + assert( pOp->p4type==P4_FUNCDEF ); + n = pOp->p5; + assert( pOp->p3>0 && pOp->p3<=(p->nMem-p->nCursor) ); + assert( n==0 || (pOp->p2>0 && pOp->p2+n<=(p->nMem-p->nCursor)+1) ); + assert( pOp->p3<pOp->p2 || pOp->p3>=pOp->p2+n ); + pCtx = sqlite3DbMallocRawNN(db, sizeof(*pCtx) + (n-1)*sizeof(sqlite3_value*)); + if( pCtx==0 ) goto no_mem; + pCtx->pOut = 0; + pCtx->pFunc = pOp->p4.pFunc; + pCtx->iOp = (int)(pOp - aOp); + pCtx->pVdbe = p; + pCtx->argc = n; + pOp->p4type = P4_FUNCCTX; + pOp->p4.pCtx = pCtx; + pOp->opcode = OP_Function; + /* Fall through into OP_Function */ +} case OP_Function: { -#if 0 /* local variables moved into u.ag */ int i; - Mem *pArg; - sqlite3_context ctx; - sqlite3_value **apVal; - int n; -#endif /* local variables moved into u.ag */ - - u.ag.n = pOp->p5; - u.ag.apVal = p->apArg; - assert( u.ag.apVal || u.ag.n==0 ); - - assert( u.ag.n==0 || (pOp->p2>0 && pOp->p2+u.ag.n<=p->nMem+1) ); - assert( pOp->p3<pOp->p2 || pOp->p3>=pOp->p2+u.ag.n ); - u.ag.pArg = &aMem[pOp->p2]; - for(u.ag.i=0; u.ag.i<u.ag.n; u.ag.i++, u.ag.pArg++){ - u.ag.apVal[u.ag.i] = u.ag.pArg; - sqlite3VdbeMemStoreType(u.ag.pArg); - REGISTER_TRACE(pOp->p2+u.ag.i, u.ag.pArg); - } - - assert( pOp->p4type==P4_FUNCDEF || pOp->p4type==P4_VDBEFUNC ); - if( pOp->p4type==P4_FUNCDEF ){ - u.ag.ctx.pFunc = pOp->p4.pFunc; - u.ag.ctx.pVdbeFunc = 0; - }else{ - u.ag.ctx.pVdbeFunc = (VdbeFunc*)pOp->p4.pVdbeFunc; - u.ag.ctx.pFunc = u.ag.ctx.pVdbeFunc->pFunc; - } - - assert( pOp->p3>0 && pOp->p3<=p->nMem ); + sqlite3_context *pCtx; + + assert( pOp->p4type==P4_FUNCCTX ); + pCtx = pOp->p4.pCtx; + + /* If this function is inside of a trigger, the register array in aMem[] + ** might change from one evaluation to the next. The next block of code + ** checks to see if the register array has changed, and if so it + ** reinitializes the relavant parts of the sqlite3_context object */ pOut = &aMem[pOp->p3]; - u.ag.ctx.s.flags = MEM_Null; - u.ag.ctx.s.db = db; - u.ag.ctx.s.xDel = 0; - u.ag.ctx.s.zMalloc = 0; - - /* The output cell may already have a buffer allocated. Move - ** the pointer to u.ag.ctx.s so in case the user-function can use - ** the already allocated buffer instead of allocating a new one. - */ - sqlite3VdbeMemMove(&u.ag.ctx.s, pOut); - MemSetTypeFlag(&u.ag.ctx.s, MEM_Null); - - u.ag.ctx.isError = 0; - if( u.ag.ctx.pFunc->flags & SQLITE_FUNC_NEEDCOLL ){ - assert( pOp>aOp ); - assert( pOp[-1].p4type==P4_COLLSEQ ); - assert( pOp[-1].opcode==OP_CollSeq ); - u.ag.ctx.pColl = pOp[-1].p4.pColl; - } - (*u.ag.ctx.pFunc->xFunc)(&u.ag.ctx, u.ag.n, u.ag.apVal); - if( db->mallocFailed ){ - /* Even though a malloc() has failed, the implementation of the - ** user function may have called an sqlite3_result_XXX() function - ** to return a value. The following call releases any resources - ** associated with such a value. 
- */ - sqlite3VdbeMemRelease(&u.ag.ctx.s); - goto no_mem; - } - - /* If any auxiliary data functions have been called by this user function, - ** immediately call the destructor for any non-static values. - */ - if( u.ag.ctx.pVdbeFunc ){ - sqlite3VdbeDeleteAuxData(u.ag.ctx.pVdbeFunc, pOp->p1); - pOp->p4.pVdbeFunc = u.ag.ctx.pVdbeFunc; - pOp->p4type = P4_VDBEFUNC; - } + if( pCtx->pOut != pOut ){ + pCtx->pOut = pOut; + for(i=pCtx->argc-1; i>=0; i--) pCtx->argv[i] = &aMem[pOp->p2+i]; + } + + memAboutToChange(p, pCtx->pOut); +#ifdef SQLITE_DEBUG + for(i=0; i<pCtx->argc; i++){ + assert( memIsValid(pCtx->argv[i]) ); + REGISTER_TRACE(pOp->p2+i, pCtx->argv[i]); + } +#endif + MemSetTypeFlag(pCtx->pOut, MEM_Null); + pCtx->fErrorOrAux = 0; + db->lastRowid = lastRowid; + (*pCtx->pFunc->xSFunc)(pCtx, pCtx->argc, pCtx->argv);/* IMP: R-24505-23230 */ + lastRowid = db->lastRowid; /* Remember rowid changes made by xSFunc */ /* If the function returned an error, throw an exception */ - if( u.ag.ctx.isError ){ - sqlite3SetString(&p->zErrMsg, db, "%s", sqlite3_value_text(&u.ag.ctx.s)); - rc = u.ag.ctx.isError; + if( pCtx->fErrorOrAux ){ + if( pCtx->isError ){ + sqlite3VdbeError(p, "%s", sqlite3_value_text(pCtx->pOut)); + rc = pCtx->isError; + } + sqlite3VdbeDeleteAuxData(p, pCtx->iOp, pOp->p1); } /* Copy the result of the function into register P3 */ - sqlite3VdbeChangeEncoding(&u.ag.ctx.s, encoding); - sqlite3VdbeMemMove(pOut, &u.ag.ctx.s); - if( sqlite3VdbeMemTooBig(pOut) ){ - goto too_big; + if( pOut->flags & (MEM_Str|MEM_Blob) ){ + sqlite3VdbeChangeEncoding(pCtx->pOut, encoding); + if( sqlite3VdbeMemTooBig(pCtx->pOut) ) goto too_big; } - REGISTER_TRACE(pOp->p3, pOut); - UPDATE_MAX_BLOBSIZE(pOut); + + REGISTER_TRACE(pOp->p3, pCtx->pOut); + UPDATE_MAX_BLOBSIZE(pCtx->pOut); break; } /* Opcode: BitAnd P1 P2 P3 * * +** Synopsis: r[P3]=r[P1]&r[P2] ** ** Take the bit-wise AND of the values in register P1 and P2 and ** store the result in register P3. ** If either input is NULL, the result is NULL. */ /* Opcode: BitOr P1 P2 P3 * * +** Synopsis: r[P3]=r[P1]|r[P2] ** ** Take the bit-wise OR of the values in register P1 and P2 and ** store the result in register P3. ** If either input is NULL, the result is NULL. */ /* Opcode: ShiftLeft P1 P2 P3 * * +** Synopsis: r[P3]=r[P2]<<r[P1] ** ** Shift the integer value in register P2 to the left by the -** number of bits specified by the integer in regiser P1. +** number of bits specified by the integer in register P1. ** Store the result in register P3. ** If either input is NULL, the result is NULL. */ /* Opcode: ShiftRight P1 P2 P3 * * +** Synopsis: r[P3]=r[P2]>>r[P1] ** ** Shift the integer value in register P2 to the right by the ** number of bits specified by the integer in register P1. ** Store the result in register P3. ** If either input is NULL, the result is NULL. 
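The reworked shift implementation shown in the next hunk gives the shift opcodes fully defined behavior: a negative shift count shifts in the opposite direction, counts of 64 or more saturate to 0 (or -1 for a right shift of a negative value), and right shifts of negative values sign-extend by doing the shift on an unsigned copy. A standalone model of those rules (not the checkin's code, but the same logic):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    static int64_t vm_shift(int64_t a, int64_t n, int leftward){
      uint64_t u;
      if( n<0 ){                       /* negative count: flip direction */
        leftward = !leftward;
        n = n > -64 ? -n : 64;
      }
      if( n>=64 ){
        return (a>=0 || leftward) ? 0 : -1;     /* fully shifted out */
      }
      memcpy(&u, &a, sizeof u);        /* shift an unsigned copy: no UB */
      if( leftward ){
        u <<= n;
      }else{
        u >>= n;
        if( a<0 ) u |= ~(uint64_t)0 << (64-n);  /* sign-extend */
      }
      memcpy(&a, &u, sizeof a);
      return a;
    }

    int main(void){
      printf("%lld %lld %lld\n",
             (long long)vm_shift(1, 3, 1),    /* 8                          */
             (long long)vm_shift(-8, 2, 0),   /* -2, sign preserved         */
             (long long)vm_shift(5, -1, 1));  /* 2: left by -1 = right by 1 */
      return 0;
    }
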
@@ -55050,45 +75937,69 @@ */ case OP_BitAnd: /* same as TK_BITAND, in1, in2, out3 */ case OP_BitOr: /* same as TK_BITOR, in1, in2, out3 */ case OP_ShiftLeft: /* same as TK_LSHIFT, in1, in2, out3 */ case OP_ShiftRight: { /* same as TK_RSHIFT, in1, in2, out3 */ -#if 0 /* local variables moved into u.ah */ - i64 a; - i64 b; -#endif /* local variables moved into u.ah */ + i64 iA; + u64 uA; + i64 iB; + u8 op; pIn1 = &aMem[pOp->p1]; pIn2 = &aMem[pOp->p2]; pOut = &aMem[pOp->p3]; if( (pIn1->flags | pIn2->flags) & MEM_Null ){ sqlite3VdbeMemSetNull(pOut); break; } - u.ah.a = sqlite3VdbeIntValue(pIn2); - u.ah.b = sqlite3VdbeIntValue(pIn1); - switch( pOp->opcode ){ - case OP_BitAnd: u.ah.a &= u.ah.b; break; - case OP_BitOr: u.ah.a |= u.ah.b; break; - case OP_ShiftLeft: u.ah.a <<= u.ah.b; break; - default: assert( pOp->opcode==OP_ShiftRight ); - u.ah.a >>= u.ah.b; break; - } - pOut->u.i = u.ah.a; + iA = sqlite3VdbeIntValue(pIn2); + iB = sqlite3VdbeIntValue(pIn1); + op = pOp->opcode; + if( op==OP_BitAnd ){ + iA &= iB; + }else if( op==OP_BitOr ){ + iA |= iB; + }else if( iB!=0 ){ + assert( op==OP_ShiftRight || op==OP_ShiftLeft ); + + /* If shifting by a negative amount, shift in the other direction */ + if( iB<0 ){ + assert( OP_ShiftRight==OP_ShiftLeft+1 ); + op = 2*OP_ShiftLeft + 1 - op; + iB = iB>(-64) ? -iB : 64; + } + + if( iB>=64 ){ + iA = (iA>=0 || op==OP_ShiftLeft) ? 0 : -1; + }else{ + memcpy(&uA, &iA, sizeof(uA)); + if( op==OP_ShiftLeft ){ + uA <<= iB; + }else{ + uA >>= iB; + /* Sign-extend on a right shift of a negative number */ + if( iA<0 ) uA |= ((((u64)0xffffffff)<<32)|0xffffffff) << (64-iB); + } + memcpy(&iA, &uA, sizeof(iA)); + } + } + pOut->u.i = iA; MemSetTypeFlag(pOut, MEM_Int); break; } /* Opcode: AddImm P1 P2 * * * +** Synopsis: r[P1]=r[P1]+P2 ** ** Add the constant P2 to the value in register P1. ** The result is always an integer. ** ** To force any register to be an integer, just add 0. */ case OP_AddImm: { /* in1 */ pIn1 = &aMem[pOp->p1]; + memAboutToChange(p, pIn1); sqlite3VdbeMemIntegerify(pIn1); pIn1->u.i += pOp->p2; break; } @@ -55099,21 +76010,23 @@ ** without data loss, then jump immediately to P2, or if P2==0 ** raise an SQLITE_MISMATCH exception. */ case OP_MustBeInt: { /* jump, in1 */ pIn1 = &aMem[pOp->p1]; - applyAffinity(pIn1, SQLITE_AFF_NUMERIC, encoding); if( (pIn1->flags & MEM_Int)==0 ){ - if( pOp->p2==0 ){ - rc = SQLITE_MISMATCH; - goto abort_due_to_error; - }else{ - pc = pOp->p2 - 1; - } - }else{ - MemSetTypeFlag(pIn1, MEM_Int); - } + applyAffinity(pIn1, SQLITE_AFF_NUMERIC, encoding); + VdbeBranchTaken((pIn1->flags&MEM_Int)==0, 2); + if( (pIn1->flags & MEM_Int)==0 ){ + if( pOp->p2==0 ){ + rc = SQLITE_MISMATCH; + goto abort_due_to_error; + }else{ + goto jump_to_p2; + } + } + } + MemSetTypeFlag(pIn1, MEM_Int); break; } #ifndef SQLITE_OMIT_FLOATING_POINT /* Opcode: RealAffinity P1 * * * * @@ -55133,118 +76046,50 @@ break; } #endif #ifndef SQLITE_OMIT_CAST -/* Opcode: ToText P1 * * * * -** -** Force the value in register P1 to be text. -** If the value is numeric, convert it to a string using the -** equivalent of printf(). Blob values are unchanged and -** are afterwards simply interpreted as text. -** -** A NULL value is not changed by this routine. It remains NULL. 
-*/ -case OP_ToText: { /* same as TK_TO_TEXT, in1 */ - pIn1 = &aMem[pOp->p1]; - if( pIn1->flags & MEM_Null ) break; - assert( MEM_Str==(MEM_Blob>>3) ); - pIn1->flags |= (pIn1->flags&MEM_Blob)>>3; - applyAffinity(pIn1, SQLITE_AFF_TEXT, encoding); - rc = ExpandBlob(pIn1); - assert( pIn1->flags & MEM_Str || db->mallocFailed ); - pIn1->flags &= ~(MEM_Int|MEM_Real|MEM_Blob|MEM_Zero); - UPDATE_MAX_BLOBSIZE(pIn1); - break; -} - -/* Opcode: ToBlob P1 * * * * -** -** Force the value in register P1 to be a BLOB. -** If the value is numeric, convert it to a string first. -** Strings are simply reinterpreted as blobs with no change -** to the underlying data. -** -** A NULL value is not changed by this routine. It remains NULL. -*/ -case OP_ToBlob: { /* same as TK_TO_BLOB, in1 */ - pIn1 = &aMem[pOp->p1]; - if( pIn1->flags & MEM_Null ) break; - if( (pIn1->flags & MEM_Blob)==0 ){ - applyAffinity(pIn1, SQLITE_AFF_TEXT, encoding); - assert( pIn1->flags & MEM_Str || db->mallocFailed ); - MemSetTypeFlag(pIn1, MEM_Blob); - }else{ - pIn1->flags &= ~(MEM_TypeMask&~MEM_Blob); - } - UPDATE_MAX_BLOBSIZE(pIn1); - break; -} - -/* Opcode: ToNumeric P1 * * * * -** -** Force the value in register P1 to be numeric (either an -** integer or a floating-point number.) -** If the value is text or blob, try to convert it to an using the -** equivalent of atoi() or atof() and store 0 if no such conversion -** is possible. -** -** A NULL value is not changed by this routine. It remains NULL. -*/ -case OP_ToNumeric: { /* same as TK_TO_NUMERIC, in1 */ - pIn1 = &aMem[pOp->p1]; - if( (pIn1->flags & (MEM_Null|MEM_Int|MEM_Real))==0 ){ - sqlite3VdbeMemNumerify(pIn1); - } +/* Opcode: Cast P1 P2 * * * +** Synopsis: affinity(r[P1]) +** +** Force the value in register P1 to be the type defined by P2. +** +** <ul> +** <li value="97"> TEXT +** <li value="98"> BLOB +** <li value="99"> NUMERIC +** <li value="100"> INTEGER +** <li value="101"> REAL +** </ul> +** +** A NULL value is not changed by this routine. It remains NULL. +*/ +case OP_Cast: { /* in1 */ + assert( pOp->p2>=SQLITE_AFF_BLOB && pOp->p2<=SQLITE_AFF_REAL ); + testcase( pOp->p2==SQLITE_AFF_TEXT ); + testcase( pOp->p2==SQLITE_AFF_BLOB ); + testcase( pOp->p2==SQLITE_AFF_NUMERIC ); + testcase( pOp->p2==SQLITE_AFF_INTEGER ); + testcase( pOp->p2==SQLITE_AFF_REAL ); + pIn1 = &aMem[pOp->p1]; + memAboutToChange(p, pIn1); + rc = ExpandBlob(pIn1); + sqlite3VdbeMemCast(pIn1, pOp->p2, encoding); + UPDATE_MAX_BLOBSIZE(pIn1); break; } #endif /* SQLITE_OMIT_CAST */ -/* Opcode: ToInt P1 * * * * -** -** Force the value in register P1 be an integer. If -** The value is currently a real number, drop its fractional part. -** If the value is text or blob, try to convert it to an integer using the -** equivalent of atoi() and store 0 if no such conversion is possible. -** -** A NULL value is not changed by this routine. It remains NULL. -*/ -case OP_ToInt: { /* same as TK_TO_INT, in1 */ - pIn1 = &aMem[pOp->p1]; - if( (pIn1->flags & MEM_Null)==0 ){ - sqlite3VdbeMemIntegerify(pIn1); - } - break; -} - -#if !defined(SQLITE_OMIT_CAST) && !defined(SQLITE_OMIT_FLOATING_POINT) -/* Opcode: ToReal P1 * * * * -** -** Force the value in register P1 to be a floating point number. -** If The value is currently an integer, convert it. -** If the value is text or blob, try to convert it to an integer using the -** equivalent of atoi() and store 0.0 if no such conversion is possible. -** -** A NULL value is not changed by this routine. It remains NULL. 
-*/ -case OP_ToReal: { /* same as TK_TO_REAL, in1 */ - pIn1 = &aMem[pOp->p1]; - if( (pIn1->flags & MEM_Null)==0 ){ - sqlite3VdbeMemRealify(pIn1); - } - break; -} -#endif /* !defined(SQLITE_OMIT_CAST) && !defined(SQLITE_OMIT_FLOATING_POINT) */ - /* Opcode: Lt P1 P2 P3 P4 P5 +** Synopsis: if r[P1]<r[P3] goto P2 ** ** Compare the values in register P1 and P3. If reg(P3)<reg(P1) then ** jump to address P2. ** ** If the SQLITE_JUMPIFNULL bit of P5 is set and either reg(P1) or ** reg(P3) is NULL then take the jump. If the SQLITE_JUMPIFNULL -** bit is clear then fall thru if either operand is NULL. +** bit is clear then fall through if either operand is NULL. ** ** The SQLITE_AFF_MASK portion of P5 must be an affinity character - ** SQLITE_AFF_TEXT, SQLITE_AFF_INTEGER, and so forth. An attempt is made ** to coerce both inputs according to this affinity before the ** comparison is made. If the SQLITE_AFF_MASK is 0x00, then numeric @@ -55262,48 +76107,57 @@ ** are of different types, then numbers are considered less than ** strings and strings are considered less than blobs. ** ** If the SQLITE_STOREP2 bit of P5 is set, then do not jump. Instead, ** store a boolean result (either 0, or 1, or NULL) in register P2. +** +** If the SQLITE_NULLEQ bit is set in P5, then NULL values are considered +** equal to one another, provided that they do not have their MEM_Cleared +** bit set. */ /* Opcode: Ne P1 P2 P3 P4 P5 +** Synopsis: if r[P1]!=r[P3] goto P2 ** ** This works just like the Lt opcode except that the jump is taken if ** the operands in registers P1 and P3 are not equal. See the Lt opcode for ** additional information. ** ** If SQLITE_NULLEQ is set in P5 then the result of comparison is always either ** true or false and is never NULL. If both operands are NULL then the result ** of comparison is false. If either operand is NULL then the result is true. -** If neither operand is NULL the the result is the same as it would be if +** If neither operand is NULL the result is the same as it would be if ** the SQLITE_NULLEQ flag were omitted from P5. */ /* Opcode: Eq P1 P2 P3 P4 P5 +** Synopsis: if r[P1]==r[P3] goto P2 ** ** This works just like the Lt opcode except that the jump is taken if ** the operands in registers P1 and P3 are equal. ** See the Lt opcode for additional information. ** ** If SQLITE_NULLEQ is set in P5 then the result of comparison is always either ** true or false and is never NULL. If both operands are NULL then the result ** of comparison is true. If either operand is NULL then the result is false. -** If neither operand is NULL the the result is the same as it would be if +** If neither operand is NULL the result is the same as it would be if ** the SQLITE_NULLEQ flag were omitted from P5. */ /* Opcode: Le P1 P2 P3 P4 P5 +** Synopsis: if r[P1]<=r[P3] goto P2 ** ** This works just like the Lt opcode except that the jump is taken if ** the content of register P3 is less than or equal to the content of ** register P1. See the Lt opcode for additional information. */ /* Opcode: Gt P1 P2 P3 P4 P5 +** Synopsis: if r[P1]>r[P3] goto P2 ** ** This works just like the Lt opcode except that the jump is taken if ** the content of register P3 is greater than the content of ** register P1. See the Lt opcode for additional information. */ /* Opcode: Ge P1 P2 P3 P4 P5 +** Synopsis: if r[P1]>=r[P3] goto P2 ** ** This works just like the Lt opcode except that the jump is taken if ** the content of register P3 is greater than or equal to the content of ** register P1. 
See the Lt opcode for additional information. */ @@ -55311,103 +76165,152 @@ case OP_Ne: /* same as TK_NE, jump, in1, in3 */ case OP_Lt: /* same as TK_LT, jump, in1, in3 */ case OP_Le: /* same as TK_LE, jump, in1, in3 */ case OP_Gt: /* same as TK_GT, jump, in1, in3 */ case OP_Ge: { /* same as TK_GE, jump, in1, in3 */ -#if 0 /* local variables moved into u.ai */ int res; /* Result of the comparison of pIn1 against pIn3 */ char affinity; /* Affinity to use for comparison */ u16 flags1; /* Copy of initial value of pIn1->flags */ u16 flags3; /* Copy of initial value of pIn3->flags */ -#endif /* local variables moved into u.ai */ pIn1 = &aMem[pOp->p1]; pIn3 = &aMem[pOp->p3]; - u.ai.flags1 = pIn1->flags; - u.ai.flags3 = pIn3->flags; - if( (pIn1->flags | pIn3->flags)&MEM_Null ){ + flags1 = pIn1->flags; + flags3 = pIn3->flags; + if( (flags1 | flags3)&MEM_Null ){ /* One or both operands are NULL */ if( pOp->p5 & SQLITE_NULLEQ ){ /* If SQLITE_NULLEQ is set (which will only happen if the operator is ** OP_Eq or OP_Ne) then take the jump or not depending on whether ** or not both operands are null. */ assert( pOp->opcode==OP_Eq || pOp->opcode==OP_Ne ); - u.ai.res = (pIn1->flags & pIn3->flags & MEM_Null)==0; + assert( (flags1 & MEM_Cleared)==0 ); + assert( (pOp->p5 & SQLITE_JUMPIFNULL)==0 ); + if( (flags1&MEM_Null)!=0 + && (flags3&MEM_Null)!=0 + && (flags3&MEM_Cleared)==0 + ){ + res = 0; /* Results are equal */ + }else{ + res = 1; /* Results are not equal */ + } }else{ /* SQLITE_NULLEQ is clear and at least one operand is NULL, ** then the result is always NULL. ** The jump is taken if the SQLITE_JUMPIFNULL bit is set. */ if( pOp->p5 & SQLITE_STOREP2 ){ pOut = &aMem[pOp->p2]; + memAboutToChange(p, pOut); MemSetTypeFlag(pOut, MEM_Null); REGISTER_TRACE(pOp->p2, pOut); - }else if( pOp->p5 & SQLITE_JUMPIFNULL ){ - pc = pOp->p2-1; + }else{ + VdbeBranchTaken(2,3); + if( pOp->p5 & SQLITE_JUMPIFNULL ){ + goto jump_to_p2; + } } break; } }else{ /* Neither operand is NULL. Do a comparison. 
*/ - u.ai.affinity = pOp->p5 & SQLITE_AFF_MASK; - if( u.ai.affinity ){ - applyAffinity(pIn1, u.ai.affinity, encoding); - applyAffinity(pIn3, u.ai.affinity, encoding); - if( db->mallocFailed ) goto no_mem; + affinity = pOp->p5 & SQLITE_AFF_MASK; + if( affinity>=SQLITE_AFF_NUMERIC ){ + if( (flags1 & (MEM_Int|MEM_Real|MEM_Str))==MEM_Str ){ + applyNumericAffinity(pIn1,0); + } + if( (flags3 & (MEM_Int|MEM_Real|MEM_Str))==MEM_Str ){ + applyNumericAffinity(pIn3,0); + } + }else if( affinity==SQLITE_AFF_TEXT ){ + if( (flags1 & MEM_Str)==0 && (flags1 & (MEM_Int|MEM_Real))!=0 ){ + testcase( pIn1->flags & MEM_Int ); + testcase( pIn1->flags & MEM_Real ); + sqlite3VdbeMemStringify(pIn1, encoding, 1); + testcase( (flags1&MEM_Dyn) != (pIn1->flags&MEM_Dyn) ); + flags1 = (pIn1->flags & ~MEM_TypeMask) | (flags1 & MEM_TypeMask); + } + if( (flags3 & MEM_Str)==0 && (flags3 & (MEM_Int|MEM_Real))!=0 ){ + testcase( pIn3->flags & MEM_Int ); + testcase( pIn3->flags & MEM_Real ); + sqlite3VdbeMemStringify(pIn3, encoding, 1); + testcase( (flags3&MEM_Dyn) != (pIn3->flags&MEM_Dyn) ); + flags3 = (pIn3->flags & ~MEM_TypeMask) | (flags3 & MEM_TypeMask); + } } - assert( pOp->p4type==P4_COLLSEQ || pOp->p4.pColl==0 ); - ExpandBlob(pIn1); - ExpandBlob(pIn3); - u.ai.res = sqlite3MemCompare(pIn3, pIn1, pOp->p4.pColl); + if( flags1 & MEM_Zero ){ + sqlite3VdbeMemExpandBlob(pIn1); + flags1 &= ~MEM_Zero; + } + if( flags3 & MEM_Zero ){ + sqlite3VdbeMemExpandBlob(pIn3); + flags3 &= ~MEM_Zero; + } + res = sqlite3MemCompare(pIn3, pIn1, pOp->p4.pColl); } switch( pOp->opcode ){ - case OP_Eq: u.ai.res = u.ai.res==0; break; - case OP_Ne: u.ai.res = u.ai.res!=0; break; - case OP_Lt: u.ai.res = u.ai.res<0; break; - case OP_Le: u.ai.res = u.ai.res<=0; break; - case OP_Gt: u.ai.res = u.ai.res>0; break; - default: u.ai.res = u.ai.res>=0; break; - } - - if( pOp->p5 & SQLITE_STOREP2 ){ - pOut = &aMem[pOp->p2]; - MemSetTypeFlag(pOut, MEM_Int); - pOut->u.i = u.ai.res; - REGISTER_TRACE(pOp->p2, pOut); - }else if( u.ai.res ){ - pc = pOp->p2-1; + case OP_Eq: res = res==0; break; + case OP_Ne: res = res!=0; break; + case OP_Lt: res = res<0; break; + case OP_Le: res = res<=0; break; + case OP_Gt: res = res>0; break; + default: res = res>=0; break; } /* Undo any changes made by applyAffinity() to the input registers. */ - pIn1->flags = (pIn1->flags&~MEM_TypeMask) | (u.ai.flags1&MEM_TypeMask); - pIn3->flags = (pIn3->flags&~MEM_TypeMask) | (u.ai.flags3&MEM_TypeMask); + assert( (pIn1->flags & MEM_Dyn) == (flags1 & MEM_Dyn) ); + pIn1->flags = flags1; + assert( (pIn3->flags & MEM_Dyn) == (flags3 & MEM_Dyn) ); + pIn3->flags = flags3; + + if( pOp->p5 & SQLITE_STOREP2 ){ + pOut = &aMem[pOp->p2]; + memAboutToChange(p, pOut); + MemSetTypeFlag(pOut, MEM_Int); + pOut->u.i = res; + REGISTER_TRACE(pOp->p2, pOut); + }else{ + VdbeBranchTaken(res!=0, (pOp->p5 & SQLITE_NULLEQ)?2:3); + if( res ){ + goto jump_to_p2; + } + } break; } /* Opcode: Permutation * * * P4 * ** ** Set the permutation used by the OP_Compare operator to be the array ** of integers in P4. ** -** The permutation is only valid until the next OP_Permutation, OP_Compare, -** OP_Halt, or OP_ResultRow. Typically the OP_Permutation should occur -** immediately prior to the OP_Compare. +** The permutation is only valid until the next OP_Compare that has +** the OPFLAG_PERMUTE bit set in P5. Typically the OP_Permutation should +** occur immediately prior to the OP_Compare. +** +** The first integer in the P4 integer array is the length of the array +** and does not become part of the permutation. 
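As the updated comment above notes, the P4 integer array of OP_Permutation is now length-prefixed: element [0] holds the count and the permutation proper begins at element [1], which is why the implementation below sets aPermute to pOp->p4.ai + 1. A minimal reader of that layout, with illustrative names:

    #include <stdio.h>

    static void print_permutation(const int *ai){
      int n = ai[0];                   /* first slot is the length */
      const int *aPermute = ai + 1;    /* skip it, as OP_Permutation does */
      int i;
      for(i=0; i<n; i++) printf("%d ", aPermute[i]);
      printf("\n");
    }

    int main(void){
      static const int ai[] = { 3, 2, 0, 1 };  /* length 3, permutation 2,0,1 */
      print_permutation(ai);
      return 0;
    }
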
*/ case OP_Permutation: { assert( pOp->p4type==P4_INTARRAY ); assert( pOp->p4.ai ); - aPermute = pOp->p4.ai; + aPermute = pOp->p4.ai + 1; break; } -/* Opcode: Compare P1 P2 P3 P4 * +/* Opcode: Compare P1 P2 P3 P4 P5 +** Synopsis: r[P1@P3] <-> r[P2@P3] ** -** Compare to vectors of registers in reg(P1)..reg(P1+P3-1) (all this -** one "A") and in reg(P2)..reg(P2+P3-1) ("B"). Save the result of +** Compare two vectors of registers in reg(P1)..reg(P1+P3-1) (call this +** vector "A") and in reg(P2)..reg(P2+P3-1) ("B"). Save the result of ** the comparison for use by the next OP_Jump instruct. +** +** If P5 has the OPFLAG_PERMUTE bit set, then the order of comparison is +** determined by the most recent OP_Permutation operator. If the +** OPFLAG_PERMUTE bit is clear, then register are compared in sequential +** order. ** ** P4 is a KeyInfo structure that defines collating sequences and sort ** orders for the comparison. The permutation applies to registers ** only. The KeyInfo elements are used sequentially. ** @@ -55414,48 +76317,49 @@ ** The comparison is a sort comparison, so NULLs compare equal, ** NULLs are less than numbers, numbers are less than strings, ** and strings are less than blobs. */ case OP_Compare: { -#if 0 /* local variables moved into u.aj */ int n; int i; int p1; int p2; const KeyInfo *pKeyInfo; int idx; CollSeq *pColl; /* Collating sequence to use on this term */ int bRev; /* True for DESCENDING sort order */ -#endif /* local variables moved into u.aj */ - - u.aj.n = pOp->p3; - u.aj.pKeyInfo = pOp->p4.pKeyInfo; - assert( u.aj.n>0 ); - assert( u.aj.pKeyInfo!=0 ); - u.aj.p1 = pOp->p1; - u.aj.p2 = pOp->p2; + + if( (pOp->p5 & OPFLAG_PERMUTE)==0 ) aPermute = 0; + n = pOp->p3; + pKeyInfo = pOp->p4.pKeyInfo; + assert( n>0 ); + assert( pKeyInfo!=0 ); + p1 = pOp->p1; + p2 = pOp->p2; #if SQLITE_DEBUG if( aPermute ){ int k, mx = 0; - for(k=0; k<u.aj.n; k++) if( aPermute[k]>mx ) mx = aPermute[k]; - assert( u.aj.p1>0 && u.aj.p1+mx<=p->nMem+1 ); - assert( u.aj.p2>0 && u.aj.p2+mx<=p->nMem+1 ); + for(k=0; k<n; k++) if( aPermute[k]>mx ) mx = aPermute[k]; + assert( p1>0 && p1+mx<=(p->nMem-p->nCursor)+1 ); + assert( p2>0 && p2+mx<=(p->nMem-p->nCursor)+1 ); }else{ - assert( u.aj.p1>0 && u.aj.p1+u.aj.n<=p->nMem+1 ); - assert( u.aj.p2>0 && u.aj.p2+u.aj.n<=p->nMem+1 ); + assert( p1>0 && p1+n<=(p->nMem-p->nCursor)+1 ); + assert( p2>0 && p2+n<=(p->nMem-p->nCursor)+1 ); } #endif /* SQLITE_DEBUG */ - for(u.aj.i=0; u.aj.i<u.aj.n; u.aj.i++){ - u.aj.idx = aPermute ? aPermute[u.aj.i] : u.aj.i; - REGISTER_TRACE(u.aj.p1+u.aj.idx, &aMem[u.aj.p1+u.aj.idx]); - REGISTER_TRACE(u.aj.p2+u.aj.idx, &aMem[u.aj.p2+u.aj.idx]); - assert( u.aj.i<u.aj.pKeyInfo->nField ); - u.aj.pColl = u.aj.pKeyInfo->aColl[u.aj.i]; - u.aj.bRev = u.aj.pKeyInfo->aSortOrder[u.aj.i]; - iCompare = sqlite3MemCompare(&aMem[u.aj.p1+u.aj.idx], &aMem[u.aj.p2+u.aj.idx], u.aj.pColl); + for(i=0; i<n; i++){ + idx = aPermute ? aPermute[i] : i; + assert( memIsValid(&aMem[p1+idx]) ); + assert( memIsValid(&aMem[p2+idx]) ); + REGISTER_TRACE(p1+idx, &aMem[p1+idx]); + REGISTER_TRACE(p2+idx, &aMem[p2+idx]); + assert( i<pKeyInfo->nField ); + pColl = pKeyInfo->aColl[i]; + bRev = pKeyInfo->aSortOrder[i]; + iCompare = sqlite3MemCompare(&aMem[p1+idx], &aMem[p2+idx], pColl); if( iCompare ){ - if( u.aj.bRev ) iCompare = -iCompare; + if( bRev ) iCompare = -iCompare; break; } } aPermute = 0; break; @@ -55467,29 +76371,31 @@ ** in the most recent OP_Compare instruction the P1 vector was less than ** equal to, or greater than the P2 vector, respectively. 
*/ case OP_Jump: { /* jump */ if( iCompare<0 ){ - pc = pOp->p1 - 1; + VdbeBranchTaken(0,3); pOp = &aOp[pOp->p1 - 1]; }else if( iCompare==0 ){ - pc = pOp->p2 - 1; + VdbeBranchTaken(1,3); pOp = &aOp[pOp->p2 - 1]; }else{ - pc = pOp->p3 - 1; + VdbeBranchTaken(2,3); pOp = &aOp[pOp->p3 - 1]; } break; } /* Opcode: And P1 P2 P3 * * +** Synopsis: r[P3]=(r[P1] && r[P2]) ** ** Take the logical AND of the values in registers P1 and P2 and ** write the result into register P3. ** ** If either P1 or P2 is 0 (false) then the result is 0 even if ** the other input is NULL. A NULL and true or two NULLs give ** a NULL output. */ /* Opcode: Or P1 P2 P3 * * +** Synopsis: r[P3]=(r[P1] || r[P2]) ** ** Take the logical OR of the values in register P1 and P2 and ** store the answer in register P3. ** ** If either P1 or P2 is nonzero (true) then the result is 1 (true) @@ -55496,137 +76402,163 @@ ** even if the other input is NULL. A NULL and false or two NULLs ** give a NULL output. */ case OP_And: /* same as TK_AND, in1, in2, out3 */ case OP_Or: { /* same as TK_OR, in1, in2, out3 */ -#if 0 /* local variables moved into u.ak */ int v1; /* Left operand: 0==FALSE, 1==TRUE, 2==UNKNOWN or NULL */ int v2; /* Right operand: 0==FALSE, 1==TRUE, 2==UNKNOWN or NULL */ -#endif /* local variables moved into u.ak */ pIn1 = &aMem[pOp->p1]; if( pIn1->flags & MEM_Null ){ - u.ak.v1 = 2; + v1 = 2; }else{ - u.ak.v1 = sqlite3VdbeIntValue(pIn1)!=0; + v1 = sqlite3VdbeIntValue(pIn1)!=0; } pIn2 = &aMem[pOp->p2]; if( pIn2->flags & MEM_Null ){ - u.ak.v2 = 2; + v2 = 2; }else{ - u.ak.v2 = sqlite3VdbeIntValue(pIn2)!=0; + v2 = sqlite3VdbeIntValue(pIn2)!=0; } if( pOp->opcode==OP_And ){ static const unsigned char and_logic[] = { 0, 0, 0, 0, 1, 2, 0, 2, 2 }; - u.ak.v1 = and_logic[u.ak.v1*3+u.ak.v2]; + v1 = and_logic[v1*3+v2]; }else{ static const unsigned char or_logic[] = { 0, 1, 2, 1, 1, 1, 2, 1, 2 }; - u.ak.v1 = or_logic[u.ak.v1*3+u.ak.v2]; + v1 = or_logic[v1*3+v2]; } pOut = &aMem[pOp->p3]; - if( u.ak.v1==2 ){ + if( v1==2 ){ MemSetTypeFlag(pOut, MEM_Null); }else{ - pOut->u.i = u.ak.v1; + pOut->u.i = v1; MemSetTypeFlag(pOut, MEM_Int); } break; } /* Opcode: Not P1 P2 * * * +** Synopsis: r[P2]= !r[P1] ** ** Interpret the value in register P1 as a boolean value. Store the ** boolean complement in register P2. If the value in register P1 is ** NULL, then a NULL is stored in P2. */ case OP_Not: { /* same as TK_NOT, in1, out2 */ pIn1 = &aMem[pOp->p1]; pOut = &aMem[pOp->p2]; - if( pIn1->flags & MEM_Null ){ - sqlite3VdbeMemSetNull(pOut); - }else{ - sqlite3VdbeMemSetInt64(pOut, !sqlite3VdbeIntValue(pIn1)); + sqlite3VdbeMemSetNull(pOut); + if( (pIn1->flags & MEM_Null)==0 ){ + pOut->flags = MEM_Int; + pOut->u.i = !sqlite3VdbeIntValue(pIn1); } break; } /* Opcode: BitNot P1 P2 * * * +** Synopsis: r[P1]= ~r[P1] ** ** Interpret the content of register P1 as an integer. Store the ** ones-complement of the P1 value into register P2. If P1 holds ** a NULL then store a NULL in P2. */ case OP_BitNot: { /* same as TK_BITNOT, in1, out2 */ pIn1 = &aMem[pOp->p1]; pOut = &aMem[pOp->p2]; - if( pIn1->flags & MEM_Null ){ - sqlite3VdbeMemSetNull(pOut); + sqlite3VdbeMemSetNull(pOut); + if( (pIn1->flags & MEM_Null)==0 ){ + pOut->flags = MEM_Int; + pOut->u.i = ~sqlite3VdbeIntValue(pIn1); + } + break; +} + +/* Opcode: Once P1 P2 * * * +** +** Check the "once" flag number P1. If it is set, jump to instruction P2. +** Otherwise, set the flag and fall through to the next instruction. 
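The and_logic/or_logic tables in OP_And and OP_Or above encode SQL's three-valued logic, with 0 meaning false, 1 true and 2 NULL/unknown. A standalone sketch that reuses those exact tables and prints the full truth tables:

    #include <stdio.h>

    static const unsigned char and_logic[] = { 0, 0, 0,  0, 1, 2,  0, 2, 2 };
    static const unsigned char or_logic[]  = { 0, 1, 2,  1, 1, 1,  2, 1, 2 };

    int main(void){
      static const char *name[] = { "false", "true", "NULL" };
      int a, b;
      for(a=0; a<3; a++){
        for(b=0; b<3; b++){
          printf("%-5s AND %-5s = %-5s   %-5s OR %-5s = %s\n",
                 name[a], name[b], name[and_logic[a*3+b]],
                 name[a], name[b], name[or_logic[a*3+b]]);
        }
      }
      return 0;
    }
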
+** In other words, this opcode causes all following opcodes up through P2 +** (but not including P2) to run just once and to be skipped on subsequent +** times through the loop. +** +** All "once" flags are initially cleared whenever a prepared statement +** first begins to run. +*/ +case OP_Once: { /* jump */ + assert( pOp->p1<p->nOnceFlag ); + VdbeBranchTaken(p->aOnceFlag[pOp->p1]!=0, 2); + if( p->aOnceFlag[pOp->p1] ){ + goto jump_to_p2; }else{ - sqlite3VdbeMemSetInt64(pOut, ~sqlite3VdbeIntValue(pIn1)); + p->aOnceFlag[pOp->p1] = 1; } break; } /* Opcode: If P1 P2 P3 * * ** -** Jump to P2 if the value in register P1 is true. The value is +** Jump to P2 if the value in register P1 is true. The value ** is considered true if it is numeric and non-zero. If the value -** in P1 is NULL then take the jump if P3 is true. +** in P1 is NULL then take the jump if and only if P3 is non-zero. */ /* Opcode: IfNot P1 P2 P3 * * ** -** Jump to P2 if the value in register P1 is False. The value is -** is considered true if it has a numeric value of zero. If the value -** in P1 is NULL then take the jump if P3 is true. +** Jump to P2 if the value in register P1 is False. The value +** is considered false if it has a numeric value of zero. If the value +** in P1 is NULL then take the jump if and only if P3 is non-zero. */ case OP_If: /* jump, in1 */ case OP_IfNot: { /* jump, in1 */ -#if 0 /* local variables moved into u.al */ int c; -#endif /* local variables moved into u.al */ pIn1 = &aMem[pOp->p1]; if( pIn1->flags & MEM_Null ){ - u.al.c = pOp->p3; + c = pOp->p3; }else{ #ifdef SQLITE_OMIT_FLOATING_POINT - u.al.c = sqlite3VdbeIntValue(pIn1)!=0; + c = sqlite3VdbeIntValue(pIn1)!=0; #else - u.al.c = sqlite3VdbeRealValue(pIn1)!=0.0; + c = sqlite3VdbeRealValue(pIn1)!=0.0; #endif - if( pOp->opcode==OP_IfNot ) u.al.c = !u.al.c; + if( pOp->opcode==OP_IfNot ) c = !c; } - if( u.al.c ){ - pc = pOp->p2-1; + VdbeBranchTaken(c!=0, 2); + if( c ){ + goto jump_to_p2; } break; } /* Opcode: IsNull P1 P2 * * * +** Synopsis: if r[P1]==NULL goto P2 ** ** Jump to P2 if the value in register P1 is NULL. */ case OP_IsNull: { /* same as TK_ISNULL, jump, in1 */ pIn1 = &aMem[pOp->p1]; + VdbeBranchTaken( (pIn1->flags & MEM_Null)!=0, 2); if( (pIn1->flags & MEM_Null)!=0 ){ - pc = pOp->p2 - 1; + goto jump_to_p2; } break; } /* Opcode: NotNull P1 P2 * * * +** Synopsis: if r[P1]!=NULL goto P2 ** ** Jump to P2 if the value in register P1 is not NULL. */ case OP_NotNull: { /* same as TK_NOTNULL, jump, in1 */ pIn1 = &aMem[pOp->p1]; + VdbeBranchTaken( (pIn1->flags & MEM_Null)==0, 2); if( (pIn1->flags & MEM_Null)==0 ){ - pc = pOp->p2 - 1; + goto jump_to_p2; } break; } /* Opcode: Column P1 P2 P3 P4 P5 +** Synopsis: r[P3]=PX ** ** Interpret the data that cursor P1 points to as a structure built using ** the MakeRecord instruction. (See the MakeRecord opcode for additional ** information about the format of the data.) Extract the P2-th column ** from this record. If there are less that (P2+1) @@ -55640,476 +76572,469 @@ ** ** If the OPFLAG_CLEARCACHE bit is set on P5 and P1 is a pseudo-table cursor, ** then the cache of the cursor is reset prior to extracting the column. ** The first OP_Column against a pseudo-table after the value of the content ** register has changed should have this bit set. 
-*/ -case OP_Column: { -#if 0 /* local variables moved into u.am */ - u32 payloadSize; /* Number of bytes in the record */ - i64 payloadSize64; /* Number of bytes in the record */ - int p1; /* P1 value of the opcode */ - int p2; /* column number to retrieve */ - VdbeCursor *pC; /* The VDBE cursor */ - char *zRec; /* Pointer to complete record-data */ - BtCursor *pCrsr; /* The BTree cursor */ - u32 *aType; /* aType[i] holds the numeric type of the i-th column */ - u32 *aOffset; /* aOffset[i] is offset to start of data for i-th column */ - int nField; /* number of fields in the record */ - int len; /* The length of the serialized data for the column */ - int i; /* Loop counter */ - char *zData; /* Part of the record being decoded */ - Mem *pDest; /* Where to write the extracted value */ - Mem sMem; /* For storing the record being decoded */ - u8 *zIdx; /* Index into header */ - u8 *zEndHdr; /* Pointer to first byte after the header */ - u32 offset; /* Offset into the data */ - u32 szField; /* Number of bytes in the content of a field */ - int szHdr; /* Size of the header size field at start of record */ - int avail; /* Number of bytes of available data */ - Mem *pReg; /* PseudoTable input register */ -#endif /* local variables moved into u.am */ - - - u.am.p1 = pOp->p1; - u.am.p2 = pOp->p2; - u.am.pC = 0; - memset(&u.am.sMem, 0, sizeof(u.am.sMem)); - assert( u.am.p1<p->nCursor ); - assert( pOp->p3>0 && pOp->p3<=p->nMem ); - u.am.pDest = &aMem[pOp->p3]; - MemSetTypeFlag(u.am.pDest, MEM_Null); - u.am.zRec = 0; - - /* This block sets the variable u.am.payloadSize to be the total number of - ** bytes in the record. - ** - ** u.am.zRec is set to be the complete text of the record if it is available. - ** The complete record text is always available for pseudo-tables - ** If the record is stored in a cursor, the complete record text - ** might be available in the u.am.pC->aRow cache. Or it might not be. - ** If the data is unavailable, u.am.zRec is set to NULL. - ** - ** We also compute the number of columns in the record. For cursors, - ** the number of columns is stored in the VdbeCursor.nField element. - */ - u.am.pC = p->apCsr[u.am.p1]; - assert( u.am.pC!=0 ); -#ifndef SQLITE_OMIT_VIRTUALTABLE - assert( u.am.pC->pVtabCursor==0 ); -#endif - u.am.pCrsr = u.am.pC->pCursor; - if( u.am.pCrsr!=0 ){ - /* The record is stored in a B-Tree */ - rc = sqlite3VdbeCursorMoveto(u.am.pC); - if( rc ) goto abort_due_to_error; - if( u.am.pC->nullRow ){ - u.am.payloadSize = 0; - }else if( u.am.pC->cacheStatus==p->cacheCtr ){ - u.am.payloadSize = u.am.pC->payloadSize; - u.am.zRec = (char*)u.am.pC->aRow; - }else if( u.am.pC->isIndex ){ - assert( sqlite3BtreeCursorIsValid(u.am.pCrsr) ); - rc = sqlite3BtreeKeySize(u.am.pCrsr, &u.am.payloadSize64); - assert( rc==SQLITE_OK ); /* True because of CursorMoveto() call above */ - /* sqlite3BtreeParseCellPtr() uses getVarint32() to extract the - ** payload size, so it is impossible for u.am.payloadSize64 to be - ** larger than 32 bits. 
*/ - assert( (u.am.payloadSize64 & SQLITE_MAX_U32)==(u64)u.am.payloadSize64 ); - u.am.payloadSize = (u32)u.am.payloadSize64; - }else{ - assert( sqlite3BtreeCursorIsValid(u.am.pCrsr) ); - rc = sqlite3BtreeDataSize(u.am.pCrsr, &u.am.payloadSize); - assert( rc==SQLITE_OK ); /* DataSize() cannot fail */ - } - }else if( u.am.pC->pseudoTableReg>0 ){ - u.am.pReg = &aMem[u.am.pC->pseudoTableReg]; - assert( u.am.pReg->flags & MEM_Blob ); - u.am.payloadSize = u.am.pReg->n; - u.am.zRec = u.am.pReg->z; - u.am.pC->cacheStatus = (pOp->p5&OPFLAG_CLEARCACHE) ? CACHE_STALE : p->cacheCtr; - assert( u.am.payloadSize==0 || u.am.zRec!=0 ); - }else{ - /* Consider the row to be NULL */ - u.am.payloadSize = 0; - } - - /* If u.am.payloadSize is 0, then just store a NULL */ - if( u.am.payloadSize==0 ){ - assert( u.am.pDest->flags&MEM_Null ); - goto op_column_out; - } - assert( db->aLimit[SQLITE_LIMIT_LENGTH]>=0 ); - if( u.am.payloadSize > (u32)db->aLimit[SQLITE_LIMIT_LENGTH] ){ - goto too_big; - } - - u.am.nField = u.am.pC->nField; - assert( u.am.p2<u.am.nField ); - - /* Read and parse the table header. Store the results of the parse - ** into the record header cache fields of the cursor. - */ - u.am.aType = u.am.pC->aType; - if( u.am.pC->cacheStatus==p->cacheCtr ){ - u.am.aOffset = u.am.pC->aOffset; - }else{ - assert(u.am.aType); - u.am.avail = 0; - u.am.pC->aOffset = u.am.aOffset = &u.am.aType[u.am.nField]; - u.am.pC->payloadSize = u.am.payloadSize; - u.am.pC->cacheStatus = p->cacheCtr; - - /* Figure out how many bytes are in the header */ - if( u.am.zRec ){ - u.am.zData = u.am.zRec; - }else{ - if( u.am.pC->isIndex ){ - u.am.zData = (char*)sqlite3BtreeKeyFetch(u.am.pCrsr, &u.am.avail); - }else{ - u.am.zData = (char*)sqlite3BtreeDataFetch(u.am.pCrsr, &u.am.avail); - } - /* If KeyFetch()/DataFetch() managed to get the entire payload, - ** save the payload in the u.am.pC->aRow cache. That will save us from - ** having to make additional calls to fetch the content portion of - ** the record. - */ - assert( u.am.avail>=0 ); - if( u.am.payloadSize <= (u32)u.am.avail ){ - u.am.zRec = u.am.zData; - u.am.pC->aRow = (u8*)u.am.zData; - }else{ - u.am.pC->aRow = 0; - } - } - /* The following assert is true in all cases accept when - ** the database file has been corrupted externally. - ** assert( u.am.zRec!=0 || u.am.avail>=u.am.payloadSize || u.am.avail>=9 ); */ - u.am.szHdr = getVarint32((u8*)u.am.zData, u.am.offset); - - /* Make sure a corrupt database has not given us an oversize header. - ** Do this now to avoid an oversize memory allocation. - ** - ** Type entries can be between 1 and 5 bytes each. But 4 and 5 byte - ** types use so much data space that there can only be 4096 and 32 of - ** them, respectively. So the maximum header length results from a - ** 3-byte type for each of the maximum of 32768 columns plus three - ** extra bytes for the header length itself. 32768*3 + 3 = 98307. - */ - if( u.am.offset > 98307 ){ - rc = SQLITE_CORRUPT_BKPT; - goto op_column_out; - } - - /* Compute in u.am.len the number of bytes of data we need to read in order - ** to get u.am.nField type values. u.am.offset is an upper bound on this. But - ** u.am.nField might be significantly less than the true number of columns - ** in the table, and in that case, 5*u.am.nField+3 might be smaller than u.am.offset. - ** We want to minimize u.am.len in order to limit the size of the memory - ** allocation, especially if a corrupt database file has caused u.am.offset - ** to be oversized. Offset is limited to 98307 above. 
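/* Illustrative sketch (not part of this diff): a simplified reader for the
** big-endian base-128 varints from which the record header above is built
** (the real code uses getVarint32()/sqlite3GetVarint32()).  This sketch
** only handles values up to 32 bits / 5 bytes and ignores the full 9-byte
** 64-bit form; the demoGetVarint32() name is illustrative only. */
#include <assert.h>

typedef unsigned char demo_u8;
typedef unsigned int  demo_u32;

static int demoGetVarint32(const demo_u8 *z, demo_u32 *pVal){
  demo_u32 v = 0;
  int i;
  for(i=0; i<5; i++){
    v = (v<<7) | (z[i] & 0x7f);
    if( (z[i] & 0x80)==0 ){ *pVal = v; return i+1; }  /* high bit clear: last byte */
  }
  *pVal = v;
  return 5;
}

static void demoVarint(void){
  static const demo_u8 one[]   = { 0x04 };             /* decodes to 4      */
  static const demo_u8 two[]   = { 0x81, 0x00 };        /* decodes to 128    */
  static const demo_u8 three[] = { 0x86, 0x80, 0x03 };  /* decodes to 98307, */
  demo_u32 v;                                           /* the header bound  */
  assert( demoGetVarint32(one,   &v)==1 && v==4 );
  assert( demoGetVarint32(two,   &v)==2 && v==128 );
  assert( demoGetVarint32(three, &v)==3 && v==98307 );
}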
But 98307 might - ** still exceed Robson memory allocation limits on some configurations. - ** On systems that cannot tolerate large memory allocations, u.am.nField*5+3 - ** will likely be much smaller since u.am.nField will likely be less than - ** 20 or so. This insures that Robson memory allocation limits are - ** not exceeded even for corrupt database files. - */ - u.am.len = u.am.nField*5 + 3; - if( u.am.len > (int)u.am.offset ) u.am.len = (int)u.am.offset; - - /* The KeyFetch() or DataFetch() above are fast and will get the entire - ** record header in most cases. But they will fail to get the complete - ** record header if the record header does not fit on a single page - ** in the B-Tree. When that happens, use sqlite3VdbeMemFromBtree() to - ** acquire the complete header text. - */ - if( !u.am.zRec && u.am.avail<u.am.len ){ - u.am.sMem.flags = 0; - u.am.sMem.db = 0; - rc = sqlite3VdbeMemFromBtree(u.am.pCrsr, 0, u.am.len, u.am.pC->isIndex, &u.am.sMem); - if( rc!=SQLITE_OK ){ - goto op_column_out; - } - u.am.zData = u.am.sMem.z; - } - u.am.zEndHdr = (u8 *)&u.am.zData[u.am.len]; - u.am.zIdx = (u8 *)&u.am.zData[u.am.szHdr]; - - /* Scan the header and use it to fill in the u.am.aType[] and u.am.aOffset[] - ** arrays. u.am.aType[u.am.i] will contain the type integer for the u.am.i-th - ** column and u.am.aOffset[u.am.i] will contain the u.am.offset from the beginning - ** of the record to the start of the data for the u.am.i-th column - */ - for(u.am.i=0; u.am.i<u.am.nField; u.am.i++){ - if( u.am.zIdx<u.am.zEndHdr ){ - u.am.aOffset[u.am.i] = u.am.offset; - u.am.zIdx += getVarint32(u.am.zIdx, u.am.aType[u.am.i]); - u.am.szField = sqlite3VdbeSerialTypeLen(u.am.aType[u.am.i]); - u.am.offset += u.am.szField; - if( u.am.offset<u.am.szField ){ /* True if u.am.offset overflows */ - u.am.zIdx = &u.am.zEndHdr[1]; /* Forces SQLITE_CORRUPT return below */ - break; - } - }else{ - /* If u.am.i is less that u.am.nField, then there are less fields in this - ** record than SetNumColumns indicated there are columns in the - ** table. Set the u.am.offset for any extra columns not present in - ** the record to 0. This tells code below to store a NULL - ** instead of deserializing a value from the record. - */ - u.am.aOffset[u.am.i] = 0; - } - } - sqlite3VdbeMemRelease(&u.am.sMem); - u.am.sMem.flags = MEM_Null; - - /* If we have read more header data than was contained in the header, - ** or if the end of the last field appears to be past the end of the - ** record, or if the end of the last field appears to be before the end - ** of the record (when all fields present), then we must be dealing - ** with a corrupt database. - */ - if( (u.am.zIdx > u.am.zEndHdr) || (u.am.offset > u.am.payloadSize) - || (u.am.zIdx==u.am.zEndHdr && u.am.offset!=u.am.payloadSize) ){ - rc = SQLITE_CORRUPT_BKPT; - goto op_column_out; - } - } - - /* Get the column information. If u.am.aOffset[u.am.p2] is non-zero, then - ** deserialize the value from the record. If u.am.aOffset[u.am.p2] is zero, - ** then there are not enough fields in the record to satisfy the - ** request. In this case, set the value NULL or to P4 if P4 is - ** a pointer to a Mem object. 
- */ - if( u.am.aOffset[u.am.p2] ){ - assert( rc==SQLITE_OK ); - if( u.am.zRec ){ - sqlite3VdbeMemReleaseExternal(u.am.pDest); - sqlite3VdbeSerialGet((u8 *)&u.am.zRec[u.am.aOffset[u.am.p2]], u.am.aType[u.am.p2], u.am.pDest); - }else{ - u.am.len = sqlite3VdbeSerialTypeLen(u.am.aType[u.am.p2]); - sqlite3VdbeMemMove(&u.am.sMem, u.am.pDest); - rc = sqlite3VdbeMemFromBtree(u.am.pCrsr, u.am.aOffset[u.am.p2], u.am.len, u.am.pC->isIndex, &u.am.sMem); - if( rc!=SQLITE_OK ){ - goto op_column_out; - } - u.am.zData = u.am.sMem.z; - sqlite3VdbeSerialGet((u8*)u.am.zData, u.am.aType[u.am.p2], u.am.pDest); - } - u.am.pDest->enc = encoding; - }else{ - if( pOp->p4type==P4_MEM ){ - sqlite3VdbeMemShallowCopy(u.am.pDest, pOp->p4.pMem, MEM_Static); - }else{ - assert( u.am.pDest->flags&MEM_Null ); - } - } - - /* If we dynamically allocated space to hold the data (in the - ** sqlite3VdbeMemFromBtree() call above) then transfer control of that - ** dynamically allocated space over to the u.am.pDest structure. - ** This prevents a memory copy. - */ - if( u.am.sMem.zMalloc ){ - assert( u.am.sMem.z==u.am.sMem.zMalloc ); - assert( !(u.am.pDest->flags & MEM_Dyn) ); - assert( !(u.am.pDest->flags & (MEM_Blob|MEM_Str)) || u.am.pDest->z==u.am.sMem.z ); - u.am.pDest->flags &= ~(MEM_Ephem|MEM_Static); - u.am.pDest->flags |= MEM_Term; - u.am.pDest->z = u.am.sMem.z; - u.am.pDest->zMalloc = u.am.sMem.zMalloc; - } - - rc = sqlite3VdbeMemMakeWriteable(u.am.pDest); - -op_column_out: - UPDATE_MAX_BLOBSIZE(u.am.pDest); - REGISTER_TRACE(pOp->p3, u.am.pDest); +** +** If the OPFLAG_LENGTHARG and OPFLAG_TYPEOFARG bits are set on P5 when +** the result is guaranteed to only be used as the argument of a length() +** or typeof() function, respectively. The loading of large blobs can be +** skipped for length() and all content loading can be skipped for typeof(). 
+*/ +case OP_Column: { + i64 payloadSize64; /* Number of bytes in the record */ + int p2; /* column number to retrieve */ + VdbeCursor *pC; /* The VDBE cursor */ + BtCursor *pCrsr; /* The BTree cursor */ + u32 *aOffset; /* aOffset[i] is offset to start of data for i-th column */ + int len; /* The length of the serialized data for the column */ + int i; /* Loop counter */ + Mem *pDest; /* Where to write the extracted value */ + Mem sMem; /* For storing the record being decoded */ + const u8 *zData; /* Part of the record being decoded */ + const u8 *zHdr; /* Next unparsed byte of the header */ + const u8 *zEndHdr; /* Pointer to first byte after the header */ + u32 offset; /* Offset into the data */ + u64 offset64; /* 64-bit offset */ + u32 avail; /* Number of bytes of available data */ + u32 t; /* A type code from the record header */ + Mem *pReg; /* PseudoTable input register */ + + pC = p->apCsr[pOp->p1]; + p2 = pOp->p2; + + /* If the cursor cache is stale, bring it up-to-date */ + rc = sqlite3VdbeCursorMoveto(&pC, &p2); + + assert( pOp->p3>0 && pOp->p3<=(p->nMem-p->nCursor) ); + pDest = &aMem[pOp->p3]; + memAboutToChange(p, pDest); + assert( pOp->p1>=0 && pOp->p1<p->nCursor ); + assert( pC!=0 ); + assert( p2<pC->nField ); + aOffset = pC->aOffset; + assert( pC->eCurType!=CURTYPE_VTAB ); + assert( pC->eCurType!=CURTYPE_PSEUDO || pC->nullRow ); + assert( pC->eCurType!=CURTYPE_SORTER ); + pCrsr = pC->uc.pCursor; + + if( rc ) goto abort_due_to_error; + if( pC->cacheStatus!=p->cacheCtr ){ + if( pC->nullRow ){ + if( pC->eCurType==CURTYPE_PSEUDO ){ + assert( pC->uc.pseudoTableReg>0 ); + pReg = &aMem[pC->uc.pseudoTableReg]; + assert( pReg->flags & MEM_Blob ); + assert( memIsValid(pReg) ); + pC->payloadSize = pC->szRow = avail = pReg->n; + pC->aRow = (u8*)pReg->z; + }else{ + sqlite3VdbeMemSetNull(pDest); + goto op_column_out; + } + }else{ + assert( pC->eCurType==CURTYPE_BTREE ); + assert( pCrsr ); + if( pC->isTable==0 ){ + assert( sqlite3BtreeCursorIsValid(pCrsr) ); + VVA_ONLY(rc =) sqlite3BtreeKeySize(pCrsr, &payloadSize64); + assert( rc==SQLITE_OK ); /* True because of CursorMoveto() call above */ + /* sqlite3BtreeParseCellPtr() uses getVarint32() to extract the + ** payload size, so it is impossible for payloadSize64 to be + ** larger than 32 bits. */ + assert( (payloadSize64 & SQLITE_MAX_U32)==(u64)payloadSize64 ); + pC->aRow = sqlite3BtreeKeyFetch(pCrsr, &avail); + pC->payloadSize = (u32)payloadSize64; + }else{ + assert( sqlite3BtreeCursorIsValid(pCrsr) ); + VVA_ONLY(rc =) sqlite3BtreeDataSize(pCrsr, &pC->payloadSize); + assert( rc==SQLITE_OK ); /* DataSize() cannot fail */ + pC->aRow = sqlite3BtreeDataFetch(pCrsr, &avail); + } + assert( avail<=65536 ); /* Maximum page size is 64KiB */ + if( pC->payloadSize <= (u32)avail ){ + pC->szRow = pC->payloadSize; + }else if( pC->payloadSize > (u32)db->aLimit[SQLITE_LIMIT_LENGTH] ){ + goto too_big; + }else{ + pC->szRow = avail; + } + } + pC->cacheStatus = p->cacheCtr; + pC->iHdrOffset = getVarint32(pC->aRow, offset); + pC->nHdrParsed = 0; + aOffset[0] = offset; + + + if( avail<offset ){ + /* pC->aRow does not have to hold the entire row, but it does at least + ** need to cover the header of the record. If pC->aRow does not contain + ** the complete header, then set it to zero, forcing the header to be + ** dynamically allocated. */ + pC->aRow = 0; + pC->szRow = 0; + + /* Make sure a corrupt database has not given us an oversize header. + ** Do this now to avoid an oversize memory allocation. + ** + ** Type entries can be between 1 and 5 bytes each. 
But 4 and 5 byte + ** types use so much data space that there can only be 4096 and 32 of + ** them, respectively. So the maximum header length results from a + ** 3-byte type for each of the maximum of 32768 columns plus three + ** extra bytes for the header length itself. 32768*3 + 3 = 98307. + */ + if( offset > 98307 || offset > pC->payloadSize ){ + rc = SQLITE_CORRUPT_BKPT; + goto op_column_error; + } + } + + /* The following goto is an optimization. It can be omitted and + ** everything will still work. But OP_Column is measurably faster + ** by skipping the subsequent conditional, which is always true. + */ + assert( pC->nHdrParsed<=p2 ); /* Conditional skipped */ + goto op_column_read_header; + } + + /* Make sure at least the first p2+1 entries of the header have been + ** parsed and valid information is in aOffset[] and pC->aType[]. + */ + if( pC->nHdrParsed<=p2 ){ + /* If there is more header available for parsing in the record, try + ** to extract additional fields up through the p2+1-th field + */ + op_column_read_header: + if( pC->iHdrOffset<aOffset[0] ){ + /* Make sure zData points to enough of the record to cover the header. */ + if( pC->aRow==0 ){ + memset(&sMem, 0, sizeof(sMem)); + rc = sqlite3VdbeMemFromBtree(pCrsr, 0, aOffset[0], !pC->isTable, &sMem); + if( rc!=SQLITE_OK ) goto op_column_error; + zData = (u8*)sMem.z; + }else{ + zData = pC->aRow; + } + + /* Fill in pC->aType[i] and aOffset[i] values through the p2-th field. */ + i = pC->nHdrParsed; + offset64 = aOffset[i]; + zHdr = zData + pC->iHdrOffset; + zEndHdr = zData + aOffset[0]; + assert( i<=p2 && zHdr<zEndHdr ); + do{ + if( (t = zHdr[0])<0x80 ){ + zHdr++; + offset64 += sqlite3VdbeOneByteSerialTypeLen(t); + }else{ + zHdr += sqlite3GetVarint32(zHdr, &t); + offset64 += sqlite3VdbeSerialTypeLen(t); + } + pC->aType[i++] = t; + aOffset[i] = (u32)(offset64 & 0xffffffff); + }while( i<=p2 && zHdr<zEndHdr ); + pC->nHdrParsed = i; + pC->iHdrOffset = (u32)(zHdr - zData); + if( pC->aRow==0 ) sqlite3VdbeMemRelease(&sMem); + + /* The record is corrupt if any of the following are true: + ** (1) the bytes of the header extend past the declared header size + ** (2) the entire header was used but not all data was used + ** (3) the end of the data extends beyond the end of the record. + */ + if( (zHdr>=zEndHdr && (zHdr>zEndHdr || offset64!=pC->payloadSize)) + || (offset64 > pC->payloadSize) + ){ + rc = SQLITE_CORRUPT_BKPT; + goto op_column_error; + } + }else{ + t = 0; + } + + /* If after trying to extract new entries from the header, nHdrParsed is + ** still not up to p2, that means that the record has fewer than p2 + ** columns. So the result will be either the default value or a NULL. + */ + if( pC->nHdrParsed<=p2 ){ + if( pOp->p4type==P4_MEM ){ + sqlite3VdbeMemShallowCopy(pDest, pOp->p4.pMem, MEM_Static); + }else{ + sqlite3VdbeMemSetNull(pDest); + } + goto op_column_out; + } + }else{ + t = pC->aType[p2]; + } + + /* Extract the content for the p2+1-th column. Control can only + ** reach this point if aOffset[p2], aOffset[p2+1], and pC->aType[p2] are + ** all valid. 
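/* Illustrative sketch (not part of this diff): the serial-type to
** content-length mapping that sqlite3VdbeSerialTypeLen() implements in the
** header-parsing loop above, written out longhand.  This follows the
** documented SQLite record format: types 0..9 have fixed sizes, 10 and 11
** are reserved, and larger values encode BLOBs (even) or text (odd).
** demoSerialTypeLen() is an editor's example, not code from this checkin. */
static unsigned int demoSerialTypeLen(unsigned int t){
  static const unsigned int aSize[] = { 0, 1, 2, 3, 4, 6, 8, 8, 0, 0 };
  if( t<10 ) return aSize[t];   /* NULL, 1..8 byte ints, float, constants 0/1 */
  if( t<12 ) return 0;          /* reserved types                             */
  return (t-12)/2;              /* BLOB if even, text if odd                  */
}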
+ */ + assert( p2<pC->nHdrParsed ); + assert( rc==SQLITE_OK ); + assert( sqlite3VdbeCheckMemInvariants(pDest) ); + if( VdbeMemDynamic(pDest) ) sqlite3VdbeMemSetNull(pDest); + assert( t==pC->aType[p2] ); + pDest->enc = encoding; + if( pC->szRow>=aOffset[p2+1] ){ + /* This is the common case where the desired content fits on the original + ** page - where the content is not on an overflow page */ + zData = pC->aRow + aOffset[p2]; + if( t<12 ){ + sqlite3VdbeSerialGet(zData, t, pDest); + }else{ + /* If the column value is a string, we need a persistent value, not + ** a MEM_Ephem value. This branch is a fast short-cut that is equivalent + ** to calling sqlite3VdbeSerialGet() and sqlite3VdbeDeephemeralize(). + */ + static const u16 aFlag[] = { MEM_Blob, MEM_Str|MEM_Term }; + pDest->n = len = (t-12)/2; + if( pDest->szMalloc < len+2 ){ + pDest->flags = MEM_Null; + if( sqlite3VdbeMemGrow(pDest, len+2, 0) ) goto no_mem; + }else{ + pDest->z = pDest->zMalloc; + } + memcpy(pDest->z, zData, len); + pDest->z[len] = 0; + pDest->z[len+1] = 0; + pDest->flags = aFlag[t&1]; + } + }else{ + /* This branch happens only when content is on overflow pages */ + if( ((pOp->p5 & (OPFLAG_LENGTHARG|OPFLAG_TYPEOFARG))!=0 + && ((t>=12 && (t&1)==0) || (pOp->p5 & OPFLAG_TYPEOFARG)!=0)) + || (len = sqlite3VdbeSerialTypeLen(t))==0 + ){ + /* Content is irrelevant for + ** 1. the typeof() function, + ** 2. the length(X) function if X is a blob, and + ** 3. if the content length is zero. + ** So we might as well use bogus content rather than reading + ** content from disk. */ + static u8 aZero[8]; /* This is the bogus content */ + sqlite3VdbeSerialGet(aZero, t, pDest); + }else{ + rc = sqlite3VdbeMemFromBtree(pCrsr, aOffset[p2], len, !pC->isTable, + pDest); + if( rc==SQLITE_OK ){ + sqlite3VdbeSerialGet((const u8*)pDest->z, t, pDest); + pDest->flags &= ~MEM_Ephem; + } + } + } + +op_column_out: +op_column_error: + UPDATE_MAX_BLOBSIZE(pDest); + REGISTER_TRACE(pOp->p3, pDest); break; } /* Opcode: Affinity P1 P2 * P4 * +** Synopsis: affinity(r[P1@P2]) ** ** Apply affinities to a range of P2 registers starting with P1. ** ** P4 is a string that is P2 characters long. The nth character of the ** string indicates the column affinity that should be used for the nth ** memory cell in the range. */ case OP_Affinity: { -#if 0 /* local variables moved into u.an */ const char *zAffinity; /* The affinity to be applied */ char cAff; /* A single character of affinity */ -#endif /* local variables moved into u.an */ - u.an.zAffinity = pOp->p4.z; - assert( u.an.zAffinity!=0 ); - assert( u.an.zAffinity[pOp->p2]==0 ); + zAffinity = pOp->p4.z; + assert( zAffinity!=0 ); + assert( zAffinity[pOp->p2]==0 ); pIn1 = &aMem[pOp->p1]; - while( (u.an.cAff = *(u.an.zAffinity++))!=0 ){ - assert( pIn1 <= &p->aMem[p->nMem] ); - ExpandBlob(pIn1); - applyAffinity(pIn1, u.an.cAff, encoding); + while( (cAff = *(zAffinity++))!=0 ){ + assert( pIn1 <= &p->aMem[(p->nMem-p->nCursor)] ); + assert( memIsValid(pIn1) ); + applyAffinity(pIn1, cAff, encoding); pIn1++; } break; } /* Opcode: MakeRecord P1 P2 P3 P4 * +** Synopsis: r[P3]=mkrec(r[P1@P2]) ** -** Convert P2 registers beginning with P1 into a single entry -** suitable for use as a data record in a database table or as a key -** in an index. The details of the format are irrelevant as long as -** the OP_Column opcode can decode the record later. -** Refer to source code comments for the details of the record -** format. 
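/* Illustrative sketch (not part of this diff): the user-visible effect of the
** affinity conversions that OP_Affinity applies to registers.  Column a has
** INTEGER affinity, so the text '42' is converted before being stored; column
** b has TEXT affinity and keeps the text representation.  Uses only the
** public sqlite3 API; expected output is "a: integer, b: text". */
#include <stdio.h>
#include <sqlite3.h>

int main(void){
  sqlite3 *db;
  sqlite3_stmt *pStmt;
  sqlite3_open(":memory:", &db);
  sqlite3_exec(db, "CREATE TABLE t(a INTEGER, b TEXT);"
                   "INSERT INTO t VALUES('42','42');", 0, 0, 0);
  sqlite3_prepare_v2(db, "SELECT typeof(a), typeof(b) FROM t", -1, &pStmt, 0);
  if( sqlite3_step(pStmt)==SQLITE_ROW ){
    printf("a: %s, b: %s\n",
           (const char*)sqlite3_column_text(pStmt, 0),
           (const char*)sqlite3_column_text(pStmt, 1));
  }
  sqlite3_finalize(pStmt);
  sqlite3_close(db);
  return 0;
}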
+** Convert P2 registers beginning with P1 into the [record format] +** use as a data record in a database table or as a key +** in an index. The OP_Column opcode can decode the record later. ** ** P4 may be a string that is P2 characters long. The nth character of the ** string indicates the column affinity that should be used for the nth ** field of the index key. ** ** The mapping from character to affinity is given by the SQLITE_AFF_ ** macros defined in sqliteInt.h. ** -** If P4 is NULL then all index fields have the affinity NONE. +** If P4 is NULL then all index fields have the affinity BLOB. */ case OP_MakeRecord: { -#if 0 /* local variables moved into u.ao */ u8 *zNewRecord; /* A buffer to hold the data for the new record */ Mem *pRec; /* The new record */ u64 nData; /* Number of bytes of data space */ int nHdr; /* Number of bytes of header space */ i64 nByte; /* Data space required for this record */ - int nZero; /* Number of zero bytes at the end of the record */ + i64 nZero; /* Number of zero bytes at the end of the record */ int nVarint; /* Number of bytes in a varint */ u32 serial_type; /* Type field */ Mem *pData0; /* First field to be combined into the record */ Mem *pLast; /* Last field of the record */ int nField; /* Number of fields in the record */ char *zAffinity; /* The affinity string for the record */ int file_format; /* File format to use for encoding */ - int i; /* Space used in zNewRecord[] */ - int len; /* Length of a field */ -#endif /* local variables moved into u.ao */ + int i; /* Space used in zNewRecord[] header */ + int j; /* Space used in zNewRecord[] content */ + u32 len; /* Length of a field */ /* Assuming the record contains N fields, the record format looks ** like this: ** ** ------------------------------------------------------------------------ - ** | hdr-size | type 0 | type 1 | ... | type N-1 | data0 | ... | data N-1 | + ** | hdr-size | type 0 | type 1 | ... | type N-1 | data0 | ... | data N-1 | ** ------------------------------------------------------------------------ ** ** Data(0) is taken from register P1. Data(1) comes from register P1+1 - ** and so froth. + ** and so forth. ** - ** Each type field is a varint representing the serial type of the + ** Each type field is a varint representing the serial type of the ** corresponding data element (see sqlite3VdbeSerialType()). The ** hdr-size field is also a varint which is the offset from the beginning ** of the record to data0. 
*/ - u.ao.nData = 0; /* Number of bytes of data space */ - u.ao.nHdr = 0; /* Number of bytes of header space */ - u.ao.nByte = 0; /* Data space required for this record */ - u.ao.nZero = 0; /* Number of zero bytes at the end of the record */ - u.ao.nField = pOp->p1; - u.ao.zAffinity = pOp->p4.z; - assert( u.ao.nField>0 && pOp->p2>0 && pOp->p2+u.ao.nField<=p->nMem+1 ); - u.ao.pData0 = &aMem[u.ao.nField]; - u.ao.nField = pOp->p2; - u.ao.pLast = &u.ao.pData0[u.ao.nField-1]; - u.ao.file_format = p->minWriteFileFormat; + nData = 0; /* Number of bytes of data space */ + nHdr = 0; /* Number of bytes of header space */ + nZero = 0; /* Number of zero bytes at the end of the record */ + nField = pOp->p1; + zAffinity = pOp->p4.z; + assert( nField>0 && pOp->p2>0 && pOp->p2+nField<=(p->nMem-p->nCursor)+1 ); + pData0 = &aMem[nField]; + nField = pOp->p2; + pLast = &pData0[nField-1]; + file_format = p->minWriteFileFormat; + + /* Identify the output register */ + assert( pOp->p3<pOp->p1 || pOp->p3>=pOp->p1+pOp->p2 ); + pOut = &aMem[pOp->p3]; + memAboutToChange(p, pOut); + + /* Apply the requested affinity to all inputs + */ + assert( pData0<=pLast ); + if( zAffinity ){ + pRec = pData0; + do{ + applyAffinity(pRec++, *(zAffinity++), encoding); + assert( zAffinity[0]==0 || pRec<=pLast ); + }while( zAffinity[0] ); + } /* Loop through the elements that will make up the record to figure ** out how much space is required for the new record. */ - for(u.ao.pRec=u.ao.pData0; u.ao.pRec<=u.ao.pLast; u.ao.pRec++){ - if( u.ao.zAffinity ){ - applyAffinity(u.ao.pRec, u.ao.zAffinity[u.ao.pRec-u.ao.pData0], encoding); - } - if( u.ao.pRec->flags&MEM_Zero && u.ao.pRec->n>0 ){ - sqlite3VdbeMemExpandBlob(u.ao.pRec); - } - u.ao.serial_type = sqlite3VdbeSerialType(u.ao.pRec, u.ao.file_format); - u.ao.len = sqlite3VdbeSerialTypeLen(u.ao.serial_type); - u.ao.nData += u.ao.len; - u.ao.nHdr += sqlite3VarintLen(u.ao.serial_type); - if( u.ao.pRec->flags & MEM_Zero ){ - /* Only pure zero-filled BLOBs can be input to this Opcode. - ** We do not allow blobs with a prefix and a zero-filled tail. */ - u.ao.nZero += u.ao.pRec->u.nZero; - }else if( u.ao.len ){ - u.ao.nZero = 0; - } - } - - /* Add the initial header varint and total the size */ - u.ao.nHdr += u.ao.nVarint = sqlite3VarintLen(u.ao.nHdr); - if( u.ao.nVarint<sqlite3VarintLen(u.ao.nHdr) ){ - u.ao.nHdr++; - } - u.ao.nByte = u.ao.nHdr+u.ao.nData-u.ao.nZero; - if( u.ao.nByte>db->aLimit[SQLITE_LIMIT_LENGTH] ){ + pRec = pLast; + do{ + assert( memIsValid(pRec) ); + pRec->uTemp = serial_type = sqlite3VdbeSerialType(pRec, file_format, &len); + if( pRec->flags & MEM_Zero ){ + if( nData ){ + if( sqlite3VdbeMemExpandBlob(pRec) ) goto no_mem; + }else{ + nZero += pRec->u.nZero; + len -= pRec->u.nZero; + } + } + nData += len; + testcase( serial_type==127 ); + testcase( serial_type==128 ); + nHdr += serial_type<=127 ? 1 : sqlite3VarintLen(serial_type); + }while( (--pRec)>=pData0 ); + + /* EVIDENCE-OF: R-22564-11647 The header begins with a single varint + ** which determines the total number of bytes in the header. The varint + ** value is the size of the header in bytes including the size varint + ** itself. 
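/* Illustrative sketch (not part of this diff): the record that OP_MakeRecord
** would build for the three values (NULL, 7, 'hi'), spelled out byte by byte.
** The header-size varint is 4 because it counts itself plus the three
** one-byte serial types; the serial types are 0 (NULL), 1 (8-bit signed int)
** and 17 (text whose length is (17-13)/2 == 2); the content bytes follow the
** header, in column order. */
static const unsigned char demoRecord[] = {
  0x04,              /* header size, including this varint       */
  0x00,              /* serial type 0: NULL (no content bytes)   */
  0x01,              /* serial type 1: 8-bit signed integer      */
  0x11,              /* serial type 17: text value of length 2   */
  0x07,              /* content of column 1: the integer 7       */
  'h', 'i'           /* content of column 2: the text 'hi'       */
};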
*/ + testcase( nHdr==126 ); + testcase( nHdr==127 ); + if( nHdr<=126 ){ + /* The common case */ + nHdr += 1; + }else{ + /* Rare case of a really large header */ + nVarint = sqlite3VarintLen(nHdr); + nHdr += nVarint; + if( nVarint<sqlite3VarintLen(nHdr) ) nHdr++; + } + nByte = nHdr+nData; + if( nByte+nZero>db->aLimit[SQLITE_LIMIT_LENGTH] ){ goto too_big; } - /* Make sure the output register has a buffer large enough to store + /* Make sure the output register has a buffer large enough to store ** the new record. The output register (pOp->p3) is not allowed to ** be one of the input registers (because the following call to - ** sqlite3VdbeMemGrow() could clobber the value before it is used). + ** sqlite3VdbeMemClearAndResize() could clobber the value before it is used). */ - assert( pOp->p3<pOp->p1 || pOp->p3>=pOp->p1+pOp->p2 ); - pOut = &aMem[pOp->p3]; - if( sqlite3VdbeMemGrow(pOut, (int)u.ao.nByte, 0) ){ + if( sqlite3VdbeMemClearAndResize(pOut, (int)nByte) ){ goto no_mem; } - u.ao.zNewRecord = (u8 *)pOut->z; + zNewRecord = (u8 *)pOut->z; /* Write the record */ - u.ao.i = putVarint32(u.ao.zNewRecord, u.ao.nHdr); - for(u.ao.pRec=u.ao.pData0; u.ao.pRec<=u.ao.pLast; u.ao.pRec++){ - u.ao.serial_type = sqlite3VdbeSerialType(u.ao.pRec, u.ao.file_format); - u.ao.i += putVarint32(&u.ao.zNewRecord[u.ao.i], u.ao.serial_type); /* serial type */ - } - for(u.ao.pRec=u.ao.pData0; u.ao.pRec<=u.ao.pLast; u.ao.pRec++){ /* serial data */ - u.ao.i += sqlite3VdbeSerialPut(&u.ao.zNewRecord[u.ao.i], (int)(u.ao.nByte-u.ao.i), u.ao.pRec,u.ao.file_format); - } - assert( u.ao.i==u.ao.nByte ); - - assert( pOp->p3>0 && pOp->p3<=p->nMem ); - pOut->n = (int)u.ao.nByte; - pOut->flags = MEM_Blob | MEM_Dyn; - pOut->xDel = 0; - if( u.ao.nZero ){ - pOut->u.nZero = u.ao.nZero; + i = putVarint32(zNewRecord, nHdr); + j = nHdr; + assert( pData0<=pLast ); + pRec = pData0; + do{ + serial_type = pRec->uTemp; + /* EVIDENCE-OF: R-06529-47362 Following the size varint are one or more + ** additional varints, one per column. */ + i += putVarint32(&zNewRecord[i], serial_type); /* serial type */ + /* EVIDENCE-OF: R-64536-51728 The values for each column in the record + ** immediately follow the header. */ + j += sqlite3VdbeSerialPut(&zNewRecord[j], pRec, serial_type); /* content */ + }while( (++pRec)<=pLast ); + assert( i==nHdr ); + assert( j==nByte ); + + assert( pOp->p3>0 && pOp->p3<=(p->nMem-p->nCursor) ); + pOut->n = (int)nByte; + pOut->flags = MEM_Blob; + if( nZero ){ + pOut->u.nZero = nZero; pOut->flags |= MEM_Zero; } pOut->enc = SQLITE_UTF8; /* In case the blob is ever converted to text */ REGISTER_TRACE(pOp->p3, pOut); UPDATE_MAX_BLOBSIZE(pOut); break; } /* Opcode: Count P1 P2 * * * +** Synopsis: r[P2]=count() ** ** Store the number of entries (an integer value) in the table or index ** opened by cursor P1 in register P2 */ #ifndef SQLITE_OMIT_BTREECOUNT -case OP_Count: { /* out2-prerelease */ -#if 0 /* local variables moved into u.ap */ +case OP_Count: { /* out2 */ i64 nEntry; BtCursor *pCrsr; -#endif /* local variables moved into u.ap */ - - u.ap.pCrsr = p->apCsr[pOp->p1]->pCursor; - if( u.ap.pCrsr ){ - rc = sqlite3BtreeCount(u.ap.pCrsr, &u.ap.nEntry); - }else{ - u.ap.nEntry = 0; - } - pOut->u.i = u.ap.nEntry; + + assert( p->apCsr[pOp->p1]->eCurType==CURTYPE_BTREE ); + pCrsr = p->apCsr[pOp->p1]->uc.pCursor; + assert( pCrsr ); + nEntry = 0; /* Not needed. Only used to silence a warning. 
*/ + rc = sqlite3BtreeCount(pCrsr, &nEntry); + pOut = out2Prerelease(p, pOp); + pOut->u.i = nEntry; break; } #endif /* Opcode: Savepoint P1 * * P4 * @@ -56117,147 +77042,171 @@ ** Open, release or rollback the savepoint named by parameter P4, depending ** on the value of P1. To open a new savepoint, P1==0. To release (commit) an ** existing savepoint, P1==1, or to rollback an existing savepoint P1==2. */ case OP_Savepoint: { -#if 0 /* local variables moved into u.aq */ int p1; /* Value of P1 operand */ char *zName; /* Name of savepoint */ int nName; Savepoint *pNew; Savepoint *pSavepoint; Savepoint *pTmp; int iSavepoint; int ii; -#endif /* local variables moved into u.aq */ - u.aq.p1 = pOp->p1; - u.aq.zName = pOp->p4.z; + p1 = pOp->p1; + zName = pOp->p4.z; - /* Assert that the u.aq.p1 parameter is valid. Also that if there is no open - ** transaction, then there cannot be any savepoints. + /* Assert that the p1 parameter is valid. Also that if there is no open + ** transaction, then there cannot be any savepoints. */ assert( db->pSavepoint==0 || db->autoCommit==0 ); - assert( u.aq.p1==SAVEPOINT_BEGIN||u.aq.p1==SAVEPOINT_RELEASE||u.aq.p1==SAVEPOINT_ROLLBACK ); + assert( p1==SAVEPOINT_BEGIN||p1==SAVEPOINT_RELEASE||p1==SAVEPOINT_ROLLBACK ); assert( db->pSavepoint || db->isTransactionSavepoint==0 ); assert( checkSavepointCount(db) ); + assert( p->bIsReader ); - if( u.aq.p1==SAVEPOINT_BEGIN ){ - if( db->writeVdbeCnt>0 ){ - /* A new savepoint cannot be created if there are active write + if( p1==SAVEPOINT_BEGIN ){ + if( db->nVdbeWrite>0 ){ + /* A new savepoint cannot be created if there are active write ** statements (i.e. open read/write incremental blob handles). */ - sqlite3SetString(&p->zErrMsg, db, "cannot open savepoint - " - "SQL statements in progress"); + sqlite3VdbeError(p, "cannot open savepoint - SQL statements in progress"); rc = SQLITE_BUSY; }else{ - u.aq.nName = sqlite3Strlen30(u.aq.zName); + nName = sqlite3Strlen30(zName); + +#ifndef SQLITE_OMIT_VIRTUALTABLE + /* This call is Ok even if this savepoint is actually a transaction + ** savepoint (and therefore should not prompt xSavepoint()) callbacks. + ** If this is a transaction savepoint being opened, it is guaranteed + ** that the db->aVTrans[] array is empty. */ + assert( db->autoCommit==0 || db->nVTrans==0 ); + rc = sqlite3VtabSavepoint(db, SAVEPOINT_BEGIN, + db->nStatement+db->nSavepoint); + if( rc!=SQLITE_OK ) goto abort_due_to_error; +#endif /* Create a new savepoint structure. */ - u.aq.pNew = sqlite3DbMallocRaw(db, sizeof(Savepoint)+u.aq.nName+1); - if( u.aq.pNew ){ - u.aq.pNew->zName = (char *)&u.aq.pNew[1]; - memcpy(u.aq.pNew->zName, u.aq.zName, u.aq.nName+1); - + pNew = sqlite3DbMallocRawNN(db, sizeof(Savepoint)+nName+1); + if( pNew ){ + pNew->zName = (char *)&pNew[1]; + memcpy(pNew->zName, zName, nName+1); + /* If there is no open transaction, then mark this as a special ** "transaction savepoint". */ if( db->autoCommit ){ db->autoCommit = 0; db->isTransactionSavepoint = 1; }else{ db->nSavepoint++; } - + /* Link the new savepoint into the database handle's list. */ - u.aq.pNew->pNext = db->pSavepoint; - db->pSavepoint = u.aq.pNew; - u.aq.pNew->nDeferredCons = db->nDeferredCons; + pNew->pNext = db->pSavepoint; + db->pSavepoint = pNew; + pNew->nDeferredCons = db->nDeferredCons; + pNew->nDeferredImmCons = db->nDeferredImmCons; } } }else{ - u.aq.iSavepoint = 0; + iSavepoint = 0; /* Find the named savepoint. If there is no such savepoint, then an ** an error is returned to the user. 
*/ for( - u.aq.pSavepoint = db->pSavepoint; - u.aq.pSavepoint && sqlite3StrICmp(u.aq.pSavepoint->zName, u.aq.zName); - u.aq.pSavepoint = u.aq.pSavepoint->pNext + pSavepoint = db->pSavepoint; + pSavepoint && sqlite3StrICmp(pSavepoint->zName, zName); + pSavepoint = pSavepoint->pNext ){ - u.aq.iSavepoint++; + iSavepoint++; } - if( !u.aq.pSavepoint ){ - sqlite3SetString(&p->zErrMsg, db, "no such savepoint: %s", u.aq.zName); + if( !pSavepoint ){ + sqlite3VdbeError(p, "no such savepoint: %s", zName); rc = SQLITE_ERROR; - }else if( - db->writeVdbeCnt>0 || (u.aq.p1==SAVEPOINT_ROLLBACK && db->activeVdbeCnt>1) - ){ - /* It is not possible to release (commit) a savepoint if there are - ** active write statements. It is not possible to rollback a savepoint - ** if there are any active statements at all. + }else if( db->nVdbeWrite>0 && p1==SAVEPOINT_RELEASE ){ + /* It is not possible to release (commit) a savepoint if there are + ** active write statements. */ - sqlite3SetString(&p->zErrMsg, db, - "cannot %s savepoint - SQL statements in progress", - (u.aq.p1==SAVEPOINT_ROLLBACK ? "rollback": "release") - ); + sqlite3VdbeError(p, "cannot release savepoint - " + "SQL statements in progress"); rc = SQLITE_BUSY; }else{ /* Determine whether or not this is a transaction savepoint. If so, - ** and this is a RELEASE command, then the current transaction - ** is committed. + ** and this is a RELEASE command, then the current transaction + ** is committed. */ - int isTransaction = u.aq.pSavepoint->pNext==0 && db->isTransactionSavepoint; - if( isTransaction && u.aq.p1==SAVEPOINT_RELEASE ){ + int isTransaction = pSavepoint->pNext==0 && db->isTransactionSavepoint; + if( isTransaction && p1==SAVEPOINT_RELEASE ){ if( (rc = sqlite3VdbeCheckFk(p, 1))!=SQLITE_OK ){ goto vdbe_return; } db->autoCommit = 1; if( sqlite3VdbeHalt(p)==SQLITE_BUSY ){ - p->pc = pc; + p->pc = (int)(pOp - aOp); db->autoCommit = 0; p->rc = rc = SQLITE_BUSY; goto vdbe_return; } db->isTransactionSavepoint = 0; rc = p->rc; }else{ - u.aq.iSavepoint = db->nSavepoint - u.aq.iSavepoint - 1; - for(u.aq.ii=0; u.aq.ii<db->nDb; u.aq.ii++){ - rc = sqlite3BtreeSavepoint(db->aDb[u.aq.ii].pBt, u.aq.p1, u.aq.iSavepoint); + int isSchemaChange; + iSavepoint = db->nSavepoint - iSavepoint - 1; + if( p1==SAVEPOINT_ROLLBACK ){ + isSchemaChange = (db->flags & SQLITE_InternChanges)!=0; + for(ii=0; ii<db->nDb; ii++){ + rc = sqlite3BtreeTripAllCursors(db->aDb[ii].pBt, + SQLITE_ABORT_ROLLBACK, + isSchemaChange==0); + if( rc!=SQLITE_OK ) goto abort_due_to_error; + } + }else{ + isSchemaChange = 0; + } + for(ii=0; ii<db->nDb; ii++){ + rc = sqlite3BtreeSavepoint(db->aDb[ii].pBt, p1, iSavepoint); if( rc!=SQLITE_OK ){ goto abort_due_to_error; } } - if( u.aq.p1==SAVEPOINT_ROLLBACK && (db->flags&SQLITE_InternChanges)!=0 ){ + if( isSchemaChange ){ sqlite3ExpirePreparedStatements(db); - sqlite3ResetInternalSchema(db, 0); + sqlite3ResetAllSchemasOfConnection(db); + db->flags = (db->flags | SQLITE_InternChanges); } } - - /* Regardless of whether this is a RELEASE or ROLLBACK, destroy all + + /* Regardless of whether this is a RELEASE or ROLLBACK, destroy all ** savepoints nested inside of the savepoint being operated on. */ - while( db->pSavepoint!=u.aq.pSavepoint ){ - u.aq.pTmp = db->pSavepoint; - db->pSavepoint = u.aq.pTmp->pNext; - sqlite3DbFree(db, u.aq.pTmp); + while( db->pSavepoint!=pSavepoint ){ + pTmp = db->pSavepoint; + db->pSavepoint = pTmp->pNext; + sqlite3DbFree(db, pTmp); db->nSavepoint--; } - /* If it is a RELEASE, then destroy the savepoint being operated on - ** too. 
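/* Illustrative sketch (not part of this diff): the SQL-level behaviour that
** the savepoint handling here implements.  ROLLBACK TO undoes the work done
** since the named savepoint (and destroys any savepoints nested inside it)
** but leaves that savepoint on the stack; RELEASE removes it as well.
** The savepoint names and table are placeholders; only the public
** sqlite3_exec() API is used. */
#include <sqlite3.h>

static void demoSavepoints(sqlite3 *db){
  sqlite3_exec(db, "CREATE TABLE t(x)",        0, 0, 0);
  sqlite3_exec(db, "SAVEPOINT outer_sp",       0, 0, 0);
  sqlite3_exec(db, "INSERT INTO t VALUES(1)",  0, 0, 0);
  sqlite3_exec(db, "SAVEPOINT inner_sp",       0, 0, 0);
  sqlite3_exec(db, "INSERT INTO t VALUES(2)",  0, 0, 0);
  sqlite3_exec(db, "ROLLBACK TO outer_sp",     0, 0, 0);  /* undoes both inserts, drops inner_sp */
  sqlite3_exec(db, "RELEASE outer_sp",         0, 0, 0);  /* outer_sp is now gone as well        */
}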
If it is a ROLLBACK TO, then set the number of deferred + /* If it is a RELEASE, then destroy the savepoint being operated on + ** too. If it is a ROLLBACK TO, then set the number of deferred ** constraint violations present in the database to the value stored ** when the savepoint was created. */ - if( u.aq.p1==SAVEPOINT_RELEASE ){ - assert( u.aq.pSavepoint==db->pSavepoint ); - db->pSavepoint = u.aq.pSavepoint->pNext; - sqlite3DbFree(db, u.aq.pSavepoint); + if( p1==SAVEPOINT_RELEASE ){ + assert( pSavepoint==db->pSavepoint ); + db->pSavepoint = pSavepoint->pNext; + sqlite3DbFree(db, pSavepoint); if( !isTransaction ){ db->nSavepoint--; } }else{ - db->nDeferredCons = u.aq.pSavepoint->nDeferredCons; + db->nDeferredCons = pSavepoint->nDeferredCons; + db->nDeferredImmCons = pSavepoint->nDeferredImmCons; + } + + if( !isTransaction || p1==SAVEPOINT_ROLLBACK ){ + rc = sqlite3VtabSavepoint(db, p1, iSavepoint); + if( rc!=SQLITE_OK ) goto abort_due_to_error; } } } break; @@ -56271,53 +77220,43 @@ ** there are active writing VMs or active VMs that use shared cache. ** ** This instruction causes the VM to halt. */ case OP_AutoCommit: { -#if 0 /* local variables moved into u.ar */ int desiredAutoCommit; int iRollback; - int turnOnAC; -#endif /* local variables moved into u.ar */ - - u.ar.desiredAutoCommit = pOp->p1; - u.ar.iRollback = pOp->p2; - u.ar.turnOnAC = u.ar.desiredAutoCommit && !db->autoCommit; - assert( u.ar.desiredAutoCommit==1 || u.ar.desiredAutoCommit==0 ); - assert( u.ar.desiredAutoCommit==1 || u.ar.iRollback==0 ); - assert( db->activeVdbeCnt>0 ); /* At least this one VM is active */ - - if( u.ar.turnOnAC && u.ar.iRollback && db->activeVdbeCnt>1 ){ - /* If this instruction implements a ROLLBACK and other VMs are - ** still running, and a transaction is active, return an error indicating - ** that the other VMs must complete first. - */ - sqlite3SetString(&p->zErrMsg, db, "cannot rollback transaction - " - "SQL statements in progress"); - rc = SQLITE_BUSY; - }else if( u.ar.turnOnAC && !u.ar.iRollback && db->writeVdbeCnt>0 ){ - /* If this instruction implements a COMMIT and other VMs are writing - ** return an error indicating that the other VMs must complete first. - */ - sqlite3SetString(&p->zErrMsg, db, "cannot commit transaction - " - "SQL statements in progress"); - rc = SQLITE_BUSY; - }else if( u.ar.desiredAutoCommit!=db->autoCommit ){ - if( u.ar.iRollback ){ - assert( u.ar.desiredAutoCommit==1 ); - sqlite3RollbackAll(db); + + desiredAutoCommit = pOp->p1; + iRollback = pOp->p2; + assert( desiredAutoCommit==1 || desiredAutoCommit==0 ); + assert( desiredAutoCommit==1 || iRollback==0 ); + assert( db->nVdbeActive>0 ); /* At least this one VM is active */ + assert( p->bIsReader ); + + if( desiredAutoCommit!=db->autoCommit ){ + if( iRollback ){ + assert( desiredAutoCommit==1 ); + sqlite3RollbackAll(db, SQLITE_ABORT_ROLLBACK); db->autoCommit = 1; + }else if( desiredAutoCommit && db->nVdbeWrite>0 ){ + /* If this instruction implements a COMMIT and other VMs are writing + ** return an error indicating that the other VMs must complete first. 
+ */ + sqlite3VdbeError(p, "cannot commit transaction - " + "SQL statements in progress"); + rc = SQLITE_BUSY; + break; }else if( (rc = sqlite3VdbeCheckFk(p, 1))!=SQLITE_OK ){ goto vdbe_return; }else{ - db->autoCommit = (u8)u.ar.desiredAutoCommit; - if( sqlite3VdbeHalt(p)==SQLITE_BUSY ){ - p->pc = pc; - db->autoCommit = (u8)(1-u.ar.desiredAutoCommit); - p->rc = rc = SQLITE_BUSY; - goto vdbe_return; - } + db->autoCommit = (u8)desiredAutoCommit; + } + if( sqlite3VdbeHalt(p)==SQLITE_BUSY ){ + p->pc = (int)(pOp - aOp); + db->autoCommit = (u8)(1-desiredAutoCommit); + p->rc = rc = SQLITE_BUSY; + goto vdbe_return; } assert( db->nStatement==0 ); sqlite3CloseSavepoints(db); if( p->rc==SQLITE_OK ){ rc = SQLITE_DONE; @@ -56324,87 +77263,137 @@ }else{ rc = SQLITE_ERROR; } goto vdbe_return; }else{ - sqlite3SetString(&p->zErrMsg, db, - (!u.ar.desiredAutoCommit)?"cannot start a transaction within a transaction":( - (u.ar.iRollback)?"cannot rollback - no transaction is active": + sqlite3VdbeError(p, + (!desiredAutoCommit)?"cannot start a transaction within a transaction":( + (iRollback)?"cannot rollback - no transaction is active": "cannot commit - no transaction is active")); - + rc = SQLITE_ERROR; } break; } -/* Opcode: Transaction P1 P2 * * * +/* Opcode: Transaction P1 P2 P3 P4 P5 ** -** Begin a transaction. The transaction ends when a Commit or Rollback -** opcode is encountered. Depending on the ON CONFLICT setting, the -** transaction might also be rolled back if an error is encountered. +** Begin a transaction on database P1 if a transaction is not already +** active. +** If P2 is non-zero, then a write-transaction is started, or if a +** read-transaction is already active, it is upgraded to a write-transaction. +** If P2 is zero, then a read-transaction is started. ** ** P1 is the index of the database file on which the transaction is ** started. Index 0 is the main database file and index 1 is the ** file used for temporary tables. Indices of 2 or more are used for ** attached databases. ** -** If P2 is non-zero, then a write-transaction is started. A RESERVED lock is -** obtained on the database file when a write-transaction is started. No -** other process can start another write transaction while this transaction is -** underway. Starting a write transaction also creates a rollback journal. A -** write transaction must be started before any changes can be made to the -** database. If P2 is 2 or greater then an EXCLUSIVE lock is also obtained -** on the file. -** ** If a write-transaction is started and the Vdbe.usesStmtJournal flag is ** true (this flag is set if the Vdbe may modify more than one row and may ** throw an ABORT exception), a statement transaction may also be opened. ** More specifically, a statement transaction is opened iff the database ** connection is currently not in autocommit mode, or if there are other -** active statements. A statement transaction allows the affects of this +** active statements. A statement transaction allows the changes made by this ** VDBE to be rolled back after an error without having to roll back the ** entire transaction. If no error is encountered, the statement transaction ** will automatically commit when the VDBE halts. ** -** If P2 is zero, then a read-lock is obtained on the database file. +** If P5!=0 then this opcode also checks the schema cookie against P3 +** and the schema generation counter against P4. +** The cookie changes its value whenever the database schema changes. 
+** This operation is used to detect when that the cookie has changed +** and that the current process needs to reread the schema. If the schema +** cookie in P3 differs from the schema cookie in the database header or +** if the schema generation counter in P4 differs from the current +** generation counter, then an SQLITE_SCHEMA error is raised and execution +** halts. The sqlite3_step() wrapper function might then reprepare the +** statement and rerun it from the beginning. */ case OP_Transaction: { -#if 0 /* local variables moved into u.as */ Btree *pBt; -#endif /* local variables moved into u.as */ + int iMeta; + int iGen; + assert( p->bIsReader ); + assert( p->readOnly==0 || pOp->p2==0 ); assert( pOp->p1>=0 && pOp->p1<db->nDb ); - assert( (p->btreeMask & (1<<pOp->p1))!=0 ); - u.as.pBt = db->aDb[pOp->p1].pBt; - - if( u.as.pBt ){ - rc = sqlite3BtreeBeginTrans(u.as.pBt, pOp->p2); - if( rc==SQLITE_BUSY ){ - p->pc = pc; - p->rc = rc = SQLITE_BUSY; + assert( DbMaskTest(p->btreeMask, pOp->p1) ); + if( pOp->p2 && (db->flags & SQLITE_QueryOnly)!=0 ){ + rc = SQLITE_READONLY; + goto abort_due_to_error; + } + pBt = db->aDb[pOp->p1].pBt; + + if( pBt ){ + rc = sqlite3BtreeBeginTrans(pBt, pOp->p2); + testcase( rc==SQLITE_BUSY_SNAPSHOT ); + testcase( rc==SQLITE_BUSY_RECOVERY ); + if( (rc&0xff)==SQLITE_BUSY ){ + p->pc = (int)(pOp - aOp); + p->rc = rc; goto vdbe_return; } if( rc!=SQLITE_OK ){ goto abort_due_to_error; } - if( pOp->p2 && p->usesStmtJournal - && (db->autoCommit==0 || db->activeVdbeCnt>1) + if( pOp->p2 && p->usesStmtJournal + && (db->autoCommit==0 || db->nVdbeRead>1) ){ - assert( sqlite3BtreeIsInTrans(u.as.pBt) ); + assert( sqlite3BtreeIsInTrans(pBt) ); if( p->iStatement==0 ){ assert( db->nStatement>=0 && db->nSavepoint>=0 ); - db->nStatement++; + db->nStatement++; p->iStatement = db->nSavepoint + db->nStatement; } - rc = sqlite3BtreeBeginStmt(u.as.pBt, p->iStatement); + + rc = sqlite3VtabSavepoint(db, SAVEPOINT_BEGIN, p->iStatement-1); + if( rc==SQLITE_OK ){ + rc = sqlite3BtreeBeginStmt(pBt, p->iStatement); + } /* Store the current value of the database handles deferred constraint ** counter. If the statement transaction needs to be rolled back, ** the value of this counter needs to be restored too. */ p->nStmtDefCons = db->nDeferredCons; + p->nStmtDefImmCons = db->nDeferredImmCons; } + + /* Gather the schema version number for checking: + ** IMPLEMENTATION-OF: R-32195-19465 The schema version is used by SQLite + ** each time a query is executed to ensure that the internal cache of the + ** schema used when compiling the SQL query matches the schema of the + ** database against which the compiled query is actually executed. + */ + sqlite3BtreeGetMeta(pBt, BTREE_SCHEMA_VERSION, (u32 *)&iMeta); + iGen = db->aDb[pOp->p1].pSchema->iGeneration; + }else{ + iGen = iMeta = 0; + } + assert( pOp->p5==0 || pOp->p4type==P4_INT32 ); + if( pOp->p5 && (iMeta!=pOp->p3 || iGen!=pOp->p4.i) ){ + sqlite3DbFree(db, p->zErrMsg); + p->zErrMsg = sqlite3DbStrDup(db, "database schema has changed"); + /* If the schema-cookie from the database file matches the cookie + ** stored with the in-memory representation of the schema, do + ** not reload the schema from the database file. + ** + ** If virtual-tables are in use, this is not just an optimization. + ** Often, v-tables store their data in other SQLite tables, which + ** are queried from within xNext() and other v-table methods using + ** prepared queries. 
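/* Illustrative sketch (not part of this diff): what the schema-cookie check
** above looks like from the API side.  Statements prepared with
** sqlite3_prepare_v2() keep their SQL text and are re-prepared automatically
** when OP_Transaction raises SQLITE_SCHEMA, so application code normally just
** runs its step loop; with the legacy sqlite3_prepare() interface the
** application would have to finalize and re-prepare the statement itself.
** demoRunQuery() is an editor's example, not code from this checkin. */
#include <sqlite3.h>

static int demoRunQuery(sqlite3 *db, const char *zSql){
  sqlite3_stmt *pStmt = 0;
  int rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0);
  if( rc==SQLITE_OK ){
    while( sqlite3_step(pStmt)==SQLITE_ROW ){
      /* ... read columns with sqlite3_column_*() ... */
    }
    rc = sqlite3_finalize(pStmt);   /* returns the result of the last step */
  }
  return rc;
}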
If such a query is out-of-date, we do not want to + ** discard the database schema, as the user code implementing the + ** v-table would have to be ready for the sqlite3_vtab structure itself + ** to be invalidated whenever sqlite3_step() is called from within + ** a v-table method. + */ + if( db->aDb[pOp->p1].pSchema->schema_cookie!=iMeta ){ + sqlite3ResetOneSchema(db, pOp->p1); + } + p->expired = 1; + rc = SQLITE_SCHEMA; } break; } /* Opcode: ReadCookie P1 P2 P3 * * @@ -56417,59 +77406,57 @@ ** ** There must be a read-lock on the database (either a transaction ** must be started or there must be an open cursor) before ** executing this instruction. */ -case OP_ReadCookie: { /* out2-prerelease */ -#if 0 /* local variables moved into u.at */ +case OP_ReadCookie: { /* out2 */ int iMeta; int iDb; int iCookie; -#endif /* local variables moved into u.at */ - u.at.iDb = pOp->p1; - u.at.iCookie = pOp->p3; + assert( p->bIsReader ); + iDb = pOp->p1; + iCookie = pOp->p3; assert( pOp->p3<SQLITE_N_BTREE_META ); - assert( u.at.iDb>=0 && u.at.iDb<db->nDb ); - assert( db->aDb[u.at.iDb].pBt!=0 ); - assert( (p->btreeMask & (1<<u.at.iDb))!=0 ); + assert( iDb>=0 && iDb<db->nDb ); + assert( db->aDb[iDb].pBt!=0 ); + assert( DbMaskTest(p->btreeMask, iDb) ); - sqlite3BtreeGetMeta(db->aDb[u.at.iDb].pBt, u.at.iCookie, (u32 *)&u.at.iMeta); - pOut->u.i = u.at.iMeta; + sqlite3BtreeGetMeta(db->aDb[iDb].pBt, iCookie, (u32 *)&iMeta); + pOut = out2Prerelease(p, pOp); + pOut->u.i = iMeta; break; } /* Opcode: SetCookie P1 P2 P3 * * ** -** Write the content of register P3 (interpreted as an integer) -** into cookie number P2 of database P1. P2==1 is the schema version. -** P2==2 is the database format. P2==3 is the recommended pager cache +** Write the integer value P3 into cookie number P2 of database P1. +** P2==1 is the schema version. P2==2 is the database format. +** P2==3 is the recommended pager cache ** size, and so forth. P1==0 is the main database file and P1==1 is the ** database file used to store temporary tables. ** ** A transaction must be started before executing this opcode. */ -case OP_SetCookie: { /* in3 */ -#if 0 /* local variables moved into u.au */ +case OP_SetCookie: { Db *pDb; -#endif /* local variables moved into u.au */ assert( pOp->p2<SQLITE_N_BTREE_META ); assert( pOp->p1>=0 && pOp->p1<db->nDb ); - assert( (p->btreeMask & (1<<pOp->p1))!=0 ); - u.au.pDb = &db->aDb[pOp->p1]; - assert( u.au.pDb->pBt!=0 ); - pIn3 = &aMem[pOp->p3]; - sqlite3VdbeMemIntegerify(pIn3); + assert( DbMaskTest(p->btreeMask, pOp->p1) ); + assert( p->readOnly==0 ); + pDb = &db->aDb[pOp->p1]; + assert( pDb->pBt!=0 ); + assert( sqlite3SchemaMutexHeld(db, pOp->p1, 0) ); /* See note about index shifting on OP_ReadCookie */ - rc = sqlite3BtreeUpdateMeta(u.au.pDb->pBt, pOp->p2, (int)pIn3->u.i); + rc = sqlite3BtreeUpdateMeta(pDb->pBt, pOp->p2, pOp->p3); if( pOp->p2==BTREE_SCHEMA_VERSION ){ /* When the schema cookie changes, record the new cookie internally */ - u.au.pDb->pSchema->schema_cookie = (int)pIn3->u.i; + pDb->pSchema->schema_cookie = pOp->p3; db->flags |= SQLITE_InternChanges; }else if( pOp->p2==BTREE_FILE_FORMAT ){ /* Record changes in the file format */ - u.au.pDb->pSchema->file_format = (u8)pIn3->u.i; + pDb->pSchema->file_format = pOp->p3; } if( pOp->p1==1 ){ /* Invalidate all prepared statements whenever the TEMP database ** schema is changed. 
Ticket #1644 */ sqlite3ExpirePreparedStatements(db); @@ -56476,66 +77463,12 @@ p->expired = 0; } break; } -/* Opcode: VerifyCookie P1 P2 * -** -** Check the value of global database parameter number 0 (the -** schema version) and make sure it is equal to P2. -** P1 is the database number which is 0 for the main database file -** and 1 for the file holding temporary tables and some higher number -** for auxiliary databases. -** -** The cookie changes its value whenever the database schema changes. -** This operation is used to detect when that the cookie has changed -** and that the current process needs to reread the schema. -** -** Either a transaction needs to have been started or an OP_Open needs -** to be executed (to establish a read lock) before this opcode is -** invoked. -*/ -case OP_VerifyCookie: { -#if 0 /* local variables moved into u.av */ - int iMeta; - Btree *pBt; -#endif /* local variables moved into u.av */ - assert( pOp->p1>=0 && pOp->p1<db->nDb ); - assert( (p->btreeMask & (1<<pOp->p1))!=0 ); - u.av.pBt = db->aDb[pOp->p1].pBt; - if( u.av.pBt ){ - sqlite3BtreeGetMeta(u.av.pBt, BTREE_SCHEMA_VERSION, (u32 *)&u.av.iMeta); - }else{ - u.av.iMeta = 0; - } - if( u.av.iMeta!=pOp->p2 ){ - sqlite3DbFree(db, p->zErrMsg); - p->zErrMsg = sqlite3DbStrDup(db, "database schema has changed"); - /* If the schema-cookie from the database file matches the cookie - ** stored with the in-memory representation of the schema, do - ** not reload the schema from the database file. - ** - ** If virtual-tables are in use, this is not just an optimization. - ** Often, v-tables store their data in other SQLite tables, which - ** are queried from within xNext() and other v-table methods using - ** prepared queries. If such a query is out-of-date, we do not want to - ** discard the database schema, as the user code implementing the - ** v-table would have to be ready for the sqlite3_vtab structure itself - ** to be invalidated whenever sqlite3_step() is called from within - ** a v-table method. - */ - if( db->aDb[pOp->p1].pSchema->schema_cookie!=u.av.iMeta ){ - sqlite3ResetInternalSchema(db, pOp->p1); - } - - sqlite3ExpirePreparedStatements(db); - rc = SQLITE_SCHEMA; - } - break; -} - /* Opcode: OpenRead P1 P2 P3 P4 P5 +** Synopsis: root=P2 iDb=P3 ** ** Open a read-only cursor for the database table whose root page is ** P2 in a database file. The database file is determined by P3. ** P3==0 means the main database, P3==1 means the database used for ** temporary tables, and P3>1 means used the corresponding attached @@ -56559,13 +77492,28 @@ ** a KeyInfo structure (P4_KEYINFO). If it is a pointer to a KeyInfo ** structure, then said structure defines the content and collating ** sequence of the index being opened. Otherwise, if P4 is an integer ** value, it is set to the number of columns in the table. ** -** See also OpenWrite. +** See also: OpenWrite, ReopenIdx +*/ +/* Opcode: ReopenIdx P1 P2 P3 P4 P5 +** Synopsis: root=P2 iDb=P3 +** +** The ReopenIdx opcode works exactly like ReadOpen except that it first +** checks to see if the cursor on P1 is already open with a root page +** number of P2 and if it is this opcode becomes a no-op. In other words, +** if the cursor is already open, do not reopen it. +** +** The ReopenIdx opcode may only be used with P5==0 and with P4 being +** a P4_KEYINFO object. Furthermore, the P3 value must be the same as +** every other ReopenIdx or OpenRead for the same cursor number. +** +** See the OpenRead opcode documentation for additional information. 
*/ /* Opcode: OpenWrite P1 P2 P3 P4 P5 +** Synopsis: root=P2 iDb=P3 ** ** Open a read/write cursor named P1 on the table or index whose root ** page is P2. Or if P5!=0 use the content of register P2 to find the ** root page. ** @@ -56580,94 +77528,119 @@ ** in read/write mode. For a given table, there can be one or more read-only ** cursors or a single read/write cursor but not both. ** ** See also OpenRead. */ -case OP_OpenRead: -case OP_OpenWrite: { -#if 0 /* local variables moved into u.aw */ +case OP_ReopenIdx: { int nField; KeyInfo *pKeyInfo; int p2; int iDb; int wrFlag; Btree *pX; VdbeCursor *pCur; Db *pDb; -#endif /* local variables moved into u.aw */ + + assert( pOp->p5==0 || pOp->p5==OPFLAG_SEEKEQ ); + assert( pOp->p4type==P4_KEYINFO ); + pCur = p->apCsr[pOp->p1]; + if( pCur && pCur->pgnoRoot==(u32)pOp->p2 ){ + assert( pCur->iDb==pOp->p3 ); /* Guaranteed by the code generator */ + goto open_cursor_set_hints; + } + /* If the cursor is not currently open or is open on a different + ** index, then fall through into OP_OpenRead to force a reopen */ +case OP_OpenRead: +case OP_OpenWrite: + + assert( pOp->opcode==OP_OpenWrite || pOp->p5==0 || pOp->p5==OPFLAG_SEEKEQ ); + assert( p->bIsReader ); + assert( pOp->opcode==OP_OpenRead || pOp->opcode==OP_ReopenIdx + || p->readOnly==0 ); if( p->expired ){ - rc = SQLITE_ABORT; + rc = SQLITE_ABORT_ROLLBACK; break; } - u.aw.nField = 0; - u.aw.pKeyInfo = 0; - u.aw.p2 = pOp->p2; - u.aw.iDb = pOp->p3; - assert( u.aw.iDb>=0 && u.aw.iDb<db->nDb ); - assert( (p->btreeMask & (1<<u.aw.iDb))!=0 ); - u.aw.pDb = &db->aDb[u.aw.iDb]; - u.aw.pX = u.aw.pDb->pBt; - assert( u.aw.pX!=0 ); + nField = 0; + pKeyInfo = 0; + p2 = pOp->p2; + iDb = pOp->p3; + assert( iDb>=0 && iDb<db->nDb ); + assert( DbMaskTest(p->btreeMask, iDb) ); + pDb = &db->aDb[iDb]; + pX = pDb->pBt; + assert( pX!=0 ); if( pOp->opcode==OP_OpenWrite ){ - u.aw.wrFlag = 1; - if( u.aw.pDb->pSchema->file_format < p->minWriteFileFormat ){ - p->minWriteFileFormat = u.aw.pDb->pSchema->file_format; + assert( OPFLAG_FORDELETE==BTREE_FORDELETE ); + wrFlag = BTREE_WRCSR | (pOp->p5 & OPFLAG_FORDELETE); + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); + if( pDb->pSchema->file_format < p->minWriteFileFormat ){ + p->minWriteFileFormat = pDb->pSchema->file_format; } }else{ - u.aw.wrFlag = 0; + wrFlag = 0; } - if( pOp->p5 ){ - assert( u.aw.p2>0 ); - assert( u.aw.p2<=p->nMem ); - pIn2 = &aMem[u.aw.p2]; + if( pOp->p5 & OPFLAG_P2ISREG ){ + assert( p2>0 ); + assert( p2<=(p->nMem-p->nCursor) ); + pIn2 = &aMem[p2]; + assert( memIsValid(pIn2) ); + assert( (pIn2->flags & MEM_Int)!=0 ); sqlite3VdbeMemIntegerify(pIn2); - u.aw.p2 = (int)pIn2->u.i; - /* The u.aw.p2 value always comes from a prior OP_CreateTable opcode and - ** that opcode will always set the u.aw.p2 value to 2 or more or else fail. + p2 = (int)pIn2->u.i; + /* The p2 value always comes from a prior OP_CreateTable opcode and + ** that opcode will always set the p2 value to 2 or more or else fail. ** If there were a failure, the prepared statement would have halted ** before reaching this instruction. 
*/ - if( NEVER(u.aw.p2<2) ) { + if( NEVER(p2<2) ) { rc = SQLITE_CORRUPT_BKPT; goto abort_due_to_error; } } if( pOp->p4type==P4_KEYINFO ){ - u.aw.pKeyInfo = pOp->p4.pKeyInfo; - u.aw.pKeyInfo->enc = ENC(p->db); - u.aw.nField = u.aw.pKeyInfo->nField+1; + pKeyInfo = pOp->p4.pKeyInfo; + assert( pKeyInfo->enc==ENC(db) ); + assert( pKeyInfo->db==db ); + nField = pKeyInfo->nField+pKeyInfo->nXField; }else if( pOp->p4type==P4_INT32 ){ - u.aw.nField = pOp->p4.i; + nField = pOp->p4.i; } assert( pOp->p1>=0 ); - u.aw.pCur = allocateCursor(p, pOp->p1, u.aw.nField, u.aw.iDb, 1); - if( u.aw.pCur==0 ) goto no_mem; - u.aw.pCur->nullRow = 1; - rc = sqlite3BtreeCursor(u.aw.pX, u.aw.p2, u.aw.wrFlag, u.aw.pKeyInfo, u.aw.pCur->pCursor); - u.aw.pCur->pKeyInfo = u.aw.pKeyInfo; - - /* Since it performs no memory allocation or IO, the only values that - ** sqlite3BtreeCursor() may return are SQLITE_EMPTY and SQLITE_OK. - ** SQLITE_EMPTY is only returned when attempting to open the table - ** rooted at page 1 of a zero-byte database. */ - assert( rc==SQLITE_EMPTY || rc==SQLITE_OK ); - if( rc==SQLITE_EMPTY ){ - u.aw.pCur->pCursor = 0; - rc = SQLITE_OK; - } - - /* Set the VdbeCursor.isTable and isIndex variables. Previous versions of + assert( nField>=0 ); + testcase( nField==0 ); /* Table with INTEGER PRIMARY KEY and nothing else */ + pCur = allocateCursor(p, pOp->p1, nField, iDb, CURTYPE_BTREE); + if( pCur==0 ) goto no_mem; + pCur->nullRow = 1; + pCur->isOrdered = 1; + pCur->pgnoRoot = p2; +#ifdef SQLITE_DEBUG + pCur->wrFlag = wrFlag; +#endif + rc = sqlite3BtreeCursor(pX, p2, wrFlag, pKeyInfo, pCur->uc.pCursor); + pCur->pKeyInfo = pKeyInfo; + /* Set the VdbeCursor.isTable variable. Previous versions of ** SQLite used to check if the root-page flags were sane at this point ** and report database corruption if they were not, but this check has - ** since moved into the btree layer. */ - u.aw.pCur->isTable = pOp->p4type!=P4_KEYINFO; - u.aw.pCur->isIndex = !u.aw.pCur->isTable; + ** since moved into the btree layer. */ + pCur->isTable = pOp->p4type!=P4_KEYINFO; + +open_cursor_set_hints: + assert( OPFLAG_BULKCSR==BTREE_BULKLOAD ); + assert( OPFLAG_SEEKEQ==BTREE_SEEK_EQ ); + testcase( pOp->p5 & OPFLAG_BULKCSR ); +#ifdef SQLITE_ENABLE_CURSOR_HINTS + testcase( pOp->p2 & OPFLAG_SEEKEQ ); +#endif + sqlite3BtreeCursorHintFlags(pCur->uc.pCursor, + (pOp->p5 & (OPFLAG_BULKCSR|OPFLAG_SEEKEQ))); break; } -/* Opcode: OpenEphemeral P1 P2 * P4 * +/* Opcode: OpenEphemeral P1 P2 * P4 P5 +** Synopsis: nColumn=P2 ** ** Open a new cursor P1 to a transient table. ** The cursor is always opened read/write even if ** the main database is read-only. The ephemeral ** table is deleted automatically when the cursor is closed. @@ -56675,75 +77648,121 @@ ** P2 is the number of columns in the ephemeral table. ** The cursor points to a BTree table if P4==0 and to a BTree index ** if P4 is not 0. If P4 is not NULL, it points to a KeyInfo structure ** that defines the format of keys in the index. ** -** This opcode was once called OpenTemp. But that created -** confusion because the term "temp table", might refer either -** to a TEMP table at the SQL level, or to a table opened by -** this opcode. Then this opcode was call OpenVirtual. But -** that created confusion with the whole virtual-table idea. +** The P5 parameter can be a mask of the BTREE_* flags defined +** in btree.h. These flags control aspects of the operation of +** the btree. The BTREE_OMIT_JOURNAL and BTREE_SINGLE flags are +** added automatically. 
*/ /* Opcode: OpenAutoindex P1 P2 * P4 * +** Synopsis: nColumn=P2 ** ** This opcode works the same as OP_OpenEphemeral. It has a ** different name to distinguish its use. Tables created using ** by this opcode will be used for automatically created transient ** indices in joins. */ case OP_OpenAutoindex: case OP_OpenEphemeral: { -#if 0 /* local variables moved into u.ax */ VdbeCursor *pCx; -#endif /* local variables moved into u.ax */ - static const int openFlags = + KeyInfo *pKeyInfo; + + static const int vfsFlags = SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_EXCLUSIVE | SQLITE_OPEN_DELETEONCLOSE | SQLITE_OPEN_TRANSIENT_DB; - assert( pOp->p1>=0 ); - u.ax.pCx = allocateCursor(p, pOp->p1, pOp->p2, -1, 1); - if( u.ax.pCx==0 ) goto no_mem; - u.ax.pCx->nullRow = 1; - rc = sqlite3BtreeFactory(db, 0, 1, SQLITE_DEFAULT_TEMP_CACHE_SIZE, openFlags, - &u.ax.pCx->pBt); + assert( pOp->p2>=0 ); + pCx = allocateCursor(p, pOp->p1, pOp->p2, -1, CURTYPE_BTREE); + if( pCx==0 ) goto no_mem; + pCx->nullRow = 1; + pCx->isEphemeral = 1; + rc = sqlite3BtreeOpen(db->pVfs, 0, db, &pCx->pBt, + BTREE_OMIT_JOURNAL | BTREE_SINGLE | pOp->p5, vfsFlags); if( rc==SQLITE_OK ){ - rc = sqlite3BtreeBeginTrans(u.ax.pCx->pBt, 1); + rc = sqlite3BtreeBeginTrans(pCx->pBt, 1); } if( rc==SQLITE_OK ){ /* If a transient index is required, create it by calling - ** sqlite3BtreeCreateTable() with the BTREE_ZERODATA flag before + ** sqlite3BtreeCreateTable() with the BTREE_BLOBKEY flag before ** opening it. If a transient table is required, just use the - ** automatically created table with root-page 1 (an INTKEY table). + ** automatically created table with root-page 1 (an BLOB_INTKEY table). */ - if( pOp->p4.pKeyInfo ){ + if( (pKeyInfo = pOp->p4.pKeyInfo)!=0 ){ int pgno; assert( pOp->p4type==P4_KEYINFO ); - rc = sqlite3BtreeCreateTable(u.ax.pCx->pBt, &pgno, BTREE_ZERODATA); + rc = sqlite3BtreeCreateTable(pCx->pBt, &pgno, BTREE_BLOBKEY | pOp->p5); if( rc==SQLITE_OK ){ assert( pgno==MASTER_ROOT+1 ); - rc = sqlite3BtreeCursor(u.ax.pCx->pBt, pgno, 1, - (KeyInfo*)pOp->p4.z, u.ax.pCx->pCursor); - u.ax.pCx->pKeyInfo = pOp->p4.pKeyInfo; - u.ax.pCx->pKeyInfo->enc = ENC(p->db); - } - u.ax.pCx->isTable = 0; - }else{ - rc = sqlite3BtreeCursor(u.ax.pCx->pBt, MASTER_ROOT, 1, 0, u.ax.pCx->pCursor); - u.ax.pCx->isTable = 1; + assert( pKeyInfo->db==db ); + assert( pKeyInfo->enc==ENC(db) ); + pCx->pKeyInfo = pKeyInfo; + rc = sqlite3BtreeCursor(pCx->pBt, pgno, BTREE_WRCSR, + pKeyInfo, pCx->uc.pCursor); + } + pCx->isTable = 0; + }else{ + rc = sqlite3BtreeCursor(pCx->pBt, MASTER_ROOT, BTREE_WRCSR, + 0, pCx->uc.pCursor); + pCx->isTable = 1; } } - u.ax.pCx->isIndex = !u.ax.pCx->isTable; + pCx->isOrdered = (pOp->p5!=BTREE_UNORDERED); + break; +} + +/* Opcode: SorterOpen P1 P2 P3 P4 * +** +** This opcode works like OP_OpenEphemeral except that it opens +** a transient index that is specifically designed to sort large +** tables using an external merge-sort algorithm. +** +** If argument P3 is non-zero, then it indicates that the sorter may +** assume that a stable sort considering the first P3 fields of each +** key is sufficient to produce the required results. 
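+**
+** As a hedged illustration: for a statement along the lines of
+**
+**     SELECT a, b FROM t1 ORDER BY a;
+**
+** the sorter records may carry b (and other payload) after the single key
+** column a.  Passing P3==1 then tells the sorter that comparing only the
+** first field is enough, provided records that compare equal keep their
+** relative order (a stable sort).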
+*/ +case OP_SorterOpen: { + VdbeCursor *pCx; + + assert( pOp->p1>=0 ); + assert( pOp->p2>=0 ); + pCx = allocateCursor(p, pOp->p1, pOp->p2, -1, CURTYPE_SORTER); + if( pCx==0 ) goto no_mem; + pCx->pKeyInfo = pOp->p4.pKeyInfo; + assert( pCx->pKeyInfo->db==db ); + assert( pCx->pKeyInfo->enc==ENC(db) ); + rc = sqlite3VdbeSorterInit(db, pOp->p3, pCx); + break; +} + +/* Opcode: SequenceTest P1 P2 * * * +** Synopsis: if( cursor[P1].ctr++ ) pc = P2 +** +** P1 is a sorter cursor. If the sequence counter is currently zero, jump +** to P2. Regardless of whether or not the jump is taken, increment the +** the sequence value. +*/ +case OP_SequenceTest: { + VdbeCursor *pC; + assert( pOp->p1>=0 && pOp->p1<p->nCursor ); + pC = p->apCsr[pOp->p1]; + assert( isSorter(pC) ); + if( (pC->seqCount++)==0 ){ + goto jump_to_p2; + } break; } /* Opcode: OpenPseudo P1 P2 P3 * * +** Synopsis: P3 columns in r[P2] ** ** Open a new cursor that points to a fake table that contains a single -** row of data. The content of that one row in the content of memory +** row of data. The content of that one row is the content of memory ** register P2. In other words, cursor P1 becomes an alias for the ** MEM_Blob content contained in register P2. ** ** A pseudo-table created by this opcode is used to hold a single ** row output from the sorter so that the row can be decomposed into @@ -56752,21 +77771,20 @@ ** ** P3 is the number of fields in the records that will be stored by ** the pseudo-table. */ case OP_OpenPseudo: { -#if 0 /* local variables moved into u.ay */ VdbeCursor *pCx; -#endif /* local variables moved into u.ay */ assert( pOp->p1>=0 ); - u.ay.pCx = allocateCursor(p, pOp->p1, pOp->p3, -1, 0); - if( u.ay.pCx==0 ) goto no_mem; - u.ay.pCx->nullRow = 1; - u.ay.pCx->pseudoTableReg = pOp->p2; - u.ay.pCx->isTable = 1; - u.ay.pCx->isIndex = 0; + assert( pOp->p3>=0 ); + pCx = allocateCursor(p, pOp->p1, pOp->p3, -1, CURTYPE_PSEUDO); + if( pCx==0 ) goto no_mem; + pCx->nullRow = 1; + pCx->uc.pseudoTableReg = pOp->p2; + pCx->isTable = 1; + assert( pOp->p5==0 ); break; } /* Opcode: Close P1 * * * * ** @@ -56778,11 +77796,32 @@ sqlite3VdbeFreeCursor(p, p->apCsr[pOp->p1]); p->apCsr[pOp->p1] = 0; break; } -/* Opcode: SeekGe P1 P2 P3 P4 * +#ifdef SQLITE_ENABLE_COLUMN_USED_MASK +/* Opcode: ColumnsUsed P1 * * P4 * +** +** This opcode (which only exists if SQLite was compiled with +** SQLITE_ENABLE_COLUMN_USED_MASK) identifies which columns of the +** table or index for cursor P1 are used. P4 is a 64-bit integer +** (P4_INT64) in which the first 63 bits are one for each of the +** first 63 columns of the table or index that are actually used +** by the cursor. The high-order bit is set if any column after +** the 64th is used. +*/ +case OP_ColumnsUsed: { + VdbeCursor *pC; + pC = p->apCsr[pOp->p1]; + assert( pC->eCurType==CURTYPE_BTREE ); + pC->maskUsed = *(u64*)pOp->p4.pI64; + break; +} +#endif + +/* Opcode: SeekGE P1 P2 P3 P4 * +** Synopsis: key=r[P3@P4] ** ** If cursor P1 refers to an SQL table (B-Tree that uses integer keys), ** use the value in register P3 as the key. If cursor P1 refers ** to an SQL index, then P3 is the first in an array of P4 registers ** that are used as an unpacked index key. @@ -56789,13 +77828,25 @@ ** ** Reposition cursor P1 so that it points to the smallest entry that ** is greater than or equal to the key value. If there are no records ** greater than or equal to the key and P2 is not zero, then jump to P2. 
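**
** As a rough, hypothetical illustration: given an index i1 on t1(a), a
** range query such as
**
**     SELECT * FROM t1 WHERE a >= 10;
**
** can be implemented by a SeekGE on i1 against the value 10 followed by a
** forward Next loop over the remaining entries; the jump to P2 is taken
** only when no entry with a >= 10 exists.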
** -** See also: Found, NotFound, Distinct, SeekLt, SeekGt, SeekLe +** If the cursor P1 was opened using the OPFLAG_SEEKEQ flag, then this +** opcode will always land on a record that equally equals the key, or +** else jump immediately to P2. When the cursor is OPFLAG_SEEKEQ, this +** opcode must be followed by an IdxLE opcode with the same arguments. +** The IdxLE opcode will be skipped if this opcode succeeds, but the +** IdxLE opcode will be used on subsequent loop iterations. +** +** This opcode leaves the cursor configured to move in forward order, +** from the beginning toward the end. In other words, the cursor is +** configured to use Next, not Prev. +** +** See also: Found, NotFound, SeekLt, SeekGt, SeekLe */ -/* Opcode: SeekGt P1 P2 P3 P4 * +/* Opcode: SeekGT P1 P2 P3 P4 * +** Synopsis: key=r[P3@P4] ** ** If cursor P1 refers to an SQL table (B-Tree that uses integer keys), ** use the value in register P3 as a key. If cursor P1 refers ** to an SQL index, then P3 is the first in an array of P4 registers ** that are used as an unpacked index key. @@ -56802,13 +77853,18 @@ ** ** Reposition cursor P1 so that it points to the smallest entry that ** is greater than the key value. If there are no records greater than ** the key and P2 is not zero, then jump to P2. ** -** See also: Found, NotFound, Distinct, SeekLt, SeekGe, SeekLe +** This opcode leaves the cursor configured to move in forward order, +** from the beginning toward the end. In other words, the cursor is +** configured to use Next, not Prev. +** +** See also: Found, NotFound, SeekLt, SeekGe, SeekLe */ -/* Opcode: SeekLt P1 P2 P3 P4 * +/* Opcode: SeekLT P1 P2 P3 P4 * +** Synopsis: key=r[P3@P4] ** ** If cursor P1 refers to an SQL table (B-Tree that uses integer keys), ** use the value in register P3 as a key. If cursor P1 refers ** to an SQL index, then P3 is the first in an array of P4 registers ** that are used as an unpacked index key. @@ -56815,13 +77871,18 @@ ** ** Reposition cursor P1 so that it points to the largest entry that ** is less than the key value. If there are no records less than ** the key and P2 is not zero, then jump to P2. ** -** See also: Found, NotFound, Distinct, SeekGt, SeekGe, SeekLe +** This opcode leaves the cursor configured to move in reverse order, +** from the end toward the beginning. In other words, the cursor is +** configured to use Prev, not Next. +** +** See also: Found, NotFound, SeekGt, SeekGe, SeekLe */ -/* Opcode: SeekLe P1 P2 P3 P4 * +/* Opcode: SeekLE P1 P2 P3 P4 * +** Synopsis: key=r[P3@P4] ** ** If cursor P1 refers to an SQL table (B-Tree that uses integer keys), ** use the value in register P3 as a key. If cursor P1 refers ** to an SQL index, then P3 is the first in an array of P4 registers ** that are used as an unpacked index key. @@ -56828,202 +77889,210 @@ ** ** Reposition cursor P1 so that it points to the largest entry that ** is less than or equal to the key value. If there are no records ** less than or equal to the key and P2 is not zero, then jump to P2. ** -** See also: Found, NotFound, Distinct, SeekGt, SeekGe, SeekLt +** This opcode leaves the cursor configured to move in reverse order, +** from the end toward the beginning. In other words, the cursor is +** configured to use Prev, not Next. +** +** If the cursor P1 was opened using the OPFLAG_SEEKEQ flag, then this +** opcode will always land on a record that equally equals the key, or +** else jump immediately to P2. 
When the cursor is OPFLAG_SEEKEQ, this +** opcode must be followed by an IdxGE opcode with the same arguments. +** The IdxGE opcode will be skipped if this opcode succeeds, but the +** IdxGE opcode will be used on subsequent loop iterations. +** +** See also: Found, NotFound, SeekGt, SeekGe, SeekLt */ -case OP_SeekLt: /* jump, in3 */ -case OP_SeekLe: /* jump, in3 */ -case OP_SeekGe: /* jump, in3 */ -case OP_SeekGt: { /* jump, in3 */ -#if 0 /* local variables moved into u.az */ - int res; - int oc; - VdbeCursor *pC; - UnpackedRecord r; - int nField; - i64 iKey; /* The rowid we are to seek to */ -#endif /* local variables moved into u.az */ +case OP_SeekLT: /* jump, in3 */ +case OP_SeekLE: /* jump, in3 */ +case OP_SeekGE: /* jump, in3 */ +case OP_SeekGT: { /* jump, in3 */ + int res; /* Comparison result */ + int oc; /* Opcode */ + VdbeCursor *pC; /* The cursor to seek */ + UnpackedRecord r; /* The key to seek for */ + int nField; /* Number of columns or fields in the key */ + i64 iKey; /* The rowid we are to seek to */ + int eqOnly; /* Only interested in == results */ assert( pOp->p1>=0 && pOp->p1<p->nCursor ); assert( pOp->p2!=0 ); - u.az.pC = p->apCsr[pOp->p1]; - assert( u.az.pC!=0 ); - assert( u.az.pC->pseudoTableReg==0 ); - assert( OP_SeekLe == OP_SeekLt+1 ); - assert( OP_SeekGe == OP_SeekLt+2 ); - assert( OP_SeekGt == OP_SeekLt+3 ); - if( u.az.pC->pCursor!=0 ){ - u.az.oc = pOp->opcode; - u.az.pC->nullRow = 0; - if( u.az.pC->isTable ){ - /* The input value in P3 might be of any type: integer, real, string, - ** blob, or NULL. But it needs to be an integer before we can do - ** the seek, so covert it. */ - pIn3 = &aMem[pOp->p3]; - applyNumericAffinity(pIn3); - u.az.iKey = sqlite3VdbeIntValue(pIn3); - u.az.pC->rowidIsValid = 0; - - /* If the P3 value could not be converted into an integer without - ** loss of information, then special processing is required... */ - if( (pIn3->flags & MEM_Int)==0 ){ - if( (pIn3->flags & MEM_Real)==0 ){ - /* If the P3 value cannot be converted into any kind of a number, - ** then the seek is not possible, so jump to P2 */ - pc = pOp->p2 - 1; - break; - } - /* If we reach this point, then the P3 value must be a floating - ** point number. */ - assert( (pIn3->flags & MEM_Real)!=0 ); - - if( u.az.iKey==SMALLEST_INT64 && (pIn3->r<(double)u.az.iKey || pIn3->r>0) ){ - /* The P3 value is too large in magnitude to be expressed as an - ** integer. 
*/ - u.az.res = 1; - if( pIn3->r<0 ){ - if( u.az.oc>=OP_SeekGe ){ assert( u.az.oc==OP_SeekGe || u.az.oc==OP_SeekGt ); - rc = sqlite3BtreeFirst(u.az.pC->pCursor, &u.az.res); - if( rc!=SQLITE_OK ) goto abort_due_to_error; - } - }else{ - if( u.az.oc<=OP_SeekLe ){ assert( u.az.oc==OP_SeekLt || u.az.oc==OP_SeekLe ); - rc = sqlite3BtreeLast(u.az.pC->pCursor, &u.az.res); - if( rc!=SQLITE_OK ) goto abort_due_to_error; - } - } - if( u.az.res ){ - pc = pOp->p2 - 1; - } - break; - }else if( u.az.oc==OP_SeekLt || u.az.oc==OP_SeekGe ){ - /* Use the ceiling() function to convert real->int */ - if( pIn3->r > (double)u.az.iKey ) u.az.iKey++; - }else{ - /* Use the floor() function to convert real->int */ - assert( u.az.oc==OP_SeekLe || u.az.oc==OP_SeekGt ); - if( pIn3->r < (double)u.az.iKey ) u.az.iKey--; - } - } - rc = sqlite3BtreeMovetoUnpacked(u.az.pC->pCursor, 0, (u64)u.az.iKey, 0, &u.az.res); - if( rc!=SQLITE_OK ){ - goto abort_due_to_error; - } - if( u.az.res==0 ){ - u.az.pC->rowidIsValid = 1; - u.az.pC->lastRowid = u.az.iKey; - } - }else{ - u.az.nField = pOp->p4.i; - assert( pOp->p4type==P4_INT32 ); - assert( u.az.nField>0 ); - u.az.r.pKeyInfo = u.az.pC->pKeyInfo; - u.az.r.nField = (u16)u.az.nField; - - /* The next line of code computes as follows, only faster: - ** if( u.az.oc==OP_SeekGt || u.az.oc==OP_SeekLe ){ - ** u.az.r.flags = UNPACKED_INCRKEY; - ** }else{ - ** u.az.r.flags = 0; - ** } - */ - u.az.r.flags = (u16)(UNPACKED_INCRKEY * (1 & (u.az.oc - OP_SeekLt))); - assert( u.az.oc!=OP_SeekGt || u.az.r.flags==UNPACKED_INCRKEY ); - assert( u.az.oc!=OP_SeekLe || u.az.r.flags==UNPACKED_INCRKEY ); - assert( u.az.oc!=OP_SeekGe || u.az.r.flags==0 ); - assert( u.az.oc!=OP_SeekLt || u.az.r.flags==0 ); - - u.az.r.aMem = &aMem[pOp->p3]; - ExpandBlob(u.az.r.aMem); - rc = sqlite3BtreeMovetoUnpacked(u.az.pC->pCursor, &u.az.r, 0, 0, &u.az.res); - if( rc!=SQLITE_OK ){ - goto abort_due_to_error; - } - u.az.pC->rowidIsValid = 0; - } - u.az.pC->deferredMoveto = 0; - u.az.pC->cacheStatus = CACHE_STALE; + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); + assert( pC->eCurType==CURTYPE_BTREE ); + assert( OP_SeekLE == OP_SeekLT+1 ); + assert( OP_SeekGE == OP_SeekLT+2 ); + assert( OP_SeekGT == OP_SeekLT+3 ); + assert( pC->isOrdered ); + assert( pC->uc.pCursor!=0 ); + oc = pOp->opcode; + eqOnly = 0; + pC->nullRow = 0; +#ifdef SQLITE_DEBUG + pC->seekOp = pOp->opcode; +#endif + + if( pC->isTable ){ + /* The BTREE_SEEK_EQ flag is only set on index cursors */ + assert( sqlite3BtreeCursorHasHint(pC->uc.pCursor, BTREE_SEEK_EQ)==0 ); + + /* The input value in P3 might be of any type: integer, real, string, + ** blob, or NULL. But it needs to be an integer before we can do + ** the seek, so convert it. */ + pIn3 = &aMem[pOp->p3]; + if( (pIn3->flags & (MEM_Int|MEM_Real|MEM_Str))==MEM_Str ){ + applyNumericAffinity(pIn3, 0); + } + iKey = sqlite3VdbeIntValue(pIn3); + + /* If the P3 value could not be converted into an integer without + ** loss of information, then special processing is required... */ + if( (pIn3->flags & MEM_Int)==0 ){ + if( (pIn3->flags & MEM_Real)==0 ){ + /* If the P3 value cannot be converted into any kind of a number, + ** then the seek is not possible, so jump to P2 */ + VdbeBranchTaken(1,2); goto jump_to_p2; + break; + } + + /* If the approximation iKey is larger than the actual real search + ** term, substitute >= for > and < for <=. e.g. 
if the search term + ** is 4.9 and the integer approximation 5: + ** + ** (x > 4.9) -> (x >= 5) + ** (x <= 4.9) -> (x < 5) + */ + if( pIn3->u.r<(double)iKey ){ + assert( OP_SeekGE==(OP_SeekGT-1) ); + assert( OP_SeekLT==(OP_SeekLE-1) ); + assert( (OP_SeekLE & 0x0001)==(OP_SeekGT & 0x0001) ); + if( (oc & 0x0001)==(OP_SeekGT & 0x0001) ) oc--; + } + + /* If the approximation iKey is smaller than the actual real search + ** term, substitute <= for < and > for >=. */ + else if( pIn3->u.r>(double)iKey ){ + assert( OP_SeekLE==(OP_SeekLT+1) ); + assert( OP_SeekGT==(OP_SeekGE+1) ); + assert( (OP_SeekLT & 0x0001)==(OP_SeekGE & 0x0001) ); + if( (oc & 0x0001)==(OP_SeekLT & 0x0001) ) oc++; + } + } + rc = sqlite3BtreeMovetoUnpacked(pC->uc.pCursor, 0, (u64)iKey, 0, &res); + pC->movetoTarget = iKey; /* Used by OP_Delete */ + if( rc!=SQLITE_OK ){ + goto abort_due_to_error; + } + }else{ + /* For a cursor with the BTREE_SEEK_EQ hint, only the OP_SeekGE and + ** OP_SeekLE opcodes are allowed, and these must be immediately followed + ** by an OP_IdxGT or OP_IdxLT opcode, respectively, with the same key. + */ + if( sqlite3BtreeCursorHasHint(pC->uc.pCursor, BTREE_SEEK_EQ) ){ + eqOnly = 1; + assert( pOp->opcode==OP_SeekGE || pOp->opcode==OP_SeekLE ); + assert( pOp[1].opcode==OP_IdxLT || pOp[1].opcode==OP_IdxGT ); + assert( pOp[1].p1==pOp[0].p1 ); + assert( pOp[1].p2==pOp[0].p2 ); + assert( pOp[1].p3==pOp[0].p3 ); + assert( pOp[1].p4.i==pOp[0].p4.i ); + } + + nField = pOp->p4.i; + assert( pOp->p4type==P4_INT32 ); + assert( nField>0 ); + r.pKeyInfo = pC->pKeyInfo; + r.nField = (u16)nField; + + /* The next line of code computes as follows, only faster: + ** if( oc==OP_SeekGT || oc==OP_SeekLE ){ + ** r.default_rc = -1; + ** }else{ + ** r.default_rc = +1; + ** } + */ + r.default_rc = ((1 & (oc - OP_SeekLT)) ? -1 : +1); + assert( oc!=OP_SeekGT || r.default_rc==-1 ); + assert( oc!=OP_SeekLE || r.default_rc==-1 ); + assert( oc!=OP_SeekGE || r.default_rc==+1 ); + assert( oc!=OP_SeekLT || r.default_rc==+1 ); + + r.aMem = &aMem[pOp->p3]; +#ifdef SQLITE_DEBUG + { int i; for(i=0; i<r.nField; i++) assert( memIsValid(&r.aMem[i]) ); } +#endif + ExpandBlob(r.aMem); + r.eqSeen = 0; + rc = sqlite3BtreeMovetoUnpacked(pC->uc.pCursor, &r, 0, 0, &res); + if( rc!=SQLITE_OK ){ + goto abort_due_to_error; + } + if( eqOnly && r.eqSeen==0 ){ + assert( res!=0 ); + goto seek_not_found; + } + } + pC->deferredMoveto = 0; + pC->cacheStatus = CACHE_STALE; #ifdef SQLITE_TEST - sqlite3_search_count++; -#endif - if( u.az.oc>=OP_SeekGe ){ assert( u.az.oc==OP_SeekGe || u.az.oc==OP_SeekGt ); - if( u.az.res<0 || (u.az.res==0 && u.az.oc==OP_SeekGt) ){ - rc = sqlite3BtreeNext(u.az.pC->pCursor, &u.az.res); - if( rc!=SQLITE_OK ) goto abort_due_to_error; - u.az.pC->rowidIsValid = 0; - }else{ - u.az.res = 0; - } - }else{ - assert( u.az.oc==OP_SeekLt || u.az.oc==OP_SeekLe ); - if( u.az.res>0 || (u.az.res==0 && u.az.oc==OP_SeekLt) ){ - rc = sqlite3BtreePrevious(u.az.pC->pCursor, &u.az.res); - if( rc!=SQLITE_OK ) goto abort_due_to_error; - u.az.pC->rowidIsValid = 0; - }else{ - /* u.az.res might be negative because the table is empty. Check to - ** see if this is the case. - */ - u.az.res = sqlite3BtreeEof(u.az.pC->pCursor); - } - } - assert( pOp->p2>0 ); - if( u.az.res ){ - pc = pOp->p2 - 1; - } - }else{ - /* This happens when attempting to open the sqlite3_master table - ** for read access returns SQLITE_EMPTY. In this case always - ** take the jump (since there are no records in the table). 
- */ - pc = pOp->p2 - 1; - } - break; -} - -/* Opcode: Seek P1 P2 * * * -** -** P1 is an open table cursor and P2 is a rowid integer. Arrange -** for P1 to move so that it points to the rowid given by P2. -** -** This is actually a deferred seek. Nothing actually happens until -** the cursor is used to read a record. That way, if no reads -** occur, no unnecessary I/O happens. -*/ -case OP_Seek: { /* in2 */ -#if 0 /* local variables moved into u.ba */ - VdbeCursor *pC; -#endif /* local variables moved into u.ba */ - - assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - u.ba.pC = p->apCsr[pOp->p1]; - assert( u.ba.pC!=0 ); - if( ALWAYS(u.ba.pC->pCursor!=0) ){ - assert( u.ba.pC->isTable ); - u.ba.pC->nullRow = 0; - pIn2 = &aMem[pOp->p2]; - u.ba.pC->movetoTarget = sqlite3VdbeIntValue(pIn2); - u.ba.pC->rowidIsValid = 0; - u.ba.pC->deferredMoveto = 1; + sqlite3_search_count++; +#endif + if( oc>=OP_SeekGE ){ assert( oc==OP_SeekGE || oc==OP_SeekGT ); + if( res<0 || (res==0 && oc==OP_SeekGT) ){ + res = 0; + rc = sqlite3BtreeNext(pC->uc.pCursor, &res); + if( rc!=SQLITE_OK ) goto abort_due_to_error; + }else{ + res = 0; + } + }else{ + assert( oc==OP_SeekLT || oc==OP_SeekLE ); + if( res>0 || (res==0 && oc==OP_SeekLT) ){ + res = 0; + rc = sqlite3BtreePrevious(pC->uc.pCursor, &res); + if( rc!=SQLITE_OK ) goto abort_due_to_error; + }else{ + /* res might be negative because the table is empty. Check to + ** see if this is the case. + */ + res = sqlite3BtreeEof(pC->uc.pCursor); + } + } +seek_not_found: + assert( pOp->p2>0 ); + VdbeBranchTaken(res!=0,2); + if( res ){ + goto jump_to_p2; + }else if( eqOnly ){ + assert( pOp[1].opcode==OP_IdxLT || pOp[1].opcode==OP_IdxGT ); + pOp++; /* Skip the OP_IdxLt or OP_IdxGT that follows */ } break; } /* Opcode: Found P1 P2 P3 P4 * +** Synopsis: key=r[P3@P4] ** ** If P4==0 then register P3 holds a blob constructed by MakeRecord. If ** P4>0 then register P3 is the first of P4 registers that form an unpacked ** record. ** ** Cursor P1 is on an index btree. If the record identified by P3 and P4 ** is a prefix of any entry in P1 then a jump is made to P2 and ** P1 is left pointing at the matching entry. +** +** This operation leaves the cursor in a state where it can be +** advanced in the forward direction. The Next instruction will work, +** but not the Prev instruction. +** +** See also: NotFound, NoConflict, NotExists. SeekGe */ /* Opcode: NotFound P1 P2 P3 P4 * +** Synopsis: key=r[P3@P4] ** ** If P4==0 then register P3 holds a blob constructed by MakeRecord. If ** P4>0 then register P3 is the first of P4 registers that form an unpacked ** record. ** @@ -57031,259 +78100,230 @@ ** is not the prefix of any entry in P1 then a jump is made to P2. If P1 ** does contain an entry whose prefix matches the P3/P4 record then control ** falls through to the next instruction and P1 is left pointing at the ** matching entry. 
** -** See also: Found, NotExists, IsUnique -*/ -case OP_NotFound: /* jump, in3 */ -case OP_Found: { /* jump, in3 */ -#if 0 /* local variables moved into u.bb */ - int alreadyExists; - VdbeCursor *pC; - int res; - UnpackedRecord *pIdxKey; - UnpackedRecord r; - char aTempRec[ROUND8(sizeof(UnpackedRecord)) + sizeof(Mem)*3 + 7]; -#endif /* local variables moved into u.bb */ - -#ifdef SQLITE_TEST - sqlite3_found_count++; -#endif - - u.bb.alreadyExists = 0; - assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - assert( pOp->p4type==P4_INT32 ); - u.bb.pC = p->apCsr[pOp->p1]; - assert( u.bb.pC!=0 ); - pIn3 = &aMem[pOp->p3]; - if( ALWAYS(u.bb.pC->pCursor!=0) ){ - - assert( u.bb.pC->isTable==0 ); - if( pOp->p4.i>0 ){ - u.bb.r.pKeyInfo = u.bb.pC->pKeyInfo; - u.bb.r.nField = (u16)pOp->p4.i; - u.bb.r.aMem = pIn3; - u.bb.r.flags = UNPACKED_PREFIX_MATCH; - u.bb.pIdxKey = &u.bb.r; - }else{ - assert( pIn3->flags & MEM_Blob ); - ExpandBlob(pIn3); - u.bb.pIdxKey = sqlite3VdbeRecordUnpack(u.bb.pC->pKeyInfo, pIn3->n, pIn3->z, - u.bb.aTempRec, sizeof(u.bb.aTempRec)); - if( u.bb.pIdxKey==0 ){ - goto no_mem; - } - u.bb.pIdxKey->flags |= UNPACKED_PREFIX_MATCH; - } - rc = sqlite3BtreeMovetoUnpacked(u.bb.pC->pCursor, u.bb.pIdxKey, 0, 0, &u.bb.res); - if( pOp->p4.i==0 ){ - sqlite3VdbeDeleteUnpackedRecord(u.bb.pIdxKey); - } - if( rc!=SQLITE_OK ){ - break; - } - u.bb.alreadyExists = (u.bb.res==0); - u.bb.pC->deferredMoveto = 0; - u.bb.pC->cacheStatus = CACHE_STALE; - } - if( pOp->opcode==OP_Found ){ - if( u.bb.alreadyExists ) pc = pOp->p2 - 1; - }else{ - if( !u.bb.alreadyExists ) pc = pOp->p2 - 1; - } - break; -} - -/* Opcode: IsUnique P1 P2 P3 P4 * -** -** Cursor P1 is open on an index b-tree - that is to say, a btree which -** no data and where the key are records generated by OP_MakeRecord with -** the list field being the integer ROWID of the entry that the index -** entry refers to. -** -** The P3 register contains an integer record number. Call this record -** number R. Register P4 is the first in a set of N contiguous registers -** that make up an unpacked index key that can be used with cursor P1. -** The value of N can be inferred from the cursor. N includes the rowid -** value appended to the end of the index record. This rowid value may -** or may not be the same as R. -** -** If any of the N registers beginning with register P4 contains a NULL -** value, jump immediately to P2. -** -** Otherwise, this instruction checks if cursor P1 contains an entry -** where the first (N-1) fields match but the rowid value at the end -** of the index entry is not R. If there is no such entry, control jumps -** to instruction P2. Otherwise, the rowid of the conflicting index -** entry is copied to register P3 and control falls through to the next -** instruction. -** -** See also: NotFound, NotExists, Found -*/ -case OP_IsUnique: { /* jump, in3 */ -#if 0 /* local variables moved into u.bc */ - u16 ii; - VdbeCursor *pCx; - BtCursor *pCrsr; - u16 nField; - Mem *aMx; - UnpackedRecord r; /* B-Tree index search key */ - i64 R; /* Rowid stored in register P3 */ -#endif /* local variables moved into u.bc */ - - pIn3 = &aMem[pOp->p3]; - u.bc.aMx = &aMem[pOp->p4.i]; - /* Assert that the values of parameters P1 and P4 are in range. */ - assert( pOp->p4type==P4_INT32 ); - assert( pOp->p4.i>0 && pOp->p4.i<=p->nMem ); - assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - - /* Find the index cursor. 
*/ - u.bc.pCx = p->apCsr[pOp->p1]; - assert( u.bc.pCx->deferredMoveto==0 ); - u.bc.pCx->seekResult = 0; - u.bc.pCx->cacheStatus = CACHE_STALE; - u.bc.pCrsr = u.bc.pCx->pCursor; - - /* If any of the values are NULL, take the jump. */ - u.bc.nField = u.bc.pCx->pKeyInfo->nField; - for(u.bc.ii=0; u.bc.ii<u.bc.nField; u.bc.ii++){ - if( u.bc.aMx[u.bc.ii].flags & MEM_Null ){ - pc = pOp->p2 - 1; - u.bc.pCrsr = 0; - break; - } - } - assert( (u.bc.aMx[u.bc.nField].flags & MEM_Null)==0 ); - - if( u.bc.pCrsr!=0 ){ - /* Populate the index search key. */ - u.bc.r.pKeyInfo = u.bc.pCx->pKeyInfo; - u.bc.r.nField = u.bc.nField + 1; - u.bc.r.flags = UNPACKED_PREFIX_SEARCH; - u.bc.r.aMem = u.bc.aMx; - - /* Extract the value of u.bc.R from register P3. */ - sqlite3VdbeMemIntegerify(pIn3); - u.bc.R = pIn3->u.i; - - /* Search the B-Tree index. If no conflicting record is found, jump - ** to P2. Otherwise, copy the rowid of the conflicting record to - ** register P3 and fall through to the next instruction. */ - rc = sqlite3BtreeMovetoUnpacked(u.bc.pCrsr, &u.bc.r, 0, 0, &u.bc.pCx->seekResult); - if( (u.bc.r.flags & UNPACKED_PREFIX_SEARCH) || u.bc.r.rowid==u.bc.R ){ - pc = pOp->p2 - 1; - }else{ - pIn3->u.i = u.bc.r.rowid; - } +** This operation leaves the cursor in a state where it cannot be +** advanced in either direction. In other words, the Next and Prev +** opcodes do not work after this operation. +** +** See also: Found, NotExists, NoConflict +*/ +/* Opcode: NoConflict P1 P2 P3 P4 * +** Synopsis: key=r[P3@P4] +** +** If P4==0 then register P3 holds a blob constructed by MakeRecord. If +** P4>0 then register P3 is the first of P4 registers that form an unpacked +** record. +** +** Cursor P1 is on an index btree. If the record identified by P3 and P4 +** contains any NULL value, jump immediately to P2. If all terms of the +** record are not-NULL then a check is done to determine if any row in the +** P1 index btree has a matching key prefix. If there are no matches, jump +** immediately to P2. If there is a match, fall through and leave the P1 +** cursor pointing to the matching row. +** +** This opcode is similar to OP_NotFound with the exceptions that the +** branch is always taken if any part of the search key input is NULL. +** +** This operation leaves the cursor in a state where it cannot be +** advanced in either direction. In other words, the Next and Prev +** opcodes do not work after this operation. 
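+**
+** A hedged example of why the NULL short-circuit is safe: for a table
+** declared roughly as
+**
+**     CREATE TABLE t1(a, b, UNIQUE(a,b));
+**
+** a new row with a NULL in either a or b can never violate the UNIQUE
+** constraint, because a NULL key is distinct from every other key
+** (including other NULLs), so the uniqueness probe may jump straight to P2
+** without searching the index.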
+** +** See also: NotFound, Found, NotExists +*/ +case OP_NoConflict: /* jump, in3 */ +case OP_NotFound: /* jump, in3 */ +case OP_Found: { /* jump, in3 */ + int alreadyExists; + int takeJump; + int ii; + VdbeCursor *pC; + int res; + char *pFree; + UnpackedRecord *pIdxKey; + UnpackedRecord r; + char aTempRec[ROUND8(sizeof(UnpackedRecord)) + sizeof(Mem)*4 + 7]; + +#ifdef SQLITE_TEST + if( pOp->opcode!=OP_NoConflict ) sqlite3_found_count++; +#endif + + assert( pOp->p1>=0 && pOp->p1<p->nCursor ); + assert( pOp->p4type==P4_INT32 ); + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); +#ifdef SQLITE_DEBUG + pC->seekOp = pOp->opcode; +#endif + pIn3 = &aMem[pOp->p3]; + assert( pC->eCurType==CURTYPE_BTREE ); + assert( pC->uc.pCursor!=0 ); + assert( pC->isTable==0 ); + pFree = 0; + if( pOp->p4.i>0 ){ + r.pKeyInfo = pC->pKeyInfo; + r.nField = (u16)pOp->p4.i; + r.aMem = pIn3; + for(ii=0; ii<r.nField; ii++){ + assert( memIsValid(&r.aMem[ii]) ); + ExpandBlob(&r.aMem[ii]); +#ifdef SQLITE_DEBUG + if( ii ) REGISTER_TRACE(pOp->p3+ii, &r.aMem[ii]); +#endif + } + pIdxKey = &r; + }else{ + pIdxKey = sqlite3VdbeAllocUnpackedRecord( + pC->pKeyInfo, aTempRec, sizeof(aTempRec), &pFree + ); + if( pIdxKey==0 ) goto no_mem; + assert( pIn3->flags & MEM_Blob ); + ExpandBlob(pIn3); + sqlite3VdbeRecordUnpack(pC->pKeyInfo, pIn3->n, pIn3->z, pIdxKey); + } + pIdxKey->default_rc = 0; + takeJump = 0; + if( pOp->opcode==OP_NoConflict ){ + /* For the OP_NoConflict opcode, take the jump if any of the + ** input fields are NULL, since any key with a NULL will not + ** conflict */ + for(ii=0; ii<pIdxKey->nField; ii++){ + if( pIdxKey->aMem[ii].flags & MEM_Null ){ + takeJump = 1; + break; + } + } + } + rc = sqlite3BtreeMovetoUnpacked(pC->uc.pCursor, pIdxKey, 0, 0, &res); + sqlite3DbFree(db, pFree); + if( rc!=SQLITE_OK ){ + break; + } + pC->seekResult = res; + alreadyExists = (res==0); + pC->nullRow = 1-alreadyExists; + pC->deferredMoveto = 0; + pC->cacheStatus = CACHE_STALE; + if( pOp->opcode==OP_Found ){ + VdbeBranchTaken(alreadyExists!=0,2); + if( alreadyExists ) goto jump_to_p2; + }else{ + VdbeBranchTaken(takeJump||alreadyExists==0,2); + if( takeJump || !alreadyExists ) goto jump_to_p2; } break; } /* Opcode: NotExists P1 P2 P3 * * -** -** Use the content of register P3 as a integer key. If a record -** with that key does not exist in table of P1, then jump to P2. -** If the record does exist, then fall thru. The cursor is left -** pointing to the record if it exists. -** -** The difference between this operation and NotFound is that this -** operation assumes the key is an integer and that P1 is a table whereas -** NotFound assumes key is a blob constructed from MakeRecord and -** P1 is an index. -** -** See also: Found, NotFound, IsUnique +** Synopsis: intkey=r[P3] +** +** P1 is the index of a cursor open on an SQL table btree (with integer +** keys). P3 is an integer rowid. If P1 does not contain a record with +** rowid P3 then jump immediately to P2. Or, if P2 is 0, raise an +** SQLITE_CORRUPT error. If P1 does contain a record with rowid P3 then +** leave the cursor pointing at that record and fall through to the next +** instruction. +** +** The OP_NotFound opcode performs the same operation on index btrees +** (with arbitrary multi-value keys). +** +** This opcode leaves the cursor in a state where it cannot be advanced +** in either direction. In other words, the Next and Prev opcodes will +** not work following this opcode. 
+** +** See also: Found, NotFound, NoConflict */ case OP_NotExists: { /* jump, in3 */ -#if 0 /* local variables moved into u.bd */ VdbeCursor *pC; BtCursor *pCrsr; int res; u64 iKey; -#endif /* local variables moved into u.bd */ pIn3 = &aMem[pOp->p3]; assert( pIn3->flags & MEM_Int ); assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - u.bd.pC = p->apCsr[pOp->p1]; - assert( u.bd.pC!=0 ); - assert( u.bd.pC->isTable ); - assert( u.bd.pC->pseudoTableReg==0 ); - u.bd.pCrsr = u.bd.pC->pCursor; - if( u.bd.pCrsr!=0 ){ - u.bd.res = 0; - u.bd.iKey = pIn3->u.i; - rc = sqlite3BtreeMovetoUnpacked(u.bd.pCrsr, 0, u.bd.iKey, 0, &u.bd.res); - u.bd.pC->lastRowid = pIn3->u.i; - u.bd.pC->rowidIsValid = u.bd.res==0 ?1:0; - u.bd.pC->nullRow = 0; - u.bd.pC->cacheStatus = CACHE_STALE; - u.bd.pC->deferredMoveto = 0; - if( u.bd.res!=0 ){ - pc = pOp->p2 - 1; - assert( u.bd.pC->rowidIsValid==0 ); - } - u.bd.pC->seekResult = u.bd.res; - }else{ - /* This happens when an attempt to open a read cursor on the - ** sqlite_master table returns SQLITE_EMPTY. - */ - pc = pOp->p2 - 1; - assert( u.bd.pC->rowidIsValid==0 ); - u.bd.pC->seekResult = 0; + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); +#ifdef SQLITE_DEBUG + pC->seekOp = 0; +#endif + assert( pC->isTable ); + assert( pC->eCurType==CURTYPE_BTREE ); + pCrsr = pC->uc.pCursor; + assert( pCrsr!=0 ); + res = 0; + iKey = pIn3->u.i; + rc = sqlite3BtreeMovetoUnpacked(pCrsr, 0, iKey, 0, &res); + assert( rc==SQLITE_OK || res==0 ); + pC->movetoTarget = iKey; /* Used by OP_Delete */ + pC->nullRow = 0; + pC->cacheStatus = CACHE_STALE; + pC->deferredMoveto = 0; + VdbeBranchTaken(res!=0,2); + pC->seekResult = res; + if( res!=0 ){ + assert( rc==SQLITE_OK ); + if( pOp->p2==0 ){ + rc = SQLITE_CORRUPT_BKPT; + }else{ + goto jump_to_p2; + } } break; } /* Opcode: Sequence P1 P2 * * * +** Synopsis: r[P2]=cursor[P1].ctr++ ** ** Find the next available sequence number for cursor P1. ** Write the sequence number into register P2. ** The sequence number on the cursor is incremented after this ** instruction. */ -case OP_Sequence: { /* out2-prerelease */ +case OP_Sequence: { /* out2 */ assert( pOp->p1>=0 && pOp->p1<p->nCursor ); assert( p->apCsr[pOp->p1]!=0 ); + assert( p->apCsr[pOp->p1]->eCurType!=CURTYPE_VTAB ); + pOut = out2Prerelease(p, pOp); pOut->u.i = p->apCsr[pOp->p1]->seqCount++; break; } /* Opcode: NewRowid P1 P2 P3 * * +** Synopsis: r[P2]=rowid ** ** Get a new integer record number (a.k.a "rowid") used as the key to a table. ** The record number is not previously used as a key in the database ** table that cursor P1 points to. The new record number is written ** written to register P2. ** ** If P3>0 then P3 is a register in the root frame of this VDBE that holds ** the largest previously generated record number. No new record numbers are ** allowed to be less than this value. When this value reaches its maximum, -** a SQLITE_FULL error is generated. The P3 register is updated with the ' +** an SQLITE_FULL error is generated. The P3 register is updated with the ' ** generated record number. This P3 mechanism is used to help implement the ** AUTOINCREMENT feature. 
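**
** As a hedged illustration of the P3 mechanism: for a table declared with
** INTEGER PRIMARY KEY AUTOINCREMENT, the code generator arranges for the
** largest rowid ever issued (as recorded in the sqlite_sequence table) to
** be loaded into the P3 register before this opcode runs, so that rowids
** are never reused even after rows have been deleted.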
*/ -case OP_NewRowid: { /* out2-prerelease */ -#if 0 /* local variables moved into u.be */ +case OP_NewRowid: { /* out2 */ i64 v; /* The new rowid */ VdbeCursor *pC; /* Cursor of table to get the new rowid */ int res; /* Result of an sqlite3BtreeLast() */ int cnt; /* Counter to limit the number of searches */ Mem *pMem; /* Register holding largest rowid for AUTOINCREMENT */ VdbeFrame *pFrame; /* Root frame of VDBE */ -#endif /* local variables moved into u.be */ - u.be.v = 0; - u.be.res = 0; + v = 0; + res = 0; + pOut = out2Prerelease(p, pOp); assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - u.be.pC = p->apCsr[pOp->p1]; - assert( u.be.pC!=0 ); - if( NEVER(u.be.pC->pCursor==0) ){ - /* The zero initialization above is all that is needed */ - }else{ + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); + assert( pC->eCurType==CURTYPE_BTREE ); + assert( pC->uc.pCursor!=0 ); + { /* The next rowid or record number (different terms for the same ** thing) is obtained in a two-step algorithm. ** ** First we attempt to find the largest existing rowid and add one ** to that. But if the largest existing rowid is already the maximum @@ -57293,12 +78333,11 @@ ** The second algorithm is to select a rowid at random and see if ** it already exists in the table. If it does not exist, we have ** succeeded. If the random rowid does exist, we select a new one ** and try again, up to 100 times. */ - assert( u.be.pC->isTable ); - u.be.cnt = 0; + assert( pC->isTable ); #ifdef SQLITE_32BIT_ROWID # define MAX_ROWID 0x7fffffff #else /* Some compilers complain about constants of the form 0x7fffffffffffffff. @@ -57306,96 +78345,89 @@ ** to provide the constant while making all compilers happy. */ # define MAX_ROWID (i64)( (((u64)0x7fffffff)<<32) | (u64)0xffffffff ) #endif - if( !u.be.pC->useRandomRowid ){ - u.be.v = sqlite3BtreeGetCachedRowid(u.be.pC->pCursor); - if( u.be.v==0 ){ - rc = sqlite3BtreeLast(u.be.pC->pCursor, &u.be.res); - if( rc!=SQLITE_OK ){ - goto abort_due_to_error; - } - if( u.be.res ){ - u.be.v = 1; /* IMP: R-61914-48074 */ - }else{ - assert( sqlite3BtreeCursorIsValid(u.be.pC->pCursor) ); - rc = sqlite3BtreeKeySize(u.be.pC->pCursor, &u.be.v); - assert( rc==SQLITE_OK ); /* Cannot fail following BtreeLast() */ - if( u.be.v==MAX_ROWID ){ - u.be.pC->useRandomRowid = 1; - }else{ - u.be.v++; /* IMP: R-29538-34987 */ - } - } - } - -#ifndef SQLITE_OMIT_AUTOINCREMENT - if( pOp->p3 ){ - /* Assert that P3 is a valid memory cell. */ - assert( pOp->p3>0 ); - if( p->pFrame ){ - for(u.be.pFrame=p->pFrame; u.be.pFrame->pParent; u.be.pFrame=u.be.pFrame->pParent); - /* Assert that P3 is a valid memory cell. */ - assert( pOp->p3<=u.be.pFrame->nMem ); - u.be.pMem = &u.be.pFrame->aMem[pOp->p3]; - }else{ - /* Assert that P3 is a valid memory cell. */ - assert( pOp->p3<=p->nMem ); - u.be.pMem = &aMem[pOp->p3]; - } - - REGISTER_TRACE(pOp->p3, u.be.pMem); - sqlite3VdbeMemIntegerify(u.be.pMem); - assert( (u.be.pMem->flags & MEM_Int)!=0 ); /* mem(P3) holds an integer */ - if( u.be.pMem->u.i==MAX_ROWID || u.be.pC->useRandomRowid ){ - rc = SQLITE_FULL; /* IMP: R-12275-61338 */ - goto abort_due_to_error; - } - if( u.be.v<u.be.pMem->u.i+1 ){ - u.be.v = u.be.pMem->u.i + 1; - } - u.be.pMem->u.i = u.be.v; - } -#endif - - sqlite3BtreeSetCachedRowid(u.be.pC->pCursor, u.be.v<MAX_ROWID ? 
u.be.v+1 : 0); - } - if( u.be.pC->useRandomRowid ){ - /* IMPLEMENTATION-OF: R-48598-02938 If the largest ROWID is equal to the - ** largest possible integer (9223372036854775807) then the database - ** engine starts picking candidate ROWIDs at random until it finds one - ** that is not previously used. - */ + if( !pC->useRandomRowid ){ + rc = sqlite3BtreeLast(pC->uc.pCursor, &res); + if( rc!=SQLITE_OK ){ + goto abort_due_to_error; + } + if( res ){ + v = 1; /* IMP: R-61914-48074 */ + }else{ + assert( sqlite3BtreeCursorIsValid(pC->uc.pCursor) ); + rc = sqlite3BtreeKeySize(pC->uc.pCursor, &v); + assert( rc==SQLITE_OK ); /* Cannot fail following BtreeLast() */ + if( v>=MAX_ROWID ){ + pC->useRandomRowid = 1; + }else{ + v++; /* IMP: R-29538-34987 */ + } + } + } + +#ifndef SQLITE_OMIT_AUTOINCREMENT + if( pOp->p3 ){ + /* Assert that P3 is a valid memory cell. */ + assert( pOp->p3>0 ); + if( p->pFrame ){ + for(pFrame=p->pFrame; pFrame->pParent; pFrame=pFrame->pParent); + /* Assert that P3 is a valid memory cell. */ + assert( pOp->p3<=pFrame->nMem ); + pMem = &pFrame->aMem[pOp->p3]; + }else{ + /* Assert that P3 is a valid memory cell. */ + assert( pOp->p3<=(p->nMem-p->nCursor) ); + pMem = &aMem[pOp->p3]; + memAboutToChange(p, pMem); + } + assert( memIsValid(pMem) ); + + REGISTER_TRACE(pOp->p3, pMem); + sqlite3VdbeMemIntegerify(pMem); + assert( (pMem->flags & MEM_Int)!=0 ); /* mem(P3) holds an integer */ + if( pMem->u.i==MAX_ROWID || pC->useRandomRowid ){ + rc = SQLITE_FULL; /* IMP: R-12275-61338 */ + goto abort_due_to_error; + } + if( v<pMem->u.i+1 ){ + v = pMem->u.i + 1; + } + pMem->u.i = v; + } +#endif + if( pC->useRandomRowid ){ + /* IMPLEMENTATION-OF: R-07677-41881 If the largest ROWID is equal to the + ** largest possible integer (9223372036854775807) then the database + ** engine starts picking positive candidate ROWIDs at random until + ** it finds one that is not previously used. */ assert( pOp->p3==0 ); /* We cannot be in random rowid mode if this is ** an AUTOINCREMENT table. */ - u.be.v = db->lastRowid; - u.be.cnt = 0; + cnt = 0; do{ - if( u.be.cnt==0 && (u.be.v&0xffffff)==u.be.v ){ - u.be.v++; - }else{ - sqlite3_randomness(sizeof(u.be.v), &u.be.v); - if( u.be.cnt<5 ) u.be.v &= 0xffffff; - } - rc = sqlite3BtreeMovetoUnpacked(u.be.pC->pCursor, 0, (u64)u.be.v, 0, &u.be.res); - u.be.cnt++; - }while( u.be.cnt<100 && rc==SQLITE_OK && u.be.res==0 ); - if( rc==SQLITE_OK && u.be.res==0 ){ + sqlite3_randomness(sizeof(v), &v); + v &= (MAX_ROWID>>1); v++; /* Ensure that v is greater than zero */ + }while( ((rc = sqlite3BtreeMovetoUnpacked(pC->uc.pCursor, 0, (u64)v, + 0, &res))==SQLITE_OK) + && (res==0) + && (++cnt<100)); + if( rc==SQLITE_OK && res==0 ){ rc = SQLITE_FULL; /* IMP: R-38219-53002 */ goto abort_due_to_error; } + assert( v>0 ); /* EV: R-40812-03570 */ } - u.be.pC->rowidIsValid = 0; - u.be.pC->deferredMoveto = 0; - u.be.pC->cacheStatus = CACHE_STALE; + pC->deferredMoveto = 0; + pC->cacheStatus = CACHE_STALE; } - pOut->u.i = u.be.v; + pOut->u.i = v; break; } /* Opcode: Insert P1 P2 P3 P4 P5 +** Synopsis: intkey=r[P3] data=r[P2] ** ** Write an entry into the table of cursor P1. A new entry is ** created if it doesn't already exist or the data for an existing ** entry is overwritten. The data is the value MEM_Blob stored in register ** number P2. The key is stored in register P3. The key must @@ -57431,93 +78463,101 @@ ** ** This instruction only works on tables. The equivalent instruction ** for indices is OP_IdxInsert. 
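**
** As a rough, hypothetical sketch, a plain
**
**     INSERT INTO t1(x) VALUES(5);
**
** is typically compiled into something like NewRowid (pick a fresh key),
** MakeRecord (assemble the row blob in a register), and then this opcode
** to write the record into the table b-tree.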
*/ /* Opcode: InsertInt P1 P2 P3 P4 P5 +** Synopsis: intkey=P3 data=r[P2] ** ** This works exactly like OP_Insert except that the key is the ** integer value P3, not the value of the integer stored in register P3. */ case OP_Insert: case OP_InsertInt: { -#if 0 /* local variables moved into u.bf */ Mem *pData; /* MEM cell holding data for the record to be inserted */ Mem *pKey; /* MEM cell holding key for the record */ i64 iKey; /* The integer ROWID or key for the record to be inserted */ VdbeCursor *pC; /* Cursor to table into which insert is written */ int nZero; /* Number of zero-bytes to append */ int seekResult; /* Result of prior seek or 0 if no USESEEKRESULT flag */ const char *zDb; /* database name - used by the update hook */ const char *zTbl; /* Table name - used by the opdate hook */ int op; /* Opcode for update hook: SQLITE_UPDATE or SQLITE_INSERT */ -#endif /* local variables moved into u.bf */ - u.bf.pData = &aMem[pOp->p2]; + pData = &aMem[pOp->p2]; assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - u.bf.pC = p->apCsr[pOp->p1]; - assert( u.bf.pC!=0 ); - assert( u.bf.pC->pCursor!=0 ); - assert( u.bf.pC->pseudoTableReg==0 ); - assert( u.bf.pC->isTable ); - REGISTER_TRACE(pOp->p2, u.bf.pData); + assert( memIsValid(pData) ); + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); + assert( pC->eCurType==CURTYPE_BTREE ); + assert( pC->uc.pCursor!=0 ); + assert( pC->isTable ); + REGISTER_TRACE(pOp->p2, pData); if( pOp->opcode==OP_Insert ){ - u.bf.pKey = &aMem[pOp->p3]; - assert( u.bf.pKey->flags & MEM_Int ); - REGISTER_TRACE(pOp->p3, u.bf.pKey); - u.bf.iKey = u.bf.pKey->u.i; + pKey = &aMem[pOp->p3]; + assert( pKey->flags & MEM_Int ); + assert( memIsValid(pKey) ); + REGISTER_TRACE(pOp->p3, pKey); + iKey = pKey->u.i; }else{ assert( pOp->opcode==OP_InsertInt ); - u.bf.iKey = pOp->p3; + iKey = pOp->p3; } if( pOp->p5 & OPFLAG_NCHANGE ) p->nChange++; - if( pOp->p5 & OPFLAG_LASTROWID ) db->lastRowid = u.bf.iKey; - if( u.bf.pData->flags & MEM_Null ){ - u.bf.pData->z = 0; - u.bf.pData->n = 0; - }else{ - assert( u.bf.pData->flags & (MEM_Blob|MEM_Str) ); - } - u.bf.seekResult = ((pOp->p5 & OPFLAG_USESEEKRESULT) ? u.bf.pC->seekResult : 0); - if( u.bf.pData->flags & MEM_Zero ){ - u.bf.nZero = u.bf.pData->u.nZero; - }else{ - u.bf.nZero = 0; - } - sqlite3BtreeSetCachedRowid(u.bf.pC->pCursor, 0); - rc = sqlite3BtreeInsert(u.bf.pC->pCursor, 0, u.bf.iKey, - u.bf.pData->z, u.bf.pData->n, u.bf.nZero, - pOp->p5 & OPFLAG_APPEND, u.bf.seekResult - ); - u.bf.pC->rowidIsValid = 0; - u.bf.pC->deferredMoveto = 0; - u.bf.pC->cacheStatus = CACHE_STALE; + if( pOp->p5 & OPFLAG_LASTROWID ) db->lastRowid = lastRowid = iKey; + if( pData->flags & MEM_Null ){ + pData->z = 0; + pData->n = 0; + }else{ + assert( pData->flags & (MEM_Blob|MEM_Str) ); + } + seekResult = ((pOp->p5 & OPFLAG_USESEEKRESULT) ? pC->seekResult : 0); + if( pData->flags & MEM_Zero ){ + nZero = pData->u.nZero; + }else{ + nZero = 0; + } + rc = sqlite3BtreeInsert(pC->uc.pCursor, 0, iKey, + pData->z, pData->n, nZero, + (pOp->p5 & OPFLAG_APPEND)!=0, seekResult + ); + pC->deferredMoveto = 0; + pC->cacheStatus = CACHE_STALE; /* Invoke the update-hook if required. */ if( rc==SQLITE_OK && db->xUpdateCallback && pOp->p4.z ){ - u.bf.zDb = db->aDb[u.bf.pC->iDb].zName; - u.bf.zTbl = pOp->p4.z; - u.bf.op = ((pOp->p5 & OPFLAG_ISUPDATE) ? 
SQLITE_UPDATE : SQLITE_INSERT); - assert( u.bf.pC->isTable ); - db->xUpdateCallback(db->pUpdateArg, u.bf.op, u.bf.zDb, u.bf.zTbl, u.bf.iKey); - assert( u.bf.pC->iDb>=0 ); + zDb = db->aDb[pC->iDb].zName; + zTbl = pOp->p4.z; + op = ((pOp->p5 & OPFLAG_ISUPDATE) ? SQLITE_UPDATE : SQLITE_INSERT); + assert( pC->isTable ); + db->xUpdateCallback(db->pUpdateArg, op, zDb, zTbl, iKey); + assert( pC->iDb>=0 ); } break; } -/* Opcode: Delete P1 P2 * P4 * +/* Opcode: Delete P1 P2 * P4 P5 ** ** Delete the record at which the P1 cursor is currently pointing. ** -** The cursor will be left pointing at either the next or the previous +** If the OPFLAG_SAVEPOSITION bit of the P5 parameter is set, then +** the cursor will be left pointing at either the next or the previous ** record in the table. If it is left pointing at the next record, then -** the next Next instruction will be a no-op. Hence it is OK to delete -** a record from within an Next loop. +** the next Next instruction will be a no-op. As a result, in this case +** it is ok to delete a record from within a Next loop. If +** OPFLAG_SAVEPOSITION bit of P5 is clear, then the cursor will be +** left in an undefined state. ** -** If the OPFLAG_NCHANGE flag of P2 is set, then the row change count is -** incremented (otherwise not). +** If the OPFLAG_AUXDELETE bit is set on P5, that indicates that this +** delete one of several associated with deleting a table row and all its +** associated index entries. Exactly one of those deletes is the "primary" +** delete. The others are all on OPFLAG_FORDELETE cursors or else are +** marked with the AUXDELETE flag. +** +** If the OPFLAG_NCHANGE flag of P2 (NB: P2 not P5) is set, then the row +** change count is incremented (otherwise not). ** ** P1 must not be pseudo-table. It has to be a real table with ** multiple rows. ** ** If P4 is not NULL, then it is the name of the table that P1 is @@ -57524,51 +78564,63 @@ ** pointing to. The update hook will be invoked, if it exists. ** If P4 is not NULL then the P1 cursor must have been positioned ** using OP_NotFound prior to invoking this opcode. */ case OP_Delete: { -#if 0 /* local variables moved into u.bg */ - i64 iKey; VdbeCursor *pC; -#endif /* local variables moved into u.bg */ + u8 hasUpdateCallback; - u.bg.iKey = 0; assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - u.bg.pC = p->apCsr[pOp->p1]; - assert( u.bg.pC!=0 ); - assert( u.bg.pC->pCursor!=0 ); /* Only valid for real tables, no pseudotables */ - - /* If the update-hook will be invoked, set u.bg.iKey to the rowid of the - ** row being deleted. - */ - if( db->xUpdateCallback && pOp->p4.z ){ - assert( u.bg.pC->isTable ); - assert( u.bg.pC->rowidIsValid ); /* lastRowid set by previous OP_NotFound */ - u.bg.iKey = u.bg.pC->lastRowid; - } - - /* The OP_Delete opcode always follows an OP_NotExists or OP_Last or - ** OP_Column on the same table without any intervening operations that - ** might move or invalidate the cursor. Hence cursor u.bg.pC is always pointing - ** to the row to be deleted and the sqlite3VdbeCursorMoveto() operation - ** below is always a no-op and cannot fail. We will run it anyhow, though, - ** to guard against future changes to the code generator. 
- **/ - assert( u.bg.pC->deferredMoveto==0 ); - rc = sqlite3VdbeCursorMoveto(u.bg.pC); - if( NEVER(rc!=SQLITE_OK) ) goto abort_due_to_error; - - sqlite3BtreeSetCachedRowid(u.bg.pC->pCursor, 0); - rc = sqlite3BtreeDelete(u.bg.pC->pCursor); - u.bg.pC->cacheStatus = CACHE_STALE; + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); + assert( pC->eCurType==CURTYPE_BTREE ); + assert( pC->uc.pCursor!=0 ); + assert( pC->deferredMoveto==0 ); + + hasUpdateCallback = db->xUpdateCallback && pOp->p4.z && pC->isTable; + if( pOp->p5 && hasUpdateCallback ){ + sqlite3BtreeKeySize(pC->uc.pCursor, &pC->movetoTarget); + } + +#ifdef SQLITE_DEBUG + /* The seek operation that positioned the cursor prior to OP_Delete will + ** have also set the pC->movetoTarget field to the rowid of the row that + ** is being deleted */ + if( pOp->p4.z && pC->isTable && pOp->p5==0 ){ + i64 iKey = 0; + sqlite3BtreeKeySize(pC->uc.pCursor, &iKey); + assert( pC->movetoTarget==iKey ); + } +#endif + + /* Only flags that can be set are SAVEPOISTION and AUXDELETE */ + assert( (pOp->p5 & ~(OPFLAG_SAVEPOSITION|OPFLAG_AUXDELETE))==0 ); + assert( OPFLAG_SAVEPOSITION==BTREE_SAVEPOSITION ); + assert( OPFLAG_AUXDELETE==BTREE_AUXDELETE ); + +#ifdef SQLITE_DEBUG + if( p->pFrame==0 ){ + if( pC->isEphemeral==0 + && (pOp->p5 & OPFLAG_AUXDELETE)==0 + && (pC->wrFlag & OPFLAG_FORDELETE)==0 + ){ + nExtraDelete++; + } + if( pOp->p2 & OPFLAG_NCHANGE ){ + nExtraDelete--; + } + } +#endif + + rc = sqlite3BtreeDelete(pC->uc.pCursor, pOp->p5); + pC->cacheStatus = CACHE_STALE; /* Invoke the update-hook if required. */ - if( rc==SQLITE_OK && db->xUpdateCallback && pOp->p4.z ){ - const char *zDb = db->aDb[u.bg.pC->iDb].zName; - const char *zTbl = pOp->p4.z; - db->xUpdateCallback(db->pUpdateArg, SQLITE_DELETE, zDb, zTbl, u.bg.iKey); - assert( u.bg.pC->iDb>=0 ); + if( rc==SQLITE_OK && hasUpdateCallback ){ + db->xUpdateCallback(db->pUpdateArg, SQLITE_DELETE, + db->aDb[pC->iDb].zName, pOp->p4.z, pC->movetoTarget); + assert( pC->iDb>=0 ); } if( pOp->p2 & OPFLAG_NCHANGE ) p->nChange++; break; } /* Opcode: ResetCount * * * * * @@ -57581,12 +78633,70 @@ case OP_ResetCount: { sqlite3VdbeSetChanges(db, p->nChange); p->nChange = 0; break; } + +/* Opcode: SorterCompare P1 P2 P3 P4 +** Synopsis: if key(P1)!=trim(r[P3],P4) goto P2 +** +** P1 is a sorter cursor. This instruction compares a prefix of the +** record blob in register P3 against a prefix of the entry that +** the sorter cursor currently points to. Only the first P4 fields +** of r[P3] and the sorter record are compared. +** +** If either P3 or the sorter contains a NULL in one of their significant +** fields (not counting the P4 fields at the end which are ignored) then +** the comparison is assumed to be equal. +** +** Fall through to next instruction if the two records compare equal to +** each other. Jump to P2 if they are different. +*/ +case OP_SorterCompare: { + VdbeCursor *pC; + int res; + int nKeyCol; + + pC = p->apCsr[pOp->p1]; + assert( isSorter(pC) ); + assert( pOp->p4type==P4_INT32 ); + pIn3 = &aMem[pOp->p3]; + nKeyCol = pOp->p4.i; + res = 0; + rc = sqlite3VdbeSorterCompare(pC, pIn3, nKeyCol, &res); + VdbeBranchTaken(res!=0,2); + if( res ) goto jump_to_p2; + break; +}; + +/* Opcode: SorterData P1 P2 P3 * * +** Synopsis: r[P2]=data +** +** Write into register P2 the current sorter data for sorter cursor P1. +** Then clear the column header cache on cursor P3. 
+** +** This opcode is normally use to move a record out of the sorter and into +** a register that is the source for a pseudo-table cursor created using +** OpenPseudo. That pseudo-table cursor is the one that is identified by +** parameter P3. Clearing the P3 column cache as part of this opcode saves +** us from having to issue a separate NullRow instruction to clear that cache. +*/ +case OP_SorterData: { + VdbeCursor *pC; + + pOut = &aMem[pOp->p2]; + pC = p->apCsr[pOp->p1]; + assert( isSorter(pC) ); + rc = sqlite3VdbeSorterRowkey(pC, pOut); + assert( rc!=SQLITE_OK || (pOut->flags & MEM_Blob) ); + assert( pOp->p1>=0 && pOp->p1<p->nCursor ); + p->apCsr[pOp->p3]->cacheStatus = CACHE_STALE; + break; +} /* Opcode: RowData P1 P2 * * * +** Synopsis: r[P2]=data ** ** Write into register P2 the complete row data for cursor P1. ** There is no interpretation of the data. ** It is just copied onto the P2 register exactly as ** it is found in the database file. @@ -57593,129 +78703,136 @@ ** ** If the P1 cursor must be pointing to a valid row (not a NULL row) ** of a real table, not a pseudo-table. */ /* Opcode: RowKey P1 P2 * * * +** Synopsis: r[P2]=key ** ** Write into register P2 the complete row key for cursor P1. ** There is no interpretation of the data. -** The key is copied onto the P3 register exactly as +** The key is copied onto the P2 register exactly as ** it is found in the database file. ** ** If the P1 cursor must be pointing to a valid row (not a NULL row) ** of a real table, not a pseudo-table. */ case OP_RowKey: case OP_RowData: { -#if 0 /* local variables moved into u.bh */ VdbeCursor *pC; BtCursor *pCrsr; u32 n; i64 n64; -#endif /* local variables moved into u.bh */ pOut = &aMem[pOp->p2]; + memAboutToChange(p, pOut); /* Note that RowKey and RowData are really exactly the same instruction */ assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - u.bh.pC = p->apCsr[pOp->p1]; - assert( u.bh.pC->isTable || pOp->opcode==OP_RowKey ); - assert( u.bh.pC->isIndex || pOp->opcode==OP_RowData ); - assert( u.bh.pC!=0 ); - assert( u.bh.pC->nullRow==0 ); - assert( u.bh.pC->pseudoTableReg==0 ); - assert( u.bh.pC->pCursor!=0 ); - u.bh.pCrsr = u.bh.pC->pCursor; - assert( sqlite3BtreeCursorIsValid(u.bh.pCrsr) ); + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); + assert( pC->eCurType==CURTYPE_BTREE ); + assert( isSorter(pC)==0 ); + assert( pC->isTable || pOp->opcode!=OP_RowData ); + assert( pC->isTable==0 || pOp->opcode==OP_RowData ); + assert( pC->nullRow==0 ); + assert( pC->uc.pCursor!=0 ); + pCrsr = pC->uc.pCursor; /* The OP_RowKey and OP_RowData opcodes always follow OP_NotExists or ** OP_Rewind/Op_Next with no intervening instructions that might invalidate - ** the cursor. Hence the following sqlite3VdbeCursorMoveto() call is always - ** a no-op and can never fail. But we leave it in place as a safety. 
- */ - assert( u.bh.pC->deferredMoveto==0 ); - rc = sqlite3VdbeCursorMoveto(u.bh.pC); - if( NEVER(rc!=SQLITE_OK) ) goto abort_due_to_error; - - if( u.bh.pC->isIndex ){ - assert( !u.bh.pC->isTable ); - rc = sqlite3BtreeKeySize(u.bh.pCrsr, &u.bh.n64); - assert( rc==SQLITE_OK ); /* True because of CursorMoveto() call above */ - if( u.bh.n64>db->aLimit[SQLITE_LIMIT_LENGTH] ){ - goto too_big; - } - u.bh.n = (u32)u.bh.n64; - }else{ - rc = sqlite3BtreeDataSize(u.bh.pCrsr, &u.bh.n); - assert( rc==SQLITE_OK ); /* DataSize() cannot fail */ - if( u.bh.n>(u32)db->aLimit[SQLITE_LIMIT_LENGTH] ){ - goto too_big; - } - } - if( sqlite3VdbeMemGrow(pOut, u.bh.n, 0) ){ - goto no_mem; - } - pOut->n = u.bh.n; - MemSetTypeFlag(pOut, MEM_Blob); - if( u.bh.pC->isIndex ){ - rc = sqlite3BtreeKey(u.bh.pCrsr, 0, u.bh.n, pOut->z); - }else{ - rc = sqlite3BtreeData(u.bh.pCrsr, 0, u.bh.n, pOut->z); - } - pOut->enc = SQLITE_UTF8; /* In case the blob is ever cast to text */ - UPDATE_MAX_BLOBSIZE(pOut); + ** the cursor. If this where not the case, on of the following assert()s + ** would fail. Should this ever change (because of changes in the code + ** generator) then the fix would be to insert a call to + ** sqlite3VdbeCursorMoveto(). + */ + assert( pC->deferredMoveto==0 ); + assert( sqlite3BtreeCursorIsValid(pCrsr) ); +#if 0 /* Not required due to the previous to assert() statements */ + rc = sqlite3VdbeCursorMoveto(pC); + if( rc!=SQLITE_OK ) goto abort_due_to_error; +#endif + + if( pC->isTable==0 ){ + assert( !pC->isTable ); + VVA_ONLY(rc =) sqlite3BtreeKeySize(pCrsr, &n64); + assert( rc==SQLITE_OK ); /* True because of CursorMoveto() call above */ + if( n64>db->aLimit[SQLITE_LIMIT_LENGTH] ){ + goto too_big; + } + n = (u32)n64; + }else{ + VVA_ONLY(rc =) sqlite3BtreeDataSize(pCrsr, &n); + assert( rc==SQLITE_OK ); /* DataSize() cannot fail */ + if( n>(u32)db->aLimit[SQLITE_LIMIT_LENGTH] ){ + goto too_big; + } + } + testcase( n==0 ); + if( sqlite3VdbeMemClearAndResize(pOut, MAX(n,32)) ){ + goto no_mem; + } + pOut->n = n; + MemSetTypeFlag(pOut, MEM_Blob); + if( pC->isTable==0 ){ + rc = sqlite3BtreeKey(pCrsr, 0, n, pOut->z); + }else{ + rc = sqlite3BtreeData(pCrsr, 0, n, pOut->z); + } + pOut->enc = SQLITE_UTF8; /* In case the blob is ever cast to text */ + UPDATE_MAX_BLOBSIZE(pOut); + REGISTER_TRACE(pOp->p2, pOut); break; } /* Opcode: Rowid P1 P2 * * * +** Synopsis: r[P2]=rowid ** ** Store in register P2 an integer which is the key of the table entry that ** P1 is currently point to. ** ** P1 can be either an ordinary table or a virtual table. There used to ** be a separate OP_VRowid opcode for use with virtual tables, but this ** one opcode now works for both table types. 
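**
** As a sketch (exact code generation varies by release; the table name
** is made up), a query such as
**
**        SELECT rowid, x FROM t1;
**
** is typically compiled into a Rewind/Next scan in which a Rowid
** instruction loads the key of the current row into a register that
** feeds the result row.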
*/ -case OP_Rowid: { /* out2-prerelease */ -#if 0 /* local variables moved into u.bi */ +case OP_Rowid: { /* out2 */ VdbeCursor *pC; i64 v; sqlite3_vtab *pVtab; const sqlite3_module *pModule; -#endif /* local variables moved into u.bi */ + pOut = out2Prerelease(p, pOp); assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - u.bi.pC = p->apCsr[pOp->p1]; - assert( u.bi.pC!=0 ); - assert( u.bi.pC->pseudoTableReg==0 ); - if( u.bi.pC->nullRow ){ + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); + assert( pC->eCurType!=CURTYPE_PSEUDO || pC->nullRow ); + if( pC->nullRow ){ pOut->flags = MEM_Null; break; - }else if( u.bi.pC->deferredMoveto ){ - u.bi.v = u.bi.pC->movetoTarget; + }else if( pC->deferredMoveto ){ + v = pC->movetoTarget; #ifndef SQLITE_OMIT_VIRTUALTABLE - }else if( u.bi.pC->pVtabCursor ){ - u.bi.pVtab = u.bi.pC->pVtabCursor->pVtab; - u.bi.pModule = u.bi.pVtab->pModule; - assert( u.bi.pModule->xRowid ); - rc = u.bi.pModule->xRowid(u.bi.pC->pVtabCursor, &u.bi.v); - sqlite3DbFree(db, p->zErrMsg); - p->zErrMsg = u.bi.pVtab->zErrMsg; - u.bi.pVtab->zErrMsg = 0; + }else if( pC->eCurType==CURTYPE_VTAB ){ + assert( pC->uc.pVCur!=0 ); + pVtab = pC->uc.pVCur->pVtab; + pModule = pVtab->pModule; + assert( pModule->xRowid ); + rc = pModule->xRowid(pC->uc.pVCur, &v); + sqlite3VtabImportErrmsg(p, pVtab); #endif /* SQLITE_OMIT_VIRTUALTABLE */ }else{ - assert( u.bi.pC->pCursor!=0 ); - rc = sqlite3VdbeCursorMoveto(u.bi.pC); + assert( pC->eCurType==CURTYPE_BTREE ); + assert( pC->uc.pCursor!=0 ); + rc = sqlite3VdbeCursorRestore(pC); if( rc ) goto abort_due_to_error; - if( u.bi.pC->rowidIsValid ){ - u.bi.v = u.bi.pC->lastRowid; - }else{ - rc = sqlite3BtreeKeySize(u.bi.pC->pCursor, &u.bi.v); - assert( rc==SQLITE_OK ); /* Always so because of CursorMoveto() above */ - } - } - pOut->u.i = u.bi.v; + if( pC->nullRow ){ + pOut->flags = MEM_Null; + break; + } + rc = sqlite3BtreeKeySize(pC->uc.pCursor, &v); + assert( rc==SQLITE_OK ); /* Always so because of CursorRestore() above */ + } + pOut->u.i = v; break; } /* Opcode: NullRow P1 * * * * ** @@ -57722,55 +78839,59 @@ ** Move the cursor P1 to a null row. Any OP_Column operations ** that occur while the cursor is on the null row will always ** write a NULL. */ case OP_NullRow: { -#if 0 /* local variables moved into u.bj */ VdbeCursor *pC; -#endif /* local variables moved into u.bj */ assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - u.bj.pC = p->apCsr[pOp->p1]; - assert( u.bj.pC!=0 ); - u.bj.pC->nullRow = 1; - u.bj.pC->rowidIsValid = 0; - if( u.bj.pC->pCursor ){ - sqlite3BtreeClearCursor(u.bj.pC->pCursor); + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); + pC->nullRow = 1; + pC->cacheStatus = CACHE_STALE; + if( pC->eCurType==CURTYPE_BTREE ){ + assert( pC->uc.pCursor!=0 ); + sqlite3BtreeClearCursor(pC->uc.pCursor); } break; } -/* Opcode: Last P1 P2 * * * +/* Opcode: Last P1 P2 P3 * * ** -** The next use of the Rowid or Column or Next instruction for P1 +** The next use of the Rowid or Column or Prev instruction for P1 ** will refer to the last entry in the database table or index. ** If the table or index is empty and P2>0, then jump immediately to P2. ** If P2 is 0 or if the table or index is not empty, fall through ** to the following instruction. +** +** This opcode leaves the cursor configured to move in reverse order, +** from the end toward the beginning. In other words, the cursor is +** configured to use Prev, not Next. 
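+**
+** As a rough example (code generation details vary by release; the table
+** name is made up), a query such as
+**
+**        SELECT * FROM t1 ORDER BY rowid DESC;
+**
+** is typically coded as a Last instruction to position at the end of the
+** table followed by a loop that walks backwards using Prev.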
*/ case OP_Last: { /* jump */ -#if 0 /* local variables moved into u.bk */ VdbeCursor *pC; BtCursor *pCrsr; int res; -#endif /* local variables moved into u.bk */ assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - u.bk.pC = p->apCsr[pOp->p1]; - assert( u.bk.pC!=0 ); - u.bk.pCrsr = u.bk.pC->pCursor; - if( u.bk.pCrsr==0 ){ - u.bk.res = 1; - }else{ - rc = sqlite3BtreeLast(u.bk.pCrsr, &u.bk.res); - } - u.bk.pC->nullRow = (u8)u.bk.res; - u.bk.pC->deferredMoveto = 0; - u.bk.pC->rowidIsValid = 0; - u.bk.pC->cacheStatus = CACHE_STALE; - if( pOp->p2>0 && u.bk.res ){ - pc = pOp->p2 - 1; + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); + assert( pC->eCurType==CURTYPE_BTREE ); + pCrsr = pC->uc.pCursor; + res = 0; + assert( pCrsr!=0 ); + rc = sqlite3BtreeLast(pCrsr, &res); + pC->nullRow = (u8)res; + pC->deferredMoveto = 0; + pC->cacheStatus = CACHE_STALE; + pC->seekResult = pOp->p3; +#ifdef SQLITE_DEBUG + pC->seekOp = OP_Last; +#endif + if( pOp->p2>0 ){ + VdbeBranchTaken(res!=0,2); + if( res ) goto jump_to_p2; } break; } @@ -57784,290 +78905,436 @@ ** end. We use the OP_Sort opcode instead of OP_Rewind to do the ** rewinding so that the global variable will be incremented and ** regression tests can determine whether or not the optimizer is ** correctly optimizing out sorts. */ +case OP_SorterSort: /* jump */ case OP_Sort: { /* jump */ #ifdef SQLITE_TEST sqlite3_sort_count++; sqlite3_search_count--; #endif - p->aCounter[SQLITE_STMTSTATUS_SORT-1]++; + p->aCounter[SQLITE_STMTSTATUS_SORT]++; /* Fall through into OP_Rewind */ } /* Opcode: Rewind P1 P2 * * * ** ** The next use of the Rowid or Column or Next instruction for P1 ** will refer to the first entry in the database table or index. -** If the table or index is empty and P2>0, then jump immediately to P2. -** If P2 is 0 or if the table or index is not empty, fall through -** to the following instruction. +** If the table or index is empty, jump immediately to P2. +** If the table or index is not empty, fall through to the following +** instruction. +** +** This opcode leaves the cursor configured to move in forward order, +** from the beginning toward the end. In other words, the cursor is +** configured to use Next, not Prev. */ case OP_Rewind: { /* jump */ -#if 0 /* local variables moved into u.bl */ VdbeCursor *pC; BtCursor *pCrsr; int res; -#endif /* local variables moved into u.bl */ assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - u.bl.pC = p->apCsr[pOp->p1]; - assert( u.bl.pC!=0 ); - if( (u.bl.pCrsr = u.bl.pC->pCursor)!=0 ){ - rc = sqlite3BtreeFirst(u.bl.pCrsr, &u.bl.res); - u.bl.pC->atFirst = u.bl.res==0 ?1:0; - u.bl.pC->deferredMoveto = 0; - u.bl.pC->cacheStatus = CACHE_STALE; - u.bl.pC->rowidIsValid = 0; + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); + assert( isSorter(pC)==(pOp->opcode==OP_SorterSort) ); + res = 1; +#ifdef SQLITE_DEBUG + pC->seekOp = OP_Rewind; +#endif + if( isSorter(pC) ){ + rc = sqlite3VdbeSorterRewind(pC, &res); }else{ - u.bl.res = 1; + assert( pC->eCurType==CURTYPE_BTREE ); + pCrsr = pC->uc.pCursor; + assert( pCrsr ); + rc = sqlite3BtreeFirst(pCrsr, &res); + pC->deferredMoveto = 0; + pC->cacheStatus = CACHE_STALE; } - u.bl.pC->nullRow = (u8)u.bl.res; + pC->nullRow = (u8)res; assert( pOp->p2>0 && pOp->p2<p->nOp ); - if( u.bl.res ){ - pc = pOp->p2 - 1; - } + VdbeBranchTaken(res!=0,2); + if( res ) goto jump_to_p2; break; } -/* Opcode: Next P1 P2 * * P5 +/* Opcode: Next P1 P2 P3 P4 P5 ** ** Advance cursor P1 so that it points to the next key/data pair in its ** table or index. 
If there are no more key/value pairs then fall through ** to the following instruction. But if the cursor advance was successful, ** jump immediately to P2. ** -** The P1 cursor must be for a real table, not a pseudo-table. +** The Next opcode is only valid following an SeekGT, SeekGE, or +** OP_Rewind opcode used to position the cursor. Next is not allowed +** to follow SeekLT, SeekLE, or OP_Last. +** +** The P1 cursor must be for a real table, not a pseudo-table. P1 must have +** been opened prior to this opcode or the program will segfault. +** +** The P3 value is a hint to the btree implementation. If P3==1, that +** means P1 is an SQL index and that this instruction could have been +** omitted if that index had been unique. P3 is usually 0. P3 is +** always either 0 or 1. +** +** P4 is always of type P4_ADVANCE. The function pointer points to +** sqlite3BtreeNext(). ** ** If P5 is positive and the jump is taken, then event counter ** number P5-1 in the prepared statement is incremented. ** -** See also: Prev +** See also: Prev, NextIfOpen */ -/* Opcode: Prev P1 P2 * * P5 +/* Opcode: NextIfOpen P1 P2 P3 P4 P5 +** +** This opcode works just like Next except that if cursor P1 is not +** open it behaves a no-op. +*/ +/* Opcode: Prev P1 P2 P3 P4 P5 ** ** Back up cursor P1 so that it points to the previous key/data pair in its ** table or index. If there is no previous key/value pairs then fall through ** to the following instruction. But if the cursor backup was successful, ** jump immediately to P2. ** -** The P1 cursor must be for a real table, not a pseudo-table. +** +** The Prev opcode is only valid following an SeekLT, SeekLE, or +** OP_Last opcode used to position the cursor. Prev is not allowed +** to follow SeekGT, SeekGE, or OP_Rewind. +** +** The P1 cursor must be for a real table, not a pseudo-table. If P1 is +** not open then the behavior is undefined. +** +** The P3 value is a hint to the btree implementation. If P3==1, that +** means P1 is an SQL index and that this instruction could have been +** omitted if that index had been unique. P3 is usually 0. P3 is +** always either 0 or 1. +** +** P4 is always of type P4_ADVANCE. The function pointer points to +** sqlite3BtreePrevious(). ** ** If P5 is positive and the jump is taken, then event counter ** number P5-1 in the prepared statement is incremented. */ +/* Opcode: PrevIfOpen P1 P2 P3 P4 P5 +** +** This opcode works just like Prev except that if cursor P1 is not +** open it behaves a no-op. +*/ +case OP_SorterNext: { /* jump */ + VdbeCursor *pC; + int res; + + pC = p->apCsr[pOp->p1]; + assert( isSorter(pC) ); + res = 0; + rc = sqlite3VdbeSorterNext(db, pC, &res); + goto next_tail; +case OP_PrevIfOpen: /* jump */ +case OP_NextIfOpen: /* jump */ + if( p->apCsr[pOp->p1]==0 ) break; + /* Fall through */ case OP_Prev: /* jump */ -case OP_Next: { /* jump */ -#if 0 /* local variables moved into u.bm */ - VdbeCursor *pC; - BtCursor *pCrsr; - int res; -#endif /* local variables moved into u.bm */ - - CHECK_FOR_INTERRUPT; +case OP_Next: /* jump */ assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - assert( pOp->p5<=ArraySize(p->aCounter) ); - u.bm.pC = p->apCsr[pOp->p1]; - if( u.bm.pC==0 ){ - break; /* See ticket #2273 */ - } - u.bm.pCrsr = u.bm.pC->pCursor; - if( u.bm.pCrsr==0 ){ - u.bm.pC->nullRow = 1; - break; - } - u.bm.res = 1; - assert( u.bm.pC->deferredMoveto==0 ); - rc = pOp->opcode==OP_Next ? 
sqlite3BtreeNext(u.bm.pCrsr, &u.bm.res) : - sqlite3BtreePrevious(u.bm.pCrsr, &u.bm.res); - u.bm.pC->nullRow = (u8)u.bm.res; - u.bm.pC->cacheStatus = CACHE_STALE; - if( u.bm.res==0 ){ - pc = pOp->p2 - 1; - if( pOp->p5 ) p->aCounter[pOp->p5-1]++; + assert( pOp->p5<ArraySize(p->aCounter) ); + pC = p->apCsr[pOp->p1]; + res = pOp->p3; + assert( pC!=0 ); + assert( pC->deferredMoveto==0 ); + assert( pC->eCurType==CURTYPE_BTREE ); + assert( res==0 || (res==1 && pC->isTable==0) ); + testcase( res==1 ); + assert( pOp->opcode!=OP_Next || pOp->p4.xAdvance==sqlite3BtreeNext ); + assert( pOp->opcode!=OP_Prev || pOp->p4.xAdvance==sqlite3BtreePrevious ); + assert( pOp->opcode!=OP_NextIfOpen || pOp->p4.xAdvance==sqlite3BtreeNext ); + assert( pOp->opcode!=OP_PrevIfOpen || pOp->p4.xAdvance==sqlite3BtreePrevious); + + /* The Next opcode is only used after SeekGT, SeekGE, and Rewind. + ** The Prev opcode is only used after SeekLT, SeekLE, and Last. */ + assert( pOp->opcode!=OP_Next || pOp->opcode!=OP_NextIfOpen + || pC->seekOp==OP_SeekGT || pC->seekOp==OP_SeekGE + || pC->seekOp==OP_Rewind || pC->seekOp==OP_Found); + assert( pOp->opcode!=OP_Prev || pOp->opcode!=OP_PrevIfOpen + || pC->seekOp==OP_SeekLT || pC->seekOp==OP_SeekLE + || pC->seekOp==OP_Last ); + + rc = pOp->p4.xAdvance(pC->uc.pCursor, &res); +next_tail: + pC->cacheStatus = CACHE_STALE; + VdbeBranchTaken(res==0,2); + if( res==0 ){ + pC->nullRow = 0; + p->aCounter[pOp->p5]++; #ifdef SQLITE_TEST sqlite3_search_count++; #endif + goto jump_to_p2_and_check_for_interrupt; + }else{ + pC->nullRow = 1; } - u.bm.pC->rowidIsValid = 0; - break; + goto check_for_interrupt; } /* Opcode: IdxInsert P1 P2 P3 * P5 +** Synopsis: key=r[P2] ** -** Register P2 holds a SQL index key made using the +** Register P2 holds an SQL index key made using the ** MakeRecord instructions. This opcode writes that key ** into the index P1. Data for the entry is nil. ** ** P3 is a flag that provides a hint to the b-tree layer that this ** insert is likely to be an append. +** +** If P5 has the OPFLAG_NCHANGE bit set, then the change counter is +** incremented by this instruction. If the OPFLAG_NCHANGE bit is clear, +** then the change counter is unchanged. +** +** If P5 has the OPFLAG_USESEEKRESULT bit set, then the cursor must have +** just done a seek to the spot where the new entry is to be inserted. +** This flag avoids doing an extra seek. ** ** This instruction only works for indices. The equivalent instruction ** for tables is OP_Insert. */ +case OP_SorterInsert: /* in2 */ case OP_IdxInsert: { /* in2 */ -#if 0 /* local variables moved into u.bn */ VdbeCursor *pC; - BtCursor *pCrsr; int nKey; const char *zKey; -#endif /* local variables moved into u.bn */ assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - u.bn.pC = p->apCsr[pOp->p1]; - assert( u.bn.pC!=0 ); + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); + assert( isSorter(pC)==(pOp->opcode==OP_SorterInsert) ); pIn2 = &aMem[pOp->p2]; assert( pIn2->flags & MEM_Blob ); - u.bn.pCrsr = u.bn.pC->pCursor; - if( ALWAYS(u.bn.pCrsr!=0) ){ - assert( u.bn.pC->isTable==0 ); - rc = ExpandBlob(pIn2); - if( rc==SQLITE_OK ){ - u.bn.nKey = pIn2->n; - u.bn.zKey = pIn2->z; - rc = sqlite3BtreeInsert(u.bn.pCrsr, u.bn.zKey, u.bn.nKey, "", 0, 0, pOp->p3, - ((pOp->p5 & OPFLAG_USESEEKRESULT) ? 
u.bn.pC->seekResult : 0) - ); - assert( u.bn.pC->deferredMoveto==0 ); - u.bn.pC->cacheStatus = CACHE_STALE; + if( pOp->p5 & OPFLAG_NCHANGE ) p->nChange++; + assert( pC->eCurType==CURTYPE_BTREE || pOp->opcode==OP_SorterInsert ); + assert( pC->isTable==0 ); + rc = ExpandBlob(pIn2); + if( rc==SQLITE_OK ){ + if( pOp->opcode==OP_SorterInsert ){ + rc = sqlite3VdbeSorterWrite(pC, pIn2); + }else{ + nKey = pIn2->n; + zKey = pIn2->z; + rc = sqlite3BtreeInsert(pC->uc.pCursor, zKey, nKey, "", 0, 0, pOp->p3, + ((pOp->p5 & OPFLAG_USESEEKRESULT) ? pC->seekResult : 0) + ); + assert( pC->deferredMoveto==0 ); + pC->cacheStatus = CACHE_STALE; } } break; } /* Opcode: IdxDelete P1 P2 P3 * * +** Synopsis: key=r[P2@P3] ** ** The content of P3 registers starting at register P2 form ** an unpacked index key. This opcode removes that entry from the ** index opened by cursor P1. */ case OP_IdxDelete: { -#if 0 /* local variables moved into u.bo */ VdbeCursor *pC; BtCursor *pCrsr; int res; UnpackedRecord r; -#endif /* local variables moved into u.bo */ assert( pOp->p3>0 ); - assert( pOp->p2>0 && pOp->p2+pOp->p3<=p->nMem+1 ); + assert( pOp->p2>0 && pOp->p2+pOp->p3<=(p->nMem-p->nCursor)+1 ); assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - u.bo.pC = p->apCsr[pOp->p1]; - assert( u.bo.pC!=0 ); - u.bo.pCrsr = u.bo.pC->pCursor; - if( ALWAYS(u.bo.pCrsr!=0) ){ - u.bo.r.pKeyInfo = u.bo.pC->pKeyInfo; - u.bo.r.nField = (u16)pOp->p3; - u.bo.r.flags = 0; - u.bo.r.aMem = &aMem[pOp->p2]; - rc = sqlite3BtreeMovetoUnpacked(u.bo.pCrsr, &u.bo.r, 0, 0, &u.bo.res); - if( rc==SQLITE_OK && u.bo.res==0 ){ - rc = sqlite3BtreeDelete(u.bo.pCrsr); - } - assert( u.bo.pC->deferredMoveto==0 ); - u.bo.pC->cacheStatus = CACHE_STALE; - } + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); + assert( pC->eCurType==CURTYPE_BTREE ); + pCrsr = pC->uc.pCursor; + assert( pCrsr!=0 ); + assert( pOp->p5==0 ); + r.pKeyInfo = pC->pKeyInfo; + r.nField = (u16)pOp->p3; + r.default_rc = 0; + r.aMem = &aMem[pOp->p2]; + rc = sqlite3BtreeMovetoUnpacked(pCrsr, &r, 0, 0, &res); + if( rc==SQLITE_OK && res==0 ){ + rc = sqlite3BtreeDelete(pCrsr, BTREE_AUXDELETE); + } + assert( pC->deferredMoveto==0 ); + pC->cacheStatus = CACHE_STALE; break; } +/* Opcode: Seek P1 * P3 P4 * +** Synopsis: Move P3 to P1.rowid +** +** P1 is an open index cursor and P3 is a cursor on the corresponding +** table. This opcode does a deferred seek of the P3 table cursor +** to the row that corresponds to the current row of P1. +** +** This is a deferred seek. Nothing actually happens until +** the cursor is used to read a record. That way, if no reads +** occur, no unnecessary I/O happens. +** +** P4 may be an array of integers (type P4_INTARRAY) containing +** one entry for each column in the P3 table. If array entry a(i) +** is non-zero, then reading column a(i)-1 from cursor P3 is +** equivalent to performing the deferred seek and then reading column i +** from P1. This information is stored in P3 and used to redirect +** reads against P3 over to P1, thus possibly avoiding the need to +** seek and read cursor P3. +*/ /* Opcode: IdxRowid P1 P2 * * * +** Synopsis: r[P2]=rowid ** ** Write into register P2 an integer which is the last entry in the record at ** the end of the index key pointed to by cursor P1. This integer should be ** the rowid of the table entry to which this index entry points. ** ** See also: Rowid, MakeRecord. 
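**
** As a sketch (details vary by release; the table and index names are
** made up), given an index i1 on t1(a), a query such as
**
**        SELECT b FROM t1 WHERE a=25;
**
** typically scans i1 for entries with a=25 and then uses the rowid stored
** at the end of each matching index entry, obtained via IdxRowid or a
** deferred Seek, to fetch column b from the t1 table itself.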
*/ -case OP_IdxRowid: { /* out2-prerelease */ -#if 0 /* local variables moved into u.bp */ - BtCursor *pCrsr; - VdbeCursor *pC; - i64 rowid; -#endif /* local variables moved into u.bp */ +case OP_Seek: +case OP_IdxRowid: { /* out2 */ + VdbeCursor *pC; /* The P1 index cursor */ + VdbeCursor *pTabCur; /* The P2 table cursor (OP_Seek only) */ + i64 rowid; /* Rowid that P1 current points to */ assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - u.bp.pC = p->apCsr[pOp->p1]; - assert( u.bp.pC!=0 ); - u.bp.pCrsr = u.bp.pC->pCursor; - pOut->flags = MEM_Null; - if( ALWAYS(u.bp.pCrsr!=0) ){ - rc = sqlite3VdbeCursorMoveto(u.bp.pC); - if( NEVER(rc) ) goto abort_due_to_error; - assert( u.bp.pC->deferredMoveto==0 ); - assert( u.bp.pC->isTable==0 ); - if( !u.bp.pC->nullRow ){ - rc = sqlite3VdbeIdxRowid(db, u.bp.pCrsr, &u.bp.rowid); - if( rc!=SQLITE_OK ){ - goto abort_due_to_error; - } - pOut->u.i = u.bp.rowid; + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); + assert( pC->eCurType==CURTYPE_BTREE ); + assert( pC->uc.pCursor!=0 ); + assert( pC->isTable==0 ); + assert( pC->deferredMoveto==0 ); + assert( !pC->nullRow || pOp->opcode==OP_IdxRowid ); + + /* The IdxRowid and Seek opcodes are combined because of the commonality + ** of sqlite3VdbeCursorRestore() and sqlite3VdbeIdxRowid(). */ + rc = sqlite3VdbeCursorRestore(pC); + + /* sqlite3VbeCursorRestore() can only fail if the record has been deleted + ** out from under the cursor. That will never happens for an IdxRowid + ** or Seek opcode */ + if( NEVER(rc!=SQLITE_OK) ) goto abort_due_to_error; + + if( !pC->nullRow ){ + rowid = 0; /* Not needed. Only used to silence a warning. */ + rc = sqlite3VdbeIdxRowid(db, pC->uc.pCursor, &rowid); + if( rc!=SQLITE_OK ){ + goto abort_due_to_error; + } + if( pOp->opcode==OP_Seek ){ + assert( pOp->p3>=0 && pOp->p3<p->nCursor ); + pTabCur = p->apCsr[pOp->p3]; + assert( pTabCur!=0 ); + assert( pTabCur->eCurType==CURTYPE_BTREE ); + assert( pTabCur->uc.pCursor!=0 ); + assert( pTabCur->isTable ); + pTabCur->nullRow = 0; + pTabCur->movetoTarget = rowid; + pTabCur->deferredMoveto = 1; + assert( pOp->p4type==P4_INTARRAY || pOp->p4.ai==0 ); + pTabCur->aAltMap = pOp->p4.ai; + pTabCur->pAltCursor = pC; + }else{ + pOut = out2Prerelease(p, pOp); + pOut->u.i = rowid; pOut->flags = MEM_Int; } + }else{ + assert( pOp->opcode==OP_IdxRowid ); + sqlite3VdbeMemSetNull(&aMem[pOp->p2]); } break; } /* Opcode: IdxGE P1 P2 P3 P4 P5 +** Synopsis: key=r[P3@P4] ** ** The P4 register values beginning with P3 form an unpacked index -** key that omits the ROWID. Compare this key value against the index -** that P1 is currently pointing to, ignoring the ROWID on the P1 index. +** key that omits the PRIMARY KEY. Compare this key value against the index +** that P1 is currently pointing to, ignoring the PRIMARY KEY or ROWID +** fields at the end. ** ** If the P1 index entry is greater than or equal to the key value ** then jump to P2. Otherwise fall through to the next instruction. +*/ +/* Opcode: IdxGT P1 P2 P3 P4 P5 +** Synopsis: key=r[P3@P4] ** -** If P5 is non-zero then the key value is increased by an epsilon -** prior to the comparison. This make the opcode work like IdxGT except -** that if the key from register P3 is a prefix of the key in the cursor, -** the result is false whereas it would be true with IdxGT. +** The P4 register values beginning with P3 form an unpacked index +** key that omits the PRIMARY KEY. Compare this key value against the index +** that P1 is currently pointing to, ignoring the PRIMARY KEY or ROWID +** fields at the end. 
+** +** If the P1 index entry is greater than the key value +** then jump to P2. Otherwise fall through to the next instruction. */ -/* Opcode: IdxLT P1 P2 P3 * P5 +/* Opcode: IdxLT P1 P2 P3 P4 P5 +** Synopsis: key=r[P3@P4] ** ** The P4 register values beginning with P3 form an unpacked index -** key that omits the ROWID. Compare this key value against the index -** that P1 is currently pointing to, ignoring the ROWID on the P1 index. +** key that omits the PRIMARY KEY or ROWID. Compare this key value against +** the index that P1 is currently pointing to, ignoring the PRIMARY KEY or +** ROWID on the P1 index. ** ** If the P1 index entry is less than the key value then jump to P2. ** Otherwise fall through to the next instruction. +*/ +/* Opcode: IdxLE P1 P2 P3 P4 P5 +** Synopsis: key=r[P3@P4] ** -** If P5 is non-zero then the key value is increased by an epsilon prior -** to the comparison. This makes the opcode work like IdxLE. +** The P4 register values beginning with P3 form an unpacked index +** key that omits the PRIMARY KEY or ROWID. Compare this key value against +** the index that P1 is currently pointing to, ignoring the PRIMARY KEY or +** ROWID on the P1 index. +** +** If the P1 index entry is less than or equal to the key value then jump +** to P2. Otherwise fall through to the next instruction. */ +case OP_IdxLE: /* jump */ +case OP_IdxGT: /* jump */ case OP_IdxLT: /* jump */ -case OP_IdxGE: { /* jump */ -#if 0 /* local variables moved into u.bq */ +case OP_IdxGE: { /* jump */ VdbeCursor *pC; int res; UnpackedRecord r; -#endif /* local variables moved into u.bq */ assert( pOp->p1>=0 && pOp->p1<p->nCursor ); - u.bq.pC = p->apCsr[pOp->p1]; - assert( u.bq.pC!=0 ); - if( ALWAYS(u.bq.pC->pCursor!=0) ){ - assert( u.bq.pC->deferredMoveto==0 ); - assert( pOp->p5==0 || pOp->p5==1 ); - assert( pOp->p4type==P4_INT32 ); - u.bq.r.pKeyInfo = u.bq.pC->pKeyInfo; - u.bq.r.nField = (u16)pOp->p4.i; - if( pOp->p5 ){ - u.bq.r.flags = UNPACKED_INCRKEY | UNPACKED_IGNORE_ROWID; - }else{ - u.bq.r.flags = UNPACKED_IGNORE_ROWID; - } - u.bq.r.aMem = &aMem[pOp->p3]; - rc = sqlite3VdbeIdxKeyCompare(u.bq.pC, &u.bq.r, &u.bq.res); - if( pOp->opcode==OP_IdxLT ){ - u.bq.res = -u.bq.res; - }else{ - assert( pOp->opcode==OP_IdxGE ); - u.bq.res++; - } - if( u.bq.res>0 ){ - pc = pOp->p2 - 1 ; - } - } + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); + assert( pC->isOrdered ); + assert( pC->eCurType==CURTYPE_BTREE ); + assert( pC->uc.pCursor!=0); + assert( pC->deferredMoveto==0 ); + assert( pOp->p5==0 || pOp->p5==1 ); + assert( pOp->p4type==P4_INT32 ); + r.pKeyInfo = pC->pKeyInfo; + r.nField = (u16)pOp->p4.i; + if( pOp->opcode<OP_IdxLT ){ + assert( pOp->opcode==OP_IdxLE || pOp->opcode==OP_IdxGT ); + r.default_rc = -1; + }else{ + assert( pOp->opcode==OP_IdxGE || pOp->opcode==OP_IdxLT ); + r.default_rc = 0; + } + r.aMem = &aMem[pOp->p3]; +#ifdef SQLITE_DEBUG + { int i; for(i=0; i<r.nField; i++) assert( memIsValid(&r.aMem[i]) ); } +#endif + res = 0; /* Not needed. Only used to silence a warning. */ + rc = sqlite3VdbeIdxKeyCompare(db, pC, &r, &res); + assert( (OP_IdxLE&1)==(OP_IdxLT&1) && (OP_IdxGE&1)==(OP_IdxGT&1) ); + if( (pOp->opcode&1)==(OP_IdxLT&1) ){ + assert( pOp->opcode==OP_IdxLE || pOp->opcode==OP_IdxLT ); + res = -res; + }else{ + assert( pOp->opcode==OP_IdxGE || pOp->opcode==OP_IdxGT ); + res++; + } + VdbeBranchTaken(res>0,2); + if( res>0 ) goto jump_to_p2; break; } /* Opcode: Destroy P1 P2 P3 * * ** @@ -58087,42 +79354,34 @@ ** the last one in the database) then a zero is stored in register P2. 
** If AUTOVACUUM is disabled then a zero is stored in register P2. ** ** See also: Clear */ -case OP_Destroy: { /* out2-prerelease */ -#if 0 /* local variables moved into u.br */ +case OP_Destroy: { /* out2 */ int iMoved; - int iCnt; - Vdbe *pVdbe; int iDb; -#endif /* local variables moved into u.br */ -#ifndef SQLITE_OMIT_VIRTUALTABLE - u.br.iCnt = 0; - for(u.br.pVdbe=db->pVdbe; u.br.pVdbe; u.br.pVdbe = u.br.pVdbe->pNext){ - if( u.br.pVdbe->magic==VDBE_MAGIC_RUN && u.br.pVdbe->inVtabMethod<2 && u.br.pVdbe->pc>=0 ){ - u.br.iCnt++; - } - } -#else - u.br.iCnt = db->activeVdbeCnt; -#endif + + assert( p->readOnly==0 ); + assert( pOp->p1>1 ); + pOut = out2Prerelease(p, pOp); pOut->flags = MEM_Null; - if( u.br.iCnt>1 ){ + if( db->nVdbeRead > db->nVDestroy+1 ){ rc = SQLITE_LOCKED; p->errorAction = OE_Abort; }else{ - u.br.iDb = pOp->p3; - assert( u.br.iCnt==1 ); - assert( (p->btreeMask & (1<<u.br.iDb))!=0 ); - rc = sqlite3BtreeDropTable(db->aDb[u.br.iDb].pBt, pOp->p1, &u.br.iMoved); + iDb = pOp->p3; + assert( DbMaskTest(p->btreeMask, iDb) ); + iMoved = 0; /* Not needed. Only to silence a warning. */ + rc = sqlite3BtreeDropTable(db->aDb[iDb].pBt, pOp->p1, &iMoved); pOut->flags = MEM_Int; - pOut->u.i = u.br.iMoved; + pOut->u.i = iMoved; #ifndef SQLITE_OMIT_AUTOVACUUM - if( rc==SQLITE_OK && u.br.iMoved!=0 ){ - sqlite3RootPageMoved(&db->aDb[u.br.iDb], u.br.iMoved, pOp->p1); - resetSchemaOnFault = 1; + if( rc==SQLITE_OK && iMoved!=0 ){ + sqlite3RootPageMoved(db, iDb, iMoved, pOp->p1); + /* All OP_Destroy operations occur on the same btree */ + assert( resetSchemaOnFault==0 || resetSchemaOnFault==iDb+1 ); + resetSchemaOnFault = iDb+1; } #endif } break; } @@ -58144,29 +79403,55 @@ ** also incremented by the number of rows in the table being cleared. ** ** See also: Destroy */ case OP_Clear: { -#if 0 /* local variables moved into u.bs */ int nChange; -#endif /* local variables moved into u.bs */ - - u.bs.nChange = 0; - assert( (p->btreeMask & (1<<pOp->p2))!=0 ); + + nChange = 0; + assert( p->readOnly==0 ); + assert( DbMaskTest(p->btreeMask, pOp->p2) ); rc = sqlite3BtreeClearTable( - db->aDb[pOp->p2].pBt, pOp->p1, (pOp->p3 ? &u.bs.nChange : 0) + db->aDb[pOp->p2].pBt, pOp->p1, (pOp->p3 ? &nChange : 0) ); if( pOp->p3 ){ - p->nChange += u.bs.nChange; + p->nChange += nChange; if( pOp->p3>0 ){ - aMem[pOp->p3].u.i += u.bs.nChange; + assert( memIsValid(&aMem[pOp->p3]) ); + memAboutToChange(p, &aMem[pOp->p3]); + aMem[pOp->p3].u.i += nChange; } } break; } + +/* Opcode: ResetSorter P1 * * * * +** +** Delete all contents from the ephemeral table or sorter +** that is open on cursor P1. +** +** This opcode only works for cursors used for sorting and +** opened with OP_OpenEphemeral or OP_SorterOpen. +*/ +case OP_ResetSorter: { + VdbeCursor *pC; + + assert( pOp->p1>=0 && pOp->p1<p->nCursor ); + pC = p->apCsr[pOp->p1]; + assert( pC!=0 ); + if( isSorter(pC) ){ + sqlite3VdbeSorterReset(db, pC->uc.pSorter); + }else{ + assert( pC->eCurType==CURTYPE_BTREE ); + assert( pC->isEphemeral ); + rc = sqlite3BtreeClearTableOfCursor(pC->uc.pCursor); + } + break; +} /* Opcode: CreateTable P1 P2 * * * +** Synopsis: r[P2]=root iDb=P1 ** ** Allocate a new table in the main database file if P1==0 or in the ** auxiliary database file if P1==1 or in an attached database if ** P1>1. Write the root page number of the new table into ** register P2 @@ -58176,114 +79461,96 @@ ** has an arbitrary key but no data. 
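**
** As a sketch (the statement is illustrative only), running
**
**        CREATE TABLE t1(a, b);
**
** causes a CreateTable instruction to be emitted; the root page number it
** leaves in register P2 is then written into the new sqlite_master row
** that records the table.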
** ** See also: CreateIndex */ /* Opcode: CreateIndex P1 P2 * * * +** Synopsis: r[P2]=root iDb=P1 ** ** Allocate a new index in the main database file if P1==0 or in the ** auxiliary database file if P1==1 or in an attached database if ** P1>1. Write the root page number of the new table into ** register P2. ** ** See documentation on OP_CreateTable for additional information. */ -case OP_CreateIndex: /* out2-prerelease */ -case OP_CreateTable: { /* out2-prerelease */ -#if 0 /* local variables moved into u.bt */ +case OP_CreateIndex: /* out2 */ +case OP_CreateTable: { /* out2 */ int pgno; int flags; Db *pDb; -#endif /* local variables moved into u.bt */ - u.bt.pgno = 0; + pOut = out2Prerelease(p, pOp); + pgno = 0; assert( pOp->p1>=0 && pOp->p1<db->nDb ); - assert( (p->btreeMask & (1<<pOp->p1))!=0 ); - u.bt.pDb = &db->aDb[pOp->p1]; - assert( u.bt.pDb->pBt!=0 ); + assert( DbMaskTest(p->btreeMask, pOp->p1) ); + assert( p->readOnly==0 ); + pDb = &db->aDb[pOp->p1]; + assert( pDb->pBt!=0 ); if( pOp->opcode==OP_CreateTable ){ - /* u.bt.flags = BTREE_INTKEY; */ - u.bt.flags = BTREE_LEAFDATA|BTREE_INTKEY; + /* flags = BTREE_INTKEY; */ + flags = BTREE_INTKEY; }else{ - u.bt.flags = BTREE_ZERODATA; + flags = BTREE_BLOBKEY; } - rc = sqlite3BtreeCreateTable(u.bt.pDb->pBt, &u.bt.pgno, u.bt.flags); - pOut->u.i = u.bt.pgno; + rc = sqlite3BtreeCreateTable(pDb->pBt, &pgno, flags); + pOut->u.i = pgno; break; } -/* Opcode: ParseSchema P1 P2 * P4 * +/* Opcode: ParseSchema P1 * * P4 * ** ** Read and parse all entries from the SQLITE_MASTER table of database P1 -** that match the WHERE clause P4. P2 is the "force" flag. Always do -** the parsing if P2 is true. If P2 is false, then this routine is a -** no-op if the schema is not currently loaded. In other words, if P2 -** is false, the SQLITE_MASTER table is only parsed if the rest of the -** schema is already loaded into the symbol table. +** that match the WHERE clause P4. ** ** This opcode invokes the parser to create a new virtual machine, ** then runs the new virtual machine. It is thus a re-entrant opcode. */ case OP_ParseSchema: { -#if 0 /* local variables moved into u.bu */ int iDb; const char *zMaster; char *zSql; InitData initData; -#endif /* local variables moved into u.bu */ - - u.bu.iDb = pOp->p1; - assert( u.bu.iDb>=0 && u.bu.iDb<db->nDb ); - - /* If pOp->p2 is 0, then this opcode is being executed to read a - ** single row, for example the row corresponding to a new index - ** created by this VDBE, from the sqlite_master table. It only - ** does this if the corresponding in-memory schema is currently - ** loaded. Otherwise, the new index definition can be loaded along - ** with the rest of the schema when it is required. - ** - ** Although the mutex on the BtShared object that corresponds to - ** database u.bu.iDb (the database containing the sqlite_master table - ** read by this instruction) is currently held, it is necessary to - ** obtain the mutexes on all attached databases before checking if - ** the schema of u.bu.iDb is loaded. This is because, at the start of - ** the sqlite3_exec() call below, SQLite will invoke - ** sqlite3BtreeEnterAll(). If all mutexes are not already held, the - ** u.bu.iDb mutex may be temporarily released to avoid deadlock. If - ** this happens, then some other thread may delete the in-memory - ** schema of database u.bu.iDb before the SQL statement runs. The schema - ** will not be reloaded becuase the db->init.busy flag is set. 
This - ** can result in a "no such table: sqlite_master" or "malformed - ** database schema" error being returned to the user. - */ - assert( sqlite3BtreeHoldsMutex(db->aDb[u.bu.iDb].pBt) ); - sqlite3BtreeEnterAll(db); - if( pOp->p2 || DbHasProperty(db, u.bu.iDb, DB_SchemaLoaded) ){ - u.bu.zMaster = SCHEMA_TABLE(u.bu.iDb); - u.bu.initData.db = db; - u.bu.initData.iDb = pOp->p1; - u.bu.initData.pzErrMsg = &p->zErrMsg; - u.bu.zSql = sqlite3MPrintf(db, + + /* Any prepared statement that invokes this opcode will hold mutexes + ** on every btree. This is a prerequisite for invoking + ** sqlite3InitCallback(). + */ +#ifdef SQLITE_DEBUG + for(iDb=0; iDb<db->nDb; iDb++){ + assert( iDb==1 || sqlite3BtreeHoldsMutex(db->aDb[iDb].pBt) ); + } +#endif + + iDb = pOp->p1; + assert( iDb>=0 && iDb<db->nDb ); + assert( DbHasProperty(db, iDb, DB_SchemaLoaded) ); + /* Used to be a conditional */ { + zMaster = SCHEMA_TABLE(iDb); + initData.db = db; + initData.iDb = pOp->p1; + initData.pzErrMsg = &p->zErrMsg; + zSql = sqlite3MPrintf(db, "SELECT name, rootpage, sql FROM '%q'.%s WHERE %s ORDER BY rowid", - db->aDb[u.bu.iDb].zName, u.bu.zMaster, pOp->p4.z); - if( u.bu.zSql==0 ){ + db->aDb[iDb].zName, zMaster, pOp->p4.z); + if( zSql==0 ){ rc = SQLITE_NOMEM; }else{ assert( db->init.busy==0 ); db->init.busy = 1; - u.bu.initData.rc = SQLITE_OK; + initData.rc = SQLITE_OK; assert( !db->mallocFailed ); - rc = sqlite3_exec(db, u.bu.zSql, sqlite3InitCallback, &u.bu.initData, 0); - if( rc==SQLITE_OK ) rc = u.bu.initData.rc; - sqlite3DbFree(db, u.bu.zSql); + rc = sqlite3_exec(db, zSql, sqlite3InitCallback, &initData, 0); + if( rc==SQLITE_OK ) rc = initData.rc; + sqlite3DbFree(db, zSql); db->init.busy = 0; } } - sqlite3BtreeLeaveAll(db); + if( rc ) sqlite3ResetAllSchemasOfConnection(db); if( rc==SQLITE_NOMEM ){ goto no_mem; } - break; + break; } #if !defined(SQLITE_OMIT_ANALYZE) /* Opcode: LoadAnalysis P1 * * * * ** @@ -58300,11 +79567,12 @@ /* Opcode: DropTable P1 * * P4 * ** ** Remove the internal (in-memory) data structures that describe ** the table named P4 in database P1. This is called after a table -** is dropped in order to keep the internal representation of the +** is dropped from disk (using the Destroy opcode) in order to keep +** the internal representation of the ** schema consistent with what is on disk. */ case OP_DropTable: { sqlite3UnlinkAndDeleteTable(db, pOp->p1, pOp->p4.z); break; @@ -58312,11 +79580,12 @@ /* Opcode: DropIndex P1 * * P4 * ** ** Remove the internal (in-memory) data structures that describe ** the index named P4 in database P1. This is called after an index -** is dropped in order to keep the internal representation of the +** is dropped from disk (using the Destroy opcode) +** in order to keep the internal representation of the ** schema consistent with what is on disk. */ case OP_DropIndex: { sqlite3UnlinkAndDeleteIndex(db, pOp->p1, pOp->p4.z); break; @@ -58324,11 +79593,12 @@ /* Opcode: DropTrigger P1 * * P4 * ** ** Remove the internal (in-memory) data structures that describe ** the trigger named P4 in database P1. This is called after a trigger -** is dropped in order to keep the internal representation of the +** is dropped from disk (using the Destroy opcode) in order to keep +** the internal representation of the ** schema consistent with what is on disk. */ case OP_DropTrigger: { sqlite3UnlinkAndDeleteTrigger(db, pOp->p1, pOp->p4.z); break; @@ -58355,53 +79625,53 @@ ** file, not the main database file. ** ** This opcode is used to implement the integrity_check pragma. 
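**
** For example, this is the opcode that ultimately runs when an
** application executes
**
**        PRAGMA integrity_check;
**
** for instance through sqlite3_exec().  The messages it produces come
** back to the caller as query results, with the single value "ok"
** reported when no problems are found.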
*/ case OP_IntegrityCk: { -#if 0 /* local variables moved into u.bv */ int nRoot; /* Number of tables to check. (Number of root pages.) */ int *aRoot; /* Array of rootpage numbers for tables to be checked */ int j; /* Loop counter */ int nErr; /* Number of errors reported */ char *z; /* Text of the error report */ Mem *pnErr; /* Register keeping track of errors remaining */ -#endif /* local variables moved into u.bv */ - - u.bv.nRoot = pOp->p2; - assert( u.bv.nRoot>0 ); - u.bv.aRoot = sqlite3DbMallocRaw(db, sizeof(int)*(u.bv.nRoot+1) ); - if( u.bv.aRoot==0 ) goto no_mem; - assert( pOp->p3>0 && pOp->p3<=p->nMem ); - u.bv.pnErr = &aMem[pOp->p3]; - assert( (u.bv.pnErr->flags & MEM_Int)!=0 ); - assert( (u.bv.pnErr->flags & (MEM_Str|MEM_Blob))==0 ); + + assert( p->bIsReader ); + nRoot = pOp->p2; + assert( nRoot>0 ); + aRoot = sqlite3DbMallocRawNN(db, sizeof(int)*(nRoot+1) ); + if( aRoot==0 ) goto no_mem; + assert( pOp->p3>0 && pOp->p3<=(p->nMem-p->nCursor) ); + pnErr = &aMem[pOp->p3]; + assert( (pnErr->flags & MEM_Int)!=0 ); + assert( (pnErr->flags & (MEM_Str|MEM_Blob))==0 ); pIn1 = &aMem[pOp->p1]; - for(u.bv.j=0; u.bv.j<u.bv.nRoot; u.bv.j++){ - u.bv.aRoot[u.bv.j] = (int)sqlite3VdbeIntValue(&pIn1[u.bv.j]); + for(j=0; j<nRoot; j++){ + aRoot[j] = (int)sqlite3VdbeIntValue(&pIn1[j]); } - u.bv.aRoot[u.bv.j] = 0; + aRoot[j] = 0; assert( pOp->p5<db->nDb ); - assert( (p->btreeMask & (1<<pOp->p5))!=0 ); - u.bv.z = sqlite3BtreeIntegrityCheck(db->aDb[pOp->p5].pBt, u.bv.aRoot, u.bv.nRoot, - (int)u.bv.pnErr->u.i, &u.bv.nErr); - sqlite3DbFree(db, u.bv.aRoot); - u.bv.pnErr->u.i -= u.bv.nErr; + assert( DbMaskTest(p->btreeMask, pOp->p5) ); + z = sqlite3BtreeIntegrityCheck(db->aDb[pOp->p5].pBt, aRoot, nRoot, + (int)pnErr->u.i, &nErr); + sqlite3DbFree(db, aRoot); + pnErr->u.i -= nErr; sqlite3VdbeMemSetNull(pIn1); - if( u.bv.nErr==0 ){ - assert( u.bv.z==0 ); - }else if( u.bv.z==0 ){ + if( nErr==0 ){ + assert( z==0 ); + }else if( z==0 ){ goto no_mem; }else{ - sqlite3VdbeMemSetStr(pIn1, u.bv.z, -1, SQLITE_UTF8, sqlite3_free); + sqlite3VdbeMemSetStr(pIn1, z, -1, SQLITE_UTF8, sqlite3_free); } UPDATE_MAX_BLOBSIZE(pIn1); sqlite3VdbeChangeEncoding(pIn1, encoding); break; } #endif /* SQLITE_OMIT_INTEGRITY_CHECK */ /* Opcode: RowSetAdd P1 P2 * * * +** Synopsis: rowset(P1)=r[P2] ** ** Insert the integer value held by register P2 into a boolean index ** held in register P1. ** ** An assertion fails if P2 is not an integer. @@ -58417,35 +79687,37 @@ sqlite3RowSetInsert(pIn1->u.pRowSet, pIn2->u.i); break; } /* Opcode: RowSetRead P1 P2 P3 * * +** Synopsis: r[P3]=rowset(P1) ** ** Extract the smallest value from boolean index P1 and put that value into ** register P3. Or, if boolean index P1 is initially empty, leave P3 ** unchanged and jump to instruction P2. 
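**
** As a sketch (code generation details vary by release; the statement is
** illustrative), a statement such as
**
**        DELETE FROM t1 WHERE <complex condition>;
**
** may first scan t1 collecting the rowids of matching rows with RowSetAdd
** and then run a second loop that pulls those rowids back out one at a
** time with RowSetRead, so that no row is deleted while the scan that
** locates the targets is still in progress.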
*/ case OP_RowSetRead: { /* jump, in1, out3 */ -#if 0 /* local variables moved into u.bw */ i64 val; -#endif /* local variables moved into u.bw */ - CHECK_FOR_INTERRUPT; + pIn1 = &aMem[pOp->p1]; - if( (pIn1->flags & MEM_RowSet)==0 - || sqlite3RowSetNext(pIn1->u.pRowSet, &u.bw.val)==0 + if( (pIn1->flags & MEM_RowSet)==0 + || sqlite3RowSetNext(pIn1->u.pRowSet, &val)==0 ){ /* The boolean index is empty */ sqlite3VdbeMemSetNull(pIn1); - pc = pOp->p2 - 1; + VdbeBranchTaken(1,2); + goto jump_to_p2_and_check_for_interrupt; }else{ /* A value was pulled from the index */ - sqlite3VdbeMemSetInt64(&aMem[pOp->p3], u.bw.val); + VdbeBranchTaken(0,2); + sqlite3VdbeMemSetInt64(&aMem[pOp->p3], val); } - break; + goto check_for_interrupt; } /* Opcode: RowSetTest P1 P2 P3 P4 +** Synopsis: if r[P3] in rowset(P1) goto P2 ** ** Register P3 is assumed to hold a 64-bit integer value. If register P1 ** contains a RowSet object and that RowSet object contains ** the value held in P3, jump to register P2. Otherwise, insert the ** integer in P3 into the RowSet and continue on to the @@ -58465,18 +79737,16 @@ ** inserted, there is no need to search to see if the same value was ** previously inserted as part of set X (only if it was previously ** inserted as part of some other set). */ case OP_RowSetTest: { /* jump, in1, in3 */ -#if 0 /* local variables moved into u.bx */ int iSet; int exists; -#endif /* local variables moved into u.bx */ pIn1 = &aMem[pOp->p1]; pIn3 = &aMem[pOp->p3]; - u.bx.iSet = pOp->p4.i; + iSet = pOp->p4.i; assert( pIn3->flags&MEM_Int ); /* If there is anything other than a rowset object in memory cell P1, ** delete it now and initialize P1 with an empty rowset */ @@ -58484,30 +79754,26 @@ sqlite3VdbeMemSetRowSet(pIn1); if( (pIn1->flags & MEM_RowSet)==0 ) goto no_mem; } assert( pOp->p4type==P4_INT32 ); - assert( u.bx.iSet==-1 || u.bx.iSet>=0 ); - if( u.bx.iSet ){ - u.bx.exists = sqlite3RowSetTest(pIn1->u.pRowSet, - (u8)(u.bx.iSet>=0 ? u.bx.iSet & 0xf : 0xff), - pIn3->u.i); - if( u.bx.exists ){ - pc = pOp->p2 - 1; - break; - } - } - if( u.bx.iSet>=0 ){ + assert( iSet==-1 || iSet>=0 ); + if( iSet ){ + exists = sqlite3RowSetTest(pIn1->u.pRowSet, iSet, pIn3->u.i); + VdbeBranchTaken(exists!=0,2); + if( exists ) goto jump_to_p2; + } + if( iSet>=0 ){ sqlite3RowSetInsert(pIn1->u.pRowSet, pIn3->u.i); } break; } #ifndef SQLITE_OMIT_TRIGGER -/* Opcode: Program P1 P2 P3 P4 * +/* Opcode: Program P1 P2 P3 P4 P5 ** ** Execute the trigger program passed as P4 (type P4_SUBPROGRAM). ** ** P1 contains the address of the memory cell that contains the first memory ** cell in an array of values used as arguments to the sub-program. P2 @@ -58515,109 +79781,122 @@ ** exception using the RAISE() function. Register P3 contains the address ** of a memory cell in this (the parent) VM that is used to allocate the ** memory required by the sub-vdbe at runtime. ** ** P4 is a pointer to the VM containing the trigger program. +** +** If P5 is non-zero, then recursive program invocation is enabled. 
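+**
+** As a sketch (the schema is illustrative), given
+**
+**        CREATE TRIGGER r1 AFTER INSERT ON t1 BEGIN
+**          INSERT INTO log VALUES(new.a);
+**        END;
+**
+** an INSERT into t1 is compiled with a Program instruction whose P4 holds
+** the trigger body as a sub-program; whether that body may in turn fire
+** triggers of its own is controlled by "PRAGMA recursive_triggers" via
+** the P5 flag described above.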
*/ case OP_Program: { /* jump */ -#if 0 /* local variables moved into u.by */ int nMem; /* Number of memory registers for sub-program */ int nByte; /* Bytes of runtime space required for sub-program */ Mem *pRt; /* Register to allocate runtime space */ Mem *pMem; /* Used to iterate through memory cells */ Mem *pEnd; /* Last memory cell in new array */ VdbeFrame *pFrame; /* New vdbe frame to execute in */ SubProgram *pProgram; /* Sub-program to execute */ void *t; /* Token identifying trigger */ -#endif /* local variables moved into u.by */ - u.by.pProgram = pOp->p4.pProgram; - u.by.pRt = &aMem[pOp->p3]; - assert( u.by.pProgram->nOp>0 ); - - /* If the p5 flag is clear, then recursive invocation of triggers is + pProgram = pOp->p4.pProgram; + pRt = &aMem[pOp->p3]; + assert( pProgram->nOp>0 ); + + /* If the p5 flag is clear, then recursive invocation of triggers is ** disabled for backwards compatibility (p5 is set if this sub-program ** is really a trigger, not a foreign key action, and the flag set ** and cleared by the "PRAGMA recursive_triggers" command is clear). - ** - ** It is recursive invocation of triggers, at the SQL level, that is - ** disabled. In some cases a single trigger may generate more than one - ** SubProgram (if the trigger may be executed with more than one different + ** + ** It is recursive invocation of triggers, at the SQL level, that is + ** disabled. In some cases a single trigger may generate more than one + ** SubProgram (if the trigger may be executed with more than one different ** ON CONFLICT algorithm). SubProgram structures associated with a - ** single trigger all have the same value for the SubProgram.token + ** single trigger all have the same value for the SubProgram.token ** variable. */ if( pOp->p5 ){ - u.by.t = u.by.pProgram->token; - for(u.by.pFrame=p->pFrame; u.by.pFrame && u.by.pFrame->token!=u.by.t; u.by.pFrame=u.by.pFrame->pParent); - if( u.by.pFrame ) break; + t = pProgram->token; + for(pFrame=p->pFrame; pFrame && pFrame->token!=t; pFrame=pFrame->pParent); + if( pFrame ) break; } if( p->nFrame>=db->aLimit[SQLITE_LIMIT_TRIGGER_DEPTH] ){ rc = SQLITE_ERROR; - sqlite3SetString(&p->zErrMsg, db, "too many levels of trigger recursion"); + sqlite3VdbeError(p, "too many levels of trigger recursion"); break; } - /* Register u.by.pRt is used to store the memory required to save the state + /* Register pRt is used to store the memory required to save the state ** of the current program, and the memory required at runtime to execute - ** the trigger program. If this trigger has been fired before, then u.by.pRt + ** the trigger program. If this trigger has been fired before, then pRt ** is already allocated. Otherwise, it must be initialized. */ - if( (u.by.pRt->flags&MEM_Frame)==0 ){ - /* SubProgram.nMem is set to the number of memory cells used by the + if( (pRt->flags&MEM_Frame)==0 ){ + /* SubProgram.nMem is set to the number of memory cells used by the ** program stored in SubProgram.aOp. As well as these, one memory ** cell is required for each cursor used by the program. Set local - ** variable u.by.nMem (and later, VdbeFrame.nChildMem) to this value. + ** variable nMem (and later, VdbeFrame.nChildMem) to this value. 
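** (Rough picture of the single allocation made below, following the nByte
** computation: a VdbeFrame header, then nMem Mem cells for the child
** registers, then nCsr VdbeCursor pointers, then the nOnce once-flag
** bytes.)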
*/ - u.by.nMem = u.by.pProgram->nMem + u.by.pProgram->nCsr; - u.by.nByte = ROUND8(sizeof(VdbeFrame)) - + u.by.nMem * sizeof(Mem) - + u.by.pProgram->nCsr * sizeof(VdbeCursor *); - u.by.pFrame = sqlite3DbMallocZero(db, u.by.nByte); - if( !u.by.pFrame ){ + nMem = pProgram->nMem + pProgram->nCsr; + nByte = ROUND8(sizeof(VdbeFrame)) + + nMem * sizeof(Mem) + + pProgram->nCsr * sizeof(VdbeCursor *) + + pProgram->nOnce * sizeof(u8); + pFrame = sqlite3DbMallocZero(db, nByte); + if( !pFrame ){ goto no_mem; } - sqlite3VdbeMemRelease(u.by.pRt); - u.by.pRt->flags = MEM_Frame; - u.by.pRt->u.pFrame = u.by.pFrame; - - u.by.pFrame->v = p; - u.by.pFrame->nChildMem = u.by.nMem; - u.by.pFrame->nChildCsr = u.by.pProgram->nCsr; - u.by.pFrame->pc = pc; - u.by.pFrame->aMem = p->aMem; - u.by.pFrame->nMem = p->nMem; - u.by.pFrame->apCsr = p->apCsr; - u.by.pFrame->nCursor = p->nCursor; - u.by.pFrame->aOp = p->aOp; - u.by.pFrame->nOp = p->nOp; - u.by.pFrame->token = u.by.pProgram->token; - - u.by.pEnd = &VdbeFrameMem(u.by.pFrame)[u.by.pFrame->nChildMem]; - for(u.by.pMem=VdbeFrameMem(u.by.pFrame); u.by.pMem!=u.by.pEnd; u.by.pMem++){ - u.by.pMem->flags = MEM_Null; - u.by.pMem->db = db; + sqlite3VdbeMemRelease(pRt); + pRt->flags = MEM_Frame; + pRt->u.pFrame = pFrame; + + pFrame->v = p; + pFrame->nChildMem = nMem; + pFrame->nChildCsr = pProgram->nCsr; + pFrame->pc = (int)(pOp - aOp); + pFrame->aMem = p->aMem; + pFrame->nMem = p->nMem; + pFrame->apCsr = p->apCsr; + pFrame->nCursor = p->nCursor; + pFrame->aOp = p->aOp; + pFrame->nOp = p->nOp; + pFrame->token = pProgram->token; + pFrame->aOnceFlag = p->aOnceFlag; + pFrame->nOnceFlag = p->nOnceFlag; +#ifdef SQLITE_ENABLE_STMT_SCANSTATUS + pFrame->anExec = p->anExec; +#endif + + pEnd = &VdbeFrameMem(pFrame)[pFrame->nChildMem]; + for(pMem=VdbeFrameMem(pFrame); pMem!=pEnd; pMem++){ + pMem->flags = MEM_Undefined; + pMem->db = db; } }else{ - u.by.pFrame = u.by.pRt->u.pFrame; - assert( u.by.pProgram->nMem+u.by.pProgram->nCsr==u.by.pFrame->nChildMem ); - assert( u.by.pProgram->nCsr==u.by.pFrame->nChildCsr ); - assert( pc==u.by.pFrame->pc ); + pFrame = pRt->u.pFrame; + assert( pProgram->nMem+pProgram->nCsr==pFrame->nChildMem ); + assert( pProgram->nCsr==pFrame->nChildCsr ); + assert( (int)(pOp - aOp)==pFrame->pc ); } p->nFrame++; - u.by.pFrame->pParent = p->pFrame; - u.by.pFrame->lastRowid = db->lastRowid; - u.by.pFrame->nChange = p->nChange; + pFrame->pParent = p->pFrame; + pFrame->lastRowid = lastRowid; + pFrame->nChange = p->nChange; + pFrame->nDbChange = p->db->nChange; p->nChange = 0; - p->pFrame = u.by.pFrame; - p->aMem = aMem = &VdbeFrameMem(u.by.pFrame)[-1]; - p->nMem = u.by.pFrame->nChildMem; - p->nCursor = (u16)u.by.pFrame->nChildCsr; + p->pFrame = pFrame; + p->aMem = aMem = &VdbeFrameMem(pFrame)[-1]; + p->nMem = pFrame->nChildMem; + p->nCursor = (u16)pFrame->nChildCsr; p->apCsr = (VdbeCursor **)&aMem[p->nMem+1]; - p->aOp = aOp = u.by.pProgram->aOp; - p->nOp = u.by.pProgram->nOp; - pc = -1; + p->aOp = aOp = pProgram->aOp; + p->nOp = pProgram->nOp; + p->aOnceFlag = (u8 *)&p->apCsr[p->nCursor]; + p->nOnceFlag = pProgram->nOnce; +#ifdef SQLITE_ENABLE_STMT_SCANSTATUS + p->anExec = 0; +#endif + pOp = &aOp[-1]; + memset(p->aOnceFlag, 0, p->nOnceFlag); break; } /* Opcode: Param P1 P2 * * * @@ -58630,41 +79909,44 @@ ** ** The address of the cell in the parent frame is determined by adding ** the value of the P1 argument to the value of the P1 argument to the ** calling OP_Program instruction. 
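**
** For example, if the Program instruction that created the current frame
** has P1==5 and this Param instruction has P1==2, then the value copied
** into register P2 is taken from register 7 (5+2) of the parent frame.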
*/ -case OP_Param: { /* out2-prerelease */ -#if 0 /* local variables moved into u.bz */ +case OP_Param: { /* out2 */ VdbeFrame *pFrame; Mem *pIn; -#endif /* local variables moved into u.bz */ - u.bz.pFrame = p->pFrame; - u.bz.pIn = &u.bz.pFrame->aMem[pOp->p1 + u.bz.pFrame->aOp[u.bz.pFrame->pc].p1]; - sqlite3VdbeMemShallowCopy(pOut, u.bz.pIn, MEM_Ephem); + pOut = out2Prerelease(p, pOp); + pFrame = p->pFrame; + pIn = &pFrame->aMem[pOp->p1 + pFrame->aOp[pFrame->pc].p1]; + sqlite3VdbeMemShallowCopy(pOut, pIn, MEM_Ephem); break; } #endif /* #ifndef SQLITE_OMIT_TRIGGER */ #ifndef SQLITE_OMIT_FOREIGN_KEY /* Opcode: FkCounter P1 P2 * * * +** Synopsis: fkctr[P1]+=P2 ** ** Increment a "constraint counter" by P2 (P2 may be negative or positive). ** If P1 is non-zero, the database constraint counter is incremented ** (deferred foreign key constraints). Otherwise, if P1 is zero, the ** statement counter is incremented (immediate foreign key constraints). */ case OP_FkCounter: { - if( pOp->p1 ){ + if( db->flags & SQLITE_DeferFKs ){ + db->nDeferredImmCons += pOp->p2; + }else if( pOp->p1 ){ db->nDeferredCons += pOp->p2; }else{ p->nFkConstraint += pOp->p2; } break; } /* Opcode: FkIfZero P1 P2 * * * +** Synopsis: if fkctr[P1]==0 goto P2 ** ** This opcode tests if a foreign key constraint-counter is currently zero. ** If so, jump to instruction P2. Otherwise, fall through to the next ** instruction. ** @@ -58673,20 +79955,23 @@ ** zero, the jump is taken if the statement constraint-counter is zero ** (immediate foreign key constraint violations). */ case OP_FkIfZero: { /* jump */ if( pOp->p1 ){ - if( db->nDeferredCons==0 ) pc = pOp->p2-1; + VdbeBranchTaken(db->nDeferredCons==0 && db->nDeferredImmCons==0, 2); + if( db->nDeferredCons==0 && db->nDeferredImmCons==0 ) goto jump_to_p2; }else{ - if( p->nFkConstraint==0 ) pc = pOp->p2-1; + VdbeBranchTaken(p->nFkConstraint==0 && db->nDeferredImmCons==0, 2); + if( p->nFkConstraint==0 && db->nDeferredImmCons==0 ) goto jump_to_p2; } break; } #endif /* #ifndef SQLITE_OMIT_FOREIGN_KEY */ #ifndef SQLITE_OMIT_AUTOINCREMENT /* Opcode: MemMax P1 P2 * * * +** Synopsis: r[P1]=max(r[P1],r[P2]) ** ** P1 is a register in the root frame of this VM (the root frame is ** different from the current frame if this instruction is being executed ** within a sub-program). Set the value of register P1 to the maximum of ** its current value and the value in register P2. @@ -58693,136 +79978,227 @@ ** ** This instruction throws an error if the memory cell is not initially ** an integer. */ case OP_MemMax: { /* in2 */ -#if 0 /* local variables moved into u.ca */ - Mem *pIn1; VdbeFrame *pFrame; -#endif /* local variables moved into u.ca */ if( p->pFrame ){ - for(u.ca.pFrame=p->pFrame; u.ca.pFrame->pParent; u.ca.pFrame=u.ca.pFrame->pParent); - u.ca.pIn1 = &u.ca.pFrame->aMem[pOp->p1]; + for(pFrame=p->pFrame; pFrame->pParent; pFrame=pFrame->pParent); + pIn1 = &pFrame->aMem[pOp->p1]; }else{ - u.ca.pIn1 = &aMem[pOp->p1]; + pIn1 = &aMem[pOp->p1]; } - sqlite3VdbeMemIntegerify(u.ca.pIn1); + assert( memIsValid(pIn1) ); + sqlite3VdbeMemIntegerify(pIn1); pIn2 = &aMem[pOp->p2]; sqlite3VdbeMemIntegerify(pIn2); - if( u.ca.pIn1->u.i<pIn2->u.i){ - u.ca.pIn1->u.i = pIn2->u.i; + if( pIn1->u.i<pIn2->u.i){ + pIn1->u.i = pIn2->u.i; } break; } #endif /* SQLITE_OMIT_AUTOINCREMENT */ -/* Opcode: IfPos P1 P2 * * * +/* Opcode: IfPos P1 P2 P3 * * +** Synopsis: if r[P1]>0 then r[P1]-=P3, goto P2 ** -** If the value of register P1 is 1 or greater, jump to P2. +** Register P1 must contain an integer. 
+** If the value of register P1 is 1 or greater, subtract P3 from the +** value in P1 and jump to P2. ** -** It is illegal to use this instruction on a register that does -** not contain an integer. An assertion fault will result if you try. +** If the initial value of register P1 is less than 1, then the +** value is unchanged and control passes through to the next instruction. */ case OP_IfPos: { /* jump, in1 */ pIn1 = &aMem[pOp->p1]; assert( pIn1->flags&MEM_Int ); - if( pIn1->u.i>0 ){ - pc = pOp->p2 - 1; - } - break; -} - -/* Opcode: IfNeg P1 P2 * * * -** -** If the value of register P1 is less than zero, jump to P2. -** -** It is illegal to use this instruction on a register that does -** not contain an integer. An assertion fault will result if you try. -*/ -case OP_IfNeg: { /* jump, in1 */ - pIn1 = &aMem[pOp->p1]; - assert( pIn1->flags&MEM_Int ); - if( pIn1->u.i<0 ){ - pc = pOp->p2 - 1; - } - break; -} - -/* Opcode: IfZero P1 P2 P3 * * -** -** The register P1 must contain an integer. Add literal P3 to the -** value in register P1. If the result is exactly 0, jump to P2. -** -** It is illegal to use this instruction on a register that does -** not contain an integer. An assertion fault will result if you try. -*/ -case OP_IfZero: { /* jump, in1 */ - pIn1 = &aMem[pOp->p1]; - assert( pIn1->flags&MEM_Int ); - pIn1->u.i += pOp->p3; - if( pIn1->u.i==0 ){ - pc = pOp->p2 - 1; - } - break; -} - -/* Opcode: AggStep * P2 P3 P4 P5 + VdbeBranchTaken( pIn1->u.i>0, 2); + if( pIn1->u.i>0 ){ + pIn1->u.i -= pOp->p3; + goto jump_to_p2; + } + break; +} + +/* Opcode: OffsetLimit P1 P2 P3 * * +** Synopsis: if r[P1]>0 then r[P2]=r[P1]+max(0,r[P3]) else r[P2]=(-1) +** +** This opcode performs a commonly used computation associated with +** LIMIT and OFFSET process. r[P1] holds the limit counter. r[P3] +** holds the offset counter. The opcode computes the combined value +** of the LIMIT and OFFSET and stores that value in r[P2]. The r[P2] +** value computed is the total number of rows that will need to be +** visited in order to complete the query. +** +** If r[P3] is zero or negative, that means there is no OFFSET +** and r[P2] is set to be the value of the LIMIT, r[P1]. +** +** if r[P1] is zero or negative, that means there is no LIMIT +** and r[P2] is set to -1. +** +** Otherwise, r[P2] is set to the sum of r[P1] and r[P3]. +*/ +case OP_OffsetLimit: { /* in1, out2, in3 */ + pIn1 = &aMem[pOp->p1]; + pIn3 = &aMem[pOp->p3]; + pOut = out2Prerelease(p, pOp); + assert( pIn1->flags & MEM_Int ); + assert( pIn3->flags & MEM_Int ); + pOut->u.i = pIn1->u.i<=0 ? -1 : pIn1->u.i+(pIn3->u.i>0?pIn3->u.i:0); + break; +} + +/* Opcode: IfNotZero P1 P2 P3 * * +** Synopsis: if r[P1]!=0 then r[P1]-=P3, goto P2 +** +** Register P1 must contain an integer. If the content of register P1 is +** initially nonzero, then subtract P3 from the value in register P1 and +** jump to P2. If register P1 is initially zero, leave it unchanged +** and fall through. +*/ +case OP_IfNotZero: { /* jump, in1 */ + pIn1 = &aMem[pOp->p1]; + assert( pIn1->flags&MEM_Int ); + VdbeBranchTaken(pIn1->u.i<0, 2); + if( pIn1->u.i ){ + pIn1->u.i -= pOp->p3; + goto jump_to_p2; + } + break; +} + +/* Opcode: DecrJumpZero P1 P2 * * * +** Synopsis: if (--r[P1])==0 goto P2 +** +** Register P1 must hold an integer. Decrement the value in register P1 +** then jump to P2 if the new value is exactly zero. 
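+**
+** A typical use (a sketch; code generation varies by release) is the
+** LIMIT counter: for
+**
+**        SELECT * FROM t1 LIMIT 10;
+**
+** the value 10 is loaded into a register and a DecrJumpZero on that
+** register after each result row exits the scan loop once ten rows have
+** been produced.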
+*/ +case OP_DecrJumpZero: { /* jump, in1 */ + pIn1 = &aMem[pOp->p1]; + assert( pIn1->flags&MEM_Int ); + pIn1->u.i--; + VdbeBranchTaken(pIn1->u.i==0, 2); + if( pIn1->u.i==0 ) goto jump_to_p2; + break; +} + + +/* Opcode: JumpZeroIncr P1 P2 * * * +** Synopsis: if (r[P1]++)==0 ) goto P2 +** +** The register P1 must contain an integer. If register P1 is initially +** zero, then jump to P2. Increment register P1 regardless of whether or +** not the jump is taken. +*/ +case OP_JumpZeroIncr: { /* jump, in1 */ + pIn1 = &aMem[pOp->p1]; + assert( pIn1->flags&MEM_Int ); + VdbeBranchTaken(pIn1->u.i==0, 2); + if( (pIn1->u.i++)==0 ) goto jump_to_p2; + break; +} + +/* Opcode: AggStep0 * P2 P3 P4 P5 +** Synopsis: accum=r[P3] step(r[P2@P5]) ** ** Execute the step function for an aggregate. The ** function has P5 arguments. P4 is a pointer to the FuncDef -** structure that specifies the function. Use register -** P3 as the accumulator. +** structure that specifies the function. Register P3 is the +** accumulator. +** +** The P5 arguments are taken from register P2 and its +** successors. +*/ +/* Opcode: AggStep * P2 P3 P4 P5 +** Synopsis: accum=r[P3] step(r[P2@P5]) +** +** Execute the step function for an aggregate. The +** function has P5 arguments. P4 is a pointer to an sqlite3_context +** object that is used to run the function. Register P3 is +** as the accumulator. ** ** The P5 arguments are taken from register P2 and its ** successors. +** +** This opcode is initially coded as OP_AggStep0. On first evaluation, +** the FuncDef stored in P4 is converted into an sqlite3_context and +** the opcode is changed. In this way, the initialization of the +** sqlite3_context only happens once, instead of on each call to the +** step function. */ +case OP_AggStep0: { + int n; + sqlite3_context *pCtx; + + assert( pOp->p4type==P4_FUNCDEF ); + n = pOp->p5; + assert( pOp->p3>0 && pOp->p3<=(p->nMem-p->nCursor) ); + assert( n==0 || (pOp->p2>0 && pOp->p2+n<=(p->nMem-p->nCursor)+1) ); + assert( pOp->p3<pOp->p2 || pOp->p3>=pOp->p2+n ); + pCtx = sqlite3DbMallocRawNN(db, sizeof(*pCtx) + (n-1)*sizeof(sqlite3_value*)); + if( pCtx==0 ) goto no_mem; + pCtx->pMem = 0; + pCtx->pFunc = pOp->p4.pFunc; + pCtx->iOp = (int)(pOp - aOp); + pCtx->pVdbe = p; + pCtx->argc = n; + pOp->p4type = P4_FUNCCTX; + pOp->p4.pCtx = pCtx; + pOp->opcode = OP_AggStep; + /* Fall through into OP_AggStep */ +} case OP_AggStep: { -#if 0 /* local variables moved into u.cb */ - int n; int i; + sqlite3_context *pCtx; Mem *pMem; - Mem *pRec; - sqlite3_context ctx; - sqlite3_value **apVal; -#endif /* local variables moved into u.cb */ - - u.cb.n = pOp->p5; - assert( u.cb.n>=0 ); - u.cb.pRec = &aMem[pOp->p2]; - u.cb.apVal = p->apArg; - assert( u.cb.apVal || u.cb.n==0 ); - for(u.cb.i=0; u.cb.i<u.cb.n; u.cb.i++, u.cb.pRec++){ - u.cb.apVal[u.cb.i] = u.cb.pRec; - sqlite3VdbeMemStoreType(u.cb.pRec); - } - u.cb.ctx.pFunc = pOp->p4.pFunc; - assert( pOp->p3>0 && pOp->p3<=p->nMem ); - u.cb.ctx.pMem = u.cb.pMem = &aMem[pOp->p3]; - u.cb.pMem->n++; - u.cb.ctx.s.flags = MEM_Null; - u.cb.ctx.s.z = 0; - u.cb.ctx.s.zMalloc = 0; - u.cb.ctx.s.xDel = 0; - u.cb.ctx.s.db = db; - u.cb.ctx.isError = 0; - u.cb.ctx.pColl = 0; - if( u.cb.ctx.pFunc->flags & SQLITE_FUNC_NEEDCOLL ){ - assert( pOp>p->aOp ); - assert( pOp[-1].p4type==P4_COLLSEQ ); + Mem t; + + assert( pOp->p4type==P4_FUNCCTX ); + pCtx = pOp->p4.pCtx; + pMem = &aMem[pOp->p3]; + + /* If this function is inside of a trigger, the register array in aMem[] + ** might change from one evaluation to the next. 
The next block of code + ** checks to see if the register array has changed, and if so it + ** reinitializes the relavant parts of the sqlite3_context object */ + if( pCtx->pMem != pMem ){ + pCtx->pMem = pMem; + for(i=pCtx->argc-1; i>=0; i--) pCtx->argv[i] = &aMem[pOp->p2+i]; + } + +#ifdef SQLITE_DEBUG + for(i=0; i<pCtx->argc; i++){ + assert( memIsValid(pCtx->argv[i]) ); + REGISTER_TRACE(pOp->p2+i, pCtx->argv[i]); + } +#endif + + pMem->n++; + sqlite3VdbeMemInit(&t, db, MEM_Null); + pCtx->pOut = &t; + pCtx->fErrorOrAux = 0; + pCtx->skipFlag = 0; + (pCtx->pFunc->xSFunc)(pCtx,pCtx->argc,pCtx->argv); /* IMP: R-24505-23230 */ + if( pCtx->fErrorOrAux ){ + if( pCtx->isError ){ + sqlite3VdbeError(p, "%s", sqlite3_value_text(&t)); + rc = pCtx->isError; + } + sqlite3VdbeMemRelease(&t); + }else{ + assert( t.flags==MEM_Null ); + } + if( pCtx->skipFlag ){ assert( pOp[-1].opcode==OP_CollSeq ); - u.cb.ctx.pColl = pOp[-1].p4.pColl; - } - (u.cb.ctx.pFunc->xStep)(&u.cb.ctx, u.cb.n, u.cb.apVal); - if( u.cb.ctx.isError ){ - sqlite3SetString(&p->zErrMsg, db, "%s", sqlite3_value_text(&u.cb.ctx.s)); - rc = u.cb.ctx.isError; - } - sqlite3VdbeMemRelease(&u.cb.ctx.s); + i = pOp[-1].p1; + if( i ) sqlite3VdbeMemSetInt64(&aMem[i], 1); + } break; } /* Opcode: AggFinal P1 P2 * P4 * +** Synopsis: accum=r[P1] N=P2 ** ** Execute the finalizer function for an aggregate. P1 is ** the memory location that is the accumulator for the aggregate. ** ** P2 is the number of arguments that the step function takes and @@ -58831,37 +80207,178 @@ ** functions that can take varying numbers of arguments. The ** P4 argument is only needed for the degenerate case where ** the step function was not previously called. */ case OP_AggFinal: { -#if 0 /* local variables moved into u.cc */ Mem *pMem; -#endif /* local variables moved into u.cc */ - assert( pOp->p1>0 && pOp->p1<=p->nMem ); - u.cc.pMem = &aMem[pOp->p1]; - assert( (u.cc.pMem->flags & ~(MEM_Null|MEM_Agg))==0 ); - rc = sqlite3VdbeMemFinalize(u.cc.pMem, pOp->p4.pFunc); + assert( pOp->p1>0 && pOp->p1<=(p->nMem-p->nCursor) ); + pMem = &aMem[pOp->p1]; + assert( (pMem->flags & ~(MEM_Null|MEM_Agg))==0 ); + rc = sqlite3VdbeMemFinalize(pMem, pOp->p4.pFunc); if( rc ){ - sqlite3SetString(&p->zErrMsg, db, "%s", sqlite3_value_text(u.cc.pMem)); + sqlite3VdbeError(p, "%s", sqlite3_value_text(pMem)); } - sqlite3VdbeChangeEncoding(u.cc.pMem, encoding); - UPDATE_MAX_BLOBSIZE(u.cc.pMem); - if( sqlite3VdbeMemTooBig(u.cc.pMem) ){ + sqlite3VdbeChangeEncoding(pMem, encoding); + UPDATE_MAX_BLOBSIZE(pMem); + if( sqlite3VdbeMemTooBig(pMem) ){ goto too_big; } break; } +#ifndef SQLITE_OMIT_WAL +/* Opcode: Checkpoint P1 P2 P3 * * +** +** Checkpoint database P1. This is a no-op if P1 is not currently in +** WAL mode. Parameter P2 is one of SQLITE_CHECKPOINT_PASSIVE, FULL, +** RESTART, or TRUNCATE. Write 1 or 0 into mem[P3] if the checkpoint returns +** SQLITE_BUSY or not, respectively. Write the number of pages in the +** WAL after the checkpoint into mem[P3+1] and the number of pages +** in the WAL that have been checkpointed after the checkpoint +** completes into mem[P3+2]. However on an error, mem[P3+1] and +** mem[P3+2] are initialized to -1. 
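From application code the same checkpoint operation is reached through the public
sqlite3_wal_checkpoint_v2() interface. A short usage sketch (error handling trimmed)
that mirrors the three values the opcode writes into mem[P3..P3+2]:

    #include <stdio.h>
    #include "sqlite3.h"

    /* Checkpoint the "main" database and report the WAL statistics. */
    static void checkpoint_main(sqlite3 *db){
      int nLog = -1, nCkpt = -1;
      int rc = sqlite3_wal_checkpoint_v2(db, "main",
                                         SQLITE_CHECKPOINT_PASSIVE,
                                         &nLog, &nCkpt);
      printf("busy=%d wal-frames=%d checkpointed=%d\n",
             rc==SQLITE_BUSY, nLog, nCkpt);
    }
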
+*/ +case OP_Checkpoint: { + int i; /* Loop counter */ + int aRes[3]; /* Results */ + Mem *pMem; /* Write results here */ + + assert( p->readOnly==0 ); + aRes[0] = 0; + aRes[1] = aRes[2] = -1; + assert( pOp->p2==SQLITE_CHECKPOINT_PASSIVE + || pOp->p2==SQLITE_CHECKPOINT_FULL + || pOp->p2==SQLITE_CHECKPOINT_RESTART + || pOp->p2==SQLITE_CHECKPOINT_TRUNCATE + ); + rc = sqlite3Checkpoint(db, pOp->p1, pOp->p2, &aRes[1], &aRes[2]); + if( rc==SQLITE_BUSY ){ + rc = SQLITE_OK; + aRes[0] = 1; + } + for(i=0, pMem = &aMem[pOp->p3]; i<3; i++, pMem++){ + sqlite3VdbeMemSetInt64(pMem, (i64)aRes[i]); + } + break; +}; +#endif + +#ifndef SQLITE_OMIT_PRAGMA +/* Opcode: JournalMode P1 P2 P3 * * +** +** Change the journal mode of database P1 to P3. P3 must be one of the +** PAGER_JOURNALMODE_XXX values. If changing between the various rollback +** modes (delete, truncate, persist, off and memory), this is a simple +** operation. No IO is required. +** +** If changing into or out of WAL mode the procedure is more complicated. +** +** Write a string containing the final journal-mode to register P2. +*/ +case OP_JournalMode: { /* out2 */ + Btree *pBt; /* Btree to change journal mode of */ + Pager *pPager; /* Pager associated with pBt */ + int eNew; /* New journal mode */ + int eOld; /* The old journal mode */ +#ifndef SQLITE_OMIT_WAL + const char *zFilename; /* Name of database file for pPager */ +#endif + + pOut = out2Prerelease(p, pOp); + eNew = pOp->p3; + assert( eNew==PAGER_JOURNALMODE_DELETE + || eNew==PAGER_JOURNALMODE_TRUNCATE + || eNew==PAGER_JOURNALMODE_PERSIST + || eNew==PAGER_JOURNALMODE_OFF + || eNew==PAGER_JOURNALMODE_MEMORY + || eNew==PAGER_JOURNALMODE_WAL + || eNew==PAGER_JOURNALMODE_QUERY + ); + assert( pOp->p1>=0 && pOp->p1<db->nDb ); + assert( p->readOnly==0 ); + + pBt = db->aDb[pOp->p1].pBt; + pPager = sqlite3BtreePager(pBt); + eOld = sqlite3PagerGetJournalMode(pPager); + if( eNew==PAGER_JOURNALMODE_QUERY ) eNew = eOld; + if( !sqlite3PagerOkToChangeJournalMode(pPager) ) eNew = eOld; + +#ifndef SQLITE_OMIT_WAL + zFilename = sqlite3PagerFilename(pPager, 1); + + /* Do not allow a transition to journal_mode=WAL for a database + ** in temporary storage or if the VFS does not support shared memory + */ + if( eNew==PAGER_JOURNALMODE_WAL + && (sqlite3Strlen30(zFilename)==0 /* Temp file */ + || !sqlite3PagerWalSupported(pPager)) /* No shared-memory support */ + ){ + eNew = eOld; + } + + if( (eNew!=eOld) + && (eOld==PAGER_JOURNALMODE_WAL || eNew==PAGER_JOURNALMODE_WAL) + ){ + if( !db->autoCommit || db->nVdbeRead>1 ){ + rc = SQLITE_ERROR; + sqlite3VdbeError(p, + "cannot change %s wal mode from within a transaction", + (eNew==PAGER_JOURNALMODE_WAL ? "into" : "out of") + ); + break; + }else{ + + if( eOld==PAGER_JOURNALMODE_WAL ){ + /* If leaving WAL mode, close the log file. If successful, the call + ** to PagerCloseWal() checkpoints and deletes the write-ahead-log + ** file. An EXCLUSIVE lock may still be held on the database file + ** after a successful return. + */ + rc = sqlite3PagerCloseWal(pPager); + if( rc==SQLITE_OK ){ + sqlite3PagerSetJournalMode(pPager, eNew); + } + }else if( eOld==PAGER_JOURNALMODE_MEMORY ){ + /* Cannot transition directly from MEMORY to WAL. Use mode OFF + ** as an intermediate */ + sqlite3PagerSetJournalMode(pPager, PAGER_JOURNALMODE_OFF); + } + + /* Open a transaction on the database file. Regardless of the journal + ** mode, this transaction always uses a rollback journal. 
+ */ + assert( sqlite3BtreeIsInTrans(pBt)==0 ); + if( rc==SQLITE_OK ){ + rc = sqlite3BtreeSetVersion(pBt, (eNew==PAGER_JOURNALMODE_WAL ? 2 : 1)); + } + } + } +#endif /* ifndef SQLITE_OMIT_WAL */ + + if( rc ){ + eNew = eOld; + } + eNew = sqlite3PagerSetJournalMode(pPager, eNew); + + pOut->flags = MEM_Str|MEM_Static|MEM_Term; + pOut->z = (char *)sqlite3JournalModename(eNew); + pOut->n = sqlite3Strlen30(pOut->z); + pOut->enc = SQLITE_UTF8; + sqlite3VdbeChangeEncoding(pOut, encoding); + break; +}; +#endif /* SQLITE_OMIT_PRAGMA */ #if !defined(SQLITE_OMIT_VACUUM) && !defined(SQLITE_OMIT_ATTACH) /* Opcode: Vacuum * * * * * ** ** Vacuum the entire database. This opcode will cause other virtual ** machines to be created and run. It may not be called from within ** a transaction. */ case OP_Vacuum: { + assert( p->readOnly==0 ); rc = sqlite3RunVacuum(&p->zErrMsg, db); break; } #endif @@ -58871,34 +80388,35 @@ ** Perform a single step of the incremental vacuum procedure on ** the P1 database. If the vacuum has finished, jump to instruction ** P2. Otherwise, fall through to the next instruction. */ case OP_IncrVacuum: { /* jump */ -#if 0 /* local variables moved into u.cd */ Btree *pBt; -#endif /* local variables moved into u.cd */ assert( pOp->p1>=0 && pOp->p1<db->nDb ); - assert( (p->btreeMask & (1<<pOp->p1))!=0 ); - u.cd.pBt = db->aDb[pOp->p1].pBt; - rc = sqlite3BtreeIncrVacuum(u.cd.pBt); + assert( DbMaskTest(p->btreeMask, pOp->p1) ); + assert( p->readOnly==0 ); + pBt = db->aDb[pOp->p1].pBt; + rc = sqlite3BtreeIncrVacuum(pBt); + VdbeBranchTaken(rc==SQLITE_DONE,2); if( rc==SQLITE_DONE ){ - pc = pOp->p2 - 1; rc = SQLITE_OK; + goto jump_to_p2; } break; } #endif /* Opcode: Expire P1 * * * * ** -** Cause precompiled statements to become expired. An expired statement -** fails with an error code of SQLITE_SCHEMA if it is ever executed -** (via sqlite3_step()). +** Cause precompiled statements to expire. When an expired statement +** is executed using sqlite3_step() it will either automatically +** reprepare itself (if it was originally created using sqlite3_prepare_v2()) +** or it will fail with SQLITE_SCHEMA. ** ** If P1 is 0, then all SQL statements become expired. If P1 is non-zero, -** then only the currently executing statement is affected. +** then only the currently executing statement is expired. */ case OP_Expire: { if( !pOp->p1 ){ sqlite3ExpirePreparedStatements(db); }else{ @@ -58907,10 +80425,11 @@ break; } #ifndef SQLITE_OMIT_SHARED_CACHE /* Opcode: TableLock P1 P2 P3 P4 * +** Synopsis: iDb=P1 root=P2 write=P3 ** ** Obtain a lock on a particular table. This instruction is only used when ** the shared-cache feature is enabled. ** ** P1 is the index of the database in sqlite3.aDb[] of the database @@ -58925,16 +80444,16 @@ case OP_TableLock: { u8 isWriteLock = (u8)pOp->p3; if( isWriteLock || 0==(db->flags&SQLITE_ReadUncommitted) ){ int p1 = pOp->p1; assert( p1>=0 && p1<db->nDb ); - assert( (p->btreeMask & (1<<p1))!=0 ); + assert( DbMaskTest(p->btreeMask, p1) ); assert( isWriteLock==0 || isWriteLock==1 ); rc = sqlite3BtreeLockTable(db->aDb[p1].pBt, pOp->p2, isWriteLock); if( (rc&0xFF)==SQLITE_LOCKED ){ const char *z = pOp->p4.z; - sqlite3SetString(&p->zErrMsg, db, "database table is locked: %s", z); + sqlite3VdbeError(p, "database table is locked: %s", z); } } break; } #endif /* SQLITE_OMIT_SHARED_CACHE */ @@ -58948,32 +80467,42 @@ ** Also, whether or not P4 is set, check that this is not being called from ** within a callback to a virtual table xSync() method. 
If it is, the error ** code will be set to SQLITE_LOCKED. */ case OP_VBegin: { -#if 0 /* local variables moved into u.ce */ VTable *pVTab; -#endif /* local variables moved into u.ce */ - u.ce.pVTab = pOp->p4.pVtab; - rc = sqlite3VtabBegin(db, u.ce.pVTab); - if( u.ce.pVTab ){ - sqlite3DbFree(db, p->zErrMsg); - p->zErrMsg = u.ce.pVTab->pVtab->zErrMsg; - u.ce.pVTab->pVtab->zErrMsg = 0; - } + pVTab = pOp->p4.pVtab; + rc = sqlite3VtabBegin(db, pVTab); + if( pVTab ) sqlite3VtabImportErrmsg(p, pVTab->pVtab); break; } #endif /* SQLITE_OMIT_VIRTUALTABLE */ #ifndef SQLITE_OMIT_VIRTUALTABLE -/* Opcode: VCreate P1 * * P4 * +/* Opcode: VCreate P1 P2 * * * ** -** P4 is the name of a virtual table in database P1. Call the xCreate method -** for that table. +** P2 is a register that holds the name of a virtual table in database +** P1. Call the xCreate method for that table. */ case OP_VCreate: { - rc = sqlite3VtabCallCreate(db, pOp->p1, pOp->p4.z, &p->zErrMsg); + Mem sMem; /* For storing the record being decoded */ + const char *zTab; /* Name of the virtual table */ + + memset(&sMem, 0, sizeof(sMem)); + sMem.db = db; + /* Because P2 is always a static string, it is impossible for the + ** sqlite3VdbeMemCopy() to fail */ + assert( (aMem[pOp->p2].flags & MEM_Str)!=0 ); + assert( (aMem[pOp->p2].flags & MEM_Static)!=0 ); + rc = sqlite3VdbeMemCopy(&sMem, &aMem[pOp->p2]); + assert( rc==SQLITE_OK ); + zTab = (const char*)sqlite3_value_text(&sMem); + assert( zTab || db->mallocFailed ); + if( zTab ){ + rc = sqlite3VtabCallCreate(db, pOp->p1, zTab, &p->zErrMsg); + } + sqlite3VdbeMemRelease(&sMem); break; } #endif /* SQLITE_OMIT_VIRTUALTABLE */ #ifndef SQLITE_OMIT_VIRTUALTABLE @@ -58981,13 +80510,13 @@ ** ** P4 is the name of a virtual table in database P1. Call the xDestroy method ** of that table. */ case OP_VDestroy: { - p->inVtabMethod = 2; + db->nVDestroy++; rc = sqlite3VtabCallDestroy(db, pOp->p1, pOp->p4.z); - p->inVtabMethod = 0; + db->nVDestroy--; break; } #endif /* SQLITE_OMIT_VIRTUALTABLE */ #ifndef SQLITE_OMIT_VIRTUALTABLE @@ -58996,46 +80525,48 @@ ** P4 is a pointer to a virtual table object, an sqlite3_vtab structure. ** P1 is a cursor number. This opcode opens a cursor to the virtual ** table and stores that cursor in P1. 
*/ case OP_VOpen: { -#if 0 /* local variables moved into u.cf */ VdbeCursor *pCur; - sqlite3_vtab_cursor *pVtabCursor; + sqlite3_vtab_cursor *pVCur; sqlite3_vtab *pVtab; - sqlite3_module *pModule; -#endif /* local variables moved into u.cf */ - - u.cf.pCur = 0; - u.cf.pVtabCursor = 0; - u.cf.pVtab = pOp->p4.pVtab->pVtab; - u.cf.pModule = (sqlite3_module *)u.cf.pVtab->pModule; - assert(u.cf.pVtab && u.cf.pModule); - rc = u.cf.pModule->xOpen(u.cf.pVtab, &u.cf.pVtabCursor); - sqlite3DbFree(db, p->zErrMsg); - p->zErrMsg = u.cf.pVtab->zErrMsg; - u.cf.pVtab->zErrMsg = 0; + const sqlite3_module *pModule; + + assert( p->bIsReader ); + pCur = 0; + pVCur = 0; + pVtab = pOp->p4.pVtab->pVtab; + if( pVtab==0 || NEVER(pVtab->pModule==0) ){ + rc = SQLITE_LOCKED; + break; + } + pModule = pVtab->pModule; + rc = pModule->xOpen(pVtab, &pVCur); + sqlite3VtabImportErrmsg(p, pVtab); if( SQLITE_OK==rc ){ /* Initialize sqlite3_vtab_cursor base class */ - u.cf.pVtabCursor->pVtab = u.cf.pVtab; - - /* Initialise vdbe cursor object */ - u.cf.pCur = allocateCursor(p, pOp->p1, 0, -1, 0); - if( u.cf.pCur ){ - u.cf.pCur->pVtabCursor = u.cf.pVtabCursor; - u.cf.pCur->pModule = u.cf.pVtabCursor->pVtab->pModule; - }else{ - db->mallocFailed = 1; - u.cf.pModule->xClose(u.cf.pVtabCursor); + pVCur->pVtab = pVtab; + + /* Initialize vdbe cursor object */ + pCur = allocateCursor(p, pOp->p1, 0, -1, CURTYPE_VTAB); + if( pCur ){ + pCur->uc.pVCur = pVCur; + pVtab->nRef++; + }else{ + assert( db->mallocFailed ); + pModule->xClose(pVCur); + goto no_mem; } } break; } #endif /* SQLITE_OMIT_VIRTUALTABLE */ #ifndef SQLITE_OMIT_VIRTUALTABLE /* Opcode: VFilter P1 P2 P3 P4 * +** Synopsis: iplan=r[P3] zplan='P4' ** ** P1 is a cursor opened using VOpen. P2 is an address to jump to if ** the filtered result set is empty. ** ** P4 is either NULL or a string that was generated by the xBestIndex @@ -59050,121 +80581,94 @@ ** xFilter as argv. Register P3+2 becomes argv[0] when passed to xFilter. ** ** A jump is made to P2 if the result set after filtering would be empty. 
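On the virtual-table side, the idxNum/idxStr/argv values assembled by this opcode
arrive in the module's xFilter method. A minimal sketch for a hypothetical module
whose cursor simply counts upward; the struct and names are illustrative only:

    #include "sqlite3.h"

    typedef struct SeqCursor {
      sqlite3_vtab_cursor base;   /* Base class - must be first */
      sqlite3_int64 i;            /* Current row value */
    } SeqCursor;

    /* Invoked via OP_VFilter: position the cursor on the first row that
    ** satisfies the plan previously chosen by xBestIndex. */
    static int seqFilter(sqlite3_vtab_cursor *cur, int idxNum,
                         const char *idxStr, int argc, sqlite3_value **argv){
      SeqCursor *p = (SeqCursor*)cur;
      (void)idxNum; (void)idxStr;              /* plan ignored in this sketch */
      p->i = argc>0 ? sqlite3_value_int64(argv[0]) : 0;  /* optional start */
      return SQLITE_OK;
    }
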
*/ case OP_VFilter: { /* jump */ -#if 0 /* local variables moved into u.cg */ int nArg; int iQuery; const sqlite3_module *pModule; Mem *pQuery; Mem *pArgc; - sqlite3_vtab_cursor *pVtabCursor; + sqlite3_vtab_cursor *pVCur; sqlite3_vtab *pVtab; VdbeCursor *pCur; int res; int i; Mem **apArg; -#endif /* local variables moved into u.cg */ - - u.cg.pQuery = &aMem[pOp->p3]; - u.cg.pArgc = &u.cg.pQuery[1]; - u.cg.pCur = p->apCsr[pOp->p1]; - REGISTER_TRACE(pOp->p3, u.cg.pQuery); - assert( u.cg.pCur->pVtabCursor ); - u.cg.pVtabCursor = u.cg.pCur->pVtabCursor; - u.cg.pVtab = u.cg.pVtabCursor->pVtab; - u.cg.pModule = u.cg.pVtab->pModule; + + pQuery = &aMem[pOp->p3]; + pArgc = &pQuery[1]; + pCur = p->apCsr[pOp->p1]; + assert( memIsValid(pQuery) ); + REGISTER_TRACE(pOp->p3, pQuery); + assert( pCur->eCurType==CURTYPE_VTAB ); + pVCur = pCur->uc.pVCur; + pVtab = pVCur->pVtab; + pModule = pVtab->pModule; /* Grab the index number and argc parameters */ - assert( (u.cg.pQuery->flags&MEM_Int)!=0 && u.cg.pArgc->flags==MEM_Int ); - u.cg.nArg = (int)u.cg.pArgc->u.i; - u.cg.iQuery = (int)u.cg.pQuery->u.i; + assert( (pQuery->flags&MEM_Int)!=0 && pArgc->flags==MEM_Int ); + nArg = (int)pArgc->u.i; + iQuery = (int)pQuery->u.i; /* Invoke the xFilter method */ - { - u.cg.res = 0; - u.cg.apArg = p->apArg; - for(u.cg.i = 0; u.cg.i<u.cg.nArg; u.cg.i++){ - u.cg.apArg[u.cg.i] = &u.cg.pArgc[u.cg.i+1]; - sqlite3VdbeMemStoreType(u.cg.apArg[u.cg.i]); - } - - p->inVtabMethod = 1; - rc = u.cg.pModule->xFilter(u.cg.pVtabCursor, u.cg.iQuery, pOp->p4.z, u.cg.nArg, u.cg.apArg); - p->inVtabMethod = 0; - sqlite3DbFree(db, p->zErrMsg); - p->zErrMsg = u.cg.pVtab->zErrMsg; - u.cg.pVtab->zErrMsg = 0; - if( rc==SQLITE_OK ){ - u.cg.res = u.cg.pModule->xEof(u.cg.pVtabCursor); - } - - if( u.cg.res ){ - pc = pOp->p2 - 1; - } - } - u.cg.pCur->nullRow = 0; - + res = 0; + apArg = p->apArg; + for(i = 0; i<nArg; i++){ + apArg[i] = &pArgc[i+1]; + } + rc = pModule->xFilter(pVCur, iQuery, pOp->p4.z, nArg, apArg); + sqlite3VtabImportErrmsg(p, pVtab); + if( rc==SQLITE_OK ){ + res = pModule->xEof(pVCur); + } + pCur->nullRow = 0; + VdbeBranchTaken(res!=0,2); + if( res ) goto jump_to_p2; break; } #endif /* SQLITE_OMIT_VIRTUALTABLE */ #ifndef SQLITE_OMIT_VIRTUALTABLE /* Opcode: VColumn P1 P2 P3 * * +** Synopsis: r[P3]=vcolumn(P2) ** ** Store the value of the P2-th column of ** the row of the virtual-table that the ** P1 cursor is pointing to into register P3. */ case OP_VColumn: { -#if 0 /* local variables moved into u.ch */ sqlite3_vtab *pVtab; const sqlite3_module *pModule; Mem *pDest; sqlite3_context sContext; -#endif /* local variables moved into u.ch */ VdbeCursor *pCur = p->apCsr[pOp->p1]; - assert( pCur->pVtabCursor ); - assert( pOp->p3>0 && pOp->p3<=p->nMem ); - u.ch.pDest = &aMem[pOp->p3]; + assert( pCur->eCurType==CURTYPE_VTAB ); + assert( pOp->p3>0 && pOp->p3<=(p->nMem-p->nCursor) ); + pDest = &aMem[pOp->p3]; + memAboutToChange(p, pDest); if( pCur->nullRow ){ - sqlite3VdbeMemSetNull(u.ch.pDest); + sqlite3VdbeMemSetNull(pDest); break; } - u.ch.pVtab = pCur->pVtabCursor->pVtab; - u.ch.pModule = u.ch.pVtab->pModule; - assert( u.ch.pModule->xColumn ); - memset(&u.ch.sContext, 0, sizeof(u.ch.sContext)); - - /* The output cell may already have a buffer allocated. Move - ** the current contents to u.ch.sContext.s so in case the user-function - ** can use the already allocated buffer instead of allocating a - ** new one. 
- */ - sqlite3VdbeMemMove(&u.ch.sContext.s, u.ch.pDest); - MemSetTypeFlag(&u.ch.sContext.s, MEM_Null); - - rc = u.ch.pModule->xColumn(pCur->pVtabCursor, &u.ch.sContext, pOp->p2); - sqlite3DbFree(db, p->zErrMsg); - p->zErrMsg = u.ch.pVtab->zErrMsg; - u.ch.pVtab->zErrMsg = 0; - if( u.ch.sContext.isError ){ - rc = u.ch.sContext.isError; - } - - /* Copy the result of the function to the P3 register. We - ** do this regardless of whether or not an error occurred to ensure any - ** dynamic allocation in u.ch.sContext.s (a Mem struct) is released. - */ - sqlite3VdbeChangeEncoding(&u.ch.sContext.s, encoding); - sqlite3VdbeMemMove(u.ch.pDest, &u.ch.sContext.s); - REGISTER_TRACE(pOp->p3, u.ch.pDest); - UPDATE_MAX_BLOBSIZE(u.ch.pDest); - - if( sqlite3VdbeMemTooBig(u.ch.pDest) ){ + pVtab = pCur->uc.pVCur->pVtab; + pModule = pVtab->pModule; + assert( pModule->xColumn ); + memset(&sContext, 0, sizeof(sContext)); + sContext.pOut = pDest; + MemSetTypeFlag(pDest, MEM_Null); + rc = pModule->xColumn(pCur->uc.pVCur, &sContext, pOp->p2); + sqlite3VtabImportErrmsg(p, pVtab); + if( sContext.isError ){ + rc = sContext.isError; + } + sqlite3VdbeChangeEncoding(pDest, encoding); + REGISTER_TRACE(pOp->p3, pDest); + UPDATE_MAX_BLOBSIZE(pDest); + + if( sqlite3VdbeMemTooBig(pDest) ){ goto too_big; } break; } #endif /* SQLITE_OMIT_VIRTUALTABLE */ @@ -59175,48 +80679,42 @@ ** Advance virtual table P1 to the next row in its result set and ** jump to instruction P2. Or, if the virtual table has reached ** the end of its result set, then fall through to the next instruction. */ case OP_VNext: { /* jump */ -#if 0 /* local variables moved into u.ci */ sqlite3_vtab *pVtab; const sqlite3_module *pModule; int res; VdbeCursor *pCur; -#endif /* local variables moved into u.ci */ - u.ci.res = 0; - u.ci.pCur = p->apCsr[pOp->p1]; - assert( u.ci.pCur->pVtabCursor ); - if( u.ci.pCur->nullRow ){ + res = 0; + pCur = p->apCsr[pOp->p1]; + assert( pCur->eCurType==CURTYPE_VTAB ); + if( pCur->nullRow ){ break; } - u.ci.pVtab = u.ci.pCur->pVtabCursor->pVtab; - u.ci.pModule = u.ci.pVtab->pModule; - assert( u.ci.pModule->xNext ); + pVtab = pCur->uc.pVCur->pVtab; + pModule = pVtab->pModule; + assert( pModule->xNext ); /* Invoke the xNext() method of the module. There is no way for the ** underlying implementation to return an error if one occurs during - ** xNext(). Instead, if an error occurs, true is returned (indicating that + ** xNext(). Instead, if an error occurs, true is returned (indicating that ** data is available) and the error code returned when xColumn or ** some other method is next invoked on the save virtual table cursor. */ - p->inVtabMethod = 1; - rc = u.ci.pModule->xNext(u.ci.pCur->pVtabCursor); - p->inVtabMethod = 0; - sqlite3DbFree(db, p->zErrMsg); - p->zErrMsg = u.ci.pVtab->zErrMsg; - u.ci.pVtab->zErrMsg = 0; + rc = pModule->xNext(pCur->uc.pVCur); + sqlite3VtabImportErrmsg(p, pVtab); if( rc==SQLITE_OK ){ - u.ci.res = u.ci.pModule->xEof(u.ci.pCur->pVtabCursor); + res = pModule->xEof(pCur->uc.pVCur); } - - if( !u.ci.res ){ + VdbeBranchTaken(!res,2); + if( !res ){ /* If there is data, jump to P2 */ - pc = pOp->p2 - 1; + goto jump_to_p2_and_check_for_interrupt; } - break; + goto check_for_interrupt; } #endif /* SQLITE_OMIT_VIRTUALTABLE */ #ifndef SQLITE_OMIT_VIRTUALTABLE /* Opcode: VRename P1 * * P4 * @@ -59224,31 +80722,36 @@ ** P4 is a pointer to a virtual table object, an sqlite3_vtab structure. ** This opcode invokes the corresponding xRename method. 
The value ** in register P1 is passed as the zName argument to the xRename method. */ case OP_VRename: { -#if 0 /* local variables moved into u.cj */ sqlite3_vtab *pVtab; Mem *pName; -#endif /* local variables moved into u.cj */ - - u.cj.pVtab = pOp->p4.pVtab->pVtab; - u.cj.pName = &aMem[pOp->p1]; - assert( u.cj.pVtab->pModule->xRename ); - REGISTER_TRACE(pOp->p1, u.cj.pName); - assert( u.cj.pName->flags & MEM_Str ); - rc = u.cj.pVtab->pModule->xRename(u.cj.pVtab, u.cj.pName->z); - sqlite3DbFree(db, p->zErrMsg); - p->zErrMsg = u.cj.pVtab->zErrMsg; - u.cj.pVtab->zErrMsg = 0; - + + pVtab = pOp->p4.pVtab->pVtab; + pName = &aMem[pOp->p1]; + assert( pVtab->pModule->xRename ); + assert( memIsValid(pName) ); + assert( p->readOnly==0 ); + REGISTER_TRACE(pOp->p1, pName); + assert( pName->flags & MEM_Str ); + testcase( pName->enc==SQLITE_UTF8 ); + testcase( pName->enc==SQLITE_UTF16BE ); + testcase( pName->enc==SQLITE_UTF16LE ); + rc = sqlite3VdbeChangeEncoding(pName, SQLITE_UTF8); + if( rc==SQLITE_OK ){ + rc = pVtab->pModule->xRename(pVtab, pName->z); + sqlite3VtabImportErrmsg(p, pVtab); + p->expired = 0; + } break; } #endif #ifndef SQLITE_OMIT_VIRTUALTABLE -/* Opcode: VUpdate P1 P2 P3 P4 * +/* Opcode: VUpdate P1 P2 P3 P4 P5 +** Synopsis: data=r[P3@P2] ** ** P4 is a pointer to a virtual table object, an sqlite3_vtab structure. ** This opcode invokes the corresponding xUpdate method. P2 values ** are contiguous memory cells starting at P3 to pass to the xUpdate ** invocation. The value in register (P3+P2-1) corresponds to the @@ -59266,43 +80769,62 @@ ** a row to delete. ** ** P1 is a boolean flag. If it is set to true and the xUpdate call ** is successful, then the value returned by sqlite3_last_insert_rowid() ** is set to the value of the rowid for the row just inserted. +** +** P5 is the error actions (OE_Replace, OE_Fail, OE_Ignore, etc) to +** apply in the case of a constraint failure on an insert or update. 
*/ case OP_VUpdate: { -#if 0 /* local variables moved into u.ck */ sqlite3_vtab *pVtab; - sqlite3_module *pModule; + const sqlite3_module *pModule; int nArg; int i; sqlite_int64 rowid; Mem **apArg; Mem *pX; -#endif /* local variables moved into u.ck */ - u.ck.pVtab = pOp->p4.pVtab->pVtab; - u.ck.pModule = (sqlite3_module *)u.ck.pVtab->pModule; - u.ck.nArg = pOp->p2; + assert( pOp->p2==1 || pOp->p5==OE_Fail || pOp->p5==OE_Rollback + || pOp->p5==OE_Abort || pOp->p5==OE_Ignore || pOp->p5==OE_Replace + ); + assert( p->readOnly==0 ); + pVtab = pOp->p4.pVtab->pVtab; + if( pVtab==0 || NEVER(pVtab->pModule==0) ){ + rc = SQLITE_LOCKED; + break; + } + pModule = pVtab->pModule; + nArg = pOp->p2; assert( pOp->p4type==P4_VTAB ); - if( ALWAYS(u.ck.pModule->xUpdate) ){ - u.ck.apArg = p->apArg; - u.ck.pX = &aMem[pOp->p3]; - for(u.ck.i=0; u.ck.i<u.ck.nArg; u.ck.i++){ - sqlite3VdbeMemStoreType(u.ck.pX); - u.ck.apArg[u.ck.i] = u.ck.pX; - u.ck.pX++; - } - rc = u.ck.pModule->xUpdate(u.ck.pVtab, u.ck.nArg, u.ck.apArg, &u.ck.rowid); - sqlite3DbFree(db, p->zErrMsg); - p->zErrMsg = u.ck.pVtab->zErrMsg; - u.ck.pVtab->zErrMsg = 0; + if( ALWAYS(pModule->xUpdate) ){ + u8 vtabOnConflict = db->vtabOnConflict; + apArg = p->apArg; + pX = &aMem[pOp->p3]; + for(i=0; i<nArg; i++){ + assert( memIsValid(pX) ); + memAboutToChange(p, pX); + apArg[i] = pX; + pX++; + } + db->vtabOnConflict = pOp->p5; + rc = pModule->xUpdate(pVtab, nArg, apArg, &rowid); + db->vtabOnConflict = vtabOnConflict; + sqlite3VtabImportErrmsg(p, pVtab); if( rc==SQLITE_OK && pOp->p1 ){ - assert( u.ck.nArg>1 && u.ck.apArg[0] && (u.ck.apArg[0]->flags&MEM_Null) ); - db->lastRowid = u.ck.rowid; + assert( nArg>1 && apArg[0] && (apArg[0]->flags&MEM_Null) ); + db->lastRowid = lastRowid = rowid; } - p->nChange++; + if( (rc&0xff)==SQLITE_CONSTRAINT && pOp->p4.pVtab->bConstraint ){ + if( pOp->p5==OE_Ignore ){ + rc = SQLITE_OK; + }else{ + p->errorAction = ((pOp->p5==OE_Replace) ? OE_Abort : pOp->p5); + } + }else{ + p->nChange++; + } } break; } #endif /* SQLITE_OMIT_VIRTUALTABLE */ @@ -59309,44 +80831,113 @@ #ifndef SQLITE_OMIT_PAGER_PRAGMAS /* Opcode: Pagecount P1 P2 * * * ** ** Write the current number of pages in database P1 to memory cell P2. */ -case OP_Pagecount: { /* out2-prerelease */ +case OP_Pagecount: { /* out2 */ + pOut = out2Prerelease(p, pOp); pOut->u.i = sqlite3BtreeLastPage(db->aDb[pOp->p1].pBt); break; } #endif -#ifndef SQLITE_OMIT_TRACE -/* Opcode: Trace * * * P4 * + +#ifndef SQLITE_OMIT_PAGER_PRAGMAS +/* Opcode: MaxPgcnt P1 P2 P3 * * +** +** Try to set the maximum page count for database P1 to the value in P3. +** Do not let the maximum page count fall below the current page count and +** do not change the maximum page count value if P3==0. +** +** Store the maximum page count after the change in register P2. +*/ +case OP_MaxPgcnt: { /* out2 */ + unsigned int newMax; + Btree *pBt; + + pOut = out2Prerelease(p, pOp); + pBt = db->aDb[pOp->p1].pBt; + newMax = 0; + if( pOp->p3 ){ + newMax = sqlite3BtreeLastPage(pBt); + if( newMax < (unsigned)pOp->p3 ) newMax = (unsigned)pOp->p3; + } + pOut->u.i = sqlite3BtreeMaxPageCount(pBt, newMax); + break; +} +#endif + + +/* Opcode: Init * P2 * P4 * +** Synopsis: Start at P2 +** +** Programs contain a single instance of this opcode as the very first +** opcode. ** ** If tracing is enabled (by the sqlite3_trace()) interface, then ** the UTF-8 string contained in P4 is emitted on the trace callback. +** Or if P4 is blank, use the string returned by sqlite3_sql(). +** +** If P2 is not zero, jump to instruction P2. 
*/ -case OP_Trace: { -#if 0 /* local variables moved into u.cl */ +case OP_Init: { /* jump */ char *zTrace; -#endif /* local variables moved into u.cl */ - - u.cl.zTrace = (pOp->p4.z ? pOp->p4.z : p->zSql); - if( u.cl.zTrace ){ - if( db->xTrace ){ - char *z = sqlite3VdbeExpandSql(p, u.cl.zTrace); - db->xTrace(db->pTraceArg, z); - sqlite3DbFree(db, z); - } + char *z; + +#ifndef SQLITE_OMIT_TRACE + if( db->xTrace + && !p->doingRerun + && (zTrace = (pOp->p4.z ? pOp->p4.z : p->zSql))!=0 + ){ + z = sqlite3VdbeExpandSql(p, zTrace); + db->xTrace(db->pTraceArg, z); + sqlite3DbFree(db, z); + } +#ifdef SQLITE_USE_FCNTL_TRACE + zTrace = (pOp->p4.z ? pOp->p4.z : p->zSql); + if( zTrace ){ + int i; + for(i=0; i<db->nDb; i++){ + if( DbMaskTest(p->btreeMask, i)==0 ) continue; + sqlite3_file_control(db, db->aDb[i].zName, SQLITE_FCNTL_TRACE, zTrace); + } + } +#endif /* SQLITE_USE_FCNTL_TRACE */ #ifdef SQLITE_DEBUG - if( (db->flags & SQLITE_SqlTrace)!=0 ){ - sqlite3DebugPrintf("SQL-trace: %s\n", u.cl.zTrace); - } + if( (db->flags & SQLITE_SqlTrace)!=0 + && (zTrace = (pOp->p4.z ? pOp->p4.z : p->zSql))!=0 + ){ + sqlite3DebugPrintf("SQL-trace: %s\n", zTrace); + } #endif /* SQLITE_DEBUG */ +#endif /* SQLITE_OMIT_TRACE */ + if( pOp->p2 ) goto jump_to_p2; + break; +} + +#ifdef SQLITE_ENABLE_CURSOR_HINTS +/* Opcode: CursorHint P1 * * P4 * +** +** Provide a hint to cursor P1 that it only needs to return rows that +** satisfy the Expr in P4. TK_REGISTER terms in the P4 expression refer +** to values currently held in registers. TK_COLUMN terms in the P4 +** expression refer to columns in the b-tree to which cursor P1 is pointing. +*/ +case OP_CursorHint: { + VdbeCursor *pC; + + assert( pOp->p1>=0 && pOp->p1<p->nCursor ); + assert( pOp->p4type==P4_EXPR ); + pC = p->apCsr[pOp->p1]; + if( pC ){ + assert( pC->eCurType==CURTYPE_BTREE ); + sqlite3BtreeCursorHint(pC->uc.pCursor, BTREE_HINT_RANGE, + pOp->p4.pExpr, aMem); } break; } -#endif - +#endif /* SQLITE_ENABLE_CURSOR_HINTS */ /* Opcode: Noop * * * * * ** ** Do nothing. This instruction is often useful as a jump ** destination. @@ -59370,36 +80961,32 @@ *****************************************************************************/ } #ifdef VDBE_PROFILE { - u64 elapsed = sqlite3Hwtime() - start; - pOp->cycles += elapsed; - pOp->cnt++; -#if 0 - fprintf(stdout, "%10llu ", elapsed); - sqlite3VdbePrintOp(stdout, origPc, &aOp[origPc]); -#endif + u64 endTime = sqlite3Hwtime(); + if( endTime>start ) pOrigOp->cycles += endTime - start; + pOrigOp->cnt++; } #endif /* The following code adds nothing to the actual functionality ** of the program. It is only here for testing and debugging. ** On the other hand, it does burn CPU cycles every time through ** the evaluator loop. So we can leave it out when NDEBUG is defined. 
*/ #ifndef NDEBUG - assert( pc>=-1 && pc<p->nOp ); + assert( pOp>=&aOp[-1] && pOp<&aOp[p->nOp-1] ); #ifdef SQLITE_DEBUG - if( p->trace ){ - if( rc!=0 ) fprintf(p->trace,"rc=%d\n",rc); - if( pOp->opflags & (OPFLG_OUT2_PRERELEASE|OPFLG_OUT2) ){ - registerTrace(p->trace, pOp->p2, &aMem[pOp->p2]); + if( db->flags & SQLITE_VdbeTrace ){ + if( rc!=0 ) printf("rc=%d\n",rc); + if( pOrigOp->opflags & (OPFLG_OUT2) ){ + registerTrace(pOrigOp->p2, &aMem[pOrigOp->p2]); } - if( pOp->opflags & OPFLG_OUT3 ){ - registerTrace(p->trace, pOp->p3, &aMem[pOp->p3]); + if( pOrigOp->opflags & OPFLG_OUT3 ){ + registerTrace(pOrigOp->p3, &aMem[pOrigOp->p3]); } } #endif /* SQLITE_DEBUG */ #endif /* NDEBUG */ } /* The end of the for(;;) loop the loops through opcodes */ @@ -59410,36 +80997,44 @@ vdbe_error_halt: assert( rc ); p->rc = rc; testcase( sqlite3GlobalConfig.xLog!=0 ); sqlite3_log(rc, "statement aborts at %d: [%s] %s", - pc, p->zSql, p->zErrMsg); + (int)(pOp - aOp), p->zSql, p->zErrMsg); sqlite3VdbeHalt(p); - if( rc==SQLITE_IOERR_NOMEM ) db->mallocFailed = 1; + if( rc==SQLITE_IOERR_NOMEM ) sqlite3OomFault(db); rc = SQLITE_ERROR; - if( resetSchemaOnFault ) sqlite3ResetInternalSchema(db, 0); + if( resetSchemaOnFault>0 ){ + sqlite3ResetOneSchema(db, resetSchemaOnFault-1); + } /* This is the only way out of this procedure. We have to ** release the mutexes on btrees that were acquired at the ** top. */ vdbe_return: - sqlite3BtreeMutexArrayLeave(&p->aMutex); + db->lastRowid = lastRowid; + testcase( nVmStep>0 ); + p->aCounter[SQLITE_STMTSTATUS_VM_STEP] += (int)nVmStep; + sqlite3VdbeLeave(p); + assert( rc!=SQLITE_OK || nExtraDelete==0 + || sqlite3_strlike("DELETE%",p->zSql,0)!=0 + ); return rc; /* Jump to here if a string or blob larger than SQLITE_MAX_LENGTH ** is encountered. */ too_big: - sqlite3SetString(&p->zErrMsg, db, "string or blob too big"); + sqlite3VdbeError(p, "string or blob too big"); rc = SQLITE_TOOBIG; goto vdbe_error_halt; /* Jump to here if a malloc() fails. */ no_mem: - db->mallocFailed = 1; - sqlite3SetString(&p->zErrMsg, db, "out of memory"); + sqlite3OomFault(db); + sqlite3VdbeError(p, "out of memory"); rc = SQLITE_NOMEM; goto vdbe_error_halt; /* Jump to here for any other kind of fatal error. The "rc" variable ** should hold the error number. @@ -59446,24 +81041,25 @@ */ abort_due_to_error: assert( p->zErrMsg==0 ); if( db->mallocFailed ) rc = SQLITE_NOMEM; if( rc!=SQLITE_IOERR_NOMEM ){ - sqlite3SetString(&p->zErrMsg, db, "%s", sqlite3ErrStr(rc)); + sqlite3VdbeError(p, "%s", sqlite3ErrStr(rc)); } goto vdbe_error_halt; /* Jump to here if the sqlite3_interrupt() API sets the interrupt ** flag. */ abort_due_to_interrupt: assert( db->u1.isInterrupted ); - rc = SQLITE_INTERRUPT; + rc = db->mallocFailed ? SQLITE_NOMEM : SQLITE_INTERRUPT; p->rc = rc; - sqlite3SetString(&p->zErrMsg, db, "%s", sqlite3ErrStr(rc)); + sqlite3VdbeError(p, "%s", sqlite3ErrStr(rc)); goto vdbe_error_halt; } + /************** End of vdbe.c ************************************************/ /************** Begin file vdbeblob.c ****************************************/ /* ** 2007 May 1 @@ -59478,10 +81074,12 @@ ************************************************************************* ** ** This file contains code used to implement incremental BLOB I/O. */ +/* #include "sqliteInt.h" */ +/* #include "vdbeInt.h" */ #ifndef SQLITE_OMIT_INCRBLOB /* ** Valid sqlite3_blob* handles point to Incrblob structures. 
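The public face of this file is the sqlite3_blob_* API. A short usage sketch that
reads the first bytes of a blob; the table and column names are illustrative and
error handling is trimmed:

    #include <stdio.h>
    #include "sqlite3.h"

    static void dump_blob_prefix(sqlite3 *db, sqlite3_int64 rowid){
      sqlite3_blob *pBlob = 0;
      char buf[16];
      int n, rc;

      /* Open column "data" of table "files" on the given row, read-only */
      rc = sqlite3_blob_open(db, "main", "files", "data", rowid, 0, &pBlob);
      if( rc!=SQLITE_OK ) return;
      n = sqlite3_blob_bytes(pBlob);
      if( n>(int)sizeof(buf) ) n = (int)sizeof(buf);
      rc = sqlite3_blob_read(pBlob, buf, n, 0);   /* n bytes from offset 0 */
      if( rc==SQLITE_OK ) fwrite(buf, 1, (size_t)n, stdout);
      sqlite3_blob_close(pBlob);
    }
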
@@ -59489,19 +81087,89 @@ typedef struct Incrblob Incrblob; struct Incrblob { int flags; /* Copy of "flags" passed to sqlite3_blob_open() */ int nByte; /* Size of open blob, in bytes */ int iOffset; /* Byte offset of blob in cursor data */ + int iCol; /* Table column this handle is open on */ BtCursor *pCsr; /* Cursor pointing at blob row */ sqlite3_stmt *pStmt; /* Statement holding cursor open */ sqlite3 *db; /* The associated database */ }; + +/* +** This function is used by both blob_open() and blob_reopen(). It seeks +** the b-tree cursor associated with blob handle p to point to row iRow. +** If successful, SQLITE_OK is returned and subsequent calls to +** sqlite3_blob_read() or sqlite3_blob_write() access the specified row. +** +** If an error occurs, or if the specified row does not exist or does not +** contain a value of type TEXT or BLOB in the column nominated when the +** blob handle was opened, then an error code is returned and *pzErr may +** be set to point to a buffer containing an error message. It is the +** responsibility of the caller to free the error message buffer using +** sqlite3DbFree(). +** +** If an error does occur, then the b-tree cursor is closed. All subsequent +** calls to sqlite3_blob_read(), blob_write() or blob_reopen() will +** immediately return SQLITE_ABORT. +*/ +static int blobSeekToRow(Incrblob *p, sqlite3_int64 iRow, char **pzErr){ + int rc; /* Error code */ + char *zErr = 0; /* Error message */ + Vdbe *v = (Vdbe *)p->pStmt; + + /* Set the value of the SQL statements only variable to integer iRow. + ** This is done directly instead of using sqlite3_bind_int64() to avoid + ** triggering asserts related to mutexes. + */ + assert( v->aVar[0].flags&MEM_Int ); + v->aVar[0].u.i = iRow; + + rc = sqlite3_step(p->pStmt); + if( rc==SQLITE_ROW ){ + VdbeCursor *pC = v->apCsr[0]; + u32 type = pC->aType[p->iCol]; + if( type<12 ){ + zErr = sqlite3MPrintf(p->db, "cannot open value of type %s", + type==0?"null": type==7?"real": "integer" + ); + rc = SQLITE_ERROR; + sqlite3_finalize(p->pStmt); + p->pStmt = 0; + }else{ + p->iOffset = pC->aType[p->iCol + pC->nField]; + p->nByte = sqlite3VdbeSerialTypeLen(type); + p->pCsr = pC->uc.pCursor; + sqlite3BtreeIncrblobCursor(p->pCsr); + } + } + + if( rc==SQLITE_ROW ){ + rc = SQLITE_OK; + }else if( p->pStmt ){ + rc = sqlite3_finalize(p->pStmt); + p->pStmt = 0; + if( rc==SQLITE_OK ){ + zErr = sqlite3MPrintf(p->db, "no such rowid: %lld", iRow); + rc = SQLITE_ERROR; + }else{ + zErr = sqlite3MPrintf(p->db, "%s", sqlite3_errmsg(p->db)); + } + } + + assert( rc!=SQLITE_OK || zErr==0 ); + assert( rc!=SQLITE_ROW && rc!=SQLITE_DONE ); + + *pzErr = zErr; + return rc; +} + /* ** Open a blob handle. */ -SQLITE_API int sqlite3_blob_open( +SQLITE_API int SQLITE_STDCALL sqlite3_blob_open( sqlite3* db, /* The database connection */ const char *zDb, /* The attached database containing the blob */ const char *zTable, /* The table containing the blob */ const char *zColumn, /* The column containing the blob */ sqlite_int64 iRow, /* The row containing the glob */ @@ -59508,66 +81176,52 @@ int flags, /* True -> read/write access, false -> read-only */ sqlite3_blob **ppBlob /* Handle for accessing the blob returned here */ ){ int nAttempt = 0; int iCol; /* Index of zColumn in row-record */ - - /* This VDBE program seeks a btree cursor to the identified - ** db/table/row entry. 
The reason for using a vdbe program instead - ** of writing code to use the b-tree layer directly is that the - ** vdbe program will take advantage of the various transaction, - ** locking and error handling infrastructure built into the vdbe. - ** - ** After seeking the cursor, the vdbe executes an OP_ResultRow. - ** Code external to the Vdbe then "borrows" the b-tree cursor and - ** uses it to implement the blob_read(), blob_write() and - ** blob_bytes() functions. - ** - ** The sqlite3_blob_close() function finalizes the vdbe program, - ** which closes the b-tree cursor and (possibly) commits the - ** transaction. - */ - static const VdbeOpList openBlob[] = { - {OP_Transaction, 0, 0, 0}, /* 0: Start a transaction */ - {OP_VerifyCookie, 0, 0, 0}, /* 1: Check the schema cookie */ - {OP_TableLock, 0, 0, 0}, /* 2: Acquire a read or write lock */ - - /* One of the following two instructions is replaced by an OP_Noop. */ - {OP_OpenRead, 0, 0, 0}, /* 3: Open cursor 0 for reading */ - {OP_OpenWrite, 0, 0, 0}, /* 4: Open cursor 0 for read/write */ - - {OP_Variable, 1, 1, 1}, /* 5: Push the rowid to the stack */ - {OP_NotExists, 0, 9, 1}, /* 6: Seek the cursor */ - {OP_Column, 0, 0, 1}, /* 7 */ - {OP_ResultRow, 1, 0, 0}, /* 8 */ - {OP_Close, 0, 0, 0}, /* 9 */ - {OP_Halt, 0, 0, 0}, /* 10 */ - }; - - Vdbe *v = 0; int rc = SQLITE_OK; char *zErr = 0; Table *pTab; - Parse *pParse; + Parse *pParse = 0; + Incrblob *pBlob = 0; +#ifdef SQLITE_ENABLE_API_ARMOR + if( ppBlob==0 ){ + return SQLITE_MISUSE_BKPT; + } +#endif *ppBlob = 0; +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) || zTable==0 ){ + return SQLITE_MISUSE_BKPT; + } +#endif + flags = !!flags; /* flags = (flags ? 1 : 0); */ + sqlite3_mutex_enter(db->mutex); + + pBlob = (Incrblob *)sqlite3DbMallocZero(db, sizeof(Incrblob)); + if( !pBlob ) goto blob_open_out; pParse = sqlite3StackAllocRaw(db, sizeof(*pParse)); - if( pParse==0 ){ - rc = SQLITE_NOMEM; - goto blob_open_out; - } + if( !pParse ) goto blob_open_out; + do { memset(pParse, 0, sizeof(Parse)); pParse->db = db; + sqlite3DbFree(db, zErr); + zErr = 0; sqlite3BtreeEnterAll(db); pTab = sqlite3LocateTable(pParse, 0, zTable, zDb); if( pTab && IsVirtual(pTab) ){ pTab = 0; sqlite3ErrorMsg(pParse, "cannot open virtual table: %s", zTable); } + if( pTab && !HasRowid(pTab) ){ + pTab = 0; + sqlite3ErrorMsg(pParse, "cannot open table without rowid: %s", zTable); + } #ifndef SQLITE_OMIT_VIEW if( pTab && pTab->pSelect ){ pTab = 0; sqlite3ErrorMsg(pParse, "cannot open view: %s", zTable); } @@ -59582,11 +81236,11 @@ sqlite3BtreeLeaveAll(db); goto blob_open_out; } /* Now search pTab for the exact column. */ - for(iCol=0; iCol < pTab->nCol; iCol++) { + for(iCol=0; iCol<pTab->nCol; iCol++) { if( sqlite3StrICmp(pTab->aCol[iCol].zName, zColumn)==0 ){ break; } } if( iCol==pTab->nCol ){ @@ -59621,12 +81275,13 @@ } } #endif for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ int j; - for(j=0; j<pIdx->nColumn; j++){ - if( pIdx->aiColumn[j]==iCol ){ + for(j=0; j<pIdx->nKeyCol; j++){ + /* FIXME: Be smarter about indexes that use expressions */ + if( pIdx->aiColumn[j]==iCol || pIdx->aiColumn[j]==XN_EXPR ){ zFault = "indexed"; } } } if( zFault ){ @@ -59636,113 +81291,113 @@ sqlite3BtreeLeaveAll(db); goto blob_open_out; } } - v = sqlite3VdbeCreate(db); - if( v ){ + pBlob->pStmt = (sqlite3_stmt *)sqlite3VdbeCreate(pParse); + assert( pBlob->pStmt || db->mallocFailed ); + if( pBlob->pStmt ){ + + /* This VDBE program seeks a btree cursor to the identified + ** db/table/row entry. 
The reason for using a vdbe program instead + ** of writing code to use the b-tree layer directly is that the + ** vdbe program will take advantage of the various transaction, + ** locking and error handling infrastructure built into the vdbe. + ** + ** After seeking the cursor, the vdbe executes an OP_ResultRow. + ** Code external to the Vdbe then "borrows" the b-tree cursor and + ** uses it to implement the blob_read(), blob_write() and + ** blob_bytes() functions. + ** + ** The sqlite3_blob_close() function finalizes the vdbe program, + ** which closes the b-tree cursor and (possibly) commits the + ** transaction. + */ + static const int iLn = VDBE_OFFSET_LINENO(2); + static const VdbeOpList openBlob[] = { + {OP_TableLock, 0, 0, 0}, /* 0: Acquire a read or write lock */ + {OP_OpenRead, 0, 0, 0}, /* 1: Open a cursor */ + {OP_Variable, 1, 1, 0}, /* 2: Move ?1 into reg[1] */ + {OP_NotExists, 0, 7, 1}, /* 3: Seek the cursor */ + {OP_Column, 0, 0, 1}, /* 4 */ + {OP_ResultRow, 1, 0, 0}, /* 5 */ + {OP_Goto, 0, 2, 0}, /* 6 */ + {OP_Close, 0, 0, 0}, /* 7 */ + {OP_Halt, 0, 0, 0}, /* 8 */ + }; + Vdbe *v = (Vdbe *)pBlob->pStmt; int iDb = sqlite3SchemaToIndex(db, pTab->pSchema); - sqlite3VdbeAddOpList(v, sizeof(openBlob)/sizeof(VdbeOpList), openBlob); - flags = !!flags; /* flags = (flags ? 1 : 0); */ - - /* Configure the OP_Transaction */ - sqlite3VdbeChangeP1(v, 0, iDb); - sqlite3VdbeChangeP2(v, 0, flags); - - /* Configure the OP_VerifyCookie */ - sqlite3VdbeChangeP1(v, 1, iDb); - sqlite3VdbeChangeP2(v, 1, pTab->pSchema->schema_cookie); + VdbeOp *aOp; + + sqlite3VdbeAddOp4Int(v, OP_Transaction, iDb, flags, + pTab->pSchema->schema_cookie, + pTab->pSchema->iGeneration); + sqlite3VdbeChangeP5(v, 1); + aOp = sqlite3VdbeAddOpList(v, ArraySize(openBlob), openBlob, iLn); /* Make sure a mutex is held on the table to be accessed */ sqlite3VdbeUsesBtree(v, iDb); - /* Configure the OP_TableLock instruction */ - sqlite3VdbeChangeP1(v, 2, iDb); - sqlite3VdbeChangeP2(v, 2, pTab->tnum); - sqlite3VdbeChangeP3(v, 2, flags); - sqlite3VdbeChangeP4(v, 2, pTab->zName, P4_TRANSIENT); - - /* Remove either the OP_OpenWrite or OpenRead. Set the P2 - ** parameter of the other to pTab->tnum. */ - sqlite3VdbeChangeToNoop(v, 4 - flags, 1); - sqlite3VdbeChangeP2(v, 3 + flags, pTab->tnum); - sqlite3VdbeChangeP3(v, 3 + flags, iDb); - - /* Configure the number of columns. Configure the cursor to - ** think that the table has one more column than it really - ** does. An OP_Column to retrieve this imaginary column will - ** always return an SQL NULL. This is useful because it means - ** we can invoke OP_Column to fill in the vdbe cursors type - ** and offset cache without causing any IO. - */ - sqlite3VdbeChangeP4(v, 3+flags, SQLITE_INT_TO_PTR(pTab->nCol+1),P4_INT32); - sqlite3VdbeChangeP2(v, 7, pTab->nCol); - if( !db->mallocFailed ){ - sqlite3VdbeMakeReady(v, 1, 1, 1, 0, 0, 0); - } - } - + if( db->mallocFailed==0 ){ + assert( aOp!=0 ); + /* Configure the OP_TableLock instruction */ +#ifdef SQLITE_OMIT_SHARED_CACHE + aOp[0].opcode = OP_Noop; +#else + aOp[0].p1 = iDb; + aOp[0].p2 = pTab->tnum; + aOp[0].p3 = flags; + sqlite3VdbeChangeP4(v, 1, pTab->zName, P4_TRANSIENT); + } + if( db->mallocFailed==0 ){ +#endif + + /* Remove either the OP_OpenWrite or OpenRead. Set the P2 + ** parameter of the other to pTab->tnum. */ + if( flags ) aOp[1].opcode = OP_OpenWrite; + aOp[1].p2 = pTab->tnum; + aOp[1].p3 = iDb; + + /* Configure the number of columns. 
Configure the cursor to + ** think that the table has one more column than it really + ** does. An OP_Column to retrieve this imaginary column will + ** always return an SQL NULL. This is useful because it means + ** we can invoke OP_Column to fill in the vdbe cursors type + ** and offset cache without causing any IO. + */ + aOp[1].p4type = P4_INT32; + aOp[1].p4.i = pTab->nCol+1; + aOp[4].p2 = pTab->nCol; + + pParse->nVar = 1; + pParse->nMem = 1; + pParse->nTab = 1; + sqlite3VdbeMakeReady(v, pParse); + } + } + + pBlob->flags = flags; + pBlob->iCol = iCol; + pBlob->db = db; sqlite3BtreeLeaveAll(db); if( db->mallocFailed ){ goto blob_open_out; } - - sqlite3_bind_int64((sqlite3_stmt *)v, 1, iRow); - rc = sqlite3_step((sqlite3_stmt *)v); - if( rc!=SQLITE_ROW ){ - nAttempt++; - rc = sqlite3_finalize((sqlite3_stmt *)v); - sqlite3DbFree(db, zErr); - zErr = sqlite3MPrintf(db, sqlite3_errmsg(db)); - v = 0; - } - } while( nAttempt<5 && rc==SQLITE_SCHEMA ); - - if( rc==SQLITE_ROW ){ - /* The row-record has been opened successfully. Check that the - ** column in question contains text or a blob. If it contains - ** text, it is up to the caller to get the encoding right. - */ - Incrblob *pBlob; - u32 type = v->apCsr[0]->aType[iCol]; - - if( type<12 ){ - sqlite3DbFree(db, zErr); - zErr = sqlite3MPrintf(db, "cannot open value of type %s", - type==0?"null": type==7?"real": "integer" - ); - rc = SQLITE_ERROR; - goto blob_open_out; - } - pBlob = (Incrblob *)sqlite3DbMallocZero(db, sizeof(Incrblob)); - if( db->mallocFailed ){ - sqlite3DbFree(db, pBlob); - goto blob_open_out; - } - pBlob->flags = flags; - pBlob->pCsr = v->apCsr[0]->pCursor; - sqlite3BtreeEnterCursor(pBlob->pCsr); - sqlite3BtreeCacheOverflow(pBlob->pCsr); - sqlite3BtreeLeaveCursor(pBlob->pCsr); - pBlob->pStmt = (sqlite3_stmt *)v; - pBlob->iOffset = v->apCsr[0]->aOffset[iCol]; - pBlob->nByte = sqlite3VdbeSerialTypeLen(type); - pBlob->db = db; - *ppBlob = (sqlite3_blob *)pBlob; - rc = SQLITE_OK; - }else if( rc==SQLITE_OK ){ - sqlite3DbFree(db, zErr); - zErr = sqlite3MPrintf(db, "no such rowid: %lld", iRow); - rc = SQLITE_ERROR; - } + sqlite3_bind_int64(pBlob->pStmt, 1, iRow); + rc = blobSeekToRow(pBlob, iRow, &zErr); + } while( (++nAttempt)<SQLITE_MAX_SCHEMA_RETRY && rc==SQLITE_SCHEMA ); blob_open_out: - if( v && (rc!=SQLITE_OK || db->mallocFailed) ){ - sqlite3VdbeFinalize(v); + if( rc==SQLITE_OK && db->mallocFailed==0 ){ + *ppBlob = (sqlite3_blob *)pBlob; + }else{ + if( pBlob && pBlob->pStmt ) sqlite3VdbeFinalize((Vdbe *)pBlob->pStmt); + sqlite3DbFree(db, pBlob); } - sqlite3Error(db, rc, zErr); + sqlite3ErrorWithMsg(db, rc, (zErr ? "%s" : 0), zErr); sqlite3DbFree(db, zErr); + sqlite3ParserReset(pParse); sqlite3StackFree(db, pParse); rc = sqlite3ApiExit(db, rc); sqlite3_mutex_leave(db->mutex); return rc; } @@ -59749,11 +81404,11 @@ /* ** Close a blob handle that was previously created using ** sqlite3_blob_open(). */ -SQLITE_API int sqlite3_blob_close(sqlite3_blob *pBlob){ +SQLITE_API int SQLITE_STDCALL sqlite3_blob_close(sqlite3_blob *pBlob){ Incrblob *p = (Incrblob *)pBlob; int rc; sqlite3 *db; if( p ){ @@ -59786,15 +81441,14 @@ if( p==0 ) return SQLITE_MISUSE_BKPT; db = p->db; sqlite3_mutex_enter(db->mutex); v = (Vdbe*)p->pStmt; - if( n<0 || iOffset<0 || (iOffset+n)>p->nByte ){ + if( n<0 || iOffset<0 || ((sqlite3_int64)iOffset+n)>p->nByte ){ /* Request is out of range. Return a transient error. 
*/ rc = SQLITE_ERROR; - sqlite3Error(db, SQLITE_ERROR, 0); - } else if( v==0 ){ + }else if( v==0 ){ /* If there is no statement handle, then the blob-handle has ** already been invalidated. Return SQLITE_ABORT in this case. */ rc = SQLITE_ABORT; }else{ @@ -59807,47 +81461,2830 @@ sqlite3BtreeLeaveCursor(p->pCsr); if( rc==SQLITE_ABORT ){ sqlite3VdbeFinalize(v); p->pStmt = 0; }else{ - db->errCode = rc; v->rc = rc; } } + sqlite3Error(db, rc); rc = sqlite3ApiExit(db, rc); sqlite3_mutex_leave(db->mutex); return rc; } /* ** Read data from a blob handle. */ -SQLITE_API int sqlite3_blob_read(sqlite3_blob *pBlob, void *z, int n, int iOffset){ +SQLITE_API int SQLITE_STDCALL sqlite3_blob_read(sqlite3_blob *pBlob, void *z, int n, int iOffset){ return blobReadWrite(pBlob, z, n, iOffset, sqlite3BtreeData); } /* ** Write data to a blob handle. */ -SQLITE_API int sqlite3_blob_write(sqlite3_blob *pBlob, const void *z, int n, int iOffset){ +SQLITE_API int SQLITE_STDCALL sqlite3_blob_write(sqlite3_blob *pBlob, const void *z, int n, int iOffset){ return blobReadWrite(pBlob, (void *)z, n, iOffset, sqlite3BtreePutData); } /* ** Query a blob handle for the size of the data. ** ** The Incrblob.nByte field is fixed for the lifetime of the Incrblob ** so no mutex is required for access. */ -SQLITE_API int sqlite3_blob_bytes(sqlite3_blob *pBlob){ +SQLITE_API int SQLITE_STDCALL sqlite3_blob_bytes(sqlite3_blob *pBlob){ + Incrblob *p = (Incrblob *)pBlob; + return (p && p->pStmt) ? p->nByte : 0; +} + +/* +** Move an existing blob handle to point to a different row of the same +** database table. +** +** If an error occurs, or if the specified row does not exist or does not +** contain a blob or text value, then an error code is returned and the +** database handle error code and message set. If this happens, then all +** subsequent calls to sqlite3_blob_xxx() functions (except blob_close()) +** immediately return SQLITE_ABORT. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_blob_reopen(sqlite3_blob *pBlob, sqlite3_int64 iRow){ + int rc; Incrblob *p = (Incrblob *)pBlob; - return p ? p->nByte : 0; + sqlite3 *db; + + if( p==0 ) return SQLITE_MISUSE_BKPT; + db = p->db; + sqlite3_mutex_enter(db->mutex); + + if( p->pStmt==0 ){ + /* If there is no statement handle, then the blob-handle has + ** already been invalidated. Return SQLITE_ABORT in this case. + */ + rc = SQLITE_ABORT; + }else{ + char *zErr; + rc = blobSeekToRow(p, iRow, &zErr); + if( rc!=SQLITE_OK ){ + sqlite3ErrorWithMsg(db, rc, (zErr ? "%s" : 0), zErr); + sqlite3DbFree(db, zErr); + } + assert( rc!=SQLITE_SCHEMA ); + } + + rc = sqlite3ApiExit(db, rc); + assert( rc==SQLITE_OK || p->pStmt==0 ); + sqlite3_mutex_leave(db->mutex); + return rc; } #endif /* #ifndef SQLITE_OMIT_INCRBLOB */ /************** End of vdbeblob.c ********************************************/ +/************** Begin file vdbesort.c ****************************************/ +/* +** 2011-07-09 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This file contains code for the VdbeSorter object, used in concert with +** a VdbeCursor to sort large numbers of keys for CREATE INDEX statements +** or by SELECT statements with ORDER BY clauses that cannot be satisfied +** using indexes and without LIMIT clauses. 
+** +** The VdbeSorter object implements a multi-threaded external merge sort +** algorithm that is efficient even if the number of elements being sorted +** exceeds the available memory. +** +** Here is the (internal, non-API) interface between this module and the +** rest of the SQLite system: +** +** sqlite3VdbeSorterInit() Create a new VdbeSorter object. +** +** sqlite3VdbeSorterWrite() Add a single new row to the VdbeSorter +** object. The row is a binary blob in the +** OP_MakeRecord format that contains both +** the ORDER BY key columns and result columns +** in the case of a SELECT w/ ORDER BY, or +** the complete record for an index entry +** in the case of a CREATE INDEX. +** +** sqlite3VdbeSorterRewind() Sort all content previously added. +** Position the read cursor on the +** first sorted element. +** +** sqlite3VdbeSorterNext() Advance the read cursor to the next sorted +** element. +** +** sqlite3VdbeSorterRowkey() Return the complete binary blob for the +** row currently under the read cursor. +** +** sqlite3VdbeSorterCompare() Compare the binary blob for the row +** currently under the read cursor against +** another binary blob X and report if +** X is strictly less than the read cursor. +** Used to enforce uniqueness in a +** CREATE UNIQUE INDEX statement. +** +** sqlite3VdbeSorterClose() Close the VdbeSorter object and reclaim +** all resources. +** +** sqlite3VdbeSorterReset() Refurbish the VdbeSorter for reuse. This +** is like Close() followed by Init() only +** much faster. +** +** The interfaces above must be called in a particular order. Write() can +** only occur in between Init()/Reset() and Rewind(). Next(), Rowkey(), and +** Compare() can only occur in between Rewind() and Close()/Reset(). i.e. +** +** Init() +** for each record: Write() +** Rewind() +** Rowkey()/Compare() +** Next() +** Close() +** +** Algorithm: +** +** Records passed to the sorter via calls to Write() are initially held +** unsorted in main memory. Assuming the amount of memory used never exceeds +** a threshold, when Rewind() is called the set of records is sorted using +** an in-memory merge sort. In this case, no temporary files are required +** and subsequent calls to Rowkey(), Next() and Compare() read records +** directly from main memory. +** +** If the amount of space used to store records in main memory exceeds the +** threshold, then the set of records currently in memory are sorted and +** written to a temporary file in "Packed Memory Array" (PMA) format. +** A PMA created at this point is known as a "level-0 PMA". Higher levels +** of PMAs may be created by merging existing PMAs together - for example +** merging two or more level-0 PMAs together creates a level-1 PMA. +** +** The threshold for the amount of main memory to use before flushing +** records to a PMA is roughly the same as the limit configured for the +** page-cache of the main database. Specifically, the threshold is set to +** the value returned by "PRAGMA main.page_size" multipled by +** that returned by "PRAGMA main.cache_size", in bytes. +** +** If the sorter is running in single-threaded mode, then all PMAs generated +** are appended to a single temporary file. Or, if the sorter is running in +** multi-threaded mode then up to (N+1) temporary files may be opened, where +** N is the configured number of worker threads. In this case, instead of +** sorting the records and writing the PMA to a temporary file itself, the +** calling thread usually launches a worker thread to do so. 
Except, if +** there are already N worker threads running, the main thread does the work +** itself. +** +** The sorter is running in multi-threaded mode if (a) the library was built +** with pre-processor symbol SQLITE_MAX_WORKER_THREADS set to a value greater +** than zero, and (b) worker threads have been enabled at runtime by calling +** "PRAGMA threads=N" with some value of N greater than 0. +** +** When Rewind() is called, any data remaining in memory is flushed to a +** final PMA. So at this point the data is stored in some number of sorted +** PMAs within temporary files on disk. +** +** If there are fewer than SORTER_MAX_MERGE_COUNT PMAs in total and the +** sorter is running in single-threaded mode, then these PMAs are merged +** incrementally as keys are retreived from the sorter by the VDBE. The +** MergeEngine object, described in further detail below, performs this +** merge. +** +** Or, if running in multi-threaded mode, then a background thread is +** launched to merge the existing PMAs. Once the background thread has +** merged T bytes of data into a single sorted PMA, the main thread +** begins reading keys from that PMA while the background thread proceeds +** with merging the next T bytes of data. And so on. +** +** Parameter T is set to half the value of the memory threshold used +** by Write() above to determine when to create a new PMA. +** +** If there are more than SORTER_MAX_MERGE_COUNT PMAs in total when +** Rewind() is called, then a hierarchy of incremental-merges is used. +** First, T bytes of data from the first SORTER_MAX_MERGE_COUNT PMAs on +** disk are merged together. Then T bytes of data from the second set, and +** so on, such that no operation ever merges more than SORTER_MAX_MERGE_COUNT +** PMAs at a time. This done is to improve locality. +** +** If running in multi-threaded mode and there are more than +** SORTER_MAX_MERGE_COUNT PMAs on disk when Rewind() is called, then more +** than one background thread may be created. Specifically, there may be +** one background thread for each temporary file on disk, and one background +** thread to merge the output of each of the others to a single PMA for +** the main thread to read from. +*/ +/* #include "sqliteInt.h" */ +/* #include "vdbeInt.h" */ + +/* +** If SQLITE_DEBUG_SORTER_THREADS is defined, this module outputs various +** messages to stderr that may be helpful in understanding the performance +** characteristics of the sorter in multi-threaded mode. +*/ +#if 0 +# define SQLITE_DEBUG_SORTER_THREADS 1 +#endif + +/* +** Hard-coded maximum amount of data to accumulate in memory before flushing +** to a level 0 PMA. The purpose of this limit is to prevent various integer +** overflows. 512MiB. +*/ +#define SQLITE_MAX_PMASZ (1<<29) + +/* +** Private objects used by the sorter +*/ +typedef struct MergeEngine MergeEngine; /* Merge PMAs together */ +typedef struct PmaReader PmaReader; /* Incrementally read one PMA */ +typedef struct PmaWriter PmaWriter; /* Incrementally write one PMA */ +typedef struct SorterRecord SorterRecord; /* A record being sorted */ +typedef struct SortSubtask SortSubtask; /* A sub-task in the sort process */ +typedef struct SorterFile SorterFile; /* Temporary file object wrapper */ +typedef struct SorterList SorterList; /* In-memory list of records */ +typedef struct IncrMerger IncrMerger; /* Read & merge multiple PMAs */ + +/* +** A container for a temp file handle and the current amount of data +** stored in the file. 
+*/
+struct SorterFile {
+  sqlite3_file *pFd;              /* File handle */
+  i64 iEof;                       /* Bytes of data stored in pFd */
+};
+
+/*
+** An in-memory list of objects to be sorted.
+**
+** If aMemory==0 then each object is allocated separately and the objects
+** are connected using SorterRecord.u.pNext. If aMemory!=0 then all objects
+** are stored in the aMemory[] bulk memory, one right after the other, and
+** are connected using SorterRecord.u.iNext.
+*/
+struct SorterList {
+  SorterRecord *pList;            /* Linked list of records */
+  u8 *aMemory;                    /* If non-NULL, bulk memory to hold pList */
+  int szPMA;                      /* Size of pList as PMA in bytes */
+};
+
+/*
+** The MergeEngine object is used to combine two or more smaller PMAs into
+** one big PMA using a merge operation. Separate PMAs all need to be
+** combined into one big PMA in order to be able to step through the sorted
+** records in order.
+**
+** The aReadr[] array contains a PmaReader object for each of the PMAs being
+** merged. An aReadr[] object either points to a valid key or else is at EOF.
+** ("EOF" means "End Of File". When aReadr[] is at EOF there is no more data.)
+** For the purposes of the paragraphs below, we assume that the array is
+** actually N elements in size, where N is the smallest power of 2 greater
+** than or equal to the number of PMAs being merged. The extra aReadr[]
+** elements are treated as if they are empty (always at EOF).
+**
+** The aTree[] array is also N elements in size. The value of N is stored in
+** the MergeEngine.nTree variable.
+**
+** The final (N/2) elements of aTree[] contain the results of comparing
+** pairs of PMA keys together. Element i contains the result of
+** comparing aReadr[2*i-N] and aReadr[2*i-N+1]. Whichever key is smaller, the
+** aTree element is set to the index of it.
+**
+** For the purposes of this comparison, EOF is considered greater than any
+** other key value. If the keys are equal (only possible with two EOF
+** values), it doesn't matter which index is stored.
+**
+** The (N/4) elements of aTree[] that precede the final (N/2) described
+** above contain the index of the smallest of each block of 4 PmaReaders,
+** and so on, so that aTree[1] contains the index of the PmaReader that
+** currently points to the smallest key value. aTree[0] is unused.
+**
+** Example:
+**
+**     aReadr[0] -> Banana
+**     aReadr[1] -> Feijoa
+**     aReadr[2] -> Elderberry
+**     aReadr[3] -> Currant
+**     aReadr[4] -> Grapefruit
+**     aReadr[5] -> Apple
+**     aReadr[6] -> Durian
+**     aReadr[7] -> EOF
+**
+**     aTree[] = { X, 5,   0, 5,   0, 3, 5, 6 }
+**
+** The current element is "Apple" (the value of the key indicated by
+** PmaReader 5). When the Next() operation is invoked, PmaReader 5 will
+** be advanced to the next key in its segment. Say the next key is
+** "Eggplant":
+**
+**     aReadr[5] -> Eggplant
+**
+** The contents of aTree[] are updated first by comparing the new PmaReader
+** 5 key to the current key of PmaReader 4 (still "Grapefruit"). The PmaReader
+** 5 value is still smaller, so aTree[6] is set to 5. And so on up the tree.
+** The value of PmaReader 6 - "Durian" - is now smaller than that of PmaReader
+** 5, so aTree[3] is set to 6. Key 0 is smaller than key 6 (Banana<Durian),
+** so the value written into element 1 of the array is 0. As follows:
+**
+**     aTree[] = { X, 0,   0, 6,   0, 3, 5, 6 }
+**
+** In other words, each time we advance to the next sorter element, log2(N)
+** key comparison operations are required, where N is the number of segments
+** being merged (rounded up to the next power of 2).
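+**
+** As a concrete illustration of the update path (derived from the example
+** above and from vdbeMergeEngineStep() below; the numbers are illustrative
+** only): with N==8, advancing PmaReader 5 recomputes aTree[(8+5)/2]==aTree[6],
+** then aTree[3], then aTree[1] - exactly log2(8)==3 comparisons. In general,
+** for a node i in the bottom half of the tree (i>=N/2) the two competitors
+** are aReadr[2*i-N] and aReadr[2*i-N+1]; for a node in the top half
+** (0<i<N/2) they are the PmaReaders identified by aTree[2*i] and
+** aTree[2*i+1].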
+*/
+struct MergeEngine {
+  int nTree;                      /* Used size of aTree/aReadr (power of 2) */
+  SortSubtask *pTask;             /* Used by this thread only */
+  int *aTree;                     /* Current state of incremental merge */
+  PmaReader *aReadr;              /* Array of PmaReaders to merge data from */
+};
+
+/*
+** This object represents a single thread of control in a sort operation.
+** Exactly VdbeSorter.nTask instances of this object are allocated
+** as part of each VdbeSorter object. Instances are never allocated any
+** other way. VdbeSorter.nTask is set to the number of worker threads allowed
+** (see SQLITE_CONFIG_WORKER_THREADS) plus one (the main thread). Thus for
+** single-threaded operation, there is exactly one instance of this object
+** and for multi-threaded operation there are two or more instances.
+**
+** Essentially, this structure contains all those fields of the VdbeSorter
+** structure for which each thread requires a separate instance. For example,
+** each thread requires its own UnpackedRecord object to unpack records into
+** as part of comparison operations.
+**
+** Before a background thread is launched, variable bDone is set to 0. Then,
+** right before it exits, the thread itself sets bDone to 1. This is used for
+** two purposes:
+**
+**   1. When flushing the contents of memory to a level-0 PMA on disk, to
+**      attempt to select a SortSubtask for which there is not already an
+**      active background thread (since doing so causes the main thread
+**      to block until it finishes).
+**
+**   2. If SQLITE_DEBUG_SORTER_THREADS is defined, to determine if a call
+**      to sqlite3ThreadJoin() is likely to block. Cases that are likely to
+**      block provoke debugging output.
+**
+** In both cases, the effects of the main thread seeing (bDone==0) even
+** after the thread has finished are not dire. So we don't worry about
+** memory barriers and such here.
+*/
+typedef int (*SorterCompare)(SortSubtask*,int*,const void*,int,const void*,int);
+struct SortSubtask {
+  SQLiteThread *pThread;          /* Background thread, if any */
+  int bDone;                      /* Set if thread is finished but not joined */
+  VdbeSorter *pSorter;            /* Sorter that owns this sub-task */
+  UnpackedRecord *pUnpacked;      /* Space to unpack a record */
+  SorterList list;                /* List for thread to write to a PMA */
+  int nPMA;                       /* Number of PMAs currently in file */
+  SorterCompare xCompare;         /* Compare function to use */
+  SorterFile file;                /* Temp file for level-0 PMAs */
+  SorterFile file2;               /* Space for other PMAs */
+};
+
+
+/*
+** Main sorter structure. A single instance of this is allocated for each
+** sorter cursor created by the VDBE.
+**
+** mxKeysize:
+**   As records are added to the sorter by calls to sqlite3VdbeSorterWrite(),
+**   this variable is updated so as to be set to the size on disk of the
+**   largest record in the sorter.
+*/
+struct VdbeSorter {
+  int mnPmaSize;                  /* Minimum PMA size, in bytes */
+  int mxPmaSize;                  /* Maximum PMA size, in bytes. 0==no limit */
+  int mxKeysize;                  /* Largest serialized key seen so far */
+  int pgsz;                       /* Main database page size */
+  PmaReader *pReader;             /* Read data from here after Rewind() */
+  MergeEngine *pMerger;           /* Or here, if bUseThreads==0 */
+  sqlite3 *db;                    /* Database connection */
+  KeyInfo *pKeyInfo;              /* How to compare records */
+  UnpackedRecord *pUnpacked;      /* Used by VdbeSorterCompare() */
+  SorterList list;                /* List of in-memory records */
+  int iMemory;                    /* Offset of free space in list.aMemory */
+  int nMemory;                    /* Size of list.aMemory allocation in bytes */
+  u8 bUsePMA;                     /* True if one or more PMAs created */
+  u8 bUseThreads;                 /* True to use background threads */
+  u8 iPrev;                       /* Previous thread used to flush PMA */
+  u8 nTask;                       /* Size of aTask[] array */
+  u8 typeMask;
+  SortSubtask aTask[1];           /* One or more subtasks */
+};
+
+#define SORTER_TYPE_INTEGER 0x01
+#define SORTER_TYPE_TEXT    0x02
+
+/*
+** An instance of the following object is used to read records out of a
+** PMA, in sorted order. The next key to be read is cached in nKey/aKey.
+** aKey might point into aMap or into aBuffer. If neither of those locations
+** contains a contiguous representation of the key, then aAlloc is allocated
+** and the key is copied into aAlloc and aKey is made to point to aAlloc.
+**
+** pFd==0 at EOF.
+*/
+struct PmaReader {
+  i64 iReadOff;                   /* Current read offset */
+  i64 iEof;                       /* 1 byte past EOF for this PmaReader */
+  int nAlloc;                     /* Bytes of space at aAlloc */
+  int nKey;                       /* Number of bytes in key */
+  sqlite3_file *pFd;              /* File handle we are reading from */
+  u8 *aAlloc;                     /* Space for aKey if aBuffer and pMap won't work */
+  u8 *aKey;                       /* Pointer to current key */
+  u8 *aBuffer;                    /* Current read buffer */
+  int nBuffer;                    /* Size of read buffer in bytes */
+  u8 *aMap;                       /* Pointer to mapping of entire file */
+  IncrMerger *pIncr;              /* Incremental merger */
+};
+
+/*
+** Normally, a PmaReader object iterates through an existing PMA stored
+** within a temp file. However, if the PmaReader.pIncr variable points to
+** an object of the following type, it may be used to iterate/merge through
+** multiple PMAs simultaneously.
+**
+** There are two types of IncrMerger object - single (bUseThread==0) and
+** multi-threaded (bUseThread==1).
+**
+** A multi-threaded IncrMerger object uses two temporary files - aFile[0]
+** and aFile[1]. Neither file is allowed to grow to more than mxSz bytes in
+** size. When the IncrMerger is initialized, it reads enough data from
+** pMerger to populate aFile[0]. It then sets variables within the
+** corresponding PmaReader object to read from that file and kicks off
+** a background thread to populate aFile[1] with the next mxSz bytes of
+** sorted record data from pMerger.
+**
+** When the PmaReader reaches the end of aFile[0], it blocks until the
+** background thread has finished populating aFile[1]. It then exchanges
+** the contents of the aFile[0] and aFile[1] variables within this structure,
+** sets the PmaReader fields to read from the new aFile[0] and kicks off
+** another background thread to populate the new aFile[1]. And so on, until
+** the contents of pMerger are exhausted.
+**
+** A single-threaded IncrMerger does not open any temporary files of its
+** own. Instead, it has exclusive access to mxSz bytes of space beginning
+** at offset iStartOff of file pTask->file2.
And instead of using a +** background thread to prepare data for the PmaReader, with a single +** threaded IncrMerger the allocate part of pTask->file2 is "refilled" with +** keys from pMerger by the calling thread whenever the PmaReader runs out +** of data. +*/ +struct IncrMerger { + SortSubtask *pTask; /* Task that owns this merger */ + MergeEngine *pMerger; /* Merge engine thread reads data from */ + i64 iStartOff; /* Offset to start writing file at */ + int mxSz; /* Maximum bytes of data to store */ + int bEof; /* Set to true when merge is finished */ + int bUseThread; /* True to use a bg thread for this object */ + SorterFile aFile[2]; /* aFile[0] for reading, [1] for writing */ +}; + +/* +** An instance of this object is used for writing a PMA. +** +** The PMA is written one record at a time. Each record is of an arbitrary +** size. But I/O is more efficient if it occurs in page-sized blocks where +** each block is aligned on a page boundary. This object caches writes to +** the PMA so that aligned, page-size blocks are written. +*/ +struct PmaWriter { + int eFWErr; /* Non-zero if in an error state */ + u8 *aBuffer; /* Pointer to write buffer */ + int nBuffer; /* Size of write buffer in bytes */ + int iBufStart; /* First byte of buffer to write */ + int iBufEnd; /* Last byte of buffer to write */ + i64 iWriteOff; /* Offset of start of buffer in file */ + sqlite3_file *pFd; /* File handle to write to */ +}; + +/* +** This object is the header on a single record while that record is being +** held in memory and prior to being written out as part of a PMA. +** +** How the linked list is connected depends on how memory is being managed +** by this module. If using a separate allocation for each in-memory record +** (VdbeSorter.list.aMemory==0), then the list is always connected using the +** SorterRecord.u.pNext pointers. +** +** Or, if using the single large allocation method (VdbeSorter.list.aMemory!=0), +** then while records are being accumulated the list is linked using the +** SorterRecord.u.iNext offset. This is because the aMemory[] array may +** be sqlite3Realloc()ed while records are being accumulated. Once the VM +** has finished passing records to the sorter, or when the in-memory buffer +** is full, the list is sorted. As part of the sorting process, it is +** converted to use the SorterRecord.u.pNext pointers. See function +** vdbeSorterSort() for details. +*/ +struct SorterRecord { + int nVal; /* Size of the record in bytes */ + union { + SorterRecord *pNext; /* Pointer to next record in list */ + int iNext; /* Offset within aMemory of next record */ + } u; + /* The data for the record immediately follows this header */ +}; + +/* Return a pointer to the buffer containing the record data for SorterRecord +** object p. Should be used as if: +** +** void *SRVAL(SorterRecord *p) { return (void*)&p[1]; } +*/ +#define SRVAL(p) ((void*)((SorterRecord*)(p) + 1)) + + +/* Maximum number of PMAs that a single MergeEngine can merge */ +#define SORTER_MAX_MERGE_COUNT 16 + +static int vdbeIncrSwap(IncrMerger*); +static void vdbeIncrFree(IncrMerger *); + +/* +** Free all memory belonging to the PmaReader object passed as the +** argument. All structure fields are set to zero before returning. 
+*/ +static void vdbePmaReaderClear(PmaReader *pReadr){ + sqlite3_free(pReadr->aAlloc); + sqlite3_free(pReadr->aBuffer); + if( pReadr->aMap ) sqlite3OsUnfetch(pReadr->pFd, 0, pReadr->aMap); + vdbeIncrFree(pReadr->pIncr); + memset(pReadr, 0, sizeof(PmaReader)); +} + +/* +** Read the next nByte bytes of data from the PMA p. +** If successful, set *ppOut to point to a buffer containing the data +** and return SQLITE_OK. Otherwise, if an error occurs, return an SQLite +** error code. +** +** The buffer returned in *ppOut is only valid until the +** next call to this function. +*/ +static int vdbePmaReadBlob( + PmaReader *p, /* PmaReader from which to take the blob */ + int nByte, /* Bytes of data to read */ + u8 **ppOut /* OUT: Pointer to buffer containing data */ +){ + int iBuf; /* Offset within buffer to read from */ + int nAvail; /* Bytes of data available in buffer */ + + if( p->aMap ){ + *ppOut = &p->aMap[p->iReadOff]; + p->iReadOff += nByte; + return SQLITE_OK; + } + + assert( p->aBuffer ); + + /* If there is no more data to be read from the buffer, read the next + ** p->nBuffer bytes of data from the file into it. Or, if there are less + ** than p->nBuffer bytes remaining in the PMA, read all remaining data. */ + iBuf = p->iReadOff % p->nBuffer; + if( iBuf==0 ){ + int nRead; /* Bytes to read from disk */ + int rc; /* sqlite3OsRead() return code */ + + /* Determine how many bytes of data to read. */ + if( (p->iEof - p->iReadOff) > (i64)p->nBuffer ){ + nRead = p->nBuffer; + }else{ + nRead = (int)(p->iEof - p->iReadOff); + } + assert( nRead>0 ); + + /* Readr data from the file. Return early if an error occurs. */ + rc = sqlite3OsRead(p->pFd, p->aBuffer, nRead, p->iReadOff); + assert( rc!=SQLITE_IOERR_SHORT_READ ); + if( rc!=SQLITE_OK ) return rc; + } + nAvail = p->nBuffer - iBuf; + + if( nByte<=nAvail ){ + /* The requested data is available in the in-memory buffer. In this + ** case there is no need to make a copy of the data, just return a + ** pointer into the buffer to the caller. */ + *ppOut = &p->aBuffer[iBuf]; + p->iReadOff += nByte; + }else{ + /* The requested data is not all available in the in-memory buffer. + ** In this case, allocate space at p->aAlloc[] to copy the requested + ** range into. Then return a copy of pointer p->aAlloc to the caller. */ + int nRem; /* Bytes remaining to copy */ + + /* Extend the p->aAlloc[] allocation if required. */ + if( p->nAlloc<nByte ){ + u8 *aNew; + int nNew = MAX(128, p->nAlloc*2); + while( nByte>nNew ) nNew = nNew*2; + aNew = sqlite3Realloc(p->aAlloc, nNew); + if( !aNew ) return SQLITE_NOMEM; + p->nAlloc = nNew; + p->aAlloc = aNew; + } + + /* Copy as much data as is available in the buffer into the start of + ** p->aAlloc[]. */ + memcpy(p->aAlloc, &p->aBuffer[iBuf], nAvail); + p->iReadOff += nAvail; + nRem = nByte - nAvail; + + /* The following loop copies up to p->nBuffer bytes per iteration into + ** the p->aAlloc[] buffer. */ + while( nRem>0 ){ + int rc; /* vdbePmaReadBlob() return code */ + int nCopy; /* Number of bytes to copy */ + u8 *aNext; /* Pointer to buffer to copy data from */ + + nCopy = nRem; + if( nRem>p->nBuffer ) nCopy = p->nBuffer; + rc = vdbePmaReadBlob(p, nCopy, &aNext); + if( rc!=SQLITE_OK ) return rc; + assert( aNext!=p->aAlloc ); + memcpy(&p->aAlloc[nByte - nRem], aNext, nCopy); + nRem -= nCopy; + } + + *ppOut = p->aAlloc; + } + + return SQLITE_OK; +} + +/* +** Read a varint from the stream of data accessed by p. Set *pnOut to +** the value read. 
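+**
+** For reference (sqlite3GetVarint() is the authoritative implementation):
+** SQLite varints are big-endian and 1-9 bytes long. Each of the first
+** eight bytes supplies 7 bits of the value, with the 0x80 bit set on every
+** byte except the last; a ninth byte, if present, supplies a full 8 bits.
+** For example (illustrative values only):
+**
+**     0x07         ->  7
+**     0x81 0x48    ->  (1<<7) + 0x48  ==  200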
+*/ +static int vdbePmaReadVarint(PmaReader *p, u64 *pnOut){ + int iBuf; + + if( p->aMap ){ + p->iReadOff += sqlite3GetVarint(&p->aMap[p->iReadOff], pnOut); + }else{ + iBuf = p->iReadOff % p->nBuffer; + if( iBuf && (p->nBuffer-iBuf)>=9 ){ + p->iReadOff += sqlite3GetVarint(&p->aBuffer[iBuf], pnOut); + }else{ + u8 aVarint[16], *a; + int i = 0, rc; + do{ + rc = vdbePmaReadBlob(p, 1, &a); + if( rc ) return rc; + aVarint[(i++)&0xf] = a[0]; + }while( (a[0]&0x80)!=0 ); + sqlite3GetVarint(aVarint, pnOut); + } + } + + return SQLITE_OK; +} + +/* +** Attempt to memory map file pFile. If successful, set *pp to point to the +** new mapping and return SQLITE_OK. If the mapping is not attempted +** (because the file is too large or the VFS layer is configured not to use +** mmap), return SQLITE_OK and set *pp to NULL. +** +** Or, if an error occurs, return an SQLite error code. The final value of +** *pp is undefined in this case. +*/ +static int vdbeSorterMapFile(SortSubtask *pTask, SorterFile *pFile, u8 **pp){ + int rc = SQLITE_OK; + if( pFile->iEof<=(i64)(pTask->pSorter->db->nMaxSorterMmap) ){ + sqlite3_file *pFd = pFile->pFd; + if( pFd->pMethods->iVersion>=3 ){ + rc = sqlite3OsFetch(pFd, 0, (int)pFile->iEof, (void**)pp); + testcase( rc!=SQLITE_OK ); + } + } + return rc; +} + +/* +** Attach PmaReader pReadr to file pFile (if it is not already attached to +** that file) and seek it to offset iOff within the file. Return SQLITE_OK +** if successful, or an SQLite error code if an error occurs. +*/ +static int vdbePmaReaderSeek( + SortSubtask *pTask, /* Task context */ + PmaReader *pReadr, /* Reader whose cursor is to be moved */ + SorterFile *pFile, /* Sorter file to read from */ + i64 iOff /* Offset in pFile */ +){ + int rc = SQLITE_OK; + + assert( pReadr->pIncr==0 || pReadr->pIncr->bEof==0 ); + + if( sqlite3FaultSim(201) ) return SQLITE_IOERR_READ; + if( pReadr->aMap ){ + sqlite3OsUnfetch(pReadr->pFd, 0, pReadr->aMap); + pReadr->aMap = 0; + } + pReadr->iReadOff = iOff; + pReadr->iEof = pFile->iEof; + pReadr->pFd = pFile->pFd; + + rc = vdbeSorterMapFile(pTask, pFile, &pReadr->aMap); + if( rc==SQLITE_OK && pReadr->aMap==0 ){ + int pgsz = pTask->pSorter->pgsz; + int iBuf = pReadr->iReadOff % pgsz; + if( pReadr->aBuffer==0 ){ + pReadr->aBuffer = (u8*)sqlite3Malloc(pgsz); + if( pReadr->aBuffer==0 ) rc = SQLITE_NOMEM; + pReadr->nBuffer = pgsz; + } + if( rc==SQLITE_OK && iBuf ){ + int nRead = pgsz - iBuf; + if( (pReadr->iReadOff + nRead) > pReadr->iEof ){ + nRead = (int)(pReadr->iEof - pReadr->iReadOff); + } + rc = sqlite3OsRead( + pReadr->pFd, &pReadr->aBuffer[iBuf], nRead, pReadr->iReadOff + ); + testcase( rc!=SQLITE_OK ); + } + } + + return rc; +} + +/* +** Advance PmaReader pReadr to the next key in its PMA. Return SQLITE_OK if +** no error occurs, or an SQLite error code if one does. 
+*/ +static int vdbePmaReaderNext(PmaReader *pReadr){ + int rc = SQLITE_OK; /* Return Code */ + u64 nRec = 0; /* Size of record in bytes */ + + + if( pReadr->iReadOff>=pReadr->iEof ){ + IncrMerger *pIncr = pReadr->pIncr; + int bEof = 1; + if( pIncr ){ + rc = vdbeIncrSwap(pIncr); + if( rc==SQLITE_OK && pIncr->bEof==0 ){ + rc = vdbePmaReaderSeek( + pIncr->pTask, pReadr, &pIncr->aFile[0], pIncr->iStartOff + ); + bEof = 0; + } + } + + if( bEof ){ + /* This is an EOF condition */ + vdbePmaReaderClear(pReadr); + testcase( rc!=SQLITE_OK ); + return rc; + } + } + + if( rc==SQLITE_OK ){ + rc = vdbePmaReadVarint(pReadr, &nRec); + } + if( rc==SQLITE_OK ){ + pReadr->nKey = (int)nRec; + rc = vdbePmaReadBlob(pReadr, (int)nRec, &pReadr->aKey); + testcase( rc!=SQLITE_OK ); + } + + return rc; +} + +/* +** Initialize PmaReader pReadr to scan through the PMA stored in file pFile +** starting at offset iStart and ending at offset iEof-1. This function +** leaves the PmaReader pointing to the first key in the PMA (or EOF if the +** PMA is empty). +** +** If the pnByte parameter is NULL, then it is assumed that the file +** contains a single PMA, and that that PMA omits the initial length varint. +*/ +static int vdbePmaReaderInit( + SortSubtask *pTask, /* Task context */ + SorterFile *pFile, /* Sorter file to read from */ + i64 iStart, /* Start offset in pFile */ + PmaReader *pReadr, /* PmaReader to populate */ + i64 *pnByte /* IN/OUT: Increment this value by PMA size */ +){ + int rc; + + assert( pFile->iEof>iStart ); + assert( pReadr->aAlloc==0 && pReadr->nAlloc==0 ); + assert( pReadr->aBuffer==0 ); + assert( pReadr->aMap==0 ); + + rc = vdbePmaReaderSeek(pTask, pReadr, pFile, iStart); + if( rc==SQLITE_OK ){ + u64 nByte = 0; /* Size of PMA in bytes */ + rc = vdbePmaReadVarint(pReadr, &nByte); + pReadr->iEof = pReadr->iReadOff + nByte; + *pnByte += nByte; + } + + if( rc==SQLITE_OK ){ + rc = vdbePmaReaderNext(pReadr); + } + return rc; +} + +/* +** A version of vdbeSorterCompare() that assumes that it has already been +** determined that the first field of key1 is equal to the first field of +** key2. +*/ +static int vdbeSorterCompareTail( + SortSubtask *pTask, /* Subtask context (for pKeyInfo) */ + int *pbKey2Cached, /* True if pTask->pUnpacked is pKey2 */ + const void *pKey1, int nKey1, /* Left side of comparison */ + const void *pKey2, int nKey2 /* Right side of comparison */ +){ + UnpackedRecord *r2 = pTask->pUnpacked; + if( *pbKey2Cached==0 ){ + sqlite3VdbeRecordUnpack(pTask->pSorter->pKeyInfo, nKey2, pKey2, r2); + *pbKey2Cached = 1; + } + return sqlite3VdbeRecordCompareWithSkip(nKey1, pKey1, r2, 1); +} + +/* +** Compare key1 (buffer pKey1, size nKey1 bytes) with key2 (buffer pKey2, +** size nKey2 bytes). Use (pTask->pKeyInfo) for the collation sequences +** used by the comparison. Return the result of the comparison. +** +** If IN/OUT parameter *pbKey2Cached is true when this function is called, +** it is assumed that (pTask->pUnpacked) contains the unpacked version +** of key2. If it is false, (pTask->pUnpacked) is populated with the unpacked +** version of key2 and *pbKey2Cached set to true before returning. +** +** If an OOM error is encountered, (pTask->pUnpacked->error_rc) is set +** to SQLITE_NOMEM. 
+*/ +static int vdbeSorterCompare( + SortSubtask *pTask, /* Subtask context (for pKeyInfo) */ + int *pbKey2Cached, /* True if pTask->pUnpacked is pKey2 */ + const void *pKey1, int nKey1, /* Left side of comparison */ + const void *pKey2, int nKey2 /* Right side of comparison */ +){ + UnpackedRecord *r2 = pTask->pUnpacked; + if( !*pbKey2Cached ){ + sqlite3VdbeRecordUnpack(pTask->pSorter->pKeyInfo, nKey2, pKey2, r2); + *pbKey2Cached = 1; + } + return sqlite3VdbeRecordCompare(nKey1, pKey1, r2); +} + +/* +** A specially optimized version of vdbeSorterCompare() that assumes that +** the first field of each key is a TEXT value and that the collation +** sequence to compare them with is BINARY. +*/ +static int vdbeSorterCompareText( + SortSubtask *pTask, /* Subtask context (for pKeyInfo) */ + int *pbKey2Cached, /* True if pTask->pUnpacked is pKey2 */ + const void *pKey1, int nKey1, /* Left side of comparison */ + const void *pKey2, int nKey2 /* Right side of comparison */ +){ + const u8 * const p1 = (const u8 * const)pKey1; + const u8 * const p2 = (const u8 * const)pKey2; + const u8 * const v1 = &p1[ p1[0] ]; /* Pointer to value 1 */ + const u8 * const v2 = &p2[ p2[0] ]; /* Pointer to value 2 */ + + int n1; + int n2; + int res; + + getVarint32(&p1[1], n1); n1 = (n1 - 13) / 2; + getVarint32(&p2[1], n2); n2 = (n2 - 13) / 2; + res = memcmp(v1, v2, MIN(n1, n2)); + if( res==0 ){ + res = n1 - n2; + } + + if( res==0 ){ + if( pTask->pSorter->pKeyInfo->nField>1 ){ + res = vdbeSorterCompareTail( + pTask, pbKey2Cached, pKey1, nKey1, pKey2, nKey2 + ); + } + }else{ + if( pTask->pSorter->pKeyInfo->aSortOrder[0] ){ + res = res * -1; + } + } + + return res; +} + +/* +** A specially optimized version of vdbeSorterCompare() that assumes that +** the first field of each key is an INTEGER value. +*/ +static int vdbeSorterCompareInt( + SortSubtask *pTask, /* Subtask context (for pKeyInfo) */ + int *pbKey2Cached, /* True if pTask->pUnpacked is pKey2 */ + const void *pKey1, int nKey1, /* Left side of comparison */ + const void *pKey2, int nKey2 /* Right side of comparison */ +){ + const u8 * const p1 = (const u8 * const)pKey1; + const u8 * const p2 = (const u8 * const)pKey2; + const int s1 = p1[1]; /* Left hand serial type */ + const int s2 = p2[1]; /* Right hand serial type */ + const u8 * const v1 = &p1[ p1[0] ]; /* Pointer to value 1 */ + const u8 * const v2 = &p2[ p2[0] ]; /* Pointer to value 2 */ + int res; /* Return value */ + + assert( (s1>0 && s1<7) || s1==8 || s1==9 ); + assert( (s2>0 && s2<7) || s2==8 || s2==9 ); + + if( s1>7 && s2>7 ){ + res = s1 - s2; + }else{ + if( s1==s2 ){ + if( (*v1 ^ *v2) & 0x80 ){ + /* The two values have different signs */ + res = (*v1 & 0x80) ? -1 : +1; + }else{ + /* The two values have the same sign. Compare using memcmp(). */ + static const u8 aLen[] = {0, 1, 2, 3, 4, 6, 8 }; + int i; + res = 0; + for(i=0; i<aLen[s1]; i++){ + if( (res = v1[i] - v2[i]) ) break; + } + } + }else{ + if( s2>7 ){ + res = +1; + }else if( s1>7 ){ + res = -1; + }else{ + res = s1 - s2; + } + assert( res!=0 ); + + if( res>0 ){ + if( *v1 & 0x80 ) res = -1; + }else{ + if( *v2 & 0x80 ) res = +1; + } + } + } + + if( res==0 ){ + if( pTask->pSorter->pKeyInfo->nField>1 ){ + res = vdbeSorterCompareTail( + pTask, pbKey2Cached, pKey1, nKey1, pKey2, nKey2 + ); + } + }else if( pTask->pSorter->pKeyInfo->aSortOrder[0] ){ + res = res * -1; + } + + return res; +} + +/* +** Initialize the temporary index cursor just opened as a sorter cursor. 
+** +** Usually, the sorter module uses the value of (pCsr->pKeyInfo->nField) +** to determine the number of fields that should be compared from the +** records being sorted. However, if the value passed as argument nField +** is non-zero and the sorter is able to guarantee a stable sort, nField +** is used instead. This is used when sorting records for a CREATE INDEX +** statement. In this case, keys are always delivered to the sorter in +** order of the primary key, which happens to be make up the final part +** of the records being sorted. So if the sort is stable, there is never +** any reason to compare PK fields and they can be ignored for a small +** performance boost. +** +** The sorter can guarantee a stable sort when running in single-threaded +** mode, but not in multi-threaded mode. +** +** SQLITE_OK is returned if successful, or an SQLite error code otherwise. +*/ +SQLITE_PRIVATE int sqlite3VdbeSorterInit( + sqlite3 *db, /* Database connection (for malloc()) */ + int nField, /* Number of key fields in each record */ + VdbeCursor *pCsr /* Cursor that holds the new sorter */ +){ + int pgsz; /* Page size of main database */ + int i; /* Used to iterate through aTask[] */ + int mxCache; /* Cache size */ + VdbeSorter *pSorter; /* The new sorter */ + KeyInfo *pKeyInfo; /* Copy of pCsr->pKeyInfo with db==0 */ + int szKeyInfo; /* Size of pCsr->pKeyInfo in bytes */ + int sz; /* Size of pSorter in bytes */ + int rc = SQLITE_OK; +#if SQLITE_MAX_WORKER_THREADS==0 +# define nWorker 0 +#else + int nWorker; +#endif + + /* Initialize the upper limit on the number of worker threads */ +#if SQLITE_MAX_WORKER_THREADS>0 + if( sqlite3TempInMemory(db) || sqlite3GlobalConfig.bCoreMutex==0 ){ + nWorker = 0; + }else{ + nWorker = db->aLimit[SQLITE_LIMIT_WORKER_THREADS]; + } +#endif + + /* Do not allow the total number of threads (main thread + all workers) + ** to exceed the maximum merge count */ +#if SQLITE_MAX_WORKER_THREADS>=SORTER_MAX_MERGE_COUNT + if( nWorker>=SORTER_MAX_MERGE_COUNT ){ + nWorker = SORTER_MAX_MERGE_COUNT-1; + } +#endif + + assert( pCsr->pKeyInfo && pCsr->pBt==0 ); + assert( pCsr->eCurType==CURTYPE_SORTER ); + szKeyInfo = sizeof(KeyInfo) + (pCsr->pKeyInfo->nField-1)*sizeof(CollSeq*); + sz = sizeof(VdbeSorter) + nWorker * sizeof(SortSubtask); + + pSorter = (VdbeSorter*)sqlite3DbMallocZero(db, sz + szKeyInfo); + pCsr->uc.pSorter = pSorter; + if( pSorter==0 ){ + rc = SQLITE_NOMEM; + }else{ + pSorter->pKeyInfo = pKeyInfo = (KeyInfo*)((u8*)pSorter + sz); + memcpy(pKeyInfo, pCsr->pKeyInfo, szKeyInfo); + pKeyInfo->db = 0; + if( nField && nWorker==0 ){ + pKeyInfo->nXField += (pKeyInfo->nField - nField); + pKeyInfo->nField = nField; + } + pSorter->pgsz = pgsz = sqlite3BtreeGetPageSize(db->aDb[0].pBt); + pSorter->nTask = nWorker + 1; + pSorter->iPrev = (u8)(nWorker - 1); + pSorter->bUseThreads = (pSorter->nTask>1); + pSorter->db = db; + for(i=0; i<pSorter->nTask; i++){ + SortSubtask *pTask = &pSorter->aTask[i]; + pTask->pSorter = pSorter; + } + + if( !sqlite3TempInMemory(db) ){ + u32 szPma = sqlite3GlobalConfig.szPma; + pSorter->mnPmaSize = szPma * pgsz; + mxCache = db->aDb[0].pSchema->cache_size; + if( mxCache<(int)szPma ) mxCache = (int)szPma; + pSorter->mxPmaSize = MIN((i64)mxCache*pgsz, SQLITE_MAX_PMASZ); + + /* EVIDENCE-OF: R-26747-61719 When the application provides any amount of + ** scratch memory using SQLITE_CONFIG_SCRATCH, SQLite avoids unnecessary + ** large heap allocations. 
+ */ + if( sqlite3GlobalConfig.pScratch==0 ){ + assert( pSorter->iMemory==0 ); + pSorter->nMemory = pgsz; + pSorter->list.aMemory = (u8*)sqlite3Malloc(pgsz); + if( !pSorter->list.aMemory ) rc = SQLITE_NOMEM; + } + } + + if( (pKeyInfo->nField+pKeyInfo->nXField)<13 + && (pKeyInfo->aColl[0]==0 || pKeyInfo->aColl[0]==db->pDfltColl) + ){ + pSorter->typeMask = SORTER_TYPE_INTEGER | SORTER_TYPE_TEXT; + } + } + + return rc; +} +#undef nWorker /* Defined at the top of this function */ + +/* +** Free the list of sorted records starting at pRecord. +*/ +static void vdbeSorterRecordFree(sqlite3 *db, SorterRecord *pRecord){ + SorterRecord *p; + SorterRecord *pNext; + for(p=pRecord; p; p=pNext){ + pNext = p->u.pNext; + sqlite3DbFree(db, p); + } +} + +/* +** Free all resources owned by the object indicated by argument pTask. All +** fields of *pTask are zeroed before returning. +*/ +static void vdbeSortSubtaskCleanup(sqlite3 *db, SortSubtask *pTask){ + sqlite3DbFree(db, pTask->pUnpacked); +#if SQLITE_MAX_WORKER_THREADS>0 + /* pTask->list.aMemory can only be non-zero if it was handed memory + ** from the main thread. That only occurs SQLITE_MAX_WORKER_THREADS>0 */ + if( pTask->list.aMemory ){ + sqlite3_free(pTask->list.aMemory); + }else +#endif + { + assert( pTask->list.aMemory==0 ); + vdbeSorterRecordFree(0, pTask->list.pList); + } + if( pTask->file.pFd ){ + sqlite3OsCloseFree(pTask->file.pFd); + } + if( pTask->file2.pFd ){ + sqlite3OsCloseFree(pTask->file2.pFd); + } + memset(pTask, 0, sizeof(SortSubtask)); +} + +#ifdef SQLITE_DEBUG_SORTER_THREADS +static void vdbeSorterWorkDebug(SortSubtask *pTask, const char *zEvent){ + i64 t; + int iTask = (pTask - pTask->pSorter->aTask); + sqlite3OsCurrentTimeInt64(pTask->pSorter->db->pVfs, &t); + fprintf(stderr, "%lld:%d %s\n", t, iTask, zEvent); +} +static void vdbeSorterRewindDebug(const char *zEvent){ + i64 t; + sqlite3OsCurrentTimeInt64(sqlite3_vfs_find(0), &t); + fprintf(stderr, "%lld:X %s\n", t, zEvent); +} +static void vdbeSorterPopulateDebug( + SortSubtask *pTask, + const char *zEvent +){ + i64 t; + int iTask = (pTask - pTask->pSorter->aTask); + sqlite3OsCurrentTimeInt64(pTask->pSorter->db->pVfs, &t); + fprintf(stderr, "%lld:bg%d %s\n", t, iTask, zEvent); +} +static void vdbeSorterBlockDebug( + SortSubtask *pTask, + int bBlocked, + const char *zEvent +){ + if( bBlocked ){ + i64 t; + sqlite3OsCurrentTimeInt64(pTask->pSorter->db->pVfs, &t); + fprintf(stderr, "%lld:main %s\n", t, zEvent); + } +} +#else +# define vdbeSorterWorkDebug(x,y) +# define vdbeSorterRewindDebug(y) +# define vdbeSorterPopulateDebug(x,y) +# define vdbeSorterBlockDebug(x,y,z) +#endif + +#if SQLITE_MAX_WORKER_THREADS>0 +/* +** Join thread pTask->thread. +*/ +static int vdbeSorterJoinThread(SortSubtask *pTask){ + int rc = SQLITE_OK; + if( pTask->pThread ){ +#ifdef SQLITE_DEBUG_SORTER_THREADS + int bDone = pTask->bDone; +#endif + void *pRet = SQLITE_INT_TO_PTR(SQLITE_ERROR); + vdbeSorterBlockDebug(pTask, !bDone, "enter"); + (void)sqlite3ThreadJoin(pTask->pThread, &pRet); + vdbeSorterBlockDebug(pTask, !bDone, "exit"); + rc = SQLITE_PTR_TO_INT(pRet); + assert( pTask->bDone==1 ); + pTask->bDone = 0; + pTask->pThread = 0; + } + return rc; +} + +/* +** Launch a background thread to run xTask(pIn). 
+*/ +static int vdbeSorterCreateThread( + SortSubtask *pTask, /* Thread will use this task object */ + void *(*xTask)(void*), /* Routine to run in a separate thread */ + void *pIn /* Argument passed into xTask() */ +){ + assert( pTask->pThread==0 && pTask->bDone==0 ); + return sqlite3ThreadCreate(&pTask->pThread, xTask, pIn); +} + +/* +** Join all outstanding threads launched by SorterWrite() to create +** level-0 PMAs. +*/ +static int vdbeSorterJoinAll(VdbeSorter *pSorter, int rcin){ + int rc = rcin; + int i; + + /* This function is always called by the main user thread. + ** + ** If this function is being called after SorterRewind() has been called, + ** it is possible that thread pSorter->aTask[pSorter->nTask-1].pThread + ** is currently attempt to join one of the other threads. To avoid a race + ** condition where this thread also attempts to join the same object, join + ** thread pSorter->aTask[pSorter->nTask-1].pThread first. */ + for(i=pSorter->nTask-1; i>=0; i--){ + SortSubtask *pTask = &pSorter->aTask[i]; + int rc2 = vdbeSorterJoinThread(pTask); + if( rc==SQLITE_OK ) rc = rc2; + } + return rc; +} +#else +# define vdbeSorterJoinAll(x,rcin) (rcin) +# define vdbeSorterJoinThread(pTask) SQLITE_OK +#endif + +/* +** Allocate a new MergeEngine object capable of handling up to +** nReader PmaReader inputs. +** +** nReader is automatically rounded up to the next power of two. +** nReader may not exceed SORTER_MAX_MERGE_COUNT even after rounding up. +*/ +static MergeEngine *vdbeMergeEngineNew(int nReader){ + int N = 2; /* Smallest power of two >= nReader */ + int nByte; /* Total bytes of space to allocate */ + MergeEngine *pNew; /* Pointer to allocated object to return */ + + assert( nReader<=SORTER_MAX_MERGE_COUNT ); + + while( N<nReader ) N += N; + nByte = sizeof(MergeEngine) + N * (sizeof(int) + sizeof(PmaReader)); + + pNew = sqlite3FaultSim(100) ? 0 : (MergeEngine*)sqlite3MallocZero(nByte); + if( pNew ){ + pNew->nTree = N; + pNew->pTask = 0; + pNew->aReadr = (PmaReader*)&pNew[1]; + pNew->aTree = (int*)&pNew->aReadr[N]; + } + return pNew; +} + +/* +** Free the MergeEngine object passed as the only argument. +*/ +static void vdbeMergeEngineFree(MergeEngine *pMerger){ + int i; + if( pMerger ){ + for(i=0; i<pMerger->nTree; i++){ + vdbePmaReaderClear(&pMerger->aReadr[i]); + } + } + sqlite3_free(pMerger); +} + +/* +** Free all resources associated with the IncrMerger object indicated by +** the first argument. +*/ +static void vdbeIncrFree(IncrMerger *pIncr){ + if( pIncr ){ +#if SQLITE_MAX_WORKER_THREADS>0 + if( pIncr->bUseThread ){ + vdbeSorterJoinThread(pIncr->pTask); + if( pIncr->aFile[0].pFd ) sqlite3OsCloseFree(pIncr->aFile[0].pFd); + if( pIncr->aFile[1].pFd ) sqlite3OsCloseFree(pIncr->aFile[1].pFd); + } +#endif + vdbeMergeEngineFree(pIncr->pMerger); + sqlite3_free(pIncr); + } +} + +/* +** Reset a sorting cursor back to its original empty state. 
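+**
+** In terms of the interface described at the top of this file, a sorter
+** that is reset and then reused goes through a sequence like the following
+** (an illustrative sketch only):
+**
+**     Init()
+**        Write() ... Rewind() ... Rowkey()/Next() ...
+**     Reset()
+**        Write() ... Rewind() ... Rowkey()/Next() ...
+**     Close()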
+*/ +SQLITE_PRIVATE void sqlite3VdbeSorterReset(sqlite3 *db, VdbeSorter *pSorter){ + int i; + (void)vdbeSorterJoinAll(pSorter, SQLITE_OK); + assert( pSorter->bUseThreads || pSorter->pReader==0 ); +#if SQLITE_MAX_WORKER_THREADS>0 + if( pSorter->pReader ){ + vdbePmaReaderClear(pSorter->pReader); + sqlite3DbFree(db, pSorter->pReader); + pSorter->pReader = 0; + } +#endif + vdbeMergeEngineFree(pSorter->pMerger); + pSorter->pMerger = 0; + for(i=0; i<pSorter->nTask; i++){ + SortSubtask *pTask = &pSorter->aTask[i]; + vdbeSortSubtaskCleanup(db, pTask); + pTask->pSorter = pSorter; + } + if( pSorter->list.aMemory==0 ){ + vdbeSorterRecordFree(0, pSorter->list.pList); + } + pSorter->list.pList = 0; + pSorter->list.szPMA = 0; + pSorter->bUsePMA = 0; + pSorter->iMemory = 0; + pSorter->mxKeysize = 0; + sqlite3DbFree(db, pSorter->pUnpacked); + pSorter->pUnpacked = 0; +} + +/* +** Free any cursor components allocated by sqlite3VdbeSorterXXX routines. +*/ +SQLITE_PRIVATE void sqlite3VdbeSorterClose(sqlite3 *db, VdbeCursor *pCsr){ + VdbeSorter *pSorter; + assert( pCsr->eCurType==CURTYPE_SORTER ); + pSorter = pCsr->uc.pSorter; + if( pSorter ){ + sqlite3VdbeSorterReset(db, pSorter); + sqlite3_free(pSorter->list.aMemory); + sqlite3DbFree(db, pSorter); + pCsr->uc.pSorter = 0; + } +} + +#if SQLITE_MAX_MMAP_SIZE>0 +/* +** The first argument is a file-handle open on a temporary file. The file +** is guaranteed to be nByte bytes or smaller in size. This function +** attempts to extend the file to nByte bytes in size and to ensure that +** the VFS has memory mapped it. +** +** Whether or not the file does end up memory mapped of course depends on +** the specific VFS implementation. +*/ +static void vdbeSorterExtendFile(sqlite3 *db, sqlite3_file *pFd, i64 nByte){ + if( nByte<=(i64)(db->nMaxSorterMmap) && pFd->pMethods->iVersion>=3 ){ + void *p = 0; + int chunksize = 4*1024; + sqlite3OsFileControlHint(pFd, SQLITE_FCNTL_CHUNK_SIZE, &chunksize); + sqlite3OsFileControlHint(pFd, SQLITE_FCNTL_SIZE_HINT, &nByte); + sqlite3OsFetch(pFd, 0, (int)nByte, &p); + sqlite3OsUnfetch(pFd, 0, p); + } +} +#else +# define vdbeSorterExtendFile(x,y,z) +#endif + +/* +** Allocate space for a file-handle and open a temporary file. If successful, +** set *ppFd to point to the malloc'd file-handle and return SQLITE_OK. +** Otherwise, set *ppFd to 0 and return an SQLite error code. +*/ +static int vdbeSorterOpenTempFile( + sqlite3 *db, /* Database handle doing sort */ + i64 nExtend, /* Attempt to extend file to this size */ + sqlite3_file **ppFd +){ + int rc; + if( sqlite3FaultSim(202) ) return SQLITE_IOERR_ACCESS; + rc = sqlite3OsOpenMalloc(db->pVfs, 0, ppFd, + SQLITE_OPEN_TEMP_JOURNAL | + SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | + SQLITE_OPEN_EXCLUSIVE | SQLITE_OPEN_DELETEONCLOSE, &rc + ); + if( rc==SQLITE_OK ){ + i64 max = SQLITE_MAX_MMAP_SIZE; + sqlite3OsFileControlHint(*ppFd, SQLITE_FCNTL_MMAP_SIZE, (void*)&max); + if( nExtend>0 ){ + vdbeSorterExtendFile(db, *ppFd, nExtend); + } + } + return rc; +} + +/* +** If it has not already been allocated, allocate the UnpackedRecord +** structure at pTask->pUnpacked. Return SQLITE_OK if successful (or +** if no allocation was required), or SQLITE_NOMEM otherwise. 
+*/ +static int vdbeSortAllocUnpacked(SortSubtask *pTask){ + if( pTask->pUnpacked==0 ){ + char *pFree; + pTask->pUnpacked = sqlite3VdbeAllocUnpackedRecord( + pTask->pSorter->pKeyInfo, 0, 0, &pFree + ); + assert( pTask->pUnpacked==(UnpackedRecord*)pFree ); + if( pFree==0 ) return SQLITE_NOMEM; + pTask->pUnpacked->nField = pTask->pSorter->pKeyInfo->nField; + pTask->pUnpacked->errCode = 0; + } + return SQLITE_OK; +} + + +/* +** Merge the two sorted lists p1 and p2 into a single list. +** Set *ppOut to the head of the new list. +*/ +static void vdbeSorterMerge( + SortSubtask *pTask, /* Calling thread context */ + SorterRecord *p1, /* First list to merge */ + SorterRecord *p2, /* Second list to merge */ + SorterRecord **ppOut /* OUT: Head of merged list */ +){ + SorterRecord *pFinal = 0; + SorterRecord **pp = &pFinal; + int bCached = 0; + + while( p1 && p2 ){ + int res; + res = pTask->xCompare( + pTask, &bCached, SRVAL(p1), p1->nVal, SRVAL(p2), p2->nVal + ); + + if( res<=0 ){ + *pp = p1; + pp = &p1->u.pNext; + p1 = p1->u.pNext; + }else{ + *pp = p2; + pp = &p2->u.pNext; + p2 = p2->u.pNext; + bCached = 0; + } + } + *pp = p1 ? p1 : p2; + *ppOut = pFinal; +} + +/* +** Return the SorterCompare function to compare values collected by the +** sorter object passed as the only argument. +*/ +static SorterCompare vdbeSorterGetCompare(VdbeSorter *p){ + if( p->typeMask==SORTER_TYPE_INTEGER ){ + return vdbeSorterCompareInt; + }else if( p->typeMask==SORTER_TYPE_TEXT ){ + return vdbeSorterCompareText; + } + return vdbeSorterCompare; +} + +/* +** Sort the linked list of records headed at pTask->pList. Return +** SQLITE_OK if successful, or an SQLite error code (i.e. SQLITE_NOMEM) if +** an error occurs. +*/ +static int vdbeSorterSort(SortSubtask *pTask, SorterList *pList){ + int i; + SorterRecord **aSlot; + SorterRecord *p; + int rc; + + rc = vdbeSortAllocUnpacked(pTask); + if( rc!=SQLITE_OK ) return rc; + + p = pList->pList; + pTask->xCompare = vdbeSorterGetCompare(pTask->pSorter); + + aSlot = (SorterRecord **)sqlite3MallocZero(64 * sizeof(SorterRecord *)); + if( !aSlot ){ + return SQLITE_NOMEM; + } + + while( p ){ + SorterRecord *pNext; + if( pList->aMemory ){ + if( (u8*)p==pList->aMemory ){ + pNext = 0; + }else{ + assert( p->u.iNext<sqlite3MallocSize(pList->aMemory) ); + pNext = (SorterRecord*)&pList->aMemory[p->u.iNext]; + } + }else{ + pNext = p->u.pNext; + } + + p->u.pNext = 0; + for(i=0; aSlot[i]; i++){ + vdbeSorterMerge(pTask, p, aSlot[i], &p); + aSlot[i] = 0; + } + aSlot[i] = p; + p = pNext; + } + + p = 0; + for(i=0; i<64; i++){ + vdbeSorterMerge(pTask, p, aSlot[i], &p); + } + pList->pList = p; + + sqlite3_free(aSlot); + assert( pTask->pUnpacked->errCode==SQLITE_OK + || pTask->pUnpacked->errCode==SQLITE_NOMEM + ); + return pTask->pUnpacked->errCode; +} + +/* +** Initialize a PMA-writer object. +*/ +static void vdbePmaWriterInit( + sqlite3_file *pFd, /* File handle to write to */ + PmaWriter *p, /* Object to populate */ + int nBuf, /* Buffer size */ + i64 iStart /* Offset of pFd to begin writing at */ +){ + memset(p, 0, sizeof(PmaWriter)); + p->aBuffer = (u8*)sqlite3Malloc(nBuf); + if( !p->aBuffer ){ + p->eFWErr = SQLITE_NOMEM; + }else{ + p->iBufEnd = p->iBufStart = (iStart % nBuf); + p->iWriteOff = iStart - p->iBufStart; + p->nBuffer = nBuf; + p->pFd = pFd; + } +} + +/* +** Write nData bytes of data to the PMA. Return SQLITE_OK +** if successful, or an SQLite error code if an error occurs. 
+*/ +static void vdbePmaWriteBlob(PmaWriter *p, u8 *pData, int nData){ + int nRem = nData; + while( nRem>0 && p->eFWErr==0 ){ + int nCopy = nRem; + if( nCopy>(p->nBuffer - p->iBufEnd) ){ + nCopy = p->nBuffer - p->iBufEnd; + } + + memcpy(&p->aBuffer[p->iBufEnd], &pData[nData-nRem], nCopy); + p->iBufEnd += nCopy; + if( p->iBufEnd==p->nBuffer ){ + p->eFWErr = sqlite3OsWrite(p->pFd, + &p->aBuffer[p->iBufStart], p->iBufEnd - p->iBufStart, + p->iWriteOff + p->iBufStart + ); + p->iBufStart = p->iBufEnd = 0; + p->iWriteOff += p->nBuffer; + } + assert( p->iBufEnd<p->nBuffer ); + + nRem -= nCopy; + } +} + +/* +** Flush any buffered data to disk and clean up the PMA-writer object. +** The results of using the PMA-writer after this call are undefined. +** Return SQLITE_OK if flushing the buffered data succeeds or is not +** required. Otherwise, return an SQLite error code. +** +** Before returning, set *piEof to the offset immediately following the +** last byte written to the file. +*/ +static int vdbePmaWriterFinish(PmaWriter *p, i64 *piEof){ + int rc; + if( p->eFWErr==0 && ALWAYS(p->aBuffer) && p->iBufEnd>p->iBufStart ){ + p->eFWErr = sqlite3OsWrite(p->pFd, + &p->aBuffer[p->iBufStart], p->iBufEnd - p->iBufStart, + p->iWriteOff + p->iBufStart + ); + } + *piEof = (p->iWriteOff + p->iBufEnd); + sqlite3_free(p->aBuffer); + rc = p->eFWErr; + memset(p, 0, sizeof(PmaWriter)); + return rc; +} + +/* +** Write value iVal encoded as a varint to the PMA. Return +** SQLITE_OK if successful, or an SQLite error code if an error occurs. +*/ +static void vdbePmaWriteVarint(PmaWriter *p, u64 iVal){ + int nByte; + u8 aByte[10]; + nByte = sqlite3PutVarint(aByte, iVal); + vdbePmaWriteBlob(p, aByte, nByte); +} + +/* +** Write the current contents of in-memory linked-list pList to a level-0 +** PMA in the temp file belonging to sub-task pTask. Return SQLITE_OK if +** successful, or an SQLite error code otherwise. +** +** The format of a PMA is: +** +** * A varint. This varint contains the total number of bytes of content +** in the PMA (not including the varint itself). +** +** * One or more records packed end-to-end in order of ascending keys. +** Each record consists of a varint followed by a blob of data (the +** key). The varint is the number of bytes in the blob of data. +*/ +static int vdbeSorterListToPMA(SortSubtask *pTask, SorterList *pList){ + sqlite3 *db = pTask->pSorter->db; + int rc = SQLITE_OK; /* Return code */ + PmaWriter writer; /* Object used to write to the file */ + +#ifdef SQLITE_DEBUG + /* Set iSz to the expected size of file pTask->file after writing the PMA. + ** This is used by an assert() statement at the end of this function. */ + i64 iSz = pList->szPMA + sqlite3VarintLen(pList->szPMA) + pTask->file.iEof; +#endif + + vdbeSorterWorkDebug(pTask, "enter"); + memset(&writer, 0, sizeof(PmaWriter)); + assert( pList->szPMA>0 ); + + /* If the first temporary PMA file has not been opened, open it now. 
*/ + if( pTask->file.pFd==0 ){ + rc = vdbeSorterOpenTempFile(db, 0, &pTask->file.pFd); + assert( rc!=SQLITE_OK || pTask->file.pFd ); + assert( pTask->file.iEof==0 ); + assert( pTask->nPMA==0 ); + } + + /* Try to get the file to memory map */ + if( rc==SQLITE_OK ){ + vdbeSorterExtendFile(db, pTask->file.pFd, pTask->file.iEof+pList->szPMA+9); + } + + /* Sort the list */ + if( rc==SQLITE_OK ){ + rc = vdbeSorterSort(pTask, pList); + } + + if( rc==SQLITE_OK ){ + SorterRecord *p; + SorterRecord *pNext = 0; + + vdbePmaWriterInit(pTask->file.pFd, &writer, pTask->pSorter->pgsz, + pTask->file.iEof); + pTask->nPMA++; + vdbePmaWriteVarint(&writer, pList->szPMA); + for(p=pList->pList; p; p=pNext){ + pNext = p->u.pNext; + vdbePmaWriteVarint(&writer, p->nVal); + vdbePmaWriteBlob(&writer, SRVAL(p), p->nVal); + if( pList->aMemory==0 ) sqlite3_free(p); + } + pList->pList = p; + rc = vdbePmaWriterFinish(&writer, &pTask->file.iEof); + } + + vdbeSorterWorkDebug(pTask, "exit"); + assert( rc!=SQLITE_OK || pList->pList==0 ); + assert( rc!=SQLITE_OK || pTask->file.iEof==iSz ); + return rc; +} + +/* +** Advance the MergeEngine to its next entry. +** Set *pbEof to true there is no next entry because +** the MergeEngine has reached the end of all its inputs. +** +** Return SQLITE_OK if successful or an error code if an error occurs. +*/ +static int vdbeMergeEngineStep( + MergeEngine *pMerger, /* The merge engine to advance to the next row */ + int *pbEof /* Set TRUE at EOF. Set false for more content */ +){ + int rc; + int iPrev = pMerger->aTree[1];/* Index of PmaReader to advance */ + SortSubtask *pTask = pMerger->pTask; + + /* Advance the current PmaReader */ + rc = vdbePmaReaderNext(&pMerger->aReadr[iPrev]); + + /* Update contents of aTree[] */ + if( rc==SQLITE_OK ){ + int i; /* Index of aTree[] to recalculate */ + PmaReader *pReadr1; /* First PmaReader to compare */ + PmaReader *pReadr2; /* Second PmaReader to compare */ + int bCached = 0; + + /* Find the first two PmaReaders to compare. The one that was just + ** advanced (iPrev) and the one next to it in the array. */ + pReadr1 = &pMerger->aReadr[(iPrev & 0xFFFE)]; + pReadr2 = &pMerger->aReadr[(iPrev | 0x0001)]; + + for(i=(pMerger->nTree+iPrev)/2; i>0; i=i/2){ + /* Compare pReadr1 and pReadr2. Store the result in variable iRes. */ + int iRes; + if( pReadr1->pFd==0 ){ + iRes = +1; + }else if( pReadr2->pFd==0 ){ + iRes = -1; + }else{ + iRes = pTask->xCompare(pTask, &bCached, + pReadr1->aKey, pReadr1->nKey, pReadr2->aKey, pReadr2->nKey + ); + } + + /* If pReadr1 contained the smaller value, set aTree[i] to its index. + ** Then set pReadr2 to the next PmaReader to compare to pReadr1. In this + ** case there is no cache of pReadr2 in pTask->pUnpacked, so set + ** pKey2 to point to the record belonging to pReadr2. + ** + ** Alternatively, if pReadr2 contains the smaller of the two values, + ** set aTree[i] to its index and update pReadr1. If vdbeSorterCompare() + ** was actually called above, then pTask->pUnpacked now contains + ** a value equivalent to pReadr2. So set pKey2 to NULL to prevent + ** vdbeSorterCompare() from decoding pReadr2 again. + ** + ** If the two values were equal, then the value from the oldest + ** PMA should be considered smaller. The VdbeSorter.aReadr[] array + ** is sorted from oldest to newest, so pReadr1 contains older values + ** than pReadr2 iff (pReadr1<pReadr2). 
*/ + if( iRes<0 || (iRes==0 && pReadr1<pReadr2) ){ + pMerger->aTree[i] = (int)(pReadr1 - pMerger->aReadr); + pReadr2 = &pMerger->aReadr[ pMerger->aTree[i ^ 0x0001] ]; + bCached = 0; + }else{ + if( pReadr1->pFd ) bCached = 0; + pMerger->aTree[i] = (int)(pReadr2 - pMerger->aReadr); + pReadr1 = &pMerger->aReadr[ pMerger->aTree[i ^ 0x0001] ]; + } + } + *pbEof = (pMerger->aReadr[pMerger->aTree[1]].pFd==0); + } + + return (rc==SQLITE_OK ? pTask->pUnpacked->errCode : rc); +} + +#if SQLITE_MAX_WORKER_THREADS>0 +/* +** The main routine for background threads that write level-0 PMAs. +*/ +static void *vdbeSorterFlushThread(void *pCtx){ + SortSubtask *pTask = (SortSubtask*)pCtx; + int rc; /* Return code */ + assert( pTask->bDone==0 ); + rc = vdbeSorterListToPMA(pTask, &pTask->list); + pTask->bDone = 1; + return SQLITE_INT_TO_PTR(rc); +} +#endif /* SQLITE_MAX_WORKER_THREADS>0 */ + +/* +** Flush the current contents of VdbeSorter.list to a new PMA, possibly +** using a background thread. +*/ +static int vdbeSorterFlushPMA(VdbeSorter *pSorter){ +#if SQLITE_MAX_WORKER_THREADS==0 + pSorter->bUsePMA = 1; + return vdbeSorterListToPMA(&pSorter->aTask[0], &pSorter->list); +#else + int rc = SQLITE_OK; + int i; + SortSubtask *pTask = 0; /* Thread context used to create new PMA */ + int nWorker = (pSorter->nTask-1); + + /* Set the flag to indicate that at least one PMA has been written. + ** Or will be, anyhow. */ + pSorter->bUsePMA = 1; + + /* Select a sub-task to sort and flush the current list of in-memory + ** records to disk. If the sorter is running in multi-threaded mode, + ** round-robin between the first (pSorter->nTask-1) tasks. Except, if + ** the background thread from a sub-tasks previous turn is still running, + ** skip it. If the first (pSorter->nTask-1) sub-tasks are all still busy, + ** fall back to using the final sub-task. The first (pSorter->nTask-1) + ** sub-tasks are prefered as they use background threads - the final + ** sub-task uses the main thread. */ + for(i=0; i<nWorker; i++){ + int iTest = (pSorter->iPrev + i + 1) % nWorker; + pTask = &pSorter->aTask[iTest]; + if( pTask->bDone ){ + rc = vdbeSorterJoinThread(pTask); + } + if( rc!=SQLITE_OK || pTask->pThread==0 ) break; + } + + if( rc==SQLITE_OK ){ + if( i==nWorker ){ + /* Use the foreground thread for this operation */ + rc = vdbeSorterListToPMA(&pSorter->aTask[nWorker], &pSorter->list); + }else{ + /* Launch a background thread for this operation */ + u8 *aMem = pTask->list.aMemory; + void *pCtx = (void*)pTask; + + assert( pTask->pThread==0 && pTask->bDone==0 ); + assert( pTask->list.pList==0 ); + assert( pTask->list.aMemory==0 || pSorter->list.aMemory!=0 ); + + pSorter->iPrev = (u8)(pTask - pSorter->aTask); + pTask->list = pSorter->list; + pSorter->list.pList = 0; + pSorter->list.szPMA = 0; + if( aMem ){ + pSorter->list.aMemory = aMem; + pSorter->nMemory = sqlite3MallocSize(aMem); + }else if( pSorter->list.aMemory ){ + pSorter->list.aMemory = sqlite3Malloc(pSorter->nMemory); + if( !pSorter->list.aMemory ) return SQLITE_NOMEM; + } + + rc = vdbeSorterCreateThread(pTask, vdbeSorterFlushThread, pCtx); + } + } + + return rc; +#endif /* SQLITE_MAX_WORKER_THREADS!=0 */ +} + +/* +** Add a record to the sorter. 
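+**
+** The memory cell pVal is expected to hold a record in the OP_MakeRecord
+** format (see the header comment at the top of this file). In memory the
+** sorter stores it as a SorterRecord header followed immediately by the
+** record blob (a sketch; field sizes are not to scale):
+**
+**     [SorterRecord: nVal, u.pNext/u.iNext][record blob, pVal->n bytes]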
+*/ +SQLITE_PRIVATE int sqlite3VdbeSorterWrite( + const VdbeCursor *pCsr, /* Sorter cursor */ + Mem *pVal /* Memory cell containing record */ +){ + VdbeSorter *pSorter; + int rc = SQLITE_OK; /* Return Code */ + SorterRecord *pNew; /* New list element */ + int bFlush; /* True to flush contents of memory to PMA */ + int nReq; /* Bytes of memory required */ + int nPMA; /* Bytes of PMA space required */ + int t; /* serial type of first record field */ + + assert( pCsr->eCurType==CURTYPE_SORTER ); + pSorter = pCsr->uc.pSorter; + getVarint32((const u8*)&pVal->z[1], t); + if( t>0 && t<10 && t!=7 ){ + pSorter->typeMask &= SORTER_TYPE_INTEGER; + }else if( t>10 && (t & 0x01) ){ + pSorter->typeMask &= SORTER_TYPE_TEXT; + }else{ + pSorter->typeMask = 0; + } + + assert( pSorter ); + + /* Figure out whether or not the current contents of memory should be + ** flushed to a PMA before continuing. If so, do so. + ** + ** If using the single large allocation mode (pSorter->aMemory!=0), then + ** flush the contents of memory to a new PMA if (a) at least one value is + ** already in memory and (b) the new value will not fit in memory. + ** + ** Or, if using separate allocations for each record, flush the contents + ** of memory to a PMA if either of the following are true: + ** + ** * The total memory allocated for the in-memory list is greater + ** than (page-size * cache-size), or + ** + ** * The total memory allocated for the in-memory list is greater + ** than (page-size * 10) and sqlite3HeapNearlyFull() returns true. + */ + nReq = pVal->n + sizeof(SorterRecord); + nPMA = pVal->n + sqlite3VarintLen(pVal->n); + if( pSorter->mxPmaSize ){ + if( pSorter->list.aMemory ){ + bFlush = pSorter->iMemory && (pSorter->iMemory+nReq) > pSorter->mxPmaSize; + }else{ + bFlush = ( + (pSorter->list.szPMA > pSorter->mxPmaSize) + || (pSorter->list.szPMA > pSorter->mnPmaSize && sqlite3HeapNearlyFull()) + ); + } + if( bFlush ){ + rc = vdbeSorterFlushPMA(pSorter); + pSorter->list.szPMA = 0; + pSorter->iMemory = 0; + assert( rc!=SQLITE_OK || pSorter->list.pList==0 ); + } + } + + pSorter->list.szPMA += nPMA; + if( nPMA>pSorter->mxKeysize ){ + pSorter->mxKeysize = nPMA; + } + + if( pSorter->list.aMemory ){ + int nMin = pSorter->iMemory + nReq; + + if( nMin>pSorter->nMemory ){ + u8 *aNew; + int iListOff = (u8*)pSorter->list.pList - pSorter->list.aMemory; + int nNew = pSorter->nMemory * 2; + while( nNew < nMin ) nNew = nNew*2; + if( nNew > pSorter->mxPmaSize ) nNew = pSorter->mxPmaSize; + if( nNew < nMin ) nNew = nMin; + + aNew = sqlite3Realloc(pSorter->list.aMemory, nNew); + if( !aNew ) return SQLITE_NOMEM; + pSorter->list.pList = (SorterRecord*)&aNew[iListOff]; + pSorter->list.aMemory = aNew; + pSorter->nMemory = nNew; + } + + pNew = (SorterRecord*)&pSorter->list.aMemory[pSorter->iMemory]; + pSorter->iMemory += ROUND8(nReq); + if( pSorter->list.pList ){ + pNew->u.iNext = (int)((u8*)(pSorter->list.pList) - pSorter->list.aMemory); + } + }else{ + pNew = (SorterRecord *)sqlite3Malloc(nReq); + if( pNew==0 ){ + return SQLITE_NOMEM; + } + pNew->u.pNext = pSorter->list.pList; + } + + memcpy(SRVAL(pNew), pVal->z, pVal->n); + pNew->nVal = pVal->n; + pSorter->list.pList = pNew; + + return rc; +} + +/* +** Read keys from pIncr->pMerger and populate pIncr->aFile[1]. The format +** of the data stored in aFile[1] is the same as that used by regular PMAs, +** except that the number-of-bytes varint is omitted from the start. 
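+**
+** For reference, the layout of a complete PMA as written by
+** vdbeSorterListToPMA() is shown below; aFile[1] simply omits the leading
+** total-size varint:
+**
+**     <total-size varint>
+**     <key1-size varint><key1 blob>
+**     <key2-size varint><key2 blob>
+**     ...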
+*/ +static int vdbeIncrPopulate(IncrMerger *pIncr){ + int rc = SQLITE_OK; + int rc2; + i64 iStart = pIncr->iStartOff; + SorterFile *pOut = &pIncr->aFile[1]; + SortSubtask *pTask = pIncr->pTask; + MergeEngine *pMerger = pIncr->pMerger; + PmaWriter writer; + assert( pIncr->bEof==0 ); + + vdbeSorterPopulateDebug(pTask, "enter"); + + vdbePmaWriterInit(pOut->pFd, &writer, pTask->pSorter->pgsz, iStart); + while( rc==SQLITE_OK ){ + int dummy; + PmaReader *pReader = &pMerger->aReadr[ pMerger->aTree[1] ]; + int nKey = pReader->nKey; + i64 iEof = writer.iWriteOff + writer.iBufEnd; + + /* Check if the output file is full or if the input has been exhausted. + ** In either case exit the loop. */ + if( pReader->pFd==0 ) break; + if( (iEof + nKey + sqlite3VarintLen(nKey))>(iStart + pIncr->mxSz) ) break; + + /* Write the next key to the output. */ + vdbePmaWriteVarint(&writer, nKey); + vdbePmaWriteBlob(&writer, pReader->aKey, nKey); + assert( pIncr->pMerger->pTask==pTask ); + rc = vdbeMergeEngineStep(pIncr->pMerger, &dummy); + } + + rc2 = vdbePmaWriterFinish(&writer, &pOut->iEof); + if( rc==SQLITE_OK ) rc = rc2; + vdbeSorterPopulateDebug(pTask, "exit"); + return rc; +} + +#if SQLITE_MAX_WORKER_THREADS>0 +/* +** The main routine for background threads that populate aFile[1] of +** multi-threaded IncrMerger objects. +*/ +static void *vdbeIncrPopulateThread(void *pCtx){ + IncrMerger *pIncr = (IncrMerger*)pCtx; + void *pRet = SQLITE_INT_TO_PTR( vdbeIncrPopulate(pIncr) ); + pIncr->pTask->bDone = 1; + return pRet; +} + +/* +** Launch a background thread to populate aFile[1] of pIncr. +*/ +static int vdbeIncrBgPopulate(IncrMerger *pIncr){ + void *p = (void*)pIncr; + assert( pIncr->bUseThread ); + return vdbeSorterCreateThread(pIncr->pTask, vdbeIncrPopulateThread, p); +} +#endif + +/* +** This function is called when the PmaReader corresponding to pIncr has +** finished reading the contents of aFile[0]. Its purpose is to "refill" +** aFile[0] such that the PmaReader should start rereading it from the +** beginning. +** +** For single-threaded objects, this is accomplished by literally reading +** keys from pIncr->pMerger and repopulating aFile[0]. +** +** For multi-threaded objects, all that is required is to wait until the +** background thread is finished (if it is not already) and then swap +** aFile[0] and aFile[1] in place. If the contents of pMerger have not +** been exhausted, this function also launches a new background thread +** to populate the new aFile[1]. +** +** SQLITE_OK is returned on success, or an SQLite error code otherwise. +*/ +static int vdbeIncrSwap(IncrMerger *pIncr){ + int rc = SQLITE_OK; + +#if SQLITE_MAX_WORKER_THREADS>0 + if( pIncr->bUseThread ){ + rc = vdbeSorterJoinThread(pIncr->pTask); + + if( rc==SQLITE_OK ){ + SorterFile f0 = pIncr->aFile[0]; + pIncr->aFile[0] = pIncr->aFile[1]; + pIncr->aFile[1] = f0; + } + + if( rc==SQLITE_OK ){ + if( pIncr->aFile[0].iEof==pIncr->iStartOff ){ + pIncr->bEof = 1; + }else{ + rc = vdbeIncrBgPopulate(pIncr); + } + } + }else +#endif + { + rc = vdbeIncrPopulate(pIncr); + pIncr->aFile[0] = pIncr->aFile[1]; + if( pIncr->aFile[0].iEof==pIncr->iStartOff ){ + pIncr->bEof = 1; + } + } + + return rc; +} + +/* +** Allocate and return a new IncrMerger object to read data from pMerger. +** +** If an OOM condition is encountered, return NULL. In this case free the +** pMerger argument before returning. 
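+**
+** The mxSz value chosen below is the larger of (mxKeysize+9) and
+** (mxPmaSize/2). The extra 9 bytes appear to allow for the largest
+** possible varint encoding of a key size, so that even a maximum-sized
+** key (preceded by its length varint) always fits in an otherwise empty
+** aFile[] slot.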
+*/ +static int vdbeIncrMergerNew( + SortSubtask *pTask, /* The thread that will be using the new IncrMerger */ + MergeEngine *pMerger, /* The MergeEngine that the IncrMerger will control */ + IncrMerger **ppOut /* Write the new IncrMerger here */ +){ + int rc = SQLITE_OK; + IncrMerger *pIncr = *ppOut = (IncrMerger*) + (sqlite3FaultSim(100) ? 0 : sqlite3MallocZero(sizeof(*pIncr))); + if( pIncr ){ + pIncr->pMerger = pMerger; + pIncr->pTask = pTask; + pIncr->mxSz = MAX(pTask->pSorter->mxKeysize+9,pTask->pSorter->mxPmaSize/2); + pTask->file2.iEof += pIncr->mxSz; + }else{ + vdbeMergeEngineFree(pMerger); + rc = SQLITE_NOMEM; + } + return rc; +} + +#if SQLITE_MAX_WORKER_THREADS>0 +/* +** Set the "use-threads" flag on object pIncr. +*/ +static void vdbeIncrMergerSetThreads(IncrMerger *pIncr){ + pIncr->bUseThread = 1; + pIncr->pTask->file2.iEof -= pIncr->mxSz; +} +#endif /* SQLITE_MAX_WORKER_THREADS>0 */ + + + +/* +** Recompute pMerger->aTree[iOut] by comparing the next keys on the +** two PmaReaders that feed that entry. Neither of the PmaReaders +** are advanced. This routine merely does the comparison. +*/ +static void vdbeMergeEngineCompare( + MergeEngine *pMerger, /* Merge engine containing PmaReaders to compare */ + int iOut /* Store the result in pMerger->aTree[iOut] */ +){ + int i1; + int i2; + int iRes; + PmaReader *p1; + PmaReader *p2; + + assert( iOut<pMerger->nTree && iOut>0 ); + + if( iOut>=(pMerger->nTree/2) ){ + i1 = (iOut - pMerger->nTree/2) * 2; + i2 = i1 + 1; + }else{ + i1 = pMerger->aTree[iOut*2]; + i2 = pMerger->aTree[iOut*2+1]; + } + + p1 = &pMerger->aReadr[i1]; + p2 = &pMerger->aReadr[i2]; + + if( p1->pFd==0 ){ + iRes = i2; + }else if( p2->pFd==0 ){ + iRes = i1; + }else{ + SortSubtask *pTask = pMerger->pTask; + int bCached = 0; + int res; + assert( pTask->pUnpacked!=0 ); /* from vdbeSortSubtaskMain() */ + res = pTask->xCompare( + pTask, &bCached, p1->aKey, p1->nKey, p2->aKey, p2->nKey + ); + if( res<=0 ){ + iRes = i1; + }else{ + iRes = i2; + } + } + + pMerger->aTree[iOut] = iRes; +} + +/* +** Allowed values for the eMode parameter to vdbeMergeEngineInit() +** and vdbePmaReaderIncrMergeInit(). +** +** Only INCRINIT_NORMAL is valid in single-threaded builds (when +** SQLITE_MAX_WORKER_THREADS==0). The other values are only used +** when there exists one or more separate worker threads. +*/ +#define INCRINIT_NORMAL 0 +#define INCRINIT_TASK 1 +#define INCRINIT_ROOT 2 + +/* +** Forward reference required as the vdbeIncrMergeInit() and +** vdbePmaReaderIncrInit() routines are called mutually recursively when +** building a merge tree. +*/ +static int vdbePmaReaderIncrInit(PmaReader *pReadr, int eMode); + +/* +** Initialize the MergeEngine object passed as the second argument. Once this +** function returns, the first key of merged data may be read from the +** MergeEngine object in the usual fashion. +** +** If argument eMode is INCRINIT_ROOT, then it is assumed that any IncrMerge +** objects attached to the PmaReader objects that the merger reads from have +** already been populated, but that they have not yet populated aFile[0] and +** set the PmaReader objects up to read from it. In this case all that is +** required is to call vdbePmaReaderNext() on each PmaReader to point it at +** its first key. +** +** Otherwise, if eMode is any value other than INCRINIT_ROOT, then use +** vdbePmaReaderIncrMergeInit() to initialize each PmaReader that feeds data +** to pMerger. +** +** SQLITE_OK is returned if successful, or an SQLite error code otherwise. 
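+**
+** The final loop in this function rebuilds pMerger->aTree[] bottom-up so
+** that aTree[1] ends up holding the index of the PmaReader with the
+** smallest current key. As a small worked example (illustrative only),
+** with four readers whose next keys compare as 7, 3, 9 and 5: aTree[2]==1
+** (3 beats 7), aTree[3]==3 (5 beats 9), and aTree[1]==1 (3 beats 5).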
+*/ +static int vdbeMergeEngineInit( + SortSubtask *pTask, /* Thread that will run pMerger */ + MergeEngine *pMerger, /* MergeEngine to initialize */ + int eMode /* One of the INCRINIT_XXX constants */ +){ + int rc = SQLITE_OK; /* Return code */ + int i; /* For looping over PmaReader objects */ + int nTree = pMerger->nTree; + + /* eMode is always INCRINIT_NORMAL in single-threaded mode */ + assert( SQLITE_MAX_WORKER_THREADS>0 || eMode==INCRINIT_NORMAL ); + + /* Verify that the MergeEngine is assigned to a single thread */ + assert( pMerger->pTask==0 ); + pMerger->pTask = pTask; + + for(i=0; i<nTree; i++){ + if( SQLITE_MAX_WORKER_THREADS>0 && eMode==INCRINIT_ROOT ){ + /* PmaReaders should be normally initialized in order, as if they are + ** reading from the same temp file this makes for more linear file IO. + ** However, in the INCRINIT_ROOT case, if PmaReader aReadr[nTask-1] is + ** in use it will block the vdbePmaReaderNext() call while it uses + ** the main thread to fill its buffer. So calling PmaReaderNext() + ** on this PmaReader before any of the multi-threaded PmaReaders takes + ** better advantage of multi-processor hardware. */ + rc = vdbePmaReaderNext(&pMerger->aReadr[nTree-i-1]); + }else{ + rc = vdbePmaReaderIncrInit(&pMerger->aReadr[i], INCRINIT_NORMAL); + } + if( rc!=SQLITE_OK ) return rc; + } + + for(i=pMerger->nTree-1; i>0; i--){ + vdbeMergeEngineCompare(pMerger, i); + } + return pTask->pUnpacked->errCode; +} + +/* +** The PmaReader passed as the first argument is guaranteed to be an +** incremental-reader (pReadr->pIncr!=0). This function serves to open +** and/or initialize the temp file related fields of the IncrMerge +** object at (pReadr->pIncr). +** +** If argument eMode is set to INCRINIT_NORMAL, then all PmaReaders +** in the sub-tree headed by pReadr are also initialized. Data is then +** loaded into the buffers belonging to pReadr and it is set to point to +** the first key in its range. +** +** If argument eMode is set to INCRINIT_TASK, then pReadr is guaranteed +** to be a multi-threaded PmaReader and this function is being called in a +** background thread. In this case all PmaReaders in the sub-tree are +** initialized as for INCRINIT_NORMAL and the aFile[1] buffer belonging to +** pReadr is populated. However, pReadr itself is not set up to point +** to its first key. A call to vdbePmaReaderNext() is still required to do +** that. +** +** The reason this function does not call vdbePmaReaderNext() immediately +** in the INCRINIT_TASK case is that vdbePmaReaderNext() assumes that it has +** to block on thread (pTask->thread) before accessing aFile[1]. But, since +** this entire function is being run by thread (pTask->thread), that will +** lead to the current background thread attempting to join itself. +** +** Finally, if argument eMode is set to INCRINIT_ROOT, it may be assumed +** that pReadr->pIncr is a multi-threaded IncrMerge objects, and that all +** child-trees have already been initialized using IncrInit(INCRINIT_TASK). +** In this case vdbePmaReaderNext() is called on all child PmaReaders and +** the current PmaReader set to point to the first key in its range. +** +** SQLITE_OK is returned if successful, or an SQLite error code otherwise. 
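+**
+** To summarize how the modes are used (see vdbeSorterSetupMerge() below):
+** a single-threaded sorter initializes its entire tree with INCRINIT_NORMAL;
+** a multi-threaded sorter initializes the PmaReader attached to each
+** sub-task with INCRINIT_TASK (normally from that sub-task's background
+** thread) and then initializes the root PmaReader with INCRINIT_ROOT from
+** the main thread.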
+*/ +static int vdbePmaReaderIncrMergeInit(PmaReader *pReadr, int eMode){ + int rc = SQLITE_OK; + IncrMerger *pIncr = pReadr->pIncr; + SortSubtask *pTask = pIncr->pTask; + sqlite3 *db = pTask->pSorter->db; + + /* eMode is always INCRINIT_NORMAL in single-threaded mode */ + assert( SQLITE_MAX_WORKER_THREADS>0 || eMode==INCRINIT_NORMAL ); + + rc = vdbeMergeEngineInit(pTask, pIncr->pMerger, eMode); + + /* Set up the required files for pIncr. A multi-theaded IncrMerge object + ** requires two temp files to itself, whereas a single-threaded object + ** only requires a region of pTask->file2. */ + if( rc==SQLITE_OK ){ + int mxSz = pIncr->mxSz; +#if SQLITE_MAX_WORKER_THREADS>0 + if( pIncr->bUseThread ){ + rc = vdbeSorterOpenTempFile(db, mxSz, &pIncr->aFile[0].pFd); + if( rc==SQLITE_OK ){ + rc = vdbeSorterOpenTempFile(db, mxSz, &pIncr->aFile[1].pFd); + } + }else +#endif + /*if( !pIncr->bUseThread )*/{ + if( pTask->file2.pFd==0 ){ + assert( pTask->file2.iEof>0 ); + rc = vdbeSorterOpenTempFile(db, pTask->file2.iEof, &pTask->file2.pFd); + pTask->file2.iEof = 0; + } + if( rc==SQLITE_OK ){ + pIncr->aFile[1].pFd = pTask->file2.pFd; + pIncr->iStartOff = pTask->file2.iEof; + pTask->file2.iEof += mxSz; + } + } + } + +#if SQLITE_MAX_WORKER_THREADS>0 + if( rc==SQLITE_OK && pIncr->bUseThread ){ + /* Use the current thread to populate aFile[1], even though this + ** PmaReader is multi-threaded. If this is an INCRINIT_TASK object, + ** then this function is already running in background thread + ** pIncr->pTask->thread. + ** + ** If this is the INCRINIT_ROOT object, then it is running in the + ** main VDBE thread. But that is Ok, as that thread cannot return + ** control to the VDBE or proceed with anything useful until the + ** first results are ready from this merger object anyway. + */ + assert( eMode==INCRINIT_ROOT || eMode==INCRINIT_TASK ); + rc = vdbeIncrPopulate(pIncr); + } +#endif + + if( rc==SQLITE_OK && (SQLITE_MAX_WORKER_THREADS==0 || eMode!=INCRINIT_TASK) ){ + rc = vdbePmaReaderNext(pReadr); + } + + return rc; +} + +#if SQLITE_MAX_WORKER_THREADS>0 +/* +** The main routine for vdbePmaReaderIncrMergeInit() operations run in +** background threads. +*/ +static void *vdbePmaReaderBgIncrInit(void *pCtx){ + PmaReader *pReader = (PmaReader*)pCtx; + void *pRet = SQLITE_INT_TO_PTR( + vdbePmaReaderIncrMergeInit(pReader,INCRINIT_TASK) + ); + pReader->pIncr->pTask->bDone = 1; + return pRet; +} +#endif + +/* +** If the PmaReader passed as the first argument is not an incremental-reader +** (if pReadr->pIncr==0), then this function is a no-op. Otherwise, it invokes +** the vdbePmaReaderIncrMergeInit() function with the parameters passed to +** this routine to initialize the incremental merge. +** +** If the IncrMerger object is multi-threaded (IncrMerger.bUseThread==1), +** then a background thread is launched to call vdbePmaReaderIncrMergeInit(). +** Or, if the IncrMerger is single threaded, the same function is called +** using the current thread. 
+*/ +static int vdbePmaReaderIncrInit(PmaReader *pReadr, int eMode){ + IncrMerger *pIncr = pReadr->pIncr; /* Incremental merger */ + int rc = SQLITE_OK; /* Return code */ + if( pIncr ){ +#if SQLITE_MAX_WORKER_THREADS>0 + assert( pIncr->bUseThread==0 || eMode==INCRINIT_TASK ); + if( pIncr->bUseThread ){ + void *pCtx = (void*)pReadr; + rc = vdbeSorterCreateThread(pIncr->pTask, vdbePmaReaderBgIncrInit, pCtx); + }else +#endif + { + rc = vdbePmaReaderIncrMergeInit(pReadr, eMode); + } + } + return rc; +} + +/* +** Allocate a new MergeEngine object to merge the contents of nPMA level-0 +** PMAs from pTask->file. If no error occurs, set *ppOut to point to +** the new object and return SQLITE_OK. Or, if an error does occur, set *ppOut +** to NULL and return an SQLite error code. +** +** When this function is called, *piOffset is set to the offset of the +** first PMA to read from pTask->file. Assuming no error occurs, it is +** set to the offset immediately following the last byte of the last +** PMA before returning. If an error does occur, then the final value of +** *piOffset is undefined. +*/ +static int vdbeMergeEngineLevel0( + SortSubtask *pTask, /* Sorter task to read from */ + int nPMA, /* Number of PMAs to read */ + i64 *piOffset, /* IN/OUT: Readr offset in pTask->file */ + MergeEngine **ppOut /* OUT: New merge-engine */ +){ + MergeEngine *pNew; /* Merge engine to return */ + i64 iOff = *piOffset; + int i; + int rc = SQLITE_OK; + + *ppOut = pNew = vdbeMergeEngineNew(nPMA); + if( pNew==0 ) rc = SQLITE_NOMEM; + + for(i=0; i<nPMA && rc==SQLITE_OK; i++){ + i64 nDummy; + PmaReader *pReadr = &pNew->aReadr[i]; + rc = vdbePmaReaderInit(pTask, &pTask->file, iOff, pReadr, &nDummy); + iOff = pReadr->iEof; + } + + if( rc!=SQLITE_OK ){ + vdbeMergeEngineFree(pNew); + *ppOut = 0; + } + *piOffset = iOff; + return rc; +} + +/* +** Return the depth of a tree comprising nPMA PMAs, assuming a fanout of +** SORTER_MAX_MERGE_COUNT. The returned value does not include leaf nodes. +** +** i.e. +** +** nPMA<=16 -> TreeDepth() == 0 +** nPMA<=256 -> TreeDepth() == 1 +** nPMA<=65536 -> TreeDepth() == 2 +*/ +static int vdbeSorterTreeDepth(int nPMA){ + int nDepth = 0; + i64 nDiv = SORTER_MAX_MERGE_COUNT; + while( nDiv < (i64)nPMA ){ + nDiv = nDiv * SORTER_MAX_MERGE_COUNT; + nDepth++; + } + return nDepth; +} + +/* +** pRoot is the root of an incremental merge-tree with depth nDepth (according +** to vdbeSorterTreeDepth()). pLeaf is the iSeq'th leaf to be added to the +** tree, counting from zero. This function adds pLeaf to the tree. +** +** If successful, SQLITE_OK is returned. If an error occurs, an SQLite error +** code is returned and pLeaf is freed. 
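+**
+** Worked example (illustrative, assuming SORTER_MAX_MERGE_COUNT is 16 as
+** the TreeDepth() examples above suggest): with 100 level-0 PMAs the tree
+** depth is 1 and seven leaves of up to 16 PMAs each are built; leaf iSeq
+** is attached directly at pRoot->aReadr[iSeq]. With 1000 PMAs the depth
+** is 2, and leaf iSeq is attached at slot iSeq%16 of an intermediate
+** MergeEngine hanging off pRoot->aReadr[iSeq/16].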
+*/ +static int vdbeSorterAddToTree( + SortSubtask *pTask, /* Task context */ + int nDepth, /* Depth of tree according to TreeDepth() */ + int iSeq, /* Sequence number of leaf within tree */ + MergeEngine *pRoot, /* Root of tree */ + MergeEngine *pLeaf /* Leaf to add to tree */ +){ + int rc = SQLITE_OK; + int nDiv = 1; + int i; + MergeEngine *p = pRoot; + IncrMerger *pIncr; + + rc = vdbeIncrMergerNew(pTask, pLeaf, &pIncr); + + for(i=1; i<nDepth; i++){ + nDiv = nDiv * SORTER_MAX_MERGE_COUNT; + } + + for(i=1; i<nDepth && rc==SQLITE_OK; i++){ + int iIter = (iSeq / nDiv) % SORTER_MAX_MERGE_COUNT; + PmaReader *pReadr = &p->aReadr[iIter]; + + if( pReadr->pIncr==0 ){ + MergeEngine *pNew = vdbeMergeEngineNew(SORTER_MAX_MERGE_COUNT); + if( pNew==0 ){ + rc = SQLITE_NOMEM; + }else{ + rc = vdbeIncrMergerNew(pTask, pNew, &pReadr->pIncr); + } + } + if( rc==SQLITE_OK ){ + p = pReadr->pIncr->pMerger; + nDiv = nDiv / SORTER_MAX_MERGE_COUNT; + } + } + + if( rc==SQLITE_OK ){ + p->aReadr[iSeq % SORTER_MAX_MERGE_COUNT].pIncr = pIncr; + }else{ + vdbeIncrFree(pIncr); + } + return rc; +} + +/* +** This function is called as part of a SorterRewind() operation on a sorter +** that has already written two or more level-0 PMAs to one or more temp +** files. It builds a tree of MergeEngine/IncrMerger/PmaReader objects that +** can be used to incrementally merge all PMAs on disk. +** +** If successful, SQLITE_OK is returned and *ppOut set to point to the +** MergeEngine object at the root of the tree before returning. Or, if an +** error occurs, an SQLite error code is returned and the final value +** of *ppOut is undefined. +*/ +static int vdbeSorterMergeTreeBuild( + VdbeSorter *pSorter, /* The VDBE cursor that implements the sort */ + MergeEngine **ppOut /* Write the MergeEngine here */ +){ + MergeEngine *pMain = 0; + int rc = SQLITE_OK; + int iTask; + +#if SQLITE_MAX_WORKER_THREADS>0 + /* If the sorter uses more than one task, then create the top-level + ** MergeEngine here. This MergeEngine will read data from exactly + ** one PmaReader per sub-task. 
*/ + assert( pSorter->bUseThreads || pSorter->nTask==1 ); + if( pSorter->nTask>1 ){ + pMain = vdbeMergeEngineNew(pSorter->nTask); + if( pMain==0 ) rc = SQLITE_NOMEM; + } +#endif + + for(iTask=0; rc==SQLITE_OK && iTask<pSorter->nTask; iTask++){ + SortSubtask *pTask = &pSorter->aTask[iTask]; + assert( pTask->nPMA>0 || SQLITE_MAX_WORKER_THREADS>0 ); + if( SQLITE_MAX_WORKER_THREADS==0 || pTask->nPMA ){ + MergeEngine *pRoot = 0; /* Root node of tree for this task */ + int nDepth = vdbeSorterTreeDepth(pTask->nPMA); + i64 iReadOff = 0; + + if( pTask->nPMA<=SORTER_MAX_MERGE_COUNT ){ + rc = vdbeMergeEngineLevel0(pTask, pTask->nPMA, &iReadOff, &pRoot); + }else{ + int i; + int iSeq = 0; + pRoot = vdbeMergeEngineNew(SORTER_MAX_MERGE_COUNT); + if( pRoot==0 ) rc = SQLITE_NOMEM; + for(i=0; i<pTask->nPMA && rc==SQLITE_OK; i += SORTER_MAX_MERGE_COUNT){ + MergeEngine *pMerger = 0; /* New level-0 PMA merger */ + int nReader; /* Number of level-0 PMAs to merge */ + + nReader = MIN(pTask->nPMA - i, SORTER_MAX_MERGE_COUNT); + rc = vdbeMergeEngineLevel0(pTask, nReader, &iReadOff, &pMerger); + if( rc==SQLITE_OK ){ + rc = vdbeSorterAddToTree(pTask, nDepth, iSeq++, pRoot, pMerger); + } + } + } + + if( rc==SQLITE_OK ){ +#if SQLITE_MAX_WORKER_THREADS>0 + if( pMain!=0 ){ + rc = vdbeIncrMergerNew(pTask, pRoot, &pMain->aReadr[iTask].pIncr); + }else +#endif + { + assert( pMain==0 ); + pMain = pRoot; + } + }else{ + vdbeMergeEngineFree(pRoot); + } + } + } + + if( rc!=SQLITE_OK ){ + vdbeMergeEngineFree(pMain); + pMain = 0; + } + *ppOut = pMain; + return rc; +} + +/* +** This function is called as part of an sqlite3VdbeSorterRewind() operation +** on a sorter that has written two or more PMAs to temporary files. It sets +** up either VdbeSorter.pMerger (for single threaded sorters) or pReader +** (for multi-threaded sorters) so that it can be used to iterate through +** all records stored in the sorter. +** +** SQLITE_OK is returned if successful, or an SQLite error code otherwise. +*/ +static int vdbeSorterSetupMerge(VdbeSorter *pSorter){ + int rc; /* Return code */ + SortSubtask *pTask0 = &pSorter->aTask[0]; + MergeEngine *pMain = 0; +#if SQLITE_MAX_WORKER_THREADS + sqlite3 *db = pTask0->pSorter->db; + int i; + SorterCompare xCompare = vdbeSorterGetCompare(pSorter); + for(i=0; i<pSorter->nTask; i++){ + pSorter->aTask[i].xCompare = xCompare; + } +#endif + + rc = vdbeSorterMergeTreeBuild(pSorter, &pMain); + if( rc==SQLITE_OK ){ +#if SQLITE_MAX_WORKER_THREADS + assert( pSorter->bUseThreads==0 || pSorter->nTask>1 ); + if( pSorter->bUseThreads ){ + int iTask; + PmaReader *pReadr = 0; + SortSubtask *pLast = &pSorter->aTask[pSorter->nTask-1]; + rc = vdbeSortAllocUnpacked(pLast); + if( rc==SQLITE_OK ){ + pReadr = (PmaReader*)sqlite3DbMallocZero(db, sizeof(PmaReader)); + pSorter->pReader = pReadr; + if( pReadr==0 ) rc = SQLITE_NOMEM; + } + if( rc==SQLITE_OK ){ + rc = vdbeIncrMergerNew(pLast, pMain, &pReadr->pIncr); + if( rc==SQLITE_OK ){ + vdbeIncrMergerSetThreads(pReadr->pIncr); + for(iTask=0; iTask<(pSorter->nTask-1); iTask++){ + IncrMerger *pIncr; + if( (pIncr = pMain->aReadr[iTask].pIncr) ){ + vdbeIncrMergerSetThreads(pIncr); + assert( pIncr->pTask!=pLast ); + } + } + for(iTask=0; rc==SQLITE_OK && iTask<pSorter->nTask; iTask++){ + /* Check that: + ** + ** a) The incremental merge object is configured to use the + ** right task, and + ** b) If it is using task (nTask-1), it is configured to run + ** in single-threaded mode. This is important, as the + ** root merge (INCRINIT_ROOT) will be using the same task + ** object. 
+ */ + PmaReader *p = &pMain->aReadr[iTask]; + assert( p->pIncr==0 || ( + (p->pIncr->pTask==&pSorter->aTask[iTask]) /* a */ + && (iTask!=pSorter->nTask-1 || p->pIncr->bUseThread==0) /* b */ + )); + rc = vdbePmaReaderIncrInit(p, INCRINIT_TASK); + } + } + pMain = 0; + } + if( rc==SQLITE_OK ){ + rc = vdbePmaReaderIncrMergeInit(pReadr, INCRINIT_ROOT); + } + }else +#endif + { + rc = vdbeMergeEngineInit(pTask0, pMain, INCRINIT_NORMAL); + pSorter->pMerger = pMain; + pMain = 0; + } + } + + if( rc!=SQLITE_OK ){ + vdbeMergeEngineFree(pMain); + } + return rc; +} + + +/* +** Once the sorter has been populated by calls to sqlite3VdbeSorterWrite, +** this function is called to prepare for iterating through the records +** in sorted order. +*/ +SQLITE_PRIVATE int sqlite3VdbeSorterRewind(const VdbeCursor *pCsr, int *pbEof){ + VdbeSorter *pSorter; + int rc = SQLITE_OK; /* Return code */ + + assert( pCsr->eCurType==CURTYPE_SORTER ); + pSorter = pCsr->uc.pSorter; + assert( pSorter ); + + /* If no data has been written to disk, then do not do so now. Instead, + ** sort the VdbeSorter.pRecord list. The vdbe layer will read data directly + ** from the in-memory list. */ + if( pSorter->bUsePMA==0 ){ + if( pSorter->list.pList ){ + *pbEof = 0; + rc = vdbeSorterSort(&pSorter->aTask[0], &pSorter->list); + }else{ + *pbEof = 1; + } + return rc; + } + + /* Write the current in-memory list to a PMA. When the VdbeSorterWrite() + ** function flushes the contents of memory to disk, it immediately always + ** creates a new list consisting of a single key immediately afterwards. + ** So the list is never empty at this point. */ + assert( pSorter->list.pList ); + rc = vdbeSorterFlushPMA(pSorter); + + /* Join all threads */ + rc = vdbeSorterJoinAll(pSorter, rc); + + vdbeSorterRewindDebug("rewind"); + + /* Assuming no errors have occurred, set up a merger structure to + ** incrementally read and merge all remaining PMAs. */ + assert( pSorter->pReader==0 ); + if( rc==SQLITE_OK ){ + rc = vdbeSorterSetupMerge(pSorter); + *pbEof = 0; + } + + vdbeSorterRewindDebug("rewinddone"); + return rc; +} + +/* +** Advance to the next element in the sorter. +*/ +SQLITE_PRIVATE int sqlite3VdbeSorterNext(sqlite3 *db, const VdbeCursor *pCsr, int *pbEof){ + VdbeSorter *pSorter; + int rc; /* Return code */ + + assert( pCsr->eCurType==CURTYPE_SORTER ); + pSorter = pCsr->uc.pSorter; + assert( pSorter->bUsePMA || (pSorter->pReader==0 && pSorter->pMerger==0) ); + if( pSorter->bUsePMA ){ + assert( pSorter->pReader==0 || pSorter->pMerger==0 ); + assert( pSorter->bUseThreads==0 || pSorter->pReader ); + assert( pSorter->bUseThreads==1 || pSorter->pMerger ); +#if SQLITE_MAX_WORKER_THREADS>0 + if( pSorter->bUseThreads ){ + rc = vdbePmaReaderNext(pSorter->pReader); + *pbEof = (pSorter->pReader->pFd==0); + }else +#endif + /*if( !pSorter->bUseThreads )*/ { + assert( pSorter->pMerger!=0 ); + assert( pSorter->pMerger->pTask==(&pSorter->aTask[0]) ); + rc = vdbeMergeEngineStep(pSorter->pMerger, pbEof); + } + }else{ + SorterRecord *pFree = pSorter->list.pList; + pSorter->list.pList = pFree->u.pNext; + pFree->u.pNext = 0; + if( pSorter->list.aMemory==0 ) vdbeSorterRecordFree(db, pFree); + *pbEof = !pSorter->list.pList; + rc = SQLITE_OK; + } + return rc; +} + +/* +** Return a pointer to a buffer owned by the sorter that contains the +** current key. 
+*/ +static void *vdbeSorterRowkey( + const VdbeSorter *pSorter, /* Sorter object */ + int *pnKey /* OUT: Size of current key in bytes */ +){ + void *pKey; + if( pSorter->bUsePMA ){ + PmaReader *pReader; +#if SQLITE_MAX_WORKER_THREADS>0 + if( pSorter->bUseThreads ){ + pReader = pSorter->pReader; + }else +#endif + /*if( !pSorter->bUseThreads )*/{ + pReader = &pSorter->pMerger->aReadr[pSorter->pMerger->aTree[1]]; + } + *pnKey = pReader->nKey; + pKey = pReader->aKey; + }else{ + *pnKey = pSorter->list.pList->nVal; + pKey = SRVAL(pSorter->list.pList); + } + return pKey; +} + +/* +** Copy the current sorter key into the memory cell pOut. +*/ +SQLITE_PRIVATE int sqlite3VdbeSorterRowkey(const VdbeCursor *pCsr, Mem *pOut){ + VdbeSorter *pSorter; + void *pKey; int nKey; /* Sorter key to copy into pOut */ + + assert( pCsr->eCurType==CURTYPE_SORTER ); + pSorter = pCsr->uc.pSorter; + pKey = vdbeSorterRowkey(pSorter, &nKey); + if( sqlite3VdbeMemClearAndResize(pOut, nKey) ){ + return SQLITE_NOMEM; + } + pOut->n = nKey; + MemSetTypeFlag(pOut, MEM_Blob); + memcpy(pOut->z, pKey, nKey); + + return SQLITE_OK; +} + +/* +** Compare the key in memory cell pVal with the key that the sorter cursor +** passed as the first argument currently points to. For the purposes of +** the comparison, ignore the rowid field at the end of each record. +** +** If the sorter cursor key contains any NULL values, consider it to be +** less than pVal. Even if pVal also contains NULL values. +** +** If an error occurs, return an SQLite error code (i.e. SQLITE_NOMEM). +** Otherwise, set *pRes to a negative, zero or positive value if the +** key in pVal is smaller than, equal to or larger than the current sorter +** key. +** +** This routine forms the core of the OP_SorterCompare opcode, which in +** turn is used to verify uniqueness when constructing a UNIQUE INDEX. +*/ +SQLITE_PRIVATE int sqlite3VdbeSorterCompare( + const VdbeCursor *pCsr, /* Sorter cursor */ + Mem *pVal, /* Value to compare to current sorter key */ + int nKeyCol, /* Compare this many columns */ + int *pRes /* OUT: Result of comparison */ +){ + VdbeSorter *pSorter; + UnpackedRecord *r2; + KeyInfo *pKeyInfo; + int i; + void *pKey; int nKey; /* Sorter key to compare pVal with */ + + assert( pCsr->eCurType==CURTYPE_SORTER ); + pSorter = pCsr->uc.pSorter; + r2 = pSorter->pUnpacked; + pKeyInfo = pCsr->pKeyInfo; + if( r2==0 ){ + char *p; + r2 = pSorter->pUnpacked = sqlite3VdbeAllocUnpackedRecord(pKeyInfo,0,0,&p); + assert( pSorter->pUnpacked==(UnpackedRecord*)p ); + if( r2==0 ) return SQLITE_NOMEM; + r2->nField = nKeyCol; + } + assert( r2->nField==nKeyCol ); + + pKey = vdbeSorterRowkey(pSorter, &nKey); + sqlite3VdbeRecordUnpack(pKeyInfo, nKey, pKey, r2); + for(i=0; i<nKeyCol; i++){ + if( r2->aMem[i].flags & MEM_Null ){ + *pRes = -1; + return SQLITE_OK; + } + } + + *pRes = sqlite3VdbeRecordCompare(pVal->n, pVal->z, r2); + return SQLITE_OK; +} + +/************** End of vdbesort.c ********************************************/ /************** Begin file journal.c *****************************************/ /* ** 2007 August 22 ** ** The author disclaims copyright to this source code. In place of @@ -59872,10 +84309,11 @@ ** 1) The in-memory representation grows too large for the allocated ** buffer, or ** 2) The sqlite3JournalCreate() function is called. */ #ifdef SQLITE_ENABLE_ATOMIC_WRITE +/* #include "sqliteInt.h" */ /* ** A JournalFile object is a subclass of sqlite3_file used by ** as an open file handle for journal files. 
@@ -59904,10 +84342,18 @@ if( rc==SQLITE_OK ){ p->pReal = pReal; if( p->iSize>0 ){ assert(p->iSize<=p->nBuf); rc = sqlite3OsWrite(p->pReal, p->zBuf, p->iSize, 0); + } + if( rc!=SQLITE_OK ){ + /* If an error occurred while writing to the file, close it before + ** returning. This way, SQLite uses the in-memory journal data to + ** roll back changes made to the internal page-cache before this + ** function was called. */ + sqlite3OsClose(pReal); + p->pReal = 0; } } } return rc; } @@ -60028,11 +84474,15 @@ 0, /* xLock */ 0, /* xUnlock */ 0, /* xCheckReservedLock */ 0, /* xFileControl */ 0, /* xSectorSize */ - 0 /* xDeviceCharacteristics */ + 0, /* xDeviceCharacteristics */ + 0, /* xShmMap */ + 0, /* xShmLock */ + 0, /* xShmBarrier */ + 0 /* xShmUnmap */ }; /* ** Open a journal file. */ @@ -60069,10 +84519,20 @@ if( p->pMethods!=&JournalFileMethods ){ return SQLITE_OK; } return createFile((JournalFile *)p); } + +/* +** The file-handle passed as the only argument is guaranteed to be an open +** file. It may or may not be of class JournalFile. If the file is a +** JournalFile, and the underlying file on disk has not yet been opened, +** return 0. Otherwise, return 1. +*/ +SQLITE_PRIVATE int sqlite3JournalExists(sqlite3_file *p){ + return (p->pMethods!=&JournalFileMethods || ((JournalFile *)p)->pReal!=0); +} /* ** Return the number of bytes required to store a JournalFile that uses vfs ** pVfs to create the underlying on-disk files. */ @@ -60097,10 +84557,11 @@ ** ** This file contains code use to implement an in-memory rollback journal. ** The in-memory rollback journal is used to journal transactions for ** ":memory:" databases and when the journal_mode=MEMORY pragma is used. */ +/* #include "sqliteInt.h" */ /* Forward references to internal structures */ typedef struct MemJournal MemJournal; typedef struct FilePoint FilePoint; typedef struct FileChunk FileChunk; @@ -60108,21 +84569,15 @@ /* Space to hold the rollback journal is allocated in increments of ** this many bytes. ** ** The size chosen is a little less than a power of two. That way, ** the FileChunk object will have a size that almost exactly fills -** a power-of-two allocation. This mimimizes wasted space in power-of-two +** a power-of-two allocation. This minimizes wasted space in power-of-two ** memory allocators. */ #define JOURNAL_CHUNKSIZE ((int)(1024-sizeof(FileChunk*))) -/* Macro to find the minimum of two numeric values. -*/ -#ifndef MIN -# define MIN(x,y) ((x)<(y)?(x):(y)) -#endif - /* ** The rollback journal is composed of a linked list of these structures. */ struct FileChunk { FileChunk *pNext; /* Next chunk in the journal */ @@ -60295,11 +84750,11 @@ } /* ** Table of methods for MemJournal sqlite3_file object. */ -static struct sqlite3_io_methods MemJournalMethods = { +static const struct sqlite3_io_methods MemJournalMethods = { 1, /* iVersion */ memjrnlClose, /* xClose */ memjrnlRead, /* xRead */ memjrnlWrite, /* xWrite */ memjrnlTruncate, /* xTruncate */ @@ -60308,21 +84763,27 @@ 0, /* xLock */ 0, /* xUnlock */ 0, /* xCheckReservedLock */ 0, /* xFileControl */ 0, /* xSectorSize */ - 0 /* xDeviceCharacteristics */ + 0, /* xDeviceCharacteristics */ + 0, /* xShmMap */ + 0, /* xShmLock */ + 0, /* xShmBarrier */ + 0, /* xShmUnmap */ + 0, /* xFetch */ + 0 /* xUnfetch */ }; /* ** Open a journal file. 
*/ SQLITE_PRIVATE void sqlite3MemJournalOpen(sqlite3_file *pJfd){ MemJournal *p = (MemJournal *)pJfd; assert( EIGHT_BYTE_ALIGNMENT(p) ); memset(p, 0, sqlite3MemJournalSize()); - p->pMethod = &MemJournalMethods; + p->pMethod = (sqlite3_io_methods*)&MemJournalMethods; } /* ** Return true if the file-handle passed as an argument is ** an in-memory journal @@ -60330,12 +84791,11 @@ SQLITE_PRIVATE int sqlite3IsMemJournal(sqlite3_file *pJfd){ return pJfd->pMethods==&MemJournalMethods; } /* -** Return the number of bytes required to store a MemJournal that uses vfs -** pVfs to create the underlying on-disk files. +** Return the number of bytes required to store a MemJournal file descriptor. */ SQLITE_PRIVATE int sqlite3MemJournalSize(void){ return sizeof(MemJournal); } @@ -60353,15 +84813,18 @@ ** ************************************************************************* ** This file contains routines used for walking the parser tree for ** an SQL statement. */ +/* #include "sqliteInt.h" */ +/* #include <stdlib.h> */ +/* #include <string.h> */ /* ** Walk an expression tree. Invoke the callback once for each node -** of the expression, while decending. (In other words, the callback +** of the expression, while descending. (In other words, the callback ** is invoked before visiting children.) ** ** The return value from the callback should be one of the WRC_* ** constants to specify how to proceed with the walk. ** @@ -60374,27 +84837,29 @@ ** return the top-level walk call. ** ** The return value from this routine is WRC_Abort to abandon the tree walk ** and WRC_Continue to continue. */ -SQLITE_PRIVATE int sqlite3WalkExpr(Walker *pWalker, Expr *pExpr){ +static SQLITE_NOINLINE int walkExpr(Walker *pWalker, Expr *pExpr){ int rc; - if( pExpr==0 ) return WRC_Continue; testcase( ExprHasProperty(pExpr, EP_TokenOnly) ); testcase( ExprHasProperty(pExpr, EP_Reduced) ); rc = pWalker->xExprCallback(pWalker, pExpr); if( rc==WRC_Continue - && !ExprHasAnyProperty(pExpr,EP_TokenOnly) ){ + && !ExprHasProperty(pExpr,EP_TokenOnly) ){ if( sqlite3WalkExpr(pWalker, pExpr->pLeft) ) return WRC_Abort; if( sqlite3WalkExpr(pWalker, pExpr->pRight) ) return WRC_Abort; if( ExprHasProperty(pExpr, EP_xIsSelect) ){ if( sqlite3WalkSelect(pWalker, pExpr->x.pSelect) ) return WRC_Abort; }else{ if( sqlite3WalkExprList(pWalker, pExpr->x.pList) ) return WRC_Abort; } } return rc & WRC_Abort; +} +SQLITE_PRIVATE int sqlite3WalkExpr(Walker *pWalker, Expr *pExpr){ + return pExpr ? walkExpr(pWalker,pExpr) : WRC_Continue; } /* ** Call sqlite3WalkExpr() for every expression in list p or until ** an abort request is seen. @@ -60443,37 +84908,60 @@ if( ALWAYS(pSrc) ){ for(i=pSrc->nSrc, pItem=pSrc->a; i>0; i--, pItem++){ if( sqlite3WalkSelect(pWalker, pItem->pSelect) ){ return WRC_Abort; } + if( pItem->fg.isTabFunc + && sqlite3WalkExprList(pWalker, pItem->u1.pFuncArg) + ){ + return WRC_Abort; + } } } return WRC_Continue; } /* ** Call sqlite3WalkExpr() for every expression in Select statement p. ** Invoke sqlite3WalkSelect() for subqueries in the FROM clause and -** on the compound select chain, p->pPrior. +** on the compound select chain, p->pPrior. +** +** If it is not NULL, the xSelectCallback() callback is invoked before +** the walk of the expressions and FROM clause. The xSelectCallback2() +** method, if it is not NULL, is invoked following the walk of the +** expressions and FROM clause. ** ** Return WRC_Continue under normal conditions. Return WRC_Abort if ** there is an abort request. 
** ** If the Walker does not have an xSelectCallback() then this routine ** is a no-op returning WRC_Continue. */ SQLITE_PRIVATE int sqlite3WalkSelect(Walker *pWalker, Select *p){ int rc; - if( p==0 || pWalker->xSelectCallback==0 ) return WRC_Continue; + if( p==0 || (pWalker->xSelectCallback==0 && pWalker->xSelectCallback2==0) ){ + return WRC_Continue; + } rc = WRC_Continue; - while( p ){ - rc = pWalker->xSelectCallback(pWalker, p); - if( rc ) break; - if( sqlite3WalkSelectExpr(pWalker, p) ) return WRC_Abort; - if( sqlite3WalkSelectFrom(pWalker, p) ) return WRC_Abort; + pWalker->walkerDepth++; + while( p ){ + if( pWalker->xSelectCallback ){ + rc = pWalker->xSelectCallback(pWalker, p); + if( rc ) break; + } + if( sqlite3WalkSelectExpr(pWalker, p) + || sqlite3WalkSelectFrom(pWalker, p) + ){ + pWalker->walkerDepth--; + return WRC_Abort; + } + if( pWalker->xSelectCallback2 ){ + pWalker->xSelectCallback2(pWalker, p); + } p = p->pPrior; } + pWalker->walkerDepth--; return rc & WRC_Abort; } /************** End of walker.c **********************************************/ /************** Begin file resolve.c *****************************************/ @@ -60491,91 +84979,145 @@ ** ** This file contains routines used for walking the parser tree and ** resolve all identifiers by associating them with a particular ** table and column. */ +/* #include "sqliteInt.h" */ +/* #include <stdlib.h> */ +/* #include <string.h> */ + +/* +** Walk the expression tree pExpr and increase the aggregate function +** depth (the Expr.op2 field) by N on every TK_AGG_FUNCTION node. +** This needs to occur when copying a TK_AGG_FUNCTION node from an +** outer query into an inner subquery. +** +** incrAggFunctionDepth(pExpr,n) is the main routine. incrAggDepth(..) +** is a helper function - a callback for the tree walker. +*/ +static int incrAggDepth(Walker *pWalker, Expr *pExpr){ + if( pExpr->op==TK_AGG_FUNCTION ) pExpr->op2 += pWalker->u.n; + return WRC_Continue; +} +static void incrAggFunctionDepth(Expr *pExpr, int N){ + if( N>0 ){ + Walker w; + memset(&w, 0, sizeof(w)); + w.xExprCallback = incrAggDepth; + w.u.n = N; + sqlite3WalkExpr(&w, pExpr); + } +} /* ** Turn the pExpr expression into an alias for the iCol-th column of the ** result set in pEList. ** -** If the result set column is a simple column reference, then this routine -** makes an exact copy. But for any other kind of expression, this -** routine make a copy of the result set column as the argument to the -** TK_AS operator. The TK_AS operator causes the expression to be -** evaluated just once and then reused for each alias. -** -** The reason for suppressing the TK_AS term when the expression is a simple -** column reference is so that the column reference will be recognized as -** usable by indices within the WHERE clause processing logic. -** -** Hack: The TK_AS operator is inhibited if zType[0]=='G'. This means -** that in a GROUP BY clause, the expression is evaluated twice. Hence: -** -** SELECT random()%5 AS x, count(*) FROM tab GROUP BY x -** -** Is equivalent to: -** -** SELECT random()%5 AS x, count(*) FROM tab GROUP BY random()%5 -** -** The result of random()%5 in the GROUP BY clause is probably different -** from the result in the result-set. We might fix this someday. Or -** then again, we might not... +** If the reference is followed by a COLLATE operator, then make sure +** the COLLATE operator is preserved. 
For example: +** +** SELECT a+b, c+d FROM t1 ORDER BY 1 COLLATE nocase; +** +** Should be transformed into: +** +** SELECT a+b, c+d FROM t1 ORDER BY (a+b) COLLATE nocase; +** +** The nSubquery parameter specifies how many levels of subquery the +** alias is removed from the original expression. The usual value is +** zero but it might be more if the alias is contained within a subquery +** of the original expression. The Expr.op2 field of TK_AGG_FUNCTION +** structures must be increased by the nSubquery amount. */ static void resolveAlias( Parse *pParse, /* Parsing context */ ExprList *pEList, /* A result set */ int iCol, /* A column in the result set. 0..pEList->nExpr-1 */ Expr *pExpr, /* Transform this into an alias to the result set */ - const char *zType /* "GROUP" or "ORDER" or "" */ + const char *zType, /* "GROUP" or "ORDER" or "" */ + int nSubquery /* Number of subqueries that the label is moving */ ){ Expr *pOrig; /* The iCol-th column of the result set */ Expr *pDup; /* Copy of pOrig */ sqlite3 *db; /* The database connection */ assert( iCol>=0 && iCol<pEList->nExpr ); pOrig = pEList->a[iCol].pExpr; assert( pOrig!=0 ); - assert( pOrig->flags & EP_Resolved ); db = pParse->db; - if( pOrig->op!=TK_COLUMN && zType[0]!='G' ){ - pDup = sqlite3ExprDup(db, pOrig, 0); - pDup = sqlite3PExpr(pParse, TK_AS, pDup, 0, 0); - if( pDup==0 ) return; - if( pEList->a[iCol].iAlias==0 ){ - pEList->a[iCol].iAlias = (u16)(++pParse->nAlias); - } - pDup->iTable = pEList->a[iCol].iAlias; - }else if( ExprHasProperty(pOrig, EP_IntValue) || pOrig->u.zToken==0 ){ - pDup = sqlite3ExprDup(db, pOrig, 0); - if( pDup==0 ) return; - }else{ - char *zToken = pOrig->u.zToken; - assert( zToken!=0 ); - pOrig->u.zToken = 0; - pDup = sqlite3ExprDup(db, pOrig, 0); - pOrig->u.zToken = zToken; - if( pDup==0 ) return; - assert( (pDup->flags & (EP_Reduced|EP_TokenOnly))==0 ); - pDup->flags2 |= EP2_MallocedToken; - pDup->u.zToken = sqlite3DbStrDup(db, zToken); - } - if( pExpr->flags & EP_ExpCollate ){ - pDup->pColl = pExpr->pColl; - pDup->flags |= EP_ExpCollate; - } + pDup = sqlite3ExprDup(db, pOrig, 0); + if( pDup==0 ) return; + if( zType[0]!='G' ) incrAggFunctionDepth(pDup, nSubquery); + if( pExpr->op==TK_COLLATE ){ + pDup = sqlite3ExprAddCollateString(pParse, pDup, pExpr->u.zToken); + } + ExprSetProperty(pDup, EP_Alias); /* Before calling sqlite3ExprDelete(), set the EP_Static flag. This ** prevents ExprDelete() from deleting the Expr structure itself, ** allowing it to be repopulated by the memcpy() on the following line. + ** The pExpr->u.zToken might point into memory that will be freed by the + ** sqlite3DbFree(db, pDup) on the last line of this block, so be sure to + ** make a copy of the token before doing the sqlite3DbFree(). */ ExprSetProperty(pExpr, EP_Static); sqlite3ExprDelete(db, pExpr); memcpy(pExpr, pDup, sizeof(*pExpr)); + if( !ExprHasProperty(pExpr, EP_IntValue) && pExpr->u.zToken!=0 ){ + assert( (pExpr->flags & (EP_Reduced|EP_TokenOnly))==0 ); + pExpr->u.zToken = sqlite3DbStrDup(db, pExpr->u.zToken); + pExpr->flags |= EP_MemToken; + } sqlite3DbFree(db, pDup); } + +/* +** Return TRUE if the name zCol occurs anywhere in the USING clause. +** +** Return FALSE if the USING clause is NULL or if it does not contain +** zCol. 
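+**
+** For example (illustrative), when lookupName() below resolves the column
+** "a" in
+**
+**     SELECT a FROM t1 JOIN t2 USING(a);
+**
+** the match against t2.a is skipped because "a" appears in the USING
+** clause attached to t2, so the reference is not reported as ambiguous.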
+*/ +static int nameInUsingClause(IdList *pUsing, const char *zCol){ + if( pUsing ){ + int k; + for(k=0; k<pUsing->nId; k++){ + if( sqlite3StrICmp(pUsing->a[k].zName, zCol)==0 ) return 1; + } + } + return 0; +} + +/* +** Subqueries stores the original database, table and column names for their +** result sets in ExprList.a[].zSpan, in the form "DATABASE.TABLE.COLUMN". +** Check to see if the zSpan given to this routine matches the zDb, zTab, +** and zCol. If any of zDb, zTab, and zCol are NULL then those fields will +** match anything. +*/ +SQLITE_PRIVATE int sqlite3MatchSpanName( + const char *zSpan, + const char *zCol, + const char *zTab, + const char *zDb +){ + int n; + for(n=0; ALWAYS(zSpan[n]) && zSpan[n]!='.'; n++){} + if( zDb && (sqlite3StrNICmp(zSpan, zDb, n)!=0 || zDb[n]!=0) ){ + return 0; + } + zSpan += n+1; + for(n=0; ALWAYS(zSpan[n]) && zSpan[n]!='.'; n++){} + if( zTab && (sqlite3StrNICmp(zSpan, zTab, n)!=0 || zTab[n]!=0) ){ + return 0; + } + zSpan += n+1; + if( zCol && sqlite3StrICmp(zSpan, zCol)!=0 ){ + return 0; + } + return 1; +} + /* ** Given the name of a column of the form X.Y.Z or Y.Z or just Z, look up ** that name in the set of source tables in pSrcList and make the pExpr ** expression node refer back to that source column. The following changes ** are made to pExpr: @@ -60607,131 +85149,155 @@ const char *zTab, /* Name of table containing column, or NULL */ const char *zCol, /* Name of the column. */ NameContext *pNC, /* The name context used to resolve the name */ Expr *pExpr /* Make this EXPR node point to the selected column */ ){ - int i, j; /* Loop counters */ + int i, j; /* Loop counters */ int cnt = 0; /* Number of matching column names */ int cntTab = 0; /* Number of matching table names */ + int nSubquery = 0; /* How many levels of subquery */ sqlite3 *db = pParse->db; /* The database connection */ struct SrcList_item *pItem; /* Use for looping over pSrcList items */ struct SrcList_item *pMatch = 0; /* The matching pSrcList item */ NameContext *pTopNC = pNC; /* First namecontext in the list */ Schema *pSchema = 0; /* Schema of the expression */ - int isTrigger = 0; + int isTrigger = 0; /* True if resolved to a trigger column */ + Table *pTab = 0; /* Table hold the row */ + Column *pCol; /* A column of pTab */ assert( pNC ); /* the name context cannot be NULL. */ assert( zCol ); /* The Z in X.Y.Z cannot be NULL */ - assert( ~ExprHasAnyProperty(pExpr, EP_TokenOnly|EP_Reduced) ); + assert( !ExprHasProperty(pExpr, EP_TokenOnly|EP_Reduced) ); /* Initialize the node to no-match */ pExpr->iTable = -1; pExpr->pTab = 0; - ExprSetIrreducible(pExpr); + ExprSetVVAProperty(pExpr, EP_NoReduce); + + /* Translate the schema name in zDb into a pointer to the corresponding + ** schema. If not found, pSchema will remain NULL and nothing will match + ** resulting in an appropriate error message toward the end of this routine + */ + if( zDb ){ + testcase( pNC->ncFlags & NC_PartIdx ); + testcase( pNC->ncFlags & NC_IsCheck ); + if( (pNC->ncFlags & (NC_PartIdx|NC_IsCheck))!=0 ){ + /* Silently ignore database qualifiers inside CHECK constraints and + ** partial indices. Do not raise errors because that might break + ** legacy and because it does not hurt anything to just ignore the + ** database name. 
*/ + zDb = 0; + }else{ + for(i=0; i<db->nDb; i++){ + assert( db->aDb[i].zName ); + if( sqlite3StrICmp(db->aDb[i].zName,zDb)==0 ){ + pSchema = db->aDb[i].pSchema; + break; + } + } + } + } /* Start at the inner-most context and move outward until a match is found */ while( pNC && cnt==0 ){ ExprList *pEList; SrcList *pSrcList = pNC->pSrcList; if( pSrcList ){ for(i=0, pItem=pSrcList->a; i<pSrcList->nSrc; i++, pItem++){ - Table *pTab; - int iDb; - Column *pCol; - pTab = pItem->pTab; assert( pTab!=0 && pTab->zName!=0 ); - iDb = sqlite3SchemaToIndex(db, pTab->pSchema); assert( pTab->nCol>0 ); + if( pItem->pSelect && (pItem->pSelect->selFlags & SF_NestedFrom)!=0 ){ + int hit = 0; + pEList = pItem->pSelect->pEList; + for(j=0; j<pEList->nExpr; j++){ + if( sqlite3MatchSpanName(pEList->a[j].zSpan, zCol, zTab, zDb) ){ + cnt++; + cntTab = 2; + pMatch = pItem; + pExpr->iColumn = j; + hit = 1; + } + } + if( hit || zTab==0 ) continue; + } + if( zDb && pTab->pSchema!=pSchema ){ + continue; + } if( zTab ){ - if( pItem->zAlias ){ - char *zTabName = pItem->zAlias; - if( sqlite3StrICmp(zTabName, zTab)!=0 ) continue; - }else{ - char *zTabName = pTab->zName; - if( NEVER(zTabName==0) || sqlite3StrICmp(zTabName, zTab)!=0 ){ - continue; - } - if( zDb!=0 && sqlite3StrICmp(db->aDb[iDb].zName, zDb)!=0 ){ - continue; - } + const char *zTabName = pItem->zAlias ? pItem->zAlias : pTab->zName; + assert( zTabName!=0 ); + if( sqlite3StrICmp(zTabName, zTab)!=0 ){ + continue; } } if( 0==(cntTab++) ){ - pExpr->iTable = pItem->iCursor; - pExpr->pTab = pTab; - pSchema = pTab->pSchema; pMatch = pItem; } for(j=0, pCol=pTab->aCol; j<pTab->nCol; j++, pCol++){ if( sqlite3StrICmp(pCol->zName, zCol)==0 ){ - IdList *pUsing; + /* If there has been exactly one prior match and this match + ** is for the right-hand table of a NATURAL JOIN or is in a + ** USING clause, then skip this match. + */ + if( cnt==1 ){ + if( pItem->fg.jointype & JT_NATURAL ) continue; + if( nameInUsingClause(pItem->pUsing, zCol) ) continue; + } cnt++; - pExpr->iTable = pItem->iCursor; - pExpr->pTab = pTab; pMatch = pItem; - pSchema = pTab->pSchema; /* Substitute the rowid (column -1) for the INTEGER PRIMARY KEY */ pExpr->iColumn = j==pTab->iPKey ? -1 : (i16)j; - if( i<pSrcList->nSrc-1 ){ - if( pItem[1].jointype & JT_NATURAL ){ - /* If this match occurred in the left table of a natural join, - ** then skip the right table to avoid a duplicate match */ - pItem++; - i++; - }else if( (pUsing = pItem[1].pUsing)!=0 ){ - /* If this match occurs on a column that is in the USING clause - ** of a join, skip the search of the right table of the join - ** to avoid a duplicate match there. 
*/ - int k; - for(k=0; k<pUsing->nId; k++){ - if( sqlite3StrICmp(pUsing->a[k].zName, zCol)==0 ){ - pItem++; - i++; - break; - } - } - } - } break; } } } - } + if( pMatch ){ + pExpr->iTable = pMatch->iCursor; + pExpr->pTab = pMatch->pTab; + /* RIGHT JOIN not (yet) supported */ + assert( (pMatch->fg.jointype & JT_RIGHT)==0 ); + if( (pMatch->fg.jointype & JT_LEFT)!=0 ){ + ExprSetProperty(pExpr, EP_CanBeNull); + } + pSchema = pExpr->pTab->pSchema; + } + } /* if( pSrcList ) */ #ifndef SQLITE_OMIT_TRIGGER /* If we have not already resolved the name, then maybe ** it is a new.* or old.* trigger argument reference */ - if( zDb==0 && zTab!=0 && cnt==0 && pParse->pTriggerTab!=0 ){ + if( zDb==0 && zTab!=0 && cntTab==0 && pParse->pTriggerTab!=0 ){ int op = pParse->eTriggerOp; - Table *pTab = 0; assert( op==TK_DELETE || op==TK_UPDATE || op==TK_INSERT ); if( op!=TK_DELETE && sqlite3StrICmp("new",zTab) == 0 ){ pExpr->iTable = 1; pTab = pParse->pTriggerTab; }else if( op!=TK_INSERT && sqlite3StrICmp("old",zTab)==0 ){ pExpr->iTable = 0; pTab = pParse->pTriggerTab; + }else{ + pTab = 0; } if( pTab ){ int iCol; pSchema = pTab->pSchema; cntTab++; - for(iCol=0; iCol<pTab->nCol; iCol++){ - Column *pCol = &pTab->aCol[iCol]; + for(iCol=0, pCol=pTab->aCol; iCol<pTab->nCol; iCol++, pCol++){ if( sqlite3StrICmp(pCol->zName, zCol)==0 ){ if( iCol==pTab->iPKey ){ iCol = -1; } break; } } - if( iCol>=pTab->nCol && sqlite3IsRowid(zCol) ){ - iCol = -1; /* IMP: R-44911-55124 */ + if( iCol>=pTab->nCol && sqlite3IsRowid(zCol) && VisibleRowid(pTab) ){ + /* IMP: R-51414-32910 */ + iCol = -1; } if( iCol<pTab->nCol ){ cnt++; if( iCol<0 ){ pExpr->affinity = SQLITE_AFF_INTEGER; @@ -60753,13 +85319,19 @@ #endif /* !defined(SQLITE_OMIT_TRIGGER) */ /* ** Perhaps the name is a reference to the ROWID */ - if( cnt==0 && cntTab==1 && sqlite3IsRowid(zCol) ){ + if( cnt==0 + && cntTab==1 + && pMatch + && (pNC->ncFlags & NC_IdxExpr)==0 + && sqlite3IsRowid(zCol) + && VisibleRowid(pMatch->pTab) + ){ cnt = 1; - pExpr->iColumn = -1; /* IMP: R-44911-55124 */ + pExpr->iColumn = -1; pExpr->affinity = SQLITE_AFF_INTEGER; } /* ** If the input is of the form Z (not Y.Z or X.Y.Z) then the name Z @@ -60770,25 +85342,34 @@ ** ** In cases like this, replace pExpr with a copy of the expression that ** forms the result set entry ("a+b" in the example) and return immediately. ** Note that the expression in the result set should have already been ** resolved by the time the WHERE clause is resolved. + ** + ** The ability to use an output result-set column in the WHERE, GROUP BY, + ** or HAVING clauses, or as part of a larger expression in the ORDER BY + ** clause is not standard SQL. This is a (goofy) SQLite extension, that + ** is supported for backwards compatibility only. Hence, we issue a warning + ** on sqlite3_log() whenever the capability is used. 
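+  **
+  ** For example (illustrative), in
+  **
+  **     SELECT a+b AS x, count(*) FROM t1 GROUP BY x HAVING x>10;
+  **
+  ** both uses of "x" are resolved by substituting a copy of "a+b", even
+  ** though, as noted above, standard SQL would not allow the alias there.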
*/ - if( cnt==0 && (pEList = pNC->pEList)!=0 && zTab==0 ){ + if( (pEList = pNC->pEList)!=0 + && zTab==0 + && cnt==0 + ){ for(j=0; j<pEList->nExpr; j++){ char *zAs = pEList->a[j].zName; if( zAs!=0 && sqlite3StrICmp(zAs, zCol)==0 ){ Expr *pOrig; assert( pExpr->pLeft==0 && pExpr->pRight==0 ); assert( pExpr->x.pList==0 ); assert( pExpr->x.pSelect==0 ); pOrig = pEList->a[j].pExpr; - if( !pNC->allowAgg && ExprHasProperty(pOrig, EP_Agg) ){ + if( (pNC->ncFlags&NC_AllowAgg)==0 && ExprHasProperty(pOrig, EP_Agg) ){ sqlite3ErrorMsg(pParse, "misuse of aliased aggregate %s", zAs); return WRC_Abort; } - resolveAlias(pParse, pEList, j, pExpr, ""); + resolveAlias(pParse, pEList, j, pExpr, "", nSubquery); cnt = 1; pMatch = 0; assert( zTab==0 && zDb==0 ); goto lookupname_end; } @@ -60798,10 +85379,11 @@ /* Advance to the next name context. The loop will exit when either ** we have a match (cnt>0) or when we run out of name contexts. */ if( cnt==0 ){ pNC = pNC->pNext; + nSubquery++; } } /* ** If X and Y are NULL (in other words if only the column name Z is @@ -60831,10 +85413,11 @@ }else if( zTab ){ sqlite3ErrorMsg(pParse, "%s: %s.%s", zErr, zTab, zCol); }else{ sqlite3ErrorMsg(pParse, "%s: %s", zErr, zCol); } + pParse->checkSchema = 1; pTopNC->nErr++; } /* If a column from a table in pSrcList is referenced, then record ** this fact in the pSrcList.a[].colUsed bitmask. Column 0 causes @@ -60860,11 +85443,13 @@ pExpr->pRight = 0; pExpr->op = (isTrigger ? TK_TRIGGER : TK_COLUMN); lookupname_end: if( cnt==1 ){ assert( pNC!=0 ); - sqlite3AuthRead(pParse, pExpr, pSchema, pNC->pSrcList); + if( !ExprHasProperty(pExpr, EP_Alias) ){ + sqlite3AuthRead(pParse, pExpr, pSchema, pNC->pSrcList); + } /* Increment the nRef value on all name contexts from TopNC up to ** the point where the name matched. */ for(;;){ assert( pTopNC!=0 ); pTopNC->nRef++; @@ -60897,10 +85482,45 @@ } ExprSetProperty(p, EP_Resolved); } return p; } + +/* +** Report an error that an expression is not valid for some set of +** pNC->ncFlags values determined by validMask. +*/ +static void notValid( + Parse *pParse, /* Leave error message here */ + NameContext *pNC, /* The name context */ + const char *zMsg, /* Type of error */ + int validMask /* Set of contexts for which prohibited */ +){ + assert( (validMask&~(NC_IsCheck|NC_PartIdx|NC_IdxExpr))==0 ); + if( (pNC->ncFlags & validMask)!=0 ){ + const char *zIn = "partial index WHERE clauses"; + if( pNC->ncFlags & NC_IdxExpr ) zIn = "index expressions"; +#ifndef SQLITE_OMIT_CHECK + else if( pNC->ncFlags & NC_IsCheck ) zIn = "CHECK constraints"; +#endif + sqlite3ErrorMsg(pParse, "%s prohibited in %s", zMsg, zIn); + } +} + +/* +** Expression p should encode a floating point value between 1.0 and 0.0. +** Return 1024 times this value. Or return -1 if p is not a floating point +** value between 1.0 and 0.0. +*/ +static int exprProbability(Expr *p){ + double r = -1.0; + if( p->op!=TK_FLOAT ) return -1; + sqlite3AtoF(p->u.zToken, &r, sqlite3Strlen30(p->u.zToken), SQLITE_UTF8); + assert( r>=0.0 ); + if( r>1.0 ) return -1; + return (int)(r*134217728.0); +} /* ** This routine is callback for sqlite3WalkExpr(). 
** ** Resolve symbolic names into TK_COLUMN operators for the current @@ -60918,11 +85538,11 @@ pNC = pWalker->u.pNC; assert( pNC!=0 ); pParse = pNC->pParse; assert( pParse==pWalker->pParse ); - if( ExprHasAnyProperty(pExpr, EP_Resolved) ) return WRC_Prune; + if( ExprHasProperty(pExpr, EP_Resolved) ) return WRC_Prune; ExprSetProperty(pExpr, EP_Resolved); #ifndef NDEBUG if( pNC->pSrcList && pNC->pSrcList->nAlloc>0 ){ SrcList *pSrcList = pNC->pSrcList; int i; @@ -60948,11 +85568,12 @@ pExpr->iTable = pItem->iCursor; pExpr->iColumn = -1; pExpr->affinity = SQLITE_AFF_INTEGER; break; } -#endif /* defined(SQLITE_ENABLE_UPDATE_DELETE_LIMIT) && !defined(SQLITE_OMIT_SUBQUERY) */ +#endif /* defined(SQLITE_ENABLE_UPDATE_DELETE_LIMIT) + && !defined(SQLITE_OMIT_SUBQUERY) */ /* A lone identifier is the name of a column. */ case TK_ID: { return lookupName(pParse, 0, 0, pExpr->u.zToken, pNC, pExpr); @@ -60966,10 +85587,12 @@ const char *zTable; const char *zDb; Expr *pRight; /* if( pSrcList==0 ) break; */ + notValid(pParse, pNC, "the \".\" operator", NC_IdxExpr); + /*notValid(pParse, pNC, "the \".\" operator", NC_PartIdx|NC_IsCheck, 1);*/ pRight = pExpr->pRight; if( pRight->op==TK_ID ){ zDb = 0; zTable = pExpr->pLeft->u.zToken; zColumn = pRight->u.zToken; @@ -60982,11 +85605,10 @@ return lookupName(pParse, zDb, zTable, zColumn, pNC, pExpr); } /* Resolve function names */ - case TK_CONST_FUNC: case TK_FUNCTION: { ExprList *pList = pExpr->x.pList; /* The argument list */ int n = pList ? pList->nExpr : 0; /* Number of arguments */ int no_such_func = 0; /* True if no such function exists */ int wrong_num_args = 0; /* True if wrong number of arguments */ @@ -60995,27 +85617,48 @@ int nId; /* Number of characters in function name */ const char *zId; /* The function name. */ FuncDef *pDef; /* Information about the function */ u8 enc = ENC(pParse->db); /* The database encoding */ - testcase( pExpr->op==TK_CONST_FUNC ); assert( !ExprHasProperty(pExpr, EP_xIsSelect) ); + notValid(pParse, pNC, "functions", NC_PartIdx); zId = pExpr->u.zToken; nId = sqlite3Strlen30(zId); pDef = sqlite3FindFunction(pParse->db, zId, nId, n, enc, 0); if( pDef==0 ){ - pDef = sqlite3FindFunction(pParse->db, zId, nId, -1, enc, 0); + pDef = sqlite3FindFunction(pParse->db, zId, nId, -2, enc, 0); if( pDef==0 ){ no_such_func = 1; }else{ wrong_num_args = 1; } }else{ - is_agg = pDef->xFunc==0; - } + is_agg = pDef->xFinalize!=0; + if( pDef->funcFlags & SQLITE_FUNC_UNLIKELY ){ + ExprSetProperty(pExpr, EP_Unlikely|EP_Skip); + if( n==2 ){ + pExpr->iTable = exprProbability(pList->a[1].pExpr); + if( pExpr->iTable<0 ){ + sqlite3ErrorMsg(pParse, + "second argument to likelihood() must be a " + "constant between 0.0 and 1.0"); + pNC->nErr++; + } + }else{ + /* EVIDENCE-OF: R-61304-29449 The unlikely(X) function is + ** equivalent to likelihood(X, 0.0625). + ** EVIDENCE-OF: R-01283-11636 The unlikely(X) function is + ** short-hand for likelihood(X,0.0625). + ** EVIDENCE-OF: R-36850-34127 The likely(X) function is short-hand + ** for likelihood(X,0.9375). + ** EVIDENCE-OF: R-53436-40973 The likely(X) function is equivalent + ** to likelihood(X,0.9375). */ + /* TUNING: unlikely() probability is 0.0625. likely() is 0.9375 */ + pExpr->iTable = pDef->zName[0]=='u' ? 
8388608 : 125829120; + } + } #ifndef SQLITE_OMIT_AUTHORIZATION - if( pDef ){ auth = sqlite3AuthCheck(pParse, SQLITE_FUNCTION, 0, pDef->zName, 0); if( auth!=SQLITE_OK ){ if( auth==SQLITE_DENY ){ sqlite3ErrorMsg(pParse, "not authorized to use function: %s", pDef->zName); @@ -61022,31 +85665,55 @@ pNC->nErr++; } pExpr->op = TK_NULL; return WRC_Prune; } +#endif + if( pDef->funcFlags & (SQLITE_FUNC_CONSTANT|SQLITE_FUNC_SLOCHNG) ){ + /* For the purposes of the EP_ConstFunc flag, date and time + ** functions and other functions that change slowly are considered + ** constant because they are constant for the duration of one query */ + ExprSetProperty(pExpr,EP_ConstFunc); + } + if( (pDef->funcFlags & SQLITE_FUNC_CONSTANT)==0 ){ + /* Date/time functions that use 'now', and other functions like + ** sqlite_version() that might change over time cannot be used + ** in an index. */ + notValid(pParse, pNC, "non-deterministic functions", NC_IdxExpr); + } } -#endif - if( is_agg && !pNC->allowAgg ){ + if( is_agg && (pNC->ncFlags & NC_AllowAgg)==0 ){ sqlite3ErrorMsg(pParse, "misuse of aggregate function %.*s()", nId,zId); pNC->nErr++; is_agg = 0; - }else if( no_such_func ){ + }else if( no_such_func && pParse->db->init.busy==0 ){ sqlite3ErrorMsg(pParse, "no such function: %.*s", nId, zId); pNC->nErr++; }else if( wrong_num_args ){ sqlite3ErrorMsg(pParse,"wrong number of arguments to function %.*s()", nId, zId); pNC->nErr++; } + if( is_agg ) pNC->ncFlags &= ~NC_AllowAgg; + sqlite3WalkExprList(pWalker, pList); if( is_agg ){ + NameContext *pNC2 = pNC; pExpr->op = TK_AGG_FUNCTION; - pNC->hasAgg = 1; + pExpr->op2 = 0; + while( pNC2 && !sqlite3FunctionUsesThisSrc(pExpr, pNC2->pSrcList) ){ + pExpr->op2++; + pNC2 = pNC2->pNext; + } + assert( pDef!=0 ); + if( pNC2 ){ + assert( SQLITE_FUNC_MINMAX==NC_MinMaxAgg ); + testcase( (pDef->funcFlags & SQLITE_FUNC_MINMAX)!=0 ); + pNC2->ncFlags |= NC_HasAgg | (pDef->funcFlags & SQLITE_FUNC_MINMAX); + + } + pNC->ncFlags |= NC_AllowAgg; } - if( is_agg ) pNC->allowAgg = 0; - sqlite3WalkExprList(pWalker, pList); - if( is_agg ) pNC->allowAgg = 1; /* FIX ME: Compute pExpr->affinity based on the expected return ** type of the function */ return WRC_Prune; } @@ -61056,31 +85723,23 @@ #endif case TK_IN: { testcase( pExpr->op==TK_IN ); if( ExprHasProperty(pExpr, EP_xIsSelect) ){ int nRef = pNC->nRef; -#ifndef SQLITE_OMIT_CHECK - if( pNC->isCheck ){ - sqlite3ErrorMsg(pParse,"subqueries prohibited in CHECK constraints"); - } -#endif + notValid(pParse, pNC, "subqueries", NC_IsCheck|NC_PartIdx|NC_IdxExpr); sqlite3WalkSelect(pWalker, pExpr->x.pSelect); assert( pNC->nRef>=nRef ); if( nRef!=pNC->nRef ){ ExprSetProperty(pExpr, EP_VarSelect); } } break; } -#ifndef SQLITE_OMIT_CHECK case TK_VARIABLE: { - if( pNC->isCheck ){ - sqlite3ErrorMsg(pParse,"parameters prohibited in CHECK constraints"); - } + notValid(pParse, pNC, "parameters", NC_IsCheck|NC_PartIdx|NC_IdxExpr); break; } -#endif } return (pParse->nErr || pParse->db->mallocFailed) ? WRC_Abort : WRC_Continue; } /* @@ -61153,11 +85812,11 @@ */ memset(&nc, 0, sizeof(nc)); nc.pParse = pParse; nc.pSrcList = pSelect->pSrc; nc.pEList = pEList; - nc.allowAgg = 1; + nc.ncFlags = NC_AllowAgg; nc.nErr = 0; db = pParse->db; savedSuppErr = db->suppressErr; db->suppressErr = 1; rc = sqlite3ResolveExprNames(&nc, pE); @@ -61167,11 +85826,11 @@ /* Try to match the ORDER BY expression against an expression ** in the result set. Return an 1-based index of the matching ** result-set entry. 
*/ for(i=0; i<pEList->nExpr; i++){ - if( sqlite3ExprCompare(pEList->a[i].pExpr, pE)<2 ){ + if( sqlite3ExprCompare(pEList->a[i].pExpr, pE, -1)<2 ){ return i+1; } } /* If no match, return 0. */ @@ -61241,11 +85900,11 @@ assert( pEList!=0 ); for(i=0, pItem=pOrderBy->a; i<pOrderBy->nExpr; i++, pItem++){ int iCol = -1; Expr *pE, *pDup; if( pItem->done ) continue; - pE = pItem->pExpr; + pE = sqlite3ExprSkipCollate(pItem->pExpr); if( sqlite3ExprIsInteger(pE, &iCol) ){ if( iCol<=0 || iCol>pEList->nExpr ){ resolveOutOfRangeError(pParse, "ORDER", i+1, pEList->nExpr); return 1; } @@ -61259,19 +85918,27 @@ } sqlite3ExprDelete(db, pDup); } } if( iCol>0 ){ - CollSeq *pColl = pE->pColl; - int flags = pE->flags & EP_ExpCollate; + /* Convert the ORDER BY term into an integer column number iCol, + ** taking care to preserve the COLLATE clause if it exists */ + Expr *pNew = sqlite3Expr(db, TK_INTEGER, 0); + if( pNew==0 ) return 1; + pNew->flags |= EP_IntValue; + pNew->u.iValue = iCol; + if( pItem->pExpr==pE ){ + pItem->pExpr = pNew; + }else{ + Expr *pParent = pItem->pExpr; + assert( pParent->op==TK_COLLATE ); + while( pParent->pLeft->op==TK_COLLATE ) pParent = pParent->pLeft; + assert( pParent->pLeft==pE ); + pParent->pLeft = pNew; + } sqlite3ExprDelete(db, pE); - pItem->pExpr = pE = sqlite3Expr(db, TK_INTEGER, 0); - if( pE==0 ) return 1; - pE->pColl = pColl; - pE->flags |= EP_IntValue | flags; - pE->u.iValue = iCol; - pItem->iCol = (u16)iCol; + pItem->u.x.iOrderByCol = (u16)iCol; pItem->done = 1; }else{ moreToDo = 1; } } @@ -61288,12 +85955,12 @@ } /* ** Check every term in the ORDER BY or GROUP BY clause pOrderBy of ** the SELECT statement pSelect. If any term is reference to a -** result set expression (as determined by the ExprList.a.iCol field) -** then convert that term into a copy of the corresponding result set +** result set expression (as determined by the ExprList.a.u.x.iOrderByCol +** field) then convert that term into a copy of the corresponding result set ** column. ** ** If any errors are detected, add an error message to pParse and ** return non-zero. Return zero if no errors are seen. */ @@ -61316,16 +85983,17 @@ } #endif pEList = pSelect->pEList; assert( pEList!=0 ); /* sqlite3SelectNew() guarantees this */ for(i=0, pItem=pOrderBy->a; i<pOrderBy->nExpr; i++, pItem++){ - if( pItem->iCol ){ - if( pItem->iCol>pEList->nExpr ){ + if( pItem->u.x.iOrderByCol ){ + if( pItem->u.x.iOrderByCol>pEList->nExpr ){ resolveOutOfRangeError(pParse, zType, i+1, pEList->nExpr); return 1; } - resolveAlias(pParse, pEList, pItem->iCol-1, pItem->pExpr, zType); + resolveAlias(pParse, pEList, pItem->u.x.iOrderByCol-1, pItem->pExpr, + zType,0); } } return 0; } @@ -61336,11 +86004,11 @@ ** ** This routine resolves each term of the clause into an expression. ** If the order-by term is an integer I between 1 and N (where N is the ** number of columns in the result set of the SELECT) then the expression ** in the resolution is a copy of the I-th result-set expression. If -** the order-by term is an identify that corresponds to the AS-name of +** the order-by term is an identifier that corresponds to the AS-name of ** a result-set expression, then the term resolves to a copy of the ** result-set expression. Otherwise, the expression is resolved in ** the usual way - using sqlite3ResolveExprNames(). ** ** This routine returns the number of errors. 
If errors occur, then @@ -61351,11 +86019,11 @@ NameContext *pNC, /* The name context of the SELECT statement */ Select *pSelect, /* The SELECT statement holding pOrderBy */ ExprList *pOrderBy, /* An ORDER BY or GROUP BY clause to resolve */ const char *zType /* Either "ORDER" or "GROUP", as appropriate */ ){ - int i; /* Loop counter */ + int i, j; /* Loop counters */ int iCol; /* Column number */ struct ExprList_item *pItem; /* A term of the ORDER BY clause */ Parse *pParse; /* Parsing context */ int nResult; /* Number of terms in the result set */ @@ -61362,50 +86030,57 @@ if( pOrderBy==0 ) return 0; nResult = pSelect->pEList->nExpr; pParse = pNC->pParse; for(i=0, pItem=pOrderBy->a; i<pOrderBy->nExpr; i++, pItem++){ Expr *pE = pItem->pExpr; - iCol = resolveAsName(pParse, pSelect->pEList, pE); - if( iCol>0 ){ - /* If an AS-name match is found, mark this ORDER BY column as being - ** a copy of the iCol-th result-set column. The subsequent call to - ** sqlite3ResolveOrderGroupBy() will convert the expression to a - ** copy of the iCol-th result-set expression. */ - pItem->iCol = (u16)iCol; - continue; - } - if( sqlite3ExprIsInteger(pE, &iCol) ){ + Expr *pE2 = sqlite3ExprSkipCollate(pE); + if( zType[0]!='G' ){ + iCol = resolveAsName(pParse, pSelect->pEList, pE2); + if( iCol>0 ){ + /* If an AS-name match is found, mark this ORDER BY column as being + ** a copy of the iCol-th result-set column. The subsequent call to + ** sqlite3ResolveOrderGroupBy() will convert the expression to a + ** copy of the iCol-th result-set expression. */ + pItem->u.x.iOrderByCol = (u16)iCol; + continue; + } + } + if( sqlite3ExprIsInteger(pE2, &iCol) ){ /* The ORDER BY term is an integer constant. Again, set the column ** number so that sqlite3ResolveOrderGroupBy() will convert the ** order-by term to a copy of the result-set expression */ - if( iCol<1 ){ + if( iCol<1 || iCol>0xffff ){ resolveOutOfRangeError(pParse, zType, i+1, nResult); return 1; } - pItem->iCol = (u16)iCol; + pItem->u.x.iOrderByCol = (u16)iCol; continue; } /* Otherwise, treat the ORDER BY term as an ordinary expression */ - pItem->iCol = 0; + pItem->u.x.iOrderByCol = 0; if( sqlite3ResolveExprNames(pNC, pE) ){ return 1; + } + for(j=0; j<pSelect->pEList->nExpr; j++){ + if( sqlite3ExprCompare(pE, pSelect->pEList->a[j].pExpr, -1)==0 ){ + pItem->u.x.iOrderByCol = j+1; + } } } return sqlite3ResolveOrderGroupBy(pParse, pSelect, pOrderBy, zType); } /* -** Resolve names in the SELECT statement p and all of its descendents. +** Resolve names in the SELECT statement p and all of its descendants. */ static int resolveSelectStep(Walker *pWalker, Select *p){ NameContext *pOuterNC; /* Context that contains this SELECT */ NameContext sNC; /* Name context of this SELECT */ int isCompound; /* True if p is a compound select */ int nCompound; /* Number of compound terms processed so far */ Parse *pParse; /* Parsing context */ - ExprList *pEList; /* Result set expression list */ int i; /* Loop counter */ ExprList *pGroupBy; /* The GROUP BY clause */ Select *pLeftmost; /* Left-most of SELECT of a compound */ sqlite3 *db; /* Database connection */ @@ -61446,86 +86121,133 @@ sNC.pParse = pParse; if( sqlite3ResolveExprNames(&sNC, p->pLimit) || sqlite3ResolveExprNames(&sNC, p->pOffset) ){ return WRC_Abort; } - - /* Set up the local name-context to pass to sqlite3ResolveExprNames() to - ** resolve the result-set expression list. - */ - sNC.allowAgg = 1; - sNC.pSrcList = p->pSrc; - sNC.pNext = pOuterNC; - - /* Resolve names in the result set. 
*/ - pEList = p->pEList; - assert( pEList!=0 ); - for(i=0; i<pEList->nExpr; i++){ - Expr *pX = pEList->a[i].pExpr; - if( sqlite3ResolveExprNames(&sNC, pX) ){ - return WRC_Abort; - } + + /* If the SF_Converted flags is set, then this Select object was + ** was created by the convertCompoundSelectToSubquery() function. + ** In this case the ORDER BY clause (p->pOrderBy) should be resolved + ** as if it were part of the sub-query, not the parent. This block + ** moves the pOrderBy down to the sub-query. It will be moved back + ** after the names have been resolved. */ + if( p->selFlags & SF_Converted ){ + Select *pSub = p->pSrc->a[0].pSelect; + assert( p->pSrc->nSrc==1 && p->pOrderBy ); + assert( pSub->pPrior && pSub->pOrderBy==0 ); + pSub->pOrderBy = p->pOrderBy; + p->pOrderBy = 0; } /* Recursively resolve names in all subqueries */ for(i=0; i<p->pSrc->nSrc; i++){ struct SrcList_item *pItem = &p->pSrc->a[i]; if( pItem->pSelect ){ + NameContext *pNC; /* Used to iterate name contexts */ + int nRef = 0; /* Refcount for pOuterNC and outer contexts */ const char *zSavedContext = pParse->zAuthContext; + + /* Count the total number of references to pOuterNC and all of its + ** parent contexts. After resolving references to expressions in + ** pItem->pSelect, check if this value has changed. If so, then + ** SELECT statement pItem->pSelect must be correlated. Set the + ** pItem->fg.isCorrelated flag if this is the case. */ + for(pNC=pOuterNC; pNC; pNC=pNC->pNext) nRef += pNC->nRef; + if( pItem->zName ) pParse->zAuthContext = pItem->zName; sqlite3ResolveSelectNames(pParse, pItem->pSelect, pOuterNC); pParse->zAuthContext = zSavedContext; if( pParse->nErr || db->mallocFailed ) return WRC_Abort; + + for(pNC=pOuterNC; pNC; pNC=pNC->pNext) nRef -= pNC->nRef; + assert( pItem->fg.isCorrelated==0 && nRef<=0 ); + pItem->fg.isCorrelated = (nRef!=0); } } + + /* Set up the local name-context to pass to sqlite3ResolveExprNames() to + ** resolve the result-set expression list. + */ + sNC.ncFlags = NC_AllowAgg; + sNC.pSrcList = p->pSrc; + sNC.pNext = pOuterNC; + + /* Resolve names in the result set. */ + if( sqlite3ResolveExprListNames(&sNC, p->pEList) ) return WRC_Abort; /* If there are no aggregate functions in the result-set, and no GROUP BY ** expression, do not allow aggregates in any of the other expressions. */ assert( (p->selFlags & SF_Aggregate)==0 ); pGroupBy = p->pGroupBy; - if( pGroupBy || sNC.hasAgg ){ - p->selFlags |= SF_Aggregate; + if( pGroupBy || (sNC.ncFlags & NC_HasAgg)!=0 ){ + assert( NC_MinMaxAgg==SF_MinMaxAgg ); + p->selFlags |= SF_Aggregate | (sNC.ncFlags&NC_MinMaxAgg); }else{ - sNC.allowAgg = 0; + sNC.ncFlags &= ~NC_AllowAgg; } /* If a HAVING clause is present, then there must be a GROUP BY clause. */ if( p->pHaving && !pGroupBy ){ sqlite3ErrorMsg(pParse, "a GROUP BY clause is required before HAVING"); return WRC_Abort; } - /* Add the expression list to the name-context before parsing the + /* Add the output column list to the name-context before parsing the ** other expressions in the SELECT statement. This is so that ** expressions in the WHERE clause (etc.) can refer to expressions by ** aliases in the result set. ** ** Minor point: If this is the case, then the expression will be ** re-evaluated for each reference to it. 
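For example, the kind of query this enables (a hypothetical illustration, not taken from the patch):

    SELECT a+b AS total FROM t1 WHERE total > 10;

The name "total" in the WHERE clause resolves to the result-set expression a+b and, per the note above, a+b is evaluated again for that reference.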
*/ sNC.pEList = p->pEList; - if( sqlite3ResolveExprNames(&sNC, p->pWhere) || - sqlite3ResolveExprNames(&sNC, p->pHaving) - ){ - return WRC_Abort; + if( sqlite3ResolveExprNames(&sNC, p->pHaving) ) return WRC_Abort; + if( sqlite3ResolveExprNames(&sNC, p->pWhere) ) return WRC_Abort; + + /* Resolve names in table-valued-function arguments */ + for(i=0; i<p->pSrc->nSrc; i++){ + struct SrcList_item *pItem = &p->pSrc->a[i]; + if( pItem->fg.isTabFunc + && sqlite3ResolveExprListNames(&sNC, pItem->u1.pFuncArg) + ){ + return WRC_Abort; + } } /* The ORDER BY and GROUP BY clauses may not refer to terms in ** outer queries */ sNC.pNext = 0; - sNC.allowAgg = 1; + sNC.ncFlags |= NC_AllowAgg; + + /* If this is a converted compound query, move the ORDER BY clause from + ** the sub-query back to the parent query. At this point each term + ** within the ORDER BY clause has been transformed to an integer value. + ** These integers will be replaced by copies of the corresponding result + ** set expressions by the call to resolveOrderGroupBy() below. */ + if( p->selFlags & SF_Converted ){ + Select *pSub = p->pSrc->a[0].pSelect; + p->pOrderBy = pSub->pOrderBy; + pSub->pOrderBy = 0; + } /* Process the ORDER BY clause for singleton SELECT statements. ** The ORDER BY clause for compounds SELECT statements is handled ** below, after all of the result-sets for all of the elements of ** the compound have been resolved. + ** + ** If there is an ORDER BY clause on a term of a compound-select other + ** than the right-most term, then that is a syntax error. But the error + ** is not detected until much later, and so we need to go ahead and + ** resolve those symbols on the incorrect ORDER BY for consistency. */ - if( !isCompound && resolveOrderGroupBy(&sNC, p, p->pOrderBy, "ORDER") ){ + if( isCompound<=nCompound /* Defer right-most ORDER BY of a compound */ + && resolveOrderGroupBy(&sNC, p, p->pOrderBy, "ORDER") + ){ return WRC_Abort; } if( db->mallocFailed ){ return WRC_Abort; } @@ -61545,10 +86267,17 @@ "the GROUP BY clause"); return WRC_Abort; } } } + + /* If this is part of a compound SELECT, check that it has the right + ** number of expressions in the select list. */ + if( p->pNext && p->pEList->nExpr!=p->pNext->pEList->nExpr ){ + sqlite3SelectWrongNumTermsError(pParse, p->pNext); + return WRC_Abort; + } /* Advance to the next term of the compound */ p = p->pPrior; nCompound++; @@ -61602,11 +86331,11 @@ ** ** SELECT a+b AS x, c+d AS y FROM t1 ORDER BY a+b; ** ** Function calls are checked to make sure that the function is ** defined and that the correct number of arguments are specified. -** If the function is an aggregate function, then the pNC->hasAgg is +** If the function is an aggregate function, then the NC_HasAgg flag is ** set and the opcode is changed from TK_FUNCTION to TK_AGG_FUNCTION. ** If an expression contains aggregate functions then the EP_Agg ** property on the expression is set. ** ** An error message is left in pParse if anything is amiss. The number @@ -61614,11 +86343,11 @@ */ SQLITE_PRIVATE int sqlite3ResolveExprNames( NameContext *pNC, /* Namespace to resolve expressions in. */ Expr *pExpr /* The expression to be analyzed. 
*/ ){ - int savedHasAgg; + u16 savedHasAgg; Walker w; if( pExpr==0 ) return 0; #if SQLITE_MAX_EXPR_DEPTH>0 { @@ -61627,31 +86356,50 @@ return 1; } pParse->nHeight += pExpr->nHeight; } #endif - savedHasAgg = pNC->hasAgg; - pNC->hasAgg = 0; + savedHasAgg = pNC->ncFlags & (NC_HasAgg|NC_MinMaxAgg); + pNC->ncFlags &= ~(NC_HasAgg|NC_MinMaxAgg); + w.pParse = pNC->pParse; w.xExprCallback = resolveExprStep; w.xSelectCallback = resolveSelectStep; - w.pParse = pNC->pParse; + w.xSelectCallback2 = 0; + w.walkerDepth = 0; + w.eCode = 0; w.u.pNC = pNC; sqlite3WalkExpr(&w, pExpr); #if SQLITE_MAX_EXPR_DEPTH>0 pNC->pParse->nHeight -= pExpr->nHeight; #endif if( pNC->nErr>0 || w.pParse->nErr>0 ){ ExprSetProperty(pExpr, EP_Error); } - if( pNC->hasAgg ){ + if( pNC->ncFlags & NC_HasAgg ){ ExprSetProperty(pExpr, EP_Agg); - }else if( savedHasAgg ){ - pNC->hasAgg = 1; } + pNC->ncFlags |= savedHasAgg; return ExprHasProperty(pExpr, EP_Error); } +/* +** Resolve all names for all expression in an expression list. This is +** just like sqlite3ResolveExprNames() except that it works for an expression +** list rather than a single expression. +*/ +SQLITE_PRIVATE int sqlite3ResolveExprListNames( + NameContext *pNC, /* Namespace to resolve expressions in. */ + ExprList *pList /* The expression list to be analyzed. */ +){ + int i; + if( pList ){ + for(i=0; i<pList->nExpr; i++){ + if( sqlite3ResolveExprNames(pNC, pList->a[i].pExpr) ) return WRC_Abort; + } + } + return WRC_Continue; +} /* ** Resolve all names in all expressions of a SELECT and in all ** decendents of the SELECT, including compounds off of p->pPrior, ** subqueries in expressions, and subqueries used as FROM clause @@ -61669,16 +86417,52 @@ NameContext *pOuterNC /* Name context for parent SELECT statement */ ){ Walker w; assert( p!=0 ); + memset(&w, 0, sizeof(w)); w.xExprCallback = resolveExprStep; w.xSelectCallback = resolveSelectStep; w.pParse = pParse; w.u.pNC = pOuterNC; sqlite3WalkSelect(&w, p); } + +/* +** Resolve names in expressions that can only reference a single table: +** +** * CHECK constraints +** * WHERE clauses on partial indices +** +** The Expr.iTable value for Expr.op==TK_COLUMN nodes of the expression +** is set to -1 and the Expr.iColumn value is set to the column number. +** +** Any errors cause an error message to be set in pParse. +*/ +SQLITE_PRIVATE void sqlite3ResolveSelfReference( + Parse *pParse, /* Parsing context */ + Table *pTab, /* The table being referenced */ + int type, /* NC_IsCheck or NC_PartIdx or NC_IdxExpr */ + Expr *pExpr, /* Expression to resolve. May be NULL. */ + ExprList *pList /* Expression list to resolve. May be NUL. 
*/ +){ + SrcList sSrc; /* Fake SrcList for pParse->pNewTable */ + NameContext sNC; /* Name context for pParse->pNewTable */ + + assert( type==NC_IsCheck || type==NC_PartIdx || type==NC_IdxExpr ); + memset(&sNC, 0, sizeof(sNC)); + memset(&sSrc, 0, sizeof(sSrc)); + sSrc.nSrc = 1; + sSrc.a[0].zName = pTab->zName; + sSrc.a[0].pTab = pTab; + sSrc.a[0].iCursor = -1; + sNC.pParse = pParse; + sNC.pSrcList = &sSrc; + sNC.ncFlags = type; + if( sqlite3ResolveExprNames(&sNC, pExpr) ) return; + if( pList ) sqlite3ResolveExprListNames(&sNC, pList); +} /************** End of resolve.c *********************************************/ /************** Begin file expr.c ********************************************/ /* ** 2001 September 15 @@ -61692,37 +86476,41 @@ ** ************************************************************************* ** This file contains routines used for analyzing expressions and ** for generating VDBE code that evaluates expressions in SQLite. */ +/* #include "sqliteInt.h" */ /* ** Return the 'affinity' of the expression pExpr if any. ** ** If pExpr is a column, a reference to a column via an 'AS' alias, ** or a sub-select with a column as the return value, then the ** affinity of that column is returned. Otherwise, 0x00 is returned, ** indicating no affinity for the expression. ** -** i.e. the WHERE clause expresssions in the following statements all +** i.e. the WHERE clause expressions in the following statements all ** have an affinity: ** ** CREATE TABLE t1(a); ** SELECT * FROM t1 WHERE a; ** SELECT a AS b FROM t1 WHERE b; ** SELECT * FROM t1 WHERE (select a from t1); */ SQLITE_PRIVATE char sqlite3ExprAffinity(Expr *pExpr){ - int op = pExpr->op; + int op; + pExpr = sqlite3ExprSkipCollate(pExpr); + if( pExpr->flags & EP_Generic ) return 0; + op = pExpr->op; if( op==TK_SELECT ){ assert( pExpr->flags&EP_xIsSelect ); return sqlite3ExprAffinity(pExpr->x.pSelect->pEList->a[0].pExpr); } #ifndef SQLITE_OMIT_CAST if( op==TK_CAST ){ assert( !ExprHasProperty(pExpr, EP_IntValue) ); - return sqlite3AffinityType(pExpr->u.zToken); + return sqlite3AffinityType(pExpr->u.zToken, 0); } #endif if( (op==TK_AGG_COLUMN || op==TK_COLUMN || op==TK_REGISTER) && pExpr->pTab!=0 ){ @@ -61736,62 +86524,119 @@ return pExpr->affinity; } /* ** Set the collating sequence for expression pExpr to be the collating -** sequence named by pToken. Return a pointer to the revised expression. -** The collating sequence is marked as "explicit" using the EP_ExpCollate -** flag. An explicit collating sequence will override implicit -** collating sequences. -*/ -SQLITE_PRIVATE Expr *sqlite3ExprSetColl(Parse *pParse, Expr *pExpr, Token *pCollName){ - char *zColl = 0; /* Dequoted name of collation sequence */ - CollSeq *pColl; - sqlite3 *db = pParse->db; - zColl = sqlite3NameFromToken(db, pCollName); - if( pExpr && zColl ){ - pColl = sqlite3LocateCollSeq(pParse, zColl); - if( pColl ){ - pExpr->pColl = pColl; - pExpr->flags |= EP_ExpCollate; - } - } - sqlite3DbFree(db, zColl); +** sequence named by pToken. Return a pointer to a new Expr node that +** implements the COLLATE operator. +** +** If a memory allocation error occurs, that fact is recorded in pParse->db +** and the pExpr parameter is returned unchanged. 
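To illustrate the new representation (hypothetical query; NOCASE is one of the built-in collating sequences):

    SELECT x FROM t1 ORDER BY x COLLATE NOCASE;

The COLLATE clause is now carried as a TK_COLLATE node that wraps the operand expression, rather than as the pColl/EP_ExpCollate annotation used by the code being removed here.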
+*/ +SQLITE_PRIVATE Expr *sqlite3ExprAddCollateToken( + Parse *pParse, /* Parsing context */ + Expr *pExpr, /* Add the "COLLATE" clause to this expression */ + const Token *pCollName, /* Name of collating sequence */ + int dequote /* True to dequote pCollName */ +){ + if( pCollName->n>0 ){ + Expr *pNew = sqlite3ExprAlloc(pParse->db, TK_COLLATE, pCollName, dequote); + if( pNew ){ + pNew->pLeft = pExpr; + pNew->flags |= EP_Collate|EP_Skip; + pExpr = pNew; + } + } + return pExpr; +} +SQLITE_PRIVATE Expr *sqlite3ExprAddCollateString(Parse *pParse, Expr *pExpr, const char *zC){ + Token s; + assert( zC!=0 ); + sqlite3TokenInit(&s, (char*)zC); + return sqlite3ExprAddCollateToken(pParse, pExpr, &s, 0); +} + +/* +** Skip over any TK_COLLATE operators and any unlikely() +** or likelihood() function at the root of an expression. +*/ +SQLITE_PRIVATE Expr *sqlite3ExprSkipCollate(Expr *pExpr){ + while( pExpr && ExprHasProperty(pExpr, EP_Skip) ){ + if( ExprHasProperty(pExpr, EP_Unlikely) ){ + assert( !ExprHasProperty(pExpr, EP_xIsSelect) ); + assert( pExpr->x.pList->nExpr>0 ); + assert( pExpr->op==TK_FUNCTION ); + pExpr = pExpr->x.pList->a[0].pExpr; + }else{ + assert( pExpr->op==TK_COLLATE ); + pExpr = pExpr->pLeft; + } + } return pExpr; } /* -** Return the default collation sequence for the expression pExpr. If -** there is no default collation type, return 0. +** Return the collation sequence for the expression pExpr. If +** there is no defined collating sequence, return NULL. +** +** The collating sequence might be determined by a COLLATE operator +** or by the presence of a column with a defined collating sequence. +** COLLATE operators take first precedence. Left operands take +** precedence over right operands. */ SQLITE_PRIVATE CollSeq *sqlite3ExprCollSeq(Parse *pParse, Expr *pExpr){ + sqlite3 *db = pParse->db; CollSeq *pColl = 0; Expr *p = pExpr; - while( ALWAYS(p) ){ - int op; - pColl = p->pColl; - if( pColl ) break; - op = p->op; - if( p->pTab!=0 && ( - op==TK_AGG_COLUMN || op==TK_COLUMN || op==TK_REGISTER || op==TK_TRIGGER - )){ + while( p ){ + int op = p->op; + if( p->flags & EP_Generic ) break; + if( op==TK_CAST || op==TK_UPLUS ){ + p = p->pLeft; + continue; + } + if( op==TK_COLLATE || (op==TK_REGISTER && p->op2==TK_COLLATE) ){ + pColl = sqlite3GetCollSeq(pParse, ENC(db), 0, p->u.zToken); + break; + } + if( (op==TK_AGG_COLUMN || op==TK_COLUMN + || op==TK_REGISTER || op==TK_TRIGGER) + && p->pTab!=0 + ){ /* op==TK_REGISTER && p->pTab!=0 happens when pExpr was originally ** a TK_COLUMN but was previously evaluated and cached in a register */ - const char *zColl; int j = p->iColumn; if( j>=0 ){ - sqlite3 *db = pParse->db; - zColl = p->pTab->aCol[j].zColl; + const char *zColl = p->pTab->aCol[j].zColl; pColl = sqlite3FindCollSeq(db, ENC(db), zColl, 0); - pExpr->pColl = pColl; } break; } - if( op!=TK_CAST && op!=TK_UPLUS ){ + if( p->flags & EP_Collate ){ + if( p->pLeft && (p->pLeft->flags & EP_Collate)!=0 ){ + p = p->pLeft; + }else{ + Expr *pNext = p->pRight; + /* The Expr.x union is never used at the same time as Expr.pRight */ + assert( p->x.pList==0 || p->pRight==0 ); + /* p->flags holds EP_Collate and p->pLeft->flags does not. And + ** p->x.pSelect cannot. So if p->x.pLeft exists, it must hold at + ** least one EP_Collate. Thus the following two ALWAYS. 
*/ + if( p->x.pList!=0 && ALWAYS(!ExprHasProperty(p, EP_xIsSelect)) ){ + int i; + for(i=0; ALWAYS(i<p->x.pList->nExpr); i++){ + if( ExprHasProperty(p->x.pList->a[i].pExpr, EP_Collate) ){ + pNext = p->x.pList->a[i].pExpr; + break; + } + } + } + p = pNext; + } + }else{ break; } - p = p->pLeft; } if( sqlite3CheckCollSeq(pParse, pColl) ){ pColl = 0; } return pColl; @@ -61809,17 +86654,17 @@ ** affinity, use that. Otherwise use no affinity. */ if( sqlite3IsNumericAffinity(aff1) || sqlite3IsNumericAffinity(aff2) ){ return SQLITE_AFF_NUMERIC; }else{ - return SQLITE_AFF_NONE; + return SQLITE_AFF_BLOB; } }else if( !aff1 && !aff2 ){ /* Neither side of the comparison is a column. Compare the ** results directly. */ - return SQLITE_AFF_NONE; + return SQLITE_AFF_BLOB; }else{ /* One side is a column, the other is not. Use the columns affinity. */ assert( aff1==0 || aff2==0 ); return (aff1 + aff2); } @@ -61839,11 +86684,11 @@ if( pExpr->pRight ){ aff = sqlite3CompareAffinity(pExpr->pRight, aff); }else if( ExprHasProperty(pExpr, EP_xIsSelect) ){ aff = sqlite3CompareAffinity(pExpr->x.pSelect->pEList->a[0].pExpr, aff); }else if( !aff ){ - aff = SQLITE_AFF_NONE; + aff = SQLITE_AFF_BLOB; } return aff; } /* @@ -61853,11 +86698,11 @@ ** the comparison in pExpr. */ SQLITE_PRIVATE int sqlite3IndexAffinityOk(Expr *pExpr, char idx_affinity){ char aff = comparisonAffinity(pExpr); switch( aff ){ - case SQLITE_AFF_NONE: + case SQLITE_AFF_BLOB: return 1; case SQLITE_AFF_TEXT: return idx_affinity==SQLITE_AFF_TEXT; default: return sqlite3IsNumericAffinity(idx_affinity); @@ -61891,16 +86736,14 @@ Expr *pLeft, Expr *pRight ){ CollSeq *pColl; assert( pLeft ); - if( pLeft->flags & EP_ExpCollate ){ - assert( pLeft->pColl ); - pColl = pLeft->pColl; - }else if( pRight && pRight->flags & EP_ExpCollate ){ - assert( pRight->pColl ); - pColl = pRight->pColl; + if( pLeft->flags & EP_Collate ){ + pColl = sqlite3ExprCollSeq(pParse, pLeft); + }else if( pRight && (pRight->flags & EP_Collate)!=0 ){ + pColl = sqlite3ExprCollSeq(pParse, pRight); }else{ pColl = sqlite3ExprCollSeq(pParse, pLeft); if( !pColl ){ pColl = sqlite3ExprCollSeq(pParse, pRight); } @@ -61991,29 +86834,37 @@ ** Set the Expr.nHeight variable in the structure passed as an ** argument. An expression with no children, Expr.pList or ** Expr.pSelect member has a height of 1. Any other expression ** has a height equal to the maximum height of any other ** referenced Expr plus one. +** +** Also propagate EP_Propagate flags up from Expr.x.pList to Expr.flags, +** if appropriate. */ static void exprSetHeight(Expr *p){ int nHeight = 0; heightOfExpr(p->pLeft, &nHeight); heightOfExpr(p->pRight, &nHeight); if( ExprHasProperty(p, EP_xIsSelect) ){ heightOfSelect(p->x.pSelect, &nHeight); - }else{ + }else if( p->x.pList ){ heightOfExprList(p->x.pList, &nHeight); + p->flags |= EP_Propagate & sqlite3ExprListFlags(p->x.pList); } p->nHeight = nHeight + 1; } /* ** Set the Expr.nHeight variable using the exprSetHeight() function. If ** the height is greater than the maximum allowed expression depth, ** leave an error in pParse. +** +** Also propagate all EP_Propagate flags from the Expr.x.pList into +** Expr.flags. 
*/ -SQLITE_PRIVATE void sqlite3ExprSetHeight(Parse *pParse, Expr *p){ +SQLITE_PRIVATE void sqlite3ExprSetHeightAndFlags(Parse *pParse, Expr *p){ + if( pParse->nErr ) return; exprSetHeight(p); sqlite3ExprCheckHeight(pParse, p->nHeight); } /* @@ -62023,12 +86874,21 @@ SQLITE_PRIVATE int sqlite3SelectExprHeight(Select *p){ int nHeight = 0; heightOfSelect(p, &nHeight); return nHeight; } -#else - #define exprSetHeight(y) +#else /* ABOVE: Height enforcement enabled. BELOW: Height enforcement off */ +/* +** Propagate all EP_Propagate flags from the Expr.x.pList into +** Expr.flags. +*/ +SQLITE_PRIVATE void sqlite3ExprSetHeightAndFlags(Parse *pParse, Expr *p){ + if( p && p->x.pList && !ExprHasProperty(p, EP_xIsSelect) ){ + p->flags |= EP_Propagate & sqlite3ExprListFlags(p->x.pList); + } +} +#define exprSetHeight(y) #endif /* SQLITE_MAX_EXPR_DEPTH>0 */ /* ** This routine is the core allocator for Expr nodes. ** @@ -62036,11 +86896,11 @@ ** for this node and for the pToken argument is a single allocation ** obtained from sqlite3DbMalloc(). The calling function ** is responsible for making sure the node eventually gets freed. ** ** If dequote is true, then the token (if it exists) is dequoted. -** If dequote is false, no dequoting is performance. The deQuote +** If dequote is false, no dequoting is performed. The deQuote ** parameter is ignored if pToken is NULL or if the token does not ** appear to be quoted. If the quotes were of the form "..." (double-quotes) ** then the EP_DblQuoted flag is set on the expression node. ** ** Special case: If op==TK_INTEGER and pToken points to a string that @@ -62057,28 +86917,32 @@ ){ Expr *pNew; int nExtra = 0; int iValue = 0; + assert( db!=0 ); if( pToken ){ if( op!=TK_INTEGER || pToken->z==0 || sqlite3GetInt32(pToken->z, &iValue)==0 ){ nExtra = pToken->n+1; + assert( iValue>=0 ); } } - pNew = sqlite3DbMallocZero(db, sizeof(Expr)+nExtra); + pNew = sqlite3DbMallocRawNN(db, sizeof(Expr)+nExtra); if( pNew ){ + memset(pNew, 0, sizeof(Expr)); pNew->op = (u8)op; pNew->iAgg = -1; if( pToken ){ if( nExtra==0 ){ pNew->flags |= EP_IntValue; pNew->u.iValue = iValue; }else{ int c; pNew->u.zToken = (char*)&pNew[1]; - memcpy(pNew->u.zToken, pToken->z, pToken->n); + assert( pToken->z!=0 || pToken->n==0 ); + if( pToken->n ) memcpy(pNew->u.zToken, pToken->z, pToken->n); pNew->u.zToken[pToken->n] = 0; if( dequote && nExtra>=3 && ((c = pToken->z[0])=='\'' || c=='"' || c=='[' || c=='`') ){ sqlite3Dequote(pNew->u.zToken); if( c=='"' ) pNew->flags |= EP_DblQuoted; @@ -62124,28 +86988,22 @@ sqlite3ExprDelete(db, pLeft); sqlite3ExprDelete(db, pRight); }else{ if( pRight ){ pRoot->pRight = pRight; - if( pRight->flags & EP_ExpCollate ){ - pRoot->flags |= EP_ExpCollate; - pRoot->pColl = pRight->pColl; - } + pRoot->flags |= EP_Propagate & pRight->flags; } if( pLeft ){ pRoot->pLeft = pLeft; - if( pLeft->flags & EP_ExpCollate ){ - pRoot->flags |= EP_ExpCollate; - pRoot->pColl = pLeft->pColl; - } + pRoot->flags |= EP_Propagate & pLeft->flags; } exprSetHeight(pRoot); } } /* -** Allocate a Expr node which joins as many as two subtrees. +** Allocate an Expr node which joins as many as two subtrees. ** ** One or both of the subtrees can be NULL. Return a pointer to the new ** Expr node. Or, if an OOM error occurs, set pParse->db->mallocFailed, ** free the subtrees and return NULL. 
*/ @@ -62154,24 +87012,68 @@ int op, /* Expression opcode */ Expr *pLeft, /* Left operand */ Expr *pRight, /* Right operand */ const Token *pToken /* Argument token */ ){ - Expr *p = sqlite3ExprAlloc(pParse->db, op, pToken, 1); - sqlite3ExprAttachSubtrees(pParse->db, p, pLeft, pRight); + Expr *p; + if( op==TK_AND && pParse->nErr==0 ){ + /* Take advantage of short-circuit false optimization for AND */ + p = sqlite3ExprAnd(pParse->db, pLeft, pRight); + }else{ + p = sqlite3ExprAlloc(pParse->db, op & TKFLG_MASK, pToken, 1); + sqlite3ExprAttachSubtrees(pParse->db, p, pLeft, pRight); + } + if( p ) { + sqlite3ExprCheckHeight(pParse, p->nHeight); + } return p; } + +/* +** If the expression is always either TRUE or FALSE (respectively), +** then return 1. If one cannot determine the truth value of the +** expression at compile-time return 0. +** +** This is an optimization. If is OK to return 0 here even if +** the expression really is always false or false (a false negative). +** But it is a bug to return 1 if the expression might have different +** boolean values in different circumstances (a false positive.) +** +** Note that if the expression is part of conditional for a +** LEFT JOIN, then we cannot determine at compile-time whether or not +** is it true or false, so always return 0. +*/ +static int exprAlwaysTrue(Expr *p){ + int v = 0; + if( ExprHasProperty(p, EP_FromJoin) ) return 0; + if( !sqlite3ExprIsInteger(p, &v) ) return 0; + return v!=0; +} +static int exprAlwaysFalse(Expr *p){ + int v = 0; + if( ExprHasProperty(p, EP_FromJoin) ) return 0; + if( !sqlite3ExprIsInteger(p, &v) ) return 0; + return v==0; +} /* ** Join two expressions using an AND operator. If either expression is ** NULL, then just return the other expression. +** +** If one side or the other of the AND is known to be false, then instead +** of returning an AND expression, just return a constant expression with +** a value of false. */ SQLITE_PRIVATE Expr *sqlite3ExprAnd(sqlite3 *db, Expr *pLeft, Expr *pRight){ if( pLeft==0 ){ return pRight; }else if( pRight==0 ){ return pLeft; + }else if( exprAlwaysFalse(pLeft) || exprAlwaysFalse(pRight) ){ + sqlite3ExprDelete(db, pLeft); + sqlite3ExprDelete(db, pRight); + return sqlite3ExprAlloc(db, TK_INTEGER, &sqlite3IntTokens[0], 0); }else{ Expr *pNew = sqlite3ExprAlloc(db, TK_AND, 0, 0); sqlite3ExprAttachSubtrees(db, pNew, pLeft, pRight); return pNew; } @@ -62190,11 +87092,11 @@ sqlite3ExprListDelete(db, pList); /* Avoid memory leak when malloc fails */ return 0; } pNew->x.pList = pList; assert( !ExprHasProperty(pNew, EP_xIsSelect) ); - sqlite3ExprSetHeight(pParse, pNew); + sqlite3ExprSetHeightAndFlags(pParse, pNew); return pNew; } /* ** Assign a variable number to an expression that encodes a wildcard @@ -62207,72 +87109,76 @@ ** sure "nnn" is not too be to avoid a denial of service attack when ** the SQL statement comes from an external source. ** ** Wildcards of the form ":aaa", "@aaa", or "$aaa" are assigned the same number ** as the previous instance of the same wildcard. Or if this is the first -** instance of the wildcard, the next sequenial variable number is +** instance of the wildcard, the next sequential variable number is ** assigned. 
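A minimal sketch of how this numbering is visible through the public API (an illustrative program, not part of the patch; it assumes an in-memory database and omits error handling):

    #include <stdio.h>
    #include "sqlite3.h"

    int main(void){
      sqlite3 *db;
      sqlite3_stmt *pStmt;
      sqlite3_open(":memory:", &db);
      /* ":a" is reused, so both occurrences share one variable number;
      ** the bare "?" takes the next sequential number and "?5" forces
      ** its own number. */
      sqlite3_prepare_v2(db, "SELECT :a, ?, :a, ?5", -1, &pStmt, 0);
      printf("largest parameter number = %d\n",
             sqlite3_bind_parameter_count(pStmt));
      printf("index of :a              = %d\n",
             sqlite3_bind_parameter_index(pStmt, ":a"));
      sqlite3_finalize(pStmt);
      sqlite3_close(db);
      return 0;
    }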
*/ SQLITE_PRIVATE void sqlite3ExprAssignVarNumber(Parse *pParse, Expr *pExpr){ sqlite3 *db = pParse->db; const char *z; if( pExpr==0 ) return; - assert( !ExprHasAnyProperty(pExpr, EP_IntValue|EP_Reduced|EP_TokenOnly) ); + assert( !ExprHasProperty(pExpr, EP_IntValue|EP_Reduced|EP_TokenOnly) ); z = pExpr->u.zToken; assert( z!=0 ); assert( z[0]!=0 ); if( z[1]==0 ){ /* Wildcard of the form "?". Assign the next variable number */ assert( z[0]=='?' ); pExpr->iColumn = (ynVar)(++pParse->nVar); - }else if( z[0]=='?' ){ - /* Wildcard of the form "?nnn". Convert "nnn" to an integer and - ** use it as the variable number */ - int i = atoi((char*)&z[1]); - pExpr->iColumn = (ynVar)i; - testcase( i==0 ); - testcase( i==1 ); - testcase( i==db->aLimit[SQLITE_LIMIT_VARIABLE_NUMBER]-1 ); - testcase( i==db->aLimit[SQLITE_LIMIT_VARIABLE_NUMBER] ); - if( i<1 || i>db->aLimit[SQLITE_LIMIT_VARIABLE_NUMBER] ){ - sqlite3ErrorMsg(pParse, "variable number must be between ?1 and ?%d", - db->aLimit[SQLITE_LIMIT_VARIABLE_NUMBER]); - } - if( i>pParse->nVar ){ - pParse->nVar = i; - } - }else{ - /* Wildcards like ":aaa", "$aaa" or "@aaa". Reuse the same variable - ** number as the prior appearance of the same name, or if the name - ** has never appeared before, reuse the same variable number - */ - int i; - u32 n; - n = sqlite3Strlen30(z); - for(i=0; i<pParse->nVarExpr; i++){ - Expr *pE = pParse->apVarExpr[i]; - assert( pE!=0 ); - if( memcmp(pE->u.zToken, z, n)==0 && pE->u.zToken[n]==0 ){ - pExpr->iColumn = pE->iColumn; - break; - } - } - if( i>=pParse->nVarExpr ){ - pExpr->iColumn = (ynVar)(++pParse->nVar); - if( pParse->nVarExpr>=pParse->nVarExprAlloc-1 ){ - pParse->nVarExprAlloc += pParse->nVarExprAlloc + 10; - pParse->apVarExpr = - sqlite3DbReallocOrFree( - db, - pParse->apVarExpr, - pParse->nVarExprAlloc*sizeof(pParse->apVarExpr[0]) - ); - } - if( !db->mallocFailed ){ - assert( pParse->apVarExpr!=0 ); - pParse->apVarExpr[pParse->nVarExpr++] = pExpr; + }else{ + ynVar x = 0; + u32 n = sqlite3Strlen30(z); + if( z[0]=='?' ){ + /* Wildcard of the form "?nnn". Convert "nnn" to an integer and + ** use it as the variable number */ + i64 i; + int bOk = 0==sqlite3Atoi64(&z[1], &i, n-1, SQLITE_UTF8); + pExpr->iColumn = x = (ynVar)i; + testcase( i==0 ); + testcase( i==1 ); + testcase( i==db->aLimit[SQLITE_LIMIT_VARIABLE_NUMBER]-1 ); + testcase( i==db->aLimit[SQLITE_LIMIT_VARIABLE_NUMBER] ); + if( bOk==0 || i<1 || i>db->aLimit[SQLITE_LIMIT_VARIABLE_NUMBER] ){ + sqlite3ErrorMsg(pParse, "variable number must be between ?1 and ?%d", + db->aLimit[SQLITE_LIMIT_VARIABLE_NUMBER]); + x = 0; + } + if( i>pParse->nVar ){ + pParse->nVar = (int)i; + } + }else{ + /* Wildcards like ":aaa", "$aaa" or "@aaa". Reuse the same variable + ** number as the prior appearance of the same name, or if the name + ** has never appeared before, reuse the same variable number + */ + ynVar i; + for(i=0; i<pParse->nzVar; i++){ + if( pParse->azVar[i] && strcmp(pParse->azVar[i],z)==0 ){ + pExpr->iColumn = x = (ynVar)i+1; + break; + } + } + if( x==0 ) x = pExpr->iColumn = (ynVar)(++pParse->nVar); + } + if( x>0 ){ + if( x>pParse->nzVar ){ + char **a; + a = sqlite3DbRealloc(db, pParse->azVar, x*sizeof(a[0])); + if( a==0 ){ + assert( db->mallocFailed ); /* Error reported through mallocFailed */ + return; + } + pParse->azVar = a; + memset(&a[pParse->nzVar], 0, (x-pParse->nzVar)*sizeof(a[0])); + pParse->nzVar = x; + } + if( z[0]!='?' 
|| pParse->azVar[x-1]==0 ){ + sqlite3DbFree(db, pParse->azVar[x-1]); + pParse->azVar[x-1] = sqlite3DbStrNDup(db, z, n); } } } if( !pParse->nErr && pParse->nVar>db->aLimit[SQLITE_LIMIT_VARIABLE_NUMBER] ){ sqlite3ErrorMsg(pParse, "too many SQL variables"); @@ -62282,16 +87188,18 @@ /* ** Recursively delete an expression tree. */ SQLITE_PRIVATE void sqlite3ExprDelete(sqlite3 *db, Expr *p){ if( p==0 ) return; - if( !ExprHasAnyProperty(p, EP_TokenOnly) ){ + /* Sanity check: Assert that the IntValue is non-negative if it exists */ + assert( !ExprHasProperty(p, EP_IntValue) || p->u.iValue>=0 ); + if( !ExprHasProperty(p, EP_TokenOnly) ){ + /* The Expr.x union is never used at the same time as Expr.pRight */ + assert( p->x.pList==0 || p->pRight==0 ); sqlite3ExprDelete(db, p->pLeft); sqlite3ExprDelete(db, p->pRight); - if( !ExprHasProperty(p, EP_Reduced) && (p->flags2 & EP2_MallocedToken)!=0 ){ - sqlite3DbFree(db, p->u.zToken); - } + if( ExprHasProperty(p, EP_MemToken) ) sqlite3DbFree(db, p->u.zToken); if( ExprHasProperty(p, EP_xIsSelect) ){ sqlite3SelectDelete(db, p->x.pSelect); }else{ sqlite3ExprListDelete(db, p->x.pList); } @@ -62339,28 +87247,31 @@ ** Note that with flags==EXPRDUP_REDUCE, this routines works on full-size ** (unreduced) Expr objects as they or originally constructed by the parser. ** During expression analysis, extra information is computed and moved into ** later parts of teh Expr object and that extra information might get chopped ** off if the expression is reduced. Note also that it does not work to -** make a EXPRDUP_REDUCE copy of a reduced expression. It is only legal +** make an EXPRDUP_REDUCE copy of a reduced expression. It is only legal ** to reduce a pristine expression tree from the parser. The implementation ** of dupedExprStructSize() contain multiple assert() statements that attempt ** to enforce this constraint. */ static int dupedExprStructSize(Expr *p, int flags){ int nSize; assert( flags==EXPRDUP_REDUCE || flags==0 ); /* Only one flag value allowed */ + assert( EXPR_FULLSIZE<=0xfff ); + assert( (0xfff & (EP_Reduced|EP_TokenOnly))==0 ); if( 0==(flags&EXPRDUP_REDUCE) ){ nSize = EXPR_FULLSIZE; }else{ - assert( !ExprHasAnyProperty(p, EP_TokenOnly|EP_Reduced) ); + assert( !ExprHasProperty(p, EP_TokenOnly|EP_Reduced) ); assert( !ExprHasProperty(p, EP_FromJoin) ); - assert( (p->flags2 & EP2_MallocedToken)==0 ); - assert( (p->flags2 & EP2_Irreducible)==0 ); - if( p->pLeft || p->pRight || p->pColl || p->x.pList ){ + assert( !ExprHasProperty(p, EP_MemToken) ); + assert( !ExprHasProperty(p, EP_NoReduce) ); + if( p->pLeft || p->x.pList ){ nSize = EXPR_REDUCEDSIZE | EP_Reduced; }else{ + assert( p->pRight==0 ); nSize = EXPR_TOKENONLYSIZE | EP_TokenOnly; } } return nSize; } @@ -62405,15 +87316,17 @@ /* ** This function is similar to sqlite3ExprDup(), except that if pzBuffer ** is not NULL then *pzBuffer is assumed to point to a buffer large enough ** to store the copy of expression p, the copies of p->u.zToken ** (if applicable), and the copies of the p->pLeft and p->pRight expressions, -** if any. Before returning, *pzBuffer is set to the first byte passed the +** if any. Before returning, *pzBuffer is set to the first byte past the ** portion of the buffer copied into by this function. 
*/ static Expr *exprDup(sqlite3 *db, Expr *p, int flags, u8 **pzBuffer){ Expr *pNew = 0; /* Value to return */ + assert( flags==0 || flags==EXPRDUP_REDUCE ); + assert( db!=0 ); if( p ){ const int isReduced = (flags&EXPRDUP_REDUCE); u8 *zAlloc; u32 staticFlag = 0; @@ -62422,11 +87335,11 @@ /* Figure out where to write the new Expr structure. */ if( pzBuffer ){ zAlloc = *pzBuffer; staticFlag = EP_Static; }else{ - zAlloc = sqlite3DbMallocRaw(db, dupedExprSize(p, flags)); + zAlloc = sqlite3DbMallocRawNN(db, dupedExprSize(p, flags)); } pNew = (Expr *)zAlloc; if( pNew ){ /* Set nNewSize to the size allocated for the structure pointed to @@ -62444,17 +87357,19 @@ } if( isReduced ){ assert( ExprHasProperty(p, EP_Reduced)==0 ); memcpy(zAlloc, p, nNewSize); }else{ - int nSize = exprStructSize(p); + u32 nSize = (u32)exprStructSize(p); memcpy(zAlloc, p, nSize); - memset(&zAlloc[nSize], 0, EXPR_FULLSIZE-nSize); + if( nSize<EXPR_FULLSIZE ){ + memset(&zAlloc[nSize], 0, EXPR_FULLSIZE-nSize); + } } /* Set the EP_Reduced, EP_TokenOnly, and EP_Static flags appropriately. */ - pNew->flags &= ~(EP_Reduced|EP_TokenOnly|EP_Static); + pNew->flags &= ~(EP_Reduced|EP_TokenOnly|EP_Static|EP_MemToken); pNew->flags |= nStructSize & (EP_Reduced|EP_TokenOnly); pNew->flags |= staticFlag; /* Copy the p->u.zToken string, if any. */ if( nToken ){ @@ -62470,22 +87385,21 @@ pNew->x.pList = sqlite3ExprListDup(db, p->x.pList, isReduced); } } /* Fill in pNew->pLeft and pNew->pRight. */ - if( ExprHasAnyProperty(pNew, EP_Reduced|EP_TokenOnly) ){ + if( ExprHasProperty(pNew, EP_Reduced|EP_TokenOnly) ){ zAlloc += dupedExprNodeSize(p, flags); if( ExprHasProperty(pNew, EP_Reduced) ){ pNew->pLeft = exprDup(db, p->pLeft, EXPRDUP_REDUCE, &zAlloc); pNew->pRight = exprDup(db, p->pRight, EXPRDUP_REDUCE, &zAlloc); } if( pzBuffer ){ *pzBuffer = zAlloc; } }else{ - pNew->flags2 = 0; - if( !ExprHasAnyProperty(p, EP_TokenOnly) ){ + if( !ExprHasProperty(p, EP_TokenOnly) ){ pNew->pLeft = sqlite3ExprDup(db, p->pLeft, 0); pNew->pRight = sqlite3ExprDup(db, p->pRight, 0); } } @@ -62492,10 +87406,37 @@ } } return pNew; } +/* +** Create and return a deep copy of the object passed as the second +** argument. If an OOM condition is encountered, NULL is returned +** and the db->mallocFailed flag set. +*/ +#ifndef SQLITE_OMIT_CTE +static With *withDup(sqlite3 *db, With *p){ + With *pRet = 0; + if( p ){ + int nByte = sizeof(*p) + sizeof(p->a[0]) * (p->nCte-1); + pRet = sqlite3DbMallocZero(db, nByte); + if( pRet ){ + int i; + pRet->nCte = p->nCte; + for(i=0; i<p->nCte; i++){ + pRet->a[i].pSelect = sqlite3SelectDup(db, p->a[i].pSelect, 0); + pRet->a[i].pCols = sqlite3ExprListDup(db, p->a[i].pCols, 0); + pRet->a[i].zName = sqlite3DbStrDup(db, p->a[i].zName); + } + } + } + return pRet; +} +#else +# define withDup(x,y) 0 +#endif + /* ** The following group of routines make deep copies of expressions, ** expression lists, ID lists, and select statements. The copies can ** be deleted (by being passed to their respective ...Delete() routines) ** without effecting the originals. @@ -62510,22 +87451,24 @@ ** If the EXPRDUP_REDUCE flag is set, then the structure returned is a ** truncated version of the usual Expr structure that will be stored as ** part of the in-memory representation of the database schema. 
*/ SQLITE_PRIVATE Expr *sqlite3ExprDup(sqlite3 *db, Expr *p, int flags){ + assert( flags==0 || flags==EXPRDUP_REDUCE ); return exprDup(db, p, flags, 0); } SQLITE_PRIVATE ExprList *sqlite3ExprListDup(sqlite3 *db, ExprList *p, int flags){ ExprList *pNew; struct ExprList_item *pItem, *pOldItem; int i; + assert( db!=0 ); if( p==0 ) return 0; - pNew = sqlite3DbMallocRaw(db, sizeof(*pNew) ); + pNew = sqlite3DbMallocRawNN(db, sizeof(*pNew) ); if( pNew==0 ) return 0; - pNew->iECursor = 0; - pNew->nExpr = pNew->nAlloc = p->nExpr; - pNew->a = pItem = sqlite3DbMallocRaw(db, p->nExpr*sizeof(p->a[0]) ); + pNew->nExpr = i = p->nExpr; + if( (flags & EXPRDUP_REDUCE)==0 ) for(i=1; i<p->nExpr; i+=i){} + pNew->a = pItem = sqlite3DbMallocRawNN(db, i*sizeof(p->a[0]) ); if( pItem==0 ){ sqlite3DbFree(db, pNew); return 0; } pOldItem = p->a; @@ -62534,12 +87477,12 @@ pItem->pExpr = sqlite3ExprDup(db, pOldExpr, flags); pItem->zName = sqlite3DbStrDup(db, pOldItem->zName); pItem->zSpan = sqlite3DbStrDup(db, pOldItem->zSpan); pItem->sortOrder = pOldItem->sortOrder; pItem->done = 0; - pItem->iCol = pOldItem->iCol; - pItem->iAlias = pOldItem->iAlias; + pItem->bSpanIsTab = pOldItem->bSpanIsTab; + pItem->u = pOldItem->u; } return pNew; } /* @@ -62552,28 +87495,36 @@ || !defined(SQLITE_OMIT_SUBQUERY) SQLITE_PRIVATE SrcList *sqlite3SrcListDup(sqlite3 *db, SrcList *p, int flags){ SrcList *pNew; int i; int nByte; + assert( db!=0 ); if( p==0 ) return 0; nByte = sizeof(*p) + (p->nSrc>0 ? sizeof(p->a[0]) * (p->nSrc-1) : 0); - pNew = sqlite3DbMallocRaw(db, nByte ); + pNew = sqlite3DbMallocRawNN(db, nByte ); if( pNew==0 ) return 0; pNew->nSrc = pNew->nAlloc = p->nSrc; for(i=0; i<p->nSrc; i++){ struct SrcList_item *pNewItem = &pNew->a[i]; struct SrcList_item *pOldItem = &p->a[i]; Table *pTab; + pNewItem->pSchema = pOldItem->pSchema; pNewItem->zDatabase = sqlite3DbStrDup(db, pOldItem->zDatabase); pNewItem->zName = sqlite3DbStrDup(db, pOldItem->zName); pNewItem->zAlias = sqlite3DbStrDup(db, pOldItem->zAlias); - pNewItem->jointype = pOldItem->jointype; + pNewItem->fg = pOldItem->fg; pNewItem->iCursor = pOldItem->iCursor; - pNewItem->isPopulated = pOldItem->isPopulated; - pNewItem->zIndex = sqlite3DbStrDup(db, pOldItem->zIndex); - pNewItem->notIndexed = pOldItem->notIndexed; - pNewItem->pIndex = pOldItem->pIndex; + pNewItem->addrFillSub = pOldItem->addrFillSub; + pNewItem->regReturn = pOldItem->regReturn; + if( pNewItem->fg.isIndexedBy ){ + pNewItem->u1.zIndexedBy = sqlite3DbStrDup(db, pOldItem->u1.zIndexedBy); + } + pNewItem->pIBIndex = pOldItem->pIBIndex; + if( pNewItem->fg.isTabFunc ){ + pNewItem->u1.pFuncArg = + sqlite3ExprListDup(db, pOldItem->u1.pFuncArg, flags); + } pTab = pNewItem->pTab = pOldItem->pTab; if( pTab ){ pTab->nRef++; } pNewItem->pSelect = sqlite3SelectDup(db, pOldItem->pSelect, flags); @@ -62584,49 +87535,57 @@ return pNew; } SQLITE_PRIVATE IdList *sqlite3IdListDup(sqlite3 *db, IdList *p){ IdList *pNew; int i; + assert( db!=0 ); if( p==0 ) return 0; - pNew = sqlite3DbMallocRaw(db, sizeof(*pNew) ); + pNew = sqlite3DbMallocRawNN(db, sizeof(*pNew) ); if( pNew==0 ) return 0; - pNew->nId = pNew->nAlloc = p->nId; - pNew->a = sqlite3DbMallocRaw(db, p->nId*sizeof(p->a[0]) ); + pNew->nId = p->nId; + pNew->a = sqlite3DbMallocRawNN(db, p->nId*sizeof(p->a[0]) ); if( pNew->a==0 ){ sqlite3DbFree(db, pNew); return 0; } + /* Note that because the size of the allocation for p->a[] is not + ** necessarily a power of two, sqlite3IdListAppend() may not be called + ** on the duplicate created by this function. 
*/ for(i=0; i<p->nId; i++){ struct IdList_item *pNewItem = &pNew->a[i]; struct IdList_item *pOldItem = &p->a[i]; pNewItem->zName = sqlite3DbStrDup(db, pOldItem->zName); pNewItem->idx = pOldItem->idx; } return pNew; } SQLITE_PRIVATE Select *sqlite3SelectDup(sqlite3 *db, Select *p, int flags){ - Select *pNew; + Select *pNew, *pPrior; + assert( db!=0 ); if( p==0 ) return 0; - pNew = sqlite3DbMallocRaw(db, sizeof(*p) ); + pNew = sqlite3DbMallocRawNN(db, sizeof(*p) ); if( pNew==0 ) return 0; pNew->pEList = sqlite3ExprListDup(db, p->pEList, flags); pNew->pSrc = sqlite3SrcListDup(db, p->pSrc, flags); pNew->pWhere = sqlite3ExprDup(db, p->pWhere, flags); pNew->pGroupBy = sqlite3ExprListDup(db, p->pGroupBy, flags); pNew->pHaving = sqlite3ExprDup(db, p->pHaving, flags); pNew->pOrderBy = sqlite3ExprListDup(db, p->pOrderBy, flags); pNew->op = p->op; - pNew->pPrior = sqlite3SelectDup(db, p->pPrior, flags); + pNew->pPrior = pPrior = sqlite3SelectDup(db, p->pPrior, flags); + if( pPrior ) pPrior->pNext = pNew; + pNew->pNext = 0; pNew->pLimit = sqlite3ExprDup(db, p->pLimit, flags); pNew->pOffset = sqlite3ExprDup(db, p->pOffset, flags); pNew->iLimit = 0; pNew->iOffset = 0; pNew->selFlags = p->selFlags & ~SF_UsesEphemeral; - pNew->pRightmost = 0; pNew->addrOpenEphm[0] = -1; pNew->addrOpenEphm[1] = -1; - pNew->addrOpenEphm[2] = -1; + pNew->nSelectRow = p->nSelectRow; + pNew->pWith = withDup(db, p->pWith); + sqlite3SelectSetName(pNew, p->zSelName); return pNew; } #else SQLITE_PRIVATE Select *sqlite3SelectDup(sqlite3 *db, Select *p, int flags){ assert( p==0 ); @@ -62647,26 +87606,27 @@ Parse *pParse, /* Parsing context */ ExprList *pList, /* List to which to append. Might be NULL */ Expr *pExpr /* Expression to be appended. Might be NULL */ ){ sqlite3 *db = pParse->db; + assert( db!=0 ); if( pList==0 ){ - pList = sqlite3DbMallocZero(db, sizeof(ExprList) ); + pList = sqlite3DbMallocRawNN(db, sizeof(ExprList) ); if( pList==0 ){ goto no_mem; } - assert( pList->nAlloc==0 ); - } - if( pList->nAlloc<=pList->nExpr ){ + pList->nExpr = 0; + pList->a = sqlite3DbMallocRawNN(db, sizeof(pList->a[0])); + if( pList->a==0 ) goto no_mem; + }else if( (pList->nExpr & (pList->nExpr-1))==0 ){ struct ExprList_item *a; - int n = pList->nAlloc*2 + 4; - a = sqlite3DbRealloc(db, pList->a, n*sizeof(pList->a[0])); + assert( pList->nExpr>0 ); + a = sqlite3DbRealloc(db, pList->a, pList->nExpr*2*sizeof(pList->a[0])); if( a==0 ){ goto no_mem; } pList->a = a; - pList->nAlloc = sqlite3DbMallocSize(db, a)/sizeof(a[0]); } assert( pList->a!=0 ); if( 1 ){ struct ExprList_item *pItem = &pList->a[pList->nExpr++]; memset(pItem, 0, sizeof(*pItem)); @@ -62678,10 +87638,24 @@ /* Avoid leaking memory if malloc has failed. */ sqlite3ExprDelete(db, pExpr); sqlite3ExprListDelete(db, pList); return 0; } + +/* +** Set the sort order for the last element on the given ExprList. +*/ +SQLITE_PRIVATE void sqlite3ExprListSetSortOrder(ExprList *p, int iSortOrder){ + if( p==0 ) return; + assert( SQLITE_SO_UNDEFINED<0 && SQLITE_SO_ASC>=0 && SQLITE_SO_DESC>0 ); + assert( p->nExpr>0 ); + if( iSortOrder<0 ){ + assert( p->a[p->nExpr-1].sortOrder==SQLITE_SO_ASC ); + return; + } + p->a[p->nExpr-1].sortOrder = (u8)iSortOrder; +} /* ** Set the ExprList.a[].zName element of the most recently added item ** on the expression list. 
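As a sketch of how an application-defined function can qualify for this constant-function treatment, a function could be registered with the SQLITE_DETERMINISTIC flag. (That this flag is what sets the constant-function property is an assumption on my part, not shown in this patch, and the names halfFunc/register_half are made up for the example.)

    #include "sqlite3.h"

    /* half(X) always returns the same result for the same X, so it is
    ** registered as deterministic. */
    static void halfFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
      (void)argc;
      sqlite3_result_double(ctx, sqlite3_value_double(argv[0]) / 2.0);
    }

    int register_half(sqlite3 *db){
      return sqlite3_create_function_v2(db, "half", 1,
                 SQLITE_UTF8 | SQLITE_DETERMINISTIC, 0,
                 halfFunc, 0, 0, 0);
    }

Without the flag, such a function would presumably fall on the non-deterministic path mentioned earlier and be rejected in contexts such as index expressions.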
** @@ -62753,12 +87727,11 @@ */ SQLITE_PRIVATE void sqlite3ExprListDelete(sqlite3 *db, ExprList *pList){ int i; struct ExprList_item *pItem; if( pList==0 ) return; - assert( pList->a!=0 || (pList->nExpr==0 && pList->nAlloc==0) ); - assert( pList->nExpr<=pList->nAlloc ); + assert( pList->a!=0 || pList->nExpr==0 ); for(pItem=pList->a, i=0; i<pList->nExpr; i++, pItem++){ sqlite3ExprDelete(db, pItem->pExpr); sqlite3DbFree(db, pItem->zName); sqlite3DbFree(db, pItem->zSpan); } @@ -62765,142 +87738,216 @@ sqlite3DbFree(db, pList->a); sqlite3DbFree(db, pList); } /* -** These routines are Walker callbacks. Walker.u.pi is a pointer -** to an integer. These routines are checking an expression to see -** if it is a constant. Set *Walker.u.pi to 0 if the expression is -** not constant. +** Return the bitwise-OR of all Expr.flags fields in the given +** ExprList. +*/ +SQLITE_PRIVATE u32 sqlite3ExprListFlags(const ExprList *pList){ + int i; + u32 m = 0; + if( pList ){ + for(i=0; i<pList->nExpr; i++){ + Expr *pExpr = pList->a[i].pExpr; + if( ALWAYS(pExpr) ) m |= pExpr->flags; + } + } + return m; +} + +/* +** These routines are Walker callbacks used to check expressions to +** see if they are "constant" for some definition of constant. The +** Walker.eCode value determines the type of "constant" we are looking +** for. ** ** These callback routines are used to implement the following: ** -** sqlite3ExprIsConstant() -** sqlite3ExprIsConstantNotJoin() -** sqlite3ExprIsConstantOrFunction() +** sqlite3ExprIsConstant() pWalker->eCode==1 +** sqlite3ExprIsConstantNotJoin() pWalker->eCode==2 +** sqlite3ExprIsTableConstant() pWalker->eCode==3 +** sqlite3ExprIsConstantOrFunction() pWalker->eCode==4 or 5 ** +** In all cases, the callbacks set Walker.eCode=0 and abort if the expression +** is found to not be a constant. +** +** The sqlite3ExprIsConstantOrFunction() is used for evaluating expressions +** in a CREATE TABLE statement. The Walker.eCode value is 5 when parsing +** an existing schema and 4 when processing a new statement. A bound +** parameter raises an error for new statements, but is silently converted +** to NULL for existing schemas. This allows sqlite_master tables that +** contain a bound parameter because they were generated by older versions +** of SQLite to be parsed by newer versions of SQLite without raising a +** malformed schema error. */ static int exprNodeIsConstant(Walker *pWalker, Expr *pExpr){ - /* If pWalker->u.i is 3 then any term of the expression that comes from - ** the ON or USING clauses of a join disqualifies the expression + /* If pWalker->eCode is 2 then any term of the expression that comes from + ** the ON or USING clauses of a left join disqualifies the expression ** from being considered constant. */ - if( pWalker->u.i==3 && ExprHasAnyProperty(pExpr, EP_FromJoin) ){ - pWalker->u.i = 0; + if( pWalker->eCode==2 && ExprHasProperty(pExpr, EP_FromJoin) ){ + pWalker->eCode = 0; return WRC_Abort; } switch( pExpr->op ){ /* Consider functions to be constant if all their arguments are constant - ** and pWalker->u.i==2 */ + ** and either pWalker->eCode==4 or 5 or the function has the + ** SQLITE_FUNC_CONST flag. 
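Concretely (hypothetical fragments):

    x = 'abc'     -- single quotes: a string constant
    x = "abc"     -- double quotes: treated as a variable for these tests

so only the single-quoted form counts as a constant for the purposes of these routines.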
*/ case TK_FUNCTION: - if( pWalker->u.i==2 ) return 0; - /* Fall through */ + if( pWalker->eCode>=4 || ExprHasProperty(pExpr,EP_ConstFunc) ){ + return WRC_Continue; + }else{ + pWalker->eCode = 0; + return WRC_Abort; + } case TK_ID: case TK_COLUMN: case TK_AGG_FUNCTION: case TK_AGG_COLUMN: testcase( pExpr->op==TK_ID ); testcase( pExpr->op==TK_COLUMN ); testcase( pExpr->op==TK_AGG_FUNCTION ); testcase( pExpr->op==TK_AGG_COLUMN ); - pWalker->u.i = 0; - return WRC_Abort; + if( pWalker->eCode==3 && pExpr->iTable==pWalker->u.iCur ){ + return WRC_Continue; + }else{ + pWalker->eCode = 0; + return WRC_Abort; + } + case TK_VARIABLE: + if( pWalker->eCode==5 ){ + /* Silently convert bound parameters that appear inside of CREATE + ** statements into a NULL when parsing the CREATE statement text out + ** of the sqlite_master table */ + pExpr->op = TK_NULL; + }else if( pWalker->eCode==4 ){ + /* A bound parameter in a CREATE statement that originates from + ** sqlite3_prepare() causes an error */ + pWalker->eCode = 0; + return WRC_Abort; + } + /* Fall through */ default: testcase( pExpr->op==TK_SELECT ); /* selectNodeIsConstant will disallow */ testcase( pExpr->op==TK_EXISTS ); /* selectNodeIsConstant will disallow */ return WRC_Continue; } } static int selectNodeIsConstant(Walker *pWalker, Select *NotUsed){ UNUSED_PARAMETER(NotUsed); - pWalker->u.i = 0; + pWalker->eCode = 0; return WRC_Abort; } -static int exprIsConst(Expr *p, int initFlag){ +static int exprIsConst(Expr *p, int initFlag, int iCur){ Walker w; - w.u.i = initFlag; + memset(&w, 0, sizeof(w)); + w.eCode = initFlag; w.xExprCallback = exprNodeIsConstant; w.xSelectCallback = selectNodeIsConstant; + w.u.iCur = iCur; sqlite3WalkExpr(&w, p); - return w.u.i; + return w.eCode; } /* -** Walk an expression tree. Return 1 if the expression is constant +** Walk an expression tree. Return non-zero if the expression is constant ** and 0 if it involves variables or function calls. ** ** For the purposes of this function, a double-quoted string (ex: "abc") ** is considered a variable but a single-quoted string (ex: 'abc') is ** a constant. */ SQLITE_PRIVATE int sqlite3ExprIsConstant(Expr *p){ - return exprIsConst(p, 1); + return exprIsConst(p, 1, 0); } /* -** Walk an expression tree. Return 1 if the expression is constant +** Walk an expression tree. Return non-zero if the expression is constant ** that does no originate from the ON or USING clauses of a join. ** Return 0 if it involves variables or function calls or terms from ** an ON or USING clause. */ SQLITE_PRIVATE int sqlite3ExprIsConstantNotJoin(Expr *p){ - return exprIsConst(p, 3); + return exprIsConst(p, 2, 0); } /* -** Walk an expression tree. Return 1 if the expression is constant +** Walk an expression tree. Return non-zero if the expression is constant +** for any single row of the table with cursor iCur. In other words, the +** expression must not refer to any non-deterministic function nor any +** table other than iCur. +*/ +SQLITE_PRIVATE int sqlite3ExprIsTableConstant(Expr *p, int iCur){ + return exprIsConst(p, 3, iCur); +} + +/* +** Walk an expression tree. Return non-zero if the expression is constant ** or a function call with constant arguments. Return and 0 if there ** are any variables. ** ** For the purposes of this function, a double-quoted string (ex: "abc") ** is considered a variable but a single-quoted string (ex: 'abc') is ** a constant. 
*/ -SQLITE_PRIVATE int sqlite3ExprIsConstantOrFunction(Expr *p){ - return exprIsConst(p, 2); +SQLITE_PRIVATE int sqlite3ExprIsConstantOrFunction(Expr *p, u8 isInit){ + assert( isInit==0 || isInit==1 ); + return exprIsConst(p, 4+isInit, 0); } + +#ifdef SQLITE_ENABLE_CURSOR_HINTS +/* +** Walk an expression tree. Return 1 if the expression contains a +** subquery of some kind. Return 0 if there are no subqueries. +*/ +SQLITE_PRIVATE int sqlite3ExprContainsSubquery(Expr *p){ + Walker w; + memset(&w, 0, sizeof(w)); + w.eCode = 1; + w.xExprCallback = sqlite3ExprWalkNoop; + w.xSelectCallback = selectNodeIsConstant; + sqlite3WalkExpr(&w, p); + return w.eCode==0; +} +#endif /* ** If the expression p codes a constant integer that is small enough ** to fit in a 32-bit integer, return 1 and put the value of the integer ** in *pValue. If the expression is not an integer or if it is too big ** to fit in a signed 32-bit integer, return 0 and leave *pValue unchanged. */ SQLITE_PRIVATE int sqlite3ExprIsInteger(Expr *p, int *pValue){ int rc = 0; + + /* If an expression is an integer literal that fits in a signed 32-bit + ** integer, then the EP_IntValue flag will have already been set */ + assert( p->op!=TK_INTEGER || (p->flags & EP_IntValue)!=0 + || sqlite3GetInt32(p->u.zToken, &rc)==0 ); + if( p->flags & EP_IntValue ){ *pValue = p->u.iValue; return 1; } switch( p->op ){ - case TK_INTEGER: { - rc = sqlite3GetInt32(p->u.zToken, pValue); - assert( rc==0 ); - break; - } case TK_UPLUS: { rc = sqlite3ExprIsInteger(p->pLeft, pValue); break; } case TK_UMINUS: { int v; if( sqlite3ExprIsInteger(p->pLeft, &v) ){ + assert( v!=(-2147483647-1) ); *pValue = -v; rc = 1; } break; } default: break; } - if( rc ){ - assert( ExprHasAnyProperty(p, EP_Reduced|EP_TokenOnly) - || (p->flags2 & EP2_MallocedToken)==0 ); - p->op = TK_INTEGER; - p->flags |= EP_IntValue; - p->u.iValue = *pValue; - } return rc; } /* ** Return FALSE if there is no chance that the expression can be NULL. @@ -62925,33 +87972,19 @@ case TK_INTEGER: case TK_STRING: case TK_FLOAT: case TK_BLOB: return 0; + case TK_COLUMN: + assert( p->pTab!=0 ); + return ExprHasProperty(p, EP_CanBeNull) || + (p->iColumn>=0 && p->pTab->aCol[p->iColumn].notNull==0); default: return 1; } } -/* -** Generate an OP_IsNull instruction that tests register iReg and jumps -** to location iDest if the value in iReg is NULL. The value in iReg -** was computed by pExpr. If we can look at pExpr at compile-time and -** determine that it can never generate a NULL, then the OP_IsNull operation -** can be omitted. -*/ -SQLITE_PRIVATE void sqlite3ExprCodeIsNullJump( - Vdbe *v, /* The VDBE under construction */ - const Expr *pExpr, /* Only generate OP_IsNull if this expr can be NULL */ - int iReg, /* Test the value in this register for NULL */ - int iDest /* Jump here if the value is null */ -){ - if( sqlite3ExprCanBeNull(pExpr) ){ - sqlite3VdbeAddOp2(v, OP_IsNull, iReg, iDest); - } -} - /* ** Return TRUE if the given expression is a constant which would be ** unchanged by OP_Affinity with the affinity given in the second ** argument. ** @@ -62960,11 +87993,11 @@ ** is harmless. A false positive, however, can result in the wrong ** answer. 
*/ SQLITE_PRIVATE int sqlite3ExprNeedsNoAffinityChange(const Expr *p, char aff){ u8 op; - if( aff==SQLITE_AFF_NONE ) return 1; + if( aff==SQLITE_AFF_BLOB ) return 1; while( p->op==TK_UPLUS || p->op==TK_UMINUS ){ p = p->pLeft; } op = p->op; if( op==TK_REGISTER ) op = p->op2; switch( op ){ case TK_INTEGER: { @@ -63041,87 +88074,155 @@ if( pEList->a[0].pExpr->op!=TK_COLUMN ) return 0; /* Result is a column */ return 1; } #endif /* SQLITE_OMIT_SUBQUERY */ +/* +** Code an OP_Once instruction and allocate space for its flag. Return the +** address of the new instruction. +*/ +SQLITE_PRIVATE int sqlite3CodeOnce(Parse *pParse){ + Vdbe *v = sqlite3GetVdbe(pParse); /* Virtual machine being coded */ + return sqlite3VdbeAddOp1(v, OP_Once, pParse->nOnce++); +} + +/* +** Generate code that checks the left-most column of index table iCur to see if +** it contains any NULL entries. Cause the register at regHasNull to be set +** to a non-NULL value if iCur contains no NULLs. Cause register regHasNull +** to be set to NULL if iCur contains one or more NULL values. +*/ +static void sqlite3SetHasNullFlag(Vdbe *v, int iCur, int regHasNull){ + int addr1; + sqlite3VdbeAddOp2(v, OP_Integer, 0, regHasNull); + addr1 = sqlite3VdbeAddOp1(v, OP_Rewind, iCur); VdbeCoverage(v); + sqlite3VdbeAddOp3(v, OP_Column, iCur, 0, regHasNull); + sqlite3VdbeChangeP5(v, OPFLAG_TYPEOFARG); + VdbeComment((v, "first_entry_in(%d)", iCur)); + sqlite3VdbeJumpHere(v, addr1); +} + + +#ifndef SQLITE_OMIT_SUBQUERY +/* +** The argument is an IN operator with a list (not a subquery) on the +** right-hand side. Return TRUE if that list is constant. +*/ +static int sqlite3InRhsIsConstant(Expr *pIn){ + Expr *pLHS; + int res; + assert( !ExprHasProperty(pIn, EP_xIsSelect) ); + pLHS = pIn->pLeft; + pIn->pLeft = 0; + res = sqlite3ExprIsConstant(pIn); + pIn->pLeft = pLHS; + return res; +} +#endif + /* ** This function is used by the implementation of the IN (...) operator. -** It's job is to find or create a b-tree structure that may be used -** either to test for membership of the (...) set or to iterate through -** its members, skipping duplicates. +** The pX parameter is the expression on the RHS of the IN operator, which +** might be either a list of expressions or a subquery. ** -** The index of the cursor opened on the b-tree (database table, database index -** or ephermal table) is stored in pX->iTable before this function returns. +** The job of this routine is to find or create a b-tree object that can +** be used either to test for membership in the RHS set or to iterate through +** all members of the RHS set, skipping duplicates. +** +** A cursor is opened on the b-tree object that is the RHS of the IN operator +** and pX->iTable is set to the index of that cursor. +** ** The returned value of this function indicates the b-tree type, as follows: ** -** IN_INDEX_ROWID - The cursor was opened on a database table. -** IN_INDEX_INDEX - The cursor was opened on a database index. -** IN_INDEX_EPH - The cursor was opened on a specially created and -** populated epheremal table. +** IN_INDEX_ROWID - The cursor was opened on a database table. +** IN_INDEX_INDEX_ASC - The cursor was opened on an ascending index. +** IN_INDEX_INDEX_DESC - The cursor was opened on a descending index. +** IN_INDEX_EPH - The cursor was opened on a specially created and +** populated epheremal table. +** IN_INDEX_NOOP - No cursor was allocated. The IN operator must be +** implemented as a sequence of comparisons. 
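
The strategy that sqlite3FindInIndex() picks for a particular IN operator can be observed from the outside with EXPLAIN QUERY PLAN; no internal interfaces are needed. The sketch below uses only the documented prepare/step API and makes no promise about the exact plan text, which differs between SQLite versions; showPlan() is an illustrative helper:

    #include <stdio.h>
    #include <sqlite3.h>

    static void showPlan(sqlite3 *db, const char *zSql){
      sqlite3_stmt *p = 0;
      char *zEqp = sqlite3_mprintf("EXPLAIN QUERY PLAN %s", zSql);
      printf("-- %s\n", zSql);
      if( sqlite3_prepare_v2(db, zEqp, -1, &p, 0)==SQLITE_OK ){
        while( sqlite3_step(p)==SQLITE_ROW ){
          /* The last column of EXPLAIN QUERY PLAN output is the plan text */
          int iDetail = sqlite3_column_count(p)-1;
          printf("   %s\n", (const char*)sqlite3_column_text(p, iDetail));
        }
      }
      sqlite3_finalize(p);
      sqlite3_free(zEqp);
    }

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db,
        "CREATE TABLE t1(a INTEGER PRIMARY KEY, b TEXT);"
        "CREATE TABLE t2(x TEXT UNIQUE);", 0, 0, 0);
      showPlan(db, "SELECT * FROM t1 WHERE b IN (SELECT x FROM t2)");
      showPlan(db, "SELECT * FROM t1 WHERE b IN ('u','v','w','y')");
      sqlite3_close(db);
      return 0;
    }

The first query has a simple subquery on a UNIQUE column as its RHS, so an existing index can usually be reused; the second has a constant list, for which an ephemeral table (or, for very short lists, a plain sequence of comparisons) is used.
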
** -** An existing b-tree may only be used if the SELECT is of the simple -** form: +** An existing b-tree might be used if the RHS expression pX is a simple +** subquery such as: ** ** SELECT <column> FROM <table> ** -** If the prNotFound parameter is 0, then the b-tree will be used to iterate -** through the set members, skipping any duplicates. In this case an -** epheremal table must be used unless the selected <column> is guaranteed +** If the RHS of the IN operator is a list or a more complex subquery, then +** an ephemeral table might need to be generated from the RHS and then +** pX->iTable made to point to the ephemeral table instead of an +** existing table. +** +** The inFlags parameter must contain exactly one of the bits +** IN_INDEX_MEMBERSHIP or IN_INDEX_LOOP. If inFlags contains +** IN_INDEX_MEMBERSHIP, then the generated table will be used for a +** fast membership test. When the IN_INDEX_LOOP bit is set, the +** IN index will be used to loop over all values of the RHS of the +** IN operator. +** +** When IN_INDEX_LOOP is used (and the b-tree will be used to iterate +** through the set members) then the b-tree must not contain duplicates. +** An epheremal table must be used unless the selected <column> is guaranteed ** to be unique - either because it is an INTEGER PRIMARY KEY or it ** has a UNIQUE constraint or UNIQUE index. ** -** If the prNotFound parameter is not 0, then the b-tree will be used -** for fast set membership tests. In this case an epheremal table must +** When IN_INDEX_MEMBERSHIP is used (and the b-tree will be used +** for fast set membership tests) then an epheremal table must ** be used unless <column> is an INTEGER PRIMARY KEY or an index can ** be found with <column> as its left-most column. +** +** If the IN_INDEX_NOOP_OK and IN_INDEX_MEMBERSHIP are both set and +** if the RHS of the IN operator is a list (not a subquery) then this +** routine might decide that creating an ephemeral b-tree for membership +** testing is too expensive and return IN_INDEX_NOOP. In that case, the +** calling routine should implement the IN operator using a sequence +** of Eq or Ne comparison operations. ** ** When the b-tree is being used for membership tests, the calling function -** needs to know whether or not the structure contains an SQL NULL -** value in order to correctly evaluate expressions like "X IN (Y, Z)". -** If there is any chance that the (...) might contain a NULL value at +** might need to know whether or not the RHS side of the IN operator +** contains a NULL. If prRhsHasNull is not a NULL pointer and +** if there is any chance that the (...) might contain a NULL value at ** runtime, then a register is allocated and the register number written -** to *prNotFound. If there is no chance that the (...) contains a -** NULL value, then *prNotFound is left unchanged. -** -** If a register is allocated and its location stored in *prNotFound, then -** its initial value is NULL. If the (...) does not remain constant -** for the duration of the query (i.e. the SELECT within the (...) -** is a correlated subquery) then the value of the allocated register is -** reset to NULL each time the subquery is rerun. This allows the -** caller to use vdbe code equivalent to the following: -** -** if( register==NULL ){ -** has_null = <test if data structure contains null> -** register = 1 -** } -** -** in order to avoid running the <test if data structure contains null> -** test more often than is necessary. +** to *prRhsHasNull. If there is no chance that the (...) 
contains a +** NULL value, then *prRhsHasNull is left unchanged. +** +** If a register is allocated and its location stored in *prRhsHasNull, then +** the value in that register will be NULL if the b-tree contains one or more +** NULL values, and it will be some non-NULL value if the b-tree contains no +** NULL values. */ #ifndef SQLITE_OMIT_SUBQUERY -SQLITE_PRIVATE int sqlite3FindInIndex(Parse *pParse, Expr *pX, int *prNotFound){ +SQLITE_PRIVATE int sqlite3FindInIndex(Parse *pParse, Expr *pX, u32 inFlags, int *prRhsHasNull){ Select *p; /* SELECT to the right of IN operator */ int eType = 0; /* Type of RHS table. IN_INDEX_* */ int iTab = pParse->nTab++; /* Cursor of the RHS table */ - int mustBeUnique = (prNotFound==0); /* True if RHS must be unique */ + int mustBeUnique; /* True if RHS must be unique */ + Vdbe *v = sqlite3GetVdbe(pParse); /* Virtual machine being coded */ assert( pX->op==TK_IN ); + mustBeUnique = (inFlags & IN_INDEX_LOOP)!=0; /* Check to see if an existing table or index can be used to ** satisfy the query. This is preferable to generating a new ** ephemeral table. */ p = (ExprHasProperty(pX, EP_xIsSelect) ? pX->x.pSelect : 0); - if( ALWAYS(pParse->nErr==0) && isCandidateForInOpt(p) ){ + if( pParse->nErr==0 && isCandidateForInOpt(p) ){ sqlite3 *db = pParse->db; /* Database connection */ - Expr *pExpr = p->pEList->a[0].pExpr; /* Expression <column> */ - int iCol = pExpr->iColumn; /* Index of column <column> */ - Vdbe *v = sqlite3GetVdbe(pParse); /* Virtual machine being coded */ - Table *pTab = p->pSrc->a[0].pTab; /* Table <table>. */ - int iDb; /* Database idx for pTab */ + Table *pTab; /* Table <table>. */ + Expr *pExpr; /* Expression <column> */ + i16 iCol; /* Index of column <column> */ + i16 iDb; /* Database idx for pTab */ + + assert( p ); /* Because of isCandidateForInOpt(p) */ + assert( p->pEList!=0 ); /* Because of isCandidateForInOpt(p) */ + assert( p->pEList->a[0].pExpr!=0 ); /* Because of isCandidateForInOpt(p) */ + assert( p->pSrc!=0 ); /* Because of isCandidateForInOpt(p) */ + pTab = p->pSrc->a[0].pTab; + pExpr = p->pEList->a[0].pExpr; + iCol = (i16)pExpr->iColumn; - /* Code an OP_VerifyCookie and OP_TableLock for <table>. */ + /* Code an OP_Transaction and OP_TableLock for <table>. */ iDb = sqlite3SchemaToIndex(db, pTab->pSchema); sqlite3CodeVerifySchema(pParse, iDb); sqlite3TableLock(pParse, iDb, pTab->tnum, 0, pTab->zName); /* This function is only called from two places. In both cases the vdbe @@ -63128,15 +88229,12 @@ ** has already been allocated. So assume sqlite3GetVdbe() is always ** successful here. */ assert(v); if( iCol<0 ){ - int iMem = ++pParse->nMem; - int iAddr; - - iAddr = sqlite3VdbeAddOp1(v, OP_If, iMem); - sqlite3VdbeAddOp2(v, OP_Integer, 1, iMem); + int iAddr = sqlite3CodeOnce(pParse); + VdbeCoverage(v); sqlite3OpenTable(pParse, iTab, iDb, pTab, OP_OpenRead); eType = IN_INDEX_ROWID; sqlite3VdbeJumpHere(v, iAddr); @@ -63150,62 +88248,77 @@ /* Check that the affinity that will be used to perform the ** comparison is the same as the affinity of the column. If ** it is not, it is not possible to use any index. 
*/ - char aff = comparisonAffinity(pX); - int affinity_ok = (pTab->aCol[iCol].affinity==aff||aff==SQLITE_AFF_NONE); + int affinity_ok = sqlite3IndexAffinityOk(pX, pTab->aCol[iCol].affinity); for(pIdx=pTab->pIndex; pIdx && eType==0 && affinity_ok; pIdx=pIdx->pNext){ if( (pIdx->aiColumn[0]==iCol) && sqlite3FindCollSeq(db, ENC(db), pIdx->azColl[0], 0)==pReq - && (!mustBeUnique || (pIdx->nColumn==1 && pIdx->onError!=OE_None)) + && (!mustBeUnique || (pIdx->nKeyCol==1 && IsUniqueIndex(pIdx))) ){ - int iMem = ++pParse->nMem; - int iAddr; - char *pKey; - - pKey = (char *)sqlite3IndexKeyinfo(pParse, pIdx); - iAddr = sqlite3VdbeAddOp1(v, OP_If, iMem); - sqlite3VdbeAddOp2(v, OP_Integer, 1, iMem); - - sqlite3VdbeAddOp4(v, OP_OpenRead, iTab, pIdx->tnum, iDb, - pKey,P4_KEYINFO_HANDOFF); + int iAddr = sqlite3CodeOnce(pParse); VdbeCoverage(v); + sqlite3VdbeAddOp3(v, OP_OpenRead, iTab, pIdx->tnum, iDb); + sqlite3VdbeSetP4KeyInfo(pParse, pIdx); VdbeComment((v, "%s", pIdx->zName)); - eType = IN_INDEX_INDEX; + assert( IN_INDEX_INDEX_DESC == IN_INDEX_INDEX_ASC+1 ); + eType = IN_INDEX_INDEX_ASC + pIdx->aSortOrder[0]; + if( prRhsHasNull && !pTab->aCol[iCol].notNull ){ + *prRhsHasNull = ++pParse->nMem; + sqlite3SetHasNullFlag(v, iTab, *prRhsHasNull); + } sqlite3VdbeJumpHere(v, iAddr); - if( prNotFound && !pTab->aCol[iCol].notNull ){ - *prNotFound = ++pParse->nMem; - } } } } } + + /* If no preexisting index is available for the IN clause + ** and IN_INDEX_NOOP is an allowed reply + ** and the RHS of the IN operator is a list, not a subquery + ** and the RHS is not contant or has two or fewer terms, + ** then it is not worth creating an ephemeral table to evaluate + ** the IN operator so return IN_INDEX_NOOP. + */ + if( eType==0 + && (inFlags & IN_INDEX_NOOP_OK) + && !ExprHasProperty(pX, EP_xIsSelect) + && (!sqlite3InRhsIsConstant(pX) || pX->x.pList->nExpr<=2) + ){ + eType = IN_INDEX_NOOP; + } + if( eType==0 ){ - /* Could not found an existing table or index to use as the RHS b-tree. + /* Could not find an existing table or index to use as the RHS b-tree. ** We will have to generate an ephemeral table to do the job. */ + u32 savedNQueryLoop = pParse->nQueryLoop; int rMayHaveNull = 0; eType = IN_INDEX_EPH; - if( prNotFound ){ - *prNotFound = rMayHaveNull = ++pParse->nMem; - }else if( pX->pLeft->iColumn<0 && !ExprHasAnyProperty(pX, EP_xIsSelect) ){ - eType = IN_INDEX_ROWID; + if( inFlags & IN_INDEX_LOOP ){ + pParse->nQueryLoop = 0; + if( pX->pLeft->iColumn<0 && !ExprHasProperty(pX, EP_xIsSelect) ){ + eType = IN_INDEX_ROWID; + } + }else if( prRhsHasNull ){ + *prRhsHasNull = rMayHaveNull = ++pParse->nMem; } sqlite3CodeSubselect(pParse, pX, rMayHaveNull, eType==IN_INDEX_ROWID); + pParse->nQueryLoop = savedNQueryLoop; }else{ pX->iTable = iTab; } return eType; } #endif /* -** Generate code for scalar subqueries used as an expression -** and IN operators. Examples: +** Generate code for scalar subqueries used as a subquery expression, EXISTS, +** or IN operators. Examples: ** ** (SELECT a FROM b) -- subquery ** EXISTS (SELECT a FROM b) -- EXISTS subquery ** x IN (4,5,11) -- IN operator with list on right-hand side ** x IN (SELECT a FROM b) -- IN operator with subquery on the right @@ -63219,31 +88332,25 @@ ** intkey B-Tree to store the set of IN(...) values instead of the usual ** (slower) variable length keys B-Tree. ** ** If rMayHaveNull is non-zero, that means that the operation is an IN ** (not a SELECT or EXISTS) and that the RHS might contains NULLs. 
-** Furthermore, the IN is in a WHERE clause and that we really want -** to iterate over the RHS of the IN operator in order to quickly locate -** all corresponding LHS elements. All this routine does is initialize -** the register given by rMayHaveNull to NULL. Calling routines will take -** care of changing this register value to non-NULL if the RHS is NULL-free. -** -** If rMayHaveNull is zero, that means that the subquery is being used -** for membership testing only. There is no need to initialize any -** registers to indicate the presense or absence of NULLs on the RHS. +** All this routine does is initialize the register given by rMayHaveNull +** to NULL. Calling routines will take care of changing this register +** value to non-NULL if the RHS is NULL-free. ** ** For a SELECT or EXISTS operator, return the register that holds the ** result. For IN operators or if an error occurs, the return value is 0. */ #ifndef SQLITE_OMIT_SUBQUERY SQLITE_PRIVATE int sqlite3CodeSubselect( Parse *pParse, /* Parsing context */ Expr *pExpr, /* The IN, SELECT, or EXISTS operator */ - int rMayHaveNull, /* Register that records whether NULLs exist in RHS */ + int rHasNullFlag, /* Register that records whether NULLs exist in RHS */ int isRowid /* If true, LHS of IN operator is a rowid */ ){ - int testAddr = 0; /* One-time test address */ + int jmpIfDynamic = -1; /* One-time test address */ int rReg = 0; /* Register storing resulting */ Vdbe *v = sqlite3GetVdbe(pParse); if( NEVER(v==0) ) return 0; sqlite3ExprCachePush(pParse); @@ -63255,27 +88362,31 @@ ** * We are inside a trigger ** ** If all of the above are false, then we can run this code just once ** save the results, and reuse the same result on subsequent invocations. */ - if( !ExprHasAnyProperty(pExpr, EP_VarSelect) && !pParse->pTriggerTab ){ - int mem = ++pParse->nMem; - sqlite3VdbeAddOp1(v, OP_If, mem); - testAddr = sqlite3VdbeAddOp2(v, OP_Integer, 1, mem); - assert( testAddr>0 || pParse->db->mallocFailed ); + if( !ExprHasProperty(pExpr, EP_VarSelect) ){ + jmpIfDynamic = sqlite3CodeOnce(pParse); VdbeCoverage(v); } + +#ifndef SQLITE_OMIT_EXPLAIN + if( pParse->explain==2 ){ + char *zMsg = sqlite3MPrintf(pParse->db, "EXECUTE %s%s SUBQUERY %d", + jmpIfDynamic>=0?"":"CORRELATED ", + pExpr->op==TK_IN?"LIST":"SCALAR", + pParse->iNextSelectId + ); + sqlite3VdbeAddOp4(v, OP_Explain, pParse->iSelectId, 0, 0, zMsg, P4_DYNAMIC); + } +#endif switch( pExpr->op ){ case TK_IN: { - char affinity; - KeyInfo keyInfo; - int addr; /* Address of OP_OpenEphemeral instruction */ - Expr *pLeft = pExpr->pLeft; - - if( rMayHaveNull ){ - sqlite3VdbeAddOp2(v, OP_Null, 0, rMayHaveNull); - } + char affinity; /* Affinity of the LHS of the IN */ + int addr; /* Address of OP_OpenEphemeral instruction */ + Expr *pLeft = pExpr->pLeft; /* the LHS of the IN operator */ + KeyInfo *pKeyInfo = 0; /* Key information */ affinity = sqlite3ExprAffinity(pLeft); /* Whether this is an 'x IN(SELECT...)' or an 'x IN(<exprlist>)' ** expression it is handled the same way. An ephemeral table is @@ -63290,35 +88401,41 @@ ** 'x' nor the SELECT... statement are columns, then numeric affinity ** is used. */ pExpr->iTable = pParse->nTab++; addr = sqlite3VdbeAddOp2(v, OP_OpenEphemeral, pExpr->iTable, !isRowid); - memset(&keyInfo, 0, sizeof(keyInfo)); - keyInfo.nField = 1; + pKeyInfo = isRowid ? 0 : sqlite3KeyInfoAlloc(pParse->db, 1, 1); if( ExprHasProperty(pExpr, EP_xIsSelect) ){ /* Case 1: expr IN (SELECT ...) 
** ** Generate code to write the results of the select into the temporary ** table allocated and opened above. */ + Select *pSelect = pExpr->x.pSelect; SelectDest dest; ExprList *pEList; assert( !isRowid ); sqlite3SelectDestInit(&dest, SRT_Set, pExpr->iTable); - dest.affinity = (u8)affinity; + dest.affSdst = (u8)affinity; assert( (pExpr->iTable&0x0000FFFF)==pExpr->iTable ); - if( sqlite3Select(pParse, pExpr->x.pSelect, &dest) ){ + pSelect->iLimit = 0; + testcase( pSelect->selFlags & SF_Distinct ); + testcase( pKeyInfo==0 ); /* Caused by OOM in sqlite3KeyInfoAlloc() */ + if( sqlite3Select(pParse, pSelect, &dest) ){ + sqlite3KeyInfoUnref(pKeyInfo); return 0; } - pEList = pExpr->x.pSelect->pEList; - if( ALWAYS(pEList!=0 && pEList->nExpr>0) ){ - keyInfo.aColl[0] = sqlite3BinaryCompareCollSeq(pParse, pExpr->pLeft, - pEList->a[0].pExpr); - } - }else if( pExpr->x.pList!=0 ){ + pEList = pSelect->pEList; + assert( pKeyInfo!=0 ); /* OOM will cause exit after sqlite3Select() */ + assert( pEList!=0 ); + assert( pEList->nExpr>0 ); + assert( sqlite3KeyInfoIsWriteable(pKeyInfo) ); + pKeyInfo->aColl[0] = sqlite3BinaryCompareCollSeq(pParse, pExpr->pLeft, + pEList->a[0].pExpr); + }else if( ALWAYS(pExpr->x.pList!=0) ){ /* Case 2: expr IN (exprlist) ** ** For each expression, build an index key from the evaluation and ** store it in the temporary table. If <expr> is a column, then use ** that columns affinity when building index keys. If <expr> is not @@ -63328,30 +88445,33 @@ ExprList *pList = pExpr->x.pList; struct ExprList_item *pItem; int r1, r2, r3; if( !affinity ){ - affinity = SQLITE_AFF_NONE; + affinity = SQLITE_AFF_BLOB; } - keyInfo.aColl[0] = sqlite3ExprCollSeq(pParse, pExpr->pLeft); + if( pKeyInfo ){ + assert( sqlite3KeyInfoIsWriteable(pKeyInfo) ); + pKeyInfo->aColl[0] = sqlite3ExprCollSeq(pParse, pExpr->pLeft); + } /* Loop through each expression in <exprlist>. */ r1 = sqlite3GetTempReg(pParse); r2 = sqlite3GetTempReg(pParse); - sqlite3VdbeAddOp2(v, OP_Null, 0, r2); + if( isRowid ) sqlite3VdbeAddOp2(v, OP_Null, 0, r2); for(i=pList->nExpr, pItem=pList->a; i>0; i--, pItem++){ Expr *pE2 = pItem->pExpr; int iValToIns; /* If the expression is not constant then we will need to ** disable the test that was generated above that makes sure ** this code only executes once. Because for a non-constant ** expression we need to rerun this code each time. 
*/ - if( testAddr && !sqlite3ExprIsConstant(pE2) ){ - sqlite3VdbeChangeToNoop(v, testAddr-1, 2); - testAddr = 0; + if( jmpIfDynamic>=0 && !sqlite3ExprIsConstant(pE2) ){ + sqlite3VdbeChangeToNoop(v, jmpIfDynamic); + jmpIfDynamic = -1; } /* Evaluate the expression and insert it into the temp table */ if( isRowid && sqlite3ExprIsInteger(pE2, &iValToIns) ){ sqlite3VdbeAddOp3(v, OP_InsertInt, pExpr->iTable, r2, iValToIns); @@ -63358,10 +88478,11 @@ }else{ r3 = sqlite3ExprCodeTarget(pParse, pE2, r1); if( isRowid ){ sqlite3VdbeAddOp2(v, OP_MustBeInt, r3, sqlite3VdbeCurrentAddr(v)+2); + VdbeCoverage(v); sqlite3VdbeAddOp3(v, OP_Insert, pExpr->iTable, r2, r3); }else{ sqlite3VdbeAddOp4(v, OP_MakeRecord, r3, 1, r2, &affinity, 1); sqlite3ExprCacheAffinityChange(pParse, r3, 1); sqlite3VdbeAddOp2(v, OP_IdxInsert, pExpr->iTable, r2); @@ -63369,12 +88490,12 @@ } } sqlite3ReleaseTempReg(pParse, r1); sqlite3ReleaseTempReg(pParse, r2); } - if( !isRowid ){ - sqlite3VdbeChangeP4(v, addr, (void *)&keyInfo, P4_KEYINFO); + if( pKeyInfo ){ + sqlite3VdbeChangeP4(v, addr, (void *)pKeyInfo, P4_KEYINFO); } break; } case TK_EXISTS: @@ -63384,11 +88505,10 @@ ** value of this select in a memory cell and record the number ** of the memory cell in iColumn. If this is an EXISTS, write ** an integer 0 (not exists) or 1 (exists) into a memory cell ** and record that memory cell in iColumn. */ - static const Token one = { "1", 1 }; /* Token for literal value 1 */ Select *pSel; /* SELECT statement to encode */ SelectDest dest; /* How to deal with SELECt result */ testcase( pExpr->op==TK_EXISTS ); testcase( pExpr->op==TK_SELECT ); @@ -63397,32 +88517,40 @@ assert( ExprHasProperty(pExpr, EP_xIsSelect) ); pSel = pExpr->x.pSelect; sqlite3SelectDestInit(&dest, 0, ++pParse->nMem); if( pExpr->op==TK_SELECT ){ dest.eDest = SRT_Mem; - sqlite3VdbeAddOp2(v, OP_Null, 0, dest.iParm); + dest.iSdst = dest.iSDParm; + sqlite3VdbeAddOp2(v, OP_Null, 0, dest.iSDParm); VdbeComment((v, "Init subquery result")); }else{ dest.eDest = SRT_Exists; - sqlite3VdbeAddOp2(v, OP_Integer, 0, dest.iParm); + sqlite3VdbeAddOp2(v, OP_Integer, 0, dest.iSDParm); VdbeComment((v, "Init EXISTS result")); } sqlite3ExprDelete(pParse->db, pSel->pLimit); - pSel->pLimit = sqlite3PExpr(pParse, TK_INTEGER, 0, 0, &one); + pSel->pLimit = sqlite3PExpr(pParse, TK_INTEGER, 0, 0, + &sqlite3IntTokens[1]); + pSel->iLimit = 0; + pSel->selFlags &= ~SF_MultiValue; if( sqlite3Select(pParse, pSel, &dest) ){ return 0; } - rReg = dest.iParm; - ExprSetIrreducible(pExpr); + rReg = dest.iSDParm; + ExprSetVVAProperty(pExpr, EP_NoReduce); break; } } - if( testAddr ){ - sqlite3VdbeJumpHere(v, testAddr-1); + if( rHasNullFlag ){ + sqlite3SetHasNullFlag(v, pExpr->iTable, rHasNullFlag); } - sqlite3ExprCachePop(pParse, 1); + + if( jmpIfDynamic>=0 ){ + sqlite3VdbeJumpHere(v, jmpIfDynamic); + } + sqlite3ExprCachePop(pParse); return rReg; } #endif /* SQLITE_OMIT_SUBQUERY */ @@ -63437,11 +88565,11 @@ ** is an array of zero or more values. The expression is true if the LHS is ** contained within the RHS. The value of the expression is unknown (NULL) ** if the LHS is NULL or if the LHS is not contained within the RHS and the ** RHS contains one or more NULL values. ** -** This routine generates code will jump to destIfFalse if the LHS is not +** This routine generates code that jumps to destIfFalse if the LHS is not ** contained within the RHS. If due to NULLs we cannot determine if the LHS ** is contained in the RHS then jump to destIfNull. If the LHS is contained ** within the RHS then fall through. 
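
The destIfFalse/destIfNull pair exists because "x IN (...)" follows SQL three-valued logic: the result is NULL rather than false when no match is found but the RHS contains a NULL, or when the LHS itself is NULL and the RHS is non-empty. These outcomes are ordinary SQL semantics and can be checked through the public API; eval() below is an illustrative helper:

    #include <stdio.h>
    #include <sqlite3.h>

    static void eval(sqlite3 *db, const char *zExpr){
      sqlite3_stmt *p = 0;
      char *zSql = sqlite3_mprintf("SELECT %s", zExpr);
      sqlite3_prepare_v2(db, zSql, -1, &p, 0);
      if( sqlite3_step(p)==SQLITE_ROW ){
        if( sqlite3_column_type(p, 0)==SQLITE_NULL ){
          printf("%-16s -> NULL\n", zExpr);
        }else{
          printf("%-16s -> %d\n", zExpr, sqlite3_column_int(p, 0));
        }
      }
      sqlite3_finalize(p);
      sqlite3_free(zSql);
    }

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      eval(db, "1 IN (1, NULL)");   /* 1:    a match is found             */
      eval(db, "1 IN (2, 3)");      /* 0:    no match, no NULL on the RHS */
      eval(db, "1 IN (2, NULL)");   /* NULL: no match, RHS holds a NULL   */
      eval(db, "NULL IN (1, 2)");   /* NULL: LHS is NULL, RHS non-empty   */
      sqlite3_close(db);
      return 0;
    }
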
*/ static void sqlite3ExprCodeIN( @@ -63460,11 +88588,13 @@ ** pExpr->iTable will contains the values that make up the RHS. */ v = pParse->pVdbe; assert( v!=0 ); /* OOM detected prior to this routine */ VdbeNoopComment((v, "begin IN expr")); - eType = sqlite3FindInIndex(pParse, pExpr, &rRhsHasNull); + eType = sqlite3FindInIndex(pParse, pExpr, + IN_INDEX_MEMBERSHIP | IN_INDEX_NOOP_OK, + destIfFalse==destIfNull ? 0 : &rRhsHasNull); /* Figure out the affinity to use to create a key from the results ** of the expression. affinityStr stores a static string suitable for ** P4 of OP_MakeRecord. */ @@ -63473,94 +88603,127 @@ /* Code the LHS, the <expr> from "<expr> IN (...)". */ sqlite3ExprCachePush(pParse); r1 = sqlite3GetTempReg(pParse); sqlite3ExprCode(pParse, pExpr->pLeft, r1); - sqlite3VdbeAddOp2(v, OP_IsNull, r1, destIfNull); - - - if( eType==IN_INDEX_ROWID ){ - /* In this case, the RHS is the ROWID of table b-tree - */ - sqlite3VdbeAddOp2(v, OP_MustBeInt, r1, destIfFalse); - sqlite3VdbeAddOp3(v, OP_NotExists, pExpr->iTable, destIfFalse, r1); - }else{ - /* In this case, the RHS is an index b-tree. - */ - sqlite3VdbeAddOp4(v, OP_Affinity, r1, 1, 0, &affinity, 1); - - /* If the set membership test fails, then the result of the - ** "x IN (...)" expression must be either 0 or NULL. If the set - ** contains no NULL values, then the result is 0. If the set - ** contains one or more NULL values, then the result of the - ** expression is also NULL. - */ - if( rRhsHasNull==0 || destIfFalse==destIfNull ){ - /* This branch runs if it is known at compile time that the RHS - ** cannot contain NULL values. This happens as the result - ** of a "NOT NULL" constraint in the database schema. - ** - ** Also run this branch if NULL is equivalent to FALSE - ** for this particular IN operator. - */ - sqlite3VdbeAddOp4Int(v, OP_NotFound, pExpr->iTable, destIfFalse, r1, 1); - - }else{ - /* In this branch, the RHS of the IN might contain a NULL and - ** the presence of a NULL on the RHS makes a difference in the - ** outcome. - */ - int j1, j2, j3; - - /* First check to see if the LHS is contained in the RHS. If so, - ** then the presence of NULLs in the RHS does not matter, so jump - ** over all of the code that follows. - */ - j1 = sqlite3VdbeAddOp4Int(v, OP_Found, pExpr->iTable, 0, r1, 1); - - /* Here we begin generating code that runs if the LHS is not - ** contained within the RHS. Generate additional code that - ** tests the RHS for NULLs. If the RHS contains a NULL then - ** jump to destIfNull. If there are no NULLs in the RHS then - ** jump to destIfFalse. - */ - j2 = sqlite3VdbeAddOp1(v, OP_NotNull, rRhsHasNull); - j3 = sqlite3VdbeAddOp4Int(v, OP_Found, pExpr->iTable, 0, rRhsHasNull, 1); - sqlite3VdbeAddOp2(v, OP_Integer, -1, rRhsHasNull); - sqlite3VdbeJumpHere(v, j3); - sqlite3VdbeAddOp2(v, OP_AddImm, rRhsHasNull, 1); - sqlite3VdbeJumpHere(v, j2); - - /* Jump to the appropriate target depending on whether or not - ** the RHS contains a NULL - */ - sqlite3VdbeAddOp2(v, OP_If, rRhsHasNull, destIfNull); - sqlite3VdbeAddOp2(v, OP_Goto, 0, destIfFalse); - - /* The OP_Found at the top of this branch jumps here when true, - ** causing the overall IN expression evaluation to fall through. - */ - sqlite3VdbeJumpHere(v, j1); + + /* If sqlite3FindInIndex() did not find or create an index that is + ** suitable for evaluating the IN operator, then evaluate using a + ** sequence of comparisons. 
+ */ + if( eType==IN_INDEX_NOOP ){ + ExprList *pList = pExpr->x.pList; + CollSeq *pColl = sqlite3ExprCollSeq(pParse, pExpr->pLeft); + int labelOk = sqlite3VdbeMakeLabel(v); + int r2, regToFree; + int regCkNull = 0; + int ii; + assert( !ExprHasProperty(pExpr, EP_xIsSelect) ); + if( destIfNull!=destIfFalse ){ + regCkNull = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp3(v, OP_BitAnd, r1, r1, regCkNull); + } + for(ii=0; ii<pList->nExpr; ii++){ + r2 = sqlite3ExprCodeTemp(pParse, pList->a[ii].pExpr, ®ToFree); + if( regCkNull && sqlite3ExprCanBeNull(pList->a[ii].pExpr) ){ + sqlite3VdbeAddOp3(v, OP_BitAnd, regCkNull, r2, regCkNull); + } + if( ii<pList->nExpr-1 || destIfNull!=destIfFalse ){ + sqlite3VdbeAddOp4(v, OP_Eq, r1, labelOk, r2, + (void*)pColl, P4_COLLSEQ); + VdbeCoverageIf(v, ii<pList->nExpr-1); + VdbeCoverageIf(v, ii==pList->nExpr-1); + sqlite3VdbeChangeP5(v, affinity); + }else{ + assert( destIfNull==destIfFalse ); + sqlite3VdbeAddOp4(v, OP_Ne, r1, destIfFalse, r2, + (void*)pColl, P4_COLLSEQ); VdbeCoverage(v); + sqlite3VdbeChangeP5(v, affinity | SQLITE_JUMPIFNULL); + } + sqlite3ReleaseTempReg(pParse, regToFree); + } + if( regCkNull ){ + sqlite3VdbeAddOp2(v, OP_IsNull, regCkNull, destIfNull); VdbeCoverage(v); + sqlite3VdbeGoto(v, destIfFalse); + } + sqlite3VdbeResolveLabel(v, labelOk); + sqlite3ReleaseTempReg(pParse, regCkNull); + }else{ + + /* If the LHS is NULL, then the result is either false or NULL depending + ** on whether the RHS is empty or not, respectively. + */ + if( sqlite3ExprCanBeNull(pExpr->pLeft) ){ + if( destIfNull==destIfFalse ){ + /* Shortcut for the common case where the false and NULL outcomes are + ** the same. */ + sqlite3VdbeAddOp2(v, OP_IsNull, r1, destIfNull); VdbeCoverage(v); + }else{ + int addr1 = sqlite3VdbeAddOp1(v, OP_NotNull, r1); VdbeCoverage(v); + sqlite3VdbeAddOp2(v, OP_Rewind, pExpr->iTable, destIfFalse); + VdbeCoverage(v); + sqlite3VdbeGoto(v, destIfNull); + sqlite3VdbeJumpHere(v, addr1); + } + } + + if( eType==IN_INDEX_ROWID ){ + /* In this case, the RHS is the ROWID of table b-tree + */ + sqlite3VdbeAddOp2(v, OP_MustBeInt, r1, destIfFalse); VdbeCoverage(v); + sqlite3VdbeAddOp3(v, OP_NotExists, pExpr->iTable, destIfFalse, r1); + VdbeCoverage(v); + }else{ + /* In this case, the RHS is an index b-tree. + */ + sqlite3VdbeAddOp4(v, OP_Affinity, r1, 1, 0, &affinity, 1); + + /* If the set membership test fails, then the result of the + ** "x IN (...)" expression must be either 0 or NULL. If the set + ** contains no NULL values, then the result is 0. If the set + ** contains one or more NULL values, then the result of the + ** expression is also NULL. + */ + assert( destIfFalse!=destIfNull || rRhsHasNull==0 ); + if( rRhsHasNull==0 ){ + /* This branch runs if it is known at compile time that the RHS + ** cannot contain NULL values. This happens as the result + ** of a "NOT NULL" constraint in the database schema. + ** + ** Also run this branch if NULL is equivalent to FALSE + ** for this particular IN operator. + */ + sqlite3VdbeAddOp4Int(v, OP_NotFound, pExpr->iTable, destIfFalse, r1, 1); + VdbeCoverage(v); + }else{ + /* In this branch, the RHS of the IN might contain a NULL and + ** the presence of a NULL on the RHS makes a difference in the + ** outcome. + */ + int addr1; + + /* First check to see if the LHS is contained in the RHS. If so, + ** then the answer is TRUE the presence of NULLs in the RHS does + ** not matter. 
If the LHS is not contained in the RHS, then the + ** answer is NULL if the RHS contains NULLs and the answer is + ** FALSE if the RHS is NULL-free. + */ + addr1 = sqlite3VdbeAddOp4Int(v, OP_Found, pExpr->iTable, 0, r1, 1); + VdbeCoverage(v); + sqlite3VdbeAddOp2(v, OP_IsNull, rRhsHasNull, destIfNull); + VdbeCoverage(v); + sqlite3VdbeGoto(v, destIfFalse); + sqlite3VdbeJumpHere(v, addr1); + } } } sqlite3ReleaseTempReg(pParse, r1); - sqlite3ExprCachePop(pParse, 1); + sqlite3ExprCachePop(pParse); VdbeComment((v, "end IN expr")); } #endif /* SQLITE_OMIT_SUBQUERY */ -/* -** Duplicate an 8-byte value -*/ -static char *dup8bytes(Vdbe *v, const char *in){ - char *out = sqlite3DbMallocRaw(sqlite3VdbeDb(v), 8); - if( out ){ - memcpy(out, in, 8); - } - return out; -} - #ifndef SQLITE_OMIT_FLOATING_POINT /* ** Generate an instruction that will put the floating point ** value described by z[0..n-1] into register iMem. ** @@ -63569,50 +88732,53 @@ ** like the continuation of the number. */ static void codeReal(Vdbe *v, const char *z, int negateFlag, int iMem){ if( ALWAYS(z!=0) ){ double value; - char *zV; - sqlite3AtoF(z, &value); + sqlite3AtoF(z, &value, sqlite3Strlen30(z), SQLITE_UTF8); assert( !sqlite3IsNaN(value) ); /* The new AtoF never returns NaN */ if( negateFlag ) value = -value; - zV = dup8bytes(v, (char*)&value); - sqlite3VdbeAddOp4(v, OP_Real, 0, iMem, 0, zV, P4_REAL); + sqlite3VdbeAddOp4Dup8(v, OP_Real, 0, iMem, 0, (u8*)&value, P4_REAL); } } #endif /* ** Generate an instruction that will put the integer describe by ** text z[0..n-1] into register iMem. ** -** The z[] string will probably not be zero-terminated. But the -** z[n] character is guaranteed to be something that does not look -** like the continuation of the number. +** Expr.u.zToken is always UTF8 and zero-terminated. */ static void codeInteger(Parse *pParse, Expr *pExpr, int negFlag, int iMem){ Vdbe *v = pParse->pVdbe; if( pExpr->flags & EP_IntValue ){ int i = pExpr->u.iValue; + assert( i>=0 ); if( negFlag ) i = -i; sqlite3VdbeAddOp2(v, OP_Integer, i, iMem); }else{ + int c; + i64 value; const char *z = pExpr->u.zToken; assert( z!=0 ); - if( sqlite3FitsIn64Bits(z, negFlag) ){ - i64 value; - char *zV; - sqlite3Atoi64(z, &value); - if( negFlag ) value = -value; - zV = dup8bytes(v, (char*)&value); - sqlite3VdbeAddOp4(v, OP_Int64, 0, iMem, 0, zV, P4_INT64); + c = sqlite3DecOrHexToI64(z, &value); + if( c==0 || (c==2 && negFlag) ){ + if( negFlag ){ value = c==2 ? SMALLEST_INT64 : -value; } + sqlite3VdbeAddOp4Dup8(v, OP_Int64, 0, iMem, 0, (u8*)&value, P4_INT64); }else{ #ifdef SQLITE_OMIT_FLOATING_POINT sqlite3ErrorMsg(pParse, "oversized integer: %s%s", negFlag ? "-" : "", z); #else - codeReal(v, z, negFlag, iMem); +#ifndef SQLITE_OMIT_HEX_INTEGER + if( sqlite3_strnicmp(z,"0x",2)==0 ){ + sqlite3ErrorMsg(pParse, "hex literal too big: %s", z); + }else +#endif + { + codeReal(v, z, negFlag, iMem); + } #endif } } } @@ -63637,35 +88803,27 @@ int i; int minLru; int idxLru; struct yColCache *p; - assert( iReg>0 ); /* Register numbers are always positive */ + /* Unless an error has occurred, register numbers are always positive. */ + assert( iReg>0 || pParse->nErr || pParse->db->mallocFailed ); assert( iCol>=-1 && iCol<32768 ); /* Finite column numbers */ /* The SQLITE_ColumnCache flag disables the column cache. This is used ** for testing only - to verify that SQLite always gets the same answer ** with and without the column cache. 
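
The reworked codeInteger() accepts hexadecimal literals through sqlite3DecOrHexToI64() and reports "hex literal too big" when the constant does not fit in 64 bits. Assuming a build without SQLITE_OMIT_HEX_INTEGER, the effect is visible from the public API at prepare time; tryExpr() is an illustrative helper:

    #include <stdio.h>
    #include <sqlite3.h>

    static void tryExpr(sqlite3 *db, const char *zSql){
      sqlite3_stmt *p = 0;
      if( sqlite3_prepare_v2(db, zSql, -1, &p, 0)!=SQLITE_OK ){
        printf("%-34s -> error: %s\n", zSql, sqlite3_errmsg(db));
      }else if( sqlite3_step(p)==SQLITE_ROW ){
        printf("%-34s -> %lld\n", zSql,
               (long long)sqlite3_column_int64(p, 0));
      }
      sqlite3_finalize(p);
    }

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      tryExpr(db, "SELECT 0x10");                 /* 16                    */
      tryExpr(db, "SELECT 0x7fffffffffffffff");   /* largest signed 64-bit */
      tryExpr(db, "SELECT 0x10000000000000000");  /* 17 digits: too big    */
      sqlite3_close(db);
      return 0;
    }
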
*/ - if( pParse->db->flags & SQLITE_ColumnCache ) return; + if( OptimizationDisabled(pParse->db, SQLITE_ColumnCache) ) return; /* First replace any existing entry. ** ** Actually, the way the column cache is currently used, we are guaranteed ** that the object will never already be in cache. Verify this guarantee. */ #ifndef NDEBUG for(i=0, p=pParse->aColCache; i<SQLITE_N_COLCACHE; i++, p++){ -#if 0 /* This code wold remove the entry from the cache if it existed */ - if( p->iReg && p->iTable==iTab && p->iColumn==iCol ){ - cacheEntryClear(pParse, p); - p->iLevel = pParse->iCacheLevel; - p->iReg = iReg; - p->lru = pParse->iCacheCnt++; - return; - } -#endif assert( p->iReg==0 || p->iTable!=iTab || p->iColumn!=iCol ); } #endif /* Find an empty slot and replace it */ @@ -63724,23 +88882,32 @@ ** added to the column cache after this call are removed when the ** corresponding pop occurs. */ SQLITE_PRIVATE void sqlite3ExprCachePush(Parse *pParse){ pParse->iCacheLevel++; +#ifdef SQLITE_DEBUG + if( pParse->db->flags & SQLITE_VdbeAddopTrace ){ + printf("PUSH to %d\n", pParse->iCacheLevel); + } +#endif } /* ** Remove from the column cache any entries that were added since the -** the previous N Push operations. In other words, restore the cache -** to the state it was in N Pushes ago. +** the previous sqlite3ExprCachePush operation. In other words, restore +** the cache to the state it was in prior the most recent Push. */ -SQLITE_PRIVATE void sqlite3ExprCachePop(Parse *pParse, int N){ +SQLITE_PRIVATE void sqlite3ExprCachePop(Parse *pParse){ int i; struct yColCache *p; - assert( N>0 ); - assert( pParse->iCacheLevel>=N ); - pParse->iCacheLevel -= N; + assert( pParse->iCacheLevel>=1 ); + pParse->iCacheLevel--; +#ifdef SQLITE_DEBUG + if( pParse->db->flags & SQLITE_VdbeAddopTrace ){ + printf("POP to %d\n", pParse->iCacheLevel); + } +#endif for(i=0, p=pParse->aColCache; i<SQLITE_N_COLCACHE; i++, p++){ if( p->iReg && p->iLevel>pParse->iCacheLevel ){ cacheEntryClear(pParse, p); p->iReg = 0; } @@ -63760,26 +88927,77 @@ if( p->iReg==iReg ){ p->tempReg = 0; } } } + +/* Generate code that will load into register regOut a value that is +** appropriate for the iIdxCol-th column of index pIdx. +*/ +SQLITE_PRIVATE void sqlite3ExprCodeLoadIndexColumn( + Parse *pParse, /* The parsing context */ + Index *pIdx, /* The index whose column is to be loaded */ + int iTabCur, /* Cursor pointing to a table row */ + int iIdxCol, /* The column of the index to be loaded */ + int regOut /* Store the index column value in this register */ +){ + i16 iTabCol = pIdx->aiColumn[iIdxCol]; + if( iTabCol==XN_EXPR ){ + assert( pIdx->aColExpr ); + assert( pIdx->aColExpr->nExpr>iIdxCol ); + pParse->iSelfTab = iTabCur; + sqlite3ExprCodeCopy(pParse, pIdx->aColExpr->a[iIdxCol].pExpr, regOut); + }else{ + sqlite3ExprCodeGetColumnOfTable(pParse->pVdbe, pIdx->pTable, iTabCur, + iTabCol, regOut); + } +} + +/* +** Generate code to extract the value of the iCol-th column of a table. +*/ +SQLITE_PRIVATE void sqlite3ExprCodeGetColumnOfTable( + Vdbe *v, /* The VDBE under construction */ + Table *pTab, /* The table containing the value */ + int iTabCur, /* The table cursor. Or the PK cursor for WITHOUT ROWID */ + int iCol, /* Index of the column to extract */ + int regOut /* Extract the value into this register */ +){ + if( iCol<0 || iCol==pTab->iPKey ){ + sqlite3VdbeAddOp2(v, OP_Rowid, iTabCur, regOut); + }else{ + int op = IsVirtual(pTab) ? 
OP_VColumn : OP_Column; + int x = iCol; + if( !HasRowid(pTab) ){ + x = sqlite3ColumnOfIndex(sqlite3PrimaryKeyIndex(pTab), iCol); + } + sqlite3VdbeAddOp3(v, op, iTabCur, x, regOut); + } + if( iCol>=0 ){ + sqlite3ColumnDefault(v, pTab, iCol, regOut); + } +} /* ** Generate code that will extract the iColumn-th column from -** table pTab and store the column value in a register. An effort -** is made to store the column value in register iReg, but this is -** not guaranteed. The location of the column value is returned. +** table pTab and store the column value in a register. +** +** An effort is made to store the column value in register iReg. This +** is not garanteeed for GetColumn() - the result can be stored in +** any register. But the result is guaranteed to land in register iReg +** for GetColumnToReg(). ** ** There must be an open cursor to pTab in iTable when this routine ** is called. If iColumn<0 then code is generated that extracts the rowid. */ SQLITE_PRIVATE int sqlite3ExprCodeGetColumn( Parse *pParse, /* Parsing and code generating context */ Table *pTab, /* Description of the table we are reading from */ int iColumn, /* Index of the table column */ int iTable, /* The cursor pointing to the table */ - int iReg /* Store results here */ + int iReg, /* Store results here */ + u8 p5 /* P5 value for OP_Column + FLAGS */ ){ Vdbe *v = pParse->pVdbe; int i; struct yColCache *p; @@ -63789,28 +89007,42 @@ sqlite3ExprCachePinRegister(pParse, p->iReg); return p->iReg; } } assert( v!=0 ); - if( iColumn<0 ){ - sqlite3VdbeAddOp2(v, OP_Rowid, iTable, iReg); - }else if( ALWAYS(pTab!=0) ){ - int op = IsVirtual(pTab) ? OP_VColumn : OP_Column; - sqlite3VdbeAddOp3(v, op, iTable, iColumn, iReg); - sqlite3ColumnDefault(v, pTab, iColumn, iReg); - } - sqlite3ExprCacheStore(pParse, iTable, iColumn, iReg); + sqlite3ExprCodeGetColumnOfTable(v, pTab, iTable, iColumn, iReg); + if( p5 ){ + sqlite3VdbeChangeP5(v, p5); + }else{ + sqlite3ExprCacheStore(pParse, iTable, iColumn, iReg); + } return iReg; } +SQLITE_PRIVATE void sqlite3ExprCodeGetColumnToReg( + Parse *pParse, /* Parsing and code generating context */ + Table *pTab, /* Description of the table we are reading from */ + int iColumn, /* Index of the table column */ + int iTable, /* The cursor pointing to the table */ + int iReg /* Store results here */ +){ + int r1 = sqlite3ExprCodeGetColumn(pParse, pTab, iColumn, iTable, iReg, 0); + if( r1!=iReg ) sqlite3VdbeAddOp2(pParse->pVdbe, OP_SCopy, r1, iReg); +} + /* ** Clear all column cache entries. */ SQLITE_PRIVATE void sqlite3ExprCacheClear(Parse *pParse){ int i; struct yColCache *p; +#if SQLITE_DEBUG + if( pParse->db->flags & SQLITE_VdbeAddopTrace ){ + printf("CLEAR\n"); + } +#endif for(i=0, p=pParse->aColCache; i<SQLITE_N_COLCACHE; i++, p++){ if( p->iReg ){ cacheEntryClear(pParse, p); p->iReg = 0; } @@ -63828,32 +89060,13 @@ /* ** Generate code to move content from registers iFrom...iFrom+nReg-1 ** over to iTo..iTo+nReg-1. Keep the column cache up-to-date. */ SQLITE_PRIVATE void sqlite3ExprCodeMove(Parse *pParse, int iFrom, int iTo, int nReg){ - int i; - struct yColCache *p; - if( NEVER(iFrom==iTo) ) return; + assert( iFrom>=iTo+nReg || iFrom+nReg<=iTo ); sqlite3VdbeAddOp3(pParse->pVdbe, OP_Move, iFrom, iTo, nReg); - for(i=0, p=pParse->aColCache; i<SQLITE_N_COLCACHE; i++, p++){ - int x = p->iReg; - if( x>=iFrom && x<iFrom+nReg ){ - p->iReg += iTo-iFrom; - } - } -} - -/* -** Generate code to copy content from registers iFrom...iFrom+nReg-1 -** over to iTo..iTo+nReg-1. 
-*/ -SQLITE_PRIVATE void sqlite3ExprCodeCopy(Parse *pParse, int iFrom, int iTo, int nReg){ - int i; - if( NEVER(iFrom==iTo) ) return; - for(i=0; i<nReg; i++){ - sqlite3VdbeAddOp2(pParse->pVdbe, OP_Copy, iFrom+i, iTo+i); - } + sqlite3ExprCacheRemove(pParse, iFrom, nReg); } #if defined(SQLITE_DEBUG) || defined(SQLITE_COVERAGE_TEST) /* ** Return true if any register in the range iFrom..iTo (inclusive) @@ -63872,74 +89085,17 @@ return 0; } #endif /* SQLITE_DEBUG || SQLITE_COVERAGE_TEST */ /* -** If the last instruction coded is an ephemeral copy of any of -** the registers in the nReg registers beginning with iReg, then -** convert the last instruction from OP_SCopy to OP_Copy. -*/ -SQLITE_PRIVATE void sqlite3ExprHardCopy(Parse *pParse, int iReg, int nReg){ - VdbeOp *pOp; - Vdbe *v; - - assert( pParse->db->mallocFailed==0 ); - v = pParse->pVdbe; - assert( v!=0 ); - pOp = sqlite3VdbeGetOp(v, -1); - assert( pOp!=0 ); - if( pOp->opcode==OP_SCopy && pOp->p1>=iReg && pOp->p1<iReg+nReg ){ - pOp->opcode = OP_Copy; - } -} - -/* -** Generate code to store the value of the iAlias-th alias in register -** target. The first time this is called, pExpr is evaluated to compute -** the value of the alias. The value is stored in an auxiliary register -** and the number of that register is returned. On subsequent calls, -** the register number is returned without generating any code. -** -** Note that in order for this to work, code must be generated in the -** same order that it is executed. -** -** Aliases are numbered starting with 1. So iAlias is in the range -** of 1 to pParse->nAlias inclusive. -** -** pParse->aAlias[iAlias-1] records the register number where the value -** of the iAlias-th alias is stored. If zero, that means that the -** alias has not yet been computed. -*/ -static int codeAlias(Parse *pParse, int iAlias, Expr *pExpr, int target){ -#if 0 - sqlite3 *db = pParse->db; - int iReg; - if( pParse->nAliasAlloc<pParse->nAlias ){ - pParse->aAlias = sqlite3DbReallocOrFree(db, pParse->aAlias, - sizeof(pParse->aAlias[0])*pParse->nAlias ); - testcase( db->mallocFailed && pParse->nAliasAlloc>0 ); - if( db->mallocFailed ) return 0; - memset(&pParse->aAlias[pParse->nAliasAlloc], 0, - (pParse->nAlias-pParse->nAliasAlloc)*sizeof(pParse->aAlias[0])); - pParse->nAliasAlloc = pParse->nAlias; - } - assert( iAlias>0 && iAlias<=pParse->nAlias ); - iReg = pParse->aAlias[iAlias-1]; - if( iReg==0 ){ - if( pParse->iCacheLevel>0 ){ - iReg = sqlite3ExprCodeTarget(pParse, pExpr, target); - }else{ - iReg = ++pParse->nMem; - sqlite3ExprCode(pParse, pExpr, iReg); - pParse->aAlias[iAlias-1] = iReg; - } - } - return iReg; -#else - UNUSED_PARAMETER(iAlias); - return sqlite3ExprCodeTarget(pParse, pExpr, target); -#endif +** Convert an expression node to a TK_REGISTER +*/ +static void exprToRegister(Expr *p, int iReg){ + p->op2 = p->op; + p->op = TK_REGISTER; + p->iTable = iReg; + ExprClearProperty(p, EP_Skip); } /* ** Generate code into the current Vdbe to evaluate the given ** expression. Attempt to store the results in register "target". 
@@ -63957,10 +89113,11 @@ int inReg = target; /* Results stored in register inReg */ int regFree1 = 0; /* If non-zero free this temporary register */ int regFree2 = 0; /* If non-zero free this temporary register */ int r1, r2, r3, r4; /* Various register numbers */ sqlite3 *db = pParse->db; /* The database connection */ + Expr tempX; /* Temporary expression node */ assert( target>0 && target<=pParse->nMem ); if( v==0 ){ assert( pParse->db->mallocFailed ); return 0; @@ -63978,25 +89135,32 @@ if( !pAggInfo->directMode ){ assert( pCol->iMem>0 ); inReg = pCol->iMem; break; }else if( pAggInfo->useSortingIdx ){ - sqlite3VdbeAddOp3(v, OP_Column, pAggInfo->sortingIdx, + sqlite3VdbeAddOp3(v, OP_Column, pAggInfo->sortingIdxPTab, pCol->iSorterColumn, target); break; } /* Otherwise, fall thru into the TK_COLUMN case */ } case TK_COLUMN: { - if( pExpr->iTable<0 ){ - /* This only happens when coding check constraints */ - assert( pParse->ckBase>0 ); - inReg = pExpr->iColumn + pParse->ckBase; - }else{ - inReg = sqlite3ExprCodeGetColumn(pParse, pExpr->pTab, - pExpr->iColumn, pExpr->iTable, target); - } + int iTab = pExpr->iTable; + if( iTab<0 ){ + if( pParse->ckBase>0 ){ + /* Generating CHECK constraints or inserting into partial index */ + inReg = pExpr->iColumn + pParse->ckBase; + break; + }else{ + /* Coding an expression that is part of an index where column names + ** in the index refer to the table to which the index belongs */ + iTab = pParse->iSelfTab; + } + } + inReg = sqlite3ExprCodeGetColumn(pParse, pExpr->pTab, + pExpr->iColumn, iTab, target, + pExpr->op2); break; } case TK_INTEGER: { codeInteger(pParse, pExpr, 0, target); break; @@ -64008,11 +89172,11 @@ break; } #endif case TK_STRING: { assert( !ExprHasProperty(pExpr, EP_IntValue) ); - sqlite3VdbeAddOp4(v, OP_String8, 0, target, 0, pExpr->u.zToken, 0); + sqlite3VdbeLoadString(v, target, pExpr->u.zToken); break; } case TK_NULL: { sqlite3VdbeAddOp2(v, OP_Null, 0, target); break; @@ -64032,65 +89196,35 @@ sqlite3VdbeAddOp4(v, OP_Blob, n/2, target, 0, zBlob, P4_DYNAMIC); break; } #endif case TK_VARIABLE: { - VdbeOp *pOp; assert( !ExprHasProperty(pExpr, EP_IntValue) ); assert( pExpr->u.zToken!=0 ); assert( pExpr->u.zToken[0]!=0 ); - if( pExpr->u.zToken[1]==0 - && (pOp = sqlite3VdbeGetOp(v, -1))->opcode==OP_Variable - && pOp->p1+pOp->p3==pExpr->iColumn - && pOp->p2+pOp->p3==target - && pOp->p4.z==0 - ){ - /* If the previous instruction was a copy of the previous unnamed - ** parameter into the previous register, then simply increment the - ** repeat count on the prior instruction rather than making a new - ** instruction. - */ - pOp->p3++; - }else{ - sqlite3VdbeAddOp3(v, OP_Variable, pExpr->iColumn, target, 1); - if( pExpr->u.zToken[1]!=0 ){ - sqlite3VdbeChangeP4(v, -1, pExpr->u.zToken, 0); - } + sqlite3VdbeAddOp2(v, OP_Variable, pExpr->iColumn, target); + if( pExpr->u.zToken[1]!=0 ){ + assert( pExpr->u.zToken[0]=='?' 
+ || strcmp(pExpr->u.zToken, pParse->azVar[pExpr->iColumn-1])==0 ); + sqlite3VdbeChangeP4(v, -1, pParse->azVar[pExpr->iColumn-1], P4_STATIC); } break; } case TK_REGISTER: { inReg = pExpr->iTable; break; } - case TK_AS: { - inReg = codeAlias(pParse, pExpr->iTable, pExpr->pLeft, target); - break; - } #ifndef SQLITE_OMIT_CAST case TK_CAST: { /* Expressions of the form: CAST(pLeft AS token) */ - int aff, to_op; inReg = sqlite3ExprCodeTarget(pParse, pExpr->pLeft, target); - assert( !ExprHasProperty(pExpr, EP_IntValue) ); - aff = sqlite3AffinityType(pExpr->u.zToken); - to_op = aff - SQLITE_AFF_TEXT + OP_ToText; - assert( to_op==OP_ToText || aff!=SQLITE_AFF_TEXT ); - assert( to_op==OP_ToBlob || aff!=SQLITE_AFF_NONE ); - assert( to_op==OP_ToNumeric || aff!=SQLITE_AFF_NUMERIC ); - assert( to_op==OP_ToInt || aff!=SQLITE_AFF_INTEGER ); - assert( to_op==OP_ToReal || aff!=SQLITE_AFF_REAL ); - testcase( to_op==OP_ToText ); - testcase( to_op==OP_ToBlob ); - testcase( to_op==OP_ToNumeric ); - testcase( to_op==OP_ToInt ); - testcase( to_op==OP_ToReal ); if( inReg!=target ){ sqlite3VdbeAddOp2(v, OP_SCopy, inReg, target); inReg = target; } - sqlite3VdbeAddOp1(v, to_op, inReg); + sqlite3VdbeAddOp2(v, OP_Cast, target, + sqlite3AffinityType(pExpr->u.zToken, 0)); testcase( usedAsColumnCache(pParse, inReg, inReg) ); sqlite3ExprCacheAffinityChange(pParse, inReg, 1); break; } #endif /* SQLITE_OMIT_CAST */ @@ -64098,26 +89232,20 @@ case TK_LE: case TK_GT: case TK_GE: case TK_NE: case TK_EQ: { - assert( TK_LT==OP_Lt ); - assert( TK_LE==OP_Le ); - assert( TK_GT==OP_Gt ); - assert( TK_GE==OP_Ge ); - assert( TK_EQ==OP_Eq ); - assert( TK_NE==OP_Ne ); - testcase( op==TK_LT ); - testcase( op==TK_LE ); - testcase( op==TK_GT ); - testcase( op==TK_GE ); - testcase( op==TK_EQ ); - testcase( op==TK_NE ); r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); r2 = sqlite3ExprCodeTemp(pParse, pExpr->pRight, ®Free2); codeCompare(pParse, pExpr->pLeft, pExpr->pRight, op, r1, r2, inReg, SQLITE_STOREP2); + assert(TK_LT==OP_Lt); testcase(op==OP_Lt); VdbeCoverageIf(v,op==OP_Lt); + assert(TK_LE==OP_Le); testcase(op==OP_Le); VdbeCoverageIf(v,op==OP_Le); + assert(TK_GT==OP_Gt); testcase(op==OP_Gt); VdbeCoverageIf(v,op==OP_Gt); + assert(TK_GE==OP_Ge); testcase(op==OP_Ge); VdbeCoverageIf(v,op==OP_Ge); + assert(TK_EQ==OP_Eq); testcase(op==OP_Eq); VdbeCoverageIf(v,op==OP_Eq); + assert(TK_NE==OP_Ne); testcase(op==OP_Ne); VdbeCoverageIf(v,op==OP_Ne); testcase( regFree1==0 ); testcase( regFree2==0 ); break; } case TK_IS: @@ -64127,10 +89255,12 @@ r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); r2 = sqlite3ExprCodeTemp(pParse, pExpr->pRight, ®Free2); op = (op==TK_IS) ? 
TK_EQ : TK_NE; codeCompare(pParse, pExpr->pLeft, pExpr->pRight, op, r1, r2, inReg, SQLITE_STOREP2 | SQLITE_NULLEQ); + VdbeCoverageIf(v, op==TK_EQ); + VdbeCoverageIf(v, op==TK_NE); testcase( regFree1==0 ); testcase( regFree2==0 ); break; } case TK_AND: @@ -64143,32 +89273,21 @@ case TK_BITOR: case TK_SLASH: case TK_LSHIFT: case TK_RSHIFT: case TK_CONCAT: { - assert( TK_AND==OP_And ); - assert( TK_OR==OP_Or ); - assert( TK_PLUS==OP_Add ); - assert( TK_MINUS==OP_Subtract ); - assert( TK_REM==OP_Remainder ); - assert( TK_BITAND==OP_BitAnd ); - assert( TK_BITOR==OP_BitOr ); - assert( TK_SLASH==OP_Divide ); - assert( TK_LSHIFT==OP_ShiftLeft ); - assert( TK_RSHIFT==OP_ShiftRight ); - assert( TK_CONCAT==OP_Concat ); - testcase( op==TK_AND ); - testcase( op==TK_OR ); - testcase( op==TK_PLUS ); - testcase( op==TK_MINUS ); - testcase( op==TK_REM ); - testcase( op==TK_BITAND ); - testcase( op==TK_BITOR ); - testcase( op==TK_SLASH ); - testcase( op==TK_LSHIFT ); - testcase( op==TK_RSHIFT ); - testcase( op==TK_CONCAT ); + assert( TK_AND==OP_And ); testcase( op==TK_AND ); + assert( TK_OR==OP_Or ); testcase( op==TK_OR ); + assert( TK_PLUS==OP_Add ); testcase( op==TK_PLUS ); + assert( TK_MINUS==OP_Subtract ); testcase( op==TK_MINUS ); + assert( TK_REM==OP_Remainder ); testcase( op==TK_REM ); + assert( TK_BITAND==OP_BitAnd ); testcase( op==TK_BITAND ); + assert( TK_BITOR==OP_BitOr ); testcase( op==TK_BITOR ); + assert( TK_SLASH==OP_Divide ); testcase( op==TK_SLASH ); + assert( TK_LSHIFT==OP_ShiftLeft ); testcase( op==TK_LSHIFT ); + assert( TK_RSHIFT==OP_ShiftRight ); testcase( op==TK_RSHIFT ); + assert( TK_CONCAT==OP_Concat ); testcase( op==TK_CONCAT ); r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); r2 = sqlite3ExprCodeTemp(pParse, pExpr->pRight, ®Free2); sqlite3VdbeAddOp3(v, op, r2, r1, target); testcase( regFree1==0 ); testcase( regFree2==0 ); @@ -64183,43 +89302,43 @@ }else if( pLeft->op==TK_FLOAT ){ assert( !ExprHasProperty(pExpr, EP_IntValue) ); codeReal(v, pLeft->u.zToken, 1, target); #endif }else{ - regFree1 = r1 = sqlite3GetTempReg(pParse); - sqlite3VdbeAddOp2(v, OP_Integer, 0, r1); + tempX.op = TK_INTEGER; + tempX.flags = EP_IntValue|EP_TokenOnly; + tempX.u.iValue = 0; + r1 = sqlite3ExprCodeTemp(pParse, &tempX, ®Free1); r2 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free2); sqlite3VdbeAddOp3(v, OP_Subtract, r2, r1, target); testcase( regFree2==0 ); } inReg = target; break; } case TK_BITNOT: case TK_NOT: { - assert( TK_BITNOT==OP_BitNot ); - assert( TK_NOT==OP_Not ); - testcase( op==TK_BITNOT ); - testcase( op==TK_NOT ); + assert( TK_BITNOT==OP_BitNot ); testcase( op==TK_BITNOT ); + assert( TK_NOT==OP_Not ); testcase( op==TK_NOT ); r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); testcase( regFree1==0 ); inReg = target; sqlite3VdbeAddOp2(v, op, r1, inReg); break; } case TK_ISNULL: case TK_NOTNULL: { int addr; - assert( TK_ISNULL==OP_IsNull ); - assert( TK_NOTNULL==OP_NotNull ); - testcase( op==TK_ISNULL ); - testcase( op==TK_NOTNULL ); + assert( TK_ISNULL==OP_IsNull ); testcase( op==TK_ISNULL ); + assert( TK_NOTNULL==OP_NotNull ); testcase( op==TK_NOTNULL ); sqlite3VdbeAddOp2(v, OP_Integer, 1, target); r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); testcase( regFree1==0 ); addr = sqlite3VdbeAddOp1(v, op, r1); - sqlite3VdbeAddOp2(v, OP_AddImm, target, -1); + VdbeCoverageIf(v, op==TK_ISNULL); + VdbeCoverageIf(v, op==TK_NOTNULL); + sqlite3VdbeAddOp2(v, OP_Integer, 0, target); sqlite3VdbeJumpHere(v, addr); break; } case TK_AGG_FUNCTION: { AggInfo *pInfo = pExpr->pAggInfo; @@ 
-64229,65 +89348,106 @@ }else{ inReg = pInfo->aFunc[pExpr->iAgg].iMem; } break; } - case TK_CONST_FUNC: case TK_FUNCTION: { ExprList *pFarg; /* List of function arguments */ int nFarg; /* Number of function arguments */ FuncDef *pDef; /* The function definition object */ int nId; /* Length of the function name in bytes */ const char *zId; /* The function name */ - int constMask = 0; /* Mask of function arguments that are constant */ + u32 constMask = 0; /* Mask of function arguments that are constant */ int i; /* Loop counter */ u8 enc = ENC(db); /* The text encoding used by this database */ CollSeq *pColl = 0; /* A collating sequence */ assert( !ExprHasProperty(pExpr, EP_xIsSelect) ); - testcase( op==TK_CONST_FUNC ); - testcase( op==TK_FUNCTION ); - if( ExprHasAnyProperty(pExpr, EP_TokenOnly) ){ + if( ExprHasProperty(pExpr, EP_TokenOnly) ){ pFarg = 0; }else{ pFarg = pExpr->x.pList; } nFarg = pFarg ? pFarg->nExpr : 0; assert( !ExprHasProperty(pExpr, EP_IntValue) ); zId = pExpr->u.zToken; nId = sqlite3Strlen30(zId); pDef = sqlite3FindFunction(db, zId, nId, nFarg, enc, 0); - if( pDef==0 ){ + if( pDef==0 || pDef->xFinalize!=0 ){ sqlite3ErrorMsg(pParse, "unknown function: %.*s()", nId, zId); break; } /* Attempt a direct implementation of the built-in COALESCE() and - ** IFNULL() functions. This avoids unnecessary evalation of + ** IFNULL() functions. This avoids unnecessary evaluation of ** arguments past the first non-NULL argument. */ - if( pDef->flags & SQLITE_FUNC_COALESCE ){ + if( pDef->funcFlags & SQLITE_FUNC_COALESCE ){ int endCoalesce = sqlite3VdbeMakeLabel(v); assert( nFarg>=2 ); sqlite3ExprCode(pParse, pFarg->a[0].pExpr, target); for(i=1; i<nFarg; i++){ sqlite3VdbeAddOp2(v, OP_NotNull, target, endCoalesce); + VdbeCoverage(v); sqlite3ExprCacheRemove(pParse, target, 1); sqlite3ExprCachePush(pParse); sqlite3ExprCode(pParse, pFarg->a[i].pExpr, target); - sqlite3ExprCachePop(pParse, 1); + sqlite3ExprCachePop(pParse); } sqlite3VdbeResolveLabel(v, endCoalesce); break; } + /* The UNLIKELY() function is a no-op. The result is the value + ** of the first argument. + */ + if( pDef->funcFlags & SQLITE_FUNC_UNLIKELY ){ + assert( nFarg>=1 ); + inReg = sqlite3ExprCodeTarget(pParse, pFarg->a[0].pExpr, target); + break; + } + for(i=0; i<nFarg; i++){ + if( i<32 && sqlite3ExprIsConstant(pFarg->a[i].pExpr) ){ + testcase( i==31 ); + constMask |= MASKBIT32(i); + } + if( (pDef->funcFlags & SQLITE_FUNC_NEEDCOLL)!=0 && !pColl ){ + pColl = sqlite3ExprCollSeq(pParse, pFarg->a[i].pExpr); + } + } if( pFarg ){ - r1 = sqlite3GetTempRange(pParse, nFarg); + if( constMask ){ + r1 = pParse->nMem+1; + pParse->nMem += nFarg; + }else{ + r1 = sqlite3GetTempRange(pParse, nFarg); + } + + /* For length() and typeof() functions with a column argument, + ** set the P5 parameter to the OP_Column opcode to OPFLAG_LENGTHARG + ** or OPFLAG_TYPEOFARG respectively, to avoid unnecessary data + ** loading. 
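
The inline expansion of COALESCE()/IFNULL() shown earlier in this hunk jumps past the code for every argument after the first non-NULL value, so an argument that would raise a run-time error is never reached when an earlier argument already supplies the answer. The sketch below relies on abs() raising its usual "integer overflow" error for the most negative 64-bit integer when it actually runs; run() is an illustrative helper:

    #include <stdio.h>
    #include <sqlite3.h>

    static void run(sqlite3 *db, const char *zSql){
      sqlite3_stmt *p = 0;
      sqlite3_prepare_v2(db, zSql, -1, &p, 0);
      if( sqlite3_step(p)==SQLITE_ROW ){
        printf("%-34s -> %s\n", zSql, (const char*)sqlite3_column_text(p, 0));
      }else{
        printf("%-34s -> error: %s\n", zSql, sqlite3_errmsg(db));
      }
      sqlite3_finalize(p);
    }

    int main(void){
      sqlite3 *db;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db, "CREATE TABLE t(x,y);"
                       "INSERT INTO t VALUES(1, -9223372036854775808);",
                   0, 0, 0);
      run(db, "SELECT abs(y) FROM t");              /* integer overflow error */
      run(db, "SELECT coalesce(x, abs(y)) FROM t"); /* 1: abs(y) is skipped   */
      sqlite3_close(db);
      return 0;
    }
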
+ */ + if( (pDef->funcFlags & (SQLITE_FUNC_LENGTH|SQLITE_FUNC_TYPEOF))!=0 ){ + u8 exprOp; + assert( nFarg==1 ); + assert( pFarg->a[0].pExpr!=0 ); + exprOp = pFarg->a[0].pExpr->op; + if( exprOp==TK_COLUMN || exprOp==TK_AGG_COLUMN ){ + assert( SQLITE_FUNC_LENGTH==OPFLAG_LENGTHARG ); + assert( SQLITE_FUNC_TYPEOF==OPFLAG_TYPEOFARG ); + testcase( pDef->funcFlags & OPFLAG_LENGTHARG ); + pFarg->a[0].pExpr->op2 = + pDef->funcFlags & (OPFLAG_LENGTHARG|OPFLAG_TYPEOFARG); + } + } + sqlite3ExprCachePush(pParse); /* Ticket 2ea2425d34be */ - sqlite3ExprCodeExprList(pParse, pFarg, r1, 1); - sqlite3ExprCachePop(pParse, 1); /* Ticket 2ea2425d34be */ + sqlite3ExprCodeExprList(pParse, pFarg, r1, 0, + SQLITE_ECEL_DUP|SQLITE_ECEL_FACTOR); + sqlite3ExprCachePop(pParse); /* Ticket 2ea2425d34be */ }else{ r1 = 0; } #ifndef SQLITE_OMIT_VIRTUALTABLE /* Possibly overload the function if the first argument is @@ -64306,26 +89466,18 @@ pDef = sqlite3VtabOverloadFunction(db, pDef, nFarg, pFarg->a[1].pExpr); }else if( nFarg>0 ){ pDef = sqlite3VtabOverloadFunction(db, pDef, nFarg, pFarg->a[0].pExpr); } #endif - for(i=0; i<nFarg; i++){ - if( i<32 && sqlite3ExprIsConstant(pFarg->a[i].pExpr) ){ - constMask |= (1<<i); - } - if( (pDef->flags & SQLITE_FUNC_NEEDCOLL)!=0 && !pColl ){ - pColl = sqlite3ExprCollSeq(pParse, pFarg->a[i].pExpr); - } - } - if( pDef->flags & SQLITE_FUNC_NEEDCOLL ){ + if( pDef->funcFlags & SQLITE_FUNC_NEEDCOLL ){ if( !pColl ) pColl = db->pDfltColl; sqlite3VdbeAddOp4(v, OP_CollSeq, 0, 0, 0, (char *)pColl, P4_COLLSEQ); } - sqlite3VdbeAddOp4(v, OP_Function, constMask, r1, target, + sqlite3VdbeAddOp4(v, OP_Function0, constMask, r1, target, (char*)pDef, P4_FUNCDEF); sqlite3VdbeChangeP5(v, (u8)nFarg); - if( nFarg ){ + if( nFarg && constMask==0 ){ sqlite3ReleaseTempRange(pParse, r1, nFarg); } break; } #ifndef SQLITE_OMIT_SUBQUERY @@ -64371,22 +89523,24 @@ testcase( regFree1==0 ); testcase( regFree2==0 ); r3 = sqlite3GetTempReg(pParse); r4 = sqlite3GetTempReg(pParse); codeCompare(pParse, pLeft, pRight, OP_Ge, - r1, r2, r3, SQLITE_STOREP2); + r1, r2, r3, SQLITE_STOREP2); VdbeCoverage(v); pLItem++; pRight = pLItem->pExpr; sqlite3ReleaseTempReg(pParse, regFree2); r2 = sqlite3ExprCodeTemp(pParse, pRight, ®Free2); testcase( regFree2==0 ); codeCompare(pParse, pLeft, pRight, OP_Le, r1, r2, r4, SQLITE_STOREP2); + VdbeCoverage(v); sqlite3VdbeAddOp3(v, OP_And, r3, r4, target); sqlite3ReleaseTempReg(pParse, r3); sqlite3ReleaseTempReg(pParse, r4); break; } + case TK_COLLATE: case TK_UPLUS: { inReg = sqlite3ExprCodeTarget(pParse, pExpr->pLeft, target); break; } @@ -64431,11 +89585,14 @@ target )); #ifndef SQLITE_OMIT_FLOATING_POINT /* If the column has REAL affinity, it may currently be stored as an - ** integer. Use OP_RealAffinity to make sure it is really real. */ + ** integer. Use OP_RealAffinity to make sure it is really real. + ** + ** EVIDENCE-OF: R-60985-57662 SQLite will convert the value back to + ** floating point when extracting it from the record. */ if( pExpr->iColumn>=0 && pTab->aCol[pExpr->iColumn].affinity==SQLITE_AFF_REAL ){ sqlite3VdbeAddOp1(v, OP_RealAffinity, target); } @@ -64454,13 +89611,13 @@ ** Form A is can be transformed into the equivalent form B as follows: ** CASE WHEN x=e1 THEN r1 WHEN x=e2 THEN r2 ... ** WHEN x=eN THEN rN ELSE y END ** ** X (if it exists) is in pExpr->pLeft. - ** Y is in pExpr->pRight. The Y is also optional. If there is no - ** ELSE clause and no other term matches, then the result of the - ** exprssion is NULL. 
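
The comment above describes rewriting "CASE x WHEN e1 THEN r1 ... ELSE y END" (form A) into the searched form "CASE WHEN x=e1 THEN r1 ... ELSE y END" (form B), so both forms must yield the same answer. A quick equivalence check through the public API, using plain SQL and an illustrative helper evalInt():

    #include <stdio.h>
    #include <sqlite3.h>

    static int evalInt(sqlite3 *db, const char *zSql){
      sqlite3_stmt *p = 0;
      int v = -1;
      sqlite3_prepare_v2(db, zSql, -1, &p, 0);
      if( sqlite3_step(p)==SQLITE_ROW ) v = sqlite3_column_int(p, 0);
      sqlite3_finalize(p);
      return v;
    }

    int main(void){
      sqlite3 *db;
      int a, b;
      sqlite3_open(":memory:", &db);
      /* Form A: operand form */
      a = evalInt(db,
        "SELECT CASE 7 WHEN 5 THEN 50 WHEN 7 THEN 70 ELSE 0 END");
      /* Form B: equivalent searched form */
      b = evalInt(db,
        "SELECT CASE WHEN 7=5 THEN 50 WHEN 7=7 THEN 70 ELSE 0 END");
      printf("form A = %d, form B = %d\n", a, b);   /* both print 70 */
      sqlite3_close(db);
      return 0;
    }
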
+ ** Y is in the last element of pExpr->x.pList if pExpr->x.pList->nExpr is + ** odd. The Y is also optional. If the number of elements in x.pList + ** is even, then Y is omitted and the "otherwise" result is NULL. ** Ei is in pExpr->pList->a[i*2] and Ri is pExpr->pList->a[i*2+1]. ** ** The result of the expression is the Ri for the first matching Ei, ** or if there is no matching Ei, the ELSE term Y, or if there is ** no ELSE term, NULL. @@ -64471,34 +89628,35 @@ int nExpr; /* 2x number of WHEN terms */ int i; /* Loop counter */ ExprList *pEList; /* List of WHEN terms */ struct ExprList_item *aListelem; /* Array of WHEN terms */ Expr opCompare; /* The X==Ei expression */ - Expr cacheX; /* Cached expression X */ Expr *pX; /* The X expression */ Expr *pTest = 0; /* X==Ei (form A) or just Ei (form B) */ VVA_ONLY( int iCacheLevel = pParse->iCacheLevel; ) assert( !ExprHasProperty(pExpr, EP_xIsSelect) && pExpr->x.pList ); - assert((pExpr->x.pList->nExpr % 2) == 0); assert(pExpr->x.pList->nExpr > 0); pEList = pExpr->x.pList; aListelem = pEList->a; nExpr = pEList->nExpr; endLabel = sqlite3VdbeMakeLabel(v); if( (pX = pExpr->pLeft)!=0 ){ - cacheX = *pX; + tempX = *pX; testcase( pX->op==TK_COLUMN ); - testcase( pX->op==TK_REGISTER ); - cacheX.iTable = sqlite3ExprCodeTemp(pParse, pX, ®Free1); + exprToRegister(&tempX, sqlite3ExprCodeTemp(pParse, pX, ®Free1)); testcase( regFree1==0 ); - cacheX.op = TK_REGISTER; opCompare.op = TK_EQ; - opCompare.pLeft = &cacheX; + opCompare.pLeft = &tempX; pTest = &opCompare; + /* Ticket b351d95f9cd5ef17e9d9dbae18f5ca8611190001: + ** The value in regFree1 might get SCopy-ed into the file result. + ** So make sure that the regFree1 register is not reused for other + ** purposes and possibly overwritten. */ + regFree1 = 0; } - for(i=0; i<nExpr; i=i+2){ + for(i=0; i<nExpr-1; i=i+2){ sqlite3ExprCachePush(pParse); if( pX ){ assert( pTest!=0 ); opCompare.pRight = aListelem[i].pExpr; }else{ @@ -64506,20 +89664,19 @@ } nextCase = sqlite3VdbeMakeLabel(v); testcase( pTest->op==TK_COLUMN ); sqlite3ExprIfFalse(pParse, pTest, nextCase, SQLITE_JUMPIFNULL); testcase( aListelem[i+1].pExpr->op==TK_COLUMN ); - testcase( aListelem[i+1].pExpr->op==TK_REGISTER ); sqlite3ExprCode(pParse, aListelem[i+1].pExpr, target); - sqlite3VdbeAddOp2(v, OP_Goto, 0, endLabel); - sqlite3ExprCachePop(pParse, 1); + sqlite3VdbeGoto(v, endLabel); + sqlite3ExprCachePop(pParse); sqlite3VdbeResolveLabel(v, nextCase); } - if( pExpr->pRight ){ + if( (nExpr&1)!=0 ){ sqlite3ExprCachePush(pParse); - sqlite3ExprCode(pParse, pExpr->pRight, target); - sqlite3ExprCachePop(pParse, 1); + sqlite3ExprCode(pParse, pEList->a[nExpr-1].pExpr, target); + sqlite3ExprCachePop(pParse); }else{ sqlite3VdbeAddOp2(v, OP_Null, 0, target); } assert( db->mallocFailed || pParse->nErr>0 || pParse->iCacheLevel==iCacheLevel ); @@ -64543,12 +89700,14 @@ } assert( !ExprHasProperty(pExpr, EP_IntValue) ); if( pExpr->affinity==OE_Ignore ){ sqlite3VdbeAddOp4( v, OP_Halt, SQLITE_OK, OE_Ignore, 0, pExpr->u.zToken,0); + VdbeCoverage(v); }else{ - sqlite3HaltConstraint(pParse, pExpr->affinity, pExpr->u.zToken, 0); + sqlite3HaltConstraint(pParse, SQLITE_CONSTRAINT_TRIGGER, + pExpr->affinity, pExpr->u.zToken, 0, 0); } break; } #endif @@ -64555,51 +89714,127 @@ } sqlite3ReleaseTempReg(pParse, regFree1); sqlite3ReleaseTempReg(pParse, regFree2); return inReg; } + +/* +** Factor out the code of the given expression to initialization time. 
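**
** (Hypothetical example: in "SELECT x+(2*3) FROM t1" the subexpression
** 2*3 is constant, so it can be evaluated once in the initialization
** section of the VDBE program and its register reused for every row of
** the scan instead of being recomputed inside the loop.)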
+*/ +SQLITE_PRIVATE void sqlite3ExprCodeAtInit( + Parse *pParse, /* Parsing context */ + Expr *pExpr, /* The expression to code when the VDBE initializes */ + int regDest, /* Store the value in this register */ + u8 reusable /* True if this expression is reusable */ +){ + ExprList *p; + assert( ConstFactorOk(pParse) ); + p = pParse->pConstExpr; + pExpr = sqlite3ExprDup(pParse->db, pExpr, 0); + p = sqlite3ExprListAppend(pParse, p, pExpr); + if( p ){ + struct ExprList_item *pItem = &p->a[p->nExpr-1]; + pItem->u.iConstExprReg = regDest; + pItem->reusable = reusable; + } + pParse->pConstExpr = p; +} /* ** Generate code to evaluate an expression and store the results ** into a register. Return the register number where the results ** are stored. ** ** If the register is a temporary register that can be deallocated, ** then write its number into *pReg. If the result register is not ** a temporary, then set *pReg to zero. +** +** If pExpr is a constant, then this routine might generate this +** code to fill the register in the initialization section of the +** VDBE program, in order to factor it out of the evaluation loop. */ SQLITE_PRIVATE int sqlite3ExprCodeTemp(Parse *pParse, Expr *pExpr, int *pReg){ - int r1 = sqlite3GetTempReg(pParse); - int r2 = sqlite3ExprCodeTarget(pParse, pExpr, r1); - if( r2==r1 ){ - *pReg = r1; + int r2; + pExpr = sqlite3ExprSkipCollate(pExpr); + if( ConstFactorOk(pParse) + && pExpr->op!=TK_REGISTER + && sqlite3ExprIsConstantNotJoin(pExpr) + ){ + ExprList *p = pParse->pConstExpr; + int i; + *pReg = 0; + if( p ){ + struct ExprList_item *pItem; + for(pItem=p->a, i=p->nExpr; i>0; pItem++, i--){ + if( pItem->reusable && sqlite3ExprCompare(pItem->pExpr,pExpr,-1)==0 ){ + return pItem->u.iConstExprReg; + } + } + } + r2 = ++pParse->nMem; + sqlite3ExprCodeAtInit(pParse, pExpr, r2, 1); }else{ - sqlite3ReleaseTempReg(pParse, r1); - *pReg = 0; + int r1 = sqlite3GetTempReg(pParse); + r2 = sqlite3ExprCodeTarget(pParse, pExpr, r1); + if( r2==r1 ){ + *pReg = r1; + }else{ + sqlite3ReleaseTempReg(pParse, r1); + *pReg = 0; + } } return r2; } /* ** Generate code that will evaluate expression pExpr and store the ** results in register target. The results are guaranteed to appear ** in register target. */ -SQLITE_PRIVATE int sqlite3ExprCode(Parse *pParse, Expr *pExpr, int target){ +SQLITE_PRIVATE void sqlite3ExprCode(Parse *pParse, Expr *pExpr, int target){ int inReg; assert( target>0 && target<=pParse->nMem ); - inReg = sqlite3ExprCodeTarget(pParse, pExpr, target); - assert( pParse->pVdbe || pParse->db->mallocFailed ); - if( inReg!=target && pParse->pVdbe ){ - sqlite3VdbeAddOp2(pParse->pVdbe, OP_SCopy, inReg, target); + if( pExpr && pExpr->op==TK_REGISTER ){ + sqlite3VdbeAddOp2(pParse->pVdbe, OP_Copy, pExpr->iTable, target); + }else{ + inReg = sqlite3ExprCodeTarget(pParse, pExpr, target); + assert( pParse->pVdbe!=0 || pParse->db->mallocFailed ); + if( inReg!=target && pParse->pVdbe ){ + sqlite3VdbeAddOp2(pParse->pVdbe, OP_SCopy, inReg, target); + } } - return target; } /* -** Generate code that evalutes the given expression and puts the result +** Make a transient copy of expression pExpr and then code it using +** sqlite3ExprCode(). This routine works just like sqlite3ExprCode() +** except that the input expression is guaranteed to be unchanged. 
+*/ +SQLITE_PRIVATE void sqlite3ExprCodeCopy(Parse *pParse, Expr *pExpr, int target){ + sqlite3 *db = pParse->db; + pExpr = sqlite3ExprDup(db, pExpr, 0); + if( !db->mallocFailed ) sqlite3ExprCode(pParse, pExpr, target); + sqlite3ExprDelete(db, pExpr); +} + +/* +** Generate code that will evaluate expression pExpr and store the +** results in register target. The results are guaranteed to appear +** in register target. If the expression is constant, then this routine +** might choose to code the expression at initialization time. +*/ +SQLITE_PRIVATE void sqlite3ExprCodeFactorable(Parse *pParse, Expr *pExpr, int target){ + if( pParse->okConstFactor && sqlite3ExprIsConstant(pExpr) ){ + sqlite3ExprCodeAtInit(pParse, pExpr, target, 0); + }else{ + sqlite3ExprCode(pParse, pExpr, target); + } +} + +/* +** Generate code that evaluates the given expression and puts the result ** in register target. ** ** Also make a copy of the expression results into another "cache" register ** and modify the expression so that the next time it is evaluated, ** the result is a copy of the cache register. @@ -64606,179 +89841,74 @@ ** ** This routine is used for expressions that are used multiple ** times. They are evaluated once and the results of the expression ** are reused. */ -SQLITE_PRIVATE int sqlite3ExprCodeAndCache(Parse *pParse, Expr *pExpr, int target){ +SQLITE_PRIVATE void sqlite3ExprCodeAndCache(Parse *pParse, Expr *pExpr, int target){ Vdbe *v = pParse->pVdbe; - int inReg; - inReg = sqlite3ExprCode(pParse, pExpr, target); + int iMem; + assert( target>0 ); - /* This routine is called for terms to INSERT or UPDATE. And the only - ** other place where expressions can be converted into TK_REGISTER is - ** in WHERE clause processing. So as currently implemented, there is - ** no way for a TK_REGISTER to exist here. But it seems prudent to - ** keep the ALWAYS() in case the conditions above change with future - ** modifications or enhancements. */ - if( ALWAYS(pExpr->op!=TK_REGISTER) ){ - int iMem; - iMem = ++pParse->nMem; - sqlite3VdbeAddOp2(v, OP_Copy, inReg, iMem); - pExpr->iTable = iMem; - pExpr->op2 = pExpr->op; - pExpr->op = TK_REGISTER; - } - return inReg; -} - -/* -** Return TRUE if pExpr is an constant expression that is appropriate -** for factoring out of a loop. Appropriate expressions are: -** -** * Any expression that evaluates to two or more opcodes. -** -** * Any OP_Integer, OP_Real, OP_String, OP_Blob, OP_Null, -** or OP_Variable that does not need to be placed in a -** specific register. -** -** There is no point in factoring out single-instruction constant -** expressions that need to be placed in a particular register. -** We could factor them out, but then we would end up adding an -** OP_SCopy instruction to move the value into the correct register -** later. We might as well just use the original instruction and -** avoid the OP_SCopy. 
-*/ -static int isAppropriateForFactoring(Expr *p){ - if( !sqlite3ExprIsConstantNotJoin(p) ){ - return 0; /* Only constant expressions are appropriate for factoring */ - } - if( (p->flags & EP_FixedDest)==0 ){ - return 1; /* Any constant without a fixed destination is appropriate */ - } - while( p->op==TK_UPLUS ) p = p->pLeft; - switch( p->op ){ -#ifndef SQLITE_OMIT_BLOB_LITERAL - case TK_BLOB: -#endif - case TK_VARIABLE: - case TK_INTEGER: - case TK_FLOAT: - case TK_NULL: - case TK_STRING: { - testcase( p->op==TK_BLOB ); - testcase( p->op==TK_VARIABLE ); - testcase( p->op==TK_INTEGER ); - testcase( p->op==TK_FLOAT ); - testcase( p->op==TK_NULL ); - testcase( p->op==TK_STRING ); - /* Single-instruction constants with a fixed destination are - ** better done in-line. If we factor them, they will just end - ** up generating an OP_SCopy to move the value to the destination - ** register. */ - return 0; - } - case TK_UMINUS: { - if( p->pLeft->op==TK_FLOAT || p->pLeft->op==TK_INTEGER ){ - return 0; - } - break; - } - default: { - break; - } - } - return 1; -} - -/* -** If pExpr is a constant expression that is appropriate for -** factoring out of a loop, then evaluate the expression -** into a register and convert the expression into a TK_REGISTER -** expression. -*/ -static int evalConstExpr(Walker *pWalker, Expr *pExpr){ - Parse *pParse = pWalker->pParse; - switch( pExpr->op ){ - case TK_IN: - case TK_REGISTER: { - return WRC_Prune; - } - case TK_FUNCTION: - case TK_AGG_FUNCTION: - case TK_CONST_FUNC: { - /* The arguments to a function have a fixed destination. - ** Mark them this way to avoid generated unneeded OP_SCopy - ** instructions. - */ - ExprList *pList = pExpr->x.pList; - assert( !ExprHasProperty(pExpr, EP_xIsSelect) ); - if( pList ){ - int i = pList->nExpr; - struct ExprList_item *pItem = pList->a; - for(; i>0; i--, pItem++){ - if( ALWAYS(pItem->pExpr) ) pItem->pExpr->flags |= EP_FixedDest; - } - } - break; - } - } - if( isAppropriateForFactoring(pExpr) ){ - int r1 = ++pParse->nMem; - int r2; - r2 = sqlite3ExprCodeTarget(pParse, pExpr, r1); - if( NEVER(r1!=r2) ) sqlite3ReleaseTempReg(pParse, r1); - pExpr->op2 = pExpr->op; - pExpr->op = TK_REGISTER; - pExpr->iTable = r2; - return WRC_Prune; - } - return WRC_Continue; -} - -/* -** Preevaluate constant subexpressions within pExpr and store the -** results in registers. Modify pExpr so that the constant subexpresions -** are TK_REGISTER opcodes that refer to the precomputed values. -*/ -SQLITE_PRIVATE void sqlite3ExprCodeConstants(Parse *pParse, Expr *pExpr){ - Walker w; - w.xExprCallback = evalConstExpr; - w.xSelectCallback = 0; - w.pParse = pParse; - sqlite3WalkExpr(&w, pExpr); -} - + assert( pExpr->op!=TK_REGISTER ); + sqlite3ExprCode(pParse, pExpr, target); + iMem = ++pParse->nMem; + sqlite3VdbeAddOp2(v, OP_Copy, target, iMem); + exprToRegister(pExpr, iMem); +} /* ** Generate code that pushes the value of every element of the given ** expression list into a sequence of registers beginning at target. ** ** Return the number of elements evaluated. +** +** The SQLITE_ECEL_DUP flag prevents the arguments from being +** filled using OP_SCopy. OP_Copy must be used instead. +** +** The SQLITE_ECEL_FACTOR argument allows constant arguments to be +** factored out into initialization code. +** +** The SQLITE_ECEL_REF flag means that expressions in the list with +** ExprList.a[].u.x.iOrderByCol>0 have already been evaluated and stored +** in registers at srcReg, and so the value can be copied from there. 
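**
** (Hypothetical example: coding a three-element list with target=5 fills
** registers 5..7.  With SQLITE_ECEL_FACTOR set, an element that is a
** simple literal such as 7 may instead be loaded once in the
** initialization section, and with SQLITE_ECEL_DUP the values are copied
** using OP_Copy rather than OP_SCopy, as described above.)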
*/ SQLITE_PRIVATE int sqlite3ExprCodeExprList( Parse *pParse, /* Parsing context */ ExprList *pList, /* The expression list to be coded */ int target, /* Where to write results */ - int doHardCopy /* Make a hard copy of every element */ + int srcReg, /* Source registers if SQLITE_ECEL_REF */ + u8 flags /* SQLITE_ECEL_* flags */ ){ struct ExprList_item *pItem; - int i, n; + int i, j, n; + u8 copyOp = (flags & SQLITE_ECEL_DUP) ? OP_Copy : OP_SCopy; + Vdbe *v = pParse->pVdbe; assert( pList!=0 ); assert( target>0 ); + assert( pParse->pVdbe!=0 ); /* Never gets this far otherwise */ n = pList->nExpr; + if( !ConstFactorOk(pParse) ) flags &= ~SQLITE_ECEL_FACTOR; for(pItem=pList->a, i=0; i<n; i++, pItem++){ - if( pItem->iAlias ){ - int iReg = codeAlias(pParse, pItem->iAlias, pItem->pExpr, target+i); - Vdbe *v = sqlite3GetVdbe(pParse); - if( iReg!=target+i ){ - sqlite3VdbeAddOp2(v, OP_SCopy, iReg, target+i); - } - }else{ - sqlite3ExprCode(pParse, pItem->pExpr, target+i); - } - if( doHardCopy && !pParse->db->mallocFailed ){ - sqlite3ExprHardCopy(pParse, target, n); + Expr *pExpr = pItem->pExpr; + if( (flags & SQLITE_ECEL_REF)!=0 && (j = pList->a[i].u.x.iOrderByCol)>0 ){ + sqlite3VdbeAddOp2(v, copyOp, j+srcReg-1, target+i); + }else if( (flags & SQLITE_ECEL_FACTOR)!=0 && sqlite3ExprIsConstant(pExpr) ){ + sqlite3ExprCodeAtInit(pParse, pExpr, target+i, 0); + }else{ + int inReg = sqlite3ExprCodeTarget(pParse, pExpr, target+i); + if( inReg!=target+i ){ + VdbeOp *pOp; + if( copyOp==OP_Copy + && (pOp=sqlite3VdbeGetOp(v, -1))->opcode==OP_Copy + && pOp->p1+pOp->p3+1==inReg + && pOp->p2+pOp->p3+1==target+i + ){ + pOp->p3++; + }else{ + sqlite3VdbeAddOp2(v, copyOp, inReg, target+i); + } + } } } return n; } @@ -64790,11 +89920,11 @@ ** The above is equivalent to ** ** x>=y AND x<=z ** ** Code it as such, taking care to do the common subexpression -** elementation of x. +** elimination of x. 
*/ static void exprCodeBetween( Parse *pParse, /* Parsing and code generating context */ Expr *pExpr, /* The BETWEEN expression */ int dest, /* Jump here if the jump is taken */ @@ -64816,12 +89946,11 @@ compLeft.pLeft = &exprX; compLeft.pRight = pExpr->x.pList->a[0].pExpr; compRight.op = TK_LE; compRight.pLeft = &exprX; compRight.pRight = pExpr->x.pList->a[1].pExpr; - exprX.iTable = sqlite3ExprCodeTemp(pParse, &exprX, ®Free1); - exprX.op = TK_REGISTER; + exprToRegister(&exprX, sqlite3ExprCodeTemp(pParse, &exprX, ®Free1)); if( jumpIfTrue ){ sqlite3ExprIfTrue(pParse, &exprAnd, dest, jumpIfNull); }else{ sqlite3ExprIfFalse(pParse, &exprAnd, dest, jumpIfNull); } @@ -64858,28 +89987,30 @@ int regFree1 = 0; int regFree2 = 0; int r1, r2; assert( jumpIfNull==SQLITE_JUMPIFNULL || jumpIfNull==0 ); - if( NEVER(v==0) ) return; /* Existance of VDBE checked by caller */ + if( NEVER(v==0) ) return; /* Existence of VDBE checked by caller */ if( NEVER(pExpr==0) ) return; /* No way this can happen */ op = pExpr->op; switch( op ){ case TK_AND: { int d2 = sqlite3VdbeMakeLabel(v); testcase( jumpIfNull==0 ); - sqlite3ExprCachePush(pParse); sqlite3ExprIfFalse(pParse, pExpr->pLeft, d2,jumpIfNull^SQLITE_JUMPIFNULL); + sqlite3ExprCachePush(pParse); sqlite3ExprIfTrue(pParse, pExpr->pRight, dest, jumpIfNull); sqlite3VdbeResolveLabel(v, d2); - sqlite3ExprCachePop(pParse, 1); + sqlite3ExprCachePop(pParse); break; } case TK_OR: { testcase( jumpIfNull==0 ); sqlite3ExprIfTrue(pParse, pExpr->pLeft, dest, jumpIfNull); + sqlite3ExprCachePush(pParse); sqlite3ExprIfTrue(pParse, pExpr->pRight, dest, jumpIfNull); + sqlite3ExprCachePop(pParse); break; } case TK_NOT: { testcase( jumpIfNull==0 ); sqlite3ExprIfFalse(pParse, pExpr->pLeft, dest, jumpIfNull); @@ -64889,27 +90020,21 @@ case TK_LE: case TK_GT: case TK_GE: case TK_NE: case TK_EQ: { - assert( TK_LT==OP_Lt ); - assert( TK_LE==OP_Le ); - assert( TK_GT==OP_Gt ); - assert( TK_GE==OP_Ge ); - assert( TK_EQ==OP_Eq ); - assert( TK_NE==OP_Ne ); - testcase( op==TK_LT ); - testcase( op==TK_LE ); - testcase( op==TK_GT ); - testcase( op==TK_GE ); - testcase( op==TK_EQ ); - testcase( op==TK_NE ); testcase( jumpIfNull==0 ); r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); r2 = sqlite3ExprCodeTemp(pParse, pExpr->pRight, ®Free2); codeCompare(pParse, pExpr->pLeft, pExpr->pRight, op, r1, r2, dest, jumpIfNull); + assert(TK_LT==OP_Lt); testcase(op==OP_Lt); VdbeCoverageIf(v,op==OP_Lt); + assert(TK_LE==OP_Le); testcase(op==OP_Le); VdbeCoverageIf(v,op==OP_Le); + assert(TK_GT==OP_Gt); testcase(op==OP_Gt); VdbeCoverageIf(v,op==OP_Gt); + assert(TK_GE==OP_Ge); testcase(op==OP_Ge); VdbeCoverageIf(v,op==OP_Ge); + assert(TK_EQ==OP_Eq); testcase(op==OP_Eq); VdbeCoverageIf(v,op==OP_Eq); + assert(TK_NE==OP_Ne); testcase(op==OP_Ne); VdbeCoverageIf(v,op==OP_Ne); testcase( regFree1==0 ); testcase( regFree2==0 ); break; } case TK_IS: @@ -64919,43 +90044,54 @@ r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); r2 = sqlite3ExprCodeTemp(pParse, pExpr->pRight, ®Free2); op = (op==TK_IS) ? 
TK_EQ : TK_NE; codeCompare(pParse, pExpr->pLeft, pExpr->pRight, op, r1, r2, dest, SQLITE_NULLEQ); + VdbeCoverageIf(v, op==TK_EQ); + VdbeCoverageIf(v, op==TK_NE); testcase( regFree1==0 ); testcase( regFree2==0 ); break; } case TK_ISNULL: case TK_NOTNULL: { - assert( TK_ISNULL==OP_IsNull ); - assert( TK_NOTNULL==OP_NotNull ); - testcase( op==TK_ISNULL ); - testcase( op==TK_NOTNULL ); + assert( TK_ISNULL==OP_IsNull ); testcase( op==TK_ISNULL ); + assert( TK_NOTNULL==OP_NotNull ); testcase( op==TK_NOTNULL ); r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); sqlite3VdbeAddOp2(v, op, r1, dest); + VdbeCoverageIf(v, op==TK_ISNULL); + VdbeCoverageIf(v, op==TK_NOTNULL); testcase( regFree1==0 ); break; } case TK_BETWEEN: { testcase( jumpIfNull==0 ); exprCodeBetween(pParse, pExpr, dest, 1, jumpIfNull); break; } +#ifndef SQLITE_OMIT_SUBQUERY case TK_IN: { int destIfFalse = sqlite3VdbeMakeLabel(v); int destIfNull = jumpIfNull ? dest : destIfFalse; sqlite3ExprCodeIN(pParse, pExpr, destIfFalse, destIfNull); - sqlite3VdbeAddOp2(v, OP_Goto, 0, dest); + sqlite3VdbeGoto(v, dest); sqlite3VdbeResolveLabel(v, destIfFalse); break; } +#endif default: { - r1 = sqlite3ExprCodeTemp(pParse, pExpr, ®Free1); - sqlite3VdbeAddOp3(v, OP_If, r1, dest, jumpIfNull!=0); - testcase( regFree1==0 ); - testcase( jumpIfNull==0 ); + if( exprAlwaysTrue(pExpr) ){ + sqlite3VdbeGoto(v, dest); + }else if( exprAlwaysFalse(pExpr) ){ + /* No-op */ + }else{ + r1 = sqlite3ExprCodeTemp(pParse, pExpr, ®Free1); + sqlite3VdbeAddOp3(v, OP_If, r1, dest, jumpIfNull!=0); + VdbeCoverage(v); + testcase( regFree1==0 ); + testcase( jumpIfNull==0 ); + } break; } } sqlite3ReleaseTempReg(pParse, regFree1); sqlite3ReleaseTempReg(pParse, regFree2); @@ -64976,11 +90112,11 @@ int regFree1 = 0; int regFree2 = 0; int r1, r2; assert( jumpIfNull==SQLITE_JUMPIFNULL || jumpIfNull==0 ); - if( NEVER(v==0) ) return; /* Existance of VDBE checked by caller */ + if( NEVER(v==0) ) return; /* Existence of VDBE checked by caller */ if( pExpr==0 ) return; /* The value of pExpr->op and op are related as follows: ** ** pExpr->op op @@ -65014,21 +90150,23 @@ switch( pExpr->op ){ case TK_AND: { testcase( jumpIfNull==0 ); sqlite3ExprIfFalse(pParse, pExpr->pLeft, dest, jumpIfNull); + sqlite3ExprCachePush(pParse); sqlite3ExprIfFalse(pParse, pExpr->pRight, dest, jumpIfNull); + sqlite3ExprCachePop(pParse); break; } case TK_OR: { int d2 = sqlite3VdbeMakeLabel(v); testcase( jumpIfNull==0 ); - sqlite3ExprCachePush(pParse); sqlite3ExprIfTrue(pParse, pExpr->pLeft, d2, jumpIfNull^SQLITE_JUMPIFNULL); + sqlite3ExprCachePush(pParse); sqlite3ExprIfFalse(pParse, pExpr->pRight, dest, jumpIfNull); sqlite3VdbeResolveLabel(v, d2); - sqlite3ExprCachePop(pParse, 1); + sqlite3ExprCachePop(pParse); break; } case TK_NOT: { testcase( jumpIfNull==0 ); sqlite3ExprIfTrue(pParse, pExpr->pLeft, dest, jumpIfNull); @@ -65038,21 +90176,21 @@ case TK_LE: case TK_GT: case TK_GE: case TK_NE: case TK_EQ: { - testcase( op==TK_LT ); - testcase( op==TK_LE ); - testcase( op==TK_GT ); - testcase( op==TK_GE ); - testcase( op==TK_EQ ); - testcase( op==TK_NE ); testcase( jumpIfNull==0 ); r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); r2 = sqlite3ExprCodeTemp(pParse, pExpr->pRight, ®Free2); codeCompare(pParse, pExpr->pLeft, pExpr->pRight, op, r1, r2, dest, jumpIfNull); + assert(TK_LT==OP_Lt); testcase(op==OP_Lt); VdbeCoverageIf(v,op==OP_Lt); + assert(TK_LE==OP_Le); testcase(op==OP_Le); VdbeCoverageIf(v,op==OP_Le); + assert(TK_GT==OP_Gt); testcase(op==OP_Gt); VdbeCoverageIf(v,op==OP_Gt); + assert(TK_GE==OP_Ge); 
testcase(op==OP_Ge); VdbeCoverageIf(v,op==OP_Ge); + assert(TK_EQ==OP_Eq); testcase(op==OP_Eq); VdbeCoverageIf(v,op==OP_Eq); + assert(TK_NE==OP_Ne); testcase(op==OP_Ne); VdbeCoverageIf(v,op==OP_Ne); testcase( regFree1==0 ); testcase( regFree2==0 ); break; } case TK_IS: @@ -65062,28 +90200,31 @@ r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); r2 = sqlite3ExprCodeTemp(pParse, pExpr->pRight, ®Free2); op = (pExpr->op==TK_IS) ? TK_NE : TK_EQ; codeCompare(pParse, pExpr->pLeft, pExpr->pRight, op, r1, r2, dest, SQLITE_NULLEQ); + VdbeCoverageIf(v, op==TK_EQ); + VdbeCoverageIf(v, op==TK_NE); testcase( regFree1==0 ); testcase( regFree2==0 ); break; } case TK_ISNULL: case TK_NOTNULL: { - testcase( op==TK_ISNULL ); - testcase( op==TK_NOTNULL ); r1 = sqlite3ExprCodeTemp(pParse, pExpr->pLeft, ®Free1); sqlite3VdbeAddOp2(v, op, r1, dest); + testcase( op==TK_ISNULL ); VdbeCoverageIf(v, op==TK_ISNULL); + testcase( op==TK_NOTNULL ); VdbeCoverageIf(v, op==TK_NOTNULL); testcase( regFree1==0 ); break; } case TK_BETWEEN: { testcase( jumpIfNull==0 ); exprCodeBetween(pParse, pExpr, dest, 0, jumpIfNull); break; } +#ifndef SQLITE_OMIT_SUBQUERY case TK_IN: { if( jumpIfNull ){ sqlite3ExprCodeIN(pParse, pExpr, dest, dest); }else{ int destIfNull = sqlite3VdbeMakeLabel(v); @@ -65090,28 +90231,57 @@ sqlite3ExprCodeIN(pParse, pExpr, dest, destIfNull); sqlite3VdbeResolveLabel(v, destIfNull); } break; } +#endif default: { - r1 = sqlite3ExprCodeTemp(pParse, pExpr, ®Free1); - sqlite3VdbeAddOp3(v, OP_IfNot, r1, dest, jumpIfNull!=0); - testcase( regFree1==0 ); - testcase( jumpIfNull==0 ); + if( exprAlwaysFalse(pExpr) ){ + sqlite3VdbeGoto(v, dest); + }else if( exprAlwaysTrue(pExpr) ){ + /* no-op */ + }else{ + r1 = sqlite3ExprCodeTemp(pParse, pExpr, ®Free1); + sqlite3VdbeAddOp3(v, OP_IfNot, r1, dest, jumpIfNull!=0); + VdbeCoverage(v); + testcase( regFree1==0 ); + testcase( jumpIfNull==0 ); + } break; } } sqlite3ReleaseTempReg(pParse, regFree1); sqlite3ReleaseTempReg(pParse, regFree2); } + +/* +** Like sqlite3ExprIfFalse() except that a copy is made of pExpr before +** code generation, and that copy is deleted after code generation. This +** ensures that the original pExpr is unchanged. +*/ +SQLITE_PRIVATE void sqlite3ExprIfFalseDup(Parse *pParse, Expr *pExpr, int dest,int jumpIfNull){ + sqlite3 *db = pParse->db; + Expr *pCopy = sqlite3ExprDup(db, pExpr, 0); + if( db->mallocFailed==0 ){ + sqlite3ExprIfFalse(pParse, pCopy, dest, jumpIfNull); + } + sqlite3ExprDelete(db, pCopy); +} + /* ** Do a deep comparison of two expression trees. Return 0 if the two ** expressions are completely identical. Return 1 if they differ only ** by a COLLATE operator at the top level. Return 2 if there are differences ** other than the top-level COLLATE operator. ** +** If any subelement of pB has Expr.iTable==(-1) then it is allowed +** to compare equal to an equivalent element in pA with Expr.iTable==iTab. +** +** The pA side might be using TK_REGISTER. If that is the case and pB is +** not using TK_REGISTER but is otherwise equivalent, then still return 0. +** ** Sometimes this routine will return 2 even if the two expressions ** really are equivalent. If we cannot prove that the expressions are ** identical, we return 2 just to be safe. So if this routine ** returns 2, then you do not really know for certain if the two ** expressions are the same. But if you get a 0 or 1 return, then you @@ -65118,52 +90288,178 @@ ** can be sure the expressions are the same. 
In the places where ** this routine is used, it does not hurt to get an extra 2 - that ** just might result in some slightly slower code. But returning ** an incorrect 0 or 1 could lead to a malfunction. */ -SQLITE_PRIVATE int sqlite3ExprCompare(Expr *pA, Expr *pB){ - int i; - if( pA==0||pB==0 ){ - return pB==pA ? 0 : 2; - } - assert( !ExprHasAnyProperty(pA, EP_TokenOnly|EP_Reduced) ); - assert( !ExprHasAnyProperty(pB, EP_TokenOnly|EP_Reduced) ); - if( ExprHasProperty(pA, EP_xIsSelect) || ExprHasProperty(pB, EP_xIsSelect) ){ - return 2; - } - if( (pA->flags & EP_Distinct)!=(pB->flags & EP_Distinct) ) return 2; - if( pA->op!=pB->op ) return 2; - if( sqlite3ExprCompare(pA->pLeft, pB->pLeft) ) return 2; - if( sqlite3ExprCompare(pA->pRight, pB->pRight) ) return 2; - - if( pA->x.pList && pB->x.pList ){ - if( pA->x.pList->nExpr!=pB->x.pList->nExpr ) return 2; - for(i=0; i<pA->x.pList->nExpr; i++){ - Expr *pExprA = pA->x.pList->a[i].pExpr; - Expr *pExprB = pB->x.pList->a[i].pExpr; - if( sqlite3ExprCompare(pExprA, pExprB) ) return 2; - } - }else if( pA->x.pList || pB->x.pList ){ - return 2; - } - - if( pA->iTable!=pB->iTable || pA->iColumn!=pB->iColumn ) return 2; - if( ExprHasProperty(pA, EP_IntValue) ){ - if( !ExprHasProperty(pB, EP_IntValue) || pA->u.iValue!=pB->u.iValue ){ - return 2; - } - }else if( pA->op!=TK_COLUMN && pA->u.zToken ){ - if( ExprHasProperty(pB, EP_IntValue) || NEVER(pB->u.zToken==0) ) return 2; - if( sqlite3StrICmp(pA->u.zToken,pB->u.zToken)!=0 ){ - return 2; - } - } - if( (pA->flags & EP_ExpCollate)!=(pB->flags & EP_ExpCollate) ) return 1; - if( (pA->flags & EP_ExpCollate)!=0 && pA->pColl!=pB->pColl ) return 2; - return 0; -} - +SQLITE_PRIVATE int sqlite3ExprCompare(Expr *pA, Expr *pB, int iTab){ + u32 combinedFlags; + if( pA==0 || pB==0 ){ + return pB==pA ? 0 : 2; + } + combinedFlags = pA->flags | pB->flags; + if( combinedFlags & EP_IntValue ){ + if( (pA->flags&pB->flags&EP_IntValue)!=0 && pA->u.iValue==pB->u.iValue ){ + return 0; + } + return 2; + } + if( pA->op!=pB->op ){ + if( pA->op==TK_COLLATE && sqlite3ExprCompare(pA->pLeft, pB, iTab)<2 ){ + return 1; + } + if( pB->op==TK_COLLATE && sqlite3ExprCompare(pA, pB->pLeft, iTab)<2 ){ + return 1; + } + return 2; + } + if( pA->op!=TK_COLUMN && pA->op!=TK_AGG_COLUMN && pA->u.zToken ){ + if( pA->op==TK_FUNCTION ){ + if( sqlite3StrICmp(pA->u.zToken,pB->u.zToken)!=0 ) return 2; + }else if( strcmp(pA->u.zToken,pB->u.zToken)!=0 ){ + return pA->op==TK_COLLATE ? 1 : 2; + } + } + if( (pA->flags & EP_Distinct)!=(pB->flags & EP_Distinct) ) return 2; + if( ALWAYS((combinedFlags & EP_TokenOnly)==0) ){ + if( combinedFlags & EP_xIsSelect ) return 2; + if( sqlite3ExprCompare(pA->pLeft, pB->pLeft, iTab) ) return 2; + if( sqlite3ExprCompare(pA->pRight, pB->pRight, iTab) ) return 2; + if( sqlite3ExprListCompare(pA->x.pList, pB->x.pList, iTab) ) return 2; + if( ALWAYS((combinedFlags & EP_Reduced)==0) && pA->op!=TK_STRING ){ + if( pA->iColumn!=pB->iColumn ) return 2; + if( pA->iTable!=pB->iTable + && (pA->iTable!=iTab || NEVER(pB->iTable>=0)) ) return 2; + } + } + return 0; +} + +/* +** Compare two ExprList objects. Return 0 if they are identical and +** non-zero if they differ in any way. +** +** If any subelement of pB has Expr.iTable==(-1) then it is allowed +** to compare equal to an equivalent element in pA with Expr.iTable==iTab. +** +** This routine might return non-zero for equivalent ExprLists. The +** only consequence will be disabled optimizations. 
But this routine +** must never return 0 if the two ExprList objects are different, or +** a malfunction will result. +** +** Two NULL pointers are considered to be the same. But a NULL pointer +** always differs from a non-NULL pointer. +*/ +SQLITE_PRIVATE int sqlite3ExprListCompare(ExprList *pA, ExprList *pB, int iTab){ + int i; + if( pA==0 && pB==0 ) return 0; + if( pA==0 || pB==0 ) return 1; + if( pA->nExpr!=pB->nExpr ) return 1; + for(i=0; i<pA->nExpr; i++){ + Expr *pExprA = pA->a[i].pExpr; + Expr *pExprB = pB->a[i].pExpr; + if( pA->a[i].sortOrder!=pB->a[i].sortOrder ) return 1; + if( sqlite3ExprCompare(pExprA, pExprB, iTab) ) return 1; + } + return 0; +} + +/* +** Return true if we can prove the pE2 will always be true if pE1 is +** true. Return false if we cannot complete the proof or if pE2 might +** be false. Examples: +** +** pE1: x==5 pE2: x==5 Result: true +** pE1: x>0 pE2: x==5 Result: false +** pE1: x=21 pE2: x=21 OR y=43 Result: true +** pE1: x!=123 pE2: x IS NOT NULL Result: true +** pE1: x!=?1 pE2: x IS NOT NULL Result: true +** pE1: x IS NULL pE2: x IS NOT NULL Result: false +** pE1: x IS ?2 pE2: x IS NOT NULL Reuslt: false +** +** When comparing TK_COLUMN nodes between pE1 and pE2, if pE2 has +** Expr.iTable<0 then assume a table number given by iTab. +** +** When in doubt, return false. Returning true might give a performance +** improvement. Returning false might cause a performance reduction, but +** it will always give the correct answer and is hence always safe. +*/ +SQLITE_PRIVATE int sqlite3ExprImpliesExpr(Expr *pE1, Expr *pE2, int iTab){ + if( sqlite3ExprCompare(pE1, pE2, iTab)==0 ){ + return 1; + } + if( pE2->op==TK_OR + && (sqlite3ExprImpliesExpr(pE1, pE2->pLeft, iTab) + || sqlite3ExprImpliesExpr(pE1, pE2->pRight, iTab) ) + ){ + return 1; + } + if( pE2->op==TK_NOTNULL + && sqlite3ExprCompare(pE1->pLeft, pE2->pLeft, iTab)==0 + && (pE1->op!=TK_ISNULL && pE1->op!=TK_IS) + ){ + return 1; + } + return 0; +} + +/* +** An instance of the following structure is used by the tree walker +** to count references to table columns in the arguments of an +** aggregate function, in order to implement the +** sqlite3FunctionThisSrc() routine. +*/ +struct SrcCount { + SrcList *pSrc; /* One particular FROM clause in a nested query */ + int nThis; /* Number of references to columns in pSrcList */ + int nOther; /* Number of references to columns in other FROM clauses */ +}; + +/* +** Count the number of references to columns. +*/ +static int exprSrcCount(Walker *pWalker, Expr *pExpr){ + /* The NEVER() on the second term is because sqlite3FunctionUsesThisSrc() + ** is always called before sqlite3ExprAnalyzeAggregates() and so the + ** TK_COLUMNs have not yet been converted into TK_AGG_COLUMN. If + ** sqlite3FunctionUsesThisSrc() is used differently in the future, the + ** NEVER() will need to be removed. */ + if( pExpr->op==TK_COLUMN || NEVER(pExpr->op==TK_AGG_COLUMN) ){ + int i; + struct SrcCount *p = pWalker->u.pSrcCount; + SrcList *pSrc = p->pSrc; + int nSrc = pSrc ? pSrc->nSrc : 0; + for(i=0; i<nSrc; i++){ + if( pExpr->iTable==pSrc->a[i].iCursor ) break; + } + if( i<nSrc ){ + p->nThis++; + }else{ + p->nOther++; + } + } + return WRC_Continue; +} + +/* +** Determine if any of the arguments to the pExpr Function reference +** pSrcList. Return true if they do. Also return true if the function +** has no arguments or has only constant arguments. Return false if pExpr +** references columns but not columns of tables found in pSrcList. 
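**
** (Hypothetical example: for the inner query of
** "SELECT (SELECT max(t2.y+t1.x) FROM t2) FROM t1", with pSrcList holding
** the inner FROM clause (t2), the reference to t2.y counts toward nThis
** and the reference to t1.x toward nOther, so the routine below returns
** true because nThis>0.)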
+*/ +SQLITE_PRIVATE int sqlite3FunctionUsesThisSrc(Expr *pExpr, SrcList *pSrcList){ + Walker w; + struct SrcCount cnt; + assert( pExpr->op==TK_AGG_FUNCTION ); + memset(&w, 0, sizeof(w)); + w.xExprCallback = exprSrcCount; + w.u.pSrcCount = &cnt; + cnt.pSrc = pSrcList; + cnt.nThis = 0; + cnt.nOther = 0; + sqlite3WalkExprList(&w, pExpr->x.pList); + return cnt.nThis>0 || cnt.nOther==0; +} /* ** Add a new element to the pAggInfo->aCol[] array. Return the index of ** the new element. Return a negative number if malloc fails. */ @@ -65171,13 +90467,11 @@ int i; pInfo->aCol = sqlite3ArrayAllocate( db, pInfo->aCol, sizeof(pInfo->aCol[0]), - 3, &pInfo->nColumn, - &pInfo->nColumnAlloc, &i ); return i; } @@ -65189,13 +90483,11 @@ int i; pInfo->aFunc = sqlite3ArrayAllocate( db, pInfo->aFunc, sizeof(pInfo->aFunc[0]), - 3, &pInfo->nFunc, - &pInfo->nFuncAlloc, &i ); return i; } @@ -65220,11 +90512,11 @@ ** clause of the aggregate query */ if( ALWAYS(pSrcList!=0) ){ struct SrcList_item *pItem = pSrcList->a; for(i=0; i<pSrcList->nSrc; i++, pItem++){ struct AggInfo_col *pCol; - assert( !ExprHasAnyProperty(pExpr, EP_TokenOnly|EP_Reduced) ); + assert( !ExprHasProperty(pExpr, EP_TokenOnly|EP_Reduced) ); if( pExpr->iTable==pItem->iCursor ){ /* If we reach this point, it means that pExpr refers to a table ** that is in the FROM clause of the aggregate query. ** ** Make an entry for the column in pAggInfo->aCol[] if there @@ -65269,11 +90561,11 @@ /* There is now an entry for pExpr in pAggInfo->aCol[] (either ** because it was there before or because we just created it). ** Convert the pExpr to be a TK_AGG_COLUMN referring to that ** pAggInfo->aCol[] entry. */ - ExprSetIrreducible(pExpr); + ExprSetVVAProperty(pExpr, EP_NoReduce); pExpr->pAggInfo = pAggInfo; pExpr->op = TK_AGG_COLUMN; pExpr->iAgg = (i16)k; break; } /* endif pExpr->iTable==pItem->iCursor */ @@ -65280,19 +90572,19 @@ } /* end loop over pSrcList */ } return WRC_Prune; } case TK_AGG_FUNCTION: { - /* The pNC->nDepth==0 test causes aggregate functions in subqueries - ** to be ignored */ - if( pNC->nDepth==0 ){ + if( (pNC->ncFlags & NC_InAggFunc)==0 + && pWalker->walkerDepth==pExpr->op2 + ){ /* Check to see if pExpr is a duplicate of another aggregate ** function that is already in the pAggInfo structure */ struct AggInfo_func *pItem = pAggInfo->aFunc; for(i=0; i<pAggInfo->nFunc; i++, pItem++){ - if( sqlite3ExprCompare(pItem->pExpr, pExpr)==0 ){ + if( sqlite3ExprCompare(pItem->pExpr, pExpr, -1)==0 ){ break; } } if( i>=pAggInfo->nFunc ){ /* pExpr is original. Make a new entry in pAggInfo->aFunc[] @@ -65315,42 +90607,40 @@ } } } /* Make pExpr point to the appropriate pAggInfo->aFunc[] entry */ - assert( !ExprHasAnyProperty(pExpr, EP_TokenOnly|EP_Reduced) ); - ExprSetIrreducible(pExpr); + assert( !ExprHasProperty(pExpr, EP_TokenOnly|EP_Reduced) ); + ExprSetVVAProperty(pExpr, EP_NoReduce); pExpr->iAgg = (i16)i; pExpr->pAggInfo = pAggInfo; return WRC_Prune; + }else{ + return WRC_Continue; } } } return WRC_Continue; } static int analyzeAggregatesInSelect(Walker *pWalker, Select *pSelect){ - NameContext *pNC = pWalker->u.pNC; - if( pNC->nDepth==0 ){ - pNC->nDepth++; - sqlite3WalkSelect(pWalker, pSelect); - pNC->nDepth--; - return WRC_Prune; - }else{ - return WRC_Continue; - } + UNUSED_PARAMETER(pWalker); + UNUSED_PARAMETER(pSelect); + return WRC_Continue; } /* -** Analyze the given expression looking for aggregate functions and -** for variables that need to be added to the pParse->aAgg[] array. 
-** Make additional entries to the pParse->aAgg[] array as necessary. +** Analyze the pExpr expression looking for aggregate functions and +** for variables that need to be added to AggInfo object that pNC->pAggInfo +** points to. Additional entries are made on the AggInfo object as +** necessary. ** ** This routine should only be called after the expression has been ** analyzed by sqlite3ResolveExprNames(). */ SQLITE_PRIVATE void sqlite3ExprAnalyzeAggregates(NameContext *pNC, Expr *pExpr){ Walker w; + memset(&w, 0, sizeof(w)); w.xExprCallback = analyzeAggregate; w.xSelectCallback = analyzeAggregatesInSelect; w.u.pNC = pNC; assert( pNC->pSrcList!=0 ); sqlite3WalkExpr(&w, pExpr); @@ -65385,11 +90675,11 @@ /* ** Deallocate a register, making available for reuse for some other ** purpose. ** ** If a register is currently being used by the column cache, then -** the dallocation is deferred until the column cache line that uses +** the deallocation is deferred until the column cache line that uses ** the register becomes stale. */ SQLITE_PRIVATE void sqlite3ReleaseTempReg(Parse *pParse, int iReg){ if( iReg && pParse->nTempReg<ArraySize(pParse->aTempReg) ){ int i; @@ -65426,10 +90716,18 @@ if( nReg>pParse->nRangeReg ){ pParse->nRangeReg = nReg; pParse->iRangeReg = iReg; } } + +/* +** Mark all temporary registers as being unavailable for reuse. +*/ +SQLITE_PRIVATE void sqlite3ClearTempRegCache(Parse *pParse){ + pParse->nTempReg = 0; + pParse->nRangeReg = 0; +} /************** End of expr.c ************************************************/ /************** Begin file alter.c *******************************************/ /* ** 2005 February 15 @@ -65443,10 +90741,11 @@ ** ************************************************************************* ** This file contains C code routines that used to generate VDBE code ** that implements the ALTER TABLE command. */ +/* #include "sqliteInt.h" */ /* ** The code in this file only exists if we are not omitting the ** ALTER TABLE logic from the build. 
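**
** (For reference, the statements handled by this file are of the form
** "ALTER TABLE t1 RENAME TO t2" and "ALTER TABLE t1 ADD COLUMN c ...";
** the table and column names shown are hypothetical.)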
*/ @@ -65507,12 +90806,12 @@ len = sqlite3GetToken(zCsr, &token); } while( token==TK_SPACE ); assert( len>0 ); } while( token!=TK_LP && token!=TK_USING ); - zRet = sqlite3MPrintf(db, "%.*s\"%w\"%s", ((u8*)tname.z) - zSql, zSql, - zTableName, tname.z+tname.n); + zRet = sqlite3MPrintf(db, "%.*s\"%w\"%s", (int)(((u8*)tname.z) - zSql), + zSql, zTableName, tname.z+tname.n); sqlite3_result_text(context, zRet, -1, SQLITE_DYNAMIC); } } /* @@ -65546,25 +90845,27 @@ unsigned const char *z; /* Pointer to token */ int n; /* Length of token z */ int token; /* Type of token */ UNUSED_PARAMETER(NotUsed); + if( zInput==0 || zOld==0 ) return; for(z=zInput; *z; z=z+n){ n = sqlite3GetToken(z, &token); if( token==TK_REFERENCES ){ char *zParent; do { z += n; n = sqlite3GetToken(z, &token); }while( token==TK_SPACE ); + if( token==TK_ILLEGAL ) break; zParent = sqlite3DbStrNDup(db, (const char *)z, n); if( zParent==0 ) break; sqlite3Dequote(zParent); if( 0==sqlite3StrICmp((const char *)zOld, zParent) ){ char *zOut = sqlite3MPrintf(db, "%s%.*s\"%w\"", - (zOutput?zOutput:""), z-zInput, zInput, (const char *)zNew + (zOutput?zOutput:""), (int)(z-zInput), zInput, (const char *)zNew ); sqlite3DbFree(db, zOutput); zOutput = zOut; zInput = &z[n]; } @@ -65603,12 +90904,12 @@ sqlite3 *db = sqlite3_context_db_handle(context); UNUSED_PARAMETER(NotUsed); /* The principle used to locate the table name in the CREATE TRIGGER - ** statement is that the table name is the first token that is immediatedly - ** preceded by either TK_ON or TK_DOT and immediatedly followed by one + ** statement is that the table name is the first token that is immediately + ** preceded by either TK_ON or TK_DOT and immediately followed by one ** of TK_WHEN, TK_BEGIN or TK_FOR. */ if( zSql ){ do { @@ -65646,31 +90947,37 @@ } while( dist!=2 || (token!=TK_WHEN && token!=TK_FOR && token!=TK_BEGIN) ); /* Variable tname now contains the token that is the old table-name ** in the CREATE TRIGGER statement. 
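    **
    ** (Hypothetical example: for a trigger stored as
    **    CREATE TRIGGER tr1 AFTER INSERT ON t1 BEGIN SELECT 1; END
    ** renaming t1 to t2 splices the double-quoted new name over the old
    ** token, so the rewritten text becomes
    **    CREATE TRIGGER tr1 AFTER INSERT ON "t2" BEGIN SELECT 1; END)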
*/ - zRet = sqlite3MPrintf(db, "%.*s\"%w\"%s", ((u8*)tname.z) - zSql, zSql, - zTableName, tname.z+tname.n); + zRet = sqlite3MPrintf(db, "%.*s\"%w\"%s", (int)(((u8*)tname.z) - zSql), + zSql, zTableName, tname.z+tname.n); sqlite3_result_text(context, zRet, -1, SQLITE_DYNAMIC); } } #endif /* !SQLITE_OMIT_TRIGGER */ /* ** Register built-in functions used to help implement ALTER TABLE */ -SQLITE_PRIVATE void sqlite3AlterFunctions(sqlite3 *db){ - sqlite3CreateFunc(db, "sqlite_rename_table", 2, SQLITE_UTF8, 0, - renameTableFunc, 0, 0); +SQLITE_PRIVATE void sqlite3AlterFunctions(void){ + static SQLITE_WSD FuncDef aAlterTableFuncs[] = { + FUNCTION(sqlite_rename_table, 2, 0, 0, renameTableFunc), #ifndef SQLITE_OMIT_TRIGGER - sqlite3CreateFunc(db, "sqlite_rename_trigger", 2, SQLITE_UTF8, 0, - renameTriggerFunc, 0, 0); + FUNCTION(sqlite_rename_trigger, 2, 0, 0, renameTriggerFunc), #endif #ifndef SQLITE_OMIT_FOREIGN_KEY - sqlite3CreateFunc(db, "sqlite_rename_parent", 3, SQLITE_UTF8, 0, - renameParentFunc, 0, 0); + FUNCTION(sqlite_rename_parent, 3, 0, 0, renameParentFunc), #endif + }; + int i; + FuncDefHash *pHash = &GLOBAL(FuncDefHash, sqlite3GlobalFunctions); + FuncDef *aFunc = (FuncDef*)&GLOBAL(FuncDef, aAlterTableFuncs); + + for(i=0; i<ArraySize(aAlterTableFuncs); i++){ + sqlite3FuncDefInsert(pHash, &aFunc[i]); + } } /* ** This function is used to create the text of expressions of the form: ** @@ -65737,10 +91044,15 @@ if( pTrig->pSchema==pTempSchema ){ zWhere = whereOrName(db, zWhere, pTrig->zName); } } } + if( zWhere ){ + char *zNew = sqlite3MPrintf(pParse->db, "type='trigger' AND (%s)", zWhere); + sqlite3DbFree(pParse->db, zWhere); + zWhere = zNew; + } return zWhere; } /* ** Generate code to drop and reload the internal representation of table @@ -65777,21 +91089,37 @@ sqlite3VdbeAddOp4(v, OP_DropTable, iDb, 0, 0, pTab->zName, 0); /* Reload the table, index and permanent trigger schemas. */ zWhere = sqlite3MPrintf(pParse->db, "tbl_name=%Q", zName); if( !zWhere ) return; - sqlite3VdbeAddOp4(v, OP_ParseSchema, iDb, 0, 0, zWhere, P4_DYNAMIC); + sqlite3VdbeAddParseSchemaOp(v, iDb, zWhere); #ifndef SQLITE_OMIT_TRIGGER /* Now, if the table is not stored in the temp database, reload any temp ** triggers. Don't use IN(...) in case SQLITE_OMIT_SUBQUERY is defined. */ if( (zWhere=whereTempTriggers(pParse, pTab))!=0 ){ - sqlite3VdbeAddOp4(v, OP_ParseSchema, 1, 0, 0, zWhere, P4_DYNAMIC); + sqlite3VdbeAddParseSchemaOp(v, 1, zWhere); } #endif } + +/* +** Parameter zName is the name of a table that is about to be altered +** (either with ALTER TABLE ... RENAME TO or ALTER TABLE ... ADD COLUMN). +** If the table is a system table, this function leaves an error message +** in pParse->zErr (system tables may not be altered) and returns non-zero. +** +** Or, if zName is not a system table, zero is returned. +*/ +static int isSystemTable(Parse *pParse, const char *zName){ + if( sqlite3Strlen30(zName)>6 && 0==sqlite3StrNICmp(zName, "sqlite_", 7) ){ + sqlite3ErrorMsg(pParse, "table %s may not be altered", zName); + return 1; + } + return 0; +} /* ** Generate code to implement the "ALTER TABLE xxx RENAME TO yyy" ** command. 
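**
** (Hypothetical example: "ALTER TABLE t1 RENAME TO t2" rewrites the stored
** CREATE statements in sqlite_master using helpers such as the
** sqlite_rename_table() and sqlite_rename_trigger() functions registered
** above, updates the name and tbl_name columns, and reloads the affected
** schema.  The isSystemTable() guard above rejects attempts such as
** "ALTER TABLE sqlite_master RENAME TO x".)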
*/ @@ -65810,19 +91138,22 @@ Vdbe *v; #ifndef SQLITE_OMIT_TRIGGER char *zWhere = 0; /* Where clause to locate temp triggers */ #endif VTable *pVTab = 0; /* Non-zero if this is a v-tab with an xRename() */ - + int savedDbFlags; /* Saved value of db->flags */ + + savedDbFlags = db->flags; if( NEVER(db->mallocFailed) ) goto exit_rename_table; assert( pSrc->nSrc==1 ); assert( sqlite3BtreeHoldsAllMutexes(pParse->db) ); - pTab = sqlite3LocateTable(pParse, 0, pSrc->a[0].zName, pSrc->a[0].zDatabase); + pTab = sqlite3LocateTableItem(pParse, 0, &pSrc->a[0]); if( !pTab ) goto exit_rename_table; iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema); zDb = db->aDb[iDb].zName; + db->flags |= SQLITE_PreferBuiltin; /* Get a NULL terminated version of the new table name. */ zName = sqlite3NameFromToken(db, pName); if( !zName ) goto exit_rename_table; @@ -65836,18 +91167,15 @@ } /* Make sure it is not a system table being altered, or a reserved name ** that the table is being renamed to. */ - if( sqlite3Strlen30(pTab->zName)>6 - && 0==sqlite3StrNICmp(pTab->zName, "sqlite_", 7) - ){ - sqlite3ErrorMsg(pParse, "table %s may not be altered", pTab->zName); + if( SQLITE_OK!=isSystemTable(pParse, pTab->zName) ){ goto exit_rename_table; } - if( SQLITE_OK!=sqlite3CheckObjectName(pParse, zName) ){ - goto exit_rename_table; + if( SQLITE_OK!=sqlite3CheckObjectName(pParse, zName) ){ goto + exit_rename_table; } #ifndef SQLITE_OMIT_VIEW if( pTab->pSelect ){ sqlite3ErrorMsg(pParse, "view %s may not be altered", pTab->zName); @@ -65872,11 +91200,11 @@ pVTab = 0; } } #endif - /* Begin a transaction and code the VerifyCookie for database iDb. + /* Begin a transaction for database iDb. ** Then modify the schema cookie (since the ALTER TABLE modifies the ** schema). Open a statement transaction if the table is a virtual ** table. */ v = sqlite3GetVdbe(pParse); @@ -65892,11 +91220,11 @@ ** SQLite tables) that are identified by the name of the virtual table. */ #ifndef SQLITE_OMIT_VIRTUALTABLE if( pVTab ){ int i = ++pParse->nMem; - sqlite3VdbeAddOp4(v, OP_String8, 0, i, 0, zName, 0); + sqlite3VdbeLoadString(v, i, zName); sqlite3VdbeAddOp4(v, OP_VRename, i, 0, 0,(const char*)pVTab, P4_VTAB); sqlite3MayAbort(pParse); } #endif @@ -65933,11 +91261,11 @@ "name = CASE " "WHEN type='table' THEN %Q " "WHEN name LIKE 'sqlite_autoindex%%' AND type='index' THEN " "'sqlite_autoindex_' || %Q || substr(name,%d+18) " "ELSE name END " - "WHERE tbl_name=%Q AND " + "WHERE tbl_name=%Q COLLATE nocase AND " "(type='table' OR type='index' OR type='trigger');", zDb, SCHEMA_TABLE(iDb), zName, zName, zName, #ifndef SQLITE_OMIT_TRIGGER zName, #endif @@ -65986,36 +91314,11 @@ reloadTableSchema(pParse, pTab, zName); exit_rename_table: sqlite3SrcListDelete(db, pSrc); sqlite3DbFree(db, zName); -} - - -/* -** Generate code to make sure the file format number is at least minFormat. -** The generated code will increase the file format number if necessary. -*/ -SQLITE_PRIVATE void sqlite3MinimumFileFormat(Parse *pParse, int iDb, int minFormat){ - Vdbe *v; - v = sqlite3GetVdbe(pParse); - /* The VDBE should have been allocated before this routine is called. 
- ** If that allocation failed, we would have quit before reaching this - ** point */ - if( ALWAYS(v) ){ - int r1 = sqlite3GetTempReg(pParse); - int r2 = sqlite3GetTempReg(pParse); - int j1; - sqlite3VdbeAddOp3(v, OP_ReadCookie, iDb, r1, BTREE_FILE_FORMAT); - sqlite3VdbeUsesBtree(v, iDb); - sqlite3VdbeAddOp2(v, OP_Integer, minFormat, r2); - j1 = sqlite3VdbeAddOp3(v, OP_Ge, r2, 0, r1); - sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, BTREE_FILE_FORMAT, r2); - sqlite3VdbeJumpHere(v, j1); - sqlite3ReleaseTempReg(pParse, r1); - sqlite3ReleaseTempReg(pParse, r2); - } + db->flags = savedDbFlags; } /* ** This function is called after an "ALTER TABLE ... ADD" statement ** has been parsed. Argument pColDef contains the text of the new @@ -66032,13 +91335,15 @@ const char *zTab; /* Table name */ char *zCol; /* Null-terminated column definition */ Column *pCol; /* The new column */ Expr *pDflt; /* Default value for the new column */ sqlite3 *db; /* The database connection; */ + Vdbe *v = pParse->pVdbe; /* The prepared statement under construction */ db = pParse->db; if( pParse->nErr || db->mallocFailed ) return; + assert( v!=0 ); pNew = pParse->pNewTable; assert( pNew ); assert( sqlite3BtreeHoldsAllMutexes(db) ); iDb = sqlite3SchemaToIndex(db, pNew->pSchema); @@ -66066,11 +91371,11 @@ /* Check that the new column is not specified as PRIMARY KEY or UNIQUE. ** If there is a NOT NULL constraint, then the default value for the ** column must not be NULL. */ - if( pCol->isPrimKey ){ + if( pCol->colFlags & COLFLAG_PRIMKEY ){ sqlite3ErrorMsg(pParse, "Cannot add a PRIMARY KEY column"); return; } if( pNew->pIndex ){ sqlite3ErrorMsg(pParse, "Cannot add a UNIQUE column"); @@ -66089,13 +91394,16 @@ /* Ensure the default expression is something that sqlite3ValueFromExpr() ** can handle (i.e. not CURRENT_TIME etc.) */ if( pDflt ){ - sqlite3_value *pVal; - if( sqlite3ValueFromExpr(db, pDflt, SQLITE_UTF8, SQLITE_AFF_NONE, &pVal) ){ - db->mallocFailed = 1; + sqlite3_value *pVal = 0; + int rc; + rc = sqlite3ValueFromExpr(db, pDflt, SQLITE_UTF8, SQLITE_AFF_BLOB, &pVal); + assert( rc==SQLITE_OK || rc==SQLITE_NOMEM ); + if( rc!=SQLITE_OK ){ + assert( db->mallocFailed == 1 ); return; } if( !pVal ){ sqlite3ErrorMsg(pParse, "Cannot add a column with non-constant default"); return; @@ -66105,28 +91413,36 @@ /* Modify the CREATE TABLE statement. */ zCol = sqlite3DbStrNDup(db, (char*)pColDef->z, pColDef->n); if( zCol ){ char *zEnd = &zCol[pColDef->n-1]; + int savedDbFlags = db->flags; while( zEnd>zCol && (*zEnd==';' || sqlite3Isspace(*zEnd)) ){ *zEnd-- = '\0'; } + db->flags |= SQLITE_PreferBuiltin; sqlite3NestedParse(pParse, "UPDATE \"%w\".%s SET " "sql = substr(sql,1,%d) || ', ' || %Q || substr(sql,%d) " "WHERE type = 'table' AND name = %Q", zDb, SCHEMA_TABLE(iDb), pNew->addColOffset, zCol, pNew->addColOffset+1, zTab ); sqlite3DbFree(db, zCol); + db->flags = savedDbFlags; } - /* If the default value of the new column is NULL, then set the file + /* If the default value of the new column is NULL, then the file ** format to 2. If the default value of the new column is not NULL, - ** the file format becomes 3. + ** the file format be 3. Back when this feature was first added + ** in 2006, we went to the trouble to upgrade the file format to the + ** minimum support values. But 10-years on, we can assume that all + ** extent versions of SQLite support file-format 4, so we always and + ** unconditionally upgrade to 4. */ - sqlite3MinimumFileFormat(pParse, iDb, pDflt ? 
3 : 2); + sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, BTREE_FILE_FORMAT, + SQLITE_MAX_FILE_FORMAT); /* Reload the schema of the modified table. */ reloadTableSchema(pParse, pTab, pTab->zName); } @@ -66156,11 +91472,11 @@ /* Look up the table being altered. */ assert( pParse->pNewTable==0 ); assert( sqlite3BtreeHoldsAllMutexes(db) ); if( db->mallocFailed ) goto exit_begin_add_column; - pTab = sqlite3LocateTable(pParse, 0, pSrc->a[0].zName, pSrc->a[0].zDatabase); + pTab = sqlite3LocateTableItem(pParse, 0, &pSrc->a[0]); if( !pTab ) goto exit_begin_add_column; #ifndef SQLITE_OMIT_VIRTUALTABLE if( IsVirtual(pTab) ){ sqlite3ErrorMsg(pParse, "virtual tables may not be altered"); @@ -66170,10 +91486,13 @@ /* Make sure this is not an attempt to ALTER a view. */ if( pTab->pSelect ){ sqlite3ErrorMsg(pParse, "Cannot add a column to a view"); goto exit_begin_add_column; + } + if( SQLITE_OK!=isSystemTable(pParse, pTab->zName) ){ + goto exit_begin_add_column; } assert( pTab->addColOffset>0 ); iDb = sqlite3SchemaToIndex(db, pTab->pSchema); @@ -66186,19 +91505,18 @@ */ pNew = (Table*)sqlite3DbMallocZero(db, sizeof(Table)); if( !pNew ) goto exit_begin_add_column; pParse->pNewTable = pNew; pNew->nRef = 1; - pNew->dbMem = pTab->dbMem; pNew->nCol = pTab->nCol; assert( pNew->nCol>0 ); nAlloc = (((pNew->nCol-1)/8)*8)+8; assert( nAlloc>=pNew->nCol && nAlloc%8==0 && nAlloc-pNew->nCol<8 ); pNew->aCol = (Column*)sqlite3DbMallocZero(db, sizeof(Column)*nAlloc); pNew->zName = sqlite3MPrintf(db, "sqlite_altertab_%s", pTab->zName); if( !pNew->aCol || !pNew->zName ){ - db->mallocFailed = 1; + assert( db->mallocFailed ); goto exit_begin_add_column; } memcpy(pNew->aCol, pTab->aCol, sizeof(Column)*pNew->nCol); for(i=0; i<pNew->nCol; i++){ Column *pCol = &pNew->aCol[i]; @@ -66225,11 +91543,11 @@ #endif /* SQLITE_ALTER_TABLE */ /************** End of alter.c ***********************************************/ /************** Begin file analyze.c *****************************************/ /* -** 2005 July 8 +** 2005-07-08 ** ** The author disclaims copyright to this source code. In place of ** a legal notice, here is a blessing: ** ** May you do good and not evil. @@ -66236,323 +91554,1274 @@ ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ************************************************************************* ** This file contains code associated with the ANALYZE command. +** +** The ANALYZE command gather statistics about the content of tables +** and indices. These statistics are made available to the query planner +** to help it make better decisions about how to perform queries. +** +** The following system tables are or have been supported: +** +** CREATE TABLE sqlite_stat1(tbl, idx, stat); +** CREATE TABLE sqlite_stat2(tbl, idx, sampleno, sample); +** CREATE TABLE sqlite_stat3(tbl, idx, nEq, nLt, nDLt, sample); +** CREATE TABLE sqlite_stat4(tbl, idx, nEq, nLt, nDLt, sample); +** +** Additional tables might be added in future releases of SQLite. +** The sqlite_stat2 table is not created or used unless the SQLite version +** is between 3.6.18 and 3.7.8, inclusive, and unless SQLite is compiled +** with SQLITE_ENABLE_STAT2. The sqlite_stat2 table is deprecated. +** The sqlite_stat2 table is superseded by sqlite_stat3, which is only +** created and used by SQLite versions 3.7.9 and later and with +** SQLITE_ENABLE_STAT3 defined. The functionality of sqlite_stat3 +** is a superset of sqlite_stat2. 
The sqlite_stat4 is an enhanced +** version of sqlite_stat3 and is only available when compiled with +** SQLITE_ENABLE_STAT4 and in SQLite versions 3.8.1 and later. It is +** not possible to enable both STAT3 and STAT4 at the same time. If they +** are both enabled, then STAT4 takes precedence. +** +** For most applications, sqlite_stat1 provides all the statistics required +** for the query planner to make good choices. +** +** Format of sqlite_stat1: +** +** There is normally one row per index, with the index identified by the +** name in the idx column. The tbl column is the name of the table to +** which the index belongs. In each such row, the stat column will be +** a string consisting of a list of integers. The first integer in this +** list is the number of rows in the index. (This is the same as the +** number of rows in the table, except for partial indices.) The second +** integer is the average number of rows in the index that have the same +** value in the first column of the index. The third integer is the average +** number of rows in the index that have the same value for the first two +** columns. The N-th integer (for N>1) is the average number of rows in +** the index which have the same value for the first N-1 columns. For +** a K-column index, there will be K+1 integers in the stat column. If +** the index is unique, then the last integer will be 1. +** +** The list of integers in the stat column can optionally be followed +** by the keyword "unordered". The "unordered" keyword, if it is present, +** must be separated from the last integer by a single space. If the +** "unordered" keyword is present, then the query planner assumes that +** the index is unordered and will not use the index for a range query. +** +** If the sqlite_stat1.idx column is NULL, then the sqlite_stat1.stat +** column contains a single integer which is the (estimated) number of +** rows in the table identified by sqlite_stat1.tbl. +** +** Format of sqlite_stat2: +** +** The sqlite_stat2 is only created and is only used if SQLite is compiled +** with SQLITE_ENABLE_STAT2 and if the SQLite version number is between +** 3.6.18 and 3.7.8. The "stat2" table contains additional information +** about the distribution of keys within an index. The index is identified by +** the "idx" column and the "tbl" column is the name of the table to which +** the index belongs. There are usually 10 rows in the sqlite_stat2 +** table for each index. +** +** The sqlite_stat2 entries for an index that have sampleno between 0 and 9 +** inclusive are samples of the left-most key value in the index taken at +** evenly spaced points along the index. Let the number of samples be S +** (10 in the standard build) and let C be the number of rows in the index. +** Then the sampled rows are given by: +** +** rownumber = (i*C*2 + C)/(S*2) +** +** For i between 0 and S-1. Conceptually, the index space is divided into +** S uniform buckets and the samples are the middle row from each bucket. +** +** The format for sqlite_stat2 is recorded here for legacy reference. This +** version of SQLite does not support sqlite_stat2. It neither reads nor +** writes the sqlite_stat2 table. This version of SQLite only supports +** sqlite_stat3. +** +** Format for sqlite_stat3: +** +** The sqlite_stat3 format is a subset of sqlite_stat4. Hence, the +** sqlite_stat4 format will be described first. Further information +** about sqlite_stat3 follows the sqlite_stat4 description. 
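**
** (Before the sqlite_stat4 details, a hypothetical worked example of the
** two formats already described: for a two-column index on a 1000-row
** table whose first column has roughly 50 distinct values and whose two
** columns together are unique, the sqlite_stat1.stat text would be
** "1000 20 1".  And with C=1000 rows and S=10 samples, the legacy
** sqlite_stat2 formula above selects rows 50, 150, 250, ... 950 -- the
** middle row of each of the ten buckets.)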
+** +** Format for sqlite_stat4: +** +** As with sqlite_stat2, the sqlite_stat4 table contains histogram data +** to aid the query planner in choosing good indices based on the values +** that indexed columns are compared against in the WHERE clauses of +** queries. +** +** The sqlite_stat4 table contains multiple entries for each index. +** The idx column names the index and the tbl column is the table of the +** index. If the idx and tbl columns are the same, then the sample is +** of the INTEGER PRIMARY KEY. The sample column is a blob which is the +** binary encoding of a key from the index. The nEq column is a +** list of integers. The first integer is the approximate number +** of entries in the index whose left-most column exactly matches +** the left-most column of the sample. The second integer in nEq +** is the approximate number of entries in the index where the +** first two columns match the first two columns of the sample. +** And so forth. nLt is another list of integers that show the approximate +** number of entries that are strictly less than the sample. The first +** integer in nLt contains the number of entries in the index where the +** left-most column is less than the left-most column of the sample. +** The K-th integer in the nLt entry is the number of index entries +** where the first K columns are less than the first K columns of the +** sample. The nDLt column is like nLt except that it contains the +** number of distinct entries in the index that are less than the +** sample. +** +** There can be an arbitrary number of sqlite_stat4 entries per index. +** The ANALYZE command will typically generate sqlite_stat4 tables +** that contain between 10 and 40 samples which are distributed across +** the key space, though not uniformly, and which include samples with +** large nEq values. +** +** Format for sqlite_stat3 redux: +** +** The sqlite_stat3 table is like sqlite_stat4 except that it only +** looks at the left-most column of the index. The sqlite_stat3.sample +** column contains the actual value of the left-most column instead +** of a blob encoding of the complete index key as is found in +** sqlite_stat4.sample. The nEq, nLt, and nDLt entries of sqlite_stat3 +** all contain just a single integer which is the same as the first +** integer in the equivalent columns in sqlite_stat4. */ #ifndef SQLITE_OMIT_ANALYZE +/* #include "sqliteInt.h" */ + +#if defined(SQLITE_ENABLE_STAT4) +# define IsStat4 1 +# define IsStat3 0 +#elif defined(SQLITE_ENABLE_STAT3) +# define IsStat4 0 +# define IsStat3 1 +#else +# define IsStat4 0 +# define IsStat3 0 +# undef SQLITE_STAT4_SAMPLES +# define SQLITE_STAT4_SAMPLES 1 +#endif +#define IsStat34 (IsStat3+IsStat4) /* 1 for STAT3 or STAT4. 0 otherwise */ /* -** This routine generates code that opens the sqlite_stat1 table for -** writing with cursor iStatCur. If the library was built with the -** SQLITE_ENABLE_STAT2 macro defined, then the sqlite_stat2 table is -** opened for writing using cursor (iStatCur+1) +** This routine generates code that opens the sqlite_statN tables. +** The sqlite_stat1 table is always relevant. sqlite_stat2 is now +** obsolete. sqlite_stat3 and sqlite_stat4 are only opened when +** appropriate compile-time options are provided. ** -** If the sqlite_stat1 tables does not previously exist, it is created. -** Similarly, if the sqlite_stat2 table does not exist and the library -** is compiled with SQLITE_ENABLE_STAT2 defined, it is created. +** If the sqlite_statN tables do not previously exist, it is created. 
** ** Argument zWhere may be a pointer to a buffer containing a table name, ** or it may be a NULL pointer. If it is not NULL, then all entries in -** the sqlite_stat1 and (if applicable) sqlite_stat2 tables associated -** with the named table are deleted. If zWhere==0, then code is generated -** to delete all stat table entries. +** the sqlite_statN tables associated with the named table are deleted. +** If zWhere==0, then code is generated to delete all stat table entries. */ static void openStatTable( Parse *pParse, /* Parsing context */ int iDb, /* The database we are looking in */ int iStatCur, /* Open the sqlite_stat1 table on this cursor */ - const char *zWhere /* Delete entries associated with this table */ + const char *zWhere, /* Delete entries for this table or index */ + const char *zWhereType /* Either "tbl" or "idx" */ ){ - static struct { + static const struct { const char *zName; const char *zCols; } aTable[] = { { "sqlite_stat1", "tbl,idx,stat" }, -#ifdef SQLITE_ENABLE_STAT2 - { "sqlite_stat2", "tbl,idx,sampleno,sample" }, +#if defined(SQLITE_ENABLE_STAT4) + { "sqlite_stat4", "tbl,idx,neq,nlt,ndlt,sample" }, + { "sqlite_stat3", 0 }, +#elif defined(SQLITE_ENABLE_STAT3) + { "sqlite_stat3", "tbl,idx,neq,nlt,ndlt,sample" }, + { "sqlite_stat4", 0 }, +#else + { "sqlite_stat3", 0 }, + { "sqlite_stat4", 0 }, #endif }; - - int aRoot[] = {0, 0}; - u8 aCreateTbl[] = {0, 0}; - int i; sqlite3 *db = pParse->db; Db *pDb; Vdbe *v = sqlite3GetVdbe(pParse); + int aRoot[ArraySize(aTable)]; + u8 aCreateTbl[ArraySize(aTable)]; + if( v==0 ) return; assert( sqlite3BtreeHoldsAllMutexes(db) ); assert( sqlite3VdbeDb(v)==db ); pDb = &db->aDb[iDb]; + /* Create new statistic tables if they do not exist, or clear them + ** if they do already exist. + */ for(i=0; i<ArraySize(aTable); i++){ const char *zTab = aTable[i].zName; Table *pStat; if( (pStat = sqlite3FindTable(db, zTab, pDb->zName))==0 ){ - /* The sqlite_stat[12] table does not exist. Create it. Note that a - ** side-effect of the CREATE TABLE statement is to leave the rootpage - ** of the new table in register pParse->regRoot. This is important - ** because the OpenWrite opcode below will be needing it. */ - sqlite3NestedParse(pParse, - "CREATE TABLE %Q.%s(%s)", pDb->zName, zTab, aTable[i].zCols - ); - aRoot[i] = pParse->regRoot; - aCreateTbl[i] = 1; + if( aTable[i].zCols ){ + /* The sqlite_statN table does not exist. Create it. Note that a + ** side-effect of the CREATE TABLE statement is to leave the rootpage + ** of the new table in register pParse->regRoot. This is important + ** because the OpenWrite opcode below will be needing it. */ + sqlite3NestedParse(pParse, + "CREATE TABLE %Q.%s(%s)", pDb->zName, zTab, aTable[i].zCols + ); + aRoot[i] = pParse->regRoot; + aCreateTbl[i] = OPFLAG_P2ISREG; + } }else{ /* The table already exists. If zWhere is not NULL, delete all entries ** associated with the table zWhere. If zWhere is NULL, delete the ** entire contents of the table. */ aRoot[i] = pStat->tnum; + aCreateTbl[i] = 0; sqlite3TableLock(pParse, iDb, aRoot[i], 1, zTab); if( zWhere ){ sqlite3NestedParse(pParse, - "DELETE FROM %Q.%s WHERE tbl=%Q", pDb->zName, zTab, zWhere + "DELETE FROM %Q.%s WHERE %s=%Q", + pDb->zName, zTab, zWhereType, zWhere ); }else{ - /* The sqlite_stat[12] table already exists. Delete all rows. */ + /* The sqlite_stat[134] table already exists. Delete all rows. */ sqlite3VdbeAddOp2(v, OP_Clear, aRoot[i], iDb); } } } - /* Open the sqlite_stat[12] tables for writing. 
*/ - for(i=0; i<ArraySize(aTable); i++){ - sqlite3VdbeAddOp3(v, OP_OpenWrite, iStatCur+i, aRoot[i], iDb); - sqlite3VdbeChangeP4(v, -1, (char *)3, P4_INT32); + /* Open the sqlite_stat[134] tables for writing. */ + for(i=0; aTable[i].zCols; i++){ + assert( i<ArraySize(aTable) ); + sqlite3VdbeAddOp4Int(v, OP_OpenWrite, iStatCur+i, aRoot[i], iDb, 3); sqlite3VdbeChangeP5(v, aCreateTbl[i]); + VdbeComment((v, aTable[i].zName)); } } + +/* +** Recommended number of samples for sqlite_stat4 +*/ +#ifndef SQLITE_STAT4_SAMPLES +# define SQLITE_STAT4_SAMPLES 24 +#endif + +/* +** Three SQL functions - stat_init(), stat_push(), and stat_get() - +** share an instance of the following structure to hold their state +** information. +*/ +typedef struct Stat4Accum Stat4Accum; +typedef struct Stat4Sample Stat4Sample; +struct Stat4Sample { + tRowcnt *anEq; /* sqlite_stat4.nEq */ + tRowcnt *anDLt; /* sqlite_stat4.nDLt */ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + tRowcnt *anLt; /* sqlite_stat4.nLt */ + union { + i64 iRowid; /* Rowid in main table of the key */ + u8 *aRowid; /* Key for WITHOUT ROWID tables */ + } u; + u32 nRowid; /* Sizeof aRowid[] */ + u8 isPSample; /* True if a periodic sample */ + int iCol; /* If !isPSample, the reason for inclusion */ + u32 iHash; /* Tiebreaker hash */ +#endif +}; +struct Stat4Accum { + tRowcnt nRow; /* Number of rows in the entire table */ + tRowcnt nPSample; /* How often to do a periodic sample */ + int nCol; /* Number of columns in index + pk/rowid */ + int nKeyCol; /* Number of index columns w/o the pk/rowid */ + int mxSample; /* Maximum number of samples to accumulate */ + Stat4Sample current; /* Current row as a Stat4Sample */ + u32 iPrn; /* Pseudo-random number used for sampling */ + Stat4Sample *aBest; /* Array of nCol best samples */ + int iMin; /* Index in a[] of entry with minimum score */ + int nSample; /* Current number of samples */ + int iGet; /* Index of current sample accessed by stat_get() */ + Stat4Sample *a; /* Array of mxSample Stat4Sample objects */ + sqlite3 *db; /* Database connection, for malloc() */ +}; + +/* Reclaim memory used by a Stat4Sample +*/ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 +static void sampleClear(sqlite3 *db, Stat4Sample *p){ + assert( db!=0 ); + if( p->nRowid ){ + sqlite3DbFree(db, p->u.aRowid); + p->nRowid = 0; + } +} +#endif + +/* Initialize the BLOB value of a ROWID +*/ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 +static void sampleSetRowid(sqlite3 *db, Stat4Sample *p, int n, const u8 *pData){ + assert( db!=0 ); + if( p->nRowid ) sqlite3DbFree(db, p->u.aRowid); + p->u.aRowid = sqlite3DbMallocRawNN(db, n); + if( p->u.aRowid ){ + p->nRowid = n; + memcpy(p->u.aRowid, pData, n); + }else{ + p->nRowid = 0; + } +} +#endif + +/* Initialize the INTEGER value of a ROWID. +*/ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 +static void sampleSetRowidInt64(sqlite3 *db, Stat4Sample *p, i64 iRowid){ + assert( db!=0 ); + if( p->nRowid ) sqlite3DbFree(db, p->u.aRowid); + p->nRowid = 0; + p->u.iRowid = iRowid; +} +#endif + + +/* +** Copy the contents of object (*pFrom) into (*pTo). 
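**
** (Note: the anEq[], anLt[] and anDLt[] counters are copied into the
** destination sample's preallocated arrays, and a blob rowid key, if any,
** is duplicated via sampleSetRowid(), so the two samples never share
** heap memory.)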
+*/ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 +static void sampleCopy(Stat4Accum *p, Stat4Sample *pTo, Stat4Sample *pFrom){ + pTo->isPSample = pFrom->isPSample; + pTo->iCol = pFrom->iCol; + pTo->iHash = pFrom->iHash; + memcpy(pTo->anEq, pFrom->anEq, sizeof(tRowcnt)*p->nCol); + memcpy(pTo->anLt, pFrom->anLt, sizeof(tRowcnt)*p->nCol); + memcpy(pTo->anDLt, pFrom->anDLt, sizeof(tRowcnt)*p->nCol); + if( pFrom->nRowid ){ + sampleSetRowid(p->db, pTo, pFrom->nRowid, pFrom->u.aRowid); + }else{ + sampleSetRowidInt64(p->db, pTo, pFrom->u.iRowid); + } +} +#endif + +/* +** Reclaim all memory of a Stat4Accum structure. +*/ +static void stat4Destructor(void *pOld){ + Stat4Accum *p = (Stat4Accum*)pOld; +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + int i; + for(i=0; i<p->nCol; i++) sampleClear(p->db, p->aBest+i); + for(i=0; i<p->mxSample; i++) sampleClear(p->db, p->a+i); + sampleClear(p->db, &p->current); +#endif + sqlite3DbFree(p->db, p); +} + +/* +** Implementation of the stat_init(N,K,C) SQL function. The three parameters +** are: +** N: The number of columns in the index including the rowid/pk (note 1) +** K: The number of columns in the index excluding the rowid/pk. +** C: The number of rows in the index (note 2) +** +** Note 1: In the special case of the covering index that implements a +** WITHOUT ROWID table, N is the number of PRIMARY KEY columns, not the +** total number of columns in the table. +** +** Note 2: C is only used for STAT3 and STAT4. +** +** For indexes on ordinary rowid tables, N==K+1. But for indexes on +** WITHOUT ROWID tables, N=K+P where P is the number of columns in the +** PRIMARY KEY of the table. The covering index that implements the +** original WITHOUT ROWID table as N==K as a special case. +** +** This routine allocates the Stat4Accum object in heap memory. The return +** value is a pointer to the Stat4Accum object. The datatype of the +** return value is BLOB, but it is really just a pointer to the Stat4Accum +** object. +*/ +static void statInit( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + Stat4Accum *p; + int nCol; /* Number of columns in index being sampled */ + int nKeyCol; /* Number of key columns */ + int nColUp; /* nCol rounded up for alignment */ + int n; /* Bytes of space to allocate */ + sqlite3 *db; /* Database connection */ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + int mxSample = SQLITE_STAT4_SAMPLES; +#endif + + /* Decode the three function arguments */ + UNUSED_PARAMETER(argc); + nCol = sqlite3_value_int(argv[0]); + assert( nCol>0 ); + nColUp = sizeof(tRowcnt)<8 ? 
(nCol+1)&~1 : nCol; + nKeyCol = sqlite3_value_int(argv[1]); + assert( nKeyCol<=nCol ); + assert( nKeyCol>0 ); + + /* Allocate the space required for the Stat4Accum object */ + n = sizeof(*p) + + sizeof(tRowcnt)*nColUp /* Stat4Accum.anEq */ + + sizeof(tRowcnt)*nColUp /* Stat4Accum.anDLt */ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + + sizeof(tRowcnt)*nColUp /* Stat4Accum.anLt */ + + sizeof(Stat4Sample)*(nCol+mxSample) /* Stat4Accum.aBest[], a[] */ + + sizeof(tRowcnt)*3*nColUp*(nCol+mxSample) +#endif + ; + db = sqlite3_context_db_handle(context); + p = sqlite3DbMallocZero(db, n); + if( p==0 ){ + sqlite3_result_error_nomem(context); + return; + } + + p->db = db; + p->nRow = 0; + p->nCol = nCol; + p->nKeyCol = nKeyCol; + p->current.anDLt = (tRowcnt*)&p[1]; + p->current.anEq = &p->current.anDLt[nColUp]; + +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + { + u8 *pSpace; /* Allocated space not yet assigned */ + int i; /* Used to iterate through p->aSample[] */ + + p->iGet = -1; + p->mxSample = mxSample; + p->nPSample = (tRowcnt)(sqlite3_value_int64(argv[2])/(mxSample/3+1) + 1); + p->current.anLt = &p->current.anEq[nColUp]; + p->iPrn = 0x689e962d*(u32)nCol ^ 0xd0944565*(u32)sqlite3_value_int(argv[2]); + + /* Set up the Stat4Accum.a[] and aBest[] arrays */ + p->a = (struct Stat4Sample*)&p->current.anLt[nColUp]; + p->aBest = &p->a[mxSample]; + pSpace = (u8*)(&p->a[mxSample+nCol]); + for(i=0; i<(mxSample+nCol); i++){ + p->a[i].anEq = (tRowcnt *)pSpace; pSpace += (sizeof(tRowcnt) * nColUp); + p->a[i].anLt = (tRowcnt *)pSpace; pSpace += (sizeof(tRowcnt) * nColUp); + p->a[i].anDLt = (tRowcnt *)pSpace; pSpace += (sizeof(tRowcnt) * nColUp); + } + assert( (pSpace - (u8*)p)==n ); + + for(i=0; i<nCol; i++){ + p->aBest[i].iCol = i; + } + } +#endif + + /* Return a pointer to the allocated object to the caller. Note that + ** only the pointer (the 2nd parameter) matters. The size of the object + ** (given by the 3rd parameter) is never used and can be any positive + ** value. */ + sqlite3_result_blob(context, p, sizeof(*p), stat4Destructor); +} +static const FuncDef statInitFuncdef = { + 2+IsStat34, /* nArg */ + SQLITE_UTF8, /* funcFlags */ + 0, /* pUserData */ + 0, /* pNext */ + statInit, /* xSFunc */ + 0, /* xFinalize */ + "stat_init", /* zName */ + 0, /* pHash */ + 0 /* pDestructor */ +}; + +#ifdef SQLITE_ENABLE_STAT4 +/* +** pNew and pOld are both candidate non-periodic samples selected for +** the same column (pNew->iCol==pOld->iCol). Ignoring this column and +** considering only any trailing columns and the sample hash value, this +** function returns true if sample pNew is to be preferred over pOld. +** In other words, if we assume that the cardinalities of the selected +** column for pNew and pOld are equal, is pNew to be preferred over pOld. +** +** This function assumes that for each argument sample, the contents of +** the anEq[] array from pSample->anEq[pSample->iCol+1] onwards are valid. +*/ +static int sampleIsBetterPost( + Stat4Accum *pAccum, + Stat4Sample *pNew, + Stat4Sample *pOld +){ + int nCol = pAccum->nCol; + int i; + assert( pNew->iCol==pOld->iCol ); + for(i=pNew->iCol+1; i<nCol; i++){ + if( pNew->anEq[i]>pOld->anEq[i] ) return 1; + if( pNew->anEq[i]<pOld->anEq[i] ) return 0; + } + if( pNew->iHash>pOld->iHash ) return 1; + return 0; +} +#endif + +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 +/* +** Return true if pNew is to be preferred over pOld. +** +** This function assumes that for each argument sample, the contents of +** the anEq[] array from pSample->anEq[pSample->iCol] onwards are valid. 
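**
** (Summary of the logic below, for orientation: the candidate with the
** larger anEq[iCol] always wins.  On a tie, a STAT4 build prefers the
** sample with the smaller iCol and then falls back to
** sampleIsBetterPost(), which compares the trailing anEq[] values and
** finally the pseudo-random iHash; a STAT3 build breaks ties on iHash
** alone.)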
+*/ +static int sampleIsBetter( + Stat4Accum *pAccum, + Stat4Sample *pNew, + Stat4Sample *pOld +){ + tRowcnt nEqNew = pNew->anEq[pNew->iCol]; + tRowcnt nEqOld = pOld->anEq[pOld->iCol]; + + assert( pOld->isPSample==0 && pNew->isPSample==0 ); + assert( IsStat4 || (pNew->iCol==0 && pOld->iCol==0) ); + + if( (nEqNew>nEqOld) ) return 1; +#ifdef SQLITE_ENABLE_STAT4 + if( nEqNew==nEqOld ){ + if( pNew->iCol<pOld->iCol ) return 1; + return (pNew->iCol==pOld->iCol && sampleIsBetterPost(pAccum, pNew, pOld)); + } + return 0; +#else + return (nEqNew==nEqOld && pNew->iHash>pOld->iHash); +#endif +} + +/* +** Copy the contents of sample *pNew into the p->a[] array. If necessary, +** remove the least desirable sample from p->a[] to make room. +*/ +static void sampleInsert(Stat4Accum *p, Stat4Sample *pNew, int nEqZero){ + Stat4Sample *pSample = 0; + int i; + + assert( IsStat4 || nEqZero==0 ); + +#ifdef SQLITE_ENABLE_STAT4 + if( pNew->isPSample==0 ){ + Stat4Sample *pUpgrade = 0; + assert( pNew->anEq[pNew->iCol]>0 ); + + /* This sample is being added because the prefix that ends in column + ** iCol occurs many times in the table. However, if we have already + ** added a sample that shares this prefix, there is no need to add + ** this one. Instead, upgrade the priority of the highest priority + ** existing sample that shares this prefix. */ + for(i=p->nSample-1; i>=0; i--){ + Stat4Sample *pOld = &p->a[i]; + if( pOld->anEq[pNew->iCol]==0 ){ + if( pOld->isPSample ) return; + assert( pOld->iCol>pNew->iCol ); + assert( sampleIsBetter(p, pNew, pOld) ); + if( pUpgrade==0 || sampleIsBetter(p, pOld, pUpgrade) ){ + pUpgrade = pOld; + } + } + } + if( pUpgrade ){ + pUpgrade->iCol = pNew->iCol; + pUpgrade->anEq[pUpgrade->iCol] = pNew->anEq[pUpgrade->iCol]; + goto find_new_min; + } + } +#endif + + /* If necessary, remove sample iMin to make room for the new sample. */ + if( p->nSample>=p->mxSample ){ + Stat4Sample *pMin = &p->a[p->iMin]; + tRowcnt *anEq = pMin->anEq; + tRowcnt *anLt = pMin->anLt; + tRowcnt *anDLt = pMin->anDLt; + sampleClear(p->db, pMin); + memmove(pMin, &pMin[1], sizeof(p->a[0])*(p->nSample-p->iMin-1)); + pSample = &p->a[p->nSample-1]; + pSample->nRowid = 0; + pSample->anEq = anEq; + pSample->anDLt = anDLt; + pSample->anLt = anLt; + p->nSample = p->mxSample-1; + } + + /* The "rows less-than" for the rowid column must be greater than that + ** for the last sample in the p->a[] array. Otherwise, the samples would + ** be out of order. */ +#ifdef SQLITE_ENABLE_STAT4 + assert( p->nSample==0 + || pNew->anLt[p->nCol-1] > p->a[p->nSample-1].anLt[p->nCol-1] ); +#endif + + /* Insert the new sample */ + pSample = &p->a[p->nSample]; + sampleCopy(p, pSample, pNew); + p->nSample++; + + /* Zero the first nEqZero entries in the anEq[] array. */ + memset(pSample->anEq, 0, sizeof(tRowcnt)*nEqZero); + +#ifdef SQLITE_ENABLE_STAT4 + find_new_min: +#endif + if( p->nSample>=p->mxSample ){ + int iMin = -1; + for(i=0; i<p->mxSample; i++){ + if( p->a[i].isPSample ) continue; + if( iMin<0 || sampleIsBetter(p, &p->a[iMin], &p->a[i]) ){ + iMin = i; + } + } + assert( iMin>=0 ); + p->iMin = iMin; + } +} +#endif /* SQLITE_ENABLE_STAT3_OR_STAT4 */ + +/* +** Field iChng of the index being scanned has changed. So at this point +** p->current contains a sample that reflects the previous row of the +** index. The value of anEq[iChng] and subsequent anEq[] elements are +** correct at this point. 
+*/ +static void samplePushPrevious(Stat4Accum *p, int iChng){ +#ifdef SQLITE_ENABLE_STAT4 + int i; + + /* Check if any samples from the aBest[] array should be pushed + ** into IndexSample.a[] at this point. */ + for(i=(p->nCol-2); i>=iChng; i--){ + Stat4Sample *pBest = &p->aBest[i]; + pBest->anEq[i] = p->current.anEq[i]; + if( p->nSample<p->mxSample || sampleIsBetter(p, pBest, &p->a[p->iMin]) ){ + sampleInsert(p, pBest, i); + } + } + + /* Update the anEq[] fields of any samples already collected. */ + for(i=p->nSample-1; i>=0; i--){ + int j; + for(j=iChng; j<p->nCol; j++){ + if( p->a[i].anEq[j]==0 ) p->a[i].anEq[j] = p->current.anEq[j]; + } + } +#endif + +#if defined(SQLITE_ENABLE_STAT3) && !defined(SQLITE_ENABLE_STAT4) + if( iChng==0 ){ + tRowcnt nLt = p->current.anLt[0]; + tRowcnt nEq = p->current.anEq[0]; + + /* Check if this is to be a periodic sample. If so, add it. */ + if( (nLt/p->nPSample)!=(nLt+nEq)/p->nPSample ){ + p->current.isPSample = 1; + sampleInsert(p, &p->current, 0); + p->current.isPSample = 0; + }else + + /* Or if it is a non-periodic sample. Add it in this case too. */ + if( p->nSample<p->mxSample + || sampleIsBetter(p, &p->current, &p->a[p->iMin]) + ){ + sampleInsert(p, &p->current, 0); + } + } +#endif + +#ifndef SQLITE_ENABLE_STAT3_OR_STAT4 + UNUSED_PARAMETER( p ); + UNUSED_PARAMETER( iChng ); +#endif +} + +/* +** Implementation of the stat_push SQL function: stat_push(P,C,R) +** Arguments: +** +** P Pointer to the Stat4Accum object created by stat_init() +** C Index of left-most column to differ from previous row +** R Rowid for the current row. Might be a key record for +** WITHOUT ROWID tables. +** +** This SQL function always returns NULL. It's purpose it to accumulate +** statistical data and/or samples in the Stat4Accum object about the +** index being analyzed. The stat_get() SQL function will later be used to +** extract relevant information for constructing the sqlite_statN tables. +** +** The R parameter is only used for STAT3 and STAT4 +*/ +static void statPush( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + int i; + + /* The three function arguments */ + Stat4Accum *p = (Stat4Accum*)sqlite3_value_blob(argv[0]); + int iChng = sqlite3_value_int(argv[1]); + + UNUSED_PARAMETER( argc ); + UNUSED_PARAMETER( context ); + assert( p->nCol>0 ); + assert( iChng<p->nCol ); + + if( p->nRow==0 ){ + /* This is the first call to this function. Do initialization. */ + for(i=0; i<p->nCol; i++) p->current.anEq[i] = 1; + }else{ + /* Second and subsequent calls get processed here */ + samplePushPrevious(p, iChng); + + /* Update anDLt[], anLt[] and anEq[] to reflect the values that apply + ** to the current row of the index. */ + for(i=0; i<iChng; i++){ + p->current.anEq[i]++; + } + for(i=iChng; i<p->nCol; i++){ + p->current.anDLt[i]++; +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + p->current.anLt[i] += p->current.anEq[i]; +#endif + p->current.anEq[i] = 1; + } + } + p->nRow++; +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + if( sqlite3_value_type(argv[2])==SQLITE_INTEGER ){ + sampleSetRowidInt64(p->db, &p->current, sqlite3_value_int64(argv[2])); + }else{ + sampleSetRowid(p->db, &p->current, sqlite3_value_bytes(argv[2]), + sqlite3_value_blob(argv[2])); + } + p->current.iHash = p->iPrn = p->iPrn*1103515245 + 12345; +#endif + +#ifdef SQLITE_ENABLE_STAT4 + { + tRowcnt nLt = p->current.anLt[p->nCol-1]; + + /* Check if this is to be a periodic sample. If so, add it. 
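    **
    ** (With invented numbers: if nPSample is 100, the test below is true
    ** exactly when nLt+1 reaches a multiple of 100, so roughly one
    ** periodic sample is taken for every 100 index rows scanned.)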
*/ + if( (nLt/p->nPSample)!=(nLt+1)/p->nPSample ){ + p->current.isPSample = 1; + p->current.iCol = 0; + sampleInsert(p, &p->current, p->nCol-1); + p->current.isPSample = 0; + } + + /* Update the aBest[] array. */ + for(i=0; i<(p->nCol-1); i++){ + p->current.iCol = i; + if( i>=iChng || sampleIsBetterPost(p, &p->current, &p->aBest[i]) ){ + sampleCopy(p, &p->aBest[i], &p->current); + } + } + } +#endif +} +static const FuncDef statPushFuncdef = { + 2+IsStat34, /* nArg */ + SQLITE_UTF8, /* funcFlags */ + 0, /* pUserData */ + 0, /* pNext */ + statPush, /* xSFunc */ + 0, /* xFinalize */ + "stat_push", /* zName */ + 0, /* pHash */ + 0 /* pDestructor */ +}; + +#define STAT_GET_STAT1 0 /* "stat" column of stat1 table */ +#define STAT_GET_ROWID 1 /* "rowid" column of stat[34] entry */ +#define STAT_GET_NEQ 2 /* "neq" column of stat[34] entry */ +#define STAT_GET_NLT 3 /* "nlt" column of stat[34] entry */ +#define STAT_GET_NDLT 4 /* "ndlt" column of stat[34] entry */ + +/* +** Implementation of the stat_get(P,J) SQL function. This routine is +** used to query statistical information that has been gathered into +** the Stat4Accum object by prior calls to stat_push(). The P parameter +** has type BLOB but it is really just a pointer to the Stat4Accum object. +** The content to returned is determined by the parameter J +** which is one of the STAT_GET_xxxx values defined above. +** +** If neither STAT3 nor STAT4 are enabled, then J is always +** STAT_GET_STAT1 and is hence omitted and this routine becomes +** a one-parameter function, stat_get(P), that always returns the +** stat1 table entry information. +*/ +static void statGet( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + Stat4Accum *p = (Stat4Accum*)sqlite3_value_blob(argv[0]); +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + /* STAT3 and STAT4 have a parameter on this routine. */ + int eCall = sqlite3_value_int(argv[1]); + assert( argc==2 ); + assert( eCall==STAT_GET_STAT1 || eCall==STAT_GET_NEQ + || eCall==STAT_GET_ROWID || eCall==STAT_GET_NLT + || eCall==STAT_GET_NDLT + ); + if( eCall==STAT_GET_STAT1 ) +#else + assert( argc==1 ); +#endif + { + /* Return the value to store in the "stat" column of the sqlite_stat1 + ** table for this index. + ** + ** The value is a string composed of a list of integers describing + ** the index. The first integer in the list is the total number of + ** entries in the index. There is one additional integer in the list + ** for each indexed column. This additional integer is an estimate of + ** the number of rows matched by a stabbing query on the index using + ** a key with the corresponding number of fields. In other words, + ** if the index is on columns (a,b) and the sqlite_stat1 value is + ** "100 10 2", then SQLite estimates that: + ** + ** * the index contains 100 rows, + ** * "WHERE a=?" matches 10 rows, and + ** * "WHERE a=? AND b=?" matches 2 rows. 
+ ** + ** If D is the count of distinct values and K is the total number of + ** rows, then each estimate is computed as: + ** + ** I = (K+D-1)/D + */ + char *z; + int i; + + char *zRet = sqlite3MallocZero( (p->nKeyCol+1)*25 ); + if( zRet==0 ){ + sqlite3_result_error_nomem(context); + return; + } + + sqlite3_snprintf(24, zRet, "%llu", (u64)p->nRow); + z = zRet + sqlite3Strlen30(zRet); + for(i=0; i<p->nKeyCol; i++){ + u64 nDistinct = p->current.anDLt[i] + 1; + u64 iVal = (p->nRow + nDistinct - 1) / nDistinct; + sqlite3_snprintf(24, z, " %llu", iVal); + z += sqlite3Strlen30(z); + assert( p->current.anEq[i] ); + } + assert( z[0]=='\0' && z>zRet ); + + sqlite3_result_text(context, zRet, -1, sqlite3_free); + } +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + else if( eCall==STAT_GET_ROWID ){ + if( p->iGet<0 ){ + samplePushPrevious(p, 0); + p->iGet = 0; + } + if( p->iGet<p->nSample ){ + Stat4Sample *pS = p->a + p->iGet; + if( pS->nRowid==0 ){ + sqlite3_result_int64(context, pS->u.iRowid); + }else{ + sqlite3_result_blob(context, pS->u.aRowid, pS->nRowid, + SQLITE_TRANSIENT); + } + } + }else{ + tRowcnt *aCnt = 0; + + assert( p->iGet<p->nSample ); + switch( eCall ){ + case STAT_GET_NEQ: aCnt = p->a[p->iGet].anEq; break; + case STAT_GET_NLT: aCnt = p->a[p->iGet].anLt; break; + default: { + aCnt = p->a[p->iGet].anDLt; + p->iGet++; + break; + } + } + + if( IsStat3 ){ + sqlite3_result_int64(context, (i64)aCnt[0]); + }else{ + char *zRet = sqlite3MallocZero(p->nCol * 25); + if( zRet==0 ){ + sqlite3_result_error_nomem(context); + }else{ + int i; + char *z = zRet; + for(i=0; i<p->nCol; i++){ + sqlite3_snprintf(24, z, "%llu ", (u64)aCnt[i]); + z += sqlite3Strlen30(z); + } + assert( z[0]=='\0' && z>zRet ); + z[-1] = '\0'; + sqlite3_result_text(context, zRet, -1, sqlite3_free); + } + } + } +#endif /* SQLITE_ENABLE_STAT3_OR_STAT4 */ +#ifndef SQLITE_DEBUG + UNUSED_PARAMETER( argc ); +#endif +} +static const FuncDef statGetFuncdef = { + 1+IsStat34, /* nArg */ + SQLITE_UTF8, /* funcFlags */ + 0, /* pUserData */ + 0, /* pNext */ + statGet, /* xSFunc */ + 0, /* xFinalize */ + "stat_get", /* zName */ + 0, /* pHash */ + 0 /* pDestructor */ +}; + +static void callStatGet(Vdbe *v, int regStat4, int iParam, int regOut){ + assert( regOut!=regStat4 && regOut!=regStat4+1 ); +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + sqlite3VdbeAddOp2(v, OP_Integer, iParam, regStat4+1); +#elif SQLITE_DEBUG + assert( iParam==STAT_GET_STAT1 ); +#else + UNUSED_PARAMETER( iParam ); +#endif + sqlite3VdbeAddOp4(v, OP_Function0, 0, regStat4, regOut, + (char*)&statGetFuncdef, P4_FUNCDEF); + sqlite3VdbeChangeP5(v, 1 + IsStat34); +} /* ** Generate code to do an analysis of all indices associated with ** a single table. 
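**
** (Rough sketch of the generated program for a hypothetical table with a
** single index: open the table and the index, call stat_init(), scan the
** index calling stat_push() once per row, then call stat_get() to obtain
** the "stat" text and insert one sqlite_stat1 row; when STAT3/STAT4 is
** enabled, a second loop over stat_get() also writes the sampled
** sqlite_stat3/sqlite_stat4 rows.)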
*/ static void analyzeOneTable( Parse *pParse, /* Parser context */ Table *pTab, /* Table whose indices are to be analyzed */ + Index *pOnlyIdx, /* If not NULL, only analyze this one index */ int iStatCur, /* Index of VdbeCursor that writes the sqlite_stat1 table */ - int iMem /* Available memory locations begin here */ + int iMem, /* Available memory locations begin here */ + int iTab /* Next available cursor */ ){ sqlite3 *db = pParse->db; /* Database handle */ Index *pIdx; /* An index to being analyzed */ int iIdxCur; /* Cursor open on index being analyzed */ + int iTabCur; /* Table cursor */ Vdbe *v; /* The virtual machine being built up */ int i; /* Loop counter */ - int topOfLoop; /* The top of the loop */ - int endOfLoop; /* The end of the loop */ - int addr; /* The address of an instruction */ + int jZeroRows = -1; /* Jump from here if number of rows is zero */ int iDb; /* Index of database containing pTab */ + u8 needTableCnt = 1; /* True to count the table */ + int regNewRowid = iMem++; /* Rowid for the inserted record */ + int regStat4 = iMem++; /* Register to hold Stat4Accum object */ + int regChng = iMem++; /* Index of changed index field */ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + int regRowid = iMem++; /* Rowid argument passed to stat_push() */ +#endif + int regTemp = iMem++; /* Temporary use register */ int regTabname = iMem++; /* Register containing table name */ int regIdxname = iMem++; /* Register containing index name */ - int regSampleno = iMem++; /* Register containing next sample number */ - int regCol = iMem++; /* Content of a column analyzed table */ - int regRec = iMem++; /* Register holding completed record */ - int regTemp = iMem++; /* Temporary use register */ - int regRowid = iMem++; /* Rowid for the inserted record */ - -#ifdef SQLITE_ENABLE_STAT2 - int regTemp2 = iMem++; /* Temporary use register */ - int regSamplerecno = iMem++; /* Index of next sample to record */ - int regRecno = iMem++; /* Current sample index */ - int regLast = iMem++; /* Index of last sample to record */ - int regFirst = iMem++; /* Index of first sample to record */ -#endif - + int regStat1 = iMem++; /* Value for the stat column of sqlite_stat1 */ + int regPrev = iMem; /* MUST BE LAST (see below) */ + + pParse->nMem = MAX(pParse->nMem, iMem); v = sqlite3GetVdbe(pParse); - if( v==0 || NEVER(pTab==0) || pTab->pIndex==0 ){ - /* Do no analysis for tables that have no indices */ + if( v==0 || NEVER(pTab==0) ){ + return; + } + if( pTab->tnum==0 ){ + /* Do not gather statistics on views or virtual tables */ + return; + } + if( sqlite3_strlike("sqlite_%", pTab->zName, 0)==0 ){ + /* Do not gather statistics on system tables */ return; } assert( sqlite3BtreeHoldsAllMutexes(db) ); iDb = sqlite3SchemaToIndex(db, pTab->pSchema); assert( iDb>=0 ); + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); #ifndef SQLITE_OMIT_AUTHORIZATION if( sqlite3AuthCheck(pParse, SQLITE_ANALYZE, pTab->zName, 0, db->aDb[iDb].zName ) ){ return; } #endif - /* Establish a read-lock on the table at the shared-cache level. */ + /* Establish a read-lock on the table at the shared-cache level. + ** Open a read-only cursor on the table. Also allocate a cursor number + ** to use for scanning indexes (iIdxCur). No index cursor is opened at + ** this time though. 
*/ sqlite3TableLock(pParse, iDb, pTab->tnum, 0, pTab->zName); + iTabCur = iTab++; + iIdxCur = iTab++; + pParse->nTab = MAX(pParse->nTab, iTab); + sqlite3OpenTable(pParse, iTabCur, iDb, pTab, OP_OpenRead); + sqlite3VdbeLoadString(v, regTabname, pTab->zName); - iIdxCur = pParse->nTab++; for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ - int nCol = pIdx->nColumn; - KeyInfo *pKey = sqlite3IndexKeyinfo(pParse, pIdx); + int nCol; /* Number of columns in pIdx. "N" */ + int addrRewind; /* Address of "OP_Rewind iIdxCur" */ + int addrNextRow; /* Address of "next_row:" */ + const char *zIdxName; /* Name of the index */ + int nColTest; /* Number of columns to test for changes */ - if( iMem+1+(nCol*2)>pParse->nMem ){ - pParse->nMem = iMem+1+(nCol*2); + if( pOnlyIdx && pOnlyIdx!=pIdx ) continue; + if( pIdx->pPartIdxWhere==0 ) needTableCnt = 0; + if( !HasRowid(pTab) && IsPrimaryKeyIndex(pIdx) ){ + nCol = pIdx->nKeyCol; + zIdxName = pTab->zName; + nColTest = nCol - 1; + }else{ + nCol = pIdx->nColumn; + zIdxName = pIdx->zName; + nColTest = pIdx->uniqNotNull ? pIdx->nKeyCol-1 : nCol-1; } - /* Open a cursor to the index to be analyzed. */ + /* Populate the register containing the index name. */ + sqlite3VdbeLoadString(v, regIdxname, zIdxName); + VdbeComment((v, "Analysis for %s.%s", pTab->zName, zIdxName)); + + /* + ** Pseudo-code for loop that calls stat_push(): + ** + ** Rewind csr + ** if eof(csr) goto end_of_scan; + ** regChng = 0 + ** goto chng_addr_0; + ** + ** next_row: + ** regChng = 0 + ** if( idx(0) != regPrev(0) ) goto chng_addr_0 + ** regChng = 1 + ** if( idx(1) != regPrev(1) ) goto chng_addr_1 + ** ... + ** regChng = N + ** goto chng_addr_N + ** + ** chng_addr_0: + ** regPrev(0) = idx(0) + ** chng_addr_1: + ** regPrev(1) = idx(1) + ** ... + ** + ** endDistinctTest: + ** regRowid = idx(rowid) + ** stat_push(P, regChng, regRowid) + ** Next csr + ** if !eof(csr) goto next_row; + ** + ** end_of_scan: + */ + + /* Make sure there are enough memory cells allocated to accommodate + ** the regPrev array and a trailing rowid (the rowid slot is required + ** when building a record to insert into the sample column of + ** the sqlite_stat4 table. */ + pParse->nMem = MAX(pParse->nMem, regPrev+nColTest); + + /* Open a read-only cursor on the index being analyzed. */ assert( iDb==sqlite3SchemaToIndex(db, pIdx->pSchema) ); - sqlite3VdbeAddOp4(v, OP_OpenRead, iIdxCur, pIdx->tnum, iDb, - (char *)pKey, P4_KEYINFO_HANDOFF); + sqlite3VdbeAddOp3(v, OP_OpenRead, iIdxCur, pIdx->tnum, iDb); + sqlite3VdbeSetP4KeyInfo(pParse, pIdx); VdbeComment((v, "%s", pIdx->zName)); - /* Populate the registers containing the table and index names. */ - if( pTab->pIndex==pIdx ){ - sqlite3VdbeAddOp4(v, OP_String8, 0, regTabname, 0, pTab->zName, 0); - } - sqlite3VdbeAddOp4(v, OP_String8, 0, regIdxname, 0, pIdx->zName, 0); - -#ifdef SQLITE_ENABLE_STAT2 - - /* If this iteration of the loop is generating code to analyze the - ** first index in the pTab->pIndex list, then register regLast has - ** not been populated. In this case populate it now. 
*/ - if( pTab->pIndex==pIdx ){ - sqlite3VdbeAddOp2(v, OP_Integer, SQLITE_INDEX_SAMPLES, regSamplerecno); - sqlite3VdbeAddOp2(v, OP_Integer, SQLITE_INDEX_SAMPLES*2-1, regTemp); - sqlite3VdbeAddOp2(v, OP_Integer, SQLITE_INDEX_SAMPLES*2, regTemp2); - - sqlite3VdbeAddOp2(v, OP_Count, iIdxCur, regLast); - sqlite3VdbeAddOp2(v, OP_Null, 0, regFirst); - addr = sqlite3VdbeAddOp3(v, OP_Lt, regSamplerecno, 0, regLast); - sqlite3VdbeAddOp3(v, OP_Divide, regTemp2, regLast, regFirst); - sqlite3VdbeAddOp3(v, OP_Multiply, regLast, regTemp, regLast); - sqlite3VdbeAddOp2(v, OP_AddImm, regLast, SQLITE_INDEX_SAMPLES*2-2); - sqlite3VdbeAddOp3(v, OP_Divide, regTemp2, regLast, regLast); - sqlite3VdbeJumpHere(v, addr); - } - - /* Zero the regSampleno and regRecno registers. */ - sqlite3VdbeAddOp2(v, OP_Integer, 0, regSampleno); - sqlite3VdbeAddOp2(v, OP_Integer, 0, regRecno); - sqlite3VdbeAddOp2(v, OP_Copy, regFirst, regSamplerecno); -#endif - - /* The block of memory cells initialized here is used as follows. - ** - ** iMem: - ** The total number of rows in the table. - ** - ** iMem+1 .. iMem+nCol: - ** Number of distinct entries in index considering the - ** left-most N columns only, where N is between 1 and nCol, - ** inclusive. - ** - ** iMem+nCol+1 .. Mem+2*nCol: - ** Previous value of indexed columns, from left to right. - ** - ** Cells iMem through iMem+nCol are initialized to 0. The others are - ** initialized to contain an SQL NULL. - */ - for(i=0; i<=nCol; i++){ - sqlite3VdbeAddOp2(v, OP_Integer, 0, iMem+i); - } - for(i=0; i<nCol; i++){ - sqlite3VdbeAddOp2(v, OP_Null, 0, iMem+nCol+i+1); - } - - /* Start the analysis loop. This loop runs through all the entries in - ** the index b-tree. */ - endOfLoop = sqlite3VdbeMakeLabel(v); - sqlite3VdbeAddOp2(v, OP_Rewind, iIdxCur, endOfLoop); - topOfLoop = sqlite3VdbeCurrentAddr(v); - sqlite3VdbeAddOp2(v, OP_AddImm, iMem, 1); - - for(i=0; i<nCol; i++){ - sqlite3VdbeAddOp3(v, OP_Column, iIdxCur, i, regCol); -#ifdef SQLITE_ENABLE_STAT2 - if( i==0 ){ - /* Check if the record that cursor iIdxCur points to contains a - ** value that should be stored in the sqlite_stat2 table. If so, - ** store it. */ - int ne = sqlite3VdbeAddOp3(v, OP_Ne, regRecno, 0, regSamplerecno); - assert( regTabname+1==regIdxname - && regTabname+2==regSampleno - && regTabname+3==regCol - ); - sqlite3VdbeChangeP5(v, SQLITE_JUMPIFNULL); - sqlite3VdbeAddOp4(v, OP_MakeRecord, regTabname, 4, regRec, "aaab", 0); - sqlite3VdbeAddOp2(v, OP_NewRowid, iStatCur+1, regRowid); - sqlite3VdbeAddOp3(v, OP_Insert, iStatCur+1, regRec, regRowid); - - /* Calculate new values for regSamplerecno and regSampleno. 
- ** - ** sampleno = sampleno + 1 - ** samplerecno = samplerecno+(remaining records)/(remaining samples) - */ - sqlite3VdbeAddOp2(v, OP_AddImm, regSampleno, 1); - sqlite3VdbeAddOp3(v, OP_Subtract, regRecno, regLast, regTemp); - sqlite3VdbeAddOp2(v, OP_AddImm, regTemp, -1); - sqlite3VdbeAddOp2(v, OP_Integer, SQLITE_INDEX_SAMPLES, regTemp2); - sqlite3VdbeAddOp3(v, OP_Subtract, regSampleno, regTemp2, regTemp2); - sqlite3VdbeAddOp3(v, OP_Divide, regTemp2, regTemp, regTemp); - sqlite3VdbeAddOp3(v, OP_Add, regSamplerecno, regTemp, regSamplerecno); - - sqlite3VdbeJumpHere(v, ne); - sqlite3VdbeAddOp2(v, OP_AddImm, regRecno, 1); - } -#endif - - sqlite3VdbeAddOp3(v, OP_Ne, regCol, 0, iMem+nCol+i+1); - /**** TODO: add collating sequence *****/ - sqlite3VdbeChangeP5(v, SQLITE_JUMPIFNULL); - } - if( db->mallocFailed ){ - /* If a malloc failure has occurred, then the result of the expression - ** passed as the second argument to the call to sqlite3VdbeJumpHere() - ** below may be negative. Which causes an assert() to fail (or an - ** out-of-bounds write if SQLITE_DEBUG is not defined). */ - return; - } - sqlite3VdbeAddOp2(v, OP_Goto, 0, endOfLoop); - for(i=0; i<nCol; i++){ - sqlite3VdbeJumpHere(v, sqlite3VdbeCurrentAddr(v)-(nCol*2)); - sqlite3VdbeAddOp2(v, OP_AddImm, iMem+i+1, 1); - sqlite3VdbeAddOp3(v, OP_Column, iIdxCur, i, iMem+nCol+i+1); - } - - /* End of the analysis loop. */ - sqlite3VdbeResolveLabel(v, endOfLoop); - sqlite3VdbeAddOp2(v, OP_Next, iIdxCur, topOfLoop); - sqlite3VdbeAddOp1(v, OP_Close, iIdxCur); - - /* Store the results in sqlite_stat1. - ** - ** The result is a single row of the sqlite_stat1 table. The first - ** two columns are the names of the table and index. The third column - ** is a string composed of a list of integer statistics about the - ** index. The first integer in the list is the total number of entries - ** in the index. There is one additional integer in the list for each - ** column of the table. This additional integer is a guess of how many - ** rows of the table the index will select. If D is the count of distinct - ** values and K is the total number of rows, then the integer is computed - ** as: - ** - ** I = (K+D-1)/D - ** - ** If K==0 then no entry is made into the sqlite_stat1 table. - ** If K>0 then it is always the case the D>0 so division by zero - ** is never possible. - */ - addr = sqlite3VdbeAddOp1(v, OP_IfNot, iMem); - sqlite3VdbeAddOp2(v, OP_SCopy, iMem, regSampleno); - for(i=0; i<nCol; i++){ - sqlite3VdbeAddOp4(v, OP_String8, 0, regTemp, 0, " ", 0); - sqlite3VdbeAddOp3(v, OP_Concat, regTemp, regSampleno, regSampleno); - sqlite3VdbeAddOp3(v, OP_Add, iMem, iMem+i+1, regTemp); - sqlite3VdbeAddOp2(v, OP_AddImm, regTemp, -1); - sqlite3VdbeAddOp3(v, OP_Divide, iMem+i+1, regTemp, regTemp); - sqlite3VdbeAddOp1(v, OP_ToInt, regTemp); - sqlite3VdbeAddOp3(v, OP_Concat, regTemp, regSampleno, regSampleno); - } - sqlite3VdbeAddOp4(v, OP_MakeRecord, regTabname, 3, regRec, "aaa", 0); - sqlite3VdbeAddOp2(v, OP_NewRowid, iStatCur, regRowid); - sqlite3VdbeAddOp3(v, OP_Insert, iStatCur, regRec, regRowid); + /* Invoke the stat_init() function. 
The arguments are: + ** + ** (1) the number of columns in the index including the rowid + ** (or for a WITHOUT ROWID table, the number of PK columns), + ** (2) the number of columns in the key without the rowid/pk + ** (3) the number of rows in the index, + ** + ** + ** The third argument is only used for STAT3 and STAT4 + */ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + sqlite3VdbeAddOp2(v, OP_Count, iIdxCur, regStat4+3); +#endif + sqlite3VdbeAddOp2(v, OP_Integer, nCol, regStat4+1); + sqlite3VdbeAddOp2(v, OP_Integer, pIdx->nKeyCol, regStat4+2); + sqlite3VdbeAddOp4(v, OP_Function0, 0, regStat4+1, regStat4, + (char*)&statInitFuncdef, P4_FUNCDEF); + sqlite3VdbeChangeP5(v, 2+IsStat34); + + /* Implementation of the following: + ** + ** Rewind csr + ** if eof(csr) goto end_of_scan; + ** regChng = 0 + ** goto next_push_0; + ** + */ + addrRewind = sqlite3VdbeAddOp1(v, OP_Rewind, iIdxCur); + VdbeCoverage(v); + sqlite3VdbeAddOp2(v, OP_Integer, 0, regChng); + addrNextRow = sqlite3VdbeCurrentAddr(v); + + if( nColTest>0 ){ + int endDistinctTest = sqlite3VdbeMakeLabel(v); + int *aGotoChng; /* Array of jump instruction addresses */ + aGotoChng = sqlite3DbMallocRawNN(db, sizeof(int)*nColTest); + if( aGotoChng==0 ) continue; + + /* + ** next_row: + ** regChng = 0 + ** if( idx(0) != regPrev(0) ) goto chng_addr_0 + ** regChng = 1 + ** if( idx(1) != regPrev(1) ) goto chng_addr_1 + ** ... + ** regChng = N + ** goto endDistinctTest + */ + sqlite3VdbeAddOp0(v, OP_Goto); + addrNextRow = sqlite3VdbeCurrentAddr(v); + if( nColTest==1 && pIdx->nKeyCol==1 && IsUniqueIndex(pIdx) ){ + /* For a single-column UNIQUE index, once we have found a non-NULL + ** row, we know that all the rest will be distinct, so skip + ** subsequent distinctness tests. */ + sqlite3VdbeAddOp2(v, OP_NotNull, regPrev, endDistinctTest); + VdbeCoverage(v); + } + for(i=0; i<nColTest; i++){ + char *pColl = (char*)sqlite3LocateCollSeq(pParse, pIdx->azColl[i]); + sqlite3VdbeAddOp2(v, OP_Integer, i, regChng); + sqlite3VdbeAddOp3(v, OP_Column, iIdxCur, i, regTemp); + aGotoChng[i] = + sqlite3VdbeAddOp4(v, OP_Ne, regTemp, 0, regPrev+i, pColl, P4_COLLSEQ); + sqlite3VdbeChangeP5(v, SQLITE_NULLEQ); + VdbeCoverage(v); + } + sqlite3VdbeAddOp2(v, OP_Integer, nColTest, regChng); + sqlite3VdbeGoto(v, endDistinctTest); + + + /* + ** chng_addr_0: + ** regPrev(0) = idx(0) + ** chng_addr_1: + ** regPrev(1) = idx(1) + ** ... 
+ */ + sqlite3VdbeJumpHere(v, addrNextRow-1); + for(i=0; i<nColTest; i++){ + sqlite3VdbeJumpHere(v, aGotoChng[i]); + sqlite3VdbeAddOp3(v, OP_Column, iIdxCur, i, regPrev+i); + } + sqlite3VdbeResolveLabel(v, endDistinctTest); + sqlite3DbFree(db, aGotoChng); + } + + /* + ** chng_addr_N: + ** regRowid = idx(rowid) // STAT34 only + ** stat_push(P, regChng, regRowid) // 3rd parameter STAT34 only + ** Next csr + ** if !eof(csr) goto next_row; + */ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + assert( regRowid==(regStat4+2) ); + if( HasRowid(pTab) ){ + sqlite3VdbeAddOp2(v, OP_IdxRowid, iIdxCur, regRowid); + }else{ + Index *pPk = sqlite3PrimaryKeyIndex(pIdx->pTable); + int j, k, regKey; + regKey = sqlite3GetTempRange(pParse, pPk->nKeyCol); + for(j=0; j<pPk->nKeyCol; j++){ + k = sqlite3ColumnOfIndex(pIdx, pPk->aiColumn[j]); + assert( k>=0 && k<pTab->nCol ); + sqlite3VdbeAddOp3(v, OP_Column, iIdxCur, k, regKey+j); + VdbeComment((v, "%s", pTab->aCol[pPk->aiColumn[j]].zName)); + } + sqlite3VdbeAddOp3(v, OP_MakeRecord, regKey, pPk->nKeyCol, regRowid); + sqlite3ReleaseTempRange(pParse, regKey, pPk->nKeyCol); + } +#endif + assert( regChng==(regStat4+1) ); + sqlite3VdbeAddOp4(v, OP_Function0, 1, regStat4, regTemp, + (char*)&statPushFuncdef, P4_FUNCDEF); + sqlite3VdbeChangeP5(v, 2+IsStat34); + sqlite3VdbeAddOp2(v, OP_Next, iIdxCur, addrNextRow); VdbeCoverage(v); + + /* Add the entry to the stat1 table. */ + callStatGet(v, regStat4, STAT_GET_STAT1, regStat1); + assert( "BBB"[0]==SQLITE_AFF_TEXT ); + sqlite3VdbeAddOp4(v, OP_MakeRecord, regTabname, 3, regTemp, "BBB", 0); + sqlite3VdbeAddOp2(v, OP_NewRowid, iStatCur, regNewRowid); + sqlite3VdbeAddOp3(v, OP_Insert, iStatCur, regTemp, regNewRowid); + sqlite3VdbeChangeP5(v, OPFLAG_APPEND); + + /* Add the entries to the stat3 or stat4 table. */ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + { + int regEq = regStat1; + int regLt = regStat1+1; + int regDLt = regStat1+2; + int regSample = regStat1+3; + int regCol = regStat1+4; + int regSampleRowid = regCol + nCol; + int addrNext; + int addrIsNull; + u8 seekOp = HasRowid(pTab) ? OP_NotExists : OP_NotFound; + + pParse->nMem = MAX(pParse->nMem, regCol+nCol); + + addrNext = sqlite3VdbeCurrentAddr(v); + callStatGet(v, regStat4, STAT_GET_ROWID, regSampleRowid); + addrIsNull = sqlite3VdbeAddOp1(v, OP_IsNull, regSampleRowid); + VdbeCoverage(v); + callStatGet(v, regStat4, STAT_GET_NEQ, regEq); + callStatGet(v, regStat4, STAT_GET_NLT, regLt); + callStatGet(v, regStat4, STAT_GET_NDLT, regDLt); + sqlite3VdbeAddOp4Int(v, seekOp, iTabCur, addrNext, regSampleRowid, 0); + /* We know that the regSampleRowid row exists because it was read by + ** the previous loop. 
Thus the not-found jump of seekOp will never + ** be taken */ + VdbeCoverageNeverTaken(v); +#ifdef SQLITE_ENABLE_STAT3 + sqlite3ExprCodeLoadIndexColumn(pParse, pIdx, iTabCur, 0, regSample); +#else + for(i=0; i<nCol; i++){ + sqlite3ExprCodeLoadIndexColumn(pParse, pIdx, iTabCur, i, regCol+i); + } + sqlite3VdbeAddOp3(v, OP_MakeRecord, regCol, nCol, regSample); +#endif + sqlite3VdbeAddOp3(v, OP_MakeRecord, regTabname, 6, regTemp); + sqlite3VdbeAddOp2(v, OP_NewRowid, iStatCur+1, regNewRowid); + sqlite3VdbeAddOp3(v, OP_Insert, iStatCur+1, regTemp, regNewRowid); + sqlite3VdbeAddOp2(v, OP_Goto, 1, addrNext); /* P1==1 for end-of-loop */ + sqlite3VdbeJumpHere(v, addrIsNull); + } +#endif /* SQLITE_ENABLE_STAT3_OR_STAT4 */ + + /* End of analysis */ + sqlite3VdbeJumpHere(v, addrRewind); + } + + + /* Create a single sqlite_stat1 entry containing NULL as the index + ** name and the row count as the content. + */ + if( pOnlyIdx==0 && needTableCnt ){ + VdbeComment((v, "%s", pTab->zName)); + sqlite3VdbeAddOp2(v, OP_Count, iTabCur, regStat1); + jZeroRows = sqlite3VdbeAddOp1(v, OP_IfNot, regStat1); VdbeCoverage(v); + sqlite3VdbeAddOp2(v, OP_Null, 0, regIdxname); + assert( "BBB"[0]==SQLITE_AFF_TEXT ); + sqlite3VdbeAddOp4(v, OP_MakeRecord, regTabname, 3, regTemp, "BBB", 0); + sqlite3VdbeAddOp2(v, OP_NewRowid, iStatCur, regNewRowid); + sqlite3VdbeAddOp3(v, OP_Insert, iStatCur, regTemp, regNewRowid); sqlite3VdbeChangeP5(v, OPFLAG_APPEND); - sqlite3VdbeJumpHere(v, addr); + sqlite3VdbeJumpHere(v, jZeroRows); } } + /* ** Generate code that will cause the most recent index analysis to -** be laoded into internal hash tables where is can be used. +** be loaded into internal hash tables where is can be used. */ static void loadAnalysis(Parse *pParse, int iDb){ Vdbe *v = sqlite3GetVdbe(pParse); if( v ){ sqlite3VdbeAddOp1(v, OP_LoadAnalysis, iDb); @@ -66566,39 +92835,47 @@ sqlite3 *db = pParse->db; Schema *pSchema = db->aDb[iDb].pSchema; /* Schema of database iDb */ HashElem *k; int iStatCur; int iMem; + int iTab; sqlite3BeginWriteOperation(pParse, 0, iDb); iStatCur = pParse->nTab; - pParse->nTab += 2; - openStatTable(pParse, iDb, iStatCur, 0); + pParse->nTab += 3; + openStatTable(pParse, iDb, iStatCur, 0, 0); iMem = pParse->nMem+1; + iTab = pParse->nTab; + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); for(k=sqliteHashFirst(&pSchema->tblHash); k; k=sqliteHashNext(k)){ Table *pTab = (Table*)sqliteHashData(k); - analyzeOneTable(pParse, pTab, iStatCur, iMem); + analyzeOneTable(pParse, pTab, 0, iStatCur, iMem, iTab); } loadAnalysis(pParse, iDb); } /* ** Generate code that will do an analysis of a single table in -** a database. +** a database. If pOnlyIdx is not NULL then it is a single index +** in pTab that should be analyzed. */ -static void analyzeTable(Parse *pParse, Table *pTab){ +static void analyzeTable(Parse *pParse, Table *pTab, Index *pOnlyIdx){ int iDb; int iStatCur; assert( pTab!=0 ); assert( sqlite3BtreeHoldsAllMutexes(pParse->db) ); iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema); sqlite3BeginWriteOperation(pParse, 0, iDb); iStatCur = pParse->nTab; - pParse->nTab += 2; - openStatTable(pParse, iDb, iStatCur, pTab->zName); - analyzeOneTable(pParse, pTab, iStatCur, pParse->nMem+1); + pParse->nTab += 3; + if( pOnlyIdx ){ + openStatTable(pParse, iDb, iStatCur, pOnlyIdx->zName, "idx"); + }else{ + openStatTable(pParse, iDb, iStatCur, pTab->zName, "tbl"); + } + analyzeOneTable(pParse, pTab, pOnlyIdx, iStatCur,pParse->nMem+1,pParse->nTab); loadAnalysis(pParse, iDb); } /* ** Generate code for the ANALYZE command. 
The parser calls this routine @@ -66616,11 +92893,13 @@ sqlite3 *db = pParse->db; int iDb; int i; char *z, *zDb; Table *pTab; + Index *pIdx; Token *pTableName; + Vdbe *v; /* Read the database schema. If an error occurs, leave an error message ** and code in pParse and return NULL. */ assert( sqlite3BtreeHoldsAllMutexes(pParse->db) ); if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){ @@ -66640,32 +92919,36 @@ if( iDb>=0 ){ analyzeDatabase(pParse, iDb); }else{ z = sqlite3NameFromToken(db, pName1); if( z ){ - pTab = sqlite3LocateTable(pParse, 0, z, 0); + if( (pIdx = sqlite3FindIndex(db, z, 0))!=0 ){ + analyzeTable(pParse, pIdx->pTable, pIdx); + }else if( (pTab = sqlite3LocateTable(pParse, 0, z, 0))!=0 ){ + analyzeTable(pParse, pTab, 0); + } sqlite3DbFree(db, z); - if( pTab ){ - analyzeTable(pParse, pTab); - } } } }else{ /* Form 3: Analyze the fully qualified table name */ iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pTableName); if( iDb>=0 ){ zDb = db->aDb[iDb].zName; z = sqlite3NameFromToken(db, pTableName); if( z ){ - pTab = sqlite3LocateTable(pParse, 0, z, zDb); + if( (pIdx = sqlite3FindIndex(db, z, zDb))!=0 ){ + analyzeTable(pParse, pIdx->pTable, pIdx); + }else if( (pTab = sqlite3LocateTable(pParse, 0, z, zDb))!=0 ){ + analyzeTable(pParse, pTab, 0); + } sqlite3DbFree(db, z); - if( pTab ){ - analyzeTable(pParse, pTab); - } } } } + v = sqlite3GetVdbe(pParse); + if( v ) sqlite3VdbeAddOp0(v, OP_Expire); } /* ** Used to pass information from the analyzer reader through to the ** callback routine. @@ -66675,82 +92958,414 @@ sqlite3 *db; const char *zDatabase; }; /* -** This callback is invoked once for each index when reading the -** sqlite_stat1 table. -** -** argv[0] = name of the index -** argv[1] = results of analysis - on integer for each column +** The first argument points to a nul-terminated string containing a +** list of space separated integers. Read the first nOut of these into +** the array aOut[]. 
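**
** (Worked example with an invented argument: given
** zIntArray = "1000 20 4 unordered" and nOut = 3, the three leading
** integers are stored and the trailing "unordered" keyword is then picked
** up by the flag-parsing loop below, which also recognizes "sz=NNN",
** "noskipscan" and, when SQLITE_ENABLE_COSTMULT is defined,
** "costmult=NNN".)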
*/ -static int analysisLoader(void *pData, int argc, char **argv, char **NotUsed){ - analysisInfo *pInfo = (analysisInfo*)pData; - Index *pIndex; - int i, c; - unsigned int v; - const char *z; - - assert( argc==2 ); - UNUSED_PARAMETER2(NotUsed, argc); - - if( argv==0 || argv[0]==0 || argv[1]==0 ){ - return 0; - } - pIndex = sqlite3FindIndex(pInfo->db, argv[0], pInfo->zDatabase); - if( pIndex==0 ){ - return 0; - } - z = argv[1]; - for(i=0; *z && i<=pIndex->nColumn; i++){ +static void decodeIntArray( + char *zIntArray, /* String containing int array to decode */ + int nOut, /* Number of slots in aOut[] */ + tRowcnt *aOut, /* Store integers here */ + LogEst *aLog, /* Or, if aOut==0, here */ + Index *pIndex /* Handle extra flags for this index, if not NULL */ +){ + char *z = zIntArray; + int c; + int i; + tRowcnt v; + +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + if( z==0 ) z = ""; +#else + assert( z!=0 ); +#endif + for(i=0; *z && i<nOut; i++){ v = 0; while( (c=z[0])>='0' && c<='9' ){ v = v*10 + c - '0'; z++; } - pIndex->aiRowEst[i] = v; +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + if( aOut ) aOut[i] = v; + if( aLog ) aLog[i] = sqlite3LogEst(v); +#else + assert( aOut==0 ); + UNUSED_PARAMETER(aOut); + assert( aLog!=0 ); + aLog[i] = sqlite3LogEst(v); +#endif if( *z==' ' ) z++; } +#ifndef SQLITE_ENABLE_STAT3_OR_STAT4 + assert( pIndex!=0 ); { +#else + if( pIndex ){ +#endif + pIndex->bUnordered = 0; + pIndex->noSkipScan = 0; + while( z[0] ){ + if( sqlite3_strglob("unordered*", z)==0 ){ + pIndex->bUnordered = 1; + }else if( sqlite3_strglob("sz=[0-9]*", z)==0 ){ + pIndex->szIdxRow = sqlite3LogEst(sqlite3Atoi(z+3)); + }else if( sqlite3_strglob("noskipscan*", z)==0 ){ + pIndex->noSkipScan = 1; + } +#ifdef SQLITE_ENABLE_COSTMULT + else if( sqlite3_strglob("costmult=[0-9]*",z)==0 ){ + pIndex->pTable->costMult = sqlite3LogEst(sqlite3Atoi(z+9)); + } +#endif + while( z[0]!=0 && z[0]!=' ' ) z++; + while( z[0]==' ' ) z++; + } + } +} + +/* +** This callback is invoked once for each index when reading the +** sqlite_stat1 table. +** +** argv[0] = name of the table +** argv[1] = name of the index (might be NULL) +** argv[2] = results of analysis - on integer for each column +** +** Entries for which argv[1]==NULL simply record the number of rows in +** the table. +*/ +static int analysisLoader(void *pData, int argc, char **argv, char **NotUsed){ + analysisInfo *pInfo = (analysisInfo*)pData; + Index *pIndex; + Table *pTable; + const char *z; + + assert( argc==3 ); + UNUSED_PARAMETER2(NotUsed, argc); + + if( argv==0 || argv[0]==0 || argv[2]==0 ){ + return 0; + } + pTable = sqlite3FindTable(pInfo->db, argv[0], pInfo->zDatabase); + if( pTable==0 ){ + return 0; + } + if( argv[1]==0 ){ + pIndex = 0; + }else if( sqlite3_stricmp(argv[0],argv[1])==0 ){ + pIndex = sqlite3PrimaryKeyIndex(pTable); + }else{ + pIndex = sqlite3FindIndex(pInfo->db, argv[1], pInfo->zDatabase); + } + z = argv[2]; + + if( pIndex ){ + tRowcnt *aiRowEst = 0; + int nCol = pIndex->nKeyCol+1; +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + /* Index.aiRowEst may already be set here if there are duplicate + ** sqlite_stat1 entries for this index. In that case just clobber + ** the old data with the new instead of allocating a new array. 
*/ + if( pIndex->aiRowEst==0 ){ + pIndex->aiRowEst = (tRowcnt*)sqlite3MallocZero(sizeof(tRowcnt) * nCol); + if( pIndex->aiRowEst==0 ) sqlite3OomFault(pInfo->db); + } + aiRowEst = pIndex->aiRowEst; +#endif + pIndex->bUnordered = 0; + decodeIntArray((char*)z, nCol, aiRowEst, pIndex->aiRowLogEst, pIndex); + if( pIndex->pPartIdxWhere==0 ) pTable->nRowLogEst = pIndex->aiRowLogEst[0]; + }else{ + Index fakeIdx; + fakeIdx.szIdxRow = pTable->szTabRow; +#ifdef SQLITE_ENABLE_COSTMULT + fakeIdx.pTable = pTable; +#endif + decodeIntArray((char*)z, 1, 0, &pTable->nRowLogEst, &fakeIdx); + pTable->szTabRow = fakeIdx.szIdxRow; + } + return 0; } /* ** If the Index.aSample variable is not NULL, delete the aSample[] array ** and its contents. */ -SQLITE_PRIVATE void sqlite3DeleteIndexSamples(Index *pIdx){ -#ifdef SQLITE_ENABLE_STAT2 +SQLITE_PRIVATE void sqlite3DeleteIndexSamples(sqlite3 *db, Index *pIdx){ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 if( pIdx->aSample ){ int j; - sqlite3 *dbMem = pIdx->pTable->dbMem; - for(j=0; j<SQLITE_INDEX_SAMPLES; j++){ + for(j=0; j<pIdx->nSample; j++){ IndexSample *p = &pIdx->aSample[j]; - if( p->eType==SQLITE_TEXT || p->eType==SQLITE_BLOB ){ - sqlite3DbFree(pIdx->pTable->dbMem, p->u.z); - } + sqlite3DbFree(db, p->p); } - sqlite3DbFree(dbMem, pIdx->aSample); + sqlite3DbFree(db, pIdx->aSample); + } + if( db && db->pnBytesFreed==0 ){ + pIdx->nSample = 0; pIdx->aSample = 0; } #else + UNUSED_PARAMETER(db); UNUSED_PARAMETER(pIdx); -#endif +#endif /* SQLITE_ENABLE_STAT3_OR_STAT4 */ +} + +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 +/* +** Populate the pIdx->aAvgEq[] array based on the samples currently +** stored in pIdx->aSample[]. +*/ +static void initAvgEq(Index *pIdx){ + if( pIdx ){ + IndexSample *aSample = pIdx->aSample; + IndexSample *pFinal = &aSample[pIdx->nSample-1]; + int iCol; + int nCol = 1; + if( pIdx->nSampleCol>1 ){ + /* If this is stat4 data, then calculate aAvgEq[] values for all + ** sample columns except the last. The last is always set to 1, as + ** once the trailing PK fields are considered all index keys are + ** unique. */ + nCol = pIdx->nSampleCol-1; + pIdx->aAvgEq[nCol] = 1; + } + for(iCol=0; iCol<nCol; iCol++){ + int nSample = pIdx->nSample; + int i; /* Used to iterate through samples */ + tRowcnt sumEq = 0; /* Sum of the nEq values */ + tRowcnt avgEq = 0; + tRowcnt nRow; /* Number of rows in index */ + i64 nSum100 = 0; /* Number of terms contributing to sumEq */ + i64 nDist100; /* Number of distinct values in index */ + + if( !pIdx->aiRowEst || iCol>=pIdx->nKeyCol || pIdx->aiRowEst[iCol+1]==0 ){ + nRow = pFinal->anLt[iCol]; + nDist100 = (i64)100 * pFinal->anDLt[iCol]; + nSample--; + }else{ + nRow = pIdx->aiRowEst[0]; + nDist100 = ((i64)100 * pIdx->aiRowEst[0]) / pIdx->aiRowEst[iCol+1]; + } + pIdx->nRowEst0 = nRow; + + /* Set nSum to the number of distinct (iCol+1) field prefixes that + ** occur in the stat4 table for this index. Set sumEq to the sum of + ** the nEq values for column iCol for the same set (adding the value + ** only once where there exist duplicate prefixes). */ + for(i=0; i<nSample; i++){ + if( i==(pIdx->nSample-1) + || aSample[i].anDLt[iCol]!=aSample[i+1].anDLt[iCol] + ){ + sumEq += aSample[i].anEq[iCol]; + nSum100 += 100; + } + } + + if( nDist100>nSum100 ){ + avgEq = ((i64)100 * (nRow - sumEq))/(nDist100 - nSum100); + } + if( avgEq==0 ) avgEq = 1; + pIdx->aAvgEq[iCol] = avgEq; + } + } } /* -** Load the content of the sqlite_stat1 and sqlite_stat2 tables. The +** Look up an index by name. 
Or, if the name of a WITHOUT ROWID table +** is supplied instead, find the PRIMARY KEY index for that table. +*/ +static Index *findIndexOrPrimaryKey( + sqlite3 *db, + const char *zName, + const char *zDb +){ + Index *pIdx = sqlite3FindIndex(db, zName, zDb); + if( pIdx==0 ){ + Table *pTab = sqlite3FindTable(db, zName, zDb); + if( pTab && !HasRowid(pTab) ) pIdx = sqlite3PrimaryKeyIndex(pTab); + } + return pIdx; +} + +/* +** Load the content from either the sqlite_stat4 or sqlite_stat3 table +** into the relevant Index.aSample[] arrays. +** +** Arguments zSql1 and zSql2 must point to SQL statements that return +** data equivalent to the following (statements are different for stat3, +** see the caller of this function for details): +** +** zSql1: SELECT idx,count(*) FROM %Q.sqlite_stat4 GROUP BY idx +** zSql2: SELECT idx,neq,nlt,ndlt,sample FROM %Q.sqlite_stat4 +** +** where %Q is replaced with the database name before the SQL is executed. +*/ +static int loadStatTbl( + sqlite3 *db, /* Database handle */ + int bStat3, /* Assume single column records only */ + const char *zSql1, /* SQL statement 1 (see above) */ + const char *zSql2, /* SQL statement 2 (see above) */ + const char *zDb /* Database name (e.g. "main") */ +){ + int rc; /* Result codes from subroutines */ + sqlite3_stmt *pStmt = 0; /* An SQL statement being run */ + char *zSql; /* Text of the SQL statement */ + Index *pPrevIdx = 0; /* Previous index in the loop */ + IndexSample *pSample; /* A slot in pIdx->aSample[] */ + + assert( db->lookaside.bDisable ); + zSql = sqlite3MPrintf(db, zSql1, zDb); + if( !zSql ){ + return SQLITE_NOMEM; + } + rc = sqlite3_prepare(db, zSql, -1, &pStmt, 0); + sqlite3DbFree(db, zSql); + if( rc ) return rc; + + while( sqlite3_step(pStmt)==SQLITE_ROW ){ + int nIdxCol = 1; /* Number of columns in stat4 records */ + + char *zIndex; /* Index name */ + Index *pIdx; /* Pointer to the index object */ + int nSample; /* Number of samples */ + int nByte; /* Bytes of space required */ + int i; /* Bytes of space required */ + tRowcnt *pSpace; + + zIndex = (char *)sqlite3_column_text(pStmt, 0); + if( zIndex==0 ) continue; + nSample = sqlite3_column_int(pStmt, 1); + pIdx = findIndexOrPrimaryKey(db, zIndex, zDb); + assert( pIdx==0 || bStat3 || pIdx->nSample==0 ); + /* Index.nSample is non-zero at this point if data has already been + ** loaded from the stat4 table. In this case ignore stat3 data. 
*/ + if( pIdx==0 || pIdx->nSample ) continue; + if( bStat3==0 ){ + assert( !HasRowid(pIdx->pTable) || pIdx->nColumn==pIdx->nKeyCol+1 ); + if( !HasRowid(pIdx->pTable) && IsPrimaryKeyIndex(pIdx) ){ + nIdxCol = pIdx->nKeyCol; + }else{ + nIdxCol = pIdx->nColumn; + } + } + pIdx->nSampleCol = nIdxCol; + nByte = sizeof(IndexSample) * nSample; + nByte += sizeof(tRowcnt) * nIdxCol * 3 * nSample; + nByte += nIdxCol * sizeof(tRowcnt); /* Space for Index.aAvgEq[] */ + + pIdx->aSample = sqlite3DbMallocZero(db, nByte); + if( pIdx->aSample==0 ){ + sqlite3_finalize(pStmt); + return SQLITE_NOMEM; + } + pSpace = (tRowcnt*)&pIdx->aSample[nSample]; + pIdx->aAvgEq = pSpace; pSpace += nIdxCol; + for(i=0; i<nSample; i++){ + pIdx->aSample[i].anEq = pSpace; pSpace += nIdxCol; + pIdx->aSample[i].anLt = pSpace; pSpace += nIdxCol; + pIdx->aSample[i].anDLt = pSpace; pSpace += nIdxCol; + } + assert( ((u8*)pSpace)-nByte==(u8*)(pIdx->aSample) ); + } + rc = sqlite3_finalize(pStmt); + if( rc ) return rc; + + zSql = sqlite3MPrintf(db, zSql2, zDb); + if( !zSql ){ + return SQLITE_NOMEM; + } + rc = sqlite3_prepare(db, zSql, -1, &pStmt, 0); + sqlite3DbFree(db, zSql); + if( rc ) return rc; + + while( sqlite3_step(pStmt)==SQLITE_ROW ){ + char *zIndex; /* Index name */ + Index *pIdx; /* Pointer to the index object */ + int nCol = 1; /* Number of columns in index */ + + zIndex = (char *)sqlite3_column_text(pStmt, 0); + if( zIndex==0 ) continue; + pIdx = findIndexOrPrimaryKey(db, zIndex, zDb); + if( pIdx==0 ) continue; + /* This next condition is true if data has already been loaded from + ** the sqlite_stat4 table. In this case ignore stat3 data. */ + nCol = pIdx->nSampleCol; + if( bStat3 && nCol>1 ) continue; + if( pIdx!=pPrevIdx ){ + initAvgEq(pPrevIdx); + pPrevIdx = pIdx; + } + pSample = &pIdx->aSample[pIdx->nSample]; + decodeIntArray((char*)sqlite3_column_text(pStmt,1),nCol,pSample->anEq,0,0); + decodeIntArray((char*)sqlite3_column_text(pStmt,2),nCol,pSample->anLt,0,0); + decodeIntArray((char*)sqlite3_column_text(pStmt,3),nCol,pSample->anDLt,0,0); + + /* Take a copy of the sample. Add two 0x00 bytes the end of the buffer. + ** This is in case the sample record is corrupted. In that case, the + ** sqlite3VdbeRecordCompare() may read up to two varints past the + ** end of the allocated buffer before it realizes it is dealing with + ** a corrupt record. Adding the two 0x00 bytes prevents this from causing + ** a buffer overread. */ + pSample->n = sqlite3_column_bytes(pStmt, 4); + pSample->p = sqlite3DbMallocZero(db, pSample->n + 2); + if( pSample->p==0 ){ + sqlite3_finalize(pStmt); + return SQLITE_NOMEM; + } + memcpy(pSample->p, sqlite3_column_blob(pStmt, 4), pSample->n); + pIdx->nSample++; + } + rc = sqlite3_finalize(pStmt); + if( rc==SQLITE_OK ) initAvgEq(pPrevIdx); + return rc; +} + +/* +** Load content from the sqlite_stat4 and sqlite_stat3 tables into +** the Index.aSample[] arrays of all indices. 
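**
** (Note on precedence: sqlite_stat4 is loaded first, and any index that
** already has samples then has its sqlite_stat3 rows skipped by
** loadStatTbl().  The stat3 query wraps the sample value in
** sqlite_record() so that both code paths hand loadStatTbl() a
** record-encoded blob.)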
+*/ +static int loadStat4(sqlite3 *db, const char *zDb){ + int rc = SQLITE_OK; /* Result codes from subroutines */ + + assert( db->lookaside.bDisable ); + if( sqlite3FindTable(db, "sqlite_stat4", zDb) ){ + rc = loadStatTbl(db, 0, + "SELECT idx,count(*) FROM %Q.sqlite_stat4 GROUP BY idx", + "SELECT idx,neq,nlt,ndlt,sample FROM %Q.sqlite_stat4", + zDb + ); + } + + if( rc==SQLITE_OK && sqlite3FindTable(db, "sqlite_stat3", zDb) ){ + rc = loadStatTbl(db, 1, + "SELECT idx,count(*) FROM %Q.sqlite_stat3 GROUP BY idx", + "SELECT idx,neq,nlt,ndlt,sqlite_record(sample) FROM %Q.sqlite_stat3", + zDb + ); + } + + return rc; +} +#endif /* SQLITE_ENABLE_STAT3_OR_STAT4 */ + +/* +** Load the content of the sqlite_stat1 and sqlite_stat3/4 tables. The ** contents of sqlite_stat1 are used to populate the Index.aiRowEst[] -** arrays. The contents of sqlite_stat2 are used to populate the +** arrays. The contents of sqlite_stat3/4 are used to populate the ** Index.aSample[] arrays. ** ** If the sqlite_stat1 table is not present in the database, SQLITE_ERROR -** is returned. In this case, even if SQLITE_ENABLE_STAT2 was defined -** during compilation and the sqlite_stat2 table is present, no data is +** is returned. In this case, even if SQLITE_ENABLE_STAT3/4 was defined +** during compilation and the sqlite_stat3/4 table is present, no data is ** read from it. ** -** If SQLITE_ENABLE_STAT2 was defined during compilation and the -** sqlite_stat2 table is not present in the database, SQLITE_ERROR is +** If SQLITE_ENABLE_STAT3/4 was defined during compilation and the +** sqlite_stat4 table is not present in the database, SQLITE_ERROR is ** returned. However, in this case, data is read from the sqlite_stat1 ** table (if it is present) before returning. ** ** If an OOM error occurs, this function always sets db->mallocFailed. ** This means if the caller does not care about other errors, the return @@ -66762,17 +93377,20 @@ char *zSql; int rc; assert( iDb>=0 && iDb<db->nDb ); assert( db->aDb[iDb].pBt!=0 ); - assert( sqlite3BtreeHoldsMutex(db->aDb[iDb].pBt) ); /* Clear any prior statistics */ + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); for(i=sqliteHashFirst(&db->aDb[iDb].pSchema->idxHash);i;i=sqliteHashNext(i)){ Index *pIdx = sqliteHashData(i); sqlite3DefaultRowEst(pIdx); - sqlite3DeleteIndexSamples(pIdx); +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + sqlite3DeleteIndexSamples(db, pIdx); + pIdx->aSample = 0; +#endif } /* Check to make sure the sqlite_stat1 table exists */ sInfo.db = db; sInfo.zDatabase = db->aDb[iDb].zName; @@ -66780,96 +93398,35 @@ return SQLITE_ERROR; } /* Load new statistics out of the sqlite_stat1 table */ zSql = sqlite3MPrintf(db, - "SELECT idx, stat FROM %Q.sqlite_stat1", sInfo.zDatabase); + "SELECT tbl,idx,stat FROM %Q.sqlite_stat1", sInfo.zDatabase); if( zSql==0 ){ rc = SQLITE_NOMEM; }else{ rc = sqlite3_exec(db, zSql, analysisLoader, &sInfo, 0); sqlite3DbFree(db, zSql); } - /* Load the statistics from the sqlite_stat2 table. 
*/ -#ifdef SQLITE_ENABLE_STAT2 - if( rc==SQLITE_OK && !sqlite3FindTable(db, "sqlite_stat2", sInfo.zDatabase) ){ - rc = SQLITE_ERROR; - } - if( rc==SQLITE_OK ){ - sqlite3_stmt *pStmt = 0; - - zSql = sqlite3MPrintf(db, - "SELECT idx,sampleno,sample FROM %Q.sqlite_stat2", sInfo.zDatabase); - if( !zSql ){ - rc = SQLITE_NOMEM; - }else{ - rc = sqlite3_prepare(db, zSql, -1, &pStmt, 0); - sqlite3DbFree(db, zSql); - } - - if( rc==SQLITE_OK ){ - while( sqlite3_step(pStmt)==SQLITE_ROW ){ - char *zIndex = (char *)sqlite3_column_text(pStmt, 0); - Index *pIdx = sqlite3FindIndex(db, zIndex, sInfo.zDatabase); - if( pIdx ){ - int iSample = sqlite3_column_int(pStmt, 1); - sqlite3 *dbMem = pIdx->pTable->dbMem; - assert( dbMem==db || dbMem==0 ); - if( iSample<SQLITE_INDEX_SAMPLES && iSample>=0 ){ - int eType = sqlite3_column_type(pStmt, 2); - - if( pIdx->aSample==0 ){ - static const int sz = sizeof(IndexSample)*SQLITE_INDEX_SAMPLES; - pIdx->aSample = (IndexSample *)sqlite3DbMallocZero(dbMem, sz); - if( pIdx->aSample==0 ){ - db->mallocFailed = 1; - break; - } - } - - assert( pIdx->aSample ); - { - IndexSample *pSample = &pIdx->aSample[iSample]; - pSample->eType = (u8)eType; - if( eType==SQLITE_INTEGER || eType==SQLITE_FLOAT ){ - pSample->u.r = sqlite3_column_double(pStmt, 2); - }else if( eType==SQLITE_TEXT || eType==SQLITE_BLOB ){ - const char *z = (const char *)( - (eType==SQLITE_BLOB) ? - sqlite3_column_blob(pStmt, 2): - sqlite3_column_text(pStmt, 2) - ); - int n = sqlite3_column_bytes(pStmt, 2); - if( n>24 ){ - n = 24; - } - pSample->nByte = (u8)n; - if( n < 1){ - pSample->u.z = 0; - }else{ - pSample->u.z = sqlite3DbMallocRaw(dbMem, n); - if( pSample->u.z ){ - memcpy(pSample->u.z, z, n); - }else{ - db->mallocFailed = 1; - break; - } - } - } - } - } - } - } - rc = sqlite3_finalize(pStmt); - } + /* Load the statistics from the sqlite_stat4 table. */ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + if( rc==SQLITE_OK && OptimizationEnabled(db, SQLITE_Stat34) ){ + db->lookaside.bDisable++; + rc = loadStat4(db, sInfo.zDatabase); + db->lookaside.bDisable--; + } + for(i=sqliteHashFirst(&db->aDb[iDb].pSchema->idxHash);i;i=sqliteHashNext(i)){ + Index *pIdx = sqliteHashData(i); + sqlite3_free(pIdx->aiRowEst); + pIdx->aiRowEst = 0; } #endif if( rc==SQLITE_NOMEM ){ - db->mallocFailed = 1; + sqlite3OomFault(db); } return rc; } @@ -66888,10 +93445,11 @@ ** May you share freely, never taking more than you give. ** ************************************************************************* ** This file contains code used to implement the ATTACH and DETACH commands. */ +/* #include "sqliteInt.h" */ #ifndef SQLITE_OMIT_ATTACH /* ** Resolve an expression that was part of an ATTACH or DETACH statement. 
This ** is slightly different from resolving a normal SQL expression, because simple @@ -66914,14 +93472,10 @@ { int rc = SQLITE_OK; if( pExpr ){ if( pExpr->op!=TK_ID ){ rc = sqlite3ResolveExprNames(pName, pExpr); - if( rc==SQLITE_OK && !sqlite3ExprIsConstant(pExpr) ){ - sqlite3ErrorMsg(pName->pParse, "invalid name: \"%s\"", pExpr->u.zToken); - return SQLITE_ERROR; - } }else{ pExpr->op = TK_STRING; } } return rc; @@ -66946,12 +93500,16 @@ int i; int rc = 0; sqlite3 *db = sqlite3_context_db_handle(context); const char *zName; const char *zFile; + char *zPath = 0; + char *zErr = 0; + unsigned int flags; Db *aNew; char *zErrDyn = 0; + sqlite3_vfs *pVfs; UNUSED_PARAMETER(NotUsed); zFile = (const char *)sqlite3_value_text(argv[0]); zName = (const char *)sqlite3_value_text(argv[1]); @@ -66981,15 +93539,15 @@ zErrDyn = sqlite3MPrintf(db, "database %s is already in use", zName); goto attach_error; } } - /* Allocate the new entry in the db->aDb[] array and initialise the schema + /* Allocate the new entry in the db->aDb[] array and initialize the schema ** hash tables. */ if( db->aDb==db->aDbStatic ){ - aNew = sqlite3DbMallocRaw(db, sizeof(db->aDb[0])*3 ); + aNew = sqlite3DbMallocRawNN(db, sizeof(db->aDb[0])*3 ); if( aNew==0 ) return; memcpy(aNew, db->aDb, sizeof(db->aDb[0])*2); }else{ aNew = sqlite3DbRealloc(db, db->aDb, sizeof(db->aDb[0])*(db->nDb+1) ); if( aNew==0 ) return; @@ -66998,15 +93556,24 @@ aNew = &db->aDb[db->nDb]; memset(aNew, 0, sizeof(*aNew)); /* Open the database file. If the btree is successfully opened, use ** it to obtain the database schema. At this point the schema may - ** or may not be initialised. + ** or may not be initialized. */ - rc = sqlite3BtreeFactory(db, zFile, 0, SQLITE_DEFAULT_CACHE_SIZE, - db->openFlags | SQLITE_OPEN_MAIN_DB, - &aNew->pBt); + flags = db->openFlags; + rc = sqlite3ParseUri(db->pVfs->zName, zFile, &flags, &pVfs, &zPath, &zErr); + if( rc!=SQLITE_OK ){ + if( rc==SQLITE_NOMEM ) sqlite3OomFault(db); + sqlite3_result_error(context, zErr, -1); + sqlite3_free(zErr); + return; + } + assert( pVfs ); + flags |= SQLITE_OPEN_MAIN_DB; + rc = sqlite3BtreeOpen(pVfs, zPath, db, &aNew->pBt, 0, flags); + sqlite3_free( zPath ); db->nDb++; if( rc==SQLITE_CONSTRAINT ){ rc = SQLITE_ERROR; zErrDyn = sqlite3MPrintf(db, "database is already attached"); }else if( rc==SQLITE_OK ){ @@ -67017,15 +93584,20 @@ }else if( aNew->pSchema->file_format && aNew->pSchema->enc!=ENC(db) ){ zErrDyn = sqlite3MPrintf(db, "attached databases must use the same text encoding as main database"); rc = SQLITE_ERROR; } + sqlite3BtreeEnter(aNew->pBt); pPager = sqlite3BtreePager(aNew->pBt); sqlite3PagerLockingMode(pPager, db->dfltLockMode); - sqlite3PagerJournalMode(pPager, db->dfltJournalMode); sqlite3BtreeSecureDelete(aNew->pBt, sqlite3BtreeSecureDelete(db->aDb[0].pBt,-1) ); +#ifndef SQLITE_OMIT_PAGER_PRAGMAS + sqlite3BtreeSetPagerFlags(aNew->pBt, + PAGER_SYNCHRONOUS_FULL | (db->flags & PAGER_FLAGS_MASK)); +#endif + sqlite3BtreeLeave(aNew->pBt); } aNew->safety_level = 3; aNew->zName = sqlite3DbStrDup(db, zName); if( rc==SQLITE_OK && aNew->zName==0 ){ rc = SQLITE_NOMEM; @@ -67054,11 +93626,13 @@ break; case SQLITE_NULL: /* No key specified. 
Use the key from the main database */ sqlite3CodecGetKey(db, 0, (void**)&zKey, &nKey); - rc = sqlite3CodecAttach(db, db->nDb-1, zKey, nKey); + if( nKey>0 || sqlite3BtreeGetOptimalReserve(db->aDb[0].pBt)>0 ){ + rc = sqlite3CodecAttach(db, db->nDb-1, zKey, nKey); + } break; } } #endif @@ -67070,22 +93644,31 @@ if( rc==SQLITE_OK ){ sqlite3BtreeEnterAll(db); rc = sqlite3Init(db, &zErrDyn); sqlite3BtreeLeaveAll(db); } +#ifdef SQLITE_USER_AUTHENTICATION + if( rc==SQLITE_OK ){ + u8 newAuth = 0; + rc = sqlite3UserAuthCheckLogin(db, zName, &newAuth); + if( newAuth<db->auth.authLevel ){ + rc = SQLITE_AUTH_USER; + } + } +#endif if( rc ){ int iDb = db->nDb - 1; assert( iDb>=2 ); if( db->aDb[iDb].pBt ){ sqlite3BtreeClose(db->aDb[iDb].pBt); db->aDb[iDb].pBt = 0; db->aDb[iDb].pSchema = 0; } - sqlite3ResetInternalSchema(db, 0); + sqlite3ResetAllSchemasOfConnection(db); db->nDb = iDb; if( rc==SQLITE_NOMEM || rc==SQLITE_IOERR_NOMEM ){ - db->mallocFailed = 1; + sqlite3OomFault(db); sqlite3DbFree(db, zErrDyn); zErrDyn = sqlite3MPrintf(db, "out of memory"); }else if( zErrDyn==0 ){ zErrDyn = sqlite3MPrintf(db, "unable to open database: %s", zFile); } @@ -67150,11 +93733,11 @@ } sqlite3BtreeClose(pDb->pBt); pDb->pBt = 0; pDb->pSchema = 0; - sqlite3ResetInternalSchema(db, 0); + sqlite3CollapseDatabaseArray(db); return; detach_error: sqlite3_result_error(context, zErr, -1); } @@ -67164,11 +93747,11 @@ ** sqlite_detach() or sqlite_attach() SQL user functions. */ static void codeAttach( Parse *pParse, /* The parser context */ int type, /* Either SQLITE_ATTACH or SQLITE_DETACH */ - FuncDef *pFunc, /* FuncDef wrapper for detachFunc() or attachFunc() */ + FuncDef const *pFunc,/* FuncDef wrapper for detachFunc() or attachFunc() */ Expr *pAuthArg, /* Expression to pass to authorization callback */ Expr *pFilename, /* Name of database file */ Expr *pDbname, /* Name of the database to use internally */ Expr *pKey /* Database key for encryption extension */ ){ @@ -67184,19 +93767,20 @@ if( SQLITE_OK!=(rc = resolveAttachExpr(&sName, pFilename)) || SQLITE_OK!=(rc = resolveAttachExpr(&sName, pDbname)) || SQLITE_OK!=(rc = resolveAttachExpr(&sName, pKey)) ){ - pParse->nErr++; goto attach_end; } #ifndef SQLITE_OMIT_AUTHORIZATION if( pAuthArg ){ - char *zAuthArg = pAuthArg->u.zToken; - if( NEVER(zAuthArg==0) ){ - goto attach_end; + char *zAuthArg; + if( pAuthArg->op==TK_STRING ){ + zAuthArg = pAuthArg->u.zToken; + }else{ + zAuthArg = 0; } rc = sqlite3AuthCheck(pParse, type, zAuthArg, 0, 0); if(rc!=SQLITE_OK ){ goto attach_end; } @@ -67210,15 +93794,15 @@ sqlite3ExprCode(pParse, pDbname, regArgs+1); sqlite3ExprCode(pParse, pKey, regArgs+2); assert( v || db->mallocFailed ); if( v ){ - sqlite3VdbeAddOp3(v, OP_Function, 0, regArgs+3-pFunc->nArg, regArgs+3); + sqlite3VdbeAddOp4(v, OP_Function0, 0, regArgs+3-pFunc->nArg, regArgs+3, + (char *)pFunc, P4_FUNCDEF); assert( pFunc->nArg==-1 || (pFunc->nArg&0xff)==pFunc->nArg ); sqlite3VdbeChangeP5(v, (u8)(pFunc->nArg)); - sqlite3VdbeChangeP4(v, -1, (char *)pFunc, P4_FUNCDEF); - + /* Code an OP_Expire. For an ATTACH statement, set P1 to true (expire this ** statement only). For DETACH, set it to false (expire all existing ** statements). */ sqlite3VdbeAddOp1(v, OP_Expire, (type==SQLITE_ATTACH)); @@ -67234,21 +93818,20 @@ ** Called by the parser to compile a DETACH statement. 
** ** DETACH pDbname */ SQLITE_PRIVATE void sqlite3Detach(Parse *pParse, Expr *pDbname){ - static FuncDef detach_func = { + static const FuncDef detach_func = { 1, /* nArg */ - SQLITE_UTF8, /* iPrefEnc */ - 0, /* flags */ + SQLITE_UTF8, /* funcFlags */ 0, /* pUserData */ 0, /* pNext */ - detachFunc, /* xFunc */ - 0, /* xStep */ + detachFunc, /* xSFunc */ 0, /* xFinalize */ "sqlite_detach", /* zName */ - 0 /* pHash */ + 0, /* pHash */ + 0 /* pDestructor */ }; codeAttach(pParse, SQLITE_DETACH, &detach_func, pDbname, 0, 0, pDbname); } /* @@ -67255,50 +93838,46 @@ ** Called by the parser to compile an ATTACH statement. ** ** ATTACH p AS pDbname KEY pKey */ SQLITE_PRIVATE void sqlite3Attach(Parse *pParse, Expr *p, Expr *pDbname, Expr *pKey){ - static FuncDef attach_func = { + static const FuncDef attach_func = { 3, /* nArg */ - SQLITE_UTF8, /* iPrefEnc */ - 0, /* flags */ + SQLITE_UTF8, /* funcFlags */ 0, /* pUserData */ 0, /* pNext */ - attachFunc, /* xFunc */ - 0, /* xStep */ + attachFunc, /* xSFunc */ 0, /* xFinalize */ "sqlite_attach", /* zName */ - 0 /* pHash */ + 0, /* pHash */ + 0 /* pDestructor */ }; codeAttach(pParse, SQLITE_ATTACH, &attach_func, p, p, pDbname, pKey); } #endif /* SQLITE_OMIT_ATTACH */ /* ** Initialize a DbFixer structure. This routine must be called prior ** to passing the structure to one of the sqliteFixAAAA() routines below. -** -** The return value indicates whether or not fixation is required. TRUE -** means we do need to fix the database references, FALSE means we do not. */ -SQLITE_PRIVATE int sqlite3FixInit( +SQLITE_PRIVATE void sqlite3FixInit( DbFixer *pFix, /* The fixer to be initialized */ Parse *pParse, /* Error messages will be written here */ int iDb, /* This is the database that must be used */ const char *zType, /* "view", "trigger", or "index" */ const Token *pName /* Name of the view, trigger, or index */ ){ sqlite3 *db; - if( NEVER(iDb<0) || iDb==1 ) return 0; db = pParse->db; assert( db->nDb>iDb ); pFix->pParse = pParse; pFix->zDb = db->aDb[iDb].zName; + pFix->pSchema = db->aDb[iDb].pSchema; pFix->zType = zType; pFix->pName = pName; - return 1; + pFix->bVarOnly = (iDb==1); } /* ** The following set of routines walk through the parse tree and assign ** a specific database to all table references where the database name @@ -67322,17 +93901,20 @@ struct SrcList_item *pItem; if( NEVER(pList==0) ) return 0; zDb = pFix->zDb; for(i=0, pItem=pList->a; i<pList->nSrc; i++, pItem++){ - if( pItem->zDatabase==0 ){ - pItem->zDatabase = sqlite3DbStrDup(pFix->pParse->db, zDb); - }else if( sqlite3StrICmp(pItem->zDatabase,zDb)!=0 ){ - sqlite3ErrorMsg(pFix->pParse, - "%s %T cannot reference objects in database %s", - pFix->zType, pFix->pName, pItem->zDatabase); - return 1; + if( pFix->bVarOnly==0 ){ + if( pItem->zDatabase && sqlite3StrICmp(pItem->zDatabase, zDb) ){ + sqlite3ErrorMsg(pFix->pParse, + "%s %T cannot reference objects in database %s", + pFix->zType, pFix->pName, pItem->zDatabase); + return 1; + } + sqlite3DbFree(pFix->pParse->db, pItem->zDatabase); + pItem->zDatabase = 0; + pItem->pSchema = pFix->pSchema; } #if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_TRIGGER) if( sqlite3FixSelect(pFix, pItem->pSelect) ) return 1; if( sqlite3FixExpr(pFix, pItem->pOn) ) return 1; #endif @@ -67351,13 +93933,25 @@ if( sqlite3FixSrcList(pFix, pSelect->pSrc) ){ return 1; } if( sqlite3FixExpr(pFix, pSelect->pWhere) ){ return 1; + } + if( sqlite3FixExprList(pFix, pSelect->pGroupBy) ){ + return 1; } if( sqlite3FixExpr(pFix, pSelect->pHaving) ){ return 1; + } + if( 
sqlite3FixExprList(pFix, pSelect->pOrderBy) ){ + return 1; + } + if( sqlite3FixExpr(pFix, pSelect->pLimit) ){ + return 1; + } + if( sqlite3FixExpr(pFix, pSelect->pOffset) ){ + return 1; } pSelect = pSelect->pPrior; } return 0; } @@ -67364,11 +93958,19 @@ SQLITE_PRIVATE int sqlite3FixExpr( DbFixer *pFix, /* Context of the fixation */ Expr *pExpr /* The expression to be fixed to one database */ ){ while( pExpr ){ - if( ExprHasAnyProperty(pExpr, EP_TokenOnly) ) break; + if( pExpr->op==TK_VARIABLE ){ + if( pFix->pParse->db->init.busy ){ + pExpr->op = TK_NULL; + }else{ + sqlite3ErrorMsg(pFix->pParse, "%s cannot use variables", pFix->zType); + return 1; + } + } + if( ExprHasProperty(pExpr, EP_TokenOnly) ) break; if( ExprHasProperty(pExpr, EP_xIsSelect) ){ if( sqlite3FixSelect(pFix, pExpr->x.pSelect) ) return 1; }else{ if( sqlite3FixExprList(pFix, pExpr->x.pList) ) return 1; } @@ -67432,10 +94034,11 @@ ** This file contains code used to implement the sqlite3_set_authorizer() ** API. This facility is an optional feature of the library. Embedded ** systems that do not need this facility may omit it by recompiling ** the library with -DSQLITE_OMIT_AUTHORIZATION=1 */ +/* #include "sqliteInt.h" */ /* ** All of the code in this file may be omitted by defining a single ** macro. */ @@ -67484,17 +94087,20 @@ ** and attempts to write the column will be ignored. ** ** Setting the auth function to NULL disables this hook. The default ** setting of the auth function is NULL. */ -SQLITE_API int sqlite3_set_authorizer( +SQLITE_API int SQLITE_STDCALL sqlite3_set_authorizer( sqlite3 *db, int (*xAuth)(void*,int,const char*,const char*,const char*,const char*), void *pArg ){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT; +#endif sqlite3_mutex_enter(db->mutex); - db->xAuth = xAuth; + db->xAuth = (sqlite3_xauth)xAuth; db->pAuthArg = pArg; sqlite3ExpirePreparedStatements(db); sqlite3_mutex_leave(db->mutex); return SQLITE_OK; } @@ -67525,11 +94131,15 @@ ){ sqlite3 *db = pParse->db; /* Database handle */ char *zDb = db->aDb[iDb].zName; /* Name of attached database */ int rc; /* Auth callback return code */ - rc = db->xAuth(db->pAuthArg, SQLITE_READ, zTab,zCol,zDb,pParse->zAuthContext); + rc = db->xAuth(db->pAuthArg, SQLITE_READ, zTab,zCol,zDb,pParse->zAuthContext +#ifdef SQLITE_USER_AUTHENTICATION + ,db->auth.zAuthUser +#endif + ); if( rc==SQLITE_DENY ){ if( db->nDb>2 || iDb!=0 ){ sqlite3ErrorMsg(pParse, "access to %s.%s.%s is prohibited",zDb,zTab,zCol); }else{ sqlite3ErrorMsg(pParse, "access to %s.%s is prohibited", zTab, zCol); @@ -67625,11 +94235,15 @@ } if( db->xAuth==0 ){ return SQLITE_OK; } - rc = db->xAuth(db->pAuthArg, code, zArg1, zArg2, zArg3, pParse->zAuthContext); + rc = db->xAuth(db->pAuthArg, code, zArg1, zArg2, zArg3, pParse->zAuthContext +#ifdef SQLITE_USER_AUTHENTICATION + ,db->auth.zAuthUser +#endif + ); if( rc==SQLITE_DENY ){ sqlite3ErrorMsg(pParse, "not authorized"); pParse->rc = SQLITE_AUTH; }else if( rc!=SQLITE_OK && rc!=SQLITE_IGNORE ){ rc = SQLITE_DENY; @@ -67691,19 +94305,11 @@ ** creating ID lists ** BEGIN TRANSACTION ** COMMIT ** ROLLBACK */ - -/* -** This routine is called when a new SQL statement is beginning to -** be parsed. Initialize the pParse structure as needed. 
-*/ -SQLITE_PRIVATE void sqlite3BeginParse(Parse *pParse, int explainFlag){ - pParse->explain = (u8)explainFlag; - pParse->nVar = 0; -} +/* #include "sqliteInt.h" */ #ifndef SQLITE_OMIT_SHARED_CACHE /* ** The TableLock structure is only used by the sqlite3TableLock() and ** codeTableLocks() functions. @@ -67755,11 +94361,11 @@ p->iTab = iTab; p->isWriteLock = isWriteLock; p->zName = zName; }else{ pToplevel->nTableLock = 0; - pToplevel->db->mallocFailed = 1; + sqlite3OomFault(pToplevel->db); } } /* ** Code an OP_TableLock instruction for each table locked by the @@ -67781,10 +94387,23 @@ } #else #define codeTableLocks(x) #endif +/* +** Return TRUE if the given yDbMask object is empty - if it contains no +** 1 bits. This routine is used by the DbMaskAllZero() and DbMaskNotZero() +** macros when SQLITE_MAX_ATTACHED is greater than 30. +*/ +#if SQLITE_MAX_ATTACHED>30 +SQLITE_PRIVATE int sqlite3DbMaskAllZero(yDbMask m){ + int i; + for(i=0; i<sizeof(yDbMask); i++) if( m[i] ) return 0; + return 1; +} +#endif + /* ** This routine is called after a single SQL statement has been ** parsed and a VDBE program to execute that statement has been ** prepared. This routine puts the finishing touches on the ** VDBE program and resets the pParse structure for the next @@ -67795,51 +94414,71 @@ */ SQLITE_PRIVATE void sqlite3FinishCoding(Parse *pParse){ sqlite3 *db; Vdbe *v; + assert( pParse->pToplevel==0 ); db = pParse->db; - if( db->mallocFailed ) return; if( pParse->nested ) return; - if( pParse->nErr ) return; + if( db->mallocFailed || pParse->nErr ){ + if( pParse->rc==SQLITE_OK ) pParse->rc = SQLITE_ERROR; + return; + } /* Begin by generating some termination code at the end of the ** vdbe program */ v = sqlite3GetVdbe(pParse); assert( !pParse->isMultiWrite || sqlite3VdbeAssertMayAbort(v, pParse->mayAbort)); if( v ){ + while( sqlite3VdbeDeletePriorOpcode(v, OP_Close) ){} sqlite3VdbeAddOp0(v, OP_Halt); + +#if SQLITE_USER_AUTHENTICATION + if( pParse->nTableLock>0 && db->init.busy==0 ){ + sqlite3UserAuthInit(db); + if( db->auth.authLevel<UAUTH_User ){ + pParse->rc = SQLITE_AUTH_USER; + sqlite3ErrorMsg(pParse, "user not authenticated"); + return; + } + } +#endif /* The cookie mask contains one bit for each database file open. ** (Bit 0 is for main, bit 1 is for temp, and so forth.) Bits are ** set for each database that is used. Generate code to start a ** transaction on each used database and to verify the schema cookie ** on each used database. 
*/ - if( pParse->cookieGoto>0 ){ - u32 mask; - int iDb; - sqlite3VdbeJumpHere(v, pParse->cookieGoto-1); - for(iDb=0, mask=1; iDb<db->nDb; mask<<=1, iDb++){ - if( (mask & pParse->cookieMask)==0 ) continue; + if( db->mallocFailed==0 + && (DbMaskNonZero(pParse->cookieMask) || pParse->pConstExpr) + ){ + int iDb, i; + assert( sqlite3VdbeGetOp(v, 0)->opcode==OP_Init ); + sqlite3VdbeJumpHere(v, 0); + for(iDb=0; iDb<db->nDb; iDb++){ + if( DbMaskTest(pParse->cookieMask, iDb)==0 ) continue; sqlite3VdbeUsesBtree(v, iDb); - sqlite3VdbeAddOp2(v,OP_Transaction, iDb, (mask & pParse->writeMask)!=0); - if( db->init.busy==0 ){ - sqlite3VdbeAddOp2(v,OP_VerifyCookie, iDb, pParse->cookieValue[iDb]); - } + sqlite3VdbeAddOp4Int(v, + OP_Transaction, /* Opcode */ + iDb, /* P1 */ + DbMaskTest(pParse->writeMask,iDb), /* P2 */ + pParse->cookieValue[iDb], /* P3 */ + db->aDb[iDb].pSchema->iGeneration /* P4 */ + ); + if( db->init.busy==0 ) sqlite3VdbeChangeP5(v, 1); + VdbeComment((v, + "usesStmtJournal=%d", pParse->mayAbort && pParse->isMultiWrite)); } #ifndef SQLITE_OMIT_VIRTUALTABLE - { - int i; - for(i=0; i<pParse->nVtabLock; i++){ - char *vtab = (char *)sqlite3GetVTable(db, pParse->apVtabLock[i]); - sqlite3VdbeAddOp4(v, OP_VBegin, 0, 0, 0, vtab, P4_VTAB); - } - pParse->nVtabLock = 0; - } + for(i=0; i<pParse->nVtabLock; i++){ + char *vtab = (char *)sqlite3GetVTable(db, pParse->apVtabLock[i]); + sqlite3VdbeAddOp4(v, OP_VBegin, 0, 0, 0, vtab, P4_VTAB); + } + pParse->nVtabLock = 0; #endif /* Once all the cookies have been verified and transactions opened, ** obtain the required table-locks. This is a no-op unless the ** shared-cache feature is enabled. @@ -67847,42 +94486,48 @@ codeTableLocks(pParse); /* Initialize any AUTOINCREMENT data structures required. */ sqlite3AutoincrementBegin(pParse); + + /* Code constant expressions that where factored out of inner loops */ + if( pParse->pConstExpr ){ + ExprList *pEL = pParse->pConstExpr; + pParse->okConstFactor = 0; + for(i=0; i<pEL->nExpr; i++){ + sqlite3ExprCode(pParse, pEL->a[i].pExpr, pEL->a[i].u.iConstExprReg); + } + } /* Finally, jump back to the beginning of the executable code. */ - sqlite3VdbeAddOp2(v, OP_Goto, 0, pParse->cookieGoto); + sqlite3VdbeGoto(v, 1); } } /* Get the VDBE program ready for execution */ - if( v && ALWAYS(pParse->nErr==0) && !db->mallocFailed ){ -#ifdef SQLITE_DEBUG - FILE *trace = (db->flags & SQLITE_VdbeTrace)!=0 ? stdout : 0; - sqlite3VdbeTrace(v, trace); -#endif + if( v && pParse->nErr==0 && !db->mallocFailed ){ assert( pParse->iCacheLevel==0 ); /* Disables and re-enables match */ /* A minimum of one cursor is required if autoincrement is used * See ticket [a696379c1f08866] */ if( pParse->pAinc!=0 && pParse->nTab==0 ) pParse->nTab = 1; - sqlite3VdbeMakeReady(v, pParse->nVar, pParse->nMem, - pParse->nTab, pParse->nMaxArg, pParse->explain, - pParse->isMultiWrite && pParse->mayAbort); + sqlite3VdbeMakeReady(v, pParse); pParse->rc = SQLITE_DONE; - pParse->colNamesSet = 0; }else{ pParse->rc = SQLITE_ERROR; } + + /* We are done with this Parse object. 
There is no need to de-initialize it */ +#if 0 + pParse->colNamesSet = 0; pParse->nTab = 0; pParse->nMem = 0; pParse->nSet = 0; pParse->nVar = 0; - pParse->cookieMask = 0; - pParse->cookieGoto = 0; + DbMaskZero(pParse->cookieMask); +#endif } /* ** Run the parser and code generator recursively in order to generate ** code for the SQL statement given onto the end of the pParse context @@ -67919,10 +94564,20 @@ sqlite3DbFree(db, zSql); memcpy(&pParse->nVar, saveBuf, SAVE_SZ); pParse->nested--; } +#if SQLITE_USER_AUTHENTICATION +/* +** Return TRUE if zTable is the name of the system table that stores the +** list of users and their access credentials. +*/ +SQLITE_PRIVATE int sqlite3UserAuthTable(const char *zTable){ + return sqlite3_stricmp(zTable, "sqlite_user")==0; +} +#endif + /* ** Locate the in-memory structure that describes a particular database ** table given the name of that table and (optionally) the name of the ** database containing the table. Return NULL if not found. ** @@ -67934,17 +94589,25 @@ ** See also sqlite3LocateTable(). */ SQLITE_PRIVATE Table *sqlite3FindTable(sqlite3 *db, const char *zName, const char *zDatabase){ Table *p = 0; int i; - int nName; - assert( zName!=0 ); - nName = sqlite3Strlen30(zName); + + /* All mutexes are required for schema access. Make sure we hold them. */ + assert( zDatabase!=0 || sqlite3BtreeHoldsAllMutexes(db) ); +#if SQLITE_USER_AUTHENTICATION + /* Only the admin user is allowed to know that the sqlite_user table + ** exists */ + if( db->auth.authLevel<UAUTH_Admin && sqlite3UserAuthTable(zName)!=0 ){ + return 0; + } +#endif for(i=OMIT_TEMPDB; i<db->nDb; i++){ int j = (i<2) ? i^1 : i; /* Search TEMP before MAIN */ if( zDatabase!=0 && sqlite3StrICmp(zDatabase, db->aDb[j].zName) ) continue; - p = sqlite3HashFind(&db->aDb[j].pSchema->tblHash, zName, nName); + assert( sqlite3SchemaMutexHeld(db, j, 0) ); + p = sqlite3HashFind(&db->aDb[j].pSchema->tblHash, zName); if( p ) break; } return p; } @@ -67973,19 +94636,56 @@ } p = sqlite3FindTable(pParse->db, zName, zDbase); if( p==0 ){ const char *zMsg = isView ? "no such view" : "no such table"; +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( sqlite3FindDbName(pParse->db, zDbase)<1 ){ + /* If zName is the not the name of a table in the schema created using + ** CREATE, then check to see if it is the name of an virtual table that + ** can be an eponymous virtual table. */ + Module *pMod = (Module*)sqlite3HashFind(&pParse->db->aModule, zName); + if( pMod && sqlite3VtabEponymousTableInit(pParse, pMod) ){ + return pMod->pEpoTab; + } + } +#endif if( zDbase ){ sqlite3ErrorMsg(pParse, "%s: %s.%s", zMsg, zDbase, zName); }else{ sqlite3ErrorMsg(pParse, "%s: %s", zMsg, zName); } pParse->checkSchema = 1; } + return p; } + +/* +** Locate the table identified by *p. +** +** This is a wrapper around sqlite3LocateTable(). The difference between +** sqlite3LocateTable() and this function is that this function restricts +** the search to schema (p->pSchema) if it is not NULL. p->pSchema may be +** non-NULL if it is part of a view or trigger program definition. See +** sqlite3FixSrcList() for details. 
+*/ +SQLITE_PRIVATE Table *sqlite3LocateTableItem( + Parse *pParse, + int isView, + struct SrcList_item *p +){ + const char *zDb; + assert( p->pSchema==0 || p->zDatabase==0 ); + if( p->pSchema ){ + int iDb = sqlite3SchemaToIndex(pParse->db, p->pSchema); + zDb = pParse->db->aDb[iDb].zName; + }else{ + zDb = p->zDatabase; + } + return sqlite3LocateTable(pParse, isView, p->zName, zDb); +} /* ** Locate the in-memory structure that describes ** a particular index given the name of that index ** and the name of the database that contains the index. @@ -67998,66 +94698,55 @@ ** using the ATTACH command. */ SQLITE_PRIVATE Index *sqlite3FindIndex(sqlite3 *db, const char *zName, const char *zDb){ Index *p = 0; int i; - int nName = sqlite3Strlen30(zName); + /* All mutexes are required for schema access. Make sure we hold them. */ + assert( zDb!=0 || sqlite3BtreeHoldsAllMutexes(db) ); for(i=OMIT_TEMPDB; i<db->nDb; i++){ int j = (i<2) ? i^1 : i; /* Search TEMP before MAIN */ Schema *pSchema = db->aDb[j].pSchema; assert( pSchema ); if( zDb && sqlite3StrICmp(zDb, db->aDb[j].zName) ) continue; - p = sqlite3HashFind(&pSchema->idxHash, zName, nName); + assert( sqlite3SchemaMutexHeld(db, j, 0) ); + p = sqlite3HashFind(&pSchema->idxHash, zName); if( p ) break; } return p; } /* ** Reclaim the memory used by an index */ -static void freeIndex(Index *p){ - sqlite3 *db = p->pTable->dbMem; +static void freeIndex(sqlite3 *db, Index *p){ #ifndef SQLITE_OMIT_ANALYZE - sqlite3DeleteIndexSamples(p); + sqlite3DeleteIndexSamples(db, p); #endif + sqlite3ExprDelete(db, p->pPartIdxWhere); + sqlite3ExprListDelete(db, p->aColExpr); sqlite3DbFree(db, p->zColAff); + if( p->isResized ) sqlite3DbFree(db, (void *)p->azColl); +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + sqlite3_free(p->aiRowEst); +#endif sqlite3DbFree(db, p); } -/* -** Remove the given index from the index hash table, and free -** its memory structures. -** -** The index is removed from the database hash tables but -** it is not unlinked from the Table that it indexes. -** Unlinking from the Table must be done by the calling function. -*/ -static void sqlite3DeleteIndex(Index *p){ - Index *pOld; - const char *zName = p->zName; - - pOld = sqlite3HashInsert(&p->pSchema->idxHash, zName, - sqlite3Strlen30(zName), 0); - assert( pOld==0 || pOld==p ); - freeIndex(p); -} - /* ** For the index called zIdxName which is found in the database iDb, ** unlike that index from its Table then remove the index from ** the index hash table and free all memory structures associated ** with the index. */ SQLITE_PRIVATE void sqlite3UnlinkAndDeleteIndex(sqlite3 *db, int iDb, const char *zIdxName){ Index *pIndex; - int len; - Hash *pHash = &db->aDb[iDb].pSchema->idxHash; + Hash *pHash; - len = sqlite3Strlen30(zIdxName); - pIndex = sqlite3HashInsert(pHash, zIdxName, len, 0); - if( pIndex ){ + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); + pHash = &db->aDb[iDb].pSchema->idxHash; + pIndex = sqlite3HashInsert(pHash, zIdxName, 0); + if( ALWAYS(pIndex) ){ if( pIndex->pTable->pIndex==pIndex ){ pIndex->pTable->pIndex = pIndex->pNext; }else{ Index *p; /* Justification of ALWAYS(); The index must be on the list of @@ -68066,52 +94755,25 @@ while( ALWAYS(p) && p->pNext!=pIndex ){ p = p->pNext; } if( ALWAYS(p && p->pNext==pIndex) ){ p->pNext = pIndex->pNext; } } - freeIndex(pIndex); + freeIndex(db, pIndex); } db->flags |= SQLITE_InternChanges; } /* -** Erase all schema information from the in-memory hash tables of -** a single database. 
This routine is called to reclaim memory -** before the database closes. It is also called during a rollback -** if there were schema changes during the transaction or if a -** schema-cookie mismatch occurs. +** Look through the list of open database files in db->aDb[] and if +** any have been closed, remove them from the list. Reallocate the +** db->aDb[] structure to a smaller size, if possible. ** -** If iDb==0 then reset the internal schema tables for all database -** files. If iDb>=1 then reset the internal schema for only the -** single file indicated. -*/ -SQLITE_PRIVATE void sqlite3ResetInternalSchema(sqlite3 *db, int iDb){ - int i, j; - assert( iDb>=0 && iDb<db->nDb ); - - if( iDb==0 ){ - sqlite3BtreeEnterAll(db); - } - for(i=iDb; i<db->nDb; i++){ - Db *pDb = &db->aDb[i]; - if( pDb->pSchema ){ - assert(i==1 || (pDb->pBt && sqlite3BtreeHoldsMutex(pDb->pBt))); - sqlite3SchemaFree(pDb->pSchema); - } - if( iDb>0 ) return; - } - assert( iDb==0 ); - db->flags &= ~SQLITE_InternChanges; - sqlite3VtabUnlockList(db); - sqlite3BtreeLeaveAll(db); - - /* If one or more of the auxiliary database files has been closed, - ** then remove them from the auxiliary database list. We take the - ** opportunity to do this here since we have just deleted all of the - ** schema hash tables and therefore do not have to make any changes - ** to any of those tables. - */ +** Entry 0 (the "main" database) and entry 1 (the "temp" database) +** are never candidates for being collapsed. +*/ +SQLITE_PRIVATE void sqlite3CollapseDatabaseArray(sqlite3 *db){ + int i, j; for(i=j=2; i<db->nDb; i++){ struct Db *pDb = &db->aDb[i]; if( pDb->pBt==0 ){ sqlite3DbFree(db, pDb->zName); pDb->zName = 0; @@ -68120,34 +94782,77 @@ if( j<i ){ db->aDb[j] = db->aDb[i]; } j++; } - memset(&db->aDb[j], 0, (db->nDb-j)*sizeof(db->aDb[j])); db->nDb = j; if( db->nDb<=2 && db->aDb!=db->aDbStatic ){ memcpy(db->aDbStatic, db->aDb, 2*sizeof(db->aDb[0])); sqlite3DbFree(db, db->aDb); db->aDb = db->aDbStatic; } } + +/* +** Reset the schema for the database at index iDb. Also reset the +** TEMP schema. +*/ +SQLITE_PRIVATE void sqlite3ResetOneSchema(sqlite3 *db, int iDb){ + Db *pDb; + assert( iDb<db->nDb ); + + /* Case 1: Reset the single schema identified by iDb */ + pDb = &db->aDb[iDb]; + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); + assert( pDb->pSchema!=0 ); + sqlite3SchemaClear(pDb->pSchema); + + /* If any database other than TEMP is reset, then also reset TEMP + ** since TEMP might be holding triggers that reference tables in the + ** other database. + */ + if( iDb!=1 ){ + pDb = &db->aDb[1]; + assert( pDb->pSchema!=0 ); + sqlite3SchemaClear(pDb->pSchema); + } + return; +} + +/* +** Erase all schema information from all attached databases (including +** "main" and "temp") for a single database connection. +*/ +SQLITE_PRIVATE void sqlite3ResetAllSchemasOfConnection(sqlite3 *db){ + int i; + sqlite3BtreeEnterAll(db); + for(i=0; i<db->nDb; i++){ + Db *pDb = &db->aDb[i]; + if( pDb->pSchema ){ + sqlite3SchemaClear(pDb->pSchema); + } + } + db->flags &= ~SQLITE_InternChanges; + sqlite3VtabUnlockList(db); + sqlite3BtreeLeaveAll(db); + sqlite3CollapseDatabaseArray(db); +} /* ** This routine is called when a commit occurs. */ SQLITE_PRIVATE void sqlite3CommitInternalChanges(sqlite3 *db){ db->flags &= ~SQLITE_InternChanges; } /* -** Clear the column names from a table or view. +** Delete memory allocated for the column names of a table or view (the +** Table.aCol[] array). 
*/ -static void sqliteResetColumnNames(Table *pTable){ +SQLITE_PRIVATE void sqlite3DeleteColumnNames(sqlite3 *db, Table *pTable){ int i; Column *pCol; - sqlite3 *db = pTable->dbMem; - testcase( db==0 ); assert( pTable!=0 ); if( (pCol = pTable->aCol)!=0 ){ for(i=0; i<pTable->nCol; i++, pCol++){ sqlite3DbFree(db, pCol->zName); sqlite3ExprDelete(db, pCol->pDflt); @@ -68155,12 +94860,10 @@ sqlite3DbFree(db, pCol->zType); sqlite3DbFree(db, pCol->zColl); } sqlite3DbFree(db, pTable->aCol); } - pTable->aCol = 0; - pTable->nCol = 0; } /* ** Remove the memory data structures associated with the given ** Table. No changes are made to disk by this routine. @@ -68167,48 +94870,65 @@ ** ** This routine just deletes the data structure. It does not unlink ** the table data structure from the hash table. But it does destroy ** memory structures of the indices and foreign keys associated with ** the table. +** +** The db parameter is optional. It is needed if the Table object +** contains lookaside memory. (Table objects in the schema do not use +** lookaside memory, but some ephemeral Table objects do.) Or the +** db parameter can be used with db->pnBytesFreed to measure the memory +** used by the Table object. */ -SQLITE_PRIVATE void sqlite3DeleteTable(Table *pTable){ +SQLITE_PRIVATE void sqlite3DeleteTable(sqlite3 *db, Table *pTable){ Index *pIndex, *pNext; - sqlite3 *db; + TESTONLY( int nLookaside; ) /* Used to verify lookaside not used for schema */ - if( pTable==0 ) return; - db = pTable->dbMem; - testcase( db==0 ); + assert( !pTable || pTable->nRef>0 ); /* Do not delete the table until the reference count reaches zero. */ - pTable->nRef--; - if( pTable->nRef>0 ){ - return; - } - assert( pTable->nRef==0 ); - - /* Delete all indices associated with this table - */ + if( !pTable ) return; + if( ((!db || db->pnBytesFreed==0) && (--pTable->nRef)>0) ) return; + + /* Record the number of outstanding lookaside allocations in schema Tables + ** prior to doing any free() operations. Since schema Tables do not use + ** lookaside, this number should not change. */ + TESTONLY( nLookaside = (db && (pTable->tabFlags & TF_Ephemeral)==0) ? + db->lookaside.nOut : 0 ); + + /* Delete all indices associated with this table. */ for(pIndex = pTable->pIndex; pIndex; pIndex=pNext){ pNext = pIndex->pNext; assert( pIndex->pSchema==pTable->pSchema ); - sqlite3DeleteIndex(pIndex); + if( !db || db->pnBytesFreed==0 ){ + char *zName = pIndex->zName; + TESTONLY ( Index *pOld = ) sqlite3HashInsert( + &pIndex->pSchema->idxHash, zName, 0 + ); + assert( db==0 || sqlite3SchemaMutexHeld(db, 0, pIndex->pSchema) ); + assert( pOld==pIndex || pOld==0 ); + } + freeIndex(db, pIndex); } /* Delete any foreign keys attached to this table. */ - sqlite3FkDelete(pTable); + sqlite3FkDelete(db, pTable); /* Delete the Table structure itself. */ - sqliteResetColumnNames(pTable); + sqlite3DeleteColumnNames(db, pTable); sqlite3DbFree(db, pTable->zName); sqlite3DbFree(db, pTable->zColAff); sqlite3SelectDelete(db, pTable->pSelect); -#ifndef SQLITE_OMIT_CHECK - sqlite3ExprDelete(db, pTable->pCheck); + sqlite3ExprListDelete(db, pTable->pCheck); +#ifndef SQLITE_OMIT_VIRTUALTABLE + sqlite3VtabClear(db, pTable); #endif - sqlite3VtabClear(pTable); sqlite3DbFree(db, pTable); + + /* Verify that no lookaside memory was used by schema tables */ + assert( nLookaside==0 || nLookaside==db->lookaside.nOut ); } /* ** Unlink the given table from the hash tables and the delete the ** table structure with all its indices and foreign keys. 
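/*
** A minimal sketch of the unlink idiom used by sqlite3UnlinkAndDeleteIndex()
** above and by the routine whose body follows: passing a NULL data pointer
** to sqlite3HashInsert() removes the named entry from the hash table and
** returns the object (if any) previously stored under that key.  The helper
** name below is illustrative only.
*/
#if 0
static Table *unlinkTableByName(Hash *pTblHash, const char *zTabName){
  return (Table*)sqlite3HashInsert(pTblHash, zTabName, 0);
}
#endif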
@@ -68218,15 +94938,15 @@ Db *pDb; assert( db!=0 ); assert( iDb>=0 && iDb<db->nDb ); assert( zTabName ); + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); testcase( zTabName[0]==0 ); /* Zero-length table names are allowed */ pDb = &db->aDb[iDb]; - p = sqlite3HashInsert(&pDb->pSchema->tblHash, zTabName, - sqlite3Strlen30(zTabName),0); - sqlite3DeleteTable(p); + p = sqlite3HashInsert(&pDb->pSchema->tblHash, zTabName, 0); + sqlite3DeleteTable(db, p); db->flags |= SQLITE_InternChanges; } /* ** Given a token, return a string that consists of the text of that @@ -68257,12 +94977,11 @@ ** writing. The table is opened using cursor 0. */ SQLITE_PRIVATE void sqlite3OpenMasterTable(Parse *p, int iDb){ Vdbe *v = sqlite3GetVdbe(p); sqlite3TableLock(p, iDb, MASTER_ROOT, 1, SCHEMA_TABLE(iDb)); - sqlite3VdbeAddOp3(v, OP_OpenWrite, 0, MASTER_ROOT, iDb); - sqlite3VdbeChangeP4(v, -1, (char *)5, P4_INT32); /* 5 column table */ + sqlite3VdbeAddOp4Int(v, OP_OpenWrite, 0, MASTER_ROOT, iDb, 5); if( p->nTab==0 ){ p->nTab = 1; } } @@ -68325,21 +95044,20 @@ Token **pUnqual /* Write the unqualified object name here */ ){ int iDb; /* Database holding the object */ sqlite3 *db = pParse->db; - if( ALWAYS(pName2!=0) && pName2->n>0 ){ + assert( pName2!=0 ); + if( pName2->n>0 ){ if( db->init.busy ) { sqlite3ErrorMsg(pParse, "corrupt database"); - pParse->nErr++; return -1; } *pUnqual = pName2; iDb = sqlite3FindDb(db, pName1); if( iDb<0 ){ sqlite3ErrorMsg(pParse, "unknown database %T", pName1); - pParse->nErr++; return -1; } }else{ assert( db->init.iDb==0 || db->init.busy ); iDb = db->init.iDb; @@ -68362,10 +95080,31 @@ sqlite3ErrorMsg(pParse, "object name reserved for internal use: %s", zName); return SQLITE_ERROR; } return SQLITE_OK; } + +/* +** Return the PRIMARY KEY index of a table +*/ +SQLITE_PRIVATE Index *sqlite3PrimaryKeyIndex(Table *pTab){ + Index *p; + for(p=pTab->pIndex; p && !IsPrimaryKeyIndex(p); p=p->pNext){} + return p; +} + +/* +** Return the column of index pIdx that corresponds to table +** column iCol. Return -1 if not found. +*/ +SQLITE_PRIVATE i16 sqlite3ColumnOfIndex(Index *pIdx, i16 iCol){ + int i; + for(i=0; i<pIdx->nColumn; i++){ + if( iCol==pIdx->aiColumn[i] ) return i; + } + return -1; +} /* ** Begin constructing a new table representation in memory. This is ** the first of several action routines that get called in response ** to a CREATE TABLE statement. In particular, this routine is called @@ -68395,65 +95134,50 @@ sqlite3 *db = pParse->db; Vdbe *v; int iDb; /* Database number to create the table in */ Token *pName; /* Unqualified name of the table to create */ - /* The table or view name to create is passed to this routine via tokens - ** pName1 and pName2. If the table name was fully qualified, for example: - ** - ** CREATE TABLE xxx.yyy (...); - ** - ** Then pName1 is set to "xxx" and pName2 "yyy". On the other hand if - ** the table name is not fully qualified, i.e.: - ** - ** CREATE TABLE yyy(...); - ** - ** Then pName1 is set to "yyy" and pName2 is "". - ** - ** The call below sets the pName pointer to point at the token (pName1 or - ** pName2) that stores the unqualified table name. The variable iDb is - ** set to the index of the database that the table or view is to be - ** created in. 
- */ - iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pName); - if( iDb<0 ) return; - if( !OMIT_TEMPDB && isTemp && iDb>1 ){ - /* If creating a temp table, the name may not be qualified */ - sqlite3ErrorMsg(pParse, "temporary table name must be unqualified"); - return; - } - if( !OMIT_TEMPDB && isTemp ) iDb = 1; - - pParse->sNameToken = *pName; - zName = sqlite3NameFromToken(db, pName); + if( db->init.busy && db->init.newTnum==1 ){ + /* Special case: Parsing the sqlite_master or sqlite_temp_master schema */ + iDb = db->init.iDb; + zName = sqlite3DbStrDup(db, SCHEMA_TABLE(iDb)); + pName = pName1; + }else{ + /* The common case */ + iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pName); + if( iDb<0 ) return; + if( !OMIT_TEMPDB && isTemp && pName2->n>0 && iDb!=1 ){ + /* If creating a temp table, the name may not be qualified. Unless + ** the database name is "temp" anyway. */ + sqlite3ErrorMsg(pParse, "temporary table name must be unqualified"); + return; + } + if( !OMIT_TEMPDB && isTemp ) iDb = 1; + zName = sqlite3NameFromToken(db, pName); + } + pParse->sNameToken = *pName; if( zName==0 ) return; if( SQLITE_OK!=sqlite3CheckObjectName(pParse, zName) ){ goto begin_table_error; } if( db->init.iDb==1 ) isTemp = 1; #ifndef SQLITE_OMIT_AUTHORIZATION - assert( (isTemp & 1)==isTemp ); + assert( isTemp==0 || isTemp==1 ); + assert( isView==0 || isView==1 ); { - int code; + static const u8 aCode[] = { + SQLITE_CREATE_TABLE, + SQLITE_CREATE_TEMP_TABLE, + SQLITE_CREATE_VIEW, + SQLITE_CREATE_TEMP_VIEW + }; char *zDb = db->aDb[iDb].zName; if( sqlite3AuthCheck(pParse, SQLITE_INSERT, SCHEMA_TABLE(isTemp), 0, zDb) ){ goto begin_table_error; } - if( isView ){ - if( !OMIT_TEMPDB && isTemp ){ - code = SQLITE_CREATE_TEMP_VIEW; - }else{ - code = SQLITE_CREATE_VIEW; - } - }else{ - if( !OMIT_TEMPDB && isTemp ){ - code = SQLITE_CREATE_TEMP_TABLE; - }else{ - code = SQLITE_CREATE_TABLE; - } - } - if( !isVirtual && sqlite3AuthCheck(pParse, code, zName, 0, zDb) ){ + if( !isVirtual && sqlite3AuthCheck(pParse, (int)aCode[isTemp+2*isView], + zName, 0, zDb) ){ goto begin_table_error; } } #endif @@ -68463,47 +95187,52 @@ ** to an sqlite3_declare_vtab() call. In that case only the column names ** and types will be used, so there is no need to test for namespace ** collisions. 
*/ if( !IN_DECLARE_VTAB ){ + char *zDb = db->aDb[iDb].zName; if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){ goto begin_table_error; } - pTable = sqlite3FindTable(db, zName, db->aDb[iDb].zName); + pTable = sqlite3FindTable(db, zName, zDb); if( pTable ){ if( !noErr ){ sqlite3ErrorMsg(pParse, "table %T already exists", pName); + }else{ + assert( !db->init.busy || CORRUPT_DB ); + sqlite3CodeVerifySchema(pParse, iDb); } goto begin_table_error; } - if( sqlite3FindIndex(db, zName, 0)!=0 && (iDb==0 || !db->init.busy) ){ + if( sqlite3FindIndex(db, zName, zDb)!=0 ){ sqlite3ErrorMsg(pParse, "there is already an index named %s", zName); goto begin_table_error; } } pTable = sqlite3DbMallocZero(db, sizeof(Table)); if( pTable==0 ){ - db->mallocFailed = 1; + assert( db->mallocFailed ); pParse->rc = SQLITE_NOMEM; pParse->nErr++; goto begin_table_error; } pTable->zName = zName; pTable->iPKey = -1; pTable->pSchema = db->aDb[iDb].pSchema; pTable->nRef = 1; - pTable->dbMem = 0; + pTable->nRowLogEst = 200; assert( 200==sqlite3LogEst(1048576) ); assert( pParse->pNewTable==0 ); pParse->pNewTable = pTable; /* If this is the magic sqlite_sequence table used by autoincrement, ** then record a pointer to this table in the main database structure ** so that INSERT can find the table easily. */ #ifndef SQLITE_OMIT_AUTOINCREMENT if( !pParse->nested && strcmp(zName, "sqlite_sequence")==0 ){ + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); pTable->pSchema->pSeqTab = pTable; } #endif /* Begin generating the code that will insert the table record into @@ -68513,14 +95242,16 @@ ** indices to be created and the table record must come before the ** indices. Hence, the record number for the table must be allocated ** now. */ if( !db->init.busy && (v = sqlite3GetVdbe(pParse))!=0 ){ - int j1; + int addr1; int fileFormat; int reg1, reg2, reg3; - sqlite3BeginWriteOperation(pParse, 0, iDb); + /* nullRow[] is an OP_Record encoding of a row containing 5 NULLs */ + static const char nullRow[] = { 6, 0, 0, 0, 0, 0 }; + sqlite3BeginWriteOperation(pParse, 1, iDb); #ifndef SQLITE_OMIT_VIRTUALTABLE if( isVirtual ){ sqlite3VdbeAddOp0(v, OP_VBegin); } @@ -68532,18 +95263,16 @@ reg1 = pParse->regRowid = ++pParse->nMem; reg2 = pParse->regRoot = ++pParse->nMem; reg3 = ++pParse->nMem; sqlite3VdbeAddOp3(v, OP_ReadCookie, iDb, reg3, BTREE_FILE_FORMAT); sqlite3VdbeUsesBtree(v, iDb); - j1 = sqlite3VdbeAddOp1(v, OP_If, reg3); + addr1 = sqlite3VdbeAddOp1(v, OP_If, reg3); VdbeCoverage(v); fileFormat = (db->flags & SQLITE_LegacyFileFmt)!=0 ? 1 : SQLITE_MAX_FILE_FORMAT; - sqlite3VdbeAddOp2(v, OP_Integer, fileFormat, reg3); - sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, BTREE_FILE_FORMAT, reg3); - sqlite3VdbeAddOp2(v, OP_Integer, ENC(db), reg3); - sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, BTREE_TEXT_ENCODING, reg3); - sqlite3VdbeJumpHere(v, j1); + sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, BTREE_FILE_FORMAT, fileFormat); + sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, BTREE_TEXT_ENCODING, ENC(db)); + sqlite3VdbeJumpHere(v, addr1); /* This just creates a place-holder record in the sqlite_master table. ** The record created does not contain anything yet. It will be replaced ** by the real entry in code generated at sqlite3EndTable(). 
** @@ -68556,15 +95285,15 @@ if( isView || isVirtual ){ sqlite3VdbeAddOp2(v, OP_Integer, 0, reg2); }else #endif { - sqlite3VdbeAddOp2(v, OP_CreateTable, iDb, reg2); + pParse->addrCrTab = sqlite3VdbeAddOp2(v, OP_CreateTable, iDb, reg2); } sqlite3OpenMasterTable(pParse, iDb); sqlite3VdbeAddOp2(v, OP_NewRowid, 0, reg1); - sqlite3VdbeAddOp2(v, OP_Null, 0, reg3); + sqlite3VdbeAddOp4(v, OP_Blob, 6, reg3, 0, nullRow, P4_STATIC); sqlite3VdbeAddOp3(v, OP_Insert, 0, reg3, reg1); sqlite3VdbeChangeP5(v, OPFLAG_APPEND); sqlite3VdbeAddOp0(v, OP_Close); } @@ -68575,22 +95304,23 @@ begin_table_error: sqlite3DbFree(db, zName); return; } -/* -** This macro is used to compare two strings in a case-insensitive manner. -** It is slightly faster than calling sqlite3StrICmp() directly, but -** produces larger code. -** -** WARNING: This macro is not compatible with the strcmp() family. It -** returns true if the two strings are equal, otherwise false. +/* Set properties of a table column based on the (magical) +** name of the column. */ -#define STRICMP(x, y) (\ -sqlite3UpperToLower[*(unsigned char *)(x)]== \ -sqlite3UpperToLower[*(unsigned char *)(y)] \ -&& sqlite3StrICmp((x)+1,(y)+1)==0 ) +#if SQLITE_ENABLE_HIDDEN_COLUMNS +SQLITE_PRIVATE void sqlite3ColumnPropertiesFromName(Table *pTab, Column *pCol){ + if( sqlite3_strnicmp(pCol->zName, "__hidden__", 10)==0 ){ + pCol->colFlags |= COLFLAG_HIDDEN; + }else if( pTab && pCol!=pTab->aCol && (pCol[-1].colFlags & COLFLAG_HIDDEN) ){ + pTab->tabFlags |= TF_OOOHidden; + } +} +#endif + /* ** Add a new column to the table currently being constructed. ** ** The parser calls this routine once for each column declaration @@ -68612,11 +95342,11 @@ } #endif z = sqlite3NameFromToken(db, pName); if( z==0 ) return; for(i=0; i<p->nCol; i++){ - if( STRICMP(z, p->aCol[i].zName) ){ + if( sqlite3_stricmp(z, p->aCol[i].zName)==0 ){ sqlite3ErrorMsg(pParse, "duplicate column name: %s", z); sqlite3DbFree(db, z); return; } } @@ -68630,16 +95360,18 @@ p->aCol = aNew; } pCol = &p->aCol[p->nCol]; memset(pCol, 0, sizeof(p->aCol[0])); pCol->zName = z; + sqlite3ColumnPropertiesFromName(p, pCol); /* If there is no type specified, columns have the default affinity - ** 'NONE'. If there is a type specified, then sqlite3AddColumnType() will + ** 'BLOB'. If there is a type specified, then sqlite3AddColumnType() will ** be called next to set pCol->affinity correctly. */ - pCol->affinity = SQLITE_AFF_NONE; + pCol->affinity = SQLITE_AFF_BLOB; + pCol->szEst = 1; p->nCol++; } /* ** This routine is called by the parser while in the middle of @@ -68669,34 +95401,38 @@ ** -------------------------------- ** 'INT' | SQLITE_AFF_INTEGER ** 'CHAR' | SQLITE_AFF_TEXT ** 'CLOB' | SQLITE_AFF_TEXT ** 'TEXT' | SQLITE_AFF_TEXT -** 'BLOB' | SQLITE_AFF_NONE +** 'BLOB' | SQLITE_AFF_BLOB ** 'REAL' | SQLITE_AFF_REAL ** 'FLOA' | SQLITE_AFF_REAL ** 'DOUB' | SQLITE_AFF_REAL ** ** If none of the substrings in the above table are found, ** SQLITE_AFF_NUMERIC is returned. 
*/ -SQLITE_PRIVATE char sqlite3AffinityType(const char *zIn){ +SQLITE_PRIVATE char sqlite3AffinityType(const char *zIn, u8 *pszEst){ u32 h = 0; char aff = SQLITE_AFF_NUMERIC; + const char *zChar = 0; - if( zIn ) while( zIn[0] ){ + if( zIn==0 ) return aff; + while( zIn[0] ){ h = (h<<8) + sqlite3UpperToLower[(*zIn)&0xff]; zIn++; if( h==(('c'<<24)+('h'<<16)+('a'<<8)+'r') ){ /* CHAR */ - aff = SQLITE_AFF_TEXT; + aff = SQLITE_AFF_TEXT; + zChar = zIn; }else if( h==(('c'<<24)+('l'<<16)+('o'<<8)+'b') ){ /* CLOB */ aff = SQLITE_AFF_TEXT; }else if( h==(('t'<<24)+('e'<<16)+('x'<<8)+'t') ){ /* TEXT */ aff = SQLITE_AFF_TEXT; }else if( h==(('b'<<24)+('l'<<16)+('o'<<8)+'b') /* BLOB */ && (aff==SQLITE_AFF_NUMERIC || aff==SQLITE_AFF_REAL) ){ - aff = SQLITE_AFF_NONE; + aff = SQLITE_AFF_BLOB; + if( zIn[0]=='(' ) zChar = zIn; #ifndef SQLITE_OMIT_FLOATING_POINT }else if( h==(('r'<<24)+('e'<<16)+('a'<<8)+'l') /* REAL */ && aff==SQLITE_AFF_NUMERIC ){ aff = SQLITE_AFF_REAL; }else if( h==(('f'<<24)+('l'<<16)+('o'<<8)+'a') /* FLOA */ @@ -68710,10 +95446,32 @@ aff = SQLITE_AFF_INTEGER; break; } } + /* If pszEst is not NULL, store an estimate of the field size. The + ** estimate is scaled so that the size of an integer is 1. */ + if( pszEst ){ + *pszEst = 1; /* default size is approx 4 bytes */ + if( aff<SQLITE_AFF_NUMERIC ){ + if( zChar ){ + while( zChar[0] ){ + if( sqlite3Isdigit(zChar[0]) ){ + int v = 0; + sqlite3GetInt32(zChar, &v); + v = v/4 + 1; + if( v>255 ) v = 255; + *pszEst = v; /* BLOB(k), VARCHAR(k), CHAR(k) -> r=(k/4+1) */ + break; + } + zChar++; + } + }else{ + *pszEst = 5; /* BLOB, TEXT, CLOB -> r=5 (approx 20 bytes)*/ + } + } + } return aff; } /* ** This routine is called by the parser while in the middle of @@ -68729,13 +95487,14 @@ Column *pCol; p = pParse->pNewTable; if( p==0 || NEVER(p->nCol<1) ) return; pCol = &p->aCol[p->nCol-1]; - assert( pCol->zType==0 ); + assert( pCol->zType==0 || CORRUPT_DB ); + sqlite3DbFree(pParse->db, pCol->zType); pCol->zType = sqlite3NameFromToken(pParse->db, pType); - pCol->affinity = sqlite3AffinityType(pCol->zType); + pCol->affinity = sqlite3AffinityType(pCol->zType, &pCol->szEst); } /* ** The expression is the default value for the most recently added column ** of the table currently under construction. @@ -68751,11 +95510,11 @@ Column *pCol; sqlite3 *db = pParse->db; p = pParse->pNewTable; if( p!=0 ){ pCol = &(p->aCol[p->nCol-1]); - if( !sqlite3ExprIsConstantOrFunction(pSpan->pExpr) ){ + if( !sqlite3ExprIsConstantOrFunction(pSpan->pExpr, db->init.busy) ){ sqlite3ErrorMsg(pParse, "default value of column [%s] is not constant", pCol->zName); }else{ /* A copy of pExpr is used instead of the original, as pExpr contains ** tokens that point to volatile memory. The 'span' of the expression @@ -68768,10 +95527,34 @@ (int)(pSpan->zEnd - pSpan->zStart)); } } sqlite3ExprDelete(db, pSpan->pExpr); } + +/* +** Backwards Compatibility Hack: +** +** Historical versions of SQLite accepted strings as column names in +** indexes and PRIMARY KEY constraints and in UNIQUE constraints. Example: +** +** CREATE TABLE xyz(a,b,c,d,e,PRIMARY KEY('a'),UNIQUE('b','c' COLLATE trim) +** CREATE INDEX abc ON xyz('c','d' DESC,'e' COLLATE nocase DESC); +** +** This is goofy. But to preserve backwards compatibility we continue to +** accept it. This routine does the necessary conversion. It converts +** the expression given in its argument from a TK_STRING into a TK_ID +** if the expression is just a TK_STRING with an optional COLLATE clause. 
+** If the epxression is anything other than TK_STRING, the expression is +** unchanged. +*/ +static void sqlite3StringToId(Expr *p){ + if( p->op==TK_STRING ){ + p->op = TK_ID; + }else if( p->op==TK_COLLATE && p->pLeft->op==TK_STRING ){ + p->pLeft->op = TK_ID; + } +} /* ** Designate the PRIMARY KEY for the table. pList is a list of names ** of columns that form the primary key. If pList is NULL, then the ** most recently added column of the table is the primary key. @@ -68797,52 +95580,61 @@ int sortOrder /* SQLITE_SO_ASC or SQLITE_SO_DESC */ ){ Table *pTab = pParse->pNewTable; char *zType = 0; int iCol = -1, i; + int nTerm; if( pTab==0 || IN_DECLARE_VTAB ) goto primary_key_exit; if( pTab->tabFlags & TF_HasPrimaryKey ){ sqlite3ErrorMsg(pParse, "table \"%s\" has more than one primary key", pTab->zName); goto primary_key_exit; } pTab->tabFlags |= TF_HasPrimaryKey; if( pList==0 ){ iCol = pTab->nCol - 1; - pTab->aCol[iCol].isPrimKey = 1; - }else{ - for(i=0; i<pList->nExpr; i++){ - for(iCol=0; iCol<pTab->nCol; iCol++){ - if( sqlite3StrICmp(pList->a[i].zName, pTab->aCol[iCol].zName)==0 ){ - break; - } - } - if( iCol<pTab->nCol ){ - pTab->aCol[iCol].isPrimKey = 1; - } - } - if( pList->nExpr>1 ) iCol = -1; - } - if( iCol>=0 && iCol<pTab->nCol ){ + pTab->aCol[iCol].colFlags |= COLFLAG_PRIMKEY; zType = pTab->aCol[iCol].zType; + nTerm = 1; + }else{ + nTerm = pList->nExpr; + for(i=0; i<nTerm; i++){ + Expr *pCExpr = sqlite3ExprSkipCollate(pList->a[i].pExpr); + assert( pCExpr!=0 ); + sqlite3StringToId(pCExpr); + if( pCExpr->op==TK_ID ){ + const char *zCName = pCExpr->u.zToken; + for(iCol=0; iCol<pTab->nCol; iCol++){ + if( sqlite3StrICmp(zCName, pTab->aCol[iCol].zName)==0 ){ + pTab->aCol[iCol].colFlags |= COLFLAG_PRIMKEY; + zType = pTab->aCol[iCol].zType; + break; + } + } + } + } } - if( zType && sqlite3StrICmp(zType, "INTEGER")==0 - && sortOrder==SQLITE_SO_ASC ){ + if( nTerm==1 + && zType && sqlite3StrICmp(zType, "INTEGER")==0 + && sortOrder!=SQLITE_SO_DESC + ){ pTab->iPKey = iCol; pTab->keyConf = (u8)onError; assert( autoInc==0 || autoInc==1 ); pTab->tabFlags |= autoInc*TF_Autoincrement; + if( pList ) pParse->iPkSortOrder = pList->a[0].sortOrder; }else if( autoInc ){ #ifndef SQLITE_OMIT_AUTOINCREMENT sqlite3ErrorMsg(pParse, "AUTOINCREMENT is only allowed on an " "INTEGER PRIMARY KEY"); #endif }else{ Index *p; - p = sqlite3CreateIndex(pParse, 0, 0, 0, pList, onError, 0, 0, sortOrder, 0); + p = sqlite3CreateIndex(pParse, 0, 0, 0, pList, onError, 0, + 0, sortOrder, 0); if( p ){ - p->autoIndex = 2; + p->idxType = SQLITE_IDXTYPE_PRIMARYKEY; } pList = 0; } primary_key_exit: @@ -68855,19 +95647,24 @@ */ SQLITE_PRIVATE void sqlite3AddCheckConstraint( Parse *pParse, /* Parsing context */ Expr *pCheckExpr /* The check expression */ ){ - sqlite3 *db = pParse->db; #ifndef SQLITE_OMIT_CHECK Table *pTab = pParse->pNewTable; - if( pTab && !IN_DECLARE_VTAB ){ - pTab->pCheck = sqlite3ExprAnd(db, pTab->pCheck, pCheckExpr); + sqlite3 *db = pParse->db; + if( pTab && !IN_DECLARE_VTAB + && !sqlite3BtreeIsReadonly(db->aDb[db->init.iDb].pBt) + ){ + pTab->pCheck = sqlite3ExprListAppend(pParse, pTab->pCheck, pCheckExpr); + if( pParse->constraintName.n ){ + sqlite3ExprListSetName(pParse, pTab->pCheck, &pParse->constraintName, 1); + } }else #endif { - sqlite3ExprDelete(db, pCheckExpr); + sqlite3ExprDelete(pParse->db, pCheckExpr); } } /* ** Set the collation function of the most recently parsed table column @@ -68885,18 +95682,19 @@ zColl = sqlite3NameFromToken(db, pToken); if( !zColl ) return; if( sqlite3LocateCollSeq(pParse, 
zColl) ){ Index *pIdx; + sqlite3DbFree(db, p->aCol[i].zColl); p->aCol[i].zColl = zColl; /* If the column is declared as "<name> PRIMARY KEY COLLATE <type>", ** then an index may have been created on this column before the ** collation type was added. Correct this if it is the case. */ for(pIdx=p->pIndex; pIdx; pIdx=pIdx->pNext){ - assert( pIdx->nColumn==1 ); + assert( pIdx->nKeyCol==1 ); if( pIdx->aiColumn[0]==i ){ pIdx->azColl[0] = p->aCol[i].zColl; } } }else{ @@ -68930,14 +95728,11 @@ u8 initbusy = db->init.busy; CollSeq *pColl; pColl = sqlite3FindCollSeq(db, enc, zName, initbusy); if( !initbusy && (!pColl || !pColl->xCmp) ){ - pColl = sqlite3GetCollSeq(db, enc, pColl, zName); - if( !pColl ){ - sqlite3ErrorMsg(pParse, "no such collation sequence: %s", zName); - } + pColl = sqlite3GetCollSeq(pParse, enc, pColl, zName); } return pColl; } @@ -68957,16 +95752,15 @@ ** set back to prior value. But schema changes are infrequent ** and the probability of hitting the same cookie value is only ** 1 chance in 2^32. So we're safe enough. */ SQLITE_PRIVATE void sqlite3ChangeCookie(Parse *pParse, int iDb){ - int r1 = sqlite3GetTempReg(pParse); sqlite3 *db = pParse->db; Vdbe *v = pParse->pVdbe; - sqlite3VdbeAddOp2(v, OP_Integer, db->aDb[iDb].pSchema->schema_cookie+1, r1); - sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, BTREE_SCHEMA_VERSION, r1); - sqlite3ReleaseTempReg(pParse, r1); + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); + sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, BTREE_SCHEMA_VERSION, + db->aDb[iDb].pSchema->schema_cookie+1); } /* ** Measure the number of characters needed to output the given ** identifier. The number returned includes any quotes used @@ -69002,14 +95796,14 @@ i = *pIdx; for(j=0; zIdent[j]; j++){ if( !sqlite3Isalnum(zIdent[j]) && zIdent[j]!='_' ) break; } - needQuote = sqlite3Isdigit(zIdent[0]) || sqlite3KeywordCode(zIdent, j)!=TK_ID; - if( !needQuote ){ - needQuote = zIdent[j]; - } + needQuote = sqlite3Isdigit(zIdent[0]) + || sqlite3KeywordCode(zIdent, j)!=TK_ID + || zIdent[j]!=0 + || j==0; if( needQuote ) z[i++] = '"'; for(j=0; zIdent[j]; j++){ z[i++] = zIdent[j]; if( zIdent[j]=='"' ) z[i++] = '"'; @@ -69042,23 +95836,23 @@ zSep = "\n "; zSep2 = ",\n "; zEnd = "\n)"; } n += 35 + 6*p->nCol; - zStmt = sqlite3Malloc( n ); + zStmt = sqlite3DbMallocRaw(0, n); if( zStmt==0 ){ - db->mallocFailed = 1; + sqlite3OomFault(db); return 0; } sqlite3_snprintf(n, zStmt, "CREATE TABLE "); k = sqlite3Strlen30(zStmt); identPut(zStmt, &k, p->zName); zStmt[k++] = '('; for(pCol=p->aCol, i=0; i<p->nCol; i++, pCol++){ static const char * const azType[] = { + /* SQLITE_AFF_BLOB */ "", /* SQLITE_AFF_TEXT */ " TEXT", - /* SQLITE_AFF_NONE */ "", /* SQLITE_AFF_NUMERIC */ " NUM", /* SQLITE_AFF_INTEGER */ " INT", /* SQLITE_AFF_REAL */ " REAL" }; int len; @@ -69066,29 +95860,233 @@ sqlite3_snprintf(n-k, &zStmt[k], zSep); k += sqlite3Strlen30(&zStmt[k]); zSep = zSep2; identPut(zStmt, &k, pCol->zName); - assert( pCol->affinity-SQLITE_AFF_TEXT >= 0 ); - assert( pCol->affinity-SQLITE_AFF_TEXT < sizeof(azType)/sizeof(azType[0]) ); + assert( pCol->affinity-SQLITE_AFF_BLOB >= 0 ); + assert( pCol->affinity-SQLITE_AFF_BLOB < ArraySize(azType) ); + testcase( pCol->affinity==SQLITE_AFF_BLOB ); testcase( pCol->affinity==SQLITE_AFF_TEXT ); - testcase( pCol->affinity==SQLITE_AFF_NONE ); testcase( pCol->affinity==SQLITE_AFF_NUMERIC ); testcase( pCol->affinity==SQLITE_AFF_INTEGER ); testcase( pCol->affinity==SQLITE_AFF_REAL ); - zType = azType[pCol->affinity - SQLITE_AFF_TEXT]; + zType = azType[pCol->affinity - SQLITE_AFF_BLOB]; 
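/*
** The identifier-quoting test rewritten above quotes a name when it is
** empty, starts with a digit, is an SQL keyword, or contains a character
** other than alphanumerics and '_'.  The fragment below restates that
** predicate in isolation as a sketch; the keyword test is omitted because
** it needs sqlite3KeywordCode(), and the helper name is illustrative only.
*/
#include <ctype.h>
static int sketchNeedQuote(const char *zIdent){
  int j;
  if( zIdent[0]==0 ) return 1;                        /* empty name */
  if( isdigit((unsigned char)zIdent[0]) ) return 1;   /* leading digit */
  for(j=0; zIdent[j]; j++){
    if( !isalnum((unsigned char)zIdent[j]) && zIdent[j]!='_' ) return 1;
  }
  return 0;            /* plain alphanumeric identifier: no quotes needed */
}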
len = sqlite3Strlen30(zType); - assert( pCol->affinity==SQLITE_AFF_NONE - || pCol->affinity==sqlite3AffinityType(zType) ); + assert( pCol->affinity==SQLITE_AFF_BLOB + || pCol->affinity==sqlite3AffinityType(zType, 0) ); memcpy(&zStmt[k], zType, len); k += len; assert( k<=n ); } sqlite3_snprintf(n-k, &zStmt[k], "%s", zEnd); return zStmt; } + +/* +** Resize an Index object to hold N columns total. Return SQLITE_OK +** on success and SQLITE_NOMEM on an OOM error. +*/ +static int resizeIndexObject(sqlite3 *db, Index *pIdx, int N){ + char *zExtra; + int nByte; + if( pIdx->nColumn>=N ) return SQLITE_OK; + assert( pIdx->isResized==0 ); + nByte = (sizeof(char*) + sizeof(i16) + 1)*N; + zExtra = sqlite3DbMallocZero(db, nByte); + if( zExtra==0 ) return SQLITE_NOMEM; + memcpy(zExtra, pIdx->azColl, sizeof(char*)*pIdx->nColumn); + pIdx->azColl = (const char**)zExtra; + zExtra += sizeof(char*)*N; + memcpy(zExtra, pIdx->aiColumn, sizeof(i16)*pIdx->nColumn); + pIdx->aiColumn = (i16*)zExtra; + zExtra += sizeof(i16)*N; + memcpy(zExtra, pIdx->aSortOrder, pIdx->nColumn); + pIdx->aSortOrder = (u8*)zExtra; + pIdx->nColumn = N; + pIdx->isResized = 1; + return SQLITE_OK; +} + +/* +** Estimate the total row width for a table. +*/ +static void estimateTableWidth(Table *pTab){ + unsigned wTable = 0; + const Column *pTabCol; + int i; + for(i=pTab->nCol, pTabCol=pTab->aCol; i>0; i--, pTabCol++){ + wTable += pTabCol->szEst; + } + if( pTab->iPKey<0 ) wTable++; + pTab->szTabRow = sqlite3LogEst(wTable*4); +} + +/* +** Estimate the average size of a row for an index. +*/ +static void estimateIndexWidth(Index *pIdx){ + unsigned wIndex = 0; + int i; + const Column *aCol = pIdx->pTable->aCol; + for(i=0; i<pIdx->nColumn; i++){ + i16 x = pIdx->aiColumn[i]; + assert( x<pIdx->pTable->nCol ); + wIndex += x<0 ? 1 : aCol[pIdx->aiColumn[i]].szEst; + } + pIdx->szIdxRow = sqlite3LogEst(wIndex*4); +} + +/* Return true if value x is found any of the first nCol entries of aiCol[] +*/ +static int hasColumn(const i16 *aiCol, int nCol, int x){ + while( nCol-- > 0 ) if( x==*(aiCol++) ) return 1; + return 0; +} + +/* +** This routine runs at the end of parsing a CREATE TABLE statement that +** has a WITHOUT ROWID clause. The job of this routine is to convert both +** internal schema data structures and the generated VDBE code so that they +** are appropriate for a WITHOUT ROWID table instead of a rowid table. +** Changes include: +** +** (1) Convert the OP_CreateTable into an OP_CreateIndex. There is +** no rowid btree for a WITHOUT ROWID. Instead, the canonical +** data storage is a covering index btree. +** (2) Bypass the creation of the sqlite_master table entry +** for the PRIMARY KEY as the primary key index is now +** identified by the sqlite_master table entry of the table itself. +** (3) Set the Index.tnum of the PRIMARY KEY Index object in the +** schema to the rootpage from the main table. +** (4) Set all columns of the PRIMARY KEY schema object to be NOT NULL. +** (5) Add all table columns to the PRIMARY KEY Index object +** so that the PRIMARY KEY is a covering index. The surplus +** columns are part of KeyInfo.nXField and are not used for +** sorting or lookup or uniqueness checks. +** (6) Replace the rowid tail on all automatically generated UNIQUE +** indices with the PRIMARY KEY columns. 
+*/ +static void convertToWithoutRowidTable(Parse *pParse, Table *pTab){ + Index *pIdx; + Index *pPk; + int nPk; + int i, j; + sqlite3 *db = pParse->db; + Vdbe *v = pParse->pVdbe; + + /* Convert the OP_CreateTable opcode that would normally create the + ** root-page for the table into an OP_CreateIndex opcode. The index + ** created will become the PRIMARY KEY index. + */ + if( pParse->addrCrTab ){ + assert( v ); + sqlite3VdbeChangeOpcode(v, pParse->addrCrTab, OP_CreateIndex); + } + + /* Locate the PRIMARY KEY index. Or, if this table was originally + ** an INTEGER PRIMARY KEY table, create a new PRIMARY KEY index. + */ + if( pTab->iPKey>=0 ){ + ExprList *pList; + Token ipkToken; + sqlite3TokenInit(&ipkToken, pTab->aCol[pTab->iPKey].zName); + pList = sqlite3ExprListAppend(pParse, 0, + sqlite3ExprAlloc(db, TK_ID, &ipkToken, 0)); + if( pList==0 ) return; + pList->a[0].sortOrder = pParse->iPkSortOrder; + assert( pParse->pNewTable==pTab ); + pPk = sqlite3CreateIndex(pParse, 0, 0, 0, pList, pTab->keyConf, 0, 0, 0, 0); + if( pPk==0 ) return; + pPk->idxType = SQLITE_IDXTYPE_PRIMARYKEY; + pTab->iPKey = -1; + }else{ + pPk = sqlite3PrimaryKeyIndex(pTab); + + /* Bypass the creation of the PRIMARY KEY btree and the sqlite_master + ** table entry. This is only required if currently generating VDBE + ** code for a CREATE TABLE (not when parsing one as part of reading + ** a database schema). */ + if( v ){ + assert( db->init.busy==0 ); + sqlite3VdbeChangeOpcode(v, pPk->tnum, OP_Goto); + } + + /* + ** Remove all redundant columns from the PRIMARY KEY. For example, change + ** "PRIMARY KEY(a,b,a,b,c,b,c,d)" into just "PRIMARY KEY(a,b,c,d)". Later + ** code assumes the PRIMARY KEY contains no repeated columns. + */ + for(i=j=1; i<pPk->nKeyCol; i++){ + if( hasColumn(pPk->aiColumn, j, pPk->aiColumn[i]) ){ + pPk->nColumn--; + }else{ + pPk->aiColumn[j++] = pPk->aiColumn[i]; + } + } + pPk->nKeyCol = j; + } + pPk->isCovering = 1; + assert( pPk!=0 ); + nPk = pPk->nKeyCol; + + /* Make sure every column of the PRIMARY KEY is NOT NULL. (Except, + ** do not enforce this for imposter tables.) */ + if( !db->init.imposterTable ){ + for(i=0; i<nPk; i++){ + pTab->aCol[pPk->aiColumn[i]].notNull = OE_Abort; + } + pPk->uniqNotNull = 1; + } + + /* The root page of the PRIMARY KEY is the table root page */ + pPk->tnum = pTab->tnum; + + /* Update the in-memory representation of all UNIQUE indices by converting + ** the final rowid column into one or more columns of the PRIMARY KEY. 
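/*
** convertToWithoutRowidTable() above collapses a redundant PRIMARY KEY
** column list, e.g. PRIMARY KEY(a,b,a,b,c,b,c,d) becomes (a,b,c,d), using
** the small hasColumn() helper.  The standalone sketch below shows the same
** in-place de-duplication on a plain array of column numbers; the names
** here are illustrative only.
*/
static int sketchHasColumn(const short *aiCol, int nCol, int x){
  while( nCol-- > 0 ) if( x==*(aiCol++) ) return 1;
  return 0;
}
static int sketchDedupKey(short *aiCol, int nCol){
  int i, j;
  if( nCol<=0 ) return nCol;
  for(i=j=1; i<nCol; i++){
    if( !sketchHasColumn(aiCol, j, aiCol[i]) ) aiCol[j++] = aiCol[i];
  }
  return j;            /* number of distinct key columns kept */
}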
+ */ + for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ + int n; + if( IsPrimaryKeyIndex(pIdx) ) continue; + for(i=n=0; i<nPk; i++){ + if( !hasColumn(pIdx->aiColumn, pIdx->nKeyCol, pPk->aiColumn[i]) ) n++; + } + if( n==0 ){ + /* This index is a superset of the primary key */ + pIdx->nColumn = pIdx->nKeyCol; + continue; + } + if( resizeIndexObject(db, pIdx, pIdx->nKeyCol+n) ) return; + for(i=0, j=pIdx->nKeyCol; i<nPk; i++){ + if( !hasColumn(pIdx->aiColumn, pIdx->nKeyCol, pPk->aiColumn[i]) ){ + pIdx->aiColumn[j] = pPk->aiColumn[i]; + pIdx->azColl[j] = pPk->azColl[i]; + j++; + } + } + assert( pIdx->nColumn>=pIdx->nKeyCol+n ); + assert( pIdx->nColumn>=j ); + } + + /* Add all table columns to the PRIMARY KEY index + */ + if( nPk<pTab->nCol ){ + if( resizeIndexObject(db, pPk, pTab->nCol) ) return; + for(i=0, j=nPk; i<pTab->nCol; i++){ + if( !hasColumn(pPk->aiColumn, j, i) ){ + assert( j<pPk->nColumn ); + pPk->aiColumn[j] = i; + pPk->azColl[j] = sqlite3StrBINARY; + j++; + } + } + assert( pPk->nColumn==j ); + assert( pTab->nCol==j ); + }else{ + pPk->nColumn = pTab->nCol; + } +} /* ** This routine is called to report the final ")" that terminates ** a CREATE TABLE statement. ** @@ -69109,57 +96107,71 @@ ** the new table will match the result set of the SELECT. */ SQLITE_PRIVATE void sqlite3EndTable( Parse *pParse, /* Parse context */ Token *pCons, /* The ',' token after the last column defn. */ - Token *pEnd, /* The final ')' token in the CREATE TABLE */ + Token *pEnd, /* The ')' before options in the CREATE TABLE */ + u8 tabOpts, /* Extra table options. Usually 0. */ Select *pSelect /* Select from a "CREATE ... AS SELECT" */ ){ - Table *p; - sqlite3 *db = pParse->db; - int iDb; + Table *p; /* The new table */ + sqlite3 *db = pParse->db; /* The database connection */ + int iDb; /* Database in which the table lives */ + Index *pIdx; /* An implied index of the table */ - if( (pEnd==0 && pSelect==0) || db->mallocFailed ){ + if( pEnd==0 && pSelect==0 ){ return; } + assert( !db->mallocFailed ); p = pParse->pNewTable; if( p==0 ) return; assert( !db->init.busy || !pSelect ); + + /* If the db->init.busy is 1 it means we are reading the SQL off the + ** "sqlite_master" or "sqlite_temp_master" table on the disk. + ** So do not write to the disk again. Extract the root page number + ** for the table from the db->init.newTnum field. (The page number + ** should have been put there by the sqliteOpenCb routine.) + ** + ** If the root page number is 1, that means this is the sqlite_master + ** table itself. So mark it read-only. + */ + if( db->init.busy ){ + p->tnum = db->init.newTnum; + if( p->tnum==1 ) p->tabFlags |= TF_Readonly; + } + + /* Special processing for WITHOUT ROWID Tables */ + if( tabOpts & TF_WithoutRowid ){ + if( (p->tabFlags & TF_Autoincrement) ){ + sqlite3ErrorMsg(pParse, + "AUTOINCREMENT not allowed on WITHOUT ROWID tables"); + return; + } + if( (p->tabFlags & TF_HasPrimaryKey)==0 ){ + sqlite3ErrorMsg(pParse, "PRIMARY KEY missing on table %s", p->zName); + }else{ + p->tabFlags |= TF_WithoutRowid | TF_NoVisibleRowid; + convertToWithoutRowidTable(pParse, p); + } + } iDb = sqlite3SchemaToIndex(db, p->pSchema); #ifndef SQLITE_OMIT_CHECK /* Resolve names in all CHECK constraint expressions. 
*/ if( p->pCheck ){ - SrcList sSrc; /* Fake SrcList for pParse->pNewTable */ - NameContext sNC; /* Name context for pParse->pNewTable */ - - memset(&sNC, 0, sizeof(sNC)); - memset(&sSrc, 0, sizeof(sSrc)); - sSrc.nSrc = 1; - sSrc.a[0].zName = p->zName; - sSrc.a[0].pTab = p; - sSrc.a[0].iCursor = -1; - sNC.pParse = pParse; - sNC.pSrcList = &sSrc; - sNC.isCheck = 1; - if( sqlite3ResolveExprNames(&sNC, p->pCheck) ){ - return; - } + sqlite3ResolveSelfReference(pParse, p, NC_IsCheck, 0, p->pCheck); } #endif /* !defined(SQLITE_OMIT_CHECK) */ - /* If the db->init.busy is 1 it means we are reading the SQL off the - ** "sqlite_master" or "sqlite_temp_master" table on the disk. - ** So do not write to the disk again. Extract the root page number - ** for the table from the db->init.newTnum field. (The page number - ** should have been put there by the sqliteOpenCb routine.) - */ - if( db->init.busy ){ - p->tnum = db->init.newTnum; + /* Estimate the average row size for the table and for all implied indices */ + estimateTableWidth(p); + for(pIdx=p->pIndex; pIdx; pIdx=pIdx->pNext){ + estimateIndexWidth(pIdx); } /* If not initializing, then create a record for the new table ** in the SQLITE_MASTER table of the database. ** @@ -69205,37 +96217,59 @@ ** as a schema-lock must have already been obtained to create it. Since ** a schema-lock excludes all other database users, the write-lock would ** be redundant. */ if( pSelect ){ - SelectDest dest; - Table *pSelTab; + SelectDest dest; /* Where the SELECT should store results */ + int regYield; /* Register holding co-routine entry-point */ + int addrTop; /* Top of the co-routine */ + int regRec; /* A record to be insert into the new table */ + int regRowid; /* Rowid of the next row to insert */ + int addrInsLoop; /* Top of the loop for inserting rows */ + Table *pSelTab; /* A table that describes the SELECT results */ + regYield = ++pParse->nMem; + regRec = ++pParse->nMem; + regRowid = ++pParse->nMem; assert(pParse->nTab==1); + sqlite3MayAbort(pParse); sqlite3VdbeAddOp3(v, OP_OpenWrite, 1, pParse->regRoot, iDb); - sqlite3VdbeChangeP5(v, 1); + sqlite3VdbeChangeP5(v, OPFLAG_P2ISREG); pParse->nTab = 2; - sqlite3SelectDestInit(&dest, SRT_Table, 1); + addrTop = sqlite3VdbeCurrentAddr(v) + 1; + sqlite3VdbeAddOp3(v, OP_InitCoroutine, regYield, 0, addrTop); + sqlite3SelectDestInit(&dest, SRT_Coroutine, regYield); sqlite3Select(pParse, pSelect, &dest); + sqlite3VdbeEndCoroutine(v, regYield); + sqlite3VdbeJumpHere(v, addrTop - 1); + if( pParse->nErr ) return; + pSelTab = sqlite3ResultSetOfSelect(pParse, pSelect); + if( pSelTab==0 ) return; + assert( p->aCol==0 ); + p->nCol = pSelTab->nCol; + p->aCol = pSelTab->aCol; + pSelTab->nCol = 0; + pSelTab->aCol = 0; + sqlite3DeleteTable(db, pSelTab); + addrInsLoop = sqlite3VdbeAddOp1(v, OP_Yield, dest.iSDParm); + VdbeCoverage(v); + sqlite3VdbeAddOp3(v, OP_MakeRecord, dest.iSdst, dest.nSdst, regRec); + sqlite3TableAffinity(v, p, 0); + sqlite3VdbeAddOp2(v, OP_NewRowid, 1, regRowid); + sqlite3VdbeAddOp3(v, OP_Insert, 1, regRec, regRowid); + sqlite3VdbeGoto(v, addrInsLoop); + sqlite3VdbeJumpHere(v, addrInsLoop); sqlite3VdbeAddOp1(v, OP_Close, 1); - if( pParse->nErr==0 ){ - pSelTab = sqlite3ResultSetOfSelect(pParse, pSelect); - if( pSelTab==0 ) return; - assert( p->aCol==0 ); - p->nCol = pSelTab->nCol; - p->aCol = pSelTab->aCol; - pSelTab->nCol = 0; - pSelTab->aCol = 0; - sqlite3DeleteTable(pSelTab); - } } /* Compute the complete text of the CREATE statement */ if( pSelect ){ zStmt = createTableStmt(db, p); }else{ - n = 
(int)(pEnd->z - pParse->sNameToken.z) + 1; + Token *pEnd2 = tabOpts ? &pParse->sLastToken : pEnd; + n = (int)(pEnd2->z - pParse->sNameToken.z); + if( pEnd2->z[0]!=';' ) n += pEnd2->n; zStmt = sqlite3MPrintf(db, "CREATE %s %.*s", zType2, n, pParse->sNameToken.z ); } @@ -69262,10 +96296,11 @@ /* Check to see if we need to create an sqlite_sequence table for ** keeping track of autoincrement keys. */ if( p->tabFlags & TF_Autoincrement ){ Db *pDb = &db->aDb[iDb]; + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); if( pDb->pSchema->pSeqTab==0 ){ sqlite3NestedParse(pParse, "CREATE TABLE %Q.sqlite_sequence(name,seq)", pDb->zName ); @@ -69272,29 +96307,28 @@ } } #endif /* Reparse everything to update our internal data structures */ - sqlite3VdbeAddOp4(v, OP_ParseSchema, iDb, 0, 0, - sqlite3MPrintf(db, "tbl_name='%q'",p->zName), P4_DYNAMIC); + sqlite3VdbeAddParseSchemaOp(v, iDb, + sqlite3MPrintf(db, "tbl_name='%q' AND type!='trigger'", p->zName)); } /* Add the table to the in-memory representation of the database. */ if( db->init.busy ){ Table *pOld; Schema *pSchema = p->pSchema; - pOld = sqlite3HashInsert(&pSchema->tblHash, p->zName, - sqlite3Strlen30(p->zName),p); + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); + pOld = sqlite3HashInsert(&pSchema->tblHash, p->zName, p); if( pOld ){ assert( p==pOld ); /* Malloc must have failed inside HashInsert() */ - db->mallocFailed = 1; + sqlite3OomFault(db); return; } pParse->pNewTable = 0; - db->nTable++; db->flags |= SQLITE_InternChanges; #ifndef SQLITE_OMIT_ALTERTABLE if( !p->pSelect ){ const char *zName = (const char *)pParse->sNameToken.z; @@ -69317,75 +96351,67 @@ SQLITE_PRIVATE void sqlite3CreateView( Parse *pParse, /* The parsing context */ Token *pBegin, /* The CREATE token that begins the statement */ Token *pName1, /* The token that holds the name of the view */ Token *pName2, /* The token that holds the name of the view */ + ExprList *pCNames, /* Optional list of view column names */ Select *pSelect, /* A SELECT statement that will become the new view */ int isTemp, /* TRUE for a TEMPORARY view */ int noErr /* Suppress error messages if VIEW already exists */ ){ Table *p; int n; const char *z; Token sEnd; DbFixer sFix; - Token *pName; + Token *pName = 0; int iDb; sqlite3 *db = pParse->db; if( pParse->nVar>0 ){ sqlite3ErrorMsg(pParse, "parameters are not allowed in views"); - sqlite3SelectDelete(db, pSelect); - return; + goto create_view_fail; } sqlite3StartTable(pParse, pName1, pName2, isTemp, 1, 0, noErr); p = pParse->pNewTable; - if( p==0 ){ - sqlite3SelectDelete(db, pSelect); - return; - } - assert( pParse->nErr==0 ); /* If sqlite3StartTable return non-NULL then - ** there could not have been an error */ + if( p==0 || pParse->nErr ) goto create_view_fail; sqlite3TwoPartName(pParse, pName1, pName2, &pName); iDb = sqlite3SchemaToIndex(db, p->pSchema); - if( sqlite3FixInit(&sFix, pParse, iDb, "view", pName) - && sqlite3FixSelect(&sFix, pSelect) - ){ - sqlite3SelectDelete(db, pSelect); - return; - } + sqlite3FixInit(&sFix, pParse, iDb, "view", pName); + if( sqlite3FixSelect(&sFix, pSelect) ) goto create_view_fail; /* Make a copy of the entire SELECT statement that defines the view. ** This will force all the Expr.token.z values to be dynamically ** allocated rather than point to the input string - which means that ** they will persist after the current sqlite3_exec() call returns. 
*/ p->pSelect = sqlite3SelectDup(db, pSelect, EXPRDUP_REDUCE); - sqlite3SelectDelete(db, pSelect); - if( db->mallocFailed ){ - return; - } - if( !db->init.busy ){ - sqlite3ViewGetColumnNames(pParse, p); - } + p->pCheck = sqlite3ExprListDup(db, pCNames, EXPRDUP_REDUCE); + if( db->mallocFailed ) goto create_view_fail; /* Locate the end of the CREATE VIEW statement. Make sEnd point to ** the end. */ sEnd = pParse->sLastToken; - if( ALWAYS(sEnd.z[0]!=0) && sEnd.z[0]!=';' ){ + assert( sEnd.z[0]!=0 ); + if( sEnd.z[0]!=';' ){ sEnd.z += sEnd.n; } sEnd.n = 0; n = (int)(sEnd.z - pBegin->z); + assert( n>0 ); z = pBegin->z; - while( ALWAYS(n>0) && sqlite3Isspace(z[n-1]) ){ n--; } + while( sqlite3Isspace(z[n-1]) ){ n--; } sEnd.z = &z[n-1]; sEnd.n = 1; /* Use sqlite3EndTable() to add the view to the SQLITE_MASTER table */ - sqlite3EndTable(pParse, 0, &sEnd, 0); + sqlite3EndTable(pParse, 0, &sEnd, 0, 0); + +create_view_fail: + sqlite3SelectDelete(db, pSelect); + sqlite3ExprListDelete(db, pCNames); return; } #endif /* SQLITE_OMIT_VIEW */ #if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_VIRTUALTABLE) @@ -69398,11 +96424,11 @@ Table *pSelTab; /* A fake table from which we get the result set */ Select *pSel; /* Copy of the SELECT that implements the view */ int nErr = 0; /* Number of errors encountered */ int n; /* Temporarily holds the number of cursors assigned */ sqlite3 *db = pParse->db; /* Database connection for malloc errors */ - int (*xAuth)(void*,int,const char*,const char*,const char*,const char*); + sqlite3_xauth xAuth; /* Saved xAuth pointer */ assert( pTable ); #ifndef SQLITE_OMIT_VIRTUALTABLE if( sqlite3VtabCallConnect(pParse, pTable) ){ @@ -69444,43 +96470,50 @@ ** to the elements of the FROM clause. But we do not want these changes ** to be permanent. So the computation is done on a copy of the SELECT ** statement that defines the view. 
*/ assert( pTable->pSelect ); - pSel = sqlite3SelectDup(db, pTable->pSelect, 0); - if( pSel ){ - u8 enableLookaside = db->lookaside.bEnabled; - n = pParse->nTab; - sqlite3SrcListAssignCursors(pParse, pSel->pSrc); - pTable->nCol = -1; - db->lookaside.bEnabled = 0; -#ifndef SQLITE_OMIT_AUTHORIZATION - xAuth = db->xAuth; - db->xAuth = 0; - pSelTab = sqlite3ResultSetOfSelect(pParse, pSel); - db->xAuth = xAuth; -#else - pSelTab = sqlite3ResultSetOfSelect(pParse, pSel); -#endif - db->lookaside.bEnabled = enableLookaside; - pParse->nTab = n; - if( pSelTab ){ - assert( pTable->aCol==0 ); - pTable->nCol = pSelTab->nCol; - pTable->aCol = pSelTab->aCol; - pSelTab->nCol = 0; - pSelTab->aCol = 0; - sqlite3DeleteTable(pSelTab); - pTable->pSchema->flags |= DB_UnresetViews; - }else{ - pTable->nCol = 0; - nErr++; - } - sqlite3SelectDelete(db, pSel); - } else { - nErr++; - } + if( pTable->pCheck ){ + db->lookaside.bDisable++; + sqlite3ColumnsFromExprList(pParse, pTable->pCheck, + &pTable->nCol, &pTable->aCol); + db->lookaside.bDisable--; + }else{ + pSel = sqlite3SelectDup(db, pTable->pSelect, 0); + if( pSel ){ + n = pParse->nTab; + sqlite3SrcListAssignCursors(pParse, pSel->pSrc); + pTable->nCol = -1; + db->lookaside.bDisable++; +#ifndef SQLITE_OMIT_AUTHORIZATION + xAuth = db->xAuth; + db->xAuth = 0; + pSelTab = sqlite3ResultSetOfSelect(pParse, pSel); + db->xAuth = xAuth; +#else + pSelTab = sqlite3ResultSetOfSelect(pParse, pSel); +#endif + db->lookaside.bDisable--; + pParse->nTab = n; + if( pSelTab ){ + assert( pTable->aCol==0 ); + pTable->nCol = pSelTab->nCol; + pTable->aCol = pSelTab->aCol; + pSelTab->nCol = 0; + pSelTab->aCol = 0; + sqlite3DeleteTable(db, pSelTab); + assert( sqlite3SchemaMutexHeld(db, 0, pTable->pSchema) ); + }else{ + pTable->nCol = 0; + nErr++; + } + sqlite3SelectDelete(db, pSel); + } else { + nErr++; + } + } + pTable->pSchema->schemaFlags |= DB_UnresetViews; #endif /* SQLITE_OMIT_VIEW */ return nErr; } #endif /* !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_VIRTUALTABLE) */ @@ -69488,15 +96521,18 @@ /* ** Clear the column names from every VIEW in database idx. */ static void sqliteViewResetAll(sqlite3 *db, int idx){ HashElem *i; + assert( sqlite3SchemaMutexHeld(db, idx, 0) ); if( !DbHasProperty(db, idx, DB_UnresetViews) ) return; for(i=sqliteHashFirst(&db->aDb[idx].pSchema->tblHash); i;i=sqliteHashNext(i)){ Table *pTab = sqliteHashData(i); if( pTab->pSelect ){ - sqliteResetColumnNames(pTab); + sqlite3DeleteColumnNames(db, pTab); + pTab->aCol = 0; + pTab->nCol = 0; } } DbClearProperty(db, idx, DB_UnresetViews); } #else @@ -69519,14 +96555,17 @@ ** We must continue looping until all tables and indices with ** rootpage==iFrom have been converted to have a rootpage of iTo ** in order to be certain that we got the right one. */ #ifndef SQLITE_OMIT_AUTOVACUUM -SQLITE_PRIVATE void sqlite3RootPageMoved(Db *pDb, int iFrom, int iTo){ +SQLITE_PRIVATE void sqlite3RootPageMoved(sqlite3 *db, int iDb, int iFrom, int iTo){ HashElem *pElem; Hash *pHash; + Db *pDb; + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); + pDb = &db->aDb[iDb]; pHash = &pDb->pSchema->tblHash; for(pElem=sqliteHashFirst(pHash); pElem; pElem=sqliteHashNext(pElem)){ Table *pTab = sqliteHashData(pElem); if( pTab->tnum==iFrom ){ pTab->tnum = iTo; @@ -69549,10 +96588,11 @@ ** erasing iTable (this can happen with an auto-vacuum database). 
*/ static void destroyRootPage(Parse *pParse, int iTable, int iDb){ Vdbe *v = sqlite3GetVdbe(pParse); int r1 = sqlite3GetTempReg(pParse); + assert( iTable>1 ); sqlite3VdbeAddOp3(v, OP_Destroy, iTable, r1, iDb); sqlite3MayAbort(pParse); #ifndef SQLITE_OMIT_AUTOVACUUM /* OP_Destroy stores an in integer r1. If this integer ** is non-zero, then it is the root page number of a table moved to @@ -69620,16 +96660,111 @@ } if( iLargest==0 ){ return; }else{ int iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema); + assert( iDb>=0 && iDb<pParse->db->nDb ); destroyRootPage(pParse, iLargest, iDb); iDestroyed = iLargest; } } #endif } + +/* +** Remove entries from the sqlite_statN tables (for N in (1,2,3)) +** after a DROP INDEX or DROP TABLE command. +*/ +static void sqlite3ClearStatTables( + Parse *pParse, /* The parsing context */ + int iDb, /* The database number */ + const char *zType, /* "idx" or "tbl" */ + const char *zName /* Name of index or table */ +){ + int i; + const char *zDbName = pParse->db->aDb[iDb].zName; + for(i=1; i<=4; i++){ + char zTab[24]; + sqlite3_snprintf(sizeof(zTab),zTab,"sqlite_stat%d",i); + if( sqlite3FindTable(pParse->db, zTab, zDbName) ){ + sqlite3NestedParse(pParse, + "DELETE FROM %Q.%s WHERE %s=%Q", + zDbName, zTab, zType, zName + ); + } + } +} + +/* +** Generate code to drop a table. +*/ +SQLITE_PRIVATE void sqlite3CodeDropTable(Parse *pParse, Table *pTab, int iDb, int isView){ + Vdbe *v; + sqlite3 *db = pParse->db; + Trigger *pTrigger; + Db *pDb = &db->aDb[iDb]; + + v = sqlite3GetVdbe(pParse); + assert( v!=0 ); + sqlite3BeginWriteOperation(pParse, 1, iDb); + +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( IsVirtual(pTab) ){ + sqlite3VdbeAddOp0(v, OP_VBegin); + } +#endif + + /* Drop all triggers associated with the table being dropped. Code + ** is generated to remove entries from sqlite_master and/or + ** sqlite_temp_master if required. + */ + pTrigger = sqlite3TriggerList(pParse, pTab); + while( pTrigger ){ + assert( pTrigger->pSchema==pTab->pSchema || + pTrigger->pSchema==db->aDb[1].pSchema ); + sqlite3DropTriggerPtr(pParse, pTrigger); + pTrigger = pTrigger->pNext; + } + +#ifndef SQLITE_OMIT_AUTOINCREMENT + /* Remove any entries of the sqlite_sequence table associated with + ** the table being dropped. This is done before the table is dropped + ** at the btree level, in case the sqlite_sequence table needs to + ** move as a result of the drop (can happen in auto-vacuum mode). + */ + if( pTab->tabFlags & TF_Autoincrement ){ + sqlite3NestedParse(pParse, + "DELETE FROM %Q.sqlite_sequence WHERE name=%Q", + pDb->zName, pTab->zName + ); + } +#endif + + /* Drop all SQLITE_MASTER table and index entries that refer to the + ** table. The program name loops through the master table and deletes + ** every row that refers to a table of the same name as the one being + ** dropped. Triggers are handled separately because a trigger can be + ** created in the temp database that refers to a table in another + ** database. + */ + sqlite3NestedParse(pParse, + "DELETE FROM %Q.%s WHERE tbl_name=%Q and type!='trigger'", + pDb->zName, SCHEMA_TABLE(iDb), pTab->zName); + if( !isView && !IsVirtual(pTab) ){ + destroyTable(pParse, pTab); + } + + /* Remove the table entry from SQLite's internal schema and modify + ** the schema cookie. 
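/*
** sqlite3ClearStatTables() above walks the candidate names sqlite_stat1
** through sqlite_stat4 and issues a DELETE only for the tables that
** actually exist in the schema.  The fragment below is a standalone sketch
** of that name loop only (the DELETE is represented by a comment); the
** helper name is illustrative and not part of SQLite.
*/
#include <stdio.h>
static void sketchStatTableNames(void){
  int i;
  char zTab[24];
  for(i=1; i<=4; i++){
    snprintf(zTab, sizeof(zTab), "sqlite_stat%d", i);
    /* real code: if sqlite3FindTable() locates zTab, it runs
    ** "DELETE FROM %Q.%s WHERE %s=%Q" via sqlite3NestedParse() */
    printf("%s\n", zTab);
  }
}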
+ */ + if( IsVirtual(pTab) ){ + sqlite3VdbeAddOp4(v, OP_VDestroy, iDb, 0, 0, pTab->zName, 0); + } + sqlite3VdbeAddOp4(v, OP_DropTable, iDb, 0, 0, pTab->zName, 0); + sqlite3ChangeCookie(pParse, iDb); + sqliteViewResetAll(db, iDb); +} /* ** This routine is called to do the work of a DROP TABLE statement. ** pName is the name of the table to be dropped. */ @@ -69642,16 +96777,17 @@ if( db->mallocFailed ){ goto exit_drop_table; } assert( pParse->nErr==0 ); assert( pName->nSrc==1 ); + if( sqlite3ReadSchema(pParse) ) goto exit_drop_table; if( noErr ) db->suppressErr++; - pTab = sqlite3LocateTable(pParse, isView, - pName->a[0].zName, pName->a[0].zDatabase); + pTab = sqlite3LocateTableItem(pParse, isView, &pName->a[0]); if( noErr ) db->suppressErr--; if( pTab==0 ){ + if( noErr ) sqlite3CodeVerifyNamedSchema(pParse, pName->a[0].zDatabase); goto exit_drop_table; } iDb = sqlite3SchemaToIndex(db, pTab->pSchema); assert( iDb>=0 && iDb<db->nDb ); @@ -69694,11 +96830,12 @@ if( sqlite3AuthCheck(pParse, SQLITE_DELETE, pTab->zName, 0, zDb) ){ goto exit_drop_table; } } #endif - if( sqlite3StrNICmp(pTab->zName, "sqlite_", 7)==0 ){ + if( sqlite3StrNICmp(pTab->zName, "sqlite_", 7)==0 + && sqlite3StrNICmp(pTab->zName, "sqlite_stat", 11)!=0 ){ sqlite3ErrorMsg(pParse, "table %s may not be dropped", pTab->zName); goto exit_drop_table; } #ifndef SQLITE_OMIT_VIEW @@ -69718,79 +96855,15 @@ /* Generate code to remove the table from the master table ** on disk. */ v = sqlite3GetVdbe(pParse); if( v ){ - Trigger *pTrigger; - Db *pDb = &db->aDb[iDb]; sqlite3BeginWriteOperation(pParse, 1, iDb); - -#ifndef SQLITE_OMIT_VIRTUALTABLE - if( IsVirtual(pTab) ){ - sqlite3VdbeAddOp0(v, OP_VBegin); - } -#endif + sqlite3ClearStatTables(pParse, iDb, "tbl", pTab->zName); sqlite3FkDropTable(pParse, pName, pTab); - - /* Drop all triggers associated with the table being dropped. Code - ** is generated to remove entries from sqlite_master and/or - ** sqlite_temp_master if required. - */ - pTrigger = sqlite3TriggerList(pParse, pTab); - while( pTrigger ){ - assert( pTrigger->pSchema==pTab->pSchema || - pTrigger->pSchema==db->aDb[1].pSchema ); - sqlite3DropTriggerPtr(pParse, pTrigger); - pTrigger = pTrigger->pNext; - } - -#ifndef SQLITE_OMIT_AUTOINCREMENT - /* Remove any entries of the sqlite_sequence table associated with - ** the table being dropped. This is done before the table is dropped - ** at the btree level, in case the sqlite_sequence table needs to - ** move as a result of the drop (can happen in auto-vacuum mode). - */ - if( pTab->tabFlags & TF_Autoincrement ){ - sqlite3NestedParse(pParse, - "DELETE FROM %s.sqlite_sequence WHERE name=%Q", - pDb->zName, pTab->zName - ); - } -#endif - - /* Drop all SQLITE_MASTER table and index entries that refer to the - ** table. The program name loops through the master table and deletes - ** every row that refers to a table of the same name as the one being - ** dropped. Triggers are handled seperately because a trigger can be - ** created in the temp database that refers to a table in another - ** database. 
- */ - sqlite3NestedParse(pParse, - "DELETE FROM %Q.%s WHERE tbl_name=%Q and type!='trigger'", - pDb->zName, SCHEMA_TABLE(iDb), pTab->zName); - - /* Drop any statistics from the sqlite_stat1 table, if it exists */ - if( sqlite3FindTable(db, "sqlite_stat1", db->aDb[iDb].zName) ){ - sqlite3NestedParse(pParse, - "DELETE FROM %Q.sqlite_stat1 WHERE tbl=%Q", pDb->zName, pTab->zName - ); - } - - if( !isView && !IsVirtual(pTab) ){ - destroyTable(pParse, pTab); - } - - /* Remove the table entry from SQLite's internal schema and modify - ** the schema cookie. - */ - if( IsVirtual(pTab) ){ - sqlite3VdbeAddOp4(v, OP_VDestroy, iDb, 0, 0, pTab->zName, 0); - } - sqlite3VdbeAddOp4(v, OP_DropTable, iDb, 0, 0, pTab->zName, 0); - sqlite3ChangeCookie(pParse, iDb); - } - sqliteViewResetAll(db, iDb); + sqlite3CodeDropTable(pParse, pTab, iDb, isView); + } exit_drop_table: sqlite3SrcListDelete(db, pName); } @@ -69797,12 +96870,12 @@ /* ** This routine is called to create a new foreign key on the table ** currently under construction. pFromCol determines which columns ** in the current table point to the foreign key. If pFromCol==0 then ** connect the key to the last column inserted. pTo is the name of -** the table referred to. pToCol is a list of tables in the other -** pTo table that the foreign key points to. flags contains all +** the table referred to (a.k.a the "parent" table). pToCol is a list +** of tables in the parent pTo table. flags contains all ** information about the conflict resolution algorithms specified ** in the ON DELETE, ON UPDATE and ON INSERT clauses. ** ** An FKey structure is created and added to the table currently ** under construction in the pParse->pNewTable field. @@ -69896,15 +96969,16 @@ } pFKey->isDeferred = 0; pFKey->aAction[0] = (u8)(flags & 0xff); /* ON DELETE action */ pFKey->aAction[1] = (u8)((flags >> 8 ) & 0xff); /* ON UPDATE action */ + assert( sqlite3SchemaMutexHeld(db, 0, p->pSchema) ); pNextTo = (FKey *)sqlite3HashInsert(&p->pSchema->fkeyHash, - pFKey->zTo, sqlite3Strlen30(pFKey->zTo), (void *)pFKey + pFKey->zTo, (void *)pFKey ); if( pNextTo==pFKey ){ - db->mallocFailed = 1; + sqlite3OomFault(db); goto fk_end; } if( pNextTo ){ assert( pNextTo->pPrevTo==0 ); pFKey->pNextTo = pNextTo; @@ -69953,16 +97027,18 @@ */ static void sqlite3RefillIndex(Parse *pParse, Index *pIndex, int memRootPage){ Table *pTab = pIndex->pTable; /* The table that is indexed */ int iTab = pParse->nTab++; /* Btree cursor used for pTab */ int iIdx = pParse->nTab++; /* Btree cursor used for pIndex */ + int iSorter; /* Cursor opened by OpenSorter (if in use) */ int addr1; /* Address of top of loop */ + int addr2; /* Address to jump to for next iteration */ int tnum; /* Root page of index */ + int iPartIdxLabel; /* Jump to this label to skip a row */ Vdbe *v; /* Generate code into this virtual machine */ KeyInfo *pKey; /* KeyInfo for index */ - int regIdxKey; /* Registers containing the index key */ - int regRecord; /* Register holding assemblied index record */ + int regRecord; /* Register holding assembled index record */ sqlite3 *db = pParse->db; /* The database connection */ int iDb = sqlite3SchemaToIndex(db, pIndex->pSchema); #ifndef SQLITE_OMIT_AUTHORIZATION if( sqlite3AuthCheck(pParse, SQLITE_REINDEX, pIndex->zName, 0, @@ -69978,47 +97054,92 @@ if( v==0 ) return; if( memRootPage>=0 ){ tnum = memRootPage; }else{ tnum = pIndex->tnum; - sqlite3VdbeAddOp2(v, OP_Clear, tnum, iDb); - } - pKey = sqlite3IndexKeyinfo(pParse, pIndex); - sqlite3VdbeAddOp4(v, OP_OpenWrite, iIdx, tnum, iDb, - (char *)pKey, 
P4_KEYINFO_HANDOFF); - if( memRootPage>=0 ){ - sqlite3VdbeChangeP5(v, 1); - } + } + pKey = sqlite3KeyInfoOfIndex(pParse, pIndex); + + /* Open the sorter cursor if we are to use one. */ + iSorter = pParse->nTab++; + sqlite3VdbeAddOp4(v, OP_SorterOpen, iSorter, 0, pIndex->nKeyCol, (char*) + sqlite3KeyInfoRef(pKey), P4_KEYINFO); + + /* Open the table. Loop through all rows of the table, inserting index + ** records into the sorter. */ sqlite3OpenTable(pParse, iTab, iDb, pTab, OP_OpenRead); - addr1 = sqlite3VdbeAddOp2(v, OP_Rewind, iTab, 0); + addr1 = sqlite3VdbeAddOp2(v, OP_Rewind, iTab, 0); VdbeCoverage(v); regRecord = sqlite3GetTempReg(pParse); - regIdxKey = sqlite3GenerateIndexKey(pParse, pIndex, iTab, regRecord, 1); - if( pIndex->onError!=OE_None ){ - const int regRowid = regIdxKey + pIndex->nColumn; - const int j2 = sqlite3VdbeCurrentAddr(v) + 2; - void * const pRegKey = SQLITE_INT_TO_PTR(regIdxKey); - - /* The registers accessed by the OP_IsUnique opcode were allocated - ** using sqlite3GetTempRange() inside of the sqlite3GenerateIndexKey() - ** call above. Just before that function was freed they were released - ** (made available to the compiler for reuse) using - ** sqlite3ReleaseTempRange(). So in some ways having the OP_IsUnique - ** opcode use the values stored within seems dangerous. However, since - ** we can be sure that no other temp registers have been allocated - ** since sqlite3ReleaseTempRange() was called, it is safe to do so. - */ - sqlite3VdbeAddOp4(v, OP_IsUnique, iIdx, j2, regRowid, pRegKey, P4_INT32); - sqlite3HaltConstraint( - pParse, OE_Abort, "indexed columns are not unique", P4_STATIC); - } - sqlite3VdbeAddOp2(v, OP_IdxInsert, iIdx, regRecord); + + sqlite3GenerateIndexKey(pParse,pIndex,iTab,regRecord,0,&iPartIdxLabel,0,0); + sqlite3VdbeAddOp2(v, OP_SorterInsert, iSorter, regRecord); + sqlite3ResolvePartIdxLabel(pParse, iPartIdxLabel); + sqlite3VdbeAddOp2(v, OP_Next, iTab, addr1+1); VdbeCoverage(v); + sqlite3VdbeJumpHere(v, addr1); + if( memRootPage<0 ) sqlite3VdbeAddOp2(v, OP_Clear, tnum, iDb); + sqlite3VdbeAddOp4(v, OP_OpenWrite, iIdx, tnum, iDb, + (char *)pKey, P4_KEYINFO); + sqlite3VdbeChangeP5(v, OPFLAG_BULKCSR|((memRootPage>=0)?OPFLAG_P2ISREG:0)); + + addr1 = sqlite3VdbeAddOp2(v, OP_SorterSort, iSorter, 0); VdbeCoverage(v); + assert( pKey!=0 || db->mallocFailed || pParse->nErr ); + if( IsUniqueIndex(pIndex) && pKey!=0 ){ + int j2 = sqlite3VdbeCurrentAddr(v) + 3; + sqlite3VdbeGoto(v, j2); + addr2 = sqlite3VdbeCurrentAddr(v); + sqlite3VdbeAddOp4Int(v, OP_SorterCompare, iSorter, j2, regRecord, + pIndex->nKeyCol); VdbeCoverage(v); + sqlite3UniqueConstraint(pParse, OE_Abort, pIndex); + }else{ + addr2 = sqlite3VdbeCurrentAddr(v); + } + sqlite3VdbeAddOp3(v, OP_SorterData, iSorter, regRecord, iIdx); + sqlite3VdbeAddOp3(v, OP_Last, iIdx, 0, -1); + sqlite3VdbeAddOp3(v, OP_IdxInsert, iIdx, regRecord, 0); sqlite3VdbeChangeP5(v, OPFLAG_USESEEKRESULT); sqlite3ReleaseTempReg(pParse, regRecord); - sqlite3VdbeAddOp2(v, OP_Next, iTab, addr1+1); + sqlite3VdbeAddOp2(v, OP_SorterNext, iSorter, addr2); VdbeCoverage(v); sqlite3VdbeJumpHere(v, addr1); + sqlite3VdbeAddOp1(v, OP_Close, iTab); sqlite3VdbeAddOp1(v, OP_Close, iIdx); + sqlite3VdbeAddOp1(v, OP_Close, iSorter); +} + +/* +** Allocate heap space to hold an Index object with nCol columns. +** +** Increase the allocation size to provide an extra nExtra bytes +** of 8-byte aligned space after the Index object and return a +** pointer to this extra space in *ppExtra. 
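/*
** The comment above describes the single-allocation layout used by
** sqlite3AllocateIndexObject() just below: one heap block holds the Index
** structure followed by its per-column arrays, each array start rounded up
** to an 8-byte boundary.  The sketch here shows the same carve-out pattern
** on a simplified struct; all names are illustrative, not SQLite's.
*/
#include <stdlib.h>
#include <string.h>
#define SK_ROUND8(x) (((x)+7)&~7)
typedef struct SkIdx SkIdx;
struct SkIdx {
  const char **azColl;          /* one collation name per column */
  short *aiColumn;              /* table column number per index column */
  unsigned char *aSortOrder;    /* ASC/DESC flag per column */
  int nColumn;
};
static SkIdx *skAllocIndex(int nCol){
  size_t nByte = SK_ROUND8(sizeof(SkIdx))
               + SK_ROUND8(sizeof(char*)*nCol)
               + SK_ROUND8(sizeof(short)*nCol + sizeof(unsigned char)*nCol);
  SkIdx *p = (SkIdx*)malloc(nByte);
  char *z;
  if( p==0 ) return 0;
  memset(p, 0, nByte);
  z = (char*)p + SK_ROUND8(sizeof(SkIdx));
  p->azColl = (const char**)z;      z += SK_ROUND8(sizeof(char*)*nCol);
  p->aiColumn = (short*)z;          z += sizeof(short)*nCol;
  p->aSortOrder = (unsigned char*)z;
  p->nColumn = nCol;
  return p;                         /* free with a single free(p) */
}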
+*/ +SQLITE_PRIVATE Index *sqlite3AllocateIndexObject( + sqlite3 *db, /* Database connection */ + i16 nCol, /* Total number of columns in the index */ + int nExtra, /* Number of bytes of extra space to alloc */ + char **ppExtra /* Pointer to the "extra" space */ +){ + Index *p; /* Allocated index object */ + int nByte; /* Bytes of space for Index object + arrays */ + + nByte = ROUND8(sizeof(Index)) + /* Index structure */ + ROUND8(sizeof(char*)*nCol) + /* Index.azColl */ + ROUND8(sizeof(LogEst)*(nCol+1) + /* Index.aiRowLogEst */ + sizeof(i16)*nCol + /* Index.aiColumn */ + sizeof(u8)*nCol); /* Index.aSortOrder */ + p = sqlite3DbMallocZero(db, nByte + nExtra); + if( p ){ + char *pExtra = ((char*)p)+ROUND8(sizeof(Index)); + p->azColl = (const char**)pExtra; pExtra += ROUND8(sizeof(char*)*nCol); + p->aiRowLogEst = (LogEst*)pExtra; pExtra += sizeof(LogEst)*(nCol+1); + p->aiColumn = (i16*)pExtra; pExtra += sizeof(i16)*nCol; + p->aSortOrder = (u8*)pExtra; + p->nColumn = nCol; + p->nKeyCol = nCol - 1; + *ppExtra = ((char*)p) + nByte; + } + return p; } /* ** Create a new index for an SQL table. pName1.pName2 is the name of the index ** and pTblList is the name of the table that is to be indexed. Both will @@ -70031,45 +97152,43 @@ ** is a primary key or unique-constraint on the most recent column added ** to the table currently under construction. ** ** If the index is created successfully, return a pointer to the new Index ** structure. This is used by sqlite3AddPrimaryKey() to mark the index -** as the tables primary key (Index.autoIndex==2). +** as the tables primary key (Index.idxType==SQLITE_IDXTYPE_PRIMARYKEY) */ SQLITE_PRIVATE Index *sqlite3CreateIndex( Parse *pParse, /* All information about this parse */ Token *pName1, /* First part of index name. May be NULL */ Token *pName2, /* Second part of index name. May be NULL */ SrcList *pTblName, /* Table to index. Use pParse->pNewTable if 0 */ ExprList *pList, /* A list of columns to be indexed */ int onError, /* OE_Abort, OE_Ignore, OE_Replace, or OE_None */ Token *pStart, /* The CREATE token that begins this statement */ - Token *pEnd, /* The ")" that closes the CREATE INDEX statement */ + Expr *pPIWhere, /* WHERE clause for partial indices */ int sortOrder, /* Sort order of primary key when pList==NULL */ int ifNotExist /* Omit error if index already exists */ ){ Index *pRet = 0; /* Pointer to return */ Table *pTab = 0; /* Table to be indexed */ Index *pIndex = 0; /* The index to be created */ char *zName = 0; /* Name of the index */ int nName; /* Number of characters in zName */ int i, j; - Token nullId; /* Fake token for an empty ID list */ DbFixer sFix; /* For assigning database names to pTable */ int sortOrderMask; /* 1 to honor DESC in index. 0 to ignore. 
*/ sqlite3 *db = pParse->db; Db *pDb; /* The specific table containing the indexed database */ int iDb; /* Index of the database that is being written */ Token *pName = 0; /* Unqualified name of the index to create */ struct ExprList_item *pListItem; /* For looping over pList */ - int nCol; - int nExtra = 0; - char *zExtra; + int nExtra = 0; /* Space allocated for zExtra[] */ + int nExtraCol; /* Number of extra columns needed */ + char *zExtra = 0; /* Extra space after the Index object */ + Index *pPk = 0; /* PRIMARY KEY index for WITHOUT ROWID tables */ - assert( pStart==0 || pEnd!=0 ); /* pEnd must be non-NULL if pStart is */ - assert( pParse->nErr==0 ); /* Never called with prior errors */ - if( db->mallocFailed || IN_DECLARE_VTAB ){ + if( db->mallocFailed || IN_DECLARE_VTAB || pParse->nErr>0 ){ goto exit_create_index; } if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){ goto exit_create_index; } @@ -70084,13 +97203,14 @@ ** before looking up the table. */ assert( pName1 && pName2 ); iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pName); if( iDb<0 ) goto exit_create_index; + assert( pName && pName->z ); #ifndef SQLITE_OMIT_TEMPDB - /* If the index name was unqualified, check if the the table + /* If the index name was unqualified, check if the table ** is a temp table. If so, set the database to 1. Do not do this ** if initialising a database schema. */ if( !db->init.busy ){ pTab = sqlite3SrcListLookup(pParse, pTblName); @@ -70098,33 +97218,43 @@ iDb = 1; } } #endif - if( sqlite3FixInit(&sFix, pParse, iDb, "index", pName) && - sqlite3FixSrcList(&sFix, pTblName) - ){ + sqlite3FixInit(&sFix, pParse, iDb, "index", pName); + if( sqlite3FixSrcList(&sFix, pTblName) ){ /* Because the parser constructs pTblName from a single identifier, ** sqlite3FixSrcList can never fail. */ assert(0); } - pTab = sqlite3LocateTable(pParse, 0, pTblName->a[0].zName, - pTblName->a[0].zDatabase); - if( !pTab || db->mallocFailed ) goto exit_create_index; - assert( db->aDb[iDb].pSchema==pTab->pSchema ); + pTab = sqlite3LocateTableItem(pParse, 0, &pTblName->a[0]); + assert( db->mallocFailed==0 || pTab==0 ); + if( pTab==0 ) goto exit_create_index; + if( iDb==1 && db->aDb[iDb].pSchema!=pTab->pSchema ){ + sqlite3ErrorMsg(pParse, + "cannot create a TEMP index on non-TEMP table \"%s\"", + pTab->zName); + goto exit_create_index; + } + if( !HasRowid(pTab) ) pPk = sqlite3PrimaryKeyIndex(pTab); }else{ assert( pName==0 ); + assert( pStart==0 ); pTab = pParse->pNewTable; if( !pTab ) goto exit_create_index; iDb = sqlite3SchemaToIndex(db, pTab->pSchema); } pDb = &db->aDb[iDb]; assert( pTab!=0 ); assert( pParse->nErr==0 ); if( sqlite3StrNICmp(pTab->zName, "sqlite_", 7)==0 - && memcmp(&pTab->zName[7],"altertab_",9)!=0 ){ + && db->init.busy==0 +#if SQLITE_USER_AUTHENTICATION + && sqlite3UserAuthTable(pTab->zName)==0 +#endif + && sqlite3StrNICmp(&pTab->zName[7],"altertab_",9)!=0 ){ sqlite3ErrorMsg(pParse, "table %s may not be indexed", pTab->zName); goto exit_create_index; } #ifndef SQLITE_OMIT_VIEW if( pTab->pSelect ){ @@ -70153,10 +97283,11 @@ ** own name. 
*/ if( pName ){ zName = sqlite3NameFromToken(db, pName); if( zName==0 ) goto exit_create_index; + assert( pName->z!=0 ); if( SQLITE_OK!=sqlite3CheckObjectName(pParse, zName) ){ goto exit_create_index; } if( !db->init.busy ){ if( sqlite3FindTable(db, zName, 0)!=0 ){ @@ -70165,10 +97296,13 @@ } } if( sqlite3FindIndex(db, zName, pDb->zName)!=0 ){ if( !ifNotExist ){ sqlite3ErrorMsg(pParse, "index %s already exists", zName); + }else{ + assert( !db->init.busy ); + sqlite3CodeVerifySchema(pParse, iDb); } goto exit_create_index; } }else{ int n; @@ -70199,124 +97333,159 @@ /* If pList==0, it means this routine was called to make a primary ** key out of the last column added to the table under construction. ** So create a fake list to simulate this. */ if( pList==0 ){ - nullId.z = pTab->aCol[pTab->nCol-1].zName; - nullId.n = sqlite3Strlen30((char*)nullId.z); - pList = sqlite3ExprListAppend(pParse, 0, 0); + Token prevCol; + sqlite3TokenInit(&prevCol, pTab->aCol[pTab->nCol-1].zName); + pList = sqlite3ExprListAppend(pParse, 0, + sqlite3ExprAlloc(db, TK_ID, &prevCol, 0)); if( pList==0 ) goto exit_create_index; - sqlite3ExprListSetName(pParse, pList, &nullId, 0); - pList->a[0].sortOrder = (u8)sortOrder; + assert( pList->nExpr==1 ); + sqlite3ExprListSetSortOrder(pList, sortOrder); + }else{ + sqlite3ExprListCheckLength(pParse, pList, "index"); } /* Figure out how many bytes of space are required to store explicitly ** specified collation sequence names. */ for(i=0; i<pList->nExpr; i++){ Expr *pExpr = pList->a[i].pExpr; - if( pExpr ){ - CollSeq *pColl = pExpr->pColl; - /* Either pColl!=0 or there was an OOM failure. But if an OOM - ** failure we have quit before reaching this point. */ - if( ALWAYS(pColl) ){ - nExtra += (1 + sqlite3Strlen30(pColl->zName)); - } + assert( pExpr!=0 ); + if( pExpr->op==TK_COLLATE ){ + nExtra += (1 + sqlite3Strlen30(pExpr->u.zToken)); } } /* ** Allocate the index structure. */ nName = sqlite3Strlen30(zName); - nCol = pList->nExpr; - pIndex = sqlite3DbMallocZero(db, - sizeof(Index) + /* Index structure */ - sizeof(int)*nCol + /* Index.aiColumn */ - sizeof(int)*(nCol+1) + /* Index.aiRowEst */ - sizeof(char *)*nCol + /* Index.azColl */ - sizeof(u8)*nCol + /* Index.aSortOrder */ - nName + 1 + /* Index.zName */ - nExtra /* Collation sequence names */ - ); + nExtraCol = pPk ? pPk->nKeyCol : 1; + pIndex = sqlite3AllocateIndexObject(db, pList->nExpr + nExtraCol, + nName + nExtra + 1, &zExtra); if( db->mallocFailed ){ goto exit_create_index; } - pIndex->azColl = (char**)(&pIndex[1]); - pIndex->aiColumn = (int *)(&pIndex->azColl[nCol]); - pIndex->aiRowEst = (unsigned *)(&pIndex->aiColumn[nCol]); - pIndex->aSortOrder = (u8 *)(&pIndex->aiRowEst[nCol+1]); - pIndex->zName = (char *)(&pIndex->aSortOrder[nCol]); - zExtra = (char *)(&pIndex->zName[nName+1]); + assert( EIGHT_BYTE_ALIGNMENT(pIndex->aiRowLogEst) ); + assert( EIGHT_BYTE_ALIGNMENT(pIndex->azColl) ); + pIndex->zName = zExtra; + zExtra += nName + 1; memcpy(pIndex->zName, zName, nName+1); pIndex->pTable = pTab; - pIndex->nColumn = pList->nExpr; pIndex->onError = (u8)onError; - pIndex->autoIndex = (u8)(pName==0); + pIndex->uniqNotNull = onError!=OE_None; + pIndex->idxType = pName ? 
SQLITE_IDXTYPE_APPDEF : SQLITE_IDXTYPE_UNIQUE; pIndex->pSchema = db->aDb[iDb].pSchema; + pIndex->nKeyCol = pList->nExpr; + if( pPIWhere ){ + sqlite3ResolveSelfReference(pParse, pTab, NC_PartIdx, pPIWhere, 0); + pIndex->pPartIdxWhere = pPIWhere; + pPIWhere = 0; + } + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); /* Check to see if we should honor DESC requests on index columns */ if( pDb->pSchema->file_format>=4 ){ sortOrderMask = -1; /* Honor DESC */ }else{ sortOrderMask = 0; /* Ignore DESC */ } - /* Scan the names of the columns of the table to be indexed and - ** load the column indices into the Index structure. Report an error - ** if any column is not found. + /* Analyze the list of expressions that form the terms of the index and + ** report any errors. In the common case where the expression is exactly + ** a table column, store that column in aiColumn[]. For general expressions, + ** populate pIndex->aColExpr and store XN_EXPR (-2) in aiColumn[]. ** - ** TODO: Add a test to make sure that the same column is not named - ** more than once within the same index. Only the first instance of - ** the column will ever be used by the optimizer. Note that using the - ** same column more than once cannot be an error because that would - ** break backwards compatibility - it needs to be a warning. + ** TODO: Issue a warning if two or more columns of the index are identical. + ** TODO: Issue a warning if the table primary key is used as part of the + ** index key. */ for(i=0, pListItem=pList->a; i<pList->nExpr; i++, pListItem++){ - const char *zColName = pListItem->zName; - Column *pTabCol; - int requestedSortOrder; - char *zColl; /* Collation sequence name */ - - for(j=0, pTabCol=pTab->aCol; j<pTab->nCol; j++, pTabCol++){ - if( sqlite3StrICmp(zColName, pTabCol->zName)==0 ) break; - } - if( j>=pTab->nCol ){ - sqlite3ErrorMsg(pParse, "table %s has no column named %s", - pTab->zName, zColName); - goto exit_create_index; - } - pIndex->aiColumn[i] = j; - /* Justification of the ALWAYS(pListItem->pExpr->pColl): Because of - ** the way the "idxlist" non-terminal is constructed by the parser, - ** if pListItem->pExpr is not null then either pListItem->pExpr->pColl - ** must exist or else there must have been an OOM error. But if there - ** was an OOM error, we would never reach this point. 
*/ - if( pListItem->pExpr && ALWAYS(pListItem->pExpr->pColl) ){ + Expr *pCExpr; /* The i-th index expression */ + int requestedSortOrder; /* ASC or DESC on the i-th expression */ + const char *zColl; /* Collation sequence name */ + + sqlite3StringToId(pListItem->pExpr); + sqlite3ResolveSelfReference(pParse, pTab, NC_IdxExpr, pListItem->pExpr, 0); + if( pParse->nErr ) goto exit_create_index; + pCExpr = sqlite3ExprSkipCollate(pListItem->pExpr); + if( pCExpr->op!=TK_COLUMN ){ + if( pTab==pParse->pNewTable ){ + sqlite3ErrorMsg(pParse, "expressions prohibited in PRIMARY KEY and " + "UNIQUE constraints"); + goto exit_create_index; + } + if( pIndex->aColExpr==0 ){ + ExprList *pCopy = sqlite3ExprListDup(db, pList, 0); + pIndex->aColExpr = pCopy; + if( !db->mallocFailed ){ + assert( pCopy!=0 ); + pListItem = &pCopy->a[i]; + } + } + j = XN_EXPR; + pIndex->aiColumn[i] = XN_EXPR; + pIndex->uniqNotNull = 0; + }else{ + j = pCExpr->iColumn; + assert( j<=0x7fff ); + if( j<0 ){ + j = pTab->iPKey; + }else if( pTab->aCol[j].notNull==0 ){ + pIndex->uniqNotNull = 0; + } + pIndex->aiColumn[i] = (i16)j; + } + zColl = 0; + if( pListItem->pExpr->op==TK_COLLATE ){ int nColl; - zColl = pListItem->pExpr->pColl->zName; + zColl = pListItem->pExpr->u.zToken; nColl = sqlite3Strlen30(zColl) + 1; assert( nExtra>=nColl ); memcpy(zExtra, zColl, nColl); zColl = zExtra; zExtra += nColl; nExtra -= nColl; - }else{ + }else if( j>=0 ){ zColl = pTab->aCol[j].zColl; - if( !zColl ){ - zColl = db->pDfltColl->zName; - } } + if( !zColl ) zColl = sqlite3StrBINARY; if( !db->init.busy && !sqlite3LocateCollSeq(pParse, zColl) ){ goto exit_create_index; } pIndex->azColl[i] = zColl; requestedSortOrder = pListItem->sortOrder & sortOrderMask; pIndex->aSortOrder[i] = (u8)requestedSortOrder; } + + /* Append the table key to the end of the index. For WITHOUT ROWID + ** tables (when pPk!=0) this will be the declared PRIMARY KEY. For + ** normal tables (when pPk==0) this will be the rowid. + */ + if( pPk ){ + for(j=0; j<pPk->nKeyCol; j++){ + int x = pPk->aiColumn[j]; + assert( x>=0 ); + if( hasColumn(pIndex->aiColumn, pIndex->nKeyCol, x) ){ + pIndex->nColumn--; + }else{ + pIndex->aiColumn[i] = x; + pIndex->azColl[i] = pPk->azColl[j]; + pIndex->aSortOrder[i] = pPk->aSortOrder[j]; + i++; + } + } + assert( i==pIndex->nColumn ); + }else{ + pIndex->aiColumn[i] = XN_ROWID; + pIndex->azColl[i] = sqlite3StrBINARY; + } sqlite3DefaultRowEst(pIndex); + if( pParse->pNewTable==0 ) estimateIndexWidth(pIndex); if( pTab==pParse->pNewTable ){ /* This routine has been called to create an automatic index as a ** result of a PRIMARY KEY or UNIQUE clause on a column definition, or ** a PRIMARY KEY or UNIQUE clause following the column definitions. @@ -70339,103 +97508,108 @@ ** considered distinct and both result in separate indices. 
*/ Index *pIdx; for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ int k; - assert( pIdx->onError!=OE_None ); - assert( pIdx->autoIndex ); - assert( pIndex->onError!=OE_None ); + assert( IsUniqueIndex(pIdx) ); + assert( pIdx->idxType!=SQLITE_IDXTYPE_APPDEF ); + assert( IsUniqueIndex(pIndex) ); - if( pIdx->nColumn!=pIndex->nColumn ) continue; - for(k=0; k<pIdx->nColumn; k++){ + if( pIdx->nKeyCol!=pIndex->nKeyCol ) continue; + for(k=0; k<pIdx->nKeyCol; k++){ const char *z1; const char *z2; + assert( pIdx->aiColumn[k]>=0 ); if( pIdx->aiColumn[k]!=pIndex->aiColumn[k] ) break; z1 = pIdx->azColl[k]; z2 = pIndex->azColl[k]; if( z1!=z2 && sqlite3StrICmp(z1, z2) ) break; } - if( k==pIdx->nColumn ){ + if( k==pIdx->nKeyCol ){ if( pIdx->onError!=pIndex->onError ){ /* This constraint creates the same index as a previous ** constraint specified somewhere in the CREATE TABLE statement. ** However the ON CONFLICT clauses are different. If both this ** constraint and the previous equivalent constraint have explicit ** ON CONFLICT clauses this is an error. Otherwise, use the - ** explicitly specified behaviour for the index. + ** explicitly specified behavior for the index. */ if( !(pIdx->onError==OE_Default || pIndex->onError==OE_Default) ){ sqlite3ErrorMsg(pParse, "conflicting ON CONFLICT clauses specified", 0); } if( pIdx->onError==OE_Default ){ pIdx->onError = pIndex->onError; } } + pRet = pIdx; goto exit_create_index; } } } /* Link the new Index structure to its table and to the other ** in-memory database structures. */ + assert( pParse->nErr==0 ); if( db->init.busy ){ Index *p; + assert( sqlite3SchemaMutexHeld(db, 0, pIndex->pSchema) ); p = sqlite3HashInsert(&pIndex->pSchema->idxHash, - pIndex->zName, sqlite3Strlen30(pIndex->zName), - pIndex); + pIndex->zName, pIndex); if( p ){ assert( p==pIndex ); /* Malloc must have failed */ - db->mallocFailed = 1; + sqlite3OomFault(db); goto exit_create_index; } db->flags |= SQLITE_InternChanges; if( pTblName!=0 ){ pIndex->tnum = db->init.newTnum; } } - /* If the db->init.busy is 0 then create the index on disk. This - ** involves writing the index into the master table and filling in the - ** index with the current table contents. - ** - ** The db->init.busy is 0 when the user first enters a CREATE INDEX - ** command. db->init.busy is 1 when a database is opened and - ** CREATE INDEX statements are read out of the master table. In - ** the latter case the index already exists on disk, which is why - ** we don't want to recreate it. - ** - ** If pTblName==0 it means this index is generated as a primary key - ** or UNIQUE constraint of a CREATE TABLE statement. Since the table + /* If this is the initial CREATE INDEX statement (or CREATE TABLE if the + ** index is an implied index for a UNIQUE or PRIMARY KEY constraint) then + ** emit code to allocate the index rootpage on disk and make an entry for + ** the index in the sqlite_master table and populate the index with + ** content. But, do not do this if we are simply reading the sqlite_master + ** table to parse the schema, or if this index is the PRIMARY KEY index + ** of a WITHOUT ROWID table. + ** + ** If pTblName==0 it means this index is generated as an implied PRIMARY KEY + ** or UNIQUE index in a CREATE TABLE statement. Since the table ** has just been created, it contains no data and the index initialization ** step can be skipped. 
*/ - else{ /* if( db->init.busy==0 ) */ + else if( HasRowid(pTab) || pTblName!=0 ){ Vdbe *v; char *zStmt; int iMem = ++pParse->nMem; v = sqlite3GetVdbe(pParse); if( v==0 ) goto exit_create_index; - - /* Create the rootpage for the index - */ sqlite3BeginWriteOperation(pParse, 1, iDb); + + /* Create the rootpage for the index using CreateIndex. But before + ** doing so, code a Noop instruction and store its address in + ** Index.tnum. This is required in case this index is actually a + ** PRIMARY KEY and the table is actually a WITHOUT ROWID table. In + ** that case the convertToWithoutRowidTable() routine will replace + ** the Noop with a Goto to jump over the VDBE code generated below. */ + pIndex->tnum = sqlite3VdbeAddOp0(v, OP_Noop); sqlite3VdbeAddOp2(v, OP_CreateIndex, iDb, iMem); /* Gather the complete text of the CREATE INDEX statement into ** the zStmt variable */ if( pStart ){ - assert( pEnd!=0 ); + int n = (int)(pParse->sLastToken.z - pName->z) + pParse->sLastToken.n; + if( pName->z[n-1]==';' ) n--; /* A named index with an explicit CREATE INDEX statement */ zStmt = sqlite3MPrintf(db, "CREATE%s INDEX %.*s", - onError==OE_None ? "" : " UNIQUE", - pEnd->z - pName->z + 1, - pName->z); + onError==OE_None ? "" : " UNIQUE", n, pName->z); }else{ /* An automatic index created by a PRIMARY KEY or UNIQUE constraint */ /* zStmt = sqlite3MPrintf(""); */ zStmt = 0; } @@ -70456,14 +97630,16 @@ ** to invalidate all pre-compiled statements. */ if( pTblName ){ sqlite3RefillIndex(pParse, pIndex, iMem); sqlite3ChangeCookie(pParse, iDb); - sqlite3VdbeAddOp4(v, OP_ParseSchema, iDb, 0, 0, - sqlite3MPrintf(db, "name='%q'", pIndex->zName), P4_DYNAMIC); + sqlite3VdbeAddParseSchemaOp(v, iDb, + sqlite3MPrintf(db, "name='%q' AND type='index'", pIndex->zName)); sqlite3VdbeAddOp1(v, OP_Expire, 0); } + + sqlite3VdbeJumpHere(v, pIndex->tnum); } /* When adding an index to the list of indices for a table, make ** sure all indices labeled OE_Replace come after all those labeled ** OE_Ignore. This is necessary for the correct constraint check @@ -70487,14 +97663,12 @@ pIndex = 0; } /* Clean up before exiting */ exit_create_index: - if( pIndex ){ - sqlite3_free(pIndex->zColAff); - sqlite3DbFree(db, pIndex); - } + if( pIndex ) freeIndex(db, pIndex); + sqlite3ExprDelete(db, pPIWhere); sqlite3ExprListDelete(db, pList); sqlite3SrcListDelete(db, pTblName); sqlite3DbFree(db, zName); return pRet; } @@ -70501,15 +97675,15 @@ /* ** Fill the Index.aiRowEst[] array with default information - information ** to be used when we have not run the ANALYZE command. ** -** aiRowEst[0] is suppose to contain the number of elements in the index. +** aiRowEst[0] is supposed to contain the number of elements in the index. ** Since we do not know, guess 1 million. aiRowEst[1] is an estimate of the ** number of rows in the table that match any particular value of the ** first column of the index. aiRowEst[2] is an estimate of the number -** of rows that match any particular combiniation of the first 2 columns +** of rows that match any particular combination of the first 2 columns ** of the index. And so forth. It must always be the case that * ** aiRowEst[N]<=aiRowEst[N-1] ** aiRowEst[N]>=1 ** @@ -70516,24 +97690,31 @@ ** Apart from that, we have little to go on besides intuition as to ** how aiRowEst[] should be initialized. The numbers generated here ** are based on typical values found in actual indices. 
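/*
** The default row estimates set up just below are stored as LogEst values,
** roughly 10*log2(N): the code's own asserts note that 10 rows encodes as
** 33, 5 rows as 23 and 1 row as 0.  The sketch here approximates that
** encoding with libm so the constants can be checked; SQLite's real
** sqlite3LogEst() is integer-only with its own rounding, so this is an
** illustration of the scale, not a drop-in replacement.
*/
#include <math.h>
static short sketchLogEst(double n){
  return (short)(0.5 + 10.0*log2(n));  /* sketchLogEst(10)==33, (5)==23, (1)==0 */
}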
*/ SQLITE_PRIVATE void sqlite3DefaultRowEst(Index *pIdx){ - unsigned *a = pIdx->aiRowEst; + /* 10, 9, 8, 7, 6 */ + LogEst aVal[] = { 33, 32, 30, 28, 26 }; + LogEst *a = pIdx->aiRowLogEst; + int nCopy = MIN(ArraySize(aVal), pIdx->nKeyCol); int i; - assert( a!=0 ); - a[0] = 1000000; - for(i=pIdx->nColumn; i>=5; i--){ - a[i] = 5; - } - while( i>=1 ){ - a[i] = 11 - i; - i--; - } - if( pIdx->onError!=OE_None ){ - a[pIdx->nColumn] = 1; - } + + /* Set the first entry (number of rows in the index) to the estimated + ** number of rows in the table. Or 10, if the estimated number of rows + ** in the table is less than that. */ + a[0] = pIdx->pTable->nRowLogEst; + if( a[0]<33 ) a[0] = 33; assert( 33==sqlite3LogEst(10) ); + + /* Estimate that a[1] is 10, a[2] is 9, a[3] is 8, a[4] is 7, a[5] is + ** 6 and each subsequent value (if any) is 5. */ + memcpy(&a[1], aVal, nCopy*sizeof(LogEst)); + for(i=nCopy+1; i<=pIdx->nKeyCol; i++){ + a[i] = 23; assert( 23==sqlite3LogEst(5) ); + } + + assert( 0==sqlite3LogEst(1) ); + if( IsUniqueIndex(pIdx) ) a[pIdx->nKeyCol] = 0; } /* ** This routine will drop an existing named index. This routine ** implements the DROP INDEX statement. @@ -70554,15 +97735,17 @@ } pIndex = sqlite3FindIndex(db, pName->a[0].zName, pName->a[0].zDatabase); if( pIndex==0 ){ if( !ifExists ){ sqlite3ErrorMsg(pParse, "no such index: %S", pName, 0); + }else{ + sqlite3CodeVerifyNamedSchema(pParse, pName->a[0].zDatabase); } pParse->checkSchema = 1; goto exit_drop_index; } - if( pIndex->autoIndex ){ + if( pIndex->idxType!=SQLITE_IDXTYPE_APPDEF ){ sqlite3ErrorMsg(pParse, "index associated with UNIQUE " "or PRIMARY KEY constraint cannot be dropped", 0); goto exit_drop_index; } iDb = sqlite3SchemaToIndex(db, pIndex->pSchema); @@ -70585,20 +97768,14 @@ /* Generate code to remove the index and from the master table */ v = sqlite3GetVdbe(pParse); if( v ){ sqlite3BeginWriteOperation(pParse, 1, iDb); sqlite3NestedParse(pParse, - "DELETE FROM %Q.%s WHERE name=%Q", - db->aDb[iDb].zName, SCHEMA_TABLE(iDb), - pIndex->zName - ); - if( sqlite3FindTable(db, "sqlite_stat1", db->aDb[iDb].zName) ){ - sqlite3NestedParse(pParse, - "DELETE FROM %Q.sqlite_stat1 WHERE idx=%Q", - db->aDb[iDb].zName, pIndex->zName - ); - } + "DELETE FROM %Q.%s WHERE name=%Q AND type='index'", + db->aDb[iDb].zName, SCHEMA_TABLE(iDb), pIndex->zName + ); + sqlite3ClearStatTables(pParse, iDb, "idx", pIndex->zName); sqlite3ChangeCookie(pParse, iDb); destroyRootPage(pParse, pIndex->tnum, iDb); sqlite3VdbeAddOp4(v, OP_DropIndex, iDb, 0, 0, pIndex->zName, 0); } @@ -70605,49 +97782,47 @@ exit_drop_index: sqlite3SrcListDelete(db, pName); } /* -** pArray is a pointer to an array of objects. Each object in the -** array is szEntry bytes in size. This routine allocates a new -** object on the end of the array. -** -** *pnEntry is the number of entries already in use. *pnAlloc is -** the previously allocated size of the array. initSize is the -** suggested initial array size allocation. -** -** The index of the new entry is returned in *pIdx. -** -** This routine returns a pointer to the array of objects. This -** might be the same as the pArray parameter or it might be a different -** pointer if the array was resized. +** pArray is a pointer to an array of objects. Each object in the +** array is szEntry bytes in size. This routine uses sqlite3DbRealloc() +** to extend the array so that there is space for a new object at the end. 
+** +** When this function is called, *pnEntry contains the current size of +** the array (in entries - so the allocation is ((*pnEntry) * szEntry) bytes +** in total). +** +** If the realloc() is successful (i.e. if no OOM condition occurs), the +** space allocated for the new object is zeroed, *pnEntry updated to +** reflect the new size of the array and a pointer to the new allocation +** returned. *pIdx is set to the index of the new array entry in this case. +** +** Otherwise, if the realloc() fails, *pIdx is set to -1, *pnEntry remains +** unchanged and a copy of pArray returned. */ SQLITE_PRIVATE void *sqlite3ArrayAllocate( sqlite3 *db, /* Connection to notify of malloc failures */ void *pArray, /* Array of objects. Might be reallocated */ int szEntry, /* Size of each object in the array */ - int initSize, /* Suggested initial allocation, in elements */ int *pnEntry, /* Number of objects currently in use */ - int *pnAlloc, /* Current size of the allocation, in elements */ int *pIdx /* Write the index of a new slot here */ ){ char *z; - if( *pnEntry >= *pnAlloc ){ - void *pNew; - int newSize; - newSize = (*pnAlloc)*2 + initSize; - pNew = sqlite3DbRealloc(db, pArray, newSize*szEntry); + int n = *pnEntry; + if( (n & (n-1))==0 ){ + int sz = (n==0) ? 1 : 2*n; + void *pNew = sqlite3DbRealloc(db, pArray, sz*szEntry); if( pNew==0 ){ *pIdx = -1; return pArray; } - *pnAlloc = sqlite3DbMallocSize(db, pNew)/szEntry; pArray = pNew; } z = (char*)pArray; - memset(&z[*pnEntry * szEntry], 0, szEntry); - *pIdx = *pnEntry; + memset(&z[n * szEntry], 0, szEntry); + *pIdx = n; ++*pnEntry; return pArray; } /* @@ -70659,19 +97834,16 @@ SQLITE_PRIVATE IdList *sqlite3IdListAppend(sqlite3 *db, IdList *pList, Token *pToken){ int i; if( pList==0 ){ pList = sqlite3DbMallocZero(db, sizeof(IdList) ); if( pList==0 ) return 0; - pList->nAlloc = 0; } pList->a = sqlite3ArrayAllocate( db, pList->a, sizeof(pList->a[0]), - 5, &pList->nId, - &pList->nAlloc, &i ); if( i<0 ){ sqlite3IdListDelete(db, pList); return 0; @@ -70738,11 +97910,11 @@ assert( nExtra>=1 ); assert( pSrc!=0 ); assert( iStart<=pSrc->nSrc ); /* Allocate additional space if needed */ - if( pSrc->nSrc+nExtra>pSrc->nAlloc ){ + if( (u32)pSrc->nSrc+nExtra>pSrc->nAlloc ){ SrcList *pNew; int nAlloc = pSrc->nSrc+nExtra; int nGot; pNew = sqlite3DbRealloc(db, pSrc, sizeof(*pSrc) + (nAlloc-1)*sizeof(pSrc->a[0]) ); @@ -70750,19 +97922,19 @@ assert( db->mallocFailed ); return pSrc; } pSrc = pNew; nGot = (sqlite3DbMallocSize(db, pNew) - sizeof(*pSrc))/sizeof(pSrc->a[0])+1; - pSrc->nAlloc = (u16)nGot; + pSrc->nAlloc = nGot; } /* Move existing slots that come after the newly inserted slots ** out of the way */ for(i=pSrc->nSrc-1; i>=iStart; i--){ pSrc->a[i+nExtra] = pSrc->a[i]; } - pSrc->nSrc += (i16)nExtra; + pSrc->nSrc += nExtra; /* Zero the newly allocated slots */ memset(&pSrc->a[iStart], 0, sizeof(pSrc->a[0])*nExtra); for(i=iStart; i<iStart+nExtra; i++){ pSrc->a[i].iCursor = -1; @@ -70813,14 +97985,16 @@ Token *pTable, /* Table to append */ Token *pDatabase /* Database of the table */ ){ struct SrcList_item *pItem; assert( pDatabase==0 || pTable!=0 ); /* Cannot have C without B */ + assert( db!=0 ); if( pList==0 ){ - pList = sqlite3DbMallocZero(db, sizeof(SrcList) ); + pList = sqlite3DbMallocRawNN(db, sizeof(SrcList) ); if( pList==0 ) return 0; pList->nAlloc = 1; + pList->nSrc = 0; } pList = sqlite3SrcListEnlarge(db, pList, 1, pList->nSrc); if( db->mallocFailed ){ sqlite3SrcListDelete(db, pList); return 0; @@ -70866,12 +98040,13 @@ if( pList==0 ) return; 
for(pItem=pList->a, i=0; i<pList->nSrc; i++, pItem++){ sqlite3DbFree(db, pItem->zDatabase); sqlite3DbFree(db, pItem->zName); sqlite3DbFree(db, pItem->zAlias); - sqlite3DbFree(db, pItem->zIndex); - sqlite3DeleteTable(pItem->pTab); + if( pItem->fg.isIndexedBy ) sqlite3DbFree(db, pItem->u1.zIndexedBy); + if( pItem->fg.isTabFunc ) sqlite3ExprListDelete(db, pItem->u1.pFuncArg); + sqlite3DeleteTable(db, pItem->pTab); sqlite3SelectDelete(db, pItem->pSelect); sqlite3ExprDelete(db, pItem->pOn); sqlite3IdListDelete(db, pItem->pUsing); } sqlite3DbFree(db, pList); @@ -70882,11 +98057,11 @@ ** end of a growing FROM clause. The "p" parameter is the part of ** the FROM clause that has already been constructed. "p" is NULL ** if this is the first term of the FROM clause. pTable and pDatabase ** are the name of the table and database named in the FROM clause term. ** pDatabase is NULL if the database name qualifier is missing - the -** usual case. If the term has a alias, then pAlias points to the +** usual case. If the term has an alias, then pAlias points to the ** alias token. If the term is a subquery, then pSubquery is the ** SELECT statement that the subquery encodes. The pTable and ** pDatabase parameters are NULL for subqueries. The pOn and pUsing ** parameters are the content of the ON and USING clauses. ** @@ -70939,20 +98114,40 @@ */ SQLITE_PRIVATE void sqlite3SrcListIndexedBy(Parse *pParse, SrcList *p, Token *pIndexedBy){ assert( pIndexedBy!=0 ); if( p && ALWAYS(p->nSrc>0) ){ struct SrcList_item *pItem = &p->a[p->nSrc-1]; - assert( pItem->notIndexed==0 && pItem->zIndex==0 ); + assert( pItem->fg.notIndexed==0 ); + assert( pItem->fg.isIndexedBy==0 ); + assert( pItem->fg.isTabFunc==0 ); if( pIndexedBy->n==1 && !pIndexedBy->z ){ /* A "NOT INDEXED" clause was supplied. See parse.y ** construct "indexed_opt" for details. */ - pItem->notIndexed = 1; + pItem->fg.notIndexed = 1; }else{ - pItem->zIndex = sqlite3NameFromToken(pParse->db, pIndexedBy); + pItem->u1.zIndexedBy = sqlite3NameFromToken(pParse->db, pIndexedBy); + pItem->fg.isIndexedBy = (pItem->u1.zIndexedBy!=0); } } } + +/* +** Add the list of function arguments to the SrcList entry for a +** table-valued-function. +*/ +SQLITE_PRIVATE void sqlite3SrcListFuncArgs(Parse *pParse, SrcList *p, ExprList *pList){ + if( p ){ + struct SrcList_item *pItem = &p->a[p->nSrc-1]; + assert( pItem->fg.notIndexed==0 ); + assert( pItem->fg.isIndexedBy==0 ); + assert( pItem->fg.isTabFunc==0 ); + pItem->u1.pFuncArg = pList; + pItem->fg.isTabFunc = 1; + }else{ + sqlite3ExprListDelete(pParse->db, pList); + } +} /* ** When building up a FROM clause in the parser, the join operator ** is initially attached to the left operand. But the code generator ** expects the join operator to be on the right operand. This routine @@ -70966,31 +98161,30 @@ ** The operator is "natural cross join". The A and B operands are stored ** in p->a[0] and p->a[1], respectively. The parser initially stores the ** operator with A. This routine shifts that operator over to B. */ SQLITE_PRIVATE void sqlite3SrcListShiftJoinType(SrcList *p){ - if( p && p->a ){ + if( p ){ int i; for(i=p->nSrc-1; i>0; i--){ - p->a[i].jointype = p->a[i-1].jointype; + p->a[i].fg.jointype = p->a[i-1].fg.jointype; } - p->a[0].jointype = 0; + p->a[0].fg.jointype = 0; } } /* -** Begin a transaction +** Generate VDBE code for a BEGIN statement. 
*/ SQLITE_PRIVATE void sqlite3BeginTransaction(Parse *pParse, int type){ sqlite3 *db; Vdbe *v; int i; assert( pParse!=0 ); db = pParse->db; assert( db!=0 ); -/* if( db->aDb[0].pBt==0 ) return; */ if( sqlite3AuthCheck(pParse, SQLITE_TRANSACTION, "BEGIN", 0, 0) ){ return; } v = sqlite3GetVdbe(pParse); if( !v ) return; @@ -70998,44 +98192,38 @@ for(i=0; i<db->nDb; i++){ sqlite3VdbeAddOp2(v, OP_Transaction, i, (type==TK_EXCLUSIVE)+1); sqlite3VdbeUsesBtree(v, i); } } - sqlite3VdbeAddOp2(v, OP_AutoCommit, 0, 0); + sqlite3VdbeAddOp0(v, OP_AutoCommit); } /* -** Commit a transaction +** Generate VDBE code for a COMMIT statement. */ SQLITE_PRIVATE void sqlite3CommitTransaction(Parse *pParse){ - sqlite3 *db; Vdbe *v; assert( pParse!=0 ); - db = pParse->db; - assert( db!=0 ); -/* if( db->aDb[0].pBt==0 ) return; */ + assert( pParse->db!=0 ); if( sqlite3AuthCheck(pParse, SQLITE_TRANSACTION, "COMMIT", 0, 0) ){ return; } v = sqlite3GetVdbe(pParse); if( v ){ - sqlite3VdbeAddOp2(v, OP_AutoCommit, 1, 0); + sqlite3VdbeAddOp1(v, OP_AutoCommit, 1); } } /* -** Rollback a transaction +** Generate VDBE code for a ROLLBACK statement. */ SQLITE_PRIVATE void sqlite3RollbackTransaction(Parse *pParse){ - sqlite3 *db; Vdbe *v; assert( pParse!=0 ); - db = pParse->db; - assert( db!=0 ); -/* if( db->aDb[0].pBt==0 ) return; */ + assert( pParse->db!=0 ); if( sqlite3AuthCheck(pParse, SQLITE_TRANSACTION, "ROLLBACK", 0, 0) ){ return; } v = sqlite3GetVdbe(pParse); if( v ){ @@ -71050,11 +98238,11 @@ SQLITE_PRIVATE void sqlite3Savepoint(Parse *pParse, int op, Token *pName){ char *zName = sqlite3NameFromToken(pParse->db, pName); if( zName ){ Vdbe *v = sqlite3GetVdbe(pParse); #ifndef SQLITE_OMIT_AUTHORIZATION - static const char *az[] = { "BEGIN", "RELEASE", "ROLLBACK" }; + static const char * const az[] = { "BEGIN", "RELEASE", "ROLLBACK" }; assert( !SAVEPOINT_BEGIN && SAVEPOINT_RELEASE==1 && SAVEPOINT_ROLLBACK==2 ); #endif if( !v || sqlite3AuthCheck(pParse, SQLITE_SAVEPOINT, az[op], zName, 0) ){ sqlite3DbFree(pParse->db, zName); return; @@ -71077,72 +98265,61 @@ SQLITE_OPEN_CREATE | SQLITE_OPEN_EXCLUSIVE | SQLITE_OPEN_DELETEONCLOSE | SQLITE_OPEN_TEMP_DB; - rc = sqlite3BtreeFactory(db, 0, 0, SQLITE_DEFAULT_CACHE_SIZE, flags, &pBt); + rc = sqlite3BtreeOpen(db->pVfs, 0, db, &pBt, 0, flags); if( rc!=SQLITE_OK ){ sqlite3ErrorMsg(pParse, "unable to open a temporary database " "file for storing temporary tables"); pParse->rc = rc; return 1; } db->aDb[1].pBt = pBt; assert( db->aDb[1].pSchema ); if( SQLITE_NOMEM==sqlite3BtreeSetPageSize(pBt, db->nextPagesize, -1, 0) ){ - db->mallocFailed = 1; + sqlite3OomFault(db); return 1; } - sqlite3PagerJournalMode(sqlite3BtreePager(pBt), db->dfltJournalMode); } return 0; } /* -** Generate VDBE code that will verify the schema cookie and start -** a read-transaction for all named database files. -** -** It is important that all schema cookies be verified and all -** read transactions be started before anything else happens in -** the VDBE program. But this routine can be called after much other -** code has been generated. So here is what we do: -** -** The first time this routine is called, we code an OP_Goto that -** will jump to a subroutine at the end of the program. Then we -** record every database that needs its schema verified in the -** pParse->cookieMask field. Later, after all other code has been -** generated, the subroutine that does the cookie verifications and -** starts the transactions will be coded and the OP_Goto P2 value -** will be made to point to that subroutine. 
The generation of the -** cookie verification subroutine code happens in sqlite3FinishCoding(). -** -** If iDb<0 then code the OP_Goto only - don't set flag to verify the -** schema on any databases. This can be used to position the OP_Goto -** early in the code, before we know if any database tables will be used. +** Record the fact that the schema cookie will need to be verified +** for database iDb. The code to actually verify the schema cookie +** will occur at the end of the top-level VDBE and will be generated +** later, by sqlite3FinishCoding(). */ SQLITE_PRIVATE void sqlite3CodeVerifySchema(Parse *pParse, int iDb){ Parse *pToplevel = sqlite3ParseToplevel(pParse); - - if( pToplevel->cookieGoto==0 ){ - Vdbe *v = sqlite3GetVdbe(pToplevel); - if( v==0 ) return; /* This only happens if there was a prior error */ - pToplevel->cookieGoto = sqlite3VdbeAddOp2(v, OP_Goto, 0, 0)+1; - } - if( iDb>=0 ){ - sqlite3 *db = pToplevel->db; - int mask; - - assert( iDb<db->nDb ); - assert( db->aDb[iDb].pBt!=0 || iDb==1 ); - assert( iDb<SQLITE_MAX_ATTACHED+2 ); - mask = 1<<iDb; - if( (pToplevel->cookieMask & mask)==0 ){ - pToplevel->cookieMask |= mask; - pToplevel->cookieValue[iDb] = db->aDb[iDb].pSchema->schema_cookie; - if( !OMIT_TEMPDB && iDb==1 ){ - sqlite3OpenTempDatabase(pToplevel); - } + sqlite3 *db = pToplevel->db; + + assert( iDb>=0 && iDb<db->nDb ); + assert( db->aDb[iDb].pBt!=0 || iDb==1 ); + assert( iDb<SQLITE_MAX_ATTACHED+2 ); + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); + if( DbMaskTest(pToplevel->cookieMask, iDb)==0 ){ + DbMaskSet(pToplevel->cookieMask, iDb); + pToplevel->cookieValue[iDb] = db->aDb[iDb].pSchema->schema_cookie; + if( !OMIT_TEMPDB && iDb==1 ){ + sqlite3OpenTempDatabase(pToplevel); + } + } +} + +/* +** If argument zDb is NULL, then call sqlite3CodeVerifySchema() for each +** attached database. Otherwise, invoke it for the database named zDb only. +*/ +SQLITE_PRIVATE void sqlite3CodeVerifyNamedSchema(Parse *pParse, const char *zDb){ + sqlite3 *db = pParse->db; + int i; + for(i=0; i<db->nDb; i++){ + Db *pDb = &db->aDb[i]; + if( pDb->pBt && (!zDb || 0==sqlite3StrICmp(zDb, pDb->zName)) ){ + sqlite3CodeVerifySchema(pParse, i); } } } /* @@ -71159,11 +98336,11 @@ ** necessary to undo a write and the checkpoint should not be set. */ SQLITE_PRIVATE void sqlite3BeginWriteOperation(Parse *pParse, int setStatement, int iDb){ Parse *pToplevel = sqlite3ParseToplevel(pParse); sqlite3CodeVerifySchema(pParse, iDb); - pToplevel->writeMask |= 1<<iDb; + DbMaskSet(pToplevel->writeMask, iDb); pToplevel->isMultiWrite |= setStatement; } /* ** Indicate that the statement currently under construction might write @@ -71201,16 +98378,80 @@ /* ** Code an OP_Halt that causes the vdbe to return an SQLITE_CONSTRAINT ** error. The onError parameter determines which (if any) of the statement ** and/or current transaction is rolled back. 
*/ -SQLITE_PRIVATE void sqlite3HaltConstraint(Parse *pParse, int onError, char *p4, int p4type){ +SQLITE_PRIVATE void sqlite3HaltConstraint( + Parse *pParse, /* Parsing context */ + int errCode, /* extended error code */ + int onError, /* Constraint type */ + char *p4, /* Error message */ + i8 p4type, /* P4_STATIC or P4_TRANSIENT */ + u8 p5Errmsg /* P5_ErrMsg type */ +){ Vdbe *v = sqlite3GetVdbe(pParse); + assert( (errCode&0xff)==SQLITE_CONSTRAINT ); if( onError==OE_Abort ){ sqlite3MayAbort(pParse); } - sqlite3VdbeAddOp4(v, OP_Halt, SQLITE_CONSTRAINT, onError, 0, p4, p4type); + sqlite3VdbeAddOp4(v, OP_Halt, errCode, onError, 0, p4, p4type); + sqlite3VdbeChangeP5(v, p5Errmsg); +} + +/* +** Code an OP_Halt due to UNIQUE or PRIMARY KEY constraint violation. +*/ +SQLITE_PRIVATE void sqlite3UniqueConstraint( + Parse *pParse, /* Parsing context */ + int onError, /* Constraint type */ + Index *pIdx /* The index that triggers the constraint */ +){ + char *zErr; + int j; + StrAccum errMsg; + Table *pTab = pIdx->pTable; + + sqlite3StrAccumInit(&errMsg, pParse->db, 0, 0, 200); + if( pIdx->aColExpr ){ + sqlite3XPrintf(&errMsg, "index '%q'", pIdx->zName); + }else{ + for(j=0; j<pIdx->nKeyCol; j++){ + char *zCol; + assert( pIdx->aiColumn[j]>=0 ); + zCol = pTab->aCol[pIdx->aiColumn[j]].zName; + if( j ) sqlite3StrAccumAppend(&errMsg, ", ", 2); + sqlite3XPrintf(&errMsg, "%s.%s", pTab->zName, zCol); + } + } + zErr = sqlite3StrAccumFinish(&errMsg); + sqlite3HaltConstraint(pParse, + IsPrimaryKeyIndex(pIdx) ? SQLITE_CONSTRAINT_PRIMARYKEY + : SQLITE_CONSTRAINT_UNIQUE, + onError, zErr, P4_DYNAMIC, P5_ConstraintUnique); +} + + +/* +** Code an OP_Halt due to non-unique rowid. +*/ +SQLITE_PRIVATE void sqlite3RowidConstraint( + Parse *pParse, /* Parsing context */ + int onError, /* Conflict resolution algorithm */ + Table *pTab /* The table with the non-unique rowid */ +){ + char *zMsg; + int rc; + if( pTab->iPKey>=0 ){ + zMsg = sqlite3MPrintf(pParse->db, "%s.%s", pTab->zName, + pTab->aCol[pTab->iPKey].zName); + rc = SQLITE_CONSTRAINT_PRIMARYKEY; + }else{ + zMsg = sqlite3MPrintf(pParse->db, "%s.rowid", pTab->zName); + rc = SQLITE_CONSTRAINT_ROWID; + } + sqlite3HaltConstraint(pParse, rc, onError, zMsg, P4_DYNAMIC, + P5_ConstraintUnique); } /* ** Check to see if pIndex uses the collating sequence pColl. Return ** true if it does and false if it does not. @@ -71219,12 +98460,12 @@ static int collationMatch(const char *zColl, Index *pIndex){ int i; assert( zColl!=0 ); for(i=0; i<pIndex->nColumn; i++){ const char *z = pIndex->azColl[i]; - assert( z!=0 ); - if( 0==sqlite3StrICmp(z, zColl) ){ + assert( z!=0 || pIndex->aiColumn[i]<0 ); + if( pIndex->aiColumn[i]>=0 && 0==sqlite3StrICmp(z, zColl) ){ return 1; } } return 0; } @@ -71259,10 +98500,11 @@ int iDb; /* The database index number */ sqlite3 *db = pParse->db; /* The database connection */ HashElem *k; /* For looping over tables in pDb */ Table *pTab; /* A table in the database */ + assert( sqlite3BtreeHoldsAllMutexes(db) ); /* Needed for schema access */ for(iDb=0, pDb=db->aDb; iDb<db->nDb; iDb++, pDb++){ assert( pDb!=0 ); for(k=sqliteHashFirst(&pDb->pSchema->tblHash); k; k=sqliteHashNext(k)){ pTab = (Table*)sqliteHashData(k); reindexTable(pParse, pTab, zColl); @@ -71338,46 +98580,115 @@ sqlite3ErrorMsg(pParse, "unable to identify the object to be reindexed"); } #endif /* -** Return a dynamicly allocated KeyInfo structure that can be used -** with OP_OpenRead or OP_OpenWrite to access database index pIdx. 
+** Return a KeyInfo structure that is appropriate for the given Index. ** -** If successful, a pointer to the new structure is returned. In this case -** the caller is responsible for calling sqlite3DbFree(db, ) on the returned -** pointer. If an error occurs (out of memory or missing collation -** sequence), NULL is returned and the state of pParse updated to reflect -** the error. +** The KeyInfo structure for an index is cached in the Index object. +** So there might be multiple references to the returned pointer. The +** caller should not try to modify the KeyInfo object. +** +** The caller should invoke sqlite3KeyInfoUnref() on the returned object +** when it has finished using it. */ -SQLITE_PRIVATE KeyInfo *sqlite3IndexKeyinfo(Parse *pParse, Index *pIdx){ +SQLITE_PRIVATE KeyInfo *sqlite3KeyInfoOfIndex(Parse *pParse, Index *pIdx){ int i; int nCol = pIdx->nColumn; - int nBytes = sizeof(KeyInfo) + (nCol-1)*sizeof(CollSeq*) + nCol; - sqlite3 *db = pParse->db; - KeyInfo *pKey = (KeyInfo *)sqlite3DbMallocZero(db, nBytes); - + int nKey = pIdx->nKeyCol; + KeyInfo *pKey; + if( pParse->nErr ) return 0; + if( pIdx->uniqNotNull ){ + pKey = sqlite3KeyInfoAlloc(pParse->db, nKey, nCol-nKey); + }else{ + pKey = sqlite3KeyInfoAlloc(pParse->db, nCol, 0); + } if( pKey ){ - pKey->db = pParse->db; - pKey->aSortOrder = (u8 *)&(pKey->aColl[nCol]); - assert( &pKey->aSortOrder[nCol]==&(((u8 *)pKey)[nBytes]) ); + assert( sqlite3KeyInfoIsWriteable(pKey) ); for(i=0; i<nCol; i++){ - char *zColl = pIdx->azColl[i]; - assert( zColl ); - pKey->aColl[i] = sqlite3LocateCollSeq(pParse, zColl); + const char *zColl = pIdx->azColl[i]; + pKey->aColl[i] = zColl==sqlite3StrBINARY ? 0 : + sqlite3LocateCollSeq(pParse, zColl); pKey->aSortOrder[i] = pIdx->aSortOrder[i]; } - pKey->nField = (u16)nCol; - } - - if( pParse->nErr ){ - sqlite3DbFree(db, pKey); - pKey = 0; + if( pParse->nErr ){ + sqlite3KeyInfoUnref(pKey); + pKey = 0; + } } return pKey; } +#ifndef SQLITE_OMIT_CTE +/* +** This routine is invoked once per CTE by the parser while parsing a +** WITH clause. +*/ +SQLITE_PRIVATE With *sqlite3WithAdd( + Parse *pParse, /* Parsing context */ + With *pWith, /* Existing WITH clause, or NULL */ + Token *pName, /* Name of the common-table */ + ExprList *pArglist, /* Optional column name list for the table */ + Select *pQuery /* Query used to initialize the table */ +){ + sqlite3 *db = pParse->db; + With *pNew; + char *zName; + + /* Check that the CTE name is unique within this WITH clause. If + ** not, store an error in the Parse structure. */ + zName = sqlite3NameFromToken(pParse->db, pName); + if( zName && pWith ){ + int i; + for(i=0; i<pWith->nCte; i++){ + if( sqlite3StrICmp(zName, pWith->a[i].zName)==0 ){ + sqlite3ErrorMsg(pParse, "duplicate WITH table name: %s", zName); + } + } + } + + if( pWith ){ + int nByte = sizeof(*pWith) + (sizeof(pWith->a[1]) * pWith->nCte); + pNew = sqlite3DbRealloc(db, pWith, nByte); + }else{ + pNew = sqlite3DbMallocZero(db, sizeof(*pWith)); + } + assert( (pNew!=0 && zName!=0) || db->mallocFailed ); + + if( db->mallocFailed ){ + sqlite3ExprListDelete(db, pArglist); + sqlite3SelectDelete(db, pQuery); + sqlite3DbFree(db, zName); + pNew = pWith; + }else{ + pNew->a[pNew->nCte].pSelect = pQuery; + pNew->a[pNew->nCte].pCols = pArglist; + pNew->a[pNew->nCte].zName = zName; + pNew->a[pNew->nCte].zCteErr = 0; + pNew->nCte++; + } + + return pNew; +} + +/* +** Free the contents of the With object passed as the second argument. 
+*/ +SQLITE_PRIVATE void sqlite3WithDelete(sqlite3 *db, With *pWith){ + if( pWith ){ + int i; + for(i=0; i<pWith->nCte; i++){ + struct Cte *pCte = &pWith->a[i]; + sqlite3ExprListDelete(db, pCte->pCols); + sqlite3SelectDelete(db, pCte->pSelect); + sqlite3DbFree(db, pCte->zName); + } + sqlite3DbFree(db, pWith); + } +} +#endif /* !defined(SQLITE_OMIT_CTE) */ + /************** End of build.c ***********************************************/ /************** Begin file callback.c ****************************************/ /* ** 2005 May 23 ** @@ -71392,10 +98703,11 @@ ** ** This file contains functions used to access the internal hash tables ** of user defined functions and collation sequences. */ +/* #include "sqliteInt.h" */ /* ** Invoke the 'collation needed' callback to request a collation sequence ** in the encoding enc of name zName, length nName. */ @@ -71452,21 +98764,22 @@ ** If it is not NULL, then pColl must point to the database native encoding ** collation sequence with name zName, length nName. ** ** The return value is either the collation sequence to be used in database ** db for collation type name zName, length nName, or NULL, if no collation -** sequence can be found. +** sequence can be found. If no collation is found, leave an error message. ** ** See also: sqlite3LocateCollSeq(), sqlite3FindCollSeq() */ SQLITE_PRIVATE CollSeq *sqlite3GetCollSeq( - sqlite3* db, /* The database connection */ + Parse *pParse, /* Parsing context */ u8 enc, /* The desired encoding for the collating sequence */ CollSeq *pColl, /* Collating sequence with native encoding, or NULL */ const char *zName /* Collating sequence name */ ){ CollSeq *p; + sqlite3 *db = pParse->db; p = pColl; if( !p ){ p = sqlite3FindCollSeq(db, enc, zName, 0); } @@ -71479,10 +98792,13 @@ } if( p && !p->xCmp && synthCollSeq(db, p) ){ p = 0; } assert( !p || p->xCmp ); + if( p==0 ){ + sqlite3ErrorMsg(pParse, "no such collation sequence: %s", zName); + } return p; } /* ** This routine is called on a collation sequence before it is used to @@ -71497,14 +98813,12 @@ */ SQLITE_PRIVATE int sqlite3CheckCollSeq(Parse *pParse, CollSeq *pColl){ if( pColl ){ const char *zName = pColl->zName; sqlite3 *db = pParse->db; - CollSeq *p = sqlite3GetCollSeq(db, ENC(db), pColl, zName); + CollSeq *p = sqlite3GetCollSeq(pParse, ENC(db), pColl, zName); if( !p ){ - sqlite3ErrorMsg(pParse, "no such collation sequence: %s", zName); - pParse->nErr++; return SQLITE_ERROR; } assert( p==pColl ); } return SQLITE_OK; @@ -71517,11 +98831,11 @@ ** specified by zName and nName is not found and parameter 'create' is ** true, then create a new entry. Otherwise return NULL. ** ** Each pointer stored in the sqlite3.aCollSeq hash table contains an ** array of three CollSeq structures. The first is the collation sequence -** prefferred for UTF-8, the second UTF-16le, and the third UTF-16be. +** preferred for UTF-8, the second UTF-16le, and the third UTF-16be. ** ** Stored immediately after the three collation sequences is a copy of ** the collation sequence name. A pointer to this string is stored in ** each collation sequence structure. 
*/ @@ -71529,15 +98843,15 @@ sqlite3 *db, /* Database connection */ const char *zName, /* Name of the collating sequence */ int create /* Create a new entry if true */ ){ CollSeq *pColl; - int nName = sqlite3Strlen30(zName); - pColl = sqlite3HashFind(&db->aCollSeq, zName, nName); + pColl = sqlite3HashFind(&db->aCollSeq, zName); if( 0==pColl && create ){ - pColl = sqlite3DbMallocZero(db, 3*sizeof(*pColl) + nName + 1 ); + int nName = sqlite3Strlen30(zName); + pColl = sqlite3DbMallocZero(db, 3*sizeof(*pColl) + nName + 1); if( pColl ){ CollSeq *pDel = 0; pColl[0].zName = (char*)&pColl[3]; pColl[0].enc = SQLITE_UTF8; pColl[1].zName = (char*)&pColl[3]; @@ -71544,19 +98858,19 @@ pColl[1].enc = SQLITE_UTF16LE; pColl[2].zName = (char*)&pColl[3]; pColl[2].enc = SQLITE_UTF16BE; memcpy(pColl[0].zName, zName, nName); pColl[0].zName[nName] = 0; - pDel = sqlite3HashInsert(&db->aCollSeq, pColl[0].zName, nName, pColl); + pDel = sqlite3HashInsert(&db->aCollSeq, pColl[0].zName, pColl); /* If a malloc() failure occurred in sqlite3HashInsert(), it will ** return the pColl pointer to be deleted (because it wasn't added ** to the hash table). */ assert( pDel==0 || pDel==pColl ); if( pDel!=0 ){ - db->mallocFailed = 1; + sqlite3OomFault(db); sqlite3DbFree(db, pDel); pColl = 0; } } } @@ -71599,43 +98913,62 @@ /* During the search for the best function definition, this procedure ** is called to test how well the function passed as the first argument ** matches the request for a function with nArg arguments in a system ** that uses encoding enc. The value returned indicates how well the ** request is matched. A higher value indicates a better match. +** +** If nArg is -1 that means to only return a match (non-zero) if p->nArg +** is also -1. In other words, we are searching for a function that +** takes a variable number of arguments. +** +** If nArg is -2 that means that we are searching for any function +** regardless of the number of arguments it uses, so return a positive +** match score for any ** ** The returned value is always between 0 and 6, as follows: ** -** 0: Not a match, or if nArg<0 and the function is has no implementation. -** 1: A variable arguments function that prefers UTF-8 when a UTF-16 -** encoding is requested, or vice versa. -** 2: A variable arguments function that uses UTF-16BE when UTF-16LE is -** requested, or vice versa. -** 3: A variable arguments function using the same text encoding. -** 4: A function with the exact number of arguments requested that -** prefers UTF-8 when a UTF-16 encoding is requested, or vice versa. -** 5: A function with the exact number of arguments requested that -** prefers UTF-16LE when UTF-16BE is requested, or vice versa. -** 6: An exact match. +** 0: Not a match. +** 1: UTF8/16 conversion required and function takes any number of arguments. +** 2: UTF16 byte order change required and function takes any number of args. +** 3: encoding matches and function takes any number of arguments +** 4: UTF8/16 conversion required - argument count matches exactly +** 5: UTF16 byte order conversion required - argument count matches exactly +** 6: Perfect match: encoding and argument count match exactly. ** +** If nArg==(-2) then any function with a non-null xSFunc is +** a perfect match and any function with xSFunc NULL is +** a non-match. 
*/ -static int matchQuality(FuncDef *p, int nArg, u8 enc){ - int match = 0; - if( p->nArg==-1 || p->nArg==nArg - || (nArg==-1 && (p->xFunc!=0 || p->xStep!=0)) - ){ +#define FUNC_PERFECT_MATCH 6 /* The score for a perfect match */ +static int matchQuality( + FuncDef *p, /* The function we are evaluating for match quality */ + int nArg, /* Desired number of arguments. (-1)==any */ + u8 enc /* Desired text encoding */ +){ + int match; + + /* nArg of -2 is a special case */ + if( nArg==(-2) ) return (p->xSFunc==0) ? 0 : FUNC_PERFECT_MATCH; + + /* Wrong number of arguments means "no match" */ + if( p->nArg!=nArg && p->nArg>=0 ) return 0; + + /* Give a better score to a function with a specific number of arguments + ** than to function that accepts any number of arguments. */ + if( p->nArg==nArg ){ + match = 4; + }else{ match = 1; - if( p->nArg==nArg || nArg==-1 ){ - match = 4; - } - if( enc==p->iPrefEnc ){ - match += 2; - } - else if( (enc==SQLITE_UTF16LE && p->iPrefEnc==SQLITE_UTF16BE) || - (enc==SQLITE_UTF16BE && p->iPrefEnc==SQLITE_UTF16LE) ){ - match += 1; - } - } + } + + /* Bonus points if the text encoding matches */ + if( enc==(p->funcFlags & SQLITE_FUNC_ENCMASK) ){ + match += 2; /* Exact encoding match */ + }else if( (enc & p->funcFlags & 2)!=0 ){ + match += 1; /* Both are UTF16, but with different byte orders */ + } + return match; } /* ** Search a FuncDefHash for a function with the given name. Return @@ -71687,17 +99020,16 @@ ** pointer to the FuncDef structure that defines that function, or return ** NULL if the function does not exist. ** ** If the createFlag argument is true, then a new (blank) FuncDef ** structure is created and liked into the "db" structure if a -** no matching function previously existed. When createFlag is true -** and the nArg parameter is -1, then only a function that accepts -** any number of arguments will be returned. +** no matching function previously existed. ** -** If createFlag is false and nArg is -1, then the first valid -** function found is returned. A function is valid if either xFunc -** or xStep is non-zero. +** If nArg is -2, then the first valid function found is returned. A +** function is valid if xSFunc is non-zero. The nArg==(-2) +** case is used to see if zName is a valid function name for some number +** of arguments. If nArg is -2, then createFlag must be 0. ** ** If createFlag is false, then a function with the required name and ** number of arguments may be returned even if the eTextRep flag does not ** match that requested. */ @@ -71705,19 +99037,19 @@ sqlite3 *db, /* An open database */ const char *zName, /* Name of the function. Not null-terminated */ int nName, /* Number of characters in the name */ int nArg, /* Number of arguments. -1 means any number */ u8 enc, /* Preferred text encoding */ - int createFlag /* Create new entry if true and does not otherwise exist */ + u8 createFlag /* Create new entry if true and does not otherwise exist */ ){ FuncDef *p; /* Iterator variable */ FuncDef *pBest = 0; /* Best match found so far */ int bestScore = 0; /* Score of best match */ int h; /* Hash value */ - - assert( enc==SQLITE_UTF8 || enc==SQLITE_UTF16LE || enc==SQLITE_UTF16BE ); + assert( nArg>=(-2) ); + assert( nArg>=(-1) || createFlag==0 ); h = (sqlite3UpperToLower[(u8)zName[0]] + nName) % ArraySize(db->aFunc.a); /* First search for a match amongst the application-defined functions. */ p = functionSearch(&db->aFunc, h, zName, nName); @@ -71729,19 +99061,24 @@ } p = p->pNext; } /* If no match is found, search the built-in functions. 
+ ** + ** If the SQLITE_PreferBuiltin flag is set, then search the built-in + ** functions even if a prior app-defined function was found. And give + ** priority to built-in functions. ** ** Except, if createFlag is true, that means that we are trying to - ** install a new function. Whatever FuncDef structure is returned will + ** install a new function. Whatever FuncDef structure is returned it will ** have fields overwritten with new information appropriate for the ** new function. But the FuncDefs for built-in functions are read-only. ** So we must not search for built-ins when creating a new function. */ - if( !createFlag && !pBest ){ + if( !createFlag && (pBest==0 || (db->flags & SQLITE_PreferBuiltin)!=0) ){ FuncDefHash *pHash = &GLOBAL(FuncDefHash, sqlite3GlobalFunctions); + bestScore = 0; p = functionSearch(pHash, h, zName, nName); while( p ){ int score = matchQuality(p, nArg, enc); if( score>bestScore ){ pBest = p; @@ -71753,35 +99090,35 @@ /* If the createFlag parameter is true and the search did not reveal an ** exact match for the name, number of arguments and encoding, then add a ** new entry to the hash table and return it. */ - if( createFlag && (bestScore<6 || pBest->nArg!=nArg) && + if( createFlag && bestScore<FUNC_PERFECT_MATCH && (pBest = sqlite3DbMallocZero(db, sizeof(*pBest)+nName+1))!=0 ){ pBest->zName = (char *)&pBest[1]; pBest->nArg = (u16)nArg; - pBest->iPrefEnc = enc; + pBest->funcFlags = enc; memcpy(pBest->zName, zName, nName); pBest->zName[nName] = 0; sqlite3FuncDefInsert(&db->aFunc, pBest); } - if( pBest && (pBest->xStep || pBest->xFunc || createFlag) ){ + if( pBest && (pBest->xSFunc || createFlag) ){ return pBest; } return 0; } /* ** Free all resources held by the schema structure. The void* argument points ** at a Schema struct. This function does not call sqlite3DbFree(db, ) on the -** pointer itself, it just cleans up subsiduary resources (i.e. the contents +** pointer itself, it just cleans up subsidiary resources (i.e. the contents ** of the schema hash tables). ** ** The Schema.cache_size variable is not cleared. */ -SQLITE_PRIVATE void sqlite3SchemaFree(void *p){ +SQLITE_PRIVATE void sqlite3SchemaClear(void *p){ Hash temp1; Hash temp2; HashElem *pElem; Schema *pSchema = (Schema *)p; @@ -71794,32 +99131,34 @@ } sqlite3HashClear(&temp2); sqlite3HashInit(&pSchema->tblHash); for(pElem=sqliteHashFirst(&temp1); pElem; pElem=sqliteHashNext(pElem)){ Table *pTab = sqliteHashData(pElem); - assert( pTab->dbMem==0 ); - sqlite3DeleteTable(pTab); + sqlite3DeleteTable(0, pTab); } sqlite3HashClear(&temp1); sqlite3HashClear(&pSchema->fkeyHash); pSchema->pSeqTab = 0; - pSchema->flags &= ~DB_SchemaLoaded; + if( pSchema->schemaFlags & DB_SchemaLoaded ){ + pSchema->iGeneration++; + pSchema->schemaFlags &= ~DB_SchemaLoaded; + } } /* ** Find and return the schema associated with a BTree. Create ** a new one if necessary. 
*/ SQLITE_PRIVATE Schema *sqlite3SchemaGet(sqlite3 *db, Btree *pBt){ Schema * p; if( pBt ){ - p = (Schema *)sqlite3BtreeSchema(pBt, sizeof(Schema), sqlite3SchemaFree); + p = (Schema *)sqlite3BtreeSchema(pBt, sizeof(Schema), sqlite3SchemaClear); }else{ - p = (Schema *)sqlite3MallocZero(sizeof(Schema)); + p = (Schema *)sqlite3DbMallocZero(0, sizeof(Schema)); } if( !p ){ - db->mallocFailed = 1; + sqlite3OomFault(db); }else if ( 0==p->file_format ){ sqlite3HashInit(&p->tblHash); sqlite3HashInit(&p->idxHash); sqlite3HashInit(&p->trigHash); sqlite3HashInit(&p->fkeyHash); @@ -71842,22 +99181,32 @@ ** ************************************************************************* ** This file contains C code routines that are called by the parser ** in order to generate code for DELETE FROM statements. */ +/* #include "sqliteInt.h" */ /* -** Look up every table that is named in pSrc. If any table is not found, -** add an error message to pParse->zErrMsg and return NULL. If all tables -** are found, return a pointer to the last table. +** While a SrcList can in general represent multiple tables and subqueries +** (as in the FROM clause of a SELECT statement) in this case it contains +** the name of a single table, as one might find in an INSERT, DELETE, +** or UPDATE statement. Look up that table in the symbol table and +** return a pointer. Set an error message and return NULL if the table +** name is not found or if any other error occurs. +** +** The following fields are initialized appropriate in pSrc: +** +** pSrc->a[0].pTab Pointer to the Table object +** pSrc->a[0].pIndex Pointer to the INDEXED BY index, if there is one +** */ SQLITE_PRIVATE Table *sqlite3SrcListLookup(Parse *pParse, SrcList *pSrc){ struct SrcList_item *pItem = pSrc->a; Table *pTab; assert( pItem && pSrc->nSrc==1 ); - pTab = sqlite3LocateTable(pParse, 0, pItem->zName, pItem->zDatabase); - sqlite3DeleteTable(pItem->pTab); + pTab = sqlite3LocateTableItem(pParse, 0, pItem); + sqlite3DeleteTable(pParse->db, pItem->pTab); pItem->pTab = pTab; if( pTab ){ pTab->nRef++; } if( sqlite3IndexedByLookup(pParse, pItem) ){ @@ -71910,36 +99259,31 @@ */ SQLITE_PRIVATE void sqlite3MaterializeView( Parse *pParse, /* Parsing context */ Table *pView, /* View definition */ Expr *pWhere, /* Optional WHERE clause to be added */ - int iCur /* Cursor number for ephemerial table */ + int iCur /* Cursor number for ephemeral table */ ){ SelectDest dest; - Select *pDup; + Select *pSel; + SrcList *pFrom; sqlite3 *db = pParse->db; - - pDup = sqlite3SelectDup(db, pView->pSelect, 0); - if( pWhere ){ - SrcList *pFrom; - - pWhere = sqlite3ExprDup(db, pWhere, 0); - pFrom = sqlite3SrcListAppend(db, 0, 0, 0); - if( pFrom ){ - assert( pFrom->nSrc==1 ); - pFrom->a[0].zAlias = sqlite3DbStrDup(db, pView->zName); - pFrom->a[0].pSelect = pDup; - assert( pFrom->a[0].pOn==0 ); - assert( pFrom->a[0].pUsing==0 ); - }else{ - sqlite3SelectDelete(db, pDup); - } - pDup = sqlite3SelectNew(pParse, 0, pFrom, pWhere, 0, 0, 0, 0, 0, 0); - } + int iDb = sqlite3SchemaToIndex(db, pView->pSchema); + pWhere = sqlite3ExprDup(db, pWhere, 0); + pFrom = sqlite3SrcListAppend(db, 0, 0, 0); + if( pFrom ){ + assert( pFrom->nSrc==1 ); + pFrom->a[0].zName = sqlite3DbStrDup(db, pView->zName); + pFrom->a[0].zDatabase = sqlite3DbStrDup(db, db->aDb[iDb].zName); + assert( pFrom->a[0].pOn==0 ); + assert( pFrom->a[0].pUsing==0 ); + } + pSel = sqlite3SelectNew(pParse, 0, pFrom, pWhere, 0, 0, 0, + SF_IncludeHidden, 0, 0); sqlite3SelectDestInit(&dest, SRT_EphemTab, iCur); - sqlite3Select(pParse, pDup, &dest); - 
sqlite3SelectDelete(db, pDup); + sqlite3Select(pParse, pSel, &dest); + sqlite3SelectDelete(db, pSel); } #endif /* !defined(SQLITE_OMIT_VIEW) && !defined(SQLITE_OMIT_TRIGGER) */ #if defined(SQLITE_ENABLE_UPDATE_DELETE_LIMIT) && !defined(SQLITE_OMIT_SUBQUERY) /* @@ -71955,11 +99299,11 @@ SrcList *pSrc, /* the FROM clause -- which tables to scan */ Expr *pWhere, /* The WHERE clause. May be null */ ExprList *pOrderBy, /* The ORDER BY clause. May be null */ Expr *pLimit, /* The LIMIT clause. May be null */ Expr *pOffset, /* The OFFSET clause. May be null */ - char *zStmtType /* Either DELETE or UPDATE. For error messages. */ + char *zStmtType /* Either DELETE or UPDATE. For err msgs. */ ){ Expr *pWhereRowid = NULL; /* WHERE rowid .. */ Expr *pInClause = NULL; /* WHERE rowid IN ( select ) */ Expr *pSelectRowid = NULL; /* SELECT rowid ... */ ExprList *pEList = NULL; /* Expression list contaning only pSelectRowid */ @@ -71968,11 +99312,10 @@ /* Check that there isn't an ORDER BY without a LIMIT clause. */ if( pOrderBy && (pLimit == 0) ) { sqlite3ErrorMsg(pParse, "ORDER BY without LIMIT on %s", zStmtType); - pParse->parseError = 1; goto limit_where_cleanup_2; } /* We only need to generate a select expression if there ** is a limit/offset term to enforce. @@ -72016,11 +99359,11 @@ pInClause = sqlite3PExpr(pParse, TK_IN, pWhereRowid, 0, 0); if( pInClause == 0 ) goto limit_where_cleanup_1; pInClause->x.pSelect = pSelect; pInClause->flags |= EP_xIsSelect; - sqlite3ExprSetHeight(pParse, pInClause); + sqlite3ExprSetHeightAndFlags(pParse, pInClause); return pInClause; /* something went wrong. clean up anything allocated. */ limit_where_cleanup_1: sqlite3SelectDelete(pParse->db, pSelect); @@ -72031,11 +99374,12 @@ sqlite3ExprListDelete(pParse->db, pOrderBy); sqlite3ExprDelete(pParse->db, pLimit); sqlite3ExprDelete(pParse->db, pOffset); return 0; } -#endif /* defined(SQLITE_ENABLE_UPDATE_DELETE_LIMIT) && !defined(SQLITE_OMIT_SUBQUERY) */ +#endif /* defined(SQLITE_ENABLE_UPDATE_DELETE_LIMIT) */ + /* && !defined(SQLITE_OMIT_SUBQUERY) */ /* ** Generate code for a DELETE FROM statement. ** ** DELETE FROM table_wxyz WHERE a<5 AND b NOT NULL; @@ -72048,25 +99392,41 @@ Expr *pWhere /* The WHERE clause. 
May be null */ ){ Vdbe *v; /* The virtual database engine */ Table *pTab; /* The table from which records will be deleted */ const char *zDb; /* Name of database holding pTab */ - int end, addr = 0; /* A couple addresses of generated code */ int i; /* Loop counter */ WhereInfo *pWInfo; /* Information about the WHERE clause */ Index *pIdx; /* For looping over indices of the table */ - int iCur; /* VDBE Cursor number for pTab */ + int iTabCur; /* Cursor number for the table */ + int iDataCur = 0; /* VDBE cursor for the canonical data source */ + int iIdxCur = 0; /* Cursor number of the first index */ + int nIdx; /* Number of indices */ sqlite3 *db; /* Main database structure */ AuthContext sContext; /* Authorization context */ NameContext sNC; /* Name context to resolve expressions in */ int iDb; /* Database number */ int memCnt = -1; /* Memory cell used for change counting */ int rcauth; /* Value returned by authorization callback */ - + int eOnePass; /* ONEPASS_OFF or _SINGLE or _MULTI */ + int aiCurOnePass[2]; /* The write cursors opened by WHERE_ONEPASS */ + u8 *aToOpen = 0; /* Open cursor iTabCur+j if aToOpen[j] is true */ + Index *pPk; /* The PRIMARY KEY index on the table */ + int iPk = 0; /* First of nPk registers holding PRIMARY KEY value */ + i16 nPk = 1; /* Number of columns in the PRIMARY KEY */ + int iKey; /* Memory cell holding key of row to be deleted */ + i16 nKey; /* Number of memory cells in the row key */ + int iEphCur = 0; /* Ephemeral table holding all primary key values */ + int iRowSet = 0; /* Register for rowset of rows to delete */ + int addrBypass = 0; /* Address of jump over the delete logic */ + int addrLoop = 0; /* Top of the delete loop */ + int addrEphOpen = 0; /* Instruction to open the Ephemeral table */ + #ifndef SQLITE_OMIT_TRIGGER int isView; /* True if attempting to delete from a view */ Trigger *pTrigger; /* List of table triggers, if required */ + int bComplex; /* True if there are either triggers or FKs */ #endif memset(&sContext, 0, sizeof(sContext)); db = pParse->db; if( pParse->nErr || db->mallocFailed ){ @@ -72086,13 +99446,15 @@ ** deleted from is a view */ #ifndef SQLITE_OMIT_TRIGGER pTrigger = sqlite3TriggersExist(pParse, pTab, TK_DELETE, 0, 0); isView = pTab->pSelect!=0; + bComplex = pTrigger || sqlite3FkRequired(pParse, pTab, 0, 0); #else # define pTrigger 0 # define isView 0 +# define bComplex 0 #endif #ifdef SQLITE_OMIT_VIEW # undef isView # define isView 0 #endif @@ -72114,15 +99476,15 @@ if( rcauth==SQLITE_DENY ){ goto delete_from_cleanup; } assert(!isView || pTrigger); - /* Assign cursor number to the table and all its indices. + /* Assign cursor numbers to the table and all its indices. */ assert( pTabList->nSrc==1 ); - iCur = pTabList->a[0].iCursor = pParse->nTab++; - for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ + iTabCur = pTabList->a[0].iCursor = pParse->nTab++; + for(nIdx=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, nIdx++){ pParse->nTab++; } /* Start the view context */ @@ -72138,15 +99500,16 @@ } if( pParse->nested==0 ) sqlite3VdbeCountChanges(v); sqlite3BeginWriteOperation(pParse, 1, iDb); /* If we are trying to delete from a view, realize that view into - ** a ephemeral table. + ** an ephemeral table. */ #if !defined(SQLITE_OMIT_VIEW) && !defined(SQLITE_OMIT_TRIGGER) if( isView ){ - sqlite3MaterializeView(pParse, pTab, pWhere, iCur); + sqlite3MaterializeView(pParse, pTab, pWhere, iTabCur); + iDataCur = iIdxCur = iTabCur; } #endif /* Resolve the column names in the WHERE clause. 
*/ @@ -72168,83 +99531,202 @@ #ifndef SQLITE_OMIT_TRUNCATE_OPTIMIZATION /* Special case: A DELETE without a WHERE clause deletes everything. ** It is easier just to erase the whole table. Prior to version 3.6.5, ** this optimization caused the row change count (the value returned by ** API function sqlite3_count_changes) to be set incorrectly. */ - if( rcauth==SQLITE_OK && pWhere==0 && !pTrigger && !IsVirtual(pTab) - && 0==sqlite3FkRequired(pParse, pTab, 0, 0) + if( rcauth==SQLITE_OK + && pWhere==0 + && !bComplex + && !IsVirtual(pTab) ){ assert( !isView ); - sqlite3VdbeAddOp4(v, OP_Clear, pTab->tnum, iDb, memCnt, - pTab->zName, P4_STATIC); + sqlite3TableLock(pParse, iDb, pTab->tnum, 1, pTab->zName); + if( HasRowid(pTab) ){ + sqlite3VdbeAddOp4(v, OP_Clear, pTab->tnum, iDb, memCnt, + pTab->zName, P4_STATIC); + } for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ assert( pIdx->pSchema==pTab->pSchema ); sqlite3VdbeAddOp2(v, OP_Clear, pIdx->tnum, iDb); } }else #endif /* SQLITE_OMIT_TRUNCATE_OPTIMIZATION */ - /* The usual case: There is a WHERE clause so we have to scan through - ** the table and pick which records to delete. - */ - { - int iRowSet = ++pParse->nMem; /* Register for rowset of rows to delete */ - int iRowid = ++pParse->nMem; /* Used for storing rowid values. */ - int regRowid; /* Actual register containing rowids */ - - /* Collect rowids of every row to be deleted. - */ - sqlite3VdbeAddOp2(v, OP_Null, 0, iRowSet); - pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere,0,WHERE_DUPLICATES_OK); + { + u16 wcf = WHERE_ONEPASS_DESIRED|WHERE_DUPLICATES_OK; + wcf |= (bComplex ? 0 : WHERE_ONEPASS_MULTIROW); + if( HasRowid(pTab) ){ + /* For a rowid table, initialize the RowSet to an empty set */ + pPk = 0; + nPk = 1; + iRowSet = ++pParse->nMem; + sqlite3VdbeAddOp2(v, OP_Null, 0, iRowSet); + }else{ + /* For a WITHOUT ROWID table, create an ephemeral table used to + ** hold all primary keys for rows to be deleted. */ + pPk = sqlite3PrimaryKeyIndex(pTab); + assert( pPk!=0 ); + nPk = pPk->nKeyCol; + iPk = pParse->nMem+1; + pParse->nMem += nPk; + iEphCur = pParse->nTab++; + addrEphOpen = sqlite3VdbeAddOp2(v, OP_OpenEphemeral, iEphCur, nPk); + sqlite3VdbeSetP4KeyInfo(pParse, pPk); + } + + /* Construct a query to find the rowid or primary key for every row + ** to be deleted, based on the WHERE clause. Set variable eOnePass + ** to indicate the strategy used to implement this delete: + ** + ** ONEPASS_OFF: Two-pass approach - use a FIFO for rowids/PK values. + ** ONEPASS_SINGLE: One-pass approach - at most one row deleted. + ** ONEPASS_MULTI: One-pass approach - any number of rows may be deleted. + */ + pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, 0, 0, wcf, iTabCur+1); if( pWInfo==0 ) goto delete_from_cleanup; - regRowid = sqlite3ExprCodeGetColumn(pParse, pTab, -1, iCur, iRowid); - sqlite3VdbeAddOp2(v, OP_RowSetAdd, iRowSet, regRowid); + eOnePass = sqlite3WhereOkOnePass(pWInfo, aiCurOnePass); + assert( IsVirtual(pTab)==0 || eOnePass!=ONEPASS_MULTI ); + assert( IsVirtual(pTab) || bComplex || eOnePass!=ONEPASS_OFF ); + + /* Keep track of the number of rows to be deleted */ if( db->flags & SQLITE_CountRows ){ sqlite3VdbeAddOp2(v, OP_AddImm, memCnt, 1); } - sqlite3WhereEnd(pWInfo); - - /* Delete every item whose key was written to the list during the - ** database scan. We have to delete items after the scan is complete - ** because deleting an item can change the scan order. 
*/ - end = sqlite3VdbeMakeLabel(v); - + + /* Extract the rowid or primary key for the current row */ + if( pPk ){ + for(i=0; i<nPk; i++){ + assert( pPk->aiColumn[i]>=0 ); + sqlite3ExprCodeGetColumnOfTable(v, pTab, iTabCur, + pPk->aiColumn[i], iPk+i); + } + iKey = iPk; + }else{ + iKey = pParse->nMem + 1; + iKey = sqlite3ExprCodeGetColumn(pParse, pTab, -1, iTabCur, iKey, 0); + if( iKey>pParse->nMem ) pParse->nMem = iKey; + } + + if( eOnePass!=ONEPASS_OFF ){ + /* For ONEPASS, no need to store the rowid/primary-key. There is only + ** one, so just keep it in its register(s) and fall through to the + ** delete code. */ + nKey = nPk; /* OP_Found will use an unpacked key */ + aToOpen = sqlite3DbMallocRawNN(db, nIdx+2); + if( aToOpen==0 ){ + sqlite3WhereEnd(pWInfo); + goto delete_from_cleanup; + } + memset(aToOpen, 1, nIdx+1); + aToOpen[nIdx+1] = 0; + if( aiCurOnePass[0]>=0 ) aToOpen[aiCurOnePass[0]-iTabCur] = 0; + if( aiCurOnePass[1]>=0 ) aToOpen[aiCurOnePass[1]-iTabCur] = 0; + if( addrEphOpen ) sqlite3VdbeChangeToNoop(v, addrEphOpen); + }else{ + if( pPk ){ + /* Add the PK key for this row to the temporary table */ + iKey = ++pParse->nMem; + nKey = 0; /* Zero tells OP_Found to use a composite key */ + sqlite3VdbeAddOp4(v, OP_MakeRecord, iPk, nPk, iKey, + sqlite3IndexAffinityStr(pParse->db, pPk), nPk); + sqlite3VdbeAddOp2(v, OP_IdxInsert, iEphCur, iKey); + }else{ + /* Add the rowid of the row to be deleted to the RowSet */ + nKey = 1; /* OP_Seek always uses a single rowid */ + sqlite3VdbeAddOp2(v, OP_RowSetAdd, iRowSet, iKey); + } + } + + /* If this DELETE cannot use the ONEPASS strategy, this is the + ** end of the WHERE loop */ + if( eOnePass!=ONEPASS_OFF ){ + addrBypass = sqlite3VdbeMakeLabel(v); + }else{ + sqlite3WhereEnd(pWInfo); + } + /* Unless this is a view, open cursors for the table we are ** deleting from and all its indices. If this is a view, then the ** only effect this statement has is to fire the INSTEAD OF - ** triggers. */ + ** triggers. + */ if( !isView ){ - sqlite3OpenTableAndIndices(pParse, pTab, iCur, OP_OpenWrite); + int iAddrOnce = 0; + if( eOnePass==ONEPASS_MULTI ){ + iAddrOnce = sqlite3CodeOnce(pParse); VdbeCoverage(v); + } + testcase( IsVirtual(pTab) ); + sqlite3OpenTableAndIndices(pParse, pTab, OP_OpenWrite, OPFLAG_FORDELETE, + iTabCur, aToOpen, &iDataCur, &iIdxCur); + assert( pPk || IsVirtual(pTab) || iDataCur==iTabCur ); + assert( pPk || IsVirtual(pTab) || iIdxCur==iDataCur+1 ); + if( eOnePass==ONEPASS_MULTI ) sqlite3VdbeJumpHere(v, iAddrOnce); } - - addr = sqlite3VdbeAddOp3(v, OP_RowSetRead, iRowSet, end, iRowid); - + + /* Set up a loop over the rowids/primary-keys that were found in the + ** where-clause loop above. 
+ */ + if( eOnePass!=ONEPASS_OFF ){ + assert( nKey==nPk ); /* OP_Found will use an unpacked key */ + if( !IsVirtual(pTab) && aToOpen[iDataCur-iTabCur] ){ + assert( pPk!=0 || pTab->pSelect!=0 ); + sqlite3VdbeAddOp4Int(v, OP_NotFound, iDataCur, addrBypass, iKey, nKey); + VdbeCoverage(v); + } + }else if( pPk ){ + addrLoop = sqlite3VdbeAddOp1(v, OP_Rewind, iEphCur); VdbeCoverage(v); + sqlite3VdbeAddOp2(v, OP_RowKey, iEphCur, iKey); + assert( nKey==0 ); /* OP_Found will use a composite key */ + }else{ + addrLoop = sqlite3VdbeAddOp3(v, OP_RowSetRead, iRowSet, 0, iKey); + VdbeCoverage(v); + assert( nKey==1 ); + } + /* Delete the row */ #ifndef SQLITE_OMIT_VIRTUALTABLE if( IsVirtual(pTab) ){ const char *pVTab = (const char *)sqlite3GetVTable(db, pTab); sqlite3VtabMakeWritable(pParse, pTab); - sqlite3VdbeAddOp4(v, OP_VUpdate, 0, 1, iRowid, pVTab, P4_VTAB); + sqlite3VdbeAddOp4(v, OP_VUpdate, 0, 1, iKey, pVTab, P4_VTAB); + sqlite3VdbeChangeP5(v, OE_Abort); + assert( eOnePass==ONEPASS_OFF || eOnePass==ONEPASS_SINGLE ); sqlite3MayAbort(pParse); + if( eOnePass==ONEPASS_SINGLE && sqlite3IsToplevel(pParse) ){ + pParse->isMultiWrite = 0; + } }else #endif { int count = (pParse->nested==0); /* True to count changes */ - sqlite3GenerateRowDelete(pParse, pTab, iCur, iRowid, count, pTrigger, OE_Default); + int iIdxNoSeek = -1; + if( bComplex==0 && aiCurOnePass[1]!=iDataCur ){ + iIdxNoSeek = aiCurOnePass[1]; + } + sqlite3GenerateRowDelete(pParse, pTab, pTrigger, iDataCur, iIdxCur, + iKey, nKey, count, OE_Default, eOnePass, iIdxNoSeek); } - - /* End of the delete loop */ - sqlite3VdbeAddOp2(v, OP_Goto, 0, addr); - sqlite3VdbeResolveLabel(v, end); - + + /* End of the loop over all rowids/primary-keys. */ + if( eOnePass!=ONEPASS_OFF ){ + sqlite3VdbeResolveLabel(v, addrBypass); + sqlite3WhereEnd(pWInfo); + }else if( pPk ){ + sqlite3VdbeAddOp2(v, OP_Next, iEphCur, addrLoop+1); VdbeCoverage(v); + sqlite3VdbeJumpHere(v, addrLoop); + }else{ + sqlite3VdbeGoto(v, addrLoop); + sqlite3VdbeJumpHere(v, addrLoop); + } + /* Close the cursors open on the table and its indexes. */ if( !isView && !IsVirtual(pTab) ){ - for(i=1, pIdx=pTab->pIndex; pIdx; i++, pIdx=pIdx->pNext){ - sqlite3VdbeAddOp2(v, OP_Close, iCur + i, pIdx->tnum); + if( !pPk ) sqlite3VdbeAddOp1(v, OP_Close, iDataCur); + for(i=0, pIdx=pTab->pIndex; pIdx; i++, pIdx=pIdx->pNext){ + sqlite3VdbeAddOp1(v, OP_Close, iIdxCur + i); } - sqlite3VdbeAddOp1(v, OP_Close, iCur); } - } + } /* End non-truncate path */ /* Update the sqlite_sequence table by storing the content of the ** maximum rowid counter values recorded while inserting into ** autoincrement tables. */ @@ -72264,14 +99746,15 @@ delete_from_cleanup: sqlite3AuthContextPop(&sContext); sqlite3SrcListDelete(db, pTabList); sqlite3ExprDelete(db, pWhere); + sqlite3DbFree(db, aToOpen); return; } /* Make sure "isView" and other macros defined above are undefined. Otherwise -** thely may interfere with compilation of other functions in this file +** they may interfere with compilation of other functions in this file ** (or in another file, if this file becomes part of the amalgamation). */ #ifdef isView #undef isView #endif #ifdef pTrigger @@ -72278,54 +99761,87 @@ #undef pTrigger #endif /* ** This routine generates VDBE code that causes a single row of a -** single table to be deleted. +** single table to be deleted. Both the original table entry and +** all indices are removed. ** -** The VDBE must be in a particular state when this routine is called. -** These are the requirements: +** Preconditions: ** -** 1. 
A read/write cursor pointing to pTab, the table containing the row -** to be deleted, must be opened as cursor number $iCur. +** 1. iDataCur is an open cursor on the btree that is the canonical data +** store for the table. (This will be either the table itself, +** in the case of a rowid table, or the PRIMARY KEY index in the case +** of a WITHOUT ROWID table.) ** ** 2. Read/write cursors for all indices of pTab must be open as -** cursor number base+i for the i-th index. +** cursor number iIdxCur+i for the i-th index. ** -** 3. The record number of the row to be deleted must be stored in -** memory cell iRowid. +** 3. The primary key for the row to be deleted must be stored in a +** sequence of nPk memory cells starting at iPk. If nPk==0 that means +** that a search record formed from OP_MakeRecord is contained in the +** single memory location iPk. ** -** This routine generates code to remove both the table record and all -** index entries that point to that record. +** eMode: +** Parameter eMode may be passed either ONEPASS_OFF (0), ONEPASS_SINGLE, or +** ONEPASS_MULTI. If eMode is not ONEPASS_OFF, then the cursor +** iDataCur already points to the row to delete. If eMode is ONEPASS_OFF +** then this function must seek iDataCur to the entry identified by iPk +** and nPk before reading from it. +** +** If eMode is ONEPASS_MULTI, then this call is being made as part +** of a ONEPASS delete that affects multiple rows. In this case, if +** iIdxNoSeek is a valid cursor number (>=0), then its position should +** be preserved following the delete operation. Or, if iIdxNoSeek is not +** a valid cursor number, the position of iDataCur should be preserved +** instead. +** +** iIdxNoSeek: +** If iIdxNoSeek is a valid cursor number (>=0), then it identifies an +** index cursor (from within array of cursors starting at iIdxCur) that +** already points to the index entry to be deleted. */ SQLITE_PRIVATE void sqlite3GenerateRowDelete( Parse *pParse, /* Parsing context */ Table *pTab, /* Table containing the row to be deleted */ - int iCur, /* Cursor number for the table */ - int iRowid, /* Memory cell that contains the rowid to delete */ - int count, /* If non-zero, increment the row change counter */ Trigger *pTrigger, /* List of triggers to (potentially) fire */ - int onconf /* Default ON CONFLICT policy for triggers */ + int iDataCur, /* Cursor from which column data is extracted */ + int iIdxCur, /* First index cursor */ + int iPk, /* First memory cell containing the PRIMARY KEY */ + i16 nPk, /* Number of PRIMARY KEY memory cells */ + u8 count, /* If non-zero, increment the row change counter */ + u8 onconf, /* Default ON CONFLICT policy for triggers */ + u8 eMode, /* ONEPASS_OFF, _SINGLE, or _MULTI. See above */ + int iIdxNoSeek /* Cursor number of cursor that does not need seeking */ ){ Vdbe *v = pParse->pVdbe; /* Vdbe */ int iOld = 0; /* First register in OLD.* array */ int iLabel; /* Label resolved to end of generated code */ + u8 opSeek; /* Seek opcode */ /* Vdbe is guaranteed to have been allocated by this stage. */ assert( v ); + VdbeModuleComment((v, "BEGIN: GenRowDel(%d,%d,%d,%d)", + iDataCur, iIdxCur, iPk, (int)nPk)); /* Seek cursor iCur to the row to delete. If this row no longer exists ** (this can happen if a trigger program has already deleted it), do ** not attempt to delete it or fire any DELETE triggers. */ iLabel = sqlite3VdbeMakeLabel(v); - sqlite3VdbeAddOp3(v, OP_NotExists, iCur, iLabel, iRowid); + opSeek = HasRowid(pTab) ? 
OP_NotExists : OP_NotFound; + if( eMode==ONEPASS_OFF ){ + sqlite3VdbeAddOp4Int(v, opSeek, iDataCur, iLabel, iPk, nPk); + VdbeCoverageIf(v, opSeek==OP_NotExists); + VdbeCoverageIf(v, opSeek==OP_NotFound); + } /* If there are any triggers to fire, allocate a range of registers to ** use for the old.* references in the triggers. */ if( sqlite3FkRequired(pParse, pTab, 0, 0) || pTrigger ){ u32 mask; /* Mask of OLD.* columns in use */ int iCol; /* Iterator used while populating OLD.* */ + int addrStart; /* Start of BEFORE trigger programs */ /* TODO: Could use temporary registers here. Also could attempt to ** avoid copying the contents of the rowid register. */ mask = sqlite3TriggerColmask( pParse, pTrigger, 0, 0, TRIGGER_BEFORE|TRIGGER_AFTER, pTab, onconf @@ -72334,51 +99850,66 @@ iOld = pParse->nMem+1; pParse->nMem += (1 + pTab->nCol); /* Populate the OLD.* pseudo-table register array. These values will be ** used by any BEFORE and AFTER triggers that exist. */ - sqlite3VdbeAddOp2(v, OP_Copy, iRowid, iOld); + sqlite3VdbeAddOp2(v, OP_Copy, iPk, iOld); for(iCol=0; iCol<pTab->nCol; iCol++){ - if( mask==0xffffffff || mask&(1<<iCol) ){ - int iTarget = iOld + iCol + 1; - sqlite3VdbeAddOp3(v, OP_Column, iCur, iCol, iTarget); - sqlite3ColumnDefault(v, pTab, iCol, iTarget); + testcase( mask!=0xffffffff && iCol==31 ); + testcase( mask!=0xffffffff && iCol==32 ); + if( mask==0xffffffff || (iCol<=31 && (mask & MASKBIT32(iCol))!=0) ){ + sqlite3ExprCodeGetColumnOfTable(v, pTab, iDataCur, iCol, iOld+iCol+1); } } /* Invoke BEFORE DELETE trigger programs. */ + addrStart = sqlite3VdbeCurrentAddr(v); sqlite3CodeRowTrigger(pParse, pTrigger, TK_DELETE, 0, TRIGGER_BEFORE, pTab, iOld, onconf, iLabel ); - /* Seek the cursor to the row to be deleted again. It may be that - ** the BEFORE triggers coded above have already removed the row - ** being deleted. Do not attempt to delete the row a second time, and - ** do not fire AFTER triggers. */ - sqlite3VdbeAddOp3(v, OP_NotExists, iCur, iLabel, iRowid); + /* If any BEFORE triggers were coded, then seek the cursor to the + ** row to be deleted again. It may be that the BEFORE triggers moved + ** the cursor or of already deleted the row that the cursor was + ** pointing to. + */ + if( addrStart<sqlite3VdbeCurrentAddr(v) ){ + sqlite3VdbeAddOp4Int(v, opSeek, iDataCur, iLabel, iPk, nPk); + VdbeCoverageIf(v, opSeek==OP_NotExists); + VdbeCoverageIf(v, opSeek==OP_NotFound); + } /* Do FK processing. This call checks that any FK constraints that ** refer to this table (i.e. constraints attached to other tables) ** are not violated by deleting this row. */ - sqlite3FkCheck(pParse, pTab, iOld, 0); + sqlite3FkCheck(pParse, pTab, iOld, 0, 0, 0); } /* Delete the index and table entries. Skip this step if pTab is really ** a view (in which case the only effect of the DELETE statement is to ** fire the INSTEAD OF triggers). 
*/ if( pTab->pSelect==0 ){ - sqlite3GenerateRowIndexDelete(pParse, pTab, iCur, 0); - sqlite3VdbeAddOp2(v, OP_Delete, iCur, (count?OPFLAG_NCHANGE:0)); + u8 p5 = 0; + sqlite3GenerateRowIndexDelete(pParse, pTab, iDataCur, iIdxCur,0,iIdxNoSeek); + sqlite3VdbeAddOp2(v, OP_Delete, iDataCur, (count?OPFLAG_NCHANGE:0)); if( count ){ - sqlite3VdbeChangeP4(v, -1, pTab->zName, P4_STATIC); + sqlite3VdbeChangeP4(v, -1, pTab->zName, P4_TRANSIENT); } + if( eMode!=ONEPASS_OFF ){ + sqlite3VdbeChangeP5(v, OPFLAG_AUXDELETE); + } + if( iIdxNoSeek>=0 ){ + sqlite3VdbeAddOp1(v, OP_Delete, iIdxNoSeek); + } + if( eMode==ONEPASS_MULTI ) p5 |= OPFLAG_SAVEPOSITION; + sqlite3VdbeChangeP5(v, p5); } /* Do any ON CASCADE, SET NULL or SET DEFAULT operations required to ** handle rows (possibly in other tables) that refer via a foreign key ** to the row just deleted. */ - sqlite3FkActions(pParse, pTab, 0, iOld); + sqlite3FkActions(pParse, pTab, 0, iOld, 0, 0); /* Invoke AFTER DELETE trigger programs. */ sqlite3CodeRowTrigger(pParse, pTrigger, TK_DELETE, 0, TRIGGER_AFTER, pTab, iOld, onconf, iLabel ); @@ -72385,88 +99916,159 @@ /* Jump here if the row had already been deleted before any BEFORE ** trigger programs were invoked. Or if a trigger program throws a ** RAISE(IGNORE) exception. */ sqlite3VdbeResolveLabel(v, iLabel); + VdbeModuleComment((v, "END: GenRowDel()")); } /* ** This routine generates VDBE code that causes the deletion of all -** index entries associated with a single row of a single table. +** index entries associated with a single row of a single table, pTab ** -** The VDBE must be in a particular state when this routine is called. -** These are the requirements: +** Preconditions: ** -** 1. A read/write cursor pointing to pTab, the table containing the row -** to be deleted, must be opened as cursor number "iCur". +** 1. A read/write cursor "iDataCur" must be open on the canonical storage +** btree for the table pTab. (This will be either the table itself +** for rowid tables or to the primary key index for WITHOUT ROWID +** tables.) ** ** 2. Read/write cursors for all indices of pTab must be open as -** cursor number iCur+i for the i-th index. +** cursor number iIdxCur+i for the i-th index. (The pTab->pIndex +** index is the 0-th index.) ** -** 3. The "iCur" cursor must be pointing to the row that is to be -** deleted. +** 3. The "iDataCur" cursor must be already be positioned on the row +** that is to be deleted. */ SQLITE_PRIVATE void sqlite3GenerateRowIndexDelete( Parse *pParse, /* Parsing and code generating context */ Table *pTab, /* Table containing the row to be deleted */ - int iCur, /* Cursor number for the table */ - int *aRegIdx /* Only delete if aRegIdx!=0 && aRegIdx[i]>0 */ + int iDataCur, /* Cursor of table holding data. 
*/ + int iIdxCur, /* First index cursor */ + int *aRegIdx, /* Only delete if aRegIdx!=0 && aRegIdx[i]>0 */ + int iIdxNoSeek /* Do not delete from this cursor */ ){ - int i; - Index *pIdx; - int r1; - - for(i=1, pIdx=pTab->pIndex; pIdx; i++, pIdx=pIdx->pNext){ - if( aRegIdx!=0 && aRegIdx[i-1]==0 ) continue; - r1 = sqlite3GenerateIndexKey(pParse, pIdx, iCur, 0, 0); - sqlite3VdbeAddOp3(pParse->pVdbe, OP_IdxDelete, iCur+i, r1,pIdx->nColumn+1); + int i; /* Index loop counter */ + int r1 = -1; /* Register holding an index key */ + int iPartIdxLabel; /* Jump destination for skipping partial index entries */ + Index *pIdx; /* Current index */ + Index *pPrior = 0; /* Prior index */ + Vdbe *v; /* The prepared statement under construction */ + Index *pPk; /* PRIMARY KEY index, or NULL for rowid tables */ + + v = pParse->pVdbe; + pPk = HasRowid(pTab) ? 0 : sqlite3PrimaryKeyIndex(pTab); + for(i=0, pIdx=pTab->pIndex; pIdx; i++, pIdx=pIdx->pNext){ + assert( iIdxCur+i!=iDataCur || pPk==pIdx ); + if( aRegIdx!=0 && aRegIdx[i]==0 ) continue; + if( pIdx==pPk ) continue; + if( iIdxCur+i==iIdxNoSeek ) continue; + VdbeModuleComment((v, "GenRowIdxDel for %s", pIdx->zName)); + r1 = sqlite3GenerateIndexKey(pParse, pIdx, iDataCur, 0, 1, + &iPartIdxLabel, pPrior, r1); + sqlite3VdbeAddOp3(v, OP_IdxDelete, iIdxCur+i, r1, + pIdx->uniqNotNull ? pIdx->nKeyCol : pIdx->nColumn); + sqlite3ResolvePartIdxLabel(pParse, iPartIdxLabel); + pPrior = pIdx; } } /* -** Generate code that will assemble an index key and put it in register +** Generate code that will assemble an index key and stores it in register ** regOut. The key with be for index pIdx which is an index on pTab. ** iCur is the index of a cursor open on the pTab table and pointing to -** the entry that needs indexing. +** the entry that needs indexing. If pTab is a WITHOUT ROWID table, then +** iCur must be the cursor of the PRIMARY KEY index. ** ** Return a register number which is the first in a block of ** registers that holds the elements of the index key. The ** block of registers has already been deallocated by the time ** this routine returns. +** +** If *piPartIdxLabel is not NULL, fill it in with a label and jump +** to that label if pIdx is a partial index that should be skipped. +** The label should be resolved using sqlite3ResolvePartIdxLabel(). +** A partial index should be skipped if its WHERE clause evaluates +** to false or null. If pIdx is not a partial index, *piPartIdxLabel +** will be set to zero which is an empty label that is ignored by +** sqlite3ResolvePartIdxLabel(). +** +** The pPrior and regPrior parameters are used to implement a cache to +** avoid unnecessary register loads. If pPrior is not NULL, then it is +** a pointer to a different index for which an index key has just been +** computed into register regPrior. If the current pIdx index is generating +** its key into the same sequence of registers and if pPrior and pIdx share +** a column in common, then the register corresponding to that column already +** holds the correct value and the loading of that register is skipped. +** This optimization is helpful when doing a DELETE or an INTEGRITY_CHECK +** on a table with multiple indices, and especially with the ROWID or +** PRIMARY KEY columns of the index. 
*/ SQLITE_PRIVATE int sqlite3GenerateIndexKey( - Parse *pParse, /* Parsing context */ - Index *pIdx, /* The index for which to generate a key */ - int iCur, /* Cursor number for the pIdx->pTable table */ - int regOut, /* Write the new index key to this register */ - int doMakeRec /* Run the OP_MakeRecord instruction if true */ + Parse *pParse, /* Parsing context */ + Index *pIdx, /* The index for which to generate a key */ + int iDataCur, /* Cursor number from which to take column data */ + int regOut, /* Put the new key into this register if not 0 */ + int prefixOnly, /* Compute only a unique prefix of the key */ + int *piPartIdxLabel, /* OUT: Jump to this label to skip partial index */ + Index *pPrior, /* Previously generated index key */ + int regPrior /* Register holding previous generated key */ ){ Vdbe *v = pParse->pVdbe; int j; - Table *pTab = pIdx->pTable; int regBase; int nCol; - nCol = pIdx->nColumn; - regBase = sqlite3GetTempRange(pParse, nCol+1); - sqlite3VdbeAddOp2(v, OP_Rowid, iCur, regBase+nCol); + if( piPartIdxLabel ){ + if( pIdx->pPartIdxWhere ){ + *piPartIdxLabel = sqlite3VdbeMakeLabel(v); + pParse->iSelfTab = iDataCur; + sqlite3ExprCachePush(pParse); + sqlite3ExprIfFalseDup(pParse, pIdx->pPartIdxWhere, *piPartIdxLabel, + SQLITE_JUMPIFNULL); + }else{ + *piPartIdxLabel = 0; + } + } + nCol = (prefixOnly && pIdx->uniqNotNull) ? pIdx->nKeyCol : pIdx->nColumn; + regBase = sqlite3GetTempRange(pParse, nCol); + if( pPrior && (regBase!=regPrior || pPrior->pPartIdxWhere) ) pPrior = 0; for(j=0; j<nCol; j++){ - int idx = pIdx->aiColumn[j]; - if( idx==pTab->iPKey ){ - sqlite3VdbeAddOp2(v, OP_SCopy, regBase+nCol, regBase+j); - }else{ - sqlite3VdbeAddOp3(v, OP_Column, iCur, idx, regBase+j); - sqlite3ColumnDefault(v, pTab, idx, -1); - } - } - if( doMakeRec ){ - sqlite3VdbeAddOp3(v, OP_MakeRecord, regBase, nCol+1, regOut); - sqlite3VdbeChangeP4(v, -1, sqlite3IndexAffinityStr(v, pIdx), 0); - } - sqlite3ReleaseTempRange(pParse, regBase, nCol+1); + if( pPrior + && pPrior->aiColumn[j]==pIdx->aiColumn[j] + && pPrior->aiColumn[j]!=XN_EXPR + ){ + /* This column was already computed by the previous index */ + continue; + } + sqlite3ExprCodeLoadIndexColumn(pParse, pIdx, iDataCur, j, regBase+j); + /* If the column affinity is REAL but the number is an integer, then it + ** might be stored in the table as an integer (using a compact + ** representation) then converted to REAL by an OP_RealAffinity opcode. + ** But we are getting ready to store this value back into an index, where + ** it should be converted by to INTEGER again. So omit the OP_RealAffinity + ** opcode if it is present */ + sqlite3VdbeDeletePriorOpcode(v, OP_RealAffinity); + } + if( regOut ){ + sqlite3VdbeAddOp3(v, OP_MakeRecord, regBase, nCol, regOut); + } + sqlite3ReleaseTempRange(pParse, regBase, nCol); return regBase; } + +/* +** If a prior call to sqlite3GenerateIndexKey() generated a jump-over label +** because it was a partial index, then this routine should be called to +** resolve that label. +*/ +SQLITE_PRIVATE void sqlite3ResolvePartIdxLabel(Parse *pParse, int iLabel){ + if( iLabel ){ + sqlite3VdbeResolveLabel(pParse->pVdbe, iLabel); + sqlite3ExprCachePop(pParse); + } +} /************** End of delete.c **********************************************/ /************** Begin file func.c ********************************************/ /* ** 2002 February 23 @@ -72477,23 +100079,37 @@ ** May you do good and not evil. ** May you find forgiveness for yourself and forgive others. 
** May you share freely, never taking more than you give. ** ************************************************************************* -** This file contains the C functions that implement various SQL -** functions of SQLite. -** -** There is only one exported symbol in this file - the function -** sqliteRegisterBuildinFunctions() found at the bottom of the file. -** All other code has file scope. +** This file contains the C-language implementations for many of the SQL +** functions of SQLite. (Some function, and in particular the date and +** time functions, are implemented separately.) */ +/* #include "sqliteInt.h" */ +/* #include <stdlib.h> */ +/* #include <assert.h> */ +/* #include "vdbeInt.h" */ /* ** Return the collating function associated with a function. */ static CollSeq *sqlite3GetFuncCollSeq(sqlite3_context *context){ - return context->pColl; + VdbeOp *pOp; + assert( context->pVdbe!=0 ); + pOp = &context->pVdbe->aOp[context->iOp-1]; + assert( pOp->opcode==OP_CollSeq ); + assert( pOp->p4type==P4_COLLSEQ ); + return pOp->p4.pColl; +} + +/* +** Indicate that the accumulator load should be skipped on this +** iteration of the aggregate loop. +*/ +static void sqlite3SkipAccumulatorLoad(sqlite3_context *context){ + context->skipFlag = 1; } /* ** Implementation of the non-aggregate min() and max() functions */ @@ -72593,13 +100209,13 @@ UNUSED_PARAMETER(argc); switch( sqlite3_value_type(argv[0]) ){ case SQLITE_INTEGER: { i64 iVal = sqlite3_value_int64(argv[0]); if( iVal<0 ){ - if( (iVal<<1)==0 ){ - /* IMP: R-35460-15084 If X is the integer -9223372036854775807 then - ** abs(X) throws an integer overflow error since there is no + if( iVal==SMALLEST_INT64 ){ + /* IMP: R-31676-45509 If X is the integer -9223372036854775808 + ** then abs(X) throws an integer overflow error since there is no ** equivalent positive 64-bit two complement value. */ sqlite3_result_error(context, "integer overflow", -1); return; } iVal = -iVal; @@ -72613,20 +100229,97 @@ break; } default: { /* Because sqlite3_value_double() returns 0.0 if the argument is not ** something that can be converted into a number, we have: - ** IMP: R-57326-31541 Abs(X) return 0.0 if X is a string or blob that - ** cannot be converted to a numeric value. + ** IMP: R-01992-00519 Abs(X) returns 0.0 if X is a string or blob + ** that cannot be converted to a numeric value. */ double rVal = sqlite3_value_double(argv[0]); if( rVal<0 ) rVal = -rVal; sqlite3_result_double(context, rVal); break; } } } + +/* +** Implementation of the instr() function. +** +** instr(haystack,needle) finds the first occurrence of needle +** in haystack and returns the number of previous characters plus 1, +** or 0 if needle does not occur within haystack. +** +** If both haystack and needle are BLOBs, then the result is one more than +** the number of bytes in haystack prior to the first occurrence of needle, +** or 0 if needle never occurs in haystack. 
+*/ +static void instrFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const unsigned char *zHaystack; + const unsigned char *zNeedle; + int nHaystack; + int nNeedle; + int typeHaystack, typeNeedle; + int N = 1; + int isText; + + UNUSED_PARAMETER(argc); + typeHaystack = sqlite3_value_type(argv[0]); + typeNeedle = sqlite3_value_type(argv[1]); + if( typeHaystack==SQLITE_NULL || typeNeedle==SQLITE_NULL ) return; + nHaystack = sqlite3_value_bytes(argv[0]); + nNeedle = sqlite3_value_bytes(argv[1]); + if( typeHaystack==SQLITE_BLOB && typeNeedle==SQLITE_BLOB ){ + zHaystack = sqlite3_value_blob(argv[0]); + zNeedle = sqlite3_value_blob(argv[1]); + isText = 0; + }else{ + zHaystack = sqlite3_value_text(argv[0]); + zNeedle = sqlite3_value_text(argv[1]); + isText = 1; + } + while( nNeedle<=nHaystack && memcmp(zHaystack, zNeedle, nNeedle)!=0 ){ + N++; + do{ + nHaystack--; + zHaystack++; + }while( isText && (zHaystack[0]&0xc0)==0x80 ); + } + if( nNeedle>nHaystack ) N = 0; + sqlite3_result_int(context, N); +} + +/* +** Implementation of the printf() function. +*/ +static void printfFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + PrintfArguments x; + StrAccum str; + const char *zFormat; + int n; + sqlite3 *db = sqlite3_context_db_handle(context); + + if( argc>=1 && (zFormat = (const char*)sqlite3_value_text(argv[0]))!=0 ){ + x.nArg = argc-1; + x.nUsed = 0; + x.apArg = argv+1; + sqlite3StrAccumInit(&str, db, 0, 0, db->aLimit[SQLITE_LIMIT_LENGTH]); + str.printfFlags = SQLITE_PRINTF_SQLFUNC; + sqlite3XPrintf(&str, zFormat, &x); + n = str.nChar; + sqlite3_result_text(context, sqlite3StrAccumFinish(&str), n, + SQLITE_DYNAMIC); + } +} /* ** Implementation of the substr() function. ** ** substr(x,p1,p2) returns p2 characters of x[] beginning with p1. @@ -72634,11 +100327,11 @@ ** of x. If x is text, then we actually count UTF-8 characters. ** If x is a blob, then we count bytes. ** ** If p1 is negative, then we begin abs(p1) from the end of x[]. ** -** If p2 is negative, return the p2 characters preceeding p1. +** If p2 is negative, return the p2 characters preceding p1. */ static void substrFunc( sqlite3_context *context, int argc, sqlite3_value **argv @@ -72671,10 +100364,18 @@ for(z2=z; *z2; len++){ SQLITE_SKIP_UTF8(z2); } } } +#ifdef SQLITE_SUBSTR_COMPATIBILITY + /* If SUBSTR_COMPATIBILITY is defined then substr(X,0,N) work the same as + ** as substr(X,1,N) - it returns the first N characters of X. This + ** is essentially a back-out of the bug-fix in check-in [5fc125d362df4b8] + ** from 2009-02-02 for compatibility of applications that exploited the + ** old buggy behavior. 
*/ + if( p1==0 ) p1 = 1; /* <rdar://problem/6778339> */ +#endif if( argc==3 ){ p2 = sqlite3_value_int(argv[2]); if( p2<0 ){ p2 = -p2; negP2 = 1; @@ -72708,17 +100409,18 @@ p1--; } for(z2=z; *z2 && p2; p2--){ SQLITE_SKIP_UTF8(z2); } - sqlite3_result_text(context, (char*)z, (int)(z2-z), SQLITE_TRANSIENT); + sqlite3_result_text64(context, (char*)z, z2-z, SQLITE_TRANSIENT, + SQLITE_UTF8); }else{ if( p1+p2>len ){ p2 = len-p1; if( p2<0 ) p2 = 0; } - sqlite3_result_blob(context, (char*)&z[p1], (int)p2, SQLITE_TRANSIENT); + sqlite3_result_blob64(context, (char*)&z[p1], (u64)p2, SQLITE_TRANSIENT); } } /* ** Implementation of the round() function @@ -72749,19 +100451,19 @@ zBuf = sqlite3_mprintf("%.*f",n,r); if( zBuf==0 ){ sqlite3_result_error_nomem(context); return; } - sqlite3AtoF(zBuf, &r); + sqlite3AtoF(zBuf, &r, sqlite3Strlen30(zBuf), SQLITE_UTF8); sqlite3_free(zBuf); } sqlite3_result_double(context, r); } #endif /* -** Allocate nByte bytes of space using sqlite3_malloc(). If the +** Allocate nByte bytes of space using sqlite3Malloc(). If the ** allocation fails, call sqlite3_result_error_nomem() to notify ** the database handle that malloc() has failed and return NULL. ** If nByte is larger than the maximum string or blob length, then ** raise an SQLITE_TOOBIG exception and return NULL. */ @@ -72773,11 +100475,11 @@ testcase( nByte==db->aLimit[SQLITE_LIMIT_LENGTH]+1 ); if( nByte>db->aLimit[SQLITE_LIMIT_LENGTH] ){ sqlite3_result_error_toobig(context); z = 0; }else{ - z = sqlite3Malloc((int)nByte); + z = sqlite3Malloc(nByte); if( !z ){ sqlite3_result_error_nomem(context); } } return z; @@ -72796,20 +100498,19 @@ /* Verify that the call to _bytes() does not invalidate the _text() pointer */ assert( z2==(char*)sqlite3_value_text(argv[0]) ); if( z2 ){ z1 = contextMalloc(context, ((i64)n)+1); if( z1 ){ - memcpy(z1, z2, n+1); - for(i=0; z1[i]; i++){ - z1[i] = (char)sqlite3Toupper(z1[i]); + for(i=0; i<n; i++){ + z1[i] = (char)sqlite3Toupper(z2[i]); } - sqlite3_result_text(context, z1, -1, sqlite3_free); + sqlite3_result_text(context, z1, n, sqlite3_free); } } } static void lowerFunc(sqlite3_context *context, int argc, sqlite3_value **argv){ - u8 *z1; + char *z1; const char *z2; int i, n; UNUSED_PARAMETER(argc); z2 = (char*)sqlite3_value_text(argv[0]); n = sqlite3_value_bytes(argv[0]); @@ -72816,47 +100517,27 @@ /* Verify that the call to _bytes() does not invalidate the _text() pointer */ assert( z2==(char*)sqlite3_value_text(argv[0]) ); if( z2 ){ z1 = contextMalloc(context, ((i64)n)+1); if( z1 ){ - memcpy(z1, z2, n+1); - for(i=0; z1[i]; i++){ - z1[i] = sqlite3Tolower(z1[i]); - } - sqlite3_result_text(context, (char *)z1, -1, sqlite3_free); - } - } -} - - -#if 0 /* This function is never used. */ -/* -** The COALESCE() and IFNULL() functions used to be implemented as shown -** here. But now they are implemented as VDBE code so that unused arguments -** do not have to be computed. This legacy implementation is retained as -** comment. -*/ -/* -** Implementation of the IFNULL(), NVL(), and COALESCE() functions. -** All three do the same thing. They return the first non-NULL -** argument. 
-*/ -static void ifnullFunc( - sqlite3_context *context, - int argc, - sqlite3_value **argv -){ - int i; - for(i=0; i<argc; i++){ - if( SQLITE_NULL!=sqlite3_value_type(argv[i]) ){ - sqlite3_result_value(context, argv[i]); - break; - } - } -} -#endif /* NOT USED */ -#define ifnullFunc versionFunc /* Substitute function - never called */ + for(i=0; i<n; i++){ + z1[i] = sqlite3Tolower(z2[i]); + } + sqlite3_result_text(context, z1, n, sqlite3_free); + } + } +} + +/* +** Some functions like COALESCE() and IFNULL() and UNLIKELY() are implemented +** as VDBE code so that unused argument values do not have to be computed. +** However, we still need some kind of function implementation for this +** routines in the function table. The noopFunc macro provides this. +** noopFunc will never be called so it doesn't matter what the implementation +** is. We might as well use the "version()" function as a substitute. +*/ +#define noopFunc versionFunc /* Substitute function - never called */ /* ** Implementation of random(). Return a random integer. */ static void randomFunc( @@ -72874,11 +100555,11 @@ ** in a way that is testable, mask the sign bit off of negative ** values, resulting in a positive value. Then take the ** 2s complement of that positive value. The end result can ** therefore be no less than -9223372036854775807. */ - r = -(r ^ (((sqlite3_int64)1)<<63)); + r = -(r & LARGEST_INT64); } sqlite3_result_int64(context, r); } /* @@ -72957,27 +100638,27 @@ /* ** A structure defining how to do GLOB-style comparisons. */ struct compareInfo { - u8 matchAll; - u8 matchOne; - u8 matchSet; - u8 noCase; + u8 matchAll; /* "*" or "%" */ + u8 matchOne; /* "?" or "_" */ + u8 matchSet; /* "[" or 0 */ + u8 noCase; /* true to ignore case differences */ }; /* ** For LIKE and GLOB matching on EBCDIC machines, assume that every -** character is exactly one byte in size. Also, all characters are -** able to participate in upper-case-to-lower-case mappings in EBCDIC -** whereas only characters less than 0x80 do in ASCII. +** character is exactly one byte in size. Also, provde the Utf8Read() +** macro for fast reading of the next character in the common case where +** the next character is ASCII. */ #if defined(SQLITE_EBCDIC) -# define sqlite3Utf8Read(A,C) (*(A++)) -# define GlogUpperToLower(A) A = sqlite3UpperToLower[A] +# define sqlite3Utf8Read(A) (*((*A)++)) +# define Utf8Read(A) (*(A++)) #else -# define GlogUpperToLower(A) if( A<0x80 ){ A = sqlite3UpperToLower[A]; } +# define Utf8Read(A) (A[0]<0x80?*(A++):sqlite3Utf8Read(&A)) #endif static const struct compareInfo globInfo = { '*', '?', '[', 0 }; /* The correct SQL-92 behavior is for the LIKE operator to ignore ** case. Thus 'a' LIKE 'A' would be true. */ @@ -72986,11 +100667,11 @@ ** is case sensitive causing 'a' LIKE 'A' to be false */ static const struct compareInfo likeInfoAlt = { '%', '_', 0, 0 }; /* ** Compare two UTF-8 strings for equality where the first string can -** potentially be a "glob" expression. Return true (1) if they +** potentially be a "glob" or "like" expression. Return true (1) if they ** are the same and false (0) if they are different. ** ** Globbing rules: ** ** '*' Matches any sequence of zero or more characters. @@ -73006,122 +100687,154 @@ ** in the list by making it the first character after '[' or '^'. A ** range of characters can be specified using '-'. Example: ** "[a-z]" matches any single lower-case letter. To match a '-', make ** it the last character in the list. 
** +** Like matching rules: +** +** '%' Matches any sequence of zero or more characters +** +*** '_' Matches any one character +** +** Ec Where E is the "esc" character and c is any other +** character, including '%', '_', and esc, match exactly c. +** +** The comments within this routine usually assume glob matching. +** ** This routine is usually quick, but can be N**2 in the worst case. -** -** Hints: to match '*' or '?', put them in "[]". Like this: -** -** abc[*]xyz Matches "abc*xyz" only */ static int patternCompare( const u8 *zPattern, /* The glob pattern */ const u8 *zString, /* The string to compare against the glob */ const struct compareInfo *pInfo, /* Information about how to do the compare */ - const int esc /* The escape character */ -){ - int c, c2; - int invert; - int seen; - u8 matchOne = pInfo->matchOne; - u8 matchAll = pInfo->matchAll; - u8 matchSet = pInfo->matchSet; - u8 noCase = pInfo->noCase; - int prevEscape = 0; /* True if the previous character was 'escape' */ - - while( (c = sqlite3Utf8Read(zPattern,&zPattern))!=0 ){ - if( !prevEscape && c==matchAll ){ - while( (c=sqlite3Utf8Read(zPattern,&zPattern)) == matchAll - || c == matchOne ){ - if( c==matchOne && sqlite3Utf8Read(zString, &zString)==0 ){ - return 0; - } - } - if( c==0 ){ - return 1; - }else if( c==esc ){ - c = sqlite3Utf8Read(zPattern, &zPattern); - if( c==0 ){ - return 0; - } - }else if( c==matchSet ){ - assert( esc==0 ); /* This is GLOB, not LIKE */ - assert( matchSet<0x80 ); /* '[' is a single-byte character */ - while( *zString && patternCompare(&zPattern[-1],zString,pInfo,esc)==0 ){ - SQLITE_SKIP_UTF8(zString); - } - return *zString!=0; - } - while( (c2 = sqlite3Utf8Read(zString,&zString))!=0 ){ - if( noCase ){ - GlogUpperToLower(c2); - GlogUpperToLower(c); - while( c2 != 0 && c2 != c ){ - c2 = sqlite3Utf8Read(zString, &zString); - GlogUpperToLower(c2); - } - }else{ - while( c2 != 0 && c2 != c ){ - c2 = sqlite3Utf8Read(zString, &zString); - } - } - if( c2==0 ) return 0; - if( patternCompare(zPattern,zString,pInfo,esc) ) return 1; - } - return 0; - }else if( !prevEscape && c==matchOne ){ - if( sqlite3Utf8Read(zString, &zString)==0 ){ - return 0; - } - }else if( c==matchSet ){ - int prior_c = 0; - assert( esc==0 ); /* This only occurs for GLOB, not LIKE */ - seen = 0; - invert = 0; - c = sqlite3Utf8Read(zString, &zString); - if( c==0 ) return 0; - c2 = sqlite3Utf8Read(zPattern, &zPattern); - if( c2=='^' ){ - invert = 1; - c2 = sqlite3Utf8Read(zPattern, &zPattern); - } - if( c2==']' ){ - if( c==']' ) seen = 1; - c2 = sqlite3Utf8Read(zPattern, &zPattern); - } - while( c2 && c2!=']' ){ - if( c2=='-' && zPattern[0]!=']' && zPattern[0]!=0 && prior_c>0 ){ - c2 = sqlite3Utf8Read(zPattern, &zPattern); - if( c>=prior_c && c<=c2 ) seen = 1; - prior_c = 0; - }else{ - if( c==c2 ){ - seen = 1; - } - prior_c = c2; - } - c2 = sqlite3Utf8Read(zPattern, &zPattern); - } - if( c2==0 || (seen ^ invert)==0 ){ - return 0; - } - }else if( esc==c && !prevEscape ){ - prevEscape = 1; - }else{ - c2 = sqlite3Utf8Read(zString, &zString); - if( noCase ){ - GlogUpperToLower(c); - GlogUpperToLower(c2); - } - if( c!=c2 ){ - return 0; - } - prevEscape = 0; - } - } - return *zString==0; + u32 matchOther /* The escape char (LIKE) or '[' (GLOB) */ +){ + u32 c, c2; /* Next pattern and input string chars */ + u32 matchOne = pInfo->matchOne; /* "?" 
or "_" */ + u32 matchAll = pInfo->matchAll; /* "*" or "%" */ + u8 noCase = pInfo->noCase; /* True if uppercase==lowercase */ + const u8 *zEscaped = 0; /* One past the last escaped input char */ + + while( (c = Utf8Read(zPattern))!=0 ){ + if( c==matchAll ){ /* Match "*" */ + /* Skip over multiple "*" characters in the pattern. If there + ** are also "?" characters, skip those as well, but consume a + ** single character of the input string for each "?" skipped */ + while( (c=Utf8Read(zPattern)) == matchAll || c == matchOne ){ + if( c==matchOne && sqlite3Utf8Read(&zString)==0 ){ + return 0; + } + } + if( c==0 ){ + return 1; /* "*" at the end of the pattern matches */ + }else if( c==matchOther ){ + if( pInfo->matchSet==0 ){ + c = sqlite3Utf8Read(&zPattern); + if( c==0 ) return 0; + }else{ + /* "[...]" immediately follows the "*". We have to do a slow + ** recursive search in this case, but it is an unusual case. */ + assert( matchOther<0x80 ); /* '[' is a single-byte character */ + while( *zString + && patternCompare(&zPattern[-1],zString,pInfo,matchOther)==0 ){ + SQLITE_SKIP_UTF8(zString); + } + return *zString!=0; + } + } + + /* At this point variable c contains the first character of the + ** pattern string past the "*". Search in the input string for the + ** first matching character and recursively contine the match from + ** that point. + ** + ** For a case-insensitive search, set variable cx to be the same as + ** c but in the other case and search the input string for either + ** c or cx. + */ + if( c<=0x80 ){ + u32 cx; + if( noCase ){ + cx = sqlite3Toupper(c); + c = sqlite3Tolower(c); + }else{ + cx = c; + } + while( (c2 = *(zString++))!=0 ){ + if( c2!=c && c2!=cx ) continue; + if( patternCompare(zPattern,zString,pInfo,matchOther) ) return 1; + } + }else{ + while( (c2 = Utf8Read(zString))!=0 ){ + if( c2!=c ) continue; + if( patternCompare(zPattern,zString,pInfo,matchOther) ) return 1; + } + } + return 0; + } + if( c==matchOther ){ + if( pInfo->matchSet==0 ){ + c = sqlite3Utf8Read(&zPattern); + if( c==0 ) return 0; + zEscaped = zPattern; + }else{ + u32 prior_c = 0; + int seen = 0; + int invert = 0; + c = sqlite3Utf8Read(&zString); + if( c==0 ) return 0; + c2 = sqlite3Utf8Read(&zPattern); + if( c2=='^' ){ + invert = 1; + c2 = sqlite3Utf8Read(&zPattern); + } + if( c2==']' ){ + if( c==']' ) seen = 1; + c2 = sqlite3Utf8Read(&zPattern); + } + while( c2 && c2!=']' ){ + if( c2=='-' && zPattern[0]!=']' && zPattern[0]!=0 && prior_c>0 ){ + c2 = sqlite3Utf8Read(&zPattern); + if( c>=prior_c && c<=c2 ) seen = 1; + prior_c = 0; + }else{ + if( c==c2 ){ + seen = 1; + } + prior_c = c2; + } + c2 = sqlite3Utf8Read(&zPattern); + } + if( c2==0 || (seen ^ invert)==0 ){ + return 0; + } + continue; + } + } + c2 = Utf8Read(zString); + if( c==c2 ) continue; + if( noCase && c<0x80 && c2<0x80 && sqlite3Tolower(c)==sqlite3Tolower(c2) ){ + continue; + } + if( c==matchOne && zPattern!=zEscaped && c2!=0 ) continue; + return 0; + } + return *zString==0; +} + +/* +** The sqlite3_strglob() interface. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_strglob(const char *zGlobPattern, const char *zString){ + return patternCompare((u8*)zGlobPattern, (u8*)zString, &globInfo, '[')==0; +} + +/* +** The sqlite3_strlike() interface. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_strlike(const char *zPattern, const char *zStr, unsigned int esc){ + return patternCompare((u8*)zPattern, (u8*)zStr, &likeInfoNorm, esc)==0; } /* ** Count the number of times that the LIKE operator (or GLOB which is ** just a variation of LIKE) gets called. 
This is used for testing @@ -73148,14 +100861,26 @@ sqlite3_context *context, int argc, sqlite3_value **argv ){ const unsigned char *zA, *zB; - int escape = 0; + u32 escape; int nPat; sqlite3 *db = sqlite3_context_db_handle(context); + struct compareInfo *pInfo = sqlite3_user_data(context); +#ifdef SQLITE_LIKE_DOESNT_MATCH_BLOBS + if( sqlite3_value_type(argv[0])==SQLITE_BLOB + || sqlite3_value_type(argv[1])==SQLITE_BLOB + ){ +#ifdef SQLITE_TEST + sqlite3_like_count++; +#endif + sqlite3_result_int(context, 0); + return; + } +#endif zB = sqlite3_value_text(argv[0]); zA = sqlite3_value_text(argv[1]); /* Limit the length of the LIKE or GLOB pattern to avoid problems ** of deep recursion and N*N behavior in patternCompare(). @@ -73178,18 +100903,18 @@ if( sqlite3Utf8CharLen((char*)zEsc, -1)!=1 ){ sqlite3_result_error(context, "ESCAPE expression must be a single character", -1); return; } - escape = sqlite3Utf8Read(zEsc, &zEsc); + escape = sqlite3Utf8Read(&zEsc); + }else{ + escape = pInfo->matchSet; } if( zA && zB ){ - struct compareInfo *pInfo = sqlite3_user_data(context); #ifdef SQLITE_TEST sqlite3_like_count++; #endif - sqlite3_result_int(context, patternCompare(zB, zA, pInfo, escape)); } } /* @@ -73237,10 +100962,25 @@ UNUSED_PARAMETER2(NotUsed, NotUsed2); /* IMP: R-24470-31136 This function is an SQL wrapper around the ** sqlite3_sourceid() C interface. */ sqlite3_result_text(context, sqlite3_sourceid(), -1, SQLITE_STATIC); } + +/* +** Implementation of the sqlite_log() function. This is a wrapper around +** sqlite3_log(). The return value is NULL. The function exists purely for +** its side-effects. +*/ +static void errlogFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + UNUSED_PARAMETER(argc); + UNUSED_PARAMETER(context); + sqlite3_log(sqlite3_value_int(argv[0]), "%s", sqlite3_value_text(argv[1])); +} /* ** Implementation of the sqlite_compileoption_used() function. ** The result is an integer that identifies if the compiler option ** was used to build SQLite. @@ -73252,12 +100992,14 @@ sqlite3_value **argv ){ const char *zOptName; assert( argc==1 ); UNUSED_PARAMETER(argc); - /* IMP: R-xxxx This function is an SQL wrapper around the - ** sqlite3_compileoption_used() C interface. */ + /* IMP: R-39564-36305 The sqlite_compileoption_used() SQL + ** function is a wrapper around the sqlite3_compileoption_used() C/C++ + ** function. + */ if( (zOptName = (const char*)sqlite3_value_text(argv[0]))!=0 ){ sqlite3_result_int(context, sqlite3_compileoption_used(zOptName)); } } #endif /* SQLITE_OMIT_COMPILEOPTION_DIAGS */ @@ -73274,12 +101016,13 @@ sqlite3_value **argv ){ int n; assert( argc==1 ); UNUSED_PARAMETER(argc); - /* IMP: R-xxxx This function is an SQL wrapper around the - ** sqlite3_compileoption_get() C interface. */ + /* IMP: R-04922-24076 The sqlite_compileoption_get() SQL function + ** is a wrapper around the sqlite3_compileoption_get() C/C++ function. + */ n = sqlite3_value_int(argv[0]); sqlite3_result_text(context, sqlite3_compileoption_get(n), -1, SQLITE_STATIC); } #endif /* SQLITE_OMIT_COMPILEOPTION_DIAGS */ @@ -73289,14 +101032,10 @@ '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F' }; /* -** EXPERIMENTAL - This is not an official function. The interface may -** change. This function may disappear. Do not write code that depends -** on this function. -** ** Implementation of the QUOTE() function. This function takes a single ** argument. If the argument is numeric, the return value is the same as ** the argument. 
If the argument is NULL, the return value is the string ** "NULL". Otherwise, the argument is enclosed in single quotes with ** single-quote escapes. @@ -73303,12 +101042,23 @@ */ static void quoteFunc(sqlite3_context *context, int argc, sqlite3_value **argv){ assert( argc==1 ); UNUSED_PARAMETER(argc); switch( sqlite3_value_type(argv[0]) ){ - case SQLITE_INTEGER: case SQLITE_FLOAT: { + double r1, r2; + char zBuf[50]; + r1 = sqlite3_value_double(argv[0]); + sqlite3_snprintf(sizeof(zBuf), zBuf, "%!.15g", r1); + sqlite3AtoF(zBuf, &r2, 20, SQLITE_UTF8); + if( r1!=r2 ){ + sqlite3_snprintf(sizeof(zBuf), zBuf, "%!.20e", r1); + } + sqlite3_result_text(context, zBuf, -1, SQLITE_TRANSIENT); + break; + } + case SQLITE_INTEGER: { sqlite3_result_value(context, argv[0]); break; } case SQLITE_BLOB: { char *zText = 0; @@ -73359,10 +101109,66 @@ sqlite3_result_text(context, "NULL", 4, SQLITE_STATIC); break; } } } + +/* +** The unicode() function. Return the integer unicode code-point value +** for the first character of the input string. +*/ +static void unicodeFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const unsigned char *z = sqlite3_value_text(argv[0]); + (void)argc; + if( z && z[0] ) sqlite3_result_int(context, sqlite3Utf8Read(&z)); +} + +/* +** The char() function takes zero or more arguments, each of which is +** an integer. It constructs a string where each character of the string +** is the unicode character for the corresponding integer argument. +*/ +static void charFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + unsigned char *z, *zOut; + int i; + zOut = z = sqlite3_malloc64( argc*4+1 ); + if( z==0 ){ + sqlite3_result_error_nomem(context); + return; + } + for(i=0; i<argc; i++){ + sqlite3_int64 x; + unsigned c; + x = sqlite3_value_int64(argv[i]); + if( x<0 || x>0x10ffff ) x = 0xfffd; + c = (unsigned)(x & 0x1fffff); + if( c<0x00080 ){ + *zOut++ = (u8)(c&0xFF); + }else if( c<0x00800 ){ + *zOut++ = 0xC0 + (u8)((c>>6)&0x1F); + *zOut++ = 0x80 + (u8)(c & 0x3F); + }else if( c<0x10000 ){ + *zOut++ = 0xE0 + (u8)((c>>12)&0x0F); + *zOut++ = 0x80 + (u8)((c>>6) & 0x3F); + *zOut++ = 0x80 + (u8)(c & 0x3F); + }else{ + *zOut++ = 0xF0 + (u8)((c>>18) & 0x07); + *zOut++ = 0x80 + (u8)((c>>12) & 0x3F); + *zOut++ = 0x80 + (u8)((c>>6) & 0x3F); + *zOut++ = 0x80 + (u8)(c & 0x3F); + } \ + } + sqlite3_result_text64(context, (char*)z, zOut-z, sqlite3_free, SQLITE_UTF8); +} /* ** The hex() function. Interpret the argument as a blob. Return ** a hexadecimal rendering as text. */ @@ -73398,27 +101204,25 @@ sqlite3_context *context, int argc, sqlite3_value **argv ){ i64 n; - sqlite3 *db = sqlite3_context_db_handle(context); + int rc; assert( argc==1 ); UNUSED_PARAMETER(argc); n = sqlite3_value_int64(argv[0]); - testcase( n==db->aLimit[SQLITE_LIMIT_LENGTH] ); - testcase( n==db->aLimit[SQLITE_LIMIT_LENGTH]+1 ); - if( n>db->aLimit[SQLITE_LIMIT_LENGTH] ){ - sqlite3_result_error_toobig(context); - }else{ - sqlite3_result_zeroblob(context, (int)n); /* IMP: R-00293-64994 */ + if( n<0 ) n = 0; + rc = sqlite3_result_zeroblob64(context, n); /* IMP: R-00293-64994 */ + if( rc ){ + sqlite3_result_error_code(context, rc); } } /* ** The replace() function. Three arguments are all strings: call ** them A, B, and C. The result is also a string which is derived -** from A by replacing every occurance of B with C. The match +** from A by replacing every occurrence of B with C. The match ** must be exact. Collating sequences are not used. 
*/ static void replaceFunc( sqlite3_context *context, int argc, @@ -73474,18 +101278,18 @@ nOut += nRep - nPattern; testcase( nOut-1==db->aLimit[SQLITE_LIMIT_LENGTH] ); testcase( nOut-2==db->aLimit[SQLITE_LIMIT_LENGTH] ); if( nOut-1>db->aLimit[SQLITE_LIMIT_LENGTH] ){ sqlite3_result_error_toobig(context); - sqlite3DbFree(db, zOut); + sqlite3_free(zOut); return; } zOld = zOut; - zOut = sqlite3_realloc(zOut, (int)nOut); + zOut = sqlite3_realloc64(zOut, (int)nOut); if( zOut==0 ){ sqlite3_result_error_nomem(context); - sqlite3DbFree(db, zOld); + sqlite3_free(zOld); return; } memcpy(&zOut[j], zRep, nRep); j += nRep; i += nPattern-1; @@ -73700,17 +101504,12 @@ if( p && type!=SQLITE_NULL ){ p->cnt++; if( type==SQLITE_INTEGER ){ i64 v = sqlite3_value_int64(argv[0]); p->rSum += v; - if( (p->approx|p->overflow)==0 ){ - i64 iNewSum = p->iSum + v; - int s1 = (int)(p->iSum >> (sizeof(i64)*8-1)); - int s2 = (int)(v >> (sizeof(i64)*8-1)); - int s3 = (int)(iNewSum >> (sizeof(i64)*8-1)); - p->overflow = ((s1&s2&~s3) | (~s1&~s2&s3))?1:0; - p->iSum = iNewSum; + if( (p->approx|p->overflow)==0 && sqlite3AddInt64(&p->iSum, v) ){ + p->overflow = 1; } }else{ p->rSum += sqlite3_value_double(argv[0]); p->approx = 1; } @@ -73787,15 +101586,16 @@ ){ Mem *pArg = (Mem *)argv[0]; Mem *pBest; UNUSED_PARAMETER(NotUsed); - if( sqlite3_value_type(argv[0])==SQLITE_NULL ) return; pBest = (Mem *)sqlite3_aggregate_context(context, sizeof(*pBest)); if( !pBest ) return; - if( pBest->flags ){ + if( sqlite3_value_type(argv[0])==SQLITE_NULL ){ + if( pBest->flags ) sqlite3SkipAccumulatorLoad(context); + }else if( pBest->flags ){ int max; int cmp; CollSeq *pColl = sqlite3GetFuncCollSeq(context); /* This step function is used for both the min() and max() aggregates, ** the only difference between the two being that the sense of the @@ -73807,20 +101607,23 @@ */ max = sqlite3_user_data(context)!=0; cmp = sqlite3MemCompare(pBest, pArg, pColl); if( (max && cmp<0) || (!max && cmp>0) ){ sqlite3VdbeMemCopy(pBest, pArg); + }else{ + sqlite3SkipAccumulatorLoad(context); } }else{ + pBest->db = sqlite3_context_db_handle(context); sqlite3VdbeMemCopy(pBest, pArg); } } static void minMaxFinalize(sqlite3_context *context){ sqlite3_value *pRes; pRes = (sqlite3_value *)sqlite3_aggregate_context(context, 0); if( pRes ){ - if( ALWAYS(pRes->flags) ){ + if( pRes->flags ){ sqlite3_result_value(context, pRes); } sqlite3VdbeMemRelease(pRes); } } @@ -73841,58 +101644,52 @@ if( sqlite3_value_type(argv[0])==SQLITE_NULL ) return; pAccum = (StrAccum*)sqlite3_aggregate_context(context, sizeof(*pAccum)); if( pAccum ){ sqlite3 *db = sqlite3_context_db_handle(context); - int firstTerm = pAccum->useMalloc==0; - pAccum->useMalloc = 1; + int firstTerm = pAccum->mxAlloc==0; pAccum->mxAlloc = db->aLimit[SQLITE_LIMIT_LENGTH]; if( !firstTerm ){ if( argc==2 ){ zSep = (char*)sqlite3_value_text(argv[1]); nSep = sqlite3_value_bytes(argv[1]); }else{ zSep = ","; nSep = 1; } - sqlite3StrAccumAppend(pAccum, zSep, nSep); + if( nSep ) sqlite3StrAccumAppend(pAccum, zSep, nSep); } zVal = (char*)sqlite3_value_text(argv[0]); nVal = sqlite3_value_bytes(argv[0]); - sqlite3StrAccumAppend(pAccum, zVal, nVal); + if( zVal ) sqlite3StrAccumAppend(pAccum, zVal, nVal); } } static void groupConcatFinalize(sqlite3_context *context){ StrAccum *pAccum; pAccum = sqlite3_aggregate_context(context, 0); if( pAccum ){ - if( pAccum->tooBig ){ + if( pAccum->accError==STRACCUM_TOOBIG ){ sqlite3_result_error_toobig(context); - }else if( pAccum->mallocFailed ){ + }else if( pAccum->accError==STRACCUM_NOMEM ){ 
sqlite3_result_error_nomem(context); }else{ sqlite3_result_text(context, sqlite3StrAccumFinish(pAccum), -1, sqlite3_free); } } } /* -** This function registered all of the above C functions as SQL -** functions. This should be the only routine in this file with -** external linkage. +** This routine does per-connection function registration. Most +** of the built-in functions above are part of the global function set. +** This routine only deals with those that are not global. */ SQLITE_PRIVATE void sqlite3RegisterBuiltinFunctions(sqlite3 *db){ -#ifndef SQLITE_OMIT_ALTERTABLE - sqlite3AlterFunctions(db); -#endif - if( !db->mallocFailed ){ - int rc = sqlite3_overload_function(db, "MATCH", 2); - assert( rc==SQLITE_NOMEM || rc==SQLITE_OK ); - if( rc==SQLITE_NOMEM ){ - db->mallocFailed = 1; - } + int rc = sqlite3_overload_function(db, "MATCH", 2); + assert( rc==SQLITE_NOMEM || rc==SQLITE_OK ); + if( rc==SQLITE_NOMEM ){ + sqlite3OomFault(db); } } /* ** Set the LIKEOPT flag on the 2-argument function with the given name. @@ -73900,11 +101697,11 @@ static void setLikeOptFlag(sqlite3 *db, const char *zName, u8 flagVal){ FuncDef *pDef; pDef = sqlite3FindFunction(db, zName, sqlite3Strlen30(zName), 2, SQLITE_UTF8, 0); if( ALWAYS(pDef) ){ - pDef->flags = flagVal; + pDef->funcFlags |= flagVal; } } /* ** Register the built-in LIKE and GLOB functions. The caseSensitive @@ -73916,14 +101713,14 @@ if( caseSensitive ){ pInfo = (struct compareInfo*)&likeInfoAlt; }else{ pInfo = (struct compareInfo*)&likeInfoNorm; } - sqlite3CreateFunc(db, "like", 2, SQLITE_ANY, pInfo, likeFunc, 0, 0); - sqlite3CreateFunc(db, "like", 3, SQLITE_ANY, pInfo, likeFunc, 0, 0); - sqlite3CreateFunc(db, "glob", 2, SQLITE_ANY, - (struct compareInfo*)&globInfo, likeFunc, 0,0); + sqlite3CreateFunc(db, "like", 2, SQLITE_UTF8, pInfo, likeFunc, 0, 0, 0); + sqlite3CreateFunc(db, "like", 3, SQLITE_UTF8, pInfo, likeFunc, 0, 0, 0); + sqlite3CreateFunc(db, "glob", 2, SQLITE_UTF8, + (struct compareInfo*)&globInfo, likeFunc, 0, 0, 0); setLikeOptFlag(db, "glob", SQLITE_FUNC_LIKE | SQLITE_FUNC_CASE); setLikeOptFlag(db, "like", caseSensitive ? (SQLITE_FUNC_LIKE | SQLITE_FUNC_CASE) : SQLITE_FUNC_LIKE); } @@ -73931,10 +101728,15 @@ ** pExpr points to an expression which implements a function. If ** it is appropriate to apply the LIKE optimization to that function ** then set aWc[0] through aWc[2] to the wildcard characters and ** return TRUE. If the function is not a LIKE-style function then ** return FALSE. +** +** *pIsNocase is set to true if uppercase and lowercase are equivalent for +** the function (default for LIKE). If the function makes the distinction +** between uppercase and lowercase (as does GLOB) then *pIsNocase is set to +** false. */ SQLITE_PRIVATE int sqlite3IsLikeFunction(sqlite3 *db, Expr *pExpr, int *pIsNocase, char *aWc){ FuncDef *pDef; if( pExpr->op!=TK_FUNCTION || !pExpr->x.pList @@ -73944,11 +101746,11 @@ } assert( !ExprHasProperty(pExpr, EP_xIsSelect) ); pDef = sqlite3FindFunction(db, pExpr->u.zToken, sqlite3Strlen30(pExpr->u.zToken), 2, SQLITE_UTF8, 0); - if( NEVER(pDef==0) || (pDef->flags & SQLITE_FUNC_LIKE)==0 ){ + if( NEVER(pDef==0) || (pDef->funcFlags & SQLITE_FUNC_LIKE)==0 ){ return 0; } /* The memcpy() statement assumes that the wildcard characters are ** the first three statements in the compareInfo structure. 
The @@ -73956,16 +101758,16 @@ */ memcpy(aWc, pDef->pUserData, 3); assert( (char*)&likeInfoAlt == (char*)&likeInfoAlt.matchAll ); assert( &((char*)&likeInfoAlt)[1] == (char*)&likeInfoAlt.matchOne ); assert( &((char*)&likeInfoAlt)[2] == (char*)&likeInfoAlt.matchSet ); - *pIsNocase = (pDef->flags & SQLITE_FUNC_CASE)==0; + *pIsNocase = (pDef->funcFlags & SQLITE_FUNC_CASE)==0; return 1; } /* -** All all of the FuncDef structures in the aBuiltinFunc[] array above +** All of the FuncDef structures in the aBuiltinFunc[] array above ** to the global function hash table. This occurs at start-time (as ** a consequence of calling sqlite3_initialize()). ** ** After this routine runs */ @@ -73985,59 +101787,70 @@ FUNCTION(rtrim, 2, 2, 0, trimFunc ), FUNCTION(trim, 1, 3, 0, trimFunc ), FUNCTION(trim, 2, 3, 0, trimFunc ), FUNCTION(min, -1, 0, 1, minmaxFunc ), FUNCTION(min, 0, 0, 1, 0 ), - AGGREGATE(min, 1, 0, 1, minmaxStep, minMaxFinalize ), + AGGREGATE2(min, 1, 0, 1, minmaxStep, minMaxFinalize, + SQLITE_FUNC_MINMAX ), FUNCTION(max, -1, 1, 1, minmaxFunc ), FUNCTION(max, 0, 1, 1, 0 ), - AGGREGATE(max, 1, 1, 1, minmaxStep, minMaxFinalize ), - FUNCTION(typeof, 1, 0, 0, typeofFunc ), - FUNCTION(length, 1, 0, 0, lengthFunc ), + AGGREGATE2(max, 1, 1, 1, minmaxStep, minMaxFinalize, + SQLITE_FUNC_MINMAX ), + FUNCTION2(typeof, 1, 0, 0, typeofFunc, SQLITE_FUNC_TYPEOF), + FUNCTION2(length, 1, 0, 0, lengthFunc, SQLITE_FUNC_LENGTH), + FUNCTION(instr, 2, 0, 0, instrFunc ), FUNCTION(substr, 2, 0, 0, substrFunc ), FUNCTION(substr, 3, 0, 0, substrFunc ), + FUNCTION(printf, -1, 0, 0, printfFunc ), + FUNCTION(unicode, 1, 0, 0, unicodeFunc ), + FUNCTION(char, -1, 0, 0, charFunc ), FUNCTION(abs, 1, 0, 0, absFunc ), #ifndef SQLITE_OMIT_FLOATING_POINT FUNCTION(round, 1, 0, 0, roundFunc ), FUNCTION(round, 2, 0, 0, roundFunc ), #endif FUNCTION(upper, 1, 0, 0, upperFunc ), FUNCTION(lower, 1, 0, 0, lowerFunc ), FUNCTION(coalesce, 1, 0, 0, 0 ), FUNCTION(coalesce, 0, 0, 0, 0 ), -/* FUNCTION(coalesce, -1, 0, 0, ifnullFunc ), */ - {-1,SQLITE_UTF8,SQLITE_FUNC_COALESCE,0,0,ifnullFunc,0,0,"coalesce",0}, + FUNCTION2(coalesce, -1, 0, 0, noopFunc, SQLITE_FUNC_COALESCE), FUNCTION(hex, 1, 0, 0, hexFunc ), -/* FUNCTION(ifnull, 2, 0, 0, ifnullFunc ), */ - {2,SQLITE_UTF8,SQLITE_FUNC_COALESCE,0,0,ifnullFunc,0,0,"ifnull",0}, - FUNCTION(random, 0, 0, 0, randomFunc ), - FUNCTION(randomblob, 1, 0, 0, randomBlob ), + FUNCTION2(ifnull, 2, 0, 0, noopFunc, SQLITE_FUNC_COALESCE), + FUNCTION2(unlikely, 1, 0, 0, noopFunc, SQLITE_FUNC_UNLIKELY), + FUNCTION2(likelihood, 2, 0, 0, noopFunc, SQLITE_FUNC_UNLIKELY), + FUNCTION2(likely, 1, 0, 0, noopFunc, SQLITE_FUNC_UNLIKELY), + VFUNCTION(random, 0, 0, 0, randomFunc ), + VFUNCTION(randomblob, 1, 0, 0, randomBlob ), FUNCTION(nullif, 2, 0, 1, nullifFunc ), - FUNCTION(sqlite_version, 0, 0, 0, versionFunc ), - FUNCTION(sqlite_source_id, 0, 0, 0, sourceidFunc ), + DFUNCTION(sqlite_version, 0, 0, 0, versionFunc ), + DFUNCTION(sqlite_source_id, 0, 0, 0, sourceidFunc ), + FUNCTION(sqlite_log, 2, 0, 0, errlogFunc ), +#if SQLITE_USER_AUTHENTICATION + FUNCTION(sqlite_crypt, 2, 0, 0, sqlite3CryptFunc ), +#endif #ifndef SQLITE_OMIT_COMPILEOPTION_DIAGS - FUNCTION(sqlite_compileoption_used,1, 0, 0, compileoptionusedFunc ), - FUNCTION(sqlite_compileoption_get, 1, 0, 0, compileoptiongetFunc ), + DFUNCTION(sqlite_compileoption_used,1, 0, 0, compileoptionusedFunc ), + DFUNCTION(sqlite_compileoption_get, 1, 0, 0, compileoptiongetFunc ), #endif /* SQLITE_OMIT_COMPILEOPTION_DIAGS */ FUNCTION(quote, 1, 0, 0, quoteFunc ), - 
FUNCTION(last_insert_rowid, 0, 0, 0, last_insert_rowid), - FUNCTION(changes, 0, 0, 0, changes ), - FUNCTION(total_changes, 0, 0, 0, total_changes ), + VFUNCTION(last_insert_rowid, 0, 0, 0, last_insert_rowid), + VFUNCTION(changes, 0, 0, 0, changes ), + VFUNCTION(total_changes, 0, 0, 0, total_changes ), FUNCTION(replace, 3, 0, 0, replaceFunc ), FUNCTION(zeroblob, 1, 0, 0, zeroblobFunc ), #ifdef SQLITE_SOUNDEX FUNCTION(soundex, 1, 0, 0, soundexFunc ), #endif #ifndef SQLITE_OMIT_LOAD_EXTENSION - FUNCTION(load_extension, 1, 0, 0, loadExt ), - FUNCTION(load_extension, 2, 0, 0, loadExt ), + VFUNCTION(load_extension, 1, 0, 0, loadExt ), + VFUNCTION(load_extension, 2, 0, 0, loadExt ), #endif AGGREGATE(sum, 1, 0, 0, sumStep, sumFinalize ), AGGREGATE(total, 1, 0, 0, sumStep, totalFinalize ), AGGREGATE(avg, 1, 0, 0, sumStep, avgFinalize ), - /* AGGREGATE(count, 0, 0, 0, countStep, countFinalize ), */ - {0,SQLITE_UTF8,SQLITE_FUNC_COUNT,0,0,0,countStep,countFinalize,"count",0}, + AGGREGATE2(count, 0, 0, 0, countStep, countFinalize, + SQLITE_FUNC_COUNT ), AGGREGATE(count, 1, 0, 0, countStep, countFinalize ), AGGREGATE(group_concat, 1, 0, 0, groupConcatStep, groupConcatFinalize), AGGREGATE(group_concat, 2, 0, 0, groupConcatStep, groupConcatFinalize), LIKEFUNC(glob, 2, &globInfo, SQLITE_FUNC_LIKE|SQLITE_FUNC_CASE), @@ -74056,10 +101869,16 @@ for(i=0; i<ArraySize(aBuiltinFunc); i++){ sqlite3FuncDefInsert(pHash, &aFunc[i]); } sqlite3RegisterDateTimeFunctions(); +#ifndef SQLITE_OMIT_ALTERTABLE + sqlite3AlterFunctions(); +#endif +#if defined(SQLITE_ENABLE_STAT3) || defined(SQLITE_ENABLE_STAT4) + sqlite3AnalyzeFunctions(); +#endif } /************** End of func.c ************************************************/ /************** Begin file fkey.c ********************************************/ /* @@ -74073,21 +101892,23 @@ ** ************************************************************************* ** This file contains code used by the compiler to add foreign key ** support to compiled SQL statements. */ +/* #include "sqliteInt.h" */ #ifndef SQLITE_OMIT_FOREIGN_KEY #ifndef SQLITE_OMIT_TRIGGER /* ** Deferred and Immediate FKs ** -------------------------- ** ** Foreign keys in SQLite come in two flavours: deferred and immediate. -** If an immediate foreign key constraint is violated, SQLITE_CONSTRAINT -** is returned and the current statement transaction rolled back. If a +** If an immediate foreign key constraint is violated, +** SQLITE_CONSTRAINT_FOREIGNKEY is returned and the current +** statement transaction rolled back. If a ** deferred foreign key constraint is violated, no action is taken ** immediately. However if the application attempts to commit the ** transaction before fixing the constraint violation, the attempt fails. ** ** Deferred constraints are implemented using a simple counter associated @@ -74147,11 +101968,12 @@ ** row is inserted. ** ** Immediate constraints are usually handled similarly. The only difference ** is that the counter used is stored as part of each individual statement ** object (struct Vdbe). If, after the statement has run, its immediate -** constraint counter is greater than zero, it returns SQLITE_CONSTRAINT +** constraint counter is greater than zero, +** it returns SQLITE_CONSTRAINT_FOREIGNKEY ** and the statement transaction is rolled back. An exception is an INSERT ** statement that inserts a single row only (no triggers). In this case, ** instead of using a counter, an exception is thrown immediately if the ** INSERT violates a foreign key constraint. 
This is necessary as such ** an INSERT does not open a statement transaction. @@ -74203,11 +102025,11 @@ /* ** A foreign key constraint requires that the key columns in the parent ** table are collectively subject to a UNIQUE or PRIMARY KEY constraint. ** Given that pParent is the parent table for foreign key constraint pFKey, -** search the schema a unique index on the parent key columns. +** search the schema for a unique index on the parent key columns. ** ** If successful, zero is returned. If the parent key is an INTEGER PRIMARY ** KEY column, then output variable *ppIdx is set to NULL. Otherwise, *ppIdx ** is set to point to the unique index. ** @@ -74232,18 +102054,18 @@ ** foreign key definition, and the parent table does not have a ** PRIMARY KEY, or ** ** 4) No parent key columns were provided explicitly as part of the ** foreign key definition, and the PRIMARY KEY of the parent table -** consists of a a different number of columns to the child key in +** consists of a different number of columns to the child key in ** the child table. ** ** then non-zero is returned, and a "foreign key mismatch" error loaded ** into pParse. If an OOM error occurs, non-zero is returned and the ** pParse->db->mallocFailed flag is set. */ -static int locateFkeyIndex( +SQLITE_PRIVATE int sqlite3FkLocateIndex( Parse *pParse, /* Parse context to store any error in */ Table *pParent, /* Parent table of FK constraint pFKey */ FKey *pFKey, /* Foreign key to find index for */ Index **ppIdx, /* OUT: Unique index on parent table */ int **paiCol /* OUT: Map of index columns in pFKey */ @@ -74278,26 +102100,26 @@ if( !zKey ) return 0; if( !sqlite3StrICmp(pParent->aCol[pParent->iPKey].zName, zKey) ) return 0; } }else if( paiCol ){ assert( nCol>1 ); - aiCol = (int *)sqlite3DbMallocRaw(pParse->db, nCol*sizeof(int)); + aiCol = (int *)sqlite3DbMallocRawNN(pParse->db, nCol*sizeof(int)); if( !aiCol ) return 1; *paiCol = aiCol; } for(pIdx=pParent->pIndex; pIdx; pIdx=pIdx->pNext){ - if( pIdx->nColumn==nCol && pIdx->onError!=OE_None ){ + if( pIdx->nKeyCol==nCol && IsUniqueIndex(pIdx) ){ /* pIdx is a UNIQUE index (or a PRIMARY KEY) and has the right number ** of columns. If each indexed column corresponds to a foreign key ** column of pFKey, then this index is a winner. */ if( zKey==0 ){ /* If zKey is NULL, then this foreign key is implicitly mapped to ** the PRIMARY KEY of table pParent. The PRIMARY KEY index may be - ** identified by the test (Index.autoIndex==2). */ - if( pIdx->autoIndex==2 ){ + ** identified by the test. */ + if( IsPrimaryKeyIndex(pIdx) ){ if( aiCol ){ int i; for(i=0; i<nCol; i++) aiCol[i] = pFKey->aCol[i].iFrom; } break; @@ -74307,21 +102129,21 @@ ** map to an explicit list of columns in table pParent. Check if this ** index matches those columns. Also, check that the index uses ** the default collation sequences for each column. */ int i, j; for(i=0; i<nCol; i++){ - int iCol = pIdx->aiColumn[i]; /* Index of column in parent tbl */ - char *zDfltColl; /* Def. collation for column */ + i16 iCol = pIdx->aiColumn[i]; /* Index of column in parent tbl */ + const char *zDfltColl; /* Def. collation for column */ char *zIdxCol; /* Name of indexed column */ + + if( iCol<0 ) break; /* No foreign keys against expression indexes */ /* If the index uses a collation sequence that is different from ** the default collation sequence for the column, this index is ** unusable. Bail out early in this case. 
*/ zDfltColl = pParent->aCol[iCol].zColl; - if( !zDfltColl ){ - zDfltColl = "BINARY"; - } + if( !zDfltColl ) zDfltColl = sqlite3StrBINARY; if( sqlite3StrICmp(pIdx->azColl[i], zDfltColl) ) break; zIdxCol = pParent->aCol[iCol].zName; for(j=0; j<nCol; j++){ if( sqlite3StrICmp(pFKey->aCol[j].zCol, zIdxCol)==0 ){ @@ -74336,11 +102158,13 @@ } } if( !pIdx ){ if( !pParse->disableTriggers ){ - sqlite3ErrorMsg(pParse, "foreign key mismatch"); + sqlite3ErrorMsg(pParse, + "foreign key mismatch - \"%w\" referencing \"%w\"", + pFKey->pFrom->zName, pFKey->zTo); } sqlite3DbFree(pParse->db, aiCol); return 1; } @@ -74397,14 +102221,15 @@ ** Check if any of the key columns in the child table row are NULL. If ** any are, then the constraint is considered satisfied. No need to ** search for a matching row in the parent table. */ if( nIncr<0 ){ sqlite3VdbeAddOp2(v, OP_FkIfZero, pFKey->isDeferred, iOk); + VdbeCoverage(v); } for(i=0; i<pFKey->nCol; i++){ int iReg = aiCol[i] + regData + 1; - sqlite3VdbeAddOp2(v, OP_IsNull, iReg, iOk); + sqlite3VdbeAddOp2(v, OP_IsNull, iReg, iOk); VdbeCoverage(v); } if( isIgnore==0 ){ if( pIdx==0 ){ /* If pIdx is NULL, then the parent key is the INTEGER PRIMARY KEY @@ -74417,116 +102242,192 @@ ** is no matching parent key. Before using MustBeInt, make a copy of ** the value. Otherwise, the value inserted into the child key column ** will have INTEGER affinity applied to it, which may not be correct. */ sqlite3VdbeAddOp2(v, OP_SCopy, aiCol[0]+1+regData, regTemp); iMustBeInt = sqlite3VdbeAddOp2(v, OP_MustBeInt, regTemp, 0); + VdbeCoverage(v); /* If the parent table is the same as the child table, and we are about ** to increment the constraint-counter (i.e. this is an INSERT operation), ** then check if the row being inserted matches itself. If so, do not ** increment the constraint-counter. */ if( pTab==pFKey->pFrom && nIncr==1 ){ - sqlite3VdbeAddOp3(v, OP_Eq, regData, iOk, regTemp); + sqlite3VdbeAddOp3(v, OP_Eq, regData, iOk, regTemp); VdbeCoverage(v); + sqlite3VdbeChangeP5(v, SQLITE_NOTNULL); } sqlite3OpenTable(pParse, iCur, iDb, pTab, OP_OpenRead); - sqlite3VdbeAddOp3(v, OP_NotExists, iCur, 0, regTemp); - sqlite3VdbeAddOp2(v, OP_Goto, 0, iOk); + sqlite3VdbeAddOp3(v, OP_NotExists, iCur, 0, regTemp); VdbeCoverage(v); + sqlite3VdbeGoto(v, iOk); sqlite3VdbeJumpHere(v, sqlite3VdbeCurrentAddr(v)-2); sqlite3VdbeJumpHere(v, iMustBeInt); sqlite3ReleaseTempReg(pParse, regTemp); }else{ int nCol = pFKey->nCol; int regTemp = sqlite3GetTempRange(pParse, nCol); int regRec = sqlite3GetTempReg(pParse); - KeyInfo *pKey = sqlite3IndexKeyinfo(pParse, pIdx); sqlite3VdbeAddOp3(v, OP_OpenRead, iCur, pIdx->tnum, iDb); - sqlite3VdbeChangeP4(v, -1, (char*)pKey, P4_KEYINFO_HANDOFF); + sqlite3VdbeSetP4KeyInfo(pParse, pIdx); for(i=0; i<nCol; i++){ - sqlite3VdbeAddOp2(v, OP_SCopy, aiCol[i]+1+regData, regTemp+i); + sqlite3VdbeAddOp2(v, OP_Copy, aiCol[i]+1+regData, regTemp+i); } /* If the parent table is the same as the child table, and we are about ** to increment the constraint-counter (i.e. this is an INSERT operation), ** then check if the row being inserted matches itself. If so, do not - ** increment the constraint-counter. */ + ** increment the constraint-counter. + ** + ** If any of the parent-key values are NULL, then the row cannot match + ** itself. So set JUMPIFNULL to make sure we do the OP_Found if any + ** of the parent-key values are NULL (at this point it is known that + ** none of the child key values are). 
+ */ if( pTab==pFKey->pFrom && nIncr==1 ){ int iJump = sqlite3VdbeCurrentAddr(v) + nCol + 1; for(i=0; i<nCol; i++){ int iChild = aiCol[i]+1+regData; int iParent = pIdx->aiColumn[i]+1+regData; - sqlite3VdbeAddOp3(v, OP_Ne, iChild, iJump, iParent); + assert( pIdx->aiColumn[i]>=0 ); + assert( aiCol[i]!=pTab->iPKey ); + if( pIdx->aiColumn[i]==pTab->iPKey ){ + /* The parent key is a composite key that includes the IPK column */ + iParent = regData; + } + sqlite3VdbeAddOp3(v, OP_Ne, iChild, iJump, iParent); VdbeCoverage(v); + sqlite3VdbeChangeP5(v, SQLITE_JUMPIFNULL); } - sqlite3VdbeAddOp2(v, OP_Goto, 0, iOk); + sqlite3VdbeGoto(v, iOk); } - sqlite3VdbeAddOp3(v, OP_MakeRecord, regTemp, nCol, regRec); - sqlite3VdbeChangeP4(v, -1, sqlite3IndexAffinityStr(v, pIdx), 0); - sqlite3VdbeAddOp4Int(v, OP_Found, iCur, iOk, regRec, 0); + sqlite3VdbeAddOp4(v, OP_MakeRecord, regTemp, nCol, regRec, + sqlite3IndexAffinityStr(pParse->db,pIdx), nCol); + sqlite3VdbeAddOp4Int(v, OP_Found, iCur, iOk, regRec, 0); VdbeCoverage(v); sqlite3ReleaseTempReg(pParse, regRec); sqlite3ReleaseTempRange(pParse, regTemp, nCol); } } - if( !pFKey->isDeferred && !pParse->pToplevel && !pParse->isMultiWrite ){ + if( !pFKey->isDeferred && !(pParse->db->flags & SQLITE_DeferFKs) + && !pParse->pToplevel + && !pParse->isMultiWrite + ){ /* Special case: If this is an INSERT statement that will insert exactly ** one row into the table, raise a constraint immediately instead of ** incrementing a counter. This is necessary as the VM code is being ** generated for will not open a statement transaction. */ assert( nIncr==1 ); - sqlite3HaltConstraint( - pParse, OE_Abort, "foreign key constraint failed", P4_STATIC - ); + sqlite3HaltConstraint(pParse, SQLITE_CONSTRAINT_FOREIGNKEY, + OE_Abort, 0, P4_STATIC, P5_ConstraintFK); }else{ if( nIncr>0 && pFKey->isDeferred==0 ){ - sqlite3ParseToplevel(pParse)->mayAbort = 1; + sqlite3MayAbort(pParse); } sqlite3VdbeAddOp2(v, OP_FkCounter, pFKey->isDeferred, nIncr); } sqlite3VdbeResolveLabel(v, iOk); sqlite3VdbeAddOp1(v, OP_Close, iCur); } + +/* +** Return an Expr object that refers to a memory register corresponding +** to column iCol of table pTab. +** +** regBase is the first of an array of register that contains the data +** for pTab. regBase itself holds the rowid. regBase+1 holds the first +** column. regBase+2 holds the second column, and so forth. +*/ +static Expr *exprTableRegister( + Parse *pParse, /* Parsing and code generating context */ + Table *pTab, /* The table whose content is at r[regBase]... */ + int regBase, /* Contents of table pTab */ + i16 iCol /* Which column of pTab is desired */ +){ + Expr *pExpr; + Column *pCol; + const char *zColl; + sqlite3 *db = pParse->db; + + pExpr = sqlite3Expr(db, TK_REGISTER, 0); + if( pExpr ){ + if( iCol>=0 && iCol!=pTab->iPKey ){ + pCol = &pTab->aCol[iCol]; + pExpr->iTable = regBase + iCol + 1; + pExpr->affinity = pCol->affinity; + zColl = pCol->zColl; + if( zColl==0 ) zColl = db->pDfltColl->zName; + pExpr = sqlite3ExprAddCollateString(pParse, pExpr, zColl); + }else{ + pExpr->iTable = regBase; + pExpr->affinity = SQLITE_AFF_INTEGER; + } + } + return pExpr; +} + +/* +** Return an Expr object that refers to column iCol of table pTab which +** has cursor iCur. 
+*/ +static Expr *exprTableColumn( + sqlite3 *db, /* The database connection */ + Table *pTab, /* The table whose column is desired */ + int iCursor, /* The open cursor on the table */ + i16 iCol /* The column that is wanted */ +){ + Expr *pExpr = sqlite3Expr(db, TK_COLUMN, 0); + if( pExpr ){ + pExpr->pTab = pTab; + pExpr->iTable = iCursor; + pExpr->iColumn = iCol; + } + return pExpr; +} + /* ** This function is called to generate code executed when a row is deleted ** from the parent table of foreign key constraint pFKey and, if pFKey is ** deferred, when a row is inserted into the same table. When generating ** code for an SQL UPDATE operation, this function may be called twice - ** once to "delete" the old row and once to "insert" the new row. +** +** Parameter nIncr is passed -1 when inserting a row (as this may decrease +** the number of FK violations in the db) or +1 when deleting one (as this +** may increase the number of FK constraint problems). ** ** The code generated by this function scans through the rows in the child ** table that correspond to the parent table row being deleted or inserted. ** For each child row found, one of the following actions is taken: ** ** Operation | FK type | Action taken ** -------------------------------------------------------------------------- ** DELETE immediate Increment the "immediate constraint counter". ** Or, if the ON (UPDATE|DELETE) action is RESTRICT, -** throw a "foreign key constraint failed" exception. +** throw a "FOREIGN KEY constraint failed" exception. ** ** INSERT immediate Decrement the "immediate constraint counter". ** ** DELETE deferred Increment the "deferred constraint counter". ** Or, if the ON (UPDATE|DELETE) action is RESTRICT, -** throw a "foreign key constraint failed" exception. +** throw a "FOREIGN KEY constraint failed" exception. ** ** INSERT deferred Decrement the "deferred constraint counter". ** ** These operations are identified in the comment at the top of this file ** (fkey.c) as "I.2" and "D.2". */ static void fkScanChildren( Parse *pParse, /* Parse context */ - SrcList *pSrc, /* SrcList containing the table to scan */ - Table *pTab, - Index *pIdx, /* Foreign key index */ - FKey *pFKey, /* Foreign key relationship */ + SrcList *pSrc, /* The child table to be scanned */ + Table *pTab, /* The parent table */ + Index *pIdx, /* Index on parent covering the foreign key */ + FKey *pFKey, /* The foreign key linking pSrc to pTab */ int *aiCol, /* Map from pIdx cols to child table cols */ - int regData, /* Referenced table data starts here */ + int regData, /* Parent row data starts here */ int nIncr /* Amount to increment deferred counter by */ ){ sqlite3 *db = pParse->db; /* Database handle */ int i; /* Iterator variable */ Expr *pWhere = 0; /* WHERE clause to scan with */ @@ -74533,14 +102434,18 @@ NameContext sNameContext; /* Context used to resolve WHERE clause */ WhereInfo *pWInfo; /* Context used by sqlite3WhereXXX() */ int iFkIfZero = 0; /* Address of OP_FkIfZero */ Vdbe *v = sqlite3GetVdbe(pParse); - assert( !pIdx || pIdx->pTable==pTab ); + assert( pIdx==0 || pIdx->pTable==pTab ); + assert( pIdx==0 || pIdx->nKeyCol==pFKey->nCol ); + assert( pIdx!=0 || pFKey->nCol==1 ); + assert( pIdx!=0 || HasRowid(pTab) ); if( nIncr<0 ){ iFkIfZero = sqlite3VdbeAddOp2(v, OP_FkIfZero, pFKey->isDeferred, 0); + VdbeCoverage(v); } /* Create an Expr object representing an SQL expression like: ** ** <parent-key1> = <child-key1> AND <parent-key2> = <child-key2> ... 
@@ -74551,71 +102456,69 @@ */ for(i=0; i<pFKey->nCol; i++){ Expr *pLeft; /* Value from parent table row */ Expr *pRight; /* Column ref to child table */ Expr *pEq; /* Expression (pLeft = pRight) */ - int iCol; /* Index of column in child table */ + i16 iCol; /* Index of column in child table */ const char *zCol; /* Name of column in child table */ - pLeft = sqlite3Expr(db, TK_REGISTER, 0); - if( pLeft ){ - /* Set the collation sequence and affinity of the LHS of each TK_EQ - ** expression to the parent key column defaults. */ - if( pIdx ){ - Column *pCol; - iCol = pIdx->aiColumn[i]; - pCol = &pIdx->pTable->aCol[iCol]; - pLeft->iTable = regData+iCol+1; - pLeft->affinity = pCol->affinity; - pLeft->pColl = sqlite3LocateCollSeq(pParse, pCol->zColl); - }else{ - pLeft->iTable = regData; - pLeft->affinity = SQLITE_AFF_INTEGER; - } - } + iCol = pIdx ? pIdx->aiColumn[i] : -1; + pLeft = exprTableRegister(pParse, pTab, regData, iCol); iCol = aiCol ? aiCol[i] : pFKey->aCol[0].iFrom; assert( iCol>=0 ); zCol = pFKey->pFrom->aCol[iCol].zName; pRight = sqlite3Expr(db, TK_ID, zCol); pEq = sqlite3PExpr(pParse, TK_EQ, pLeft, pRight, 0); pWhere = sqlite3ExprAnd(db, pWhere, pEq); } - /* If the child table is the same as the parent table, and this scan - ** is taking place as part of a DELETE operation (operation D.2), omit the - ** row being deleted from the scan by adding ($rowid != rowid) to the WHERE - ** clause, where $rowid is the rowid of the row being deleted. */ + /* If the child table is the same as the parent table, then add terms + ** to the WHERE clause that prevent this entry from being scanned. + ** The added WHERE clause terms are like this: + ** + ** $current_rowid!=rowid + ** NOT( $current_a==a AND $current_b==b AND ... ) + ** + ** The first form is used for rowid tables. The second form is used + ** for WITHOUT ROWID tables. In the second form, the primary key is + ** (a,b,...) + */ if( pTab==pFKey->pFrom && nIncr>0 ){ - Expr *pEq; /* Expression (pLeft = pRight) */ + Expr *pNe; /* Expression (pLeft != pRight) */ Expr *pLeft; /* Value from parent table row */ Expr *pRight; /* Column ref to child table */ - pLeft = sqlite3Expr(db, TK_REGISTER, 0); - pRight = sqlite3Expr(db, TK_COLUMN, 0); - if( pLeft && pRight ){ - pLeft->iTable = regData; - pLeft->affinity = SQLITE_AFF_INTEGER; - pRight->iTable = pSrc->a[0].iCursor; - pRight->iColumn = -1; - } - pEq = sqlite3PExpr(pParse, TK_NE, pLeft, pRight, 0); - pWhere = sqlite3ExprAnd(db, pWhere, pEq); + if( HasRowid(pTab) ){ + pLeft = exprTableRegister(pParse, pTab, regData, -1); + pRight = exprTableColumn(db, pTab, pSrc->a[0].iCursor, -1); + pNe = sqlite3PExpr(pParse, TK_NE, pLeft, pRight, 0); + }else{ + Expr *pEq, *pAll = 0; + Index *pPk = sqlite3PrimaryKeyIndex(pTab); + assert( pIdx!=0 ); + for(i=0; i<pPk->nKeyCol; i++){ + i16 iCol = pIdx->aiColumn[i]; + assert( iCol>=0 ); + pLeft = exprTableRegister(pParse, pTab, regData, iCol); + pRight = exprTableColumn(db, pTab, pSrc->a[0].iCursor, iCol); + pEq = sqlite3PExpr(pParse, TK_EQ, pLeft, pRight, 0); + pAll = sqlite3ExprAnd(db, pAll, pEq); + } + pNe = sqlite3PExpr(pParse, TK_NOT, pAll, 0, 0); + } + pWhere = sqlite3ExprAnd(db, pWhere, pNe); } /* Resolve the references in the WHERE clause. */ memset(&sNameContext, 0, sizeof(NameContext)); sNameContext.pSrcList = pSrc; sNameContext.pParse = pParse; sqlite3ResolveExprNames(&sNameContext, pWhere); /* Create VDBE to loop through the entries in pSrc that match the WHERE - ** clause. If the constraint is not deferred, throw an exception for - ** each row found. 
Otherwise, for deferred constraints, increment the - ** deferred constraint counter by nIncr for each row selected. */ - pWInfo = sqlite3WhereBegin(pParse, pSrc, pWhere, 0, 0); - if( nIncr>0 && pFKey->isDeferred==0 ){ - sqlite3ParseToplevel(pParse)->mayAbort = 1; - } + ** clause. For each row found, increment either the deferred or immediate + ** foreign key constraint counter. */ + pWInfo = sqlite3WhereBegin(pParse, pSrc, pWhere, 0, 0, 0, 0); sqlite3VdbeAddOp2(v, OP_FkCounter, pFKey->isDeferred, nIncr); if( pWInfo ){ sqlite3WhereEnd(pWInfo); } @@ -74625,12 +102528,12 @@ sqlite3VdbeJumpHere(v, iFkIfZero); } } /* -** This function returns a pointer to the head of a linked list of FK -** constraints for which table pTab is the parent table. For example, +** This function returns a linked list of FKey objects (connected by +** FKey.pNextTo) holding all children of table pTab. For example, ** given the following schema: ** ** CREATE TABLE t1(a PRIMARY KEY); ** CREATE TABLE t2(b REFERENCES t1(a); ** @@ -74639,12 +102542,11 @@ ** "t2". Calling this function with "t2" as the argument would return a ** NULL pointer (as there are no FK constraints for which t2 is the parent ** table). */ SQLITE_PRIVATE FKey *sqlite3FkReferences(Table *pTab){ - int nName = sqlite3Strlen30(pTab->zName); - return (FKey *)sqlite3HashFind(&pTab->pSchema->fkeyHash, pTab->zName, nName); + return (FKey *)sqlite3HashFind(&pTab->pSchema->fkeyHash, pTab->zName); } /* ** The second argument is a Trigger structure allocated by the ** fkActionTrigger() routine. This function deletes the Trigger structure @@ -74694,36 +102596,125 @@ ** generating any VDBE code. If one can be found, then jump over ** the entire DELETE if there are no outstanding deferred constraints ** when this statement is run. */ FKey *p; for(p=pTab->pFKey; p; p=p->pNextFrom){ - if( p->isDeferred ) break; + if( p->isDeferred || (db->flags & SQLITE_DeferFKs) ) break; } if( !p ) return; iSkip = sqlite3VdbeMakeLabel(v); - sqlite3VdbeAddOp2(v, OP_FkIfZero, 1, iSkip); + sqlite3VdbeAddOp2(v, OP_FkIfZero, 1, iSkip); VdbeCoverage(v); } pParse->disableTriggers = 1; sqlite3DeleteFrom(pParse, sqlite3SrcListDup(db, pName, 0), 0); pParse->disableTriggers = 0; /* If the DELETE has generated immediate foreign key constraint ** violations, halt the VDBE and return an error at this point, before ** any modifications to the schema are made. This is because statement - ** transactions are not able to rollback schema changes. */ - sqlite3VdbeAddOp2(v, OP_FkIfZero, 0, sqlite3VdbeCurrentAddr(v)+2); - sqlite3HaltConstraint( - pParse, OE_Abort, "foreign key constraint failed", P4_STATIC - ); + ** transactions are not able to rollback schema changes. + ** + ** If the SQLITE_DeferFKs flag is set, then this is not required, as + ** the statement transaction will not be rolled back even if FK + ** constraints are violated. + */ + if( (db->flags & SQLITE_DeferFKs)==0 ){ + sqlite3VdbeAddOp2(v, OP_FkIfZero, 0, sqlite3VdbeCurrentAddr(v)+2); + VdbeCoverage(v); + sqlite3HaltConstraint(pParse, SQLITE_CONSTRAINT_FOREIGNKEY, + OE_Abort, 0, P4_STATIC, P5_ConstraintFK); + } if( iSkip ){ sqlite3VdbeResolveLabel(v, iSkip); } } } + +/* +** The second argument points to an FKey object representing a foreign key +** for which pTab is the child table. An UPDATE statement against pTab +** is currently being processed. 
For each column of the table that is +** actually updated, the corresponding element in the aChange[] array +** is zero or greater (if a column is unmodified the corresponding element +** is set to -1). If the rowid column is modified by the UPDATE statement +** the bChngRowid argument is non-zero. +** +** This function returns true if any of the columns that are part of the +** child key for FK constraint *p are modified. +*/ +static int fkChildIsModified( + Table *pTab, /* Table being updated */ + FKey *p, /* Foreign key for which pTab is the child */ + int *aChange, /* Array indicating modified columns */ + int bChngRowid /* True if rowid is modified by this update */ +){ + int i; + for(i=0; i<p->nCol; i++){ + int iChildKey = p->aCol[i].iFrom; + if( aChange[iChildKey]>=0 ) return 1; + if( iChildKey==pTab->iPKey && bChngRowid ) return 1; + } + return 0; +} + +/* +** The second argument points to an FKey object representing a foreign key +** for which pTab is the parent table. An UPDATE statement against pTab +** is currently being processed. For each column of the table that is +** actually updated, the corresponding element in the aChange[] array +** is zero or greater (if a column is unmodified the corresponding element +** is set to -1). If the rowid column is modified by the UPDATE statement +** the bChngRowid argument is non-zero. +** +** This function returns true if any of the columns that are part of the +** parent key for FK constraint *p are modified. +*/ +static int fkParentIsModified( + Table *pTab, + FKey *p, + int *aChange, + int bChngRowid +){ + int i; + for(i=0; i<p->nCol; i++){ + char *zKey = p->aCol[i].zCol; + int iKey; + for(iKey=0; iKey<pTab->nCol; iKey++){ + if( aChange[iKey]>=0 || (iKey==pTab->iPKey && bChngRowid) ){ + Column *pCol = &pTab->aCol[iKey]; + if( zKey ){ + if( 0==sqlite3StrICmp(pCol->zName, zKey) ) return 1; + }else if( pCol->colFlags & COLFLAG_PRIMKEY ){ + return 1; + } + } + } + } + return 0; +} + +/* +** Return true if the parser passed as the first argument is being +** used to code a trigger that is really a "SET NULL" action belonging +** to trigger pFKey. +*/ +static int isSetNullAction(Parse *pParse, FKey *pFKey){ + Parse *pTop = sqlite3ParseToplevel(pParse); + if( pTop->pTriggerPrg ){ + Trigger *p = pTop->pTriggerPrg->pTrigger; + if( (p==pFKey->apTrigger[0] && pFKey->aAction[0]==OE_SetNull) + || (p==pFKey->apTrigger[1] && pFKey->aAction[1]==OE_SetNull) + ){ + return 1; + } + } + return 0; +} + /* ** This function is called when inserting, deleting or updating a row of ** table pTab to generate VDBE code to perform foreign key constraint ** processing for the operation. ** @@ -74744,14 +102735,15 @@ */ SQLITE_PRIVATE void sqlite3FkCheck( Parse *pParse, /* Parse context */ Table *pTab, /* Row is being deleted from this table */ int regOld, /* Previous row data is stored here */ - int regNew /* New row data is stored here */ + int regNew, /* New row data is stored here */ + int *aChange, /* Array indicating UPDATEd columns (or 0) */ + int bChngRowid /* True if rowid is UPDATEd */ ){ sqlite3 *db = pParse->db; /* Database handle */ - Vdbe *v; /* VM to write code to */ FKey *pFKey; /* Used to iterate through FKs */ int iDb; /* Index of database containing pTab */ const char *zDb; /* Name of database containing pTab */ int isIgnoreErrors = pParse->disableTriggers; @@ -74759,11 +102751,10 @@ assert( (regOld==0)!=(regNew==0) ); /* If foreign-keys are disabled, this function is a no-op. 
*/ if( (db->flags&SQLITE_ForeignKeys)==0 ) return; - v = sqlite3GetVdbe(pParse); iDb = sqlite3SchemaToIndex(db, pTab->pSchema); zDb = db->aDb[iDb].zName; /* Loop through all the foreign key constraints for which pTab is the ** child table (the table that the foreign key definition is part of). */ @@ -74772,11 +102763,18 @@ Index *pIdx = 0; /* Index on key columns in pTo */ int *aiFree = 0; int *aiCol; int iCol; int i; - int isIgnore = 0; + int bIgnore = 0; + + if( aChange + && sqlite3_stricmp(pTab->zName, pFKey->zTo)!=0 + && fkChildIsModified(pTab, pFKey, aChange, bChngRowid)==0 + ){ + continue; + } /* Find the parent table of this foreign key. Also find a unique index ** on the parent key columns in the parent table. If either of these ** schema items cannot be located, set an error in pParse and return ** early. */ @@ -74783,12 +102781,29 @@ if( pParse->disableTriggers ){ pTo = sqlite3FindTable(db, pFKey->zTo, zDb); }else{ pTo = sqlite3LocateTable(pParse, 0, pFKey->zTo, zDb); } - if( !pTo || locateFkeyIndex(pParse, pTo, pFKey, &pIdx, &aiFree) ){ + if( !pTo || sqlite3FkLocateIndex(pParse, pTo, pFKey, &pIdx, &aiFree) ){ + assert( isIgnoreErrors==0 || (regOld!=0 && regNew==0) ); if( !isIgnoreErrors || db->mallocFailed ) return; + if( pTo==0 ){ + /* If isIgnoreErrors is true, then a table is being dropped. In this + ** case SQLite runs a "DELETE FROM xxx" on the table being dropped + ** before actually dropping it in order to check FK constraints. + ** If the parent table of an FK constraint on the current table is + ** missing, behave as if it is empty. i.e. decrement the relevant + ** FK counter for each row of the current table with non-NULL keys. + */ + Vdbe *v = sqlite3GetVdbe(pParse); + int iJump = sqlite3VdbeCurrentAddr(v) + pFKey->nCol + 1; + for(i=0; i<pFKey->nCol; i++){ + int iReg = pFKey->aCol[i].iFrom + regOld + 1; + sqlite3VdbeAddOp2(v, OP_IsNull, iReg, iJump); VdbeCoverage(v); + } + sqlite3VdbeAddOp2(v, OP_FkCounter, pFKey->isDeferred, -1); + } continue; } assert( pFKey->nCol==1 || (aiFree && pIdx) ); if( aiFree ){ @@ -74799,19 +102814,20 @@ } for(i=0; i<pFKey->nCol; i++){ if( aiCol[i]==pTab->iPKey ){ aiCol[i] = -1; } + assert( pIdx==0 || pIdx->aiColumn[i]>=0 ); #ifndef SQLITE_OMIT_AUTHORIZATION /* Request permission to read the parent key columns. If the ** authorization callback returns SQLITE_IGNORE, behave as if any ** values read from the parent table are NULL. */ if( db->xAuth ){ int rcauth; char *zCol = pTo->aCol[pIdx ? pIdx->aiColumn[i] : pTo->iPKey].zName; rcauth = sqlite3AuthReadCol(pParse, pTo->zName, zCol, iDb); - isIgnore = (rcauth==SQLITE_IGNORE); + bIgnore = (rcauth==SQLITE_IGNORE); } #endif } /* Take a shared-cache advisory read-lock on the parent table. Allocate @@ -74822,43 +102838,55 @@ if( regOld!=0 ){ /* A row is being removed from the child table. Search for the parent. ** If the parent does not exist, removing the child row resolves an ** outstanding foreign key constraint violation. */ - fkLookupParent(pParse, iDb, pTo, pIdx, pFKey, aiCol, regOld, -1,isIgnore); + fkLookupParent(pParse, iDb, pTo, pIdx, pFKey, aiCol, regOld, -1, bIgnore); } - if( regNew!=0 ){ + if( regNew!=0 && !isSetNullAction(pParse, pFKey) ){ /* A row is being added to the child table. If a parent row cannot - ** be found, adding the child row has violated the FK constraint. */ - fkLookupParent(pParse, iDb, pTo, pIdx, pFKey, aiCol, regNew, +1,isIgnore); + ** be found, adding the child row has violated the FK constraint. 
+ ** + ** If this operation is being performed as part of a trigger program + ** that is actually a "SET NULL" action belonging to this very + ** foreign key, then omit this scan altogether. As all child key + ** values are guaranteed to be NULL, it is not possible for adding + ** this row to cause an FK violation. */ + fkLookupParent(pParse, iDb, pTo, pIdx, pFKey, aiCol, regNew, +1, bIgnore); } sqlite3DbFree(db, aiFree); } - /* Loop through all the foreign key constraints that refer to this table */ + /* Loop through all the foreign key constraints that refer to this table. + ** (the "child" constraints) */ for(pFKey = sqlite3FkReferences(pTab); pFKey; pFKey=pFKey->pNextTo){ Index *pIdx = 0; /* Foreign key index for pFKey */ SrcList *pSrc; int *aiCol = 0; - if( !pFKey->isDeferred && !pParse->pToplevel && !pParse->isMultiWrite ){ + if( aChange && fkParentIsModified(pTab, pFKey, aChange, bChngRowid)==0 ){ + continue; + } + + if( !pFKey->isDeferred && !(db->flags & SQLITE_DeferFKs) + && !pParse->pToplevel && !pParse->isMultiWrite + ){ assert( regOld==0 && regNew!=0 ); - /* Inserting a single row into a parent table cannot cause an immediate - ** foreign key violation. So do nothing in this case. */ + /* Inserting a single row into a parent table cannot cause (or fix) + ** an immediate foreign key violation. So do nothing in this case. */ continue; } - if( locateFkeyIndex(pParse, pTab, pFKey, &pIdx, &aiCol) ){ + if( sqlite3FkLocateIndex(pParse, pTab, pFKey, &pIdx, &aiCol) ){ if( !isIgnoreErrors || db->mallocFailed ) return; continue; } assert( aiCol || pFKey->nCol==1 ); - /* Create a SrcList structure containing a single table (the table - ** the foreign key that refers to this table is attached to). This - ** is required for the sqlite3WhereXXX() interface. */ + /* Create a SrcList structure containing the child table. We need the + ** child table as a SrcList for sqlite3WhereBegin() */ pSrc = sqlite3SrcListAppend(db, 0, 0, 0); if( pSrc ){ struct SrcList_item *pItem = pSrc->a; pItem->pTab = pFKey->pFrom; pItem->zName = pFKey->pFrom->zName; @@ -74867,17 +102895,32 @@ if( regNew!=0 ){ fkScanChildren(pParse, pSrc, pTab, pIdx, pFKey, aiCol, regNew, -1); } if( regOld!=0 ){ - /* If there is a RESTRICT action configured for the current operation - ** on the parent table of this FK, then throw an exception - ** immediately if the FK constraint is violated, even if this is a - ** deferred trigger. That's what RESTRICT means. To defer checking - ** the constraint, the FK should specify NO ACTION (represented - ** using OE_None). NO ACTION is the default. */ + int eAction = pFKey->aAction[aChange!=0]; fkScanChildren(pParse, pSrc, pTab, pIdx, pFKey, aiCol, regOld, 1); + /* If this is a deferred FK constraint, or a CASCADE or SET NULL + ** action applies, then any foreign key violations caused by + ** removing the parent key will be rectified by the action trigger. + ** So do not set the "may-abort" flag in this case. + ** + ** Note 1: If the FK is declared "ON UPDATE CASCADE", then the + ** may-abort flag will eventually be set on this statement anyway + ** (when this function is called as part of processing the UPDATE + ** within the action trigger). + ** + ** Note 2: At first glance it may seem like SQLite could simply omit + ** all OP_FkCounter related scans when either CASCADE or SET NULL + ** applies. The trouble starts if the CASCADE or SET NULL action + ** trigger causes other triggers or action rules attached to the + ** child table to fire. 
In these cases the fk constraint counters + ** might be set incorrectly if any OP_FkCounter related scans are + ** omitted. */ + if( !pFKey->isDeferred && eAction!=OE_Cascade && eAction!=OE_SetNull ){ + sqlite3MayAbort(pParse); + } } pItem->zName = 0; sqlite3SrcListDelete(db, pSrc); } sqlite3DbFree(db, aiCol); @@ -74901,18 +102944,22 @@ for(p=pTab->pFKey; p; p=p->pNextFrom){ for(i=0; i<p->nCol; i++) mask |= COLUMN_MASK(p->aCol[i].iFrom); } for(p=sqlite3FkReferences(pTab); p; p=p->pNextTo){ Index *pIdx = 0; - locateFkeyIndex(pParse, pTab, p, &pIdx, 0); + sqlite3FkLocateIndex(pParse, pTab, p, &pIdx, 0); if( pIdx ){ - for(i=0; i<pIdx->nColumn; i++) mask |= COLUMN_MASK(pIdx->aiColumn[i]); + for(i=0; i<pIdx->nKeyCol; i++){ + assert( pIdx->aiColumn[i]>=0 ); + mask |= COLUMN_MASK(pIdx->aiColumn[i]); + } } } } return mask; } + /* ** This function is called before generating code to update or delete a ** row contained in table pTab. If the operation is a DELETE, then ** parameter aChange is passed a NULL value. For an UPDATE, aChange points @@ -74939,35 +102986,20 @@ ** foreign key constraint. */ return (sqlite3FkReferences(pTab) || pTab->pFKey); }else{ /* This is an UPDATE. Foreign key processing is only required if the ** operation modifies one or more child or parent key columns. */ - int i; FKey *p; /* Check if any child key columns are being modified. */ for(p=pTab->pFKey; p; p=p->pNextFrom){ - for(i=0; i<p->nCol; i++){ - int iChildKey = p->aCol[i].iFrom; - if( aChange[iChildKey]>=0 ) return 1; - if( iChildKey==pTab->iPKey && chngRowid ) return 1; - } + if( fkChildIsModified(pTab, p, aChange, chngRowid) ) return 1; } /* Check if any parent key columns are being modified. */ for(p=sqlite3FkReferences(pTab); p; p=p->pNextTo){ - for(i=0; i<p->nCol; i++){ - char *zKey = p->aCol[i].zCol; - int iKey; - for(iKey=0; iKey<pTab->nCol; iKey++){ - Column *pCol = &pTab->aCol[iKey]; - if( (zKey ? !sqlite3StrICmp(pCol->zName, zKey) : pCol->isPrimKey) ){ - if( aChange[iKey]>=0 ) return 1; - if( iKey==pTab->iPKey && chngRowid ) return 1; - } - } - } + if( fkParentIsModified(pTab, p, aChange, chngRowid) ) return 1; } } } return 0; } @@ -75014,11 +103046,10 @@ action = pFKey->aAction[iAction]; pTrigger = pFKey->apTrigger[iAction]; if( action!=OE_None && !pTrigger ){ - u8 enableLookaside; /* Copy of db->lookaside.bEnabled */ char const *zFrom; /* Name of child table */ int nFrom; /* Length in bytes of zFrom */ Index *pIdx = 0; /* Parent key index for this FK */ int *aiCol = 0; /* child table cols -> parent key cols */ TriggerStep *pStep = 0; /* First (only) step of trigger program */ @@ -75026,11 +103057,11 @@ ExprList *pList = 0; /* Changes list if ON UPDATE CASCADE */ Select *pSelect = 0; /* If RESTRICT, "SELECT RAISE(...)" */ int i; /* Iterator variable */ Expr *pWhen = 0; /* WHEN clause for the trigger */ - if( locateFkeyIndex(pParse, pTab, pFKey, &pIdx, &aiCol) ) return 0; + if( sqlite3FkLocateIndex(pParse, pTab, pFKey, &pIdx, &aiCol) ) return 0; assert( aiCol || pFKey->nCol==1 ); for(i=0; i<pFKey->nCol; i++){ Token tOld = { "old", 3 }; /* Literal "old" token */ Token tNew = { "new", 3 }; /* Literal "new" token */ @@ -75039,26 +103070,26 @@ int iFromCol; /* Idx of column in child table */ Expr *pEq; /* tFromCol = OLD.tToCol */ iFromCol = aiCol ? aiCol[i] : pFKey->aCol[0].iFrom; assert( iFromCol>=0 ); - tToCol.z = pIdx ? 
pTab->aCol[pIdx->aiColumn[i]].zName : "oid"; - tFromCol.z = pFKey->pFrom->aCol[iFromCol].zName; - - tToCol.n = sqlite3Strlen30(tToCol.z); - tFromCol.n = sqlite3Strlen30(tFromCol.z); + assert( pIdx!=0 || (pTab->iPKey>=0 && pTab->iPKey<pTab->nCol) ); + assert( pIdx==0 || pIdx->aiColumn[i]>=0 ); + sqlite3TokenInit(&tToCol, + pTab->aCol[pIdx ? pIdx->aiColumn[i] : pTab->iPKey].zName); + sqlite3TokenInit(&tFromCol, pFKey->pFrom->aCol[iFromCol].zName); /* Create the expression "OLD.zToCol = zFromCol". It is important ** that the "OLD.zToCol" term is on the LHS of the = operator, so ** that the affinity and collation sequence associated with the ** parent table are used for the comparison. */ pEq = sqlite3PExpr(pParse, TK_EQ, sqlite3PExpr(pParse, TK_DOT, - sqlite3PExpr(pParse, TK_ID, 0, 0, &tOld), - sqlite3PExpr(pParse, TK_ID, 0, 0, &tToCol) + sqlite3ExprAlloc(db, TK_ID, &tOld, 0), + sqlite3ExprAlloc(db, TK_ID, &tToCol, 0) , 0), - sqlite3PExpr(pParse, TK_ID, 0, 0, &tFromCol) + sqlite3ExprAlloc(db, TK_ID, &tFromCol, 0) , 0); pWhere = sqlite3ExprAnd(db, pWhere, pEq); /* For ON UPDATE, construct the next term of the WHEN clause. ** The final WHEN clause will be like this: @@ -75066,27 +103097,27 @@ ** WHEN NOT(old.col1 IS new.col1 AND ... AND old.colN IS new.colN) */ if( pChanges ){ pEq = sqlite3PExpr(pParse, TK_IS, sqlite3PExpr(pParse, TK_DOT, - sqlite3PExpr(pParse, TK_ID, 0, 0, &tOld), - sqlite3PExpr(pParse, TK_ID, 0, 0, &tToCol), + sqlite3ExprAlloc(db, TK_ID, &tOld, 0), + sqlite3ExprAlloc(db, TK_ID, &tToCol, 0), 0), sqlite3PExpr(pParse, TK_DOT, - sqlite3PExpr(pParse, TK_ID, 0, 0, &tNew), - sqlite3PExpr(pParse, TK_ID, 0, 0, &tToCol), + sqlite3ExprAlloc(db, TK_ID, &tNew, 0), + sqlite3ExprAlloc(db, TK_ID, &tToCol, 0), 0), 0); pWhen = sqlite3ExprAnd(db, pWhen, pEq); } if( action!=OE_Restrict && (action!=OE_Cascade || pChanges) ){ Expr *pNew; if( action==OE_Cascade ){ pNew = sqlite3PExpr(pParse, TK_DOT, - sqlite3PExpr(pParse, TK_ID, 0, 0, &tNew), - sqlite3PExpr(pParse, TK_ID, 0, 0, &tToCol) + sqlite3ExprAlloc(db, TK_ID, &tNew, 0), + sqlite3ExprAlloc(db, TK_ID, &tToCol, 0) , 0); }else if( action==OE_SetDflt ){ Expr *pDflt = pFKey->pFrom->aCol[iFromCol].pDflt; if( pDflt ){ pNew = sqlite3ExprDup(db, pDflt, 0); @@ -75109,11 +103140,11 @@ Token tFrom; Expr *pRaise; tFrom.z = zFrom; tFrom.n = nFrom; - pRaise = sqlite3Expr(db, TK_RAISE, "foreign key constraint failed"); + pRaise = sqlite3Expr(db, TK_RAISE, "FOREIGN KEY constraint failed"); if( pRaise ){ pRaise->affinity = OE_Abort; } pSelect = sqlite3SelectNew(pParse, sqlite3ExprListAppend(pParse, 0, pRaise), @@ -75122,28 +103153,22 @@ 0, 0, 0, 0, 0, 0 ); pWhere = 0; } - /* In the current implementation, pTab->dbMem==0 for all tables except - ** for temporary tables used to describe subqueries. And temporary - ** tables do not have foreign key constraints. Hence, pTab->dbMem - ** should always be 0 there. 
- */ - enableLookaside = db->lookaside.bEnabled; - db->lookaside.bEnabled = 0; + /* Disable lookaside memory allocation */ + db->lookaside.bDisable++; pTrigger = (Trigger *)sqlite3DbMallocZero(db, sizeof(Trigger) + /* struct Trigger */ sizeof(TriggerStep) + /* Single step in trigger program */ - nFrom + 1 /* Space for pStep->target.z */ + nFrom + 1 /* Space for pStep->zTarget */ ); if( pTrigger ){ pStep = pTrigger->step_list = (TriggerStep *)&pTrigger[1]; - pStep->target.z = (char *)&pStep[1]; - pStep->target.n = nFrom; - memcpy((char *)pStep->target.z, zFrom, nFrom); + pStep->zTarget = (char *)&pStep[1]; + memcpy((char *)pStep->zTarget, zFrom, nFrom); pStep->pWhere = sqlite3ExprDup(db, pWhere, EXPRDUP_REDUCE); pStep->pExprList = sqlite3ExprListDup(db, pList, EXPRDUP_REDUCE); pStep->pSelect = sqlite3SelectDup(db, pSelect, EXPRDUP_REDUCE); if( pWhen ){ @@ -75151,20 +103176,21 @@ pTrigger->pWhen = sqlite3ExprDup(db, pWhen, EXPRDUP_REDUCE); } } /* Re-enable the lookaside buffer, if it was disabled earlier. */ - db->lookaside.bEnabled = enableLookaside; + db->lookaside.bDisable--; sqlite3ExprDelete(db, pWhere); sqlite3ExprDelete(db, pWhen); sqlite3ExprListDelete(db, pList); sqlite3SelectDelete(db, pSelect); if( db->mallocFailed==1 ){ fkTriggerDelete(db, pTrigger); return 0; } + assert( pStep!=0 ); switch( action ){ case OE_Restrict: pStep->op = TK_SELECT; break; @@ -75192,22 +103218,26 @@ */ SQLITE_PRIVATE void sqlite3FkActions( Parse *pParse, /* Parse context */ Table *pTab, /* Table being updated or deleted from */ ExprList *pChanges, /* Change-list for UPDATE, NULL for DELETE */ - int regOld /* Address of array containing old row */ + int regOld, /* Address of array containing old row */ + int *aChange, /* Array indicating UPDATEd columns (or 0) */ + int bChngRowid /* True if rowid is UPDATEd */ ){ /* If foreign-key support is enabled, iterate through all FKs that ** refer to table pTab. If there is an action associated with the FK ** for this operation (either update or delete), invoke the associated ** trigger sub-program. */ if( pParse->db->flags&SQLITE_ForeignKeys ){ FKey *pFKey; /* Iterator variable */ for(pFKey = sqlite3FkReferences(pTab); pFKey; pFKey=pFKey->pNextTo){ - Trigger *pAction = fkActionTrigger(pParse, pTab, pFKey, pChanges); - if( pAction ){ - sqlite3CodeRowTriggerDirect(pParse, pAction, pTab, regOld, OE_Abort, 0); + if( aChange==0 || fkParentIsModified(pTab, pFKey, aChange, bChngRowid) ){ + Trigger *pAct = fkActionTrigger(pParse, pTab, pFKey, pChanges); + if( pAct ){ + sqlite3CodeRowTriggerDirect(pParse, pAct, pTab, regOld, OE_Abort, 0); + } } } } } @@ -75216,41 +103246,44 @@ /* ** Free all memory associated with foreign key definitions attached to ** table pTab. Remove the deleted foreign keys from the Schema.fkeyHash ** hash table. */ -SQLITE_PRIVATE void sqlite3FkDelete(Table *pTab){ +SQLITE_PRIVATE void sqlite3FkDelete(sqlite3 *db, Table *pTab){ FKey *pFKey; /* Iterator variable */ FKey *pNext; /* Copy of pFKey->pNextFrom */ + assert( db==0 || sqlite3SchemaMutexHeld(db, 0, pTab->pSchema) ); for(pFKey=pTab->pFKey; pFKey; pFKey=pNext){ /* Remove the FK from the fkeyHash hash table. */ - if( pFKey->pPrevTo ){ - pFKey->pPrevTo->pNextTo = pFKey->pNextTo; - }else{ - void *data = (void *)pFKey->pNextTo; - const char *z = (data ? 
pFKey->pNextTo->zTo : pFKey->zTo); - sqlite3HashInsert(&pTab->pSchema->fkeyHash, z, sqlite3Strlen30(z), data); - } - if( pFKey->pNextTo ){ - pFKey->pNextTo->pPrevTo = pFKey->pPrevTo; - } - - /* Delete any triggers created to implement actions for this FK. */ -#ifndef SQLITE_OMIT_TRIGGER - fkTriggerDelete(pTab->dbMem, pFKey->apTrigger[0]); - fkTriggerDelete(pTab->dbMem, pFKey->apTrigger[1]); -#endif + if( !db || db->pnBytesFreed==0 ){ + if( pFKey->pPrevTo ){ + pFKey->pPrevTo->pNextTo = pFKey->pNextTo; + }else{ + void *p = (void *)pFKey->pNextTo; + const char *z = (p ? pFKey->pNextTo->zTo : pFKey->zTo); + sqlite3HashInsert(&pTab->pSchema->fkeyHash, z, p); + } + if( pFKey->pNextTo ){ + pFKey->pNextTo->pPrevTo = pFKey->pPrevTo; + } + } /* EV: R-30323-21917 Each foreign key constraint in SQLite is ** classified as either immediate or deferred. */ assert( pFKey->isDeferred==0 || pFKey->isDeferred==1 ); + + /* Delete any triggers created to implement actions for this FK. */ +#ifndef SQLITE_OMIT_TRIGGER + fkTriggerDelete(db, pFKey->apTrigger[0]); + fkTriggerDelete(db, pFKey->apTrigger[1]); +#endif pNext = pFKey->pNextFrom; - sqlite3DbFree(pTab->dbMem, pFKey); + sqlite3DbFree(db, pFKey); } } #endif /* ifndef SQLITE_OMIT_FOREIGN_KEY */ /************** End of fkey.c ************************************************/ @@ -75267,52 +103300,68 @@ ** ************************************************************************* ** This file contains C code routines that are called by the parser ** to handle INSERT statements in SQLite. */ +/* #include "sqliteInt.h" */ /* -** Generate code that will open a table for reading. +** Generate code that will +** +** (1) acquire a lock for table pTab then +** (2) open pTab as cursor iCur. +** +** If pTab is a WITHOUT ROWID table, then it is the PRIMARY KEY index +** for that table that is actually opened. */ SQLITE_PRIVATE void sqlite3OpenTable( - Parse *p, /* Generate code into this VDBE */ + Parse *pParse, /* Generate code into this VDBE */ int iCur, /* The cursor number of the table */ int iDb, /* The database index in sqlite3.aDb[] */ Table *pTab, /* The table to be opened */ int opcode /* OP_OpenRead or OP_OpenWrite */ ){ Vdbe *v; - if( IsVirtual(pTab) ) return; - v = sqlite3GetVdbe(p); + assert( !IsVirtual(pTab) ); + v = sqlite3GetVdbe(pParse); assert( opcode==OP_OpenWrite || opcode==OP_OpenRead ); - sqlite3TableLock(p, iDb, pTab->tnum, (opcode==OP_OpenWrite)?1:0, pTab->zName); - sqlite3VdbeAddOp3(v, opcode, iCur, pTab->tnum, iDb); - sqlite3VdbeChangeP4(v, -1, SQLITE_INT_TO_PTR(pTab->nCol), P4_INT32); - VdbeComment((v, "%s", pTab->zName)); + sqlite3TableLock(pParse, iDb, pTab->tnum, + (opcode==OP_OpenWrite)?1:0, pTab->zName); + if( HasRowid(pTab) ){ + sqlite3VdbeAddOp4Int(v, opcode, iCur, pTab->tnum, iDb, pTab->nCol); + VdbeComment((v, "%s", pTab->zName)); + }else{ + Index *pPk = sqlite3PrimaryKeyIndex(pTab); + assert( pPk!=0 ); + assert( pPk->tnum==pTab->tnum ); + sqlite3VdbeAddOp3(v, opcode, iCur, pPk->tnum, iDb); + sqlite3VdbeSetP4KeyInfo(pParse, pPk); + VdbeComment((v, "%s", pTab->zName)); + } } /* ** Return a pointer to the column affinity string associated with index ** pIdx. 
A column affinity string has one character for each column in ** the table, according to the affinity of the column: ** ** Character Column affinity ** ------------------------------ -** 'a' TEXT -** 'b' NONE -** 'c' NUMERIC -** 'd' INTEGER -** 'e' REAL +** 'A' BLOB +** 'B' TEXT +** 'C' NUMERIC +** 'D' INTEGER +** 'F' REAL ** -** An extra 'b' is appended to the end of the string to cover the +** An extra 'D' is appended to the end of the string to cover the ** rowid that appears as the last column in every index. ** ** Memory for the buffer containing the column index affinity string ** is managed along with the rest of the Index structure. It will be ** released when sqlite3DeleteIndex() is called. */ -SQLITE_PRIVATE const char *sqlite3IndexAffinityStr(Vdbe *v, Index *pIdx){ +SQLITE_PRIVATE const char *sqlite3IndexAffinityStr(sqlite3 *db, Index *pIdx){ if( !pIdx->zColAff ){ /* The first time a column affinity string for a particular index is ** required, it is allocated and populated here. It is then stored as ** a member of the Index structure for subsequent use. ** @@ -75320,86 +103369,100 @@ ** sqliteDeleteIndex() when the Index structure itself is cleaned ** up. */ int n; Table *pTab = pIdx->pTable; - sqlite3 *db = sqlite3VdbeDb(v); - pIdx->zColAff = (char *)sqlite3Malloc(pIdx->nColumn+2); + pIdx->zColAff = (char *)sqlite3DbMallocRaw(0, pIdx->nColumn+1); if( !pIdx->zColAff ){ - db->mallocFailed = 1; + sqlite3OomFault(db); return 0; } for(n=0; n<pIdx->nColumn; n++){ - pIdx->zColAff[n] = pTab->aCol[pIdx->aiColumn[n]].affinity; + i16 x = pIdx->aiColumn[n]; + if( x>=0 ){ + pIdx->zColAff[n] = pTab->aCol[x].affinity; + }else if( x==XN_ROWID ){ + pIdx->zColAff[n] = SQLITE_AFF_INTEGER; + }else{ + char aff; + assert( x==XN_EXPR ); + assert( pIdx->aColExpr!=0 ); + aff = sqlite3ExprAffinity(pIdx->aColExpr->a[n].pExpr); + if( aff==0 ) aff = SQLITE_AFF_BLOB; + pIdx->zColAff[n] = aff; + } } - pIdx->zColAff[n++] = SQLITE_AFF_NONE; pIdx->zColAff[n] = 0; } return pIdx->zColAff; } /* -** Set P4 of the most recently inserted opcode to a column affinity -** string for table pTab. A column affinity string has one character -** for each column indexed by the index, according to the affinity of the -** column: +** Compute the affinity string for table pTab, if it has not already been +** computed. As an optimization, omit trailing SQLITE_AFF_BLOB affinities. +** +** If the affinity exists (if it is no entirely SQLITE_AFF_BLOB values) and +** if iReg>0 then code an OP_Affinity opcode that will set the affinities +** for register iReg and following. Or if affinities exists and iReg==0, +** then just set the P4 operand of the previous opcode (which should be +** an OP_MakeRecord) to the affinity string. +** +** A column affinity string has one character per column: ** ** Character Column affinity ** ------------------------------ -** 'a' TEXT -** 'b' NONE -** 'c' NUMERIC -** 'd' INTEGER -** 'e' REAL -*/ -SQLITE_PRIVATE void sqlite3TableAffinityStr(Vdbe *v, Table *pTab){ - /* The first time a column affinity string for a particular table - ** is required, it is allocated and populated here. It is then - ** stored as a member of the Table structure for subsequent use. - ** - ** The column affinity string will eventually be deleted by - ** sqlite3DeleteTable() when the Table structure itself is cleaned up. 
- */ - if( !pTab->zColAff ){ - char *zColAff; - int i; +** 'A' BLOB +** 'B' TEXT +** 'C' NUMERIC +** 'D' INTEGER +** 'E' REAL +*/ +SQLITE_PRIVATE void sqlite3TableAffinity(Vdbe *v, Table *pTab, int iReg){ + int i; + char *zColAff = pTab->zColAff; + if( zColAff==0 ){ sqlite3 *db = sqlite3VdbeDb(v); - - zColAff = (char *)sqlite3Malloc(pTab->nCol+1); + zColAff = (char *)sqlite3DbMallocRaw(0, pTab->nCol+1); if( !zColAff ){ - db->mallocFailed = 1; + sqlite3OomFault(db); return; } for(i=0; i<pTab->nCol; i++){ zColAff[i] = pTab->aCol[i].affinity; } - zColAff[pTab->nCol] = '\0'; - + do{ + zColAff[i--] = 0; + }while( i>=0 && zColAff[i]==SQLITE_AFF_BLOB ); pTab->zColAff = zColAff; } - - sqlite3VdbeChangeP4(v, -1, pTab->zColAff, 0); + i = sqlite3Strlen30(zColAff); + if( i ){ + if( iReg ){ + sqlite3VdbeAddOp4(v, OP_Affinity, iReg, i, 0, zColAff, i); + }else{ + sqlite3VdbeChangeP4(v, -1, zColAff, i); + } + } } /* ** Return non-zero if the table pTab in database iDb or any of its indices -** have been opened at any point in the VDBE program beginning at location -** iStartAddr throught the end of the program. This is used to see if +** have been opened at any point in the VDBE program. This is used to see if ** a statement of the form "INSERT INTO <iDb, pTab> SELECT ..." can -** run without using temporary table for the results of the SELECT. +** run without using a temporary table for the results of the SELECT. */ -static int readsTable(Parse *p, int iStartAddr, int iDb, Table *pTab){ +static int readsTable(Parse *p, int iDb, Table *pTab){ Vdbe *v = sqlite3GetVdbe(p); int i; int iEnd = sqlite3VdbeCurrentAddr(v); #ifndef SQLITE_OMIT_VIRTUALTABLE VTable *pVTab = IsVirtual(pTab) ? sqlite3GetVTable(p->db, pTab) : 0; #endif - for(i=iStartAddr; i<iEnd; i++){ + for(i=1; i<iEnd; i++){ VdbeOp *pOp = sqlite3VdbeGetOp(v, i); assert( pOp!=0 ); if( pOp->opcode==OP_OpenRead && pOp->p3==iDb ){ Index *pIndex; int tnum = pOp->p2; @@ -75455,11 +103518,11 @@ AutoincInfo *pInfo; pInfo = pToplevel->pAinc; while( pInfo && pInfo->pTab!=pTab ){ pInfo = pInfo->pNext; } if( pInfo==0 ){ - pInfo = sqlite3DbMallocRaw(pParse->db, sizeof(*pInfo)); + pInfo = sqlite3DbMallocRawNN(pParse->db, sizeof(*pInfo)); if( pInfo==0 ) return 0; pInfo->pNext = pToplevel->pAinc; pToplevel->pAinc = pInfo; pInfo->pTab = pTab; pInfo->iDb = iDb; @@ -75479,45 +103542,59 @@ SQLITE_PRIVATE void sqlite3AutoincrementBegin(Parse *pParse){ AutoincInfo *p; /* Information about an AUTOINCREMENT */ sqlite3 *db = pParse->db; /* The database connection */ Db *pDb; /* Database only autoinc table */ int memId; /* Register holding max rowid */ - int addr; /* A VDBE address */ Vdbe *v = pParse->pVdbe; /* VDBE under construction */ /* This routine is never called during trigger-generation. 
It is ** only called from the top-level */ assert( pParse->pTriggerTab==0 ); - assert( pParse==sqlite3ParseToplevel(pParse) ); + assert( sqlite3IsToplevel(pParse) ); assert( v ); /* We failed long ago if this is not so */ for(p = pParse->pAinc; p; p = p->pNext){ + static const int iLn = VDBE_OFFSET_LINENO(2); + static const VdbeOpList autoInc[] = { + /* 0 */ {OP_Null, 0, 0, 0}, + /* 1 */ {OP_Rewind, 0, 9, 0}, + /* 2 */ {OP_Column, 0, 0, 0}, + /* 3 */ {OP_Ne, 0, 7, 0}, + /* 4 */ {OP_Rowid, 0, 0, 0}, + /* 5 */ {OP_Column, 0, 1, 0}, + /* 6 */ {OP_Goto, 0, 9, 0}, + /* 7 */ {OP_Next, 0, 2, 0}, + /* 8 */ {OP_Integer, 0, 0, 0}, + /* 9 */ {OP_Close, 0, 0, 0} + }; + VdbeOp *aOp; pDb = &db->aDb[p->iDb]; memId = p->regCtr; + assert( sqlite3SchemaMutexHeld(db, 0, pDb->pSchema) ); sqlite3OpenTable(pParse, 0, p->iDb, pDb->pSchema->pSeqTab, OP_OpenRead); - addr = sqlite3VdbeCurrentAddr(v); - sqlite3VdbeAddOp4(v, OP_String8, 0, memId-1, 0, p->pTab->zName, 0); - sqlite3VdbeAddOp2(v, OP_Rewind, 0, addr+9); - sqlite3VdbeAddOp3(v, OP_Column, 0, 0, memId); - sqlite3VdbeAddOp3(v, OP_Ne, memId-1, addr+7, memId); - sqlite3VdbeChangeP5(v, SQLITE_JUMPIFNULL); - sqlite3VdbeAddOp2(v, OP_Rowid, 0, memId+1); - sqlite3VdbeAddOp3(v, OP_Column, 0, 1, memId); - sqlite3VdbeAddOp2(v, OP_Goto, 0, addr+9); - sqlite3VdbeAddOp2(v, OP_Next, 0, addr+2); - sqlite3VdbeAddOp2(v, OP_Integer, 0, memId); - sqlite3VdbeAddOp0(v, OP_Close); + sqlite3VdbeLoadString(v, memId-1, p->pTab->zName); + aOp = sqlite3VdbeAddOpList(v, ArraySize(autoInc), autoInc, iLn); + if( aOp==0 ) break; + aOp[0].p2 = memId; + aOp[0].p3 = memId+1; + aOp[2].p3 = memId; + aOp[3].p1 = memId-1; + aOp[3].p3 = memId; + aOp[3].p5 = SQLITE_JUMPIFNULL; + aOp[4].p2 = memId+1; + aOp[5].p3 = memId; + aOp[8].p2 = memId; } } /* ** Update the maximum rowid for an autoincrement calculation. ** -** This routine should be called when the top of the stack holds a +** This routine should be called when the regRowid register holds a ** new rowid that is about to be inserted. If that new rowid is ** larger than the maximum rowid in the memId memory cell, then the -** memory cell is updated. The stack is unchanged. +** memory cell is updated. */ static void autoIncStep(Parse *pParse, int memId, int regRowid){ if( memId>0 ){ sqlite3VdbeAddOp2(pParse->pVdbe, OP_MemMax, memId, regRowid); } @@ -75528,42 +103605,47 @@ ** maximum rowid values back into the sqlite_sequence register. ** Every statement that might do an INSERT into an autoincrement ** table (either directly or through triggers) needs to call this ** routine just before the "exit" code. 
*/ -SQLITE_PRIVATE void sqlite3AutoincrementEnd(Parse *pParse){ +static SQLITE_NOINLINE void autoIncrementEnd(Parse *pParse){ AutoincInfo *p; Vdbe *v = pParse->pVdbe; sqlite3 *db = pParse->db; assert( v ); for(p = pParse->pAinc; p; p = p->pNext){ + static const int iLn = VDBE_OFFSET_LINENO(2); + static const VdbeOpList autoIncEnd[] = { + /* 0 */ {OP_NotNull, 0, 2, 0}, + /* 1 */ {OP_NewRowid, 0, 0, 0}, + /* 2 */ {OP_MakeRecord, 0, 2, 0}, + /* 3 */ {OP_Insert, 0, 0, 0}, + /* 4 */ {OP_Close, 0, 0, 0} + }; + VdbeOp *aOp; Db *pDb = &db->aDb[p->iDb]; - int j1, j2, j3, j4, j5; int iRec; int memId = p->regCtr; iRec = sqlite3GetTempReg(pParse); + assert( sqlite3SchemaMutexHeld(db, 0, pDb->pSchema) ); sqlite3OpenTable(pParse, 0, p->iDb, pDb->pSchema->pSeqTab, OP_OpenWrite); - j1 = sqlite3VdbeAddOp1(v, OP_NotNull, memId+1); - j2 = sqlite3VdbeAddOp0(v, OP_Rewind); - j3 = sqlite3VdbeAddOp3(v, OP_Column, 0, 0, iRec); - j4 = sqlite3VdbeAddOp3(v, OP_Eq, memId-1, 0, iRec); - sqlite3VdbeAddOp2(v, OP_Next, 0, j3); - sqlite3VdbeJumpHere(v, j2); - sqlite3VdbeAddOp2(v, OP_NewRowid, 0, memId+1); - j5 = sqlite3VdbeAddOp0(v, OP_Goto); - sqlite3VdbeJumpHere(v, j4); - sqlite3VdbeAddOp2(v, OP_Rowid, 0, memId+1); - sqlite3VdbeJumpHere(v, j1); - sqlite3VdbeJumpHere(v, j5); - sqlite3VdbeAddOp3(v, OP_MakeRecord, memId-1, 2, iRec); - sqlite3VdbeAddOp3(v, OP_Insert, 0, iRec, memId+1); - sqlite3VdbeChangeP5(v, OPFLAG_APPEND); - sqlite3VdbeAddOp0(v, OP_Close); + aOp = sqlite3VdbeAddOpList(v, ArraySize(autoIncEnd), autoIncEnd, iLn); + if( aOp==0 ) break; + aOp[0].p1 = memId+1; + aOp[1].p2 = memId+1; + aOp[2].p1 = memId-1; + aOp[2].p3 = iRec; + aOp[3].p2 = iRec; + aOp[3].p3 = memId+1; + aOp[3].p5 = OPFLAG_APPEND; sqlite3ReleaseTempReg(pParse, iRec); } +} +SQLITE_PRIVATE void sqlite3AutoincrementEnd(Parse *pParse){ + if( pParse->pAinc ) autoIncrementEnd(pParse); } #else /* ** If SQLITE_OMIT_AUTOINCREMENT is defined, then the three routines ** above are all no-ops @@ -75581,31 +103663,34 @@ int onError, /* How to handle constraint errors */ int iDbDest /* The database of pDest */ ); /* -** This routine is call to handle SQL of the following forms: +** This routine is called to handle SQL of the following forms: ** -** insert into TABLE (IDLIST) values(EXPRLIST) +** insert into TABLE (IDLIST) values(EXPRLIST),(EXPRLIST),... ** insert into TABLE (IDLIST) select +** insert into TABLE (IDLIST) default values ** ** The IDLIST following the table name is always optional. If omitted, -** then a list of all columns for the table is substituted. The IDLIST -** appears in the pColumn parameter. pColumn is NULL if IDLIST is omitted. +** then a list of all (non-hidden) columns for the table is substituted. +** The IDLIST appears in the pColumn parameter. pColumn is NULL if IDLIST +** is omitted. ** -** The pList parameter holds EXPRLIST in the first form of the INSERT -** statement above, and pSelect is NULL. For the second form, pList is -** NULL and pSelect is a pointer to the select statement used to generate -** data for the insert. +** For the pSelect parameter holds the values to be inserted for the +** first two forms shown above. A VALUES clause is really just short-hand +** for a SELECT statement that omits the FROM clause and everything else +** that follows. If the pSelect parameter is NULL, that means that the +** DEFAULT VALUES form of the INSERT statement is intended. ** ** The code generated follows one of four templates. 
For a simple -** select with data coming from a VALUES clause, the code executes +** insert with data coming from a single-row VALUES clause, the code executes ** once straight down through. Pseudo-code follows (we call this ** the "1st template"): ** ** open write cursor to <table> and its indices -** puts VALUES clause expressions onto the stack +** put VALUES clause expressions into registers ** write the resulting record into <table> ** cleanup ** ** The three remaining templates assume the statement is of the form ** @@ -75633,50 +103718,42 @@ ** ** The 3rd template is for when the second template does not apply ** and the SELECT clause does not read from <table> at any time. ** The generated code follows this template: ** -** EOF <- 0 ** X <- A ** goto B ** A: setup for the SELECT ** loop over the rows in the SELECT ** load values into registers R..R+n ** yield X ** end loop ** cleanup after the SELECT -** EOF <- 1 -** yield X -** goto A +** end-coroutine X ** B: open write cursor to <table> and its indices -** C: yield X -** if EOF goto D +** C: yield X, at EOF goto D ** insert the select result into <table> from R..R+n ** goto C ** D: cleanup ** ** The 4th template is used if the insert statement takes its ** values from a SELECT but the data is being inserted into a table ** that is also read as part of the SELECT. In the third form, -** we have to use a intermediate table to store the results of +** we have to use an intermediate table to store the results of ** the select. The template is like this: ** -** EOF <- 0 ** X <- A ** goto B ** A: setup for the SELECT ** loop over the tables in the SELECT ** load value into register R..R+n ** yield X ** end loop ** cleanup after the SELECT -** EOF <- 1 -** yield X -** halt-error +** end co-routine R ** B: open temp table -** L: yield X -** if EOF goto M +** L: yield X, at EOF goto M ** insert row from R..R+n into temp table ** goto L ** M: open write cursor to <table> and its indices ** rewind temp table ** C: loop over rows of intermediate table @@ -75685,11 +103762,10 @@ ** D: cleanup */ SQLITE_PRIVATE void sqlite3Insert( Parse *pParse, /* Parser context */ SrcList *pTabList, /* Name of table into which we are inserting */ - ExprList *pList, /* List of values to be inserted */ Select *pSelect, /* A SELECT statement to use as the data source */ IdList *pColumn, /* Column names corresponding to IDLIST. */ int onError /* How to handle constraint errors */ ){ sqlite3 *db; /* The main database structure */ @@ -75699,32 +103775,33 @@ int i, j, idx; /* Loop counters */ Vdbe *v; /* Generate code into this virtual machine */ Index *pIdx; /* For looping over indices of the table */ int nColumn; /* Number of columns in the data */ int nHidden = 0; /* Number of hidden columns if TABLE is virtual */ - int baseCur = 0; /* VDBE Cursor number for pTab */ - int keyColumn = -1; /* Column that is the INTEGER PRIMARY KEY */ + int iDataCur = 0; /* VDBE cursor that is the main data repository */ + int iIdxCur = 0; /* First index cursor */ + int ipkColumn = -1; /* Column that is the INTEGER PRIMARY KEY */ int endOfLoop; /* Label for the end of the insertion loop */ - int useTempTable = 0; /* Store SELECT results in intermediate table */ int srcTab = 0; /* Data comes from this temporary cursor if >=0 */ int addrInsTop = 0; /* Jump to label "D" */ int addrCont = 0; /* Top of insert loop. 
Label "C" in templates 3 and 4 */ - int addrSelect = 0; /* Address of coroutine that implements the SELECT */ SelectDest dest; /* Destination for SELECT on rhs of INSERT */ int iDb; /* Index of database holding TABLE */ Db *pDb; /* The database containing table being inserted into */ - int appendFlag = 0; /* True if the insert is likely to be an append */ + u8 useTempTable = 0; /* Store SELECT results in intermediate table */ + u8 appendFlag = 0; /* True if the insert is likely to be an append */ + u8 withoutRowid; /* 0 for normal table. 1 for WITHOUT ROWID table */ + u8 bIdListInOrder; /* True if IDLIST is in table order */ + ExprList *pList = 0; /* List of VALUES() to be inserted */ /* Register allocations */ int regFromSelect = 0;/* Base register for data coming from SELECT */ int regAutoinc = 0; /* Register holding the AUTOINCREMENT counter */ int regRowCount = 0; /* Memory cell used for the row counter */ int regIns; /* Block of regs holding rowid+data being inserted */ int regRowid; /* registers holding insert rowid */ int regData; /* register holding first column to insert */ - int regRecord; /* Holds the assemblied row record */ - int regEof = 0; /* Register recording end of SELECT data */ int *aRegIdx = 0; /* One register allocated to each index */ #ifndef SQLITE_OMIT_TRIGGER int isView; /* True if attempting to insert into a view */ Trigger *pTrigger; /* List of triggers on pTab, if required */ @@ -75734,10 +103811,21 @@ db = pParse->db; memset(&dest, 0, sizeof(dest)); if( pParse->nErr || db->mallocFailed ){ goto insert_cleanup; } + + /* If the Select object is really just a simple VALUES() list with a + ** single row (the common case) then keep that one row of values + ** and discard the other (unused) parts of the pSelect object + */ + if( pSelect && (pSelect->selFlags & SF_Values)!=0 && pSelect->pPrior==0 ){ + pList = pSelect->pEList; + pSelect->pEList = 0; + sqlite3SelectDelete(db, pSelect); + pSelect = 0; + } /* Locate the table into which we will be inserting new information. */ assert( pTabList->nSrc==1 ); zTab = pTabList->a[0].zName; @@ -75751,10 +103839,11 @@ pDb = &db->aDb[iDb]; zDb = pDb->zName; if( sqlite3AuthCheck(pParse, SQLITE_INSERT, pTab->zName, 0, zDb) ){ goto insert_cleanup; } + withoutRowid = !HasRowid(pTab); /* Figure out if we have any triggers and if the table being ** inserted into is a view */ #ifndef SQLITE_OMIT_TRIGGER @@ -75770,20 +103859,17 @@ # define isView 0 #endif assert( (pTrigger && tmask) || (pTrigger==0 && tmask==0) ); /* If pTab is really a view, make sure it has been initialized. - ** ViewGetColumnNames() is a no-op if pTab is not a view (or virtual - ** module table). + ** ViewGetColumnNames() is a no-op if pTab is not a view. */ if( sqlite3ViewGetColumnNames(pParse, pTab) ){ goto insert_cleanup; } - /* Ensure that: - * (a) the table is not read-only, - * (b) that if it is a view then ON INSERT triggers exist + /* Cannot insert into a read-only table. */ if( sqlite3IsReadOnly(pParse, pTab, tmask) ){ goto insert_cleanup; } @@ -75813,135 +103899,162 @@ /* If this is an AUTOINCREMENT table, look up the sequence number in the ** sqlite_sequence table and store it in memory cell regAutoinc. */ regAutoinc = autoIncBegin(pParse, iDb, pTab); + + /* Allocate registers for holding the rowid of the new row, + ** the content of the new row, and the assembled row record. 
+ */ + regRowid = regIns = pParse->nMem+1; + pParse->nMem += pTab->nCol + 1; + if( IsVirtual(pTab) ){ + regRowid++; + pParse->nMem++; + } + regData = regRowid+1; + + /* If the INSERT statement included an IDLIST term, then make sure + ** all elements of the IDLIST really are columns of the table and + ** remember the column indices. + ** + ** If the table has an INTEGER PRIMARY KEY column and that column + ** is named in the IDLIST, then record in the ipkColumn variable + ** the index into IDLIST of the primary key column. ipkColumn is + ** the index of the primary key as it appears in IDLIST, not as + ** is appears in the original table. (The index of the INTEGER + ** PRIMARY KEY in the original table is pTab->iPKey.) + */ + bIdListInOrder = (pTab->tabFlags & TF_OOOHidden)==0; + if( pColumn ){ + for(i=0; i<pColumn->nId; i++){ + pColumn->a[i].idx = -1; + } + for(i=0; i<pColumn->nId; i++){ + for(j=0; j<pTab->nCol; j++){ + if( sqlite3StrICmp(pColumn->a[i].zName, pTab->aCol[j].zName)==0 ){ + pColumn->a[i].idx = j; + if( i!=j ) bIdListInOrder = 0; + if( j==pTab->iPKey ){ + ipkColumn = i; assert( !withoutRowid ); + } + break; + } + } + if( j>=pTab->nCol ){ + if( sqlite3IsRowid(pColumn->a[i].zName) && !withoutRowid ){ + ipkColumn = i; + bIdListInOrder = 0; + }else{ + sqlite3ErrorMsg(pParse, "table %S has no column named %s", + pTabList, 0, pColumn->a[i].zName); + pParse->checkSchema = 1; + goto insert_cleanup; + } + } + } + } /* Figure out how many columns of data are supplied. If the data ** is coming from a SELECT statement, then generate a co-routine that ** produces a single row of the SELECT on each invocation. The ** co-routine is the common header to the 3rd and 4th templates. */ if( pSelect ){ - /* Data is coming from a SELECT. Generate code to implement that SELECT - ** as a co-routine. The code is common to both the 3rd and 4th - ** templates: - ** - ** EOF <- 0 - ** X <- A - ** goto B - ** A: setup for the SELECT - ** loop over the tables in the SELECT - ** load value into register R..R+n - ** yield X - ** end loop - ** cleanup after the SELECT - ** EOF <- 1 - ** yield X - ** halt-error - ** - ** On each invocation of the co-routine, it puts a single row of the - ** SELECT result into registers dest.iMem...dest.iMem+dest.nMem-1. - ** (These output registers are allocated by sqlite3Select().) When - ** the SELECT completes, it sets the EOF flag stored in regEof. - */ - int rc, j1; - - regEof = ++pParse->nMem; - sqlite3VdbeAddOp2(v, OP_Integer, 0, regEof); /* EOF <- 0 */ - VdbeComment((v, "SELECT eof flag")); - sqlite3SelectDestInit(&dest, SRT_Coroutine, ++pParse->nMem); - addrSelect = sqlite3VdbeCurrentAddr(v)+2; - sqlite3VdbeAddOp2(v, OP_Integer, addrSelect-1, dest.iParm); - j1 = sqlite3VdbeAddOp2(v, OP_Goto, 0, 0); - VdbeComment((v, "Jump over SELECT coroutine")); - - /* Resolve the expressions in the SELECT statement and execute it. */ + /* Data is coming from a SELECT or from a multi-row VALUES clause. + ** Generate a co-routine to run the SELECT. */ + int regYield; /* Register holding co-routine entry-point */ + int addrTop; /* Top of the co-routine */ + int rc; /* Result code */ + + regYield = ++pParse->nMem; + addrTop = sqlite3VdbeCurrentAddr(v) + 1; + sqlite3VdbeAddOp3(v, OP_InitCoroutine, regYield, 0, addrTop); + sqlite3SelectDestInit(&dest, SRT_Coroutine, regYield); + dest.iSdst = bIdListInOrder ? 
regData : 0; + dest.nSdst = pTab->nCol; rc = sqlite3Select(pParse, pSelect, &dest); - assert( pParse->nErr==0 || rc ); - if( rc || NEVER(pParse->nErr) || db->mallocFailed ){ - goto insert_cleanup; - } - sqlite3VdbeAddOp2(v, OP_Integer, 1, regEof); /* EOF <- 1 */ - sqlite3VdbeAddOp1(v, OP_Yield, dest.iParm); /* yield X */ - sqlite3VdbeAddOp2(v, OP_Halt, SQLITE_INTERNAL, OE_Abort); - VdbeComment((v, "End of SELECT coroutine")); - sqlite3VdbeJumpHere(v, j1); /* label B: */ - - regFromSelect = dest.iMem; + regFromSelect = dest.iSdst; + if( rc || db->mallocFailed || pParse->nErr ) goto insert_cleanup; + sqlite3VdbeEndCoroutine(v, regYield); + sqlite3VdbeJumpHere(v, addrTop - 1); /* label B: */ assert( pSelect->pEList ); nColumn = pSelect->pEList->nExpr; - assert( dest.nMem==nColumn ); /* Set useTempTable to TRUE if the result of the SELECT statement ** should be written into a temporary table (template 4). Set to - ** FALSE if each* row of the SELECT can be written directly into + ** FALSE if each output row of the SELECT can be written directly into ** the destination table (template 3). ** ** A temp table must be used if the table being updated is also one ** of the tables being read by the SELECT statement. Also use a ** temp table in the case of row triggers. */ - if( pTrigger || readsTable(pParse, addrSelect, iDb, pTab) ){ + if( pTrigger || readsTable(pParse, iDb, pTab) ){ useTempTable = 1; } if( useTempTable ){ /* Invoke the coroutine to extract information from the SELECT ** and add it to a transient table srcTab. The code generated ** here is from the 4th template: ** ** B: open temp table - ** L: yield X - ** if EOF goto M + ** L: yield X, goto M at EOF ** insert row from R..R+n into temp table ** goto L ** M: ... */ int regRec; /* Register to hold packed record */ int regTempRowid; /* Register to hold temp table ROWID */ - int addrTop; /* Label "L" */ - int addrIf; /* Address of jump to M */ + int addrL; /* Label "L" */ srcTab = pParse->nTab++; regRec = sqlite3GetTempReg(pParse); regTempRowid = sqlite3GetTempReg(pParse); sqlite3VdbeAddOp2(v, OP_OpenEphemeral, srcTab, nColumn); - addrTop = sqlite3VdbeAddOp1(v, OP_Yield, dest.iParm); - addrIf = sqlite3VdbeAddOp1(v, OP_If, regEof); + addrL = sqlite3VdbeAddOp1(v, OP_Yield, dest.iSDParm); VdbeCoverage(v); sqlite3VdbeAddOp3(v, OP_MakeRecord, regFromSelect, nColumn, regRec); sqlite3VdbeAddOp2(v, OP_NewRowid, srcTab, regTempRowid); sqlite3VdbeAddOp3(v, OP_Insert, srcTab, regRec, regTempRowid); - sqlite3VdbeAddOp2(v, OP_Goto, 0, addrTop); - sqlite3VdbeJumpHere(v, addrIf); + sqlite3VdbeGoto(v, addrL); + sqlite3VdbeJumpHere(v, addrL); sqlite3ReleaseTempReg(pParse, regRec); sqlite3ReleaseTempReg(pParse, regTempRowid); } }else{ - /* This is the case if the data for the INSERT is coming from a VALUES - ** clause + /* This is the case if the data for the INSERT is coming from a + ** single-row VALUES clause */ NameContext sNC; memset(&sNC, 0, sizeof(sNC)); sNC.pParse = pParse; srcTab = -1; assert( useTempTable==0 ); - nColumn = pList ? pList->nExpr : 0; - for(i=0; i<nColumn; i++){ - if( sqlite3ResolveExprNames(&sNC, pList->a[i].pExpr) ){ + if( pList ){ + nColumn = pList->nExpr; + if( sqlite3ResolveExprListNames(&sNC, pList) ){ goto insert_cleanup; } + }else{ + nColumn = 0; } } + + /* If there is no IDLIST term but the table has an integer primary + ** key, the set the ipkColumn variable to the integer primary key + ** column index in the original table definition. 
+ */ + if( pColumn==0 && nColumn>0 ){ + ipkColumn = pTab->iPKey; + } /* Make sure the number of columns in the source data matches the number ** of columns to be inserted into the table. */ - if( IsVirtual(pTab) ){ - for(i=0; i<pTab->nCol; i++){ - nHidden += (IsHiddenColumn(&pTab->aCol[i]) ? 1 : 0); - } + for(i=0; i<pTab->nCol; i++){ + nHidden += (IsHiddenColumn(&pTab->aCol[i]) ? 1 : 0); } if( pColumn==0 && nColumn && nColumn!=(pTab->nCol-nHidden) ){ sqlite3ErrorMsg(pParse, "table %S has %d columns but %d values were supplied", pTabList, 0, pTab->nCol-nHidden, nColumn); @@ -75949,56 +104062,10 @@ } if( pColumn!=0 && nColumn!=pColumn->nId ){ sqlite3ErrorMsg(pParse, "%d values for %d columns", nColumn, pColumn->nId); goto insert_cleanup; } - - /* If the INSERT statement included an IDLIST term, then make sure - ** all elements of the IDLIST really are columns of the table and - ** remember the column indices. - ** - ** If the table has an INTEGER PRIMARY KEY column and that column - ** is named in the IDLIST, then record in the keyColumn variable - ** the index into IDLIST of the primary key column. keyColumn is - ** the index of the primary key as it appears in IDLIST, not as - ** is appears in the original table. (The index of the primary - ** key in the original table is pTab->iPKey.) - */ - if( pColumn ){ - for(i=0; i<pColumn->nId; i++){ - pColumn->a[i].idx = -1; - } - for(i=0; i<pColumn->nId; i++){ - for(j=0; j<pTab->nCol; j++){ - if( sqlite3StrICmp(pColumn->a[i].zName, pTab->aCol[j].zName)==0 ){ - pColumn->a[i].idx = j; - if( j==pTab->iPKey ){ - keyColumn = i; - } - break; - } - } - if( j>=pTab->nCol ){ - if( sqlite3IsRowid(pColumn->a[i].zName) ){ - keyColumn = i; - }else{ - sqlite3ErrorMsg(pParse, "table %S has no column named %s", - pTabList, 0, pColumn->a[i].zName); - pParse->nErr++; - goto insert_cleanup; - } - } - } - } - - /* If there is no IDLIST term but the table has an integer primary - ** key, the set the keyColumn variable to the primary key column index - ** in the original table definition. - */ - if( pColumn==0 && nColumn>0 ){ - keyColumn = pTab->iPKey; - } /* Initialize the count of rows to be inserted */ if( db->flags & SQLITE_CountRows ){ regRowCount = ++pParse->nMem; @@ -76006,14 +104073,13 @@ } /* If this is not a view, open the table and and all indices */ if( !isView ){ int nIdx; - - baseCur = pParse->nTab; - nIdx = sqlite3OpenTableAndIndices(pParse, pTab, baseCur, OP_OpenWrite); - aRegIdx = sqlite3DbMallocRaw(db, sizeof(int)*(nIdx+1)); + nIdx = sqlite3OpenTableAndIndices(pParse, pTab, OP_OpenWrite, 0, -1, 0, + &iDataCur, &iIdxCur); + aRegIdx = sqlite3DbMallocRawNN(db, sizeof(int)*(nIdx+1)); if( aRegIdx==0 ){ goto insert_cleanup; } for(i=0; i<nIdx; i++){ aRegIdx[i] = ++pParse->nMem; @@ -76023,43 +104089,30 @@ /* This is the top of the main insertion loop */ if( useTempTable ){ /* This block codes the top of loop only. The complete loop is the ** following pseudocode (template 4): ** - ** rewind temp table + ** rewind temp table, if empty goto D ** C: loop over rows of intermediate table ** transfer values form intermediate table into <table> ** end loop ** D: ... */ - addrInsTop = sqlite3VdbeAddOp1(v, OP_Rewind, srcTab); + addrInsTop = sqlite3VdbeAddOp1(v, OP_Rewind, srcTab); VdbeCoverage(v); addrCont = sqlite3VdbeCurrentAddr(v); }else if( pSelect ){ /* This block codes the top of loop only. 
The complete loop is the ** following pseudocode (template 3): ** - ** C: yield X - ** if EOF goto D + ** C: yield X, at EOF goto D ** insert the select result into <table> from R..R+n ** goto C ** D: ... */ - addrCont = sqlite3VdbeAddOp1(v, OP_Yield, dest.iParm); - addrInsTop = sqlite3VdbeAddOp1(v, OP_If, regEof); - } - - /* Allocate registers for holding the rowid of the new row, - ** the content of the new row, and the assemblied row record. - */ - regRecord = ++pParse->nMem; - regRowid = regIns = pParse->nMem+1; - pParse->nMem += pTab->nCol + 1; - if( IsVirtual(pTab) ){ - regRowid++; - pParse->nMem++; - } - regData = regRowid+1; + addrInsTop = addrCont = sqlite3VdbeAddOp1(v, OP_Yield, dest.iSDParm); + VdbeCoverage(v); + } /* Run the BEFORE and INSTEAD OF triggers, if there are any */ endOfLoop = sqlite3VdbeMakeLabel(v); if( tmask & TRIGGER_BEFORE ){ @@ -76069,135 +104122,133 @@ ** PRIMARY KEY into which a NULL is being inserted, that NULL will be ** translated into a unique ID for the row. But on a BEFORE trigger, ** we do not know what the unique ID will be (because the insert has ** not happened yet) so we substitute a rowid of -1 */ - if( keyColumn<0 ){ + if( ipkColumn<0 ){ sqlite3VdbeAddOp2(v, OP_Integer, -1, regCols); }else{ - int j1; + int addr1; + assert( !withoutRowid ); if( useTempTable ){ - sqlite3VdbeAddOp3(v, OP_Column, srcTab, keyColumn, regCols); + sqlite3VdbeAddOp3(v, OP_Column, srcTab, ipkColumn, regCols); }else{ assert( pSelect==0 ); /* Otherwise useTempTable is true */ - sqlite3ExprCode(pParse, pList->a[keyColumn].pExpr, regCols); + sqlite3ExprCode(pParse, pList->a[ipkColumn].pExpr, regCols); } - j1 = sqlite3VdbeAddOp1(v, OP_NotNull, regCols); + addr1 = sqlite3VdbeAddOp1(v, OP_NotNull, regCols); VdbeCoverage(v); sqlite3VdbeAddOp2(v, OP_Integer, -1, regCols); - sqlite3VdbeJumpHere(v, j1); - sqlite3VdbeAddOp1(v, OP_MustBeInt, regCols); + sqlite3VdbeJumpHere(v, addr1); + sqlite3VdbeAddOp1(v, OP_MustBeInt, regCols); VdbeCoverage(v); } /* Cannot have triggers on a virtual table. If it were possible, ** this block would have to account for hidden column. */ assert( !IsVirtual(pTab) ); /* Create the new column data */ - for(i=0; i<pTab->nCol; i++){ - if( pColumn==0 ){ - j = i; - }else{ + for(i=j=0; i<pTab->nCol; i++){ + if( pColumn ){ for(j=0; j<pColumn->nId; j++){ if( pColumn->a[j].idx==i ) break; } } - if( (!useTempTable && !pList) || (pColumn && j>=pColumn->nId) ){ + if( (!useTempTable && !pList) || (pColumn && j>=pColumn->nId) + || (pColumn==0 && IsOrdinaryHiddenColumn(&pTab->aCol[i])) ){ sqlite3ExprCode(pParse, pTab->aCol[i].pDflt, regCols+i+1); }else if( useTempTable ){ sqlite3VdbeAddOp3(v, OP_Column, srcTab, j, regCols+i+1); }else{ assert( pSelect==0 ); /* Otherwise useTempTable is true */ sqlite3ExprCodeAndCache(pParse, pList->a[j].pExpr, regCols+i+1); } + if( pColumn==0 && !IsOrdinaryHiddenColumn(&pTab->aCol[i]) ) j++; } /* If this is an INSERT on a view with an INSTEAD OF INSERT trigger, ** do not attempt any conversions before assembling the record. ** If this is a real table, attempt conversions as required by the ** table column affinities. 
*/ if( !isView ){ - sqlite3VdbeAddOp2(v, OP_Affinity, regCols+1, pTab->nCol); - sqlite3TableAffinityStr(v, pTab); + sqlite3TableAffinity(v, pTab, regCols+1); } /* Fire BEFORE or INSTEAD OF triggers */ sqlite3CodeRowTrigger(pParse, pTrigger, TK_INSERT, 0, TRIGGER_BEFORE, pTab, regCols-pTab->nCol-1, onError, endOfLoop); sqlite3ReleaseTempRange(pParse, regCols, pTab->nCol+1); } - /* Push the record number for the new entry onto the stack. The - ** record number is a randomly generate integer created by NewRowid - ** except when the table has an INTEGER PRIMARY KEY column, in which - ** case the record number is the same as that column. + /* Compute the content of the next row to insert into a range of + ** registers beginning at regIns. */ if( !isView ){ if( IsVirtual(pTab) ){ /* The row that the VUpdate opcode will delete: none */ sqlite3VdbeAddOp2(v, OP_Null, 0, regIns); } - if( keyColumn>=0 ){ + if( ipkColumn>=0 ){ if( useTempTable ){ - sqlite3VdbeAddOp3(v, OP_Column, srcTab, keyColumn, regRowid); + sqlite3VdbeAddOp3(v, OP_Column, srcTab, ipkColumn, regRowid); }else if( pSelect ){ - sqlite3VdbeAddOp2(v, OP_SCopy, regFromSelect+keyColumn, regRowid); + sqlite3VdbeAddOp2(v, OP_Copy, regFromSelect+ipkColumn, regRowid); }else{ VdbeOp *pOp; - sqlite3ExprCode(pParse, pList->a[keyColumn].pExpr, regRowid); + sqlite3ExprCode(pParse, pList->a[ipkColumn].pExpr, regRowid); pOp = sqlite3VdbeGetOp(v, -1); if( ALWAYS(pOp) && pOp->opcode==OP_Null && !IsVirtual(pTab) ){ appendFlag = 1; pOp->opcode = OP_NewRowid; - pOp->p1 = baseCur; + pOp->p1 = iDataCur; pOp->p2 = regRowid; pOp->p3 = regAutoinc; } } /* If the PRIMARY KEY expression is NULL, then use OP_NewRowid ** to generate a unique primary key value. */ if( !appendFlag ){ - int j1; + int addr1; if( !IsVirtual(pTab) ){ - j1 = sqlite3VdbeAddOp1(v, OP_NotNull, regRowid); - sqlite3VdbeAddOp3(v, OP_NewRowid, baseCur, regRowid, regAutoinc); - sqlite3VdbeJumpHere(v, j1); + addr1 = sqlite3VdbeAddOp1(v, OP_NotNull, regRowid); VdbeCoverage(v); + sqlite3VdbeAddOp3(v, OP_NewRowid, iDataCur, regRowid, regAutoinc); + sqlite3VdbeJumpHere(v, addr1); }else{ - j1 = sqlite3VdbeCurrentAddr(v); - sqlite3VdbeAddOp2(v, OP_IsNull, regRowid, j1+2); + addr1 = sqlite3VdbeCurrentAddr(v); + sqlite3VdbeAddOp2(v, OP_IsNull, regRowid, addr1+2); VdbeCoverage(v); } - sqlite3VdbeAddOp1(v, OP_MustBeInt, regRowid); + sqlite3VdbeAddOp1(v, OP_MustBeInt, regRowid); VdbeCoverage(v); } - }else if( IsVirtual(pTab) ){ + }else if( IsVirtual(pTab) || withoutRowid ){ sqlite3VdbeAddOp2(v, OP_Null, 0, regRowid); }else{ - sqlite3VdbeAddOp3(v, OP_NewRowid, baseCur, regRowid, regAutoinc); + sqlite3VdbeAddOp3(v, OP_NewRowid, iDataCur, regRowid, regAutoinc); appendFlag = 1; } autoIncStep(pParse, regAutoinc, regRowid); - /* Push onto the stack, data for all columns of the new entry, beginning + /* Compute data for all columns of the new entry, beginning ** with the first column. */ nHidden = 0; for(i=0; i<pTab->nCol; i++){ int iRegStore = regRowid+1+i; if( i==pTab->iPKey ){ /* The value of the INTEGER PRIMARY KEY column is always a NULL. - ** Whenever this column is read, the record number will be substituted - ** in its place. So will fill this column with a NULL to avoid - ** taking up data space with information that will never be used. */ - sqlite3VdbeAddOp2(v, OP_Null, 0, iRegStore); + ** Whenever this column is read, the rowid will be substituted + ** in its place. Hence, fill this column with a NULL to avoid + ** taking up data space with information that will never be used. 
+ ** As there may be shallow copies of this value, make it a soft-NULL */ + sqlite3VdbeAddOp1(v, OP_SoftNull, iRegStore); continue; } if( pColumn==0 ){ if( IsHiddenColumn(&pTab->aCol[i]) ){ - assert( IsVirtual(pTab) ); j = -1; nHidden++; }else{ j = i - nHidden; } @@ -76205,15 +104256,17 @@ for(j=0; j<pColumn->nId; j++){ if( pColumn->a[j].idx==i ) break; } } if( j<0 || nColumn==0 || (pColumn && j>=pColumn->nId) ){ - sqlite3ExprCode(pParse, pTab->aCol[i].pDflt, iRegStore); + sqlite3ExprCodeFactorable(pParse, pTab->aCol[i].pDflt, iRegStore); }else if( useTempTable ){ sqlite3VdbeAddOp3(v, OP_Column, srcTab, j, iRegStore); }else if( pSelect ){ - sqlite3VdbeAddOp2(v, OP_SCopy, regFromSelect+j, iRegStore); + if( regFromSelect!=regData ){ + sqlite3VdbeAddOp2(v, OP_SCopy, regFromSelect+j, iRegStore); + } }else{ sqlite3ExprCode(pParse, pList->a[j].pExpr, iRegStore); } } @@ -76223,22 +104276,22 @@ #ifndef SQLITE_OMIT_VIRTUALTABLE if( IsVirtual(pTab) ){ const char *pVTab = (const char *)sqlite3GetVTable(db, pTab); sqlite3VtabMakeWritable(pParse, pTab); sqlite3VdbeAddOp4(v, OP_VUpdate, 1, pTab->nCol+2, regIns, pVTab, P4_VTAB); + sqlite3VdbeChangeP5(v, onError==OE_Default ? OE_Abort : onError); sqlite3MayAbort(pParse); }else #endif { int isReplace; /* Set to true if constraints may cause a replace */ - sqlite3GenerateConstraintChecks(pParse, pTab, baseCur, regIns, aRegIdx, - keyColumn>=0, 0, onError, endOfLoop, &isReplace + sqlite3GenerateConstraintChecks(pParse, pTab, aRegIdx, iDataCur, iIdxCur, + regIns, 0, ipkColumn>=0, onError, endOfLoop, &isReplace, 0 ); - sqlite3FkCheck(pParse, pTab, 0, regIns); - sqlite3CompleteInsertion( - pParse, pTab, baseCur, regIns, aRegIdx, 0, appendFlag, isReplace==0 - ); + sqlite3FkCheck(pParse, pTab, 0, regIns, 0, 0); + sqlite3CompleteInsertion(pParse, pTab, iDataCur, iIdxCur, + regIns, aRegIdx, 0, appendFlag, isReplace==0); } } /* Update the count of rows that are inserted */ @@ -76255,23 +104308,23 @@ /* The bottom of the main insertion loop, if the data source ** is a SELECT statement. */ sqlite3VdbeResolveLabel(v, endOfLoop); if( useTempTable ){ - sqlite3VdbeAddOp2(v, OP_Next, srcTab, addrCont); + sqlite3VdbeAddOp2(v, OP_Next, srcTab, addrCont); VdbeCoverage(v); sqlite3VdbeJumpHere(v, addrInsTop); sqlite3VdbeAddOp1(v, OP_Close, srcTab); }else if( pSelect ){ - sqlite3VdbeAddOp2(v, OP_Goto, 0, addrCont); + sqlite3VdbeGoto(v, addrCont); sqlite3VdbeJumpHere(v, addrInsTop); } if( !IsVirtual(pTab) && !isView ){ /* Close all tables opened */ - sqlite3VdbeAddOp1(v, OP_Close, baseCur); - for(idx=1, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, idx++){ - sqlite3VdbeAddOp1(v, OP_Close, idx+baseCur); + if( iDataCur<iIdxCur ) sqlite3VdbeAddOp1(v, OP_Close, iDataCur); + for(idx=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, idx++){ + sqlite3VdbeAddOp1(v, OP_Close, idx+iIdxCur); } } insert_end: /* Update the sqlite_sequence table by storing the content of the @@ -76300,11 +104353,11 @@ sqlite3IdListDelete(db, pColumn); sqlite3DbFree(db, aRegIdx); } /* Make sure "isView" and other macros defined above are undefined. Otherwise -** thely may interfere with compilation of other functions in this file +** they may interfere with compilation of other functions in this file ** (or in another file, if this file becomes part of the amalgamation). 
*/ #ifdef isView #undef isView #endif #ifdef pTrigger @@ -76312,65 +104365,131 @@ #endif #ifdef tmask #undef tmask #endif +/* +** Meanings of bits in of pWalker->eCode for checkConstraintUnchanged() +*/ +#define CKCNSTRNT_COLUMN 0x01 /* CHECK constraint uses a changing column */ +#define CKCNSTRNT_ROWID 0x02 /* CHECK constraint references the ROWID */ + +/* This is the Walker callback from checkConstraintUnchanged(). Set +** bit 0x01 of pWalker->eCode if +** pWalker->eCode to 0 if this expression node references any of the +** columns that are being modifed by an UPDATE statement. +*/ +static int checkConstraintExprNode(Walker *pWalker, Expr *pExpr){ + if( pExpr->op==TK_COLUMN ){ + assert( pExpr->iColumn>=0 || pExpr->iColumn==-1 ); + if( pExpr->iColumn>=0 ){ + if( pWalker->u.aiCol[pExpr->iColumn]>=0 ){ + pWalker->eCode |= CKCNSTRNT_COLUMN; + } + }else{ + pWalker->eCode |= CKCNSTRNT_ROWID; + } + } + return WRC_Continue; +} /* -** Generate code to do constraint checks prior to an INSERT or an UPDATE. -** -** The input is a range of consecutive registers as follows: -** -** 1. The rowid of the row after the update. -** -** 2. The data in the first column of the entry after the update. -** -** i. Data from middle columns... -** -** N. The data in the last column of the entry after the update. -** -** The regRowid parameter is the index of the register containing (1). -** -** If isUpdate is true and rowidChng is non-zero, then rowidChng contains -** the address of a register containing the rowid before the update takes -** place. isUpdate is true for UPDATEs and false for INSERTs. If isUpdate -** is false, indicating an INSERT statement, then a non-zero rowidChng -** indicates that the rowid was explicitly specified as part of the -** INSERT statement. If rowidChng is false, it means that the rowid is -** computed automatically in an insert or that the rowid value is not -** modified by an update. -** -** The code generated by this routine store new index entries into +** pExpr is a CHECK constraint on a row that is being UPDATE-ed. The +** only columns that are modified by the UPDATE are those for which +** aiChng[i]>=0, and also the ROWID is modified if chngRowid is true. +** +** Return true if CHECK constraint pExpr does not use any of the +** changing columns (or the rowid if it is changing). In other words, +** return true if this CHECK constraint can be skipped when validating +** the new row in the UPDATE statement. +*/ +static int checkConstraintUnchanged(Expr *pExpr, int *aiChng, int chngRowid){ + Walker w; + memset(&w, 0, sizeof(w)); + w.eCode = 0; + w.xExprCallback = checkConstraintExprNode; + w.u.aiCol = aiChng; + sqlite3WalkExpr(&w, pExpr); + if( !chngRowid ){ + testcase( (w.eCode & CKCNSTRNT_ROWID)!=0 ); + w.eCode &= ~CKCNSTRNT_ROWID; + } + testcase( w.eCode==0 ); + testcase( w.eCode==CKCNSTRNT_COLUMN ); + testcase( w.eCode==CKCNSTRNT_ROWID ); + testcase( w.eCode==(CKCNSTRNT_ROWID|CKCNSTRNT_COLUMN) ); + return !w.eCode; +} + +/* +** Generate code to do constraint checks prior to an INSERT or an UPDATE +** on table pTab. +** +** The regNewData parameter is the first register in a range that contains +** the data to be inserted or the data after the update. There will be +** pTab->nCol+1 registers in this range. The first register (the one +** that regNewData points to) will contain the new rowid, or NULL in the +** case of a WITHOUT ROWID table. The second register in the range will +** contain the content of the first table column. 
The third register will +** contain the content of the second table column. And so forth. +** +** The regOldData parameter is similar to regNewData except that it contains +** the data prior to an UPDATE rather than afterwards. regOldData is zero +** for an INSERT. This routine can distinguish between UPDATE and INSERT by +** checking regOldData for zero. +** +** For an UPDATE, the pkChng boolean is true if the true primary key (the +** rowid for a normal table or the PRIMARY KEY for a WITHOUT ROWID table) +** might be modified by the UPDATE. If pkChng is false, then the key of +** the iDataCur content table is guaranteed to be unchanged by the UPDATE. +** +** For an INSERT, the pkChng boolean indicates whether or not the rowid +** was explicitly specified as part of the INSERT statement. If pkChng +** is zero, it means that the either rowid is computed automatically or +** that the table is a WITHOUT ROWID table and has no rowid. On an INSERT, +** pkChng will only be true if the INSERT statement provides an integer +** value for either the rowid column or its INTEGER PRIMARY KEY alias. +** +** The code generated by this routine will store new index entries into ** registers identified by aRegIdx[]. No index entry is created for ** indices where aRegIdx[i]==0. The order of indices in aRegIdx[] is ** the same as the order of indices on the linked list of indices -** attached to the table. +** at pTab->pIndex. +** +** The caller must have already opened writeable cursors on the main +** table and all applicable indices (that is to say, all indices for which +** aRegIdx[] is not zero). iDataCur is the cursor for the main table when +** inserting or updating a rowid table, or the cursor for the PRIMARY KEY +** index when operating on a WITHOUT ROWID table. iIdxCur is the cursor +** for the first index in the pTab->pIndex list. Cursors for other indices +** are at iIdxCur+N for the N-th element of the pTab->pIndex list. ** ** This routine also generates code to check constraints. NOT NULL, ** CHECK, and UNIQUE constraints are all checked. If a constraint fails, ** then the appropriate action is performed. There are five possible ** actions: ROLLBACK, ABORT, FAIL, REPLACE, and IGNORE. ** ** Constraint type Action What Happens ** --------------- ---------- ---------------------------------------- ** any ROLLBACK The current transaction is rolled back and -** sqlite3_exec() returns immediately with a +** sqlite3_step() returns immediately with a ** return code of SQLITE_CONSTRAINT. ** ** any ABORT Back out changes from the current command ** only (do not do a complete rollback) then -** cause sqlite3_exec() to return immediately +** cause sqlite3_step() to return immediately ** with SQLITE_CONSTRAINT. ** -** any FAIL Sqlite_exec() returns immediately with a +** any FAIL Sqlite3_step() returns immediately with a ** return code of SQLITE_CONSTRAINT. The ** transaction is not rolled back and any -** prior changes are retained. +** changes to prior rows are retained. ** -** any IGNORE The record number and data is popped from -** the stack and there is an immediate jump -** to label ignoreDest. +** any IGNORE The attempt in insert or update the current +** row is skipped, without throwing an error. +** Processing continues with the next row. +** (There is an immediate jump to ignoreDest.) ** ** NOT NULL REPLACE The NULL value is replace by the default ** value for that column. If the default value ** is NULL, the action is the same as ABORT. 
** @@ -76381,55 +104500,77 @@ ** ** Which action to take is determined by the overrideError parameter. ** Or if overrideError==OE_Default, then the pParse->onError parameter ** is used. Or if pParse->onError==OE_Default then the onError value ** for the constraint is used. -** -** The calling routine must open a read/write cursor for pTab with -** cursor number "baseCur". All indices of pTab must also have open -** read/write cursors with cursor number baseCur+i for the i-th cursor. -** Except, if there is no possibility of a REPLACE action then -** cursors do not need to be open for indices where aRegIdx[i]==0. */ SQLITE_PRIVATE void sqlite3GenerateConstraintChecks( - Parse *pParse, /* The parser context */ - Table *pTab, /* the table into which we are inserting */ - int baseCur, /* Index of a read/write cursor pointing at pTab */ - int regRowid, /* Index of the range of input registers */ - int *aRegIdx, /* Register used by each index. 0 for unused indices */ - int rowidChng, /* True if the rowid might collide with existing entry */ - int isUpdate, /* True for UPDATE, False for INSERT */ - int overrideError, /* Override onError to this if not OE_Default */ - int ignoreDest, /* Jump to this label on an OE_Ignore resolution */ - int *pbMayReplace /* OUT: Set to true if constraint may cause a replace */ + Parse *pParse, /* The parser context */ + Table *pTab, /* The table being inserted or updated */ + int *aRegIdx, /* Use register aRegIdx[i] for index i. 0 for unused */ + int iDataCur, /* Canonical data cursor (main table or PK index) */ + int iIdxCur, /* First index cursor */ + int regNewData, /* First register in a range holding values to insert */ + int regOldData, /* Previous content. 0 for INSERTs */ + u8 pkChng, /* Non-zero if the rowid or PRIMARY KEY changed */ + u8 overrideError, /* Override onError to this if not OE_Default */ + int ignoreDest, /* Jump to this label on an OE_Ignore resolution */ + int *pbMayReplace, /* OUT: Set to true if constraint may cause a replace */ + int *aiChng /* column i is unchanged if aiChng[i]<0 */ ){ - int i; /* loop counter */ - Vdbe *v; /* VDBE under constrution */ - int nCol; /* Number of columns */ - int onError; /* Conflict resolution strategy */ - int j1; /* Addresss of jump instruction */ - int j2 = 0, j3; /* Addresses of jump instructions */ - int regData; /* Register containing first data column */ - int iCur; /* Table cursor number */ + Vdbe *v; /* VDBE under constrution */ Index *pIdx; /* Pointer to one of the indices */ + Index *pPk = 0; /* The PRIMARY KEY index */ + sqlite3 *db; /* Database connection */ + int i; /* loop counter */ + int ix; /* Index loop counter */ + int nCol; /* Number of columns */ + int onError; /* Conflict resolution strategy */ + int addr1; /* Address of jump instruction */ int seenReplace = 0; /* True if REPLACE is used to resolve INT PK conflict */ - int regOldRowid = (rowidChng && isUpdate) ? rowidChng : regRowid; + int nPkField; /* Number of fields in PRIMARY KEY. 
1 for ROWID tables */ + int ipkTop = 0; /* Top of the rowid change constraint check */ + int ipkBottom = 0; /* Bottom of the rowid change constraint check */ + u8 isUpdate; /* True if this is an UPDATE operation */ + u8 bAffinityDone = 0; /* True if the OP_Affinity operation has been run */ + int regRowid = -1; /* Register holding ROWID value */ + isUpdate = regOldData!=0; + db = pParse->db; v = sqlite3GetVdbe(pParse); assert( v!=0 ); assert( pTab->pSelect==0 ); /* This table is not a VIEW */ nCol = pTab->nCol; - regData = regRowid + 1; + + /* pPk is the PRIMARY KEY index for WITHOUT ROWID tables and NULL for + ** normal rowid tables. nPkField is the number of key fields in the + ** pPk index or 1 for a rowid table. In other words, nPkField is the + ** number of fields in the true primary key of the table. */ + if( HasRowid(pTab) ){ + pPk = 0; + nPkField = 1; + }else{ + pPk = sqlite3PrimaryKeyIndex(pTab); + nPkField = pPk->nKeyCol; + } + + /* Record that this module has started */ + VdbeModuleComment((v, "BEGIN: GenCnstCks(%d,%d,%d,%d,%d)", + iDataCur, iIdxCur, regNewData, regOldData, pkChng)); /* Test all NOT NULL constraints. */ for(i=0; i<nCol; i++){ if( i==pTab->iPKey ){ + continue; /* ROWID is never NULL */ + } + if( aiChng && aiChng[i]<0 ){ + /* Don't bother checking for NOT NULL on columns that do not change */ continue; } onError = pTab->aCol[i].notNull; - if( onError==OE_None ) continue; + if( onError==OE_None ) continue; /* This column is allowed to be NULL */ if( overrideError!=OE_Default ){ onError = overrideError; }else if( onError==OE_Default ){ onError = OE_Abort; } @@ -76439,83 +104580,123 @@ assert( onError==OE_Rollback || onError==OE_Abort || onError==OE_Fail || onError==OE_Ignore || onError==OE_Replace ); switch( onError ){ case OE_Abort: sqlite3MayAbort(pParse); + /* Fall through */ case OE_Rollback: case OE_Fail: { - char *zMsg; - j1 = sqlite3VdbeAddOp3(v, OP_HaltIfNull, - SQLITE_CONSTRAINT, onError, regData+i); - zMsg = sqlite3MPrintf(pParse->db, "%s.%s may not be NULL", - pTab->zName, pTab->aCol[i].zName); - sqlite3VdbeChangeP4(v, -1, zMsg, P4_DYNAMIC); + char *zMsg = sqlite3MPrintf(db, "%s.%s", pTab->zName, + pTab->aCol[i].zName); + sqlite3VdbeAddOp4(v, OP_HaltIfNull, SQLITE_CONSTRAINT_NOTNULL, onError, + regNewData+1+i, zMsg, P4_DYNAMIC); + sqlite3VdbeChangeP5(v, P5_ConstraintNotNull); + VdbeCoverage(v); break; } case OE_Ignore: { - sqlite3VdbeAddOp2(v, OP_IsNull, regData+i, ignoreDest); + sqlite3VdbeAddOp2(v, OP_IsNull, regNewData+1+i, ignoreDest); + VdbeCoverage(v); break; } default: { assert( onError==OE_Replace ); - j1 = sqlite3VdbeAddOp1(v, OP_NotNull, regData+i); - sqlite3ExprCode(pParse, pTab->aCol[i].pDflt, regData+i); - sqlite3VdbeJumpHere(v, j1); + addr1 = sqlite3VdbeAddOp1(v, OP_NotNull, regNewData+1+i); + VdbeCoverage(v); + sqlite3ExprCode(pParse, pTab->aCol[i].pDflt, regNewData+1+i); + sqlite3VdbeJumpHere(v, addr1); break; } } } /* Test all CHECK constraints */ #ifndef SQLITE_OMIT_CHECK - if( pTab->pCheck && (pParse->db->flags & SQLITE_IgnoreChecks)==0 ){ - int allOk = sqlite3VdbeMakeLabel(v); - pParse->ckBase = regData; - sqlite3ExprIfTrue(pParse, pTab->pCheck, allOk, SQLITE_JUMPIFNULL); + if( pTab->pCheck && (db->flags & SQLITE_IgnoreChecks)==0 ){ + ExprList *pCheck = pTab->pCheck; + pParse->ckBase = regNewData+1; onError = overrideError!=OE_Default ? 
overrideError : OE_Abort; - if( onError==OE_Ignore ){ - sqlite3VdbeAddOp2(v, OP_Goto, 0, ignoreDest); - }else{ - sqlite3HaltConstraint(pParse, onError, 0, 0); + for(i=0; i<pCheck->nExpr; i++){ + int allOk; + Expr *pExpr = pCheck->a[i].pExpr; + if( aiChng && checkConstraintUnchanged(pExpr, aiChng, pkChng) ) continue; + allOk = sqlite3VdbeMakeLabel(v); + sqlite3ExprIfTrue(pParse, pExpr, allOk, SQLITE_JUMPIFNULL); + if( onError==OE_Ignore ){ + sqlite3VdbeGoto(v, ignoreDest); + }else{ + char *zName = pCheck->a[i].zName; + if( zName==0 ) zName = pTab->zName; + if( onError==OE_Replace ) onError = OE_Abort; /* IMP: R-15569-63625 */ + sqlite3HaltConstraint(pParse, SQLITE_CONSTRAINT_CHECK, + onError, zName, P4_TRANSIENT, + P5_ConstraintCheck); + } + sqlite3VdbeResolveLabel(v, allOk); } - sqlite3VdbeResolveLabel(v, allOk); } #endif /* !defined(SQLITE_OMIT_CHECK) */ - /* If we have an INTEGER PRIMARY KEY, make sure the primary key - ** of the new record does not previously exist. Except, if this - ** is an UPDATE and the primary key is not changing, that is OK. + /* If rowid is changing, make sure the new rowid does not previously + ** exist in the table. */ - if( rowidChng ){ + if( pkChng && pPk==0 ){ + int addrRowidOk = sqlite3VdbeMakeLabel(v); + + /* Figure out what action to take in case of a rowid collision */ onError = pTab->keyConf; if( overrideError!=OE_Default ){ onError = overrideError; }else if( onError==OE_Default ){ onError = OE_Abort; } - + if( isUpdate ){ - j2 = sqlite3VdbeAddOp3(v, OP_Eq, regRowid, 0, rowidChng); + /* pkChng!=0 does not mean that the rowid has change, only that + ** it might have changed. Skip the conflict logic below if the rowid + ** is unchanged. */ + sqlite3VdbeAddOp3(v, OP_Eq, regNewData, addrRowidOk, regOldData); + sqlite3VdbeChangeP5(v, SQLITE_NOTNULL); + VdbeCoverage(v); } - j3 = sqlite3VdbeAddOp3(v, OP_NotExists, baseCur, 0, regRowid); + + /* If the response to a rowid conflict is REPLACE but the response + ** to some other UNIQUE constraint is FAIL or IGNORE, then we need + ** to defer the running of the rowid conflict checking until after + ** the UNIQUE constraints have run. + */ + if( onError==OE_Replace && overrideError!=OE_Replace ){ + for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ + if( pIdx->onError==OE_Ignore || pIdx->onError==OE_Fail ){ + ipkTop = sqlite3VdbeAddOp0(v, OP_Goto); + break; + } + } + } + + /* Check to see if the new rowid already exists in the table. Skip + ** the following conflict logic if it does not. */ + sqlite3VdbeAddOp3(v, OP_NotExists, iDataCur, addrRowidOk, regNewData); + VdbeCoverage(v); + + /* Generate code that deals with a rowid collision */ switch( onError ){ default: { onError = OE_Abort; /* Fall thru into the next case */ } case OE_Rollback: case OE_Abort: case OE_Fail: { - sqlite3HaltConstraint( - pParse, onError, "PRIMARY KEY must be unique", P4_STATIC); + sqlite3RowidConstraint(pParse, onError, pTab); break; } case OE_Replace: { /* If there are DELETE triggers on this table and the ** recursive-triggers flag is set, call GenerateRowDelete() to - ** remove the conflicting row from the the table. This will fire + ** remove the conflicting row from the table. This will fire ** the triggers and remove both the table and index b-tree entries. ** ** Otherwise, if there are no triggers or the recursive-triggers ** flag is not set, but the table has one or more indexes, call ** GenerateRowIndexDelete(). 
This removes the index b-tree entries @@ -76532,184 +104713,273 @@ ** ** to run without a statement journal if there are no indexes on the ** table. */ Trigger *pTrigger = 0; - if( pParse->db->flags&SQLITE_RecTriggers ){ + if( db->flags&SQLITE_RecTriggers ){ pTrigger = sqlite3TriggersExist(pParse, pTab, TK_DELETE, 0, 0); } if( pTrigger || sqlite3FkRequired(pParse, pTab, 0, 0) ){ sqlite3MultiWrite(pParse); - sqlite3GenerateRowDelete( - pParse, pTab, baseCur, regRowid, 0, pTrigger, OE_Replace - ); - }else if( pTab->pIndex ){ - sqlite3MultiWrite(pParse); - sqlite3GenerateRowIndexDelete(pParse, pTab, baseCur, 0); + sqlite3GenerateRowDelete(pParse, pTab, pTrigger, iDataCur, iIdxCur, + regNewData, 1, 0, OE_Replace, + ONEPASS_SINGLE, -1); + }else{ + if( pTab->pIndex ){ + sqlite3MultiWrite(pParse); + sqlite3GenerateRowIndexDelete(pParse, pTab, iDataCur, iIdxCur,0,-1); + } } seenReplace = 1; break; } case OE_Ignore: { - assert( seenReplace==0 ); - sqlite3VdbeAddOp2(v, OP_Goto, 0, ignoreDest); + /*assert( seenReplace==0 );*/ + sqlite3VdbeGoto(v, ignoreDest); break; } } - sqlite3VdbeJumpHere(v, j3); - if( isUpdate ){ - sqlite3VdbeJumpHere(v, j2); + sqlite3VdbeResolveLabel(v, addrRowidOk); + if( ipkTop ){ + ipkBottom = sqlite3VdbeAddOp0(v, OP_Goto); + sqlite3VdbeJumpHere(v, ipkTop); } } /* Test all UNIQUE constraints by creating entries for each UNIQUE ** index and making sure that duplicate entries do not already exist. - ** Add the new records to the indices as we go. + ** Compute the revised record entries for indices as we go. + ** + ** This loop also handles the case of the PRIMARY KEY index for a + ** WITHOUT ROWID table. */ - for(iCur=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, iCur++){ - int regIdx; - int regR; - - if( aRegIdx[iCur]==0 ) continue; /* Skip unused indices */ - - /* Create a key for accessing the index entry */ - regIdx = sqlite3GetTempRange(pParse, pIdx->nColumn+1); + for(ix=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, ix++){ + int regIdx; /* Range of registers hold conent for pIdx */ + int regR; /* Range of registers holding conflicting PK */ + int iThisCur; /* Cursor for this UNIQUE index */ + int addrUniqueOk; /* Jump here if the UNIQUE constraint is satisfied */ + + if( aRegIdx[ix]==0 ) continue; /* Skip indices that do not change */ + if( bAffinityDone==0 ){ + sqlite3TableAffinity(v, pTab, regNewData+1); + bAffinityDone = 1; + } + iThisCur = iIdxCur+ix; + addrUniqueOk = sqlite3VdbeMakeLabel(v); + + /* Skip partial indices for which the WHERE clause is not true */ + if( pIdx->pPartIdxWhere ){ + sqlite3VdbeAddOp2(v, OP_Null, 0, aRegIdx[ix]); + pParse->ckBase = regNewData+1; + sqlite3ExprIfFalseDup(pParse, pIdx->pPartIdxWhere, addrUniqueOk, + SQLITE_JUMPIFNULL); + pParse->ckBase = 0; + } + + /* Create a record for this index entry as it should appear after + ** the insert or update. 
Store that record in the aRegIdx[ix] register + */ + regIdx = sqlite3GetTempRange(pParse, pIdx->nColumn); for(i=0; i<pIdx->nColumn; i++){ - int idx = pIdx->aiColumn[i]; - if( idx==pTab->iPKey ){ - sqlite3VdbeAddOp2(v, OP_SCopy, regRowid, regIdx+i); + int iField = pIdx->aiColumn[i]; + int x; + if( iField==XN_EXPR ){ + pParse->ckBase = regNewData+1; + sqlite3ExprCodeCopy(pParse, pIdx->aColExpr->a[i].pExpr, regIdx+i); + pParse->ckBase = 0; + VdbeComment((v, "%s column %d", pIdx->zName, i)); }else{ - sqlite3VdbeAddOp2(v, OP_SCopy, regData+idx, regIdx+i); + if( iField==XN_ROWID || iField==pTab->iPKey ){ + if( regRowid==regIdx+i ) continue; /* ROWID already in regIdx+i */ + x = regNewData; + regRowid = pIdx->pPartIdxWhere ? -1 : regIdx+i; + }else{ + x = iField + regNewData + 1; + } + sqlite3VdbeAddOp2(v, iField<0 ? OP_IntCopy : OP_SCopy, x, regIdx+i); + VdbeComment((v, "%s", iField<0 ? "rowid" : pTab->aCol[iField].zName)); } } - sqlite3VdbeAddOp2(v, OP_SCopy, regRowid, regIdx+i); - sqlite3VdbeAddOp3(v, OP_MakeRecord, regIdx, pIdx->nColumn+1, aRegIdx[iCur]); - sqlite3VdbeChangeP4(v, -1, sqlite3IndexAffinityStr(v, pIdx), 0); - sqlite3ExprCacheAffinityChange(pParse, regIdx, pIdx->nColumn+1); + sqlite3VdbeAddOp3(v, OP_MakeRecord, regIdx, pIdx->nColumn, aRegIdx[ix]); + VdbeComment((v, "for %s", pIdx->zName)); + sqlite3ExprCacheAffinityChange(pParse, regIdx, pIdx->nColumn); - /* Find out what action to take in case there is an indexing conflict */ + /* In an UPDATE operation, if this index is the PRIMARY KEY index + ** of a WITHOUT ROWID table and there has been no change the + ** primary key, then no collision is possible. The collision detection + ** logic below can all be skipped. */ + if( isUpdate && pPk==pIdx && pkChng==0 ){ + sqlite3VdbeResolveLabel(v, addrUniqueOk); + continue; + } + + /* Find out what action to take in case there is a uniqueness conflict */ onError = pIdx->onError; if( onError==OE_None ){ - sqlite3ReleaseTempRange(pParse, regIdx, pIdx->nColumn+1); + sqlite3ReleaseTempRange(pParse, regIdx, pIdx->nColumn); + sqlite3VdbeResolveLabel(v, addrUniqueOk); continue; /* pIdx is not a UNIQUE index */ } if( overrideError!=OE_Default ){ onError = overrideError; }else if( onError==OE_Default ){ onError = OE_Abort; } - if( seenReplace ){ - if( onError==OE_Ignore ) onError = OE_Replace; - else if( onError==OE_Fail ) onError = OE_Abort; - } /* Check to see if the new index entry will be unique */ - regR = sqlite3GetTempReg(pParse); - sqlite3VdbeAddOp2(v, OP_SCopy, regOldRowid, regR); - j3 = sqlite3VdbeAddOp4(v, OP_IsUnique, baseCur+iCur+1, 0, - regR, SQLITE_INT_TO_PTR(regIdx), - P4_INT32); - sqlite3ReleaseTempRange(pParse, regIdx, pIdx->nColumn+1); + sqlite3VdbeAddOp4Int(v, OP_NoConflict, iThisCur, addrUniqueOk, + regIdx, pIdx->nKeyCol); VdbeCoverage(v); + + /* Generate code to handle collisions */ + regR = (pIdx==pPk) ? 
regIdx : sqlite3GetTempRange(pParse, nPkField); + if( isUpdate || onError==OE_Replace ){ + if( HasRowid(pTab) ){ + sqlite3VdbeAddOp2(v, OP_IdxRowid, iThisCur, regR); + /* Conflict only if the rowid of the existing index entry + ** is different from old-rowid */ + if( isUpdate ){ + sqlite3VdbeAddOp3(v, OP_Eq, regR, addrUniqueOk, regOldData); + sqlite3VdbeChangeP5(v, SQLITE_NOTNULL); + VdbeCoverage(v); + } + }else{ + int x; + /* Extract the PRIMARY KEY from the end of the index entry and + ** store it in registers regR..regR+nPk-1 */ + if( pIdx!=pPk ){ + for(i=0; i<pPk->nKeyCol; i++){ + assert( pPk->aiColumn[i]>=0 ); + x = sqlite3ColumnOfIndex(pIdx, pPk->aiColumn[i]); + sqlite3VdbeAddOp3(v, OP_Column, iThisCur, x, regR+i); + VdbeComment((v, "%s.%s", pTab->zName, + pTab->aCol[pPk->aiColumn[i]].zName)); + } + } + if( isUpdate ){ + /* If currently processing the PRIMARY KEY of a WITHOUT ROWID + ** table, only conflict if the new PRIMARY KEY values are actually + ** different from the old. + ** + ** For a UNIQUE index, only conflict if the PRIMARY KEY values + ** of the matched index row are different from the original PRIMARY + ** KEY values of this row before the update. */ + int addrJump = sqlite3VdbeCurrentAddr(v)+pPk->nKeyCol; + int op = OP_Ne; + int regCmp = (IsPrimaryKeyIndex(pIdx) ? regIdx : regR); + + for(i=0; i<pPk->nKeyCol; i++){ + char *p4 = (char*)sqlite3LocateCollSeq(pParse, pPk->azColl[i]); + x = pPk->aiColumn[i]; + assert( x>=0 ); + if( i==(pPk->nKeyCol-1) ){ + addrJump = addrUniqueOk; + op = OP_Eq; + } + sqlite3VdbeAddOp4(v, op, + regOldData+1+x, addrJump, regCmp+i, p4, P4_COLLSEQ + ); + sqlite3VdbeChangeP5(v, SQLITE_NOTNULL); + VdbeCoverageIf(v, op==OP_Eq); + VdbeCoverageIf(v, op==OP_Ne); + } + } + } + } /* Generate code that executes if the new index entry is not unique */ assert( onError==OE_Rollback || onError==OE_Abort || onError==OE_Fail || onError==OE_Ignore || onError==OE_Replace ); switch( onError ){ case OE_Rollback: case OE_Abort: case OE_Fail: { - int j; - StrAccum errMsg; - const char *zSep; - char *zErr; - - sqlite3StrAccumInit(&errMsg, 0, 0, 200); - errMsg.db = pParse->db; - zSep = pIdx->nColumn>1 ? "columns " : "column "; - for(j=0; j<pIdx->nColumn; j++){ - char *zCol = pTab->aCol[pIdx->aiColumn[j]].zName; - sqlite3StrAccumAppend(&errMsg, zSep, -1); - zSep = ", "; - sqlite3StrAccumAppend(&errMsg, zCol, -1); - } - sqlite3StrAccumAppend(&errMsg, - pIdx->nColumn>1 ? " are not unique" : " is not unique", -1); - zErr = sqlite3StrAccumFinish(&errMsg); - sqlite3HaltConstraint(pParse, onError, zErr, 0); - sqlite3DbFree(errMsg.db, zErr); + sqlite3UniqueConstraint(pParse, onError, pIdx); break; } case OE_Ignore: { - assert( seenReplace==0 ); - sqlite3VdbeAddOp2(v, OP_Goto, 0, ignoreDest); + sqlite3VdbeGoto(v, ignoreDest); break; } default: { Trigger *pTrigger = 0; assert( onError==OE_Replace ); sqlite3MultiWrite(pParse); - if( pParse->db->flags&SQLITE_RecTriggers ){ + if( db->flags&SQLITE_RecTriggers ){ pTrigger = sqlite3TriggersExist(pParse, pTab, TK_DELETE, 0, 0); } - sqlite3GenerateRowDelete( - pParse, pTab, baseCur, regR, 0, pTrigger, OE_Replace - ); + sqlite3GenerateRowDelete(pParse, pTab, pTrigger, iDataCur, iIdxCur, + regR, nPkField, 0, OE_Replace, + (pIdx==pPk ? 
ONEPASS_SINGLE : ONEPASS_OFF), -1); seenReplace = 1; break; } } - sqlite3VdbeJumpHere(v, j3); - sqlite3ReleaseTempReg(pParse, regR); + sqlite3VdbeResolveLabel(v, addrUniqueOk); + sqlite3ReleaseTempRange(pParse, regIdx, pIdx->nColumn); + if( regR!=regIdx ) sqlite3ReleaseTempRange(pParse, regR, nPkField); + } + if( ipkTop ){ + sqlite3VdbeGoto(v, ipkTop+1); + sqlite3VdbeJumpHere(v, ipkBottom); } - if( pbMayReplace ){ - *pbMayReplace = seenReplace; - } + *pbMayReplace = seenReplace; + VdbeModuleComment((v, "END: GenCnstCks(%d)", seenReplace)); } /* ** This routine generates code to finish the INSERT or UPDATE operation ** that was started by a prior call to sqlite3GenerateConstraintChecks. -** A consecutive range of registers starting at regRowid contains the +** A consecutive range of registers starting at regNewData contains the ** rowid and the content to be inserted. ** ** The arguments to this routine should be the same as the first six ** arguments to sqlite3GenerateConstraintChecks. */ SQLITE_PRIVATE void sqlite3CompleteInsertion( Parse *pParse, /* The parser context */ Table *pTab, /* the table into which we are inserting */ - int baseCur, /* Index of a read/write cursor pointing at pTab */ - int regRowid, /* Range of content */ + int iDataCur, /* Cursor of the canonical data source */ + int iIdxCur, /* First index cursor */ + int regNewData, /* Range of content */ int *aRegIdx, /* Register used by each index. 0 for unused indices */ int isUpdate, /* True for UPDATE, False for INSERT */ int appendBias, /* True if this is likely to be an append */ int useSeekResult /* True to set the USESEEKRESULT flag on OP_[Idx]Insert */ ){ - int i; - Vdbe *v; - int nIdx; - Index *pIdx; - u8 pik_flags; - int regData; - int regRec; + Vdbe *v; /* Prepared statements under construction */ + Index *pIdx; /* An index being inserted or updated */ + u8 pik_flags; /* flag values passed to the btree insert */ + int regData; /* Content registers (after the rowid) */ + int regRec; /* Register holding assembled record for the table */ + int i; /* Loop counter */ + u8 bAffinityDone = 0; /* True if OP_Affinity has been run already */ v = sqlite3GetVdbe(pParse); assert( v!=0 ); assert( pTab->pSelect==0 ); /* This table is not a VIEW */ - for(nIdx=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, nIdx++){} - for(i=nIdx-1; i>=0; i--){ + for(i=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, i++){ if( aRegIdx[i]==0 ) continue; - sqlite3VdbeAddOp2(v, OP_IdxInsert, baseCur+i+1, aRegIdx[i]); - if( useSeekResult ){ - sqlite3VdbeChangeP5(v, OPFLAG_USESEEKRESULT); + bAffinityDone = 1; + if( pIdx->pPartIdxWhere ){ + sqlite3VdbeAddOp2(v, OP_IsNull, aRegIdx[i], sqlite3VdbeCurrentAddr(v)+2); + VdbeCoverage(v); } + sqlite3VdbeAddOp2(v, OP_IdxInsert, iIdxCur+i, aRegIdx[i]); + pik_flags = 0; + if( useSeekResult ) pik_flags = OPFLAG_USESEEKRESULT; + if( IsPrimaryKeyIndex(pIdx) && !HasRowid(pTab) ){ + assert( pParse->nested==0 ); + pik_flags |= OPFLAG_NCHANGE; + } + sqlite3VdbeChangeP5(v, pik_flags); } - regData = regRowid + 1; + if( !HasRowid(pTab) ) return; + regData = regNewData + 1; regRec = sqlite3GetTempReg(pParse); sqlite3VdbeAddOp3(v, OP_MakeRecord, regData, pTab->nCol, regRec); - sqlite3TableAffinityStr(v, pTab); + if( !bAffinityDone ) sqlite3TableAffinity(v, pTab, 0); sqlite3ExprCacheAffinityChange(pParse, regData, pTab->nCol); if( pParse->nested ){ pik_flags = 0; }else{ pik_flags = OPFLAG_NCHANGE; @@ -76719,110 +104989,146 @@ pik_flags |= OPFLAG_APPEND; } if( useSeekResult ){ pik_flags |= OPFLAG_USESEEKRESULT; } - 
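
[Editorial aside, not part of the diff] The constraint-resolution actions enumerated in the sqlite3GenerateConstraintChecks() comment above (ROLLBACK, ABORT, FAIL, IGNORE, REPLACE) are visible through the public C API. The sketch below is illustrative only: the table "t" and its columns are made up, and it uses only documented entry points (sqlite3_open, sqlite3_exec, sqlite3_changes, sqlite3_close).

    /* Minimal sketch: how the default ABORT action and the OR IGNORE /
    ** OR REPLACE overrides behave at the API level.  The table "t" is
    ** hypothetical; error handling is omitted for brevity. */
    #include <stdio.h>
    #include "sqlite3.h"

    int main(void){
      sqlite3 *db;
      int rc;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db, "CREATE TABLE t(a INTEGER PRIMARY KEY, b TEXT UNIQUE);",
                   0, 0, 0);
      sqlite3_exec(db, "INSERT INTO t(a,b) VALUES(1,'x');", 0, 0, 0);

      /* Default conflict action is ABORT: the statement fails with
      ** SQLITE_CONSTRAINT and changes made by prior statements survive. */
      rc = sqlite3_exec(db, "INSERT INTO t(a,b) VALUES(2,'x');", 0, 0, 0);
      printf("plain INSERT:      rc=%d (19==SQLITE_CONSTRAINT)\n", rc);

      /* IGNORE: the conflicting row is silently skipped. */
      rc = sqlite3_exec(db, "INSERT OR IGNORE INTO t(a,b) VALUES(2,'x');",
                        0, 0, 0);
      printf("INSERT OR IGNORE:  rc=%d, changes=%d\n", rc, sqlite3_changes(db));

      /* REPLACE: the pre-existing row that caused the conflict is removed
      ** and the new row is inserted in its place. */
      rc = sqlite3_exec(db, "INSERT OR REPLACE INTO t(a,b) VALUES(2,'x');",
                        0, 0, 0);
      printf("INSERT OR REPLACE: rc=%d, changes=%d\n", rc, sqlite3_changes(db));

      sqlite3_close(db);
      return 0;
    }

The OR REPLACE case is what reaches the OE_Replace branches above: as the comments in the diff note, the conflicting row is deleted (firing DELETE triggers only when the recursive-triggers flag is set) before the new row is written.
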
sqlite3VdbeAddOp3(v, OP_Insert, baseCur, regRec, regRowid); + sqlite3VdbeAddOp3(v, OP_Insert, iDataCur, regRec, regNewData); if( !pParse->nested ){ - sqlite3VdbeChangeP4(v, -1, pTab->zName, P4_STATIC); + sqlite3VdbeChangeP4(v, -1, pTab->zName, P4_TRANSIENT); } sqlite3VdbeChangeP5(v, pik_flags); } /* -** Generate code that will open cursors for a table and for all -** indices of that table. The "baseCur" parameter is the cursor number used -** for the table. Indices are opened on subsequent cursors. +** Allocate cursors for the pTab table and all its indices and generate +** code to open and initialized those cursors. ** -** Return the number of indices on the table. +** The cursor for the object that contains the complete data (normally +** the table itself, but the PRIMARY KEY index in the case of a WITHOUT +** ROWID table) is returned in *piDataCur. The first index cursor is +** returned in *piIdxCur. The number of indices is returned. +** +** Use iBase as the first cursor (either the *piDataCur for rowid tables +** or the first index for WITHOUT ROWID tables) if it is non-negative. +** If iBase is negative, then allocate the next available cursor. +** +** For a rowid table, *piDataCur will be exactly one less than *piIdxCur. +** For a WITHOUT ROWID table, *piDataCur will be somewhere in the range +** of *piIdxCurs, depending on where the PRIMARY KEY index appears on the +** pTab->pIndex list. +** +** If pTab is a virtual table, then this routine is a no-op and the +** *piDataCur and *piIdxCur values are left uninitialized. */ SQLITE_PRIVATE int sqlite3OpenTableAndIndices( Parse *pParse, /* Parsing context */ Table *pTab, /* Table to be opened */ - int baseCur, /* Cursor number assigned to the table */ - int op /* OP_OpenRead or OP_OpenWrite */ + int op, /* OP_OpenRead or OP_OpenWrite */ + u8 p5, /* P5 value for OP_Open* opcodes (except on WITHOUT ROWID) */ + int iBase, /* Use this for the table cursor, if there is one */ + u8 *aToOpen, /* If not NULL: boolean for each table and index */ + int *piDataCur, /* Write the database source cursor number here */ + int *piIdxCur /* Write the first index cursor number here */ ){ int i; int iDb; + int iDataCur; Index *pIdx; Vdbe *v; - if( IsVirtual(pTab) ) return 0; + assert( op==OP_OpenRead || op==OP_OpenWrite ); + assert( op==OP_OpenWrite || p5==0 ); + if( IsVirtual(pTab) ){ + /* This routine is a no-op for virtual tables. Leave the output + ** variables *piDataCur and *piIdxCur uninitialized so that valgrind + ** can detect if they are used by mistake in the caller. 
*/ + return 0; + } iDb = sqlite3SchemaToIndex(pParse->db, pTab->pSchema); v = sqlite3GetVdbe(pParse); assert( v!=0 ); - sqlite3OpenTable(pParse, baseCur, iDb, pTab, op); - for(i=1, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, i++){ - KeyInfo *pKey = sqlite3IndexKeyinfo(pParse, pIdx); + if( iBase<0 ) iBase = pParse->nTab; + iDataCur = iBase++; + if( piDataCur ) *piDataCur = iDataCur; + if( HasRowid(pTab) && (aToOpen==0 || aToOpen[0]) ){ + sqlite3OpenTable(pParse, iDataCur, iDb, pTab, op); + }else{ + sqlite3TableLock(pParse, iDb, pTab->tnum, op==OP_OpenWrite, pTab->zName); + } + if( piIdxCur ) *piIdxCur = iBase; + for(i=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, i++){ + int iIdxCur = iBase++; assert( pIdx->pSchema==pTab->pSchema ); - sqlite3VdbeAddOp4(v, op, i+baseCur, pIdx->tnum, iDb, - (char*)pKey, P4_KEYINFO_HANDOFF); - VdbeComment((v, "%s", pIdx->zName)); - } - if( pParse->nTab<baseCur+i ){ - pParse->nTab = baseCur+i; - } - return i-1; + if( aToOpen==0 || aToOpen[i+1] ){ + sqlite3VdbeAddOp3(v, op, iIdxCur, pIdx->tnum, iDb); + sqlite3VdbeSetP4KeyInfo(pParse, pIdx); + VdbeComment((v, "%s", pIdx->zName)); + } + if( IsPrimaryKeyIndex(pIdx) && !HasRowid(pTab) ){ + if( piDataCur ) *piDataCur = iIdxCur; + }else{ + sqlite3VdbeChangeP5(v, p5); + } + } + if( iBase>pParse->nTab ) pParse->nTab = iBase; + return i; } #ifdef SQLITE_TEST /* ** The following global variable is incremented whenever the ** transfer optimization is used. This is used for testing ** purposes only - to make sure the transfer optimization really -** is happening when it is suppose to. +** is happening when it is supposed to. */ SQLITE_API int sqlite3_xferopt_count; #endif /* SQLITE_TEST */ #ifndef SQLITE_OMIT_XFER_OPT -/* -** Check to collation names to see if they are compatible. -*/ -static int xferCompatibleCollation(const char *z1, const char *z2){ - if( z1==0 ){ - return z2==0; - } - if( z2==0 ){ - return 0; - } - return sqlite3StrICmp(z1, z2)==0; -} - - /* ** Check to see if index pSrc is compatible as a source of data ** for index pDest in an insert transfer optimization. 
The rules ** for a compatible index: ** ** * The index is over the same set of columns ** * The same DESC and ASC markings occurs on all columns ** * The same onError processing (OE_Abort, OE_Ignore, etc) ** * The same collating sequence on each column +** * The index has the exact same WHERE clause */ static int xferCompatibleIndex(Index *pDest, Index *pSrc){ int i; assert( pDest && pSrc ); assert( pDest->pTable!=pSrc->pTable ); - if( pDest->nColumn!=pSrc->nColumn ){ + if( pDest->nKeyCol!=pSrc->nKeyCol ){ return 0; /* Different number of columns */ } if( pDest->onError!=pSrc->onError ){ return 0; /* Different conflict resolution strategies */ } - for(i=0; i<pSrc->nColumn; i++){ + for(i=0; i<pSrc->nKeyCol; i++){ if( pSrc->aiColumn[i]!=pDest->aiColumn[i] ){ return 0; /* Different columns indexed */ + } + if( pSrc->aiColumn[i]==XN_EXPR ){ + assert( pSrc->aColExpr!=0 && pDest->aColExpr!=0 ); + if( sqlite3ExprCompare(pSrc->aColExpr->a[i].pExpr, + pDest->aColExpr->a[i].pExpr, -1)!=0 ){ + return 0; /* Different expressions in the index */ + } } if( pSrc->aSortOrder[i]!=pDest->aSortOrder[i] ){ return 0; /* Different sort orders */ } - if( !xferCompatibleCollation(pSrc->azColl[i],pDest->azColl[i]) ){ + if( sqlite3_stricmp(pSrc->azColl[i],pDest->azColl[i])!=0 ){ return 0; /* Different collating sequences */ } + } + if( sqlite3ExprCompare(pSrc->pPartIdxWhere, pDest->pPartIdxWhere, -1) ){ + return 0; /* Different WHERE clauses */ } /* If no test above fails then the indices must be compatible */ return 1; } @@ -76830,61 +105136,61 @@ /* ** Attempt the transfer optimization on INSERTs of the form ** ** INSERT INTO tab1 SELECT * FROM tab2; ** -** This optimization is only attempted if -** -** (1) tab1 and tab2 have identical schemas including all the -** same indices and constraints -** -** (2) tab1 and tab2 are different tables -** -** (3) There must be no triggers on tab1 -** -** (4) The result set of the SELECT statement is "*" -** -** (5) The SELECT statement has no WHERE, HAVING, ORDER BY, GROUP BY, -** or LIMIT clause. -** -** (6) The SELECT statement is a simple (not a compound) select that -** contains only tab2 in its FROM clause -** -** This method for implementing the INSERT transfers raw records from -** tab2 over to tab1. The columns are not decoded. Raw records from -** the indices of tab2 are transfered to tab1 as well. In so doing, -** the resulting tab1 has much less fragmentation. -** -** This routine returns TRUE if the optimization is attempted. If any -** of the conditions above fail so that the optimization should not -** be attempted, then this routine returns FALSE. +** The xfer optimization transfers raw records from tab2 over to tab1. +** Columns are not decoded and reassembled, which greatly improves +** performance. Raw index records are transferred in the same way. +** +** The xfer optimization is only attempted if tab1 and tab2 are compatible. +** There are lots of rules for determining compatibility - see comments +** embedded in the code for details. +** +** This routine returns TRUE if the optimization is guaranteed to be used. +** Sometimes the xfer optimization will only work if the destination table +** is empty - a factor that can only be determined at run-time. In that +** case, this routine generates code for the xfer optimization but also +** does a test to see if the destination table is empty and jumps over the +** xfer optimization code if the test fails. 
In that case, this routine +** returns FALSE so that the caller will know to go ahead and generate +** an unoptimized transfer. This routine also returns FALSE if there +** is no chance that the xfer optimization can be applied. +** +** This optimization is particularly useful at making VACUUM run faster. */ static int xferOptimization( Parse *pParse, /* Parser context */ Table *pDest, /* The table we are inserting into */ Select *pSelect, /* A SELECT statement to use as the data source */ int onError, /* How to handle constraint errors */ int iDbDest /* The database of pDest */ ){ + sqlite3 *db = pParse->db; ExprList *pEList; /* The result set of the SELECT */ Table *pSrc; /* The table in the FROM clause of SELECT */ Index *pSrcIdx, *pDestIdx; /* Source and destination indices */ struct SrcList_item *pItem; /* An element of pSelect->pSrc */ int i; /* Loop counter */ int iDbSrc; /* The database of pSrc */ int iSrc, iDest; /* Cursors from source and destination */ int addr1, addr2; /* Loop addresses */ - int emptyDestTest; /* Address of test for empty pDest */ - int emptySrcTest; /* Address of test for empty pSrc */ + int emptyDestTest = 0; /* Address of test for empty pDest */ + int emptySrcTest = 0; /* Address of test for empty pSrc */ Vdbe *v; /* The VDBE we are building */ - KeyInfo *pKey; /* Key information for an index */ int regAutoinc; /* Memory register used by AUTOINC */ int destHasUniqueIdx = 0; /* True if pDest has a UNIQUE index */ int regData, regRowid; /* Registers holding data and rowid */ if( pSelect==0 ){ return 0; /* Must be of the form INSERT INTO ... SELECT ... */ + } + if( pParse->pWith || pSelect->pWith ){ + /* Do not attempt to process this query if there are an WITH clauses + ** attached to it. Proceeding may generate a false "no such table: xxx" + ** error if pSelect reads from a CTE named "xxx". */ + return 0; } if( sqlite3TriggerList(pParse, pDest) ){ return 0; /* tab1 must not have triggers */ } #ifndef SQLITE_OMIT_VIRTUALTABLE @@ -76891,14 +105197,12 @@ if( pDest->tabFlags & TF_Virtual ){ return 0; /* tab1 must not be a virtual table */ } #endif if( onError==OE_Default ){ - onError = OE_Abort; - } - if( onError!=OE_Abort && onError!=OE_Rollback ){ - return 0; /* Cannot do OR REPLACE or OR IGNORE or OR FAIL */ + if( pDest->iPKey>=0 ) onError = pDest->keyConf; + if( onError==OE_Default ) onError = OE_Abort; } assert(pSelect->pSrc); /* allocated even if there is no FROM clause */ if( pSelect->pSrc->nSrc!=1 ){ return 0; /* FROM clause must have exactly one term */ } @@ -76930,26 +105234,29 @@ assert( pEList!=0 ); if( pEList->nExpr!=1 ){ return 0; /* The result set must have exactly one column */ } assert( pEList->a[0].pExpr ); - if( pEList->a[0].pExpr->op!=TK_ALL ){ + if( pEList->a[0].pExpr->op!=TK_ASTERISK ){ return 0; /* The result set must be the special operator "*" */ } /* At this point we have established that the statement is of the ** correct syntactic form to participate in this optimization. Now ** we have to check the semantics. 
*/ pItem = pSelect->pSrc->a; - pSrc = sqlite3LocateTable(pParse, 0, pItem->zName, pItem->zDatabase); + pSrc = sqlite3LocateTableItem(pParse, 0, pItem); if( pSrc==0 ){ return 0; /* FROM clause does not contain a real table */ } if( pSrc==pDest ){ return 0; /* tab1 and tab2 may not be the same table */ } + if( HasRowid(pDest)!=HasRowid(pSrc) ){ + return 0; /* source and destination must both be WITHOUT ROWID or not */ + } #ifndef SQLITE_OMIT_VIRTUALTABLE if( pSrc->tabFlags & TF_Virtual ){ return 0; /* tab2 must not be a virtual table */ } #endif @@ -76961,22 +105268,38 @@ } if( pDest->iPKey!=pSrc->iPKey ){ return 0; /* Both tables must have the same INTEGER PRIMARY KEY */ } for(i=0; i<pDest->nCol; i++){ - if( pDest->aCol[i].affinity!=pSrc->aCol[i].affinity ){ + Column *pDestCol = &pDest->aCol[i]; + Column *pSrcCol = &pSrc->aCol[i]; +#ifdef SQLITE_ENABLE_HIDDEN_COLUMNS + if( (db->flags & SQLITE_Vacuum)==0 + && (pDestCol->colFlags | pSrcCol->colFlags) & COLFLAG_HIDDEN + ){ + return 0; /* Neither table may have __hidden__ columns */ + } +#endif + if( pDestCol->affinity!=pSrcCol->affinity ){ return 0; /* Affinity must be the same on all columns */ } - if( !xferCompatibleCollation(pDest->aCol[i].zColl, pSrc->aCol[i].zColl) ){ + if( sqlite3_stricmp(pDestCol->zColl, pSrcCol->zColl)!=0 ){ return 0; /* Collating sequence must be the same on all columns */ } - if( pDest->aCol[i].notNull && !pSrc->aCol[i].notNull ){ + if( pDestCol->notNull && !pSrcCol->notNull ){ return 0; /* tab2 must be NOT NULL if tab1 is */ + } + /* Default values for second and subsequent columns need to match. */ + if( i>0 + && ((pDestCol->zDflt==0)!=(pSrcCol->zDflt==0) + || (pDestCol->zDflt && strcmp(pDestCol->zDflt, pSrcCol->zDflt)!=0)) + ){ + return 0; /* Default values must be the same for all columns */ } } for(pDestIdx=pDest->pIndex; pDestIdx; pDestIdx=pDestIdx->pNext){ - if( pDestIdx->onError!=OE_None ){ + if( IsUniqueIndex(pDestIdx) ){ destHasUniqueIdx = 1; } for(pSrcIdx=pSrc->pIndex; pSrcIdx; pSrcIdx=pSrcIdx->pNext){ if( xferCompatibleIndex(pDestIdx, pSrcIdx) ) break; } @@ -76983,98 +105306,154 @@ if( pSrcIdx==0 ){ return 0; /* pDestIdx has no corresponding index in pSrc */ } } #ifndef SQLITE_OMIT_CHECK - if( pDest->pCheck && sqlite3ExprCompare(pSrc->pCheck, pDest->pCheck) ){ + if( pDest->pCheck && sqlite3ExprListCompare(pSrc->pCheck,pDest->pCheck,-1) ){ return 0; /* Tables have different CHECK constraints. Ticket #2252 */ } #endif - - /* If we get this far, it means either: - ** - ** * We can always do the transfer if the table contains an - ** an integer primary key - ** - ** * We can conditionally do the transfer if the destination - ** table is empty. +#ifndef SQLITE_OMIT_FOREIGN_KEY + /* Disallow the transfer optimization if the destination table constains + ** any foreign key constraints. This is more restrictive than necessary. + ** But the main beneficiary of the transfer optimization is the VACUUM + ** command, and the VACUUM command disables foreign key constraints. So + ** the extra complication to make this rule less restrictive is probably + ** not worth the effort. Ticket [6284df89debdfa61db8073e062908af0c9b6118e] + */ + if( (db->flags & SQLITE_ForeignKeys)!=0 && pDest->pFKey!=0 ){ + return 0; + } +#endif + if( (db->flags & SQLITE_CountRows)!=0 ){ + return 0; /* xfer opt does not play well with PRAGMA count_changes */ + } + + /* If we get this far, it means that the xfer optimization is at + ** least a possibility, though it might only work if the destination + ** table (tab1) is initially empty. 
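** For illustration only (a sketch, not part of this changeset), schema
** pairs that the column-compatibility tests above reject, using a
** hypothetical source table src and candidate destinations:
**
**     CREATE TABLE src(x TEXT);
**     CREATE TABLE dst(x TEXT NOT NULL);          -- rejected: dst is NOT NULL, src is not
**     CREATE TABLE dst2(x TEXT COLLATE NOCASE);   -- rejected: collating sequences differ
**     CREATE TABLE dst3(x INTEGER);               -- rejected: affinities differ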
*/ #ifdef SQLITE_TEST sqlite3_xferopt_count++; #endif - iDbSrc = sqlite3SchemaToIndex(pParse->db, pSrc->pSchema); + iDbSrc = sqlite3SchemaToIndex(db, pSrc->pSchema); v = sqlite3GetVdbe(pParse); sqlite3CodeVerifySchema(pParse, iDbSrc); iSrc = pParse->nTab++; iDest = pParse->nTab++; regAutoinc = autoIncBegin(pParse, iDbDest, pDest); - sqlite3OpenTable(pParse, iDest, iDbDest, pDest, OP_OpenWrite); - if( (pDest->iPKey<0 && pDest->pIndex!=0) || destHasUniqueIdx ){ - /* If tables do not have an INTEGER PRIMARY KEY and there - ** are indices to be copied and the destination is not empty, - ** we have to disallow the transfer optimization because the - ** the rowids might change which will mess up indexing. - ** - ** Or if the destination has a UNIQUE index and is not empty, - ** we also disallow the transfer optimization because we cannot - ** insure that all entries in the union of DEST and SRC will be - ** unique. - */ - addr1 = sqlite3VdbeAddOp2(v, OP_Rewind, iDest, 0); - emptyDestTest = sqlite3VdbeAddOp2(v, OP_Goto, 0, 0); - sqlite3VdbeJumpHere(v, addr1); - }else{ - emptyDestTest = 0; - } - sqlite3OpenTable(pParse, iSrc, iDbSrc, pSrc, OP_OpenRead); - emptySrcTest = sqlite3VdbeAddOp2(v, OP_Rewind, iSrc, 0); regData = sqlite3GetTempReg(pParse); regRowid = sqlite3GetTempReg(pParse); - if( pDest->iPKey>=0 ){ - addr1 = sqlite3VdbeAddOp2(v, OP_Rowid, iSrc, regRowid); - addr2 = sqlite3VdbeAddOp3(v, OP_NotExists, iDest, 0, regRowid); - sqlite3HaltConstraint( - pParse, onError, "PRIMARY KEY must be unique", P4_STATIC); - sqlite3VdbeJumpHere(v, addr2); - autoIncStep(pParse, regAutoinc, regRowid); - }else if( pDest->pIndex==0 ){ - addr1 = sqlite3VdbeAddOp2(v, OP_NewRowid, iDest, regRowid); + sqlite3OpenTable(pParse, iDest, iDbDest, pDest, OP_OpenWrite); + assert( HasRowid(pDest) || destHasUniqueIdx ); + if( (db->flags & SQLITE_Vacuum)==0 && ( + (pDest->iPKey<0 && pDest->pIndex!=0) /* (1) */ + || destHasUniqueIdx /* (2) */ + || (onError!=OE_Abort && onError!=OE_Rollback) /* (3) */ + )){ + /* In some circumstances, we are able to run the xfer optimization + ** only if the destination table is initially empty. Unless the + ** SQLITE_Vacuum flag is set, this block generates code to make + ** that determination. If SQLITE_Vacuum is set, then the destination + ** table is always empty. + ** + ** Conditions under which the destination must be empty: + ** + ** (1) There is no INTEGER PRIMARY KEY but there are indices. + ** (If the destination is not initially empty, the rowid fields + ** of index entries might need to change.) + ** + ** (2) The destination has a unique index. (The xfer optimization + ** is unable to test uniqueness.) + ** + ** (3) onError is something other than OE_Abort and OE_Rollback. 
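** For illustration only (not part of this changeset), condition (2)
** arises for a hypothetical destination such as:
**
**     CREATE TABLE t1(a, b UNIQUE);
**
** The UNIQUE constraint gives t1 an implied unique index, so outside of
** VACUUM the transfer may proceed only when t1 is found to be empty at
** run-time, which is what the OP_Rewind/OP_Goto test emitted below checks.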
+ */ + addr1 = sqlite3VdbeAddOp2(v, OP_Rewind, iDest, 0); VdbeCoverage(v); + emptyDestTest = sqlite3VdbeAddOp0(v, OP_Goto); + sqlite3VdbeJumpHere(v, addr1); + } + if( HasRowid(pSrc) ){ + sqlite3OpenTable(pParse, iSrc, iDbSrc, pSrc, OP_OpenRead); + emptySrcTest = sqlite3VdbeAddOp2(v, OP_Rewind, iSrc, 0); VdbeCoverage(v); + if( pDest->iPKey>=0 ){ + addr1 = sqlite3VdbeAddOp2(v, OP_Rowid, iSrc, regRowid); + addr2 = sqlite3VdbeAddOp3(v, OP_NotExists, iDest, 0, regRowid); + VdbeCoverage(v); + sqlite3RowidConstraint(pParse, onError, pDest); + sqlite3VdbeJumpHere(v, addr2); + autoIncStep(pParse, regAutoinc, regRowid); + }else if( pDest->pIndex==0 ){ + addr1 = sqlite3VdbeAddOp2(v, OP_NewRowid, iDest, regRowid); + }else{ + addr1 = sqlite3VdbeAddOp2(v, OP_Rowid, iSrc, regRowid); + assert( (pDest->tabFlags & TF_Autoincrement)==0 ); + } + sqlite3VdbeAddOp2(v, OP_RowData, iSrc, regData); + sqlite3VdbeAddOp4(v, OP_Insert, iDest, regData, regRowid, + pDest->zName, 0); + sqlite3VdbeChangeP5(v, OPFLAG_NCHANGE|OPFLAG_LASTROWID|OPFLAG_APPEND); + sqlite3VdbeAddOp2(v, OP_Next, iSrc, addr1); VdbeCoverage(v); + sqlite3VdbeAddOp2(v, OP_Close, iSrc, 0); + sqlite3VdbeAddOp2(v, OP_Close, iDest, 0); }else{ - addr1 = sqlite3VdbeAddOp2(v, OP_Rowid, iSrc, regRowid); - assert( (pDest->tabFlags & TF_Autoincrement)==0 ); - } - sqlite3VdbeAddOp2(v, OP_RowData, iSrc, regData); - sqlite3VdbeAddOp3(v, OP_Insert, iDest, regData, regRowid); - sqlite3VdbeChangeP5(v, OPFLAG_NCHANGE|OPFLAG_LASTROWID|OPFLAG_APPEND); - sqlite3VdbeChangeP4(v, -1, pDest->zName, 0); - sqlite3VdbeAddOp2(v, OP_Next, iSrc, addr1); + sqlite3TableLock(pParse, iDbDest, pDest->tnum, 1, pDest->zName); + sqlite3TableLock(pParse, iDbSrc, pSrc->tnum, 0, pSrc->zName); + } for(pDestIdx=pDest->pIndex; pDestIdx; pDestIdx=pDestIdx->pNext){ + u8 idxInsFlags = 0; for(pSrcIdx=pSrc->pIndex; ALWAYS(pSrcIdx); pSrcIdx=pSrcIdx->pNext){ if( xferCompatibleIndex(pDestIdx, pSrcIdx) ) break; } assert( pSrcIdx ); + sqlite3VdbeAddOp3(v, OP_OpenRead, iSrc, pSrcIdx->tnum, iDbSrc); + sqlite3VdbeSetP4KeyInfo(pParse, pSrcIdx); + VdbeComment((v, "%s", pSrcIdx->zName)); + sqlite3VdbeAddOp3(v, OP_OpenWrite, iDest, pDestIdx->tnum, iDbDest); + sqlite3VdbeSetP4KeyInfo(pParse, pDestIdx); + sqlite3VdbeChangeP5(v, OPFLAG_BULKCSR); + VdbeComment((v, "%s", pDestIdx->zName)); + addr1 = sqlite3VdbeAddOp2(v, OP_Rewind, iSrc, 0); VdbeCoverage(v); + sqlite3VdbeAddOp2(v, OP_RowKey, iSrc, regData); + if( db->flags & SQLITE_Vacuum ){ + /* This INSERT command is part of a VACUUM operation, which guarantees + ** that the destination table is empty. If all indexed columns use + ** collation sequence BINARY, then it can also be assumed that the + ** index will be populated by inserting keys in strictly sorted + ** order. In this case, instead of seeking within the b-tree as part + ** of every OP_IdxInsert opcode, an OP_Last is added before the + ** OP_IdxInsert to seek to the point within the b-tree where each key + ** should be inserted. This is faster. + ** + ** If any of the indexed columns use a collation sequence other than + ** BINARY, this optimization is disabled. This is because the user + ** might change the definition of a collation sequence and then run + ** a VACUUM command. In that case keys may not be written in strictly + ** sorted order. 
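** For illustration only (not part of this changeset), with a hypothetical
** table t1:
**
**     CREATE INDEX i1 ON t1(a);                  -- implicit BINARY: seek-ahead allowed
**     CREATE INDEX i2 ON t1(b COLLATE NOCASE);   -- non-BINARY: ordinary OP_IdxInsert seeks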
*/ + for(i=0; i<pSrcIdx->nColumn; i++){ + const char *zColl = pSrcIdx->azColl[i]; + assert( sqlite3_stricmp(sqlite3StrBINARY, zColl)!=0 + || sqlite3StrBINARY==zColl ); + if( sqlite3_stricmp(sqlite3StrBINARY, zColl) ) break; + } + if( i==pSrcIdx->nColumn ){ + idxInsFlags = OPFLAG_USESEEKRESULT; + sqlite3VdbeAddOp3(v, OP_Last, iDest, 0, -1); + } + } + if( !HasRowid(pSrc) && pDestIdx->idxType==2 ){ + idxInsFlags |= OPFLAG_NCHANGE; + } + sqlite3VdbeAddOp3(v, OP_IdxInsert, iDest, regData, 1); + sqlite3VdbeChangeP5(v, idxInsFlags); + sqlite3VdbeAddOp2(v, OP_Next, iSrc, addr1+1); VdbeCoverage(v); + sqlite3VdbeJumpHere(v, addr1); sqlite3VdbeAddOp2(v, OP_Close, iSrc, 0); sqlite3VdbeAddOp2(v, OP_Close, iDest, 0); - pKey = sqlite3IndexKeyinfo(pParse, pSrcIdx); - sqlite3VdbeAddOp4(v, OP_OpenRead, iSrc, pSrcIdx->tnum, iDbSrc, - (char*)pKey, P4_KEYINFO_HANDOFF); - VdbeComment((v, "%s", pSrcIdx->zName)); - pKey = sqlite3IndexKeyinfo(pParse, pDestIdx); - sqlite3VdbeAddOp4(v, OP_OpenWrite, iDest, pDestIdx->tnum, iDbDest, - (char*)pKey, P4_KEYINFO_HANDOFF); - VdbeComment((v, "%s", pDestIdx->zName)); - addr1 = sqlite3VdbeAddOp2(v, OP_Rewind, iSrc, 0); - sqlite3VdbeAddOp2(v, OP_RowKey, iSrc, regData); - sqlite3VdbeAddOp3(v, OP_IdxInsert, iDest, regData, 1); - sqlite3VdbeAddOp2(v, OP_Next, iSrc, addr1+1); - sqlite3VdbeJumpHere(v, addr1); - } - sqlite3VdbeJumpHere(v, emptySrcTest); + } + if( emptySrcTest ) sqlite3VdbeJumpHere(v, emptySrcTest); sqlite3ReleaseTempReg(pParse, regRowid); sqlite3ReleaseTempReg(pParse, regData); - sqlite3VdbeAddOp2(v, OP_Close, iSrc, 0); - sqlite3VdbeAddOp2(v, OP_Close, iDest, 0); if( emptyDestTest ){ sqlite3VdbeAddOp2(v, OP_Halt, SQLITE_OK, 0); sqlite3VdbeJumpHere(v, emptyDestTest); sqlite3VdbeAddOp2(v, OP_Close, iDest, 0); return 0; @@ -77101,10 +105480,11 @@ ** implement the programmer interface to the library. Routines in ** other files are for internal use by SQLite and should not be ** accessed by users of the library. */ +/* #include "sqliteInt.h" */ /* ** Execute SQL code. Return one of the SQLITE_ success/failure ** codes. Also write an error message into memory obtained from ** malloc() and make *pzErrMsg point to that message. @@ -77112,11 +105492,11 @@ ** If the SQL is a query, then for each row in the query result ** the xCallback() function is called. pArg becomes the first ** argument to xCallback(). If xCallback=NULL then no callback ** is invoked, even for queries. 
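** A minimal usage sketch (illustrative only; it assumes <stdio.h>, an open
** database handle "db", and a hypothetical table t1):
**
**     static int show_row(void *pNotUsed, int nCol, char **azVal, char **azCol){
**       int i;
**       (void)pNotUsed;
**       for(i=0; i<nCol; i++){
**         printf("%s = %s\n", azCol[i], azVal[i] ? azVal[i] : "NULL");
**       }
**       return 0;
**     }
**
**     char *zErrMsg = 0;
**     if( sqlite3_exec(db, "SELECT * FROM t1", show_row, 0, &zErrMsg)!=SQLITE_OK ){
**       fprintf(stderr, "SQL error: %s\n", zErrMsg);
**       sqlite3_free(zErrMsg);
**     }
**
** Returning non-zero from the callback causes sqlite3_exec() to stop and
** return SQLITE_ABORT.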
*/ -SQLITE_API int sqlite3_exec( +SQLITE_API int SQLITE_STDCALL sqlite3_exec( sqlite3 *db, /* The database on which the SQL executes */ const char *zSql, /* The SQL to be executed */ sqlite3_callback xCallback, /* Invoke this callback routine */ void *pArg, /* First argument to xCallback() */ char **pzErrMsg /* Write error messages here */ @@ -77123,24 +105503,23 @@ ){ int rc = SQLITE_OK; /* Return code */ const char *zLeftover; /* Tail of unprocessed SQL */ sqlite3_stmt *pStmt = 0; /* The current SQL statement */ char **azCols = 0; /* Names of result columns */ - int nRetry = 0; /* Number of retry attempts */ int callbackIsInit; /* True if callback data is initialized */ if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT; if( zSql==0 ) zSql = ""; sqlite3_mutex_enter(db->mutex); - sqlite3Error(db, SQLITE_OK, 0); - while( (rc==SQLITE_OK || (rc==SQLITE_SCHEMA && (++nRetry)<2)) && zSql[0] ){ + sqlite3Error(db, SQLITE_OK); + while( rc==SQLITE_OK && zSql[0] ){ int nCol; char **azVals = 0; pStmt = 0; - rc = sqlite3_prepare(db, zSql, -1, &pStmt, &zLeftover); + rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, &zLeftover); assert( rc==SQLITE_OK || pStmt==0 ); if( rc!=SQLITE_OK ){ continue; } if( !pStmt ){ @@ -77176,32 +105555,32 @@ if( rc==SQLITE_ROW ){ azVals = &azCols[nCol]; for(i=0; i<nCol; i++){ azVals[i] = (char *)sqlite3_column_text(pStmt, i); if( !azVals[i] && sqlite3_column_type(pStmt, i)!=SQLITE_NULL ){ - db->mallocFailed = 1; + sqlite3OomFault(db); goto exec_out; } } } if( xCallback(pArg, nCol, azVals, azCols) ){ + /* EVIDENCE-OF: R-38229-40159 If the callback function to + ** sqlite3_exec() returns non-zero, then sqlite3_exec() will + ** return SQLITE_ABORT. */ rc = SQLITE_ABORT; sqlite3VdbeFinalize((Vdbe *)pStmt); pStmt = 0; - sqlite3Error(db, SQLITE_ABORT, 0); + sqlite3Error(db, SQLITE_ABORT); goto exec_out; } } if( rc!=SQLITE_ROW ){ rc = sqlite3VdbeFinalize((Vdbe *)pStmt); pStmt = 0; - if( rc!=SQLITE_SCHEMA ){ - nRetry = 0; - zSql = zLeftover; - while( sqlite3Isspace(zSql[0]) ) zSql++; - } + zSql = zLeftover; + while( sqlite3Isspace(zSql[0]) ) zSql++; break; } } sqlite3DbFree(db, azCols); @@ -77211,18 +105590,18 @@ exec_out: if( pStmt ) sqlite3VdbeFinalize((Vdbe *)pStmt); sqlite3DbFree(db, azCols); rc = sqlite3ApiExit(db, rc); - if( rc!=SQLITE_OK && ALWAYS(rc==sqlite3_errcode(db)) && pzErrMsg ){ + if( rc!=SQLITE_OK && pzErrMsg ){ int nErrMsg = 1 + sqlite3Strlen30(sqlite3_errmsg(db)); *pzErrMsg = sqlite3Malloc(nErrMsg); if( *pzErrMsg ){ memcpy(*pzErrMsg, sqlite3_errmsg(db), nErrMsg); }else{ rc = SQLITE_NOMEM; - sqlite3Error(db, SQLITE_NOMEM, 0); + sqlite3Error(db, SQLITE_NOMEM); } }else if( pzErrMsg ){ *pzErrMsg = 0; } @@ -77270,10 +105649,11 @@ ** as extensions by SQLite should #include this file instead of ** sqlite3.h. */ #ifndef _SQLITE3EXT_H_ #define _SQLITE3EXT_H_ +/* #include "sqlite3.h" */ typedef struct sqlite3_api_routines sqlite3_api_routines; /* ** The following structure holds pointers to all of the SQLite API @@ -77280,11 +105660,11 @@ ** routines. ** ** WARNING: In order to maintain backwards compatibility, add new ** interfaces to the end of this structure only. If you insert new ** interfaces in the middle of this structure, then older different -** versions of SQLite will not be able to load each others' shared +** versions of SQLite will not be able to load each other's shared ** libraries! 
*/ struct sqlite3_api_routines { void * (*aggregate_context)(sqlite3_context*,int nBytes); int (*aggregate_count)(sqlite3_context*); @@ -77301,12 +105681,14 @@ int (*bind_value)(sqlite3_stmt*,int,const sqlite3_value*); int (*busy_handler)(sqlite3*,int(*)(void*,int),void*); int (*busy_timeout)(sqlite3*,int ms); int (*changes)(sqlite3*); int (*close)(sqlite3*); - int (*collation_needed)(sqlite3*,void*,void(*)(void*,sqlite3*,int eTextRep,const char*)); - int (*collation_needed16)(sqlite3*,void*,void(*)(void*,sqlite3*,int eTextRep,const void*)); + int (*collation_needed)(sqlite3*,void*,void(*)(void*,sqlite3*, + int eTextRep,const char*)); + int (*collation_needed16)(sqlite3*,void*,void(*)(void*,sqlite3*, + int eTextRep,const void*)); const void * (*column_blob)(sqlite3_stmt*,int iCol); int (*column_bytes)(sqlite3_stmt*,int iCol); int (*column_bytes16)(sqlite3_stmt*,int iCol); int (*column_count)(sqlite3_stmt*pStmt); const char * (*column_database_name)(sqlite3_stmt*,int); @@ -77327,14 +105709,22 @@ int (*column_type)(sqlite3_stmt*,int iCol); sqlite3_value* (*column_value)(sqlite3_stmt*,int iCol); void * (*commit_hook)(sqlite3*,int(*)(void*),void*); int (*complete)(const char*sql); int (*complete16)(const void*sql); - int (*create_collation)(sqlite3*,const char*,int,void*,int(*)(void*,int,const void*,int,const void*)); - int (*create_collation16)(sqlite3*,const void*,int,void*,int(*)(void*,int,const void*,int,const void*)); - int (*create_function)(sqlite3*,const char*,int,int,void*,void (*xFunc)(sqlite3_context*,int,sqlite3_value**),void (*xStep)(sqlite3_context*,int,sqlite3_value**),void (*xFinal)(sqlite3_context*)); - int (*create_function16)(sqlite3*,const void*,int,int,void*,void (*xFunc)(sqlite3_context*,int,sqlite3_value**),void (*xStep)(sqlite3_context*,int,sqlite3_value**),void (*xFinal)(sqlite3_context*)); + int (*create_collation)(sqlite3*,const char*,int,void*, + int(*)(void*,int,const void*,int,const void*)); + int (*create_collation16)(sqlite3*,const void*,int,void*, + int(*)(void*,int,const void*,int,const void*)); + int (*create_function)(sqlite3*,const char*,int,int,void*, + void (*xFunc)(sqlite3_context*,int,sqlite3_value**), + void (*xStep)(sqlite3_context*,int,sqlite3_value**), + void (*xFinal)(sqlite3_context*)); + int (*create_function16)(sqlite3*,const void*,int,int,void*, + void (*xFunc)(sqlite3_context*,int,sqlite3_value**), + void (*xStep)(sqlite3_context*,int,sqlite3_value**), + void (*xFinal)(sqlite3_context*)); int (*create_module)(sqlite3*,const char*,const sqlite3_module*,void*); int (*data_count)(sqlite3_stmt*pStmt); sqlite3 * (*db_handle)(sqlite3_stmt*); int (*declare_vtab)(sqlite3*,const char*); int (*enable_shared_cache)(int); @@ -77375,20 +105765,23 @@ void (*result_text16)(sqlite3_context*,const void*,int,void(*)(void*)); void (*result_text16be)(sqlite3_context*,const void*,int,void(*)(void*)); void (*result_text16le)(sqlite3_context*,const void*,int,void(*)(void*)); void (*result_value)(sqlite3_context*,sqlite3_value*); void * (*rollback_hook)(sqlite3*,void(*)(void*),void*); - int (*set_authorizer)(sqlite3*,int(*)(void*,int,const char*,const char*,const char*,const char*),void*); + int (*set_authorizer)(sqlite3*,int(*)(void*,int,const char*,const char*, + const char*,const char*),void*); void (*set_auxdata)(sqlite3_context*,int,void*,void (*)(void*)); char * (*snprintf)(int,char*,const char*,...); int (*step)(sqlite3_stmt*); - int (*table_column_metadata)(sqlite3*,const char*,const char*,const char*,char const**,char const**,int*,int*,int*); + int 
(*table_column_metadata)(sqlite3*,const char*,const char*,const char*, + char const**,char const**,int*,int*,int*); void (*thread_cleanup)(void); int (*total_changes)(sqlite3*); void * (*trace)(sqlite3*,void(*xTrace)(void*,const char*),void*); int (*transfer_bindings)(sqlite3_stmt*,sqlite3_stmt*); - void * (*update_hook)(sqlite3*,void(*)(void*,int ,char const*,char const*,sqlite_int64),void*); + void * (*update_hook)(sqlite3*,void(*)(void*,int ,char const*,char const*, + sqlite_int64),void*); void * (*user_data)(sqlite3_context*); const void * (*value_blob)(sqlite3_value*); int (*value_bytes)(sqlite3_value*); int (*value_bytes16)(sqlite3_value*); double (*value_double)(sqlite3_value*); @@ -77406,19 +105799,23 @@ /* Added by 3.3.13 */ int (*prepare_v2)(sqlite3*,const char*,int,sqlite3_stmt**,const char**); int (*prepare16_v2)(sqlite3*,const void*,int,sqlite3_stmt**,const void**); int (*clear_bindings)(sqlite3_stmt*); /* Added by 3.4.1 */ - int (*create_module_v2)(sqlite3*,const char*,const sqlite3_module*,void*,void (*xDestroy)(void *)); + int (*create_module_v2)(sqlite3*,const char*,const sqlite3_module*,void*, + void (*xDestroy)(void *)); /* Added by 3.5.0 */ int (*bind_zeroblob)(sqlite3_stmt*,int,int); int (*blob_bytes)(sqlite3_blob*); int (*blob_close)(sqlite3_blob*); - int (*blob_open)(sqlite3*,const char*,const char*,const char*,sqlite3_int64,int,sqlite3_blob**); + int (*blob_open)(sqlite3*,const char*,const char*,const char*,sqlite3_int64, + int,sqlite3_blob**); int (*blob_read)(sqlite3_blob*,void*,int,int); int (*blob_write)(sqlite3_blob*,const void*,int,int); - int (*create_collation_v2)(sqlite3*,const char*,int,void*,int(*)(void*,int,const void*,int,const void*),void(*)(void*)); + int (*create_collation_v2)(sqlite3*,const char*,int,void*, + int(*)(void*,int,const void*,int,const void*), + void(*)(void*)); int (*file_control)(sqlite3*,const char*,int,void*); sqlite3_int64 (*memory_highwater)(int); sqlite3_int64 (*memory_used)(void); sqlite3_mutex *(*mutex_alloc)(int); void (*mutex_enter)(sqlite3_mutex*); @@ -77443,24 +105840,95 @@ int (*extended_result_codes)(sqlite3*,int); int (*limit)(sqlite3*,int,int); sqlite3_stmt *(*next_stmt)(sqlite3*,sqlite3_stmt*); const char *(*sql)(sqlite3_stmt*); int (*status)(int,int*,int*,int); + int (*backup_finish)(sqlite3_backup*); + sqlite3_backup *(*backup_init)(sqlite3*,const char*,sqlite3*,const char*); + int (*backup_pagecount)(sqlite3_backup*); + int (*backup_remaining)(sqlite3_backup*); + int (*backup_step)(sqlite3_backup*,int); + const char *(*compileoption_get)(int); + int (*compileoption_used)(const char*); + int (*create_function_v2)(sqlite3*,const char*,int,int,void*, + void (*xFunc)(sqlite3_context*,int,sqlite3_value**), + void (*xStep)(sqlite3_context*,int,sqlite3_value**), + void (*xFinal)(sqlite3_context*), + void(*xDestroy)(void*)); + int (*db_config)(sqlite3*,int,...); + sqlite3_mutex *(*db_mutex)(sqlite3*); + int (*db_status)(sqlite3*,int,int*,int*,int); + int (*extended_errcode)(sqlite3*); + void (*log)(int,const char*,...); + sqlite3_int64 (*soft_heap_limit64)(sqlite3_int64); + const char *(*sourceid)(void); + int (*stmt_status)(sqlite3_stmt*,int,int); + int (*strnicmp)(const char*,const char*,int); + int (*unlock_notify)(sqlite3*,void(*)(void**,int),void*); + int (*wal_autocheckpoint)(sqlite3*,int); + int (*wal_checkpoint)(sqlite3*,const char*); + void *(*wal_hook)(sqlite3*,int(*)(void*,sqlite3*,const char*,int),void*); + int (*blob_reopen)(sqlite3_blob*,sqlite3_int64); + int (*vtab_config)(sqlite3*,int op,...); + int 
(*vtab_on_conflict)(sqlite3*); + /* Version 3.7.16 and later */ + int (*close_v2)(sqlite3*); + const char *(*db_filename)(sqlite3*,const char*); + int (*db_readonly)(sqlite3*,const char*); + int (*db_release_memory)(sqlite3*); + const char *(*errstr)(int); + int (*stmt_busy)(sqlite3_stmt*); + int (*stmt_readonly)(sqlite3_stmt*); + int (*stricmp)(const char*,const char*); + int (*uri_boolean)(const char*,const char*,int); + sqlite3_int64 (*uri_int64)(const char*,const char*,sqlite3_int64); + const char *(*uri_parameter)(const char*,const char*); + char *(*vsnprintf)(int,char*,const char*,va_list); + int (*wal_checkpoint_v2)(sqlite3*,const char*,int,int*,int*); + /* Version 3.8.7 and later */ + int (*auto_extension)(void(*)(void)); + int (*bind_blob64)(sqlite3_stmt*,int,const void*,sqlite3_uint64, + void(*)(void*)); + int (*bind_text64)(sqlite3_stmt*,int,const char*,sqlite3_uint64, + void(*)(void*),unsigned char); + int (*cancel_auto_extension)(void(*)(void)); + int (*load_extension)(sqlite3*,const char*,const char*,char**); + void *(*malloc64)(sqlite3_uint64); + sqlite3_uint64 (*msize)(void*); + void *(*realloc64)(void*,sqlite3_uint64); + void (*reset_auto_extension)(void); + void (*result_blob64)(sqlite3_context*,const void*,sqlite3_uint64, + void(*)(void*)); + void (*result_text64)(sqlite3_context*,const char*,sqlite3_uint64, + void(*)(void*), unsigned char); + int (*strglob)(const char*,const char*); + /* Version 3.8.11 and later */ + sqlite3_value *(*value_dup)(const sqlite3_value*); + void (*value_free)(sqlite3_value*); + int (*result_zeroblob64)(sqlite3_context*,sqlite3_uint64); + int (*bind_zeroblob64)(sqlite3_stmt*, int, sqlite3_uint64); + /* Version 3.9.0 and later */ + unsigned int (*value_subtype)(sqlite3_value*); + void (*result_subtype)(sqlite3_context*,unsigned int); + /* Version 3.10.0 and later */ + int (*status64)(int,sqlite3_int64*,sqlite3_int64*,int); + int (*strlike)(const char*,const char*,unsigned int); + int (*db_cacheflush)(sqlite3*); }; /* ** The following macros redefine the API routines so that they are -** redirected throught the global sqlite3_api structure. +** redirected through the global sqlite3_api structure. ** ** This header file is also used by the loadext.c source file ** (part of the main SQLite library - not an extension) so that ** it can get access to the sqlite3_api_routines structure ** definition. But the main library does not want to redefine ** the API. So the redefinition macros are only valid if the ** SQLITE_CORE macros is undefined. 
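** For illustration only (not part of this changeset), a minimal loadable
** extension that relies on this redirection, using the default legacy
** entry point name:
**
**     #include "sqlite3ext.h"
**     SQLITE_EXTENSION_INIT1
**
**     int sqlite3_extension_init(
**       sqlite3 *db,
**       char **pzErrMsg,
**       const sqlite3_api_routines *pApi
**     ){
**       SQLITE_EXTENSION_INIT2(pApi);
**       (void)db;  (void)pzErrMsg;
**       return SQLITE_OK;
**     }
**
** Because SQLITE_CORE is not defined when building such an extension, any
** sqlite3_xxx() call it makes is routed to sqlite3_api->xxx() through the
** macros that follow.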
*/ -#ifndef SQLITE_CORE +#if !defined(SQLITE_CORE) && !defined(SQLITE_OMIT_LOAD_EXTENSION) #define sqlite3_aggregate_context sqlite3_api->aggregate_context #ifndef SQLITE_OMIT_DEPRECATED #define sqlite3_aggregate_count sqlite3_api->aggregate_count #endif #define sqlite3_bind_blob sqlite3_api->bind_blob @@ -77583,10 +106051,11 @@ #define sqlite3_value_text16 sqlite3_api->value_text16 #define sqlite3_value_text16be sqlite3_api->value_text16be #define sqlite3_value_text16le sqlite3_api->value_text16le #define sqlite3_value_type sqlite3_api->value_type #define sqlite3_vmprintf sqlite3_api->vmprintf +#define sqlite3_vsnprintf sqlite3_api->vsnprintf #define sqlite3_overload_function sqlite3_api->overload_function #define sqlite3_prepare_v2 sqlite3_api->prepare_v2 #define sqlite3_prepare16_v2 sqlite3_api->prepare16_v2 #define sqlite3_clear_bindings sqlite3_api->clear_bindings #define sqlite3_bind_zeroblob sqlite3_api->bind_zeroblob @@ -77622,19 +106091,96 @@ #define sqlite3_extended_result_codes sqlite3_api->extended_result_codes #define sqlite3_limit sqlite3_api->limit #define sqlite3_next_stmt sqlite3_api->next_stmt #define sqlite3_sql sqlite3_api->sql #define sqlite3_status sqlite3_api->status -#endif /* SQLITE_CORE */ +#define sqlite3_backup_finish sqlite3_api->backup_finish +#define sqlite3_backup_init sqlite3_api->backup_init +#define sqlite3_backup_pagecount sqlite3_api->backup_pagecount +#define sqlite3_backup_remaining sqlite3_api->backup_remaining +#define sqlite3_backup_step sqlite3_api->backup_step +#define sqlite3_compileoption_get sqlite3_api->compileoption_get +#define sqlite3_compileoption_used sqlite3_api->compileoption_used +#define sqlite3_create_function_v2 sqlite3_api->create_function_v2 +#define sqlite3_db_config sqlite3_api->db_config +#define sqlite3_db_mutex sqlite3_api->db_mutex +#define sqlite3_db_status sqlite3_api->db_status +#define sqlite3_extended_errcode sqlite3_api->extended_errcode +#define sqlite3_log sqlite3_api->log +#define sqlite3_soft_heap_limit64 sqlite3_api->soft_heap_limit64 +#define sqlite3_sourceid sqlite3_api->sourceid +#define sqlite3_stmt_status sqlite3_api->stmt_status +#define sqlite3_strnicmp sqlite3_api->strnicmp +#define sqlite3_unlock_notify sqlite3_api->unlock_notify +#define sqlite3_wal_autocheckpoint sqlite3_api->wal_autocheckpoint +#define sqlite3_wal_checkpoint sqlite3_api->wal_checkpoint +#define sqlite3_wal_hook sqlite3_api->wal_hook +#define sqlite3_blob_reopen sqlite3_api->blob_reopen +#define sqlite3_vtab_config sqlite3_api->vtab_config +#define sqlite3_vtab_on_conflict sqlite3_api->vtab_on_conflict +/* Version 3.7.16 and later */ +#define sqlite3_close_v2 sqlite3_api->close_v2 +#define sqlite3_db_filename sqlite3_api->db_filename +#define sqlite3_db_readonly sqlite3_api->db_readonly +#define sqlite3_db_release_memory sqlite3_api->db_release_memory +#define sqlite3_errstr sqlite3_api->errstr +#define sqlite3_stmt_busy sqlite3_api->stmt_busy +#define sqlite3_stmt_readonly sqlite3_api->stmt_readonly +#define sqlite3_stricmp sqlite3_api->stricmp +#define sqlite3_uri_boolean sqlite3_api->uri_boolean +#define sqlite3_uri_int64 sqlite3_api->uri_int64 +#define sqlite3_uri_parameter sqlite3_api->uri_parameter +#define sqlite3_uri_vsnprintf sqlite3_api->vsnprintf +#define sqlite3_wal_checkpoint_v2 sqlite3_api->wal_checkpoint_v2 +/* Version 3.8.7 and later */ +#define sqlite3_auto_extension sqlite3_api->auto_extension +#define sqlite3_bind_blob64 sqlite3_api->bind_blob64 +#define sqlite3_bind_text64 sqlite3_api->bind_text64 +#define 
sqlite3_cancel_auto_extension sqlite3_api->cancel_auto_extension +#define sqlite3_load_extension sqlite3_api->load_extension +#define sqlite3_malloc64 sqlite3_api->malloc64 +#define sqlite3_msize sqlite3_api->msize +#define sqlite3_realloc64 sqlite3_api->realloc64 +#define sqlite3_reset_auto_extension sqlite3_api->reset_auto_extension +#define sqlite3_result_blob64 sqlite3_api->result_blob64 +#define sqlite3_result_text64 sqlite3_api->result_text64 +#define sqlite3_strglob sqlite3_api->strglob +/* Version 3.8.11 and later */ +#define sqlite3_value_dup sqlite3_api->value_dup +#define sqlite3_value_free sqlite3_api->value_free +#define sqlite3_result_zeroblob64 sqlite3_api->result_zeroblob64 +#define sqlite3_bind_zeroblob64 sqlite3_api->bind_zeroblob64 +/* Version 3.9.0 and later */ +#define sqlite3_value_subtype sqlite3_api->value_subtype +#define sqlite3_result_subtype sqlite3_api->result_subtype +/* Version 3.10.0 and later */ +#define sqlite3_status64 sqlite3_api->status64 +#define sqlite3_strlike sqlite3_api->strlike +#define sqlite3_db_cacheflush sqlite3_api->db_cacheflush +#endif /* !defined(SQLITE_CORE) && !defined(SQLITE_OMIT_LOAD_EXTENSION) */ -#define SQLITE_EXTENSION_INIT1 const sqlite3_api_routines *sqlite3_api = 0; -#define SQLITE_EXTENSION_INIT2(v) sqlite3_api = v; +#if !defined(SQLITE_CORE) && !defined(SQLITE_OMIT_LOAD_EXTENSION) + /* This case when the file really is being compiled as a loadable + ** extension */ +# define SQLITE_EXTENSION_INIT1 const sqlite3_api_routines *sqlite3_api=0; +# define SQLITE_EXTENSION_INIT2(v) sqlite3_api=v; +# define SQLITE_EXTENSION_INIT3 \ + extern const sqlite3_api_routines *sqlite3_api; +#else + /* This case when the file is being statically linked into the + ** application */ +# define SQLITE_EXTENSION_INIT1 /*no-op*/ +# define SQLITE_EXTENSION_INIT2(v) (void)v; /* unused parameter */ +# define SQLITE_EXTENSION_INIT3 /*no-op*/ +#endif #endif /* _SQLITE3EXT_H_ */ /************** End of sqlite3ext.h ******************************************/ /************** Continuing where we left off in loadext.c ********************/ +/* #include "sqliteInt.h" */ +/* #include <string.h> */ #ifndef SQLITE_OMIT_LOAD_EXTENSION /* ** Some API routines are omitted when various features are @@ -77646,11 +106192,10 @@ # define sqlite3_column_database_name16 0 # define sqlite3_column_table_name 0 # define sqlite3_column_table_name16 0 # define sqlite3_column_origin_name 0 # define sqlite3_column_origin_name16 0 -# define sqlite3_table_column_metadata 0 #endif #ifdef SQLITE_OMIT_AUTHORIZATION # define sqlite3_set_authorizer 0 #endif @@ -77682,19 +106227,26 @@ #ifdef SQLITE_OMIT_COMPLETE # define sqlite3_complete 0 # define sqlite3_complete16 0 #endif + +#ifdef SQLITE_OMIT_DECLTYPE +# define sqlite3_column_decltype16 0 +# define sqlite3_column_decltype 0 +#endif #ifdef SQLITE_OMIT_PROGRESS_CALLBACK # define sqlite3_progress_handler 0 #endif #ifdef SQLITE_OMIT_VIRTUALTABLE # define sqlite3_create_module 0 # define sqlite3_create_module_v2 0 # define sqlite3_declare_vtab 0 +# define sqlite3_vtab_config 0 +# define sqlite3_vtab_on_conflict 0 #endif #ifdef SQLITE_OMIT_SHARED_CACHE # define sqlite3_enable_shared_cache 0 #endif @@ -77714,10 +106266,11 @@ #define sqlite3_blob_bytes 0 #define sqlite3_blob_close 0 #define sqlite3_blob_open 0 #define sqlite3_blob_read 0 #define sqlite3_blob_write 0 +#define sqlite3_blob_reopen 0 #endif /* ** The following structure contains pointers to all SQLite API routines. 
** A pointer to this structure is passed into extensions when they are @@ -77939,10 +106492,91 @@ sqlite3_extended_result_codes, sqlite3_limit, sqlite3_next_stmt, sqlite3_sql, sqlite3_status, + + /* + ** Added for 3.7.4 + */ + sqlite3_backup_finish, + sqlite3_backup_init, + sqlite3_backup_pagecount, + sqlite3_backup_remaining, + sqlite3_backup_step, +#ifndef SQLITE_OMIT_COMPILEOPTION_DIAGS + sqlite3_compileoption_get, + sqlite3_compileoption_used, +#else + 0, + 0, +#endif + sqlite3_create_function_v2, + sqlite3_db_config, + sqlite3_db_mutex, + sqlite3_db_status, + sqlite3_extended_errcode, + sqlite3_log, + sqlite3_soft_heap_limit64, + sqlite3_sourceid, + sqlite3_stmt_status, + sqlite3_strnicmp, +#ifdef SQLITE_ENABLE_UNLOCK_NOTIFY + sqlite3_unlock_notify, +#else + 0, +#endif +#ifndef SQLITE_OMIT_WAL + sqlite3_wal_autocheckpoint, + sqlite3_wal_checkpoint, + sqlite3_wal_hook, +#else + 0, + 0, + 0, +#endif + sqlite3_blob_reopen, + sqlite3_vtab_config, + sqlite3_vtab_on_conflict, + sqlite3_close_v2, + sqlite3_db_filename, + sqlite3_db_readonly, + sqlite3_db_release_memory, + sqlite3_errstr, + sqlite3_stmt_busy, + sqlite3_stmt_readonly, + sqlite3_stricmp, + sqlite3_uri_boolean, + sqlite3_uri_int64, + sqlite3_uri_parameter, + sqlite3_vsnprintf, + sqlite3_wal_checkpoint_v2, + /* Version 3.8.7 and later */ + sqlite3_auto_extension, + sqlite3_bind_blob64, + sqlite3_bind_text64, + sqlite3_cancel_auto_extension, + sqlite3_load_extension, + sqlite3_malloc64, + sqlite3_msize, + sqlite3_realloc64, + sqlite3_reset_auto_extension, + sqlite3_result_blob64, + sqlite3_result_text64, + sqlite3_strglob, + /* Version 3.8.11 and later */ + (sqlite3_value*(*)(const sqlite3_value*))sqlite3_value_dup, + sqlite3_value_free, + sqlite3_result_zeroblob64, + sqlite3_bind_zeroblob64, + /* Version 3.9.0 and later */ + sqlite3_value_subtype, + sqlite3_result_subtype, + /* Version 3.10.0 and later */ + sqlite3_status64, + sqlite3_strlike, + sqlite3_db_cacheflush }; /* ** Attempt to load an SQLite extension library contained in the file ** zFile. The entry point is zProc. zProc may be 0 in which case a @@ -77963,12 +106597,27 @@ ){ sqlite3_vfs *pVfs = db->pVfs; void *handle; int (*xInit)(sqlite3*,char**,const sqlite3_api_routines*); char *zErrmsg = 0; + const char *zEntry; + char *zAltEntry = 0; void **aHandle; - const int nMsg = 300; + u64 nMsg = 300 + sqlite3Strlen30(zFile); + int ii; + + /* Shared library endings to try if zFile cannot be loaded as written */ + static const char *azEndings[] = { +#if SQLITE_OS_WIN + "dll" +#elif defined(__APPLE__) + "dylib" +#else + "so" +#endif + }; + if( pzErrMsg ) *pzErrMsg = 0; /* Ticket #1863. To avoid a creating security problems for older ** applications that relink against newer versions of SQLite, the @@ -77981,44 +106630,84 @@ *pzErrMsg = sqlite3_mprintf("not authorized"); } return SQLITE_ERROR; } - if( zProc==0 ){ - zProc = "sqlite3_extension_init"; - } + zEntry = zProc ? 
zProc : "sqlite3_extension_init"; handle = sqlite3OsDlOpen(pVfs, zFile); +#if SQLITE_OS_UNIX || SQLITE_OS_WIN + for(ii=0; ii<ArraySize(azEndings) && handle==0; ii++){ + char *zAltFile = sqlite3_mprintf("%s.%s", zFile, azEndings[ii]); + if( zAltFile==0 ) return SQLITE_NOMEM; + handle = sqlite3OsDlOpen(pVfs, zAltFile); + sqlite3_free(zAltFile); + } +#endif if( handle==0 ){ if( pzErrMsg ){ - zErrmsg = sqlite3StackAllocZero(db, nMsg); + *pzErrMsg = zErrmsg = sqlite3_malloc64(nMsg); if( zErrmsg ){ sqlite3_snprintf(nMsg, zErrmsg, "unable to open shared library [%s]", zFile); sqlite3OsDlError(pVfs, nMsg-1, zErrmsg); - *pzErrMsg = sqlite3DbStrDup(0, zErrmsg); - sqlite3StackFree(db, zErrmsg); } } return SQLITE_ERROR; } xInit = (int(*)(sqlite3*,char**,const sqlite3_api_routines*)) - sqlite3OsDlSym(pVfs, handle, zProc); + sqlite3OsDlSym(pVfs, handle, zEntry); + + /* If no entry point was specified and the default legacy + ** entry point name "sqlite3_extension_init" was not found, then + ** construct an entry point name "sqlite3_X_init" where the X is + ** replaced by the lowercase value of every ASCII alphabetic + ** character in the filename after the last "/" upto the first ".", + ** and eliding the first three characters if they are "lib". + ** Examples: + ** + ** /usr/local/lib/libExample5.4.3.so ==> sqlite3_example_init + ** C:/lib/mathfuncs.dll ==> sqlite3_mathfuncs_init + */ + if( xInit==0 && zProc==0 ){ + int iFile, iEntry, c; + int ncFile = sqlite3Strlen30(zFile); + zAltEntry = sqlite3_malloc64(ncFile+30); + if( zAltEntry==0 ){ + sqlite3OsDlClose(pVfs, handle); + return SQLITE_NOMEM; + } + memcpy(zAltEntry, "sqlite3_", 8); + for(iFile=ncFile-1; iFile>=0 && zFile[iFile]!='/'; iFile--){} + iFile++; + if( sqlite3_strnicmp(zFile+iFile, "lib", 3)==0 ) iFile += 3; + for(iEntry=8; (c = zFile[iFile])!=0 && c!='.'; iFile++){ + if( sqlite3Isalpha(c) ){ + zAltEntry[iEntry++] = (char)sqlite3UpperToLower[(unsigned)c]; + } + } + memcpy(zAltEntry+iEntry, "_init", 6); + zEntry = zAltEntry; + xInit = (int(*)(sqlite3*,char**,const sqlite3_api_routines*)) + sqlite3OsDlSym(pVfs, handle, zEntry); + } if( xInit==0 ){ if( pzErrMsg ){ - zErrmsg = sqlite3StackAllocZero(db, nMsg); + nMsg += sqlite3Strlen30(zEntry); + *pzErrMsg = zErrmsg = sqlite3_malloc64(nMsg); if( zErrmsg ){ sqlite3_snprintf(nMsg, zErrmsg, - "no entry point [%s] in shared library [%s]", zProc,zFile); + "no entry point [%s] in shared library [%s]", zEntry, zFile); sqlite3OsDlError(pVfs, nMsg-1, zErrmsg); - *pzErrMsg = sqlite3DbStrDup(0, zErrmsg); - sqlite3StackFree(db, zErrmsg); } - sqlite3OsDlClose(pVfs, handle); } + sqlite3OsDlClose(pVfs, handle); + sqlite3_free(zAltEntry); return SQLITE_ERROR; - }else if( xInit(db, &zErrmsg, &sqlite3Apis) ){ + } + sqlite3_free(zAltEntry); + if( xInit(db, &zErrmsg, &sqlite3Apis) ){ if( pzErrMsg ){ *pzErrMsg = sqlite3_mprintf("error during initialization: %s", zErrmsg); } sqlite3_free(zErrmsg); sqlite3OsDlClose(pVfs, handle); @@ -78037,11 +106726,11 @@ db->aExtension = aHandle; db->aExtension[db->nExtension++] = handle; return SQLITE_OK; } -SQLITE_API int sqlite3_load_extension( +SQLITE_API int SQLITE_STDCALL sqlite3_load_extension( sqlite3 *db, /* Load the extension into this database connection */ const char *zFile, /* Name of the shared library containing extension */ const char *zProc, /* Entry point. Use "sqlite3_extension_init" if 0 */ char **pzErrMsg /* Put error message here if not 0 */ ){ @@ -78068,11 +106757,11 @@ /* ** Enable or disable extension loading. 
Extension loading is disabled by ** default so as not to open security holes in older applications. */ -SQLITE_API int sqlite3_enable_load_extension(sqlite3 *db, int onoff){ +SQLITE_API int SQLITE_STDCALL sqlite3_enable_load_extension(sqlite3 *db, int onoff){ sqlite3_mutex_enter(db->mutex); if( onoff ){ db->flags |= SQLITE_LoadExtension; }else{ db->flags &= ~SQLITE_LoadExtension; @@ -78101,11 +106790,11 @@ ** This list is shared across threads. The SQLITE_MUTEX_STATIC_MASTER ** mutex must be held while accessing this list. */ typedef struct sqlite3AutoExtList sqlite3AutoExtList; static SQLITE_WSD struct sqlite3AutoExtList { - int nExt; /* Number of entries in aExt[] */ + u32 nExt; /* Number of entries in aExt[] */ void (**aExt)(void); /* Pointers to the extension init functions */ } sqlite3Autoext = { 0, 0 }; /* The "wsdAutoext" macro will resolve to the autoextension ** state vector. If writable static data is unsupported on the target, @@ -78125,32 +106814,32 @@ /* ** Register a statically linked extension that is automatically ** loaded by every new database connection. */ -SQLITE_API int sqlite3_auto_extension(void (*xInit)(void)){ +SQLITE_API int SQLITE_STDCALL sqlite3_auto_extension(void (*xInit)(void)){ int rc = SQLITE_OK; #ifndef SQLITE_OMIT_AUTOINIT rc = sqlite3_initialize(); if( rc ){ return rc; }else #endif { - int i; + u32 i; #if SQLITE_THREADSAFE sqlite3_mutex *mutex = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER); #endif wsdAutoextInit; sqlite3_mutex_enter(mutex); for(i=0; i<wsdAutoext.nExt; i++){ if( wsdAutoext.aExt[i]==xInit ) break; } if( i==wsdAutoext.nExt ){ - int nByte = (wsdAutoext.nExt+1)*sizeof(wsdAutoext.aExt[0]); + u64 nByte = (wsdAutoext.nExt+1)*sizeof(wsdAutoext.aExt[0]); void (**aNew)(void); - aNew = sqlite3_realloc(wsdAutoext.aExt, nByte); + aNew = sqlite3_realloc64(wsdAutoext.aExt, nByte); if( aNew==0 ){ rc = SQLITE_NOMEM; }else{ wsdAutoext.aExt = aNew; wsdAutoext.aExt[wsdAutoext.nExt] = xInit; @@ -78160,15 +106849,44 @@ sqlite3_mutex_leave(mutex); assert( (rc&0xff)==rc ); return rc; } } + +/* +** Cancel a prior call to sqlite3_auto_extension. Remove xInit from the +** set of routines that is invoked for each new database connection, if it +** is currently on the list. If xInit is not on the list, then this +** routine is a no-op. +** +** Return 1 if xInit was found on the list and removed. Return 0 if xInit +** was not on the list. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_cancel_auto_extension(void (*xInit)(void)){ +#if SQLITE_THREADSAFE + sqlite3_mutex *mutex = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER); +#endif + int i; + int n = 0; + wsdAutoextInit; + sqlite3_mutex_enter(mutex); + for(i=(int)wsdAutoext.nExt-1; i>=0; i--){ + if( wsdAutoext.aExt[i]==xInit ){ + wsdAutoext.nExt--; + wsdAutoext.aExt[i] = wsdAutoext.aExt[wsdAutoext.nExt]; + n++; + break; + } + } + sqlite3_mutex_leave(mutex); + return n; +} /* ** Reset the automatic extension loading mechanism. */ -SQLITE_API void sqlite3_reset_auto_extension(void){ +SQLITE_API void SQLITE_STDCALL sqlite3_reset_auto_extension(void){ #ifndef SQLITE_OMIT_AUTOINIT if( sqlite3_initialize()==SQLITE_OK ) #endif { #if SQLITE_THREADSAFE @@ -78187,12 +106905,13 @@ ** Load all automatic extensions. ** ** If anything goes wrong, set an error in the database connection. 
*/ SQLITE_PRIVATE void sqlite3AutoLoadExtensions(sqlite3 *db){ - int i; + u32 i; int go = 1; + int rc; int (*xInit)(sqlite3*,char**,const sqlite3_api_routines*); wsdAutoextInit; if( wsdAutoext.nExt==0 ){ /* Common case: early out without every having to acquire a mutex */ @@ -78211,12 +106930,12 @@ xInit = (int(*)(sqlite3*,char**,const sqlite3_api_routines*)) wsdAutoext.aExt[i]; } sqlite3_mutex_leave(mutex); zErrmsg = 0; - if( xInit && xInit(db, &zErrmsg, &sqlite3Apis) ){ - sqlite3Error(db, SQLITE_ERROR, + if( xInit && (rc = xInit(db, &zErrmsg, &sqlite3Apis))!=0 ){ + sqlite3ErrorWithMsg(db, rc, "automatic extension loading failed: %s", zErrmsg); go = 0; } sqlite3_free(zErrmsg); } @@ -78235,50 +106954,542 @@ ** May you share freely, never taking more than you give. ** ************************************************************************* ** This file contains code used to implement the PRAGMA command. */ +/* #include "sqliteInt.h" */ -/* Ignore this whole file if pragmas are disabled +#if !defined(SQLITE_ENABLE_LOCKING_STYLE) +# if defined(__APPLE__) +# define SQLITE_ENABLE_LOCKING_STYLE 1 +# else +# define SQLITE_ENABLE_LOCKING_STYLE 0 +# endif +#endif + +/*************************************************************************** +** The "pragma.h" include file is an automatically generated file that +** that includes the PragType_XXXX macro definitions and the aPragmaName[] +** object. This ensures that the aPragmaName[] table is arranged in +** lexicographical order to facility a binary search of the pragma name. +** Do not edit pragma.h directly. Edit and rerun the script in at +** ../tool/mkpragmatab.tcl. */ +/************** Include pragma.h in the middle of pragma.c *******************/ +/************** Begin file pragma.h ******************************************/ +/* DO NOT EDIT! +** This file is automatically generated by the script at +** ../tool/mkpragmatab.tcl. To update the set of pragmas, edit +** that script and rerun it. 
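** For illustration only (not part of this changeset), the lexicographic
** ordering allows a lookup along these lines over the aPragmaNames[]
** table defined below; pragmaLocate() is a sketch, not the routine used
** by the pragma code itself:
**
**     static const struct sPragmaNames *pragmaLocate(const char *zName){
**       int lwr = 0, upr = ArraySize(aPragmaNames)-1;
**       while( lwr<=upr ){
**         int mid = (lwr+upr)/2;
**         int rc = sqlite3_stricmp(zName, aPragmaNames[mid].zName);
**         if( rc==0 ) return &aPragmaNames[mid];
**         if( rc<0 ) upr = mid-1; else lwr = mid+1;
**       }
**       return 0;
**     }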
*/ -#if !defined(SQLITE_OMIT_PRAGMA) +#define PragTyp_HEADER_VALUE 0 +#define PragTyp_AUTO_VACUUM 1 +#define PragTyp_FLAG 2 +#define PragTyp_BUSY_TIMEOUT 3 +#define PragTyp_CACHE_SIZE 4 +#define PragTyp_CACHE_SPILL 5 +#define PragTyp_CASE_SENSITIVE_LIKE 6 +#define PragTyp_COLLATION_LIST 7 +#define PragTyp_COMPILE_OPTIONS 8 +#define PragTyp_DATA_STORE_DIRECTORY 9 +#define PragTyp_DATABASE_LIST 10 +#define PragTyp_DEFAULT_CACHE_SIZE 11 +#define PragTyp_ENCODING 12 +#define PragTyp_FOREIGN_KEY_CHECK 13 +#define PragTyp_FOREIGN_KEY_LIST 14 +#define PragTyp_INCREMENTAL_VACUUM 15 +#define PragTyp_INDEX_INFO 16 +#define PragTyp_INDEX_LIST 17 +#define PragTyp_INTEGRITY_CHECK 18 +#define PragTyp_JOURNAL_MODE 19 +#define PragTyp_JOURNAL_SIZE_LIMIT 20 +#define PragTyp_LOCK_PROXY_FILE 21 +#define PragTyp_LOCKING_MODE 22 +#define PragTyp_PAGE_COUNT 23 +#define PragTyp_MMAP_SIZE 24 +#define PragTyp_PAGE_SIZE 25 +#define PragTyp_SECURE_DELETE 26 +#define PragTyp_SHRINK_MEMORY 27 +#define PragTyp_SOFT_HEAP_LIMIT 28 +#define PragTyp_STATS 29 +#define PragTyp_SYNCHRONOUS 30 +#define PragTyp_TABLE_INFO 31 +#define PragTyp_TEMP_STORE 32 +#define PragTyp_TEMP_STORE_DIRECTORY 33 +#define PragTyp_THREADS 34 +#define PragTyp_WAL_AUTOCHECKPOINT 35 +#define PragTyp_WAL_CHECKPOINT 36 +#define PragTyp_ACTIVATE_EXTENSIONS 37 +#define PragTyp_HEXKEY 38 +#define PragTyp_KEY 39 +#define PragTyp_REKEY 40 +#define PragTyp_LOCK_STATUS 41 +#define PragTyp_PARSER_TRACE 42 +#define PragFlag_NeedSchema 0x01 +#define PragFlag_ReadOnly 0x02 +static const struct sPragmaNames { + const char *const zName; /* Name of pragma */ + u8 ePragTyp; /* PragTyp_XXX value */ + u8 mPragFlag; /* Zero or more PragFlag_XXX values */ + u32 iArg; /* Extra argument */ +} aPragmaNames[] = { +#if defined(SQLITE_HAS_CODEC) || defined(SQLITE_ENABLE_CEROD) + { /* zName: */ "activate_extensions", + /* ePragTyp: */ PragTyp_ACTIVATE_EXTENSIONS, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS) + { /* zName: */ "application_id", + /* ePragTyp: */ PragTyp_HEADER_VALUE, + /* ePragFlag: */ 0, + /* iArg: */ BTREE_APPLICATION_ID }, +#endif +#if !defined(SQLITE_OMIT_AUTOVACUUM) + { /* zName: */ "auto_vacuum", + /* ePragTyp: */ PragTyp_AUTO_VACUUM, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) +#if !defined(SQLITE_OMIT_AUTOMATIC_INDEX) + { /* zName: */ "automatic_index", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_AutoIndex }, +#endif +#endif + { /* zName: */ "busy_timeout", + /* ePragTyp: */ PragTyp_BUSY_TIMEOUT, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) + { /* zName: */ "cache_size", + /* ePragTyp: */ PragTyp_CACHE_SIZE, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) + { /* zName: */ "cache_spill", + /* ePragTyp: */ PragTyp_CACHE_SPILL, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif + { /* zName: */ "case_sensitive_like", + /* ePragTyp: */ PragTyp_CASE_SENSITIVE_LIKE, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, + { /* zName: */ "cell_size_check", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_CellSizeCk }, +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) + { /* zName: */ "checkpoint_fullfsync", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_CkptFullFSync }, +#endif +#if !defined(SQLITE_OMIT_SCHEMA_PRAGMAS) + { /* zName: */ "collation_list", + /* ePragTyp: */ PragTyp_COLLATION_LIST, 
+ /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_COMPILEOPTION_DIAGS) + { /* zName: */ "compile_options", + /* ePragTyp: */ PragTyp_COMPILE_OPTIONS, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) + { /* zName: */ "count_changes", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_CountRows }, +#endif +#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) && SQLITE_OS_WIN + { /* zName: */ "data_store_directory", + /* ePragTyp: */ PragTyp_DATA_STORE_DIRECTORY, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS) + { /* zName: */ "data_version", + /* ePragTyp: */ PragTyp_HEADER_VALUE, + /* ePragFlag: */ PragFlag_ReadOnly, + /* iArg: */ BTREE_DATA_VERSION }, +#endif +#if !defined(SQLITE_OMIT_SCHEMA_PRAGMAS) + { /* zName: */ "database_list", + /* ePragTyp: */ PragTyp_DATABASE_LIST, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) && !defined(SQLITE_OMIT_DEPRECATED) + { /* zName: */ "default_cache_size", + /* ePragTyp: */ PragTyp_DEFAULT_CACHE_SIZE, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) +#if !defined(SQLITE_OMIT_FOREIGN_KEY) && !defined(SQLITE_OMIT_TRIGGER) + { /* zName: */ "defer_foreign_keys", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_DeferFKs }, +#endif +#endif +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) + { /* zName: */ "empty_result_callbacks", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_NullCallback }, +#endif +#if !defined(SQLITE_OMIT_UTF16) + { /* zName: */ "encoding", + /* ePragTyp: */ PragTyp_ENCODING, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_FOREIGN_KEY) && !defined(SQLITE_OMIT_TRIGGER) + { /* zName: */ "foreign_key_check", + /* ePragTyp: */ PragTyp_FOREIGN_KEY_CHECK, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_FOREIGN_KEY) + { /* zName: */ "foreign_key_list", + /* ePragTyp: */ PragTyp_FOREIGN_KEY_LIST, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) +#if !defined(SQLITE_OMIT_FOREIGN_KEY) && !defined(SQLITE_OMIT_TRIGGER) + { /* zName: */ "foreign_keys", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_ForeignKeys }, +#endif +#endif +#if !defined(SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS) + { /* zName: */ "freelist_count", + /* ePragTyp: */ PragTyp_HEADER_VALUE, + /* ePragFlag: */ PragFlag_ReadOnly, + /* iArg: */ BTREE_FREE_PAGE_COUNT }, +#endif +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) + { /* zName: */ "full_column_names", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_FullColNames }, + { /* zName: */ "fullfsync", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_FullFSync }, +#endif +#if defined(SQLITE_HAS_CODEC) + { /* zName: */ "hexkey", + /* ePragTyp: */ PragTyp_HEXKEY, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, + { /* zName: */ "hexrekey", + /* ePragTyp: */ PragTyp_HEXKEY, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) +#if !defined(SQLITE_OMIT_CHECK) + { /* zName: */ "ignore_check_constraints", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_IgnoreChecks }, +#endif +#endif +#if !defined(SQLITE_OMIT_AUTOVACUUM) + { /* zName: */ "incremental_vacuum", + /* ePragTyp: */ PragTyp_INCREMENTAL_VACUUM, + /* 
ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_SCHEMA_PRAGMAS) + { /* zName: */ "index_info", + /* ePragTyp: */ PragTyp_INDEX_INFO, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, + { /* zName: */ "index_list", + /* ePragTyp: */ PragTyp_INDEX_LIST, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, + { /* zName: */ "index_xinfo", + /* ePragTyp: */ PragTyp_INDEX_INFO, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 1 }, +#endif +#if !defined(SQLITE_OMIT_INTEGRITY_CHECK) + { /* zName: */ "integrity_check", + /* ePragTyp: */ PragTyp_INTEGRITY_CHECK, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) + { /* zName: */ "journal_mode", + /* ePragTyp: */ PragTyp_JOURNAL_MODE, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, + { /* zName: */ "journal_size_limit", + /* ePragTyp: */ PragTyp_JOURNAL_SIZE_LIMIT, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif +#if defined(SQLITE_HAS_CODEC) + { /* zName: */ "key", + /* ePragTyp: */ PragTyp_KEY, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) + { /* zName: */ "legacy_file_format", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_LegacyFileFmt }, +#endif +#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) && SQLITE_ENABLE_LOCKING_STYLE + { /* zName: */ "lock_proxy_file", + /* ePragTyp: */ PragTyp_LOCK_PROXY_FILE, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif +#if defined(SQLITE_DEBUG) || defined(SQLITE_TEST) + { /* zName: */ "lock_status", + /* ePragTyp: */ PragTyp_LOCK_STATUS, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) + { /* zName: */ "locking_mode", + /* ePragTyp: */ PragTyp_LOCKING_MODE, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, + { /* zName: */ "max_page_count", + /* ePragTyp: */ PragTyp_PAGE_COUNT, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, + { /* zName: */ "mmap_size", + /* ePragTyp: */ PragTyp_MMAP_SIZE, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, + { /* zName: */ "page_count", + /* ePragTyp: */ PragTyp_PAGE_COUNT, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, + { /* zName: */ "page_size", + /* ePragTyp: */ PragTyp_PAGE_SIZE, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif +#if defined(SQLITE_DEBUG) && !defined(SQLITE_OMIT_PARSER_TRACE) + { /* zName: */ "parser_trace", + /* ePragTyp: */ PragTyp_PARSER_TRACE, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) + { /* zName: */ "query_only", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_QueryOnly }, +#endif +#if !defined(SQLITE_OMIT_INTEGRITY_CHECK) + { /* zName: */ "quick_check", + /* ePragTyp: */ PragTyp_INTEGRITY_CHECK, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) + { /* zName: */ "read_uncommitted", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_ReadUncommitted }, + { /* zName: */ "recursive_triggers", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_RecTriggers }, +#endif +#if defined(SQLITE_HAS_CODEC) + { /* zName: */ "rekey", + /* ePragTyp: */ PragTyp_REKEY, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) + { /* zName: */ "reverse_unordered_selects", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_ReverseOrder }, +#endif +#if !defined(SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS) + { /* zName: */ 
"schema_version", + /* ePragTyp: */ PragTyp_HEADER_VALUE, + /* ePragFlag: */ 0, + /* iArg: */ BTREE_SCHEMA_VERSION }, +#endif +#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) + { /* zName: */ "secure_delete", + /* ePragTyp: */ PragTyp_SECURE_DELETE, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) + { /* zName: */ "short_column_names", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_ShortColNames }, +#endif + { /* zName: */ "shrink_memory", + /* ePragTyp: */ PragTyp_SHRINK_MEMORY, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, + { /* zName: */ "soft_heap_limit", + /* ePragTyp: */ PragTyp_SOFT_HEAP_LIMIT, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) +#if defined(SQLITE_DEBUG) + { /* zName: */ "sql_trace", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_SqlTrace }, +#endif +#endif +#if !defined(SQLITE_OMIT_SCHEMA_PRAGMAS) + { /* zName: */ "stats", + /* ePragTyp: */ PragTyp_STATS, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) + { /* zName: */ "synchronous", + /* ePragTyp: */ PragTyp_SYNCHRONOUS, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_SCHEMA_PRAGMAS) + { /* zName: */ "table_info", + /* ePragTyp: */ PragTyp_TABLE_INFO, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) + { /* zName: */ "temp_store", + /* ePragTyp: */ PragTyp_TEMP_STORE, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, + { /* zName: */ "temp_store_directory", + /* ePragTyp: */ PragTyp_TEMP_STORE_DIRECTORY, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#endif + { /* zName: */ "threads", + /* ePragTyp: */ PragTyp_THREADS, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, +#if !defined(SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS) + { /* zName: */ "user_version", + /* ePragTyp: */ PragTyp_HEADER_VALUE, + /* ePragFlag: */ 0, + /* iArg: */ BTREE_USER_VERSION }, +#endif +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) +#if defined(SQLITE_DEBUG) + { /* zName: */ "vdbe_addoptrace", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_VdbeAddopTrace }, + { /* zName: */ "vdbe_debug", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_SqlTrace|SQLITE_VdbeListing|SQLITE_VdbeTrace }, + { /* zName: */ "vdbe_eqp", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_VdbeEQP }, + { /* zName: */ "vdbe_listing", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_VdbeListing }, + { /* zName: */ "vdbe_trace", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_VdbeTrace }, +#endif +#endif +#if !defined(SQLITE_OMIT_WAL) + { /* zName: */ "wal_autocheckpoint", + /* ePragTyp: */ PragTyp_WAL_AUTOCHECKPOINT, + /* ePragFlag: */ 0, + /* iArg: */ 0 }, + { /* zName: */ "wal_checkpoint", + /* ePragTyp: */ PragTyp_WAL_CHECKPOINT, + /* ePragFlag: */ PragFlag_NeedSchema, + /* iArg: */ 0 }, +#endif +#if !defined(SQLITE_OMIT_FLAG_PRAGMAS) + { /* zName: */ "writable_schema", + /* ePragTyp: */ PragTyp_FLAG, + /* ePragFlag: */ 0, + /* iArg: */ SQLITE_WriteSchema|SQLITE_RecoveryMode }, +#endif +}; +/* Number of pragmas: 60 on by default, 73 total. */ + +/************** End of pragma.h **********************************************/ +/************** Continuing where we left off in pragma.c *********************/ /* ** Interpret the given string as a safety level. 
Return 0 for OFF, -** 1 for ON or NORMAL and 2 for FULL. Return 1 for an empty or -** unrecognized string argument. +** 1 for ON or NORMAL, 2 for FULL, and 3 for EXTRA. Return 1 for an empty or +** unrecognized string argument. The FULL and EXTRA option is disallowed +** if the omitFull parameter it 1. ** ** Note that the values returned are one less that the values that ** should be passed into sqlite3BtreeSetSafetyLevel(). The is done ** to support legacy SQL code. The safety level used to be boolean ** and older scripts may have used numbers 0 for OFF and 1 for ON. */ -static u8 getSafetyLevel(const char *z){ - /* 123456789 123456789 */ - static const char zText[] = "onoffalseyestruefull"; - static const u8 iOffset[] = {0, 1, 2, 4, 9, 12, 16}; - static const u8 iLength[] = {2, 2, 3, 5, 3, 4, 4}; - static const u8 iValue[] = {1, 0, 0, 0, 1, 1, 2}; +static u8 getSafetyLevel(const char *z, int omitFull, u8 dflt){ + /* 123456789 123456789 123 */ + static const char zText[] = "onoffalseyestruextrafull"; + static const u8 iOffset[] = {0, 1, 2, 4, 9, 12, 15, 20}; + static const u8 iLength[] = {2, 2, 3, 5, 3, 4, 5, 4}; + static const u8 iValue[] = {1, 0, 0, 0, 1, 1, 3, 2}; + /* on no off false yes true extra full */ int i, n; if( sqlite3Isdigit(*z) ){ - return (u8)atoi(z); + return (u8)sqlite3Atoi(z); } n = sqlite3Strlen30(z); for(i=0; i<ArraySize(iLength); i++){ - if( iLength[i]==n && sqlite3StrNICmp(&zText[iOffset[i]],z,n)==0 ){ + if( iLength[i]==n && sqlite3StrNICmp(&zText[iOffset[i]],z,n)==0 + && (!omitFull || iValue[i]<=1) + ){ return iValue[i]; } } - return 1; + return dflt; } /* ** Interpret the given string as a boolean value. */ -static u8 getBoolean(const char *z){ - return getSafetyLevel(z)&1; +SQLITE_PRIVATE u8 sqlite3GetBoolean(const char *z, u8 dflt){ + return getSafetyLevel(z,1,dflt)!=0; } + +/* The sqlite3GetBoolean() function is used by other modules but the +** remainder of this file is specific to PRAGMA processing. So omit +** the rest of the file if PRAGMAs are omitted from the build. +*/ +#if !defined(SQLITE_OMIT_PRAGMA) /* ** Interpret the given string as a locking mode value. */ static int getLockingMode(const char *z){ @@ -78299,11 +107510,11 @@ static int getAutoVacuum(const char *z){ int i; if( 0==sqlite3StrICmp(z, "none") ) return BTREE_AUTOVACUUM_NONE; if( 0==sqlite3StrICmp(z, "full") ) return BTREE_AUTOVACUUM_FULL; if( 0==sqlite3StrICmp(z, "incremental") ) return BTREE_AUTOVACUUM_INCR; - i = atoi(z); + i = sqlite3Atoi(z); return (u8)((i>=0&&i<=2)?i:0); } #endif /* ifndef SQLITE_OMIT_AUTOVACUUM */ #ifndef SQLITE_OMIT_PAGER_PRAGMAS @@ -78338,11 +107549,11 @@ "from within a transaction"); return SQLITE_ERROR; } sqlite3BtreeClose(db->aDb[1].pBt); db->aDb[1].pBt = 0; - sqlite3ResetInternalSchema(db, 0); + sqlite3ResetAllSchemasOfConnection(db); } return SQLITE_OK; } #endif /* SQLITE_PAGER_PRAGMAS */ @@ -78361,110 +107572,82 @@ } db->temp_store = (u8)ts; return SQLITE_OK; } #endif /* SQLITE_PAGER_PRAGMAS */ + +/* +** Set the names of the first N columns to the values in azCol[] +*/ +static void setAllColumnNames( + Vdbe *v, /* The query under construction */ + int N, /* Number of columns */ + const char **azCol /* Names of columns */ +){ + int i; + sqlite3VdbeSetNumCols(v, N); + for(i=0; i<N; i++){ + sqlite3VdbeSetColName(v, i, COLNAME_NAME, azCol[i], SQLITE_STATIC); + } +} +static void setOneColumnName(Vdbe *v, const char *z){ + setAllColumnNames(v, 1, &z); +} /* ** Generate code to return a single integer value. 
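**
** (Illustrative sketch, not taken from this checkin: a pragma routed
** through this helper behaves like an ordinary one-row, one-column query,
** so the value can be read back through the public API. Assumes the
** standard sqlite3.h interfaces and a hypothetical open handle "db":
**
**     sqlite3_stmt *pStmt = 0;
**     if( sqlite3_prepare_v2(db, "PRAGMA page_count;", -1, &pStmt, 0)==SQLITE_OK
**      && sqlite3_step(pStmt)==SQLITE_ROW ){
**       printf("page_count = %d\n", sqlite3_column_int(pStmt, 0));
**     }
**     sqlite3_finalize(pStmt);
** )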
*/ -static void returnSingleInt(Parse *pParse, const char *zLabel, i64 value){ - Vdbe *v = sqlite3GetVdbe(pParse); - int mem = ++pParse->nMem; - i64 *pI64 = sqlite3DbMallocRaw(pParse->db, sizeof(value)); - if( pI64 ){ - memcpy(pI64, &value, sizeof(value)); - } - sqlite3VdbeAddOp4(v, OP_Int64, 0, mem, 0, (char*)pI64, P4_INT64); - sqlite3VdbeSetNumCols(v, 1); - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, zLabel, SQLITE_STATIC); - sqlite3VdbeAddOp2(v, OP_ResultRow, mem, 1); -} - -#ifndef SQLITE_OMIT_FLAG_PRAGMAS -/* -** Check to see if zRight and zLeft refer to a pragma that queries -** or changes one of the flags in db->flags. Return 1 if so and 0 if not. -** Also, implement the pragma. -*/ -static int flagPragma(Parse *pParse, const char *zLeft, const char *zRight){ - static const struct sPragmaType { - const char *zName; /* Name of the pragma */ - int mask; /* Mask for the db->flags value */ - } aPragma[] = { - { "full_column_names", SQLITE_FullColNames }, - { "short_column_names", SQLITE_ShortColNames }, - { "count_changes", SQLITE_CountRows }, - { "empty_result_callbacks", SQLITE_NullCallback }, - { "legacy_file_format", SQLITE_LegacyFileFmt }, - { "fullfsync", SQLITE_FullFSync }, - { "reverse_unordered_selects", SQLITE_ReverseOrder }, -#ifndef SQLITE_OMIT_AUTOMATIC_INDEX - { "automatic_index", SQLITE_AutoIndex }, -#endif -#ifdef SQLITE_DEBUG - { "sql_trace", SQLITE_SqlTrace }, - { "vdbe_listing", SQLITE_VdbeListing }, - { "vdbe_trace", SQLITE_VdbeTrace }, -#endif -#ifndef SQLITE_OMIT_CHECK - { "ignore_check_constraints", SQLITE_IgnoreChecks }, -#endif - /* The following is VERY experimental */ - { "writable_schema", SQLITE_WriteSchema|SQLITE_RecoveryMode }, - { "omit_readlock", SQLITE_NoReadlock }, - - /* TODO: Maybe it shouldn't be possible to change the ReadUncommitted - ** flag if there are any active statements. */ - { "read_uncommitted", SQLITE_ReadUncommitted }, - { "recursive_triggers", SQLITE_RecTriggers }, - - /* This flag may only be set if both foreign-key and trigger support - ** are present in the build. */ -#if !defined(SQLITE_OMIT_FOREIGN_KEY) && !defined(SQLITE_OMIT_TRIGGER) - { "foreign_keys", SQLITE_ForeignKeys }, -#endif - }; - int i; - const struct sPragmaType *p; - for(i=0, p=aPragma; i<ArraySize(aPragma); i++, p++){ - if( sqlite3StrICmp(zLeft, p->zName)==0 ){ - sqlite3 *db = pParse->db; - Vdbe *v; - v = sqlite3GetVdbe(pParse); - assert( v!=0 ); /* Already allocated by sqlite3Pragma() */ - if( ALWAYS(v) ){ - if( zRight==0 ){ - returnSingleInt(pParse, p->zName, (db->flags & p->mask)!=0 ); - }else{ - int mask = p->mask; /* Mask of bits to set or clear. */ - if( db->autoCommit==0 ){ - /* Foreign key support may not be enabled or disabled while not - ** in auto-commit mode. */ - mask &= ~(SQLITE_ForeignKeys); - } - - if( getBoolean(zRight) ){ - db->flags |= mask; - }else{ - db->flags &= ~mask; - } - - /* Many of the flag-pragmas modify the code generated by the SQL - ** compiler (eg. count_changes). So add an opcode to expire all - ** compiled SQL statements after modifying a pragma value. - */ - sqlite3VdbeAddOp2(v, OP_Expire, 0, 0); - } - } - - return 1; - } - } - return 0; -} -#endif /* SQLITE_OMIT_FLAG_PRAGMAS */ +static void returnSingleInt(Vdbe *v, const char *zLabel, i64 value){ + sqlite3VdbeAddOp4Dup8(v, OP_Int64, 0, 1, 0, (const u8*)&value, P4_INT64); + setOneColumnName(v, zLabel); + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 1); +} + +/* +** Generate code to return a single text value. 
+*/ +static void returnSingleText( + Vdbe *v, /* Prepared statement under construction */ + const char *zLabel, /* Name of the result column */ + const char *zValue /* Value to be returned */ +){ + if( zValue ){ + sqlite3VdbeLoadString(v, 1, (const char*)zValue); + setOneColumnName(v, zLabel); + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 1); + } +} + + +/* +** Set the safety_level and pager flags for pager iDb. Or if iDb<0 +** set these values for all pagers. +*/ +#ifndef SQLITE_OMIT_PAGER_PRAGMAS +static void setAllPagerFlags(sqlite3 *db){ + if( db->autoCommit ){ + Db *pDb = db->aDb; + int n = db->nDb; + assert( SQLITE_FullFSync==PAGER_FULLFSYNC ); + assert( SQLITE_CkptFullFSync==PAGER_CKPT_FULLFSYNC ); + assert( SQLITE_CacheSpill==PAGER_CACHESPILL ); + assert( (PAGER_FULLFSYNC | PAGER_CKPT_FULLFSYNC | PAGER_CACHESPILL) + == PAGER_FLAGS_MASK ); + assert( (pDb->safety_level & PAGER_SYNCHRONOUS_MASK)==pDb->safety_level ); + while( (n--) > 0 ){ + if( pDb->pBt ){ + sqlite3BtreeSetPagerFlags(pDb->pBt, + pDb->safety_level | (db->flags & PAGER_FLAGS_MASK) ); + } + pDb++; + } + } +} +#else +# define setAllPagerFlags(X) /* no-op */ +#endif + /* ** Return a human-readable name for a constraint resolution action. */ #ifndef SQLITE_OMIT_FOREIGN_KEY @@ -78480,16 +107663,41 @@ } return zName; } #endif + +/* +** Parameter eMode must be one of the PAGER_JOURNALMODE_XXX constants +** defined in pager.h. This function returns the associated lowercase +** journal-mode name. +*/ +SQLITE_PRIVATE const char *sqlite3JournalModename(int eMode){ + static char * const azModeName[] = { + "delete", "persist", "off", "truncate", "memory" +#ifndef SQLITE_OMIT_WAL + , "wal" +#endif + }; + assert( PAGER_JOURNALMODE_DELETE==0 ); + assert( PAGER_JOURNALMODE_PERSIST==1 ); + assert( PAGER_JOURNALMODE_OFF==2 ); + assert( PAGER_JOURNALMODE_TRUNCATE==3 ); + assert( PAGER_JOURNALMODE_MEMORY==4 ); + assert( PAGER_JOURNALMODE_WAL==5 ); + assert( eMode>=0 && eMode<=ArraySize(azModeName) ); + + if( eMode==ArraySize(azModeName) ) return 0; + return azModeName[eMode]; +} + /* ** Process a pragma statement. ** ** Pragmas are of this form: ** -** PRAGMA [database.]id [= value] +** PRAGMA [schema.]id [= value] ** ** The identifier might also be a string. The value is a string, and ** identifier, or a number. If minusFlag is true, then the value is ** a number that was preceded by a minus sign. ** @@ -78497,28 +107705,33 @@ ** and pId2 is the id. If the left side is just "id" then pId1 is the ** id and pId2 is any empty string. 
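**
** Illustrative usage of the forms above, assuming the public
** sqlite3_exec() interface, a hypothetical open handle "db", and a
** hypothetical attached schema named "aux1" (a NULL callback simply
** discards any result rows):
**
**     sqlite3_exec(db, "PRAGMA cache_size;",            0, 0, 0);  /* query      */
**     sqlite3_exec(db, "PRAGMA cache_size = -2000;",    0, 0, 0);  /* set        */
**     sqlite3_exec(db, "PRAGMA aux1.cache_size = 500;", 0, 0, 0);  /* one schema */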
*/ SQLITE_PRIVATE void sqlite3Pragma( Parse *pParse, - Token *pId1, /* First part of [database.]id field */ - Token *pId2, /* Second part of [database.]id field, or NULL */ + Token *pId1, /* First part of [schema.]id field */ + Token *pId2, /* Second part of [schema.]id field, or NULL */ Token *pValue, /* Token for <value>, or NULL */ int minusFlag /* True if a '-' sign preceded <value> */ ){ char *zLeft = 0; /* Nul-terminated UTF-8 string <id> */ char *zRight = 0; /* Nul-terminated UTF-8 string <value>, or NULL */ const char *zDb = 0; /* The database name */ Token *pId; /* Pointer to <id> token */ + char *aFcntl[4]; /* Argument to SQLITE_FCNTL_PRAGMA */ int iDb; /* Database index for <database> */ - sqlite3 *db = pParse->db; - Db *pDb; - Vdbe *v = pParse->pVdbe = sqlite3VdbeCreate(db); + int lwr, upr, mid = 0; /* Binary search bounds */ + int rc; /* return value form SQLITE_FCNTL_PRAGMA */ + sqlite3 *db = pParse->db; /* The database connection */ + Db *pDb; /* The specific database being pragmaed */ + Vdbe *v = sqlite3GetVdbe(pParse); /* Prepared statement */ + const struct sPragmaNames *pPragma; + if( v==0 ) return; sqlite3VdbeRunOnlyOnce(v); pParse->nMem = 2; - /* Interpret the [database.] part of the pragma statement. iDb is the + /* Interpret the [schema.] part of the pragma statement. iDb is the ** index of the database this pragma is being applied to in db.aDb[]. */ iDb = sqlite3TwoPartName(pParse, pId1, pId2, &pId); if( iDb<0 ) return; pDb = &db->aDb[iDb]; @@ -78540,159 +107753,216 @@ assert( pId2 ); zDb = pId2->n>0 ? pDb->zName : 0; if( sqlite3AuthCheck(pParse, SQLITE_PRAGMA, zLeft, zRight, zDb) ){ goto pragma_out; } - -#ifndef SQLITE_OMIT_PAGER_PRAGMAS + + /* Send an SQLITE_FCNTL_PRAGMA file-control to the underlying VFS + ** connection. If it returns SQLITE_OK, then assume that the VFS + ** handled the pragma and generate a no-op prepared statement. + ** + ** IMPLEMENTATION-OF: R-12238-55120 Whenever a PRAGMA statement is parsed, + ** an SQLITE_FCNTL_PRAGMA file control is sent to the open sqlite3_file + ** object corresponding to the database file to which the pragma + ** statement refers. + ** + ** IMPLEMENTATION-OF: R-29875-31678 The argument to the SQLITE_FCNTL_PRAGMA + ** file control is an array of pointers to strings (char**) in which the + ** second element of the array is the name of the pragma and the third + ** element is the argument to the pragma or NULL if the pragma has no + ** argument. 
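**
** Illustrative sketch of the VFS side of this protocol (the function and
** pragma names here are hypothetical; a real VFS shim would normally also
** delegate unhandled opcodes to its underlying VFS):
**
**     static int demoFileControl(sqlite3_file *pFile, int op, void *pArg){
**       if( op==SQLITE_FCNTL_PRAGMA ){
**         char **azArg = (char**)pArg;   /* [1]=pragma name, [2]=value or NULL */
**         if( sqlite3_stricmp(azArg[1], "demo_status")==0 ){
**           /* Returning SQLITE_OK makes azArg[0] the single result row.
**           ** It must be allocated from sqlite3_malloc()/sqlite3_mprintf()
**           ** because the pragma code below releases it with sqlite3_free(). */
**           azArg[0] = sqlite3_mprintf("value=%s", azArg[2] ? azArg[2] : "none");
**           return SQLITE_OK;
**         }
**       }
**       return SQLITE_NOTFOUND;   /* let the built-in pragma logic run */
**     }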
+ */ + aFcntl[0] = 0; + aFcntl[1] = zLeft; + aFcntl[2] = zRight; + aFcntl[3] = 0; + db->busyHandler.nBusy = 0; + rc = sqlite3_file_control(db, zDb, SQLITE_FCNTL_PRAGMA, (void*)aFcntl); + if( rc==SQLITE_OK ){ + returnSingleText(v, "result", aFcntl[0]); + sqlite3_free(aFcntl[0]); + goto pragma_out; + } + if( rc!=SQLITE_NOTFOUND ){ + if( aFcntl[0] ){ + sqlite3ErrorMsg(pParse, "%s", aFcntl[0]); + sqlite3_free(aFcntl[0]); + } + pParse->nErr++; + pParse->rc = rc; + goto pragma_out; + } + + /* Locate the pragma in the lookup table */ + lwr = 0; + upr = ArraySize(aPragmaNames)-1; + while( lwr<=upr ){ + mid = (lwr+upr)/2; + rc = sqlite3_stricmp(zLeft, aPragmaNames[mid].zName); + if( rc==0 ) break; + if( rc<0 ){ + upr = mid - 1; + }else{ + lwr = mid + 1; + } + } + if( lwr>upr ) goto pragma_out; + pPragma = &aPragmaNames[mid]; + + /* Make sure the database schema is loaded if the pragma requires that */ + if( (pPragma->mPragFlag & PragFlag_NeedSchema)!=0 ){ + if( sqlite3ReadSchema(pParse) ) goto pragma_out; + } + + /* Jump to the appropriate pragma handler */ + switch( pPragma->ePragTyp ){ + +#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) && !defined(SQLITE_OMIT_DEPRECATED) /* - ** PRAGMA [database.]default_cache_size - ** PRAGMA [database.]default_cache_size=N + ** PRAGMA [schema.]default_cache_size + ** PRAGMA [schema.]default_cache_size=N ** ** The first form reports the current persistent setting for the ** page cache size. The value returned is the maximum number of ** pages in the page cache. The second form sets both the current ** page cache size value and the persistent page cache size value ** stored in the database file. ** - ** The default cache size is stored in meta-value 2 of page 1 of the - ** database file. The cache size is actually the absolute value of - ** this memory location. The sign of meta-value 2 determines the - ** synchronous setting. A negative value means synchronous is off - ** and a positive value means synchronous is on. + ** Older versions of SQLite would set the default cache size to a + ** negative number to indicate synchronous=OFF. These days, synchronous + ** is always on by default regardless of the sign of the default cache + ** size. But continue to take the absolute value of the default cache + ** size of historical compatibility. 
*/ - if( sqlite3StrICmp(zLeft,"default_cache_size")==0 ){ + case PragTyp_DEFAULT_CACHE_SIZE: { + static const int iLn = VDBE_OFFSET_LINENO(2); static const VdbeOpList getCacheSize[] = { { OP_Transaction, 0, 0, 0}, /* 0 */ { OP_ReadCookie, 0, 1, BTREE_DEFAULT_CACHE_SIZE}, /* 1 */ - { OP_IfPos, 1, 7, 0}, + { OP_IfPos, 1, 8, 0}, { OP_Integer, 0, 2, 0}, { OP_Subtract, 1, 2, 1}, - { OP_IfPos, 1, 7, 0}, + { OP_IfPos, 1, 8, 0}, { OP_Integer, 0, 1, 0}, /* 6 */ + { OP_Noop, 0, 0, 0}, { OP_ResultRow, 1, 1, 0}, }; - int addr; - if( sqlite3ReadSchema(pParse) ) goto pragma_out; + VdbeOp *aOp; sqlite3VdbeUsesBtree(v, iDb); if( !zRight ){ - sqlite3VdbeSetNumCols(v, 1); - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "cache_size", SQLITE_STATIC); + setOneColumnName(v, "cache_size"); pParse->nMem += 2; - addr = sqlite3VdbeAddOpList(v, ArraySize(getCacheSize), getCacheSize); - sqlite3VdbeChangeP1(v, addr, iDb); - sqlite3VdbeChangeP1(v, addr+1, iDb); - sqlite3VdbeChangeP1(v, addr+6, SQLITE_DEFAULT_CACHE_SIZE); + sqlite3VdbeVerifyNoMallocRequired(v, ArraySize(getCacheSize)); + aOp = sqlite3VdbeAddOpList(v, ArraySize(getCacheSize), getCacheSize, iLn); + if( ONLY_IF_REALLOC_STRESS(aOp==0) ) break; + aOp[0].p1 = iDb; + aOp[1].p1 = iDb; + aOp[6].p1 = SQLITE_DEFAULT_CACHE_SIZE; }else{ - int size = atoi(zRight); - if( size<0 ) size = -size; + int size = sqlite3AbsInt32(sqlite3Atoi(zRight)); sqlite3BeginWriteOperation(pParse, 0, iDb); - sqlite3VdbeAddOp2(v, OP_Integer, size, 1); - sqlite3VdbeAddOp3(v, OP_ReadCookie, iDb, 2, BTREE_DEFAULT_CACHE_SIZE); - addr = sqlite3VdbeAddOp2(v, OP_IfPos, 2, 0); - sqlite3VdbeAddOp2(v, OP_Integer, -size, 1); - sqlite3VdbeJumpHere(v, addr); - sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, BTREE_DEFAULT_CACHE_SIZE, 1); + sqlite3VdbeAddOp3(v, OP_SetCookie, iDb, BTREE_DEFAULT_CACHE_SIZE, size); + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); pDb->pSchema->cache_size = size; sqlite3BtreeSetCacheSize(pDb->pBt, pDb->pSchema->cache_size); } - }else + break; + } +#endif /* !SQLITE_OMIT_PAGER_PRAGMAS && !SQLITE_OMIT_DEPRECATED */ +#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) /* - ** PRAGMA [database.]page_size - ** PRAGMA [database.]page_size=N + ** PRAGMA [schema.]page_size + ** PRAGMA [schema.]page_size=N ** ** The first form reports the current setting for the ** database page size in bytes. The second form sets the ** database page size value. The value can only be set if ** the database has not yet been created. */ - if( sqlite3StrICmp(zLeft,"page_size")==0 ){ + case PragTyp_PAGE_SIZE: { Btree *pBt = pDb->pBt; assert( pBt!=0 ); if( !zRight ){ int size = ALWAYS(pBt) ? sqlite3BtreeGetPageSize(pBt) : 0; - returnSingleInt(pParse, "page_size", size); + returnSingleInt(v, "page_size", size); }else{ /* Malloc may fail when setting the page-size, as there is an internal ** buffer that the pager module resizes using sqlite3_realloc(). */ - db->nextPagesize = atoi(zRight); - if( SQLITE_NOMEM==sqlite3BtreeSetPageSize(pBt, db->nextPagesize, -1, 0) ){ - db->mallocFailed = 1; - } - } - }else - - /* - ** PRAGMA [database.]max_page_count - ** PRAGMA [database.]max_page_count=N - ** - ** The first form reports the current setting for the - ** maximum number of pages in the database file. The - ** second form attempts to change this setting. Both - ** forms return the current setting. 
- */ - if( sqlite3StrICmp(zLeft,"max_page_count")==0 ){ - Btree *pBt = pDb->pBt; - int newMax = 0; - assert( pBt!=0 ); - if( zRight ){ - newMax = atoi(zRight); - } - if( ALWAYS(pBt) ){ - newMax = sqlite3BtreeMaxPageCount(pBt, newMax); - } - returnSingleInt(pParse, "max_page_count", newMax); - }else - - /* - ** PRAGMA [database.]secure_delete - ** PRAGMA [database.]secure_delete=ON/OFF + db->nextPagesize = sqlite3Atoi(zRight); + if( SQLITE_NOMEM==sqlite3BtreeSetPageSize(pBt, db->nextPagesize,-1,0) ){ + sqlite3OomFault(db); + } + } + break; + } + + /* + ** PRAGMA [schema.]secure_delete + ** PRAGMA [schema.]secure_delete=ON/OFF ** ** The first form reports the current setting for the ** secure_delete flag. The second form changes the secure_delete ** flag setting and reports thenew value. */ - if( sqlite3StrICmp(zLeft,"secure_delete")==0 ){ + case PragTyp_SECURE_DELETE: { Btree *pBt = pDb->pBt; int b = -1; assert( pBt!=0 ); if( zRight ){ - b = getBoolean(zRight); + b = sqlite3GetBoolean(zRight, 0); } if( pId2->n==0 && b>=0 ){ int ii; for(ii=0; ii<db->nDb; ii++){ sqlite3BtreeSecureDelete(db->aDb[ii].pBt, b); } } b = sqlite3BtreeSecureDelete(pBt, b); - returnSingleInt(pParse, "secure_delete", b); - }else + returnSingleInt(v, "secure_delete", b); + break; + } /* - ** PRAGMA [database.]page_count + ** PRAGMA [schema.]max_page_count + ** PRAGMA [schema.]max_page_count=N + ** + ** The first form reports the current setting for the + ** maximum number of pages in the database file. The + ** second form attempts to change this setting. Both + ** forms return the current setting. + ** + ** The absolute value of N is used. This is undocumented and might + ** change. The only purpose is to provide an easy way to test + ** the sqlite3AbsInt32() function. + ** + ** PRAGMA [schema.]page_count ** ** Return the number of pages in the specified database. */ - if( sqlite3StrICmp(zLeft,"page_count")==0 ){ + case PragTyp_PAGE_COUNT: { int iReg; - if( sqlite3ReadSchema(pParse) ) goto pragma_out; sqlite3CodeVerifySchema(pParse, iDb); iReg = ++pParse->nMem; - sqlite3VdbeAddOp2(v, OP_Pagecount, iDb, iReg); + if( sqlite3Tolower(zLeft[0])=='p' ){ + sqlite3VdbeAddOp2(v, OP_Pagecount, iDb, iReg); + }else{ + sqlite3VdbeAddOp3(v, OP_MaxPgcnt, iDb, iReg, + sqlite3AbsInt32(sqlite3Atoi(zRight))); + } sqlite3VdbeAddOp2(v, OP_ResultRow, iReg, 1); sqlite3VdbeSetNumCols(v, 1); - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "page_count", SQLITE_STATIC); - }else + sqlite3VdbeSetColName(v, 0, COLNAME_NAME, zLeft, SQLITE_TRANSIENT); + break; + } /* - ** PRAGMA [database.]locking_mode - ** PRAGMA [database.]locking_mode = (normal|exclusive) + ** PRAGMA [schema.]locking_mode + ** PRAGMA [schema.]locking_mode = (normal|exclusive) */ - if( sqlite3StrICmp(zLeft,"locking_mode")==0 ){ + case PragTyp_LOCKING_MODE: { const char *zRet = "normal"; int eMode = getLockingMode(zRight); if( pId2->n==0 && eMode==PAGER_LOCKINGMODE_QUERY ){ /* Simple "PRAGMA locking_mode;" statement. 
This is a query for @@ -78721,211 +107991,266 @@ } pPager = sqlite3BtreePager(pDb->pBt); eMode = sqlite3PagerLockingMode(pPager, eMode); } - assert(eMode==PAGER_LOCKINGMODE_NORMAL||eMode==PAGER_LOCKINGMODE_EXCLUSIVE); + assert( eMode==PAGER_LOCKINGMODE_NORMAL + || eMode==PAGER_LOCKINGMODE_EXCLUSIVE ); if( eMode==PAGER_LOCKINGMODE_EXCLUSIVE ){ zRet = "exclusive"; } - sqlite3VdbeSetNumCols(v, 1); - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "locking_mode", SQLITE_STATIC); - sqlite3VdbeAddOp4(v, OP_String8, 0, 1, 0, zRet, 0); - sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 1); - }else + returnSingleText(v, "locking_mode", zRet); + break; + } /* - ** PRAGMA [database.]journal_mode - ** PRAGMA [database.]journal_mode = (delete|persist|off|truncate|memory) + ** PRAGMA [schema.]journal_mode + ** PRAGMA [schema.]journal_mode = + ** (delete|persist|off|truncate|memory|wal|off) */ - if( sqlite3StrICmp(zLeft,"journal_mode")==0 ){ - int eMode; - static char * const azModeName[] = { - "delete", "persist", "off", "truncate", "memory" - }; + case PragTyp_JOURNAL_MODE: { + int eMode; /* One of the PAGER_JOURNALMODE_XXX symbols */ + int ii; /* Loop counter */ + setOneColumnName(v, "journal_mode"); if( zRight==0 ){ + /* If there is no "=MODE" part of the pragma, do a query for the + ** current mode */ eMode = PAGER_JOURNALMODE_QUERY; }else{ + const char *zMode; int n = sqlite3Strlen30(zRight); - eMode = sizeof(azModeName)/sizeof(azModeName[0]) - 1; - while( eMode>=0 && sqlite3StrNICmp(zRight, azModeName[eMode], n)!=0 ){ - eMode--; - } - } - if( pId2->n==0 && eMode==PAGER_JOURNALMODE_QUERY ){ - /* Simple "PRAGMA journal_mode;" statement. This is a query for - ** the current default journal mode (which may be different to - ** the journal-mode of the main database). - */ - eMode = db->dfltJournalMode; - }else{ - Pager *pPager; - if( pId2->n==0 ){ - /* This indicates that no database name was specified as part - ** of the PRAGMA command. In this case the journal-mode must be - ** set on all attached databases, as well as the main db file. - ** - ** Also, the sqlite3.dfltJournalMode variable is set so that - ** any subsequently attached databases also use the specified - ** journal mode. 
- */ - int ii; - assert(pDb==&db->aDb[0]); - for(ii=1; ii<db->nDb; ii++){ - if( db->aDb[ii].pBt ){ - pPager = sqlite3BtreePager(db->aDb[ii].pBt); - sqlite3PagerJournalMode(pPager, eMode); - } - } - db->dfltJournalMode = (u8)eMode; - } - pPager = sqlite3BtreePager(pDb->pBt); - eMode = sqlite3PagerJournalMode(pPager, eMode); - } - assert( eMode==PAGER_JOURNALMODE_DELETE - || eMode==PAGER_JOURNALMODE_TRUNCATE - || eMode==PAGER_JOURNALMODE_PERSIST - || eMode==PAGER_JOURNALMODE_OFF - || eMode==PAGER_JOURNALMODE_MEMORY ); - sqlite3VdbeSetNumCols(v, 1); - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "journal_mode", SQLITE_STATIC); - sqlite3VdbeAddOp4(v, OP_String8, 0, 1, 0, - azModeName[eMode], P4_STATIC); + for(eMode=0; (zMode = sqlite3JournalModename(eMode))!=0; eMode++){ + if( sqlite3StrNICmp(zRight, zMode, n)==0 ) break; + } + if( !zMode ){ + /* If the "=MODE" part does not match any known journal mode, + ** then do a query */ + eMode = PAGER_JOURNALMODE_QUERY; + } + } + if( eMode==PAGER_JOURNALMODE_QUERY && pId2->n==0 ){ + /* Convert "PRAGMA journal_mode" into "PRAGMA main.journal_mode" */ + iDb = 0; + pId2->n = 1; + } + for(ii=db->nDb-1; ii>=0; ii--){ + if( db->aDb[ii].pBt && (ii==iDb || pId2->n==0) ){ + sqlite3VdbeUsesBtree(v, ii); + sqlite3VdbeAddOp3(v, OP_JournalMode, ii, 1, eMode); + } + } sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 1); - }else + break; + } /* - ** PRAGMA [database.]journal_size_limit - ** PRAGMA [database.]journal_size_limit=N + ** PRAGMA [schema.]journal_size_limit + ** PRAGMA [schema.]journal_size_limit=N ** ** Get or set the size limit on rollback journal files. */ - if( sqlite3StrICmp(zLeft,"journal_size_limit")==0 ){ + case PragTyp_JOURNAL_SIZE_LIMIT: { Pager *pPager = sqlite3BtreePager(pDb->pBt); i64 iLimit = -2; if( zRight ){ - sqlite3Atoi64(zRight, &iLimit); + sqlite3DecOrHexToI64(zRight, &iLimit); if( iLimit<-1 ) iLimit = -1; } iLimit = sqlite3PagerJournalSizeLimit(pPager, iLimit); - returnSingleInt(pParse, "journal_size_limit", iLimit); - }else + returnSingleInt(v, "journal_size_limit", iLimit); + break; + } #endif /* SQLITE_OMIT_PAGER_PRAGMAS */ /* - ** PRAGMA [database.]auto_vacuum - ** PRAGMA [database.]auto_vacuum=N + ** PRAGMA [schema.]auto_vacuum + ** PRAGMA [schema.]auto_vacuum=N ** ** Get or set the value of the database 'auto-vacuum' parameter. ** The value is one of: 0 NONE 1 FULL 2 INCREMENTAL */ #ifndef SQLITE_OMIT_AUTOVACUUM - if( sqlite3StrICmp(zLeft,"auto_vacuum")==0 ){ + case PragTyp_AUTO_VACUUM: { Btree *pBt = pDb->pBt; assert( pBt!=0 ); - if( sqlite3ReadSchema(pParse) ){ - goto pragma_out; - } if( !zRight ){ - int auto_vacuum; - if( ALWAYS(pBt) ){ - auto_vacuum = sqlite3BtreeGetAutoVacuum(pBt); - }else{ - auto_vacuum = SQLITE_DEFAULT_AUTOVACUUM; - } - returnSingleInt(pParse, "auto_vacuum", auto_vacuum); + returnSingleInt(v, "auto_vacuum", sqlite3BtreeGetAutoVacuum(pBt)); }else{ int eAuto = getAutoVacuum(zRight); assert( eAuto>=0 && eAuto<=2 ); db->nextAutovac = (u8)eAuto; - if( ALWAYS(eAuto>=0) ){ - /* Call SetAutoVacuum() to set initialize the internal auto and - ** incr-vacuum flags. This is required in case this connection - ** creates the database file. It is important that it is created - ** as an auto-vacuum capable db. - */ - int rc = sqlite3BtreeSetAutoVacuum(pBt, eAuto); - if( rc==SQLITE_OK && (eAuto==1 || eAuto==2) ){ - /* When setting the auto_vacuum mode to either "full" or - ** "incremental", write the value of meta[6] in the database - ** file. 
Before writing to meta[6], check that meta[3] indicates - ** that this really is an auto-vacuum capable database. - */ - static const VdbeOpList setMeta6[] = { - { OP_Transaction, 0, 1, 0}, /* 0 */ - { OP_ReadCookie, 0, 1, BTREE_LARGEST_ROOT_PAGE}, - { OP_If, 1, 0, 0}, /* 2 */ - { OP_Halt, SQLITE_OK, OE_Abort, 0}, /* 3 */ - { OP_Integer, 0, 1, 0}, /* 4 */ - { OP_SetCookie, 0, BTREE_INCR_VACUUM, 1}, /* 5 */ - }; - int iAddr; - iAddr = sqlite3VdbeAddOpList(v, ArraySize(setMeta6), setMeta6); - sqlite3VdbeChangeP1(v, iAddr, iDb); - sqlite3VdbeChangeP1(v, iAddr+1, iDb); - sqlite3VdbeChangeP2(v, iAddr+2, iAddr+4); - sqlite3VdbeChangeP1(v, iAddr+4, eAuto-1); - sqlite3VdbeChangeP1(v, iAddr+5, iDb); - sqlite3VdbeUsesBtree(v, iDb); - } - } - } - }else + /* Call SetAutoVacuum() to set initialize the internal auto and + ** incr-vacuum flags. This is required in case this connection + ** creates the database file. It is important that it is created + ** as an auto-vacuum capable db. + */ + rc = sqlite3BtreeSetAutoVacuum(pBt, eAuto); + if( rc==SQLITE_OK && (eAuto==1 || eAuto==2) ){ + /* When setting the auto_vacuum mode to either "full" or + ** "incremental", write the value of meta[6] in the database + ** file. Before writing to meta[6], check that meta[3] indicates + ** that this really is an auto-vacuum capable database. + */ + static const int iLn = VDBE_OFFSET_LINENO(2); + static const VdbeOpList setMeta6[] = { + { OP_Transaction, 0, 1, 0}, /* 0 */ + { OP_ReadCookie, 0, 1, BTREE_LARGEST_ROOT_PAGE}, + { OP_If, 1, 0, 0}, /* 2 */ + { OP_Halt, SQLITE_OK, OE_Abort, 0}, /* 3 */ + { OP_SetCookie, 0, BTREE_INCR_VACUUM, 0}, /* 4 */ + }; + VdbeOp *aOp; + int iAddr = sqlite3VdbeCurrentAddr(v); + sqlite3VdbeVerifyNoMallocRequired(v, ArraySize(setMeta6)); + aOp = sqlite3VdbeAddOpList(v, ArraySize(setMeta6), setMeta6, iLn); + if( ONLY_IF_REALLOC_STRESS(aOp==0) ) break; + aOp[0].p1 = iDb; + aOp[1].p1 = iDb; + aOp[2].p2 = iAddr+4; + aOp[4].p1 = iDb; + aOp[4].p3 = eAuto - 1; + sqlite3VdbeUsesBtree(v, iDb); + } + } + break; + } #endif /* - ** PRAGMA [database.]incremental_vacuum(N) + ** PRAGMA [schema.]incremental_vacuum(N) ** ** Do N steps of incremental vacuuming on a database. */ #ifndef SQLITE_OMIT_AUTOVACUUM - if( sqlite3StrICmp(zLeft,"incremental_vacuum")==0 ){ + case PragTyp_INCREMENTAL_VACUUM: { int iLimit, addr; - if( sqlite3ReadSchema(pParse) ){ - goto pragma_out; - } if( zRight==0 || !sqlite3GetInt32(zRight, &iLimit) || iLimit<=0 ){ iLimit = 0x7fffffff; } sqlite3BeginWriteOperation(pParse, 0, iDb); sqlite3VdbeAddOp2(v, OP_Integer, iLimit, 1); - addr = sqlite3VdbeAddOp1(v, OP_IncrVacuum, iDb); + addr = sqlite3VdbeAddOp1(v, OP_IncrVacuum, iDb); VdbeCoverage(v); sqlite3VdbeAddOp1(v, OP_ResultRow, 1); sqlite3VdbeAddOp2(v, OP_AddImm, 1, -1); - sqlite3VdbeAddOp2(v, OP_IfPos, 1, addr); + sqlite3VdbeAddOp2(v, OP_IfPos, 1, addr); VdbeCoverage(v); sqlite3VdbeJumpHere(v, addr); - }else + break; + } #endif #ifndef SQLITE_OMIT_PAGER_PRAGMAS /* - ** PRAGMA [database.]cache_size - ** PRAGMA [database.]cache_size=N + ** PRAGMA [schema.]cache_size + ** PRAGMA [schema.]cache_size=N ** ** The first form reports the current local setting for the - ** page cache size. The local setting can be different from - ** the persistent cache size value that is stored in the database - ** file itself. The value returned is the maximum number of - ** pages in the page cache. The second form sets the local - ** page cache size value. 
It does not change the persistent - ** cache size stored on the disk so the cache size will revert - ** to its default value when the database is closed and reopened. - ** N should be a positive integer. + ** page cache size. The second form sets the local + ** page cache size value. If N is positive then that is the + ** number of pages in the cache. If N is negative, then the + ** number of pages is adjusted so that the cache uses -N kibibytes + ** of memory. */ - if( sqlite3StrICmp(zLeft,"cache_size")==0 ){ - if( sqlite3ReadSchema(pParse) ) goto pragma_out; + case PragTyp_CACHE_SIZE: { + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); if( !zRight ){ - returnSingleInt(pParse, "cache_size", pDb->pSchema->cache_size); + returnSingleInt(v, "cache_size", pDb->pSchema->cache_size); }else{ - int size = atoi(zRight); - if( size<0 ) size = -size; + int size = sqlite3Atoi(zRight); pDb->pSchema->cache_size = size; sqlite3BtreeSetCacheSize(pDb->pBt, pDb->pSchema->cache_size); } - }else + break; + } + + /* + ** PRAGMA [schema.]cache_spill + ** PRAGMA cache_spill=BOOLEAN + ** PRAGMA [schema.]cache_spill=N + ** + ** The first form reports the current local setting for the + ** page cache spill size. The second form turns cache spill on + ** or off. When turnning cache spill on, the size is set to the + ** current cache_size. The third form sets a spill size that + ** may be different form the cache size. + ** If N is positive then that is the + ** number of pages in the cache. If N is negative, then the + ** number of pages is adjusted so that the cache uses -N kibibytes + ** of memory. + ** + ** If the number of cache_spill pages is less then the number of + ** cache_size pages, no spilling occurs until the page count exceeds + ** the number of cache_size pages. + ** + ** The cache_spill=BOOLEAN setting applies to all attached schemas, + ** not just the schema specified. + */ + case PragTyp_CACHE_SPILL: { + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); + if( !zRight ){ + returnSingleInt(v, "cache_spill", + (db->flags & SQLITE_CacheSpill)==0 ? 0 : + sqlite3BtreeSetSpillSize(pDb->pBt,0)); + }else{ + int size = 1; + if( sqlite3GetInt32(zRight, &size) ){ + sqlite3BtreeSetSpillSize(pDb->pBt, size); + } + if( sqlite3GetBoolean(zRight, size!=0) ){ + db->flags |= SQLITE_CacheSpill; + }else{ + db->flags &= ~SQLITE_CacheSpill; + } + setAllPagerFlags(db); + } + break; + } + + /* + ** PRAGMA [schema.]mmap_size(N) + ** + ** Used to set mapping size limit. The mapping size limit is + ** used to limit the aggregate size of all memory mapped regions of the + ** database file. If this parameter is set to zero, then memory mapping + ** is not used at all. If N is negative, then the default memory map + ** limit determined by sqlite3_config(SQLITE_CONFIG_MMAP_SIZE) is set. + ** The parameter N is measured in bytes. + ** + ** This value is advisory. The underlying VFS is free to memory map + ** as little or as much as it wants. Except, if N is set to 0 then the + ** upper layers will never invoke the xFetch interfaces to the VFS. 
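**
** Illustrative sketch of querying the limit actually in effect, mirroring
** the file-control used below (assumes the public API and a hypothetical
** open handle "db"; a negative input value leaves the limit unchanged and
** merely reads it back):
**
**     sqlite3_int64 szMmap = -1;
**     if( sqlite3_file_control(db, "main", SQLITE_FCNTL_MMAP_SIZE, &szMmap)
**            ==SQLITE_OK ){
**       printf("effective mmap limit: %lld bytes\n", (long long)szMmap);
**     }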
+ */ + case PragTyp_MMAP_SIZE: { + sqlite3_int64 sz; +#if SQLITE_MAX_MMAP_SIZE>0 + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); + if( zRight ){ + int ii; + sqlite3DecOrHexToI64(zRight, &sz); + if( sz<0 ) sz = sqlite3GlobalConfig.szMmap; + if( pId2->n==0 ) db->szMmap = sz; + for(ii=db->nDb-1; ii>=0; ii--){ + if( db->aDb[ii].pBt && (ii==iDb || pId2->n==0) ){ + sqlite3BtreeSetMmapLimit(db->aDb[ii].pBt, sz); + } + } + } + sz = -1; + rc = sqlite3_file_control(db, zDb, SQLITE_FCNTL_MMAP_SIZE, &sz); +#else + sz = 0; + rc = SQLITE_OK; +#endif + if( rc==SQLITE_OK ){ + returnSingleInt(v, "mmap_size", sz); + }else if( rc!=SQLITE_NOTFOUND ){ + pParse->nErr++; + pParse->rc = rc; + } + break; + } /* ** PRAGMA temp_store ** PRAGMA temp_store = "default"|"memory"|"file" ** @@ -78934,17 +108259,18 @@ ** value will be restored the next time the database is opened. ** ** Note that it is possible for the library compile-time options to ** override this setting */ - if( sqlite3StrICmp(zLeft, "temp_store")==0 ){ + case PragTyp_TEMP_STORE: { if( !zRight ){ - returnSingleInt(pParse, "temp_store", db->temp_store); + returnSingleInt(v, "temp_store", db->temp_store); }else{ changeTempStorage(pParse, zRight); } - }else + break; + } /* ** PRAGMA temp_store_directory ** PRAGMA temp_store_directory = ""|"directory_name" ** @@ -78952,23 +108278,16 @@ ** the value sets a specific directory to be used for temporary files. ** Setting to a null string reverts to the default temporary directory search. ** If temporary directory is changed, then invalidateTempStorage. ** */ - if( sqlite3StrICmp(zLeft, "temp_store_directory")==0 ){ + case PragTyp_TEMP_STORE_DIRECTORY: { if( !zRight ){ - if( sqlite3_temp_directory ){ - sqlite3VdbeSetNumCols(v, 1); - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, - "temp_store_directory", SQLITE_STATIC); - sqlite3VdbeAddOp4(v, OP_String8, 0, 1, 0, sqlite3_temp_directory, 0); - sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 1); - } + returnSingleText(v, "temp_store_directory", sqlite3_temp_directory); }else{ #ifndef SQLITE_OMIT_WSD if( zRight[0] ){ - int rc; int res; rc = sqlite3OsAccess(db->pVfs, zRight, SQLITE_ACCESS_READWRITE, &res); if( rc!=SQLITE_OK || res==0 ){ sqlite3ErrorMsg(pParse, "not a writable directory"); goto pragma_out; @@ -78980,49 +108299,75 @@ ){ invalidateTempStorage(pParse); } sqlite3_free(sqlite3_temp_directory); if( zRight[0] ){ - sqlite3_temp_directory = sqlite3DbStrDup(0, zRight); + sqlite3_temp_directory = sqlite3_mprintf("%s", zRight); }else{ sqlite3_temp_directory = 0; } #endif /* SQLITE_OMIT_WSD */ } - }else - -#if !defined(SQLITE_ENABLE_LOCKING_STYLE) -# if defined(__APPLE__) -# define SQLITE_ENABLE_LOCKING_STYLE 1 -# else -# define SQLITE_ENABLE_LOCKING_STYLE 0 -# endif -#endif + break; + } + +#if SQLITE_OS_WIN + /* + ** PRAGMA data_store_directory + ** PRAGMA data_store_directory = ""|"directory_name" + ** + ** Return or set the local value of the data_store_directory flag. Changing + ** the value sets a specific directory to be used for database files that + ** were specified with a relative pathname. Setting to a null string reverts + ** to the default database directory, which for database files specified with + ** a relative path will probably be based on the current directory for the + ** process. Database file specified with an absolute path are not impacted + ** by this setting, regardless of its value. 
+ ** + */ + case PragTyp_DATA_STORE_DIRECTORY: { + if( !zRight ){ + returnSingleText(v, "data_store_directory", sqlite3_data_directory); + }else{ +#ifndef SQLITE_OMIT_WSD + if( zRight[0] ){ + int res; + rc = sqlite3OsAccess(db->pVfs, zRight, SQLITE_ACCESS_READWRITE, &res); + if( rc!=SQLITE_OK || res==0 ){ + sqlite3ErrorMsg(pParse, "not a writable directory"); + goto pragma_out; + } + } + sqlite3_free(sqlite3_data_directory); + if( zRight[0] ){ + sqlite3_data_directory = sqlite3_mprintf("%s", zRight); + }else{ + sqlite3_data_directory = 0; + } +#endif /* SQLITE_OMIT_WSD */ + } + break; + } +#endif + #if SQLITE_ENABLE_LOCKING_STYLE /* - ** PRAGMA [database.]lock_proxy_file - ** PRAGMA [database.]lock_proxy_file = ":auto:"|"lock_file_path" - ** - ** Return or set the value of the lock_proxy_file flag. Changing - ** the value sets a specific file to be used for database access locks. - ** - */ - if( sqlite3StrICmp(zLeft, "lock_proxy_file")==0 ){ + ** PRAGMA [schema.]lock_proxy_file + ** PRAGMA [schema.]lock_proxy_file = ":auto:"|"lock_file_path" + ** + ** Return or set the value of the lock_proxy_file flag. Changing + ** the value sets a specific file to be used for database access locks. + ** + */ + case PragTyp_LOCK_PROXY_FILE: { if( !zRight ){ Pager *pPager = sqlite3BtreePager(pDb->pBt); char *proxy_file_path = NULL; sqlite3_file *pFile = sqlite3PagerFile(pPager); - sqlite3OsFileControl(pFile, SQLITE_GET_LOCKPROXYFILE, + sqlite3OsFileControlHint(pFile, SQLITE_GET_LOCKPROXYFILE, &proxy_file_path); - - if( proxy_file_path ){ - sqlite3VdbeSetNumCols(v, 1); - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, - "lock_proxy_file", SQLITE_STATIC); - sqlite3VdbeAddOp4(v, OP_String8, 0, 1, 0, proxy_file_path, 0); - sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 1); - } + returnSingleText(v, "lock_proxy_file", proxy_file_path); }else{ Pager *pPager = sqlite3BtreePager(pDb->pBt); sqlite3_file *pFile = sqlite3PagerFile(pPager); int res; if( zRight[0] ){ @@ -79035,42 +108380,75 @@ if( res!=SQLITE_OK ){ sqlite3ErrorMsg(pParse, "failed to set lock proxy file"); goto pragma_out; } } - }else + break; + } #endif /* SQLITE_ENABLE_LOCKING_STYLE */ /* - ** PRAGMA [database.]synchronous - ** PRAGMA [database.]synchronous=OFF|ON|NORMAL|FULL + ** PRAGMA [schema.]synchronous + ** PRAGMA [schema.]synchronous=OFF|ON|NORMAL|FULL|EXTRA ** ** Return or set the local value of the synchronous flag. Changing ** the local value does not make changes to the disk file and the ** default value will be restored the next time the database is ** opened. 
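**
** Illustrative mapping, assuming the public API and a hypothetical open
** handle "db". The names accepted are those recognized by getSafetyLevel()
** above (OFF=0, NORMAL=1, FULL=2, EXTRA=3); internally the level is stored
** as value+1 in pDb->safety_level:
**
**     sqlite3_exec(db, "PRAGMA synchronous=EXTRA;", 0, 0, 0);
**     /* A subsequent "PRAGMA synchronous;" now reports 3 */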
*/ - if( sqlite3StrICmp(zLeft,"synchronous")==0 ){ - if( sqlite3ReadSchema(pParse) ) goto pragma_out; + case PragTyp_SYNCHRONOUS: { if( !zRight ){ - returnSingleInt(pParse, "synchronous", pDb->safety_level-1); + returnSingleInt(v, "synchronous", pDb->safety_level-1); }else{ if( !db->autoCommit ){ sqlite3ErrorMsg(pParse, "Safety level may not be changed inside a transaction"); }else{ - pDb->safety_level = getSafetyLevel(zRight)+1; + int iLevel = (getSafetyLevel(zRight,0,1)+1) & PAGER_SYNCHRONOUS_MASK; + if( iLevel==0 ) iLevel = 1; + pDb->safety_level = iLevel; + setAllPagerFlags(db); } } - }else + break; + } #endif /* SQLITE_OMIT_PAGER_PRAGMAS */ #ifndef SQLITE_OMIT_FLAG_PRAGMAS - if( flagPragma(pParse, zLeft, zRight) ){ - /* The flagPragma() subroutine also generates any necessary code - ** there is nothing more to do here */ - }else + case PragTyp_FLAG: { + if( zRight==0 ){ + returnSingleInt(v, pPragma->zName, (db->flags & pPragma->iArg)!=0 ); + }else{ + int mask = pPragma->iArg; /* Mask of bits to set or clear. */ + if( db->autoCommit==0 ){ + /* Foreign key support may not be enabled or disabled while not + ** in auto-commit mode. */ + mask &= ~(SQLITE_ForeignKeys); + } +#if SQLITE_USER_AUTHENTICATION + if( db->auth.authLevel==UAUTH_User ){ + /* Do not allow non-admin users to modify the schema arbitrarily */ + mask &= ~(SQLITE_WriteSchema); + } +#endif + + if( sqlite3GetBoolean(zRight, 0) ){ + db->flags |= mask; + }else{ + db->flags &= ~mask; + if( mask==SQLITE_DeferFKs ) db->nDeferredImmCons = 0; + } + + /* Many of the flag-pragmas modify the code generated by the SQL + ** compiler (eg. count_changes). So add an opcode to expire all + ** compiled SQL statements after modifying a pragma value. + */ + sqlite3VdbeAddOp2(v, OP_Expire, 0, 0); + setAllPagerFlags(db); + } + break; + } #endif /* SQLITE_OMIT_FLAG_PRAGMAS */ #ifndef SQLITE_OMIT_SCHEMA_PRAGMAS /* ** PRAGMA table_info(<table>) @@ -79082,238 +108460,386 @@ ** name: Column name ** type: Column declaration type. ** notnull: True if 'NOT NULL' is part of column declaration ** dflt_value: The default value for the column, if any. */ - if( sqlite3StrICmp(zLeft, "table_info")==0 && zRight ){ + case PragTyp_TABLE_INFO: if( zRight ){ Table *pTab; - if( sqlite3ReadSchema(pParse) ) goto pragma_out; pTab = sqlite3FindTable(db, zRight, zDb); if( pTab ){ - int i; + static const char *azCol[] = { + "cid", "name", "type", "notnull", "dflt_value", "pk" + }; + int i, k; int nHidden = 0; Column *pCol; - sqlite3VdbeSetNumCols(v, 6); + Index *pPk = sqlite3PrimaryKeyIndex(pTab); pParse->nMem = 6; - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "cid", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "name", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "type", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 3, COLNAME_NAME, "notnull", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 4, COLNAME_NAME, "dflt_value", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 5, COLNAME_NAME, "pk", SQLITE_STATIC); + sqlite3CodeVerifySchema(pParse, iDb); + setAllColumnNames(v, 6, azCol); assert( 6==ArraySize(azCol) ); sqlite3ViewGetColumnNames(pParse, pTab); for(i=0, pCol=pTab->aCol; i<pTab->nCol; i++, pCol++){ if( IsHiddenColumn(pCol) ){ nHidden++; continue; } - sqlite3VdbeAddOp2(v, OP_Integer, i-nHidden, 1); - sqlite3VdbeAddOp4(v, OP_String8, 0, 2, 0, pCol->zName, 0); - sqlite3VdbeAddOp4(v, OP_String8, 0, 3, 0, - pCol->zType ? pCol->zType : "", 0); - sqlite3VdbeAddOp2(v, OP_Integer, (pCol->notNull ? 
1 : 0), 4); - if( pCol->zDflt ){ - sqlite3VdbeAddOp4(v, OP_String8, 0, 5, 0, (char*)pCol->zDflt, 0); + if( (pCol->colFlags & COLFLAG_PRIMKEY)==0 ){ + k = 0; + }else if( pPk==0 ){ + k = 1; }else{ - sqlite3VdbeAddOp2(v, OP_Null, 0, 5); + for(k=1; k<=pTab->nCol && pPk->aiColumn[k-1]!=i; k++){} } - sqlite3VdbeAddOp2(v, OP_Integer, pCol->isPrimKey, 6); + sqlite3VdbeMultiLoad(v, 1, "issisi", + i-nHidden, + pCol->zName, + pCol->zType ? pCol->zType : "", + pCol->notNull ? 1 : 0, + pCol->zDflt, + k); sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 6); } } - }else + } + break; - if( sqlite3StrICmp(zLeft, "index_info")==0 && zRight ){ + case PragTyp_STATS: { + static const char *azCol[] = { "table", "index", "width", "height" }; + Index *pIdx; + HashElem *i; + v = sqlite3GetVdbe(pParse); + pParse->nMem = 4; + sqlite3CodeVerifySchema(pParse, iDb); + setAllColumnNames(v, 4, azCol); assert( 4==ArraySize(azCol) ); + for(i=sqliteHashFirst(&pDb->pSchema->tblHash); i; i=sqliteHashNext(i)){ + Table *pTab = sqliteHashData(i); + sqlite3VdbeMultiLoad(v, 1, "ssii", + pTab->zName, + 0, + (int)sqlite3LogEstToInt(pTab->szTabRow), + (int)sqlite3LogEstToInt(pTab->nRowLogEst)); + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 4); + for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ + sqlite3VdbeMultiLoad(v, 2, "sii", + pIdx->zName, + (int)sqlite3LogEstToInt(pIdx->szIdxRow), + (int)sqlite3LogEstToInt(pIdx->aiRowLogEst[0])); + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 4); + } + } + } + break; + + case PragTyp_INDEX_INFO: if( zRight ){ Index *pIdx; Table *pTab; - if( sqlite3ReadSchema(pParse) ) goto pragma_out; pIdx = sqlite3FindIndex(db, zRight, zDb); if( pIdx ){ + static const char *azCol[] = { + "seqno", "cid", "name", "desc", "coll", "key" + }; int i; + int mx; + if( pPragma->iArg ){ + /* PRAGMA index_xinfo (newer version with more rows and columns) */ + mx = pIdx->nColumn; + pParse->nMem = 6; + }else{ + /* PRAGMA index_info (legacy version) */ + mx = pIdx->nKeyCol; + pParse->nMem = 3; + } pTab = pIdx->pTable; - sqlite3VdbeSetNumCols(v, 3); - pParse->nMem = 3; - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "seqno", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "cid", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "name", SQLITE_STATIC); - for(i=0; i<pIdx->nColumn; i++){ - int cnum = pIdx->aiColumn[i]; - sqlite3VdbeAddOp2(v, OP_Integer, i, 1); - sqlite3VdbeAddOp2(v, OP_Integer, cnum, 2); - assert( pTab->nCol>cnum ); - sqlite3VdbeAddOp4(v, OP_String8, 0, 3, 0, pTab->aCol[cnum].zName, 0); - sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 3); - } - } - }else - - if( sqlite3StrICmp(zLeft, "index_list")==0 && zRight ){ + sqlite3CodeVerifySchema(pParse, iDb); + assert( pParse->nMem<=ArraySize(azCol) ); + setAllColumnNames(v, pParse->nMem, azCol); + for(i=0; i<mx; i++){ + i16 cnum = pIdx->aiColumn[i]; + sqlite3VdbeMultiLoad(v, 1, "iis", i, cnum, + cnum<0 ? 
0 : pTab->aCol[cnum].zName); + if( pPragma->iArg ){ + sqlite3VdbeMultiLoad(v, 4, "isi", + pIdx->aSortOrder[i], + pIdx->azColl[i], + i<pIdx->nKeyCol); + } + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, pParse->nMem); + } + } + } + break; + + case PragTyp_INDEX_LIST: if( zRight ){ Index *pIdx; Table *pTab; - if( sqlite3ReadSchema(pParse) ) goto pragma_out; + int i; pTab = sqlite3FindTable(db, zRight, zDb); if( pTab ){ - v = sqlite3GetVdbe(pParse); - pIdx = pTab->pIndex; - if( pIdx ){ - int i = 0; - sqlite3VdbeSetNumCols(v, 3); - pParse->nMem = 3; - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "seq", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "name", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "unique", SQLITE_STATIC); - while(pIdx){ - sqlite3VdbeAddOp2(v, OP_Integer, i, 1); - sqlite3VdbeAddOp4(v, OP_String8, 0, 2, 0, pIdx->zName, 0); - sqlite3VdbeAddOp2(v, OP_Integer, pIdx->onError!=OE_None, 3); - sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 3); - ++i; - pIdx = pIdx->pNext; - } - } - } - }else - - if( sqlite3StrICmp(zLeft, "database_list")==0 ){ - int i; - if( sqlite3ReadSchema(pParse) ) goto pragma_out; - sqlite3VdbeSetNumCols(v, 3); - pParse->nMem = 3; - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "seq", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "name", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "file", SQLITE_STATIC); + static const char *azCol[] = { + "seq", "name", "unique", "origin", "partial" + }; + v = sqlite3GetVdbe(pParse); + pParse->nMem = 5; + sqlite3CodeVerifySchema(pParse, iDb); + setAllColumnNames(v, 5, azCol); assert( 5==ArraySize(azCol) ); + for(pIdx=pTab->pIndex, i=0; pIdx; pIdx=pIdx->pNext, i++){ + const char *azOrigin[] = { "c", "u", "pk" }; + sqlite3VdbeMultiLoad(v, 1, "isisi", + i, + pIdx->zName, + IsUniqueIndex(pIdx), + azOrigin[pIdx->idxType], + pIdx->pPartIdxWhere!=0); + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 5); + } + } + } + break; + + case PragTyp_DATABASE_LIST: { + static const char *azCol[] = { "seq", "name", "file" }; + int i; + pParse->nMem = 3; + setAllColumnNames(v, 3, azCol); assert( 3==ArraySize(azCol) ); for(i=0; i<db->nDb; i++){ if( db->aDb[i].pBt==0 ) continue; assert( db->aDb[i].zName!=0 ); - sqlite3VdbeAddOp2(v, OP_Integer, i, 1); - sqlite3VdbeAddOp4(v, OP_String8, 0, 2, 0, db->aDb[i].zName, 0); - sqlite3VdbeAddOp4(v, OP_String8, 0, 3, 0, - sqlite3BtreeGetFilename(db->aDb[i].pBt), 0); + sqlite3VdbeMultiLoad(v, 1, "iss", + i, + db->aDb[i].zName, + sqlite3BtreeGetFilename(db->aDb[i].pBt)); sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 3); } - }else + } + break; - if( sqlite3StrICmp(zLeft, "collation_list")==0 ){ + case PragTyp_COLLATION_LIST: { + static const char *azCol[] = { "seq", "name" }; int i = 0; HashElem *p; - sqlite3VdbeSetNumCols(v, 2); pParse->nMem = 2; - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "seq", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "name", SQLITE_STATIC); + setAllColumnNames(v, 2, azCol); assert( 2==ArraySize(azCol) ); for(p=sqliteHashFirst(&db->aCollSeq); p; p=sqliteHashNext(p)){ CollSeq *pColl = (CollSeq *)sqliteHashData(p); - sqlite3VdbeAddOp2(v, OP_Integer, i++, 1); - sqlite3VdbeAddOp4(v, OP_String8, 0, 2, 0, pColl->zName, 0); + sqlite3VdbeMultiLoad(v, 1, "is", i++, pColl->zName); sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 2); } - }else + } + break; #endif /* SQLITE_OMIT_SCHEMA_PRAGMAS */ #ifndef SQLITE_OMIT_FOREIGN_KEY - if( sqlite3StrICmp(zLeft, "foreign_key_list")==0 && zRight ){ + case PragTyp_FOREIGN_KEY_LIST: if( zRight ){ FKey *pFK; Table *pTab; - if( 
sqlite3ReadSchema(pParse) ) goto pragma_out; pTab = sqlite3FindTable(db, zRight, zDb); if( pTab ){ v = sqlite3GetVdbe(pParse); pFK = pTab->pFKey; if( pFK ){ + static const char *azCol[] = { + "id", "seq", "table", "from", "to", "on_update", "on_delete", + "match" + }; int i = 0; - sqlite3VdbeSetNumCols(v, 8); pParse->nMem = 8; - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "id", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "seq", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "table", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 3, COLNAME_NAME, "from", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 4, COLNAME_NAME, "to", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 5, COLNAME_NAME, "on_update", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 6, COLNAME_NAME, "on_delete", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 7, COLNAME_NAME, "match", SQLITE_STATIC); + sqlite3CodeVerifySchema(pParse, iDb); + setAllColumnNames(v, 8, azCol); assert( 8==ArraySize(azCol) ); while(pFK){ int j; for(j=0; j<pFK->nCol; j++){ - char *zCol = pFK->aCol[j].zCol; - char *zOnDelete = (char *)actionName(pFK->aAction[0]); - char *zOnUpdate = (char *)actionName(pFK->aAction[1]); - sqlite3VdbeAddOp2(v, OP_Integer, i, 1); - sqlite3VdbeAddOp2(v, OP_Integer, j, 2); - sqlite3VdbeAddOp4(v, OP_String8, 0, 3, 0, pFK->zTo, 0); - sqlite3VdbeAddOp4(v, OP_String8, 0, 4, 0, - pTab->aCol[pFK->aCol[j].iFrom].zName, 0); - sqlite3VdbeAddOp4(v, zCol ? OP_String8 : OP_Null, 0, 5, 0, zCol, 0); - sqlite3VdbeAddOp4(v, OP_String8, 0, 6, 0, zOnUpdate, 0); - sqlite3VdbeAddOp4(v, OP_String8, 0, 7, 0, zOnDelete, 0); - sqlite3VdbeAddOp4(v, OP_String8, 0, 8, 0, "NONE", 0); + sqlite3VdbeMultiLoad(v, 1, "iissssss", + i, + j, + pFK->zTo, + pTab->aCol[pFK->aCol[j].iFrom].zName, + pFK->aCol[j].zCol, + actionName(pFK->aAction[1]), /* ON UPDATE */ + actionName(pFK->aAction[0]), /* ON DELETE */ + "NONE"); sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 8); } ++i; pFK = pFK->pNextFrom; } } } - }else + } + break; +#endif /* !defined(SQLITE_OMIT_FOREIGN_KEY) */ + +#ifndef SQLITE_OMIT_FOREIGN_KEY +#ifndef SQLITE_OMIT_TRIGGER + case PragTyp_FOREIGN_KEY_CHECK: { + FKey *pFK; /* A foreign key constraint */ + Table *pTab; /* Child table contain "REFERENCES" keyword */ + Table *pParent; /* Parent table that child points to */ + Index *pIdx; /* Index in the parent table */ + int i; /* Loop counter: Foreign key number for pTab */ + int j; /* Loop counter: Field of the foreign key */ + HashElem *k; /* Loop counter: Next table in schema */ + int x; /* result variable */ + int regResult; /* 3 registers to hold a result row */ + int regKey; /* Register to hold key for checking the FK */ + int regRow; /* Registers to hold a row from pTab */ + int addrTop; /* Top of a loop checking foreign keys */ + int addrOk; /* Jump here if the key is OK */ + int *aiCols; /* child to parent column mapping */ + static const char *azCol[] = { "table", "rowid", "parent", "fkid" }; + + regResult = pParse->nMem+1; + pParse->nMem += 4; + regKey = ++pParse->nMem; + regRow = ++pParse->nMem; + v = sqlite3GetVdbe(pParse); + setAllColumnNames(v, 4, azCol); assert( 4==ArraySize(azCol) ); + sqlite3CodeVerifySchema(pParse, iDb); + k = sqliteHashFirst(&db->aDb[iDb].pSchema->tblHash); + while( k ){ + if( zRight ){ + pTab = sqlite3LocateTable(pParse, 0, zRight, zDb); + k = 0; + }else{ + pTab = (Table*)sqliteHashData(k); + k = sqliteHashNext(k); + } + if( pTab==0 || pTab->pFKey==0 ) continue; + sqlite3TableLock(pParse, iDb, pTab->tnum, 0, pTab->zName); + if( pTab->nCol+regRow>pParse->nMem ) 
pParse->nMem = pTab->nCol + regRow; + sqlite3OpenTable(pParse, 0, iDb, pTab, OP_OpenRead); + sqlite3VdbeLoadString(v, regResult, pTab->zName); + for(i=1, pFK=pTab->pFKey; pFK; i++, pFK=pFK->pNextFrom){ + pParent = sqlite3FindTable(db, pFK->zTo, zDb); + if( pParent==0 ) continue; + pIdx = 0; + sqlite3TableLock(pParse, iDb, pParent->tnum, 0, pParent->zName); + x = sqlite3FkLocateIndex(pParse, pParent, pFK, &pIdx, 0); + if( x==0 ){ + if( pIdx==0 ){ + sqlite3OpenTable(pParse, i, iDb, pParent, OP_OpenRead); + }else{ + sqlite3VdbeAddOp3(v, OP_OpenRead, i, pIdx->tnum, iDb); + sqlite3VdbeSetP4KeyInfo(pParse, pIdx); + } + }else{ + k = 0; + break; + } + } + assert( pParse->nErr>0 || pFK==0 ); + if( pFK ) break; + if( pParse->nTab<i ) pParse->nTab = i; + addrTop = sqlite3VdbeAddOp1(v, OP_Rewind, 0); VdbeCoverage(v); + for(i=1, pFK=pTab->pFKey; pFK; i++, pFK=pFK->pNextFrom){ + pParent = sqlite3FindTable(db, pFK->zTo, zDb); + pIdx = 0; + aiCols = 0; + if( pParent ){ + x = sqlite3FkLocateIndex(pParse, pParent, pFK, &pIdx, &aiCols); + assert( x==0 ); + } + addrOk = sqlite3VdbeMakeLabel(v); + if( pParent && pIdx==0 ){ + int iKey = pFK->aCol[0].iFrom; + assert( iKey>=0 && iKey<pTab->nCol ); + if( iKey!=pTab->iPKey ){ + sqlite3VdbeAddOp3(v, OP_Column, 0, iKey, regRow); + sqlite3ColumnDefault(v, pTab, iKey, regRow); + sqlite3VdbeAddOp2(v, OP_IsNull, regRow, addrOk); VdbeCoverage(v); + sqlite3VdbeAddOp2(v, OP_MustBeInt, regRow, + sqlite3VdbeCurrentAddr(v)+3); VdbeCoverage(v); + }else{ + sqlite3VdbeAddOp2(v, OP_Rowid, 0, regRow); + } + sqlite3VdbeAddOp3(v, OP_NotExists, i, 0, regRow); VdbeCoverage(v); + sqlite3VdbeGoto(v, addrOk); + sqlite3VdbeJumpHere(v, sqlite3VdbeCurrentAddr(v)-2); + }else{ + for(j=0; j<pFK->nCol; j++){ + sqlite3ExprCodeGetColumnOfTable(v, pTab, 0, + aiCols ? aiCols[j] : pFK->aCol[j].iFrom, regRow+j); + sqlite3VdbeAddOp2(v, OP_IsNull, regRow+j, addrOk); VdbeCoverage(v); + } + if( pParent ){ + sqlite3VdbeAddOp4(v, OP_MakeRecord, regRow, pFK->nCol, regKey, + sqlite3IndexAffinityStr(db,pIdx), pFK->nCol); + sqlite3VdbeAddOp4Int(v, OP_Found, i, addrOk, regKey, 0); + VdbeCoverage(v); + } + } + sqlite3VdbeAddOp2(v, OP_Rowid, 0, regResult+1); + sqlite3VdbeMultiLoad(v, regResult+2, "si", pFK->zTo, i-1); + sqlite3VdbeAddOp2(v, OP_ResultRow, regResult, 4); + sqlite3VdbeResolveLabel(v, addrOk); + sqlite3DbFree(db, aiCols); + } + sqlite3VdbeAddOp2(v, OP_Next, 0, addrTop+1); VdbeCoverage(v); + sqlite3VdbeJumpHere(v, addrTop); + } + } + break; +#endif /* !defined(SQLITE_OMIT_TRIGGER) */ #endif /* !defined(SQLITE_OMIT_FOREIGN_KEY) */ #ifndef NDEBUG - if( sqlite3StrICmp(zLeft, "parser_trace")==0 ){ + case PragTyp_PARSER_TRACE: { if( zRight ){ - if( getBoolean(zRight) ){ - sqlite3ParserTrace(stderr, "parser: "); + if( sqlite3GetBoolean(zRight, 0) ){ + sqlite3ParserTrace(stdout, "parser: "); }else{ sqlite3ParserTrace(0, 0); } } - }else + } + break; #endif /* Reinstall the LIKE and GLOB functions. The variant of LIKE ** used will be case sensitive or not depending on the RHS. 
*/ - if( sqlite3StrICmp(zLeft, "case_sensitive_like")==0 ){ + case PragTyp_CASE_SENSITIVE_LIKE: { if( zRight ){ - sqlite3RegisterLikeFunctions(db, getBoolean(zRight)); + sqlite3RegisterLikeFunctions(db, sqlite3GetBoolean(zRight, 0)); } - }else + } + break; #ifndef SQLITE_INTEGRITY_CHECK_ERROR_MAX # define SQLITE_INTEGRITY_CHECK_ERROR_MAX 100 #endif #ifndef SQLITE_OMIT_INTEGRITY_CHECK - /* Pragma "quick_check" is an experimental reduced version of + /* Pragma "quick_check" is reduced version of ** integrity_check designed to detect most database corruption ** without most of the overhead of a full integrity-check. */ - if( sqlite3StrICmp(zLeft, "integrity_check")==0 - || sqlite3StrICmp(zLeft, "quick_check")==0 - ){ + case PragTyp_INTEGRITY_CHECK: { int i, j, addr, mxErr; - /* Code that appears at the end of the integrity check. If no error - ** messages have been generated, output OK. Otherwise output the - ** error message - */ - static const VdbeOpList endCode[] = { - { OP_AddImm, 1, 0, 0}, /* 0 */ - { OP_IfNeg, 1, 0, 0}, /* 1 */ - { OP_String8, 0, 3, 0}, /* 2 */ - { OP_ResultRow, 3, 1, 0}, - }; - - int isQuick = (zLeft[0]=='q'); + int isQuick = (sqlite3Tolower(zLeft[0])=='q'); + + /* If the PRAGMA command was of the form "PRAGMA <db>.integrity_check", + ** then iDb is set to the index of the database identified by <db>. + ** In this case, the integrity of database iDb only is verified by + ** the VDBE created below. + ** + ** Otherwise, if the command was simply "PRAGMA integrity_check" (or + ** "PRAGMA quick_check"), then iDb is set to 0. In this case, set iDb + ** to -1 here, to indicate that the VDBE should verify the integrity + ** of all attached databases. */ + assert( iDb>=0 ); + assert( iDb==0 || pId2->z ); + if( pId2->z==0 ) iDb = -1; /* Initialize the VDBE program */ - if( sqlite3ReadSchema(pParse) ) goto pragma_out; pParse->nMem = 6; - sqlite3VdbeSetNumCols(v, 1); - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "integrity_check", SQLITE_STATIC); + setOneColumnName(v, "integrity_check"); /* Set the maximum error count */ mxErr = SQLITE_INTEGRITY_CHECK_ERROR_MAX; if( zRight ){ - mxErr = atoi(zRight); + sqlite3GetInt32(zRight, &mxErr); if( mxErr<=0 ){ mxErr = SQLITE_INTEGRITY_CHECK_ERROR_MAX; } } sqlite3VdbeAddOp2(v, OP_Integer, mxErr, 1); /* reg[1] holds errors left */ @@ -79323,42 +108849,47 @@ HashElem *x; Hash *pTbls; int cnt = 0; if( OMIT_TEMPDB && i==1 ) continue; + if( iDb>=0 && i!=iDb ) continue; sqlite3CodeVerifySchema(pParse, i); addr = sqlite3VdbeAddOp1(v, OP_IfPos, 1); /* Halt if out of errors */ + VdbeCoverage(v); sqlite3VdbeAddOp2(v, OP_Halt, 0, 0); sqlite3VdbeJumpHere(v, addr); /* Do an integrity check of the B-Tree ** ** Begin by filling registers 2, 3, ... with the root pages numbers ** for all tables and indices in the database. 
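**
** Illustrative invocation from application code (assumes the public
** sqlite3_exec() interface and a hypothetical callback "printRow"); the
** numeric argument becomes mxErr above, capping the number of error rows
** reported before the check halts:
**
**     static int printRow(void *p, int nCol, char **azVal, char **azColName){
**       printf("%s\n", azVal[0] ? azVal[0] : "NULL");
**       return 0;
**     }
**     ...
**     sqlite3_exec(db, "PRAGMA quick_check(20);", printRow, 0, 0);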
*/ + assert( sqlite3SchemaMutexHeld(db, i, 0) ); pTbls = &db->aDb[i].pSchema->tblHash; for(x=sqliteHashFirst(pTbls); x; x=sqliteHashNext(x)){ Table *pTab = sqliteHashData(x); Index *pIdx; - sqlite3VdbeAddOp2(v, OP_Integer, pTab->tnum, 2+cnt); - cnt++; + if( HasRowid(pTab) ){ + sqlite3VdbeAddOp2(v, OP_Integer, pTab->tnum, 2+cnt); + VdbeComment((v, "%s", pTab->zName)); + cnt++; + } for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ sqlite3VdbeAddOp2(v, OP_Integer, pIdx->tnum, 2+cnt); + VdbeComment((v, "%s", pIdx->zName)); cnt++; } } /* Make sure sufficient number of registers have been allocated */ - if( pParse->nMem < cnt+4 ){ - pParse->nMem = cnt+4; - } + pParse->nMem = MAX( pParse->nMem, cnt+8 ); /* Do the b-tree integrity checks */ sqlite3VdbeAddOp3(v, OP_IntegrityCk, 2, cnt, 1); sqlite3VdbeChangeP5(v, (u8)i); - addr = sqlite3VdbeAddOp1(v, OP_IsNull, 2); + addr = sqlite3VdbeAddOp1(v, OP_IsNull, 2); VdbeCoverage(v); sqlite3VdbeAddOp4(v, OP_String8, 0, 3, 0, sqlite3MPrintf(db, "*** in database %s ***\n", db->aDb[i].zName), P4_DYNAMIC); sqlite3VdbeAddOp3(v, OP_Move, 2, 4, 1); sqlite3VdbeAddOp3(v, OP_Concat, 4, 3, 2); @@ -79367,81 +108898,140 @@ /* Make sure all the indices are constructed correctly. */ for(x=sqliteHashFirst(pTbls); x && !isQuick; x=sqliteHashNext(x)){ Table *pTab = sqliteHashData(x); - Index *pIdx; - int loopTop; - - if( pTab->pIndex==0 ) continue; - addr = sqlite3VdbeAddOp1(v, OP_IfPos, 1); /* Stop if out of errors */ - sqlite3VdbeAddOp2(v, OP_Halt, 0, 0); - sqlite3VdbeJumpHere(v, addr); - sqlite3OpenTableAndIndices(pParse, pTab, 1, OP_OpenRead); - sqlite3VdbeAddOp2(v, OP_Integer, 0, 2); /* reg(2) will count entries */ - loopTop = sqlite3VdbeAddOp2(v, OP_Rewind, 1, 0); - sqlite3VdbeAddOp2(v, OP_AddImm, 2, 1); /* increment entry count */ - for(j=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, j++){ - int jmp2; - int r1; - static const VdbeOpList idxErr[] = { - { OP_AddImm, 1, -1, 0}, - { OP_String8, 0, 3, 0}, /* 1 */ - { OP_Rowid, 1, 4, 0}, - { OP_String8, 0, 5, 0}, /* 3 */ - { OP_String8, 0, 6, 0}, /* 4 */ - { OP_Concat, 4, 3, 3}, - { OP_Concat, 5, 3, 3}, - { OP_Concat, 6, 3, 3}, - { OP_ResultRow, 3, 1, 0}, - { OP_IfPos, 1, 0, 0}, /* 9 */ - { OP_Halt, 0, 0, 0}, - }; - r1 = sqlite3GenerateIndexKey(pParse, pIdx, 1, 3, 0); - jmp2 = sqlite3VdbeAddOp4Int(v, OP_Found, j+2, 0, r1, pIdx->nColumn+1); - addr = sqlite3VdbeAddOpList(v, ArraySize(idxErr), idxErr); - sqlite3VdbeChangeP4(v, addr+1, "rowid ", P4_STATIC); - sqlite3VdbeChangeP4(v, addr+3, " missing from index ", P4_STATIC); - sqlite3VdbeChangeP4(v, addr+4, pIdx->zName, P4_STATIC); - sqlite3VdbeJumpHere(v, addr+9); - sqlite3VdbeJumpHere(v, jmp2); - } - sqlite3VdbeAddOp2(v, OP_Next, 1, loopTop+1); - sqlite3VdbeJumpHere(v, loopTop); - for(j=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, j++){ - static const VdbeOpList cntIdx[] = { - { OP_Integer, 0, 3, 0}, - { OP_Rewind, 0, 0, 0}, /* 1 */ - { OP_AddImm, 3, 1, 0}, - { OP_Next, 0, 0, 0}, /* 3 */ - { OP_Eq, 2, 0, 3}, /* 4 */ - { OP_AddImm, 1, -1, 0}, - { OP_String8, 0, 2, 0}, /* 6 */ - { OP_String8, 0, 3, 0}, /* 7 */ - { OP_Concat, 3, 2, 2}, - { OP_ResultRow, 2, 1, 0}, - }; - addr = sqlite3VdbeAddOp1(v, OP_IfPos, 1); - sqlite3VdbeAddOp2(v, OP_Halt, 0, 0); - sqlite3VdbeJumpHere(v, addr); - addr = sqlite3VdbeAddOpList(v, ArraySize(cntIdx), cntIdx); - sqlite3VdbeChangeP1(v, addr+1, j+2); - sqlite3VdbeChangeP2(v, addr+1, addr+4); - sqlite3VdbeChangeP1(v, addr+3, j+2); - sqlite3VdbeChangeP2(v, addr+3, addr+2); - sqlite3VdbeJumpHere(v, addr+4); - sqlite3VdbeChangeP4(v, addr+6, - "wrong # of 
entries in index ", P4_STATIC); - sqlite3VdbeChangeP4(v, addr+7, pIdx->zName, P4_STATIC); - } - } - } - addr = sqlite3VdbeAddOpList(v, ArraySize(endCode), endCode); - sqlite3VdbeChangeP2(v, addr, -mxErr); - sqlite3VdbeJumpHere(v, addr+1); - sqlite3VdbeChangeP4(v, addr+2, "ok", P4_STATIC); - }else + Index *pIdx, *pPk; + Index *pPrior = 0; + int loopTop; + int iDataCur, iIdxCur; + int r1 = -1; + + if( pTab->pIndex==0 ) continue; + pPk = HasRowid(pTab) ? 0 : sqlite3PrimaryKeyIndex(pTab); + addr = sqlite3VdbeAddOp1(v, OP_IfPos, 1); /* Stop if out of errors */ + VdbeCoverage(v); + sqlite3VdbeAddOp2(v, OP_Halt, 0, 0); + sqlite3VdbeJumpHere(v, addr); + sqlite3ExprCacheClear(pParse); + sqlite3OpenTableAndIndices(pParse, pTab, OP_OpenRead, 0, + 1, 0, &iDataCur, &iIdxCur); + sqlite3VdbeAddOp2(v, OP_Integer, 0, 7); + for(j=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, j++){ + sqlite3VdbeAddOp2(v, OP_Integer, 0, 8+j); /* index entries counter */ + } + pParse->nMem = MAX(pParse->nMem, 8+j); + sqlite3VdbeAddOp2(v, OP_Rewind, iDataCur, 0); VdbeCoverage(v); + loopTop = sqlite3VdbeAddOp2(v, OP_AddImm, 7, 1); + /* Verify that all NOT NULL columns really are NOT NULL */ + for(j=0; j<pTab->nCol; j++){ + char *zErr; + int jmp2, jmp3; + if( j==pTab->iPKey ) continue; + if( pTab->aCol[j].notNull==0 ) continue; + sqlite3ExprCodeGetColumnOfTable(v, pTab, iDataCur, j, 3); + sqlite3VdbeChangeP5(v, OPFLAG_TYPEOFARG); + jmp2 = sqlite3VdbeAddOp1(v, OP_NotNull, 3); VdbeCoverage(v); + sqlite3VdbeAddOp2(v, OP_AddImm, 1, -1); /* Decrement error limit */ + zErr = sqlite3MPrintf(db, "NULL value in %s.%s", pTab->zName, + pTab->aCol[j].zName); + sqlite3VdbeAddOp4(v, OP_String8, 0, 3, 0, zErr, P4_DYNAMIC); + sqlite3VdbeAddOp2(v, OP_ResultRow, 3, 1); + jmp3 = sqlite3VdbeAddOp1(v, OP_IfPos, 1); VdbeCoverage(v); + sqlite3VdbeAddOp0(v, OP_Halt); + sqlite3VdbeJumpHere(v, jmp2); + sqlite3VdbeJumpHere(v, jmp3); + } + /* Validate index entries for the current row */ + for(j=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, j++){ + int jmp2, jmp3, jmp4, jmp5; + int ckUniq = sqlite3VdbeMakeLabel(v); + if( pPk==pIdx ) continue; + r1 = sqlite3GenerateIndexKey(pParse, pIdx, iDataCur, 0, 0, &jmp3, + pPrior, r1); + pPrior = pIdx; + sqlite3VdbeAddOp2(v, OP_AddImm, 8+j, 1); /* increment entry count */ + /* Verify that an index entry exists for the current table row */ + jmp2 = sqlite3VdbeAddOp4Int(v, OP_Found, iIdxCur+j, ckUniq, r1, + pIdx->nColumn); VdbeCoverage(v); + sqlite3VdbeAddOp2(v, OP_AddImm, 1, -1); /* Decrement error limit */ + sqlite3VdbeLoadString(v, 3, "row "); + sqlite3VdbeAddOp3(v, OP_Concat, 7, 3, 3); + sqlite3VdbeLoadString(v, 4, " missing from index "); + sqlite3VdbeAddOp3(v, OP_Concat, 4, 3, 3); + jmp5 = sqlite3VdbeLoadString(v, 4, pIdx->zName); + sqlite3VdbeAddOp3(v, OP_Concat, 4, 3, 3); + sqlite3VdbeAddOp2(v, OP_ResultRow, 3, 1); + jmp4 = sqlite3VdbeAddOp1(v, OP_IfPos, 1); VdbeCoverage(v); + sqlite3VdbeAddOp0(v, OP_Halt); + sqlite3VdbeJumpHere(v, jmp2); + /* For UNIQUE indexes, verify that only one entry exists with the + ** current key. 
The entry is unique if (1) any column is NULL + ** or (2) the next entry has a different key */ + if( IsUniqueIndex(pIdx) ){ + int uniqOk = sqlite3VdbeMakeLabel(v); + int jmp6; + int kk; + for(kk=0; kk<pIdx->nKeyCol; kk++){ + int iCol = pIdx->aiColumn[kk]; + assert( iCol!=XN_ROWID && iCol<pTab->nCol ); + if( iCol>=0 && pTab->aCol[iCol].notNull ) continue; + sqlite3VdbeAddOp2(v, OP_IsNull, r1+kk, uniqOk); + VdbeCoverage(v); + } + jmp6 = sqlite3VdbeAddOp1(v, OP_Next, iIdxCur+j); VdbeCoverage(v); + sqlite3VdbeGoto(v, uniqOk); + sqlite3VdbeJumpHere(v, jmp6); + sqlite3VdbeAddOp4Int(v, OP_IdxGT, iIdxCur+j, uniqOk, r1, + pIdx->nKeyCol); VdbeCoverage(v); + sqlite3VdbeAddOp2(v, OP_AddImm, 1, -1); /* Decrement error limit */ + sqlite3VdbeLoadString(v, 3, "non-unique entry in index "); + sqlite3VdbeGoto(v, jmp5); + sqlite3VdbeResolveLabel(v, uniqOk); + } + sqlite3VdbeJumpHere(v, jmp4); + sqlite3ResolvePartIdxLabel(pParse, jmp3); + } + sqlite3VdbeAddOp2(v, OP_Next, iDataCur, loopTop); VdbeCoverage(v); + sqlite3VdbeJumpHere(v, loopTop-1); +#ifndef SQLITE_OMIT_BTREECOUNT + sqlite3VdbeLoadString(v, 2, "wrong # of entries in index "); + for(j=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, j++){ + if( pPk==pIdx ) continue; + addr = sqlite3VdbeCurrentAddr(v); + sqlite3VdbeAddOp2(v, OP_IfPos, 1, addr+2); VdbeCoverage(v); + sqlite3VdbeAddOp2(v, OP_Halt, 0, 0); + sqlite3VdbeAddOp2(v, OP_Count, iIdxCur+j, 3); + sqlite3VdbeAddOp3(v, OP_Eq, 8+j, addr+8, 3); VdbeCoverage(v); + sqlite3VdbeChangeP5(v, SQLITE_NOTNULL); + sqlite3VdbeAddOp2(v, OP_AddImm, 1, -1); + sqlite3VdbeLoadString(v, 3, pIdx->zName); + sqlite3VdbeAddOp3(v, OP_Concat, 3, 2, 7); + sqlite3VdbeAddOp2(v, OP_ResultRow, 7, 1); + } +#endif /* SQLITE_OMIT_BTREECOUNT */ + } + } + { + static const int iLn = VDBE_OFFSET_LINENO(2); + static const VdbeOpList endCode[] = { + { OP_AddImm, 1, 0, 0}, /* 0 */ + { OP_If, 1, 4, 0}, /* 1 */ + { OP_String8, 0, 3, 0}, /* 2 */ + { OP_ResultRow, 3, 1, 0}, /* 3 */ + }; + VdbeOp *aOp; + + aOp = sqlite3VdbeAddOpList(v, ArraySize(endCode), endCode, iLn); + if( aOp ){ + aOp[0].p2 = -mxErr; + aOp[2].p4type = P4_STATIC; + aOp[2].p4.z = "ok"; + } + } + } + break; #endif /* SQLITE_OMIT_INTEGRITY_CHECK */ #ifndef SQLITE_OMIT_UTF16 /* ** PRAGMA encoding @@ -79463,11 +109053,11 @@ ** ** In the second form this pragma sets the text encoding to be used in ** new database files created using this database handle. It is only ** useful if invoked immediately after the main database i */ - if( sqlite3StrICmp(zLeft, "encoding")==0 ){ + case PragTyp_ENCODING: { static const struct EncName { char *zName; u8 enc; } encnames[] = { { "UTF8", SQLITE_UTF8 }, @@ -79481,18 +109071,14 @@ { 0, 0 } }; const struct EncName *pEnc; if( !zRight ){ /* "PRAGMA encoding" */ if( sqlite3ReadSchema(pParse) ) goto pragma_out; - sqlite3VdbeSetNumCols(v, 1); - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "encoding", SQLITE_STATIC); - sqlite3VdbeAddOp2(v, OP_String8, 0, 1); assert( encnames[SQLITE_UTF8].enc==SQLITE_UTF8 ); assert( encnames[SQLITE_UTF16LE].enc==SQLITE_UTF16LE ); assert( encnames[SQLITE_UTF16BE].enc==SQLITE_UTF16BE ); - sqlite3VdbeChangeP4(v, -1, encnames[ENC(pParse->db)].zName, P4_STATIC); - sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 1); + returnSingleText(v, "encoding", encnames[ENC(pParse->db)].zName); }else{ /* "PRAGMA encoding = XXX" */ /* Only change the value of sqlite.enc if the database handle is not ** initialized. If the main database exists, the new sqlite.enc value ** will be overwritten when the schema is next loaded. 
If it does not ** already exists, it will be created to use the new encoding value. @@ -79501,29 +109087,36 @@ !(DbHasProperty(db, 0, DB_SchemaLoaded)) || DbHasProperty(db, 0, DB_Empty) ){ for(pEnc=&encnames[0]; pEnc->zName; pEnc++){ if( 0==sqlite3StrICmp(zRight, pEnc->zName) ){ - ENC(pParse->db) = pEnc->enc ? pEnc->enc : SQLITE_UTF16NATIVE; + SCHEMA_ENC(db) = ENC(db) = + pEnc->enc ? pEnc->enc : SQLITE_UTF16NATIVE; break; } } if( !pEnc->zName ){ sqlite3ErrorMsg(pParse, "unsupported encoding: %s", zRight); } } } - }else + } + break; #endif /* SQLITE_OMIT_UTF16 */ #ifndef SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS /* - ** PRAGMA [database.]schema_version - ** PRAGMA [database.]schema_version = <integer> + ** PRAGMA [schema.]schema_version + ** PRAGMA [schema.]schema_version = <integer> ** - ** PRAGMA [database.]user_version - ** PRAGMA [database.]user_version = <integer> + ** PRAGMA [schema.]user_version + ** PRAGMA [schema.]user_version = <integer> + ** + ** PRAGMA [schema.]freelist_count = <integer> + ** + ** PRAGMA [schema.]application_id + ** PRAGMA [schema.]application_id = <integer> ** ** The pragma's schema_version and user_version are used to set or get ** the value of the schema-version and user-version, respectively. Both ** the schema-version and the user-version are 32-bit signed integers ** stored in the database header. @@ -79539,136 +109132,243 @@ ** crashes or database corruption. Use with caution! ** ** The user-version is not used internally by SQLite. It may be used by ** applications for any purpose. */ - if( sqlite3StrICmp(zLeft, "schema_version")==0 - || sqlite3StrICmp(zLeft, "user_version")==0 - || sqlite3StrICmp(zLeft, "freelist_count")==0 - ){ - int iCookie; /* Cookie index. 1 for schema-cookie, 6 for user-cookie. */ + case PragTyp_HEADER_VALUE: { + int iCookie = pPragma->iArg; /* Which cookie to read or write */ sqlite3VdbeUsesBtree(v, iDb); - switch( zLeft[0] ){ - case 'f': case 'F': - iCookie = BTREE_FREE_PAGE_COUNT; - break; - case 's': case 'S': - iCookie = BTREE_SCHEMA_VERSION; - break; - default: - iCookie = BTREE_USER_VERSION; - break; - } - - if( zRight && iCookie!=BTREE_FREE_PAGE_COUNT ){ + if( zRight && (pPragma->mPragFlag & PragFlag_ReadOnly)==0 ){ /* Write the specified cookie value */ static const VdbeOpList setCookie[] = { { OP_Transaction, 0, 1, 0}, /* 0 */ - { OP_Integer, 0, 1, 0}, /* 1 */ - { OP_SetCookie, 0, 0, 1}, /* 2 */ + { OP_SetCookie, 0, 0, 0}, /* 1 */ }; - int addr = sqlite3VdbeAddOpList(v, ArraySize(setCookie), setCookie); - sqlite3VdbeChangeP1(v, addr, iDb); - sqlite3VdbeChangeP1(v, addr+1, atoi(zRight)); - sqlite3VdbeChangeP1(v, addr+2, iDb); - sqlite3VdbeChangeP2(v, addr+2, iCookie); + VdbeOp *aOp; + sqlite3VdbeVerifyNoMallocRequired(v, ArraySize(setCookie)); + aOp = sqlite3VdbeAddOpList(v, ArraySize(setCookie), setCookie, 0); + if( ONLY_IF_REALLOC_STRESS(aOp==0) ) break; + aOp[0].p1 = iDb; + aOp[1].p1 = iDb; + aOp[1].p2 = iCookie; + aOp[1].p3 = sqlite3Atoi(zRight); }else{ /* Read the specified cookie value */ static const VdbeOpList readCookie[] = { { OP_Transaction, 0, 0, 0}, /* 0 */ { OP_ReadCookie, 0, 1, 0}, /* 1 */ { OP_ResultRow, 1, 1, 0} }; - int addr = sqlite3VdbeAddOpList(v, ArraySize(readCookie), readCookie); - sqlite3VdbeChangeP1(v, addr, iDb); - sqlite3VdbeChangeP1(v, addr+1, iDb); - sqlite3VdbeChangeP3(v, addr+1, iCookie); + VdbeOp *aOp; + sqlite3VdbeVerifyNoMallocRequired(v, ArraySize(readCookie)); + aOp = sqlite3VdbeAddOpList(v, ArraySize(readCookie),readCookie,0); + if( ONLY_IF_REALLOC_STRESS(aOp==0) ) break; + aOp[0].p1 = iDb; + 
aOp[1].p1 = iDb; + aOp[1].p3 = iCookie; sqlite3VdbeSetNumCols(v, 1); sqlite3VdbeSetColName(v, 0, COLNAME_NAME, zLeft, SQLITE_TRANSIENT); } - }else + } + break; #endif /* SQLITE_OMIT_SCHEMA_VERSION_PRAGMAS */ #ifndef SQLITE_OMIT_COMPILEOPTION_DIAGS /* ** PRAGMA compile_options ** ** Return the names of all compile-time options used in this build, ** one option per row. */ - if( sqlite3StrICmp(zLeft, "compile_options")==0 ){ + case PragTyp_COMPILE_OPTIONS: { int i = 0; const char *zOpt; - sqlite3VdbeSetNumCols(v, 1); pParse->nMem = 1; - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "compile_option", SQLITE_STATIC); + setOneColumnName(v, "compile_option"); while( (zOpt = sqlite3_compileoption_get(i++))!=0 ){ - sqlite3VdbeAddOp4(v, OP_String8, 0, 1, 0, zOpt, 0); + sqlite3VdbeLoadString(v, 1, zOpt); sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 1); } - }else + } + break; #endif /* SQLITE_OMIT_COMPILEOPTION_DIAGS */ + +#ifndef SQLITE_OMIT_WAL + /* + ** PRAGMA [schema.]wal_checkpoint = passive|full|restart|truncate + ** + ** Checkpoint the database. + */ + case PragTyp_WAL_CHECKPOINT: { + static const char *azCol[] = { "busy", "log", "checkpointed" }; + int iBt = (pId2->z?iDb:SQLITE_MAX_ATTACHED); + int eMode = SQLITE_CHECKPOINT_PASSIVE; + if( zRight ){ + if( sqlite3StrICmp(zRight, "full")==0 ){ + eMode = SQLITE_CHECKPOINT_FULL; + }else if( sqlite3StrICmp(zRight, "restart")==0 ){ + eMode = SQLITE_CHECKPOINT_RESTART; + }else if( sqlite3StrICmp(zRight, "truncate")==0 ){ + eMode = SQLITE_CHECKPOINT_TRUNCATE; + } + } + setAllColumnNames(v, 3, azCol); assert( 3==ArraySize(azCol) ); + pParse->nMem = 3; + sqlite3VdbeAddOp3(v, OP_Checkpoint, iBt, eMode, 1); + sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 3); + } + break; + + /* + ** PRAGMA wal_autocheckpoint + ** PRAGMA wal_autocheckpoint = N + ** + ** Configure a database connection to automatically checkpoint a database + ** after accumulating N frames in the log. Or query for the current value + ** of N. + */ + case PragTyp_WAL_AUTOCHECKPOINT: { + if( zRight ){ + sqlite3_wal_autocheckpoint(db, sqlite3Atoi(zRight)); + } + returnSingleInt(v, "wal_autocheckpoint", + db->xWalCallback==sqlite3WalDefaultHook ? + SQLITE_PTR_TO_INT(db->pWalArg) : 0); + } + break; +#endif + + /* + ** PRAGMA shrink_memory + ** + ** IMPLEMENTATION-OF: R-23445-46109 This pragma causes the database + ** connection on which it is invoked to free up as much memory as it + ** can, by calling sqlite3_db_release_memory(). + */ + case PragTyp_SHRINK_MEMORY: { + sqlite3_db_release_memory(db); + break; + } + + /* + ** PRAGMA busy_timeout + ** PRAGMA busy_timeout = N + ** + ** Call sqlite3_busy_timeout(db, N). Return the current timeout value + ** if one is set. If no busy handler or a different busy handler is set + ** then 0 is returned. Setting the busy_timeout to 0 or negative + ** disables the timeout. + */ + /*case PragTyp_BUSY_TIMEOUT*/ default: { + assert( pPragma->ePragTyp==PragTyp_BUSY_TIMEOUT ); + if( zRight ){ + sqlite3_busy_timeout(db, sqlite3Atoi(zRight)); + } + returnSingleInt(v, "timeout", db->busyTimeout); + break; + } + + /* + ** PRAGMA soft_heap_limit + ** PRAGMA soft_heap_limit = N + ** + ** IMPLEMENTATION-OF: R-26343-45930 This pragma invokes the + ** sqlite3_soft_heap_limit64() interface with the argument N, if N is + ** specified and is a non-negative integer. + ** IMPLEMENTATION-OF: R-64451-07163 The soft_heap_limit pragma always + ** returns the same integer that would be returned by the + ** sqlite3_soft_heap_limit64(-1) C-language function. 
+ */ + case PragTyp_SOFT_HEAP_LIMIT: { + sqlite3_int64 N; + if( zRight && sqlite3DecOrHexToI64(zRight, &N)==SQLITE_OK ){ + sqlite3_soft_heap_limit64(N); + } + returnSingleInt(v, "soft_heap_limit", sqlite3_soft_heap_limit64(-1)); + break; + } + + /* + ** PRAGMA threads + ** PRAGMA threads = N + ** + ** Configure the maximum number of worker threads. Return the new + ** maximum, which might be less than requested. + */ + case PragTyp_THREADS: { + sqlite3_int64 N; + if( zRight + && sqlite3DecOrHexToI64(zRight, &N)==SQLITE_OK + && N>=0 + ){ + sqlite3_limit(db, SQLITE_LIMIT_WORKER_THREADS, (int)(N&0x7fffffff)); + } + returnSingleInt(v, "threads", + sqlite3_limit(db, SQLITE_LIMIT_WORKER_THREADS, -1)); + break; + } #if defined(SQLITE_DEBUG) || defined(SQLITE_TEST) /* ** Report the current state of file logs for all databases */ - if( sqlite3StrICmp(zLeft, "lock_status")==0 ){ + case PragTyp_LOCK_STATUS: { static const char *const azLockName[] = { "unlocked", "shared", "reserved", "pending", "exclusive" }; + static const char *azCol[] = { "database", "status" }; int i; - sqlite3VdbeSetNumCols(v, 2); + setAllColumnNames(v, 2, azCol); assert( 2==ArraySize(azCol) ); pParse->nMem = 2; - sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "database", SQLITE_STATIC); - sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "status", SQLITE_STATIC); for(i=0; i<db->nDb; i++){ Btree *pBt; - Pager *pPager; const char *zState = "unknown"; int j; if( db->aDb[i].zName==0 ) continue; - sqlite3VdbeAddOp4(v, OP_String8, 0, 1, 0, db->aDb[i].zName, P4_STATIC); pBt = db->aDb[i].pBt; - if( pBt==0 || (pPager = sqlite3BtreePager(pBt))==0 ){ + if( pBt==0 || sqlite3BtreePager(pBt)==0 ){ zState = "closed"; }else if( sqlite3_file_control(db, i ? db->aDb[i].zName : 0, SQLITE_FCNTL_LOCKSTATE, &j)==SQLITE_OK ){ zState = azLockName[j]; } - sqlite3VdbeAddOp4(v, OP_String8, 0, 2, 0, zState, P4_STATIC); + sqlite3VdbeMultiLoad(v, 1, "ss", db->aDb[i].zName, zState); sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 2); } - - }else + break; + } #endif #ifdef SQLITE_HAS_CODEC - if( sqlite3StrICmp(zLeft, "key")==0 && zRight ){ - sqlite3_key(db, zRight, sqlite3Strlen30(zRight)); - }else - if( sqlite3StrICmp(zLeft, "rekey")==0 && zRight ){ - sqlite3_rekey(db, zRight, sqlite3Strlen30(zRight)); - }else - if( zRight && (sqlite3StrICmp(zLeft, "hexkey")==0 || - sqlite3StrICmp(zLeft, "hexrekey")==0) ){ - int i, h1, h2; - char zKey[40]; - for(i=0; (h1 = zRight[i])!=0 && (h2 = zRight[i+1])!=0; i+=2){ - h1 += 9*(1&(h1>>6)); - h2 += 9*(1&(h2>>6)); - zKey[i/2] = (h2 & 0x0f) | ((h1 & 0xf)<<4); - } - if( (zLeft[3] & 0xf)==0xb ){ - sqlite3_key(db, zKey, i/2); - }else{ - sqlite3_rekey(db, zKey, i/2); - } - }else + case PragTyp_KEY: { + if( zRight ) sqlite3_key_v2(db, zDb, zRight, sqlite3Strlen30(zRight)); + break; + } + case PragTyp_REKEY: { + if( zRight ) sqlite3_rekey_v2(db, zDb, zRight, sqlite3Strlen30(zRight)); + break; + } + case PragTyp_HEXKEY: { + if( zRight ){ + u8 iByte; + int i; + char zKey[40]; + for(i=0, iByte=0; i<sizeof(zKey)*2 && sqlite3Isxdigit(zRight[i]); i++){ + iByte = (iByte<<4) + sqlite3HexToInt(zRight[i]); + if( (i&1)!=0 ) zKey[i/2] = iByte; + } + if( (zLeft[3] & 0xf)==0xb ){ + sqlite3_key_v2(db, zDb, zKey, i/2); + }else{ + sqlite3_rekey_v2(db, zDb, zKey, i/2); + } + } + break; + } #endif #if defined(SQLITE_HAS_CODEC) || defined(SQLITE_ENABLE_CEROD) - if( sqlite3StrICmp(zLeft, "activate_extensions")==0 ){ + case PragTyp_ACTIVATE_EXTENSIONS: if( zRight ){ #ifdef SQLITE_HAS_CODEC if( sqlite3StrNICmp(zRight, "see-", 4)==0 ){ sqlite3_activate_see(&zRight[4]); } #endif 
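The wal_checkpoint, wal_autocheckpoint, shrink_memory, busy_timeout, soft_heap_limit and threads pragmas handled in the hunk above are ordinary SQL from an application's point of view. A minimal sketch of exercising a few of them through the public C API follows; the file name test.db and the helper run_pragma() are illustrative only, not part of SQLite or of this diff:

  #include <stdio.h>
  #include <sqlite3.h>

  /* Run a single PRAGMA and print the first column of its first result
  ** row, if any.  Error handling is reduced to a message on stderr. */
  static void run_pragma(sqlite3 *db, const char *zSql){
    sqlite3_stmt *pStmt = 0;
    if( sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0)==SQLITE_OK ){
      if( sqlite3_step(pStmt)==SQLITE_ROW ){
        printf("%s -> %s\n", zSql, (const char*)sqlite3_column_text(pStmt, 0));
      }
      sqlite3_finalize(pStmt);
    }else{
      fprintf(stderr, "%s: %s\n", zSql, sqlite3_errmsg(db));
    }
  }

  int main(void){
    sqlite3 *db;
    if( sqlite3_open("test.db", &db)!=SQLITE_OK ){
      sqlite3_close(db);
      return 1;
    }
    run_pragma(db, "PRAGMA busy_timeout = 2000");      /* returns the timeout in ms */
    run_pragma(db, "PRAGMA threads = 4");              /* returns the new worker-thread maximum */
    run_pragma(db, "PRAGMA soft_heap_limit");          /* returns the current limit */
    run_pragma(db, "PRAGMA journal_mode = WAL");       /* WAL mode so the checkpoint below is meaningful */
    run_pragma(db, "PRAGMA wal_checkpoint(TRUNCATE)"); /* first column is the "busy" flag */
    sqlite3_close(db);
    return 0;
  }

Each of these pragmas returns its current or newly-set value as a one-row result, which is why the helper steps the statement once and prints column 0.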
@@ -79675,26 +109375,16 @@ #ifdef SQLITE_ENABLE_CEROD if( sqlite3StrNICmp(zRight, "cerod-", 6)==0 ){ sqlite3_activate_cerod(&zRight[6]); } #endif - }else -#endif - - - {/* Empty ELSE clause */} - - /* - ** Reset the safety level, in case the fullfsync flag or synchronous - ** setting changed. - */ -#ifndef SQLITE_OMIT_PAGER_PRAGMAS - if( db->autoCommit ){ - sqlite3BtreeSetSafetyLevel(pDb->pBt, pDb->safety_level, - (db->flags&SQLITE_FullFSync)!=0); - } -#endif + } + break; +#endif + + } /* End of the PRAGMA switch */ + pragma_out: sqlite3DbFree(db, zLeft); sqlite3DbFree(db, zRight); } @@ -79715,10 +109405,11 @@ ************************************************************************* ** This file contains the implementation of the sqlite3_prepare() ** interface, and routines that contribute to loading the database schema ** from disk. */ +/* #include "sqliteInt.h" */ /* ** Fill the InitData structure with an error message that indicates ** that the database is corrupt. */ @@ -79727,19 +109418,18 @@ const char *zObj, /* Object being parsed at the point of error */ const char *zExtra /* Error information */ ){ sqlite3 *db = pData->db; if( !db->mallocFailed && (db->flags & SQLITE_RecoveryMode)==0 ){ + char *z; if( zObj==0 ) zObj = "?"; - sqlite3SetString(pData->pzErrMsg, db, - "malformed database schema (%s)", zObj); - if( zExtra ){ - *pData->pzErrMsg = sqlite3MAppendf(db, *pData->pzErrMsg, - "%s - %s", *pData->pzErrMsg, zExtra); - } - } - pData->rc = db->mallocFailed ? SQLITE_NOMEM : SQLITE_CORRUPT; + z = sqlite3MPrintf(db, "malformed database schema (%s)", zObj); + if( zExtra ) z = sqlite3MPrintf(db, "%z - %s", z, zExtra); + sqlite3DbFree(db, *pData->pzErrMsg); + *pData->pzErrMsg = z; + } + pData->rc = db->mallocFailed ? SQLITE_NOMEM : SQLITE_CORRUPT_BKPT; } /* ** This is the callback routine for the code that initializes the ** database. See sqlite3Init() below for additional information. @@ -79768,40 +109458,43 @@ assert( iDb>=0 && iDb<db->nDb ); if( argv==0 ) return 0; /* Might happen if EMPTY_RESULT_CALLBACKS are on */ if( argv[1]==0 ){ corruptSchema(pData, argv[0], 0); - }else if( argv[2] && argv[2][0] ){ + }else if( sqlite3_strnicmp(argv[2],"create ",7)==0 ){ /* Call the parser to process a CREATE TABLE, INDEX or VIEW. ** But because db->init.busy is set to 1, no VDBE code is generated ** or executed. All the parser does is build the internal data ** structures that describe the table, index, or view. 
*/ int rc; sqlite3_stmt *pStmt; + TESTONLY(int rcp); /* Return code from sqlite3_prepare() */ assert( db->init.busy ); db->init.iDb = iDb; - db->init.newTnum = atoi(argv[1]); + db->init.newTnum = sqlite3Atoi(argv[1]); db->init.orphanTrigger = 0; - rc = sqlite3_prepare(db, argv[2], -1, &pStmt, 0); + TESTONLY(rcp = ) sqlite3_prepare(db, argv[2], -1, &pStmt, 0); + rc = db->errCode; + assert( (rc&0xFF)==(rcp&0xFF) ); db->init.iDb = 0; if( SQLITE_OK!=rc ){ if( db->init.orphanTrigger ){ assert( iDb==1 ); }else{ pData->rc = rc; if( rc==SQLITE_NOMEM ){ - db->mallocFailed = 1; - }else if( rc!=SQLITE_INTERRUPT && rc!=SQLITE_LOCKED ){ + sqlite3OomFault(db); + }else if( rc!=SQLITE_INTERRUPT && (rc&0xFF)!=SQLITE_LOCKED ){ corruptSchema(pData, argv[0], sqlite3_errmsg(db)); } } } sqlite3_finalize(pStmt); - }else if( argv[0]==0 ){ - corruptSchema(pData, 0, 0); + }else if( argv[0]==0 || (argv[2]!=0 && argv[2][0]!=0) ){ + corruptSchema(pData, argv[0], 0); }else{ /* If the SQL column is blank it means this is an index that ** was created to be the PRIMARY KEY or to fulfill a UNIQUE ** constraint for a CREATE TABLE. The index should have already ** been created when we processed the CREATE TABLE. All we have @@ -79832,66 +109525,34 @@ ** indicate success or failure. */ static int sqlite3InitOne(sqlite3 *db, int iDb, char **pzErrMsg){ int rc; int i; +#ifndef SQLITE_OMIT_DEPRECATED int size; - Table *pTab; +#endif Db *pDb; char const *azArg[4]; int meta[5]; InitData initData; - char const *zMasterSchema; - char const *zMasterName = SCHEMA_TABLE(iDb); + const char *zMasterName; int openedTransaction = 0; - /* - ** The master database table has a structure like this - */ - static const char master_schema[] = - "CREATE TABLE sqlite_master(\n" - " type text,\n" - " name text,\n" - " tbl_name text,\n" - " rootpage integer,\n" - " sql text\n" - ")" - ; -#ifndef SQLITE_OMIT_TEMPDB - static const char temp_master_schema[] = - "CREATE TEMP TABLE sqlite_temp_master(\n" - " type text,\n" - " name text,\n" - " tbl_name text,\n" - " rootpage integer,\n" - " sql text\n" - ")" - ; -#else - #define temp_master_schema 0 -#endif - assert( iDb>=0 && iDb<db->nDb ); assert( db->aDb[iDb].pSchema ); assert( sqlite3_mutex_held(db->mutex) ); assert( iDb==1 || sqlite3BtreeHoldsMutex(db->aDb[iDb].pBt) ); - /* zMasterSchema and zInitScript are set to point at the master schema - ** and initialisation script appropriate for the database being - ** initialised. zMasterName is the name of the master table. - */ - if( !OMIT_TEMPDB && iDb==1 ){ - zMasterSchema = temp_master_schema; - }else{ - zMasterSchema = master_schema; - } - zMasterName = SCHEMA_TABLE(iDb); - - /* Construct the schema tables. */ - azArg[0] = zMasterName; + /* Construct the in-memory representation schema tables (sqlite_master or + ** sqlite_temp_master) by invoking the parser directly. The appropriate + ** table name will be inserted automatically by the parser so we can just + ** use the abbreviation "x" here. The parser will also automatically tag + ** the schema table as read-only. 
*/ + azArg[0] = zMasterName = SCHEMA_TABLE(iDb); azArg[1] = "1"; - azArg[2] = zMasterSchema; + azArg[2] = "CREATE TABLE x(type text,name text,tbl_name text," + "rootpage integer,sql text)"; azArg[3] = 0; initData.db = db; initData.iDb = iDb; initData.rc = SQLITE_OK; initData.pzErrMsg = pzErrMsg; @@ -79898,14 +109559,10 @@ sqlite3InitCallback(&initData, 3, (char **)azArg, 0); if( initData.rc ){ rc = initData.rc; goto error_out; } - pTab = sqlite3FindTable(db, zMasterName, db->aDb[iDb].zName); - if( ALWAYS(pTab) ){ - pTab->tabFlags |= TF_Readonly; - } /* Create a cursor to hold the database open */ pDb = &db->aDb[iDb]; if( pDb->pBt==0 ){ @@ -79920,11 +109577,11 @@ ** will be closed before this function returns. */ sqlite3BtreeEnter(pDb->pBt); if( !sqlite3BtreeIsInReadTrans(pDb->pBt) ){ rc = sqlite3BtreeBeginTrans(pDb->pBt, 0); if( rc!=SQLITE_OK ){ - sqlite3SetString(pzErrMsg, db, "%s", sqlite3ErrStr(rc)); + sqlite3SetString(pzErrMsg, db, sqlite3ErrStr(rc)); goto initone_error_out; } openedTransaction = 1; } @@ -79955,16 +109612,19 @@ ** For an attached db, it is an error if the encoding is not the same ** as sqlite3.enc. */ if( meta[BTREE_TEXT_ENCODING-1] ){ /* text encoding */ if( iDb==0 ){ +#ifndef SQLITE_OMIT_UTF16 u8 encoding; /* If opening the main database, set ENC(db). */ encoding = (u8)meta[BTREE_TEXT_ENCODING-1] & 3; if( encoding==0 ) encoding = SQLITE_UTF8; ENC(db) = encoding; - db->pDfltColl = sqlite3FindCollSeq(db, SQLITE_UTF8, "BINARY", 0); +#else + ENC(db) = SQLITE_UTF8; +#endif }else{ /* If opening an attached database, the encoding much match ENC(db) */ if( meta[BTREE_TEXT_ENCODING-1]!=ENC(db) ){ sqlite3SetString(pzErrMsg, db, "attached databases must use the same" " text encoding as main database"); @@ -79976,14 +109636,17 @@ DbSetProperty(db, iDb, DB_Empty); } pDb->pSchema->enc = ENC(db); if( pDb->pSchema->cache_size==0 ){ - size = meta[BTREE_DEFAULT_CACHE_SIZE-1]; +#ifndef SQLITE_OMIT_DEPRECATED + size = sqlite3AbsInt32(meta[BTREE_DEFAULT_CACHE_SIZE-1]); if( size==0 ){ size = SQLITE_DEFAULT_CACHE_SIZE; } - if( size<0 ) size = -size; pDb->pSchema->cache_size = size; +#else + pDb->pSchema->cache_size = SQLITE_DEFAULT_CACHE_SIZE; +#endif sqlite3BtreeSetCacheSize(pDb->pBt, pDb->pSchema->cache_size); } /* ** file_format==1 Version 3.0.0. @@ -80014,15 +109677,15 @@ */ assert( db->init.busy ); { char *zSql; zSql = sqlite3MPrintf(db, - "SELECT name, rootpage, sql FROM '%q'.%s ORDER BY rowid", + "SELECT name, rootpage, sql FROM \"%w\".%s ORDER BY rowid", db->aDb[iDb].zName, zMasterName); #ifndef SQLITE_OMIT_AUTHORIZATION { - int (*xAuth)(void*,int,const char*,const char*,const char*,const char*); + sqlite3_xauth xAuth; xAuth = db->xAuth; db->xAuth = 0; #endif rc = sqlite3_exec(db, zSql, sqlite3InitCallback, &initData, 0); #ifndef SQLITE_OMIT_AUTHORIZATION @@ -80037,11 +109700,11 @@ } #endif } if( db->mallocFailed ){ rc = SQLITE_NOMEM; - sqlite3ResetInternalSchema(db, 0); + sqlite3ResetAllSchemasOfConnection(db); } if( rc==SQLITE_OK || (db->flags&SQLITE_RecoveryMode)){ /* Black magic: If the SQLITE_RecoveryMode flag is set, then consider ** the schema loaded, even if errors occurred. 
In this situation the ** current sqlite3_prepare() operation will fail, but the following one @@ -80064,11 +109727,11 @@ } sqlite3BtreeLeave(pDb->pBt); error_out: if( rc==SQLITE_NOMEM || rc==SQLITE_IOERR_NOMEM ){ - db->mallocFailed = 1; + sqlite3OomFault(db); } return rc; } /* @@ -80084,30 +109747,33 @@ SQLITE_PRIVATE int sqlite3Init(sqlite3 *db, char **pzErrMsg){ int i, rc; int commit_internal = !(db->flags&SQLITE_InternChanges); assert( sqlite3_mutex_held(db->mutex) ); + assert( sqlite3BtreeHoldsMutex(db->aDb[0].pBt) ); + assert( db->init.busy==0 ); rc = SQLITE_OK; db->init.busy = 1; + ENC(db) = SCHEMA_ENC(db); for(i=0; rc==SQLITE_OK && i<db->nDb; i++){ if( DbHasProperty(db, i, DB_SchemaLoaded) || i==1 ) continue; rc = sqlite3InitOne(db, i, pzErrMsg); if( rc ){ - sqlite3ResetInternalSchema(db, i); + sqlite3ResetOneSchema(db, i); } } - /* Once all the other databases have been initialised, load the schema + /* Once all the other databases have been initialized, load the schema ** for the TEMP database. This is loaded last, as the TEMP database ** schema may contain references to objects in other databases. */ #ifndef SQLITE_OMIT_TEMPDB - if( rc==SQLITE_OK && ALWAYS(db->nDb>1) - && !DbHasProperty(db, 1, DB_SchemaLoaded) ){ + assert( db->nDb>1 ); + if( rc==SQLITE_OK && !DbHasProperty(db, 1, DB_SchemaLoaded) ){ rc = sqlite3InitOne(db, 1, pzErrMsg); if( rc ){ - sqlite3ResetInternalSchema(db, 1); + sqlite3ResetOneSchema(db, 1); } } #endif db->init.busy = 0; @@ -80117,11 +109783,11 @@ return rc; } /* -** This routine is a no-op if the database schema is already initialised. +** This routine is a no-op if the database schema is already initialized. ** Otherwise, the schema is loaded. An error code is returned. */ SQLITE_PRIVATE int sqlite3ReadSchema(Parse *pParse){ int rc = SQLITE_OK; sqlite3 *db = pParse->db; @@ -80159,21 +109825,23 @@ ** on the b-tree database, open one now. If a transaction is opened, it ** will be closed immediately after reading the meta-value. */ if( !sqlite3BtreeIsInReadTrans(pBt) ){ rc = sqlite3BtreeBeginTrans(pBt, 0); if( rc==SQLITE_NOMEM || rc==SQLITE_IOERR_NOMEM ){ - db->mallocFailed = 1; + sqlite3OomFault(db); } if( rc!=SQLITE_OK ) return; openedTransaction = 1; } /* Read the schema cookie from the database. If it does not match the ** value stored as part of the in-memory schema representation, ** set Parse.rc to SQLITE_SCHEMA. */ sqlite3BtreeGetMeta(pBt, BTREE_SCHEMA_VERSION, (u32 *)&cookie); + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); if( cookie!=db->aDb[iDb].pSchema->schema_cookie ){ + sqlite3ResetOneSchema(db, iDb); pParse->rc = SQLITE_SCHEMA; } /* Close the transaction, if one was opened. */ if( openedTransaction ){ @@ -80211,10 +109879,26 @@ } assert( i>=0 && i<db->nDb ); } return i; } + +/* +** Free all memory allocations in the pParse object +*/ +SQLITE_PRIVATE void sqlite3ParserReset(Parse *pParse){ + if( pParse ){ + sqlite3 *db = pParse->db; + sqlite3DbFree(db, pParse->aLabel); + sqlite3ExprListDelete(db, pParse->pConstExpr); + if( db ){ + assert( db->lookaside.bDisable >= pParse->disableLookaside ); + db->lookaside.bDisable -= pParse->disableLookaside; + } + pParse->disableLookaside = 0; + } +} /* ** Compile the UTF-8 encoded SQL statement zSql into a statement handle. 
*/ static int sqlite3Prepare( @@ -80237,11 +109921,11 @@ rc = SQLITE_NOMEM; goto end_prepare; } pParse->pReprepare = pReprepare; assert( ppStmt && *ppStmt==0 ); - assert( !db->mallocFailed ); + /* assert( !db->mallocFailed ); // not true with SQLITE_USE_ALLOCA */ assert( sqlite3_mutex_held(db->mutex) ); /* Check to verify that it is possible to get a read lock on all ** database schemas. The inability to get a read lock indicates that ** some other database connection is holding a write-lock, which in @@ -80270,54 +109954,48 @@ if( pBt ){ assert( sqlite3BtreeHoldsMutex(pBt) ); rc = sqlite3BtreeSchemaLocked(pBt); if( rc ){ const char *zDb = db->aDb[i].zName; - sqlite3Error(db, rc, "database schema is locked: %s", zDb); + sqlite3ErrorWithMsg(db, rc, "database schema is locked: %s", zDb); testcase( db->flags & SQLITE_ReadUncommitted ); goto end_prepare; } } } sqlite3VtabUnlockList(db); pParse->db = db; - pParse->nQueryLoop = (double)1; + pParse->nQueryLoop = 0; /* Logarithmic, so 0 really means 1 */ if( nBytes>=0 && (nBytes==0 || zSql[nBytes-1]!=0) ){ char *zSqlCopy; int mxLen = db->aLimit[SQLITE_LIMIT_SQL_LENGTH]; testcase( nBytes==mxLen ); testcase( nBytes==mxLen+1 ); if( nBytes>mxLen ){ - sqlite3Error(db, SQLITE_TOOBIG, "statement too long"); + sqlite3ErrorWithMsg(db, SQLITE_TOOBIG, "statement too long"); rc = sqlite3ApiExit(db, SQLITE_TOOBIG); goto end_prepare; } zSqlCopy = sqlite3DbStrNDup(db, zSql, nBytes); if( zSqlCopy ){ sqlite3RunParser(pParse, zSqlCopy, &zErrMsg); - sqlite3DbFree(db, zSqlCopy); pParse->zTail = &zSql[pParse->zTail-zSqlCopy]; + sqlite3DbFree(db, zSqlCopy); }else{ pParse->zTail = &zSql[nBytes]; } }else{ sqlite3RunParser(pParse, zSql, &zErrMsg); } - assert( 1==(int)pParse->nQueryLoop ); + assert( 0==pParse->nQueryLoop ); - if( db->mallocFailed ){ - pParse->rc = SQLITE_NOMEM; - } if( pParse->rc==SQLITE_DONE ) pParse->rc = SQLITE_OK; if( pParse->checkSchema ){ schemaIsValid(pParse); } - if( pParse->rc==SQLITE_SCHEMA ){ - sqlite3ResetInternalSchema(db, 0); - } if( db->mallocFailed ){ pParse->rc = SQLITE_NOMEM; } if( pzTail ){ *pzTail = pParse->zTail; @@ -80326,17 +110004,17 @@ #ifndef SQLITE_OMIT_EXPLAIN if( rc==SQLITE_OK && pParse->pVdbe && pParse->explain ){ static const char * const azColName[] = { "addr", "opcode", "p1", "p2", "p3", "p4", "p5", "comment", - "order", "from", "detail" + "selectid", "order", "from", "detail" }; int iFirst, mx; if( pParse->explain==2 ){ - sqlite3VdbeSetNumCols(pParse->pVdbe, 3); + sqlite3VdbeSetNumCols(pParse->pVdbe, 4); iFirst = 8; - mx = 11; + mx = 12; }else{ sqlite3VdbeSetNumCols(pParse->pVdbe, 8); iFirst = 0; mx = 8; } @@ -80345,11 +110023,10 @@ azColName[i], SQLITE_STATIC); } } #endif - assert( db->init.busy==0 || saveSqlFlag==0 ); if( db->init.busy==0 ){ Vdbe *pVdbe = pParse->pVdbe; sqlite3VdbeSetSql(pVdbe, zSql, (int)(pParse->zTail-zSql), saveSqlFlag); } if( pParse->pVdbe && (rc!=SQLITE_OK || db->mallocFailed) ){ @@ -80358,26 +110035,26 @@ }else{ *ppStmt = (sqlite3_stmt*)pParse->pVdbe; } if( zErrMsg ){ - sqlite3Error(db, rc, "%s", zErrMsg); + sqlite3ErrorWithMsg(db, rc, "%s", zErrMsg); sqlite3DbFree(db, zErrMsg); }else{ - sqlite3Error(db, rc, 0); + sqlite3Error(db, rc); } /* Delete any TriggerPrg structures allocated while parsing this statement. 
*/ while( pParse->pTriggerPrg ){ TriggerPrg *pT = pParse->pTriggerPrg; pParse->pTriggerPrg = pT->pNext; - sqlite3VdbeProgramDelete(db, pT->pProgram, 0); sqlite3DbFree(db, pT); } end_prepare: + sqlite3ParserReset(pParse); sqlite3StackFree(db, pParse); rc = sqlite3ApiExit(db, rc); assert( (rc&db->errMask)==rc ); return rc; } @@ -80389,13 +110066,16 @@ Vdbe *pOld, /* VM being reprepared */ sqlite3_stmt **ppStmt, /* OUT: A pointer to the prepared statement */ const char **pzTail /* OUT: End of parsed string */ ){ int rc; - assert( ppStmt!=0 ); + +#ifdef SQLITE_ENABLE_API_ARMOR + if( ppStmt==0 ) return SQLITE_MISUSE_BKPT; +#endif *ppStmt = 0; - if( !sqlite3SafetyCheckOk(db) ){ + if( !sqlite3SafetyCheckOk(db)||zSql==0 ){ return SQLITE_MISUSE_BKPT; } sqlite3_mutex_enter(db->mutex); sqlite3BtreeEnterAll(db); rc = sqlite3Prepare(db, zSql, nBytes, saveSqlFlag, pOld, ppStmt, pzTail); @@ -80403,10 +110083,11 @@ sqlite3_finalize(*ppStmt); rc = sqlite3Prepare(db, zSql, nBytes, saveSqlFlag, pOld, ppStmt, pzTail); } sqlite3BtreeLeaveAll(db); sqlite3_mutex_leave(db->mutex); + assert( rc==SQLITE_OK || *ppStmt==0 ); return rc; } /* ** Rerun the compilation of a statement after a schema change. @@ -80428,11 +110109,11 @@ db = sqlite3VdbeDb(p); assert( sqlite3_mutex_held(db->mutex) ); rc = sqlite3LockAndPrepare(db, zSql, -1, 0, p, &pNew, 0); if( rc ){ if( rc==SQLITE_NOMEM ){ - db->mallocFailed = 1; + sqlite3OomFault(db); } assert( pNew==0 ); return rc; }else{ assert( pNew!=0 ); @@ -80451,11 +110132,11 @@ ** and so if a schema change occurs, SQLITE_SCHEMA is returned by ** sqlite3_step(). In the new version, the original SQL text is retained ** and the statement is automatically recompiled if an schema change ** occurs. */ -SQLITE_API int sqlite3_prepare( +SQLITE_API int SQLITE_STDCALL sqlite3_prepare( sqlite3 *db, /* Database handle. */ const char *zSql, /* UTF-8 encoded SQL statement. */ int nBytes, /* Length of zSql in bytes. */ sqlite3_stmt **ppStmt, /* OUT: A pointer to the prepared statement */ const char **pzTail /* OUT: End of parsed string */ @@ -80463,11 +110144,11 @@ int rc; rc = sqlite3LockAndPrepare(db,zSql,nBytes,0,0,ppStmt,pzTail); assert( rc==SQLITE_OK || ppStmt==0 || *ppStmt==0 ); /* VERIFY: F13021 */ return rc; } -SQLITE_API int sqlite3_prepare_v2( +SQLITE_API int SQLITE_STDCALL sqlite3_prepare_v2( sqlite3 *db, /* Database handle. */ const char *zSql, /* UTF-8 encoded SQL statement. */ int nBytes, /* Length of zSql in bytes. */ sqlite3_stmt **ppStmt, /* OUT: A pointer to the prepared statement */ const char **pzTail /* OUT: End of parsed string */ @@ -80483,11 +110164,11 @@ /* ** Compile the UTF-16 encoded SQL statement zSql into a statement handle. */ static int sqlite3Prepare16( sqlite3 *db, /* Database handle. */ - const void *zSql, /* UTF-8 encoded SQL statement. */ + const void *zSql, /* UTF-16 encoded SQL statement. */ int nBytes, /* Length of zSql in bytes. 
*/ int saveSqlFlag, /* True to save SQL text into the sqlite3_stmt */ sqlite3_stmt **ppStmt, /* OUT: A pointer to the prepared statement */ const void **pzTail /* OUT: End of parsed string */ ){ @@ -80497,14 +110178,22 @@ */ char *zSql8; const char *zTail8 = 0; int rc = SQLITE_OK; - assert( ppStmt ); +#ifdef SQLITE_ENABLE_API_ARMOR + if( ppStmt==0 ) return SQLITE_MISUSE_BKPT; +#endif *ppStmt = 0; - if( !sqlite3SafetyCheckOk(db) ){ + if( !sqlite3SafetyCheckOk(db)||zSql==0 ){ return SQLITE_MISUSE_BKPT; + } + if( nBytes>=0 ){ + int sz; + const char *z = (const char*)zSql; + for(sz=0; sz<nBytes && (z[sz]!=0 || z[sz+1]!=0); sz += 2){} + nBytes = sz; } sqlite3_mutex_enter(db->mutex); zSql8 = sqlite3Utf16to8(db, zSql, nBytes, SQLITE_UTF16NATIVE); if( zSql8 ){ rc = sqlite3LockAndPrepare(db, zSql8, -1, saveSqlFlag, 0, ppStmt, &zTail8); @@ -80531,25 +110220,25 @@ ** and so if a schema change occurs, SQLITE_SCHEMA is returned by ** sqlite3_step(). In the new version, the original SQL text is retained ** and the statement is automatically recompiled if an schema change ** occurs. */ -SQLITE_API int sqlite3_prepare16( +SQLITE_API int SQLITE_STDCALL sqlite3_prepare16( sqlite3 *db, /* Database handle. */ - const void *zSql, /* UTF-8 encoded SQL statement. */ + const void *zSql, /* UTF-16 encoded SQL statement. */ int nBytes, /* Length of zSql in bytes. */ sqlite3_stmt **ppStmt, /* OUT: A pointer to the prepared statement */ const void **pzTail /* OUT: End of parsed string */ ){ int rc; rc = sqlite3Prepare16(db,zSql,nBytes,0,ppStmt,pzTail); assert( rc==SQLITE_OK || ppStmt==0 || *ppStmt==0 ); /* VERIFY: F13021 */ return rc; } -SQLITE_API int sqlite3_prepare16_v2( +SQLITE_API int SQLITE_STDCALL sqlite3_prepare16_v2( sqlite3 *db, /* Database handle. */ - const void *zSql, /* UTF-8 encoded SQL statement. */ + const void *zSql, /* UTF-16 encoded SQL statement. */ int nBytes, /* Length of zSql in bytes. */ sqlite3_stmt **ppStmt, /* OUT: A pointer to the prepared statement */ const void **pzTail /* OUT: End of parsed string */ ){ int rc; @@ -80574,37 +110263,88 @@ ** ************************************************************************* ** This file contains C code routines that are called by the parser ** to handle SELECT statements in SQLite. */ +/* #include "sqliteInt.h" */ + +/* +** Trace output macros +*/ +#if SELECTTRACE_ENABLED +/***/ int sqlite3SelectTrace = 0; +# define SELECTTRACE(K,P,S,X) \ + if(sqlite3SelectTrace&(K)) \ + sqlite3DebugPrintf("%*s%s.%p: ",(P)->nSelectIndent*2-2,"",\ + (S)->zSelName,(S)),\ + sqlite3DebugPrintf X +#else +# define SELECTTRACE(K,P,S,X) +#endif /* -** Delete all the content of a Select structure but do not deallocate -** the select structure itself. +** An instance of the following object is used to record information about +** how to process the DISTINCT keyword, to simplify passing that information +** into the selectInnerLoop() routine. 
*/ -static void clearSelect(sqlite3 *db, Select *p){ - sqlite3ExprListDelete(db, p->pEList); - sqlite3SrcListDelete(db, p->pSrc); - sqlite3ExprDelete(db, p->pWhere); - sqlite3ExprListDelete(db, p->pGroupBy); - sqlite3ExprDelete(db, p->pHaving); - sqlite3ExprListDelete(db, p->pOrderBy); - sqlite3SelectDelete(db, p->pPrior); - sqlite3ExprDelete(db, p->pLimit); - sqlite3ExprDelete(db, p->pOffset); +typedef struct DistinctCtx DistinctCtx; +struct DistinctCtx { + u8 isTnct; /* True if the DISTINCT keyword is present */ + u8 eTnctType; /* One of the WHERE_DISTINCT_* operators */ + int tabTnct; /* Ephemeral table used for DISTINCT processing */ + int addrTnct; /* Address of OP_OpenEphemeral opcode for tabTnct */ +}; + +/* +** An instance of the following object is used to record information about +** the ORDER BY (or GROUP BY) clause of query is being coded. +*/ +typedef struct SortCtx SortCtx; +struct SortCtx { + ExprList *pOrderBy; /* The ORDER BY (or GROUP BY clause) */ + int nOBSat; /* Number of ORDER BY terms satisfied by indices */ + int iECursor; /* Cursor number for the sorter */ + int regReturn; /* Register holding block-output return address */ + int labelBkOut; /* Start label for the block-output subroutine */ + int addrSortIndex; /* Address of the OP_SorterOpen or OP_OpenEphemeral */ + int labelDone; /* Jump here when done, ex: LIMIT reached */ + u8 sortFlags; /* Zero or more SORTFLAG_* bits */ +}; +#define SORTFLAG_UseSorter 0x01 /* Use SorterOpen instead of OpenEphemeral */ + +/* +** Delete all the content of a Select structure. Deallocate the structure +** itself only if bFree is true. +*/ +static void clearSelect(sqlite3 *db, Select *p, int bFree){ + while( p ){ + Select *pPrior = p->pPrior; + sqlite3ExprListDelete(db, p->pEList); + sqlite3SrcListDelete(db, p->pSrc); + sqlite3ExprDelete(db, p->pWhere); + sqlite3ExprListDelete(db, p->pGroupBy); + sqlite3ExprDelete(db, p->pHaving); + sqlite3ExprListDelete(db, p->pOrderBy); + sqlite3ExprDelete(db, p->pLimit); + sqlite3ExprDelete(db, p->pOffset); + sqlite3WithDelete(db, p->pWith); + if( bFree ) sqlite3DbFree(db, p); + p = pPrior; + bFree = 1; + } } /* ** Initialize a SelectDest structure. */ SQLITE_PRIVATE void sqlite3SelectDestInit(SelectDest *pDest, int eDest, int iParm){ pDest->eDest = (u8)eDest; - pDest->iParm = iParm; - pDest->affinity = 0; - pDest->iMem = 0; - pDest->nMem = 0; + pDest->iSDParm = iParm; + pDest->affSdst = 0; + pDest->iSdst = 0; + pDest->nSdst = 0; } /* ** Allocate a new Select structure and return a pointer to that @@ -80616,60 +110356,87 @@ SrcList *pSrc, /* the FROM clause -- which tables to scan */ Expr *pWhere, /* the WHERE clause */ ExprList *pGroupBy, /* the GROUP BY clause */ Expr *pHaving, /* the HAVING clause */ ExprList *pOrderBy, /* the ORDER BY clause */ - int isDistinct, /* true if the DISTINCT keyword is present */ + u16 selFlags, /* Flag parameters, such as SF_Distinct */ Expr *pLimit, /* LIMIT value. NULL means not used */ Expr *pOffset /* OFFSET value. 
NULL means no offset */ ){ Select *pNew; Select standin; sqlite3 *db = pParse->db; - pNew = sqlite3DbMallocZero(db, sizeof(*pNew) ); - assert( db->mallocFailed || !pOffset || pLimit ); /* OFFSET implies LIMIT */ + pNew = sqlite3DbMallocRawNN(db, sizeof(*pNew) ); if( pNew==0 ){ + assert( db->mallocFailed ); pNew = &standin; - memset(pNew, 0, sizeof(*pNew)); } if( pEList==0 ){ - pEList = sqlite3ExprListAppend(pParse, 0, sqlite3Expr(db,TK_ALL,0)); + pEList = sqlite3ExprListAppend(pParse, 0, sqlite3Expr(db,TK_ASTERISK,0)); } pNew->pEList = pEList; + pNew->op = TK_SELECT; + pNew->selFlags = selFlags; + pNew->iLimit = 0; + pNew->iOffset = 0; +#if SELECTTRACE_ENABLED + pNew->zSelName[0] = 0; +#endif + pNew->addrOpenEphm[0] = -1; + pNew->addrOpenEphm[1] = -1; + pNew->nSelectRow = 0; + if( pSrc==0 ) pSrc = sqlite3DbMallocZero(db, sizeof(*pSrc)); pNew->pSrc = pSrc; pNew->pWhere = pWhere; pNew->pGroupBy = pGroupBy; pNew->pHaving = pHaving; pNew->pOrderBy = pOrderBy; - pNew->selFlags = isDistinct ? SF_Distinct : 0; - pNew->op = TK_SELECT; + pNew->pPrior = 0; + pNew->pNext = 0; pNew->pLimit = pLimit; pNew->pOffset = pOffset; - assert( pOffset==0 || pLimit!=0 ); - pNew->addrOpenEphm[0] = -1; - pNew->addrOpenEphm[1] = -1; - pNew->addrOpenEphm[2] = -1; + pNew->pWith = 0; + assert( pOffset==0 || pLimit!=0 || pParse->nErr>0 || db->mallocFailed!=0 ); if( db->mallocFailed ) { - clearSelect(db, pNew); - if( pNew!=&standin ) sqlite3DbFree(db, pNew); + clearSelect(db, pNew, pNew!=&standin); pNew = 0; + }else{ + assert( pNew->pSrc!=0 || pParse->nErr>0 ); } + assert( pNew!=&standin ); return pNew; } + +#if SELECTTRACE_ENABLED +/* +** Set the name of a Select object +*/ +SQLITE_PRIVATE void sqlite3SelectSetName(Select *p, const char *zName){ + if( p && zName ){ + sqlite3_snprintf(sizeof(p->zSelName), p->zSelName, "%s", zName); + } +} +#endif + /* ** Delete the given Select structure and all of its substructures. */ SQLITE_PRIVATE void sqlite3SelectDelete(sqlite3 *db, Select *p){ - if( p ){ - clearSelect(db, p); - sqlite3DbFree(db, p); - } + clearSelect(db, p, 1); } /* -** Given 1 to 3 identifiers preceeding the JOIN keyword, determine the +** Return a pointer to the right-most SELECT statement in a compound. +*/ +static Select *findRightmost(Select *p){ + while( p->pNext ) p = p->pNext; + return p; +} + +/* +** Given 1 to 3 identifiers preceding the JOIN keyword, determine the ** type of join. Return an integer constant that expresses that type ** in terms of the following bit values: ** ** JT_INNER ** JT_CROSS @@ -80820,12 +110587,12 @@ pE2 = sqlite3CreateColumnExpr(db, pSrc, iRight, iColRight); pEq = sqlite3PExpr(pParse, TK_EQ, pE1, pE2, 0); if( pEq && isOuterJoin ){ ExprSetProperty(pEq, EP_FromJoin); - assert( !ExprHasAnyProperty(pEq, EP_TokenOnly|EP_Reduced) ); - ExprSetIrreducible(pEq); + assert( !ExprHasProperty(pEq, EP_TokenOnly|EP_Reduced) ); + ExprSetVVAProperty(pEq, EP_NoReduce); pEq->iRightJoinTable = (i16)pE2->iTable; } *ppWhere = sqlite3ExprAnd(db, *ppWhere, pEq); } @@ -80856,13 +110623,19 @@ ** the output, which is incorrect. 
*/ static void setJoinExpr(Expr *p, int iTable){ while( p ){ ExprSetProperty(p, EP_FromJoin); - assert( !ExprHasAnyProperty(p, EP_TokenOnly|EP_Reduced) ); - ExprSetIrreducible(p); + assert( !ExprHasProperty(p, EP_TokenOnly|EP_Reduced) ); + ExprSetVVAProperty(p, EP_NoReduce); p->iRightJoinTable = (i16)iTable; + if( p->op==TK_FUNCTION && p->x.pList ){ + int i; + for(i=0; i<p->x.pList->nExpr; i++){ + setJoinExpr(p->x.pList->a[i].pExpr, iTable); + } + } setJoinExpr(p->pLeft, iTable); p = p->pRight; } } @@ -80893,16 +110666,16 @@ Table *pLeftTab = pLeft->pTab; Table *pRightTab = pRight->pTab; int isOuter; if( NEVER(pLeftTab==0 || pRightTab==0) ) continue; - isOuter = (pRight->jointype & JT_OUTER)!=0; + isOuter = (pRight->fg.jointype & JT_OUTER)!=0; /* When the NATURAL keyword is present, add WHERE clause terms for ** every column that the two tables have in common. */ - if( pRight->jointype & JT_NATURAL ){ + if( pRight->fg.jointype & JT_NATURAL ){ if( pRight->pOn || pRight->pUsing ){ sqlite3ErrorMsg(pParse, "a NATURAL join may not have " "an ON or USING clause", 0); return 1; } @@ -80966,66 +110739,129 @@ } } return 0; } +/* Forward reference */ +static KeyInfo *keyInfoFromExprList( + Parse *pParse, /* Parsing context */ + ExprList *pList, /* Form the KeyInfo object from this ExprList */ + int iStart, /* Begin with this column of pList */ + int nExtra /* Add this many extra columns to the end */ +); + /* -** Insert code into "v" that will push the record on the top of the -** stack into the sorter. +** Generate code that will push the record in registers regData +** through regData+nData-1 onto the sorter. */ static void pushOntoSorter( Parse *pParse, /* Parser context */ - ExprList *pOrderBy, /* The ORDER BY clause */ + SortCtx *pSort, /* Information about the ORDER BY clause */ Select *pSelect, /* The whole SELECT statement */ - int regData /* Register holding data to be sorted */ -){ - Vdbe *v = pParse->pVdbe; - int nExpr = pOrderBy->nExpr; - int regBase = sqlite3GetTempRange(pParse, nExpr+2); - int regRecord = sqlite3GetTempReg(pParse); - sqlite3ExprCacheClear(pParse); - sqlite3ExprCodeExprList(pParse, pOrderBy, regBase, 0); - sqlite3VdbeAddOp2(v, OP_Sequence, pOrderBy->iECursor, regBase+nExpr); - sqlite3ExprCodeMove(pParse, regData, regBase+nExpr+1, 1); - sqlite3VdbeAddOp3(v, OP_MakeRecord, regBase, nExpr + 2, regRecord); - sqlite3VdbeAddOp2(v, OP_IdxInsert, pOrderBy->iECursor, regRecord); - sqlite3ReleaseTempReg(pParse, regRecord); - sqlite3ReleaseTempRange(pParse, regBase, nExpr+2); - if( pSelect->iLimit ){ - int addr1, addr2; - int iLimit; - if( pSelect->iOffset ){ - iLimit = pSelect->iOffset+1; - }else{ - iLimit = pSelect->iLimit; - } - addr1 = sqlite3VdbeAddOp1(v, OP_IfZero, iLimit); - sqlite3VdbeAddOp2(v, OP_AddImm, iLimit, -1); - addr2 = sqlite3VdbeAddOp0(v, OP_Goto); - sqlite3VdbeJumpHere(v, addr1); - sqlite3VdbeAddOp1(v, OP_Last, pOrderBy->iECursor); - sqlite3VdbeAddOp1(v, OP_Delete, pOrderBy->iECursor); - sqlite3VdbeJumpHere(v, addr2); - pSelect->iLimit = 0; + int regData, /* First register holding data to be sorted */ + int regOrigData, /* First register holding data before packing */ + int nData, /* Number of elements in the data array */ + int nPrefixReg /* No. of reg prior to regData available for use */ +){ + Vdbe *v = pParse->pVdbe; /* Stmt under construction */ + int bSeq = ((pSort->sortFlags & SORTFLAG_UseSorter)==0); + int nExpr = pSort->pOrderBy->nExpr; /* No. 
of ORDER BY terms */ + int nBase = nExpr + bSeq + nData; /* Fields in sorter record */ + int regBase; /* Regs for sorter record */ + int regRecord = ++pParse->nMem; /* Assembled sorter record */ + int nOBSat = pSort->nOBSat; /* ORDER BY terms to skip */ + int op; /* Opcode to add sorter record to sorter */ + int iLimit; /* LIMIT counter */ + + assert( bSeq==0 || bSeq==1 ); + assert( nData==1 || regData==regOrigData ); + if( nPrefixReg ){ + assert( nPrefixReg==nExpr+bSeq ); + regBase = regData - nExpr - bSeq; + }else{ + regBase = pParse->nMem + 1; + pParse->nMem += nBase; + } + assert( pSelect->iOffset==0 || pSelect->iLimit!=0 ); + iLimit = pSelect->iOffset ? pSelect->iOffset+1 : pSelect->iLimit; + pSort->labelDone = sqlite3VdbeMakeLabel(v); + sqlite3ExprCodeExprList(pParse, pSort->pOrderBy, regBase, regOrigData, + SQLITE_ECEL_DUP|SQLITE_ECEL_REF); + if( bSeq ){ + sqlite3VdbeAddOp2(v, OP_Sequence, pSort->iECursor, regBase+nExpr); + } + if( nPrefixReg==0 ){ + sqlite3ExprCodeMove(pParse, regData, regBase+nExpr+bSeq, nData); + } + sqlite3VdbeAddOp3(v, OP_MakeRecord, regBase+nOBSat, nBase-nOBSat, regRecord); + if( nOBSat>0 ){ + int regPrevKey; /* The first nOBSat columns of the previous row */ + int addrFirst; /* Address of the OP_IfNot opcode */ + int addrJmp; /* Address of the OP_Jump opcode */ + VdbeOp *pOp; /* Opcode that opens the sorter */ + int nKey; /* Number of sorting key columns, including OP_Sequence */ + KeyInfo *pKI; /* Original KeyInfo on the sorter table */ + + regPrevKey = pParse->nMem+1; + pParse->nMem += pSort->nOBSat; + nKey = nExpr - pSort->nOBSat + bSeq; + if( bSeq ){ + addrFirst = sqlite3VdbeAddOp1(v, OP_IfNot, regBase+nExpr); + }else{ + addrFirst = sqlite3VdbeAddOp1(v, OP_SequenceTest, pSort->iECursor); + } + VdbeCoverage(v); + sqlite3VdbeAddOp3(v, OP_Compare, regPrevKey, regBase, pSort->nOBSat); + pOp = sqlite3VdbeGetOp(v, pSort->addrSortIndex); + if( pParse->db->mallocFailed ) return; + pOp->p2 = nKey + nData; + pKI = pOp->p4.pKeyInfo; + memset(pKI->aSortOrder, 0, pKI->nField); /* Makes OP_Jump below testable */ + sqlite3VdbeChangeP4(v, -1, (char*)pKI, P4_KEYINFO); + testcase( pKI->nXField>2 ); + pOp->p4.pKeyInfo = keyInfoFromExprList(pParse, pSort->pOrderBy, nOBSat, + pKI->nXField-1); + addrJmp = sqlite3VdbeCurrentAddr(v); + sqlite3VdbeAddOp3(v, OP_Jump, addrJmp+1, 0, addrJmp+1); VdbeCoverage(v); + pSort->labelBkOut = sqlite3VdbeMakeLabel(v); + pSort->regReturn = ++pParse->nMem; + sqlite3VdbeAddOp2(v, OP_Gosub, pSort->regReturn, pSort->labelBkOut); + sqlite3VdbeAddOp1(v, OP_ResetSorter, pSort->iECursor); + if( iLimit ){ + sqlite3VdbeAddOp2(v, OP_IfNot, iLimit, pSort->labelDone); + VdbeCoverage(v); + } + sqlite3VdbeJumpHere(v, addrFirst); + sqlite3ExprCodeMove(pParse, regBase, regPrevKey, pSort->nOBSat); + sqlite3VdbeJumpHere(v, addrJmp); + } + if( pSort->sortFlags & SORTFLAG_UseSorter ){ + op = OP_SorterInsert; + }else{ + op = OP_IdxInsert; + } + sqlite3VdbeAddOp2(v, op, pSort->iECursor, regRecord); + if( iLimit ){ + int addr; + addr = sqlite3VdbeAddOp3(v, OP_IfNotZero, iLimit, 0, 1); VdbeCoverage(v); + sqlite3VdbeAddOp1(v, OP_Last, pSort->iECursor); + sqlite3VdbeAddOp1(v, OP_Delete, pSort->iECursor); + sqlite3VdbeJumpHere(v, addr); } } /* ** Add code to implement the OFFSET */ static void codeOffset( Vdbe *v, /* Generate code into this VM */ - Select *p, /* The SELECT statement being coded */ + int iOffset, /* Register holding the offset counter */ int iContinue /* Jump here to skip the current record */ ){ - if( p->iOffset && iContinue!=0 ){ - int addr; - 
sqlite3VdbeAddOp2(v, OP_AddImm, p->iOffset, -1); - addr = sqlite3VdbeAddOp1(v, OP_IfNeg, p->iOffset); - sqlite3VdbeAddOp2(v, OP_Goto, 0, iContinue); - VdbeComment((v, "skip OFFSET records")); - sqlite3VdbeJumpHere(v, addr); + if( iOffset>0 ){ + sqlite3VdbeAddOp3(v, OP_IfPos, iOffset, iContinue, 1); VdbeCoverage(v); + VdbeComment((v, "OFFSET")); } } /* ** Add code that will check to make sure the N registers starting at iMem @@ -81046,21 +110882,23 @@ Vdbe *v; int r1; v = pParse->pVdbe; r1 = sqlite3GetTempReg(pParse); - sqlite3VdbeAddOp4Int(v, OP_Found, iTab, addrRepeat, iMem, N); + sqlite3VdbeAddOp4Int(v, OP_Found, iTab, addrRepeat, iMem, N); VdbeCoverage(v); sqlite3VdbeAddOp3(v, OP_MakeRecord, iMem, N, r1); sqlite3VdbeAddOp2(v, OP_IdxInsert, iTab, r1); sqlite3ReleaseTempReg(pParse, r1); } +#ifndef SQLITE_OMIT_SUBQUERY /* ** Generate an error message when a SELECT is used within a subexpression ** (example: "a IN (SELECT * FROM table)") but it has more than 1 result -** column. We do this in a subroutine because the error occurs in multiple -** places. +** column. We do this in a subroutine because the error used to occur +** in multiple places. (The error only occurs in one place now, but we +** retain the subroutine to minimize code disruption.) */ static int checkForMultiColumnSelectError( Parse *pParse, /* Parse context. */ SelectDest *pDest, /* Destination of SELECT results */ int nExpr /* Number of result columns returned by SELECT */ @@ -81072,91 +110910,150 @@ return 1; }else{ return 0; } } +#endif /* ** This routine generates the code for the inside of the inner loop ** of a SELECT. ** -** If srcTab and nColumn are both zero, then the pEList expressions -** are evaluated in order to get the data for this row. If nColumn>0 -** then data is pulled from srcTab and pEList is used only to get the -** datatypes for each column. +** If srcTab is negative, then the pEList expressions +** are evaluated in order to get the data for this row. If srcTab is +** zero or more, then data is pulled from srcTab and pEList is used only +** to get number columns and the datatype for each column. */ static void selectInnerLoop( Parse *pParse, /* The parser context */ Select *p, /* The complete select statement being coded */ ExprList *pEList, /* List of values being extracted */ int srcTab, /* Pull data from this table */ - int nColumn, /* Number of columns in the source table */ - ExprList *pOrderBy, /* If not NULL, sort results using this key */ - int distinct, /* If >=0, make sure results are distinct */ + SortCtx *pSort, /* If not NULL, info on how to process ORDER BY */ + DistinctCtx *pDistinct, /* If not NULL, info on how to process DISTINCT */ SelectDest *pDest, /* How to dispose of the results */ int iContinue, /* Jump here to continue with next row */ int iBreak /* Jump here to break out of the inner loop */ ){ Vdbe *v = pParse->pVdbe; int i; int hasDistinct; /* True if the DISTINCT keyword is present */ int regResult; /* Start of memory holding result set */ int eDest = pDest->eDest; /* How to dispose of results */ - int iParm = pDest->iParm; /* First argument to disposal method */ + int iParm = pDest->iSDParm; /* First argument to disposal method */ int nResultCol; /* Number of result columns */ + int nPrefixReg = 0; /* Number of extra registers before regResult */ assert( v ); - if( NEVER(v==0) ) return; assert( pEList!=0 ); - hasDistinct = distinct>=0; - if( pOrderBy==0 && !hasDistinct ){ - codeOffset(v, p, iContinue); + hasDistinct = pDistinct ? 
pDistinct->eTnctType : WHERE_DISTINCT_NOOP; + if( pSort && pSort->pOrderBy==0 ) pSort = 0; + if( pSort==0 && !hasDistinct ){ + assert( iContinue!=0 ); + codeOffset(v, p->iOffset, iContinue); } /* Pull the requested columns. */ - if( nColumn>0 ){ - nResultCol = nColumn; - }else{ - nResultCol = pEList->nExpr; - } - if( pDest->iMem==0 ){ - pDest->iMem = pParse->nMem+1; - pDest->nMem = nResultCol; + nResultCol = pEList->nExpr; + + if( pDest->iSdst==0 ){ + if( pSort ){ + nPrefixReg = pSort->pOrderBy->nExpr; + if( !(pSort->sortFlags & SORTFLAG_UseSorter) ) nPrefixReg++; + pParse->nMem += nPrefixReg; + } + pDest->iSdst = pParse->nMem+1; + pParse->nMem += nResultCol; + }else if( pDest->iSdst+nResultCol > pParse->nMem ){ + /* This is an error condition that can result, for example, when a SELECT + ** on the right-hand side of an INSERT contains more result columns than + ** there are columns in the table on the left. The error will be caught + ** and reported later. But we need to make sure enough memory is allocated + ** to avoid other spurious errors in the meantime. */ pParse->nMem += nResultCol; - }else{ - assert( pDest->nMem==nResultCol ); } - regResult = pDest->iMem; - if( nColumn>0 ){ - for(i=0; i<nColumn; i++){ + pDest->nSdst = nResultCol; + regResult = pDest->iSdst; + if( srcTab>=0 ){ + for(i=0; i<nResultCol; i++){ sqlite3VdbeAddOp3(v, OP_Column, srcTab, i, regResult+i); + VdbeComment((v, "%s", pEList->a[i].zName)); } }else if( eDest!=SRT_Exists ){ /* If the destination is an EXISTS(...) expression, the actual ** values returned by the SELECT are not required. */ - sqlite3ExprCacheClear(pParse); - sqlite3ExprCodeExprList(pParse, pEList, regResult, eDest==SRT_Output); + u8 ecelFlags; + if( eDest==SRT_Mem || eDest==SRT_Output || eDest==SRT_Coroutine ){ + ecelFlags = SQLITE_ECEL_DUP; + }else{ + ecelFlags = 0; + } + sqlite3ExprCodeExprList(pParse, pEList, regResult, 0, ecelFlags); } - nColumn = nResultCol; /* If the DISTINCT keyword was present on the SELECT statement ** and this row has been seen before, then do not make this row ** part of the result. */ if( hasDistinct ){ - assert( pEList!=0 ); - assert( pEList->nExpr==nColumn ); - codeDistinct(pParse, distinct, iContinue, nColumn, regResult); - if( pOrderBy==0 ){ - codeOffset(v, p, iContinue); - } - } - - if( checkForMultiColumnSelectError(pParse, pDest, pEList->nExpr) ){ - return; + switch( pDistinct->eTnctType ){ + case WHERE_DISTINCT_ORDERED: { + VdbeOp *pOp; /* No longer required OpenEphemeral instr. */ + int iJump; /* Jump destination */ + int regPrev; /* Previous row content */ + + /* Allocate space for the previous row */ + regPrev = pParse->nMem+1; + pParse->nMem += nResultCol; + + /* Change the OP_OpenEphemeral coded earlier to an OP_Null + ** sets the MEM_Cleared bit on the first register of the + ** previous value. This will cause the OP_Ne below to always + ** fail on the first iteration of the loop even if the first + ** row is all NULLs. 
+ */ + sqlite3VdbeChangeToNoop(v, pDistinct->addrTnct); + pOp = sqlite3VdbeGetOp(v, pDistinct->addrTnct); + pOp->opcode = OP_Null; + pOp->p1 = 1; + pOp->p2 = regPrev; + + iJump = sqlite3VdbeCurrentAddr(v) + nResultCol; + for(i=0; i<nResultCol; i++){ + CollSeq *pColl = sqlite3ExprCollSeq(pParse, pEList->a[i].pExpr); + if( i<nResultCol-1 ){ + sqlite3VdbeAddOp3(v, OP_Ne, regResult+i, iJump, regPrev+i); + VdbeCoverage(v); + }else{ + sqlite3VdbeAddOp3(v, OP_Eq, regResult+i, iContinue, regPrev+i); + VdbeCoverage(v); + } + sqlite3VdbeChangeP4(v, -1, (const char *)pColl, P4_COLLSEQ); + sqlite3VdbeChangeP5(v, SQLITE_NULLEQ); + } + assert( sqlite3VdbeCurrentAddr(v)==iJump || pParse->db->mallocFailed ); + sqlite3VdbeAddOp3(v, OP_Copy, regResult, regPrev, nResultCol-1); + break; + } + + case WHERE_DISTINCT_UNIQUE: { + sqlite3VdbeChangeToNoop(v, pDistinct->addrTnct); + break; + } + + default: { + assert( pDistinct->eTnctType==WHERE_DISTINCT_UNORDERED ); + codeDistinct(pParse, pDistinct->tabTnct, iContinue, nResultCol, + regResult); + break; + } + } + if( pSort==0 ){ + codeOffset(v, p->iOffset, iContinue); + } } switch( eDest ){ /* In this mode, write each query result to the key of the temporary ** table iParm. @@ -81163,11 +111060,11 @@ */ #ifndef SQLITE_OMIT_COMPOUND_SELECT case SRT_Union: { int r1; r1 = sqlite3GetTempReg(pParse); - sqlite3VdbeAddOp3(v, OP_MakeRecord, regResult, nColumn, r1); + sqlite3VdbeAddOp3(v, OP_MakeRecord, regResult, nResultCol, r1); sqlite3VdbeAddOp2(v, OP_IdxInsert, iParm, r1); sqlite3ReleaseTempReg(pParse, r1); break; } @@ -81174,53 +111071,72 @@ /* Construct a record from the query result, but instead of ** saving that record, use it as a key to delete elements from ** the temporary table iParm. */ case SRT_Except: { - sqlite3VdbeAddOp3(v, OP_IdxDelete, iParm, regResult, nColumn); + sqlite3VdbeAddOp3(v, OP_IdxDelete, iParm, regResult, nResultCol); break; } -#endif +#endif /* SQLITE_OMIT_COMPOUND_SELECT */ /* Store the result as data using a unique key. */ + case SRT_Fifo: + case SRT_DistFifo: case SRT_Table: case SRT_EphemTab: { - int r1 = sqlite3GetTempReg(pParse); + int r1 = sqlite3GetTempRange(pParse, nPrefixReg+1); testcase( eDest==SRT_Table ); testcase( eDest==SRT_EphemTab ); - sqlite3VdbeAddOp3(v, OP_MakeRecord, regResult, nColumn, r1); - if( pOrderBy ){ - pushOntoSorter(pParse, pOrderBy, p, r1); + testcase( eDest==SRT_Fifo ); + testcase( eDest==SRT_DistFifo ); + sqlite3VdbeAddOp3(v, OP_MakeRecord, regResult, nResultCol, r1+nPrefixReg); +#ifndef SQLITE_OMIT_CTE + if( eDest==SRT_DistFifo ){ + /* If the destination is DistFifo, then cursor (iParm+1) is open + ** on an ephemeral index. If the current row is already present + ** in the index, do not write it to the output. If not, add the + ** current row to the index and proceed with writing it to the + ** output table as well. 
*/ + int addr = sqlite3VdbeCurrentAddr(v) + 4; + sqlite3VdbeAddOp4Int(v, OP_Found, iParm+1, addr, r1, 0); + VdbeCoverage(v); + sqlite3VdbeAddOp2(v, OP_IdxInsert, iParm+1, r1); + assert( pSort==0 ); + } +#endif + if( pSort ){ + pushOntoSorter(pParse, pSort, p, r1+nPrefixReg,regResult,1,nPrefixReg); }else{ int r2 = sqlite3GetTempReg(pParse); sqlite3VdbeAddOp2(v, OP_NewRowid, iParm, r2); sqlite3VdbeAddOp3(v, OP_Insert, iParm, r1, r2); sqlite3VdbeChangeP5(v, OPFLAG_APPEND); sqlite3ReleaseTempReg(pParse, r2); } - sqlite3ReleaseTempReg(pParse, r1); + sqlite3ReleaseTempRange(pParse, r1, nPrefixReg+1); break; } #ifndef SQLITE_OMIT_SUBQUERY /* If we are creating a set for an "expr IN (SELECT ...)" construct, ** then there should be a single item on the stack. Write this ** item into the set table with bogus data. */ case SRT_Set: { - assert( nColumn==1 ); - p->affinity = sqlite3CompareAffinity(pEList->a[0].pExpr, pDest->affinity); - if( pOrderBy ){ + assert( nResultCol==1 ); + pDest->affSdst = + sqlite3CompareAffinity(pEList->a[0].pExpr, pDest->affSdst); + if( pSort ){ /* At first glance you would think we could optimize out the ** ORDER BY in this case since the order of entries in the set ** does not matter. But there might be a LIMIT clause, in which ** case the order does matter */ - pushOntoSorter(pParse, pOrderBy, p, regResult); + pushOntoSorter(pParse, pSort, p, regResult, regResult, 1, nPrefixReg); }else{ int r1 = sqlite3GetTempReg(pParse); - sqlite3VdbeAddOp4(v, OP_MakeRecord, regResult, 1, r1, &p->affinity, 1); + sqlite3VdbeAddOp4(v, OP_MakeRecord, regResult,1,r1, &pDest->affSdst, 1); sqlite3ExprCacheAffinityChange(pParse, regResult, 1); sqlite3VdbeAddOp2(v, OP_IdxInsert, iParm, r1); sqlite3ReleaseTempReg(pParse, r1); } break; @@ -81237,42 +111153,85 @@ /* If this is a scalar select that is part of an expression, then ** store the results in the appropriate memory cell and break out ** of the scan loop. */ case SRT_Mem: { - assert( nColumn==1 ); - if( pOrderBy ){ - pushOntoSorter(pParse, pOrderBy, p, regResult); + assert( nResultCol==1 ); + if( pSort ){ + pushOntoSorter(pParse, pSort, p, regResult, regResult, 1, nPrefixReg); }else{ - sqlite3ExprCodeMove(pParse, regResult, iParm, 1); + assert( regResult==iParm ); /* The LIMIT clause will jump out of the loop for us */ } break; } #endif /* #ifndef SQLITE_OMIT_SUBQUERY */ - /* Send the data to the callback function or to a subroutine. In the - ** case of a subroutine, the subroutine itself is responsible for - ** popping the data from the stack. 
- */ - case SRT_Coroutine: - case SRT_Output: { + case SRT_Coroutine: /* Send data to a co-routine */ + case SRT_Output: { /* Return the results */ testcase( eDest==SRT_Coroutine ); testcase( eDest==SRT_Output ); - if( pOrderBy ){ - int r1 = sqlite3GetTempReg(pParse); - sqlite3VdbeAddOp3(v, OP_MakeRecord, regResult, nColumn, r1); - pushOntoSorter(pParse, pOrderBy, p, r1); - sqlite3ReleaseTempReg(pParse, r1); + if( pSort ){ + pushOntoSorter(pParse, pSort, p, regResult, regResult, nResultCol, + nPrefixReg); }else if( eDest==SRT_Coroutine ){ - sqlite3VdbeAddOp1(v, OP_Yield, pDest->iParm); + sqlite3VdbeAddOp1(v, OP_Yield, pDest->iSDParm); }else{ - sqlite3VdbeAddOp2(v, OP_ResultRow, regResult, nColumn); - sqlite3ExprCacheAffinityChange(pParse, regResult, nColumn); + sqlite3VdbeAddOp2(v, OP_ResultRow, regResult, nResultCol); + sqlite3ExprCacheAffinityChange(pParse, regResult, nResultCol); } break; } + +#ifndef SQLITE_OMIT_CTE + /* Write the results into a priority queue that is order according to + ** pDest->pOrderBy (in pSO). pDest->iSDParm (in iParm) is the cursor for an + ** index with pSO->nExpr+2 columns. Build a key using pSO for the first + ** pSO->nExpr columns, then make sure all keys are unique by adding a + ** final OP_Sequence column. The last column is the record as a blob. + */ + case SRT_DistQueue: + case SRT_Queue: { + int nKey; + int r1, r2, r3; + int addrTest = 0; + ExprList *pSO; + pSO = pDest->pOrderBy; + assert( pSO ); + nKey = pSO->nExpr; + r1 = sqlite3GetTempReg(pParse); + r2 = sqlite3GetTempRange(pParse, nKey+2); + r3 = r2+nKey+1; + if( eDest==SRT_DistQueue ){ + /* If the destination is DistQueue, then cursor (iParm+1) is open + ** on a second ephemeral index that holds all values every previously + ** added to the queue. */ + addrTest = sqlite3VdbeAddOp4Int(v, OP_Found, iParm+1, 0, + regResult, nResultCol); + VdbeCoverage(v); + } + sqlite3VdbeAddOp3(v, OP_MakeRecord, regResult, nResultCol, r3); + if( eDest==SRT_DistQueue ){ + sqlite3VdbeAddOp2(v, OP_IdxInsert, iParm+1, r3); + sqlite3VdbeChangeP5(v, OPFLAG_USESEEKRESULT); + } + for(i=0; i<nKey; i++){ + sqlite3VdbeAddOp2(v, OP_SCopy, + regResult + pSO->a[i].u.x.iOrderByCol - 1, + r2+i); + } + sqlite3VdbeAddOp2(v, OP_Sequence, iParm, r2+nKey); + sqlite3VdbeAddOp3(v, OP_MakeRecord, r2, nKey+2, r1); + sqlite3VdbeAddOp2(v, OP_IdxInsert, iParm, r1); + if( addrTest ) sqlite3VdbeJumpHere(v, addrTest); + sqlite3ReleaseTempReg(pParse, r1); + sqlite3ReleaseTempRange(pParse, r2, nKey+2); + break; + } +#endif /* SQLITE_OMIT_CTE */ + + #if !defined(SQLITE_OMIT_TRIGGER) /* Discard the results. This is used for SELECT statements inside ** the body of a TRIGGER. The purpose of such selects is to call ** user-defined functions that have side effects. We do not care @@ -81283,19 +111242,72 @@ break; } #endif } - /* Jump to the end of the loop if the LIMIT is reached. + /* Jump to the end of the loop if the LIMIT is reached. Except, if + ** there is a sorter, in which case the sorter has already limited + ** the output for us. */ - if( p->iLimit ){ - assert( pOrderBy==0 ); /* If there is an ORDER BY, the call to - ** pushOntoSorter() would have cleared p->iLimit */ - sqlite3VdbeAddOp3(v, OP_IfZero, p->iLimit, iBreak, -1); + if( pSort==0 && p->iLimit ){ + sqlite3VdbeAddOp2(v, OP_DecrJumpZero, p->iLimit, iBreak); VdbeCoverage(v); } } +/* +** Allocate a KeyInfo object sufficient for an index of N key columns and +** X extra columns. 
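+**
+** The object is carved from a single allocation: the KeyInfo header is
+** followed by N+X CollSeq pointers (aColl[]) and then N+X sort-order
+** bytes (aSortOrder[]).  For example, sqlite3KeyInfoAlloc(db, 2, 1)
+** obtains space for three collating-sequence slots and three sort-order
+** flags with one malloc.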
+*/ +SQLITE_PRIVATE KeyInfo *sqlite3KeyInfoAlloc(sqlite3 *db, int N, int X){ + int nExtra = (N+X)*(sizeof(CollSeq*)+1); + KeyInfo *p = sqlite3Malloc(sizeof(KeyInfo) + nExtra); + if( p ){ + p->aSortOrder = (u8*)&p->aColl[N+X]; + p->nField = (u16)N; + p->nXField = (u16)X; + p->enc = ENC(db); + p->db = db; + p->nRef = 1; + memset(&p[1], 0, nExtra); + }else{ + sqlite3OomFault(db); + } + return p; +} + +/* +** Deallocate a KeyInfo object +*/ +SQLITE_PRIVATE void sqlite3KeyInfoUnref(KeyInfo *p){ + if( p ){ + assert( p->nRef>0 ); + p->nRef--; + if( p->nRef==0 ) sqlite3DbFree(0, p); + } +} + +/* +** Make a new pointer to a KeyInfo object +*/ +SQLITE_PRIVATE KeyInfo *sqlite3KeyInfoRef(KeyInfo *p){ + if( p ){ + assert( p->nRef>0 ); + p->nRef++; + } + return p; +} + +#ifdef SQLITE_DEBUG +/* +** Return TRUE if a KeyInfo object can be change. The KeyInfo object +** can only be changed if this is just a single reference to the object. +** +** This routine is used only inside of assert() statements. +*/ +SQLITE_PRIVATE int sqlite3KeyInfoIsWriteable(KeyInfo *p){ return p->nRef==1; } +#endif /* SQLITE_DEBUG */ + /* ** Given an expression list, generate a KeyInfo structure that records ** the collating sequence for each expression in that expression list. ** ** If the ExprList is an ORDER BY or GROUP BY clause then the resulting @@ -81302,42 +111314,125 @@ ** KeyInfo structure is appropriate for initializing a virtual index to ** implement that clause. If the ExprList is the result set of a SELECT ** then the KeyInfo structure is appropriate for initializing a virtual ** index to implement a DISTINCT test. ** -** Space to hold the KeyInfo structure is obtain from malloc. The calling +** Space to hold the KeyInfo structure is obtained from malloc. The calling ** function is responsible for seeing that this structure is eventually -** freed. Add the KeyInfo structure to the P4 field of an opcode using -** P4_KEYINFO_HANDOFF is the usual way of dealing with this. +** freed. */ -static KeyInfo *keyInfoFromExprList(Parse *pParse, ExprList *pList){ - sqlite3 *db = pParse->db; +static KeyInfo *keyInfoFromExprList( + Parse *pParse, /* Parsing context */ + ExprList *pList, /* Form the KeyInfo object from this ExprList */ + int iStart, /* Begin with this column of pList */ + int nExtra /* Add this many extra columns to the end */ +){ int nExpr; KeyInfo *pInfo; struct ExprList_item *pItem; + sqlite3 *db = pParse->db; int i; nExpr = pList->nExpr; - pInfo = sqlite3DbMallocZero(db, sizeof(*pInfo) + nExpr*(sizeof(CollSeq*)+1) ); + pInfo = sqlite3KeyInfoAlloc(db, nExpr-iStart, nExtra+1); if( pInfo ){ - pInfo->aSortOrder = (u8*)&pInfo->aColl[nExpr]; - pInfo->nField = (u16)nExpr; - pInfo->enc = ENC(db); - pInfo->db = db; - for(i=0, pItem=pList->a; i<nExpr; i++, pItem++){ + assert( sqlite3KeyInfoIsWriteable(pInfo) ); + for(i=iStart, pItem=pList->a+iStart; i<nExpr; i++, pItem++){ CollSeq *pColl; pColl = sqlite3ExprCollSeq(pParse, pItem->pExpr); - if( !pColl ){ - pColl = db->pDfltColl; - } - pInfo->aColl[i] = pColl; - pInfo->aSortOrder[i] = pItem->sortOrder; + if( !pColl ) pColl = db->pDfltColl; + pInfo->aColl[i-iStart] = pColl; + pInfo->aSortOrder[i-iStart] = pItem->sortOrder; } } return pInfo; } +/* +** Name of the connection operator, used for error messages. 
+*/ +static const char *selectOpName(int id){ + char *z; + switch( id ){ + case TK_ALL: z = "UNION ALL"; break; + case TK_INTERSECT: z = "INTERSECT"; break; + case TK_EXCEPT: z = "EXCEPT"; break; + default: z = "UNION"; break; + } + return z; +} + +#ifndef SQLITE_OMIT_EXPLAIN +/* +** Unless an "EXPLAIN QUERY PLAN" command is being processed, this function +** is a no-op. Otherwise, it adds a single row of output to the EQP result, +** where the caption is of the form: +** +** "USE TEMP B-TREE FOR xxx" +** +** where xxx is one of "DISTINCT", "ORDER BY" or "GROUP BY". Exactly which +** is determined by the zUsage argument. +*/ +static void explainTempTable(Parse *pParse, const char *zUsage){ + if( pParse->explain==2 ){ + Vdbe *v = pParse->pVdbe; + char *zMsg = sqlite3MPrintf(pParse->db, "USE TEMP B-TREE FOR %s", zUsage); + sqlite3VdbeAddOp4(v, OP_Explain, pParse->iSelectId, 0, 0, zMsg, P4_DYNAMIC); + } +} + +/* +** Assign expression b to lvalue a. A second, no-op, version of this macro +** is provided when SQLITE_OMIT_EXPLAIN is defined. This allows the code +** in sqlite3Select() to assign values to structure member variables that +** only exist if SQLITE_OMIT_EXPLAIN is not defined without polluting the +** code with #ifndef directives. +*/ +# define explainSetInteger(a, b) a = b + +#else +/* No-op versions of the explainXXX() functions and macros. */ +# define explainTempTable(y,z) +# define explainSetInteger(y,z) +#endif + +#if !defined(SQLITE_OMIT_EXPLAIN) && !defined(SQLITE_OMIT_COMPOUND_SELECT) +/* +** Unless an "EXPLAIN QUERY PLAN" command is being processed, this function +** is a no-op. Otherwise, it adds a single row of output to the EQP result, +** where the caption is of one of the two forms: +** +** "COMPOSITE SUBQUERIES iSub1 and iSub2 (op)" +** "COMPOSITE SUBQUERIES iSub1 and iSub2 USING TEMP B-TREE (op)" +** +** where iSub1 and iSub2 are the integers passed as the corresponding +** function parameters, and op is the text representation of the parameter +** of the same name. The parameter "op" must be one of TK_UNION, TK_EXCEPT, +** TK_INTERSECT or TK_ALL. The first form is used if argument bUseTmp is +** false, or the second form if it is true. +*/ +static void explainComposite( + Parse *pParse, /* Parse context */ + int op, /* One of TK_UNION, TK_EXCEPT etc. */ + int iSub1, /* Subquery id 1 */ + int iSub2, /* Subquery id 2 */ + int bUseTmp /* True if a temp table was used */ +){ + assert( op==TK_UNION || op==TK_EXCEPT || op==TK_INTERSECT || op==TK_ALL ); + if( pParse->explain==2 ){ + Vdbe *v = pParse->pVdbe; + char *zMsg = sqlite3MPrintf( + pParse->db, "COMPOUND SUBQUERIES %d AND %d %s(%s)", iSub1, iSub2, + bUseTmp?"USING TEMP B-TREE ":"", selectOpName(op) + ); + sqlite3VdbeAddOp4(v, OP_Explain, pParse->iSelectId, 0, 0, zMsg, P4_DYNAMIC); + } +} +#else +/* No-op versions of the explainXXX() functions and macros. */ +# define explainComposite(v,w,x,y,z) +#endif /* ** If the inner loop was generated using a non-null pOrderBy argument, ** then the results were placed in a sorter. After the loop is terminated ** we need to run the sorter and output the results. The following @@ -81344,53 +111439,86 @@ ** routine generates the code needed to do that. 
*/ static void generateSortTail( Parse *pParse, /* Parsing context */ Select *p, /* The SELECT statement */ - Vdbe *v, /* Generate code into this VDBE */ + SortCtx *pSort, /* Information on the ORDER BY clause */ int nColumn, /* Number of columns of data */ SelectDest *pDest /* Write the sorted results here */ ){ - int addrBreak = sqlite3VdbeMakeLabel(v); /* Jump here to exit loop */ + Vdbe *v = pParse->pVdbe; /* The prepared statement */ + int addrBreak = pSort->labelDone; /* Jump here to exit loop */ int addrContinue = sqlite3VdbeMakeLabel(v); /* Jump here for next cycle */ int addr; + int addrOnce = 0; int iTab; - int pseudoTab = 0; - ExprList *pOrderBy = p->pOrderBy; - + ExprList *pOrderBy = pSort->pOrderBy; int eDest = pDest->eDest; - int iParm = pDest->iParm; - + int iParm = pDest->iSDParm; int regRow; int regRowid; + int nKey; + int iSortTab; /* Sorter cursor to read from */ + int nSortData; /* Trailing values to read from sorter */ + int i; + int bSeq; /* True if sorter record includes seq. no. */ +#ifdef SQLITE_ENABLE_EXPLAIN_COMMENTS + struct ExprList_item *aOutEx = p->pEList->a; +#endif - iTab = pOrderBy->iECursor; - regRow = sqlite3GetTempReg(pParse); + assert( addrBreak<0 ); + if( pSort->labelBkOut ){ + sqlite3VdbeAddOp2(v, OP_Gosub, pSort->regReturn, pSort->labelBkOut); + sqlite3VdbeGoto(v, addrBreak); + sqlite3VdbeResolveLabel(v, pSort->labelBkOut); + } + iTab = pSort->iECursor; if( eDest==SRT_Output || eDest==SRT_Coroutine ){ - pseudoTab = pParse->nTab++; - sqlite3VdbeAddOp3(v, OP_OpenPseudo, pseudoTab, regRow, nColumn); regRowid = 0; + regRow = pDest->iSdst; + nSortData = nColumn; }else{ regRowid = sqlite3GetTempReg(pParse); + regRow = sqlite3GetTempReg(pParse); + nSortData = 1; } - addr = 1 + sqlite3VdbeAddOp2(v, OP_Sort, iTab, addrBreak); - codeOffset(v, p, addrContinue); - sqlite3VdbeAddOp3(v, OP_Column, iTab, pOrderBy->nExpr + 1, regRow); + nKey = pOrderBy->nExpr - pSort->nOBSat; + if( pSort->sortFlags & SORTFLAG_UseSorter ){ + int regSortOut = ++pParse->nMem; + iSortTab = pParse->nTab++; + if( pSort->labelBkOut ){ + addrOnce = sqlite3CodeOnce(pParse); VdbeCoverage(v); + } + sqlite3VdbeAddOp3(v, OP_OpenPseudo, iSortTab, regSortOut, nKey+1+nSortData); + if( addrOnce ) sqlite3VdbeJumpHere(v, addrOnce); + addr = 1 + sqlite3VdbeAddOp2(v, OP_SorterSort, iTab, addrBreak); + VdbeCoverage(v); + codeOffset(v, p->iOffset, addrContinue); + sqlite3VdbeAddOp3(v, OP_SorterData, iTab, regSortOut, iSortTab); + bSeq = 0; + }else{ + addr = 1 + sqlite3VdbeAddOp2(v, OP_Sort, iTab, addrBreak); VdbeCoverage(v); + codeOffset(v, p->iOffset, addrContinue); + iSortTab = iTab; + bSeq = 1; + } + for(i=0; i<nSortData; i++){ + sqlite3VdbeAddOp3(v, OP_Column, iSortTab, nKey+bSeq+i, regRow+i); + VdbeComment((v, "%s", aOutEx[i].zName ? 
aOutEx[i].zName : aOutEx[i].zSpan)); + } switch( eDest ){ - case SRT_Table: case SRT_EphemTab: { - testcase( eDest==SRT_Table ); - testcase( eDest==SRT_EphemTab ); sqlite3VdbeAddOp2(v, OP_NewRowid, iParm, regRowid); sqlite3VdbeAddOp3(v, OP_Insert, iParm, regRow, regRowid); sqlite3VdbeChangeP5(v, OPFLAG_APPEND); break; } #ifndef SQLITE_OMIT_SUBQUERY case SRT_Set: { assert( nColumn==1 ); - sqlite3VdbeAddOp4(v, OP_MakeRecord, regRow, 1, regRowid, &p->affinity, 1); + sqlite3VdbeAddOp4(v, OP_MakeRecord, regRow, 1, regRowid, + &pDest->affSdst, 1); sqlite3ExprCacheAffinityChange(pParse, regRow, 1); sqlite3VdbeAddOp2(v, OP_IdxInsert, iParm, regRowid); break; } case SRT_Mem: { @@ -81399,50 +111527,44 @@ /* The LIMIT clause will terminate the loop for us */ break; } #endif default: { - int i; assert( eDest==SRT_Output || eDest==SRT_Coroutine ); testcase( eDest==SRT_Output ); testcase( eDest==SRT_Coroutine ); - for(i=0; i<nColumn; i++){ - assert( regRow!=pDest->iMem+i ); - sqlite3VdbeAddOp3(v, OP_Column, pseudoTab, i, pDest->iMem+i); - if( i==0 ){ - sqlite3VdbeChangeP5(v, OPFLAG_CLEARCACHE); - } - } if( eDest==SRT_Output ){ - sqlite3VdbeAddOp2(v, OP_ResultRow, pDest->iMem, nColumn); - sqlite3ExprCacheAffinityChange(pParse, pDest->iMem, nColumn); + sqlite3VdbeAddOp2(v, OP_ResultRow, pDest->iSdst, nColumn); + sqlite3ExprCacheAffinityChange(pParse, pDest->iSdst, nColumn); }else{ - sqlite3VdbeAddOp1(v, OP_Yield, pDest->iParm); + sqlite3VdbeAddOp1(v, OP_Yield, pDest->iSDParm); } break; } } - sqlite3ReleaseTempReg(pParse, regRow); - sqlite3ReleaseTempReg(pParse, regRowid); - - /* LIMIT has been implemented by the pushOntoSorter() routine. - */ - assert( p->iLimit==0 ); - + if( regRowid ){ + sqlite3ReleaseTempReg(pParse, regRow); + sqlite3ReleaseTempReg(pParse, regRowid); + } /* The bottom of the loop */ sqlite3VdbeResolveLabel(v, addrContinue); - sqlite3VdbeAddOp2(v, OP_Next, iTab, addr); + if( pSort->sortFlags & SORTFLAG_UseSorter ){ + sqlite3VdbeAddOp2(v, OP_SorterNext, iTab, addr); VdbeCoverage(v); + }else{ + sqlite3VdbeAddOp2(v, OP_Next, iTab, addr); VdbeCoverage(v); + } + if( pSort->regReturn ) sqlite3VdbeAddOp1(v, OP_Return, pSort->regReturn); sqlite3VdbeResolveLabel(v, addrBreak); - if( eDest==SRT_Output || eDest==SRT_Coroutine ){ - sqlite3VdbeAddOp2(v, OP_Close, pseudoTab, 0); - } } /* ** Return a pointer to a string containing the 'declaration type' of the ** expression pExpr. The string may be treated as static by the caller. +** +** Also try to estimate the size of the returned value and return that +** result in *pEstWidth. ** ** The declaration type is the exact datatype definition extracted from the ** original CREATE TABLE statement if the expression is a column. The ** declaration type for a ROWID field is INTEGER. Exactly when an expression ** is considered a column can be complex in the presence of subqueries. The @@ -81453,25 +111575,40 @@ ** SELECT (SELECT col FROM tbl; ** SELECT (SELECT col FROM tbl); ** SELECT abc FROM (SELECT col AS abc FROM tbl); ** ** The declaration type for any expression other than a column is NULL. +** +** This routine has either 3 or 6 parameters depending on whether or not +** the SQLITE_ENABLE_COLUMN_METADATA compile-time option is used. 
*/ -static const char *columnType( +#ifdef SQLITE_ENABLE_COLUMN_METADATA +# define columnType(A,B,C,D,E,F) columnTypeImpl(A,B,C,D,E,F) +#else /* if !defined(SQLITE_ENABLE_COLUMN_METADATA) */ +# define columnType(A,B,C,D,E,F) columnTypeImpl(A,B,F) +#endif +static const char *columnTypeImpl( NameContext *pNC, Expr *pExpr, - const char **pzOriginDb, - const char **pzOriginTab, - const char **pzOriginCol +#ifdef SQLITE_ENABLE_COLUMN_METADATA + const char **pzOrigDb, + const char **pzOrigTab, + const char **pzOrigCol, +#endif + u8 *pEstWidth ){ char const *zType = 0; - char const *zOriginDb = 0; - char const *zOriginTab = 0; - char const *zOriginCol = 0; int j; - if( NEVER(pExpr==0) || pNC->pSrcList==0 ) return 0; + u8 estWidth = 1; +#ifdef SQLITE_ENABLE_COLUMN_METADATA + char const *zOrigDb = 0; + char const *zOrigTab = 0; + char const *zOrigCol = 0; +#endif + assert( pExpr!=0 ); + assert( pNC->pSrcList!=0 ); switch( pExpr->op ){ case TK_AGG_COLUMN: case TK_COLUMN: { /* The expression is a column. Locate the table the column is being ** extracted from in NameContext.pSrcList. This table may be real @@ -81522,35 +111659,48 @@ */ if( iCol>=0 && ALWAYS(iCol<pS->pEList->nExpr) ){ /* If iCol is less than zero, then the expression requests the ** rowid of the sub-select or view. This expression is legal (see ** test case misc2.2.2) - it always evaluates to NULL. + ** + ** The ALWAYS() is because iCol>=pS->pEList->nExpr will have been + ** caught already by name resolution. */ NameContext sNC; Expr *p = pS->pEList->a[iCol].pExpr; sNC.pSrcList = pS->pSrc; sNC.pNext = pNC; sNC.pParse = pNC->pParse; - zType = columnType(&sNC, p, &zOriginDb, &zOriginTab, &zOriginCol); + zType = columnType(&sNC, p,&zOrigDb,&zOrigTab,&zOrigCol, &estWidth); } - }else if( ALWAYS(pTab->pSchema) ){ + }else if( pTab->pSchema ){ /* A real table */ assert( !pS ); if( iCol<0 ) iCol = pTab->iPKey; assert( iCol==-1 || (iCol>=0 && iCol<pTab->nCol) ); +#ifdef SQLITE_ENABLE_COLUMN_METADATA if( iCol<0 ){ zType = "INTEGER"; - zOriginCol = "rowid"; + zOrigCol = "rowid"; }else{ zType = pTab->aCol[iCol].zType; - zOriginCol = pTab->aCol[iCol].zName; + zOrigCol = pTab->aCol[iCol].zName; + estWidth = pTab->aCol[iCol].szEst; } - zOriginTab = pTab->zName; + zOrigTab = pTab->zName; if( pNC->pParse ){ int iDb = sqlite3SchemaToIndex(pNC->pParse->db, pTab->pSchema); - zOriginDb = pNC->pParse->db->aDb[iDb].zName; + zOrigDb = pNC->pParse->db->aDb[iDb].zName; } +#else + if( iCol<0 ){ + zType = "INTEGER"; + }else{ + zType = pTab->aCol[iCol].zType; + estWidth = pTab->aCol[iCol].szEst; + } +#endif } break; } #ifndef SQLITE_OMIT_SUBQUERY case TK_SELECT: { @@ -81563,22 +111713,25 @@ Expr *p = pS->pEList->a[0].pExpr; assert( ExprHasProperty(pExpr, EP_xIsSelect) ); sNC.pSrcList = pS->pSrc; sNC.pNext = pNC; sNC.pParse = pNC->pParse; - zType = columnType(&sNC, p, &zOriginDb, &zOriginTab, &zOriginCol); + zType = columnType(&sNC, p, &zOrigDb, &zOrigTab, &zOrigCol, &estWidth); break; } #endif } - - if( pzOriginDb ){ - assert( pzOriginTab && pzOriginCol ); - *pzOriginDb = zOriginDb; - *pzOriginTab = zOriginTab; - *pzOriginCol = zOriginCol; + +#ifdef SQLITE_ENABLE_COLUMN_METADATA + if( pzOrigDb ){ + assert( pzOrigTab && pzOrigCol ); + *pzOrigDb = zOrigDb; + *pzOrigTab = zOrigTab; + *pzOrigCol = zOrigCol; } +#endif + if( pEstWidth ) *pEstWidth = estWidth; return zType; } /* ** Generate code that will tell the VDBE the declaration types of columns @@ -81600,25 +111753,25 @@ const char *zType; #ifdef SQLITE_ENABLE_COLUMN_METADATA const char *zOrigDb = 0; const char 
*zOrigTab = 0; const char *zOrigCol = 0; - zType = columnType(&sNC, p, &zOrigDb, &zOrigTab, &zOrigCol); + zType = columnType(&sNC, p, &zOrigDb, &zOrigTab, &zOrigCol, 0); /* The vdbe must make its own copy of the column-type and other ** column specific strings, in case the schema is reset before this ** virtual machine is deleted. */ sqlite3VdbeSetColName(v, i, COLNAME_DATABASE, zOrigDb, SQLITE_TRANSIENT); sqlite3VdbeSetColName(v, i, COLNAME_TABLE, zOrigTab, SQLITE_TRANSIENT); sqlite3VdbeSetColName(v, i, COLNAME_COLUMN, zOrigCol, SQLITE_TRANSIENT); #else - zType = columnType(&sNC, p, 0, 0, 0); + zType = columnType(&sNC, p, 0, 0, 0, 0); #endif sqlite3VdbeSetColName(v, i, COLNAME_DECLTYPE, zType, SQLITE_TRANSIENT); } -#endif /* SQLITE_OMIT_DECLTYPE */ +#endif /* !defined(SQLITE_OMIT_DECLTYPE) */ } /* ** Generate code that will tell the VDBE the names of columns ** in the result set. This information is used to provide the @@ -81639,11 +111792,13 @@ if( pParse->explain ){ return; } #endif - if( pParse->colNamesSet || NEVER(v==0) || db->mallocFailed ) return; + if( pParse->colNamesSet || db->mallocFailed ) return; + assert( v!=0 ); + assert( pTabList!=0 ); pParse->colNamesSet = 1; fullNames = (db->flags & SQLITE_FullColNames)!=0; shortNames = (db->flags & SQLITE_ShortColNames)!=0; sqlite3VdbeSetNumCols(v, pEList->nExpr); for(i=0; i<pEList->nExpr; i++){ @@ -81651,11 +111806,11 @@ p = pEList->a[i].pExpr; if( NEVER(p==0) ) continue; if( pEList->a[i].zName ){ char *zName = pEList->a[i].zName; sqlite3VdbeSetColName(v, i, COLNAME_NAME, zName, SQLITE_TRANSIENT); - }else if( (p->op==TK_COLUMN || p->op==TK_AGG_COLUMN) && pTabList ){ + }else if( p->op==TK_COLUMN || p->op==TK_AGG_COLUMN ){ Table *pTab; char *zCol; int iCol = p->iColumn; for(j=0; ALWAYS(j<pTabList->nSrc); j++){ if( pTabList->a[j].iCursor==p->iTable ) break; @@ -81678,35 +111833,20 @@ sqlite3VdbeSetColName(v, i, COLNAME_NAME, zName, SQLITE_DYNAMIC); }else{ sqlite3VdbeSetColName(v, i, COLNAME_NAME, zCol, SQLITE_TRANSIENT); } }else{ - sqlite3VdbeSetColName(v, i, COLNAME_NAME, - sqlite3DbStrDup(db, pEList->a[i].zSpan), SQLITE_DYNAMIC); + const char *z = pEList->a[i].zSpan; + z = z==0 ? sqlite3MPrintf(db, "column%d", i+1) : sqlite3DbStrDup(db, z); + sqlite3VdbeSetColName(v, i, COLNAME_NAME, z, SQLITE_DYNAMIC); } } generateColumnTypes(pParse, pTabList, pEList); } -#ifndef SQLITE_OMIT_COMPOUND_SELECT -/* -** Name of the connection operator, used for error messages. -*/ -static const char *selectOpName(int id){ - char *z; - switch( id ){ - case TK_ALL: z = "UNION ALL"; break; - case TK_INTERSECT: z = "INTERSECT"; break; - case TK_EXCEPT: z = "EXCEPT"; break; - default: z = "UNION"; break; - } - return z; -} -#endif /* SQLITE_OMIT_COMPOUND_SELECT */ - -/* -** Given a an expression list (which is really the list of expressions +/* +** Given an expression list (which is really the list of expressions ** that form the result set of a SELECT statement) compute appropriate ** column names for a table that would hold the expression list. ** ** All column names will be unique. ** @@ -81714,78 +111854,88 @@ ** and other fields of Column are zeroed. ** ** Return SQLITE_OK on success. If a memory allocation error occurs, ** store NULL in *paCol and 0 in *pnCol and return SQLITE_NOMEM. 
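**
** For example, a result set of the form "SELECT a, b+1 AS x, t1.c FROM t1"
** would (assuming no name collisions) yield the column names "a", "x",
** and "c"; when two derived names collide, an integer suffix is appended
** to the later one so that every name remains unique.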
*/ -static int selectColumnsFromExprList( +SQLITE_PRIVATE int sqlite3ColumnsFromExprList( Parse *pParse, /* Parsing context */ ExprList *pEList, /* Expr list from which to derive column names */ - int *pnCol, /* Write the number of columns here */ + i16 *pnCol, /* Write the number of columns here */ Column **paCol /* Write the new column list here */ ){ sqlite3 *db = pParse->db; /* Database connection */ int i, j; /* Loop counters */ - int cnt; /* Index added to make the name unique */ + u32 cnt; /* Index added to make the name unique */ Column *aCol, *pCol; /* For looping over result columns */ int nCol; /* Number of columns in the result set */ Expr *p; /* Expression for a single result column */ char *zName; /* Column name */ int nName; /* Size of name in zName[] */ + Hash ht; /* Hash table of column names */ - *pnCol = nCol = pEList->nExpr; - aCol = *paCol = sqlite3DbMallocZero(db, sizeof(aCol[0])*nCol); - if( aCol==0 ) return SQLITE_NOMEM; - for(i=0, pCol=aCol; i<nCol; i++, pCol++){ + sqlite3HashInit(&ht); + if( pEList ){ + nCol = pEList->nExpr; + aCol = sqlite3DbMallocZero(db, sizeof(aCol[0])*nCol); + testcase( aCol==0 ); + }else{ + nCol = 0; + aCol = 0; + } + assert( nCol==(i16)nCol ); + *pnCol = nCol; + *paCol = aCol; + + for(i=0, pCol=aCol; i<nCol && !db->mallocFailed; i++, pCol++){ /* Get an appropriate name for the column */ - p = pEList->a[i].pExpr; - assert( p->pRight==0 || ExprHasProperty(p->pRight, EP_IntValue) - || p->pRight->u.zToken==0 || p->pRight->u.zToken[0]!=0 ); + p = sqlite3ExprSkipCollate(pEList->a[i].pExpr); if( (zName = pEList->a[i].zName)!=0 ){ /* If the column contains an "AS <name>" phrase, use <name> as the name */ - zName = sqlite3DbStrDup(db, zName); }else{ Expr *pColExpr = p; /* The expression that is the result column name */ Table *pTab; /* Table associated with this expression */ - while( pColExpr->op==TK_DOT ) pColExpr = pColExpr->pRight; + while( pColExpr->op==TK_DOT ){ + pColExpr = pColExpr->pRight; + assert( pColExpr!=0 ); + } if( pColExpr->op==TK_COLUMN && ALWAYS(pColExpr->pTab!=0) ){ /* For columns use the column name name */ int iCol = pColExpr->iColumn; pTab = pColExpr->pTab; if( iCol<0 ) iCol = pTab->iPKey; - zName = sqlite3MPrintf(db, "%s", - iCol>=0 ? pTab->aCol[iCol].zName : "rowid"); + zName = iCol>=0 ? pTab->aCol[iCol].zName : "rowid"; }else if( pColExpr->op==TK_ID ){ assert( !ExprHasProperty(pColExpr, EP_IntValue) ); - zName = sqlite3MPrintf(db, "%s", pColExpr->u.zToken); + zName = pColExpr->u.zToken; }else{ /* Use the original text of the column expression as its name */ - zName = sqlite3MPrintf(db, "%s", pEList->a[i].zSpan); + zName = pEList->a[i].zSpan; } } - if( db->mallocFailed ){ - sqlite3DbFree(db, zName); - break; - } + zName = sqlite3MPrintf(db, "%s", zName); /* Make sure the column name is unique. If the name is not unique, - ** append a integer to the name so that it becomes unique. + ** append an integer to the name so that it becomes unique. 
*/ - nName = sqlite3Strlen30(zName); - for(j=cnt=0; j<i; j++){ - if( sqlite3StrICmp(aCol[j].zName, zName)==0 ){ - char *zNewName; - zName[nName] = 0; - zNewName = sqlite3MPrintf(db, "%s:%d", zName, ++cnt); - sqlite3DbFree(db, zName); - zName = zNewName; - j = -1; - if( zName==0 ) break; - } + cnt = 0; + while( zName && sqlite3HashFind(&ht, zName)!=0 ){ + nName = sqlite3Strlen30(zName); + if( nName>0 ){ + for(j=nName-1; j>0 && sqlite3Isdigit(zName[j]); j--){} + if( zName[j]==':' ) nName = j; + } + zName = sqlite3MPrintf(db, "%.*z:%u", nName, zName, ++cnt); + if( cnt>3 ) sqlite3_randomness(sizeof(cnt), &cnt); } pCol->zName = zName; + sqlite3ColumnPropertiesFromName(0, pCol); + if( zName && sqlite3HashInsert(&ht, zName, pCol)==pCol ){ + sqlite3OomFault(db); + } } + sqlite3HashClear(&ht); if( db->mallocFailed ){ for(j=0; j<i; j++){ sqlite3DbFree(db, aCol[j].zName); } sqlite3DbFree(db, aCol); @@ -81807,39 +111957,44 @@ ** This routine requires that all identifiers in the SELECT ** statement be resolved. */ static void selectAddColumnTypeAndCollation( Parse *pParse, /* Parsing contexts */ - int nCol, /* Number of columns */ - Column *aCol, /* List of columns */ + Table *pTab, /* Add column type information to this table */ Select *pSelect /* SELECT used to determine types and collations */ ){ sqlite3 *db = pParse->db; NameContext sNC; Column *pCol; CollSeq *pColl; int i; Expr *p; struct ExprList_item *a; + u64 szAll = 0; assert( pSelect!=0 ); assert( (pSelect->selFlags & SF_Resolved)!=0 ); - assert( nCol==pSelect->pEList->nExpr || db->mallocFailed ); + assert( pTab->nCol==pSelect->pEList->nExpr || db->mallocFailed ); if( db->mallocFailed ) return; memset(&sNC, 0, sizeof(sNC)); sNC.pSrcList = pSelect->pSrc; a = pSelect->pEList->a; - for(i=0, pCol=aCol; i<nCol; i++, pCol++){ + for(i=0, pCol=pTab->aCol; i<pTab->nCol; i++, pCol++){ p = a[i].pExpr; - pCol->zType = sqlite3DbStrDup(db, columnType(&sNC, p, 0, 0, 0)); + if( pCol->zType==0 ){ + pCol->zType = sqlite3DbStrDup(db, + columnType(&sNC, p,0,0,0, &pCol->szEst)); + } + szAll += pCol->szEst; pCol->affinity = sqlite3ExprAffinity(p); - if( pCol->affinity==0 ) pCol->affinity = SQLITE_AFF_NONE; + if( pCol->affinity==0 ) pCol->affinity = SQLITE_AFF_BLOB; pColl = sqlite3ExprCollSeq(pParse, p); - if( pColl ){ + if( pColl && pCol->zColl==0 ){ pCol->zColl = sqlite3DbStrDup(db, pColl->zName); } } + pTab->szTabRow = sqlite3LogEst(szAll*4); } /* ** Given a SELECT statement, generate a Table structure that describes ** the result set of that SELECT. @@ -81859,20 +112014,20 @@ pTab = sqlite3DbMallocZero(db, sizeof(Table) ); if( pTab==0 ){ return 0; } /* The sqlite3ResultSetOfSelect() is only used n contexts where lookaside - ** is disabled, so we might as well hard-code pTab->dbMem to NULL. */ - assert( db->lookaside.bEnabled==0 ); - pTab->dbMem = 0; + ** is disabled */ + assert( db->lookaside.bDisable ); pTab->nRef = 1; pTab->zName = 0; - selectColumnsFromExprList(pParse, pSelect->pEList, &pTab->nCol, &pTab->aCol); - selectAddColumnTypeAndCollation(pParse, pTab->nCol, pTab->aCol, pSelect); + pTab->nRowLogEst = 200; assert( 200==sqlite3LogEst(1048576) ); + sqlite3ColumnsFromExprList(pParse, pSelect->pEList, &pTab->nCol, &pTab->aCol); + selectAddColumnTypeAndCollation(pParse, pTab, pSelect); pTab->iPKey = -1; if( db->mallocFailed ){ - sqlite3DeleteTable(pTab); + sqlite3DeleteTable(db, pTab); return 0; } return pTab; } @@ -81881,16 +112036,18 @@ ** If an error occurs, return NULL and leave a message in pParse. 
*/ SQLITE_PRIVATE Vdbe *sqlite3GetVdbe(Parse *pParse){ Vdbe *v = pParse->pVdbe; if( v==0 ){ - v = pParse->pVdbe = sqlite3VdbeCreate(pParse->db); -#ifndef SQLITE_OMIT_TRACE - if( v ){ - sqlite3VdbeAddOp0(v, OP_Trace); + v = pParse->pVdbe = sqlite3VdbeCreate(pParse); + if( v ) sqlite3VdbeAddOp0(v, OP_Init); + if( pParse->pToplevel==0 + && OptimizationEnabled(pParse->db,SQLITE_FactorOutConst) + ){ + pParse->okConstFactor = 1; } -#endif + } return v; } @@ -81903,62 +112060,63 @@ ** the limit and offset. If there is no limit and/or offset, then ** iLimit and iOffset are negative. ** ** This routine changes the values of iLimit and iOffset only if ** a limit or offset is defined by pLimit and pOffset. iLimit and -** iOffset should have been preset to appropriate default values -** (usually but not always -1) prior to calling this routine. +** iOffset should have been preset to appropriate default values (zero) +** prior to calling this routine. +** +** The iOffset register (if it exists) is initialized to the value +** of the OFFSET. The iLimit register is initialized to LIMIT. Register +** iOffset+1 is initialized to LIMIT+OFFSET. +** ** Only if pLimit!=0 or pOffset!=0 do the limit registers get ** redefined. The UNION ALL operator uses this property to force ** the reuse of the same limit and offset registers across multiple ** SELECT statements. */ static void computeLimitRegisters(Parse *pParse, Select *p, int iBreak){ Vdbe *v = 0; int iLimit = 0; int iOffset; - int addr1, n; + int n; if( p->iLimit ) return; /* ** "LIMIT -1" always shows all rows. There is some - ** contraversy about what the correct behavior should be. + ** controversy about what the correct behavior should be. ** The current implementation interprets "LIMIT 0" to mean ** no rows. */ sqlite3ExprCacheClear(pParse); assert( p->pOffset==0 || p->pLimit!=0 ); if( p->pLimit ){ p->iLimit = iLimit = ++pParse->nMem; v = sqlite3GetVdbe(pParse); - if( NEVER(v==0) ) return; /* VDBE should have already been allocated */ + assert( v!=0 ); if( sqlite3ExprIsInteger(p->pLimit, &n) ){ sqlite3VdbeAddOp2(v, OP_Integer, n, iLimit); VdbeComment((v, "LIMIT counter")); if( n==0 ){ - sqlite3VdbeAddOp2(v, OP_Goto, 0, iBreak); + sqlite3VdbeGoto(v, iBreak); + }else if( n>=0 && p->nSelectRow>(u64)n ){ + p->nSelectRow = n; } }else{ sqlite3ExprCode(pParse, p->pLimit, iLimit); - sqlite3VdbeAddOp1(v, OP_MustBeInt, iLimit); + sqlite3VdbeAddOp1(v, OP_MustBeInt, iLimit); VdbeCoverage(v); VdbeComment((v, "LIMIT counter")); - sqlite3VdbeAddOp2(v, OP_IfZero, iLimit, iBreak); + sqlite3VdbeAddOp2(v, OP_IfNot, iLimit, iBreak); VdbeCoverage(v); } if( p->pOffset ){ p->iOffset = iOffset = ++pParse->nMem; pParse->nMem++; /* Allocate an extra register for limit+offset */ sqlite3ExprCode(pParse, p->pOffset, iOffset); - sqlite3VdbeAddOp1(v, OP_MustBeInt, iOffset); + sqlite3VdbeAddOp1(v, OP_MustBeInt, iOffset); VdbeCoverage(v); VdbeComment((v, "OFFSET counter")); - addr1 = sqlite3VdbeAddOp1(v, OP_IfPos, iOffset); - sqlite3VdbeAddOp2(v, OP_Integer, 0, iOffset); - sqlite3VdbeJumpHere(v, addr1); - sqlite3VdbeAddOp3(v, OP_Add, iLimit, iOffset, iOffset+1); + sqlite3VdbeAddOp3(v, OP_OffsetLimit, iLimit, iOffset+1, iOffset); VdbeComment((v, "LIMIT+OFFSET")); - addr1 = sqlite3VdbeAddOp1(v, OP_IfPos, iLimit); - sqlite3VdbeAddOp2(v, OP_Integer, -1, iOffset+1); - sqlite3VdbeJumpHere(v, addr1); } } } #ifndef SQLITE_OMIT_COMPOUND_SELECT @@ -81976,26 +112134,275 @@ pRet = multiSelectCollSeq(pParse, p->pPrior, iCol); }else{ pRet = 0; } assert( iCol>=0 ); - if( pRet==0 && 
iCol<p->pEList->nExpr ){ + /* iCol must be less than p->pEList->nExpr. Otherwise an error would + ** have been thrown during name resolution and we would not have gotten + ** this far */ + if( pRet==0 && ALWAYS(iCol<p->pEList->nExpr) ){ pRet = sqlite3ExprCollSeq(pParse, p->pEList->a[iCol].pExpr); } return pRet; } -#endif /* SQLITE_OMIT_COMPOUND_SELECT */ -/* Forward reference */ +/* +** The select statement passed as the second parameter is a compound SELECT +** with an ORDER BY clause. This function allocates and returns a KeyInfo +** structure suitable for implementing the ORDER BY. +** +** Space to hold the KeyInfo structure is obtained from malloc. The calling +** function is responsible for ensuring that this structure is eventually +** freed. +*/ +static KeyInfo *multiSelectOrderByKeyInfo(Parse *pParse, Select *p, int nExtra){ + ExprList *pOrderBy = p->pOrderBy; + int nOrderBy = p->pOrderBy->nExpr; + sqlite3 *db = pParse->db; + KeyInfo *pRet = sqlite3KeyInfoAlloc(db, nOrderBy+nExtra, 1); + if( pRet ){ + int i; + for(i=0; i<nOrderBy; i++){ + struct ExprList_item *pItem = &pOrderBy->a[i]; + Expr *pTerm = pItem->pExpr; + CollSeq *pColl; + + if( pTerm->flags & EP_Collate ){ + pColl = sqlite3ExprCollSeq(pParse, pTerm); + }else{ + pColl = multiSelectCollSeq(pParse, p, pItem->u.x.iOrderByCol-1); + if( pColl==0 ) pColl = db->pDfltColl; + pOrderBy->a[i].pExpr = + sqlite3ExprAddCollateString(pParse, pTerm, pColl->zName); + } + assert( sqlite3KeyInfoIsWriteable(pRet) ); + pRet->aColl[i] = pColl; + pRet->aSortOrder[i] = pOrderBy->a[i].sortOrder; + } + } + + return pRet; +} + +#ifndef SQLITE_OMIT_CTE +/* +** This routine generates VDBE code to compute the content of a WITH RECURSIVE +** query of the form: +** +** <recursive-table> AS (<setup-query> UNION [ALL] <recursive-query>) +** \___________/ \_______________/ +** p->pPrior p +** +** +** There is exactly one reference to the recursive-table in the FROM clause +** of recursive-query, marked with the SrcList->a[].fg.isRecursive flag. +** +** The setup-query runs once to generate an initial set of rows that go +** into a Queue table. Rows are extracted from the Queue table one by +** one. Each row extracted from Queue is output to pDest. Then the single +** extracted row (now in the iCurrent table) becomes the content of the +** recursive-table for a recursive-query run. The output of the recursive-query +** is added back into the Queue table. Then another row is extracted from Queue +** and the iteration continues until the Queue table is empty. +** +** If the compound query operator is UNION then no duplicate rows are ever +** inserted into the Queue table. The iDistinct table keeps a copy of all rows +** that have ever been inserted into Queue and causes duplicates to be +** discarded. If the operator is UNION ALL, then duplicates are allowed. +** +** If the query has an ORDER BY, then entries in the Queue table are kept in +** ORDER BY order and the first entry is extracted for each cycle. Without +** an ORDER BY, the Queue table is just a FIFO. +** +** If a LIMIT clause is provided, then the iteration stops after LIMIT rows +** have been output to pDest. A LIMIT of zero means to output no rows and a +** negative LIMIT means to output all rows. If there is also an OFFSET clause +** with a positive value, then the first OFFSET outputs are discarded rather +** than being sent to pDest. The LIMIT count does not begin until after OFFSET +** rows have been skipped. 
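+**
+** As an illustration, a minimal query of the kind handled here is:
+**
+**     WITH RECURSIVE cnt(x) AS (VALUES(1) UNION ALL SELECT x+1 FROM cnt WHERE x<5)
+**     SELECT x FROM cnt;
+**
+** The VALUES(1) term is the setup-query that seeds the Queue, and the
+** "SELECT x+1 FROM cnt" term is the recursive-query that is re-run for
+** each row extracted from the Queue.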
+*/ +static void generateWithRecursiveQuery( + Parse *pParse, /* Parsing context */ + Select *p, /* The recursive SELECT to be coded */ + SelectDest *pDest /* What to do with query results */ +){ + SrcList *pSrc = p->pSrc; /* The FROM clause of the recursive query */ + int nCol = p->pEList->nExpr; /* Number of columns in the recursive table */ + Vdbe *v = pParse->pVdbe; /* The prepared statement under construction */ + Select *pSetup = p->pPrior; /* The setup query */ + int addrTop; /* Top of the loop */ + int addrCont, addrBreak; /* CONTINUE and BREAK addresses */ + int iCurrent = 0; /* The Current table */ + int regCurrent; /* Register holding Current table */ + int iQueue; /* The Queue table */ + int iDistinct = 0; /* To ensure unique results if UNION */ + int eDest = SRT_Fifo; /* How to write to Queue */ + SelectDest destQueue; /* SelectDest targetting the Queue table */ + int i; /* Loop counter */ + int rc; /* Result code */ + ExprList *pOrderBy; /* The ORDER BY clause */ + Expr *pLimit, *pOffset; /* Saved LIMIT and OFFSET */ + int regLimit, regOffset; /* Registers used by LIMIT and OFFSET */ + + /* Obtain authorization to do a recursive query */ + if( sqlite3AuthCheck(pParse, SQLITE_RECURSIVE, 0, 0, 0) ) return; + + /* Process the LIMIT and OFFSET clauses, if they exist */ + addrBreak = sqlite3VdbeMakeLabel(v); + computeLimitRegisters(pParse, p, addrBreak); + pLimit = p->pLimit; + pOffset = p->pOffset; + regLimit = p->iLimit; + regOffset = p->iOffset; + p->pLimit = p->pOffset = 0; + p->iLimit = p->iOffset = 0; + pOrderBy = p->pOrderBy; + + /* Locate the cursor number of the Current table */ + for(i=0; ALWAYS(i<pSrc->nSrc); i++){ + if( pSrc->a[i].fg.isRecursive ){ + iCurrent = pSrc->a[i].iCursor; + break; + } + } + + /* Allocate cursors numbers for Queue and Distinct. The cursor number for + ** the Distinct table must be exactly one greater than Queue in order + ** for the SRT_DistFifo and SRT_DistQueue destinations to work. */ + iQueue = pParse->nTab++; + if( p->op==TK_UNION ){ + eDest = pOrderBy ? SRT_DistQueue : SRT_DistFifo; + iDistinct = pParse->nTab++; + }else{ + eDest = pOrderBy ? SRT_Queue : SRT_Fifo; + } + sqlite3SelectDestInit(&destQueue, eDest, iQueue); + + /* Allocate cursors for Current, Queue, and Distinct. */ + regCurrent = ++pParse->nMem; + sqlite3VdbeAddOp3(v, OP_OpenPseudo, iCurrent, regCurrent, nCol); + if( pOrderBy ){ + KeyInfo *pKeyInfo = multiSelectOrderByKeyInfo(pParse, p, 1); + sqlite3VdbeAddOp4(v, OP_OpenEphemeral, iQueue, pOrderBy->nExpr+2, 0, + (char*)pKeyInfo, P4_KEYINFO); + destQueue.pOrderBy = pOrderBy; + }else{ + sqlite3VdbeAddOp2(v, OP_OpenEphemeral, iQueue, nCol); + } + VdbeComment((v, "Queue table")); + if( iDistinct ){ + p->addrOpenEphm[0] = sqlite3VdbeAddOp2(v, OP_OpenEphemeral, iDistinct, 0); + p->selFlags |= SF_UsesEphemeral; + } + + /* Detach the ORDER BY clause from the compound SELECT */ + p->pOrderBy = 0; + + /* Store the results of the setup-query in Queue. 
*/ + pSetup->pNext = 0; + rc = sqlite3Select(pParse, pSetup, &destQueue); + pSetup->pNext = p; + if( rc ) goto end_of_recursive_query; + + /* Find the next row in the Queue and output that row */ + addrTop = sqlite3VdbeAddOp2(v, OP_Rewind, iQueue, addrBreak); VdbeCoverage(v); + + /* Transfer the next row in Queue over to Current */ + sqlite3VdbeAddOp1(v, OP_NullRow, iCurrent); /* To reset column cache */ + if( pOrderBy ){ + sqlite3VdbeAddOp3(v, OP_Column, iQueue, pOrderBy->nExpr+1, regCurrent); + }else{ + sqlite3VdbeAddOp2(v, OP_RowData, iQueue, regCurrent); + } + sqlite3VdbeAddOp1(v, OP_Delete, iQueue); + + /* Output the single row in Current */ + addrCont = sqlite3VdbeMakeLabel(v); + codeOffset(v, regOffset, addrCont); + selectInnerLoop(pParse, p, p->pEList, iCurrent, + 0, 0, pDest, addrCont, addrBreak); + if( regLimit ){ + sqlite3VdbeAddOp2(v, OP_DecrJumpZero, regLimit, addrBreak); + VdbeCoverage(v); + } + sqlite3VdbeResolveLabel(v, addrCont); + + /* Execute the recursive SELECT taking the single row in Current as + ** the value for the recursive-table. Store the results in the Queue. + */ + if( p->selFlags & SF_Aggregate ){ + sqlite3ErrorMsg(pParse, "recursive aggregate queries not supported"); + }else{ + p->pPrior = 0; + sqlite3Select(pParse, p, &destQueue); + assert( p->pPrior==0 ); + p->pPrior = pSetup; + } + + /* Keep running the loop until the Queue is empty */ + sqlite3VdbeGoto(v, addrTop); + sqlite3VdbeResolveLabel(v, addrBreak); + +end_of_recursive_query: + sqlite3ExprListDelete(pParse->db, p->pOrderBy); + p->pOrderBy = pOrderBy; + p->pLimit = pLimit; + p->pOffset = pOffset; + return; +} +#endif /* SQLITE_OMIT_CTE */ + +/* Forward references */ static int multiSelectOrderBy( Parse *pParse, /* Parsing context */ Select *p, /* The right-most of SELECTs to be coded */ SelectDest *pDest /* What to do with query results */ ); +/* +** Handle the special case of a compound-select that originates from a +** VALUES clause. By handling this as a special case, we avoid deep +** recursion, and thus do not need to enforce the SQLITE_LIMIT_COMPOUND_SELECT +** on a VALUES clause. 
+** +** Because the Select object originates from a VALUES clause: +** (1) It has no LIMIT or OFFSET +** (2) All terms are UNION ALL +** (3) There is no ORDER BY clause +*/ +static int multiSelectValues( + Parse *pParse, /* Parsing context */ + Select *p, /* The right-most of SELECTs to be coded */ + SelectDest *pDest /* What to do with query results */ +){ + Select *pPrior; + int nRow = 1; + int rc = 0; + assert( p->selFlags & SF_MultiValue ); + do{ + assert( p->selFlags & SF_Values ); + assert( p->op==TK_ALL || (p->op==TK_SELECT && p->pPrior==0) ); + assert( p->pLimit==0 ); + assert( p->pOffset==0 ); + assert( p->pNext==0 || p->pEList->nExpr==p->pNext->pEList->nExpr ); + if( p->pPrior==0 ) break; + assert( p->pPrior->pNext==p ); + p = p->pPrior; + nRow++; + }while(1); + while( p ){ + pPrior = p->pPrior; + p->pPrior = 0; + rc = sqlite3Select(pParse, p, pDest); + p->pPrior = pPrior; + if( rc ) break; + p->nSelectRow = nRow; + p = p->pNext; + } + return rc; +} -#ifndef SQLITE_OMIT_COMPOUND_SELECT /* ** This routine is called to process a compound query form from ** two or more separate queries using UNION, UNION ALL, EXCEPT, or ** INTERSECT ** @@ -82034,19 +112441,22 @@ Select *pPrior; /* Another SELECT immediately to our left */ Vdbe *v; /* Generate code to this VDBE */ SelectDest dest; /* Alternative data destination */ Select *pDelete = 0; /* Chain of simple selects to delete */ sqlite3 *db; /* Database connection */ +#ifndef SQLITE_OMIT_EXPLAIN + int iSub1 = 0; /* EQP id of left-hand query */ + int iSub2 = 0; /* EQP id of right-hand query */ +#endif /* Make sure there is no ORDER BY or LIMIT clause on prior SELECTs. Only ** the last (right-most) SELECT in the series may have an ORDER BY or LIMIT. */ assert( p && p->pPrior ); /* Calling function guarantees this much */ + assert( (p->selFlags & SF_Recursive)==0 || p->op==TK_ALL || p->op==TK_UNION ); db = pParse->db; pPrior = p->pPrior; - assert( pPrior->pRightmost!=pPrior ); - assert( pPrior->pRightmost==p->pRightmost ); dest = *pDest; if( pPrior->pOrderBy ){ sqlite3ErrorMsg(pParse,"ORDER BY clause should come after %s not before", selectOpName(p->op)); rc = 1; @@ -82064,39 +112474,52 @@ /* Create the destination temporary table if necessary */ if( dest.eDest==SRT_EphemTab ){ assert( p->pEList ); - sqlite3VdbeAddOp2(v, OP_OpenEphemeral, dest.iParm, p->pEList->nExpr); + sqlite3VdbeAddOp2(v, OP_OpenEphemeral, dest.iSDParm, p->pEList->nExpr); + sqlite3VdbeChangeP5(v, BTREE_UNORDERED); dest.eDest = SRT_Table; } + + /* Special handling for a compound-select that originates as a VALUES clause. + */ + if( p->selFlags & SF_MultiValue ){ + rc = multiSelectValues(pParse, p, &dest); + goto multi_select_end; + } /* Make sure all SELECTs in the statement have the same number of elements ** in their result sets. */ assert( p->pEList && pPrior->pEList ); - if( p->pEList->nExpr!=pPrior->pEList->nExpr ){ - sqlite3ErrorMsg(pParse, "SELECTs to the left and right of %s" - " do not have the same number of result columns", selectOpName(p->op)); - rc = 1; - goto multi_select_end; - } + assert( p->pEList->nExpr==pPrior->pEList->nExpr ); + +#ifndef SQLITE_OMIT_CTE + if( p->selFlags & SF_Recursive ){ + generateWithRecursiveQuery(pParse, p, &dest); + }else +#endif /* Compound SELECTs that have an ORDER BY clause are handled separately. */ if( p->pOrderBy ){ return multiSelectOrderBy(pParse, p, pDest); - } + }else /* Generate code for the left and right SELECT statements. 
*/ switch( p->op ){ case TK_ALL: { int addr = 0; + int nLimit; assert( !pPrior->pLimit ); + pPrior->iLimit = p->iLimit; + pPrior->iOffset = p->iOffset; pPrior->pLimit = p->pLimit; pPrior->pOffset = p->pOffset; + explainSetInteger(iSub1, pParse->iNextSelectId); rc = sqlite3Select(pParse, pPrior, &dest); p->pLimit = 0; p->pOffset = 0; if( rc ){ goto multi_select_end; @@ -82103,17 +112526,29 @@ } p->pPrior = 0; p->iLimit = pPrior->iLimit; p->iOffset = pPrior->iOffset; if( p->iLimit ){ - addr = sqlite3VdbeAddOp1(v, OP_IfZero, p->iLimit); + addr = sqlite3VdbeAddOp1(v, OP_IfNot, p->iLimit); VdbeCoverage(v); VdbeComment((v, "Jump ahead if LIMIT reached")); + if( p->iOffset ){ + sqlite3VdbeAddOp3(v, OP_OffsetLimit, + p->iLimit, p->iOffset+1, p->iOffset); + } } + explainSetInteger(iSub2, pParse->iNextSelectId); rc = sqlite3Select(pParse, p, &dest); testcase( rc!=SQLITE_OK ); pDelete = p->pPrior; p->pPrior = pPrior; + p->nSelectRow += pPrior->nSelectRow; + if( pPrior->pLimit + && sqlite3ExprIsInteger(pPrior->pLimit, &nLimit) + && nLimit>0 && p->nSelectRow > (u64)nLimit + ){ + p->nSelectRow = nLimit; + } if( addr ){ sqlite3VdbeJumpHere(v, addr); } break; } @@ -82127,36 +112562,35 @@ SelectDest uniondest; testcase( p->op==TK_EXCEPT ); testcase( p->op==TK_UNION ); priorOp = SRT_Union; - if( dest.eDest==priorOp && ALWAYS(!p->pLimit &&!p->pOffset) ){ + if( dest.eDest==priorOp ){ /* We can reuse a temporary table generated by a SELECT to our ** right. */ - assert( p->pRightmost!=p ); /* Can only happen for leftward elements - ** of a 3-way or more compound */ assert( p->pLimit==0 ); /* Not allowed on leftward elements */ assert( p->pOffset==0 ); /* Not allowed on leftward elements */ - unionTab = dest.iParm; + unionTab = dest.iSDParm; }else{ /* We will need to create our own temporary table to hold the ** intermediate results. */ unionTab = pParse->nTab++; assert( p->pOrderBy==0 ); addr = sqlite3VdbeAddOp2(v, OP_OpenEphemeral, unionTab, 0); assert( p->addrOpenEphm[0] == -1 ); p->addrOpenEphm[0] = addr; - p->pRightmost->selFlags |= SF_UsesEphemeral; + findRightmost(p)->selFlags |= SF_UsesEphemeral; assert( p->pEList ); } /* Code the SELECT statements to our left */ assert( !pPrior->pOrderBy ); sqlite3SelectDestInit(&uniondest, priorOp, unionTab); + explainSetInteger(iSub1, pParse->iNextSelectId); rc = sqlite3Select(pParse, pPrior, &uniondest); if( rc ){ goto multi_select_end; } @@ -82172,45 +112606,47 @@ pLimit = p->pLimit; p->pLimit = 0; pOffset = p->pOffset; p->pOffset = 0; uniondest.eDest = op; + explainSetInteger(iSub2, pParse->iNextSelectId); rc = sqlite3Select(pParse, p, &uniondest); testcase( rc!=SQLITE_OK ); /* Query flattening in sqlite3Select() might refill p->pOrderBy. ** Be sure to delete p->pOrderBy, therefore, to avoid a memory leak. */ sqlite3ExprListDelete(db, p->pOrderBy); pDelete = p->pPrior; p->pPrior = pPrior; p->pOrderBy = 0; + if( p->op==TK_UNION ) p->nSelectRow += pPrior->nSelectRow; sqlite3ExprDelete(db, p->pLimit); p->pLimit = pLimit; p->pOffset = pOffset; p->iLimit = 0; p->iOffset = 0; /* Convert the data in the temporary table into whatever form ** it is that we currently need. 
*/ - assert( unionTab==dest.iParm || dest.eDest!=priorOp ); + assert( unionTab==dest.iSDParm || dest.eDest!=priorOp ); if( dest.eDest!=priorOp ){ int iCont, iBreak, iStart; assert( p->pEList ); if( dest.eDest==SRT_Output ){ Select *pFirst = p; while( pFirst->pPrior ) pFirst = pFirst->pPrior; - generateColumnNames(pParse, 0, pFirst->pEList); + generateColumnNames(pParse, pFirst->pSrc, pFirst->pEList); } iBreak = sqlite3VdbeMakeLabel(v); iCont = sqlite3VdbeMakeLabel(v); computeLimitRegisters(pParse, p, iBreak); - sqlite3VdbeAddOp2(v, OP_Rewind, unionTab, iBreak); + sqlite3VdbeAddOp2(v, OP_Rewind, unionTab, iBreak); VdbeCoverage(v); iStart = sqlite3VdbeCurrentAddr(v); - selectInnerLoop(pParse, p, p->pEList, unionTab, p->pEList->nExpr, - 0, -1, &dest, iCont, iBreak); + selectInnerLoop(pParse, p, p->pEList, unionTab, + 0, 0, &dest, iCont, iBreak); sqlite3VdbeResolveLabel(v, iCont); - sqlite3VdbeAddOp2(v, OP_Next, unionTab, iStart); + sqlite3VdbeAddOp2(v, OP_Next, unionTab, iStart); VdbeCoverage(v); sqlite3VdbeResolveLabel(v, iBreak); sqlite3VdbeAddOp2(v, OP_Close, unionTab, 0); } break; } @@ -82231,16 +112667,17 @@ assert( p->pOrderBy==0 ); addr = sqlite3VdbeAddOp2(v, OP_OpenEphemeral, tab1, 0); assert( p->addrOpenEphm[0] == -1 ); p->addrOpenEphm[0] = addr; - p->pRightmost->selFlags |= SF_UsesEphemeral; + findRightmost(p)->selFlags |= SF_UsesEphemeral; assert( p->pEList ); /* Code the SELECTs to our left into temporary table "tab1". */ sqlite3SelectDestInit(&intersectdest, SRT_Union, tab1); + explainSetInteger(iSub1, pParse->iNextSelectId); rc = sqlite3Select(pParse, pPrior, &intersectdest); if( rc ){ goto multi_select_end; } @@ -82252,15 +112689,17 @@ p->pPrior = 0; pLimit = p->pLimit; p->pLimit = 0; pOffset = p->pOffset; p->pOffset = 0; - intersectdest.iParm = tab2; + intersectdest.iSDParm = tab2; + explainSetInteger(iSub2, pParse->iNextSelectId); rc = sqlite3Select(pParse, p, &intersectdest); testcase( rc!=SQLITE_OK ); pDelete = p->pPrior; p->pPrior = pPrior; + if( p->nSelectRow>pPrior->nSelectRow ) p->nSelectRow = pPrior->nSelectRow; sqlite3ExprDelete(db, p->pLimit); p->pLimit = pLimit; p->pOffset = pOffset; /* Generate code to take the intersection of the two temporary @@ -82268,30 +112707,32 @@ */ assert( p->pEList ); if( dest.eDest==SRT_Output ){ Select *pFirst = p; while( pFirst->pPrior ) pFirst = pFirst->pPrior; - generateColumnNames(pParse, 0, pFirst->pEList); + generateColumnNames(pParse, pFirst->pSrc, pFirst->pEList); } iBreak = sqlite3VdbeMakeLabel(v); iCont = sqlite3VdbeMakeLabel(v); computeLimitRegisters(pParse, p, iBreak); - sqlite3VdbeAddOp2(v, OP_Rewind, tab1, iBreak); + sqlite3VdbeAddOp2(v, OP_Rewind, tab1, iBreak); VdbeCoverage(v); r1 = sqlite3GetTempReg(pParse); iStart = sqlite3VdbeAddOp2(v, OP_RowKey, tab1, r1); - sqlite3VdbeAddOp4Int(v, OP_NotFound, tab2, iCont, r1, 0); + sqlite3VdbeAddOp4Int(v, OP_NotFound, tab2, iCont, r1, 0); VdbeCoverage(v); sqlite3ReleaseTempReg(pParse, r1); - selectInnerLoop(pParse, p, p->pEList, tab1, p->pEList->nExpr, - 0, -1, &dest, iCont, iBreak); + selectInnerLoop(pParse, p, p->pEList, tab1, + 0, 0, &dest, iCont, iBreak); sqlite3VdbeResolveLabel(v, iCont); - sqlite3VdbeAddOp2(v, OP_Next, tab1, iStart); + sqlite3VdbeAddOp2(v, OP_Next, tab1, iStart); VdbeCoverage(v); sqlite3VdbeResolveLabel(v, iBreak); sqlite3VdbeAddOp2(v, OP_Close, tab2, 0); sqlite3VdbeAddOp2(v, OP_Close, tab1, 0); break; } } + + explainComposite(pParse, p->op, iSub1, iSub2, p->op!=TK_ALL); /* Compute collating sequences used by ** temporary tables needed to implement the 
compound select. ** Attach the KeyInfo structure to all temporary tables. ** @@ -82305,22 +112746,17 @@ KeyInfo *pKeyInfo; /* Collating sequence for the result set */ Select *pLoop; /* For looping through SELECT statements */ CollSeq **apColl; /* For looping through pKeyInfo->aColl[] */ int nCol; /* Number of columns in result set */ - assert( p->pRightmost==p ); + assert( p->pNext==0 ); nCol = p->pEList->nExpr; - pKeyInfo = sqlite3DbMallocZero(db, - sizeof(*pKeyInfo)+nCol*(sizeof(CollSeq*) + 1)); + pKeyInfo = sqlite3KeyInfoAlloc(db, nCol, 1); if( !pKeyInfo ){ rc = SQLITE_NOMEM; goto multi_select_end; } - - pKeyInfo->enc = ENC(db); - pKeyInfo->nField = (u16)nCol; - for(i=0, apColl=pKeyInfo->aColl; i<nCol; i++, apColl++){ *apColl = multiSelectCollSeq(pParse, p, i); if( 0==*apColl ){ *apColl = db->pDfltColl; } @@ -82334,37 +112770,51 @@ ** always safely abort as soon as the first unused slot is found */ assert( pLoop->addrOpenEphm[1]<0 ); break; } sqlite3VdbeChangeP2(v, addr, nCol); - sqlite3VdbeChangeP4(v, addr, (char*)pKeyInfo, P4_KEYINFO); + sqlite3VdbeChangeP4(v, addr, (char*)sqlite3KeyInfoRef(pKeyInfo), + P4_KEYINFO); pLoop->addrOpenEphm[i] = -1; } } - sqlite3DbFree(db, pKeyInfo); + sqlite3KeyInfoUnref(pKeyInfo); } multi_select_end: - pDest->iMem = dest.iMem; - pDest->nMem = dest.nMem; + pDest->iSdst = dest.iSdst; + pDest->nSdst = dest.nSdst; sqlite3SelectDelete(db, pDelete); return rc; } #endif /* SQLITE_OMIT_COMPOUND_SELECT */ + +/* +** Error message for when two or more terms of a compound select have different +** size result sets. +*/ +SQLITE_PRIVATE void sqlite3SelectWrongNumTermsError(Parse *pParse, Select *p){ + if( p->selFlags & SF_Values ){ + sqlite3ErrorMsg(pParse, "all VALUES must have the same number of terms"); + }else{ + sqlite3ErrorMsg(pParse, "SELECTs to the left and right of %s" + " do not have the same number of result columns", selectOpName(p->op)); + } +} /* ** Code an output subroutine for a coroutine implementation of a ** SELECT statment. ** -** The data to be output is contained in pIn->iMem. There are -** pIn->nMem columns to be output. pDest is where the output should +** The data to be output is contained in pIn->iSdst. There are +** pIn->nSdst columns to be output. pDest is where the output should ** be sent. ** ** regReturn is the number of the register holding the subroutine ** return address. ** -** If regPrev>0 then it is a the first register in a vector that +** If regPrev>0 then it is the first register in a vector that ** records the previous output. mem[regPrev] is a flag that is false ** if there has been no previous output. If regPrev>0 then code is ** generated to suppress duplicates. pKeyInfo is used for comparing ** keys. ** @@ -82377,11 +112827,10 @@ SelectDest *pIn, /* Coroutine supplying data */ SelectDest *pDest, /* Where to send the data */ int regReturn, /* The return address register */ int regPrev, /* Previous result register. 
No uniqueness if 0 */ KeyInfo *pKeyInfo, /* For comparing with previous entry */ - int p4type, /* The p4 type for pKeyInfo */ int iBreak /* Jump here if we hit the LIMIT */ ){ Vdbe *v = pParse->pVdbe; int iContinue; int addr; @@ -82390,37 +112839,36 @@ iContinue = sqlite3VdbeMakeLabel(v); /* Suppress duplicates for UNION, EXCEPT, and INTERSECT */ if( regPrev ){ - int j1, j2; - j1 = sqlite3VdbeAddOp1(v, OP_IfNot, regPrev); - j2 = sqlite3VdbeAddOp4(v, OP_Compare, pIn->iMem, regPrev+1, pIn->nMem, - (char*)pKeyInfo, p4type); - sqlite3VdbeAddOp3(v, OP_Jump, j2+2, iContinue, j2+2); - sqlite3VdbeJumpHere(v, j1); - sqlite3ExprCodeCopy(pParse, pIn->iMem, regPrev+1, pIn->nMem); + int addr1, addr2; + addr1 = sqlite3VdbeAddOp1(v, OP_IfNot, regPrev); VdbeCoverage(v); + addr2 = sqlite3VdbeAddOp4(v, OP_Compare, pIn->iSdst, regPrev+1, pIn->nSdst, + (char*)sqlite3KeyInfoRef(pKeyInfo), P4_KEYINFO); + sqlite3VdbeAddOp3(v, OP_Jump, addr2+2, iContinue, addr2+2); VdbeCoverage(v); + sqlite3VdbeJumpHere(v, addr1); + sqlite3VdbeAddOp3(v, OP_Copy, pIn->iSdst, regPrev+1, pIn->nSdst-1); sqlite3VdbeAddOp2(v, OP_Integer, 1, regPrev); } if( pParse->db->mallocFailed ) return 0; - /* Suppress the the first OFFSET entries if there is an OFFSET clause + /* Suppress the first OFFSET entries if there is an OFFSET clause */ - codeOffset(v, p, iContinue); + codeOffset(v, p->iOffset, iContinue); + assert( pDest->eDest!=SRT_Exists ); + assert( pDest->eDest!=SRT_Table ); switch( pDest->eDest ){ /* Store the result as data using a unique key. */ - case SRT_Table: case SRT_EphemTab: { int r1 = sqlite3GetTempReg(pParse); int r2 = sqlite3GetTempReg(pParse); - testcase( pDest->eDest==SRT_Table ); - testcase( pDest->eDest==SRT_EphemTab ); - sqlite3VdbeAddOp3(v, OP_MakeRecord, pIn->iMem, pIn->nMem, r1); - sqlite3VdbeAddOp2(v, OP_NewRowid, pDest->iParm, r2); - sqlite3VdbeAddOp3(v, OP_Insert, pDest->iParm, r1, r2); + sqlite3VdbeAddOp3(v, OP_MakeRecord, pIn->iSdst, pIn->nSdst, r1); + sqlite3VdbeAddOp2(v, OP_NewRowid, pDest->iSDParm, r2); + sqlite3VdbeAddOp3(v, OP_Insert, pDest->iSDParm, r1, r2); sqlite3VdbeChangeP5(v, OPFLAG_APPEND); sqlite3ReleaseTempReg(pParse, r2); sqlite3ReleaseTempReg(pParse, r1); break; } @@ -82430,53 +112878,43 @@ ** then there should be a single item on the stack. Write this ** item into the set table with bogus data. */ case SRT_Set: { int r1; - assert( pIn->nMem==1 ); - p->affinity = - sqlite3CompareAffinity(p->pEList->a[0].pExpr, pDest->affinity); + assert( pIn->nSdst==1 || pParse->nErr>0 ); + pDest->affSdst = + sqlite3CompareAffinity(p->pEList->a[0].pExpr, pDest->affSdst); r1 = sqlite3GetTempReg(pParse); - sqlite3VdbeAddOp4(v, OP_MakeRecord, pIn->iMem, 1, r1, &p->affinity, 1); - sqlite3ExprCacheAffinityChange(pParse, pIn->iMem, 1); - sqlite3VdbeAddOp2(v, OP_IdxInsert, pDest->iParm, r1); + sqlite3VdbeAddOp4(v, OP_MakeRecord, pIn->iSdst, 1, r1, &pDest->affSdst,1); + sqlite3ExprCacheAffinityChange(pParse, pIn->iSdst, 1); + sqlite3VdbeAddOp2(v, OP_IdxInsert, pDest->iSDParm, r1); sqlite3ReleaseTempReg(pParse, r1); break; } -#if 0 /* Never occurs on an ORDER BY query */ - /* If any row exist in the result set, record that fact and abort. - */ - case SRT_Exists: { - sqlite3VdbeAddOp2(v, OP_Integer, 1, pDest->iParm); - /* The LIMIT clause will terminate the loop for us */ - break; - } -#endif - /* If this is a scalar select that is part of an expression, then ** store the results in the appropriate memory cell and break out ** of the scan loop. 
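** (This destination arises when the compound appears as a scalar
** subquery, e.g. something along the lines of
**
**     SELECT * FROM t1
**      WHERE x=(SELECT y FROM t2 UNION SELECT z FROM t3 ORDER BY 1);
**
** where only the first merged row is wanted; the names above are
** illustrative only.)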
*/ case SRT_Mem: { - assert( pIn->nMem==1 ); - sqlite3ExprCodeMove(pParse, pIn->iMem, pDest->iParm, 1); + assert( pIn->nSdst==1 || pParse->nErr>0 ); testcase( pIn->nSdst!=1 ); + sqlite3ExprCodeMove(pParse, pIn->iSdst, pDest->iSDParm, 1); /* The LIMIT clause will jump out of the loop for us */ break; } #endif /* #ifndef SQLITE_OMIT_SUBQUERY */ /* The results are stored in a sequence of registers - ** starting at pDest->iMem. Then the co-routine yields. + ** starting at pDest->iSdst. Then the co-routine yields. */ case SRT_Coroutine: { - if( pDest->iMem==0 ){ - pDest->iMem = sqlite3GetTempRange(pParse, pIn->nMem); - pDest->nMem = pIn->nMem; + if( pDest->iSdst==0 ){ + pDest->iSdst = sqlite3GetTempRange(pParse, pIn->nSdst); + pDest->nSdst = pIn->nSdst; } - sqlite3ExprCodeMove(pParse, pIn->iMem, pDest->iMem, pDest->nMem); - sqlite3VdbeAddOp1(v, OP_Yield, pDest->iParm); + sqlite3ExprCodeMove(pParse, pIn->iSdst, pDest->iSdst, pIn->nSdst); + sqlite3VdbeAddOp1(v, OP_Yield, pDest->iSDParm); break; } /* If none of the above, then the result destination must be ** SRT_Output. This routine is never called with any other @@ -82486,20 +112924,20 @@ ** Then the OP_ResultRow opcode is used to cause sqlite3_step() to ** return the next row of result. */ default: { assert( pDest->eDest==SRT_Output ); - sqlite3VdbeAddOp2(v, OP_ResultRow, pIn->iMem, pIn->nMem); - sqlite3ExprCacheAffinityChange(pParse, pIn->iMem, pIn->nMem); + sqlite3VdbeAddOp2(v, OP_ResultRow, pIn->iSdst, pIn->nSdst); + sqlite3ExprCacheAffinityChange(pParse, pIn->iSdst, pIn->nSdst); break; } } /* Jump to the end of the loop if the LIMIT is reached. */ if( p->iLimit ){ - sqlite3VdbeAddOp3(v, OP_IfZero, p->iLimit, iBreak, -1); + sqlite3VdbeAddOp2(v, OP_DecrJumpZero, p->iLimit, iBreak); VdbeCoverage(v); } /* Generate the subroutine return */ sqlite3VdbeResolveLabel(v, iContinue); @@ -82603,20 +113041,19 @@ Select *pPrior; /* Another SELECT immediately to our left */ Vdbe *v; /* Generate code to this VDBE */ SelectDest destA; /* Destination for coroutine A */ SelectDest destB; /* Destination for coroutine B */ int regAddrA; /* Address register for select-A coroutine */ - int regEofA; /* Flag to indicate when select-A is complete */ int regAddrB; /* Address register for select-B coroutine */ - int regEofB; /* Flag to indicate when select-B is complete */ int addrSelectA; /* Address of the select-A coroutine */ int addrSelectB; /* Address of the select-B coroutine */ int regOutA; /* Address register for the output-A subroutine */ int regOutB; /* Address register for the output-B subroutine */ int addrOutA; /* Address of the output-A subroutine */ int addrOutB = 0; /* Address of the output-B subroutine */ int addrEofA; /* Address of the select-A-exhausted subroutine */ + int addrEofA_noB; /* Alternate addrEofA if B is uninitialized */ int addrEofB; /* Address of the select-B-exhausted subroutine */ int addrAltB; /* Address of the A<B subroutine */ int addrAeqB; /* Address of the A==B subroutine */ int addrAgtB; /* Address of the A>B subroutine */ int regLimitA; /* Limit register for select-A */ @@ -82624,18 +113061,22 @@ int regPrev; /* A range of registers to hold previous output */ int savedLimit; /* Saved value of p->iLimit */ int savedOffset; /* Saved value of p->iOffset */ int labelCmpr; /* Label for the start of the merge algorithm */ int labelEnd; /* Label for the end of the overall SELECT stmt */ - int j1; /* Jump instructions that get retargetted */ + int addr1; /* Jump instructions that get retargetted */ int op; /* One of TK_ALL, TK_UNION, 
TK_EXCEPT, TK_INTERSECT */ KeyInfo *pKeyDup = 0; /* Comparison information for duplicate removal */ KeyInfo *pKeyMerge; /* Comparison information for merging rows */ sqlite3 *db; /* Database connection */ ExprList *pOrderBy; /* The ORDER BY clause */ int nOrderBy; /* Number of terms in the ORDER BY clause */ int *aPermute; /* Mapping from ORDER BY terms to result set columns */ +#ifndef SQLITE_OMIT_EXPLAIN + int iSub1; /* EQP id of left-hand query */ + int iSub2; /* EQP id of right-hand query */ +#endif assert( p->pOrderBy!=0 ); assert( pKeyDup==0 ); /* "Managed" code needs this. Ticket #3382. */ db = pParse->db; v = pParse->pVdbe; @@ -82659,20 +113100,20 @@ */ if( op!=TK_ALL ){ for(i=1; db->mallocFailed==0 && i<=p->pEList->nExpr; i++){ struct ExprList_item *pItem; for(j=0, pItem=pOrderBy->a; j<nOrderBy; j++, pItem++){ - assert( pItem->iCol>0 ); - if( pItem->iCol==i ) break; + assert( pItem->u.x.iOrderByCol>0 ); + if( pItem->u.x.iOrderByCol==i ) break; } if( j==nOrderBy ){ Expr *pNew = sqlite3Expr(db, TK_INTEGER, 0); if( pNew==0 ) return SQLITE_NOMEM; pNew->flags |= EP_IntValue; pNew->u.iValue = i; pOrderBy = sqlite3ExprListAppend(pParse, pOrderBy, pNew); - pOrderBy->a[nOrderBy++].iCol = (u16)i; + if( pOrderBy ) pOrderBy->a[nOrderBy++].u.x.iOrderByCol = (u16)i; } } } /* Compute the comparison permutation and keyinfo that is used with @@ -82680,37 +113121,20 @@ ** row of results comes from selectA or selectB. Also add explicit ** collations to the ORDER BY clause terms so that when the subqueries ** to the right and the left are evaluated, they use the correct ** collation. */ - aPermute = sqlite3DbMallocRaw(db, sizeof(int)*nOrderBy); + aPermute = sqlite3DbMallocRawNN(db, sizeof(int)*(nOrderBy + 1)); if( aPermute ){ struct ExprList_item *pItem; - for(i=0, pItem=pOrderBy->a; i<nOrderBy; i++, pItem++){ - assert( pItem->iCol>0 && pItem->iCol<=p->pEList->nExpr ); - aPermute[i] = pItem->iCol - 1; - } - pKeyMerge = - sqlite3DbMallocRaw(db, sizeof(*pKeyMerge)+nOrderBy*(sizeof(CollSeq*)+1)); - if( pKeyMerge ){ - pKeyMerge->aSortOrder = (u8*)&pKeyMerge->aColl[nOrderBy]; - pKeyMerge->nField = (u16)nOrderBy; - pKeyMerge->enc = ENC(db); - for(i=0; i<nOrderBy; i++){ - CollSeq *pColl; - Expr *pTerm = pOrderBy->a[i].pExpr; - if( pTerm->flags & EP_ExpCollate ){ - pColl = pTerm->pColl; - }else{ - pColl = multiSelectCollSeq(pParse, p, aPermute[i]); - pTerm->flags |= EP_ExpCollate; - pTerm->pColl = pColl; - } - pKeyMerge->aColl[i] = pColl; - pKeyMerge->aSortOrder[i] = pOrderBy->a[i].sortOrder; - } - } + aPermute[0] = nOrderBy; + for(i=1, pItem=pOrderBy->a; i<=nOrderBy; i++, pItem++){ + assert( pItem->u.x.iOrderByCol>0 ); + assert( pItem->u.x.iOrderByCol<=p->pEList->nExpr ); + aPermute[i] = pItem->u.x.iOrderByCol - 1; + } + pKeyMerge = multiSelectOrderByKeyInfo(pParse, p, 1); }else{ pKeyMerge = 0; } /* Reattach the ORDER BY clause to the query. 
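** (To illustrate the permutation computed above with made-up names:
** for "SELECT a, b FROM t1 UNION SELECT x, y FROM t2 ORDER BY 2, 1"
** the ORDER BY terms map to result columns 1 and 0 (zero-based), so
** aPermute[] tells the merge comparison which registers of the two
** coroutines to compare, and in what order.)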
@@ -82725,18 +113149,16 @@ if( op==TK_ALL ){ regPrev = 0; }else{ int nExpr = p->pEList->nExpr; assert( nOrderBy>=nExpr || db->mallocFailed ); - regPrev = sqlite3GetTempRange(pParse, nExpr+1); + regPrev = pParse->nMem+1; + pParse->nMem += nExpr+1; sqlite3VdbeAddOp2(v, OP_Integer, 0, regPrev); - pKeyDup = sqlite3DbMallocZero(db, - sizeof(*pKeyDup) + nExpr*(sizeof(CollSeq*)+1) ); + pKeyDup = sqlite3KeyInfoAlloc(db, nExpr, 1); if( pKeyDup ){ - pKeyDup->aSortOrder = (u8*)&pKeyDup->aColl[nExpr]; - pKeyDup->nField = (u16)nExpr; - pKeyDup->enc = ENC(db); + assert( sqlite3KeyInfoIsWriteable(pKeyDup) ); for(i=0; i<nExpr; i++){ pKeyDup->aColl[i] = multiSelectCollSeq(pParse, p, i); pKeyDup->aSortOrder[i] = 0; } } @@ -82743,11 +113165,11 @@ } /* Separate the left and the right query from one another */ p->pPrior = 0; - pPrior->pRightmost = 0; + pPrior->pNext = 0; sqlite3ResolveOrderGroupBy(pParse, p, p->pOrderBy, "ORDER"); if( pPrior->pPrior==0 ){ sqlite3ResolveOrderGroupBy(pParse, pPrior, pPrior->pOrderBy, "ORDER"); } @@ -82766,102 +113188,96 @@ p->pLimit = 0; sqlite3ExprDelete(db, p->pOffset); p->pOffset = 0; regAddrA = ++pParse->nMem; - regEofA = ++pParse->nMem; regAddrB = ++pParse->nMem; - regEofB = ++pParse->nMem; regOutA = ++pParse->nMem; regOutB = ++pParse->nMem; sqlite3SelectDestInit(&destA, SRT_Coroutine, regAddrA); sqlite3SelectDestInit(&destB, SRT_Coroutine, regAddrB); - - /* Jump past the various subroutines and coroutines to the main - ** merge loop - */ - j1 = sqlite3VdbeAddOp0(v, OP_Goto); - addrSelectA = sqlite3VdbeCurrentAddr(v); - /* Generate a coroutine to evaluate the SELECT statement to the ** left of the compound operator - the "A" select. */ - VdbeNoopComment((v, "Begin coroutine for left SELECT")); + addrSelectA = sqlite3VdbeCurrentAddr(v) + 1; + addr1 = sqlite3VdbeAddOp3(v, OP_InitCoroutine, regAddrA, 0, addrSelectA); + VdbeComment((v, "left SELECT")); pPrior->iLimit = regLimitA; + explainSetInteger(iSub1, pParse->iNextSelectId); sqlite3Select(pParse, pPrior, &destA); - sqlite3VdbeAddOp2(v, OP_Integer, 1, regEofA); - sqlite3VdbeAddOp1(v, OP_Yield, regAddrA); - VdbeNoopComment((v, "End coroutine for left SELECT")); + sqlite3VdbeEndCoroutine(v, regAddrA); + sqlite3VdbeJumpHere(v, addr1); /* Generate a coroutine to evaluate the SELECT statement on ** the right - the "B" select */ - addrSelectB = sqlite3VdbeCurrentAddr(v); - VdbeNoopComment((v, "Begin coroutine for right SELECT")); + addrSelectB = sqlite3VdbeCurrentAddr(v) + 1; + addr1 = sqlite3VdbeAddOp3(v, OP_InitCoroutine, regAddrB, 0, addrSelectB); + VdbeComment((v, "right SELECT")); savedLimit = p->iLimit; savedOffset = p->iOffset; p->iLimit = regLimitB; p->iOffset = 0; + explainSetInteger(iSub2, pParse->iNextSelectId); sqlite3Select(pParse, p, &destB); p->iLimit = savedLimit; p->iOffset = savedOffset; - sqlite3VdbeAddOp2(v, OP_Integer, 1, regEofB); - sqlite3VdbeAddOp1(v, OP_Yield, regAddrB); - VdbeNoopComment((v, "End coroutine for right SELECT")); + sqlite3VdbeEndCoroutine(v, regAddrB); /* Generate a subroutine that outputs the current row of the A ** select as the next output row of the compound select. */ VdbeNoopComment((v, "Output routine for A")); addrOutA = generateOutputSubroutine(pParse, p, &destA, pDest, regOutA, - regPrev, pKeyDup, P4_KEYINFO_HANDOFF, labelEnd); + regPrev, pKeyDup, labelEnd); /* Generate a subroutine that outputs the current row of the B ** select as the next output row of the compound select. 
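** In outline, a compound such as (illustrative names)
**
**     SELECT a FROM t1 UNION ALL SELECT b FROM t2 ORDER BY 1;
**
** is evaluated by running each arm as a coroutine that delivers its
** rows in ORDER BY order; the merge loop below consumes the two sorted
** streams and calls these output subroutines to emit rows in the
** correct overall order.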
*/ if( op==TK_ALL || op==TK_UNION ){ VdbeNoopComment((v, "Output routine for B")); addrOutB = generateOutputSubroutine(pParse, p, &destB, pDest, regOutB, - regPrev, pKeyDup, P4_KEYINFO_STATIC, labelEnd); + regPrev, pKeyDup, labelEnd); } + sqlite3KeyInfoUnref(pKeyDup); /* Generate a subroutine to run when the results from select A ** are exhausted and only data in select B remains. */ - VdbeNoopComment((v, "eof-A subroutine")); if( op==TK_EXCEPT || op==TK_INTERSECT ){ - addrEofA = sqlite3VdbeAddOp2(v, OP_Goto, 0, labelEnd); + addrEofA_noB = addrEofA = labelEnd; }else{ - addrEofA = sqlite3VdbeAddOp2(v, OP_If, regEofB, labelEnd); - sqlite3VdbeAddOp2(v, OP_Gosub, regOutB, addrOutB); - sqlite3VdbeAddOp1(v, OP_Yield, regAddrB); - sqlite3VdbeAddOp2(v, OP_Goto, 0, addrEofA); + VdbeNoopComment((v, "eof-A subroutine")); + addrEofA = sqlite3VdbeAddOp2(v, OP_Gosub, regOutB, addrOutB); + addrEofA_noB = sqlite3VdbeAddOp2(v, OP_Yield, regAddrB, labelEnd); + VdbeCoverage(v); + sqlite3VdbeGoto(v, addrEofA); + p->nSelectRow += pPrior->nSelectRow; } /* Generate a subroutine to run when the results from select B ** are exhausted and only data in select A remains. */ if( op==TK_INTERSECT ){ addrEofB = addrEofA; + if( p->nSelectRow > pPrior->nSelectRow ) p->nSelectRow = pPrior->nSelectRow; }else{ VdbeNoopComment((v, "eof-B subroutine")); - addrEofB = sqlite3VdbeAddOp2(v, OP_If, regEofA, labelEnd); - sqlite3VdbeAddOp2(v, OP_Gosub, regOutA, addrOutA); - sqlite3VdbeAddOp1(v, OP_Yield, regAddrA); - sqlite3VdbeAddOp2(v, OP_Goto, 0, addrEofB); + addrEofB = sqlite3VdbeAddOp2(v, OP_Gosub, regOutA, addrOutA); + sqlite3VdbeAddOp2(v, OP_Yield, regAddrA, labelEnd); VdbeCoverage(v); + sqlite3VdbeGoto(v, addrEofB); } /* Generate code to handle the case of A<B */ VdbeNoopComment((v, "A-lt-B subroutine")); addrAltB = sqlite3VdbeAddOp2(v, OP_Gosub, regOutA, addrOutA); - sqlite3VdbeAddOp1(v, OP_Yield, regAddrA); - sqlite3VdbeAddOp2(v, OP_If, regEofA, addrEofA); - sqlite3VdbeAddOp2(v, OP_Goto, 0, labelCmpr); + sqlite3VdbeAddOp2(v, OP_Yield, regAddrA, addrEofA); VdbeCoverage(v); + sqlite3VdbeGoto(v, labelCmpr); /* Generate code to handle the case of A==B */ if( op==TK_ALL ){ addrAeqB = addrAltB; @@ -82869,49 +113285,38 @@ addrAeqB = addrAltB; addrAltB++; }else{ VdbeNoopComment((v, "A-eq-B subroutine")); addrAeqB = - sqlite3VdbeAddOp1(v, OP_Yield, regAddrA); - sqlite3VdbeAddOp2(v, OP_If, regEofA, addrEofA); - sqlite3VdbeAddOp2(v, OP_Goto, 0, labelCmpr); + sqlite3VdbeAddOp2(v, OP_Yield, regAddrA, addrEofA); VdbeCoverage(v); + sqlite3VdbeGoto(v, labelCmpr); } /* Generate code to handle the case of A>B */ VdbeNoopComment((v, "A-gt-B subroutine")); addrAgtB = sqlite3VdbeCurrentAddr(v); if( op==TK_ALL || op==TK_UNION ){ sqlite3VdbeAddOp2(v, OP_Gosub, regOutB, addrOutB); } - sqlite3VdbeAddOp1(v, OP_Yield, regAddrB); - sqlite3VdbeAddOp2(v, OP_If, regEofB, addrEofB); - sqlite3VdbeAddOp2(v, OP_Goto, 0, labelCmpr); + sqlite3VdbeAddOp2(v, OP_Yield, regAddrB, addrEofB); VdbeCoverage(v); + sqlite3VdbeGoto(v, labelCmpr); /* This code runs once to initialize everything. 
*/ - sqlite3VdbeJumpHere(v, j1); - sqlite3VdbeAddOp2(v, OP_Integer, 0, regEofA); - sqlite3VdbeAddOp2(v, OP_Integer, 0, regEofB); - sqlite3VdbeAddOp2(v, OP_Gosub, regAddrA, addrSelectA); - sqlite3VdbeAddOp2(v, OP_Gosub, regAddrB, addrSelectB); - sqlite3VdbeAddOp2(v, OP_If, regEofA, addrEofA); - sqlite3VdbeAddOp2(v, OP_If, regEofB, addrEofB); + sqlite3VdbeJumpHere(v, addr1); + sqlite3VdbeAddOp2(v, OP_Yield, regAddrA, addrEofA_noB); VdbeCoverage(v); + sqlite3VdbeAddOp2(v, OP_Yield, regAddrB, addrEofB); VdbeCoverage(v); /* Implement the main merge loop */ sqlite3VdbeResolveLabel(v, labelCmpr); sqlite3VdbeAddOp4(v, OP_Permutation, 0, 0, 0, (char*)aPermute, P4_INTARRAY); - sqlite3VdbeAddOp4(v, OP_Compare, destA.iMem, destB.iMem, nOrderBy, - (char*)pKeyMerge, P4_KEYINFO_HANDOFF); - sqlite3VdbeAddOp3(v, OP_Jump, addrAltB, addrAeqB, addrAgtB); - - /* Release temporary registers - */ - if( regPrev ){ - sqlite3ReleaseTempRange(pParse, regPrev, nOrderBy+1); - } + sqlite3VdbeAddOp4(v, OP_Compare, destA.iSdst, destB.iSdst, nOrderBy, + (char*)pKeyMerge, P4_KEYINFO); + sqlite3VdbeChangeP5(v, OPFLAG_PERMUTE); + sqlite3VdbeAddOp3(v, OP_Jump, addrAltB, addrAeqB, addrAgtB); VdbeCoverage(v); /* Jump to the this point in order to terminate the query. */ sqlite3VdbeResolveLabel(v, labelEnd); @@ -82918,30 +113323,32 @@ /* Set the number of output columns */ if( pDest->eDest==SRT_Output ){ Select *pFirst = pPrior; while( pFirst->pPrior ) pFirst = pFirst->pPrior; - generateColumnNames(pParse, 0, pFirst->pEList); + generateColumnNames(pParse, pFirst->pSrc, pFirst->pEList); } /* Reassembly the compound query so that it will be freed correctly ** by the calling function */ if( p->pPrior ){ sqlite3SelectDelete(db, p->pPrior); } p->pPrior = pPrior; + pPrior->pNext = p; /*** TBD: Insert subroutine calls to close cursors on incomplete **** subqueries ****/ - return SQLITE_OK; + explainComposite(pParse, p->op, iSub1, iSub2, 0); + return pParse->nErr!=0; } #endif #if !defined(SQLITE_OMIT_SUBQUERY) || !defined(SQLITE_OMIT_VIEW) /* Forward Declarations */ static void substExprList(sqlite3*, ExprList*, int, ExprList*); -static void substSelect(sqlite3*, Select *, int, ExprList *); +static void substSelect(sqlite3*, Select *, int, ExprList*, int); /* ** Scan through the expression pExpr. Replace every reference to ** a column in table number iTable with a copy of the iColumn-th ** entry in pEList. 
(But leave references to the ROWID column @@ -82967,21 +113374,18 @@ }else{ Expr *pNew; assert( pEList!=0 && pExpr->iColumn<pEList->nExpr ); assert( pExpr->pLeft==0 && pExpr->pRight==0 ); pNew = sqlite3ExprDup(db, pEList->a[pExpr->iColumn].pExpr, 0); - if( pNew && pExpr->pColl ){ - pNew->pColl = pExpr->pColl; - } sqlite3ExprDelete(db, pExpr); pExpr = pNew; } }else{ pExpr->pLeft = substExpr(db, pExpr->pLeft, iTable, pEList); pExpr->pRight = substExpr(db, pExpr->pRight, iTable, pEList); if( ExprHasProperty(pExpr, EP_xIsSelect) ){ - substSelect(db, pExpr->x.pSelect, iTable, pEList); + substSelect(db, pExpr->x.pSelect, iTable, pEList, 1); }else{ substExprList(db, pExpr->x.pList, iTable, pEList); } } return pExpr; @@ -83000,37 +113404,39 @@ } static void substSelect( sqlite3 *db, /* Report malloc errors here */ Select *p, /* SELECT statement in which to make substitutions */ int iTable, /* Table to be replaced */ - ExprList *pEList /* Substitute values */ + ExprList *pEList, /* Substitute values */ + int doPrior /* Do substitutes on p->pPrior too */ ){ SrcList *pSrc; struct SrcList_item *pItem; int i; if( !p ) return; - substExprList(db, p->pEList, iTable, pEList); - substExprList(db, p->pGroupBy, iTable, pEList); - substExprList(db, p->pOrderBy, iTable, pEList); - p->pHaving = substExpr(db, p->pHaving, iTable, pEList); - p->pWhere = substExpr(db, p->pWhere, iTable, pEList); - substSelect(db, p->pPrior, iTable, pEList); - pSrc = p->pSrc; - assert( pSrc ); /* Even for (SELECT 1) we have: pSrc!=0 but pSrc->nSrc==0 */ - if( ALWAYS(pSrc) ){ + do{ + substExprList(db, p->pEList, iTable, pEList); + substExprList(db, p->pGroupBy, iTable, pEList); + substExprList(db, p->pOrderBy, iTable, pEList); + p->pHaving = substExpr(db, p->pHaving, iTable, pEList); + p->pWhere = substExpr(db, p->pWhere, iTable, pEList); + pSrc = p->pSrc; + assert( pSrc!=0 ); for(i=pSrc->nSrc, pItem=pSrc->a; i>0; i--, pItem++){ - substSelect(db, pItem->pSelect, iTable, pEList); + substSelect(db, pItem->pSelect, iTable, pEList, 1); + if( pItem->fg.isTabFunc ){ + substExprList(db, pItem->u1.pFuncArg, iTable, pEList); + } } - } + }while( doPrior && (p = p->pPrior)!=0 ); } #endif /* !defined(SQLITE_OMIT_SUBQUERY) || !defined(SQLITE_OMIT_VIEW) */ #if !defined(SQLITE_OMIT_SUBQUERY) || !defined(SQLITE_OMIT_VIEW) /* -** This routine attempts to flatten subqueries in order to speed -** execution. It returns 1 if it makes changes and 0 if no flattening -** occurs. +** This routine attempts to flatten subqueries as a performance optimization. +** This routine returns 1 if it makes changes and 0 if no flattening occurs. ** ** To understand the concept of flattening, consider the following ** query: ** ** SELECT a FROM (SELECT x+y AS a FROM t1 WHERE z<100) WHERE a>5 @@ -83045,50 +113451,59 @@ ** This routine attempts to rewrite queries such as the above into ** a single flat select, like this: ** ** SELECT x+y AS a FROM t1 WHERE z<100 AND a>5 ** -** The code generated for this simpification gives the same result +** The code generated for this simplification gives the same result ** but only has to scan the data once. And because indices might ** exist on the table t1, a complete scan of the data might be ** avoided. ** ** Flattening is only attempted if all of the following are true: ** ** (1) The subquery and the outer query do not both use aggregates. ** -** (2) The subquery is not an aggregate or the outer query is not a join. 
+** (2) The subquery is not an aggregate or (2a) the outer query is not a join +** and (2b) the outer query does not use subqueries other than the one +** FROM-clause subquery that is a candidate for flattening. (2b is +** due to ticket [2f7170d73bf9abf80] from 2015-02-09.) ** ** (3) The subquery is not the right operand of a left outer join -** (Originally ticket #306. Strenghtened by ticket #3300) +** (Originally ticket #306. Strengthened by ticket #3300) ** -** (4) The subquery is not DISTINCT or the outer query is not a join. +** (4) The subquery is not DISTINCT. ** -** (5) The subquery is not DISTINCT or the outer query does not use -** aggregates. +** (**) At one point restrictions (4) and (5) defined a subset of DISTINCT +** sub-queries that were excluded from this optimization. Restriction +** (4) has since been expanded to exclude all DISTINCT subqueries. ** ** (6) The subquery does not use aggregates or the outer query is not ** DISTINCT. ** -** (7) The subquery has a FROM clause. +** (7) The subquery has a FROM clause. TODO: For subqueries without +** A FROM clause, consider adding a FROM close with the special +** table sqlite_once that consists of a single row containing a +** single NULL. ** ** (8) The subquery does not use LIMIT or the outer query is not a join. ** ** (9) The subquery does not use LIMIT or the outer query does not use ** aggregates. ** -** (10) The subquery does not use aggregates or the outer query does not -** use LIMIT. +** (**) Restriction (10) was removed from the code on 2005-02-05 but we +** accidently carried the comment forward until 2014-09-15. Original +** text: "The subquery does not use aggregates or the outer query +** does not use LIMIT." ** ** (11) The subquery and the outer query do not both have ORDER BY clauses. ** ** (**) Not implemented. Subsumed into restriction (3). Was previously ** a separate restriction deriving from ticket #350. ** -** (13) The subquery and outer query do not both use LIMIT +** (13) The subquery and outer query do not both use LIMIT. ** -** (14) The subquery does not use OFFSET +** (14) The subquery does not use OFFSET. ** ** (15) The outer query is not part of a compound select or the ** subquery does not have a LIMIT clause. ** (See ticket #2339 and ticket [02a8e81d44]). ** @@ -83100,15 +113515,24 @@ ** compound clause made up entirely of non-aggregate queries, and ** the parent query: ** ** * is not itself part of a compound select, ** * is not an aggregate or DISTINCT query, and -** * has no other tables or sub-selects in the FROM clause. +** * is not a join ** ** The parent and sub-query may contain WHERE clauses. Subject to ** rules (11), (13) and (14), they may also contain ORDER BY, -** LIMIT and OFFSET clauses. +** LIMIT and OFFSET clauses. The subquery cannot use any compound +** operator other than UNION ALL because all the other compound +** operators have an implied DISTINCT which is disallowed by +** restriction (4). +** +** Also, each component of the sub-query must return the same number +** of result columns. This is actually a requirement for any compound +** SELECT statement, but all the code here does is make sure that no +** such (illegal) sub-query is flattened. The caller will detect the +** syntax error and return a detailed message. ** ** (18) If the sub-query is a compound select, then all terms of the ** ORDER by clause of the parent must be simple references to ** columns of the sub-query. ** @@ -83116,12 +113540,28 @@ ** have a WHERE clause. 
** ** (20) If the sub-query is a compound select, then it must not use ** an ORDER BY clause. Ticket #3773. We could relax this constraint ** somewhat by saying that the terms of the ORDER BY clause must -** appear as unmodified result columns in the outer query. But +** appear as unmodified result columns in the outer query. But we ** have other optimizations in mind to deal with that case. +** +** (21) The subquery does not use LIMIT or the outer query is not +** DISTINCT. (See ticket [752e1646fc]). +** +** (22) The subquery is not a recursive CTE. +** +** (23) The parent is not a recursive CTE, or the sub-query is not a +** compound query. This restriction is because transforming the +** parent to a compound query confuses the code that handles +** recursive queries in multiSelect(). +** +** (24) The subquery is not an aggregate that uses the built-in min() or +** or max() functions. (Without this restriction, a query like: +** "SELECT x FROM (SELECT max(y), x FROM t1)" would not necessarily +** return the value X for which Y was maximal.) +** ** ** In this routine, the "p" parameter is a pointer to the outer query. ** The subquery is p->pSrc->a[iFrom]. isAgg is true if the outer query ** uses aggregates and subqueryIsAgg is true if the subquery uses aggregates. ** @@ -83137,11 +113577,11 @@ int iFrom, /* Index in p->pSrc->a[] of the inner subquery */ int isAgg, /* True if outer SELECT uses aggregate functions */ int subqueryIsAgg /* True if the subquery uses aggregate functions */ ){ const char *zSavedAuthContext = pParse->zAuthContext; - Select *pParent; + Select *pParent; /* Current UNION ALL term of the other query */ Select *pSub; /* The inner query or "subquery" */ Select *pSub1; /* Pointer to the rightmost select in sub-query */ SrcList *pSrc; /* The FROM clause of the outer query */ SrcList *pSubSrc; /* The FROM clause of the subquery */ ExprList *pList; /* The result set of the outer query */ @@ -83153,44 +113593,64 @@ /* Check to see if flattening is permitted. Return 0 if not. */ assert( p!=0 ); assert( p->pPrior==0 ); /* Unable to flatten compound queries */ - if( db->flags & SQLITE_QueryFlattener ) return 0; + if( OptimizationDisabled(db, SQLITE_QueryFlattener) ) return 0; pSrc = p->pSrc; assert( pSrc && iFrom>=0 && iFrom<pSrc->nSrc ); pSubitem = &pSrc->a[iFrom]; iParent = pSubitem->iCursor; pSub = pSubitem->pSelect; assert( pSub!=0 ); - if( isAgg && subqueryIsAgg ) return 0; /* Restriction (1) */ - if( subqueryIsAgg && pSrc->nSrc>1 ) return 0; /* Restriction (2) */ + if( subqueryIsAgg ){ + if( isAgg ) return 0; /* Restriction (1) */ + if( pSrc->nSrc>1 ) return 0; /* Restriction (2a) */ + if( (p->pWhere && ExprHasProperty(p->pWhere,EP_Subquery)) + || (sqlite3ExprListFlags(p->pEList) & EP_Subquery)!=0 + || (sqlite3ExprListFlags(p->pOrderBy) & EP_Subquery)!=0 + ){ + return 0; /* Restriction (2b) */ + } + } + pSubSrc = pSub->pSrc; assert( pSubSrc ); /* Prior to version 3.1.2, when LIMIT and OFFSET had to be simple constants, - ** not arbitrary expresssions, we allowed some combining of LIMIT and OFFSET + ** not arbitrary expressions, we allowed some combining of LIMIT and OFFSET ** because they could be computed at compile-time. But when LIMIT and OFFSET ** became arbitrary expressions, we were forced to add restrictions (13) ** and (14). 
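** For example (illustrative names), a query such as
**
**     SELECT * FROM (SELECT x FROM t1 LIMIT 10) LIMIT 5;
**
** is left unflattened by restriction (13), since the two LIMIT clauses
** can no longer be folded together at compile-time once LIMIT may be an
** arbitrary expression.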
*/ if( pSub->pLimit && p->pLimit ) return 0; /* Restriction (13) */ if( pSub->pOffset ) return 0; /* Restriction (14) */ - if( p->pRightmost && pSub->pLimit ){ + if( (p->selFlags & SF_Compound)!=0 && pSub->pLimit ){ return 0; /* Restriction (15) */ } if( pSubSrc->nSrc==0 ) return 0; /* Restriction (7) */ - if( ((pSub->selFlags & SF_Distinct)!=0 || pSub->pLimit) - && (pSrc->nSrc>1 || isAgg) ){ /* Restrictions (4)(5)(8)(9) */ - return 0; + if( pSub->selFlags & SF_Distinct ) return 0; /* Restriction (5) */ + if( pSub->pLimit && (pSrc->nSrc>1 || isAgg) ){ + return 0; /* Restrictions (8)(9) */ } if( (p->selFlags & SF_Distinct)!=0 && subqueryIsAgg ){ return 0; /* Restriction (6) */ } if( p->pOrderBy && pSub->pOrderBy ){ return 0; /* Restriction (11) */ } if( isAgg && pSub->pOrderBy ) return 0; /* Restriction (16) */ if( pSub->pLimit && p->pWhere ) return 0; /* Restriction (19) */ + if( pSub->pLimit && (p->selFlags & SF_Distinct)!=0 ){ + return 0; /* Restriction (21) */ + } + testcase( pSub->selFlags & SF_Recursive ); + testcase( pSub->selFlags & SF_MinMaxAgg ); + if( pSub->selFlags & (SF_Recursive|SF_MinMaxAgg) ){ + return 0; /* Restrictions (22) and (24) */ + } + if( (p->selFlags & SF_Recursive) && pSub->pPrior ){ + return 0; /* Restriction (23) */ + } /* OBSOLETE COMMENT 1: ** Restriction 3: If the subquery is a join, make sure the subquery is ** not used as the right operand of an outer join. Examples of why this ** is not allowed: @@ -83220,11 +113680,11 @@ ** THIS OVERRIDES OBSOLETE COMMENTS 1 AND 2 ABOVE: ** Ticket #3300 shows that flattening the right term of a LEFT JOIN ** is fraught with danger. Best to avoid the whole thing. If the ** subquery is the right term of a LEFT JOIN, then do not flatten. */ - if( (pSubitem->jointype & JT_OUTER)!=0 ){ + if( (pSubitem->fg.jointype & JT_OUTER)!=0 ){ return 0; } /* Restriction 17: If the sub-query is a compound SELECT, then it must ** use only the UNION ALL operator. And none of the simple select queries @@ -83239,32 +113699,38 @@ return 0; } for(pSub1=pSub; pSub1; pSub1=pSub1->pPrior){ testcase( (pSub1->selFlags & (SF_Distinct|SF_Aggregate))==SF_Distinct ); testcase( (pSub1->selFlags & (SF_Distinct|SF_Aggregate))==SF_Aggregate ); + assert( pSub->pSrc!=0 ); + assert( pSub->pEList->nExpr==pSub1->pEList->nExpr ); if( (pSub1->selFlags & (SF_Distinct|SF_Aggregate))!=0 || (pSub1->pPrior && pSub1->op!=TK_ALL) - || NEVER(pSub1->pSrc==0) || pSub1->pSrc->nSrc!=1 + || pSub1->pSrc->nSrc<1 ){ return 0; } + testcase( pSub1->pSrc->nSrc>1 ); } /* Restriction 18. */ if( p->pOrderBy ){ int ii; for(ii=0; ii<p->pOrderBy->nExpr; ii++){ - if( p->pOrderBy->a[ii].iCol==0 ) return 0; + if( p->pOrderBy->a[ii].u.x.iOrderByCol==0 ) return 0; } } } /***** If we reach this point, flattening is permitted. 
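** When the sub-query is itself a UNION ALL compound (see restriction 17),
** the transformation below duplicates the parent once per arm, roughly
** turning (illustrative names)
**
**     SELECT a+1 FROM (SELECT x AS a FROM t1
**                      UNION ALL
**                      SELECT y AS a FROM t2) WHERE a>5;
**
** into
**
**     SELECT x+1 FROM t1 WHERE x>5
**     UNION ALL
**     SELECT y+1 FROM t2 WHERE y>5;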
*****/ + SELECTTRACE(1,pParse,p,("flatten %s.%p from term %d\n", + pSub->zSelName, pSub, iFrom)); /* Authorize the subquery */ pParse->zAuthContext = pSubitem->zName; - sqlite3AuthCheck(pParse, SQLITE_SELECT, 0, 0, 0); + TESTONLY(i =) sqlite3AuthCheck(pParse, SQLITE_SELECT, 0, 0, 0); + testcase( i==SQLITE_DENY ); pParse->zAuthContext = zSavedAuthContext; /* If the sub-query is a compound SELECT statement, then (by restrictions ** 17 and 18 above) it must be a UNION ALL and the parent query must ** be of the form: @@ -83300,28 +113766,35 @@ */ for(pSub=pSub->pPrior; pSub; pSub=pSub->pPrior){ Select *pNew; ExprList *pOrderBy = p->pOrderBy; Expr *pLimit = p->pLimit; + Expr *pOffset = p->pOffset; Select *pPrior = p->pPrior; p->pOrderBy = 0; p->pSrc = 0; p->pPrior = 0; p->pLimit = 0; + p->pOffset = 0; pNew = sqlite3SelectDup(db, p, 0); + sqlite3SelectSetName(pNew, pSub->zSelName); + p->pOffset = pOffset; p->pLimit = pLimit; p->pOrderBy = pOrderBy; p->pSrc = pSrc; p->op = TK_ALL; - p->pRightmost = 0; if( pNew==0 ){ - pNew = pPrior; + p->pPrior = pPrior; }else{ pNew->pPrior = pPrior; - pNew->pRightmost = 0; + if( pPrior ) pPrior->pNext = pNew; + pNew->pNext = p; + p->pPrior = pNew; + SELECTTRACE(2,pParse,p, + ("compound-subquery flattener creates %s.%p as peer\n", + pNew->zSelName, pNew)); } - p->pPrior = pNew; if( db->mallocFailed ) return 1; } /* Begin flattening the iFrom-th entry of the FROM clause ** in the outer query. @@ -83378,11 +113851,11 @@ nSubSrc = pSubSrc->nSrc; /* Number of terms in subquery FROM clause */ pSrc = pParent->pSrc; /* FROM clause of the outer query */ if( pSrc ){ assert( pParent==p ); /* First time through the loop */ - jointype = pSubitem->jointype; + jointype = pSubitem->fg.jointype; }else{ assert( pParent!=p ); /* 2nd and subsequent times through the loop */ pSrc = pParent->pSrc = sqlite3SrcListAppend(db, 0, 0, 0); if( pSrc==0 ){ assert( db->mallocFailed ); @@ -83399,13 +113872,13 @@ ** ** SELECT * FROM tabA, (SELECT * FROM sub1, sub2), tabB; ** ** The outer query has 3 slots in its FROM clause. One slot of the ** outer query (the middle slot) is used by the subquery. The next - ** block of code will expand the out query to 4 slots. The middle - ** slot is expanded to two slots in order to make space for the - ** two elements in the FROM clause of the subquery. + ** block of code will expand the outer query FROM clause to 4 slots. + ** The middle slot is expanded to two slots in order to make space + ** for the two elements in the FROM clause of the subquery. */ if( nSubSrc>1 ){ pParent->pSrc = pSrc = sqlite3SrcListEnlarge(db, pSrc, nSubSrc-1,iFrom+1); if( db->mallocFailed ){ break; @@ -83415,14 +113888,15 @@ /* Transfer the FROM clause terms from the subquery into the ** outer query. */ for(i=0; i<nSubSrc; i++){ sqlite3IdListDelete(db, pSrc->a[i+iFrom].pUsing); + assert( pSrc->a[i+iFrom].fg.isTabFunc==0 ); pSrc->a[i+iFrom] = pSubSrc->a[i]; memset(&pSubSrc->a[i], 0, sizeof(pSubSrc->a[i])); } - pSrc->a[iFrom].jointype = jointype; + pSrc->a[iFrom].fg.jointype = jointype; /* Now begin substituting subquery result set expressions for ** references to the iParent in the outer query. ** ** Example: @@ -83435,46 +113909,48 @@ ** "a" we substitute "x*3" and every place we see "b" we substitute "y+10". 
*/ pList = pParent->pEList; for(i=0; i<pList->nExpr; i++){ if( pList->a[i].zName==0 ){ - const char *zSpan = pList->a[i].zSpan; - if( ALWAYS(zSpan) ){ - pList->a[i].zName = sqlite3DbStrDup(db, zSpan); - } - } - } - substExprList(db, pParent->pEList, iParent, pSub->pEList); - if( isAgg ){ - substExprList(db, pParent->pGroupBy, iParent, pSub->pEList); - pParent->pHaving = substExpr(db, pParent->pHaving, iParent, pSub->pEList); + char *zName = sqlite3DbStrDup(db, pList->a[i].zSpan); + sqlite3Dequote(zName); + pList->a[i].zName = zName; + } } if( pSub->pOrderBy ){ + /* At this point, any non-zero iOrderByCol values indicate that the + ** ORDER BY column expression is identical to the iOrderByCol'th + ** expression returned by SELECT statement pSub. Since these values + ** do not necessarily correspond to columns in SELECT statement pParent, + ** zero them before transfering the ORDER BY clause. + ** + ** Not doing this may cause an error if a subsequent call to this + ** function attempts to flatten a compound sub-query into pParent + ** (the only way this can happen is if the compound sub-query is + ** currently part of pSub->pSrc). See ticket [d11a6e908f]. */ + ExprList *pOrderBy = pSub->pOrderBy; + for(i=0; i<pOrderBy->nExpr; i++){ + pOrderBy->a[i].u.x.iOrderByCol = 0; + } assert( pParent->pOrderBy==0 ); - pParent->pOrderBy = pSub->pOrderBy; + assert( pSub->pPrior==0 ); + pParent->pOrderBy = pOrderBy; pSub->pOrderBy = 0; - }else if( pParent->pOrderBy ){ - substExprList(db, pParent->pOrderBy, iParent, pSub->pEList); - } - if( pSub->pWhere ){ - pWhere = sqlite3ExprDup(db, pSub->pWhere, 0); - }else{ - pWhere = 0; - } + } + pWhere = sqlite3ExprDup(db, pSub->pWhere, 0); if( subqueryIsAgg ){ assert( pParent->pHaving==0 ); pParent->pHaving = pParent->pWhere; pParent->pWhere = pWhere; - pParent->pHaving = substExpr(db, pParent->pHaving, iParent, pSub->pEList); pParent->pHaving = sqlite3ExprAnd(db, pParent->pHaving, sqlite3ExprDup(db, pSub->pHaving, 0)); assert( pParent->pGroupBy==0 ); pParent->pGroupBy = sqlite3ExprListDup(db, pSub->pGroupBy, 0); }else{ - pParent->pWhere = substExpr(db, pParent->pWhere, iParent, pSub->pEList); pParent->pWhere = sqlite3ExprAnd(db, pParent->pWhere, pWhere); } + substSelect(db, pParent, iParent, pSub->pEList, 0); /* The flattened query is distinct if either the inner or the ** outer query is distinct. */ pParent->selFlags |= pSub->selFlags & SF_Distinct; @@ -83493,49 +113969,136 @@ /* Finially, delete what is left of the subquery and return ** success. */ sqlite3SelectDelete(db, pSub1); + +#if SELECTTRACE_ENABLED + if( sqlite3SelectTrace & 0x100 ){ + SELECTTRACE(0x100,pParse,p,("After flattening:\n")); + sqlite3TreeViewSelect(0, p, 0); + } +#endif return 1; +} +#endif /* !defined(SQLITE_OMIT_SUBQUERY) || !defined(SQLITE_OMIT_VIEW) */ + + + +#if !defined(SQLITE_OMIT_SUBQUERY) || !defined(SQLITE_OMIT_VIEW) +/* +** Make copies of relevant WHERE clause terms of the outer query into +** the WHERE clause of subquery. Example: +** +** SELECT * FROM (SELECT a AS x, c-d AS y FROM t1) WHERE x=5 AND y=10; +** +** Transformed into: +** +** SELECT * FROM (SELECT a AS x, c-d AS y FROM t1 WHERE a=5 AND c-d=10) +** WHERE x=5 AND y=10; +** +** The hope is that the terms added to the inner query will make it more +** efficient. +** +** Do not attempt this optimization if: +** +** (1) The inner query is an aggregate. (In that case, we'd really want +** to copy the outer WHERE-clause terms onto the HAVING clause of the +** inner query. 
But they probably won't help there so do not bother.) +** +** (2) The inner query is the recursive part of a common table expression. +** +** (3) The inner query has a LIMIT clause (since the changes to the WHERE +** close would change the meaning of the LIMIT). +** +** (4) The inner query is the right operand of a LEFT JOIN. (The caller +** enforces this restriction since this routine does not have enough +** information to know.) +** +** (5) The WHERE clause expression originates in the ON or USING clause +** of a LEFT JOIN. +** +** Return 0 if no changes are made and non-zero if one or more WHERE clause +** terms are duplicated into the subquery. +*/ +static int pushDownWhereTerms( + sqlite3 *db, /* The database connection (for malloc()) */ + Select *pSubq, /* The subquery whose WHERE clause is to be augmented */ + Expr *pWhere, /* The WHERE clause of the outer query */ + int iCursor /* Cursor number of the subquery */ +){ + Expr *pNew; + int nChng = 0; + if( pWhere==0 ) return 0; + if( (pSubq->selFlags & (SF_Aggregate|SF_Recursive))!=0 ){ + return 0; /* restrictions (1) and (2) */ + } + if( pSubq->pLimit!=0 ){ + return 0; /* restriction (3) */ + } + while( pWhere->op==TK_AND ){ + nChng += pushDownWhereTerms(db, pSubq, pWhere->pRight, iCursor); + pWhere = pWhere->pLeft; + } + if( ExprHasProperty(pWhere,EP_FromJoin) ) return 0; /* restriction 5 */ + if( sqlite3ExprIsTableConstant(pWhere, iCursor) ){ + nChng++; + while( pSubq ){ + pNew = sqlite3ExprDup(db, pWhere, 0); + pNew = substExpr(db, pNew, iCursor, pSubq->pEList); + pSubq->pWhere = sqlite3ExprAnd(db, pSubq->pWhere, pNew); + pSubq = pSubq->pPrior; + } + } + return nChng; } #endif /* !defined(SQLITE_OMIT_SUBQUERY) || !defined(SQLITE_OMIT_VIEW) */ /* -** Analyze the SELECT statement passed as an argument to see if it -** is a min() or max() query. Return WHERE_ORDERBY_MIN or WHERE_ORDERBY_MAX if -** it is, or 0 otherwise. At present, a query is considered to be -** a min()/max() query if: -** -** 1. There is a single object in the FROM clause. -** -** 2. There is a single expression in the result set, and it is -** either min(x) or max(x), where x is a column reference. -*/ -static u8 minMaxQuery(Select *p){ - Expr *pExpr; - ExprList *pEList = p->pEList; - - if( pEList->nExpr!=1 ) return WHERE_ORDERBY_NORMAL; - pExpr = pEList->a[0].pExpr; - if( pExpr->op!=TK_AGG_FUNCTION ) return 0; - if( NEVER(ExprHasProperty(pExpr, EP_xIsSelect)) ) return 0; - pEList = pExpr->x.pList; - if( pEList==0 || pEList->nExpr!=1 ) return 0; - if( pEList->a[0].pExpr->op!=TK_AGG_COLUMN ) return WHERE_ORDERBY_NORMAL; - assert( !ExprHasProperty(pExpr, EP_IntValue) ); - if( sqlite3StrICmp(pExpr->u.zToken,"min")==0 ){ - return WHERE_ORDERBY_MIN; - }else if( sqlite3StrICmp(pExpr->u.zToken,"max")==0 ){ - return WHERE_ORDERBY_MAX; - } - return WHERE_ORDERBY_NORMAL; +** Based on the contents of the AggInfo structure indicated by the first +** argument, this function checks if the following are true: +** +** * the query contains just a single aggregate function, +** * the aggregate function is either min() or max(), and +** * the argument to the aggregate function is a column value. +** +** If all of the above are true, then WHERE_ORDERBY_MIN or WHERE_ORDERBY_MAX +** is returned as appropriate. Also, *ppMinMax is set to point to the +** list of arguments passed to the aggregate before returning. +** +** Or, if the conditions above are not met, *ppMinMax is set to 0 and +** WHERE_ORDERBY_NORMAL is returned. 
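** For example, a query of the form
**
**     SELECT max(b) FROM t1;
**
** meets these conditions, and the WHERE_ORDERBY_MAX hint returned here
** allows the caller to arrange a scan that stops after a single row,
** typically by reading one end of an index on t1.b if such an index
** exists.  (Table and column names are illustrative.)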
+*/ +static u8 minMaxQuery(AggInfo *pAggInfo, ExprList **ppMinMax){ + int eRet = WHERE_ORDERBY_NORMAL; /* Return value */ + + *ppMinMax = 0; + if( pAggInfo->nFunc==1 ){ + Expr *pExpr = pAggInfo->aFunc[0].pExpr; /* Aggregate function */ + ExprList *pEList = pExpr->x.pList; /* Arguments to agg function */ + + assert( pExpr->op==TK_AGG_FUNCTION ); + if( pEList && pEList->nExpr==1 && pEList->a[0].pExpr->op==TK_AGG_COLUMN ){ + const char *zFunc = pExpr->u.zToken; + if( sqlite3StrICmp(zFunc, "min")==0 ){ + eRet = WHERE_ORDERBY_MIN; + *ppMinMax = pEList; + }else if( sqlite3StrICmp(zFunc, "max")==0 ){ + eRet = WHERE_ORDERBY_MAX; + *ppMinMax = pEList; + } + } + } + + assert( *ppMinMax==0 || (*ppMinMax)->nExpr==1 ); + return eRet; } /* ** The select statement passed as the first argument is an aggregate query. -** The second argment is the associated aggregate-info object. This +** The second argument is the associated aggregate-info object. This ** function tests if the SELECT is of the form: ** ** SELECT count(*) FROM <tbl> ** ** where table is a database table, not a sub-select or view. If the query @@ -83557,11 +114120,12 @@ pExpr = p->pEList->a[0].pExpr; assert( pTab && !pTab->pSelect && pExpr ); if( IsVirtual(pTab) ) return 0; if( pExpr->op!=TK_AGG_FUNCTION ) return 0; - if( (pAggInfo->aFunc[0].pFunc->flags&SQLITE_FUNC_COUNT)==0 ) return 0; + if( NEVER(pAggInfo->nFunc==0) ) return 0; + if( (pAggInfo->aFunc[0].pFunc->funcFlags&SQLITE_FUNC_COUNT)==0 ) return 0; if( pExpr->flags&EP_Distinct ) return 0; return pTab; } @@ -83571,26 +114135,305 @@ ** was such a clause and the named index cannot be found, return ** SQLITE_ERROR and leave an error in pParse. Otherwise, populate ** pFrom->pIndex and return SQLITE_OK. */ SQLITE_PRIVATE int sqlite3IndexedByLookup(Parse *pParse, struct SrcList_item *pFrom){ - if( pFrom->pTab && pFrom->zIndex ){ + if( pFrom->pTab && pFrom->fg.isIndexedBy ){ Table *pTab = pFrom->pTab; - char *zIndex = pFrom->zIndex; + char *zIndexedBy = pFrom->u1.zIndexedBy; Index *pIdx; for(pIdx=pTab->pIndex; - pIdx && sqlite3StrICmp(pIdx->zName, zIndex); + pIdx && sqlite3StrICmp(pIdx->zName, zIndexedBy); pIdx=pIdx->pNext ); if( !pIdx ){ - sqlite3ErrorMsg(pParse, "no such index: %s", zIndex, 0); + sqlite3ErrorMsg(pParse, "no such index: %s", zIndexedBy, 0); + pParse->checkSchema = 1; + return SQLITE_ERROR; + } + pFrom->pIBIndex = pIdx; + } + return SQLITE_OK; +} +/* +** Detect compound SELECT statements that use an ORDER BY clause with +** an alternative collating sequence. +** +** SELECT ... FROM t1 EXCEPT SELECT ... FROM t2 ORDER BY .. COLLATE ... +** +** These are rewritten as a subquery: +** +** SELECT * FROM (SELECT ... FROM t1 EXCEPT SELECT ... FROM t2) +** ORDER BY ... COLLATE ... +** +** This transformation is necessary because the multiSelectOrderBy() routine +** above that generates the code for a compound SELECT with an ORDER BY clause +** uses a merge algorithm that requires the same collating sequence on the +** result columns as on the ORDER BY clause. See ticket +** http://www.sqlite.org/src/info/6709574d2a +** +** This transformation is only needed for EXCEPT, INTERSECT, and UNION. +** The UNION ALL operator works fine with multiSelectOrderBy() even when +** there are COLLATE terms in the ORDER BY. 
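** For instance (illustrative names), the statement
**
**     SELECT a FROM t1 EXCEPT SELECT a FROM t2 ORDER BY a COLLATE nocase;
**
** is rewritten here as
**
**     SELECT * FROM (SELECT a FROM t1 EXCEPT SELECT a FROM t2)
**      ORDER BY a COLLATE nocase;
**
** so that the EXCEPT is computed with the result column's own collation
** while the COLLATE nocase ordering is applied by the outer query.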
+*/ +static int convertCompoundSelectToSubquery(Walker *pWalker, Select *p){ + int i; + Select *pNew; + Select *pX; + sqlite3 *db; + struct ExprList_item *a; + SrcList *pNewSrc; + Parse *pParse; + Token dummy; + + if( p->pPrior==0 ) return WRC_Continue; + if( p->pOrderBy==0 ) return WRC_Continue; + for(pX=p; pX && (pX->op==TK_ALL || pX->op==TK_SELECT); pX=pX->pPrior){} + if( pX==0 ) return WRC_Continue; + a = p->pOrderBy->a; + for(i=p->pOrderBy->nExpr-1; i>=0; i--){ + if( a[i].pExpr->flags & EP_Collate ) break; + } + if( i<0 ) return WRC_Continue; + + /* If we reach this point, that means the transformation is required. */ + + pParse = pWalker->pParse; + db = pParse->db; + pNew = sqlite3DbMallocZero(db, sizeof(*pNew) ); + if( pNew==0 ) return WRC_Abort; + memset(&dummy, 0, sizeof(dummy)); + pNewSrc = sqlite3SrcListAppendFromTerm(pParse,0,0,0,&dummy,pNew,0,0); + if( pNewSrc==0 ) return WRC_Abort; + *pNew = *p; + p->pSrc = pNewSrc; + p->pEList = sqlite3ExprListAppend(pParse, 0, sqlite3Expr(db, TK_ASTERISK, 0)); + p->op = TK_SELECT; + p->pWhere = 0; + pNew->pGroupBy = 0; + pNew->pHaving = 0; + pNew->pOrderBy = 0; + p->pPrior = 0; + p->pNext = 0; + p->pWith = 0; + p->selFlags &= ~SF_Compound; + assert( (p->selFlags & SF_Converted)==0 ); + p->selFlags |= SF_Converted; + assert( pNew->pPrior!=0 ); + pNew->pPrior->pNext = pNew; + pNew->pLimit = 0; + pNew->pOffset = 0; + return WRC_Continue; +} + +/* +** Check to see if the FROM clause term pFrom has table-valued function +** arguments. If it does, leave an error message in pParse and return +** non-zero, since pFrom is not allowed to be a table-valued function. +*/ +static int cannotBeFunction(Parse *pParse, struct SrcList_item *pFrom){ + if( pFrom->fg.isTabFunc ){ + sqlite3ErrorMsg(pParse, "'%s' is not a function", pFrom->zName); + return 1; + } + return 0; +} + +#ifndef SQLITE_OMIT_CTE +/* +** Argument pWith (which may be NULL) points to a linked list of nested +** WITH contexts, from inner to outermost. If the table identified by +** FROM clause element pItem is really a common-table-expression (CTE) +** then return a pointer to the CTE definition for that table. Otherwise +** return NULL. +** +** If a non-NULL value is returned, set *ppContext to point to the With +** object that the returned CTE belongs to. +*/ +static struct Cte *searchWith( + With *pWith, /* Current innermost WITH clause */ + struct SrcList_item *pItem, /* FROM clause element to resolve */ + With **ppContext /* OUT: WITH clause return value belongs to */ +){ + const char *zName; + if( pItem->zDatabase==0 && (zName = pItem->zName)!=0 ){ + With *p; + for(p=pWith; p; p=p->pOuter){ + int i; + for(i=0; i<p->nCte; i++){ + if( sqlite3StrICmp(zName, p->a[i].zName)==0 ){ + *ppContext = p; + return &p->a[i]; + } + } + } + } + return 0; +} + +/* The code generator maintains a stack of active WITH clauses +** with the inner-most WITH clause being at the top of the stack. +** +** This routine pushes the WITH clause passed as the second argument +** onto the top of the stack. If argument bFree is true, then this +** WITH clause will never be popped from the stack. In this case it +** should be freed along with the Parse object. In other cases, when +** bFree==0, the With object will be freed along with the SELECT +** statement with which it is associated. 
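** Because searchWith() above walks this stack from the innermost WITH
** outward, a nested CTE shadows an outer CTE of the same name.  For
** example (illustrative):
**
**     WITH t(x) AS (SELECT 1)
**     SELECT * FROM (WITH t(x) AS (SELECT 2) SELECT x FROM t);
**
** resolves the inner reference to t against the inner WITH clause and
** so returns 2.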
+*/ +SQLITE_PRIVATE void sqlite3WithPush(Parse *pParse, With *pWith, u8 bFree){ + assert( bFree==0 || (pParse->pWith==0 && pParse->pWithToFree==0) ); + if( pWith ){ + assert( pParse->pWith!=pWith ); + pWith->pOuter = pParse->pWith; + pParse->pWith = pWith; + if( bFree ) pParse->pWithToFree = pWith; + } +} + +/* +** This function checks if argument pFrom refers to a CTE declared by +** a WITH clause on the stack currently maintained by the parser. And, +** if currently processing a CTE expression, if it is a recursive +** reference to the current CTE. +** +** If pFrom falls into either of the two categories above, pFrom->pTab +** and other fields are populated accordingly. The caller should check +** (pFrom->pTab!=0) to determine whether or not a successful match +** was found. +** +** Whether or not a match is found, SQLITE_OK is returned if no error +** occurs. If an error does occur, an error message is stored in the +** parser and some error code other than SQLITE_OK returned. +*/ +static int withExpand( + Walker *pWalker, + struct SrcList_item *pFrom +){ + Parse *pParse = pWalker->pParse; + sqlite3 *db = pParse->db; + struct Cte *pCte; /* Matched CTE (or NULL if no match) */ + With *pWith; /* WITH clause that pCte belongs to */ + + assert( pFrom->pTab==0 ); + + pCte = searchWith(pParse->pWith, pFrom, &pWith); + if( pCte ){ + Table *pTab; + ExprList *pEList; + Select *pSel; + Select *pLeft; /* Left-most SELECT statement */ + int bMayRecursive; /* True if compound joined by UNION [ALL] */ + With *pSavedWith; /* Initial value of pParse->pWith */ + + /* If pCte->zCteErr is non-NULL at this point, then this is an illegal + ** recursive reference to CTE pCte. Leave an error in pParse and return + ** early. If pCte->zCteErr is NULL, then this is not a recursive reference. + ** In this case, proceed. */ + if( pCte->zCteErr ){ + sqlite3ErrorMsg(pParse, pCte->zCteErr, pCte->zName); + return SQLITE_ERROR; + } + if( cannotBeFunction(pParse, pFrom) ) return SQLITE_ERROR; + + assert( pFrom->pTab==0 ); + pFrom->pTab = pTab = sqlite3DbMallocZero(db, sizeof(Table)); + if( pTab==0 ) return WRC_Abort; + pTab->nRef = 1; + pTab->zName = sqlite3DbStrDup(db, pCte->zName); + pTab->iPKey = -1; + pTab->nRowLogEst = 200; assert( 200==sqlite3LogEst(1048576) ); + pTab->tabFlags |= TF_Ephemeral | TF_NoVisibleRowid; + pFrom->pSelect = sqlite3SelectDup(db, pCte->pSelect, 0); + if( db->mallocFailed ) return SQLITE_NOMEM; + assert( pFrom->pSelect ); + + /* Check if this is a recursive CTE. */ + pSel = pFrom->pSelect; + bMayRecursive = ( pSel->op==TK_ALL || pSel->op==TK_UNION ); + if( bMayRecursive ){ + int i; + SrcList *pSrc = pFrom->pSelect->pSrc; + for(i=0; i<pSrc->nSrc; i++){ + struct SrcList_item *pItem = &pSrc->a[i]; + if( pItem->zDatabase==0 + && pItem->zName!=0 + && 0==sqlite3StrICmp(pItem->zName, pCte->zName) + ){ + pItem->pTab = pTab; + pItem->fg.isRecursive = 1; + pTab->nRef++; + pSel->selFlags |= SF_Recursive; + } + } + } + + /* Only one recursive reference is permitted. */ + if( pTab->nRef>2 ){ + sqlite3ErrorMsg( + pParse, "multiple references to recursive table: %s", pCte->zName + ); return SQLITE_ERROR; } - pFrom->pIndex = pIdx; + assert( pTab->nRef==1 || ((pSel->selFlags&SF_Recursive) && pTab->nRef==2 )); + + pCte->zCteErr = "circular reference: %s"; + pSavedWith = pParse->pWith; + pParse->pWith = pWith; + sqlite3WalkSelect(pWalker, bMayRecursive ? 
pSel->pPrior : pSel); + pParse->pWith = pWith; + + for(pLeft=pSel; pLeft->pPrior; pLeft=pLeft->pPrior); + pEList = pLeft->pEList; + if( pCte->pCols ){ + if( pEList && pEList->nExpr!=pCte->pCols->nExpr ){ + sqlite3ErrorMsg(pParse, "table %s has %d values for %d columns", + pCte->zName, pEList->nExpr, pCte->pCols->nExpr + ); + pParse->pWith = pSavedWith; + return SQLITE_ERROR; + } + pEList = pCte->pCols; + } + + sqlite3ColumnsFromExprList(pParse, pEList, &pTab->nCol, &pTab->aCol); + if( bMayRecursive ){ + if( pSel->selFlags & SF_Recursive ){ + pCte->zCteErr = "multiple recursive references: %s"; + }else{ + pCte->zCteErr = "recursive reference in a subquery: %s"; + } + sqlite3WalkSelect(pWalker, pSel); + } + pCte->zCteErr = 0; + pParse->pWith = pSavedWith; } + return SQLITE_OK; } +#endif + +#ifndef SQLITE_OMIT_CTE +/* +** If the SELECT passed as the second argument has an associated WITH +** clause, pop it from the stack stored as part of the Parse object. +** +** This function is used as the xSelectCallback2() callback by +** sqlite3SelectExpand() when walking a SELECT tree to resolve table +** names and other FROM clause elements. +*/ +static void selectPopWith(Walker *pWalker, Select *p){ + Parse *pParse = pWalker->pParse; + With *pWith = findRightmost(p)->pWith; + if( pWith!=0 ){ + assert( pParse->pWith==pWith ); + pParse->pWith = pWith->pOuter; + } +} +#else +#define selectPopWith 0 +#endif /* ** This routine is a Walker callback for "expanding" a SELECT statement. ** "Expanding" means to do the following: ** @@ -83600,14 +114443,14 @@ ** (2) Fill in the pTabList->a[].pTab fields in the SrcList that ** defines FROM clause. When views appear in the FROM clause, ** fill pTabList->a[].pSelect with a copy of the SELECT statement ** that implements the view. A copy is made of the view's SELECT ** statement so that we can freely modify or delete that statement -** without worrying about messing up the presistent representation +** without worrying about messing up the persistent representation ** of the view. ** -** (3) Add terms to the WHERE clause to accomodate the NATURAL keyword +** (3) Add terms to the WHERE clause to accommodate the NATURAL keyword ** on joins and the ON and USING clause of joins. ** ** (4) Scan the list of columns in the result set (pEList) looking ** for instances of the "*" operator or the TABLE.* operator. ** If found, expand each "*" to be every column in every table @@ -83619,20 +114462,25 @@ int i, j, k; SrcList *pTabList; ExprList *pEList; struct SrcList_item *pFrom; sqlite3 *db = pParse->db; + Expr *pE, *pRight, *pExpr; + u16 selFlags = p->selFlags; + p->selFlags |= SF_Expanded; if( db->mallocFailed ){ return WRC_Abort; } - if( NEVER(p->pSrc==0) || (p->selFlags & SF_Expanded)!=0 ){ + if( NEVER(p->pSrc==0) || (selFlags & SF_Expanded)!=0 ){ return WRC_Prune; } - p->selFlags |= SF_Expanded; pTabList = p->pSrc; pEList = p->pEList; + if( pWalker->xSelectCallback2==selectPopWith ){ + sqlite3WithPush(pParse, findRightmost(p)->pWith, 0); + } /* Make sure cursor numbers have been assigned to all entries in ** the FROM clause of the SELECT statement. */ sqlite3SrcListAssignCursors(pParse, pTabList); @@ -83641,47 +114489,60 @@ ** an entry of the FROM clause is a subquery instead of a table or view, ** then create a transient table structure to describe the subquery. */ for(i=0, pFrom=pTabList->a; i<pTabList->nSrc; i++, pFrom++){ Table *pTab; - if( pFrom->pTab!=0 ){ - /* This statement has already been prepared. There is no need - ** to go further. 
*/ - assert( i==0 ); - return WRC_Prune; - } + assert( pFrom->fg.isRecursive==0 || pFrom->pTab!=0 ); + if( pFrom->fg.isRecursive ) continue; + assert( pFrom->pTab==0 ); +#ifndef SQLITE_OMIT_CTE + if( withExpand(pWalker, pFrom) ) return WRC_Abort; + if( pFrom->pTab ) {} else +#endif if( pFrom->zName==0 ){ #ifndef SQLITE_OMIT_SUBQUERY Select *pSel = pFrom->pSelect; /* A sub-query in the FROM clause of a SELECT */ assert( pSel!=0 ); assert( pFrom->pTab==0 ); - sqlite3WalkSelect(pWalker, pSel); + if( sqlite3WalkSelect(pWalker, pSel) ) return WRC_Abort; pFrom->pTab = pTab = sqlite3DbMallocZero(db, sizeof(Table)); if( pTab==0 ) return WRC_Abort; - pTab->dbMem = db->lookaside.bEnabled ? db : 0; pTab->nRef = 1; - pTab->zName = sqlite3MPrintf(db, "sqlite_subquery_%p_", (void*)pTab); + pTab->zName = sqlite3MPrintf(db, "sqlite_sq_%p", (void*)pTab); while( pSel->pPrior ){ pSel = pSel->pPrior; } - selectColumnsFromExprList(pParse, pSel->pEList, &pTab->nCol, &pTab->aCol); + sqlite3ColumnsFromExprList(pParse, pSel->pEList,&pTab->nCol,&pTab->aCol); pTab->iPKey = -1; + pTab->nRowLogEst = 200; assert( 200==sqlite3LogEst(1048576) ); pTab->tabFlags |= TF_Ephemeral; #endif }else{ /* An ordinary table or view name in the FROM clause */ assert( pFrom->pTab==0 ); - pFrom->pTab = pTab = - sqlite3LocateTable(pParse,0,pFrom->zName,pFrom->zDatabase); + pFrom->pTab = pTab = sqlite3LocateTableItem(pParse, 0, pFrom); if( pTab==0 ) return WRC_Abort; + if( pTab->nRef==0xffff ){ + sqlite3ErrorMsg(pParse, "too many references to \"%s\": max 65535", + pTab->zName); + pFrom->pTab = 0; + return WRC_Abort; + } pTab->nRef++; + if( !IsVirtual(pTab) && cannotBeFunction(pParse, pFrom) ){ + return WRC_Abort; + } #if !defined(SQLITE_OMIT_VIEW) || !defined (SQLITE_OMIT_VIRTUALTABLE) - if( pTab->pSelect || IsVirtual(pTab) ){ - /* We reach here if the named table is a really a view */ + if( IsVirtual(pTab) || pTab->pSelect ){ + i16 nCol; if( sqlite3ViewGetColumnNames(pParse, pTab) ) return WRC_Abort; assert( pFrom->pSelect==0 ); pFrom->pSelect = sqlite3SelectDup(db, pTab->pSelect, 0); + sqlite3SelectSetName(pFrom->pSelect, pTab->zName); + nCol = pTab->nCol; + pTab->nCol = -1; sqlite3WalkSelect(pWalker, pFrom->pSelect); + pTab->nCol = nCol; } #endif } /* Locate the index named by the INDEXED BY clause, if any. */ @@ -83697,23 +114558,24 @@ } /* For every "*" that occurs in the column list, insert the names of ** all columns in all tables. And for every TABLE.* insert the names ** of all columns in TABLE. The parser inserted a special expression - ** with the TK_ALL operator for each "*" that it found in the column list. - ** The following code just has to locate the TK_ALL expressions and expand - ** each one to the list of all columns in all tables. + ** with the TK_ASTERISK operator for each "*" that it found in the column + ** list. The following code just has to locate the TK_ASTERISK + ** expressions and expand each one to the list of all columns in + ** all tables. ** ** The first loop just checks to see if there are any "*" operators ** that need expanding. 
*/ for(k=0; k<pEList->nExpr; k++){ - Expr *pE = pEList->a[k].pExpr; - if( pE->op==TK_ALL ) break; + pE = pEList->a[k].pExpr; + if( pE->op==TK_ASTERISK ) break; assert( pE->op!=TK_DOT || pE->pRight!=0 ); assert( pE->op!=TK_DOT || (pE->pLeft!=0 && pE->pLeft->op==TK_ID) ); - if( pE->op==TK_DOT && pE->pRight->op==TK_ALL ) break; + if( pE->op==TK_DOT && pE->pRight->op==TK_ASTERISK ) break; } if( k<pEList->nExpr ){ /* ** If we get here it means the result set contains one or more "*" ** operators that need to be expanded. Loop through each expression @@ -83724,13 +114586,16 @@ int flags = pParse->db->flags; int longNames = (flags & SQLITE_FullColNames)!=0 && (flags & SQLITE_ShortColNames)==0; for(k=0; k<pEList->nExpr; k++){ - Expr *pE = a[k].pExpr; - assert( pE->op!=TK_DOT || pE->pRight!=0 ); - if( pE->op!=TK_ALL && (pE->op!=TK_DOT || pE->pRight->op!=TK_ALL) ){ + pE = a[k].pExpr; + pRight = pE->pRight; + assert( pE->op!=TK_DOT || pRight!=0 ); + if( pE->op!=TK_ASTERISK + && (pE->op!=TK_DOT || pRight->op!=TK_ASTERISK) + ){ /* This particular expression does not need to be expanded. */ pNew = sqlite3ExprListAppend(pParse, pNew, a[k].pExpr); if( pNew ){ pNew->a[pNew->nExpr-1].zName = a[k].zName; @@ -83741,47 +114606,60 @@ a[k].pExpr = 0; }else{ /* This expression is a "*" or a "TABLE.*" and needs to be ** expanded. */ int tableSeen = 0; /* Set to 1 when TABLE matches */ - char *zTName; /* text of name of TABLE */ + char *zTName = 0; /* text of name of TABLE */ if( pE->op==TK_DOT ){ assert( pE->pLeft!=0 ); assert( !ExprHasProperty(pE->pLeft, EP_IntValue) ); zTName = pE->pLeft->u.zToken; - }else{ - zTName = 0; } for(i=0, pFrom=pTabList->a; i<pTabList->nSrc; i++, pFrom++){ Table *pTab = pFrom->pTab; + Select *pSub = pFrom->pSelect; char *zTabName = pFrom->zAlias; + const char *zSchemaName = 0; + int iDb; if( zTabName==0 ){ zTabName = pTab->zName; } if( db->mallocFailed ) break; - if( zTName && sqlite3StrICmp(zTName, zTabName)!=0 ){ - continue; + if( pSub==0 || (pSub->selFlags & SF_NestedFrom)==0 ){ + pSub = 0; + if( zTName && sqlite3StrICmp(zTName, zTabName)!=0 ){ + continue; + } + iDb = sqlite3SchemaToIndex(db, pTab->pSchema); + zSchemaName = iDb>=0 ? db->aDb[iDb].zName : "*"; } - tableSeen = 1; for(j=0; j<pTab->nCol; j++){ - Expr *pExpr, *pRight; char *zName = pTab->aCol[j].zName; char *zColname; /* The computed column name */ char *zToFree; /* Malloced string that needs to be freed */ Token sColname; /* Computed column name as a token */ - /* If a column is marked as 'hidden' (currently only possible - ** for virtual tables), do not include it in the expanded - ** result-set list. + assert( zName ); + if( zTName && pSub + && sqlite3MatchSpanName(pSub->pEList->a[j].zSpan, 0, zTName, 0)==0 + ){ + continue; + } + + /* If a column is marked as 'hidden', omit it from the expanded + ** result-set list unless the SELECT has the SF_IncludeHidden + ** bit set. 
*/ - if( IsHiddenColumn(&pTab->aCol[j]) ){ - assert(IsVirtual(pTab)); + if( (p->selFlags & SF_IncludeHidden)==0 + && IsHiddenColumn(&pTab->aCol[j]) + ){ continue; } + tableSeen = 1; if( i>0 && zTName==0 ){ - if( (pFrom->jointype & JT_NATURAL)!=0 + if( (pFrom->fg.jointype & JT_NATURAL)!=0 && tableAndColumnIndex(pTabList, i, zName, 0, 0) ){ /* In a NATURAL join, omit the join columns from the ** table to the right of the join */ continue; @@ -83797,21 +114675,36 @@ zToFree = 0; if( longNames || pTabList->nSrc>1 ){ Expr *pLeft; pLeft = sqlite3Expr(db, TK_ID, zTabName); pExpr = sqlite3PExpr(pParse, TK_DOT, pLeft, pRight, 0); + if( zSchemaName ){ + pLeft = sqlite3Expr(db, TK_ID, zSchemaName); + pExpr = sqlite3PExpr(pParse, TK_DOT, pLeft, pExpr, 0); + } if( longNames ){ zColname = sqlite3MPrintf(db, "%s.%s", zTabName, zName); zToFree = zColname; } }else{ pExpr = pRight; } pNew = sqlite3ExprListAppend(pParse, pNew, pExpr); - sColname.z = zColname; - sColname.n = sqlite3Strlen30(zColname); + sqlite3TokenInit(&sColname, zColname); sqlite3ExprListSetName(pParse, pNew, &sColname, 0); + if( pNew && (p->selFlags & SF_NestedFrom)!=0 ){ + struct ExprList_item *pX = &pNew->a[pNew->nExpr-1]; + if( pSub ){ + pX->zSpan = sqlite3DbStrDup(db, pSub->pEList->a[j].zSpan); + testcase( pX->zSpan==0 ); + }else{ + pX->zSpan = sqlite3MPrintf(db, "%s.%s.%s", + zSchemaName, zTabName, zColname); + testcase( pX->zSpan==0 ); + } + pX->bSpanIsTab = 1; + } sqlite3DbFree(db, zToFree); } } if( !tableSeen ){ if( zTName ){ @@ -83826,10 +114719,11 @@ p->pEList = pNew; } #if SQLITE_MAX_COLUMN if( p->pEList && p->pEList->nExpr>db->aLimit[SQLITE_LIMIT_COLUMN] ){ sqlite3ErrorMsg(pParse, "too many columns in result set"); + return WRC_Abort; } #endif return WRC_Continue; } @@ -83840,11 +114734,11 @@ ** are walked without any actions being taken at each node. Presumably, ** when this routine is used for Walker.xExprCallback then ** Walker.xSelectCallback is set to do something useful for every ** subquery in the parser tree. */ -static int exprWalkNoop(Walker *NotUsed, Expr *NotUsed2){ +SQLITE_PRIVATE int sqlite3ExprWalkNoop(Walker *NotUsed, Expr *NotUsed2){ UNUSED_PARAMETER2(NotUsed, NotUsed2); return WRC_Continue; } /* @@ -83860,13 +114754,21 @@ ** The calling function can detect the problem by looking at pParse->nErr ** and/or pParse->db->mallocFailed. */ static void sqlite3SelectExpand(Parse *pParse, Select *pSelect){ Walker w; - w.xSelectCallback = selectExpander; - w.xExprCallback = exprWalkNoop; + memset(&w, 0, sizeof(w)); + w.xExprCallback = sqlite3ExprWalkNoop; w.pParse = pParse; + if( pParse->hasCompound ){ + w.xSelectCallback = convertCompoundSelectToSubquery; + sqlite3WalkSelect(&w, pSelect); + } + w.xSelectCallback = selectExpander; + if( (pSelect->selFlags & SF_MultiValue)==0 ){ + w.xSelectCallback2 = selectPopWith; + } sqlite3WalkSelect(&w, pSelect); } #ifndef SQLITE_OMIT_SUBQUERY @@ -83881,33 +114783,33 @@ ** The Table structure that represents the result set was constructed ** by selectExpander() but the type and collation information was omitted ** at that point because identifiers had not yet been resolved. This ** routine is called after identifier resolution. 
*/ -static int selectAddSubqueryTypeInfo(Walker *pWalker, Select *p){ +static void selectAddSubqueryTypeInfo(Walker *pWalker, Select *p){ Parse *pParse; int i; SrcList *pTabList; struct SrcList_item *pFrom; assert( p->selFlags & SF_Resolved ); - if( (p->selFlags & SF_HasTypeInfo)==0 ){ - p->selFlags |= SF_HasTypeInfo; - pParse = pWalker->pParse; - pTabList = p->pSrc; - for(i=0, pFrom=pTabList->a; i<pTabList->nSrc; i++, pFrom++){ - Table *pTab = pFrom->pTab; - if( ALWAYS(pTab!=0) && (pTab->tabFlags & TF_Ephemeral)!=0 ){ - /* A sub-query in the FROM clause of a SELECT */ - Select *pSel = pFrom->pSelect; - assert( pSel ); + assert( (p->selFlags & SF_HasTypeInfo)==0 ); + p->selFlags |= SF_HasTypeInfo; + pParse = pWalker->pParse; + pTabList = p->pSrc; + for(i=0, pFrom=pTabList->a; i<pTabList->nSrc; i++, pFrom++){ + Table *pTab = pFrom->pTab; + assert( pTab!=0 ); + if( (pTab->tabFlags & TF_Ephemeral)!=0 ){ + /* A sub-query in the FROM clause of a SELECT */ + Select *pSel = pFrom->pSelect; + if( pSel ){ while( pSel->pPrior ) pSel = pSel->pPrior; - selectAddColumnTypeAndCollation(pParse, pTab->nCol, pTab->aCol, pSel); + selectAddColumnTypeAndCollation(pParse, pTab, pSel); } } } - return WRC_Continue; } #endif /* @@ -83918,20 +114820,21 @@ ** Use this routine after name resolution. */ static void sqlite3SelectAddTypeInfo(Parse *pParse, Select *pSelect){ #ifndef SQLITE_OMIT_SUBQUERY Walker w; - w.xSelectCallback = selectAddSubqueryTypeInfo; - w.xExprCallback = exprWalkNoop; + memset(&w, 0, sizeof(w)); + w.xSelectCallback2 = selectAddSubqueryTypeInfo; + w.xExprCallback = sqlite3ExprWalkNoop; w.pParse = pParse; sqlite3WalkSelect(&w, pSelect); #endif } /* -** This routine sets of a SELECT statement for processing. The +** This routine sets up a SELECT statement for processing. The ** following is accomplished: ** ** * VDBE Cursor numbers are assigned to all FROM-clause terms. ** * Ephemeral Table objects are created for all FROM-clause subqueries. ** * ON and USING clauses are shifted into WHERE statements @@ -83946,10 +114849,11 @@ NameContext *pOuterNC /* Name context for container */ ){ sqlite3 *db; if( NEVER(p==0) ) return; db = pParse->db; + if( db->mallocFailed ) return; if( p->selFlags & SF_HasTypeInfo ) return; sqlite3SelectExpand(pParse, p); if( pParse->nErr || db->mallocFailed ) return; sqlite3ResolveSelectNames(pParse, p, pOuterNC); if( pParse->nErr || db->mallocFailed ) return; @@ -83959,35 +114863,45 @@ /* ** Reset the aggregate accumulator. ** ** The aggregate accumulator is a set of memory cells that hold ** intermediate results while calculating an aggregate. This -** routine simply stores NULLs in all of those memory cells. +** routine generates code that stores NULLs in all of those memory +** cells. 
*/ static void resetAccumulator(Parse *pParse, AggInfo *pAggInfo){ Vdbe *v = pParse->pVdbe; int i; struct AggInfo_func *pFunc; - if( pAggInfo->nFunc+pAggInfo->nColumn==0 ){ - return; - } + int nReg = pAggInfo->nFunc + pAggInfo->nColumn; + if( nReg==0 ) return; +#ifdef SQLITE_DEBUG + /* Verify that all AggInfo registers are within the range specified by + ** AggInfo.mnReg..AggInfo.mxReg */ + assert( nReg==pAggInfo->mxReg-pAggInfo->mnReg+1 ); for(i=0; i<pAggInfo->nColumn; i++){ - sqlite3VdbeAddOp2(v, OP_Null, 0, pAggInfo->aCol[i].iMem); + assert( pAggInfo->aCol[i].iMem>=pAggInfo->mnReg + && pAggInfo->aCol[i].iMem<=pAggInfo->mxReg ); } + for(i=0; i<pAggInfo->nFunc; i++){ + assert( pAggInfo->aFunc[i].iMem>=pAggInfo->mnReg + && pAggInfo->aFunc[i].iMem<=pAggInfo->mxReg ); + } +#endif + sqlite3VdbeAddOp3(v, OP_Null, 0, pAggInfo->mnReg, pAggInfo->mxReg); for(pFunc=pAggInfo->aFunc, i=0; i<pAggInfo->nFunc; i++, pFunc++){ - sqlite3VdbeAddOp2(v, OP_Null, 0, pFunc->iMem); if( pFunc->iDistinct>=0 ){ Expr *pE = pFunc->pExpr; assert( !ExprHasProperty(pE, EP_xIsSelect) ); if( pE->x.pList==0 || pE->x.pList->nExpr!=1 ){ sqlite3ErrorMsg(pParse, "DISTINCT aggregates must have exactly one " "argument"); pFunc->iDistinct = -1; }else{ - KeyInfo *pKeyInfo = keyInfoFromExprList(pParse, pE->x.pList); + KeyInfo *pKeyInfo = keyInfoFromExprList(pParse, pE->x.pList, 0, 0); sqlite3VdbeAddOp4(v, OP_OpenEphemeral, pFunc->iDistinct, 0, 0, - (char*)pKeyInfo, P4_KEYINFO_HANDOFF); + (char*)pKeyInfo, P4_KEYINFO); } } } } @@ -84012,35 +114926,37 @@ ** the current cursor position. */ static void updateAccumulator(Parse *pParse, AggInfo *pAggInfo){ Vdbe *v = pParse->pVdbe; int i; + int regHit = 0; + int addrHitTest = 0; struct AggInfo_func *pF; struct AggInfo_col *pC; pAggInfo->directMode = 1; - sqlite3ExprCacheClear(pParse); for(i=0, pF=pAggInfo->aFunc; i<pAggInfo->nFunc; i++, pF++){ int nArg; int addrNext = 0; int regAgg; ExprList *pList = pF->pExpr->x.pList; assert( !ExprHasProperty(pF->pExpr, EP_xIsSelect) ); if( pList ){ nArg = pList->nExpr; regAgg = sqlite3GetTempRange(pParse, nArg); - sqlite3ExprCodeExprList(pParse, pList, regAgg, 0); + sqlite3ExprCodeExprList(pParse, pList, regAgg, 0, SQLITE_ECEL_DUP); }else{ nArg = 0; regAgg = 0; } if( pF->iDistinct>=0 ){ addrNext = sqlite3VdbeMakeLabel(v); - assert( nArg==1 ); + testcase( nArg==0 ); /* Error condition */ + testcase( nArg>1 ); /* Also an error */ codeDistinct(pParse, pF->iDistinct, addrNext, 1, regAgg); } - if( pF->pFunc->flags & SQLITE_FUNC_NEEDCOLL ){ + if( pF->pFunc->funcFlags & SQLITE_FUNC_NEEDCOLL ){ CollSeq *pColl = 0; struct ExprList_item *pItem; int j; assert( pList!=0 ); /* pList!=0 if pF->pFunc has NEEDCOLL */ for(j=0, pItem=pList->a; !pColl && j<nArg; j++, pItem++){ @@ -84047,13 +114963,14 @@ pColl = sqlite3ExprCollSeq(pParse, pItem->pExpr); } if( !pColl ){ pColl = pParse->db->pDfltColl; } - sqlite3VdbeAddOp4(v, OP_CollSeq, 0, 0, 0, (char *)pColl, P4_COLLSEQ); + if( regHit==0 && pAggInfo->nAccumulator ) regHit = ++pParse->nMem; + sqlite3VdbeAddOp4(v, OP_CollSeq, regHit, 0, 0, (char *)pColl, P4_COLLSEQ); } - sqlite3VdbeAddOp4(v, OP_AggStep, 0, regAgg, pF->iMem, + sqlite3VdbeAddOp4(v, OP_AggStep0, 0, regAgg, pF->iMem, (void*)pF->pFunc, P4_FUNCDEF); sqlite3VdbeChangeP5(v, (u8)nArg); sqlite3ExprCacheAffinityChange(pParse, regAgg, nArg); sqlite3ReleaseTempRange(pParse, regAgg, nArg); if( addrNext ){ @@ -84070,64 +114987,55 @@ ** text or blob value. See ticket [883034dcb5]. 
** ** Another solution would be to change the OP_SCopy used to copy cached ** values to an OP_Copy. */ + if( regHit ){ + addrHitTest = sqlite3VdbeAddOp1(v, OP_If, regHit); VdbeCoverage(v); + } sqlite3ExprCacheClear(pParse); for(i=0, pC=pAggInfo->aCol; i<pAggInfo->nAccumulator; i++, pC++){ sqlite3ExprCode(pParse, pC->pExpr, pC->iMem); } pAggInfo->directMode = 0; sqlite3ExprCacheClear(pParse); + if( addrHitTest ){ + sqlite3VdbeJumpHere(v, addrHitTest); + } } +/* +** Add a single OP_Explain instruction to the VDBE to explain a simple +** count(*) query ("SELECT count(*) FROM pTab"). +*/ +#ifndef SQLITE_OMIT_EXPLAIN +static void explainSimpleCount( + Parse *pParse, /* Parse context */ + Table *pTab, /* Table being queried */ + Index *pIdx /* Index used to optimize scan, or NULL */ +){ + if( pParse->explain==2 ){ + int bCover = (pIdx!=0 && (HasRowid(pTab) || !IsPrimaryKeyIndex(pIdx))); + char *zEqp = sqlite3MPrintf(pParse->db, "SCAN TABLE %s%s%s", + pTab->zName, + bCover ? " USING COVERING INDEX " : "", + bCover ? pIdx->zName : "" + ); + sqlite3VdbeAddOp4( + pParse->pVdbe, OP_Explain, pParse->iSelectId, 0, 0, zEqp, P4_DYNAMIC + ); + } +} +#else +# define explainSimpleCount(a,b,c) +#endif + /* ** Generate code for the SELECT statement given in the p argument. ** -** The results are distributed in various ways depending on the -** contents of the SelectDest structure pointed to by argument pDest -** as follows: -** -** pDest->eDest Result -** ------------ ------------------------------------------- -** SRT_Output Generate a row of output (using the OP_ResultRow -** opcode) for each row in the result set. -** -** SRT_Mem Only valid if the result is a single column. -** Store the first column of the first result row -** in register pDest->iParm then abandon the rest -** of the query. This destination implies "LIMIT 1". -** -** SRT_Set The result must be a single column. Store each -** row of result as the key in table pDest->iParm. -** Apply the affinity pDest->affinity before storing -** results. Used to implement "IN (SELECT ...)". -** -** SRT_Union Store results as a key in a temporary table pDest->iParm. -** -** SRT_Except Remove results from the temporary table pDest->iParm. -** -** SRT_Table Store results in temporary table pDest->iParm. -** This is like SRT_EphemTab except that the table -** is assumed to already be open. -** -** SRT_EphemTab Create an temporary table pDest->iParm and store -** the result there. The cursor is left open after -** returning. This is like SRT_Table except that -** this destination uses OP_OpenEphemeral to create -** the table first. -** -** SRT_Coroutine Generate a co-routine that returns a new row of -** results each time it is invoked. The entry point -** of the co-routine is stored in register pDest->iParm. -** -** SRT_Exists Store a 1 in memory cell pDest->iParm if the result -** set is not empty. -** -** SRT_Discard Throw the results away. This is used by SELECT -** statements within triggers whose only purpose is -** the side-effects of functions. +** The results are returned according to the SelectDest structure. +** See comments in sqliteInt.h for further information. ** ** This routine returns the number of errors. If any errors are ** encountered, then an appropriate error message is left in ** pParse->zErrMsg. 
** @@ -84141,244 +115049,416 @@ ){ int i, j; /* Loop counters */ WhereInfo *pWInfo; /* Return from sqlite3WhereBegin() */ Vdbe *v; /* The virtual machine under construction */ int isAgg; /* True for select lists like "count(*)" */ - ExprList *pEList; /* List of columns to extract. */ + ExprList *pEList = 0; /* List of columns to extract. */ SrcList *pTabList; /* List of tables to select from */ Expr *pWhere; /* The WHERE clause. May be NULL */ - ExprList *pOrderBy; /* The ORDER BY clause. May be NULL */ ExprList *pGroupBy; /* The GROUP BY clause. May be NULL */ Expr *pHaving; /* The HAVING clause. May be NULL */ - int isDistinct; /* True if the DISTINCT keyword is present */ - int distinct; /* Table to use for the distinct set */ int rc = 1; /* Value to return from this function */ - int addrSortIndex; /* Address of an OP_OpenEphemeral instruction */ + DistinctCtx sDistinct; /* Info on how to code the DISTINCT keyword */ + SortCtx sSort; /* Info on how to code the ORDER BY clause */ AggInfo sAggInfo; /* Information used by aggregate queries */ int iEnd; /* Address of the end of the query */ sqlite3 *db; /* The database connection */ + +#ifndef SQLITE_OMIT_EXPLAIN + int iRestoreSelectId = pParse->iSelectId; + pParse->iSelectId = pParse->iNextSelectId++; +#endif db = pParse->db; if( p==0 || db->mallocFailed || pParse->nErr ){ return 1; } if( sqlite3AuthCheck(pParse, SQLITE_SELECT, 0, 0, 0) ) return 1; memset(&sAggInfo, 0, sizeof(sAggInfo)); +#if SELECTTRACE_ENABLED + pParse->nSelectIndent++; + SELECTTRACE(1,pParse,p, ("begin processing:\n")); + if( sqlite3SelectTrace & 0x100 ){ + sqlite3TreeViewSelect(0, p, 0); + } +#endif + assert( p->pOrderBy==0 || pDest->eDest!=SRT_DistFifo ); + assert( p->pOrderBy==0 || pDest->eDest!=SRT_Fifo ); + assert( p->pOrderBy==0 || pDest->eDest!=SRT_DistQueue ); + assert( p->pOrderBy==0 || pDest->eDest!=SRT_Queue ); if( IgnorableOrderby(pDest) ){ assert(pDest->eDest==SRT_Exists || pDest->eDest==SRT_Union || - pDest->eDest==SRT_Except || pDest->eDest==SRT_Discard); + pDest->eDest==SRT_Except || pDest->eDest==SRT_Discard || + pDest->eDest==SRT_Queue || pDest->eDest==SRT_DistFifo || + pDest->eDest==SRT_DistQueue || pDest->eDest==SRT_Fifo); /* If ORDER BY makes no difference in the output then neither does ** DISTINCT so it can be removed too. */ sqlite3ExprListDelete(db, p->pOrderBy); p->pOrderBy = 0; p->selFlags &= ~SF_Distinct; } sqlite3SelectPrep(pParse, p, 0); - pOrderBy = p->pOrderBy; + memset(&sSort, 0, sizeof(sSort)); + sSort.pOrderBy = p->pOrderBy; pTabList = p->pSrc; - pEList = p->pEList; if( pParse->nErr || db->mallocFailed ){ goto select_end; } + assert( p->pEList!=0 ); isAgg = (p->selFlags & SF_Aggregate)!=0; - assert( pEList!=0 ); - - /* Begin generating code. - */ - v = sqlite3GetVdbe(pParse); - if( v==0 ) goto select_end; - - /* Generate code for all sub-queries in the FROM clause +#if SELECTTRACE_ENABLED + if( sqlite3SelectTrace & 0x100 ){ + SELECTTRACE(0x100,pParse,p, ("after name resolution:\n")); + sqlite3TreeViewSelect(0, p, 0); + } +#endif + + + /* If writing to memory or generating a set + ** only a single column may be output. 
+ */ +#ifndef SQLITE_OMIT_SUBQUERY + if( checkForMultiColumnSelectError(pParse, pDest, p->pEList->nExpr) ){ + goto select_end; + } +#endif + + /* Try to flatten subqueries in the FROM clause up into the main query */ #if !defined(SQLITE_OMIT_SUBQUERY) || !defined(SQLITE_OMIT_VIEW) for(i=0; !p->pPrior && i<pTabList->nSrc; i++){ struct SrcList_item *pItem = &pTabList->a[i]; - SelectDest dest; Select *pSub = pItem->pSelect; int isAggSub; + Table *pTab = pItem->pTab; + if( pSub==0 ) continue; - if( pSub==0 || pItem->isPopulated ) continue; + /* Catch mismatch in the declared columns of a view and the number of + ** columns in the SELECT on the RHS */ + if( pTab->nCol!=pSub->pEList->nExpr ){ + sqlite3ErrorMsg(pParse, "expected %d columns for '%s' but got %d", + pTab->nCol, pTab->zName, pSub->pEList->nExpr); + goto select_end; + } + + isAggSub = (pSub->selFlags & SF_Aggregate)!=0; + if( flattenSubquery(pParse, p, i, isAgg, isAggSub) ){ + /* This subquery can be absorbed into its parent. */ + if( isAggSub ){ + isAgg = 1; + p->selFlags |= SF_Aggregate; + } + i = -1; + } + pTabList = p->pSrc; + if( db->mallocFailed ) goto select_end; + if( !IgnorableOrderby(pDest) ){ + sSort.pOrderBy = p->pOrderBy; + } + } +#endif + + /* Get a pointer the VDBE under construction, allocating a new VDBE if one + ** does not already exist */ + v = sqlite3GetVdbe(pParse); + if( v==0 ) goto select_end; + +#ifndef SQLITE_OMIT_COMPOUND_SELECT + /* Handle compound SELECT statements using the separate multiSelect() + ** procedure. + */ + if( p->pPrior ){ + rc = multiSelect(pParse, p, pDest); + explainSetInteger(pParse->iSelectId, iRestoreSelectId); +#if SELECTTRACE_ENABLED + SELECTTRACE(1,pParse,p,("end compound-select processing\n")); + pParse->nSelectIndent--; +#endif + return rc; + } +#endif + + /* Generate code for all sub-queries in the FROM clause + */ +#if !defined(SQLITE_OMIT_SUBQUERY) || !defined(SQLITE_OMIT_VIEW) + for(i=0; i<pTabList->nSrc; i++){ + struct SrcList_item *pItem = &pTabList->a[i]; + SelectDest dest; + Select *pSub = pItem->pSelect; + if( pSub==0 ) continue; + + /* Sometimes the code for a subquery will be generated more than + ** once, if the subquery is part of the WHERE clause in a LEFT JOIN, + ** for example. In that case, do not regenerate the code to manifest + ** a view or the co-routine to implement a view. The first instance + ** is sufficient, though the subroutine to manifest the view does need + ** to be invoked again. */ + if( pItem->addrFillSub ){ + if( pItem->fg.viaCoroutine==0 ){ + sqlite3VdbeAddOp2(v, OP_Gosub, pItem->regReturn, pItem->addrFillSub); + } + continue; + } /* Increment Parse.nHeight by the height of the largest expression - ** tree refered to by this, the parent select. The child select + ** tree referred to by this, the parent select. The child select ** may contain expression trees of at most ** (SQLITE_MAX_EXPR_DEPTH-Parse.nHeight) height. This is a bit ** more conservative than necessary, but much easier than enforcing ** an exact limit. */ pParse->nHeight += sqlite3SelectExprHeight(p); - /* Check to see if the subquery can be absorbed into the parent. */ - isAggSub = (pSub->selFlags & SF_Aggregate)!=0; - if( flattenSubquery(pParse, p, i, isAgg, isAggSub) ){ - if( isAggSub ){ - isAgg = 1; - p->selFlags |= SF_Aggregate; - } - i = -1; - }else{ + /* Make copies of constant WHERE-clause terms in the outer query down + ** inside the subquery. This can help the subquery to run more efficiently. 
+ */ + if( (pItem->fg.jointype & JT_OUTER)==0 + && pushDownWhereTerms(db, pSub, p->pWhere, pItem->iCursor) + ){ +#if SELECTTRACE_ENABLED + if( sqlite3SelectTrace & 0x100 ){ + SELECTTRACE(0x100,pParse,p,("After WHERE-clause push-down:\n")); + sqlite3TreeViewSelect(0, p, 0); + } +#endif + } + + /* Generate code to implement the subquery + */ + if( pTabList->nSrc==1 + && (p->selFlags & SF_All)==0 + && OptimizationEnabled(db, SQLITE_SubqCoroutine) + ){ + /* Implement a co-routine that will return a single row of the result + ** set on each invocation. + */ + int addrTop = sqlite3VdbeCurrentAddr(v)+1; + pItem->regReturn = ++pParse->nMem; + sqlite3VdbeAddOp3(v, OP_InitCoroutine, pItem->regReturn, 0, addrTop); + VdbeComment((v, "%s", pItem->pTab->zName)); + pItem->addrFillSub = addrTop; + sqlite3SelectDestInit(&dest, SRT_Coroutine, pItem->regReturn); + explainSetInteger(pItem->iSelectId, (u8)pParse->iNextSelectId); + sqlite3Select(pParse, pSub, &dest); + pItem->pTab->nRowLogEst = sqlite3LogEst(pSub->nSelectRow); + pItem->fg.viaCoroutine = 1; + pItem->regResult = dest.iSdst; + sqlite3VdbeEndCoroutine(v, pItem->regReturn); + sqlite3VdbeJumpHere(v, addrTop-1); + sqlite3ClearTempRegCache(pParse); + }else{ + /* Generate a subroutine that will fill an ephemeral table with + ** the content of this subquery. pItem->addrFillSub will point + ** to the address of the generated subroutine. pItem->regReturn + ** is a register allocated to hold the subroutine return address + */ + int topAddr; + int onceAddr = 0; + int retAddr; + assert( pItem->addrFillSub==0 ); + pItem->regReturn = ++pParse->nMem; + topAddr = sqlite3VdbeAddOp2(v, OP_Integer, 0, pItem->regReturn); + pItem->addrFillSub = topAddr+1; + if( pItem->fg.isCorrelated==0 ){ + /* If the subquery is not correlated and if we are not inside of + ** a trigger, then we only need to compute the value of the subquery + ** once. */ + onceAddr = sqlite3CodeOnce(pParse); VdbeCoverage(v); + VdbeComment((v, "materialize \"%s\"", pItem->pTab->zName)); + }else{ + VdbeNoopComment((v, "materialize \"%s\"", pItem->pTab->zName)); + } sqlite3SelectDestInit(&dest, SRT_EphemTab, pItem->iCursor); - assert( pItem->isPopulated==0 ); + explainSetInteger(pItem->iSelectId, (u8)pParse->iNextSelectId); sqlite3Select(pParse, pSub, &dest); - pItem->isPopulated = 1; + pItem->pTab->nRowLogEst = sqlite3LogEst(pSub->nSelectRow); + if( onceAddr ) sqlite3VdbeJumpHere(v, onceAddr); + retAddr = sqlite3VdbeAddOp1(v, OP_Return, pItem->regReturn); + VdbeComment((v, "end %s", pItem->pTab->zName)); + sqlite3VdbeChangeP1(v, topAddr, retAddr); + sqlite3ClearTempRegCache(pParse); } - if( /*pParse->nErr ||*/ db->mallocFailed ){ - goto select_end; - } + if( db->mallocFailed ) goto select_end; pParse->nHeight -= sqlite3SelectExprHeight(p); - pTabList = p->pSrc; - if( !IgnorableOrderby(pDest) ){ - pOrderBy = p->pOrderBy; - } } +#endif + + /* Various elements of the SELECT copied into local variables for + ** convenience */ pEList = p->pEList; -#endif pWhere = p->pWhere; pGroupBy = p->pGroupBy; pHaving = p->pHaving; - isDistinct = (p->selFlags & SF_Distinct)!=0; - -#ifndef SQLITE_OMIT_COMPOUND_SELECT - /* If there is are a sequence of queries, do the earlier ones first. 
- */ - if( p->pPrior ){ - if( p->pRightmost==0 ){ - Select *pLoop, *pRight = 0; - int cnt = 0; - int mxSelect; - for(pLoop=p; pLoop; pLoop=pLoop->pPrior, cnt++){ - pLoop->pRightmost = p; - pLoop->pNext = pRight; - pRight = pLoop; - } - mxSelect = db->aLimit[SQLITE_LIMIT_COMPOUND_SELECT]; - if( mxSelect && cnt>mxSelect ){ - sqlite3ErrorMsg(pParse, "too many terms in compound SELECT"); - return 1; - } - } - return multiSelect(pParse, p, pDest); - } -#endif - - /* If writing to memory or generating a set - ** only a single column may be output. - */ -#ifndef SQLITE_OMIT_SUBQUERY - if( checkForMultiColumnSelectError(pParse, pDest, pEList->nExpr) ){ - goto select_end; - } -#endif - - /* If possible, rewrite the query to use GROUP BY instead of DISTINCT. - ** GROUP BY might use an index, DISTINCT never does. - */ - assert( p->pGroupBy==0 || (p->selFlags & SF_Aggregate)!=0 ); - if( (p->selFlags & (SF_Distinct|SF_Aggregate))==SF_Distinct ){ - p->pGroupBy = sqlite3ExprListDup(db, p->pEList, 0); - pGroupBy = p->pGroupBy; + sDistinct.isTnct = (p->selFlags & SF_Distinct)!=0; + +#if SELECTTRACE_ENABLED + if( sqlite3SelectTrace & 0x400 ){ + SELECTTRACE(0x400,pParse,p,("After all FROM-clause analysis:\n")); + sqlite3TreeViewSelect(0, p, 0); + } +#endif + + /* If the query is DISTINCT with an ORDER BY but is not an aggregate, and + ** if the select-list is the same as the ORDER BY list, then this query + ** can be rewritten as a GROUP BY. In other words, this: + ** + ** SELECT DISTINCT xyz FROM ... ORDER BY xyz + ** + ** is transformed to: + ** + ** SELECT xyz FROM ... GROUP BY xyz ORDER BY xyz + ** + ** The second form is preferred as a single index (or temp-table) may be + ** used for both the ORDER BY and DISTINCT processing. As originally + ** written the query must use a temp-table for at least one of the ORDER + ** BY and DISTINCT, and an index or separate temp-table for the other. + */ + if( (p->selFlags & (SF_Distinct|SF_Aggregate))==SF_Distinct + && sqlite3ExprListCompare(sSort.pOrderBy, pEList, -1)==0 + ){ p->selFlags &= ~SF_Distinct; - isDistinct = 0; + pGroupBy = p->pGroupBy = sqlite3ExprListDup(db, pEList, 0); + /* Notice that even thought SF_Distinct has been cleared from p->selFlags, + ** the sDistinct.isTnct is still set. Hence, isTnct represents the + ** original setting of the SF_Distinct flag, not the current setting */ + assert( sDistinct.isTnct ); } - /* If there is an ORDER BY clause, then this sorting - ** index might end up being unused if the data can be - ** extracted in pre-sorted order. If that is the case, then the - ** OP_OpenEphemeral instruction will be changed to an OP_Noop once - ** we figure out that the sorting index is not needed. The addrSortIndex - ** variable is used to facilitate that change. + /* If there is an ORDER BY clause, then create an ephemeral index to + ** do the sorting. But this sorting ephemeral index might end up + ** being unused if the data can be extracted in pre-sorted order. + ** If that is the case, then the OP_OpenEphemeral instruction will be + ** changed to an OP_Noop once we figure out that the sorting index is + ** not needed. The sSort.addrSortIndex variable is used to facilitate + ** that change. 
*/ - if( pOrderBy ){ + if( sSort.pOrderBy ){ KeyInfo *pKeyInfo; - pKeyInfo = keyInfoFromExprList(pParse, pOrderBy); - pOrderBy->iECursor = pParse->nTab++; - p->addrOpenEphm[2] = addrSortIndex = + pKeyInfo = keyInfoFromExprList(pParse, sSort.pOrderBy, 0, pEList->nExpr); + sSort.iECursor = pParse->nTab++; + sSort.addrSortIndex = sqlite3VdbeAddOp4(v, OP_OpenEphemeral, - pOrderBy->iECursor, pOrderBy->nExpr+2, 0, - (char*)pKeyInfo, P4_KEYINFO_HANDOFF); + sSort.iECursor, sSort.pOrderBy->nExpr+1+pEList->nExpr, 0, + (char*)pKeyInfo, P4_KEYINFO + ); }else{ - addrSortIndex = -1; + sSort.addrSortIndex = -1; } /* If the output is destined for a temporary table, open that table. */ if( pDest->eDest==SRT_EphemTab ){ - sqlite3VdbeAddOp2(v, OP_OpenEphemeral, pDest->iParm, pEList->nExpr); + sqlite3VdbeAddOp2(v, OP_OpenEphemeral, pDest->iSDParm, pEList->nExpr); } /* Set the limiter. */ iEnd = sqlite3VdbeMakeLabel(v); + p->nSelectRow = LARGEST_INT64; computeLimitRegisters(pParse, p, iEnd); - - /* Open a virtual index to use for the distinct set. - */ - if( isDistinct ){ - KeyInfo *pKeyInfo; - assert( isAgg || pGroupBy ); - distinct = pParse->nTab++; - pKeyInfo = keyInfoFromExprList(pParse, p->pEList); - sqlite3VdbeAddOp4(v, OP_OpenEphemeral, distinct, 0, 0, - (char*)pKeyInfo, P4_KEYINFO_HANDOFF); - }else{ - distinct = -1; - } - - /* Aggregate and non-aggregate queries are handled differently */ + if( p->iLimit==0 && sSort.addrSortIndex>=0 ){ + sqlite3VdbeChangeOpcode(v, sSort.addrSortIndex, OP_SorterOpen); + sSort.sortFlags |= SORTFLAG_UseSorter; + } + + /* Open an ephemeral index to use for the distinct set. + */ + if( p->selFlags & SF_Distinct ){ + sDistinct.tabTnct = pParse->nTab++; + sDistinct.addrTnct = sqlite3VdbeAddOp4(v, OP_OpenEphemeral, + sDistinct.tabTnct, 0, 0, + (char*)keyInfoFromExprList(pParse, p->pEList,0,0), + P4_KEYINFO); + sqlite3VdbeChangeP5(v, BTREE_UNORDERED); + sDistinct.eTnctType = WHERE_DISTINCT_UNORDERED; + }else{ + sDistinct.eTnctType = WHERE_DISTINCT_NOOP; + } + if( !isAgg && pGroupBy==0 ){ - /* This case is for non-aggregate queries - ** Begin the database scan - */ - pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, &pOrderBy, 0); + /* No aggregate functions and no GROUP BY clause */ + u16 wctrlFlags = (sDistinct.isTnct ? WHERE_WANT_DISTINCT : 0); + + /* Begin the database scan. */ + pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, sSort.pOrderBy, + p->pEList, wctrlFlags, 0); if( pWInfo==0 ) goto select_end; + if( sqlite3WhereOutputRowCount(pWInfo) < p->nSelectRow ){ + p->nSelectRow = sqlite3WhereOutputRowCount(pWInfo); + } + if( sDistinct.isTnct && sqlite3WhereIsDistinct(pWInfo) ){ + sDistinct.eTnctType = sqlite3WhereIsDistinct(pWInfo); + } + if( sSort.pOrderBy ){ + sSort.nOBSat = sqlite3WhereIsOrdered(pWInfo); + if( sSort.nOBSat==sSort.pOrderBy->nExpr ){ + sSort.pOrderBy = 0; + } + } /* If sorting index that was created by a prior OP_OpenEphemeral ** instruction ended up not being needed, then change the OP_OpenEphemeral ** into an OP_Noop. */ - if( addrSortIndex>=0 && pOrderBy==0 ){ - sqlite3VdbeChangeToNoop(v, addrSortIndex, 1); - p->addrOpenEphm[2] = -1; + if( sSort.addrSortIndex>=0 && sSort.pOrderBy==0 ){ + sqlite3VdbeChangeToNoop(v, sSort.addrSortIndex); } - /* Use the standard inner loop - */ - assert(!isDistinct); - selectInnerLoop(pParse, p, pEList, 0, 0, pOrderBy, -1, pDest, - pWInfo->iContinue, pWInfo->iBreak); + /* Use the standard inner loop. 
*/ + selectInnerLoop(pParse, p, pEList, -1, &sSort, &sDistinct, pDest, + sqlite3WhereContinueLabel(pWInfo), + sqlite3WhereBreakLabel(pWInfo)); /* End the database scan loop. */ sqlite3WhereEnd(pWInfo); }else{ - /* This is the processing for aggregate queries */ + /* This case when there exist aggregate functions or a GROUP BY clause + ** or both */ NameContext sNC; /* Name context for processing aggregate information */ int iAMem; /* First Mem address for storing current GROUP BY */ int iBMem; /* First Mem address for previous GROUP BY */ int iUseFlag; /* Mem address holding flag indicating that at least ** one row of the input to the aggregator has been ** processed */ int iAbortFlag; /* Mem address which causes query abort if positive */ int groupBySort; /* Rows come from source in GROUP BY order */ int addrEnd; /* End of processing for this SELECT */ + int sortPTab = 0; /* Pseudotable used to decode sorting results */ + int sortOut = 0; /* Output register from the sorter */ + int orderByGrp = 0; /* True if the GROUP BY and ORDER BY are the same */ /* Remove any and all aliases between the result set and the ** GROUP BY clause. */ if( pGroupBy ){ int k; /* Loop counter */ struct ExprList_item *pItem; /* For looping over expression in a list */ for(k=p->pEList->nExpr, pItem=p->pEList->a; k>0; k--, pItem++){ - pItem->iAlias = 0; + pItem->u.x.iAlias = 0; } for(k=pGroupBy->nExpr, pItem=pGroupBy->a; k>0; k--, pItem++){ - pItem->iAlias = 0; + pItem->u.x.iAlias = 0; } + if( p->nSelectRow>100 ) p->nSelectRow = 100; + }else{ + p->nSelectRow = 1; } + /* If there is both a GROUP BY and an ORDER BY clause and they are + ** identical, then it may be possible to disable the ORDER BY clause + ** on the grounds that the GROUP BY will cause elements to come out + ** in the correct order. It also may not - the GROUP BY might use a + ** database index that causes rows to be grouped together as required + ** but not actually sorted. Either way, record the fact that the + ** ORDER BY and GROUP BY clauses are the same by setting the orderByGrp + ** variable. */ + if( sqlite3ExprListCompare(pGroupBy, sSort.pOrderBy, -1)==0 ){ + orderByGrp = 1; + } /* Create a label to jump to when we want to abort the query */ addrEnd = sqlite3VdbeMakeLabel(v); /* Convert TK_COLUMN nodes into TK_AGG_COLUMN and make entries in @@ -84387,30 +115467,34 @@ */ memset(&sNC, 0, sizeof(sNC)); sNC.pParse = pParse; sNC.pSrcList = pTabList; sNC.pAggInfo = &sAggInfo; - sAggInfo.nSortingColumn = pGroupBy ? pGroupBy->nExpr+1 : 0; + sAggInfo.mnReg = pParse->nMem+1; + sAggInfo.nSortingColumn = pGroupBy ? pGroupBy->nExpr : 0; sAggInfo.pGroupBy = pGroupBy; sqlite3ExprAnalyzeAggList(&sNC, pEList); - sqlite3ExprAnalyzeAggList(&sNC, pOrderBy); + sqlite3ExprAnalyzeAggList(&sNC, sSort.pOrderBy); if( pHaving ){ sqlite3ExprAnalyzeAggregates(&sNC, pHaving); } sAggInfo.nAccumulator = sAggInfo.nColumn; for(i=0; i<sAggInfo.nFunc; i++){ assert( !ExprHasProperty(sAggInfo.aFunc[i].pExpr, EP_xIsSelect) ); + sNC.ncFlags |= NC_InAggFunc; sqlite3ExprAnalyzeAggList(&sNC, sAggInfo.aFunc[i].pExpr->x.pList); + sNC.ncFlags &= ~NC_InAggFunc; } + sAggInfo.mxReg = pParse->nMem; if( db->mallocFailed ) goto select_end; /* Processing for aggregates with GROUP BY is very different and ** much more complex than aggregates without a GROUP BY. 
*/ if( pGroupBy ){ KeyInfo *pKeyInfo; /* Keying information for the group by clause */ - int j1; /* A-vs-B comparision jump */ + int addr1; /* A-vs-B comparision jump */ int addrOutputRow; /* Start of subroutine that outputs a result row */ int regOutputRow; /* Return address register for output subroutine */ int addrSetAbort; /* Set the abort flag and return */ int addrTopOfLoop; /* Top of the input loop */ int addrSortingIdx; /* The OP_OpenEphemeral for the sorting index */ @@ -84417,18 +115501,18 @@ int addrReset; /* Subroutine for resetting the accumulator */ int regReset; /* Return address register for reset subroutine */ /* If there is a GROUP BY clause we might need a sorting index to ** implement it. Allocate that sorting index now. If it turns out - ** that we do not need it after all, the OpenEphemeral instruction + ** that we do not need it after all, the OP_SorterOpen instruction ** will be converted into a Noop. */ sAggInfo.sortingIdx = pParse->nTab++; - pKeyInfo = keyInfoFromExprList(pParse, pGroupBy); - addrSortingIdx = sqlite3VdbeAddOp4(v, OP_OpenEphemeral, + pKeyInfo = keyInfoFromExprList(pParse, pGroupBy, 0, sAggInfo.nColumn); + addrSortingIdx = sqlite3VdbeAddOp4(v, OP_SorterOpen, sAggInfo.sortingIdx, sAggInfo.nSortingColumn, - 0, (char*)pKeyInfo, P4_KEYINFO_HANDOFF); + 0, (char*)pKeyInfo, P4_KEYINFO); /* Initialize memory locations used by GROUP BY aggregate processing */ iUseFlag = ++pParse->nMem; iAbortFlag = ++pParse->nMem; @@ -84442,25 +115526,27 @@ pParse->nMem += pGroupBy->nExpr; sqlite3VdbeAddOp2(v, OP_Integer, 0, iAbortFlag); VdbeComment((v, "clear abort flag")); sqlite3VdbeAddOp2(v, OP_Integer, 0, iUseFlag); VdbeComment((v, "indicate accumulator empty")); + sqlite3VdbeAddOp3(v, OP_Null, 0, iAMem, iAMem+pGroupBy->nExpr-1); /* Begin a loop that will extract all source rows in GROUP BY order. ** This might involve two separate loops with an OP_Sort in between, or ** it might be a single loop that uses an index to extract information ** in the right order to begin with. */ sqlite3VdbeAddOp2(v, OP_Gosub, regReset, addrReset); - pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, &pGroupBy, 0); + pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, pGroupBy, 0, + WHERE_GROUPBY | (orderByGrp ? WHERE_SORTBYGROUP : 0), 0 + ); if( pWInfo==0 ) goto select_end; - if( pGroupBy==0 ){ + if( sqlite3WhereIsOrdered(pWInfo)==pGroupBy->nExpr ){ /* The optimizer is able to deliver rows in group by order so ** we do not have to sort. The OP_OpenEphemeral table will be ** cancelled later because we still need to use the pKeyInfo */ - pGroupBy = p->pGroupBy; groupBySort = 0; }else{ /* Rows are coming out in undetermined order. We have to push ** each row into a sorting index, terminate the first loop, ** then loop over the sorting index in order to get the output @@ -84469,70 +115555,90 @@ int regBase; int regRecord; int nCol; int nGroupBy; + explainTempTable(pParse, + (sDistinct.isTnct && (p->selFlags&SF_Distinct)==0) ? 
+ "DISTINCT" : "GROUP BY"); + groupBySort = 1; nGroupBy = pGroupBy->nExpr; - nCol = nGroupBy + 1; - j = nGroupBy+1; + nCol = nGroupBy; + j = nGroupBy; for(i=0; i<sAggInfo.nColumn; i++){ if( sAggInfo.aCol[i].iSorterColumn>=j ){ nCol++; j++; } } regBase = sqlite3GetTempRange(pParse, nCol); sqlite3ExprCacheClear(pParse); - sqlite3ExprCodeExprList(pParse, pGroupBy, regBase, 0); - sqlite3VdbeAddOp2(v, OP_Sequence, sAggInfo.sortingIdx,regBase+nGroupBy); - j = nGroupBy+1; + sqlite3ExprCodeExprList(pParse, pGroupBy, regBase, 0, 0); + j = nGroupBy; for(i=0; i<sAggInfo.nColumn; i++){ struct AggInfo_col *pCol = &sAggInfo.aCol[i]; if( pCol->iSorterColumn>=j ){ int r1 = j + regBase; - int r2; - - r2 = sqlite3ExprCodeGetColumn(pParse, + sqlite3ExprCodeGetColumnToReg(pParse, pCol->pTab, pCol->iColumn, pCol->iTable, r1); - if( r1!=r2 ){ - sqlite3VdbeAddOp2(v, OP_SCopy, r2, r1); - } j++; } } regRecord = sqlite3GetTempReg(pParse); sqlite3VdbeAddOp3(v, OP_MakeRecord, regBase, nCol, regRecord); - sqlite3VdbeAddOp2(v, OP_IdxInsert, sAggInfo.sortingIdx, regRecord); + sqlite3VdbeAddOp2(v, OP_SorterInsert, sAggInfo.sortingIdx, regRecord); sqlite3ReleaseTempReg(pParse, regRecord); sqlite3ReleaseTempRange(pParse, regBase, nCol); sqlite3WhereEnd(pWInfo); - sqlite3VdbeAddOp2(v, OP_Sort, sAggInfo.sortingIdx, addrEnd); - VdbeComment((v, "GROUP BY sort")); + sAggInfo.sortingIdxPTab = sortPTab = pParse->nTab++; + sortOut = sqlite3GetTempReg(pParse); + sqlite3VdbeAddOp3(v, OP_OpenPseudo, sortPTab, sortOut, nCol); + sqlite3VdbeAddOp2(v, OP_SorterSort, sAggInfo.sortingIdx, addrEnd); + VdbeComment((v, "GROUP BY sort")); VdbeCoverage(v); sAggInfo.useSortingIdx = 1; sqlite3ExprCacheClear(pParse); + + } + + /* If the index or temporary table used by the GROUP BY sort + ** will naturally deliver rows in the order required by the ORDER BY + ** clause, cancel the ephemeral table open coded earlier. + ** + ** This is an optimization - the correct answer should result regardless. + ** Use the SQLITE_GroupByOrder flag with SQLITE_TESTCTRL_OPTIMIZER to + ** disable this optimization for testing purposes. */ + if( orderByGrp && OptimizationEnabled(db, SQLITE_GroupByOrder) + && (groupBySort || sqlite3WhereIsSorted(pWInfo)) + ){ + sSort.pOrderBy = 0; + sqlite3VdbeChangeToNoop(v, sSort.addrSortIndex); } /* Evaluate the current GROUP BY terms and store in b0, b1, b2... ** (b0 is memory location iBMem+0, b1 is iBMem+1, and so forth) ** Then compare the current GROUP BY terms against the GROUP BY terms ** from the previous row currently stored in a0, a1, a2... */ addrTopOfLoop = sqlite3VdbeCurrentAddr(v); sqlite3ExprCacheClear(pParse); + if( groupBySort ){ + sqlite3VdbeAddOp3(v, OP_SorterData, sAggInfo.sortingIdx, + sortOut, sortPTab); + } for(j=0; j<pGroupBy->nExpr; j++){ if( groupBySort ){ - sqlite3VdbeAddOp3(v, OP_Column, sAggInfo.sortingIdx, j, iBMem+j); + sqlite3VdbeAddOp3(v, OP_Column, sortPTab, j, iBMem+j); }else{ sAggInfo.directMode = 1; sqlite3ExprCode(pParse, pGroupBy->a[j].pExpr, iBMem+j); } } sqlite3VdbeAddOp4(v, OP_Compare, iAMem, iBMem, pGroupBy->nExpr, - (char*)pKeyInfo, P4_KEYINFO); - j1 = sqlite3VdbeCurrentAddr(v); - sqlite3VdbeAddOp3(v, OP_Jump, j1+1, 0, j1+1); + (char*)sqlite3KeyInfoRef(pKeyInfo), P4_KEYINFO); + addr1 = sqlite3VdbeCurrentAddr(v); + sqlite3VdbeAddOp3(v, OP_Jump, addr1+1, 0, addr1+1); VdbeCoverage(v); /* Generate code that runs whenever the GROUP BY changes. ** Changes in the GROUP BY are detected by the previous code ** block. If there were no changes, this block is skipped. 
** @@ -84542,40 +115648,41 @@ ** for the next GROUP BY batch. */ sqlite3ExprCodeMove(pParse, iBMem, iAMem, pGroupBy->nExpr); sqlite3VdbeAddOp2(v, OP_Gosub, regOutputRow, addrOutputRow); VdbeComment((v, "output one row")); - sqlite3VdbeAddOp2(v, OP_IfPos, iAbortFlag, addrEnd); + sqlite3VdbeAddOp2(v, OP_IfPos, iAbortFlag, addrEnd); VdbeCoverage(v); VdbeComment((v, "check abort flag")); sqlite3VdbeAddOp2(v, OP_Gosub, regReset, addrReset); VdbeComment((v, "reset accumulator")); /* Update the aggregate accumulators based on the content of ** the current row */ - sqlite3VdbeJumpHere(v, j1); + sqlite3VdbeJumpHere(v, addr1); updateAccumulator(pParse, &sAggInfo); sqlite3VdbeAddOp2(v, OP_Integer, 1, iUseFlag); VdbeComment((v, "indicate data in accumulator")); /* End of the loop */ if( groupBySort ){ - sqlite3VdbeAddOp2(v, OP_Next, sAggInfo.sortingIdx, addrTopOfLoop); + sqlite3VdbeAddOp2(v, OP_SorterNext, sAggInfo.sortingIdx, addrTopOfLoop); + VdbeCoverage(v); }else{ sqlite3WhereEnd(pWInfo); - sqlite3VdbeChangeToNoop(v, addrSortingIdx, 1); + sqlite3VdbeChangeToNoop(v, addrSortingIdx); } /* Output the final row of result */ sqlite3VdbeAddOp2(v, OP_Gosub, regOutputRow, addrOutputRow); VdbeComment((v, "output final row")); /* Jump over the subroutines */ - sqlite3VdbeAddOp2(v, OP_Goto, 0, addrEnd); + sqlite3VdbeGoto(v, addrEnd); /* Generate a subroutine that outputs a single row of the result ** set. This subroutine first looks at the iUseFlag. If iUseFlag ** is less than or equal to zero, the subroutine is a no-op. If ** the processing calls for the query to abort, this subroutine @@ -84587,16 +115694,17 @@ VdbeComment((v, "set abort flag")); sqlite3VdbeAddOp1(v, OP_Return, regOutputRow); sqlite3VdbeResolveLabel(v, addrOutputRow); addrOutputRow = sqlite3VdbeCurrentAddr(v); sqlite3VdbeAddOp2(v, OP_IfPos, iUseFlag, addrOutputRow+2); + VdbeCoverage(v); VdbeComment((v, "Groupby result generator entry point")); sqlite3VdbeAddOp1(v, OP_Return, regOutputRow); finalizeAggFunctions(pParse, &sAggInfo); sqlite3ExprIfFalse(pParse, pHaving, addrOutputRow+1, SQLITE_JUMPIFNULL); - selectInnerLoop(pParse, p, p->pEList, 0, 0, pOrderBy, - distinct, pDest, + selectInnerLoop(pParse, p, p->pEList, -1, &sSort, + &sDistinct, pDest, addrOutputRow+1, addrSetAbort); sqlite3VdbeAddOp1(v, OP_Return, regOutputRow); VdbeComment((v, "end groupby result generator")); /* Generate a subroutine that will reset the group-by accumulator @@ -84632,38 +115740,42 @@ int iRoot = pTab->tnum; /* Root page of scanned b-tree */ sqlite3CodeVerifySchema(pParse, iDb); sqlite3TableLock(pParse, iDb, pTab->tnum, 0, pTab->zName); - /* Search for the index that has the least amount of columns. If - ** there is such an index, and it has less columns than the table - ** does, then we can assume that it consumes less space on disk and - ** will therefore be cheaper to scan to determine the query result. - ** In this case set iRoot to the root page number of the index b-tree - ** and pKeyInfo to the KeyInfo structure required to navigate the - ** index. + /* Search for the index that has the lowest scan cost. + ** + ** (2011-04-15) Do not do a full scan of an unordered index. + ** + ** (2013-10-03) Do not count the entries in a partial index. ** ** In practice the KeyInfo structure will not be used. It is only ** passed to keep OP_OpenRead happy. 
*/ + if( !HasRowid(pTab) ) pBest = sqlite3PrimaryKeyIndex(pTab); for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ - if( !pBest || pIdx->nColumn<pBest->nColumn ){ + if( pIdx->bUnordered==0 + && pIdx->szIdxRow<pTab->szTabRow + && pIdx->pPartIdxWhere==0 + && (!pBest || pIdx->szIdxRow<pBest->szIdxRow) + ){ pBest = pIdx; } } - if( pBest && pBest->nColumn<pTab->nCol ){ + if( pBest ){ iRoot = pBest->tnum; - pKeyInfo = sqlite3IndexKeyinfo(pParse, pBest); + pKeyInfo = sqlite3KeyInfoOfIndex(pParse, pBest); } /* Open a read-only cursor, execute the OP_Count, close the cursor. */ - sqlite3VdbeAddOp3(v, OP_OpenRead, iCsr, iRoot, iDb); + sqlite3VdbeAddOp4Int(v, OP_OpenRead, iCsr, iRoot, iDb, 1); if( pKeyInfo ){ - sqlite3VdbeChangeP4(v, -1, (char *)pKeyInfo, P4_KEYINFO_HANDOFF); + sqlite3VdbeChangeP4(v, -1, (char *)pKeyInfo, P4_KEYINFO); } sqlite3VdbeAddOp2(v, OP_Count, iCsr, sAggInfo.aFunc[0].iMem); sqlite3VdbeAddOp1(v, OP_Close, iCsr); + explainSimpleCount(pParse, pTab, pBest); }else #endif /* SQLITE_OMIT_BTREECOUNT */ { /* Check if the query is of one of the following forms: ** @@ -84677,11 +115789,11 @@ ** first iteration (since the first iteration of the loop is ** guaranteed to operate on the row with the minimum or maximum ** value of x, the only row required). ** ** A special flag must be passed to sqlite3WhereBegin() to slightly - ** modify behaviour as follows: + ** modify behavior as follows: ** ** + If the query is a "SELECT min(x)", then the loop coded by ** where.c should not iterate over any values with a NULL value ** for x. ** @@ -84689,16 +115801,24 @@ ** index or indices to use) should place a different priority on ** satisfying the 'ORDER BY' clause than it does in other cases. ** Refer to code and comments in where.c for details. */ ExprList *pMinMax = 0; - u8 flag = minMaxQuery(p); + u8 flag = WHERE_ORDERBY_NORMAL; + + assert( p->pGroupBy==0 ); + assert( flag==0 ); + if( p->pHaving==0 ){ + flag = minMaxQuery(&sAggInfo, &pMinMax); + } + assert( flag==0 || (pMinMax!=0 && pMinMax->nExpr==1) ); + if( flag ){ - assert( !ExprHasProperty(p->pEList->a[0].pExpr, EP_xIsSelect) ); - pMinMax = sqlite3ExprListDup(db, p->pEList->a[0].pExpr->x.pList,0); + pMinMax = sqlite3ExprListDup(db, pMinMax, 0); pDel = pMinMax; - if( pMinMax && !db->mallocFailed ){ + assert( db->mallocFailed || pMinMax!=0 ); + if( !db->mallocFailed ){ pMinMax->a[0].sortOrder = flag!=WHERE_ORDERBY_MIN ?1:0; pMinMax->a[0].pExpr->op = TK_COLUMN; } } @@ -84705,163 +115825,78 @@ /* This case runs if the aggregate has no GROUP BY clause. The ** processing is much simpler since there is only a single row ** of output. 
*/ resetAccumulator(pParse, &sAggInfo); - pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, &pMinMax, flag); + pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, pMinMax,0,flag,0); if( pWInfo==0 ){ sqlite3ExprListDelete(db, pDel); goto select_end; } updateAccumulator(pParse, &sAggInfo); - if( !pMinMax && flag ){ - sqlite3VdbeAddOp2(v, OP_Goto, 0, pWInfo->iBreak); + assert( pMinMax==0 || pMinMax->nExpr==1 ); + if( sqlite3WhereIsOrdered(pWInfo)>0 ){ + sqlite3VdbeGoto(v, sqlite3WhereBreakLabel(pWInfo)); VdbeComment((v, "%s() by index", (flag==WHERE_ORDERBY_MIN?"min":"max"))); } sqlite3WhereEnd(pWInfo); finalizeAggFunctions(pParse, &sAggInfo); } - pOrderBy = 0; + sSort.pOrderBy = 0; sqlite3ExprIfFalse(pParse, pHaving, addrEnd, SQLITE_JUMPIFNULL); - selectInnerLoop(pParse, p, p->pEList, 0, 0, 0, -1, + selectInnerLoop(pParse, p, p->pEList, -1, 0, 0, pDest, addrEnd, addrEnd); sqlite3ExprListDelete(db, pDel); } sqlite3VdbeResolveLabel(v, addrEnd); } /* endif aggregate query */ + + if( sDistinct.eTnctType==WHERE_DISTINCT_UNORDERED ){ + explainTempTable(pParse, "DISTINCT"); + } /* If there is an ORDER BY clause, then we need to sort the results ** and send them to the callback one by one. */ - if( pOrderBy ){ - generateSortTail(pParse, p, v, pEList->nExpr, pDest); + if( sSort.pOrderBy ){ + explainTempTable(pParse, + sSort.nOBSat>0 ? "RIGHT PART OF ORDER BY":"ORDER BY"); + generateSortTail(pParse, p, &sSort, pEList->nExpr, pDest); } /* Jump here to skip this query */ sqlite3VdbeResolveLabel(v, iEnd); - /* The SELECT was successfully coded. Set the return code to 0 - ** to indicate no errors. - */ - rc = 0; + /* The SELECT has been coded. If there is an error in the Parse structure, + ** set the return code to 1. Otherwise 0. */ + rc = (pParse->nErr>0); /* Control jumps to here if an error is encountered above, or upon ** successful coding of the SELECT. */ select_end: + explainSetInteger(pParse->iSelectId, iRestoreSelectId); /* Identify column names if results of the SELECT are to be output. */ if( rc==SQLITE_OK && pDest->eDest==SRT_Output ){ generateColumnNames(pParse, pTabList, pEList); } sqlite3DbFree(db, sAggInfo.aCol); sqlite3DbFree(db, sAggInfo.aFunc); +#if SELECTTRACE_ENABLED + SELECTTRACE(1,pParse,p,("end processing\n")); + pParse->nSelectIndent--; +#endif return rc; } -#if defined(SQLITE_DEBUG) -/* -******************************************************************************* -** The following code is used for testing and debugging only. The code -** that follows does not appear in normal builds. -** -** These routines are used to print out the content of all or part of a -** parse structures such as Select or Expr. Such printouts are useful -** for helping to understand what is happening inside the code generator -** during the execution of complex SELECT statements. -** -** These routine are not called anywhere from within the normal -** code base. Then are intended to be called from within the debugger -** or from temporary "printf" statements inserted for debugging. 
-*/ -SQLITE_PRIVATE void sqlite3PrintExpr(Expr *p){ - if( !ExprHasProperty(p, EP_IntValue) && p->u.zToken ){ - sqlite3DebugPrintf("(%s", p->u.zToken); - }else{ - sqlite3DebugPrintf("(%d", p->op); - } - if( p->pLeft ){ - sqlite3DebugPrintf(" "); - sqlite3PrintExpr(p->pLeft); - } - if( p->pRight ){ - sqlite3DebugPrintf(" "); - sqlite3PrintExpr(p->pRight); - } - sqlite3DebugPrintf(")"); -} -SQLITE_PRIVATE void sqlite3PrintExprList(ExprList *pList){ - int i; - for(i=0; i<pList->nExpr; i++){ - sqlite3PrintExpr(pList->a[i].pExpr); - if( i<pList->nExpr-1 ){ - sqlite3DebugPrintf(", "); - } - } -} -SQLITE_PRIVATE void sqlite3PrintSelect(Select *p, int indent){ - sqlite3DebugPrintf("%*sSELECT(%p) ", indent, "", p); - sqlite3PrintExprList(p->pEList); - sqlite3DebugPrintf("\n"); - if( p->pSrc ){ - char *zPrefix; - int i; - zPrefix = "FROM"; - for(i=0; i<p->pSrc->nSrc; i++){ - struct SrcList_item *pItem = &p->pSrc->a[i]; - sqlite3DebugPrintf("%*s ", indent+6, zPrefix); - zPrefix = ""; - if( pItem->pSelect ){ - sqlite3DebugPrintf("(\n"); - sqlite3PrintSelect(pItem->pSelect, indent+10); - sqlite3DebugPrintf("%*s)", indent+8, ""); - }else if( pItem->zName ){ - sqlite3DebugPrintf("%s", pItem->zName); - } - if( pItem->pTab ){ - sqlite3DebugPrintf("(table: %s)", pItem->pTab->zName); - } - if( pItem->zAlias ){ - sqlite3DebugPrintf(" AS %s", pItem->zAlias); - } - if( i<p->pSrc->nSrc-1 ){ - sqlite3DebugPrintf(","); - } - sqlite3DebugPrintf("\n"); - } - } - if( p->pWhere ){ - sqlite3DebugPrintf("%*s WHERE ", indent, ""); - sqlite3PrintExpr(p->pWhere); - sqlite3DebugPrintf("\n"); - } - if( p->pGroupBy ){ - sqlite3DebugPrintf("%*s GROUP BY ", indent, ""); - sqlite3PrintExprList(p->pGroupBy); - sqlite3DebugPrintf("\n"); - } - if( p->pHaving ){ - sqlite3DebugPrintf("%*s HAVING ", indent, ""); - sqlite3PrintExpr(p->pHaving); - sqlite3DebugPrintf("\n"); - } - if( p->pOrderBy ){ - sqlite3DebugPrintf("%*s ORDER BY ", indent, ""); - sqlite3PrintExprList(p->pOrderBy); - sqlite3DebugPrintf("\n"); - } -} -/* End of the structure debug printing code -*****************************************************************************/ -#endif /* defined(SQLITE_TEST) || defined(SQLITE_DEBUG) */ - /************** End of select.c **********************************************/ /************** Begin file table.c *******************************************/ /* ** 2001 September 15 ** @@ -84878,10 +115913,13 @@ ** interface routine of sqlite3_exec(). ** ** These routines are in a separate files so that they will not be linked ** if they are not used. */ +/* #include "sqliteInt.h" */ +/* #include <stdlib.h> */ +/* #include <string.h> */ #ifndef SQLITE_OMIT_GET_TABLE /* ** This structure is used to pass data from sqlite3_get_table() through @@ -84888,14 +115926,14 @@ ** to the callback function is uses to build the result. */ typedef struct TabResult { char **azResult; /* Accumulated output */ char *zErrMsg; /* Error message text, if an error occurs */ - int nAlloc; /* Slots allocated for azResult[] */ - int nRow; /* Number of rows in the result */ - int nColumn; /* Number of columns in the result */ - int nData; /* Slots used in azResult[]. (nRow+1)*nColumn */ + u32 nAlloc; /* Slots allocated for azResult[] */ + u32 nRow; /* Number of rows in the result */ + u32 nColumn; /* Number of columns in the result */ + u32 nData; /* Slots used in azResult[]. (nRow+1)*nColumn */ int rc; /* Return code from sqlite3_exec() */ } TabResult; /* ** This routine is called once for each row in the result table. 
Its job @@ -84917,11 +115955,11 @@ need = nCol; } if( p->nData + need > p->nAlloc ){ char **azNew; p->nAlloc = p->nAlloc*2 + need; - azNew = sqlite3_realloc( p->azResult, sizeof(char*)*p->nAlloc ); + azNew = sqlite3_realloc64( p->azResult, sizeof(char*)*p->nAlloc ); if( azNew==0 ) goto malloc_failed; p->azResult = azNew; } /* If this is the first row, then generate an extra row containing @@ -84932,11 +115970,11 @@ for(i=0; i<nCol; i++){ z = sqlite3_mprintf("%s", colv[i]); if( z==0 ) goto malloc_failed; p->azResult[p->nData++] = z; } - }else if( p->nColumn!=nCol ){ + }else if( (int)p->nColumn!=nCol ){ sqlite3_free(p->zErrMsg); p->zErrMsg = sqlite3_mprintf( "sqlite3_get_table() called with two or more incompatible queries" ); p->rc = SQLITE_ERROR; @@ -84949,11 +115987,11 @@ for(i=0; i<nCol; i++){ if( argv[i]==0 ){ z = 0; }else{ int n = sqlite3Strlen30(argv[i])+1; - z = sqlite3_malloc( n ); + z = sqlite3_malloc64( n ); if( z==0 ) goto malloc_failed; memcpy(z, argv[i], n); } p->azResult[p->nData++] = z; } @@ -84974,11 +116012,11 @@ ** The result that is written to ***pazResult is held in memory obtained ** from malloc(). But the caller cannot free this memory directly. ** Instead, the entire table should be passed to sqlite3_free_table() when ** the calling procedure is finished using it. */ -SQLITE_API int sqlite3_get_table( +SQLITE_API int SQLITE_STDCALL sqlite3_get_table( sqlite3 *db, /* The database on which the SQL executes */ const char *zSql, /* The SQL to be executed */ char ***pazResult, /* Write the result table here */ int *pnRow, /* Write the number of rows in the result here */ int *pnColumn, /* Write the number of columns of result here */ @@ -84985,10 +116023,13 @@ char **pzErrMsg /* Write error messages here */ ){ int rc; TabResult res; +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) || pazResult==0 ) return SQLITE_MISUSE_BKPT; +#endif *pazResult = 0; if( pnColumn ) *pnColumn = 0; if( pnRow ) *pnRow = 0; if( pzErrMsg ) *pzErrMsg = 0; res.zErrMsg = 0; @@ -84995,11 +116036,11 @@ res.nRow = 0; res.nColumn = 0; res.nData = 1; res.nAlloc = 20; res.rc = SQLITE_OK; - res.azResult = sqlite3_malloc(sizeof(char*)*res.nAlloc ); + res.azResult = sqlite3_malloc64(sizeof(char*)*res.nAlloc ); if( res.azResult==0 ){ db->errCode = SQLITE_NOMEM; return SQLITE_NOMEM; } res.azResult[0] = 0; @@ -85023,11 +116064,11 @@ sqlite3_free_table(&res.azResult[1]); return rc; } if( res.nAlloc>res.nData ){ char **azNew; - azNew = sqlite3_realloc( res.azResult, sizeof(char*)*res.nData ); + azNew = sqlite3_realloc64( res.azResult, sizeof(char*)*res.nData ); if( azNew==0 ){ sqlite3_free_table(&res.azResult[1]); db->errCode = SQLITE_NOMEM; return SQLITE_NOMEM; } @@ -85040,12 +116081,12 @@ } /* ** This routine frees the space the sqlite3_get_table() malloced. */ -SQLITE_API void sqlite3_free_table( - char **azResult /* Result returned from from sqlite3_get_table() */ +SQLITE_API void SQLITE_STDCALL sqlite3_free_table( + char **azResult /* Result returned from sqlite3_get_table() */ ){ if( azResult ){ int i, n; azResult--; assert( azResult!=0 ); @@ -85069,10 +116110,11 @@ ** May you share freely, never taking more than you give. ** ************************************************************************* ** This file contains the implementation for TRIGGERs */ +/* #include "sqliteInt.h" */ #ifndef SQLITE_OMIT_TRIGGER /* ** Delete a linked list of TriggerStep structures. 
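The hunks above move sqlite3_get_table() to the 64-bit allocators and unsigned row/column counters. For orientation, a minimal caller-side sketch of this legacy interface follows; the helper name dump_query and the output formatting are illustrative only, and the ownership rules are exactly those stated in the comment (free the whole table with sqlite3_free_table(), never row by row):

    #include <stdio.h>
    #include "sqlite3.h"

    static int dump_query(sqlite3 *db, const char *zSql){
      char **azResult = 0;       /* (nRow+1)*nColumn string pointers        */
      char *zErr = 0;            /* Error message from sqlite3_get_table()  */
      int nRow = 0, nCol = 0;    /* Data rows (header excluded) and columns */
      int i, j;
      int rc = sqlite3_get_table(db, zSql, &azResult, &nRow, &nCol, &zErr);
      if( rc!=SQLITE_OK ){
        fprintf(stderr, "query failed: %s\n", zErr ? zErr : "out of memory");
        sqlite3_free(zErr);
        return rc;
      }
      for(i=0; i<=nRow; i++){          /* Row 0 holds the column names */
        for(j=0; j<nCol; j++){
          const char *z = azResult[i*nCol + j];
          printf("%s%s", z ? z : "NULL", j==nCol-1 ? "\n" : " | ");
        }
      }
      sqlite3_free_table(azResult);    /* Never sqlite3_free() the rows */
      return SQLITE_OK;
    }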
*/ @@ -85112,10 +116154,11 @@ return 0; } if( pTmpSchema!=pTab->pSchema ){ HashElem *p; + assert( sqlite3SchemaMutexHeld(pParse->db, 0, pTmpSchema) ); for(p=sqliteHashFirst(&pTmpSchema->trigHash); p; p=sqliteHashNext(p)){ Trigger *pTrig = (Trigger *)sqliteHashData(p); if( pTrig->pTabSchema==pTab->pSchema && 0==sqlite3StrICmp(pTrig->table, pTab->zName) ){ @@ -85168,36 +116211,49 @@ goto trigger_cleanup; } iDb = 1; pName = pName1; }else{ - /* Figure out the db that the the trigger will be created in */ + /* Figure out the db that the trigger will be created in */ iDb = sqlite3TwoPartName(pParse, pName1, pName2, &pName); if( iDb<0 ){ goto trigger_cleanup; } + } + if( !pTableName || db->mallocFailed ){ + goto trigger_cleanup; + } + + /* A long-standing parser bug is that this syntax was allowed: + ** + ** CREATE TRIGGER attached.demo AFTER INSERT ON attached.tab .... + ** ^^^^^^^^ + ** + ** To maintain backwards compatibility, ignore the database + ** name on pTableName if we are reparsing out of SQLITE_MASTER. + */ + if( db->init.busy && iDb!=1 ){ + sqlite3DbFree(db, pTableName->a[0].zDatabase); + pTableName->a[0].zDatabase = 0; } /* If the trigger name was unqualified, and the table is a temp table, ** then set iDb to 1 to create the trigger in the temporary database. ** If sqlite3SrcListLookup() returns 0, indicating the table does not ** exist, the error is caught by the block below. */ - if( !pTableName || db->mallocFailed ){ - goto trigger_cleanup; - } pTab = sqlite3SrcListLookup(pParse, pTableName); if( db->init.busy==0 && pName2->n==0 && pTab && pTab->pSchema==db->aDb[1].pSchema ){ iDb = 1; } /* Ensure the table name matches database name and that the table exists */ if( db->mallocFailed ) goto trigger_cleanup; assert( pTableName->nSrc==1 ); - if( sqlite3FixInit(&sFix, pParse, iDb, "trigger", pName) && - sqlite3FixSrcList(&sFix, pTableName) ){ + sqlite3FixInit(&sFix, pParse, iDb, "trigger", pName); + if( sqlite3FixSrcList(&sFix, pTableName) ){ goto trigger_cleanup; } pTab = sqlite3SrcListLookup(pParse, pTableName); if( !pTab ){ /* The table does not exist. */ @@ -85223,22 +116279,24 @@ ** specified name exists */ zName = sqlite3NameFromToken(db, pName); if( !zName || SQLITE_OK!=sqlite3CheckObjectName(pParse, zName) ){ goto trigger_cleanup; } - if( sqlite3HashFind(&(db->aDb[iDb].pSchema->trigHash), - zName, sqlite3Strlen30(zName)) ){ + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); + if( sqlite3HashFind(&(db->aDb[iDb].pSchema->trigHash),zName) ){ if( !noErr ){ sqlite3ErrorMsg(pParse, "trigger %T already exists", pName); + }else{ + assert( !db->init.busy ); + sqlite3CodeVerifySchema(pParse, iDb); } goto trigger_cleanup; } /* Do not create a trigger on a system table */ if( sqlite3StrNICmp(pTab->zName, "sqlite_", 7)==0 ){ sqlite3ErrorMsg(pParse, "cannot create trigger on system table"); - pParse->nErr++; goto trigger_cleanup; } /* INSTEAD of triggers are only for views and views only support INSTEAD ** of triggers. 
@@ -85320,24 +116378,24 @@ sqlite3 *db = pParse->db; /* The database */ DbFixer sFix; /* Fixer object */ int iDb; /* Database containing the trigger */ Token nameToken; /* Trigger name for error reporting */ - pTrig = pParse->pNewTrigger; pParse->pNewTrigger = 0; if( NEVER(pParse->nErr) || !pTrig ) goto triggerfinish_cleanup; zName = pTrig->zName; iDb = sqlite3SchemaToIndex(pParse->db, pTrig->pSchema); pTrig->step_list = pStepList; while( pStepList ){ pStepList->pTrig = pTrig; pStepList = pStepList->pNext; } - nameToken.z = pTrig->zName; - nameToken.n = sqlite3Strlen30(nameToken.z); - if( sqlite3FixInit(&sFix, pParse, iDb, "trigger", &nameToken) - && sqlite3FixTriggerStep(&sFix, pTrig->step_list) ){ + sqlite3TokenInit(&nameToken, pTrig->zName); + sqlite3FixInit(&sFix, pParse, iDb, "trigger", &nameToken); + if( sqlite3FixTriggerStep(&sFix, pTrig->step_list) + || sqlite3FixExpr(&sFix, pTrig->pWhen) + ){ goto triggerfinish_cleanup; } /* if we are not initializing, ** build the sqlite_master entry @@ -85355,25 +116413,24 @@ "INSERT INTO %Q.%s VALUES('trigger',%Q,%Q,0,'CREATE TRIGGER %q')", db->aDb[iDb].zName, SCHEMA_TABLE(iDb), zName, pTrig->table, z); sqlite3DbFree(db, z); sqlite3ChangeCookie(pParse, iDb); - sqlite3VdbeAddOp4(v, OP_ParseSchema, iDb, 0, 0, sqlite3MPrintf( - db, "type='trigger' AND name='%q'", zName), P4_DYNAMIC - ); + sqlite3VdbeAddParseSchemaOp(v, iDb, + sqlite3MPrintf(db, "type='trigger' AND name='%q'", zName)); } if( db->init.busy ){ Trigger *pLink = pTrig; Hash *pHash = &db->aDb[iDb].pSchema->trigHash; - pTrig = sqlite3HashInsert(pHash, zName, sqlite3Strlen30(zName), pTrig); + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); + pTrig = sqlite3HashInsert(pHash, zName, pTrig); if( pTrig ){ - db->mallocFailed = 1; + sqlite3OomFault(db); }else if( pLink->pSchema==pLink->pTabSchema ){ Table *pTab; - int n = sqlite3Strlen30(pLink->table); - pTab = sqlite3HashFind(&pLink->pTabSchema->tblHash, pLink->table, n); + pTab = sqlite3HashFind(&pLink->pTabSchema->tblHash, pLink->table); assert( pTab!=0 ); pLink->pNext = pTab->pTrigger; pTab->pTrigger = pLink; } } @@ -85414,16 +116471,16 @@ u8 op, /* Trigger opcode */ Token *pName /* The target name */ ){ TriggerStep *pTriggerStep; - pTriggerStep = sqlite3DbMallocZero(db, sizeof(TriggerStep) + pName->n); + pTriggerStep = sqlite3DbMallocZero(db, sizeof(TriggerStep) + pName->n + 1); if( pTriggerStep ){ char *z = (char*)&pTriggerStep[1]; memcpy(z, pName->z, pName->n); - pTriggerStep->target.z = z; - pTriggerStep->target.n = pName->n; + sqlite3Dequote(z); + pTriggerStep->zTarget = z; pTriggerStep->op = op; } return pTriggerStep; } @@ -85436,29 +116493,25 @@ */ SQLITE_PRIVATE TriggerStep *sqlite3TriggerInsertStep( sqlite3 *db, /* The database connection */ Token *pTableName, /* Name of the table into which we insert */ IdList *pColumn, /* List of columns in pTableName to insert into */ - ExprList *pEList, /* The VALUE clause: a list of values to be inserted */ Select *pSelect, /* A SELECT statement that supplies values */ u8 orconf /* The conflict algorithm (OE_Abort, OE_Replace, etc.) 
*/ ){ TriggerStep *pTriggerStep; - assert(pEList == 0 || pSelect == 0); - assert(pEList != 0 || pSelect != 0 || db->mallocFailed); + assert(pSelect != 0 || db->mallocFailed); pTriggerStep = triggerStepAllocate(db, TK_INSERT, pTableName); if( pTriggerStep ){ pTriggerStep->pSelect = sqlite3SelectDup(db, pSelect, EXPRDUP_REDUCE); pTriggerStep->pIdList = pColumn; - pTriggerStep->pExprList = sqlite3ExprListDup(db, pEList, EXPRDUP_REDUCE); pTriggerStep->orconf = orconf; }else{ sqlite3IdListDelete(db, pColumn); } - sqlite3ExprListDelete(db, pEList); sqlite3SelectDelete(db, pSelect); return pTriggerStep; } @@ -85532,11 +116585,10 @@ SQLITE_PRIVATE void sqlite3DropTrigger(Parse *pParse, SrcList *pName, int noErr){ Trigger *pTrigger = 0; int i; const char *zDb; const char *zName; - int nName; sqlite3 *db = pParse->db; if( db->mallocFailed ) goto drop_trigger_cleanup; if( SQLITE_OK!=sqlite3ReadSchema(pParse) ){ goto drop_trigger_cleanup; @@ -85543,21 +116595,25 @@ } assert( pName->nSrc==1 ); zDb = pName->a[0].zDatabase; zName = pName->a[0].zName; - nName = sqlite3Strlen30(zName); + assert( zDb!=0 || sqlite3BtreeHoldsAllMutexes(db) ); for(i=OMIT_TEMPDB; i<db->nDb; i++){ int j = (i<2) ? i^1 : i; /* Search TEMP before MAIN */ if( zDb && sqlite3StrICmp(db->aDb[j].zName, zDb) ) continue; - pTrigger = sqlite3HashFind(&(db->aDb[j].pSchema->trigHash), zName, nName); + assert( sqlite3SchemaMutexHeld(db, j, 0) ); + pTrigger = sqlite3HashFind(&(db->aDb[j].pSchema->trigHash), zName); if( pTrigger ) break; } if( !pTrigger ){ if( !noErr ){ sqlite3ErrorMsg(pParse, "no such trigger: %S", pName, 0); + }else{ + sqlite3CodeVerifyNamedSchema(pParse, zDb); } + pParse->checkSchema = 1; goto drop_trigger_cleanup; } sqlite3DropTriggerPtr(pParse, pTrigger); drop_trigger_cleanup: @@ -85567,12 +116623,11 @@ /* ** Return a pointer to the Table structure for the table that a trigger ** is set on. */ static Table *tableOfTrigger(Trigger *pTrigger){ - int n = sqlite3Strlen30(pTrigger->table); - return sqlite3HashFind(&pTrigger->pTabSchema->tblHash, pTrigger->table, n); + return sqlite3HashFind(&pTrigger->pTabSchema->tblHash, pTrigger->table); } /* ** Drop a trigger given a pointer to that trigger. @@ -85603,44 +116658,29 @@ /* Generate code to destroy the database record of the trigger. */ assert( pTable!=0 ); if( (v = sqlite3GetVdbe(pParse))!=0 ){ - int base; - static const VdbeOpList dropTrigger[] = { - { OP_Rewind, 0, ADDR(9), 0}, - { OP_String8, 0, 1, 0}, /* 1 */ - { OP_Column, 0, 1, 2}, - { OP_Ne, 2, ADDR(8), 1}, - { OP_String8, 0, 1, 0}, /* 4: "trigger" */ - { OP_Column, 0, 0, 2}, - { OP_Ne, 2, ADDR(8), 1}, - { OP_Delete, 0, 0, 0}, - { OP_Next, 0, ADDR(1), 0}, /* 8 */ - }; - - sqlite3BeginWriteOperation(pParse, 0, iDb); - sqlite3OpenMasterTable(pParse, iDb); - base = sqlite3VdbeAddOpList(v, ArraySize(dropTrigger), dropTrigger); - sqlite3VdbeChangeP4(v, base+1, pTrigger->zName, 0); - sqlite3VdbeChangeP4(v, base+4, "trigger", P4_STATIC); + sqlite3NestedParse(pParse, + "DELETE FROM %Q.%s WHERE name=%Q AND type='trigger'", + db->aDb[iDb].zName, SCHEMA_TABLE(iDb), pTrigger->zName + ); sqlite3ChangeCookie(pParse, iDb); - sqlite3VdbeAddOp2(v, OP_Close, 0, 0); sqlite3VdbeAddOp4(v, OP_DropTrigger, iDb, 0, 0, pTrigger->zName, 0); - if( pParse->nMem<3 ){ - pParse->nMem = 3; - } } } /* ** Remove a trigger from the hash tables of the sqlite* pointer. 
*/ SQLITE_PRIVATE void sqlite3UnlinkAndDeleteTrigger(sqlite3 *db, int iDb, const char *zName){ - Hash *pHash = &(db->aDb[iDb].pSchema->trigHash); Trigger *pTrigger; - pTrigger = sqlite3HashInsert(pHash, zName, sqlite3Strlen30(zName), 0); + Hash *pHash; + + assert( sqlite3SchemaMutexHeld(db, iDb, 0) ); + pHash = &(db->aDb[iDb].pSchema->trigHash); + pTrigger = sqlite3HashInsert(pHash, zName, 0); if( ALWAYS(pTrigger) ){ if( pTrigger->pSchema==pTrigger->pTabSchema ){ Table *pTab = tableOfTrigger(pTrigger); Trigger **pp; for(pp=&pTab->pTrigger; *pp!=pTrigger; pp=&((*pp)->pNext)); @@ -85681,12 +116721,16 @@ int op, /* one of TK_DELETE, TK_INSERT, TK_UPDATE */ ExprList *pChanges, /* Columns that change in an UPDATE statement */ int *pMask /* OUT: Mask of TRIGGER_BEFORE|TRIGGER_AFTER */ ){ int mask = 0; - Trigger *pList = sqlite3TriggerList(pParse, pTab); + Trigger *pList = 0; Trigger *p; + + if( (pParse->db->flags & SQLITE_EnableTrigger)!=0 ){ + pList = sqlite3TriggerList(pParse, pTab); + } assert( pList==0 || IsVirtual(pTab)==0 ); for(p=pList; p; p=p->pNext){ if( p->op==op && checkColumnOverlap(p->pColumns, pChanges) ){ mask |= p->tr_tm; } @@ -85696,11 +116740,11 @@ } return (mask ? pList : 0); } /* -** Convert the pStep->target token into a SrcList and return a pointer +** Convert the pStep->zTarget string into a SrcList and return a pointer ** to that SrcList. ** ** This routine adds a specific database name, if needed, to the target when ** forming the SrcList. This prevents a trigger in one database from ** referring to a target in another database. An exception is when the @@ -85709,21 +116753,21 @@ */ static SrcList *targetSrcList( Parse *pParse, /* The parsing context */ TriggerStep *pStep /* The trigger containing the target token */ ){ + sqlite3 *db = pParse->db; int iDb; /* Index of the database to use */ SrcList *pSrc; /* SrcList to be returned */ - pSrc = sqlite3SrcListAppend(pParse->db, 0, &pStep->target, 0); + pSrc = sqlite3SrcListAppend(db, 0, 0, 0); if( pSrc ){ assert( pSrc->nSrc>0 ); - assert( pSrc->a!=0 ); - iDb = sqlite3SchemaToIndex(pParse->db, pStep->pTrig->pSchema); + pSrc->a[pSrc->nSrc-1].zName = sqlite3DbStrDup(db, pStep->zTarget); + iDb = sqlite3SchemaToIndex(db, pStep->pTrig->pSchema); if( iDb==0 || iDb>=2 ){ - sqlite3 *db = pParse->db; - assert( iDb<pParse->db->nDb ); + assert( iDb<db->nDb ); pSrc->a[pSrc->nSrc-1].zDatabase = sqlite3DbStrDup(db, db->aDb[iDb].zName); } } return pSrc; } @@ -85757,10 +116801,11 @@ ** ** INSERT INTO t1 ... ; -- insert into t2 uses REPLACE policy ** INSERT OR IGNORE INTO t1 ... ; -- insert into t2 uses IGNORE policy */ pParse->eOrconf = (orconf==OE_Default)?pStep->orconf:(u8)orconf; + assert( pParse->okConstFactor==0 ); switch( pStep->op ){ case TK_UPDATE: { sqlite3Update(pParse, targetSrcList(pParse, pStep), @@ -85771,11 +116816,10 @@ break; } case TK_INSERT: { sqlite3Insert(pParse, targetSrcList(pParse, pStep), - sqlite3ExprListDup(db, pStep->pExprList, 0), sqlite3SelectDup(db, pStep->pSelect, 0), sqlite3IdListDup(db, pStep->pIdList), pParse->eOrconf ); break; @@ -85802,11 +116846,11 @@ } return 0; } -#ifdef SQLITE_DEBUG +#ifdef SQLITE_ENABLE_EXPLAIN_COMMENTS /* ** This function is used to add VdbeComment() annotations to a VDBE ** program. It is not used in production code, only for debugging. 
*/ static const char *onErrorText(int onError){ @@ -85831,10 +116875,11 @@ assert( pFrom->zErrMsg==0 || pFrom->nErr ); assert( pTo->zErrMsg==0 || pTo->nErr ); if( pTo->nErr==0 ){ pTo->zErrMsg = pFrom->zErrMsg; pTo->nErr = pFrom->nErr; + pTo->rc = pFrom->rc; }else{ sqlite3DbFree(pFrom->db, pFrom->zErrMsg); } } @@ -85857,10 +116902,11 @@ SubProgram *pProgram = 0; /* Sub-vdbe for trigger program */ Parse *pSubParse; /* Parse context for sub-vdbe */ int iEndTrigger = 0; /* Label to jump to if WHEN is false */ assert( pTrigger->zName==0 || pTab==tableOfTrigger(pTrigger) ); + assert( pTop->pVdbe ); /* Allocate the TriggerPrg and SubProgram objects. To ensure that they ** are freed if an error occurs, link them into the Parse.pTriggerPrg ** list of the top-level Parse object sooner rather than later. */ pPrg = sqlite3DbMallocZero(db, sizeof(TriggerPrg)); @@ -85867,11 +116913,11 @@ if( !pPrg ) return 0; pPrg->pNext = pTop->pTriggerPrg; pTop->pTriggerPrg = pPrg; pPrg->pProgram = pProgram = sqlite3DbMallocZero(db, sizeof(SubProgram)); if( !pProgram ) return 0; - pProgram->nRef = 1; + sqlite3VdbeLinkSubProgram(pTop->pVdbe, pProgram); pPrg->pTrigger = pTrigger; pPrg->orconf = orconf; pPrg->aColmask[0] = 0xffffffff; pPrg->aColmask[1] = 0xffffffff; @@ -85932,18 +116978,20 @@ if( db->mallocFailed==0 ){ pProgram->aOp = sqlite3VdbeTakeOpArray(v, &pProgram->nOp, &pTop->nMaxArg); } pProgram->nMem = pSubParse->nMem; pProgram->nCsr = pSubParse->nTab; + pProgram->nOnce = pSubParse->nOnce; pProgram->token = (void *)pTrigger; pPrg->aColmask[0] = pSubParse->oldmask; pPrg->aColmask[1] = pSubParse->newmask; sqlite3VdbeDelete(v); } assert( !pSubParse->pAinc && !pSubParse->pZombieTab ); assert( !pSubParse->pTriggerPrg && !pSubParse->nMaxArg ); + sqlite3ParserReset(pSubParse); sqlite3StackFree(db, pSubParse); return pPrg; } @@ -86001,29 +117049,30 @@ assert( pPrg || pParse->nErr || pParse->db->mallocFailed ); /* Code the OP_Program opcode in the parent VDBE. P4 of the OP_Program ** is a pointer to the sub-vdbe containing the trigger program. */ if( pPrg ){ - sqlite3VdbeAddOp3(v, OP_Program, reg, ignoreJump, ++pParse->nMem); - pPrg->pProgram->nRef++; - sqlite3VdbeChangeP4(v, -1, (const char *)pPrg->pProgram, P4_SUBPROGRAM); + int bRecursive = (p->zName && 0==(pParse->db->flags&SQLITE_RecTriggers)); + + sqlite3VdbeAddOp4(v, OP_Program, reg, ignoreJump, ++pParse->nMem, + (const char *)pPrg->pProgram, P4_SUBPROGRAM); VdbeComment( (v, "Call: %s.%s", (p->zName?p->zName:"fkey"), onErrorText(orconf))); /* Set the P5 operand of the OP_Program instruction to non-zero if ** recursive invocation of this trigger program is disallowed. Recursive ** invocation is disallowed if (a) the sub-program is really a trigger, ** not a foreign key action, and (b) the flag to enable recursive triggers ** is clear. */ - sqlite3VdbeChangeP5(v, (u8)(p->zName && !(pParse->db->flags&SQLITE_RecTriggers))); + sqlite3VdbeChangeP5(v, (u8)bRecursive); } } /* ** This is called to code the required FOR EACH ROW triggers for an operation ** on table pTab. The operation to code triggers for (INSERT, UPDATE or DELETE) -** is given by the op paramater. The tr_tm parameter determines whether the +** is given by the op parameter. The tr_tm parameter determines whether the ** BEFORE or AFTER triggers are coded. If the operation is an UPDATE, then ** parameter pChanges is passed the list of columns being modified. ** ** If there are no triggers that fire at the specified time for the specified ** operation on pTab, this function is a no-op. 
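The bRecursive handling above only allows a trigger sub-program to re-enter itself when the SQLITE_RecTriggers connection flag is set. Applications normally control that flag through a pragma; a minimal, illustrative snippet (no error handling) that enables it before running statements that depend on recursive triggers:

    /* Allow triggers to fire other (or the same) triggers recursively.
    ** The flag is off by default for historical compatibility. */
    sqlite3_exec(db, "PRAGMA recursive_triggers=ON;", 0, 0, 0);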
@@ -86165,10 +117214,11 @@ ** ************************************************************************* ** This file contains C code routines that are called by the parser ** to handle UPDATE statements. */ +/* #include "sqliteInt.h" */ #ifndef SQLITE_OMIT_VIRTUALTABLE /* Forward declaration */ static void updateVirtualTable( Parse *pParse, /* The parsing context */ @@ -86175,11 +117225,12 @@ SrcList *pSrc, /* The virtual table to be modified */ Table *pTab, /* The virtual table */ ExprList *pChanges, /* The columns to change in the UPDATE statement */ Expr *pRowidExpr, /* Expression used to recompute the rowid */ int *aXRef, /* Mapping from columns of pTab to entries in pChanges */ - Expr *pWhere /* WHERE clause of the UPDATE statement */ + Expr *pWhere, /* WHERE clause of the UPDATE statement */ + int onError /* ON CONFLICT strategy */ ); #endif /* SQLITE_OMIT_VIRTUALTABLE */ /* ** The most recently coded instruction was an OP_Column to retrieve the @@ -86212,11 +117263,11 @@ ** space. */ SQLITE_PRIVATE void sqlite3ColumnDefault(Vdbe *v, Table *pTab, int i, int iReg){ assert( pTab!=0 ); if( !pTab->pSelect ){ - sqlite3_value *pValue; + sqlite3_value *pValue = 0; u8 enc = ENC(sqlite3VdbeDb(v)); Column *pCol = &pTab->aCol[i]; VdbeComment((v, "%s.%s", pTab->zName, pCol->zName)); assert( i<pTab->nCol ); sqlite3ValueFromExpr(sqlite3VdbeDb(v), pCol->pDflt, enc, @@ -86223,11 +117274,11 @@ pCol->affinity, &pValue); if( pValue ){ sqlite3VdbeChangeP4(v, -1, (const char *)pValue, P4_MEM); } #ifndef SQLITE_OMIT_FLOATING_POINT - if( iReg>=0 && pTab->aCol[i].affinity==SQLITE_AFF_REAL ){ + if( pTab->aCol[i].affinity==SQLITE_AFF_REAL ){ sqlite3VdbeAddOp1(v, OP_RealAffinity, iReg); } #endif } } @@ -86246,45 +117297,55 @@ Expr *pWhere, /* The WHERE clause. May be null */ int onError /* How to handle constraint errors */ ){ int i, j; /* Loop counters */ Table *pTab; /* The table to be updated */ - int addr = 0; /* VDBE instruction address of the start of the loop */ + int addrTop = 0; /* VDBE instruction address of the start of the loop */ WhereInfo *pWInfo; /* Information about the WHERE clause */ Vdbe *v; /* The virtual database engine */ Index *pIdx; /* For looping over indices */ + Index *pPk; /* The PRIMARY KEY index for WITHOUT ROWID tables */ int nIdx; /* Number of indices that need updating */ - int iCur; /* VDBE Cursor number of pTab */ + int iBaseCur; /* Base cursor number */ + int iDataCur; /* Cursor for the canonical data btree */ + int iIdxCur; /* Cursor for the first index */ sqlite3 *db; /* The database structure */ int *aRegIdx = 0; /* One register assigned to each index to be updated */ int *aXRef = 0; /* aXRef[i] is the index in pChanges->a[] of the ** an expression for the i-th column of the table. ** aXRef[i]==-1 if the i-th column is not changed. 
*/ - int chngRowid; /* True if the record number is being changed */ + u8 *aToOpen; /* 1 for tables and indices to be opened */ + u8 chngPk; /* PRIMARY KEY changed in a WITHOUT ROWID table */ + u8 chngRowid; /* Rowid changed in a normal table */ + u8 chngKey; /* Either chngPk or chngRowid */ Expr *pRowidExpr = 0; /* Expression defining the new record number */ - int openAll = 0; /* True if all indices need to be opened */ AuthContext sContext; /* The authorization context */ NameContext sNC; /* The name-context to resolve expressions in */ int iDb; /* Database containing the table being updated */ int okOnePass; /* True for one-pass algorithm without the FIFO */ int hasFK; /* True if foreign key processing is required */ + int labelBreak; /* Jump here to break out of UPDATE loop */ + int labelContinue; /* Jump here to continue next step of UPDATE loop */ #ifndef SQLITE_OMIT_TRIGGER int isView; /* True when updating a view (INSTEAD OF trigger) */ Trigger *pTrigger; /* List of triggers on pTab, if required */ int tmask; /* Mask of TRIGGER_BEFORE|TRIGGER_AFTER */ #endif int newmask; /* Mask of NEW.* columns accessed by BEFORE triggers */ + int iEph = 0; /* Ephemeral table holding all primary key values */ + int nKey = 0; /* Number of elements in regKey for WITHOUT ROWID */ + int aiCurOnePass[2]; /* The write cursors opened by WHERE_ONEPASS */ /* Register Allocations */ int regRowCount = 0; /* A count of rows changed */ - int regOldRowid; /* The old rowid */ - int regNewRowid; /* The new rowid */ - int regNew; - int regOld = 0; + int regOldRowid = 0; /* The old rowid */ + int regNewRowid = 0; /* The new rowid */ + int regNew = 0; /* Content of the NEW.* table in triggers */ + int regOld = 0; /* Content of OLD.* table in triggers */ int regRowSet = 0; /* Rowset of rows to be updated */ - int regRec; /* Register used for new table record to insert */ + int regKey = 0; /* composite PRIMARY KEY value */ memset(&sContext, 0, sizeof(sContext)); db = pParse->db; if( pParse->nErr || db->mallocFailed ){ goto update_cleanup; @@ -86318,23 +117379,37 @@ goto update_cleanup; } if( sqlite3IsReadOnly(pParse, pTab, tmask) ){ goto update_cleanup; } - aXRef = sqlite3DbMallocRaw(db, sizeof(int) * pTab->nCol ); - if( aXRef==0 ) goto update_cleanup; - for(i=0; i<pTab->nCol; i++) aXRef[i] = -1; /* Allocate a cursors for the main database table and for all indices. ** The index cursors might not be used, but if they are used they ** need to occur right after the database cursor. So go ahead and ** allocate enough space, just in case. */ - pTabList->a[0].iCursor = iCur = pParse->nTab++; - for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ + pTabList->a[0].iCursor = iBaseCur = iDataCur = pParse->nTab++; + iIdxCur = iDataCur+1; + pPk = HasRowid(pTab) ? 0 : sqlite3PrimaryKeyIndex(pTab); + for(nIdx=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, nIdx++){ + if( IsPrimaryKeyIndex(pIdx) && pPk!=0 ){ + iDataCur = pParse->nTab; + pTabList->a[0].iCursor = iDataCur; + } pParse->nTab++; } + + /* Allocate space for aXRef[], aRegIdx[], and aToOpen[]. + ** Initialize aXRef[] and aToOpen[] to their default values. 
+ */ + aXRef = sqlite3DbMallocRawNN(db, sizeof(int) * (pTab->nCol+nIdx) + nIdx+2 ); + if( aXRef==0 ) goto update_cleanup; + aRegIdx = aXRef+pTab->nCol; + aToOpen = (u8*)(aRegIdx+nIdx); + memset(aToOpen, 1, nIdx+1); + aToOpen[nIdx+1] = 0; + for(i=0; i<pTab->nCol; i++) aXRef[i] = -1; /* Initialize the name-context */ memset(&sNC, 0, sizeof(sNC)); sNC.pParse = pParse; sNC.pSrcList = pTabList; @@ -86343,228 +117418,289 @@ ** of the UPDATE statement. Also find the column index ** for each column to be updated in the pChanges array. For each ** column to be updated, make sure we have authorization to change ** that column. */ - chngRowid = 0; + chngRowid = chngPk = 0; for(i=0; i<pChanges->nExpr; i++){ if( sqlite3ResolveExprNames(&sNC, pChanges->a[i].pExpr) ){ goto update_cleanup; } for(j=0; j<pTab->nCol; j++){ if( sqlite3StrICmp(pTab->aCol[j].zName, pChanges->a[i].zName)==0 ){ if( j==pTab->iPKey ){ chngRowid = 1; pRowidExpr = pChanges->a[i].pExpr; + }else if( pPk && (pTab->aCol[j].colFlags & COLFLAG_PRIMKEY)!=0 ){ + chngPk = 1; } aXRef[j] = i; break; } } if( j>=pTab->nCol ){ - if( sqlite3IsRowid(pChanges->a[i].zName) ){ + if( pPk==0 && sqlite3IsRowid(pChanges->a[i].zName) ){ + j = -1; chngRowid = 1; pRowidExpr = pChanges->a[i].pExpr; }else{ sqlite3ErrorMsg(pParse, "no such column: %s", pChanges->a[i].zName); + pParse->checkSchema = 1; goto update_cleanup; } } #ifndef SQLITE_OMIT_AUTHORIZATION { int rc; rc = sqlite3AuthCheck(pParse, SQLITE_UPDATE, pTab->zName, - pTab->aCol[j].zName, db->aDb[iDb].zName); + j<0 ? "ROWID" : pTab->aCol[j].zName, + db->aDb[iDb].zName); if( rc==SQLITE_DENY ){ goto update_cleanup; }else if( rc==SQLITE_IGNORE ){ aXRef[j] = -1; } } #endif } - - hasFK = sqlite3FkRequired(pParse, pTab, aXRef, chngRowid); - - /* Allocate memory for the array aRegIdx[]. There is one entry in the - ** array for each index associated with table being updated. Fill in - ** the value with a register number for indices that are to be used - ** and with zero for unused indices. - */ - for(nIdx=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, nIdx++){} - if( nIdx>0 ){ - aRegIdx = sqlite3DbMallocRaw(db, sizeof(Index*) * nIdx ); - if( aRegIdx==0 ) goto update_cleanup; - } + assert( (chngRowid & chngPk)==0 ); + assert( chngRowid==0 || chngRowid==1 ); + assert( chngPk==0 || chngPk==1 ); + chngKey = chngRowid + chngPk; + + /* The SET expressions are not actually used inside the WHERE loop. + ** So reset the colUsed mask. Unless this is a virtual table. In that + ** case, set all bits of the colUsed mask (to ensure that the virtual + ** table implementation makes all columns available). + */ + pTabList->a[0].colUsed = IsVirtual(pTab) ? (Bitmask)-1 : 0; + + hasFK = sqlite3FkRequired(pParse, pTab, aXRef, chngKey); + + /* There is one entry in the aRegIdx[] array for each index on the table + ** being updated. Fill in aRegIdx[] with a register number that will hold + ** the key for accessing each index. + ** + ** FIXME: Be smarter about omitting indexes that use expressions. + */ for(j=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, j++){ int reg; - if( chngRowid ){ + if( chngKey || hasFK || pIdx->pPartIdxWhere || pIdx==pPk ){ reg = ++pParse->nMem; }else{ reg = 0; - for(i=0; i<pIdx->nColumn; i++){ - if( aXRef[pIdx->aiColumn[i]]>=0 ){ + for(i=0; i<pIdx->nKeyCol; i++){ + i16 iIdxCol = pIdx->aiColumn[i]; + if( iIdxCol<0 || aXRef[iIdxCol]>=0 ){ reg = ++pParse->nMem; break; } } } + if( reg==0 ) aToOpen[j+1] = 0; aRegIdx[j] = reg; } /* Begin generating code. 
*/ v = sqlite3GetVdbe(pParse); if( v==0 ) goto update_cleanup; if( pParse->nested==0 ) sqlite3VdbeCountChanges(v); sqlite3BeginWriteOperation(pParse, 1, iDb); -#ifndef SQLITE_OMIT_VIRTUALTABLE - /* Virtual tables must be handled separately */ - if( IsVirtual(pTab) ){ - updateVirtualTable(pParse, pTabList, pTab, pChanges, pRowidExpr, aXRef, - pWhere); - pWhere = 0; - pTabList = 0; - goto update_cleanup; - } -#endif - - /* Allocate required registers. */ - regOldRowid = regNewRowid = ++pParse->nMem; - if( pTrigger || hasFK ){ - regOld = pParse->nMem + 1; - pParse->nMem += pTab->nCol; - } - if( chngRowid || pTrigger || hasFK ){ - regNewRowid = ++pParse->nMem; - } - regNew = pParse->nMem + 1; - pParse->nMem += pTab->nCol; - regRec = ++pParse->nMem; + /* Allocate required registers. */ + if( !IsVirtual(pTab) ){ + regRowSet = ++pParse->nMem; + regOldRowid = regNewRowid = ++pParse->nMem; + if( chngPk || pTrigger || hasFK ){ + regOld = pParse->nMem + 1; + pParse->nMem += pTab->nCol; + } + if( chngKey || pTrigger || hasFK ){ + regNewRowid = ++pParse->nMem; + } + regNew = pParse->nMem + 1; + pParse->nMem += pTab->nCol; + } /* Start the view context. */ if( isView ){ sqlite3AuthContextPush(pParse, &sContext, pTab->zName); } /* If we are trying to update a view, realize that view into - ** a ephemeral table. + ** an ephemeral table. */ #if !defined(SQLITE_OMIT_VIEW) && !defined(SQLITE_OMIT_TRIGGER) if( isView ){ - sqlite3MaterializeView(pParse, pTab, pWhere, iCur); + sqlite3MaterializeView(pParse, pTab, pWhere, iDataCur); } #endif /* Resolve the column names in all the expressions in the ** WHERE clause. */ if( sqlite3ResolveExprNames(&sNC, pWhere) ){ goto update_cleanup; } + +#ifndef SQLITE_OMIT_VIRTUALTABLE + /* Virtual tables must be handled separately */ + if( IsVirtual(pTab) ){ + updateVirtualTable(pParse, pTabList, pTab, pChanges, pRowidExpr, aXRef, + pWhere, onError); + goto update_cleanup; + } +#endif /* Begin the database scan */ - sqlite3VdbeAddOp2(v, OP_Null, 0, regOldRowid); - pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere,0, WHERE_ONEPASS_DESIRED); - if( pWInfo==0 ) goto update_cleanup; - okOnePass = pWInfo->okOnePass; - - /* Remember the rowid of every item to be updated. - */ - sqlite3VdbeAddOp2(v, OP_Rowid, iCur, regOldRowid); - if( !okOnePass ){ - regRowSet = ++pParse->nMem; - sqlite3VdbeAddOp2(v, OP_RowSetAdd, regRowSet, regOldRowid); - } - - /* End the database scan loop. - */ - sqlite3WhereEnd(pWInfo); + if( HasRowid(pTab) ){ + sqlite3VdbeAddOp3(v, OP_Null, 0, regRowSet, regOldRowid); + pWInfo = sqlite3WhereBegin( + pParse, pTabList, pWhere, 0, 0, WHERE_ONEPASS_DESIRED, iIdxCur + ); + if( pWInfo==0 ) goto update_cleanup; + okOnePass = sqlite3WhereOkOnePass(pWInfo, aiCurOnePass); + + /* Remember the rowid of every item to be updated. + */ + sqlite3VdbeAddOp2(v, OP_Rowid, iDataCur, regOldRowid); + if( !okOnePass ){ + sqlite3VdbeAddOp2(v, OP_RowSetAdd, regRowSet, regOldRowid); + } + + /* End the database scan loop. 
+ */ + sqlite3WhereEnd(pWInfo); + }else{ + int iPk; /* First of nPk memory cells holding PRIMARY KEY value */ + i16 nPk; /* Number of components of the PRIMARY KEY */ + int addrOpen; /* Address of the OpenEphemeral instruction */ + + assert( pPk!=0 ); + nPk = pPk->nKeyCol; + iPk = pParse->nMem+1; + pParse->nMem += nPk; + regKey = ++pParse->nMem; + iEph = pParse->nTab++; + sqlite3VdbeAddOp2(v, OP_Null, 0, iPk); + addrOpen = sqlite3VdbeAddOp2(v, OP_OpenEphemeral, iEph, nPk); + sqlite3VdbeSetP4KeyInfo(pParse, pPk); + pWInfo = sqlite3WhereBegin(pParse, pTabList, pWhere, 0, 0, + WHERE_ONEPASS_DESIRED, iIdxCur); + if( pWInfo==0 ) goto update_cleanup; + okOnePass = sqlite3WhereOkOnePass(pWInfo, aiCurOnePass); + for(i=0; i<nPk; i++){ + assert( pPk->aiColumn[i]>=0 ); + sqlite3ExprCodeGetColumnOfTable(v, pTab, iDataCur, pPk->aiColumn[i], + iPk+i); + } + if( okOnePass ){ + sqlite3VdbeChangeToNoop(v, addrOpen); + nKey = nPk; + regKey = iPk; + }else{ + sqlite3VdbeAddOp4(v, OP_MakeRecord, iPk, nPk, regKey, + sqlite3IndexAffinityStr(db, pPk), nPk); + sqlite3VdbeAddOp2(v, OP_IdxInsert, iEph, regKey); + } + sqlite3WhereEnd(pWInfo); + } /* Initialize the count of updated rows */ if( (db->flags & SQLITE_CountRows) && !pParse->pTriggerTab ){ regRowCount = ++pParse->nMem; sqlite3VdbeAddOp2(v, OP_Integer, 0, regRowCount); } + labelBreak = sqlite3VdbeMakeLabel(v); if( !isView ){ /* ** Open every index that needs updating. Note that if any ** index could potentially invoke a REPLACE conflict resolution ** action, then we need to open all indices because we might need ** to be deleting some records. */ - if( !okOnePass ) sqlite3OpenTable(pParse, iCur, iDb, pTab, OP_OpenWrite); if( onError==OE_Replace ){ - openAll = 1; + memset(aToOpen, 1, nIdx+1); }else{ - openAll = 0; for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ if( pIdx->onError==OE_Replace ){ - openAll = 1; + memset(aToOpen, 1, nIdx+1); break; } } } - for(i=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, i++){ - if( openAll || aRegIdx[i]>0 ){ - KeyInfo *pKey = sqlite3IndexKeyinfo(pParse, pIdx); - sqlite3VdbeAddOp4(v, OP_OpenWrite, iCur+i+1, pIdx->tnum, iDb, - (char*)pKey, P4_KEYINFO_HANDOFF); - assert( pParse->nTab>iCur+i+1 ); - } - } + if( okOnePass ){ + if( aiCurOnePass[0]>=0 ) aToOpen[aiCurOnePass[0]-iBaseCur] = 0; + if( aiCurOnePass[1]>=0 ) aToOpen[aiCurOnePass[1]-iBaseCur] = 0; + } + sqlite3OpenTableAndIndices(pParse, pTab, OP_OpenWrite, 0, iBaseCur, aToOpen, + 0, 0); } /* Top of the update loop */ if( okOnePass ){ - int a1 = sqlite3VdbeAddOp1(v, OP_NotNull, regOldRowid); - addr = sqlite3VdbeAddOp0(v, OP_Goto); - sqlite3VdbeJumpHere(v, a1); + if( aToOpen[iDataCur-iBaseCur] && !isView ){ + assert( pPk ); + sqlite3VdbeAddOp4Int(v, OP_NotFound, iDataCur, labelBreak, regKey, nKey); + VdbeCoverageNeverTaken(v); + } + labelContinue = labelBreak; + sqlite3VdbeAddOp2(v, OP_IsNull, pPk ? 
regKey : regOldRowid, labelBreak); + VdbeCoverageIf(v, pPk==0); + VdbeCoverageIf(v, pPk!=0); + }else if( pPk ){ + labelContinue = sqlite3VdbeMakeLabel(v); + sqlite3VdbeAddOp2(v, OP_Rewind, iEph, labelBreak); VdbeCoverage(v); + addrTop = sqlite3VdbeAddOp2(v, OP_RowKey, iEph, regKey); + sqlite3VdbeAddOp4Int(v, OP_NotFound, iDataCur, labelContinue, regKey, 0); + VdbeCoverage(v); }else{ - addr = sqlite3VdbeAddOp3(v, OP_RowSetRead, regRowSet, 0, regOldRowid); + labelContinue = sqlite3VdbeAddOp3(v, OP_RowSetRead, regRowSet, labelBreak, + regOldRowid); + VdbeCoverage(v); + sqlite3VdbeAddOp3(v, OP_NotExists, iDataCur, labelContinue, regOldRowid); + VdbeCoverage(v); } - /* Make cursor iCur point to the record that is being updated. If - ** this record does not exist for some reason (deleted by a trigger, - ** for example, then jump to the next iteration of the RowSet loop. */ - sqlite3VdbeAddOp3(v, OP_NotExists, iCur, addr, regOldRowid); - /* If the record number will change, set register regNewRowid to ** contain the new value. If the record number is not being modified, ** then regNewRowid is the same register as regOldRowid, which is ** already populated. */ - assert( chngRowid || pTrigger || hasFK || regOldRowid==regNewRowid ); + assert( chngKey || pTrigger || hasFK || regOldRowid==regNewRowid ); if( chngRowid ){ sqlite3ExprCode(pParse, pRowidExpr, regNewRowid); - sqlite3VdbeAddOp1(v, OP_MustBeInt, regNewRowid); + sqlite3VdbeAddOp1(v, OP_MustBeInt, regNewRowid); VdbeCoverage(v); } - /* If there are triggers on this table, populate an array of registers - ** with the required old.* column data. */ - if( hasFK || pTrigger ){ + /* Compute the old pre-UPDATE content of the row being changed, if that + ** information is needed */ + if( chngPk || hasFK || pTrigger ){ u32 oldmask = (hasFK ? sqlite3FkOldmask(pParse, pTab) : 0); oldmask |= sqlite3TriggerColmask(pParse, pTrigger, pChanges, 0, TRIGGER_BEFORE|TRIGGER_AFTER, pTab, onError ); for(i=0; i<pTab->nCol; i++){ - if( aXRef[i]<0 || oldmask==0xffffffff || (oldmask & (1<<i)) ){ - sqlite3VdbeAddOp3(v, OP_Column, iCur, i, regOld+i); - sqlite3ColumnDefault(v, pTab, i, regOld+i); + if( oldmask==0xffffffff + || (i<32 && (oldmask & MASKBIT32(i))!=0) + || (pTab->aCol[i].colFlags & COLFLAG_PRIMKEY)!=0 + ){ + testcase( oldmask!=0xffffffff && i==31 ); + sqlite3ExprCodeGetColumnOfTable(v, pTab, iDataCur, i, regOld+i); }else{ sqlite3VdbeAddOp2(v, OP_Null, 0, regOld+i); } } - if( chngRowid==0 ){ + if( chngRowid==0 && pPk==0 ){ sqlite3VdbeAddOp2(v, OP_Copy, regOldRowid, regNewRowid); } } /* Populate the array of registers beginning at regNew with the new - ** row data. This array is used to check constaints, create the new + ** row data. This array is used to check constants, create the new ** table and index records, and as the values for any new.* references ** made by triggers. ** ** If there are one or more BEFORE triggers, then do not populate the ** registers associated with columns that are (a) not modified by @@ -86582,88 +117718,106 @@ sqlite3VdbeAddOp2(v, OP_Null, 0, regNew+i); }else{ j = aXRef[i]; if( j>=0 ){ sqlite3ExprCode(pParse, pChanges->a[j].pExpr, regNew+i); - }else if( 0==(tmask&TRIGGER_BEFORE) || i>31 || (newmask&(1<<i)) ){ + }else if( 0==(tmask&TRIGGER_BEFORE) || i>31 || (newmask & MASKBIT32(i)) ){ /* This branch loads the value of a column that will not be changed ** into a register. This is done if there are no BEFORE triggers, or ** if there are one or more BEFORE triggers that use this value via ** a new.* reference in a trigger program. 
*/ testcase( i==31 ); testcase( i==32 ); - sqlite3VdbeAddOp3(v, OP_Column, iCur, i, regNew+i); - sqlite3ColumnDefault(v, pTab, i, regNew+i); + sqlite3ExprCodeGetColumnToReg(pParse, pTab, i, iDataCur, regNew+i); + }else{ + sqlite3VdbeAddOp2(v, OP_Null, 0, regNew+i); } } } /* Fire any BEFORE UPDATE triggers. This happens before constraints are ** verified. One could argue that this is wrong. */ if( tmask&TRIGGER_BEFORE ){ - sqlite3VdbeAddOp2(v, OP_Affinity, regNew, pTab->nCol); - sqlite3TableAffinityStr(v, pTab); + sqlite3TableAffinity(v, pTab, regNew); sqlite3CodeRowTrigger(pParse, pTrigger, TK_UPDATE, pChanges, - TRIGGER_BEFORE, pTab, regOldRowid, onError, addr); + TRIGGER_BEFORE, pTab, regOldRowid, onError, labelContinue); /* The row-trigger may have deleted the row being updated. In this ** case, jump to the next row. No updates or AFTER triggers are - ** required. This behaviour - what happens when the row being updated + ** required. This behavior - what happens when the row being updated ** is deleted or renamed by a BEFORE trigger - is left undefined in the ** documentation. */ - sqlite3VdbeAddOp3(v, OP_NotExists, iCur, addr, regOldRowid); + if( pPk ){ + sqlite3VdbeAddOp4Int(v, OP_NotFound, iDataCur, labelContinue,regKey,nKey); + VdbeCoverage(v); + }else{ + sqlite3VdbeAddOp3(v, OP_NotExists, iDataCur, labelContinue, regOldRowid); + VdbeCoverage(v); + } /* If it did not delete it, the row-trigger may still have modified ** some of the columns of the row being updated. Load the values for ** all columns not modified by the update statement into their ** registers in case this has happened. */ for(i=0; i<pTab->nCol; i++){ if( aXRef[i]<0 && i!=pTab->iPKey ){ - sqlite3VdbeAddOp3(v, OP_Column, iCur, i, regNew+i); - sqlite3ColumnDefault(v, pTab, i, regNew+i); + sqlite3ExprCodeGetColumnOfTable(v, pTab, iDataCur, i, regNew+i); } } } if( !isView ){ - int j1; /* Address of jump instruction */ + int addr1 = 0; /* Address of jump instruction */ + int bReplace = 0; /* True if REPLACE conflict resolution might happen */ /* Do constraint checks. */ - sqlite3GenerateConstraintChecks(pParse, pTab, iCur, regNewRowid, - aRegIdx, (chngRowid?regOldRowid:0), 1, onError, addr, 0); + assert( regOldRowid>0 ); + sqlite3GenerateConstraintChecks(pParse, pTab, aRegIdx, iDataCur, iIdxCur, + regNewRowid, regOldRowid, chngKey, onError, labelContinue, &bReplace, + aXRef); /* Do FK constraint checks. */ if( hasFK ){ - sqlite3FkCheck(pParse, pTab, regOldRowid, 0); + sqlite3FkCheck(pParse, pTab, regOldRowid, 0, aXRef, chngKey); } /* Delete the index entries associated with the current record. */ - j1 = sqlite3VdbeAddOp3(v, OP_NotExists, iCur, 0, regOldRowid); - sqlite3GenerateRowIndexDelete(pParse, pTab, iCur, aRegIdx); + if( bReplace || chngKey ){ + if( pPk ){ + addr1 = sqlite3VdbeAddOp4Int(v, OP_NotFound, iDataCur, 0, regKey, nKey); + }else{ + addr1 = sqlite3VdbeAddOp3(v, OP_NotExists, iDataCur, 0, regOldRowid); + } + VdbeCoverageNeverTaken(v); + } + sqlite3GenerateRowIndexDelete(pParse, pTab, iDataCur, iIdxCur, aRegIdx, -1); /* If changing the record number, delete the old record. */ - if( hasFK || chngRowid ){ - sqlite3VdbeAddOp2(v, OP_Delete, iCur, 0); + if( hasFK || chngKey || pPk!=0 ){ + sqlite3VdbeAddOp2(v, OP_Delete, iDataCur, 0); } - sqlite3VdbeJumpHere(v, j1); + if( bReplace || chngKey ){ + sqlite3VdbeJumpHere(v, addr1); + } if( hasFK ){ - sqlite3FkCheck(pParse, pTab, 0, regNewRowid); + sqlite3FkCheck(pParse, pTab, 0, regNewRowid, aXRef, chngKey); } /* Insert the new index entries and the new record. 
*/ - sqlite3CompleteInsertion(pParse, pTab, iCur, regNewRowid, aRegIdx, 1, 0, 0); + sqlite3CompleteInsertion(pParse, pTab, iDataCur, iIdxCur, + regNewRowid, aRegIdx, 1, 0, 0); /* Do any ON CASCADE, SET NULL or SET DEFAULT operations required to ** handle rows (possibly in other tables) that refer via a foreign key ** to the row just updated. */ if( hasFK ){ - sqlite3FkActions(pParse, pTab, pChanges, regOldRowid); + sqlite3FkActions(pParse, pTab, pChanges, regOldRowid, aXRef, chngKey); } } /* Increment the row counter */ @@ -86670,25 +117824,33 @@ if( (db->flags & SQLITE_CountRows) && !pParse->pTriggerTab){ sqlite3VdbeAddOp2(v, OP_AddImm, regRowCount, 1); } sqlite3CodeRowTrigger(pParse, pTrigger, TK_UPDATE, pChanges, - TRIGGER_AFTER, pTab, regOldRowid, onError, addr); + TRIGGER_AFTER, pTab, regOldRowid, onError, labelContinue); /* Repeat the above with the next record to be updated, until ** all record selected by the WHERE clause have been updated. */ - sqlite3VdbeAddOp2(v, OP_Goto, 0, addr); - sqlite3VdbeJumpHere(v, addr); + if( okOnePass ){ + /* Nothing to do at end-of-loop for a single-pass */ + }else if( pPk ){ + sqlite3VdbeResolveLabel(v, labelContinue); + sqlite3VdbeAddOp2(v, OP_Next, iEph, addrTop); VdbeCoverage(v); + }else{ + sqlite3VdbeGoto(v, labelContinue); + } + sqlite3VdbeResolveLabel(v, labelBreak); /* Close all tables */ for(i=0, pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext, i++){ - if( openAll || aRegIdx[i]>0 ){ - sqlite3VdbeAddOp2(v, OP_Close, iCur+i+1, 0); + assert( aRegIdx ); + if( aToOpen[i+1] ){ + sqlite3VdbeAddOp2(v, OP_Close, iIdxCur+i, 0); } } - sqlite3VdbeAddOp2(v, OP_Close, iCur, 0); + if( iDataCur<iIdxCur ) sqlite3VdbeAddOp2(v, OP_Close, iDataCur, 0); /* Update the sqlite_sequence table by storing the content of the ** maximum rowid counter values recorded while inserting into ** autoincrement tables. */ @@ -86707,19 +117869,18 @@ sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "rows updated", SQLITE_STATIC); } update_cleanup: sqlite3AuthContextPop(&sContext); - sqlite3DbFree(db, aRegIdx); - sqlite3DbFree(db, aXRef); + sqlite3DbFree(db, aXRef); /* Also frees aRegIdx[] and aToOpen[] */ sqlite3SrcListDelete(db, pTabList); sqlite3ExprListDelete(db, pChanges); sqlite3ExprDelete(db, pWhere); return; } /* Make sure "isView" and other macros defined above are undefined. Otherwise -** thely may interfere with compilation of other functions in this file +** they may interfere with compilation of other functions in this file ** (or in another file, if this file becomes part of the amalgamation). */ #ifdef isView #undef isView #endif #ifdef pTrigger @@ -86728,96 +117889,129 @@ #ifndef SQLITE_OMIT_VIRTUALTABLE /* ** Generate code for an UPDATE of a virtual table. ** -** The strategy is that we create an ephemerial table that contains +** There are two possible strategies - the default and the special +** "onepass" strategy. Onepass is only used if the virtual table +** implementation indicates that pWhere may match at most one row. +** +** The default strategy is to create an ephemeral table that contains ** for each row to be changed: ** ** (A) The original rowid of that row. -** (B) The revised rowid for the row. (note1) +** (B) The revised rowid for the row. ** (C) The content of every column in the row. ** -** Then we loop over this ephemeral table and for each row in -** the ephermeral table call VUpdate. -** -** When finished, drop the ephemeral table. 
-** -** (note1) Actually, if we know in advance that (A) is always the same -** as (B) we only store (A), then duplicate (A) when pulling -** it out of the ephemeral table before calling VUpdate. +** Then loop through the contents of this ephemeral table executing a +** VUpdate for each row. When finished, drop the ephemeral table. +** +** The "onepass" strategy does not use an ephemeral table. Instead, it +** stores the same values (A, B and C above) in a register array and +** makes a single invocation of VUpdate. */ static void updateVirtualTable( Parse *pParse, /* The parsing context */ SrcList *pSrc, /* The virtual table to be modified */ Table *pTab, /* The virtual table */ ExprList *pChanges, /* The columns to change in the UPDATE statement */ Expr *pRowid, /* Expression used to recompute the rowid */ int *aXRef, /* Mapping from columns of pTab to entries in pChanges */ - Expr *pWhere /* WHERE clause of the UPDATE statement */ + Expr *pWhere, /* WHERE clause of the UPDATE statement */ + int onError /* ON CONFLICT strategy */ ){ Vdbe *v = pParse->pVdbe; /* Virtual machine under construction */ - ExprList *pEList = 0; /* The result set of the SELECT statement */ - Select *pSelect = 0; /* The SELECT statement */ - Expr *pExpr; /* Temporary expression */ int ephemTab; /* Table holding the result of the SELECT */ int i; /* Loop counter */ - int addr; /* Address of top of loop */ - int iReg; /* First register in set passed to OP_VUpdate */ sqlite3 *db = pParse->db; /* Database connection */ const char *pVTab = (const char*)sqlite3GetVTable(db, pTab); - SelectDest dest; - - /* Construct the SELECT statement that will find the new values for - ** all updated rows. - */ - pEList = sqlite3ExprListAppend(pParse, 0, sqlite3Expr(db, TK_ID, "_rowid_")); - if( pRowid ){ - pEList = sqlite3ExprListAppend(pParse, pEList, - sqlite3ExprDup(db, pRowid, 0)); - } - assert( pTab->iPKey<0 ); - for(i=0; i<pTab->nCol; i++){ - if( aXRef[i]>=0 ){ - pExpr = sqlite3ExprDup(db, pChanges->a[aXRef[i]].pExpr, 0); - }else{ - pExpr = sqlite3Expr(db, TK_ID, pTab->aCol[i].zName); - } - pEList = sqlite3ExprListAppend(pParse, pEList, pExpr); - } - pSelect = sqlite3SelectNew(pParse, pEList, pSrc, pWhere, 0, 0, 0, 0, 0, 0); - - /* Create the ephemeral table into which the update results will - ** be stored. - */ + WhereInfo *pWInfo; + int nArg = 2 + pTab->nCol; /* Number of arguments to VUpdate */ + int regArg; /* First register in VUpdate arg array */ + int regRec; /* Register in which to assemble record */ + int regRowid; /* Register for ephem table rowid */ + int iCsr = pSrc->a[0].iCursor; /* Cursor used for virtual table scan */ + int aDummy[2]; /* Unused arg for sqlite3WhereOkOnePass() */ + int bOnePass; /* True to use onepass strategy */ + int addr; /* Address of OP_OpenEphemeral */ + + /* Allocate nArg registers to martial the arguments to VUpdate. Then + ** create and open the ephemeral table in which the records created from + ** these arguments will be temporarily stored. */ assert( v ); ephemTab = pParse->nTab++; - sqlite3VdbeAddOp2(v, OP_OpenEphemeral, ephemTab, pTab->nCol+1+(pRowid!=0)); - - /* fill the ephemeral table - */ - sqlite3SelectDestInit(&dest, SRT_Table, ephemTab); - sqlite3Select(pParse, pSelect, &dest); - - /* Generate code to scan the ephemeral table and call VUpdate. 
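The rewritten comment above describes the argument layout handed to OP_VUpdate: the original rowid, the revised rowid, and then one value per table column. On the virtual-table side those registers arrive as the argv[] array of the module's xUpdate method. A hedged skeleton follows; the name myVtabUpdate is hypothetical and the branches only sketch how the documented layout is usually decoded:

    static int myVtabUpdate(
      sqlite3_vtab *pVTab,        /* Virtual table being written            */
      int argc,                   /* 1 for DELETE, 2+nCol for INSERT/UPDATE */
      sqlite3_value **argv,       /* Values staged by updateVirtualTable()  */
      sqlite3_int64 *pRowid       /* OUT: rowid of a newly inserted row     */
    ){
      (void)pVTab; (void)pRowid;
      if( argc==1 ){
        /* DELETE: argv[0] is the rowid of the row to remove */
      }else if( sqlite3_value_type(argv[0])==SQLITE_NULL ){
        /* INSERT: argv[1] is the requested rowid (possibly NULL),
        ** argv[2..argc-1] hold the new column values */
      }else{
        /* UPDATE: argv[0] is the original rowid, argv[1] the revised rowid,
        ** argv[2..argc-1] hold the new column values */
      }
      return SQLITE_OK;
    }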
*/ - iReg = ++pParse->nMem; - pParse->nMem += pTab->nCol+1; - addr = sqlite3VdbeAddOp2(v, OP_Rewind, ephemTab, 0); - sqlite3VdbeAddOp3(v, OP_Column, ephemTab, 0, iReg); - sqlite3VdbeAddOp3(v, OP_Column, ephemTab, (pRowid?1:0), iReg+1); + addr= sqlite3VdbeAddOp2(v, OP_OpenEphemeral, ephemTab, nArg); + regArg = pParse->nMem + 1; + pParse->nMem += nArg; + regRec = ++pParse->nMem; + regRowid = ++pParse->nMem; + + /* Start scanning the virtual table */ + pWInfo = sqlite3WhereBegin(pParse, pSrc, pWhere, 0,0,WHERE_ONEPASS_DESIRED,0); + if( pWInfo==0 ) return; + + /* Populate the argument registers. */ + sqlite3VdbeAddOp2(v, OP_Rowid, iCsr, regArg); + if( pRowid ){ + sqlite3ExprCode(pParse, pRowid, regArg+1); + }else{ + sqlite3VdbeAddOp2(v, OP_Rowid, iCsr, regArg+1); + } for(i=0; i<pTab->nCol; i++){ - sqlite3VdbeAddOp3(v, OP_Column, ephemTab, i+1+(pRowid!=0), iReg+2+i); + if( aXRef[i]>=0 ){ + sqlite3ExprCode(pParse, pChanges->a[aXRef[i]].pExpr, regArg+2+i); + }else{ + sqlite3VdbeAddOp3(v, OP_VColumn, iCsr, i, regArg+2+i); + } + } + + bOnePass = sqlite3WhereOkOnePass(pWInfo, aDummy); + + if( bOnePass ){ + /* If using the onepass strategy, no-op out the OP_OpenEphemeral coded + ** above. Also, if this is a top-level parse (not a trigger), clear the + ** multi-write flag so that the VM does not open a statement journal */ + sqlite3VdbeChangeToNoop(v, addr); + if( sqlite3IsToplevel(pParse) ){ + pParse->isMultiWrite = 0; + } + }else{ + /* Create a record from the argument register contents and insert it into + ** the ephemeral table. */ + sqlite3VdbeAddOp3(v, OP_MakeRecord, regArg, nArg, regRec); + sqlite3VdbeAddOp2(v, OP_NewRowid, ephemTab, regRowid); + sqlite3VdbeAddOp3(v, OP_Insert, ephemTab, regRec, regRowid); + } + + + if( bOnePass==0 ){ + /* End the virtual table scan */ + sqlite3WhereEnd(pWInfo); + + /* Begin scannning through the ephemeral table. */ + addr = sqlite3VdbeAddOp1(v, OP_Rewind, ephemTab); VdbeCoverage(v); + + /* Extract arguments from the current row of the ephemeral table and + ** invoke the VUpdate method. */ + for(i=0; i<nArg; i++){ + sqlite3VdbeAddOp3(v, OP_Column, ephemTab, i, regArg+i); + } } sqlite3VtabMakeWritable(pParse, pTab); - sqlite3VdbeAddOp4(v, OP_VUpdate, 0, pTab->nCol+2, iReg, pVTab, P4_VTAB); + sqlite3VdbeAddOp4(v, OP_VUpdate, 0, nArg, regArg, pVTab, P4_VTAB); + sqlite3VdbeChangeP5(v, onError==OE_Default ? OE_Abort : onError); sqlite3MayAbort(pParse); - sqlite3VdbeAddOp2(v, OP_Next, ephemTab, addr+1); - sqlite3VdbeJumpHere(v, addr); - sqlite3VdbeAddOp2(v, OP_Close, ephemTab, 0); - /* Cleanup */ - sqlite3SelectDelete(db, pSelect); + /* End of the ephemeral table scan. Or, if using the onepass strategy, + ** jump to here if the scan visited zero rows. */ + if( bOnePass==0 ){ + sqlite3VdbeAddOp2(v, OP_Next, ephemTab, addr+1); VdbeCoverage(v); + sqlite3VdbeJumpHere(v, addr); + sqlite3VdbeAddOp2(v, OP_Close, ephemTab, 0); + }else{ + sqlite3WhereEnd(pWInfo); + } } #endif /* SQLITE_OMIT_VIRTUALTABLE */ /************** End of update.c **********************************************/ /************** Begin file vacuum.c ******************************************/ @@ -86835,10 +118029,12 @@ ** This file contains code used to implement the VACUUM command. ** ** Most of the code in this file may be omitted by defining the ** SQLITE_OMIT_VACUUM macro. */ +/* #include "sqliteInt.h" */ +/* #include "vdbeInt.h" */ #if !defined(SQLITE_OMIT_VACUUM) && !defined(SQLITE_OMIT_ATTACH) /* ** Finalize a prepared statement. 
If there was an error, store the ** text of the error message in *pzErrMsg. Return the result code. @@ -86864,11 +118060,11 @@ if( SQLITE_OK!=sqlite3_prepare(db, zSql, -1, &pStmt, 0) ){ sqlite3SetString(pzErrMsg, db, sqlite3_errmsg(db)); return sqlite3_errcode(db); } VVA_ONLY( rc = ) sqlite3_step(pStmt); - assert( rc!=SQLITE_ROW ); + assert( rc!=SQLITE_ROW || (db->flags&SQLITE_CountRows) ); return vacuumFinalize(db, pStmt, pzErrMsg); } /* ** Execute zSql on database db. The statement returns exactly @@ -86891,23 +118087,44 @@ return vacuumFinalize(db, pStmt, pzErrMsg); } /* -** The non-standard VACUUM command is used to clean up the database, +** The VACUUM command is used to clean up the database, ** collapse free space, etc. It is modelled after the VACUUM command -** in PostgreSQL. +** in PostgreSQL. The VACUUM command works as follows: ** -** In version 1.0.x of SQLite, the VACUUM command would call -** gdbm_reorganize() on all the database tables. But beginning -** with 2.0.0, SQLite no longer uses GDBM so this command has -** become a no-op. +** (1) Create a new transient database file +** (2) Copy all content from the database being vacuumed into +** the new transient database file +** (3) Copy content from the transient database back into the +** original database. +** +** The transient database requires temporary disk space approximately +** equal to the size of the original database. The copy operation of +** step (3) requires additional temporary disk space approximately equal +** to the size of the original database for the rollback journal. +** Hence, temporary disk space that is approximately 2x the size of the +** original database is required. Every page of the database is written +** approximately 3 times: Once for step (2) and twice for step (3). +** Two writes per page are required in step (3) because the original +** database content must be written into the rollback journal prior to +** overwriting the database with the vacuumed content. +** +** Only 1x temporary space and only 1x writes would be required if +** the copy of step (3) were replaced by deleting the original database +** and renaming the transient database as the original. But that will +** not work if other processes are attached to the original database. +** And a power loss in between deleting the original and renaming the +** transient would cause the database file to appear to be deleted +** following reboot. */ SQLITE_PRIVATE void sqlite3Vacuum(Parse *pParse){ Vdbe *v = sqlite3GetVdbe(pParse); if( v ){ sqlite3VdbeAddOp2(v, OP_Vacuum, 0, 0); + sqlite3VdbeUsesBtree(v, 0); } return; } /* @@ -86922,25 +118139,30 @@ int saved_nChange; /* Saved value of db->nChange */ int saved_nTotalChange; /* Saved value of db->nTotalChange */ void (*saved_xTrace)(void*,const char*); /* Saved db->xTrace */ Db *pDb = 0; /* Database to detach at end of vacuum */ int isMemDb; /* True if vacuuming a :memory: database */ - int nRes; + int nRes; /* Bytes of reserved space at the end of each page */ + int nDb; /* Number of attached databases */ if( !db->autoCommit ){ sqlite3SetString(pzErrMsg, db, "cannot VACUUM from within a transaction"); return SQLITE_ERROR; + } + if( db->nVdbeActive>1 ){ + sqlite3SetString(pzErrMsg, db,"cannot VACUUM - SQL statements in progress"); + return SQLITE_ERROR; } /* Save the current value of the database flags so that it can be ** restored before returning. Then set the writable-schema flag, and ** disable CHECK and foreign key constraints. 
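The expanded header comment above spells out the copy-based VACUUM algorithm: roughly 2x the database size in temporary space and about three writes per page. From application code the command is issued as ordinary SQL; a minimal, illustrative call is shown below (as the surrounding code enforces, it must not run inside an open transaction or while other statements are active):

    char *zErr = 0;
    if( sqlite3_exec(db, "VACUUM;", 0, 0, &zErr)!=SQLITE_OK ){
      /* e.g. "cannot VACUUM from within a transaction" */
      fprintf(stderr, "vacuum failed: %s\n", zErr ? zErr : "?");
      sqlite3_free(zErr);
    }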
*/ saved_flags = db->flags; saved_nChange = db->nChange; saved_nTotalChange = db->nTotalChange; saved_xTrace = db->xTrace; - db->flags |= SQLITE_WriteSchema | SQLITE_IgnoreChecks; + db->flags |= SQLITE_WriteSchema | SQLITE_IgnoreChecks | SQLITE_PreferBuiltin; db->flags &= ~(SQLITE_ForeignKeys | SQLITE_ReverseOrder); db->xTrace = 0; pMain = db->aDb[0].pBt; isMemDb = sqlite3PagerIsMemdb(sqlite3BtreePager(pMain)); @@ -86957,28 +118179,31 @@ ** actually occurs when doing a vacuum since the vacuum_db is initially ** empty. Only the journal header is written. Apparently it takes more ** time to parse and run the PRAGMA to turn journalling off than it does ** to write the journal header file. */ + nDb = db->nDb; if( sqlite3TempInMemory(db) ){ zSql = "ATTACH ':memory:' AS vacuum_db;"; }else{ zSql = "ATTACH '' AS vacuum_db;"; } rc = execSql(db, pzErrMsg, zSql); + if( db->nDb>nDb ){ + pDb = &db->aDb[db->nDb-1]; + assert( strcmp(pDb->zName,"vacuum_db")==0 ); + } if( rc!=SQLITE_OK ) goto end_of_vacuum; - pDb = &db->aDb[db->nDb-1]; - assert( strcmp(db->aDb[db->nDb-1].zName,"vacuum_db")==0 ); pTemp = db->aDb[db->nDb-1].pBt; /* The call to execSql() to attach the temp database has left the file ** locked (as there was more than one active statement when the transaction ** to read the schema was concluded. Unlock it here so that this doesn't ** cause problems for the call to BtreeSetPageSize() below. */ sqlite3BtreeCommit(pTemp); - nRes = sqlite3BtreeGetReserve(pMain); + nRes = sqlite3BtreeGetOptimalReserve(pMain); /* A VACUUM cannot change the pagesize of an encrypted database. */ #ifdef SQLITE_HAS_CODEC if( db->nextPagesize ){ extern void sqlite3CodecGetKey(sqlite3*, int, void**, int*); @@ -86986,39 +118211,49 @@ char *zKey; sqlite3CodecGetKey(db, 0, (void**)&zKey, &nKey); if( nKey ) db->nextPagesize = 0; } #endif + + rc = execSql(db, pzErrMsg, "PRAGMA vacuum_db.synchronous=OFF"); + if( rc!=SQLITE_OK ) goto end_of_vacuum; + + /* Begin a transaction and take an exclusive lock on the main database + ** file. This is done before the sqlite3BtreeGetPageSize(pMain) call below, + ** to ensure that we do not try to change the page-size on a WAL database. + */ + rc = execSql(db, pzErrMsg, "BEGIN;"); + if( rc!=SQLITE_OK ) goto end_of_vacuum; + rc = sqlite3BtreeBeginTrans(pMain, 2); + if( rc!=SQLITE_OK ) goto end_of_vacuum; + + /* Do not attempt to change the page size for a WAL database */ + if( sqlite3PagerGetJournalMode(sqlite3BtreePager(pMain)) + ==PAGER_JOURNALMODE_WAL ){ + db->nextPagesize = 0; + } if( sqlite3BtreeSetPageSize(pTemp, sqlite3BtreeGetPageSize(pMain), nRes, 0) || (!isMemDb && sqlite3BtreeSetPageSize(pTemp, db->nextPagesize, nRes, 0)) || NEVER(db->mallocFailed) ){ rc = SQLITE_NOMEM; - goto end_of_vacuum; - } - rc = execSql(db, pzErrMsg, "PRAGMA vacuum_db.synchronous=OFF"); - if( rc!=SQLITE_OK ){ goto end_of_vacuum; } #ifndef SQLITE_OMIT_AUTOVACUUM sqlite3BtreeSetAutoVacuum(pTemp, db->nextAutovac>=0 ? db->nextAutovac : sqlite3BtreeGetAutoVacuum(pMain)); #endif - /* Begin a transaction */ - rc = execSql(db, pzErrMsg, "BEGIN EXCLUSIVE;"); - if( rc!=SQLITE_OK ) goto end_of_vacuum; - /* Query the schema of the main database. Create a mirror schema ** in the temporary database. */ rc = execExecSql(db, pzErrMsg, "SELECT 'CREATE TABLE vacuum_db.' || substr(sql,14) " " FROM sqlite_master WHERE type='table' AND name!='sqlite_sequence'" - " AND rootpage>0" + " AND coalesce(rootpage,1)>0" ); if( rc!=SQLITE_OK ) goto end_of_vacuum; rc = execExecSql(db, pzErrMsg, "SELECT 'CREATE INDEX vacuum_db.' 
|| substr(sql,14)" " FROM sqlite_master WHERE sql LIKE 'CREATE INDEX %' "); @@ -87030,17 +118265,21 @@ /* Loop through the tables in the main database. For each, do ** an "INSERT INTO vacuum_db.xxx SELECT * FROM main.xxx;" to copy ** the contents to the temporary database. */ + assert( (db->flags & SQLITE_Vacuum)==0 ); + db->flags |= SQLITE_Vacuum; rc = execExecSql(db, pzErrMsg, "SELECT 'INSERT INTO vacuum_db.' || quote(name) " "|| ' SELECT * FROM main.' || quote(name) || ';'" "FROM main.sqlite_master " "WHERE type = 'table' AND name!='sqlite_sequence' " - " AND rootpage>0" + " AND coalesce(rootpage,1)>0" ); + assert( (db->flags & SQLITE_Vacuum)!=0 ); + db->flags &= ~SQLITE_Vacuum; if( rc!=SQLITE_OK ) goto end_of_vacuum; /* Copy over the sequence table */ rc = execExecSql(db, pzErrMsg, @@ -87068,17 +118307,15 @@ " WHERE type='view' OR type='trigger'" " OR (type='table' AND rootpage=0)" ); if( rc ) goto end_of_vacuum; - /* At this point, unless the main db was completely empty, there is now a - ** transaction open on the vacuum database, but not on the main database. - ** Open a btree level transaction on the main database. This allows a - ** call to sqlite3BtreeCopyFile(). The main database btree level - ** transaction is then committed, so the SQL level never knows it was - ** opened for writing. This way, the SQL transaction used to create the - ** temporary database never needs to be committed. + /* At this point, there is a write transaction open on both the + ** vacuum database and the main database. Assuming no error occurs, + ** both transactions are closed by this block - the main database + ** transaction by sqlite3BtreeCopyFile() and the other by an explicit + ** call to sqlite3BtreeCommit(). */ { u32 meta; int i; @@ -87091,10 +118328,11 @@ static const unsigned char aCopy[] = { BTREE_SCHEMA_VERSION, 1, /* Add one to the old schema cookie */ BTREE_DEFAULT_CACHE_SIZE, 0, /* Preserve the default page cache size */ BTREE_TEXT_ENCODING, 0, /* Preserve the text encoding */ BTREE_USER_VERSION, 0, /* Preserve the user version */ + BTREE_APPLICATION_ID, 0, /* Preserve the application id */ }; assert( 1==sqlite3BtreeIsInTrans(pTemp) ); assert( 1==sqlite3BtreeIsInTrans(pMain) ); @@ -87123,10 +118361,11 @@ /* Restore the original value of db->flags */ db->flags = saved_flags; db->nChange = saved_nChange; db->nTotalChange = saved_nTotalChange; db->xTrace = saved_xTrace; + sqlite3BtreeSetPageSize(pMain, -1, -1, 1); /* Currently there is an SQL level transaction open on the vacuum ** database. No locks are held on any other files (since the main file ** was committed at the btree level). So it safe to end the transaction ** by manually setting the autoCommit flag to true and detaching the @@ -87139,14 +118378,17 @@ sqlite3BtreeClose(pDb->pBt); pDb->pBt = 0; pDb->pSchema = 0; } - sqlite3ResetInternalSchema(db, 0); + /* This both clears the schemas and reduces the size of the db->aDb[] + ** array. */ + sqlite3ResetAllSchemasOfConnection(db); return rc; } + #endif /* SQLITE_OMIT_VACUUM && SQLITE_OMIT_ATTACH */ /************** End of vacuum.c **********************************************/ /************** Begin file vtab.c ********************************************/ /* @@ -87161,10 +118403,25 @@ ** ************************************************************************* ** This file contains code used to help implement virtual tables. 
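**
** As a brief orientation (an illustrative sketch only; "my_vtab" and
** myVtabModule are hypothetical application-side names, not part of this
** file): an application defines an sqlite3_module object whose members
** point at its xCreate/xConnect/xBestIndex/xFilter implementations, then
** registers that module on a database connection:
**
**     int rc = sqlite3_create_module(db, "my_vtab", &myVtabModule, 0);
**
** After registration, "CREATE VIRTUAL TABLE t USING my_vtab(...);" can
** be executed on that connection.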
*/ #ifndef SQLITE_OMIT_VIRTUALTABLE +/* #include "sqliteInt.h" */ + +/* +** Before a virtual table xCreate() or xConnect() method is invoked, the +** sqlite3.pVtabCtx member variable is set to point to an instance of +** this struct allocated on the stack. It is used by the implementation of +** the sqlite3_declare_vtab() and sqlite3_vtab_config() APIs, both of which +** are invoked only from within xCreate and xConnect methods. +*/ +struct VtabCtx { + VTable *pVTable; /* The virtual table being constructed */ + Table *pTab; /* The Table object to which the virtual table belongs */ + VtabCtx *pPrior; /* Parent context (if any) */ + int bDeclared; /* True after sqlite3_declare_vtab() is called */ +}; /* ** The actual function that does the work of creating a new module. ** This function implements the sqlite3_create_module() and ** sqlite3_create_module_v2() interfaces. @@ -87174,64 +118431,73 @@ const char *zName, /* Name assigned to this module */ const sqlite3_module *pModule, /* The definition of the module */ void *pAux, /* Context pointer for xCreate/xConnect */ void (*xDestroy)(void *) /* Module destructor function */ ){ - int rc, nName; - Module *pMod; + int rc = SQLITE_OK; + int nName; sqlite3_mutex_enter(db->mutex); nName = sqlite3Strlen30(zName); - pMod = (Module *)sqlite3DbMallocRaw(db, sizeof(Module) + nName + 1); - if( pMod ){ - Module *pDel; - char *zCopy = (char *)(&pMod[1]); - memcpy(zCopy, zName, nName+1); - pMod->zName = zCopy; - pMod->pModule = pModule; - pMod->pAux = pAux; - pMod->xDestroy = xDestroy; - pDel = (Module *)sqlite3HashInsert(&db->aModule, zCopy, nName, (void*)pMod); - if( pDel && pDel->xDestroy ){ - pDel->xDestroy(pDel->pAux); - } - sqlite3DbFree(db, pDel); - if( pDel==pMod ){ - db->mallocFailed = 1; - } - sqlite3ResetInternalSchema(db, 0); - }else if( xDestroy ){ - xDestroy(pAux); - } - rc = sqlite3ApiExit(db, SQLITE_OK); + if( sqlite3HashFind(&db->aModule, zName) ){ + rc = SQLITE_MISUSE_BKPT; + }else{ + Module *pMod; + pMod = (Module *)sqlite3DbMallocRawNN(db, sizeof(Module) + nName + 1); + if( pMod ){ + Module *pDel; + char *zCopy = (char *)(&pMod[1]); + memcpy(zCopy, zName, nName+1); + pMod->zName = zCopy; + pMod->pModule = pModule; + pMod->pAux = pAux; + pMod->xDestroy = xDestroy; + pMod->pEpoTab = 0; + pDel = (Module *)sqlite3HashInsert(&db->aModule,zCopy,(void*)pMod); + assert( pDel==0 || pDel==pMod ); + if( pDel ){ + sqlite3OomFault(db); + sqlite3DbFree(db, pDel); + } + } + } + rc = sqlite3ApiExit(db, rc); + if( rc!=SQLITE_OK && xDestroy ) xDestroy(pAux); + sqlite3_mutex_leave(db->mutex); return rc; } /* ** External API function used to create a new virtual-table module. */ -SQLITE_API int sqlite3_create_module( +SQLITE_API int SQLITE_STDCALL sqlite3_create_module( sqlite3 *db, /* Database in which module is registered */ const char *zName, /* Name assigned to this module */ const sqlite3_module *pModule, /* The definition of the module */ void *pAux /* Context pointer for xCreate/xConnect */ ){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) || zName==0 ) return SQLITE_MISUSE_BKPT; +#endif return createModule(db, zName, pModule, pAux, 0); } /* ** External API function used to create a new virtual-table module. 
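**
** Unlike sqlite3_create_module(), this variant also accepts a destructor
** for the client data pointer. For example (illustrative sketch; csvModule,
** pCsvAux and csvAuxFree are hypothetical application-side names):
**
**     int rc = sqlite3_create_module_v2(db, "csv", &csvModule,
**                                       pCsvAux, csvAuxFree);
**
** SQLite invokes csvAuxFree(pCsvAux) once the module is no longer needed,
** for example when the database connection is closed.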
*/ -SQLITE_API int sqlite3_create_module_v2( +SQLITE_API int SQLITE_STDCALL sqlite3_create_module_v2( sqlite3 *db, /* Database in which module is registered */ const char *zName, /* Name assigned to this module */ const sqlite3_module *pModule, /* The definition of the module */ void *pAux, /* Context pointer for xCreate/xConnect */ void (*xDestroy)(void *) /* Module destructor function */ ){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) || zName==0 ) return SQLITE_MISUSE_BKPT; +#endif return createModule(db, zName, pModule, pAux, xDestroy); } /* ** Lock the virtual table so that it cannot be disconnected. @@ -87265,11 +118531,11 @@ SQLITE_PRIVATE void sqlite3VtabUnlock(VTable *pVTab){ sqlite3 *db = pVTab->db; assert( db ); assert( pVTab->nRef>0 ); - assert( sqlite3SafetyCheckOk(db) ); + assert( db->magic==SQLITE_MAGIC_OPEN || db->magic==SQLITE_MAGIC_ZOMBIE ); pVTab->nRef--; if( pVTab->nRef==0 ){ sqlite3_vtab *p = pVTab->pVtab; if( p ){ @@ -87293,14 +118559,13 @@ /* Assert that the mutex (if any) associated with the BtShared database ** that contains table p is held by the caller. See header comments ** above function sqlite3VtabUnlockList() for an explanation of why ** this makes it safe to access the sqlite3.pDisconnect list of any - ** database connection that may have an entry in the p->pVTable list. */ - assert( db==0 || - sqlite3BtreeHoldsMutex(db->aDb[sqlite3SchemaToIndex(db, p->pSchema)].pBt) - ); + ** database connection that may have an entry in the p->pVTable list. + */ + assert( db==0 || sqlite3SchemaMutexHeld(db, 0, p->pSchema) ); while( pVTable ){ sqlite3 *db2 = pVTable->db; VTable *pNext = pVTable->pNext; assert( db2 ); @@ -87316,10 +118581,35 @@ } assert( !db || pRet ); return pRet; } + +/* +** Table *p is a virtual table. This function removes the VTable object +** for table *p associated with database connection db from the linked +** list in p->pVTab. It also decrements the VTable ref count. This is +** used when closing database connection db to free all of its VTable +** objects without disturbing the rest of the Schema object (which may +** be being used by other shared-cache connections). +*/ +SQLITE_PRIVATE void sqlite3VtabDisconnect(sqlite3 *db, Table *p){ + VTable **ppVTab; + + assert( IsVirtual(p) ); + assert( sqlite3BtreeHoldsAllMutexes(db) ); + assert( sqlite3_mutex_held(db->mutex) ); + + for(ppVTab=&p->pVTable; *ppVTab; ppVTab=&(*ppVTab)->pNext){ + if( (*ppVTab)->db==db ){ + VTable *pVTab = *ppVTab; + *ppVTab = pVTab->pNext; + sqlite3VtabUnlock(pVTab); + break; + } + } +} /* ** Disconnect all the virtual table objects in the sqlite3.pDisconnect list. ** @@ -87369,18 +118659,18 @@ ** connection db is decremented immediately (which may lead to the ** structure being xDisconnected and free). Any other VTable structures ** in the list are moved to the sqlite3.pDisconnect list of the associated ** database connection. */ -SQLITE_PRIVATE void sqlite3VtabClear(Table *p){ - vtabDisconnectAll(0, p); +SQLITE_PRIVATE void sqlite3VtabClear(sqlite3 *db, Table *p){ + if( !db || db->pnBytesFreed==0 ) vtabDisconnectAll(0, p); if( p->azModuleArg ){ int i; for(i=0; i<p->nModuleArg; i++){ - sqlite3DbFree(p->dbMem, p->azModuleArg[i]); + if( i!=1 ) sqlite3DbFree(db, p->azModuleArg[i]); } - sqlite3DbFree(p->dbMem, p->azModuleArg); + sqlite3DbFree(db, p->azModuleArg); } } /* ** Add a new module argument to pTable->azModuleArg[]. @@ -87387,27 +118677,21 @@ ** The string is not copied - the pointer is stored. 
The ** string will be freed automatically when the table is ** deleted. */ static void addModuleArgument(sqlite3 *db, Table *pTable, char *zArg){ - int i = pTable->nModuleArg++; - int nBytes = sizeof(char *)*(1+pTable->nModuleArg); + int nBytes = sizeof(char *)*(2+pTable->nModuleArg); char **azModuleArg; azModuleArg = sqlite3DbRealloc(db, pTable->azModuleArg, nBytes); if( azModuleArg==0 ){ - int j; - for(j=0; j<i; j++){ - sqlite3DbFree(db, pTable->azModuleArg[j]); - } sqlite3DbFree(db, zArg); - sqlite3DbFree(db, pTable->azModuleArg); - pTable->nModuleArg = 0; }else{ + int i = pTable->nModuleArg++; azModuleArg[i] = zArg; azModuleArg[i+1] = 0; + pTable->azModuleArg = azModuleArg; } - pTable->azModuleArg = azModuleArg; } /* ** The parser calls this routine when it first sees a CREATE VIRTUAL TABLE ** statement. The module name has been parsed, but the optional list @@ -87415,17 +118699,18 @@ */ SQLITE_PRIVATE void sqlite3VtabBeginParse( Parse *pParse, /* Parsing context */ Token *pName1, /* Name of new table, or database name */ Token *pName2, /* Name of new table or NULL */ - Token *pModuleName /* Name of the module for the virtual table */ + Token *pModuleName, /* Name of the module for the virtual table */ + int ifNotExists /* No error if the table already exists */ ){ int iDb; /* The database the table is being created in */ Table *pTable; /* The new virtual table */ sqlite3 *db; /* Database connection */ - sqlite3StartTable(pParse, pName1, pName2, 0, 0, 1, 0); + sqlite3StartTable(pParse, pName1, pName2, 0, 0, 1, ifNotExists); pTable = pParse->pNewTable; if( pTable==0 ) return; assert( 0==pTable->pIndex ); db = pParse->db; @@ -87433,13 +118718,18 @@ assert( iDb>=0 ); pTable->tabFlags |= TF_Virtual; pTable->nModuleArg = 0; addModuleArgument(db, pTable, sqlite3NameFromToken(db, pModuleName)); - addModuleArgument(db, pTable, sqlite3DbStrDup(db, db->aDb[iDb].zName)); + addModuleArgument(db, pTable, 0); addModuleArgument(db, pTable, sqlite3DbStrDup(db, pTable->zName)); - pParse->sNameToken.n = (int)(&pModuleName->z[pModuleName->n] - pName1->z); + assert( (pParse->sNameToken.z==pName2->z && pName2->z!=0) + || (pParse->sNameToken.z==pName1->z && pName2->z==0) + ); + pParse->sNameToken.n = (int)( + &pModuleName->z[pModuleName->n] - pParse->sNameToken.z + ); #ifndef SQLITE_OMIT_AUTHORIZATION /* Creating a virtual table invokes the authorization callback twice. ** The first invocation, to obtain permission to INSERT a row into the ** sqlite_master table, has already been made by sqlite3StartTable(). @@ -87456,11 +118746,11 @@ ** This routine takes the module argument that has been accumulating ** in pParse->zArg[] and appends it to the list of arguments on the ** virtual table currently under construction in pParse->pTable. 
*/ static void addArgumentToVtab(Parse *pParse){ - if( pParse->sArg.z && ALWAYS(pParse->pNewTable) ){ + if( pParse->sArg.z && pParse->pNewTable ){ const char *z = (const char*)pParse->sArg.z; int n = pParse->sArg.n; sqlite3 *db = pParse->db; addModuleArgument(db, pParse->pNewTable, sqlite3DbStrNDup(db, z, n)); } @@ -87487,10 +118777,11 @@ */ if( !db->init.busy ){ char *zStmt; char *zWhere; int iDb; + int iReg; Vdbe *v; /* Compute the complete text of the CREATE VIRTUAL TABLE statement */ if( pEnd ){ pParse->sNameToken.n = (int)(pEnd->z - pParse->sNameToken.z) + pEnd->n; @@ -87519,14 +118810,16 @@ sqlite3DbFree(db, zStmt); v = sqlite3GetVdbe(pParse); sqlite3ChangeCookie(pParse, iDb); sqlite3VdbeAddOp2(v, OP_Expire, 0, 0); - zWhere = sqlite3MPrintf(db, "name='%q'", pTab->zName); - sqlite3VdbeAddOp4(v, OP_ParseSchema, iDb, 1, 0, zWhere, P4_DYNAMIC); - sqlite3VdbeAddOp4(v, OP_VCreate, iDb, 0, 0, - pTab->zName, sqlite3Strlen30(pTab->zName) + 1); + zWhere = sqlite3MPrintf(db, "name='%q' AND type='table'", pTab->zName); + sqlite3VdbeAddParseSchemaOp(v, iDb, zWhere); + + iReg = ++pParse->nMem; + sqlite3VdbeLoadString(v, iReg, pTab->zName); + sqlite3VdbeAddOp2(v, OP_VCreate, iDb, iReg); } /* If we are rereading the sqlite_master table create the in-memory ** record of the table. The xConnect() method is not called until ** the first time the virtual table is used in an SQL statement. This @@ -87534,18 +118827,17 @@ ** the required virtual table implementations are registered. */ else { Table *pOld; Schema *pSchema = pTab->pSchema; const char *zName = pTab->zName; - int nName = sqlite3Strlen30(zName); - pOld = sqlite3HashInsert(&pSchema->tblHash, zName, nName, pTab); + assert( sqlite3SchemaMutexHeld(db, 0, pSchema) ); + pOld = sqlite3HashInsert(&pSchema->tblHash, zName, pTab); if( pOld ){ - db->mallocFailed = 1; + sqlite3OomFault(db); assert( pTab==pOld ); /* Malloc must have failed inside HashInsert() */ return; } - pSchema->db = pParse->db; pParse->pNewTable = 0; } } /* @@ -87566,11 +118858,11 @@ Token *pArg = &pParse->sArg; if( pArg->z==0 ){ pArg->z = p->z; pArg->n = p->n; }else{ - assert(pArg->z < p->z); + assert(pArg->z <= p->z); pArg->n = (int)(&p->z[p->n] - pArg->z); } } /* @@ -87583,17 +118875,31 @@ Table *pTab, Module *pMod, int (*xConstruct)(sqlite3*,void*,int,const char*const*,sqlite3_vtab**,char**), char **pzErr ){ + VtabCtx sCtx; VTable *pVTable; int rc; const char *const*azArg = (const char *const*)pTab->azModuleArg; int nArg = pTab->nModuleArg; char *zErr = 0; - char *zModuleName = sqlite3MPrintf(db, "%s", pTab->zName); + char *zModuleName; + int iDb; + VtabCtx *pCtx; + /* Check that the virtual-table is not already being initialized */ + for(pCtx=db->pVtabCtx; pCtx; pCtx=pCtx->pPrior){ + if( pCtx->pTab==pTab ){ + *pzErr = sqlite3MPrintf(db, + "vtable constructor called recursively: %s", pTab->zName + ); + return SQLITE_LOCKED; + } + } + + zModuleName = sqlite3MPrintf(db, "%s", pTab->zName); if( !zModuleName ){ return SQLITE_NOMEM; } pVTable = sqlite3DbMallocZero(db, sizeof(VTable)); @@ -87602,51 +118908,64 @@ return SQLITE_NOMEM; } pVTable->db = db; pVTable->pMod = pMod; - assert( !db->pVTab ); - assert( xConstruct ); - db->pVTab = pTab; + iDb = sqlite3SchemaToIndex(db, pTab->pSchema); + pTab->azModuleArg[1] = db->aDb[iDb].zName; /* Invoke the virtual table constructor */ + assert( &db->pVtabCtx ); + assert( xConstruct ); + sCtx.pTab = pTab; + sCtx.pVTable = pVTable; + sCtx.pPrior = db->pVtabCtx; + sCtx.bDeclared = 0; + db->pVtabCtx = &sCtx; rc = xConstruct(db, pMod->pAux, nArg, 
azArg, &pVTable->pVtab, &zErr); - if( rc==SQLITE_NOMEM ) db->mallocFailed = 1; + db->pVtabCtx = sCtx.pPrior; + if( rc==SQLITE_NOMEM ) sqlite3OomFault(db); + assert( sCtx.pTab==pTab ); if( SQLITE_OK!=rc ){ if( zErr==0 ){ *pzErr = sqlite3MPrintf(db, "vtable constructor failed: %s", zModuleName); }else { *pzErr = sqlite3MPrintf(db, "%s", zErr); - sqlite3DbFree(db, zErr); + sqlite3_free(zErr); } sqlite3DbFree(db, pVTable); }else if( ALWAYS(pVTable->pVtab) ){ /* Justification of ALWAYS(): A correct vtab constructor must allocate ** the sqlite3_vtab object if successful. */ + memset(pVTable->pVtab, 0, sizeof(pVTable->pVtab[0])); pVTable->pVtab->pModule = pMod->pModule; pVTable->nRef = 1; - if( db->pVTab ){ + if( sCtx.bDeclared==0 ){ const char *zFormat = "vtable constructor did not declare schema: %s"; *pzErr = sqlite3MPrintf(db, zFormat, pTab->zName); sqlite3VtabUnlock(pVTable); rc = SQLITE_ERROR; }else{ int iCol; + u8 oooHidden = 0; /* If everything went according to plan, link the new VTable structure ** into the linked list headed by pTab->pVTable. Then loop through the ** columns of the table to see if any of them contain the token "hidden". - ** If so, set the Column.isHidden flag and remove the token from + ** If so, set the Column COLFLAG_HIDDEN flag and remove the token from ** the type string. */ pVTable->pNext = pTab->pVTable; pTab->pVTable = pVTable; for(iCol=0; iCol<pTab->nCol; iCol++){ char *zType = pTab->aCol[iCol].zType; int nType; int i = 0; - if( !zType ) continue; + if( !zType ){ + pTab->tabFlags |= oooHidden; + continue; + } nType = sqlite3Strlen30(zType); if( sqlite3StrNICmp("hidden", zType, 6)||(zType[6] && zType[6]!=' ') ){ for(i=0; i<nType; i++){ if( (0==sqlite3StrNICmp(" hidden", &zType[i], 7)) && (zType[i+7]=='\0' || zType[i+7]==' ') @@ -87664,18 +118983,20 @@ } if( zType[i]=='\0' && i>0 ){ assert(zType[i-1]==' '); zType[i-1] = '\0'; } - pTab->aCol[iCol].isHidden = 1; + pTab->aCol[iCol].colFlags |= COLFLAG_HIDDEN; + oooHidden = TF_OOOHidden; + }else{ + pTab->tabFlags |= oooHidden; } } } } sqlite3DbFree(db, zModuleName); - db->pVTab = 0; return rc; } /* ** This function is invoked by the parser to call the xConnect() method @@ -87695,11 +119016,11 @@ return SQLITE_OK; } /* Locate the required virtual table module */ zMod = pTab->azModuleArg[0]; - pMod = (Module*)sqlite3HashFind(&db->aModule, zMod, sqlite3Strlen30(zMod)); + pMod = (Module*)sqlite3HashFind(&db->aModule, zMod); if( !pMod ){ const char *zModule = pTab->azModuleArg[0]; sqlite3ErrorMsg(pParse, "no such module: %s", zModule); rc = SQLITE_ERROR; @@ -87712,15 +119033,15 @@ sqlite3DbFree(db, zErr); } return rc; } - /* -** Add the virtual table pVTab to the array sqlite3.aVTrans[]. +** Grow the db->aVTrans[] array so that there is room for at least one +** more v-table. Return SQLITE_NOMEM if a malloc fails, or SQLITE_OK otherwise. */ -static int addToVTrans(sqlite3 *db, VTable *pVTab){ +static int growVTrans(sqlite3 *db){ const int ARRAY_INCR = 5; /* Grow the sqlite3.aVTrans array if required */ if( (db->nVTrans%ARRAY_INCR)==0 ){ VTable **aVTrans; @@ -87731,14 +119052,21 @@ } memset(&aVTrans[db->nVTrans], 0, sizeof(sqlite3_vtab *)*ARRAY_INCR); db->aVTrans = aVTrans; } + return SQLITE_OK; +} + +/* +** Add the virtual table pVTab to the array sqlite3.aVTrans[]. Space should +** have already been reserved using growVTrans(). 
+*/ +static void addToVTrans(sqlite3 *db, VTable *pVTab){ /* Add pVtab to the end of sqlite3.aVTrans */ db->aVTrans[db->nVTrans++] = pVTab; sqlite3VtabLock(pVTab); - return SQLITE_OK; } /* ** This function is invoked by the vdbe to call the xCreate method ** of the virtual table named zTab in database iDb. @@ -87756,27 +119084,30 @@ pTab = sqlite3FindTable(db, zTab, db->aDb[iDb].zName); assert( pTab && (pTab->tabFlags & TF_Virtual)!=0 && !pTab->pVTable ); /* Locate the required virtual table module */ zMod = pTab->azModuleArg[0]; - pMod = (Module*)sqlite3HashFind(&db->aModule, zMod, sqlite3Strlen30(zMod)); + pMod = (Module*)sqlite3HashFind(&db->aModule, zMod); /* If the module has been registered and includes a Create method, ** invoke it now. If the module has not been registered, return an ** error. Otherwise, do nothing. */ - if( !pMod ){ + if( pMod==0 || pMod->pModule->xCreate==0 || pMod->pModule->xDestroy==0 ){ *pzErr = sqlite3MPrintf(db, "no such module: %s", zMod); rc = SQLITE_ERROR; }else{ rc = vtabCallConstructor(db, pTab, pMod, pMod->pModule->xCreate, pzErr); } /* Justification of ALWAYS(): The xConstructor method is required to ** create a valid sqlite3_vtab if it returns SQLITE_OK. */ if( rc==SQLITE_OK && ALWAYS(sqlite3GetVTable(db, pTab)) ){ - rc = addToVTrans(db, sqlite3GetVTable(db, pTab)); + rc = growVTrans(db); + if( rc==SQLITE_OK ){ + addToVTrans(db, sqlite3GetVTable(db, pTab)); + } } return rc; } @@ -87783,24 +119114,30 @@ /* ** This function is used to set the schema of a virtual table. It is only ** valid to call this function from within the xCreate() or xConnect() of a ** virtual table module. */ -SQLITE_API int sqlite3_declare_vtab(sqlite3 *db, const char *zCreateTable){ +SQLITE_API int SQLITE_STDCALL sqlite3_declare_vtab(sqlite3 *db, const char *zCreateTable){ + VtabCtx *pCtx; Parse *pParse; - int rc = SQLITE_OK; Table *pTab; char *zErr = 0; +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) || zCreateTable==0 ){ + return SQLITE_MISUSE_BKPT; + } +#endif sqlite3_mutex_enter(db->mutex); - pTab = db->pVTab; - if( !pTab ){ - sqlite3Error(db, SQLITE_MISUSE, 0); + pCtx = db->pVtabCtx; + if( !pCtx || pCtx->bDeclared ){ + sqlite3Error(db, SQLITE_MISUSE); sqlite3_mutex_leave(db->mutex); return SQLITE_MISUSE_BKPT; } + pTab = pCtx->pTab; assert( (pTab->tabFlags & TF_Virtual)!=0 ); pParse = sqlite3StackAllocZero(db, sizeof(*pParse)); if( pParse==0 ){ rc = SQLITE_NOMEM; @@ -87819,22 +119156,23 @@ pTab->aCol = pParse->pNewTable->aCol; pTab->nCol = pParse->pNewTable->nCol; pParse->pNewTable->nCol = 0; pParse->pNewTable->aCol = 0; } - db->pVTab = 0; + pCtx->bDeclared = 1; }else{ - sqlite3Error(db, SQLITE_ERROR, zErr); + sqlite3ErrorWithMsg(db, SQLITE_ERROR, (zErr ? 
"%s" : 0), zErr); sqlite3DbFree(db, zErr); rc = SQLITE_ERROR; } pParse->declareVtab = 0; if( pParse->pVdbe ){ sqlite3VdbeFinalize(pParse->pVdbe); } - sqlite3DeleteTable(pParse->pNewTable); + sqlite3DeleteTable(db, pParse->pNewTable); + sqlite3ParserReset(pParse); sqlite3StackFree(db, pParse); } assert( (rc&0xff)==rc ); rc = sqlite3ApiExit(db, rc); @@ -87853,15 +119191,22 @@ int rc = SQLITE_OK; Table *pTab; pTab = sqlite3FindTable(db, zTab, db->aDb[iDb].zName); if( ALWAYS(pTab!=0 && pTab->pVTable!=0) ){ - VTable *p = vtabDisconnectAll(db, pTab); - - assert( rc==SQLITE_OK ); - rc = p->pMod->pModule->xDestroy(p->pVtab); - + VTable *p; + int (*xDestroy)(sqlite3_vtab *); + for(p=pTab->pVTable; p; p=p->pNext){ + assert( p->pVtab ); + if( p->pVtab->nRef>0 ){ + return SQLITE_LOCKED; + } + } + p = vtabDisconnectAll(db, pTab); + xDestroy = p->pMod->pModule->xDestroy; + assert( xDestroy!=0 ); /* Checked before the virtual table is created */ + rc = xDestroy(p->pVtab); /* Remove the sqlite3_vtab* from the aVTrans[] array, if applicable */ if( rc==SQLITE_OK ){ assert( pTab->pVTable==p && p->pNext==0 ); p->pVtab = 0; pTab->pVTable = 0; @@ -87881,35 +119226,36 @@ ** The array is cleared after invoking the callbacks. */ static void callFinaliser(sqlite3 *db, int offset){ int i; if( db->aVTrans ){ + VTable **aVTrans = db->aVTrans; + db->aVTrans = 0; for(i=0; i<db->nVTrans; i++){ - VTable *pVTab = db->aVTrans[i]; + VTable *pVTab = aVTrans[i]; sqlite3_vtab *p = pVTab->pVtab; if( p ){ int (*x)(sqlite3_vtab *); x = *(int (**)(sqlite3_vtab *))((char *)p->pModule + offset); if( x ) x(p); } + pVTab->iSavepoint = 0; sqlite3VtabUnlock(pVTab); } - sqlite3DbFree(db, db->aVTrans); + sqlite3DbFree(db, aVTrans); db->nVTrans = 0; - db->aVTrans = 0; } } /* ** Invoke the xSync method of all virtual tables in the sqlite3.aVTrans ** array. Return the error code for the first error that occurs, or ** SQLITE_OK if all xSync operations are successful. ** -** Set *pzErrmsg to point to a buffer that should be released using -** sqlite3DbFree() containing an error message, if one is available. +** If an error message is available, leave it in p->zErrMsg. */ -SQLITE_PRIVATE int sqlite3VtabSync(sqlite3 *db, char **pzErrmsg){ +SQLITE_PRIVATE int sqlite3VtabSync(sqlite3 *db, Vdbe *p){ int i; int rc = SQLITE_OK; VTable **aVTrans = db->aVTrans; db->aVTrans = 0; @@ -87916,13 +119262,11 @@ for(i=0; rc==SQLITE_OK && i<db->nVTrans; i++){ int (*x)(sqlite3_vtab *); sqlite3_vtab *pVtab = aVTrans[i]->pVtab; if( pVtab && (x = pVtab->pModule->xSync)!=0 ){ rc = x(pVtab); - sqlite3DbFree(db, *pzErrmsg); - *pzErrmsg = pVtab->zErrMsg; - pVtab->zErrMsg = 0; + sqlite3VtabImportErrmsg(p, pVtab); } } db->aVTrans = aVTrans; return rc; } @@ -87971,22 +119315,75 @@ pModule = pVTab->pVtab->pModule; if( pModule->xBegin ){ int i; - /* If pVtab is already in the aVTrans array, return early */ for(i=0; i<db->nVTrans; i++){ if( db->aVTrans[i]==pVTab ){ return SQLITE_OK; } } - /* Invoke the xBegin method */ - rc = pModule->xBegin(pVTab->pVtab); + /* Invoke the xBegin method. If successful, add the vtab to the + ** sqlite3.aVTrans[] array. 
*/ + rc = growVTrans(db); if( rc==SQLITE_OK ){ - rc = addToVTrans(db, pVTab); + rc = pModule->xBegin(pVTab->pVtab); + if( rc==SQLITE_OK ){ + int iSvpt = db->nStatement + db->nSavepoint; + addToVTrans(db, pVTab); + if( iSvpt ) rc = sqlite3VtabSavepoint(db, SAVEPOINT_BEGIN, iSvpt-1); + } + } + } + return rc; +} + +/* +** Invoke either the xSavepoint, xRollbackTo or xRelease method of all +** virtual tables that currently have an open transaction. Pass iSavepoint +** as the second argument to the virtual table method invoked. +** +** If op is SAVEPOINT_BEGIN, the xSavepoint method is invoked. If it is +** SAVEPOINT_ROLLBACK, the xRollbackTo method. Otherwise, if op is +** SAVEPOINT_RELEASE, then the xRelease method of each virtual table with +** an open transaction is invoked. +** +** If any virtual table method returns an error code other than SQLITE_OK, +** processing is abandoned and the error returned to the caller of this +** function immediately. If all calls to virtual table methods are successful, +** SQLITE_OK is returned. +*/ +SQLITE_PRIVATE int sqlite3VtabSavepoint(sqlite3 *db, int op, int iSavepoint){ + int rc = SQLITE_OK; + + assert( op==SAVEPOINT_RELEASE||op==SAVEPOINT_ROLLBACK||op==SAVEPOINT_BEGIN ); + assert( iSavepoint>=-1 ); + if( db->aVTrans ){ + int i; + for(i=0; rc==SQLITE_OK && i<db->nVTrans; i++){ + VTable *pVTab = db->aVTrans[i]; + const sqlite3_module *pMod = pVTab->pMod->pModule; + if( pVTab->pVtab && pMod->iVersion>=2 ){ + int (*xMethod)(sqlite3_vtab *, int); + switch( op ){ + case SAVEPOINT_BEGIN: + xMethod = pMod->xSavepoint; + pVTab->iSavepoint = iSavepoint+1; + break; + case SAVEPOINT_ROLLBACK: + xMethod = pMod->xRollbackTo; + break; + default: + xMethod = pMod->xRelease; + break; + } + if( xMethod && pVTab->iSavepoint>iSavepoint ){ + rc = xMethod(pVTab->pVtab, iSavepoint); + } + } } } return rc; } @@ -88010,11 +119407,11 @@ Expr *pExpr /* First argument to the function */ ){ Table *pTab; sqlite3_vtab *pVtab; sqlite3_module *pMod; - void (*xFunc)(sqlite3_context*,int,sqlite3_value**) = 0; + void (*xSFunc)(sqlite3_context*,int,sqlite3_value**) = 0; void *pArg = 0; FuncDef *pNew; int rc = 0; char *zLowerName; unsigned char *z; @@ -88038,11 +119435,11 @@ zLowerName = sqlite3DbStrDup(db, pDef->zName); if( zLowerName ){ for(z=(unsigned char*)zLowerName; *z; z++){ *z = sqlite3UpperToLower[*z]; } - rc = pMod->xFindFunction(pVtab, nArg, zLowerName, &xFunc, &pArg); + rc = pMod->xFindFunction(pVtab, nArg, zLowerName, &xSFunc, &pArg); sqlite3DbFree(db, zLowerName); } if( rc==0 ){ return pDef; } @@ -88055,13 +119452,13 @@ return pDef; } *pNew = *pDef; pNew->zName = (char *)&pNew[1]; memcpy(pNew->zName, pDef->zName, sqlite3Strlen30(pDef->zName)+1); - pNew->xFunc = xFunc; + pNew->xSFunc = xSFunc; pNew->pUserData = pArg; - pNew->flags |= SQLITE_FUNC_EPHEM; + pNew->funcFlags |= SQLITE_FUNC_EPHEM; return pNew; } /* ** Make sure virtual table pTab is contained in the pParse->apVirtualLock[] @@ -88077,25 +119474,142 @@ assert( IsVirtual(pTab) ); for(i=0; i<pToplevel->nVtabLock; i++){ if( pTab==pToplevel->apVtabLock[i] ) return; } n = (pToplevel->nVtabLock+1)*sizeof(pToplevel->apVtabLock[0]); - apVtabLock = sqlite3_realloc(pToplevel->apVtabLock, n); + apVtabLock = sqlite3_realloc64(pToplevel->apVtabLock, n); if( apVtabLock ){ pToplevel->apVtabLock = apVtabLock; pToplevel->apVtabLock[pToplevel->nVtabLock++] = pTab; }else{ - pToplevel->db->mallocFailed = 1; + sqlite3OomFault(pToplevel->db); } } + +/* +** Check to see if virtual tale module pMod can be have an eponymous +** virtual 
table instance. If it can, create one if one does not already +** exist. Return non-zero if the eponymous virtual table instance exists +** when this routine returns, and return zero if it does not exist. +** +** An eponymous virtual table instance is one that is named after its +** module, and more importantly, does not require a CREATE VIRTUAL TABLE +** statement in order to come into existance. Eponymous virtual table +** instances always exist. They cannot be DROP-ed. +** +** Any virtual table module for which xConnect and xCreate are the same +** method can have an eponymous virtual table instance. +*/ +SQLITE_PRIVATE int sqlite3VtabEponymousTableInit(Parse *pParse, Module *pMod){ + const sqlite3_module *pModule = pMod->pModule; + Table *pTab; + char *zErr = 0; + int nName; + int rc; + sqlite3 *db = pParse->db; + if( pMod->pEpoTab ) return 1; + if( pModule->xCreate!=0 && pModule->xCreate!=pModule->xConnect ) return 0; + nName = sqlite3Strlen30(pMod->zName) + 1; + pTab = sqlite3DbMallocZero(db, sizeof(Table) + nName); + if( pTab==0 ) return 0; + pMod->pEpoTab = pTab; + pTab->zName = (char*)&pTab[1]; + memcpy(pTab->zName, pMod->zName, nName); + pTab->nRef = 1; + pTab->pSchema = db->aDb[0].pSchema; + pTab->tabFlags |= TF_Virtual; + pTab->nModuleArg = 0; + pTab->iPKey = -1; + addModuleArgument(db, pTab, sqlite3DbStrDup(db, pTab->zName)); + addModuleArgument(db, pTab, 0); + addModuleArgument(db, pTab, sqlite3DbStrDup(db, pTab->zName)); + rc = vtabCallConstructor(db, pTab, pMod, pModule->xConnect, &zErr); + if( rc ){ + sqlite3ErrorMsg(pParse, "%s", zErr); + sqlite3DbFree(db, zErr); + sqlite3VtabEponymousTableClear(db, pMod); + return 0; + } + return 1; +} + +/* +** Erase the eponymous virtual table instance associated with +** virtual table module pMod, if it exists. +*/ +SQLITE_PRIVATE void sqlite3VtabEponymousTableClear(sqlite3 *db, Module *pMod){ + Table *pTab = pMod->pEpoTab; + if( pTab!=0 ){ + sqlite3DeleteColumnNames(db, pTab); + sqlite3VtabClear(db, pTab); + sqlite3DbFree(db, pTab); + pMod->pEpoTab = 0; + } +} + +/* +** Return the ON CONFLICT resolution mode in effect for the virtual +** table update operation currently in progress. +** +** The results of this routine are undefined unless it is called from +** within an xUpdate method. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_vtab_on_conflict(sqlite3 *db){ + static const unsigned char aMap[] = { + SQLITE_ROLLBACK, SQLITE_ABORT, SQLITE_FAIL, SQLITE_IGNORE, SQLITE_REPLACE + }; +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT; +#endif + assert( OE_Rollback==1 && OE_Abort==2 && OE_Fail==3 ); + assert( OE_Ignore==4 && OE_Replace==5 ); + assert( db->vtabOnConflict>=1 && db->vtabOnConflict<=5 ); + return (int)aMap[db->vtabOnConflict-1]; +} + +/* +** Call from within the xCreate() or xConnect() methods to provide +** the SQLite core with additional information about the behavior +** of the virtual table being implemented. 
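+**
+** For example, an xConnect() implementation that wants the core to honor
+** ON CONFLICT resolution for its xUpdate() method might include the
+** following (illustrative sketch):
+**
+**     int rc = sqlite3_vtab_config(db, SQLITE_VTAB_CONSTRAINT_SUPPORT, 1);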
+*/ +SQLITE_API int SQLITE_CDECL sqlite3_vtab_config(sqlite3 *db, int op, ...){ + va_list ap; + int rc = SQLITE_OK; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT; +#endif + sqlite3_mutex_enter(db->mutex); + va_start(ap, op); + switch( op ){ + case SQLITE_VTAB_CONSTRAINT_SUPPORT: { + VtabCtx *p = db->pVtabCtx; + if( !p ){ + rc = SQLITE_MISUSE_BKPT; + }else{ + assert( p->pTab==0 || (p->pTab->tabFlags & TF_Virtual)!=0 ); + p->pVTable->bConstraint = (u8)va_arg(ap, int); + } + break; + } + default: + rc = SQLITE_MISUSE_BKPT; + break; + } + va_end(ap); + + if( rc!=SQLITE_OK ) sqlite3Error(db, rc); + sqlite3_mutex_leave(db->mutex); + return rc; +} #endif /* SQLITE_OMIT_VIRTUALTABLE */ /************** End of vtab.c ************************************************/ -/************** Begin file where.c *******************************************/ +/************** Begin file wherecode.c ***************************************/ /* -** 2001 September 15 +** 2015-06-06 ** ** The author disclaims copyright to this source code. In place of ** a legal notice, here is a blessing: ** ** May you do good and not evil. @@ -88102,36 +119616,212 @@ ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ************************************************************************* ** This module contains C code that generates VDBE code used to process -** the WHERE clause of SQL statements. This module is responsible for -** generating the code that loops through a table looking for applicable -** rows. Indices are selected and used to speed the search when doing -** so is applicable. Because this module is responsible for selecting -** indices, you might also think of this module as the "query optimizer". +** the WHERE clause of SQL statements. +** +** This file was split off from where.c on 2015-06-06 in order to reduce the +** size of where.c and make it easier to edit. This file contains the routines +** that actually generate the bulk of the WHERE loop code. The original where.c +** file retains the code that does query planning and analysis. +*/ +/* #include "sqliteInt.h" */ +/************** Include whereInt.h in the middle of wherecode.c **************/ +/************** Begin file whereInt.h ****************************************/ +/* +** 2013-11-12 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** +** This file contains structure and macro definitions for the query +** planner logic in "where.c". These definitions are broken out into +** a separate source file for easier editing. 
*/ /* ** Trace output macros */ #if defined(SQLITE_TEST) || defined(SQLITE_DEBUG) -SQLITE_PRIVATE int sqlite3WhereTrace = 0; +/***/ int sqlite3WhereTrace; #endif -#if defined(SQLITE_TEST) && defined(SQLITE_DEBUG) -# define WHERETRACE(X) if(sqlite3WhereTrace) sqlite3DebugPrintf X +#if defined(SQLITE_DEBUG) \ + && (defined(SQLITE_TEST) || defined(SQLITE_ENABLE_WHERETRACE)) +# define WHERETRACE(K,X) if(sqlite3WhereTrace&(K)) sqlite3DebugPrintf X +# define WHERETRACE_ENABLED 1 #else -# define WHERETRACE(X) +# define WHERETRACE(K,X) #endif -/* Forward reference +/* Forward references */ typedef struct WhereClause WhereClause; typedef struct WhereMaskSet WhereMaskSet; typedef struct WhereOrInfo WhereOrInfo; typedef struct WhereAndInfo WhereAndInfo; -typedef struct WhereCost WhereCost; +typedef struct WhereLevel WhereLevel; +typedef struct WhereLoop WhereLoop; +typedef struct WherePath WherePath; +typedef struct WhereTerm WhereTerm; +typedef struct WhereLoopBuilder WhereLoopBuilder; +typedef struct WhereScan WhereScan; +typedef struct WhereOrCost WhereOrCost; +typedef struct WhereOrSet WhereOrSet; + +/* +** This object contains information needed to implement a single nested +** loop in WHERE clause. +** +** Contrast this object with WhereLoop. This object describes the +** implementation of the loop. WhereLoop describes the algorithm. +** This object contains a pointer to the WhereLoop algorithm as one of +** its elements. +** +** The WhereInfo object contains a single instance of this object for +** each term in the FROM clause (which is to say, for each of the +** nested loops as implemented). The order of WhereLevel objects determines +** the loop nested order, with WhereInfo.a[0] being the outer loop and +** WhereInfo.a[WhereInfo.nLevel-1] being the inner loop. +*/ +struct WhereLevel { + int iLeftJoin; /* Memory cell used to implement LEFT OUTER JOIN */ + int iTabCur; /* The VDBE cursor used to access the table */ + int iIdxCur; /* The VDBE cursor used to access pIdx */ + int addrBrk; /* Jump here to break out of the loop */ + int addrNxt; /* Jump here to start the next IN combination */ + int addrSkip; /* Jump here for next iteration of skip-scan */ + int addrCont; /* Jump here to continue with the next loop cycle */ + int addrFirst; /* First instruction of interior of the loop */ + int addrBody; /* Beginning of the body of this loop */ +#ifndef SQLITE_LIKE_DOESNT_MATCH_BLOBS + int iLikeRepCntr; /* LIKE range processing counter register */ + int addrLikeRep; /* LIKE range processing address */ +#endif + u8 iFrom; /* Which entry in the FROM clause */ + u8 op, p3, p5; /* Opcode, P3 & P5 of the opcode that ends the loop */ + int p1, p2; /* Operands of the opcode used to ends the loop */ + union { /* Information that depends on pWLoop->wsFlags */ + struct { + int nIn; /* Number of entries in aInLoop[] */ + struct InLoop { + int iCur; /* The VDBE cursor used by this IN operator */ + int addrInTop; /* Top of the IN loop */ + u8 eEndLoopOp; /* IN Loop terminator. 
OP_Next or OP_Prev */ + } *aInLoop; /* Information about each nested IN operator */ + } in; /* Used when pWLoop->wsFlags&WHERE_IN_ABLE */ + Index *pCovidx; /* Possible covering index for WHERE_MULTI_OR */ + } u; + struct WhereLoop *pWLoop; /* The selected WhereLoop object */ + Bitmask notReady; /* FROM entries not usable at this level */ +#ifdef SQLITE_ENABLE_STMT_SCANSTATUS + int addrVisit; /* Address at which row is visited */ +#endif +}; + +/* +** Each instance of this object represents an algorithm for evaluating one +** term of a join. Every term of the FROM clause will have at least +** one corresponding WhereLoop object (unless INDEXED BY constraints +** prevent a query solution - which is an error) and many terms of the +** FROM clause will have multiple WhereLoop objects, each describing a +** potential way of implementing that FROM-clause term, together with +** dependencies and cost estimates for using the chosen algorithm. +** +** Query planning consists of building up a collection of these WhereLoop +** objects, then computing a particular sequence of WhereLoop objects, with +** one WhereLoop object per FROM clause term, that satisfy all dependencies +** and that minimize the overall cost. +*/ +struct WhereLoop { + Bitmask prereq; /* Bitmask of other loops that must run first */ + Bitmask maskSelf; /* Bitmask identifying table iTab */ +#ifdef SQLITE_DEBUG + char cId; /* Symbolic ID of this loop for debugging use */ +#endif + u8 iTab; /* Position in FROM clause of table for this loop */ + u8 iSortIdx; /* Sorting index number. 0==None */ + LogEst rSetup; /* One-time setup cost (ex: create transient index) */ + LogEst rRun; /* Cost of running each loop */ + LogEst nOut; /* Estimated number of output rows */ + union { + struct { /* Information for internal btree tables */ + u16 nEq; /* Number of equality constraints */ + Index *pIndex; /* Index used, or NULL */ + } btree; + struct { /* Information for virtual tables */ + int idxNum; /* Index number */ + u8 needFree; /* True if sqlite3_free(idxStr) is needed */ + i8 isOrdered; /* True if satisfies ORDER BY */ + u16 omitMask; /* Terms that may be omitted */ + char *idxStr; /* Index identifier string */ + } vtab; + } u; + u32 wsFlags; /* WHERE_* flags describing the plan */ + u16 nLTerm; /* Number of entries in aLTerm[] */ + u16 nSkip; /* Number of NULL aLTerm[] entries */ + /**** whereLoopXfer() copies fields above ***********************/ +# define WHERE_LOOP_XFER_SZ offsetof(WhereLoop,nLSlot) + u16 nLSlot; /* Number of slots allocated for aLTerm[] */ + WhereTerm **aLTerm; /* WhereTerms used */ + WhereLoop *pNextLoop; /* Next WhereLoop object in the WhereClause */ + WhereTerm *aLTermSpace[3]; /* Initial aLTerm[] space */ +}; + +/* This object holds the prerequisites and the cost of running a +** subquery on one operand of an OR operator in the WHERE clause. +** See WhereOrSet for additional information +*/ +struct WhereOrCost { + Bitmask prereq; /* Prerequisites */ + LogEst rRun; /* Cost of running this subquery */ + LogEst nOut; /* Number of outputs for this subquery */ +}; + +/* The WhereOrSet object holds a set of possible WhereOrCosts that +** correspond to the subquery(s) of OR-clause processing. Only the +** best N_OR_COST elements are retained. +*/ +#define N_OR_COST 3 +struct WhereOrSet { + u16 n; /* Number of valid a[] entries */ + WhereOrCost a[N_OR_COST]; /* Set of best costs */ +}; + +/* +** Each instance of this object holds a sequence of WhereLoop objects +** that implement some or all of a query plan. 
+** +** Think of each WhereLoop object as a node in a graph with arcs +** showing dependencies and costs for travelling between nodes. (That is +** not a completely accurate description because WhereLoop costs are a +** vector, not a scalar, and because dependencies are many-to-one, not +** one-to-one as are graph nodes. But it is a useful visualization aid.) +** Then a WherePath object is a path through the graph that visits some +** or all of the WhereLoop objects once. +** +** The "solver" works by creating the N best WherePath objects of length +** 1. Then using those as a basis to compute the N best WherePath objects +** of length 2. And so forth until the length of WherePaths equals the +** number of nodes in the FROM clause. The best (lowest cost) WherePath +** at the end is the chosen query plan. +*/ +struct WherePath { + Bitmask maskLoop; /* Bitmask of all WhereLoop objects in this path */ + Bitmask revLoop; /* aLoop[]s that should be reversed for ORDER BY */ + LogEst nRow; /* Estimated number of rows generated by this path */ + LogEst rCost; /* Total cost of this path */ + LogEst rUnsorted; /* Total cost of this path ignoring sorting costs */ + i8 isOrdered; /* No. of ORDER BY terms satisfied. -1 for unknown */ + WhereLoop **aLoop; /* Array of WhereLoop objects implementing this path */ +}; /* ** The query generator uses an array of instances of this structure to ** help it analyze the subexpressions of the WHERE clause. Each WHERE ** clause subexpression is separated from the others by AND operators, @@ -88155,13 +119845,13 @@ ** ** A WhereTerm might also be two or more subterms connected by OR: ** ** (t1.X <op> <expr>) OR (t1.Y <op> <expr>) OR .... ** -** In this second case, wtFlag as the TERM_ORINFO set and eOperator==WO_OR +** In this second case, wtFlag has the TERM_ORINFO bit set and eOperator==WO_OR ** and the WhereTerm.u.pOrInfo field points to auxiliary information that -** is collected about the +** is collected about the OR clause. ** ** If a term in the WHERE clause does not match either of the two previous ** categories, then eOperator==0. The WhereTerm.pExpr field is still set ** to the original subexpression content and wtFlags is set up appropriately ** but no other fields in the WhereTerm object are meaningful. @@ -88180,23 +119870,24 @@ ** ** The number of terms in a join is limited by the number of bits ** in prereqRight and prereqAll. The default is 64 bits, hence SQLite ** is only able to process joins with 64 or fewer tables. */ -typedef struct WhereTerm WhereTerm; struct WhereTerm { Expr *pExpr; /* Pointer to the subexpression that is this term */ int iParent; /* Disable pWC->a[iParent] when this term disabled */ int leftCursor; /* Cursor number of X in "X <op> <expr>" */ union { int leftColumn; /* Column number of X in "X <op> <expr>" */ - WhereOrInfo *pOrInfo; /* Extra information if eOperator==WO_OR */ - WhereAndInfo *pAndInfo; /* Extra information if eOperator==WO_AND */ + WhereOrInfo *pOrInfo; /* Extra information if (eOperator & WO_OR)!=0 */ + WhereAndInfo *pAndInfo; /* Extra information if (eOperator& WO_AND)!=0 */ } u; + LogEst truthProb; /* Probability of truth for this expression */ u16 eOperator; /* A WO_xx value describing <op> */ - u8 wtFlags; /* TERM_xxx bit flags. See below */ + u16 wtFlags; /* TERM_xxx bit flags. 
See below */ u8 nChild; /* Number of children that must disable us */ + u8 eMatchOp; /* Op for vtab MATCH/LIKE/GLOB/REGEXP terms */ WhereClause *pWC; /* The clause this term is part of */ Bitmask prereqRight; /* Bitmask of tables used by pExpr->pRight */ Bitmask prereqAll; /* Bitmask of tables referenced by pExpr */ }; @@ -88208,19 +119899,53 @@ #define TERM_CODED 0x04 /* This term is already coded */ #define TERM_COPIED 0x08 /* Has a child */ #define TERM_ORINFO 0x10 /* Need to free the WhereTerm.u.pOrInfo object */ #define TERM_ANDINFO 0x20 /* Need to free the WhereTerm.u.pAndInfo obj */ #define TERM_OR_OK 0x40 /* Used during OR-clause processing */ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 +# define TERM_VNULL 0x80 /* Manufactured x>NULL or x<=NULL term */ +#else +# define TERM_VNULL 0x00 /* Disabled if not using stat3 */ +#endif +#define TERM_LIKEOPT 0x100 /* Virtual terms from the LIKE optimization */ +#define TERM_LIKECOND 0x200 /* Conditionally this LIKE operator term */ +#define TERM_LIKE 0x400 /* The original LIKE operator */ +#define TERM_IS 0x800 /* Term.pExpr is an IS operator */ + +/* +** An instance of the WhereScan object is used as an iterator for locating +** terms in the WHERE clause that are useful to the query planner. +*/ +struct WhereScan { + WhereClause *pOrigWC; /* Original, innermost WhereClause */ + WhereClause *pWC; /* WhereClause currently being scanned */ + const char *zCollName; /* Required collating sequence, if not NULL */ + Expr *pIdxExpr; /* Search for this index expression */ + char idxaff; /* Must match this affinity, if zCollName!=NULL */ + unsigned char nEquiv; /* Number of entries in aEquiv[] */ + unsigned char iEquiv; /* Next unused slot in aEquiv[] */ + u32 opMask; /* Acceptable operators */ + int k; /* Resume scanning at this->pWC->a[this->k] */ + int aiCur[11]; /* Cursors in the equivalence class */ + i16 aiColumn[11]; /* Corresponding column number in the eq-class */ +}; /* ** An instance of the following structure holds all information about a ** WHERE clause. Mostly this is a container for one or more WhereTerms. +** +** Explanation of pOuter: For a WHERE clause of the form +** +** a AND ((b AND c) OR (d AND e)) AND f +** +** There are separate WhereClause objects for the whole clause and for +** the subclauses "(b AND c)" and "(d AND e)". The pOuter field of the +** subclauses points to the WhereClause object for the whole clause. */ struct WhereClause { - Parse *pParse; /* The parser context */ - WhereMaskSet *pMaskSet; /* Mapping of table cursor numbers to bitmasks */ - Bitmask vmask; /* Bitmask identifying virtual table cursors */ + WhereInfo *pWInfo; /* WHERE clause processing context */ + WhereClause *pOuter; /* Outer conjunction */ u8 op; /* Split operator. TK_AND or TK_OR */ int nTerm; /* Number of terms */ int nSlot; /* Number of entries in a[] */ WhereTerm *a; /* Each a[] describes a term of the WHERE cluase */ #if defined(SQLITE_SMALL_STACK) @@ -88277,129 +120002,1916 @@ int n; /* Number of assigned cursor values */ int ix[BMS]; /* Cursor assigned to each bit */ }; /* -** A WhereCost object records a lookup strategy and the estimated -** cost of pursuing that strategy. 
+** Initialize a WhereMaskSet object */ -struct WhereCost { - WherePlan plan; /* The lookup strategy */ - double rCost; /* Overall cost of pursuing this search strategy */ - double nRow; /* Estimated number of output rows */ - Bitmask used; /* Bitmask of cursors used by this plan */ +#define initMaskSet(P) (P)->n=0 + +/* +** This object is a convenience wrapper holding all information needed +** to construct WhereLoop objects for a particular query. +*/ +struct WhereLoopBuilder { + WhereInfo *pWInfo; /* Information about this WHERE */ + WhereClause *pWC; /* WHERE clause terms */ + ExprList *pOrderBy; /* ORDER BY clause */ + WhereLoop *pNew; /* Template WhereLoop */ + WhereOrSet *pOrSet; /* Record best loops here, if not NULL */ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + UnpackedRecord *pRec; /* Probe for stat4 (if required) */ + int nRecValid; /* Number of valid fields currently in pRec */ +#endif }; /* -** Bitmasks for the operators that indices are able to exploit. An +** The WHERE clause processing routine has two halves. The +** first part does the start of the WHERE loop and the second +** half does the tail of the WHERE loop. An instance of +** this structure is returned by the first half and passed +** into the second half to give some continuity. +** +** An instance of this object holds the complete state of the query +** planner. +*/ +struct WhereInfo { + Parse *pParse; /* Parsing and code generating context */ + SrcList *pTabList; /* List of tables in the join */ + ExprList *pOrderBy; /* The ORDER BY clause or NULL */ + ExprList *pResultSet; /* Result set. DISTINCT operates on these */ + WhereLoop *pLoops; /* List of all WhereLoop objects */ + Bitmask revMask; /* Mask of ORDER BY terms that need reversing */ + LogEst nRowOut; /* Estimated number of output rows */ + u16 wctrlFlags; /* Flags originally passed to sqlite3WhereBegin() */ + i8 nOBSat; /* Number of ORDER BY terms satisfied by indices */ + u8 sorted; /* True if really sorted (not just grouped) */ + u8 eOnePass; /* ONEPASS_OFF, or _SINGLE, or _MULTI */ + u8 untestedTerms; /* Not all WHERE terms resolved by outer loop */ + u8 eDistinct; /* One of the WHERE_DISTINCT_* values below */ + u8 nLevel; /* Number of nested loop */ + int iTop; /* The very beginning of the WHERE loop */ + int iContinue; /* Jump here to continue with next record */ + int iBreak; /* Jump here to break out of the loop */ + int savedNQueryLoop; /* pParse->nQueryLoop outside the WHERE loop */ + int aiCurOnePass[2]; /* OP_OpenWrite cursors for the ONEPASS opt */ + WhereMaskSet sMaskSet; /* Map cursor numbers to bitmasks */ + WhereClause sWC; /* Decomposition of the WHERE clause */ + WhereLevel a[1]; /* Information about each nest loop in WHERE */ +}; + +/* +** Private interfaces - callable only by other where.c routines. 
+** +** where.c: +*/ +SQLITE_PRIVATE Bitmask sqlite3WhereGetMask(WhereMaskSet*,int); +SQLITE_PRIVATE WhereTerm *sqlite3WhereFindTerm( + WhereClause *pWC, /* The WHERE clause to be searched */ + int iCur, /* Cursor number of LHS */ + int iColumn, /* Column number of LHS */ + Bitmask notReady, /* RHS must not overlap with this mask */ + u32 op, /* Mask of WO_xx values describing operator */ + Index *pIdx /* Must be compatible with this index, if not NULL */ +); + +/* wherecode.c: */ +#ifndef SQLITE_OMIT_EXPLAIN +SQLITE_PRIVATE int sqlite3WhereExplainOneScan( + Parse *pParse, /* Parse context */ + SrcList *pTabList, /* Table list this loop refers to */ + WhereLevel *pLevel, /* Scan to write OP_Explain opcode for */ + int iLevel, /* Value for "level" column of output */ + int iFrom, /* Value for "from" column of output */ + u16 wctrlFlags /* Flags passed to sqlite3WhereBegin() */ +); +#else +# define sqlite3WhereExplainOneScan(u,v,w,x,y,z) 0 +#endif /* SQLITE_OMIT_EXPLAIN */ +#ifdef SQLITE_ENABLE_STMT_SCANSTATUS +SQLITE_PRIVATE void sqlite3WhereAddScanStatus( + Vdbe *v, /* Vdbe to add scanstatus entry to */ + SrcList *pSrclist, /* FROM clause pLvl reads data from */ + WhereLevel *pLvl, /* Level to add scanstatus() entry for */ + int addrExplain /* Address of OP_Explain (or 0) */ +); +#else +# define sqlite3WhereAddScanStatus(a, b, c, d) ((void)d) +#endif +SQLITE_PRIVATE Bitmask sqlite3WhereCodeOneLoopStart( + WhereInfo *pWInfo, /* Complete information about the WHERE clause */ + int iLevel, /* Which level of pWInfo->a[] should be coded */ + Bitmask notReady /* Which tables are currently available */ +); + +/* whereexpr.c: */ +SQLITE_PRIVATE void sqlite3WhereClauseInit(WhereClause*,WhereInfo*); +SQLITE_PRIVATE void sqlite3WhereClauseClear(WhereClause*); +SQLITE_PRIVATE void sqlite3WhereSplit(WhereClause*,Expr*,u8); +SQLITE_PRIVATE Bitmask sqlite3WhereExprUsage(WhereMaskSet*, Expr*); +SQLITE_PRIVATE Bitmask sqlite3WhereExprListUsage(WhereMaskSet*, ExprList*); +SQLITE_PRIVATE void sqlite3WhereExprAnalyze(SrcList*, WhereClause*); +SQLITE_PRIVATE void sqlite3WhereTabFuncArgs(Parse*, struct SrcList_item*, WhereClause*); + + + + + +/* +** Bitmasks for the operators on WhereTerm objects. These are all +** operators that are of interest to the query planner. An ** OR-ed combination of these values can be used when searching for -** terms in the where clause. +** particular WhereTerms within a WhereClause. */ -#define WO_IN 0x001 -#define WO_EQ 0x002 +#define WO_IN 0x0001 +#define WO_EQ 0x0002 #define WO_LT (WO_EQ<<(TK_LT-TK_EQ)) #define WO_LE (WO_EQ<<(TK_LE-TK_EQ)) #define WO_GT (WO_EQ<<(TK_GT-TK_EQ)) #define WO_GE (WO_EQ<<(TK_GE-TK_EQ)) -#define WO_MATCH 0x040 -#define WO_ISNULL 0x080 -#define WO_OR 0x100 /* Two or more OR-connected terms */ -#define WO_AND 0x200 /* Two or more AND-connected terms */ - -#define WO_ALL 0xfff /* Mask of all possible WO_* values */ -#define WO_SINGLE 0x0ff /* Mask of all non-compound WO_* values */ - -/* -** Value for wsFlags returned by bestIndex() and stored in -** WhereLevel.wsFlags. These flags determine which search -** strategies are appropriate. -** -** The least significant 12 bits is reserved as a mask for WO_ values above. -** The WhereLevel.wsFlags field is usually set to WO_IN|WO_EQ|WO_ISNULL. -** But if the table is the right table of a left join, WhereLevel.wsFlags -** is set to WO_IN|WO_EQ. The WhereLevel.wsFlags field can then be used as -** the "op" parameter to findTerm when we are resolving equality constraints. 
-** ISNULL constraints will then not be used on the right table of a left -** join. Tickets #2177 and #2189. -*/ -#define WHERE_ROWID_EQ 0x00001000 /* rowid=EXPR or rowid IN (...) */ -#define WHERE_ROWID_RANGE 0x00002000 /* rowid<EXPR and/or rowid>EXPR */ -#define WHERE_COLUMN_EQ 0x00010000 /* x=EXPR or x IN (...) or x IS NULL */ -#define WHERE_COLUMN_RANGE 0x00020000 /* x<EXPR and/or x>EXPR */ -#define WHERE_COLUMN_IN 0x00040000 /* x IN (...) */ -#define WHERE_COLUMN_NULL 0x00080000 /* x IS NULL */ -#define WHERE_INDEXED 0x000f0000 /* Anything that uses an index */ -#define WHERE_NOT_FULLSCAN 0x000f3000 /* Does not do a full table scan */ -#define WHERE_IN_ABLE 0x000f1000 /* Able to support an IN operator */ -#define WHERE_TOP_LIMIT 0x00100000 /* x<EXPR or x<=EXPR constraint */ -#define WHERE_BTM_LIMIT 0x00200000 /* x>EXPR or x>=EXPR constraint */ -#define WHERE_IDX_ONLY 0x00800000 /* Use index only - omit table */ -#define WHERE_ORDERBY 0x01000000 /* Output will appear in correct order */ -#define WHERE_REVERSE 0x02000000 /* Scan in reverse order */ -#define WHERE_UNIQUE 0x04000000 /* Selects no more than one row */ -#define WHERE_VIRTUALTABLE 0x08000000 /* Use virtual-table processing */ -#define WHERE_MULTI_OR 0x10000000 /* OR using multiple indices */ -#define WHERE_TEMP_INDEX 0x20000000 /* Uses an ephemeral index */ - -/* -** Initialize a preallocated WhereClause structure. -*/ -static void whereClauseInit( - WhereClause *pWC, /* The WhereClause to be initialized */ - Parse *pParse, /* The parsing context */ - WhereMaskSet *pMaskSet /* Mapping from table cursor numbers to bitmasks */ -){ - pWC->pParse = pParse; - pWC->pMaskSet = pMaskSet; - pWC->nTerm = 0; - pWC->nSlot = ArraySize(pWC->aStatic); - pWC->a = pWC->aStatic; - pWC->vmask = 0; -} - -/* Forward reference */ -static void whereClauseClear(WhereClause*); +#define WO_MATCH 0x0040 +#define WO_IS 0x0080 +#define WO_ISNULL 0x0100 +#define WO_OR 0x0200 /* Two or more OR-connected terms */ +#define WO_AND 0x0400 /* Two or more AND-connected terms */ +#define WO_EQUIV 0x0800 /* Of the form A==B, both columns */ +#define WO_NOOP 0x1000 /* This term does not restrict search space */ + +#define WO_ALL 0x1fff /* Mask of all possible WO_* values */ +#define WO_SINGLE 0x01ff /* Mask of all non-compound WO_* values */ + +/* +** These are definitions of bits in the WhereLoop.wsFlags field. +** The particular combination of bits in each WhereLoop help to +** determine the algorithm that WhereLoop represents. +*/ +#define WHERE_COLUMN_EQ 0x00000001 /* x=EXPR */ +#define WHERE_COLUMN_RANGE 0x00000002 /* x<EXPR and/or x>EXPR */ +#define WHERE_COLUMN_IN 0x00000004 /* x IN (...) 
*/ +#define WHERE_COLUMN_NULL 0x00000008 /* x IS NULL */ +#define WHERE_CONSTRAINT 0x0000000f /* Any of the WHERE_COLUMN_xxx values */ +#define WHERE_TOP_LIMIT 0x00000010 /* x<EXPR or x<=EXPR constraint */ +#define WHERE_BTM_LIMIT 0x00000020 /* x>EXPR or x>=EXPR constraint */ +#define WHERE_BOTH_LIMIT 0x00000030 /* Both x>EXPR and x<EXPR */ +#define WHERE_IDX_ONLY 0x00000040 /* Use index only - omit table */ +#define WHERE_IPK 0x00000100 /* x is the INTEGER PRIMARY KEY */ +#define WHERE_INDEXED 0x00000200 /* WhereLoop.u.btree.pIndex is valid */ +#define WHERE_VIRTUALTABLE 0x00000400 /* WhereLoop.u.vtab is valid */ +#define WHERE_IN_ABLE 0x00000800 /* Able to support an IN operator */ +#define WHERE_ONEROW 0x00001000 /* Selects no more than one row */ +#define WHERE_MULTI_OR 0x00002000 /* OR using multiple indices */ +#define WHERE_AUTO_INDEX 0x00004000 /* Uses an ephemeral index */ +#define WHERE_SKIPSCAN 0x00008000 /* Uses the skip-scan algorithm */ +#define WHERE_UNQ_WANTED 0x00010000 /* WHERE_ONEROW would have been helpful*/ +#define WHERE_PARTIALIDX 0x00020000 /* The automatic index is partial */ + +/************** End of whereInt.h ********************************************/ +/************** Continuing where we left off in wherecode.c ******************/ + +#ifndef SQLITE_OMIT_EXPLAIN +/* +** This routine is a helper for explainIndexRange() below +** +** pStr holds the text of an expression that we are building up one term +** at a time. This routine adds a new term to the end of the expression. +** Terms are separated by AND so add the "AND" text for second and subsequent +** terms only. +*/ +static void explainAppendTerm( + StrAccum *pStr, /* The text expression being built */ + int iTerm, /* Index of this term. First is zero */ + const char *zColumn, /* Name of the column */ + const char *zOp /* Name of the operator */ +){ + if( iTerm ) sqlite3StrAccumAppend(pStr, " AND ", 5); + sqlite3StrAccumAppendAll(pStr, zColumn); + sqlite3StrAccumAppend(pStr, zOp, 1); + sqlite3StrAccumAppend(pStr, "?", 1); +} + +/* +** Return the name of the i-th column of the pIdx index. +*/ +static const char *explainIndexColumnName(Index *pIdx, int i){ + i = pIdx->aiColumn[i]; + if( i==XN_EXPR ) return "<expr>"; + if( i==XN_ROWID ) return "rowid"; + return pIdx->pTable->aCol[i].zName; +} + +/* +** Argument pLevel describes a strategy for scanning table pTab. This +** function appends text to pStr that describes the subset of table +** rows scanned by the strategy in the form of an SQL expression. +** +** For example, if the query: +** +** SELECT * FROM t1 WHERE a=1 AND b>2; +** +** is run and there is an index on (a, b), then this function returns a +** string similar to: +** +** "a=? AND b>?" +*/ +static void explainIndexRange(StrAccum *pStr, WhereLoop *pLoop){ + Index *pIndex = pLoop->u.btree.pIndex; + u16 nEq = pLoop->u.btree.nEq; + u16 nSkip = pLoop->nSkip; + int i, j; + + if( nEq==0 && (pLoop->wsFlags&(WHERE_BTM_LIMIT|WHERE_TOP_LIMIT))==0 ) return; + sqlite3StrAccumAppend(pStr, " (", 2); + for(i=0; i<nEq; i++){ + const char *z = explainIndexColumnName(pIndex, i); + if( i ) sqlite3StrAccumAppend(pStr, " AND ", 5); + sqlite3XPrintf(pStr, i>=nSkip ? "%s=?" 
: "ANY(%s)", z); + } + + j = i; + if( pLoop->wsFlags&WHERE_BTM_LIMIT ){ + const char *z = explainIndexColumnName(pIndex, i); + explainAppendTerm(pStr, i++, z, ">"); + } + if( pLoop->wsFlags&WHERE_TOP_LIMIT ){ + const char *z = explainIndexColumnName(pIndex, j); + explainAppendTerm(pStr, i, z, "<"); + } + sqlite3StrAccumAppend(pStr, ")", 1); +} + +/* +** This function is a no-op unless currently processing an EXPLAIN QUERY PLAN +** command, or if either SQLITE_DEBUG or SQLITE_ENABLE_STMT_SCANSTATUS was +** defined at compile-time. If it is not a no-op, a single OP_Explain opcode +** is added to the output to describe the table scan strategy in pLevel. +** +** If an OP_Explain opcode is added to the VM, its address is returned. +** Otherwise, if no OP_Explain is coded, zero is returned. +*/ +SQLITE_PRIVATE int sqlite3WhereExplainOneScan( + Parse *pParse, /* Parse context */ + SrcList *pTabList, /* Table list this loop refers to */ + WhereLevel *pLevel, /* Scan to write OP_Explain opcode for */ + int iLevel, /* Value for "level" column of output */ + int iFrom, /* Value for "from" column of output */ + u16 wctrlFlags /* Flags passed to sqlite3WhereBegin() */ +){ + int ret = 0; +#if !defined(SQLITE_DEBUG) && !defined(SQLITE_ENABLE_STMT_SCANSTATUS) + if( pParse->explain==2 ) +#endif + { + struct SrcList_item *pItem = &pTabList->a[pLevel->iFrom]; + Vdbe *v = pParse->pVdbe; /* VM being constructed */ + sqlite3 *db = pParse->db; /* Database handle */ + int iId = pParse->iSelectId; /* Select id (left-most output column) */ + int isSearch; /* True for a SEARCH. False for SCAN. */ + WhereLoop *pLoop; /* The controlling WhereLoop object */ + u32 flags; /* Flags that describe this loop */ + char *zMsg; /* Text to add to EQP output */ + StrAccum str; /* EQP output string */ + char zBuf[100]; /* Initial space for EQP output string */ + + pLoop = pLevel->pWLoop; + flags = pLoop->wsFlags; + if( (flags&WHERE_MULTI_OR) || (wctrlFlags&WHERE_ONETABLE_ONLY) ) return 0; + + isSearch = (flags&(WHERE_BTM_LIMIT|WHERE_TOP_LIMIT))!=0 + || ((flags&WHERE_VIRTUALTABLE)==0 && (pLoop->u.btree.nEq>0)) + || (wctrlFlags&(WHERE_ORDERBY_MIN|WHERE_ORDERBY_MAX)); + + sqlite3StrAccumInit(&str, db, zBuf, sizeof(zBuf), SQLITE_MAX_LENGTH); + sqlite3StrAccumAppendAll(&str, isSearch ? "SEARCH" : "SCAN"); + if( pItem->pSelect ){ + sqlite3XPrintf(&str, " SUBQUERY %d", pItem->iSelectId); + }else{ + sqlite3XPrintf(&str, " TABLE %s", pItem->zName); + } + + if( pItem->zAlias ){ + sqlite3XPrintf(&str, " AS %s", pItem->zAlias); + } + if( (flags & (WHERE_IPK|WHERE_VIRTUALTABLE))==0 ){ + const char *zFmt = 0; + Index *pIdx; + + assert( pLoop->u.btree.pIndex!=0 ); + pIdx = pLoop->u.btree.pIndex; + assert( !(flags&WHERE_AUTO_INDEX) || (flags&WHERE_IDX_ONLY) ); + if( !HasRowid(pItem->pTab) && IsPrimaryKeyIndex(pIdx) ){ + if( isSearch ){ + zFmt = "PRIMARY KEY"; + } + }else if( flags & WHERE_PARTIALIDX ){ + zFmt = "AUTOMATIC PARTIAL COVERING INDEX"; + }else if( flags & WHERE_AUTO_INDEX ){ + zFmt = "AUTOMATIC COVERING INDEX"; + }else if( flags & WHERE_IDX_ONLY ){ + zFmt = "COVERING INDEX %s"; + }else{ + zFmt = "INDEX %s"; + } + if( zFmt ){ + sqlite3StrAccumAppend(&str, " USING ", 7); + sqlite3XPrintf(&str, zFmt, pIdx->zName); + explainIndexRange(&str, pLoop); + } + }else if( (flags & WHERE_IPK)!=0 && (flags & WHERE_CONSTRAINT)!=0 ){ + const char *zRangeOp; + if( flags&(WHERE_COLUMN_EQ|WHERE_COLUMN_IN) ){ + zRangeOp = "="; + }else if( (flags&WHERE_BOTH_LIMIT)==WHERE_BOTH_LIMIT ){ + zRangeOp = ">? 
AND rowid<"; + }else if( flags&WHERE_BTM_LIMIT ){ + zRangeOp = ">"; + }else{ + assert( flags&WHERE_TOP_LIMIT); + zRangeOp = "<"; + } + sqlite3XPrintf(&str, " USING INTEGER PRIMARY KEY (rowid%s?)",zRangeOp); + } +#ifndef SQLITE_OMIT_VIRTUALTABLE + else if( (flags & WHERE_VIRTUALTABLE)!=0 ){ + sqlite3XPrintf(&str, " VIRTUAL TABLE INDEX %d:%s", + pLoop->u.vtab.idxNum, pLoop->u.vtab.idxStr); + } +#endif +#ifdef SQLITE_EXPLAIN_ESTIMATED_ROWS + if( pLoop->nOut>=10 ){ + sqlite3XPrintf(&str, " (~%llu rows)", sqlite3LogEstToInt(pLoop->nOut)); + }else{ + sqlite3StrAccumAppend(&str, " (~1 row)", 9); + } +#endif + zMsg = sqlite3StrAccumFinish(&str); + ret = sqlite3VdbeAddOp4(v, OP_Explain, iId, iLevel, iFrom, zMsg,P4_DYNAMIC); + } + return ret; +} +#endif /* SQLITE_OMIT_EXPLAIN */ + +#ifdef SQLITE_ENABLE_STMT_SCANSTATUS +/* +** Configure the VM passed as the first argument with an +** sqlite3_stmt_scanstatus() entry corresponding to the scan used to +** implement level pLvl. Argument pSrclist is a pointer to the FROM +** clause that the scan reads data from. +** +** If argument addrExplain is not 0, it must be the address of an +** OP_Explain instruction that describes the same loop. +*/ +SQLITE_PRIVATE void sqlite3WhereAddScanStatus( + Vdbe *v, /* Vdbe to add scanstatus entry to */ + SrcList *pSrclist, /* FROM clause pLvl reads data from */ + WhereLevel *pLvl, /* Level to add scanstatus() entry for */ + int addrExplain /* Address of OP_Explain (or 0) */ +){ + const char *zObj = 0; + WhereLoop *pLoop = pLvl->pWLoop; + if( (pLoop->wsFlags & WHERE_VIRTUALTABLE)==0 && pLoop->u.btree.pIndex!=0 ){ + zObj = pLoop->u.btree.pIndex->zName; + }else{ + zObj = pSrclist->a[pLvl->iFrom].zName; + } + sqlite3VdbeScanStatus( + v, addrExplain, pLvl->addrBody, pLvl->addrVisit, pLoop->nOut, zObj + ); +} +#endif + + +/* +** Disable a term in the WHERE clause. Except, do not disable the term +** if it controls a LEFT OUTER JOIN and it did not originate in the ON +** or USING clause of that join. +** +** Consider the term t2.z='ok' in the following queries: +** +** (1) SELECT * FROM t1 LEFT JOIN t2 ON t1.a=t2.x WHERE t2.z='ok' +** (2) SELECT * FROM t1 LEFT JOIN t2 ON t1.a=t2.x AND t2.z='ok' +** (3) SELECT * FROM t1, t2 WHERE t1.a=t2.x AND t2.z='ok' +** +** The t2.z='ok' is disabled in the in (2) because it originates +** in the ON clause. The term is disabled in (3) because it is not part +** of a LEFT OUTER JOIN. In (1), the term is not disabled. +** +** Disabling a term causes that term to not be tested in the inner loop +** of the join. Disabling is an optimization. When terms are satisfied +** by indices, we disable them to prevent redundant tests in the inner +** loop. We would get the correct results if nothing were ever disabled, +** but joins might run a little slower. The trick is to disable as much +** as we can without disabling too much. If we disabled in (1), we'd get +** the wrong answer. See ticket #813. +** +** If all the children of a term are disabled, then that term is also +** automatically disabled. In this way, terms get disabled if derived +** virtual terms are tested first. For example: +** +** x GLOB 'abc*' AND x>='abc' AND x<'acd' +** \___________/ \______/ \_____/ +** parent child1 child2 +** +** Only the parent term was in the original WHERE clause. The child1 +** and child2 terms were added by the LIKE optimization. If both of +** the virtual child terms are valid, then testing of the parent can be +** skipped. +** +** Usually the parent term is marked as TERM_CODED. 
But if the parent +** term was originally TERM_LIKE, then the parent gets TERM_LIKECOND instead. +** The TERM_LIKECOND marking indicates that the term should be coded inside +** a conditional such that is only evaluated on the second pass of a +** LIKE-optimization loop, when scanning BLOBs instead of strings. +*/ +static void disableTerm(WhereLevel *pLevel, WhereTerm *pTerm){ + int nLoop = 0; + while( pTerm + && (pTerm->wtFlags & TERM_CODED)==0 + && (pLevel->iLeftJoin==0 || ExprHasProperty(pTerm->pExpr, EP_FromJoin)) + && (pLevel->notReady & pTerm->prereqAll)==0 + ){ + if( nLoop && (pTerm->wtFlags & TERM_LIKE)!=0 ){ + pTerm->wtFlags |= TERM_LIKECOND; + }else{ + pTerm->wtFlags |= TERM_CODED; + } + if( pTerm->iParent<0 ) break; + pTerm = &pTerm->pWC->a[pTerm->iParent]; + pTerm->nChild--; + if( pTerm->nChild!=0 ) break; + nLoop++; + } +} + +/* +** Code an OP_Affinity opcode to apply the column affinity string zAff +** to the n registers starting at base. +** +** As an optimization, SQLITE_AFF_BLOB entries (which are no-ops) at the +** beginning and end of zAff are ignored. If all entries in zAff are +** SQLITE_AFF_BLOB, then no code gets generated. +** +** This routine makes its own copy of zAff so that the caller is free +** to modify zAff after this routine returns. +*/ +static void codeApplyAffinity(Parse *pParse, int base, int n, char *zAff){ + Vdbe *v = pParse->pVdbe; + if( zAff==0 ){ + assert( pParse->db->mallocFailed ); + return; + } + assert( v!=0 ); + + /* Adjust base and n to skip over SQLITE_AFF_BLOB entries at the beginning + ** and end of the affinity string. + */ + while( n>0 && zAff[0]==SQLITE_AFF_BLOB ){ + n--; + base++; + zAff++; + } + while( n>1 && zAff[n-1]==SQLITE_AFF_BLOB ){ + n--; + } + + /* Code the OP_Affinity opcode if there is anything left to do. */ + if( n>0 ){ + sqlite3VdbeAddOp4(v, OP_Affinity, base, n, 0, zAff, n); + sqlite3ExprCacheAffinityChange(pParse, base, n); + } +} + + +/* +** Generate code for a single equality term of the WHERE clause. An equality +** term can be either X=expr or X IN (...). pTerm is the term to be +** coded. +** +** The current value for the constraint is left in register iReg. +** +** For a constraint of the form X=expr, the expression is evaluated and its +** result is left on the stack. For constraints of the form X IN (...) +** this routine sets up a loop that will iterate over all values of X. 
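+**
+** For example (an illustrative sketch, not the exact code emitted): for
+** "x=?1" the value of ?1 is simply evaluated into the result register,
+** while for "x IN (SELECT y FROM t2)" an ephemeral index over the values
+** of y is opened and a small loop is coded that copies each value into
+** the result register in turn, stepping with OP_NextIfOpen (or
+** OP_PrevIfOpen when bRev is true).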
+*/ +static int codeEqualityTerm( + Parse *pParse, /* The parsing context */ + WhereTerm *pTerm, /* The term of the WHERE clause to be coded */ + WhereLevel *pLevel, /* The level of the FROM clause we are working on */ + int iEq, /* Index of the equality term within this level */ + int bRev, /* True for reverse-order IN operations */ + int iTarget /* Attempt to leave results in this register */ +){ + Expr *pX = pTerm->pExpr; + Vdbe *v = pParse->pVdbe; + int iReg; /* Register holding results */ + + assert( iTarget>0 ); + if( pX->op==TK_EQ || pX->op==TK_IS ){ + iReg = sqlite3ExprCodeTarget(pParse, pX->pRight, iTarget); + }else if( pX->op==TK_ISNULL ){ + iReg = iTarget; + sqlite3VdbeAddOp2(v, OP_Null, 0, iReg); +#ifndef SQLITE_OMIT_SUBQUERY + }else{ + int eType; + int iTab; + struct InLoop *pIn; + WhereLoop *pLoop = pLevel->pWLoop; + + if( (pLoop->wsFlags & WHERE_VIRTUALTABLE)==0 + && pLoop->u.btree.pIndex!=0 + && pLoop->u.btree.pIndex->aSortOrder[iEq] + ){ + testcase( iEq==0 ); + testcase( bRev ); + bRev = !bRev; + } + assert( pX->op==TK_IN ); + iReg = iTarget; + eType = sqlite3FindInIndex(pParse, pX, IN_INDEX_LOOP, 0); + if( eType==IN_INDEX_INDEX_DESC ){ + testcase( bRev ); + bRev = !bRev; + } + iTab = pX->iTable; + sqlite3VdbeAddOp2(v, bRev ? OP_Last : OP_Rewind, iTab, 0); + VdbeCoverageIf(v, bRev); + VdbeCoverageIf(v, !bRev); + assert( (pLoop->wsFlags & WHERE_MULTI_OR)==0 ); + pLoop->wsFlags |= WHERE_IN_ABLE; + if( pLevel->u.in.nIn==0 ){ + pLevel->addrNxt = sqlite3VdbeMakeLabel(v); + } + pLevel->u.in.nIn++; + pLevel->u.in.aInLoop = + sqlite3DbReallocOrFree(pParse->db, pLevel->u.in.aInLoop, + sizeof(pLevel->u.in.aInLoop[0])*pLevel->u.in.nIn); + pIn = pLevel->u.in.aInLoop; + if( pIn ){ + pIn += pLevel->u.in.nIn - 1; + pIn->iCur = iTab; + if( eType==IN_INDEX_ROWID ){ + pIn->addrInTop = sqlite3VdbeAddOp2(v, OP_Rowid, iTab, iReg); + }else{ + pIn->addrInTop = sqlite3VdbeAddOp3(v, OP_Column, iTab, 0, iReg); + } + pIn->eEndLoopOp = bRev ? OP_PrevIfOpen : OP_NextIfOpen; + sqlite3VdbeAddOp1(v, OP_IsNull, iReg); VdbeCoverage(v); + }else{ + pLevel->u.in.nIn = 0; + } +#endif + } + disableTerm(pLevel, pTerm); + return iReg; +} + +/* +** Generate code that will evaluate all == and IN constraints for an +** index scan. +** +** For example, consider table t1(a,b,c,d,e,f) with index i1(a,b,c). +** Suppose the WHERE clause is this: a==5 AND b IN (1,2,3) AND c>5 AND c<10 +** The index has as many as three equality constraints, but in this +** example, the third "c" value is an inequality. So only two +** constraints are coded. This routine will generate code to evaluate +** a==5 and b IN (1,2,3). The current values for a and b will be stored +** in consecutive registers and the index of the first register is returned. +** +** In the example above nEq==2. But this subroutine works for any value +** of nEq including 0. If nEq==0, this routine is nearly a no-op. +** The only thing it does is allocate the pLevel->iMem memory cell and +** compute the affinity string. +** +** The nExtraReg parameter is 0 or 1. It is 0 if all WHERE clause constraints +** are == or IN and are covered by the nEq. nExtraReg is 1 if there is +** an inequality constraint (such as the "c>=5 AND c<10" in the example) that +** occurs after the nEq quality constraints. +** +** This routine allocates a range of nEq+nExtraReg memory cells and returns +** the index of the first memory cell in that range. The code that +** calls this routine will use that memory range to store keys for +** start and termination conditions of the loop. 
+** key value of the loop. If one or more IN operators appear, then +** this routine allocates an additional nEq memory cells for internal +** use. +** +** Before returning, *pzAff is set to point to a buffer containing a +** copy of the column affinity string of the index allocated using +** sqlite3DbMalloc(). Except, entries in the copy of the string associated +** with equality constraints that use BLOB or NONE affinity are set to +** SQLITE_AFF_BLOB. This is to deal with SQL such as the following: +** +** CREATE TABLE t1(a TEXT PRIMARY KEY, b); +** SELECT ... FROM t1 AS t2, t1 WHERE t1.a = t2.b; +** +** In the example above, the index on t1(a) has TEXT affinity. But since +** the right hand side of the equality constraint (t2.b) has BLOB/NONE affinity, +** no conversion should be attempted before using a t2.b value as part of +** a key to search the index. Hence the first byte in the returned affinity +** string in this example would be set to SQLITE_AFF_BLOB. +*/ +static int codeAllEqualityTerms( + Parse *pParse, /* Parsing context */ + WhereLevel *pLevel, /* Which nested loop of the FROM we are coding */ + int bRev, /* Reverse the order of IN operators */ + int nExtraReg, /* Number of extra registers to allocate */ + char **pzAff /* OUT: Set to point to affinity string */ +){ + u16 nEq; /* The number of == or IN constraints to code */ + u16 nSkip; /* Number of left-most columns to skip */ + Vdbe *v = pParse->pVdbe; /* The vm under construction */ + Index *pIdx; /* The index being used for this loop */ + WhereTerm *pTerm; /* A single constraint term */ + WhereLoop *pLoop; /* The WhereLoop object */ + int j; /* Loop counter */ + int regBase; /* Base register */ + int nReg; /* Number of registers to allocate */ + char *zAff; /* Affinity string to return */ + + /* This module is only called on query plans that use an index. */ + pLoop = pLevel->pWLoop; + assert( (pLoop->wsFlags & WHERE_VIRTUALTABLE)==0 ); + nEq = pLoop->u.btree.nEq; + nSkip = pLoop->nSkip; + pIdx = pLoop->u.btree.pIndex; + assert( pIdx!=0 ); + + /* Figure out how many memory cells we will need then allocate them. + */ + regBase = pParse->nMem + 1; + nReg = pLoop->u.btree.nEq + nExtraReg; + pParse->nMem += nReg; + + zAff = sqlite3DbStrDup(pParse->db,sqlite3IndexAffinityStr(pParse->db,pIdx)); + assert( zAff!=0 || pParse->db->mallocFailed ); + + if( nSkip ){ + int iIdxCur = pLevel->iIdxCur; + sqlite3VdbeAddOp1(v, (bRev?OP_Last:OP_Rewind), iIdxCur); + VdbeCoverageIf(v, bRev==0); + VdbeCoverageIf(v, bRev!=0); + VdbeComment((v, "begin skip-scan on %s", pIdx->zName)); + j = sqlite3VdbeAddOp0(v, OP_Goto); + pLevel->addrSkip = sqlite3VdbeAddOp4Int(v, (bRev?OP_SeekLT:OP_SeekGT), + iIdxCur, 0, regBase, nSkip); + VdbeCoverageIf(v, bRev==0); + VdbeCoverageIf(v, bRev!=0); + sqlite3VdbeJumpHere(v, j); + for(j=0; j<nSkip; j++){ + sqlite3VdbeAddOp3(v, OP_Column, iIdxCur, j, regBase+j); + testcase( pIdx->aiColumn[j]==XN_EXPR ); + VdbeComment((v, "%s", explainIndexColumnName(pIdx, j))); + } + } + + /* Evaluate the equality constraints + */ + assert( zAff==0 || (int)strlen(zAff)>=nEq ); + for(j=nSkip; j<nEq; j++){ + int r1; + pTerm = pLoop->aLTerm[j]; + assert( pTerm!=0 ); + /* The following testcase is true for indices with redundant columns. 
+ ** Ex: CREATE INDEX i1 ON t1(a,b,a); SELECT * FROM t1 WHERE a=0 AND b=0; */ + testcase( (pTerm->wtFlags & TERM_CODED)!=0 ); + testcase( pTerm->wtFlags & TERM_VIRTUAL ); + r1 = codeEqualityTerm(pParse, pTerm, pLevel, j, bRev, regBase+j); + if( r1!=regBase+j ){ + if( nReg==1 ){ + sqlite3ReleaseTempReg(pParse, regBase); + regBase = r1; + }else{ + sqlite3VdbeAddOp2(v, OP_SCopy, r1, regBase+j); + } + } + testcase( pTerm->eOperator & WO_ISNULL ); + testcase( pTerm->eOperator & WO_IN ); + if( (pTerm->eOperator & (WO_ISNULL|WO_IN))==0 ){ + Expr *pRight = pTerm->pExpr->pRight; + if( (pTerm->wtFlags & TERM_IS)==0 && sqlite3ExprCanBeNull(pRight) ){ + sqlite3VdbeAddOp2(v, OP_IsNull, regBase+j, pLevel->addrBrk); + VdbeCoverage(v); + } + if( zAff ){ + if( sqlite3CompareAffinity(pRight, zAff[j])==SQLITE_AFF_BLOB ){ + zAff[j] = SQLITE_AFF_BLOB; + } + if( sqlite3ExprNeedsNoAffinityChange(pRight, zAff[j]) ){ + zAff[j] = SQLITE_AFF_BLOB; + } + } + } + } + *pzAff = zAff; + return regBase; +} + +#ifndef SQLITE_LIKE_DOESNT_MATCH_BLOBS +/* +** If the most recently coded instruction is a constant range contraint +** that originated from the LIKE optimization, then change the P3 to be +** pLoop->iLikeRepCntr and set P5. +** +** The LIKE optimization trys to evaluate "x LIKE 'abc%'" as a range +** expression: "x>='ABC' AND x<'abd'". But this requires that the range +** scan loop run twice, once for strings and a second time for BLOBs. +** The OP_String opcodes on the second pass convert the upper and lower +** bound string contants to blobs. This routine makes the necessary changes +** to the OP_String opcodes for that to happen. +** +** Except, of course, if SQLITE_LIKE_DOESNT_MATCH_BLOBS is defined, then +** only the one pass through the string space is required, so this routine +** becomes a no-op. +*/ +static void whereLikeOptimizationStringFixup( + Vdbe *v, /* prepared statement under construction */ + WhereLevel *pLevel, /* The loop that contains the LIKE operator */ + WhereTerm *pTerm /* The upper or lower bound just coded */ +){ + if( pTerm->wtFlags & TERM_LIKEOPT ){ + VdbeOp *pOp; + assert( pLevel->iLikeRepCntr>0 ); + pOp = sqlite3VdbeGetOp(v, -1); + assert( pOp!=0 ); + assert( pOp->opcode==OP_String8 + || pTerm->pWC->pWInfo->pParse->db->mallocFailed ); + pOp->p3 = pLevel->iLikeRepCntr; + pOp->p5 = 1; + } +} +#else +# define whereLikeOptimizationStringFixup(A,B,C) +#endif + +#ifdef SQLITE_ENABLE_CURSOR_HINTS +/* +** Information is passed from codeCursorHint() down to individual nodes of +** the expression tree (by sqlite3WalkExpr()) using an instance of this +** structure. +*/ +struct CCurHint { + int iTabCur; /* Cursor for the main table */ + int iIdxCur; /* Cursor for the index, if pIdx!=0. Unused otherwise */ + Index *pIdx; /* The index used to access the table */ +}; + +/* +** This function is called for every node of an expression that is a candidate +** for a cursor hint on an index cursor. For TK_COLUMN nodes that reference +** the table CCurHint.iTabCur, verify that the same column can be +** accessed through the index. If it cannot, then set pWalker->eCode to 1. 
+*/ +static int codeCursorHintCheckExpr(Walker *pWalker, Expr *pExpr){ + struct CCurHint *pHint = pWalker->u.pCCurHint; + assert( pHint->pIdx!=0 ); + if( pExpr->op==TK_COLUMN + && pExpr->iTable==pHint->iTabCur + && sqlite3ColumnOfIndex(pHint->pIdx, pExpr->iColumn)<0 + ){ + pWalker->eCode = 1; + } + return WRC_Continue; +} + + +/* +** This function is called on every node of an expression tree used as an +** argument to the OP_CursorHint instruction. If the node is a TK_COLUMN +** that accesses any table other than the one identified by +** CCurHint.iTabCur, then do the following: +** +** 1) allocate a register and code an OP_Column instruction to read +** the specified column into the new register, and +** +** 2) transform the expression node to a TK_REGISTER node that reads +** from the newly populated register. +** +** Also, if the node is a TK_COLUMN that does access the table idenified +** by pCCurHint.iTabCur, and an index is being used (which we will +** know because CCurHint.pIdx!=0) then transform the TK_COLUMN into +** an access of the index rather than the original table. +*/ +static int codeCursorHintFixExpr(Walker *pWalker, Expr *pExpr){ + int rc = WRC_Continue; + struct CCurHint *pHint = pWalker->u.pCCurHint; + if( pExpr->op==TK_COLUMN ){ + if( pExpr->iTable!=pHint->iTabCur ){ + Vdbe *v = pWalker->pParse->pVdbe; + int reg = ++pWalker->pParse->nMem; /* Register for column value */ + sqlite3ExprCodeGetColumnOfTable( + v, pExpr->pTab, pExpr->iTable, pExpr->iColumn, reg + ); + pExpr->op = TK_REGISTER; + pExpr->iTable = reg; + }else if( pHint->pIdx!=0 ){ + pExpr->iTable = pHint->iIdxCur; + pExpr->iColumn = sqlite3ColumnOfIndex(pHint->pIdx, pExpr->iColumn); + assert( pExpr->iColumn>=0 ); + } + }else if( pExpr->op==TK_AGG_FUNCTION ){ + /* An aggregate function in the WHERE clause of a query means this must + ** be a correlated sub-query, and expression pExpr is an aggregate from + ** the parent context. Do not walk the function arguments in this case. + ** + ** todo: It should be possible to replace this node with a TK_REGISTER + ** expression, as the result of the expression must be stored in a + ** register at this point. The same holds for TK_AGG_COLUMN nodes. */ + rc = WRC_Prune; + } + return rc; +} + +/* +** Insert an OP_CursorHint instruction if it is appropriate to do so. +*/ +static void codeCursorHint( + WhereInfo *pWInfo, /* The where clause */ + WhereLevel *pLevel, /* Which loop to provide hints for */ + WhereTerm *pEndRange /* Hint this end-of-scan boundary term if not NULL */ +){ + Parse *pParse = pWInfo->pParse; + sqlite3 *db = pParse->db; + Vdbe *v = pParse->pVdbe; + Expr *pExpr = 0; + WhereLoop *pLoop = pLevel->pWLoop; + int iCur; + WhereClause *pWC; + WhereTerm *pTerm; + int i, j; + struct CCurHint sHint; + Walker sWalker; + + if( OptimizationDisabled(db, SQLITE_CursorHints) ) return; + iCur = pLevel->iTabCur; + assert( iCur==pWInfo->pTabList->a[pLevel->iFrom].iCursor ); + sHint.iTabCur = iCur; + sHint.iIdxCur = pLevel->iIdxCur; + sHint.pIdx = pLoop->u.btree.pIndex; + memset(&sWalker, 0, sizeof(sWalker)); + sWalker.pParse = pParse; + sWalker.u.pCCurHint = &sHint; + pWC = &pWInfo->sWC; + for(i=0; i<pWC->nTerm; i++){ + pTerm = &pWC->a[i]; + if( pTerm->wtFlags & (TERM_VIRTUAL|TERM_CODED) ) continue; + if( pTerm->prereqAll & pLevel->notReady ) continue; + if( ExprHasProperty(pTerm->pExpr, EP_FromJoin) ) continue; + + /* All terms in pWLoop->aLTerm[] except pEndRange are used to initialize + ** the cursor. 
These terms are not needed as hints for a pure range + ** scan (that has no == terms) so omit them. */ + if( pLoop->u.btree.nEq==0 && pTerm!=pEndRange ){ + for(j=0; j<pLoop->nLTerm && pLoop->aLTerm[j]!=pTerm; j++){} + if( j<pLoop->nLTerm ) continue; + } + + /* No subqueries or non-deterministic functions allowed */ + if( sqlite3ExprContainsSubquery(pTerm->pExpr) ) continue; + + /* For an index scan, make sure referenced columns are actually in + ** the index. */ + if( sHint.pIdx!=0 ){ + sWalker.eCode = 0; + sWalker.xExprCallback = codeCursorHintCheckExpr; + sqlite3WalkExpr(&sWalker, pTerm->pExpr); + if( sWalker.eCode ) continue; + } + + /* If we survive all prior tests, that means this term is worth hinting */ + pExpr = sqlite3ExprAnd(db, pExpr, sqlite3ExprDup(db, pTerm->pExpr, 0)); + } + if( pExpr!=0 ){ + sWalker.xExprCallback = codeCursorHintFixExpr; + sqlite3WalkExpr(&sWalker, pExpr); + sqlite3VdbeAddOp4(v, OP_CursorHint, + (sHint.pIdx ? sHint.iIdxCur : sHint.iTabCur), 0, 0, + (const char*)pExpr, P4_EXPR); + } +} +#else +# define codeCursorHint(A,B,C) /* No-op */ +#endif /* SQLITE_ENABLE_CURSOR_HINTS */ + +/* +** Cursor iCur is open on an intkey b-tree (a table). Register iRowid contains +** a rowid value just read from cursor iIdxCur, open on index pIdx. This +** function generates code to do a deferred seek of cursor iCur to the +** rowid stored in register iRowid. +** +** Normally, this is just: +** +** OP_Seek $iCur $iRowid +** +** However, if the scan currently being coded is a branch of an OR-loop and +** the statement currently being coded is a SELECT, then P3 of the OP_Seek +** is set to iIdxCur and P4 is set to point to an array of integers +** containing one entry for each column of the table cursor iCur is open +** on. For each table column, if the column is the i'th column of the +** index, then the corresponding array entry is set to (i+1). If the column +** does not appear in the index at all, the array entry is set to 0. +*/ +static void codeDeferredSeek( + WhereInfo *pWInfo, /* Where clause context */ + Index *pIdx, /* Index scan is using */ + int iCur, /* Cursor for IPK b-tree */ + int iIdxCur /* Index cursor */ +){ + Parse *pParse = pWInfo->pParse; /* Parse context */ + Vdbe *v = pParse->pVdbe; /* Vdbe to generate code within */ + + assert( iIdxCur>0 ); + assert( pIdx->aiColumn[pIdx->nColumn-1]==-1 ); + + sqlite3VdbeAddOp3(v, OP_Seek, iIdxCur, 0, iCur); + if( (pWInfo->wctrlFlags & WHERE_FORCE_TABLE) + && DbMaskAllZero(sqlite3ParseToplevel(pParse)->writeMask) + ){ + int i; + Table *pTab = pIdx->pTable; + int *ai = (int*)sqlite3DbMallocZero(pParse->db, sizeof(int)*(pTab->nCol+1)); + if( ai ){ + ai[0] = pTab->nCol; + for(i=0; i<pIdx->nColumn-1; i++){ + assert( pIdx->aiColumn[i]<pTab->nCol ); + if( pIdx->aiColumn[i]>=0 ) ai[pIdx->aiColumn[i]+1] = i+1; + } + sqlite3VdbeChangeP4(v, -1, (char*)ai, P4_INTARRAY); + } + } +} + +/* +** Generate code for the start of the iLevel-th loop in the WHERE clause +** implementation described by pWInfo. 
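+**
+** The value returned is the notReady mask for the next level: the mask
+** passed in with the bit for the table coded by this level cleared.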
+*/ +SQLITE_PRIVATE Bitmask sqlite3WhereCodeOneLoopStart( + WhereInfo *pWInfo, /* Complete information about the WHERE clause */ + int iLevel, /* Which level of pWInfo->a[] should be coded */ + Bitmask notReady /* Which tables are currently available */ +){ + int j, k; /* Loop counters */ + int iCur; /* The VDBE cursor for the table */ + int addrNxt; /* Where to jump to continue with the next IN case */ + int omitTable; /* True if we use the index only */ + int bRev; /* True if we need to scan in reverse order */ + WhereLevel *pLevel; /* The where level to be coded */ + WhereLoop *pLoop; /* The WhereLoop object being coded */ + WhereClause *pWC; /* Decomposition of the entire WHERE clause */ + WhereTerm *pTerm; /* A WHERE clause term */ + Parse *pParse; /* Parsing context */ + sqlite3 *db; /* Database connection */ + Vdbe *v; /* The prepared stmt under constructions */ + struct SrcList_item *pTabItem; /* FROM clause term being coded */ + int addrBrk; /* Jump here to break out of the loop */ + int addrCont; /* Jump here to continue with next cycle */ + int iRowidReg = 0; /* Rowid is stored in this register, if not zero */ + int iReleaseReg = 0; /* Temp register to free before returning */ + + pParse = pWInfo->pParse; + v = pParse->pVdbe; + pWC = &pWInfo->sWC; + db = pParse->db; + pLevel = &pWInfo->a[iLevel]; + pLoop = pLevel->pWLoop; + pTabItem = &pWInfo->pTabList->a[pLevel->iFrom]; + iCur = pTabItem->iCursor; + pLevel->notReady = notReady & ~sqlite3WhereGetMask(&pWInfo->sMaskSet, iCur); + bRev = (pWInfo->revMask>>iLevel)&1; + omitTable = (pLoop->wsFlags & WHERE_IDX_ONLY)!=0 + && (pWInfo->wctrlFlags & WHERE_FORCE_TABLE)==0; + VdbeModuleComment((v, "Begin WHERE-loop%d: %s",iLevel,pTabItem->pTab->zName)); + + /* Create labels for the "break" and "continue" instructions + ** for the current loop. Jump to addrBrk to break out of a loop. + ** Jump to cont to go immediately to the next iteration of the + ** loop. + ** + ** When there is an IN operator, we also have a "addrNxt" label that + ** means to continue with the next IN value combination. When + ** there are no IN operators in the constraints, the "addrNxt" label + ** is the same as "addrBrk". + */ + addrBrk = pLevel->addrBrk = pLevel->addrNxt = sqlite3VdbeMakeLabel(v); + addrCont = pLevel->addrCont = sqlite3VdbeMakeLabel(v); + + /* If this is the right table of a LEFT OUTER JOIN, allocate and + ** initialize a memory cell that records if this table matches any + ** row of the left table of the join. + */ + if( pLevel->iFrom>0 && (pTabItem[0].fg.jointype & JT_LEFT)!=0 ){ + pLevel->iLeftJoin = ++pParse->nMem; + sqlite3VdbeAddOp2(v, OP_Integer, 0, pLevel->iLeftJoin); + VdbeComment((v, "init LEFT JOIN no-match flag")); + } + + /* Special case of a FROM clause subquery implemented as a co-routine */ + if( pTabItem->fg.viaCoroutine ){ + int regYield = pTabItem->regReturn; + sqlite3VdbeAddOp3(v, OP_InitCoroutine, regYield, 0, pTabItem->addrFillSub); + pLevel->p2 = sqlite3VdbeAddOp2(v, OP_Yield, regYield, addrBrk); + VdbeCoverage(v); + VdbeComment((v, "next row of \"%s\"", pTabItem->pTab->zName)); + pLevel->op = OP_Goto; + }else + +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( (pLoop->wsFlags & WHERE_VIRTUALTABLE)!=0 ){ + /* Case 1: The table is a virtual-table. Use the VFilter and VNext + ** to access the data. 
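+    **
+    ** The code generated here looks roughly like this (a sketch only; the
+    ** constraint values are also evaluated into the registers following
+    ** iReg, and the VNext at the bottom is coded later by WhereEnd()):
+    **
+    **    Integer idxNum iReg        # arguments for the xFilter call
+    **    Integer nConstraint iReg+1
+    **    VFilter iCur addrBrk iReg  # invoke xFilter; exit loop if empty
+    **  loop:
+    **    <loop body>
+    **    VNext iCur loop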
+ */ + int iReg; /* P3 Value for OP_VFilter */ + int addrNotFound; + int nConstraint = pLoop->nLTerm; + + sqlite3ExprCachePush(pParse); + iReg = sqlite3GetTempRange(pParse, nConstraint+2); + addrNotFound = pLevel->addrBrk; + for(j=0; j<nConstraint; j++){ + int iTarget = iReg+j+2; + pTerm = pLoop->aLTerm[j]; + if( pTerm==0 ) continue; + if( pTerm->eOperator & WO_IN ){ + codeEqualityTerm(pParse, pTerm, pLevel, j, bRev, iTarget); + addrNotFound = pLevel->addrNxt; + }else{ + sqlite3ExprCode(pParse, pTerm->pExpr->pRight, iTarget); + } + } + sqlite3VdbeAddOp2(v, OP_Integer, pLoop->u.vtab.idxNum, iReg); + sqlite3VdbeAddOp2(v, OP_Integer, nConstraint, iReg+1); + sqlite3VdbeAddOp4(v, OP_VFilter, iCur, addrNotFound, iReg, + pLoop->u.vtab.idxStr, + pLoop->u.vtab.needFree ? P4_MPRINTF : P4_STATIC); + VdbeCoverage(v); + pLoop->u.vtab.needFree = 0; + for(j=0; j<nConstraint && j<16; j++){ + if( (pLoop->u.vtab.omitMask>>j)&1 ){ + disableTerm(pLevel, pLoop->aLTerm[j]); + } + } + pLevel->p1 = iCur; + pLevel->op = pWInfo->eOnePass ? OP_Noop : OP_VNext; + pLevel->p2 = sqlite3VdbeCurrentAddr(v); + sqlite3ReleaseTempRange(pParse, iReg, nConstraint+2); + sqlite3ExprCachePop(pParse); + }else +#endif /* SQLITE_OMIT_VIRTUALTABLE */ + + if( (pLoop->wsFlags & WHERE_IPK)!=0 + && (pLoop->wsFlags & (WHERE_COLUMN_IN|WHERE_COLUMN_EQ))!=0 + ){ + /* Case 2: We can directly reference a single row using an + ** equality comparison against the ROWID field. Or + ** we reference multiple rows using a "rowid IN (...)" + ** construct. + */ + assert( pLoop->u.btree.nEq==1 ); + pTerm = pLoop->aLTerm[0]; + assert( pTerm!=0 ); + assert( pTerm->pExpr!=0 ); + assert( omitTable==0 ); + testcase( pTerm->wtFlags & TERM_VIRTUAL ); + iReleaseReg = ++pParse->nMem; + iRowidReg = codeEqualityTerm(pParse, pTerm, pLevel, 0, bRev, iReleaseReg); + if( iRowidReg!=iReleaseReg ) sqlite3ReleaseTempReg(pParse, iReleaseReg); + addrNxt = pLevel->addrNxt; + sqlite3VdbeAddOp2(v, OP_MustBeInt, iRowidReg, addrNxt); VdbeCoverage(v); + sqlite3VdbeAddOp3(v, OP_NotExists, iCur, addrNxt, iRowidReg); + VdbeCoverage(v); + sqlite3ExprCacheAffinityChange(pParse, iRowidReg, 1); + sqlite3ExprCacheStore(pParse, iCur, -1, iRowidReg); + VdbeComment((v, "pk")); + pLevel->op = OP_Noop; + }else if( (pLoop->wsFlags & WHERE_IPK)!=0 + && (pLoop->wsFlags & WHERE_COLUMN_RANGE)!=0 + ){ + /* Case 3: We have an inequality comparison against the ROWID field. + */ + int testOp = OP_Noop; + int start; + int memEndValue = 0; + WhereTerm *pStart, *pEnd; + + assert( omitTable==0 ); + j = 0; + pStart = pEnd = 0; + if( pLoop->wsFlags & WHERE_BTM_LIMIT ) pStart = pLoop->aLTerm[j++]; + if( pLoop->wsFlags & WHERE_TOP_LIMIT ) pEnd = pLoop->aLTerm[j++]; + assert( pStart!=0 || pEnd!=0 ); + if( bRev ){ + pTerm = pStart; + pStart = pEnd; + pEnd = pTerm; + } + codeCursorHint(pWInfo, pLevel, pEnd); + if( pStart ){ + Expr *pX; /* The expression that defines the start bound */ + int r1, rTemp; /* Registers for holding the start boundary */ + + /* The following constant maps TK_xx codes into corresponding + ** seek opcodes. It depends on a particular ordering of TK_xx + */ + const u8 aMoveOp[] = { + /* TK_GT */ OP_SeekGT, + /* TK_LE */ OP_SeekLE, + /* TK_LT */ OP_SeekLT, + /* TK_GE */ OP_SeekGE + }; + assert( TK_LE==TK_GT+1 ); /* Make sure the ordering.. */ + assert( TK_LT==TK_GT+2 ); /* ... of the TK_xx values... */ + assert( TK_GE==TK_GT+3 ); /* ... is correcct. 
*/ + + assert( (pStart->wtFlags & TERM_VNULL)==0 ); + testcase( pStart->wtFlags & TERM_VIRTUAL ); + pX = pStart->pExpr; + assert( pX!=0 ); + testcase( pStart->leftCursor!=iCur ); /* transitive constraints */ + r1 = sqlite3ExprCodeTemp(pParse, pX->pRight, &rTemp); + sqlite3VdbeAddOp3(v, aMoveOp[pX->op-TK_GT], iCur, addrBrk, r1); + VdbeComment((v, "pk")); + VdbeCoverageIf(v, pX->op==TK_GT); + VdbeCoverageIf(v, pX->op==TK_LE); + VdbeCoverageIf(v, pX->op==TK_LT); + VdbeCoverageIf(v, pX->op==TK_GE); + sqlite3ExprCacheAffinityChange(pParse, r1, 1); + sqlite3ReleaseTempReg(pParse, rTemp); + disableTerm(pLevel, pStart); + }else{ + sqlite3VdbeAddOp2(v, bRev ? OP_Last : OP_Rewind, iCur, addrBrk); + VdbeCoverageIf(v, bRev==0); + VdbeCoverageIf(v, bRev!=0); + } + if( pEnd ){ + Expr *pX; + pX = pEnd->pExpr; + assert( pX!=0 ); + assert( (pEnd->wtFlags & TERM_VNULL)==0 ); + testcase( pEnd->leftCursor!=iCur ); /* Transitive constraints */ + testcase( pEnd->wtFlags & TERM_VIRTUAL ); + memEndValue = ++pParse->nMem; + sqlite3ExprCode(pParse, pX->pRight, memEndValue); + if( pX->op==TK_LT || pX->op==TK_GT ){ + testOp = bRev ? OP_Le : OP_Ge; + }else{ + testOp = bRev ? OP_Lt : OP_Gt; + } + disableTerm(pLevel, pEnd); + } + start = sqlite3VdbeCurrentAddr(v); + pLevel->op = bRev ? OP_Prev : OP_Next; + pLevel->p1 = iCur; + pLevel->p2 = start; + assert( pLevel->p5==0 ); + if( testOp!=OP_Noop ){ + iRowidReg = ++pParse->nMem; + sqlite3VdbeAddOp2(v, OP_Rowid, iCur, iRowidReg); + sqlite3ExprCacheStore(pParse, iCur, -1, iRowidReg); + sqlite3VdbeAddOp3(v, testOp, memEndValue, addrBrk, iRowidReg); + VdbeCoverageIf(v, testOp==OP_Le); + VdbeCoverageIf(v, testOp==OP_Lt); + VdbeCoverageIf(v, testOp==OP_Ge); + VdbeCoverageIf(v, testOp==OP_Gt); + sqlite3VdbeChangeP5(v, SQLITE_AFF_NUMERIC | SQLITE_JUMPIFNULL); + } + }else if( pLoop->wsFlags & WHERE_INDEXED ){ + /* Case 4: A scan using an index. + ** + ** The WHERE clause may contain zero or more equality + ** terms ("==" or "IN" operators) that refer to the N + ** left-most columns of the index. It may also contain + ** inequality constraints (>, <, >= or <=) on the indexed + ** column that immediately follows the N equalities. Only + ** the right-most column can be an inequality - the rest must + ** use the "==" and "IN" operators. For example, if the + ** index is on (x,y,z), then the following clauses are all + ** optimized: + ** + ** x=5 + ** x=5 AND y=10 + ** x=5 AND y<10 + ** x=5 AND y>5 AND y<10 + ** x=5 AND y=5 AND z<=10 + ** + ** The z<10 term of the following cannot be used, only + ** the x=5 term: + ** + ** x=5 AND z<10 + ** + ** N may be zero if there are inequality constraints. + ** If there are no inequality constraints, then N is at + ** least one. + ** + ** This case is also used when there are no WHERE clause + ** constraints but an index is selected anyway, in order + ** to force the output order to conform to an ORDER BY. 
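+    **
+    ** As a rough sketch (register numbers are invented and affinity and
+    ** NULL handling are omitted), the loop coded for "x=5 AND y>5 AND y<10"
+    ** using index i(x,y) looks like:
+    **
+    **    Integer 5 r1               # value of the x==5 constraint
+    **    Integer 5 r2               # lower bound on y
+    **    SeekGT  iIdx addrNxt r1 2  # position just past the key (5,5)
+    **    Integer 10 r2              # upper bound on y
+    **  loop:
+    **    IdxGE   iIdx addrNxt r1 2  # exit once the key reaches (5,10)
+    **    <loop body>
+    **    Next    iIdx loop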
+ */ + static const u8 aStartOp[] = { + 0, + 0, + OP_Rewind, /* 2: (!start_constraints && startEq && !bRev) */ + OP_Last, /* 3: (!start_constraints && startEq && bRev) */ + OP_SeekGT, /* 4: (start_constraints && !startEq && !bRev) */ + OP_SeekLT, /* 5: (start_constraints && !startEq && bRev) */ + OP_SeekGE, /* 6: (start_constraints && startEq && !bRev) */ + OP_SeekLE /* 7: (start_constraints && startEq && bRev) */ + }; + static const u8 aEndOp[] = { + OP_IdxGE, /* 0: (end_constraints && !bRev && !endEq) */ + OP_IdxGT, /* 1: (end_constraints && !bRev && endEq) */ + OP_IdxLE, /* 2: (end_constraints && bRev && !endEq) */ + OP_IdxLT, /* 3: (end_constraints && bRev && endEq) */ + }; + u16 nEq = pLoop->u.btree.nEq; /* Number of == or IN terms */ + int regBase; /* Base register holding constraint values */ + WhereTerm *pRangeStart = 0; /* Inequality constraint at range start */ + WhereTerm *pRangeEnd = 0; /* Inequality constraint at range end */ + int startEq; /* True if range start uses ==, >= or <= */ + int endEq; /* True if range end uses ==, >= or <= */ + int start_constraints; /* Start of range is constrained */ + int nConstraint; /* Number of constraint terms */ + Index *pIdx; /* The index we will be using */ + int iIdxCur; /* The VDBE cursor for the index */ + int nExtraReg = 0; /* Number of extra registers needed */ + int op; /* Instruction opcode */ + char *zStartAff; /* Affinity for start of range constraint */ + char cEndAff = 0; /* Affinity for end of range constraint */ + u8 bSeekPastNull = 0; /* True to seek past initial nulls */ + u8 bStopAtNull = 0; /* Add condition to terminate at NULLs */ + + pIdx = pLoop->u.btree.pIndex; + iIdxCur = pLevel->iIdxCur; + assert( nEq>=pLoop->nSkip ); + + /* If this loop satisfies a sort order (pOrderBy) request that + ** was passed to this function to implement a "SELECT min(x) ..." + ** query, then the caller will only allow the loop to run for + ** a single iteration. This means that the first row returned + ** should not have a NULL value stored in 'x'. If column 'x' is + ** the first one after the nEq equality constraints in the index, + ** this requires some special handling. + */ + assert( pWInfo->pOrderBy==0 + || pWInfo->pOrderBy->nExpr==1 + || (pWInfo->wctrlFlags&WHERE_ORDERBY_MIN)==0 ); + if( (pWInfo->wctrlFlags&WHERE_ORDERBY_MIN)!=0 + && pWInfo->nOBSat>0 + && (pIdx->nKeyCol>nEq) + ){ + assert( pLoop->nSkip==0 ); + bSeekPastNull = 1; + nExtraReg = 1; + } + + /* Find any inequality constraint terms for the start and end + ** of the range. 
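+    ** For example, for "x=5 AND y>5 AND y<10" on index i(x,y), the term
+    ** y>5 becomes pRangeStart and y<10 becomes pRangeEnd.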
+ */ + j = nEq; + if( pLoop->wsFlags & WHERE_BTM_LIMIT ){ + pRangeStart = pLoop->aLTerm[j++]; + nExtraReg = 1; + /* Like optimization range constraints always occur in pairs */ + assert( (pRangeStart->wtFlags & TERM_LIKEOPT)==0 || + (pLoop->wsFlags & WHERE_TOP_LIMIT)!=0 ); + } + if( pLoop->wsFlags & WHERE_TOP_LIMIT ){ + pRangeEnd = pLoop->aLTerm[j++]; + nExtraReg = 1; +#ifndef SQLITE_LIKE_DOESNT_MATCH_BLOBS + if( (pRangeEnd->wtFlags & TERM_LIKEOPT)!=0 ){ + assert( pRangeStart!=0 ); /* LIKE opt constraints */ + assert( pRangeStart->wtFlags & TERM_LIKEOPT ); /* occur in pairs */ + pLevel->iLikeRepCntr = ++pParse->nMem; + testcase( bRev ); + testcase( pIdx->aSortOrder[nEq]==SQLITE_SO_DESC ); + sqlite3VdbeAddOp2(v, OP_Integer, + bRev ^ (pIdx->aSortOrder[nEq]==SQLITE_SO_DESC), + pLevel->iLikeRepCntr); + VdbeComment((v, "LIKE loop counter")); + pLevel->addrLikeRep = sqlite3VdbeCurrentAddr(v); + } +#endif + if( pRangeStart==0 + && (j = pIdx->aiColumn[nEq])>=0 + && pIdx->pTable->aCol[j].notNull==0 + ){ + bSeekPastNull = 1; + } + } + assert( pRangeEnd==0 || (pRangeEnd->wtFlags & TERM_VNULL)==0 ); + + /* If we are doing a reverse order scan on an ascending index, or + ** a forward order scan on a descending index, interchange the + ** start and end terms (pRangeStart and pRangeEnd). + */ + if( (nEq<pIdx->nKeyCol && bRev==(pIdx->aSortOrder[nEq]==SQLITE_SO_ASC)) + || (bRev && pIdx->nKeyCol==nEq) + ){ + SWAP(WhereTerm *, pRangeEnd, pRangeStart); + SWAP(u8, bSeekPastNull, bStopAtNull); + } + + /* Generate code to evaluate all constraint terms using == or IN + ** and store the values of those terms in an array of registers + ** starting at regBase. + */ + codeCursorHint(pWInfo, pLevel, pRangeEnd); + regBase = codeAllEqualityTerms(pParse,pLevel,bRev,nExtraReg,&zStartAff); + assert( zStartAff==0 || sqlite3Strlen30(zStartAff)>=nEq ); + if( zStartAff ) cEndAff = zStartAff[nEq]; + addrNxt = pLevel->addrNxt; + + testcase( pRangeStart && (pRangeStart->eOperator & WO_LE)!=0 ); + testcase( pRangeStart && (pRangeStart->eOperator & WO_GE)!=0 ); + testcase( pRangeEnd && (pRangeEnd->eOperator & WO_LE)!=0 ); + testcase( pRangeEnd && (pRangeEnd->eOperator & WO_GE)!=0 ); + startEq = !pRangeStart || pRangeStart->eOperator & (WO_LE|WO_GE); + endEq = !pRangeEnd || pRangeEnd->eOperator & (WO_LE|WO_GE); + start_constraints = pRangeStart || nEq>0; + + /* Seek the index cursor to the start of the range. */ + nConstraint = nEq; + if( pRangeStart ){ + Expr *pRight = pRangeStart->pExpr->pRight; + sqlite3ExprCode(pParse, pRight, regBase+nEq); + whereLikeOptimizationStringFixup(v, pLevel, pRangeStart); + if( (pRangeStart->wtFlags & TERM_VNULL)==0 + && sqlite3ExprCanBeNull(pRight) + ){ + sqlite3VdbeAddOp2(v, OP_IsNull, regBase+nEq, addrNxt); + VdbeCoverage(v); + } + if( zStartAff ){ + if( sqlite3CompareAffinity(pRight, zStartAff[nEq])==SQLITE_AFF_BLOB){ + /* Since the comparison is to be performed with no conversions + ** applied to the operands, set the affinity to apply to pRight to + ** SQLITE_AFF_BLOB. 
*/ + zStartAff[nEq] = SQLITE_AFF_BLOB; + } + if( sqlite3ExprNeedsNoAffinityChange(pRight, zStartAff[nEq]) ){ + zStartAff[nEq] = SQLITE_AFF_BLOB; + } + } + nConstraint++; + testcase( pRangeStart->wtFlags & TERM_VIRTUAL ); + }else if( bSeekPastNull ){ + sqlite3VdbeAddOp2(v, OP_Null, 0, regBase+nEq); + nConstraint++; + startEq = 0; + start_constraints = 1; + } + codeApplyAffinity(pParse, regBase, nConstraint - bSeekPastNull, zStartAff); + op = aStartOp[(start_constraints<<2) + (startEq<<1) + bRev]; + assert( op!=0 ); + sqlite3VdbeAddOp4Int(v, op, iIdxCur, addrNxt, regBase, nConstraint); + VdbeCoverage(v); + VdbeCoverageIf(v, op==OP_Rewind); testcase( op==OP_Rewind ); + VdbeCoverageIf(v, op==OP_Last); testcase( op==OP_Last ); + VdbeCoverageIf(v, op==OP_SeekGT); testcase( op==OP_SeekGT ); + VdbeCoverageIf(v, op==OP_SeekGE); testcase( op==OP_SeekGE ); + VdbeCoverageIf(v, op==OP_SeekLE); testcase( op==OP_SeekLE ); + VdbeCoverageIf(v, op==OP_SeekLT); testcase( op==OP_SeekLT ); + + /* Load the value for the inequality constraint at the end of the + ** range (if any). + */ + nConstraint = nEq; + if( pRangeEnd ){ + Expr *pRight = pRangeEnd->pExpr->pRight; + sqlite3ExprCacheRemove(pParse, regBase+nEq, 1); + sqlite3ExprCode(pParse, pRight, regBase+nEq); + whereLikeOptimizationStringFixup(v, pLevel, pRangeEnd); + if( (pRangeEnd->wtFlags & TERM_VNULL)==0 + && sqlite3ExprCanBeNull(pRight) + ){ + sqlite3VdbeAddOp2(v, OP_IsNull, regBase+nEq, addrNxt); + VdbeCoverage(v); + } + if( sqlite3CompareAffinity(pRight, cEndAff)!=SQLITE_AFF_BLOB + && !sqlite3ExprNeedsNoAffinityChange(pRight, cEndAff) + ){ + codeApplyAffinity(pParse, regBase+nEq, 1, &cEndAff); + } + nConstraint++; + testcase( pRangeEnd->wtFlags & TERM_VIRTUAL ); + }else if( bStopAtNull ){ + sqlite3VdbeAddOp2(v, OP_Null, 0, regBase+nEq); + endEq = 0; + nConstraint++; + } + sqlite3DbFree(db, zStartAff); + + /* Top of the loop body */ + pLevel->p2 = sqlite3VdbeCurrentAddr(v); + + /* Check if the index cursor is past the end of the range. */ + if( nConstraint ){ + op = aEndOp[bRev*2 + endEq]; + sqlite3VdbeAddOp4Int(v, op, iIdxCur, addrNxt, regBase, nConstraint); + testcase( op==OP_IdxGT ); VdbeCoverageIf(v, op==OP_IdxGT ); + testcase( op==OP_IdxGE ); VdbeCoverageIf(v, op==OP_IdxGE ); + testcase( op==OP_IdxLT ); VdbeCoverageIf(v, op==OP_IdxLT ); + testcase( op==OP_IdxLE ); VdbeCoverageIf(v, op==OP_IdxLE ); + } + + /* Seek the table cursor, if required */ + disableTerm(pLevel, pRangeStart); + disableTerm(pLevel, pRangeEnd); + if( omitTable ){ + /* pIdx is a covering index. No need to access the main table. */ + }else if( HasRowid(pIdx->pTable) ){ + if( pWInfo->eOnePass!=ONEPASS_OFF ){ + iRowidReg = ++pParse->nMem; + sqlite3VdbeAddOp2(v, OP_IdxRowid, iIdxCur, iRowidReg); + sqlite3ExprCacheStore(pParse, iCur, -1, iRowidReg); + sqlite3VdbeAddOp3(v, OP_NotExists, iCur, 0, iRowidReg); + VdbeCoverage(v); + }else{ + codeDeferredSeek(pWInfo, pIdx, iCur, iIdxCur); + } + }else if( iCur!=iIdxCur ){ + Index *pPk = sqlite3PrimaryKeyIndex(pIdx->pTable); + iRowidReg = sqlite3GetTempRange(pParse, pPk->nKeyCol); + for(j=0; j<pPk->nKeyCol; j++){ + k = sqlite3ColumnOfIndex(pIdx, pPk->aiColumn[j]); + sqlite3VdbeAddOp3(v, OP_Column, iIdxCur, k, iRowidReg+j); + } + sqlite3VdbeAddOp4Int(v, OP_NotFound, iCur, addrCont, + iRowidReg, pPk->nKeyCol); VdbeCoverage(v); + } + + /* Record the instruction used to terminate the loop. Disable + ** WHERE clause terms made redundant by the index range scan. 
+ */ + if( pLoop->wsFlags & WHERE_ONEROW ){ + pLevel->op = OP_Noop; + }else if( bRev ){ + pLevel->op = OP_Prev; + }else{ + pLevel->op = OP_Next; + } + pLevel->p1 = iIdxCur; + pLevel->p3 = (pLoop->wsFlags&WHERE_UNQ_WANTED)!=0 ? 1:0; + if( (pLoop->wsFlags & WHERE_CONSTRAINT)==0 ){ + pLevel->p5 = SQLITE_STMTSTATUS_FULLSCAN_STEP; + }else{ + assert( pLevel->p5==0 ); + } + }else + +#ifndef SQLITE_OMIT_OR_OPTIMIZATION + if( pLoop->wsFlags & WHERE_MULTI_OR ){ + /* Case 5: Two or more separately indexed terms connected by OR + ** + ** Example: + ** + ** CREATE TABLE t1(a,b,c,d); + ** CREATE INDEX i1 ON t1(a); + ** CREATE INDEX i2 ON t1(b); + ** CREATE INDEX i3 ON t1(c); + ** + ** SELECT * FROM t1 WHERE a=5 OR b=7 OR (c=11 AND d=13) + ** + ** In the example, there are three indexed terms connected by OR. + ** The top of the loop looks like this: + ** + ** Null 1 # Zero the rowset in reg 1 + ** + ** Then, for each indexed term, the following. The arguments to + ** RowSetTest are such that the rowid of the current row is inserted + ** into the RowSet. If it is already present, control skips the + ** Gosub opcode and jumps straight to the code generated by WhereEnd(). + ** + ** sqlite3WhereBegin(<term>) + ** RowSetTest # Insert rowid into rowset + ** Gosub 2 A + ** sqlite3WhereEnd() + ** + ** Following the above, code to terminate the loop. Label A, the target + ** of the Gosub above, jumps to the instruction right after the Goto. + ** + ** Null 1 # Zero the rowset in reg 1 + ** Goto B # The loop is finished. + ** + ** A: <loop body> # Return data, whatever. + ** + ** Return 2 # Jump back to the Gosub + ** + ** B: <after the loop> + ** + ** Added 2014-05-26: If the table is a WITHOUT ROWID table, then + ** use an ephemeral index instead of a RowSet to record the primary + ** keys of the rows we have already seen. + ** + */ + WhereClause *pOrWc; /* The OR-clause broken out into subterms */ + SrcList *pOrTab; /* Shortened table list or OR-clause generation */ + Index *pCov = 0; /* Potential covering index (or NULL) */ + int iCovCur = pParse->nTab++; /* Cursor used for index scans (if any) */ + + int regReturn = ++pParse->nMem; /* Register used with OP_Gosub */ + int regRowset = 0; /* Register for RowSet object */ + int regRowid = 0; /* Register holding rowid */ + int iLoopBody = sqlite3VdbeMakeLabel(v); /* Start of loop body */ + int iRetInit; /* Address of regReturn init */ + int untestedTerms = 0; /* Some terms not completely tested */ + int ii; /* Loop counter */ + u16 wctrlFlags; /* Flags for sub-WHERE clause */ + Expr *pAndExpr = 0; /* An ".. AND (...)" expression */ + Table *pTab = pTabItem->pTab; + + pTerm = pLoop->aLTerm[0]; + assert( pTerm!=0 ); + assert( pTerm->eOperator & WO_OR ); + assert( (pTerm->wtFlags & TERM_ORINFO)!=0 ); + pOrWc = &pTerm->u.pOrInfo->wc; + pLevel->op = OP_Return; + pLevel->p1 = regReturn; + + /* Set up a new SrcList in pOrTab containing the table being scanned + ** by this loop in the a[0] slot and all notReady tables in a[1..] slots. + ** This becomes the SrcList in the recursive call to sqlite3WhereBegin(). 
+ */ + if( pWInfo->nLevel>1 ){ + int nNotReady; /* The number of notReady tables */ + struct SrcList_item *origSrc; /* Original list of tables */ + nNotReady = pWInfo->nLevel - iLevel - 1; + pOrTab = sqlite3StackAllocRaw(db, + sizeof(*pOrTab)+ nNotReady*sizeof(pOrTab->a[0])); + if( pOrTab==0 ) return notReady; + pOrTab->nAlloc = (u8)(nNotReady + 1); + pOrTab->nSrc = pOrTab->nAlloc; + memcpy(pOrTab->a, pTabItem, sizeof(*pTabItem)); + origSrc = pWInfo->pTabList->a; + for(k=1; k<=nNotReady; k++){ + memcpy(&pOrTab->a[k], &origSrc[pLevel[k].iFrom], sizeof(pOrTab->a[k])); + } + }else{ + pOrTab = pWInfo->pTabList; + } + + /* Initialize the rowset register to contain NULL. An SQL NULL is + ** equivalent to an empty rowset. Or, create an ephemeral index + ** capable of holding primary keys in the case of a WITHOUT ROWID. + ** + ** Also initialize regReturn to contain the address of the instruction + ** immediately following the OP_Return at the bottom of the loop. This + ** is required in a few obscure LEFT JOIN cases where control jumps + ** over the top of the loop into the body of it. In this case the + ** correct response for the end-of-loop code (the OP_Return) is to + ** fall through to the next instruction, just as an OP_Next does if + ** called on an uninitialized cursor. + */ + if( (pWInfo->wctrlFlags & WHERE_DUPLICATES_OK)==0 ){ + if( HasRowid(pTab) ){ + regRowset = ++pParse->nMem; + sqlite3VdbeAddOp2(v, OP_Null, 0, regRowset); + }else{ + Index *pPk = sqlite3PrimaryKeyIndex(pTab); + regRowset = pParse->nTab++; + sqlite3VdbeAddOp2(v, OP_OpenEphemeral, regRowset, pPk->nKeyCol); + sqlite3VdbeSetP4KeyInfo(pParse, pPk); + } + regRowid = ++pParse->nMem; + } + iRetInit = sqlite3VdbeAddOp2(v, OP_Integer, 0, regReturn); + + /* If the original WHERE clause is z of the form: (x1 OR x2 OR ...) AND y + ** Then for every term xN, evaluate as the subexpression: xN AND z + ** That way, terms in y that are factored into the disjunction will + ** be picked up by the recursive calls to sqlite3WhereBegin() below. + ** + ** Actually, each subexpression is converted to "xN AND w" where w is + ** the "interesting" terms of z - terms that did not originate in the + ** ON or USING clause of a LEFT JOIN, and terms that are usable as + ** indices. + ** + ** This optimization also only applies if the (x1 OR x2 OR ...) term + ** is not contained in the ON clause of a LEFT JOIN. + ** See ticket http://www.sqlite.org/src/info/f2369304e4 + */ + if( pWC->nTerm>1 ){ + int iTerm; + for(iTerm=0; iTerm<pWC->nTerm; iTerm++){ + Expr *pExpr = pWC->a[iTerm].pExpr; + if( &pWC->a[iTerm] == pTerm ) continue; + if( ExprHasProperty(pExpr, EP_FromJoin) ) continue; + testcase( pWC->a[iTerm].wtFlags & TERM_VIRTUAL ); + testcase( pWC->a[iTerm].wtFlags & TERM_CODED ); + if( (pWC->a[iTerm].wtFlags & (TERM_VIRTUAL|TERM_CODED))!=0 ) continue; + if( (pWC->a[iTerm].eOperator & WO_ALL)==0 ) continue; + testcase( pWC->a[iTerm].wtFlags & TERM_ORINFO ); + pExpr = sqlite3ExprDup(db, pExpr, 0); + pAndExpr = sqlite3ExprAnd(db, pAndExpr, pExpr); + } + if( pAndExpr ){ + pAndExpr = sqlite3PExpr(pParse, TK_AND|TKFLG_DONTFOLD, 0, pAndExpr, 0); + } + } + + /* Run a separate WHERE clause for each term of the OR clause. After + ** eliminating duplicates from other WHERE clauses, the action for each + ** sub-WHERE clause is to to invoke the main loop body as a subroutine. 
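+    ** For the example in the Case 5 comment above ("a=5 OR b=7 OR
+    ** (c=11 AND d=13)") this loop makes three recursive calls to
+    ** sqlite3WhereBegin(), one per disjunct, each restricted to a
+    ** single-table scan by the wctrlFlags set below.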
+ */ + wctrlFlags = WHERE_OMIT_OPEN_CLOSE + | WHERE_FORCE_TABLE + | WHERE_ONETABLE_ONLY + | WHERE_NO_AUTOINDEX; + for(ii=0; ii<pOrWc->nTerm; ii++){ + WhereTerm *pOrTerm = &pOrWc->a[ii]; + if( pOrTerm->leftCursor==iCur || (pOrTerm->eOperator & WO_AND)!=0 ){ + WhereInfo *pSubWInfo; /* Info for single OR-term scan */ + Expr *pOrExpr = pOrTerm->pExpr; /* Current OR clause term */ + int jmp1 = 0; /* Address of jump operation */ + if( pAndExpr && !ExprHasProperty(pOrExpr, EP_FromJoin) ){ + pAndExpr->pLeft = pOrExpr; + pOrExpr = pAndExpr; + } + /* Loop through table entries that match term pOrTerm. */ + WHERETRACE(0xffff, ("Subplan for OR-clause:\n")); + pSubWInfo = sqlite3WhereBegin(pParse, pOrTab, pOrExpr, 0, 0, + wctrlFlags, iCovCur); + assert( pSubWInfo || pParse->nErr || db->mallocFailed ); + if( pSubWInfo ){ + WhereLoop *pSubLoop; + int addrExplain = sqlite3WhereExplainOneScan( + pParse, pOrTab, &pSubWInfo->a[0], iLevel, pLevel->iFrom, 0 + ); + sqlite3WhereAddScanStatus(v, pOrTab, &pSubWInfo->a[0], addrExplain); + + /* This is the sub-WHERE clause body. First skip over + ** duplicate rows from prior sub-WHERE clauses, and record the + ** rowid (or PRIMARY KEY) for the current row so that the same + ** row will be skipped in subsequent sub-WHERE clauses. + */ + if( (pWInfo->wctrlFlags & WHERE_DUPLICATES_OK)==0 ){ + int r; + int iSet = ((ii==pOrWc->nTerm-1)?-1:ii); + if( HasRowid(pTab) ){ + r = sqlite3ExprCodeGetColumn(pParse, pTab, -1, iCur, regRowid, 0); + jmp1 = sqlite3VdbeAddOp4Int(v, OP_RowSetTest, regRowset, 0, + r,iSet); + VdbeCoverage(v); + }else{ + Index *pPk = sqlite3PrimaryKeyIndex(pTab); + int nPk = pPk->nKeyCol; + int iPk; + + /* Read the PK into an array of temp registers. */ + r = sqlite3GetTempRange(pParse, nPk); + for(iPk=0; iPk<nPk; iPk++){ + int iCol = pPk->aiColumn[iPk]; + sqlite3ExprCodeGetColumnToReg(pParse, pTab, iCol, iCur, r+iPk); + } + + /* Check if the temp table already contains this key. If so, + ** the row has already been included in the result set and + ** can be ignored (by jumping past the Gosub below). Otherwise, + ** insert the key into the temp table and proceed with processing + ** the row. + ** + ** Use some of the same optimizations as OP_RowSetTest: If iSet + ** is zero, assume that the key cannot already be present in + ** the temp table. And if iSet is -1, assume that there is no + ** need to insert the key into the temp table, as it will never + ** be tested for. */ + if( iSet ){ + jmp1 = sqlite3VdbeAddOp4Int(v, OP_Found, regRowset, 0, r, nPk); + VdbeCoverage(v); + } + if( iSet>=0 ){ + sqlite3VdbeAddOp3(v, OP_MakeRecord, r, nPk, regRowid); + sqlite3VdbeAddOp3(v, OP_IdxInsert, regRowset, regRowid, 0); + if( iSet ) sqlite3VdbeChangeP5(v, OPFLAG_USESEEKRESULT); + } + + /* Release the array of temp registers */ + sqlite3ReleaseTempRange(pParse, r, nPk); + } + } + + /* Invoke the main loop body as a subroutine */ + sqlite3VdbeAddOp2(v, OP_Gosub, regReturn, iLoopBody); + + /* Jump here (skipping the main loop body subroutine) if the + ** current sub-WHERE row is a duplicate from prior sub-WHEREs. */ + if( jmp1 ) sqlite3VdbeJumpHere(v, jmp1); + + /* The pSubWInfo->untestedTerms flag means that this OR term + ** contained one or more AND term from a notReady table. The + ** terms from the notReady table could not be tested and will + ** need to be tested later. 
+ */ + if( pSubWInfo->untestedTerms ) untestedTerms = 1; + + /* If all of the OR-connected terms are optimized using the same + ** index, and the index is opened using the same cursor number + ** by each call to sqlite3WhereBegin() made by this loop, it may + ** be possible to use that index as a covering index. + ** + ** If the call to sqlite3WhereBegin() above resulted in a scan that + ** uses an index, and this is either the first OR-connected term + ** processed or the index is the same as that used by all previous + ** terms, set pCov to the candidate covering index. Otherwise, set + ** pCov to NULL to indicate that no candidate covering index will + ** be available. + */ + pSubLoop = pSubWInfo->a[0].pWLoop; + assert( (pSubLoop->wsFlags & WHERE_AUTO_INDEX)==0 ); + if( (pSubLoop->wsFlags & WHERE_INDEXED)!=0 + && (ii==0 || pSubLoop->u.btree.pIndex==pCov) + && (HasRowid(pTab) || !IsPrimaryKeyIndex(pSubLoop->u.btree.pIndex)) + ){ + assert( pSubWInfo->a[0].iIdxCur==iCovCur ); + pCov = pSubLoop->u.btree.pIndex; + wctrlFlags |= WHERE_REOPEN_IDX; + }else{ + pCov = 0; + } + + /* Finish the loop through table entries that match term pOrTerm. */ + sqlite3WhereEnd(pSubWInfo); + } + } + } + pLevel->u.pCovidx = pCov; + if( pCov ) pLevel->iIdxCur = iCovCur; + if( pAndExpr ){ + pAndExpr->pLeft = 0; + sqlite3ExprDelete(db, pAndExpr); + } + sqlite3VdbeChangeP1(v, iRetInit, sqlite3VdbeCurrentAddr(v)); + sqlite3VdbeGoto(v, pLevel->addrBrk); + sqlite3VdbeResolveLabel(v, iLoopBody); + + if( pWInfo->nLevel>1 ) sqlite3StackFree(db, pOrTab); + if( !untestedTerms ) disableTerm(pLevel, pTerm); + }else +#endif /* SQLITE_OMIT_OR_OPTIMIZATION */ + + { + /* Case 6: There is no usable index. We must do a complete + ** scan of the entire table. + */ + static const u8 aStep[] = { OP_Next, OP_Prev }; + static const u8 aStart[] = { OP_Rewind, OP_Last }; + assert( bRev==0 || bRev==1 ); + if( pTabItem->fg.isRecursive ){ + /* Tables marked isRecursive have only a single row that is stored in + ** a pseudo-cursor. No need to Rewind or Next such cursors. */ + pLevel->op = OP_Noop; + }else{ + codeCursorHint(pWInfo, pLevel, 0); + pLevel->op = aStep[bRev]; + pLevel->p1 = iCur; + pLevel->p2 = 1 + sqlite3VdbeAddOp2(v, aStart[bRev], iCur, addrBrk); + VdbeCoverageIf(v, bRev==0); + VdbeCoverageIf(v, bRev!=0); + pLevel->p5 = SQLITE_STMTSTATUS_FULLSCAN_STEP; + } + } + +#ifdef SQLITE_ENABLE_STMT_SCANSTATUS + pLevel->addrVisit = sqlite3VdbeCurrentAddr(v); +#endif + + /* Insert code to test every subexpression that can be completely + ** computed using the current set of tables. 
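+    ** [Illustrative note added in review; not part of the upstream sqlite3.c
+    ** text.] For example, while coding the outer loop of the hypothetical
+    ** query
+    **
+    **     SELECT * FROM t1, t2 WHERE t1.x>10 AND t1.a=t2.b;
+    **
+    ** the term t1.x>10 depends only on t1 and can be tested here, whereas
+    ** t1.a=t2.b also references the not-yet-coded t2 cursor and so is
+    ** deferred until the inner loop for t2 has been coded.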
+ */ + for(pTerm=pWC->a, j=pWC->nTerm; j>0; j--, pTerm++){ + Expr *pE; + int skipLikeAddr = 0; + testcase( pTerm->wtFlags & TERM_VIRTUAL ); + testcase( pTerm->wtFlags & TERM_CODED ); + if( pTerm->wtFlags & (TERM_VIRTUAL|TERM_CODED) ) continue; + if( (pTerm->prereqAll & pLevel->notReady)!=0 ){ + testcase( pWInfo->untestedTerms==0 + && (pWInfo->wctrlFlags & WHERE_ONETABLE_ONLY)!=0 ); + pWInfo->untestedTerms = 1; + continue; + } + pE = pTerm->pExpr; + assert( pE!=0 ); + if( pLevel->iLeftJoin && !ExprHasProperty(pE, EP_FromJoin) ){ + continue; + } + if( pTerm->wtFlags & TERM_LIKECOND ){ +#ifdef SQLITE_LIKE_DOESNT_MATCH_BLOBS + continue; +#else + assert( pLevel->iLikeRepCntr>0 ); + skipLikeAddr = sqlite3VdbeAddOp1(v, OP_IfNot, pLevel->iLikeRepCntr); + VdbeCoverage(v); +#endif + } + sqlite3ExprIfFalse(pParse, pE, addrCont, SQLITE_JUMPIFNULL); + if( skipLikeAddr ) sqlite3VdbeJumpHere(v, skipLikeAddr); + pTerm->wtFlags |= TERM_CODED; + } + + /* Insert code to test for implied constraints based on transitivity + ** of the "==" operator. + ** + ** Example: If the WHERE clause contains "t1.a=t2.b" and "t2.b=123" + ** and we are coding the t1 loop and the t2 loop has not yet coded, + ** then we cannot use the "t1.a=t2.b" constraint, but we can code + ** the implied "t1.a=123" constraint. + */ + for(pTerm=pWC->a, j=pWC->nTerm; j>0; j--, pTerm++){ + Expr *pE, *pEAlt; + WhereTerm *pAlt; + if( pTerm->wtFlags & (TERM_VIRTUAL|TERM_CODED) ) continue; + if( (pTerm->eOperator & (WO_EQ|WO_IS))==0 ) continue; + if( (pTerm->eOperator & WO_EQUIV)==0 ) continue; + if( pTerm->leftCursor!=iCur ) continue; + if( pLevel->iLeftJoin ) continue; + pE = pTerm->pExpr; + assert( !ExprHasProperty(pE, EP_FromJoin) ); + assert( (pTerm->prereqRight & pLevel->notReady)!=0 ); + pAlt = sqlite3WhereFindTerm(pWC, iCur, pTerm->u.leftColumn, notReady, + WO_EQ|WO_IN|WO_IS, 0); + if( pAlt==0 ) continue; + if( pAlt->wtFlags & (TERM_CODED) ) continue; + testcase( pAlt->eOperator & WO_EQ ); + testcase( pAlt->eOperator & WO_IS ); + testcase( pAlt->eOperator & WO_IN ); + VdbeModuleComment((v, "begin transitive constraint")); + pEAlt = sqlite3StackAllocRaw(db, sizeof(*pEAlt)); + if( pEAlt ){ + *pEAlt = *pAlt->pExpr; + pEAlt->pLeft = pE->pLeft; + sqlite3ExprIfFalse(pParse, pEAlt, addrCont, SQLITE_JUMPIFNULL); + sqlite3StackFree(db, pEAlt); + } + } + + /* For a LEFT OUTER JOIN, generate code that will record the fact that + ** at least one row of the right table has matched the left table. + */ + if( pLevel->iLeftJoin ){ + pLevel->addrFirst = sqlite3VdbeCurrentAddr(v); + sqlite3VdbeAddOp2(v, OP_Integer, 1, pLevel->iLeftJoin); + VdbeComment((v, "record LEFT JOIN hit")); + sqlite3ExprCacheClear(pParse); + for(pTerm=pWC->a, j=0; j<pWC->nTerm; j++, pTerm++){ + testcase( pTerm->wtFlags & TERM_VIRTUAL ); + testcase( pTerm->wtFlags & TERM_CODED ); + if( pTerm->wtFlags & (TERM_VIRTUAL|TERM_CODED) ) continue; + if( (pTerm->prereqAll & pLevel->notReady)!=0 ){ + assert( pWInfo->untestedTerms ); + continue; + } + assert( pTerm->pExpr ); + sqlite3ExprIfFalse(pParse, pTerm->pExpr, addrCont, SQLITE_JUMPIFNULL); + pTerm->wtFlags |= TERM_CODED; + } + } + + return pLevel->notReady; +} + +/************** End of wherecode.c *******************************************/ +/************** Begin file whereexpr.c ***************************************/ +/* +** 2015-06-08 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. 
+** May you find forgiveness for yourself and forgive others.
+** May you share freely, never taking more than you give.
+**
+*************************************************************************
+** This module contains C code that generates VDBE code used to process
+** the WHERE clause of SQL statements.
+**
+** This file was originally part of where.c but was split out to improve
+** readability and editability. This file contains utility routines for
+** analyzing Expr objects in the WHERE clause.
+*/
+/* #include "sqliteInt.h" */
+/* #include "whereInt.h" */
+
+/* Forward declarations */
+static void exprAnalyze(SrcList*, WhereClause*, int);
 
 /*
 ** Deallocate all memory associated with a WhereOrInfo object.
 */
 static void whereOrInfoDelete(sqlite3 *db, WhereOrInfo *p){
-  whereClauseClear(&p->wc);
+  sqlite3WhereClauseClear(&p->wc);
   sqlite3DbFree(db, p);
 }
 
 /*
 ** Deallocate all memory associated with a WhereAndInfo object.
 */
 static void whereAndInfoDelete(sqlite3 *db, WhereAndInfo *p){
-  whereClauseClear(&p->wc);
+  sqlite3WhereClauseClear(&p->wc);
   sqlite3DbFree(db, p);
 }
 
-/*
-** Deallocate a WhereClause structure. The WhereClause structure
-** itself is not freed. This routine is the inverse of whereClauseInit().
-*/
-static void whereClauseClear(WhereClause *pWC){
-  int i;
-  WhereTerm *a;
-  sqlite3 *db = pWC->pParse->db;
-  for(i=pWC->nTerm-1, a=pWC->a; i>=0; i--, a++){
-    if( a->wtFlags & TERM_DYNAMIC ){
-      sqlite3ExprDelete(db, a->pExpr);
-    }
-    if( a->wtFlags & TERM_ORINFO ){
-      whereOrInfoDelete(db, a->u.pOrInfo);
-    }else if( a->wtFlags & TERM_ANDINFO ){
-      whereAndInfoDelete(db, a->u.pAndInfo);
-    }
-  }
-  if( pWC->a!=pWC->aStatic ){
-    sqlite3DbFree(db, pWC->a);
-  }
-}
-
 /*
 ** Add a single new WhereTerm entry to the WhereClause object pWC.
 ** The new WhereTerm object is constructed from Expr p and with wtFlags.
 ** The index in pWC->a[] of the new WhereTerm is returned on success.
 ** 0 is returned if the new WhereTerm could not be added due to a memory
@@ -88415,17 +121927,18 @@
 ** WARNING: This routine might reallocate the space used to store
 ** WhereTerms. All pointers to WhereTerms should be invalidated after
 ** calling this routine. Such pointers may be reinitialized by referencing
 ** the pWC->a[] array.
*/ -static int whereClauseInsert(WhereClause *pWC, Expr *p, u8 wtFlags){ +static int whereClauseInsert(WhereClause *pWC, Expr *p, u16 wtFlags){ WhereTerm *pTerm; int idx; + testcase( wtFlags & TERM_VIRTUAL ); if( pWC->nTerm>=pWC->nSlot ){ WhereTerm *pOld = pWC->a; - sqlite3 *db = pWC->pParse->db; - pWC->a = sqlite3DbMallocRaw(db, sizeof(pWC->a[0])*pWC->nSlot*2 ); + sqlite3 *db = pWC->pWInfo->pParse->db; + pWC->a = sqlite3DbMallocRawNN(db, sizeof(pWC->a[0])*pWC->nSlot*2 ); if( pWC->a==0 ){ if( wtFlags & TERM_DYNAMIC ){ sqlite3ExprDelete(db, p); } pWC->a = pOld; @@ -88434,174 +121947,67 @@ memcpy(pWC->a, pOld, sizeof(pWC->a[0])*pWC->nTerm); if( pOld!=pWC->aStatic ){ sqlite3DbFree(db, pOld); } pWC->nSlot = sqlite3DbMallocSize(db, pWC->a)/sizeof(pWC->a[0]); + memset(&pWC->a[pWC->nTerm], 0, sizeof(pWC->a[0])*(pWC->nSlot-pWC->nTerm)); } pTerm = &pWC->a[idx = pWC->nTerm++]; - pTerm->pExpr = p; + if( p && ExprHasProperty(p, EP_Unlikely) ){ + pTerm->truthProb = sqlite3LogEst(p->iTable) - 270; + }else{ + pTerm->truthProb = 1; + } + pTerm->pExpr = sqlite3ExprSkipCollate(p); pTerm->wtFlags = wtFlags; pTerm->pWC = pWC; pTerm->iParent = -1; return idx; } -/* -** This routine identifies subexpressions in the WHERE clause where -** each subexpression is separated by the AND operator or some other -** operator specified in the op parameter. The WhereClause structure -** is filled with pointers to subexpressions. For example: -** -** WHERE a=='hello' AND coalesce(b,11)<10 AND (c+12!=d OR c==22) -** \________/ \_______________/ \________________/ -** slot[0] slot[1] slot[2] -** -** The original WHERE clause in pExpr is unaltered. All this routine -** does is make slot[] entries point to substructure within pExpr. -** -** In the previous sentence and in the diagram, "slot[]" refers to -** the WhereClause.a[] array. The slot[] array grows as needed to contain -** all terms of the WHERE clause. -*/ -static void whereSplit(WhereClause *pWC, Expr *pExpr, int op){ - pWC->op = (u8)op; - if( pExpr==0 ) return; - if( pExpr->op!=op ){ - whereClauseInsert(pWC, pExpr, 0); - }else{ - whereSplit(pWC, pExpr->pLeft, op); - whereSplit(pWC, pExpr->pRight, op); - } -} - -/* -** Initialize an expression mask set (a WhereMaskSet object) -*/ -#define initMaskSet(P) memset(P, 0, sizeof(*P)) - -/* -** Return the bitmask for the given cursor number. Return 0 if -** iCursor is not in the set. -*/ -static Bitmask getMask(WhereMaskSet *pMaskSet, int iCursor){ - int i; - assert( pMaskSet->n<=sizeof(Bitmask)*8 ); - for(i=0; i<pMaskSet->n; i++){ - if( pMaskSet->ix[i]==iCursor ){ - return ((Bitmask)1)<<i; - } - } - return 0; -} - -/* -** Create a new mask for cursor iCursor. -** -** There is one cursor per table in the FROM clause. The number of -** tables in the FROM clause is limited by a test early in the -** sqlite3WhereBegin() routine. So we know that the pMaskSet->ix[] -** array will never overflow. -*/ -static void createMask(WhereMaskSet *pMaskSet, int iCursor){ - assert( pMaskSet->n < ArraySize(pMaskSet->ix) ); - pMaskSet->ix[pMaskSet->n++] = iCursor; -} - -/* -** This routine walks (recursively) an expression tree and generates -** a bitmask indicating which tables are used in that expression -** tree. -** -** In order for this routine to work, the calling function must have -** previously invoked sqlite3ResolveExprNames() on the expression. See -** the header comment on that routine for additional information. 
-** The sqlite3ResolveExprNames() routines looks for column names and -** sets their opcodes to TK_COLUMN and their Expr.iTable fields to -** the VDBE cursor number of the table. This routine just has to -** translate the cursor numbers into bitmask values and OR all -** the bitmasks together. -*/ -static Bitmask exprListTableUsage(WhereMaskSet*, ExprList*); -static Bitmask exprSelectTableUsage(WhereMaskSet*, Select*); -static Bitmask exprTableUsage(WhereMaskSet *pMaskSet, Expr *p){ - Bitmask mask = 0; - if( p==0 ) return 0; - if( p->op==TK_COLUMN ){ - mask = getMask(pMaskSet, p->iTable); - return mask; - } - mask = exprTableUsage(pMaskSet, p->pRight); - mask |= exprTableUsage(pMaskSet, p->pLeft); - if( ExprHasProperty(p, EP_xIsSelect) ){ - mask |= exprSelectTableUsage(pMaskSet, p->x.pSelect); - }else{ - mask |= exprListTableUsage(pMaskSet, p->x.pList); - } - return mask; -} -static Bitmask exprListTableUsage(WhereMaskSet *pMaskSet, ExprList *pList){ - int i; - Bitmask mask = 0; - if( pList ){ - for(i=0; i<pList->nExpr; i++){ - mask |= exprTableUsage(pMaskSet, pList->a[i].pExpr); - } - } - return mask; -} -static Bitmask exprSelectTableUsage(WhereMaskSet *pMaskSet, Select *pS){ - Bitmask mask = 0; - while( pS ){ - mask |= exprListTableUsage(pMaskSet, pS->pEList); - mask |= exprListTableUsage(pMaskSet, pS->pGroupBy); - mask |= exprListTableUsage(pMaskSet, pS->pOrderBy); - mask |= exprTableUsage(pMaskSet, pS->pWhere); - mask |= exprTableUsage(pMaskSet, pS->pHaving); - pS = pS->pPrior; - } - return mask; -} - /* ** Return TRUE if the given operator is one of the operators that is ** allowed for an indexable WHERE clause term. The allowed operators are -** "=", "<", ">", "<=", ">=", and "IN". +** "=", "<", ">", "<=", ">=", "IN", and "IS NULL" */ static int allowedOp(int op){ assert( TK_GT>TK_EQ && TK_GT<TK_GE ); assert( TK_LT>TK_EQ && TK_LT<TK_GE ); assert( TK_LE>TK_EQ && TK_LE<TK_GE ); assert( TK_GE==TK_EQ+4 ); - return op==TK_IN || (op>=TK_EQ && op<=TK_GE) || op==TK_ISNULL; + return op==TK_IN || (op>=TK_EQ && op<=TK_GE) || op==TK_ISNULL || op==TK_IS; } -/* -** Swap two objects of type TYPE. -*/ -#define SWAP(TYPE,A,B) {TYPE t=A; A=B; B=t;} - /* ** Commute a comparison operator. Expressions of the form "X op Y" ** are converted into "Y op X". ** -** If a collation sequence is associated with either the left or right -** side of the comparison, it remains associated with the same side after -** the commutation. So "Y collate NOCASE op X" becomes -** "X collate NOCASE op Y". This is because any collation sequence on +** If left/right precedence rules come into play when determining the +** collating sequence, then COLLATE operators are adjusted to ensure +** that the collating sequence does not change. For example: +** "Y collate NOCASE op X" becomes "X op Y" because any collation sequence on ** the left hand side of a comparison overrides any collation sequence -** attached to the right. For the same reason the EP_ExpCollate flag +** attached to the right. For the same reason the EP_Collate flag ** is not commuted. 
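** [Illustrative note added in review; not part of the upstream sqlite3.c
** text.] For example, "5<t1.x" is rewritten as "t1.x>5" and "expr=t1.a"
** as "t1.a=expr" (t1 here is hypothetical), so that the later analysis
** only has to look for an indexable column on the left-hand side of a
** comparison.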
*/ static void exprCommute(Parse *pParse, Expr *pExpr){ - u16 expRight = (pExpr->pRight->flags & EP_ExpCollate); - u16 expLeft = (pExpr->pLeft->flags & EP_ExpCollate); + u16 expRight = (pExpr->pRight->flags & EP_Collate); + u16 expLeft = (pExpr->pLeft->flags & EP_Collate); assert( allowedOp(pExpr->op) && pExpr->op!=TK_IN ); - pExpr->pRight->pColl = sqlite3ExprCollSeq(pParse, pExpr->pRight); - pExpr->pLeft->pColl = sqlite3ExprCollSeq(pParse, pExpr->pLeft); - SWAP(CollSeq*,pExpr->pRight->pColl,pExpr->pLeft->pColl); - pExpr->pRight->flags = (pExpr->pRight->flags & ~EP_ExpCollate) | expLeft; - pExpr->pLeft->flags = (pExpr->pLeft->flags & ~EP_ExpCollate) | expRight; + if( expRight==expLeft ){ + /* Either X and Y both have COLLATE operator or neither do */ + if( expRight ){ + /* Both X and Y have COLLATE operators. Make sure X is always + ** used by clearing the EP_Collate flag from Y. */ + pExpr->pRight->flags &= ~EP_Collate; + }else if( sqlite3ExprCollSeq(pParse, pExpr->pLeft)!=0 ){ + /* Neither X nor Y have COLLATE operators, but X has a non-default + ** collating sequence. So add the EP_Collate marker on X to cause + ** it to be searched first. */ + pExpr->pLeft->flags |= EP_Collate; + } + } SWAP(Expr*,pExpr->pRight,pExpr->pLeft); if( pExpr->op>=TK_GT ){ assert( TK_LT==TK_GT+2 ); assert( TK_GE==TK_LE+2 ); assert( TK_GT>TK_EQ ); @@ -88619,10 +122025,12 @@ assert( allowedOp(op) ); if( op==TK_IN ){ c = WO_IN; }else if( op==TK_ISNULL ){ c = WO_ISNULL; + }else if( op==TK_IS ){ + c = WO_IS; }else{ assert( (WO_EQ<<(op-TK_EQ)) < 0x7fff ); c = (u16)(WO_EQ<<(op-TK_EQ)); } assert( op!=TK_ISNULL || c==WO_ISNULL ); @@ -88630,92 +122038,27 @@ assert( op!=TK_EQ || c==WO_EQ ); assert( op!=TK_LT || c==WO_LT ); assert( op!=TK_LE || c==WO_LE ); assert( op!=TK_GT || c==WO_GT ); assert( op!=TK_GE || c==WO_GE ); + assert( op!=TK_IS || c==WO_IS ); return c; } -/* -** Search for a term in the WHERE clause that is of the form "X <op> <expr>" -** where X is a reference to the iColumn of table iCur and <op> is one of -** the WO_xx operator codes specified by the op parameter. -** Return a pointer to the term. Return 0 if not found. -*/ -static WhereTerm *findTerm( - WhereClause *pWC, /* The WHERE clause to be searched */ - int iCur, /* Cursor number of LHS */ - int iColumn, /* Column number of LHS */ - Bitmask notReady, /* RHS must not overlap with this mask */ - u32 op, /* Mask of WO_xx values describing operator */ - Index *pIdx /* Must be compatible with this index, if not NULL */ -){ - WhereTerm *pTerm; - int k; - assert( iCur>=0 ); - op &= WO_ALL; - for(pTerm=pWC->a, k=pWC->nTerm; k; k--, pTerm++){ - if( pTerm->leftCursor==iCur - && (pTerm->prereqRight & notReady)==0 - && pTerm->u.leftColumn==iColumn - && (pTerm->eOperator & op)!=0 - ){ - if( pIdx && pTerm->eOperator!=WO_ISNULL ){ - Expr *pX = pTerm->pExpr; - CollSeq *pColl; - char idxaff; - int j; - Parse *pParse = pWC->pParse; - - idxaff = pIdx->pTable->aCol[iColumn].affinity; - if( !sqlite3IndexAffinityOk(pX, idxaff) ) continue; - - /* Figure out the collation sequence required from an index for - ** it to be useful for optimising expression pX. Store this - ** value in variable pColl. 
- */ - assert(pX->pLeft); - pColl = sqlite3BinaryCompareCollSeq(pParse, pX->pLeft, pX->pRight); - assert(pColl || pParse->nErr); - - for(j=0; pIdx->aiColumn[j]!=iColumn; j++){ - if( NEVER(j>=pIdx->nColumn) ) return 0; - } - if( pColl && sqlite3StrICmp(pColl->zName, pIdx->azColl[j]) ) continue; - } - return pTerm; - } - } - return 0; -} - -/* Forward reference */ -static void exprAnalyze(SrcList*, WhereClause*, int); - -/* -** Call exprAnalyze on all terms in a WHERE clause. -** -** -*/ -static void exprAnalyzeAll( - SrcList *pTabList, /* the FROM clause */ - WhereClause *pWC /* the WHERE clause to be analyzed */ -){ - int i; - for(i=pWC->nTerm-1; i>=0; i--){ - exprAnalyze(pTabList, pWC, i); - } -} #ifndef SQLITE_OMIT_LIKE_OPTIMIZATION /* ** Check to see if the given expression is a LIKE or GLOB operator that ** can be optimized using inequality constraints. Return TRUE if it is ** so and false if not. ** ** In order for the operator to be optimizible, the RHS must be a string -** literal that does not begin with a wildcard. +** literal that does not begin with a wildcard. The LHS must be a column +** that may only be NULL, a string, or a BLOB, never a number. (This means +** that virtual tables cannot participate in the LIKE optimization.) The +** collating sequence for the column on the LHS must be appropriate for +** the operator. */ static int isLikeOrGlob( Parse *pParse, /* Parsing and code generating context */ Expr *pExpr, /* Test this expression */ Expr **ppPrefix, /* Pointer to TK_STRING expression with pattern prefix */ @@ -88726,67 +122069,55 @@ Expr *pRight, *pLeft; /* Right and left size of LIKE operator */ ExprList *pList; /* List of operands to the LIKE operator */ int c; /* One character in z[] */ int cnt; /* Number of non-wildcard prefix characters */ char wc[3]; /* Wildcard characters */ - CollSeq *pColl; /* Collating sequence for LHS */ sqlite3 *db = pParse->db; /* Database connection */ sqlite3_value *pVal = 0; int op; /* Opcode of pRight */ + int rc; /* Result code to return */ if( !sqlite3IsLikeFunction(db, pExpr, pnoCase, wc) ){ return 0; } #ifdef SQLITE_EBCDIC if( *pnoCase ) return 0; #endif pList = pExpr->x.pList; pLeft = pList->a[1].pExpr; - if( pLeft->op!=TK_COLUMN || sqlite3ExprAffinity(pLeft)!=SQLITE_AFF_TEXT ){ + if( pLeft->op!=TK_COLUMN + || sqlite3ExprAffinity(pLeft)!=SQLITE_AFF_TEXT + || IsVirtual(pLeft->pTab) /* Value might be numeric */ + ){ /* IMP: R-02065-49465 The left-hand side of the LIKE or GLOB operator must ** be the name of an indexed column with TEXT affinity. */ return 0; } assert( pLeft->iColumn!=(-1) ); /* Because IPK never has AFF_TEXT */ - pColl = sqlite3ExprCollSeq(pParse, pLeft); - if( pColl==0 ) return 0; /* Happens when LHS has an undefined collation */ - if( (pColl->type!=SQLITE_COLL_BINARY || *pnoCase) && - (pColl->type!=SQLITE_COLL_NOCASE || !*pnoCase) ){ - /* IMP: R-09003-32046 For the GLOB operator, the column must use the - ** default BINARY collating sequence. - ** IMP: R-41408-28306 For the LIKE operator, if case_sensitive_like mode - ** is enabled then the column must use the default BINARY collating - ** sequence, or if case_sensitive_like mode is disabled then the column - ** must use the built-in NOCASE collating sequence. 
- */ - return 0; - } - - pRight = pList->a[0].pExpr; - op = pRight->op; - if( op==TK_REGISTER ){ - op = pRight->op2; - } + + pRight = sqlite3ExprSkipCollate(pList->a[0].pExpr); + op = pRight->op; if( op==TK_VARIABLE ){ Vdbe *pReprepare = pParse->pReprepare; - pVal = sqlite3VdbeGetValue(pReprepare, pRight->iColumn, SQLITE_AFF_NONE); + int iCol = pRight->iColumn; + pVal = sqlite3VdbeGetBoundValue(pReprepare, iCol, SQLITE_AFF_BLOB); if( pVal && sqlite3_value_type(pVal)==SQLITE_TEXT ){ z = (char *)sqlite3_value_text(pVal); } - sqlite3VdbeSetVarmask(pParse->pVdbe, pRight->iColumn); + sqlite3VdbeSetVarmask(pParse->pVdbe, iCol); assert( pRight->op==TK_VARIABLE || pRight->op==TK_REGISTER ); }else if( op==TK_STRING ){ z = pRight->u.zToken; } if( z ){ cnt = 0; while( (c=z[cnt])!=0 && c!=wc[0] && c!=wc[1] && c!=wc[2] ){ cnt++; } - if( cnt!=0 && c!=0 && 255!=(u8)z[cnt-1] ){ + if( cnt!=0 && 255!=(u8)z[cnt-1] ){ Expr *pPrefix; - *pisComplete = z[cnt]==wc[0] && z[cnt+1]==0; + *pisComplete = c==wc[0] && z[cnt+1]==0; pPrefix = sqlite3Expr(db, TK_STRING, z); if( pPrefix ) pPrefix->u.zToken[cnt] = 0; *ppPrefix = pPrefix; if( op==TK_VARIABLE ){ Vdbe *v = pParse->pVdbe; @@ -88794,11 +122125,11 @@ if( *pisComplete && pRight->u.zToken[1] ){ /* If the rhs of the LIKE expression is a variable, and the current ** value of the variable means there is no need to invoke the LIKE ** function, then no OP_Variable will be added to the program. ** This causes problems for the sqlite3_bind_parameter_name() - ** API. To workaround them, add a dummy OP_Variable here. + ** API. To work around them, add a dummy OP_Variable here. */ int r1 = sqlite3GetTempReg(pParse); sqlite3ExprCodeTarget(pParse, pRight, r1); sqlite3VdbeChangeP3(v, sqlite3VdbeCurrentAddr(v)-1, 0); sqlite3ReleaseTempReg(pParse, r1); @@ -88807,53 +122138,157 @@ }else{ z = 0; } } + rc = (z!=0); sqlite3ValueFree(pVal); - return (z!=0); + return rc; } #endif /* SQLITE_OMIT_LIKE_OPTIMIZATION */ #ifndef SQLITE_OMIT_VIRTUALTABLE /* ** Check to see if the given expression is of the form ** -** column MATCH expr +** column OP expr +** +** where OP is one of MATCH, GLOB, LIKE or REGEXP and "column" is a +** column of a virtual table. ** ** If it is then return TRUE. If not, return FALSE. */ static int isMatchOfColumn( - Expr *pExpr /* Test this expression */ + Expr *pExpr, /* Test this expression */ + unsigned char *peOp2 /* OUT: 0 for MATCH, or else an op2 value */ ){ + struct Op2 { + const char *zOp; + unsigned char eOp2; + } aOp[] = { + { "match", SQLITE_INDEX_CONSTRAINT_MATCH }, + { "glob", SQLITE_INDEX_CONSTRAINT_GLOB }, + { "like", SQLITE_INDEX_CONSTRAINT_LIKE }, + { "regexp", SQLITE_INDEX_CONSTRAINT_REGEXP } + }; ExprList *pList; + Expr *pCol; /* Column reference */ + int i; if( pExpr->op!=TK_FUNCTION ){ return 0; } - if( sqlite3StrICmp(pExpr->u.zToken,"match")!=0 ){ - return 0; - } pList = pExpr->x.pList; - if( pList->nExpr!=2 ){ + if( pList==0 || pList->nExpr!=2 ){ return 0; } - if( pList->a[1].pExpr->op != TK_COLUMN ){ + pCol = pList->a[1].pExpr; + if( pCol->op!=TK_COLUMN || !IsVirtual(pCol->pTab) ){ return 0; } - return 1; + for(i=0; i<ArraySize(aOp); i++){ + if( sqlite3StrICmp(pExpr->u.zToken, aOp[i].zOp)==0 ){ + *peOp2 = aOp[i].eOp2; + return 1; + } + } + return 0; } #endif /* SQLITE_OMIT_VIRTUALTABLE */ /* ** If the pBase expression originated in the ON or USING clause of ** a join, then transfer the appropriate markings over to derived. 
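** [Illustrative note added in review; not part of the upstream sqlite3.c
** text.] For example, in the hypothetical query
**
**     SELECT * FROM t1 LEFT JOIN t2 ON (t2.b BETWEEN 1 AND 5)
**
** the virtual terms t2.b>=1 and t2.b<=5 that exprAnalyze() derives from
** the BETWEEN must inherit the EP_FromJoin marking; otherwise they would
** act as ordinary WHERE-clause filters and would incorrectly suppress
** the NULL-extended rows that the LEFT JOIN is supposed to produce.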
*/ static void transferJoinMarkings(Expr *pDerived, Expr *pBase){ - pDerived->flags |= pBase->flags & EP_FromJoin; - pDerived->iRightJoinTable = pBase->iRightJoinTable; + if( pDerived ){ + pDerived->flags |= pBase->flags & EP_FromJoin; + pDerived->iRightJoinTable = pBase->iRightJoinTable; + } +} + +/* +** Mark term iChild as being a child of term iParent +*/ +static void markTermAsChild(WhereClause *pWC, int iChild, int iParent){ + pWC->a[iChild].iParent = iParent; + pWC->a[iChild].truthProb = pWC->a[iParent].truthProb; + pWC->a[iParent].nChild++; +} + +/* +** Return the N-th AND-connected subterm of pTerm. Or if pTerm is not +** a conjunction, then return just pTerm when N==0. If N is exceeds +** the number of available subterms, return NULL. +*/ +static WhereTerm *whereNthSubterm(WhereTerm *pTerm, int N){ + if( pTerm->eOperator!=WO_AND ){ + return N==0 ? pTerm : 0; + } + if( N<pTerm->u.pAndInfo->wc.nTerm ){ + return &pTerm->u.pAndInfo->wc.a[N]; + } + return 0; +} + +/* +** Subterms pOne and pTwo are contained within WHERE clause pWC. The +** two subterms are in disjunction - they are OR-ed together. +** +** If these two terms are both of the form: "A op B" with the same +** A and B values but different operators and if the operators are +** compatible (if one is = and the other is <, for example) then +** add a new virtual AND term to pWC that is the combination of the +** two. +** +** Some examples: +** +** x<y OR x=y --> x<=y +** x=y OR x=y --> x=y +** x<=y OR x<y --> x<=y +** +** The following is NOT generated: +** +** x<y OR x>y --> x!=y +*/ +static void whereCombineDisjuncts( + SrcList *pSrc, /* the FROM clause */ + WhereClause *pWC, /* The complete WHERE clause */ + WhereTerm *pOne, /* First disjunct */ + WhereTerm *pTwo /* Second disjunct */ +){ + u16 eOp = pOne->eOperator | pTwo->eOperator; + sqlite3 *db; /* Database connection (for malloc) */ + Expr *pNew; /* New virtual expression */ + int op; /* Operator for the combined expression */ + int idxNew; /* Index in pWC of the next virtual term */ + + if( (pOne->eOperator & (WO_EQ|WO_LT|WO_LE|WO_GT|WO_GE))==0 ) return; + if( (pTwo->eOperator & (WO_EQ|WO_LT|WO_LE|WO_GT|WO_GE))==0 ) return; + if( (eOp & (WO_EQ|WO_LT|WO_LE))!=eOp + && (eOp & (WO_EQ|WO_GT|WO_GE))!=eOp ) return; + assert( pOne->pExpr->pLeft!=0 && pOne->pExpr->pRight!=0 ); + assert( pTwo->pExpr->pLeft!=0 && pTwo->pExpr->pRight!=0 ); + if( sqlite3ExprCompare(pOne->pExpr->pLeft, pTwo->pExpr->pLeft, -1) ) return; + if( sqlite3ExprCompare(pOne->pExpr->pRight, pTwo->pExpr->pRight, -1) )return; + /* If we reach this point, it means the two subterms can be combined */ + if( (eOp & (eOp-1))!=0 ){ + if( eOp & (WO_LT|WO_LE) ){ + eOp = WO_LE; + }else{ + assert( eOp & (WO_GT|WO_GE) ); + eOp = WO_GE; + } + } + db = pWC->pWInfo->pParse->db; + pNew = sqlite3ExprDup(db, pOne->pExpr, 0); + if( pNew==0 ) return; + for(op=TK_EQ; eOp!=(WO_EQ<<(op-TK_EQ)); op++){ assert( op<TK_GE ); } + pNew->op = op; + idxNew = whereClauseInsert(pWC, pNew, TERM_VIRTUAL|TERM_DYNAMIC); + exprAnalyze(pSrc, pWC, idxNew); } #if !defined(SQLITE_OMIT_OR_OPTIMIZATION) && !defined(SQLITE_OMIT_SUBQUERY) /* ** Analyze a term that consists of two or more OR-connected @@ -88876,14 +122311,15 @@ ** (A) t1.x=t2.y OR t1.x=t2.z OR t1.y=15 OR t1.z=t3.a+5 ** (B) x=expr1 OR expr2=x OR x=expr3 ** (C) t1.x=t2.y OR (t1.x=t2.z AND t1.y=15) ** (D) x=expr1 OR (y>11 AND y<22 AND z LIKE '*hello*') ** (E) (p.a=1 AND q.b=2 AND r.c=3) OR (p.x=4 AND q.y=5 AND r.z=6) +** (F) x>A OR (x=A AND y>=B) ** ** CASE 1: ** -** If all subterms are of 
the form T.C=expr for some single column of C +** If all subterms are of the form T.C=expr for some single column of C and ** a single table T (as shown in example B above) then create a new virtual ** term that is an equivalent IN expression. In other words, if the term ** being analyzed is: ** ** x = expr1 OR expr2 = x OR x = expr3 @@ -88891,10 +122327,20 @@ ** then create a new virtual term like this: ** ** x IN (expr1,expr2,expr3) ** ** CASE 2: +** +** If there are exactly two disjuncts and one side has x>A and the other side +** has x=A (for the same x and A) then add a new virtual conjunct term to the +** WHERE clause of the form "x>=A". Example: +** +** x>A OR (x=A AND y>B) adds: x>=A +** +** The added conjunct can sometimes be helpful in query planning. +** +** CASE 3: ** ** If all subterms are indexable by a single table T, then set ** ** WhereTerm.eOperator = WO_OR ** WhereTerm.u.pOrInfo->indexable |= the cursor number for table T @@ -88908,41 +122354,41 @@ ** u.pAndInfo set to a dynamically allocated WhereAndTerm object. ** ** From another point of view, "indexable" means that the subterm could ** potentially be used with an index if an appropriate index exists. ** This analysis does not consider whether or not the index exists; that -** is something the bestIndex() routine will determine. This analysis -** only looks at whether subterms appropriate for indexing exist. +** is decided elsewhere. This analysis only looks at whether subterms +** appropriate for indexing exist. ** -** All examples A through E above all satisfy case 2. But if a term -** also statisfies case 1 (such as B) we know that the optimizer will -** always prefer case 1, so in that case we pretend that case 2 is not +** All examples A through E above satisfy case 3. But if a term +** also satisfies case 1 (such as B) we know that the optimizer will +** always prefer case 1, so in that case we pretend that case 3 is not ** satisfied. ** ** It might be the case that multiple tables are indexable. For example, ** (E) above is indexable on tables P, Q, and R. ** -** Terms that satisfy case 2 are candidates for lookup by using +** Terms that satisfy case 3 are candidates for lookup by using ** separate indices to find rowids for each subterm and composing ** the union of all rowids using a RowSet object. This is similar ** to "bitmap indices" in other database engines. ** ** OTHERWISE: ** -** If neither case 1 nor case 2 apply, then leave the eOperator set to +** If none of cases 1, 2, or 3 apply, then leave the eOperator set to ** zero. This term is not useful for search. 
*/ static void exprAnalyzeOrTerm( SrcList *pSrc, /* the FROM clause */ WhereClause *pWC, /* the complete WHERE clause */ int idxTerm /* Index of the OR-term to be analyzed */ ){ - Parse *pParse = pWC->pParse; /* Parser context */ + WhereInfo *pWInfo = pWC->pWInfo; /* WHERE clause processing context */ + Parse *pParse = pWInfo->pParse; /* Parser context */ sqlite3 *db = pParse->db; /* Database connection */ WhereTerm *pTerm = &pWC->a[idxTerm]; /* The term to be analyzed */ Expr *pExpr = pTerm->pExpr; /* The expression of the term */ - WhereMaskSet *pMaskSet = pWC->pMaskSet; /* Table use masks */ int i; /* Loop counters */ WhereClause *pOrWc; /* Breakup of pTerm into subterms */ WhereTerm *pOrTerm; /* A Sub-term within the pOrWc */ WhereOrInfo *pOrInfo; /* Additional information associated with pTerm */ Bitmask chngToIN; /* Tables that might satisfy case 1 */ @@ -88957,46 +122403,45 @@ assert( pExpr->op==TK_OR ); pTerm->u.pOrInfo = pOrInfo = sqlite3DbMallocZero(db, sizeof(*pOrInfo)); if( pOrInfo==0 ) return; pTerm->wtFlags |= TERM_ORINFO; pOrWc = &pOrInfo->wc; - whereClauseInit(pOrWc, pWC->pParse, pMaskSet); - whereSplit(pOrWc, pExpr, TK_OR); - exprAnalyzeAll(pSrc, pOrWc); + sqlite3WhereClauseInit(pOrWc, pWInfo); + sqlite3WhereSplit(pOrWc, pExpr, TK_OR); + sqlite3WhereExprAnalyze(pSrc, pOrWc); if( db->mallocFailed ) return; assert( pOrWc->nTerm>=2 ); /* - ** Compute the set of tables that might satisfy cases 1 or 2. + ** Compute the set of tables that might satisfy cases 1 or 3. */ indexable = ~(Bitmask)0; - chngToIN = ~(pWC->vmask); + chngToIN = ~(Bitmask)0; for(i=pOrWc->nTerm-1, pOrTerm=pOrWc->a; i>=0 && indexable; i--, pOrTerm++){ if( (pOrTerm->eOperator & WO_SINGLE)==0 ){ WhereAndInfo *pAndInfo; - assert( pOrTerm->eOperator==0 ); assert( (pOrTerm->wtFlags & (TERM_ANDINFO|TERM_ORINFO))==0 ); chngToIN = 0; - pAndInfo = sqlite3DbMallocRaw(db, sizeof(*pAndInfo)); + pAndInfo = sqlite3DbMallocRawNN(db, sizeof(*pAndInfo)); if( pAndInfo ){ WhereClause *pAndWC; WhereTerm *pAndTerm; int j; Bitmask b = 0; pOrTerm->u.pAndInfo = pAndInfo; pOrTerm->wtFlags |= TERM_ANDINFO; pOrTerm->eOperator = WO_AND; pAndWC = &pAndInfo->wc; - whereClauseInit(pAndWC, pWC->pParse, pMaskSet); - whereSplit(pAndWC, pOrTerm->pExpr, TK_AND); - exprAnalyzeAll(pSrc, pAndWC); - testcase( db->mallocFailed ); + sqlite3WhereClauseInit(pAndWC, pWC->pWInfo); + sqlite3WhereSplit(pAndWC, pOrTerm->pExpr, TK_AND); + sqlite3WhereExprAnalyze(pSrc, pAndWC); + pAndWC->pOuter = pWC; if( !db->mallocFailed ){ for(j=0, pAndTerm=pAndWC->a; j<pAndWC->nTerm; j++, pAndTerm++){ assert( pAndTerm->pExpr ); if( allowedOp(pAndTerm->pExpr->op) ){ - b |= getMask(pMaskSet, pAndTerm->leftCursor); + b |= sqlite3WhereGetMask(&pWInfo->sMaskSet, pAndTerm->leftCursor); } } } indexable &= b; } @@ -89003,30 +122448,44 @@ }else if( pOrTerm->wtFlags & TERM_COPIED ){ /* Skip this term for now. We revisit it when we process the ** corresponding TERM_VIRTUAL term */ }else{ Bitmask b; - b = getMask(pMaskSet, pOrTerm->leftCursor); + b = sqlite3WhereGetMask(&pWInfo->sMaskSet, pOrTerm->leftCursor); if( pOrTerm->wtFlags & TERM_VIRTUAL ){ WhereTerm *pOther = &pOrWc->a[pOrTerm->iParent]; - b |= getMask(pMaskSet, pOther->leftCursor); + b |= sqlite3WhereGetMask(&pWInfo->sMaskSet, pOther->leftCursor); } indexable &= b; - if( pOrTerm->eOperator!=WO_EQ ){ + if( (pOrTerm->eOperator & WO_EQ)==0 ){ chngToIN = 0; }else{ chngToIN &= b; } } } /* - ** Record the set of tables that satisfy case 2. The set might be + ** Record the set of tables that satisfy case 3. 
The set might be ** empty. */ pOrInfo->indexable = indexable; pTerm->eOperator = indexable==0 ? 0 : WO_OR; + + /* For a two-way OR, attempt to implementation case 2. + */ + if( indexable && pOrWc->nTerm==2 ){ + int iOne = 0; + WhereTerm *pOne; + while( (pOne = whereNthSubterm(&pOrWc->a[0],iOne++))!=0 ){ + int iTwo = 0; + WhereTerm *pTwo; + while( (pTwo = whereNthSubterm(&pOrWc->a[1],iTwo++))!=0 ){ + whereCombineDisjuncts(pSrc, pWC, pOne, pTwo); + } + } + } /* ** chngToIN holds a set of tables that *might* satisfy case 1. But ** we have to do some additional checking to see if case 1 really ** is satisfied. @@ -89060,21 +122519,22 @@ ** and column is found but leave okToChngToIN false if not found. */ for(j=0; j<2 && !okToChngToIN; j++){ pOrTerm = pOrWc->a; for(i=pOrWc->nTerm-1; i>=0; i--, pOrTerm++){ - assert( pOrTerm->eOperator==WO_EQ ); + assert( pOrTerm->eOperator & WO_EQ ); pOrTerm->wtFlags &= ~TERM_OR_OK; if( pOrTerm->leftCursor==iCursor ){ /* This is the 2-bit case and we are on the second iteration and ** current term is from the first iteration. So skip this term. */ assert( j==1 ); continue; } - if( (chngToIN & getMask(pMaskSet, pOrTerm->leftCursor))==0 ){ + if( (chngToIN & sqlite3WhereGetMask(&pWInfo->sMaskSet, + pOrTerm->leftCursor))==0 ){ /* This term must be of the form t1.a==t2.b where t2 is in the - ** chngToIN set but t1 is not. This term will be either preceeded + ** chngToIN set but t1 is not. This term will be either preceded ** or follwed by an inverted copy (t2.b==t1.a). Skip this term ** and use its inversion. */ testcase( pOrTerm->wtFlags & TERM_COPIED ); testcase( pOrTerm->wtFlags & TERM_VIRTUAL ); assert( pOrTerm->wtFlags & (TERM_COPIED|TERM_VIRTUAL) ); @@ -89086,21 +122546,21 @@ } if( i<0 ){ /* No candidate table+column was found. This can only occur ** on the second iteration */ assert( j==1 ); - assert( (chngToIN&(chngToIN-1))==0 ); - assert( chngToIN==getMask(pMaskSet, iCursor) ); + assert( IsPowerOfTwo(chngToIN) ); + assert( chngToIN==sqlite3WhereGetMask(&pWInfo->sMaskSet, iCursor) ); break; } testcase( j==1 ); /* We have found a candidate table and column. 
Check to see if that ** table and column is common to every term in the OR clause */ okToChngToIN = 1; for(; i>=0 && okToChngToIN; i--, pOrTerm++){ - assert( pOrTerm->eOperator==WO_EQ ); + assert( pOrTerm->eOperator & WO_EQ ); if( pOrTerm->leftCursor!=iCursor ){ pOrTerm->wtFlags &= ~TERM_OR_OK; }else if( pOrTerm->u.leftColumn!=iColumn ){ okToChngToIN = 0; }else{ @@ -89130,15 +122590,15 @@ Expr *pLeft = 0; /* The LHS of the IN operator */ Expr *pNew; /* The complete IN operator */ for(i=pOrWc->nTerm-1, pOrTerm=pOrWc->a; i>=0; i--, pOrTerm++){ if( (pOrTerm->wtFlags & TERM_OR_OK)==0 ) continue; - assert( pOrTerm->eOperator==WO_EQ ); + assert( pOrTerm->eOperator & WO_EQ ); assert( pOrTerm->leftCursor==iCursor ); assert( pOrTerm->u.leftColumn==iColumn ); pDup = sqlite3ExprDup(db, pOrTerm->pExpr->pRight, 0); - pList = sqlite3ExprListAppend(pWC->pParse, pList, pDup); + pList = sqlite3ExprListAppend(pWInfo->pParse, pList, pDup); pLeft = pOrTerm->pExpr->pLeft; } assert( pLeft!=0 ); pDup = sqlite3ExprDup(db, pLeft, 0); pNew = sqlite3PExpr(pParse, TK_IN, pDup, 0, 0); @@ -89149,21 +122609,130 @@ pNew->x.pList = pList; idxNew = whereClauseInsert(pWC, pNew, TERM_VIRTUAL|TERM_DYNAMIC); testcase( idxNew==0 ); exprAnalyze(pSrc, pWC, idxNew); pTerm = &pWC->a[idxTerm]; - pWC->a[idxNew].iParent = idxTerm; - pTerm->nChild = 1; + markTermAsChild(pWC, idxNew, idxTerm); }else{ sqlite3ExprListDelete(db, pList); } - pTerm->eOperator = 0; /* case 1 trumps case 2 */ + pTerm->eOperator = WO_NOOP; /* case 1 trumps case 3 */ } } } #endif /* !SQLITE_OMIT_OR_OPTIMIZATION && !SQLITE_OMIT_SUBQUERY */ +/* +** We already know that pExpr is a binary operator where both operands are +** column references. This routine checks to see if pExpr is an equivalence +** relation: +** 1. The SQLITE_Transitive optimization must be enabled +** 2. Must be either an == or an IS operator +** 3. Not originating in the ON clause of an OUTER JOIN +** 4. The affinities of A and B must be compatible +** 5a. Both operands use the same collating sequence OR +** 5b. The overall collating sequence is BINARY +** If this routine returns TRUE, that means that the RHS can be substituted +** for the LHS anyplace else in the WHERE clause where the LHS column occurs. +** This is an optimization. No harm comes from returning 0. But if 1 is +** returned when it should not be, then incorrect answers might result. +*/ +static int termIsEquivalence(Parse *pParse, Expr *pExpr){ + char aff1, aff2; + CollSeq *pColl; + const char *zColl1, *zColl2; + if( !OptimizationEnabled(pParse->db, SQLITE_Transitive) ) return 0; + if( pExpr->op!=TK_EQ && pExpr->op!=TK_IS ) return 0; + if( ExprHasProperty(pExpr, EP_FromJoin) ) return 0; + aff1 = sqlite3ExprAffinity(pExpr->pLeft); + aff2 = sqlite3ExprAffinity(pExpr->pRight); + if( aff1!=aff2 + && (!sqlite3IsNumericAffinity(aff1) || !sqlite3IsNumericAffinity(aff2)) + ){ + return 0; + } + pColl = sqlite3BinaryCompareCollSeq(pParse, pExpr->pLeft, pExpr->pRight); + if( pColl==0 || sqlite3StrICmp(pColl->zName, "BINARY")==0 ) return 1; + pColl = sqlite3ExprCollSeq(pParse, pExpr->pLeft); + /* Since pLeft and pRight are both a column references, their collating + ** sequence should always be defined. */ + zColl1 = ALWAYS(pColl) ? pColl->zName : 0; + pColl = sqlite3ExprCollSeq(pParse, pExpr->pRight); + zColl2 = ALWAYS(pColl) ? 
pColl->zName : 0; + return sqlite3StrICmp(zColl1, zColl2)==0; +} + +/* +** Recursively walk the expressions of a SELECT statement and generate +** a bitmask indicating which tables are used in that expression +** tree. +*/ +static Bitmask exprSelectUsage(WhereMaskSet *pMaskSet, Select *pS){ + Bitmask mask = 0; + while( pS ){ + SrcList *pSrc = pS->pSrc; + mask |= sqlite3WhereExprListUsage(pMaskSet, pS->pEList); + mask |= sqlite3WhereExprListUsage(pMaskSet, pS->pGroupBy); + mask |= sqlite3WhereExprListUsage(pMaskSet, pS->pOrderBy); + mask |= sqlite3WhereExprUsage(pMaskSet, pS->pWhere); + mask |= sqlite3WhereExprUsage(pMaskSet, pS->pHaving); + if( ALWAYS(pSrc!=0) ){ + int i; + for(i=0; i<pSrc->nSrc; i++){ + mask |= exprSelectUsage(pMaskSet, pSrc->a[i].pSelect); + mask |= sqlite3WhereExprUsage(pMaskSet, pSrc->a[i].pOn); + } + } + pS = pS->pPrior; + } + return mask; +} + +/* +** Expression pExpr is one operand of a comparison operator that might +** be useful for indexing. This routine checks to see if pExpr appears +** in any index. Return TRUE (1) if pExpr is an indexed term and return +** FALSE (0) if not. If TRUE is returned, also set *piCur to the cursor +** number of the table that is indexed and *piColumn to the column number +** of the column that is indexed, or -2 if an expression is being indexed. +** +** If pExpr is a TK_COLUMN column reference, then this routine always returns +** true even if that particular column is not indexed, because the column +** might be added to an automatic index later. +*/ +static int exprMightBeIndexed( + SrcList *pFrom, /* The FROM clause */ + Bitmask mPrereq, /* Bitmask of FROM clause terms referenced by pExpr */ + Expr *pExpr, /* An operand of a comparison operator */ + int *piCur, /* Write the referenced table cursor number here */ + int *piColumn /* Write the referenced table column number here */ +){ + Index *pIdx; + int i; + int iCur; + if( pExpr->op==TK_COLUMN ){ + *piCur = pExpr->iTable; + *piColumn = pExpr->iColumn; + return 1; + } + if( mPrereq==0 ) return 0; /* No table references */ + if( (mPrereq&(mPrereq-1))!=0 ) return 0; /* Refs more than one table */ + for(i=0; mPrereq>1; i++, mPrereq>>=1){} + iCur = pFrom->a[i].iCursor; + for(pIdx=pFrom->a[i].pTab->pIndex; pIdx; pIdx=pIdx->pNext){ + if( pIdx->aColExpr==0 ) continue; + for(i=0; i<pIdx->nKeyCol; i++){ + if( pIdx->aiColumn[i]!=(-2) ) continue; + if( sqlite3ExprCompare(pExpr, pIdx->aColExpr->a[i].pExpr, iCur)==0 ){ + *piCur = iCur; + *piColumn = -2; + return 1; + } + } + } + return 0; +} /* ** The input to this routine is an WhereTerm structure with only the ** "pExpr" field filled in. 
The job of this routine is to analyze the ** subexpression and populate all the other fields of the WhereTerm @@ -89184,65 +122753,74 @@ static void exprAnalyze( SrcList *pSrc, /* the FROM clause */ WhereClause *pWC, /* the WHERE clause */ int idxTerm /* Index of the term to be analyzed */ ){ + WhereInfo *pWInfo = pWC->pWInfo; /* WHERE clause processing context */ WhereTerm *pTerm; /* The term to be analyzed */ WhereMaskSet *pMaskSet; /* Set of table index masks */ Expr *pExpr; /* The expression to be analyzed */ Bitmask prereqLeft; /* Prerequesites of the pExpr->pLeft */ Bitmask prereqAll; /* Prerequesites of pExpr */ Bitmask extraRight = 0; /* Extra dependencies on LEFT JOIN */ Expr *pStr1 = 0; /* RHS of LIKE/GLOB operator */ int isComplete = 0; /* RHS of LIKE/GLOB ends with wildcard */ - int noCase = 0; /* LIKE/GLOB distinguishes case */ + int noCase = 0; /* uppercase equivalent to lowercase */ int op; /* Top-level operator. pExpr->op */ - Parse *pParse = pWC->pParse; /* Parsing context */ + Parse *pParse = pWInfo->pParse; /* Parsing context */ sqlite3 *db = pParse->db; /* Database connection */ + unsigned char eOp2; /* op2 value for LIKE/REGEXP/GLOB */ if( db->mallocFailed ){ return; } pTerm = &pWC->a[idxTerm]; - pMaskSet = pWC->pMaskSet; + pMaskSet = &pWInfo->sMaskSet; pExpr = pTerm->pExpr; - prereqLeft = exprTableUsage(pMaskSet, pExpr->pLeft); + assert( pExpr->op!=TK_AS && pExpr->op!=TK_COLLATE ); + prereqLeft = sqlite3WhereExprUsage(pMaskSet, pExpr->pLeft); op = pExpr->op; if( op==TK_IN ){ assert( pExpr->pRight==0 ); if( ExprHasProperty(pExpr, EP_xIsSelect) ){ - pTerm->prereqRight = exprSelectTableUsage(pMaskSet, pExpr->x.pSelect); + pTerm->prereqRight = exprSelectUsage(pMaskSet, pExpr->x.pSelect); }else{ - pTerm->prereqRight = exprListTableUsage(pMaskSet, pExpr->x.pList); + pTerm->prereqRight = sqlite3WhereExprListUsage(pMaskSet, pExpr->x.pList); } }else if( op==TK_ISNULL ){ pTerm->prereqRight = 0; }else{ - pTerm->prereqRight = exprTableUsage(pMaskSet, pExpr->pRight); + pTerm->prereqRight = sqlite3WhereExprUsage(pMaskSet, pExpr->pRight); } - prereqAll = exprTableUsage(pMaskSet, pExpr); + prereqAll = sqlite3WhereExprUsage(pMaskSet, pExpr); if( ExprHasProperty(pExpr, EP_FromJoin) ){ - Bitmask x = getMask(pMaskSet, pExpr->iRightJoinTable); + Bitmask x = sqlite3WhereGetMask(pMaskSet, pExpr->iRightJoinTable); prereqAll |= x; extraRight = x-1; /* ON clause terms may not be used with an index ** on left table of a LEFT JOIN. Ticket #3015 */ } pTerm->prereqAll = prereqAll; pTerm->leftCursor = -1; pTerm->iParent = -1; pTerm->eOperator = 0; - if( allowedOp(op) && (pTerm->prereqRight & prereqLeft)==0 ){ - Expr *pLeft = pExpr->pLeft; - Expr *pRight = pExpr->pRight; - if( pLeft->op==TK_COLUMN ){ - pTerm->leftCursor = pLeft->iTable; - pTerm->u.leftColumn = pLeft->iColumn; - pTerm->eOperator = operatorMask(op); - } - if( pRight && pRight->op==TK_COLUMN ){ + if( allowedOp(op) ){ + int iCur, iColumn; + Expr *pLeft = sqlite3ExprSkipCollate(pExpr->pLeft); + Expr *pRight = sqlite3ExprSkipCollate(pExpr->pRight); + u16 opMask = (pTerm->prereqRight & prereqLeft)==0 ? 
WO_ALL : WO_EQUIV; + if( exprMightBeIndexed(pSrc, prereqLeft, pLeft, &iCur, &iColumn) ){ + pTerm->leftCursor = iCur; + pTerm->u.leftColumn = iColumn; + pTerm->eOperator = operatorMask(op) & opMask; + } + if( op==TK_IS ) pTerm->wtFlags |= TERM_IS; + if( pRight + && exprMightBeIndexed(pSrc, pTerm->prereqRight, pRight, &iCur, &iColumn) + ){ WhereTerm *pNew; Expr *pDup; + u16 eExtraOp = 0; /* Extra bits for pNew->eOperator */ if( pTerm->leftCursor>=0 ){ int idxNew; pDup = sqlite3ExprDup(db, pExpr, 0); if( db->mallocFailed ){ sqlite3ExprDelete(db, pDup); @@ -89249,26 +122827,30 @@ return; } idxNew = whereClauseInsert(pWC, pDup, TERM_VIRTUAL|TERM_DYNAMIC); if( idxNew==0 ) return; pNew = &pWC->a[idxNew]; - pNew->iParent = idxTerm; + markTermAsChild(pWC, idxNew, idxTerm); + if( op==TK_IS ) pNew->wtFlags |= TERM_IS; pTerm = &pWC->a[idxTerm]; - pTerm->nChild = 1; pTerm->wtFlags |= TERM_COPIED; + + if( termIsEquivalence(pParse, pDup) ){ + pTerm->eOperator |= WO_EQUIV; + eExtraOp = WO_EQUIV; + } }else{ pDup = pExpr; pNew = pTerm; } exprCommute(pParse, pDup); - pLeft = pDup->pLeft; - pNew->leftCursor = pLeft->iTable; - pNew->u.leftColumn = pLeft->iColumn; + pNew->leftCursor = iCur; + pNew->u.leftColumn = iColumn; testcase( (prereqLeft | extraRight) != prereqLeft ); pNew->prereqRight = prereqLeft | extraRight; pNew->prereqAll = prereqAll; - pNew->eOperator = operatorMask(pDup->op); + pNew->eOperator = (operatorMask(pDup->op) + eExtraOp) & opMask; } } #ifndef SQLITE_OMIT_BETWEEN_OPTIMIZATION /* If a term is the BETWEEN operator, create two new virtual terms @@ -89296,17 +122878,17 @@ Expr *pNewExpr; int idxNew; pNewExpr = sqlite3PExpr(pParse, ops[i], sqlite3ExprDup(db, pExpr->pLeft, 0), sqlite3ExprDup(db, pList->a[i].pExpr, 0), 0); + transferJoinMarkings(pNewExpr, pExpr); idxNew = whereClauseInsert(pWC, pNewExpr, TERM_VIRTUAL|TERM_DYNAMIC); testcase( idxNew==0 ); exprAnalyze(pSrc, pWC, idxNew); pTerm = &pWC->a[idxTerm]; - pWC->a[idxNew].iParent = idxTerm; + markTermAsChild(pWC, idxNew, idxTerm); } - pTerm->nChild = 2; } #endif /* SQLITE_OMIT_BETWEEN_OPTIMIZATION */ #if !defined(SQLITE_OMIT_OR_OPTIMIZATION) && !defined(SQLITE_OMIT_SUBQUERY) /* Analyze a term that is composed of two or more subterms connected by @@ -89321,16 +122903,19 @@ #ifndef SQLITE_OMIT_LIKE_OPTIMIZATION /* Add constraints to reduce the search space on a LIKE or GLOB ** operator. ** - ** A like pattern of the form "x LIKE 'abc%'" is changed into constraints + ** A like pattern of the form "x LIKE 'aBc%'" is changed into constraints ** - ** x>='abc' AND x<'abd' AND x LIKE 'abc%' + ** x>='ABC' AND x<'abd' AND x LIKE 'aBc%' ** ** The last character of the prefix "abc" is incremented to form the - ** termination condition "abd". + ** termination condition "abd". If case is not significant (the default + ** for LIKE) then the lower-bound is made all uppercase and the upper- + ** bound is made all lowercase so that the bounds also work when comparing + ** BLOBs. 
*/ if( pWC->op==TK_AND && isLikeOrGlob(pParse, pExpr, &pStr1, &isComplete, &noCase) ){ Expr *pLeft; /* LHS of LIKE/GLOB operator */ @@ -89337,13 +122922,30 @@ Expr *pStr2; /* Copy of pStr1 - RHS of LIKE/GLOB operator */ Expr *pNewExpr1; Expr *pNewExpr2; int idxNew1; int idxNew2; + const char *zCollSeqName; /* Name of collating sequence */ + const u16 wtFlags = TERM_LIKEOPT | TERM_VIRTUAL | TERM_DYNAMIC; pLeft = pExpr->x.pList->a[1].pExpr; pStr2 = sqlite3ExprDup(db, pStr1, 0); + + /* Convert the lower bound to upper-case and the upper bound to + ** lower-case (upper-case is less than lower-case in ASCII) so that + ** the range constraints also work for BLOBs + */ + if( noCase && !pParse->db->mallocFailed ){ + int i; + char c; + pTerm->wtFlags |= TERM_LIKE; + for(i=0; (c = pStr1->u.zToken[i])!=0; i++){ + pStr1->u.zToken[i] = sqlite3Toupper(c); + pStr2->u.zToken[i] = sqlite3Tolower(c); + } + } + if( !db->mallocFailed ){ u8 c, *pC; /* Last character before the first wildcard */ pC = (u8*)&pStr2->u.zToken[sqlite3Strlen30(pStr2->u.zToken)-1]; c = *pC; if( noCase ){ @@ -89352,28 +122954,35 @@ ** alphabetic range where case conversions will mess up the ** inequality. To avoid this, make sure to also run the full ** LIKE on all candidate expressions by clearing the isComplete flag */ if( c=='A'-1 ) isComplete = 0; - c = sqlite3UpperToLower[c]; } *pC = c + 1; } - pNewExpr1 = sqlite3PExpr(pParse, TK_GE, sqlite3ExprDup(db,pLeft,0),pStr1,0); - idxNew1 = whereClauseInsert(pWC, pNewExpr1, TERM_VIRTUAL|TERM_DYNAMIC); + zCollSeqName = noCase ? "NOCASE" : "BINARY"; + pNewExpr1 = sqlite3ExprDup(db, pLeft, 0); + pNewExpr1 = sqlite3PExpr(pParse, TK_GE, + sqlite3ExprAddCollateString(pParse,pNewExpr1,zCollSeqName), + pStr1, 0); + transferJoinMarkings(pNewExpr1, pExpr); + idxNew1 = whereClauseInsert(pWC, pNewExpr1, wtFlags); testcase( idxNew1==0 ); exprAnalyze(pSrc, pWC, idxNew1); - pNewExpr2 = sqlite3PExpr(pParse, TK_LT, sqlite3ExprDup(db,pLeft,0),pStr2,0); - idxNew2 = whereClauseInsert(pWC, pNewExpr2, TERM_VIRTUAL|TERM_DYNAMIC); + pNewExpr2 = sqlite3ExprDup(db, pLeft, 0); + pNewExpr2 = sqlite3PExpr(pParse, TK_LT, + sqlite3ExprAddCollateString(pParse,pNewExpr2,zCollSeqName), + pStr2, 0); + transferJoinMarkings(pNewExpr2, pExpr); + idxNew2 = whereClauseInsert(pWC, pNewExpr2, wtFlags); testcase( idxNew2==0 ); exprAnalyze(pSrc, pWC, idxNew2); pTerm = &pWC->a[idxTerm]; if( isComplete ){ - pWC->a[idxNew1].iParent = idxTerm; - pWC->a[idxNew2].iParent = idxTerm; - pTerm->nChild = 2; + markTermAsChild(pWC, idxNew1, idxTerm); + markTermAsChild(pWC, idxNew2, idxTerm); } } #endif /* SQLITE_OMIT_LIKE_OPTIMIZATION */ #ifndef SQLITE_OMIT_VIRTUALTABLE @@ -89381,20 +122990,20 @@ ** current expression is of the form: column MATCH expr. ** This information is used by the xBestIndex methods of ** virtual tables. The native query optimizer does not attempt ** to do anything with MATCH functions. 
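** [Illustrative note added in review; not part of the upstream sqlite3.c
** text.] For example, against a hypothetical full-text virtual table ft,
**
**     SELECT * FROM ft WHERE ft MATCH 'sqlite';
**
** produces a WO_MATCH term whose right-hand side is passed to the
** virtual table's xBestIndex method as an SQLITE_INDEX_CONSTRAINT_MATCH
** constraint; with the change below, GLOB, LIKE and REGEXP on
** virtual-table columns are forwarded in the same way.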
*/ - if( isMatchOfColumn(pExpr) ){ + if( isMatchOfColumn(pExpr, &eOp2) ){ int idxNew; Expr *pRight, *pLeft; WhereTerm *pNewTerm; Bitmask prereqColumn, prereqExpr; pRight = pExpr->x.pList->a[0].pExpr; pLeft = pExpr->x.pList->a[1].pExpr; - prereqExpr = exprTableUsage(pMaskSet, pRight); - prereqColumn = exprTableUsage(pMaskSet, pLeft); + prereqExpr = sqlite3WhereExprUsage(pMaskSet, pRight); + prereqColumn = sqlite3WhereExprUsage(pMaskSet, pLeft); if( (prereqExpr & prereqColumn)==0 ){ Expr *pNewExpr; pNewExpr = sqlite3PExpr(pParse, TK_MATCH, 0, sqlite3ExprDup(db, pRight, 0), 0); idxNew = whereClauseInsert(pWC, pNewExpr, TERM_VIRTUAL|TERM_DYNAMIC); @@ -89402,210 +123011,762 @@ pNewTerm = &pWC->a[idxNew]; pNewTerm->prereqRight = prereqExpr; pNewTerm->leftCursor = pLeft->iTable; pNewTerm->u.leftColumn = pLeft->iColumn; pNewTerm->eOperator = WO_MATCH; - pNewTerm->iParent = idxTerm; + pNewTerm->eMatchOp = eOp2; + markTermAsChild(pWC, idxNew, idxTerm); pTerm = &pWC->a[idxTerm]; - pTerm->nChild = 1; pTerm->wtFlags |= TERM_COPIED; pNewTerm->prereqAll = pTerm->prereqAll; } } #endif /* SQLITE_OMIT_VIRTUALTABLE */ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + /* When sqlite_stat3 histogram data is available an operator of the + ** form "x IS NOT NULL" can sometimes be evaluated more efficiently + ** as "x>NULL" if x is not an INTEGER PRIMARY KEY. So construct a + ** virtual term of that form. + ** + ** Note that the virtual term must be tagged with TERM_VNULL. + */ + if( pExpr->op==TK_NOTNULL + && pExpr->pLeft->op==TK_COLUMN + && pExpr->pLeft->iColumn>=0 + && OptimizationEnabled(db, SQLITE_Stat34) + ){ + Expr *pNewExpr; + Expr *pLeft = pExpr->pLeft; + int idxNew; + WhereTerm *pNewTerm; + + pNewExpr = sqlite3PExpr(pParse, TK_GT, + sqlite3ExprDup(db, pLeft, 0), + sqlite3PExpr(pParse, TK_NULL, 0, 0, 0), 0); + + idxNew = whereClauseInsert(pWC, pNewExpr, + TERM_VIRTUAL|TERM_DYNAMIC|TERM_VNULL); + if( idxNew ){ + pNewTerm = &pWC->a[idxNew]; + pNewTerm->prereqRight = 0; + pNewTerm->leftCursor = pLeft->iTable; + pNewTerm->u.leftColumn = pLeft->iColumn; + pNewTerm->eOperator = WO_GT; + markTermAsChild(pWC, idxNew, idxTerm); + pTerm = &pWC->a[idxTerm]; + pTerm->wtFlags |= TERM_COPIED; + pNewTerm->prereqAll = pTerm->prereqAll; + } + } +#endif /* SQLITE_ENABLE_STAT3_OR_STAT4 */ + /* Prevent ON clause terms of a LEFT JOIN from being used to drive ** an index for tables to the left of the join. */ pTerm->prereqRight |= extraRight; } -/* -** Return TRUE if any of the expressions in pList->a[iFirst...] contain -** a reference to any table other than the iBase table. -*/ -static int referencesOtherTables( - ExprList *pList, /* Search expressions in ths list */ - WhereMaskSet *pMaskSet, /* Mapping from tables to bitmaps */ - int iFirst, /* Be searching with the iFirst-th expression */ - int iBase /* Ignore references to this table */ -){ - Bitmask allowed = ~getMask(pMaskSet, iBase); - while( iFirst<pList->nExpr ){ - if( (exprTableUsage(pMaskSet, pList->a[iFirst++].pExpr)&allowed)!=0 ){ - return 1; - } - } - return 0; -} - - -/* -** This routine decides if pIdx can be used to satisfy the ORDER BY -** clause. If it can, it returns 1. If pIdx cannot satisfy the -** ORDER BY clause, this routine returns 0. -** -** pOrderBy is an ORDER BY clause from a SELECT statement. pTab is the -** left-most table in the FROM clause of that same SELECT statement and -** the table has a cursor number of "base". pIdx is an index on pTab. -** -** nEqCol is the number of columns of pIdx that are used as equality -** constraints. 
Any of these columns may be missing from the ORDER BY -** clause and the match can still be a success. -** -** All terms of the ORDER BY that match against the index must be either -** ASC or DESC. (Terms of the ORDER BY clause past the end of a UNIQUE -** index do not need to satisfy this constraint.) The *pbRev value is -** set to 1 if the ORDER BY clause is all DESC and it is set to 0 if -** the ORDER BY clause is all ASC. -*/ -static int isSortingIndex( - Parse *pParse, /* Parsing context */ - WhereMaskSet *pMaskSet, /* Mapping from table cursor numbers to bitmaps */ - Index *pIdx, /* The index we are testing */ - int base, /* Cursor number for the table to be sorted */ - ExprList *pOrderBy, /* The ORDER BY clause */ - int nEqCol, /* Number of index columns with == constraints */ - int *pbRev /* Set to 1 if ORDER BY is DESC */ -){ - int i, j; /* Loop counters */ - int sortOrder = 0; /* XOR of index and ORDER BY sort direction */ - int nTerm; /* Number of ORDER BY terms */ - struct ExprList_item *pTerm; /* A term of the ORDER BY clause */ - sqlite3 *db = pParse->db; - - assert( pOrderBy!=0 ); - nTerm = pOrderBy->nExpr; - assert( nTerm>0 ); - - /* Argument pIdx must either point to a 'real' named index structure, - ** or an index structure allocated on the stack by bestBtreeIndex() to - ** represent the rowid index that is part of every table. */ - assert( pIdx->zName || (pIdx->nColumn==1 && pIdx->aiColumn[0]==-1) ); - - /* Match terms of the ORDER BY clause against columns of - ** the index. - ** - ** Note that indices have pIdx->nColumn regular columns plus - ** one additional column containing the rowid. The rowid column - ** of the index is also allowed to match against the ORDER BY - ** clause. - */ - for(i=j=0, pTerm=pOrderBy->a; j<nTerm && i<=pIdx->nColumn; i++){ - Expr *pExpr; /* The expression of the ORDER BY pTerm */ - CollSeq *pColl; /* The collating sequence of pExpr */ - int termSortOrder; /* Sort order for this term */ - int iColumn; /* The i-th column of the index. -1 for rowid */ - int iSortOrder; /* 1 for DESC, 0 for ASC on the i-th index term */ - const char *zColl; /* Name of the collating sequence for i-th index term */ - - pExpr = pTerm->pExpr; - if( pExpr->op!=TK_COLUMN || pExpr->iTable!=base ){ - /* Can not use an index sort on anything that is not a column in the - ** left-most table of the FROM clause */ - break; - } - pColl = sqlite3ExprCollSeq(pParse, pExpr); - if( !pColl ){ - pColl = db->pDfltColl; - } - if( pIdx->zName && i<pIdx->nColumn ){ - iColumn = pIdx->aiColumn[i]; - if( iColumn==pIdx->pTable->iPKey ){ - iColumn = -1; - } - iSortOrder = pIdx->aSortOrder[i]; - zColl = pIdx->azColl[i]; - }else{ - iColumn = -1; - iSortOrder = 0; - zColl = pColl->zName; - } - if( pExpr->iColumn!=iColumn || sqlite3StrICmp(pColl->zName, zColl) ){ - /* Term j of the ORDER BY clause does not match column i of the index */ - if( i<nEqCol ){ - /* If an index column that is constrained by == fails to match an - ** ORDER BY term, that is OK. Just ignore that column of the index - */ - continue; - }else if( i==pIdx->nColumn ){ - /* Index column i is the rowid. All other terms match. */ - break; - }else{ - /* If an index column fails to match and is not constrained by == - ** then the index cannot satisfy the ORDER BY constraint. 
- */ - return 0; - } - } - assert( pIdx->aSortOrder!=0 || iColumn==-1 ); - assert( pTerm->sortOrder==0 || pTerm->sortOrder==1 ); - assert( iSortOrder==0 || iSortOrder==1 ); - termSortOrder = iSortOrder ^ pTerm->sortOrder; - if( i>nEqCol ){ - if( termSortOrder!=sortOrder ){ - /* Indices can only be used if all ORDER BY terms past the - ** equality constraints are all either DESC or ASC. */ - return 0; - } - }else{ - sortOrder = termSortOrder; - } - j++; - pTerm++; - if( iColumn<0 && !referencesOtherTables(pOrderBy, pMaskSet, j, base) ){ - /* If the indexed column is the primary key and everything matches - ** so far and none of the ORDER BY terms to the right reference other - ** tables in the join, then we are assured that the index can be used - ** to sort because the primary key is unique and so none of the other - ** columns will make any difference - */ - j = nTerm; - } - } - - *pbRev = sortOrder!=0; - if( j>=nTerm ){ - /* All terms of the ORDER BY clause are covered by this index so - ** this index can be used for sorting. */ - return 1; - } - if( pIdx->onError!=OE_None && i==pIdx->nColumn - && !referencesOtherTables(pOrderBy, pMaskSet, j, base) ){ - /* All terms of this index match some prefix of the ORDER BY clause - ** and the index is UNIQUE and no terms on the tail of the ORDER BY - ** clause reference other tables in a join. If this is all true then - ** the order by clause is superfluous. */ - return 1; - } - return 0; -} - -/* -** Prepare a crude estimate of the logarithm of the input value. -** The results need not be exact. This is only used for estimating -** the total cost of performing operations with O(logN) or O(NlogN) -** complexity. Because N is just a guess, it is no great tragedy if -** logN is a little off. -*/ -static double estLog(double N){ - double logN = 1; - double x = 10; - while( N>x ){ - logN += 1; - x *= 10; - } - return logN; +/*************************************************************************** +** Routines with file scope above. Interface to the rest of the where.c +** subsystem follows. +***************************************************************************/ + +/* +** This routine identifies subexpressions in the WHERE clause where +** each subexpression is separated by the AND operator or some other +** operator specified in the op parameter. The WhereClause structure +** is filled with pointers to subexpressions. For example: +** +** WHERE a=='hello' AND coalesce(b,11)<10 AND (c+12!=d OR c==22) +** \________/ \_______________/ \________________/ +** slot[0] slot[1] slot[2] +** +** The original WHERE clause in pExpr is unaltered. All this routine +** does is make slot[] entries point to substructure within pExpr. +** +** In the previous sentence and in the diagram, "slot[]" refers to +** the WhereClause.a[] array. The slot[] array grows as needed to contain +** all terms of the WHERE clause. +*/ +SQLITE_PRIVATE void sqlite3WhereSplit(WhereClause *pWC, Expr *pExpr, u8 op){ + Expr *pE2 = sqlite3ExprSkipCollate(pExpr); + pWC->op = op; + if( pE2==0 ) return; + if( pE2->op!=op ){ + whereClauseInsert(pWC, pExpr, 0); + }else{ + sqlite3WhereSplit(pWC, pE2->pLeft, op); + sqlite3WhereSplit(pWC, pE2->pRight, op); + } +} + +/* +** Initialize a preallocated WhereClause structure. 
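
As an aside, the recursive AND-splitting that sqlite3WhereSplit() performs can be pictured with a small standalone sketch. The Node structure and splitAnd() below are illustrative stand-ins, not SQLite's Expr/WhereClause types; they only show how leaves of a binary AND tree end up as consecutive slot[] entries:

    #include <stdio.h>

    /* Toy expression node: either a leaf with a label, or an AND of two children. */
    typedef struct Node Node;
    struct Node {
      const char *zLabel;   /* non-NULL for a leaf term */
      Node *pLeft, *pRight; /* children when this node is an AND */
    };

    /* Recursively collect the leaves of an AND tree into slot[], the way
    ** sqlite3WhereSplit() records one WhereClause slot per conjunct. */
    static int splitAnd(Node *p, Node **slot, int n){
      if( p==0 ) return n;
      if( p->zLabel ){
        slot[n++] = p;                    /* a leaf becomes its own slot */
      }else{
        n = splitAnd(p->pLeft, slot, n);
        n = splitAnd(p->pRight, slot, n);
      }
      return n;
    }

    int main(void){
      /* (a=='hello') AND (coalesce(b,11)<10) AND (c+12!=d OR c==22) */
      Node t1 = {"a=='hello'",0,0}, t2 = {"coalesce(b,11)<10",0,0};
      Node t3 = {"c+12!=d OR c==22",0,0};
      Node and1 = {0,&t1,&t2}, root = {0,&and1,&t3};
      Node *slot[8];
      int n = splitAnd(&root, slot, 0);
      for(int i=0; i<n; i++) printf("slot[%d] = %s\n", i, slot[i]->zLabel);
      return 0;
    }
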
+*/ +SQLITE_PRIVATE void sqlite3WhereClauseInit( + WhereClause *pWC, /* The WhereClause to be initialized */ + WhereInfo *pWInfo /* The WHERE processing context */ +){ + pWC->pWInfo = pWInfo; + pWC->pOuter = 0; + pWC->nTerm = 0; + pWC->nSlot = ArraySize(pWC->aStatic); + pWC->a = pWC->aStatic; +} + +/* +** Deallocate a WhereClause structure. The WhereClause structure +** itself is not freed. This routine is the inverse of +** sqlite3WhereClauseInit(). +*/ +SQLITE_PRIVATE void sqlite3WhereClauseClear(WhereClause *pWC){ + int i; + WhereTerm *a; + sqlite3 *db = pWC->pWInfo->pParse->db; + for(i=pWC->nTerm-1, a=pWC->a; i>=0; i--, a++){ + if( a->wtFlags & TERM_DYNAMIC ){ + sqlite3ExprDelete(db, a->pExpr); + } + if( a->wtFlags & TERM_ORINFO ){ + whereOrInfoDelete(db, a->u.pOrInfo); + }else if( a->wtFlags & TERM_ANDINFO ){ + whereAndInfoDelete(db, a->u.pAndInfo); + } + } + if( pWC->a!=pWC->aStatic ){ + sqlite3DbFree(db, pWC->a); + } +} + + +/* +** These routines walk (recursively) an expression tree and generate +** a bitmask indicating which tables are used in that expression +** tree. +*/ +SQLITE_PRIVATE Bitmask sqlite3WhereExprUsage(WhereMaskSet *pMaskSet, Expr *p){ + Bitmask mask = 0; + if( p==0 ) return 0; + if( p->op==TK_COLUMN ){ + mask = sqlite3WhereGetMask(pMaskSet, p->iTable); + return mask; + } + mask = sqlite3WhereExprUsage(pMaskSet, p->pRight); + mask |= sqlite3WhereExprUsage(pMaskSet, p->pLeft); + if( ExprHasProperty(p, EP_xIsSelect) ){ + mask |= exprSelectUsage(pMaskSet, p->x.pSelect); + }else{ + mask |= sqlite3WhereExprListUsage(pMaskSet, p->x.pList); + } + return mask; +} +SQLITE_PRIVATE Bitmask sqlite3WhereExprListUsage(WhereMaskSet *pMaskSet, ExprList *pList){ + int i; + Bitmask mask = 0; + if( pList ){ + for(i=0; i<pList->nExpr; i++){ + mask |= sqlite3WhereExprUsage(pMaskSet, pList->a[i].pExpr); + } + } + return mask; +} + + +/* +** Call exprAnalyze on all terms in a WHERE clause. +** +** Note that exprAnalyze() might add new virtual terms onto the +** end of the WHERE clause. We do not want to analyze these new +** virtual terms, so start analyzing at the end and work forward +** so that the added virtual terms are never processed. +*/ +SQLITE_PRIVATE void sqlite3WhereExprAnalyze( + SrcList *pTabList, /* the FROM clause */ + WhereClause *pWC /* the WHERE clause to be analyzed */ +){ + int i; + for(i=pWC->nTerm-1; i>=0; i--){ + exprAnalyze(pTabList, pWC, i); + } +} + +/* +** For table-valued-functions, transform the function arguments into +** new WHERE clause terms. +** +** Each function argument translates into an equality constraint against +** a HIDDEN column in the table. 
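
The sqlite3WhereExprUsage() walk above ORs together one bit per table referenced by an expression. A rough standalone sketch of that idea follows; XNode and exprUsage() are simplified stand-ins (in the real code, cursor numbers are first mapped to bit positions through the WhereMaskSet rather than used directly as shift amounts):

    #include <stdio.h>

    typedef unsigned long long Bitmask;

    /* Toy expression node: iCursor>=0 marks a column reference. */
    typedef struct XNode XNode;
    struct XNode {
      int iCursor;            /* cursor number for a column reference, else -1 */
      XNode *pLeft, *pRight;  /* operands for operators */
    };

    /* OR together one bit per cursor used anywhere in the tree. */
    static Bitmask exprUsage(XNode *p){
      if( p==0 ) return 0;
      if( p->iCursor>=0 ) return ((Bitmask)1) << p->iCursor;
      return exprUsage(p->pLeft) | exprUsage(p->pRight);
    }

    int main(void){
      /* t0.a = t2.b  ->  uses cursors 0 and 2 */
      XNode a = {0,0,0}, b = {2,0,0}, eq = {-1,&a,&b};
      printf("usage mask = 0x%llx\n", exprUsage(&eq));  /* prints 0x5 */
      return 0;
    }
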
+*/ +SQLITE_PRIVATE void sqlite3WhereTabFuncArgs( + Parse *pParse, /* Parsing context */ + struct SrcList_item *pItem, /* The FROM clause term to process */ + WhereClause *pWC /* Xfer function arguments to here */ +){ + Table *pTab; + int j, k; + ExprList *pArgs; + Expr *pColRef; + Expr *pTerm; + if( pItem->fg.isTabFunc==0 ) return; + pTab = pItem->pTab; + assert( pTab!=0 ); + pArgs = pItem->u1.pFuncArg; + if( pArgs==0 ) return; + for(j=k=0; j<pArgs->nExpr; j++){ + while( k<pTab->nCol && (pTab->aCol[k].colFlags & COLFLAG_HIDDEN)==0 ){k++;} + if( k>=pTab->nCol ){ + sqlite3ErrorMsg(pParse, "too many arguments on %s() - max %d", + pTab->zName, j); + return; + } + pColRef = sqlite3PExpr(pParse, TK_COLUMN, 0, 0, 0); + if( pColRef==0 ) return; + pColRef->iTable = pItem->iCursor; + pColRef->iColumn = k++; + pColRef->pTab = pTab; + pTerm = sqlite3PExpr(pParse, TK_EQ, pColRef, + sqlite3ExprDup(pParse->db, pArgs->a[j].pExpr, 0), 0); + whereClauseInsert(pWC, pTerm, TERM_DYNAMIC); + } +} + +/************** End of whereexpr.c *******************************************/ +/************** Begin file where.c *******************************************/ +/* +** 2001 September 15 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** This module contains C code that generates VDBE code used to process +** the WHERE clause of SQL statements. This module is responsible for +** generating the code that loops through a table looking for applicable +** rows. Indices are selected and used to speed the search when doing +** so is applicable. Because this module is responsible for selecting +** indices, you might also think of this module as the "query optimizer". +*/ +/* #include "sqliteInt.h" */ +/* #include "whereInt.h" */ + +/* Forward declaration of methods */ +static int whereLoopResize(sqlite3*, WhereLoop*, int); + +/* Test variable that can be set to enable WHERE tracing */ +#if defined(SQLITE_TEST) || defined(SQLITE_DEBUG) +/***/ int sqlite3WhereTrace = 0; +#endif + + +/* +** Return the estimated number of output rows from a WHERE clause +*/ +SQLITE_PRIVATE u64 sqlite3WhereOutputRowCount(WhereInfo *pWInfo){ + return sqlite3LogEstToInt(pWInfo->nRowOut); +} + +/* +** Return one of the WHERE_DISTINCT_xxxxx values to indicate how this +** WHERE clause returns outputs for DISTINCT processing. +*/ +SQLITE_PRIVATE int sqlite3WhereIsDistinct(WhereInfo *pWInfo){ + return pWInfo->eDistinct; +} + +/* +** Return TRUE if the WHERE clause returns rows in ORDER BY order. +** Return FALSE if the output needs to be sorted. +*/ +SQLITE_PRIVATE int sqlite3WhereIsOrdered(WhereInfo *pWInfo){ + return pWInfo->nOBSat; +} + +/* +** Return the VDBE address or label to jump to in order to continue +** immediately with the next row of a WHERE clause. +*/ +SQLITE_PRIVATE int sqlite3WhereContinueLabel(WhereInfo *pWInfo){ + assert( pWInfo->iContinue!=0 ); + return pWInfo->iContinue; +} + +/* +** Return the VDBE address or label to jump to in order to break +** out of a WHERE loop. +*/ +SQLITE_PRIVATE int sqlite3WhereBreakLabel(WhereInfo *pWInfo){ + return pWInfo->iBreak; +} + +/* +** Return ONEPASS_OFF (0) if an UPDATE or DELETE statement is unable to +** operate directly on the rowis returned by a WHERE clause. 
Return +** ONEPASS_SINGLE (1) if the statement can operation directly because only +** a single row is to be changed. Return ONEPASS_MULTI (2) if the one-pass +** optimization can be used on multiple +** +** If the ONEPASS optimization is used (if this routine returns true) +** then also write the indices of open cursors used by ONEPASS +** into aiCur[0] and aiCur[1]. iaCur[0] gets the cursor of the data +** table and iaCur[1] gets the cursor used by an auxiliary index. +** Either value may be -1, indicating that cursor is not used. +** Any cursors returned will have been opened for writing. +** +** aiCur[0] and aiCur[1] both get -1 if the where-clause logic is +** unable to use the ONEPASS optimization. +*/ +SQLITE_PRIVATE int sqlite3WhereOkOnePass(WhereInfo *pWInfo, int *aiCur){ + memcpy(aiCur, pWInfo->aiCurOnePass, sizeof(int)*2); +#ifdef WHERETRACE_ENABLED + if( sqlite3WhereTrace && pWInfo->eOnePass!=ONEPASS_OFF ){ + sqlite3DebugPrintf("%s cursors: %d %d\n", + pWInfo->eOnePass==ONEPASS_SINGLE ? "ONEPASS_SINGLE" : "ONEPASS_MULTI", + aiCur[0], aiCur[1]); + } +#endif + return pWInfo->eOnePass; +} + +/* +** Move the content of pSrc into pDest +*/ +static void whereOrMove(WhereOrSet *pDest, WhereOrSet *pSrc){ + pDest->n = pSrc->n; + memcpy(pDest->a, pSrc->a, pDest->n*sizeof(pDest->a[0])); +} + +/* +** Try to insert a new prerequisite/cost entry into the WhereOrSet pSet. +** +** The new entry might overwrite an existing entry, or it might be +** appended, or it might be discarded. Do whatever is the right thing +** so that pSet keeps the N_OR_COST best entries seen so far. +*/ +static int whereOrInsert( + WhereOrSet *pSet, /* The WhereOrSet to be updated */ + Bitmask prereq, /* Prerequisites of the new entry */ + LogEst rRun, /* Run-cost of the new entry */ + LogEst nOut /* Number of outputs for the new entry */ +){ + u16 i; + WhereOrCost *p; + for(i=pSet->n, p=pSet->a; i>0; i--, p++){ + if( rRun<=p->rRun && (prereq & p->prereq)==prereq ){ + goto whereOrInsert_done; + } + if( p->rRun<=rRun && (p->prereq & prereq)==p->prereq ){ + return 0; + } + } + if( pSet->n<N_OR_COST ){ + p = &pSet->a[pSet->n++]; + p->nOut = nOut; + }else{ + p = pSet->a; + for(i=1; i<pSet->n; i++){ + if( p->rRun>pSet->a[i].rRun ) p = pSet->a + i; + } + if( p->rRun<=rRun ) return 0; + } +whereOrInsert_done: + p->prereq = prereq; + p->rRun = rRun; + if( p->nOut>nOut ) p->nOut = nOut; + return 1; +} + +/* +** Return the bitmask for the given cursor number. Return 0 if +** iCursor is not in the set. +*/ +SQLITE_PRIVATE Bitmask sqlite3WhereGetMask(WhereMaskSet *pMaskSet, int iCursor){ + int i; + assert( pMaskSet->n<=(int)sizeof(Bitmask)*8 ); + for(i=0; i<pMaskSet->n; i++){ + if( pMaskSet->ix[i]==iCursor ){ + return MASKBIT(i); + } + } + return 0; +} + +/* +** Create a new mask for cursor iCursor. +** +** There is one cursor per table in the FROM clause. The number of +** tables in the FROM clause is limited by a test early in the +** sqlite3WhereBegin() routine. So we know that the pMaskSet->ix[] +** array will never overflow. +*/ +static void createMask(WhereMaskSet *pMaskSet, int iCursor){ + assert( pMaskSet->n < ArraySize(pMaskSet->ix) ); + pMaskSet->ix[pMaskSet->n++] = iCursor; +} + +/* +** Advance to the next WhereTerm that matches according to the criteria +** established when the pScan object was initialized by whereScanInit(). +** Return NULL if there are no more matching WhereTerms. 
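
The whereOrInsert() routine above keeps only the N_OR_COST cheapest (prerequisite, cost) pairs and drops entries dominated by an existing one. A simplified sketch of that bounded "keep the best few" insertion is shown below; plain integers stand in for LogEst costs, the capacity of 3 and the names are illustrative, and the nOut bookkeeping is omitted:

    #include <stdio.h>

    typedef unsigned long long Bitmask;
    #define N_BEST 3

    typedef struct { Bitmask prereq; int rRun; } Entry;
    typedef struct { int n; Entry a[N_BEST]; } BestSet;

    /* Insert (prereq,rRun) unless an existing entry is at least as good on
    ** both cost and prerequisites; overwrite the worst entry when full. */
    static int bestInsert(BestSet *pSet, Bitmask prereq, int rRun){
      int i;
      Entry *p;
      for(i=0; i<pSet->n; i++){
        p = &pSet->a[i];
        if( rRun<=p->rRun && (prereq & p->prereq)==prereq ){
          p->prereq = prereq; p->rRun = rRun;   /* new entry dominates p */
          return 1;
        }
        if( p->rRun<=rRun && (p->prereq & prereq)==p->prereq ){
          return 0;                              /* p dominates the new entry */
        }
      }
      if( pSet->n<N_BEST ){
        p = &pSet->a[pSet->n++];
      }else{
        p = &pSet->a[0];                         /* find the costliest entry */
        for(i=1; i<N_BEST; i++) if( pSet->a[i].rRun > p->rRun ) p = &pSet->a[i];
        if( p->rRun<=rRun ) return 0;            /* not better than the worst kept */
      }
      p->prereq = prereq; p->rRun = rRun;
      return 1;
    }

    int main(void){
      BestSet s = {0};
      bestInsert(&s, 0x1, 50);
      bestInsert(&s, 0x3, 40);
      bestInsert(&s, 0x1, 45);   /* same prereq, lower cost: replaces the first */
      printf("kept %d entries\n", s.n);
      return 0;
    }
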
+*/ +static WhereTerm *whereScanNext(WhereScan *pScan){ + int iCur; /* The cursor on the LHS of the term */ + i16 iColumn; /* The column on the LHS of the term. -1 for IPK */ + Expr *pX; /* An expression being tested */ + WhereClause *pWC; /* Shorthand for pScan->pWC */ + WhereTerm *pTerm; /* The term being tested */ + int k = pScan->k; /* Where to start scanning */ + + while( pScan->iEquiv<=pScan->nEquiv ){ + iCur = pScan->aiCur[pScan->iEquiv-1]; + iColumn = pScan->aiColumn[pScan->iEquiv-1]; + if( iColumn==XN_EXPR && pScan->pIdxExpr==0 ) return 0; + while( (pWC = pScan->pWC)!=0 ){ + for(pTerm=pWC->a+k; k<pWC->nTerm; k++, pTerm++){ + if( pTerm->leftCursor==iCur + && pTerm->u.leftColumn==iColumn + && (iColumn!=XN_EXPR + || sqlite3ExprCompare(pTerm->pExpr->pLeft,pScan->pIdxExpr,iCur)==0) + && (pScan->iEquiv<=1 || !ExprHasProperty(pTerm->pExpr, EP_FromJoin)) + ){ + if( (pTerm->eOperator & WO_EQUIV)!=0 + && pScan->nEquiv<ArraySize(pScan->aiCur) + && (pX = sqlite3ExprSkipCollate(pTerm->pExpr->pRight))->op==TK_COLUMN + ){ + int j; + for(j=0; j<pScan->nEquiv; j++){ + if( pScan->aiCur[j]==pX->iTable + && pScan->aiColumn[j]==pX->iColumn ){ + break; + } + } + if( j==pScan->nEquiv ){ + pScan->aiCur[j] = pX->iTable; + pScan->aiColumn[j] = pX->iColumn; + pScan->nEquiv++; + } + } + if( (pTerm->eOperator & pScan->opMask)!=0 ){ + /* Verify the affinity and collating sequence match */ + if( pScan->zCollName && (pTerm->eOperator & WO_ISNULL)==0 ){ + CollSeq *pColl; + Parse *pParse = pWC->pWInfo->pParse; + pX = pTerm->pExpr; + if( !sqlite3IndexAffinityOk(pX, pScan->idxaff) ){ + continue; + } + assert(pX->pLeft); + pColl = sqlite3BinaryCompareCollSeq(pParse, + pX->pLeft, pX->pRight); + if( pColl==0 ) pColl = pParse->db->pDfltColl; + if( sqlite3StrICmp(pColl->zName, pScan->zCollName) ){ + continue; + } + } + if( (pTerm->eOperator & (WO_EQ|WO_IS))!=0 + && (pX = pTerm->pExpr->pRight)->op==TK_COLUMN + && pX->iTable==pScan->aiCur[0] + && pX->iColumn==pScan->aiColumn[0] + ){ + testcase( pTerm->eOperator & WO_IS ); + continue; + } + pScan->k = k+1; + return pTerm; + } + } + } + pScan->pWC = pScan->pWC->pOuter; + k = 0; + } + pScan->pWC = pScan->pOrigWC; + k = 0; + pScan->iEquiv++; + } + return 0; +} + +/* +** Initialize a WHERE clause scanner object. Return a pointer to the +** first match. Return NULL if there are no matches. +** +** The scanner will be searching the WHERE clause pWC. It will look +** for terms of the form "X <op> <expr>" where X is column iColumn of table +** iCur. The <op> must be one of the operators described by opMask. +** +** If the search is for X and the WHERE clause contains terms of the +** form X=Y then this routine might also return terms of the form +** "Y <op> <expr>". The number of levels of transitivity is limited, +** but is enough to handle most commonly occurring SQL statements. +** +** If X is not the INTEGER PRIMARY KEY then X must be compatible with +** index pIdx. 
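
The equivalence handling in whereScanNext() above grows the aiCur[]/aiColumn[] arrays whenever a usable X=Y term is found, so that constraints on Y can drive an index on X. A much simplified standalone sketch of that transitive expansion follows (TTerm and collectEquiv() are illustrative only, and none of the operator-mask or collation checks of the real scanner are modelled):

    #include <stdio.h>

    /* A toy WHERE term: a left-hand column and, optionally, a column RHS. */
    typedef struct {
      int lCur, lCol;        /* left-hand column */
      int isColRhs;          /* true if the RHS is another column */
      int rCur, rCol;        /* RHS column when isColRhs */
    } TTerm;

    /* Starting from (cur,col), follow X=Y terms to build the set of columns
    ** known equal to it, the way whereScanNext() grows aiCur[]/aiColumn[]. */
    static int collectEquiv(TTerm *a, int nTerm, int cur, int col,
                            int aCur[8], int aCol[8]){
      int n = 1, i, j;
      aCur[0] = cur; aCol[0] = col;
      for(i=0; i<n; i++){                 /* scan each member already in the set */
        for(j=0; j<nTerm; j++){
          if( a[j].isColRhs && a[j].lCur==aCur[i] && a[j].lCol==aCol[i] && n<8 ){
            int k, seen = 0;
            for(k=0; k<n; k++) if( aCur[k]==a[j].rCur && aCol[k]==a[j].rCol ) seen = 1;
            if( !seen ){ aCur[n] = a[j].rCur; aCol[n] = a[j].rCol; n++; }
          }
        }
      }
      return n;
    }

    int main(void){
      /* t0.x = t1.y  and  t1.y = t2.z */
      TTerm terms[] = { {0,0, 1, 1,0}, {1,0, 1, 2,0} };
      int aCur[8], aCol[8];
      int n = collectEquiv(terms, 2, 0, 0, aCur, aCol);
      printf("equivalence class has %d members\n", n);   /* prints 3 */
      return 0;
    }
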
+*/ +static WhereTerm *whereScanInit( + WhereScan *pScan, /* The WhereScan object being initialized */ + WhereClause *pWC, /* The WHERE clause to be scanned */ + int iCur, /* Cursor to scan for */ + int iColumn, /* Column to scan for */ + u32 opMask, /* Operator(s) to scan for */ + Index *pIdx /* Must be compatible with this index */ +){ + int j = 0; + + /* memset(pScan, 0, sizeof(*pScan)); */ + pScan->pOrigWC = pWC; + pScan->pWC = pWC; + pScan->pIdxExpr = 0; + if( pIdx ){ + j = iColumn; + iColumn = pIdx->aiColumn[j]; + if( iColumn==XN_EXPR ) pScan->pIdxExpr = pIdx->aColExpr->a[j].pExpr; + } + if( pIdx && iColumn>=0 ){ + pScan->idxaff = pIdx->pTable->aCol[iColumn].affinity; + pScan->zCollName = pIdx->azColl[j]; + }else{ + pScan->idxaff = 0; + pScan->zCollName = 0; + } + pScan->opMask = opMask; + pScan->k = 0; + pScan->aiCur[0] = iCur; + pScan->aiColumn[0] = iColumn; + pScan->nEquiv = 1; + pScan->iEquiv = 1; + return whereScanNext(pScan); +} + +/* +** Search for a term in the WHERE clause that is of the form "X <op> <expr>" +** where X is a reference to the iColumn of table iCur and <op> is one of +** the WO_xx operator codes specified by the op parameter. +** Return a pointer to the term. Return 0 if not found. +** +** If pIdx!=0 then search for terms matching the iColumn-th column of pIdx +** rather than the iColumn-th column of table iCur. +** +** The term returned might by Y=<expr> if there is another constraint in +** the WHERE clause that specifies that X=Y. Any such constraints will be +** identified by the WO_EQUIV bit in the pTerm->eOperator field. The +** aiCur[]/iaColumn[] arrays hold X and all its equivalents. There are 11 +** slots in aiCur[]/aiColumn[] so that means we can look for X plus up to 10 +** other equivalent values. Hence a search for X will return <expr> if X=A1 +** and A1=A2 and A2=A3 and ... and A9=A10 and A10=<expr>. +** +** If there are multiple terms in the WHERE clause of the form "X <op> <expr>" +** then try for the one with no dependencies on <expr> - in other words where +** <expr> is a constant expression of some kind. Only return entries of +** the form "X <op> Y" where Y is a column in another table if no terms of +** the form "X <op> <const-expr>" exist. If no terms with a constant RHS +** exist, try to return a term that does not use WO_EQUIV. +*/ +SQLITE_PRIVATE WhereTerm *sqlite3WhereFindTerm( + WhereClause *pWC, /* The WHERE clause to be searched */ + int iCur, /* Cursor number of LHS */ + int iColumn, /* Column number of LHS */ + Bitmask notReady, /* RHS must not overlap with this mask */ + u32 op, /* Mask of WO_xx values describing operator */ + Index *pIdx /* Must be compatible with this index, if not NULL */ +){ + WhereTerm *pResult = 0; + WhereTerm *p; + WhereScan scan; + + p = whereScanInit(&scan, pWC, iCur, iColumn, op, pIdx); + op &= WO_EQ|WO_IS; + while( p ){ + if( (p->prereqRight & notReady)==0 ){ + if( p->prereqRight==0 && (p->eOperator&op)!=0 ){ + testcase( p->eOperator & WO_IS ); + return p; + } + if( pResult==0 ) pResult = p; + } + p = whereScanNext(&scan); + } + return pResult; +} + +/* +** This function searches pList for an entry that matches the iCol-th column +** of index pIdx. +** +** If such an expression is found, its index in pList->a[] is returned. If +** no expression is found, -1 is returned. 
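
The preference described for sqlite3WhereFindTerm(), return a term with a constant right-hand side when one exists, otherwise fall back to the first usable term, can be illustrated with a short sketch. FTerm and findTerm() below are illustrative stand-ins that ignore operator masks and collations:

    #include <stdio.h>

    typedef unsigned long long Bitmask;

    /* A toy constraint: the column it restricts and which other tables its
    ** right-hand side depends on (0 means a constant RHS). */
    typedef struct { int iCol; Bitmask prereqRight; } FTerm;

    /* Return the index of a usable term on column iCol, preferring one whose
    ** RHS is constant, as sqlite3WhereFindTerm() does.  Returns -1 if none. */
    static int findTerm(FTerm *a, int n, int iCol, Bitmask notReady){
      int i, iResult = -1;
      for(i=0; i<n; i++){
        if( a[i].iCol!=iCol ) continue;
        if( a[i].prereqRight & notReady ) continue;  /* RHS not yet available */
        if( a[i].prereqRight==0 ) return i;          /* constant RHS: best case */
        if( iResult<0 ) iResult = i;                 /* remember first usable */
      }
      return iResult;
    }

    int main(void){
      /* x = t1.y  (depends on table 1)  and  x = 5  (constant) */
      FTerm a[] = { {0, 0x2}, {0, 0} };
      printf("chosen term = %d\n", findTerm(a, 2, 0, 0));  /* prints 1 */
      return 0;
    }
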
+*/ +static int findIndexCol( + Parse *pParse, /* Parse context */ + ExprList *pList, /* Expression list to search */ + int iBase, /* Cursor for table associated with pIdx */ + Index *pIdx, /* Index to match column of */ + int iCol /* Column of index to match */ +){ + int i; + const char *zColl = pIdx->azColl[iCol]; + + for(i=0; i<pList->nExpr; i++){ + Expr *p = sqlite3ExprSkipCollate(pList->a[i].pExpr); + if( p->op==TK_COLUMN + && p->iColumn==pIdx->aiColumn[iCol] + && p->iTable==iBase + ){ + CollSeq *pColl = sqlite3ExprCollSeq(pParse, pList->a[i].pExpr); + if( pColl && 0==sqlite3StrICmp(pColl->zName, zColl) ){ + return i; + } + } + } + + return -1; +} + +/* +** Return TRUE if the iCol-th column of index pIdx is NOT NULL +*/ +static int indexColumnNotNull(Index *pIdx, int iCol){ + int j; + assert( pIdx!=0 ); + assert( iCol>=0 && iCol<pIdx->nColumn ); + j = pIdx->aiColumn[iCol]; + if( j>=0 ){ + return pIdx->pTable->aCol[j].notNull; + }else if( j==(-1) ){ + return 1; + }else{ + assert( j==(-2) ); + return 0; /* Assume an indexed expression can always yield a NULL */ + + } +} + +/* +** Return true if the DISTINCT expression-list passed as the third argument +** is redundant. +** +** A DISTINCT list is redundant if any subset of the columns in the +** DISTINCT list are collectively unique and individually non-null. +*/ +static int isDistinctRedundant( + Parse *pParse, /* Parsing context */ + SrcList *pTabList, /* The FROM clause */ + WhereClause *pWC, /* The WHERE clause */ + ExprList *pDistinct /* The result set that needs to be DISTINCT */ +){ + Table *pTab; + Index *pIdx; + int i; + int iBase; + + /* If there is more than one table or sub-select in the FROM clause of + ** this query, then it will not be possible to show that the DISTINCT + ** clause is redundant. */ + if( pTabList->nSrc!=1 ) return 0; + iBase = pTabList->a[0].iCursor; + pTab = pTabList->a[0].pTab; + + /* If any of the expressions is an IPK column on table iBase, then return + ** true. Note: The (p->iTable==iBase) part of this test may be false if the + ** current SELECT is a correlated sub-query. + */ + for(i=0; i<pDistinct->nExpr; i++){ + Expr *p = sqlite3ExprSkipCollate(pDistinct->a[i].pExpr); + if( p->op==TK_COLUMN && p->iTable==iBase && p->iColumn<0 ) return 1; + } + + /* Loop through all indices on the table, checking each to see if it makes + ** the DISTINCT qualifier redundant. It does so if: + ** + ** 1. The index is itself UNIQUE, and + ** + ** 2. All of the columns in the index are either part of the pDistinct + ** list, or else the WHERE clause contains a term of the form "col=X", + ** where X is a constant value. The collation sequences of the + ** comparison and select-list expressions must match those of the index. + ** + ** 3. All of those index columns for which the WHERE clause does not + ** contain a "col=X" term are subject to a NOT NULL constraint. + */ + for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ + if( !IsUniqueIndex(pIdx) ) continue; + for(i=0; i<pIdx->nKeyCol; i++){ + if( 0==sqlite3WhereFindTerm(pWC, iBase, i, ~(Bitmask)0, WO_EQ, pIdx) ){ + if( findIndexCol(pParse, pDistinct, iBase, pIdx, i)<0 ) break; + if( indexColumnNotNull(pIdx, i)==0 ) break; + } + } + if( i==pIdx->nKeyCol ){ + /* This index implies that the DISTINCT qualifier is redundant. */ + return 1; + } + } + + return 0; +} + + +/* +** Estimate the logarithm of the input value to base 2. +*/ +static LogEst estLog(LogEst N){ + return N<=10 ? 
0 : sqlite3LogEst(N) - 33; +} + +/* +** Convert OP_Column opcodes to OP_Copy in previously generated code. +** +** This routine runs over generated VDBE code and translates OP_Column +** opcodes into OP_Copy when the table is being accessed via co-routine +** instead of via table lookup. +** +** If the bIncrRowid parameter is 0, then any OP_Rowid instructions on +** cursor iTabCur are transformed into OP_Null. Or, if bIncrRowid is non-zero, +** then each OP_Rowid is transformed into an instruction to increment the +** value stored in its output register. +*/ +static void translateColumnToCopy( + Vdbe *v, /* The VDBE containing code to translate */ + int iStart, /* Translate from this opcode to the end */ + int iTabCur, /* OP_Column/OP_Rowid references to this table */ + int iRegister, /* The first column is in this register */ + int bIncrRowid /* If non-zero, transform OP_rowid to OP_AddImm(1) */ +){ + VdbeOp *pOp = sqlite3VdbeGetOp(v, iStart); + int iEnd = sqlite3VdbeCurrentAddr(v); + for(; iStart<iEnd; iStart++, pOp++){ + if( pOp->p1!=iTabCur ) continue; + if( pOp->opcode==OP_Column ){ + pOp->opcode = OP_Copy; + pOp->p1 = pOp->p2 + iRegister; + pOp->p2 = pOp->p3; + pOp->p3 = 0; + }else if( pOp->opcode==OP_Rowid ){ + if( bIncrRowid ){ + /* Increment the value stored in the P2 operand of the OP_Rowid. */ + pOp->opcode = OP_AddImm; + pOp->p1 = pOp->p2; + pOp->p2 = 1; + }else{ + pOp->opcode = OP_Null; + pOp->p1 = 0; + pOp->p3 = 0; + } + } + } } /* ** Two routines for printing the content of an sqlite3_index_info ** structure. Used for testing and debugging only. If neither ** SQLITE_TEST or SQLITE_DEBUG are defined, then these routines ** are no-ops. */ -#if !defined(SQLITE_OMIT_VIRTUALTABLE) && defined(SQLITE_DEBUG) +#if !defined(SQLITE_OMIT_VIRTUALTABLE) && defined(WHERETRACE_ENABLED) static void TRACE_IDX_INPUTS(sqlite3_index_info *p){ int i; if( !sqlite3WhereTrace ) return; for(i=0; i<p->nConstraint; i++){ sqlite3DebugPrintf(" constraint[%d]: col=%d termid=%d op=%d usabled=%d\n", @@ -89633,111 +123794,17 @@ } sqlite3DebugPrintf(" idxNum=%d\n", p->idxNum); sqlite3DebugPrintf(" idxStr=%s\n", p->idxStr); sqlite3DebugPrintf(" orderByConsumed=%d\n", p->orderByConsumed); sqlite3DebugPrintf(" estimatedCost=%g\n", p->estimatedCost); + sqlite3DebugPrintf(" estimatedRows=%lld\n", p->estimatedRows); } #else #define TRACE_IDX_INPUTS(A) #define TRACE_IDX_OUTPUTS(A) #endif -/* -** Required because bestIndex() is called by bestOrClauseIndex() -*/ -static void bestIndex( - Parse*, WhereClause*, struct SrcList_item*, Bitmask, ExprList*, WhereCost*); - -/* -** This routine attempts to find an scanning strategy that can be used -** to optimize an 'OR' expression that is part of a WHERE clause. -** -** The table associated with FROM clause term pSrc may be either a -** regular B-Tree table or a virtual table. 
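
The estLog() function and the sqlite3LogEst()/sqlite3LogEstToInt() calls in this file operate on LogEst values, in which a quantity x is stored as roughly 10 times its base-2 logarithm (so 1 encodes as 0, 2 as 10, and 1000 as about 99), which turns multiplication of estimates into addition. The standalone sketch below shows only that encoding idea; logEst() and logEstToInt() here use floating point and are not the library's exact integer implementations:

    #include <math.h>
    #include <stdio.h>

    /* Encode x as roughly 10*log2(x); decode by inverting.  SQLite's own
    ** sqlite3LogEst() does this with integer arithmetic and a lookup table. */
    static int logEst(double x){ return (int)(10.0*log2(x) + 0.5); }
    static double logEstToInt(int e){ return pow(2.0, e/10.0); }

    int main(void){
      int a = logEst(1000);   /* about 100 here; the library stores 99 */
      int b = logEst(50);     /* about 56 */
      /* Multiplying row-count estimates is just adding their LogEst values. */
      printf("1000 * 50 is roughly %.0f rows (LogEst %d)\n",
             logEstToInt(a+b), a+b);
      return 0;
    }
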
-*/ -static void bestOrClauseIndex( - Parse *pParse, /* The parsing context */ - WhereClause *pWC, /* The WHERE clause */ - struct SrcList_item *pSrc, /* The FROM clause term to search */ - Bitmask notReady, /* Mask of cursors that are not available */ - ExprList *pOrderBy, /* The ORDER BY clause */ - WhereCost *pCost /* Lowest cost query plan */ -){ -#ifndef SQLITE_OMIT_OR_OPTIMIZATION - const int iCur = pSrc->iCursor; /* The cursor of the table to be accessed */ - const Bitmask maskSrc = getMask(pWC->pMaskSet, iCur); /* Bitmask for pSrc */ - WhereTerm * const pWCEnd = &pWC->a[pWC->nTerm]; /* End of pWC->a[] */ - WhereTerm *pTerm; /* A single term of the WHERE clause */ - - /* No OR-clause optimization allowed if the NOT INDEXED clause is used */ - if( pSrc->notIndexed ){ - return; - } - - /* Search the WHERE clause terms for a usable WO_OR term. */ - for(pTerm=pWC->a; pTerm<pWCEnd; pTerm++){ - if( pTerm->eOperator==WO_OR - && ((pTerm->prereqAll & ~maskSrc) & notReady)==0 - && (pTerm->u.pOrInfo->indexable & maskSrc)!=0 - ){ - WhereClause * const pOrWC = &pTerm->u.pOrInfo->wc; - WhereTerm * const pOrWCEnd = &pOrWC->a[pOrWC->nTerm]; - WhereTerm *pOrTerm; - int flags = WHERE_MULTI_OR; - double rTotal = 0; - double nRow = 0; - Bitmask used = 0; - - for(pOrTerm=pOrWC->a; pOrTerm<pOrWCEnd; pOrTerm++){ - WhereCost sTermCost; - WHERETRACE(("... Multi-index OR testing for term %d of %d....\n", - (pOrTerm - pOrWC->a), (pTerm - pWC->a) - )); - if( pOrTerm->eOperator==WO_AND ){ - WhereClause *pAndWC = &pOrTerm->u.pAndInfo->wc; - bestIndex(pParse, pAndWC, pSrc, notReady, 0, &sTermCost); - }else if( pOrTerm->leftCursor==iCur ){ - WhereClause tempWC; - tempWC.pParse = pWC->pParse; - tempWC.pMaskSet = pWC->pMaskSet; - tempWC.op = TK_AND; - tempWC.a = pOrTerm; - tempWC.nTerm = 1; - bestIndex(pParse, &tempWC, pSrc, notReady, 0, &sTermCost); - }else{ - continue; - } - rTotal += sTermCost.rCost; - nRow += sTermCost.nRow; - used |= sTermCost.used; - if( rTotal>=pCost->rCost ) break; - } - - /* If there is an ORDER BY clause, increase the scan cost to account - ** for the cost of the sort. */ - if( pOrderBy!=0 ){ - WHERETRACE(("... sorting increases OR cost %.9g to %.9g\n", - rTotal, rTotal+nRow*estLog(nRow))); - rTotal += nRow*estLog(nRow); - } - - /* If the cost of scanning using this OR term for optimization is - ** less than the current cost stored in pCost, replace the contents - ** of pCost. */ - WHERETRACE(("... multi-index OR cost=%.9g nrow=%.9g\n", rTotal, nRow)); - if( rTotal<pCost->rCost ){ - pCost->rCost = rTotal; - pCost->nRow = nRow; - pCost->used = used; - pCost->plan.wsFlags = flags; - pCost->plan.u.pTerm = pTerm; - } - } - } -#endif /* SQLITE_OMIT_OR_OPTIMIZATION */ -} - #ifndef SQLITE_OMIT_AUTOMATIC_INDEX /* ** Return TRUE if the WHERE clause term pTerm is of a form where it ** could be used with an index to access pSrc, assuming an appropriate ** index existed. 
@@ -89747,83 +123814,20 @@ struct SrcList_item *pSrc, /* Table we are trying to access */ Bitmask notReady /* Tables in outer loops of the join */ ){ char aff; if( pTerm->leftCursor!=pSrc->iCursor ) return 0; - if( pTerm->eOperator!=WO_EQ ) return 0; + if( (pTerm->eOperator & (WO_EQ|WO_IS))==0 ) return 0; if( (pTerm->prereqRight & notReady)!=0 ) return 0; + if( pTerm->u.leftColumn<0 ) return 0; aff = pSrc->pTab->aCol[pTerm->u.leftColumn].affinity; if( !sqlite3IndexAffinityOk(pTerm->pExpr, aff) ) return 0; + testcase( pTerm->pExpr->op==TK_IS ); return 1; } #endif -#ifndef SQLITE_OMIT_AUTOMATIC_INDEX -/* -** If the query plan for pSrc specified in pCost is a full table scan -** and indexing is allows (if there is no NOT INDEXED clause) and it -** possible to construct a transient index that would perform better -** than a full table scan even when the cost of constructing the index -** is taken into account, then alter the query plan to use the -** transient index. -*/ -static void bestAutomaticIndex( - Parse *pParse, /* The parsing context */ - WhereClause *pWC, /* The WHERE clause */ - struct SrcList_item *pSrc, /* The FROM clause term to search */ - Bitmask notReady, /* Mask of cursors that are not available */ - WhereCost *pCost /* Lowest cost query plan */ -){ - double nTableRow; /* Rows in the input table */ - double logN; /* log(nTableRow) */ - double costTempIdx; /* per-query cost of the transient index */ - WhereTerm *pTerm; /* A single term of the WHERE clause */ - WhereTerm *pWCEnd; /* End of pWC->a[] */ - Table *pTable; /* Table tht might be indexed */ - - if( (pParse->db->flags & SQLITE_AutoIndex)==0 ){ - /* Automatic indices are disabled at run-time */ - return; - } - if( (pCost->plan.wsFlags & WHERE_NOT_FULLSCAN)!=0 ){ - /* We already have some kind of index in use for this query. */ - return; - } - if( pSrc->notIndexed ){ - /* The NOT INDEXED clause appears in the SQL. */ - return; - } - - assert( pParse->nQueryLoop >= (double)1 ); - pTable = pSrc->pTab; - nTableRow = pTable->pIndex ? 
pTable->pIndex->aiRowEst[0] : 1000000; - logN = estLog(nTableRow); - costTempIdx = 2*logN*(nTableRow/pParse->nQueryLoop + 1); - if( costTempIdx>=pCost->rCost ){ - /* The cost of creating the transient table would be greater than - ** doing the full table scan */ - return; - } - - /* Search for any equality comparison term */ - pWCEnd = &pWC->a[pWC->nTerm]; - for(pTerm=pWC->a; pTerm<pWCEnd; pTerm++){ - if( termCanDriveIndex(pTerm, pSrc, notReady) ){ - WHERETRACE(("auto-index reduces cost from %.2f to %.2f\n", - pCost->rCost, costTempIdx)); - pCost->rCost = costTempIdx; - pCost->nRow = logN + 1; - pCost->plan.wsFlags = WHERE_TEMP_INDEX; - pCost->used = pTerm->prereqRight; - break; - } - } -} -#else -# define bestAutomaticIndex(A,B,C,D,E) /* no-op */ -#endif /* SQLITE_OMIT_AUTOMATIC_INDEX */ - #ifndef SQLITE_OMIT_AUTOMATIC_INDEX /* ** Generate code to construct the Index object for an automatic index ** and to set up the WhereLevel object pLevel so that the code generator @@ -89834,148 +123838,199 @@ WhereClause *pWC, /* The WHERE clause */ struct SrcList_item *pSrc, /* The FROM clause term to get the next index */ Bitmask notReady, /* Mask of cursors that are not available */ WhereLevel *pLevel /* Write new index here */ ){ - int nColumn; /* Number of columns in the constructed index */ + int nKeyCol; /* Number of columns in the constructed index */ WhereTerm *pTerm; /* A single term of the WHERE clause */ WhereTerm *pWCEnd; /* End of pWC->a[] */ - int nByte; /* Byte of memory needed for pIdx */ Index *pIdx; /* Object describing the transient index */ Vdbe *v; /* Prepared statement under construction */ - int regIsInit; /* Register set by initialization */ int addrInit; /* Address of the initialization bypass jump */ Table *pTable; /* The table being indexed */ - KeyInfo *pKeyinfo; /* Key information for the index */ int addrTop; /* Top of the index fill loop */ int regRecord; /* Register holding an index record */ int n; /* Column counter */ int i; /* Loop counter */ int mxBitCol; /* Maximum column in pSrc->colUsed */ CollSeq *pColl; /* Collating sequence to on a column */ + WhereLoop *pLoop; /* The Loop object */ + char *zNotUsed; /* Extra space on the end of pIdx */ Bitmask idxCols; /* Bitmap of columns used for indexing */ Bitmask extraCols; /* Bitmap of additional columns */ + u8 sentWarning = 0; /* True if a warnning has been issued */ + Expr *pPartial = 0; /* Partial Index Expression */ + int iContinue = 0; /* Jump here to skip excluded rows */ + struct SrcList_item *pTabItem; /* FROM clause term being indexed */ + int addrCounter = 0; /* Address where integer counter is initialized */ + int regBase; /* Array of registers where record is assembled */ /* Generate code to skip over the creation and initialization of the ** transient index on 2nd and subsequent iterations of the loop. 
*/ v = pParse->pVdbe; assert( v!=0 ); - regIsInit = ++pParse->nMem; - addrInit = sqlite3VdbeAddOp1(v, OP_If, regIsInit); - sqlite3VdbeAddOp2(v, OP_Integer, 1, regIsInit); + addrInit = sqlite3CodeOnce(pParse); VdbeCoverage(v); /* Count the number of columns that will be added to the index ** and used to match WHERE clause constraints */ - nColumn = 0; + nKeyCol = 0; pTable = pSrc->pTab; pWCEnd = &pWC->a[pWC->nTerm]; + pLoop = pLevel->pWLoop; idxCols = 0; for(pTerm=pWC->a; pTerm<pWCEnd; pTerm++){ + Expr *pExpr = pTerm->pExpr; + assert( !ExprHasProperty(pExpr, EP_FromJoin) /* prereq always non-zero */ + || pExpr->iRightJoinTable!=pSrc->iCursor /* for the right-hand */ + || pLoop->prereq!=0 ); /* table of a LEFT JOIN */ + if( pLoop->prereq==0 + && (pTerm->wtFlags & TERM_VIRTUAL)==0 + && !ExprHasProperty(pExpr, EP_FromJoin) + && sqlite3ExprIsTableConstant(pExpr, pSrc->iCursor) ){ + pPartial = sqlite3ExprAnd(pParse->db, pPartial, + sqlite3ExprDup(pParse->db, pExpr, 0)); + } if( termCanDriveIndex(pTerm, pSrc, notReady) ){ int iCol = pTerm->u.leftColumn; - Bitmask cMask = iCol>=BMS ? ((Bitmask)1)<<(BMS-1) : ((Bitmask)1)<<iCol; + Bitmask cMask = iCol>=BMS ? MASKBIT(BMS-1) : MASKBIT(iCol); testcase( iCol==BMS ); testcase( iCol==BMS-1 ); + if( !sentWarning ){ + sqlite3_log(SQLITE_WARNING_AUTOINDEX, + "automatic index on %s(%s)", pTable->zName, + pTable->aCol[iCol].zName); + sentWarning = 1; + } if( (idxCols & cMask)==0 ){ - nColumn++; + if( whereLoopResize(pParse->db, pLoop, nKeyCol+1) ){ + goto end_auto_index_create; + } + pLoop->aLTerm[nKeyCol++] = pTerm; idxCols |= cMask; } } } - assert( nColumn>0 ); - pLevel->plan.nEq = nColumn; + assert( nKeyCol>0 ); + pLoop->u.btree.nEq = pLoop->nLTerm = nKeyCol; + pLoop->wsFlags = WHERE_COLUMN_EQ | WHERE_IDX_ONLY | WHERE_INDEXED + | WHERE_AUTO_INDEX; /* Count the number of additional columns needed to create a ** covering index. A "covering index" is an index that contains all ** columns that are needed by the query. With a covering index, the ** original table never needs to be accessed. Automatic indices must ** be a covering index because the index will not be updated if the ** original table changes and the index and table cannot both be used ** if they go out of sync. */ - extraCols = pSrc->colUsed & (~idxCols | (((Bitmask)1)<<(BMS-1))); - mxBitCol = (pTable->nCol >= BMS-1) ? 
BMS-1 : pTable->nCol; + extraCols = pSrc->colUsed & (~idxCols | MASKBIT(BMS-1)); + mxBitCol = MIN(BMS-1,pTable->nCol); testcase( pTable->nCol==BMS-1 ); testcase( pTable->nCol==BMS-2 ); for(i=0; i<mxBitCol; i++){ - if( extraCols & (((Bitmask)1)<<i) ) nColumn++; + if( extraCols & MASKBIT(i) ) nKeyCol++; } - if( pSrc->colUsed & (((Bitmask)1)<<(BMS-1)) ){ - nColumn += pTable->nCol - BMS + 1; + if( pSrc->colUsed & MASKBIT(BMS-1) ){ + nKeyCol += pTable->nCol - BMS + 1; } - pLevel->plan.wsFlags |= WHERE_COLUMN_EQ | WHERE_IDX_ONLY | WO_EQ; /* Construct the Index object to describe this index */ - nByte = sizeof(Index); - nByte += nColumn*sizeof(int); /* Index.aiColumn */ - nByte += nColumn*sizeof(char*); /* Index.azColl */ - nByte += nColumn; /* Index.aSortOrder */ - pIdx = sqlite3DbMallocZero(pParse->db, nByte); - if( pIdx==0 ) return; - pLevel->plan.u.pIdx = pIdx; - pIdx->azColl = (char**)&pIdx[1]; - pIdx->aiColumn = (int*)&pIdx->azColl[nColumn]; - pIdx->aSortOrder = (u8*)&pIdx->aiColumn[nColumn]; + pIdx = sqlite3AllocateIndexObject(pParse->db, nKeyCol+1, 0, &zNotUsed); + if( pIdx==0 ) goto end_auto_index_create; + pLoop->u.btree.pIndex = pIdx; pIdx->zName = "auto-index"; - pIdx->nColumn = nColumn; pIdx->pTable = pTable; n = 0; idxCols = 0; for(pTerm=pWC->a; pTerm<pWCEnd; pTerm++){ if( termCanDriveIndex(pTerm, pSrc, notReady) ){ int iCol = pTerm->u.leftColumn; - Bitmask cMask = iCol>=BMS ? ((Bitmask)1)<<(BMS-1) : ((Bitmask)1)<<iCol; + Bitmask cMask = iCol>=BMS ? MASKBIT(BMS-1) : MASKBIT(iCol); + testcase( iCol==BMS-1 ); + testcase( iCol==BMS ); if( (idxCols & cMask)==0 ){ Expr *pX = pTerm->pExpr; idxCols |= cMask; pIdx->aiColumn[n] = pTerm->u.leftColumn; pColl = sqlite3BinaryCompareCollSeq(pParse, pX->pLeft, pX->pRight); - pIdx->azColl[n] = pColl->zName; + pIdx->azColl[n] = pColl ? 
pColl->zName : sqlite3StrBINARY; n++; } } } - assert( n==pLevel->plan.nEq ); + assert( (u32)n==pLoop->u.btree.nEq ); /* Add additional columns needed to make the automatic index into ** a covering index */ for(i=0; i<mxBitCol; i++){ - if( extraCols & (((Bitmask)1)<<i) ){ - pIdx->aiColumn[n] = i; - pIdx->azColl[n] = "BINARY"; - n++; - } - } - if( pSrc->colUsed & (((Bitmask)1)<<(BMS-1)) ){ - for(i=BMS-1; i<pTable->nCol; i++){ - pIdx->aiColumn[n] = i; - pIdx->azColl[n] = "BINARY"; - n++; - } - } - assert( n==nColumn ); - - /* Create the automatic index */ - pKeyinfo = sqlite3IndexKeyinfo(pParse, pIdx); - assert( pLevel->iIdxCur>=0 ); - sqlite3VdbeAddOp4(v, OP_OpenAutoindex, pLevel->iIdxCur, nColumn+1, 0, - (char*)pKeyinfo, P4_KEYINFO_HANDOFF); - VdbeComment((v, "for %s", pTable->zName)); - - /* Fill the automatic index with content */ - addrTop = sqlite3VdbeAddOp1(v, OP_Rewind, pLevel->iTabCur); - regRecord = sqlite3GetTempReg(pParse); - sqlite3GenerateIndexKey(pParse, pIdx, pLevel->iTabCur, regRecord, 1); - sqlite3VdbeAddOp2(v, OP_IdxInsert, pLevel->iIdxCur, regRecord); - sqlite3VdbeChangeP5(v, OPFLAG_USESEEKRESULT); - sqlite3VdbeAddOp2(v, OP_Next, pLevel->iTabCur, addrTop+1); - sqlite3VdbeChangeP5(v, SQLITE_STMTSTATUS_AUTOINDEX); - sqlite3VdbeJumpHere(v, addrTop); - sqlite3ReleaseTempReg(pParse, regRecord); - - /* Jump here when skipping the initialization */ - sqlite3VdbeJumpHere(v, addrInit); + if( extraCols & MASKBIT(i) ){ + pIdx->aiColumn[n] = i; + pIdx->azColl[n] = sqlite3StrBINARY; + n++; + } + } + if( pSrc->colUsed & MASKBIT(BMS-1) ){ + for(i=BMS-1; i<pTable->nCol; i++){ + pIdx->aiColumn[n] = i; + pIdx->azColl[n] = sqlite3StrBINARY; + n++; + } + } + assert( n==nKeyCol ); + pIdx->aiColumn[n] = XN_ROWID; + pIdx->azColl[n] = sqlite3StrBINARY; + + /* Create the automatic index */ + assert( pLevel->iIdxCur>=0 ); + pLevel->iIdxCur = pParse->nTab++; + sqlite3VdbeAddOp2(v, OP_OpenAutoindex, pLevel->iIdxCur, nKeyCol+1); + sqlite3VdbeSetP4KeyInfo(pParse, pIdx); + VdbeComment((v, "for %s", pTable->zName)); + + /* Fill the automatic index with content */ + sqlite3ExprCachePush(pParse); + pTabItem = &pWC->pWInfo->pTabList->a[pLevel->iFrom]; + if( pTabItem->fg.viaCoroutine ){ + int regYield = pTabItem->regReturn; + addrCounter = sqlite3VdbeAddOp2(v, OP_Integer, 0, 0); + sqlite3VdbeAddOp3(v, OP_InitCoroutine, regYield, 0, pTabItem->addrFillSub); + addrTop = sqlite3VdbeAddOp1(v, OP_Yield, regYield); + VdbeCoverage(v); + VdbeComment((v, "next row of \"%s\"", pTabItem->pTab->zName)); + }else{ + addrTop = sqlite3VdbeAddOp1(v, OP_Rewind, pLevel->iTabCur); VdbeCoverage(v); + } + if( pPartial ){ + iContinue = sqlite3VdbeMakeLabel(v); + sqlite3ExprIfFalse(pParse, pPartial, iContinue, SQLITE_JUMPIFNULL); + pLoop->wsFlags |= WHERE_PARTIALIDX; + } + regRecord = sqlite3GetTempReg(pParse); + regBase = sqlite3GenerateIndexKey( + pParse, pIdx, pLevel->iTabCur, regRecord, 0, 0, 0, 0 + ); + sqlite3VdbeAddOp2(v, OP_IdxInsert, pLevel->iIdxCur, regRecord); + sqlite3VdbeChangeP5(v, OPFLAG_USESEEKRESULT); + if( pPartial ) sqlite3VdbeResolveLabel(v, iContinue); + if( pTabItem->fg.viaCoroutine ){ + sqlite3VdbeChangeP2(v, addrCounter, regBase+n); + translateColumnToCopy(v, addrTop, pLevel->iTabCur, pTabItem->regResult, 1); + sqlite3VdbeGoto(v, addrTop); + pTabItem->fg.viaCoroutine = 0; + }else{ + sqlite3VdbeAddOp2(v, OP_Next, pLevel->iTabCur, addrTop+1); VdbeCoverage(v); + } + sqlite3VdbeChangeP5(v, SQLITE_STMTSTATUS_AUTOINDEX); + sqlite3VdbeJumpHere(v, addrTop); + sqlite3ReleaseTempReg(pParse, regRecord); + 
sqlite3ExprCachePop(pParse); + + /* Jump here when skipping the initialization */ + sqlite3VdbeJumpHere(v, addrInit); + +end_auto_index_create: + sqlite3ExprDelete(pParse->db, pPartial); } #endif /* SQLITE_OMIT_AUTOMATIC_INDEX */ #ifndef SQLITE_OMIT_VIRTUALTABLE /* @@ -89982,12 +124037,13 @@ ** Allocate and populate an sqlite3_index_info structure. It is the ** responsibility of the caller to eventually release the structure ** by passing the pointer returned by this function to sqlite3_free(). */ static sqlite3_index_info *allocateIndexInfo( - Parse *pParse, + Parse *pParse, WhereClause *pWC, + Bitmask mUnusable, /* Ignore terms with these prereqs */ struct SrcList_item *pSrc, ExprList *pOrderBy ){ int i, j; int nTerm; @@ -89996,35 +124052,39 @@ struct sqlite3_index_constraint_usage *pUsage; WhereTerm *pTerm; int nOrderBy; sqlite3_index_info *pIdxInfo; - WHERETRACE(("Recomputing index info for %s...\n", pSrc->pTab->zName)); - /* Count the number of possible WHERE clause constraints referring ** to this virtual table */ for(i=nTerm=0, pTerm=pWC->a; i<pWC->nTerm; i++, pTerm++){ if( pTerm->leftCursor != pSrc->iCursor ) continue; - assert( (pTerm->eOperator&(pTerm->eOperator-1))==0 ); - testcase( pTerm->eOperator==WO_IN ); - testcase( pTerm->eOperator==WO_ISNULL ); - if( pTerm->eOperator & (WO_IN|WO_ISNULL) ) continue; + if( pTerm->prereqRight & mUnusable ) continue; + assert( IsPowerOfTwo(pTerm->eOperator & ~WO_EQUIV) ); + testcase( pTerm->eOperator & WO_IN ); + testcase( pTerm->eOperator & WO_ISNULL ); + testcase( pTerm->eOperator & WO_IS ); + testcase( pTerm->eOperator & WO_ALL ); + if( (pTerm->eOperator & ~(WO_ISNULL|WO_EQUIV|WO_IS))==0 ) continue; + if( pTerm->wtFlags & TERM_VNULL ) continue; + assert( pTerm->u.leftColumn>=(-1) ); nTerm++; } /* If the ORDER BY clause contains only columns in the current ** virtual table then allocate space for the aOrderBy part of ** the sqlite3_index_info structure. */ nOrderBy = 0; if( pOrderBy ){ - for(i=0; i<pOrderBy->nExpr; i++){ + int n = pOrderBy->nExpr; + for(i=0; i<n; i++){ Expr *pExpr = pOrderBy->a[i].pExpr; if( pExpr->op!=TK_COLUMN || pExpr->iTable!=pSrc->iCursor ) break; } - if( i==pOrderBy->nExpr ){ - nOrderBy = pOrderBy->nExpr; + if( i==n){ + nOrderBy = n; } } /* Allocate the sqlite3_index_info structure */ @@ -90031,11 +124091,10 @@ pIdxInfo = sqlite3DbMallocZero(pParse->db, sizeof(*pIdxInfo) + (sizeof(*pIdxCons) + sizeof(*pUsage))*nTerm + sizeof(*pIdxOrderBy)*nOrderBy ); if( pIdxInfo==0 ){ sqlite3ErrorMsg(pParse, "out of memory"); - /* (double)0 In case of SQLITE_OMIT_FLOATING_POINT... */ return 0; } /* Initialize the structure. 
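
The allocation in allocateIndexInfo() above obtains the sqlite3_index_info header, the constraint array, the usage array, and the ORDER BY array from a single sqlite3DbMallocZero() call and then carves pointers out of that one block. A simplified standalone sketch of the layout trick, using plain calloc() and made-up structure names, looks like this:

    #include <stdlib.h>
    #include <stdio.h>

    typedef struct { int iColumn; unsigned char op; } Constraint;
    typedef struct { int argvIndex; } Usage;
    typedef struct {
      int nConstraint;
      Constraint *aConstraint;
      Usage *aUsage;
    } IdxInfo;

    /* Allocate the header and both trailing arrays in one block, then point
    ** the array members just past the header, the same way allocateIndexInfo()
    ** lays out sqlite3_index_info. */
    static IdxInfo *idxInfoAlloc(int nConstraint){
      IdxInfo *p = calloc(1, sizeof(IdxInfo)
                             + nConstraint*(sizeof(Constraint)+sizeof(Usage)));
      if( p==0 ) return 0;
      p->nConstraint = nConstraint;
      p->aConstraint = (Constraint*)&p[1];
      p->aUsage = (Usage*)&p->aConstraint[nConstraint];
      return p;
    }

    int main(void){
      IdxInfo *p = idxInfoAlloc(3);
      if( p ){
        p->aConstraint[2].iColumn = 7;   /* everything is released by one free() */
        printf("constraint[2].iColumn = %d\n", p->aConstraint[2].iColumn);
        free(p);
      }
      return 0;
    }
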
The sqlite3_index_info structure contains ** many fields that are declared "const" to prevent xBestIndex from @@ -90051,28 +124110,39 @@ *(struct sqlite3_index_orderby**)&pIdxInfo->aOrderBy = pIdxOrderBy; *(struct sqlite3_index_constraint_usage**)&pIdxInfo->aConstraintUsage = pUsage; for(i=j=0, pTerm=pWC->a; i<pWC->nTerm; i++, pTerm++){ + u8 op; if( pTerm->leftCursor != pSrc->iCursor ) continue; - assert( (pTerm->eOperator&(pTerm->eOperator-1))==0 ); - testcase( pTerm->eOperator==WO_IN ); - testcase( pTerm->eOperator==WO_ISNULL ); - if( pTerm->eOperator & (WO_IN|WO_ISNULL) ) continue; + if( pTerm->prereqRight & mUnusable ) continue; + assert( IsPowerOfTwo(pTerm->eOperator & ~WO_EQUIV) ); + testcase( pTerm->eOperator & WO_IN ); + testcase( pTerm->eOperator & WO_IS ); + testcase( pTerm->eOperator & WO_ISNULL ); + testcase( pTerm->eOperator & WO_ALL ); + if( (pTerm->eOperator & ~(WO_ISNULL|WO_EQUIV|WO_IS))==0 ) continue; + if( pTerm->wtFlags & TERM_VNULL ) continue; + assert( pTerm->u.leftColumn>=(-1) ); pIdxCons[j].iColumn = pTerm->u.leftColumn; pIdxCons[j].iTermOffset = i; - pIdxCons[j].op = (u8)pTerm->eOperator; + op = (u8)pTerm->eOperator & WO_ALL; + if( op==WO_IN ) op = WO_EQ; + if( op==WO_MATCH ){ + op = pTerm->eMatchOp; + } + pIdxCons[j].op = op; /* The direct assignment in the previous line is possible only because ** the WO_ and SQLITE_INDEX_CONSTRAINT_ codes are identical. The ** following asserts verify this fact. */ assert( WO_EQ==SQLITE_INDEX_CONSTRAINT_EQ ); assert( WO_LT==SQLITE_INDEX_CONSTRAINT_LT ); assert( WO_LE==SQLITE_INDEX_CONSTRAINT_LE ); assert( WO_GT==SQLITE_INDEX_CONSTRAINT_GT ); assert( WO_GE==SQLITE_INDEX_CONSTRAINT_GE ); assert( WO_MATCH==SQLITE_INDEX_CONSTRAINT_MATCH ); - assert( pTerm->eOperator & (WO_EQ|WO_LT|WO_LE|WO_GT|WO_GE|WO_MATCH) ); + assert( pTerm->eOperator & (WO_IN|WO_EQ|WO_LT|WO_LE|WO_GT|WO_GE|WO_MATCH) ); j++; } for(i=0; i<nOrderBy; i++){ Expr *pExpr = pOrderBy->a[i].pExpr; pIdxOrderBy[i].iColumn = pExpr->iColumn; @@ -90083,12 +124153,12 @@ } /* ** The table object reference passed as the second argument to this function ** must represent a virtual table. This function invokes the xBestIndex() -** method of the virtual table with the sqlite3_index_info pointer passed -** as the argument. +** method of the virtual table with the sqlite3_index_info object that +** comes in as the 3rd argument to this function. ** ** If an error occurs, pParse is populated with an error message and a ** non-zero value is returned. Otherwise, 0 is returned and the output ** part of the sqlite3_index_info structure is left populated. ** @@ -90099,25 +124169,24 @@ static int vtabBestIndex(Parse *pParse, Table *pTab, sqlite3_index_info *p){ sqlite3_vtab *pVtab = sqlite3GetVTable(pParse->db, pTab)->pVtab; int i; int rc; - WHERETRACE(("xBestIndex for %s\n", pTab->zName)); TRACE_IDX_INPUTS(p); rc = pVtab->pModule->xBestIndex(pVtab, p); TRACE_IDX_OUTPUTS(p); if( rc!=SQLITE_OK ){ if( rc==SQLITE_NOMEM ){ - pParse->db->mallocFailed = 1; + sqlite3OomFault(pParse->db); }else if( !pVtab->zErrMsg ){ sqlite3ErrorMsg(pParse, "%s", sqlite3ErrStr(rc)); }else{ sqlite3ErrorMsg(pParse, "%s", pVtab->zErrMsg); } } - sqlite3DbFree(pParse->db, pVtab->zErrMsg); + sqlite3_free(pVtab->zErrMsg); pVtab->zErrMsg = 0; for(i=0; i<p->nConstraint; i++){ if( !p->aConstraint[i].usable && p->aConstraintUsage[i].argvIndex>0 ){ sqlite3ErrorMsg(pParse, @@ -90125,295 +124194,352 @@ } } return pParse->nErr; } - - -/* -** Compute the best index for a virtual table. 
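
For context on the other side of the vtabBestIndex() call above, a virtual table module's xBestIndex method receives the populated sqlite3_index_info and records which constraints it can use. The fragment below is a hypothetical module-side sketch against the public sqlite3_index_info interface: it consumes a single usable equality constraint on column 0 and reports a cheaper cost than a full scan; the idxNum values and cost figures are arbitrary:

    #include <sqlite3.h>

    /* Hypothetical xBestIndex: if the planner offers a usable "column 0 == ?"
    ** constraint, request its value as argv[0] of xFilter and report a much
    ** lower cost than a full scan. */
    static int exampleBestIndex(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo){
      int i;
      (void)pVTab;
      pInfo->idxNum = 0;                 /* 0 means "full scan" in this sketch */
      pInfo->estimatedCost = 1e6;
      for(i=0; i<pInfo->nConstraint; i++){
        const struct sqlite3_index_constraint *pCons = &pInfo->aConstraint[i];
        if( pCons->usable
         && pCons->iColumn==0
         && pCons->op==SQLITE_INDEX_CONSTRAINT_EQ
        ){
          pInfo->aConstraintUsage[i].argvIndex = 1;  /* pass the RHS to xFilter */
          pInfo->aConstraintUsage[i].omit = 1;       /* core need not re-check it */
          pInfo->idxNum = 1;                         /* 1 means "equality lookup" */
          pInfo->estimatedCost = 10;
          break;
        }
      }
      return SQLITE_OK;
    }
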
-** -** The best index is computed by the xBestIndex method of the virtual -** table module. This routine is really just a wrapper that sets up -** the sqlite3_index_info structure that is used to communicate with -** xBestIndex. -** -** In a join, this routine might be called multiple times for the -** same virtual table. The sqlite3_index_info structure is created -** and initialized on the first invocation and reused on all subsequent -** invocations. The sqlite3_index_info structure is also used when -** code is generated to access the virtual table. The whereInfoDelete() -** routine takes care of freeing the sqlite3_index_info structure after -** everybody has finished with it. -*/ -static void bestVirtualIndex( - Parse *pParse, /* The parsing context */ - WhereClause *pWC, /* The WHERE clause */ - struct SrcList_item *pSrc, /* The FROM clause term to search */ - Bitmask notReady, /* Mask of cursors that are not available */ - ExprList *pOrderBy, /* The order by clause */ - WhereCost *pCost, /* Lowest cost query plan */ - sqlite3_index_info **ppIdxInfo /* Index information passed to xBestIndex */ -){ - Table *pTab = pSrc->pTab; - sqlite3_index_info *pIdxInfo; - struct sqlite3_index_constraint *pIdxCons; - struct sqlite3_index_constraint_usage *pUsage; - WhereTerm *pTerm; - int i, j; - int nOrderBy; - double rCost; - - /* Make sure wsFlags is initialized to some sane value. Otherwise, if the - ** malloc in allocateIndexInfo() fails and this function returns leaving - ** wsFlags in an uninitialized state, the caller may behave unpredictably. - */ - memset(pCost, 0, sizeof(*pCost)); - pCost->plan.wsFlags = WHERE_VIRTUALTABLE; - - /* If the sqlite3_index_info structure has not been previously - ** allocated and initialized, then allocate and initialize it now. - */ - pIdxInfo = *ppIdxInfo; - if( pIdxInfo==0 ){ - *ppIdxInfo = pIdxInfo = allocateIndexInfo(pParse, pWC, pSrc, pOrderBy); - } - if( pIdxInfo==0 ){ - return; - } - - /* At this point, the sqlite3_index_info structure that pIdxInfo points - ** to will have been initialized, either during the current invocation or - ** during some prior invocation. Now we just have to customize the - ** details of pIdxInfo for the current invocation and pass it to - ** xBestIndex. - */ - - /* The module name must be defined. Also, by this point there must - ** be a pointer to an sqlite3_vtab structure. Otherwise - ** sqlite3ViewGetColumnNames() would have picked up the error. - */ - assert( pTab->azModuleArg && pTab->azModuleArg[0] ); - assert( sqlite3GetVTable(pParse->db, pTab) ); - - /* Set the aConstraint[].usable fields and initialize all - ** output variables to zero. - ** - ** aConstraint[].usable is true for constraints where the right-hand - ** side contains only references to tables to the left of the current - ** table. In other words, if the constraint is of the form: - ** - ** column = expr - ** - ** and we are evaluating a join, then the constraint on column is - ** only valid if all tables referenced in expr occur to the left - ** of the table containing column. - ** - ** The aConstraints[] array contains entries for all constraints - ** on the current table. That way we only have to compute it once - ** even though we might try to pick the best index multiple times. - ** For each attempt at picking an index, the order of tables in the - ** join might be different so we have to recompute the usable flag - ** each time. 
- */ - pIdxCons = *(struct sqlite3_index_constraint**)&pIdxInfo->aConstraint; - pUsage = pIdxInfo->aConstraintUsage; - for(i=0; i<pIdxInfo->nConstraint; i++, pIdxCons++){ - j = pIdxCons->iTermOffset; - pTerm = &pWC->a[j]; - pIdxCons->usable = (pTerm->prereqRight¬Ready) ? 0 : 1; - } - memset(pUsage, 0, sizeof(pUsage[0])*pIdxInfo->nConstraint); - if( pIdxInfo->needToFreeIdxStr ){ - sqlite3_free(pIdxInfo->idxStr); - } - pIdxInfo->idxStr = 0; - pIdxInfo->idxNum = 0; - pIdxInfo->needToFreeIdxStr = 0; - pIdxInfo->orderByConsumed = 0; - /* ((double)2) In case of SQLITE_OMIT_FLOATING_POINT... */ - pIdxInfo->estimatedCost = SQLITE_BIG_DBL / ((double)2); - nOrderBy = pIdxInfo->nOrderBy; - if( !pOrderBy ){ - pIdxInfo->nOrderBy = 0; - } - - if( vtabBestIndex(pParse, pTab, pIdxInfo) ){ - return; - } - - pIdxCons = *(struct sqlite3_index_constraint**)&pIdxInfo->aConstraint; - for(i=0; i<pIdxInfo->nConstraint; i++){ - if( pUsage[i].argvIndex>0 ){ - pCost->used |= pWC->a[pIdxCons[i].iTermOffset].prereqRight; - } - } - - /* If there is an ORDER BY clause, and the selected virtual table index - ** does not satisfy it, increase the cost of the scan accordingly. This - ** matches the processing for non-virtual tables in bestBtreeIndex(). - */ - rCost = pIdxInfo->estimatedCost; - if( pOrderBy && pIdxInfo->orderByConsumed==0 ){ - rCost += estLog(rCost)*rCost; - } - - /* The cost is not allowed to be larger than SQLITE_BIG_DBL (the - ** inital value of lowestCost in this loop. If it is, then the - ** (cost<lowestCost) test below will never be true. - ** - ** Use "(double)2" instead of "2.0" in case OMIT_FLOATING_POINT - ** is defined. - */ - if( (SQLITE_BIG_DBL/((double)2))<rCost ){ - pCost->rCost = (SQLITE_BIG_DBL/((double)2)); - }else{ - pCost->rCost = rCost; - } - pCost->plan.u.pVtabIdx = pIdxInfo; - if( pIdxInfo->orderByConsumed ){ - pCost->plan.wsFlags |= WHERE_ORDERBY; - } - pCost->plan.nEq = 0; - pIdxInfo->nOrderBy = nOrderBy; - - /* Try to find a more efficient access pattern by using multiple indexes - ** to optimize an OR expression within the WHERE clause. - */ - bestOrClauseIndex(pParse, pWC, pSrc, notReady, pOrderBy, pCost); -} -#endif /* SQLITE_OMIT_VIRTUALTABLE */ - -/* -** Argument pIdx is a pointer to an index structure that has an array of -** SQLITE_INDEX_SAMPLES evenly spaced samples of the first indexed column -** stored in Index.aSample. The domain of values stored in said column -** may be thought of as divided into (SQLITE_INDEX_SAMPLES+1) regions. -** Region 0 contains all values smaller than the first sample value. Region -** 1 contains values larger than or equal to the value of the first sample, -** but smaller than the value of the second. And so on. -** -** If successful, this function determines which of the regions value -** pVal lies in, sets *piRegion to the region index (a value between 0 -** and SQLITE_INDEX_SAMPLES+1, inclusive) and returns SQLITE_OK. -** Or, if an OOM occurs while converting text values between encodings, -** SQLITE_NOMEM is returned and *piRegion is undefined. -*/ -#ifdef SQLITE_ENABLE_STAT2 -static int whereRangeRegion( +#endif /* !defined(SQLITE_OMIT_VIRTUALTABLE) */ + +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 +/* +** Estimate the location of a particular key among all keys in an +** index. Store the results in aStat as follows: +** +** aStat[0] Est. number of rows less than pRec +** aStat[1] Est. number of rows equal to pRec +** +** Return the index of the sample that is the smallest sample that +** is greater than or equal to pRec. 
Note that this index is not an index +** into the aSample[] array - it is an index into a virtual set of samples +** based on the contents of aSample[] and the number of fields in record +** pRec. +*/ +static int whereKeyStats( Parse *pParse, /* Database connection */ Index *pIdx, /* Index to consider domain of */ - sqlite3_value *pVal, /* Value to consider */ - int *piRegion /* OUT: Region of domain in which value lies */ -){ - if( ALWAYS(pVal) ){ - IndexSample *aSample = pIdx->aSample; - int i = 0; - int eType = sqlite3_value_type(pVal); - - if( eType==SQLITE_INTEGER || eType==SQLITE_FLOAT ){ - double r = sqlite3_value_double(pVal); - for(i=0; i<SQLITE_INDEX_SAMPLES; i++){ - if( aSample[i].eType==SQLITE_NULL ) continue; - if( aSample[i].eType>=SQLITE_TEXT || aSample[i].u.r>r ) break; - } - }else{ - sqlite3 *db = pParse->db; - CollSeq *pColl; - const u8 *z; - int n; - - /* pVal comes from sqlite3ValueFromExpr() so the type cannot be NULL */ - assert( eType==SQLITE_TEXT || eType==SQLITE_BLOB ); - - if( eType==SQLITE_BLOB ){ - z = (const u8 *)sqlite3_value_blob(pVal); - pColl = db->pDfltColl; - assert( pColl->enc==SQLITE_UTF8 ); - }else{ - pColl = sqlite3GetCollSeq(db, SQLITE_UTF8, 0, *pIdx->azColl); - if( pColl==0 ){ - sqlite3ErrorMsg(pParse, "no such collation sequence: %s", - *pIdx->azColl); - return SQLITE_ERROR; - } - z = (const u8 *)sqlite3ValueText(pVal, pColl->enc); - if( !z ){ - return SQLITE_NOMEM; - } - assert( z && pColl && pColl->xCmp ); - } - n = sqlite3ValueBytes(pVal, pColl->enc); - - for(i=0; i<SQLITE_INDEX_SAMPLES; i++){ - int r; - int eSampletype = aSample[i].eType; - if( eSampletype==SQLITE_NULL || eSampletype<eType ) continue; - if( (eSampletype!=eType) ) break; -#ifndef SQLITE_OMIT_UTF16 - if( pColl->enc!=SQLITE_UTF8 ){ - int nSample; - char *zSample = sqlite3Utf8to16( - db, pColl->enc, aSample[i].u.z, aSample[i].nByte, &nSample - ); - if( !zSample ){ - assert( db->mallocFailed ); - return SQLITE_NOMEM; - } - r = pColl->xCmp(pColl->pUser, nSample, zSample, n, z); - sqlite3DbFree(db, zSample); - }else -#endif - { - r = pColl->xCmp(pColl->pUser, aSample[i].nByte, aSample[i].u.z, n, z); - } - if( r>0 ) break; - } - } - - assert( i>=0 && i<=SQLITE_INDEX_SAMPLES ); - *piRegion = i; - } - return SQLITE_OK; -} -#endif /* #ifdef SQLITE_ENABLE_STAT2 */ - -/* -** If expression pExpr represents a literal value, set *pp to point to -** an sqlite3_value structure containing the same value, with affinity -** aff applied to it, before returning. It is the responsibility of the -** caller to eventually release this structure by passing it to -** sqlite3ValueFree(). -** -** If the current parse is a recompile (sqlite3Reprepare()) and pExpr -** is an SQL variable that currently has a non-NULL value bound to it, -** create an sqlite3_value structure containing this value, again with -** affinity aff applied to it, instead. -** -** If neither of the above apply, set *pp to NULL. -** -** If an error occurs, return an error code. Otherwise, SQLITE_OK. -*/ -#ifdef SQLITE_ENABLE_STAT2 -static int valueFromExpr( - Parse *pParse, - Expr *pExpr, - u8 aff, - sqlite3_value **pp -){ - /* The evalConstExpr() function will have already converted any TK_VARIABLE - ** expression involved in an comparison into a TK_REGISTER. 
*/ - assert( pExpr->op!=TK_VARIABLE ); - if( pExpr->op==TK_REGISTER && pExpr->op2==TK_VARIABLE ){ - int iVar = pExpr->iColumn; - sqlite3VdbeSetVarmask(pParse->pVdbe, iVar); - *pp = sqlite3VdbeGetValue(pParse->pReprepare, iVar, aff); - return SQLITE_OK; - } - return sqlite3ValueFromExpr(pParse->db, pExpr, SQLITE_UTF8, aff, pp); -} -#endif + UnpackedRecord *pRec, /* Vector of values to consider */ + int roundUp, /* Round up if true. Round down if false */ + tRowcnt *aStat /* OUT: stats written here */ +){ + IndexSample *aSample = pIdx->aSample; + int iCol; /* Index of required stats in anEq[] etc. */ + int i; /* Index of first sample >= pRec */ + int iSample; /* Smallest sample larger than or equal to pRec */ + int iMin = 0; /* Smallest sample not yet tested */ + int iTest; /* Next sample to test */ + int res; /* Result of comparison operation */ + int nField; /* Number of fields in pRec */ + tRowcnt iLower = 0; /* anLt[] + anEq[] of largest sample pRec is > */ + +#ifndef SQLITE_DEBUG + UNUSED_PARAMETER( pParse ); +#endif + assert( pRec!=0 ); + assert( pIdx->nSample>0 ); + assert( pRec->nField>0 && pRec->nField<=pIdx->nSampleCol ); + + /* Do a binary search to find the first sample greater than or equal + ** to pRec. If pRec contains a single field, the set of samples to search + ** is simply the aSample[] array. If the samples in aSample[] contain more + ** than one fields, all fields following the first are ignored. + ** + ** If pRec contains N fields, where N is more than one, then as well as the + ** samples in aSample[] (truncated to N fields), the search also has to + ** consider prefixes of those samples. For example, if the set of samples + ** in aSample is: + ** + ** aSample[0] = (a, 5) + ** aSample[1] = (a, 10) + ** aSample[2] = (b, 5) + ** aSample[3] = (c, 100) + ** aSample[4] = (c, 105) + ** + ** Then the search space should ideally be the samples above and the + ** unique prefixes [a], [b] and [c]. But since that is hard to organize, + ** the code actually searches this set: + ** + ** 0: (a) + ** 1: (a, 5) + ** 2: (a, 10) + ** 3: (a, 10) + ** 4: (b) + ** 5: (b, 5) + ** 6: (c) + ** 7: (c, 100) + ** 8: (c, 105) + ** 9: (c, 105) + ** + ** For each sample in the aSample[] array, N samples are present in the + ** effective sample array. In the above, samples 0 and 1 are based on + ** sample aSample[0]. Samples 2 and 3 on aSample[1] etc. + ** + ** Often, sample i of each block of N effective samples has (i+1) fields. + ** Except, each sample may be extended to ensure that it is greater than or + ** equal to the previous sample in the array. For example, in the above, + ** sample 2 is the first sample of a block of N samples, so at first it + ** appears that it should be 1 field in size. However, that would make it + ** smaller than sample 1, so the binary search would not work. As a result, + ** it is extended to two fields. The duplicates that this creates do not + ** cause any problems. + */ + nField = pRec->nField; + iCol = 0; + iSample = pIdx->nSample * nField; + do{ + int iSamp; /* Index in aSample[] of test sample */ + int n; /* Number of fields in test sample */ + + iTest = (iMin+iSample)/2; + iSamp = iTest / nField; + if( iSamp>0 ){ + /* The proposed effective sample is a prefix of sample aSample[iSamp]. + ** Specifically, the shortest prefix of at least (1 + iTest%nField) + ** fields that is greater than the previous effective sample. 
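
The comment above describes a virtual list of nSample*nField "effective" samples that the binary search walks. A stand-alone sketch of that numbering, using a char+int pair per sample in place of encoded index records and a direct prefix comparison in place of the anLt[] bookkeeping consulted by whereKeyStats(); for the five two-field samples from the comment it prints the same ten-entry list shown there:

    #include <stdio.h>

    /* Simplified stand-ins for the samples in the comment above:
    ** five 2-field samples (a,5) (a,10) (b,5) (c,100) (c,105). */
    typedef struct { char k1; int k2; } Sample;

    static const Sample aSample[] = {
      {'a',5}, {'a',10}, {'b',5}, {'c',100}, {'c',105}
    };
    #define NSAMPLE 5
    #define NFIELD  2   /* number of fields in the probe record */

    /* Return 1 if the first n fields of sample i equal those of sample j. */
    static int samePrefix(int i, int j, int n){
      if( aSample[i].k1!=aSample[j].k1 ) return 0;
      return n<2 || aSample[i].k2==aSample[j].k2;
    }

    int main(void){
      int iTest;
      for(iTest=0; iTest<NSAMPLE*NFIELD; iTest++){
        int iSamp = iTest / NFIELD;       /* which stored sample       */
        int n = (iTest % NFIELD) + 1;     /* minimum prefix length     */
        /* Extend the prefix until it differs from the previous sample,
        ** so the effective list stays sorted. */
        if( iSamp>0 ){
          while( n<NFIELD && samePrefix(iSamp-1, iSamp, n) ) n++;
        }
        printf("effective %d -> aSample[%d], %d field(s)\n", iTest, iSamp, n);
      }
      return 0;
    }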
*/ + for(n=(iTest % nField) + 1; n<nField; n++){ + if( aSample[iSamp-1].anLt[n-1]!=aSample[iSamp].anLt[n-1] ) break; + } + }else{ + n = iTest + 1; + } + + pRec->nField = n; + res = sqlite3VdbeRecordCompare(aSample[iSamp].n, aSample[iSamp].p, pRec); + if( res<0 ){ + iLower = aSample[iSamp].anLt[n-1] + aSample[iSamp].anEq[n-1]; + iMin = iTest+1; + }else if( res==0 && n<nField ){ + iLower = aSample[iSamp].anLt[n-1]; + iMin = iTest+1; + res = -1; + }else{ + iSample = iTest; + iCol = n-1; + } + }while( res && iMin<iSample ); + i = iSample / nField; + +#ifdef SQLITE_DEBUG + /* The following assert statements check that the binary search code + ** above found the right answer. This block serves no purpose other + ** than to invoke the asserts. */ + if( pParse->db->mallocFailed==0 ){ + if( res==0 ){ + /* If (res==0) is true, then pRec must be equal to sample i. */ + assert( i<pIdx->nSample ); + assert( iCol==nField-1 ); + pRec->nField = nField; + assert( 0==sqlite3VdbeRecordCompare(aSample[i].n, aSample[i].p, pRec) + || pParse->db->mallocFailed + ); + }else{ + /* Unless i==pIdx->nSample, indicating that pRec is larger than + ** all samples in the aSample[] array, pRec must be smaller than the + ** (iCol+1) field prefix of sample i. */ + assert( i<=pIdx->nSample && i>=0 ); + pRec->nField = iCol+1; + assert( i==pIdx->nSample + || sqlite3VdbeRecordCompare(aSample[i].n, aSample[i].p, pRec)>0 + || pParse->db->mallocFailed ); + + /* if i==0 and iCol==0, then record pRec is smaller than all samples + ** in the aSample[] array. Otherwise, if (iCol>0) then pRec must + ** be greater than or equal to the (iCol) field prefix of sample i. + ** If (i>0), then pRec must also be greater than sample (i-1). */ + if( iCol>0 ){ + pRec->nField = iCol; + assert( sqlite3VdbeRecordCompare(aSample[i].n, aSample[i].p, pRec)<=0 + || pParse->db->mallocFailed ); + } + if( i>0 ){ + pRec->nField = nField; + assert( sqlite3VdbeRecordCompare(aSample[i-1].n, aSample[i-1].p, pRec)<0 + || pParse->db->mallocFailed ); + } + } + } +#endif /* ifdef SQLITE_DEBUG */ + + if( res==0 ){ + /* Record pRec is equal to sample i */ + assert( iCol==nField-1 ); + aStat[0] = aSample[i].anLt[iCol]; + aStat[1] = aSample[i].anEq[iCol]; + }else{ + /* At this point, the (iCol+1) field prefix of aSample[i] is the first + ** sample that is greater than pRec. Or, if i==pIdx->nSample then pRec + ** is larger than all samples in the array. */ + tRowcnt iUpper, iGap; + if( i>=pIdx->nSample ){ + iUpper = sqlite3LogEstToInt(pIdx->aiRowLogEst[0]); + }else{ + iUpper = aSample[i].anLt[iCol]; + } + + if( iLower>=iUpper ){ + iGap = 0; + }else{ + iGap = iUpper - iLower; + } + if( roundUp ){ + iGap = (iGap*2)/3; + }else{ + iGap = iGap/3; + } + aStat[0] = iLower + iGap; + aStat[1] = pIdx->aAvgEq[iCol]; + } + + /* Restore the pRec->nField value before returning. */ + pRec->nField = nField; + return i; +} +#endif /* SQLITE_ENABLE_STAT3_OR_STAT4 */ + +/* +** If it is not NULL, pTerm is a term that provides an upper or lower +** bound on a range scan. Without considering pTerm, it is estimated +** that the scan will visit nNew rows. This function returns the number +** estimated to be visited after taking pTerm into account. +** +** If the user explicitly specified a likelihood() value for this term, +** then the return value is the likelihood multiplied by the number of +** input rows. Otherwise, this function assumes that an "IS NOT NULL" term +** has a likelihood of 0.50, and any other term a likelihood of 0.25. 
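
whereRangeAdjust() in the code that follows works on LogEst values, which store roughly ten times the base-2 logarithm of a row count, so subtracting 20 is the same as dividing the estimate by 4, the assumed selectivity of an ordinary range bound. A small sketch of that arithmetic, with hypothetical rowsToLogEst()/logEstToRows() helpers standing in for sqlite3LogEst() and its inverse:

    #include <math.h>
    #include <stdio.h>

    /* Local approximations of the LogEst encoding, for illustration only. */
    typedef short LogEst;
    static LogEst rowsToLogEst(double n){ return (LogEst)(10.0*log2(n) + 0.5); }
    static double logEstToRows(LogEst e){ return pow(2.0, e/10.0); }

    int main(void){
      LogEst nNew = rowsToLogEst(1000000);  /* scan estimated at ~1M rows */
      /* An ordinary range term (no likelihood() given): -20 is /4. */
      LogEst adjusted = nNew - 20;
      printf("before: %d (~%.0f rows)  after: %d (~%.0f rows)\n",
             nNew, logEstToRows(nNew), adjusted, logEstToRows(adjusted));
      /* A term written as likelihood(x>?, 0.1) carries a negative
      ** truthProb of roughly 10*log2(0.1), about -33, so the estimate
      ** is multiplied by about 0.1 instead. */
      printf("likelihood(0.1): %d (~%.0f rows)\n",
             (int)(nNew - 33), logEstToRows(nNew - 33));
      return 0;
    }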
+*/ +static LogEst whereRangeAdjust(WhereTerm *pTerm, LogEst nNew){ + LogEst nRet = nNew; + if( pTerm ){ + if( pTerm->truthProb<=0 ){ + nRet += pTerm->truthProb; + }else if( (pTerm->wtFlags & TERM_VNULL)==0 ){ + nRet -= 20; assert( 20==sqlite3LogEst(4) ); + } + } + return nRet; +} + + +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 +/* +** Return the affinity for a single column of an index. +*/ +static char sqlite3IndexColumnAffinity(sqlite3 *db, Index *pIdx, int iCol){ + assert( iCol>=0 && iCol<pIdx->nColumn ); + if( !pIdx->zColAff ){ + if( sqlite3IndexAffinityStr(db, pIdx)==0 ) return SQLITE_AFF_BLOB; + } + return pIdx->zColAff[iCol]; +} +#endif + + +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 +/* +** This function is called to estimate the number of rows visited by a +** range-scan on a skip-scan index. For example: +** +** CREATE INDEX i1 ON t1(a, b, c); +** SELECT * FROM t1 WHERE a=? AND c BETWEEN ? AND ?; +** +** Value pLoop->nOut is currently set to the estimated number of rows +** visited for scanning (a=? AND b=?). This function reduces that estimate +** by some factor to account for the (c BETWEEN ? AND ?) expression based +** on the stat4 data for the index. this scan will be peformed multiple +** times (once for each (a,b) combination that matches a=?) is dealt with +** by the caller. +** +** It does this by scanning through all stat4 samples, comparing values +** extracted from pLower and pUpper with the corresponding column in each +** sample. If L and U are the number of samples found to be less than or +** equal to the values extracted from pLower and pUpper respectively, and +** N is the total number of samples, the pLoop->nOut value is adjusted +** as follows: +** +** nOut = nOut * ( min(U - L, 1) / N ) +** +** If pLower is NULL, or a value cannot be extracted from the term, L is +** set to zero. If pUpper is NULL, or a value cannot be extracted from it, +** U is set to N. +** +** Normally, this function sets *pbDone to 1 before returning. However, +** if no value can be extracted from either pLower or pUpper (and so the +** estimate of the number of rows delivered remains unchanged), *pbDone +** is left as is. +** +** If an error occurs, an SQLite error code is returned. Otherwise, +** SQLITE_OK. +*/ +static int whereRangeSkipScanEst( + Parse *pParse, /* Parsing & code generating context */ + WhereTerm *pLower, /* Lower bound on the range. ex: "x>123" Might be NULL */ + WhereTerm *pUpper, /* Upper bound on the range. ex: "x<455" Might be NULL */ + WhereLoop *pLoop, /* Update the .nOut value of this loop */ + int *pbDone /* Set to true if at least one expr. value extracted */ +){ + Index *p = pLoop->u.btree.pIndex; + int nEq = pLoop->u.btree.nEq; + sqlite3 *db = pParse->db; + int nLower = -1; + int nUpper = p->nSample+1; + int rc = SQLITE_OK; + u8 aff = sqlite3IndexColumnAffinity(db, p, nEq); + CollSeq *pColl; + + sqlite3_value *p1 = 0; /* Value extracted from pLower */ + sqlite3_value *p2 = 0; /* Value extracted from pUpper */ + sqlite3_value *pVal = 0; /* Value extracted from record */ + + pColl = sqlite3LocateCollSeq(pParse, p->azColl[nEq]); + if( pLower ){ + rc = sqlite3Stat4ValueFromExpr(pParse, pLower->pExpr->pRight, aff, &p1); + nLower = 0; + } + if( pUpper && rc==SQLITE_OK ){ + rc = sqlite3Stat4ValueFromExpr(pParse, pUpper->pExpr->pRight, aff, &p2); + nUpper = p2 ? 
0 : p->nSample; + } + + if( p1 || p2 ){ + int i; + int nDiff; + for(i=0; rc==SQLITE_OK && i<p->nSample; i++){ + rc = sqlite3Stat4Column(db, p->aSample[i].p, p->aSample[i].n, nEq, &pVal); + if( rc==SQLITE_OK && p1 ){ + int res = sqlite3MemCompare(p1, pVal, pColl); + if( res>=0 ) nLower++; + } + if( rc==SQLITE_OK && p2 ){ + int res = sqlite3MemCompare(p2, pVal, pColl); + if( res>=0 ) nUpper++; + } + } + nDiff = (nUpper - nLower); + if( nDiff<=0 ) nDiff = 1; + + /* If there is both an upper and lower bound specified, and the + ** comparisons indicate that they are close together, use the fallback + ** method (assume that the scan visits 1/64 of the rows) for estimating + ** the number of rows visited. Otherwise, estimate the number of rows + ** using the method described in the header comment for this function. */ + if( nDiff!=1 || pUpper==0 || pLower==0 ){ + int nAdjust = (sqlite3LogEst(p->nSample) - sqlite3LogEst(nDiff)); + pLoop->nOut -= nAdjust; + *pbDone = 1; + WHERETRACE(0x10, ("range skip-scan regions: %u..%u adjust=%d est=%d\n", + nLower, nUpper, nAdjust*-1, pLoop->nOut)); + } + + }else{ + assert( *pbDone==0 ); + } + + sqlite3ValueFree(p1); + sqlite3ValueFree(p2); + sqlite3ValueFree(pVal); + + return rc; +} +#endif /* SQLITE_ENABLE_STAT3_OR_STAT4 */ /* ** This function is used to estimate the number of rows that will be visited ** by scanning an index for a range of values. The range may have an upper ** bound, a lower bound, or both. The WHERE clause terms that set the upper @@ -90426,1470 +124552,2585 @@ ** pLower pUpper ** ** If either of the upper or lower bound is not present, then NULL is passed in ** place of the corresponding WhereTerm. ** -** The nEq parameter is passed the index of the index column subject to the -** range constraint. Or, equivalently, the number of equality constraints -** optimized by the proposed index scan. For example, assuming index p is -** on t1(a, b), and the SQL query is: +** The value in (pBuilder->pNew->u.btree.nEq) is the number of the index +** column subject to the range constraint. Or, equivalently, the number of +** equality constraints optimized by the proposed index scan. For example, +** assuming index p is on t1(a, b), and the SQL query is: ** ** ... FROM t1 WHERE a = ? AND b > ? AND b < ? ... ** -** then nEq should be passed the value 1 (as the range restricted column, -** b, is the second left-most column of the index). Or, if the query is: +** then nEq is set to 1 (as the range restricted column, b, is the second +** left-most column of the index). Or, if the query is: ** ** ... FROM t1 WHERE a > ? AND a < ? ... ** -** then nEq should be passed 0. -** -** The returned value is an integer between 1 and 100, inclusive. A return -** value of 1 indicates that the proposed range scan is expected to visit -** approximately 1/100th (1%) of the rows selected by the nEq equality -** constraints (if any). A return value of 100 indicates that it is expected -** that the range scan will visit every row (100%) selected by the equality -** constraints. -** -** In the absence of sqlite_stat2 ANALYZE data, each range inequality -** reduces the search space by 2/3rds. Hence a single constraint (x>?) -** results in a return of 33 and a range constraint (x>? AND x<?) results -** in a return of 11. +** then nEq is set to 0. +** +** When this function is called, *pnOut is set to the sqlite3LogEst() of the +** number of rows that the index scan is expected to visit without +** considering the range constraints. 
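
whereRangeSkipScanEst() above shrinks pLoop->nOut by the LogEst of nSample/nDiff, which is the (U - L)/N factor from its header comment expressed as a subtraction. A worked sketch with invented sample counts, ignoring the fallback the code takes when both bounds land in the same sample region:

    #include <math.h>
    #include <stdio.h>

    /* Local helper approximating sqlite3LogEst(); for illustration only. */
    typedef short LogEst;
    static LogEst logEst(double n){ return (LogEst)(10.0*log2(n) + 0.5); }
    static double toRows(LogEst e){ return pow(2.0, e/10.0); }

    int main(void){
      int nSample = 24;        /* p->nSample: STAT4 samples available      */
      int nLower  = 5;         /* samples at or below the pLower value     */
      int nUpper  = 11;        /* samples at or below the pUpper value     */
      LogEst nOut = logEst(8000);   /* rows estimated for the a=? prefix   */

      int nDiff = nUpper - nLower;            /* samples inside the range  */
      if( nDiff<=0 ) nDiff = 1;
      /* nOut = nOut * (nDiff/nSample), done as a LogEst subtraction. */
      LogEst nAdjust = logEst(nSample) - logEst(nDiff);
      printf("adjust by %d: %.0f -> %.0f rows\n",
             nAdjust, toRows(nOut), toRows(nOut - nAdjust));
      return 0;
    }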
If nEq is 0, then *pnOut is the number of +** rows in the index. Assuming no error occurs, *pnOut is adjusted (reduced) +** to account for the range constraints pLower and pUpper. +** +** In the absence of sqlite_stat4 ANALYZE data, or if such data cannot be +** used, a single range inequality reduces the search space by a factor of 4. +** and a pair of constraints (x>? AND x<?) reduces the expected number of +** rows visited by a factor of 64. */ static int whereRangeScanEst( Parse *pParse, /* Parsing & code generating context */ - Index *p, /* The index containing the range-compared column; "x" */ - int nEq, /* index into p->aCol[] of the range-compared column */ + WhereLoopBuilder *pBuilder, WhereTerm *pLower, /* Lower bound on the range. ex: "x>123" Might be NULL */ WhereTerm *pUpper, /* Upper bound on the range. ex: "x<455" Might be NULL */ - int *piEst /* OUT: Return value */ -){ - int rc = SQLITE_OK; - -#ifdef SQLITE_ENABLE_STAT2 - - if( nEq==0 && p->aSample ){ - sqlite3_value *pLowerVal = 0; - sqlite3_value *pUpperVal = 0; - int iEst; - int iLower = 0; - int iUpper = SQLITE_INDEX_SAMPLES; - u8 aff = p->pTable->aCol[p->aiColumn[0]].affinity; - - if( pLower ){ - Expr *pExpr = pLower->pExpr->pRight; - rc = valueFromExpr(pParse, pExpr, aff, &pLowerVal); - } - if( rc==SQLITE_OK && pUpper ){ - Expr *pExpr = pUpper->pExpr->pRight; - rc = valueFromExpr(pParse, pExpr, aff, &pUpperVal); - } - - if( rc!=SQLITE_OK || (pLowerVal==0 && pUpperVal==0) ){ - sqlite3ValueFree(pLowerVal); - sqlite3ValueFree(pUpperVal); - goto range_est_fallback; - }else if( pLowerVal==0 ){ - rc = whereRangeRegion(pParse, p, pUpperVal, &iUpper); - if( pLower ) iLower = iUpper/2; - }else if( pUpperVal==0 ){ - rc = whereRangeRegion(pParse, p, pLowerVal, &iLower); - if( pUpper ) iUpper = (iLower + SQLITE_INDEX_SAMPLES + 1)/2; - }else{ - rc = whereRangeRegion(pParse, p, pUpperVal, &iUpper); + WhereLoop *pLoop /* Modify the .nOut and maybe .rRun fields */ +){ + int rc = SQLITE_OK; + int nOut = pLoop->nOut; + LogEst nNew; + +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + Index *p = pLoop->u.btree.pIndex; + int nEq = pLoop->u.btree.nEq; + + if( p->nSample>0 && nEq<p->nSampleCol ){ + if( nEq==pBuilder->nRecValid ){ + UnpackedRecord *pRec = pBuilder->pRec; + tRowcnt a[2]; + u8 aff; + + /* Variable iLower will be set to the estimate of the number of rows in + ** the index that are less than the lower bound of the range query. The + ** lower bound being the concatenation of $P and $L, where $P is the + ** key-prefix formed by the nEq values matched against the nEq left-most + ** columns of the index, and $L is the value in pLower. + ** + ** Or, if pLower is NULL or $L cannot be extracted from it (because it + ** is not a simple variable or literal value), the lower bound of the + ** range is $P. Due to a quirk in the way whereKeyStats() works, even + ** if $L is available, whereKeyStats() is called for both ($P) and + ** ($P:$L) and the larger of the two returned values is used. + ** + ** Similarly, iUpper is to be set to the estimate of the number of rows + ** less than the upper bound of the range query. Where the upper bound + ** is either ($P) or ($P:$U). Again, even if $U is available, both values + ** of iUpper are requested of whereKeyStats() and the smaller used. + ** + ** The number of rows between the two bounds is then just iUpper-iLower. 
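
The stat4 range estimate described above reduces to iUpper - iLower, where each bound starts from the whereKeyStats() result for the equality prefix ($P) and is then tightened with ($P:$L) or ($P:$U) when a value can be extracted from the bound. A numeric sketch of that combination step with invented aStat[] results; the real code then converts the difference to a LogEst, keeps it only if it is smaller than the current estimate, and falls back to about two rows when the bounds cross:

    #include <stdio.h>

    typedef unsigned tRowcnt;

    int main(void){
      tRowcnt iLower, iUpper, iNew;

      /* Bounds from the equality prefix ($P) alone, e.g. "a=5". */
      tRowcnt aStatP[2] = { 40000, 1200 };    /* 1200 rows have a=5      */
      iLower = aStatP[0];                     /* rows <  (a=5)           */
      iUpper = aStatP[0] + aStatP[1];         /* rows <= (a=5)           */

      /* "b > 10": improve the lower bound using ($P:$L) = (5,10).
      ** A strict > bound excludes the equal rows, so they join iLower. */
      tRowcnt aStatL[2] = { 40310, 25 };
      iNew = aStatL[0] + aStatL[1];
      if( iNew>iLower ) iLower = iNew;

      /* "b < 200": improve the upper bound using ($P:$U) = (5,200). */
      tRowcnt aStatU[2] = { 41050, 30 };
      iNew = aStatU[0];
      if( iNew<iUpper ) iUpper = iNew;

      printf("estimated rows in range: %u\n",
             iUpper>iLower ? iUpper-iLower : 2);
      return 0;
    }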
+ */ + tRowcnt iLower; /* Rows less than the lower bound */ + tRowcnt iUpper; /* Rows less than the upper bound */ + int iLwrIdx = -2; /* aSample[] for the lower bound */ + int iUprIdx = -1; /* aSample[] for the upper bound */ + + if( pRec ){ + testcase( pRec->nField!=pBuilder->nRecValid ); + pRec->nField = pBuilder->nRecValid; + } + aff = sqlite3IndexColumnAffinity(pParse->db, p, nEq); + assert( nEq!=p->nKeyCol || aff==SQLITE_AFF_INTEGER ); + /* Determine iLower and iUpper using ($P) only. */ + if( nEq==0 ){ + iLower = 0; + iUpper = p->nRowEst0; + }else{ + /* Note: this call could be optimized away - since the same values must + ** have been requested when testing key $P in whereEqualScanEst(). */ + whereKeyStats(pParse, p, pRec, 0, a); + iLower = a[0]; + iUpper = a[0] + a[1]; + } + + assert( pLower==0 || (pLower->eOperator & (WO_GT|WO_GE))!=0 ); + assert( pUpper==0 || (pUpper->eOperator & (WO_LT|WO_LE))!=0 ); + assert( p->aSortOrder!=0 ); + if( p->aSortOrder[nEq] ){ + /* The roles of pLower and pUpper are swapped for a DESC index */ + SWAP(WhereTerm*, pLower, pUpper); + } + + /* If possible, improve on the iLower estimate using ($P:$L). */ + if( pLower ){ + int bOk; /* True if value is extracted from pExpr */ + Expr *pExpr = pLower->pExpr->pRight; + rc = sqlite3Stat4ProbeSetValue(pParse, p, &pRec, pExpr, aff, nEq, &bOk); + if( rc==SQLITE_OK && bOk ){ + tRowcnt iNew; + iLwrIdx = whereKeyStats(pParse, p, pRec, 0, a); + iNew = a[0] + ((pLower->eOperator & (WO_GT|WO_LE)) ? a[1] : 0); + if( iNew>iLower ) iLower = iNew; + nOut--; + pLower = 0; + } + } + + /* If possible, improve on the iUpper estimate using ($P:$U). */ + if( pUpper ){ + int bOk; /* True if value is extracted from pExpr */ + Expr *pExpr = pUpper->pExpr->pRight; + rc = sqlite3Stat4ProbeSetValue(pParse, p, &pRec, pExpr, aff, nEq, &bOk); + if( rc==SQLITE_OK && bOk ){ + tRowcnt iNew; + iUprIdx = whereKeyStats(pParse, p, pRec, 1, a); + iNew = a[0] + ((pUpper->eOperator & (WO_GT|WO_LE)) ? a[1] : 0); + if( iNew<iUpper ) iUpper = iNew; + nOut--; + pUpper = 0; + } + } + + pBuilder->pRec = pRec; if( rc==SQLITE_OK ){ - rc = whereRangeRegion(pParse, p, pLowerVal, &iLower); - } - } - - iEst = iUpper - iLower; - testcase( iEst==SQLITE_INDEX_SAMPLES ); - assert( iEst<=SQLITE_INDEX_SAMPLES ); - if( iEst<1 ){ - iEst = 1; - } - - sqlite3ValueFree(pLowerVal); - sqlite3ValueFree(pUpperVal); - *piEst = (iEst * 100)/SQLITE_INDEX_SAMPLES; - return rc; - } -range_est_fallback: + if( iUpper>iLower ){ + nNew = sqlite3LogEst(iUpper - iLower); + /* TUNING: If both iUpper and iLower are derived from the same + ** sample, then assume they are 4x more selective. This brings + ** the estimated selectivity more in line with what it would be + ** if estimated without the use of STAT3/4 tables. */ + if( iLwrIdx==iUprIdx ) nNew -= 20; assert( 20==sqlite3LogEst(4) ); + }else{ + nNew = 10; assert( 10==sqlite3LogEst(2) ); + } + if( nNew<nOut ){ + nOut = nNew; + } + WHERETRACE(0x10, ("STAT4 range scan: %u..%u est=%d\n", + (u32)iLower, (u32)iUpper, nOut)); + } + }else{ + int bDone = 0; + rc = whereRangeSkipScanEst(pParse, pLower, pUpper, pLoop, &bDone); + if( bDone ) return rc; + } + } #else UNUSED_PARAMETER(pParse); - UNUSED_PARAMETER(p); - UNUSED_PARAMETER(nEq); -#endif + UNUSED_PARAMETER(pBuilder); assert( pLower || pUpper ); - if( pLower && pUpper ){ - *piEst = 11; - }else{ - *piEst = 33; - } - return rc; -} - - -/* -** Find the query plan for accessing a particular table. 
Write the -** best query plan and its cost into the WhereCost object supplied as the -** last parameter. -** -** The lowest cost plan wins. The cost is an estimate of the amount of -** CPU and disk I/O need to process the request using the selected plan. -** Factors that influence cost include: -** -** * The estimated number of rows that will be retrieved. (The -** fewer the better.) -** -** * Whether or not sorting must occur. -** -** * Whether or not there must be separate lookups in the -** index and in the main table. -** -** If there was an INDEXED BY clause (pSrc->pIndex) attached to the table in -** the SQL statement, then this function only considers plans using the -** named index. If no such plan is found, then the returned cost is -** SQLITE_BIG_DBL. If a plan is found that uses the named index, -** then the cost is calculated in the usual way. -** -** If a NOT INDEXED clause (pSrc->notIndexed!=0) was attached to the table -** in the SELECT statement, then no indexes are considered. However, the -** selected plan may still take advantage of the tables built-in rowid -** index. -*/ -static void bestBtreeIndex( - Parse *pParse, /* The parsing context */ - WhereClause *pWC, /* The WHERE clause */ - struct SrcList_item *pSrc, /* The FROM clause term to search */ - Bitmask notReady, /* Mask of cursors that are not available */ - ExprList *pOrderBy, /* The ORDER BY clause */ - WhereCost *pCost /* Lowest cost query plan */ -){ - int iCur = pSrc->iCursor; /* The cursor of the table to be accessed */ - Index *pProbe; /* An index we are evaluating */ - Index *pIdx; /* Copy of pProbe, or zero for IPK index */ - int eqTermMask; /* Current mask of valid equality operators */ - int idxEqTermMask; /* Index mask of valid equality operators */ - Index sPk; /* A fake index object for the primary key */ - unsigned int aiRowEstPk[2]; /* The aiRowEst[] value for the sPk index */ - int aiColumnPk = -1; /* The aColumn[] value for the sPk index */ - int wsFlagMask; /* Allowed flags in pCost->plan.wsFlag */ - - /* Initialize the cost to a worst-case value */ - memset(pCost, 0, sizeof(*pCost)); - pCost->rCost = SQLITE_BIG_DBL; - - /* If the pSrc table is the right table of a LEFT JOIN then we may not - ** use an index to satisfy IS NULL constraints on that table. This is - ** because columns might end up being NULL if the table does not match - - ** a circumstance which the index cannot help us discover. Ticket #2177. - */ - if( pSrc->jointype & JT_LEFT ){ - idxEqTermMask = WO_EQ|WO_IN; - }else{ - idxEqTermMask = WO_EQ|WO_IN|WO_ISNULL; - } - - if( pSrc->pIndex ){ - /* An INDEXED BY clause specifies a particular index to use */ - pIdx = pProbe = pSrc->pIndex; - wsFlagMask = ~(WHERE_ROWID_EQ|WHERE_ROWID_RANGE); - eqTermMask = idxEqTermMask; - }else{ - /* There is no INDEXED BY clause. Create a fake Index object to - ** represent the primary key */ - Index *pFirst; /* Any other index on the table */ - memset(&sPk, 0, sizeof(Index)); - sPk.nColumn = 1; - sPk.aiColumn = &aiColumnPk; - sPk.aiRowEst = aiRowEstPk; - aiRowEstPk[1] = 1; - sPk.onError = OE_Replace; - sPk.pTable = pSrc->pTab; - pFirst = pSrc->pTab->pIndex; - if( pSrc->notIndexed==0 ){ - sPk.pNext = pFirst; - } - /* The aiRowEstPk[0] is an estimate of the total number of rows in the - ** table. Get this information from the ANALYZE information if it is - ** available. If not available, assume the table 1 million rows in size. 
- */ - if( pFirst ){ - assert( pFirst->aiRowEst!=0 ); /* Allocated together with pFirst */ - aiRowEstPk[0] = pFirst->aiRowEst[0]; - }else{ - aiRowEstPk[0] = 1000000; - } - pProbe = &sPk; - wsFlagMask = ~( - WHERE_COLUMN_IN|WHERE_COLUMN_EQ|WHERE_COLUMN_NULL|WHERE_COLUMN_RANGE - ); - eqTermMask = WO_EQ|WO_IN; - pIdx = 0; - } - - /* Loop over all indices looking for the best one to use - */ - for(; pProbe; pIdx=pProbe=pProbe->pNext){ - const unsigned int * const aiRowEst = pProbe->aiRowEst; - double cost; /* Cost of using pProbe */ - double nRow; /* Estimated number of rows in result set */ - int rev; /* True to scan in reverse order */ - int wsFlags = 0; - Bitmask used = 0; - - /* The following variables are populated based on the properties of - ** scan being evaluated. They are then used to determine the expected - ** cost and number of rows returned. - ** - ** nEq: - ** Number of equality terms that can be implemented using the index. - ** - ** nInMul: - ** The "in-multiplier". This is an estimate of how many seek operations - ** SQLite must perform on the index in question. For example, if the - ** WHERE clause is: - ** - ** WHERE a IN (1, 2, 3) AND b IN (4, 5, 6) - ** - ** SQLite must perform 9 lookups on an index on (a, b), so nInMul is - ** set to 9. Given the same schema and either of the following WHERE - ** clauses: - ** - ** WHERE a = 1 - ** WHERE a >= 2 - ** - ** nInMul is set to 1. - ** - ** If there exists a WHERE term of the form "x IN (SELECT ...)", then - ** the sub-select is assumed to return 25 rows for the purposes of - ** determining nInMul. - ** - ** bInEst: - ** Set to true if there was at least one "x IN (SELECT ...)" term used - ** in determining the value of nInMul. - ** - ** estBound: - ** An estimate on the amount of the table that must be searched. A - ** value of 100 means the entire table is searched. Range constraints - ** might reduce this to a value less than 100 to indicate that only - ** a fraction of the table needs searching. In the absence of - ** sqlite_stat2 ANALYZE data, a single inequality reduces the search - ** space to 1/3rd its original size. So an x>? constraint reduces - ** estBound to 33. Two constraints (x>? AND x<?) reduce estBound to 11. - ** - ** bSort: - ** Boolean. True if there is an ORDER BY clause that will require an - ** external sort (i.e. scanning the index being evaluated will not - ** correctly order records). - ** - ** bLookup: - ** Boolean. True if for each index entry visited a lookup on the - ** corresponding table b-tree is required. This is always false - ** for the rowid index. For other indexes, it is true unless all the - ** columns of the table used by the SELECT statement are present in - ** the index (such an index is sometimes described as a covering index). - ** For example, given the index on (a, b), the second of the following - ** two queries requires table b-tree lookups, but the first does not. 
- ** - ** SELECT a, b FROM tbl WHERE a = 1; - ** SELECT a, b, c FROM tbl WHERE a = 1; - */ - int nEq; - int bInEst = 0; - int nInMul = 1; - int estBound = 100; - int nBound = 0; /* Number of range constraints seen */ - int bSort = 0; - int bLookup = 0; - WhereTerm *pTerm; /* A single term of the WHERE clause */ - - /* Determine the values of nEq and nInMul */ - for(nEq=0; nEq<pProbe->nColumn; nEq++){ - int j = pProbe->aiColumn[nEq]; - pTerm = findTerm(pWC, iCur, j, notReady, eqTermMask, pIdx); - if( pTerm==0 ) break; - wsFlags |= (WHERE_COLUMN_EQ|WHERE_ROWID_EQ); - if( pTerm->eOperator & WO_IN ){ - Expr *pExpr = pTerm->pExpr; - wsFlags |= WHERE_COLUMN_IN; - if( ExprHasProperty(pExpr, EP_xIsSelect) ){ - nInMul *= 25; - bInEst = 1; - }else if( pExpr->x.pList ){ - nInMul *= pExpr->x.pList->nExpr + 1; - } - }else if( pTerm->eOperator & WO_ISNULL ){ - wsFlags |= WHERE_COLUMN_NULL; - } - used |= pTerm->prereqRight; - } - - /* Determine the value of estBound. */ - if( nEq<pProbe->nColumn ){ - int j = pProbe->aiColumn[nEq]; - if( findTerm(pWC, iCur, j, notReady, WO_LT|WO_LE|WO_GT|WO_GE, pIdx) ){ - WhereTerm *pTop = findTerm(pWC, iCur, j, notReady, WO_LT|WO_LE, pIdx); - WhereTerm *pBtm = findTerm(pWC, iCur, j, notReady, WO_GT|WO_GE, pIdx); - whereRangeScanEst(pParse, pProbe, nEq, pBtm, pTop, &estBound); - if( pTop ){ - nBound = 1; - wsFlags |= WHERE_TOP_LIMIT; - used |= pTop->prereqRight; - } - if( pBtm ){ - nBound++; - wsFlags |= WHERE_BTM_LIMIT; - used |= pBtm->prereqRight; - } - wsFlags |= (WHERE_COLUMN_RANGE|WHERE_ROWID_RANGE); - } - }else if( pProbe->onError!=OE_None ){ - testcase( wsFlags & WHERE_COLUMN_IN ); - testcase( wsFlags & WHERE_COLUMN_NULL ); - if( (wsFlags & (WHERE_COLUMN_IN|WHERE_COLUMN_NULL))==0 ){ - wsFlags |= WHERE_UNIQUE; - } - } - - /* If there is an ORDER BY clause and the index being considered will - ** naturally scan rows in the required order, set the appropriate flags - ** in wsFlags. Otherwise, if there is an ORDER BY clause but the index - ** will scan rows in a different order, set the bSort variable. */ - if( pOrderBy ){ - if( (wsFlags & (WHERE_COLUMN_IN|WHERE_COLUMN_NULL))==0 - && isSortingIndex(pParse,pWC->pMaskSet,pProbe,iCur,pOrderBy,nEq,&rev) - ){ - wsFlags |= WHERE_ROWID_RANGE|WHERE_COLUMN_RANGE|WHERE_ORDERBY; - wsFlags |= (rev ? WHERE_REVERSE : 0); - }else{ - bSort = 1; - } - } - - /* If currently calculating the cost of using an index (not the IPK - ** index), determine if all required column data may be obtained without - ** using the main table (i.e. if the index is a covering - ** index for this query). If it is, set the WHERE_IDX_ONLY flag in - ** wsFlags. Otherwise, set the bLookup variable to true. */ - if( pIdx && wsFlags ){ - Bitmask m = pSrc->colUsed; - int j; - for(j=0; j<pIdx->nColumn; j++){ - int x = pIdx->aiColumn[j]; - if( x<BMS-1 ){ - m &= ~(((Bitmask)1)<<x); - } - } - if( m==0 ){ - wsFlags |= WHERE_IDX_ONLY; - }else{ - bLookup = 1; - } - } - - /* - ** Estimate the number of rows of output. For an IN operator, - ** do not let the estimate exceed half the rows in the table. - */ - nRow = (double)(aiRowEst[nEq] * nInMul); - if( bInEst && nRow*2>aiRowEst[0] ){ - nRow = aiRowEst[0]/2; - nInMul = (int)(nRow / aiRowEst[nEq]); - } - - /* Assume constant cost to access a row and logarithmic cost to - ** do a binary search. Hence, the initial cost is the number of output - ** rows plus log2(table-size) times the number of binary searches. 
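
Worked numbers for the planner variables described in the comment above, restating its own examples (the IN-list multiplier and the one-third-per-inequality rule for estBound when no sqlite_stat2 data is available):

    #include <stdio.h>

    int main(void){
      /* WHERE a IN (1,2,3) AND b IN (4,5,6) on an index (a,b):
      ** each IN list multiplies the number of index seeks. */
      int nListA = 3, nListB = 3;
      int nInMul = nListA*nListB;              /* 9 seeks */
      printf("nInMul = %d\n", nInMul);

      /* WHERE a IN (SELECT ...): the subquery is assumed to yield 25 rows. */
      printf("nInMul with IN (SELECT ...) = %d, bInEst = 1\n", 25);

      /* estBound: percentage of the index range a scan is expected to
      ** visit; each inequality keeps roughly a third of it. */
      int estBound = 100;
      estBound = estBound/3;                   /* x>?          -> 33 */
      printf("one inequality:   estBound = %d\n", estBound);
      estBound = estBound/3;                   /* x>? AND x<?  -> 11 */
      printf("two inequalities: estBound = %d\n", estBound);
      return 0;
    }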
- */ - cost = nRow + nInMul*estLog(aiRowEst[0]); - - /* Adjust the number of rows and the cost downward to reflect rows - ** that are excluded by range constraints. - */ - nRow = (nRow * (double)estBound) / (double)100; - cost = (cost * (double)estBound) / (double)100; - - /* Add in the estimated cost of sorting the result - */ - if( bSort ){ - cost += cost*estLog(cost); - } - - /* If all information can be taken directly from the index, we avoid - ** doing table lookups. This reduces the cost by half. (Not really - - ** this needs to be fixed.) - */ - if( pIdx && bLookup==0 ){ - cost /= (double)2; - } - /**** Cost of using this index has now been computed ****/ - - /* If there are additional constraints on this table that cannot - ** be used with the current index, but which might lower the number - ** of output rows, adjust the nRow value accordingly. This only - ** matters if the current index is the least costly, so do not bother - ** with this step if we already know this index will not be chosen. - ** Also, never reduce the output row count below 2 using this step. - ** - ** Do not reduce the output row count if pSrc is the only table that - ** is notReady; if notReady is a power of two. This will be the case - ** when the main sqlite3WhereBegin() loop is scanning for a table with - ** and "optimal" index, and on such a scan the output row count - ** reduction is not valid because it does not update the "pCost->used" - ** bitmap. The notReady bitmap will also be a power of two when we - ** are scanning for the last table in a 64-way join. We are willing - ** to bypass this optimization in that corner case. - */ - if( nRow>2 && cost<=pCost->rCost && (notReady & (notReady-1))!=0 ){ - int k; /* Loop counter */ - int nSkipEq = nEq; /* Number of == constraints to skip */ - int nSkipRange = nBound; /* Number of < constraints to skip */ - Bitmask thisTab; /* Bitmap for pSrc */ - - thisTab = getMask(pWC->pMaskSet, iCur); - for(pTerm=pWC->a, k=pWC->nTerm; nRow>2 && k; k--, pTerm++){ - if( pTerm->wtFlags & TERM_VIRTUAL ) continue; - if( (pTerm->prereqAll & notReady)!=thisTab ) continue; - if( pTerm->eOperator & (WO_EQ|WO_IN|WO_ISNULL) ){ - if( nSkipEq ){ - /* Ignore the first nEq equality matches since the index - ** has already accounted for these */ - nSkipEq--; - }else{ - /* Assume each additional equality match reduces the result - ** set size by a factor of 10 */ - nRow /= 10; - } - }else if( pTerm->eOperator & (WO_LT|WO_LE|WO_GT|WO_GE) ){ - if( nSkipRange ){ - /* Ignore the first nBound range constraints since the index - ** has already accounted for these */ - nSkipRange--; - }else{ - /* Assume each additional range constraint reduces the result - ** set size by a factor of 3 */ - nRow /= 3; - } - }else{ - /* Any other expression lowers the output row count by half */ - nRow /= 2; - } - } - if( nRow<2 ) nRow = 2; - } - - - WHERETRACE(( - "%s(%s): nEq=%d nInMul=%d estBound=%d bSort=%d bLookup=%d wsFlags=0x%x\n" - " notReady=0x%llx nRow=%.2f cost=%.2f used=0x%llx\n", - pSrc->pTab->zName, (pIdx ? pIdx->zName : "ipk"), - nEq, nInMul, estBound, bSort, bLookup, wsFlags, - notReady, nRow, cost, used - )); - - /* If this index is the best we have seen so far, then record this - ** index and its cost in the pCost structure. 
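
Putting the legacy cost model above into one small calculation with invented statistics; log2() stands in here for the estLog() helper the real code calls, and the final lines apply the output-row reductions for WHERE terms the chosen index cannot use (one tenth per extra equality, one third per extra range term, one half for anything else, never below two rows):

    #include <math.h>
    #include <stdio.h>

    int main(void){
      double aiRowEst0  = 1000000;  /* rows in the table                    */
      double aiRowEstEq = 100;      /* rows matching the nEq equality terms */
      double nInMul     = 9;        /* a IN (1,2,3) AND b IN (4,5,6)        */
      double estBound   = 33;       /* one range term, no sqlite_stat2 data */
      int    bSort      = 1;        /* ORDER BY needs an external sort      */
      int    bLookup    = 0;        /* covering index: no table lookups     */

      double nRow = aiRowEstEq*nInMul;              /* estimated output     */
      double cost = nRow + nInMul*log2(aiRowEst0);  /* rows + seek cost     */

      nRow = nRow*estBound/100;                     /* range constraints    */
      cost = cost*estBound/100;
      if( bSort ) cost += cost*log2(cost);          /* external sort        */
      if( bLookup==0 ) cost /= 2;                   /* index-only scan      */

      /* Remaining WHERE terms still shrink the row estimate: one extra
      ** equality and one other expression in this made-up query. */
      nRow /= 10; nRow /= 2;
      if( nRow<2 ) nRow = 2;

      printf("nRow=%.0f cost=%.1f\n", nRow, cost);
      return 0;
    }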
- */ - if( (!pIdx || wsFlags) - && (cost<pCost->rCost || (cost<=pCost->rCost && nRow<pCost->nRow)) - ){ - pCost->rCost = cost; - pCost->nRow = nRow; - pCost->used = used; - pCost->plan.wsFlags = (wsFlags&wsFlagMask); - pCost->plan.nEq = nEq; - pCost->plan.u.pIdx = pIdx; - } - - /* If there was an INDEXED BY clause, then only that one index is - ** considered. */ - if( pSrc->pIndex ) break; - - /* Reset masks for the next index in the loop */ - wsFlagMask = ~(WHERE_ROWID_EQ|WHERE_ROWID_RANGE); - eqTermMask = idxEqTermMask; - } - - /* If there is no ORDER BY clause and the SQLITE_ReverseOrder flag - ** is set, then reverse the order that the index will be scanned - ** in. This is used for application testing, to help find cases - ** where application behaviour depends on the (undefined) order that - ** SQLite outputs rows in in the absence of an ORDER BY clause. */ - if( !pOrderBy && pParse->db->flags & SQLITE_ReverseOrder ){ - pCost->plan.wsFlags |= WHERE_REVERSE; - } - - assert( pOrderBy || (pCost->plan.wsFlags&WHERE_ORDERBY)==0 ); - assert( pCost->plan.u.pIdx==0 || (pCost->plan.wsFlags&WHERE_ROWID_EQ)==0 ); - assert( pSrc->pIndex==0 - || pCost->plan.u.pIdx==0 - || pCost->plan.u.pIdx==pSrc->pIndex - ); - - WHERETRACE(("best index is: %s\n", - ((pCost->plan.wsFlags & WHERE_NOT_FULLSCAN)==0 ? "none" : - pCost->plan.u.pIdx ? pCost->plan.u.pIdx->zName : "ipk") - )); - - bestOrClauseIndex(pParse, pWC, pSrc, notReady, pOrderBy, pCost); - bestAutomaticIndex(pParse, pWC, pSrc, notReady, pCost); - pCost->plan.wsFlags |= eqTermMask; -} - -/* -** Find the query plan for accessing table pSrc->pTab. Write the -** best query plan and its cost into the WhereCost object supplied -** as the last parameter. This function may calculate the cost of -** both real and virtual table scans. -*/ -static void bestIndex( - Parse *pParse, /* The parsing context */ - WhereClause *pWC, /* The WHERE clause */ - struct SrcList_item *pSrc, /* The FROM clause term to search */ - Bitmask notReady, /* Mask of cursors that are not available */ - ExprList *pOrderBy, /* The ORDER BY clause */ - WhereCost *pCost /* Lowest cost query plan */ -){ -#ifndef SQLITE_OMIT_VIRTUALTABLE - if( IsVirtual(pSrc->pTab) ){ - sqlite3_index_info *p = 0; - bestVirtualIndex(pParse, pWC, pSrc, notReady, pOrderBy, pCost, &p); - if( p->needToFreeIdxStr ){ - sqlite3_free(p->idxStr); - } - sqlite3DbFree(pParse->db, p); - }else -#endif - { - bestBtreeIndex(pParse, pWC, pSrc, notReady, pOrderBy, pCost); - } -} - -/* -** Disable a term in the WHERE clause. Except, do not disable the term -** if it controls a LEFT OUTER JOIN and it did not originate in the ON -** or USING clause of that join. -** -** Consider the term t2.z='ok' in the following queries: -** -** (1) SELECT * FROM t1 LEFT JOIN t2 ON t1.a=t2.x WHERE t2.z='ok' -** (2) SELECT * FROM t1 LEFT JOIN t2 ON t1.a=t2.x AND t2.z='ok' -** (3) SELECT * FROM t1, t2 WHERE t1.a=t2.x AND t2.z='ok' -** -** The t2.z='ok' is disabled in the in (2) because it originates -** in the ON clause. The term is disabled in (3) because it is not part -** of a LEFT OUTER JOIN. In (1), the term is not disabled. -** -** Disabling a term causes that term to not be tested in the inner loop -** of the join. Disabling is an optimization. When terms are satisfied -** by indices, we disable them to prevent redundant tests in the inner -** loop. We would get the correct results if nothing were ever disabled, -** but joins might run a little slower. The trick is to disable as much -** as we can without disabling too much. 
If we disabled in (1), we'd get -** the wrong answer. See ticket #813. -*/ -static void disableTerm(WhereLevel *pLevel, WhereTerm *pTerm){ - if( pTerm - && ALWAYS((pTerm->wtFlags & TERM_CODED)==0) - && (pLevel->iLeftJoin==0 || ExprHasProperty(pTerm->pExpr, EP_FromJoin)) - ){ - pTerm->wtFlags |= TERM_CODED; - if( pTerm->iParent>=0 ){ - WhereTerm *pOther = &pTerm->pWC->a[pTerm->iParent]; - if( (--pOther->nChild)==0 ){ - disableTerm(pLevel, pOther); - } - } - } -} - -/* -** Code an OP_Affinity opcode to apply the column affinity string zAff -** to the n registers starting at base. -** -** As an optimization, SQLITE_AFF_NONE entries (which are no-ops) at the -** beginning and end of zAff are ignored. If all entries in zAff are -** SQLITE_AFF_NONE, then no code gets generated. -** -** This routine makes its own copy of zAff so that the caller is free -** to modify zAff after this routine returns. -*/ -static void codeApplyAffinity(Parse *pParse, int base, int n, char *zAff){ - Vdbe *v = pParse->pVdbe; - if( zAff==0 ){ - assert( pParse->db->mallocFailed ); - return; - } - assert( v!=0 ); - - /* Adjust base and n to skip over SQLITE_AFF_NONE entries at the beginning - ** and end of the affinity string. - */ - while( n>0 && zAff[0]==SQLITE_AFF_NONE ){ - n--; - base++; - zAff++; - } - while( n>1 && zAff[n-1]==SQLITE_AFF_NONE ){ - n--; - } - - /* Code the OP_Affinity opcode if there is anything left to do. */ - if( n>0 ){ - sqlite3VdbeAddOp2(v, OP_Affinity, base, n); - sqlite3VdbeChangeP4(v, -1, zAff, n); - sqlite3ExprCacheAffinityChange(pParse, base, n); - } -} - - -/* -** Generate code for a single equality term of the WHERE clause. An equality -** term can be either X=expr or X IN (...). pTerm is the term to be -** coded. -** -** The current value for the constraint is left in register iReg. -** -** For a constraint of the form X=expr, the expression is evaluated and its -** result is left on the stack. For constraints of the form X IN (...) -** this routine sets up a loop that will iterate over all values of X. 
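
codeApplyAffinity() above emits a single OP_Affinity only for the stretch of registers that actually needs coercion, trimming no-op affinities from both ends of zAff. The trimming step in isolation, with an invented AFF_NONE stand-in character and printf() in place of VDBE code generation:

    #include <stdio.h>

    #define AFF_NONE 'a'   /* stand-in; the real affinity codes differ */

    static void trimAffinity(const char *zAff, int n, int base){
      /* Skip no-op entries at the start, then at the end. */
      while( n>0 && zAff[0]==AFF_NONE ){ n--; base++; zAff++; }
      while( n>1 && zAff[n-1]==AFF_NONE ){ n--; }
      if( n>0 ){
        printf("OP_Affinity base=%d n=%d \"%.*s\"\n", base, n, n, zAff);
      }else{
        printf("no OP_Affinity needed\n");
      }
    }

    int main(void){
      trimAffinity("aadca", 5, 1);   /* -> base=3 n=2 "dc"   */
      trimAffinity("aaa", 3, 1);     /* -> no opcode emitted */
      return 0;
    }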
-*/ -static int codeEqualityTerm( - Parse *pParse, /* The parsing context */ - WhereTerm *pTerm, /* The term of the WHERE clause to be coded */ - WhereLevel *pLevel, /* When level of the FROM clause we are working on */ - int iTarget /* Attempt to leave results in this register */ -){ - Expr *pX = pTerm->pExpr; - Vdbe *v = pParse->pVdbe; - int iReg; /* Register holding results */ - - assert( iTarget>0 ); - if( pX->op==TK_EQ ){ - iReg = sqlite3ExprCodeTarget(pParse, pX->pRight, iTarget); - }else if( pX->op==TK_ISNULL ){ - iReg = iTarget; - sqlite3VdbeAddOp2(v, OP_Null, 0, iReg); -#ifndef SQLITE_OMIT_SUBQUERY - }else{ - int eType; - int iTab; - struct InLoop *pIn; - - assert( pX->op==TK_IN ); - iReg = iTarget; - eType = sqlite3FindInIndex(pParse, pX, 0); - iTab = pX->iTable; - sqlite3VdbeAddOp2(v, OP_Rewind, iTab, 0); - assert( pLevel->plan.wsFlags & WHERE_IN_ABLE ); - if( pLevel->u.in.nIn==0 ){ - pLevel->addrNxt = sqlite3VdbeMakeLabel(v); - } - pLevel->u.in.nIn++; - pLevel->u.in.aInLoop = - sqlite3DbReallocOrFree(pParse->db, pLevel->u.in.aInLoop, - sizeof(pLevel->u.in.aInLoop[0])*pLevel->u.in.nIn); - pIn = pLevel->u.in.aInLoop; - if( pIn ){ - pIn += pLevel->u.in.nIn - 1; - pIn->iCur = iTab; - if( eType==IN_INDEX_ROWID ){ - pIn->addrInTop = sqlite3VdbeAddOp2(v, OP_Rowid, iTab, iReg); - }else{ - pIn->addrInTop = sqlite3VdbeAddOp3(v, OP_Column, iTab, 0, iReg); - } - sqlite3VdbeAddOp1(v, OP_IsNull, iReg); - }else{ - pLevel->u.in.nIn = 0; - } -#endif - } - disableTerm(pLevel, pTerm); - return iReg; -} - -/* -** Generate code that will evaluate all == and IN constraints for an -** index. -** -** For example, consider table t1(a,b,c,d,e,f) with index i1(a,b,c). -** Suppose the WHERE clause is this: a==5 AND b IN (1,2,3) AND c>5 AND c<10 -** The index has as many as three equality constraints, but in this -** example, the third "c" value is an inequality. So only two -** constraints are coded. This routine will generate code to evaluate -** a==5 and b IN (1,2,3). The current values for a and b will be stored -** in consecutive registers and the index of the first register is returned. -** -** In the example above nEq==2. But this subroutine works for any value -** of nEq including 0. If nEq==0, this routine is nearly a no-op. -** The only thing it does is allocate the pLevel->iMem memory cell and -** compute the affinity string. -** -** This routine always allocates at least one memory cell and returns -** the index of that memory cell. The code that -** calls this routine will use that memory cell to store the termination -** key value of the loop. If one or more IN operators appear, then -** this routine allocates an additional nEq memory cells for internal -** use. -** -** Before returning, *pzAff is set to point to a buffer containing a -** copy of the column affinity string of the index allocated using -** sqlite3DbMalloc(). Except, entries in the copy of the string associated -** with equality constraints that use NONE affinity are set to -** SQLITE_AFF_NONE. This is to deal with SQL such as the following: -** -** CREATE TABLE t1(a TEXT PRIMARY KEY, b); -** SELECT ... FROM t1 AS t2, t1 WHERE t1.a = t2.b; -** -** In the example above, the index on t1(a) has TEXT affinity. But since -** the right hand side of the equality constraint (t2.b) has NONE affinity, -** no conversion should be attempted before using a t2.b value as part of -** a key to search the index. Hence the first byte in the returned affinity -** string in this example would be set to SQLITE_AFF_NONE. 
-*/ -static int codeAllEqualityTerms( - Parse *pParse, /* Parsing context */ - WhereLevel *pLevel, /* Which nested loop of the FROM we are coding */ - WhereClause *pWC, /* The WHERE clause */ - Bitmask notReady, /* Which parts of FROM have not yet been coded */ - int nExtraReg, /* Number of extra registers to allocate */ - char **pzAff /* OUT: Set to point to affinity string */ -){ - int nEq = pLevel->plan.nEq; /* The number of == or IN constraints to code */ - Vdbe *v = pParse->pVdbe; /* The vm under construction */ - Index *pIdx; /* The index being used for this loop */ - int iCur = pLevel->iTabCur; /* The cursor of the table */ - WhereTerm *pTerm; /* A single constraint term */ - int j; /* Loop counter */ - int regBase; /* Base register */ - int nReg; /* Number of registers to allocate */ - char *zAff; /* Affinity string to return */ - - /* This module is only called on query plans that use an index. */ - assert( pLevel->plan.wsFlags & WHERE_INDEXED ); - pIdx = pLevel->plan.u.pIdx; - - /* Figure out how many memory cells we will need then allocate them. - */ - regBase = pParse->nMem + 1; - nReg = pLevel->plan.nEq + nExtraReg; - pParse->nMem += nReg; - - zAff = sqlite3DbStrDup(pParse->db, sqlite3IndexAffinityStr(v, pIdx)); - if( !zAff ){ - pParse->db->mallocFailed = 1; - } - - /* Evaluate the equality constraints - */ - assert( pIdx->nColumn>=nEq ); - for(j=0; j<nEq; j++){ - int r1; - int k = pIdx->aiColumn[j]; - pTerm = findTerm(pWC, iCur, k, notReady, pLevel->plan.wsFlags, pIdx); - if( NEVER(pTerm==0) ) break; - assert( (pTerm->wtFlags & TERM_CODED)==0 ); - r1 = codeEqualityTerm(pParse, pTerm, pLevel, regBase+j); - if( r1!=regBase+j ){ - if( nReg==1 ){ - sqlite3ReleaseTempReg(pParse, regBase); - regBase = r1; - }else{ - sqlite3VdbeAddOp2(v, OP_SCopy, r1, regBase+j); - } - } - testcase( pTerm->eOperator & WO_ISNULL ); - testcase( pTerm->eOperator & WO_IN ); - if( (pTerm->eOperator & (WO_ISNULL|WO_IN))==0 ){ - Expr *pRight = pTerm->pExpr->pRight; - sqlite3ExprCodeIsNullJump(v, pRight, regBase+j, pLevel->addrBrk); - if( zAff ){ - if( sqlite3CompareAffinity(pRight, zAff[j])==SQLITE_AFF_NONE ){ - zAff[j] = SQLITE_AFF_NONE; - } - if( sqlite3ExprNeedsNoAffinityChange(pRight, zAff[j]) ){ - zAff[j] = SQLITE_AFF_NONE; - } - } - } - } - *pzAff = zAff; - return regBase; -} - -/* -** Generate code for the start of the iLevel-th loop in the WHERE clause -** implementation described by pWInfo. 
-*/ -static Bitmask codeOneLoopStart( - WhereInfo *pWInfo, /* Complete information about the WHERE clause */ - int iLevel, /* Which level of pWInfo->a[] should be coded */ - u16 wctrlFlags, /* One of the WHERE_* flags defined in sqliteInt.h */ - Bitmask notReady /* Which tables are currently available */ -){ - int j, k; /* Loop counters */ - int iCur; /* The VDBE cursor for the table */ - int addrNxt; /* Where to jump to continue with the next IN case */ - int omitTable; /* True if we use the index only */ - int bRev; /* True if we need to scan in reverse order */ - WhereLevel *pLevel; /* The where level to be coded */ - WhereClause *pWC; /* Decomposition of the entire WHERE clause */ - WhereTerm *pTerm; /* A WHERE clause term */ - Parse *pParse; /* Parsing context */ - Vdbe *v; /* The prepared stmt under constructions */ - struct SrcList_item *pTabItem; /* FROM clause term being coded */ - int addrBrk; /* Jump here to break out of the loop */ - int addrCont; /* Jump here to continue with next cycle */ - int iRowidReg = 0; /* Rowid is stored in this register, if not zero */ - int iReleaseReg = 0; /* Temp register to free before returning */ - - pParse = pWInfo->pParse; - v = pParse->pVdbe; - pWC = pWInfo->pWC; - pLevel = &pWInfo->a[iLevel]; - pTabItem = &pWInfo->pTabList->a[pLevel->iFrom]; - iCur = pTabItem->iCursor; - bRev = (pLevel->plan.wsFlags & WHERE_REVERSE)!=0; - omitTable = (pLevel->plan.wsFlags & WHERE_IDX_ONLY)!=0 - && (wctrlFlags & WHERE_FORCE_TABLE)==0; - - /* Create labels for the "break" and "continue" instructions - ** for the current loop. Jump to addrBrk to break out of a loop. - ** Jump to cont to go immediately to the next iteration of the - ** loop. - ** - ** When there is an IN operator, we also have a "addrNxt" label that - ** means to continue with the next IN value combination. When - ** there are no IN operators in the constraints, the "addrNxt" label - ** is the same as "addrBrk". - */ - addrBrk = pLevel->addrBrk = pLevel->addrNxt = sqlite3VdbeMakeLabel(v); - addrCont = pLevel->addrCont = sqlite3VdbeMakeLabel(v); - - /* If this is the right table of a LEFT OUTER JOIN, allocate and - ** initialize a memory cell that records if this table matches any - ** row of the left table of the join. - */ - if( pLevel->iFrom>0 && (pTabItem[0].jointype & JT_LEFT)!=0 ){ - pLevel->iLeftJoin = ++pParse->nMem; - sqlite3VdbeAddOp2(v, OP_Integer, 0, pLevel->iLeftJoin); - VdbeComment((v, "init LEFT JOIN no-match flag")); - } - -#ifndef SQLITE_OMIT_VIRTUALTABLE - if( (pLevel->plan.wsFlags & WHERE_VIRTUALTABLE)!=0 ){ - /* Case 0: The table is a virtual-table. Use the VFilter and VNext - ** to access the data. - */ - int iReg; /* P3 Value for OP_VFilter */ - sqlite3_index_info *pVtabIdx = pLevel->plan.u.pVtabIdx; - int nConstraint = pVtabIdx->nConstraint; - struct sqlite3_index_constraint_usage *aUsage = - pVtabIdx->aConstraintUsage; - const struct sqlite3_index_constraint *aConstraint = - pVtabIdx->aConstraint; - - sqlite3ExprCachePush(pParse); - iReg = sqlite3GetTempRange(pParse, nConstraint+2); - for(j=1; j<=nConstraint; j++){ - for(k=0; k<nConstraint; k++){ - if( aUsage[k].argvIndex==j ){ - int iTerm = aConstraint[k].iTermOffset; - sqlite3ExprCode(pParse, pWC->a[iTerm].pExpr->pRight, iReg+j+1); - break; - } - } - if( k==nConstraint ) break; - } - sqlite3VdbeAddOp2(v, OP_Integer, pVtabIdx->idxNum, iReg); - sqlite3VdbeAddOp2(v, OP_Integer, j-1, iReg+1); - sqlite3VdbeAddOp4(v, OP_VFilter, iCur, addrBrk, iReg, pVtabIdx->idxStr, - pVtabIdx->needToFreeIdxStr ? 
P4_MPRINTF : P4_STATIC); - pVtabIdx->needToFreeIdxStr = 0; - for(j=0; j<nConstraint; j++){ - if( aUsage[j].omit ){ - int iTerm = aConstraint[j].iTermOffset; - disableTerm(pLevel, &pWC->a[iTerm]); - } - } - pLevel->op = OP_VNext; - pLevel->p1 = iCur; - pLevel->p2 = sqlite3VdbeCurrentAddr(v); - sqlite3ReleaseTempRange(pParse, iReg, nConstraint+2); - sqlite3ExprCachePop(pParse, 1); - }else -#endif /* SQLITE_OMIT_VIRTUALTABLE */ - - if( pLevel->plan.wsFlags & WHERE_ROWID_EQ ){ - /* Case 1: We can directly reference a single row using an - ** equality comparison against the ROWID field. Or - ** we reference multiple rows using a "rowid IN (...)" - ** construct. - */ - iReleaseReg = sqlite3GetTempReg(pParse); - pTerm = findTerm(pWC, iCur, -1, notReady, WO_EQ|WO_IN, 0); - assert( pTerm!=0 ); - assert( pTerm->pExpr!=0 ); - assert( pTerm->leftCursor==iCur ); - assert( omitTable==0 ); - iRowidReg = codeEqualityTerm(pParse, pTerm, pLevel, iReleaseReg); - addrNxt = pLevel->addrNxt; - sqlite3VdbeAddOp2(v, OP_MustBeInt, iRowidReg, addrNxt); - sqlite3VdbeAddOp3(v, OP_NotExists, iCur, addrNxt, iRowidReg); - sqlite3ExprCacheStore(pParse, iCur, -1, iRowidReg); - VdbeComment((v, "pk")); - pLevel->op = OP_Noop; - }else if( pLevel->plan.wsFlags & WHERE_ROWID_RANGE ){ - /* Case 2: We have an inequality comparison against the ROWID field. - */ - int testOp = OP_Noop; - int start; - int memEndValue = 0; - WhereTerm *pStart, *pEnd; - - assert( omitTable==0 ); - pStart = findTerm(pWC, iCur, -1, notReady, WO_GT|WO_GE, 0); - pEnd = findTerm(pWC, iCur, -1, notReady, WO_LT|WO_LE, 0); - if( bRev ){ - pTerm = pStart; - pStart = pEnd; - pEnd = pTerm; - } - if( pStart ){ - Expr *pX; /* The expression that defines the start bound */ - int r1, rTemp; /* Registers for holding the start boundary */ - - /* The following constant maps TK_xx codes into corresponding - ** seek opcodes. It depends on a particular ordering of TK_xx - */ - const u8 aMoveOp[] = { - /* TK_GT */ OP_SeekGt, - /* TK_LE */ OP_SeekLe, - /* TK_LT */ OP_SeekLt, - /* TK_GE */ OP_SeekGe - }; - assert( TK_LE==TK_GT+1 ); /* Make sure the ordering.. */ - assert( TK_LT==TK_GT+2 ); /* ... of the TK_xx values... */ - assert( TK_GE==TK_GT+3 ); /* ... is correcct. */ - - pX = pStart->pExpr; - assert( pX!=0 ); - assert( pStart->leftCursor==iCur ); - r1 = sqlite3ExprCodeTemp(pParse, pX->pRight, &rTemp); - sqlite3VdbeAddOp3(v, aMoveOp[pX->op-TK_GT], iCur, addrBrk, r1); - VdbeComment((v, "pk")); - sqlite3ExprCacheAffinityChange(pParse, r1, 1); - sqlite3ReleaseTempReg(pParse, rTemp); - disableTerm(pLevel, pStart); - }else{ - sqlite3VdbeAddOp2(v, bRev ? OP_Last : OP_Rewind, iCur, addrBrk); - } - if( pEnd ){ - Expr *pX; - pX = pEnd->pExpr; - assert( pX!=0 ); - assert( pEnd->leftCursor==iCur ); - memEndValue = ++pParse->nMem; - sqlite3ExprCode(pParse, pX->pRight, memEndValue); - if( pX->op==TK_LT || pX->op==TK_GT ){ - testOp = bRev ? OP_Le : OP_Ge; - }else{ - testOp = bRev ? OP_Lt : OP_Gt; - } - disableTerm(pLevel, pEnd); - } - start = sqlite3VdbeCurrentAddr(v); - pLevel->op = bRev ? 
OP_Prev : OP_Next; - pLevel->p1 = iCur; - pLevel->p2 = start; - if( pStart==0 && pEnd==0 ){ - pLevel->p5 = SQLITE_STMTSTATUS_FULLSCAN_STEP; - }else{ - assert( pLevel->p5==0 ); - } - if( testOp!=OP_Noop ){ - iRowidReg = iReleaseReg = sqlite3GetTempReg(pParse); - sqlite3VdbeAddOp2(v, OP_Rowid, iCur, iRowidReg); - sqlite3ExprCacheStore(pParse, iCur, -1, iRowidReg); - sqlite3VdbeAddOp3(v, testOp, memEndValue, addrBrk, iRowidReg); - sqlite3VdbeChangeP5(v, SQLITE_AFF_NUMERIC | SQLITE_JUMPIFNULL); - } - }else if( pLevel->plan.wsFlags & (WHERE_COLUMN_RANGE|WHERE_COLUMN_EQ) ){ - /* Case 3: A scan using an index. - ** - ** The WHERE clause may contain zero or more equality - ** terms ("==" or "IN" operators) that refer to the N - ** left-most columns of the index. It may also contain - ** inequality constraints (>, <, >= or <=) on the indexed - ** column that immediately follows the N equalities. Only - ** the right-most column can be an inequality - the rest must - ** use the "==" and "IN" operators. For example, if the - ** index is on (x,y,z), then the following clauses are all - ** optimized: - ** - ** x=5 - ** x=5 AND y=10 - ** x=5 AND y<10 - ** x=5 AND y>5 AND y<10 - ** x=5 AND y=5 AND z<=10 - ** - ** The z<10 term of the following cannot be used, only - ** the x=5 term: - ** - ** x=5 AND z<10 - ** - ** N may be zero if there are inequality constraints. - ** If there are no inequality constraints, then N is at - ** least one. - ** - ** This case is also used when there are no WHERE clause - ** constraints but an index is selected anyway, in order - ** to force the output order to conform to an ORDER BY. - */ - int aStartOp[] = { - 0, - 0, - OP_Rewind, /* 2: (!start_constraints && startEq && !bRev) */ - OP_Last, /* 3: (!start_constraints && startEq && bRev) */ - OP_SeekGt, /* 4: (start_constraints && !startEq && !bRev) */ - OP_SeekLt, /* 5: (start_constraints && !startEq && bRev) */ - OP_SeekGe, /* 6: (start_constraints && startEq && !bRev) */ - OP_SeekLe /* 7: (start_constraints && startEq && bRev) */ - }; - int aEndOp[] = { - OP_Noop, /* 0: (!end_constraints) */ - OP_IdxGE, /* 1: (end_constraints && !bRev) */ - OP_IdxLT /* 2: (end_constraints && bRev) */ - }; - int nEq = pLevel->plan.nEq; - int isMinQuery = 0; /* If this is an optimized SELECT min(x).. */ - int regBase; /* Base register holding constraint values */ - int r1; /* Temp register */ - WhereTerm *pRangeStart = 0; /* Inequality constraint at range start */ - WhereTerm *pRangeEnd = 0; /* Inequality constraint at range end */ - int startEq; /* True if range start uses ==, >= or <= */ - int endEq; /* True if range end uses ==, >= or <= */ - int start_constraints; /* Start of range is constrained */ - int nConstraint; /* Number of constraint terms */ - Index *pIdx; /* The index we will be using */ - int iIdxCur; /* The VDBE cursor for the index */ - int nExtraReg = 0; /* Number of extra registers needed */ - int op; /* Instruction opcode */ - char *zAff; - - pIdx = pLevel->plan.u.pIdx; - iIdxCur = pLevel->iIdxCur; - k = pIdx->aiColumn[nEq]; /* Column for inequality constraints */ - - /* If this loop satisfies a sort order (pOrderBy) request that - ** was passed to this function to implement a "SELECT min(x) ..." - ** query, then the caller will only allow the loop to run for - ** a single iteration. This means that the first row returned - ** should not have a NULL value stored in 'x'. If column 'x' is - ** the first one after the nEq equality constraints in the index, - ** this requires some special handling. 
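
The aStartOp[] table above is indexed by packing three booleans into three bits, (start_constraints<<2) + (startEq<<1) + bRev, which is how the code that follows picks the initial seek opcode. A small sketch that evaluates a few combinations (the scenario comments are invented; opcode names are printed rather than emitted):

    #include <stdio.h>

    static const char *azStartOp[] = {
      "(unused)", "(unused)",
      "OP_Rewind",  /* 2: (!start_constraints && startEq && !bRev) */
      "OP_Last",    /* 3: (!start_constraints && startEq && bRev)  */
      "OP_SeekGt",  /* 4: (start_constraints && !startEq && !bRev) */
      "OP_SeekLt",  /* 5: (start_constraints && !startEq && bRev)  */
      "OP_SeekGe",  /* 6: (start_constraints && startEq && !bRev)  */
      "OP_SeekLe"   /* 7: (start_constraints && startEq && bRev)   */
    };

    static void pick(int start_constraints, int startEq, int bRev){
      int op = (start_constraints<<2) + (startEq<<1) + bRev;
      printf("constraints=%d eq=%d rev=%d -> %s\n",
             start_constraints, startEq, bRev, azStartOp[op]);
    }

    int main(void){
      pick(1, 0, 0);   /* "x>5"  scanned forward  -> OP_SeekGt */
      pick(1, 1, 1);   /* "x<=5" scanned backward -> OP_SeekLe */
      pick(0, 1, 0);   /* no lower bound, forward -> OP_Rewind */
      return 0;
    }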
- */ - if( (wctrlFlags&WHERE_ORDERBY_MIN)!=0 - && (pLevel->plan.wsFlags&WHERE_ORDERBY) - && (pIdx->nColumn>nEq) - ){ - /* assert( pOrderBy->nExpr==1 ); */ - /* assert( pOrderBy->a[0].pExpr->iColumn==pIdx->aiColumn[nEq] ); */ - isMinQuery = 1; - nExtraReg = 1; - } - - /* Find any inequality constraint terms for the start and end - ** of the range. - */ - if( pLevel->plan.wsFlags & WHERE_TOP_LIMIT ){ - pRangeEnd = findTerm(pWC, iCur, k, notReady, (WO_LT|WO_LE), pIdx); - nExtraReg = 1; - } - if( pLevel->plan.wsFlags & WHERE_BTM_LIMIT ){ - pRangeStart = findTerm(pWC, iCur, k, notReady, (WO_GT|WO_GE), pIdx); - nExtraReg = 1; - } - - /* Generate code to evaluate all constraint terms using == or IN - ** and store the values of those terms in an array of registers - ** starting at regBase. - */ - regBase = codeAllEqualityTerms( - pParse, pLevel, pWC, notReady, nExtraReg, &zAff - ); - addrNxt = pLevel->addrNxt; - - /* If we are doing a reverse order scan on an ascending index, or - ** a forward order scan on a descending index, interchange the - ** start and end terms (pRangeStart and pRangeEnd). - */ - if( bRev==(pIdx->aSortOrder[nEq]==SQLITE_SO_ASC) ){ - SWAP(WhereTerm *, pRangeEnd, pRangeStart); - } - - testcase( pRangeStart && pRangeStart->eOperator & WO_LE ); - testcase( pRangeStart && pRangeStart->eOperator & WO_GE ); - testcase( pRangeEnd && pRangeEnd->eOperator & WO_LE ); - testcase( pRangeEnd && pRangeEnd->eOperator & WO_GE ); - startEq = !pRangeStart || pRangeStart->eOperator & (WO_LE|WO_GE); - endEq = !pRangeEnd || pRangeEnd->eOperator & (WO_LE|WO_GE); - start_constraints = pRangeStart || nEq>0; - - /* Seek the index cursor to the start of the range. */ - nConstraint = nEq; - if( pRangeStart ){ - Expr *pRight = pRangeStart->pExpr->pRight; - sqlite3ExprCode(pParse, pRight, regBase+nEq); - sqlite3ExprCodeIsNullJump(v, pRight, regBase+nEq, addrNxt); - if( zAff ){ - if( sqlite3CompareAffinity(pRight, zAff[nConstraint])==SQLITE_AFF_NONE){ - /* Since the comparison is to be performed with no conversions - ** applied to the operands, set the affinity to apply to pRight to - ** SQLITE_AFF_NONE. */ - zAff[nConstraint] = SQLITE_AFF_NONE; - } - if( sqlite3ExprNeedsNoAffinityChange(pRight, zAff[nConstraint]) ){ - zAff[nConstraint] = SQLITE_AFF_NONE; - } - } - nConstraint++; - }else if( isMinQuery ){ - sqlite3VdbeAddOp2(v, OP_Null, 0, regBase+nEq); - nConstraint++; - startEq = 0; - start_constraints = 1; - } - codeApplyAffinity(pParse, regBase, nConstraint, zAff); - op = aStartOp[(start_constraints<<2) + (startEq<<1) + bRev]; - assert( op!=0 ); - testcase( op==OP_Rewind ); - testcase( op==OP_Last ); - testcase( op==OP_SeekGt ); - testcase( op==OP_SeekGe ); - testcase( op==OP_SeekLe ); - testcase( op==OP_SeekLt ); - sqlite3VdbeAddOp4Int(v, op, iIdxCur, addrNxt, regBase, nConstraint); - - /* Load the value for the inequality constraint at the end of the - ** range (if any). - */ - nConstraint = nEq; - if( pRangeEnd ){ - Expr *pRight = pRangeEnd->pExpr->pRight; - sqlite3ExprCacheRemove(pParse, regBase+nEq, 1); - sqlite3ExprCode(pParse, pRight, regBase+nEq); - sqlite3ExprCodeIsNullJump(v, pRight, regBase+nEq, addrNxt); - if( zAff ){ - if( sqlite3CompareAffinity(pRight, zAff[nConstraint])==SQLITE_AFF_NONE){ - /* Since the comparison is to be performed with no conversions - ** applied to the operands, set the affinity to apply to pRight to - ** SQLITE_AFF_NONE. 
*/ - zAff[nConstraint] = SQLITE_AFF_NONE; - } - if( sqlite3ExprNeedsNoAffinityChange(pRight, zAff[nConstraint]) ){ - zAff[nConstraint] = SQLITE_AFF_NONE; - } - } - codeApplyAffinity(pParse, regBase, nEq+1, zAff); - nConstraint++; - } - sqlite3DbFree(pParse->db, zAff); - - /* Top of the loop body */ - pLevel->p2 = sqlite3VdbeCurrentAddr(v); - - /* Check if the index cursor is past the end of the range. */ - op = aEndOp[(pRangeEnd || nEq) * (1 + bRev)]; - testcase( op==OP_Noop ); - testcase( op==OP_IdxGE ); - testcase( op==OP_IdxLT ); - if( op!=OP_Noop ){ - sqlite3VdbeAddOp4Int(v, op, iIdxCur, addrNxt, regBase, nConstraint); - sqlite3VdbeChangeP5(v, endEq!=bRev ?1:0); - } - - /* If there are inequality constraints, check that the value - ** of the table column that the inequality contrains is not NULL. - ** If it is, jump to the next iteration of the loop. - */ - r1 = sqlite3GetTempReg(pParse); - testcase( pLevel->plan.wsFlags & WHERE_BTM_LIMIT ); - testcase( pLevel->plan.wsFlags & WHERE_TOP_LIMIT ); - if( pLevel->plan.wsFlags & (WHERE_BTM_LIMIT|WHERE_TOP_LIMIT) ){ - sqlite3VdbeAddOp3(v, OP_Column, iIdxCur, nEq, r1); - sqlite3VdbeAddOp2(v, OP_IsNull, r1, addrCont); - } - sqlite3ReleaseTempReg(pParse, r1); - - /* Seek the table cursor, if required */ - disableTerm(pLevel, pRangeStart); - disableTerm(pLevel, pRangeEnd); - if( !omitTable ){ - iRowidReg = iReleaseReg = sqlite3GetTempReg(pParse); - sqlite3VdbeAddOp2(v, OP_IdxRowid, iIdxCur, iRowidReg); - sqlite3ExprCacheStore(pParse, iCur, -1, iRowidReg); - sqlite3VdbeAddOp2(v, OP_Seek, iCur, iRowidReg); /* Deferred seek */ - } - - /* Record the instruction used to terminate the loop. Disable - ** WHERE clause terms made redundant by the index range scan. - */ - pLevel->op = bRev ? OP_Prev : OP_Next; - pLevel->p1 = iIdxCur; - }else - -#ifndef SQLITE_OMIT_OR_OPTIMIZATION - if( pLevel->plan.wsFlags & WHERE_MULTI_OR ){ - /* Case 4: Two or more separately indexed terms connected by OR - ** - ** Example: - ** - ** CREATE TABLE t1(a,b,c,d); - ** CREATE INDEX i1 ON t1(a); - ** CREATE INDEX i2 ON t1(b); - ** CREATE INDEX i3 ON t1(c); - ** - ** SELECT * FROM t1 WHERE a=5 OR b=7 OR (c=11 AND d=13) - ** - ** In the example, there are three indexed terms connected by OR. - ** The top of the loop looks like this: - ** - ** Null 1 # Zero the rowset in reg 1 - ** - ** Then, for each indexed term, the following. The arguments to - ** RowSetTest are such that the rowid of the current row is inserted - ** into the RowSet. If it is already present, control skips the - ** Gosub opcode and jumps straight to the code generated by WhereEnd(). - ** - ** sqlite3WhereBegin(<term>) - ** RowSetTest # Insert rowid into rowset - ** Gosub 2 A - ** sqlite3WhereEnd() - ** - ** Following the above, code to terminate the loop. Label A, the target - ** of the Gosub above, jumps to the instruction right after the Goto. - ** - ** Null 1 # Zero the rowset in reg 1 - ** Goto B # The loop is finished. - ** - ** A: <loop body> # Return data, whatever. - ** - ** Return 2 # Jump back to the Gosub - ** - ** B: <after the loop> - ** - */ - WhereClause *pOrWc; /* The OR-clause broken out into subterms */ - WhereTerm *pFinal; /* Final subterm within the OR-clause. 
*/ - SrcList *pOrTab; /* Shortened table list or OR-clause generation */ - - int regReturn = ++pParse->nMem; /* Register used with OP_Gosub */ - int regRowset = 0; /* Register for RowSet object */ - int regRowid = 0; /* Register holding rowid */ - int iLoopBody = sqlite3VdbeMakeLabel(v); /* Start of loop body */ - int iRetInit; /* Address of regReturn init */ - int untestedTerms = 0; /* Some terms not completely tested */ - int ii; - - pTerm = pLevel->plan.u.pTerm; - assert( pTerm!=0 ); - assert( pTerm->eOperator==WO_OR ); - assert( (pTerm->wtFlags & TERM_ORINFO)!=0 ); - pOrWc = &pTerm->u.pOrInfo->wc; - pFinal = &pOrWc->a[pOrWc->nTerm-1]; - pLevel->op = OP_Return; - pLevel->p1 = regReturn; - - /* Set up a new SrcList ni pOrTab containing the table being scanned - ** by this loop in the a[0] slot and all notReady tables in a[1..] slots. - ** This becomes the SrcList in the recursive call to sqlite3WhereBegin(). - */ - if( pWInfo->nLevel>1 ){ - int nNotReady; /* The number of notReady tables */ - struct SrcList_item *origSrc; /* Original list of tables */ - nNotReady = pWInfo->nLevel - iLevel - 1; - pOrTab = sqlite3StackAllocRaw(pParse->db, - sizeof(*pOrTab)+ nNotReady*sizeof(pOrTab->a[0])); - if( pOrTab==0 ) return notReady; - pOrTab->nAlloc = (i16)(nNotReady + 1); - pOrTab->nSrc = pOrTab->nAlloc; - memcpy(pOrTab->a, pTabItem, sizeof(*pTabItem)); - origSrc = pWInfo->pTabList->a; - for(k=1; k<=nNotReady; k++){ - memcpy(&pOrTab->a[k], &origSrc[pLevel[k].iFrom], sizeof(pOrTab->a[k])); - } - }else{ - pOrTab = pWInfo->pTabList; - } - - /* Initialize the rowset register to contain NULL. An SQL NULL is - ** equivalent to an empty rowset. - ** - ** Also initialize regReturn to contain the address of the instruction - ** immediately following the OP_Return at the bottom of the loop. This - ** is required in a few obscure LEFT JOIN cases where control jumps - ** over the top of the loop into the body of it. In this case the - ** correct response for the end-of-loop code (the OP_Return) is to - ** fall through to the next instruction, just as an OP_Next does if - ** called on an uninitialized cursor. - */ - if( (wctrlFlags & WHERE_DUPLICATES_OK)==0 ){ - regRowset = ++pParse->nMem; - regRowid = ++pParse->nMem; - sqlite3VdbeAddOp2(v, OP_Null, 0, regRowset); - } - iRetInit = sqlite3VdbeAddOp2(v, OP_Integer, 0, regReturn); - - for(ii=0; ii<pOrWc->nTerm; ii++){ - WhereTerm *pOrTerm = &pOrWc->a[ii]; - if( pOrTerm->leftCursor==iCur || pOrTerm->eOperator==WO_AND ){ - WhereInfo *pSubWInfo; /* Info for single OR-term scan */ - /* Loop through table entries that match term pOrTerm. */ - pSubWInfo = sqlite3WhereBegin(pParse, pOrTab, pOrTerm->pExpr, 0, - WHERE_OMIT_OPEN | WHERE_OMIT_CLOSE | - WHERE_FORCE_TABLE | WHERE_ONETABLE_ONLY); - if( pSubWInfo ){ - if( (wctrlFlags & WHERE_DUPLICATES_OK)==0 ){ - int iSet = ((ii==pOrWc->nTerm-1)?-1:ii); - int r; - r = sqlite3ExprCodeGetColumn(pParse, pTabItem->pTab, -1, iCur, - regRowid); - sqlite3VdbeAddOp4Int(v, OP_RowSetTest, regRowset, - sqlite3VdbeCurrentAddr(v)+2, r, iSet); - } - sqlite3VdbeAddOp2(v, OP_Gosub, regReturn, iLoopBody); - - /* The pSubWInfo->untestedTerms flag means that this OR term - ** contained one or more AND term from a notReady table. The - ** terms from the notReady table could not be tested and will - ** need to be tested later. - */ - if( pSubWInfo->untestedTerms ) untestedTerms = 1; - - /* Finish the loop through table entries that match term pOrTerm. 
*/ - sqlite3WhereEnd(pSubWInfo); - } - } - } - sqlite3VdbeChangeP1(v, iRetInit, sqlite3VdbeCurrentAddr(v)); - sqlite3VdbeAddOp2(v, OP_Goto, 0, pLevel->addrBrk); - sqlite3VdbeResolveLabel(v, iLoopBody); - - if( pWInfo->nLevel>1 ) sqlite3StackFree(pParse->db, pOrTab); - if( !untestedTerms ) disableTerm(pLevel, pTerm); - }else -#endif /* SQLITE_OMIT_OR_OPTIMIZATION */ - - { - /* Case 5: There is no usable index. We must do a complete - ** scan of the entire table. - */ - static const u8 aStep[] = { OP_Next, OP_Prev }; - static const u8 aStart[] = { OP_Rewind, OP_Last }; - assert( bRev==0 || bRev==1 ); - assert( omitTable==0 ); - pLevel->op = aStep[bRev]; - pLevel->p1 = iCur; - pLevel->p2 = 1 + sqlite3VdbeAddOp2(v, aStart[bRev], iCur, addrBrk); - pLevel->p5 = SQLITE_STMTSTATUS_FULLSCAN_STEP; - } - notReady &= ~getMask(pWC->pMaskSet, iCur); - - /* Insert code to test every subexpression that can be completely - ** computed using the current set of tables. - */ - k = 0; - for(pTerm=pWC->a, j=pWC->nTerm; j>0; j--, pTerm++){ - Expr *pE; - testcase( pTerm->wtFlags & TERM_VIRTUAL ); - testcase( pTerm->wtFlags & TERM_CODED ); - if( pTerm->wtFlags & (TERM_VIRTUAL|TERM_CODED) ) continue; - if( (pTerm->prereqAll & notReady)!=0 ){ - testcase( pWInfo->untestedTerms==0 - && (pWInfo->wctrlFlags & WHERE_ONETABLE_ONLY)!=0 ); - pWInfo->untestedTerms = 1; - continue; - } - pE = pTerm->pExpr; - assert( pE!=0 ); - if( pLevel->iLeftJoin && !ExprHasProperty(pE, EP_FromJoin) ){ - continue; - } - sqlite3ExprIfFalse(pParse, pE, addrCont, SQLITE_JUMPIFNULL); - k = 1; - pTerm->wtFlags |= TERM_CODED; - } - - /* For a LEFT OUTER JOIN, generate code that will record the fact that - ** at least one row of the right table has matched the left table. - */ - if( pLevel->iLeftJoin ){ - pLevel->addrFirst = sqlite3VdbeCurrentAddr(v); - sqlite3VdbeAddOp2(v, OP_Integer, 1, pLevel->iLeftJoin); - VdbeComment((v, "record LEFT JOIN hit")); - sqlite3ExprCacheClear(pParse); - for(pTerm=pWC->a, j=0; j<pWC->nTerm; j++, pTerm++){ - testcase( pTerm->wtFlags & TERM_VIRTUAL ); - testcase( pTerm->wtFlags & TERM_CODED ); - if( pTerm->wtFlags & (TERM_VIRTUAL|TERM_CODED) ) continue; - if( (pTerm->prereqAll & notReady)!=0 ){ - assert( pWInfo->untestedTerms ); - continue; - } - assert( pTerm->pExpr ); - sqlite3ExprIfFalse(pParse, pTerm->pExpr, addrCont, SQLITE_JUMPIFNULL); - pTerm->wtFlags |= TERM_CODED; - } - } - sqlite3ReleaseTempReg(pParse, iReleaseReg); - - return notReady; -} - -#if defined(SQLITE_TEST) -/* -** The following variable holds a text description of query plan generated -** by the most recent call to sqlite3WhereBegin(). Each call to WhereBegin -** overwrites the previous. This information is used for testing and -** analysis only. -*/ -SQLITE_API char sqlite3_query_plan[BMS*2*40]; /* Text of the join */ -static int nQPlan = 0; /* Next free slow in _query_plan[] */ - -#endif /* SQLITE_TEST */ - +#endif + assert( pUpper==0 || (pUpper->wtFlags & TERM_VNULL)==0 ); + nNew = whereRangeAdjust(pLower, nOut); + nNew = whereRangeAdjust(pUpper, nNew); + + /* TUNING: If there is both an upper and lower limit and neither limit + ** has an application-defined likelihood(), assume the range is + ** reduced by an additional 75%. This means that, by default, an open-ended + ** range query (e.g. col > ?) is assumed to match 1/4 of the rows in the + ** index. While a closed range (e.g. col BETWEEN ? AND ?) is estimated to + ** match 1/64 of the index. 
*/ + if( pLower && pLower->truthProb>0 && pUpper && pUpper->truthProb>0 ){ + nNew -= 20; + } + + nOut -= (pLower!=0) + (pUpper!=0); + if( nNew<10 ) nNew = 10; + if( nNew<nOut ) nOut = nNew; +#if defined(WHERETRACE_ENABLED) + if( pLoop->nOut>nOut ){ + WHERETRACE(0x10,("Range scan lowers nOut from %d to %d\n", + pLoop->nOut, nOut)); + } +#endif + pLoop->nOut = (LogEst)nOut; + return rc; +} + +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 +/* +** Estimate the number of rows that will be returned based on +** an equality constraint x=VALUE and where that VALUE occurs in +** the histogram data. This only works when x is the left-most +** column of an index and sqlite_stat3 histogram data is available +** for that index. When pExpr==NULL that means the constraint is +** "x IS NULL" instead of "x=VALUE". +** +** Write the estimated row count into *pnRow and return SQLITE_OK. +** If unable to make an estimate, leave *pnRow unchanged and return +** non-zero. +** +** This routine can fail if it is unable to load a collating sequence +** required for string comparison, or if unable to allocate memory +** for a UTF conversion required for comparison. The error is stored +** in the pParse structure. +*/ +static int whereEqualScanEst( + Parse *pParse, /* Parsing & code generating context */ + WhereLoopBuilder *pBuilder, + Expr *pExpr, /* Expression for VALUE in the x=VALUE constraint */ + tRowcnt *pnRow /* Write the revised row estimate here */ +){ + Index *p = pBuilder->pNew->u.btree.pIndex; + int nEq = pBuilder->pNew->u.btree.nEq; + UnpackedRecord *pRec = pBuilder->pRec; + u8 aff; /* Column affinity */ + int rc; /* Subfunction return code */ + tRowcnt a[2]; /* Statistics */ + int bOk; + + assert( nEq>=1 ); + assert( nEq<=p->nColumn ); + assert( p->aSample!=0 ); + assert( p->nSample>0 ); + assert( pBuilder->nRecValid<nEq ); + + /* If values are not available for all fields of the index to the left + ** of this one, no estimate can be made. Return SQLITE_NOTFOUND. */ + if( pBuilder->nRecValid<(nEq-1) ){ + return SQLITE_NOTFOUND; + } + + /* This is an optimization only. The call to sqlite3Stat4ProbeSetValue() + ** below would return the same value. */ + if( nEq>=p->nColumn ){ + *pnRow = 1; + return SQLITE_OK; + } + + aff = sqlite3IndexColumnAffinity(pParse->db, p, nEq-1); + rc = sqlite3Stat4ProbeSetValue(pParse, p, &pRec, pExpr, aff, nEq-1, &bOk); + pBuilder->pRec = pRec; + if( rc!=SQLITE_OK ) return rc; + if( bOk==0 ) return SQLITE_NOTFOUND; + pBuilder->nRecValid = nEq; + + whereKeyStats(pParse, p, pRec, 0, a); + WHERETRACE(0x10,("equality scan regions: %d\n", (int)a[1])); + *pnRow = a[1]; + + return rc; +} +#endif /* SQLITE_ENABLE_STAT3_OR_STAT4 */ + +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 +/* +** Estimate the number of rows that will be returned based on +** an IN constraint where the right-hand side of the IN operator +** is a list of values. Example: +** +** WHERE x IN (1,2,3,4) +** +** Write the estimated row count into *pnRow and return SQLITE_OK. +** If unable to make an estimate, leave *pnRow unchanged and return +** non-zero. +** +** This routine can fail if it is unable to load a collating sequence +** required for string comparison, or if unable to allocate memory +** for a UTF conversion required for comparison. The error is stored +** in the pParse structure. 
+*/ +static int whereInScanEst( + Parse *pParse, /* Parsing & code generating context */ + WhereLoopBuilder *pBuilder, + ExprList *pList, /* The value list on the RHS of "x IN (v1,v2,v3,...)" */ + tRowcnt *pnRow /* Write the revised row estimate here */ +){ + Index *p = pBuilder->pNew->u.btree.pIndex; + i64 nRow0 = sqlite3LogEstToInt(p->aiRowLogEst[0]); + int nRecValid = pBuilder->nRecValid; + int rc = SQLITE_OK; /* Subfunction return code */ + tRowcnt nEst; /* Number of rows for a single term */ + tRowcnt nRowEst = 0; /* New estimate of the number of rows */ + int i; /* Loop counter */ + + assert( p->aSample!=0 ); + for(i=0; rc==SQLITE_OK && i<pList->nExpr; i++){ + nEst = nRow0; + rc = whereEqualScanEst(pParse, pBuilder, pList->a[i].pExpr, &nEst); + nRowEst += nEst; + pBuilder->nRecValid = nRecValid; + } + + if( rc==SQLITE_OK ){ + if( nRowEst > nRow0 ) nRowEst = nRow0; + *pnRow = nRowEst; + WHERETRACE(0x10,("IN row estimate: est=%d\n", nRowEst)); + } + assert( pBuilder->nRecValid==nRecValid ); + return rc; +} +#endif /* SQLITE_ENABLE_STAT3_OR_STAT4 */ + + +#ifdef WHERETRACE_ENABLED +/* +** Print the content of a WhereTerm object +*/ +static void whereTermPrint(WhereTerm *pTerm, int iTerm){ + if( pTerm==0 ){ + sqlite3DebugPrintf("TERM-%-3d NULL\n", iTerm); + }else{ + char zType[4]; + memcpy(zType, "...", 4); + if( pTerm->wtFlags & TERM_VIRTUAL ) zType[0] = 'V'; + if( pTerm->eOperator & WO_EQUIV ) zType[1] = 'E'; + if( ExprHasProperty(pTerm->pExpr, EP_FromJoin) ) zType[2] = 'L'; + sqlite3DebugPrintf( + "TERM-%-3d %p %s cursor=%-3d prob=%-3d op=0x%03x wtFlags=0x%04x\n", + iTerm, pTerm, zType, pTerm->leftCursor, pTerm->truthProb, + pTerm->eOperator, pTerm->wtFlags); + sqlite3TreeViewExpr(0, pTerm->pExpr, 0); + } +} +#endif + +#ifdef WHERETRACE_ENABLED +/* +** Print a WhereLoop object for debugging purposes +*/ +static void whereLoopPrint(WhereLoop *p, WhereClause *pWC){ + WhereInfo *pWInfo = pWC->pWInfo; + int nb = 1+(pWInfo->pTabList->nSrc+7)/8; + struct SrcList_item *pItem = pWInfo->pTabList->a + p->iTab; + Table *pTab = pItem->pTab; + sqlite3DebugPrintf("%c%2d.%0*llx.%0*llx", p->cId, + p->iTab, nb, p->maskSelf, nb, p->prereq); + sqlite3DebugPrintf(" %12s", + pItem->zAlias ? pItem->zAlias : pTab->zName); + if( (p->wsFlags & WHERE_VIRTUALTABLE)==0 ){ + const char *zName; + if( p->u.btree.pIndex && (zName = p->u.btree.pIndex->zName)!=0 ){ + if( strncmp(zName, "sqlite_autoindex_", 17)==0 ){ + int i = sqlite3Strlen30(zName) - 1; + while( zName[i]!='_' ) i--; + zName += i; + } + sqlite3DebugPrintf(".%-16s %2d", zName, p->u.btree.nEq); + }else{ + sqlite3DebugPrintf("%20s",""); + } + }else{ + char *z; + if( p->u.vtab.idxStr ){ + z = sqlite3_mprintf("(%d,\"%s\",%x)", + p->u.vtab.idxNum, p->u.vtab.idxStr, p->u.vtab.omitMask); + }else{ + z = sqlite3_mprintf("(%d,%x)", p->u.vtab.idxNum, p->u.vtab.omitMask); + } + sqlite3DebugPrintf(" %-19s", z); + sqlite3_free(z); + } + if( p->wsFlags & WHERE_SKIPSCAN ){ + sqlite3DebugPrintf(" f %05x %d-%d", p->wsFlags, p->nLTerm,p->nSkip); + }else{ + sqlite3DebugPrintf(" f %05x N %d", p->wsFlags, p->nLTerm); + } + sqlite3DebugPrintf(" cost %d,%d,%d\n", p->rSetup, p->rRun, p->nOut); + if( p->nLTerm && (sqlite3WhereTrace & 0x100)!=0 ){ + int i; + for(i=0; i<p->nLTerm; i++){ + whereTermPrint(p->aLTerm[i], i); + } + } +} +#endif + +/* +** Convert bulk memory into a valid WhereLoop that can be passed +** to whereLoopClear harmlessly. 
+*/ +static void whereLoopInit(WhereLoop *p){ + p->aLTerm = p->aLTermSpace; + p->nLTerm = 0; + p->nLSlot = ArraySize(p->aLTermSpace); + p->wsFlags = 0; +} + +/* +** Clear the WhereLoop.u union. Leave WhereLoop.pLTerm intact. +*/ +static void whereLoopClearUnion(sqlite3 *db, WhereLoop *p){ + if( p->wsFlags & (WHERE_VIRTUALTABLE|WHERE_AUTO_INDEX) ){ + if( (p->wsFlags & WHERE_VIRTUALTABLE)!=0 && p->u.vtab.needFree ){ + sqlite3_free(p->u.vtab.idxStr); + p->u.vtab.needFree = 0; + p->u.vtab.idxStr = 0; + }else if( (p->wsFlags & WHERE_AUTO_INDEX)!=0 && p->u.btree.pIndex!=0 ){ + sqlite3DbFree(db, p->u.btree.pIndex->zColAff); + sqlite3DbFree(db, p->u.btree.pIndex); + p->u.btree.pIndex = 0; + } + } +} + +/* +** Deallocate internal memory used by a WhereLoop object +*/ +static void whereLoopClear(sqlite3 *db, WhereLoop *p){ + if( p->aLTerm!=p->aLTermSpace ) sqlite3DbFree(db, p->aLTerm); + whereLoopClearUnion(db, p); + whereLoopInit(p); +} + +/* +** Increase the memory allocation for pLoop->aLTerm[] to be at least n. +*/ +static int whereLoopResize(sqlite3 *db, WhereLoop *p, int n){ + WhereTerm **paNew; + if( p->nLSlot>=n ) return SQLITE_OK; + n = (n+7)&~7; + paNew = sqlite3DbMallocRawNN(db, sizeof(p->aLTerm[0])*n); + if( paNew==0 ) return SQLITE_NOMEM; + memcpy(paNew, p->aLTerm, sizeof(p->aLTerm[0])*p->nLSlot); + if( p->aLTerm!=p->aLTermSpace ) sqlite3DbFree(db, p->aLTerm); + p->aLTerm = paNew; + p->nLSlot = n; + return SQLITE_OK; +} + +/* +** Transfer content from the second pLoop into the first. +*/ +static int whereLoopXfer(sqlite3 *db, WhereLoop *pTo, WhereLoop *pFrom){ + whereLoopClearUnion(db, pTo); + if( whereLoopResize(db, pTo, pFrom->nLTerm) ){ + memset(&pTo->u, 0, sizeof(pTo->u)); + return SQLITE_NOMEM; + } + memcpy(pTo, pFrom, WHERE_LOOP_XFER_SZ); + memcpy(pTo->aLTerm, pFrom->aLTerm, pTo->nLTerm*sizeof(pTo->aLTerm[0])); + if( pFrom->wsFlags & WHERE_VIRTUALTABLE ){ + pFrom->u.vtab.needFree = 0; + }else if( (pFrom->wsFlags & WHERE_AUTO_INDEX)!=0 ){ + pFrom->u.btree.pIndex = 0; + } + return SQLITE_OK; +} + +/* +** Delete a WhereLoop object +*/ +static void whereLoopDelete(sqlite3 *db, WhereLoop *p){ + whereLoopClear(db, p); + sqlite3DbFree(db, p); +} /* ** Free a WhereInfo structure */ static void whereInfoFree(sqlite3 *db, WhereInfo *pWInfo){ if( ALWAYS(pWInfo) ){ int i; for(i=0; i<pWInfo->nLevel; i++){ - sqlite3_index_info *pInfo = pWInfo->a[i].pIdxInfo; - if( pInfo ){ - /* assert( pInfo->needToFreeIdxStr==0 || db->mallocFailed ); */ - if( pInfo->needToFreeIdxStr ){ - sqlite3_free(pInfo->idxStr); - } - sqlite3DbFree(db, pInfo); - } - if( pWInfo->a[i].plan.wsFlags & WHERE_TEMP_INDEX ){ - Index *pIdx = pWInfo->a[i].plan.u.pIdx; - if( pIdx ){ - sqlite3DbFree(db, pIdx->zColAff); - sqlite3DbFree(db, pIdx); - } - } - } - whereClauseClear(pWInfo->pWC); + WhereLevel *pLevel = &pWInfo->a[i]; + if( pLevel->pWLoop && (pLevel->pWLoop->wsFlags & WHERE_IN_ABLE) ){ + sqlite3DbFree(db, pLevel->u.in.aInLoop); + } + } + sqlite3WhereClauseClear(&pWInfo->sWC); + while( pWInfo->pLoops ){ + WhereLoop *p = pWInfo->pLoops; + pWInfo->pLoops = p->pNextLoop; + whereLoopDelete(db, p); + } sqlite3DbFree(db, pWInfo); } } +/* +** Return TRUE if all of the following are true: +** +** (1) X has the same or lower cost that Y +** (2) X is a proper subset of Y +** (3) X skips at least as many columns as Y +** +** By "proper subset" we mean that X uses fewer WHERE clause terms +** than Y and that every WHERE clause term used by X is also used +** by Y. 
+** +** If X is a proper subset of Y then Y is a better choice and ought +** to have a lower cost. This routine returns TRUE when that cost +** relationship is inverted and needs to be adjusted. The third rule +** was added because if X uses skip-scan less than Y it still might +** deserve a lower cost even if it is a proper subset of Y. +*/ +static int whereLoopCheaperProperSubset( + const WhereLoop *pX, /* First WhereLoop to compare */ + const WhereLoop *pY /* Compare against this WhereLoop */ +){ + int i, j; + if( pX->nLTerm-pX->nSkip >= pY->nLTerm-pY->nSkip ){ + return 0; /* X is not a subset of Y */ + } + if( pY->nSkip > pX->nSkip ) return 0; + if( pX->rRun >= pY->rRun ){ + if( pX->rRun > pY->rRun ) return 0; /* X costs more than Y */ + if( pX->nOut > pY->nOut ) return 0; /* X costs more than Y */ + } + for(i=pX->nLTerm-1; i>=0; i--){ + if( pX->aLTerm[i]==0 ) continue; + for(j=pY->nLTerm-1; j>=0; j--){ + if( pY->aLTerm[j]==pX->aLTerm[i] ) break; + } + if( j<0 ) return 0; /* X not a subset of Y since term X[i] not used by Y */ + } + return 1; /* All conditions meet */ +} + +/* +** Try to adjust the cost of WhereLoop pTemplate upwards or downwards so +** that: +** +** (1) pTemplate costs less than any other WhereLoops that are a proper +** subset of pTemplate +** +** (2) pTemplate costs more than any other WhereLoops for which pTemplate +** is a proper subset. +** +** To say "WhereLoop X is a proper subset of Y" means that X uses fewer +** WHERE clause terms than Y and that every WHERE clause term used by X is +** also used by Y. +*/ +static void whereLoopAdjustCost(const WhereLoop *p, WhereLoop *pTemplate){ + if( (pTemplate->wsFlags & WHERE_INDEXED)==0 ) return; + for(; p; p=p->pNextLoop){ + if( p->iTab!=pTemplate->iTab ) continue; + if( (p->wsFlags & WHERE_INDEXED)==0 ) continue; + if( whereLoopCheaperProperSubset(p, pTemplate) ){ + /* Adjust pTemplate cost downward so that it is cheaper than its + ** subset p. */ + WHERETRACE(0x80,("subset cost adjustment %d,%d to %d,%d\n", + pTemplate->rRun, pTemplate->nOut, p->rRun, p->nOut-1)); + pTemplate->rRun = p->rRun; + pTemplate->nOut = p->nOut - 1; + }else if( whereLoopCheaperProperSubset(pTemplate, p) ){ + /* Adjust pTemplate cost upward so that it is costlier than p since + ** pTemplate is a proper subset of p */ + WHERETRACE(0x80,("subset cost adjustment %d,%d to %d,%d\n", + pTemplate->rRun, pTemplate->nOut, p->rRun, p->nOut+1)); + pTemplate->rRun = p->rRun; + pTemplate->nOut = p->nOut + 1; + } + } +} + +/* +** Search the list of WhereLoops in *ppPrev looking for one that can be +** supplanted by pTemplate. +** +** Return NULL if the WhereLoop list contains an entry that can supplant +** pTemplate, in other words if pTemplate does not belong on the list. +** +** If pX is a WhereLoop that pTemplate can supplant, then return the +** link that points to pX. +** +** If pTemplate cannot supplant any existing element of the list but needs +** to be added to the list, then return a pointer to the tail of the list. +*/ +static WhereLoop **whereLoopFindLesser( + WhereLoop **ppPrev, + const WhereLoop *pTemplate +){ + WhereLoop *p; + for(p=(*ppPrev); p; ppPrev=&p->pNextLoop, p=*ppPrev){ + if( p->iTab!=pTemplate->iTab || p->iSortIdx!=pTemplate->iSortIdx ){ + /* If either the iTab or iSortIdx values for two WhereLoop are different + ** then those WhereLoops need to be considered separately. Neither is + ** a candidate to replace the other. 
*/ + continue; + } + /* In the current implementation, the rSetup value is either zero + ** or the cost of building an automatic index (NlogN) and the NlogN + ** is the same for compatible WhereLoops. */ + assert( p->rSetup==0 || pTemplate->rSetup==0 + || p->rSetup==pTemplate->rSetup ); + + /* whereLoopAddBtree() always generates and inserts the automatic index + ** case first. Hence compatible candidate WhereLoops never have a larger + ** rSetup. Call this SETUP-INVARIANT */ + assert( p->rSetup>=pTemplate->rSetup ); + + /* Any loop using an application-defined index (or PRIMARY KEY or + ** UNIQUE constraint) with one or more == constraints is better + ** than an automatic index. Unless it is a skip-scan. */ + if( (p->wsFlags & WHERE_AUTO_INDEX)!=0 + && (pTemplate->nSkip)==0 + && (pTemplate->wsFlags & WHERE_INDEXED)!=0 + && (pTemplate->wsFlags & WHERE_COLUMN_EQ)!=0 + && (p->prereq & pTemplate->prereq)==pTemplate->prereq + ){ + break; + } + + /* If existing WhereLoop p is better than pTemplate, pTemplate can be + ** discarded. WhereLoop p is better if: + ** (1) p has no more dependencies than pTemplate, and + ** (2) p has an equal or lower cost than pTemplate + */ + if( (p->prereq & pTemplate->prereq)==p->prereq /* (1) */ + && p->rSetup<=pTemplate->rSetup /* (2a) */ + && p->rRun<=pTemplate->rRun /* (2b) */ + && p->nOut<=pTemplate->nOut /* (2c) */ + ){ + return 0; /* Discard pTemplate */ + } + + /* If pTemplate is always better than p, then cause p to be overwritten + ** with pTemplate. pTemplate is better than p if: + ** (1) pTemplate has no more dependencies than p, and + ** (2) pTemplate has an equal or lower cost than p. + */ + if( (p->prereq & pTemplate->prereq)==pTemplate->prereq /* (1) */ + && p->rRun>=pTemplate->rRun /* (2a) */ + && p->nOut>=pTemplate->nOut /* (2b) */ + ){ + assert( p->rSetup>=pTemplate->rSetup ); /* SETUP-INVARIANT above */ + break; /* Cause p to be overwritten by pTemplate */ + } + } + return ppPrev; +} + +/* +** Insert or replace a WhereLoop entry using the template supplied. +** +** An existing WhereLoop entry might be overwritten if the new template +** is better and has fewer dependencies. Or the template will be ignored +** and no insert will occur if an existing WhereLoop is faster and has +** fewer dependencies than the template. Otherwise a new WhereLoop is +** added based on the template. +** +** If pBuilder->pOrSet is not NULL then we care about only the +** prerequisites and rRun and nOut costs of the N best loops. That +** information is gathered in the pBuilder->pOrSet object. This special +** processing mode is used only for OR clause processing. +** +** When accumulating multiple loops (when pBuilder->pOrSet is NULL) we +** still might overwrite similar loops with the new template if the +** new template is better. Loops may be overwritten if the following +** conditions are met: +** +** (1) They have the same iTab. +** (2) They have the same iSortIdx. +** (3) The template has the same or fewer dependencies than the current loop +** (4) The template has the same or lower cost than the current loop +*/ +static int whereLoopInsert(WhereLoopBuilder *pBuilder, WhereLoop *pTemplate){ + WhereLoop **ppPrev, *p; + WhereInfo *pWInfo = pBuilder->pWInfo; + sqlite3 *db = pWInfo->pParse->db; + + /* If pBuilder->pOrSet is defined, then only keep track of the costs + ** and prereqs. 
+ */ + if( pBuilder->pOrSet!=0 ){ + if( pTemplate->nLTerm ){ +#if WHERETRACE_ENABLED + u16 n = pBuilder->pOrSet->n; + int x = +#endif + whereOrInsert(pBuilder->pOrSet, pTemplate->prereq, pTemplate->rRun, + pTemplate->nOut); +#if WHERETRACE_ENABLED /* 0x8 */ + if( sqlite3WhereTrace & 0x8 ){ + sqlite3DebugPrintf(x?" or-%d: ":" or-X: ", n); + whereLoopPrint(pTemplate, pBuilder->pWC); + } +#endif + } + return SQLITE_OK; + } + + /* Look for an existing WhereLoop to replace with pTemplate + */ + whereLoopAdjustCost(pWInfo->pLoops, pTemplate); + ppPrev = whereLoopFindLesser(&pWInfo->pLoops, pTemplate); + + if( ppPrev==0 ){ + /* There already exists a WhereLoop on the list that is better + ** than pTemplate, so just ignore pTemplate */ +#if WHERETRACE_ENABLED /* 0x8 */ + if( sqlite3WhereTrace & 0x8 ){ + sqlite3DebugPrintf(" skip: "); + whereLoopPrint(pTemplate, pBuilder->pWC); + } +#endif + return SQLITE_OK; + }else{ + p = *ppPrev; + } + + /* If we reach this point it means that either p[] should be overwritten + ** with pTemplate[] if p[] exists, or if p==NULL then allocate a new + ** WhereLoop and insert it. + */ +#if WHERETRACE_ENABLED /* 0x8 */ + if( sqlite3WhereTrace & 0x8 ){ + if( p!=0 ){ + sqlite3DebugPrintf("replace: "); + whereLoopPrint(p, pBuilder->pWC); + } + sqlite3DebugPrintf(" add: "); + whereLoopPrint(pTemplate, pBuilder->pWC); + } +#endif + if( p==0 ){ + /* Allocate a new WhereLoop to add to the end of the list */ + *ppPrev = p = sqlite3DbMallocRawNN(db, sizeof(WhereLoop)); + if( p==0 ) return SQLITE_NOMEM; + whereLoopInit(p); + p->pNextLoop = 0; + }else{ + /* We will be overwriting WhereLoop p[]. But before we do, first + ** go through the rest of the list and delete any other entries besides + ** p[] that are also supplanted by pTemplate */ + WhereLoop **ppTail = &p->pNextLoop; + WhereLoop *pToDel; + while( *ppTail ){ + ppTail = whereLoopFindLesser(ppTail, pTemplate); + if( ppTail==0 ) break; + pToDel = *ppTail; + if( pToDel==0 ) break; + *ppTail = pToDel->pNextLoop; +#if WHERETRACE_ENABLED /* 0x8 */ + if( sqlite3WhereTrace & 0x8 ){ + sqlite3DebugPrintf(" delete: "); + whereLoopPrint(pToDel, pBuilder->pWC); + } +#endif + whereLoopDelete(db, pToDel); + } + } + whereLoopXfer(db, p, pTemplate); + if( (p->wsFlags & WHERE_VIRTUALTABLE)==0 ){ + Index *pIndex = p->u.btree.pIndex; + if( pIndex && pIndex->tnum==0 ){ + p->u.btree.pIndex = 0; + } + } + return SQLITE_OK; +} + +/* +** Adjust the WhereLoop.nOut value downward to account for terms of the +** WHERE clause that reference the loop but which are not used by an +** index. +** +** For every WHERE clause term that is not used by the index +** and which has a truth probability assigned by one of the likelihood(), +** likely(), or unlikely() SQL functions, reduce the estimated number +** of output rows by the probability specified. +** +** TUNING: For every WHERE clause term that is not used by the index +** and which does not have an assigned truth probability, heuristics +** described below are used to try to estimate the truth probability. +** TODO --> Perhaps this is something that could be improved by better +** table statistics. +** +** Heuristic 1: Estimate the truth probability as 93.75%. The 93.75% +** value corresponds to -1 in LogEst notation, so this means decrement +** the WhereLoop.nOut field for every such WHERE clause term. 
+** +** Heuristic 2: If there exists one or more WHERE clause terms of the +** form "x==EXPR" and EXPR is not a constant 0 or 1, then make sure the +** final output row estimate is no greater than 1/4 of the total number +** of rows in the table. In other words, assume that x==EXPR will filter +** out at least 3 out of 4 rows. If EXPR is -1 or 0 or 1, then maybe the +** "x" column is boolean or else -1 or 0 or 1 is a common default value +** on the "x" column and so in that case only cap the output row estimate +** at 1/2 instead of 1/4. +*/ +static void whereLoopOutputAdjust( + WhereClause *pWC, /* The WHERE clause */ + WhereLoop *pLoop, /* The loop to adjust downward */ + LogEst nRow /* Number of rows in the entire table */ +){ + WhereTerm *pTerm, *pX; + Bitmask notAllowed = ~(pLoop->prereq|pLoop->maskSelf); + int i, j, k; + LogEst iReduce = 0; /* pLoop->nOut should not exceed nRow-iReduce */ + + assert( (pLoop->wsFlags & WHERE_AUTO_INDEX)==0 ); + for(i=pWC->nTerm, pTerm=pWC->a; i>0; i--, pTerm++){ + if( (pTerm->wtFlags & TERM_VIRTUAL)!=0 ) break; + if( (pTerm->prereqAll & pLoop->maskSelf)==0 ) continue; + if( (pTerm->prereqAll & notAllowed)!=0 ) continue; + for(j=pLoop->nLTerm-1; j>=0; j--){ + pX = pLoop->aLTerm[j]; + if( pX==0 ) continue; + if( pX==pTerm ) break; + if( pX->iParent>=0 && (&pWC->a[pX->iParent])==pTerm ) break; + } + if( j<0 ){ + if( pTerm->truthProb<=0 ){ + /* If a truth probability is specified using the likelihood() hints, + ** then use the probability provided by the application. */ + pLoop->nOut += pTerm->truthProb; + }else{ + /* In the absence of explicit truth probabilities, use heuristics to + ** guess a reasonable truth probability. */ + pLoop->nOut--; + if( pTerm->eOperator&(WO_EQ|WO_IS) ){ + Expr *pRight = pTerm->pExpr->pRight; + testcase( pTerm->pExpr->op==TK_IS ); + if( sqlite3ExprIsInteger(pRight, &k) && k>=(-1) && k<=1 ){ + k = 10; + }else{ + k = 20; + } + if( iReduce<k ) iReduce = k; + } + } + } + } + if( pLoop->nOut > nRow-iReduce ) pLoop->nOut = nRow - iReduce; +} + +/* +** Adjust the cost C by the costMult factor T. This only occurs if +** compiled with -DSQLITE_ENABLE_COSTMULT +*/ +#ifdef SQLITE_ENABLE_COSTMULT +# define ApplyCostMultiplier(C,T) C += T +#else +# define ApplyCostMultiplier(C,T) +#endif + +/* +** We have so far matched pBuilder->pNew->u.btree.nEq terms of the +** index pIndex. Try to match one more. +** +** When this function is called, pBuilder->pNew->nOut contains the +** number of rows expected to be visited by filtering using the nEq +** terms only. If it is modified, this value is restored before this +** function returns. +** +** If pProbe->tnum==0, that means pIndex is a fake index used for the +** INTEGER PRIMARY KEY. 
+*/ +static int whereLoopAddBtreeIndex( + WhereLoopBuilder *pBuilder, /* The WhereLoop factory */ + struct SrcList_item *pSrc, /* FROM clause term being analyzed */ + Index *pProbe, /* An index on pSrc */ + LogEst nInMul /* log(Number of iterations due to IN) */ +){ + WhereInfo *pWInfo = pBuilder->pWInfo; /* WHERE analyse context */ + Parse *pParse = pWInfo->pParse; /* Parsing context */ + sqlite3 *db = pParse->db; /* Database connection malloc context */ + WhereLoop *pNew; /* Template WhereLoop under construction */ + WhereTerm *pTerm; /* A WhereTerm under consideration */ + int opMask; /* Valid operators for constraints */ + WhereScan scan; /* Iterator for WHERE terms */ + Bitmask saved_prereq; /* Original value of pNew->prereq */ + u16 saved_nLTerm; /* Original value of pNew->nLTerm */ + u16 saved_nEq; /* Original value of pNew->u.btree.nEq */ + u16 saved_nSkip; /* Original value of pNew->nSkip */ + u32 saved_wsFlags; /* Original value of pNew->wsFlags */ + LogEst saved_nOut; /* Original value of pNew->nOut */ + int rc = SQLITE_OK; /* Return code */ + LogEst rSize; /* Number of rows in the table */ + LogEst rLogSize; /* Logarithm of table size */ + WhereTerm *pTop = 0, *pBtm = 0; /* Top and bottom range constraints */ + + pNew = pBuilder->pNew; + if( db->mallocFailed ) return SQLITE_NOMEM; + + assert( (pNew->wsFlags & WHERE_VIRTUALTABLE)==0 ); + assert( (pNew->wsFlags & WHERE_TOP_LIMIT)==0 ); + if( pNew->wsFlags & WHERE_BTM_LIMIT ){ + opMask = WO_LT|WO_LE; + }else if( /*pProbe->tnum<=0 ||*/ (pSrc->fg.jointype & JT_LEFT)!=0 ){ + opMask = WO_EQ|WO_IN|WO_GT|WO_GE|WO_LT|WO_LE; + }else{ + opMask = WO_EQ|WO_IN|WO_GT|WO_GE|WO_LT|WO_LE|WO_ISNULL|WO_IS; + } + if( pProbe->bUnordered ) opMask &= ~(WO_GT|WO_GE|WO_LT|WO_LE); + + assert( pNew->u.btree.nEq<pProbe->nColumn ); + + saved_nEq = pNew->u.btree.nEq; + saved_nSkip = pNew->nSkip; + saved_nLTerm = pNew->nLTerm; + saved_wsFlags = pNew->wsFlags; + saved_prereq = pNew->prereq; + saved_nOut = pNew->nOut; + pTerm = whereScanInit(&scan, pBuilder->pWC, pSrc->iCursor, saved_nEq, + opMask, pProbe); + pNew->rSetup = 0; + rSize = pProbe->aiRowLogEst[0]; + rLogSize = estLog(rSize); + for(; rc==SQLITE_OK && pTerm!=0; pTerm = whereScanNext(&scan)){ + u16 eOp = pTerm->eOperator; /* Shorthand for pTerm->eOperator */ + LogEst rCostIdx; + LogEst nOutUnadjusted; /* nOut before IN() and WHERE adjustments */ + int nIn = 0; +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + int nRecValid = pBuilder->nRecValid; +#endif + if( (eOp==WO_ISNULL || (pTerm->wtFlags&TERM_VNULL)!=0) + && indexColumnNotNull(pProbe, saved_nEq) + ){ + continue; /* ignore IS [NOT] NULL constraints on NOT NULL columns */ + } + if( pTerm->prereqRight & pNew->maskSelf ) continue; + + /* Do not allow the upper bound of a LIKE optimization range constraint + ** to mix with a lower range bound from some other source */ + if( pTerm->wtFlags & TERM_LIKEOPT && pTerm->eOperator==WO_LT ) continue; + + pNew->wsFlags = saved_wsFlags; + pNew->u.btree.nEq = saved_nEq; + pNew->nLTerm = saved_nLTerm; + if( whereLoopResize(db, pNew, pNew->nLTerm+1) ) break; /* OOM */ + pNew->aLTerm[pNew->nLTerm++] = pTerm; + pNew->prereq = (saved_prereq | pTerm->prereqRight) & ~pNew->maskSelf; + + assert( nInMul==0 + || (pNew->wsFlags & WHERE_COLUMN_NULL)!=0 + || (pNew->wsFlags & WHERE_COLUMN_IN)!=0 + || (pNew->wsFlags & WHERE_SKIPSCAN)!=0 + ); + + if( eOp & WO_IN ){ + Expr *pExpr = pTerm->pExpr; + pNew->wsFlags |= WHERE_COLUMN_IN; + if( ExprHasProperty(pExpr, EP_xIsSelect) ){ + /* "x IN (SELECT ...)": TUNING: the SELECT returns 25 rows */ + 
nIn = 46; assert( 46==sqlite3LogEst(25) ); + }else if( ALWAYS(pExpr->x.pList && pExpr->x.pList->nExpr) ){ + /* "x IN (value, value, ...)" */ + nIn = sqlite3LogEst(pExpr->x.pList->nExpr); + } + assert( nIn>0 ); /* RHS always has 2 or more terms... The parser + ** changes "x IN (?)" into "x=?". */ + + }else if( eOp & (WO_EQ|WO_IS) ){ + int iCol = pProbe->aiColumn[saved_nEq]; + pNew->wsFlags |= WHERE_COLUMN_EQ; + assert( saved_nEq==pNew->u.btree.nEq ); + if( iCol==XN_ROWID + || (iCol>0 && nInMul==0 && saved_nEq==pProbe->nKeyCol-1) + ){ + if( iCol>=0 && pProbe->uniqNotNull==0 ){ + pNew->wsFlags |= WHERE_UNQ_WANTED; + }else{ + pNew->wsFlags |= WHERE_ONEROW; + } + } + }else if( eOp & WO_ISNULL ){ + pNew->wsFlags |= WHERE_COLUMN_NULL; + }else if( eOp & (WO_GT|WO_GE) ){ + testcase( eOp & WO_GT ); + testcase( eOp & WO_GE ); + pNew->wsFlags |= WHERE_COLUMN_RANGE|WHERE_BTM_LIMIT; + pBtm = pTerm; + pTop = 0; + if( pTerm->wtFlags & TERM_LIKEOPT ){ + /* Range contraints that come from the LIKE optimization are + ** always used in pairs. */ + pTop = &pTerm[1]; + assert( (pTop-(pTerm->pWC->a))<pTerm->pWC->nTerm ); + assert( pTop->wtFlags & TERM_LIKEOPT ); + assert( pTop->eOperator==WO_LT ); + if( whereLoopResize(db, pNew, pNew->nLTerm+1) ) break; /* OOM */ + pNew->aLTerm[pNew->nLTerm++] = pTop; + pNew->wsFlags |= WHERE_TOP_LIMIT; + } + }else{ + assert( eOp & (WO_LT|WO_LE) ); + testcase( eOp & WO_LT ); + testcase( eOp & WO_LE ); + pNew->wsFlags |= WHERE_COLUMN_RANGE|WHERE_TOP_LIMIT; + pTop = pTerm; + pBtm = (pNew->wsFlags & WHERE_BTM_LIMIT)!=0 ? + pNew->aLTerm[pNew->nLTerm-2] : 0; + } + + /* At this point pNew->nOut is set to the number of rows expected to + ** be visited by the index scan before considering term pTerm, or the + ** values of nIn and nInMul. In other words, assuming that all + ** "x IN(...)" terms are replaced with "x = ?". This block updates + ** the value of pNew->nOut to account for pTerm (but not nIn/nInMul). */ + assert( pNew->nOut==saved_nOut ); + if( pNew->wsFlags & WHERE_COLUMN_RANGE ){ + /* Adjust nOut using stat3/stat4 data. Or, if there is no stat3/stat4 + ** data, using some other estimate. */ + whereRangeScanEst(pParse, pBuilder, pBtm, pTop, pNew); + }else{ + int nEq = ++pNew->u.btree.nEq; + assert( eOp & (WO_ISNULL|WO_EQ|WO_IN|WO_IS) ); + + assert( pNew->nOut==saved_nOut ); + if( pTerm->truthProb<=0 && pProbe->aiColumn[saved_nEq]>=0 ){ + assert( (eOp & WO_IN) || nIn==0 ); + testcase( eOp & WO_IN ); + pNew->nOut += pTerm->truthProb; + pNew->nOut -= nIn; + }else{ +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + tRowcnt nOut = 0; + if( nInMul==0 + && pProbe->nSample + && pNew->u.btree.nEq<=pProbe->nSampleCol + && ((eOp & WO_IN)==0 || !ExprHasProperty(pTerm->pExpr, EP_xIsSelect)) + ){ + Expr *pExpr = pTerm->pExpr; + if( (eOp & (WO_EQ|WO_ISNULL|WO_IS))!=0 ){ + testcase( eOp & WO_EQ ); + testcase( eOp & WO_IS ); + testcase( eOp & WO_ISNULL ); + rc = whereEqualScanEst(pParse, pBuilder, pExpr->pRight, &nOut); + }else{ + rc = whereInScanEst(pParse, pBuilder, pExpr->x.pList, &nOut); + } + if( rc==SQLITE_NOTFOUND ) rc = SQLITE_OK; + if( rc!=SQLITE_OK ) break; /* Jump out of the pTerm loop */ + if( nOut ){ + pNew->nOut = sqlite3LogEst(nOut); + if( pNew->nOut>saved_nOut ) pNew->nOut = saved_nOut; + pNew->nOut -= nIn; + } + } + if( nOut==0 ) +#endif + { + pNew->nOut += (pProbe->aiRowLogEst[nEq] - pProbe->aiRowLogEst[nEq-1]); + if( eOp & WO_ISNULL ){ + /* TUNING: If there is no likelihood() value, assume that a + ** "col IS NULL" expression matches twice as many rows + ** as (col=?). 
*/ + pNew->nOut += 10; + } + } + } + } + + /* Set rCostIdx to the cost of visiting selected rows in index. Add + ** it to pNew->rRun, which is currently set to the cost of the index + ** seek only. Then, if this is a non-covering index, add the cost of + ** visiting the rows in the main table. */ + rCostIdx = pNew->nOut + 1 + (15*pProbe->szIdxRow)/pSrc->pTab->szTabRow; + pNew->rRun = sqlite3LogEstAdd(rLogSize, rCostIdx); + if( (pNew->wsFlags & (WHERE_IDX_ONLY|WHERE_IPK))==0 ){ + pNew->rRun = sqlite3LogEstAdd(pNew->rRun, pNew->nOut + 16); + } + ApplyCostMultiplier(pNew->rRun, pProbe->pTable->costMult); + + nOutUnadjusted = pNew->nOut; + pNew->rRun += nInMul + nIn; + pNew->nOut += nInMul + nIn; + whereLoopOutputAdjust(pBuilder->pWC, pNew, rSize); + rc = whereLoopInsert(pBuilder, pNew); + + if( pNew->wsFlags & WHERE_COLUMN_RANGE ){ + pNew->nOut = saved_nOut; + }else{ + pNew->nOut = nOutUnadjusted; + } + + if( (pNew->wsFlags & WHERE_TOP_LIMIT)==0 + && pNew->u.btree.nEq<pProbe->nColumn + ){ + whereLoopAddBtreeIndex(pBuilder, pSrc, pProbe, nInMul+nIn); + } + pNew->nOut = saved_nOut; +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + pBuilder->nRecValid = nRecValid; +#endif + } + pNew->prereq = saved_prereq; + pNew->u.btree.nEq = saved_nEq; + pNew->nSkip = saved_nSkip; + pNew->wsFlags = saved_wsFlags; + pNew->nOut = saved_nOut; + pNew->nLTerm = saved_nLTerm; + + /* Consider using a skip-scan if there are no WHERE clause constraints + ** available for the left-most terms of the index, and if the average + ** number of repeats in the left-most terms is at least 18. + ** + ** The magic number 18 is selected on the basis that scanning 17 rows + ** is almost always quicker than an index seek (even though if the index + ** contains fewer than 2^17 rows we assume otherwise in other parts of + ** the code). And, even if it is not, it should not be too much slower. + ** On the other hand, the extra seeks could end up being significantly + ** more expensive. */ + assert( 42==sqlite3LogEst(18) ); + if( saved_nEq==saved_nSkip + && saved_nEq+1<pProbe->nKeyCol + && pProbe->noSkipScan==0 + && pProbe->aiRowLogEst[saved_nEq+1]>=42 /* TUNING: Minimum for skip-scan */ + && (rc = whereLoopResize(db, pNew, pNew->nLTerm+1))==SQLITE_OK + ){ + LogEst nIter; + pNew->u.btree.nEq++; + pNew->nSkip++; + pNew->aLTerm[pNew->nLTerm++] = 0; + pNew->wsFlags |= WHERE_SKIPSCAN; + nIter = pProbe->aiRowLogEst[saved_nEq] - pProbe->aiRowLogEst[saved_nEq+1]; + pNew->nOut -= nIter; + /* TUNING: Because uncertainties in the estimates for skip-scan queries, + ** add a 1.375 fudge factor to make skip-scan slightly less likely. */ + nIter += 5; + whereLoopAddBtreeIndex(pBuilder, pSrc, pProbe, nIter + nInMul); + pNew->nOut = saved_nOut; + pNew->u.btree.nEq = saved_nEq; + pNew->nSkip = saved_nSkip; + pNew->wsFlags = saved_wsFlags; + } + + return rc; +} + +/* +** Return True if it is possible that pIndex might be useful in +** implementing the ORDER BY clause in pBuilder. +** +** Return False if pBuilder does not contain an ORDER BY clause or +** if there is no way for pIndex to be useful in implementing that +** ORDER BY clause. 
+*/ +static int indexMightHelpWithOrderBy( + WhereLoopBuilder *pBuilder, + Index *pIndex, + int iCursor +){ + ExprList *pOB; + ExprList *aColExpr; + int ii, jj; + + if( pIndex->bUnordered ) return 0; + if( (pOB = pBuilder->pWInfo->pOrderBy)==0 ) return 0; + for(ii=0; ii<pOB->nExpr; ii++){ + Expr *pExpr = sqlite3ExprSkipCollate(pOB->a[ii].pExpr); + if( pExpr->op==TK_COLUMN && pExpr->iTable==iCursor ){ + if( pExpr->iColumn<0 ) return 1; + for(jj=0; jj<pIndex->nKeyCol; jj++){ + if( pExpr->iColumn==pIndex->aiColumn[jj] ) return 1; + } + }else if( (aColExpr = pIndex->aColExpr)!=0 ){ + for(jj=0; jj<pIndex->nKeyCol; jj++){ + if( pIndex->aiColumn[jj]!=XN_EXPR ) continue; + if( sqlite3ExprCompare(pExpr,aColExpr->a[jj].pExpr,iCursor)==0 ){ + return 1; + } + } + } + } + return 0; +} + +/* +** Return a bitmask where 1s indicate that the corresponding column of +** the table is used by an index. Only the first 63 columns are considered. +*/ +static Bitmask columnsInIndex(Index *pIdx){ + Bitmask m = 0; + int j; + for(j=pIdx->nColumn-1; j>=0; j--){ + int x = pIdx->aiColumn[j]; + if( x>=0 ){ + testcase( x==BMS-1 ); + testcase( x==BMS-2 ); + if( x<BMS-1 ) m |= MASKBIT(x); + } + } + return m; +} + +/* Check to see if a partial index with pPartIndexWhere can be used +** in the current query. Return true if it can be and false if not. +*/ +static int whereUsablePartialIndex(int iTab, WhereClause *pWC, Expr *pWhere){ + int i; + WhereTerm *pTerm; + while( pWhere->op==TK_AND ){ + if( !whereUsablePartialIndex(iTab,pWC,pWhere->pLeft) ) return 0; + pWhere = pWhere->pRight; + } + for(i=0, pTerm=pWC->a; i<pWC->nTerm; i++, pTerm++){ + Expr *pExpr = pTerm->pExpr; + if( sqlite3ExprImpliesExpr(pExpr, pWhere, iTab) + && (!ExprHasProperty(pExpr, EP_FromJoin) || pExpr->iRightJoinTable==iTab) + ){ + return 1; + } + } + return 0; +} + +/* +** Add all WhereLoop objects for a single table of the join where the table +** is idenfied by pBuilder->pNew->iTab. That table is guaranteed to be +** a b-tree table, not a virtual table. +** +** The costs (WhereLoop.rRun) of the b-tree loops added by this function +** are calculated as follows: +** +** For a full scan, assuming the table (or index) contains nRow rows: +** +** cost = nRow * 3.0 // full-table scan +** cost = nRow * K // scan of covering index +** cost = nRow * (K+3.0) // scan of non-covering index +** +** where K is a value between 1.1 and 3.0 set based on the relative +** estimated average size of the index and table records. +** +** For an index scan, where nVisit is the number of index rows visited +** by the scan, and nSeek is the number of seek operations required on +** the index b-tree: +** +** cost = nSeek * (log(nRow) + K * nVisit) // covering index +** cost = nSeek * (log(nRow) + (K+3.0) * nVisit) // non-covering index +** +** Normally, nSeek is 1. nSeek values greater than 1 come about if the +** WHERE clause includes "x IN (....)" terms used in place of "x=?". Or when +** implicit "x IN (SELECT x FROM tbl)" terms are added for skip-scans. +** +** The estimated values (nRow, nVisit, nSeek) often contain a large amount +** of uncertainty. For this reason, scoring is designed to pick plans that +** "do the least harm" if the estimates are inaccurate. For example, a +** log(nRow) factor is omitted from a non-covering index scan in order to +** bias the scoring in favor of using an index, since the worst-case +** performance of using an index is far better than the worst-case performance +** of a full table scan. 
+*/ +static int whereLoopAddBtree( + WhereLoopBuilder *pBuilder, /* WHERE clause information */ + Bitmask mExtra /* Extra prerequesites for using this table */ +){ + WhereInfo *pWInfo; /* WHERE analysis context */ + Index *pProbe; /* An index we are evaluating */ + Index sPk; /* A fake index object for the primary key */ + LogEst aiRowEstPk[2]; /* The aiRowLogEst[] value for the sPk index */ + i16 aiColumnPk = -1; /* The aColumn[] value for the sPk index */ + SrcList *pTabList; /* The FROM clause */ + struct SrcList_item *pSrc; /* The FROM clause btree term to add */ + WhereLoop *pNew; /* Template WhereLoop object */ + int rc = SQLITE_OK; /* Return code */ + int iSortIdx = 1; /* Index number */ + int b; /* A boolean value */ + LogEst rSize; /* number of rows in the table */ + LogEst rLogSize; /* Logarithm of the number of rows in the table */ + WhereClause *pWC; /* The parsed WHERE clause */ + Table *pTab; /* Table being queried */ + + pNew = pBuilder->pNew; + pWInfo = pBuilder->pWInfo; + pTabList = pWInfo->pTabList; + pSrc = pTabList->a + pNew->iTab; + pTab = pSrc->pTab; + pWC = pBuilder->pWC; + assert( !IsVirtual(pSrc->pTab) ); + + if( pSrc->pIBIndex ){ + /* An INDEXED BY clause specifies a particular index to use */ + pProbe = pSrc->pIBIndex; + }else if( !HasRowid(pTab) ){ + pProbe = pTab->pIndex; + }else{ + /* There is no INDEXED BY clause. Create a fake Index object in local + ** variable sPk to represent the rowid primary key index. Make this + ** fake index the first in a chain of Index objects with all of the real + ** indices to follow */ + Index *pFirst; /* First of real indices on the table */ + memset(&sPk, 0, sizeof(Index)); + sPk.nKeyCol = 1; + sPk.nColumn = 1; + sPk.aiColumn = &aiColumnPk; + sPk.aiRowLogEst = aiRowEstPk; + sPk.onError = OE_Replace; + sPk.pTable = pTab; + sPk.szIdxRow = pTab->szTabRow; + aiRowEstPk[0] = pTab->nRowLogEst; + aiRowEstPk[1] = 0; + pFirst = pSrc->pTab->pIndex; + if( pSrc->fg.notIndexed==0 ){ + /* The real indices of the table are only considered if the + ** NOT INDEXED qualifier is omitted from the FROM clause */ + sPk.pNext = pFirst; + } + pProbe = &sPk; + } + rSize = pTab->nRowLogEst; + rLogSize = estLog(rSize); + +#ifndef SQLITE_OMIT_AUTOMATIC_INDEX + /* Automatic indexes */ + if( !pBuilder->pOrSet /* Not part of an OR optimization */ + && (pWInfo->wctrlFlags & WHERE_NO_AUTOINDEX)==0 + && (pWInfo->pParse->db->flags & SQLITE_AutoIndex)!=0 + && pSrc->pIBIndex==0 /* Has no INDEXED BY clause */ + && !pSrc->fg.notIndexed /* Has no NOT INDEXED clause */ + && HasRowid(pTab) /* Not WITHOUT ROWID table. (FIXME: Why not?) */ + && !pSrc->fg.isCorrelated /* Not a correlated subquery */ + && !pSrc->fg.isRecursive /* Not a recursive common table expression. */ + ){ + /* Generate auto-index WhereLoops */ + WhereTerm *pTerm; + WhereTerm *pWCEnd = pWC->a + pWC->nTerm; + for(pTerm=pWC->a; rc==SQLITE_OK && pTerm<pWCEnd; pTerm++){ + if( pTerm->prereqRight & pNew->maskSelf ) continue; + if( termCanDriveIndex(pTerm, pSrc, 0) ){ + pNew->u.btree.nEq = 1; + pNew->nSkip = 0; + pNew->u.btree.pIndex = 0; + pNew->nLTerm = 1; + pNew->aLTerm[0] = pTerm; + /* TUNING: One-time cost for computing the automatic index is + ** estimated to be X*N*log2(N) where N is the number of rows in + ** the table being indexed and where X is 7 (LogEst=28) for normal + ** tables or 1.375 (LogEst=4) for views and subqueries. 
The value + ** of X is smaller for views and subqueries so that the query planner + ** will be more aggressive about generating automatic indexes for + ** those objects, since there is no opportunity to add schema + ** indexes on subqueries and views. */ + pNew->rSetup = rLogSize + rSize + 4; + if( pTab->pSelect==0 && (pTab->tabFlags & TF_Ephemeral)==0 ){ + pNew->rSetup += 24; + } + ApplyCostMultiplier(pNew->rSetup, pTab->costMult); + /* TUNING: Each index lookup yields 20 rows in the table. This + ** is more than the usual guess of 10 rows, since we have no way + ** of knowing how selective the index will ultimately be. It would + ** not be unreasonable to make this value much larger. */ + pNew->nOut = 43; assert( 43==sqlite3LogEst(20) ); + pNew->rRun = sqlite3LogEstAdd(rLogSize,pNew->nOut); + pNew->wsFlags = WHERE_AUTO_INDEX; + pNew->prereq = mExtra | pTerm->prereqRight; + rc = whereLoopInsert(pBuilder, pNew); + } + } + } +#endif /* SQLITE_OMIT_AUTOMATIC_INDEX */ + + /* Loop over all indices + */ + for(; rc==SQLITE_OK && pProbe; pProbe=pProbe->pNext, iSortIdx++){ + if( pProbe->pPartIdxWhere!=0 + && !whereUsablePartialIndex(pSrc->iCursor, pWC, pProbe->pPartIdxWhere) ){ + testcase( pNew->iTab!=pSrc->iCursor ); /* See ticket [98d973b8f5] */ + continue; /* Partial index inappropriate for this query */ + } + rSize = pProbe->aiRowLogEst[0]; + pNew->u.btree.nEq = 0; + pNew->nSkip = 0; + pNew->nLTerm = 0; + pNew->iSortIdx = 0; + pNew->rSetup = 0; + pNew->prereq = mExtra; + pNew->nOut = rSize; + pNew->u.btree.pIndex = pProbe; + b = indexMightHelpWithOrderBy(pBuilder, pProbe, pSrc->iCursor); + /* The ONEPASS_DESIRED flags never occurs together with ORDER BY */ + assert( (pWInfo->wctrlFlags & WHERE_ONEPASS_DESIRED)==0 || b==0 ); + if( pProbe->tnum<=0 ){ + /* Integer primary key index */ + pNew->wsFlags = WHERE_IPK; + + /* Full table scan */ + pNew->iSortIdx = b ? iSortIdx : 0; + /* TUNING: Cost of full table scan is (N*3.0). */ + pNew->rRun = rSize + 16; + ApplyCostMultiplier(pNew->rRun, pTab->costMult); + whereLoopOutputAdjust(pWC, pNew, rSize); + rc = whereLoopInsert(pBuilder, pNew); + pNew->nOut = rSize; + if( rc ) break; + }else{ + Bitmask m; + if( pProbe->isCovering ){ + pNew->wsFlags = WHERE_IDX_ONLY | WHERE_INDEXED; + m = 0; + }else{ + m = pSrc->colUsed & ~columnsInIndex(pProbe); + pNew->wsFlags = (m==0) ? (WHERE_IDX_ONLY|WHERE_INDEXED) : WHERE_INDEXED; + } + + /* Full scan via index */ + if( b + || !HasRowid(pTab) + || ( m==0 + && pProbe->bUnordered==0 + && (pProbe->szIdxRow<pTab->szTabRow) + && (pWInfo->wctrlFlags & WHERE_ONEPASS_DESIRED)==0 + && sqlite3GlobalConfig.bUseCis + && OptimizationEnabled(pWInfo->pParse->db, SQLITE_CoverIdxScan) + ) + ){ + pNew->iSortIdx = b ? iSortIdx : 0; + + /* The cost of visiting the index rows is N*K, where K is + ** between 1.1 and 3.0, depending on the relative sizes of the + ** index and table rows. If this is a non-covering index scan, + ** also add the cost of visiting table rows (N*3.0). 
*/ + pNew->rRun = rSize + 1 + (15*pProbe->szIdxRow)/pTab->szTabRow; + if( m!=0 ){ + pNew->rRun = sqlite3LogEstAdd(pNew->rRun, rSize+16); + } + ApplyCostMultiplier(pNew->rRun, pTab->costMult); + whereLoopOutputAdjust(pWC, pNew, rSize); + rc = whereLoopInsert(pBuilder, pNew); + pNew->nOut = rSize; + if( rc ) break; + } + } + + rc = whereLoopAddBtreeIndex(pBuilder, pSrc, pProbe, 0); +#ifdef SQLITE_ENABLE_STAT3_OR_STAT4 + sqlite3Stat4ProbeFree(pBuilder->pRec); + pBuilder->nRecValid = 0; + pBuilder->pRec = 0; +#endif + + /* If there was an INDEXED BY clause, then only that one index is + ** considered. */ + if( pSrc->pIBIndex ) break; + } + return rc; +} + +#ifndef SQLITE_OMIT_VIRTUALTABLE +/* +** Add all WhereLoop objects for a table of the join identified by +** pBuilder->pNew->iTab. That table is guaranteed to be a virtual table. +** +** If there are no LEFT or CROSS JOIN joins in the query, both mExtra and +** mUnusable are set to 0. Otherwise, mExtra is a mask of all FROM clause +** entries that occur before the virtual table in the FROM clause and are +** separated from it by at least one LEFT or CROSS JOIN. Similarly, the +** mUnusable mask contains all FROM clause entries that occur after the +** virtual table and are separated from it by at least one LEFT or +** CROSS JOIN. +** +** For example, if the query were: +** +** ... FROM t1, t2 LEFT JOIN t3, t4, vt CROSS JOIN t5, t6; +** +** then mExtra corresponds to (t1, t2) and mUnusable to (t5, t6). +** +** All the tables in mExtra must be scanned before the current virtual +** table. So any terms for which all prerequisites are satisfied by +** mExtra may be specified as "usable" in all calls to xBestIndex. +** Conversely, all tables in mUnusable must be scanned after the current +** virtual table, so any terms for which the prerequisites overlap with +** mUnusable should always be configured as "not-usable" for xBestIndex. 
+*/ +static int whereLoopAddVirtual( + WhereLoopBuilder *pBuilder, /* WHERE clause information */ + Bitmask mExtra, /* Tables that must be scanned before this one */ + Bitmask mUnusable /* Tables that must be scanned after this one */ +){ + WhereInfo *pWInfo; /* WHERE analysis context */ + Parse *pParse; /* The parsing context */ + WhereClause *pWC; /* The WHERE clause */ + struct SrcList_item *pSrc; /* The FROM clause term to search */ + Table *pTab; + sqlite3 *db; + sqlite3_index_info *pIdxInfo; + struct sqlite3_index_constraint *pIdxCons; + struct sqlite3_index_constraint_usage *pUsage; + WhereTerm *pTerm; + int i, j; + int iTerm, mxTerm; + int nConstraint; + int seenIn = 0; /* True if an IN operator is seen */ + int seenVar = 0; /* True if a non-constant constraint is seen */ + int iPhase; /* 0: const w/o IN, 1: const, 2: no IN, 2: IN */ + WhereLoop *pNew; + int rc = SQLITE_OK; + + assert( (mExtra & mUnusable)==0 ); + pWInfo = pBuilder->pWInfo; + pParse = pWInfo->pParse; + db = pParse->db; + pWC = pBuilder->pWC; + pNew = pBuilder->pNew; + pSrc = &pWInfo->pTabList->a[pNew->iTab]; + pTab = pSrc->pTab; + assert( IsVirtual(pTab) ); + pIdxInfo = allocateIndexInfo(pParse, pWC, mUnusable, pSrc,pBuilder->pOrderBy); + if( pIdxInfo==0 ) return SQLITE_NOMEM; + pNew->prereq = 0; + pNew->rSetup = 0; + pNew->wsFlags = WHERE_VIRTUALTABLE; + pNew->nLTerm = 0; + pNew->u.vtab.needFree = 0; + pUsage = pIdxInfo->aConstraintUsage; + nConstraint = pIdxInfo->nConstraint; + if( whereLoopResize(db, pNew, nConstraint) ){ + sqlite3DbFree(db, pIdxInfo); + return SQLITE_NOMEM; + } + + for(iPhase=0; iPhase<=3; iPhase++){ + if( !seenIn && (iPhase&1)!=0 ){ + iPhase++; + if( iPhase>3 ) break; + } + if( !seenVar && iPhase>1 ) break; + pIdxCons = *(struct sqlite3_index_constraint**)&pIdxInfo->aConstraint; + for(i=0; i<pIdxInfo->nConstraint; i++, pIdxCons++){ + j = pIdxCons->iTermOffset; + pTerm = &pWC->a[j]; + switch( iPhase ){ + case 0: /* Constants without IN operator */ + pIdxCons->usable = 0; + if( (pTerm->eOperator & WO_IN)!=0 ){ + seenIn = 1; + } + if( (pTerm->prereqRight & ~mExtra)!=0 ){ + seenVar = 1; + }else if( (pTerm->eOperator & WO_IN)==0 ){ + pIdxCons->usable = 1; + } + break; + case 1: /* Constants with IN operators */ + assert( seenIn ); + pIdxCons->usable = (pTerm->prereqRight & ~mExtra)==0; + break; + case 2: /* Variables without IN */ + assert( seenVar ); + pIdxCons->usable = (pTerm->eOperator & WO_IN)==0; + break; + default: /* Variables with IN */ + assert( seenVar && seenIn ); + pIdxCons->usable = 1; + break; + } + } + memset(pUsage, 0, sizeof(pUsage[0])*pIdxInfo->nConstraint); + if( pIdxInfo->needToFreeIdxStr ) sqlite3_free(pIdxInfo->idxStr); + pIdxInfo->idxStr = 0; + pIdxInfo->idxNum = 0; + pIdxInfo->needToFreeIdxStr = 0; + pIdxInfo->orderByConsumed = 0; + pIdxInfo->estimatedCost = SQLITE_BIG_DBL / (double)2; + pIdxInfo->estimatedRows = 25; + pIdxInfo->idxFlags = 0; + pIdxInfo->colUsed = (sqlite3_int64)pSrc->colUsed; + rc = vtabBestIndex(pParse, pTab, pIdxInfo); + if( rc ) goto whereLoopAddVtab_exit; + pIdxCons = *(struct sqlite3_index_constraint**)&pIdxInfo->aConstraint; + pNew->prereq = mExtra; + mxTerm = -1; + assert( pNew->nLSlot>=nConstraint ); + for(i=0; i<nConstraint; i++) pNew->aLTerm[i] = 0; + pNew->u.vtab.omitMask = 0; + for(i=0; i<nConstraint; i++, pIdxCons++){ + if( (iTerm = pUsage[i].argvIndex - 1)>=0 ){ + j = pIdxCons->iTermOffset; + if( iTerm>=nConstraint + || j<0 + || j>=pWC->nTerm + || pNew->aLTerm[iTerm]!=0 + ){ + rc = SQLITE_ERROR; + sqlite3ErrorMsg(pParse, "%s.xBestIndex() 
malfunction", pTab->zName); + goto whereLoopAddVtab_exit; + } + testcase( iTerm==nConstraint-1 ); + testcase( j==0 ); + testcase( j==pWC->nTerm-1 ); + pTerm = &pWC->a[j]; + pNew->prereq |= pTerm->prereqRight; + assert( iTerm<pNew->nLSlot ); + pNew->aLTerm[iTerm] = pTerm; + if( iTerm>mxTerm ) mxTerm = iTerm; + testcase( iTerm==15 ); + testcase( iTerm==16 ); + if( iTerm<16 && pUsage[i].omit ) pNew->u.vtab.omitMask |= 1<<iTerm; + if( (pTerm->eOperator & WO_IN)!=0 ){ + if( pUsage[i].omit==0 ){ + /* Do not attempt to use an IN constraint if the virtual table + ** says that the equivalent EQ constraint cannot be safely omitted. + ** If we do attempt to use such a constraint, some rows might be + ** repeated in the output. */ + break; + } + /* A virtual table that is constrained by an IN clause may not + ** consume the ORDER BY clause because (1) the order of IN terms + ** is not necessarily related to the order of output terms and + ** (2) Multiple outputs from a single IN value will not merge + ** together. */ + pIdxInfo->orderByConsumed = 0; + pIdxInfo->idxFlags &= ~SQLITE_INDEX_SCAN_UNIQUE; + } + } + } + if( i>=nConstraint ){ + pNew->nLTerm = mxTerm+1; + assert( pNew->nLTerm<=pNew->nLSlot ); + pNew->u.vtab.idxNum = pIdxInfo->idxNum; + pNew->u.vtab.needFree = pIdxInfo->needToFreeIdxStr; + pIdxInfo->needToFreeIdxStr = 0; + pNew->u.vtab.idxStr = pIdxInfo->idxStr; + pNew->u.vtab.isOrdered = (i8)(pIdxInfo->orderByConsumed ? + pIdxInfo->nOrderBy : 0); + pNew->rSetup = 0; + pNew->rRun = sqlite3LogEstFromDouble(pIdxInfo->estimatedCost); + pNew->nOut = sqlite3LogEst(pIdxInfo->estimatedRows); + + /* Set the WHERE_ONEROW flag if the xBestIndex() method indicated + ** that the scan will visit at most one row. Clear it otherwise. */ + if( pIdxInfo->idxFlags & SQLITE_INDEX_SCAN_UNIQUE ){ + pNew->wsFlags |= WHERE_ONEROW; + }else{ + pNew->wsFlags &= ~WHERE_ONEROW; + } + whereLoopInsert(pBuilder, pNew); + if( pNew->u.vtab.needFree ){ + sqlite3_free(pNew->u.vtab.idxStr); + pNew->u.vtab.needFree = 0; + } + } + } + +whereLoopAddVtab_exit: + if( pIdxInfo->needToFreeIdxStr ) sqlite3_free(pIdxInfo->idxStr); + sqlite3DbFree(db, pIdxInfo); + return rc; +} +#endif /* SQLITE_OMIT_VIRTUALTABLE */ + +/* +** Add WhereLoop entries to handle OR terms. This works for either +** btrees or virtual tables. 
+*/ +static int whereLoopAddOr( + WhereLoopBuilder *pBuilder, + Bitmask mExtra, + Bitmask mUnusable +){ + WhereInfo *pWInfo = pBuilder->pWInfo; + WhereClause *pWC; + WhereLoop *pNew; + WhereTerm *pTerm, *pWCEnd; + int rc = SQLITE_OK; + int iCur; + WhereClause tempWC; + WhereLoopBuilder sSubBuild; + WhereOrSet sSum, sCur; + struct SrcList_item *pItem; + + pWC = pBuilder->pWC; + pWCEnd = pWC->a + pWC->nTerm; + pNew = pBuilder->pNew; + memset(&sSum, 0, sizeof(sSum)); + pItem = pWInfo->pTabList->a + pNew->iTab; + iCur = pItem->iCursor; + + for(pTerm=pWC->a; pTerm<pWCEnd && rc==SQLITE_OK; pTerm++){ + if( (pTerm->eOperator & WO_OR)!=0 + && (pTerm->u.pOrInfo->indexable & pNew->maskSelf)!=0 + ){ + WhereClause * const pOrWC = &pTerm->u.pOrInfo->wc; + WhereTerm * const pOrWCEnd = &pOrWC->a[pOrWC->nTerm]; + WhereTerm *pOrTerm; + int once = 1; + int i, j; + + sSubBuild = *pBuilder; + sSubBuild.pOrderBy = 0; + sSubBuild.pOrSet = &sCur; + + WHERETRACE(0x200, ("Begin processing OR-clause %p\n", pTerm)); + for(pOrTerm=pOrWC->a; pOrTerm<pOrWCEnd; pOrTerm++){ + if( (pOrTerm->eOperator & WO_AND)!=0 ){ + sSubBuild.pWC = &pOrTerm->u.pAndInfo->wc; + }else if( pOrTerm->leftCursor==iCur ){ + tempWC.pWInfo = pWC->pWInfo; + tempWC.pOuter = pWC; + tempWC.op = TK_AND; + tempWC.nTerm = 1; + tempWC.a = pOrTerm; + sSubBuild.pWC = &tempWC; + }else{ + continue; + } + sCur.n = 0; +#ifdef WHERETRACE_ENABLED + WHERETRACE(0x200, ("OR-term %d of %p has %d subterms:\n", + (int)(pOrTerm-pOrWC->a), pTerm, sSubBuild.pWC->nTerm)); + if( sqlite3WhereTrace & 0x400 ){ + for(i=0; i<sSubBuild.pWC->nTerm; i++){ + whereTermPrint(&sSubBuild.pWC->a[i], i); + } + } +#endif +#ifndef SQLITE_OMIT_VIRTUALTABLE + if( IsVirtual(pItem->pTab) ){ + rc = whereLoopAddVirtual(&sSubBuild, mExtra, mUnusable); + }else +#endif + { + rc = whereLoopAddBtree(&sSubBuild, mExtra); + } + if( rc==SQLITE_OK ){ + rc = whereLoopAddOr(&sSubBuild, mExtra, mUnusable); + } + assert( rc==SQLITE_OK || sCur.n==0 ); + if( sCur.n==0 ){ + sSum.n = 0; + break; + }else if( once ){ + whereOrMove(&sSum, &sCur); + once = 0; + }else{ + WhereOrSet sPrev; + whereOrMove(&sPrev, &sSum); + sSum.n = 0; + for(i=0; i<sPrev.n; i++){ + for(j=0; j<sCur.n; j++){ + whereOrInsert(&sSum, sPrev.a[i].prereq | sCur.a[j].prereq, + sqlite3LogEstAdd(sPrev.a[i].rRun, sCur.a[j].rRun), + sqlite3LogEstAdd(sPrev.a[i].nOut, sCur.a[j].nOut)); + } + } + } + } + pNew->nLTerm = 1; + pNew->aLTerm[0] = pTerm; + pNew->wsFlags = WHERE_MULTI_OR; + pNew->rSetup = 0; + pNew->iSortIdx = 0; + memset(&pNew->u, 0, sizeof(pNew->u)); + for(i=0; rc==SQLITE_OK && i<sSum.n; i++){ + /* TUNING: Currently sSum.a[i].rRun is set to the sum of the costs + ** of all sub-scans required by the OR-scan. However, due to rounding + ** errors, it may be that the cost of the OR-scan is equal to its + ** most expensive sub-scan. Add the smallest possible penalty + ** (equivalent to multiplying the cost by 1.07) to ensure that + ** this does not happen. Otherwise, for WHERE clauses such as the + ** following where there is an index on "y": + ** + ** WHERE likelihood(x=?, 0.99) OR y=? + ** + ** the planner may elect to "OR" together a full-table scan and an + ** index lookup. And other similarly odd results. 
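+      **
+      ** (Numeric sketch, not a normative rule: rRun is a LogEst, i.e.
+      ** roughly 10*log2(cost), so adding 1 scales the estimate by
+      ** 2^(1/10), about 1.07.  A combined rRun of 40, about 16 cost
+      ** units, becomes 41, about 17.1, so the OR-scan does not end up
+      ** with the same cost as its most expensive sub-scan.)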
*/ + pNew->rRun = sSum.a[i].rRun + 1; + pNew->nOut = sSum.a[i].nOut; + pNew->prereq = sSum.a[i].prereq; + rc = whereLoopInsert(pBuilder, pNew); + } + WHERETRACE(0x200, ("End processing OR-clause %p\n", pTerm)); + } + } + return rc; +} + +/* +** Add all WhereLoop objects for all tables +*/ +static int whereLoopAddAll(WhereLoopBuilder *pBuilder){ + WhereInfo *pWInfo = pBuilder->pWInfo; + Bitmask mExtra = 0; + Bitmask mPrior = 0; + int iTab; + SrcList *pTabList = pWInfo->pTabList; + struct SrcList_item *pItem; + struct SrcList_item *pEnd = &pTabList->a[pWInfo->nLevel]; + sqlite3 *db = pWInfo->pParse->db; + int rc = SQLITE_OK; + WhereLoop *pNew; + u8 priorJointype = 0; + + /* Loop over the tables in the join, from left to right */ + pNew = pBuilder->pNew; + whereLoopInit(pNew); + for(iTab=0, pItem=pTabList->a; pItem<pEnd; iTab++, pItem++){ + Bitmask mUnusable = 0; + pNew->iTab = iTab; + pNew->maskSelf = sqlite3WhereGetMask(&pWInfo->sMaskSet, pItem->iCursor); + if( ((pItem->fg.jointype|priorJointype) & (JT_LEFT|JT_CROSS))!=0 ){ + /* This condition is true when pItem is the FROM clause term on the + ** right-hand-side of a LEFT or CROSS JOIN. */ + mExtra = mPrior; + } + priorJointype = pItem->fg.jointype; + if( IsVirtual(pItem->pTab) ){ + struct SrcList_item *p; + for(p=&pItem[1]; p<pEnd; p++){ + if( mUnusable || (p->fg.jointype & (JT_LEFT|JT_CROSS)) ){ + mUnusable |= sqlite3WhereGetMask(&pWInfo->sMaskSet, p->iCursor); + } + } + rc = whereLoopAddVirtual(pBuilder, mExtra, mUnusable); + }else{ + rc = whereLoopAddBtree(pBuilder, mExtra); + } + if( rc==SQLITE_OK ){ + rc = whereLoopAddOr(pBuilder, mExtra, mUnusable); + } + mPrior |= pNew->maskSelf; + if( rc || db->mallocFailed ) break; + } + + whereLoopClear(db, pNew); + return rc; +} + +/* +** Examine a WherePath (with the addition of the extra WhereLoop of the 5th +** parameters) to see if it outputs rows in the requested ORDER BY +** (or GROUP BY) without requiring a separate sort operation. Return N: +** +** N>0: N terms of the ORDER BY clause are satisfied +** N==0: No terms of the ORDER BY clause are satisfied +** N<0: Unknown yet how many terms of ORDER BY might be satisfied. +** +** Note that processing for WHERE_GROUPBY and WHERE_DISTINCTBY is not as +** strict. With GROUP BY and DISTINCT the only requirement is that +** equivalent rows appear immediately adjacent to one another. GROUP BY +** and DISTINCT do not require rows to appear in any particular order as long +** as equivalent rows are grouped together. Thus for GROUP BY and DISTINCT +** the pOrderBy terms can be matched in any order. With ORDER BY, the +** pOrderBy terms must be matched in strict left-to-right order. 
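+**
+** For example (a hypothetical schema, given as a sketch only): with
+**
+**     CREATE INDEX i1 ON t1(a,b);
+**
+** a scan of i1 cannot satisfy "ORDER BY b,a" absent equality
+** constraints on those columns, because the ORDER BY terms would have
+** to match the index columns left-to-right.  It can, however, satisfy
+** "GROUP BY b,a", since grouping only requires equivalent rows to be
+** adjacent and the terms may therefore be matched in either order.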
+*/ +static i8 wherePathSatisfiesOrderBy( + WhereInfo *pWInfo, /* The WHERE clause */ + ExprList *pOrderBy, /* ORDER BY or GROUP BY or DISTINCT clause to check */ + WherePath *pPath, /* The WherePath to check */ + u16 wctrlFlags, /* Might contain WHERE_GROUPBY or WHERE_DISTINCTBY */ + u16 nLoop, /* Number of entries in pPath->aLoop[] */ + WhereLoop *pLast, /* Add this WhereLoop to the end of pPath->aLoop[] */ + Bitmask *pRevMask /* OUT: Mask of WhereLoops to run in reverse order */ +){ + u8 revSet; /* True if rev is known */ + u8 rev; /* Composite sort order */ + u8 revIdx; /* Index sort order */ + u8 isOrderDistinct; /* All prior WhereLoops are order-distinct */ + u8 distinctColumns; /* True if the loop has UNIQUE NOT NULL columns */ + u8 isMatch; /* iColumn matches a term of the ORDER BY clause */ + u16 nKeyCol; /* Number of key columns in pIndex */ + u16 nColumn; /* Total number of ordered columns in the index */ + u16 nOrderBy; /* Number terms in the ORDER BY clause */ + int iLoop; /* Index of WhereLoop in pPath being processed */ + int i, j; /* Loop counters */ + int iCur; /* Cursor number for current WhereLoop */ + int iColumn; /* A column number within table iCur */ + WhereLoop *pLoop = 0; /* Current WhereLoop being processed. */ + WhereTerm *pTerm; /* A single term of the WHERE clause */ + Expr *pOBExpr; /* An expression from the ORDER BY clause */ + CollSeq *pColl; /* COLLATE function from an ORDER BY clause term */ + Index *pIndex; /* The index associated with pLoop */ + sqlite3 *db = pWInfo->pParse->db; /* Database connection */ + Bitmask obSat = 0; /* Mask of ORDER BY terms satisfied so far */ + Bitmask obDone; /* Mask of all ORDER BY terms */ + Bitmask orderDistinctMask; /* Mask of all well-ordered loops */ + Bitmask ready; /* Mask of inner loops */ + + /* + ** We say the WhereLoop is "one-row" if it generates no more than one + ** row of output. A WhereLoop is one-row if all of the following are true: + ** (a) All index columns match with WHERE_COLUMN_EQ. + ** (b) The index is unique + ** Any WhereLoop with an WHERE_COLUMN_EQ constraint on the rowid is one-row. + ** Every one-row WhereLoop will have the WHERE_ONEROW bit set in wsFlags. + ** + ** We say the WhereLoop is "order-distinct" if the set of columns from + ** that WhereLoop that are in the ORDER BY clause are different for every + ** row of the WhereLoop. Every one-row WhereLoop is automatically + ** order-distinct. A WhereLoop that has no columns in the ORDER BY clause + ** is not order-distinct. To be order-distinct is not quite the same as being + ** UNIQUE since a UNIQUE column or index can have multiple rows that + ** are NULL and NULL values are equivalent for the purpose of order-distinct. + ** To be order-distinct, the columns must be UNIQUE and NOT NULL. + ** + ** The rowid for a table is always UNIQUE and NOT NULL so whenever the + ** rowid appears in the ORDER BY clause, the corresponding WhereLoop is + ** automatically order-distinct. + */ + + assert( pOrderBy!=0 ); + if( nLoop && OptimizationDisabled(db, SQLITE_OrderByIdxJoin) ) return 0; + + nOrderBy = pOrderBy->nExpr; + testcase( nOrderBy==BMS-1 ); + if( nOrderBy>BMS-1 ) return 0; /* Cannot optimize overly large ORDER BYs */ + isOrderDistinct = 1; + obDone = MASKBIT(nOrderBy)-1; + orderDistinctMask = 0; + ready = 0; + for(iLoop=0; isOrderDistinct && obSat<obDone && iLoop<=nLoop; iLoop++){ + if( iLoop>0 ) ready |= pLoop->maskSelf; + pLoop = iLoop<nLoop ? 
pPath->aLoop[iLoop] : pLast; + if( pLoop->wsFlags & WHERE_VIRTUALTABLE ){ + if( pLoop->u.vtab.isOrdered ) obSat = obDone; + break; + } + iCur = pWInfo->pTabList->a[pLoop->iTab].iCursor; + + /* Mark off any ORDER BY term X that is a column in the table of + ** the current loop for which there is term in the WHERE + ** clause of the form X IS NULL or X=? that reference only outer + ** loops. + */ + for(i=0; i<nOrderBy; i++){ + if( MASKBIT(i) & obSat ) continue; + pOBExpr = sqlite3ExprSkipCollate(pOrderBy->a[i].pExpr); + if( pOBExpr->op!=TK_COLUMN ) continue; + if( pOBExpr->iTable!=iCur ) continue; + pTerm = sqlite3WhereFindTerm(&pWInfo->sWC, iCur, pOBExpr->iColumn, + ~ready, WO_EQ|WO_ISNULL|WO_IS, 0); + if( pTerm==0 ) continue; + if( (pTerm->eOperator&(WO_EQ|WO_IS))!=0 && pOBExpr->iColumn>=0 ){ + const char *z1, *z2; + pColl = sqlite3ExprCollSeq(pWInfo->pParse, pOrderBy->a[i].pExpr); + if( !pColl ) pColl = db->pDfltColl; + z1 = pColl->zName; + pColl = sqlite3ExprCollSeq(pWInfo->pParse, pTerm->pExpr); + if( !pColl ) pColl = db->pDfltColl; + z2 = pColl->zName; + if( sqlite3StrICmp(z1, z2)!=0 ) continue; + testcase( pTerm->pExpr->op==TK_IS ); + } + obSat |= MASKBIT(i); + } + + if( (pLoop->wsFlags & WHERE_ONEROW)==0 ){ + if( pLoop->wsFlags & WHERE_IPK ){ + pIndex = 0; + nKeyCol = 0; + nColumn = 1; + }else if( (pIndex = pLoop->u.btree.pIndex)==0 || pIndex->bUnordered ){ + return 0; + }else{ + nKeyCol = pIndex->nKeyCol; + nColumn = pIndex->nColumn; + assert( nColumn==nKeyCol+1 || !HasRowid(pIndex->pTable) ); + assert( pIndex->aiColumn[nColumn-1]==XN_ROWID + || !HasRowid(pIndex->pTable)); + isOrderDistinct = IsUniqueIndex(pIndex); + } + + /* Loop through all columns of the index and deal with the ones + ** that are not constrained by == or IN. + */ + rev = revSet = 0; + distinctColumns = 0; + for(j=0; j<nColumn; j++){ + u8 bOnce; /* True to run the ORDER BY search loop */ + + /* Skip over == and IS NULL terms */ + if( j<pLoop->u.btree.nEq + && pLoop->nSkip==0 + && ((i = pLoop->aLTerm[j]->eOperator) & (WO_EQ|WO_ISNULL|WO_IS))!=0 + ){ + if( i & WO_ISNULL ){ + testcase( isOrderDistinct ); + isOrderDistinct = 0; + } + continue; + } + + /* Get the column number in the table (iColumn) and sort order + ** (revIdx) for the j-th column of the index. 
+ */ + if( pIndex ){ + iColumn = pIndex->aiColumn[j]; + revIdx = pIndex->aSortOrder[j]; + if( iColumn==pIndex->pTable->iPKey ) iColumn = -1; + }else{ + iColumn = XN_ROWID; + revIdx = 0; + } + + /* An unconstrained column that might be NULL means that this + ** WhereLoop is not well-ordered + */ + if( isOrderDistinct + && iColumn>=0 + && j>=pLoop->u.btree.nEq + && pIndex->pTable->aCol[iColumn].notNull==0 + ){ + isOrderDistinct = 0; + } + + /* Find the ORDER BY term that corresponds to the j-th column + ** of the index and mark that ORDER BY term off + */ + bOnce = 1; + isMatch = 0; + for(i=0; bOnce && i<nOrderBy; i++){ + if( MASKBIT(i) & obSat ) continue; + pOBExpr = sqlite3ExprSkipCollate(pOrderBy->a[i].pExpr); + testcase( wctrlFlags & WHERE_GROUPBY ); + testcase( wctrlFlags & WHERE_DISTINCTBY ); + if( (wctrlFlags & (WHERE_GROUPBY|WHERE_DISTINCTBY))==0 ) bOnce = 0; + if( iColumn>=(-1) ){ + if( pOBExpr->op!=TK_COLUMN ) continue; + if( pOBExpr->iTable!=iCur ) continue; + if( pOBExpr->iColumn!=iColumn ) continue; + }else{ + if( sqlite3ExprCompare(pOBExpr,pIndex->aColExpr->a[j].pExpr,iCur) ){ + continue; + } + } + if( iColumn>=0 ){ + pColl = sqlite3ExprCollSeq(pWInfo->pParse, pOrderBy->a[i].pExpr); + if( !pColl ) pColl = db->pDfltColl; + if( sqlite3StrICmp(pColl->zName, pIndex->azColl[j])!=0 ) continue; + } + isMatch = 1; + break; + } + if( isMatch && (wctrlFlags & WHERE_GROUPBY)==0 ){ + /* Make sure the sort order is compatible in an ORDER BY clause. + ** Sort order is irrelevant for a GROUP BY clause. */ + if( revSet ){ + if( (rev ^ revIdx)!=pOrderBy->a[i].sortOrder ) isMatch = 0; + }else{ + rev = revIdx ^ pOrderBy->a[i].sortOrder; + if( rev ) *pRevMask |= MASKBIT(iLoop); + revSet = 1; + } + } + if( isMatch ){ + if( iColumn<0 ){ + testcase( distinctColumns==0 ); + distinctColumns = 1; + } + obSat |= MASKBIT(i); + }else{ + /* No match found */ + if( j==0 || j<nKeyCol ){ + testcase( isOrderDistinct!=0 ); + isOrderDistinct = 0; + } + break; + } + } /* end Loop over all index columns */ + if( distinctColumns ){ + testcase( isOrderDistinct==0 ); + isOrderDistinct = 1; + } + } /* end-if not one-row */ + + /* Mark off any other ORDER BY terms that reference pLoop */ + if( isOrderDistinct ){ + orderDistinctMask |= pLoop->maskSelf; + for(i=0; i<nOrderBy; i++){ + Expr *p; + Bitmask mTerm; + if( MASKBIT(i) & obSat ) continue; + p = pOrderBy->a[i].pExpr; + mTerm = sqlite3WhereExprUsage(&pWInfo->sMaskSet,p); + if( mTerm==0 && !sqlite3ExprIsConstant(p) ) continue; + if( (mTerm&~orderDistinctMask)==0 ){ + obSat |= MASKBIT(i); + } + } + } + } /* End the loop over all WhereLoops from outer-most down to inner-most */ + if( obSat==obDone ) return (i8)nOrderBy; + if( !isOrderDistinct ){ + for(i=nOrderBy-1; i>0; i--){ + Bitmask m = MASKBIT(i) - 1; + if( (obSat&m)==m ) return i; + } + return 0; + } + return -1; +} + + +/* +** If the WHERE_GROUPBY flag is set in the mask passed to sqlite3WhereBegin(), +** the planner assumes that the specified pOrderBy list is actually a GROUP +** BY clause - and so any order that groups rows as required satisfies the +** request. +** +** Normally, in this case it is not possible for the caller to determine +** whether or not the rows are really being delivered in sorted order, or +** just in some other order that provides the required grouping. However, +** if the WHERE_SORTBYGROUP flag is also passed to sqlite3WhereBegin(), then +** this function may be called on the returned WhereInfo object. 
It returns +** true if the rows really will be sorted in the specified order, or false +** otherwise. +** +** For example, assuming: +** +** CREATE INDEX i1 ON t1(x, Y); +** +** then +** +** SELECT * FROM t1 GROUP BY x,y ORDER BY x,y; -- IsSorted()==1 +** SELECT * FROM t1 GROUP BY y,x ORDER BY y,x; -- IsSorted()==0 +*/ +SQLITE_PRIVATE int sqlite3WhereIsSorted(WhereInfo *pWInfo){ + assert( pWInfo->wctrlFlags & WHERE_GROUPBY ); + assert( pWInfo->wctrlFlags & WHERE_SORTBYGROUP ); + return pWInfo->sorted; +} + +#ifdef WHERETRACE_ENABLED +/* For debugging use only: */ +static const char *wherePathName(WherePath *pPath, int nLoop, WhereLoop *pLast){ + static char zName[65]; + int i; + for(i=0; i<nLoop; i++){ zName[i] = pPath->aLoop[i]->cId; } + if( pLast ) zName[i++] = pLast->cId; + zName[i] = 0; + return zName; +} +#endif + +/* +** Return the cost of sorting nRow rows, assuming that the keys have +** nOrderby columns and that the first nSorted columns are already in +** order. +*/ +static LogEst whereSortingCost( + LogEst nRow, + int nOrderBy, + int nSorted +){ + /* TUNING: Estimated cost of a full external sort, where N is + ** the number of rows to sort is: + ** + ** cost = (3.0 * N * log(N)). + ** + ** Or, if the order-by clause has X terms but only the last Y + ** terms are out of order, then block-sorting will reduce the + ** sorting cost to: + ** + ** cost = (3.0 * N * log(N)) * (Y/X) + ** + ** The (Y/X) term is implemented using stack variable rScale + ** below. */ + LogEst rScale, rSortCost; + assert( nOrderBy>0 && 66==sqlite3LogEst(100) ); + rScale = sqlite3LogEst((nOrderBy-nSorted)*100/nOrderBy) - 66; + rSortCost = nRow + estLog(nRow) + rScale + 16; + return rSortCost; +} + +/* +** Given the list of WhereLoop objects at pWInfo->pLoops, this routine +** attempts to find the lowest cost path that visits each WhereLoop +** once. This path is then loaded into the pWInfo->a[].pWLoop fields. +** +** Assume that the total number of output rows that will need to be sorted +** will be nRowEst (in the 10*log2 representation). Or, ignore sorting +** costs if nRowEst==0. +** +** Return SQLITE_OK on success or SQLITE_NOMEM of a memory allocation +** error occurs. +*/ +static int wherePathSolver(WhereInfo *pWInfo, LogEst nRowEst){ + int mxChoice; /* Maximum number of simultaneous paths tracked */ + int nLoop; /* Number of terms in the join */ + Parse *pParse; /* Parsing context */ + sqlite3 *db; /* The database connection */ + int iLoop; /* Loop counter over the terms of the join */ + int ii, jj; /* Loop counters */ + int mxI = 0; /* Index of next entry to replace */ + int nOrderBy; /* Number of ORDER BY clause terms */ + LogEst mxCost = 0; /* Maximum cost of a set of paths */ + LogEst mxUnsorted = 0; /* Maximum unsorted cost of a set of path */ + int nTo, nFrom; /* Number of valid entries in aTo[] and aFrom[] */ + WherePath *aFrom; /* All nFrom paths at the previous level */ + WherePath *aTo; /* The nTo best paths at the current level */ + WherePath *pFrom; /* An element of aFrom[] that we are working on */ + WherePath *pTo; /* An element of aTo[] that we are working on */ + WhereLoop *pWLoop; /* One of the WhereLoop objects */ + WhereLoop **pX; /* Used to divy up the pSpace memory */ + LogEst *aSortCost = 0; /* Sorting and partial sorting costs */ + char *pSpace; /* Temporary memory used by this routine */ + int nSpace; /* Bytes of space allocated at pSpace */ + + pParse = pWInfo->pParse; + db = pParse->db; + nLoop = pWInfo->nLevel; + /* TUNING: For simple queries, only the best path is tracked. 
+ ** For 2-way joins, the 5 best paths are followed. + ** For joins of 3 or more tables, track the 10 best paths */ + mxChoice = (nLoop<=1) ? 1 : (nLoop==2 ? 5 : 10); + assert( nLoop<=pWInfo->pTabList->nSrc ); + WHERETRACE(0x002, ("---- begin solver. (nRowEst=%d)\n", nRowEst)); + + /* If nRowEst is zero and there is an ORDER BY clause, ignore it. In this + ** case the purpose of this call is to estimate the number of rows returned + ** by the overall query. Once this estimate has been obtained, the caller + ** will invoke this function a second time, passing the estimate as the + ** nRowEst parameter. */ + if( pWInfo->pOrderBy==0 || nRowEst==0 ){ + nOrderBy = 0; + }else{ + nOrderBy = pWInfo->pOrderBy->nExpr; + } + + /* Allocate and initialize space for aTo, aFrom and aSortCost[] */ + nSpace = (sizeof(WherePath)+sizeof(WhereLoop*)*nLoop)*mxChoice*2; + nSpace += sizeof(LogEst) * nOrderBy; + pSpace = sqlite3DbMallocRawNN(db, nSpace); + if( pSpace==0 ) return SQLITE_NOMEM; + aTo = (WherePath*)pSpace; + aFrom = aTo+mxChoice; + memset(aFrom, 0, sizeof(aFrom[0])); + pX = (WhereLoop**)(aFrom+mxChoice); + for(ii=mxChoice*2, pFrom=aTo; ii>0; ii--, pFrom++, pX += nLoop){ + pFrom->aLoop = pX; + } + if( nOrderBy ){ + /* If there is an ORDER BY clause and it is not being ignored, set up + ** space for the aSortCost[] array. Each element of the aSortCost array + ** is either zero - meaning it has not yet been initialized - or the + ** cost of sorting nRowEst rows of data where the first X terms of + ** the ORDER BY clause are already in order, where X is the array + ** index. */ + aSortCost = (LogEst*)pX; + memset(aSortCost, 0, sizeof(LogEst) * nOrderBy); + } + assert( aSortCost==0 || &pSpace[nSpace]==(char*)&aSortCost[nOrderBy] ); + assert( aSortCost!=0 || &pSpace[nSpace]==(char*)pX ); + + /* Seed the search with a single WherePath containing zero WhereLoops. + ** + ** TUNING: Do not let the number of iterations go above 28. If the cost + ** of computing an automatic index is not paid back within the first 28 + ** rows, then do not use the automatic index. */ + aFrom[0].nRow = MIN(pParse->nQueryLoop, 48); assert( 48==sqlite3LogEst(28) ); + nFrom = 1; + assert( aFrom[0].isOrdered==0 ); + if( nOrderBy ){ + /* If nLoop is zero, then there are no FROM terms in the query. Since + ** in this case the query may return a maximum of one row, the results + ** are already in the requested order. Set isOrdered to nOrderBy to + ** indicate this. Or, if nLoop is greater than zero, set isOrdered to + ** -1, indicating that the result set may or may not be ordered, + ** depending on the loops added to the current plan. */ + aFrom[0].isOrdered = nLoop>0 ? -1 : nOrderBy; + } + + /* Compute successively longer WherePaths using the previous generation + ** of WherePaths as the basis for the next. Keep track of the mxChoice + ** best paths at each generation */ + for(iLoop=0; iLoop<nLoop; iLoop++){ + nTo = 0; + for(ii=0, pFrom=aFrom; ii<nFrom; ii++, pFrom++){ + for(pWLoop=pWInfo->pLoops; pWLoop; pWLoop=pWLoop->pNextLoop){ + LogEst nOut; /* Rows visited by (pFrom+pWLoop) */ + LogEst rCost; /* Cost of path (pFrom+pWLoop) */ + LogEst rUnsorted; /* Unsorted cost of (pFrom+pWLoop) */ + i8 isOrdered = pFrom->isOrdered; /* isOrdered for (pFrom+pWLoop) */ + Bitmask maskNew; /* Mask of src visited by (..) */ + Bitmask revMask = 0; /* Mask of rev-order loops for (..) 
*/ + + if( (pWLoop->prereq & ~pFrom->maskLoop)!=0 ) continue; + if( (pWLoop->maskSelf & pFrom->maskLoop)!=0 ) continue; + /* At this point, pWLoop is a candidate to be the next loop. + ** Compute its cost */ + rUnsorted = sqlite3LogEstAdd(pWLoop->rSetup,pWLoop->rRun + pFrom->nRow); + rUnsorted = sqlite3LogEstAdd(rUnsorted, pFrom->rUnsorted); + nOut = pFrom->nRow + pWLoop->nOut; + maskNew = pFrom->maskLoop | pWLoop->maskSelf; + if( isOrdered<0 ){ + isOrdered = wherePathSatisfiesOrderBy(pWInfo, + pWInfo->pOrderBy, pFrom, pWInfo->wctrlFlags, + iLoop, pWLoop, &revMask); + }else{ + revMask = pFrom->revLoop; + } + if( isOrdered>=0 && isOrdered<nOrderBy ){ + if( aSortCost[isOrdered]==0 ){ + aSortCost[isOrdered] = whereSortingCost( + nRowEst, nOrderBy, isOrdered + ); + } + rCost = sqlite3LogEstAdd(rUnsorted, aSortCost[isOrdered]); + + WHERETRACE(0x002, + ("---- sort cost=%-3d (%d/%d) increases cost %3d to %-3d\n", + aSortCost[isOrdered], (nOrderBy-isOrdered), nOrderBy, + rUnsorted, rCost)); + }else{ + rCost = rUnsorted; + } + + /* Check to see if pWLoop should be added to the set of + ** mxChoice best-so-far paths. + ** + ** First look for an existing path among best-so-far paths + ** that covers the same set of loops and has the same isOrdered + ** setting as the current path candidate. + ** + ** The term "((pTo->isOrdered^isOrdered)&0x80)==0" is equivalent + ** to (pTo->isOrdered==(-1))==(isOrdered==(-1))" for the range + ** of legal values for isOrdered, -1..64. + */ + for(jj=0, pTo=aTo; jj<nTo; jj++, pTo++){ + if( pTo->maskLoop==maskNew + && ((pTo->isOrdered^isOrdered)&0x80)==0 + ){ + testcase( jj==nTo-1 ); + break; + } + } + if( jj>=nTo ){ + /* None of the existing best-so-far paths match the candidate. */ + if( nTo>=mxChoice + && (rCost>mxCost || (rCost==mxCost && rUnsorted>=mxUnsorted)) + ){ + /* The current candidate is no better than any of the mxChoice + ** paths currently in the best-so-far buffer. So discard + ** this candidate as not viable. */ +#ifdef WHERETRACE_ENABLED /* 0x4 */ + if( sqlite3WhereTrace&0x4 ){ + sqlite3DebugPrintf("Skip %s cost=%-3d,%3d order=%c\n", + wherePathName(pFrom, iLoop, pWLoop), rCost, nOut, + isOrdered>=0 ? isOrdered+'0' : '?'); + } +#endif + continue; + } + /* If we reach this points it means that the new candidate path + ** needs to be added to the set of best-so-far paths. */ + if( nTo<mxChoice ){ + /* Increase the size of the aTo set by one */ + jj = nTo++; + }else{ + /* New path replaces the prior worst to keep count below mxChoice */ + jj = mxI; + } + pTo = &aTo[jj]; +#ifdef WHERETRACE_ENABLED /* 0x4 */ + if( sqlite3WhereTrace&0x4 ){ + sqlite3DebugPrintf("New %s cost=%-3d,%3d order=%c\n", + wherePathName(pFrom, iLoop, pWLoop), rCost, nOut, + isOrdered>=0 ? isOrdered+'0' : '?'); + } +#endif + }else{ + /* Control reaches here if best-so-far path pTo=aTo[jj] covers the + ** same set of loops and has the sam isOrdered setting as the + ** candidate path. Check to see if the candidate should replace + ** pTo or if the candidate should be skipped */ + if( pTo->rCost<rCost || (pTo->rCost==rCost && pTo->nRow<=nOut) ){ +#ifdef WHERETRACE_ENABLED /* 0x4 */ + if( sqlite3WhereTrace&0x4 ){ + sqlite3DebugPrintf( + "Skip %s cost=%-3d,%3d order=%c", + wherePathName(pFrom, iLoop, pWLoop), rCost, nOut, + isOrdered>=0 ? isOrdered+'0' : '?'); + sqlite3DebugPrintf(" vs %s cost=%-3d,%d order=%c\n", + wherePathName(pTo, iLoop+1, 0), pTo->rCost, pTo->nRow, + pTo->isOrdered>=0 ? 
pTo->isOrdered+'0' : '?'); + } +#endif + /* Discard the candidate path from further consideration */ + testcase( pTo->rCost==rCost ); + continue; + } + testcase( pTo->rCost==rCost+1 ); + /* Control reaches here if the candidate path is better than the + ** pTo path. Replace pTo with the candidate. */ +#ifdef WHERETRACE_ENABLED /* 0x4 */ + if( sqlite3WhereTrace&0x4 ){ + sqlite3DebugPrintf( + "Update %s cost=%-3d,%3d order=%c", + wherePathName(pFrom, iLoop, pWLoop), rCost, nOut, + isOrdered>=0 ? isOrdered+'0' : '?'); + sqlite3DebugPrintf(" was %s cost=%-3d,%3d order=%c\n", + wherePathName(pTo, iLoop+1, 0), pTo->rCost, pTo->nRow, + pTo->isOrdered>=0 ? pTo->isOrdered+'0' : '?'); + } +#endif + } + /* pWLoop is a winner. Add it to the set of best so far */ + pTo->maskLoop = pFrom->maskLoop | pWLoop->maskSelf; + pTo->revLoop = revMask; + pTo->nRow = nOut; + pTo->rCost = rCost; + pTo->rUnsorted = rUnsorted; + pTo->isOrdered = isOrdered; + memcpy(pTo->aLoop, pFrom->aLoop, sizeof(WhereLoop*)*iLoop); + pTo->aLoop[iLoop] = pWLoop; + if( nTo>=mxChoice ){ + mxI = 0; + mxCost = aTo[0].rCost; + mxUnsorted = aTo[0].nRow; + for(jj=1, pTo=&aTo[1]; jj<mxChoice; jj++, pTo++){ + if( pTo->rCost>mxCost + || (pTo->rCost==mxCost && pTo->rUnsorted>mxUnsorted) + ){ + mxCost = pTo->rCost; + mxUnsorted = pTo->rUnsorted; + mxI = jj; + } + } + } + } + } + +#ifdef WHERETRACE_ENABLED /* >=2 */ + if( sqlite3WhereTrace & 0x02 ){ + sqlite3DebugPrintf("---- after round %d ----\n", iLoop); + for(ii=0, pTo=aTo; ii<nTo; ii++, pTo++){ + sqlite3DebugPrintf(" %s cost=%-3d nrow=%-3d order=%c", + wherePathName(pTo, iLoop+1, 0), pTo->rCost, pTo->nRow, + pTo->isOrdered>=0 ? (pTo->isOrdered+'0') : '?'); + if( pTo->isOrdered>0 ){ + sqlite3DebugPrintf(" rev=0x%llx\n", pTo->revLoop); + }else{ + sqlite3DebugPrintf("\n"); + } + } + } +#endif + + /* Swap the roles of aFrom and aTo for the next generation */ + pFrom = aTo; + aTo = aFrom; + aFrom = pFrom; + nFrom = nTo; + } + + if( nFrom==0 ){ + sqlite3ErrorMsg(pParse, "no query solution"); + sqlite3DbFree(db, pSpace); + return SQLITE_ERROR; + } + + /* Find the lowest cost path. 
pFrom will be left pointing to that path */ + pFrom = aFrom; + for(ii=1; ii<nFrom; ii++){ + if( pFrom->rCost>aFrom[ii].rCost ) pFrom = &aFrom[ii]; + } + assert( pWInfo->nLevel==nLoop ); + /* Load the lowest cost path into pWInfo */ + for(iLoop=0; iLoop<nLoop; iLoop++){ + WhereLevel *pLevel = pWInfo->a + iLoop; + pLevel->pWLoop = pWLoop = pFrom->aLoop[iLoop]; + pLevel->iFrom = pWLoop->iTab; + pLevel->iTabCur = pWInfo->pTabList->a[pLevel->iFrom].iCursor; + } + if( (pWInfo->wctrlFlags & WHERE_WANT_DISTINCT)!=0 + && (pWInfo->wctrlFlags & WHERE_DISTINCTBY)==0 + && pWInfo->eDistinct==WHERE_DISTINCT_NOOP + && nRowEst + ){ + Bitmask notUsed; + int rc = wherePathSatisfiesOrderBy(pWInfo, pWInfo->pResultSet, pFrom, + WHERE_DISTINCTBY, nLoop-1, pFrom->aLoop[nLoop-1], ¬Used); + if( rc==pWInfo->pResultSet->nExpr ){ + pWInfo->eDistinct = WHERE_DISTINCT_ORDERED; + } + } + if( pWInfo->pOrderBy ){ + if( pWInfo->wctrlFlags & WHERE_DISTINCTBY ){ + if( pFrom->isOrdered==pWInfo->pOrderBy->nExpr ){ + pWInfo->eDistinct = WHERE_DISTINCT_ORDERED; + } + }else{ + pWInfo->nOBSat = pFrom->isOrdered; + if( pWInfo->nOBSat<0 ) pWInfo->nOBSat = 0; + pWInfo->revMask = pFrom->revLoop; + } + if( (pWInfo->wctrlFlags & WHERE_SORTBYGROUP) + && pWInfo->nOBSat==pWInfo->pOrderBy->nExpr && nLoop>0 + ){ + Bitmask revMask = 0; + int nOrder = wherePathSatisfiesOrderBy(pWInfo, pWInfo->pOrderBy, + pFrom, 0, nLoop-1, pFrom->aLoop[nLoop-1], &revMask + ); + assert( pWInfo->sorted==0 ); + if( nOrder==pWInfo->pOrderBy->nExpr ){ + pWInfo->sorted = 1; + pWInfo->revMask = revMask; + } + } + } + + + pWInfo->nRowOut = pFrom->nRow; + + /* Free temporary memory and return success */ + sqlite3DbFree(db, pSpace); + return SQLITE_OK; +} + +/* +** Most queries use only a single table (they are not joins) and have +** simple == constraints against indexed fields. This routine attempts +** to plan those simple cases using much less ceremony than the +** general-purpose query planner, and thereby yield faster sqlite3_prepare() +** times for the common case. +** +** Return non-zero on success, if this query can be handled by this +** no-frills query planner. Return zero if this query needs the +** general-purpose query planner. +*/ +static int whereShortCut(WhereLoopBuilder *pBuilder){ + WhereInfo *pWInfo; + struct SrcList_item *pItem; + WhereClause *pWC; + WhereTerm *pTerm; + WhereLoop *pLoop; + int iCur; + int j; + Table *pTab; + Index *pIdx; + + pWInfo = pBuilder->pWInfo; + if( pWInfo->wctrlFlags & WHERE_FORCE_TABLE ) return 0; + assert( pWInfo->pTabList->nSrc>=1 ); + pItem = pWInfo->pTabList->a; + pTab = pItem->pTab; + if( IsVirtual(pTab) ) return 0; + if( pItem->fg.isIndexedBy ) return 0; + iCur = pItem->iCursor; + pWC = &pWInfo->sWC; + pLoop = pBuilder->pNew; + pLoop->wsFlags = 0; + pLoop->nSkip = 0; + pTerm = sqlite3WhereFindTerm(pWC, iCur, -1, 0, WO_EQ|WO_IS, 0); + if( pTerm ){ + testcase( pTerm->eOperator & WO_IS ); + pLoop->wsFlags = WHERE_COLUMN_EQ|WHERE_IPK|WHERE_ONEROW; + pLoop->aLTerm[0] = pTerm; + pLoop->nLTerm = 1; + pLoop->u.btree.nEq = 1; + /* TUNING: Cost of a rowid lookup is 10 */ + pLoop->rRun = 33; /* 33==sqlite3LogEst(10) */ + }else{ + for(pIdx=pTab->pIndex; pIdx; pIdx=pIdx->pNext){ + int opMask; + assert( pLoop->aLTermSpace==pLoop->aLTerm ); + if( !IsUniqueIndex(pIdx) + || pIdx->pPartIdxWhere!=0 + || pIdx->nKeyCol>ArraySize(pLoop->aLTermSpace) + ) continue; + opMask = pIdx->uniqNotNull ? 
(WO_EQ|WO_IS) : WO_EQ; + for(j=0; j<pIdx->nKeyCol; j++){ + pTerm = sqlite3WhereFindTerm(pWC, iCur, j, 0, opMask, pIdx); + if( pTerm==0 ) break; + testcase( pTerm->eOperator & WO_IS ); + pLoop->aLTerm[j] = pTerm; + } + if( j!=pIdx->nKeyCol ) continue; + pLoop->wsFlags = WHERE_COLUMN_EQ|WHERE_ONEROW|WHERE_INDEXED; + if( pIdx->isCovering || (pItem->colUsed & ~columnsInIndex(pIdx))==0 ){ + pLoop->wsFlags |= WHERE_IDX_ONLY; + } + pLoop->nLTerm = j; + pLoop->u.btree.nEq = j; + pLoop->u.btree.pIndex = pIdx; + /* TUNING: Cost of a unique index lookup is 15 */ + pLoop->rRun = 39; /* 39==sqlite3LogEst(15) */ + break; + } + } + if( pLoop->wsFlags ){ + pLoop->nOut = (LogEst)1; + pWInfo->a[0].pWLoop = pLoop; + pLoop->maskSelf = sqlite3WhereGetMask(&pWInfo->sMaskSet, iCur); + pWInfo->a[0].iTabCur = iCur; + pWInfo->nRowOut = 1; + if( pWInfo->pOrderBy ) pWInfo->nOBSat = pWInfo->pOrderBy->nExpr; + if( pWInfo->wctrlFlags & WHERE_WANT_DISTINCT ){ + pWInfo->eDistinct = WHERE_DISTINCT_UNIQUE; + } +#ifdef SQLITE_DEBUG + pLoop->cId = '0'; +#endif + return 1; + } + return 0; +} /* ** Generate the beginning of the loop used for WHERE clause processing. ** The return value is a pointer to an opaque structure that contains ** information needed to terminate the loop. Later, the calling routine @@ -91962,43 +127203,65 @@ ** fi ** end ** ** ORDER BY CLAUSE PROCESSING ** -** *ppOrderBy is a pointer to the ORDER BY clause of a SELECT statement, +** pOrderBy is a pointer to the ORDER BY clause (or the GROUP BY clause +** if the WHERE_GROUPBY flag is set in wctrlFlags) of a SELECT statement ** if there is one. If there is no ORDER BY clause or if this routine -** is called from an UPDATE or DELETE statement, then ppOrderBy is NULL. -** -** If an index can be used so that the natural output order of the table -** scan is correct for the ORDER BY clause, then that index is used and -** *ppOrderBy is set to NULL. This is an optimization that prevents an -** unnecessary sort of the result set if an index appropriate for the -** ORDER BY clause already exists. -** -** If the where clause loops cannot be arranged to provide the correct -** output order, then the *ppOrderBy is unchanged. +** is called from an UPDATE or DELETE statement, then pOrderBy is NULL. +** +** The iIdxCur parameter is the cursor number of an index. If +** WHERE_ONETABLE_ONLY is set, iIdxCur is the cursor number of an index +** to use for OR clause processing. The WHERE clause should use this +** specific cursor. If WHERE_ONEPASS_DESIRED is set, then iIdxCur is +** the first cursor in an array of cursors for all indices. iIdxCur should +** be used to compute the appropriate cursor depending on which index is +** used. */ SQLITE_PRIVATE WhereInfo *sqlite3WhereBegin( Parse *pParse, /* The parser context */ - SrcList *pTabList, /* A list of all tables to be scanned */ + SrcList *pTabList, /* FROM clause: A list of all tables to be scanned */ Expr *pWhere, /* The WHERE clause */ - ExprList **ppOrderBy, /* An ORDER BY clause, or NULL */ - u16 wctrlFlags /* One of the WHERE_* flags defined in sqliteInt.h */ + ExprList *pOrderBy, /* An ORDER BY (or GROUP BY) clause, or NULL */ + ExprList *pResultSet, /* Result set of the query */ + u16 wctrlFlags, /* One of the WHERE_* flags defined in sqliteInt.h */ + int iIdxCur /* If WHERE_ONETABLE_ONLY is set, index cursor number */ ){ - int i; /* Loop counter */ int nByteWInfo; /* Num. 
bytes allocated for WhereInfo struct */ int nTabList; /* Number of elements in pTabList */ WhereInfo *pWInfo; /* Will become the return value of this function */ Vdbe *v = pParse->pVdbe; /* The virtual database engine */ Bitmask notReady; /* Cursors that are not yet positioned */ + WhereLoopBuilder sWLB; /* The WhereLoop builder */ WhereMaskSet *pMaskSet; /* The expression mask set */ - WhereClause *pWC; /* Decomposition of the WHERE clause */ - struct SrcList_item *pTabItem; /* A single entry from pTabList */ - WhereLevel *pLevel; /* A single level in the pWInfo list */ - int iFrom; /* First unused FROM clause element */ - int andFlags; /* AND-ed combination of all pWC->a[].wtFlags */ + WhereLevel *pLevel; /* A single level in pWInfo->a[] */ + WhereLoop *pLoop; /* Pointer to a single WhereLoop object */ + int ii; /* Loop counter */ sqlite3 *db; /* Database connection */ + int rc; /* Return code */ + u8 bFordelete = 0; /* OPFLAG_FORDELETE or zero, as appropriate */ + + assert( (wctrlFlags & WHERE_ONEPASS_MULTIROW)==0 || ( + (wctrlFlags & WHERE_ONEPASS_DESIRED)!=0 + && (wctrlFlags & WHERE_OMIT_OPEN_CLOSE)==0 + )); + + /* Variable initialization */ + db = pParse->db; + memset(&sWLB, 0, sizeof(sWLB)); + + /* An ORDER/GROUP BY clause of more than 63 terms cannot be optimized */ + testcase( pOrderBy && pOrderBy->nExpr==BMS-1 ); + if( pOrderBy && pOrderBy->nExpr>=BMS ) pOrderBy = 0; + sWLB.pOrderBy = pOrderBy; + + /* Disable the DISTINCT optimization if SQLITE_DistinctOpt is set via + ** sqlite3_test_ctrl(SQLITE_TESTCTRL_OPTIMIZATIONS,...) */ + if( OptimizationDisabled(db, SQLITE_DistinctOpt) ){ + wctrlFlags &= ~WHERE_WANT_DISTINCT; + } /* The number of tables in the FROM clause is limited by the number of ** bits in a Bitmask */ testcase( pTabList->nSrc==BMS ); @@ -92019,422 +127282,380 @@ ** struct, the contents of WhereInfo.a[], the WhereClause structure ** and the WhereMaskSet structure. Since WhereClause contains an 8-byte ** field (type Bitmask) it must be aligned on an 8-byte boundary on ** some architectures. Hence the ROUND8() below. */ - db = pParse->db; nByteWInfo = ROUND8(sizeof(WhereInfo)+(nTabList-1)*sizeof(WhereLevel)); - pWInfo = sqlite3DbMallocZero(db, - nByteWInfo + - sizeof(WhereClause) + - sizeof(WhereMaskSet) - ); + pWInfo = sqlite3DbMallocZero(db, nByteWInfo + sizeof(WhereLoop)); if( db->mallocFailed ){ sqlite3DbFree(db, pWInfo); pWInfo = 0; goto whereBeginError; } + pWInfo->aiCurOnePass[0] = pWInfo->aiCurOnePass[1] = -1; pWInfo->nLevel = nTabList; pWInfo->pParse = pParse; pWInfo->pTabList = pTabList; - pWInfo->iBreak = sqlite3VdbeMakeLabel(v); - pWInfo->pWC = pWC = (WhereClause *)&((u8 *)pWInfo)[nByteWInfo]; + pWInfo->pOrderBy = pOrderBy; + pWInfo->pResultSet = pResultSet; + pWInfo->iBreak = pWInfo->iContinue = sqlite3VdbeMakeLabel(v); pWInfo->wctrlFlags = wctrlFlags; pWInfo->savedNQueryLoop = pParse->nQueryLoop; - pMaskSet = (WhereMaskSet*)&pWC[1]; + assert( pWInfo->eOnePass==ONEPASS_OFF ); /* ONEPASS defaults to OFF */ + pMaskSet = &pWInfo->sMaskSet; + sWLB.pWInfo = pWInfo; + sWLB.pWC = &pWInfo->sWC; + sWLB.pNew = (WhereLoop*)(((char*)pWInfo)+nByteWInfo); + assert( EIGHT_BYTE_ALIGNMENT(sWLB.pNew) ); + whereLoopInit(sWLB.pNew); +#ifdef SQLITE_DEBUG + sWLB.pNew->cId = '*'; +#endif /* Split the WHERE clause into separate subexpressions where each ** subexpression is separated by an AND operator. 
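  **
  ** For example (an illustration using made-up column names), a clause
  ** such as "a=5 AND (b=7 OR c=9) AND d>3" is split into the three
  ** top-level terms "a=5", "(b=7 OR c=9)" and "d>3"; the OR expression
  ** is kept together as a single term and is analyzed further below.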
*/ initMaskSet(pMaskSet); - whereClauseInit(pWC, pParse, pMaskSet); - sqlite3ExprCodeConstants(pParse, pWhere); - whereSplit(pWC, pWhere, TK_AND); + sqlite3WhereClauseInit(&pWInfo->sWC, pWInfo); + sqlite3WhereSplit(&pWInfo->sWC, pWhere, TK_AND); /* Special case: a WHERE clause that is constant. Evaluate the ** expression and either jump over all of the code or fall thru. */ - if( pWhere && (nTabList==0 || sqlite3ExprIsConstantNotJoin(pWhere)) ){ - sqlite3ExprIfFalse(pParse, pWhere, pWInfo->iBreak, SQLITE_JUMPIFNULL); - pWhere = 0; + for(ii=0; ii<sWLB.pWC->nTerm; ii++){ + if( nTabList==0 || sqlite3ExprIsConstantNotJoin(sWLB.pWC->a[ii].pExpr) ){ + sqlite3ExprIfFalse(pParse, sWLB.pWC->a[ii].pExpr, pWInfo->iBreak, + SQLITE_JUMPIFNULL); + sWLB.pWC->a[ii].wtFlags |= TERM_CODED; + } + } + + /* Special case: No FROM clause + */ + if( nTabList==0 ){ + if( pOrderBy ) pWInfo->nOBSat = pOrderBy->nExpr; + if( wctrlFlags & WHERE_WANT_DISTINCT ){ + pWInfo->eDistinct = WHERE_DISTINCT_UNIQUE; + } } /* Assign a bit from the bitmask to every term in the FROM clause. ** - ** When assigning bitmask values to FROM clause cursors, it must be - ** the case that if X is the bitmask for the N-th FROM clause term then - ** the bitmask for all FROM clause terms to the left of the N-th term - ** is (X-1). An expression from the ON clause of a LEFT JOIN can use - ** its Expr.iRightJoinTable value to find the bitmask of the right table - ** of the join. Subtracting one from the right table bitmask gives a - ** bitmask for all tables to the left of the join. Knowing the bitmask - ** for all tables to the left of a left join is important. Ticket #3015. + ** The N-th term of the FROM clause is assigned a bitmask of 1<<N. ** - ** Configure the WhereClause.vmask variable so that bits that correspond - ** to virtual table cursors are set. This is used to selectively disable - ** the OR-to-IN transformation in exprAnalyzeOrTerm(). It is not helpful - ** with virtual tables. + ** The rule of the previous sentence ensures thta if X is the bitmask for + ** a table T, then X-1 is the bitmask for all other tables to the left of T. + ** Knowing the bitmask for all tables to the left of a left join is + ** important. Ticket #3015. ** ** Note that bitmasks are created for all pTabList->nSrc tables in ** pTabList, not just the first nTabList tables. nTabList is normally ** equal to pTabList->nSrc but might be shortened to 1 if the ** WHERE_ONETABLE_ONLY flag is set. */ - assert( pWC->vmask==0 && pMaskSet->n==0 ); - for(i=0; i<pTabList->nSrc; i++){ - createMask(pMaskSet, pTabList->a[i].iCursor); -#ifndef SQLITE_OMIT_VIRTUALTABLE - if( ALWAYS(pTabList->a[i].pTab) && IsVirtual(pTabList->a[i].pTab) ){ - pWC->vmask |= ((Bitmask)1 << i); - } -#endif - } -#ifndef NDEBUG - { - Bitmask toTheLeft = 0; - for(i=0; i<pTabList->nSrc; i++){ - Bitmask m = getMask(pMaskSet, pTabList->a[i].iCursor); - assert( (m-1)==toTheLeft ); - toTheLeft |= m; - } - } -#endif - - /* Analyze all of the subexpressions. Note that exprAnalyze() might - ** add new virtual terms onto the end of the WHERE clause. We do not - ** want to analyze these virtual terms, so start analyzing at the end - ** and work forward so that the added virtual terms are never processed. - */ - exprAnalyzeAll(pTabList, pWC); - if( db->mallocFailed ){ - goto whereBeginError; - } - - /* Chose the best index to use for each table in the FROM clause. - ** - ** This loop fills in the following fields: - ** - ** pWInfo->a[].pIdx The index to use for this level of the loop. 
- ** pWInfo->a[].wsFlags WHERE_xxx flags associated with pIdx - ** pWInfo->a[].nEq The number of == and IN constraints - ** pWInfo->a[].iFrom Which term of the FROM clause is being coded - ** pWInfo->a[].iTabCur The VDBE cursor for the database table - ** pWInfo->a[].iIdxCur The VDBE cursor for the index - ** pWInfo->a[].pTerm When wsFlags==WO_OR, the OR-clause term - ** - ** This loop also figures out the nesting order of tables in the FROM - ** clause. - */ - notReady = ~(Bitmask)0; - pTabItem = pTabList->a; - pLevel = pWInfo->a; - andFlags = ~0; - WHERETRACE(("*** Optimizer Start ***\n")); - for(i=iFrom=0, pLevel=pWInfo->a; i<nTabList; i++, pLevel++){ - WhereCost bestPlan; /* Most efficient plan seen so far */ - Index *pIdx; /* Index for FROM table at pTabItem */ - int j; /* For looping over FROM tables */ - int bestJ = -1; /* The value of j */ - Bitmask m; /* Bitmask value for j or bestJ */ - int isOptimal; /* Iterator for optimal/non-optimal search */ - - memset(&bestPlan, 0, sizeof(bestPlan)); - bestPlan.rCost = SQLITE_BIG_DBL; - - /* Loop through the remaining entries in the FROM clause to find the - ** next nested loop. The loop tests all FROM clause entries - ** either once or twice. - ** - ** The first test is always performed if there are two or more entries - ** remaining and never performed if there is only one FROM clause entry - ** to choose from. The first test looks for an "optimal" scan. In - ** this context an optimal scan is one that uses the same strategy - ** for the given FROM clause entry as would be selected if the entry - ** were used as the innermost nested loop. In other words, a table - ** is chosen such that the cost of running that table cannot be reduced - ** by waiting for other tables to run first. This "optimal" test works - ** by first assuming that the FROM clause is on the inner loop and finding - ** its query plan, then checking to see if that query plan uses any - ** other FROM clause terms that are notReady. If no notReady terms are - ** used then the "optimal" query plan works. - ** - ** The second loop iteration is only performed if no optimal scan - ** strategies were found by the first loop. This 2nd iteration is used to - ** search for the lowest cost scan overall. - ** - ** Previous versions of SQLite performed only the second iteration - - ** the next outermost loop was always that with the lowest overall - ** cost. However, this meant that SQLite could select the wrong plan - ** for scripts such as the following: - ** - ** CREATE TABLE t1(a, b); - ** CREATE TABLE t2(c, d); - ** SELECT * FROM t2, t1 WHERE t2.rowid = t1.a; - ** - ** The best strategy is to iterate through table t1 first. However it - ** is not possible to determine this with a simple greedy algorithm. - ** However, since the cost of a linear scan through table t2 is the same - ** as the cost of a linear scan through table t1, a simple greedy - ** algorithm may choose to use t2 for the outer loop, which is a much - ** costlier approach. 
- */ - for(isOptimal=(iFrom<nTabList-1); isOptimal>=0; isOptimal--){ - Bitmask mask; /* Mask of tables not yet ready */ - for(j=iFrom, pTabItem=&pTabList->a[j]; j<nTabList; j++, pTabItem++){ - int doNotReorder; /* True if this table should not be reordered */ - WhereCost sCost; /* Cost information from best[Virtual]Index() */ - ExprList *pOrderBy; /* ORDER BY clause for index to optimize */ - - doNotReorder = (pTabItem->jointype & (JT_LEFT|JT_CROSS))!=0; - if( j!=iFrom && doNotReorder ) break; - m = getMask(pMaskSet, pTabItem->iCursor); - if( (m & notReady)==0 ){ - if( j==iFrom ) iFrom++; - continue; - } - mask = (isOptimal ? m : notReady); - pOrderBy = ((i==0 && ppOrderBy )?*ppOrderBy:0); - - assert( pTabItem->pTab ); -#ifndef SQLITE_OMIT_VIRTUALTABLE - if( IsVirtual(pTabItem->pTab) ){ - sqlite3_index_info **pp = &pWInfo->a[j].pIdxInfo; - bestVirtualIndex(pParse, pWC, pTabItem, mask, pOrderBy, &sCost, pp); - }else -#endif - { - bestBtreeIndex(pParse, pWC, pTabItem, mask, pOrderBy, &sCost); - } - assert( isOptimal || (sCost.used¬Ready)==0 ); - - if( (sCost.used¬Ready)==0 - && (bestJ<0 || sCost.rCost<bestPlan.rCost - || (sCost.rCost<=bestPlan.rCost && sCost.nRow<bestPlan.nRow)) - ){ - WHERETRACE(("... best so far with cost=%g and nRow=%g\n", - sCost.rCost, sCost.nRow)); - bestPlan = sCost; - bestJ = j; - } - if( doNotReorder ) break; - } - } - assert( bestJ>=0 ); - assert( notReady & getMask(pMaskSet, pTabList->a[bestJ].iCursor) ); - WHERETRACE(("*** Optimizer selects table %d for loop %d\n", bestJ, - pLevel-pWInfo->a)); - if( (bestPlan.plan.wsFlags & WHERE_ORDERBY)!=0 ){ - *ppOrderBy = 0; - } - andFlags &= bestPlan.plan.wsFlags; - pLevel->plan = bestPlan.plan; - testcase( bestPlan.plan.wsFlags & WHERE_INDEXED ); - testcase( bestPlan.plan.wsFlags & WHERE_TEMP_INDEX ); - if( bestPlan.plan.wsFlags & (WHERE_INDEXED|WHERE_TEMP_INDEX) ){ - pLevel->iIdxCur = pParse->nTab++; - }else{ - pLevel->iIdxCur = -1; - } - notReady &= ~getMask(pMaskSet, pTabList->a[bestJ].iCursor); - pLevel->iFrom = (u8)bestJ; - if( bestPlan.nRow>=(double)1 ) pParse->nQueryLoop *= bestPlan.nRow; - - /* Check that if the table scanned by this loop iteration had an - ** INDEXED BY clause attached to it, that the named index is being - ** used for the scan. If not, then query compilation has failed. - ** Return an error. - */ - pIdx = pTabList->a[bestJ].pIndex; - if( pIdx ){ - if( (bestPlan.plan.wsFlags & WHERE_INDEXED)==0 ){ - sqlite3ErrorMsg(pParse, "cannot use index: %s", pIdx->zName); - goto whereBeginError; - }else{ - /* If an INDEXED BY clause is used, the bestIndex() function is - ** guaranteed to find the index specified in the INDEXED BY clause - ** if it find an index at all. */ - assert( bestPlan.plan.u.pIdx==pIdx ); - } - } - } - WHERETRACE(("*** Optimizer Finished ***\n")); - if( pParse->nErr || db->mallocFailed ){ - goto whereBeginError; - } - - /* If the total query only selects a single row, then the ORDER BY - ** clause is irrelevant. - */ - if( (andFlags & WHERE_UNIQUE)!=0 && ppOrderBy ){ - *ppOrderBy = 0; - } + for(ii=0; ii<pTabList->nSrc; ii++){ + createMask(pMaskSet, pTabList->a[ii].iCursor); + sqlite3WhereTabFuncArgs(pParse, &pTabList->a[ii], &pWInfo->sWC); + } +#ifdef SQLITE_DEBUG + for(ii=0; ii<pTabList->nSrc; ii++){ + Bitmask m = sqlite3WhereGetMask(pMaskSet, pTabList->a[ii].iCursor); + assert( m==MASKBIT(ii) ); + } +#endif + + /* Analyze all of the subexpressions. 
*/ + sqlite3WhereExprAnalyze(pTabList, &pWInfo->sWC); + if( db->mallocFailed ) goto whereBeginError; + + if( wctrlFlags & WHERE_WANT_DISTINCT ){ + if( isDistinctRedundant(pParse, pTabList, &pWInfo->sWC, pResultSet) ){ + /* The DISTINCT marking is pointless. Ignore it. */ + pWInfo->eDistinct = WHERE_DISTINCT_UNIQUE; + }else if( pOrderBy==0 ){ + /* Try to ORDER BY the result set to make distinct processing easier */ + pWInfo->wctrlFlags |= WHERE_DISTINCTBY; + pWInfo->pOrderBy = pResultSet; + } + } + + /* Construct the WhereLoop objects */ + WHERETRACE(0xffff,("*** Optimizer Start *** (wctrlFlags: 0x%x)\n", + wctrlFlags)); +#if defined(WHERETRACE_ENABLED) + if( sqlite3WhereTrace & 0x100 ){ /* Display all terms of the WHERE clause */ + int i; + for(i=0; i<sWLB.pWC->nTerm; i++){ + whereTermPrint(&sWLB.pWC->a[i], i); + } + } +#endif + + if( nTabList!=1 || whereShortCut(&sWLB)==0 ){ + rc = whereLoopAddAll(&sWLB); + if( rc ) goto whereBeginError; + +#ifdef WHERETRACE_ENABLED + if( sqlite3WhereTrace ){ /* Display all of the WhereLoop objects */ + WhereLoop *p; + int i; + static const char zLabel[] = "0123456789abcdefghijklmnopqrstuvwyxz" + "ABCDEFGHIJKLMNOPQRSTUVWYXZ"; + for(p=pWInfo->pLoops, i=0; p; p=p->pNextLoop, i++){ + p->cId = zLabel[i%sizeof(zLabel)]; + whereLoopPrint(p, sWLB.pWC); + } + } +#endif + + wherePathSolver(pWInfo, 0); + if( db->mallocFailed ) goto whereBeginError; + if( pWInfo->pOrderBy ){ + wherePathSolver(pWInfo, pWInfo->nRowOut+1); + if( db->mallocFailed ) goto whereBeginError; + } + } + if( pWInfo->pOrderBy==0 && (db->flags & SQLITE_ReverseOrder)!=0 ){ + pWInfo->revMask = (Bitmask)(-1); + } + if( pParse->nErr || NEVER(db->mallocFailed) ){ + goto whereBeginError; + } +#ifdef WHERETRACE_ENABLED + if( sqlite3WhereTrace ){ + sqlite3DebugPrintf("---- Solution nRow=%d", pWInfo->nRowOut); + if( pWInfo->nOBSat>0 ){ + sqlite3DebugPrintf(" ORDERBY=%d,0x%llx", pWInfo->nOBSat, pWInfo->revMask); + } + switch( pWInfo->eDistinct ){ + case WHERE_DISTINCT_UNIQUE: { + sqlite3DebugPrintf(" DISTINCT=unique"); + break; + } + case WHERE_DISTINCT_ORDERED: { + sqlite3DebugPrintf(" DISTINCT=ordered"); + break; + } + case WHERE_DISTINCT_UNORDERED: { + sqlite3DebugPrintf(" DISTINCT=unordered"); + break; + } + } + sqlite3DebugPrintf("\n"); + for(ii=0; ii<pWInfo->nLevel; ii++){ + whereLoopPrint(pWInfo->a[ii].pWLoop, sWLB.pWC); + } + } +#endif + /* Attempt to omit tables from the join that do not effect the result */ + if( pWInfo->nLevel>=2 + && pResultSet!=0 + && OptimizationEnabled(db, SQLITE_OmitNoopJoin) + ){ + Bitmask tabUsed = sqlite3WhereExprListUsage(pMaskSet, pResultSet); + if( sWLB.pOrderBy ){ + tabUsed |= sqlite3WhereExprListUsage(pMaskSet, sWLB.pOrderBy); + } + while( pWInfo->nLevel>=2 ){ + WhereTerm *pTerm, *pEnd; + pLoop = pWInfo->a[pWInfo->nLevel-1].pWLoop; + if( (pWInfo->pTabList->a[pLoop->iTab].fg.jointype & JT_LEFT)==0 ) break; + if( (wctrlFlags & WHERE_WANT_DISTINCT)==0 + && (pLoop->wsFlags & WHERE_ONEROW)==0 + ){ + break; + } + if( (tabUsed & pLoop->maskSelf)!=0 ) break; + pEnd = sWLB.pWC->a + sWLB.pWC->nTerm; + for(pTerm=sWLB.pWC->a; pTerm<pEnd; pTerm++){ + if( (pTerm->prereqAll & pLoop->maskSelf)!=0 + && !ExprHasProperty(pTerm->pExpr, EP_FromJoin) + ){ + break; + } + } + if( pTerm<pEnd ) break; + WHERETRACE(0xffff, ("-> drop loop %c not used\n", pLoop->cId)); + pWInfo->nLevel--; + nTabList--; + } + } + WHERETRACE(0xffff,("*** Optimizer Finished ***\n")); + pWInfo->pParse->nQueryLoop += pWInfo->nRowOut; /* If the caller is an UPDATE or DELETE statement that is requesting ** to use a 
one-pass algorithm, determine if this is appropriate. - ** The one-pass algorithm only works if the WHERE clause constraints - ** the statement to update a single row. */ assert( (wctrlFlags & WHERE_ONEPASS_DESIRED)==0 || pWInfo->nLevel==1 ); - if( (wctrlFlags & WHERE_ONEPASS_DESIRED)!=0 && (andFlags & WHERE_UNIQUE)!=0 ){ - pWInfo->okOnePass = 1; - pWInfo->a[0].plan.wsFlags &= ~WHERE_IDX_ONLY; + if( (wctrlFlags & WHERE_ONEPASS_DESIRED)!=0 ){ + int wsFlags = pWInfo->a[0].pWLoop->wsFlags; + int bOnerow = (wsFlags & WHERE_ONEROW)!=0; + if( bOnerow + || ((wctrlFlags & WHERE_ONEPASS_MULTIROW)!=0 + && 0==(wsFlags & WHERE_VIRTUALTABLE)) + ){ + pWInfo->eOnePass = bOnerow ? ONEPASS_SINGLE : ONEPASS_MULTI; + if( HasRowid(pTabList->a[0].pTab) && (wsFlags & WHERE_IDX_ONLY) ){ + if( wctrlFlags & WHERE_ONEPASS_MULTIROW ){ + bFordelete = OPFLAG_FORDELETE; + } + pWInfo->a[0].pWLoop->wsFlags = (wsFlags & ~WHERE_IDX_ONLY); + } + } } /* Open all tables in the pTabList and any indices selected for ** searching those tables. */ - sqlite3CodeVerifySchema(pParse, -1); /* Insert the cookie verifier Goto */ - notReady = ~(Bitmask)0; - for(i=0, pLevel=pWInfo->a; i<nTabList; i++, pLevel++){ + for(ii=0, pLevel=pWInfo->a; ii<nTabList; ii++, pLevel++){ Table *pTab; /* Table to open */ int iDb; /* Index of database containing table/index */ - -#ifndef SQLITE_OMIT_EXPLAIN - if( pParse->explain==2 ){ - char *zMsg; - struct SrcList_item *pItem = &pTabList->a[pLevel->iFrom]; - zMsg = sqlite3MPrintf(db, "TABLE %s", pItem->zName); - if( pItem->zAlias ){ - zMsg = sqlite3MAppendf(db, zMsg, "%s AS %s", zMsg, pItem->zAlias); - } - if( (pLevel->plan.wsFlags & WHERE_TEMP_INDEX)!=0 ){ - zMsg = sqlite3MAppendf(db, zMsg, "%s WITH AUTOMATIC INDEX", zMsg); - }else if( (pLevel->plan.wsFlags & WHERE_INDEXED)!=0 ){ - zMsg = sqlite3MAppendf(db, zMsg, "%s WITH INDEX %s", - zMsg, pLevel->plan.u.pIdx->zName); - }else if( pLevel->plan.wsFlags & WHERE_MULTI_OR ){ - zMsg = sqlite3MAppendf(db, zMsg, "%s VIA MULTI-INDEX UNION", zMsg); - }else if( pLevel->plan.wsFlags & (WHERE_ROWID_EQ|WHERE_ROWID_RANGE) ){ - zMsg = sqlite3MAppendf(db, zMsg, "%s USING PRIMARY KEY", zMsg); - } -#ifndef SQLITE_OMIT_VIRTUALTABLE - else if( (pLevel->plan.wsFlags & WHERE_VIRTUALTABLE)!=0 ){ - sqlite3_index_info *pVtabIdx = pLevel->plan.u.pVtabIdx; - zMsg = sqlite3MAppendf(db, zMsg, "%s VIRTUAL TABLE INDEX %d:%s", zMsg, - pVtabIdx->idxNum, pVtabIdx->idxStr); - } -#endif - if( pLevel->plan.wsFlags & WHERE_ORDERBY ){ - zMsg = sqlite3MAppendf(db, zMsg, "%s ORDER BY", zMsg); - } - sqlite3VdbeAddOp4(v, OP_Explain, i, pLevel->iFrom, 0, zMsg, P4_DYNAMIC); - } -#endif /* SQLITE_OMIT_EXPLAIN */ + struct SrcList_item *pTabItem; + pTabItem = &pTabList->a[pLevel->iFrom]; pTab = pTabItem->pTab; - pLevel->iTabCur = pTabItem->iCursor; iDb = sqlite3SchemaToIndex(db, pTab->pSchema); + pLoop = pLevel->pWLoop; if( (pTab->tabFlags & TF_Ephemeral)!=0 || pTab->pSelect ){ /* Do nothing */ }else #ifndef SQLITE_OMIT_VIRTUALTABLE - if( (pLevel->plan.wsFlags & WHERE_VIRTUALTABLE)!=0 ){ + if( (pLoop->wsFlags & WHERE_VIRTUALTABLE)!=0 ){ const char *pVTab = (const char *)sqlite3GetVTable(db, pTab); int iCur = pTabItem->iCursor; sqlite3VdbeAddOp4(v, OP_VOpen, iCur, 0, 0, pVTab, P4_VTAB); + }else if( IsVirtual(pTab) ){ + /* noop */ }else #endif - if( (pLevel->plan.wsFlags & WHERE_IDX_ONLY)==0 - && (wctrlFlags & WHERE_OMIT_OPEN)==0 ){ - int op = pWInfo->okOnePass ? 
OP_OpenWrite : OP_OpenRead; + if( (pLoop->wsFlags & WHERE_IDX_ONLY)==0 + && (wctrlFlags & WHERE_OMIT_OPEN_CLOSE)==0 ){ + int op = OP_OpenRead; + if( pWInfo->eOnePass!=ONEPASS_OFF ){ + op = OP_OpenWrite; + pWInfo->aiCurOnePass[0] = pTabItem->iCursor; + }; sqlite3OpenTable(pParse, pTabItem->iCursor, iDb, pTab, op); - testcase( pTab->nCol==BMS-1 ); - testcase( pTab->nCol==BMS ); - if( !pWInfo->okOnePass && pTab->nCol<BMS ){ + assert( pTabItem->iCursor==pLevel->iTabCur ); + testcase( pWInfo->eOnePass==ONEPASS_OFF && pTab->nCol==BMS-1 ); + testcase( pWInfo->eOnePass==ONEPASS_OFF && pTab->nCol==BMS ); + if( pWInfo->eOnePass==ONEPASS_OFF && pTab->nCol<BMS && HasRowid(pTab) ){ Bitmask b = pTabItem->colUsed; int n = 0; for(; b; b=b>>1, n++){} - sqlite3VdbeChangeP4(v, sqlite3VdbeCurrentAddr(v)-1, - SQLITE_INT_TO_PTR(n), P4_INT32); + sqlite3VdbeChangeP4(v, -1, SQLITE_INT_TO_PTR(n), P4_INT32); assert( n<=pTab->nCol ); } +#ifdef SQLITE_ENABLE_CURSOR_HINTS + if( pLoop->u.btree.pIndex!=0 ){ + sqlite3VdbeChangeP5(v, OPFLAG_SEEKEQ|bFordelete); + }else +#endif + { + sqlite3VdbeChangeP5(v, bFordelete); + } +#ifdef SQLITE_ENABLE_COLUMN_USED_MASK + sqlite3VdbeAddOp4Dup8(v, OP_ColumnsUsed, pTabItem->iCursor, 0, 0, + (const u8*)&pTabItem->colUsed, P4_INT64); +#endif }else{ sqlite3TableLock(pParse, iDb, pTab->tnum, 0, pTab->zName); } -#ifndef SQLITE_OMIT_AUTOMATIC_INDEX - if( (pLevel->plan.wsFlags & WHERE_TEMP_INDEX)!=0 ){ - constructAutomaticIndex(pParse, pWC, pTabItem, notReady, pLevel); - }else -#endif - if( (pLevel->plan.wsFlags & WHERE_INDEXED)!=0 ){ - Index *pIx = pLevel->plan.u.pIdx; - KeyInfo *pKey = sqlite3IndexKeyinfo(pParse, pIx); - int iIdxCur = pLevel->iIdxCur; + if( pLoop->wsFlags & WHERE_INDEXED ){ + Index *pIx = pLoop->u.btree.pIndex; + int iIndexCur; + int op = OP_OpenRead; + /* iIdxCur is always set if to a positive value if ONEPASS is possible */ + assert( iIdxCur!=0 || (pWInfo->wctrlFlags & WHERE_ONEPASS_DESIRED)==0 ); + if( !HasRowid(pTab) && IsPrimaryKeyIndex(pIx) + && (wctrlFlags & WHERE_ONETABLE_ONLY)!=0 + ){ + /* This is one term of an OR-optimization using the PRIMARY KEY of a + ** WITHOUT ROWID table. 
No need for a separate index */ + iIndexCur = pLevel->iTabCur; + op = 0; + }else if( pWInfo->eOnePass!=ONEPASS_OFF ){ + Index *pJ = pTabItem->pTab->pIndex; + iIndexCur = iIdxCur; + assert( wctrlFlags & WHERE_ONEPASS_DESIRED ); + while( ALWAYS(pJ) && pJ!=pIx ){ + iIndexCur++; + pJ = pJ->pNext; + } + op = OP_OpenWrite; + pWInfo->aiCurOnePass[1] = iIndexCur; + }else if( iIdxCur && (wctrlFlags & WHERE_ONETABLE_ONLY)!=0 ){ + iIndexCur = iIdxCur; + if( wctrlFlags & WHERE_REOPEN_IDX ) op = OP_ReopenIdx; + }else{ + iIndexCur = pParse->nTab++; + } + pLevel->iIdxCur = iIndexCur; assert( pIx->pSchema==pTab->pSchema ); - assert( iIdxCur>=0 ); - sqlite3VdbeAddOp4(v, OP_OpenRead, iIdxCur, pIx->tnum, iDb, - (char*)pKey, P4_KEYINFO_HANDOFF); - VdbeComment((v, "%s", pIx->zName)); + assert( iIndexCur>=0 ); + if( op ){ + sqlite3VdbeAddOp3(v, op, iIndexCur, pIx->tnum, iDb); + sqlite3VdbeSetP4KeyInfo(pParse, pIx); + if( (pLoop->wsFlags & WHERE_CONSTRAINT)!=0 + && (pLoop->wsFlags & (WHERE_COLUMN_RANGE|WHERE_SKIPSCAN))==0 + && (pWInfo->wctrlFlags&WHERE_ORDERBY_MIN)==0 + ){ + sqlite3VdbeChangeP5(v, OPFLAG_SEEKEQ); /* Hint to COMDB2 */ + } + VdbeComment((v, "%s", pIx->zName)); +#ifdef SQLITE_ENABLE_COLUMN_USED_MASK + { + u64 colUsed = 0; + int ii, jj; + for(ii=0; ii<pIx->nColumn; ii++){ + jj = pIx->aiColumn[ii]; + if( jj<0 ) continue; + if( jj>63 ) jj = 63; + if( (pTabItem->colUsed & MASKBIT(jj))==0 ) continue; + colUsed |= ((u64)1)<<(ii<63 ? ii : 63); + } + sqlite3VdbeAddOp4Dup8(v, OP_ColumnsUsed, iIndexCur, 0, 0, + (u8*)&colUsed, P4_INT64); + } +#endif /* SQLITE_ENABLE_COLUMN_USED_MASK */ + } } - sqlite3CodeVerifySchema(pParse, iDb); - notReady &= ~getMask(pWC->pMaskSet, pTabItem->iCursor); + if( iDb>=0 ) sqlite3CodeVerifySchema(pParse, iDb); } pWInfo->iTop = sqlite3VdbeCurrentAddr(v); if( db->mallocFailed ) goto whereBeginError; /* Generate the code to do the search. Each iteration of the for ** loop below generates code for a single nested loop of the VM ** program. */ notReady = ~(Bitmask)0; - for(i=0; i<nTabList; i++){ - notReady = codeOneLoopStart(pWInfo, i, wctrlFlags, notReady); - pWInfo->iContinue = pWInfo->a[i].addrCont; - } - -#ifdef SQLITE_TEST /* For testing and debugging use only */ - /* Record in the query plan information about the current table - ** and the index used to access it (if any). If the table itself - ** is not used, its name is just '{}'. If no index is used - ** the index is listed as "{}". If the primary key is used the - ** index name is '*'. 
- */ - for(i=0; i<nTabList; i++){ - char *z; - int n; - pLevel = &pWInfo->a[i]; - pTabItem = &pTabList->a[pLevel->iFrom]; - z = pTabItem->zAlias; - if( z==0 ) z = pTabItem->pTab->zName; - n = sqlite3Strlen30(z); - if( n+nQPlan < sizeof(sqlite3_query_plan)-10 ){ - if( pLevel->plan.wsFlags & WHERE_IDX_ONLY ){ - memcpy(&sqlite3_query_plan[nQPlan], "{}", 2); - nQPlan += 2; - }else{ - memcpy(&sqlite3_query_plan[nQPlan], z, n); - nQPlan += n; - } - sqlite3_query_plan[nQPlan++] = ' '; - } - testcase( pLevel->plan.wsFlags & WHERE_ROWID_EQ ); - testcase( pLevel->plan.wsFlags & WHERE_ROWID_RANGE ); - if( pLevel->plan.wsFlags & (WHERE_ROWID_EQ|WHERE_ROWID_RANGE) ){ - memcpy(&sqlite3_query_plan[nQPlan], "* ", 2); - nQPlan += 2; - }else if( (pLevel->plan.wsFlags & WHERE_INDEXED)!=0 ){ - n = sqlite3Strlen30(pLevel->plan.u.pIdx->zName); - if( n+nQPlan < sizeof(sqlite3_query_plan)-2 ){ - memcpy(&sqlite3_query_plan[nQPlan], pLevel->plan.u.pIdx->zName, n); - nQPlan += n; - sqlite3_query_plan[nQPlan++] = ' '; - } - }else{ - memcpy(&sqlite3_query_plan[nQPlan], "{} ", 3); - nQPlan += 3; - } - } - while( nQPlan>0 && sqlite3_query_plan[nQPlan-1]==' ' ){ - sqlite3_query_plan[--nQPlan] = 0; - } - sqlite3_query_plan[nQPlan] = 0; - nQPlan = 0; -#endif /* SQLITE_TEST // Testing and debugging use only */ - - /* Record the continuation address in the WhereInfo structure. Then - ** clean up and return. - */ + for(ii=0; ii<nTabList; ii++){ + int addrExplain; + int wsFlags; + pLevel = &pWInfo->a[ii]; + wsFlags = pLevel->pWLoop->wsFlags; +#ifndef SQLITE_OMIT_AUTOMATIC_INDEX + if( (pLevel->pWLoop->wsFlags & WHERE_AUTO_INDEX)!=0 ){ + constructAutomaticIndex(pParse, &pWInfo->sWC, + &pTabList->a[pLevel->iFrom], notReady, pLevel); + if( db->mallocFailed ) goto whereBeginError; + } +#endif + addrExplain = sqlite3WhereExplainOneScan( + pParse, pTabList, pLevel, ii, pLevel->iFrom, wctrlFlags + ); + pLevel->addrBody = sqlite3VdbeCurrentAddr(v); + notReady = sqlite3WhereCodeOneLoopStart(pWInfo, ii, notReady); + pWInfo->iContinue = pLevel->addrCont; + if( (wsFlags&WHERE_MULTI_OR)==0 && (wctrlFlags&WHERE_ONETABLE_ONLY)==0 ){ + sqlite3WhereAddScanStatus(v, pTabList, pLevel, addrExplain); + } + } + + /* Done. */ + VdbeModuleComment((v, "Begin WHERE-core")); return pWInfo; /* Jump here if malloc fails */ whereBeginError: if( pWInfo ){ @@ -92451,113 +127672,169 @@ SQLITE_PRIVATE void sqlite3WhereEnd(WhereInfo *pWInfo){ Parse *pParse = pWInfo->pParse; Vdbe *v = pParse->pVdbe; int i; WhereLevel *pLevel; + WhereLoop *pLoop; SrcList *pTabList = pWInfo->pTabList; sqlite3 *db = pParse->db; /* Generate loop termination code. 
*/ + VdbeModuleComment((v, "End WHERE-core")); sqlite3ExprCacheClear(pParse); for(i=pWInfo->nLevel-1; i>=0; i--){ + int addr; pLevel = &pWInfo->a[i]; + pLoop = pLevel->pWLoop; sqlite3VdbeResolveLabel(v, pLevel->addrCont); if( pLevel->op!=OP_Noop ){ - sqlite3VdbeAddOp2(v, pLevel->op, pLevel->p1, pLevel->p2); + sqlite3VdbeAddOp3(v, pLevel->op, pLevel->p1, pLevel->p2, pLevel->p3); sqlite3VdbeChangeP5(v, pLevel->p5); + VdbeCoverage(v); + VdbeCoverageIf(v, pLevel->op==OP_Next); + VdbeCoverageIf(v, pLevel->op==OP_Prev); + VdbeCoverageIf(v, pLevel->op==OP_VNext); } - if( pLevel->plan.wsFlags & WHERE_IN_ABLE && pLevel->u.in.nIn>0 ){ + if( pLoop->wsFlags & WHERE_IN_ABLE && pLevel->u.in.nIn>0 ){ struct InLoop *pIn; int j; sqlite3VdbeResolveLabel(v, pLevel->addrNxt); for(j=pLevel->u.in.nIn, pIn=&pLevel->u.in.aInLoop[j-1]; j>0; j--, pIn--){ sqlite3VdbeJumpHere(v, pIn->addrInTop+1); - sqlite3VdbeAddOp2(v, OP_Next, pIn->iCur, pIn->addrInTop); + sqlite3VdbeAddOp2(v, pIn->eEndLoopOp, pIn->iCur, pIn->addrInTop); + VdbeCoverage(v); + VdbeCoverageIf(v, pIn->eEndLoopOp==OP_PrevIfOpen); + VdbeCoverageIf(v, pIn->eEndLoopOp==OP_NextIfOpen); sqlite3VdbeJumpHere(v, pIn->addrInTop-1); } - sqlite3DbFree(db, pLevel->u.in.aInLoop); } sqlite3VdbeResolveLabel(v, pLevel->addrBrk); + if( pLevel->addrSkip ){ + sqlite3VdbeGoto(v, pLevel->addrSkip); + VdbeComment((v, "next skip-scan on %s", pLoop->u.btree.pIndex->zName)); + sqlite3VdbeJumpHere(v, pLevel->addrSkip); + sqlite3VdbeJumpHere(v, pLevel->addrSkip-2); + } +#ifndef SQLITE_LIKE_DOESNT_MATCH_BLOBS + if( pLevel->addrLikeRep ){ + int op; + if( sqlite3VdbeGetOp(v, pLevel->addrLikeRep-1)->p1 ){ + op = OP_DecrJumpZero; + }else{ + op = OP_JumpZeroIncr; + } + sqlite3VdbeAddOp2(v, op, pLevel->iLikeRepCntr, pLevel->addrLikeRep); + VdbeCoverage(v); + } +#endif if( pLevel->iLeftJoin ){ - int addr; - addr = sqlite3VdbeAddOp1(v, OP_IfPos, pLevel->iLeftJoin); - assert( (pLevel->plan.wsFlags & WHERE_IDX_ONLY)==0 - || (pLevel->plan.wsFlags & WHERE_INDEXED)!=0 ); - if( (pLevel->plan.wsFlags & WHERE_IDX_ONLY)==0 ){ + addr = sqlite3VdbeAddOp1(v, OP_IfPos, pLevel->iLeftJoin); VdbeCoverage(v); + assert( (pLoop->wsFlags & WHERE_IDX_ONLY)==0 + || (pLoop->wsFlags & WHERE_INDEXED)!=0 ); + if( (pLoop->wsFlags & WHERE_IDX_ONLY)==0 ){ sqlite3VdbeAddOp1(v, OP_NullRow, pTabList->a[i].iCursor); } - if( pLevel->iIdxCur>=0 ){ + if( pLoop->wsFlags & WHERE_INDEXED ){ sqlite3VdbeAddOp1(v, OP_NullRow, pLevel->iIdxCur); } if( pLevel->op==OP_Return ){ sqlite3VdbeAddOp2(v, OP_Gosub, pLevel->p1, pLevel->addrFirst); }else{ - sqlite3VdbeAddOp2(v, OP_Goto, 0, pLevel->addrFirst); + sqlite3VdbeGoto(v, pLevel->addrFirst); } sqlite3VdbeJumpHere(v, addr); } + VdbeModuleComment((v, "End WHERE-loop%d: %s", i, + pWInfo->pTabList->a[pLevel->iFrom].pTab->zName)); } /* The "break" point is here, just past the end of the outer loop. ** Set it. */ sqlite3VdbeResolveLabel(v, pWInfo->iBreak); - /* Close all of the cursors that were opened by sqlite3WhereBegin. - */ - assert( pWInfo->nLevel==1 || pWInfo->nLevel==pTabList->nSrc ); + assert( pWInfo->nLevel<=pTabList->nSrc ); for(i=0, pLevel=pWInfo->a; i<pWInfo->nLevel; i++, pLevel++){ + int k, last; + VdbeOp *pOp; + Index *pIdx = 0; struct SrcList_item *pTabItem = &pTabList->a[pLevel->iFrom]; Table *pTab = pTabItem->pTab; assert( pTab!=0 ); + pLoop = pLevel->pWLoop; + + /* For a co-routine, change all OP_Column references to the table of + ** the co-routine into OP_Copy of result contained in a register. + ** OP_Rowid becomes OP_Null. 
+ */ + if( pTabItem->fg.viaCoroutine && !db->mallocFailed ){ + translateColumnToCopy(v, pLevel->addrBody, pLevel->iTabCur, + pTabItem->regResult, 0); + continue; + } + + /* Close all of the cursors that were opened by sqlite3WhereBegin. + ** Except, do not close cursors that will be reused by the OR optimization + ** (WHERE_OMIT_OPEN_CLOSE). And do not close the OP_OpenWrite cursors + ** created for the ONEPASS optimization. + */ if( (pTab->tabFlags & TF_Ephemeral)==0 && pTab->pSelect==0 - && (pWInfo->wctrlFlags & WHERE_OMIT_CLOSE)==0 + && (pWInfo->wctrlFlags & WHERE_OMIT_OPEN_CLOSE)==0 ){ - int ws = pLevel->plan.wsFlags; - if( !pWInfo->okOnePass && (ws & WHERE_IDX_ONLY)==0 ){ + int ws = pLoop->wsFlags; + if( pWInfo->eOnePass==ONEPASS_OFF && (ws & WHERE_IDX_ONLY)==0 ){ sqlite3VdbeAddOp1(v, OP_Close, pTabItem->iCursor); } - if( (ws & WHERE_INDEXED)!=0 && (ws & WHERE_TEMP_INDEX)==0 ){ + if( (ws & WHERE_INDEXED)!=0 + && (ws & (WHERE_IPK|WHERE_AUTO_INDEX))==0 + && pLevel->iIdxCur!=pWInfo->aiCurOnePass[1] + ){ sqlite3VdbeAddOp1(v, OP_Close, pLevel->iIdxCur); } } - /* If this scan uses an index, make code substitutions to read data - ** from the index in preference to the table. Sometimes, this means - ** the table need never be read from. This is a performance boost, - ** as the vdbe level waits until the table is read before actually - ** seeking the table cursor to the record corresponding to the current - ** position in the index. + /* If this scan uses an index, make VDBE code substitutions to read data + ** from the index instead of from the table where possible. In some cases + ** this optimization prevents the table from ever being read, which can + ** yield a significant performance boost. ** ** Calls to the code generator in between sqlite3WhereBegin and ** sqlite3WhereEnd will have created code that references the table ** directly. This loop scans all that code looking for opcodes ** that reference the table and converts them into opcodes that ** reference the index. 
*/ - if( (pLevel->plan.wsFlags & WHERE_INDEXED)!=0 && !db->mallocFailed){ - int k, j, last; - VdbeOp *pOp; - Index *pIdx = pLevel->plan.u.pIdx; - - assert( pIdx!=0 ); - pOp = sqlite3VdbeGetOp(v, pWInfo->iTop); + if( pLoop->wsFlags & (WHERE_INDEXED|WHERE_IDX_ONLY) ){ + pIdx = pLoop->u.btree.pIndex; + }else if( pLoop->wsFlags & WHERE_MULTI_OR ){ + pIdx = pLevel->u.pCovidx; + } + if( pIdx + && (pWInfo->eOnePass==ONEPASS_OFF || !HasRowid(pIdx->pTable)) + && !db->mallocFailed + ){ last = sqlite3VdbeCurrentAddr(v); - for(k=pWInfo->iTop; k<last; k++, pOp++){ + k = pLevel->addrBody; + pOp = sqlite3VdbeGetOp(v, k); + for(; k<last; k++, pOp++){ if( pOp->p1!=pLevel->iTabCur ) continue; if( pOp->opcode==OP_Column ){ - for(j=0; j<pIdx->nColumn; j++){ - if( pOp->p2==pIdx->aiColumn[j] ){ - pOp->p2 = j; - pOp->p1 = pLevel->iIdxCur; - break; - } - } - assert( (pLevel->plan.wsFlags & WHERE_IDX_ONLY)==0 - || j<pIdx->nColumn ); + int x = pOp->p2; + assert( pIdx->pTable==pTab ); + if( !HasRowid(pTab) ){ + Index *pPk = sqlite3PrimaryKeyIndex(pTab); + x = pPk->aiColumn[x]; + assert( x>=0 ); + } + x = sqlite3ColumnOfIndex(pIdx, x); + if( x>=0 ){ + pOp->p2 = x; + pOp->p1 = pLevel->iIdxCur; + } + assert( (pLoop->wsFlags & WHERE_IDX_ONLY)==0 || x>=0 ); }else if( pOp->opcode==OP_Rowid ){ pOp->p1 = pLevel->iIdxCur; pOp->opcode = OP_IdxRowid; } } @@ -92571,22 +127848,38 @@ return; } /************** End of where.c ***********************************************/ /************** Begin file parse.c *******************************************/ -/* Driver template for the LEMON parser generator. -** The author disclaims copyright to this source code. +/* +** 2000-05-29 ** -** This version of "lempar.c" is modified, slightly, for use by SQLite. -** The only modifications are the addition of a couple of NEVER() -** macros to disable tests that are needed in the case of a general -** LALR(1) grammar but which are always false in the -** specific grammar used by SQLite. +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** Driver template for the LEMON parser generator. +** +** The "lemon" program processes an LALR(1) input grammar file, then uses +** this template to construct a parser. The "lemon" program inserts text +** at each "%%" line. Also, any "P-a-r-s-e" identifer prefix (without the +** interstitial "-" characters) contained in this template is changed into +** the value of the %name directive from the grammar. Otherwise, the content +** of this template is copied straight through into the generate parser +** source file. +** +** The following is the concatenation of all %include directives from the +** input grammar file: */ -/* First off, code is included that follows the "include" declaration -** in the input grammar file. */ +/* #include <stdio.h> */ +/************ Begin %include sections from the grammar ************************/ +/* #include "sqliteInt.h" */ /* ** Disable all error recovery processing in the parser push-down ** automaton. */ @@ -92595,10 +127888,22 @@ /* ** Make yytestcase() the same as testcase() */ #define yytestcase(X) testcase(X) +/* +** Indicate that sqlite3ParserFree() will never be called with a null +** pointer. 
+*/ +#define YYPARSEFREENEVERNULL 1 + +/* +** Alternative datatype for the argument to the malloc() routine passed +** into sqlite3ParserAlloc(). The default is size_t. +*/ +#define YYMALLOCARGTYPE u64 + /* ** An instance of this structure holds information about the ** LIMIT clause of a SELECT statement. */ struct LimitVal { @@ -92610,11 +127915,11 @@ ** An instance of this structure is used to store the LIKE, ** GLOB, NOT LIKE, and NOT GLOB operators. */ struct LikeOp { Token eOperator; /* "like" or "glob" or "regexp" */ - int not; /* True if the NOT keyword is present */ + int bNot; /* True if the NOT keyword is present */ }; /* ** An instance of the following structure describes the event of a ** TRIGGER. "a" is the event type, one of TK_UPDATE, TK_INSERT, @@ -92629,10 +127934,41 @@ /* ** An instance of this structure holds the ATTACH key and the key type. */ struct AttachKey { int type; Token key; }; +/* +** Disable lookaside memory allocation for objects that might be +** shared across database connections. +*/ +static void disableLookaside(Parse *pParse){ + pParse->disableLookaside++; + pParse->db->lookaside.bDisable++; +} + + + /* + ** For a compound SELECT statement, make sure p->pPrior->pNext==p for + ** all elements in the list. And make sure list length does not exceed + ** SQLITE_LIMIT_COMPOUND_SELECT. + */ + static void parserDoubleLinkSelect(Parse *pParse, Select *p){ + if( p->pPrior ){ + Select *pNext = 0, *pLoop; + int mxSelect, cnt = 0; + for(pLoop=p; pLoop; pNext=pLoop, pLoop=pLoop->pPrior, cnt++){ + pLoop->pNext = pNext; + pLoop->selFlags |= SF_Compound; + } + if( (p->selFlags & SF_MultiValue)==0 && + (mxSelect = pParse->db->aLimit[SQLITE_LIMIT_COMPOUND_SELECT])>0 && + cnt>mxSelect + ){ + sqlite3ErrorMsg(pParse, "too many terms in compound SELECT"); + } + } + } /* This is a utility routine used to set the ExprSpan.zStart and ** ExprSpan.zEnd values of pOut so that the span covers the complete ** range of text beginning with pStart and going to the end of pEnd. */ @@ -92663,10 +127999,17 @@ ){ pOut->pExpr = sqlite3PExpr(pParse, op, pLeft->pExpr, pRight->pExpr, 0); pOut->zStart = pLeft->zStart; pOut->zEnd = pRight->zEnd; } + + /* If doNot is true, then add a TK_NOT Expr-node wrapper around the + ** outside of *ppExpr. + */ + static void exprNot(Parse *pParse, int doNot, Expr **ppExpr){ + if( doNot ) *ppExpr = sqlite3PExpr(pParse, TK_NOT, *ppExpr, 0, 0); + } /* Construct an expression node for a unary postfix operator */ static void spanUnaryPostfix( ExprSpan *pOut, /* Write the new expression node here */ @@ -92682,11 +128025,11 @@ /* A routine to convert a binary TK_IS or TK_ISNOT expression into a ** unary TK_ISNULL or TK_NOTNULL expression. */ static void binaryToUnaryIfNull(Parse *pParse, Expr *pY, Expr *pA, int op){ sqlite3 *db = pParse->db; - if( db->mallocFailed==0 && pY->op==TK_NULL ){ + if( pA && pY && pY->op==TK_NULL ){ pA->op = (u8)op; sqlite3ExprDelete(db, pA->pRight); pA->pRight = 0; } } @@ -92702,94 +128045,132 @@ ){ pOut->pExpr = sqlite3PExpr(pParse, op, pOperand->pExpr, 0, 0); pOut->zStart = pPreOp->z; pOut->zEnd = pOperand->zEnd; } -/* Next is all token values, in a form suitable for use by makeheaders. -** This section will be null unless lemon is run with the -m switch. -*/ -/* -** These constants (all generated automatically by the parser generator) -** specify the various kinds of tokens (terminals) that the parser -** understands. -** -** Each symbol here is a terminal symbol in the grammar. -*/ -/* Make sure the INTERFACE macro is defined. 
-*/ -#ifndef INTERFACE -# define INTERFACE 1 -#endif -/* The next thing included is series of defines which control + + /* Add a single new term to an ExprList that is used to store a + ** list of identifiers. Report an error if the ID list contains + ** a COLLATE clause or an ASC or DESC keyword, except ignore the + ** error while parsing a legacy schema. + */ + static ExprList *parserAddExprIdListTerm( + Parse *pParse, + ExprList *pPrior, + Token *pIdToken, + int hasCollate, + int sortOrder + ){ + ExprList *p = sqlite3ExprListAppend(pParse, pPrior, 0); + if( (hasCollate || sortOrder!=SQLITE_SO_UNDEFINED) + && pParse->db->init.busy==0 + ){ + sqlite3ErrorMsg(pParse, "syntax error after column name \"%.*s\"", + pIdToken->n, pIdToken->z); + } + sqlite3ExprListSetName(pParse, p, pIdToken, 1); + return p; + } +/**************** End of %include directives **********************************/ +/* These constants specify the various numeric values for terminal symbols +** in a format understandable to "makeheaders". This section is blank unless +** "lemon" is run with the "-m" command-line option. +***************** Begin makeheaders token definitions *************************/ +/**************** End makeheaders token definitions ***************************/ + +/* The next sections is a series of control #defines. ** various aspects of the generated parser. -** YYCODETYPE is the data type used for storing terminal -** and nonterminal numbers. "unsigned char" is -** used if there are fewer than 250 terminals -** and nonterminals. "int" is used otherwise. -** YYNOCODE is a number of type YYCODETYPE which corresponds -** to no legal terminal or nonterminal number. This -** number is used to fill in empty slots of the hash -** table. +** YYCODETYPE is the data type used to store the integer codes +** that represent terminal and non-terminal symbols. +** "unsigned char" is used if there are fewer than +** 256 symbols. Larger types otherwise. +** YYNOCODE is a number of type YYCODETYPE that is not used for +** any terminal or nonterminal symbol. ** YYFALLBACK If defined, this indicates that one or more tokens -** have fall-back values which should be used if the -** original value of the token will not parse. -** YYACTIONTYPE is the data type used for storing terminal -** and nonterminal numbers. "unsigned char" is -** used if there are fewer than 250 rules and -** states combined. "int" is used otherwise. -** sqlite3ParserTOKENTYPE is the data type used for minor tokens given -** directly to the parser from the tokenizer. -** YYMINORTYPE is the data type used for all minor tokens. +** (also known as: "terminal symbols") have fall-back +** values which should be used if the original symbol +** would not parse. This permits keywords to sometimes +** be used as identifiers, for example. +** YYACTIONTYPE is the data type used for "action codes" - numbers +** that indicate what to do in response to the next +** token. +** sqlite3ParserTOKENTYPE is the data type used for minor type for terminal +** symbols. Background: A "minor type" is a semantic +** value associated with a terminal or non-terminal +** symbols. For example, for an "ID" terminal symbol, +** the minor type might be the name of the identifier. +** Each non-terminal can have a different minor type. +** Terminal symbols all have the same minor type, though. +** This macros defines the minor type for terminal +** symbols. +** YYMINORTYPE is the data type used for all minor types. 
** This is typically a union of many types, one of ** which is sqlite3ParserTOKENTYPE. The entry in the union -** for base tokens is called "yy0". +** for terminal symbols is called "yy0". ** YYSTACKDEPTH is the maximum depth of the parser's stack. If ** zero the stack is dynamically sized using realloc() ** sqlite3ParserARG_SDECL A static variable declaration for the %extra_argument ** sqlite3ParserARG_PDECL A parameter declaration for the %extra_argument ** sqlite3ParserARG_STORE Code to store %extra_argument into yypParser ** sqlite3ParserARG_FETCH Code to extract %extra_argument from yypParser -** YYNSTATE the combined number of states. -** YYNRULE the number of rules in the grammar ** YYERRORSYMBOL is the code number of the error symbol. If not ** defined, then do no error processing. +** YYNSTATE the combined number of states. +** YYNRULE the number of rules in the grammar +** YY_MAX_SHIFT Maximum value for shift actions +** YY_MIN_SHIFTREDUCE Minimum value for shift-reduce actions +** YY_MAX_SHIFTREDUCE Maximum value for shift-reduce actions +** YY_MIN_REDUCE Maximum value for reduce actions +** YY_ERROR_ACTION The yy_action[] code for syntax error +** YY_ACCEPT_ACTION The yy_action[] code for accept +** YY_NO_ACTION The yy_action[] code for no-op */ +#ifndef INTERFACE +# define INTERFACE 1 +#endif +/************* Begin control #defines *****************************************/ #define YYCODETYPE unsigned char -#define YYNOCODE 254 +#define YYNOCODE 253 #define YYACTIONTYPE unsigned short int -#define YYWILDCARD 67 +#define YYWILDCARD 70 #define sqlite3ParserTOKENTYPE Token typedef union { int yyinit; sqlite3ParserTOKENTYPE yy0; - Select* yy3; - ExprList* yy14; - SrcList* yy65; - struct LikeOp yy96; - Expr* yy132; - u8 yy186; - int yy328; - ExprSpan yy346; - struct TrigEvent yy378; - IdList* yy408; - struct {int value; int mask;} yy429; - TriggerStep* yy473; - struct LimitVal yy476; + int yy4; + struct TrigEvent yy90; + ExprSpan yy118; + TriggerStep* yy203; + struct {int value; int mask;} yy215; + SrcList* yy259; + struct LimitVal yy292; + Expr* yy314; + ExprList* yy322; + struct LikeOp yy342; + IdList* yy384; + Select* yy387; + With* yy451; } YYMINORTYPE; #ifndef YYSTACKDEPTH #define YYSTACKDEPTH 100 #endif #define sqlite3ParserARG_SDECL Parse *pParse; #define sqlite3ParserARG_PDECL ,Parse *pParse #define sqlite3ParserARG_FETCH Parse *pParse = yypParser->pParse #define sqlite3ParserARG_STORE yypParser->pParse = pParse -#define YYNSTATE 631 -#define YYNRULE 330 #define YYFALLBACK 1 -#define YY_NO_ACTION (YYNSTATE+YYNRULE+2) -#define YY_ACCEPT_ACTION (YYNSTATE+YYNRULE+1) -#define YY_ERROR_ACTION (YYNSTATE+YYNRULE) +#define YYNSTATE 436 +#define YYNRULE 328 +#define YY_MAX_SHIFT 435 +#define YY_MIN_SHIFTREDUCE 649 +#define YY_MAX_SHIFTREDUCE 976 +#define YY_MIN_REDUCE 977 +#define YY_MAX_REDUCE 1304 +#define YY_ERROR_ACTION 1305 +#define YY_ACCEPT_ACTION 1306 +#define YY_NO_ACTION 1307 +/************* End control #defines *******************************************/ /* The yyzerominor constant is used to initialize instances of ** YYMINORTYPE objects to zero. */ static const YYMINORTYPE yyzerominor = { 0 }; @@ -92812,20 +128193,24 @@ ** action integer. ** ** Suppose the action integer is N. Then the action is determined as ** follows ** -** 0 <= N < YYNSTATE Shift N. That is, push the lookahead +** 0 <= N <= YY_MAX_SHIFT Shift N. That is, push the lookahead ** token onto the stack and goto state N. ** -** YYNSTATE <= N < YYNSTATE+YYNRULE Reduce by rule N-YYNSTATE. 
+** N between YY_MIN_SHIFTREDUCE Shift to an arbitrary state then +** and YY_MAX_SHIFTREDUCE reduce by rule N-YY_MIN_SHIFTREDUCE. ** -** N == YYNSTATE+YYNRULE A syntax error has occurred. +** N between YY_MIN_REDUCE Reduce by rule N-YY_MIN_REDUCE +** and YY_MAX_REDUCE + +** N == YY_ERROR_ACTION A syntax error has occurred. ** -** N == YYNSTATE+YYNRULE+1 The parser accepts its input. +** N == YY_ACCEPT_ACTION The parser accepts its input. ** -** N == YYNSTATE+YYNRULE+2 No such action. Denotes unused +** N == YY_NO_ACTION No such action. Denotes unused ** slots in the yy_action[] table. ** ** The action table is constructed as a single large table named yy_action[]. ** Given state S and lookahead X, the action is computed as ** @@ -92850,558 +128235,541 @@ ** yy_shift_ofst[] For each state, the offset into yy_action for ** shifting terminals. ** yy_reduce_ofst[] For each state, the offset into yy_action for ** shifting non-terminals after a reduce. ** yy_default[] Default action for each state. -*/ -#define YY_ACTTAB_COUNT (1550) +** +*********** Begin parsing tables **********************************************/ +#define YY_ACTTAB_COUNT (1501) static const YYACTIONTYPE yy_action[] = { - /* 0 */ 313, 49, 556, 46, 147, 172, 628, 598, 55, 55, - /* 10 */ 55, 55, 302, 53, 53, 53, 53, 52, 52, 51, - /* 20 */ 51, 51, 50, 238, 603, 66, 624, 623, 604, 598, - /* 30 */ 591, 585, 48, 53, 53, 53, 53, 52, 52, 51, - /* 40 */ 51, 51, 50, 238, 51, 51, 51, 50, 238, 56, - /* 50 */ 57, 47, 583, 582, 584, 584, 54, 54, 55, 55, - /* 60 */ 55, 55, 609, 53, 53, 53, 53, 52, 52, 51, - /* 70 */ 51, 51, 50, 238, 313, 598, 672, 330, 411, 217, - /* 80 */ 32, 53, 53, 53, 53, 52, 52, 51, 51, 51, - /* 90 */ 50, 238, 330, 414, 621, 620, 166, 598, 673, 382, - /* 100 */ 379, 378, 602, 73, 591, 585, 307, 424, 166, 58, - /* 110 */ 377, 382, 379, 378, 516, 515, 624, 623, 254, 200, - /* 120 */ 199, 198, 377, 56, 57, 47, 583, 582, 584, 584, - /* 130 */ 54, 54, 55, 55, 55, 55, 581, 53, 53, 53, - /* 140 */ 53, 52, 52, 51, 51, 51, 50, 238, 313, 270, - /* 150 */ 226, 422, 283, 133, 177, 139, 284, 385, 279, 384, - /* 160 */ 169, 197, 251, 282, 253, 226, 411, 275, 440, 167, - /* 170 */ 139, 284, 385, 279, 384, 169, 571, 236, 591, 585, - /* 180 */ 240, 414, 275, 622, 621, 620, 674, 437, 441, 442, - /* 190 */ 602, 88, 352, 266, 439, 268, 438, 56, 57, 47, - /* 200 */ 583, 582, 584, 584, 54, 54, 55, 55, 55, 55, - /* 210 */ 465, 53, 53, 53, 53, 52, 52, 51, 51, 51, - /* 220 */ 50, 238, 313, 471, 52, 52, 51, 51, 51, 50, - /* 230 */ 238, 234, 166, 491, 567, 382, 379, 378, 1, 440, - /* 240 */ 252, 176, 624, 623, 608, 67, 377, 513, 622, 443, - /* 250 */ 237, 577, 591, 585, 622, 172, 466, 598, 554, 441, - /* 260 */ 340, 409, 526, 580, 580, 349, 596, 553, 194, 482, - /* 270 */ 175, 56, 57, 47, 583, 582, 584, 584, 54, 54, - /* 280 */ 55, 55, 55, 55, 562, 53, 53, 53, 53, 52, - /* 290 */ 52, 51, 51, 51, 50, 238, 313, 594, 594, 594, - /* 300 */ 561, 578, 469, 65, 259, 351, 258, 411, 624, 623, - /* 310 */ 621, 620, 332, 576, 575, 240, 560, 568, 520, 411, - /* 320 */ 341, 237, 414, 624, 623, 598, 591, 585, 542, 519, - /* 330 */ 171, 602, 95, 68, 414, 624, 623, 624, 623, 38, - /* 340 */ 877, 506, 507, 602, 88, 56, 57, 47, 583, 582, - /* 350 */ 584, 584, 54, 54, 55, 55, 55, 55, 532, 53, - /* 360 */ 53, 53, 53, 52, 52, 51, 51, 51, 50, 238, - /* 370 */ 313, 411, 579, 398, 531, 237, 621, 620, 388, 625, - /* 380 */ 500, 206, 167, 396, 233, 312, 414, 387, 569, 492, - /* 390 */ 216, 621, 620, 566, 622, 602, 74, 533, 210, 491, - /* 400 */ 591, 585, 548, 621, 620, 
621, 620, 300, 598, 466, - /* 410 */ 481, 67, 603, 35, 622, 601, 604, 547, 6, 56, - /* 420 */ 57, 47, 583, 582, 584, 584, 54, 54, 55, 55, - /* 430 */ 55, 55, 601, 53, 53, 53, 53, 52, 52, 51, - /* 440 */ 51, 51, 50, 238, 313, 411, 184, 409, 528, 580, - /* 450 */ 580, 551, 962, 186, 419, 2, 353, 259, 351, 258, - /* 460 */ 414, 409, 411, 580, 580, 44, 411, 544, 240, 602, - /* 470 */ 94, 190, 7, 62, 591, 585, 598, 414, 350, 607, - /* 480 */ 493, 414, 409, 317, 580, 580, 602, 95, 496, 565, - /* 490 */ 602, 80, 203, 56, 57, 47, 583, 582, 584, 584, - /* 500 */ 54, 54, 55, 55, 55, 55, 535, 53, 53, 53, - /* 510 */ 53, 52, 52, 51, 51, 51, 50, 238, 313, 202, - /* 520 */ 564, 293, 511, 49, 562, 46, 147, 411, 394, 183, - /* 530 */ 563, 549, 505, 549, 174, 409, 322, 580, 580, 39, - /* 540 */ 561, 37, 414, 624, 623, 192, 473, 383, 591, 585, - /* 550 */ 474, 602, 80, 601, 504, 544, 560, 364, 402, 210, - /* 560 */ 421, 952, 361, 952, 365, 201, 144, 56, 57, 47, - /* 570 */ 583, 582, 584, 584, 54, 54, 55, 55, 55, 55, - /* 580 */ 559, 53, 53, 53, 53, 52, 52, 51, 51, 51, - /* 590 */ 50, 238, 313, 601, 232, 264, 272, 321, 374, 484, - /* 600 */ 510, 146, 342, 146, 328, 425, 485, 407, 576, 575, - /* 610 */ 622, 621, 620, 49, 168, 46, 147, 353, 546, 491, - /* 620 */ 204, 240, 591, 585, 421, 951, 549, 951, 549, 168, - /* 630 */ 429, 67, 390, 343, 622, 434, 307, 423, 338, 360, - /* 640 */ 391, 56, 57, 47, 583, 582, 584, 584, 54, 54, - /* 650 */ 55, 55, 55, 55, 601, 53, 53, 53, 53, 52, - /* 660 */ 52, 51, 51, 51, 50, 238, 313, 34, 318, 425, - /* 670 */ 237, 21, 359, 273, 411, 167, 411, 276, 411, 540, - /* 680 */ 411, 422, 13, 318, 619, 618, 617, 622, 275, 414, - /* 690 */ 336, 414, 622, 414, 622, 414, 591, 585, 602, 69, - /* 700 */ 602, 97, 602, 100, 602, 98, 631, 629, 334, 475, - /* 710 */ 475, 367, 319, 148, 327, 56, 57, 47, 583, 582, - /* 720 */ 584, 584, 54, 54, 55, 55, 55, 55, 411, 53, - /* 730 */ 53, 53, 53, 52, 52, 51, 51, 51, 50, 238, - /* 740 */ 313, 411, 331, 414, 411, 49, 276, 46, 147, 569, - /* 750 */ 406, 216, 602, 106, 573, 573, 414, 354, 524, 414, - /* 760 */ 411, 622, 411, 224, 4, 602, 104, 605, 602, 108, - /* 770 */ 591, 585, 622, 20, 375, 414, 167, 414, 215, 144, - /* 780 */ 470, 239, 167, 225, 602, 109, 602, 134, 18, 56, - /* 790 */ 57, 47, 583, 582, 584, 584, 54, 54, 55, 55, - /* 800 */ 55, 55, 411, 53, 53, 53, 53, 52, 52, 51, - /* 810 */ 51, 51, 50, 238, 313, 411, 276, 414, 12, 459, - /* 820 */ 276, 171, 411, 16, 223, 189, 602, 135, 354, 170, - /* 830 */ 414, 622, 630, 2, 411, 622, 540, 414, 143, 602, - /* 840 */ 61, 359, 132, 622, 591, 585, 602, 105, 458, 414, - /* 850 */ 23, 622, 446, 326, 23, 538, 622, 325, 602, 103, - /* 860 */ 427, 530, 309, 56, 57, 47, 583, 582, 584, 584, - /* 870 */ 54, 54, 55, 55, 55, 55, 411, 53, 53, 53, - /* 880 */ 53, 52, 52, 51, 51, 51, 50, 238, 313, 411, - /* 890 */ 264, 414, 411, 276, 359, 219, 157, 214, 357, 366, - /* 900 */ 602, 96, 522, 521, 414, 622, 358, 414, 622, 622, - /* 910 */ 411, 613, 612, 602, 102, 142, 602, 77, 591, 585, - /* 920 */ 529, 540, 231, 426, 308, 414, 622, 622, 468, 521, - /* 930 */ 324, 601, 257, 263, 602, 99, 622, 56, 45, 47, - /* 940 */ 583, 582, 584, 584, 54, 54, 55, 55, 55, 55, - /* 950 */ 411, 53, 53, 53, 53, 52, 52, 51, 51, 51, - /* 960 */ 50, 238, 313, 264, 264, 414, 411, 213, 209, 544, - /* 970 */ 544, 207, 611, 28, 602, 138, 50, 238, 622, 622, - /* 980 */ 381, 414, 503, 140, 323, 222, 274, 622, 590, 589, - /* 990 */ 602, 137, 591, 585, 629, 334, 606, 30, 622, 571, - /* 1000 */ 236, 601, 601, 130, 496, 601, 453, 451, 288, 286, - /* 
1010 */ 587, 586, 57, 47, 583, 582, 584, 584, 54, 54, - /* 1020 */ 55, 55, 55, 55, 411, 53, 53, 53, 53, 52, - /* 1030 */ 52, 51, 51, 51, 50, 238, 313, 588, 411, 414, - /* 1040 */ 411, 264, 410, 129, 595, 400, 27, 376, 602, 136, - /* 1050 */ 128, 165, 479, 414, 282, 414, 622, 622, 411, 622, - /* 1060 */ 622, 411, 602, 76, 602, 93, 591, 585, 188, 372, - /* 1070 */ 368, 125, 476, 414, 261, 160, 414, 171, 124, 472, - /* 1080 */ 123, 15, 602, 92, 450, 602, 75, 47, 583, 582, - /* 1090 */ 584, 584, 54, 54, 55, 55, 55, 55, 464, 53, - /* 1100 */ 53, 53, 53, 52, 52, 51, 51, 51, 50, 238, - /* 1110 */ 43, 405, 264, 3, 558, 264, 545, 415, 623, 159, - /* 1120 */ 541, 158, 539, 278, 25, 461, 121, 622, 408, 622, - /* 1130 */ 622, 622, 24, 43, 405, 622, 3, 622, 622, 120, - /* 1140 */ 415, 623, 11, 456, 411, 156, 452, 403, 509, 277, - /* 1150 */ 118, 408, 489, 113, 205, 449, 271, 567, 221, 414, - /* 1160 */ 269, 267, 155, 622, 622, 111, 411, 622, 602, 95, - /* 1170 */ 403, 622, 411, 110, 10, 622, 622, 40, 41, 534, - /* 1180 */ 567, 414, 64, 264, 42, 413, 412, 414, 601, 596, - /* 1190 */ 602, 91, 445, 436, 150, 435, 602, 90, 622, 265, - /* 1200 */ 40, 41, 337, 242, 411, 191, 333, 42, 413, 412, - /* 1210 */ 398, 420, 596, 316, 622, 399, 260, 107, 230, 414, - /* 1220 */ 594, 594, 594, 593, 592, 14, 220, 411, 602, 101, - /* 1230 */ 240, 622, 43, 405, 362, 3, 149, 315, 626, 415, - /* 1240 */ 623, 127, 414, 594, 594, 594, 593, 592, 14, 622, - /* 1250 */ 408, 602, 89, 411, 181, 33, 405, 463, 3, 411, - /* 1260 */ 264, 462, 415, 623, 616, 615, 614, 355, 414, 403, - /* 1270 */ 417, 416, 622, 408, 414, 622, 622, 602, 87, 567, - /* 1280 */ 418, 627, 622, 602, 86, 8, 241, 180, 126, 255, - /* 1290 */ 600, 178, 403, 240, 208, 455, 395, 294, 444, 40, - /* 1300 */ 41, 297, 567, 248, 622, 296, 42, 413, 412, 247, - /* 1310 */ 622, 596, 244, 622, 30, 60, 31, 243, 430, 624, - /* 1320 */ 623, 292, 40, 41, 622, 295, 145, 622, 601, 42, - /* 1330 */ 413, 412, 622, 622, 596, 393, 622, 397, 599, 59, - /* 1340 */ 235, 622, 594, 594, 594, 593, 592, 14, 218, 291, - /* 1350 */ 622, 36, 344, 305, 304, 303, 179, 301, 411, 567, - /* 1360 */ 454, 557, 173, 185, 622, 594, 594, 594, 593, 592, - /* 1370 */ 14, 411, 29, 414, 151, 289, 246, 523, 411, 196, - /* 1380 */ 195, 335, 602, 85, 411, 245, 414, 526, 392, 543, - /* 1390 */ 411, 596, 287, 414, 285, 602, 72, 537, 153, 414, - /* 1400 */ 466, 411, 602, 71, 154, 414, 411, 152, 602, 84, - /* 1410 */ 386, 536, 329, 411, 602, 83, 414, 518, 280, 411, - /* 1420 */ 513, 414, 594, 594, 594, 602, 82, 517, 414, 311, - /* 1430 */ 602, 81, 411, 514, 414, 512, 131, 602, 70, 229, - /* 1440 */ 228, 227, 494, 602, 17, 411, 488, 414, 259, 346, - /* 1450 */ 249, 389, 487, 486, 314, 164, 602, 79, 310, 240, - /* 1460 */ 414, 373, 480, 163, 262, 371, 414, 162, 369, 602, - /* 1470 */ 78, 212, 478, 26, 477, 602, 9, 161, 467, 363, - /* 1480 */ 141, 122, 339, 187, 119, 457, 348, 347, 117, 116, - /* 1490 */ 115, 112, 114, 448, 182, 22, 320, 433, 432, 431, - /* 1500 */ 19, 428, 610, 597, 574, 193, 572, 63, 298, 404, - /* 1510 */ 555, 552, 290, 281, 510, 460, 498, 499, 495, 447, - /* 1520 */ 356, 497, 256, 380, 306, 570, 5, 250, 345, 238, - /* 1530 */ 299, 550, 527, 490, 508, 525, 502, 401, 501, 963, - /* 1540 */ 211, 963, 483, 963, 963, 963, 963, 963, 963, 370, + /* 0 */ 311, 1306, 145, 651, 2, 192, 652, 338, 780, 92, + /* 10 */ 92, 92, 92, 85, 90, 90, 90, 90, 89, 89, + /* 20 */ 88, 88, 88, 87, 335, 88, 88, 88, 87, 335, + /* 30 */ 327, 856, 856, 92, 92, 92, 92, 697, 90, 90, + /* 40 */ 90, 90, 89, 89, 88, 88, 88, 87, 335, 
76, + /* 50 */ 807, 74, 93, 94, 84, 868, 871, 860, 860, 91, + /* 60 */ 91, 92, 92, 92, 92, 335, 90, 90, 90, 90, + /* 70 */ 89, 89, 88, 88, 88, 87, 335, 311, 780, 90, + /* 80 */ 90, 90, 90, 89, 89, 88, 88, 88, 87, 335, + /* 90 */ 356, 808, 776, 701, 689, 689, 86, 83, 166, 257, + /* 100 */ 809, 715, 430, 86, 83, 166, 324, 697, 856, 856, + /* 110 */ 201, 158, 276, 387, 271, 386, 188, 689, 689, 828, + /* 120 */ 86, 83, 166, 269, 833, 49, 123, 87, 335, 93, + /* 130 */ 94, 84, 868, 871, 860, 860, 91, 91, 92, 92, + /* 140 */ 92, 92, 239, 90, 90, 90, 90, 89, 89, 88, + /* 150 */ 88, 88, 87, 335, 311, 763, 333, 332, 216, 408, + /* 160 */ 394, 69, 231, 393, 690, 691, 396, 910, 251, 354, + /* 170 */ 250, 288, 315, 430, 908, 430, 909, 89, 89, 88, + /* 180 */ 88, 88, 87, 335, 391, 856, 856, 690, 691, 183, + /* 190 */ 95, 123, 384, 381, 380, 833, 31, 833, 49, 912, + /* 200 */ 912, 751, 752, 379, 123, 311, 93, 94, 84, 868, + /* 210 */ 871, 860, 860, 91, 91, 92, 92, 92, 92, 114, + /* 220 */ 90, 90, 90, 90, 89, 89, 88, 88, 88, 87, + /* 230 */ 335, 430, 408, 399, 435, 657, 856, 856, 346, 57, + /* 240 */ 232, 828, 109, 704, 366, 689, 689, 363, 825, 760, + /* 250 */ 97, 749, 752, 833, 49, 708, 708, 93, 94, 84, + /* 260 */ 868, 871, 860, 860, 91, 91, 92, 92, 92, 92, + /* 270 */ 423, 90, 90, 90, 90, 89, 89, 88, 88, 88, + /* 280 */ 87, 335, 311, 114, 22, 361, 688, 58, 408, 390, + /* 290 */ 251, 349, 240, 213, 762, 689, 689, 847, 685, 115, + /* 300 */ 361, 231, 393, 689, 689, 396, 183, 689, 689, 384, + /* 310 */ 381, 380, 361, 856, 856, 690, 691, 160, 159, 223, + /* 320 */ 379, 738, 25, 806, 707, 841, 143, 689, 689, 835, + /* 330 */ 392, 339, 766, 766, 93, 94, 84, 868, 871, 860, + /* 340 */ 860, 91, 91, 92, 92, 92, 92, 914, 90, 90, + /* 350 */ 90, 90, 89, 89, 88, 88, 88, 87, 335, 311, + /* 360 */ 840, 840, 840, 266, 257, 690, 691, 778, 706, 86, + /* 370 */ 83, 166, 219, 690, 691, 737, 1, 690, 691, 689, + /* 380 */ 689, 689, 689, 430, 86, 83, 166, 249, 688, 937, + /* 390 */ 856, 856, 427, 699, 700, 828, 298, 690, 691, 221, + /* 400 */ 686, 115, 123, 944, 795, 833, 48, 342, 305, 970, + /* 410 */ 847, 93, 94, 84, 868, 871, 860, 860, 91, 91, + /* 420 */ 92, 92, 92, 92, 114, 90, 90, 90, 90, 89, + /* 430 */ 89, 88, 88, 88, 87, 335, 311, 940, 841, 679, + /* 440 */ 713, 429, 835, 430, 251, 354, 250, 355, 288, 690, + /* 450 */ 691, 690, 691, 285, 941, 340, 971, 287, 210, 23, + /* 460 */ 174, 793, 832, 430, 353, 833, 10, 856, 856, 24, + /* 470 */ 942, 151, 753, 840, 840, 840, 794, 968, 1290, 321, + /* 480 */ 398, 1290, 356, 352, 754, 833, 49, 935, 93, 94, + /* 490 */ 84, 868, 871, 860, 860, 91, 91, 92, 92, 92, + /* 500 */ 92, 430, 90, 90, 90, 90, 89, 89, 88, 88, + /* 510 */ 88, 87, 335, 311, 376, 114, 907, 705, 430, 907, + /* 520 */ 328, 890, 114, 833, 10, 966, 430, 857, 857, 320, + /* 530 */ 189, 163, 832, 165, 430, 906, 344, 323, 906, 904, + /* 540 */ 833, 10, 965, 306, 856, 856, 187, 419, 833, 10, + /* 550 */ 220, 869, 872, 832, 222, 403, 833, 49, 1219, 793, + /* 560 */ 68, 937, 406, 245, 66, 93, 94, 84, 868, 871, + /* 570 */ 860, 860, 91, 91, 92, 92, 92, 92, 861, 90, + /* 580 */ 90, 90, 90, 89, 89, 88, 88, 88, 87, 335, + /* 590 */ 311, 404, 213, 762, 834, 345, 114, 940, 902, 368, + /* 600 */ 727, 5, 316, 192, 396, 772, 780, 269, 230, 242, + /* 610 */ 771, 244, 397, 164, 941, 385, 123, 347, 55, 355, + /* 620 */ 329, 856, 856, 728, 333, 332, 688, 968, 1291, 724, + /* 630 */ 942, 1291, 413, 214, 833, 9, 362, 286, 955, 115, + /* 640 */ 718, 311, 93, 94, 84, 868, 871, 860, 860, 91, + /* 650 */ 91, 92, 92, 92, 92, 430, 90, 
90, 90, 90, + /* 660 */ 89, 89, 88, 88, 88, 87, 335, 912, 912, 1300, + /* 670 */ 1300, 758, 856, 856, 325, 966, 780, 833, 35, 747, + /* 680 */ 720, 334, 699, 700, 977, 652, 338, 243, 745, 920, + /* 690 */ 920, 369, 187, 93, 94, 84, 868, 871, 860, 860, + /* 700 */ 91, 91, 92, 92, 92, 92, 114, 90, 90, 90, + /* 710 */ 90, 89, 89, 88, 88, 88, 87, 335, 311, 430, + /* 720 */ 954, 430, 112, 310, 430, 693, 317, 698, 400, 430, + /* 730 */ 793, 359, 430, 1017, 430, 192, 430, 401, 780, 430, + /* 740 */ 360, 833, 36, 833, 12, 430, 833, 27, 316, 856, + /* 750 */ 856, 833, 37, 20, 833, 38, 833, 39, 833, 28, + /* 760 */ 72, 833, 29, 663, 664, 665, 264, 833, 40, 234, + /* 770 */ 93, 94, 84, 868, 871, 860, 860, 91, 91, 92, + /* 780 */ 92, 92, 92, 430, 90, 90, 90, 90, 89, 89, + /* 790 */ 88, 88, 88, 87, 335, 311, 430, 698, 430, 917, + /* 800 */ 147, 430, 165, 916, 275, 833, 41, 430, 780, 430, + /* 810 */ 21, 430, 259, 430, 262, 274, 430, 367, 833, 42, + /* 820 */ 833, 11, 430, 833, 43, 235, 856, 856, 793, 833, + /* 830 */ 99, 833, 44, 833, 45, 833, 32, 75, 833, 46, + /* 840 */ 305, 967, 257, 257, 833, 47, 311, 93, 94, 84, + /* 850 */ 868, 871, 860, 860, 91, 91, 92, 92, 92, 92, + /* 860 */ 430, 90, 90, 90, 90, 89, 89, 88, 88, 88, + /* 870 */ 87, 335, 430, 186, 185, 184, 238, 856, 856, 650, + /* 880 */ 2, 1064, 833, 33, 739, 217, 218, 257, 971, 257, + /* 890 */ 426, 317, 257, 774, 833, 117, 257, 311, 93, 94, + /* 900 */ 84, 868, 871, 860, 860, 91, 91, 92, 92, 92, + /* 910 */ 92, 430, 90, 90, 90, 90, 89, 89, 88, 88, + /* 920 */ 88, 87, 335, 430, 318, 124, 212, 163, 856, 856, + /* 930 */ 943, 900, 898, 833, 118, 759, 726, 725, 257, 755, + /* 940 */ 289, 289, 733, 734, 961, 833, 119, 682, 311, 93, + /* 950 */ 82, 84, 868, 871, 860, 860, 91, 91, 92, 92, + /* 960 */ 92, 92, 430, 90, 90, 90, 90, 89, 89, 88, + /* 970 */ 88, 88, 87, 335, 430, 716, 246, 322, 331, 856, + /* 980 */ 856, 256, 114, 357, 833, 53, 808, 913, 913, 932, + /* 990 */ 156, 416, 420, 424, 930, 809, 833, 34, 364, 311, + /* 1000 */ 253, 94, 84, 868, 871, 860, 860, 91, 91, 92, + /* 1010 */ 92, 92, 92, 430, 90, 90, 90, 90, 89, 89, + /* 1020 */ 88, 88, 88, 87, 335, 430, 114, 114, 114, 960, + /* 1030 */ 856, 856, 307, 258, 830, 833, 100, 191, 252, 377, + /* 1040 */ 267, 68, 197, 68, 261, 716, 769, 833, 50, 71, + /* 1050 */ 911, 911, 263, 84, 868, 871, 860, 860, 91, 91, + /* 1060 */ 92, 92, 92, 92, 430, 90, 90, 90, 90, 89, + /* 1070 */ 89, 88, 88, 88, 87, 335, 80, 425, 802, 3, + /* 1080 */ 1214, 191, 430, 265, 336, 336, 833, 101, 741, 80, + /* 1090 */ 425, 897, 3, 723, 722, 428, 721, 336, 336, 430, + /* 1100 */ 893, 270, 430, 197, 833, 102, 430, 800, 428, 430, + /* 1110 */ 695, 430, 843, 111, 414, 430, 784, 409, 430, 831, + /* 1120 */ 430, 833, 98, 123, 833, 116, 847, 414, 833, 49, + /* 1130 */ 779, 833, 113, 833, 106, 226, 123, 833, 105, 847, + /* 1140 */ 833, 103, 833, 104, 791, 411, 77, 78, 290, 412, + /* 1150 */ 430, 291, 114, 79, 432, 431, 389, 430, 835, 77, + /* 1160 */ 78, 897, 839, 408, 410, 430, 79, 432, 431, 372, + /* 1170 */ 703, 835, 833, 52, 430, 80, 425, 430, 3, 833, + /* 1180 */ 54, 772, 843, 336, 336, 684, 771, 833, 51, 840, + /* 1190 */ 840, 840, 842, 19, 428, 672, 833, 26, 671, 833, + /* 1200 */ 30, 673, 840, 840, 840, 842, 19, 207, 661, 278, + /* 1210 */ 304, 148, 280, 414, 282, 248, 358, 822, 382, 6, + /* 1220 */ 348, 161, 273, 80, 425, 847, 3, 934, 895, 720, + /* 1230 */ 894, 336, 336, 296, 157, 415, 241, 284, 674, 958, + /* 1240 */ 194, 953, 428, 951, 948, 77, 78, 777, 319, 56, + /* 1250 */ 59, 135, 79, 432, 431, 121, 66, 835, 146, 
128, + /* 1260 */ 350, 414, 819, 130, 351, 131, 132, 133, 375, 173, + /* 1270 */ 107, 138, 149, 847, 365, 178, 62, 70, 425, 936, + /* 1280 */ 3, 827, 889, 371, 255, 336, 336, 792, 840, 840, + /* 1290 */ 840, 842, 19, 77, 78, 915, 428, 208, 179, 144, + /* 1300 */ 79, 432, 431, 373, 260, 835, 180, 326, 675, 181, + /* 1310 */ 308, 744, 388, 743, 731, 414, 718, 742, 730, 712, + /* 1320 */ 402, 309, 711, 272, 788, 65, 710, 847, 709, 277, + /* 1330 */ 193, 789, 787, 279, 876, 73, 840, 840, 840, 842, + /* 1340 */ 19, 786, 281, 418, 283, 422, 227, 77, 78, 330, + /* 1350 */ 228, 229, 96, 767, 79, 432, 431, 407, 67, 835, + /* 1360 */ 215, 292, 293, 405, 294, 303, 302, 301, 204, 299, + /* 1370 */ 295, 202, 676, 681, 7, 433, 669, 203, 205, 206, + /* 1380 */ 125, 110, 313, 434, 667, 666, 658, 168, 224, 237, + /* 1390 */ 840, 840, 840, 842, 19, 120, 656, 337, 236, 155, + /* 1400 */ 167, 341, 233, 314, 108, 905, 903, 826, 127, 126, + /* 1410 */ 756, 170, 129, 172, 247, 928, 134, 136, 171, 60, + /* 1420 */ 61, 123, 169, 137, 933, 175, 176, 927, 8, 13, + /* 1430 */ 177, 254, 918, 139, 191, 924, 140, 370, 678, 150, + /* 1440 */ 374, 182, 274, 268, 141, 122, 63, 14, 378, 15, + /* 1450 */ 383, 64, 225, 846, 845, 874, 16, 4, 729, 765, + /* 1460 */ 770, 162, 395, 209, 211, 142, 801, 878, 796, 312, + /* 1470 */ 71, 68, 875, 873, 939, 190, 417, 938, 17, 195, + /* 1480 */ 196, 152, 18, 975, 199, 976, 153, 198, 154, 421, + /* 1490 */ 877, 844, 696, 81, 200, 297, 343, 1019, 1018, 300, + /* 1500 */ 653, }; static const YYCODETYPE yy_lookahead[] = { - /* 0 */ 19, 222, 223, 224, 225, 24, 1, 26, 77, 78, - /* 10 */ 79, 80, 15, 82, 83, 84, 85, 86, 87, 88, - /* 20 */ 89, 90, 91, 92, 113, 22, 26, 27, 117, 26, - /* 30 */ 49, 50, 81, 82, 83, 84, 85, 86, 87, 88, - /* 40 */ 89, 90, 91, 92, 88, 89, 90, 91, 92, 68, - /* 50 */ 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, - /* 60 */ 79, 80, 23, 82, 83, 84, 85, 86, 87, 88, - /* 70 */ 89, 90, 91, 92, 19, 94, 118, 19, 150, 22, - /* 80 */ 25, 82, 83, 84, 85, 86, 87, 88, 89, 90, - /* 90 */ 91, 92, 19, 165, 94, 95, 96, 94, 118, 99, - /* 100 */ 100, 101, 174, 175, 49, 50, 22, 23, 96, 54, - /* 110 */ 110, 99, 100, 101, 7, 8, 26, 27, 16, 105, - /* 120 */ 106, 107, 110, 68, 69, 70, 71, 72, 73, 74, - /* 130 */ 75, 76, 77, 78, 79, 80, 113, 82, 83, 84, - /* 140 */ 85, 86, 87, 88, 89, 90, 91, 92, 19, 16, - /* 150 */ 92, 67, 98, 24, 96, 97, 98, 99, 100, 101, - /* 160 */ 102, 25, 60, 109, 62, 92, 150, 109, 150, 25, - /* 170 */ 97, 98, 99, 100, 101, 102, 86, 87, 49, 50, - /* 180 */ 116, 165, 109, 165, 94, 95, 118, 97, 170, 171, - /* 190 */ 174, 175, 128, 60, 104, 62, 106, 68, 69, 70, - /* 200 */ 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, - /* 210 */ 11, 82, 83, 84, 85, 86, 87, 88, 89, 90, - /* 220 */ 91, 92, 19, 21, 86, 87, 88, 89, 90, 91, - /* 230 */ 92, 215, 96, 150, 66, 99, 100, 101, 22, 150, - /* 240 */ 138, 118, 26, 27, 161, 162, 110, 103, 165, 231, - /* 250 */ 232, 23, 49, 50, 165, 24, 57, 26, 32, 170, - /* 260 */ 171, 112, 94, 114, 115, 63, 98, 41, 185, 186, - /* 270 */ 118, 68, 69, 70, 71, 72, 73, 74, 75, 76, - /* 280 */ 77, 78, 79, 80, 12, 82, 83, 84, 85, 86, - /* 290 */ 87, 88, 89, 90, 91, 92, 19, 129, 130, 131, - /* 300 */ 28, 23, 100, 25, 105, 106, 107, 150, 26, 27, - /* 310 */ 94, 95, 169, 170, 171, 116, 44, 23, 46, 150, - /* 320 */ 231, 232, 165, 26, 27, 94, 49, 50, 23, 57, - /* 330 */ 25, 174, 175, 22, 165, 26, 27, 26, 27, 136, - /* 340 */ 138, 97, 98, 174, 175, 68, 69, 70, 71, 72, - /* 350 */ 73, 74, 75, 76, 77, 78, 79, 80, 23, 82, - /* 360 */ 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, - /* 
370 */ 19, 150, 23, 216, 23, 232, 94, 95, 221, 150, - /* 380 */ 23, 160, 25, 214, 215, 163, 165, 88, 166, 167, - /* 390 */ 168, 94, 95, 23, 165, 174, 175, 88, 160, 150, - /* 400 */ 49, 50, 120, 94, 95, 94, 95, 158, 26, 57, - /* 410 */ 161, 162, 113, 136, 165, 194, 117, 120, 22, 68, - /* 420 */ 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, - /* 430 */ 79, 80, 194, 82, 83, 84, 85, 86, 87, 88, - /* 440 */ 89, 90, 91, 92, 19, 150, 23, 112, 23, 114, - /* 450 */ 115, 25, 142, 143, 144, 145, 218, 105, 106, 107, - /* 460 */ 165, 112, 150, 114, 115, 22, 150, 166, 116, 174, - /* 470 */ 175, 22, 76, 235, 49, 50, 94, 165, 240, 172, - /* 480 */ 173, 165, 112, 155, 114, 115, 174, 175, 181, 11, - /* 490 */ 174, 175, 22, 68, 69, 70, 71, 72, 73, 74, - /* 500 */ 75, 76, 77, 78, 79, 80, 205, 82, 83, 84, - /* 510 */ 85, 86, 87, 88, 89, 90, 91, 92, 19, 160, - /* 520 */ 23, 226, 23, 222, 12, 224, 225, 150, 216, 23, - /* 530 */ 23, 25, 36, 25, 25, 112, 220, 114, 115, 135, - /* 540 */ 28, 137, 165, 26, 27, 119, 30, 51, 49, 50, - /* 550 */ 34, 174, 175, 194, 58, 166, 44, 229, 46, 160, - /* 560 */ 22, 23, 234, 25, 48, 206, 207, 68, 69, 70, - /* 570 */ 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, - /* 580 */ 23, 82, 83, 84, 85, 86, 87, 88, 89, 90, - /* 590 */ 91, 92, 19, 194, 205, 150, 23, 220, 19, 181, - /* 600 */ 182, 95, 97, 95, 108, 67, 188, 169, 170, 171, - /* 610 */ 165, 94, 95, 222, 50, 224, 225, 218, 120, 150, - /* 620 */ 160, 116, 49, 50, 22, 23, 120, 25, 120, 50, - /* 630 */ 161, 162, 19, 128, 165, 244, 22, 23, 193, 240, - /* 640 */ 27, 68, 69, 70, 71, 72, 73, 74, 75, 76, - /* 650 */ 77, 78, 79, 80, 194, 82, 83, 84, 85, 86, - /* 660 */ 87, 88, 89, 90, 91, 92, 19, 25, 104, 67, - /* 670 */ 232, 24, 150, 23, 150, 25, 150, 150, 150, 150, - /* 680 */ 150, 67, 25, 104, 7, 8, 9, 165, 109, 165, - /* 690 */ 245, 165, 165, 165, 165, 165, 49, 50, 174, 175, - /* 700 */ 174, 175, 174, 175, 174, 175, 0, 1, 2, 105, - /* 710 */ 106, 107, 248, 249, 187, 68, 69, 70, 71, 72, - /* 720 */ 73, 74, 75, 76, 77, 78, 79, 80, 150, 82, - /* 730 */ 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, - /* 740 */ 19, 150, 213, 165, 150, 222, 150, 224, 225, 166, - /* 750 */ 167, 168, 174, 175, 129, 130, 165, 150, 165, 165, - /* 760 */ 150, 165, 150, 241, 35, 174, 175, 174, 174, 175, - /* 770 */ 49, 50, 165, 52, 23, 165, 25, 165, 206, 207, - /* 780 */ 23, 197, 25, 187, 174, 175, 174, 175, 204, 68, - /* 790 */ 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, - /* 800 */ 79, 80, 150, 82, 83, 84, 85, 86, 87, 88, - /* 810 */ 89, 90, 91, 92, 19, 150, 150, 165, 35, 23, - /* 820 */ 150, 25, 150, 22, 217, 24, 174, 175, 150, 35, - /* 830 */ 165, 165, 144, 145, 150, 165, 150, 165, 118, 174, - /* 840 */ 175, 150, 22, 165, 49, 50, 174, 175, 23, 165, - /* 850 */ 25, 165, 23, 187, 25, 27, 165, 187, 174, 175, - /* 860 */ 23, 23, 25, 68, 69, 70, 71, 72, 73, 74, - /* 870 */ 75, 76, 77, 78, 79, 80, 150, 82, 83, 84, - /* 880 */ 85, 86, 87, 88, 89, 90, 91, 92, 19, 150, - /* 890 */ 150, 165, 150, 150, 150, 217, 25, 160, 19, 213, - /* 900 */ 174, 175, 190, 191, 165, 165, 27, 165, 165, 165, - /* 910 */ 150, 150, 150, 174, 175, 39, 174, 175, 49, 50, - /* 920 */ 23, 150, 52, 250, 251, 165, 165, 165, 190, 191, - /* 930 */ 187, 194, 241, 193, 174, 175, 165, 68, 69, 70, - /* 940 */ 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, - /* 950 */ 150, 82, 83, 84, 85, 86, 87, 88, 89, 90, - /* 960 */ 91, 92, 19, 150, 150, 165, 150, 160, 160, 166, - /* 970 */ 166, 160, 150, 22, 174, 175, 91, 92, 165, 165, - /* 980 */ 52, 165, 29, 150, 213, 241, 23, 165, 49, 50, - /* 990 */ 174, 175, 49, 50, 1, 2, 173, 126, 165, 
86, - /* 1000 */ 87, 194, 194, 22, 181, 194, 193, 193, 205, 205, - /* 1010 */ 71, 72, 69, 70, 71, 72, 73, 74, 75, 76, - /* 1020 */ 77, 78, 79, 80, 150, 82, 83, 84, 85, 86, - /* 1030 */ 87, 88, 89, 90, 91, 92, 19, 98, 150, 165, - /* 1040 */ 150, 150, 150, 22, 150, 150, 22, 52, 174, 175, - /* 1050 */ 22, 102, 20, 165, 109, 165, 165, 165, 150, 165, - /* 1060 */ 165, 150, 174, 175, 174, 175, 49, 50, 24, 19, - /* 1070 */ 43, 104, 59, 165, 138, 104, 165, 25, 53, 53, - /* 1080 */ 22, 5, 174, 175, 193, 174, 175, 70, 71, 72, - /* 1090 */ 73, 74, 75, 76, 77, 78, 79, 80, 1, 82, - /* 1100 */ 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, - /* 1110 */ 19, 20, 150, 22, 150, 150, 150, 26, 27, 118, - /* 1120 */ 150, 35, 150, 150, 76, 27, 108, 165, 37, 165, - /* 1130 */ 165, 165, 76, 19, 20, 165, 22, 165, 165, 127, - /* 1140 */ 26, 27, 22, 1, 150, 16, 20, 56, 150, 150, - /* 1150 */ 119, 37, 150, 119, 160, 193, 150, 66, 193, 165, - /* 1160 */ 150, 150, 121, 165, 165, 108, 150, 165, 174, 175, - /* 1170 */ 56, 165, 150, 127, 22, 165, 165, 86, 87, 88, - /* 1180 */ 66, 165, 16, 150, 93, 94, 95, 165, 194, 98, - /* 1190 */ 174, 175, 128, 23, 15, 23, 174, 175, 165, 150, - /* 1200 */ 86, 87, 65, 140, 150, 22, 3, 93, 94, 95, - /* 1210 */ 216, 4, 98, 252, 165, 221, 150, 164, 180, 165, - /* 1220 */ 129, 130, 131, 132, 133, 134, 193, 150, 174, 175, - /* 1230 */ 116, 165, 19, 20, 150, 22, 249, 252, 149, 26, - /* 1240 */ 27, 180, 165, 129, 130, 131, 132, 133, 134, 165, - /* 1250 */ 37, 174, 175, 150, 6, 19, 20, 150, 22, 150, - /* 1260 */ 150, 150, 26, 27, 149, 149, 13, 150, 165, 56, - /* 1270 */ 149, 159, 165, 37, 165, 165, 165, 174, 175, 66, - /* 1280 */ 146, 147, 165, 174, 175, 25, 152, 151, 154, 150, - /* 1290 */ 194, 151, 56, 116, 160, 150, 123, 202, 150, 86, - /* 1300 */ 87, 199, 66, 193, 165, 200, 93, 94, 95, 150, - /* 1310 */ 165, 98, 150, 165, 126, 22, 124, 150, 150, 26, - /* 1320 */ 27, 150, 86, 87, 165, 201, 150, 165, 194, 93, - /* 1330 */ 94, 95, 165, 165, 98, 150, 165, 122, 203, 125, - /* 1340 */ 227, 165, 129, 130, 131, 132, 133, 134, 5, 150, - /* 1350 */ 165, 135, 218, 10, 11, 12, 13, 14, 150, 66, - /* 1360 */ 17, 157, 118, 157, 165, 129, 130, 131, 132, 133, - /* 1370 */ 134, 150, 104, 165, 31, 210, 33, 176, 150, 86, - /* 1380 */ 87, 247, 174, 175, 150, 42, 165, 94, 121, 211, - /* 1390 */ 150, 98, 210, 165, 210, 174, 175, 211, 55, 165, - /* 1400 */ 57, 150, 174, 175, 61, 165, 150, 64, 174, 175, - /* 1410 */ 104, 211, 47, 150, 174, 175, 165, 176, 176, 150, - /* 1420 */ 103, 165, 129, 130, 131, 174, 175, 184, 165, 179, - /* 1430 */ 174, 175, 150, 178, 165, 176, 22, 174, 175, 230, - /* 1440 */ 92, 230, 184, 174, 175, 150, 176, 165, 105, 106, - /* 1450 */ 107, 150, 176, 176, 111, 156, 174, 175, 179, 116, - /* 1460 */ 165, 18, 157, 156, 238, 157, 165, 156, 45, 174, - /* 1470 */ 175, 157, 157, 135, 239, 174, 175, 156, 189, 157, - /* 1480 */ 68, 189, 139, 219, 22, 199, 157, 18, 192, 192, - /* 1490 */ 192, 189, 192, 199, 219, 243, 157, 40, 157, 157, - /* 1500 */ 243, 38, 153, 166, 233, 196, 233, 246, 198, 228, - /* 1510 */ 177, 177, 209, 177, 182, 199, 166, 177, 166, 199, - /* 1520 */ 242, 177, 242, 178, 148, 166, 196, 209, 209, 92, - /* 1530 */ 195, 208, 174, 186, 183, 174, 183, 191, 183, 253, - /* 1540 */ 236, 253, 186, 253, 253, 253, 253, 253, 253, 237, -}; -#define YY_SHIFT_USE_DFLT (-90) -#define YY_SHIFT_COUNT (418) -#define YY_SHIFT_MIN (-89) -#define YY_SHIFT_MAX (1469) + /* 0 */ 19, 144, 145, 146, 147, 24, 1, 2, 27, 80, + /* 10 */ 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, + /* 20 */ 91, 92, 93, 94, 95, 91, 92, 93, 
94, 95, + /* 30 */ 19, 50, 51, 80, 81, 82, 83, 27, 85, 86, + /* 40 */ 87, 88, 89, 90, 91, 92, 93, 94, 95, 137, + /* 50 */ 177, 139, 71, 72, 73, 74, 75, 76, 77, 78, + /* 60 */ 79, 80, 81, 82, 83, 95, 85, 86, 87, 88, + /* 70 */ 89, 90, 91, 92, 93, 94, 95, 19, 97, 85, + /* 80 */ 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, + /* 90 */ 152, 33, 212, 173, 27, 28, 223, 224, 225, 152, + /* 100 */ 42, 181, 152, 223, 224, 225, 95, 97, 50, 51, + /* 110 */ 99, 100, 101, 102, 103, 104, 105, 27, 28, 59, + /* 120 */ 223, 224, 225, 112, 174, 175, 66, 94, 95, 71, + /* 130 */ 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, + /* 140 */ 82, 83, 195, 85, 86, 87, 88, 89, 90, 91, + /* 150 */ 92, 93, 94, 95, 19, 197, 89, 90, 220, 209, + /* 160 */ 210, 26, 119, 120, 97, 98, 208, 100, 108, 109, + /* 170 */ 110, 152, 157, 152, 107, 152, 109, 89, 90, 91, + /* 180 */ 92, 93, 94, 95, 163, 50, 51, 97, 98, 99, + /* 190 */ 55, 66, 102, 103, 104, 174, 175, 174, 175, 132, + /* 200 */ 133, 192, 193, 113, 66, 19, 71, 72, 73, 74, + /* 210 */ 75, 76, 77, 78, 79, 80, 81, 82, 83, 198, + /* 220 */ 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, + /* 230 */ 95, 152, 209, 210, 148, 149, 50, 51, 100, 53, + /* 240 */ 154, 59, 156, 174, 229, 27, 28, 232, 163, 163, + /* 250 */ 22, 192, 193, 174, 175, 27, 28, 71, 72, 73, + /* 260 */ 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, + /* 270 */ 251, 85, 86, 87, 88, 89, 90, 91, 92, 93, + /* 280 */ 94, 95, 19, 198, 198, 152, 152, 24, 209, 210, + /* 290 */ 108, 109, 110, 196, 197, 27, 28, 69, 164, 165, + /* 300 */ 152, 119, 120, 27, 28, 208, 99, 27, 28, 102, + /* 310 */ 103, 104, 152, 50, 51, 97, 98, 89, 90, 185, + /* 320 */ 113, 187, 22, 177, 174, 97, 58, 27, 28, 101, + /* 330 */ 115, 245, 117, 118, 71, 72, 73, 74, 75, 76, + /* 340 */ 77, 78, 79, 80, 81, 82, 83, 11, 85, 86, + /* 350 */ 87, 88, 89, 90, 91, 92, 93, 94, 95, 19, + /* 360 */ 132, 133, 134, 23, 152, 97, 98, 91, 174, 223, + /* 370 */ 224, 225, 239, 97, 98, 187, 22, 97, 98, 27, + /* 380 */ 28, 27, 28, 152, 223, 224, 225, 239, 152, 163, + /* 390 */ 50, 51, 170, 171, 172, 59, 160, 97, 98, 239, + /* 400 */ 164, 165, 66, 242, 124, 174, 175, 195, 22, 23, + /* 410 */ 69, 71, 72, 73, 74, 75, 76, 77, 78, 79, + /* 420 */ 80, 81, 82, 83, 198, 85, 86, 87, 88, 89, + /* 430 */ 90, 91, 92, 93, 94, 95, 19, 12, 97, 21, + /* 440 */ 23, 152, 101, 152, 108, 109, 110, 221, 152, 97, + /* 450 */ 98, 97, 98, 152, 29, 243, 70, 226, 23, 233, + /* 460 */ 26, 26, 152, 152, 238, 174, 175, 50, 51, 22, + /* 470 */ 45, 24, 47, 132, 133, 134, 124, 22, 23, 188, + /* 480 */ 163, 26, 152, 65, 59, 174, 175, 163, 71, 72, + /* 490 */ 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, + /* 500 */ 83, 152, 85, 86, 87, 88, 89, 90, 91, 92, + /* 510 */ 93, 94, 95, 19, 19, 198, 152, 23, 152, 152, + /* 520 */ 209, 103, 198, 174, 175, 70, 152, 50, 51, 219, + /* 530 */ 213, 214, 152, 98, 152, 171, 172, 188, 171, 172, + /* 540 */ 174, 175, 248, 249, 50, 51, 51, 251, 174, 175, + /* 550 */ 220, 74, 75, 152, 188, 152, 174, 175, 140, 124, + /* 560 */ 26, 163, 188, 16, 130, 71, 72, 73, 74, 75, + /* 570 */ 76, 77, 78, 79, 80, 81, 82, 83, 101, 85, + /* 580 */ 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, + /* 590 */ 19, 209, 196, 197, 23, 231, 198, 12, 231, 219, + /* 600 */ 37, 22, 107, 24, 208, 116, 27, 112, 201, 62, + /* 610 */ 121, 64, 152, 152, 29, 52, 66, 221, 211, 221, + /* 620 */ 219, 50, 51, 60, 89, 90, 152, 22, 23, 183, + /* 630 */ 45, 26, 47, 22, 174, 175, 238, 152, 164, 165, + /* 640 */ 106, 19, 71, 72, 73, 74, 75, 76, 77, 78, + /* 650 */ 79, 80, 81, 82, 83, 152, 85, 86, 87, 88, + /* 660 */ 89, 90, 91, 92, 93, 94, 95, 
132, 133, 119, + /* 670 */ 120, 163, 50, 51, 111, 70, 97, 174, 175, 181, + /* 680 */ 182, 170, 171, 172, 0, 1, 2, 140, 190, 108, + /* 690 */ 109, 110, 51, 71, 72, 73, 74, 75, 76, 77, + /* 700 */ 78, 79, 80, 81, 82, 83, 198, 85, 86, 87, + /* 710 */ 88, 89, 90, 91, 92, 93, 94, 95, 19, 152, + /* 720 */ 152, 152, 22, 166, 152, 168, 169, 27, 19, 152, + /* 730 */ 26, 19, 152, 122, 152, 24, 152, 28, 27, 152, + /* 740 */ 28, 174, 175, 174, 175, 152, 174, 175, 107, 50, + /* 750 */ 51, 174, 175, 22, 174, 175, 174, 175, 174, 175, + /* 760 */ 138, 174, 175, 7, 8, 9, 16, 174, 175, 152, + /* 770 */ 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, + /* 780 */ 81, 82, 83, 152, 85, 86, 87, 88, 89, 90, + /* 790 */ 91, 92, 93, 94, 95, 19, 152, 97, 152, 31, + /* 800 */ 24, 152, 98, 35, 101, 174, 175, 152, 97, 152, + /* 810 */ 79, 152, 62, 152, 64, 112, 152, 49, 174, 175, + /* 820 */ 174, 175, 152, 174, 175, 152, 50, 51, 124, 174, + /* 830 */ 175, 174, 175, 174, 175, 174, 175, 138, 174, 175, + /* 840 */ 22, 23, 152, 152, 174, 175, 19, 71, 72, 73, + /* 850 */ 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, + /* 860 */ 152, 85, 86, 87, 88, 89, 90, 91, 92, 93, + /* 870 */ 94, 95, 152, 108, 109, 110, 152, 50, 51, 146, + /* 880 */ 147, 23, 174, 175, 26, 195, 195, 152, 70, 152, + /* 890 */ 168, 169, 152, 26, 174, 175, 152, 19, 71, 72, + /* 900 */ 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, + /* 910 */ 83, 152, 85, 86, 87, 88, 89, 90, 91, 92, + /* 920 */ 93, 94, 95, 152, 246, 247, 213, 214, 50, 51, + /* 930 */ 195, 152, 195, 174, 175, 195, 100, 101, 152, 195, + /* 940 */ 152, 152, 7, 8, 152, 174, 175, 163, 19, 71, + /* 950 */ 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, + /* 960 */ 82, 83, 152, 85, 86, 87, 88, 89, 90, 91, + /* 970 */ 92, 93, 94, 95, 152, 27, 152, 189, 189, 50, + /* 980 */ 51, 195, 198, 152, 174, 175, 33, 132, 133, 152, + /* 990 */ 123, 163, 163, 163, 152, 42, 174, 175, 152, 19, + /* 1000 */ 152, 72, 73, 74, 75, 76, 77, 78, 79, 80, + /* 1010 */ 81, 82, 83, 152, 85, 86, 87, 88, 89, 90, + /* 1020 */ 91, 92, 93, 94, 95, 152, 198, 198, 198, 23, + /* 1030 */ 50, 51, 26, 152, 23, 174, 175, 26, 23, 23, + /* 1040 */ 23, 26, 26, 26, 152, 97, 23, 174, 175, 26, + /* 1050 */ 132, 133, 152, 73, 74, 75, 76, 77, 78, 79, + /* 1060 */ 80, 81, 82, 83, 152, 85, 86, 87, 88, 89, + /* 1070 */ 90, 91, 92, 93, 94, 95, 19, 20, 23, 22, + /* 1080 */ 23, 26, 152, 152, 27, 28, 174, 175, 152, 19, + /* 1090 */ 20, 27, 22, 183, 183, 38, 152, 27, 28, 152, + /* 1100 */ 23, 152, 152, 26, 174, 175, 152, 152, 38, 152, + /* 1110 */ 23, 152, 27, 26, 57, 152, 215, 163, 152, 152, + /* 1120 */ 152, 174, 175, 66, 174, 175, 69, 57, 174, 175, + /* 1130 */ 152, 174, 175, 174, 175, 212, 66, 174, 175, 69, + /* 1140 */ 174, 175, 174, 175, 152, 152, 89, 90, 152, 193, + /* 1150 */ 152, 152, 198, 96, 97, 98, 91, 152, 101, 89, + /* 1160 */ 90, 97, 152, 209, 210, 152, 96, 97, 98, 235, + /* 1170 */ 152, 101, 174, 175, 152, 19, 20, 152, 22, 174, + /* 1180 */ 175, 116, 97, 27, 28, 152, 121, 174, 175, 132, + /* 1190 */ 133, 134, 135, 136, 38, 152, 174, 175, 152, 174, + /* 1200 */ 175, 152, 132, 133, 134, 135, 136, 234, 152, 212, + /* 1210 */ 150, 199, 212, 57, 212, 240, 240, 203, 178, 200, + /* 1220 */ 216, 186, 177, 19, 20, 69, 22, 203, 177, 182, + /* 1230 */ 177, 27, 28, 202, 200, 228, 216, 216, 155, 39, + /* 1240 */ 122, 159, 38, 159, 41, 89, 90, 91, 159, 241, + /* 1250 */ 241, 22, 96, 97, 98, 71, 130, 101, 222, 191, + /* 1260 */ 18, 57, 203, 194, 159, 194, 194, 194, 18, 158, + /* 1270 */ 244, 191, 222, 69, 159, 158, 137, 19, 20, 203, + /* 1280 */ 22, 191, 203, 46, 236, 27, 28, 
159, 132, 133, + /* 1290 */ 134, 135, 136, 89, 90, 237, 38, 159, 158, 22, + /* 1300 */ 96, 97, 98, 179, 159, 101, 158, 48, 159, 158, + /* 1310 */ 179, 176, 107, 176, 184, 57, 106, 176, 184, 176, + /* 1320 */ 125, 179, 178, 176, 218, 107, 176, 69, 176, 217, + /* 1330 */ 159, 218, 218, 217, 159, 137, 132, 133, 134, 135, + /* 1340 */ 136, 218, 217, 179, 217, 179, 227, 89, 90, 95, + /* 1350 */ 230, 230, 129, 207, 96, 97, 98, 126, 128, 101, + /* 1360 */ 5, 206, 205, 127, 204, 10, 11, 12, 13, 14, + /* 1370 */ 203, 25, 17, 162, 26, 161, 13, 153, 153, 6, + /* 1380 */ 247, 180, 250, 151, 151, 151, 151, 32, 180, 34, + /* 1390 */ 132, 133, 134, 135, 136, 167, 4, 3, 43, 22, + /* 1400 */ 15, 68, 142, 250, 16, 23, 23, 120, 111, 131, + /* 1410 */ 20, 56, 123, 125, 16, 1, 123, 131, 63, 79, + /* 1420 */ 79, 66, 67, 111, 28, 36, 122, 1, 5, 22, + /* 1430 */ 107, 140, 54, 54, 26, 61, 107, 44, 20, 24, + /* 1440 */ 19, 105, 112, 23, 22, 40, 22, 22, 53, 22, + /* 1450 */ 53, 22, 53, 23, 23, 23, 22, 22, 30, 116, + /* 1460 */ 23, 122, 26, 23, 23, 22, 28, 11, 124, 114, + /* 1470 */ 26, 26, 23, 23, 23, 36, 24, 23, 36, 26, + /* 1480 */ 22, 22, 36, 23, 122, 23, 22, 26, 22, 24, + /* 1490 */ 23, 23, 23, 22, 122, 23, 141, 122, 122, 15, + /* 1500 */ 1, +}; +#define YY_SHIFT_USE_DFLT (-89) +#define YY_SHIFT_COUNT (435) +#define YY_SHIFT_MIN (-88) +#define YY_SHIFT_MAX (1499) static const short yy_shift_ofst[] = { - /* 0 */ 993, 1114, 1343, 1114, 1213, 1213, 90, 90, 0, -19, - /* 10 */ 1213, 1213, 1213, 1213, 1213, 352, 517, 721, 1091, 1213, - /* 20 */ 1213, 1213, 1213, 1213, 1213, 1213, 1213, 1213, 1213, 1213, - /* 30 */ 1213, 1213, 1213, 1213, 1213, 1213, 1213, 1213, 1213, 1213, - /* 40 */ 1213, 1213, 1213, 1213, 1213, 1213, 1213, 1236, 1213, 1213, - /* 50 */ 1213, 1213, 1213, 1213, 1213, 1213, 1213, 1213, 1213, 1213, - /* 60 */ 1213, -49, 199, 517, 517, 913, 913, 382, 1177, 55, - /* 70 */ 647, 573, 499, 425, 351, 277, 203, 129, 795, 795, - /* 80 */ 795, 795, 795, 795, 795, 795, 795, 795, 795, 795, - /* 90 */ 795, 795, 795, 795, 795, 795, 869, 795, 943, 1017, - /* 100 */ 1017, -69, -69, -69, -69, -1, -1, 58, 138, -44, - /* 110 */ 517, 517, 517, 517, 517, 517, 517, 517, 517, 517, - /* 120 */ 517, 517, 517, 517, 517, 517, 202, 579, 517, 517, - /* 130 */ 517, 517, 517, 382, 885, 1437, -90, -90, -90, 1293, - /* 140 */ 73, 272, 272, 309, 311, 297, 282, 216, 602, 538, - /* 150 */ 517, 517, 517, 517, 517, 517, 517, 517, 517, 517, - /* 160 */ 517, 517, 517, 517, 517, 517, 517, 517, 517, 517, - /* 170 */ 517, 517, 517, 517, 517, 517, 517, 517, 517, 517, - /* 180 */ 517, 517, 505, 231, 231, 231, 706, 64, 1177, 1177, - /* 190 */ 1177, -90, -90, -90, 136, 168, 168, 12, 496, 496, - /* 200 */ 496, 506, 423, 512, 370, 349, 335, 149, 149, 149, - /* 210 */ 149, 604, 516, 149, 149, 508, 3, 299, 677, 871, - /* 220 */ 613, 613, 879, 871, 879, 144, 382, 226, 382, 226, - /* 230 */ 564, 226, 613, 226, 226, 404, 625, 625, 382, 426, - /* 240 */ -89, 801, 1463, 1244, 1244, 1457, 1457, 1244, 1462, 1412, - /* 250 */ 1188, 1469, 1469, 1469, 1469, 1244, 1188, 1462, 1412, 1412, - /* 260 */ 1244, 1443, 1338, 1423, 1244, 1244, 1443, 1244, 1443, 1244, - /* 270 */ 1443, 1414, 1306, 1306, 1306, 1365, 1348, 1348, 1414, 1306, - /* 280 */ 1317, 1306, 1365, 1306, 1306, 1267, 1268, 1267, 1268, 1267, - /* 290 */ 1268, 1244, 1244, 1216, 1214, 1215, 1192, 1173, 1188, 1177, - /* 300 */ 1260, 1253, 1253, 1248, 1248, 1248, 1248, -90, -90, -90, - /* 310 */ -90, -90, -90, 939, 102, 614, 84, 133, 14, 837, - /* 320 */ 396, 829, 825, 796, 757, 751, 650, 357, 244, 107, - /* 
330 */ 54, 305, 278, 1207, 1203, 1183, 1063, 1179, 1137, 1166, - /* 340 */ 1172, 1170, 1064, 1152, 1046, 1057, 1034, 1126, 1041, 1129, - /* 350 */ 1142, 1031, 1120, 1012, 1056, 1048, 1018, 1098, 1086, 1001, - /* 360 */ 1097, 1076, 1058, 971, 936, 1026, 1052, 1025, 1013, 1027, - /* 370 */ 967, 1044, 1032, 1050, 945, 949, 1028, 995, 1024, 1021, - /* 380 */ 963, 981, 928, 953, 951, 870, 876, 897, 838, 720, - /* 390 */ 828, 794, 820, 498, 642, 783, 657, 729, 642, 557, - /* 400 */ 507, 509, 497, 470, 478, 449, 294, 228, 443, 23, - /* 410 */ 152, 123, 68, -20, -42, 57, 39, -3, 5, + /* 0 */ 5, 1057, 1355, 1070, 1204, 1204, 1204, 90, 60, -19, + /* 10 */ 58, 58, 186, 1204, 1204, 1204, 1204, 1204, 1204, 1204, + /* 20 */ 67, 67, 182, 336, 218, 550, 135, 263, 340, 417, + /* 30 */ 494, 571, 622, 699, 776, 827, 827, 827, 827, 827, + /* 40 */ 827, 827, 827, 827, 827, 827, 827, 827, 827, 827, + /* 50 */ 878, 827, 929, 980, 980, 1156, 1204, 1204, 1204, 1204, + /* 60 */ 1204, 1204, 1204, 1204, 1204, 1204, 1204, 1204, 1204, 1204, + /* 70 */ 1204, 1204, 1204, 1204, 1204, 1204, 1204, 1204, 1204, 1204, + /* 80 */ 1204, 1204, 1204, 1204, 1258, 1204, 1204, 1204, 1204, 1204, + /* 90 */ 1204, 1204, 1204, 1204, 1204, 1204, 1204, 1204, -71, -47, + /* 100 */ -47, -47, -47, -47, -6, 88, -66, 218, 218, 418, + /* 110 */ 495, 535, 535, 33, 43, 10, -30, -89, -89, -89, + /* 120 */ 11, 425, 425, 268, 455, 605, 218, 218, 218, 218, + /* 130 */ 218, 218, 218, 218, 218, 218, 218, 218, 218, 218, + /* 140 */ 218, 218, 218, 218, 218, 684, 138, 10, 43, 125, + /* 150 */ 125, 125, 125, 125, 125, -89, -89, -89, 228, 341, + /* 160 */ 341, 207, 276, 300, 280, 352, 354, 218, 218, 218, + /* 170 */ 218, 218, 218, 218, 218, 218, 218, 218, 218, 218, + /* 180 */ 218, 218, 218, 218, 563, 563, 563, 218, 218, 435, + /* 190 */ 218, 218, 218, 579, 218, 218, 585, 218, 218, 218, + /* 200 */ 218, 218, 218, 218, 218, 218, 218, 581, 768, 711, + /* 210 */ 711, 711, 704, 215, 1065, 756, 434, 709, 709, 712, + /* 220 */ 434, 712, 534, 858, 641, 953, 709, -88, 953, 953, + /* 230 */ 867, 489, 447, 1200, 1118, 1118, 1203, 1203, 1118, 1229, + /* 240 */ 1184, 1126, 1242, 1242, 1242, 1242, 1118, 1250, 1126, 1229, + /* 250 */ 1184, 1184, 1126, 1118, 1250, 1139, 1237, 1118, 1118, 1250, + /* 260 */ 1277, 1118, 1250, 1118, 1250, 1277, 1205, 1205, 1205, 1259, + /* 270 */ 1277, 1205, 1210, 1205, 1259, 1205, 1205, 1195, 1218, 1195, + /* 280 */ 1218, 1195, 1218, 1195, 1218, 1118, 1118, 1198, 1277, 1254, + /* 290 */ 1254, 1277, 1223, 1231, 1230, 1236, 1126, 1346, 1348, 1363, + /* 300 */ 1363, 1373, 1373, 1373, 1373, -89, -89, -89, -89, -89, + /* 310 */ -89, 477, 547, 386, 818, 750, 765, 700, 1006, 731, + /* 320 */ 1011, 1015, 1016, 1017, 948, 836, 935, 703, 1023, 1055, + /* 330 */ 1064, 1077, 855, 918, 1087, 1085, 611, 1392, 1394, 1377, + /* 340 */ 1260, 1385, 1333, 1388, 1382, 1383, 1287, 1278, 1297, 1289, + /* 350 */ 1390, 1288, 1398, 1414, 1293, 1286, 1340, 1341, 1312, 1396, + /* 360 */ 1389, 1304, 1426, 1423, 1407, 1323, 1291, 1378, 1408, 1379, + /* 370 */ 1374, 1393, 1329, 1415, 1418, 1421, 1330, 1336, 1422, 1395, + /* 380 */ 1424, 1425, 1420, 1427, 1397, 1428, 1429, 1399, 1405, 1430, + /* 390 */ 1431, 1432, 1343, 1434, 1437, 1435, 1436, 1339, 1440, 1441, + /* 400 */ 1438, 1439, 1443, 1344, 1444, 1442, 1445, 1446, 1444, 1449, + /* 410 */ 1450, 1451, 1453, 1454, 1458, 1456, 1460, 1459, 1452, 1461, + /* 420 */ 1462, 1464, 1465, 1461, 1467, 1466, 1468, 1469, 1471, 1362, + /* 430 */ 1372, 1375, 1376, 1472, 1484, 1499, }; -#define YY_REDUCE_USE_DFLT (-222) -#define 
YY_REDUCE_COUNT (312) -#define YY_REDUCE_MIN (-221) -#define YY_REDUCE_MAX (1376) +#define YY_REDUCE_USE_DFLT (-144) +#define YY_REDUCE_COUNT (310) +#define YY_REDUCE_MIN (-143) +#define YY_REDUCE_MAX (1235) static const short yy_reduce_ofst[] = { - /* 0 */ 310, 994, 1134, 221, 169, 157, 89, 18, 83, 301, - /* 10 */ 377, 316, 312, 16, 295, 238, 249, 391, 1301, 1295, - /* 20 */ 1282, 1269, 1263, 1256, 1251, 1240, 1234, 1228, 1221, 1208, - /* 30 */ 1109, 1103, 1077, 1054, 1022, 1016, 911, 908, 890, 888, - /* 40 */ 874, 816, 800, 760, 742, 739, 726, 684, 672, 665, - /* 50 */ 652, 612, 610, 594, 591, 578, 530, 528, 526, 524, - /* 60 */ -72, -221, 399, 469, 445, 438, 143, 222, 359, 523, - /* 70 */ 523, 523, 523, 523, 523, 523, 523, 523, 523, 523, - /* 80 */ 523, 523, 523, 523, 523, 523, 523, 523, 523, 523, - /* 90 */ 523, 523, 523, 523, 523, 523, 523, 523, 523, 523, - /* 100 */ 523, 523, 523, 523, 523, 523, 523, 307, 523, 523, - /* 110 */ 1110, 678, 1033, 965, 962, 891, 814, 813, 744, 771, - /* 120 */ 691, 607, 522, 743, 686, 740, 328, 418, 670, 666, - /* 130 */ 596, 527, 529, 583, 523, 523, 523, 523, 523, 593, - /* 140 */ 823, 738, 712, 892, 1199, 1185, 1176, 1171, 673, 673, - /* 150 */ 1168, 1167, 1162, 1159, 1148, 1145, 1139, 1117, 1111, 1107, - /* 160 */ 1084, 1066, 1049, 1011, 1010, 1006, 1002, 999, 998, 973, - /* 170 */ 972, 970, 966, 964, 895, 894, 892, 833, 822, 762, - /* 180 */ 761, 229, 811, 804, 803, 389, 688, 808, 807, 737, - /* 190 */ 460, 464, 572, 584, 1356, 1361, 1358, 1347, 1355, 1353, - /* 200 */ 1351, 1323, 1335, 1346, 1335, 1335, 1335, 1335, 1335, 1335, - /* 210 */ 1335, 1312, 1304, 1335, 1335, 1323, 1359, 1330, 1376, 1320, - /* 220 */ 1319, 1318, 1280, 1316, 1278, 1345, 1352, 1344, 1350, 1340, - /* 230 */ 1332, 1336, 1303, 1334, 1333, 1281, 1273, 1271, 1337, 1310, - /* 240 */ 1309, 1349, 1261, 1342, 1341, 1257, 1252, 1339, 1275, 1302, - /* 250 */ 1294, 1300, 1298, 1297, 1296, 1329, 1286, 1264, 1292, 1289, - /* 260 */ 1322, 1321, 1235, 1226, 1315, 1314, 1311, 1308, 1307, 1305, - /* 270 */ 1299, 1279, 1277, 1276, 1270, 1258, 1211, 1209, 1250, 1259, - /* 280 */ 1255, 1242, 1243, 1241, 1201, 1200, 1184, 1186, 1182, 1178, - /* 290 */ 1165, 1206, 1204, 1113, 1135, 1095, 1124, 1105, 1102, 1096, - /* 300 */ 1112, 1140, 1136, 1121, 1116, 1115, 1089, 985, 961, 987, - /* 310 */ 1061, 1038, 1053, + /* 0 */ -143, 954, 86, 21, -50, 23, 79, 134, 226, -120, + /* 10 */ -127, 146, 161, 291, 349, 366, 311, 382, 374, 231, + /* 20 */ 364, 367, 396, 398, 236, 317, -103, -103, -103, -103, + /* 30 */ -103, -103, -103, -103, -103, -103, -103, -103, -103, -103, + /* 40 */ -103, -103, -103, -103, -103, -103, -103, -103, -103, -103, + /* 50 */ -103, -103, -103, -103, -103, 460, 503, 567, 569, 572, + /* 60 */ 577, 580, 582, 584, 587, 593, 631, 644, 646, 649, + /* 70 */ 655, 657, 659, 661, 664, 670, 708, 720, 759, 771, + /* 80 */ 810, 822, 861, 873, 912, 930, 947, 950, 957, 959, + /* 90 */ 963, 966, 968, 998, 1005, 1013, 1022, 1025, -103, -103, + /* 100 */ -103, -103, -103, -103, -103, -103, -103, 474, 212, 15, + /* 110 */ 498, 222, 511, -103, 97, 557, -103, -103, -103, -103, + /* 120 */ -80, 9, 59, 19, 294, 294, -53, -62, 690, 691, + /* 130 */ 735, 737, 740, 744, 133, 310, 148, 330, 160, 380, + /* 140 */ 786, 788, 401, 296, 789, 733, 85, 722, -42, 324, + /* 150 */ 508, 784, 828, 829, 830, 678, 713, 407, 69, 150, + /* 160 */ 194, 188, 289, 301, 403, 461, 485, 568, 617, 673, + /* 170 */ 724, 779, 792, 824, 831, 837, 842, 846, 848, 881, + /* 180 */ 892, 900, 931, 936, 446, 910, 911, 944, 949, 901, + /* 190 
*/ 955, 967, 978, 923, 992, 993, 956, 996, 999, 1010, + /* 200 */ 289, 1018, 1033, 1043, 1046, 1049, 1056, 934, 973, 997, + /* 210 */ 1000, 1002, 901, 1012, 1019, 1060, 1014, 1004, 1020, 975, + /* 220 */ 1024, 976, 1040, 1035, 1047, 1045, 1021, 1007, 1051, 1053, + /* 230 */ 1031, 1034, 1083, 1026, 1082, 1084, 1008, 1009, 1089, 1036, + /* 240 */ 1068, 1059, 1069, 1071, 1072, 1073, 1105, 1111, 1076, 1050, + /* 250 */ 1080, 1090, 1079, 1115, 1117, 1058, 1048, 1128, 1138, 1140, + /* 260 */ 1124, 1145, 1148, 1149, 1151, 1131, 1135, 1137, 1141, 1130, + /* 270 */ 1142, 1143, 1144, 1147, 1134, 1150, 1152, 1106, 1112, 1113, + /* 280 */ 1116, 1114, 1125, 1123, 1127, 1171, 1175, 1119, 1164, 1120, + /* 290 */ 1121, 1166, 1146, 1155, 1157, 1160, 1167, 1211, 1214, 1224, + /* 300 */ 1225, 1232, 1233, 1234, 1235, 1132, 1153, 1133, 1201, 1208, + /* 310 */ 1228, }; static const YYACTIONTYPE yy_default[] = { - /* 0 */ 636, 872, 961, 961, 961, 872, 901, 901, 961, 760, - /* 10 */ 961, 961, 961, 961, 870, 961, 961, 935, 961, 961, - /* 20 */ 961, 961, 961, 961, 961, 961, 961, 961, 961, 961, - /* 30 */ 961, 961, 961, 961, 961, 961, 961, 961, 961, 961, - /* 40 */ 961, 961, 961, 961, 961, 961, 961, 961, 961, 961, - /* 50 */ 961, 961, 961, 961, 961, 961, 961, 961, 961, 961, - /* 60 */ 961, 844, 961, 961, 961, 901, 901, 675, 764, 795, - /* 70 */ 961, 961, 961, 961, 961, 961, 961, 961, 934, 936, - /* 80 */ 810, 809, 803, 802, 914, 775, 800, 793, 786, 797, - /* 90 */ 873, 866, 867, 865, 869, 874, 961, 796, 832, 850, - /* 100 */ 831, 849, 856, 848, 834, 843, 833, 667, 835, 836, - /* 110 */ 961, 961, 961, 961, 961, 961, 961, 961, 961, 961, - /* 120 */ 961, 961, 961, 961, 961, 961, 662, 729, 961, 961, - /* 130 */ 961, 961, 961, 961, 837, 838, 853, 852, 851, 961, - /* 140 */ 961, 961, 961, 961, 961, 961, 961, 961, 961, 961, - /* 150 */ 961, 941, 939, 961, 885, 961, 961, 961, 961, 961, - /* 160 */ 961, 961, 961, 961, 961, 961, 961, 961, 961, 961, - /* 170 */ 961, 961, 961, 961, 961, 961, 961, 961, 961, 961, - /* 180 */ 961, 642, 961, 760, 760, 760, 636, 961, 961, 961, - /* 190 */ 961, 953, 764, 754, 720, 961, 961, 961, 961, 961, - /* 200 */ 961, 961, 961, 961, 961, 961, 961, 805, 743, 924, - /* 210 */ 926, 961, 907, 741, 664, 762, 677, 752, 644, 799, - /* 220 */ 777, 777, 919, 799, 919, 701, 961, 789, 961, 789, - /* 230 */ 698, 789, 777, 789, 789, 868, 961, 961, 961, 761, - /* 240 */ 752, 961, 946, 768, 768, 938, 938, 768, 811, 733, - /* 250 */ 799, 740, 740, 740, 740, 768, 799, 811, 733, 733, - /* 260 */ 768, 659, 913, 911, 768, 768, 659, 768, 659, 768, - /* 270 */ 659, 878, 731, 731, 731, 716, 882, 882, 878, 731, - /* 280 */ 701, 731, 716, 731, 731, 781, 776, 781, 776, 781, - /* 290 */ 776, 768, 768, 961, 794, 782, 792, 790, 799, 961, - /* 300 */ 719, 652, 652, 641, 641, 641, 641, 958, 958, 953, - /* 310 */ 703, 703, 685, 961, 961, 961, 961, 961, 961, 961, - /* 320 */ 887, 961, 961, 961, 961, 961, 961, 961, 961, 961, - /* 330 */ 961, 961, 961, 961, 637, 948, 961, 961, 945, 961, - /* 340 */ 961, 961, 961, 961, 961, 961, 961, 961, 961, 961, - /* 350 */ 961, 961, 961, 961, 961, 961, 961, 961, 961, 917, - /* 360 */ 961, 961, 961, 961, 961, 961, 910, 909, 961, 961, - /* 370 */ 961, 961, 961, 961, 961, 961, 961, 961, 961, 961, - /* 380 */ 961, 961, 961, 961, 961, 961, 961, 961, 961, 961, - /* 390 */ 961, 961, 961, 961, 791, 961, 783, 961, 871, 961, - /* 400 */ 961, 961, 961, 961, 961, 961, 961, 961, 961, 746, - /* 410 */ 820, 961, 819, 823, 818, 669, 961, 650, 961, 633, - /* 420 */ 638, 957, 960, 959, 956, 955, 954, 949, 947, 
944, - /* 430 */ 943, 942, 940, 937, 933, 891, 889, 896, 895, 894, - /* 440 */ 893, 892, 890, 888, 886, 806, 804, 801, 798, 932, - /* 450 */ 884, 742, 739, 738, 658, 950, 916, 925, 923, 812, - /* 460 */ 922, 921, 920, 918, 915, 902, 808, 807, 734, 876, - /* 470 */ 875, 661, 906, 905, 904, 908, 912, 903, 770, 660, - /* 480 */ 657, 666, 723, 722, 730, 728, 727, 726, 725, 724, - /* 490 */ 721, 668, 676, 687, 715, 700, 699, 881, 883, 880, - /* 500 */ 879, 708, 707, 713, 712, 711, 710, 709, 706, 705, - /* 510 */ 704, 697, 696, 702, 695, 718, 717, 714, 694, 737, - /* 520 */ 736, 735, 732, 693, 692, 691, 823, 690, 689, 829, - /* 530 */ 828, 816, 860, 757, 756, 755, 767, 766, 779, 778, - /* 540 */ 814, 813, 780, 765, 759, 758, 774, 773, 772, 771, - /* 550 */ 763, 753, 785, 788, 787, 784, 845, 862, 769, 859, - /* 560 */ 931, 930, 929, 928, 927, 864, 863, 830, 827, 680, - /* 570 */ 681, 900, 898, 899, 897, 683, 682, 679, 678, 861, - /* 580 */ 748, 747, 857, 854, 846, 841, 858, 855, 847, 842, - /* 590 */ 840, 839, 825, 824, 822, 821, 817, 826, 671, 749, - /* 600 */ 745, 744, 815, 751, 750, 688, 686, 684, 665, 663, - /* 610 */ 656, 654, 653, 655, 651, 649, 648, 647, 646, 645, - /* 620 */ 674, 673, 672, 670, 669, 643, 640, 639, 635, 634, - /* 630 */ 632, -}; - -/* The next table maps tokens into fallback tokens. If a construct -** like the following: + /* 0 */ 982, 1300, 1300, 1300, 1214, 1214, 1214, 1305, 1300, 1109, + /* 10 */ 1138, 1138, 1274, 1305, 1305, 1305, 1305, 1305, 1305, 1212, + /* 20 */ 1305, 1305, 1305, 1300, 1305, 1113, 1144, 1305, 1305, 1305, + /* 30 */ 1305, 1305, 1305, 1305, 1305, 1273, 1275, 1152, 1151, 1254, + /* 40 */ 1125, 1149, 1142, 1146, 1215, 1208, 1209, 1207, 1211, 1216, + /* 50 */ 1305, 1145, 1177, 1192, 1176, 1305, 1305, 1305, 1305, 1305, + /* 60 */ 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, + /* 70 */ 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, + /* 80 */ 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, + /* 90 */ 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1186, 1191, + /* 100 */ 1198, 1190, 1187, 1179, 1178, 1180, 1181, 1305, 1305, 1008, + /* 110 */ 1074, 1305, 1305, 1182, 1305, 1020, 1183, 1195, 1194, 1193, + /* 120 */ 1015, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, + /* 130 */ 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, + /* 140 */ 1305, 1305, 1305, 1305, 1305, 982, 1300, 1305, 1305, 1300, + /* 150 */ 1300, 1300, 1300, 1300, 1300, 1292, 1113, 1103, 1305, 1305, + /* 160 */ 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1280, 1278, + /* 170 */ 1305, 1227, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, + /* 180 */ 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, + /* 190 */ 1305, 1305, 1305, 1109, 1305, 1305, 1305, 1305, 1305, 1305, + /* 200 */ 1305, 1305, 1305, 1305, 1305, 1305, 988, 1305, 1247, 1109, + /* 210 */ 1109, 1109, 1111, 1089, 1101, 990, 1148, 1127, 1127, 1259, + /* 220 */ 1148, 1259, 1045, 1068, 1042, 1138, 1127, 1210, 1138, 1138, + /* 230 */ 1110, 1101, 1305, 1285, 1118, 1118, 1277, 1277, 1118, 1157, + /* 240 */ 1078, 1148, 1085, 1085, 1085, 1085, 1118, 1005, 1148, 1157, + /* 250 */ 1078, 1078, 1148, 1118, 1005, 1253, 1251, 1118, 1118, 1005, + /* 260 */ 1220, 1118, 1005, 1118, 1005, 1220, 1076, 1076, 1076, 1060, + /* 270 */ 1220, 1076, 1045, 1076, 1060, 1076, 1076, 1131, 1126, 1131, + /* 280 */ 1126, 1131, 1126, 1131, 1126, 1118, 1118, 1305, 1220, 1224, + /* 290 */ 1224, 1220, 1143, 1132, 1141, 1139, 1148, 1011, 1063, 998, + /* 300 */ 998, 987, 987, 987, 987, 1297, 1297, 
1292, 1047, 1047, + /* 310 */ 1030, 1305, 1305, 1305, 1305, 1305, 1305, 1022, 1305, 1229, + /* 320 */ 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, + /* 330 */ 1305, 1305, 1305, 1305, 1305, 1305, 1164, 1305, 983, 1287, + /* 340 */ 1305, 1305, 1284, 1305, 1305, 1305, 1305, 1305, 1305, 1305, + /* 350 */ 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, + /* 360 */ 1305, 1257, 1305, 1305, 1305, 1305, 1305, 1305, 1250, 1249, + /* 370 */ 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, + /* 380 */ 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, + /* 390 */ 1305, 1305, 1092, 1305, 1305, 1305, 1096, 1305, 1305, 1305, + /* 400 */ 1305, 1305, 1305, 1305, 1140, 1305, 1133, 1305, 1213, 1305, + /* 410 */ 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1305, 1302, + /* 420 */ 1305, 1305, 1305, 1301, 1305, 1305, 1305, 1305, 1305, 1166, + /* 430 */ 1305, 1165, 1169, 1305, 996, 1305, +}; +/********** End of lemon-generated parsing tables *****************************/ + +/* The next table maps tokens (terminal symbols) into fallback tokens. +** If a construct like the following: ** ** %fallback ID X Y Z. ** ** appears in the grammar, then ID becomes a fallback token for X, Y, ** and Z. Whenever one of the tokens X, Y, or Z is input to the parser ** but it does not parse, the type of the token is changed to ID and ** the parse is retried before an error is thrown. +** +** This feature can be used, for example, to cause some keywords in a language +** to revert to identifiers if they keyword does not apply in the context where +** it appears. */ #ifdef YYFALLBACK static const YYCODETYPE yyFallback[] = { 0, /* $ => nothing */ 0, /* SEMI => nothing */ - 26, /* EXPLAIN => ID */ - 26, /* QUERY => ID */ - 26, /* PLAN => ID */ - 26, /* BEGIN => ID */ + 27, /* EXPLAIN => ID */ + 27, /* QUERY => ID */ + 27, /* PLAN => ID */ + 27, /* BEGIN => ID */ 0, /* TRANSACTION => nothing */ - 26, /* DEFERRED => ID */ - 26, /* IMMEDIATE => ID */ - 26, /* EXCLUSIVE => ID */ + 27, /* DEFERRED => ID */ + 27, /* IMMEDIATE => ID */ + 27, /* EXCLUSIVE => ID */ 0, /* COMMIT => nothing */ - 26, /* END => ID */ - 26, /* ROLLBACK => ID */ - 26, /* SAVEPOINT => ID */ - 26, /* RELEASE => ID */ + 27, /* END => ID */ + 27, /* ROLLBACK => ID */ + 27, /* SAVEPOINT => ID */ + 27, /* RELEASE => ID */ 0, /* TO => nothing */ 0, /* TABLE => nothing */ 0, /* CREATE => nothing */ - 26, /* IF => ID */ + 27, /* IF => ID */ 0, /* NOT => nothing */ 0, /* EXISTS => nothing */ - 26, /* TEMP => ID */ + 27, /* TEMP => ID */ 0, /* LP => nothing */ 0, /* RP => nothing */ 0, /* AS => nothing */ + 27, /* WITHOUT => ID */ 0, /* COMMA => nothing */ 0, /* ID => nothing */ 0, /* INDEXED => nothing */ - 26, /* ABORT => ID */ - 26, /* ACTION => ID */ - 26, /* AFTER => ID */ - 26, /* ANALYZE => ID */ - 26, /* ASC => ID */ - 26, /* ATTACH => ID */ - 26, /* BEFORE => ID */ - 26, /* BY => ID */ - 26, /* CASCADE => ID */ - 26, /* CAST => ID */ - 26, /* COLUMNKW => ID */ - 26, /* CONFLICT => ID */ - 26, /* DATABASE => ID */ - 26, /* DESC => ID */ - 26, /* DETACH => ID */ - 26, /* EACH => ID */ - 26, /* FAIL => ID */ - 26, /* FOR => ID */ - 26, /* IGNORE => ID */ - 26, /* INITIALLY => ID */ - 26, /* INSTEAD => ID */ - 26, /* LIKE_KW => ID */ - 26, /* MATCH => ID */ - 26, /* NO => ID */ - 26, /* KEY => ID */ - 26, /* OF => ID */ - 26, /* OFFSET => ID */ - 26, /* PRAGMA => ID */ - 26, /* RAISE => ID */ - 26, /* REPLACE => ID */ - 26, /* RESTRICT => ID */ - 26, /* ROW => ID */ - 26, /* TRIGGER => ID */ - 26, /* VACUUM => ID */ - 
26, /* VIEW => ID */ - 26, /* VIRTUAL => ID */ - 26, /* REINDEX => ID */ - 26, /* RENAME => ID */ - 26, /* CTIME_KW => ID */ + 27, /* ABORT => ID */ + 27, /* ACTION => ID */ + 27, /* AFTER => ID */ + 27, /* ANALYZE => ID */ + 27, /* ASC => ID */ + 27, /* ATTACH => ID */ + 27, /* BEFORE => ID */ + 27, /* BY => ID */ + 27, /* CASCADE => ID */ + 27, /* CAST => ID */ + 27, /* COLUMNKW => ID */ + 27, /* CONFLICT => ID */ + 27, /* DATABASE => ID */ + 27, /* DESC => ID */ + 27, /* DETACH => ID */ + 27, /* EACH => ID */ + 27, /* FAIL => ID */ + 27, /* FOR => ID */ + 27, /* IGNORE => ID */ + 27, /* INITIALLY => ID */ + 27, /* INSTEAD => ID */ + 27, /* LIKE_KW => ID */ + 27, /* MATCH => ID */ + 27, /* NO => ID */ + 27, /* KEY => ID */ + 27, /* OF => ID */ + 27, /* OFFSET => ID */ + 27, /* PRAGMA => ID */ + 27, /* RAISE => ID */ + 27, /* RECURSIVE => ID */ + 27, /* REPLACE => ID */ + 27, /* RESTRICT => ID */ + 27, /* ROW => ID */ + 27, /* TRIGGER => ID */ + 27, /* VACUUM => ID */ + 27, /* VIEW => ID */ + 27, /* VIRTUAL => ID */ + 27, /* WITH => ID */ + 27, /* REINDEX => ID */ + 27, /* RENAME => ID */ + 27, /* CTIME_KW => ID */ }; #endif /* YYFALLBACK */ /* The following structure represents a single element of the ** parser's stack. Information stored includes: @@ -93412,13 +128780,17 @@ ** (In other words, the "major" token.) ** ** + The semantic value stored at this level of the stack. This is ** the information used by the action routines in the grammar. ** It is sometimes called the "minor" token. +** +** After the "shift" half of a SHIFTREDUCE action, the stateno field +** actually contains the reduce action for the second half of the +** SHIFTREDUCE. */ struct yyStackEntry { - YYACTIONTYPE stateno; /* The state-number */ + YYACTIONTYPE stateno; /* The state-number, or reduce action in SHIFTREDUCE */ YYCODETYPE major; /* The major token value. This is the code ** number for the token at this stack level */ YYMINORTYPE minor; /* The user-supplied minor token value. 
This ** is the value of the token */ }; @@ -93441,10 +128813,11 @@ #endif }; typedef struct yyParser yyParser; #ifndef NDEBUG +/* #include <stdio.h> */ static FILE *yyTraceFILE = 0; static char *yyTracePrompt = 0; #endif /* NDEBUG */ #ifndef NDEBUG @@ -93481,68 +128854,67 @@ "PLAN", "BEGIN", "TRANSACTION", "DEFERRED", "IMMEDIATE", "EXCLUSIVE", "COMMIT", "END", "ROLLBACK", "SAVEPOINT", "RELEASE", "TO", "TABLE", "CREATE", "IF", "NOT", "EXISTS", "TEMP", "LP", "RP", - "AS", "COMMA", "ID", "INDEXED", - "ABORT", "ACTION", "AFTER", "ANALYZE", - "ASC", "ATTACH", "BEFORE", "BY", - "CASCADE", "CAST", "COLUMNKW", "CONFLICT", - "DATABASE", "DESC", "DETACH", "EACH", - "FAIL", "FOR", "IGNORE", "INITIALLY", - "INSTEAD", "LIKE_KW", "MATCH", "NO", - "KEY", "OF", "OFFSET", "PRAGMA", - "RAISE", "REPLACE", "RESTRICT", "ROW", - "TRIGGER", "VACUUM", "VIEW", "VIRTUAL", - "REINDEX", "RENAME", "CTIME_KW", "ANY", - "OR", "AND", "IS", "BETWEEN", - "IN", "ISNULL", "NOTNULL", "NE", - "EQ", "GT", "LE", "LT", - "GE", "ESCAPE", "BITAND", "BITOR", - "LSHIFT", "RSHIFT", "PLUS", "MINUS", - "STAR", "SLASH", "REM", "CONCAT", - "COLLATE", "BITNOT", "STRING", "JOIN_KW", - "CONSTRAINT", "DEFAULT", "NULL", "PRIMARY", - "UNIQUE", "CHECK", "REFERENCES", "AUTOINCR", - "ON", "INSERT", "DELETE", "UPDATE", - "SET", "DEFERRABLE", "FOREIGN", "DROP", - "UNION", "ALL", "EXCEPT", "INTERSECT", - "SELECT", "DISTINCT", "DOT", "FROM", + "AS", "WITHOUT", "COMMA", "ID", + "INDEXED", "ABORT", "ACTION", "AFTER", + "ANALYZE", "ASC", "ATTACH", "BEFORE", + "BY", "CASCADE", "CAST", "COLUMNKW", + "CONFLICT", "DATABASE", "DESC", "DETACH", + "EACH", "FAIL", "FOR", "IGNORE", + "INITIALLY", "INSTEAD", "LIKE_KW", "MATCH", + "NO", "KEY", "OF", "OFFSET", + "PRAGMA", "RAISE", "RECURSIVE", "REPLACE", + "RESTRICT", "ROW", "TRIGGER", "VACUUM", + "VIEW", "VIRTUAL", "WITH", "REINDEX", + "RENAME", "CTIME_KW", "ANY", "OR", + "AND", "IS", "BETWEEN", "IN", + "ISNULL", "NOTNULL", "NE", "EQ", + "GT", "LE", "LT", "GE", + "ESCAPE", "BITAND", "BITOR", "LSHIFT", + "RSHIFT", "PLUS", "MINUS", "STAR", + "SLASH", "REM", "CONCAT", "COLLATE", + "BITNOT", "STRING", "JOIN_KW", "CONSTRAINT", + "DEFAULT", "NULL", "PRIMARY", "UNIQUE", + "CHECK", "REFERENCES", "AUTOINCR", "ON", + "INSERT", "DELETE", "UPDATE", "SET", + "DEFERRABLE", "FOREIGN", "DROP", "UNION", + "ALL", "EXCEPT", "INTERSECT", "SELECT", + "VALUES", "DISTINCT", "DOT", "FROM", "JOIN", "USING", "ORDER", "GROUP", "HAVING", "LIMIT", "WHERE", "INTO", - "VALUES", "INTEGER", "FLOAT", "BLOB", - "REGISTER", "VARIABLE", "CASE", "WHEN", - "THEN", "ELSE", "INDEX", "ALTER", - "ADD", "error", "input", "cmdlist", - "ecmd", "explain", "cmdx", "cmd", - "transtype", "trans_opt", "nm", "savepoint_opt", - "create_table", "create_table_args", "createkw", "temp", - "ifnotexists", "dbnm", "columnlist", "conslist_opt", - "select", "column", "columnid", "type", - "carglist", "id", "ids", "typetoken", - "typename", "signed", "plus_num", "minus_num", - "carg", "ccons", "term", "expr", - "onconf", "sortorder", "autoinc", "idxlist_opt", + "INTEGER", "FLOAT", "BLOB", "VARIABLE", + "CASE", "WHEN", "THEN", "ELSE", + "INDEX", "ALTER", "ADD", "error", + "input", "cmdlist", "ecmd", "explain", + "cmdx", "cmd", "transtype", "trans_opt", + "nm", "savepoint_opt", "create_table", "create_table_args", + "createkw", "temp", "ifnotexists", "dbnm", + "columnlist", "conslist_opt", "table_options", "select", + "column", "columnid", "type", "carglist", + "typetoken", "typename", "signed", "plus_num", + "minus_num", "ccons", "term", "expr", + "onconf", "sortorder", 
"autoinc", "eidlist_opt", "refargs", "defer_subclause", "refarg", "refact", - "init_deferred_pred_opt", "conslist", "tcons", "idxlist", - "defer_subclause_opt", "orconf", "resolvetype", "raisetype", - "ifexists", "fullname", "oneselect", "multiselect_op", + "init_deferred_pred_opt", "conslist", "tconscomma", "tcons", + "sortlist", "eidlist", "defer_subclause_opt", "orconf", + "resolvetype", "raisetype", "ifexists", "fullname", + "selectnowith", "oneselect", "with", "multiselect_op", "distinct", "selcollist", "from", "where_opt", "groupby_opt", "having_opt", "orderby_opt", "limit_opt", - "sclp", "as", "seltablist", "stl_prefix", - "joinop", "indexed_opt", "on_opt", "using_opt", - "joinop2", "inscollist", "sortlist", "sortitem", - "nexprlist", "setlist", "insert_cmd", "inscollist_opt", - "itemlist", "exprlist", "likeop", "escape", + "values", "nexprlist", "exprlist", "sclp", + "as", "seltablist", "stl_prefix", "joinop", + "indexed_opt", "on_opt", "using_opt", "idlist", + "setlist", "insert_cmd", "idlist_opt", "likeop", "between_op", "in_op", "case_operand", "case_exprlist", "case_else", "uniqueflag", "collate", "nmnum", - "plus_opt", "number", "trigger_decl", "trigger_cmd_list", - "trigger_time", "trigger_event", "foreach_clause", "when_clause", - "trigger_cmd", "trnm", "tridxby", "database_kw_opt", - "key_opt", "add_column_fullname", "kwcolumn_opt", "create_vtab", - "vtabarglist", "vtabarg", "vtabargtoken", "lp", - "anylist", + "trigger_decl", "trigger_cmd_list", "trigger_time", "trigger_event", + "foreach_clause", "when_clause", "trigger_cmd", "trnm", + "tridxby", "database_kw_opt", "key_opt", "add_column_fullname", + "kwcolumn_opt", "create_vtab", "vtabarglist", "vtabarg", + "vtabargtoken", "lp", "anylist", "wqlist", }; #endif /* NDEBUG */ #ifndef NDEBUG /* For tracing reduce actions, the names of all rules are required. 
@@ -93578,308 +128950,306 @@ /* 27 */ "createkw ::= CREATE", /* 28 */ "ifnotexists ::=", /* 29 */ "ifnotexists ::= IF NOT EXISTS", /* 30 */ "temp ::= TEMP", /* 31 */ "temp ::=", - /* 32 */ "create_table_args ::= LP columnlist conslist_opt RP", + /* 32 */ "create_table_args ::= LP columnlist conslist_opt RP table_options", /* 33 */ "create_table_args ::= AS select", - /* 34 */ "columnlist ::= columnlist COMMA column", - /* 35 */ "columnlist ::= column", - /* 36 */ "column ::= columnid type carglist", - /* 37 */ "columnid ::= nm", - /* 38 */ "id ::= ID", - /* 39 */ "id ::= INDEXED", - /* 40 */ "ids ::= ID|STRING", - /* 41 */ "nm ::= id", - /* 42 */ "nm ::= STRING", - /* 43 */ "nm ::= JOIN_KW", - /* 44 */ "type ::=", - /* 45 */ "type ::= typetoken", - /* 46 */ "typetoken ::= typename", - /* 47 */ "typetoken ::= typename LP signed RP", - /* 48 */ "typetoken ::= typename LP signed COMMA signed RP", - /* 49 */ "typename ::= ids", - /* 50 */ "typename ::= typename ids", - /* 51 */ "signed ::= plus_num", - /* 52 */ "signed ::= minus_num", - /* 53 */ "carglist ::= carglist carg", - /* 54 */ "carglist ::=", - /* 55 */ "carg ::= CONSTRAINT nm ccons", - /* 56 */ "carg ::= ccons", - /* 57 */ "ccons ::= DEFAULT term", - /* 58 */ "ccons ::= DEFAULT LP expr RP", - /* 59 */ "ccons ::= DEFAULT PLUS term", - /* 60 */ "ccons ::= DEFAULT MINUS term", - /* 61 */ "ccons ::= DEFAULT id", - /* 62 */ "ccons ::= NULL onconf", - /* 63 */ "ccons ::= NOT NULL onconf", - /* 64 */ "ccons ::= PRIMARY KEY sortorder onconf autoinc", - /* 65 */ "ccons ::= UNIQUE onconf", - /* 66 */ "ccons ::= CHECK LP expr RP", - /* 67 */ "ccons ::= REFERENCES nm idxlist_opt refargs", - /* 68 */ "ccons ::= defer_subclause", - /* 69 */ "ccons ::= COLLATE ids", - /* 70 */ "autoinc ::=", - /* 71 */ "autoinc ::= AUTOINCR", - /* 72 */ "refargs ::=", - /* 73 */ "refargs ::= refargs refarg", - /* 74 */ "refarg ::= MATCH nm", - /* 75 */ "refarg ::= ON INSERT refact", - /* 76 */ "refarg ::= ON DELETE refact", - /* 77 */ "refarg ::= ON UPDATE refact", - /* 78 */ "refact ::= SET NULL", - /* 79 */ "refact ::= SET DEFAULT", - /* 80 */ "refact ::= CASCADE", - /* 81 */ "refact ::= RESTRICT", - /* 82 */ "refact ::= NO ACTION", - /* 83 */ "defer_subclause ::= NOT DEFERRABLE init_deferred_pred_opt", - /* 84 */ "defer_subclause ::= DEFERRABLE init_deferred_pred_opt", - /* 85 */ "init_deferred_pred_opt ::=", - /* 86 */ "init_deferred_pred_opt ::= INITIALLY DEFERRED", - /* 87 */ "init_deferred_pred_opt ::= INITIALLY IMMEDIATE", - /* 88 */ "conslist_opt ::=", - /* 89 */ "conslist_opt ::= COMMA conslist", - /* 90 */ "conslist ::= conslist COMMA tcons", - /* 91 */ "conslist ::= conslist tcons", - /* 92 */ "conslist ::= tcons", - /* 93 */ "tcons ::= CONSTRAINT nm", - /* 94 */ "tcons ::= PRIMARY KEY LP idxlist autoinc RP onconf", - /* 95 */ "tcons ::= UNIQUE LP idxlist RP onconf", - /* 96 */ "tcons ::= CHECK LP expr RP onconf", - /* 97 */ "tcons ::= FOREIGN KEY LP idxlist RP REFERENCES nm idxlist_opt refargs defer_subclause_opt", - /* 98 */ "defer_subclause_opt ::=", - /* 99 */ "defer_subclause_opt ::= defer_subclause", - /* 100 */ "onconf ::=", - /* 101 */ "onconf ::= ON CONFLICT resolvetype", - /* 102 */ "orconf ::=", - /* 103 */ "orconf ::= OR resolvetype", - /* 104 */ "resolvetype ::= raisetype", - /* 105 */ "resolvetype ::= IGNORE", - /* 106 */ "resolvetype ::= REPLACE", - /* 107 */ "cmd ::= DROP TABLE ifexists fullname", - /* 108 */ "ifexists ::= IF EXISTS", - /* 109 */ "ifexists ::=", - /* 110 */ "cmd ::= createkw temp VIEW ifnotexists nm dbnm AS select", - /* 
111 */ "cmd ::= DROP VIEW ifexists fullname", - /* 112 */ "cmd ::= select", - /* 113 */ "select ::= oneselect", - /* 114 */ "select ::= select multiselect_op oneselect", + /* 34 */ "table_options ::=", + /* 35 */ "table_options ::= WITHOUT nm", + /* 36 */ "columnlist ::= columnlist COMMA column", + /* 37 */ "columnlist ::= column", + /* 38 */ "column ::= columnid type carglist", + /* 39 */ "columnid ::= nm", + /* 40 */ "nm ::= ID|INDEXED", + /* 41 */ "nm ::= STRING", + /* 42 */ "nm ::= JOIN_KW", + /* 43 */ "type ::=", + /* 44 */ "type ::= typetoken", + /* 45 */ "typetoken ::= typename", + /* 46 */ "typetoken ::= typename LP signed RP", + /* 47 */ "typetoken ::= typename LP signed COMMA signed RP", + /* 48 */ "typename ::= ID|STRING", + /* 49 */ "typename ::= typename ID|STRING", + /* 50 */ "signed ::= plus_num", + /* 51 */ "signed ::= minus_num", + /* 52 */ "carglist ::= carglist ccons", + /* 53 */ "carglist ::=", + /* 54 */ "ccons ::= CONSTRAINT nm", + /* 55 */ "ccons ::= DEFAULT term", + /* 56 */ "ccons ::= DEFAULT LP expr RP", + /* 57 */ "ccons ::= DEFAULT PLUS term", + /* 58 */ "ccons ::= DEFAULT MINUS term", + /* 59 */ "ccons ::= DEFAULT ID|INDEXED", + /* 60 */ "ccons ::= NULL onconf", + /* 61 */ "ccons ::= NOT NULL onconf", + /* 62 */ "ccons ::= PRIMARY KEY sortorder onconf autoinc", + /* 63 */ "ccons ::= UNIQUE onconf", + /* 64 */ "ccons ::= CHECK LP expr RP", + /* 65 */ "ccons ::= REFERENCES nm eidlist_opt refargs", + /* 66 */ "ccons ::= defer_subclause", + /* 67 */ "ccons ::= COLLATE ID|STRING", + /* 68 */ "autoinc ::=", + /* 69 */ "autoinc ::= AUTOINCR", + /* 70 */ "refargs ::=", + /* 71 */ "refargs ::= refargs refarg", + /* 72 */ "refarg ::= MATCH nm", + /* 73 */ "refarg ::= ON INSERT refact", + /* 74 */ "refarg ::= ON DELETE refact", + /* 75 */ "refarg ::= ON UPDATE refact", + /* 76 */ "refact ::= SET NULL", + /* 77 */ "refact ::= SET DEFAULT", + /* 78 */ "refact ::= CASCADE", + /* 79 */ "refact ::= RESTRICT", + /* 80 */ "refact ::= NO ACTION", + /* 81 */ "defer_subclause ::= NOT DEFERRABLE init_deferred_pred_opt", + /* 82 */ "defer_subclause ::= DEFERRABLE init_deferred_pred_opt", + /* 83 */ "init_deferred_pred_opt ::=", + /* 84 */ "init_deferred_pred_opt ::= INITIALLY DEFERRED", + /* 85 */ "init_deferred_pred_opt ::= INITIALLY IMMEDIATE", + /* 86 */ "conslist_opt ::=", + /* 87 */ "conslist_opt ::= COMMA conslist", + /* 88 */ "conslist ::= conslist tconscomma tcons", + /* 89 */ "conslist ::= tcons", + /* 90 */ "tconscomma ::= COMMA", + /* 91 */ "tconscomma ::=", + /* 92 */ "tcons ::= CONSTRAINT nm", + /* 93 */ "tcons ::= PRIMARY KEY LP sortlist autoinc RP onconf", + /* 94 */ "tcons ::= UNIQUE LP sortlist RP onconf", + /* 95 */ "tcons ::= CHECK LP expr RP onconf", + /* 96 */ "tcons ::= FOREIGN KEY LP eidlist RP REFERENCES nm eidlist_opt refargs defer_subclause_opt", + /* 97 */ "defer_subclause_opt ::=", + /* 98 */ "defer_subclause_opt ::= defer_subclause", + /* 99 */ "onconf ::=", + /* 100 */ "onconf ::= ON CONFLICT resolvetype", + /* 101 */ "orconf ::=", + /* 102 */ "orconf ::= OR resolvetype", + /* 103 */ "resolvetype ::= raisetype", + /* 104 */ "resolvetype ::= IGNORE", + /* 105 */ "resolvetype ::= REPLACE", + /* 106 */ "cmd ::= DROP TABLE ifexists fullname", + /* 107 */ "ifexists ::= IF EXISTS", + /* 108 */ "ifexists ::=", + /* 109 */ "cmd ::= createkw temp VIEW ifnotexists nm dbnm eidlist_opt AS select", + /* 110 */ "cmd ::= DROP VIEW ifexists fullname", + /* 111 */ "cmd ::= select", + /* 112 */ "select ::= with selectnowith", + /* 113 */ "selectnowith ::= oneselect", + /* 
114 */ "selectnowith ::= selectnowith multiselect_op oneselect", /* 115 */ "multiselect_op ::= UNION", /* 116 */ "multiselect_op ::= UNION ALL", /* 117 */ "multiselect_op ::= EXCEPT|INTERSECT", /* 118 */ "oneselect ::= SELECT distinct selcollist from where_opt groupby_opt having_opt orderby_opt limit_opt", - /* 119 */ "distinct ::= DISTINCT", - /* 120 */ "distinct ::= ALL", - /* 121 */ "distinct ::=", - /* 122 */ "sclp ::= selcollist COMMA", - /* 123 */ "sclp ::=", - /* 124 */ "selcollist ::= sclp expr as", - /* 125 */ "selcollist ::= sclp STAR", - /* 126 */ "selcollist ::= sclp nm DOT STAR", - /* 127 */ "as ::= AS nm", - /* 128 */ "as ::= ids", - /* 129 */ "as ::=", - /* 130 */ "from ::=", - /* 131 */ "from ::= FROM seltablist", - /* 132 */ "stl_prefix ::= seltablist joinop", - /* 133 */ "stl_prefix ::=", - /* 134 */ "seltablist ::= stl_prefix nm dbnm as indexed_opt on_opt using_opt", - /* 135 */ "seltablist ::= stl_prefix LP select RP as on_opt using_opt", - /* 136 */ "seltablist ::= stl_prefix LP seltablist RP as on_opt using_opt", - /* 137 */ "dbnm ::=", - /* 138 */ "dbnm ::= DOT nm", - /* 139 */ "fullname ::= nm dbnm", - /* 140 */ "joinop ::= COMMA|JOIN", - /* 141 */ "joinop ::= JOIN_KW JOIN", - /* 142 */ "joinop ::= JOIN_KW nm JOIN", - /* 143 */ "joinop ::= JOIN_KW nm nm JOIN", - /* 144 */ "on_opt ::= ON expr", - /* 145 */ "on_opt ::=", - /* 146 */ "indexed_opt ::=", - /* 147 */ "indexed_opt ::= INDEXED BY nm", - /* 148 */ "indexed_opt ::= NOT INDEXED", - /* 149 */ "using_opt ::= USING LP inscollist RP", - /* 150 */ "using_opt ::=", - /* 151 */ "orderby_opt ::=", - /* 152 */ "orderby_opt ::= ORDER BY sortlist", - /* 153 */ "sortlist ::= sortlist COMMA sortitem sortorder", - /* 154 */ "sortlist ::= sortitem sortorder", - /* 155 */ "sortitem ::= expr", - /* 156 */ "sortorder ::= ASC", - /* 157 */ "sortorder ::= DESC", - /* 158 */ "sortorder ::=", - /* 159 */ "groupby_opt ::=", - /* 160 */ "groupby_opt ::= GROUP BY nexprlist", - /* 161 */ "having_opt ::=", - /* 162 */ "having_opt ::= HAVING expr", - /* 163 */ "limit_opt ::=", - /* 164 */ "limit_opt ::= LIMIT expr", - /* 165 */ "limit_opt ::= LIMIT expr OFFSET expr", - /* 166 */ "limit_opt ::= LIMIT expr COMMA expr", - /* 167 */ "cmd ::= DELETE FROM fullname indexed_opt where_opt", - /* 168 */ "where_opt ::=", - /* 169 */ "where_opt ::= WHERE expr", - /* 170 */ "cmd ::= UPDATE orconf fullname indexed_opt SET setlist where_opt", - /* 171 */ "setlist ::= setlist COMMA nm EQ expr", - /* 172 */ "setlist ::= nm EQ expr", - /* 173 */ "cmd ::= insert_cmd INTO fullname inscollist_opt VALUES LP itemlist RP", - /* 174 */ "cmd ::= insert_cmd INTO fullname inscollist_opt select", - /* 175 */ "cmd ::= insert_cmd INTO fullname inscollist_opt DEFAULT VALUES", - /* 176 */ "insert_cmd ::= INSERT orconf", - /* 177 */ "insert_cmd ::= REPLACE", - /* 178 */ "itemlist ::= itemlist COMMA expr", - /* 179 */ "itemlist ::= expr", - /* 180 */ "inscollist_opt ::=", - /* 181 */ "inscollist_opt ::= LP inscollist RP", - /* 182 */ "inscollist ::= inscollist COMMA nm", - /* 183 */ "inscollist ::= nm", + /* 119 */ "oneselect ::= values", + /* 120 */ "values ::= VALUES LP nexprlist RP", + /* 121 */ "values ::= values COMMA LP exprlist RP", + /* 122 */ "distinct ::= DISTINCT", + /* 123 */ "distinct ::= ALL", + /* 124 */ "distinct ::=", + /* 125 */ "sclp ::= selcollist COMMA", + /* 126 */ "sclp ::=", + /* 127 */ "selcollist ::= sclp expr as", + /* 128 */ "selcollist ::= sclp STAR", + /* 129 */ "selcollist ::= sclp nm DOT STAR", + /* 130 */ "as ::= AS nm", + /* 131 */ "as 
::= ID|STRING", + /* 132 */ "as ::=", + /* 133 */ "from ::=", + /* 134 */ "from ::= FROM seltablist", + /* 135 */ "stl_prefix ::= seltablist joinop", + /* 136 */ "stl_prefix ::=", + /* 137 */ "seltablist ::= stl_prefix nm dbnm as indexed_opt on_opt using_opt", + /* 138 */ "seltablist ::= stl_prefix nm dbnm LP exprlist RP as on_opt using_opt", + /* 139 */ "seltablist ::= stl_prefix LP select RP as on_opt using_opt", + /* 140 */ "seltablist ::= stl_prefix LP seltablist RP as on_opt using_opt", + /* 141 */ "dbnm ::=", + /* 142 */ "dbnm ::= DOT nm", + /* 143 */ "fullname ::= nm dbnm", + /* 144 */ "joinop ::= COMMA|JOIN", + /* 145 */ "joinop ::= JOIN_KW JOIN", + /* 146 */ "joinop ::= JOIN_KW nm JOIN", + /* 147 */ "joinop ::= JOIN_KW nm nm JOIN", + /* 148 */ "on_opt ::= ON expr", + /* 149 */ "on_opt ::=", + /* 150 */ "indexed_opt ::=", + /* 151 */ "indexed_opt ::= INDEXED BY nm", + /* 152 */ "indexed_opt ::= NOT INDEXED", + /* 153 */ "using_opt ::= USING LP idlist RP", + /* 154 */ "using_opt ::=", + /* 155 */ "orderby_opt ::=", + /* 156 */ "orderby_opt ::= ORDER BY sortlist", + /* 157 */ "sortlist ::= sortlist COMMA expr sortorder", + /* 158 */ "sortlist ::= expr sortorder", + /* 159 */ "sortorder ::= ASC", + /* 160 */ "sortorder ::= DESC", + /* 161 */ "sortorder ::=", + /* 162 */ "groupby_opt ::=", + /* 163 */ "groupby_opt ::= GROUP BY nexprlist", + /* 164 */ "having_opt ::=", + /* 165 */ "having_opt ::= HAVING expr", + /* 166 */ "limit_opt ::=", + /* 167 */ "limit_opt ::= LIMIT expr", + /* 168 */ "limit_opt ::= LIMIT expr OFFSET expr", + /* 169 */ "limit_opt ::= LIMIT expr COMMA expr", + /* 170 */ "cmd ::= with DELETE FROM fullname indexed_opt where_opt", + /* 171 */ "where_opt ::=", + /* 172 */ "where_opt ::= WHERE expr", + /* 173 */ "cmd ::= with UPDATE orconf fullname indexed_opt SET setlist where_opt", + /* 174 */ "setlist ::= setlist COMMA nm EQ expr", + /* 175 */ "setlist ::= nm EQ expr", + /* 176 */ "cmd ::= with insert_cmd INTO fullname idlist_opt select", + /* 177 */ "cmd ::= with insert_cmd INTO fullname idlist_opt DEFAULT VALUES", + /* 178 */ "insert_cmd ::= INSERT orconf", + /* 179 */ "insert_cmd ::= REPLACE", + /* 180 */ "idlist_opt ::=", + /* 181 */ "idlist_opt ::= LP idlist RP", + /* 182 */ "idlist ::= idlist COMMA nm", + /* 183 */ "idlist ::= nm", /* 184 */ "expr ::= term", /* 185 */ "expr ::= LP expr RP", /* 186 */ "term ::= NULL", - /* 187 */ "expr ::= id", + /* 187 */ "expr ::= ID|INDEXED", /* 188 */ "expr ::= JOIN_KW", /* 189 */ "expr ::= nm DOT nm", /* 190 */ "expr ::= nm DOT nm DOT nm", /* 191 */ "term ::= INTEGER|FLOAT|BLOB", /* 192 */ "term ::= STRING", - /* 193 */ "expr ::= REGISTER", - /* 194 */ "expr ::= VARIABLE", - /* 195 */ "expr ::= expr COLLATE ids", - /* 196 */ "expr ::= CAST LP expr AS typetoken RP", - /* 197 */ "expr ::= ID LP distinct exprlist RP", - /* 198 */ "expr ::= ID LP STAR RP", - /* 199 */ "term ::= CTIME_KW", - /* 200 */ "expr ::= expr AND expr", - /* 201 */ "expr ::= expr OR expr", - /* 202 */ "expr ::= expr LT|GT|GE|LE expr", - /* 203 */ "expr ::= expr EQ|NE expr", - /* 204 */ "expr ::= expr BITAND|BITOR|LSHIFT|RSHIFT expr", - /* 205 */ "expr ::= expr PLUS|MINUS expr", - /* 206 */ "expr ::= expr STAR|SLASH|REM expr", - /* 207 */ "expr ::= expr CONCAT expr", - /* 208 */ "likeop ::= LIKE_KW", - /* 209 */ "likeop ::= NOT LIKE_KW", - /* 210 */ "likeop ::= MATCH", - /* 211 */ "likeop ::= NOT MATCH", - /* 212 */ "escape ::= ESCAPE expr", - /* 213 */ "escape ::=", - /* 214 */ "expr ::= expr likeop expr escape", - /* 215 */ "expr ::= expr ISNULL|NOTNULL", 
- /* 216 */ "expr ::= expr NOT NULL", - /* 217 */ "expr ::= expr IS expr", - /* 218 */ "expr ::= expr IS NOT expr", - /* 219 */ "expr ::= NOT expr", - /* 220 */ "expr ::= BITNOT expr", - /* 221 */ "expr ::= MINUS expr", - /* 222 */ "expr ::= PLUS expr", - /* 223 */ "between_op ::= BETWEEN", - /* 224 */ "between_op ::= NOT BETWEEN", - /* 225 */ "expr ::= expr between_op expr AND expr", - /* 226 */ "in_op ::= IN", - /* 227 */ "in_op ::= NOT IN", - /* 228 */ "expr ::= expr in_op LP exprlist RP", - /* 229 */ "expr ::= LP select RP", - /* 230 */ "expr ::= expr in_op LP select RP", - /* 231 */ "expr ::= expr in_op nm dbnm", - /* 232 */ "expr ::= EXISTS LP select RP", - /* 233 */ "expr ::= CASE case_operand case_exprlist case_else END", - /* 234 */ "case_exprlist ::= case_exprlist WHEN expr THEN expr", - /* 235 */ "case_exprlist ::= WHEN expr THEN expr", - /* 236 */ "case_else ::= ELSE expr", - /* 237 */ "case_else ::=", - /* 238 */ "case_operand ::= expr", - /* 239 */ "case_operand ::=", - /* 240 */ "exprlist ::= nexprlist", - /* 241 */ "exprlist ::=", - /* 242 */ "nexprlist ::= nexprlist COMMA expr", - /* 243 */ "nexprlist ::= expr", - /* 244 */ "cmd ::= createkw uniqueflag INDEX ifnotexists nm dbnm ON nm LP idxlist RP", - /* 245 */ "uniqueflag ::= UNIQUE", - /* 246 */ "uniqueflag ::=", - /* 247 */ "idxlist_opt ::=", - /* 248 */ "idxlist_opt ::= LP idxlist RP", - /* 249 */ "idxlist ::= idxlist COMMA nm collate sortorder", - /* 250 */ "idxlist ::= nm collate sortorder", - /* 251 */ "collate ::=", - /* 252 */ "collate ::= COLLATE ids", - /* 253 */ "cmd ::= DROP INDEX ifexists fullname", - /* 254 */ "cmd ::= VACUUM", - /* 255 */ "cmd ::= VACUUM nm", - /* 256 */ "cmd ::= PRAGMA nm dbnm", - /* 257 */ "cmd ::= PRAGMA nm dbnm EQ nmnum", - /* 258 */ "cmd ::= PRAGMA nm dbnm LP nmnum RP", - /* 259 */ "cmd ::= PRAGMA nm dbnm EQ minus_num", - /* 260 */ "cmd ::= PRAGMA nm dbnm LP minus_num RP", - /* 261 */ "nmnum ::= plus_num", - /* 262 */ "nmnum ::= nm", - /* 263 */ "nmnum ::= ON", - /* 264 */ "nmnum ::= DELETE", - /* 265 */ "nmnum ::= DEFAULT", - /* 266 */ "plus_num ::= plus_opt number", - /* 267 */ "minus_num ::= MINUS number", - /* 268 */ "number ::= INTEGER|FLOAT", - /* 269 */ "plus_opt ::= PLUS", - /* 270 */ "plus_opt ::=", - /* 271 */ "cmd ::= createkw trigger_decl BEGIN trigger_cmd_list END", - /* 272 */ "trigger_decl ::= temp TRIGGER ifnotexists nm dbnm trigger_time trigger_event ON fullname foreach_clause when_clause", - /* 273 */ "trigger_time ::= BEFORE", - /* 274 */ "trigger_time ::= AFTER", - /* 275 */ "trigger_time ::= INSTEAD OF", - /* 276 */ "trigger_time ::=", - /* 277 */ "trigger_event ::= DELETE|INSERT", - /* 278 */ "trigger_event ::= UPDATE", - /* 279 */ "trigger_event ::= UPDATE OF inscollist", - /* 280 */ "foreach_clause ::=", - /* 281 */ "foreach_clause ::= FOR EACH ROW", - /* 282 */ "when_clause ::=", - /* 283 */ "when_clause ::= WHEN expr", - /* 284 */ "trigger_cmd_list ::= trigger_cmd_list trigger_cmd SEMI", - /* 285 */ "trigger_cmd_list ::= trigger_cmd SEMI", - /* 286 */ "trnm ::= nm", - /* 287 */ "trnm ::= nm DOT nm", - /* 288 */ "tridxby ::=", - /* 289 */ "tridxby ::= INDEXED BY nm", - /* 290 */ "tridxby ::= NOT INDEXED", - /* 291 */ "trigger_cmd ::= UPDATE orconf trnm tridxby SET setlist where_opt", - /* 292 */ "trigger_cmd ::= insert_cmd INTO trnm inscollist_opt VALUES LP itemlist RP", - /* 293 */ "trigger_cmd ::= insert_cmd INTO trnm inscollist_opt select", - /* 294 */ "trigger_cmd ::= DELETE FROM trnm tridxby where_opt", - /* 295 */ "trigger_cmd ::= select", - /* 296 */ 
"expr ::= RAISE LP IGNORE RP", - /* 297 */ "expr ::= RAISE LP raisetype COMMA nm RP", - /* 298 */ "raisetype ::= ROLLBACK", - /* 299 */ "raisetype ::= ABORT", - /* 300 */ "raisetype ::= FAIL", - /* 301 */ "cmd ::= DROP TRIGGER ifexists fullname", - /* 302 */ "cmd ::= ATTACH database_kw_opt expr AS expr key_opt", - /* 303 */ "cmd ::= DETACH database_kw_opt expr", - /* 304 */ "key_opt ::=", - /* 305 */ "key_opt ::= KEY expr", - /* 306 */ "database_kw_opt ::= DATABASE", - /* 307 */ "database_kw_opt ::=", - /* 308 */ "cmd ::= REINDEX", - /* 309 */ "cmd ::= REINDEX nm dbnm", - /* 310 */ "cmd ::= ANALYZE", - /* 311 */ "cmd ::= ANALYZE nm dbnm", - /* 312 */ "cmd ::= ALTER TABLE fullname RENAME TO nm", - /* 313 */ "cmd ::= ALTER TABLE add_column_fullname ADD kwcolumn_opt column", - /* 314 */ "add_column_fullname ::= fullname", - /* 315 */ "kwcolumn_opt ::=", - /* 316 */ "kwcolumn_opt ::= COLUMNKW", - /* 317 */ "cmd ::= create_vtab", - /* 318 */ "cmd ::= create_vtab LP vtabarglist RP", - /* 319 */ "create_vtab ::= createkw VIRTUAL TABLE nm dbnm USING nm", - /* 320 */ "vtabarglist ::= vtabarg", - /* 321 */ "vtabarglist ::= vtabarglist COMMA vtabarg", - /* 322 */ "vtabarg ::=", - /* 323 */ "vtabarg ::= vtabarg vtabargtoken", - /* 324 */ "vtabargtoken ::= ANY", - /* 325 */ "vtabargtoken ::= lp anylist RP", - /* 326 */ "lp ::= LP", - /* 327 */ "anylist ::=", - /* 328 */ "anylist ::= anylist LP anylist RP", - /* 329 */ "anylist ::= anylist ANY", + /* 193 */ "expr ::= VARIABLE", + /* 194 */ "expr ::= expr COLLATE ID|STRING", + /* 195 */ "expr ::= CAST LP expr AS typetoken RP", + /* 196 */ "expr ::= ID|INDEXED LP distinct exprlist RP", + /* 197 */ "expr ::= ID|INDEXED LP STAR RP", + /* 198 */ "term ::= CTIME_KW", + /* 199 */ "expr ::= expr AND expr", + /* 200 */ "expr ::= expr OR expr", + /* 201 */ "expr ::= expr LT|GT|GE|LE expr", + /* 202 */ "expr ::= expr EQ|NE expr", + /* 203 */ "expr ::= expr BITAND|BITOR|LSHIFT|RSHIFT expr", + /* 204 */ "expr ::= expr PLUS|MINUS expr", + /* 205 */ "expr ::= expr STAR|SLASH|REM expr", + /* 206 */ "expr ::= expr CONCAT expr", + /* 207 */ "likeop ::= LIKE_KW|MATCH", + /* 208 */ "likeop ::= NOT LIKE_KW|MATCH", + /* 209 */ "expr ::= expr likeop expr", + /* 210 */ "expr ::= expr likeop expr ESCAPE expr", + /* 211 */ "expr ::= expr ISNULL|NOTNULL", + /* 212 */ "expr ::= expr NOT NULL", + /* 213 */ "expr ::= expr IS expr", + /* 214 */ "expr ::= expr IS NOT expr", + /* 215 */ "expr ::= NOT expr", + /* 216 */ "expr ::= BITNOT expr", + /* 217 */ "expr ::= MINUS expr", + /* 218 */ "expr ::= PLUS expr", + /* 219 */ "between_op ::= BETWEEN", + /* 220 */ "between_op ::= NOT BETWEEN", + /* 221 */ "expr ::= expr between_op expr AND expr", + /* 222 */ "in_op ::= IN", + /* 223 */ "in_op ::= NOT IN", + /* 224 */ "expr ::= expr in_op LP exprlist RP", + /* 225 */ "expr ::= LP select RP", + /* 226 */ "expr ::= expr in_op LP select RP", + /* 227 */ "expr ::= expr in_op nm dbnm", + /* 228 */ "expr ::= EXISTS LP select RP", + /* 229 */ "expr ::= CASE case_operand case_exprlist case_else END", + /* 230 */ "case_exprlist ::= case_exprlist WHEN expr THEN expr", + /* 231 */ "case_exprlist ::= WHEN expr THEN expr", + /* 232 */ "case_else ::= ELSE expr", + /* 233 */ "case_else ::=", + /* 234 */ "case_operand ::= expr", + /* 235 */ "case_operand ::=", + /* 236 */ "exprlist ::= nexprlist", + /* 237 */ "exprlist ::=", + /* 238 */ "nexprlist ::= nexprlist COMMA expr", + /* 239 */ "nexprlist ::= expr", + /* 240 */ "cmd ::= createkw uniqueflag INDEX ifnotexists nm dbnm ON nm LP sortlist RP where_opt", + 
/* 241 */ "uniqueflag ::= UNIQUE", + /* 242 */ "uniqueflag ::=", + /* 243 */ "eidlist_opt ::=", + /* 244 */ "eidlist_opt ::= LP eidlist RP", + /* 245 */ "eidlist ::= eidlist COMMA nm collate sortorder", + /* 246 */ "eidlist ::= nm collate sortorder", + /* 247 */ "collate ::=", + /* 248 */ "collate ::= COLLATE ID|STRING", + /* 249 */ "cmd ::= DROP INDEX ifexists fullname", + /* 250 */ "cmd ::= VACUUM", + /* 251 */ "cmd ::= VACUUM nm", + /* 252 */ "cmd ::= PRAGMA nm dbnm", + /* 253 */ "cmd ::= PRAGMA nm dbnm EQ nmnum", + /* 254 */ "cmd ::= PRAGMA nm dbnm LP nmnum RP", + /* 255 */ "cmd ::= PRAGMA nm dbnm EQ minus_num", + /* 256 */ "cmd ::= PRAGMA nm dbnm LP minus_num RP", + /* 257 */ "nmnum ::= plus_num", + /* 258 */ "nmnum ::= nm", + /* 259 */ "nmnum ::= ON", + /* 260 */ "nmnum ::= DELETE", + /* 261 */ "nmnum ::= DEFAULT", + /* 262 */ "plus_num ::= PLUS INTEGER|FLOAT", + /* 263 */ "plus_num ::= INTEGER|FLOAT", + /* 264 */ "minus_num ::= MINUS INTEGER|FLOAT", + /* 265 */ "cmd ::= createkw trigger_decl BEGIN trigger_cmd_list END", + /* 266 */ "trigger_decl ::= temp TRIGGER ifnotexists nm dbnm trigger_time trigger_event ON fullname foreach_clause when_clause", + /* 267 */ "trigger_time ::= BEFORE", + /* 268 */ "trigger_time ::= AFTER", + /* 269 */ "trigger_time ::= INSTEAD OF", + /* 270 */ "trigger_time ::=", + /* 271 */ "trigger_event ::= DELETE|INSERT", + /* 272 */ "trigger_event ::= UPDATE", + /* 273 */ "trigger_event ::= UPDATE OF idlist", + /* 274 */ "foreach_clause ::=", + /* 275 */ "foreach_clause ::= FOR EACH ROW", + /* 276 */ "when_clause ::=", + /* 277 */ "when_clause ::= WHEN expr", + /* 278 */ "trigger_cmd_list ::= trigger_cmd_list trigger_cmd SEMI", + /* 279 */ "trigger_cmd_list ::= trigger_cmd SEMI", + /* 280 */ "trnm ::= nm", + /* 281 */ "trnm ::= nm DOT nm", + /* 282 */ "tridxby ::=", + /* 283 */ "tridxby ::= INDEXED BY nm", + /* 284 */ "tridxby ::= NOT INDEXED", + /* 285 */ "trigger_cmd ::= UPDATE orconf trnm tridxby SET setlist where_opt", + /* 286 */ "trigger_cmd ::= insert_cmd INTO trnm idlist_opt select", + /* 287 */ "trigger_cmd ::= DELETE FROM trnm tridxby where_opt", + /* 288 */ "trigger_cmd ::= select", + /* 289 */ "expr ::= RAISE LP IGNORE RP", + /* 290 */ "expr ::= RAISE LP raisetype COMMA nm RP", + /* 291 */ "raisetype ::= ROLLBACK", + /* 292 */ "raisetype ::= ABORT", + /* 293 */ "raisetype ::= FAIL", + /* 294 */ "cmd ::= DROP TRIGGER ifexists fullname", + /* 295 */ "cmd ::= ATTACH database_kw_opt expr AS expr key_opt", + /* 296 */ "cmd ::= DETACH database_kw_opt expr", + /* 297 */ "key_opt ::=", + /* 298 */ "key_opt ::= KEY expr", + /* 299 */ "database_kw_opt ::= DATABASE", + /* 300 */ "database_kw_opt ::=", + /* 301 */ "cmd ::= REINDEX", + /* 302 */ "cmd ::= REINDEX nm dbnm", + /* 303 */ "cmd ::= ANALYZE", + /* 304 */ "cmd ::= ANALYZE nm dbnm", + /* 305 */ "cmd ::= ALTER TABLE fullname RENAME TO nm", + /* 306 */ "cmd ::= ALTER TABLE add_column_fullname ADD kwcolumn_opt column", + /* 307 */ "add_column_fullname ::= fullname", + /* 308 */ "kwcolumn_opt ::=", + /* 309 */ "kwcolumn_opt ::= COLUMNKW", + /* 310 */ "cmd ::= create_vtab", + /* 311 */ "cmd ::= create_vtab LP vtabarglist RP", + /* 312 */ "create_vtab ::= createkw VIRTUAL TABLE ifnotexists nm dbnm USING nm", + /* 313 */ "vtabarglist ::= vtabarg", + /* 314 */ "vtabarglist ::= vtabarglist COMMA vtabarg", + /* 315 */ "vtabarg ::=", + /* 316 */ "vtabarg ::= vtabarg vtabargtoken", + /* 317 */ "vtabargtoken ::= ANY", + /* 318 */ "vtabargtoken ::= lp anylist RP", + /* 319 */ "lp ::= LP", + /* 320 */ "anylist ::=", + 
/* 321 */ "anylist ::= anylist LP anylist RP", + /* 322 */ "anylist ::= anylist ANY", + /* 323 */ "with ::=", + /* 324 */ "with ::= WITH wqlist", + /* 325 */ "with ::= WITH RECURSIVE wqlist", + /* 326 */ "wqlist ::= nm eidlist_opt AS LP select RP", + /* 327 */ "wqlist ::= wqlist COMMA nm eidlist_opt AS LP select RP", }; #endif /* NDEBUG */ #if YYSTACKDEPTH<=0 @@ -93903,10 +129273,19 @@ #endif } } #endif +/* Datatype of the argument to the memory allocated passed as the +** second argument to sqlite3ParserAlloc() below. This can be changed by +** putting an appropriate #define in the %include section of the input +** grammar. +*/ +#ifndef YYMALLOCARGTYPE +# define YYMALLOCARGTYPE size_t +#endif + /* ** This function allocates a new parser. ** The only argument is a pointer to a function which works like ** malloc. ** @@ -93915,13 +129294,13 @@ ** ** Outputs: ** A pointer to a parser. This pointer is used in subsequent calls ** to sqlite3Parser and sqlite3ParserFree. */ -SQLITE_PRIVATE void *sqlite3ParserAlloc(void *(*mallocProc)(size_t)){ +SQLITE_PRIVATE void *sqlite3ParserAlloc(void *(*mallocProc)(YYMALLOCARGTYPE)){ yyParser *pParser; - pParser = (yyParser*)(*mallocProc)( (size_t)sizeof(yyParser) ); + pParser = (yyParser*)(*mallocProc)( (YYMALLOCARGTYPE)sizeof(yyParser) ); if( pParser ){ pParser->yyidx = -1; #ifdef YYTRACKMAXSTACKDEPTH pParser->yyidxMax = 0; #endif @@ -93932,14 +129311,16 @@ #endif } return pParser; } -/* The following function deletes the value associated with a -** symbol. The symbol can be either a terminal or nonterminal. -** "yymajor" is the symbol code, and "yypminor" is a pointer to -** the value. +/* The following function deletes the "minor type" or semantic value +** associated with a symbol. The symbol can be either a terminal +** or nonterminal. "yymajor" is the symbol code, and "yypminor" is +** a pointer to the value to be deleted. The code used to do the +** deletions is derived from the %destructor and/or %token_destructor +** directives of the input grammar. */ static void yy_destructor( yyParser *yypParser, /* The parser */ YYCODETYPE yymajor, /* Type code for object to destroy */ YYMINORTYPE *yypminor /* The object to be destroyed */ @@ -93951,132 +129332,127 @@ ** when the symbol is popped from the stack during a ** reduce or during error processing or when a parser is ** being destroyed before it is finished parsing. ** ** Note: during a reduce, the only symbols destroyed are those - ** which appear on the RHS of the rule, but which are not used + ** which appear on the RHS of the rule, but which are *not* used ** inside the C code. 
*/ - case 160: /* select */ - case 194: /* oneselect */ +/********* Begin destructor definitions ***************************************/ + case 163: /* select */ + case 196: /* selectnowith */ + case 197: /* oneselect */ + case 208: /* values */ { -sqlite3SelectDelete(pParse->db, (yypminor->yy3)); +sqlite3SelectDelete(pParse->db, (yypminor->yy387)); } break; case 174: /* term */ case 175: /* expr */ - case 223: /* escape */ { -sqlite3ExprDelete(pParse->db, (yypminor->yy346).pExpr); +sqlite3ExprDelete(pParse->db, (yypminor->yy118).pExpr); } break; - case 179: /* idxlist_opt */ - case 187: /* idxlist */ - case 197: /* selcollist */ - case 200: /* groupby_opt */ - case 202: /* orderby_opt */ - case 204: /* sclp */ - case 214: /* sortlist */ - case 216: /* nexprlist */ - case 217: /* setlist */ - case 220: /* itemlist */ - case 221: /* exprlist */ + case 179: /* eidlist_opt */ + case 188: /* sortlist */ + case 189: /* eidlist */ + case 201: /* selcollist */ + case 204: /* groupby_opt */ + case 206: /* orderby_opt */ + case 209: /* nexprlist */ + case 210: /* exprlist */ + case 211: /* sclp */ + case 220: /* setlist */ case 227: /* case_exprlist */ { -sqlite3ExprListDelete(pParse->db, (yypminor->yy14)); -} - break; - case 193: /* fullname */ - case 198: /* from */ - case 206: /* seltablist */ - case 207: /* stl_prefix */ -{ -sqlite3SrcListDelete(pParse->db, (yypminor->yy65)); -} - break; - case 199: /* where_opt */ - case 201: /* having_opt */ - case 210: /* on_opt */ - case 215: /* sortitem */ +sqlite3ExprListDelete(pParse->db, (yypminor->yy322)); +} + break; + case 195: /* fullname */ + case 202: /* from */ + case 213: /* seltablist */ + case 214: /* stl_prefix */ +{ +sqlite3SrcListDelete(pParse->db, (yypminor->yy259)); +} + break; + case 198: /* with */ + case 251: /* wqlist */ +{ +sqlite3WithDelete(pParse->db, (yypminor->yy451)); +} + break; + case 203: /* where_opt */ + case 205: /* having_opt */ + case 217: /* on_opt */ case 226: /* case_operand */ case 228: /* case_else */ - case 239: /* when_clause */ - case 244: /* key_opt */ -{ -sqlite3ExprDelete(pParse->db, (yypminor->yy132)); -} - break; - case 211: /* using_opt */ - case 213: /* inscollist */ - case 219: /* inscollist_opt */ -{ -sqlite3IdListDelete(pParse->db, (yypminor->yy408)); -} - break; - case 235: /* trigger_cmd_list */ - case 240: /* trigger_cmd */ -{ -sqlite3DeleteTriggerStep(pParse->db, (yypminor->yy473)); -} - break; - case 237: /* trigger_event */ -{ -sqlite3IdListDelete(pParse->db, (yypminor->yy378).b); -} - break; + case 237: /* when_clause */ + case 242: /* key_opt */ +{ +sqlite3ExprDelete(pParse->db, (yypminor->yy314)); +} + break; + case 218: /* using_opt */ + case 219: /* idlist */ + case 222: /* idlist_opt */ +{ +sqlite3IdListDelete(pParse->db, (yypminor->yy384)); +} + break; + case 233: /* trigger_cmd_list */ + case 238: /* trigger_cmd */ +{ +sqlite3DeleteTriggerStep(pParse->db, (yypminor->yy203)); +} + break; + case 235: /* trigger_event */ +{ +sqlite3IdListDelete(pParse->db, (yypminor->yy90).b); +} + break; +/********* End destructor definitions *****************************************/ default: break; /* If no destructor action specified: do nothing */ } } /* ** Pop the parser's stack once. ** ** If there is a destructor routine associated with the token which ** is popped from the stack, then call it. -** -** Return the major token number for the symbol popped. 
*/ -static int yy_pop_parser_stack(yyParser *pParser){ - YYCODETYPE yymajor; - yyStackEntry *yytos = &pParser->yystack[pParser->yyidx]; - - /* There is no mechanism by which the parser stack can be popped below - ** empty in SQLite. */ - if( NEVER(pParser->yyidx<0) ) return 0; +static void yy_pop_parser_stack(yyParser *pParser){ + yyStackEntry *yytos; + assert( pParser->yyidx>=0 ); + yytos = &pParser->yystack[pParser->yyidx--]; #ifndef NDEBUG - if( yyTraceFILE && pParser->yyidx>=0 ){ + if( yyTraceFILE ){ fprintf(yyTraceFILE,"%sPopping %s\n", yyTracePrompt, yyTokenName[yytos->major]); } #endif - yymajor = yytos->major; - yy_destructor(pParser, yymajor, &yytos->minor); - pParser->yyidx--; - return yymajor; + yy_destructor(pParser, yytos->major, &yytos->minor); } /* -** Deallocate and destroy a parser. Destructors are all called for +** Deallocate and destroy a parser. Destructors are called for ** all stack elements before shutting the parser down. ** -** Inputs: -** <ul> -** <li> A pointer to the parser. This should be a pointer -** obtained from sqlite3ParserAlloc. -** <li> A pointer to a function used to reclaim memory obtained -** from malloc. -** </ul> +** If the YYPARSEFREENEVERNULL macro exists (for example because it +** is defined in a %include section of the input grammar) then it is +** assumed that the input pointer is never NULL. */ SQLITE_PRIVATE void sqlite3ParserFree( void *p, /* The parser to be deleted */ void (*freeProc)(void*) /* Function used to reclaim memory */ ){ yyParser *pParser = (yyParser*)p; - /* In SQLite, we never try to destroy a parser that was not successfully - ** created in the first place. */ - if( NEVER(pParser==0) ) return; +#ifndef YYPARSEFREENEVERNULL + if( pParser==0 ) return; +#endif while( pParser->yyidx>=0 ) yy_pop_parser_stack(pParser); #if YYSTACKDEPTH<=0 free(pParser->yystack); #endif (*freeProc)((void*)pParser); @@ -94093,79 +129469,76 @@ #endif /* ** Find the appropriate action for a parser given the terminal ** look-ahead token iLookAhead. -** -** If the look-ahead token is YYNOCODE, then check to see if the action is -** independent of the look-ahead. If it is, return the action, otherwise -** return YY_NO_ACTION. 
*/ static int yy_find_shift_action( yyParser *pParser, /* The parser */ YYCODETYPE iLookAhead /* The look-ahead token */ ){ int i; int stateno = pParser->yystack[pParser->yyidx].stateno; - if( stateno>YY_SHIFT_COUNT - || (i = yy_shift_ofst[stateno])==YY_SHIFT_USE_DFLT ){ - return yy_default[stateno]; - } - assert( iLookAhead!=YYNOCODE ); - i += iLookAhead; - if( i<0 || i>=YY_ACTTAB_COUNT || yy_lookahead[i]!=iLookAhead ){ - if( iLookAhead>0 ){ + if( stateno>=YY_MIN_REDUCE ) return stateno; + assert( stateno <= YY_SHIFT_COUNT ); + do{ + i = yy_shift_ofst[stateno]; + if( i==YY_SHIFT_USE_DFLT ) return yy_default[stateno]; + assert( iLookAhead!=YYNOCODE ); + i += iLookAhead; + if( i<0 || i>=YY_ACTTAB_COUNT || yy_lookahead[i]!=iLookAhead ){ + if( iLookAhead>0 ){ #ifdef YYFALLBACK - YYCODETYPE iFallback; /* Fallback token */ - if( iLookAhead<sizeof(yyFallback)/sizeof(yyFallback[0]) - && (iFallback = yyFallback[iLookAhead])!=0 ){ + YYCODETYPE iFallback; /* Fallback token */ + if( iLookAhead<sizeof(yyFallback)/sizeof(yyFallback[0]) + && (iFallback = yyFallback[iLookAhead])!=0 ){ #ifndef NDEBUG - if( yyTraceFILE ){ - fprintf(yyTraceFILE, "%sFALLBACK %s => %s\n", - yyTracePrompt, yyTokenName[iLookAhead], yyTokenName[iFallback]); + if( yyTraceFILE ){ + fprintf(yyTraceFILE, "%sFALLBACK %s => %s\n", + yyTracePrompt, yyTokenName[iLookAhead], yyTokenName[iFallback]); + } +#endif + assert( yyFallback[iFallback]==0 ); /* Fallback loop must terminate */ + iLookAhead = iFallback; + continue; } #endif - return yy_find_shift_action(pParser, iFallback); - } -#endif #ifdef YYWILDCARD - { - int j = i - iLookAhead + YYWILDCARD; - if( + { + int j = i - iLookAhead + YYWILDCARD; + if( #if YY_SHIFT_MIN+YYWILDCARD<0 - j>=0 && + j>=0 && #endif #if YY_SHIFT_MAX+YYWILDCARD>=YY_ACTTAB_COUNT - j<YY_ACTTAB_COUNT && + j<YY_ACTTAB_COUNT && #endif - yy_lookahead[j]==YYWILDCARD - ){ + yy_lookahead[j]==YYWILDCARD + ){ #ifndef NDEBUG - if( yyTraceFILE ){ - fprintf(yyTraceFILE, "%sWILDCARD %s => %s\n", - yyTracePrompt, yyTokenName[iLookAhead], yyTokenName[YYWILDCARD]); - } + if( yyTraceFILE ){ + fprintf(yyTraceFILE, "%sWILDCARD %s => %s\n", + yyTracePrompt, yyTokenName[iLookAhead], + yyTokenName[YYWILDCARD]); + } #endif /* NDEBUG */ - return yy_action[j]; + return yy_action[j]; + } } - } #endif /* YYWILDCARD */ + } + return yy_default[stateno]; + }else{ + return yy_action[i]; } - return yy_default[stateno]; - }else{ - return yy_action[i]; - } + }while(1); } /* ** Find the appropriate action for a parser given the non-terminal ** look-ahead token iLookAhead. -** -** If the look-ahead token is YYNOCODE, then check to see if the action is -** independent of the look-ahead. If it is, return the action, otherwise -** return YY_NO_ACTION. 
*/ static int yy_find_reduce_action( int stateno, /* Current state number */ YYCODETYPE iLookAhead /* The look-ahead token */ ){ @@ -94204,17 +129577,38 @@ } #endif while( yypParser->yyidx>=0 ) yy_pop_parser_stack(yypParser); /* Here code is inserted which will execute if the parser ** stack every overflows */ +/******** Begin %stack_overflow code ******************************************/ UNUSED_PARAMETER(yypMinor); /* Silence some compiler warnings */ sqlite3ErrorMsg(pParse, "parser stack overflow"); - pParse->parseError = 1; +/******** End %stack_overflow code ********************************************/ sqlite3ParserARG_STORE; /* Suppress warning about unused %extra_argument var */ } +/* +** Print tracing information for a SHIFT action +*/ +#ifndef NDEBUG +static void yyTraceShift(yyParser *yypParser, int yyNewState){ + if( yyTraceFILE ){ + if( yyNewState<YYNSTATE ){ + fprintf(yyTraceFILE,"%sShift '%s', go to state %d\n", + yyTracePrompt,yyTokenName[yypParser->yystack[yypParser->yyidx].major], + yyNewState); + }else{ + fprintf(yyTraceFILE,"%sShift '%s'\n", + yyTracePrompt,yyTokenName[yypParser->yystack[yypParser->yyidx].major]); + } + } +} +#else +# define yyTraceShift(X,Y) +#endif + /* ** Perform a shift action. */ static void yy_shift( yyParser *yypParser, /* The parser to be shifted */ @@ -94245,86 +129639,75 @@ #endif yytos = &yypParser->yystack[yypParser->yyidx]; yytos->stateno = (YYACTIONTYPE)yyNewState; yytos->major = (YYCODETYPE)yyMajor; yytos->minor = *yypMinor; -#ifndef NDEBUG - if( yyTraceFILE && yypParser->yyidx>0 ){ - int i; - fprintf(yyTraceFILE,"%sShift %d\n",yyTracePrompt,yyNewState); - fprintf(yyTraceFILE,"%sStack:",yyTracePrompt); - for(i=1; i<=yypParser->yyidx; i++) - fprintf(yyTraceFILE," %s",yyTokenName[yypParser->yystack[i].major]); - fprintf(yyTraceFILE,"\n"); - } -#endif + yyTraceShift(yypParser, yyNewState); } /* The following table contains information about every rule that ** is used during the reduce. 
*/ static const struct { YYCODETYPE lhs; /* Symbol on the left-hand side of the rule */ unsigned char nrhs; /* Number of right-hand side symbols in the rule */ } yyRuleInfo[] = { - { 142, 1 }, - { 143, 2 }, - { 143, 1 }, { 144, 1 }, - { 144, 3 }, - { 145, 0 }, + { 145, 2 }, { 145, 1 }, - { 145, 3 }, { 146, 1 }, + { 146, 3 }, + { 147, 0 }, + { 147, 1 }, { 147, 3 }, - { 149, 0 }, - { 149, 1 }, - { 149, 2 }, - { 148, 0 }, - { 148, 1 }, - { 148, 1 }, - { 148, 1 }, - { 147, 2 }, - { 147, 2 }, - { 147, 2 }, + { 148, 1 }, + { 149, 3 }, + { 151, 0 }, { 151, 1 }, - { 151, 0 }, - { 147, 2 }, - { 147, 3 }, - { 147, 5 }, - { 147, 2 }, - { 152, 6 }, - { 154, 1 }, - { 156, 0 }, - { 156, 3 }, - { 155, 1 }, - { 155, 0 }, - { 153, 4 }, - { 153, 2 }, - { 158, 3 }, - { 158, 1 }, - { 161, 3 }, - { 162, 1 }, - { 165, 1 }, - { 165, 1 }, - { 166, 1 }, + { 151, 2 }, + { 150, 0 }, { 150, 1 }, { 150, 1 }, { 150, 1 }, - { 163, 0 }, - { 163, 1 }, - { 167, 1 }, - { 167, 4 }, - { 167, 6 }, + { 149, 2 }, + { 149, 2 }, + { 149, 2 }, + { 153, 1 }, + { 153, 0 }, + { 149, 2 }, + { 149, 3 }, + { 149, 5 }, + { 149, 2 }, + { 154, 6 }, + { 156, 1 }, + { 158, 0 }, + { 158, 3 }, + { 157, 1 }, + { 157, 0 }, + { 155, 5 }, + { 155, 2 }, + { 162, 0 }, + { 162, 2 }, + { 160, 3 }, + { 160, 1 }, + { 164, 3 }, + { 165, 1 }, + { 152, 1 }, + { 152, 1 }, + { 152, 1 }, + { 166, 0 }, + { 166, 1 }, { 168, 1 }, - { 168, 2 }, - { 169, 1 }, + { 168, 4 }, + { 168, 6 }, { 169, 1 }, - { 164, 2 }, - { 164, 0 }, - { 172, 3 }, - { 172, 1 }, + { 169, 2 }, + { 170, 1 }, + { 170, 1 }, + { 167, 2 }, + { 167, 0 }, + { 173, 2 }, { 173, 2 }, { 173, 4 }, { 173, 3 }, { 173, 3 }, { 173, 2 }, @@ -94352,116 +129735,117 @@ { 181, 3 }, { 181, 2 }, { 184, 0 }, { 184, 2 }, { 184, 2 }, - { 159, 0 }, - { 159, 2 }, + { 161, 0 }, + { 161, 2 }, { 185, 3 }, - { 185, 2 }, { 185, 1 }, - { 186, 2 }, - { 186, 7 }, - { 186, 5 }, - { 186, 5 }, - { 186, 10 }, - { 188, 0 }, - { 188, 1 }, + { 186, 1 }, + { 186, 0 }, + { 187, 2 }, + { 187, 7 }, + { 187, 5 }, + { 187, 5 }, + { 187, 10 }, + { 190, 0 }, + { 190, 1 }, { 176, 0 }, { 176, 3 }, - { 189, 0 }, - { 189, 2 }, - { 190, 1 }, - { 190, 1 }, - { 190, 1 }, - { 147, 4 }, - { 192, 2 }, - { 192, 0 }, - { 147, 8 }, - { 147, 4 }, - { 147, 1 }, - { 160, 1 }, - { 160, 3 }, - { 195, 1 }, - { 195, 2 }, - { 195, 1 }, - { 194, 9 }, - { 196, 1 }, - { 196, 1 }, - { 196, 0 }, - { 204, 2 }, - { 204, 0 }, - { 197, 3 }, - { 197, 2 }, - { 197, 4 }, - { 205, 2 }, - { 205, 1 }, - { 205, 0 }, - { 198, 0 }, - { 198, 2 }, - { 207, 2 }, - { 207, 0 }, - { 206, 7 }, - { 206, 7 }, - { 206, 7 }, - { 157, 0 }, - { 157, 2 }, - { 193, 2 }, - { 208, 1 }, - { 208, 2 }, - { 208, 3 }, - { 208, 4 }, - { 210, 2 }, - { 210, 0 }, - { 209, 0 }, - { 209, 3 }, - { 209, 2 }, - { 211, 4 }, - { 211, 0 }, - { 202, 0 }, - { 202, 3 }, - { 214, 4 }, - { 214, 2 }, - { 215, 1 }, + { 191, 0 }, + { 191, 2 }, + { 192, 1 }, + { 192, 1 }, + { 192, 1 }, + { 149, 4 }, + { 194, 2 }, + { 194, 0 }, + { 149, 9 }, + { 149, 4 }, + { 149, 1 }, + { 163, 2 }, + { 196, 1 }, + { 196, 3 }, + { 199, 1 }, + { 199, 2 }, + { 199, 1 }, + { 197, 9 }, + { 197, 1 }, + { 208, 4 }, + { 208, 5 }, + { 200, 1 }, + { 200, 1 }, + { 200, 0 }, + { 211, 2 }, + { 211, 0 }, + { 201, 3 }, + { 201, 2 }, + { 201, 4 }, + { 212, 2 }, + { 212, 1 }, + { 212, 0 }, + { 202, 0 }, + { 202, 2 }, + { 214, 2 }, + { 214, 0 }, + { 213, 7 }, + { 213, 9 }, + { 213, 7 }, + { 213, 7 }, + { 159, 0 }, + { 159, 2 }, + { 195, 2 }, + { 215, 1 }, + { 215, 2 }, + { 215, 3 }, + { 215, 4 }, + { 217, 2 }, + { 217, 0 }, + { 216, 0 }, + { 216, 3 }, + { 
216, 2 }, + { 218, 4 }, + { 218, 0 }, + { 206, 0 }, + { 206, 3 }, + { 188, 4 }, + { 188, 2 }, { 177, 1 }, { 177, 1 }, { 177, 0 }, - { 200, 0 }, - { 200, 3 }, - { 201, 0 }, - { 201, 2 }, + { 204, 0 }, + { 204, 3 }, + { 205, 0 }, + { 205, 2 }, + { 207, 0 }, + { 207, 2 }, + { 207, 4 }, + { 207, 4 }, + { 149, 6 }, { 203, 0 }, { 203, 2 }, - { 203, 4 }, - { 203, 4 }, - { 147, 5 }, - { 199, 0 }, - { 199, 2 }, - { 147, 7 }, - { 217, 5 }, - { 217, 3 }, - { 147, 8 }, - { 147, 5 }, - { 147, 6 }, - { 218, 2 }, - { 218, 1 }, + { 149, 8 }, + { 220, 5 }, { 220, 3 }, - { 220, 1 }, - { 219, 0 }, + { 149, 6 }, + { 149, 7 }, + { 221, 2 }, + { 221, 1 }, + { 222, 0 }, + { 222, 3 }, { 219, 3 }, - { 213, 3 }, - { 213, 1 }, + { 219, 1 }, { 175, 1 }, { 175, 3 }, { 174, 1 }, { 175, 1 }, { 175, 1 }, { 175, 3 }, { 175, 5 }, { 174, 1 }, { 174, 1 }, - { 175, 1 }, { 175, 1 }, { 175, 3 }, { 175, 6 }, { 175, 5 }, { 175, 4 }, @@ -94472,17 +129856,14 @@ { 175, 3 }, { 175, 3 }, { 175, 3 }, { 175, 3 }, { 175, 3 }, - { 222, 1 }, - { 222, 2 }, - { 222, 1 }, - { 222, 2 }, + { 223, 1 }, { 223, 2 }, - { 223, 0 }, - { 175, 4 }, + { 175, 3 }, + { 175, 5 }, { 175, 2 }, { 175, 3 }, { 175, 3 }, { 175, 4 }, { 175, 2 }, @@ -94504,100 +129885,102 @@ { 227, 4 }, { 228, 2 }, { 228, 0 }, { 226, 1 }, { 226, 0 }, - { 221, 1 }, - { 221, 0 }, - { 216, 3 }, - { 216, 1 }, - { 147, 11 }, + { 210, 1 }, + { 210, 0 }, + { 209, 3 }, + { 209, 1 }, + { 149, 12 }, { 229, 1 }, { 229, 0 }, { 179, 0 }, { 179, 3 }, - { 187, 5 }, - { 187, 3 }, + { 189, 5 }, + { 189, 3 }, { 230, 0 }, { 230, 2 }, - { 147, 4 }, - { 147, 1 }, - { 147, 2 }, - { 147, 3 }, - { 147, 5 }, - { 147, 6 }, - { 147, 5 }, - { 147, 6 }, + { 149, 4 }, + { 149, 1 }, + { 149, 2 }, + { 149, 3 }, + { 149, 5 }, + { 149, 6 }, + { 149, 5 }, + { 149, 6 }, { 231, 1 }, { 231, 1 }, { 231, 1 }, { 231, 1 }, { 231, 1 }, - { 170, 2 }, { 171, 2 }, - { 233, 1 }, - { 232, 1 }, - { 232, 0 }, - { 147, 5 }, - { 234, 11 }, - { 236, 1 }, - { 236, 1 }, - { 236, 2 }, + { 171, 1 }, + { 172, 2 }, + { 149, 5 }, + { 232, 11 }, + { 234, 1 }, + { 234, 1 }, + { 234, 2 }, + { 234, 0 }, + { 235, 1 }, + { 235, 1 }, + { 235, 3 }, { 236, 0 }, - { 237, 1 }, - { 237, 1 }, - { 237, 3 }, - { 238, 0 }, - { 238, 3 }, - { 239, 0 }, - { 239, 2 }, - { 235, 3 }, - { 235, 2 }, - { 241, 1 }, - { 241, 3 }, - { 242, 0 }, - { 242, 3 }, - { 242, 2 }, - { 240, 7 }, - { 240, 8 }, - { 240, 5 }, - { 240, 5 }, - { 240, 1 }, + { 236, 3 }, + { 237, 0 }, + { 237, 2 }, + { 233, 3 }, + { 233, 2 }, + { 239, 1 }, + { 239, 3 }, + { 240, 0 }, + { 240, 3 }, + { 240, 2 }, + { 238, 7 }, + { 238, 5 }, + { 238, 5 }, + { 238, 1 }, { 175, 4 }, { 175, 6 }, - { 191, 1 }, - { 191, 1 }, - { 191, 1 }, - { 147, 4 }, - { 147, 6 }, - { 147, 3 }, - { 244, 0 }, - { 244, 2 }, + { 193, 1 }, + { 193, 1 }, + { 193, 1 }, + { 149, 4 }, + { 149, 6 }, + { 149, 3 }, + { 242, 0 }, + { 242, 2 }, + { 241, 1 }, + { 241, 0 }, + { 149, 1 }, + { 149, 3 }, + { 149, 1 }, + { 149, 3 }, + { 149, 6 }, + { 149, 6 }, { 243, 1 }, - { 243, 0 }, - { 147, 1 }, - { 147, 3 }, - { 147, 1 }, - { 147, 3 }, - { 147, 6 }, - { 147, 6 }, - { 245, 1 }, - { 246, 0 }, + { 244, 0 }, + { 244, 1 }, + { 149, 1 }, + { 149, 4 }, + { 245, 8 }, { 246, 1 }, - { 147, 1 }, - { 147, 4 }, - { 247, 7 }, + { 246, 3 }, + { 247, 0 }, + { 247, 2 }, { 248, 1 }, { 248, 3 }, - { 249, 0 }, - { 249, 2 }, - { 250, 1 }, - { 250, 3 }, - { 251, 1 }, - { 252, 0 }, - { 252, 4 }, - { 252, 2 }, + { 249, 1 }, + { 250, 0 }, + { 250, 4 }, + { 250, 2 }, + { 198, 0 }, + { 198, 2 }, + { 198, 3 }, + { 251, 6 }, + { 251, 8 }, }; static void 
yy_accept(yyParser*); /* Forward Declaration */ /* @@ -94616,32 +129999,16 @@ sqlite3ParserARG_FETCH; yymsp = &yypParser->yystack[yypParser->yyidx]; #ifndef NDEBUG if( yyTraceFILE && yyruleno>=0 && yyruleno<(int)(sizeof(yyRuleName)/sizeof(yyRuleName[0])) ){ - fprintf(yyTraceFILE, "%sReduce [%s].\n", yyTracePrompt, - yyRuleName[yyruleno]); + yysize = yyRuleInfo[yyruleno].nrhs; + fprintf(yyTraceFILE, "%sReduce [%s], go to state %d.\n", yyTracePrompt, + yyRuleName[yyruleno], yymsp[-yysize].stateno); } #endif /* NDEBUG */ - - /* Silence complaints from purify about yygotominor being uninitialized - ** in some cases when it is copied into the stack after the following - ** switch. yygotominor is uninitialized when a rule reduces that does - ** not set the value of its left-hand side nonterminal. Leaving the - ** value of the nonterminal uninitialized is utterly harmless as long - ** as the value is never used. So really the only thing this code - ** accomplishes is to quieten purify. - ** - ** 2007-01-16: The wireshark project (www.wireshark.org) reports that - ** without this code, their parser segfaults. I'm not sure what there - ** parser is doing to make this happen. This is the second bug report - ** from wireshark this week. Clearly they are stressing Lemon in ways - ** that it has not been previously stressed... (SQLite ticket #2172) - */ - /*memset(&yygotominor, 0, sizeof(yygotominor));*/ yygotominor = yyzerominor; - switch( yyruleno ){ /* Beginning here are the reduction cases. A typical example ** follows: ** case 0: @@ -94648,34 +130015,32 @@ ** #line <lineno> <grammarfile> ** { ... } // User supplied code ** #line <lineno> <thisfile> ** break; */ - case 5: /* explain ::= */ -{ sqlite3BeginParse(pParse, 0); } - break; +/********** Begin reduce actions **********************************************/ case 6: /* explain ::= EXPLAIN */ -{ sqlite3BeginParse(pParse, 1); } +{ pParse->explain = 1; } break; case 7: /* explain ::= EXPLAIN QUERY PLAN */ -{ sqlite3BeginParse(pParse, 2); } +{ pParse->explain = 2; } break; case 8: /* cmdx ::= cmd */ { sqlite3FinishCoding(pParse); } break; case 9: /* cmd ::= BEGIN transtype trans_opt */ -{sqlite3BeginTransaction(pParse, yymsp[-1].minor.yy328);} +{sqlite3BeginTransaction(pParse, yymsp[-1].minor.yy4);} break; case 13: /* transtype ::= */ -{yygotominor.yy328 = TK_DEFERRED;} +{yygotominor.yy4 = TK_DEFERRED;} break; case 14: /* transtype ::= DEFERRED */ case 15: /* transtype ::= IMMEDIATE */ yytestcase(yyruleno==15); case 16: /* transtype ::= EXCLUSIVE */ yytestcase(yyruleno==16); case 115: /* multiselect_op ::= UNION */ yytestcase(yyruleno==115); case 117: /* multiselect_op ::= EXCEPT|INTERSECT */ yytestcase(yyruleno==117); -{yygotominor.yy328 = yymsp[0].major;} +{yygotominor.yy4 = yymsp[0].major;} break; case 17: /* cmd ::= COMMIT trans_opt */ case 18: /* cmd ::= END trans_opt */ yytestcase(yyruleno==18); {sqlite3CommitTransaction(pParse);} break; @@ -94697,1065 +130062,1200 @@ sqlite3Savepoint(pParse, SAVEPOINT_ROLLBACK, &yymsp[0].minor.yy0); } break; case 26: /* create_table ::= createkw temp TABLE ifnotexists nm dbnm */ { - sqlite3StartTable(pParse,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy0,yymsp[-4].minor.yy328,0,0,yymsp[-2].minor.yy328); + sqlite3StartTable(pParse,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy0,yymsp[-4].minor.yy4,0,0,yymsp[-2].minor.yy4); } break; case 27: /* createkw ::= CREATE */ { - pParse->db->lookaside.bEnabled = 0; + disableLookaside(pParse); yygotominor.yy0 = yymsp[0].minor.yy0; } break; case 28: /* ifnotexists ::= */ case 31: /* 
temp ::= */ yytestcase(yyruleno==31); - case 70: /* autoinc ::= */ yytestcase(yyruleno==70); - case 83: /* defer_subclause ::= NOT DEFERRABLE init_deferred_pred_opt */ yytestcase(yyruleno==83); - case 85: /* init_deferred_pred_opt ::= */ yytestcase(yyruleno==85); - case 87: /* init_deferred_pred_opt ::= INITIALLY IMMEDIATE */ yytestcase(yyruleno==87); - case 98: /* defer_subclause_opt ::= */ yytestcase(yyruleno==98); - case 109: /* ifexists ::= */ yytestcase(yyruleno==109); - case 120: /* distinct ::= ALL */ yytestcase(yyruleno==120); - case 121: /* distinct ::= */ yytestcase(yyruleno==121); - case 223: /* between_op ::= BETWEEN */ yytestcase(yyruleno==223); - case 226: /* in_op ::= IN */ yytestcase(yyruleno==226); -{yygotominor.yy328 = 0;} + case 34: /* table_options ::= */ yytestcase(yyruleno==34); + case 68: /* autoinc ::= */ yytestcase(yyruleno==68); + case 81: /* defer_subclause ::= NOT DEFERRABLE init_deferred_pred_opt */ yytestcase(yyruleno==81); + case 83: /* init_deferred_pred_opt ::= */ yytestcase(yyruleno==83); + case 85: /* init_deferred_pred_opt ::= INITIALLY IMMEDIATE */ yytestcase(yyruleno==85); + case 97: /* defer_subclause_opt ::= */ yytestcase(yyruleno==97); + case 108: /* ifexists ::= */ yytestcase(yyruleno==108); + case 124: /* distinct ::= */ yytestcase(yyruleno==124); + case 219: /* between_op ::= BETWEEN */ yytestcase(yyruleno==219); + case 222: /* in_op ::= IN */ yytestcase(yyruleno==222); + case 247: /* collate ::= */ yytestcase(yyruleno==247); +{yygotominor.yy4 = 0;} break; case 29: /* ifnotexists ::= IF NOT EXISTS */ case 30: /* temp ::= TEMP */ yytestcase(yyruleno==30); - case 71: /* autoinc ::= AUTOINCR */ yytestcase(yyruleno==71); - case 86: /* init_deferred_pred_opt ::= INITIALLY DEFERRED */ yytestcase(yyruleno==86); - case 108: /* ifexists ::= IF EXISTS */ yytestcase(yyruleno==108); - case 119: /* distinct ::= DISTINCT */ yytestcase(yyruleno==119); - case 224: /* between_op ::= NOT BETWEEN */ yytestcase(yyruleno==224); - case 227: /* in_op ::= NOT IN */ yytestcase(yyruleno==227); -{yygotominor.yy328 = 1;} + case 69: /* autoinc ::= AUTOINCR */ yytestcase(yyruleno==69); + case 84: /* init_deferred_pred_opt ::= INITIALLY DEFERRED */ yytestcase(yyruleno==84); + case 107: /* ifexists ::= IF EXISTS */ yytestcase(yyruleno==107); + case 220: /* between_op ::= NOT BETWEEN */ yytestcase(yyruleno==220); + case 223: /* in_op ::= NOT IN */ yytestcase(yyruleno==223); + case 248: /* collate ::= COLLATE ID|STRING */ yytestcase(yyruleno==248); +{yygotominor.yy4 = 1;} break; - case 32: /* create_table_args ::= LP columnlist conslist_opt RP */ + case 32: /* create_table_args ::= LP columnlist conslist_opt RP table_options */ { - sqlite3EndTable(pParse,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy0,0); + sqlite3EndTable(pParse,&yymsp[-2].minor.yy0,&yymsp[-1].minor.yy0,yymsp[0].minor.yy4,0); } break; case 33: /* create_table_args ::= AS select */ { - sqlite3EndTable(pParse,0,0,yymsp[0].minor.yy3); - sqlite3SelectDelete(pParse->db, yymsp[0].minor.yy3); + sqlite3EndTable(pParse,0,0,0,yymsp[0].minor.yy387); + sqlite3SelectDelete(pParse->db, yymsp[0].minor.yy387); } break; - case 36: /* column ::= columnid type carglist */ + case 35: /* table_options ::= WITHOUT nm */ +{ + if( yymsp[0].minor.yy0.n==5 && sqlite3_strnicmp(yymsp[0].minor.yy0.z,"rowid",5)==0 ){ + yygotominor.yy4 = TF_WithoutRowid | TF_NoVisibleRowid; + }else{ + yygotominor.yy4 = 0; + sqlite3ErrorMsg(pParse, "unknown table option: %.*s", yymsp[0].minor.yy0.n, yymsp[0].minor.yy0.z); + } +} + break; + case 38: /* column ::= 
columnid type carglist */ { yygotominor.yy0.z = yymsp[-2].minor.yy0.z; yygotominor.yy0.n = (int)(pParse->sLastToken.z-yymsp[-2].minor.yy0.z) + pParse->sLastToken.n; } break; - case 37: /* columnid ::= nm */ + case 39: /* columnid ::= nm */ { sqlite3AddColumn(pParse,&yymsp[0].minor.yy0); yygotominor.yy0 = yymsp[0].minor.yy0; + pParse->constraintName.n = 0; } break; - case 38: /* id ::= ID */ - case 39: /* id ::= INDEXED */ yytestcase(yyruleno==39); - case 40: /* ids ::= ID|STRING */ yytestcase(yyruleno==40); - case 41: /* nm ::= id */ yytestcase(yyruleno==41); - case 42: /* nm ::= STRING */ yytestcase(yyruleno==42); - case 43: /* nm ::= JOIN_KW */ yytestcase(yyruleno==43); - case 46: /* typetoken ::= typename */ yytestcase(yyruleno==46); - case 49: /* typename ::= ids */ yytestcase(yyruleno==49); - case 127: /* as ::= AS nm */ yytestcase(yyruleno==127); - case 128: /* as ::= ids */ yytestcase(yyruleno==128); - case 138: /* dbnm ::= DOT nm */ yytestcase(yyruleno==138); - case 147: /* indexed_opt ::= INDEXED BY nm */ yytestcase(yyruleno==147); - case 252: /* collate ::= COLLATE ids */ yytestcase(yyruleno==252); - case 261: /* nmnum ::= plus_num */ yytestcase(yyruleno==261); - case 262: /* nmnum ::= nm */ yytestcase(yyruleno==262); - case 263: /* nmnum ::= ON */ yytestcase(yyruleno==263); - case 264: /* nmnum ::= DELETE */ yytestcase(yyruleno==264); - case 265: /* nmnum ::= DEFAULT */ yytestcase(yyruleno==265); - case 266: /* plus_num ::= plus_opt number */ yytestcase(yyruleno==266); - case 267: /* minus_num ::= MINUS number */ yytestcase(yyruleno==267); - case 268: /* number ::= INTEGER|FLOAT */ yytestcase(yyruleno==268); - case 286: /* trnm ::= nm */ yytestcase(yyruleno==286); + case 40: /* nm ::= ID|INDEXED */ + case 41: /* nm ::= STRING */ yytestcase(yyruleno==41); + case 42: /* nm ::= JOIN_KW */ yytestcase(yyruleno==42); + case 45: /* typetoken ::= typename */ yytestcase(yyruleno==45); + case 48: /* typename ::= ID|STRING */ yytestcase(yyruleno==48); + case 130: /* as ::= AS nm */ yytestcase(yyruleno==130); + case 131: /* as ::= ID|STRING */ yytestcase(yyruleno==131); + case 142: /* dbnm ::= DOT nm */ yytestcase(yyruleno==142); + case 151: /* indexed_opt ::= INDEXED BY nm */ yytestcase(yyruleno==151); + case 257: /* nmnum ::= plus_num */ yytestcase(yyruleno==257); + case 258: /* nmnum ::= nm */ yytestcase(yyruleno==258); + case 259: /* nmnum ::= ON */ yytestcase(yyruleno==259); + case 260: /* nmnum ::= DELETE */ yytestcase(yyruleno==260); + case 261: /* nmnum ::= DEFAULT */ yytestcase(yyruleno==261); + case 262: /* plus_num ::= PLUS INTEGER|FLOAT */ yytestcase(yyruleno==262); + case 263: /* plus_num ::= INTEGER|FLOAT */ yytestcase(yyruleno==263); + case 264: /* minus_num ::= MINUS INTEGER|FLOAT */ yytestcase(yyruleno==264); + case 280: /* trnm ::= nm */ yytestcase(yyruleno==280); {yygotominor.yy0 = yymsp[0].minor.yy0;} break; - case 45: /* type ::= typetoken */ + case 44: /* type ::= typetoken */ {sqlite3AddColumnType(pParse,&yymsp[0].minor.yy0);} break; - case 47: /* typetoken ::= typename LP signed RP */ + case 46: /* typetoken ::= typename LP signed RP */ { yygotominor.yy0.z = yymsp[-3].minor.yy0.z; yygotominor.yy0.n = (int)(&yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n] - yymsp[-3].minor.yy0.z); } break; - case 48: /* typetoken ::= typename LP signed COMMA signed RP */ + case 47: /* typetoken ::= typename LP signed COMMA signed RP */ { yygotominor.yy0.z = yymsp[-5].minor.yy0.z; yygotominor.yy0.n = (int)(&yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n] - yymsp[-5].minor.yy0.z); } break; - case 
50: /* typename ::= typename ids */ + case 49: /* typename ::= typename ID|STRING */ {yygotominor.yy0.z=yymsp[-1].minor.yy0.z; yygotominor.yy0.n=yymsp[0].minor.yy0.n+(int)(yymsp[0].minor.yy0.z-yymsp[-1].minor.yy0.z);} break; - case 57: /* ccons ::= DEFAULT term */ - case 59: /* ccons ::= DEFAULT PLUS term */ yytestcase(yyruleno==59); -{sqlite3AddDefaultValue(pParse,&yymsp[0].minor.yy346);} - break; - case 58: /* ccons ::= DEFAULT LP expr RP */ -{sqlite3AddDefaultValue(pParse,&yymsp[-1].minor.yy346);} - break; - case 60: /* ccons ::= DEFAULT MINUS term */ + case 54: /* ccons ::= CONSTRAINT nm */ + case 92: /* tcons ::= CONSTRAINT nm */ yytestcase(yyruleno==92); +{pParse->constraintName = yymsp[0].minor.yy0;} + break; + case 55: /* ccons ::= DEFAULT term */ + case 57: /* ccons ::= DEFAULT PLUS term */ yytestcase(yyruleno==57); +{sqlite3AddDefaultValue(pParse,&yymsp[0].minor.yy118);} + break; + case 56: /* ccons ::= DEFAULT LP expr RP */ +{sqlite3AddDefaultValue(pParse,&yymsp[-1].minor.yy118);} + break; + case 58: /* ccons ::= DEFAULT MINUS term */ { ExprSpan v; - v.pExpr = sqlite3PExpr(pParse, TK_UMINUS, yymsp[0].minor.yy346.pExpr, 0, 0); + v.pExpr = sqlite3PExpr(pParse, TK_UMINUS, yymsp[0].minor.yy118.pExpr, 0, 0); v.zStart = yymsp[-1].minor.yy0.z; - v.zEnd = yymsp[0].minor.yy346.zEnd; + v.zEnd = yymsp[0].minor.yy118.zEnd; sqlite3AddDefaultValue(pParse,&v); } break; - case 61: /* ccons ::= DEFAULT id */ + case 59: /* ccons ::= DEFAULT ID|INDEXED */ { ExprSpan v; spanExpr(&v, pParse, TK_STRING, &yymsp[0].minor.yy0); sqlite3AddDefaultValue(pParse,&v); } break; - case 63: /* ccons ::= NOT NULL onconf */ -{sqlite3AddNotNull(pParse, yymsp[0].minor.yy328);} - break; - case 64: /* ccons ::= PRIMARY KEY sortorder onconf autoinc */ -{sqlite3AddPrimaryKey(pParse,0,yymsp[-1].minor.yy328,yymsp[0].minor.yy328,yymsp[-2].minor.yy328);} - break; - case 65: /* ccons ::= UNIQUE onconf */ -{sqlite3CreateIndex(pParse,0,0,0,0,yymsp[0].minor.yy328,0,0,0,0);} - break; - case 66: /* ccons ::= CHECK LP expr RP */ -{sqlite3AddCheckConstraint(pParse,yymsp[-1].minor.yy346.pExpr);} - break; - case 67: /* ccons ::= REFERENCES nm idxlist_opt refargs */ -{sqlite3CreateForeignKey(pParse,0,&yymsp[-2].minor.yy0,yymsp[-1].minor.yy14,yymsp[0].minor.yy328);} - break; - case 68: /* ccons ::= defer_subclause */ -{sqlite3DeferForeignKey(pParse,yymsp[0].minor.yy328);} - break; - case 69: /* ccons ::= COLLATE ids */ + case 61: /* ccons ::= NOT NULL onconf */ +{sqlite3AddNotNull(pParse, yymsp[0].minor.yy4);} + break; + case 62: /* ccons ::= PRIMARY KEY sortorder onconf autoinc */ +{sqlite3AddPrimaryKey(pParse,0,yymsp[-1].minor.yy4,yymsp[0].minor.yy4,yymsp[-2].minor.yy4);} + break; + case 63: /* ccons ::= UNIQUE onconf */ +{sqlite3CreateIndex(pParse,0,0,0,0,yymsp[0].minor.yy4,0,0,0,0);} + break; + case 64: /* ccons ::= CHECK LP expr RP */ +{sqlite3AddCheckConstraint(pParse,yymsp[-1].minor.yy118.pExpr);} + break; + case 65: /* ccons ::= REFERENCES nm eidlist_opt refargs */ +{sqlite3CreateForeignKey(pParse,0,&yymsp[-2].minor.yy0,yymsp[-1].minor.yy322,yymsp[0].minor.yy4);} + break; + case 66: /* ccons ::= defer_subclause */ +{sqlite3DeferForeignKey(pParse,yymsp[0].minor.yy4);} + break; + case 67: /* ccons ::= COLLATE ID|STRING */ {sqlite3AddCollateType(pParse, &yymsp[0].minor.yy0);} break; - case 72: /* refargs ::= */ -{ yygotominor.yy328 = OE_None*0x0101; /* EV: R-19803-45884 */} - break; - case 73: /* refargs ::= refargs refarg */ -{ yygotominor.yy328 = (yymsp[-1].minor.yy328 & ~yymsp[0].minor.yy429.mask) | yymsp[0].minor.yy429.value; 
} - break; - case 74: /* refarg ::= MATCH nm */ - case 75: /* refarg ::= ON INSERT refact */ yytestcase(yyruleno==75); -{ yygotominor.yy429.value = 0; yygotominor.yy429.mask = 0x000000; } - break; - case 76: /* refarg ::= ON DELETE refact */ -{ yygotominor.yy429.value = yymsp[0].minor.yy328; yygotominor.yy429.mask = 0x0000ff; } - break; - case 77: /* refarg ::= ON UPDATE refact */ -{ yygotominor.yy429.value = yymsp[0].minor.yy328<<8; yygotominor.yy429.mask = 0x00ff00; } - break; - case 78: /* refact ::= SET NULL */ -{ yygotominor.yy328 = OE_SetNull; /* EV: R-33326-45252 */} - break; - case 79: /* refact ::= SET DEFAULT */ -{ yygotominor.yy328 = OE_SetDflt; /* EV: R-33326-45252 */} - break; - case 80: /* refact ::= CASCADE */ -{ yygotominor.yy328 = OE_Cascade; /* EV: R-33326-45252 */} - break; - case 81: /* refact ::= RESTRICT */ -{ yygotominor.yy328 = OE_Restrict; /* EV: R-33326-45252 */} - break; - case 82: /* refact ::= NO ACTION */ -{ yygotominor.yy328 = OE_None; /* EV: R-33326-45252 */} - break; - case 84: /* defer_subclause ::= DEFERRABLE init_deferred_pred_opt */ - case 99: /* defer_subclause_opt ::= defer_subclause */ yytestcase(yyruleno==99); - case 101: /* onconf ::= ON CONFLICT resolvetype */ yytestcase(yyruleno==101); - case 104: /* resolvetype ::= raisetype */ yytestcase(yyruleno==104); -{yygotominor.yy328 = yymsp[0].minor.yy328;} - break; - case 88: /* conslist_opt ::= */ + case 70: /* refargs ::= */ +{ yygotominor.yy4 = OE_None*0x0101; /* EV: R-19803-45884 */} + break; + case 71: /* refargs ::= refargs refarg */ +{ yygotominor.yy4 = (yymsp[-1].minor.yy4 & ~yymsp[0].minor.yy215.mask) | yymsp[0].minor.yy215.value; } + break; + case 72: /* refarg ::= MATCH nm */ + case 73: /* refarg ::= ON INSERT refact */ yytestcase(yyruleno==73); +{ yygotominor.yy215.value = 0; yygotominor.yy215.mask = 0x000000; } + break; + case 74: /* refarg ::= ON DELETE refact */ +{ yygotominor.yy215.value = yymsp[0].minor.yy4; yygotominor.yy215.mask = 0x0000ff; } + break; + case 75: /* refarg ::= ON UPDATE refact */ +{ yygotominor.yy215.value = yymsp[0].minor.yy4<<8; yygotominor.yy215.mask = 0x00ff00; } + break; + case 76: /* refact ::= SET NULL */ +{ yygotominor.yy4 = OE_SetNull; /* EV: R-33326-45252 */} + break; + case 77: /* refact ::= SET DEFAULT */ +{ yygotominor.yy4 = OE_SetDflt; /* EV: R-33326-45252 */} + break; + case 78: /* refact ::= CASCADE */ +{ yygotominor.yy4 = OE_Cascade; /* EV: R-33326-45252 */} + break; + case 79: /* refact ::= RESTRICT */ +{ yygotominor.yy4 = OE_Restrict; /* EV: R-33326-45252 */} + break; + case 80: /* refact ::= NO ACTION */ +{ yygotominor.yy4 = OE_None; /* EV: R-33326-45252 */} + break; + case 82: /* defer_subclause ::= DEFERRABLE init_deferred_pred_opt */ + case 98: /* defer_subclause_opt ::= defer_subclause */ yytestcase(yyruleno==98); + case 100: /* onconf ::= ON CONFLICT resolvetype */ yytestcase(yyruleno==100); + case 102: /* orconf ::= OR resolvetype */ yytestcase(yyruleno==102); + case 103: /* resolvetype ::= raisetype */ yytestcase(yyruleno==103); + case 178: /* insert_cmd ::= INSERT orconf */ yytestcase(yyruleno==178); +{yygotominor.yy4 = yymsp[0].minor.yy4;} + break; + case 86: /* conslist_opt ::= */ {yygotominor.yy0.n = 0; yygotominor.yy0.z = 0;} break; - case 89: /* conslist_opt ::= COMMA conslist */ + case 87: /* conslist_opt ::= COMMA conslist */ {yygotominor.yy0 = yymsp[-1].minor.yy0;} break; - case 94: /* tcons ::= PRIMARY KEY LP idxlist autoinc RP onconf */ 
-{sqlite3AddPrimaryKey(pParse,yymsp[-3].minor.yy14,yymsp[0].minor.yy328,yymsp[-2].minor.yy328,0);} - break; - case 95: /* tcons ::= UNIQUE LP idxlist RP onconf */ -{sqlite3CreateIndex(pParse,0,0,0,yymsp[-2].minor.yy14,yymsp[0].minor.yy328,0,0,0,0);} - break; - case 96: /* tcons ::= CHECK LP expr RP onconf */ -{sqlite3AddCheckConstraint(pParse,yymsp[-2].minor.yy346.pExpr);} - break; - case 97: /* tcons ::= FOREIGN KEY LP idxlist RP REFERENCES nm idxlist_opt refargs defer_subclause_opt */ -{ - sqlite3CreateForeignKey(pParse, yymsp[-6].minor.yy14, &yymsp[-3].minor.yy0, yymsp[-2].minor.yy14, yymsp[-1].minor.yy328); - sqlite3DeferForeignKey(pParse, yymsp[0].minor.yy328); -} - break; - case 100: /* onconf ::= */ -{yygotominor.yy328 = OE_Default;} - break; - case 102: /* orconf ::= */ -{yygotominor.yy186 = OE_Default;} - break; - case 103: /* orconf ::= OR resolvetype */ -{yygotominor.yy186 = (u8)yymsp[0].minor.yy328;} - break; - case 105: /* resolvetype ::= IGNORE */ -{yygotominor.yy328 = OE_Ignore;} - break; - case 106: /* resolvetype ::= REPLACE */ -{yygotominor.yy328 = OE_Replace;} - break; - case 107: /* cmd ::= DROP TABLE ifexists fullname */ -{ - sqlite3DropTable(pParse, yymsp[0].minor.yy65, 0, yymsp[-1].minor.yy328); -} - break; - case 110: /* cmd ::= createkw temp VIEW ifnotexists nm dbnm AS select */ -{ - sqlite3CreateView(pParse, &yymsp[-7].minor.yy0, &yymsp[-3].minor.yy0, &yymsp[-2].minor.yy0, yymsp[0].minor.yy3, yymsp[-6].minor.yy328, yymsp[-4].minor.yy328); -} - break; - case 111: /* cmd ::= DROP VIEW ifexists fullname */ -{ - sqlite3DropTable(pParse, yymsp[0].minor.yy65, 1, yymsp[-1].minor.yy328); -} - break; - case 112: /* cmd ::= select */ -{ - SelectDest dest = {SRT_Output, 0, 0, 0, 0}; - sqlite3Select(pParse, yymsp[0].minor.yy3, &dest); - sqlite3SelectDelete(pParse->db, yymsp[0].minor.yy3); -} - break; - case 113: /* select ::= oneselect */ -{yygotominor.yy3 = yymsp[0].minor.yy3;} - break; - case 114: /* select ::= select multiselect_op oneselect */ -{ - if( yymsp[0].minor.yy3 ){ - yymsp[0].minor.yy3->op = (u8)yymsp[-1].minor.yy328; - yymsp[0].minor.yy3->pPrior = yymsp[-2].minor.yy3; - }else{ - sqlite3SelectDelete(pParse->db, yymsp[-2].minor.yy3); - } - yygotominor.yy3 = yymsp[0].minor.yy3; + case 90: /* tconscomma ::= COMMA */ +{pParse->constraintName.n = 0;} + break; + case 93: /* tcons ::= PRIMARY KEY LP sortlist autoinc RP onconf */ +{sqlite3AddPrimaryKey(pParse,yymsp[-3].minor.yy322,yymsp[0].minor.yy4,yymsp[-2].minor.yy4,0);} + break; + case 94: /* tcons ::= UNIQUE LP sortlist RP onconf */ +{sqlite3CreateIndex(pParse,0,0,0,yymsp[-2].minor.yy322,yymsp[0].minor.yy4,0,0,0,0);} + break; + case 95: /* tcons ::= CHECK LP expr RP onconf */ +{sqlite3AddCheckConstraint(pParse,yymsp[-2].minor.yy118.pExpr);} + break; + case 96: /* tcons ::= FOREIGN KEY LP eidlist RP REFERENCES nm eidlist_opt refargs defer_subclause_opt */ +{ + sqlite3CreateForeignKey(pParse, yymsp[-6].minor.yy322, &yymsp[-3].minor.yy0, yymsp[-2].minor.yy322, yymsp[-1].minor.yy4); + sqlite3DeferForeignKey(pParse, yymsp[0].minor.yy4); +} + break; + case 99: /* onconf ::= */ + case 101: /* orconf ::= */ yytestcase(yyruleno==101); +{yygotominor.yy4 = OE_Default;} + break; + case 104: /* resolvetype ::= IGNORE */ +{yygotominor.yy4 = OE_Ignore;} + break; + case 105: /* resolvetype ::= REPLACE */ + case 179: /* insert_cmd ::= REPLACE */ yytestcase(yyruleno==179); +{yygotominor.yy4 = OE_Replace;} + break; + case 106: /* cmd ::= DROP TABLE ifexists fullname */ +{ + sqlite3DropTable(pParse, yymsp[0].minor.yy259, 0, 
yymsp[-1].minor.yy4); +} + break; + case 109: /* cmd ::= createkw temp VIEW ifnotexists nm dbnm eidlist_opt AS select */ +{ + sqlite3CreateView(pParse, &yymsp[-8].minor.yy0, &yymsp[-4].minor.yy0, &yymsp[-3].minor.yy0, yymsp[-2].minor.yy322, yymsp[0].minor.yy387, yymsp[-7].minor.yy4, yymsp[-5].minor.yy4); +} + break; + case 110: /* cmd ::= DROP VIEW ifexists fullname */ +{ + sqlite3DropTable(pParse, yymsp[0].minor.yy259, 1, yymsp[-1].minor.yy4); +} + break; + case 111: /* cmd ::= select */ +{ + SelectDest dest = {SRT_Output, 0, 0, 0, 0, 0}; + sqlite3Select(pParse, yymsp[0].minor.yy387, &dest); + sqlite3SelectDelete(pParse->db, yymsp[0].minor.yy387); +} + break; + case 112: /* select ::= with selectnowith */ +{ + Select *p = yymsp[0].minor.yy387; + if( p ){ + p->pWith = yymsp[-1].minor.yy451; + parserDoubleLinkSelect(pParse, p); + }else{ + sqlite3WithDelete(pParse->db, yymsp[-1].minor.yy451); + } + yygotominor.yy387 = p; +} + break; + case 113: /* selectnowith ::= oneselect */ + case 119: /* oneselect ::= values */ yytestcase(yyruleno==119); +{yygotominor.yy387 = yymsp[0].minor.yy387;} + break; + case 114: /* selectnowith ::= selectnowith multiselect_op oneselect */ +{ + Select *pRhs = yymsp[0].minor.yy387; + Select *pLhs = yymsp[-2].minor.yy387; + if( pRhs && pRhs->pPrior ){ + SrcList *pFrom; + Token x; + x.n = 0; + parserDoubleLinkSelect(pParse, pRhs); + pFrom = sqlite3SrcListAppendFromTerm(pParse,0,0,0,&x,pRhs,0,0); + pRhs = sqlite3SelectNew(pParse,0,pFrom,0,0,0,0,0,0,0); + } + if( pRhs ){ + pRhs->op = (u8)yymsp[-1].minor.yy4; + pRhs->pPrior = pLhs; + if( ALWAYS(pLhs) ) pLhs->selFlags &= ~SF_MultiValue; + pRhs->selFlags &= ~SF_MultiValue; + if( yymsp[-1].minor.yy4!=TK_ALL ) pParse->hasCompound = 1; + }else{ + sqlite3SelectDelete(pParse->db, pLhs); + } + yygotominor.yy387 = pRhs; } break; case 116: /* multiselect_op ::= UNION ALL */ -{yygotominor.yy328 = TK_ALL;} +{yygotominor.yy4 = TK_ALL;} break; case 118: /* oneselect ::= SELECT distinct selcollist from where_opt groupby_opt having_opt orderby_opt limit_opt */ { - yygotominor.yy3 = sqlite3SelectNew(pParse,yymsp[-6].minor.yy14,yymsp[-5].minor.yy65,yymsp[-4].minor.yy132,yymsp[-3].minor.yy14,yymsp[-2].minor.yy132,yymsp[-1].minor.yy14,yymsp[-7].minor.yy328,yymsp[0].minor.yy476.pLimit,yymsp[0].minor.yy476.pOffset); -} - break; - case 122: /* sclp ::= selcollist COMMA */ - case 248: /* idxlist_opt ::= LP idxlist RP */ yytestcase(yyruleno==248); -{yygotominor.yy14 = yymsp[-1].minor.yy14;} - break; - case 123: /* sclp ::= */ - case 151: /* orderby_opt ::= */ yytestcase(yyruleno==151); - case 159: /* groupby_opt ::= */ yytestcase(yyruleno==159); - case 241: /* exprlist ::= */ yytestcase(yyruleno==241); - case 247: /* idxlist_opt ::= */ yytestcase(yyruleno==247); -{yygotominor.yy14 = 0;} - break; - case 124: /* selcollist ::= sclp expr as */ -{ - yygotominor.yy14 = sqlite3ExprListAppend(pParse, yymsp[-2].minor.yy14, yymsp[-1].minor.yy346.pExpr); - if( yymsp[0].minor.yy0.n>0 ) sqlite3ExprListSetName(pParse, yygotominor.yy14, &yymsp[0].minor.yy0, 1); - sqlite3ExprListSetSpan(pParse,yygotominor.yy14,&yymsp[-1].minor.yy346); -} - break; - case 125: /* selcollist ::= sclp STAR */ -{ - Expr *p = sqlite3Expr(pParse->db, TK_ALL, 0); - yygotominor.yy14 = sqlite3ExprListAppend(pParse, yymsp[-1].minor.yy14, p); -} - break; - case 126: /* selcollist ::= sclp nm DOT STAR */ -{ - Expr *pRight = sqlite3PExpr(pParse, TK_ALL, 0, 0, &yymsp[0].minor.yy0); + yygotominor.yy387 = 
sqlite3SelectNew(pParse,yymsp[-6].minor.yy322,yymsp[-5].minor.yy259,yymsp[-4].minor.yy314,yymsp[-3].minor.yy322,yymsp[-2].minor.yy314,yymsp[-1].minor.yy322,yymsp[-7].minor.yy4,yymsp[0].minor.yy292.pLimit,yymsp[0].minor.yy292.pOffset); +#if SELECTTRACE_ENABLED + /* Populate the Select.zSelName[] string that is used to help with + ** query planner debugging, to differentiate between multiple Select + ** objects in a complex query. + ** + ** If the SELECT keyword is immediately followed by a C-style comment + ** then extract the first few alphanumeric characters from within that + ** comment to be the zSelName value. Otherwise, the label is #N where + ** is an integer that is incremented with each SELECT statement seen. + */ + if( yygotominor.yy387!=0 ){ + const char *z = yymsp[-8].minor.yy0.z+6; + int i; + sqlite3_snprintf(sizeof(yygotominor.yy387->zSelName), yygotominor.yy387->zSelName, "#%d", + ++pParse->nSelect); + while( z[0]==' ' ) z++; + if( z[0]=='/' && z[1]=='*' ){ + z += 2; + while( z[0]==' ' ) z++; + for(i=0; sqlite3Isalnum(z[i]); i++){} + sqlite3_snprintf(sizeof(yygotominor.yy387->zSelName), yygotominor.yy387->zSelName, "%.*s", i, z); + } + } +#endif /* SELECTRACE_ENABLED */ +} + break; + case 120: /* values ::= VALUES LP nexprlist RP */ +{ + yygotominor.yy387 = sqlite3SelectNew(pParse,yymsp[-1].minor.yy322,0,0,0,0,0,SF_Values,0,0); +} + break; + case 121: /* values ::= values COMMA LP exprlist RP */ +{ + Select *pRight, *pLeft = yymsp[-4].minor.yy387; + pRight = sqlite3SelectNew(pParse,yymsp[-1].minor.yy322,0,0,0,0,0,SF_Values|SF_MultiValue,0,0); + if( ALWAYS(pLeft) ) pLeft->selFlags &= ~SF_MultiValue; + if( pRight ){ + pRight->op = TK_ALL; + pLeft = yymsp[-4].minor.yy387; + pRight->pPrior = pLeft; + yygotominor.yy387 = pRight; + }else{ + yygotominor.yy387 = pLeft; + } +} + break; + case 122: /* distinct ::= DISTINCT */ +{yygotominor.yy4 = SF_Distinct;} + break; + case 123: /* distinct ::= ALL */ +{yygotominor.yy4 = SF_All;} + break; + case 125: /* sclp ::= selcollist COMMA */ + case 244: /* eidlist_opt ::= LP eidlist RP */ yytestcase(yyruleno==244); +{yygotominor.yy322 = yymsp[-1].minor.yy322;} + break; + case 126: /* sclp ::= */ + case 155: /* orderby_opt ::= */ yytestcase(yyruleno==155); + case 162: /* groupby_opt ::= */ yytestcase(yyruleno==162); + case 237: /* exprlist ::= */ yytestcase(yyruleno==237); + case 243: /* eidlist_opt ::= */ yytestcase(yyruleno==243); +{yygotominor.yy322 = 0;} + break; + case 127: /* selcollist ::= sclp expr as */ +{ + yygotominor.yy322 = sqlite3ExprListAppend(pParse, yymsp[-2].minor.yy322, yymsp[-1].minor.yy118.pExpr); + if( yymsp[0].minor.yy0.n>0 ) sqlite3ExprListSetName(pParse, yygotominor.yy322, &yymsp[0].minor.yy0, 1); + sqlite3ExprListSetSpan(pParse,yygotominor.yy322,&yymsp[-1].minor.yy118); +} + break; + case 128: /* selcollist ::= sclp STAR */ +{ + Expr *p = sqlite3Expr(pParse->db, TK_ASTERISK, 0); + yygotominor.yy322 = sqlite3ExprListAppend(pParse, yymsp[-1].minor.yy322, p); +} + break; + case 129: /* selcollist ::= sclp nm DOT STAR */ +{ + Expr *pRight = sqlite3PExpr(pParse, TK_ASTERISK, 0, 0, &yymsp[0].minor.yy0); Expr *pLeft = sqlite3PExpr(pParse, TK_ID, 0, 0, &yymsp[-2].minor.yy0); Expr *pDot = sqlite3PExpr(pParse, TK_DOT, pLeft, pRight, 0); - yygotominor.yy14 = sqlite3ExprListAppend(pParse,yymsp[-3].minor.yy14, pDot); + yygotominor.yy322 = sqlite3ExprListAppend(pParse,yymsp[-3].minor.yy322, pDot); } break; - case 129: /* as ::= */ + case 132: /* as ::= */ {yygotominor.yy0.n = 0;} break; - case 130: /* from ::= */ -{yygotominor.yy65 = 
sqlite3DbMallocZero(pParse->db, sizeof(*yygotominor.yy65));} - break; - case 131: /* from ::= FROM seltablist */ -{ - yygotominor.yy65 = yymsp[0].minor.yy65; - sqlite3SrcListShiftJoinType(yygotominor.yy65); -} - break; - case 132: /* stl_prefix ::= seltablist joinop */ -{ - yygotominor.yy65 = yymsp[-1].minor.yy65; - if( ALWAYS(yygotominor.yy65 && yygotominor.yy65->nSrc>0) ) yygotominor.yy65->a[yygotominor.yy65->nSrc-1].jointype = (u8)yymsp[0].minor.yy328; -} - break; - case 133: /* stl_prefix ::= */ -{yygotominor.yy65 = 0;} - break; - case 134: /* seltablist ::= stl_prefix nm dbnm as indexed_opt on_opt using_opt */ -{ - yygotominor.yy65 = sqlite3SrcListAppendFromTerm(pParse,yymsp[-6].minor.yy65,&yymsp[-5].minor.yy0,&yymsp[-4].minor.yy0,&yymsp[-3].minor.yy0,0,yymsp[-1].minor.yy132,yymsp[0].minor.yy408); - sqlite3SrcListIndexedBy(pParse, yygotominor.yy65, &yymsp[-2].minor.yy0); -} - break; - case 135: /* seltablist ::= stl_prefix LP select RP as on_opt using_opt */ -{ - yygotominor.yy65 = sqlite3SrcListAppendFromTerm(pParse,yymsp[-6].minor.yy65,0,0,&yymsp[-2].minor.yy0,yymsp[-4].minor.yy3,yymsp[-1].minor.yy132,yymsp[0].minor.yy408); - } - break; - case 136: /* seltablist ::= stl_prefix LP seltablist RP as on_opt using_opt */ -{ - if( yymsp[-6].minor.yy65==0 && yymsp[-2].minor.yy0.n==0 && yymsp[-1].minor.yy132==0 && yymsp[0].minor.yy408==0 ){ - yygotominor.yy65 = yymsp[-4].minor.yy65; + case 133: /* from ::= */ +{yygotominor.yy259 = sqlite3DbMallocZero(pParse->db, sizeof(*yygotominor.yy259));} + break; + case 134: /* from ::= FROM seltablist */ +{ + yygotominor.yy259 = yymsp[0].minor.yy259; + sqlite3SrcListShiftJoinType(yygotominor.yy259); +} + break; + case 135: /* stl_prefix ::= seltablist joinop */ +{ + yygotominor.yy259 = yymsp[-1].minor.yy259; + if( ALWAYS(yygotominor.yy259 && yygotominor.yy259->nSrc>0) ) yygotominor.yy259->a[yygotominor.yy259->nSrc-1].fg.jointype = (u8)yymsp[0].minor.yy4; +} + break; + case 136: /* stl_prefix ::= */ +{yygotominor.yy259 = 0;} + break; + case 137: /* seltablist ::= stl_prefix nm dbnm as indexed_opt on_opt using_opt */ +{ + yygotominor.yy259 = sqlite3SrcListAppendFromTerm(pParse,yymsp[-6].minor.yy259,&yymsp[-5].minor.yy0,&yymsp[-4].minor.yy0,&yymsp[-3].minor.yy0,0,yymsp[-1].minor.yy314,yymsp[0].minor.yy384); + sqlite3SrcListIndexedBy(pParse, yygotominor.yy259, &yymsp[-2].minor.yy0); +} + break; + case 138: /* seltablist ::= stl_prefix nm dbnm LP exprlist RP as on_opt using_opt */ +{ + yygotominor.yy259 = sqlite3SrcListAppendFromTerm(pParse,yymsp[-8].minor.yy259,&yymsp[-7].minor.yy0,&yymsp[-6].minor.yy0,&yymsp[-2].minor.yy0,0,yymsp[-1].minor.yy314,yymsp[0].minor.yy384); + sqlite3SrcListFuncArgs(pParse, yygotominor.yy259, yymsp[-4].minor.yy322); +} + break; + case 139: /* seltablist ::= stl_prefix LP select RP as on_opt using_opt */ +{ + yygotominor.yy259 = sqlite3SrcListAppendFromTerm(pParse,yymsp[-6].minor.yy259,0,0,&yymsp[-2].minor.yy0,yymsp[-4].minor.yy387,yymsp[-1].minor.yy314,yymsp[0].minor.yy384); + } + break; + case 140: /* seltablist ::= stl_prefix LP seltablist RP as on_opt using_opt */ +{ + if( yymsp[-6].minor.yy259==0 && yymsp[-2].minor.yy0.n==0 && yymsp[-1].minor.yy314==0 && yymsp[0].minor.yy384==0 ){ + yygotominor.yy259 = yymsp[-4].minor.yy259; + }else if( yymsp[-4].minor.yy259->nSrc==1 ){ + yygotominor.yy259 = sqlite3SrcListAppendFromTerm(pParse,yymsp[-6].minor.yy259,0,0,&yymsp[-2].minor.yy0,0,yymsp[-1].minor.yy314,yymsp[0].minor.yy384); + if( yygotominor.yy259 ){ + struct SrcList_item *pNew = &yygotominor.yy259->a[yygotominor.yy259->nSrc-1]; 
+ struct SrcList_item *pOld = yymsp[-4].minor.yy259->a; + pNew->zName = pOld->zName; + pNew->zDatabase = pOld->zDatabase; + pNew->pSelect = pOld->pSelect; + pOld->zName = pOld->zDatabase = 0; + pOld->pSelect = 0; + } + sqlite3SrcListDelete(pParse->db, yymsp[-4].minor.yy259); }else{ Select *pSubquery; - sqlite3SrcListShiftJoinType(yymsp[-4].minor.yy65); - pSubquery = sqlite3SelectNew(pParse,0,yymsp[-4].minor.yy65,0,0,0,0,0,0,0); - yygotominor.yy65 = sqlite3SrcListAppendFromTerm(pParse,yymsp[-6].minor.yy65,0,0,&yymsp[-2].minor.yy0,pSubquery,yymsp[-1].minor.yy132,yymsp[0].minor.yy408); + sqlite3SrcListShiftJoinType(yymsp[-4].minor.yy259); + pSubquery = sqlite3SelectNew(pParse,0,yymsp[-4].minor.yy259,0,0,0,0,SF_NestedFrom,0,0); + yygotominor.yy259 = sqlite3SrcListAppendFromTerm(pParse,yymsp[-6].minor.yy259,0,0,&yymsp[-2].minor.yy0,pSubquery,yymsp[-1].minor.yy314,yymsp[0].minor.yy384); } } break; - case 137: /* dbnm ::= */ - case 146: /* indexed_opt ::= */ yytestcase(yyruleno==146); + case 141: /* dbnm ::= */ + case 150: /* indexed_opt ::= */ yytestcase(yyruleno==150); {yygotominor.yy0.z=0; yygotominor.yy0.n=0;} break; - case 139: /* fullname ::= nm dbnm */ -{yygotominor.yy65 = sqlite3SrcListAppend(pParse->db,0,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy0);} - break; - case 140: /* joinop ::= COMMA|JOIN */ -{ yygotominor.yy328 = JT_INNER; } - break; - case 141: /* joinop ::= JOIN_KW JOIN */ -{ yygotominor.yy328 = sqlite3JoinType(pParse,&yymsp[-1].minor.yy0,0,0); } - break; - case 142: /* joinop ::= JOIN_KW nm JOIN */ -{ yygotominor.yy328 = sqlite3JoinType(pParse,&yymsp[-2].minor.yy0,&yymsp[-1].minor.yy0,0); } - break; - case 143: /* joinop ::= JOIN_KW nm nm JOIN */ -{ yygotominor.yy328 = sqlite3JoinType(pParse,&yymsp[-3].minor.yy0,&yymsp[-2].minor.yy0,&yymsp[-1].minor.yy0); } - break; - case 144: /* on_opt ::= ON expr */ - case 155: /* sortitem ::= expr */ yytestcase(yyruleno==155); - case 162: /* having_opt ::= HAVING expr */ yytestcase(yyruleno==162); - case 169: /* where_opt ::= WHERE expr */ yytestcase(yyruleno==169); - case 236: /* case_else ::= ELSE expr */ yytestcase(yyruleno==236); - case 238: /* case_operand ::= expr */ yytestcase(yyruleno==238); -{yygotominor.yy132 = yymsp[0].minor.yy346.pExpr;} - break; - case 145: /* on_opt ::= */ - case 161: /* having_opt ::= */ yytestcase(yyruleno==161); - case 168: /* where_opt ::= */ yytestcase(yyruleno==168); - case 237: /* case_else ::= */ yytestcase(yyruleno==237); - case 239: /* case_operand ::= */ yytestcase(yyruleno==239); -{yygotominor.yy132 = 0;} - break; - case 148: /* indexed_opt ::= NOT INDEXED */ + case 143: /* fullname ::= nm dbnm */ +{yygotominor.yy259 = sqlite3SrcListAppend(pParse->db,0,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy0);} + break; + case 144: /* joinop ::= COMMA|JOIN */ +{ yygotominor.yy4 = JT_INNER; } + break; + case 145: /* joinop ::= JOIN_KW JOIN */ +{ yygotominor.yy4 = sqlite3JoinType(pParse,&yymsp[-1].minor.yy0,0,0); } + break; + case 146: /* joinop ::= JOIN_KW nm JOIN */ +{ yygotominor.yy4 = sqlite3JoinType(pParse,&yymsp[-2].minor.yy0,&yymsp[-1].minor.yy0,0); } + break; + case 147: /* joinop ::= JOIN_KW nm nm JOIN */ +{ yygotominor.yy4 = sqlite3JoinType(pParse,&yymsp[-3].minor.yy0,&yymsp[-2].minor.yy0,&yymsp[-1].minor.yy0); } + break; + case 148: /* on_opt ::= ON expr */ + case 165: /* having_opt ::= HAVING expr */ yytestcase(yyruleno==165); + case 172: /* where_opt ::= WHERE expr */ yytestcase(yyruleno==172); + case 232: /* case_else ::= ELSE expr */ yytestcase(yyruleno==232); + case 234: /* case_operand ::= expr */ 
yytestcase(yyruleno==234); +{yygotominor.yy314 = yymsp[0].minor.yy118.pExpr;} + break; + case 149: /* on_opt ::= */ + case 164: /* having_opt ::= */ yytestcase(yyruleno==164); + case 171: /* where_opt ::= */ yytestcase(yyruleno==171); + case 233: /* case_else ::= */ yytestcase(yyruleno==233); + case 235: /* case_operand ::= */ yytestcase(yyruleno==235); +{yygotominor.yy314 = 0;} + break; + case 152: /* indexed_opt ::= NOT INDEXED */ {yygotominor.yy0.z=0; yygotominor.yy0.n=1;} break; - case 149: /* using_opt ::= USING LP inscollist RP */ - case 181: /* inscollist_opt ::= LP inscollist RP */ yytestcase(yyruleno==181); -{yygotominor.yy408 = yymsp[-1].minor.yy408;} - break; - case 150: /* using_opt ::= */ - case 180: /* inscollist_opt ::= */ yytestcase(yyruleno==180); -{yygotominor.yy408 = 0;} - break; - case 152: /* orderby_opt ::= ORDER BY sortlist */ - case 160: /* groupby_opt ::= GROUP BY nexprlist */ yytestcase(yyruleno==160); - case 240: /* exprlist ::= nexprlist */ yytestcase(yyruleno==240); -{yygotominor.yy14 = yymsp[0].minor.yy14;} - break; - case 153: /* sortlist ::= sortlist COMMA sortitem sortorder */ -{ - yygotominor.yy14 = sqlite3ExprListAppend(pParse,yymsp[-3].minor.yy14,yymsp[-1].minor.yy132); - if( yygotominor.yy14 ) yygotominor.yy14->a[yygotominor.yy14->nExpr-1].sortOrder = (u8)yymsp[0].minor.yy328; -} - break; - case 154: /* sortlist ::= sortitem sortorder */ -{ - yygotominor.yy14 = sqlite3ExprListAppend(pParse,0,yymsp[-1].minor.yy132); - if( yygotominor.yy14 && ALWAYS(yygotominor.yy14->a) ) yygotominor.yy14->a[0].sortOrder = (u8)yymsp[0].minor.yy328; -} - break; - case 156: /* sortorder ::= ASC */ - case 158: /* sortorder ::= */ yytestcase(yyruleno==158); -{yygotominor.yy328 = SQLITE_SO_ASC;} - break; - case 157: /* sortorder ::= DESC */ -{yygotominor.yy328 = SQLITE_SO_DESC;} - break; - case 163: /* limit_opt ::= */ -{yygotominor.yy476.pLimit = 0; yygotominor.yy476.pOffset = 0;} - break; - case 164: /* limit_opt ::= LIMIT expr */ -{yygotominor.yy476.pLimit = yymsp[0].minor.yy346.pExpr; yygotominor.yy476.pOffset = 0;} - break; - case 165: /* limit_opt ::= LIMIT expr OFFSET expr */ -{yygotominor.yy476.pLimit = yymsp[-2].minor.yy346.pExpr; yygotominor.yy476.pOffset = yymsp[0].minor.yy346.pExpr;} - break; - case 166: /* limit_opt ::= LIMIT expr COMMA expr */ -{yygotominor.yy476.pOffset = yymsp[-2].minor.yy346.pExpr; yygotominor.yy476.pLimit = yymsp[0].minor.yy346.pExpr;} - break; - case 167: /* cmd ::= DELETE FROM fullname indexed_opt where_opt */ -{ - sqlite3SrcListIndexedBy(pParse, yymsp[-2].minor.yy65, &yymsp[-1].minor.yy0); - sqlite3DeleteFrom(pParse,yymsp[-2].minor.yy65,yymsp[0].minor.yy132); -} - break; - case 170: /* cmd ::= UPDATE orconf fullname indexed_opt SET setlist where_opt */ -{ - sqlite3SrcListIndexedBy(pParse, yymsp[-4].minor.yy65, &yymsp[-3].minor.yy0); - sqlite3ExprListCheckLength(pParse,yymsp[-1].minor.yy14,"set list"); - sqlite3Update(pParse,yymsp[-4].minor.yy65,yymsp[-1].minor.yy14,yymsp[0].minor.yy132,yymsp[-5].minor.yy186); -} - break; - case 171: /* setlist ::= setlist COMMA nm EQ expr */ -{ - yygotominor.yy14 = sqlite3ExprListAppend(pParse, yymsp[-4].minor.yy14, yymsp[0].minor.yy346.pExpr); - sqlite3ExprListSetName(pParse, yygotominor.yy14, &yymsp[-2].minor.yy0, 1); -} - break; - case 172: /* setlist ::= nm EQ expr */ -{ - yygotominor.yy14 = sqlite3ExprListAppend(pParse, 0, yymsp[0].minor.yy346.pExpr); - sqlite3ExprListSetName(pParse, yygotominor.yy14, &yymsp[-2].minor.yy0, 1); -} - break; - case 173: /* cmd ::= insert_cmd INTO fullname inscollist_opt 
VALUES LP itemlist RP */ -{sqlite3Insert(pParse, yymsp[-5].minor.yy65, yymsp[-1].minor.yy14, 0, yymsp[-4].minor.yy408, yymsp[-7].minor.yy186);} - break; - case 174: /* cmd ::= insert_cmd INTO fullname inscollist_opt select */ -{sqlite3Insert(pParse, yymsp[-2].minor.yy65, 0, yymsp[0].minor.yy3, yymsp[-1].minor.yy408, yymsp[-4].minor.yy186);} - break; - case 175: /* cmd ::= insert_cmd INTO fullname inscollist_opt DEFAULT VALUES */ -{sqlite3Insert(pParse, yymsp[-3].minor.yy65, 0, 0, yymsp[-2].minor.yy408, yymsp[-5].minor.yy186);} - break; - case 176: /* insert_cmd ::= INSERT orconf */ -{yygotominor.yy186 = yymsp[0].minor.yy186;} - break; - case 177: /* insert_cmd ::= REPLACE */ -{yygotominor.yy186 = OE_Replace;} - break; - case 178: /* itemlist ::= itemlist COMMA expr */ - case 242: /* nexprlist ::= nexprlist COMMA expr */ yytestcase(yyruleno==242); -{yygotominor.yy14 = sqlite3ExprListAppend(pParse,yymsp[-2].minor.yy14,yymsp[0].minor.yy346.pExpr);} - break; - case 179: /* itemlist ::= expr */ - case 243: /* nexprlist ::= expr */ yytestcase(yyruleno==243); -{yygotominor.yy14 = sqlite3ExprListAppend(pParse,0,yymsp[0].minor.yy346.pExpr);} - break; - case 182: /* inscollist ::= inscollist COMMA nm */ -{yygotominor.yy408 = sqlite3IdListAppend(pParse->db,yymsp[-2].minor.yy408,&yymsp[0].minor.yy0);} - break; - case 183: /* inscollist ::= nm */ -{yygotominor.yy408 = sqlite3IdListAppend(pParse->db,0,&yymsp[0].minor.yy0);} + case 153: /* using_opt ::= USING LP idlist RP */ + case 181: /* idlist_opt ::= LP idlist RP */ yytestcase(yyruleno==181); +{yygotominor.yy384 = yymsp[-1].minor.yy384;} + break; + case 154: /* using_opt ::= */ + case 180: /* idlist_opt ::= */ yytestcase(yyruleno==180); +{yygotominor.yy384 = 0;} + break; + case 156: /* orderby_opt ::= ORDER BY sortlist */ + case 163: /* groupby_opt ::= GROUP BY nexprlist */ yytestcase(yyruleno==163); + case 236: /* exprlist ::= nexprlist */ yytestcase(yyruleno==236); +{yygotominor.yy322 = yymsp[0].minor.yy322;} + break; + case 157: /* sortlist ::= sortlist COMMA expr sortorder */ +{ + yygotominor.yy322 = sqlite3ExprListAppend(pParse,yymsp[-3].minor.yy322,yymsp[-1].minor.yy118.pExpr); + sqlite3ExprListSetSortOrder(yygotominor.yy322,yymsp[0].minor.yy4); +} + break; + case 158: /* sortlist ::= expr sortorder */ +{ + yygotominor.yy322 = sqlite3ExprListAppend(pParse,0,yymsp[-1].minor.yy118.pExpr); + sqlite3ExprListSetSortOrder(yygotominor.yy322,yymsp[0].minor.yy4); +} + break; + case 159: /* sortorder ::= ASC */ +{yygotominor.yy4 = SQLITE_SO_ASC;} + break; + case 160: /* sortorder ::= DESC */ +{yygotominor.yy4 = SQLITE_SO_DESC;} + break; + case 161: /* sortorder ::= */ +{yygotominor.yy4 = SQLITE_SO_UNDEFINED;} + break; + case 166: /* limit_opt ::= */ +{yygotominor.yy292.pLimit = 0; yygotominor.yy292.pOffset = 0;} + break; + case 167: /* limit_opt ::= LIMIT expr */ +{yygotominor.yy292.pLimit = yymsp[0].minor.yy118.pExpr; yygotominor.yy292.pOffset = 0;} + break; + case 168: /* limit_opt ::= LIMIT expr OFFSET expr */ +{yygotominor.yy292.pLimit = yymsp[-2].minor.yy118.pExpr; yygotominor.yy292.pOffset = yymsp[0].minor.yy118.pExpr;} + break; + case 169: /* limit_opt ::= LIMIT expr COMMA expr */ +{yygotominor.yy292.pOffset = yymsp[-2].minor.yy118.pExpr; yygotominor.yy292.pLimit = yymsp[0].minor.yy118.pExpr;} + break; + case 170: /* cmd ::= with DELETE FROM fullname indexed_opt where_opt */ +{ + sqlite3WithPush(pParse, yymsp[-5].minor.yy451, 1); + sqlite3SrcListIndexedBy(pParse, yymsp[-2].minor.yy259, &yymsp[-1].minor.yy0); + 
sqlite3DeleteFrom(pParse,yymsp[-2].minor.yy259,yymsp[0].minor.yy314); +} + break; + case 173: /* cmd ::= with UPDATE orconf fullname indexed_opt SET setlist where_opt */ +{ + sqlite3WithPush(pParse, yymsp[-7].minor.yy451, 1); + sqlite3SrcListIndexedBy(pParse, yymsp[-4].minor.yy259, &yymsp[-3].minor.yy0); + sqlite3ExprListCheckLength(pParse,yymsp[-1].minor.yy322,"set list"); + sqlite3Update(pParse,yymsp[-4].minor.yy259,yymsp[-1].minor.yy322,yymsp[0].minor.yy314,yymsp[-5].minor.yy4); +} + break; + case 174: /* setlist ::= setlist COMMA nm EQ expr */ +{ + yygotominor.yy322 = sqlite3ExprListAppend(pParse, yymsp[-4].minor.yy322, yymsp[0].minor.yy118.pExpr); + sqlite3ExprListSetName(pParse, yygotominor.yy322, &yymsp[-2].minor.yy0, 1); +} + break; + case 175: /* setlist ::= nm EQ expr */ +{ + yygotominor.yy322 = sqlite3ExprListAppend(pParse, 0, yymsp[0].minor.yy118.pExpr); + sqlite3ExprListSetName(pParse, yygotominor.yy322, &yymsp[-2].minor.yy0, 1); +} + break; + case 176: /* cmd ::= with insert_cmd INTO fullname idlist_opt select */ +{ + sqlite3WithPush(pParse, yymsp[-5].minor.yy451, 1); + sqlite3Insert(pParse, yymsp[-2].minor.yy259, yymsp[0].minor.yy387, yymsp[-1].minor.yy384, yymsp[-4].minor.yy4); +} + break; + case 177: /* cmd ::= with insert_cmd INTO fullname idlist_opt DEFAULT VALUES */ +{ + sqlite3WithPush(pParse, yymsp[-6].minor.yy451, 1); + sqlite3Insert(pParse, yymsp[-3].minor.yy259, 0, yymsp[-2].minor.yy384, yymsp[-5].minor.yy4); +} + break; + case 182: /* idlist ::= idlist COMMA nm */ +{yygotominor.yy384 = sqlite3IdListAppend(pParse->db,yymsp[-2].minor.yy384,&yymsp[0].minor.yy0);} + break; + case 183: /* idlist ::= nm */ +{yygotominor.yy384 = sqlite3IdListAppend(pParse->db,0,&yymsp[0].minor.yy0);} break; case 184: /* expr ::= term */ - case 212: /* escape ::= ESCAPE expr */ yytestcase(yyruleno==212); -{yygotominor.yy346 = yymsp[0].minor.yy346;} +{yygotominor.yy118 = yymsp[0].minor.yy118;} break; case 185: /* expr ::= LP expr RP */ -{yygotominor.yy346.pExpr = yymsp[-1].minor.yy346.pExpr; spanSet(&yygotominor.yy346,&yymsp[-2].minor.yy0,&yymsp[0].minor.yy0);} +{yygotominor.yy118.pExpr = yymsp[-1].minor.yy118.pExpr; spanSet(&yygotominor.yy118,&yymsp[-2].minor.yy0,&yymsp[0].minor.yy0);} break; case 186: /* term ::= NULL */ case 191: /* term ::= INTEGER|FLOAT|BLOB */ yytestcase(yyruleno==191); case 192: /* term ::= STRING */ yytestcase(yyruleno==192); -{spanExpr(&yygotominor.yy346, pParse, yymsp[0].major, &yymsp[0].minor.yy0);} +{spanExpr(&yygotominor.yy118, pParse, yymsp[0].major, &yymsp[0].minor.yy0);} break; - case 187: /* expr ::= id */ + case 187: /* expr ::= ID|INDEXED */ case 188: /* expr ::= JOIN_KW */ yytestcase(yyruleno==188); -{spanExpr(&yygotominor.yy346, pParse, TK_ID, &yymsp[0].minor.yy0);} +{spanExpr(&yygotominor.yy118, pParse, TK_ID, &yymsp[0].minor.yy0);} break; case 189: /* expr ::= nm DOT nm */ { Expr *temp1 = sqlite3PExpr(pParse, TK_ID, 0, 0, &yymsp[-2].minor.yy0); Expr *temp2 = sqlite3PExpr(pParse, TK_ID, 0, 0, &yymsp[0].minor.yy0); - yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_DOT, temp1, temp2, 0); - spanSet(&yygotominor.yy346,&yymsp[-2].minor.yy0,&yymsp[0].minor.yy0); + yygotominor.yy118.pExpr = sqlite3PExpr(pParse, TK_DOT, temp1, temp2, 0); + spanSet(&yygotominor.yy118,&yymsp[-2].minor.yy0,&yymsp[0].minor.yy0); } break; case 190: /* expr ::= nm DOT nm DOT nm */ { Expr *temp1 = sqlite3PExpr(pParse, TK_ID, 0, 0, &yymsp[-4].minor.yy0); Expr *temp2 = sqlite3PExpr(pParse, TK_ID, 0, 0, &yymsp[-2].minor.yy0); Expr *temp3 = sqlite3PExpr(pParse, TK_ID, 0, 0, 
&yymsp[0].minor.yy0); Expr *temp4 = sqlite3PExpr(pParse, TK_DOT, temp2, temp3, 0); - yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_DOT, temp1, temp4, 0); - spanSet(&yygotominor.yy346,&yymsp[-4].minor.yy0,&yymsp[0].minor.yy0); -} - break; - case 193: /* expr ::= REGISTER */ -{ - /* When doing a nested parse, one can include terms in an expression - ** that look like this: #1 #2 ... These terms refer to registers - ** in the virtual machine. #N is the N-th register. */ - if( pParse->nested==0 ){ - sqlite3ErrorMsg(pParse, "near \"%T\": syntax error", &yymsp[0].minor.yy0); - yygotominor.yy346.pExpr = 0; - }else{ - yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_REGISTER, 0, 0, &yymsp[0].minor.yy0); - if( yygotominor.yy346.pExpr ) sqlite3GetInt32(&yymsp[0].minor.yy0.z[1], &yygotominor.yy346.pExpr->iTable); - } - spanSet(&yygotominor.yy346, &yymsp[0].minor.yy0, &yymsp[0].minor.yy0); -} - break; - case 194: /* expr ::= VARIABLE */ -{ - spanExpr(&yygotominor.yy346, pParse, TK_VARIABLE, &yymsp[0].minor.yy0); - sqlite3ExprAssignVarNumber(pParse, yygotominor.yy346.pExpr); - spanSet(&yygotominor.yy346, &yymsp[0].minor.yy0, &yymsp[0].minor.yy0); -} - break; - case 195: /* expr ::= expr COLLATE ids */ -{ - yygotominor.yy346.pExpr = sqlite3ExprSetColl(pParse, yymsp[-2].minor.yy346.pExpr, &yymsp[0].minor.yy0); - yygotominor.yy346.zStart = yymsp[-2].minor.yy346.zStart; - yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; -} - break; - case 196: /* expr ::= CAST LP expr AS typetoken RP */ -{ - yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_CAST, yymsp[-3].minor.yy346.pExpr, 0, &yymsp[-1].minor.yy0); - spanSet(&yygotominor.yy346,&yymsp[-5].minor.yy0,&yymsp[0].minor.yy0); -} - break; - case 197: /* expr ::= ID LP distinct exprlist RP */ -{ - if( yymsp[-1].minor.yy14 && yymsp[-1].minor.yy14->nExpr>pParse->db->aLimit[SQLITE_LIMIT_FUNCTION_ARG] ){ + yygotominor.yy118.pExpr = sqlite3PExpr(pParse, TK_DOT, temp1, temp4, 0); + spanSet(&yygotominor.yy118,&yymsp[-4].minor.yy0,&yymsp[0].minor.yy0); +} + break; + case 193: /* expr ::= VARIABLE */ +{ + if( yymsp[0].minor.yy0.n>=2 && yymsp[0].minor.yy0.z[0]=='#' && sqlite3Isdigit(yymsp[0].minor.yy0.z[1]) ){ + /* When doing a nested parse, one can include terms in an expression + ** that look like this: #1 #2 ... These terms refer to registers + ** in the virtual machine. #N is the N-th register. 
*/ + if( pParse->nested==0 ){ + sqlite3ErrorMsg(pParse, "near \"%T\": syntax error", &yymsp[0].minor.yy0); + yygotominor.yy118.pExpr = 0; + }else{ + yygotominor.yy118.pExpr = sqlite3PExpr(pParse, TK_REGISTER, 0, 0, &yymsp[0].minor.yy0); + if( yygotominor.yy118.pExpr ) sqlite3GetInt32(&yymsp[0].minor.yy0.z[1], &yygotominor.yy118.pExpr->iTable); + } + }else{ + spanExpr(&yygotominor.yy118, pParse, TK_VARIABLE, &yymsp[0].minor.yy0); + sqlite3ExprAssignVarNumber(pParse, yygotominor.yy118.pExpr); + } + spanSet(&yygotominor.yy118, &yymsp[0].minor.yy0, &yymsp[0].minor.yy0); +} + break; + case 194: /* expr ::= expr COLLATE ID|STRING */ +{ + yygotominor.yy118.pExpr = sqlite3ExprAddCollateToken(pParse, yymsp[-2].minor.yy118.pExpr, &yymsp[0].minor.yy0, 1); + yygotominor.yy118.zStart = yymsp[-2].minor.yy118.zStart; + yygotominor.yy118.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; +} + break; + case 195: /* expr ::= CAST LP expr AS typetoken RP */ +{ + yygotominor.yy118.pExpr = sqlite3PExpr(pParse, TK_CAST, yymsp[-3].minor.yy118.pExpr, 0, &yymsp[-1].minor.yy0); + spanSet(&yygotominor.yy118,&yymsp[-5].minor.yy0,&yymsp[0].minor.yy0); +} + break; + case 196: /* expr ::= ID|INDEXED LP distinct exprlist RP */ +{ + if( yymsp[-1].minor.yy322 && yymsp[-1].minor.yy322->nExpr>pParse->db->aLimit[SQLITE_LIMIT_FUNCTION_ARG] ){ sqlite3ErrorMsg(pParse, "too many arguments on function %T", &yymsp[-4].minor.yy0); } - yygotominor.yy346.pExpr = sqlite3ExprFunction(pParse, yymsp[-1].minor.yy14, &yymsp[-4].minor.yy0); - spanSet(&yygotominor.yy346,&yymsp[-4].minor.yy0,&yymsp[0].minor.yy0); - if( yymsp[-2].minor.yy328 && yygotominor.yy346.pExpr ){ - yygotominor.yy346.pExpr->flags |= EP_Distinct; - } -} - break; - case 198: /* expr ::= ID LP STAR RP */ -{ - yygotominor.yy346.pExpr = sqlite3ExprFunction(pParse, 0, &yymsp[-3].minor.yy0); - spanSet(&yygotominor.yy346,&yymsp[-3].minor.yy0,&yymsp[0].minor.yy0); -} - break; - case 199: /* term ::= CTIME_KW */ -{ - /* The CURRENT_TIME, CURRENT_DATE, and CURRENT_TIMESTAMP values are - ** treated as functions that return constants */ - yygotominor.yy346.pExpr = sqlite3ExprFunction(pParse, 0,&yymsp[0].minor.yy0); - if( yygotominor.yy346.pExpr ){ - yygotominor.yy346.pExpr->op = TK_CONST_FUNC; - } - spanSet(&yygotominor.yy346, &yymsp[0].minor.yy0, &yymsp[0].minor.yy0); -} - break; - case 200: /* expr ::= expr AND expr */ - case 201: /* expr ::= expr OR expr */ yytestcase(yyruleno==201); - case 202: /* expr ::= expr LT|GT|GE|LE expr */ yytestcase(yyruleno==202); - case 203: /* expr ::= expr EQ|NE expr */ yytestcase(yyruleno==203); - case 204: /* expr ::= expr BITAND|BITOR|LSHIFT|RSHIFT expr */ yytestcase(yyruleno==204); - case 205: /* expr ::= expr PLUS|MINUS expr */ yytestcase(yyruleno==205); - case 206: /* expr ::= expr STAR|SLASH|REM expr */ yytestcase(yyruleno==206); - case 207: /* expr ::= expr CONCAT expr */ yytestcase(yyruleno==207); -{spanBinaryExpr(&yygotominor.yy346,pParse,yymsp[-1].major,&yymsp[-2].minor.yy346,&yymsp[0].minor.yy346);} - break; - case 208: /* likeop ::= LIKE_KW */ - case 210: /* likeop ::= MATCH */ yytestcase(yyruleno==210); -{yygotominor.yy96.eOperator = yymsp[0].minor.yy0; yygotominor.yy96.not = 0;} - break; - case 209: /* likeop ::= NOT LIKE_KW */ - case 211: /* likeop ::= NOT MATCH */ yytestcase(yyruleno==211); -{yygotominor.yy96.eOperator = yymsp[0].minor.yy0; yygotominor.yy96.not = 1;} - break; - case 213: /* escape ::= */ -{memset(&yygotominor.yy346,0,sizeof(yygotominor.yy346));} - break; - case 214: /* expr ::= expr likeop expr escape */ + 
yygotominor.yy118.pExpr = sqlite3ExprFunction(pParse, yymsp[-1].minor.yy322, &yymsp[-4].minor.yy0); + spanSet(&yygotominor.yy118,&yymsp[-4].minor.yy0,&yymsp[0].minor.yy0); + if( yymsp[-2].minor.yy4==SF_Distinct && yygotominor.yy118.pExpr ){ + yygotominor.yy118.pExpr->flags |= EP_Distinct; + } +} + break; + case 197: /* expr ::= ID|INDEXED LP STAR RP */ +{ + yygotominor.yy118.pExpr = sqlite3ExprFunction(pParse, 0, &yymsp[-3].minor.yy0); + spanSet(&yygotominor.yy118,&yymsp[-3].minor.yy0,&yymsp[0].minor.yy0); +} + break; + case 198: /* term ::= CTIME_KW */ +{ + yygotominor.yy118.pExpr = sqlite3ExprFunction(pParse, 0, &yymsp[0].minor.yy0); + spanSet(&yygotominor.yy118, &yymsp[0].minor.yy0, &yymsp[0].minor.yy0); +} + break; + case 199: /* expr ::= expr AND expr */ + case 200: /* expr ::= expr OR expr */ yytestcase(yyruleno==200); + case 201: /* expr ::= expr LT|GT|GE|LE expr */ yytestcase(yyruleno==201); + case 202: /* expr ::= expr EQ|NE expr */ yytestcase(yyruleno==202); + case 203: /* expr ::= expr BITAND|BITOR|LSHIFT|RSHIFT expr */ yytestcase(yyruleno==203); + case 204: /* expr ::= expr PLUS|MINUS expr */ yytestcase(yyruleno==204); + case 205: /* expr ::= expr STAR|SLASH|REM expr */ yytestcase(yyruleno==205); + case 206: /* expr ::= expr CONCAT expr */ yytestcase(yyruleno==206); +{spanBinaryExpr(&yygotominor.yy118,pParse,yymsp[-1].major,&yymsp[-2].minor.yy118,&yymsp[0].minor.yy118);} + break; + case 207: /* likeop ::= LIKE_KW|MATCH */ +{yygotominor.yy342.eOperator = yymsp[0].minor.yy0; yygotominor.yy342.bNot = 0;} + break; + case 208: /* likeop ::= NOT LIKE_KW|MATCH */ +{yygotominor.yy342.eOperator = yymsp[0].minor.yy0; yygotominor.yy342.bNot = 1;} + break; + case 209: /* expr ::= expr likeop expr */ +{ + ExprList *pList; + pList = sqlite3ExprListAppend(pParse,0, yymsp[0].minor.yy118.pExpr); + pList = sqlite3ExprListAppend(pParse,pList, yymsp[-2].minor.yy118.pExpr); + yygotominor.yy118.pExpr = sqlite3ExprFunction(pParse, pList, &yymsp[-1].minor.yy342.eOperator); + exprNot(pParse, yymsp[-1].minor.yy342.bNot, &yygotominor.yy118.pExpr); + yygotominor.yy118.zStart = yymsp[-2].minor.yy118.zStart; + yygotominor.yy118.zEnd = yymsp[0].minor.yy118.zEnd; + if( yygotominor.yy118.pExpr ) yygotominor.yy118.pExpr->flags |= EP_InfixFunc; +} + break; + case 210: /* expr ::= expr likeop expr ESCAPE expr */ { ExprList *pList; - pList = sqlite3ExprListAppend(pParse,0, yymsp[-1].minor.yy346.pExpr); - pList = sqlite3ExprListAppend(pParse,pList, yymsp[-3].minor.yy346.pExpr); - if( yymsp[0].minor.yy346.pExpr ){ - pList = sqlite3ExprListAppend(pParse,pList, yymsp[0].minor.yy346.pExpr); - } - yygotominor.yy346.pExpr = sqlite3ExprFunction(pParse, pList, &yymsp[-2].minor.yy96.eOperator); - if( yymsp[-2].minor.yy96.not ) yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy346.pExpr, 0, 0); - yygotominor.yy346.zStart = yymsp[-3].minor.yy346.zStart; - yygotominor.yy346.zEnd = yymsp[-1].minor.yy346.zEnd; - if( yygotominor.yy346.pExpr ) yygotominor.yy346.pExpr->flags |= EP_InfixFunc; -} - break; - case 215: /* expr ::= expr ISNULL|NOTNULL */ -{spanUnaryPostfix(&yygotominor.yy346,pParse,yymsp[0].major,&yymsp[-1].minor.yy346,&yymsp[0].minor.yy0);} - break; - case 216: /* expr ::= expr NOT NULL */ -{spanUnaryPostfix(&yygotominor.yy346,pParse,TK_NOTNULL,&yymsp[-2].minor.yy346,&yymsp[0].minor.yy0);} - break; - case 217: /* expr ::= expr IS expr */ -{ - spanBinaryExpr(&yygotominor.yy346,pParse,TK_IS,&yymsp[-2].minor.yy346,&yymsp[0].minor.yy346); - binaryToUnaryIfNull(pParse, yymsp[0].minor.yy346.pExpr, 
yygotominor.yy346.pExpr, TK_ISNULL); -} - break; - case 218: /* expr ::= expr IS NOT expr */ -{ - spanBinaryExpr(&yygotominor.yy346,pParse,TK_ISNOT,&yymsp[-3].minor.yy346,&yymsp[0].minor.yy346); - binaryToUnaryIfNull(pParse, yymsp[0].minor.yy346.pExpr, yygotominor.yy346.pExpr, TK_NOTNULL); -} - break; - case 219: /* expr ::= NOT expr */ - case 220: /* expr ::= BITNOT expr */ yytestcase(yyruleno==220); -{spanUnaryPrefix(&yygotominor.yy346,pParse,yymsp[-1].major,&yymsp[0].minor.yy346,&yymsp[-1].minor.yy0);} - break; - case 221: /* expr ::= MINUS expr */ -{spanUnaryPrefix(&yygotominor.yy346,pParse,TK_UMINUS,&yymsp[0].minor.yy346,&yymsp[-1].minor.yy0);} - break; - case 222: /* expr ::= PLUS expr */ -{spanUnaryPrefix(&yygotominor.yy346,pParse,TK_UPLUS,&yymsp[0].minor.yy346,&yymsp[-1].minor.yy0);} - break; - case 225: /* expr ::= expr between_op expr AND expr */ -{ - ExprList *pList = sqlite3ExprListAppend(pParse,0, yymsp[-2].minor.yy346.pExpr); - pList = sqlite3ExprListAppend(pParse,pList, yymsp[0].minor.yy346.pExpr); - yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_BETWEEN, yymsp[-4].minor.yy346.pExpr, 0, 0); - if( yygotominor.yy346.pExpr ){ - yygotominor.yy346.pExpr->x.pList = pList; + pList = sqlite3ExprListAppend(pParse,0, yymsp[-2].minor.yy118.pExpr); + pList = sqlite3ExprListAppend(pParse,pList, yymsp[-4].minor.yy118.pExpr); + pList = sqlite3ExprListAppend(pParse,pList, yymsp[0].minor.yy118.pExpr); + yygotominor.yy118.pExpr = sqlite3ExprFunction(pParse, pList, &yymsp[-3].minor.yy342.eOperator); + exprNot(pParse, yymsp[-3].minor.yy342.bNot, &yygotominor.yy118.pExpr); + yygotominor.yy118.zStart = yymsp[-4].minor.yy118.zStart; + yygotominor.yy118.zEnd = yymsp[0].minor.yy118.zEnd; + if( yygotominor.yy118.pExpr ) yygotominor.yy118.pExpr->flags |= EP_InfixFunc; +} + break; + case 211: /* expr ::= expr ISNULL|NOTNULL */ +{spanUnaryPostfix(&yygotominor.yy118,pParse,yymsp[0].major,&yymsp[-1].minor.yy118,&yymsp[0].minor.yy0);} + break; + case 212: /* expr ::= expr NOT NULL */ +{spanUnaryPostfix(&yygotominor.yy118,pParse,TK_NOTNULL,&yymsp[-2].minor.yy118,&yymsp[0].minor.yy0);} + break; + case 213: /* expr ::= expr IS expr */ +{ + spanBinaryExpr(&yygotominor.yy118,pParse,TK_IS,&yymsp[-2].minor.yy118,&yymsp[0].minor.yy118); + binaryToUnaryIfNull(pParse, yymsp[0].minor.yy118.pExpr, yygotominor.yy118.pExpr, TK_ISNULL); +} + break; + case 214: /* expr ::= expr IS NOT expr */ +{ + spanBinaryExpr(&yygotominor.yy118,pParse,TK_ISNOT,&yymsp[-3].minor.yy118,&yymsp[0].minor.yy118); + binaryToUnaryIfNull(pParse, yymsp[0].minor.yy118.pExpr, yygotominor.yy118.pExpr, TK_NOTNULL); +} + break; + case 215: /* expr ::= NOT expr */ + case 216: /* expr ::= BITNOT expr */ yytestcase(yyruleno==216); +{spanUnaryPrefix(&yygotominor.yy118,pParse,yymsp[-1].major,&yymsp[0].minor.yy118,&yymsp[-1].minor.yy0);} + break; + case 217: /* expr ::= MINUS expr */ +{spanUnaryPrefix(&yygotominor.yy118,pParse,TK_UMINUS,&yymsp[0].minor.yy118,&yymsp[-1].minor.yy0);} + break; + case 218: /* expr ::= PLUS expr */ +{spanUnaryPrefix(&yygotominor.yy118,pParse,TK_UPLUS,&yymsp[0].minor.yy118,&yymsp[-1].minor.yy0);} + break; + case 221: /* expr ::= expr between_op expr AND expr */ +{ + ExprList *pList = sqlite3ExprListAppend(pParse,0, yymsp[-2].minor.yy118.pExpr); + pList = sqlite3ExprListAppend(pParse,pList, yymsp[0].minor.yy118.pExpr); + yygotominor.yy118.pExpr = sqlite3PExpr(pParse, TK_BETWEEN, yymsp[-4].minor.yy118.pExpr, 0, 0); + if( yygotominor.yy118.pExpr ){ + yygotominor.yy118.pExpr->x.pList = pList; }else{ 
sqlite3ExprListDelete(pParse->db, pList); } - if( yymsp[-3].minor.yy328 ) yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy346.pExpr, 0, 0); - yygotominor.yy346.zStart = yymsp[-4].minor.yy346.zStart; - yygotominor.yy346.zEnd = yymsp[0].minor.yy346.zEnd; -} - break; - case 228: /* expr ::= expr in_op LP exprlist RP */ -{ - yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_IN, yymsp[-4].minor.yy346.pExpr, 0, 0); - if( yygotominor.yy346.pExpr ){ - yygotominor.yy346.pExpr->x.pList = yymsp[-1].minor.yy14; - sqlite3ExprSetHeight(pParse, yygotominor.yy346.pExpr); - }else{ - sqlite3ExprListDelete(pParse->db, yymsp[-1].minor.yy14); - } - if( yymsp[-3].minor.yy328 ) yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy346.pExpr, 0, 0); - yygotominor.yy346.zStart = yymsp[-4].minor.yy346.zStart; - yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; - } - break; - case 229: /* expr ::= LP select RP */ -{ - yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_SELECT, 0, 0, 0); - if( yygotominor.yy346.pExpr ){ - yygotominor.yy346.pExpr->x.pSelect = yymsp[-1].minor.yy3; - ExprSetProperty(yygotominor.yy346.pExpr, EP_xIsSelect); - sqlite3ExprSetHeight(pParse, yygotominor.yy346.pExpr); - }else{ - sqlite3SelectDelete(pParse->db, yymsp[-1].minor.yy3); - } - yygotominor.yy346.zStart = yymsp[-2].minor.yy0.z; - yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; - } - break; - case 230: /* expr ::= expr in_op LP select RP */ -{ - yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_IN, yymsp[-4].minor.yy346.pExpr, 0, 0); - if( yygotominor.yy346.pExpr ){ - yygotominor.yy346.pExpr->x.pSelect = yymsp[-1].minor.yy3; - ExprSetProperty(yygotominor.yy346.pExpr, EP_xIsSelect); - sqlite3ExprSetHeight(pParse, yygotominor.yy346.pExpr); - }else{ - sqlite3SelectDelete(pParse->db, yymsp[-1].minor.yy3); - } - if( yymsp[-3].minor.yy328 ) yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy346.pExpr, 0, 0); - yygotominor.yy346.zStart = yymsp[-4].minor.yy346.zStart; - yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; - } - break; - case 231: /* expr ::= expr in_op nm dbnm */ + exprNot(pParse, yymsp[-3].minor.yy4, &yygotominor.yy118.pExpr); + yygotominor.yy118.zStart = yymsp[-4].minor.yy118.zStart; + yygotominor.yy118.zEnd = yymsp[0].minor.yy118.zEnd; +} + break; + case 224: /* expr ::= expr in_op LP exprlist RP */ +{ + if( yymsp[-1].minor.yy322==0 ){ + /* Expressions of the form + ** + ** expr1 IN () + ** expr1 NOT IN () + ** + ** simplify to constants 0 (false) and 1 (true), respectively, + ** regardless of the value of expr1. + */ + yygotominor.yy118.pExpr = sqlite3PExpr(pParse, TK_INTEGER, 0, 0, &sqlite3IntTokens[yymsp[-3].minor.yy4]); + sqlite3ExprDelete(pParse->db, yymsp[-4].minor.yy118.pExpr); + }else if( yymsp[-1].minor.yy322->nExpr==1 ){ + /* Expressions of the form: + ** + ** expr1 IN (?1) + ** expr1 NOT IN (?2) + ** + ** with exactly one value on the RHS can be simplified to something + ** like this: + ** + ** expr1 == ?1 + ** expr1 <> ?2 + ** + ** But, the RHS of the == or <> is marked with the EP_Generic flag + ** so that it may not contribute to the computation of comparison + ** affinity or the collating sequence to use for comparison. Otherwise, + ** the semantics would be subtly different from IN or NOT IN. 
+ */ + Expr *pRHS = yymsp[-1].minor.yy322->a[0].pExpr; + yymsp[-1].minor.yy322->a[0].pExpr = 0; + sqlite3ExprListDelete(pParse->db, yymsp[-1].minor.yy322); + /* pRHS cannot be NULL because a malloc error would have been detected + ** before now and control would have never reached this point */ + if( ALWAYS(pRHS) ){ + pRHS->flags &= ~EP_Collate; + pRHS->flags |= EP_Generic; + } + yygotominor.yy118.pExpr = sqlite3PExpr(pParse, yymsp[-3].minor.yy4 ? TK_NE : TK_EQ, yymsp[-4].minor.yy118.pExpr, pRHS, 0); + }else{ + yygotominor.yy118.pExpr = sqlite3PExpr(pParse, TK_IN, yymsp[-4].minor.yy118.pExpr, 0, 0); + if( yygotominor.yy118.pExpr ){ + yygotominor.yy118.pExpr->x.pList = yymsp[-1].minor.yy322; + sqlite3ExprSetHeightAndFlags(pParse, yygotominor.yy118.pExpr); + }else{ + sqlite3ExprListDelete(pParse->db, yymsp[-1].minor.yy322); + } + exprNot(pParse, yymsp[-3].minor.yy4, &yygotominor.yy118.pExpr); + } + yygotominor.yy118.zStart = yymsp[-4].minor.yy118.zStart; + yygotominor.yy118.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; + } + break; + case 225: /* expr ::= LP select RP */ +{ + yygotominor.yy118.pExpr = sqlite3PExpr(pParse, TK_SELECT, 0, 0, 0); + if( yygotominor.yy118.pExpr ){ + yygotominor.yy118.pExpr->x.pSelect = yymsp[-1].minor.yy387; + ExprSetProperty(yygotominor.yy118.pExpr, EP_xIsSelect|EP_Subquery); + sqlite3ExprSetHeightAndFlags(pParse, yygotominor.yy118.pExpr); + }else{ + sqlite3SelectDelete(pParse->db, yymsp[-1].minor.yy387); + } + yygotominor.yy118.zStart = yymsp[-2].minor.yy0.z; + yygotominor.yy118.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; + } + break; + case 226: /* expr ::= expr in_op LP select RP */ +{ + yygotominor.yy118.pExpr = sqlite3PExpr(pParse, TK_IN, yymsp[-4].minor.yy118.pExpr, 0, 0); + if( yygotominor.yy118.pExpr ){ + yygotominor.yy118.pExpr->x.pSelect = yymsp[-1].minor.yy387; + ExprSetProperty(yygotominor.yy118.pExpr, EP_xIsSelect|EP_Subquery); + sqlite3ExprSetHeightAndFlags(pParse, yygotominor.yy118.pExpr); + }else{ + sqlite3SelectDelete(pParse->db, yymsp[-1].minor.yy387); + } + exprNot(pParse, yymsp[-3].minor.yy4, &yygotominor.yy118.pExpr); + yygotominor.yy118.zStart = yymsp[-4].minor.yy118.zStart; + yygotominor.yy118.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; + } + break; + case 227: /* expr ::= expr in_op nm dbnm */ { SrcList *pSrc = sqlite3SrcListAppend(pParse->db, 0,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy0); - yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_IN, yymsp[-3].minor.yy346.pExpr, 0, 0); - if( yygotominor.yy346.pExpr ){ - yygotominor.yy346.pExpr->x.pSelect = sqlite3SelectNew(pParse, 0,pSrc,0,0,0,0,0,0,0); - ExprSetProperty(yygotominor.yy346.pExpr, EP_xIsSelect); - sqlite3ExprSetHeight(pParse, yygotominor.yy346.pExpr); + yygotominor.yy118.pExpr = sqlite3PExpr(pParse, TK_IN, yymsp[-3].minor.yy118.pExpr, 0, 0); + if( yygotominor.yy118.pExpr ){ + yygotominor.yy118.pExpr->x.pSelect = sqlite3SelectNew(pParse, 0,pSrc,0,0,0,0,0,0,0); + ExprSetProperty(yygotominor.yy118.pExpr, EP_xIsSelect|EP_Subquery); + sqlite3ExprSetHeightAndFlags(pParse, yygotominor.yy118.pExpr); }else{ sqlite3SrcListDelete(pParse->db, pSrc); } - if( yymsp[-2].minor.yy328 ) yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy346.pExpr, 0, 0); - yygotominor.yy346.zStart = yymsp[-3].minor.yy346.zStart; - yygotominor.yy346.zEnd = yymsp[0].minor.yy0.z ? 
&yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n] : &yymsp[-1].minor.yy0.z[yymsp[-1].minor.yy0.n]; - } - break; - case 232: /* expr ::= EXISTS LP select RP */ -{ - Expr *p = yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_EXISTS, 0, 0, 0); + exprNot(pParse, yymsp[-2].minor.yy4, &yygotominor.yy118.pExpr); + yygotominor.yy118.zStart = yymsp[-3].minor.yy118.zStart; + yygotominor.yy118.zEnd = yymsp[0].minor.yy0.z ? &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n] : &yymsp[-1].minor.yy0.z[yymsp[-1].minor.yy0.n]; + } + break; + case 228: /* expr ::= EXISTS LP select RP */ +{ + Expr *p = yygotominor.yy118.pExpr = sqlite3PExpr(pParse, TK_EXISTS, 0, 0, 0); if( p ){ - p->x.pSelect = yymsp[-1].minor.yy3; - ExprSetProperty(p, EP_xIsSelect); - sqlite3ExprSetHeight(pParse, p); - }else{ - sqlite3SelectDelete(pParse->db, yymsp[-1].minor.yy3); - } - yygotominor.yy346.zStart = yymsp[-3].minor.yy0.z; - yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; - } - break; - case 233: /* expr ::= CASE case_operand case_exprlist case_else END */ -{ - yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_CASE, yymsp[-3].minor.yy132, yymsp[-1].minor.yy132, 0); - if( yygotominor.yy346.pExpr ){ - yygotominor.yy346.pExpr->x.pList = yymsp[-2].minor.yy14; - sqlite3ExprSetHeight(pParse, yygotominor.yy346.pExpr); - }else{ - sqlite3ExprListDelete(pParse->db, yymsp[-2].minor.yy14); - } - yygotominor.yy346.zStart = yymsp[-4].minor.yy0.z; - yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; -} - break; - case 234: /* case_exprlist ::= case_exprlist WHEN expr THEN expr */ -{ - yygotominor.yy14 = sqlite3ExprListAppend(pParse,yymsp[-4].minor.yy14, yymsp[-2].minor.yy346.pExpr); - yygotominor.yy14 = sqlite3ExprListAppend(pParse,yygotominor.yy14, yymsp[0].minor.yy346.pExpr); -} - break; - case 235: /* case_exprlist ::= WHEN expr THEN expr */ -{ - yygotominor.yy14 = sqlite3ExprListAppend(pParse,0, yymsp[-2].minor.yy346.pExpr); - yygotominor.yy14 = sqlite3ExprListAppend(pParse,yygotominor.yy14, yymsp[0].minor.yy346.pExpr); -} - break; - case 244: /* cmd ::= createkw uniqueflag INDEX ifnotexists nm dbnm ON nm LP idxlist RP */ -{ - sqlite3CreateIndex(pParse, &yymsp[-6].minor.yy0, &yymsp[-5].minor.yy0, - sqlite3SrcListAppend(pParse->db,0,&yymsp[-3].minor.yy0,0), yymsp[-1].minor.yy14, yymsp[-9].minor.yy328, - &yymsp[-10].minor.yy0, &yymsp[0].minor.yy0, SQLITE_SO_ASC, yymsp[-7].minor.yy328); -} - break; - case 245: /* uniqueflag ::= UNIQUE */ - case 299: /* raisetype ::= ABORT */ yytestcase(yyruleno==299); -{yygotominor.yy328 = OE_Abort;} - break; - case 246: /* uniqueflag ::= */ -{yygotominor.yy328 = OE_None;} - break; - case 249: /* idxlist ::= idxlist COMMA nm collate sortorder */ -{ - Expr *p = 0; - if( yymsp[-1].minor.yy0.n>0 ){ - p = sqlite3Expr(pParse->db, TK_COLUMN, 0); - sqlite3ExprSetColl(pParse, p, &yymsp[-1].minor.yy0); - } - yygotominor.yy14 = sqlite3ExprListAppend(pParse,yymsp[-4].minor.yy14, p); - sqlite3ExprListSetName(pParse,yygotominor.yy14,&yymsp[-2].minor.yy0,1); - sqlite3ExprListCheckLength(pParse, yygotominor.yy14, "index"); - if( yygotominor.yy14 ) yygotominor.yy14->a[yygotominor.yy14->nExpr-1].sortOrder = (u8)yymsp[0].minor.yy328; -} - break; - case 250: /* idxlist ::= nm collate sortorder */ -{ - Expr *p = 0; - if( yymsp[-1].minor.yy0.n>0 ){ - p = sqlite3PExpr(pParse, TK_COLUMN, 0, 0, 0); - sqlite3ExprSetColl(pParse, p, &yymsp[-1].minor.yy0); - } - yygotominor.yy14 = sqlite3ExprListAppend(pParse,0, p); - sqlite3ExprListSetName(pParse, yygotominor.yy14, &yymsp[-2].minor.yy0, 1); - 
sqlite3ExprListCheckLength(pParse, yygotominor.yy14, "index"); - if( yygotominor.yy14 ) yygotominor.yy14->a[yygotominor.yy14->nExpr-1].sortOrder = (u8)yymsp[0].minor.yy328; -} - break; - case 251: /* collate ::= */ -{yygotominor.yy0.z = 0; yygotominor.yy0.n = 0;} - break; - case 253: /* cmd ::= DROP INDEX ifexists fullname */ -{sqlite3DropIndex(pParse, yymsp[0].minor.yy65, yymsp[-1].minor.yy328);} - break; - case 254: /* cmd ::= VACUUM */ - case 255: /* cmd ::= VACUUM nm */ yytestcase(yyruleno==255); + p->x.pSelect = yymsp[-1].minor.yy387; + ExprSetProperty(p, EP_xIsSelect|EP_Subquery); + sqlite3ExprSetHeightAndFlags(pParse, p); + }else{ + sqlite3SelectDelete(pParse->db, yymsp[-1].minor.yy387); + } + yygotominor.yy118.zStart = yymsp[-3].minor.yy0.z; + yygotominor.yy118.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; + } + break; + case 229: /* expr ::= CASE case_operand case_exprlist case_else END */ +{ + yygotominor.yy118.pExpr = sqlite3PExpr(pParse, TK_CASE, yymsp[-3].minor.yy314, 0, 0); + if( yygotominor.yy118.pExpr ){ + yygotominor.yy118.pExpr->x.pList = yymsp[-1].minor.yy314 ? sqlite3ExprListAppend(pParse,yymsp[-2].minor.yy322,yymsp[-1].minor.yy314) : yymsp[-2].minor.yy322; + sqlite3ExprSetHeightAndFlags(pParse, yygotominor.yy118.pExpr); + }else{ + sqlite3ExprListDelete(pParse->db, yymsp[-2].minor.yy322); + sqlite3ExprDelete(pParse->db, yymsp[-1].minor.yy314); + } + yygotominor.yy118.zStart = yymsp[-4].minor.yy0.z; + yygotominor.yy118.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; +} + break; + case 230: /* case_exprlist ::= case_exprlist WHEN expr THEN expr */ +{ + yygotominor.yy322 = sqlite3ExprListAppend(pParse,yymsp[-4].minor.yy322, yymsp[-2].minor.yy118.pExpr); + yygotominor.yy322 = sqlite3ExprListAppend(pParse,yygotominor.yy322, yymsp[0].minor.yy118.pExpr); +} + break; + case 231: /* case_exprlist ::= WHEN expr THEN expr */ +{ + yygotominor.yy322 = sqlite3ExprListAppend(pParse,0, yymsp[-2].minor.yy118.pExpr); + yygotominor.yy322 = sqlite3ExprListAppend(pParse,yygotominor.yy322, yymsp[0].minor.yy118.pExpr); +} + break; + case 238: /* nexprlist ::= nexprlist COMMA expr */ +{yygotominor.yy322 = sqlite3ExprListAppend(pParse,yymsp[-2].minor.yy322,yymsp[0].minor.yy118.pExpr);} + break; + case 239: /* nexprlist ::= expr */ +{yygotominor.yy322 = sqlite3ExprListAppend(pParse,0,yymsp[0].minor.yy118.pExpr);} + break; + case 240: /* cmd ::= createkw uniqueflag INDEX ifnotexists nm dbnm ON nm LP sortlist RP where_opt */ +{ + sqlite3CreateIndex(pParse, &yymsp[-7].minor.yy0, &yymsp[-6].minor.yy0, + sqlite3SrcListAppend(pParse->db,0,&yymsp[-4].minor.yy0,0), yymsp[-2].minor.yy322, yymsp[-10].minor.yy4, + &yymsp[-11].minor.yy0, yymsp[0].minor.yy314, SQLITE_SO_ASC, yymsp[-8].minor.yy4); +} + break; + case 241: /* uniqueflag ::= UNIQUE */ + case 292: /* raisetype ::= ABORT */ yytestcase(yyruleno==292); +{yygotominor.yy4 = OE_Abort;} + break; + case 242: /* uniqueflag ::= */ +{yygotominor.yy4 = OE_None;} + break; + case 245: /* eidlist ::= eidlist COMMA nm collate sortorder */ +{ + yygotominor.yy322 = parserAddExprIdListTerm(pParse, yymsp[-4].minor.yy322, &yymsp[-2].minor.yy0, yymsp[-1].minor.yy4, yymsp[0].minor.yy4); +} + break; + case 246: /* eidlist ::= nm collate sortorder */ +{ + yygotominor.yy322 = parserAddExprIdListTerm(pParse, 0, &yymsp[-2].minor.yy0, yymsp[-1].minor.yy4, yymsp[0].minor.yy4); +} + break; + case 249: /* cmd ::= DROP INDEX ifexists fullname */ +{sqlite3DropIndex(pParse, yymsp[0].minor.yy259, yymsp[-1].minor.yy4);} + break; + case 250: /* cmd ::= VACUUM */ + case 251: 
/* cmd ::= VACUUM nm */ yytestcase(yyruleno==251); {sqlite3Vacuum(pParse);} break; - case 256: /* cmd ::= PRAGMA nm dbnm */ + case 252: /* cmd ::= PRAGMA nm dbnm */ {sqlite3Pragma(pParse,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy0,0,0);} break; - case 257: /* cmd ::= PRAGMA nm dbnm EQ nmnum */ + case 253: /* cmd ::= PRAGMA nm dbnm EQ nmnum */ {sqlite3Pragma(pParse,&yymsp[-3].minor.yy0,&yymsp[-2].minor.yy0,&yymsp[0].minor.yy0,0);} break; - case 258: /* cmd ::= PRAGMA nm dbnm LP nmnum RP */ + case 254: /* cmd ::= PRAGMA nm dbnm LP nmnum RP */ {sqlite3Pragma(pParse,&yymsp[-4].minor.yy0,&yymsp[-3].minor.yy0,&yymsp[-1].minor.yy0,0);} break; - case 259: /* cmd ::= PRAGMA nm dbnm EQ minus_num */ + case 255: /* cmd ::= PRAGMA nm dbnm EQ minus_num */ {sqlite3Pragma(pParse,&yymsp[-3].minor.yy0,&yymsp[-2].minor.yy0,&yymsp[0].minor.yy0,1);} break; - case 260: /* cmd ::= PRAGMA nm dbnm LP minus_num RP */ + case 256: /* cmd ::= PRAGMA nm dbnm LP minus_num RP */ {sqlite3Pragma(pParse,&yymsp[-4].minor.yy0,&yymsp[-3].minor.yy0,&yymsp[-1].minor.yy0,1);} break; - case 271: /* cmd ::= createkw trigger_decl BEGIN trigger_cmd_list END */ + case 265: /* cmd ::= createkw trigger_decl BEGIN trigger_cmd_list END */ { Token all; all.z = yymsp[-3].minor.yy0.z; all.n = (int)(yymsp[0].minor.yy0.z - yymsp[-3].minor.yy0.z) + yymsp[0].minor.yy0.n; - sqlite3FinishTrigger(pParse, yymsp[-1].minor.yy473, &all); + sqlite3FinishTrigger(pParse, yymsp[-1].minor.yy203, &all); } break; - case 272: /* trigger_decl ::= temp TRIGGER ifnotexists nm dbnm trigger_time trigger_event ON fullname foreach_clause when_clause */ + case 266: /* trigger_decl ::= temp TRIGGER ifnotexists nm dbnm trigger_time trigger_event ON fullname foreach_clause when_clause */ { - sqlite3BeginTrigger(pParse, &yymsp[-7].minor.yy0, &yymsp[-6].minor.yy0, yymsp[-5].minor.yy328, yymsp[-4].minor.yy378.a, yymsp[-4].minor.yy378.b, yymsp[-2].minor.yy65, yymsp[0].minor.yy132, yymsp[-10].minor.yy328, yymsp[-8].minor.yy328); + sqlite3BeginTrigger(pParse, &yymsp[-7].minor.yy0, &yymsp[-6].minor.yy0, yymsp[-5].minor.yy4, yymsp[-4].minor.yy90.a, yymsp[-4].minor.yy90.b, yymsp[-2].minor.yy259, yymsp[0].minor.yy314, yymsp[-10].minor.yy4, yymsp[-8].minor.yy4); yygotominor.yy0 = (yymsp[-6].minor.yy0.n==0?yymsp[-7].minor.yy0:yymsp[-6].minor.yy0); } break; - case 273: /* trigger_time ::= BEFORE */ - case 276: /* trigger_time ::= */ yytestcase(yyruleno==276); -{ yygotominor.yy328 = TK_BEFORE; } - break; - case 274: /* trigger_time ::= AFTER */ -{ yygotominor.yy328 = TK_AFTER; } - break; - case 275: /* trigger_time ::= INSTEAD OF */ -{ yygotominor.yy328 = TK_INSTEAD;} - break; - case 277: /* trigger_event ::= DELETE|INSERT */ - case 278: /* trigger_event ::= UPDATE */ yytestcase(yyruleno==278); -{yygotominor.yy378.a = yymsp[0].major; yygotominor.yy378.b = 0;} - break; - case 279: /* trigger_event ::= UPDATE OF inscollist */ -{yygotominor.yy378.a = TK_UPDATE; yygotominor.yy378.b = yymsp[0].minor.yy408;} - break; - case 282: /* when_clause ::= */ - case 304: /* key_opt ::= */ yytestcase(yyruleno==304); -{ yygotominor.yy132 = 0; } - break; - case 283: /* when_clause ::= WHEN expr */ - case 305: /* key_opt ::= KEY expr */ yytestcase(yyruleno==305); -{ yygotominor.yy132 = yymsp[0].minor.yy346.pExpr; } - break; - case 284: /* trigger_cmd_list ::= trigger_cmd_list trigger_cmd SEMI */ -{ - assert( yymsp[-2].minor.yy473!=0 ); - yymsp[-2].minor.yy473->pLast->pNext = yymsp[-1].minor.yy473; - yymsp[-2].minor.yy473->pLast = yymsp[-1].minor.yy473; - yygotominor.yy473 = yymsp[-2].minor.yy473; -} - 
break; - case 285: /* trigger_cmd_list ::= trigger_cmd SEMI */ -{ - assert( yymsp[-1].minor.yy473!=0 ); - yymsp[-1].minor.yy473->pLast = yymsp[-1].minor.yy473; - yygotominor.yy473 = yymsp[-1].minor.yy473; -} - break; - case 287: /* trnm ::= nm DOT nm */ + case 267: /* trigger_time ::= BEFORE */ + case 270: /* trigger_time ::= */ yytestcase(yyruleno==270); +{ yygotominor.yy4 = TK_BEFORE; } + break; + case 268: /* trigger_time ::= AFTER */ +{ yygotominor.yy4 = TK_AFTER; } + break; + case 269: /* trigger_time ::= INSTEAD OF */ +{ yygotominor.yy4 = TK_INSTEAD;} + break; + case 271: /* trigger_event ::= DELETE|INSERT */ + case 272: /* trigger_event ::= UPDATE */ yytestcase(yyruleno==272); +{yygotominor.yy90.a = yymsp[0].major; yygotominor.yy90.b = 0;} + break; + case 273: /* trigger_event ::= UPDATE OF idlist */ +{yygotominor.yy90.a = TK_UPDATE; yygotominor.yy90.b = yymsp[0].minor.yy384;} + break; + case 276: /* when_clause ::= */ + case 297: /* key_opt ::= */ yytestcase(yyruleno==297); +{ yygotominor.yy314 = 0; } + break; + case 277: /* when_clause ::= WHEN expr */ + case 298: /* key_opt ::= KEY expr */ yytestcase(yyruleno==298); +{ yygotominor.yy314 = yymsp[0].minor.yy118.pExpr; } + break; + case 278: /* trigger_cmd_list ::= trigger_cmd_list trigger_cmd SEMI */ +{ + assert( yymsp[-2].minor.yy203!=0 ); + yymsp[-2].minor.yy203->pLast->pNext = yymsp[-1].minor.yy203; + yymsp[-2].minor.yy203->pLast = yymsp[-1].minor.yy203; + yygotominor.yy203 = yymsp[-2].minor.yy203; +} + break; + case 279: /* trigger_cmd_list ::= trigger_cmd SEMI */ +{ + assert( yymsp[-1].minor.yy203!=0 ); + yymsp[-1].minor.yy203->pLast = yymsp[-1].minor.yy203; + yygotominor.yy203 = yymsp[-1].minor.yy203; +} + break; + case 281: /* trnm ::= nm DOT nm */ { yygotominor.yy0 = yymsp[0].minor.yy0; sqlite3ErrorMsg(pParse, "qualified table names are not allowed on INSERT, UPDATE, and DELETE " "statements within triggers"); } break; - case 289: /* tridxby ::= INDEXED BY nm */ + case 283: /* tridxby ::= INDEXED BY nm */ { sqlite3ErrorMsg(pParse, "the INDEXED BY clause is not allowed on UPDATE or DELETE statements " "within triggers"); } break; - case 290: /* tridxby ::= NOT INDEXED */ + case 284: /* tridxby ::= NOT INDEXED */ { sqlite3ErrorMsg(pParse, "the NOT INDEXED clause is not allowed on UPDATE or DELETE statements " "within triggers"); } break; - case 291: /* trigger_cmd ::= UPDATE orconf trnm tridxby SET setlist where_opt */ -{ yygotominor.yy473 = sqlite3TriggerUpdateStep(pParse->db, &yymsp[-4].minor.yy0, yymsp[-1].minor.yy14, yymsp[0].minor.yy132, yymsp[-5].minor.yy186); } - break; - case 292: /* trigger_cmd ::= insert_cmd INTO trnm inscollist_opt VALUES LP itemlist RP */ -{yygotominor.yy473 = sqlite3TriggerInsertStep(pParse->db, &yymsp[-5].minor.yy0, yymsp[-4].minor.yy408, yymsp[-1].minor.yy14, 0, yymsp[-7].minor.yy186);} - break; - case 293: /* trigger_cmd ::= insert_cmd INTO trnm inscollist_opt select */ -{yygotominor.yy473 = sqlite3TriggerInsertStep(pParse->db, &yymsp[-2].minor.yy0, yymsp[-1].minor.yy408, 0, yymsp[0].minor.yy3, yymsp[-4].minor.yy186);} - break; - case 294: /* trigger_cmd ::= DELETE FROM trnm tridxby where_opt */ -{yygotominor.yy473 = sqlite3TriggerDeleteStep(pParse->db, &yymsp[-2].minor.yy0, yymsp[0].minor.yy132);} - break; - case 295: /* trigger_cmd ::= select */ -{yygotominor.yy473 = sqlite3TriggerSelectStep(pParse->db, yymsp[0].minor.yy3); } - break; - case 296: /* expr ::= RAISE LP IGNORE RP */ -{ - yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_RAISE, 0, 0, 0); - if( yygotominor.yy346.pExpr ){ - 
yygotominor.yy346.pExpr->affinity = OE_Ignore; - } - yygotominor.yy346.zStart = yymsp[-3].minor.yy0.z; - yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; -} - break; - case 297: /* expr ::= RAISE LP raisetype COMMA nm RP */ -{ - yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_RAISE, 0, 0, &yymsp[-1].minor.yy0); - if( yygotominor.yy346.pExpr ) { - yygotominor.yy346.pExpr->affinity = (char)yymsp[-3].minor.yy328; - } - yygotominor.yy346.zStart = yymsp[-5].minor.yy0.z; - yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; -} - break; - case 298: /* raisetype ::= ROLLBACK */ -{yygotominor.yy328 = OE_Rollback;} - break; - case 300: /* raisetype ::= FAIL */ -{yygotominor.yy328 = OE_Fail;} - break; - case 301: /* cmd ::= DROP TRIGGER ifexists fullname */ -{ - sqlite3DropTrigger(pParse,yymsp[0].minor.yy65,yymsp[-1].minor.yy328); -} - break; - case 302: /* cmd ::= ATTACH database_kw_opt expr AS expr key_opt */ -{ - sqlite3Attach(pParse, yymsp[-3].minor.yy346.pExpr, yymsp[-1].minor.yy346.pExpr, yymsp[0].minor.yy132); -} - break; - case 303: /* cmd ::= DETACH database_kw_opt expr */ -{ - sqlite3Detach(pParse, yymsp[0].minor.yy346.pExpr); -} - break; - case 308: /* cmd ::= REINDEX */ + case 285: /* trigger_cmd ::= UPDATE orconf trnm tridxby SET setlist where_opt */ +{ yygotominor.yy203 = sqlite3TriggerUpdateStep(pParse->db, &yymsp[-4].minor.yy0, yymsp[-1].minor.yy322, yymsp[0].minor.yy314, yymsp[-5].minor.yy4); } + break; + case 286: /* trigger_cmd ::= insert_cmd INTO trnm idlist_opt select */ +{yygotominor.yy203 = sqlite3TriggerInsertStep(pParse->db, &yymsp[-2].minor.yy0, yymsp[-1].minor.yy384, yymsp[0].minor.yy387, yymsp[-4].minor.yy4);} + break; + case 287: /* trigger_cmd ::= DELETE FROM trnm tridxby where_opt */ +{yygotominor.yy203 = sqlite3TriggerDeleteStep(pParse->db, &yymsp[-2].minor.yy0, yymsp[0].minor.yy314);} + break; + case 288: /* trigger_cmd ::= select */ +{yygotominor.yy203 = sqlite3TriggerSelectStep(pParse->db, yymsp[0].minor.yy387); } + break; + case 289: /* expr ::= RAISE LP IGNORE RP */ +{ + yygotominor.yy118.pExpr = sqlite3PExpr(pParse, TK_RAISE, 0, 0, 0); + if( yygotominor.yy118.pExpr ){ + yygotominor.yy118.pExpr->affinity = OE_Ignore; + } + yygotominor.yy118.zStart = yymsp[-3].minor.yy0.z; + yygotominor.yy118.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; +} + break; + case 290: /* expr ::= RAISE LP raisetype COMMA nm RP */ +{ + yygotominor.yy118.pExpr = sqlite3PExpr(pParse, TK_RAISE, 0, 0, &yymsp[-1].minor.yy0); + if( yygotominor.yy118.pExpr ) { + yygotominor.yy118.pExpr->affinity = (char)yymsp[-3].minor.yy4; + } + yygotominor.yy118.zStart = yymsp[-5].minor.yy0.z; + yygotominor.yy118.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n]; +} + break; + case 291: /* raisetype ::= ROLLBACK */ +{yygotominor.yy4 = OE_Rollback;} + break; + case 293: /* raisetype ::= FAIL */ +{yygotominor.yy4 = OE_Fail;} + break; + case 294: /* cmd ::= DROP TRIGGER ifexists fullname */ +{ + sqlite3DropTrigger(pParse,yymsp[0].minor.yy259,yymsp[-1].minor.yy4); +} + break; + case 295: /* cmd ::= ATTACH database_kw_opt expr AS expr key_opt */ +{ + sqlite3Attach(pParse, yymsp[-3].minor.yy118.pExpr, yymsp[-1].minor.yy118.pExpr, yymsp[0].minor.yy314); +} + break; + case 296: /* cmd ::= DETACH database_kw_opt expr */ +{ + sqlite3Detach(pParse, yymsp[0].minor.yy118.pExpr); +} + break; + case 301: /* cmd ::= REINDEX */ {sqlite3Reindex(pParse, 0, 0);} break; - case 309: /* cmd ::= REINDEX nm dbnm */ + case 302: /* cmd ::= REINDEX nm dbnm */ {sqlite3Reindex(pParse, 
&yymsp[-1].minor.yy0, &yymsp[0].minor.yy0);} break; - case 310: /* cmd ::= ANALYZE */ + case 303: /* cmd ::= ANALYZE */ {sqlite3Analyze(pParse, 0, 0);} break; - case 311: /* cmd ::= ANALYZE nm dbnm */ + case 304: /* cmd ::= ANALYZE nm dbnm */ {sqlite3Analyze(pParse, &yymsp[-1].minor.yy0, &yymsp[0].minor.yy0);} break; - case 312: /* cmd ::= ALTER TABLE fullname RENAME TO nm */ + case 305: /* cmd ::= ALTER TABLE fullname RENAME TO nm */ { - sqlite3AlterRenameTable(pParse,yymsp[-3].minor.yy65,&yymsp[0].minor.yy0); + sqlite3AlterRenameTable(pParse,yymsp[-3].minor.yy259,&yymsp[0].minor.yy0); } break; - case 313: /* cmd ::= ALTER TABLE add_column_fullname ADD kwcolumn_opt column */ + case 306: /* cmd ::= ALTER TABLE add_column_fullname ADD kwcolumn_opt column */ { sqlite3AlterFinishAddColumn(pParse, &yymsp[0].minor.yy0); } break; - case 314: /* add_column_fullname ::= fullname */ + case 307: /* add_column_fullname ::= fullname */ { - pParse->db->lookaside.bEnabled = 0; - sqlite3AlterBeginAddColumn(pParse, yymsp[0].minor.yy65); + disableLookaside(pParse); + sqlite3AlterBeginAddColumn(pParse, yymsp[0].minor.yy259); } break; - case 317: /* cmd ::= create_vtab */ + case 310: /* cmd ::= create_vtab */ {sqlite3VtabFinishParse(pParse,0);} break; - case 318: /* cmd ::= create_vtab LP vtabarglist RP */ + case 311: /* cmd ::= create_vtab LP vtabarglist RP */ {sqlite3VtabFinishParse(pParse,&yymsp[0].minor.yy0);} break; - case 319: /* create_vtab ::= createkw VIRTUAL TABLE nm dbnm USING nm */ + case 312: /* create_vtab ::= createkw VIRTUAL TABLE ifnotexists nm dbnm USING nm */ { - sqlite3VtabBeginParse(pParse, &yymsp[-3].minor.yy0, &yymsp[-2].minor.yy0, &yymsp[0].minor.yy0); + sqlite3VtabBeginParse(pParse, &yymsp[-3].minor.yy0, &yymsp[-2].minor.yy0, &yymsp[0].minor.yy0, yymsp[-4].minor.yy4); } break; - case 322: /* vtabarg ::= */ + case 315: /* vtabarg ::= */ {sqlite3VtabArgInit(pParse);} break; - case 324: /* vtabargtoken ::= ANY */ - case 325: /* vtabargtoken ::= lp anylist RP */ yytestcase(yyruleno==325); - case 326: /* lp ::= LP */ yytestcase(yyruleno==326); + case 317: /* vtabargtoken ::= ANY */ + case 318: /* vtabargtoken ::= lp anylist RP */ yytestcase(yyruleno==318); + case 319: /* lp ::= LP */ yytestcase(yyruleno==319); {sqlite3VtabArgExtend(pParse,&yymsp[0].minor.yy0);} + break; + case 323: /* with ::= */ +{yygotominor.yy451 = 0;} + break; + case 324: /* with ::= WITH wqlist */ + case 325: /* with ::= WITH RECURSIVE wqlist */ yytestcase(yyruleno==325); +{ yygotominor.yy451 = yymsp[0].minor.yy451; } + break; + case 326: /* wqlist ::= nm eidlist_opt AS LP select RP */ +{ + yygotominor.yy451 = sqlite3WithAdd(pParse, 0, &yymsp[-5].minor.yy0, yymsp[-4].minor.yy322, yymsp[-1].minor.yy387); +} + break; + case 327: /* wqlist ::= wqlist COMMA nm eidlist_opt AS LP select RP */ +{ + yygotominor.yy451 = sqlite3WithAdd(pParse, yymsp[-7].minor.yy451, &yymsp[-5].minor.yy0, yymsp[-4].minor.yy322, yymsp[-1].minor.yy387); +} break; default: /* (0) input ::= cmdlist */ yytestcase(yyruleno==0); /* (1) cmdlist ::= cmdlist ecmd */ yytestcase(yyruleno==1); /* (2) cmdlist ::= ecmd */ yytestcase(yyruleno==2); /* (3) ecmd ::= SEMI */ yytestcase(yyruleno==3); /* (4) ecmd ::= explain cmdx SEMI */ yytestcase(yyruleno==4); + /* (5) explain ::= */ yytestcase(yyruleno==5); /* (10) trans_opt ::= */ yytestcase(yyruleno==10); /* (11) trans_opt ::= TRANSACTION */ yytestcase(yyruleno==11); /* (12) trans_opt ::= TRANSACTION nm */ yytestcase(yyruleno==12); /* (20) savepoint_opt ::= SAVEPOINT */ yytestcase(yyruleno==20); /* (21) 
savepoint_opt ::= */ yytestcase(yyruleno==21); /* (25) cmd ::= create_table create_table_args */ yytestcase(yyruleno==25); - /* (34) columnlist ::= columnlist COMMA column */ yytestcase(yyruleno==34); - /* (35) columnlist ::= column */ yytestcase(yyruleno==35); - /* (44) type ::= */ yytestcase(yyruleno==44); - /* (51) signed ::= plus_num */ yytestcase(yyruleno==51); - /* (52) signed ::= minus_num */ yytestcase(yyruleno==52); - /* (53) carglist ::= carglist carg */ yytestcase(yyruleno==53); - /* (54) carglist ::= */ yytestcase(yyruleno==54); - /* (55) carg ::= CONSTRAINT nm ccons */ yytestcase(yyruleno==55); - /* (56) carg ::= ccons */ yytestcase(yyruleno==56); - /* (62) ccons ::= NULL onconf */ yytestcase(yyruleno==62); - /* (90) conslist ::= conslist COMMA tcons */ yytestcase(yyruleno==90); - /* (91) conslist ::= conslist tcons */ yytestcase(yyruleno==91); - /* (92) conslist ::= tcons */ yytestcase(yyruleno==92); - /* (93) tcons ::= CONSTRAINT nm */ yytestcase(yyruleno==93); - /* (269) plus_opt ::= PLUS */ yytestcase(yyruleno==269); - /* (270) plus_opt ::= */ yytestcase(yyruleno==270); - /* (280) foreach_clause ::= */ yytestcase(yyruleno==280); - /* (281) foreach_clause ::= FOR EACH ROW */ yytestcase(yyruleno==281); - /* (288) tridxby ::= */ yytestcase(yyruleno==288); - /* (306) database_kw_opt ::= DATABASE */ yytestcase(yyruleno==306); - /* (307) database_kw_opt ::= */ yytestcase(yyruleno==307); - /* (315) kwcolumn_opt ::= */ yytestcase(yyruleno==315); - /* (316) kwcolumn_opt ::= COLUMNKW */ yytestcase(yyruleno==316); - /* (320) vtabarglist ::= vtabarg */ yytestcase(yyruleno==320); - /* (321) vtabarglist ::= vtabarglist COMMA vtabarg */ yytestcase(yyruleno==321); - /* (323) vtabarg ::= vtabarg vtabargtoken */ yytestcase(yyruleno==323); - /* (327) anylist ::= */ yytestcase(yyruleno==327); - /* (328) anylist ::= anylist LP anylist RP */ yytestcase(yyruleno==328); - /* (329) anylist ::= anylist ANY */ yytestcase(yyruleno==329); + /* (36) columnlist ::= columnlist COMMA column */ yytestcase(yyruleno==36); + /* (37) columnlist ::= column */ yytestcase(yyruleno==37); + /* (43) type ::= */ yytestcase(yyruleno==43); + /* (50) signed ::= plus_num */ yytestcase(yyruleno==50); + /* (51) signed ::= minus_num */ yytestcase(yyruleno==51); + /* (52) carglist ::= carglist ccons */ yytestcase(yyruleno==52); + /* (53) carglist ::= */ yytestcase(yyruleno==53); + /* (60) ccons ::= NULL onconf */ yytestcase(yyruleno==60); + /* (88) conslist ::= conslist tconscomma tcons */ yytestcase(yyruleno==88); + /* (89) conslist ::= tcons */ yytestcase(yyruleno==89); + /* (91) tconscomma ::= */ yytestcase(yyruleno==91); + /* (274) foreach_clause ::= */ yytestcase(yyruleno==274); + /* (275) foreach_clause ::= FOR EACH ROW */ yytestcase(yyruleno==275); + /* (282) tridxby ::= */ yytestcase(yyruleno==282); + /* (299) database_kw_opt ::= DATABASE */ yytestcase(yyruleno==299); + /* (300) database_kw_opt ::= */ yytestcase(yyruleno==300); + /* (308) kwcolumn_opt ::= */ yytestcase(yyruleno==308); + /* (309) kwcolumn_opt ::= COLUMNKW */ yytestcase(yyruleno==309); + /* (313) vtabarglist ::= vtabarg */ yytestcase(yyruleno==313); + /* (314) vtabarglist ::= vtabarglist COMMA vtabarg */ yytestcase(yyruleno==314); + /* (316) vtabarg ::= vtabarg vtabargtoken */ yytestcase(yyruleno==316); + /* (320) anylist ::= */ yytestcase(yyruleno==320); + /* (321) anylist ::= anylist LP anylist RP */ yytestcase(yyruleno==321); + /* (322) anylist ::= anylist ANY */ yytestcase(yyruleno==322); break; +/********** End reduce actions 
************************************************/ }; + assert( yyruleno>=0 && yyruleno<sizeof(yyRuleInfo)/sizeof(yyRuleInfo[0]) ); yygoto = yyRuleInfo[yyruleno].lhs; yysize = yyRuleInfo[yyruleno].nrhs; yypParser->yyidx -= yysize; yyact = yy_find_reduce_action(yymsp[-yysize].stateno,(YYCODETYPE)yygoto); - if( yyact < YYNSTATE ){ -#ifdef NDEBUG - /* If we are not debugging and the reduce action popped at least + if( yyact <= YY_MAX_SHIFTREDUCE ){ + if( yyact>YY_MAX_SHIFT ) yyact += YY_MIN_REDUCE - YY_MIN_SHIFTREDUCE; + /* If the reduce action popped at least ** one element off the stack, then we can push the new element back ** onto the stack here, and skip the stack overflow test in yy_shift(). ** That gives a significant speed improvement. */ if( yysize ){ yypParser->yyidx++; yymsp -= yysize-1; yymsp->stateno = (YYACTIONTYPE)yyact; yymsp->major = (YYCODETYPE)yygoto; yymsp->minor = yygotominor; - }else -#endif - { + yyTraceShift(yypParser, yyact); + }else{ yy_shift(yypParser,yyact,yygoto,&yygotominor); } }else{ - assert( yyact == YYNSTATE + YYNRULE + 1 ); + assert( yyact == YY_ACCEPT_ACTION ); yy_accept(yypParser); } } /* @@ -95772,10 +131272,12 @@ } #endif while( yypParser->yyidx>=0 ) yy_pop_parser_stack(yypParser); /* Here code is inserted which will be executed whenever the ** parser fails */ +/************ Begin %parse_failure code ***************************************/ +/************ End %parse_failure code *****************************************/ sqlite3ParserARG_STORE; /* Suppress warning about unused %extra_argument variable */ } #endif /* YYNOERRORRECOVERY */ /* @@ -95786,15 +131288,16 @@ int yymajor, /* The major type of the error token */ YYMINORTYPE yyminor /* The minor type of the error token */ ){ sqlite3ParserARG_FETCH; #define TOKEN (yyminor.yy0) +/************ Begin %syntax_error code ****************************************/ UNUSED_PARAMETER(yymajor); /* Silence some compiler warnings */ assert( TOKEN.z[0] ); /* The tokenizer always gives us a token */ sqlite3ErrorMsg(pParse, "near \"%T\": syntax error", &TOKEN); - pParse->parseError = 1; +/************ End %syntax_error code ******************************************/ sqlite3ParserARG_STORE; /* Suppress warning about unused %extra_argument variable */ } /* ** The following is executed when the parser accepts @@ -95809,10 +131312,12 @@ } #endif while( yypParser->yyidx>=0 ) yy_pop_parser_stack(yypParser); /* Here code is inserted which will be executed whenever the ** parser accepts */ +/*********** Begin %parse_accept code *****************************************/ +/*********** End %parse_accept code *******************************************/ sqlite3ParserARG_STORE; /* Suppress warning about unused %extra_argument variable */ } /* The main parser program. ** The first argument is a pointer to a structure obtained from @@ -95839,11 +131344,13 @@ sqlite3ParserTOKENTYPE yyminor /* The value for the token */ sqlite3ParserARG_PDECL /* Optional %extra_argument parameter */ ){ YYMINORTYPE yyminorunion; int yyact; /* The parser action. */ +#if !defined(YYERRORSYMBOL) && !defined(YYNOERRORRECOVERY) int yyendofinput; /* True if we are at the end of input */ +#endif #ifdef YYERRORSYMBOL int yyerrorhit = 0; /* True if yymajor has invoked an error */ #endif yyParser *yypParser; /* The parser */ @@ -95860,30 +131367,38 @@ #endif yypParser->yyidx = 0; yypParser->yyerrcnt = -1; yypParser->yystack[0].stateno = 0; yypParser->yystack[0].major = 0; +#ifndef NDEBUG + if( yyTraceFILE ){ + fprintf(yyTraceFILE,"%sInitialize. Empty stack. 
State 0\n", + yyTracePrompt); + } +#endif } yyminorunion.yy0 = yyminor; +#if !defined(YYERRORSYMBOL) && !defined(YYNOERRORRECOVERY) yyendofinput = (yymajor==0); +#endif sqlite3ParserARG_STORE; #ifndef NDEBUG if( yyTraceFILE ){ - fprintf(yyTraceFILE,"%sInput %s\n",yyTracePrompt,yyTokenName[yymajor]); + fprintf(yyTraceFILE,"%sInput '%s'\n",yyTracePrompt,yyTokenName[yymajor]); } #endif do{ yyact = yy_find_shift_action(yypParser,(YYCODETYPE)yymajor); - if( yyact<YYNSTATE ){ - assert( !yyendofinput ); /* Impossible to shift the $ token */ + if( yyact <= YY_MAX_SHIFTREDUCE ){ + if( yyact > YY_MAX_SHIFT ) yyact += YY_MIN_REDUCE - YY_MIN_SHIFTREDUCE; yy_shift(yypParser,yyact,yymajor,&yyminorunion); yypParser->yyerrcnt--; yymajor = YYNOCODE; - }else if( yyact < YYNSTATE + YYNRULE ){ - yy_reduce(yypParser,yyact-YYNSTATE); + }else if( yyact <= YY_MAX_REDUCE ){ + yy_reduce(yypParser,yyact-YY_MIN_REDUCE); }else{ assert( yyact == YY_ERROR_ACTION ); #ifdef YYERRORSYMBOL int yymx; #endif @@ -95929,11 +131444,11 @@ while( yypParser->yyidx >= 0 && yymx != YYERRORSYMBOL && (yyact = yy_find_reduce_action( yypParser->yystack[yypParser->yyidx].stateno, - YYERRORSYMBOL)) >= YYNSTATE + YYERRORSYMBOL)) >= YY_MIN_REDUCE ){ yy_pop_parser_stack(yypParser); } if( yypParser->yyidx < 0 || yymajor==0 ){ yy_destructor(yypParser,(YYCODETYPE)yymajor,&yyminorunion); @@ -95979,10 +131494,20 @@ } yymajor = YYNOCODE; #endif } }while( yymajor!=YYNOCODE && yypParser->yyidx>=0 ); +#ifndef NDEBUG + if( yyTraceFILE ){ + int i; + fprintf(yyTraceFILE,"%sReturn. Stack=",yyTracePrompt); + for(i=1; i<=yypParser->yyidx; i++) + fprintf(yyTraceFILE,"%c%s", i==1 ? '[' : ' ', + yyTokenName[yypParser->yystack[i].major]); + fprintf(yyTraceFILE,"]\n"); + } +#endif return; } /************** End of parse.c ***********************************************/ /************** Begin file tokenize.c ****************************************/ @@ -96001,17 +131526,99 @@ ** ** This file contains C code that splits an SQL input string up into ** individual tokens and sends those tokens one-by-one over to the ** parser for analysis. */ +/* #include "sqliteInt.h" */ +/* #include <stdlib.h> */ + +/* Character classes for tokenizing +** +** In the sqlite3GetToken() function, a switch() on aiClass[c] is implemented +** using a lookup table, whereas a switch() directly on c uses a binary search. +** The lookup table is much faster. To maximize speed, and to ensure that +** a lookup table is used, all of the classes need to be small integers and +** all of them need to be used within the switch. +*/ +#define CC_X 0 /* The letter 'x', or start of BLOB literal */ +#define CC_KYWD 1 /* Alphabetics or '_'. Usable in a keyword */ +#define CC_ID 2 /* unicode characters usable in IDs */ +#define CC_DIGIT 3 /* Digits */ +#define CC_DOLLAR 4 /* '$' */ +#define CC_VARALPHA 5 /* '@', '#', ':'. Alphabetic SQL variables */ +#define CC_VARNUM 6 /* '?'. Numeric SQL variables */ +#define CC_SPACE 7 /* Space characters */ +#define CC_QUOTE 8 /* '"', '\'', or '`'. String literals, quoted ids */ +#define CC_QUOTE2 9 /* '['. [...] style quoted ids */ +#define CC_PIPE 10 /* '|'. Bitwise OR or concatenate */ +#define CC_MINUS 11 /* '-'. Minus or SQL-style comment */ +#define CC_LT 12 /* '<'. Part of < or <= or <> */ +#define CC_GT 13 /* '>'. Part of > or >= */ +#define CC_EQ 14 /* '='. Part of = or == */ +#define CC_BANG 15 /* '!'. Part of != */ +#define CC_SLASH 16 /* '/'. 
/ or c-style comment */ +#define CC_LP 17 /* '(' */ +#define CC_RP 18 /* ')' */ +#define CC_SEMI 19 /* ';' */ +#define CC_PLUS 20 /* '+' */ +#define CC_STAR 21 /* '*' */ +#define CC_PERCENT 22 /* '%' */ +#define CC_COMMA 23 /* ',' */ +#define CC_AND 24 /* '&' */ +#define CC_TILDA 25 /* '~' */ +#define CC_DOT 26 /* '.' */ +#define CC_ILLEGAL 27 /* Illegal character */ + +static const unsigned char aiClass[] = { +#ifdef SQLITE_ASCII +/* x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xa xb xc xd xe xf */ +/* 0x */ 27, 27, 27, 27, 27, 27, 27, 27, 27, 7, 7, 27, 7, 7, 27, 27, +/* 1x */ 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, +/* 2x */ 7, 15, 8, 5, 4, 22, 24, 8, 17, 18, 21, 20, 23, 11, 26, 16, +/* 3x */ 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 5, 19, 12, 14, 13, 6, +/* 4x */ 5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 5x */ 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 9, 27, 27, 27, 1, +/* 6x */ 8, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, +/* 7x */ 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 27, 10, 27, 25, 27, +/* 8x */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, +/* 9x */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, +/* Ax */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, +/* Bx */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, +/* Cx */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, +/* Dx */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, +/* Ex */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, +/* Fx */ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 +#endif +#ifdef SQLITE_EBCDIC +/* x0 x1 x2 x3 x4 x5 x6 x7 x8 x9 xa xb xc xd xe xf */ +/* 0x */ 27, 27, 27, 27, 27, 7, 27, 27, 27, 27, 27, 27, 7, 7, 27, 27, +/* 1x */ 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, +/* 2x */ 27, 27, 27, 27, 27, 7, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, +/* 3x */ 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, +/* 4x */ 7, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 12, 17, 20, 10, +/* 5x */ 24, 27, 27, 27, 27, 27, 27, 27, 27, 27, 15, 4, 21, 18, 19, 27, +/* 6x */ 11, 16, 27, 27, 27, 27, 27, 27, 27, 27, 27, 23, 22, 1, 13, 7, +/* 7x */ 27, 27, 27, 27, 27, 27, 27, 27, 27, 8, 5, 5, 5, 8, 14, 8, +/* 8x */ 27, 1, 1, 1, 1, 1, 1, 1, 1, 1, 27, 27, 27, 27, 27, 27, +/* 9x */ 27, 1, 1, 1, 1, 1, 1, 1, 1, 1, 27, 27, 27, 27, 27, 27, +/* 9x */ 25, 1, 1, 1, 1, 1, 1, 0, 1, 1, 27, 27, 27, 27, 27, 27, +/* Bx */ 27, 27, 27, 27, 27, 27, 27, 27, 27, 27, 9, 27, 27, 27, 27, 27, +/* Cx */ 27, 1, 1, 1, 1, 1, 1, 1, 1, 1, 27, 27, 27, 27, 27, 27, +/* Dx */ 27, 1, 1, 1, 1, 1, 1, 1, 1, 1, 27, 27, 27, 27, 27, 27, +/* Ex */ 27, 27, 1, 1, 1, 1, 1, 0, 1, 1, 27, 27, 27, 27, 27, 27, +/* Fx */ 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 27, 27, 27, 27, 27, 27, +#endif +}; /* -** The charMap() macro maps alphabetic characters into their +** The charMap() macro maps alphabetic characters (only) into their ** lower-case ASCII equivalent. On ASCII machines, this is just ** an upper-to-lower case map. On EBCDIC machines we also need -** to adjust the encoding. Only alphabetic characters and underscores -** need to be translated. +** to adjust the encoding. The mapping is only valid for alphabetics +** which are the only characters for which this feature is used. +** +** Used by keywordhash.h */ #ifdef SQLITE_ASCII # define charMap(X) sqlite3UpperToLower[(unsigned char)X] #endif #ifdef SQLITE_EBCDIC @@ -96041,11 +131648,11 @@ ** The sqlite3KeywordCode function looks up an identifier to determine if ** it is a keyword. If it is a keyword, the token code of that keyword is ** returned. If the input is not a keyword, TK_ID is returned. 
** ** The implementation of this routine was generated by a program, -** mkkeywordhash.h, located in the tool subdirectory of the distribution. +** mkkeywordhash.c, located in the tool subdirectory of the distribution. ** The output of the mkkeywordhash.c program is written into a file ** named keywordhash.h and then included into this source file by ** the #include below. */ /************** Include keywordhash.h in the middle of tokenize.c ************/ @@ -96061,24 +131668,24 @@ ** might be implemented more directly using a hand-written hash table. ** But by using this automatically generated code, the size of the code ** is substantially reduced. This is important for embedded applications ** on platforms with limited memory. */ -/* Hash score: 175 */ -static int keywordCode(const char *z, int n){ - /* zText[] encodes 811 bytes of keywords in 541 bytes */ +/* Hash score: 182 */ +static int keywordCode(const char *z, int n, int *pType){ + /* zText[] encodes 834 bytes of keywords in 554 bytes */ /* REINDEXEDESCAPEACHECKEYBEFOREIGNOREGEXPLAINSTEADDATABASELECT */ /* ABLEFTHENDEFERRABLELSEXCEPTRANSACTIONATURALTERAISEXCLUSIVE */ /* XISTSAVEPOINTERSECTRIGGEREFERENCESCONSTRAINTOFFSETEMPORARY */ - /* UNIQUERYATTACHAVINGROUPDATEBEGINNERELEASEBETWEENOTNULLIKE */ - /* CASCADELETECASECOLLATECREATECURRENT_DATEDETACHIMMEDIATEJOIN */ - /* SERTMATCHPLANALYZEPRAGMABORTVALUESVIRTUALIMITWHENWHERENAME */ - /* AFTEREPLACEANDEFAULTAUTOINCREMENTCASTCOLUMNCOMMITCONFLICTCROSS */ - /* CURRENT_TIMESTAMPRIMARYDEFERREDISTINCTDROPFAILFROMFULLGLOBYIF */ - /* ISNULLORDERESTRICTOUTERIGHTROLLBACKROWUNIONUSINGVACUUMVIEW */ - /* INITIALLY */ - static const char zText[540] = { + /* UNIQUERYWITHOUTERELEASEATTACHAVINGROUPDATEBEGINNERECURSIVE */ + /* BETWEENOTNULLIKECASCADELETECASECOLLATECREATECURRENT_DATEDETACH */ + /* IMMEDIATEJOINSERTMATCHPLANALYZEPRAGMABORTVALUESVIRTUALIMITWHEN */ + /* WHERENAMEAFTEREPLACEANDEFAULTAUTOINCREMENTCASTCOLUMNCOMMIT */ + /* CONFLICTCROSSCURRENT_TIMESTAMPRIMARYDEFERREDISTINCTDROPFAIL */ + /* FROMFULLGLOBYIFISNULLORDERESTRICTRIGHTROLLBACKROWUNIONUSING */ + /* VACUUMVIEWINITIALLY */ + static const char zText[553] = { 'R','E','I','N','D','E','X','E','D','E','S','C','A','P','E','A','C','H', 'E','C','K','E','Y','B','E','F','O','R','E','I','G','N','O','R','E','G', 'E','X','P','L','A','I','N','S','T','E','A','D','D','A','T','A','B','A', 'S','E','L','E','C','T','A','B','L','E','F','T','H','E','N','D','E','F', 'E','R','R','A','B','L','E','L','S','E','X','C','E','P','T','R','A','N', @@ -96085,113 +131692,122 @@ 'S','A','C','T','I','O','N','A','T','U','R','A','L','T','E','R','A','I', 'S','E','X','C','L','U','S','I','V','E','X','I','S','T','S','A','V','E', 'P','O','I','N','T','E','R','S','E','C','T','R','I','G','G','E','R','E', 'F','E','R','E','N','C','E','S','C','O','N','S','T','R','A','I','N','T', 'O','F','F','S','E','T','E','M','P','O','R','A','R','Y','U','N','I','Q', - 'U','E','R','Y','A','T','T','A','C','H','A','V','I','N','G','R','O','U', - 'P','D','A','T','E','B','E','G','I','N','N','E','R','E','L','E','A','S', - 'E','B','E','T','W','E','E','N','O','T','N','U','L','L','I','K','E','C', - 'A','S','C','A','D','E','L','E','T','E','C','A','S','E','C','O','L','L', - 'A','T','E','C','R','E','A','T','E','C','U','R','R','E','N','T','_','D', - 'A','T','E','D','E','T','A','C','H','I','M','M','E','D','I','A','T','E', - 'J','O','I','N','S','E','R','T','M','A','T','C','H','P','L','A','N','A', - 'L','Y','Z','E','P','R','A','G','M','A','B','O','R','T','V','A','L','U', - 
'E','S','V','I','R','T','U','A','L','I','M','I','T','W','H','E','N','W', - 'H','E','R','E','N','A','M','E','A','F','T','E','R','E','P','L','A','C', - 'E','A','N','D','E','F','A','U','L','T','A','U','T','O','I','N','C','R', - 'E','M','E','N','T','C','A','S','T','C','O','L','U','M','N','C','O','M', - 'M','I','T','C','O','N','F','L','I','C','T','C','R','O','S','S','C','U', - 'R','R','E','N','T','_','T','I','M','E','S','T','A','M','P','R','I','M', - 'A','R','Y','D','E','F','E','R','R','E','D','I','S','T','I','N','C','T', - 'D','R','O','P','F','A','I','L','F','R','O','M','F','U','L','L','G','L', - 'O','B','Y','I','F','I','S','N','U','L','L','O','R','D','E','R','E','S', - 'T','R','I','C','T','O','U','T','E','R','I','G','H','T','R','O','L','L', - 'B','A','C','K','R','O','W','U','N','I','O','N','U','S','I','N','G','V', - 'A','C','U','U','M','V','I','E','W','I','N','I','T','I','A','L','L','Y', + 'U','E','R','Y','W','I','T','H','O','U','T','E','R','E','L','E','A','S', + 'E','A','T','T','A','C','H','A','V','I','N','G','R','O','U','P','D','A', + 'T','E','B','E','G','I','N','N','E','R','E','C','U','R','S','I','V','E', + 'B','E','T','W','E','E','N','O','T','N','U','L','L','I','K','E','C','A', + 'S','C','A','D','E','L','E','T','E','C','A','S','E','C','O','L','L','A', + 'T','E','C','R','E','A','T','E','C','U','R','R','E','N','T','_','D','A', + 'T','E','D','E','T','A','C','H','I','M','M','E','D','I','A','T','E','J', + 'O','I','N','S','E','R','T','M','A','T','C','H','P','L','A','N','A','L', + 'Y','Z','E','P','R','A','G','M','A','B','O','R','T','V','A','L','U','E', + 'S','V','I','R','T','U','A','L','I','M','I','T','W','H','E','N','W','H', + 'E','R','E','N','A','M','E','A','F','T','E','R','E','P','L','A','C','E', + 'A','N','D','E','F','A','U','L','T','A','U','T','O','I','N','C','R','E', + 'M','E','N','T','C','A','S','T','C','O','L','U','M','N','C','O','M','M', + 'I','T','C','O','N','F','L','I','C','T','C','R','O','S','S','C','U','R', + 'R','E','N','T','_','T','I','M','E','S','T','A','M','P','R','I','M','A', + 'R','Y','D','E','F','E','R','R','E','D','I','S','T','I','N','C','T','D', + 'R','O','P','F','A','I','L','F','R','O','M','F','U','L','L','G','L','O', + 'B','Y','I','F','I','S','N','U','L','L','O','R','D','E','R','E','S','T', + 'R','I','C','T','R','I','G','H','T','R','O','L','L','B','A','C','K','R', + 'O','W','U','N','I','O','N','U','S','I','N','G','V','A','C','U','U','M', + 'V','I','E','W','I','N','I','T','I','A','L','L','Y', }; static const unsigned char aHash[127] = { - 72, 101, 114, 70, 0, 45, 0, 0, 78, 0, 73, 0, 0, - 42, 12, 74, 15, 0, 113, 81, 50, 108, 0, 19, 0, 0, - 118, 0, 116, 111, 0, 22, 89, 0, 9, 0, 0, 66, 67, - 0, 65, 6, 0, 48, 86, 98, 0, 115, 97, 0, 0, 44, - 0, 99, 24, 0, 17, 0, 119, 49, 23, 0, 5, 106, 25, - 92, 0, 0, 121, 102, 56, 120, 53, 28, 51, 0, 87, 0, - 96, 26, 0, 95, 0, 0, 0, 91, 88, 93, 84, 105, 14, - 39, 104, 0, 77, 0, 18, 85, 107, 32, 0, 117, 76, 109, - 58, 46, 80, 0, 0, 90, 40, 0, 112, 0, 36, 0, 0, - 29, 0, 82, 59, 60, 0, 20, 57, 0, 52, + 76, 105, 117, 74, 0, 45, 0, 0, 82, 0, 77, 0, 0, + 42, 12, 78, 15, 0, 116, 85, 54, 112, 0, 19, 0, 0, + 121, 0, 119, 115, 0, 22, 93, 0, 9, 0, 0, 70, 71, + 0, 69, 6, 0, 48, 90, 102, 0, 118, 101, 0, 0, 44, + 0, 103, 24, 0, 17, 0, 122, 53, 23, 0, 5, 110, 25, + 96, 0, 0, 124, 106, 60, 123, 57, 28, 55, 0, 91, 0, + 100, 26, 0, 99, 0, 0, 0, 95, 92, 97, 88, 109, 14, + 39, 108, 0, 81, 0, 18, 89, 111, 32, 0, 120, 80, 113, + 62, 46, 84, 0, 0, 94, 40, 59, 114, 0, 36, 0, 0, + 29, 0, 86, 63, 64, 0, 20, 61, 0, 56, }; - static const unsigned char aNext[121] = { + 
static const unsigned char aNext[124] = { 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 13, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 33, 0, 21, 0, 0, 0, 43, 3, 47, - 0, 0, 0, 0, 30, 0, 54, 0, 38, 0, 0, 0, 1, - 62, 0, 0, 63, 0, 41, 0, 0, 0, 0, 0, 0, 0, - 61, 0, 0, 0, 0, 31, 55, 16, 34, 10, 0, 0, 0, - 0, 0, 0, 0, 11, 68, 75, 0, 8, 0, 100, 94, 0, - 103, 0, 83, 0, 71, 0, 0, 110, 27, 37, 69, 79, 0, - 35, 64, 0, 0, + 0, 0, 0, 0, 33, 0, 21, 0, 0, 0, 0, 0, 50, + 0, 43, 3, 47, 0, 0, 0, 0, 30, 0, 58, 0, 38, + 0, 0, 0, 1, 66, 0, 0, 67, 0, 41, 0, 0, 0, + 0, 0, 0, 49, 65, 0, 0, 0, 0, 31, 52, 16, 34, + 10, 0, 0, 0, 0, 0, 0, 0, 11, 72, 79, 0, 8, + 0, 104, 98, 0, 107, 0, 87, 0, 75, 51, 0, 27, 37, + 73, 83, 0, 35, 68, 0, 0, }; - static const unsigned char aLen[121] = { + static const unsigned char aLen[124] = { 7, 7, 5, 4, 6, 4, 5, 3, 6, 7, 3, 6, 6, 7, 7, 3, 8, 2, 6, 5, 4, 4, 3, 10, 4, 6, 11, 6, 2, 7, 5, 5, 9, 6, 9, 9, 7, 10, 10, - 4, 6, 2, 3, 9, 4, 2, 6, 5, 6, 6, 5, 6, - 5, 5, 7, 7, 7, 3, 2, 4, 4, 7, 3, 6, 4, - 7, 6, 12, 6, 9, 4, 6, 5, 4, 7, 6, 5, 6, - 7, 5, 4, 5, 6, 5, 7, 3, 7, 13, 2, 2, 4, - 6, 6, 8, 5, 17, 12, 7, 8, 8, 2, 4, 4, 4, - 4, 4, 2, 2, 6, 5, 8, 5, 5, 8, 3, 5, 5, - 6, 4, 9, 3, + 4, 6, 2, 3, 9, 4, 2, 6, 5, 7, 4, 5, 7, + 6, 6, 5, 6, 5, 5, 9, 7, 7, 3, 2, 4, 4, + 7, 3, 6, 4, 7, 6, 12, 6, 9, 4, 6, 5, 4, + 7, 6, 5, 6, 7, 5, 4, 5, 6, 5, 7, 3, 7, + 13, 2, 2, 4, 6, 6, 8, 5, 17, 12, 7, 8, 8, + 2, 4, 4, 4, 4, 4, 2, 2, 6, 5, 8, 5, 8, + 3, 5, 5, 6, 4, 9, 3, }; - static const unsigned short int aOffset[121] = { + static const unsigned short int aOffset[124] = { 0, 2, 2, 8, 9, 14, 16, 20, 23, 25, 25, 29, 33, 36, 41, 46, 48, 53, 54, 59, 62, 65, 67, 69, 78, 81, 86, 91, 95, 96, 101, 105, 109, 117, 122, 128, 136, 142, 152, - 159, 162, 162, 165, 167, 167, 171, 176, 179, 184, 189, 194, 197, - 203, 206, 210, 217, 223, 223, 223, 226, 229, 233, 234, 238, 244, - 248, 255, 261, 273, 279, 288, 290, 296, 301, 303, 310, 315, 320, - 326, 332, 337, 341, 344, 350, 354, 361, 363, 370, 372, 374, 383, - 387, 393, 399, 407, 412, 412, 428, 435, 442, 443, 450, 454, 458, - 462, 466, 469, 471, 473, 479, 483, 491, 495, 500, 508, 511, 516, - 521, 527, 531, 536, + 159, 162, 162, 165, 167, 167, 171, 176, 179, 184, 184, 188, 192, + 199, 204, 209, 212, 218, 221, 225, 234, 240, 240, 240, 243, 246, + 250, 251, 255, 261, 265, 272, 278, 290, 296, 305, 307, 313, 318, + 320, 327, 332, 337, 343, 349, 354, 358, 361, 367, 371, 378, 380, + 387, 389, 391, 400, 404, 410, 416, 424, 429, 429, 445, 452, 459, + 460, 467, 471, 475, 479, 483, 486, 488, 490, 496, 500, 508, 513, + 521, 524, 529, 534, 540, 544, 549, }; - static const unsigned char aCode[121] = { + static const unsigned char aCode[124] = { TK_REINDEX, TK_INDEXED, TK_INDEX, TK_DESC, TK_ESCAPE, TK_EACH, TK_CHECK, TK_KEY, TK_BEFORE, TK_FOREIGN, TK_FOR, TK_IGNORE, TK_LIKE_KW, TK_EXPLAIN, TK_INSTEAD, TK_ADD, TK_DATABASE, TK_AS, TK_SELECT, TK_TABLE, TK_JOIN_KW, TK_THEN, TK_END, TK_DEFERRABLE, TK_ELSE, TK_EXCEPT, TK_TRANSACTION,TK_ACTION, TK_ON, TK_JOIN_KW, TK_ALTER, TK_RAISE, TK_EXCLUSIVE, TK_EXISTS, TK_SAVEPOINT, TK_INTERSECT, TK_TRIGGER, TK_REFERENCES, TK_CONSTRAINT, TK_INTO, TK_OFFSET, TK_OF, TK_SET, TK_TEMP, TK_TEMP, - TK_OR, TK_UNIQUE, TK_QUERY, TK_ATTACH, TK_HAVING, - TK_GROUP, TK_UPDATE, TK_BEGIN, TK_JOIN_KW, TK_RELEASE, - TK_BETWEEN, TK_NOTNULL, TK_NOT, TK_NO, TK_NULL, - TK_LIKE_KW, TK_CASCADE, TK_ASC, TK_DELETE, TK_CASE, - TK_COLLATE, TK_CREATE, TK_CTIME_KW, TK_DETACH, TK_IMMEDIATE, - TK_JOIN, TK_INSERT, TK_MATCH, TK_PLAN, TK_ANALYZE, - TK_PRAGMA, 
TK_ABORT, TK_VALUES, TK_VIRTUAL, TK_LIMIT, - TK_WHEN, TK_WHERE, TK_RENAME, TK_AFTER, TK_REPLACE, - TK_AND, TK_DEFAULT, TK_AUTOINCR, TK_TO, TK_IN, - TK_CAST, TK_COLUMNKW, TK_COMMIT, TK_CONFLICT, TK_JOIN_KW, - TK_CTIME_KW, TK_CTIME_KW, TK_PRIMARY, TK_DEFERRED, TK_DISTINCT, - TK_IS, TK_DROP, TK_FAIL, TK_FROM, TK_JOIN_KW, - TK_LIKE_KW, TK_BY, TK_IF, TK_ISNULL, TK_ORDER, - TK_RESTRICT, TK_JOIN_KW, TK_JOIN_KW, TK_ROLLBACK, TK_ROW, - TK_UNION, TK_USING, TK_VACUUM, TK_VIEW, TK_INITIALLY, - TK_ALL, + TK_OR, TK_UNIQUE, TK_QUERY, TK_WITHOUT, TK_WITH, + TK_JOIN_KW, TK_RELEASE, TK_ATTACH, TK_HAVING, TK_GROUP, + TK_UPDATE, TK_BEGIN, TK_JOIN_KW, TK_RECURSIVE, TK_BETWEEN, + TK_NOTNULL, TK_NOT, TK_NO, TK_NULL, TK_LIKE_KW, + TK_CASCADE, TK_ASC, TK_DELETE, TK_CASE, TK_COLLATE, + TK_CREATE, TK_CTIME_KW, TK_DETACH, TK_IMMEDIATE, TK_JOIN, + TK_INSERT, TK_MATCH, TK_PLAN, TK_ANALYZE, TK_PRAGMA, + TK_ABORT, TK_VALUES, TK_VIRTUAL, TK_LIMIT, TK_WHEN, + TK_WHERE, TK_RENAME, TK_AFTER, TK_REPLACE, TK_AND, + TK_DEFAULT, TK_AUTOINCR, TK_TO, TK_IN, TK_CAST, + TK_COLUMNKW, TK_COMMIT, TK_CONFLICT, TK_JOIN_KW, TK_CTIME_KW, + TK_CTIME_KW, TK_PRIMARY, TK_DEFERRED, TK_DISTINCT, TK_IS, + TK_DROP, TK_FAIL, TK_FROM, TK_JOIN_KW, TK_LIKE_KW, + TK_BY, TK_IF, TK_ISNULL, TK_ORDER, TK_RESTRICT, + TK_JOIN_KW, TK_ROLLBACK, TK_ROW, TK_UNION, TK_USING, + TK_VACUUM, TK_VIEW, TK_INITIALLY, TK_ALL, }; - int h, i; - if( n<2 ) return TK_ID; - h = ((charMap(z[0])*4) ^ - (charMap(z[n-1])*3) ^ - n) % 127; - for(i=((int)aHash[h])-1; i>=0; i=((int)aNext[i])-1){ - if( aLen[i]==n && sqlite3StrNICmp(&zText[aOffset[i]],z,n)==0 ){ + int i, j; + const char *zKW; + if( n>=2 ){ + i = ((charMap(z[0])*4) ^ (charMap(z[n-1])*3) ^ n) % 127; + for(i=((int)aHash[i])-1; i>=0; i=((int)aNext[i])-1){ + if( aLen[i]!=n ) continue; + j = 0; + zKW = &zText[aOffset[i]]; +#ifdef SQLITE_ASCII + while( j<n && (z[j]&~0x20)==zKW[j] ){ j++; } +#endif +#ifdef SQLITE_EBCDIC + while( j<n && toupper(z[j])==zKW[j] ){ j++; } +#endif + if( j<n ) continue; testcase( i==0 ); /* REINDEX */ testcase( i==1 ); /* INDEXED */ testcase( i==2 ); /* INDEX */ testcase( i==3 ); /* DESC */ testcase( i==4 ); /* ESCAPE */ @@ -96236,92 +131852,98 @@ testcase( i==43 ); /* TEMPORARY */ testcase( i==44 ); /* TEMP */ testcase( i==45 ); /* OR */ testcase( i==46 ); /* UNIQUE */ testcase( i==47 ); /* QUERY */ - testcase( i==48 ); /* ATTACH */ - testcase( i==49 ); /* HAVING */ - testcase( i==50 ); /* GROUP */ - testcase( i==51 ); /* UPDATE */ - testcase( i==52 ); /* BEGIN */ - testcase( i==53 ); /* INNER */ - testcase( i==54 ); /* RELEASE */ - testcase( i==55 ); /* BETWEEN */ - testcase( i==56 ); /* NOTNULL */ - testcase( i==57 ); /* NOT */ - testcase( i==58 ); /* NO */ - testcase( i==59 ); /* NULL */ - testcase( i==60 ); /* LIKE */ - testcase( i==61 ); /* CASCADE */ - testcase( i==62 ); /* ASC */ - testcase( i==63 ); /* DELETE */ - testcase( i==64 ); /* CASE */ - testcase( i==65 ); /* COLLATE */ - testcase( i==66 ); /* CREATE */ - testcase( i==67 ); /* CURRENT_DATE */ - testcase( i==68 ); /* DETACH */ - testcase( i==69 ); /* IMMEDIATE */ - testcase( i==70 ); /* JOIN */ - testcase( i==71 ); /* INSERT */ - testcase( i==72 ); /* MATCH */ - testcase( i==73 ); /* PLAN */ - testcase( i==74 ); /* ANALYZE */ - testcase( i==75 ); /* PRAGMA */ - testcase( i==76 ); /* ABORT */ - testcase( i==77 ); /* VALUES */ - testcase( i==78 ); /* VIRTUAL */ - testcase( i==79 ); /* LIMIT */ - testcase( i==80 ); /* WHEN */ - testcase( i==81 ); /* WHERE */ - testcase( i==82 ); /* RENAME */ - testcase( i==83 ); /* AFTER */ - testcase( 
i==84 ); /* REPLACE */ - testcase( i==85 ); /* AND */ - testcase( i==86 ); /* DEFAULT */ - testcase( i==87 ); /* AUTOINCREMENT */ - testcase( i==88 ); /* TO */ - testcase( i==89 ); /* IN */ - testcase( i==90 ); /* CAST */ - testcase( i==91 ); /* COLUMN */ - testcase( i==92 ); /* COMMIT */ - testcase( i==93 ); /* CONFLICT */ - testcase( i==94 ); /* CROSS */ - testcase( i==95 ); /* CURRENT_TIMESTAMP */ - testcase( i==96 ); /* CURRENT_TIME */ - testcase( i==97 ); /* PRIMARY */ - testcase( i==98 ); /* DEFERRED */ - testcase( i==99 ); /* DISTINCT */ - testcase( i==100 ); /* IS */ - testcase( i==101 ); /* DROP */ - testcase( i==102 ); /* FAIL */ - testcase( i==103 ); /* FROM */ - testcase( i==104 ); /* FULL */ - testcase( i==105 ); /* GLOB */ - testcase( i==106 ); /* BY */ - testcase( i==107 ); /* IF */ - testcase( i==108 ); /* ISNULL */ - testcase( i==109 ); /* ORDER */ - testcase( i==110 ); /* RESTRICT */ - testcase( i==111 ); /* OUTER */ - testcase( i==112 ); /* RIGHT */ - testcase( i==113 ); /* ROLLBACK */ - testcase( i==114 ); /* ROW */ - testcase( i==115 ); /* UNION */ - testcase( i==116 ); /* USING */ - testcase( i==117 ); /* VACUUM */ - testcase( i==118 ); /* VIEW */ - testcase( i==119 ); /* INITIALLY */ - testcase( i==120 ); /* ALL */ - return aCode[i]; - } - } - return TK_ID; + testcase( i==48 ); /* WITHOUT */ + testcase( i==49 ); /* WITH */ + testcase( i==50 ); /* OUTER */ + testcase( i==51 ); /* RELEASE */ + testcase( i==52 ); /* ATTACH */ + testcase( i==53 ); /* HAVING */ + testcase( i==54 ); /* GROUP */ + testcase( i==55 ); /* UPDATE */ + testcase( i==56 ); /* BEGIN */ + testcase( i==57 ); /* INNER */ + testcase( i==58 ); /* RECURSIVE */ + testcase( i==59 ); /* BETWEEN */ + testcase( i==60 ); /* NOTNULL */ + testcase( i==61 ); /* NOT */ + testcase( i==62 ); /* NO */ + testcase( i==63 ); /* NULL */ + testcase( i==64 ); /* LIKE */ + testcase( i==65 ); /* CASCADE */ + testcase( i==66 ); /* ASC */ + testcase( i==67 ); /* DELETE */ + testcase( i==68 ); /* CASE */ + testcase( i==69 ); /* COLLATE */ + testcase( i==70 ); /* CREATE */ + testcase( i==71 ); /* CURRENT_DATE */ + testcase( i==72 ); /* DETACH */ + testcase( i==73 ); /* IMMEDIATE */ + testcase( i==74 ); /* JOIN */ + testcase( i==75 ); /* INSERT */ + testcase( i==76 ); /* MATCH */ + testcase( i==77 ); /* PLAN */ + testcase( i==78 ); /* ANALYZE */ + testcase( i==79 ); /* PRAGMA */ + testcase( i==80 ); /* ABORT */ + testcase( i==81 ); /* VALUES */ + testcase( i==82 ); /* VIRTUAL */ + testcase( i==83 ); /* LIMIT */ + testcase( i==84 ); /* WHEN */ + testcase( i==85 ); /* WHERE */ + testcase( i==86 ); /* RENAME */ + testcase( i==87 ); /* AFTER */ + testcase( i==88 ); /* REPLACE */ + testcase( i==89 ); /* AND */ + testcase( i==90 ); /* DEFAULT */ + testcase( i==91 ); /* AUTOINCREMENT */ + testcase( i==92 ); /* TO */ + testcase( i==93 ); /* IN */ + testcase( i==94 ); /* CAST */ + testcase( i==95 ); /* COLUMN */ + testcase( i==96 ); /* COMMIT */ + testcase( i==97 ); /* CONFLICT */ + testcase( i==98 ); /* CROSS */ + testcase( i==99 ); /* CURRENT_TIMESTAMP */ + testcase( i==100 ); /* CURRENT_TIME */ + testcase( i==101 ); /* PRIMARY */ + testcase( i==102 ); /* DEFERRED */ + testcase( i==103 ); /* DISTINCT */ + testcase( i==104 ); /* IS */ + testcase( i==105 ); /* DROP */ + testcase( i==106 ); /* FAIL */ + testcase( i==107 ); /* FROM */ + testcase( i==108 ); /* FULL */ + testcase( i==109 ); /* GLOB */ + testcase( i==110 ); /* BY */ + testcase( i==111 ); /* IF */ + testcase( i==112 ); /* ISNULL */ + testcase( i==113 ); /* ORDER */ + testcase( 
i==114 ); /* RESTRICT */ + testcase( i==115 ); /* RIGHT */ + testcase( i==116 ); /* ROLLBACK */ + testcase( i==117 ); /* ROW */ + testcase( i==118 ); /* UNION */ + testcase( i==119 ); /* USING */ + testcase( i==120 ); /* VACUUM */ + testcase( i==121 ); /* VIEW */ + testcase( i==122 ); /* INITIALLY */ + testcase( i==123 ); /* ALL */ + *pType = aCode[i]; + break; + } + } + return n; } SQLITE_PRIVATE int sqlite3KeywordCode(const unsigned char *z, int n){ - return keywordCode((char*)z, n); + int id = TK_ID; + keywordCode((char*)z, n, &id); + return id; } -#define SQLITE_N_KEYWORD 121 +#define SQLITE_N_KEYWORD 124 /************** End of keywordhash.h *****************************************/ /************** Continuing where we left off in tokenize.c *******************/ @@ -96335,11 +131957,11 @@ ** ** For EBCDIC, the rules are more complex but have the same ** end result. ** ** Ticket #1066. the SQL standard does not allow '$' in the -** middle of identfiers. But many SQL implementations do. +** middle of identifiers. But many SQL implementations do. ** SQLite will allow '$' in identifiers for compatibility. ** But the feature is undocumented. */ #ifdef SQLITE_ASCII #define IdChar(C) ((sqlite3CtypeMap[(unsigned char)C]&0x46)!=0) @@ -96361,78 +131983,83 @@ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, /* Fx */ }; #define IdChar(C) (((c=C)>=0x42 && sqlite3IsEbcdicIdChar[c-0x40])) #endif +/* Make the IdChar function accessible from ctime.c */ +#ifndef SQLITE_OMIT_COMPILEOPTION_DIAGS +SQLITE_PRIVATE int sqlite3IsIdChar(u8 c){ return IdChar(c); } +#endif + /* -** Return the length of the token that begins at z[0]. +** Return the length (in bytes) of the token that begins at z[0]. ** Store the token type in *tokenType before returning. */ SQLITE_PRIVATE int sqlite3GetToken(const unsigned char *z, int *tokenType){ int i, c; - switch( *z ){ - case ' ': case '\t': case '\n': case '\f': case '\r': { + switch( aiClass[*z] ){ /* Switch on the character-class of the first byte + ** of the token. See the comment on the CC_ defines + ** above. 
*/ + case CC_SPACE: { testcase( z[0]==' ' ); testcase( z[0]=='\t' ); testcase( z[0]=='\n' ); testcase( z[0]=='\f' ); testcase( z[0]=='\r' ); for(i=1; sqlite3Isspace(z[i]); i++){} *tokenType = TK_SPACE; return i; } - case '-': { + case CC_MINUS: { if( z[1]=='-' ){ - /* IMP: R-15891-05542 -- syntax diagram for comments */ for(i=2; (c=z[i])!=0 && c!='\n'; i++){} *tokenType = TK_SPACE; /* IMP: R-22934-25134 */ return i; } *tokenType = TK_MINUS; return 1; } - case '(': { + case CC_LP: { *tokenType = TK_LP; return 1; } - case ')': { + case CC_RP: { *tokenType = TK_RP; return 1; } - case ';': { + case CC_SEMI: { *tokenType = TK_SEMI; return 1; } - case '+': { + case CC_PLUS: { *tokenType = TK_PLUS; return 1; } - case '*': { + case CC_STAR: { *tokenType = TK_STAR; return 1; } - case '/': { + case CC_SLASH: { if( z[1]!='*' || z[2]==0 ){ *tokenType = TK_SLASH; return 1; } - /* IMP: R-15891-05542 -- syntax diagram for comments */ for(i=3, c=z[2]; (c!='*' || z[i]!='/') && (c=z[i])!=0; i++){} if( c ) i++; *tokenType = TK_SPACE; /* IMP: R-22934-25134 */ return i; } - case '%': { + case CC_PERCENT: { *tokenType = TK_REM; return 1; } - case '=': { + case CC_EQ: { *tokenType = TK_EQ; return 1 + (z[1]=='='); } - case '<': { + case CC_LT: { if( (c=z[1])=='=' ){ *tokenType = TK_LE; return 2; }else if( c=='>' ){ *tokenType = TK_NE; @@ -96443,11 +132070,11 @@ }else{ *tokenType = TK_LT; return 1; } } - case '>': { + case CC_GT: { if( (c=z[1])=='=' ){ *tokenType = TK_GE; return 2; }else if( c=='>' ){ *tokenType = TK_RSHIFT; @@ -96455,43 +132082,41 @@ }else{ *tokenType = TK_GT; return 1; } } - case '!': { + case CC_BANG: { if( z[1]!='=' ){ *tokenType = TK_ILLEGAL; return 2; }else{ *tokenType = TK_NE; return 2; } } - case '|': { + case CC_PIPE: { if( z[1]!='|' ){ *tokenType = TK_BITOR; return 1; }else{ *tokenType = TK_CONCAT; return 2; } } - case ',': { + case CC_COMMA: { *tokenType = TK_COMMA; return 1; } - case '&': { + case CC_AND: { *tokenType = TK_BITAND; return 1; } - case '~': { + case CC_TILDA: { *tokenType = TK_BITNOT; return 1; } - case '`': - case '\'': - case '"': { + case CC_QUOTE: { int delim = z[0]; testcase( delim=='`' ); testcase( delim=='\'' ); testcase( delim=='"' ); for(i=1; (c=z[i])!=0; i++){ @@ -96512,11 +132137,11 @@ }else{ *tokenType = TK_ILLEGAL; return i; } } - case '.': { + case CC_DOT: { #ifndef SQLITE_OMIT_FLOATING_POINT if( !sqlite3Isdigit(z[1]) ) #endif { *tokenType = TK_DOT; @@ -96523,17 +132148,22 @@ return 1; } /* If the next character is a digit, this is a floating point ** number that begins with ".". Fall thru into the next case */ } - case '0': case '1': case '2': case '3': case '4': - case '5': case '6': case '7': case '8': case '9': { + case CC_DIGIT: { testcase( z[0]=='0' ); testcase( z[0]=='1' ); testcase( z[0]=='2' ); testcase( z[0]=='3' ); testcase( z[0]=='4' ); testcase( z[0]=='5' ); testcase( z[0]=='6' ); testcase( z[0]=='7' ); testcase( z[0]=='8' ); testcase( z[0]=='9' ); *tokenType = TK_INTEGER; +#ifndef SQLITE_OMIT_HEX_INTEGER + if( z[0]=='0' && (z[1]=='x' || z[1]=='X') && sqlite3Isxdigit(z[2]) ){ + for(i=3; sqlite3Isxdigit(z[i]); i++){} + return i; + } +#endif for(i=0; sqlite3Isdigit(z[i]); i++){} #ifndef SQLITE_OMIT_FLOATING_POINT if( z[i]=='.' ){ i++; while( sqlite3Isdigit(z[i]) ){ i++; } @@ -96553,38 +132183,25 @@ *tokenType = TK_ILLEGAL; i++; } return i; } - case '[': { + case CC_QUOTE2: { for(i=1, c=z[0]; c!=']' && (c=z[i])!=0; i++){} *tokenType = c==']' ? 
TK_ID : TK_ILLEGAL; return i; } - case '?': { + case CC_VARNUM: { *tokenType = TK_VARIABLE; for(i=1; sqlite3Isdigit(z[i]); i++){} return i; } - case '#': { - for(i=1; sqlite3Isdigit(z[i]); i++){} - if( i>1 ){ - /* Parameters of the form #NNN (where NNN is a number) are used - ** internally by sqlite3NestedParse. */ - *tokenType = TK_REGISTER; - return i; - } - /* Fall through into the next case if the '#' is not followed by - ** a digit. Try to match #AAAA where AAAA is a parameter name. */ - } -#ifndef SQLITE_OMIT_TCL_VARIABLE - case '$': -#endif - case '@': /* For compatibility with MS SQL Server */ - case ':': { + case CC_DOLLAR: + case CC_VARALPHA: { int n = 0; - testcase( z[0]=='$' ); testcase( z[0]=='@' ); testcase( z[0]==':' ); + testcase( z[0]=='$' ); testcase( z[0]=='@' ); + testcase( z[0]==':' ); testcase( z[0]=='#' ); *tokenType = TK_VARIABLE; for(i=1; (c=z[i])!=0; i++){ if( IdChar(c) ){ n++; #ifndef SQLITE_OMIT_TCL_VARIABLE @@ -96606,38 +132223,51 @@ } } if( n==0 ) *tokenType = TK_ILLEGAL; return i; } + case CC_KYWD: { + for(i=1; aiClass[z[i]]<=CC_KYWD; i++){} + if( IdChar(z[i]) ){ + /* This token started out using characters that can appear in keywords, + ** but z[i] is a character not allowed within keywords, so this must + ** be an identifier instead */ + i++; + break; + } + *tokenType = TK_ID; + return keywordCode((char*)z, i, tokenType); + } #ifndef SQLITE_OMIT_BLOB_LITERAL - case 'x': case 'X': { + case CC_X: { testcase( z[0]=='x' ); testcase( z[0]=='X' ); if( z[1]=='\'' ){ *tokenType = TK_BLOB; - for(i=2; (c=z[i])!=0 && c!='\''; i++){ - if( !sqlite3Isxdigit(c) ){ - *tokenType = TK_ILLEGAL; - } - } - if( i%2 || !c ) *tokenType = TK_ILLEGAL; - if( c ) i++; - return i; - } - /* Otherwise fall through to the next case */ - } -#endif - default: { - if( !IdChar(*z) ){ - break; - } - for(i=1; IdChar(z[i]); i++){} - *tokenType = keywordCode((char*)z, i); - return i; - } - } - *tokenType = TK_ILLEGAL; - return 1; + for(i=2; sqlite3Isxdigit(z[i]); i++){} + if( z[i]!='\'' || i%2 ){ + *tokenType = TK_ILLEGAL; + while( z[i] && z[i]!='\'' ){ i++; } + } + if( z[i] ) i++; + return i; + } + /* If it is not a BLOB literal, then it must be an ID, since no + ** SQL keywords start with the letter 'x'. Fall through */ + } +#endif + case CC_ID: { + i = 1; + break; + } + default: { + *tokenType = TK_ILLEGAL; + return 1; + } + } + while( IdChar(z[i]) ){ i++; } + *tokenType = TK_ID; + return i; } /* ** Run the parser on the given SQL string. The parser structure is ** passed in. An SQLITE_ status code is returned. 
If an error occurs @@ -96649,95 +132279,84 @@ int nErr = 0; /* Number of errors encountered */ int i; /* Loop counter */ void *pEngine; /* The LEMON-generated LALR(1) parser */ int tokenType; /* type of the next token */ int lastTokenParsed = -1; /* type of the previous token */ - u8 enableLookaside; /* Saved value of db->lookaside.bEnabled */ sqlite3 *db = pParse->db; /* The database connection */ int mxSqlLen; /* Max length of an SQL string */ - + assert( zSql!=0 ); mxSqlLen = db->aLimit[SQLITE_LIMIT_SQL_LENGTH]; - if( db->activeVdbeCnt==0 ){ + if( db->nVdbeActive==0 ){ db->u1.isInterrupted = 0; } pParse->rc = SQLITE_OK; pParse->zTail = zSql; i = 0; assert( pzErrMsg!=0 ); - pEngine = sqlite3ParserAlloc((void*(*)(size_t))sqlite3Malloc); + /* sqlite3ParserTrace(stdout, "parser: "); */ + pEngine = sqlite3ParserAlloc(sqlite3Malloc); if( pEngine==0 ){ - db->mallocFailed = 1; + sqlite3OomFault(db); return SQLITE_NOMEM; } assert( pParse->pNewTable==0 ); assert( pParse->pNewTrigger==0 ); assert( pParse->nVar==0 ); - assert( pParse->nVarExpr==0 ); - assert( pParse->nVarExprAlloc==0 ); - assert( pParse->apVarExpr==0 ); - enableLookaside = db->lookaside.bEnabled; - if( db->lookaside.pStart ) db->lookaside.bEnabled = 1; - while( !db->mallocFailed && zSql[i]!=0 ){ + assert( pParse->nzVar==0 ); + assert( pParse->azVar==0 ); + while( zSql[i]!=0 ){ assert( i>=0 ); pParse->sLastToken.z = &zSql[i]; pParse->sLastToken.n = sqlite3GetToken((unsigned char*)&zSql[i],&tokenType); i += pParse->sLastToken.n; if( i>mxSqlLen ){ pParse->rc = SQLITE_TOOBIG; break; } - switch( tokenType ){ - case TK_SPACE: { - if( db->u1.isInterrupted ){ - sqlite3ErrorMsg(pParse, "interrupt"); - pParse->rc = SQLITE_INTERRUPT; - goto abort_parse; - } + if( tokenType>=TK_SPACE ){ + assert( tokenType==TK_SPACE || tokenType==TK_ILLEGAL ); + if( db->u1.isInterrupted ){ + pParse->rc = SQLITE_INTERRUPT; break; } - case TK_ILLEGAL: { - sqlite3DbFree(db, *pzErrMsg); - *pzErrMsg = sqlite3MPrintf(db, "unrecognized token: \"%T\"", + if( tokenType==TK_ILLEGAL ){ + sqlite3ErrorMsg(pParse, "unrecognized token: \"%T\"", &pParse->sLastToken); - nErr++; - goto abort_parse; - } - case TK_SEMI: { - pParse->zTail = &zSql[i]; - /* Fall thru into the default case */ - } - default: { - sqlite3Parser(pEngine, tokenType, pParse->sLastToken, pParse); - lastTokenParsed = tokenType; - if( pParse->rc!=SQLITE_OK ){ - goto abort_parse; - } break; } + }else{ + if( tokenType==TK_SEMI ) pParse->zTail = &zSql[i]; + sqlite3Parser(pEngine, tokenType, pParse->sLastToken, pParse); + lastTokenParsed = tokenType; + if( pParse->rc!=SQLITE_OK || db->mallocFailed ) break; } } -abort_parse: - if( zSql[i]==0 && nErr==0 && pParse->rc==SQLITE_OK ){ + assert( nErr==0 ); + if( pParse->rc==SQLITE_OK && db->mallocFailed==0 ){ + assert( zSql[i]==0 ); if( lastTokenParsed!=TK_SEMI ){ sqlite3Parser(pEngine, TK_SEMI, pParse->sLastToken, pParse); pParse->zTail = &zSql[i]; } - sqlite3Parser(pEngine, 0, pParse->sLastToken, pParse); + if( pParse->rc==SQLITE_OK && db->mallocFailed==0 ){ + sqlite3Parser(pEngine, 0, pParse->sLastToken, pParse); + } } #ifdef YYTRACKMAXSTACKDEPTH - sqlite3StatusSet(SQLITE_STATUS_PARSER_STACK, + sqlite3_mutex_enter(sqlite3MallocMutex()); + sqlite3StatusHighwater(SQLITE_STATUS_PARSER_STACK, sqlite3ParserStackPeak(pEngine) ); + sqlite3_mutex_leave(sqlite3MallocMutex()); #endif /* YYDEBUG */ sqlite3ParserFree(pEngine, sqlite3_free); - db->lookaside.bEnabled = enableLookaside; if( db->mallocFailed ){ pParse->rc = SQLITE_NOMEM; } if( pParse->rc!=SQLITE_OK && 
pParse->rc!=SQLITE_DONE && pParse->zErrMsg==0 ){ - sqlite3SetString(&pParse->zErrMsg, db, "%s", sqlite3ErrStr(pParse->rc)); + pParse->zErrMsg = sqlite3MPrintf(db, "%s", sqlite3ErrStr(pParse->rc)); } assert( pzErrMsg!=0 ); if( pParse->zErrMsg ){ *pzErrMsg = pParse->zErrMsg; sqlite3_log(pParse->rc, "%s", *pzErrMsg); @@ -96754,37 +132373,36 @@ pParse->aTableLock = 0; pParse->nTableLock = 0; } #endif #ifndef SQLITE_OMIT_VIRTUALTABLE - sqlite3DbFree(db, pParse->apVtabLock); + sqlite3_free(pParse->apVtabLock); #endif if( !IN_DECLARE_VTAB ){ /* If the pParse->declareVtab flag is set, do not delete any table ** structure built up in pParse->pNewTable. The calling code (see vtab.c) ** will take responsibility for freeing the Table structure. */ - sqlite3DeleteTable(pParse->pNewTable); + sqlite3DeleteTable(db, pParse->pNewTable); } + sqlite3WithDelete(db, pParse->pWithToFree); sqlite3DeleteTrigger(db, pParse->pNewTrigger); - sqlite3DbFree(db, pParse->apVarExpr); - sqlite3DbFree(db, pParse->aAlias); + for(i=pParse->nzVar-1; i>=0; i--) sqlite3DbFree(db, pParse->azVar[i]); + sqlite3DbFree(db, pParse->azVar); while( pParse->pAinc ){ AutoincInfo *p = pParse->pAinc; pParse->pAinc = p->pNext; sqlite3DbFree(db, p); } while( pParse->pZombieTab ){ Table *p = pParse->pZombieTab; pParse->pZombieTab = p->pNextZombie; - sqlite3DeleteTable(p); + sqlite3DeleteTable(db, p); } - if( nErr>0 && pParse->rc==SQLITE_OK ){ - pParse->rc = SQLITE_ERROR; - } + assert( nErr==0 || pParse->rc!=SQLITE_OK ); return nErr; } /************** End of tokenize.c ********************************************/ /************** Begin file complete.c ****************************************/ @@ -96804,10 +132422,11 @@ ** This file contains C code that implements the sqlite3_complete() API. ** This code used to be part of the tokenizer.c source file. But by ** separating it out, the code will be automatically omitted from ** static links that do not use it. */ +/* #include "sqliteInt.h" */ #ifndef SQLITE_OMIT_COMPLETE /* ** This is defined in tokenize.c. We just have to import the definition. */ @@ -96857,21 +132476,21 @@ ** ** (3) EXPLAIN The keyword EXPLAIN has been seen at the beginning of ** a statement. ** ** (4) CREATE The keyword CREATE has been seen at the beginning of a -** statement, possibly preceeded by EXPLAIN and/or followed by +** statement, possibly preceded by EXPLAIN and/or followed by ** TEMP or TEMPORARY ** ** (5) TRIGGER We are in the middle of a trigger definition that must be ** ended by a semicolon, the keyword END, and another semicolon. ** ** (6) SEMI We've seen the first semicolon in the ";END;" that occurs at ** the end of a trigger definition. ** ** (7) END We've seen the ";END" of the ";END;" that occurs at the end -** of a trigger difinition. +** of a trigger definition. ** ** Transitions between states above are determined by tokens extracted ** from the input. The following tokens are significant: ** ** (0) tkSEMI A semicolon. @@ -96888,11 +132507,11 @@ ** ** If we compile with SQLITE_OMIT_TRIGGER, all of the computation needed ** to recognize the end of a trigger can be omitted. All we have to do ** is look for a semicolon that is not part of an string or comment. 
*/ -SQLITE_API int sqlite3_complete(const char *zSql){ +SQLITE_API int SQLITE_STDCALL sqlite3_complete(const char *zSql){ u8 state = 0; /* Current state, using numbers defined in header comment */ u8 token; /* Value of the next token */ #ifndef SQLITE_OMIT_TRIGGER /* A complex statement machine used to detect the end of a CREATE TRIGGER @@ -96910,20 +132529,27 @@ /* 6 SEMI: */ { 6, 6, 5, 5, 5, 5, 5, 7, }, /* 7 END: */ { 1, 7, 5, 5, 5, 5, 5, 5, }, }; #else /* If triggers are not supported by this compile then the statement machine - ** used to detect the end of a statement is much simplier + ** used to detect the end of a statement is much simpler */ static const u8 trans[3][3] = { /* Token: */ /* State: ** SEMI WS OTHER */ /* 0 INVALID: */ { 1, 0, 2, }, /* 1 START: */ { 1, 1, 2, }, /* 2 NORMAL: */ { 1, 2, 2, }, }; #endif /* SQLITE_OMIT_TRIGGER */ + +#ifdef SQLITE_ENABLE_API_ARMOR + if( zSql==0 ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif while( *zSql ){ switch( *zSql ){ case ';': { /* A semicolon */ token = tkSEMI; @@ -97046,14 +132672,14 @@ /* ** This routine is the same as the sqlite3_complete() routine described ** above, except that the parameter is required to be UTF-16 encoded, not ** UTF-8. */ -SQLITE_API int sqlite3_complete16(const void *zSql){ +SQLITE_API int SQLITE_STDCALL sqlite3_complete16(const void *zSql){ sqlite3_value *pVal; char const *zSql8; - int rc = SQLITE_NOMEM; + int rc; #ifndef SQLITE_OMIT_AUTOINIT rc = sqlite3_initialize(); if( rc ) return rc; #endif @@ -97064,11 +132690,11 @@ rc = sqlite3_complete(zSql8); }else{ rc = SQLITE_NOMEM; } sqlite3ValueFree(pVal); - return sqlite3ApiExit(0, rc); + return rc & 0xff; } #endif /* SQLITE_OMIT_UTF16 */ #endif /* SQLITE_OMIT_COMPLETE */ /************** End of complete.c ********************************************/ @@ -97087,10 +132713,11 @@ ** Main file for the SQLite library. The routines in this file ** implement the programmer interface to the library. Routines in ** other files are for internal use by SQLite and should not be ** accessed by users of the library. */ +/* #include "sqliteInt.h" */ #ifdef SQLITE_ENABLE_FTS3 /************** Include fts3.h in the middle of main.c ***********************/ /************** Begin file fts3.h ********************************************/ /* @@ -97106,10 +132733,11 @@ ****************************************************************************** ** ** This header file is used by programs that want to link against the ** FTS3 library. All it does is declare the sqlite3Fts3Init() interface. */ +/* #include "sqlite3.h" */ #if 0 extern "C" { #endif /* __cplusplus */ @@ -97138,10 +132766,11 @@ ****************************************************************************** ** ** This header file is used by programs that want to link against the ** RTREE library. All it does is declare the sqlite3RtreeInit() interface. */ +/* #include "sqlite3.h" */ #if 0 extern "C" { #endif /* __cplusplus */ @@ -97170,10 +132799,11 @@ ****************************************************************************** ** ** This header file is used by programs that want to link against the ** ICU extension. All it does is declare the sqlite3IcuInit() interface. 
*/ +/* #include "sqlite3.h" */ #if 0 extern "C" { #endif /* __cplusplus */ @@ -97185,30 +132815,66 @@ /************** End of sqliteicu.h *******************************************/ /************** Continuing where we left off in main.c ***********************/ #endif +#ifdef SQLITE_ENABLE_JSON1 +SQLITE_PRIVATE int sqlite3Json1Init(sqlite3*); +#endif +#ifdef SQLITE_ENABLE_FTS5 +SQLITE_PRIVATE int sqlite3Fts5Init(sqlite3*); +#endif -/* -** The version of the library -*/ #ifndef SQLITE_AMALGAMATION +/* IMPLEMENTATION-OF: R-46656-45156 The sqlite3_version[] string constant +** contains the text of SQLITE_VERSION macro. +*/ SQLITE_API const char sqlite3_version[] = SQLITE_VERSION; #endif -SQLITE_API const char *sqlite3_libversion(void){ return sqlite3_version; } -SQLITE_API const char *sqlite3_sourceid(void){ return SQLITE_SOURCE_ID; } -SQLITE_API int sqlite3_libversion_number(void){ return SQLITE_VERSION_NUMBER; } -SQLITE_API int sqlite3_threadsafe(void){ return SQLITE_THREADSAFE; } + +/* IMPLEMENTATION-OF: R-53536-42575 The sqlite3_libversion() function returns +** a pointer to the to the sqlite3_version[] string constant. +*/ +SQLITE_API const char *SQLITE_STDCALL sqlite3_libversion(void){ return sqlite3_version; } + +/* IMPLEMENTATION-OF: R-63124-39300 The sqlite3_sourceid() function returns a +** pointer to a string constant whose value is the same as the +** SQLITE_SOURCE_ID C preprocessor macro. +*/ +SQLITE_API const char *SQLITE_STDCALL sqlite3_sourceid(void){ return SQLITE_SOURCE_ID; } + +/* IMPLEMENTATION-OF: R-35210-63508 The sqlite3_libversion_number() function +** returns an integer equal to SQLITE_VERSION_NUMBER. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_libversion_number(void){ return SQLITE_VERSION_NUMBER; } + +/* IMPLEMENTATION-OF: R-20790-14025 The sqlite3_threadsafe() function returns +** zero if and only if SQLite was compiled with mutexing code omitted due to +** the SQLITE_THREADSAFE compile-time option being set to 0. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_threadsafe(void){ return SQLITE_THREADSAFE; } + +/* +** When compiling the test fixture or with debugging enabled (on Win32), +** this variable being set to non-zero will cause OSTRACE macros to emit +** extra diagnostic information. +*/ +#ifdef SQLITE_HAVE_OS_TRACE +# ifndef SQLITE_DEBUG_OS_TRACE +# define SQLITE_DEBUG_OS_TRACE 0 +# endif + int sqlite3OSTrace = SQLITE_DEBUG_OS_TRACE; +#endif #if !defined(SQLITE_OMIT_TRACE) && defined(SQLITE_ENABLE_IOTRACE) /* ** If the following function pointer is not NULL and if ** SQLITE_ENABLE_IOTRACE is enabled, then messages describing ** I/O active are written using this function. These messages ** are intended for debugging activity only. */ -SQLITE_PRIVATE void (*sqlite3IoTrace)(const char*, ...) = 0; +SQLITE_API void (SQLITE_CDECL *sqlite3IoTrace)(const char*, ...) = 0; #endif /* ** If the following global variable points to a string which is the ** name of a directory, then that directory will be used to store @@ -97216,10 +132882,19 @@ ** ** See also the "PRAGMA temp_store_directory" SQL command. */ SQLITE_API char *sqlite3_temp_directory = 0; +/* +** If the following global variable points to a string which is the +** name of a directory, then that directory will be used to store +** all database files specified with a relative pathname. +** +** See also the "PRAGMA data_store_directory" SQL command. +*/ +SQLITE_API char *sqlite3_data_directory = 0; + /* ** Initialize SQLite. 
** ** This routine must be called to initialize the memory allocation, ** VFS, and mutex subsystems prior to doing any serious work with @@ -97247,20 +132922,28 @@ ** call by X completes. ** ** * Recursive calls to this routine from thread X return immediately ** without blocking. */ -SQLITE_API int sqlite3_initialize(void){ - sqlite3_mutex *pMaster; /* The main static mutex */ +SQLITE_API int SQLITE_STDCALL sqlite3_initialize(void){ + MUTEX_LOGIC( sqlite3_mutex *pMaster; ) /* The main static mutex */ int rc; /* Result code */ +#ifdef SQLITE_EXTRA_INIT + int bRunExtraInit = 0; /* Extra initialization needed */ +#endif #ifdef SQLITE_OMIT_WSD rc = sqlite3_wsd_init(4096, 24); if( rc!=SQLITE_OK ){ return rc; } #endif + + /* If the following assert() fails on some obscure processor/compiler + ** combination, the work-around is to set the correct pointer + ** size at compile-time using -DSQLITE_PTRSIZE=n compile-time option */ + assert( SQLITE_PTRSIZE==sizeof(char*) ); /* If SQLite is already completely initialized, then this call ** to sqlite3_initialize() should be a no-op. But the initialization ** must be complete. So isInit must not be set until the very end ** of this routine. @@ -97282,11 +132965,11 @@ ** This operation is protected by the STATIC_MASTER mutex. Note that ** MutexAlloc() is called for a static mutex prior to initializing the ** malloc subsystem - this implies that the allocation of a static ** mutex must not require support from the malloc subsystem. */ - pMaster = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER); + MUTEX_LOGIC( pMaster = sqlite3MutexAlloc(SQLITE_MUTEX_STATIC_MASTER); ) sqlite3_mutex_enter(pMaster); sqlite3GlobalConfig.isMutexInit = 1; if( !sqlite3GlobalConfig.isMallocInit ){ rc = sqlite3MallocInit(); } @@ -97315,15 +132998,28 @@ /* Do the rest of the initialization under the recursive mutex so ** that we will be able to handle recursive calls into ** sqlite3_initialize(). The recursive calls normally come through ** sqlite3_os_init() when it invokes sqlite3_vfs_register(), but other ** recursive calls might also be possible. + ** + ** IMPLEMENTATION-OF: R-00140-37445 SQLite automatically serializes calls + ** to the xInit method, so the xInit method need not be threadsafe. + ** + ** The following mutex is what serializes access to the appdef pcache xInit + ** methods. The sqlite3_pcache_methods.xInit() all is embedded in the + ** call to sqlite3PcacheInitialize(). */ sqlite3_mutex_enter(sqlite3GlobalConfig.pInitMutex); if( sqlite3GlobalConfig.isInit==0 && sqlite3GlobalConfig.inProgress==0 ){ FuncDefHash *pHash = &GLOBAL(FuncDefHash, sqlite3GlobalFunctions); sqlite3GlobalConfig.inProgress = 1; +#ifdef SQLITE_ENABLE_SQLLOG + { + extern void sqlite3_init_sqllog(void); + sqlite3_init_sqllog(); + } +#endif memset(pHash, 0, sizeof(sqlite3GlobalFunctions)); sqlite3RegisterGlobalFunctions(); if( sqlite3GlobalConfig.isPCacheInit==0 ){ rc = sqlite3PcacheInitialize(); } @@ -97333,10 +133029,13 @@ } if( rc==SQLITE_OK ){ sqlite3PCacheBufferSetup( sqlite3GlobalConfig.pPage, sqlite3GlobalConfig.szPage, sqlite3GlobalConfig.nPage); sqlite3GlobalConfig.isInit = 1; +#ifdef SQLITE_EXTRA_INIT + bRunExtraInit = 1; +#endif } sqlite3GlobalConfig.inProgress = 0; } sqlite3_mutex_leave(sqlite3GlobalConfig.pInitMutex); @@ -97368,10 +133067,20 @@ memcpy(&y, &x, 8); assert( sqlite3IsNaN(y) ); } #endif #endif + + /* Do extra initialization steps requested by the SQLITE_EXTRA_INIT + ** compile-time option. 
+ */ +#ifdef SQLITE_EXTRA_INIT + if( bRunExtraInit ){ + int SQLITE_EXTRA_INIT(const char*); + rc = SQLITE_EXTRA_INIT(0); + } +#endif return rc; } /* @@ -97380,12 +133089,23 @@ ** while any part of SQLite is otherwise in use in any thread. This ** routine is not threadsafe. But it is safe to invoke this routine ** on when SQLite is already shut down. If SQLite is already shut down ** when this routine is invoked, then this routine is a harmless no-op. */ -SQLITE_API int sqlite3_shutdown(void){ +SQLITE_API int SQLITE_STDCALL sqlite3_shutdown(void){ +#ifdef SQLITE_OMIT_WSD + int rc = sqlite3_wsd_init(4096, 24); + if( rc!=SQLITE_OK ){ + return rc; + } +#endif + if( sqlite3GlobalConfig.isInit ){ +#ifdef SQLITE_EXTRA_SHUTDOWN + void SQLITE_EXTRA_SHUTDOWN(void); + SQLITE_EXTRA_SHUTDOWN(); +#endif sqlite3_os_end(); sqlite3_reset_auto_extension(); sqlite3GlobalConfig.isInit = 0; } if( sqlite3GlobalConfig.isPCacheInit ){ @@ -97393,10 +133113,22 @@ sqlite3GlobalConfig.isPCacheInit = 0; } if( sqlite3GlobalConfig.isMallocInit ){ sqlite3MallocEnd(); sqlite3GlobalConfig.isMallocInit = 0; + +#ifndef SQLITE_OMIT_SHUTDOWN_DIRECTORIES + /* The heap subsystem has now been shutdown and these values are supposed + ** to be NULL or point to memory that was obtained from sqlite3_malloc(), + ** which would rely on that heap subsystem; therefore, make sure these + ** values cannot refer to heap memory that was just invalidated when the + ** heap subsystem was shutdown. This is only done if the current call to + ** this function resulted in the heap subsystem actually being shutdown. + */ + sqlite3_data_directory = 0; + sqlite3_temp_directory = 0; +#endif } if( sqlite3GlobalConfig.isMutexInit ){ sqlite3MutexEnd(); sqlite3GlobalConfig.isMutexInit = 0; } @@ -97411,11 +133143,11 @@ ** This routine should only be called when there are no outstanding ** database connections or memory allocations. This routine is not ** threadsafe. Failure to heed these warnings can lead to unpredictable ** behavior. */ -SQLITE_API int sqlite3_config(int op, ...){ +SQLITE_API int SQLITE_CDECL sqlite3_config(int op, ...){ va_list ap; int rc = SQLITE_OK; /* sqlite3_config() shall return SQLITE_MISUSE if it is invoked while ** the SQLite library is in use. */ @@ -97423,109 +133155,175 @@ va_start(ap, op); switch( op ){ /* Mutex configuration options are only available in a threadsafe - ** compile. + ** compile. */ -#if defined(SQLITE_THREADSAFE) && SQLITE_THREADSAFE>0 +#if defined(SQLITE_THREADSAFE) && SQLITE_THREADSAFE>0 /* IMP: R-54466-46756 */ case SQLITE_CONFIG_SINGLETHREAD: { - /* Disable all mutexing */ - sqlite3GlobalConfig.bCoreMutex = 0; - sqlite3GlobalConfig.bFullMutex = 0; + /* EVIDENCE-OF: R-02748-19096 This option sets the threading mode to + ** Single-thread. */ + sqlite3GlobalConfig.bCoreMutex = 0; /* Disable mutex on core */ + sqlite3GlobalConfig.bFullMutex = 0; /* Disable mutex on connections */ break; } +#endif +#if defined(SQLITE_THREADSAFE) && SQLITE_THREADSAFE>0 /* IMP: R-20520-54086 */ case SQLITE_CONFIG_MULTITHREAD: { - /* Disable mutexing of database connections */ - /* Enable mutexing of core data structures */ - sqlite3GlobalConfig.bCoreMutex = 1; - sqlite3GlobalConfig.bFullMutex = 0; + /* EVIDENCE-OF: R-14374-42468 This option sets the threading mode to + ** Multi-thread. 
*/ + sqlite3GlobalConfig.bCoreMutex = 1; /* Enable mutex on core */ + sqlite3GlobalConfig.bFullMutex = 0; /* Disable mutex on connections */ break; } +#endif +#if defined(SQLITE_THREADSAFE) && SQLITE_THREADSAFE>0 /* IMP: R-59593-21810 */ case SQLITE_CONFIG_SERIALIZED: { - /* Enable all mutexing */ - sqlite3GlobalConfig.bCoreMutex = 1; - sqlite3GlobalConfig.bFullMutex = 1; + /* EVIDENCE-OF: R-41220-51800 This option sets the threading mode to + ** Serialized. */ + sqlite3GlobalConfig.bCoreMutex = 1; /* Enable mutex on core */ + sqlite3GlobalConfig.bFullMutex = 1; /* Enable mutex on connections */ break; } +#endif +#if defined(SQLITE_THREADSAFE) && SQLITE_THREADSAFE>0 /* IMP: R-63666-48755 */ case SQLITE_CONFIG_MUTEX: { /* Specify an alternative mutex implementation */ sqlite3GlobalConfig.mutex = *va_arg(ap, sqlite3_mutex_methods*); break; } +#endif +#if defined(SQLITE_THREADSAFE) && SQLITE_THREADSAFE>0 /* IMP: R-14450-37597 */ case SQLITE_CONFIG_GETMUTEX: { /* Retrieve the current mutex implementation */ *va_arg(ap, sqlite3_mutex_methods*) = sqlite3GlobalConfig.mutex; break; } #endif - case SQLITE_CONFIG_MALLOC: { - /* Specify an alternative malloc implementation */ + /* EVIDENCE-OF: R-55594-21030 The SQLITE_CONFIG_MALLOC option takes a + ** single argument which is a pointer to an instance of the + ** sqlite3_mem_methods structure. The argument specifies alternative + ** low-level memory allocation routines to be used in place of the memory + ** allocation routines built into SQLite. */ sqlite3GlobalConfig.m = *va_arg(ap, sqlite3_mem_methods*); break; } case SQLITE_CONFIG_GETMALLOC: { - /* Retrieve the current malloc() implementation */ + /* EVIDENCE-OF: R-51213-46414 The SQLITE_CONFIG_GETMALLOC option takes a + ** single argument which is a pointer to an instance of the + ** sqlite3_mem_methods structure. The sqlite3_mem_methods structure is + ** filled with the currently defined memory allocation routines. */ if( sqlite3GlobalConfig.m.xMalloc==0 ) sqlite3MemSetDefault(); *va_arg(ap, sqlite3_mem_methods*) = sqlite3GlobalConfig.m; break; } case SQLITE_CONFIG_MEMSTATUS: { - /* Enable or disable the malloc status collection */ + /* EVIDENCE-OF: R-61275-35157 The SQLITE_CONFIG_MEMSTATUS option takes + ** single argument of type int, interpreted as a boolean, which enables + ** or disables the collection of memory allocation statistics. */ sqlite3GlobalConfig.bMemstat = va_arg(ap, int); break; } case SQLITE_CONFIG_SCRATCH: { - /* Designate a buffer for scratch memory space */ + /* EVIDENCE-OF: R-08404-60887 There are three arguments to + ** SQLITE_CONFIG_SCRATCH: A pointer an 8-byte aligned memory buffer from + ** which the scratch allocations will be drawn, the size of each scratch + ** allocation (sz), and the maximum number of scratch allocations (N). */ sqlite3GlobalConfig.pScratch = va_arg(ap, void*); sqlite3GlobalConfig.szScratch = va_arg(ap, int); sqlite3GlobalConfig.nScratch = va_arg(ap, int); break; } case SQLITE_CONFIG_PAGECACHE: { - /* Designate a buffer for page cache memory space */ + /* EVIDENCE-OF: R-18761-36601 There are three arguments to + ** SQLITE_CONFIG_PAGECACHE: A pointer to 8-byte aligned memory (pMem), + ** the size of each page cache line (sz), and the number of cache lines + ** (N). 
*/ sqlite3GlobalConfig.pPage = va_arg(ap, void*); sqlite3GlobalConfig.szPage = va_arg(ap, int); sqlite3GlobalConfig.nPage = va_arg(ap, int); break; } + case SQLITE_CONFIG_PCACHE_HDRSZ: { + /* EVIDENCE-OF: R-39100-27317 The SQLITE_CONFIG_PCACHE_HDRSZ option takes + ** a single parameter which is a pointer to an integer and writes into + ** that integer the number of extra bytes per page required for each page + ** in SQLITE_CONFIG_PAGECACHE. */ + *va_arg(ap, int*) = + sqlite3HeaderSizeBtree() + + sqlite3HeaderSizePcache() + + sqlite3HeaderSizePcache1(); + break; + } case SQLITE_CONFIG_PCACHE: { - /* Specify an alternative page cache implementation */ - sqlite3GlobalConfig.pcache = *va_arg(ap, sqlite3_pcache_methods*); + /* no-op */ break; } - case SQLITE_CONFIG_GETPCACHE: { - if( sqlite3GlobalConfig.pcache.xInit==0 ){ + /* now an error */ + rc = SQLITE_ERROR; + break; + } + + case SQLITE_CONFIG_PCACHE2: { + /* EVIDENCE-OF: R-63325-48378 The SQLITE_CONFIG_PCACHE2 option takes a + ** single argument which is a pointer to an sqlite3_pcache_methods2 + ** object. This object specifies the interface to a custom page cache + ** implementation. */ + sqlite3GlobalConfig.pcache2 = *va_arg(ap, sqlite3_pcache_methods2*); + break; + } + case SQLITE_CONFIG_GETPCACHE2: { + /* EVIDENCE-OF: R-22035-46182 The SQLITE_CONFIG_GETPCACHE2 option takes a + ** single argument which is a pointer to an sqlite3_pcache_methods2 + ** object. SQLite copies of the current page cache implementation into + ** that object. */ + if( sqlite3GlobalConfig.pcache2.xInit==0 ){ sqlite3PCacheSetDefault(); } - *va_arg(ap, sqlite3_pcache_methods*) = sqlite3GlobalConfig.pcache; + *va_arg(ap, sqlite3_pcache_methods2*) = sqlite3GlobalConfig.pcache2; break; } +/* EVIDENCE-OF: R-06626-12911 The SQLITE_CONFIG_HEAP option is only +** available if SQLite is compiled with either SQLITE_ENABLE_MEMSYS3 or +** SQLITE_ENABLE_MEMSYS5 and returns SQLITE_ERROR if invoked otherwise. */ #if defined(SQLITE_ENABLE_MEMSYS3) || defined(SQLITE_ENABLE_MEMSYS5) case SQLITE_CONFIG_HEAP: { - /* Designate a buffer for heap memory space */ + /* EVIDENCE-OF: R-19854-42126 There are three arguments to + ** SQLITE_CONFIG_HEAP: An 8-byte aligned pointer to the memory, the + ** number of bytes in the memory buffer, and the minimum allocation size. + */ sqlite3GlobalConfig.pHeap = va_arg(ap, void*); sqlite3GlobalConfig.nHeap = va_arg(ap, int); sqlite3GlobalConfig.mnReq = va_arg(ap, int); + + if( sqlite3GlobalConfig.mnReq<1 ){ + sqlite3GlobalConfig.mnReq = 1; + }else if( sqlite3GlobalConfig.mnReq>(1<<12) ){ + /* cap min request size at 2^12 */ + sqlite3GlobalConfig.mnReq = (1<<12); + } if( sqlite3GlobalConfig.pHeap==0 ){ - /* If the heap pointer is NULL, then restore the malloc implementation - ** back to NULL pointers too. This will cause the malloc to go - ** back to its default implementation when sqlite3_initialize() is - ** run. + /* EVIDENCE-OF: R-49920-60189 If the first pointer (the memory pointer) + ** is NULL, then SQLite reverts to using its default memory allocator + ** (the system malloc() implementation), undoing any prior invocation of + ** SQLITE_CONFIG_MALLOC. + ** + ** Setting sqlite3GlobalConfig.m to all zeros will cause malloc to + ** revert to its default implementation when sqlite3_initialize() is run */ memset(&sqlite3GlobalConfig.m, 0, sizeof(sqlite3GlobalConfig.m)); }else{ - /* The heap pointer is not NULL, then install one of the - ** mem5.c/mem3.c methods. If neither ENABLE_MEMSYS3 nor - ** ENABLE_MEMSYS5 is defined, return an error. 
- */ + /* EVIDENCE-OF: R-61006-08918 If the memory pointer is not NULL then the + ** alternative memory allocator is engaged to handle all of SQLites + ** memory allocation needs. */ #ifdef SQLITE_ENABLE_MEMSYS3 sqlite3GlobalConfig.m = *sqlite3MemGetMemsys3(); #endif #ifdef SQLITE_ENABLE_MEMSYS5 sqlite3GlobalConfig.m = *sqlite3MemGetMemsys5(); @@ -97539,11 +133337,11 @@ sqlite3GlobalConfig.szLookaside = va_arg(ap, int); sqlite3GlobalConfig.nLookaside = va_arg(ap, int); break; } - /* Record a pointer to the logger funcction and its first argument. + /* Record a pointer to the logger function and its first argument. ** The default is NULL. Logging is disabled if the function pointer is ** NULL. */ case SQLITE_CONFIG_LOG: { /* MSVC is picky about pulling func ptrs from va lists. @@ -97553,10 +133351,82 @@ typedef void(*LOGFUNC_t)(void*,int,const char*); sqlite3GlobalConfig.xLog = va_arg(ap, LOGFUNC_t); sqlite3GlobalConfig.pLogArg = va_arg(ap, void*); break; } + + /* EVIDENCE-OF: R-55548-33817 The compile-time setting for URI filenames + ** can be changed at start-time using the + ** sqlite3_config(SQLITE_CONFIG_URI,1) or + ** sqlite3_config(SQLITE_CONFIG_URI,0) configuration calls. + */ + case SQLITE_CONFIG_URI: { + /* EVIDENCE-OF: R-25451-61125 The SQLITE_CONFIG_URI option takes a single + ** argument of type int. If non-zero, then URI handling is globally + ** enabled. If the parameter is zero, then URI handling is globally + ** disabled. */ + sqlite3GlobalConfig.bOpenUri = va_arg(ap, int); + break; + } + + case SQLITE_CONFIG_COVERING_INDEX_SCAN: { + /* EVIDENCE-OF: R-36592-02772 The SQLITE_CONFIG_COVERING_INDEX_SCAN + ** option takes a single integer argument which is interpreted as a + ** boolean in order to enable or disable the use of covering indices for + ** full table scans in the query optimizer. */ + sqlite3GlobalConfig.bUseCis = va_arg(ap, int); + break; + } + +#ifdef SQLITE_ENABLE_SQLLOG + case SQLITE_CONFIG_SQLLOG: { + typedef void(*SQLLOGFUNC_t)(void*, sqlite3*, const char*, int); + sqlite3GlobalConfig.xSqllog = va_arg(ap, SQLLOGFUNC_t); + sqlite3GlobalConfig.pSqllogArg = va_arg(ap, void *); + break; + } +#endif + + case SQLITE_CONFIG_MMAP_SIZE: { + /* EVIDENCE-OF: R-58063-38258 SQLITE_CONFIG_MMAP_SIZE takes two 64-bit + ** integer (sqlite3_int64) values that are the default mmap size limit + ** (the default setting for PRAGMA mmap_size) and the maximum allowed + ** mmap size limit. */ + sqlite3_int64 szMmap = va_arg(ap, sqlite3_int64); + sqlite3_int64 mxMmap = va_arg(ap, sqlite3_int64); + /* EVIDENCE-OF: R-53367-43190 If either argument to this option is + ** negative, then that argument is changed to its compile-time default. + ** + ** EVIDENCE-OF: R-34993-45031 The maximum allowed mmap size will be + ** silently truncated if necessary so that it does not exceed the + ** compile-time maximum mmap size set by the SQLITE_MAX_MMAP_SIZE + ** compile-time option. + */ + if( mxMmap<0 || mxMmap>SQLITE_MAX_MMAP_SIZE ){ + mxMmap = SQLITE_MAX_MMAP_SIZE; + } + if( szMmap<0 ) szMmap = SQLITE_DEFAULT_MMAP_SIZE; + if( szMmap>mxMmap) szMmap = mxMmap; + sqlite3GlobalConfig.mxMmap = mxMmap; + sqlite3GlobalConfig.szMmap = szMmap; + break; + } + +#if SQLITE_OS_WIN && defined(SQLITE_WIN32_MALLOC) /* IMP: R-04780-55815 */ + case SQLITE_CONFIG_WIN32_HEAPSIZE: { + /* EVIDENCE-OF: R-34926-03360 SQLITE_CONFIG_WIN32_HEAPSIZE takes a 32-bit + ** unsigned integer value that specifies the maximum size of the created + ** heap. 
*/ + sqlite3GlobalConfig.nHeap = va_arg(ap, int); + break; + } +#endif + + case SQLITE_CONFIG_PMASZ: { + sqlite3GlobalConfig.szPma = va_arg(ap, unsigned int); + break; + } default: { rc = SQLITE_ERROR; break; } @@ -97575,10 +133445,11 @@ ** space for the lookaside memory is obtained from sqlite3_malloc(). ** If pStart is not NULL then it is sz*cnt bytes of memory to use for ** the lookaside memory. */ static int setupLookaside(sqlite3 *db, void *pBuf, int sz, int cnt){ +#ifndef SQLITE_OMIT_LOOKASIDE void *pStart; if( db->lookaside.nOut ){ return SQLITE_BUSY; } /* Free any existing lookaside buffer for this handle before @@ -97586,25 +133457,25 @@ ** both at the same time. */ if( db->lookaside.bMalloced ){ sqlite3_free(db->lookaside.pStart); } - /* The size of a lookaside slot needs to be larger than a pointer - ** to be useful. + /* The size of a lookaside slot after ROUNDDOWN8 needs to be larger + ** than a pointer to be useful. */ + sz = ROUNDDOWN8(sz); /* IMP: R-33038-09382 */ if( sz<=(int)sizeof(LookasideSlot*) ) sz = 0; if( cnt<0 ) cnt = 0; if( sz==0 || cnt==0 ){ sz = 0; pStart = 0; }else if( pBuf==0 ){ - sz = ROUND8(sz); sqlite3BeginBenignMalloc(); - pStart = sqlite3Malloc( sz*cnt ); + pStart = sqlite3Malloc( sz*cnt ); /* IMP: R-61949-35727 */ sqlite3EndBenignMalloc(); + if( pStart ) cnt = sqlite3MallocSize(pStart)/sz; }else{ - sz = ROUNDDOWN8(sz); pStart = pBuf; } db->lookaside.pStart = pStart; db->lookaside.pFree = 0; db->lookaside.sz = (u16)sz; @@ -97617,44 +133488,134 @@ p->pNext = db->lookaside.pFree; db->lookaside.pFree = p; p = (LookasideSlot*)&((u8*)p)[sz]; } db->lookaside.pEnd = p; - db->lookaside.bEnabled = 1; + db->lookaside.bDisable = 0; db->lookaside.bMalloced = pBuf==0 ?1:0; }else{ - db->lookaside.pEnd = 0; - db->lookaside.bEnabled = 0; + db->lookaside.pStart = db; + db->lookaside.pEnd = db; + db->lookaside.bDisable = 1; db->lookaside.bMalloced = 0; } +#endif /* SQLITE_OMIT_LOOKASIDE */ return SQLITE_OK; } /* ** Return the mutex associated with a database connection. */ -SQLITE_API sqlite3_mutex *sqlite3_db_mutex(sqlite3 *db){ +SQLITE_API sqlite3_mutex *SQLITE_STDCALL sqlite3_db_mutex(sqlite3 *db){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif return db->mutex; } + +/* +** Free up as much memory as we can from the given database +** connection. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_db_release_memory(sqlite3 *db){ + int i; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT; +#endif + sqlite3_mutex_enter(db->mutex); + sqlite3BtreeEnterAll(db); + for(i=0; i<db->nDb; i++){ + Btree *pBt = db->aDb[i].pBt; + if( pBt ){ + Pager *pPager = sqlite3BtreePager(pBt); + sqlite3PagerShrink(pPager); + } + } + sqlite3BtreeLeaveAll(db); + sqlite3_mutex_leave(db->mutex); + return SQLITE_OK; +} + +/* +** Flush any dirty pages in the pager-cache for any attached database +** to disk. 
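**
** Illustrative usage (editorial sketch, not part of the original source):
**
**   int rc = sqlite3_db_cacheflush(db);
**   if( rc==SQLITE_BUSY ){
**     /* at least one dirty page could not be written; retry later */
**   }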
+*/ +SQLITE_API int SQLITE_STDCALL sqlite3_db_cacheflush(sqlite3 *db){ + int i; + int rc = SQLITE_OK; + int bSeenBusy = 0; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT; +#endif + sqlite3_mutex_enter(db->mutex); + sqlite3BtreeEnterAll(db); + for(i=0; rc==SQLITE_OK && i<db->nDb; i++){ + Btree *pBt = db->aDb[i].pBt; + if( pBt && sqlite3BtreeIsInTrans(pBt) ){ + Pager *pPager = sqlite3BtreePager(pBt); + rc = sqlite3PagerFlush(pPager); + if( rc==SQLITE_BUSY ){ + bSeenBusy = 1; + rc = SQLITE_OK; + } + } + } + sqlite3BtreeLeaveAll(db); + sqlite3_mutex_leave(db->mutex); + return ((rc==SQLITE_OK && bSeenBusy) ? SQLITE_BUSY : rc); +} /* ** Configuration settings for an individual database connection */ -SQLITE_API int sqlite3_db_config(sqlite3 *db, int op, ...){ +SQLITE_API int SQLITE_CDECL sqlite3_db_config(sqlite3 *db, int op, ...){ va_list ap; int rc; va_start(ap, op); switch( op ){ case SQLITE_DBCONFIG_LOOKASIDE: { - void *pBuf = va_arg(ap, void*); - int sz = va_arg(ap, int); - int cnt = va_arg(ap, int); + void *pBuf = va_arg(ap, void*); /* IMP: R-26835-10964 */ + int sz = va_arg(ap, int); /* IMP: R-47871-25994 */ + int cnt = va_arg(ap, int); /* IMP: R-04460-53386 */ rc = setupLookaside(db, pBuf, sz, cnt); break; } default: { - rc = SQLITE_ERROR; + static const struct { + int op; /* The opcode */ + u32 mask; /* Mask of the bit in sqlite3.flags to set/clear */ + } aFlagOp[] = { + { SQLITE_DBCONFIG_ENABLE_FKEY, SQLITE_ForeignKeys }, + { SQLITE_DBCONFIG_ENABLE_TRIGGER, SQLITE_EnableTrigger }, + }; + unsigned int i; + rc = SQLITE_ERROR; /* IMP: R-42790-23372 */ + for(i=0; i<ArraySize(aFlagOp); i++){ + if( aFlagOp[i].op==op ){ + int onoff = va_arg(ap, int); + int *pRes = va_arg(ap, int*); + int oldFlags = db->flags; + if( onoff>0 ){ + db->flags |= aFlagOp[i].mask; + }else if( onoff==0 ){ + db->flags &= ~aFlagOp[i].mask; + } + if( oldFlags!=db->flags ){ + sqlite3ExpirePreparedStatements(db); + } + if( pRes ){ + *pRes = (db->flags & aFlagOp[i].mask)!=0; + } + rc = SQLITE_OK; + break; + } + } break; } } va_end(ap); return rc; @@ -97681,17 +133642,24 @@ int nKey1, const void *pKey1, int nKey2, const void *pKey2 ){ int rc, n; n = nKey1<nKey2 ? nKey1 : nKey2; + /* EVIDENCE-OF: R-65033-28449 The built-in BINARY collation compares + ** strings byte by byte using the memcmp() function from the standard C + ** library. */ rc = memcmp(pKey1, pKey2, n); if( rc==0 ){ if( padFlag && allSpaces(((char*)pKey1)+n, nKey1-n) && allSpaces(((char*)pKey2)+n, nKey2-n) ){ - /* Leave rc unchanged at 0 */ + /* EVIDENCE-OF: R-31624-24737 RTRIM is like BINARY except that extra + ** spaces at the end of either string do not change the result. In other + ** words, strings will compare equal to one another as long as they + ** differ only in the number of spaces at the end. + */ }else{ rc = nKey1 - nKey2; } } return rc; @@ -97698,11 +133666,11 @@ } /* ** Another built-in collating sequence: NOCASE. ** -** This collating sequence is intended to be used for "case independant +** This collating sequence is intended to be used for "case independent ** comparison". SQLite's knowledge of upper and lower case equivalents ** extends only to the 26 characters used in the English language. ** ** At the moment there is only a UTF-8 implementation. 
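**
** For example (editorial note, not part of the original source), "Abc"
** and "aBC" compare equal under NOCASE, while characters outside the
** ASCII range are not folded.  The same ASCII-only folding is exposed
** to C code through sqlite3_strnicmp():
**
**   assert( sqlite3_strnicmp("Abc", "aBC", 3)==0 );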
*/ @@ -97721,25 +133689,43 @@ } /* ** Return the ROWID of the most recent insert */ -SQLITE_API sqlite_int64 sqlite3_last_insert_rowid(sqlite3 *db){ +SQLITE_API sqlite_int64 SQLITE_STDCALL sqlite3_last_insert_rowid(sqlite3 *db){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif return db->lastRowid; } /* ** Return the number of changes in the most recent call to sqlite3_exec(). */ -SQLITE_API int sqlite3_changes(sqlite3 *db){ +SQLITE_API int SQLITE_STDCALL sqlite3_changes(sqlite3 *db){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif return db->nChange; } /* ** Return the number of changes since the database handle was opened. */ -SQLITE_API int sqlite3_total_changes(sqlite3 *db){ +SQLITE_API int SQLITE_STDCALL sqlite3_total_changes(sqlite3 *db){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif return db->nTotalChange; } /* ** Close all open savepoints. This function only manipulates fields of the @@ -97754,59 +133740,174 @@ } db->nSavepoint = 0; db->nStatement = 0; db->isTransactionSavepoint = 0; } + +/* +** Invoke the destructor function associated with FuncDef p, if any. Except, +** if this is not the last copy of the function, do not invoke it. Multiple +** copies of a single function are created when create_function() is called +** with SQLITE_ANY as the encoding. +*/ +static void functionDestroy(sqlite3 *db, FuncDef *p){ + FuncDestructor *pDestructor = p->pDestructor; + if( pDestructor ){ + pDestructor->nRef--; + if( pDestructor->nRef==0 ){ + pDestructor->xDestroy(pDestructor->pUserData); + sqlite3DbFree(db, pDestructor); + } + } +} + +/* +** Disconnect all sqlite3_vtab objects that belong to database connection +** db. This is called when db is being closed. +*/ +static void disconnectAllVtab(sqlite3 *db){ +#ifndef SQLITE_OMIT_VIRTUALTABLE + int i; + HashElem *p; + sqlite3BtreeEnterAll(db); + for(i=0; i<db->nDb; i++){ + Schema *pSchema = db->aDb[i].pSchema; + if( db->aDb[i].pSchema ){ + for(p=sqliteHashFirst(&pSchema->tblHash); p; p=sqliteHashNext(p)){ + Table *pTab = (Table *)sqliteHashData(p); + if( IsVirtual(pTab) ) sqlite3VtabDisconnect(db, pTab); + } + } + } + for(p=sqliteHashFirst(&db->aModule); p; p=sqliteHashNext(p)){ + Module *pMod = (Module *)sqliteHashData(p); + if( pMod->pEpoTab ){ + sqlite3VtabDisconnect(db, pMod->pEpoTab); + } + } + sqlite3VtabUnlockList(db); + sqlite3BtreeLeaveAll(db); +#else + UNUSED_PARAMETER(db); +#endif +} + +/* +** Return TRUE if database connection db has unfinalized prepared +** statements or unfinished sqlite3_backup objects. +*/ +static int connectionIsBusy(sqlite3 *db){ + int j; + assert( sqlite3_mutex_held(db->mutex) ); + if( db->pVdbe ) return 1; + for(j=0; j<db->nDb; j++){ + Btree *pBt = db->aDb[j].pBt; + if( pBt && sqlite3BtreeIsInBackup(pBt) ) return 1; + } + return 0; +} /* ** Close an existing SQLite database */ -SQLITE_API int sqlite3_close(sqlite3 *db){ - HashElem *i; - int j; - +static int sqlite3Close(sqlite3 *db, int forceZombie){ if( !db ){ + /* EVIDENCE-OF: R-63257-11740 Calling sqlite3_close() or + ** sqlite3_close_v2() with a NULL pointer argument is a harmless no-op. 
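**
** Illustrative usage (editorial sketch, not part of the original source):
** a binding that cannot guarantee every statement is finalized before
** shutdown typically falls back to the deferred variant, e.g.
**
**   if( sqlite3_close(db)==SQLITE_BUSY ){
**     sqlite3_close_v2(db);  /* free the handle once statements finish */
**   }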
*/ return SQLITE_OK; } if( !sqlite3SafetyCheckSickOrOk(db) ){ return SQLITE_MISUSE_BKPT; } sqlite3_mutex_enter(db->mutex); - sqlite3ResetInternalSchema(db, 0); + /* Force xDisconnect calls on all virtual tables */ + disconnectAllVtab(db); - /* If a transaction is open, the ResetInternalSchema() call above + /* If a transaction is open, the disconnectAllVtab() call above ** will not have called the xDisconnect() method on any virtual ** tables in the db->aVTrans[] array. The following sqlite3VtabRollback() ** call will do so. We need to do this before the check for active ** SQL statements below, as the v-table implementation may be storing ** some prepared statements internally. */ sqlite3VtabRollback(db); - /* If there are any outstanding VMs, return SQLITE_BUSY. */ - if( db->pVdbe ){ - sqlite3Error(db, SQLITE_BUSY, - "unable to close due to unfinalised statements"); + /* Legacy behavior (sqlite3_close() behavior) is to return + ** SQLITE_BUSY if the connection can not be closed immediately. + */ + if( !forceZombie && connectionIsBusy(db) ){ + sqlite3ErrorWithMsg(db, SQLITE_BUSY, "unable to close due to unfinalized " + "statements or unfinished backups"); sqlite3_mutex_leave(db->mutex); return SQLITE_BUSY; } - assert( sqlite3SafetyCheckSickOrOk(db) ); - - for(j=0; j<db->nDb; j++){ - Btree *pBt = db->aDb[j].pBt; - if( pBt && sqlite3BtreeIsInBackup(pBt) ){ - sqlite3Error(db, SQLITE_BUSY, - "unable to close due to unfinished backup operation"); - sqlite3_mutex_leave(db->mutex); - return SQLITE_BUSY; - } - } + +#ifdef SQLITE_ENABLE_SQLLOG + if( sqlite3GlobalConfig.xSqllog ){ + /* Closing the handle. Fourth parameter is passed the value 2. */ + sqlite3GlobalConfig.xSqllog(sqlite3GlobalConfig.pSqllogArg, db, 0, 2); + } +#endif + + /* Convert the connection into a zombie and then close it. + */ + db->magic = SQLITE_MAGIC_ZOMBIE; + sqlite3LeaveMutexAndCloseZombie(db); + return SQLITE_OK; +} + +/* +** Two variations on the public interface for closing a database +** connection. The sqlite3_close() version returns SQLITE_BUSY and +** leaves the connection option if there are unfinalized prepared +** statements or unfinished sqlite3_backups. The sqlite3_close_v2() +** version forces the connection to become a zombie if there are +** unclosed resources, and arranges for deallocation when the last +** prepare statement or sqlite3_backup closes. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_close(sqlite3 *db){ return sqlite3Close(db,0); } +SQLITE_API int SQLITE_STDCALL sqlite3_close_v2(sqlite3 *db){ return sqlite3Close(db,1); } + + +/* +** Close the mutex on database connection db. +** +** Furthermore, if database connection db is a zombie (meaning that there +** has been a prior call to sqlite3_close(db) or sqlite3_close_v2(db)) and +** every sqlite3_stmt has now been finalized and every sqlite3_backup has +** finished, then free all resources. +*/ +SQLITE_PRIVATE void sqlite3LeaveMutexAndCloseZombie(sqlite3 *db){ + HashElem *i; /* Hash table iterator */ + int j; + + /* If there are outstanding sqlite3_stmt or sqlite3_backup objects + ** or if the connection has not yet been closed by sqlite3_close_v2(), + ** then just leave the mutex and return. + */ + if( db->magic!=SQLITE_MAGIC_ZOMBIE || connectionIsBusy(db) ){ + sqlite3_mutex_leave(db->mutex); + return; + } + + /* If we reach this point, it means that the database connection has + ** closed all sqlite3_stmt and sqlite3_backup objects and has been + ** passed to sqlite3_close (meaning that it is a zombie). 
Therefore, + ** go ahead and free all resources. + */ + + /* If a transaction is open, roll it back. This also ensures that if + ** any database schemas have been modified by an uncommitted transaction + ** they are reset. And that the required b-tree mutex is held to make + ** the pager rollback and schema reset an atomic operation. */ + sqlite3RollbackAll(db, SQLITE_OK); /* Free any outstanding Savepoint structures. */ sqlite3CloseSavepoints(db); + /* Close all database connections */ for(j=0; j<db->nDb; j++){ struct Db *pDb = &db->aDb[j]; if( pDb->pBt ){ sqlite3BtreeClose(pDb->pBt); pDb->pBt = 0; @@ -97813,24 +133914,32 @@ if( j!=1 ){ pDb->pSchema = 0; } } } - sqlite3ResetInternalSchema(db, 0); + /* Clear the TEMP schema separately and last */ + if( db->aDb[1].pSchema ){ + sqlite3SchemaClear(db->aDb[1].pSchema); + } + sqlite3VtabUnlockList(db); + + /* Free up the array of auxiliary databases */ + sqlite3CollapseDatabaseArray(db); + assert( db->nDb<=2 ); + assert( db->aDb==db->aDbStatic ); /* Tell the code in notify.c that the connection no longer holds any ** locks and does not require any further unlock-notify callbacks. */ sqlite3ConnectionClosed(db); - assert( db->nDb<=2 ); - assert( db->aDb==db->aDbStatic ); for(j=0; j<ArraySize(db->aFunc.a); j++){ FuncDef *pNext, *pHash, *p; for(p=db->aFunc.a[j]; p; p=pHash){ pHash = p->pHash; while( p ){ + functionDestroy(db, p); pNext = p->pNext; sqlite3DbFree(db, p); p = pNext; } } @@ -97850,20 +133959,23 @@ for(i=sqliteHashFirst(&db->aModule); i; i=sqliteHashNext(i)){ Module *pMod = (Module *)sqliteHashData(i); if( pMod->xDestroy ){ pMod->xDestroy(pMod->pAux); } + sqlite3VtabEponymousTableClear(db, pMod); sqlite3DbFree(db, pMod); } sqlite3HashClear(&db->aModule); #endif - sqlite3Error(db, SQLITE_OK, 0); /* Deallocates any cached error strings. */ - if( db->pErr ){ - sqlite3ValueFree(db->pErr); - } + sqlite3Error(db, SQLITE_OK); /* Deallocates any cached error strings. */ + sqlite3ValueFree(db->pErr); sqlite3CloseExtensions(db); +#if SQLITE_USER_AUTHENTICATION + sqlite3_free(db->auth.zAuthUser); + sqlite3_free(db->auth.zAuthPW); +#endif db->magic = SQLITE_MAGIC_ERROR; /* The temp-database schema is allocated differently from the other schema ** objects (using sqliteMalloc() directly, instead of sqlite3BtreeSchema()). @@ -97878,47 +133990,173 @@ assert( db->lookaside.nOut==0 ); /* Fails on a lookaside memory leak */ if( db->lookaside.bMalloced ){ sqlite3_free(db->lookaside.pStart); } sqlite3_free(db); - return SQLITE_OK; } /* -** Rollback all database files. +** Rollback all database files. If tripCode is not SQLITE_OK, then +** any write cursors are invalidated ("tripped" - as in "tripping a circuit +** breaker") and made to return tripCode if there are any further +** attempts to use that cursor. Read cursors remain open and valid +** but are "saved" in case the table pages are moved around. */ -SQLITE_PRIVATE void sqlite3RollbackAll(sqlite3 *db){ +SQLITE_PRIVATE void sqlite3RollbackAll(sqlite3 *db, int tripCode){ int i; int inTrans = 0; + int schemaChange; assert( sqlite3_mutex_held(db->mutex) ); sqlite3BeginBenignMalloc(); + + /* Obtain all b-tree mutexes before making any calls to BtreeRollback(). + ** This is important in case the transaction being rolled back has + ** modified the database schema. If the b-tree mutexes are not taken + ** here, then another shared-cache connection might sneak in between + ** the database rollback and schema reset, which can cause false + ** corruption reports in some cases. 
*/ + sqlite3BtreeEnterAll(db); + schemaChange = (db->flags & SQLITE_InternChanges)!=0 && db->init.busy==0; + for(i=0; i<db->nDb; i++){ - if( db->aDb[i].pBt ){ - if( sqlite3BtreeIsInTrans(db->aDb[i].pBt) ){ + Btree *p = db->aDb[i].pBt; + if( p ){ + if( sqlite3BtreeIsInTrans(p) ){ inTrans = 1; } - sqlite3BtreeRollback(db->aDb[i].pBt); - db->aDb[i].inTrans = 0; + sqlite3BtreeRollback(p, tripCode, !schemaChange); } } sqlite3VtabRollback(db); sqlite3EndBenignMalloc(); - if( db->flags&SQLITE_InternChanges ){ + if( (db->flags&SQLITE_InternChanges)!=0 && db->init.busy==0 ){ sqlite3ExpirePreparedStatements(db); - sqlite3ResetInternalSchema(db, 0); + sqlite3ResetAllSchemasOfConnection(db); } + sqlite3BtreeLeaveAll(db); /* Any deferred constraint violations have now been resolved. */ db->nDeferredCons = 0; + db->nDeferredImmCons = 0; + db->flags &= ~SQLITE_DeferFKs; /* If one has been configured, invoke the rollback-hook callback */ if( db->xRollbackCallback && (inTrans || !db->autoCommit) ){ db->xRollbackCallback(db->pRollbackArg); } } +/* +** Return a static string containing the name corresponding to the error code +** specified in the argument. +*/ +#if defined(SQLITE_NEED_ERR_NAME) +SQLITE_PRIVATE const char *sqlite3ErrName(int rc){ + const char *zName = 0; + int i, origRc = rc; + for(i=0; i<2 && zName==0; i++, rc &= 0xff){ + switch( rc ){ + case SQLITE_OK: zName = "SQLITE_OK"; break; + case SQLITE_ERROR: zName = "SQLITE_ERROR"; break; + case SQLITE_INTERNAL: zName = "SQLITE_INTERNAL"; break; + case SQLITE_PERM: zName = "SQLITE_PERM"; break; + case SQLITE_ABORT: zName = "SQLITE_ABORT"; break; + case SQLITE_ABORT_ROLLBACK: zName = "SQLITE_ABORT_ROLLBACK"; break; + case SQLITE_BUSY: zName = "SQLITE_BUSY"; break; + case SQLITE_BUSY_RECOVERY: zName = "SQLITE_BUSY_RECOVERY"; break; + case SQLITE_BUSY_SNAPSHOT: zName = "SQLITE_BUSY_SNAPSHOT"; break; + case SQLITE_LOCKED: zName = "SQLITE_LOCKED"; break; + case SQLITE_LOCKED_SHAREDCACHE: zName = "SQLITE_LOCKED_SHAREDCACHE";break; + case SQLITE_NOMEM: zName = "SQLITE_NOMEM"; break; + case SQLITE_READONLY: zName = "SQLITE_READONLY"; break; + case SQLITE_READONLY_RECOVERY: zName = "SQLITE_READONLY_RECOVERY"; break; + case SQLITE_READONLY_CANTLOCK: zName = "SQLITE_READONLY_CANTLOCK"; break; + case SQLITE_READONLY_ROLLBACK: zName = "SQLITE_READONLY_ROLLBACK"; break; + case SQLITE_READONLY_DBMOVED: zName = "SQLITE_READONLY_DBMOVED"; break; + case SQLITE_INTERRUPT: zName = "SQLITE_INTERRUPT"; break; + case SQLITE_IOERR: zName = "SQLITE_IOERR"; break; + case SQLITE_IOERR_READ: zName = "SQLITE_IOERR_READ"; break; + case SQLITE_IOERR_SHORT_READ: zName = "SQLITE_IOERR_SHORT_READ"; break; + case SQLITE_IOERR_WRITE: zName = "SQLITE_IOERR_WRITE"; break; + case SQLITE_IOERR_FSYNC: zName = "SQLITE_IOERR_FSYNC"; break; + case SQLITE_IOERR_DIR_FSYNC: zName = "SQLITE_IOERR_DIR_FSYNC"; break; + case SQLITE_IOERR_TRUNCATE: zName = "SQLITE_IOERR_TRUNCATE"; break; + case SQLITE_IOERR_FSTAT: zName = "SQLITE_IOERR_FSTAT"; break; + case SQLITE_IOERR_UNLOCK: zName = "SQLITE_IOERR_UNLOCK"; break; + case SQLITE_IOERR_RDLOCK: zName = "SQLITE_IOERR_RDLOCK"; break; + case SQLITE_IOERR_DELETE: zName = "SQLITE_IOERR_DELETE"; break; + case SQLITE_IOERR_NOMEM: zName = "SQLITE_IOERR_NOMEM"; break; + case SQLITE_IOERR_ACCESS: zName = "SQLITE_IOERR_ACCESS"; break; + case SQLITE_IOERR_CHECKRESERVEDLOCK: + zName = "SQLITE_IOERR_CHECKRESERVEDLOCK"; break; + case SQLITE_IOERR_LOCK: zName = "SQLITE_IOERR_LOCK"; break; + case SQLITE_IOERR_CLOSE: zName = "SQLITE_IOERR_CLOSE"; break; + case 
SQLITE_IOERR_DIR_CLOSE: zName = "SQLITE_IOERR_DIR_CLOSE"; break; + case SQLITE_IOERR_SHMOPEN: zName = "SQLITE_IOERR_SHMOPEN"; break; + case SQLITE_IOERR_SHMSIZE: zName = "SQLITE_IOERR_SHMSIZE"; break; + case SQLITE_IOERR_SHMLOCK: zName = "SQLITE_IOERR_SHMLOCK"; break; + case SQLITE_IOERR_SHMMAP: zName = "SQLITE_IOERR_SHMMAP"; break; + case SQLITE_IOERR_SEEK: zName = "SQLITE_IOERR_SEEK"; break; + case SQLITE_IOERR_DELETE_NOENT: zName = "SQLITE_IOERR_DELETE_NOENT";break; + case SQLITE_IOERR_MMAP: zName = "SQLITE_IOERR_MMAP"; break; + case SQLITE_IOERR_GETTEMPPATH: zName = "SQLITE_IOERR_GETTEMPPATH"; break; + case SQLITE_IOERR_CONVPATH: zName = "SQLITE_IOERR_CONVPATH"; break; + case SQLITE_CORRUPT: zName = "SQLITE_CORRUPT"; break; + case SQLITE_CORRUPT_VTAB: zName = "SQLITE_CORRUPT_VTAB"; break; + case SQLITE_NOTFOUND: zName = "SQLITE_NOTFOUND"; break; + case SQLITE_FULL: zName = "SQLITE_FULL"; break; + case SQLITE_CANTOPEN: zName = "SQLITE_CANTOPEN"; break; + case SQLITE_CANTOPEN_NOTEMPDIR: zName = "SQLITE_CANTOPEN_NOTEMPDIR";break; + case SQLITE_CANTOPEN_ISDIR: zName = "SQLITE_CANTOPEN_ISDIR"; break; + case SQLITE_CANTOPEN_FULLPATH: zName = "SQLITE_CANTOPEN_FULLPATH"; break; + case SQLITE_CANTOPEN_CONVPATH: zName = "SQLITE_CANTOPEN_CONVPATH"; break; + case SQLITE_PROTOCOL: zName = "SQLITE_PROTOCOL"; break; + case SQLITE_EMPTY: zName = "SQLITE_EMPTY"; break; + case SQLITE_SCHEMA: zName = "SQLITE_SCHEMA"; break; + case SQLITE_TOOBIG: zName = "SQLITE_TOOBIG"; break; + case SQLITE_CONSTRAINT: zName = "SQLITE_CONSTRAINT"; break; + case SQLITE_CONSTRAINT_UNIQUE: zName = "SQLITE_CONSTRAINT_UNIQUE"; break; + case SQLITE_CONSTRAINT_TRIGGER: zName = "SQLITE_CONSTRAINT_TRIGGER";break; + case SQLITE_CONSTRAINT_FOREIGNKEY: + zName = "SQLITE_CONSTRAINT_FOREIGNKEY"; break; + case SQLITE_CONSTRAINT_CHECK: zName = "SQLITE_CONSTRAINT_CHECK"; break; + case SQLITE_CONSTRAINT_PRIMARYKEY: + zName = "SQLITE_CONSTRAINT_PRIMARYKEY"; break; + case SQLITE_CONSTRAINT_NOTNULL: zName = "SQLITE_CONSTRAINT_NOTNULL";break; + case SQLITE_CONSTRAINT_COMMITHOOK: + zName = "SQLITE_CONSTRAINT_COMMITHOOK"; break; + case SQLITE_CONSTRAINT_VTAB: zName = "SQLITE_CONSTRAINT_VTAB"; break; + case SQLITE_CONSTRAINT_FUNCTION: + zName = "SQLITE_CONSTRAINT_FUNCTION"; break; + case SQLITE_CONSTRAINT_ROWID: zName = "SQLITE_CONSTRAINT_ROWID"; break; + case SQLITE_MISMATCH: zName = "SQLITE_MISMATCH"; break; + case SQLITE_MISUSE: zName = "SQLITE_MISUSE"; break; + case SQLITE_NOLFS: zName = "SQLITE_NOLFS"; break; + case SQLITE_AUTH: zName = "SQLITE_AUTH"; break; + case SQLITE_FORMAT: zName = "SQLITE_FORMAT"; break; + case SQLITE_RANGE: zName = "SQLITE_RANGE"; break; + case SQLITE_NOTADB: zName = "SQLITE_NOTADB"; break; + case SQLITE_ROW: zName = "SQLITE_ROW"; break; + case SQLITE_NOTICE: zName = "SQLITE_NOTICE"; break; + case SQLITE_NOTICE_RECOVER_WAL: zName = "SQLITE_NOTICE_RECOVER_WAL";break; + case SQLITE_NOTICE_RECOVER_ROLLBACK: + zName = "SQLITE_NOTICE_RECOVER_ROLLBACK"; break; + case SQLITE_WARNING: zName = "SQLITE_WARNING"; break; + case SQLITE_WARNING_AUTOINDEX: zName = "SQLITE_WARNING_AUTOINDEX"; break; + case SQLITE_DONE: zName = "SQLITE_DONE"; break; + } + } + if( zName==0 ){ + static char zBuf[50]; + sqlite3_snprintf(sizeof(zBuf), zBuf, "SQLITE_UNKNOWN(%d)", origRc); + zName = zBuf; + } + return zName; +} +#endif + /* ** Return a static string that describes the kind of error specified in the ** argument. 
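**
** Editorial note (not part of the original source): applications reach
** these strings through the public sqlite3_errstr() wrapper, e.g.
**
**   const char *z = sqlite3_errstr(SQLITE_BUSY);  /* "database is locked" */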
*/ SQLITE_PRIVATE const char *sqlite3ErrStr(int rc){ @@ -97933,14 +134171,14 @@ /* SQLITE_NOMEM */ "out of memory", /* SQLITE_READONLY */ "attempt to write a readonly database", /* SQLITE_INTERRUPT */ "interrupted", /* SQLITE_IOERR */ "disk I/O error", /* SQLITE_CORRUPT */ "database disk image is malformed", - /* SQLITE_NOTFOUND */ 0, + /* SQLITE_NOTFOUND */ "unknown operation", /* SQLITE_FULL */ "database or disk is full", /* SQLITE_CANTOPEN */ "unable to open database file", - /* SQLITE_PROTOCOL */ 0, + /* SQLITE_PROTOCOL */ "locking protocol", /* SQLITE_EMPTY */ "table contains no data", /* SQLITE_SCHEMA */ "database schema has changed", /* SQLITE_TOOBIG */ "string or blob too big", /* SQLITE_CONSTRAINT */ "constraint failed", /* SQLITE_MISMATCH */ "datatype mismatch", @@ -97949,16 +134187,25 @@ /* SQLITE_AUTH */ "authorization denied", /* SQLITE_FORMAT */ "auxiliary database format error", /* SQLITE_RANGE */ "bind or column index out of range", /* SQLITE_NOTADB */ "file is encrypted or is not a database", }; - rc &= 0xff; - if( ALWAYS(rc>=0) && rc<(int)(sizeof(aMsg)/sizeof(aMsg[0])) && aMsg[rc]!=0 ){ - return aMsg[rc]; - }else{ - return "unknown error"; + const char *zErr = "unknown error"; + switch( rc ){ + case SQLITE_ABORT_ROLLBACK: { + zErr = "abort due to ROLLBACK"; + break; + } + default: { + rc &= 0xff; + if( ALWAYS(rc>=0) && rc<ArraySize(aMsg) && aMsg[rc]!=0 ){ + zErr = aMsg[rc]; + } + break; + } } + return zErr; } /* ** This routine implements a busy callback that sleeps and tries ** again until a timeout value is reached. The timeout value is @@ -97967,16 +134214,16 @@ */ static int sqliteDefaultBusyCallback( void *ptr, /* Database connection */ int count /* Number of times table has been busy */ ){ -#if SQLITE_OS_WIN || (defined(HAVE_USLEEP) && HAVE_USLEEP) +#if SQLITE_OS_WIN || HAVE_USLEEP static const u8 delays[] = { 1, 2, 5, 10, 15, 20, 25, 25, 25, 50, 50, 100 }; static const u8 totals[] = { 0, 1, 3, 8, 18, 33, 53, 78, 103, 128, 178, 228 }; -# define NDELAY (sizeof(delays)/sizeof(delays[0])) +# define NDELAY ArraySize(delays) sqlite3 *db = (sqlite3 *)ptr; int timeout = db->busyTimeout; int delay, prior; assert( count>=0 ); @@ -98025,19 +134272,23 @@ /* ** This routine sets the busy callback for an Sqlite database to the ** given callback function with the given argument. */ -SQLITE_API int sqlite3_busy_handler( +SQLITE_API int SQLITE_STDCALL sqlite3_busy_handler( sqlite3 *db, int (*xBusy)(void*,int), void *pArg ){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT; +#endif sqlite3_mutex_enter(db->mutex); db->busyHandler.xFunc = xBusy; db->busyHandler.pArg = pArg; db->busyHandler.nBusy = 0; + db->busyTimeout = 0; sqlite3_mutex_leave(db->mutex); return SQLITE_OK; } #ifndef SQLITE_OMIT_PROGRESS_CALLBACK @@ -98044,20 +134295,26 @@ /* ** This routine sets the progress callback for an Sqlite database to the ** given callback function with the given argument. The progress callback will ** be invoked every nOps opcodes. 
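**
** Illustrative registration (editorial sketch; checkCancel and cancelFlag
** are hypothetical application names):
**
**   static int checkCancel(void *pApp){
**     return *(volatile int*)pApp;  /* non-zero aborts the running statement */
**   }
**   ...
**   sqlite3_progress_handler(db, 1000, checkCancel, &cancelFlag);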
*/ -SQLITE_API void sqlite3_progress_handler( +SQLITE_API void SQLITE_STDCALL sqlite3_progress_handler( sqlite3 *db, int nOps, int (*xProgress)(void*), void *pArg ){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return; + } +#endif sqlite3_mutex_enter(db->mutex); if( nOps>0 ){ db->xProgress = xProgress; - db->nProgressOps = nOps; + db->nProgressOps = (unsigned)nOps; db->pProgressArg = pArg; }else{ db->xProgress = 0; db->nProgressOps = 0; db->pProgressArg = 0; @@ -98069,24 +134326,33 @@ /* ** This routine installs a default busy handler that waits for the ** specified number of milliseconds before returning 0. */ -SQLITE_API int sqlite3_busy_timeout(sqlite3 *db, int ms){ +SQLITE_API int SQLITE_STDCALL sqlite3_busy_timeout(sqlite3 *db, int ms){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT; +#endif if( ms>0 ){ - db->busyTimeout = ms; sqlite3_busy_handler(db, sqliteDefaultBusyCallback, (void*)db); + db->busyTimeout = ms; }else{ sqlite3_busy_handler(db, 0, 0); } return SQLITE_OK; } /* ** Cause any pending operation to stop at its earliest opportunity. */ -SQLITE_API void sqlite3_interrupt(sqlite3 *db){ +SQLITE_API void SQLITE_STDCALL sqlite3_interrupt(sqlite3 *db){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return; + } +#endif db->u1.isInterrupted = 1; } /* @@ -98099,26 +134365,32 @@ sqlite3 *db, const char *zFunctionName, int nArg, int enc, void *pUserData, - void (*xFunc)(sqlite3_context*,int,sqlite3_value **), + void (*xSFunc)(sqlite3_context*,int,sqlite3_value **), void (*xStep)(sqlite3_context*,int,sqlite3_value **), - void (*xFinal)(sqlite3_context*) + void (*xFinal)(sqlite3_context*), + FuncDestructor *pDestructor ){ FuncDef *p; int nName; + int extraFlags; assert( sqlite3_mutex_held(db->mutex) ); if( zFunctionName==0 || - (xFunc && (xFinal || xStep)) || - (!xFunc && (xFinal && !xStep)) || - (!xFunc && (!xFinal && xStep)) || + (xSFunc && (xFinal || xStep)) || + (!xSFunc && (xFinal && !xStep)) || + (!xSFunc && (!xFinal && xStep)) || (nArg<-1 || nArg>SQLITE_MAX_FUNCTION_ARG) || (255<(nName = sqlite3Strlen30( zFunctionName))) ){ return SQLITE_MISUSE_BKPT; } + + assert( SQLITE_FUNC_CONSTANT==SQLITE_DETERMINISTIC ); + extraFlags = enc & SQLITE_DETERMINISTIC; + enc &= (SQLITE_FUNC_ENCMASK|SQLITE_ANY); #ifndef SQLITE_OMIT_UTF16 /* If SQLITE_UTF16 is specified as the encoding type, transform this ** to one of SQLITE_UTF16LE or SQLITE_UTF16BE using the ** SQLITE_UTF16NATIVE macro. SQLITE_UTF16 is not used internally. @@ -98128,15 +134400,15 @@ */ if( enc==SQLITE_UTF16 ){ enc = SQLITE_UTF16NATIVE; }else if( enc==SQLITE_ANY ){ int rc; - rc = sqlite3CreateFunc(db, zFunctionName, nArg, SQLITE_UTF8, - pUserData, xFunc, xStep, xFinal); + rc = sqlite3CreateFunc(db, zFunctionName, nArg, SQLITE_UTF8|extraFlags, + pUserData, xSFunc, xStep, xFinal, pDestructor); if( rc==SQLITE_OK ){ - rc = sqlite3CreateFunc(db, zFunctionName, nArg, SQLITE_UTF16LE, - pUserData, xFunc, xStep, xFinal); + rc = sqlite3CreateFunc(db, zFunctionName, nArg, SQLITE_UTF16LE|extraFlags, + pUserData, xSFunc, xStep, xFinal, pDestructor); } if( rc!=SQLITE_OK ){ return rc; } enc = SQLITE_UTF16BE; @@ -98149,13 +134421,13 @@ ** and there are active VMs, then return SQLITE_BUSY. If a function ** is being overridden/deleted but there are no active VMs, allow the ** operation to continue but invalidate all precompiled statements. 
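**
** Illustrative registration through the public interface (editorial
** sketch; "half" and halfFunc are hypothetical application names):
**
**   sqlite3_create_function_v2(db, "half", 1,
**                              SQLITE_UTF8 | SQLITE_DETERMINISTIC,
**                              0, halfFunc, 0, 0, 0);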
*/ p = sqlite3FindFunction(db, zFunctionName, nName, nArg, (u8)enc, 0); - if( p && p->iPrefEnc==enc && p->nArg==nArg ){ - if( db->activeVdbeCnt ){ - sqlite3Error(db, SQLITE_BUSY, + if( p && (p->funcFlags & SQLITE_FUNC_ENCMASK)==enc && p->nArg==nArg ){ + if( db->nVdbeActive ){ + sqlite3ErrorWithMsg(db, SQLITE_BUSY, "unable to delete/modify user-function due to active statements"); assert( !db->mallocFailed ); return SQLITE_BUSY; }else{ sqlite3ExpirePreparedStatements(db); @@ -98165,57 +134437,108 @@ p = sqlite3FindFunction(db, zFunctionName, nName, nArg, (u8)enc, 1); assert(p || db->mallocFailed); if( !p ){ return SQLITE_NOMEM; } - p->flags = 0; - p->xFunc = xFunc; - p->xStep = xStep; + + /* If an older version of the function with a configured destructor is + ** being replaced invoke the destructor function here. */ + functionDestroy(db, p); + + if( pDestructor ){ + pDestructor->nRef++; + } + p->pDestructor = pDestructor; + p->funcFlags = (p->funcFlags & SQLITE_FUNC_ENCMASK) | extraFlags; + testcase( p->funcFlags & SQLITE_DETERMINISTIC ); + p->xSFunc = xSFunc ? xSFunc : xStep; p->xFinalize = xFinal; p->pUserData = pUserData; p->nArg = (u16)nArg; return SQLITE_OK; } /* ** Create new user functions. */ -SQLITE_API int sqlite3_create_function( +SQLITE_API int SQLITE_STDCALL sqlite3_create_function( sqlite3 *db, - const char *zFunctionName, + const char *zFunc, int nArg, int enc, void *p, - void (*xFunc)(sqlite3_context*,int,sqlite3_value **), + void (*xSFunc)(sqlite3_context*,int,sqlite3_value **), void (*xStep)(sqlite3_context*,int,sqlite3_value **), void (*xFinal)(sqlite3_context*) ){ - int rc; + return sqlite3_create_function_v2(db, zFunc, nArg, enc, p, xSFunc, xStep, + xFinal, 0); +} + +SQLITE_API int SQLITE_STDCALL sqlite3_create_function_v2( + sqlite3 *db, + const char *zFunc, + int nArg, + int enc, + void *p, + void (*xSFunc)(sqlite3_context*,int,sqlite3_value **), + void (*xStep)(sqlite3_context*,int,sqlite3_value **), + void (*xFinal)(sqlite3_context*), + void (*xDestroy)(void *) +){ + int rc = SQLITE_ERROR; + FuncDestructor *pArg = 0; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + return SQLITE_MISUSE_BKPT; + } +#endif sqlite3_mutex_enter(db->mutex); - rc = sqlite3CreateFunc(db, zFunctionName, nArg, enc, p, xFunc, xStep, xFinal); + if( xDestroy ){ + pArg = (FuncDestructor *)sqlite3DbMallocZero(db, sizeof(FuncDestructor)); + if( !pArg ){ + xDestroy(p); + goto out; + } + pArg->xDestroy = xDestroy; + pArg->pUserData = p; + } + rc = sqlite3CreateFunc(db, zFunc, nArg, enc, p, xSFunc, xStep, xFinal, pArg); + if( pArg && pArg->nRef==0 ){ + assert( rc!=SQLITE_OK ); + xDestroy(p); + sqlite3DbFree(db, pArg); + } + + out: rc = sqlite3ApiExit(db, rc); sqlite3_mutex_leave(db->mutex); return rc; } #ifndef SQLITE_OMIT_UTF16 -SQLITE_API int sqlite3_create_function16( +SQLITE_API int SQLITE_STDCALL sqlite3_create_function16( sqlite3 *db, const void *zFunctionName, int nArg, int eTextRep, void *p, - void (*xFunc)(sqlite3_context*,int,sqlite3_value**), + void (*xSFunc)(sqlite3_context*,int,sqlite3_value**), void (*xStep)(sqlite3_context*,int,sqlite3_value**), void (*xFinal)(sqlite3_context*) ){ int rc; char *zFunc8; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) || zFunctionName==0 ) return SQLITE_MISUSE_BKPT; +#endif sqlite3_mutex_enter(db->mutex); assert( !db->mallocFailed ); zFunc8 = sqlite3Utf16to8(db, zFunctionName, -1, SQLITE_UTF16NATIVE); - rc = sqlite3CreateFunc(db, zFunc8, nArg, eTextRep, p, xFunc, xStep, xFinal); + rc = sqlite3CreateFunc(db, zFunc8, 
nArg, eTextRep, p, xSFunc,xStep,xFinal,0); sqlite3DbFree(db, zFunc8); rc = sqlite3ApiExit(db, rc); sqlite3_mutex_leave(db->mutex); return rc; } @@ -98232,23 +134555,29 @@ ** When virtual tables intend to provide an overloaded function, they ** should call this routine to make sure the global function exists. ** A global function must exist in order for name resolution to work ** properly. */ -SQLITE_API int sqlite3_overload_function( +SQLITE_API int SQLITE_STDCALL sqlite3_overload_function( sqlite3 *db, const char *zName, int nArg ){ int nName = sqlite3Strlen30(zName); - int rc; + int rc = SQLITE_OK; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) || zName==0 || nArg<-2 ){ + return SQLITE_MISUSE_BKPT; + } +#endif sqlite3_mutex_enter(db->mutex); if( sqlite3FindFunction(db, zName, nName, nArg, SQLITE_UTF8, 0)==0 ){ - sqlite3CreateFunc(db, zName, nArg, SQLITE_UTF8, - 0, sqlite3InvalidFunction, 0, 0); + rc = sqlite3CreateFunc(db, zName, nArg, SQLITE_UTF8, + 0, sqlite3InvalidFunction, 0, 0, 0); } - rc = sqlite3ApiExit(db, SQLITE_OK); + rc = sqlite3ApiExit(db, rc); sqlite3_mutex_leave(db->mutex); return rc; } #ifndef SQLITE_OMIT_TRACE @@ -98258,12 +134587,19 @@ ** ** A NULL trace function means that no tracing is executes. A non-NULL ** trace is a pointer to a function that is invoked at the start of each ** SQL statement. */ -SQLITE_API void *sqlite3_trace(sqlite3 *db, void (*xTrace)(void*,const char*), void *pArg){ +SQLITE_API void *SQLITE_STDCALL sqlite3_trace(sqlite3 *db, void (*xTrace)(void*,const char*), void *pArg){ void *pOld; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif sqlite3_mutex_enter(db->mutex); pOld = db->pTraceArg; db->xTrace = xTrace; db->pTraceArg = pArg; sqlite3_mutex_leave(db->mutex); @@ -98275,37 +134611,50 @@ ** ** A NULL profile function means that no profiling is executes. A non-NULL ** profile is a pointer to a function that is invoked at the conclusion of ** each SQL statement that is run. */ -SQLITE_API void *sqlite3_profile( +SQLITE_API void *SQLITE_STDCALL sqlite3_profile( sqlite3 *db, void (*xProfile)(void*,const char*,sqlite_uint64), void *pArg ){ void *pOld; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif sqlite3_mutex_enter(db->mutex); pOld = db->pProfileArg; db->xProfile = xProfile; db->pProfileArg = pArg; sqlite3_mutex_leave(db->mutex); return pOld; } #endif /* SQLITE_OMIT_TRACE */ -/*** EXPERIMENTAL *** -** -** Register a function to be invoked when a transaction comments. +/* +** Register a function to be invoked when a transaction commits. ** If the invoked function returns non-zero, then the commit becomes a ** rollback. */ -SQLITE_API void *sqlite3_commit_hook( +SQLITE_API void *SQLITE_STDCALL sqlite3_commit_hook( sqlite3 *db, /* Attach the hook to this database */ int (*xCallback)(void*), /* Function to invoke on each commit */ void *pArg /* Argument to the function */ ){ void *pOld; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif sqlite3_mutex_enter(db->mutex); pOld = db->pCommitArg; db->xCommitCallback = xCallback; db->pCommitArg = pArg; sqlite3_mutex_leave(db->mutex); @@ -98314,16 +134663,23 @@ /* ** Register a callback to be invoked each time a row is updated, ** inserted or deleted using this database connection. 
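**
** Illustrative usage (editorial sketch; onRowChange is a hypothetical
** application callback):
**
**   static void onRowChange(void *p, int op, const char *zDb,
**                           const char *zTbl, sqlite3_int64 rowid){
**     /* op is SQLITE_INSERT, SQLITE_UPDATE or SQLITE_DELETE */
**   }
**   ...
**   sqlite3_update_hook(db, onRowChange, 0);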
*/ -SQLITE_API void *sqlite3_update_hook( +SQLITE_API void *SQLITE_STDCALL sqlite3_update_hook( sqlite3 *db, /* Attach the hook to this database */ void (*xCallback)(void*,int,char const *,char const *,sqlite_int64), void *pArg /* Argument to the function */ ){ void *pRet; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif sqlite3_mutex_enter(db->mutex); pRet = db->pUpdateArg; db->xUpdateCallback = xCallback; db->pUpdateArg = pArg; sqlite3_mutex_leave(db->mutex); @@ -98332,24 +134688,218 @@ /* ** Register a callback to be invoked each time a transaction is rolled ** back by this database connection. */ -SQLITE_API void *sqlite3_rollback_hook( +SQLITE_API void *SQLITE_STDCALL sqlite3_rollback_hook( sqlite3 *db, /* Attach the hook to this database */ void (*xCallback)(void*), /* Callback function */ void *pArg /* Argument to the function */ ){ void *pRet; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif sqlite3_mutex_enter(db->mutex); pRet = db->pRollbackArg; db->xRollbackCallback = xCallback; db->pRollbackArg = pArg; sqlite3_mutex_leave(db->mutex); return pRet; } +#ifndef SQLITE_OMIT_WAL +/* +** The sqlite3_wal_hook() callback registered by sqlite3_wal_autocheckpoint(). +** Invoke sqlite3_wal_checkpoint if the number of frames in the log file +** is greater than sqlite3.pWalArg cast to an integer (the value configured by +** wal_autocheckpoint()). +*/ +SQLITE_PRIVATE int sqlite3WalDefaultHook( + void *pClientData, /* Argument */ + sqlite3 *db, /* Connection */ + const char *zDb, /* Database */ + int nFrame /* Size of WAL */ +){ + if( nFrame>=SQLITE_PTR_TO_INT(pClientData) ){ + sqlite3BeginBenignMalloc(); + sqlite3_wal_checkpoint(db, zDb); + sqlite3EndBenignMalloc(); + } + return SQLITE_OK; +} +#endif /* SQLITE_OMIT_WAL */ + +/* +** Configure an sqlite3_wal_hook() callback to automatically checkpoint +** a database after committing a transaction if there are nFrame or +** more frames in the log file. Passing zero or a negative value as the +** nFrame parameter disables automatic checkpoints entirely. +** +** The callback registered by this function replaces any existing callback +** registered using sqlite3_wal_hook(). Likewise, registering a callback +** using sqlite3_wal_hook() disables the automatic checkpoint mechanism +** configured by this function. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_wal_autocheckpoint(sqlite3 *db, int nFrame){ +#ifdef SQLITE_OMIT_WAL + UNUSED_PARAMETER(db); + UNUSED_PARAMETER(nFrame); +#else +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT; +#endif + if( nFrame>0 ){ + sqlite3_wal_hook(db, sqlite3WalDefaultHook, SQLITE_INT_TO_PTR(nFrame)); + }else{ + sqlite3_wal_hook(db, 0, 0); + } +#endif + return SQLITE_OK; +} + +/* +** Register a callback to be invoked each time a transaction is written +** into the write-ahead-log by this database connection. 
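**
** Illustrative usage (editorial sketch, not part of the original source):
** most applications rely on the convenience wrapper above rather than
** registering a hook directly, e.g.
**
**   sqlite3_wal_autocheckpoint(db, 1000);  /* checkpoint after ~1000 frames */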
+*/ +SQLITE_API void *SQLITE_STDCALL sqlite3_wal_hook( + sqlite3 *db, /* Attach the hook to this db handle */ + int(*xCallback)(void *, sqlite3*, const char*, int), + void *pArg /* First argument passed to xCallback() */ +){ +#ifndef SQLITE_OMIT_WAL + void *pRet; +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif + sqlite3_mutex_enter(db->mutex); + pRet = db->pWalArg; + db->xWalCallback = xCallback; + db->pWalArg = pArg; + sqlite3_mutex_leave(db->mutex); + return pRet; +#else + return 0; +#endif +} + +/* +** Checkpoint database zDb. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_wal_checkpoint_v2( + sqlite3 *db, /* Database handle */ + const char *zDb, /* Name of attached database (or NULL) */ + int eMode, /* SQLITE_CHECKPOINT_* value */ + int *pnLog, /* OUT: Size of WAL log in frames */ + int *pnCkpt /* OUT: Total number of frames checkpointed */ +){ +#ifdef SQLITE_OMIT_WAL + return SQLITE_OK; +#else + int rc; /* Return code */ + int iDb = SQLITE_MAX_ATTACHED; /* sqlite3.aDb[] index of db to checkpoint */ + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT; +#endif + + /* Initialize the output variables to -1 in case an error occurs. */ + if( pnLog ) *pnLog = -1; + if( pnCkpt ) *pnCkpt = -1; + + assert( SQLITE_CHECKPOINT_PASSIVE==0 ); + assert( SQLITE_CHECKPOINT_FULL==1 ); + assert( SQLITE_CHECKPOINT_RESTART==2 ); + assert( SQLITE_CHECKPOINT_TRUNCATE==3 ); + if( eMode<SQLITE_CHECKPOINT_PASSIVE || eMode>SQLITE_CHECKPOINT_TRUNCATE ){ + /* EVIDENCE-OF: R-03996-12088 The M parameter must be a valid checkpoint + ** mode: */ + return SQLITE_MISUSE; + } + + sqlite3_mutex_enter(db->mutex); + if( zDb && zDb[0] ){ + iDb = sqlite3FindDbName(db, zDb); + } + if( iDb<0 ){ + rc = SQLITE_ERROR; + sqlite3ErrorWithMsg(db, SQLITE_ERROR, "unknown database: %s", zDb); + }else{ + db->busyHandler.nBusy = 0; + rc = sqlite3Checkpoint(db, iDb, eMode, pnLog, pnCkpt); + sqlite3Error(db, rc); + } + rc = sqlite3ApiExit(db, rc); + sqlite3_mutex_leave(db->mutex); + return rc; +#endif +} + + +/* +** Checkpoint database zDb. If zDb is NULL, or if the buffer zDb points +** to contains a zero-length string, all attached databases are +** checkpointed. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_wal_checkpoint(sqlite3 *db, const char *zDb){ + /* EVIDENCE-OF: R-41613-20553 The sqlite3_wal_checkpoint(D,X) is equivalent to + ** sqlite3_wal_checkpoint_v2(D,X,SQLITE_CHECKPOINT_PASSIVE,0,0). */ + return sqlite3_wal_checkpoint_v2(db,zDb,SQLITE_CHECKPOINT_PASSIVE,0,0); +} + +#ifndef SQLITE_OMIT_WAL +/* +** Run a checkpoint on database iDb. This is a no-op if database iDb is +** not currently open in WAL mode. +** +** If a transaction is open on the database being checkpointed, this +** function returns SQLITE_LOCKED and a checkpoint is not attempted. If +** an error occurs while running the checkpoint, an SQLite error code is +** returned (i.e. SQLITE_IOERR). Otherwise, SQLITE_OK. +** +** The mutex on database handle db should be held by the caller. The mutex +** associated with the specific b-tree being checkpointed is taken by +** this function while the checkpoint is running. +** +** If iDb is passed SQLITE_MAX_ATTACHED, then all attached databases are +** checkpointed. If an error is encountered it is returned immediately - +** no attempt is made to checkpoint any remaining databases. +** +** Parameter eMode is one of SQLITE_CHECKPOINT_PASSIVE, FULL or RESTART. 
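**
** Illustrative call through the public interface (editorial sketch):
**
**   int nLog = 0, nCkpt = 0;
**   int rc = sqlite3_wal_checkpoint_v2(db, "main",
**                                      SQLITE_CHECKPOINT_TRUNCATE,
**                                      &nLog, &nCkpt);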
+*/ +SQLITE_PRIVATE int sqlite3Checkpoint(sqlite3 *db, int iDb, int eMode, int *pnLog, int *pnCkpt){ + int rc = SQLITE_OK; /* Return code */ + int i; /* Used to iterate through attached dbs */ + int bBusy = 0; /* True if SQLITE_BUSY has been encountered */ + + assert( sqlite3_mutex_held(db->mutex) ); + assert( !pnLog || *pnLog==-1 ); + assert( !pnCkpt || *pnCkpt==-1 ); + + for(i=0; i<db->nDb && rc==SQLITE_OK; i++){ + if( i==iDb || iDb==SQLITE_MAX_ATTACHED ){ + rc = sqlite3BtreeCheckpoint(db->aDb[i].pBt, eMode, pnLog, pnCkpt); + pnLog = 0; + pnCkpt = 0; + if( rc==SQLITE_BUSY ){ + bBusy = 1; + rc = SQLITE_OK; + } + } + } + + return (rc==SQLITE_OK && bBusy) ? SQLITE_BUSY : rc; +} +#endif /* SQLITE_OMIT_WAL */ + /* ** This function returns true if main-memory should be used instead of ** a temporary file for transient pager files and statement journals. ** The value returned depends on the value of db->temp_store (runtime ** parameter) and the compile time value of SQLITE_TEMP_STORE. The @@ -98373,76 +134923,24 @@ #endif #if SQLITE_TEMP_STORE==2 return ( db->temp_store!=1 ); #endif #if SQLITE_TEMP_STORE==3 + UNUSED_PARAMETER(db); return 1; #endif #if SQLITE_TEMP_STORE<1 || SQLITE_TEMP_STORE>3 + UNUSED_PARAMETER(db); return 0; #endif } -/* -** This routine is called to create a connection to a database BTree -** driver. If zFilename is the name of a file, then that file is -** opened and used. If zFilename is the magic name ":memory:" then -** the database is stored in memory (and is thus forgotten as soon as -** the connection is closed.) If zFilename is NULL then the database -** is a "virtual" database for transient use only and is deleted as -** soon as the connection is closed. -** -** A virtual database can be either a disk file (that is automatically -** deleted when the file is closed) or it an be held entirely in memory. -** The sqlite3TempInMemory() function is used to determine which. -*/ -SQLITE_PRIVATE int sqlite3BtreeFactory( - sqlite3 *db, /* Main database when opening aux otherwise 0 */ - const char *zFilename, /* Name of the file containing the BTree database */ - int omitJournal, /* if TRUE then do not journal this file */ - int nCache, /* How many pages in the page cache */ - int vfsFlags, /* Flags passed through to vfsOpen */ - Btree **ppBtree /* Pointer to new Btree object written here */ -){ - int btFlags = 0; - int rc; - - assert( sqlite3_mutex_held(db->mutex) ); - assert( ppBtree != 0); - if( omitJournal ){ - btFlags |= BTREE_OMIT_JOURNAL; - } - if( db->flags & SQLITE_NoReadlock ){ - btFlags |= BTREE_NO_READLOCK; - } -#ifndef SQLITE_OMIT_MEMORYDB - if( zFilename==0 && sqlite3TempInMemory(db) ){ - zFilename = ":memory:"; - } -#endif - - if( (vfsFlags & SQLITE_OPEN_MAIN_DB)!=0 && (zFilename==0 || *zFilename==0) ){ - vfsFlags = (vfsFlags & ~SQLITE_OPEN_MAIN_DB) | SQLITE_OPEN_TEMP_DB; - } - rc = sqlite3BtreeOpen(zFilename, (sqlite3 *)db, ppBtree, btFlags, vfsFlags); - - /* If the B-Tree was successfully opened, set the pager-cache size to the - ** default value. Except, if the call to BtreeOpen() returned a handle - ** open on an existing shared pager-cache, do not change the pager-cache - ** size. - */ - if( rc==SQLITE_OK && 0==sqlite3BtreeSchema(*ppBtree, 0, 0) ){ - sqlite3BtreeSetCacheSize(*ppBtree, nCache); - } - return rc; -} - /* ** Return UTF-8 encoded English language explanation of the most recent ** error. 
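**
** Illustrative usage (editorial sketch; zSql is a hypothetical statement):
**
**   if( sqlite3_exec(db, zSql, 0, 0, 0)!=SQLITE_OK ){
**     const char *zMsg = sqlite3_errmsg(db);  /* owned by db; copy if needed */
**   }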
*/ -SQLITE_API const char *sqlite3_errmsg(sqlite3 *db){ +SQLITE_API const char *SQLITE_STDCALL sqlite3_errmsg(sqlite3 *db){ const char *z; if( !db ){ return sqlite3ErrStr(SQLITE_NOMEM); } if( !sqlite3SafetyCheckSickOrOk(db) ){ @@ -98450,10 +134948,11 @@ } sqlite3_mutex_enter(db->mutex); if( db->mallocFailed ){ z = sqlite3ErrStr(SQLITE_NOMEM); }else{ + testcase( db->pErr==0 ); z = (char*)sqlite3_value_text(db->pErr); assert( !db->mallocFailed ); if( z==0 ){ z = sqlite3ErrStr(db->errCode); } @@ -98465,11 +134964,11 @@ #ifndef SQLITE_OMIT_UTF16 /* ** Return UTF-16 encoded English language explanation of the most recent ** error. */ -SQLITE_API const void *sqlite3_errmsg16(sqlite3 *db){ +SQLITE_API const void *SQLITE_STDCALL sqlite3_errmsg16(sqlite3 *db){ static const u16 outOfMem[] = { 'o', 'u', 't', ' ', 'o', 'f', ' ', 'm', 'e', 'm', 'o', 'r', 'y', 0 }; static const u16 misuse[] = { 'l', 'i', 'b', 'r', 'a', 'r', 'y', ' ', @@ -98491,20 +134990,19 @@ if( db->mallocFailed ){ z = (void *)outOfMem; }else{ z = sqlite3_value_text16(db->pErr); if( z==0 ){ - sqlite3ValueSetStr(db->pErr, -1, sqlite3ErrStr(db->errCode), - SQLITE_UTF8, SQLITE_STATIC); + sqlite3ErrorWithMsg(db, db->errCode, sqlite3ErrStr(db->errCode)); z = sqlite3_value_text16(db->pErr); } /* A malloc() may have failed within the call to sqlite3_value_text16() ** above. If this is the case, then the db->mallocFailed flag needs to ** be cleared before returning. Do this directly, instead of via ** sqlite3ApiExit(), to avoid setting the database handle error message. */ - db->mallocFailed = 0; + sqlite3OomClear(db); } sqlite3_mutex_leave(db->mutex); return z; } #endif /* SQLITE_OMIT_UTF16 */ @@ -98511,45 +135009,52 @@ /* ** Return the most recent error code generated by an SQLite routine. If NULL is ** passed to this function, we assume a malloc() failed during sqlite3_open(). */ -SQLITE_API int sqlite3_errcode(sqlite3 *db){ +SQLITE_API int SQLITE_STDCALL sqlite3_errcode(sqlite3 *db){ if( db && !sqlite3SafetyCheckSickOrOk(db) ){ return SQLITE_MISUSE_BKPT; } if( !db || db->mallocFailed ){ return SQLITE_NOMEM; } return db->errCode & db->errMask; } -SQLITE_API int sqlite3_extended_errcode(sqlite3 *db){ +SQLITE_API int SQLITE_STDCALL sqlite3_extended_errcode(sqlite3 *db){ if( db && !sqlite3SafetyCheckSickOrOk(db) ){ return SQLITE_MISUSE_BKPT; } if( !db || db->mallocFailed ){ return SQLITE_NOMEM; } return db->errCode; } + +/* +** Return a string that describes the kind of error specified in the +** argument. For now, this simply calls the internal sqlite3ErrStr() +** function. +*/ +SQLITE_API const char *SQLITE_STDCALL sqlite3_errstr(int rc){ + return sqlite3ErrStr(rc); +} /* ** Create a new collating function for database "db". The name is zName ** and the encoding is enc. */ static int createCollation( sqlite3* db, const char *zName, u8 enc, - u8 collType, void* pCtx, int(*xCompare)(void*,int,const void*,int,const void*), void(*xDel)(void*) ){ CollSeq *pColl; int enc2; - int nName = sqlite3Strlen30(zName); assert( sqlite3_mutex_held(db->mutex) ); /* If SQLITE_UTF16 is specified as the encoding type, transform this ** to one of SQLITE_UTF16LE or SQLITE_UTF16BE using the @@ -98569,12 +135074,12 @@ ** sequence. If so, and there are active VMs, return busy. If there ** are no active VMs, invalidate any pre-compiled statements. 
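**
** Illustrative registration through the public interface (editorial
** sketch; "NUMERIC" and cmpNumeric are hypothetical application names):
**
**   sqlite3_create_collation_v2(db, "NUMERIC", SQLITE_UTF8,
**                               0, cmpNumeric, 0);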
*/ pColl = sqlite3FindCollSeq(db, (u8)enc2, zName, 0); if( pColl && pColl->xCmp ){ - if( db->activeVdbeCnt ){ - sqlite3Error(db, SQLITE_BUSY, + if( db->nVdbeActive ){ + sqlite3ErrorWithMsg(db, SQLITE_BUSY, "unable to delete/modify collation sequence due to active statements"); return SQLITE_BUSY; } sqlite3ExpirePreparedStatements(db); @@ -98583,11 +135088,11 @@ ** then any copies made by synthCollSeq() need to be invalidated. ** Also, collation destructor - CollSeq.xDel() - function may need ** to be called. */ if( (pColl->enc & ~SQLITE_UTF16_ALIGNED)==enc2 ){ - CollSeq *aColl = sqlite3HashFind(&db->aCollSeq, zName, nName); + CollSeq *aColl = sqlite3HashFind(&db->aCollSeq, zName); int j; for(j=0; j<3; j++){ CollSeq *p = &aColl[j]; if( p->enc==pColl->enc ){ if( p->xDel ){ @@ -98598,18 +135103,16 @@ } } } pColl = sqlite3FindCollSeq(db, (u8)enc2, zName, 1); - if( pColl ){ - pColl->xCmp = xCompare; - pColl->pUser = pCtx; - pColl->xDel = xDel; - pColl->enc = (u8)(enc2 | (enc & SQLITE_UTF16_ALIGNED)); - pColl->type = collType; - } - sqlite3Error(db, SQLITE_OK, 0); + if( pColl==0 ) return SQLITE_NOMEM; + pColl->xCmp = xCompare; + pColl->pUser = pCtx; + pColl->xDel = xDel; + pColl->enc = (u8)(enc2 | (enc & SQLITE_UTF16_ALIGNED)); + sqlite3Error(db, SQLITE_OK); return SQLITE_OK; } /* @@ -98625,12 +135128,13 @@ SQLITE_MAX_COMPOUND_SELECT, SQLITE_MAX_VDBE_OP, SQLITE_MAX_FUNCTION_ARG, SQLITE_MAX_ATTACHED, SQLITE_MAX_LIKE_PATTERN_LENGTH, - SQLITE_MAX_VARIABLE_NUMBER, + SQLITE_MAX_VARIABLE_NUMBER, /* IMP: R-38091-32352 */ SQLITE_MAX_TRIGGER_DEPTH, + SQLITE_MAX_WORKER_THREADS, }; /* ** Make sure the hard limits are set to reasonable values */ @@ -98650,21 +135154,24 @@ # error SQLITE_MAX_VDBE_OP must be at least 40 #endif #if SQLITE_MAX_FUNCTION_ARG<0 || SQLITE_MAX_FUNCTION_ARG>1000 # error SQLITE_MAX_FUNCTION_ARG must be between 0 and 1000 #endif -#if SQLITE_MAX_ATTACHED<0 || SQLITE_MAX_ATTACHED>30 -# error SQLITE_MAX_ATTACHED must be between 0 and 30 +#if SQLITE_MAX_ATTACHED<0 || SQLITE_MAX_ATTACHED>125 +# error SQLITE_MAX_ATTACHED must be between 0 and 125 #endif #if SQLITE_MAX_LIKE_PATTERN_LENGTH<1 # error SQLITE_MAX_LIKE_PATTERN_LENGTH must be at least 1 #endif #if SQLITE_MAX_COLUMN>32767 # error SQLITE_MAX_COLUMN must not exceed 32767 #endif #if SQLITE_MAX_TRIGGER_DEPTH<1 # error SQLITE_MAX_TRIGGER_DEPTH must be at least 1 +#endif +#if SQLITE_MAX_WORKER_THREADS<0 || SQLITE_MAX_WORKER_THREADS>50 +# error SQLITE_MAX_WORKER_THREADS must be between 0 and 50 #endif /* ** Change the value of a limit. Report the old value. @@ -98674,45 +135181,345 @@ ** ** A new lower limit does not shrink existing constructs. ** It merely prevents new constructs that exceed the limit ** from forming. */ -SQLITE_API int sqlite3_limit(sqlite3 *db, int limitId, int newLimit){ +SQLITE_API int SQLITE_STDCALL sqlite3_limit(sqlite3 *db, int limitId, int newLimit){ int oldLimit; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return -1; + } +#endif + + /* EVIDENCE-OF: R-30189-54097 For each limit category SQLITE_LIMIT_NAME + ** there is a hard upper bound set at compile-time by a C preprocessor + ** macro called SQLITE_MAX_NAME. (The "_LIMIT_" in the name is changed to + ** "_MAX_".) 
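**
** Illustrative usage (editorial sketch, not part of the original source):
** lowering a limit before running untrusted SQL and restoring it later:
**
**   int prev = sqlite3_limit(db, SQLITE_LIMIT_ATTACHED, 0);  /* forbid ATTACH */
**   ...
**   sqlite3_limit(db, SQLITE_LIMIT_ATTACHED, prev);          /* restore */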
+ */ + assert( aHardLimit[SQLITE_LIMIT_LENGTH]==SQLITE_MAX_LENGTH ); + assert( aHardLimit[SQLITE_LIMIT_SQL_LENGTH]==SQLITE_MAX_SQL_LENGTH ); + assert( aHardLimit[SQLITE_LIMIT_COLUMN]==SQLITE_MAX_COLUMN ); + assert( aHardLimit[SQLITE_LIMIT_EXPR_DEPTH]==SQLITE_MAX_EXPR_DEPTH ); + assert( aHardLimit[SQLITE_LIMIT_COMPOUND_SELECT]==SQLITE_MAX_COMPOUND_SELECT); + assert( aHardLimit[SQLITE_LIMIT_VDBE_OP]==SQLITE_MAX_VDBE_OP ); + assert( aHardLimit[SQLITE_LIMIT_FUNCTION_ARG]==SQLITE_MAX_FUNCTION_ARG ); + assert( aHardLimit[SQLITE_LIMIT_ATTACHED]==SQLITE_MAX_ATTACHED ); + assert( aHardLimit[SQLITE_LIMIT_LIKE_PATTERN_LENGTH]== + SQLITE_MAX_LIKE_PATTERN_LENGTH ); + assert( aHardLimit[SQLITE_LIMIT_VARIABLE_NUMBER]==SQLITE_MAX_VARIABLE_NUMBER); + assert( aHardLimit[SQLITE_LIMIT_TRIGGER_DEPTH]==SQLITE_MAX_TRIGGER_DEPTH ); + assert( aHardLimit[SQLITE_LIMIT_WORKER_THREADS]==SQLITE_MAX_WORKER_THREADS ); + assert( SQLITE_LIMIT_WORKER_THREADS==(SQLITE_N_LIMIT-1) ); + + if( limitId<0 || limitId>=SQLITE_N_LIMIT ){ return -1; } oldLimit = db->aLimit[limitId]; - if( newLimit>=0 ){ + if( newLimit>=0 ){ /* IMP: R-52476-28732 */ if( newLimit>aHardLimit[limitId] ){ - newLimit = aHardLimit[limitId]; + newLimit = aHardLimit[limitId]; /* IMP: R-51463-25634 */ } db->aLimit[limitId] = newLimit; } - return oldLimit; + return oldLimit; /* IMP: R-53341-35419 */ } + +/* +** This function is used to parse both URIs and non-URI filenames passed by the +** user to API functions sqlite3_open() or sqlite3_open_v2(), and for database +** URIs specified as part of ATTACH statements. +** +** The first argument to this function is the name of the VFS to use (or +** a NULL to signify the default VFS) if the URI does not contain a "vfs=xxx" +** query parameter. The second argument contains the URI (or non-URI filename) +** itself. When this function is called the *pFlags variable should contain +** the default flags to open the database handle with. The value stored in +** *pFlags may be updated before returning if the URI filename contains +** "cache=xxx" or "mode=xxx" query parameters. +** +** If successful, SQLITE_OK is returned. In this case *ppVfs is set to point to +** the VFS that should be used to open the database file. *pzFile is set to +** point to a buffer containing the name of the file to open. It is the +** responsibility of the caller to eventually call sqlite3_free() to release +** this buffer. +** +** If an error occurs, then an SQLite error code is returned and *pzErrMsg +** may be set to point to a buffer containing an English language error +** message. It is the responsibility of the caller to eventually release +** this buffer by calling sqlite3_free(). 
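**
** Illustrative caller-side usage (editorial sketch, not part of the
** original source): the syntax parsed here is what applications hand to
** sqlite3_open_v2() with the SQLITE_OPEN_URI flag set, e.g.
**
**   sqlite3 *db = 0;
**   int rc = sqlite3_open_v2("file:data.db?mode=ro&cache=private", &db,
**                            SQLITE_OPEN_READONLY | SQLITE_OPEN_URI, 0);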
+*/ +SQLITE_PRIVATE int sqlite3ParseUri( + const char *zDefaultVfs, /* VFS to use if no "vfs=xxx" query option */ + const char *zUri, /* Nul-terminated URI to parse */ + unsigned int *pFlags, /* IN/OUT: SQLITE_OPEN_XXX flags */ + sqlite3_vfs **ppVfs, /* OUT: VFS to use */ + char **pzFile, /* OUT: Filename component of URI */ + char **pzErrMsg /* OUT: Error message (if rc!=SQLITE_OK) */ +){ + int rc = SQLITE_OK; + unsigned int flags = *pFlags; + const char *zVfs = zDefaultVfs; + char *zFile; + char c; + int nUri = sqlite3Strlen30(zUri); + + assert( *pzErrMsg==0 ); + + if( ((flags & SQLITE_OPEN_URI) /* IMP: R-48725-32206 */ + || sqlite3GlobalConfig.bOpenUri) /* IMP: R-51689-46548 */ + && nUri>=5 && memcmp(zUri, "file:", 5)==0 /* IMP: R-57884-37496 */ + ){ + char *zOpt; + int eState; /* Parser state when parsing URI */ + int iIn; /* Input character index */ + int iOut = 0; /* Output character index */ + u64 nByte = nUri+2; /* Bytes of space to allocate */ + + /* Make sure the SQLITE_OPEN_URI flag is set to indicate to the VFS xOpen + ** method that there may be extra parameters following the file-name. */ + flags |= SQLITE_OPEN_URI; + + for(iIn=0; iIn<nUri; iIn++) nByte += (zUri[iIn]=='&'); + zFile = sqlite3_malloc64(nByte); + if( !zFile ) return SQLITE_NOMEM; + + iIn = 5; +#ifdef SQLITE_ALLOW_URI_AUTHORITY + if( strncmp(zUri+5, "///", 3)==0 ){ + iIn = 7; + /* The following condition causes URIs with five leading / characters + ** like file://///host/path to be converted into UNCs like //host/path. + ** The correct URI for that UNC has only two or four leading / characters + ** file://host/path or file:////host/path. But 5 leading slashes is a + ** common error, we are told, so we handle it as a special case. */ + if( strncmp(zUri+7, "///", 3)==0 ){ iIn++; } + }else if( strncmp(zUri+5, "//localhost/", 12)==0 ){ + iIn = 16; + } +#else + /* Discard the scheme and authority segments of the URI. */ + if( zUri[5]=='/' && zUri[6]=='/' ){ + iIn = 7; + while( zUri[iIn] && zUri[iIn]!='/' ) iIn++; + if( iIn!=7 && (iIn!=16 || memcmp("localhost", &zUri[7], 9)) ){ + *pzErrMsg = sqlite3_mprintf("invalid uri authority: %.*s", + iIn-7, &zUri[7]); + rc = SQLITE_ERROR; + goto parse_uri_out; + } + } +#endif + + /* Copy the filename and any query parameters into the zFile buffer. + ** Decode %HH escape codes along the way. + ** + ** Within this loop, variable eState may be set to 0, 1 or 2, depending + ** on the parsing context. As follows: + ** + ** 0: Parsing file-name. + ** 1: Parsing name section of a name=value query parameter. + ** 2: Parsing value section of a name=value query parameter. + */ + eState = 0; + while( (c = zUri[iIn])!=0 && c!='#' ){ + iIn++; + if( c=='%' + && sqlite3Isxdigit(zUri[iIn]) + && sqlite3Isxdigit(zUri[iIn+1]) + ){ + int octet = (sqlite3HexToInt(zUri[iIn++]) << 4); + octet += sqlite3HexToInt(zUri[iIn++]); + + assert( octet>=0 && octet<256 ); + if( octet==0 ){ + /* This branch is taken when "%00" appears within the URI. In this + ** case we ignore all text in the remainder of the path, name or + ** value currently being parsed. So ignore the current character + ** and skip to the next "?", "=" or "&", as appropriate. */ + while( (c = zUri[iIn])!=0 && c!='#' + && (eState!=0 || c!='?') + && (eState!=1 || (c!='=' && c!='&')) + && (eState!=2 || c!='&') + ){ + iIn++; + } + continue; + } + c = octet; + }else if( eState==1 && (c=='&' || c=='=') ){ + if( zFile[iOut-1]==0 ){ + /* An empty option name. Ignore this option altogether. 
*/ + while( zUri[iIn] && zUri[iIn]!='#' && zUri[iIn-1]!='&' ) iIn++; + continue; + } + if( c=='&' ){ + zFile[iOut++] = '\0'; + }else{ + eState = 2; + } + c = 0; + }else if( (eState==0 && c=='?') || (eState==2 && c=='&') ){ + c = 0; + eState = 1; + } + zFile[iOut++] = c; + } + if( eState==1 ) zFile[iOut++] = '\0'; + zFile[iOut++] = '\0'; + zFile[iOut++] = '\0'; + + /* Check if there were any options specified that should be interpreted + ** here. Options that are interpreted here include "vfs" and those that + ** correspond to flags that may be passed to the sqlite3_open_v2() + ** method. */ + zOpt = &zFile[sqlite3Strlen30(zFile)+1]; + while( zOpt[0] ){ + int nOpt = sqlite3Strlen30(zOpt); + char *zVal = &zOpt[nOpt+1]; + int nVal = sqlite3Strlen30(zVal); + + if( nOpt==3 && memcmp("vfs", zOpt, 3)==0 ){ + zVfs = zVal; + }else{ + struct OpenMode { + const char *z; + int mode; + } *aMode = 0; + char *zModeType = 0; + int mask = 0; + int limit = 0; + + if( nOpt==5 && memcmp("cache", zOpt, 5)==0 ){ + static struct OpenMode aCacheMode[] = { + { "shared", SQLITE_OPEN_SHAREDCACHE }, + { "private", SQLITE_OPEN_PRIVATECACHE }, + { 0, 0 } + }; + + mask = SQLITE_OPEN_SHAREDCACHE|SQLITE_OPEN_PRIVATECACHE; + aMode = aCacheMode; + limit = mask; + zModeType = "cache"; + } + if( nOpt==4 && memcmp("mode", zOpt, 4)==0 ){ + static struct OpenMode aOpenMode[] = { + { "ro", SQLITE_OPEN_READONLY }, + { "rw", SQLITE_OPEN_READWRITE }, + { "rwc", SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE }, + { "memory", SQLITE_OPEN_MEMORY }, + { 0, 0 } + }; + + mask = SQLITE_OPEN_READONLY | SQLITE_OPEN_READWRITE + | SQLITE_OPEN_CREATE | SQLITE_OPEN_MEMORY; + aMode = aOpenMode; + limit = mask & flags; + zModeType = "access"; + } + + if( aMode ){ + int i; + int mode = 0; + for(i=0; aMode[i].z; i++){ + const char *z = aMode[i].z; + if( nVal==sqlite3Strlen30(z) && 0==memcmp(zVal, z, nVal) ){ + mode = aMode[i].mode; + break; + } + } + if( mode==0 ){ + *pzErrMsg = sqlite3_mprintf("no such %s mode: %s", zModeType, zVal); + rc = SQLITE_ERROR; + goto parse_uri_out; + } + if( (mode & ~SQLITE_OPEN_MEMORY)>limit ){ + *pzErrMsg = sqlite3_mprintf("%s mode not allowed: %s", + zModeType, zVal); + rc = SQLITE_PERM; + goto parse_uri_out; + } + flags = (flags & ~mask) | mode; + } + } + + zOpt = &zVal[nVal+1]; + } + + }else{ + zFile = sqlite3_malloc64(nUri+2); + if( !zFile ) return SQLITE_NOMEM; + memcpy(zFile, zUri, nUri); + zFile[nUri] = '\0'; + zFile[nUri+1] = '\0'; + flags &= ~SQLITE_OPEN_URI; + } + + *ppVfs = sqlite3_vfs_find(zVfs); + if( *ppVfs==0 ){ + *pzErrMsg = sqlite3_mprintf("no such vfs: %s", zVfs); + rc = SQLITE_ERROR; + } + parse_uri_out: + if( rc!=SQLITE_OK ){ + sqlite3_free(zFile); + zFile = 0; + } + *pFlags = flags; + *pzFile = zFile; + return rc; +} + /* ** This routine does the work of opening a database on behalf of ** sqlite3_open() and sqlite3_open16(). The database filename "zFilename" ** is UTF-8 encoded. 
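**
** The same URI handling implemented by sqlite3ParseUri() above also applies
** to filenames given to ATTACH. As a rough sketch (the attached filename is
** a made-up example and assumes URI handling is enabled):
**
**   rc = sqlite3_exec(db, "ATTACH 'file:aux.db?cache=shared' AS aux",
**                     0, 0, 0);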
*/ static int openDatabase( const char *zFilename, /* Database filename UTF-8 encoded */ sqlite3 **ppDb, /* OUT: Returned database handle */ - unsigned flags, /* Operational flags */ + unsigned int flags, /* Operational flags */ const char *zVfs /* Name of the VFS to use */ ){ - sqlite3 *db; - int rc; - int isThreadsafe; + sqlite3 *db; /* Store allocated handle here */ + int rc; /* Return code */ + int isThreadsafe; /* True for threadsafe connections */ + char *zOpen = 0; /* Filename argument to pass to BtreeOpen() */ + char *zErrMsg = 0; /* Error message from sqlite3ParseUri() */ +#ifdef SQLITE_ENABLE_API_ARMOR + if( ppDb==0 ) return SQLITE_MISUSE_BKPT; +#endif *ppDb = 0; #ifndef SQLITE_OMIT_AUTOINIT rc = sqlite3_initialize(); if( rc ) return rc; #endif + + /* Only allow sensible combinations of bits in the flags argument. + ** Throw an error if any non-sense combination is used. If we + ** do not block illegal combinations here, it could trigger + ** assert() statements in deeper layers. Sensible combinations + ** are: + ** + ** 1: SQLITE_OPEN_READONLY + ** 2: SQLITE_OPEN_READWRITE + ** 6: SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE + */ + assert( SQLITE_OPEN_READONLY == 0x01 ); + assert( SQLITE_OPEN_READWRITE == 0x02 ); + assert( SQLITE_OPEN_CREATE == 0x04 ); + testcase( (1<<(flags&7))==0x02 ); /* READONLY */ + testcase( (1<<(flags&7))==0x04 ); /* READWRITE */ + testcase( (1<<(flags&7))==0x40 ); /* READWRITE | CREATE */ + if( ((1<<(flags&7)) & 0x46)==0 ){ + return SQLITE_MISUSE_BKPT; /* IMP: R-65497-44594 */ + } if( sqlite3GlobalConfig.bCoreMutex==0 ){ isThreadsafe = 0; }else if( flags & SQLITE_OPEN_NOMUTEX ){ isThreadsafe = 0; @@ -98730,11 +135537,12 @@ /* Remove harmful bits from the flags parameter ** ** The SQLITE_OPEN_NOMUTEX and SQLITE_OPEN_FULLMUTEX flags were ** dealt with in the previous code block. Besides these, the only ** valid input flags for sqlite3_open_v2() are SQLITE_OPEN_READONLY, - ** SQLITE_OPEN_READWRITE, and SQLITE_OPEN_CREATE. Silently mask + ** SQLITE_OPEN_READWRITE, SQLITE_OPEN_CREATE, SQLITE_OPEN_SHAREDCACHE, + ** SQLITE_OPEN_PRIVATECACHE, and some reserved bits. Silently mask ** off all other flags. 
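+  **
+  ** Worked example of the legality test above: SQLITE_OPEN_READWRITE has the
+  ** low three bits equal to 2, so (1<<2)==0x04 and 0x04 & 0x46 is non-zero,
+  ** and the call is accepted. An illegal combination such as
+  ** SQLITE_OPEN_READONLY|SQLITE_OPEN_READWRITE gives low bits 3, (1<<3)==0x08,
+  ** and 0x08 & 0x46 is zero, so SQLITE_MISUSE is returned.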
*/ flags &= ~( SQLITE_OPEN_DELETEONCLOSE | SQLITE_OPEN_EXCLUSIVE | SQLITE_OPEN_MAIN_DB | @@ -98743,11 +135551,12 @@ SQLITE_OPEN_MAIN_JOURNAL | SQLITE_OPEN_TEMP_JOURNAL | SQLITE_OPEN_SUBJOURNAL | SQLITE_OPEN_MASTER_JOURNAL | SQLITE_OPEN_NOMUTEX | - SQLITE_OPEN_FULLMUTEX + SQLITE_OPEN_FULLMUTEX | + SQLITE_OPEN_WAL ); /* Allocate the sqlite data structure */ db = sqlite3MallocZero( sizeof(sqlite3) ); if( db==0 ) goto opendb_out; @@ -98765,73 +135574,93 @@ db->magic = SQLITE_MAGIC_BUSY; db->aDb = db->aDbStatic; assert( sizeof(db->aLimit)==sizeof(aHardLimit) ); memcpy(db->aLimit, aHardLimit, sizeof(db->aLimit)); + db->aLimit[SQLITE_LIMIT_WORKER_THREADS] = SQLITE_DEFAULT_WORKER_THREADS; db->autoCommit = 1; db->nextAutovac = -1; + db->szMmap = sqlite3GlobalConfig.szMmap; db->nextPagesize = 0; - db->flags |= SQLITE_ShortColNames | SQLITE_AutoIndex + db->nMaxSorterMmap = 0x7FFFFFFF; + db->flags |= SQLITE_ShortColNames | SQLITE_EnableTrigger | SQLITE_CacheSpill +#if !defined(SQLITE_DEFAULT_AUTOMATIC_INDEX) || SQLITE_DEFAULT_AUTOMATIC_INDEX + | SQLITE_AutoIndex +#endif +#if SQLITE_DEFAULT_CKPTFULLFSYNC + | SQLITE_CkptFullFSync +#endif #if SQLITE_DEFAULT_FILE_FORMAT<4 | SQLITE_LegacyFileFmt #endif #ifdef SQLITE_ENABLE_LOAD_EXTENSION | SQLITE_LoadExtension #endif #if SQLITE_DEFAULT_RECURSIVE_TRIGGERS | SQLITE_RecTriggers #endif +#if defined(SQLITE_DEFAULT_FOREIGN_KEYS) && SQLITE_DEFAULT_FOREIGN_KEYS + | SQLITE_ForeignKeys +#endif +#if defined(SQLITE_REVERSE_UNORDERED_SELECTS) + | SQLITE_ReverseOrder +#endif +#if defined(SQLITE_ENABLE_OVERSIZE_CELL_CHECK) + | SQLITE_CellSizeCk +#endif ; sqlite3HashInit(&db->aCollSeq); #ifndef SQLITE_OMIT_VIRTUALTABLE sqlite3HashInit(&db->aModule); #endif - db->pVfs = sqlite3_vfs_find(zVfs); - if( !db->pVfs ){ - rc = SQLITE_ERROR; - sqlite3Error(db, rc, "no such vfs: %s", zVfs); - goto opendb_out; - } - /* Add the default collation sequence BINARY. BINARY works for both UTF-8 ** and UTF-16, so add a version for each to avoid any unnecessary ** conversions. The only error that can occur here is a malloc() failure. + ** + ** EVIDENCE-OF: R-52786-44878 SQLite defines three built-in collating + ** functions: */ - createCollation(db, "BINARY", SQLITE_UTF8, SQLITE_COLL_BINARY, 0, - binCollFunc, 0); - createCollation(db, "BINARY", SQLITE_UTF16BE, SQLITE_COLL_BINARY, 0, - binCollFunc, 0); - createCollation(db, "BINARY", SQLITE_UTF16LE, SQLITE_COLL_BINARY, 0, - binCollFunc, 0); - createCollation(db, "RTRIM", SQLITE_UTF8, SQLITE_COLL_USER, (void*)1, - binCollFunc, 0); + createCollation(db, sqlite3StrBINARY, SQLITE_UTF8, 0, binCollFunc, 0); + createCollation(db, sqlite3StrBINARY, SQLITE_UTF16BE, 0, binCollFunc, 0); + createCollation(db, sqlite3StrBINARY, SQLITE_UTF16LE, 0, binCollFunc, 0); + createCollation(db, "NOCASE", SQLITE_UTF8, 0, nocaseCollatingFunc, 0); + createCollation(db, "RTRIM", SQLITE_UTF8, (void*)1, binCollFunc, 0); if( db->mallocFailed ){ goto opendb_out; } - db->pDfltColl = sqlite3FindCollSeq(db, SQLITE_UTF8, "BINARY", 0); + /* EVIDENCE-OF: R-08308-17224 The default collating function for all + ** strings is BINARY. + */ + db->pDfltColl = sqlite3FindCollSeq(db, SQLITE_UTF8, sqlite3StrBINARY, 0); assert( db->pDfltColl!=0 ); - /* Also add a UTF-8 case-insensitive collation sequence. */ - createCollation(db, "NOCASE", SQLITE_UTF8, SQLITE_COLL_NOCASE, 0, - nocaseCollatingFunc, 0); + /* Parse the filename/URI argument. 
*/ + db->openFlags = flags; + rc = sqlite3ParseUri(zVfs, zFilename, &flags, &db->pVfs, &zOpen, &zErrMsg); + if( rc!=SQLITE_OK ){ + if( rc==SQLITE_NOMEM ) sqlite3OomFault(db); + sqlite3ErrorWithMsg(db, rc, zErrMsg ? "%s" : 0, zErrMsg); + sqlite3_free(zErrMsg); + goto opendb_out; + } /* Open the backend database driver */ - db->openFlags = flags; - rc = sqlite3BtreeFactory(db, zFilename, 0, SQLITE_DEFAULT_CACHE_SIZE, - flags | SQLITE_OPEN_MAIN_DB, - &db->aDb[0].pBt); + rc = sqlite3BtreeOpen(db->pVfs, zOpen, db, &db->aDb[0].pBt, 0, + flags | SQLITE_OPEN_MAIN_DB); if( rc!=SQLITE_OK ){ if( rc==SQLITE_IOERR_NOMEM ){ rc = SQLITE_NOMEM; } - sqlite3Error(db, rc, 0); + sqlite3Error(db, rc); goto opendb_out; } + sqlite3BtreeEnter(db->aDb[0].pBt); db->aDb[0].pSchema = sqlite3SchemaGet(db, db->aDb[0].pBt); + if( !db->mallocFailed ) ENC(db) = SCHEMA_ENC(db); + sqlite3BtreeLeave(db->aDb[0].pBt); db->aDb[1].pSchema = sqlite3SchemaGet(db, 0); - /* The default safety_level for the main database is 'full'; for the temp ** database it is 'NONE'. This matches the pager layer defaults. */ db->aDb[0].zName = "main"; @@ -98846,20 +135675,23 @@ /* Register all built-in functions, but do not attempt to read the ** database schema yet. This is delayed until the first time the database ** is accessed. */ - sqlite3Error(db, SQLITE_OK, 0); + sqlite3Error(db, SQLITE_OK); sqlite3RegisterBuiltinFunctions(db); /* Load automatic extensions - extensions that have been registered ** using the sqlite3_automatic_extension() API. */ - sqlite3AutoLoadExtensions(db); rc = sqlite3_errcode(db); - if( rc!=SQLITE_OK ){ - goto opendb_out; + if( rc==SQLITE_OK ){ + sqlite3AutoLoadExtensions(db); + rc = sqlite3_errcode(db); + if( rc!=SQLITE_OK ){ + goto opendb_out; + } } #ifdef SQLITE_ENABLE_FTS1 if( !db->mallocFailed ){ extern int sqlite3Fts1Init(sqlite3*); @@ -98872,15 +135704,21 @@ extern int sqlite3Fts2Init(sqlite3*); rc = sqlite3Fts2Init(db); } #endif -#ifdef SQLITE_ENABLE_FTS3 +#ifdef SQLITE_ENABLE_FTS3 /* automatically defined by SQLITE_ENABLE_FTS4 */ if( !db->mallocFailed && rc==SQLITE_OK ){ rc = sqlite3Fts3Init(db); } #endif + +#ifdef SQLITE_ENABLE_FTS5 + if( !db->mallocFailed && rc==SQLITE_OK ){ + rc = sqlite3Fts5Init(db); + } +#endif #ifdef SQLITE_ENABLE_ICU if( !db->mallocFailed && rc==SQLITE_OK ){ rc = sqlite3IcuInit(db); } @@ -98890,11 +135728,21 @@ if( !db->mallocFailed && rc==SQLITE_OK){ rc = sqlite3RtreeInit(db); } #endif - sqlite3Error(db, rc, 0); +#ifdef SQLITE_ENABLE_DBSTAT_VTAB + if( !db->mallocFailed && rc==SQLITE_OK){ + rc = sqlite3DbstatRegister(db); + } +#endif + +#ifdef SQLITE_ENABLE_JSON1 + if( !db->mallocFailed && rc==SQLITE_OK){ + rc = sqlite3Json1Init(db); + } +#endif /* -DSQLITE_DEFAULT_LOCKING_MODE=1 makes EXCLUSIVE the default locking ** mode. -DSQLITE_DEFAULT_LOCKING_MODE=0 make NORMAL the default locking ** mode. Doing nothing at all also makes NORMAL the default. 
*/ @@ -98901,145 +135749,178 @@ #ifdef SQLITE_DEFAULT_LOCKING_MODE db->dfltLockMode = SQLITE_DEFAULT_LOCKING_MODE; sqlite3PagerLockingMode(sqlite3BtreePager(db->aDb[0].pBt), SQLITE_DEFAULT_LOCKING_MODE); #endif + + if( rc ) sqlite3Error(db, rc); /* Enable the lookaside-malloc subsystem */ setupLookaside(db, 0, sqlite3GlobalConfig.szLookaside, sqlite3GlobalConfig.nLookaside); + sqlite3_wal_autocheckpoint(db, SQLITE_DEFAULT_WAL_AUTOCHECKPOINT); + opendb_out: if( db ){ - assert( db->mutex!=0 || isThreadsafe==0 || sqlite3GlobalConfig.bFullMutex==0 ); + assert( db->mutex!=0 || isThreadsafe==0 + || sqlite3GlobalConfig.bFullMutex==0 ); sqlite3_mutex_leave(db->mutex); } rc = sqlite3_errcode(db); + assert( db!=0 || rc==SQLITE_NOMEM ); if( rc==SQLITE_NOMEM ){ sqlite3_close(db); db = 0; }else if( rc!=SQLITE_OK ){ db->magic = SQLITE_MAGIC_SICK; } *ppDb = db; - return sqlite3ApiExit(0, rc); +#ifdef SQLITE_ENABLE_SQLLOG + if( sqlite3GlobalConfig.xSqllog ){ + /* Opening a db handle. Fourth parameter is passed 0. */ + void *pArg = sqlite3GlobalConfig.pSqllogArg; + sqlite3GlobalConfig.xSqllog(pArg, db, zFilename, 0); + } +#endif +#if defined(SQLITE_HAS_CODEC) + if( rc==SQLITE_OK ){ + const char *zHexKey = sqlite3_uri_parameter(zOpen, "hexkey"); + if( zHexKey && zHexKey[0] ){ + u8 iByte; + int i; + char zKey[40]; + for(i=0, iByte=0; i<sizeof(zKey)*2 && sqlite3Isxdigit(zHexKey[i]); i++){ + iByte = (iByte<<4) + sqlite3HexToInt(zHexKey[i]); + if( (i&1)!=0 ) zKey[i/2] = iByte; + } + sqlite3_key_v2(db, 0, zKey, i/2); + } + } +#endif + sqlite3_free(zOpen); + return rc & 0xff; } /* ** Open a new database handle. */ -SQLITE_API int sqlite3_open( +SQLITE_API int SQLITE_STDCALL sqlite3_open( const char *zFilename, sqlite3 **ppDb ){ return openDatabase(zFilename, ppDb, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, 0); } -SQLITE_API int sqlite3_open_v2( +SQLITE_API int SQLITE_STDCALL sqlite3_open_v2( const char *filename, /* Database filename (UTF-8) */ sqlite3 **ppDb, /* OUT: SQLite db handle */ int flags, /* Flags */ const char *zVfs /* Name of VFS module to use */ ){ - return openDatabase(filename, ppDb, flags, zVfs); + return openDatabase(filename, ppDb, (unsigned int)flags, zVfs); } #ifndef SQLITE_OMIT_UTF16 /* ** Open a new database handle. */ -SQLITE_API int sqlite3_open16( +SQLITE_API int SQLITE_STDCALL sqlite3_open16( const void *zFilename, sqlite3 **ppDb ){ char const *zFilename8; /* zFilename encoded in UTF-8 instead of UTF-16 */ sqlite3_value *pVal; int rc; - assert( zFilename ); - assert( ppDb ); +#ifdef SQLITE_ENABLE_API_ARMOR + if( ppDb==0 ) return SQLITE_MISUSE_BKPT; +#endif *ppDb = 0; #ifndef SQLITE_OMIT_AUTOINIT rc = sqlite3_initialize(); if( rc ) return rc; #endif + if( zFilename==0 ) zFilename = "\000\000"; pVal = sqlite3ValueNew(0); sqlite3ValueSetStr(pVal, -1, zFilename, SQLITE_UTF16NATIVE, SQLITE_STATIC); zFilename8 = sqlite3ValueText(pVal, SQLITE_UTF8); if( zFilename8 ){ rc = openDatabase(zFilename8, ppDb, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, 0); assert( *ppDb || rc==SQLITE_NOMEM ); if( rc==SQLITE_OK && !DbHasProperty(*ppDb, 0, DB_SchemaLoaded) ){ - ENC(*ppDb) = SQLITE_UTF16NATIVE; + SCHEMA_ENC(*ppDb) = ENC(*ppDb) = SQLITE_UTF16NATIVE; } }else{ rc = SQLITE_NOMEM; } sqlite3ValueFree(pVal); - return sqlite3ApiExit(0, rc); + return rc & 0xff; } #endif /* SQLITE_OMIT_UTF16 */ /* ** Register a new collation sequence with the database handle db. 
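**
** As a rough usage sketch (the collation name "REVERSE" and the comparison
** function are invented for the example; it simply inverts BINARY order):
**
**   static int revCmp(void *p, int n1, const void *z1, int n2, const void *z2){
**     int n = n1<n2 ? n1 : n2;
**     int c = memcmp(z1, z2, n);
**     if( c==0 ) c = n1 - n2;
**     return -c;
**   }
**
**   rc = sqlite3_create_collation_v2(db, "REVERSE", SQLITE_UTF8, 0, revCmp, 0);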
*/ -SQLITE_API int sqlite3_create_collation( +SQLITE_API int SQLITE_STDCALL sqlite3_create_collation( sqlite3* db, const char *zName, int enc, void* pCtx, int(*xCompare)(void*,int,const void*,int,const void*) ){ - int rc; - sqlite3_mutex_enter(db->mutex); - assert( !db->mallocFailed ); - rc = createCollation(db, zName, (u8)enc, SQLITE_COLL_USER, pCtx, xCompare, 0); - rc = sqlite3ApiExit(db, rc); - sqlite3_mutex_leave(db->mutex); - return rc; + return sqlite3_create_collation_v2(db, zName, enc, pCtx, xCompare, 0); } /* ** Register a new collation sequence with the database handle db. */ -SQLITE_API int sqlite3_create_collation_v2( +SQLITE_API int SQLITE_STDCALL sqlite3_create_collation_v2( sqlite3* db, const char *zName, int enc, void* pCtx, int(*xCompare)(void*,int,const void*,int,const void*), void(*xDel)(void*) ){ int rc; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) || zName==0 ) return SQLITE_MISUSE_BKPT; +#endif sqlite3_mutex_enter(db->mutex); assert( !db->mallocFailed ); - rc = createCollation(db, zName, (u8)enc, SQLITE_COLL_USER, pCtx, xCompare, xDel); + rc = createCollation(db, zName, (u8)enc, pCtx, xCompare, xDel); rc = sqlite3ApiExit(db, rc); sqlite3_mutex_leave(db->mutex); return rc; } #ifndef SQLITE_OMIT_UTF16 /* ** Register a new collation sequence with the database handle db. */ -SQLITE_API int sqlite3_create_collation16( +SQLITE_API int SQLITE_STDCALL sqlite3_create_collation16( sqlite3* db, const void *zName, int enc, void* pCtx, int(*xCompare)(void*,int,const void*,int,const void*) ){ int rc = SQLITE_OK; char *zName8; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) || zName==0 ) return SQLITE_MISUSE_BKPT; +#endif sqlite3_mutex_enter(db->mutex); assert( !db->mallocFailed ); zName8 = sqlite3Utf16to8(db, zName, -1, SQLITE_UTF16NATIVE); if( zName8 ){ - rc = createCollation(db, zName8, (u8)enc, SQLITE_COLL_USER, pCtx, xCompare, 0); + rc = createCollation(db, zName8, (u8)enc, pCtx, xCompare, 0); sqlite3DbFree(db, zName8); } rc = sqlite3ApiExit(db, rc); sqlite3_mutex_leave(db->mutex); return rc; @@ -99048,15 +135929,18 @@ /* ** Register a collation sequence factory callback with the database handle ** db. Replace any previously installed collation sequence factory. */ -SQLITE_API int sqlite3_collation_needed( +SQLITE_API int SQLITE_STDCALL sqlite3_collation_needed( sqlite3 *db, void *pCollNeededArg, void(*xCollNeeded)(void*,sqlite3*,int eTextRep,const char*) ){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT; +#endif sqlite3_mutex_enter(db->mutex); db->xCollNeeded = xCollNeeded; db->xCollNeeded16 = 0; db->pCollNeededArg = pCollNeededArg; sqlite3_mutex_leave(db->mutex); @@ -99066,52 +135950,57 @@ #ifndef SQLITE_OMIT_UTF16 /* ** Register a collation sequence factory callback with the database handle ** db. Replace any previously installed collation sequence factory. */ -SQLITE_API int sqlite3_collation_needed16( +SQLITE_API int SQLITE_STDCALL sqlite3_collation_needed16( sqlite3 *db, void *pCollNeededArg, void(*xCollNeeded16)(void*,sqlite3*,int eTextRep,const void*) ){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT; +#endif sqlite3_mutex_enter(db->mutex); db->xCollNeeded = 0; db->xCollNeeded16 = xCollNeeded16; db->pCollNeededArg = pCollNeededArg; sqlite3_mutex_leave(db->mutex); return SQLITE_OK; } #endif /* SQLITE_OMIT_UTF16 */ -#ifndef SQLITE_OMIT_GLOBALRECOVER #ifndef SQLITE_OMIT_DEPRECATED /* ** This function is now an anachronism. 
It used to be used to recover from a ** malloc() failure, but SQLite now does this automatically. */ -SQLITE_API int sqlite3_global_recover(void){ +SQLITE_API int SQLITE_STDCALL sqlite3_global_recover(void){ return SQLITE_OK; } -#endif #endif /* ** Test to see whether or not the database connection is in autocommit ** mode. Return TRUE if it is and FALSE if not. Autocommit mode is on ** by default. Autocommit is disabled by a BEGIN statement and reenabled ** by the next COMMIT or ROLLBACK. -** -******* THIS IS AN EXPERIMENTAL API AND IS SUBJECT TO CHANGE ****** */ -SQLITE_API int sqlite3_get_autocommit(sqlite3 *db){ +SQLITE_API int SQLITE_STDCALL sqlite3_get_autocommit(sqlite3 *db){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif return db->autoCommit; } /* -** The following routines are subtitutes for constants SQLITE_CORRUPT, +** The following routines are substitutes for constants SQLITE_CORRUPT, ** SQLITE_MISUSE, SQLITE_CANTOPEN, SQLITE_IOERR and possibly other error -** constants. They server two purposes: +** constants. They serve two purposes: ** ** 1. Serve as a convenient place to set a breakpoint in a debugger ** to detect when version error conditions occurs. ** ** 2. Invoke sqlite3_log() to provide the source code location where @@ -99118,21 +136007,26 @@ ** a low-level error is first detected. */ SQLITE_PRIVATE int sqlite3CorruptError(int lineno){ testcase( sqlite3GlobalConfig.xLog!=0 ); sqlite3_log(SQLITE_CORRUPT, - "database corruption found by source line %d", lineno); + "database corruption at line %d of [%.10s]", + lineno, 20+sqlite3_sourceid()); return SQLITE_CORRUPT; } SQLITE_PRIVATE int sqlite3MisuseError(int lineno){ testcase( sqlite3GlobalConfig.xLog!=0 ); - sqlite3_log(SQLITE_MISUSE, "misuse detected by source line %d", lineno); + sqlite3_log(SQLITE_MISUSE, + "misuse at line %d of [%.10s]", + lineno, 20+sqlite3_sourceid()); return SQLITE_MISUSE; } SQLITE_PRIVATE int sqlite3CantopenError(int lineno){ testcase( sqlite3GlobalConfig.xLog!=0 ); - sqlite3_log(SQLITE_CANTOPEN, "cannot open file at source line %d", lineno); + sqlite3_log(SQLITE_CANTOPEN, + "cannot open file at line %d of [%.10s]", + lineno, 20+sqlite3_sourceid()); return SQLITE_CANTOPEN; } #ifndef SQLITE_OMIT_DEPRECATED @@ -99141,20 +136035,19 @@ ** data for this thread has been deallocated. ** ** SQLite no longer uses thread-specific data so this routine is now a ** no-op. It is retained for historical compatibility. */ -SQLITE_API void sqlite3_thread_cleanup(void){ +SQLITE_API void SQLITE_STDCALL sqlite3_thread_cleanup(void){ } #endif /* ** Return meta information about a specific column of a database table. ** See comment in sqlite3.h (sqlite.h.in) for details. 
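**
** As a rough usage sketch (the table and column names are hypothetical):
**
**   const char *zType, *zColl;
**   int notnull, pk, autoinc;
**   rc = sqlite3_table_column_metadata(db, "main", "t1", "a",
**            &zType, &zColl, &notnull, &pk, &autoinc);
**
** Passing a NULL column name merely checks that table "t1" exists.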
*/ -#ifdef SQLITE_ENABLE_COLUMN_METADATA -SQLITE_API int sqlite3_table_column_metadata( +SQLITE_API int SQLITE_STDCALL sqlite3_table_column_metadata( sqlite3 *db, /* Connection handle */ const char *zDbName, /* Database name or NULL */ const char *zTableName, /* Table name */ const char *zColumnName, /* Column name */ char const **pzDataType, /* OUTPUT: Declared data type */ @@ -99165,18 +136058,24 @@ ){ int rc; char *zErrMsg = 0; Table *pTab = 0; Column *pCol = 0; - int iCol; - + int iCol = 0; char const *zDataType = 0; char const *zCollSeq = 0; int notnull = 0; int primarykey = 0; int autoinc = 0; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) || zTableName==0 ){ + return SQLITE_MISUSE_BKPT; + } +#endif + /* Ensure the database schema has been loaded */ sqlite3_mutex_enter(db->mutex); sqlite3BtreeEnterAll(db); rc = sqlite3Init(db, &zErrMsg); if( SQLITE_OK!=rc ){ @@ -99189,25 +136088,27 @@ pTab = 0; goto error_out; } /* Find the column for which info is requested */ - if( sqlite3IsRowid(zColumnName) ){ - iCol = pTab->iPKey; - if( iCol>=0 ){ - pCol = &pTab->aCol[iCol]; - } + if( zColumnName==0 ){ + /* Query for existance of table only */ }else{ for(iCol=0; iCol<pTab->nCol; iCol++){ pCol = &pTab->aCol[iCol]; if( 0==sqlite3StrICmp(pCol->zName, zColumnName) ){ break; } } if( iCol==pTab->nCol ){ - pTab = 0; - goto error_out; + if( HasRowid(pTab) && sqlite3IsRowid(zColumnName) ){ + iCol = pTab->iPKey; + pCol = iCol>=0 ? &pTab->aCol[iCol] : 0; + }else{ + pTab = 0; + goto error_out; + } } } /* The following block stores the meta information that will be returned ** to the caller in local variables zDataType, zCollSeq, notnull, primarykey @@ -99221,18 +136122,18 @@ */ if( pCol ){ zDataType = pCol->zType; zCollSeq = pCol->zColl; notnull = pCol->notNull!=0; - primarykey = pCol->isPrimKey!=0; + primarykey = (pCol->colFlags & COLFLAG_PRIMKEY)!=0; autoinc = pTab->iPKey==iCol && (pTab->tabFlags & TF_Autoincrement)!=0; }else{ zDataType = "INTEGER"; primarykey = 1; } if( !zCollSeq ){ - zCollSeq = "BINARY"; + zCollSeq = sqlite3StrBINARY; } error_out: sqlite3BtreeLeaveAll(db); @@ -99250,22 +136151,21 @@ sqlite3DbFree(db, zErrMsg); zErrMsg = sqlite3MPrintf(db, "no such table column: %s.%s", zTableName, zColumnName); rc = SQLITE_ERROR; } - sqlite3Error(db, rc, (zErrMsg?"%s":0), zErrMsg); + sqlite3ErrorWithMsg(db, rc, (zErrMsg?"%s":0), zErrMsg); sqlite3DbFree(db, zErrMsg); rc = sqlite3ApiExit(db, rc); sqlite3_mutex_leave(db->mutex); return rc; } -#endif /* ** Sleep for a little while. Return the amount of time slept. */ -SQLITE_API int sqlite3_sleep(int ms){ +SQLITE_API int SQLITE_STDCALL sqlite3_sleep(int ms){ sqlite3_vfs *pVfs; int rc; pVfs = sqlite3_vfs_find(0); if( pVfs==0 ) return 0; @@ -99277,57 +136177,68 @@ } /* ** Enable or disable the extended result codes. */ -SQLITE_API int sqlite3_extended_result_codes(sqlite3 *db, int onoff){ +SQLITE_API int SQLITE_STDCALL sqlite3_extended_result_codes(sqlite3 *db, int onoff){ +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT; +#endif sqlite3_mutex_enter(db->mutex); db->errMask = onoff ? 0xffffffff : 0xff; sqlite3_mutex_leave(db->mutex); return SQLITE_OK; } /* ** Invoke the xFileControl method on a particular database. 
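**
** As a rough usage sketch, using the SQLITE_FCNTL_FILE_POINTER opcode that
** is handled directly in the function body below ("main" names the main
** database; error handling is omitted):
**
**   sqlite3_file *pFd = 0;
**   rc = sqlite3_file_control(db, "main", SQLITE_FCNTL_FILE_POINTER, &pFd);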
*/ -SQLITE_API int sqlite3_file_control(sqlite3 *db, const char *zDbName, int op, void *pArg){ +SQLITE_API int SQLITE_STDCALL sqlite3_file_control(sqlite3 *db, const char *zDbName, int op, void *pArg){ int rc = SQLITE_ERROR; - int iDb; + Btree *pBtree; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT; +#endif sqlite3_mutex_enter(db->mutex); - if( zDbName==0 ){ - iDb = 0; - }else{ - for(iDb=0; iDb<db->nDb; iDb++){ - if( strcmp(db->aDb[iDb].zName, zDbName)==0 ) break; - } - } - if( iDb<db->nDb ){ - Btree *pBtree = db->aDb[iDb].pBt; - if( pBtree ){ - Pager *pPager; - sqlite3_file *fd; - sqlite3BtreeEnter(pBtree); - pPager = sqlite3BtreePager(pBtree); - assert( pPager!=0 ); - fd = sqlite3PagerFile(pPager); - assert( fd!=0 ); - if( fd->pMethods ){ - rc = sqlite3OsFileControl(fd, op, pArg); - } - sqlite3BtreeLeave(pBtree); - } + pBtree = sqlite3DbNameToBtree(db, zDbName); + if( pBtree ){ + Pager *pPager; + sqlite3_file *fd; + sqlite3BtreeEnter(pBtree); + pPager = sqlite3BtreePager(pBtree); + assert( pPager!=0 ); + fd = sqlite3PagerFile(pPager); + assert( fd!=0 ); + if( op==SQLITE_FCNTL_FILE_POINTER ){ + *(sqlite3_file**)pArg = fd; + rc = SQLITE_OK; + }else if( op==SQLITE_FCNTL_VFS_POINTER ){ + *(sqlite3_vfs**)pArg = sqlite3PagerVfs(pPager); + rc = SQLITE_OK; + }else if( op==SQLITE_FCNTL_JOURNAL_POINTER ){ + *(sqlite3_file**)pArg = sqlite3PagerJrnlFile(pPager); + rc = SQLITE_OK; + }else if( fd->pMethods ){ + rc = sqlite3OsFileControl(fd, op, pArg); + }else{ + rc = SQLITE_NOTFOUND; + } + sqlite3BtreeLeave(pBtree); } sqlite3_mutex_leave(db->mutex); - return rc; + return rc; } /* ** Interface to the testing logic. */ -SQLITE_API int sqlite3_test_control(int op, ...){ +SQLITE_API int SQLITE_CDECL sqlite3_test_control(int op, ...){ int rc = 0; -#ifndef SQLITE_OMIT_BUILTIN_TEST +#ifdef SQLITE_OMIT_BUILTIN_TEST + UNUSED_PARAMETER(op); +#else va_list ap; va_start(ap, op); switch( op ){ /* @@ -99352,11 +136263,11 @@ ** Reset the PRNG back to its uninitialized state. The next call ** to sqlite3_randomness() will reseed the PRNG using a single call ** to the xRandomness method of the default VFS. */ case SQLITE_TESTCTRL_PRNG_RESET: { - sqlite3PrngResetState(); + sqlite3_randomness(0,0); break; } /* ** sqlite3_test_control(BITVEC_TEST, size, program) @@ -99370,10 +136281,32 @@ int sz = va_arg(ap, int); int *aProg = va_arg(ap, int*); rc = sqlite3BitvecBuiltinTest(sz, aProg); break; } + + /* + ** sqlite3_test_control(FAULT_INSTALL, xCallback) + ** + ** Arrange to invoke xCallback() whenever sqlite3FaultSim() is called, + ** if xCallback is not NULL. + ** + ** As a test of the fault simulator mechanism itself, sqlite3FaultSim(0) + ** is called immediately after installing the new callback and the return + ** value from sqlite3FaultSim(0) becomes the return from + ** sqlite3_test_control(). + */ + case SQLITE_TESTCTRL_FAULT_INSTALL: { + /* MSVC is picky about pulling func ptrs from va lists. + ** http://support.microsoft.com/kb/47961 + ** sqlite3GlobalConfig.xTestCallback = va_arg(ap, int(*)(int)); + */ + typedef int(*TESTCALLBACKFUNC_t)(int); + sqlite3GlobalConfig.xTestCallback = va_arg(ap, TESTCALLBACKFUNC_t); + rc = sqlite3FaultSim(0); + break; + } /* ** sqlite3_test_control(BENIGN_MALLOC_HOOKS, xBegin, xEnd) ** ** Register hooks to call to indicate which malloc() failures @@ -99397,16 +136330,20 @@ ** as it existing before this routine was called. ** ** IMPORTANT: Changing the PENDING byte from 0x40000000 results in ** an incompatible database file format. 
Changing the PENDING byte ** while any database connection is open results in undefined and - ** dileterious behavior. + ** deleterious behavior. */ case SQLITE_TESTCTRL_PENDING_BYTE: { - unsigned int newVal = va_arg(ap, unsigned int); - rc = sqlite3PendingByte; - if( newVal ) sqlite3PendingByte = newVal; + rc = PENDING_BYTE; +#ifndef SQLITE_OMIT_WSD + { + unsigned int newVal = va_arg(ap, unsigned int); + if( newVal ) sqlite3PendingByte = newVal; + } +#endif break; } /* ** sqlite3_test_control(SQLITE_TESTCTRL_ASSERT, int X) @@ -99419,11 +136356,11 @@ ** process aborts. If X is false and assert() is disabled, then the ** return value is zero. */ case SQLITE_TESTCTRL_ASSERT: { volatile int x = 0; - assert( (x = va_arg(ap,int))!=0 ); + assert( /*side-effects-ok*/ (x = va_arg(ap,int))!=0 ); rc = x; break; } @@ -99457,10 +136394,26 @@ case SQLITE_TESTCTRL_ALWAYS: { int x = va_arg(ap,int); rc = ALWAYS(x); break; } + + /* + ** sqlite3_test_control(SQLITE_TESTCTRL_BYTEORDER); + ** + ** The integer returned reveals the byte-order of the computer on which + ** SQLite is running: + ** + ** 1 big-endian, determined at run-time + ** 10 little-endian, determined at run-time + ** 432101 big-endian, determined at compile-time + ** 123410 little-endian, determined at compile-time + */ + case SQLITE_TESTCTRL_BYTEORDER: { + rc = SQLITE_BYTEORDER*100 + SQLITE_LITTLEENDIAN*10 + SQLITE_BIGENDIAN; + break; + } /* sqlite3_test_control(SQLITE_TESTCTRL_RESERVE, sqlite3 *db, int N) ** ** Set the nReserve size to N for the main database on the database ** connection db. @@ -99483,12 +136436,11 @@ ** with various optimizations disabled to verify that the same answer ** is obtained in every case. */ case SQLITE_TESTCTRL_OPTIMIZATIONS: { sqlite3 *db = va_arg(ap, sqlite3*); - int x = va_arg(ap,int); - db->flags = (x & SQLITE_OptMask) | (db->flags & ~SQLITE_OptMask); + db->dbOptFlags = (u16)(va_arg(ap, int) & 0xffff); break; } #ifdef SQLITE_N_KEYWORD /* sqlite3_test_control(SQLITE_TESTCTRL_ISKEYWORD, const char *zWord) @@ -99506,16 +136458,293 @@ rc = (sqlite3KeywordCode((u8*)zWord, n)!=TK_ID) ? SQLITE_N_KEYWORD : 0; break; } #endif + /* sqlite3_test_control(SQLITE_TESTCTRL_SCRATCHMALLOC, sz, &pNew, pFree); + ** + ** Pass pFree into sqlite3ScratchFree(). + ** If sz>0 then allocate a scratch buffer into pNew. + */ + case SQLITE_TESTCTRL_SCRATCHMALLOC: { + void *pFree, **ppNew; + int sz; + sz = va_arg(ap, int); + ppNew = va_arg(ap, void**); + pFree = va_arg(ap, void*); + if( sz ) *ppNew = sqlite3ScratchMalloc(sz); + sqlite3ScratchFree(pFree); + break; + } + + /* sqlite3_test_control(SQLITE_TESTCTRL_LOCALTIME_FAULT, int onoff); + ** + ** If parameter onoff is non-zero, configure the wrappers so that all + ** subsequent calls to localtime() and variants fail. If onoff is zero, + ** undo this setting. + */ + case SQLITE_TESTCTRL_LOCALTIME_FAULT: { + sqlite3GlobalConfig.bLocaltimeFault = va_arg(ap, int); + break; + } + + /* sqlite3_test_control(SQLITE_TESTCTRL_NEVER_CORRUPT, int); + ** + ** Set or clear a flag that indicates that the database file is always well- + ** formed and never corrupt. This flag is clear by default, indicating that + ** database files might have arbitrary corruption. Setting the flag during + ** testing causes certain assert() statements in the code to be activated + ** that demonstrat invariants on well-formed database files. 
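+  **
+  ** As a rough sketch, a test harness might bracket a run over databases it
+  ** knows to be well-formed with:
+  **
+  **   sqlite3_test_control(SQLITE_TESTCTRL_NEVER_CORRUPT, 1);
+  **   ...
+  **   sqlite3_test_control(SQLITE_TESTCTRL_NEVER_CORRUPT, 0);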
+ */ + case SQLITE_TESTCTRL_NEVER_CORRUPT: { + sqlite3GlobalConfig.neverCorrupt = va_arg(ap, int); + break; + } + + + /* sqlite3_test_control(SQLITE_TESTCTRL_VDBE_COVERAGE, xCallback, ptr); + ** + ** Set the VDBE coverage callback function to xCallback with context + ** pointer ptr. + */ + case SQLITE_TESTCTRL_VDBE_COVERAGE: { +#ifdef SQLITE_VDBE_COVERAGE + typedef void (*branch_callback)(void*,int,u8,u8); + sqlite3GlobalConfig.xVdbeBranch = va_arg(ap,branch_callback); + sqlite3GlobalConfig.pVdbeBranchArg = va_arg(ap,void*); +#endif + break; + } + + /* sqlite3_test_control(SQLITE_TESTCTRL_SORTER_MMAP, db, nMax); */ + case SQLITE_TESTCTRL_SORTER_MMAP: { + sqlite3 *db = va_arg(ap, sqlite3*); + db->nMaxSorterMmap = va_arg(ap, int); + break; + } + + /* sqlite3_test_control(SQLITE_TESTCTRL_ISINIT); + ** + ** Return SQLITE_OK if SQLite has been initialized and SQLITE_ERROR if + ** not. + */ + case SQLITE_TESTCTRL_ISINIT: { + if( sqlite3GlobalConfig.isInit==0 ) rc = SQLITE_ERROR; + break; + } + + /* sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, db, dbName, onOff, tnum); + ** + ** This test control is used to create imposter tables. "db" is a pointer + ** to the database connection. dbName is the database name (ex: "main" or + ** "temp") which will receive the imposter. "onOff" turns imposter mode on + ** or off. "tnum" is the root page of the b-tree to which the imposter + ** table should connect. + ** + ** Enable imposter mode only when the schema has already been parsed. Then + ** run a single CREATE TABLE statement to construct the imposter table in + ** the parsed schema. Then turn imposter mode back off again. + ** + ** If onOff==0 and tnum>0 then reset the schema for all databases, causing + ** the schema to be reparsed the next time it is needed. This has the + ** effect of erasing all imposter tables. + */ + case SQLITE_TESTCTRL_IMPOSTER: { + sqlite3 *db = va_arg(ap, sqlite3*); + sqlite3_mutex_enter(db->mutex); + db->init.iDb = sqlite3FindDbName(db, va_arg(ap,const char*)); + db->init.busy = db->init.imposterTable = va_arg(ap,int); + db->init.newTnum = va_arg(ap,int); + if( db->init.busy==0 && db->init.newTnum>0 ){ + sqlite3ResetAllSchemasOfConnection(db); + } + sqlite3_mutex_leave(db->mutex); + break; + } } va_end(ap); #endif /* SQLITE_OMIT_BUILTIN_TEST */ return rc; } +/* +** This is a utility routine, useful to VFS implementations, that checks +** to see if a database file was a URI that contained a specific query +** parameter, and if so obtains the value of the query parameter. +** +** The zFilename argument is the filename pointer passed into the xOpen() +** method of a VFS implementation. The zParam argument is the name of the +** query parameter we seek. This routine returns the value of the zParam +** parameter if it exists. If the parameter does not exist, this routine +** returns a NULL pointer. +*/ +SQLITE_API const char *SQLITE_STDCALL sqlite3_uri_parameter(const char *zFilename, const char *zParam){ + if( zFilename==0 || zParam==0 ) return 0; + zFilename += sqlite3Strlen30(zFilename) + 1; + while( zFilename[0] ){ + int x = strcmp(zFilename, zParam); + zFilename += sqlite3Strlen30(zFilename) + 1; + if( x==0 ) return zFilename; + zFilename += sqlite3Strlen30(zFilename) + 1; + } + return 0; +} + +/* +** Return a boolean value for a query parameter. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_uri_boolean(const char *zFilename, const char *zParam, int bDflt){ + const char *z = sqlite3_uri_parameter(zFilename, zParam); + bDflt = bDflt!=0; + return z ? 
sqlite3GetBoolean(z, bDflt) : bDflt; +} + +/* +** Return a 64-bit integer value for a query parameter. +*/ +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_uri_int64( + const char *zFilename, /* Filename as passed to xOpen */ + const char *zParam, /* URI parameter sought */ + sqlite3_int64 bDflt /* return if parameter is missing */ +){ + const char *z = sqlite3_uri_parameter(zFilename, zParam); + sqlite3_int64 v; + if( z && sqlite3DecOrHexToI64(z, &v)==SQLITE_OK ){ + bDflt = v; + } + return bDflt; +} + +/* +** Return the Btree pointer identified by zDbName. Return NULL if not found. +*/ +SQLITE_PRIVATE Btree *sqlite3DbNameToBtree(sqlite3 *db, const char *zDbName){ + int i; + for(i=0; i<db->nDb; i++){ + if( db->aDb[i].pBt + && (zDbName==0 || sqlite3StrICmp(zDbName, db->aDb[i].zName)==0) + ){ + return db->aDb[i].pBt; + } + } + return 0; +} + +/* +** Return the filename of the database associated with a database +** connection. +*/ +SQLITE_API const char *SQLITE_STDCALL sqlite3_db_filename(sqlite3 *db, const char *zDbName){ + Btree *pBt; +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return 0; + } +#endif + pBt = sqlite3DbNameToBtree(db, zDbName); + return pBt ? sqlite3BtreeGetFilename(pBt) : 0; +} + +/* +** Return 1 if database is read-only or 0 if read/write. Return -1 if +** no such database exists. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_db_readonly(sqlite3 *db, const char *zDbName){ + Btree *pBt; +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + (void)SQLITE_MISUSE_BKPT; + return -1; + } +#endif + pBt = sqlite3DbNameToBtree(db, zDbName); + return pBt ? sqlite3BtreeIsReadonly(pBt) : -1; +} + +#ifdef SQLITE_ENABLE_SNAPSHOT +/* +** Obtain a snapshot handle for the snapshot of database zDb currently +** being read by handle db. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_snapshot_get( + sqlite3 *db, + const char *zDb, + sqlite3_snapshot **ppSnapshot +){ + int rc = SQLITE_ERROR; +#ifndef SQLITE_OMIT_WAL + int iDb; + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + return SQLITE_MISUSE_BKPT; + } +#endif + sqlite3_mutex_enter(db->mutex); + + iDb = sqlite3FindDbName(db, zDb); + if( iDb==0 || iDb>1 ){ + Btree *pBt = db->aDb[iDb].pBt; + if( 0==sqlite3BtreeIsInTrans(pBt) ){ + rc = sqlite3BtreeBeginTrans(pBt, 0); + if( rc==SQLITE_OK ){ + rc = sqlite3PagerSnapshotGet(sqlite3BtreePager(pBt), ppSnapshot); + } + } + } + + sqlite3_mutex_leave(db->mutex); +#endif /* SQLITE_OMIT_WAL */ + return rc; +} + +/* +** Open a read-transaction on the snapshot idendified by pSnapshot. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_snapshot_open( + sqlite3 *db, + const char *zDb, + sqlite3_snapshot *pSnapshot +){ + int rc = SQLITE_ERROR; +#ifndef SQLITE_OMIT_WAL + +#ifdef SQLITE_ENABLE_API_ARMOR + if( !sqlite3SafetyCheckOk(db) ){ + return SQLITE_MISUSE_BKPT; + } +#endif + sqlite3_mutex_enter(db->mutex); + if( db->autoCommit==0 ){ + int iDb; + iDb = sqlite3FindDbName(db, zDb); + if( iDb==0 || iDb>1 ){ + Btree *pBt = db->aDb[iDb].pBt; + if( 0==sqlite3BtreeIsInReadTrans(pBt) ){ + rc = sqlite3PagerSnapshotOpen(sqlite3BtreePager(pBt), pSnapshot); + if( rc==SQLITE_OK ){ + rc = sqlite3BtreeBeginTrans(pBt, 0); + sqlite3PagerSnapshotOpen(sqlite3BtreePager(pBt), 0); + } + } + } + } + + sqlite3_mutex_leave(db->mutex); +#endif /* SQLITE_OMIT_WAL */ + return rc; +} + +/* +** Free a snapshot handle obtained from sqlite3_snapshot_get(). 
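+**
+** Rough lifecycle sketch (hypothetical; assumes two connections to the same
+** WAL-mode database and SQLITE_ENABLE_SNAPSHOT, with error handling omitted):
+**
+**   sqlite3_snapshot *pSnap = 0;
+**   sqlite3_snapshot_get(db1, "main", &pSnap);
+**   sqlite3_exec(db2, "BEGIN", 0, 0, 0);
+**   sqlite3_snapshot_open(db2, "main", pSnap);
+**   ...
+**   sqlite3_exec(db2, "COMMIT", 0, 0, 0);
+**   sqlite3_snapshot_free(pSnap);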
+*/ +SQLITE_API void SQLITE_STDCALL sqlite3_snapshot_free(sqlite3_snapshot *pSnapshot){ + sqlite3_free(pSnapshot); +} +#endif /* SQLITE_ENABLE_SNAPSHOT */ + /************** End of main.c ************************************************/ /************** Begin file notify.c ******************************************/ /* ** 2009 March 3 ** @@ -99529,10 +136758,12 @@ ************************************************************************* ** ** This file contains the implementation of the sqlite3_unlock_notify() ** API method and its associated functionality. */ +/* #include "sqliteInt.h" */ +/* #include "btreeInt.h" */ /* Omit this entire file if SQLITE_ENABLE_UNLOCK_NOTIFY is not defined. */ #ifdef SQLITE_ENABLE_UNLOCK_NOTIFY /* @@ -99659,11 +136890,11 @@ ** ** Each call to this routine overrides any prior callbacks registered ** on the same "db". If xNotify==0 then any prior callbacks are immediately ** cancelled. */ -SQLITE_API int sqlite3_unlock_notify( +SQLITE_API int SQLITE_STDCALL sqlite3_unlock_notify( sqlite3 *db, void (*xNotify)(void **, int), void *pArg ){ int rc = SQLITE_OK; @@ -99671,10 +136902,11 @@ sqlite3_mutex_enter(db->mutex); enterMutex(); if( xNotify==0 ){ removeFromBlockedList(db); + db->pBlockingConnection = 0; db->pUnlockConnection = 0; db->xUnlockNotify = 0; db->pUnlockArg = 0; }else if( 0==db->pBlockingConnection ){ /* The blocking transaction has been concluded. Or there never was a @@ -99697,11 +136929,11 @@ } } leaveMutex(); assert( !db->mallocFailed ); - sqlite3Error(db, rc, (rc?"database is deadlocked":0)); + sqlite3ErrorWithMsg(db, rc, (rc?"database is deadlocked":0)); sqlite3_mutex_leave(db->mutex); return rc; } /* @@ -99768,11 +137000,11 @@ sqlite3BeginBenignMalloc(); assert( aArg==aDyn || (aDyn==0 && aArg==aStatic) ); assert( nArg<=(int)ArraySize(aStatic) || aArg==aDyn ); if( (!aDyn && nArg==(int)ArraySize(aStatic)) - || (aDyn && nArg==(int)(sqlite3DbMallocSize(db, aDyn)/sizeof(void*))) + || (aDyn && nArg==(int)(sqlite3MallocSize(aDyn)/sizeof(void*))) ){ /* The aArg[] array needs to grow. */ void **pNew = (void **)sqlite3Malloc(nArg*sizeof(void *)*2); if( pNew ){ memcpy(pNew, aArg, nArg*sizeof(void *)); @@ -99918,11 +137150,11 @@ ** option. But that functionality is no longer supported. ** ** A doclist is stored like this: ** ** array { -** varint docid; +** varint docid; (delta from previous doclist) ** array { (position list for column 0) ** varint position; (2 more than the delta from previous position) ** } ** array { ** varint POS_COLUMN; (marks start of position list for new column) @@ -99949,12 +137181,12 @@ ** The 123 value is the first docid. For column zero in this document ** there are two matches at positions 3 and 10 (5-2 and 9-2+3). The 1 ** at D signals the start of a new column; the 1 at E indicates that the ** new column is column number 1. There are two positions at 12 and 45 ** (14-2 and 35-2+12). The 0 at H indicate the end-of-document. The -** 234 at I is the next docid. It has one position 72 (72-2) and then -** terminates with the 0 at K. +** 234 at I is the delta to next docid (357). It has one position 70 +** (72-2) and then terminates with the 0 at K. ** ** A "position-list" is the list of positions for multiple columns for ** a single docid. A "column-list" is the set of positions for a single ** column. Hence, a position-list consists of one or more column-lists, ** a document record consists of a docid followed by a position-list and @@ -100134,22 +137366,12 @@ ** we simply write the new doclist. 
Segment merges overwrite older ** data for a particular docid with newer data, so deletes or updates ** will eventually overtake the earlier data and knock it out. The ** query logic likewise merges doclists so that newer data knocks out ** older data. -** -** TODO(shess) Provide a VACUUM type operation to clear out all -** deletions and duplications. This would basically be a forced merge -** into a single segment. */ -#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) - -#if defined(SQLITE_ENABLE_FTS3) && !defined(SQLITE_CORE) -# define SQLITE_CORE 1 -#endif - /************** Include fts3Int.h in the middle of fts3.c ********************/ /************** Begin file fts3Int.h *****************************************/ /* ** 2009 Nov 12 ** @@ -100161,18 +137383,41 @@ ** May you share freely, never taking more than you give. ** ****************************************************************************** ** */ - #ifndef _FTSINT_H #define _FTSINT_H #if !defined(NDEBUG) && !defined(SQLITE_DEBUG) # define NDEBUG 1 #endif +/* FTS3/FTS4 require virtual tables */ +#ifdef SQLITE_OMIT_VIRTUALTABLE +# undef SQLITE_ENABLE_FTS3 +# undef SQLITE_ENABLE_FTS4 +#endif + +/* +** FTS4 is really an extension for FTS3. It is enabled using the +** SQLITE_ENABLE_FTS3 macro. But to avoid confusion we also all +** the SQLITE_ENABLE_FTS4 macro to serve as an alisse for SQLITE_ENABLE_FTS3. +*/ +#if defined(SQLITE_ENABLE_FTS4) && !defined(SQLITE_ENABLE_FTS3) +# define SQLITE_ENABLE_FTS3 +#endif + +#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) + +/* If not building as part of the core, include sqlite3ext.h. */ +#ifndef SQLITE_CORE +/* # include "sqlite3ext.h" */ +SQLITE_EXTENSION_INIT3 +#endif + +/* #include "sqlite3.h" */ /************** Include fts3_tokenizer.h in the middle of fts3Int.h **********/ /************** Begin file fts3_tokenizer.h **********************************/ /* ** 2006 July 10 ** @@ -100197,10 +137442,11 @@ /* TODO(shess) Only used for SQLITE_OK and SQLITE_DONE at this time. ** If tokenizers are to be allowed to call sqlite3_*() functions, then ** we will need a way to register the API consistently. */ +/* #include "sqlite3.h" */ /* ** Structures used by the tokenizer interface. When a new tokenizer ** implementation is registered, the caller provides a pointer to ** an sqlite3_tokenizer_module containing pointers to the callback @@ -100224,11 +137470,11 @@ typedef struct sqlite3_tokenizer_cursor sqlite3_tokenizer_cursor; struct sqlite3_tokenizer_module { /* - ** Structure version. Should always be set to 0. + ** Structure version. Should always be set to 0 or 1. */ int iVersion; /* ** Create a new tokenizer. The values in the argv[] array are the @@ -100242,11 +137488,11 @@ ** to the strings "arg1" and "arg2". ** ** This method should return either SQLITE_OK (0), or an SQLite error ** code. If SQLITE_OK is returned, then *ppTokenizer should be set ** to point at the newly created tokenizer structure. The generic - ** sqlite3_tokenizer.pModule variable should not be initialised by + ** sqlite3_tokenizer.pModule variable should not be initialized by ** this callback. The caller will do so. 
*/ int (*xCreate)( int argc, /* Size of argv array */ const char *const*argv, /* Tokenizer argument strings */ @@ -100305,10 +137551,19 @@ const char **ppToken, int *pnBytes, /* OUT: Normalized text for token */ int *piStartOffset, /* OUT: Byte offset of token in input buffer */ int *piEndOffset, /* OUT: Byte offset of end of token in input buffer */ int *piPosition /* OUT: Number of tokens returned before this one */ ); + + /*********************************************************************** + ** Methods below this point are only available if iVersion>=1. + */ + + /* + ** Configure the language id of a tokenizer cursor. + */ + int (*xLanguageid)(sqlite3_tokenizer_cursor *pCsr, int iLangid); }; struct sqlite3_tokenizer { const sqlite3_tokenizer_module *pModule; /* The module for this tokenizer */ /* Tokenizer implementations will typically add additional fields */ @@ -100338,11 +137593,11 @@ ** May you do good and not evil. ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ************************************************************************* -** This is the header file for the generic hash-table implemenation +** This is the header file for the generic hash-table implementation ** used in SQLite. We've modified it slightly to serve as a standalone ** hash table implementation for the full-text indexing module. ** */ #ifndef _FTS3_HASH_H_ @@ -100442,10 +137697,22 @@ #endif /* _FTS3_HASH_H_ */ /************** End of fts3_hash.h *******************************************/ /************** Continuing where we left off in fts3Int.h ********************/ + +/* +** This constant determines the maximum depth of an FTS expression tree +** that the library will create and use. FTS uses recursion to perform +** various operations on the query tree, so the disadvantage of a large +** limit is that it may allow very large queries to use large amounts +** of stack space (perhaps causing a stack overflow). +*/ +#ifndef SQLITE_FTS3_MAX_EXPR_DEPTH +# define SQLITE_FTS3_MAX_EXPR_DEPTH 12 +#endif + /* ** This constant controls how often segments are merged. Once there are ** FTS3_MERGE_COUNT segments of level N, they are merged into a single ** segment of level N+1. @@ -100467,16 +137734,42 @@ ** similar macro called ArraySize(). Use a different name to avoid ** a collision when building an amalgamation with built-in FTS3. */ #define SizeofArray(X) ((int)(sizeof(X)/sizeof(X[0]))) + +#ifndef MIN +# define MIN(x,y) ((x)<(y)?(x):(y)) +#endif +#ifndef MAX +# define MAX(x,y) ((x)>(y)?(x):(y)) +#endif + /* ** Maximum length of a varint encoded integer. The varint format is different ** from that used by SQLite, so the maximum length is 10, not 9. */ #define FTS3_VARINT_MAX 10 +/* +** FTS4 virtual tables may maintain multiple indexes - one index of all terms +** in the document set and zero or more prefix indexes. All indexes are stored +** as one or more b+-trees in the %_segments and %_segdir tables. +** +** It is possible to determine which index a b+-tree belongs to based on the +** value stored in the "%_segdir.level" column. Given this value L, the index +** that the b+-tree belongs to is (L<<10). In other words, all b+-trees with +** level values between 0 and 1023 (inclusive) belong to index 0, all levels +** between 1024 and 2047 to index 1, and so on. +** +** It is considered impossible for an index to use more than 1024 levels. 
In +** theory though this may happen, but only after at least +** (FTS3_MERGE_COUNT^1024) separate flushes of the pending-terms tables. +*/ +#define FTS3_SEGDIR_MAXLEVEL 1024 +#define FTS3_SEGDIR_MAXLEVEL_STR "1024" + /* ** The testcase() macro is only used by the amalgamation. If undefined, ** make it a no-op. */ #ifndef testcase @@ -100497,31 +137790,77 @@ #ifndef SQLITE_AMALGAMATION /* ** Macros indicating that conditional expressions are always true or ** false. */ +#ifdef SQLITE_COVERAGE_TEST +# define ALWAYS(x) (1) +# define NEVER(X) (0) +#elif defined(SQLITE_DEBUG) +# define ALWAYS(x) sqlite3Fts3Always((x)!=0) +# define NEVER(x) sqlite3Fts3Never((x)!=0) +SQLITE_PRIVATE int sqlite3Fts3Always(int b); +SQLITE_PRIVATE int sqlite3Fts3Never(int b); +#else # define ALWAYS(x) (x) -# define NEVER(X) (x) +# define NEVER(x) (x) +#endif + /* ** Internal types used by SQLite. */ typedef unsigned char u8; /* 1-byte (or larger) unsigned integer */ typedef short int i16; /* 2-byte (or larger) signed integer */ typedef unsigned int u32; /* 4-byte unsigned integer */ typedef sqlite3_uint64 u64; /* 8-byte unsigned integer */ +typedef sqlite3_int64 i64; /* 8-byte signed integer */ + /* ** Macro used to suppress compiler warnings for unused parameters. */ #define UNUSED_PARAMETER(x) (void)(x) + +/* +** Activate assert() only if SQLITE_TEST is enabled. +*/ +#if !defined(NDEBUG) && !defined(SQLITE_DEBUG) +# define NDEBUG 1 +#endif + +/* +** The TESTONLY macro is used to enclose variable declarations or +** other bits of code that are needed to support the arguments +** within testcase() and assert() macros. +*/ +#if defined(SQLITE_DEBUG) || defined(SQLITE_COVERAGE_TEST) +# define TESTONLY(X) X +#else +# define TESTONLY(X) +#endif + +#endif /* SQLITE_AMALGAMATION */ + +#ifdef SQLITE_DEBUG +SQLITE_PRIVATE int sqlite3Fts3Corrupt(void); +# define FTS_CORRUPT_VTAB sqlite3Fts3Corrupt() +#else +# define FTS_CORRUPT_VTAB SQLITE_CORRUPT_VTAB #endif typedef struct Fts3Table Fts3Table; typedef struct Fts3Cursor Fts3Cursor; typedef struct Fts3Expr Fts3Expr; typedef struct Fts3Phrase Fts3Phrase; -typedef struct Fts3SegReader Fts3SegReader; +typedef struct Fts3PhraseToken Fts3PhraseToken; + +typedef struct Fts3Doclist Fts3Doclist; typedef struct Fts3SegFilter Fts3SegFilter; +typedef struct Fts3DeferredToken Fts3DeferredToken; +typedef struct Fts3SegReader Fts3SegReader; +typedef struct Fts3MultiSegReader Fts3MultiSegReader; + +typedef struct MatchinfoBuffer MatchinfoBuffer; /* ** A connection to a fulltext index is an instance of the following ** structure. The xCreate and xConnect methods create an instance ** of this structure and xDestroy and xDisconnect free that instance. @@ -100533,43 +137872,78 @@ sqlite3 *db; /* The database connection */ const char *zDb; /* logical database name */ const char *zName; /* virtual table name */ int nColumn; /* number of named columns in virtual table */ char **azColumn; /* column names. malloced */ + u8 *abNotindexed; /* True for 'notindexed' columns */ sqlite3_tokenizer *pTokenizer; /* tokenizer for inserts and queries */ + char *zContentTbl; /* content=xxx option, or NULL */ + char *zLanguageid; /* languageid=xxx option, or NULL */ + int nAutoincrmerge; /* Value configured by 'automerge' */ + u32 nLeafAdd; /* Number of leaf blocks added this trans */ /* Precompiled statements used by the implementation. Each of these ** statements is run and reset within a single virtual table API call. 
*/ - sqlite3_stmt *aStmt[25]; - - /* Pointer to string containing the SQL: - ** - ** "SELECT block FROM %_segments WHERE blockid BETWEEN ? AND ? - ** ORDER BY blockid" - */ - char *zSelectLeaves; - int nLeavesStmt; /* Valid statements in aLeavesStmt */ - int nLeavesTotal; /* Total number of prepared leaves stmts */ - int nLeavesAlloc; /* Allocated size of aLeavesStmt */ - sqlite3_stmt **aLeavesStmt; /* Array of prepared zSelectLeaves stmts */ + sqlite3_stmt *aStmt[40]; + + char *zReadExprlist; + char *zWriteExprlist; int nNodeSize; /* Soft limit for node size */ - u8 bHasContent; /* True if %_content table exists */ + u8 bFts4; /* True for FTS4, false for FTS3 */ + u8 bHasStat; /* True if %_stat table exists (2==unknown) */ u8 bHasDocsize; /* True if %_docsize table exists */ - - /* The following hash table is used to buffer pending index updates during - ** transactions. Variable nPendingData estimates the memory size of the - ** pending data, including hash table overhead, but not malloc overhead. - ** When nPendingData exceeds nMaxPendingData, the buffer is flushed - ** automatically. Variable iPrevDocid is the docid of the most recently - ** inserted record. - */ - int nMaxPendingData; - int nPendingData; - sqlite_int64 iPrevDocid; - Fts3Hash pendingTerms; + u8 bDescIdx; /* True if doclists are in reverse order */ + u8 bIgnoreSavepoint; /* True to ignore xSavepoint invocations */ + int nPgsz; /* Page size for host database */ + char *zSegmentsTbl; /* Name of %_segments table */ + sqlite3_blob *pSegments; /* Blob handle open on %_segments table */ + + /* + ** The following array of hash tables is used to buffer pending index + ** updates during transactions. All pending updates buffered at any one + ** time must share a common language-id (see the FTS4 langid= feature). + ** The current language id is stored in variable iPrevLangid. + ** + ** A single FTS4 table may have multiple full-text indexes. For each index + ** there is an entry in the aIndex[] array. Index 0 is an index of all the + ** terms that appear in the document set. Each subsequent index in aIndex[] + ** is an index of prefixes of a specific length. + ** + ** Variable nPendingData contains an estimate the memory consumed by the + ** pending data structures, including hash table overhead, but not including + ** malloc overhead. When nPendingData exceeds nMaxPendingData, all hash + ** tables are flushed to disk. Variable iPrevDocid is the docid of the most + ** recently inserted record. + */ + int nIndex; /* Size of aIndex[] */ + struct Fts3Index { + int nPrefix; /* Prefix length (0 for main terms index) */ + Fts3Hash hPending; /* Pending terms table for this index */ + } *aIndex; + int nMaxPendingData; /* Max pending data before flush to disk */ + int nPendingData; /* Current bytes of pending data */ + sqlite_int64 iPrevDocid; /* Docid of most recently inserted document */ + int iPrevLangid; /* Langid of recently inserted document */ + int bPrevDelete; /* True if last operation was a delete */ + +#if defined(SQLITE_DEBUG) || defined(SQLITE_COVERAGE_TEST) + /* State variables used for validating that the transaction control + ** methods of the virtual table are called at appropriate times. These + ** values do not contribute to FTS functionality; they are used for + ** verifying the operation of the SQLite core. 
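+  **
+  ** Rough illustration of how multiple indexes arise (ties back to the
+  ** aIndex[] description above; the table and column names are made up):
+  **
+  **   CREATE VIRTUAL TABLE docs USING fts4(body, prefix="2,4");
+  **
+  ** gives nIndex==3: aIndex[0] is the ordinary terms index, while aIndex[1]
+  ** and aIndex[2] hold 2- and 4-character prefixes respectively.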
+ */ + int inTransaction; /* True after xBegin but before xCommit/xRollback */ + int mxSavepoint; /* Largest valid xSavepoint integer */ +#endif + +#ifdef SQLITE_TEST + /* True to disable the incremental doclist optimization. This is controled + ** by special insert command 'test-no-incr-doclist'. */ + int bNoIncrDoclist; +#endif }; /* ** When the core wants to read from the virtual table, it creates a ** virtual table cursor (an instance of the following structure) using @@ -100580,17 +137954,30 @@ i16 eSearch; /* Search strategy (see below) */ u8 isEof; /* True if at End Of Results */ u8 isRequireSeek; /* True if must seek pStmt to %_content row */ sqlite3_stmt *pStmt; /* Prepared statement in use by the cursor */ Fts3Expr *pExpr; /* Parsed MATCH query string */ + int iLangid; /* Language being queried for */ + int nPhrase; /* Number of matchable phrases in query */ + Fts3DeferredToken *pDeferred; /* Deferred search tokens, if any */ sqlite3_int64 iPrevId; /* Previous id read from aDoclist */ char *pNextId; /* Pointer into the body of aDoclist */ char *aDoclist; /* List of docids for full-text queries */ int nDoclist; /* Size of buffer at aDoclist */ + u8 bDesc; /* True to sort in descending order */ + int eEvalmode; /* An FTS3_EVAL_XX constant */ + int nRowAvg; /* Average size of database rows, in pages */ + sqlite3_int64 nDoc; /* Documents in table */ + i64 iMinDocid; /* Minimum docid to return */ + i64 iMaxDocid; /* Maximum docid to return */ int isMatchinfoNeeded; /* True when aMatchinfo[] needs filling in */ - u32 *aMatchinfo; /* Information about most recent match */ + MatchinfoBuffer *pMIBuffer; /* Buffer for matchinfo data */ }; + +#define FTS3_EVAL_FILTER 0 +#define FTS3_EVAL_NEXT 1 +#define FTS3_EVAL_MATCHINFO 2 /* ** The Fts3Cursor.eSearch member is always set to one of the following. ** Actualy, Fts3Cursor.eSearch can be greater than or equal to ** FTS3_FULLTEXT_SEARCH. If so, then Fts3Cursor.eSearch - 2 is the index @@ -100607,54 +137994,107 @@ */ #define FTS3_FULLSCAN_SEARCH 0 /* Linear scan of %_content table */ #define FTS3_DOCID_SEARCH 1 /* Lookup by rowid on %_content table */ #define FTS3_FULLTEXT_SEARCH 2 /* Full-text index search */ +/* +** The lower 16-bits of the sqlite3_index_info.idxNum value set by +** the xBestIndex() method contains the Fts3Cursor.eSearch value described +** above. The upper 16-bits contain a combination of the following +** bits, used to describe extra constraints on full-text searches. +*/ +#define FTS3_HAVE_LANGID 0x00010000 /* languageid=? */ +#define FTS3_HAVE_DOCID_GE 0x00020000 /* docid>=? */ +#define FTS3_HAVE_DOCID_LE 0x00040000 /* docid<=? */ + +struct Fts3Doclist { + char *aAll; /* Array containing doclist (or NULL) */ + int nAll; /* Size of a[] in bytes */ + char *pNextDocid; /* Pointer to next docid */ + + sqlite3_int64 iDocid; /* Current docid (if pList!=0) */ + int bFreeList; /* True if pList should be sqlite3_free()d */ + char *pList; /* Pointer to position list following iDocid */ + int nList; /* Length of position list */ +}; + /* ** A "phrase" is a sequence of one or more tokens that must match in ** sequence. A single token is the base case and the most common case. -** For a sequence of tokens contained in "...", nToken will be the number -** of tokens in the string. +** For a sequence of tokens contained in double-quotes (i.e. "one two three") +** nToken will be the number of tokens in the string. 
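+**
+** For example (an illustrative query only; the table name is made up), the
+** quoted phrase below is parsed into a single Fts3Phrase with nToken==3:
+**
+**   SELECT * FROM docs WHERE docs MATCH '"sqlite full text"';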
*/ +struct Fts3PhraseToken { + char *z; /* Text of the token */ + int n; /* Number of bytes in buffer z */ + int isPrefix; /* True if token ends with a "*" character */ + int bFirst; /* True if token must appear at position 0 */ + + /* Variables above this point are populated when the expression is + ** parsed (by code in fts3_expr.c). Below this point the variables are + ** used when evaluating the expression. */ + Fts3DeferredToken *pDeferred; /* Deferred token object for this token */ + Fts3MultiSegReader *pSegcsr; /* Segment-reader for this token */ +}; + struct Fts3Phrase { + /* Cache of doclist for this phrase. */ + Fts3Doclist doclist; + int bIncr; /* True if doclist is loaded incrementally */ + int iDoclistToken; + + /* Used by sqlite3Fts3EvalPhrasePoslist() if this is a descendent of an + ** OR condition. */ + char *pOrPoslist; + i64 iOrDocid; + + /* Variables below this point are populated by fts3_expr.c when parsing + ** a MATCH expression. Everything above is part of the evaluation phase. + */ int nToken; /* Number of tokens in the phrase */ int iColumn; /* Index of column this phrase must match */ - int isNot; /* Phrase prefixed by unary not (-) operator */ - struct PhraseToken { - char *z; /* Text of the token */ - int n; /* Number of bytes in buffer pointed to by z */ - int isPrefix; /* True if token ends in with a "*" character */ - } aToken[1]; /* One entry for each token in the phrase */ + Fts3PhraseToken aToken[1]; /* One entry for each token in the phrase */ }; /* ** A tree of these objects forms the RHS of a MATCH operator. ** -** If Fts3Expr.eType is either FTSQUERY_NEAR or FTSQUERY_PHRASE and isLoaded -** is true, then aDoclist points to a malloced buffer, size nDoclist bytes, -** containing the results of the NEAR or phrase query in FTS3 doclist -** format. As usual, the initial "Length" field found in doclists stored -** on disk is omitted from this buffer. +** If Fts3Expr.eType is FTSQUERY_PHRASE and isLoaded is true, then aDoclist +** points to a malloced buffer, size nDoclist bytes, containing the results +** of this phrase query in FTS3 doclist format. As usual, the initial +** "Length" field found in doclists stored on disk is omitted from this +** buffer. ** -** Variable pCurrent always points to the start of a docid field within -** aDoclist. Since the doclist is usually scanned in docid order, this can -** be used to accelerate seeking to the required docid within the doclist. +** Variable aMI is used only for FTSQUERY_NEAR nodes to store the global +** matchinfo data. If it is not NULL, it points to an array of size nCol*3, +** where nCol is the number of columns in the queried FTS table. The array +** is populated as follows: +** +** aMI[iCol*3 + 0] = Undefined +** aMI[iCol*3 + 1] = Number of occurrences +** aMI[iCol*3 + 2] = Number of rows containing at least one instance +** +** The aMI array is allocated using sqlite3_malloc(). It should be freed +** when the expression node is. */ struct Fts3Expr { int eType; /* One of the FTSQUERY_XXX values defined below */ int nNear; /* Valid if eType==FTSQUERY_NEAR */ Fts3Expr *pParent; /* pParent->pLeft==this or pParent->pRight==this */ Fts3Expr *pLeft; /* Left operand */ Fts3Expr *pRight; /* Right operand */ Fts3Phrase *pPhrase; /* Valid if eType==FTSQUERY_PHRASE */ - int isLoaded; /* True if aDoclist/nDoclist are initialized. */ - char *aDoclist; /* Buffer containing doclist */ - int nDoclist; /* Size of aDoclist in bytes */ + /* The following are used by the fts3_eval.c module. 
*/ + sqlite3_int64 iDocid; /* Current docid */ + u8 bEof; /* True this expression is at EOF already */ + u8 bStart; /* True if iDocid is valid */ + u8 bDeferred; /* True if this expression is entirely deferred */ - sqlite3_int64 iCurrent; - char *pCurrent; + /* The following are used by the fts3_snippet.c module. */ + int iPhrase; /* Index of this phrase in matchinfo() results */ + u32 *aMI; /* See above */ }; /* ** Candidate values for Fts3Query.eType. Note that the order of the first ** four values is in order of precedence when parsing expressions. For @@ -100671,90 +138111,200 @@ #define FTSQUERY_AND 3 #define FTSQUERY_OR 4 #define FTSQUERY_PHRASE 5 -/* fts3_init.c */ -SQLITE_PRIVATE int sqlite3Fts3DeleteVtab(int, sqlite3_vtab *); -SQLITE_PRIVATE int sqlite3Fts3InitVtab(int, sqlite3*, void*, int, const char*const*, - sqlite3_vtab **, char **); - /* fts3_write.c */ SQLITE_PRIVATE int sqlite3Fts3UpdateMethod(sqlite3_vtab*,int,sqlite3_value**,sqlite3_int64*); SQLITE_PRIVATE int sqlite3Fts3PendingTermsFlush(Fts3Table *); SQLITE_PRIVATE void sqlite3Fts3PendingTermsClear(Fts3Table *); SQLITE_PRIVATE int sqlite3Fts3Optimize(Fts3Table *); -SQLITE_PRIVATE int sqlite3Fts3SegReaderNew(Fts3Table *,int, sqlite3_int64, +SQLITE_PRIVATE int sqlite3Fts3SegReaderNew(int, int, sqlite3_int64, sqlite3_int64, sqlite3_int64, const char *, int, Fts3SegReader**); -SQLITE_PRIVATE int sqlite3Fts3SegReaderPending(Fts3Table*,const char*,int,int,Fts3SegReader**); -SQLITE_PRIVATE void sqlite3Fts3SegReaderFree(Fts3Table *, Fts3SegReader *); -SQLITE_PRIVATE int sqlite3Fts3SegReaderIterate( - Fts3Table *, Fts3SegReader **, int, Fts3SegFilter *, - int (*)(Fts3Table *, void *, char *, int, char *, int), void * -); -SQLITE_PRIVATE int sqlite3Fts3ReadBlock(Fts3Table*, sqlite3_int64, char const**, int*); -SQLITE_PRIVATE int sqlite3Fts3AllSegdirs(Fts3Table*, sqlite3_stmt **); -SQLITE_PRIVATE int sqlite3Fts3MatchinfoDocsizeLocal(Fts3Cursor*, u32*); -SQLITE_PRIVATE int sqlite3Fts3MatchinfoDocsizeGlobal(Fts3Cursor*, u32*); +SQLITE_PRIVATE int sqlite3Fts3SegReaderPending( + Fts3Table*,int,const char*,int,int,Fts3SegReader**); +SQLITE_PRIVATE void sqlite3Fts3SegReaderFree(Fts3SegReader *); +SQLITE_PRIVATE int sqlite3Fts3AllSegdirs(Fts3Table*, int, int, int, sqlite3_stmt **); +SQLITE_PRIVATE int sqlite3Fts3ReadBlock(Fts3Table*, sqlite3_int64, char **, int*, int*); + +SQLITE_PRIVATE int sqlite3Fts3SelectDoctotal(Fts3Table *, sqlite3_stmt **); +SQLITE_PRIVATE int sqlite3Fts3SelectDocsize(Fts3Table *, sqlite3_int64, sqlite3_stmt **); + +#ifndef SQLITE_DISABLE_FTS4_DEFERRED +SQLITE_PRIVATE void sqlite3Fts3FreeDeferredTokens(Fts3Cursor *); +SQLITE_PRIVATE int sqlite3Fts3DeferToken(Fts3Cursor *, Fts3PhraseToken *, int); +SQLITE_PRIVATE int sqlite3Fts3CacheDeferredDoclists(Fts3Cursor *); +SQLITE_PRIVATE void sqlite3Fts3FreeDeferredDoclists(Fts3Cursor *); +SQLITE_PRIVATE int sqlite3Fts3DeferredTokenList(Fts3DeferredToken *, char **, int *); +#else +# define sqlite3Fts3FreeDeferredTokens(x) +# define sqlite3Fts3DeferToken(x,y,z) SQLITE_OK +# define sqlite3Fts3CacheDeferredDoclists(x) SQLITE_OK +# define sqlite3Fts3FreeDeferredDoclists(x) +# define sqlite3Fts3DeferredTokenList(x,y,z) SQLITE_OK +#endif + +SQLITE_PRIVATE void sqlite3Fts3SegmentsClose(Fts3Table *); +SQLITE_PRIVATE int sqlite3Fts3MaxLevel(Fts3Table *, int *); + +/* Special values interpreted by sqlite3SegReaderCursor() */ +#define FTS3_SEGCURSOR_PENDING -1 +#define FTS3_SEGCURSOR_ALL -2 + +SQLITE_PRIVATE int sqlite3Fts3SegReaderStart(Fts3Table*, Fts3MultiSegReader*, 
Fts3SegFilter*); +SQLITE_PRIVATE int sqlite3Fts3SegReaderStep(Fts3Table *, Fts3MultiSegReader *); +SQLITE_PRIVATE void sqlite3Fts3SegReaderFinish(Fts3MultiSegReader *); + +SQLITE_PRIVATE int sqlite3Fts3SegReaderCursor(Fts3Table *, + int, int, int, const char *, int, int, int, Fts3MultiSegReader *); /* Flags allowed as part of the 4th argument to SegmentReaderIterate() */ #define FTS3_SEGMENT_REQUIRE_POS 0x00000001 #define FTS3_SEGMENT_IGNORE_EMPTY 0x00000002 #define FTS3_SEGMENT_COLUMN_FILTER 0x00000004 #define FTS3_SEGMENT_PREFIX 0x00000008 +#define FTS3_SEGMENT_SCAN 0x00000010 +#define FTS3_SEGMENT_FIRST 0x00000020 /* Type passed as 4th argument to SegmentReaderIterate() */ struct Fts3SegFilter { const char *zTerm; int nTerm; int iCol; int flags; }; + +struct Fts3MultiSegReader { + /* Used internally by sqlite3Fts3SegReaderXXX() calls */ + Fts3SegReader **apSegment; /* Array of Fts3SegReader objects */ + int nSegment; /* Size of apSegment array */ + int nAdvance; /* How many seg-readers to advance */ + Fts3SegFilter *pFilter; /* Pointer to filter object */ + char *aBuffer; /* Buffer to merge doclists in */ + int nBuffer; /* Allocated size of aBuffer[] in bytes */ + + int iColFilter; /* If >=0, filter for this column */ + int bRestart; + + /* Used by fts3.c only. */ + int nCost; /* Cost of running iterator */ + int bLookup; /* True if a lookup of a single entry. */ + + /* Output values. Valid only after Fts3SegReaderStep() returns SQLITE_ROW. */ + char *zTerm; /* Pointer to term buffer */ + int nTerm; /* Size of zTerm in bytes */ + char *aDoclist; /* Pointer to doclist buffer */ + int nDoclist; /* Size of aDoclist[] in bytes */ +}; + +SQLITE_PRIVATE int sqlite3Fts3Incrmerge(Fts3Table*,int,int); + +#define fts3GetVarint32(p, piVal) ( \ + (*(u8*)(p)&0x80) ? 
sqlite3Fts3GetVarint32(p, piVal) : (*piVal=*(u8*)(p), 1) \ +) /* fts3.c */ +SQLITE_PRIVATE void sqlite3Fts3ErrMsg(char**,const char*,...); SQLITE_PRIVATE int sqlite3Fts3PutVarint(char *, sqlite3_int64); SQLITE_PRIVATE int sqlite3Fts3GetVarint(const char *, sqlite_int64 *); SQLITE_PRIVATE int sqlite3Fts3GetVarint32(const char *, int *); SQLITE_PRIVATE int sqlite3Fts3VarintLen(sqlite3_uint64); SQLITE_PRIVATE void sqlite3Fts3Dequote(char *); - -SQLITE_PRIVATE char *sqlite3Fts3FindPositions(Fts3Expr *, sqlite3_int64, int); -SQLITE_PRIVATE int sqlite3Fts3ExprLoadDoclist(Fts3Table *, Fts3Expr *); -SQLITE_PRIVATE int sqlite3Fts3ExprNearTrim(Fts3Expr *, Fts3Expr *, int); +SQLITE_PRIVATE void sqlite3Fts3DoclistPrev(int,char*,int,char**,sqlite3_int64*,int*,u8*); +SQLITE_PRIVATE int sqlite3Fts3EvalPhraseStats(Fts3Cursor *, Fts3Expr *, u32 *); +SQLITE_PRIVATE int sqlite3Fts3FirstFilter(sqlite3_int64, char *, int, char *); +SQLITE_PRIVATE void sqlite3Fts3CreateStatTable(int*, Fts3Table*); +SQLITE_PRIVATE int sqlite3Fts3EvalTestDeferred(Fts3Cursor *pCsr, int *pRc); /* fts3_tokenizer.c */ SQLITE_PRIVATE const char *sqlite3Fts3NextToken(const char *, int *); SQLITE_PRIVATE int sqlite3Fts3InitHashTable(sqlite3 *, Fts3Hash *, const char *); -SQLITE_PRIVATE int sqlite3Fts3InitTokenizer(Fts3Hash *pHash, - const char *, sqlite3_tokenizer **, const char **, char ** +SQLITE_PRIVATE int sqlite3Fts3InitTokenizer(Fts3Hash *pHash, const char *, + sqlite3_tokenizer **, char ** ); +SQLITE_PRIVATE int sqlite3Fts3IsIdChar(char); /* fts3_snippet.c */ SQLITE_PRIVATE void sqlite3Fts3Offsets(sqlite3_context*, Fts3Cursor*); SQLITE_PRIVATE void sqlite3Fts3Snippet(sqlite3_context *, Fts3Cursor *, const char *, const char *, const char *, int, int ); -SQLITE_PRIVATE void sqlite3Fts3Matchinfo(sqlite3_context *, Fts3Cursor *); +SQLITE_PRIVATE void sqlite3Fts3Matchinfo(sqlite3_context *, Fts3Cursor *, const char *); +SQLITE_PRIVATE void sqlite3Fts3MIBufferFree(MatchinfoBuffer *p); /* fts3_expr.c */ -SQLITE_PRIVATE int sqlite3Fts3ExprParse(sqlite3_tokenizer *, - char **, int, int, const char *, int, Fts3Expr ** +SQLITE_PRIVATE int sqlite3Fts3ExprParse(sqlite3_tokenizer *, int, + char **, int, int, int, const char *, int, Fts3Expr **, char ** ); SQLITE_PRIVATE void sqlite3Fts3ExprFree(Fts3Expr *); #ifdef SQLITE_TEST SQLITE_PRIVATE int sqlite3Fts3ExprInitTestInterface(sqlite3 *db); +SQLITE_PRIVATE int sqlite3Fts3InitTerm(sqlite3 *db); #endif +SQLITE_PRIVATE int sqlite3Fts3OpenTokenizer(sqlite3_tokenizer *, int, const char *, int, + sqlite3_tokenizer_cursor ** +); + +/* fts3_aux.c */ +SQLITE_PRIVATE int sqlite3Fts3InitAux(sqlite3 *db); + +SQLITE_PRIVATE void sqlite3Fts3EvalPhraseCleanup(Fts3Phrase *); + +SQLITE_PRIVATE int sqlite3Fts3MsrIncrStart( + Fts3Table*, Fts3MultiSegReader*, int, const char*, int); +SQLITE_PRIVATE int sqlite3Fts3MsrIncrNext( + Fts3Table *, Fts3MultiSegReader *, sqlite3_int64 *, char **, int *); +SQLITE_PRIVATE int sqlite3Fts3EvalPhrasePoslist(Fts3Cursor *, Fts3Expr *, int iCol, char **); +SQLITE_PRIVATE int sqlite3Fts3MsrOvfl(Fts3Cursor *, Fts3MultiSegReader *, int *); +SQLITE_PRIVATE int sqlite3Fts3MsrIncrRestart(Fts3MultiSegReader *pCsr); + +/* fts3_tokenize_vtab.c */ +SQLITE_PRIVATE int sqlite3Fts3InitTok(sqlite3*, Fts3Hash *); + +/* fts3_unicode2.c (functions generated by parsing unicode text files) */ +#ifndef SQLITE_DISABLE_FTS3_UNICODE +SQLITE_PRIVATE int sqlite3FtsUnicodeFold(int, int); +SQLITE_PRIVATE int sqlite3FtsUnicodeIsalnum(int); +SQLITE_PRIVATE int sqlite3FtsUnicodeIsdiacritic(int); +#endif + 
+#endif /* !SQLITE_CORE || SQLITE_ENABLE_FTS3 */ #endif /* _FTSINT_H */ /************** End of fts3Int.h *********************************************/ /************** Continuing where we left off in fts3.c ***********************/ +#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) +#if defined(SQLITE_ENABLE_FTS3) && !defined(SQLITE_CORE) +# define SQLITE_CORE 1 +#endif +/* #include <assert.h> */ +/* #include <stdlib.h> */ +/* #include <stddef.h> */ +/* #include <stdio.h> */ +/* #include <string.h> */ +/* #include <stdarg.h> */ + +/* #include "fts3.h" */ #ifndef SQLITE_CORE +/* # include "sqlite3ext.h" */ SQLITE_EXTENSION_INIT1 #endif + +static int fts3EvalNext(Fts3Cursor *pCsr); +static int fts3EvalStart(Fts3Cursor *pCsr); +static int fts3TermSegReaderCursor( + Fts3Cursor *, const char *, int, int, Fts3MultiSegReader **); + +#ifndef SQLITE_AMALGAMATION +# if defined(SQLITE_DEBUG) +SQLITE_PRIVATE int sqlite3Fts3Always(int b) { assert( b ); return b; } +SQLITE_PRIVATE int sqlite3Fts3Never(int b) { assert( !b ); return b; } +# endif +#endif /* ** Write a 64-bit variable-length integer to memory starting at p[0]. ** The length of data written will be between 1 and FTS3_VARINT_MAX bytes. ** The number of bytes written is returned. @@ -100769,36 +138319,63 @@ q[-1] &= 0x7f; /* turn off high bit in final byte */ assert( q - (unsigned char *)p <= FTS3_VARINT_MAX ); return (int) (q - (unsigned char *)p); } +#define GETVARINT_STEP(v, ptr, shift, mask1, mask2, var, ret) \ + v = (v & mask1) | ( (*ptr++) << shift ); \ + if( (v & mask2)==0 ){ var = v; return ret; } +#define GETVARINT_INIT(v, ptr, shift, mask1, mask2, var, ret) \ + v = (*ptr++); \ + if( (v & mask2)==0 ){ var = v; return ret; } + /* ** Read a 64-bit variable-length integer from memory starting at p[0]. ** Return the number of bytes read, or 0 on error. ** The value is stored in *v. */ SQLITE_PRIVATE int sqlite3Fts3GetVarint(const char *p, sqlite_int64 *v){ - const unsigned char *q = (const unsigned char *) p; - sqlite_uint64 x = 0, y = 1; - while( (*q&0x80)==0x80 && q-(unsigned char *)p<FTS3_VARINT_MAX ){ - x += y * (*q++ & 0x7f); - y <<= 7; - } - x += y * (*q++); - *v = (sqlite_int64) x; - return (int) (q - (unsigned char *)p); + const char *pStart = p; + u32 a; + u64 b; + int shift; + + GETVARINT_INIT(a, p, 0, 0x00, 0x80, *v, 1); + GETVARINT_STEP(a, p, 7, 0x7F, 0x4000, *v, 2); + GETVARINT_STEP(a, p, 14, 0x3FFF, 0x200000, *v, 3); + GETVARINT_STEP(a, p, 21, 0x1FFFFF, 0x10000000, *v, 4); + b = (a & 0x0FFFFFFF ); + + for(shift=28; shift<=63; shift+=7){ + u64 c = *p++; + b += (c&0x7F) << shift; + if( (c & 0x80)==0 ) break; + } + *v = b; + return (int)(p - pStart); } /* ** Similar to sqlite3Fts3GetVarint(), except that the output is truncated to a ** 32-bit integer before it is returned. 
*/ SQLITE_PRIVATE int sqlite3Fts3GetVarint32(const char *p, int *pi){ - sqlite_int64 i; - int ret = sqlite3Fts3GetVarint(p, &i); - *pi = (int) i; - return ret; + u32 a; + +#ifndef fts3GetVarint32 + GETVARINT_INIT(a, p, 0, 0x00, 0x80, *pi, 1); +#else + a = (*p++); + assert( a & 0x80 ); +#endif + + GETVARINT_STEP(a, p, 7, 0x7F, 0x4000, *pi, 2); + GETVARINT_STEP(a, p, 14, 0x3FFF, 0x200000, *pi, 3); + GETVARINT_STEP(a, p, 21, 0x1FFFFF, 0x10000000, *pi, 4); + a = (a & 0x0FFFFFFF ); + *pi = (int)(a | ((u32)(*p & 0x0F) << 28)); + return 5; } /* ** Return the number of bytes required to encode v as a varint */ @@ -100834,11 +138411,11 @@ int iOut = 0; /* Index of next byte to write to output */ /* If the first byte was a '[', then the close-quote character is a ']' */ if( quote=='[' ) quote = ']'; - while( ALWAYS(z[iIn]) ){ + while( z[iIn] ){ if( z[iIn]==quote ){ if( z[iIn+1]!=quote ) break; z[iOut++] = quote; iIn += 2; }else{ @@ -100859,21 +138436,35 @@ *pp += sqlite3Fts3GetVarint(*pp, &iVal); *pVal += iVal; } /* -** As long as *pp has not reached its end (pEnd), then do the same -** as fts3GetDeltaVarint(): read a single varint and add it to *pVal. -** But if we have reached the end of the varint, just set *pp=0 and -** leave *pVal unchanged. +** When this function is called, *pp points to the first byte following a +** varint that is part of a doclist (or position-list, or any other list +** of varints). This function moves *pp to point to the start of that varint, +** and sets *pVal by the varint value. +** +** Argument pStart points to the first byte of the doclist that the +** varint is part of. */ -static void fts3GetDeltaVarint2(char **pp, char *pEnd, sqlite3_int64 *pVal){ - if( *pp>=pEnd ){ - *pp = 0; - }else{ - fts3GetDeltaVarint(pp, pVal); - } +static void fts3GetReverseVarint( + char **pp, + char *pStart, + sqlite3_int64 *pVal +){ + sqlite3_int64 iVal; + char *p; + + /* Pointer p now points at the first byte past the varint we are + ** interested in. So, unless the doclist is corrupt, the 0x80 bit is + ** clear on character p[-1]. */ + for(p = (*pp)-2; p>=pStart && *p&0x80; p--); + p++; + *pp = p; + + sqlite3Fts3GetVarint(p, &iVal); + *pVal = iVal; } /* ** The xDisconnect() virtual table method. */ @@ -100880,31 +138471,43 @@ static int fts3DisconnectMethod(sqlite3_vtab *pVtab){ Fts3Table *p = (Fts3Table *)pVtab; int i; assert( p->nPendingData==0 ); + assert( p->pSegments==0 ); /* Free any prepared statements held */ for(i=0; i<SizeofArray(p->aStmt); i++){ sqlite3_finalize(p->aStmt[i]); } - for(i=0; i<p->nLeavesStmt; i++){ - sqlite3_finalize(p->aLeavesStmt[i]); - } - sqlite3_free(p->zSelectLeaves); - sqlite3_free(p->aLeavesStmt); + sqlite3_free(p->zSegmentsTbl); + sqlite3_free(p->zReadExprlist); + sqlite3_free(p->zWriteExprlist); + sqlite3_free(p->zContentTbl); + sqlite3_free(p->zLanguageid); /* Invoke the tokenizer destructor to free the tokenizer. */ p->pTokenizer->pModule->xDestroy(p->pTokenizer); sqlite3_free(p); return SQLITE_OK; } + +/* +** Write an error message into *pzErr +*/ +SQLITE_PRIVATE void sqlite3Fts3ErrMsg(char **pzErr, const char *zFormat, ...){ + va_list ap; + sqlite3_free(*pzErr); + va_start(ap, zFormat); + *pzErr = sqlite3_vmprintf(zFormat, ap); + va_end(ap); +} /* ** Construct one or more SQL statements from the format string given -** and then evaluate those statements. The success code is writting +** and then evaluate those statements. The success code is written ** into *pRc. ** ** If *pRc is initially non-zero then this routine is a no-op. 
*/ static void fts3DbExec( @@ -100929,20 +138532,23 @@ /* ** The xDestroy() virtual table method. */ static int fts3DestroyMethod(sqlite3_vtab *pVtab){ - int rc = SQLITE_OK; /* Return code */ Fts3Table *p = (Fts3Table *)pVtab; - sqlite3 *db = p->db; + int rc = SQLITE_OK; /* Return code */ + const char *zDb = p->zDb; /* Name of database (e.g. "main", "temp") */ + sqlite3 *db = p->db; /* Database handle */ /* Drop the shadow tables */ - fts3DbExec(&rc, db, "DROP TABLE IF EXISTS %Q.'%q_content'", p->zDb, p->zName); - fts3DbExec(&rc, db, "DROP TABLE IF EXISTS %Q.'%q_segments'", p->zDb,p->zName); - fts3DbExec(&rc, db, "DROP TABLE IF EXISTS %Q.'%q_segdir'", p->zDb, p->zName); - fts3DbExec(&rc, db, "DROP TABLE IF EXISTS %Q.'%q_docsize'", p->zDb, p->zName); - fts3DbExec(&rc, db, "DROP TABLE IF EXISTS %Q.'%q_stat'", p->zDb, p->zName); + if( p->zContentTbl==0 ){ + fts3DbExec(&rc, db, "DROP TABLE IF EXISTS %Q.'%q_content'", zDb, p->zName); + } + fts3DbExec(&rc, db, "DROP TABLE IF EXISTS %Q.'%q_segments'", zDb,p->zName); + fts3DbExec(&rc, db, "DROP TABLE IF EXISTS %Q.'%q_segdir'", zDb, p->zName); + fts3DbExec(&rc, db, "DROP TABLE IF EXISTS %Q.'%q_docsize'", zDb, p->zName); + fts3DbExec(&rc, db, "DROP TABLE IF EXISTS %Q.'%q_stat'", zDb, p->zName); /* If everything has worked, invoke fts3DisconnectMethod() to free the ** memory associated with the Fts3Table structure and return SQLITE_OK. ** Otherwise, return an SQLite error code. */ @@ -100952,37 +138558,59 @@ /* ** Invoke sqlite3_declare_vtab() to declare the schema for the FTS3 table ** passed as the first argument. This is done as part of the xConnect() ** and xCreate() methods. +** +** If *pRc is non-zero when this function is called, it is a no-op. +** Otherwise, if an error occurs, an SQLite error code is stored in *pRc +** before returning. */ -static int fts3DeclareVtab(Fts3Table *p){ - int i; /* Iterator variable */ - int rc; /* Return code */ - char *zSql; /* SQL statement passed to declare_vtab() */ - char *zCols; /* List of user defined columns */ - - /* Create a list of user columns for the virtual table */ - zCols = sqlite3_mprintf("%Q, ", p->azColumn[0]); - for(i=1; zCols && i<p->nColumn; i++){ - zCols = sqlite3_mprintf("%z%Q, ", zCols, p->azColumn[i]); - } - - /* Create the whole "CREATE TABLE" statement to pass to SQLite */ - zSql = sqlite3_mprintf( - "CREATE TABLE x(%s %Q HIDDEN, docid HIDDEN)", zCols, p->zName - ); - - if( !zCols || !zSql ){ - rc = SQLITE_NOMEM; - }else{ - rc = sqlite3_declare_vtab(p->db, zSql); - } - - sqlite3_free(zSql); - sqlite3_free(zCols); - return rc; +static void fts3DeclareVtab(int *pRc, Fts3Table *p){ + if( *pRc==SQLITE_OK ){ + int i; /* Iterator variable */ + int rc; /* Return code */ + char *zSql; /* SQL statement passed to declare_vtab() */ + char *zCols; /* List of user defined columns */ + const char *zLanguageid; + + zLanguageid = (p->zLanguageid ? 
p->zLanguageid : "__langid"); + sqlite3_vtab_config(p->db, SQLITE_VTAB_CONSTRAINT_SUPPORT, 1); + + /* Create a list of user columns for the virtual table */ + zCols = sqlite3_mprintf("%Q, ", p->azColumn[0]); + for(i=1; zCols && i<p->nColumn; i++){ + zCols = sqlite3_mprintf("%z%Q, ", zCols, p->azColumn[i]); + } + + /* Create the whole "CREATE TABLE" statement to pass to SQLite */ + zSql = sqlite3_mprintf( + "CREATE TABLE x(%s %Q HIDDEN, docid HIDDEN, %Q HIDDEN)", + zCols, p->zName, zLanguageid + ); + if( !zCols || !zSql ){ + rc = SQLITE_NOMEM; + }else{ + rc = sqlite3_declare_vtab(p->db, zSql); + } + + sqlite3_free(zSql); + sqlite3_free(zCols); + *pRc = rc; + } +} + +/* +** Create the %_stat table if it does not already exist. +*/ +SQLITE_PRIVATE void sqlite3Fts3CreateStatTable(int *pRc, Fts3Table *p){ + fts3DbExec(pRc, p->db, + "CREATE TABLE IF NOT EXISTS %Q.'%q_stat'" + "(id INTEGER PRIMARY KEY, value BLOB);", + p->zDb, p->zName + ); + if( (*pRc)==SQLITE_OK ) p->bHasStat = 1; } /* ** Create the backing store tables (%_content, %_segments and %_segdir) ** required by the FTS3 table passed as the only argument. This is done @@ -100993,29 +138621,35 @@ ** %_stat tables required by FTS4. */ static int fts3CreateTables(Fts3Table *p){ int rc = SQLITE_OK; /* Return code */ int i; /* Iterator variable */ - char *zContentCols; /* Columns of %_content table */ sqlite3 *db = p->db; /* The database connection */ - /* Create a list of user columns for the content table */ - if( p->bHasContent ){ + if( p->zContentTbl==0 ){ + const char *zLanguageid = p->zLanguageid; + char *zContentCols; /* Columns of %_content table */ + + /* Create a list of user columns for the content table */ zContentCols = sqlite3_mprintf("docid INTEGER PRIMARY KEY"); for(i=0; zContentCols && i<p->nColumn; i++){ char *z = p->azColumn[i]; zContentCols = sqlite3_mprintf("%z, 'c%d%q'", zContentCols, i, z); } + if( zLanguageid && zContentCols ){ + zContentCols = sqlite3_mprintf("%z, langid", zContentCols, zLanguageid); + } if( zContentCols==0 ) rc = SQLITE_NOMEM; - + /* Create the content table */ fts3DbExec(&rc, db, "CREATE TABLE %Q.'%q_content'(%s)", p->zDb, p->zName, zContentCols ); sqlite3_free(zContentCols); } + /* Create other tables */ fts3DbExec(&rc, db, "CREATE TABLE %Q.'%q_segments'(blockid INTEGER PRIMARY KEY, block BLOB);", p->zDb, p->zName ); @@ -101034,49 +138668,417 @@ if( p->bHasDocsize ){ fts3DbExec(&rc, db, "CREATE TABLE %Q.'%q_docsize'(docid INTEGER PRIMARY KEY, size BLOB);", p->zDb, p->zName ); - fts3DbExec(&rc, db, - "CREATE TABLE %Q.'%q_stat'(id INTEGER PRIMARY KEY, value BLOB);", - p->zDb, p->zName - ); + } + assert( p->bHasStat==p->bFts4 ); + if( p->bHasStat ){ + sqlite3Fts3CreateStatTable(&rc, p); } return rc; } /* -** An sqlite3_exec() callback for fts3TableExists. +** Store the current database page-size in bytes in p->nPgsz. +** +** If *pRc is non-zero when this function is called, it is a no-op. +** Otherwise, if an error occurs, an SQLite error code is stored in *pRc +** before returning. 
*/ -static int fts3TableExistsCallback(void *pArg, int n, char **pp1, char **pp2){ - *(int*)pArg = 1; +static void fts3DatabasePageSize(int *pRc, Fts3Table *p){ + if( *pRc==SQLITE_OK ){ + int rc; /* Return code */ + char *zSql; /* SQL text "PRAGMA %Q.page_size" */ + sqlite3_stmt *pStmt; /* Compiled "PRAGMA %Q.page_size" statement */ + + zSql = sqlite3_mprintf("PRAGMA %Q.page_size", p->zDb); + if( !zSql ){ + rc = SQLITE_NOMEM; + }else{ + rc = sqlite3_prepare(p->db, zSql, -1, &pStmt, 0); + if( rc==SQLITE_OK ){ + sqlite3_step(pStmt); + p->nPgsz = sqlite3_column_int(pStmt, 0); + rc = sqlite3_finalize(pStmt); + }else if( rc==SQLITE_AUTH ){ + p->nPgsz = 1024; + rc = SQLITE_OK; + } + } + assert( p->nPgsz>0 || rc!=SQLITE_OK ); + sqlite3_free(zSql); + *pRc = rc; + } +} + +/* +** "Special" FTS4 arguments are column specifications of the following form: +** +** <key> = <value> +** +** There may not be whitespace surrounding the "=" character. The <value> +** term may be quoted, but the <key> may not. +*/ +static int fts3IsSpecialColumn( + const char *z, + int *pnKey, + char **pzValue +){ + char *zValue; + const char *zCsr = z; + + while( *zCsr!='=' ){ + if( *zCsr=='\0' ) return 0; + zCsr++; + } + + *pnKey = (int)(zCsr-z); + zValue = sqlite3_mprintf("%s", &zCsr[1]); + if( zValue ){ + sqlite3Fts3Dequote(zValue); + } + *pzValue = zValue; return 1; } /* -** Determine if a table currently exists in the database. -*/ -static void fts3TableExists( - int *pRc, /* Success code */ - sqlite3 *db, /* The database connection to test */ - const char *zDb, /* ATTACHed database within the connection */ - const char *zName, /* Name of the FTS3 table */ - const char *zSuffix, /* Shadow table extension */ - u8 *pResult /* Write results here */ -){ - int rc = SQLITE_OK; - int res = 0; - char *zSql; - if( *pRc ) return; - zSql = sqlite3_mprintf( - "SELECT 1 FROM %Q.sqlite_master WHERE name='%q%s'", - zDb, zName, zSuffix - ); - rc = sqlite3_exec(db, zSql, fts3TableExistsCallback, &res, 0); +** Append the output of a printf() style formatting to an existing string. +*/ +static void fts3Appendf( + int *pRc, /* IN/OUT: Error code */ + char **pz, /* IN/OUT: Pointer to string buffer */ + const char *zFormat, /* Printf format string to append */ + ... /* Arguments for printf format string */ +){ + if( *pRc==SQLITE_OK ){ + va_list ap; + char *z; + va_start(ap, zFormat); + z = sqlite3_vmprintf(zFormat, ap); + va_end(ap); + if( z && *pz ){ + char *z2 = sqlite3_mprintf("%s%s", *pz, z); + sqlite3_free(z); + z = z2; + } + if( z==0 ) *pRc = SQLITE_NOMEM; + sqlite3_free(*pz); + *pz = z; + } +} + +/* +** Return a copy of input string zInput enclosed in double-quotes (") and +** with all double quote characters escaped. For example: +** +** fts3QuoteId("un \"zip\"") -> "un \"\"zip\"\"" +** +** The pointer returned points to memory obtained from sqlite3_malloc(). It +** is the callers responsibility to call sqlite3_free() to release this +** memory. +*/ +static char *fts3QuoteId(char const *zInput){ + int nRet; + char *zRet; + nRet = 2 + (int)strlen(zInput)*2 + 1; + zRet = sqlite3_malloc(nRet); + if( zRet ){ + int i; + char *z = zRet; + *(z++) = '"'; + for(i=0; zInput[i]; i++){ + if( zInput[i]=='"' ) *(z++) = '"'; + *(z++) = zInput[i]; + } + *(z++) = '"'; + *(z++) = '\0'; + } + return zRet; +} + +/* +** Return a list of comma separated SQL expressions and a FROM clause that +** could be used in a SELECT statement such as the following: +** +** SELECT <list of expressions> FROM %_content AS x ... 
+**
+** to return the docid, followed by each column of text data in order
+** from left to right. If parameter zFunc is not NULL, then instead of
+** being returned directly each column of text data is passed to an SQL
+** function named zFunc first. For example, if zFunc is "unzip" and the
+** table has the three user-defined columns "a", "b", and "c", the following
+** string is returned:
+**
+** "docid, unzip(x.'a'), unzip(x.'b'), unzip(x.'c') FROM %_content AS x"
+**
+** The pointer returned points to a buffer allocated by sqlite3_malloc(). It
+** is the responsibility of the caller to eventually free it.
+**
+** If *pRc is not SQLITE_OK when this function is called, it is a no-op (and
+** a NULL pointer is returned). Otherwise, if an OOM error is encountered
+** by this function, NULL is returned and *pRc is set to SQLITE_NOMEM. If
+** no error occurs, *pRc is left unmodified.
+*/
+static char *fts3ReadExprList(Fts3Table *p, const char *zFunc, int *pRc){
+ char *zRet = 0;
+ char *zFree = 0;
+ char *zFunction;
+ int i;
+
+ if( p->zContentTbl==0 ){
+ if( !zFunc ){
+ zFunction = "";
+ }else{
+ zFree = zFunction = fts3QuoteId(zFunc);
+ }
+ fts3Appendf(pRc, &zRet, "docid");
+ for(i=0; i<p->nColumn; i++){
+ fts3Appendf(pRc, &zRet, ",%s(x.'c%d%q')", zFunction, i, p->azColumn[i]);
+ }
+ if( p->zLanguageid ){
+ fts3Appendf(pRc, &zRet, ", x.%Q", "langid");
+ }
+ sqlite3_free(zFree);
+ }else{
+ fts3Appendf(pRc, &zRet, "rowid");
+ for(i=0; i<p->nColumn; i++){
+ fts3Appendf(pRc, &zRet, ", x.'%q'", p->azColumn[i]);
+ }
+ if( p->zLanguageid ){
+ fts3Appendf(pRc, &zRet, ", x.%Q", p->zLanguageid);
+ }
+ }
+ fts3Appendf(pRc, &zRet, " FROM '%q'.'%q%s' AS x",
+ p->zDb,
+ (p->zContentTbl ? p->zContentTbl : p->zName),
+ (p->zContentTbl ? "" : "_content")
+ );
+ return zRet;
+}
+
+/*
+** Return a list of N comma separated question marks, where N is the number
+** of columns in the %_content table (one for the docid plus one for each
+** user-defined text column).
+**
+** If argument zFunc is not NULL, then all but the first question mark
+** is preceded by zFunc and an open bracket, and followed by a closed
+** bracket. For example, if zFunc is "zip" and the FTS3 table has three
+** user-defined text columns, the following string is returned:
+**
+** "?, zip(?), zip(?), zip(?)"
+**
+** The pointer returned points to a buffer allocated by sqlite3_malloc(). It
+** is the responsibility of the caller to eventually free it.
+**
+** If *pRc is not SQLITE_OK when this function is called, it is a no-op (and
+** a NULL pointer is returned). Otherwise, if an OOM error is encountered
+** by this function, NULL is returned and *pRc is set to SQLITE_NOMEM. If
+** no error occurs, *pRc is left unmodified.
+*/
+static char *fts3WriteExprList(Fts3Table *p, const char *zFunc, int *pRc){
+ char *zRet = 0;
+ char *zFree = 0;
+ char *zFunction;
+ int i;
+
+ if( !zFunc ){
+ zFunction = "";
+ }else{
+ zFree = zFunction = fts3QuoteId(zFunc);
+ }
+ fts3Appendf(pRc, &zRet, "?");
+ for(i=0; i<p->nColumn; i++){
+ fts3Appendf(pRc, &zRet, ",%s(?)", zFunction);
+ }
+ if( p->zLanguageid ){
+ fts3Appendf(pRc, &zRet, ", ?");
+ }
+ sqlite3_free(zFree);
+ return zRet;
+}
+
+/*
+** This function interprets the string at (*pp) as a non-negative integer
+** value. It reads the integer and sets *pnOut to the value read, then
+** sets *pp to point to the byte immediately following the last byte of
+** the integer value.
+**
+** Only decimal digits ('0'..'9') may be part of an integer value.
+**
+** If *pp does not begin with a decimal digit SQLITE_ERROR is returned and
+** the output value is undefined. Otherwise SQLITE_OK is returned.
+**
+** This function is used when parsing the "prefix=" FTS4 parameter.
+*/
+static int fts3GobbleInt(const char **pp, int *pnOut){
+ const int MAX_NPREFIX = 10000000;
+ const char *p; /* Iterator pointer */
+ int nInt = 0; /* Output value */
+
+ for(p=*pp; p[0]>='0' && p[0]<='9'; p++){
+ nInt = nInt * 10 + (p[0] - '0');
+ if( nInt>MAX_NPREFIX ){
+ nInt = 0;
+ break;
+ }
+ }
+ if( p==*pp ) return SQLITE_ERROR;
+ *pnOut = nInt;
+ *pp = p;
+ return SQLITE_OK;
+}
+
+/*
+** This function is called to allocate an array of Fts3Index structures
+** representing the indexes maintained by the current FTS table. FTS tables
+** always maintain the main "terms" index, but may also maintain one or
+** more "prefix" indexes, depending on the value of the "prefix=" parameter
+** (if any) specified as part of the CREATE VIRTUAL TABLE statement.
+**
+** Argument zParam is passed the value of the "prefix=" option if one was
+** specified, or NULL otherwise.
+**
+** If no error occurs, SQLITE_OK is returned and *apIndex set to point to
+** the allocated array. *pnIndex is set to the number of elements in the
+** array. If an error does occur, an SQLite error code is returned.
+**
+** Regardless of whether or not an error is returned, it is the responsibility
+** of the caller to call sqlite3_free() on the output array to free it.
+*/
+static int fts3PrefixParameter(
+ const char *zParam, /* ABC in prefix=ABC parameter to parse */
+ int *pnIndex, /* OUT: size of *apIndex[] array */
+ struct Fts3Index **apIndex /* OUT: Array of indexes for this table */
+){
+ struct Fts3Index *aIndex; /* Allocated array */
+ int nIndex = 1; /* Number of entries in array */
+
+ if( zParam && zParam[0] ){
+ const char *p;
+ nIndex++;
+ for(p=zParam; *p; p++){
+ if( *p==',' ) nIndex++;
+ }
+ }
+
+ aIndex = sqlite3_malloc(sizeof(struct Fts3Index) * nIndex);
+ *apIndex = aIndex;
+ if( !aIndex ){
+ return SQLITE_NOMEM;
+ }
+
+ memset(aIndex, 0, sizeof(struct Fts3Index) * nIndex);
+ if( zParam ){
+ const char *p = zParam;
+ int i;
+ for(i=1; i<nIndex; i++){
+ int nPrefix = 0;
+ if( fts3GobbleInt(&p, &nPrefix) ) return SQLITE_ERROR;
+ assert( nPrefix>=0 );
+ if( nPrefix==0 ){
+ nIndex--;
+ i--;
+ }else{
+ aIndex[i].nPrefix = nPrefix;
+ }
+ p++;
+ }
+ }
+
+ *pnIndex = nIndex;
+ return SQLITE_OK;
+}
+
+/*
+** This function is called when initializing an FTS4 table that uses the
+** content=xxx option. It determines the number of and names of the columns
+** of the new FTS4 table.
+**
+** The third argument passed to this function is the value passed to the
+** content=xxx option (i.e. "xxx"). This function queries the database for
+** a table of that name. If found, the output variables are populated
+** as follows:
+**
+** *pnCol: Set to the number of columns table xxx has,
+**
+** *pnStr: Set to the total amount of space required to store a copy
+** of each column's name, including the nul-terminator.
+**
+** *pazCol: Set to point to an array of *pnCol strings. Each string is
+** the name of the corresponding column in table xxx. The array
+** and its contents are allocated using a single allocation. It
+** is the responsibility of the caller to free this allocation
+** by eventually passing the *pazCol value to sqlite3_free().
+**
+** If the table cannot be found, an error code is returned and the output
+** variables are undefined.
Or, if an OOM is encountered, SQLITE_NOMEM is +** returned (and the output variables are undefined). +*/ +static int fts3ContentColumns( + sqlite3 *db, /* Database handle */ + const char *zDb, /* Name of db (i.e. "main", "temp" etc.) */ + const char *zTbl, /* Name of content table */ + const char ***pazCol, /* OUT: Malloc'd array of column names */ + int *pnCol, /* OUT: Size of array *pazCol */ + int *pnStr, /* OUT: Bytes of string content */ + char **pzErr /* OUT: error message */ +){ + int rc = SQLITE_OK; /* Return code */ + char *zSql; /* "SELECT *" statement on zTbl */ + sqlite3_stmt *pStmt = 0; /* Compiled version of zSql */ + + zSql = sqlite3_mprintf("SELECT * FROM %Q.%Q", zDb, zTbl); + if( !zSql ){ + rc = SQLITE_NOMEM; + }else{ + rc = sqlite3_prepare(db, zSql, -1, &pStmt, 0); + if( rc!=SQLITE_OK ){ + sqlite3Fts3ErrMsg(pzErr, "%s", sqlite3_errmsg(db)); + } + } sqlite3_free(zSql); - *pResult = res & 0xff; - if( rc!=SQLITE_ABORT ) *pRc = rc; + + if( rc==SQLITE_OK ){ + const char **azCol; /* Output array */ + int nStr = 0; /* Size of all column names (incl. 0x00) */ + int nCol; /* Number of table columns */ + int i; /* Used to iterate through columns */ + + /* Loop through the returned columns. Set nStr to the number of bytes of + ** space required to store a copy of each column name, including the + ** nul-terminator byte. */ + nCol = sqlite3_column_count(pStmt); + for(i=0; i<nCol; i++){ + const char *zCol = sqlite3_column_name(pStmt, i); + nStr += (int)strlen(zCol) + 1; + } + + /* Allocate and populate the array to return. */ + azCol = (const char **)sqlite3_malloc(sizeof(char *) * nCol + nStr); + if( azCol==0 ){ + rc = SQLITE_NOMEM; + }else{ + char *p = (char *)&azCol[nCol]; + for(i=0; i<nCol; i++){ + const char *zCol = sqlite3_column_name(pStmt, i); + int n = (int)strlen(zCol)+1; + memcpy(p, zCol, n); + azCol[i] = p; + p += n; + } + } + sqlite3_finalize(pStmt); + + /* Set the output variables. */ + *pnCol = nCol; + *pnStr = nStr; + *pazCol = azCol; + } + + return rc; } /* ** This function is the implementation of both the xConnect and xCreate ** methods of the FTS3 virtual table. 
@@ -101096,128 +139098,369 @@ const char * const *argv, /* xCreate/xConnect argument array */ sqlite3_vtab **ppVTab, /* Write the resulting vtab structure here */ char **pzErr /* Write any error message here */ ){ Fts3Hash *pHash = (Fts3Hash *)pAux; - Fts3Table *p; /* Pointer to allocated vtab */ - int rc; /* Return code */ + Fts3Table *p = 0; /* Pointer to allocated vtab */ + int rc = SQLITE_OK; /* Return code */ int i; /* Iterator variable */ int nByte; /* Size of allocation used for *p */ int iCol; /* Column index */ int nString = 0; /* Bytes required to hold all column names */ int nCol = 0; /* Number of columns in the FTS table */ char *zCsr; /* Space for holding column names */ int nDb; /* Bytes required to hold database name */ int nName; /* Bytes required to hold table name */ - - const char *zTokenizer = 0; /* Name of tokenizer to use */ + int isFts4 = (argv[0][3]=='4'); /* True for FTS4, false for FTS3 */ + const char **aCol; /* Array of column names */ sqlite3_tokenizer *pTokenizer = 0; /* Tokenizer for this table */ + + int nIndex = 0; /* Size of aIndex[] array */ + struct Fts3Index *aIndex = 0; /* Array of indexes for this table */ + + /* The results of parsing supported FTS4 key=value options: */ + int bNoDocsize = 0; /* True to omit %_docsize table */ + int bDescIdx = 0; /* True to store descending indexes */ + char *zPrefix = 0; /* Prefix parameter value (or NULL) */ + char *zCompress = 0; /* compress=? parameter (or NULL) */ + char *zUncompress = 0; /* uncompress=? parameter (or NULL) */ + char *zContent = 0; /* content=? parameter (or NULL) */ + char *zLanguageid = 0; /* languageid=? parameter (or NULL) */ + char **azNotindexed = 0; /* The set of notindexed= columns */ + int nNotindexed = 0; /* Size of azNotindexed[] array */ + + assert( strlen(argv[0])==4 ); + assert( (sqlite3_strnicmp(argv[0], "fts4", 4)==0 && isFts4) + || (sqlite3_strnicmp(argv[0], "fts3", 4)==0 && !isFts4) + ); nDb = (int)strlen(argv[1]) + 1; nName = (int)strlen(argv[2]) + 1; - for(i=3; i<argc; i++){ + + nByte = sizeof(const char *) * (argc-2); + aCol = (const char **)sqlite3_malloc(nByte); + if( aCol ){ + memset((void*)aCol, 0, nByte); + azNotindexed = (char **)sqlite3_malloc(nByte); + } + if( azNotindexed ){ + memset(azNotindexed, 0, nByte); + } + if( !aCol || !azNotindexed ){ + rc = SQLITE_NOMEM; + goto fts3_init_out; + } + + /* Loop through all of the arguments passed by the user to the FTS3/4 + ** module (i.e. all the column names and special arguments). This loop + ** does the following: + ** + ** + Figures out the number of columns the FTSX table will have, and + ** the number of bytes of space that must be allocated to store copies + ** of the column names. + ** + ** + If there is a tokenizer specification included in the arguments, + ** initializes the tokenizer pTokenizer. + */ + for(i=3; rc==SQLITE_OK && i<argc; i++){ char const *z = argv[i]; - rc = sqlite3Fts3InitTokenizer(pHash, z, &pTokenizer, &zTokenizer, pzErr); - if( rc!=SQLITE_OK ){ - return rc; + int nKey; + char *zVal; + + /* Check if this is a tokenizer specification */ + if( !pTokenizer + && strlen(z)>8 + && 0==sqlite3_strnicmp(z, "tokenize", 8) + && 0==sqlite3Fts3IsIdChar(z[8]) + ){ + rc = sqlite3Fts3InitTokenizer(pHash, &z[9], &pTokenizer, pzErr); } - if( z!=zTokenizer ){ + + /* Check if it is an FTS4 special argument. 
*/ + else if( isFts4 && fts3IsSpecialColumn(z, &nKey, &zVal) ){ + struct Fts4Option { + const char *zOpt; + int nOpt; + } aFts4Opt[] = { + { "matchinfo", 9 }, /* 0 -> MATCHINFO */ + { "prefix", 6 }, /* 1 -> PREFIX */ + { "compress", 8 }, /* 2 -> COMPRESS */ + { "uncompress", 10 }, /* 3 -> UNCOMPRESS */ + { "order", 5 }, /* 4 -> ORDER */ + { "content", 7 }, /* 5 -> CONTENT */ + { "languageid", 10 }, /* 6 -> LANGUAGEID */ + { "notindexed", 10 } /* 7 -> NOTINDEXED */ + }; + + int iOpt; + if( !zVal ){ + rc = SQLITE_NOMEM; + }else{ + for(iOpt=0; iOpt<SizeofArray(aFts4Opt); iOpt++){ + struct Fts4Option *pOp = &aFts4Opt[iOpt]; + if( nKey==pOp->nOpt && !sqlite3_strnicmp(z, pOp->zOpt, pOp->nOpt) ){ + break; + } + } + if( iOpt==SizeofArray(aFts4Opt) ){ + sqlite3Fts3ErrMsg(pzErr, "unrecognized parameter: %s", z); + rc = SQLITE_ERROR; + }else{ + switch( iOpt ){ + case 0: /* MATCHINFO */ + if( strlen(zVal)!=4 || sqlite3_strnicmp(zVal, "fts3", 4) ){ + sqlite3Fts3ErrMsg(pzErr, "unrecognized matchinfo: %s", zVal); + rc = SQLITE_ERROR; + } + bNoDocsize = 1; + break; + + case 1: /* PREFIX */ + sqlite3_free(zPrefix); + zPrefix = zVal; + zVal = 0; + break; + + case 2: /* COMPRESS */ + sqlite3_free(zCompress); + zCompress = zVal; + zVal = 0; + break; + + case 3: /* UNCOMPRESS */ + sqlite3_free(zUncompress); + zUncompress = zVal; + zVal = 0; + break; + + case 4: /* ORDER */ + if( (strlen(zVal)!=3 || sqlite3_strnicmp(zVal, "asc", 3)) + && (strlen(zVal)!=4 || sqlite3_strnicmp(zVal, "desc", 4)) + ){ + sqlite3Fts3ErrMsg(pzErr, "unrecognized order: %s", zVal); + rc = SQLITE_ERROR; + } + bDescIdx = (zVal[0]=='d' || zVal[0]=='D'); + break; + + case 5: /* CONTENT */ + sqlite3_free(zContent); + zContent = zVal; + zVal = 0; + break; + + case 6: /* LANGUAGEID */ + assert( iOpt==6 ); + sqlite3_free(zLanguageid); + zLanguageid = zVal; + zVal = 0; + break; + + case 7: /* NOTINDEXED */ + azNotindexed[nNotindexed++] = zVal; + zVal = 0; + break; + } + } + sqlite3_free(zVal); + } + } + + /* Otherwise, the argument is a column name. */ + else { nString += (int)(strlen(z) + 1); + aCol[nCol++] = z; } } - nCol = argc - 3 - (zTokenizer!=0); - if( zTokenizer==0 ){ - rc = sqlite3Fts3InitTokenizer(pHash, 0, &pTokenizer, 0, pzErr); - if( rc!=SQLITE_OK ){ - return rc; - } - assert( pTokenizer ); - } + + /* If a content=xxx option was specified, the following: + ** + ** 1. Ignore any compress= and uncompress= options. + ** + ** 2. If no column names were specified as part of the CREATE VIRTUAL + ** TABLE statement, use all columns from the content table. + */ + if( rc==SQLITE_OK && zContent ){ + sqlite3_free(zCompress); + sqlite3_free(zUncompress); + zCompress = 0; + zUncompress = 0; + if( nCol==0 ){ + sqlite3_free((void*)aCol); + aCol = 0; + rc = fts3ContentColumns(db, argv[1], zContent,&aCol,&nCol,&nString,pzErr); + + /* If a languageid= option was specified, remove the language id + ** column from the aCol[] array. 
*/ + if( rc==SQLITE_OK && zLanguageid ){ + int j; + for(j=0; j<nCol; j++){ + if( sqlite3_stricmp(zLanguageid, aCol[j])==0 ){ + int k; + for(k=j; k<nCol; k++) aCol[k] = aCol[k+1]; + nCol--; + break; + } + } + } + } + } + if( rc!=SQLITE_OK ) goto fts3_init_out; if( nCol==0 ){ + assert( nString==0 ); + aCol[0] = "content"; + nString = 8; nCol = 1; } + if( pTokenizer==0 ){ + rc = sqlite3Fts3InitTokenizer(pHash, "simple", &pTokenizer, pzErr); + if( rc!=SQLITE_OK ) goto fts3_init_out; + } + assert( pTokenizer ); + + rc = fts3PrefixParameter(zPrefix, &nIndex, &aIndex); + if( rc==SQLITE_ERROR ){ + assert( zPrefix ); + sqlite3Fts3ErrMsg(pzErr, "error parsing prefix parameter: %s", zPrefix); + } + if( rc!=SQLITE_OK ) goto fts3_init_out; + /* Allocate and populate the Fts3Table structure. */ - nByte = sizeof(Fts3Table) + /* Fts3Table */ + nByte = sizeof(Fts3Table) + /* Fts3Table */ nCol * sizeof(char *) + /* azColumn */ + nIndex * sizeof(struct Fts3Index) + /* aIndex */ + nCol * sizeof(u8) + /* abNotindexed */ nName + /* zName */ nDb + /* zDb */ nString; /* Space for azColumn strings */ p = (Fts3Table*)sqlite3_malloc(nByte); if( p==0 ){ rc = SQLITE_NOMEM; goto fts3_init_out; } memset(p, 0, nByte); - p->db = db; p->nColumn = nCol; p->nPendingData = 0; p->azColumn = (char **)&p[1]; p->pTokenizer = pTokenizer; - p->nNodeSize = 1000; p->nMaxPendingData = FTS3_MAX_PENDING_DATA; - zCsr = (char *)&p->azColumn[nCol]; + p->bHasDocsize = (isFts4 && bNoDocsize==0); + p->bHasStat = isFts4; + p->bFts4 = isFts4; + p->bDescIdx = bDescIdx; + p->nAutoincrmerge = 0xff; /* 0xff means setting unknown */ + p->zContentTbl = zContent; + p->zLanguageid = zLanguageid; + zContent = 0; + zLanguageid = 0; + TESTONLY( p->inTransaction = -1 ); + TESTONLY( p->mxSavepoint = -1 ); - fts3HashInit(&p->pendingTerms, FTS3_HASH_STRING, 1); + p->aIndex = (struct Fts3Index *)&p->azColumn[nCol]; + memcpy(p->aIndex, aIndex, sizeof(struct Fts3Index) * nIndex); + p->nIndex = nIndex; + for(i=0; i<nIndex; i++){ + fts3HashInit(&p->aIndex[i].hPending, FTS3_HASH_STRING, 1); + } + p->abNotindexed = (u8 *)&p->aIndex[nIndex]; /* Fill in the zName and zDb fields of the vtab structure. 
*/ + zCsr = (char *)&p->abNotindexed[nCol]; p->zName = zCsr; memcpy(zCsr, argv[2], nName); zCsr += nName; p->zDb = zCsr; memcpy(zCsr, argv[1], nDb); zCsr += nDb; /* Fill in the azColumn array */ - iCol = 0; - for(i=3; i<argc; i++){ - if( argv[i]!=zTokenizer ){ - char *z; - int n; - z = (char *)sqlite3Fts3NextToken(argv[i], &n); - memcpy(zCsr, z, n); - zCsr[n] = '\0'; - sqlite3Fts3Dequote(zCsr); - p->azColumn[iCol++] = zCsr; - zCsr += n+1; - assert( zCsr <= &((char *)p)[nByte] ); - } - } - if( iCol==0 ){ - assert( nCol==1 ); - p->azColumn[0] = "content"; - } + for(iCol=0; iCol<nCol; iCol++){ + char *z; + int n = 0; + z = (char *)sqlite3Fts3NextToken(aCol[iCol], &n); + memcpy(zCsr, z, n); + zCsr[n] = '\0'; + sqlite3Fts3Dequote(zCsr); + p->azColumn[iCol] = zCsr; + zCsr += n+1; + assert( zCsr <= &((char *)p)[nByte] ); + } + + /* Fill in the abNotindexed array */ + for(iCol=0; iCol<nCol; iCol++){ + int n = (int)strlen(p->azColumn[iCol]); + for(i=0; i<nNotindexed; i++){ + char *zNot = azNotindexed[i]; + if( zNot && n==(int)strlen(zNot) + && 0==sqlite3_strnicmp(p->azColumn[iCol], zNot, n) + ){ + p->abNotindexed[iCol] = 1; + sqlite3_free(zNot); + azNotindexed[i] = 0; + } + } + } + for(i=0; i<nNotindexed; i++){ + if( azNotindexed[i] ){ + sqlite3Fts3ErrMsg(pzErr, "no such column: %s", azNotindexed[i]); + rc = SQLITE_ERROR; + } + } + + if( rc==SQLITE_OK && (zCompress==0)!=(zUncompress==0) ){ + char const *zMiss = (zCompress==0 ? "compress" : "uncompress"); + rc = SQLITE_ERROR; + sqlite3Fts3ErrMsg(pzErr, "missing %s parameter in fts4 constructor", zMiss); + } + p->zReadExprlist = fts3ReadExprList(p, zUncompress, &rc); + p->zWriteExprlist = fts3WriteExprList(p, zCompress, &rc); + if( rc!=SQLITE_OK ) goto fts3_init_out; /* If this is an xCreate call, create the underlying tables in the ** database. TODO: For xConnect(), it could verify that said tables exist. */ if( isCreate ){ - p->bHasContent = 1; - p->bHasDocsize = argv[0][3]=='4'; rc = fts3CreateTables(p); - }else{ - rc = SQLITE_OK; - fts3TableExists(&rc, db, argv[1], argv[2], "_content", &p->bHasContent); - fts3TableExists(&rc, db, argv[1], argv[2], "_docsize", &p->bHasDocsize); - } - if( rc!=SQLITE_OK ) goto fts3_init_out; - - rc = fts3DeclareVtab(p); - if( rc!=SQLITE_OK ) goto fts3_init_out; - - *ppVTab = &p->base; + } + + /* Check to see if a legacy fts3 table has been "upgraded" by the + ** addition of a %_stat table so that it can use incremental merge. + */ + if( !isFts4 && !isCreate ){ + p->bHasStat = 2; + } + + /* Figure out the page-size for the database. This is required in order to + ** estimate the cost of loading large doclists from the database. */ + fts3DatabasePageSize(&rc, p); + p->nNodeSize = p->nPgsz-35; + + /* Declare the table schema to SQLite. 
*/ + fts3DeclareVtab(&rc, p); fts3_init_out: - assert( p || (pTokenizer && rc!=SQLITE_OK) ); + sqlite3_free(zPrefix); + sqlite3_free(aIndex); + sqlite3_free(zCompress); + sqlite3_free(zUncompress); + sqlite3_free(zContent); + sqlite3_free(zLanguageid); + for(i=0; i<nNotindexed; i++) sqlite3_free(azNotindexed[i]); + sqlite3_free((void *)aCol); + sqlite3_free((void *)azNotindexed); if( rc!=SQLITE_OK ){ if( p ){ fts3DisconnectMethod((sqlite3_vtab *)p); - }else{ + }else if( pTokenizer ){ pTokenizer->pModule->xDestroy(pTokenizer); } + }else{ + assert( p->pSegments==0 ); + *ppVTab = &p->base; } return rc; } /* @@ -101242,10 +139485,36 @@ sqlite3_vtab **ppVtab, /* OUT: New sqlite3_vtab object */ char **pzErr /* OUT: sqlite3_malloc'd error message */ ){ return fts3InitVtab(1, db, pAux, argc, argv, ppVtab, pzErr); } + +/* +** Set the pIdxInfo->estimatedRows variable to nRow. Unless this +** extension is currently being used by a version of SQLite too old to +** support estimatedRows. In that case this function is a no-op. +*/ +static void fts3SetEstimatedRows(sqlite3_index_info *pIdxInfo, i64 nRow){ +#if SQLITE_VERSION_NUMBER>=3008002 + if( sqlite3_libversion_number()>=3008002 ){ + pIdxInfo->estimatedRows = nRow; + } +#endif +} + +/* +** Set the SQLITE_INDEX_SCAN_UNIQUE flag in pIdxInfo->flags. Unless this +** extension is currently being used by a version of SQLite too old to +** support index-info flags. In that case this function is a no-op. +*/ +static void fts3SetUniqueFlag(sqlite3_index_info *pIdxInfo){ +#if SQLITE_VERSION_NUMBER>=3008012 + if( sqlite3_libversion_number()>=3008012 ){ + pIdxInfo->idxFlags |= SQLITE_INDEX_SCAN_UNIQUE; + } +#endif +} /* ** Implementation of the xBestIndex method for FTS3 tables. There ** are three possible strategies, in order of preference: ** @@ -101255,25 +139524,44 @@ */ static int fts3BestIndexMethod(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo){ Fts3Table *p = (Fts3Table *)pVTab; int i; /* Iterator variable */ int iCons = -1; /* Index of constraint to use */ + + int iLangidCons = -1; /* Index of langid=x constraint, if present */ + int iDocidGe = -1; /* Index of docid>=x constraint, if present */ + int iDocidLe = -1; /* Index of docid<=x constraint, if present */ + int iIdx; /* By default use a full table scan. This is an expensive option, ** so search through the constraints to see if a more efficient ** strategy is possible. */ pInfo->idxNum = FTS3_FULLSCAN_SEARCH; - pInfo->estimatedCost = 500000; + pInfo->estimatedCost = 5000000; for(i=0; i<pInfo->nConstraint; i++){ + int bDocid; /* True if this constraint is on docid */ struct sqlite3_index_constraint *pCons = &pInfo->aConstraint[i]; - if( pCons->usable==0 ) continue; + if( pCons->usable==0 ){ + if( pCons->op==SQLITE_INDEX_CONSTRAINT_MATCH ){ + /* There exists an unusable MATCH constraint. This means that if + ** the planner does elect to use the results of this call as part + ** of the overall query plan the user will see an "unable to use + ** function MATCH in the requested context" error. To discourage + ** this, return a very high cost here. */ + pInfo->idxNum = FTS3_FULLSCAN_SEARCH; + pInfo->estimatedCost = 1e50; + fts3SetEstimatedRows(pInfo, ((sqlite3_int64)1) << 50); + return SQLITE_OK; + } + continue; + } + + bDocid = (pCons->iColumn<0 || pCons->iColumn==p->nColumn+1); /* A direct lookup on the rowid or docid column. Assign a cost of 1.0. 
*/ - if( pCons->op==SQLITE_INDEX_CONSTRAINT_EQ - && (pCons->iColumn<0 || pCons->iColumn==p->nColumn+1 ) - ){ + if( iCons<0 && pCons->op==SQLITE_INDEX_CONSTRAINT_EQ && bDocid ){ pInfo->idxNum = FTS3_DOCID_SEARCH; pInfo->estimatedCost = 1.0; iCons = i; } @@ -101290,18 +139578,71 @@ && pCons->iColumn>=0 && pCons->iColumn<=p->nColumn ){ pInfo->idxNum = FTS3_FULLTEXT_SEARCH + pCons->iColumn; pInfo->estimatedCost = 2.0; iCons = i; - break; + } + + /* Equality constraint on the langid column */ + if( pCons->op==SQLITE_INDEX_CONSTRAINT_EQ + && pCons->iColumn==p->nColumn + 2 + ){ + iLangidCons = i; + } + + if( bDocid ){ + switch( pCons->op ){ + case SQLITE_INDEX_CONSTRAINT_GE: + case SQLITE_INDEX_CONSTRAINT_GT: + iDocidGe = i; + break; + + case SQLITE_INDEX_CONSTRAINT_LE: + case SQLITE_INDEX_CONSTRAINT_LT: + iDocidLe = i; + break; + } } } + /* If using a docid=? or rowid=? strategy, set the UNIQUE flag. */ + if( pInfo->idxNum==FTS3_DOCID_SEARCH ) fts3SetUniqueFlag(pInfo); + + iIdx = 1; if( iCons>=0 ){ - pInfo->aConstraintUsage[iCons].argvIndex = 1; + pInfo->aConstraintUsage[iCons].argvIndex = iIdx++; pInfo->aConstraintUsage[iCons].omit = 1; } + if( iLangidCons>=0 ){ + pInfo->idxNum |= FTS3_HAVE_LANGID; + pInfo->aConstraintUsage[iLangidCons].argvIndex = iIdx++; + } + if( iDocidGe>=0 ){ + pInfo->idxNum |= FTS3_HAVE_DOCID_GE; + pInfo->aConstraintUsage[iDocidGe].argvIndex = iIdx++; + } + if( iDocidLe>=0 ){ + pInfo->idxNum |= FTS3_HAVE_DOCID_LE; + pInfo->aConstraintUsage[iDocidLe].argvIndex = iIdx++; + } + + /* Regardless of the strategy selected, FTS can deliver rows in rowid (or + ** docid) order. Both ascending and descending are possible. + */ + if( pInfo->nOrderBy==1 ){ + struct sqlite3_index_orderby *pOrder = &pInfo->aOrderBy[0]; + if( pOrder->iColumn<0 || pOrder->iColumn==p->nColumn+1 ){ + if( pOrder->desc ){ + pInfo->idxStr = "DESC"; + }else{ + pInfo->idxStr = "ASC"; + } + pInfo->orderByConsumed = 1; + } + } + + assert( p->pSegments==0 ); return SQLITE_OK; } /* ** Implementation of xOpen method. @@ -101325,176 +139666,260 @@ /* ** Close the cursor. For additional information see the documentation ** on the xClose method of the virtual table interface. */ -static int fulltextClose(sqlite3_vtab_cursor *pCursor){ +static int fts3CloseMethod(sqlite3_vtab_cursor *pCursor){ Fts3Cursor *pCsr = (Fts3Cursor *)pCursor; + assert( ((Fts3Table *)pCsr->base.pVtab)->pSegments==0 ); sqlite3_finalize(pCsr->pStmt); sqlite3Fts3ExprFree(pCsr->pExpr); + sqlite3Fts3FreeDeferredTokens(pCsr); sqlite3_free(pCsr->aDoclist); - sqlite3_free(pCsr->aMatchinfo); + sqlite3Fts3MIBufferFree(pCsr->pMIBuffer); + assert( ((Fts3Table *)pCsr->base.pVtab)->pSegments==0 ); sqlite3_free(pCsr); return SQLITE_OK; } + +/* +** If pCsr->pStmt has not been prepared (i.e. if pCsr->pStmt==0), then +** compose and prepare an SQL statement of the form: +** +** "SELECT <columns> FROM %_content WHERE rowid = ?" +** +** (or the equivalent for a content=xxx table) and set pCsr->pStmt to +** it. If an error occurs, return an SQLite error code. +** +** Otherwise, set *ppStmt to point to pCsr->pStmt and return SQLITE_OK. 
+*/ +static int fts3CursorSeekStmt(Fts3Cursor *pCsr, sqlite3_stmt **ppStmt){ + int rc = SQLITE_OK; + if( pCsr->pStmt==0 ){ + Fts3Table *p = (Fts3Table *)pCsr->base.pVtab; + char *zSql; + zSql = sqlite3_mprintf("SELECT %s WHERE rowid = ?", p->zReadExprlist); + if( !zSql ) return SQLITE_NOMEM; + rc = sqlite3_prepare_v2(p->db, zSql, -1, &pCsr->pStmt, 0); + sqlite3_free(zSql); + } + *ppStmt = pCsr->pStmt; + return rc; +} /* ** Position the pCsr->pStmt statement so that it is on the row ** of the %_content table that contains the last match. Return ** SQLITE_OK on success. */ static int fts3CursorSeek(sqlite3_context *pContext, Fts3Cursor *pCsr){ + int rc = SQLITE_OK; if( pCsr->isRequireSeek ){ - pCsr->isRequireSeek = 0; - sqlite3_bind_int64(pCsr->pStmt, 1, pCsr->iPrevId); - if( SQLITE_ROW==sqlite3_step(pCsr->pStmt) ){ - return SQLITE_OK; - }else{ - int rc = sqlite3_reset(pCsr->pStmt); - if( rc==SQLITE_OK ){ - /* If no row was found and no error has occured, then the %_content - ** table is missing a row that is present in the full-text index. - ** The data structures are corrupt. - */ - rc = SQLITE_CORRUPT; - } - pCsr->isEof = 1; - if( pContext ){ - sqlite3_result_error_code(pContext, rc); - } - return rc; - } - }else{ - return SQLITE_OK; - } + sqlite3_stmt *pStmt = 0; + + rc = fts3CursorSeekStmt(pCsr, &pStmt); + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pCsr->pStmt, 1, pCsr->iPrevId); + pCsr->isRequireSeek = 0; + if( SQLITE_ROW==sqlite3_step(pCsr->pStmt) ){ + return SQLITE_OK; + }else{ + rc = sqlite3_reset(pCsr->pStmt); + if( rc==SQLITE_OK && ((Fts3Table *)pCsr->base.pVtab)->zContentTbl==0 ){ + /* If no row was found and no error has occurred, then the %_content + ** table is missing a row that is present in the full-text index. + ** The data structures are corrupt. */ + rc = FTS_CORRUPT_VTAB; + pCsr->isEof = 1; + } + } + } + } + + if( rc!=SQLITE_OK && pContext ){ + sqlite3_result_error_code(pContext, rc); + } + return rc; } /* -** Advance the cursor to the next row in the %_content table that -** matches the search criteria. For a MATCH search, this will be -** the next row that matches. For a full-table scan, this will be -** simply the next row in the %_content table. For a docid lookup, -** this routine simply sets the EOF flag. +** This function is used to process a single interior node when searching +** a b-tree for a term or term prefix. The node data is passed to this +** function via the zNode/nNode parameters. The term to search for is +** passed in zTerm/nTerm. ** -** Return SQLITE_OK if nothing goes wrong. SQLITE_OK is returned -** even if we reach end-of-file. The fts3EofMethod() will be called -** subsequently to determine whether or not an EOF was hit. +** If piFirst is not NULL, then this function sets *piFirst to the blockid +** of the child node that heads the sub-tree that may contain the term. +** +** If piLast is not NULL, then *piLast is set to the right-most child node +** that heads a sub-tree that may contain a term for which zTerm/nTerm is +** a prefix. +** +** If an OOM error occurs, SQLITE_NOMEM is returned. Otherwise, SQLITE_OK. 
*/ -static int fts3NextMethod(sqlite3_vtab_cursor *pCursor){ +static int fts3ScanInteriorNode( + const char *zTerm, /* Term to select leaves for */ + int nTerm, /* Size of term zTerm in bytes */ + const char *zNode, /* Buffer containing segment interior node */ + int nNode, /* Size of buffer at zNode */ + sqlite3_int64 *piFirst, /* OUT: Selected child node */ + sqlite3_int64 *piLast /* OUT: Selected child node */ +){ int rc = SQLITE_OK; /* Return code */ - Fts3Cursor *pCsr = (Fts3Cursor *)pCursor; - - if( pCsr->aDoclist==0 ){ - if( SQLITE_ROW!=sqlite3_step(pCsr->pStmt) ){ - pCsr->isEof = 1; - rc = sqlite3_reset(pCsr->pStmt); - } - }else if( pCsr->pNextId>=&pCsr->aDoclist[pCsr->nDoclist] ){ - pCsr->isEof = 1; - }else{ - sqlite3_reset(pCsr->pStmt); - fts3GetDeltaVarint(&pCsr->pNextId, &pCsr->iPrevId); - pCsr->isRequireSeek = 1; - pCsr->isMatchinfoNeeded = 1; - } - return rc; -} - - -/* -** The buffer pointed to by argument zNode (size nNode bytes) contains the -** root node of a b-tree segment. The segment is guaranteed to be at least -** one level high (i.e. the root node is not also a leaf). If successful, -** this function locates the leaf node of the segment that may contain the -** term specified by arguments zTerm and nTerm and writes its block number -** to *piLeaf. -** -** It is possible that the returned leaf node does not contain the specified -** term. However, if the segment does contain said term, it is stored on -** the identified leaf node. Because this function only inspects interior -** segment nodes (and never loads leaf nodes into memory), it is not possible -** to be sure. + const char *zCsr = zNode; /* Cursor to iterate through node */ + const char *zEnd = &zCsr[nNode];/* End of interior node buffer */ + char *zBuffer = 0; /* Buffer to load terms into */ + int nAlloc = 0; /* Size of allocated buffer */ + int isFirstTerm = 1; /* True when processing first term on page */ + sqlite3_int64 iChild; /* Block id of child node to descend to */ + + /* Skip over the 'height' varint that occurs at the start of every + ** interior node. Then load the blockid of the left-child of the b-tree + ** node into variable iChild. + ** + ** Even if the data structure on disk is corrupted, this (reading two + ** varints from the buffer) does not risk an overread. If zNode is a + ** root node, then the buffer comes from a SELECT statement. SQLite does + ** not make this guarantee explicitly, but in practice there are always + ** either more than 20 bytes of allocated space following the nNode bytes of + ** contents, or two zero bytes. Or, if the node is read from the %_segments + ** table, then there are always 20 bytes of zeroed padding following the + ** nNode bytes of content (see sqlite3Fts3ReadBlock() for details). + */ + zCsr += sqlite3Fts3GetVarint(zCsr, &iChild); + zCsr += sqlite3Fts3GetVarint(zCsr, &iChild); + if( zCsr>zEnd ){ + return FTS_CORRUPT_VTAB; + } + + while( zCsr<zEnd && (piFirst || piLast) ){ + int cmp; /* memcmp() result */ + int nSuffix; /* Size of term suffix */ + int nPrefix = 0; /* Size of term prefix */ + int nBuffer; /* Total term size */ + + /* Load the next term on the node into zBuffer. Use realloc() to expand + ** the size of zBuffer if required. 
*/ + if( !isFirstTerm ){ + zCsr += fts3GetVarint32(zCsr, &nPrefix); + } + isFirstTerm = 0; + zCsr += fts3GetVarint32(zCsr, &nSuffix); + + if( nPrefix<0 || nSuffix<0 || &zCsr[nSuffix]>zEnd ){ + rc = FTS_CORRUPT_VTAB; + goto finish_scan; + } + if( nPrefix+nSuffix>nAlloc ){ + char *zNew; + nAlloc = (nPrefix+nSuffix) * 2; + zNew = (char *)sqlite3_realloc(zBuffer, nAlloc); + if( !zNew ){ + rc = SQLITE_NOMEM; + goto finish_scan; + } + zBuffer = zNew; + } + assert( zBuffer ); + memcpy(&zBuffer[nPrefix], zCsr, nSuffix); + nBuffer = nPrefix + nSuffix; + zCsr += nSuffix; + + /* Compare the term we are searching for with the term just loaded from + ** the interior node. If the specified term is greater than or equal + ** to the term from the interior node, then all terms on the sub-tree + ** headed by node iChild are smaller than zTerm. No need to search + ** iChild. + ** + ** If the interior node term is larger than the specified term, then + ** the tree headed by iChild may contain the specified term. + */ + cmp = memcmp(zTerm, zBuffer, (nBuffer>nTerm ? nTerm : nBuffer)); + if( piFirst && (cmp<0 || (cmp==0 && nBuffer>nTerm)) ){ + *piFirst = iChild; + piFirst = 0; + } + + if( piLast && cmp<0 ){ + *piLast = iChild; + piLast = 0; + } + + iChild++; + }; + + if( piFirst ) *piFirst = iChild; + if( piLast ) *piLast = iChild; + + finish_scan: + sqlite3_free(zBuffer); + return rc; +} + + +/* +** The buffer pointed to by argument zNode (size nNode bytes) contains an +** interior node of a b-tree segment. The zTerm buffer (size nTerm bytes) +** contains a term. This function searches the sub-tree headed by the zNode +** node for the range of leaf nodes that may contain the specified term +** or terms for which the specified term is a prefix. +** +** If piLeaf is not NULL, then *piLeaf is set to the blockid of the +** left-most leaf node in the tree that may contain the specified term. +** If piLeaf2 is not NULL, then *piLeaf2 is set to the blockid of the +** right-most leaf node that may contain a term for which the specified +** term is a prefix. +** +** It is possible that the range of returned leaf nodes does not contain +** the specified term or any terms for which it is a prefix. However, if the +** segment does contain any such terms, they are stored within the identified +** range. Because this function only inspects interior segment nodes (and +** never loads leaf nodes into memory), it is not possible to be sure. ** ** If an error occurs, an error code other than SQLITE_OK is returned. 
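+**
+** A hedged example of the prefix case: if zTerm is "ab" and both piLeaf
+** and piLeaf2 are supplied, *piLeaf is set to the left-most leaf that may
+** contain the term "ab" itself, and *piLeaf2 to the right-most leaf that
+** may still hold terms beginning with "ab" (such as "abacus"). Any term
+** in the segment with prefix "ab" is guaranteed to lie on a leaf within
+** that inclusive range, although the range may also span unrelated terms.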
*/ static int fts3SelectLeaf( Fts3Table *p, /* Virtual table handle */ const char *zTerm, /* Term to select leaves for */ int nTerm, /* Size of term zTerm in bytes */ const char *zNode, /* Buffer containing segment interior node */ int nNode, /* Size of buffer at zNode */ - sqlite3_int64 *piLeaf /* Selected leaf node */ + sqlite3_int64 *piLeaf, /* Selected leaf node */ + sqlite3_int64 *piLeaf2 /* Selected leaf node */ ){ int rc = SQLITE_OK; /* Return code */ - const char *zCsr = zNode; /* Cursor to iterate through node */ - const char *zEnd = &zCsr[nNode];/* End of interior node buffer */ - char *zBuffer = 0; /* Buffer to load terms into */ - int nAlloc = 0; /* Size of allocated buffer */ - - while( 1 ){ - int isFirstTerm = 1; /* True when processing first term on page */ - int iHeight; /* Height of this node in tree */ - sqlite3_int64 iChild; /* Block id of child node to descend to */ - int nBlock; /* Size of child node in bytes */ - - zCsr += sqlite3Fts3GetVarint32(zCsr, &iHeight); - zCsr += sqlite3Fts3GetVarint(zCsr, &iChild); - - while( zCsr<zEnd ){ - int cmp; /* memcmp() result */ - int nSuffix; /* Size of term suffix */ - int nPrefix = 0; /* Size of term prefix */ - int nBuffer; /* Total term size */ - - /* Load the next term on the node into zBuffer */ - if( !isFirstTerm ){ - zCsr += sqlite3Fts3GetVarint32(zCsr, &nPrefix); - } - isFirstTerm = 0; - zCsr += sqlite3Fts3GetVarint32(zCsr, &nSuffix); - if( nPrefix+nSuffix>nAlloc ){ - char *zNew; - nAlloc = (nPrefix+nSuffix) * 2; - zNew = (char *)sqlite3_realloc(zBuffer, nAlloc); - if( !zNew ){ - sqlite3_free(zBuffer); - return SQLITE_NOMEM; - } - zBuffer = zNew; - } - memcpy(&zBuffer[nPrefix], zCsr, nSuffix); - nBuffer = nPrefix + nSuffix; - zCsr += nSuffix; - - /* Compare the term we are searching for with the term just loaded from - ** the interior node. If the specified term is greater than or equal - ** to the term from the interior node, then all terms on the sub-tree - ** headed by node iChild are smaller than zTerm. No need to search - ** iChild. - ** - ** If the interior node term is larger than the specified term, then - ** the tree headed by iChild may contain the specified term. - */ - cmp = memcmp(zTerm, zBuffer, (nBuffer>nTerm ? nTerm : nBuffer)); - if( cmp<0 || (cmp==0 && nBuffer>nTerm) ) break; - iChild++; - }; - - /* If (iHeight==1), the children of this interior node are leaves. The - ** specified term may be present on leaf node iChild. - */ - if( iHeight==1 ){ - *piLeaf = iChild; - break; - } - - /* Descend to interior node iChild. 
*/ - rc = sqlite3Fts3ReadBlock(p, iChild, &zCsr, &nBlock); - if( rc!=SQLITE_OK ) break; - zEnd = &zCsr[nBlock]; - } - sqlite3_free(zBuffer); + int iHeight; /* Height of this node in tree */ + + assert( piLeaf || piLeaf2 ); + + fts3GetVarint32(zNode, &iHeight); + rc = fts3ScanInteriorNode(zTerm, nTerm, zNode, nNode, piLeaf, piLeaf2); + assert( !piLeaf2 || !piLeaf || rc!=SQLITE_OK || (*piLeaf<=*piLeaf2) ); + + if( rc==SQLITE_OK && iHeight>1 ){ + char *zBlob = 0; /* Blob read from %_segments table */ + int nBlob = 0; /* Size of zBlob in bytes */ + + if( piLeaf && piLeaf2 && (*piLeaf!=*piLeaf2) ){ + rc = sqlite3Fts3ReadBlock(p, *piLeaf, &zBlob, &nBlob, 0); + if( rc==SQLITE_OK ){ + rc = fts3SelectLeaf(p, zTerm, nTerm, zBlob, nBlob, piLeaf, 0); + } + sqlite3_free(zBlob); + piLeaf = 0; + zBlob = 0; + } + + if( rc==SQLITE_OK ){ + rc = sqlite3Fts3ReadBlock(p, piLeaf?*piLeaf:*piLeaf2, &zBlob, &nBlob, 0); + } + if( rc==SQLITE_OK ){ + rc = fts3SelectLeaf(p, zTerm, nTerm, zBlob, nBlob, piLeaf, piLeaf2); + } + sqlite3_free(zBlob); + } + return rc; } /* ** This function is used to create delta-encoded serialized lists of FTS3 @@ -101666,15 +140091,15 @@ while( *p1 || *p2 ){ int iCol1; /* The current column index in pp1 */ int iCol2; /* The current column index in pp2 */ - if( *p1==POS_COLUMN ) sqlite3Fts3GetVarint32(&p1[1], &iCol1); + if( *p1==POS_COLUMN ) fts3GetVarint32(&p1[1], &iCol1); else if( *p1==POS_END ) iCol1 = POSITION_LIST_END; else iCol1 = 0; - if( *p2==POS_COLUMN ) sqlite3Fts3GetVarint32(&p2[1], &iCol2); + if( *p2==POS_COLUMN ) fts3GetVarint32(&p2[1], &iCol2); else if( *p2==POS_END ) iCol2 = POSITION_LIST_END; else iCol2 = 0; if( iCol1==iCol2 ){ sqlite3_int64 i1 = 0; /* Last position from pp1 */ @@ -101721,43 +140146,67 @@ *pp1 = p1 + 1; *pp2 = p2 + 1; } /* -** nToken==1 searches for adjacent positions. +** This function is used to merge two position lists into one. When it is +** called, *pp1 and *pp2 must both point to position lists. A position-list is +** the part of a doclist that follows each document id. For example, if a row +** contains: +** +** 'a b c'|'x y z'|'a b b a' +** +** Then the position list for this row for token 'b' would consist of: +** +** 0x02 0x01 0x02 0x03 0x03 0x00 +** +** When this function returns, both *pp1 and *pp2 are left pointing to the +** byte following the 0x00 terminator of their respective position lists. +** +** If isSaveLeft is 0, an entry is added to the output position list for +** each position in *pp2 for which there exists one or more positions in +** *pp1 so that (pos(*pp2)>pos(*pp1) && pos(*pp2)-pos(*pp1)<=nToken). i.e. +** when the *pp1 token appears before the *pp2 token, but not more than nToken +** slots before it. +** +** e.g. nToken==1 searches for adjacent positions. */ static int fts3PoslistPhraseMerge( - char **pp, /* Output buffer */ + char **pp, /* IN/OUT: Preallocated output buffer */ int nToken, /* Maximum difference in token positions */ int isSaveLeft, /* Save the left position */ - char **pp1, /* Left input list */ - char **pp2 /* Right input list */ + int isExact, /* If *pp1 is exactly nTokens before *pp2 */ + char **pp1, /* IN/OUT: Left input list */ + char **pp2 /* IN/OUT: Right input list */ ){ - char *p = (pp ? *pp : 0); + char *p = *pp; char *p1 = *pp1; char *p2 = *pp2; - int iCol1 = 0; int iCol2 = 0; - assert( *p1!=0 && *p2!=0 ); + + /* Never set both isSaveLeft and isExact for the same invocation. 
*/ + assert( isSaveLeft==0 || isExact==0 ); + + assert( p!=0 && *p1!=0 && *p2!=0 ); if( *p1==POS_COLUMN ){ p1++; - p1 += sqlite3Fts3GetVarint32(p1, &iCol1); + p1 += fts3GetVarint32(p1, &iCol1); } if( *p2==POS_COLUMN ){ p2++; - p2 += sqlite3Fts3GetVarint32(p2, &iCol2); + p2 += fts3GetVarint32(p2, &iCol2); } while( 1 ){ if( iCol1==iCol2 ){ char *pSave = p; sqlite3_int64 iPrev = 0; sqlite3_int64 iPos1 = 0; sqlite3_int64 iPos2 = 0; - if( pp && iCol1 ){ + if( iCol1 ){ *p++ = POS_COLUMN; p += sqlite3Fts3PutVarint(p, iCol1); } assert( *p1!=POS_END && *p1!=POS_COLUMN ); @@ -101764,22 +140213,18 @@ assert( *p2!=POS_END && *p2!=POS_COLUMN ); fts3GetDeltaVarint(&p1, &iPos1); iPos1 -= 2; fts3GetDeltaVarint(&p2, &iPos2); iPos2 -= 2; while( 1 ){ - if( iPos2>iPos1 && iPos2<=iPos1+nToken ){ + if( iPos2==iPos1+nToken + || (isExact==0 && iPos2>iPos1 && iPos2<=iPos1+nToken) + ){ sqlite3_int64 iSave; - if( !pp ){ - fts3PoslistCopy(0, &p2); - fts3PoslistCopy(0, &p1); - *pp1 = p1; - *pp2 = p2; - return 1; - } iSave = isSaveLeft ? iPos1 : iPos2; fts3PutDeltaVarint(&p, &iPrev, iSave+2); iPrev -= 2; pSave = 0; + assert( p ); } if( (!isSaveLeft && iPos2<=(iPos1+nToken)) || iPos2<=iPos1 ){ if( (*p2&0xFE)==0 ) break; fts3GetDeltaVarint(&p2, &iPos2); iPos2 -= 2; }else{ @@ -101797,13 +140242,13 @@ fts3ColumnlistCopy(0, &p2); assert( (*p1&0xFE)==0 && (*p2&0xFE)==0 ); if( 0==*p1 || 0==*p2 ) break; p1++; - p1 += sqlite3Fts3GetVarint32(p1, &iCol1); + p1 += fts3GetVarint32(p1, &iCol1); p2++; - p2 += sqlite3Fts3GetVarint32(p2, &iCol2); + p2 += fts3GetVarint32(p2, &iCol2); } /* Advance pointer p1 or p2 (whichever corresponds to the smaller of ** iCol1 and iCol2) so that it points to either the 0x00 that marks the ** end of the position list, or the 0x01 that precedes the next @@ -101811,33 +140256,45 @@ */ else if( iCol1<iCol2 ){ fts3ColumnlistCopy(0, &p1); if( 0==*p1 ) break; p1++; - p1 += sqlite3Fts3GetVarint32(p1, &iCol1); + p1 += fts3GetVarint32(p1, &iCol1); }else{ fts3ColumnlistCopy(0, &p2); if( 0==*p2 ) break; p2++; - p2 += sqlite3Fts3GetVarint32(p2, &iCol2); + p2 += fts3GetVarint32(p2, &iCol2); } } fts3PoslistCopy(0, &p2); fts3PoslistCopy(0, &p1); *pp1 = p1; *pp2 = p2; - if( !pp || *pp==p ){ + if( *pp==p ){ return 0; } *p++ = 0x00; *pp = p; return 1; } /* -** Merge two position-lists as required by the NEAR operator. +** Merge two position-lists as required by the NEAR operator. The argument +** position lists correspond to the left and right phrases of an expression +** like: +** +** "phrase 1" NEAR "phrase number 2" +** +** Position list *pp1 corresponds to the left-hand side of the NEAR +** expression and *pp2 to the right. As usual, the indexes in the position +** lists are the offsets of the last token in each phrase (tokens "1" and "2" +** in the example above). +** +** The output position list - written to *pp - is a copy of *pp2 with those +** entries that are not sufficiently NEAR entries in *pp1 removed. 
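+**
+** A hedged illustration, assuming two single-token phrases and a query
+** such as (a NEAR/2 b): a position of "b" survives the merge only if some
+** position of "a" is separated from it by no more than two intervening
+** tokens, on either side. Internally, one fts3PoslistPhraseMerge() pass
+** (using nRight) keeps the "b" positions with a qualifying "a" before
+** them, a second pass (using nLeft) keeps those with a qualifying "a"
+** after them, and the two partial results are merged into *pp.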
*/ static int fts3PoslistNearMerge( char **pp, /* Output buffer */ char *aTmp, /* Temporary buffer space */ int nRight, /* Maximum difference in token positions */ @@ -101846,637 +140303,826 @@ char **pp2 /* IN/OUT: Right input list */ ){ char *p1 = *pp1; char *p2 = *pp2; - if( !pp ){ - if( fts3PoslistPhraseMerge(0, nRight, 0, pp1, pp2) ) return 1; - *pp1 = p1; - *pp2 = p2; - return fts3PoslistPhraseMerge(0, nLeft, 0, pp2, pp1); - }else{ - char *pTmp1 = aTmp; - char *pTmp2; - char *aTmp2; - int res = 1; - - fts3PoslistPhraseMerge(&pTmp1, nRight, 0, pp1, pp2); - aTmp2 = pTmp2 = pTmp1; - *pp1 = p1; - *pp2 = p2; - fts3PoslistPhraseMerge(&pTmp2, nLeft, 1, pp2, pp1); - if( pTmp1!=aTmp && pTmp2!=aTmp2 ){ - fts3PoslistMerge(pp, &aTmp, &aTmp2); - }else if( pTmp1!=aTmp ){ - fts3PoslistCopy(pp, &aTmp); - }else if( pTmp2!=aTmp2 ){ - fts3PoslistCopy(pp, &aTmp2); - }else{ - res = 0; - } - - return res; - } -} - -/* -** Values that may be used as the first parameter to fts3DoclistMerge(). -*/ -#define MERGE_NOT 2 /* D + D -> D */ -#define MERGE_AND 3 /* D + D -> D */ -#define MERGE_OR 4 /* D + D -> D */ -#define MERGE_POS_OR 5 /* P + P -> P */ -#define MERGE_PHRASE 6 /* P + P -> D */ -#define MERGE_POS_PHRASE 7 /* P + P -> P */ -#define MERGE_NEAR 8 /* P + P -> D */ -#define MERGE_POS_NEAR 9 /* P + P -> P */ - -/* -** Merge the two doclists passed in buffer a1 (size n1 bytes) and a2 -** (size n2 bytes). The output is written to pre-allocated buffer aBuffer, -** which is guaranteed to be large enough to hold the results. The number -** of bytes written to aBuffer is stored in *pnBuffer before returning. -** -** If successful, SQLITE_OK is returned. Otherwise, if a malloc error -** occurs while allocating a temporary buffer as part of the merge operation, -** SQLITE_NOMEM is returned. -*/ -static int fts3DoclistMerge( - int mergetype, /* One of the MERGE_XXX constants */ - int nParam1, /* Used by MERGE_NEAR and MERGE_POS_NEAR */ - int nParam2, /* Used by MERGE_NEAR and MERGE_POS_NEAR */ - char *aBuffer, /* Pre-allocated output buffer */ - int *pnBuffer, /* OUT: Bytes written to aBuffer */ - char *a1, /* Buffer containing first doclist */ - int n1, /* Size of buffer a1 */ - char *a2, /* Buffer containing second doclist */ - int n2 /* Size of buffer a2 */ + char *pTmp1 = aTmp; + char *pTmp2; + char *aTmp2; + int res = 1; + + fts3PoslistPhraseMerge(&pTmp1, nRight, 0, 0, pp1, pp2); + aTmp2 = pTmp2 = pTmp1; + *pp1 = p1; + *pp2 = p2; + fts3PoslistPhraseMerge(&pTmp2, nLeft, 1, 0, pp2, pp1); + if( pTmp1!=aTmp && pTmp2!=aTmp2 ){ + fts3PoslistMerge(pp, &aTmp, &aTmp2); + }else if( pTmp1!=aTmp ){ + fts3PoslistCopy(pp, &aTmp); + }else if( pTmp2!=aTmp2 ){ + fts3PoslistCopy(pp, &aTmp2); + }else{ + res = 0; + } + + return res; +} + +/* +** An instance of this function is used to merge together the (potentially +** large number of) doclists for each term that matches a prefix query. +** See function fts3TermSelectMerge() for details. +*/ +typedef struct TermSelect TermSelect; +struct TermSelect { + char *aaOutput[16]; /* Malloc'd output buffers */ + int anOutput[16]; /* Size each output buffer in bytes */ +}; + +/* +** This function is used to read a single varint from a buffer. Parameter +** pEnd points 1 byte past the end of the buffer. When this function is +** called, if *pp points to pEnd or greater, then the end of the buffer +** has been reached. In this case *pp is set to 0 and the function returns. +** +** If *pp does not point to or past pEnd, then a single varint is read +** from *pp. 
*pp is then set to point 1 byte past the end of the read varint. +** +** If bDescIdx is false, the value read is added to *pVal before returning. +** If it is true, the value read is subtracted from *pVal before this +** function returns. +*/ +static void fts3GetDeltaVarint3( + char **pp, /* IN/OUT: Point to read varint from */ + char *pEnd, /* End of buffer */ + int bDescIdx, /* True if docids are descending */ + sqlite3_int64 *pVal /* IN/OUT: Integer value */ +){ + if( *pp>=pEnd ){ + *pp = 0; + }else{ + sqlite3_int64 iVal; + *pp += sqlite3Fts3GetVarint(*pp, &iVal); + if( bDescIdx ){ + *pVal -= iVal; + }else{ + *pVal += iVal; + } + } +} + +/* +** This function is used to write a single varint to a buffer. The varint +** is written to *pp. Before returning, *pp is set to point 1 byte past the +** end of the value written. +** +** If *pbFirst is zero when this function is called, the value written to +** the buffer is that of parameter iVal. +** +** If *pbFirst is non-zero when this function is called, then the value +** written is either (iVal-*piPrev) (if bDescIdx is zero) or (*piPrev-iVal) +** (if bDescIdx is non-zero). +** +** Before returning, this function always sets *pbFirst to 1 and *piPrev +** to the value of parameter iVal. +*/ +static void fts3PutDeltaVarint3( + char **pp, /* IN/OUT: Output pointer */ + int bDescIdx, /* True for descending docids */ + sqlite3_int64 *piPrev, /* IN/OUT: Previous value written to list */ + int *pbFirst, /* IN/OUT: True after first int written */ + sqlite3_int64 iVal /* Write this value to the list */ +){ + sqlite3_int64 iWrite; + if( bDescIdx==0 || *pbFirst==0 ){ + iWrite = iVal - *piPrev; + }else{ + iWrite = *piPrev - iVal; + } + assert( *pbFirst || *piPrev==0 ); + assert( *pbFirst==0 || iWrite>0 ); + *pp += sqlite3Fts3PutVarint(*pp, iWrite); + *piPrev = iVal; + *pbFirst = 1; +} + + +/* +** This macro is used by various functions that merge doclists. The two +** arguments are 64-bit docid values. If the value of the stack variable +** bDescDoclist is 0 when this macro is invoked, then it returns (i1-i2). +** Otherwise, (i2-i1). +** +** Using this makes it easier to write code that can merge doclists that are +** sorted in either ascending or descending order. +*/ +#define DOCID_CMP(i1, i2) ((bDescDoclist?-1:1) * (i1-i2)) + +/* +** This function does an "OR" merge of two doclists (output contains all +** positions contained in either argument doclist). If the docids in the +** input doclists are sorted in ascending order, parameter bDescDoclist +** should be false. If they are sorted in ascending order, it should be +** passed a non-zero value. +** +** If no error occurs, *paOut is set to point at an sqlite3_malloc'd buffer +** containing the output doclist and SQLITE_OK is returned. In this case +** *pnOut is set to the number of bytes in the output doclist. +** +** If an error occurs, an SQLite error code is returned. The output values +** are undefined in this case. +*/ +static int fts3DoclistOrMerge( + int bDescDoclist, /* True if arguments are desc */ + char *a1, int n1, /* First doclist */ + char *a2, int n2, /* Second doclist */ + char **paOut, int *pnOut /* OUT: Malloc'd doclist */ +){ + sqlite3_int64 i1 = 0; + sqlite3_int64 i2 = 0; + sqlite3_int64 iPrev = 0; + char *pEnd1 = &a1[n1]; + char *pEnd2 = &a2[n2]; + char *p1 = a1; + char *p2 = a2; + char *p; + char *aOut; + int bFirstOut = 0; + + *paOut = 0; + *pnOut = 0; + + /* Allocate space for the output. Both the input and output doclists + ** are delta encoded. 
If they are in ascending order (bDescDoclist==0), + ** then the first docid in each list is simply encoded as a varint. For + ** each subsequent docid, the varint stored is the difference between the + ** current and previous docid (a positive number - since the list is in + ** ascending order). + ** + ** The first docid written to the output is therefore encoded using the + ** same number of bytes as it is in whichever of the input lists it is + ** read from. And each subsequent docid read from the same input list + ** consumes either the same or less bytes as it did in the input (since + ** the difference between it and the previous value in the output must + ** be a positive value less than or equal to the delta value read from + ** the input list). The same argument applies to all but the first docid + ** read from the 'other' list. And to the contents of all position lists + ** that will be copied and merged from the input to the output. + ** + ** However, if the first docid copied to the output is a negative number, + ** then the encoding of the first docid from the 'other' input list may + ** be larger in the output than it was in the input (since the delta value + ** may be a larger positive integer than the actual docid). + ** + ** The space required to store the output is therefore the sum of the + ** sizes of the two inputs, plus enough space for exactly one of the input + ** docids to grow. + ** + ** A symetric argument may be made if the doclists are in descending + ** order. + */ + aOut = sqlite3_malloc(n1+n2+FTS3_VARINT_MAX-1); + if( !aOut ) return SQLITE_NOMEM; + + p = aOut; + fts3GetDeltaVarint3(&p1, pEnd1, 0, &i1); + fts3GetDeltaVarint3(&p2, pEnd2, 0, &i2); + while( p1 || p2 ){ + sqlite3_int64 iDiff = DOCID_CMP(i1, i2); + + if( p2 && p1 && iDiff==0 ){ + fts3PutDeltaVarint3(&p, bDescDoclist, &iPrev, &bFirstOut, i1); + fts3PoslistMerge(&p, &p1, &p2); + fts3GetDeltaVarint3(&p1, pEnd1, bDescDoclist, &i1); + fts3GetDeltaVarint3(&p2, pEnd2, bDescDoclist, &i2); + }else if( !p2 || (p1 && iDiff<0) ){ + fts3PutDeltaVarint3(&p, bDescDoclist, &iPrev, &bFirstOut, i1); + fts3PoslistCopy(&p, &p1); + fts3GetDeltaVarint3(&p1, pEnd1, bDescDoclist, &i1); + }else{ + fts3PutDeltaVarint3(&p, bDescDoclist, &iPrev, &bFirstOut, i2); + fts3PoslistCopy(&p, &p2); + fts3GetDeltaVarint3(&p2, pEnd2, bDescDoclist, &i2); + } + } + + *paOut = aOut; + *pnOut = (int)(p-aOut); + assert( *pnOut<=n1+n2+FTS3_VARINT_MAX-1 ); + return SQLITE_OK; +} + +/* +** This function does a "phrase" merge of two doclists. In a phrase merge, +** the output contains a copy of each position from the right-hand input +** doclist for which there is a position in the left-hand input doclist +** exactly nDist tokens before it. +** +** If the docids in the input doclists are sorted in ascending order, +** parameter bDescDoclist should be false. If they are sorted in ascending +** order, it should be passed a non-zero value. +** +** The right-hand input doclist is overwritten by this function. 
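+**
+** A small hedged example: when matching the phrase "full text", the
+** left-hand doclist is that of "full", the right-hand doclist that of
+** "text", and nDist is 1. For each docid common to both inputs, only the
+** positions of "text" that lie exactly one token after a position of
+** "full" are copied to the output; docids for which no such position
+** exists are omitted from the output entirely.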
+*/ +static int fts3DoclistPhraseMerge( + int bDescDoclist, /* True if arguments are desc */ + int nDist, /* Distance from left to right (1=adjacent) */ + char *aLeft, int nLeft, /* Left doclist */ + char **paRight, int *pnRight /* IN/OUT: Right/output doclist */ ){ sqlite3_int64 i1 = 0; sqlite3_int64 i2 = 0; sqlite3_int64 iPrev = 0; - - char *p = aBuffer; - char *p1 = a1; - char *p2 = a2; - char *pEnd1 = &a1[n1]; - char *pEnd2 = &a2[n2]; - - assert( mergetype==MERGE_OR || mergetype==MERGE_POS_OR - || mergetype==MERGE_AND || mergetype==MERGE_NOT - || mergetype==MERGE_PHRASE || mergetype==MERGE_POS_PHRASE - || mergetype==MERGE_NEAR || mergetype==MERGE_POS_NEAR - ); - - if( !aBuffer ){ - *pnBuffer = 0; - return SQLITE_NOMEM; - } - - /* Read the first docid from each doclist */ - fts3GetDeltaVarint2(&p1, pEnd1, &i1); - fts3GetDeltaVarint2(&p2, pEnd2, &i2); - - switch( mergetype ){ - case MERGE_OR: - case MERGE_POS_OR: - while( p1 || p2 ){ - if( p2 && p1 && i1==i2 ){ - fts3PutDeltaVarint(&p, &iPrev, i1); - if( mergetype==MERGE_POS_OR ) fts3PoslistMerge(&p, &p1, &p2); - fts3GetDeltaVarint2(&p1, pEnd1, &i1); - fts3GetDeltaVarint2(&p2, pEnd2, &i2); - }else if( !p2 || (p1 && i1<i2) ){ - fts3PutDeltaVarint(&p, &iPrev, i1); - if( mergetype==MERGE_POS_OR ) fts3PoslistCopy(&p, &p1); - fts3GetDeltaVarint2(&p1, pEnd1, &i1); - }else{ - fts3PutDeltaVarint(&p, &iPrev, i2); - if( mergetype==MERGE_POS_OR ) fts3PoslistCopy(&p, &p2); - fts3GetDeltaVarint2(&p2, pEnd2, &i2); - } - } - break; - - case MERGE_AND: - while( p1 && p2 ){ - if( i1==i2 ){ - fts3PutDeltaVarint(&p, &iPrev, i1); - fts3GetDeltaVarint2(&p1, pEnd1, &i1); - fts3GetDeltaVarint2(&p2, pEnd2, &i2); - }else if( i1<i2 ){ - fts3GetDeltaVarint2(&p1, pEnd1, &i1); - }else{ - fts3GetDeltaVarint2(&p2, pEnd2, &i2); - } - } - break; - - case MERGE_NOT: - while( p1 ){ - if( p2 && i1==i2 ){ - fts3GetDeltaVarint2(&p1, pEnd1, &i1); - fts3GetDeltaVarint2(&p2, pEnd2, &i2); - }else if( !p2 || i1<i2 ){ - fts3PutDeltaVarint(&p, &iPrev, i1); - fts3GetDeltaVarint2(&p1, pEnd1, &i1); - }else{ - fts3GetDeltaVarint2(&p2, pEnd2, &i2); - } - } - break; - - case MERGE_POS_PHRASE: - case MERGE_PHRASE: { - char **ppPos = (mergetype==MERGE_PHRASE ? 
0 : &p); - while( p1 && p2 ){ - if( i1==i2 ){ - char *pSave = p; - sqlite3_int64 iPrevSave = iPrev; - fts3PutDeltaVarint(&p, &iPrev, i1); - if( 0==fts3PoslistPhraseMerge(ppPos, 1, 0, &p1, &p2) ){ - p = pSave; - iPrev = iPrevSave; - } - fts3GetDeltaVarint2(&p1, pEnd1, &i1); - fts3GetDeltaVarint2(&p2, pEnd2, &i2); - }else if( i1<i2 ){ - fts3PoslistCopy(0, &p1); - fts3GetDeltaVarint2(&p1, pEnd1, &i1); - }else{ - fts3PoslistCopy(0, &p2); - fts3GetDeltaVarint2(&p2, pEnd2, &i2); - } - } - break; - } - - default: assert( mergetype==MERGE_POS_NEAR || mergetype==MERGE_NEAR ); { - char *aTmp = 0; - char **ppPos = 0; - - if( mergetype==MERGE_POS_NEAR ){ - ppPos = &p; - aTmp = sqlite3_malloc(2*(n1+n2+1)); - if( !aTmp ){ - return SQLITE_NOMEM; - } - } - - while( p1 && p2 ){ - if( i1==i2 ){ - char *pSave = p; - sqlite3_int64 iPrevSave = iPrev; - fts3PutDeltaVarint(&p, &iPrev, i1); - - if( !fts3PoslistNearMerge(ppPos, aTmp, nParam1, nParam2, &p1, &p2) ){ - iPrev = iPrevSave; - p = pSave; - } - - fts3GetDeltaVarint2(&p1, pEnd1, &i1); - fts3GetDeltaVarint2(&p2, pEnd2, &i2); - }else if( i1<i2 ){ - fts3PoslistCopy(0, &p1); - fts3GetDeltaVarint2(&p1, pEnd1, &i1); - }else{ - fts3PoslistCopy(0, &p2); - fts3GetDeltaVarint2(&p2, pEnd2, &i2); - } - } - sqlite3_free(aTmp); - break; - } - } - - *pnBuffer = (int)(p-aBuffer); - return SQLITE_OK; -} - -/* -** A pointer to an instance of this structure is used as the context -** argument to sqlite3Fts3SegReaderIterate() -*/ -typedef struct TermSelect TermSelect; -struct TermSelect { - int isReqPos; - char *aOutput; /* Malloc'd output buffer */ - int nOutput; /* Size of output in bytes */ -}; - -/* -** This function is used as the sqlite3Fts3SegReaderIterate() callback when -** querying the full-text index for a doclist associated with a term or -** term-prefix. 
-*/ -static int fts3TermSelectCb( - Fts3Table *p, /* Virtual table object */ - void *pContext, /* Pointer to TermSelect structure */ - char *zTerm, - int nTerm, - char *aDoclist, - int nDoclist -){ - TermSelect *pTS = (TermSelect *)pContext; - int nNew = pTS->nOutput + nDoclist; - char *aNew = sqlite3_malloc(nNew); - - UNUSED_PARAMETER(p); - UNUSED_PARAMETER(zTerm); - UNUSED_PARAMETER(nTerm); - - if( !aNew ){ - return SQLITE_NOMEM; - } - - if( pTS->nOutput==0 ){ + char *aRight = *paRight; + char *pEnd1 = &aLeft[nLeft]; + char *pEnd2 = &aRight[*pnRight]; + char *p1 = aLeft; + char *p2 = aRight; + char *p; + int bFirstOut = 0; + char *aOut; + + assert( nDist>0 ); + if( bDescDoclist ){ + aOut = sqlite3_malloc(*pnRight + FTS3_VARINT_MAX); + if( aOut==0 ) return SQLITE_NOMEM; + }else{ + aOut = aRight; + } + p = aOut; + + fts3GetDeltaVarint3(&p1, pEnd1, 0, &i1); + fts3GetDeltaVarint3(&p2, pEnd2, 0, &i2); + + while( p1 && p2 ){ + sqlite3_int64 iDiff = DOCID_CMP(i1, i2); + if( iDiff==0 ){ + char *pSave = p; + sqlite3_int64 iPrevSave = iPrev; + int bFirstOutSave = bFirstOut; + + fts3PutDeltaVarint3(&p, bDescDoclist, &iPrev, &bFirstOut, i1); + if( 0==fts3PoslistPhraseMerge(&p, nDist, 0, 1, &p1, &p2) ){ + p = pSave; + iPrev = iPrevSave; + bFirstOut = bFirstOutSave; + } + fts3GetDeltaVarint3(&p1, pEnd1, bDescDoclist, &i1); + fts3GetDeltaVarint3(&p2, pEnd2, bDescDoclist, &i2); + }else if( iDiff<0 ){ + fts3PoslistCopy(0, &p1); + fts3GetDeltaVarint3(&p1, pEnd1, bDescDoclist, &i1); + }else{ + fts3PoslistCopy(0, &p2); + fts3GetDeltaVarint3(&p2, pEnd2, bDescDoclist, &i2); + } + } + + *pnRight = (int)(p - aOut); + if( bDescDoclist ){ + sqlite3_free(aRight); + *paRight = aOut; + } + + return SQLITE_OK; +} + +/* +** Argument pList points to a position list nList bytes in size. This +** function checks to see if the position list contains any entries for +** a token in position 0 (of any column). If so, it writes argument iDelta +** to the output buffer pOut, followed by a position list consisting only +** of the entries from pList at position 0, and terminated by an 0x00 byte. +** The value returned is the number of bytes written to pOut (if any). +*/ +SQLITE_PRIVATE int sqlite3Fts3FirstFilter( + sqlite3_int64 iDelta, /* Varint that may be written to pOut */ + char *pList, /* Position list (no 0x00 term) */ + int nList, /* Size of pList in bytes */ + char *pOut /* Write output here */ +){ + int nOut = 0; + int bWritten = 0; /* True once iDelta has been written */ + char *p = pList; + char *pEnd = &pList[nList]; + + if( *p!=0x01 ){ + if( *p==0x02 ){ + nOut += sqlite3Fts3PutVarint(&pOut[nOut], iDelta); + pOut[nOut++] = 0x02; + bWritten = 1; + } + fts3ColumnlistCopy(0, &p); + } + + while( p<pEnd && *p==0x01 ){ + sqlite3_int64 iCol; + p++; + p += sqlite3Fts3GetVarint(p, &iCol); + if( *p==0x02 ){ + if( bWritten==0 ){ + nOut += sqlite3Fts3PutVarint(&pOut[nOut], iDelta); + bWritten = 1; + } + pOut[nOut++] = 0x01; + nOut += sqlite3Fts3PutVarint(&pOut[nOut], iCol); + pOut[nOut++] = 0x02; + } + fts3ColumnlistCopy(0, &p); + } + if( bWritten ){ + pOut[nOut++] = 0x00; + } + + return nOut; +} + + +/* +** Merge all doclists in the TermSelect.aaOutput[] array into a single +** doclist stored in TermSelect.aaOutput[0]. If successful, delete all +** other doclists (except the aaOutput[0] one) and return SQLITE_OK. +** +** If an OOM error occurs, return SQLITE_NOMEM. In this case it is +** the responsibility of the caller to free any doclists left in the +** TermSelect.aaOutput[] array. 
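+**
+** Informally (a sketch of the strategy, not a guarantee of the exact
+** schedule): the aaOutput[] slots behave like the digits of a binary
+** counter. fts3TermSelectMerge() places each incoming doclist in the
+** first free slot, OR-merging it with any occupied lower slots along the
+** way, so slot i tends to hold the merge of roughly 2^i term doclists.
+** This routine then performs the final merge of whichever slots remain
+** occupied, leaving the result in aaOutput[0].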
+*/ +static int fts3TermSelectFinishMerge(Fts3Table *p, TermSelect *pTS){ + char *aOut = 0; + int nOut = 0; + int i; + + /* Loop through the doclists in the aaOutput[] array. Merge them all + ** into a single doclist. + */ + for(i=0; i<SizeofArray(pTS->aaOutput); i++){ + if( pTS->aaOutput[i] ){ + if( !aOut ){ + aOut = pTS->aaOutput[i]; + nOut = pTS->anOutput[i]; + pTS->aaOutput[i] = 0; + }else{ + int nNew; + char *aNew; + + int rc = fts3DoclistOrMerge(p->bDescIdx, + pTS->aaOutput[i], pTS->anOutput[i], aOut, nOut, &aNew, &nNew + ); + if( rc!=SQLITE_OK ){ + sqlite3_free(aOut); + return rc; + } + + sqlite3_free(pTS->aaOutput[i]); + sqlite3_free(aOut); + pTS->aaOutput[i] = 0; + aOut = aNew; + nOut = nNew; + } + } + } + + pTS->aaOutput[0] = aOut; + pTS->anOutput[0] = nOut; + return SQLITE_OK; +} + +/* +** Merge the doclist aDoclist/nDoclist into the TermSelect object passed +** as the first argument. The merge is an "OR" merge (see function +** fts3DoclistOrMerge() for details). +** +** This function is called with the doclist for each term that matches +** a queried prefix. It merges all these doclists into one, the doclist +** for the specified prefix. Since there can be a very large number of +** doclists to merge, the merging is done pair-wise using the TermSelect +** object. +** +** This function returns SQLITE_OK if the merge is successful, or an +** SQLite error code (SQLITE_NOMEM) if an error occurs. +*/ +static int fts3TermSelectMerge( + Fts3Table *p, /* FTS table handle */ + TermSelect *pTS, /* TermSelect object to merge into */ + char *aDoclist, /* Pointer to doclist */ + int nDoclist /* Size of aDoclist in bytes */ +){ + if( pTS->aaOutput[0]==0 ){ /* If this is the first term selected, copy the doclist to the output - ** buffer using memcpy(). TODO: Add a way to transfer control of the - ** aDoclist buffer from the caller so as to avoid the memcpy(). - */ - memcpy(aNew, aDoclist, nDoclist); - }else{ - /* The output buffer is not empty. Merge doclist aDoclist with the - ** existing output. This can only happen with prefix-searches (as - ** searches for exact terms return exactly one doclist). - */ - int mergetype = (pTS->isReqPos ? MERGE_POS_OR : MERGE_OR); - fts3DoclistMerge(mergetype, 0, 0, - aNew, &nNew, pTS->aOutput, pTS->nOutput, aDoclist, nDoclist - ); - } - - sqlite3_free(pTS->aOutput); - pTS->aOutput = aNew; - pTS->nOutput = nNew; - - return SQLITE_OK; -} - -/* -** This function retreives the doclist for the specified term (or term -** prefix) from the database. -** -** The returned doclist may be in one of two formats, depending on the -** value of parameter isReqPos. If isReqPos is zero, then the doclist is -** a sorted list of delta-compressed docids (a bare doclist). If isReqPos -** is non-zero, then the returned list is in the same format as is stored -** in the database without the found length specifier at the start of on-disk -** doclists. -*/ -static int fts3TermSelect( - Fts3Table *p, /* Virtual table handle */ - int iColumn, /* Column to query (or -ve for all columns) */ + ** buffer using memcpy(). + ** + ** Add FTS3_VARINT_MAX bytes of unused space to the end of the + ** allocation. This is so as to ensure that the buffer is big enough + ** to hold the current doclist AND'd with any other doclist. If the + ** doclists are stored in order=ASC order, this padding would not be + ** required (since the size of [doclistA AND doclistB] is always less + ** than or equal to the size of [doclistA] in that case). But this is + ** not true for order=DESC. 
For example, a doclist containing (1, -1) + ** may be smaller than (-1), as in the first example the -1 may be stored + ** as a single-byte delta, whereas in the second it must be stored as a + ** FTS3_VARINT_MAX byte varint. + ** + ** Similar padding is added in the fts3DoclistOrMerge() function. + */ + pTS->aaOutput[0] = sqlite3_malloc(nDoclist + FTS3_VARINT_MAX + 1); + pTS->anOutput[0] = nDoclist; + if( pTS->aaOutput[0] ){ + memcpy(pTS->aaOutput[0], aDoclist, nDoclist); + }else{ + return SQLITE_NOMEM; + } + }else{ + char *aMerge = aDoclist; + int nMerge = nDoclist; + int iOut; + + for(iOut=0; iOut<SizeofArray(pTS->aaOutput); iOut++){ + if( pTS->aaOutput[iOut]==0 ){ + assert( iOut>0 ); + pTS->aaOutput[iOut] = aMerge; + pTS->anOutput[iOut] = nMerge; + break; + }else{ + char *aNew; + int nNew; + + int rc = fts3DoclistOrMerge(p->bDescIdx, aMerge, nMerge, + pTS->aaOutput[iOut], pTS->anOutput[iOut], &aNew, &nNew + ); + if( rc!=SQLITE_OK ){ + if( aMerge!=aDoclist ) sqlite3_free(aMerge); + return rc; + } + + if( aMerge!=aDoclist ) sqlite3_free(aMerge); + sqlite3_free(pTS->aaOutput[iOut]); + pTS->aaOutput[iOut] = 0; + + aMerge = aNew; + nMerge = nNew; + if( (iOut+1)==SizeofArray(pTS->aaOutput) ){ + pTS->aaOutput[iOut] = aMerge; + pTS->anOutput[iOut] = nMerge; + } + } + } + } + return SQLITE_OK; +} + +/* +** Append SegReader object pNew to the end of the pCsr->apSegment[] array. +*/ +static int fts3SegReaderCursorAppend( + Fts3MultiSegReader *pCsr, + Fts3SegReader *pNew +){ + if( (pCsr->nSegment%16)==0 ){ + Fts3SegReader **apNew; + int nByte = (pCsr->nSegment + 16)*sizeof(Fts3SegReader*); + apNew = (Fts3SegReader **)sqlite3_realloc(pCsr->apSegment, nByte); + if( !apNew ){ + sqlite3Fts3SegReaderFree(pNew); + return SQLITE_NOMEM; + } + pCsr->apSegment = apNew; + } + pCsr->apSegment[pCsr->nSegment++] = pNew; + return SQLITE_OK; +} + +/* +** Add seg-reader objects to the Fts3MultiSegReader object passed as the +** 8th argument. +** +** This function returns SQLITE_OK if successful, or an SQLite error code +** otherwise. +*/ +static int fts3SegReaderCursor( + Fts3Table *p, /* FTS3 table handle */ + int iLangid, /* Language id */ + int iIndex, /* Index to search (from 0 to p->nIndex-1) */ + int iLevel, /* Level of segments to scan */ + const char *zTerm, /* Term to query for */ + int nTerm, /* Size of zTerm in bytes */ + int isPrefix, /* True for a prefix search */ + int isScan, /* True to scan from zTerm to EOF */ + Fts3MultiSegReader *pCsr /* Cursor object to populate */ +){ + int rc = SQLITE_OK; /* Error code */ + sqlite3_stmt *pStmt = 0; /* Statement to iterate through segments */ + int rc2; /* Result of sqlite3_reset() */ + + /* If iLevel is less than 0 and this is not a scan, include a seg-reader + ** for the pending-terms. If this is a scan, then this call must be being + ** made by an fts4aux module, not an FTS table. In this case calling + ** Fts3SegReaderPending might segfault, as the data structures used by + ** fts4aux are not completely populated. So it's easiest to filter these + ** calls out here. 
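+  **
+  ** (Background, hedged: the pending-terms structure is the in-memory
+  ** hash of terms taken from rows written earlier in the current
+  ** transaction but not yet flushed to the %_segdir/%_segments tables.
+  ** Queries must merge it with the on-disk segments so that those rows
+  ** are visible to MATCH expressions.)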
*/ + if( iLevel<0 && p->aIndex ){ + Fts3SegReader *pSeg = 0; + rc = sqlite3Fts3SegReaderPending(p, iIndex, zTerm, nTerm, isPrefix||isScan, &pSeg); + if( rc==SQLITE_OK && pSeg ){ + rc = fts3SegReaderCursorAppend(pCsr, pSeg); + } + } + + if( iLevel!=FTS3_SEGCURSOR_PENDING ){ + if( rc==SQLITE_OK ){ + rc = sqlite3Fts3AllSegdirs(p, iLangid, iIndex, iLevel, &pStmt); + } + + while( rc==SQLITE_OK && SQLITE_ROW==(rc = sqlite3_step(pStmt)) ){ + Fts3SegReader *pSeg = 0; + + /* Read the values returned by the SELECT into local variables. */ + sqlite3_int64 iStartBlock = sqlite3_column_int64(pStmt, 1); + sqlite3_int64 iLeavesEndBlock = sqlite3_column_int64(pStmt, 2); + sqlite3_int64 iEndBlock = sqlite3_column_int64(pStmt, 3); + int nRoot = sqlite3_column_bytes(pStmt, 4); + char const *zRoot = sqlite3_column_blob(pStmt, 4); + + /* If zTerm is not NULL, and this segment is not stored entirely on its + ** root node, the range of leaves scanned can be reduced. Do this. */ + if( iStartBlock && zTerm ){ + sqlite3_int64 *pi = (isPrefix ? &iLeavesEndBlock : 0); + rc = fts3SelectLeaf(p, zTerm, nTerm, zRoot, nRoot, &iStartBlock, pi); + if( rc!=SQLITE_OK ) goto finished; + if( isPrefix==0 && isScan==0 ) iLeavesEndBlock = iStartBlock; + } + + rc = sqlite3Fts3SegReaderNew(pCsr->nSegment+1, + (isPrefix==0 && isScan==0), + iStartBlock, iLeavesEndBlock, + iEndBlock, zRoot, nRoot, &pSeg + ); + if( rc!=SQLITE_OK ) goto finished; + rc = fts3SegReaderCursorAppend(pCsr, pSeg); + } + } + + finished: + rc2 = sqlite3_reset(pStmt); + if( rc==SQLITE_DONE ) rc = rc2; + + return rc; +} + +/* +** Set up a cursor object for iterating through a full-text index or a +** single level therein. +*/ +SQLITE_PRIVATE int sqlite3Fts3SegReaderCursor( + Fts3Table *p, /* FTS3 table handle */ + int iLangid, /* Language-id to search */ + int iIndex, /* Index to search (from 0 to p->nIndex-1) */ + int iLevel, /* Level of segments to scan */ + const char *zTerm, /* Term to query for */ + int nTerm, /* Size of zTerm in bytes */ + int isPrefix, /* True for a prefix search */ + int isScan, /* True to scan from zTerm to EOF */ + Fts3MultiSegReader *pCsr /* Cursor object to populate */ +){ + assert( iIndex>=0 && iIndex<p->nIndex ); + assert( iLevel==FTS3_SEGCURSOR_ALL + || iLevel==FTS3_SEGCURSOR_PENDING + || iLevel>=0 + ); + assert( iLevel<FTS3_SEGDIR_MAXLEVEL ); + assert( FTS3_SEGCURSOR_ALL<0 && FTS3_SEGCURSOR_PENDING<0 ); + assert( isPrefix==0 || isScan==0 ); + + memset(pCsr, 0, sizeof(Fts3MultiSegReader)); + return fts3SegReaderCursor( + p, iLangid, iIndex, iLevel, zTerm, nTerm, isPrefix, isScan, pCsr + ); +} + +/* +** In addition to its current configuration, have the Fts3MultiSegReader +** passed as the 4th argument also scan the doclist for term zTerm/nTerm. +** +** SQLITE_OK is returned if no error occurs, otherwise an SQLite error code. +*/ +static int fts3SegReaderCursorAddZero( + Fts3Table *p, /* FTS virtual table handle */ + int iLangid, + const char *zTerm, /* Term to scan doclist of */ + int nTerm, /* Number of bytes in zTerm */ + Fts3MultiSegReader *pCsr /* Fts3MultiSegReader to modify */ +){ + return fts3SegReaderCursor(p, + iLangid, 0, FTS3_SEGCURSOR_ALL, zTerm, nTerm, 0, 0,pCsr + ); +} + +/* +** Open an Fts3MultiSegReader to scan the doclist for term zTerm/nTerm. Or, +** if isPrefix is true, to scan the doclist for all terms for which +** zTerm/nTerm is a prefix. If successful, return SQLITE_OK and write +** a pointer to the new Fts3MultiSegReader to *ppSegcsr. Otherwise, return +** an SQLite error code. 
+** +** It is the responsibility of the caller to free this object by eventually +** passing it to fts3SegReaderCursorFree() +** +** SQLITE_OK is returned if no error occurs, otherwise an SQLite error code. +** Output parameter *ppSegcsr is set to 0 if an error occurs. +*/ +static int fts3TermSegReaderCursor( + Fts3Cursor *pCsr, /* Virtual table cursor handle */ const char *zTerm, /* Term to query for */ int nTerm, /* Size of zTerm in bytes */ int isPrefix, /* True for a prefix search */ - int isReqPos, /* True to include position lists in output */ + Fts3MultiSegReader **ppSegcsr /* OUT: Allocated seg-reader cursor */ +){ + Fts3MultiSegReader *pSegcsr; /* Object to allocate and return */ + int rc = SQLITE_NOMEM; /* Return code */ + + pSegcsr = sqlite3_malloc(sizeof(Fts3MultiSegReader)); + if( pSegcsr ){ + int i; + int bFound = 0; /* True once an index has been found */ + Fts3Table *p = (Fts3Table *)pCsr->base.pVtab; + + if( isPrefix ){ + for(i=1; bFound==0 && i<p->nIndex; i++){ + if( p->aIndex[i].nPrefix==nTerm ){ + bFound = 1; + rc = sqlite3Fts3SegReaderCursor(p, pCsr->iLangid, + i, FTS3_SEGCURSOR_ALL, zTerm, nTerm, 0, 0, pSegcsr + ); + pSegcsr->bLookup = 1; + } + } + + for(i=1; bFound==0 && i<p->nIndex; i++){ + if( p->aIndex[i].nPrefix==nTerm+1 ){ + bFound = 1; + rc = sqlite3Fts3SegReaderCursor(p, pCsr->iLangid, + i, FTS3_SEGCURSOR_ALL, zTerm, nTerm, 1, 0, pSegcsr + ); + if( rc==SQLITE_OK ){ + rc = fts3SegReaderCursorAddZero( + p, pCsr->iLangid, zTerm, nTerm, pSegcsr + ); + } + } + } + } + + if( bFound==0 ){ + rc = sqlite3Fts3SegReaderCursor(p, pCsr->iLangid, + 0, FTS3_SEGCURSOR_ALL, zTerm, nTerm, isPrefix, 0, pSegcsr + ); + pSegcsr->bLookup = !isPrefix; + } + } + + *ppSegcsr = pSegcsr; + return rc; +} + +/* +** Free an Fts3MultiSegReader allocated by fts3TermSegReaderCursor(). +*/ +static void fts3SegReaderCursorFree(Fts3MultiSegReader *pSegcsr){ + sqlite3Fts3SegReaderFinish(pSegcsr); + sqlite3_free(pSegcsr); +} + +/* +** This function retrieves the doclist for the specified term (or term +** prefix) from the database. +*/ +static int fts3TermSelect( + Fts3Table *p, /* Virtual table handle */ + Fts3PhraseToken *pTok, /* Token to query for */ + int iColumn, /* Column to query (or -ve for all columns) */ int *pnOut, /* OUT: Size of buffer at *ppOut */ char **ppOut /* OUT: Malloced result buffer */ ){ - int i; - TermSelect tsc; + int rc; /* Return code */ + Fts3MultiSegReader *pSegcsr; /* Seg-reader cursor for this term */ + TermSelect tsc; /* Object for pair-wise doclist merging */ Fts3SegFilter filter; /* Segment term filter configuration */ - Fts3SegReader **apSegment; /* Array of segments to read data from */ - int nSegment = 0; /* Size of apSegment array */ - int nAlloc = 16; /* Allocated size of segment array */ - int rc; /* Return code */ - sqlite3_stmt *pStmt = 0; /* SQL statement to scan %_segdir table */ - int iAge = 0; /* Used to assign ages to segments */ - - apSegment = (Fts3SegReader **)sqlite3_malloc(sizeof(Fts3SegReader*)*nAlloc); - if( !apSegment ) return SQLITE_NOMEM; - rc = sqlite3Fts3SegReaderPending(p, zTerm, nTerm, isPrefix, &apSegment[0]); - if( rc!=SQLITE_OK ) goto finished; - if( apSegment[0] ){ - nSegment = 1; - } - - /* Loop through the entire %_segdir table. For each segment, create a - ** Fts3SegReader to iterate through the subset of the segment leaves - ** that may contain a term that matches zTerm/nTerm. For non-prefix - ** searches, this is always a single leaf. For prefix searches, this - ** may be a contiguous block of leaves. 
- ** - ** The code in this loop does not actually load any leaves into memory - ** (unless the root node happens to be a leaf). It simply examines the - ** b-tree structure to determine which leaves need to be inspected. - */ - rc = sqlite3Fts3AllSegdirs(p, &pStmt); - while( rc==SQLITE_OK && SQLITE_ROW==(rc = sqlite3_step(pStmt)) ){ - Fts3SegReader *pNew = 0; - int nRoot = sqlite3_column_bytes(pStmt, 4); - char const *zRoot = sqlite3_column_blob(pStmt, 4); - if( sqlite3_column_int64(pStmt, 1)==0 ){ - /* The entire segment is stored on the root node (which must be a - ** leaf). Do not bother inspecting any data in this case, just - ** create a Fts3SegReader to scan the single leaf. - */ - rc = sqlite3Fts3SegReaderNew(p, iAge, 0, 0, 0, zRoot, nRoot, &pNew); - }else{ - int rc2; /* Return value of sqlite3Fts3ReadBlock() */ - sqlite3_int64 i1; /* Blockid of leaf that may contain zTerm */ - rc = fts3SelectLeaf(p, zTerm, nTerm, zRoot, nRoot, &i1); - if( rc==SQLITE_OK ){ - sqlite3_int64 i2 = sqlite3_column_int64(pStmt, 2); - rc = sqlite3Fts3SegReaderNew(p, iAge, i1, i2, 0, 0, 0, &pNew); - } - - /* The following call to ReadBlock() serves to reset the SQL statement - ** used to retrieve blocks of data from the %_segments table. If it is - ** not reset here, then it may remain classified as an active statement - ** by SQLite, which may lead to "DROP TABLE" or "DETACH" commands - ** failing. - */ - rc2 = sqlite3Fts3ReadBlock(p, 0, 0, 0); - if( rc==SQLITE_OK ){ - rc = rc2; - } - } - iAge++; - - /* If a new Fts3SegReader was allocated, add it to the apSegment array. */ - assert( pNew!=0 || rc!=SQLITE_OK ); - if( pNew ){ - if( nSegment==nAlloc ){ - Fts3SegReader **pArray; - nAlloc += 16; - pArray = (Fts3SegReader **)sqlite3_realloc( - apSegment, nAlloc*sizeof(Fts3SegReader *) - ); - if( !pArray ){ - sqlite3Fts3SegReaderFree(p, pNew); - rc = SQLITE_NOMEM; - goto finished; - } - apSegment = pArray; - } - apSegment[nSegment++] = pNew; - } - } - if( rc!=SQLITE_DONE ){ - assert( rc!=SQLITE_OK ); - goto finished; - } - + + pSegcsr = pTok->pSegcsr; memset(&tsc, 0, sizeof(TermSelect)); - tsc.isReqPos = isReqPos; - filter.flags = FTS3_SEGMENT_IGNORE_EMPTY - | (isPrefix ? FTS3_SEGMENT_PREFIX : 0) - | (isReqPos ? FTS3_SEGMENT_REQUIRE_POS : 0) + filter.flags = FTS3_SEGMENT_IGNORE_EMPTY | FTS3_SEGMENT_REQUIRE_POS + | (pTok->isPrefix ? FTS3_SEGMENT_PREFIX : 0) + | (pTok->bFirst ? FTS3_SEGMENT_FIRST : 0) | (iColumn<p->nColumn ? FTS3_SEGMENT_COLUMN_FILTER : 0); filter.iCol = iColumn; - filter.zTerm = zTerm; - filter.nTerm = nTerm; - - rc = sqlite3Fts3SegReaderIterate(p, apSegment, nSegment, &filter, - fts3TermSelectCb, (void *)&tsc - ); - - if( rc==SQLITE_OK ){ - *ppOut = tsc.aOutput; - *pnOut = tsc.nOutput; - }else{ - sqlite3_free(tsc.aOutput); - } - -finished: - sqlite3_reset(pStmt); - for(i=0; i<nSegment; i++){ - sqlite3Fts3SegReaderFree(p, apSegment[i]); - } - sqlite3_free(apSegment); - return rc; -} - - -/* -** Return a DocList corresponding to the phrase *pPhrase. 
-*/ -static int fts3PhraseSelect( - Fts3Table *p, /* Virtual table handle */ - Fts3Phrase *pPhrase, /* Phrase to return a doclist for */ - int isReqPos, /* True if output should contain positions */ - char **paOut, /* OUT: Pointer to malloc'd result buffer */ - int *pnOut /* OUT: Size of buffer at *paOut */ -){ - char *pOut = 0; - int nOut = 0; - int rc = SQLITE_OK; - int ii; - int iCol = pPhrase->iColumn; - int isTermPos = (pPhrase->nToken>1 || isReqPos); - - for(ii=0; ii<pPhrase->nToken; ii++){ - struct PhraseToken *pTok = &pPhrase->aToken[ii]; - char *z = pTok->z; /* Next token of the phrase */ - int n = pTok->n; /* Size of z in bytes */ - int isPrefix = pTok->isPrefix;/* True if token is a prefix */ - char *pList; /* Pointer to token doclist */ - int nList; /* Size of buffer at pList */ - - rc = fts3TermSelect(p, iCol, z, n, isPrefix, isTermPos, &nList, &pList); - if( rc!=SQLITE_OK ) break; - - if( ii==0 ){ - pOut = pList; - nOut = nList; - }else{ - /* Merge the new term list and the current output. If this is the - ** last term in the phrase, and positions are not required in the - ** output of this function, the positions can be dropped as part - ** of this merge. Either way, the result of this merge will be - ** smaller than nList bytes. The code in fts3DoclistMerge() is written - ** so that it is safe to use pList as the output as well as an input - ** in this case. - */ - int mergetype = MERGE_POS_PHRASE; - if( ii==pPhrase->nToken-1 && !isReqPos ){ - mergetype = MERGE_PHRASE; - } - fts3DoclistMerge(mergetype, 0, 0, pList, &nOut, pOut, nOut, pList, nList); - sqlite3_free(pOut); - pOut = pList; - } - assert( nOut==0 || pOut!=0 ); - } - - if( rc==SQLITE_OK ){ - *paOut = pOut; - *pnOut = nOut; - }else{ - sqlite3_free(pOut); - } - return rc; -} - -static int fts3NearMerge( - int mergetype, /* MERGE_POS_NEAR or MERGE_NEAR */ - int nNear, /* Parameter to NEAR operator */ - int nTokenLeft, /* Number of tokens in LHS phrase arg */ - char *aLeft, /* Doclist for LHS (incl. 
positions) */ - int nLeft, /* Size of LHS doclist in bytes */ - int nTokenRight, /* As nTokenLeft */ - char *aRight, /* As aLeft */ - int nRight, /* As nRight */ - char **paOut, /* OUT: Results of merge (malloced) */ - int *pnOut /* OUT: Sized of output buffer */ -){ - char *aOut; - int rc; - - assert( mergetype==MERGE_POS_NEAR || MERGE_NEAR ); - - aOut = sqlite3_malloc(nLeft+nRight+1); - if( aOut==0 ){ - rc = SQLITE_NOMEM; - }else{ - rc = fts3DoclistMerge(mergetype, nNear+nTokenRight, nNear+nTokenLeft, - aOut, pnOut, aLeft, nLeft, aRight, nRight - ); - if( rc!=SQLITE_OK ){ - sqlite3_free(aOut); - aOut = 0; - } - } - - *paOut = aOut; - return rc; -} - -SQLITE_PRIVATE int sqlite3Fts3ExprNearTrim(Fts3Expr *pLeft, Fts3Expr *pRight, int nNear){ - int rc; - if( pLeft->aDoclist==0 || pRight->aDoclist==0 ){ - sqlite3_free(pLeft->aDoclist); - sqlite3_free(pRight->aDoclist); - pRight->aDoclist = 0; - pLeft->aDoclist = 0; - rc = SQLITE_OK; - }else{ - char *aOut; - int nOut; - - rc = fts3NearMerge(MERGE_POS_NEAR, nNear, - pLeft->pPhrase->nToken, pLeft->aDoclist, pLeft->nDoclist, - pRight->pPhrase->nToken, pRight->aDoclist, pRight->nDoclist, - &aOut, &nOut - ); - if( rc!=SQLITE_OK ) return rc; - sqlite3_free(pRight->aDoclist); - pRight->aDoclist = aOut; - pRight->nDoclist = nOut; - - rc = fts3NearMerge(MERGE_POS_NEAR, nNear, - pRight->pPhrase->nToken, pRight->aDoclist, pRight->nDoclist, - pLeft->pPhrase->nToken, pLeft->aDoclist, pLeft->nDoclist, - &aOut, &nOut - ); - sqlite3_free(pLeft->aDoclist); - pLeft->aDoclist = aOut; - pLeft->nDoclist = nOut; - } - return rc; -} - -/* -** Evaluate the full-text expression pExpr against fts3 table pTab. Store -** the resulting doclist in *paOut and *pnOut. This routine mallocs for -** the space needed to store the output. The caller is responsible for -** freeing the space when it has finished. -*/ -static int evalFts3Expr( - Fts3Table *p, /* Virtual table handle */ - Fts3Expr *pExpr, /* Parsed fts3 expression */ - char **paOut, /* OUT: Pointer to malloc'd result buffer */ - int *pnOut, /* OUT: Size of buffer at *paOut */ - int isReqPos /* Require positions in output buffer */ -){ - int rc = SQLITE_OK; /* Return code */ - - /* Zero the output parameters. */ - *paOut = 0; - *pnOut = 0; - - if( pExpr ){ - assert( pExpr->eType==FTSQUERY_PHRASE - || pExpr->eType==FTSQUERY_NEAR - || isReqPos==0 - ); - if( pExpr->eType==FTSQUERY_PHRASE ){ - rc = fts3PhraseSelect(p, pExpr->pPhrase, - isReqPos || (pExpr->pParent && pExpr->pParent->eType==FTSQUERY_NEAR), - paOut, pnOut - ); - }else{ - char *aLeft; - char *aRight; - int nLeft; - int nRight; - - if( 0==(rc = evalFts3Expr(p, pExpr->pRight, &aRight, &nRight, isReqPos)) - && 0==(rc = evalFts3Expr(p, pExpr->pLeft, &aLeft, &nLeft, isReqPos)) - ){ - assert( pExpr->eType==FTSQUERY_NEAR || pExpr->eType==FTSQUERY_OR - || pExpr->eType==FTSQUERY_AND || pExpr->eType==FTSQUERY_NOT - ); - switch( pExpr->eType ){ - case FTSQUERY_NEAR: { - Fts3Expr *pLeft; - Fts3Expr *pRight; - int mergetype = isReqPos ? 
MERGE_POS_NEAR : MERGE_NEAR; - - if( pExpr->pParent && pExpr->pParent->eType==FTSQUERY_NEAR ){ - mergetype = MERGE_POS_NEAR; - } - pLeft = pExpr->pLeft; - while( pLeft->eType==FTSQUERY_NEAR ){ - pLeft=pLeft->pRight; - } - pRight = pExpr->pRight; - assert( pRight->eType==FTSQUERY_PHRASE ); - assert( pLeft->eType==FTSQUERY_PHRASE ); - - rc = fts3NearMerge(mergetype, pExpr->nNear, - pLeft->pPhrase->nToken, aLeft, nLeft, - pRight->pPhrase->nToken, aRight, nRight, - paOut, pnOut - ); - sqlite3_free(aLeft); - break; - } - - case FTSQUERY_OR: { - /* Allocate a buffer for the output. The maximum size is the - ** sum of the sizes of the two input buffers. The +1 term is - ** so that a buffer of zero bytes is never allocated - this can - ** cause fts3DoclistMerge() to incorrectly return SQLITE_NOMEM. - */ - char *aBuffer = sqlite3_malloc(nRight+nLeft+1); - rc = fts3DoclistMerge(MERGE_OR, 0, 0, aBuffer, pnOut, - aLeft, nLeft, aRight, nRight - ); - *paOut = aBuffer; - sqlite3_free(aLeft); - break; - } - - default: { - assert( FTSQUERY_NOT==MERGE_NOT && FTSQUERY_AND==MERGE_AND ); - fts3DoclistMerge(pExpr->eType, 0, 0, aLeft, pnOut, - aLeft, nLeft, aRight, nRight - ); - *paOut = aLeft; - break; - } - } - } - sqlite3_free(aRight); - } - } - - return rc; + filter.zTerm = pTok->z; + filter.nTerm = pTok->n; + + rc = sqlite3Fts3SegReaderStart(p, pSegcsr, &filter); + while( SQLITE_OK==rc + && SQLITE_ROW==(rc = sqlite3Fts3SegReaderStep(p, pSegcsr)) + ){ + rc = fts3TermSelectMerge(p, &tsc, pSegcsr->aDoclist, pSegcsr->nDoclist); + } + + if( rc==SQLITE_OK ){ + rc = fts3TermSelectFinishMerge(p, &tsc); + } + if( rc==SQLITE_OK ){ + *ppOut = tsc.aaOutput[0]; + *pnOut = tsc.anOutput[0]; + }else{ + int i; + for(i=0; i<SizeofArray(tsc.aaOutput); i++){ + sqlite3_free(tsc.aaOutput[i]); + } + } + + fts3SegReaderCursorFree(pSegcsr); + pTok->pSegcsr = 0; + return rc; +} + +/* +** This function counts the total number of docids in the doclist stored +** in buffer aList[], size nList bytes. +** +** If the isPoslist argument is true, then it is assumed that the doclist +** contains a position-list following each docid. Otherwise, it is assumed +** that the doclist is simply a list of docids stored as delta encoded +** varints. +*/ +static int fts3DoclistCountDocids(char *aList, int nList){ + int nDoc = 0; /* Return value */ + if( aList ){ + char *aEnd = &aList[nList]; /* Pointer to one byte after EOF */ + char *p = aList; /* Cursor */ + while( p<aEnd ){ + nDoc++; + while( (*p++)&0x80 ); /* Skip docid varint */ + fts3PoslistCopy(0, &p); /* Skip over position list */ + } + } + + return nDoc; +} + +/* +** Advance the cursor to the next row in the %_content table that +** matches the search criteria. For a MATCH search, this will be +** the next row that matches. For a full-table scan, this will be +** simply the next row in the %_content table. For a docid lookup, +** this routine simply sets the EOF flag. +** +** Return SQLITE_OK if nothing goes wrong. SQLITE_OK is returned +** even if we reach end-of-file. The fts3EofMethod() will be called +** subsequently to determine whether or not an EOF was hit. 
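+**
+** As a rough guide only (hypothetical queries, for illustration):
+**
+**   SELECT * FROM t WHERE t MATCH 'sqlite';   -- full-text search
+**   SELECT * FROM t WHERE docid = 44;         -- docid lookup
+**   SELECT * FROM t;                          -- full-table scan
+**
+** The first case is advanced by fts3EvalNext(); the other two step an
+** ordinary SELECT on the %_content table.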
+*/ +static int fts3NextMethod(sqlite3_vtab_cursor *pCursor){ + int rc; + Fts3Cursor *pCsr = (Fts3Cursor *)pCursor; + if( pCsr->eSearch==FTS3_DOCID_SEARCH || pCsr->eSearch==FTS3_FULLSCAN_SEARCH ){ + if( SQLITE_ROW!=sqlite3_step(pCsr->pStmt) ){ + pCsr->isEof = 1; + rc = sqlite3_reset(pCsr->pStmt); + }else{ + pCsr->iPrevId = sqlite3_column_int64(pCsr->pStmt, 0); + rc = SQLITE_OK; + } + }else{ + rc = fts3EvalNext((Fts3Cursor *)pCursor); + } + assert( ((Fts3Table *)pCsr->base.pVtab)->pSegments==0 ); + return rc; +} + +/* +** The following are copied from sqliteInt.h. +** +** Constants for the largest and smallest possible 64-bit signed integers. +** These macros are designed to work correctly on both 32-bit and 64-bit +** compilers. +*/ +#ifndef SQLITE_AMALGAMATION +# define LARGEST_INT64 (0xffffffff|(((sqlite3_int64)0x7fffffff)<<32)) +# define SMALLEST_INT64 (((sqlite3_int64)-1) - LARGEST_INT64) +#endif + +/* +** If the numeric type of argument pVal is "integer", then return it +** converted to a 64-bit signed integer. Otherwise, return a copy of +** the second parameter, iDefault. +*/ +static sqlite3_int64 fts3DocidRange(sqlite3_value *pVal, i64 iDefault){ + if( pVal ){ + int eType = sqlite3_value_numeric_type(pVal); + if( eType==SQLITE_INTEGER ){ + return sqlite3_value_int64(pVal); + } + } + return iDefault; } /* ** This is the xFilter interface for the virtual table. See ** the virtual table xFilter method documentation for additional @@ -102491,86 +141137,120 @@ ** If idxNum>=FTS3_FULLTEXT_SEARCH then use the full text index. The ** column on the left-hand side of the MATCH operator is column ** number idxNum-FTS3_FULLTEXT_SEARCH, 0 indexed. argv[0] is the right-hand ** side of the MATCH operator. */ -/* TODO(shess) Upgrade the cursor initialization and destruction to -** account for fts3FilterMethod() being called multiple times on the -** same cursor. The current solution is very fragile. Apply fix to -** fts3 as appropriate. -*/ static int fts3FilterMethod( sqlite3_vtab_cursor *pCursor, /* The cursor used for this query */ int idxNum, /* Strategy index */ const char *idxStr, /* Unused */ int nVal, /* Number of elements in apVal */ sqlite3_value **apVal /* Arguments for the indexing scheme */ ){ - const char *azSql[] = { - "SELECT * FROM %Q.'%q_content' WHERE docid = ?", /* non-full-table-scan */ - "SELECT * FROM %Q.'%q_content'", /* full-table-scan */ - }; - int rc; /* Return code */ + int rc = SQLITE_OK; char *zSql; /* SQL statement used to access %_content */ + int eSearch; Fts3Table *p = (Fts3Table *)pCursor->pVtab; Fts3Cursor *pCsr = (Fts3Cursor *)pCursor; + + sqlite3_value *pCons = 0; /* The MATCH or rowid constraint, if any */ + sqlite3_value *pLangid = 0; /* The "langid = ?" constraint, if any */ + sqlite3_value *pDocidGe = 0; /* The "docid >= ?" constraint, if any */ + sqlite3_value *pDocidLe = 0; /* The "docid <= ?" 
constraint, if any */ + int iIdx; UNUSED_PARAMETER(idxStr); UNUSED_PARAMETER(nVal); - assert( idxNum>=0 && idxNum<=(FTS3_FULLTEXT_SEARCH+p->nColumn) ); - assert( nVal==0 || nVal==1 ); - assert( (nVal==0)==(idxNum==FTS3_FULLSCAN_SEARCH) ); + eSearch = (idxNum & 0x0000FFFF); + assert( eSearch>=0 && eSearch<=(FTS3_FULLTEXT_SEARCH+p->nColumn) ); + assert( p->pSegments==0 ); + + /* Collect arguments into local variables */ + iIdx = 0; + if( eSearch!=FTS3_FULLSCAN_SEARCH ) pCons = apVal[iIdx++]; + if( idxNum & FTS3_HAVE_LANGID ) pLangid = apVal[iIdx++]; + if( idxNum & FTS3_HAVE_DOCID_GE ) pDocidGe = apVal[iIdx++]; + if( idxNum & FTS3_HAVE_DOCID_LE ) pDocidLe = apVal[iIdx++]; + assert( iIdx==nVal ); /* In case the cursor has been used before, clear it now. */ sqlite3_finalize(pCsr->pStmt); sqlite3_free(pCsr->aDoclist); + sqlite3Fts3MIBufferFree(pCsr->pMIBuffer); sqlite3Fts3ExprFree(pCsr->pExpr); memset(&pCursor[1], 0, sizeof(Fts3Cursor)-sizeof(sqlite3_vtab_cursor)); + + /* Set the lower and upper bounds on docids to return */ + pCsr->iMinDocid = fts3DocidRange(pDocidGe, SMALLEST_INT64); + pCsr->iMaxDocid = fts3DocidRange(pDocidLe, LARGEST_INT64); + + if( idxStr ){ + pCsr->bDesc = (idxStr[0]=='D'); + }else{ + pCsr->bDesc = p->bDescIdx; + } + pCsr->eSearch = (i16)eSearch; + + if( eSearch!=FTS3_DOCID_SEARCH && eSearch!=FTS3_FULLSCAN_SEARCH ){ + int iCol = eSearch-FTS3_FULLTEXT_SEARCH; + const char *zQuery = (const char *)sqlite3_value_text(pCons); + + if( zQuery==0 && sqlite3_value_type(pCons)!=SQLITE_NULL ){ + return SQLITE_NOMEM; + } + + pCsr->iLangid = 0; + if( pLangid ) pCsr->iLangid = sqlite3_value_int(pLangid); + + assert( p->base.zErrMsg==0 ); + rc = sqlite3Fts3ExprParse(p->pTokenizer, pCsr->iLangid, + p->azColumn, p->bFts4, p->nColumn, iCol, zQuery, -1, &pCsr->pExpr, + &p->base.zErrMsg + ); + if( rc!=SQLITE_OK ){ + return rc; + } + + rc = fts3EvalStart(pCsr); + sqlite3Fts3SegmentsClose(p); + if( rc!=SQLITE_OK ) return rc; + pCsr->pNextId = pCsr->aDoclist; + pCsr->iPrevId = 0; + } /* Compile a SELECT statement for this cursor. For a full-table-scan, the ** statement loops through all rows of the %_content table. For a ** full-text query or docid lookup, the statement retrieves a single ** row by docid. */ - zSql = sqlite3_mprintf(azSql[idxNum==FTS3_FULLSCAN_SEARCH], p->zDb, p->zName); - if( !zSql ){ - rc = SQLITE_NOMEM; - }else{ - rc = sqlite3_prepare_v2(p->db, zSql, -1, &pCsr->pStmt, 0); - sqlite3_free(zSql); - } - if( rc!=SQLITE_OK ) return rc; - pCsr->eSearch = (i16)idxNum; - - if( idxNum==FTS3_DOCID_SEARCH ){ - rc = sqlite3_bind_value(pCsr->pStmt, 1, apVal[0]); - }else if( idxNum!=FTS3_FULLSCAN_SEARCH ){ - int iCol = idxNum-FTS3_FULLTEXT_SEARCH; - const char *zQuery = (const char *)sqlite3_value_text(apVal[0]); - - if( zQuery==0 && sqlite3_value_type(apVal[0])!=SQLITE_NULL ){ - return SQLITE_NOMEM; - } - - rc = sqlite3Fts3ExprParse(p->pTokenizer, p->azColumn, p->nColumn, - iCol, zQuery, -1, &pCsr->pExpr - ); - if( rc!=SQLITE_OK ){ - if( rc==SQLITE_ERROR ){ - p->base.zErrMsg = sqlite3_mprintf("malformed MATCH expression: [%s]", - zQuery); - } - return rc; - } - - rc = evalFts3Expr(p, pCsr->pExpr, &pCsr->aDoclist, &pCsr->nDoclist, 0); - pCsr->pNextId = pCsr->aDoclist; - pCsr->iPrevId = 0; - } - - if( rc!=SQLITE_OK ) return rc; + if( eSearch==FTS3_FULLSCAN_SEARCH ){ + if( pDocidGe || pDocidLe ){ + zSql = sqlite3_mprintf( + "SELECT %s WHERE rowid BETWEEN %lld AND %lld ORDER BY rowid %s", + p->zReadExprlist, pCsr->iMinDocid, pCsr->iMaxDocid, + (pCsr->bDesc ? 
"DESC" : "ASC") + ); + }else{ + zSql = sqlite3_mprintf("SELECT %s ORDER BY rowid %s", + p->zReadExprlist, (pCsr->bDesc ? "DESC" : "ASC") + ); + } + if( zSql ){ + rc = sqlite3_prepare_v2(p->db, zSql, -1, &pCsr->pStmt, 0); + sqlite3_free(zSql); + }else{ + rc = SQLITE_NOMEM; + } + }else if( eSearch==FTS3_DOCID_SEARCH ){ + rc = fts3CursorSeekStmt(pCsr, &pCsr->pStmt); + if( rc==SQLITE_OK ){ + rc = sqlite3_bind_value(pCsr->pStmt, 1, pCons); + } + } + if( rc!=SQLITE_OK ) return rc; + return fts3NextMethod(pCursor); } /* ** This is the xEof method of the virtual table. SQLite calls this @@ -102586,53 +141266,67 @@ ** exposes %_content.docid as the rowid for the virtual table. The ** rowid should be written to *pRowid. */ static int fts3RowidMethod(sqlite3_vtab_cursor *pCursor, sqlite_int64 *pRowid){ Fts3Cursor *pCsr = (Fts3Cursor *) pCursor; - if( pCsr->aDoclist ){ - *pRowid = pCsr->iPrevId; - }else{ - *pRowid = sqlite3_column_int64(pCsr->pStmt, 0); - } + *pRowid = pCsr->iPrevId; return SQLITE_OK; } /* ** This is the xColumn method, called by SQLite to request a value from ** the row that the supplied cursor currently points to. +** +** If: +** +** (iCol < p->nColumn) -> The value of the iCol'th user column. +** (iCol == p->nColumn) -> Magic column with the same name as the table. +** (iCol == p->nColumn+1) -> Docid column +** (iCol == p->nColumn+2) -> Langid column */ static int fts3ColumnMethod( sqlite3_vtab_cursor *pCursor, /* Cursor to retrieve value from */ - sqlite3_context *pContext, /* Context for sqlite3_result_xxx() calls */ + sqlite3_context *pCtx, /* Context for sqlite3_result_xxx() calls */ int iCol /* Index of column to read value from */ ){ - int rc; /* Return Code */ + int rc = SQLITE_OK; /* Return Code */ Fts3Cursor *pCsr = (Fts3Cursor *) pCursor; Fts3Table *p = (Fts3Table *)pCursor->pVtab; /* The column value supplied by SQLite must be in range. */ - assert( iCol>=0 && iCol<=p->nColumn+1 ); + assert( iCol>=0 && iCol<=p->nColumn+2 ); if( iCol==p->nColumn+1 ){ /* This call is a request for the "docid" column. Since "docid" is an ** alias for "rowid", use the xRowid() method to obtain the value. */ - sqlite3_int64 iRowid; - rc = fts3RowidMethod(pCursor, &iRowid); - sqlite3_result_int64(pContext, iRowid); + sqlite3_result_int64(pCtx, pCsr->iPrevId); }else if( iCol==p->nColumn ){ /* The extra column whose name is the same as the table. - ** Return a blob which is a pointer to the cursor. - */ - sqlite3_result_blob(pContext, &pCsr, sizeof(pCsr), SQLITE_TRANSIENT); - rc = SQLITE_OK; + ** Return a blob which is a pointer to the cursor. */ + sqlite3_result_blob(pCtx, &pCsr, sizeof(pCsr), SQLITE_TRANSIENT); + }else if( iCol==p->nColumn+2 && pCsr->pExpr ){ + sqlite3_result_int64(pCtx, pCsr->iLangid); }else{ + /* The requested column is either a user column (one that contains + ** indexed data), or the language-id column. */ rc = fts3CursorSeek(0, pCsr); + if( rc==SQLITE_OK ){ - sqlite3_result_value(pContext, sqlite3_column_value(pCsr->pStmt, iCol+1)); + if( iCol==p->nColumn+2 ){ + int iLangid = 0; + if( p->zLanguageid ){ + iLangid = sqlite3_column_int(pCsr->pStmt, p->nColumn+1); + } + sqlite3_result_int(pCtx, iLangid); + }else if( sqlite3_data_count(pCsr->pStmt)>(iCol+1) ){ + sqlite3_result_value(pCtx, sqlite3_column_value(pCsr->pStmt, iCol+1)); + } } } + + assert( ((Fts3Table *)pCsr->base.pVtab)->pSegments==0 ); return rc; } /* ** This function is the implementation of the xUpdate callback used by @@ -102651,99 +141345,160 @@ /* ** Implementation of xSync() method. 
Flush the contents of the pending-terms ** hash-table to the database. */ static int fts3SyncMethod(sqlite3_vtab *pVtab){ - return sqlite3Fts3PendingTermsFlush((Fts3Table *)pVtab); + + /* Following an incremental-merge operation, assuming that the input + ** segments are not completely consumed (the usual case), they are updated + ** in place to remove the entries that have already been merged. This + ** involves updating the leaf block that contains the smallest unmerged + ** entry and each block (if any) between the leaf and the root node. So + ** if the height of the input segment b-trees is N, and input segments + ** are merged eight at a time, updating the input segments at the end + ** of an incremental-merge requires writing (8*(1+N)) blocks. N is usually + ** small - often between 0 and 2. So the overhead of the incremental + ** merge is somewhere between 8 and 24 blocks. To avoid this overhead + ** dwarfing the actual productive work accomplished, the incremental merge + ** is only attempted if it will write at least 64 leaf blocks. Hence + ** nMinMerge. + ** + ** Of course, updating the input segments also involves deleting a bunch + ** of blocks from the segments table. But this is not considered overhead + ** as it would also be required by a crisis-merge that used the same input + ** segments. + */ + const u32 nMinMerge = 64; /* Minimum amount of incr-merge work to do */ + + Fts3Table *p = (Fts3Table*)pVtab; + int rc = sqlite3Fts3PendingTermsFlush(p); + + if( rc==SQLITE_OK + && p->nLeafAdd>(nMinMerge/16) + && p->nAutoincrmerge && p->nAutoincrmerge!=0xff + ){ + int mxLevel = 0; /* Maximum relative level value in db */ + int A; /* Incr-merge parameter A */ + + rc = sqlite3Fts3MaxLevel(p, &mxLevel); + assert( rc==SQLITE_OK || mxLevel==0 ); + A = p->nLeafAdd * mxLevel; + A += (A/2); + if( A>(int)nMinMerge ) rc = sqlite3Fts3Incrmerge(p, A, p->nAutoincrmerge); + } + sqlite3Fts3SegmentsClose(p); + return rc; } /* -** Implementation of xBegin() method. This is a no-op. +** If it is currently unknown whether or not the FTS table has an %_stat +** table (if p->bHasStat==2), attempt to determine this (set p->bHasStat +** to 0 or 1). Return SQLITE_OK if successful, or an SQLite error code +** if an error occurs. +*/ +static int fts3SetHasStat(Fts3Table *p){ + int rc = SQLITE_OK; + if( p->bHasStat==2 ){ + const char *zFmt ="SELECT 1 FROM %Q.sqlite_master WHERE tbl_name='%q_stat'"; + char *zSql = sqlite3_mprintf(zFmt, p->zDb, p->zName); + if( zSql ){ + sqlite3_stmt *pStmt = 0; + rc = sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0); + if( rc==SQLITE_OK ){ + int bHasStat = (sqlite3_step(pStmt)==SQLITE_ROW); + rc = sqlite3_finalize(pStmt); + if( rc==SQLITE_OK ) p->bHasStat = bHasStat; + } + sqlite3_free(zSql); + }else{ + rc = SQLITE_NOMEM; + } + } + return rc; +} + +/* +** Implementation of xBegin() method. */ static int fts3BeginMethod(sqlite3_vtab *pVtab){ + Fts3Table *p = (Fts3Table*)pVtab; UNUSED_PARAMETER(pVtab); - assert( ((Fts3Table *)pVtab)->nPendingData==0 ); - return SQLITE_OK; + assert( p->pSegments==0 ); + assert( p->nPendingData==0 ); + assert( p->inTransaction!=1 ); + TESTONLY( p->inTransaction = 1 ); + TESTONLY( p->mxSavepoint = -1; ); + p->nLeafAdd = 0; + return fts3SetHasStat(p); } /* ** Implementation of xCommit() method. This is a no-op. The contents of ** the pending-terms hash-table have already been flushed into the database ** by fts3SyncMethod(). 
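**
** As an informal illustration (SQL and table name invented for this
** example): for a sequence such as
**
**   BEGIN;
**   INSERT INTO ft(content) VALUES('some text');
**   COMMIT;
**
** the pending terms are written out when SQLite invokes xSync() during
** the first phase of the COMMIT, so by the time this xCommit() runs
** there is nothing left to flush.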
*/ static int fts3CommitMethod(sqlite3_vtab *pVtab){ + TESTONLY( Fts3Table *p = (Fts3Table*)pVtab ); UNUSED_PARAMETER(pVtab); - assert( ((Fts3Table *)pVtab)->nPendingData==0 ); + assert( p->nPendingData==0 ); + assert( p->inTransaction!=0 ); + assert( p->pSegments==0 ); + TESTONLY( p->inTransaction = 0 ); + TESTONLY( p->mxSavepoint = -1; ); return SQLITE_OK; } /* ** Implementation of xRollback(). Discard the contents of the pending-terms ** hash-table. Any changes made to the database are reverted by SQLite. */ static int fts3RollbackMethod(sqlite3_vtab *pVtab){ - sqlite3Fts3PendingTermsClear((Fts3Table *)pVtab); - return SQLITE_OK; -} - -/* -** Load the doclist associated with expression pExpr to pExpr->aDoclist. -** The loaded doclist contains positions as well as the document ids. -** This is used by the matchinfo(), snippet() and offsets() auxillary -** functions. -*/ -SQLITE_PRIVATE int sqlite3Fts3ExprLoadDoclist(Fts3Table *pTab, Fts3Expr *pExpr){ - return evalFts3Expr(pTab, pExpr, &pExpr->aDoclist, &pExpr->nDoclist, 1); -} - -/* -** After ExprLoadDoclist() (see above) has been called, this function is -** used to iterate/search through the position lists that make up the doclist -** stored in pExpr->aDoclist. -*/ -SQLITE_PRIVATE char *sqlite3Fts3FindPositions( - Fts3Expr *pExpr, /* Access this expressions doclist */ - sqlite3_int64 iDocid, /* Docid associated with requested pos-list */ - int iCol /* Column of requested pos-list */ -){ - assert( pExpr->isLoaded ); - if( pExpr->aDoclist ){ - char *pEnd = &pExpr->aDoclist[pExpr->nDoclist]; - char *pCsr = pExpr->pCurrent; - - assert( pCsr ); - while( pCsr<pEnd ){ - if( pExpr->iCurrent<iDocid ){ - fts3PoslistCopy(0, &pCsr); - if( pCsr<pEnd ){ - fts3GetDeltaVarint(&pCsr, &pExpr->iCurrent); - } - pExpr->pCurrent = pCsr; - }else{ - if( pExpr->iCurrent==iDocid ){ - int iThis = 0; - if( iCol<0 ){ - /* If iCol is negative, return a pointer to the start of the - ** position-list (instead of a pointer to the start of a list - ** of offsets associated with a specific column). - */ - return pCsr; - } - while( iThis<iCol ){ - fts3ColumnlistCopy(0, &pCsr); - if( *pCsr==0x00 ) return 0; - pCsr++; - pCsr += sqlite3Fts3GetVarint32(pCsr, &iThis); - } - if( iCol==iThis && (*pCsr&0xFE) ) return pCsr; - } - return 0; - } - } - } - - return 0; + Fts3Table *p = (Fts3Table*)pVtab; + sqlite3Fts3PendingTermsClear(p); + assert( p->inTransaction!=0 ); + TESTONLY( p->inTransaction = 0 ); + TESTONLY( p->mxSavepoint = -1; ); + return SQLITE_OK; +} + +/* +** When called, *ppPoslist must point to the byte immediately following the +** end of a position-list. i.e. ( (*ppPoslist)[-1]==POS_END ). This function +** moves *ppPoslist so that it instead points to the first byte of the +** same position list. +*/ +static void fts3ReversePoslist(char *pStart, char **ppPoslist){ + char *p = &(*ppPoslist)[-2]; + char c = 0; + + /* Skip backwards passed any trailing 0x00 bytes added by NearTrim() */ + while( p>pStart && (c=*p--)==0 ); + + /* Search backwards for a varint with value zero (the end of the previous + ** poslist). This is an 0x00 byte preceded by some byte that does not + ** have the 0x80 bit set. */ + while( p>pStart && (*p & 0x80) | c ){ + c = *p--; + } + assert( p==pStart || c==0 ); + + /* At this point p points to that preceding byte without the 0x80 bit + ** set. So to find the start of the poslist, skip forward 2 bytes then + ** over a varint. + ** + ** Normally. The other case is that p==pStart and the poslist to return + ** is the first in the doclist. 
In this case do not skip forward 2 bytes. + ** The second part of the if condition (c==0 && *ppPoslist>&p[2]) + ** is required for cases where the first byte of a doclist and the + ** doclist is empty. For example, if the first docid is 10, a doclist + ** that begins with: + ** + ** 0x0A 0x00 <next docid delta varint> + */ + if( p>pStart || (c==0 && *ppPoslist>&p[2]) ){ p = &p[2]; } + while( *p++&0x80 ); + *ppPoslist = p; } /* ** Helper function used by the implementation of the overloaded snippet(), ** offsets() and optimize() SQL functions. @@ -102756,11 +141511,11 @@ */ static int fts3FunctionArg( sqlite3_context *pContext, /* SQL function call context */ const char *zFunc, /* Function name */ sqlite3_value *pVal, /* argv[0] passed to function */ - Fts3Cursor **ppCsr /* OUT: Store cursor handle here */ + Fts3Cursor **ppCsr /* OUT: Store cursor handle here */ ){ Fts3Cursor *pRet; if( sqlite3_value_type(pVal)!=SQLITE_BLOB || sqlite3_value_bytes(pVal)!=sizeof(Fts3Cursor *) ){ @@ -102808,10 +141563,12 @@ case 3: zEnd = (const char*)sqlite3_value_text(apVal[2]); case 2: zStart = (const char*)sqlite3_value_text(apVal[1]); } if( !zEllipsis || !zEnd || !zStart ){ sqlite3_result_error_nomem(pContext); + }else if( nToken==0 ){ + sqlite3_result_text(pContext, "", -1, SQLITE_STATIC); }else if( SQLITE_OK==fts3CursorSeek(pContext, pCsr) ){ sqlite3Fts3Snippet(pContext, pCsr, zStart, zEnd, zEllipsis, iCol, nToken); } } @@ -102882,19 +141639,17 @@ sqlite3_context *pContext, /* SQLite function call context */ int nVal, /* Size of argument array */ sqlite3_value **apVal /* Array of arguments */ ){ Fts3Cursor *pCsr; /* Cursor handle passed through apVal[0] */ - - if( nVal!=1 ){ - sqlite3_result_error(pContext, - "wrong number of arguments to function matchinfo()", -1); - return; - } - + assert( nVal==1 || nVal==2 ); if( SQLITE_OK==fts3FunctionArg(pContext, "matchinfo", apVal[0], &pCsr) ){ - sqlite3Fts3Matchinfo(pContext, pCsr); + const char *zArg = 0; + if( nVal>1 ){ + zArg = (const char *)sqlite3_value_text(apVal[1]); + } + sqlite3Fts3Matchinfo(pContext, pCsr, zArg); } } /* ** This routine implements the xFindFunction method for the FTS3 @@ -102939,25 +141694,42 @@ static int fts3RenameMethod( sqlite3_vtab *pVtab, /* Virtual table handle */ const char *zName /* New name of table */ ){ Fts3Table *p = (Fts3Table *)pVtab; - sqlite3 *db; /* Database connection */ + sqlite3 *db = p->db; /* Database connection */ int rc; /* Return Code */ - - db = p->db; - rc = SQLITE_OK; - fts3DbExec(&rc, db, - "ALTER TABLE %Q.'%q_content' RENAME TO '%q_content';", - p->zDb, p->zName, zName - ); - if( rc==SQLITE_ERROR ) rc = SQLITE_OK; + + /* At this point it must be known if the %_stat table exists or not. + ** So bHasStat may not be 2. */ + rc = fts3SetHasStat(p); + + /* As it happens, the pending terms table is always empty here. This is + ** because an "ALTER TABLE RENAME TABLE" statement inside a transaction + ** always opens a savepoint transaction. And the xSavepoint() method + ** flushes the pending terms table. But leave the (no-op) call to + ** PendingTermsFlush() in in case that changes. 
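+ **
+ ** Purely as an illustration (names invented): renaming an FTS4 table
+ ** "docs" in database "main" to "docs2" causes statements like the
+ ** following to be built below, one for each shadow table that exists:
+ **
+ **   ALTER TABLE 'main'.'docs_content' RENAME TO 'docs2_content';
+ **   ALTER TABLE 'main'.'docs_docsize' RENAME TO 'docs2_docsize';
+ **   ALTER TABLE 'main'.'docs_stat'    RENAME TO 'docs2_stat';
+ **   ALTER TABLE 'main'.'docs_segdir'  RENAME TO 'docs2_segdir';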
+ */ + assert( p->nPendingData==0 ); + if( rc==SQLITE_OK ){ + rc = sqlite3Fts3PendingTermsFlush(p); + } + + if( p->zContentTbl==0 ){ + fts3DbExec(&rc, db, + "ALTER TABLE %Q.'%q_content' RENAME TO '%q_content';", + p->zDb, p->zName, zName + ); + } + if( p->bHasDocsize ){ fts3DbExec(&rc, db, "ALTER TABLE %Q.'%q_docsize' RENAME TO '%q_docsize';", p->zDb, p->zName, zName ); + } + if( p->bHasStat ){ fts3DbExec(&rc, db, "ALTER TABLE %Q.'%q_stat' RENAME TO '%q_stat';", p->zDb, p->zName, zName ); } @@ -102969,20 +141741,67 @@ "ALTER TABLE %Q.'%q_segdir' RENAME TO '%q_segdir';", p->zDb, p->zName, zName ); return rc; } + +/* +** The xSavepoint() method. +** +** Flush the contents of the pending-terms table to disk. +*/ +static int fts3SavepointMethod(sqlite3_vtab *pVtab, int iSavepoint){ + int rc = SQLITE_OK; + UNUSED_PARAMETER(iSavepoint); + assert( ((Fts3Table *)pVtab)->inTransaction ); + assert( ((Fts3Table *)pVtab)->mxSavepoint < iSavepoint ); + TESTONLY( ((Fts3Table *)pVtab)->mxSavepoint = iSavepoint ); + if( ((Fts3Table *)pVtab)->bIgnoreSavepoint==0 ){ + rc = fts3SyncMethod(pVtab); + } + return rc; +} + +/* +** The xRelease() method. +** +** This is a no-op. +*/ +static int fts3ReleaseMethod(sqlite3_vtab *pVtab, int iSavepoint){ + TESTONLY( Fts3Table *p = (Fts3Table*)pVtab ); + UNUSED_PARAMETER(iSavepoint); + UNUSED_PARAMETER(pVtab); + assert( p->inTransaction ); + assert( p->mxSavepoint >= iSavepoint ); + TESTONLY( p->mxSavepoint = iSavepoint-1 ); + return SQLITE_OK; +} + +/* +** The xRollbackTo() method. +** +** Discard the contents of the pending terms table. +*/ +static int fts3RollbackToMethod(sqlite3_vtab *pVtab, int iSavepoint){ + Fts3Table *p = (Fts3Table*)pVtab; + UNUSED_PARAMETER(iSavepoint); + assert( p->inTransaction ); + assert( p->mxSavepoint >= iSavepoint ); + TESTONLY( p->mxSavepoint = iSavepoint ); + sqlite3Fts3PendingTermsClear(p); + return SQLITE_OK; +} static const sqlite3_module fts3Module = { - /* iVersion */ 0, + /* iVersion */ 2, /* xCreate */ fts3CreateMethod, /* xConnect */ fts3ConnectMethod, /* xBestIndex */ fts3BestIndexMethod, /* xDisconnect */ fts3DisconnectMethod, /* xDestroy */ fts3DestroyMethod, /* xOpen */ fts3OpenMethod, - /* xClose */ fulltextClose, + /* xClose */ fts3CloseMethod, /* xFilter */ fts3FilterMethod, /* xNext */ fts3NextMethod, /* xEof */ fts3EofMethod, /* xColumn */ fts3ColumnMethod, /* xRowid */ fts3RowidMethod, @@ -102991,10 +141810,13 @@ /* xSync */ fts3SyncMethod, /* xCommit */ fts3CommitMethod, /* xRollback */ fts3RollbackMethod, /* xFindFunction */ fts3FindFunctionMethod, /* xRename */ fts3RenameMethod, + /* xSavepoint */ fts3SavepointMethod, + /* xRelease */ fts3ReleaseMethod, + /* xRollbackTo */ fts3RollbackToMethod, }; /* ** This function is registered as the module destructor (called when an ** FTS3 enabled database connection is closed). It frees the memory @@ -103005,45 +141827,64 @@ sqlite3Fts3HashClear(pHash); sqlite3_free(pHash); } /* -** The fts3 built-in tokenizers - "simple" and "porter" - are implemented -** in files fts3_tokenizer1.c and fts3_porter.c respectively. The following -** two forward declarations are for functions declared in these files -** used to retrieve the respective implementations. +** The fts3 built-in tokenizers - "simple", "porter" and "icu"- are +** implemented in files fts3_tokenizer1.c, fts3_porter.c and fts3_icu.c +** respectively. The following three forward declarations are for functions +** declared in these files used to retrieve the respective implementations. 
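+**
+** As a small illustrative sketch (it simply mirrors the registration code
+** in sqlite3Fts3Init() below, nothing new is introduced): a module pointer
+** is fetched through one of these functions and then installed into the
+** tokenizer hash table under its name:
+**
+**   const sqlite3_tokenizer_module *pSimple = 0;
+**   sqlite3Fts3SimpleTokenizerModule(&pSimple);
+**   if( sqlite3Fts3HashInsert(pHash, "simple", 7, (void *)pSimple) ){
+**     rc = SQLITE_NOMEM;
+**   }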
** ** Calling sqlite3Fts3SimpleTokenizerModule() sets the value pointed -** to by the argument to point a the "simple" tokenizer implementation. -** Function ...PorterTokenizerModule() sets *pModule to point to the -** porter tokenizer/stemmer implementation. +** to by the argument to point to the "simple" tokenizer implementation. +** And so on. */ SQLITE_PRIVATE void sqlite3Fts3SimpleTokenizerModule(sqlite3_tokenizer_module const**ppModule); SQLITE_PRIVATE void sqlite3Fts3PorterTokenizerModule(sqlite3_tokenizer_module const**ppModule); +#ifndef SQLITE_DISABLE_FTS3_UNICODE +SQLITE_PRIVATE void sqlite3Fts3UnicodeTokenizer(sqlite3_tokenizer_module const**ppModule); +#endif +#ifdef SQLITE_ENABLE_ICU SQLITE_PRIVATE void sqlite3Fts3IcuTokenizerModule(sqlite3_tokenizer_module const**ppModule); +#endif /* -** Initialise the fts3 extension. If this extension is built as part +** Initialize the fts3 extension. If this extension is built as part ** of the sqlite library, then this function is called directly by ** SQLite. If fts3 is built as a dynamically loadable extension, this ** function is called by the sqlite3_extension_init() entry point. */ SQLITE_PRIVATE int sqlite3Fts3Init(sqlite3 *db){ int rc = SQLITE_OK; Fts3Hash *pHash = 0; const sqlite3_tokenizer_module *pSimple = 0; const sqlite3_tokenizer_module *pPorter = 0; +#ifndef SQLITE_DISABLE_FTS3_UNICODE + const sqlite3_tokenizer_module *pUnicode = 0; +#endif #ifdef SQLITE_ENABLE_ICU const sqlite3_tokenizer_module *pIcu = 0; sqlite3Fts3IcuTokenizerModule(&pIcu); #endif + +#ifndef SQLITE_DISABLE_FTS3_UNICODE + sqlite3Fts3UnicodeTokenizer(&pUnicode); +#endif + +#ifdef SQLITE_TEST + rc = sqlite3Fts3InitTerm(db); + if( rc!=SQLITE_OK ) return rc; +#endif + + rc = sqlite3Fts3InitAux(db); + if( rc!=SQLITE_OK ) return rc; sqlite3Fts3SimpleTokenizerModule(&pSimple); sqlite3Fts3PorterTokenizerModule(&pPorter); - /* Allocate and initialise the hash-table used to store tokenizers. */ + /* Allocate and initialize the hash-table used to store tokenizers. */ pHash = sqlite3_malloc(sizeof(Fts3Hash)); if( !pHash ){ rc = SQLITE_NOMEM; }else{ sqlite3Fts3HashInit(pHash, FTS3_HASH_STRING, 1); @@ -103051,10 +141892,14 @@ /* Load the built-in tokenizers into the hash table */ if( rc==SQLITE_OK ){ if( sqlite3Fts3HashInsert(pHash, "simple", 7, (void *)pSimple) || sqlite3Fts3HashInsert(pHash, "porter", 7, (void *)pPorter) + +#ifndef SQLITE_DISABLE_FTS3_UNICODE + || sqlite3Fts3HashInsert(pHash, "unicode61", 10, (void *)pUnicode) +#endif #ifdef SQLITE_ENABLE_ICU || (pIcu && sqlite3Fts3HashInsert(pHash, "icu", 4, (void *)pIcu)) #endif ){ rc = SQLITE_NOMEM; @@ -103073,11 +141918,12 @@ */ if( SQLITE_OK==rc && SQLITE_OK==(rc = sqlite3Fts3InitHashTable(db, pHash, "fts3_tokenizer")) && SQLITE_OK==(rc = sqlite3_overload_function(db, "snippet", -1)) && SQLITE_OK==(rc = sqlite3_overload_function(db, "offsets", 1)) - && SQLITE_OK==(rc = sqlite3_overload_function(db, "matchinfo", -1)) + && SQLITE_OK==(rc = sqlite3_overload_function(db, "matchinfo", 1)) + && SQLITE_OK==(rc = sqlite3_overload_function(db, "matchinfo", 2)) && SQLITE_OK==(rc = sqlite3_overload_function(db, "optimize", 1)) ){ rc = sqlite3_create_module_v2( db, "fts3", &fts3Module, (void *)pHash, hashDestroy ); @@ -103084,12 +141930,16 @@ if( rc==SQLITE_OK ){ rc = sqlite3_create_module_v2( db, "fts4", &fts3Module, (void *)pHash, 0 ); } + if( rc==SQLITE_OK ){ + rc = sqlite3Fts3InitTok(db, (void *)pHash); + } return rc; } + /* An error has occurred. Delete the hash table and return the error code. 
*/ assert( rc!=SQLITE_OK ); if( pHash ){ sqlite3Fts3HashClear(pHash); @@ -103096,12 +141946,1958 @@ sqlite3_free(pHash); } return rc; } +/* +** Allocate an Fts3MultiSegReader for each token in the expression headed +** by pExpr. +** +** An Fts3SegReader object is a cursor that can seek or scan a range of +** entries within a single segment b-tree. An Fts3MultiSegReader uses multiple +** Fts3SegReader objects internally to provide an interface to seek or scan +** within the union of all segments of a b-tree. Hence the name. +** +** If the allocated Fts3MultiSegReader just seeks to a single entry in a +** segment b-tree (if the term is not a prefix or it is a prefix for which +** there exists prefix b-tree of the right length) then it may be traversed +** and merged incrementally. Otherwise, it has to be merged into an in-memory +** doclist and then traversed. +*/ +static void fts3EvalAllocateReaders( + Fts3Cursor *pCsr, /* FTS cursor handle */ + Fts3Expr *pExpr, /* Allocate readers for this expression */ + int *pnToken, /* OUT: Total number of tokens in phrase. */ + int *pnOr, /* OUT: Total number of OR nodes in expr. */ + int *pRc /* IN/OUT: Error code */ +){ + if( pExpr && SQLITE_OK==*pRc ){ + if( pExpr->eType==FTSQUERY_PHRASE ){ + int i; + int nToken = pExpr->pPhrase->nToken; + *pnToken += nToken; + for(i=0; i<nToken; i++){ + Fts3PhraseToken *pToken = &pExpr->pPhrase->aToken[i]; + int rc = fts3TermSegReaderCursor(pCsr, + pToken->z, pToken->n, pToken->isPrefix, &pToken->pSegcsr + ); + if( rc!=SQLITE_OK ){ + *pRc = rc; + return; + } + } + assert( pExpr->pPhrase->iDoclistToken==0 ); + pExpr->pPhrase->iDoclistToken = -1; + }else{ + *pnOr += (pExpr->eType==FTSQUERY_OR); + fts3EvalAllocateReaders(pCsr, pExpr->pLeft, pnToken, pnOr, pRc); + fts3EvalAllocateReaders(pCsr, pExpr->pRight, pnToken, pnOr, pRc); + } + } +} + +/* +** Arguments pList/nList contain the doclist for token iToken of phrase p. +** It is merged into the main doclist stored in p->doclist.aAll/nAll. +** +** This function assumes that pList points to a buffer allocated using +** sqlite3_malloc(). This function takes responsibility for eventually +** freeing the buffer. +** +** SQLITE_OK is returned if successful, or SQLITE_NOMEM if an error occurs. +*/ +static int fts3EvalPhraseMergeToken( + Fts3Table *pTab, /* FTS Table pointer */ + Fts3Phrase *p, /* Phrase to merge pList/nList into */ + int iToken, /* Token pList/nList corresponds to */ + char *pList, /* Pointer to doclist */ + int nList /* Number of bytes in pList */ +){ + int rc = SQLITE_OK; + assert( iToken!=p->iDoclistToken ); + + if( pList==0 ){ + sqlite3_free(p->doclist.aAll); + p->doclist.aAll = 0; + p->doclist.nAll = 0; + } + + else if( p->iDoclistToken<0 ){ + p->doclist.aAll = pList; + p->doclist.nAll = nList; + } + + else if( p->doclist.aAll==0 ){ + sqlite3_free(pList); + } + + else { + char *pLeft; + char *pRight; + int nLeft; + int nRight; + int nDiff; + + if( p->iDoclistToken<iToken ){ + pLeft = p->doclist.aAll; + nLeft = p->doclist.nAll; + pRight = pList; + nRight = nList; + nDiff = iToken - p->iDoclistToken; + }else{ + pRight = p->doclist.aAll; + nRight = p->doclist.nAll; + pLeft = pList; + nLeft = nList; + nDiff = p->iDoclistToken - iToken; + } + + rc = fts3DoclistPhraseMerge( + pTab->bDescIdx, nDiff, pLeft, nLeft, &pRight, &nRight + ); + sqlite3_free(pLeft); + p->doclist.aAll = pRight; + p->doclist.nAll = nRight; + } + + if( iToken>p->iDoclistToken ) p->iDoclistToken = iToken; + return rc; +} + +/* +** Load the doclist for phrase p into p->doclist.aAll/nAll. 
The loaded doclist +** does not take deferred tokens into account. +** +** SQLITE_OK is returned if no error occurs, otherwise an SQLite error code. +*/ +static int fts3EvalPhraseLoad( + Fts3Cursor *pCsr, /* FTS Cursor handle */ + Fts3Phrase *p /* Phrase object */ +){ + Fts3Table *pTab = (Fts3Table *)pCsr->base.pVtab; + int iToken; + int rc = SQLITE_OK; + + for(iToken=0; rc==SQLITE_OK && iToken<p->nToken; iToken++){ + Fts3PhraseToken *pToken = &p->aToken[iToken]; + assert( pToken->pDeferred==0 || pToken->pSegcsr==0 ); + + if( pToken->pSegcsr ){ + int nThis = 0; + char *pThis = 0; + rc = fts3TermSelect(pTab, pToken, p->iColumn, &nThis, &pThis); + if( rc==SQLITE_OK ){ + rc = fts3EvalPhraseMergeToken(pTab, p, iToken, pThis, nThis); + } + } + assert( pToken->pSegcsr==0 ); + } + + return rc; +} + +/* +** This function is called on each phrase after the position lists for +** any deferred tokens have been loaded into memory. It updates the phrases +** current position list to include only those positions that are really +** instances of the phrase (after considering deferred tokens). If this +** means that the phrase does not appear in the current row, doclist.pList +** and doclist.nList are both zeroed. +** +** SQLITE_OK is returned if no error occurs, otherwise an SQLite error code. +*/ +static int fts3EvalDeferredPhrase(Fts3Cursor *pCsr, Fts3Phrase *pPhrase){ + int iToken; /* Used to iterate through phrase tokens */ + char *aPoslist = 0; /* Position list for deferred tokens */ + int nPoslist = 0; /* Number of bytes in aPoslist */ + int iPrev = -1; /* Token number of previous deferred token */ + + assert( pPhrase->doclist.bFreeList==0 ); + + for(iToken=0; iToken<pPhrase->nToken; iToken++){ + Fts3PhraseToken *pToken = &pPhrase->aToken[iToken]; + Fts3DeferredToken *pDeferred = pToken->pDeferred; + + if( pDeferred ){ + char *pList; + int nList; + int rc = sqlite3Fts3DeferredTokenList(pDeferred, &pList, &nList); + if( rc!=SQLITE_OK ) return rc; + + if( pList==0 ){ + sqlite3_free(aPoslist); + pPhrase->doclist.pList = 0; + pPhrase->doclist.nList = 0; + return SQLITE_OK; + + }else if( aPoslist==0 ){ + aPoslist = pList; + nPoslist = nList; + + }else{ + char *aOut = pList; + char *p1 = aPoslist; + char *p2 = aOut; + + assert( iPrev>=0 ); + fts3PoslistPhraseMerge(&aOut, iToken-iPrev, 0, 1, &p1, &p2); + sqlite3_free(aPoslist); + aPoslist = pList; + nPoslist = (int)(aOut - aPoslist); + if( nPoslist==0 ){ + sqlite3_free(aPoslist); + pPhrase->doclist.pList = 0; + pPhrase->doclist.nList = 0; + return SQLITE_OK; + } + } + iPrev = iToken; + } + } + + if( iPrev>=0 ){ + int nMaxUndeferred = pPhrase->iDoclistToken; + if( nMaxUndeferred<0 ){ + pPhrase->doclist.pList = aPoslist; + pPhrase->doclist.nList = nPoslist; + pPhrase->doclist.iDocid = pCsr->iPrevId; + pPhrase->doclist.bFreeList = 1; + }else{ + int nDistance; + char *p1; + char *p2; + char *aOut; + + if( nMaxUndeferred>iPrev ){ + p1 = aPoslist; + p2 = pPhrase->doclist.pList; + nDistance = nMaxUndeferred - iPrev; + }else{ + p1 = pPhrase->doclist.pList; + p2 = aPoslist; + nDistance = iPrev - nMaxUndeferred; + } + + aOut = (char *)sqlite3_malloc(nPoslist+8); + if( !aOut ){ + sqlite3_free(aPoslist); + return SQLITE_NOMEM; + } + + pPhrase->doclist.pList = aOut; + if( fts3PoslistPhraseMerge(&aOut, nDistance, 0, 1, &p1, &p2) ){ + pPhrase->doclist.bFreeList = 1; + pPhrase->doclist.nList = (int)(aOut - pPhrase->doclist.pList); + }else{ + sqlite3_free(aOut); + pPhrase->doclist.pList = 0; + pPhrase->doclist.nList = 0; + } + sqlite3_free(aPoslist); + } + } + + return 
SQLITE_OK; +} + +/* +** Maximum number of tokens a phrase may have to be considered for the +** incremental doclists strategy. +*/ +#define MAX_INCR_PHRASE_TOKENS 4 + +/* +** This function is called for each Fts3Phrase in a full-text query +** expression to initialize the mechanism for returning rows. Once this +** function has been called successfully on an Fts3Phrase, it may be +** used with fts3EvalPhraseNext() to iterate through the matching docids. +** +** If parameter bOptOk is true, then the phrase may (or may not) use the +** incremental loading strategy. Otherwise, the entire doclist is loaded into +** memory within this call. +** +** SQLITE_OK is returned if no error occurs, otherwise an SQLite error code. +*/ +static int fts3EvalPhraseStart(Fts3Cursor *pCsr, int bOptOk, Fts3Phrase *p){ + Fts3Table *pTab = (Fts3Table *)pCsr->base.pVtab; + int rc = SQLITE_OK; /* Error code */ + int i; + + /* Determine if doclists may be loaded from disk incrementally. This is + ** possible if the bOptOk argument is true, the FTS doclists will be + ** scanned in forward order, and the phrase consists of + ** MAX_INCR_PHRASE_TOKENS or fewer tokens, none of which are are "^first" + ** tokens or prefix tokens that cannot use a prefix-index. */ + int bHaveIncr = 0; + int bIncrOk = (bOptOk + && pCsr->bDesc==pTab->bDescIdx + && p->nToken<=MAX_INCR_PHRASE_TOKENS && p->nToken>0 +#ifdef SQLITE_TEST + && pTab->bNoIncrDoclist==0 +#endif + ); + for(i=0; bIncrOk==1 && i<p->nToken; i++){ + Fts3PhraseToken *pToken = &p->aToken[i]; + if( pToken->bFirst || (pToken->pSegcsr!=0 && !pToken->pSegcsr->bLookup) ){ + bIncrOk = 0; + } + if( pToken->pSegcsr ) bHaveIncr = 1; + } + + if( bIncrOk && bHaveIncr ){ + /* Use the incremental approach. */ + int iCol = (p->iColumn >= pTab->nColumn ? -1 : p->iColumn); + for(i=0; rc==SQLITE_OK && i<p->nToken; i++){ + Fts3PhraseToken *pToken = &p->aToken[i]; + Fts3MultiSegReader *pSegcsr = pToken->pSegcsr; + if( pSegcsr ){ + rc = sqlite3Fts3MsrIncrStart(pTab, pSegcsr, iCol, pToken->z, pToken->n); + } + } + p->bIncr = 1; + }else{ + /* Load the full doclist for the phrase into memory. */ + rc = fts3EvalPhraseLoad(pCsr, p); + p->bIncr = 0; + } + + assert( rc!=SQLITE_OK || p->nToken<1 || p->aToken[0].pSegcsr==0 || p->bIncr ); + return rc; +} + +/* +** This function is used to iterate backwards (from the end to start) +** through doclists. It is used by this module to iterate through phrase +** doclists in reverse and by the fts3_write.c module to iterate through +** pending-terms lists when writing to databases with "order=desc". +** +** The doclist may be sorted in ascending (parameter bDescIdx==0) or +** descending (parameter bDescIdx==1) order of docid. Regardless, this +** function iterates from the end of the doclist to the beginning. 
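+**
+** Purely as an illustrative example of the layout being walked (byte
+** values invented, not taken from any real database): a doclist holding
+** docids 10 and 14, each followed by a position list, is laid out as
+**
+**   0x0A <position list> 0x00   0x04 <position list> 0x00
+**
+** where the first varint is an absolute docid, each subsequent varint is
+** a delta from the previous docid (subtracted rather than added when
+** bDescIdx is true), and a 0x00 byte terminates each position list.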
+*/ +SQLITE_PRIVATE void sqlite3Fts3DoclistPrev( + int bDescIdx, /* True if the doclist is desc */ + char *aDoclist, /* Pointer to entire doclist */ + int nDoclist, /* Length of aDoclist in bytes */ + char **ppIter, /* IN/OUT: Iterator pointer */ + sqlite3_int64 *piDocid, /* IN/OUT: Docid pointer */ + int *pnList, /* OUT: List length pointer */ + u8 *pbEof /* OUT: End-of-file flag */ +){ + char *p = *ppIter; + + assert( nDoclist>0 ); + assert( *pbEof==0 ); + assert( p || *piDocid==0 ); + assert( !p || (p>aDoclist && p<&aDoclist[nDoclist]) ); + + if( p==0 ){ + sqlite3_int64 iDocid = 0; + char *pNext = 0; + char *pDocid = aDoclist; + char *pEnd = &aDoclist[nDoclist]; + int iMul = 1; + + while( pDocid<pEnd ){ + sqlite3_int64 iDelta; + pDocid += sqlite3Fts3GetVarint(pDocid, &iDelta); + iDocid += (iMul * iDelta); + pNext = pDocid; + fts3PoslistCopy(0, &pDocid); + while( pDocid<pEnd && *pDocid==0 ) pDocid++; + iMul = (bDescIdx ? -1 : 1); + } + + *pnList = (int)(pEnd - pNext); + *ppIter = pNext; + *piDocid = iDocid; + }else{ + int iMul = (bDescIdx ? -1 : 1); + sqlite3_int64 iDelta; + fts3GetReverseVarint(&p, aDoclist, &iDelta); + *piDocid -= (iMul * iDelta); + + if( p==aDoclist ){ + *pbEof = 1; + }else{ + char *pSave = p; + fts3ReversePoslist(aDoclist, &p); + *pnList = (int)(pSave - p); + } + *ppIter = p; + } +} + +/* +** Iterate forwards through a doclist. +*/ +SQLITE_PRIVATE void sqlite3Fts3DoclistNext( + int bDescIdx, /* True if the doclist is desc */ + char *aDoclist, /* Pointer to entire doclist */ + int nDoclist, /* Length of aDoclist in bytes */ + char **ppIter, /* IN/OUT: Iterator pointer */ + sqlite3_int64 *piDocid, /* IN/OUT: Docid pointer */ + u8 *pbEof /* OUT: End-of-file flag */ +){ + char *p = *ppIter; + + assert( nDoclist>0 ); + assert( *pbEof==0 ); + assert( p || *piDocid==0 ); + assert( !p || (p>=aDoclist && p<=&aDoclist[nDoclist]) ); + + if( p==0 ){ + p = aDoclist; + p += sqlite3Fts3GetVarint(p, piDocid); + }else{ + fts3PoslistCopy(0, &p); + while( p<&aDoclist[nDoclist] && *p==0 ) p++; + if( p>=&aDoclist[nDoclist] ){ + *pbEof = 1; + }else{ + sqlite3_int64 iVar; + p += sqlite3Fts3GetVarint(p, &iVar); + *piDocid += ((bDescIdx ? -1 : 1) * iVar); + } + } + + *ppIter = p; +} + +/* +** Advance the iterator pDL to the next entry in pDL->aAll/nAll. Set *pbEof +** to true if EOF is reached. +*/ +static void fts3EvalDlPhraseNext( + Fts3Table *pTab, + Fts3Doclist *pDL, + u8 *pbEof +){ + char *pIter; /* Used to iterate through aAll */ + char *pEnd = &pDL->aAll[pDL->nAll]; /* 1 byte past end of aAll */ + + if( pDL->pNextDocid ){ + pIter = pDL->pNextDocid; + }else{ + pIter = pDL->aAll; + } + + if( pIter>=pEnd ){ + /* We have already reached the end of this doclist. EOF. */ + *pbEof = 1; + }else{ + sqlite3_int64 iDelta; + pIter += sqlite3Fts3GetVarint(pIter, &iDelta); + if( pTab->bDescIdx==0 || pDL->pNextDocid==0 ){ + pDL->iDocid += iDelta; + }else{ + pDL->iDocid -= iDelta; + } + pDL->pList = pIter; + fts3PoslistCopy(0, &pIter); + pDL->nList = (int)(pIter - pDL->pList); + + /* pIter now points just past the 0x00 that terminates the position- + ** list for document pDL->iDocid. However, if this position-list was + ** edited in place by fts3EvalNearTrim(), then pIter may not actually + ** point to the start of the next docid value. The following line deals + ** with this case by advancing pIter past the zero-padding added by + ** fts3EvalNearTrim(). 
*/ + while( pIter<pEnd && *pIter==0 ) pIter++; + + pDL->pNextDocid = pIter; + assert( pIter>=&pDL->aAll[pDL->nAll] || *pIter ); + *pbEof = 0; + } +} + +/* +** Helper type used by fts3EvalIncrPhraseNext() and incrPhraseTokenNext(). +*/ +typedef struct TokenDoclist TokenDoclist; +struct TokenDoclist { + int bIgnore; + sqlite3_int64 iDocid; + char *pList; + int nList; +}; + +/* +** Token pToken is an incrementally loaded token that is part of a +** multi-token phrase. Advance it to the next matching document in the +** database and populate output variable *p with the details of the new +** entry. Or, if the iterator has reached EOF, set *pbEof to true. +** +** If an error occurs, return an SQLite error code. Otherwise, return +** SQLITE_OK. +*/ +static int incrPhraseTokenNext( + Fts3Table *pTab, /* Virtual table handle */ + Fts3Phrase *pPhrase, /* Phrase to advance token of */ + int iToken, /* Specific token to advance */ + TokenDoclist *p, /* OUT: Docid and doclist for new entry */ + u8 *pbEof /* OUT: True if iterator is at EOF */ +){ + int rc = SQLITE_OK; + + if( pPhrase->iDoclistToken==iToken ){ + assert( p->bIgnore==0 ); + assert( pPhrase->aToken[iToken].pSegcsr==0 ); + fts3EvalDlPhraseNext(pTab, &pPhrase->doclist, pbEof); + p->pList = pPhrase->doclist.pList; + p->nList = pPhrase->doclist.nList; + p->iDocid = pPhrase->doclist.iDocid; + }else{ + Fts3PhraseToken *pToken = &pPhrase->aToken[iToken]; + assert( pToken->pDeferred==0 ); + assert( pToken->pSegcsr || pPhrase->iDoclistToken>=0 ); + if( pToken->pSegcsr ){ + assert( p->bIgnore==0 ); + rc = sqlite3Fts3MsrIncrNext( + pTab, pToken->pSegcsr, &p->iDocid, &p->pList, &p->nList + ); + if( p->pList==0 ) *pbEof = 1; + }else{ + p->bIgnore = 1; + } + } + + return rc; +} + + +/* +** The phrase iterator passed as the second argument: +** +** * features at least one token that uses an incremental doclist, and +** +** * does not contain any deferred tokens. +** +** Advance it to the next matching documnent in the database and populate +** the Fts3Doclist.pList and nList fields. +** +** If there is no "next" entry and no error occurs, then *pbEof is set to +** 1 before returning. Otherwise, if no error occurs and the iterator is +** successfully advanced, *pbEof is set to 0. +** +** If an error occurs, return an SQLite error code. Otherwise, return +** SQLITE_OK. +*/ +static int fts3EvalIncrPhraseNext( + Fts3Cursor *pCsr, /* FTS Cursor handle */ + Fts3Phrase *p, /* Phrase object to advance to next docid */ + u8 *pbEof /* OUT: Set to 1 if EOF */ +){ + int rc = SQLITE_OK; + Fts3Doclist *pDL = &p->doclist; + Fts3Table *pTab = (Fts3Table *)pCsr->base.pVtab; + u8 bEof = 0; + + /* This is only called if it is guaranteed that the phrase has at least + ** one incremental token. In which case the bIncr flag is set. */ + assert( p->bIncr==1 ); + + if( p->nToken==1 && p->bIncr ){ + rc = sqlite3Fts3MsrIncrNext(pTab, p->aToken[0].pSegcsr, + &pDL->iDocid, &pDL->pList, &pDL->nList + ); + if( pDL->pList==0 ) bEof = 1; + }else{ + int bDescDoclist = pCsr->bDesc; + struct TokenDoclist a[MAX_INCR_PHRASE_TOKENS]; + + memset(a, 0, sizeof(a)); + assert( p->nToken<=MAX_INCR_PHRASE_TOKENS ); + assert( p->iDoclistToken<MAX_INCR_PHRASE_TOKENS ); + + while( bEof==0 ){ + int bMaxSet = 0; + sqlite3_int64 iMax = 0; /* Largest docid for all iterators */ + int i; /* Used to iterate through tokens */ + + /* Advance the iterator for each token in the phrase once. 
*/ + for(i=0; rc==SQLITE_OK && i<p->nToken && bEof==0; i++){ + rc = incrPhraseTokenNext(pTab, p, i, &a[i], &bEof); + if( a[i].bIgnore==0 && (bMaxSet==0 || DOCID_CMP(iMax, a[i].iDocid)<0) ){ + iMax = a[i].iDocid; + bMaxSet = 1; + } + } + assert( rc!=SQLITE_OK || (p->nToken>=1 && a[p->nToken-1].bIgnore==0) ); + assert( rc!=SQLITE_OK || bMaxSet ); + + /* Keep advancing iterators until they all point to the same document */ + for(i=0; i<p->nToken; i++){ + while( rc==SQLITE_OK && bEof==0 + && a[i].bIgnore==0 && DOCID_CMP(a[i].iDocid, iMax)<0 + ){ + rc = incrPhraseTokenNext(pTab, p, i, &a[i], &bEof); + if( DOCID_CMP(a[i].iDocid, iMax)>0 ){ + iMax = a[i].iDocid; + i = 0; + } + } + } + + /* Check if the current entries really are a phrase match */ + if( bEof==0 ){ + int nList = 0; + int nByte = a[p->nToken-1].nList; + char *aDoclist = sqlite3_malloc(nByte+1); + if( !aDoclist ) return SQLITE_NOMEM; + memcpy(aDoclist, a[p->nToken-1].pList, nByte+1); + + for(i=0; i<(p->nToken-1); i++){ + if( a[i].bIgnore==0 ){ + char *pL = a[i].pList; + char *pR = aDoclist; + char *pOut = aDoclist; + int nDist = p->nToken-1-i; + int res = fts3PoslistPhraseMerge(&pOut, nDist, 0, 1, &pL, &pR); + if( res==0 ) break; + nList = (int)(pOut - aDoclist); + } + } + if( i==(p->nToken-1) ){ + pDL->iDocid = iMax; + pDL->pList = aDoclist; + pDL->nList = nList; + pDL->bFreeList = 1; + break; + } + sqlite3_free(aDoclist); + } + } + } + + *pbEof = bEof; + return rc; +} + +/* +** Attempt to move the phrase iterator to point to the next matching docid. +** If an error occurs, return an SQLite error code. Otherwise, return +** SQLITE_OK. +** +** If there is no "next" entry and no error occurs, then *pbEof is set to +** 1 before returning. Otherwise, if no error occurs and the iterator is +** successfully advanced, *pbEof is set to 0. +*/ +static int fts3EvalPhraseNext( + Fts3Cursor *pCsr, /* FTS Cursor handle */ + Fts3Phrase *p, /* Phrase object to advance to next docid */ + u8 *pbEof /* OUT: Set to 1 if EOF */ +){ + int rc = SQLITE_OK; + Fts3Doclist *pDL = &p->doclist; + Fts3Table *pTab = (Fts3Table *)pCsr->base.pVtab; + + if( p->bIncr ){ + rc = fts3EvalIncrPhraseNext(pCsr, p, pbEof); + }else if( pCsr->bDesc!=pTab->bDescIdx && pDL->nAll ){ + sqlite3Fts3DoclistPrev(pTab->bDescIdx, pDL->aAll, pDL->nAll, + &pDL->pNextDocid, &pDL->iDocid, &pDL->nList, pbEof + ); + pDL->pList = pDL->pNextDocid; + }else{ + fts3EvalDlPhraseNext(pTab, pDL, pbEof); + } + + return rc; +} + +/* +** +** If *pRc is not SQLITE_OK when this function is called, it is a no-op. +** Otherwise, fts3EvalPhraseStart() is called on all phrases within the +** expression. Also the Fts3Expr.bDeferred variable is set to true for any +** expressions for which all descendent tokens are deferred. +** +** If parameter bOptOk is zero, then it is guaranteed that the +** Fts3Phrase.doclist.aAll/nAll variables contain the entire doclist for +** each phrase in the expression (subject to deferred token processing). +** Or, if bOptOk is non-zero, then one or more tokens within the expression +** may be loaded incrementally, meaning doclist.aAll/nAll is not available. +** +** If an error occurs within this function, *pRc is set to an SQLite error +** code before returning. 
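+**
+** For illustration only (an invented query, not taken from this patch):
+** in the expression tree for 'a AND "b c"', if both tokens of the phrase
+** "b c" have been deferred but "a" has not, then bDeferred is set only on
+** the "b c" phrase node; the AND node is left clear because one of its
+** children ("a") is not deferred.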
+*/ +static void fts3EvalStartReaders( + Fts3Cursor *pCsr, /* FTS Cursor handle */ + Fts3Expr *pExpr, /* Expression to initialize phrases in */ + int *pRc /* IN/OUT: Error code */ +){ + if( pExpr && SQLITE_OK==*pRc ){ + if( pExpr->eType==FTSQUERY_PHRASE ){ + int nToken = pExpr->pPhrase->nToken; + if( nToken ){ + int i; + for(i=0; i<nToken; i++){ + if( pExpr->pPhrase->aToken[i].pDeferred==0 ) break; + } + pExpr->bDeferred = (i==nToken); + } + *pRc = fts3EvalPhraseStart(pCsr, 1, pExpr->pPhrase); + }else{ + fts3EvalStartReaders(pCsr, pExpr->pLeft, pRc); + fts3EvalStartReaders(pCsr, pExpr->pRight, pRc); + pExpr->bDeferred = (pExpr->pLeft->bDeferred && pExpr->pRight->bDeferred); + } + } +} + +/* +** An array of the following structures is assembled as part of the process +** of selecting tokens to defer before the query starts executing (as part +** of the xFilter() method). There is one element in the array for each +** token in the FTS expression. +** +** Tokens are divided into AND/NEAR clusters. All tokens in a cluster belong +** to phrases that are connected only by AND and NEAR operators (not OR or +** NOT). When determining tokens to defer, each AND/NEAR cluster is considered +** separately. The root of a tokens AND/NEAR cluster is stored in +** Fts3TokenAndCost.pRoot. +*/ +typedef struct Fts3TokenAndCost Fts3TokenAndCost; +struct Fts3TokenAndCost { + Fts3Phrase *pPhrase; /* The phrase the token belongs to */ + int iToken; /* Position of token in phrase */ + Fts3PhraseToken *pToken; /* The token itself */ + Fts3Expr *pRoot; /* Root of NEAR/AND cluster */ + int nOvfl; /* Number of overflow pages to load doclist */ + int iCol; /* The column the token must match */ +}; + +/* +** This function is used to populate an allocated Fts3TokenAndCost array. +** +** If *pRc is not SQLITE_OK when this function is called, it is a no-op. +** Otherwise, if an error occurs during execution, *pRc is set to an +** SQLite error code. +*/ +static void fts3EvalTokenCosts( + Fts3Cursor *pCsr, /* FTS Cursor handle */ + Fts3Expr *pRoot, /* Root of current AND/NEAR cluster */ + Fts3Expr *pExpr, /* Expression to consider */ + Fts3TokenAndCost **ppTC, /* Write new entries to *(*ppTC)++ */ + Fts3Expr ***ppOr, /* Write new OR root to *(*ppOr)++ */ + int *pRc /* IN/OUT: Error code */ +){ + if( *pRc==SQLITE_OK ){ + if( pExpr->eType==FTSQUERY_PHRASE ){ + Fts3Phrase *pPhrase = pExpr->pPhrase; + int i; + for(i=0; *pRc==SQLITE_OK && i<pPhrase->nToken; i++){ + Fts3TokenAndCost *pTC = (*ppTC)++; + pTC->pPhrase = pPhrase; + pTC->iToken = i; + pTC->pRoot = pRoot; + pTC->pToken = &pPhrase->aToken[i]; + pTC->iCol = pPhrase->iColumn; + *pRc = sqlite3Fts3MsrOvfl(pCsr, pTC->pToken->pSegcsr, &pTC->nOvfl); + } + }else if( pExpr->eType!=FTSQUERY_NOT ){ + assert( pExpr->eType==FTSQUERY_OR + || pExpr->eType==FTSQUERY_AND + || pExpr->eType==FTSQUERY_NEAR + ); + assert( pExpr->pLeft && pExpr->pRight ); + if( pExpr->eType==FTSQUERY_OR ){ + pRoot = pExpr->pLeft; + **ppOr = pRoot; + (*ppOr)++; + } + fts3EvalTokenCosts(pCsr, pRoot, pExpr->pLeft, ppTC, ppOr, pRc); + if( pExpr->eType==FTSQUERY_OR ){ + pRoot = pExpr->pRight; + **ppOr = pRoot; + (*ppOr)++; + } + fts3EvalTokenCosts(pCsr, pRoot, pExpr->pRight, ppTC, ppOr, pRc); + } + } +} + +/* +** Determine the average document (row) size in pages. If successful, +** write this value to *pnPage and return SQLITE_OK. Otherwise, return +** an SQLite error code. +** +** The average document size in pages is calculated by first calculating +** determining the average size in bytes, B. 
If B is less than the amount +** of data that will fit on a single leaf page of an intkey table in +** this database, then the average docsize is 1. Otherwise, it is 1 plus +** the number of overflow pages consumed by a record B bytes in size. +*/ +static int fts3EvalAverageDocsize(Fts3Cursor *pCsr, int *pnPage){ + if( pCsr->nRowAvg==0 ){ + /* The average document size, which is required to calculate the cost + ** of each doclist, has not yet been determined. Read the required + ** data from the %_stat table to calculate it. + ** + ** Entry 0 of the %_stat table is a blob containing (nCol+1) FTS3 + ** varints, where nCol is the number of columns in the FTS3 table. + ** The first varint is the number of documents currently stored in + ** the table. The following nCol varints contain the total amount of + ** data stored in all rows of each column of the table, from left + ** to right. + */ + int rc; + Fts3Table *p = (Fts3Table*)pCsr->base.pVtab; + sqlite3_stmt *pStmt; + sqlite3_int64 nDoc = 0; + sqlite3_int64 nByte = 0; + const char *pEnd; + const char *a; + + rc = sqlite3Fts3SelectDoctotal(p, &pStmt); + if( rc!=SQLITE_OK ) return rc; + a = sqlite3_column_blob(pStmt, 0); + assert( a ); + + pEnd = &a[sqlite3_column_bytes(pStmt, 0)]; + a += sqlite3Fts3GetVarint(a, &nDoc); + while( a<pEnd ){ + a += sqlite3Fts3GetVarint(a, &nByte); + } + if( nDoc==0 || nByte==0 ){ + sqlite3_reset(pStmt); + return FTS_CORRUPT_VTAB; + } + + pCsr->nDoc = nDoc; + pCsr->nRowAvg = (int)(((nByte / nDoc) + p->nPgsz) / p->nPgsz); + assert( pCsr->nRowAvg>0 ); + rc = sqlite3_reset(pStmt); + if( rc!=SQLITE_OK ) return rc; + } + + *pnPage = pCsr->nRowAvg; + return SQLITE_OK; +} + +/* +** This function is called to select the tokens (if any) that will be +** deferred. The array aTC[] has already been populated when this is +** called. +** +** This function is called once for each AND/NEAR cluster in the +** expression. Each invocation determines which tokens to defer within +** the cluster with root node pRoot. See comments above the definition +** of struct Fts3TokenAndCost for more details. +** +** If no error occurs, SQLITE_OK is returned and sqlite3Fts3DeferToken() +** called on each token to defer. Otherwise, an SQLite error code is +** returned. +*/ +static int fts3EvalSelectDeferred( + Fts3Cursor *pCsr, /* FTS Cursor handle */ + Fts3Expr *pRoot, /* Consider tokens with this root node */ + Fts3TokenAndCost *aTC, /* Array of expression tokens and costs */ + int nTC /* Number of entries in aTC[] */ +){ + Fts3Table *pTab = (Fts3Table *)pCsr->base.pVtab; + int nDocSize = 0; /* Number of pages per doc loaded */ + int rc = SQLITE_OK; /* Return code */ + int ii; /* Iterator variable for various purposes */ + int nOvfl = 0; /* Total overflow pages used by doclists */ + int nToken = 0; /* Total number of tokens in cluster */ + + int nMinEst = 0; /* The minimum count for any phrase so far. */ + int nLoad4 = 1; /* (Phrases that will be loaded)^4. */ + + /* Tokens are never deferred for FTS tables created using the content=xxx + ** option. The reason being that it is not guaranteed that the content + ** table actually contains the same data as the index. To prevent this from + ** causing any problems, the deferred token optimization is completely + ** disabled for content=xxx tables. */ + if( pTab->zContentTbl ){ + return SQLITE_OK; + } + + /* Count the tokens in this AND/NEAR cluster. If none of the doclists + ** associated with the tokens spill onto overflow pages, or if there is + ** only 1 token, exit early. 
No tokens to defer in this case. */ + for(ii=0; ii<nTC; ii++){ + if( aTC[ii].pRoot==pRoot ){ + nOvfl += aTC[ii].nOvfl; + nToken++; + } + } + if( nOvfl==0 || nToken<2 ) return SQLITE_OK; + + /* Obtain the average docsize (in pages). */ + rc = fts3EvalAverageDocsize(pCsr, &nDocSize); + assert( rc!=SQLITE_OK || nDocSize>0 ); + + + /* Iterate through all tokens in this AND/NEAR cluster, in ascending order + ** of the number of overflow pages that will be loaded by the pager layer + ** to retrieve the entire doclist for the token from the full-text index. + ** Load the doclists for tokens that are either: + ** + ** a. The cheapest token in the entire query (i.e. the one visited by the + ** first iteration of this loop), or + ** + ** b. Part of a multi-token phrase. + ** + ** After each token doclist is loaded, merge it with the others from the + ** same phrase and count the number of documents that the merged doclist + ** contains. Set variable "nMinEst" to the smallest number of documents in + ** any phrase doclist for which 1 or more token doclists have been loaded. + ** Let nOther be the number of other phrases for which it is certain that + ** one or more tokens will not be deferred. + ** + ** Then, for each token, defer it if loading the doclist would result in + ** loading N or more overflow pages into memory, where N is computed as: + ** + ** (nMinEst + 4^nOther - 1) / (4^nOther) + */ + for(ii=0; ii<nToken && rc==SQLITE_OK; ii++){ + int iTC; /* Used to iterate through aTC[] array. */ + Fts3TokenAndCost *pTC = 0; /* Set to cheapest remaining token. */ + + /* Set pTC to point to the cheapest remaining token. */ + for(iTC=0; iTC<nTC; iTC++){ + if( aTC[iTC].pToken && aTC[iTC].pRoot==pRoot + && (!pTC || aTC[iTC].nOvfl<pTC->nOvfl) + ){ + pTC = &aTC[iTC]; + } + } + assert( pTC ); + + if( ii && pTC->nOvfl>=((nMinEst+(nLoad4/4)-1)/(nLoad4/4))*nDocSize ){ + /* The number of overflow pages to load for this (and therefore all + ** subsequent) tokens is greater than the estimated number of pages + ** that will be loaded if all subsequent tokens are deferred. + */ + Fts3PhraseToken *pToken = pTC->pToken; + rc = sqlite3Fts3DeferToken(pCsr, pToken, pTC->iCol); + fts3SegReaderCursorFree(pToken->pSegcsr); + pToken->pSegcsr = 0; + }else{ + /* Set nLoad4 to the value of (4^nOther) for the next iteration of the + ** for-loop. Except, limit the value to 2^24 to prevent it from + ** overflowing the 32-bit integer it is stored in. */ + if( ii<12 ) nLoad4 = nLoad4*4; + + if( ii==0 || (pTC->pPhrase->nToken>1 && ii!=nToken-1) ){ + /* Either this is the cheapest token in the entire query, or it is + ** part of a multi-token phrase. Either way, the entire doclist will + ** (eventually) be loaded into memory. It may as well be now. */ + Fts3PhraseToken *pToken = pTC->pToken; + int nList = 0; + char *pList = 0; + rc = fts3TermSelect(pTab, pToken, pTC->iCol, &nList, &pList); + assert( rc==SQLITE_OK || pList==0 ); + if( rc==SQLITE_OK ){ + rc = fts3EvalPhraseMergeToken( + pTab, pTC->pPhrase, pTC->iToken,pList,nList + ); + } + if( rc==SQLITE_OK ){ + int nCount; + nCount = fts3DoclistCountDocids( + pTC->pPhrase->doclist.aAll, pTC->pPhrase->doclist.nAll + ); + if( ii==0 || nCount<nMinEst ) nMinEst = nCount; + } + } + } + pTC->pToken = 0; + } + + return rc; +} + +/* +** This function is called from within the xFilter method. It initializes +** the full-text query currently stored in pCsr->pExpr. 
To iterate through +** the results of a query, the caller does: +** +** fts3EvalStart(pCsr); +** while( 1 ){ +** fts3EvalNext(pCsr); +** if( pCsr->bEof ) break; +** ... return row pCsr->iPrevId to the caller ... +** } +*/ +static int fts3EvalStart(Fts3Cursor *pCsr){ + Fts3Table *pTab = (Fts3Table *)pCsr->base.pVtab; + int rc = SQLITE_OK; + int nToken = 0; + int nOr = 0; + + /* Allocate a MultiSegReader for each token in the expression. */ + fts3EvalAllocateReaders(pCsr, pCsr->pExpr, &nToken, &nOr, &rc); + + /* Determine which, if any, tokens in the expression should be deferred. */ +#ifndef SQLITE_DISABLE_FTS4_DEFERRED + if( rc==SQLITE_OK && nToken>1 && pTab->bFts4 ){ + Fts3TokenAndCost *aTC; + Fts3Expr **apOr; + aTC = (Fts3TokenAndCost *)sqlite3_malloc( + sizeof(Fts3TokenAndCost) * nToken + + sizeof(Fts3Expr *) * nOr * 2 + ); + apOr = (Fts3Expr **)&aTC[nToken]; + + if( !aTC ){ + rc = SQLITE_NOMEM; + }else{ + int ii; + Fts3TokenAndCost *pTC = aTC; + Fts3Expr **ppOr = apOr; + + fts3EvalTokenCosts(pCsr, 0, pCsr->pExpr, &pTC, &ppOr, &rc); + nToken = (int)(pTC-aTC); + nOr = (int)(ppOr-apOr); + + if( rc==SQLITE_OK ){ + rc = fts3EvalSelectDeferred(pCsr, 0, aTC, nToken); + for(ii=0; rc==SQLITE_OK && ii<nOr; ii++){ + rc = fts3EvalSelectDeferred(pCsr, apOr[ii], aTC, nToken); + } + } + + sqlite3_free(aTC); + } + } +#endif + + fts3EvalStartReaders(pCsr, pCsr->pExpr, &rc); + return rc; +} + +/* +** Invalidate the current position list for phrase pPhrase. +*/ +static void fts3EvalInvalidatePoslist(Fts3Phrase *pPhrase){ + if( pPhrase->doclist.bFreeList ){ + sqlite3_free(pPhrase->doclist.pList); + } + pPhrase->doclist.pList = 0; + pPhrase->doclist.nList = 0; + pPhrase->doclist.bFreeList = 0; +} + +/* +** This function is called to edit the position list associated with +** the phrase object passed as the fifth argument according to a NEAR +** condition. For example: +** +** abc NEAR/5 "def ghi" +** +** Parameter nNear is passed the NEAR distance of the expression (5 in +** the example above). When this function is called, *paPoslist points to +** the position list, and *pnToken is the number of phrase tokens in, the +** phrase on the other side of the NEAR operator to pPhrase. For example, +** if pPhrase refers to the "def ghi" phrase, then *paPoslist points to +** the position list associated with phrase "abc". +** +** All positions in the pPhrase position list that are not sufficiently +** close to a position in the *paPoslist position list are removed. If this +** leaves 0 positions, zero is returned. Otherwise, non-zero. +** +** Before returning, *paPoslist is set to point to the position lsit +** associated with pPhrase. And *pnToken is set to the number of tokens in +** pPhrase. +*/ +static int fts3EvalNearTrim( + int nNear, /* NEAR distance. As in "NEAR/nNear". 
*/ + char *aTmp, /* Temporary space to use */ + char **paPoslist, /* IN/OUT: Position list */ + int *pnToken, /* IN/OUT: Tokens in phrase of *paPoslist */ + Fts3Phrase *pPhrase /* The phrase object to trim the doclist of */ +){ + int nParam1 = nNear + pPhrase->nToken; + int nParam2 = nNear + *pnToken; + int nNew; + char *p2; + char *pOut; + int res; + + assert( pPhrase->doclist.pList ); + + p2 = pOut = pPhrase->doclist.pList; + res = fts3PoslistNearMerge( + &pOut, aTmp, nParam1, nParam2, paPoslist, &p2 + ); + if( res ){ + nNew = (int)(pOut - pPhrase->doclist.pList) - 1; + assert( pPhrase->doclist.pList[nNew]=='\0' ); + assert( nNew<=pPhrase->doclist.nList && nNew>0 ); + memset(&pPhrase->doclist.pList[nNew], 0, pPhrase->doclist.nList - nNew); + pPhrase->doclist.nList = nNew; + *paPoslist = pPhrase->doclist.pList; + *pnToken = pPhrase->nToken; + } + + return res; +} + +/* +** This function is a no-op if *pRc is other than SQLITE_OK when it is called. +** Otherwise, it advances the expression passed as the second argument to +** point to the next matching row in the database. Expressions iterate through +** matching rows in docid order. Ascending order if Fts3Cursor.bDesc is zero, +** or descending if it is non-zero. +** +** If an error occurs, *pRc is set to an SQLite error code. Otherwise, if +** successful, the following variables in pExpr are set: +** +** Fts3Expr.bEof (non-zero if EOF - there is no next row) +** Fts3Expr.iDocid (valid if bEof==0. The docid of the next row) +** +** If the expression is of type FTSQUERY_PHRASE, and the expression is not +** at EOF, then the following variables are populated with the position list +** for the phrase for the visited row: +** +** FTs3Expr.pPhrase->doclist.nList (length of pList in bytes) +** FTs3Expr.pPhrase->doclist.pList (pointer to position list) +** +** It says above that this function advances the expression to the next +** matching row. This is usually true, but there are the following exceptions: +** +** 1. Deferred tokens are not taken into account. If a phrase consists +** entirely of deferred tokens, it is assumed to match every row in +** the db. In this case the position-list is not populated at all. +** +** Or, if a phrase contains one or more deferred tokens and one or +** more non-deferred tokens, then the expression is advanced to the +** next possible match, considering only non-deferred tokens. In other +** words, if the phrase is "A B C", and "B" is deferred, the expression +** is advanced to the next row that contains an instance of "A * C", +** where "*" may match any single token. The position list in this case +** is populated as for "A * C" before returning. +** +** 2. NEAR is treated as AND. If the expression is "x NEAR y", it is +** advanced to point to the next row that matches "x AND y". +** +** See sqlite3Fts3EvalTestDeferred() for details on testing if a row is +** really a match, taking into account deferred tokens and NEAR operators. +*/ +static void fts3EvalNextRow( + Fts3Cursor *pCsr, /* FTS Cursor handle */ + Fts3Expr *pExpr, /* Expr. to advance to next matching row */ + int *pRc /* IN/OUT: Error code */ +){ + if( *pRc==SQLITE_OK ){ + int bDescDoclist = pCsr->bDesc; /* Used by DOCID_CMP() macro */ + assert( pExpr->bEof==0 ); + pExpr->bStart = 1; + + switch( pExpr->eType ){ + case FTSQUERY_NEAR: + case FTSQUERY_AND: { + Fts3Expr *pLeft = pExpr->pLeft; + Fts3Expr *pRight = pExpr->pRight; + assert( !pLeft->bDeferred || !pRight->bDeferred ); + + if( pLeft->bDeferred ){ + /* LHS is entirely deferred. 
So we assume it matches every row. + ** Advance the RHS iterator to find the next row visited. */ + fts3EvalNextRow(pCsr, pRight, pRc); + pExpr->iDocid = pRight->iDocid; + pExpr->bEof = pRight->bEof; + }else if( pRight->bDeferred ){ + /* RHS is entirely deferred. So we assume it matches every row. + ** Advance the LHS iterator to find the next row visited. */ + fts3EvalNextRow(pCsr, pLeft, pRc); + pExpr->iDocid = pLeft->iDocid; + pExpr->bEof = pLeft->bEof; + }else{ + /* Neither the RHS or LHS are deferred. */ + fts3EvalNextRow(pCsr, pLeft, pRc); + fts3EvalNextRow(pCsr, pRight, pRc); + while( !pLeft->bEof && !pRight->bEof && *pRc==SQLITE_OK ){ + sqlite3_int64 iDiff = DOCID_CMP(pLeft->iDocid, pRight->iDocid); + if( iDiff==0 ) break; + if( iDiff<0 ){ + fts3EvalNextRow(pCsr, pLeft, pRc); + }else{ + fts3EvalNextRow(pCsr, pRight, pRc); + } + } + pExpr->iDocid = pLeft->iDocid; + pExpr->bEof = (pLeft->bEof || pRight->bEof); + if( pExpr->eType==FTSQUERY_NEAR && pExpr->bEof ){ + if( pRight->pPhrase && pRight->pPhrase->doclist.aAll ){ + Fts3Doclist *pDl = &pRight->pPhrase->doclist; + while( *pRc==SQLITE_OK && pRight->bEof==0 ){ + memset(pDl->pList, 0, pDl->nList); + fts3EvalNextRow(pCsr, pRight, pRc); + } + } + if( pLeft->pPhrase && pLeft->pPhrase->doclist.aAll ){ + Fts3Doclist *pDl = &pLeft->pPhrase->doclist; + while( *pRc==SQLITE_OK && pLeft->bEof==0 ){ + memset(pDl->pList, 0, pDl->nList); + fts3EvalNextRow(pCsr, pLeft, pRc); + } + } + } + } + break; + } + + case FTSQUERY_OR: { + Fts3Expr *pLeft = pExpr->pLeft; + Fts3Expr *pRight = pExpr->pRight; + sqlite3_int64 iCmp = DOCID_CMP(pLeft->iDocid, pRight->iDocid); + + assert( pLeft->bStart || pLeft->iDocid==pRight->iDocid ); + assert( pRight->bStart || pLeft->iDocid==pRight->iDocid ); + + if( pRight->bEof || (pLeft->bEof==0 && iCmp<0) ){ + fts3EvalNextRow(pCsr, pLeft, pRc); + }else if( pLeft->bEof || (pRight->bEof==0 && iCmp>0) ){ + fts3EvalNextRow(pCsr, pRight, pRc); + }else{ + fts3EvalNextRow(pCsr, pLeft, pRc); + fts3EvalNextRow(pCsr, pRight, pRc); + } + + pExpr->bEof = (pLeft->bEof && pRight->bEof); + iCmp = DOCID_CMP(pLeft->iDocid, pRight->iDocid); + if( pRight->bEof || (pLeft->bEof==0 && iCmp<0) ){ + pExpr->iDocid = pLeft->iDocid; + }else{ + pExpr->iDocid = pRight->iDocid; + } + + break; + } + + case FTSQUERY_NOT: { + Fts3Expr *pLeft = pExpr->pLeft; + Fts3Expr *pRight = pExpr->pRight; + + if( pRight->bStart==0 ){ + fts3EvalNextRow(pCsr, pRight, pRc); + assert( *pRc!=SQLITE_OK || pRight->bStart ); + } + + fts3EvalNextRow(pCsr, pLeft, pRc); + if( pLeft->bEof==0 ){ + while( !*pRc + && !pRight->bEof + && DOCID_CMP(pLeft->iDocid, pRight->iDocid)>0 + ){ + fts3EvalNextRow(pCsr, pRight, pRc); + } + } + pExpr->iDocid = pLeft->iDocid; + pExpr->bEof = pLeft->bEof; + break; + } + + default: { + Fts3Phrase *pPhrase = pExpr->pPhrase; + fts3EvalInvalidatePoslist(pPhrase); + *pRc = fts3EvalPhraseNext(pCsr, pPhrase, &pExpr->bEof); + pExpr->iDocid = pPhrase->doclist.iDocid; + break; + } + } + } +} + +/* +** If *pRc is not SQLITE_OK, or if pExpr is not the root node of a NEAR +** cluster, then this function returns 1 immediately. +** +** Otherwise, it checks if the current row really does match the NEAR +** expression, using the data currently stored in the position lists +** (Fts3Expr->pPhrase.doclist.pList/nList) for each phrase in the expression. 
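+**
+** For illustration (an editor-added example, not from the upstream comment):
+** with a MATCH expression such as 'a NEAR/2 b', a row whose text is
+** "a x y b" can satisfy the NEAR constraint, since only two tokens separate
+** the two phrases, while a row containing "a x y z b" cannot, so for the
+** latter row this function would return 0.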
+** +** If the current row is a match, the position list associated with each +** phrase in the NEAR expression is edited in place to contain only those +** phrase instances sufficiently close to their peers to satisfy all NEAR +** constraints. In this case it returns 1. If the NEAR expression does not +** match the current row, 0 is returned. The position lists may or may not +** be edited if 0 is returned. +*/ +static int fts3EvalNearTest(Fts3Expr *pExpr, int *pRc){ + int res = 1; + + /* The following block runs if pExpr is the root of a NEAR query. + ** For example, the query: + ** + ** "w" NEAR "x" NEAR "y" NEAR "z" + ** + ** which is represented in tree form as: + ** + ** | + ** +--NEAR--+ <-- root of NEAR query + ** | | + ** +--NEAR--+ "z" + ** | | + ** +--NEAR--+ "y" + ** | | + ** "w" "x" + ** + ** The right-hand child of a NEAR node is always a phrase. The + ** left-hand child may be either a phrase or a NEAR node. There are + ** no exceptions to this - it's the way the parser in fts3_expr.c works. + */ + if( *pRc==SQLITE_OK + && pExpr->eType==FTSQUERY_NEAR + && pExpr->bEof==0 + && (pExpr->pParent==0 || pExpr->pParent->eType!=FTSQUERY_NEAR) + ){ + Fts3Expr *p; + int nTmp = 0; /* Bytes of temp space */ + char *aTmp; /* Temp space for PoslistNearMerge() */ + + /* Allocate temporary working space. */ + for(p=pExpr; p->pLeft; p=p->pLeft){ + nTmp += p->pRight->pPhrase->doclist.nList; + } + nTmp += p->pPhrase->doclist.nList; + if( nTmp==0 ){ + res = 0; + }else{ + aTmp = sqlite3_malloc(nTmp*2); + if( !aTmp ){ + *pRc = SQLITE_NOMEM; + res = 0; + }else{ + char *aPoslist = p->pPhrase->doclist.pList; + int nToken = p->pPhrase->nToken; + + for(p=p->pParent;res && p && p->eType==FTSQUERY_NEAR; p=p->pParent){ + Fts3Phrase *pPhrase = p->pRight->pPhrase; + int nNear = p->nNear; + res = fts3EvalNearTrim(nNear, aTmp, &aPoslist, &nToken, pPhrase); + } + + aPoslist = pExpr->pRight->pPhrase->doclist.pList; + nToken = pExpr->pRight->pPhrase->nToken; + for(p=pExpr->pLeft; p && res; p=p->pLeft){ + int nNear; + Fts3Phrase *pPhrase; + assert( p->pParent && p->pParent->pLeft==p ); + nNear = p->pParent->nNear; + pPhrase = ( + p->eType==FTSQUERY_NEAR ? p->pRight->pPhrase : p->pPhrase + ); + res = fts3EvalNearTrim(nNear, aTmp, &aPoslist, &nToken, pPhrase); + } + } + + sqlite3_free(aTmp); + } + } + + return res; +} + +/* +** This function is a helper function for sqlite3Fts3EvalTestDeferred(). +** Assuming no error occurs or has occurred, It returns non-zero if the +** expression passed as the second argument matches the row that pCsr +** currently points to, or zero if it does not. +** +** If *pRc is not SQLITE_OK when this function is called, it is a no-op. +** If an error occurs during execution of this function, *pRc is set to +** the appropriate SQLite error code. In this case the returned value is +** undefined. +*/ +static int fts3EvalTestExpr( + Fts3Cursor *pCsr, /* FTS cursor handle */ + Fts3Expr *pExpr, /* Expr to test. May or may not be root. */ + int *pRc /* IN/OUT: Error code */ +){ + int bHit = 1; /* Return value */ + if( *pRc==SQLITE_OK ){ + switch( pExpr->eType ){ + case FTSQUERY_NEAR: + case FTSQUERY_AND: + bHit = ( + fts3EvalTestExpr(pCsr, pExpr->pLeft, pRc) + && fts3EvalTestExpr(pCsr, pExpr->pRight, pRc) + && fts3EvalNearTest(pExpr, pRc) + ); + + /* If the NEAR expression does not match any rows, zero the doclist for + ** all phrases involved in the NEAR. 
This is because the snippet(), + ** offsets() and matchinfo() functions are not supposed to recognize + ** any instances of phrases that are part of unmatched NEAR queries. + ** For example if this expression: + ** + ** ... MATCH 'a OR (b NEAR c)' + ** + ** is matched against a row containing: + ** + ** 'a b d e' + ** + ** then any snippet() should ony highlight the "a" term, not the "b" + ** (as "b" is part of a non-matching NEAR clause). + */ + if( bHit==0 + && pExpr->eType==FTSQUERY_NEAR + && (pExpr->pParent==0 || pExpr->pParent->eType!=FTSQUERY_NEAR) + ){ + Fts3Expr *p; + for(p=pExpr; p->pPhrase==0; p=p->pLeft){ + if( p->pRight->iDocid==pCsr->iPrevId ){ + fts3EvalInvalidatePoslist(p->pRight->pPhrase); + } + } + if( p->iDocid==pCsr->iPrevId ){ + fts3EvalInvalidatePoslist(p->pPhrase); + } + } + + break; + + case FTSQUERY_OR: { + int bHit1 = fts3EvalTestExpr(pCsr, pExpr->pLeft, pRc); + int bHit2 = fts3EvalTestExpr(pCsr, pExpr->pRight, pRc); + bHit = bHit1 || bHit2; + break; + } + + case FTSQUERY_NOT: + bHit = ( + fts3EvalTestExpr(pCsr, pExpr->pLeft, pRc) + && !fts3EvalTestExpr(pCsr, pExpr->pRight, pRc) + ); + break; + + default: { +#ifndef SQLITE_DISABLE_FTS4_DEFERRED + if( pCsr->pDeferred + && (pExpr->iDocid==pCsr->iPrevId || pExpr->bDeferred) + ){ + Fts3Phrase *pPhrase = pExpr->pPhrase; + assert( pExpr->bDeferred || pPhrase->doclist.bFreeList==0 ); + if( pExpr->bDeferred ){ + fts3EvalInvalidatePoslist(pPhrase); + } + *pRc = fts3EvalDeferredPhrase(pCsr, pPhrase); + bHit = (pPhrase->doclist.pList!=0); + pExpr->iDocid = pCsr->iPrevId; + }else +#endif + { + bHit = (pExpr->bEof==0 && pExpr->iDocid==pCsr->iPrevId); + } + break; + } + } + } + return bHit; +} + +/* +** This function is called as the second part of each xNext operation when +** iterating through the results of a full-text query. At this point the +** cursor points to a row that matches the query expression, with the +** following caveats: +** +** * Up until this point, "NEAR" operators in the expression have been +** treated as "AND". +** +** * Deferred tokens have not yet been considered. +** +** If *pRc is not SQLITE_OK when this function is called, it immediately +** returns 0. Otherwise, it tests whether or not after considering NEAR +** operators and deferred tokens the current row is still a match for the +** expression. It returns 1 if both of the following are true: +** +** 1. *pRc is SQLITE_OK when this function returns, and +** +** 2. After scanning the current FTS table row for the deferred tokens, +** it is determined that the row does *not* match the query. +** +** Or, if no error occurs and it seems the current row does match the FTS +** query, return 0. +*/ +SQLITE_PRIVATE int sqlite3Fts3EvalTestDeferred(Fts3Cursor *pCsr, int *pRc){ + int rc = *pRc; + int bMiss = 0; + if( rc==SQLITE_OK ){ + + /* If there are one or more deferred tokens, load the current row into + ** memory and scan it to determine the position list for each deferred + ** token. Then, see if this row is really a match, considering deferred + ** tokens and NEAR operators (neither of which were taken into account + ** earlier, by fts3EvalNextRow()). + */ + if( pCsr->pDeferred ){ + rc = fts3CursorSeek(0, pCsr); + if( rc==SQLITE_OK ){ + rc = sqlite3Fts3CacheDeferredDoclists(pCsr); + } + } + bMiss = (0==fts3EvalTestExpr(pCsr, pCsr->pExpr, &rc)); + + /* Free the position-lists accumulated for each deferred token above. 
*/ + sqlite3Fts3FreeDeferredDoclists(pCsr); + *pRc = rc; + } + return (rc==SQLITE_OK && bMiss); +} + +/* +** Advance to the next document that matches the FTS expression in +** Fts3Cursor.pExpr. +*/ +static int fts3EvalNext(Fts3Cursor *pCsr){ + int rc = SQLITE_OK; /* Return Code */ + Fts3Expr *pExpr = pCsr->pExpr; + assert( pCsr->isEof==0 ); + if( pExpr==0 ){ + pCsr->isEof = 1; + }else{ + do { + if( pCsr->isRequireSeek==0 ){ + sqlite3_reset(pCsr->pStmt); + } + assert( sqlite3_data_count(pCsr->pStmt)==0 ); + fts3EvalNextRow(pCsr, pExpr, &rc); + pCsr->isEof = pExpr->bEof; + pCsr->isRequireSeek = 1; + pCsr->isMatchinfoNeeded = 1; + pCsr->iPrevId = pExpr->iDocid; + }while( pCsr->isEof==0 && sqlite3Fts3EvalTestDeferred(pCsr, &rc) ); + } + + /* Check if the cursor is past the end of the docid range specified + ** by Fts3Cursor.iMinDocid/iMaxDocid. If so, set the EOF flag. */ + if( rc==SQLITE_OK && ( + (pCsr->bDesc==0 && pCsr->iPrevId>pCsr->iMaxDocid) + || (pCsr->bDesc!=0 && pCsr->iPrevId<pCsr->iMinDocid) + )){ + pCsr->isEof = 1; + } + + return rc; +} + +/* +** Restart interation for expression pExpr so that the next call to +** fts3EvalNext() visits the first row. Do not allow incremental +** loading or merging of phrase doclists for this iteration. +** +** If *pRc is other than SQLITE_OK when this function is called, it is +** a no-op. If an error occurs within this function, *pRc is set to an +** SQLite error code before returning. +*/ +static void fts3EvalRestart( + Fts3Cursor *pCsr, + Fts3Expr *pExpr, + int *pRc +){ + if( pExpr && *pRc==SQLITE_OK ){ + Fts3Phrase *pPhrase = pExpr->pPhrase; + + if( pPhrase ){ + fts3EvalInvalidatePoslist(pPhrase); + if( pPhrase->bIncr ){ + int i; + for(i=0; i<pPhrase->nToken; i++){ + Fts3PhraseToken *pToken = &pPhrase->aToken[i]; + assert( pToken->pDeferred==0 ); + if( pToken->pSegcsr ){ + sqlite3Fts3MsrIncrRestart(pToken->pSegcsr); + } + } + *pRc = fts3EvalPhraseStart(pCsr, 0, pPhrase); + } + pPhrase->doclist.pNextDocid = 0; + pPhrase->doclist.iDocid = 0; + pPhrase->pOrPoslist = 0; + } + + pExpr->iDocid = 0; + pExpr->bEof = 0; + pExpr->bStart = 0; + + fts3EvalRestart(pCsr, pExpr->pLeft, pRc); + fts3EvalRestart(pCsr, pExpr->pRight, pRc); + } +} + +/* +** After allocating the Fts3Expr.aMI[] array for each phrase in the +** expression rooted at pExpr, the cursor iterates through all rows matched +** by pExpr, calling this function for each row. This function increments +** the values in Fts3Expr.aMI[] according to the position-list currently +** found in Fts3Expr.pPhrase->doclist.pList for each of the phrase +** expression nodes. +*/ +static void fts3EvalUpdateCounts(Fts3Expr *pExpr){ + if( pExpr ){ + Fts3Phrase *pPhrase = pExpr->pPhrase; + if( pPhrase && pPhrase->doclist.pList ){ + int iCol = 0; + char *p = pPhrase->doclist.pList; + + assert( *p ); + while( 1 ){ + u8 c = 0; + int iCnt = 0; + while( 0xFE & (*p | c) ){ + if( (c&0x80)==0 ) iCnt++; + c = *p++ & 0x80; + } + + /* aMI[iCol*3 + 1] = Number of occurrences + ** aMI[iCol*3 + 2] = Number of rows containing at least one instance + */ + pExpr->aMI[iCol*3 + 1] += iCnt; + pExpr->aMI[iCol*3 + 2] += (iCnt>0); + if( *p==0x00 ) break; + p++; + p += fts3GetVarint32(p, &iCol); + } + } + + fts3EvalUpdateCounts(pExpr->pLeft); + fts3EvalUpdateCounts(pExpr->pRight); + } +} + +/* +** Expression pExpr must be of type FTSQUERY_PHRASE. +** +** If it is not already allocated and populated, this function allocates and +** populates the Fts3Expr.aMI[] array for expression pExpr. 
If pExpr is part +** of a NEAR expression, then it also allocates and populates the same array +** for all other phrases that are part of the NEAR expression. +** +** SQLITE_OK is returned if the aMI[] array is successfully allocated and +** populated. Otherwise, if an error occurs, an SQLite error code is returned. +*/ +static int fts3EvalGatherStats( + Fts3Cursor *pCsr, /* Cursor object */ + Fts3Expr *pExpr /* FTSQUERY_PHRASE expression */ +){ + int rc = SQLITE_OK; /* Return code */ + + assert( pExpr->eType==FTSQUERY_PHRASE ); + if( pExpr->aMI==0 ){ + Fts3Table *pTab = (Fts3Table *)pCsr->base.pVtab; + Fts3Expr *pRoot; /* Root of NEAR expression */ + Fts3Expr *p; /* Iterator used for several purposes */ + + sqlite3_int64 iPrevId = pCsr->iPrevId; + sqlite3_int64 iDocid; + u8 bEof; + + /* Find the root of the NEAR expression */ + pRoot = pExpr; + while( pRoot->pParent && pRoot->pParent->eType==FTSQUERY_NEAR ){ + pRoot = pRoot->pParent; + } + iDocid = pRoot->iDocid; + bEof = pRoot->bEof; + assert( pRoot->bStart ); + + /* Allocate space for the aMSI[] array of each FTSQUERY_PHRASE node */ + for(p=pRoot; p; p=p->pLeft){ + Fts3Expr *pE = (p->eType==FTSQUERY_PHRASE?p:p->pRight); + assert( pE->aMI==0 ); + pE->aMI = (u32 *)sqlite3_malloc(pTab->nColumn * 3 * sizeof(u32)); + if( !pE->aMI ) return SQLITE_NOMEM; + memset(pE->aMI, 0, pTab->nColumn * 3 * sizeof(u32)); + } + + fts3EvalRestart(pCsr, pRoot, &rc); + + while( pCsr->isEof==0 && rc==SQLITE_OK ){ + + do { + /* Ensure the %_content statement is reset. */ + if( pCsr->isRequireSeek==0 ) sqlite3_reset(pCsr->pStmt); + assert( sqlite3_data_count(pCsr->pStmt)==0 ); + + /* Advance to the next document */ + fts3EvalNextRow(pCsr, pRoot, &rc); + pCsr->isEof = pRoot->bEof; + pCsr->isRequireSeek = 1; + pCsr->isMatchinfoNeeded = 1; + pCsr->iPrevId = pRoot->iDocid; + }while( pCsr->isEof==0 + && pRoot->eType==FTSQUERY_NEAR + && sqlite3Fts3EvalTestDeferred(pCsr, &rc) + ); + + if( rc==SQLITE_OK && pCsr->isEof==0 ){ + fts3EvalUpdateCounts(pRoot); + } + } + + pCsr->isEof = 0; + pCsr->iPrevId = iPrevId; + + if( bEof ){ + pRoot->bEof = bEof; + }else{ + /* Caution: pRoot may iterate through docids in ascending or descending + ** order. For this reason, even though it seems more defensive, the + ** do loop can not be written: + ** + ** do {...} while( pRoot->iDocid<iDocid && rc==SQLITE_OK ); + */ + fts3EvalRestart(pCsr, pRoot, &rc); + do { + fts3EvalNextRow(pCsr, pRoot, &rc); + assert( pRoot->bEof==0 ); + }while( pRoot->iDocid!=iDocid && rc==SQLITE_OK ); + } + } + return rc; +} + +/* +** This function is used by the matchinfo() module to query a phrase +** expression node for the following information: +** +** 1. The total number of occurrences of the phrase in each column of +** the FTS table (considering all rows), and +** +** 2. For each column, the number of rows in the table for which the +** column contains at least one instance of the phrase. +** +** If no error occurs, SQLITE_OK is returned and the values for each column +** written into the array aiOut as follows: +** +** aiOut[iCol*3 + 1] = Number of occurrences +** aiOut[iCol*3 + 2] = Number of rows containing at least one instance +** +** Caveats: +** +** * If a phrase consists entirely of deferred tokens, then all output +** values are set to the number of documents in the table. In other +** words we assume that very common tokens occur exactly once in each +** column of each row of the table. 
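+**     For example, for a hypothetical three-row table, both the
+**     "occurrences" and "documents" outputs would be reported as 3 for
+**     every column when the phrase consists only of deferred tokens.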
+** +** * If a phrase contains some deferred tokens (and some non-deferred +** tokens), count the potential occurrence identified by considering +** the non-deferred tokens instead of actual phrase occurrences. +** +** * If the phrase is part of a NEAR expression, then only phrase instances +** that meet the NEAR constraint are included in the counts. +*/ +SQLITE_PRIVATE int sqlite3Fts3EvalPhraseStats( + Fts3Cursor *pCsr, /* FTS cursor handle */ + Fts3Expr *pExpr, /* Phrase expression */ + u32 *aiOut /* Array to write results into (see above) */ +){ + Fts3Table *pTab = (Fts3Table *)pCsr->base.pVtab; + int rc = SQLITE_OK; + int iCol; + + if( pExpr->bDeferred && pExpr->pParent->eType!=FTSQUERY_NEAR ){ + assert( pCsr->nDoc>0 ); + for(iCol=0; iCol<pTab->nColumn; iCol++){ + aiOut[iCol*3 + 1] = (u32)pCsr->nDoc; + aiOut[iCol*3 + 2] = (u32)pCsr->nDoc; + } + }else{ + rc = fts3EvalGatherStats(pCsr, pExpr); + if( rc==SQLITE_OK ){ + assert( pExpr->aMI ); + for(iCol=0; iCol<pTab->nColumn; iCol++){ + aiOut[iCol*3 + 1] = pExpr->aMI[iCol*3 + 1]; + aiOut[iCol*3 + 2] = pExpr->aMI[iCol*3 + 2]; + } + } + } + + return rc; +} + +/* +** The expression pExpr passed as the second argument to this function +** must be of type FTSQUERY_PHRASE. +** +** The returned value is either NULL or a pointer to a buffer containing +** a position-list indicating the occurrences of the phrase in column iCol +** of the current row. +** +** More specifically, the returned buffer contains 1 varint for each +** occurrence of the phrase in the column, stored using the normal (delta+2) +** compression and is terminated by either an 0x01 or 0x00 byte. For example, +** if the requested column contains "a b X c d X X" and the position-list +** for 'X' is requested, the buffer returned may contain: +** +** 0x04 0x05 0x03 0x01 or 0x04 0x05 0x03 0x00 +** +** This function works regardless of whether or not the phrase is deferred, +** incremental, or neither. +*/ +SQLITE_PRIVATE int sqlite3Fts3EvalPhrasePoslist( + Fts3Cursor *pCsr, /* FTS3 cursor object */ + Fts3Expr *pExpr, /* Phrase to return doclist for */ + int iCol, /* Column to return position list for */ + char **ppOut /* OUT: Pointer to position list */ +){ + Fts3Phrase *pPhrase = pExpr->pPhrase; + Fts3Table *pTab = (Fts3Table *)pCsr->base.pVtab; + char *pIter; + int iThis; + sqlite3_int64 iDocid; + + /* If this phrase is applies specifically to some column other than + ** column iCol, return a NULL pointer. */ + *ppOut = 0; + assert( iCol>=0 && iCol<pTab->nColumn ); + if( (pPhrase->iColumn<pTab->nColumn && pPhrase->iColumn!=iCol) ){ + return SQLITE_OK; + } + + iDocid = pExpr->iDocid; + pIter = pPhrase->doclist.pList; + if( iDocid!=pCsr->iPrevId || pExpr->bEof ){ + int rc = SQLITE_OK; + int bDescDoclist = pTab->bDescIdx; /* For DOCID_CMP macro */ + int bOr = 0; + u8 bTreeEof = 0; + Fts3Expr *p; /* Used to iterate from pExpr to root */ + Fts3Expr *pNear; /* Most senior NEAR ancestor (or pExpr) */ + int bMatch; + + /* Check if this phrase descends from an OR expression node. If not, + ** return NULL. Otherwise, the entry that corresponds to docid + ** pCsr->iPrevId may lie earlier in the doclist buffer. Or, if the + ** tree that the node is part of has been marked as EOF, but the node + ** itself is not EOF, then it may point to an earlier entry. 
*/ + pNear = pExpr; + for(p=pExpr->pParent; p; p=p->pParent){ + if( p->eType==FTSQUERY_OR ) bOr = 1; + if( p->eType==FTSQUERY_NEAR ) pNear = p; + if( p->bEof ) bTreeEof = 1; + } + if( bOr==0 ) return SQLITE_OK; + + /* This is the descendent of an OR node. In this case we cannot use + ** an incremental phrase. Load the entire doclist for the phrase + ** into memory in this case. */ + if( pPhrase->bIncr ){ + int bEofSave = pNear->bEof; + fts3EvalRestart(pCsr, pNear, &rc); + while( rc==SQLITE_OK && !pNear->bEof ){ + fts3EvalNextRow(pCsr, pNear, &rc); + if( bEofSave==0 && pNear->iDocid==iDocid ) break; + } + assert( rc!=SQLITE_OK || pPhrase->bIncr==0 ); + } + if( bTreeEof ){ + while( rc==SQLITE_OK && !pNear->bEof ){ + fts3EvalNextRow(pCsr, pNear, &rc); + } + } + if( rc!=SQLITE_OK ) return rc; + + bMatch = 1; + for(p=pNear; p; p=p->pLeft){ + u8 bEof = 0; + Fts3Expr *pTest = p; + Fts3Phrase *pPh; + assert( pTest->eType==FTSQUERY_NEAR || pTest->eType==FTSQUERY_PHRASE ); + if( pTest->eType==FTSQUERY_NEAR ) pTest = pTest->pRight; + assert( pTest->eType==FTSQUERY_PHRASE ); + pPh = pTest->pPhrase; + + pIter = pPh->pOrPoslist; + iDocid = pPh->iOrDocid; + if( pCsr->bDesc==bDescDoclist ){ + bEof = !pPh->doclist.nAll || + (pIter >= (pPh->doclist.aAll + pPh->doclist.nAll)); + while( (pIter==0 || DOCID_CMP(iDocid, pCsr->iPrevId)<0 ) && bEof==0 ){ + sqlite3Fts3DoclistNext( + bDescDoclist, pPh->doclist.aAll, pPh->doclist.nAll, + &pIter, &iDocid, &bEof + ); + } + }else{ + bEof = !pPh->doclist.nAll || (pIter && pIter<=pPh->doclist.aAll); + while( (pIter==0 || DOCID_CMP(iDocid, pCsr->iPrevId)>0 ) && bEof==0 ){ + int dummy; + sqlite3Fts3DoclistPrev( + bDescDoclist, pPh->doclist.aAll, pPh->doclist.nAll, + &pIter, &iDocid, &dummy, &bEof + ); + } + } + pPh->pOrPoslist = pIter; + pPh->iOrDocid = iDocid; + if( bEof || iDocid!=pCsr->iPrevId ) bMatch = 0; + } + + if( bMatch ){ + pIter = pPhrase->pOrPoslist; + }else{ + pIter = 0; + } + } + if( pIter==0 ) return SQLITE_OK; + + if( *pIter==0x01 ){ + pIter++; + pIter += fts3GetVarint32(pIter, &iThis); + }else{ + iThis = 0; + } + while( iThis<iCol ){ + fts3ColumnlistCopy(0, &pIter); + if( *pIter==0x00 ) return SQLITE_OK; + pIter++; + pIter += fts3GetVarint32(pIter, &iThis); + } + if( *pIter==0x00 ){ + pIter = 0; + } + + *ppOut = ((iCol==iThis)?pIter:0); + return SQLITE_OK; +} + +/* +** Free all components of the Fts3Phrase structure that were allocated by +** the eval module. Specifically, this means to free: +** +** * the contents of pPhrase->doclist, and +** * any Fts3MultiSegReader objects held by phrase tokens. +*/ +SQLITE_PRIVATE void sqlite3Fts3EvalPhraseCleanup(Fts3Phrase *pPhrase){ + if( pPhrase ){ + int i; + sqlite3_free(pPhrase->doclist.aAll); + fts3EvalInvalidatePoslist(pPhrase); + memset(&pPhrase->doclist, 0, sizeof(Fts3Doclist)); + for(i=0; i<pPhrase->nToken; i++){ + fts3SegReaderCursorFree(pPhrase->aToken[i].pSegcsr); + pPhrase->aToken[i].pSegcsr = 0; + } + } +} + + +/* +** Return SQLITE_CORRUPT_VTAB. +*/ +#ifdef SQLITE_DEBUG +SQLITE_PRIVATE int sqlite3Fts3Corrupt(){ + return SQLITE_CORRUPT_VTAB; +} +#endif + #if !SQLITE_CORE -SQLITE_API int sqlite3_extension_init( +/* +** Initialize API pointer table, if required. 
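+**
+** This entry point is only compiled when FTS3/FTS4 is built as a run-time
+** loadable extension rather than into the SQLite core. Because the function
+** follows the sqlite3_<name>_init naming convention, a shared library whose
+** file name is based on "fts3" can typically be loaded without naming the
+** entry point explicitly (assuming extension loading is enabled), e.g.
+** (illustrative only):
+**
+**   SELECT load_extension('./fts3');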
+*/ +#ifdef _WIN32 +__declspec(dllexport) +#endif +SQLITE_API int SQLITE_STDCALL sqlite3_fts3_init( sqlite3 *db, char **pzErrMsg, const sqlite3_api_routines *pApi ){ SQLITE_EXTENSION_INIT2(pApi) @@ -103110,10 +143906,563 @@ #endif #endif /************** End of fts3.c ************************************************/ +/************** Begin file fts3_aux.c ****************************************/ +/* +** 2011 Jan 27 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +*/ +/* #include "fts3Int.h" */ +#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) + +/* #include <string.h> */ +/* #include <assert.h> */ + +typedef struct Fts3auxTable Fts3auxTable; +typedef struct Fts3auxCursor Fts3auxCursor; + +struct Fts3auxTable { + sqlite3_vtab base; /* Base class used by SQLite core */ + Fts3Table *pFts3Tab; +}; + +struct Fts3auxCursor { + sqlite3_vtab_cursor base; /* Base class used by SQLite core */ + Fts3MultiSegReader csr; /* Must be right after "base" */ + Fts3SegFilter filter; + char *zStop; + int nStop; /* Byte-length of string zStop */ + int iLangid; /* Language id to query */ + int isEof; /* True if cursor is at EOF */ + sqlite3_int64 iRowid; /* Current rowid */ + + int iCol; /* Current value of 'col' column */ + int nStat; /* Size of aStat[] array */ + struct Fts3auxColstats { + sqlite3_int64 nDoc; /* 'documents' values for current csr row */ + sqlite3_int64 nOcc; /* 'occurrences' values for current csr row */ + } *aStat; +}; + +/* +** Schema of the terms table. +*/ +#define FTS3_AUX_SCHEMA \ + "CREATE TABLE x(term, col, documents, occurrences, languageid HIDDEN)" + +/* +** This function does all the work for both the xConnect and xCreate methods. +** These tables have no persistent representation of their own, so xConnect +** and xCreate are identical operations. +*/ +static int fts3auxConnectMethod( + sqlite3 *db, /* Database connection */ + void *pUnused, /* Unused */ + int argc, /* Number of elements in argv array */ + const char * const *argv, /* xCreate/xConnect argument array */ + sqlite3_vtab **ppVtab, /* OUT: New sqlite3_vtab object */ + char **pzErr /* OUT: sqlite3_malloc'd error message */ +){ + char const *zDb; /* Name of database (e.g. 
"main") */ + char const *zFts3; /* Name of fts3 table */ + int nDb; /* Result of strlen(zDb) */ + int nFts3; /* Result of strlen(zFts3) */ + int nByte; /* Bytes of space to allocate here */ + int rc; /* value returned by declare_vtab() */ + Fts3auxTable *p; /* Virtual table object to return */ + + UNUSED_PARAMETER(pUnused); + + /* The user should invoke this in one of two forms: + ** + ** CREATE VIRTUAL TABLE xxx USING fts4aux(fts4-table); + ** CREATE VIRTUAL TABLE xxx USING fts4aux(fts4-table-db, fts4-table); + */ + if( argc!=4 && argc!=5 ) goto bad_args; + + zDb = argv[1]; + nDb = (int)strlen(zDb); + if( argc==5 ){ + if( nDb==4 && 0==sqlite3_strnicmp("temp", zDb, 4) ){ + zDb = argv[3]; + nDb = (int)strlen(zDb); + zFts3 = argv[4]; + }else{ + goto bad_args; + } + }else{ + zFts3 = argv[3]; + } + nFts3 = (int)strlen(zFts3); + + rc = sqlite3_declare_vtab(db, FTS3_AUX_SCHEMA); + if( rc!=SQLITE_OK ) return rc; + + nByte = sizeof(Fts3auxTable) + sizeof(Fts3Table) + nDb + nFts3 + 2; + p = (Fts3auxTable *)sqlite3_malloc(nByte); + if( !p ) return SQLITE_NOMEM; + memset(p, 0, nByte); + + p->pFts3Tab = (Fts3Table *)&p[1]; + p->pFts3Tab->zDb = (char *)&p->pFts3Tab[1]; + p->pFts3Tab->zName = &p->pFts3Tab->zDb[nDb+1]; + p->pFts3Tab->db = db; + p->pFts3Tab->nIndex = 1; + + memcpy((char *)p->pFts3Tab->zDb, zDb, nDb); + memcpy((char *)p->pFts3Tab->zName, zFts3, nFts3); + sqlite3Fts3Dequote((char *)p->pFts3Tab->zName); + + *ppVtab = (sqlite3_vtab *)p; + return SQLITE_OK; + + bad_args: + sqlite3Fts3ErrMsg(pzErr, "invalid arguments to fts4aux constructor"); + return SQLITE_ERROR; +} + +/* +** This function does the work for both the xDisconnect and xDestroy methods. +** These tables have no persistent representation of their own, so xDisconnect +** and xDestroy are identical operations. +*/ +static int fts3auxDisconnectMethod(sqlite3_vtab *pVtab){ + Fts3auxTable *p = (Fts3auxTable *)pVtab; + Fts3Table *pFts3 = p->pFts3Tab; + int i; + + /* Free any prepared statements held */ + for(i=0; i<SizeofArray(pFts3->aStmt); i++){ + sqlite3_finalize(pFts3->aStmt[i]); + } + sqlite3_free(pFts3->zSegmentsTbl); + sqlite3_free(p); + return SQLITE_OK; +} + +#define FTS4AUX_EQ_CONSTRAINT 1 +#define FTS4AUX_GE_CONSTRAINT 2 +#define FTS4AUX_LE_CONSTRAINT 4 + +/* +** xBestIndex - Analyze a WHERE and ORDER BY clause. +*/ +static int fts3auxBestIndexMethod( + sqlite3_vtab *pVTab, + sqlite3_index_info *pInfo +){ + int i; + int iEq = -1; + int iGe = -1; + int iLe = -1; + int iLangid = -1; + int iNext = 1; /* Next free argvIndex value */ + + UNUSED_PARAMETER(pVTab); + + /* This vtab delivers always results in "ORDER BY term ASC" order. */ + if( pInfo->nOrderBy==1 + && pInfo->aOrderBy[0].iColumn==0 + && pInfo->aOrderBy[0].desc==0 + ){ + pInfo->orderByConsumed = 1; + } + + /* Search for equality and range constraints on the "term" column. + ** And equality constraints on the hidden "languageid" column. 
*/ + for(i=0; i<pInfo->nConstraint; i++){ + if( pInfo->aConstraint[i].usable ){ + int op = pInfo->aConstraint[i].op; + int iCol = pInfo->aConstraint[i].iColumn; + + if( iCol==0 ){ + if( op==SQLITE_INDEX_CONSTRAINT_EQ ) iEq = i; + if( op==SQLITE_INDEX_CONSTRAINT_LT ) iLe = i; + if( op==SQLITE_INDEX_CONSTRAINT_LE ) iLe = i; + if( op==SQLITE_INDEX_CONSTRAINT_GT ) iGe = i; + if( op==SQLITE_INDEX_CONSTRAINT_GE ) iGe = i; + } + if( iCol==4 ){ + if( op==SQLITE_INDEX_CONSTRAINT_EQ ) iLangid = i; + } + } + } + + if( iEq>=0 ){ + pInfo->idxNum = FTS4AUX_EQ_CONSTRAINT; + pInfo->aConstraintUsage[iEq].argvIndex = iNext++; + pInfo->estimatedCost = 5; + }else{ + pInfo->idxNum = 0; + pInfo->estimatedCost = 20000; + if( iGe>=0 ){ + pInfo->idxNum += FTS4AUX_GE_CONSTRAINT; + pInfo->aConstraintUsage[iGe].argvIndex = iNext++; + pInfo->estimatedCost /= 2; + } + if( iLe>=0 ){ + pInfo->idxNum += FTS4AUX_LE_CONSTRAINT; + pInfo->aConstraintUsage[iLe].argvIndex = iNext++; + pInfo->estimatedCost /= 2; + } + } + if( iLangid>=0 ){ + pInfo->aConstraintUsage[iLangid].argvIndex = iNext++; + pInfo->estimatedCost--; + } + + return SQLITE_OK; +} + +/* +** xOpen - Open a cursor. +*/ +static int fts3auxOpenMethod(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCsr){ + Fts3auxCursor *pCsr; /* Pointer to cursor object to return */ + + UNUSED_PARAMETER(pVTab); + + pCsr = (Fts3auxCursor *)sqlite3_malloc(sizeof(Fts3auxCursor)); + if( !pCsr ) return SQLITE_NOMEM; + memset(pCsr, 0, sizeof(Fts3auxCursor)); + + *ppCsr = (sqlite3_vtab_cursor *)pCsr; + return SQLITE_OK; +} + +/* +** xClose - Close a cursor. +*/ +static int fts3auxCloseMethod(sqlite3_vtab_cursor *pCursor){ + Fts3Table *pFts3 = ((Fts3auxTable *)pCursor->pVtab)->pFts3Tab; + Fts3auxCursor *pCsr = (Fts3auxCursor *)pCursor; + + sqlite3Fts3SegmentsClose(pFts3); + sqlite3Fts3SegReaderFinish(&pCsr->csr); + sqlite3_free((void *)pCsr->filter.zTerm); + sqlite3_free(pCsr->zStop); + sqlite3_free(pCsr->aStat); + sqlite3_free(pCsr); + return SQLITE_OK; +} + +static int fts3auxGrowStatArray(Fts3auxCursor *pCsr, int nSize){ + if( nSize>pCsr->nStat ){ + struct Fts3auxColstats *aNew; + aNew = (struct Fts3auxColstats *)sqlite3_realloc(pCsr->aStat, + sizeof(struct Fts3auxColstats) * nSize + ); + if( aNew==0 ) return SQLITE_NOMEM; + memset(&aNew[pCsr->nStat], 0, + sizeof(struct Fts3auxColstats) * (nSize - pCsr->nStat) + ); + pCsr->aStat = aNew; + pCsr->nStat = nSize; + } + return SQLITE_OK; +} + +/* +** xNext - Advance the cursor to the next row, if any. +*/ +static int fts3auxNextMethod(sqlite3_vtab_cursor *pCursor){ + Fts3auxCursor *pCsr = (Fts3auxCursor *)pCursor; + Fts3Table *pFts3 = ((Fts3auxTable *)pCursor->pVtab)->pFts3Tab; + int rc; + + /* Increment our pretend rowid value. */ + pCsr->iRowid++; + + for(pCsr->iCol++; pCsr->iCol<pCsr->nStat; pCsr->iCol++){ + if( pCsr->aStat[pCsr->iCol].nDoc>0 ) return SQLITE_OK; + } + + rc = sqlite3Fts3SegReaderStep(pFts3, &pCsr->csr); + if( rc==SQLITE_ROW ){ + int i = 0; + int nDoclist = pCsr->csr.nDoclist; + char *aDoclist = pCsr->csr.aDoclist; + int iCol; + + int eState = 0; + + if( pCsr->zStop ){ + int n = (pCsr->nStop<pCsr->csr.nTerm) ? 
pCsr->nStop : pCsr->csr.nTerm; + int mc = memcmp(pCsr->zStop, pCsr->csr.zTerm, n); + if( mc<0 || (mc==0 && pCsr->csr.nTerm>pCsr->nStop) ){ + pCsr->isEof = 1; + return SQLITE_OK; + } + } + + if( fts3auxGrowStatArray(pCsr, 2) ) return SQLITE_NOMEM; + memset(pCsr->aStat, 0, sizeof(struct Fts3auxColstats) * pCsr->nStat); + iCol = 0; + + while( i<nDoclist ){ + sqlite3_int64 v = 0; + + i += sqlite3Fts3GetVarint(&aDoclist[i], &v); + switch( eState ){ + /* State 0. In this state the integer just read was a docid. */ + case 0: + pCsr->aStat[0].nDoc++; + eState = 1; + iCol = 0; + break; + + /* State 1. In this state we are expecting either a 1, indicating + ** that the following integer will be a column number, or the + ** start of a position list for column 0. + ** + ** The only difference between state 1 and state 2 is that if the + ** integer encountered in state 1 is not 0 or 1, then we need to + ** increment the column 0 "nDoc" count for this term. + */ + case 1: + assert( iCol==0 ); + if( v>1 ){ + pCsr->aStat[1].nDoc++; + } + eState = 2; + /* fall through */ + + case 2: + if( v==0 ){ /* 0x00. Next integer will be a docid. */ + eState = 0; + }else if( v==1 ){ /* 0x01. Next integer will be a column number. */ + eState = 3; + }else{ /* 2 or greater. A position. */ + pCsr->aStat[iCol+1].nOcc++; + pCsr->aStat[0].nOcc++; + } + break; + + /* State 3. The integer just read is a column number. */ + default: assert( eState==3 ); + iCol = (int)v; + if( fts3auxGrowStatArray(pCsr, iCol+2) ) return SQLITE_NOMEM; + pCsr->aStat[iCol+1].nDoc++; + eState = 2; + break; + } + } + + pCsr->iCol = 0; + rc = SQLITE_OK; + }else{ + pCsr->isEof = 1; + } + return rc; +} + +/* +** xFilter - Initialize a cursor to point at the start of its data. +*/ +static int fts3auxFilterMethod( + sqlite3_vtab_cursor *pCursor, /* The cursor used for this query */ + int idxNum, /* Strategy index */ + const char *idxStr, /* Unused */ + int nVal, /* Number of elements in apVal */ + sqlite3_value **apVal /* Arguments for the indexing scheme */ +){ + Fts3auxCursor *pCsr = (Fts3auxCursor *)pCursor; + Fts3Table *pFts3 = ((Fts3auxTable *)pCursor->pVtab)->pFts3Tab; + int rc; + int isScan = 0; + int iLangVal = 0; /* Language id to query */ + + int iEq = -1; /* Index of term=? value in apVal */ + int iGe = -1; /* Index of term>=? value in apVal */ + int iLe = -1; /* Index of term<=? value in apVal */ + int iLangid = -1; /* Index of languageid=? value in apVal */ + int iNext = 0; + + UNUSED_PARAMETER(nVal); + UNUSED_PARAMETER(idxStr); + + assert( idxStr==0 ); + assert( idxNum==FTS4AUX_EQ_CONSTRAINT || idxNum==0 + || idxNum==FTS4AUX_LE_CONSTRAINT || idxNum==FTS4AUX_GE_CONSTRAINT + || idxNum==(FTS4AUX_LE_CONSTRAINT|FTS4AUX_GE_CONSTRAINT) + ); + + if( idxNum==FTS4AUX_EQ_CONSTRAINT ){ + iEq = iNext++; + }else{ + isScan = 1; + if( idxNum & FTS4AUX_GE_CONSTRAINT ){ + iGe = iNext++; + } + if( idxNum & FTS4AUX_LE_CONSTRAINT ){ + iLe = iNext++; + } + } + if( iNext<nVal ){ + iLangid = iNext++; + } + + /* In case this cursor is being reused, close and zero it. 
*/ + testcase(pCsr->filter.zTerm); + sqlite3Fts3SegReaderFinish(&pCsr->csr); + sqlite3_free((void *)pCsr->filter.zTerm); + sqlite3_free(pCsr->aStat); + memset(&pCsr->csr, 0, ((u8*)&pCsr[1]) - (u8*)&pCsr->csr); + + pCsr->filter.flags = FTS3_SEGMENT_REQUIRE_POS|FTS3_SEGMENT_IGNORE_EMPTY; + if( isScan ) pCsr->filter.flags |= FTS3_SEGMENT_SCAN; + + if( iEq>=0 || iGe>=0 ){ + const unsigned char *zStr = sqlite3_value_text(apVal[0]); + assert( (iEq==0 && iGe==-1) || (iEq==-1 && iGe==0) ); + if( zStr ){ + pCsr->filter.zTerm = sqlite3_mprintf("%s", zStr); + pCsr->filter.nTerm = sqlite3_value_bytes(apVal[0]); + if( pCsr->filter.zTerm==0 ) return SQLITE_NOMEM; + } + } + + if( iLe>=0 ){ + pCsr->zStop = sqlite3_mprintf("%s", sqlite3_value_text(apVal[iLe])); + pCsr->nStop = sqlite3_value_bytes(apVal[iLe]); + if( pCsr->zStop==0 ) return SQLITE_NOMEM; + } + + if( iLangid>=0 ){ + iLangVal = sqlite3_value_int(apVal[iLangid]); + + /* If the user specified a negative value for the languageid, use zero + ** instead. This works, as the "languageid=?" constraint will also + ** be tested by the VDBE layer. The test will always be false (since + ** this module will not return a row with a negative languageid), and + ** so the overall query will return zero rows. */ + if( iLangVal<0 ) iLangVal = 0; + } + pCsr->iLangid = iLangVal; + + rc = sqlite3Fts3SegReaderCursor(pFts3, iLangVal, 0, FTS3_SEGCURSOR_ALL, + pCsr->filter.zTerm, pCsr->filter.nTerm, 0, isScan, &pCsr->csr + ); + if( rc==SQLITE_OK ){ + rc = sqlite3Fts3SegReaderStart(pFts3, &pCsr->csr, &pCsr->filter); + } + + if( rc==SQLITE_OK ) rc = fts3auxNextMethod(pCursor); + return rc; +} + +/* +** xEof - Return true if the cursor is at EOF, or false otherwise. +*/ +static int fts3auxEofMethod(sqlite3_vtab_cursor *pCursor){ + Fts3auxCursor *pCsr = (Fts3auxCursor *)pCursor; + return pCsr->isEof; +} + +/* +** xColumn - Return a column value. +*/ +static int fts3auxColumnMethod( + sqlite3_vtab_cursor *pCursor, /* Cursor to retrieve value from */ + sqlite3_context *pCtx, /* Context for sqlite3_result_xxx() calls */ + int iCol /* Index of column to read value from */ +){ + Fts3auxCursor *p = (Fts3auxCursor *)pCursor; + + assert( p->isEof==0 ); + switch( iCol ){ + case 0: /* term */ + sqlite3_result_text(pCtx, p->csr.zTerm, p->csr.nTerm, SQLITE_TRANSIENT); + break; + + case 1: /* col */ + if( p->iCol ){ + sqlite3_result_int(pCtx, p->iCol-1); + }else{ + sqlite3_result_text(pCtx, "*", -1, SQLITE_STATIC); + } + break; + + case 2: /* documents */ + sqlite3_result_int64(pCtx, p->aStat[p->iCol].nDoc); + break; + + case 3: /* occurrences */ + sqlite3_result_int64(pCtx, p->aStat[p->iCol].nOcc); + break; + + default: /* languageid */ + assert( iCol==4 ); + sqlite3_result_int(pCtx, p->iLangid); + break; + } + + return SQLITE_OK; +} + +/* +** xRowid - Return the current rowid for the cursor. +*/ +static int fts3auxRowidMethod( + sqlite3_vtab_cursor *pCursor, /* Cursor to retrieve value from */ + sqlite_int64 *pRowid /* OUT: Rowid value */ +){ + Fts3auxCursor *pCsr = (Fts3auxCursor *)pCursor; + *pRowid = pCsr->iRowid; + return SQLITE_OK; +} + +/* +** Register the fts3aux module with database connection db. Return SQLITE_OK +** if successful or an error code if sqlite3_create_module() fails. 
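+**
+** Once the module is registered, the terms index of an existing FTS4 table
+** can be inspected through an fts4aux virtual table. A hypothetical example
+** (table names are illustrative only):
+**
+**   CREATE VIRTUAL TABLE ft USING fts4(body);
+**   CREATE VIRTUAL TABLE ft_terms USING fts4aux(ft);
+**   SELECT term, documents, occurrences FROM ft_terms WHERE col = '*';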
+*/ +SQLITE_PRIVATE int sqlite3Fts3InitAux(sqlite3 *db){ + static const sqlite3_module fts3aux_module = { + 0, /* iVersion */ + fts3auxConnectMethod, /* xCreate */ + fts3auxConnectMethod, /* xConnect */ + fts3auxBestIndexMethod, /* xBestIndex */ + fts3auxDisconnectMethod, /* xDisconnect */ + fts3auxDisconnectMethod, /* xDestroy */ + fts3auxOpenMethod, /* xOpen */ + fts3auxCloseMethod, /* xClose */ + fts3auxFilterMethod, /* xFilter */ + fts3auxNextMethod, /* xNext */ + fts3auxEofMethod, /* xEof */ + fts3auxColumnMethod, /* xColumn */ + fts3auxRowidMethod, /* xRowid */ + 0, /* xUpdate */ + 0, /* xBegin */ + 0, /* xSync */ + 0, /* xCommit */ + 0, /* xRollback */ + 0, /* xFindFunction */ + 0, /* xRename */ + 0, /* xSavepoint */ + 0, /* xRelease */ + 0 /* xRollbackTo */ + }; + int rc; /* Return code */ + + rc = sqlite3_create_module(db, "fts4aux", &fts3aux_module, 0); + return rc; +} + +#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) */ + +/************** End of fts3_aux.c ********************************************/ /************** Begin file fts3_expr.c ***************************************/ /* ** 2008 Nov 28 ** ** The author disclaims copyright to this source code. In place of @@ -103128,10 +144477,11 @@ ** This module contains code that implements a parser for fts3 query strings ** (the right-hand argument to the MATCH operator). Because the supported ** syntax is relatively simple, the whole tokenizer/parser system is ** hand-coded. */ +/* #include "fts3Int.h" */ #if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) /* ** By default, this module parses the legacy syntax that has been ** traditionally used by fts3. Or, if SQLITE_ENABLE_FTS3_PARENTHESIS @@ -103190,36 +144540,93 @@ /* ** Default span for NEAR operators. */ #define SQLITE_FTS3_DEFAULT_NEAR_PARAM 10 +/* #include <string.h> */ +/* #include <assert.h> */ +/* +** isNot: +** This variable is used by function getNextNode(). When getNextNode() is +** called, it sets ParseContext.isNot to true if the 'next node' is a +** FTSQUERY_PHRASE with a unary "-" attached to it. i.e. "mysql" in the +** FTS3 query "sqlite -mysql". Otherwise, ParseContext.isNot is set to +** zero. +*/ typedef struct ParseContext ParseContext; struct ParseContext { sqlite3_tokenizer *pTokenizer; /* Tokenizer module */ + int iLangid; /* Language id used with tokenizer */ const char **azCol; /* Array of column names for fts3 table */ + int bFts4; /* True to allow FTS4-only syntax */ int nCol; /* Number of entries in azCol[] */ int iDefaultCol; /* Default column to query */ + int isNot; /* True if getNextNode() sees a unary - */ sqlite3_context *pCtx; /* Write error message here */ int nNest; /* Number of nested brackets */ }; /* ** This function is equivalent to the standard isspace() function. ** ** The standard isspace() can be awkward to use safely, because although it -** is defined to accept an argument of type int, its behaviour when passed +** is defined to accept an argument of type int, its behavior when passed ** an integer that falls outside of the range of the unsigned char type ** is undefined (and sometimes, "undefined" means segfault). This wrapper ** is defined to accept an argument of type char, and always returns 0 for ** any values that fall outside of the range of the unsigned char type (i.e. ** negative values). */ static int fts3isspace(char c){ - return (c&0x80)==0 ? isspace(c) : 0; + return c==' ' || c=='\t' || c=='\n' || c=='\r' || c=='\v' || c=='\f'; } +/* +** Allocate nByte bytes of memory using sqlite3_malloc(). 
If successful, +** zero the memory before returning a pointer to it. If unsuccessful, +** return NULL. +*/ +static void *fts3MallocZero(int nByte){ + void *pRet = sqlite3_malloc(nByte); + if( pRet ) memset(pRet, 0, nByte); + return pRet; +} + +SQLITE_PRIVATE int sqlite3Fts3OpenTokenizer( + sqlite3_tokenizer *pTokenizer, + int iLangid, + const char *z, + int n, + sqlite3_tokenizer_cursor **ppCsr +){ + sqlite3_tokenizer_module const *pModule = pTokenizer->pModule; + sqlite3_tokenizer_cursor *pCsr = 0; + int rc; + + rc = pModule->xOpen(pTokenizer, z, n, &pCsr); + assert( rc==SQLITE_OK || pCsr==0 ); + if( rc==SQLITE_OK ){ + pCsr->pTokenizer = pTokenizer; + if( pModule->iVersion>=1 ){ + rc = pModule->xLanguageid(pCsr, iLangid); + if( rc!=SQLITE_OK ){ + pModule->xClose(pCsr); + pCsr = 0; + } + } + } + *ppCsr = pCsr; + return rc; +} + +/* +** Function getNextNode(), which is called by fts3ExprParse(), may itself +** call fts3ExprParse(). So this forward declaration is required. +*/ +static int fts3ExprParse(ParseContext *, const char *, int, Fts3Expr **, int *); + /* ** Extract the next token from buffer z (length n) using the tokenizer ** and other information (column names etc.) in pParse. Create an Fts3Expr ** structure of type FTSQUERY_PHRASE containing a phrase consisting of this ** single token and set *ppExpr to point to it. If the end of the buffer is @@ -103240,28 +144647,32 @@ sqlite3_tokenizer *pTokenizer = pParse->pTokenizer; sqlite3_tokenizer_module const *pModule = pTokenizer->pModule; int rc; sqlite3_tokenizer_cursor *pCursor; Fts3Expr *pRet = 0; - int nConsumed = 0; + int i = 0; - rc = pModule->xOpen(pTokenizer, z, n, &pCursor); + /* Set variable i to the maximum number of bytes of input to tokenize. */ + for(i=0; i<n; i++){ + if( sqlite3_fts3_enable_parentheses && (z[i]=='(' || z[i]==')') ) break; + if( z[i]=='"' ) break; + } + + *pnConsumed = i; + rc = sqlite3Fts3OpenTokenizer(pTokenizer, pParse->iLangid, z, i, &pCursor); if( rc==SQLITE_OK ){ const char *zToken; - int nToken, iStart, iEnd, iPosition; + int nToken = 0, iStart = 0, iEnd = 0, iPosition = 0; int nByte; /* total space to allocate */ - pCursor->pTokenizer = pTokenizer; rc = pModule->xNext(pCursor, &zToken, &nToken, &iStart, &iEnd, &iPosition); - if( rc==SQLITE_OK ){ nByte = sizeof(Fts3Expr) + sizeof(Fts3Phrase) + nToken; - pRet = (Fts3Expr *)sqlite3_malloc(nByte); + pRet = (Fts3Expr *)fts3MallocZero(nByte); if( !pRet ){ rc = SQLITE_NOMEM; }else{ - memset(pRet, 0, nByte); pRet->eType = FTSQUERY_PHRASE; pRet->pPhrase = (Fts3Phrase *)&pRet[1]; pRet->pPhrase->nToken = 1; pRet->pPhrase->iColumn = iCol; pRet->pPhrase->aToken[0].n = nToken; @@ -103270,21 +144681,34 @@ if( iEnd<n && z[iEnd]=='*' ){ pRet->pPhrase->aToken[0].isPrefix = 1; iEnd++; } - if( !sqlite3_fts3_enable_parentheses && iStart>0 && z[iStart-1]=='-' ){ - pRet->pPhrase->isNot = 1; + + while( 1 ){ + if( !sqlite3_fts3_enable_parentheses + && iStart>0 && z[iStart-1]=='-' + ){ + pParse->isNot = 1; + iStart--; + }else if( pParse->bFts4 && iStart>0 && z[iStart-1]=='^' ){ + pRet->pPhrase->aToken[0].bFirst = 1; + iStart--; + }else{ + break; + } } + } - nConsumed = iEnd; + *pnConsumed = iEnd; + }else if( i && rc==SQLITE_DONE ){ + rc = SQLITE_OK; } pModule->xClose(pCursor); } - *pnConsumed = nConsumed; *ppExpr = pRet; return rc; } @@ -103323,70 +144747,91 @@ Fts3Expr *p = 0; sqlite3_tokenizer_cursor *pCursor = 0; char *zTemp = 0; int nTemp = 0; - rc = pModule->xOpen(pTokenizer, zInput, nInput, &pCursor); + const int nSpace = sizeof(Fts3Expr) + sizeof(Fts3Phrase); + int 
nToken = 0; + + /* The final Fts3Expr data structure, including the Fts3Phrase, + ** Fts3PhraseToken structures token buffers are all stored as a single + ** allocation so that the expression can be freed with a single call to + ** sqlite3_free(). Setting this up requires a two pass approach. + ** + ** The first pass, in the block below, uses a tokenizer cursor to iterate + ** through the tokens in the expression. This pass uses fts3ReallocOrFree() + ** to assemble data in two dynamic buffers: + ** + ** Buffer p: Points to the Fts3Expr structure, followed by the Fts3Phrase + ** structure, followed by the array of Fts3PhraseToken + ** structures. This pass only populates the Fts3PhraseToken array. + ** + ** Buffer zTemp: Contains copies of all tokens. + ** + ** The second pass, in the block that begins "if( rc==SQLITE_DONE )" below, + ** appends buffer zTemp to buffer p, and fills in the Fts3Expr and Fts3Phrase + ** structures. + */ + rc = sqlite3Fts3OpenTokenizer( + pTokenizer, pParse->iLangid, zInput, nInput, &pCursor); if( rc==SQLITE_OK ){ int ii; - pCursor->pTokenizer = pTokenizer; for(ii=0; rc==SQLITE_OK; ii++){ - const char *zToken; - int nToken, iBegin, iEnd, iPos; - rc = pModule->xNext(pCursor, &zToken, &nToken, &iBegin, &iEnd, &iPos); + const char *zByte; + int nByte = 0, iBegin = 0, iEnd = 0, iPos = 0; + rc = pModule->xNext(pCursor, &zByte, &nByte, &iBegin, &iEnd, &iPos); if( rc==SQLITE_OK ){ - int nByte = sizeof(Fts3Expr) + sizeof(Fts3Phrase); - p = fts3ReallocOrFree(p, nByte+ii*sizeof(struct PhraseToken)); - zTemp = fts3ReallocOrFree(zTemp, nTemp + nToken); - if( !p || !zTemp ){ - goto no_mem; - } - if( ii==0 ){ - memset(p, 0, nByte); - p->pPhrase = (Fts3Phrase *)&p[1]; - } - p->pPhrase = (Fts3Phrase *)&p[1]; - p->pPhrase->nToken = ii+1; - p->pPhrase->aToken[ii].n = nToken; - memcpy(&zTemp[nTemp], zToken, nToken); - nTemp += nToken; - if( iEnd<nInput && zInput[iEnd]=='*' ){ - p->pPhrase->aToken[ii].isPrefix = 1; - }else{ - p->pPhrase->aToken[ii].isPrefix = 0; - } + Fts3PhraseToken *pToken; + + p = fts3ReallocOrFree(p, nSpace + ii*sizeof(Fts3PhraseToken)); + if( !p ) goto no_mem; + + zTemp = fts3ReallocOrFree(zTemp, nTemp + nByte); + if( !zTemp ) goto no_mem; + + assert( nToken==ii ); + pToken = &((Fts3Phrase *)(&p[1]))->aToken[ii]; + memset(pToken, 0, sizeof(Fts3PhraseToken)); + + memcpy(&zTemp[nTemp], zByte, nByte); + nTemp += nByte; + + pToken->n = nByte; + pToken->isPrefix = (iEnd<nInput && zInput[iEnd]=='*'); + pToken->bFirst = (iBegin>0 && zInput[iBegin-1]=='^'); + nToken = ii+1; } } pModule->xClose(pCursor); pCursor = 0; } if( rc==SQLITE_DONE ){ int jj; - char *zNew = NULL; - int nNew = 0; - int nByte = sizeof(Fts3Expr) + sizeof(Fts3Phrase); - nByte += (p?(p->pPhrase->nToken-1):0) * sizeof(struct PhraseToken); - p = fts3ReallocOrFree(p, nByte + nTemp); - if( !p ){ - goto no_mem; - } + char *zBuf = 0; + + p = fts3ReallocOrFree(p, nSpace + nToken*sizeof(Fts3PhraseToken) + nTemp); + if( !p ) goto no_mem; + memset(p, 0, (char *)&(((Fts3Phrase *)&p[1])->aToken[0])-(char *)p); + p->eType = FTSQUERY_PHRASE; + p->pPhrase = (Fts3Phrase *)&p[1]; + p->pPhrase->iColumn = pParse->iDefaultCol; + p->pPhrase->nToken = nToken; + + zBuf = (char *)&p->pPhrase->aToken[nToken]; if( zTemp ){ - zNew = &(((char *)p)[nByte]); - memcpy(zNew, zTemp, nTemp); + memcpy(zBuf, zTemp, nTemp); + sqlite3_free(zTemp); }else{ - memset(p, 0, nByte+nTemp); + assert( nTemp==0 ); } - p->pPhrase = (Fts3Phrase *)&p[1]; + for(jj=0; jj<p->pPhrase->nToken; jj++){ - p->pPhrase->aToken[jj].z = &zNew[nNew]; - nNew += 
p->pPhrase->aToken[jj].n; + p->pPhrase->aToken[jj].z = zBuf; + zBuf += p->pPhrase->aToken[jj].n; } - sqlite3_free(zTemp); - p->eType = FTSQUERY_PHRASE; - p->pPhrase->iColumn = pParse->iDefaultCol; rc = SQLITE_OK; } *ppExpr = p; return rc; @@ -103399,16 +144844,10 @@ sqlite3_free(p); *ppExpr = 0; return SQLITE_NOMEM; } -/* -** Function getNextNode(), which is called by fts3ExprParse(), may itself -** call fts3ExprParse(). So this forward declaration is required. -*/ -static int fts3ExprParse(ParseContext *, const char *, int, Fts3Expr **, int *); - /* ** The output variable *ppExpr is populated with an allocated Fts3Expr ** structure, or set to 0 if the end of the input buffer is reached. ** ** Returns an SQLite error code. SQLITE_OK if everything works, SQLITE_NOMEM @@ -103438,10 +144877,12 @@ int rc; Fts3Expr *pRet = 0; const char *zInput = z; int nInput = n; + + pParse->isNot = 0; /* Skip over any whitespace before checking for a keyword, an open or ** close bracket, or a quoted string. */ while( nInput>0 && fts3isspace(*zInput) ){ @@ -103482,15 +144923,14 @@ */ cNext = zInput[nKey]; if( fts3isspace(cNext) || cNext=='"' || cNext=='(' || cNext==')' || cNext==0 ){ - pRet = (Fts3Expr *)sqlite3_malloc(sizeof(Fts3Expr)); + pRet = (Fts3Expr *)fts3MallocZero(sizeof(Fts3Expr)); if( !pRet ){ return SQLITE_NOMEM; } - memset(pRet, 0, sizeof(Fts3Expr)); pRet->eType = pKey->eType; pRet->nNear = nNear; *ppExpr = pRet; *pnConsumed = (int)((zInput - z) + nKey); return SQLITE_OK; @@ -103500,32 +144940,10 @@ ** user has supplied a token such as "ORacle". Continue. */ } } - /* Check for an open bracket. */ - if( sqlite3_fts3_enable_parentheses ){ - if( *zInput=='(' ){ - int nConsumed; - int rc; - pParse->nNest++; - rc = fts3ExprParse(pParse, &zInput[1], nInput-1, ppExpr, &nConsumed); - if( rc==SQLITE_OK && !*ppExpr ){ - rc = SQLITE_DONE; - } - *pnConsumed = (int)((zInput - z) + 1 + nConsumed); - return rc; - } - - /* Check for a close bracket. */ - if( *zInput==')' ){ - pParse->nNest--; - *pnConsumed = (int)((zInput - z) + 1); - return SQLITE_DONE; - } - } - /* See if we are dealing with a quoted phrase. If this is the case, then ** search for the closing quote and pass the whole string to getNextString() ** for processing. This is easy to do, as fts3 has no syntax for escaping ** a quote character embedded in a string. */ @@ -103536,10 +144954,25 @@ return SQLITE_ERROR; } return getNextString(pParse, &zInput[1], ii-1, ppExpr); } + if( sqlite3_fts3_enable_parentheses ){ + if( *zInput=='(' ){ + int nConsumed = 0; + pParse->nNest++; + rc = fts3ExprParse(pParse, zInput+1, nInput-1, ppExpr, &nConsumed); + if( rc==SQLITE_OK && !*ppExpr ){ rc = SQLITE_DONE; } + *pnConsumed = (int)(zInput - z) + 1 + nConsumed; + return rc; + }else if( *zInput==')' ){ + pParse->nNest--; + *pnConsumed = (int)((zInput - z) + 1); + *ppExpr = 0; + return SQLITE_DONE; + } + } /* If control flows to this point, this must be a regular token, or ** the end of the input. Read a regular token using the sqlite3_tokenizer ** interface. Before doing so, figure out if there is an explicit ** column specifier for the token. @@ -103654,101 +145087,104 @@ int isRequirePhrase = 1; while( rc==SQLITE_OK ){ Fts3Expr *p = 0; int nByte = 0; - rc = getNextNode(pParse, zIn, nIn, &p, &nByte); - if( rc==SQLITE_OK ){ - int isPhrase; - - if( !sqlite3_fts3_enable_parentheses - && p->eType==FTSQUERY_PHRASE && p->pPhrase->isNot - ){ - /* Create an implicit NOT operator. 
*/ - Fts3Expr *pNot = sqlite3_malloc(sizeof(Fts3Expr)); - if( !pNot ){ - sqlite3Fts3ExprFree(p); - rc = SQLITE_NOMEM; - goto exprparse_out; - } - memset(pNot, 0, sizeof(Fts3Expr)); - pNot->eType = FTSQUERY_NOT; - pNot->pRight = p; - if( pNotBranch ){ - pNot->pLeft = pNotBranch; - } - pNotBranch = pNot; - p = pPrev; - }else{ - int eType = p->eType; - assert( eType!=FTSQUERY_PHRASE || !p->pPhrase->isNot ); - isPhrase = (eType==FTSQUERY_PHRASE || p->pLeft); - - /* The isRequirePhrase variable is set to true if a phrase or - ** an expression contained in parenthesis is required. If a - ** binary operator (AND, OR, NOT or NEAR) is encounted when - ** isRequirePhrase is set, this is a syntax error. - */ - if( !isPhrase && isRequirePhrase ){ - sqlite3Fts3ExprFree(p); - rc = SQLITE_ERROR; - goto exprparse_out; - } - - if( isPhrase && !isRequirePhrase ){ - /* Insert an implicit AND operator. */ - Fts3Expr *pAnd; - assert( pRet && pPrev ); - pAnd = sqlite3_malloc(sizeof(Fts3Expr)); - if( !pAnd ){ - sqlite3Fts3ExprFree(p); - rc = SQLITE_NOMEM; - goto exprparse_out; - } - memset(pAnd, 0, sizeof(Fts3Expr)); - pAnd->eType = FTSQUERY_AND; - insertBinaryOperator(&pRet, pPrev, pAnd); - pPrev = pAnd; - } - - /* This test catches attempts to make either operand of a NEAR - ** operator something other than a phrase. For example, either of - ** the following: - ** - ** (bracketed expression) NEAR phrase - ** phrase NEAR (bracketed expression) - ** - ** Return an error in either case. - */ - if( pPrev && ( - (eType==FTSQUERY_NEAR && !isPhrase && pPrev->eType!=FTSQUERY_PHRASE) - || (eType!=FTSQUERY_PHRASE && isPhrase && pPrev->eType==FTSQUERY_NEAR) - )){ - sqlite3Fts3ExprFree(p); - rc = SQLITE_ERROR; - goto exprparse_out; - } - - if( isPhrase ){ - if( pRet ){ - assert( pPrev && pPrev->pLeft && pPrev->pRight==0 ); - pPrev->pRight = p; - p->pParent = pPrev; - }else{ - pRet = p; - } - }else{ - insertBinaryOperator(&pRet, pPrev, p); - } - isRequirePhrase = !isPhrase; + + rc = getNextNode(pParse, zIn, nIn, &p, &nByte); + assert( nByte>0 || (rc!=SQLITE_OK && p==0) ); + if( rc==SQLITE_OK ){ + if( p ){ + int isPhrase; + + if( !sqlite3_fts3_enable_parentheses + && p->eType==FTSQUERY_PHRASE && pParse->isNot + ){ + /* Create an implicit NOT operator. */ + Fts3Expr *pNot = fts3MallocZero(sizeof(Fts3Expr)); + if( !pNot ){ + sqlite3Fts3ExprFree(p); + rc = SQLITE_NOMEM; + goto exprparse_out; + } + pNot->eType = FTSQUERY_NOT; + pNot->pRight = p; + p->pParent = pNot; + if( pNotBranch ){ + pNot->pLeft = pNotBranch; + pNotBranch->pParent = pNot; + } + pNotBranch = pNot; + p = pPrev; + }else{ + int eType = p->eType; + isPhrase = (eType==FTSQUERY_PHRASE || p->pLeft); + + /* The isRequirePhrase variable is set to true if a phrase or + ** an expression contained in parenthesis is required. If a + ** binary operator (AND, OR, NOT or NEAR) is encounted when + ** isRequirePhrase is set, this is a syntax error. + */ + if( !isPhrase && isRequirePhrase ){ + sqlite3Fts3ExprFree(p); + rc = SQLITE_ERROR; + goto exprparse_out; + } + + if( isPhrase && !isRequirePhrase ){ + /* Insert an implicit AND operator. */ + Fts3Expr *pAnd; + assert( pRet && pPrev ); + pAnd = fts3MallocZero(sizeof(Fts3Expr)); + if( !pAnd ){ + sqlite3Fts3ExprFree(p); + rc = SQLITE_NOMEM; + goto exprparse_out; + } + pAnd->eType = FTSQUERY_AND; + insertBinaryOperator(&pRet, pPrev, pAnd); + pPrev = pAnd; + } + + /* This test catches attempts to make either operand of a NEAR + ** operator something other than a phrase. 
For example, either of + ** the following: + ** + ** (bracketed expression) NEAR phrase + ** phrase NEAR (bracketed expression) + ** + ** Return an error in either case. + */ + if( pPrev && ( + (eType==FTSQUERY_NEAR && !isPhrase && pPrev->eType!=FTSQUERY_PHRASE) + || (eType!=FTSQUERY_PHRASE && isPhrase && pPrev->eType==FTSQUERY_NEAR) + )){ + sqlite3Fts3ExprFree(p); + rc = SQLITE_ERROR; + goto exprparse_out; + } + + if( isPhrase ){ + if( pRet ){ + assert( pPrev && pPrev->pLeft && pPrev->pRight==0 ); + pPrev->pRight = p; + p->pParent = pPrev; + }else{ + pRet = p; + } + }else{ + insertBinaryOperator(&pRet, pPrev, p); + } + isRequirePhrase = !isPhrase; + } + pPrev = p; } assert( nByte>0 ); } assert( rc!=SQLITE_OK || (nByte>0 && nByte<=nIn) ); nIn -= nByte; zIn += nByte; - pPrev = p; } if( rc==SQLITE_DONE && pRet && isRequirePhrase ){ rc = SQLITE_ERROR; } @@ -103762,10 +145198,11 @@ Fts3Expr *pIter = pNotBranch; while( pIter->pLeft ){ pIter = pIter->pLeft; } pIter->pLeft = pRet; + pRet->pParent = pIter; pRet = pNotBranch; } } } *pnConsumed = n - nIn; @@ -103777,10 +145214,253 @@ pRet = 0; } *ppExpr = pRet; return rc; } + +/* +** Return SQLITE_ERROR if the maximum depth of the expression tree passed +** as the only argument is more than nMaxDepth. +*/ +static int fts3ExprCheckDepth(Fts3Expr *p, int nMaxDepth){ + int rc = SQLITE_OK; + if( p ){ + if( nMaxDepth<0 ){ + rc = SQLITE_TOOBIG; + }else{ + rc = fts3ExprCheckDepth(p->pLeft, nMaxDepth-1); + if( rc==SQLITE_OK ){ + rc = fts3ExprCheckDepth(p->pRight, nMaxDepth-1); + } + } + } + return rc; +} + +/* +** This function attempts to transform the expression tree at (*pp) to +** an equivalent but more balanced form. The tree is modified in place. +** If successful, SQLITE_OK is returned and (*pp) set to point to the +** new root expression node. +** +** nMaxDepth is the maximum allowable depth of the balanced sub-tree. +** +** Otherwise, if an error occurs, an SQLite error code is returned and +** expression (*pp) freed. +*/ +static int fts3ExprBalance(Fts3Expr **pp, int nMaxDepth){ + int rc = SQLITE_OK; /* Return code */ + Fts3Expr *pRoot = *pp; /* Initial root node */ + Fts3Expr *pFree = 0; /* List of free nodes. Linked by pParent. */ + int eType = pRoot->eType; /* Type of node in this tree */ + + if( nMaxDepth==0 ){ + rc = SQLITE_ERROR; + } + + if( rc==SQLITE_OK ){ + if( (eType==FTSQUERY_AND || eType==FTSQUERY_OR) ){ + Fts3Expr **apLeaf; + apLeaf = (Fts3Expr **)sqlite3_malloc(sizeof(Fts3Expr *) * nMaxDepth); + if( 0==apLeaf ){ + rc = SQLITE_NOMEM; + }else{ + memset(apLeaf, 0, sizeof(Fts3Expr *) * nMaxDepth); + } + + if( rc==SQLITE_OK ){ + int i; + Fts3Expr *p; + + /* Set $p to point to the left-most leaf in the tree of eType nodes. */ + for(p=pRoot; p->eType==eType; p=p->pLeft){ + assert( p->pParent==0 || p->pParent->pLeft==p ); + assert( p->pLeft && p->pRight ); + } + + /* This loop runs once for each leaf in the tree of eType nodes. 
*/ + while( 1 ){ + int iLvl; + Fts3Expr *pParent = p->pParent; /* Current parent of p */ + + assert( pParent==0 || pParent->pLeft==p ); + p->pParent = 0; + if( pParent ){ + pParent->pLeft = 0; + }else{ + pRoot = 0; + } + rc = fts3ExprBalance(&p, nMaxDepth-1); + if( rc!=SQLITE_OK ) break; + + for(iLvl=0; p && iLvl<nMaxDepth; iLvl++){ + if( apLeaf[iLvl]==0 ){ + apLeaf[iLvl] = p; + p = 0; + }else{ + assert( pFree ); + pFree->pLeft = apLeaf[iLvl]; + pFree->pRight = p; + pFree->pLeft->pParent = pFree; + pFree->pRight->pParent = pFree; + + p = pFree; + pFree = pFree->pParent; + p->pParent = 0; + apLeaf[iLvl] = 0; + } + } + if( p ){ + sqlite3Fts3ExprFree(p); + rc = SQLITE_TOOBIG; + break; + } + + /* If that was the last leaf node, break out of the loop */ + if( pParent==0 ) break; + + /* Set $p to point to the next leaf in the tree of eType nodes */ + for(p=pParent->pRight; p->eType==eType; p=p->pLeft); + + /* Remove pParent from the original tree. */ + assert( pParent->pParent==0 || pParent->pParent->pLeft==pParent ); + pParent->pRight->pParent = pParent->pParent; + if( pParent->pParent ){ + pParent->pParent->pLeft = pParent->pRight; + }else{ + assert( pParent==pRoot ); + pRoot = pParent->pRight; + } + + /* Link pParent into the free node list. It will be used as an + ** internal node of the new tree. */ + pParent->pParent = pFree; + pFree = pParent; + } + + if( rc==SQLITE_OK ){ + p = 0; + for(i=0; i<nMaxDepth; i++){ + if( apLeaf[i] ){ + if( p==0 ){ + p = apLeaf[i]; + p->pParent = 0; + }else{ + assert( pFree!=0 ); + pFree->pRight = p; + pFree->pLeft = apLeaf[i]; + pFree->pLeft->pParent = pFree; + pFree->pRight->pParent = pFree; + + p = pFree; + pFree = pFree->pParent; + p->pParent = 0; + } + } + } + pRoot = p; + }else{ + /* An error occurred. Delete the contents of the apLeaf[] array + ** and pFree list. Everything else is cleaned up by the call to + ** sqlite3Fts3ExprFree(pRoot) below. */ + Fts3Expr *pDel; + for(i=0; i<nMaxDepth; i++){ + sqlite3Fts3ExprFree(apLeaf[i]); + } + while( (pDel=pFree)!=0 ){ + pFree = pDel->pParent; + sqlite3_free(pDel); + } + } + + assert( pFree==0 ); + sqlite3_free( apLeaf ); + } + }else if( eType==FTSQUERY_NOT ){ + Fts3Expr *pLeft = pRoot->pLeft; + Fts3Expr *pRight = pRoot->pRight; + + pRoot->pLeft = 0; + pRoot->pRight = 0; + pLeft->pParent = 0; + pRight->pParent = 0; + + rc = fts3ExprBalance(&pLeft, nMaxDepth-1); + if( rc==SQLITE_OK ){ + rc = fts3ExprBalance(&pRight, nMaxDepth-1); + } + + if( rc!=SQLITE_OK ){ + sqlite3Fts3ExprFree(pRight); + sqlite3Fts3ExprFree(pLeft); + }else{ + assert( pLeft && pRight ); + pRoot->pLeft = pLeft; + pLeft->pParent = pRoot; + pRoot->pRight = pRight; + pRight->pParent = pRoot; + } + } + } + + if( rc!=SQLITE_OK ){ + sqlite3Fts3ExprFree(pRoot); + pRoot = 0; + } + *pp = pRoot; + return rc; +} + +/* +** This function is similar to sqlite3Fts3ExprParse(), with the following +** differences: +** +** 1. It does not do expression rebalancing. +** 2. It does not check that the expression does not exceed the +** maximum allowable depth. +** 3. Even if it fails, *ppExpr may still be set to point to an +** expression tree. It should be deleted using sqlite3Fts3ExprFree() +** in this case. 
+*/ +static int fts3ExprParseUnbalanced( + sqlite3_tokenizer *pTokenizer, /* Tokenizer module */ + int iLangid, /* Language id for tokenizer */ + char **azCol, /* Array of column names for fts3 table */ + int bFts4, /* True to allow FTS4-only syntax */ + int nCol, /* Number of entries in azCol[] */ + int iDefaultCol, /* Default column to query */ + const char *z, int n, /* Text of MATCH query */ + Fts3Expr **ppExpr /* OUT: Parsed query structure */ +){ + int nParsed; + int rc; + ParseContext sParse; + + memset(&sParse, 0, sizeof(ParseContext)); + sParse.pTokenizer = pTokenizer; + sParse.iLangid = iLangid; + sParse.azCol = (const char **)azCol; + sParse.nCol = nCol; + sParse.iDefaultCol = iDefaultCol; + sParse.bFts4 = bFts4; + if( z==0 ){ + *ppExpr = 0; + return SQLITE_OK; + } + if( n<0 ){ + n = (int)strlen(z); + } + rc = fts3ExprParse(&sParse, z, n, ppExpr, &nParsed); + assert( rc==SQLITE_OK || *ppExpr==0 ); + + /* Check for mismatched parenthesis */ + if( rc==SQLITE_OK && sParse.nNest ){ + rc = SQLITE_ERROR; + } + + return rc; +} /* ** Parameters z and n contain a pointer to and length of a buffer containing ** an fts3 query expression, respectively. This function attempts to parse the ** query expression and create a tree of Fts3Expr structures representing the @@ -103804,52 +145484,84 @@ ** specified as part of the query string), or -1 if tokens may by default ** match any table column. */ SQLITE_PRIVATE int sqlite3Fts3ExprParse( sqlite3_tokenizer *pTokenizer, /* Tokenizer module */ + int iLangid, /* Language id for tokenizer */ char **azCol, /* Array of column names for fts3 table */ + int bFts4, /* True to allow FTS4-only syntax */ int nCol, /* Number of entries in azCol[] */ int iDefaultCol, /* Default column to query */ const char *z, int n, /* Text of MATCH query */ - Fts3Expr **ppExpr /* OUT: Parsed query structure */ + Fts3Expr **ppExpr, /* OUT: Parsed query structure */ + char **pzErr /* OUT: Error message (sqlite3_malloc) */ ){ - int nParsed; - int rc; - ParseContext sParse; - sParse.pTokenizer = pTokenizer; - sParse.azCol = (const char **)azCol; - sParse.nCol = nCol; - sParse.iDefaultCol = iDefaultCol; - sParse.nNest = 0; - if( z==0 ){ - *ppExpr = 0; - return SQLITE_OK; - } - if( n<0 ){ - n = (int)strlen(z); - } - rc = fts3ExprParse(&sParse, z, n, ppExpr, &nParsed); - - /* Check for mismatched parenthesis */ - if( rc==SQLITE_OK && sParse.nNest ){ - rc = SQLITE_ERROR; + int rc = fts3ExprParseUnbalanced( + pTokenizer, iLangid, azCol, bFts4, nCol, iDefaultCol, z, n, ppExpr + ); + + /* Rebalance the expression. And check that its depth does not exceed + ** SQLITE_FTS3_MAX_EXPR_DEPTH. */ + if( rc==SQLITE_OK && *ppExpr ){ + rc = fts3ExprBalance(ppExpr, SQLITE_FTS3_MAX_EXPR_DEPTH); + if( rc==SQLITE_OK ){ + rc = fts3ExprCheckDepth(*ppExpr, SQLITE_FTS3_MAX_EXPR_DEPTH); + } + } + + if( rc!=SQLITE_OK ){ sqlite3Fts3ExprFree(*ppExpr); *ppExpr = 0; + if( rc==SQLITE_TOOBIG ){ + sqlite3Fts3ErrMsg(pzErr, + "FTS expression tree is too large (maximum depth %d)", + SQLITE_FTS3_MAX_EXPR_DEPTH + ); + rc = SQLITE_ERROR; + }else if( rc==SQLITE_ERROR ){ + sqlite3Fts3ErrMsg(pzErr, "malformed MATCH expression: [%s]", z); + } } return rc; } + +/* +** Free a single node of an expression tree. +*/ +static void fts3FreeExprNode(Fts3Expr *p){ + assert( p->eType==FTSQUERY_PHRASE || p->pPhrase==0 ); + sqlite3Fts3EvalPhraseCleanup(p->pPhrase); + sqlite3_free(p->aMI); + sqlite3_free(p); +} /* ** Free a parsed fts3 query expression allocated by sqlite3Fts3ExprParse(). 
+** +** This function would be simpler if it recursively called itself. But +** that would mean passing a sufficiently large expression to ExprParse() +** could cause a stack overflow. */ -SQLITE_PRIVATE void sqlite3Fts3ExprFree(Fts3Expr *p){ - if( p ){ - sqlite3Fts3ExprFree(p->pLeft); - sqlite3Fts3ExprFree(p->pRight); - sqlite3_free(p->aDoclist); - sqlite3_free(p); +SQLITE_PRIVATE void sqlite3Fts3ExprFree(Fts3Expr *pDel){ + Fts3Expr *p; + assert( pDel==0 || pDel->pParent==0 ); + for(p=pDel; p && (p->pLeft||p->pRight); p=(p->pLeft ? p->pLeft : p->pRight)){ + assert( p->pParent==0 || p==p->pParent->pRight || p==p->pParent->pLeft ); + } + while( p ){ + Fts3Expr *pParent = p->pParent; + fts3FreeExprNode(p); + if( pParent && p==pParent->pLeft && pParent->pRight ){ + p = pParent->pRight; + while( p && (p->pLeft || p->pRight) ){ + assert( p==p->pParent->pRight || p==p->pParent->pLeft ); + p = (p->pLeft ? p->pLeft : p->pRight); + } + }else{ + p = pParent; + } } } /**************************************************************************** ***************************************************************************** @@ -103856,10 +145568,11 @@ ** Everything after this point is just test code. */ #ifdef SQLITE_TEST +/* #include <stdio.h> */ /* ** Function to query the hash-table of tokenizers (see README.tokenizers). */ static int queryTestTokenizer( @@ -103886,51 +145599,60 @@ return sqlite3_finalize(pStmt); } /* -** This function is part of the test interface for the query parser. It -** writes a text representation of the query expression pExpr into the -** buffer pointed to by argument zBuf. It is assumed that zBuf is large -** enough to store the required text representation. +** Return a pointer to a buffer containing a text representation of the +** expression passed as the first argument. The buffer is obtained from +** sqlite3_malloc(). It is the responsibility of the caller to use +** sqlite3_free() to release the memory. If an OOM condition is encountered, +** NULL is returned. +** +** If the second argument is not NULL, then its contents are prepended to +** the returned expression text and then freed using sqlite3_free(). 
*/ -static void exprToString(Fts3Expr *pExpr, char *zBuf){ +static char *exprToString(Fts3Expr *pExpr, char *zBuf){ + if( pExpr==0 ){ + return sqlite3_mprintf(""); + } switch( pExpr->eType ){ case FTSQUERY_PHRASE: { Fts3Phrase *pPhrase = pExpr->pPhrase; int i; - zBuf += sprintf(zBuf, "PHRASE %d %d", pPhrase->iColumn, pPhrase->isNot); - for(i=0; i<pPhrase->nToken; i++){ - zBuf += sprintf(zBuf," %.*s",pPhrase->aToken[i].n,pPhrase->aToken[i].z); - zBuf += sprintf(zBuf,"%s", (pPhrase->aToken[i].isPrefix?"+":"")); + zBuf = sqlite3_mprintf( + "%zPHRASE %d 0", zBuf, pPhrase->iColumn); + for(i=0; zBuf && i<pPhrase->nToken; i++){ + zBuf = sqlite3_mprintf("%z %.*s%s", zBuf, + pPhrase->aToken[i].n, pPhrase->aToken[i].z, + (pPhrase->aToken[i].isPrefix?"+":"") + ); } - return; + return zBuf; } case FTSQUERY_NEAR: - zBuf += sprintf(zBuf, "NEAR/%d ", pExpr->nNear); + zBuf = sqlite3_mprintf("%zNEAR/%d ", zBuf, pExpr->nNear); break; case FTSQUERY_NOT: - zBuf += sprintf(zBuf, "NOT "); + zBuf = sqlite3_mprintf("%zNOT ", zBuf); break; case FTSQUERY_AND: - zBuf += sprintf(zBuf, "AND "); + zBuf = sqlite3_mprintf("%zAND ", zBuf); break; case FTSQUERY_OR: - zBuf += sprintf(zBuf, "OR "); + zBuf = sqlite3_mprintf("%zOR ", zBuf); break; } - zBuf += sprintf(zBuf, "{"); - exprToString(pExpr->pLeft, zBuf); - zBuf += strlen(zBuf); - zBuf += sprintf(zBuf, "} "); - - zBuf += sprintf(zBuf, "{"); - exprToString(pExpr->pRight, zBuf); - zBuf += strlen(zBuf); - zBuf += sprintf(zBuf, "}"); + if( zBuf ) zBuf = sqlite3_mprintf("%z{", zBuf); + if( zBuf ) zBuf = exprToString(pExpr->pLeft, zBuf); + if( zBuf ) zBuf = sqlite3_mprintf("%z} {", zBuf); + + if( zBuf ) zBuf = exprToString(pExpr->pRight, zBuf); + if( zBuf ) zBuf = sqlite3_mprintf("%z}", zBuf); + + return zBuf; } /* ** This is the implementation of a scalar SQL function used to test the ** expression parser. It should be called as follows: @@ -103957,10 +145679,11 @@ const char *zExpr; int nExpr; int nCol; int ii; Fts3Expr *pExpr; + char *zBuf = 0; sqlite3 *db = sqlite3_context_db_handle(context); if( argc<3 ){ sqlite3_result_error(context, "Usage: fts3_exprtest(tokenizer, expr, col1, ...", -1 @@ -103996,24 +145719,34 @@ } for(ii=0; ii<nCol; ii++){ azCol[ii] = (char *)sqlite3_value_text(argv[ii+2]); } - rc = sqlite3Fts3ExprParse( - pTokenizer, azCol, nCol, nCol, zExpr, nExpr, &pExpr - ); - if( rc==SQLITE_NOMEM ){ - sqlite3_result_error_nomem(context); - goto exprtest_out; - }else if( rc==SQLITE_OK ){ - char zBuf[4096]; - exprToString(pExpr, zBuf); - sqlite3_result_text(context, zBuf, -1, SQLITE_TRANSIENT); + if( sqlite3_user_data(context) ){ + char *zDummy = 0; + rc = sqlite3Fts3ExprParse( + pTokenizer, 0, azCol, 0, nCol, nCol, zExpr, nExpr, &pExpr, &zDummy + ); + assert( rc==SQLITE_OK || pExpr==0 ); + sqlite3_free(zDummy); + }else{ + rc = fts3ExprParseUnbalanced( + pTokenizer, 0, azCol, 0, nCol, nCol, zExpr, nExpr, &pExpr + ); + } + + if( rc!=SQLITE_OK && rc!=SQLITE_NOMEM ){ sqlite3Fts3ExprFree(pExpr); - }else{ sqlite3_result_error(context, "Error parsing expression", -1); + }else if( rc==SQLITE_NOMEM || !(zBuf = exprToString(pExpr, 0)) ){ + sqlite3_result_error_nomem(context); + }else{ + sqlite3_result_text(context, zBuf, -1, SQLITE_TRANSIENT); + sqlite3_free(zBuf); } + + sqlite3Fts3ExprFree(pExpr); exprtest_out: if( pModule && pTokenizer ){ rc = pModule->xDestroy(pTokenizer); } @@ -104023,13 +145756,19 @@ /* ** Register the query expression parser test function fts3_exprtest() ** with database connection db. 
*/ SQLITE_PRIVATE int sqlite3Fts3ExprInitTestInterface(sqlite3* db){ - return sqlite3_create_function( + int rc = sqlite3_create_function( db, "fts3_exprtest", -1, SQLITE_UTF8, 0, fts3ExprTest, 0, 0 ); + if( rc==SQLITE_OK ){ + rc = sqlite3_create_function(db, "fts3_exprtest_rebalance", + -1, SQLITE_UTF8, (void *)1, fts3ExprTest, 0, 0 + ); + } + return rc; } #endif #endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) */ @@ -104058,13 +145797,18 @@ ** (in which case SQLITE_CORE is not defined), or ** ** * The FTS3 module is being built into the core of ** SQLite (in which case SQLITE_ENABLE_FTS3 is defined). */ +/* #include "fts3Int.h" */ #if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) +/* #include <assert.h> */ +/* #include <stdlib.h> */ +/* #include <string.h> */ +/* #include "fts3_hash.h" */ /* ** Malloc and Free functions */ static void *fts3HashMalloc(int n){ @@ -104126,17 +145870,17 @@ /* ** Hash and comparison functions when the mode is FTS3_HASH_STRING */ static int fts3StrHash(const void *pKey, int nKey){ const char *z = (const char *)pKey; - int h = 0; + unsigned h = 0; if( nKey<=0 ) nKey = (int) strlen(z); while( nKey > 0 ){ h = (h<<3) ^ h ^ *z++; nKey--; } - return h & 0x7fffffff; + return (int)(h & 0x7fffffff); } static int fts3StrCompare(const void *pKey1, int n1, const void *pKey2, int n2){ if( n1!=n2 ) return 1; return strncmp((const char*)pKey1,(const char*)pKey2,n1); } @@ -104438,24 +146182,29 @@ ** (in which case SQLITE_CORE is not defined), or ** ** * The FTS3 module is being built into the core of ** SQLite (in which case SQLITE_ENABLE_FTS3 is defined). */ +/* #include "fts3Int.h" */ #if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) +/* #include <assert.h> */ +/* #include <stdlib.h> */ +/* #include <stdio.h> */ +/* #include <string.h> */ - +/* #include "fts3_tokenizer.h" */ /* ** Class derived from sqlite3_tokenizer */ typedef struct porter_tokenizer { sqlite3_tokenizer base; /* Base class */ } porter_tokenizer; /* -** Class derived from sqlit3_tokenizer_cursor +** Class derived from sqlite3_tokenizer_cursor */ typedef struct porter_tokenizer_cursor { sqlite3_tokenizer_cursor base; const char *zInput; /* input we are tokenizing */ int nInput; /* size of the input */ @@ -104594,11 +146343,11 @@ ** Return true if the m-value for z is 1 or more. In other words, ** return true if z contains at least one vowel that is followed ** by a consonant. ** ** In this routine z[] is in reverse order. So we are really looking -** for an instance of of a consonant followed by a vowel. +** for an instance of a consonant followed by a vowel. */ static int m_gt_0(const char *z){ while( isVowel(z) ){ z++; } if( *z==0 ) return 0; while( isConsonant(z) ){ z++; } @@ -104753,11 +146502,11 @@ */ static void porter_stemmer(const char *zIn, int nIn, char *zOut, int *pnOut){ int i, j; char zReverse[28]; char *z, *z2; - if( nIn<3 || nIn>=sizeof(zReverse)-7 ){ + if( nIn<3 || nIn>=(int)sizeof(zReverse)-7 ){ /* The word is too big or too small for the porter stemmer. 
** Fallback to the copy stemmer */ copy_stemmer(zIn, nIn, zOut, pnOut); return; } @@ -104814,61 +146563,74 @@ } /* Step 2 */ switch( z[1] ){ case 'a': - stem(&z, "lanoita", "ate", m_gt_0) || - stem(&z, "lanoit", "tion", m_gt_0); + if( !stem(&z, "lanoita", "ate", m_gt_0) ){ + stem(&z, "lanoit", "tion", m_gt_0); + } break; case 'c': - stem(&z, "icne", "ence", m_gt_0) || - stem(&z, "icna", "ance", m_gt_0); + if( !stem(&z, "icne", "ence", m_gt_0) ){ + stem(&z, "icna", "ance", m_gt_0); + } break; case 'e': stem(&z, "rezi", "ize", m_gt_0); break; case 'g': stem(&z, "igol", "log", m_gt_0); break; case 'l': - stem(&z, "ilb", "ble", m_gt_0) || - stem(&z, "illa", "al", m_gt_0) || - stem(&z, "iltne", "ent", m_gt_0) || - stem(&z, "ile", "e", m_gt_0) || - stem(&z, "ilsuo", "ous", m_gt_0); + if( !stem(&z, "ilb", "ble", m_gt_0) + && !stem(&z, "illa", "al", m_gt_0) + && !stem(&z, "iltne", "ent", m_gt_0) + && !stem(&z, "ile", "e", m_gt_0) + ){ + stem(&z, "ilsuo", "ous", m_gt_0); + } break; case 'o': - stem(&z, "noitazi", "ize", m_gt_0) || - stem(&z, "noita", "ate", m_gt_0) || - stem(&z, "rota", "ate", m_gt_0); + if( !stem(&z, "noitazi", "ize", m_gt_0) + && !stem(&z, "noita", "ate", m_gt_0) + ){ + stem(&z, "rota", "ate", m_gt_0); + } break; case 's': - stem(&z, "msila", "al", m_gt_0) || - stem(&z, "ssenevi", "ive", m_gt_0) || - stem(&z, "ssenluf", "ful", m_gt_0) || - stem(&z, "ssensuo", "ous", m_gt_0); + if( !stem(&z, "msila", "al", m_gt_0) + && !stem(&z, "ssenevi", "ive", m_gt_0) + && !stem(&z, "ssenluf", "ful", m_gt_0) + ){ + stem(&z, "ssensuo", "ous", m_gt_0); + } break; case 't': - stem(&z, "itila", "al", m_gt_0) || - stem(&z, "itivi", "ive", m_gt_0) || - stem(&z, "itilib", "ble", m_gt_0); + if( !stem(&z, "itila", "al", m_gt_0) + && !stem(&z, "itivi", "ive", m_gt_0) + ){ + stem(&z, "itilib", "ble", m_gt_0); + } break; } /* Step 3 */ switch( z[0] ){ case 'e': - stem(&z, "etaci", "ic", m_gt_0) || - stem(&z, "evita", "", m_gt_0) || - stem(&z, "ezila", "al", m_gt_0); + if( !stem(&z, "etaci", "ic", m_gt_0) + && !stem(&z, "evita", "", m_gt_0) + ){ + stem(&z, "ezila", "al", m_gt_0); + } break; case 'i': stem(&z, "itici", "ic", m_gt_0); break; case 'l': - stem(&z, "laci", "ic", m_gt_0) || - stem(&z, "luf", "", m_gt_0); + if( !stem(&z, "laci", "ic", m_gt_0) ){ + stem(&z, "luf", "", m_gt_0); + } break; case 's': stem(&z, "ssen", "", m_gt_0); break; } @@ -104905,13 +146667,15 @@ if( z[2]=='a' ){ if( m_gt_1(z+3) ){ z += 3; } }else if( z[2]=='e' ){ - stem(&z, "tneme", "", m_gt_1) || - stem(&z, "tnem", "", m_gt_1) || - stem(&z, "tne", "", m_gt_1); + if( !stem(&z, "tneme", "", m_gt_1) + && !stem(&z, "tnem", "", m_gt_1) + ){ + stem(&z, "tne", "", m_gt_1); + } } } break; case 'o': if( z[0]=='u' ){ @@ -104926,12 +146690,13 @@ if( z[0]=='m' && z[2]=='i' && m_gt_1(z+3) ){ z += 3; } break; case 't': - stem(&z, "eta", "", m_gt_1) || - stem(&z, "iti", "", m_gt_1); + if( !stem(&z, "eta", "", m_gt_1) ){ + stem(&z, "iti", "", m_gt_1); + } break; case 'u': if( z[0]=='s' && z[2]=='o' && m_gt_1(z+3) ){ z += 3; } @@ -105041,10 +146806,11 @@ porterCreate, porterDestroy, porterOpen, porterClose, porterNext, + 0 }; /* ** Allocate a new porter tokenizer. Return a pointer to the new ** tokenizer in *ppModule @@ -105082,16 +146848,15 @@ ** (in which case SQLITE_CORE is not defined), or ** ** * The FTS3 module is being built into the core of ** SQLite (in which case SQLITE_ENABLE_FTS3 is defined). 
*/ +/* #include "fts3Int.h" */ #if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) -#ifndef SQLITE_CORE - SQLITE_EXTENSION_INIT1 -#endif - +/* #include <assert.h> */ +/* #include <string.h> */ /* ** Implementation of the SQL scalar function for accessing the underlying ** hash table. This function may be called as follows: ** @@ -105127,24 +146892,34 @@ zName = sqlite3_value_text(argv[0]); nName = sqlite3_value_bytes(argv[0])+1; if( argc==2 ){ +#ifdef SQLITE_ENABLE_FTS3_TOKENIZER void *pOld; int n = sqlite3_value_bytes(argv[1]); - if( n!=sizeof(pPtr) ){ + if( zName==0 || n!=sizeof(pPtr) ){ sqlite3_result_error(context, "argument type mismatch", -1); return; } pPtr = *(void **)sqlite3_value_blob(argv[1]); pOld = sqlite3Fts3HashInsert(pHash, (void *)zName, nName, pPtr); if( pOld==pPtr ){ sqlite3_result_error(context, "out of memory", -1); return; } - }else{ - pPtr = sqlite3Fts3HashFind(pHash, zName, nName); +#else + sqlite3_result_error(context, "fts3tokenize: " + "disabled - rebuild with -DSQLITE_ENABLE_FTS3_TOKENIZER", -1 + ); + return; +#endif /* SQLITE_ENABLE_FTS3_TOKENIZER */ + }else + { + if( zName ){ + pPtr = sqlite3Fts3HashFind(pHash, zName, nName); + } if( !pPtr ){ char *zErr = sqlite3_mprintf("unknown tokenizer: %s", zName); sqlite3_result_error(context, zErr, -1); sqlite3_free(zErr); return; @@ -105152,11 +146927,11 @@ } sqlite3_result_blob(context, (void *)&pPtr, sizeof(pPtr), SQLITE_TRANSIENT); } -static int fts3IsIdChar(char c){ +SQLITE_PRIVATE int sqlite3Fts3IsIdChar(char c){ static const char isFtsIdChar[] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 0x */ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 1x */ 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 2x */ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, /* 3x */ @@ -105190,13 +146965,13 @@ while( *z2 && z2[0]!=']' ) z2++; if( *z2 ) z2++; break; default: - if( fts3IsIdChar(*z1) ){ + if( sqlite3Fts3IsIdChar(*z1) ){ z2 = &z1[1]; - while( fts3IsIdChar(*z2) ) z2++; + while( sqlite3Fts3IsIdChar(*z2) ) z2++; }else{ z1++; } } } @@ -105205,44 +146980,36 @@ return z1; } SQLITE_PRIVATE int sqlite3Fts3InitTokenizer( Fts3Hash *pHash, /* Tokenizer hash table */ - const char *zArg, /* Possible tokenizer specification */ + const char *zArg, /* Tokenizer name */ sqlite3_tokenizer **ppTok, /* OUT: Tokenizer (if applicable) */ - const char **pzTokenizer, /* OUT: Set to zArg if is tokenizer */ char **pzErr /* OUT: Set to malloced error message */ ){ int rc; char *z = (char *)zArg; - int n; + int n = 0; char *zCopy; char *zEnd; /* Pointer to nul-term of zCopy */ sqlite3_tokenizer_module *m; - if( !z ){ - zCopy = sqlite3_mprintf("simple"); - }else{ - if( sqlite3_strnicmp(z, "tokenize", 8) || fts3IsIdChar(z[8])){ - return SQLITE_OK; - } - zCopy = sqlite3_mprintf("%s", &z[8]); - *pzTokenizer = zArg; - } - if( !zCopy ){ - return SQLITE_NOMEM; - } - + zCopy = sqlite3_mprintf("%s", zArg); + if( !zCopy ) return SQLITE_NOMEM; zEnd = &zCopy[strlen(zCopy)]; z = (char *)sqlite3Fts3NextToken(zCopy, &n); + if( z==0 ){ + assert( n==0 ); + z = zCopy; + } z[n] = '\0'; sqlite3Fts3Dequote(z); - m = (sqlite3_tokenizer_module *)sqlite3Fts3HashFind(pHash, z, (int)strlen(z)+1); + m = (sqlite3_tokenizer_module *)sqlite3Fts3HashFind(pHash,z,(int)strlen(z)+1); if( !m ){ - *pzErr = sqlite3_mprintf("unknown tokenizer: %s", z); + sqlite3Fts3ErrMsg(pzErr, "unknown tokenizer: %s", z); rc = SQLITE_ERROR; }else{ char const **aArg = 0; int iArg = 0; z = &z[n+1]; @@ -105261,11 +147028,11 @@ z = &z[n+1]; } rc = m->xCreate(iArg, aArg, ppTok); assert( 
rc!=SQLITE_OK || *ppTok ); if( rc!=SQLITE_OK ){ - *pzErr = sqlite3_mprintf("unknown tokenizer"); + sqlite3Fts3ErrMsg(pzErr, "unknown tokenizer"); }else{ (*ppTok)->pModule = m; } sqlite3_free((void *)aArg); } @@ -105275,18 +147042,19 @@ } #ifdef SQLITE_TEST +#include <tcl.h> +/* #include <string.h> */ /* ** Implementation of a special SQL scalar function for testing tokenizers ** designed to be used in concert with the Tcl testing framework. This -** function must be called with two arguments: +** function must be called with two or more arguments: ** -** SELECT <function-name>(<key-name>, <input-string>); -** SELECT <function-name>(<key-name>, <pointer>); +** SELECT <function-name>(<key-name>, ..., <input-string>); ** ** where <function-name> is the name passed as the second argument ** to the sqlite3Fts3InitHashTable() function (e.g. 'fts3_tokenizer') ** concatenated with the string '_test' (e.g. 'fts3_tokenizer_test'). ** @@ -105319,54 +147087,57 @@ const char *zName; int nName; const char *zInput; int nInput; - const char *zArg = 0; + const char *azArg[64]; const char *zToken; - int nToken; - int iStart; - int iEnd; - int iPos; + int nToken = 0; + int iStart = 0; + int iEnd = 0; + int iPos = 0; + int i; Tcl_Obj *pRet; - assert( argc==2 || argc==3 ); + if( argc<2 ){ + sqlite3_result_error(context, "insufficient arguments", -1); + return; + } nName = sqlite3_value_bytes(argv[0]); zName = (const char *)sqlite3_value_text(argv[0]); nInput = sqlite3_value_bytes(argv[argc-1]); zInput = (const char *)sqlite3_value_text(argv[argc-1]); - if( argc==3 ){ - zArg = (const char *)sqlite3_value_text(argv[1]); - } - pHash = (Fts3Hash *)sqlite3_user_data(context); p = (sqlite3_tokenizer_module *)sqlite3Fts3HashFind(pHash, zName, nName+1); if( !p ){ - char *zErr = sqlite3_mprintf("unknown tokenizer: %s", zName); - sqlite3_result_error(context, zErr, -1); - sqlite3_free(zErr); + char *zErr2 = sqlite3_mprintf("unknown tokenizer: %s", zName); + sqlite3_result_error(context, zErr2, -1); + sqlite3_free(zErr2); return; } pRet = Tcl_NewObj(); Tcl_IncrRefCount(pRet); - if( SQLITE_OK!=p->xCreate(zArg ? 
1 : 0, &zArg, &pTokenizer) ){ + for(i=1; i<argc-1; i++){ + azArg[i-1] = (const char *)sqlite3_value_text(argv[i]); + } + + if( SQLITE_OK!=p->xCreate(argc-2, azArg, &pTokenizer) ){ zErr = "error in xCreate()"; goto finish; } pTokenizer->pModule = p; - if( SQLITE_OK!=p->xOpen(pTokenizer, zInput, nInput, &pCsr) ){ + if( sqlite3Fts3OpenTokenizer(pTokenizer, 0, zInput, nInput, &pCsr) ){ zErr = "error in xOpen()"; goto finish; } - pCsr->pTokenizer = pTokenizer; while( SQLITE_OK==p->xNext(pCsr, &zToken, &nToken, &iStart, &iEnd, &iPos) ){ Tcl_ListObjAppendElement(0, pRet, Tcl_NewIntObj(iPos)); Tcl_ListObjAppendElement(0, pRet, Tcl_NewStringObj(zToken, nToken)); zToken = &zInput[iStart]; @@ -105390,10 +147161,11 @@ sqlite3_result_text(context, Tcl_GetString(pRet), -1, SQLITE_TRANSIENT); } Tcl_DecrRefCount(pRet); } +#ifdef SQLITE_ENABLE_FTS3_TOKENIZER static int registerTokenizer( sqlite3 *db, char *zName, const sqlite3_tokenizer_module *p @@ -105411,10 +147183,12 @@ sqlite3_bind_blob(pStmt, 2, &p, sizeof(p), SQLITE_STATIC); sqlite3_step(pStmt); return sqlite3_finalize(pStmt); } +#endif /* SQLITE_ENABLE_FTS3_TOKENIZER */ + static int queryTokenizer( sqlite3 *db, char *zName, @@ -105482,25 +147256,27 @@ assert( rc==SQLITE_ERROR ); assert( p2==0 ); assert( 0==strcmp(sqlite3_errmsg(db), "unknown tokenizer: nosuchtokenizer") ); /* Test the storage function */ +#ifdef SQLITE_ENABLE_FTS3_TOKENIZER rc = registerTokenizer(db, "nosuchtokenizer", p1); assert( rc==SQLITE_OK ); rc = queryTokenizer(db, "nosuchtokenizer", &p2); assert( rc==SQLITE_OK ); assert( p2==p1 ); +#endif sqlite3_result_text(context, "ok", -1, SQLITE_STATIC); } #endif /* ** Set up SQL objects in database db used to access the contents of ** the hash table pointed to by argument pHash. The hash table must -** been initialised to use string keys, and to take a private copy +** been initialized to use string keys, and to take a private copy ** of the key when a value is inserted. i.e. by a call similar to: ** ** sqlite3Fts3HashInit(pHash, FTS3_HASH_STRING, 1); ** ** This function adds a scalar function (see header comment above @@ -105530,19 +147306,24 @@ if( !zTest || !zTest2 ){ rc = SQLITE_NOMEM; } #endif - if( SQLITE_OK!=rc - || SQLITE_OK!=(rc = sqlite3_create_function(db, zName, 1, any, p, scalarFunc, 0, 0)) - || SQLITE_OK!=(rc = sqlite3_create_function(db, zName, 2, any, p, scalarFunc, 0, 0)) + if( SQLITE_OK==rc ){ + rc = sqlite3_create_function(db, zName, 1, any, p, scalarFunc, 0, 0); + } + if( SQLITE_OK==rc ){ + rc = sqlite3_create_function(db, zName, 2, any, p, scalarFunc, 0, 0); + } #ifdef SQLITE_TEST - || SQLITE_OK!=(rc = sqlite3_create_function(db, zTest, 2, any, p, testFunc, 0, 0)) - || SQLITE_OK!=(rc = sqlite3_create_function(db, zTest, 3, any, p, testFunc, 0, 0)) - || SQLITE_OK!=(rc = sqlite3_create_function(db, zTest2, 0, any, pdb, intTestFunc, 0, 0)) + if( SQLITE_OK==rc ){ + rc = sqlite3_create_function(db, zTest, -1, any, p, testFunc, 0, 0); + } + if( SQLITE_OK==rc ){ + rc = sqlite3_create_function(db, zTest2, 0, any, pdb, intTestFunc, 0, 0); + } #endif - ); #ifdef SQLITE_TEST sqlite3_free(zTest); sqlite3_free(zTest2); #endif @@ -105576,14 +147357,19 @@ ** (in which case SQLITE_CORE is not defined), or ** ** * The FTS3 module is being built into the core of ** SQLite (in which case SQLITE_ENABLE_FTS3 is defined). 
*/ +/* #include "fts3Int.h" */ #if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) +/* #include <assert.h> */ +/* #include <stdlib.h> */ +/* #include <stdio.h> */ +/* #include <string.h> */ - +/* #include "fts3_tokenizer.h" */ typedef struct simple_tokenizer { sqlite3_tokenizer base; char delim[128]; /* flag ASCII delimiters */ } simple_tokenizer; @@ -105600,10 +147386,13 @@ static int simpleDelim(simple_tokenizer *t, unsigned char c){ return c<0x80 && t->delim[c]; } +static int fts3_isalnum(int x){ + return (x>='0' && x<='9') || (x>='A' && x<='Z') || (x>='a' && x<='z'); +} /* ** Create a new tokenizer instance. */ static int simpleCreate( @@ -105634,11 +147423,11 @@ } } else { /* Mark non-alphanumeric ASCII characters as delimiters */ int i; for(i=1; i<0x80; i++){ - t->delim[i] = !isalnum(i) ? -1 : 0; + t->delim[i] = !fts3_isalnum(i) ? -1 : 0; } } *ppTokenizer = &t->base; return SQLITE_OK; @@ -105740,11 +147529,11 @@ for(i=0; i<n; i++){ /* TODO(shess) This needs expansion to handle UTF-8 ** case-insensitivity. */ unsigned char ch = p[iStartOffset+i]; - c->pToken[i] = (char)(ch<0x80 ? tolower(ch) : ch); + c->pToken[i] = (char)((ch>='A' && ch<='Z') ? ch-'A'+'a' : ch); } *ppToken = c->pToken; *pnBytes = n; *piStartOffset = iStartOffset; *piEndOffset = c->iOffset; @@ -105764,10 +147553,11 @@ simpleCreate, simpleDestroy, simpleOpen, simpleClose, simpleNext, + 0, }; /* ** Allocate a new simple tokenizer. Return a pointer to the new ** tokenizer in *ppModule @@ -105779,10 +147569,467 @@ } #endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) */ /************** End of fts3_tokenizer1.c *************************************/ +/************** Begin file fts3_tokenize_vtab.c ******************************/ +/* +** 2013 Apr 22 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This file contains code for the "fts3tokenize" virtual table module. +** An fts3tokenize virtual table is created as follows: +** +** CREATE VIRTUAL TABLE <tbl> USING fts3tokenize( +** <tokenizer-name>, <arg-1>, ... +** ); +** +** The table created has the following schema: +** +** CREATE TABLE <tbl>(input, token, start, end, position) +** +** When queried, the query must include a WHERE clause of type: +** +** input = <string> +** +** The virtual table module tokenizes this <string>, using the FTS3 +** tokenizer specified by the arguments to the CREATE VIRTUAL TABLE +** statement and returns one row for each token in the result. With +** fields set as follows: +** +** input: Always set to a copy of <string> +** token: A token from the input. +** start: Byte offset of the token within the input <string>. +** end: Byte offset of the byte immediately following the end of the +** token within the input string. +** pos: Token offset of token within input. +** +*/ +/* #include "fts3Int.h" */ +#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) + +/* #include <string.h> */ +/* #include <assert.h> */ + +typedef struct Fts3tokTable Fts3tokTable; +typedef struct Fts3tokCursor Fts3tokCursor; + +/* +** Virtual table structure. 
+*/ +struct Fts3tokTable { + sqlite3_vtab base; /* Base class used by SQLite core */ + const sqlite3_tokenizer_module *pMod; + sqlite3_tokenizer *pTok; +}; + +/* +** Virtual table cursor structure. +*/ +struct Fts3tokCursor { + sqlite3_vtab_cursor base; /* Base class used by SQLite core */ + char *zInput; /* Input string */ + sqlite3_tokenizer_cursor *pCsr; /* Cursor to iterate through zInput */ + int iRowid; /* Current 'rowid' value */ + const char *zToken; /* Current 'token' value */ + int nToken; /* Size of zToken in bytes */ + int iStart; /* Current 'start' value */ + int iEnd; /* Current 'end' value */ + int iPos; /* Current 'pos' value */ +}; + +/* +** Query FTS for the tokenizer implementation named zName. +*/ +static int fts3tokQueryTokenizer( + Fts3Hash *pHash, + const char *zName, + const sqlite3_tokenizer_module **pp, + char **pzErr +){ + sqlite3_tokenizer_module *p; + int nName = (int)strlen(zName); + + p = (sqlite3_tokenizer_module *)sqlite3Fts3HashFind(pHash, zName, nName+1); + if( !p ){ + sqlite3Fts3ErrMsg(pzErr, "unknown tokenizer: %s", zName); + return SQLITE_ERROR; + } + + *pp = p; + return SQLITE_OK; +} + +/* +** The second argument, argv[], is an array of pointers to nul-terminated +** strings. This function makes a copy of the array and strings into a +** single block of memory. It then dequotes any of the strings that appear +** to be quoted. +** +** If successful, output parameter *pazDequote is set to point at the +** array of dequoted strings and SQLITE_OK is returned. The caller is +** responsible for eventually calling sqlite3_free() to free the array +** in this case. Or, if an error occurs, an SQLite error code is returned. +** The final value of *pazDequote is undefined in this case. +*/ +static int fts3tokDequoteArray( + int argc, /* Number of elements in argv[] */ + const char * const *argv, /* Input array */ + char ***pazDequote /* Output array */ +){ + int rc = SQLITE_OK; /* Return code */ + if( argc==0 ){ + *pazDequote = 0; + }else{ + int i; + int nByte = 0; + char **azDequote; + + for(i=0; i<argc; i++){ + nByte += (int)(strlen(argv[i]) + 1); + } + + *pazDequote = azDequote = sqlite3_malloc(sizeof(char *)*argc + nByte); + if( azDequote==0 ){ + rc = SQLITE_NOMEM; + }else{ + char *pSpace = (char *)&azDequote[argc]; + for(i=0; i<argc; i++){ + int n = (int)strlen(argv[i]); + azDequote[i] = pSpace; + memcpy(pSpace, argv[i], n+1); + sqlite3Fts3Dequote(pSpace); + pSpace += (n+1); + } + } + } + + return rc; +} + +/* +** Schema of the tokenizer table. +*/ +#define FTS3_TOK_SCHEMA "CREATE TABLE x(input, token, start, end, position)" + +/* +** This function does all the work for both the xConnect and xCreate methods. +** These tables have no persistent representation of their own, so xConnect +** and xCreate are identical operations. 
+** +** argv[0]: module name +** argv[1]: database name +** argv[2]: table name +** argv[3]: first argument (tokenizer name) +*/ +static int fts3tokConnectMethod( + sqlite3 *db, /* Database connection */ + void *pHash, /* Hash table of tokenizers */ + int argc, /* Number of elements in argv array */ + const char * const *argv, /* xCreate/xConnect argument array */ + sqlite3_vtab **ppVtab, /* OUT: New sqlite3_vtab object */ + char **pzErr /* OUT: sqlite3_malloc'd error message */ +){ + Fts3tokTable *pTab = 0; + const sqlite3_tokenizer_module *pMod = 0; + sqlite3_tokenizer *pTok = 0; + int rc; + char **azDequote = 0; + int nDequote; + + rc = sqlite3_declare_vtab(db, FTS3_TOK_SCHEMA); + if( rc!=SQLITE_OK ) return rc; + + nDequote = argc-3; + rc = fts3tokDequoteArray(nDequote, &argv[3], &azDequote); + + if( rc==SQLITE_OK ){ + const char *zModule; + if( nDequote<1 ){ + zModule = "simple"; + }else{ + zModule = azDequote[0]; + } + rc = fts3tokQueryTokenizer((Fts3Hash*)pHash, zModule, &pMod, pzErr); + } + + assert( (rc==SQLITE_OK)==(pMod!=0) ); + if( rc==SQLITE_OK ){ + const char * const *azArg = (const char * const *)&azDequote[1]; + rc = pMod->xCreate((nDequote>1 ? nDequote-1 : 0), azArg, &pTok); + } + + if( rc==SQLITE_OK ){ + pTab = (Fts3tokTable *)sqlite3_malloc(sizeof(Fts3tokTable)); + if( pTab==0 ){ + rc = SQLITE_NOMEM; + } + } + + if( rc==SQLITE_OK ){ + memset(pTab, 0, sizeof(Fts3tokTable)); + pTab->pMod = pMod; + pTab->pTok = pTok; + *ppVtab = &pTab->base; + }else{ + if( pTok ){ + pMod->xDestroy(pTok); + } + } + + sqlite3_free(azDequote); + return rc; +} + +/* +** This function does the work for both the xDisconnect and xDestroy methods. +** These tables have no persistent representation of their own, so xDisconnect +** and xDestroy are identical operations. +*/ +static int fts3tokDisconnectMethod(sqlite3_vtab *pVtab){ + Fts3tokTable *pTab = (Fts3tokTable *)pVtab; + + pTab->pMod->xDestroy(pTab->pTok); + sqlite3_free(pTab); + return SQLITE_OK; +} + +/* +** xBestIndex - Analyze a WHERE and ORDER BY clause. +*/ +static int fts3tokBestIndexMethod( + sqlite3_vtab *pVTab, + sqlite3_index_info *pInfo +){ + int i; + UNUSED_PARAMETER(pVTab); + + for(i=0; i<pInfo->nConstraint; i++){ + if( pInfo->aConstraint[i].usable + && pInfo->aConstraint[i].iColumn==0 + && pInfo->aConstraint[i].op==SQLITE_INDEX_CONSTRAINT_EQ + ){ + pInfo->idxNum = 1; + pInfo->aConstraintUsage[i].argvIndex = 1; + pInfo->aConstraintUsage[i].omit = 1; + pInfo->estimatedCost = 1; + return SQLITE_OK; + } + } + + pInfo->idxNum = 0; + assert( pInfo->estimatedCost>1000000.0 ); + + return SQLITE_OK; +} + +/* +** xOpen - Open a cursor. +*/ +static int fts3tokOpenMethod(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCsr){ + Fts3tokCursor *pCsr; + UNUSED_PARAMETER(pVTab); + + pCsr = (Fts3tokCursor *)sqlite3_malloc(sizeof(Fts3tokCursor)); + if( pCsr==0 ){ + return SQLITE_NOMEM; + } + memset(pCsr, 0, sizeof(Fts3tokCursor)); + + *ppCsr = (sqlite3_vtab_cursor *)pCsr; + return SQLITE_OK; +} + +/* +** Reset the tokenizer cursor passed as the only argument. As if it had +** just been returned by fts3tokOpenMethod(). +*/ +static void fts3tokResetCursor(Fts3tokCursor *pCsr){ + if( pCsr->pCsr ){ + Fts3tokTable *pTab = (Fts3tokTable *)(pCsr->base.pVtab); + pTab->pMod->xClose(pCsr->pCsr); + pCsr->pCsr = 0; + } + sqlite3_free(pCsr->zInput); + pCsr->zInput = 0; + pCsr->zToken = 0; + pCsr->nToken = 0; + pCsr->iStart = 0; + pCsr->iEnd = 0; + pCsr->iPos = 0; + pCsr->iRowid = 0; +} + +/* +** xClose - Close a cursor. 
+*/ +static int fts3tokCloseMethod(sqlite3_vtab_cursor *pCursor){ + Fts3tokCursor *pCsr = (Fts3tokCursor *)pCursor; + + fts3tokResetCursor(pCsr); + sqlite3_free(pCsr); + return SQLITE_OK; +} + +/* +** xNext - Advance the cursor to the next row, if any. +*/ +static int fts3tokNextMethod(sqlite3_vtab_cursor *pCursor){ + Fts3tokCursor *pCsr = (Fts3tokCursor *)pCursor; + Fts3tokTable *pTab = (Fts3tokTable *)(pCursor->pVtab); + int rc; /* Return code */ + + pCsr->iRowid++; + rc = pTab->pMod->xNext(pCsr->pCsr, + &pCsr->zToken, &pCsr->nToken, + &pCsr->iStart, &pCsr->iEnd, &pCsr->iPos + ); + + if( rc!=SQLITE_OK ){ + fts3tokResetCursor(pCsr); + if( rc==SQLITE_DONE ) rc = SQLITE_OK; + } + + return rc; +} + +/* +** xFilter - Initialize a cursor to point at the start of its data. +*/ +static int fts3tokFilterMethod( + sqlite3_vtab_cursor *pCursor, /* The cursor used for this query */ + int idxNum, /* Strategy index */ + const char *idxStr, /* Unused */ + int nVal, /* Number of elements in apVal */ + sqlite3_value **apVal /* Arguments for the indexing scheme */ +){ + int rc = SQLITE_ERROR; + Fts3tokCursor *pCsr = (Fts3tokCursor *)pCursor; + Fts3tokTable *pTab = (Fts3tokTable *)(pCursor->pVtab); + UNUSED_PARAMETER(idxStr); + UNUSED_PARAMETER(nVal); + + fts3tokResetCursor(pCsr); + if( idxNum==1 ){ + const char *zByte = (const char *)sqlite3_value_text(apVal[0]); + int nByte = sqlite3_value_bytes(apVal[0]); + pCsr->zInput = sqlite3_malloc(nByte+1); + if( pCsr->zInput==0 ){ + rc = SQLITE_NOMEM; + }else{ + memcpy(pCsr->zInput, zByte, nByte); + pCsr->zInput[nByte] = 0; + rc = pTab->pMod->xOpen(pTab->pTok, pCsr->zInput, nByte, &pCsr->pCsr); + if( rc==SQLITE_OK ){ + pCsr->pCsr->pTokenizer = pTab->pTok; + } + } + } + + if( rc!=SQLITE_OK ) return rc; + return fts3tokNextMethod(pCursor); +} + +/* +** xEof - Return true if the cursor is at EOF, or false otherwise. +*/ +static int fts3tokEofMethod(sqlite3_vtab_cursor *pCursor){ + Fts3tokCursor *pCsr = (Fts3tokCursor *)pCursor; + return (pCsr->zToken==0); +} + +/* +** xColumn - Return a column value. +*/ +static int fts3tokColumnMethod( + sqlite3_vtab_cursor *pCursor, /* Cursor to retrieve value from */ + sqlite3_context *pCtx, /* Context for sqlite3_result_xxx() calls */ + int iCol /* Index of column to read value from */ +){ + Fts3tokCursor *pCsr = (Fts3tokCursor *)pCursor; + + /* CREATE TABLE x(input, token, start, end, position) */ + switch( iCol ){ + case 0: + sqlite3_result_text(pCtx, pCsr->zInput, -1, SQLITE_TRANSIENT); + break; + case 1: + sqlite3_result_text(pCtx, pCsr->zToken, pCsr->nToken, SQLITE_TRANSIENT); + break; + case 2: + sqlite3_result_int(pCtx, pCsr->iStart); + break; + case 3: + sqlite3_result_int(pCtx, pCsr->iEnd); + break; + default: + assert( iCol==4 ); + sqlite3_result_int(pCtx, pCsr->iPos); + break; + } + return SQLITE_OK; +} + +/* +** xRowid - Return the current rowid for the cursor. +*/ +static int fts3tokRowidMethod( + sqlite3_vtab_cursor *pCursor, /* Cursor to retrieve value from */ + sqlite_int64 *pRowid /* OUT: Rowid value */ +){ + Fts3tokCursor *pCsr = (Fts3tokCursor *)pCursor; + *pRowid = (sqlite3_int64)pCsr->iRowid; + return SQLITE_OK; +} + +/* +** Register the fts3tok module with database connection db. Return SQLITE_OK +** if successful or an error code if sqlite3_create_module() fails. 
+*/ +SQLITE_PRIVATE int sqlite3Fts3InitTok(sqlite3 *db, Fts3Hash *pHash){ + static const sqlite3_module fts3tok_module = { + 0, /* iVersion */ + fts3tokConnectMethod, /* xCreate */ + fts3tokConnectMethod, /* xConnect */ + fts3tokBestIndexMethod, /* xBestIndex */ + fts3tokDisconnectMethod, /* xDisconnect */ + fts3tokDisconnectMethod, /* xDestroy */ + fts3tokOpenMethod, /* xOpen */ + fts3tokCloseMethod, /* xClose */ + fts3tokFilterMethod, /* xFilter */ + fts3tokNextMethod, /* xNext */ + fts3tokEofMethod, /* xEof */ + fts3tokColumnMethod, /* xColumn */ + fts3tokRowidMethod, /* xRowid */ + 0, /* xUpdate */ + 0, /* xBegin */ + 0, /* xSync */ + 0, /* xCommit */ + 0, /* xRollback */ + 0, /* xFindFunction */ + 0, /* xRename */ + 0, /* xSavepoint */ + 0, /* xRelease */ + 0 /* xRollbackTo */ + }; + int rc; /* Return code */ + + rc = sqlite3_create_module(db, "fts3tokenize", &fts3tok_module, (void*)pHash); + return rc; +} + +#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) */ + +/************** End of fts3_tokenize_vtab.c **********************************/ /************** Begin file fts3_write.c **************************************/ /* ** 2009 Oct 23 ** ** The author disclaims copyright to this source code. In place of @@ -105799,21 +148046,89 @@ ** tables. It also contains code to merge FTS3 b-tree segments. Some ** of the sub-routines used to merge segments are also used by the query ** code in fts3.c. */ +/* #include "fts3Int.h" */ #if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) + +/* #include <string.h> */ +/* #include <assert.h> */ +/* #include <stdlib.h> */ + + +#define FTS_MAX_APPENDABLE_HEIGHT 16 + +/* +** When full-text index nodes are loaded from disk, the buffer that they +** are loaded into has the following number of bytes of padding at the end +** of it. i.e. if a full-text index node is 900 bytes in size, then a buffer +** of 920 bytes is allocated for it. +** +** This means that if we have a pointer into a buffer containing node data, +** it is always safe to read up to two varints from it without risking an +** overread, even if the node data is corrupted. +*/ +#define FTS3_NODE_PADDING (FTS3_VARINT_MAX*2) + +/* +** Under certain circumstances, b-tree nodes (doclists) can be loaded into +** memory incrementally instead of all at once. This can be a big performance +** win (reduced IO and CPU) if SQLite stops calling the virtual table xNext() +** method before retrieving all query results (as may happen, for example, +** if a query has a LIMIT clause). +** +** Incremental loading is used for b-tree nodes FTS3_NODE_CHUNK_THRESHOLD +** bytes and larger. Nodes are loaded in chunks of FTS3_NODE_CHUNKSIZE bytes. +** The code is written so that the hard lower-limit for each of these values +** is 1. Clearly such small values would be inefficient, but can be useful +** for testing purposes. +** +** If this module is built with SQLITE_TEST defined, these constants may +** be overridden at runtime for testing purposes. File fts3_test.c contains +** a Tcl interface to read and write the values. +*/ +#ifdef SQLITE_TEST +int test_fts3_node_chunksize = (4*1024); +int test_fts3_node_chunk_threshold = (4*1024)*4; +# define FTS3_NODE_CHUNKSIZE test_fts3_node_chunksize +# define FTS3_NODE_CHUNK_THRESHOLD test_fts3_node_chunk_threshold +#else +# define FTS3_NODE_CHUNKSIZE (4*1024) +# define FTS3_NODE_CHUNK_THRESHOLD (FTS3_NODE_CHUNKSIZE*4) +#endif + +/* +** The two values that may be meaningfully bound to the :1 parameter in +** statements SQL_REPLACE_STAT and SQL_SELECT_STAT. 
+*/ +#define FTS_STAT_DOCTOTAL 0 +#define FTS_STAT_INCRMERGEHINT 1 +#define FTS_STAT_AUTOINCRMERGE 2 + +/* +** If FTS_LOG_MERGES is defined, call sqlite3_log() to report each automatic +** and incremental merge operation that takes place. This is used for +** debugging FTS only, it should not usually be turned on in production +** systems. +*/ +#ifdef FTS3_LOG_MERGES +static void fts3LogMerge(int nMerge, sqlite3_int64 iAbsLevel){ + sqlite3_log(SQLITE_OK, "%d-way merge from level %d", nMerge, (int)iAbsLevel); +} +#else +#define fts3LogMerge(x, y) +#endif typedef struct PendingList PendingList; typedef struct SegmentNode SegmentNode; typedef struct SegmentWriter SegmentWriter; /* -** Data structure used while accumulating terms in the pending-terms hash -** table. The hash table entry maps from term (a string) to a malloc'd -** instance of this structure. +** An instance of the following data structure is used to build doclists +** incrementally. See function fts3PendingListAppend() for details. */ struct PendingList { int nData; char *aData; int nSpace; @@ -105820,10 +148135,21 @@ sqlite3_int64 iLastDocid; sqlite3_int64 iLastCol; sqlite3_int64 iLastPos; }; + +/* +** Each cursor has a (possibly empty) linked list of the following objects. +*/ +struct Fts3DeferredToken { + Fts3PhraseToken *pToken; /* Pointer to corresponding expr token */ + int iCol; /* Column token must occur in */ + Fts3DeferredToken *pNext; /* Next in list of deferred tokens */ + PendingList *pList; /* Doclist is assembled here */ +}; + /* ** An instance of this structure is used to iterate through the terms on ** a contiguous set of segment b-tree leaf nodes. Although the details of ** this structure are only manipulated by code in this file, opaque handles ** of type Fts3SegReader* are also used by code in fts3.c to iterate through @@ -105839,34 +148165,46 @@ ** fts3SegReaderFirstDocid() ** fts3SegReaderNextDocid() */ struct Fts3SegReader { int iIdx; /* Index within level, or 0x7FFFFFFF for PT */ - sqlite3_int64 iStartBlock; - sqlite3_int64 iEndBlock; - sqlite3_stmt *pStmt; /* SQL Statement to access leaf nodes */ + u8 bLookup; /* True for a lookup only */ + u8 rootOnly; /* True for a root-only reader */ + + sqlite3_int64 iStartBlock; /* Rowid of first leaf block to traverse */ + sqlite3_int64 iLeafEndBlock; /* Rowid of final leaf block to traverse */ + sqlite3_int64 iEndBlock; /* Rowid of final block in segment (or 0) */ + sqlite3_int64 iCurrentBlock; /* Current leaf block (or 0) */ + char *aNode; /* Pointer to node data (or NULL) */ int nNode; /* Size of buffer at aNode (or 0) */ - int nTermAlloc; /* Allocated size of zTerm buffer */ + int nPopulate; /* If >0, bytes of buffer aNode[] loaded */ + sqlite3_blob *pBlob; /* If not NULL, blob handle to read node */ + Fts3HashElem **ppNextElem; /* Variables set by fts3SegReaderNext(). These may be read directly ** by the caller. They are valid from the time SegmentReaderNew() returns ** until SegmentReaderNext() returns something other than SQLITE_OK ** (i.e. SQLITE_DONE). */ int nTerm; /* Number of bytes in current term */ char *zTerm; /* Pointer to current term */ + int nTermAlloc; /* Allocated size of zTerm buffer */ char *aDoclist; /* Pointer to doclist of current entry */ int nDoclist; /* Size of doclist in current entry */ - /* The following variables are used to iterate through the current doclist */ + /* The following variables are used by fts3SegReaderNextDocid() to iterate + ** through the current doclist (aDoclist/nDoclist). 
+ */ char *pOffsetList; + int nOffsetList; /* For descending pending seg-readers only */ sqlite3_int64 iDocid; }; #define fts3SegReaderIsPending(p) ((p)->ppNextElem!=0) +#define fts3SegReaderIsRootOnly(p) ((p)->rootOnly!=0) /* ** An instance of this structure is used to create a segment b-tree in the ** database. The internal details of this type are only accessed by the ** following functions: @@ -105884,10 +148222,11 @@ int nMalloc; /* Size of malloc'd buffer at zMalloc */ char *zMalloc; /* Malloc'd space (possibly) used for zTerm */ int nSize; /* Size of allocation at aData */ int nData; /* Bytes of data in aData */ char *aData; /* Pointer to block from malloc() */ + i64 nLeafData; /* Number of bytes of leaf data written */ }; /* ** Type SegmentNode is used by the following three functions to create ** the interior part of the segment b+-tree structures (everything except @@ -105895,10 +148234,18 @@ ** within the fts3SegWriterXXX() family of functions described above. ** ** fts3NodeAddTerm() ** fts3NodeWrite() ** fts3NodeFree() +** +** When a b+tree is written to the database (either as a result of a merge +** or the pending-terms table being flushed), leaves are written into the +** database file as soon as they are completely populated. The interior of +** the tree is assembled in memory and written out only once all leaves have +** been populated and stored. This is Ok, as the b+-tree fanout is usually +** very large, meaning that the interior of the tree consumes relatively +** little memory. */ struct SegmentNode { SegmentNode *pParent; /* Parent node (or NULL for root node) */ SegmentNode *pRight; /* Pointer to right-sibling */ SegmentNode *pLeftmost; /* Pointer to left-most node of this depth */ @@ -105925,22 +148272,39 @@ #define SQL_NEXT_SEGMENT_INDEX 8 #define SQL_INSERT_SEGMENTS 9 #define SQL_NEXT_SEGMENTS_ID 10 #define SQL_INSERT_SEGDIR 11 #define SQL_SELECT_LEVEL 12 -#define SQL_SELECT_ALL_LEVEL 13 +#define SQL_SELECT_LEVEL_RANGE 13 #define SQL_SELECT_LEVEL_COUNT 14 -#define SQL_SELECT_SEGDIR_COUNT_MAX 15 -#define SQL_DELETE_SEGDIR_BY_LEVEL 16 +#define SQL_SELECT_SEGDIR_MAX_LEVEL 15 +#define SQL_DELETE_SEGDIR_LEVEL 16 #define SQL_DELETE_SEGMENTS_RANGE 17 #define SQL_CONTENT_INSERT 18 -#define SQL_GET_BLOCK 19 -#define SQL_DELETE_DOCSIZE 20 -#define SQL_REPLACE_DOCSIZE 21 -#define SQL_SELECT_DOCSIZE 22 -#define SQL_SELECT_DOCTOTAL 23 -#define SQL_REPLACE_DOCTOTAL 24 +#define SQL_DELETE_DOCSIZE 19 +#define SQL_REPLACE_DOCSIZE 20 +#define SQL_SELECT_DOCSIZE 21 +#define SQL_SELECT_STAT 22 +#define SQL_REPLACE_STAT 23 + +#define SQL_SELECT_ALL_PREFIX_LEVEL 24 +#define SQL_DELETE_ALL_TERMS_SEGDIR 25 +#define SQL_DELETE_SEGDIR_RANGE 26 +#define SQL_SELECT_ALL_LANGID 27 +#define SQL_FIND_MERGE_LEVEL 28 +#define SQL_MAX_LEAF_NODE_ESTIMATE 29 +#define SQL_DELETE_SEGDIR_ENTRY 30 +#define SQL_SHIFT_SEGDIR_ENTRY 31 +#define SQL_SELECT_SEGDIR 32 +#define SQL_CHOMP_SEGDIR 33 +#define SQL_SEGMENT_IS_APPENDABLE 34 +#define SQL_SELECT_INDEXES 35 +#define SQL_SELECT_MXLEVEL 36 + +#define SQL_SELECT_LEVEL_RANGE2 37 +#define SQL_UPDATE_LEVEL_IDX 38 +#define SQL_UPDATE_LEVEL 39 /* ** This function is used to obtain an SQLite prepared statement handle ** for the statement identified by the second argument. If successful, ** *pp is set to the requested statement handle and SQLITE_OK returned. 
@@ -105963,34 +148327,98 @@ /* 2 */ "DELETE FROM %Q.'%q_content'", /* 3 */ "DELETE FROM %Q.'%q_segments'", /* 4 */ "DELETE FROM %Q.'%q_segdir'", /* 5 */ "DELETE FROM %Q.'%q_docsize'", /* 6 */ "DELETE FROM %Q.'%q_stat'", -/* 7 */ "SELECT * FROM %Q.'%q_content' WHERE rowid=?", +/* 7 */ "SELECT %s WHERE rowid=?", /* 8 */ "SELECT (SELECT max(idx) FROM %Q.'%q_segdir' WHERE level = ?) + 1", -/* 9 */ "INSERT INTO %Q.'%q_segments'(blockid, block) VALUES(?, ?)", +/* 9 */ "REPLACE INTO %Q.'%q_segments'(blockid, block) VALUES(?, ?)", /* 10 */ "SELECT coalesce((SELECT max(blockid) FROM %Q.'%q_segments') + 1, 1)", -/* 11 */ "INSERT INTO %Q.'%q_segdir' VALUES(?,?,?,?,?,?)", +/* 11 */ "REPLACE INTO %Q.'%q_segdir' VALUES(?,?,?,?,?,?)", /* Return segments in order from oldest to newest.*/ /* 12 */ "SELECT idx, start_block, leaves_end_block, end_block, root " "FROM %Q.'%q_segdir' WHERE level = ? ORDER BY idx ASC", /* 13 */ "SELECT idx, start_block, leaves_end_block, end_block, root " - "FROM %Q.'%q_segdir' ORDER BY level DESC, idx ASC", + "FROM %Q.'%q_segdir' WHERE level BETWEEN ? AND ?" + "ORDER BY level DESC, idx ASC", /* 14 */ "SELECT count(*) FROM %Q.'%q_segdir' WHERE level = ?", -/* 15 */ "SELECT count(*), max(level) FROM %Q.'%q_segdir'", +/* 15 */ "SELECT max(level) FROM %Q.'%q_segdir' WHERE level BETWEEN ? AND ?", /* 16 */ "DELETE FROM %Q.'%q_segdir' WHERE level = ?", /* 17 */ "DELETE FROM %Q.'%q_segments' WHERE blockid BETWEEN ? AND ?", -/* 18 */ "INSERT INTO %Q.'%q_content' VALUES(%z)", -/* 19 */ "SELECT block FROM %Q.'%q_segments' WHERE blockid = ?", -/* 20 */ "DELETE FROM %Q.'%q_docsize' WHERE docid = ?", -/* 21 */ "REPLACE INTO %Q.'%q_docsize' VALUES(?,?)", -/* 22 */ "SELECT size FROM %Q.'%q_docsize' WHERE docid=?", -/* 23 */ "SELECT value FROM %Q.'%q_stat' WHERE id=0", -/* 24 */ "REPLACE INTO %Q.'%q_stat' VALUES(0,?)", +/* 18 */ "INSERT INTO %Q.'%q_content' VALUES(%s)", +/* 19 */ "DELETE FROM %Q.'%q_docsize' WHERE docid = ?", +/* 20 */ "REPLACE INTO %Q.'%q_docsize' VALUES(?,?)", +/* 21 */ "SELECT size FROM %Q.'%q_docsize' WHERE docid=?", +/* 22 */ "SELECT value FROM %Q.'%q_stat' WHERE id=?", +/* 23 */ "REPLACE INTO %Q.'%q_stat' VALUES(?,?)", +/* 24 */ "", +/* 25 */ "", + +/* 26 */ "DELETE FROM %Q.'%q_segdir' WHERE level BETWEEN ? AND ?", +/* 27 */ "SELECT ? UNION SELECT level / (1024 * ?) FROM %Q.'%q_segdir'", + +/* This statement is used to determine which level to read the input from +** when performing an incremental merge. It returns the absolute level number +** of the oldest level in the db that contains at least ? segments. Or, +** if no level in the FTS index contains more than ? segments, the statement +** returns zero rows. */ +/* 28 */ "SELECT level FROM %Q.'%q_segdir' GROUP BY level HAVING count(*)>=?" + " ORDER BY (level %% 1024) ASC LIMIT 1", + +/* Estimate the upper limit on the number of leaf nodes in a new segment +** created by merging the oldest :2 segments from absolute level :1. See +** function sqlite3Fts3Incrmerge() for details. */ +/* 29 */ "SELECT 2 * total(1 + leaves_end_block - start_block) " + " FROM %Q.'%q_segdir' WHERE level = ? AND idx < ?", + +/* SQL_DELETE_SEGDIR_ENTRY +** Delete the %_segdir entry on absolute level :1 with index :2. */ +/* 30 */ "DELETE FROM %Q.'%q_segdir' WHERE level = ? AND idx = ?", + +/* SQL_SHIFT_SEGDIR_ENTRY +** Modify the idx value for the segment with idx=:3 on absolute level :2 +** to :1. */ +/* 31 */ "UPDATE %Q.'%q_segdir' SET idx = ? WHERE level=? AND idx=?", + +/* SQL_SELECT_SEGDIR +** Read a single entry from the %_segdir table. 
The entry from absolute +** level :1 with index value :2. */ +/* 32 */ "SELECT idx, start_block, leaves_end_block, end_block, root " + "FROM %Q.'%q_segdir' WHERE level = ? AND idx = ?", + +/* SQL_CHOMP_SEGDIR +** Update the start_block (:1) and root (:2) fields of the %_segdir +** entry located on absolute level :3 with index :4. */ +/* 33 */ "UPDATE %Q.'%q_segdir' SET start_block = ?, root = ?" + "WHERE level = ? AND idx = ?", + +/* SQL_SEGMENT_IS_APPENDABLE +** Return a single row if the segment with end_block=? is appendable. Or +** no rows otherwise. */ +/* 34 */ "SELECT 1 FROM %Q.'%q_segments' WHERE blockid=? AND block IS NULL", + +/* SQL_SELECT_INDEXES +** Return the list of valid segment indexes for absolute level ? */ +/* 35 */ "SELECT idx FROM %Q.'%q_segdir' WHERE level=? ORDER BY 1 ASC", + +/* SQL_SELECT_MXLEVEL +** Return the largest relative level in the FTS index or indexes. */ +/* 36 */ "SELECT max( level %% 1024 ) FROM %Q.'%q_segdir'", + + /* Return segments in order from oldest to newest.*/ +/* 37 */ "SELECT level, idx, end_block " + "FROM %Q.'%q_segdir' WHERE level BETWEEN ? AND ? " + "ORDER BY level DESC, idx ASC", + + /* Update statements used while promoting segments */ +/* 38 */ "UPDATE OR FAIL %Q.'%q_segdir' SET level=-1,idx=? " + "WHERE level=? AND idx=?", +/* 39 */ "UPDATE OR FAIL %Q.'%q_segdir' SET level=? WHERE level=-1" + }; int rc = SQLITE_OK; sqlite3_stmt *pStmt; assert( SizeofArray(azSql)==SizeofArray(p->aStmt) ); @@ -105998,24 +148426,13 @@ pStmt = p->aStmt[eStmt]; if( !pStmt ){ char *zSql; if( eStmt==SQL_CONTENT_INSERT ){ - int i; /* Iterator variable */ - char *zVarlist; /* The "?, ?, ..." string */ - zVarlist = (char *)sqlite3_malloc(2*p->nColumn+2); - if( !zVarlist ){ - *pp = 0; - return SQLITE_NOMEM; - } - zVarlist[0] = '?'; - zVarlist[p->nColumn*2+1] = '\0'; - for(i=1; i<=p->nColumn; i++){ - zVarlist[i*2-1] = ','; - zVarlist[i*2] = '?'; - } - zSql = sqlite3_mprintf(azSql[eStmt], p->zDb, p->zName, zVarlist); + zSql = sqlite3_mprintf(azSql[eStmt], p->zDb, p->zName, p->zWriteExprlist); + }else if( eStmt==SQL_SELECT_CONTENT_BY_ROWID ){ + zSql = sqlite3_mprintf(azSql[eStmt], p->zReadExprlist); }else{ zSql = sqlite3_mprintf(azSql[eStmt], p->zDb, p->zName); } if( !zSql ){ rc = SQLITE_NOMEM; @@ -106035,10 +148452,65 @@ } *pp = pStmt; return rc; } + +static int fts3SelectDocsize( + Fts3Table *pTab, /* FTS3 table handle */ + sqlite3_int64 iDocid, /* Docid to bind for SQL_SELECT_DOCSIZE */ + sqlite3_stmt **ppStmt /* OUT: Statement handle */ +){ + sqlite3_stmt *pStmt = 0; /* Statement requested from fts3SqlStmt() */ + int rc; /* Return code */ + + rc = fts3SqlStmt(pTab, SQL_SELECT_DOCSIZE, &pStmt, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pStmt, 1, iDocid); + rc = sqlite3_step(pStmt); + if( rc!=SQLITE_ROW || sqlite3_column_type(pStmt, 0)!=SQLITE_BLOB ){ + rc = sqlite3_reset(pStmt); + if( rc==SQLITE_OK ) rc = FTS_CORRUPT_VTAB; + pStmt = 0; + }else{ + rc = SQLITE_OK; + } + } + + *ppStmt = pStmt; + return rc; +} + +SQLITE_PRIVATE int sqlite3Fts3SelectDoctotal( + Fts3Table *pTab, /* Fts3 table handle */ + sqlite3_stmt **ppStmt /* OUT: Statement handle */ +){ + sqlite3_stmt *pStmt = 0; + int rc; + rc = fts3SqlStmt(pTab, SQL_SELECT_STAT, &pStmt, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int(pStmt, 1, FTS_STAT_DOCTOTAL); + if( sqlite3_step(pStmt)!=SQLITE_ROW + || sqlite3_column_type(pStmt, 0)!=SQLITE_BLOB + ){ + rc = sqlite3_reset(pStmt); + if( rc==SQLITE_OK ) rc = FTS_CORRUPT_VTAB; + pStmt = 0; + } + } + *ppStmt = pStmt; + return rc; +} + +SQLITE_PRIVATE int 
sqlite3Fts3SelectDocsize( + Fts3Table *pTab, /* Fts3 table handle */ + sqlite3_int64 iDocid, /* Docid to read size data for */ + sqlite3_stmt **ppStmt /* OUT: Statement handle */ +){ + return fts3SelectDocsize(pTab, iDocid, ppStmt); +} + /* ** Similar to fts3SqlStmt(). Except, after binding the parameters in ** array apVal[] to the SQL statement identified by eStmt, the statement ** is executed. ** @@ -106062,46 +148534,75 @@ *pRC = rc; } /* -** Read a single block from the %_segments table. If the specified block -** does not exist, return SQLITE_CORRUPT. If some other error (malloc, IO -** etc.) occurs, return the appropriate SQLite error code. -** -** Otherwise, if successful, set *pzBlock to point to a buffer containing -** the block read from the database, and *pnBlock to the size of the read -** block in bytes. -** -** WARNING: The returned buffer is only valid until the next call to -** sqlite3Fts3ReadBlock(). -*/ -SQLITE_PRIVATE int sqlite3Fts3ReadBlock( - Fts3Table *p, - sqlite3_int64 iBlock, - char const **pzBlock, - int *pnBlock -){ - sqlite3_stmt *pStmt; - int rc = fts3SqlStmt(p, SQL_GET_BLOCK, &pStmt, 0); - if( rc!=SQLITE_OK ) return rc; - sqlite3_reset(pStmt); - - if( pzBlock ){ - sqlite3_bind_int64(pStmt, 1, iBlock); - rc = sqlite3_step(pStmt); - if( rc!=SQLITE_ROW ){ - return (rc==SQLITE_DONE ? SQLITE_CORRUPT : rc); - } - - *pnBlock = sqlite3_column_bytes(pStmt, 0); - *pzBlock = (char *)sqlite3_column_blob(pStmt, 0); - if( sqlite3_column_type(pStmt, 0)!=SQLITE_BLOB ){ - return SQLITE_CORRUPT; - } - } - return SQLITE_OK; +** This function ensures that the caller has obtained an exclusive +** shared-cache table-lock on the %_segdir table. This is required before +** writing data to the fts3 table. If this lock is not acquired first, then +** the caller may end up attempting to take this lock as part of committing +** a transaction, causing SQLite to return SQLITE_LOCKED or +** LOCKED_SHAREDCACHEto a COMMIT command. +** +** It is best to avoid this because if FTS3 returns any error when +** committing a transaction, the whole transaction will be rolled back. +** And this is not what users expect when they get SQLITE_LOCKED_SHAREDCACHE. +** It can still happen if the user locks the underlying tables directly +** instead of accessing them via FTS. +*/ +static int fts3Writelock(Fts3Table *p){ + int rc = SQLITE_OK; + + if( p->nPendingData==0 ){ + sqlite3_stmt *pStmt; + rc = fts3SqlStmt(p, SQL_DELETE_SEGDIR_LEVEL, &pStmt, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_null(pStmt, 1); + sqlite3_step(pStmt); + rc = sqlite3_reset(pStmt); + } + } + + return rc; +} + +/* +** FTS maintains a separate indexes for each language-id (a 32-bit integer). +** Within each language id, a separate index is maintained to store the +** document terms, and each configured prefix size (configured the FTS +** "prefix=" option). And each index consists of multiple levels ("relative +** levels"). +** +** All three of these values (the language id, the specific index and the +** level within the index) are encoded in 64-bit integer values stored +** in the %_segdir table on disk. This function is used to convert three +** separate component values into the single 64-bit integer value that +** can be used to query the %_segdir table. +** +** Specifically, each language-id/index combination is allocated 1024 +** 64-bit integer level values ("absolute levels"). The main terms index +** for language-id 0 is allocate values 0-1023. The first prefix index +** (if any) for language-id 0 is allocated values 1024-2047. 
And so on. +** Language 1 indexes are allocated immediately following language 0. +** +** So, for a system with nPrefix prefix indexes configured, the block of +** absolute levels that corresponds to language-id iLangid and index +** iIndex starts at absolute level ((iLangid * (nPrefix+1) + iIndex) * 1024). +*/ +static sqlite3_int64 getAbsoluteLevel( + Fts3Table *p, /* FTS3 table handle */ + int iLangid, /* Language id */ + int iIndex, /* Index in p->aIndex[] */ + int iLevel /* Level of segments */ +){ + sqlite3_int64 iBase; /* First absolute level for iLangid/iIndex */ + assert( iLangid>=0 ); + assert( p->nIndex>0 ); + assert( iIndex>=0 && iIndex<p->nIndex ); + + iBase = ((sqlite3_int64)iLangid * p->nIndex + iIndex) * FTS3_SEGDIR_MAXLEVEL; + return iBase + iLevel; } /* ** Set *ppStmt to a statement handle that may be used to iterate through ** all rows in the %_segdir table, from oldest to newest. If successful, @@ -106117,12 +148618,42 @@ ** 1: start_block ** 2: leaves_end_block ** 3: end_block ** 4: root */ -SQLITE_PRIVATE int sqlite3Fts3AllSegdirs(Fts3Table *p, sqlite3_stmt **ppStmt){ - return fts3SqlStmt(p, SQL_SELECT_ALL_LEVEL, ppStmt, 0); +SQLITE_PRIVATE int sqlite3Fts3AllSegdirs( + Fts3Table *p, /* FTS3 table */ + int iLangid, /* Language being queried */ + int iIndex, /* Index for p->aIndex[] */ + int iLevel, /* Level to select (relative level) */ + sqlite3_stmt **ppStmt /* OUT: Compiled statement */ +){ + int rc; + sqlite3_stmt *pStmt = 0; + + assert( iLevel==FTS3_SEGCURSOR_ALL || iLevel>=0 ); + assert( iLevel<FTS3_SEGDIR_MAXLEVEL ); + assert( iIndex>=0 && iIndex<p->nIndex ); + + if( iLevel<0 ){ + /* "SELECT * FROM %_segdir WHERE level BETWEEN ? AND ? ORDER BY ..." */ + rc = fts3SqlStmt(p, SQL_SELECT_LEVEL_RANGE, &pStmt, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pStmt, 1, getAbsoluteLevel(p, iLangid, iIndex, 0)); + sqlite3_bind_int64(pStmt, 2, + getAbsoluteLevel(p, iLangid, iIndex, FTS3_SEGDIR_MAXLEVEL-1) + ); + } + }else{ + /* "SELECT * FROM %_segdir WHERE level = ? ORDER BY ..." */ + rc = fts3SqlStmt(p, SQL_SELECT_LEVEL, &pStmt, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pStmt, 1, getAbsoluteLevel(p, iLangid, iIndex,iLevel)); + } + } + *ppStmt = pStmt; + return rc; } /* ** Append a single varint to a PendingList buffer. SQLITE_OK is returned @@ -106229,53 +148760,101 @@ *pp = p; return 1; } return 0; } + +/* +** Free a PendingList object allocated by fts3PendingListAppend(). +*/ +static void fts3PendingListDelete(PendingList *pList){ + sqlite3_free(pList); +} + +/* +** Add an entry to one of the pending-terms hash tables. +*/ +static int fts3PendingTermsAddOne( + Fts3Table *p, + int iCol, + int iPos, + Fts3Hash *pHash, /* Pending terms hash table to add entry to */ + const char *zToken, + int nToken +){ + PendingList *pList; + int rc = SQLITE_OK; + + pList = (PendingList *)fts3HashFind(pHash, zToken, nToken); + if( pList ){ + p->nPendingData -= (pList->nData + nToken + sizeof(Fts3HashElem)); + } + if( fts3PendingListAppend(&pList, p->iPrevDocid, iCol, iPos, &rc) ){ + if( pList==fts3HashInsert(pHash, zToken, nToken, pList) ){ + /* Malloc failed while inserting the new entry. This can only + ** happen if there was no previous entry for this token. + */ + assert( 0==fts3HashFind(pHash, zToken, nToken) ); + sqlite3_free(pList); + rc = SQLITE_NOMEM; + } + } + if( rc==SQLITE_OK ){ + p->nPendingData += (pList->nData + nToken + sizeof(Fts3HashElem)); + } + return rc; +} /* ** Tokenize the nul-terminated string zText and add all tokens to the ** pending-terms hash-table. 
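
As a rough, standalone illustration of the absolute-level arithmetic implemented by getAbsoluteLevel() above (one block of 1024 levels per language-id/index pair, per FTS3_SEGDIR_MAXLEVEL), the sketch below packs the three components into a single value and recovers them again. The function and variable names here are invented for the example and are not part of FTS3.

    #include <stdio.h>

    #define MAXLEVEL 1024                 /* mirrors FTS3_SEGDIR_MAXLEVEL */

    typedef long long i64;

    /* Pack (language id, index number, relative level) into one value. */
    static i64 absLevel(int iLangid, int nIndex, int iIndex, int iLevel){
      return ((i64)iLangid * nIndex + iIndex) * MAXLEVEL + iLevel;
    }

    int main(void){
      int nIndex = 3;                     /* e.g. main index + two prefix indexes */
      i64 a = absLevel(1, nIndex, 2, 5);  /* language 1, index 2, level 5 */

      /* Recover the three components from the packed value. */
      printf("abs=%lld langid=%lld index=%lld level=%lld\n",
             a, a/(MAXLEVEL*nIndex), (a/MAXLEVEL)%nIndex, a%MAXLEVEL);
      return 0;
    }
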
The docid used is that currently stored in ** p->iPrevDocid, and the column is specified by argument iCol. ** ** If successful, SQLITE_OK is returned. Otherwise, an SQLite error code. */ static int fts3PendingTermsAdd( - Fts3Table *p, /* FTS table into which text will be inserted */ - const char *zText, /* Text of document to be inseted */ - int iCol, /* Column number into which text is inserted */ - u32 *pnWord /* OUT: Number of tokens inserted */ + Fts3Table *p, /* Table into which text will be inserted */ + int iLangid, /* Language id to use */ + const char *zText, /* Text of document to be inserted */ + int iCol, /* Column into which text is being inserted */ + u32 *pnWord /* IN/OUT: Incr. by number tokens inserted */ ){ int rc; - int iStart; - int iEnd; - int iPos; + int iStart = 0; + int iEnd = 0; + int iPos = 0; int nWord = 0; char const *zToken; - int nToken; + int nToken = 0; sqlite3_tokenizer *pTokenizer = p->pTokenizer; sqlite3_tokenizer_module const *pModule = pTokenizer->pModule; sqlite3_tokenizer_cursor *pCsr; int (*xNext)(sqlite3_tokenizer_cursor *pCursor, const char**,int*,int*,int*,int*); assert( pTokenizer && pModule ); - rc = pModule->xOpen(pTokenizer, zText, -1, &pCsr); + /* If the user has inserted a NULL value, this function may be called with + ** zText==0. In this case, add zero token entries to the hash table and + ** return early. */ + if( zText==0 ){ + *pnWord = 0; + return SQLITE_OK; + } + + rc = sqlite3Fts3OpenTokenizer(pTokenizer, iLangid, zText, -1, &pCsr); if( rc!=SQLITE_OK ){ return rc; } - pCsr->pTokenizer = pTokenizer; xNext = pModule->xNext; while( SQLITE_OK==rc && SQLITE_OK==(rc = xNext(pCsr, &zToken, &nToken, &iStart, &iEnd, &iPos)) ){ - PendingList *pList; - + int i; if( iPos>=nWord ) nWord = iPos+1; /* Positions cannot be negative; we use -1 as a terminator internally. ** Tokens must have a non-zero length. */ @@ -106282,60 +148861,79 @@ if( iPos<0 || !zToken || nToken<=0 ){ rc = SQLITE_ERROR; break; } - pList = (PendingList *)fts3HashFind(&p->pendingTerms, zToken, nToken); - if( pList ){ - p->nPendingData -= (pList->nData + nToken + sizeof(Fts3HashElem)); - } - if( fts3PendingListAppend(&pList, p->iPrevDocid, iCol, iPos, &rc) ){ - if( pList==fts3HashInsert(&p->pendingTerms, zToken, nToken, pList) ){ - /* Malloc failed while inserting the new entry. This can only - ** happen if there was no previous entry for this token. - */ - assert( 0==fts3HashFind(&p->pendingTerms, zToken, nToken) ); - sqlite3_free(pList); - rc = SQLITE_NOMEM; - } - } - if( rc==SQLITE_OK ){ - p->nPendingData += (pList->nData + nToken + sizeof(Fts3HashElem)); + /* Add the term to the terms index */ + rc = fts3PendingTermsAddOne( + p, iCol, iPos, &p->aIndex[0].hPending, zToken, nToken + ); + + /* Add the term to each of the prefix indexes that it is not too + ** short for. */ + for(i=1; rc==SQLITE_OK && i<p->nIndex; i++){ + struct Fts3Index *pIndex = &p->aIndex[i]; + if( nToken<pIndex->nPrefix ) continue; + rc = fts3PendingTermsAddOne( + p, iCol, iPos, &pIndex->hPending, zToken, pIndex->nPrefix + ); } } pModule->xClose(pCsr); - *pnWord = nWord; + *pnWord += nWord; return (rc==SQLITE_DONE ? SQLITE_OK : rc); } /* ** Calling this function indicates that subsequent calls to ** fts3PendingTermsAdd() are to add term/position-list pairs for the ** contents of the document with docid iDocid. 
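
To make the prefix-index handling in fts3PendingTermsAdd() above concrete, the following small sketch prints the hash-table keys a single token would produce: the full token for the terms index, plus one truncated key for each prefix index the token is long enough for. The prefix lengths and the token used below are hypothetical.

    #include <stdio.h>
    #include <string.h>

    int main(void){
      const char *zToken = "sqlite";      /* token being inserted */
      int aPrefix[] = {2, 4, 10};         /* hypothetical prefix= settings */
      int nToken = (int)strlen(zToken);
      int i;

      /* Index 0 always receives the complete token. */
      printf("index 0 (terms):    %s\n", zToken);

      /* Each prefix index receives the leading nPrefix bytes, but only if
      ** the token is at least that long. */
      for(i=0; i<3; i++){
        if( nToken<aPrefix[i] ) continue;
        printf("index %d (prefix=%d): %.*s\n", i+1, aPrefix[i], aPrefix[i], zToken);
      }
      return 0;
    }
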
*/ -static int fts3PendingTermsDocid(Fts3Table *p, sqlite_int64 iDocid){ +static int fts3PendingTermsDocid( + Fts3Table *p, /* Full-text table handle */ + int bDelete, /* True if this op is a delete */ + int iLangid, /* Language id of row being written */ + sqlite_int64 iDocid /* Docid of row being written */ +){ + assert( iLangid>=0 ); + assert( bDelete==1 || bDelete==0 ); + /* TODO(shess) Explore whether partially flushing the buffer on ** forced-flush would provide better performance. I suspect that if ** we ordered the doclists by size and flushed the largest until the ** buffer was half empty, that would let the less frequent terms ** generate longer doclists. */ - if( iDocid<=p->iPrevDocid || p->nPendingData>p->nMaxPendingData ){ + if( iDocid<p->iPrevDocid + || (iDocid==p->iPrevDocid && p->bPrevDelete==0) + || p->iPrevLangid!=iLangid + || p->nPendingData>p->nMaxPendingData + ){ int rc = sqlite3Fts3PendingTermsFlush(p); if( rc!=SQLITE_OK ) return rc; } p->iPrevDocid = iDocid; + p->iPrevLangid = iLangid; + p->bPrevDelete = bDelete; return SQLITE_OK; } +/* +** Discard the contents of the pending-terms hash tables. +*/ SQLITE_PRIVATE void sqlite3Fts3PendingTermsClear(Fts3Table *p){ - Fts3HashElem *pElem; - for(pElem=fts3HashFirst(&p->pendingTerms); pElem; pElem=fts3HashNext(pElem)){ - sqlite3_free(fts3HashData(pElem)); + int i; + for(i=0; i<p->nIndex; i++){ + Fts3HashElem *pElem; + Fts3Hash *pHash = &p->aIndex[i].hPending; + for(pElem=fts3HashFirst(pHash); pElem; pElem=fts3HashNext(pElem)){ + PendingList *pList = (PendingList *)fts3HashData(pElem); + fts3PendingListDelete(pList); + } + fts3HashClear(pHash); } - fts3HashClear(&p->pendingTerms); p->nPendingData = 0; } /* ** This function is called by the xUpdate() method as part of an INSERT @@ -106343,19 +148941,26 @@ ** pendingTerms hash table. ** ** Argument apVal is the same as the similarly named argument passed to ** fts3InsertData(). Parameter iDocid is the docid of the new row. */ -static int fts3InsertTerms(Fts3Table *p, sqlite3_value **apVal, u32 *aSz){ +static int fts3InsertTerms( + Fts3Table *p, + int iLangid, + sqlite3_value **apVal, + u32 *aSz +){ int i; /* Iterator variable */ for(i=2; i<p->nColumn+2; i++){ - const char *zText = (const char *)sqlite3_value_text(apVal[i]); - if( zText ){ - int rc = fts3PendingTermsAdd(p, zText, i-2, &aSz[i-2]); + int iCol = i-2; + if( p->abNotindexed[iCol]==0 ){ + const char *zText = (const char *)sqlite3_value_text(apVal[i]); + int rc = fts3PendingTermsAdd(p, iLangid, zText, iCol, &aSz[iCol]); if( rc!=SQLITE_OK ){ return rc; } + aSz[p->nColumn] += sqlite3_value_bytes(apVal[i]); } } return SQLITE_OK; } @@ -106369,18 +148974,31 @@ ** apVal[2] Left-most user-defined column ** ... ** apVal[p->nColumn+1] Right-most user-defined column ** apVal[p->nColumn+2] Hidden column with same name as table ** apVal[p->nColumn+3] Hidden "docid" column (alias for rowid) +** apVal[p->nColumn+4] Hidden languageid column */ static int fts3InsertData( Fts3Table *p, /* Full-text table */ sqlite3_value **apVal, /* Array of values to insert */ sqlite3_int64 *piDocid /* OUT: Docid for row just inserted */ ){ int rc; /* Return code */ sqlite3_stmt *pContentInsert; /* INSERT INTO %_content VALUES(...) 
*/ + + if( p->zContentTbl ){ + sqlite3_value *pRowid = apVal[p->nColumn+3]; + if( sqlite3_value_type(pRowid)==SQLITE_NULL ){ + pRowid = apVal[1]; + } + if( sqlite3_value_type(pRowid)!=SQLITE_INTEGER ){ + return SQLITE_CONSTRAINT; + } + *piDocid = sqlite3_value_int64(pRowid); + return SQLITE_OK; + } /* Locate the statement handle used to insert data into the %_content ** table. The SQL for this statement is: ** ** INSERT INTO %_content VALUES(?, ?, ?, ...) @@ -106387,13 +149005,17 @@ ** ** The statement features N '?' variables, where N is the number of user ** defined columns in the FTS3 table, plus one for the docid field. */ rc = fts3SqlStmt(p, SQL_CONTENT_INSERT, &pContentInsert, &apVal[1]); - if( rc!=SQLITE_OK ){ - return rc; + if( rc==SQLITE_OK && p->zLanguageid ){ + rc = sqlite3_bind_int( + pContentInsert, p->nColumn+2, + sqlite3_value_int(apVal[p->nColumn+4]) + ); } + if( rc!=SQLITE_OK ) return rc; /* There is a quirk here. The users INSERT statement may have specified ** a value for the "rowid" field, for the "docid" field, or for both. ** Which is a problem, since "rowid" and "docid" are aliases for the ** same value. For example: @@ -106428,55 +149050,78 @@ /* ** Remove all data from the FTS3 table. Clear the hash table containing ** pending terms. */ -static int fts3DeleteAll(Fts3Table *p){ +static int fts3DeleteAll(Fts3Table *p, int bContent){ int rc = SQLITE_OK; /* Return code */ /* Discard the contents of the pending-terms hash table. */ sqlite3Fts3PendingTermsClear(p); - /* Delete everything from the %_content, %_segments and %_segdir tables. */ - fts3SqlExec(&rc, p, SQL_DELETE_ALL_CONTENT, 0); + /* Delete everything from the shadow tables. Except, leave %_content as + ** is if bContent is false. */ + assert( p->zContentTbl==0 || bContent==0 ); + if( bContent ) fts3SqlExec(&rc, p, SQL_DELETE_ALL_CONTENT, 0); fts3SqlExec(&rc, p, SQL_DELETE_ALL_SEGMENTS, 0); fts3SqlExec(&rc, p, SQL_DELETE_ALL_SEGDIR, 0); if( p->bHasDocsize ){ fts3SqlExec(&rc, p, SQL_DELETE_ALL_DOCSIZE, 0); + } + if( p->bHasStat ){ fts3SqlExec(&rc, p, SQL_DELETE_ALL_STAT, 0); } return rc; } + +/* +** +*/ +static int langidFromSelect(Fts3Table *p, sqlite3_stmt *pSelect){ + int iLangid = 0; + if( p->zLanguageid ) iLangid = sqlite3_column_int(pSelect, p->nColumn+1); + return iLangid; +} /* ** The first element in the apVal[] array is assumed to contain the docid ** (an integer) of a row about to be deleted. Remove all terms from the ** full-text index. 
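
For orientation, the shadow tables that fts3DeleteAll() above empties (%_content, %_segments, %_segdir, %_docsize and %_stat) can be listed with a few ordinary SQLite API calls. This sketch assumes a build with FTS4 compiled in; the table name "t" is arbitrary and error handling is omitted.

    #include <stdio.h>
    #include <sqlite3.h>

    static int show(void *p, int nCol, char **azVal, char **azCol){
      (void)p; (void)nCol; (void)azCol;
      printf("%s\n", azVal[0]);
      return 0;
    }

    int main(void){
      sqlite3 *db = 0;
      sqlite3_open(":memory:", &db);
      sqlite3_exec(db, "CREATE VIRTUAL TABLE t USING fts4(body);", 0, 0, 0);

      /* Prints t_content, t_segments, t_segdir, t_docsize and t_stat. */
      sqlite3_exec(db, "SELECT name FROM sqlite_master WHERE name LIKE 't_%';",
                   show, 0, 0);
      sqlite3_close(db);
      return 0;
    }
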
*/ -static void fts3DeleteTerms( +static void fts3DeleteTerms( int *pRC, /* Result code */ Fts3Table *p, /* The FTS table to delete from */ - sqlite3_value **apVal, /* apVal[] contains the docid to be deleted */ - u32 *aSz /* Sizes of deleted document written here */ + sqlite3_value *pRowid, /* The docid to be deleted */ + u32 *aSz, /* Sizes of deleted document written here */ + int *pbFound /* OUT: Set to true if row really does exist */ ){ int rc; sqlite3_stmt *pSelect; + assert( *pbFound==0 ); if( *pRC ) return; - rc = fts3SqlStmt(p, SQL_SELECT_CONTENT_BY_ROWID, &pSelect, apVal); + rc = fts3SqlStmt(p, SQL_SELECT_CONTENT_BY_ROWID, &pSelect, &pRowid); if( rc==SQLITE_OK ){ if( SQLITE_ROW==sqlite3_step(pSelect) ){ int i; - for(i=1; i<=p->nColumn; i++){ - const char *zText = (const char *)sqlite3_column_text(pSelect, i); - rc = fts3PendingTermsAdd(p, zText, -1, &aSz[i-1]); - if( rc!=SQLITE_OK ){ - sqlite3_reset(pSelect); - *pRC = rc; - return; + int iLangid = langidFromSelect(p, pSelect); + i64 iDocid = sqlite3_column_int64(pSelect, 0); + rc = fts3PendingTermsDocid(p, 1, iLangid, iDocid); + for(i=1; rc==SQLITE_OK && i<=p->nColumn; i++){ + int iCol = i-1; + if( p->abNotindexed[iCol]==0 ){ + const char *zText = (const char *)sqlite3_column_text(pSelect, i); + rc = fts3PendingTermsAdd(p, iLangid, zText, -1, &aSz[iCol]); + aSz[p->nColumn] += sqlite3_column_bytes(pSelect, i); } } + if( rc!=SQLITE_OK ){ + sqlite3_reset(pSelect); + *pRC = rc; + return; + } + *pbFound = 1; } rc = sqlite3_reset(pSelect); }else{ sqlite3_reset(pSelect); } @@ -106485,11 +149130,11 @@ /* ** Forward declaration to account for the circular dependency between ** functions fts3SegmentMerge() and fts3AllocateSegdirIdx(). */ -static int fts3SegmentMerge(Fts3Table *, int); +static int fts3SegmentMerge(Fts3Table *, int, int, int); /* ** This function allocates a new level iLevel index in the segdir table. ** Usually, indexes are allocated within a level sequentially starting ** with 0, so the allocated index is one greater than the value returned @@ -106502,19 +149147,30 @@ ** allocated index is 0. ** ** If successful, *piIdx is set to the allocated index slot and SQLITE_OK ** returned. Otherwise, an SQLite error code is returned. */ -static int fts3AllocateSegdirIdx(Fts3Table *p, int iLevel, int *piIdx){ +static int fts3AllocateSegdirIdx( + Fts3Table *p, + int iLangid, /* Language id */ + int iIndex, /* Index for p->aIndex */ + int iLevel, + int *piIdx +){ int rc; /* Return Code */ sqlite3_stmt *pNextIdx; /* Query for next idx at level iLevel */ int iNext = 0; /* Result of query pNextIdx */ + assert( iLangid>=0 ); + assert( p->nIndex>=1 ); + /* Set variable iNext to the next available segdir index at level iLevel. */ rc = fts3SqlStmt(p, SQL_NEXT_SEGMENT_INDEX, &pNextIdx, 0); if( rc==SQLITE_OK ){ - sqlite3_bind_int(pNextIdx, 1, iLevel); + sqlite3_bind_int64( + pNextIdx, 1, getAbsoluteLevel(p, iLangid, iIndex, iLevel) + ); if( SQLITE_ROW==sqlite3_step(pNextIdx) ){ iNext = sqlite3_column_int(pNextIdx, 0); } rc = sqlite3_reset(pNextIdx); } @@ -106524,26 +149180,167 @@ ** full, merge all segments in level iLevel into a single iLevel+1 ** segment and allocate (newly freed) index 0 at level iLevel. Otherwise, ** if iNext is less than FTS3_MERGE_COUNT, allocate index iNext. 
*/ if( iNext>=FTS3_MERGE_COUNT ){ - rc = fts3SegmentMerge(p, iLevel); + fts3LogMerge(16, getAbsoluteLevel(p, iLangid, iIndex, iLevel)); + rc = fts3SegmentMerge(p, iLangid, iIndex, iLevel); *piIdx = 0; }else{ *piIdx = iNext; } } return rc; } + +/* +** The %_segments table is declared as follows: +** +** CREATE TABLE %_segments(blockid INTEGER PRIMARY KEY, block BLOB) +** +** This function reads data from a single row of the %_segments table. The +** specific row is identified by the iBlockid parameter. If paBlob is not +** NULL, then a buffer is allocated using sqlite3_malloc() and populated +** with the contents of the blob stored in the "block" column of the +** identified table row is. Whether or not paBlob is NULL, *pnBlob is set +** to the size of the blob in bytes before returning. +** +** If an error occurs, or the table does not contain the specified row, +** an SQLite error code is returned. Otherwise, SQLITE_OK is returned. If +** paBlob is non-NULL, then it is the responsibility of the caller to +** eventually free the returned buffer. +** +** This function may leave an open sqlite3_blob* handle in the +** Fts3Table.pSegments variable. This handle is reused by subsequent calls +** to this function. The handle may be closed by calling the +** sqlite3Fts3SegmentsClose() function. Reusing a blob handle is a handy +** performance improvement, but the blob handle should always be closed +** before control is returned to the user (to prevent a lock being held +** on the database file for longer than necessary). Thus, any virtual table +** method (xFilter etc.) that may directly or indirectly call this function +** must call sqlite3Fts3SegmentsClose() before returning. +*/ +SQLITE_PRIVATE int sqlite3Fts3ReadBlock( + Fts3Table *p, /* FTS3 table handle */ + sqlite3_int64 iBlockid, /* Access the row with blockid=$iBlockid */ + char **paBlob, /* OUT: Blob data in malloc'd buffer */ + int *pnBlob, /* OUT: Size of blob data */ + int *pnLoad /* OUT: Bytes actually loaded */ +){ + int rc; /* Return code */ + + /* pnBlob must be non-NULL. paBlob may be NULL or non-NULL. */ + assert( pnBlob ); + + if( p->pSegments ){ + rc = sqlite3_blob_reopen(p->pSegments, iBlockid); + }else{ + if( 0==p->zSegmentsTbl ){ + p->zSegmentsTbl = sqlite3_mprintf("%s_segments", p->zName); + if( 0==p->zSegmentsTbl ) return SQLITE_NOMEM; + } + rc = sqlite3_blob_open( + p->db, p->zDb, p->zSegmentsTbl, "block", iBlockid, 0, &p->pSegments + ); + } + + if( rc==SQLITE_OK ){ + int nByte = sqlite3_blob_bytes(p->pSegments); + *pnBlob = nByte; + if( paBlob ){ + char *aByte = sqlite3_malloc(nByte + FTS3_NODE_PADDING); + if( !aByte ){ + rc = SQLITE_NOMEM; + }else{ + if( pnLoad && nByte>(FTS3_NODE_CHUNK_THRESHOLD) ){ + nByte = FTS3_NODE_CHUNKSIZE; + *pnLoad = nByte; + } + rc = sqlite3_blob_read(p->pSegments, aByte, nByte, 0); + memset(&aByte[nByte], 0, FTS3_NODE_PADDING); + if( rc!=SQLITE_OK ){ + sqlite3_free(aByte); + aByte = 0; + } + } + *paBlob = aByte; + } + } + + return rc; +} + +/* +** Close the blob handle at p->pSegments, if it is open. See comments above +** the sqlite3Fts3ReadBlock() function for details. 
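
The blob-handle reuse described above for sqlite3Fts3ReadBlock() can be sketched against the public sqlite3_blob API as follows. The table name "t_segments", the cached-handle variable and the function names are illustrative only, and error handling is reduced to the essentials.

    #include <stdlib.h>
    #include <sqlite3.h>

    static sqlite3_blob *pCached = 0;     /* stands in for Fts3Table.pSegments */

    static int readSegmentBlock(
      sqlite3 *db,                        /* Database connection */
      sqlite3_int64 iBlockid,             /* %_segments.blockid to read */
      char **paBlob,                      /* OUT: malloc'd copy of the block */
      int *pnBlob                         /* OUT: size of the block in bytes */
    ){
      int rc;
      if( pCached ){
        rc = sqlite3_blob_reopen(pCached, iBlockid);     /* cheap re-seek */
      }else{
        rc = sqlite3_blob_open(db, "main", "t_segments", "block",
                               iBlockid, 0, &pCached);
      }
      if( rc==SQLITE_OK ){
        int n = sqlite3_blob_bytes(pCached);
        char *a = malloc(n>0 ? n : 1);
        if( !a ) return SQLITE_NOMEM;
        rc = sqlite3_blob_read(pCached, a, n, 0);
        if( rc==SQLITE_OK ){ *paBlob = a; *pnBlob = n; }else{ free(a); }
      }
      return rc;
    }

    /* Call this before returning control to the user, as advised above. */
    static void closeSegmentBlockReader(void){
      sqlite3_blob_close(pCached);
      pCached = 0;
    }
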
+*/ +SQLITE_PRIVATE void sqlite3Fts3SegmentsClose(Fts3Table *p){ + sqlite3_blob_close(p->pSegments); + p->pSegments = 0; +} + +static int fts3SegReaderIncrRead(Fts3SegReader *pReader){ + int nRead; /* Number of bytes to read */ + int rc; /* Return code */ + + nRead = MIN(pReader->nNode - pReader->nPopulate, FTS3_NODE_CHUNKSIZE); + rc = sqlite3_blob_read( + pReader->pBlob, + &pReader->aNode[pReader->nPopulate], + nRead, + pReader->nPopulate + ); + + if( rc==SQLITE_OK ){ + pReader->nPopulate += nRead; + memset(&pReader->aNode[pReader->nPopulate], 0, FTS3_NODE_PADDING); + if( pReader->nPopulate==pReader->nNode ){ + sqlite3_blob_close(pReader->pBlob); + pReader->pBlob = 0; + pReader->nPopulate = 0; + } + } + return rc; +} + +static int fts3SegReaderRequire(Fts3SegReader *pReader, char *pFrom, int nByte){ + int rc = SQLITE_OK; + assert( !pReader->pBlob + || (pFrom>=pReader->aNode && pFrom<&pReader->aNode[pReader->nNode]) + ); + while( pReader->pBlob && rc==SQLITE_OK + && (pFrom - pReader->aNode + nByte)>pReader->nPopulate + ){ + rc = fts3SegReaderIncrRead(pReader); + } + return rc; +} + +/* +** Set an Fts3SegReader cursor to point at EOF. +*/ +static void fts3SegReaderSetEof(Fts3SegReader *pSeg){ + if( !fts3SegReaderIsRootOnly(pSeg) ){ + sqlite3_free(pSeg->aNode); + sqlite3_blob_close(pSeg->pBlob); + pSeg->pBlob = 0; + } + pSeg->aNode = 0; +} /* ** Move the iterator passed as the first argument to the next term in the ** segment. If successful, SQLITE_OK is returned. If there is no next term, ** SQLITE_DONE. Otherwise, an SQLite error code. */ -static int fts3SegReaderNext(Fts3SegReader *pReader){ +static int fts3SegReaderNext( + Fts3Table *p, + Fts3SegReader *pReader, + int bIncr +){ + int rc; /* Return code of various sub-routines */ char *pNext; /* Cursor variable */ int nPrefix; /* Number of bytes in term prefix */ int nSuffix; /* Number of bytes in term suffix */ if( !pReader->aDoclist ){ @@ -106551,42 +149348,68 @@ }else{ pNext = &pReader->aDoclist[pReader->nDoclist]; } if( !pNext || pNext>=&pReader->aNode[pReader->nNode] ){ - int rc; + if( fts3SegReaderIsPending(pReader) ){ Fts3HashElem *pElem = *(pReader->ppNextElem); - if( pElem==0 ){ - pReader->aNode = 0; - }else{ + sqlite3_free(pReader->aNode); + pReader->aNode = 0; + if( pElem ){ + char *aCopy; PendingList *pList = (PendingList *)fts3HashData(pElem); + int nCopy = pList->nData+1; pReader->zTerm = (char *)fts3HashKey(pElem); pReader->nTerm = fts3HashKeysize(pElem); - pReader->nNode = pReader->nDoclist = pList->nData + 1; - pReader->aNode = pReader->aDoclist = pList->aData; + aCopy = (char*)sqlite3_malloc(nCopy); + if( !aCopy ) return SQLITE_NOMEM; + memcpy(aCopy, pList->aData, nCopy); + pReader->nNode = pReader->nDoclist = nCopy; + pReader->aNode = pReader->aDoclist = aCopy; pReader->ppNextElem++; assert( pReader->aNode ); } return SQLITE_OK; } - if( !pReader->pStmt ){ - pReader->aNode = 0; + + fts3SegReaderSetEof(pReader); + + /* If iCurrentBlock>=iLeafEndBlock, this is an EOF condition. All leaf + ** blocks have already been traversed. */ + assert( pReader->iCurrentBlock<=pReader->iLeafEndBlock ); + if( pReader->iCurrentBlock>=pReader->iLeafEndBlock ){ return SQLITE_OK; } - rc = sqlite3_step(pReader->pStmt); - if( rc!=SQLITE_ROW ){ - pReader->aNode = 0; - return (rc==SQLITE_DONE ? SQLITE_OK : rc); + + rc = sqlite3Fts3ReadBlock( + p, ++pReader->iCurrentBlock, &pReader->aNode, &pReader->nNode, + (bIncr ? 
&pReader->nPopulate : 0) + ); + if( rc!=SQLITE_OK ) return rc; + assert( pReader->pBlob==0 ); + if( bIncr && pReader->nPopulate<pReader->nNode ){ + pReader->pBlob = p->pSegments; + p->pSegments = 0; } - pReader->nNode = sqlite3_column_bytes(pReader->pStmt, 0); - pReader->aNode = (char *)sqlite3_column_blob(pReader->pStmt, 0); pNext = pReader->aNode; } + + assert( !fts3SegReaderIsPending(pReader) ); + + rc = fts3SegReaderRequire(pReader, pNext, FTS3_VARINT_MAX*2); + if( rc!=SQLITE_OK ) return rc; - pNext += sqlite3Fts3GetVarint32(pNext, &nPrefix); - pNext += sqlite3Fts3GetVarint32(pNext, &nSuffix); + /* Because of the FTS3_NODE_PADDING bytes of padding, the following is + ** safe (no risk of overread) even if the node data is corrupted. */ + pNext += fts3GetVarint32(pNext, &nPrefix); + pNext += fts3GetVarint32(pNext, &nSuffix); + if( nPrefix<0 || nSuffix<=0 + || &pNext[nSuffix]>&pReader->aNode[pReader->nNode] + ){ + return FTS_CORRUPT_VTAB; + } if( nPrefix+nSuffix>pReader->nTermAlloc ){ int nNew = (nPrefix+nSuffix)*2; char *zNew = sqlite3_realloc(pReader->zTerm, nNew); if( !zNew ){ @@ -106593,30 +149416,57 @@ return SQLITE_NOMEM; } pReader->zTerm = zNew; pReader->nTermAlloc = nNew; } + + rc = fts3SegReaderRequire(pReader, pNext, nSuffix+FTS3_VARINT_MAX); + if( rc!=SQLITE_OK ) return rc; + memcpy(&pReader->zTerm[nPrefix], pNext, nSuffix); pReader->nTerm = nPrefix+nSuffix; pNext += nSuffix; - pNext += sqlite3Fts3GetVarint32(pNext, &pReader->nDoclist); - assert( pNext<&pReader->aNode[pReader->nNode] ); + pNext += fts3GetVarint32(pNext, &pReader->nDoclist); pReader->aDoclist = pNext; pReader->pOffsetList = 0; + + /* Check that the doclist does not appear to extend past the end of the + ** b-tree node. And that the final byte of the doclist is 0x00. If either + ** of these statements is untrue, then the data structure is corrupt. + */ + if( &pReader->aDoclist[pReader->nDoclist]>&pReader->aNode[pReader->nNode] + || (pReader->nPopulate==0 && pReader->aDoclist[pReader->nDoclist-1]) + ){ + return FTS_CORRUPT_VTAB; + } return SQLITE_OK; } /* ** Set the SegReader to point to the first docid in the doclist associated ** with the current term. */ -static void fts3SegReaderFirstDocid(Fts3SegReader *pReader){ - int n; +static int fts3SegReaderFirstDocid(Fts3Table *pTab, Fts3SegReader *pReader){ + int rc = SQLITE_OK; assert( pReader->aDoclist ); assert( !pReader->pOffsetList ); - n = sqlite3Fts3GetVarint(pReader->aDoclist, &pReader->iDocid); - pReader->pOffsetList = &pReader->aDoclist[n]; + if( pTab->bDescIdx && fts3SegReaderIsPending(pReader) ){ + u8 bEof = 0; + pReader->iDocid = 0; + pReader->nOffsetList = 0; + sqlite3Fts3DoclistPrev(0, + pReader->aDoclist, pReader->nDoclist, &pReader->pOffsetList, + &pReader->iDocid, &pReader->nOffsetList, &bEof + ); + }else{ + rc = fts3SegReaderRequire(pReader, pReader->aDoclist, FTS3_VARINT_MAX); + if( rc==SQLITE_OK ){ + int n = sqlite3Fts3GetVarint(pReader->aDoclist, &pReader->iDocid); + pReader->pOffsetList = &pReader->aDoclist[n]; + } + } + return rc; } /* ** Advance the SegReader to point to the next docid in the doclist ** associated with the current term. @@ -106625,166 +149475,205 @@ ** *ppOffsetList is set to point to the first column-offset list ** in the doclist entry (i.e. immediately past the docid varint). ** *pnOffsetList is set to the length of the set of column-offset ** lists, not including the nul-terminator byte. 
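
As a minimal illustration of the doclist layout that fts3SegReaderFirstDocid() and fts3SegReaderNextDocid() walk (a docid varint, position data terminated by a 0x00 byte, then a varint-encoded delta to the next docid), here is a hypothetical hand-built doclist and a simplified decoder. The varint routine is a stand-in for sqlite3Fts3GetVarint() and only the ascending-docid case is handled.

    #include <stdio.h>

    typedef long long i64;

    /* Simplified stand-in for sqlite3Fts3GetVarint(): little-endian 7-bit
    ** groups, high bit set on all but the final byte. Returns bytes read. */
    static int getVarint(const unsigned char *p, i64 *pVal){
      i64 v = 0;
      int i = 0;
      do{
        v |= (i64)(p[i] & 0x7f) << (7*i);
      }while( p[i++] & 0x80 );
      *pVal = v;
      return i;
    }

    int main(void){
      /* Hypothetical doclist: docid 5, opaque position bytes, 0x00 terminator,
      ** then a delta of 3 (so the next docid is 8), more positions, 0x00. */
      const unsigned char aDoclist[] = { 5, 0x02, 0x04, 0x00, 3, 0x02, 0x00 };
      int n = (int)sizeof(aDoclist);
      int i = 0;
      i64 iDocid = 0, iDelta = 0;

      i += getVarint(&aDoclist[i], &iDocid);
      while( 1 ){
        unsigned char c = 0;
        printf("docid %lld, offset-list starts at byte %d\n", iDocid, i);

        /* Skip past the offset-list using the same *p|c trick as the code
        ** above, so a 0x00 that ends a multi-byte varint is not mistaken
        ** for the terminator. */
        while( aDoclist[i] | c ) c = aDoclist[i++] & 0x80;
        i++;
        if( i>=n ) break;

        i += getVarint(&aDoclist[i], &iDelta);
        iDocid += iDelta;               /* docids are delta-encoded */
      }
      return 0;
    }
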
For example: */ -static void fts3SegReaderNextDocid( - Fts3SegReader *pReader, - char **ppOffsetList, - int *pnOffsetList +static int fts3SegReaderNextDocid( + Fts3Table *pTab, + Fts3SegReader *pReader, /* Reader to advance to next docid */ + char **ppOffsetList, /* OUT: Pointer to current position-list */ + int *pnOffsetList /* OUT: Length of *ppOffsetList in bytes */ ){ + int rc = SQLITE_OK; char *p = pReader->pOffsetList; char c = 0; - /* Pointer p currently points at the first byte of an offset list. The - ** following two lines advance it to point one byte past the end of - ** the same offset list. - */ - while( *p | c ) c = *p++ & 0x80; - p++; - - /* If required, populate the output variables with a pointer to and the - ** size of the previous offset-list. - */ - if( ppOffsetList ){ - *ppOffsetList = pReader->pOffsetList; - *pnOffsetList = (int)(p - pReader->pOffsetList - 1); - } - - /* If there are no more entries in the doclist, set pOffsetList to - ** NULL. Otherwise, set Fts3SegReader.iDocid to the next docid and - ** Fts3SegReader.pOffsetList to point to the next offset list before - ** returning. - */ - if( p>=&pReader->aDoclist[pReader->nDoclist] ){ - pReader->pOffsetList = 0; - }else{ - sqlite3_int64 iDelta; - pReader->pOffsetList = p + sqlite3Fts3GetVarint(p, &iDelta); - pReader->iDocid += iDelta; - } + assert( p ); + + if( pTab->bDescIdx && fts3SegReaderIsPending(pReader) ){ + /* A pending-terms seg-reader for an FTS4 table that uses order=desc. + ** Pending-terms doclists are always built up in ascending order, so + ** we have to iterate through them backwards here. */ + u8 bEof = 0; + if( ppOffsetList ){ + *ppOffsetList = pReader->pOffsetList; + *pnOffsetList = pReader->nOffsetList - 1; + } + sqlite3Fts3DoclistPrev(0, + pReader->aDoclist, pReader->nDoclist, &p, &pReader->iDocid, + &pReader->nOffsetList, &bEof + ); + if( bEof ){ + pReader->pOffsetList = 0; + }else{ + pReader->pOffsetList = p; + } + }else{ + char *pEnd = &pReader->aDoclist[pReader->nDoclist]; + + /* Pointer p currently points at the first byte of an offset list. The + ** following block advances it to point one byte past the end of + ** the same offset list. */ + while( 1 ){ + + /* The following line of code (and the "p++" below the while() loop) is + ** normally all that is required to move pointer p to the desired + ** position. The exception is if this node is being loaded from disk + ** incrementally and pointer "p" now points to the first byte past + ** the populated part of pReader->aNode[]. + */ + while( *p | c ) c = *p++ & 0x80; + assert( *p==0 ); + + if( pReader->pBlob==0 || p<&pReader->aNode[pReader->nPopulate] ) break; + rc = fts3SegReaderIncrRead(pReader); + if( rc!=SQLITE_OK ) return rc; + } + p++; + + /* If required, populate the output variables with a pointer to and the + ** size of the previous offset-list. + */ + if( ppOffsetList ){ + *ppOffsetList = pReader->pOffsetList; + *pnOffsetList = (int)(p - pReader->pOffsetList - 1); + } + + /* List may have been edited in place by fts3EvalNearTrim() */ + while( p<pEnd && *p==0 ) p++; + + /* If there are no more entries in the doclist, set pOffsetList to + ** NULL. Otherwise, set Fts3SegReader.iDocid to the next docid and + ** Fts3SegReader.pOffsetList to point to the next offset list before + ** returning. 
+ */ + if( p>=pEnd ){ + pReader->pOffsetList = 0; + }else{ + rc = fts3SegReaderRequire(pReader, p, FTS3_VARINT_MAX); + if( rc==SQLITE_OK ){ + sqlite3_int64 iDelta; + pReader->pOffsetList = p + sqlite3Fts3GetVarint(p, &iDelta); + if( pTab->bDescIdx ){ + pReader->iDocid -= iDelta; + }else{ + pReader->iDocid += iDelta; + } + } + } + } + + return SQLITE_OK; +} + + +SQLITE_PRIVATE int sqlite3Fts3MsrOvfl( + Fts3Cursor *pCsr, + Fts3MultiSegReader *pMsr, + int *pnOvfl +){ + Fts3Table *p = (Fts3Table*)pCsr->base.pVtab; + int nOvfl = 0; + int ii; + int rc = SQLITE_OK; + int pgsz = p->nPgsz; + + assert( p->bFts4 ); + assert( pgsz>0 ); + + for(ii=0; rc==SQLITE_OK && ii<pMsr->nSegment; ii++){ + Fts3SegReader *pReader = pMsr->apSegment[ii]; + if( !fts3SegReaderIsPending(pReader) + && !fts3SegReaderIsRootOnly(pReader) + ){ + sqlite3_int64 jj; + for(jj=pReader->iStartBlock; jj<=pReader->iLeafEndBlock; jj++){ + int nBlob; + rc = sqlite3Fts3ReadBlock(p, jj, 0, &nBlob, 0); + if( rc!=SQLITE_OK ) break; + if( (nBlob+35)>pgsz ){ + nOvfl += (nBlob + 34)/pgsz; + } + } + } + } + *pnOvfl = nOvfl; + return rc; } /* ** Free all allocations associated with the iterator passed as the ** second argument. */ -SQLITE_PRIVATE void sqlite3Fts3SegReaderFree(Fts3Table *p, Fts3SegReader *pReader){ +SQLITE_PRIVATE void sqlite3Fts3SegReaderFree(Fts3SegReader *pReader){ if( pReader ){ - if( pReader->pStmt ){ - /* Move the leaf-range SELECT statement to the aLeavesStmt[] array, - ** so that it can be reused when required by another query. - */ - assert( p->nLeavesStmt<p->nLeavesTotal ); - sqlite3_reset(pReader->pStmt); - p->aLeavesStmt[p->nLeavesStmt++] = pReader->pStmt; - } if( !fts3SegReaderIsPending(pReader) ){ sqlite3_free(pReader->zTerm); } - sqlite3_free(pReader); + if( !fts3SegReaderIsRootOnly(pReader) ){ + sqlite3_free(pReader->aNode); + } + sqlite3_blob_close(pReader->pBlob); } + sqlite3_free(pReader); } /* ** Allocate a new SegReader object. */ SQLITE_PRIVATE int sqlite3Fts3SegReaderNew( - Fts3Table *p, /* Virtual table handle */ int iAge, /* Segment "age". */ + int bLookup, /* True for a lookup only */ sqlite3_int64 iStartLeaf, /* First leaf to traverse */ sqlite3_int64 iEndLeaf, /* Final leaf to traverse */ sqlite3_int64 iEndBlock, /* Final block of segment */ const char *zRoot, /* Buffer containing root node */ int nRoot, /* Size of buffer containing root node */ Fts3SegReader **ppReader /* OUT: Allocated Fts3SegReader */ ){ - int rc = SQLITE_OK; /* Return code */ Fts3SegReader *pReader; /* Newly allocated SegReader object */ int nExtra = 0; /* Bytes to allocate segment root node */ + assert( iStartLeaf<=iEndLeaf ); if( iStartLeaf==0 ){ - nExtra = nRoot; + nExtra = nRoot + FTS3_NODE_PADDING; } pReader = (Fts3SegReader *)sqlite3_malloc(sizeof(Fts3SegReader) + nExtra); if( !pReader ){ return SQLITE_NOMEM; } memset(pReader, 0, sizeof(Fts3SegReader)); - pReader->iStartBlock = iStartLeaf; pReader->iIdx = iAge; + pReader->bLookup = bLookup!=0; + pReader->iStartBlock = iStartLeaf; + pReader->iLeafEndBlock = iEndLeaf; pReader->iEndBlock = iEndBlock; if( nExtra ){ /* The entire segment is stored in the root node. */ pReader->aNode = (char *)&pReader[1]; + pReader->rootOnly = 1; pReader->nNode = nRoot; memcpy(pReader->aNode, zRoot, nRoot); - }else{ - /* If the text of the SQL statement to iterate through a contiguous - ** set of entries in the %_segments table has not yet been composed, - ** compose it now. 
- */ - if( !p->zSelectLeaves ){ - p->zSelectLeaves = sqlite3_mprintf( - "SELECT block FROM %Q.'%q_segments' WHERE blockid BETWEEN ? AND ? " - "ORDER BY blockid", p->zDb, p->zName - ); - if( !p->zSelectLeaves ){ - rc = SQLITE_NOMEM; - goto finished; - } - } - - /* If there are no free statements in the aLeavesStmt[] array, prepare - ** a new statement now. Otherwise, reuse a prepared statement from - ** aLeavesStmt[]. - */ - if( p->nLeavesStmt==0 ){ - if( p->nLeavesTotal==p->nLeavesAlloc ){ - int nNew = p->nLeavesAlloc + 16; - sqlite3_stmt **aNew = (sqlite3_stmt **)sqlite3_realloc( - p->aLeavesStmt, nNew*sizeof(sqlite3_stmt *) - ); - if( !aNew ){ - rc = SQLITE_NOMEM; - goto finished; - } - p->nLeavesAlloc = nNew; - p->aLeavesStmt = aNew; - } - rc = sqlite3_prepare_v2(p->db, p->zSelectLeaves, -1, &pReader->pStmt, 0); - if( rc!=SQLITE_OK ){ - goto finished; - } - p->nLeavesTotal++; - }else{ - pReader->pStmt = p->aLeavesStmt[--p->nLeavesStmt]; - } - - /* Bind the start and end leaf blockids to the prepared SQL statement. */ - sqlite3_bind_int64(pReader->pStmt, 1, iStartLeaf); - sqlite3_bind_int64(pReader->pStmt, 2, iEndLeaf); - } - rc = fts3SegReaderNext(pReader); - - finished: - if( rc==SQLITE_OK ){ - *ppReader = pReader; - }else{ - sqlite3Fts3SegReaderFree(p, pReader); - } - return rc; + memset(&pReader->aNode[nRoot], 0, FTS3_NODE_PADDING); + }else{ + pReader->iCurrentBlock = iStartLeaf-1; + } + *ppReader = pReader; + return SQLITE_OK; } /* ** This is a comparison function used as a qsort() callback when sorting ** an array of pending terms by term. This occurs as part of flushing ** the contents of the pending-terms hash table to the database. */ -static int fts3CompareElemByTerm(const void *lhs, const void *rhs){ +static int SQLITE_CDECL fts3CompareElemByTerm( + const void *lhs, + const void *rhs +){ char *z1 = fts3HashKey(*(Fts3HashElem **)lhs); char *z2 = fts3HashKey(*(Fts3HashElem **)rhs); int n1 = fts3HashKeysize(*(Fts3HashElem **)lhs); int n2 = fts3HashKeysize(*(Fts3HashElem **)rhs); @@ -106797,28 +149686,46 @@ } /* ** This function is used to allocate an Fts3SegReader that iterates through ** a subset of the terms stored in the Fts3Table.pendingTerms array. +** +** If the isPrefixIter parameter is zero, then the returned SegReader iterates +** through each term in the pending-terms table. Or, if isPrefixIter is +** non-zero, it iterates through each term and its prefixes. 
For example, if +** the pending terms hash table contains the terms "sqlite", "mysql" and +** "firebird", then the iterator visits the following 'terms' (in the order +** shown): +** +** f fi fir fire fireb firebi firebir firebird +** m my mys mysq mysql +** s sq sql sqli sqlit sqlite +** +** Whereas if isPrefixIter is zero, the terms visited are: +** +** firebird mysql sqlite */ SQLITE_PRIVATE int sqlite3Fts3SegReaderPending( Fts3Table *p, /* Virtual table handle */ + int iIndex, /* Index for p->aIndex */ const char *zTerm, /* Term to search for */ int nTerm, /* Size of buffer zTerm */ - int isPrefix, /* True for a term-prefix query */ + int bPrefix, /* True for a prefix iterator */ Fts3SegReader **ppReader /* OUT: SegReader for pending-terms */ ){ Fts3SegReader *pReader = 0; /* Fts3SegReader object to return */ + Fts3HashElem *pE; /* Iterator variable */ Fts3HashElem **aElem = 0; /* Array of term hash entries to scan */ int nElem = 0; /* Size of array at aElem */ int rc = SQLITE_OK; /* Return Code */ + Fts3Hash *pHash; - if( isPrefix ){ + pHash = &p->aIndex[iIndex].hPending; + if( bPrefix ){ int nAlloc = 0; /* Size of allocated array at aElem */ - Fts3HashElem *pE = 0; /* Iterator variable */ - for(pE=fts3HashFirst(&p->pendingTerms); pE; pE=fts3HashNext(pE)){ + for(pE=fts3HashFirst(pHash); pE; pE=fts3HashNext(pE)){ char *zKey = (char *)fts3HashKey(pE); int nKey = fts3HashKeysize(pE); if( nTerm==0 || (nKey>=nTerm && 0==memcmp(zKey, zTerm, nTerm)) ){ if( nElem==nAlloc ){ Fts3HashElem **aElem2; @@ -106831,10 +149738,11 @@ nElem = 0; break; } aElem = aElem2; } + aElem[nElem++] = pE; } } /* If more than one term matches the prefix, sort the Fts3HashElem @@ -106844,11 +149752,18 @@ if( nElem>1 ){ qsort(aElem, nElem, sizeof(Fts3HashElem *), fts3CompareElemByTerm); } }else{ - Fts3HashElem *pE = fts3HashFindElem(&p->pendingTerms, zTerm, nTerm); + /* The query is a simple term lookup that matches at most one term in + ** the index. All that is required is a straight hash-lookup. + ** + ** Because the stack address of pE may be accessed via the aElem pointer + ** below, the "Fts3HashElem *pE" must be declared so that it is valid + ** within this entire function, not just this "else{...}" block. + */ + pE = fts3HashFindElem(pHash, zTerm, nTerm); if( pE ){ aElem = &pE; nElem = 1; } } @@ -106861,58 +149776,20 @@ }else{ memset(pReader, 0, nByte); pReader->iIdx = 0x7FFFFFFF; pReader->ppNextElem = (Fts3HashElem **)&pReader[1]; memcpy(pReader->ppNextElem, aElem, nElem*sizeof(Fts3HashElem *)); - fts3SegReaderNext(pReader); } } - if( isPrefix ){ + if( bPrefix ){ sqlite3_free(aElem); } *ppReader = pReader; return rc; } - -/* -** The second argument to this function is expected to be a statement of -** the form: -** -** SELECT -** idx, -- col 0 -** start_block, -- col 1 -** leaves_end_block, -- col 2 -** end_block, -- col 3 -** root -- col 4 -** FROM %_segdir ... -** -** This function allocates and initializes a Fts3SegReader structure to -** iterate through the terms stored in the segment identified by the -** current row that pStmt is pointing to. -** -** If successful, the Fts3SegReader is left pointing to the first term -** in the segment and SQLITE_OK is returned. Otherwise, an SQLite error -** code is returned. -*/ -static int fts3SegReaderNew( - Fts3Table *p, /* Virtual table handle */ - sqlite3_stmt *pStmt, /* See above */ - int iAge, /* Segment "age". 
*/ - Fts3SegReader **ppReader /* OUT: Allocated Fts3SegReader */ -){ - return sqlite3Fts3SegReaderNew(p, iAge, - sqlite3_column_int64(pStmt, 1), - sqlite3_column_int64(pStmt, 2), - sqlite3_column_int64(pStmt, 3), - sqlite3_column_blob(pStmt, 4), - sqlite3_column_bytes(pStmt, 4), - ppReader - ); -} - /* ** Compare the entries pointed to by two Fts3SegReader structures. ** Comparison is as follows: ** ** 1) EOF is greater than not EOF. @@ -106962,10 +149839,22 @@ if( pLhs->iDocid==pRhs->iDocid ){ rc = pRhs->iIdx - pLhs->iIdx; }else{ rc = (pLhs->iDocid > pRhs->iDocid) ? 1 : -1; } + } + assert( pLhs->aNode && pRhs->aNode ); + return rc; +} +static int fts3SegReaderDoclistCmpRev(Fts3SegReader *pLhs, Fts3SegReader *pRhs){ + int rc = (pLhs->pOffsetList==0)-(pRhs->pOffsetList==0); + if( rc==0 ){ + if( pLhs->iDocid==pRhs->iDocid ){ + rc = pRhs->iIdx - pLhs->iIdx; + }else{ + rc = (pLhs->iDocid < pRhs->iDocid) ? 1 : -1; + } } assert( pLhs->aNode && pRhs->aNode ); return rc; } @@ -107049,32 +149938,60 @@ sqlite3_step(pStmt); rc = sqlite3_reset(pStmt); } return rc; } + +/* +** Find the largest relative level number in the table. If successful, set +** *pnMax to this value and return SQLITE_OK. Otherwise, if an error occurs, +** set *pnMax to zero and return an SQLite error code. +*/ +SQLITE_PRIVATE int sqlite3Fts3MaxLevel(Fts3Table *p, int *pnMax){ + int rc; + int mxLevel = 0; + sqlite3_stmt *pStmt = 0; + + rc = fts3SqlStmt(p, SQL_SELECT_MXLEVEL, &pStmt, 0); + if( rc==SQLITE_OK ){ + if( SQLITE_ROW==sqlite3_step(pStmt) ){ + mxLevel = sqlite3_column_int(pStmt, 0); + } + rc = sqlite3_reset(pStmt); + } + *pnMax = mxLevel; + return rc; +} /* ** Insert a record into the %_segdir table. */ static int fts3WriteSegdir( Fts3Table *p, /* Virtual table handle */ - int iLevel, /* Value for "level" field */ + sqlite3_int64 iLevel, /* Value for "level" field (absolute level) */ int iIdx, /* Value for "idx" field */ sqlite3_int64 iStartBlock, /* Value for "start_block" field */ sqlite3_int64 iLeafEndBlock, /* Value for "leaves_end_block" field */ sqlite3_int64 iEndBlock, /* Value for "end_block" field */ + sqlite3_int64 nLeafData, /* Bytes of leaf data in segment */ char *zRoot, /* Blob value for "root" field */ int nRoot /* Number of bytes in buffer zRoot */ ){ sqlite3_stmt *pStmt; int rc = fts3SqlStmt(p, SQL_INSERT_SEGDIR, &pStmt, 0); if( rc==SQLITE_OK ){ - sqlite3_bind_int(pStmt, 1, iLevel); + sqlite3_bind_int64(pStmt, 1, iLevel); sqlite3_bind_int(pStmt, 2, iIdx); sqlite3_bind_int64(pStmt, 3, iStartBlock); sqlite3_bind_int64(pStmt, 4, iLeafEndBlock); - sqlite3_bind_int64(pStmt, 5, iEndBlock); + if( nLeafData==0 ){ + sqlite3_bind_int64(pStmt, 5, iEndBlock); + }else{ + char *zEnd = sqlite3_mprintf("%lld %lld", iEndBlock, nLeafData); + if( !zEnd ) return SQLITE_NOMEM; + sqlite3_bind_text(pStmt, 5, zEnd, -1, sqlite3_free); + } sqlite3_bind_blob(pStmt, 6, zRoot, nRoot, SQLITE_STATIC); sqlite3_step(pStmt); rc = sqlite3_reset(pStmt); } return rc; @@ -107103,11 +150020,11 @@ /* ** Add term zTerm to the SegmentNode. It is guaranteed that zTerm is larger ** (according to memcmp) than the previous term. */ static int fts3NodeAddTerm( - Fts3Table *p, /* Virtual table handle */ + Fts3Table *p, /* Virtual table handle */ SegmentNode **ppTree, /* IN/OUT: SegmentNode handle */ int isCopyTerm, /* True if zTerm/nTerm is transient */ const char *zTerm, /* Pointer to buffer containing term */ int nTerm /* Size of term in bytes */ ){ @@ -107366,10 +150283,11 @@ int rc; /* The current leaf node is full. Write it out to the database. 
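
When nLeafData is non-zero, fts3WriteSegdir() above stores two values in the single end_block column as the text "<end block> <leaf bytes>". Below is a minimal sketch of writing and re-splitting such a value; the sscanf-based parsing is only for illustration and is not the routine FTS4 itself uses.

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void){
      sqlite3_int64 iEndBlock = 123456;   /* last block id of the segment */
      sqlite3_int64 nLeafData = 987654;   /* bytes of leaf data written */
      sqlite3_int64 iBlock = 0, nBytes = 0;

      /* Formatted exactly as in fts3WriteSegdir() above. */
      char *zEnd = sqlite3_mprintf("%lld %lld", iEndBlock, nLeafData);
      if( zEnd==0 ) return 1;

      /* Illustrative parse of the combined value back into its two parts. */
      sscanf(zEnd, "%lld %lld", &iBlock, &nBytes);
      printf("end_block=%lld, leaf data=%lld bytes\n", iBlock, nBytes);

      sqlite3_free(zEnd);
      return 0;
    }
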
*/ rc = fts3WriteSegment(p, pWriter->iFree++, pWriter->aData, nData); if( rc!=SQLITE_OK ) return rc; + p->nLeafAdd++; /* Add the current term to the interior node tree. The term added to ** the interior tree must: ** ** a) be greater than the largest term on the leaf node just written @@ -107394,10 +150312,13 @@ sqlite3Fts3VarintLen(nTerm) + /* varint containing suffix size */ nTerm + /* Term suffix */ sqlite3Fts3VarintLen(nDoclist) + /* Size of doclist */ nDoclist; /* Doclist data */ } + + /* Increase the total number of bytes written to account for the new entry. */ + pWriter->nLeafData += nReq; /* If the buffer currently allocated is too small for this entry, realloc ** the buffer to make it large enough. */ if( nReq>pWriter->nSize ){ @@ -107449,11 +150370,11 @@ ** returned. Otherwise, an SQLite error code. */ static int fts3SegWriterFlush( Fts3Table *p, /* Virtual table handle */ SegmentWriter *pWriter, /* SegmentWriter to flush to the db */ - int iLevel, /* Value for 'level' column of %_segdir */ + sqlite3_int64 iLevel, /* Value for 'level' column of %_segdir */ int iIdx /* Value for 'idx' column of %_segdir */ ){ int rc; /* Return code */ if( pWriter->pTree ){ sqlite3_int64 iLast = 0; /* Largest block id written to database */ @@ -107466,18 +150387,19 @@ if( rc==SQLITE_OK ){ rc = fts3NodeWrite(p, pWriter->pTree, 1, pWriter->iFirst, pWriter->iFree, &iLast, &zRoot, &nRoot); } if( rc==SQLITE_OK ){ - rc = fts3WriteSegdir( - p, iLevel, iIdx, pWriter->iFirst, iLastLeaf, iLast, zRoot, nRoot); + rc = fts3WriteSegdir(p, iLevel, iIdx, + pWriter->iFirst, iLastLeaf, iLast, pWriter->nLeafData, zRoot, nRoot); } }else{ /* The entire tree fits on the root node. Write it to the segdir table. */ - rc = fts3WriteSegdir( - p, iLevel, iIdx, 0, 0, 0, pWriter->aData, pWriter->nData); + rc = fts3WriteSegdir(p, iLevel, iIdx, + 0, 0, 0, pWriter->nLeafData, pWriter->aData, pWriter->nData); } + p->nLeafAdd++; return rc; } /* ** Release all memory held by the SegmentWriter object passed as the @@ -107494,66 +150416,123 @@ /* ** The first value in the apVal[] array is assumed to contain an integer. ** This function tests if there exist any documents with docid values that ** are different from that integer. i.e. if deleting the document with docid -** apVal[0] would mean the FTS3 table were empty. -** -** If successful, *pisEmpty is set to true if the table is empty except for -** document apVal[0], or false otherwise, and SQLITE_OK is returned. If an -** error occurs, an SQLite error code is returned. -*/ -static int fts3IsEmpty(Fts3Table *p, sqlite3_value **apVal, int *pisEmpty){ - sqlite3_stmt *pStmt; - int rc; - rc = fts3SqlStmt(p, SQL_IS_EMPTY, &pStmt, apVal); - if( rc==SQLITE_OK ){ - if( SQLITE_ROW==sqlite3_step(pStmt) ){ - *pisEmpty = sqlite3_column_int(pStmt, 0); - } - rc = sqlite3_reset(pStmt); - } - return rc; -} - -/* -** Set *pnSegment to the number of segments of level iLevel in the database. -** -** Return SQLITE_OK if successful, or an SQLite error code if not. -*/ -static int fts3SegmentCount(Fts3Table *p, int iLevel, int *pnSegment){ - sqlite3_stmt *pStmt; - int rc; - - assert( iLevel>=0 ); - rc = fts3SqlStmt(p, SQL_SELECT_LEVEL_COUNT, &pStmt, 0); - if( rc!=SQLITE_OK ) return rc; - sqlite3_bind_int(pStmt, 1, iLevel); - if( SQLITE_ROW==sqlite3_step(pStmt) ){ - *pnSegment = sqlite3_column_int(pStmt, 0); - } - return sqlite3_reset(pStmt); -} - -/* -** Set *pnSegment to the total number of segments in the database. 
Set -** *pnMax to the largest segment level in the database (segment levels -** are stored in the 'level' column of the %_segdir table). -** -** Return SQLITE_OK if successful, or an SQLite error code if not. -*/ -static int fts3SegmentCountMax(Fts3Table *p, int *pnSegment, int *pnMax){ - sqlite3_stmt *pStmt; - int rc; - - rc = fts3SqlStmt(p, SQL_SELECT_SEGDIR_COUNT_MAX, &pStmt, 0); - if( rc!=SQLITE_OK ) return rc; - if( SQLITE_ROW==sqlite3_step(pStmt) ){ - *pnSegment = sqlite3_column_int(pStmt, 0); - *pnMax = sqlite3_column_int(pStmt, 1); - } - return sqlite3_reset(pStmt); +** pRowid would mean the FTS3 table were empty. +** +** If successful, *pisEmpty is set to true if the table is empty except for +** document pRowid, or false otherwise, and SQLITE_OK is returned. If an +** error occurs, an SQLite error code is returned. +*/ +static int fts3IsEmpty(Fts3Table *p, sqlite3_value *pRowid, int *pisEmpty){ + sqlite3_stmt *pStmt; + int rc; + if( p->zContentTbl ){ + /* If using the content=xxx option, assume the table is never empty */ + *pisEmpty = 0; + rc = SQLITE_OK; + }else{ + rc = fts3SqlStmt(p, SQL_IS_EMPTY, &pStmt, &pRowid); + if( rc==SQLITE_OK ){ + if( SQLITE_ROW==sqlite3_step(pStmt) ){ + *pisEmpty = sqlite3_column_int(pStmt, 0); + } + rc = sqlite3_reset(pStmt); + } + } + return rc; +} + +/* +** Set *pnMax to the largest segment level in the database for the index +** iIndex. +** +** Segment levels are stored in the 'level' column of the %_segdir table. +** +** Return SQLITE_OK if successful, or an SQLite error code if not. +*/ +static int fts3SegmentMaxLevel( + Fts3Table *p, + int iLangid, + int iIndex, + sqlite3_int64 *pnMax +){ + sqlite3_stmt *pStmt; + int rc; + assert( iIndex>=0 && iIndex<p->nIndex ); + + /* Set pStmt to the compiled version of: + ** + ** SELECT max(level) FROM %Q.'%q_segdir' WHERE level BETWEEN ? AND ? + ** + ** (1024 is actually the value of macro FTS3_SEGDIR_PREFIXLEVEL_STR). + */ + rc = fts3SqlStmt(p, SQL_SELECT_SEGDIR_MAX_LEVEL, &pStmt, 0); + if( rc!=SQLITE_OK ) return rc; + sqlite3_bind_int64(pStmt, 1, getAbsoluteLevel(p, iLangid, iIndex, 0)); + sqlite3_bind_int64(pStmt, 2, + getAbsoluteLevel(p, iLangid, iIndex, FTS3_SEGDIR_MAXLEVEL-1) + ); + if( SQLITE_ROW==sqlite3_step(pStmt) ){ + *pnMax = sqlite3_column_int64(pStmt, 0); + } + return sqlite3_reset(pStmt); +} + +/* +** iAbsLevel is an absolute level that may be assumed to exist within +** the database. This function checks if it is the largest level number +** within its index. Assuming no error occurs, *pbMax is set to 1 if +** iAbsLevel is indeed the largest level, or 0 otherwise, and SQLITE_OK +** is returned. If an error occurs, an error code is returned and the +** final value of *pbMax is undefined. +*/ +static int fts3SegmentIsMaxLevel(Fts3Table *p, i64 iAbsLevel, int *pbMax){ + + /* Set pStmt to the compiled version of: + ** + ** SELECT max(level) FROM %Q.'%q_segdir' WHERE level BETWEEN ? AND ? + ** + ** (1024 is actually the value of macro FTS3_SEGDIR_PREFIXLEVEL_STR). + */ + sqlite3_stmt *pStmt; + int rc = fts3SqlStmt(p, SQL_SELECT_SEGDIR_MAX_LEVEL, &pStmt, 0); + if( rc!=SQLITE_OK ) return rc; + sqlite3_bind_int64(pStmt, 1, iAbsLevel+1); + sqlite3_bind_int64(pStmt, 2, + ((iAbsLevel/FTS3_SEGDIR_MAXLEVEL)+1) * FTS3_SEGDIR_MAXLEVEL + ); + + *pbMax = 0; + if( SQLITE_ROW==sqlite3_step(pStmt) ){ + *pbMax = sqlite3_column_type(pStmt, 0)==SQLITE_NULL; + } + return sqlite3_reset(pStmt); +} + +/* +** Delete all entries in the %_segments table associated with the segment +** opened with seg-reader pSeg. 
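
For a concrete feel of the BETWEEN bounds that fts3SegmentIsMaxLevel() above binds, the following few lines compute them for a hypothetical absolute level, assuming FTS3_SEGDIR_MAXLEVEL is 1024 as described earlier in this file.

    #include <stdio.h>

    int main(void){
      long long iAbsLevel = 2050;       /* hypothetical absolute level */
      long long iLo = iAbsLevel + 1;
      long long iHi = ((iAbsLevel/1024) + 1) * 1024;
      printf("any segdir row with level BETWEEN %lld AND %lld?\n", iLo, iHi);
      return 0;
    }
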
This function does not affect the contents +** of the %_segdir table. +*/ +static int fts3DeleteSegment( + Fts3Table *p, /* FTS table handle */ + Fts3SegReader *pSeg /* Segment to delete */ +){ + int rc = SQLITE_OK; /* Return code */ + if( pSeg->iStartBlock ){ + sqlite3_stmt *pDelete; /* SQL statement to delete rows */ + rc = fts3SqlStmt(p, SQL_DELETE_SEGMENTS_RANGE, &pDelete, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pDelete, 1, pSeg->iStartBlock); + sqlite3_bind_int64(pDelete, 2, pSeg->iEndBlock); + sqlite3_step(pDelete); + rc = sqlite3_reset(pDelete); + } + } + return rc; } /* ** This function is used after merging multiple segments into a single large ** segment to delete the old, now redundant, segment b-trees. Specifically, @@ -107568,41 +150547,48 @@ ** ** SQLITE_OK is returned if successful, otherwise an SQLite error code. */ static int fts3DeleteSegdir( Fts3Table *p, /* Virtual table handle */ + int iLangid, /* Language id */ + int iIndex, /* Index for p->aIndex */ int iLevel, /* Level of %_segdir entries to delete */ Fts3SegReader **apSegment, /* Array of SegReader objects */ int nReader /* Size of array apSegment */ ){ - int rc; /* Return Code */ + int rc = SQLITE_OK; /* Return Code */ int i; /* Iterator variable */ - sqlite3_stmt *pDelete; /* SQL statement to delete rows */ + sqlite3_stmt *pDelete = 0; /* SQL statement to delete rows */ - rc = fts3SqlStmt(p, SQL_DELETE_SEGMENTS_RANGE, &pDelete, 0); for(i=0; rc==SQLITE_OK && i<nReader; i++){ - Fts3SegReader *pSegment = apSegment[i]; - if( pSegment->iStartBlock ){ - sqlite3_bind_int64(pDelete, 1, pSegment->iStartBlock); - sqlite3_bind_int64(pDelete, 2, pSegment->iEndBlock); - sqlite3_step(pDelete); - rc = sqlite3_reset(pDelete); - } + rc = fts3DeleteSegment(p, apSegment[i]); } if( rc!=SQLITE_OK ){ return rc; } - if( iLevel>=0 ){ - rc = fts3SqlStmt(p, SQL_DELETE_SEGDIR_BY_LEVEL, &pDelete, 0); + assert( iLevel>=0 || iLevel==FTS3_SEGCURSOR_ALL ); + if( iLevel==FTS3_SEGCURSOR_ALL ){ + rc = fts3SqlStmt(p, SQL_DELETE_SEGDIR_RANGE, &pDelete, 0); if( rc==SQLITE_OK ){ - sqlite3_bind_int(pDelete, 1, iLevel); - sqlite3_step(pDelete); - rc = sqlite3_reset(pDelete); + sqlite3_bind_int64(pDelete, 1, getAbsoluteLevel(p, iLangid, iIndex, 0)); + sqlite3_bind_int64(pDelete, 2, + getAbsoluteLevel(p, iLangid, iIndex, FTS3_SEGDIR_MAXLEVEL-1) + ); } }else{ - fts3SqlExec(&rc, p, SQL_DELETE_ALL_SEGDIR, 0); + rc = fts3SqlStmt(p, SQL_DELETE_SEGDIR_LEVEL, &pDelete, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int64( + pDelete, 1, getAbsoluteLevel(p, iLangid, iIndex, iLevel) + ); + } + } + + if( rc==SQLITE_OK ){ + sqlite3_step(pDelete); + rc = sqlite3_reset(pDelete); } return rc; } @@ -107612,13 +150598,17 @@ ** function adjusts the pointer *ppList and the length *pnList so that they ** identify the subset of the position list that corresponds to column iCol. ** ** If there are no entries in the input position list for column iCol, then ** *pnList is set to zero before returning. +** +** If parameter bZero is non-zero, then any part of the input list following +** the end of the output list is zeroed before returning. 
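+**
+** For example (purely illustrative values): if the input position list
+** holds entries for columns 0, 1 and 3, then filtering on iCol==1 leaves
+** *ppList pointing at the column 1 positions and *pnList set to their
+** size in bytes, while filtering on iCol==2 sets *pnList to zero.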
*/ static void fts3ColumnFilter( int iCol, /* Column to filter on */ + int bZero, /* Zero out anything following *ppList */ char **ppList, /* IN/OUT: Pointer to position list */ int *pnList /* IN/OUT: Size of buffer *ppList in bytes */ ){ char *pList = *ppList; int nList = *pnList; @@ -107640,228 +150630,554 @@ pList = p; if( nList==0 ){ break; } p = &pList[1]; - p += sqlite3Fts3GetVarint32(p, &iCurrent); + p += fts3GetVarint32(p, &iCurrent); } + if( bZero && &pList[nList]!=pEnd ){ + memset(&pList[nList], 0, pEnd - &pList[nList]); + } *ppList = pList; *pnList = nList; } /* -** sqlite3Fts3SegReaderIterate() callback used when merging multiple -** segments to create a single, larger segment. -*/ -static int fts3MergeCallback( - Fts3Table *p, /* FTS3 Virtual table handle */ - void *pContext, /* Pointer to SegmentWriter* to write with */ - char *zTerm, /* Term to write to the db */ - int nTerm, /* Number of bytes in zTerm */ - char *aDoclist, /* Doclist associated with zTerm */ - int nDoclist /* Number of bytes in doclist */ -){ - SegmentWriter **ppW = (SegmentWriter **)pContext; - return fts3SegWriterAdd(p, ppW, 1, zTerm, nTerm, aDoclist, nDoclist); -} - -/* -** sqlite3Fts3SegReaderIterate() callback used when flushing the contents -** of the pending-terms hash table to the database. -*/ -static int fts3FlushCallback( - Fts3Table *p, /* FTS3 Virtual table handle */ - void *pContext, /* Pointer to SegmentWriter* to write with */ - char *zTerm, /* Term to write to the db */ - int nTerm, /* Number of bytes in zTerm */ - char *aDoclist, /* Doclist associated with zTerm */ - int nDoclist /* Number of bytes in doclist */ -){ - SegmentWriter **ppW = (SegmentWriter **)pContext; - return fts3SegWriterAdd(p, ppW, 0, zTerm, nTerm, aDoclist, nDoclist); -} - -/* -** This function is used to iterate through a contiguous set of terms -** stored in the full-text index. It merges data contained in one or -** more segments to support this. -** -** The second argument is passed an array of pointers to SegReader objects -** allocated with sqlite3Fts3SegReaderNew(). This function merges the range -** of terms selected by each SegReader. If a single term is present in -** more than one segment, the associated doclists are merged. For each -** term and (possibly merged) doclist in the merged range, the callback -** function xFunc is invoked with its arguments set as follows. -** -** arg 0: Copy of 'p' parameter passed to this function -** arg 1: Copy of 'pContext' parameter passed to this function -** arg 2: Pointer to buffer containing term -** arg 3: Size of arg 2 buffer in bytes -** arg 4: Pointer to buffer containing doclist -** arg 5: Size of arg 2 buffer in bytes -** -** The 4th argument to this function is a pointer to a structure of type -** Fts3SegFilter, defined in fts3Int.h. The contents of this structure -** further restrict the range of terms that callbacks are made for and -** modify the behaviour of this function. See comments above structure -** definition for details. -*/ -SQLITE_PRIVATE int sqlite3Fts3SegReaderIterate( +** Cache data in the Fts3MultiSegReader.aBuffer[] buffer (overwriting any +** existing data). Grow the buffer if required. +** +** If successful, return SQLITE_OK. Otherwise, if an OOM error is encountered +** trying to resize the buffer, return SQLITE_NOMEM. 
+*/ +static int fts3MsrBufferData( + Fts3MultiSegReader *pMsr, /* Multi-segment-reader handle */ + char *pList, + int nList +){ + if( nList>pMsr->nBuffer ){ + char *pNew; + pMsr->nBuffer = nList*2; + pNew = (char *)sqlite3_realloc(pMsr->aBuffer, pMsr->nBuffer); + if( !pNew ) return SQLITE_NOMEM; + pMsr->aBuffer = pNew; + } + + memcpy(pMsr->aBuffer, pList, nList); + return SQLITE_OK; +} + +SQLITE_PRIVATE int sqlite3Fts3MsrIncrNext( + Fts3Table *p, /* Virtual table handle */ + Fts3MultiSegReader *pMsr, /* Multi-segment-reader handle */ + sqlite3_int64 *piDocid, /* OUT: Docid value */ + char **paPoslist, /* OUT: Pointer to position list */ + int *pnPoslist /* OUT: Size of position list in bytes */ +){ + int nMerge = pMsr->nAdvance; + Fts3SegReader **apSegment = pMsr->apSegment; + int (*xCmp)(Fts3SegReader *, Fts3SegReader *) = ( + p->bDescIdx ? fts3SegReaderDoclistCmpRev : fts3SegReaderDoclistCmp + ); + + if( nMerge==0 ){ + *paPoslist = 0; + return SQLITE_OK; + } + + while( 1 ){ + Fts3SegReader *pSeg; + pSeg = pMsr->apSegment[0]; + + if( pSeg->pOffsetList==0 ){ + *paPoslist = 0; + break; + }else{ + int rc; + char *pList; + int nList; + int j; + sqlite3_int64 iDocid = apSegment[0]->iDocid; + + rc = fts3SegReaderNextDocid(p, apSegment[0], &pList, &nList); + j = 1; + while( rc==SQLITE_OK + && j<nMerge + && apSegment[j]->pOffsetList + && apSegment[j]->iDocid==iDocid + ){ + rc = fts3SegReaderNextDocid(p, apSegment[j], 0, 0); + j++; + } + if( rc!=SQLITE_OK ) return rc; + fts3SegReaderSort(pMsr->apSegment, nMerge, j, xCmp); + + if( nList>0 && fts3SegReaderIsPending(apSegment[0]) ){ + rc = fts3MsrBufferData(pMsr, pList, nList+1); + if( rc!=SQLITE_OK ) return rc; + assert( (pMsr->aBuffer[nList] & 0xFE)==0x00 ); + pList = pMsr->aBuffer; + } + + if( pMsr->iColFilter>=0 ){ + fts3ColumnFilter(pMsr->iColFilter, 1, &pList, &nList); + } + + if( nList>0 ){ + *paPoslist = pList; + *piDocid = iDocid; + *pnPoslist = nList; + break; + } + } + } + + return SQLITE_OK; +} + +static int fts3SegReaderStart( Fts3Table *p, /* Virtual table handle */ - Fts3SegReader **apSegment, /* Array of Fts3SegReader objects */ - int nSegment, /* Size of apSegment array */ - Fts3SegFilter *pFilter, /* Restrictions on range of iteration */ - int (*xFunc)(Fts3Table *, void *, char *, int, char *, int), /* Callback */ - void *pContext /* Callback context (2nd argument) */ + Fts3MultiSegReader *pCsr, /* Cursor object */ + const char *zTerm, /* Term searched for (or NULL) */ + int nTerm /* Length of zTerm in bytes */ ){ - int i; /* Iterator variable */ - char *aBuffer = 0; /* Buffer to merge doclists in */ - int nAlloc = 0; /* Allocated size of aBuffer buffer */ - int rc = SQLITE_OK; /* Return code */ - - int isIgnoreEmpty = (pFilter->flags & FTS3_SEGMENT_IGNORE_EMPTY); - int isRequirePos = (pFilter->flags & FTS3_SEGMENT_REQUIRE_POS); - int isColFilter = (pFilter->flags & FTS3_SEGMENT_COLUMN_FILTER); - int isPrefix = (pFilter->flags & FTS3_SEGMENT_PREFIX); - - /* If there are zero segments, this function is a no-op. This scenario - ** comes about only when reading from an empty database. - */ - if( nSegment==0 ) goto finished; + int i; + int nSeg = pCsr->nSegment; /* If the Fts3SegFilter defines a specific term (or term prefix) to search ** for, then advance each segment iterator until it points to a term of ** equal or greater value than the specified term. This prevents many ** unnecessary merge/sort operations for the case where single segment ** b-tree leaf nodes contain more than one term. 
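+  ** For example (illustrative terms): when zTerm is "car", each reader is
+  ** stepped past terms such as "apple" and "banana" and is left positioned
+  ** on its first term that compares greater than or equal to "car".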
*/ - if( pFilter->zTerm ){ - int nTerm = pFilter->nTerm; - const char *zTerm = pFilter->zTerm; - for(i=0; i<nSegment; i++){ + for(i=0; pCsr->bRestart==0 && i<pCsr->nSegment; i++){ + int res = 0; + Fts3SegReader *pSeg = pCsr->apSegment[i]; + do { + int rc = fts3SegReaderNext(p, pSeg, 0); + if( rc!=SQLITE_OK ) return rc; + }while( zTerm && (res = fts3SegReaderTermCmp(pSeg, zTerm, nTerm))<0 ); + + if( pSeg->bLookup && res!=0 ){ + fts3SegReaderSetEof(pSeg); + } + } + fts3SegReaderSort(pCsr->apSegment, nSeg, nSeg, fts3SegReaderCmp); + + return SQLITE_OK; +} + +SQLITE_PRIVATE int sqlite3Fts3SegReaderStart( + Fts3Table *p, /* Virtual table handle */ + Fts3MultiSegReader *pCsr, /* Cursor object */ + Fts3SegFilter *pFilter /* Restrictions on range of iteration */ +){ + pCsr->pFilter = pFilter; + return fts3SegReaderStart(p, pCsr, pFilter->zTerm, pFilter->nTerm); +} + +SQLITE_PRIVATE int sqlite3Fts3MsrIncrStart( + Fts3Table *p, /* Virtual table handle */ + Fts3MultiSegReader *pCsr, /* Cursor object */ + int iCol, /* Column to match on. */ + const char *zTerm, /* Term to iterate through a doclist for */ + int nTerm /* Number of bytes in zTerm */ +){ + int i; + int rc; + int nSegment = pCsr->nSegment; + int (*xCmp)(Fts3SegReader *, Fts3SegReader *) = ( + p->bDescIdx ? fts3SegReaderDoclistCmpRev : fts3SegReaderDoclistCmp + ); + + assert( pCsr->pFilter==0 ); + assert( zTerm && nTerm>0 ); + + /* Advance each segment iterator until it points to the term zTerm/nTerm. */ + rc = fts3SegReaderStart(p, pCsr, zTerm, nTerm); + if( rc!=SQLITE_OK ) return rc; + + /* Determine how many of the segments actually point to zTerm/nTerm. */ + for(i=0; i<nSegment; i++){ + Fts3SegReader *pSeg = pCsr->apSegment[i]; + if( !pSeg->aNode || fts3SegReaderTermCmp(pSeg, zTerm, nTerm) ){ + break; + } + } + pCsr->nAdvance = i; + + /* Advance each of the segments to point to the first docid. */ + for(i=0; i<pCsr->nAdvance; i++){ + rc = fts3SegReaderFirstDocid(p, pCsr->apSegment[i]); + if( rc!=SQLITE_OK ) return rc; + } + fts3SegReaderSort(pCsr->apSegment, i, i, xCmp); + + assert( iCol<0 || iCol<p->nColumn ); + pCsr->iColFilter = iCol; + + return SQLITE_OK; +} + +/* +** This function is called on a MultiSegReader that has been started using +** sqlite3Fts3MsrIncrStart(). One or more calls to MsrIncrNext() may also +** have been made. Calling this function puts the MultiSegReader in such +** a state that if the next two calls are: +** +** sqlite3Fts3SegReaderStart() +** sqlite3Fts3SegReaderStep() +** +** then the entire doclist for the term is available in +** MultiSegReader.aDoclist/nDoclist. 
+*/ +SQLITE_PRIVATE int sqlite3Fts3MsrIncrRestart(Fts3MultiSegReader *pCsr){ + int i; /* Used to iterate through segment-readers */ + + assert( pCsr->zTerm==0 ); + assert( pCsr->nTerm==0 ); + assert( pCsr->aDoclist==0 ); + assert( pCsr->nDoclist==0 ); + + pCsr->nAdvance = 0; + pCsr->bRestart = 1; + for(i=0; i<pCsr->nSegment; i++){ + pCsr->apSegment[i]->pOffsetList = 0; + pCsr->apSegment[i]->nOffsetList = 0; + pCsr->apSegment[i]->iDocid = 0; + } + + return SQLITE_OK; +} + + +SQLITE_PRIVATE int sqlite3Fts3SegReaderStep( + Fts3Table *p, /* Virtual table handle */ + Fts3MultiSegReader *pCsr /* Cursor object */ +){ + int rc = SQLITE_OK; + + int isIgnoreEmpty = (pCsr->pFilter->flags & FTS3_SEGMENT_IGNORE_EMPTY); + int isRequirePos = (pCsr->pFilter->flags & FTS3_SEGMENT_REQUIRE_POS); + int isColFilter = (pCsr->pFilter->flags & FTS3_SEGMENT_COLUMN_FILTER); + int isPrefix = (pCsr->pFilter->flags & FTS3_SEGMENT_PREFIX); + int isScan = (pCsr->pFilter->flags & FTS3_SEGMENT_SCAN); + int isFirst = (pCsr->pFilter->flags & FTS3_SEGMENT_FIRST); + + Fts3SegReader **apSegment = pCsr->apSegment; + int nSegment = pCsr->nSegment; + Fts3SegFilter *pFilter = pCsr->pFilter; + int (*xCmp)(Fts3SegReader *, Fts3SegReader *) = ( + p->bDescIdx ? fts3SegReaderDoclistCmpRev : fts3SegReaderDoclistCmp + ); + + if( pCsr->nSegment==0 ) return SQLITE_OK; + + do { + int nMerge; + int i; + + /* Advance the first pCsr->nAdvance entries in the apSegment[] array + ** forward. Then sort the list in order of current term again. + */ + for(i=0; i<pCsr->nAdvance; i++){ Fts3SegReader *pSeg = apSegment[i]; - while( fts3SegReaderTermCmp(pSeg, zTerm, nTerm)<0 ){ - rc = fts3SegReaderNext(pSeg); - if( rc!=SQLITE_OK ) goto finished; } - } - } - - fts3SegReaderSort(apSegment, nSegment, nSegment, fts3SegReaderCmp); - while( apSegment[0]->aNode ){ - int nTerm = apSegment[0]->nTerm; - char *zTerm = apSegment[0]->zTerm; - int nMerge = 1; + if( pSeg->bLookup ){ + fts3SegReaderSetEof(pSeg); + }else{ + rc = fts3SegReaderNext(p, pSeg, 0); + } + if( rc!=SQLITE_OK ) return rc; + } + fts3SegReaderSort(apSegment, nSegment, pCsr->nAdvance, fts3SegReaderCmp); + pCsr->nAdvance = 0; + + /* If all the seg-readers are at EOF, we're finished. return SQLITE_OK. */ + assert( rc==SQLITE_OK ); + if( apSegment[0]->aNode==0 ) break; + + pCsr->nTerm = apSegment[0]->nTerm; + pCsr->zTerm = apSegment[0]->zTerm; /* If this is a prefix-search, and if the term that apSegment[0] points ** to does not share a suffix with pFilter->zTerm/nTerm, then all ** required callbacks have been made. In this case exit early. ** ** Similarly, if this is a search for an exact match, and the first term ** of segment apSegment[0] is not a match, exit early. 
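+    ** For example (illustrative terms): for a prefix search on "app", terms
+    ** are visited in ascending order, so once the smallest current term is
+    ** one such as "aqua" that no longer starts with "app", no further
+    ** matches are possible and iteration can stop.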
*/ - if( pFilter->zTerm ){ - if( nTerm<pFilter->nTerm - || (!isPrefix && nTerm>pFilter->nTerm) - || memcmp(zTerm, pFilter->zTerm, pFilter->nTerm) - ){ - goto finished; + if( pFilter->zTerm && !isScan ){ + if( pCsr->nTerm<pFilter->nTerm + || (!isPrefix && pCsr->nTerm>pFilter->nTerm) + || memcmp(pCsr->zTerm, pFilter->zTerm, pFilter->nTerm) + ){ + break; } } + nMerge = 1; while( nMerge<nSegment && apSegment[nMerge]->aNode - && apSegment[nMerge]->nTerm==nTerm - && 0==memcmp(zTerm, apSegment[nMerge]->zTerm, nTerm) + && apSegment[nMerge]->nTerm==pCsr->nTerm + && 0==memcmp(pCsr->zTerm, apSegment[nMerge]->zTerm, pCsr->nTerm) ){ nMerge++; } assert( isIgnoreEmpty || (isRequirePos && !isColFilter) ); - if( nMerge==1 && !isIgnoreEmpty ){ - Fts3SegReader *p0 = apSegment[0]; - rc = xFunc(p, pContext, zTerm, nTerm, p0->aDoclist, p0->nDoclist); - if( rc!=SQLITE_OK ) goto finished; + if( nMerge==1 + && !isIgnoreEmpty + && !isFirst + && (p->bDescIdx==0 || fts3SegReaderIsPending(apSegment[0])==0) + ){ + pCsr->nDoclist = apSegment[0]->nDoclist; + if( fts3SegReaderIsPending(apSegment[0]) ){ + rc = fts3MsrBufferData(pCsr, apSegment[0]->aDoclist, pCsr->nDoclist); + pCsr->aDoclist = pCsr->aBuffer; + }else{ + pCsr->aDoclist = apSegment[0]->aDoclist; + } + if( rc==SQLITE_OK ) rc = SQLITE_ROW; }else{ int nDoclist = 0; /* Size of doclist */ sqlite3_int64 iPrev = 0; /* Previous docid stored in doclist */ /* The current term of the first nMerge entries in the array ** of Fts3SegReader objects is the same. The doclists must be merged - ** and a single term added to the new segment. + ** and a single term returned with the merged doclist. */ for(i=0; i<nMerge; i++){ - fts3SegReaderFirstDocid(apSegment[i]); + fts3SegReaderFirstDocid(p, apSegment[i]); } - fts3SegReaderSort(apSegment, nMerge, nMerge, fts3SegReaderDoclistCmp); + fts3SegReaderSort(apSegment, nMerge, nMerge, xCmp); while( apSegment[0]->pOffsetList ){ int j; /* Number of segments that share a docid */ - char *pList; - int nList; + char *pList = 0; + int nList = 0; int nByte; sqlite3_int64 iDocid = apSegment[0]->iDocid; - fts3SegReaderNextDocid(apSegment[0], &pList, &nList); + fts3SegReaderNextDocid(p, apSegment[0], &pList, &nList); j = 1; while( j<nMerge && apSegment[j]->pOffsetList && apSegment[j]->iDocid==iDocid ){ - fts3SegReaderNextDocid(apSegment[j], 0, 0); + fts3SegReaderNextDocid(p, apSegment[j], 0, 0); j++; } if( isColFilter ){ - fts3ColumnFilter(pFilter->iCol, &pList, &nList); + fts3ColumnFilter(pFilter->iCol, 0, &pList, &nList); } if( !isIgnoreEmpty || nList>0 ){ - nByte = sqlite3Fts3VarintLen(iDocid-iPrev) + (isRequirePos?nList+1:0); - if( nDoclist+nByte>nAlloc ){ + + /* Calculate the 'docid' delta value to write into the merged + ** doclist. 
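+          ** For example, the ascending docids 3, 7 and 12 are stored as
+          ** the deltas 3, 4 and 5.  For a descending index (p->bDescIdx),
+          ** each entry after the first stores (iPrev - iDocid) instead.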
*/ + sqlite3_int64 iDelta; + if( p->bDescIdx && nDoclist>0 ){ + iDelta = iPrev - iDocid; + }else{ + iDelta = iDocid - iPrev; + } + assert( iDelta>0 || (nDoclist==0 && iDelta==iDocid) ); + assert( nDoclist>0 || iDelta==iDocid ); + + nByte = sqlite3Fts3VarintLen(iDelta) + (isRequirePos?nList+1:0); + if( nDoclist+nByte>pCsr->nBuffer ){ char *aNew; - nAlloc = nDoclist+nByte*2; - aNew = sqlite3_realloc(aBuffer, nAlloc); + pCsr->nBuffer = (nDoclist+nByte)*2; + aNew = sqlite3_realloc(pCsr->aBuffer, pCsr->nBuffer); if( !aNew ){ - rc = SQLITE_NOMEM; - goto finished; - } - aBuffer = aNew; - } - nDoclist += sqlite3Fts3PutVarint(&aBuffer[nDoclist], iDocid-iPrev); - iPrev = iDocid; - if( isRequirePos ){ - memcpy(&aBuffer[nDoclist], pList, nList); - nDoclist += nList; - aBuffer[nDoclist++] = '\0'; + return SQLITE_NOMEM; + } + pCsr->aBuffer = aNew; + } + + if( isFirst ){ + char *a = &pCsr->aBuffer[nDoclist]; + int nWrite; + + nWrite = sqlite3Fts3FirstFilter(iDelta, pList, nList, a); + if( nWrite ){ + iPrev = iDocid; + nDoclist += nWrite; + } + }else{ + nDoclist += sqlite3Fts3PutVarint(&pCsr->aBuffer[nDoclist], iDelta); + iPrev = iDocid; + if( isRequirePos ){ + memcpy(&pCsr->aBuffer[nDoclist], pList, nList); + nDoclist += nList; + pCsr->aBuffer[nDoclist++] = '\0'; + } } } - fts3SegReaderSort(apSegment, nMerge, j, fts3SegReaderDoclistCmp); + fts3SegReaderSort(apSegment, nMerge, j, xCmp); } - if( nDoclist>0 ){ - rc = xFunc(p, pContext, zTerm, nTerm, aBuffer, nDoclist); - if( rc!=SQLITE_OK ) goto finished; - } - } - - /* If there is a term specified to filter on, and this is not a prefix - ** search, return now. The callback that corresponds to the required - ** term (if such a term exists in the index) has already been made. - */ - if( pFilter->zTerm && !isPrefix ){ - goto finished; - } - - for(i=0; i<nMerge; i++){ - rc = fts3SegReaderNext(apSegment[i]); - if( rc!=SQLITE_OK ) goto finished; - } - fts3SegReaderSort(apSegment, nSegment, nMerge, fts3SegReaderCmp); - } - - finished: - sqlite3_free(aBuffer); + pCsr->aDoclist = pCsr->aBuffer; + pCsr->nDoclist = nDoclist; + rc = SQLITE_ROW; + } + } + pCsr->nAdvance = nMerge; + }while( rc==SQLITE_OK ); + + return rc; +} + + +SQLITE_PRIVATE void sqlite3Fts3SegReaderFinish( + Fts3MultiSegReader *pCsr /* Cursor object */ +){ + if( pCsr ){ + int i; + for(i=0; i<pCsr->nSegment; i++){ + sqlite3Fts3SegReaderFree(pCsr->apSegment[i]); + } + sqlite3_free(pCsr->apSegment); + sqlite3_free(pCsr->aBuffer); + + pCsr->nSegment = 0; + pCsr->apSegment = 0; + pCsr->aBuffer = 0; + } +} + +/* +** Decode the "end_block" field, selected by column iCol of the SELECT +** statement passed as the first argument. +** +** The "end_block" field may contain either an integer, or a text field +** containing the text representation of two non-negative integers separated +** by one or more space (0x20) characters. In the first case, set *piEndBlock +** to the integer value and *pnByte to zero before returning. In the second, +** set *piEndBlock to the first value and *pnByte to the second. 
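+**
+** For example (illustrative values): an end_block value of '1234 5678'
+** yields *piEndBlock==1234 and *pnByte==5678, while a plain integer value
+** of '1234' yields *piEndBlock==1234 and *pnByte==0.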
+*/ +static void fts3ReadEndBlockField( + sqlite3_stmt *pStmt, + int iCol, + i64 *piEndBlock, + i64 *pnByte +){ + const unsigned char *zText = sqlite3_column_text(pStmt, iCol); + if( zText ){ + int i; + int iMul = 1; + i64 iVal = 0; + for(i=0; zText[i]>='0' && zText[i]<='9'; i++){ + iVal = iVal*10 + (zText[i] - '0'); + } + *piEndBlock = iVal; + while( zText[i]==' ' ) i++; + iVal = 0; + if( zText[i]=='-' ){ + i++; + iMul = -1; + } + for(/* no-op */; zText[i]>='0' && zText[i]<='9'; i++){ + iVal = iVal*10 + (zText[i] - '0'); + } + *pnByte = (iVal * (i64)iMul); + } +} + + +/* +** A segment of size nByte bytes has just been written to absolute level +** iAbsLevel. Promote any segments that should be promoted as a result. +*/ +static int fts3PromoteSegments( + Fts3Table *p, /* FTS table handle */ + sqlite3_int64 iAbsLevel, /* Absolute level just updated */ + sqlite3_int64 nByte /* Size of new segment at iAbsLevel */ +){ + int rc = SQLITE_OK; + sqlite3_stmt *pRange; + + rc = fts3SqlStmt(p, SQL_SELECT_LEVEL_RANGE2, &pRange, 0); + + if( rc==SQLITE_OK ){ + int bOk = 0; + i64 iLast = (iAbsLevel/FTS3_SEGDIR_MAXLEVEL + 1) * FTS3_SEGDIR_MAXLEVEL - 1; + i64 nLimit = (nByte*3)/2; + + /* Loop through all entries in the %_segdir table corresponding to + ** segments in this index on levels greater than iAbsLevel. If there is + ** at least one such segment, and it is possible to determine that all + ** such segments are smaller than nLimit bytes in size, they will be + ** promoted to level iAbsLevel. */ + sqlite3_bind_int64(pRange, 1, iAbsLevel+1); + sqlite3_bind_int64(pRange, 2, iLast); + while( SQLITE_ROW==sqlite3_step(pRange) ){ + i64 nSize = 0, dummy; + fts3ReadEndBlockField(pRange, 2, &dummy, &nSize); + if( nSize<=0 || nSize>nLimit ){ + /* If nSize==0, then the %_segdir.end_block field does not not + ** contain a size value. This happens if it was written by an + ** old version of FTS. In this case it is not possible to determine + ** the size of the segment, and so segment promotion does not + ** take place. */ + bOk = 0; + break; + } + bOk = 1; + } + rc = sqlite3_reset(pRange); + + if( bOk ){ + int iIdx = 0; + sqlite3_stmt *pUpdate1 = 0; + sqlite3_stmt *pUpdate2 = 0; + + if( rc==SQLITE_OK ){ + rc = fts3SqlStmt(p, SQL_UPDATE_LEVEL_IDX, &pUpdate1, 0); + } + if( rc==SQLITE_OK ){ + rc = fts3SqlStmt(p, SQL_UPDATE_LEVEL, &pUpdate2, 0); + } + + if( rc==SQLITE_OK ){ + + /* Loop through all %_segdir entries for segments in this index with + ** levels equal to or greater than iAbsLevel. As each entry is visited, + ** updated it to set (level = -1) and (idx = N), where N is 0 for the + ** oldest segment in the range, 1 for the next oldest, and so on. + ** + ** In other words, move all segments being promoted to level -1, + ** setting the "idx" fields as appropriate to keep them in the same + ** order. The contents of level -1 (which is never used, except + ** transiently here), will be moved back to level iAbsLevel below. 
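+        **
+        ** For example (illustrative counts): if the levels at and above
+        ** iAbsLevel hold three such segments in total, they are updated to
+        ** (level=-1, idx=0), (level=-1, idx=1) and (level=-1, idx=2),
+        ** oldest first, before being moved back to level iAbsLevel.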
*/ + sqlite3_bind_int64(pRange, 1, iAbsLevel); + while( SQLITE_ROW==sqlite3_step(pRange) ){ + sqlite3_bind_int(pUpdate1, 1, iIdx++); + sqlite3_bind_int(pUpdate1, 2, sqlite3_column_int(pRange, 0)); + sqlite3_bind_int(pUpdate1, 3, sqlite3_column_int(pRange, 1)); + sqlite3_step(pUpdate1); + rc = sqlite3_reset(pUpdate1); + if( rc!=SQLITE_OK ){ + sqlite3_reset(pRange); + break; + } + } + } + if( rc==SQLITE_OK ){ + rc = sqlite3_reset(pRange); + } + + /* Move level -1 to level iAbsLevel */ + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pUpdate2, 1, iAbsLevel); + sqlite3_step(pUpdate2); + rc = sqlite3_reset(pUpdate2); + } + } + } + + return rc; } /* ** Merge all level iLevel segments in the database into a single @@ -107872,160 +151188,136 @@ ** If this function is called with iLevel<0, but there is only one ** segment in the database, SQLITE_DONE is returned immediately. ** Otherwise, if successful, SQLITE_OK is returned. If an error occurs, ** an SQLite error code is returned. */ -static int fts3SegmentMerge(Fts3Table *p, int iLevel){ - int i; /* Iterator variable */ - int rc; /* Return code */ - int iIdx; /* Index of new segment */ - int iNewLevel; /* Level to create new segment at */ - sqlite3_stmt *pStmt = 0; - SegmentWriter *pWriter = 0; - int nSegment = 0; /* Number of segments being merged */ - Fts3SegReader **apSegment = 0; /* Array of Segment iterators */ - Fts3SegReader *pPending = 0; /* Iterator for pending-terms */ - Fts3SegFilter filter; /* Segment term filter condition */ - - if( iLevel<0 ){ - /* This call is to merge all segments in the database to a single - ** segment. The level of the new segment is equal to the the numerically - ** greatest segment level currently present in the database. The index - ** of the new segment is always 0. - */ - iIdx = 0; - rc = sqlite3Fts3SegReaderPending(p, 0, 0, 1, &pPending); - if( rc!=SQLITE_OK ) goto finished; - rc = fts3SegmentCountMax(p, &nSegment, &iNewLevel); - if( rc!=SQLITE_OK ) goto finished; - nSegment += (pPending!=0); - if( nSegment<=1 ){ - return SQLITE_DONE; - } - }else{ - /* This call is to merge all segments at level iLevel. Find the next - ** available segment index at level iLevel+1. The call to - ** fts3AllocateSegdirIdx() will merge the segments at level iLevel+1 to - ** a single iLevel+2 segment if necessary. - */ - iNewLevel = iLevel+1; - rc = fts3AllocateSegdirIdx(p, iNewLevel, &iIdx); - if( rc!=SQLITE_OK ) goto finished; - rc = fts3SegmentCount(p, iLevel, &nSegment); - if( rc!=SQLITE_OK ) goto finished; - } - assert( nSegment>0 ); - assert( iNewLevel>=0 ); - - /* Allocate space for an array of pointers to segment iterators. */ - apSegment = (Fts3SegReader**)sqlite3_malloc(sizeof(Fts3SegReader *)*nSegment); - if( !apSegment ){ - rc = SQLITE_NOMEM; - goto finished; - } - memset(apSegment, 0, sizeof(Fts3SegReader *)*nSegment); - - /* Allocate a Fts3SegReader structure for each segment being merged. A - ** Fts3SegReader stores the state data required to iterate through all - ** entries on all leaves of a single segment. 
- */ - assert( SQL_SELECT_LEVEL+1==SQL_SELECT_ALL_LEVEL); - rc = fts3SqlStmt(p, SQL_SELECT_LEVEL+(iLevel<0), &pStmt, 0); - if( rc!=SQLITE_OK ) goto finished; - sqlite3_bind_int(pStmt, 1, iLevel); - for(i=0; SQLITE_ROW==(sqlite3_step(pStmt)); i++){ - rc = fts3SegReaderNew(p, pStmt, i, &apSegment[i]); - if( rc!=SQLITE_OK ){ - goto finished; - } - } - rc = sqlite3_reset(pStmt); - if( pPending ){ - apSegment[i] = pPending; - pPending = 0; - } - pStmt = 0; - if( rc!=SQLITE_OK ) goto finished; +static int fts3SegmentMerge( + Fts3Table *p, + int iLangid, /* Language id to merge */ + int iIndex, /* Index in p->aIndex[] to merge */ + int iLevel /* Level to merge */ +){ + int rc; /* Return code */ + int iIdx = 0; /* Index of new segment */ + sqlite3_int64 iNewLevel = 0; /* Level/index to create new segment at */ + SegmentWriter *pWriter = 0; /* Used to write the new, merged, segment */ + Fts3SegFilter filter; /* Segment term filter condition */ + Fts3MultiSegReader csr; /* Cursor to iterate through level(s) */ + int bIgnoreEmpty = 0; /* True to ignore empty segments */ + i64 iMaxLevel = 0; /* Max level number for this index/langid */ + + assert( iLevel==FTS3_SEGCURSOR_ALL + || iLevel==FTS3_SEGCURSOR_PENDING + || iLevel>=0 + ); + assert( iLevel<FTS3_SEGDIR_MAXLEVEL ); + assert( iIndex>=0 && iIndex<p->nIndex ); + + rc = sqlite3Fts3SegReaderCursor(p, iLangid, iIndex, iLevel, 0, 0, 1, 0, &csr); + if( rc!=SQLITE_OK || csr.nSegment==0 ) goto finished; + + if( iLevel!=FTS3_SEGCURSOR_PENDING ){ + rc = fts3SegmentMaxLevel(p, iLangid, iIndex, &iMaxLevel); + if( rc!=SQLITE_OK ) goto finished; + } + + if( iLevel==FTS3_SEGCURSOR_ALL ){ + /* This call is to merge all segments in the database to a single + ** segment. The level of the new segment is equal to the numerically + ** greatest segment level currently present in the database for this + ** index. The idx of the new segment is always 0. */ + if( csr.nSegment==1 ){ + rc = SQLITE_DONE; + goto finished; + } + iNewLevel = iMaxLevel; + bIgnoreEmpty = 1; + + }else{ + /* This call is to merge all segments at level iLevel. find the next + ** available segment index at level iLevel+1. The call to + ** fts3AllocateSegdirIdx() will merge the segments at level iLevel+1 to + ** a single iLevel+2 segment if necessary. */ + assert( FTS3_SEGCURSOR_PENDING==-1 ); + iNewLevel = getAbsoluteLevel(p, iLangid, iIndex, iLevel+1); + rc = fts3AllocateSegdirIdx(p, iLangid, iIndex, iLevel+1, &iIdx); + bIgnoreEmpty = (iLevel!=FTS3_SEGCURSOR_PENDING) && (iNewLevel>iMaxLevel); + } + if( rc!=SQLITE_OK ) goto finished; + + assert( csr.nSegment>0 ); + assert( iNewLevel>=getAbsoluteLevel(p, iLangid, iIndex, 0) ); + assert( iNewLevel<getAbsoluteLevel(p, iLangid, iIndex,FTS3_SEGDIR_MAXLEVEL) ); memset(&filter, 0, sizeof(Fts3SegFilter)); filter.flags = FTS3_SEGMENT_REQUIRE_POS; - filter.flags |= (iLevel<0 ? FTS3_SEGMENT_IGNORE_EMPTY : 0); - rc = sqlite3Fts3SegReaderIterate(p, apSegment, nSegment, - &filter, fts3MergeCallback, (void *)&pWriter - ); + filter.flags |= (bIgnoreEmpty ? 
FTS3_SEGMENT_IGNORE_EMPTY : 0); + + rc = sqlite3Fts3SegReaderStart(p, &csr, &filter); + while( SQLITE_OK==rc ){ + rc = sqlite3Fts3SegReaderStep(p, &csr); + if( rc!=SQLITE_ROW ) break; + rc = fts3SegWriterAdd(p, &pWriter, 1, + csr.zTerm, csr.nTerm, csr.aDoclist, csr.nDoclist); + } if( rc!=SQLITE_OK ) goto finished; + assert( pWriter || bIgnoreEmpty ); - rc = fts3DeleteSegdir(p, iLevel, apSegment, nSegment); - if( rc==SQLITE_OK ){ + if( iLevel!=FTS3_SEGCURSOR_PENDING ){ + rc = fts3DeleteSegdir( + p, iLangid, iIndex, iLevel, csr.apSegment, csr.nSegment + ); + if( rc!=SQLITE_OK ) goto finished; + } + if( pWriter ){ rc = fts3SegWriterFlush(p, pWriter, iNewLevel, iIdx); + if( rc==SQLITE_OK ){ + if( iLevel==FTS3_SEGCURSOR_PENDING || iNewLevel<iMaxLevel ){ + rc = fts3PromoteSegments(p, iNewLevel, pWriter->nLeafData); + } + } } finished: fts3SegWriterFree(pWriter); - if( apSegment ){ - for(i=0; i<nSegment; i++){ - sqlite3Fts3SegReaderFree(p, apSegment[i]); - } - sqlite3_free(apSegment); - } - sqlite3Fts3SegReaderFree(p, pPending); - sqlite3_reset(pStmt); + sqlite3Fts3SegReaderFinish(&csr); return rc; } /* -** Flush the contents of pendingTerms to a level 0 segment. +** Flush the contents of pendingTerms to level 0 segments. */ SQLITE_PRIVATE int sqlite3Fts3PendingTermsFlush(Fts3Table *p){ - int rc; /* Return Code */ - int idx; /* Index of new segment created */ - SegmentWriter *pWriter = 0; /* Used to write the segment */ - Fts3SegReader *pReader = 0; /* Used to iterate through the hash table */ - - /* Allocate a SegReader object to iterate through the contents of the - ** pending-terms table. If an error occurs, or if there are no terms - ** in the pending-terms table, return immediately. - */ - rc = sqlite3Fts3SegReaderPending(p, 0, 0, 1, &pReader); - if( rc!=SQLITE_OK || pReader==0 ){ - return rc; - } - - /* Determine the next index at level 0. If level 0 is already full, this - ** call may merge all existing level 0 segments into a single level 1 - ** segment. - */ - rc = fts3AllocateSegdirIdx(p, 0, &idx); - - /* If no errors have occured, iterate through the contents of the - ** pending-terms hash table using the Fts3SegReader iterator. The callback - ** writes each term (along with its doclist) to the database via the - ** SegmentWriter handle pWriter. - */ - if( rc==SQLITE_OK ){ - void *c = (void *)&pWriter; /* SegReaderIterate() callback context */ - Fts3SegFilter f; /* SegReaderIterate() parameters */ - - memset(&f, 0, sizeof(Fts3SegFilter)); - f.flags = FTS3_SEGMENT_REQUIRE_POS; - rc = sqlite3Fts3SegReaderIterate(p, &pReader, 1, &f, fts3FlushCallback, c); - } - assert( pWriter || rc!=SQLITE_OK ); - - /* If no errors have occured, flush the SegmentWriter object to the - ** database. Then delete the SegmentWriter and Fts3SegReader objects - ** allocated by this function. - */ - if( rc==SQLITE_OK ){ - rc = fts3SegWriterFlush(p, pWriter, 0, idx); - } - fts3SegWriterFree(pWriter); - sqlite3Fts3SegReaderFree(p, pReader); - - if( rc==SQLITE_OK ){ - sqlite3Fts3PendingTermsClear(p); + int rc = SQLITE_OK; + int i; + + for(i=0; rc==SQLITE_OK && i<p->nIndex; i++){ + rc = fts3SegmentMerge(p, p->iPrevLangid, i, FTS3_SEGCURSOR_PENDING); + if( rc==SQLITE_DONE ) rc = SQLITE_OK; + } + sqlite3Fts3PendingTermsClear(p); + + /* Determine the auto-incr-merge setting if unknown. 
If enabled, + ** estimate the number of leaf blocks of content to be written + */ + if( rc==SQLITE_OK && p->bHasStat + && p->nAutoincrmerge==0xff && p->nLeafAdd>0 + ){ + sqlite3_stmt *pStmt = 0; + rc = fts3SqlStmt(p, SQL_SELECT_STAT, &pStmt, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int(pStmt, 1, FTS_STAT_AUTOINCRMERGE); + rc = sqlite3_step(pStmt); + if( rc==SQLITE_ROW ){ + p->nAutoincrmerge = sqlite3_column_int(pStmt, 0); + if( p->nAutoincrmerge==1 ) p->nAutoincrmerge = 8; + }else if( rc==SQLITE_DONE ){ + p->nAutoincrmerge = 0; + } + rc = sqlite3_reset(pStmt); + } } return rc; } /* @@ -108061,88 +151353,19 @@ assert(j<=nBuf); a[i] = (u32)(x & 0xffffffff); } } -/* -** Fill in the document size auxiliary information for the matchinfo -** structure. The auxiliary information is: -** -** N Total number of documents in the full-text index -** a0 Average length of column 0 over the whole index -** n0 Length of column 0 on the matching row -** ... -** aM Average length of column M over the whole index -** nM Length of column M on the matching row -** -** The fts3MatchinfoDocsizeLocal() routine fills in the nX values. -** The fts3MatchinfoDocsizeGlobal() routine fills in N and the aX values. -*/ -SQLITE_PRIVATE int sqlite3Fts3MatchinfoDocsizeLocal(Fts3Cursor *pCur, u32 *a){ - const char *pBlob; /* The BLOB holding %_docsize info */ - int nBlob; /* Size of the BLOB */ - sqlite3_stmt *pStmt; /* Statement for reading and writing */ - int i, j; /* Loop counters */ - sqlite3_int64 x; /* Varint value */ - int rc; /* Result code from subfunctions */ - Fts3Table *p; /* The FTS table */ - - p = (Fts3Table*)pCur->base.pVtab; - rc = fts3SqlStmt(p, SQL_SELECT_DOCSIZE, &pStmt, 0); - if( rc ){ - return rc; - } - sqlite3_bind_int64(pStmt, 1, pCur->iPrevId); - if( sqlite3_step(pStmt)==SQLITE_ROW ){ - nBlob = sqlite3_column_bytes(pStmt, 0); - pBlob = (const char*)sqlite3_column_blob(pStmt, 0); - for(i=j=0; i<p->nColumn && j<nBlob; i++){ - j = sqlite3Fts3GetVarint(&pBlob[j], &x); - a[2+i*2] = (u32)(x & 0xffffffff); - } - } - sqlite3_reset(pStmt); - return SQLITE_OK; -} -SQLITE_PRIVATE int sqlite3Fts3MatchinfoDocsizeGlobal(Fts3Cursor *pCur, u32 *a){ - const char *pBlob; /* The BLOB holding %_stat info */ - int nBlob; /* Size of the BLOB */ - sqlite3_stmt *pStmt; /* Statement for reading and writing */ - int i, j; /* Loop counters */ - sqlite3_int64 x; /* Varint value */ - int nDoc; /* Number of documents */ - int rc; /* Result code from subfunctions */ - Fts3Table *p; /* The FTS table */ - - p = (Fts3Table*)pCur->base.pVtab; - rc = fts3SqlStmt(p, SQL_SELECT_DOCTOTAL, &pStmt, 0); - if( rc ){ - return rc; - } - if( sqlite3_step(pStmt)==SQLITE_ROW ){ - nBlob = sqlite3_column_bytes(pStmt, 0); - pBlob = (const char*)sqlite3_column_blob(pStmt, 0); - j = sqlite3Fts3GetVarint(pBlob, &x); - a[0] = nDoc = (u32)(x & 0xffffffff); - for(i=0; i<p->nColumn && j<nBlob; i++){ - j = sqlite3Fts3GetVarint(&pBlob[j], &x); - a[1+i*2] = ((u32)(x & 0xffffffff) + nDoc/2)/nDoc; - } - } - sqlite3_reset(pStmt); - return SQLITE_OK; -} - /* ** Insert the sizes (in tokens) for each column of the document ** with docid equal to p->iPrevDocid. The sizes are encoded as ** a blob of varints. 
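+** For example (illustrative sizes): a row with 4 tokens in column 0 and
+** 10 tokens in column 1 is encoded as the two varints 4 and 10.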
*/ static void fts3InsertDocsize( - int *pRC, /* Result code */ - Fts3Table *p, /* Table into which to insert */ - u32 *aSz /* Sizes of each column */ + int *pRC, /* Result code */ + Fts3Table *p, /* Table into which to insert */ + u32 *aSz /* Sizes of each column, in tokens */ ){ char *pBlob; /* The BLOB encoding of the document size */ int nBlob; /* Number of bytes in the BLOB */ sqlite3_stmt *pStmt; /* Statement used to insert the encoding */ int rc; /* Result code from subfunctions */ @@ -108165,75 +151388,1921 @@ sqlite3_step(pStmt); *pRC = sqlite3_reset(pStmt); } /* -** Update the 0 record of the %_stat table so that it holds a blob -** which contains the document count followed by the cumulative -** document sizes for all columns. +** Record 0 of the %_stat table contains a blob consisting of N varints, +** where N is the number of user defined columns in the fts3 table plus +** two. If nCol is the number of user defined columns, then values of the +** varints are set as follows: +** +** Varint 0: Total number of rows in the table. +** +** Varint 1..nCol: For each column, the total number of tokens stored in +** the column for all rows of the table. +** +** Varint 1+nCol: The total size, in bytes, of all text values in all +** columns of all rows of the table. +** */ static void fts3UpdateDocTotals( - int *pRC, /* The result code */ - Fts3Table *p, /* Table being updated */ - u32 *aSzIns, /* Size increases */ - u32 *aSzDel, /* Size decreases */ - int nChng /* Change in the number of documents */ + int *pRC, /* The result code */ + Fts3Table *p, /* Table being updated */ + u32 *aSzIns, /* Size increases */ + u32 *aSzDel, /* Size decreases */ + int nChng /* Change in the number of documents */ ){ char *pBlob; /* Storage for BLOB written into %_stat */ int nBlob; /* Size of BLOB written into %_stat */ u32 *a; /* Array of integers that becomes the BLOB */ sqlite3_stmt *pStmt; /* Statement for reading and writing */ int i; /* Loop counter */ int rc; /* Result code from subfunctions */ + const int nStat = p->nColumn+2; + if( *pRC ) return; - a = sqlite3_malloc( (sizeof(u32)+10)*(p->nColumn+1) ); + a = sqlite3_malloc( (sizeof(u32)+10)*nStat ); if( a==0 ){ *pRC = SQLITE_NOMEM; return; } - pBlob = (char*)&a[p->nColumn+1]; - rc = fts3SqlStmt(p, SQL_SELECT_DOCTOTAL, &pStmt, 0); + pBlob = (char*)&a[nStat]; + rc = fts3SqlStmt(p, SQL_SELECT_STAT, &pStmt, 0); if( rc ){ sqlite3_free(a); *pRC = rc; return; } + sqlite3_bind_int(pStmt, 1, FTS_STAT_DOCTOTAL); if( sqlite3_step(pStmt)==SQLITE_ROW ){ - fts3DecodeIntArray(p->nColumn+1, a, + fts3DecodeIntArray(nStat, a, sqlite3_column_blob(pStmt, 0), sqlite3_column_bytes(pStmt, 0)); }else{ - memset(a, 0, sizeof(u32)*(p->nColumn+1) ); + memset(a, 0, sizeof(u32)*(nStat) ); } - sqlite3_reset(pStmt); + rc = sqlite3_reset(pStmt); + if( rc!=SQLITE_OK ){ + sqlite3_free(a); + *pRC = rc; + return; + } if( nChng<0 && a[0]<(u32)(-nChng) ){ a[0] = 0; }else{ a[0] += nChng; } - for(i=0; i<p->nColumn; i++){ + for(i=0; i<p->nColumn+1; i++){ u32 x = a[i+1]; if( x+aSzIns[i] < aSzDel[i] ){ x = 0; }else{ x = x + aSzIns[i] - aSzDel[i]; } a[i+1] = x; } - fts3EncodeIntArray(p->nColumn+1, a, pBlob, &nBlob); - rc = fts3SqlStmt(p, SQL_REPLACE_DOCTOTAL, &pStmt, 0); + fts3EncodeIntArray(nStat, a, pBlob, &nBlob); + rc = fts3SqlStmt(p, SQL_REPLACE_STAT, &pStmt, 0); if( rc ){ sqlite3_free(a); *pRC = rc; return; } - sqlite3_bind_blob(pStmt, 1, pBlob, nBlob, SQLITE_STATIC); + sqlite3_bind_int(pStmt, 1, FTS_STAT_DOCTOTAL); + sqlite3_bind_blob(pStmt, 2, pBlob, nBlob, SQLITE_STATIC); 
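+  /* Illustrative layout of the blob bound above for a table with two user
+  ** columns: varint(total rows), varint(tokens in column 0),
+  ** varint(tokens in column 1), varint(total bytes of text) - matching the
+  ** description in the header comment of this function. */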
sqlite3_step(pStmt); *pRC = sqlite3_reset(pStmt); sqlite3_free(a); } + +/* +** Merge the entire database so that there is one segment for each +** iIndex/iLangid combination. +*/ +static int fts3DoOptimize(Fts3Table *p, int bReturnDone){ + int bSeenDone = 0; + int rc; + sqlite3_stmt *pAllLangid = 0; + + rc = fts3SqlStmt(p, SQL_SELECT_ALL_LANGID, &pAllLangid, 0); + if( rc==SQLITE_OK ){ + int rc2; + sqlite3_bind_int(pAllLangid, 1, p->iPrevLangid); + sqlite3_bind_int(pAllLangid, 2, p->nIndex); + while( sqlite3_step(pAllLangid)==SQLITE_ROW ){ + int i; + int iLangid = sqlite3_column_int(pAllLangid, 0); + for(i=0; rc==SQLITE_OK && i<p->nIndex; i++){ + rc = fts3SegmentMerge(p, iLangid, i, FTS3_SEGCURSOR_ALL); + if( rc==SQLITE_DONE ){ + bSeenDone = 1; + rc = SQLITE_OK; + } + } + } + rc2 = sqlite3_reset(pAllLangid); + if( rc==SQLITE_OK ) rc = rc2; + } + + sqlite3Fts3SegmentsClose(p); + sqlite3Fts3PendingTermsClear(p); + + return (rc==SQLITE_OK && bReturnDone && bSeenDone) ? SQLITE_DONE : rc; +} + +/* +** This function is called when the user executes the following statement: +** +** INSERT INTO <tbl>(<tbl>) VALUES('rebuild'); +** +** The entire FTS index is discarded and rebuilt. If the table is one +** created using the content=xxx option, then the new index is based on +** the current contents of the xxx table. Otherwise, it is rebuilt based +** on the contents of the %_content table. +*/ +static int fts3DoRebuild(Fts3Table *p){ + int rc; /* Return Code */ + + rc = fts3DeleteAll(p, 0); + if( rc==SQLITE_OK ){ + u32 *aSz = 0; + u32 *aSzIns = 0; + u32 *aSzDel = 0; + sqlite3_stmt *pStmt = 0; + int nEntry = 0; + + /* Compose and prepare an SQL statement to loop through the content table */ + char *zSql = sqlite3_mprintf("SELECT %s" , p->zReadExprlist); + if( !zSql ){ + rc = SQLITE_NOMEM; + }else{ + rc = sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0); + sqlite3_free(zSql); + } + + if( rc==SQLITE_OK ){ + int nByte = sizeof(u32) * (p->nColumn+1)*3; + aSz = (u32 *)sqlite3_malloc(nByte); + if( aSz==0 ){ + rc = SQLITE_NOMEM; + }else{ + memset(aSz, 0, nByte); + aSzIns = &aSz[p->nColumn+1]; + aSzDel = &aSzIns[p->nColumn+1]; + } + } + + while( rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pStmt) ){ + int iCol; + int iLangid = langidFromSelect(p, pStmt); + rc = fts3PendingTermsDocid(p, 0, iLangid, sqlite3_column_int64(pStmt, 0)); + memset(aSz, 0, sizeof(aSz[0]) * (p->nColumn+1)); + for(iCol=0; rc==SQLITE_OK && iCol<p->nColumn; iCol++){ + if( p->abNotindexed[iCol]==0 ){ + const char *z = (const char *) sqlite3_column_text(pStmt, iCol+1); + rc = fts3PendingTermsAdd(p, iLangid, z, iCol, &aSz[iCol]); + aSz[p->nColumn] += sqlite3_column_bytes(pStmt, iCol+1); + } + } + if( p->bHasDocsize ){ + fts3InsertDocsize(&rc, p, aSz); + } + if( rc!=SQLITE_OK ){ + sqlite3_finalize(pStmt); + pStmt = 0; + }else{ + nEntry++; + for(iCol=0; iCol<=p->nColumn; iCol++){ + aSzIns[iCol] += aSz[iCol]; + } + } + } + if( p->bFts4 ){ + fts3UpdateDocTotals(&rc, p, aSzIns, aSzDel, nEntry); + } + sqlite3_free(aSz); + + if( pStmt ){ + int rc2 = sqlite3_finalize(pStmt); + if( rc==SQLITE_OK ){ + rc = rc2; + } + } + } + + return rc; +} + + +/* +** This function opens a cursor used to read the input data for an +** incremental merge operation. Specifically, it opens a cursor to scan +** the oldest nSeg segments (idx=0 through idx=(nSeg-1)) in absolute +** level iAbsLevel. 
+*/ +static int fts3IncrmergeCsr( + Fts3Table *p, /* FTS3 table handle */ + sqlite3_int64 iAbsLevel, /* Absolute level to open */ + int nSeg, /* Number of segments to merge */ + Fts3MultiSegReader *pCsr /* Cursor object to populate */ +){ + int rc; /* Return Code */ + sqlite3_stmt *pStmt = 0; /* Statement used to read %_segdir entry */ + int nByte; /* Bytes allocated at pCsr->apSegment[] */ + + /* Allocate space for the Fts3MultiSegReader.aCsr[] array */ + memset(pCsr, 0, sizeof(*pCsr)); + nByte = sizeof(Fts3SegReader *) * nSeg; + pCsr->apSegment = (Fts3SegReader **)sqlite3_malloc(nByte); + + if( pCsr->apSegment==0 ){ + rc = SQLITE_NOMEM; + }else{ + memset(pCsr->apSegment, 0, nByte); + rc = fts3SqlStmt(p, SQL_SELECT_LEVEL, &pStmt, 0); + } + if( rc==SQLITE_OK ){ + int i; + int rc2; + sqlite3_bind_int64(pStmt, 1, iAbsLevel); + assert( pCsr->nSegment==0 ); + for(i=0; rc==SQLITE_OK && sqlite3_step(pStmt)==SQLITE_ROW && i<nSeg; i++){ + rc = sqlite3Fts3SegReaderNew(i, 0, + sqlite3_column_int64(pStmt, 1), /* segdir.start_block */ + sqlite3_column_int64(pStmt, 2), /* segdir.leaves_end_block */ + sqlite3_column_int64(pStmt, 3), /* segdir.end_block */ + sqlite3_column_blob(pStmt, 4), /* segdir.root */ + sqlite3_column_bytes(pStmt, 4), /* segdir.root */ + &pCsr->apSegment[i] + ); + pCsr->nSegment++; + } + rc2 = sqlite3_reset(pStmt); + if( rc==SQLITE_OK ) rc = rc2; + } + + return rc; +} + +typedef struct IncrmergeWriter IncrmergeWriter; +typedef struct NodeWriter NodeWriter; +typedef struct Blob Blob; +typedef struct NodeReader NodeReader; + +/* +** An instance of the following structure is used as a dynamic buffer +** to build up nodes or other blobs of data in. +** +** The function blobGrowBuffer() is used to extend the allocation. +*/ +struct Blob { + char *a; /* Pointer to allocation */ + int n; /* Number of valid bytes of data in a[] */ + int nAlloc; /* Allocated size of a[] (nAlloc>=n) */ +}; + +/* +** This structure is used to build up buffers containing segment b-tree +** nodes (blocks). +*/ +struct NodeWriter { + sqlite3_int64 iBlock; /* Current block id */ + Blob key; /* Last key written to the current block */ + Blob block; /* Current block image */ +}; + +/* +** An object of this type contains the state required to create or append +** to an appendable b-tree segment. +*/ +struct IncrmergeWriter { + int nLeafEst; /* Space allocated for leaf blocks */ + int nWork; /* Number of leaf pages flushed */ + sqlite3_int64 iAbsLevel; /* Absolute level of input segments */ + int iIdx; /* Index of *output* segment in iAbsLevel+1 */ + sqlite3_int64 iStart; /* Block number of first allocated block */ + sqlite3_int64 iEnd; /* Block number of last allocated block */ + sqlite3_int64 nLeafData; /* Bytes of leaf page data so far */ + u8 bNoLeafData; /* If true, store 0 for segment size */ + NodeWriter aNodeWriter[FTS_MAX_APPENDABLE_HEIGHT]; +}; + +/* +** An object of the following type is used to read data from a single +** FTS segment node. See the following functions: +** +** nodeReaderInit() +** nodeReaderNext() +** nodeReaderRelease() +*/ +struct NodeReader { + const char *aNode; + int nNode; + int iOff; /* Current offset within aNode[] */ + + /* Output variables. Containing the current node entry. */ + sqlite3_int64 iChild; /* Pointer to child node */ + Blob term; /* Current term */ + const char *aDoclist; /* Pointer to doclist */ + int nDoclist; /* Size of doclist in bytes */ +}; + +/* +** If *pRc is not SQLITE_OK when this function is called, it is a no-op. 
+** Otherwise, if the allocation at pBlob->a is not already at least nMin +** bytes in size, extend (realloc) it to be so. +** +** If an OOM error occurs, set *pRc to SQLITE_NOMEM and leave pBlob->a +** unmodified. Otherwise, if the allocation succeeds, update pBlob->nAlloc +** to reflect the new size of the pBlob->a[] buffer. +*/ +static void blobGrowBuffer(Blob *pBlob, int nMin, int *pRc){ + if( *pRc==SQLITE_OK && nMin>pBlob->nAlloc ){ + int nAlloc = nMin; + char *a = (char *)sqlite3_realloc(pBlob->a, nAlloc); + if( a ){ + pBlob->nAlloc = nAlloc; + pBlob->a = a; + }else{ + *pRc = SQLITE_NOMEM; + } + } +} + +/* +** Attempt to advance the node-reader object passed as the first argument to +** the next entry on the node. +** +** Return an error code if an error occurs (SQLITE_NOMEM is possible). +** Otherwise return SQLITE_OK. If there is no next entry on the node +** (e.g. because the current entry is the last) set NodeReader->aNode to +** NULL to indicate EOF. Otherwise, populate the NodeReader structure output +** variables for the new entry. +*/ +static int nodeReaderNext(NodeReader *p){ + int bFirst = (p->term.n==0); /* True for first term on the node */ + int nPrefix = 0; /* Bytes to copy from previous term */ + int nSuffix = 0; /* Bytes to append to the prefix */ + int rc = SQLITE_OK; /* Return code */ + + assert( p->aNode ); + if( p->iChild && bFirst==0 ) p->iChild++; + if( p->iOff>=p->nNode ){ + /* EOF */ + p->aNode = 0; + }else{ + if( bFirst==0 ){ + p->iOff += fts3GetVarint32(&p->aNode[p->iOff], &nPrefix); + } + p->iOff += fts3GetVarint32(&p->aNode[p->iOff], &nSuffix); + + blobGrowBuffer(&p->term, nPrefix+nSuffix, &rc); + if( rc==SQLITE_OK ){ + memcpy(&p->term.a[nPrefix], &p->aNode[p->iOff], nSuffix); + p->term.n = nPrefix+nSuffix; + p->iOff += nSuffix; + if( p->iChild==0 ){ + p->iOff += fts3GetVarint32(&p->aNode[p->iOff], &p->nDoclist); + p->aDoclist = &p->aNode[p->iOff]; + p->iOff += p->nDoclist; + } + } + } + + assert( p->iOff<=p->nNode ); + + return rc; +} + +/* +** Release all dynamic resources held by node-reader object *p. +*/ +static void nodeReaderRelease(NodeReader *p){ + sqlite3_free(p->term.a); +} + +/* +** Initialize a node-reader object to read the node in buffer aNode/nNode. +** +** If successful, SQLITE_OK is returned and the NodeReader object set to +** point to the first entry on the node (if any). Otherwise, an SQLite +** error code is returned. +*/ +static int nodeReaderInit(NodeReader *p, const char *aNode, int nNode){ + memset(p, 0, sizeof(NodeReader)); + p->aNode = aNode; + p->nNode = nNode; + + /* Figure out if this is a leaf or an internal node. */ + if( p->aNode[0] ){ + /* An internal node. */ + p->iOff = 1 + sqlite3Fts3GetVarint(&p->aNode[1], &p->iChild); + }else{ + p->iOff = 1; + } + + return nodeReaderNext(p); +} + +/* +** This function is called while writing an FTS segment each time a leaf o +** node is finished and written to disk. The key (zTerm/nTerm) is guaranteed +** to be greater than the largest key on the node just written, but smaller +** than or equal to the first key that will be written to the next leaf +** node. +** +** The block id of the leaf node just written to disk may be found in +** (pWriter->aNodeWriter[0].iBlock) when this function is called. 
+*/ +static int fts3IncrmergePush( + Fts3Table *p, /* Fts3 table handle */ + IncrmergeWriter *pWriter, /* Writer object */ + const char *zTerm, /* Term to write to internal node */ + int nTerm /* Bytes at zTerm */ +){ + sqlite3_int64 iPtr = pWriter->aNodeWriter[0].iBlock; + int iLayer; + + assert( nTerm>0 ); + for(iLayer=1; ALWAYS(iLayer<FTS_MAX_APPENDABLE_HEIGHT); iLayer++){ + sqlite3_int64 iNextPtr = 0; + NodeWriter *pNode = &pWriter->aNodeWriter[iLayer]; + int rc = SQLITE_OK; + int nPrefix; + int nSuffix; + int nSpace; + + /* Figure out how much space the key will consume if it is written to + ** the current node of layer iLayer. Due to the prefix compression, + ** the space required changes depending on which node the key is to + ** be added to. */ + nPrefix = fts3PrefixCompress(pNode->key.a, pNode->key.n, zTerm, nTerm); + nSuffix = nTerm - nPrefix; + nSpace = sqlite3Fts3VarintLen(nPrefix); + nSpace += sqlite3Fts3VarintLen(nSuffix) + nSuffix; + + if( pNode->key.n==0 || (pNode->block.n + nSpace)<=p->nNodeSize ){ + /* If the current node of layer iLayer contains zero keys, or if adding + ** the key to it will not cause it to grow to larger than nNodeSize + ** bytes in size, write the key here. */ + + Blob *pBlk = &pNode->block; + if( pBlk->n==0 ){ + blobGrowBuffer(pBlk, p->nNodeSize, &rc); + if( rc==SQLITE_OK ){ + pBlk->a[0] = (char)iLayer; + pBlk->n = 1 + sqlite3Fts3PutVarint(&pBlk->a[1], iPtr); + } + } + blobGrowBuffer(pBlk, pBlk->n + nSpace, &rc); + blobGrowBuffer(&pNode->key, nTerm, &rc); + + if( rc==SQLITE_OK ){ + if( pNode->key.n ){ + pBlk->n += sqlite3Fts3PutVarint(&pBlk->a[pBlk->n], nPrefix); + } + pBlk->n += sqlite3Fts3PutVarint(&pBlk->a[pBlk->n], nSuffix); + memcpy(&pBlk->a[pBlk->n], &zTerm[nPrefix], nSuffix); + pBlk->n += nSuffix; + + memcpy(pNode->key.a, zTerm, nTerm); + pNode->key.n = nTerm; + } + }else{ + /* Otherwise, flush the current node of layer iLayer to disk. + ** Then allocate a new, empty sibling node. The key will be written + ** into the parent of this node. */ + rc = fts3WriteSegment(p, pNode->iBlock, pNode->block.a, pNode->block.n); + + assert( pNode->block.nAlloc>=p->nNodeSize ); + pNode->block.a[0] = (char)iLayer; + pNode->block.n = 1 + sqlite3Fts3PutVarint(&pNode->block.a[1], iPtr+1); + + iNextPtr = pNode->iBlock; + pNode->iBlock++; + pNode->key.n = 0; + } + + if( rc!=SQLITE_OK || iNextPtr==0 ) return rc; + iPtr = iNextPtr; + } + + assert( 0 ); + return 0; +} + +/* +** Append a term and (optionally) doclist to the FTS segment node currently +** stored in blob *pNode. The node need not contain any terms, but the +** header must be written before this function is called. +** +** A node header is a single 0x00 byte for a leaf node, or a height varint +** followed by the left-hand-child varint for an internal node. +** +** The term to be appended is passed via arguments zTerm/nTerm. For a +** leaf node, the doclist is passed as aDoclist/nDoclist. For an internal +** node, both aDoclist and nDoclist must be passed 0. +** +** If the size of the value in blob pPrev is zero, then this is the first +** term written to the node. Otherwise, pPrev contains a copy of the +** previous term. Before this function returns, it is updated to contain a +** copy of zTerm/nTerm. +** +** It is assumed that the buffer associated with pNode is already large +** enough to accommodate the new entry. The buffer associated with pPrev +** is extended by this function if requrired. +** +** If an error (i.e. OOM condition) occurs, an SQLite error code is +** returned. Otherwise, SQLITE_OK. 
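+**
+** For example (illustrative terms): if the previous term written was
+** "append" and the new term is "appendable", then nPrefix==6 and
+** nSuffix==4, and the entry appended to a leaf node consists of
+** varint(6), varint(4), the bytes "able", varint(nDoclist) and the
+** doclist itself.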
+*/ +static int fts3AppendToNode( + Blob *pNode, /* Current node image to append to */ + Blob *pPrev, /* Buffer containing previous term written */ + const char *zTerm, /* New term to write */ + int nTerm, /* Size of zTerm in bytes */ + const char *aDoclist, /* Doclist (or NULL) to write */ + int nDoclist /* Size of aDoclist in bytes */ +){ + int rc = SQLITE_OK; /* Return code */ + int bFirst = (pPrev->n==0); /* True if this is the first term written */ + int nPrefix; /* Size of term prefix in bytes */ + int nSuffix; /* Size of term suffix in bytes */ + + /* Node must have already been started. There must be a doclist for a + ** leaf node, and there must not be a doclist for an internal node. */ + assert( pNode->n>0 ); + assert( (pNode->a[0]=='\0')==(aDoclist!=0) ); + + blobGrowBuffer(pPrev, nTerm, &rc); + if( rc!=SQLITE_OK ) return rc; + + nPrefix = fts3PrefixCompress(pPrev->a, pPrev->n, zTerm, nTerm); + nSuffix = nTerm - nPrefix; + memcpy(pPrev->a, zTerm, nTerm); + pPrev->n = nTerm; + + if( bFirst==0 ){ + pNode->n += sqlite3Fts3PutVarint(&pNode->a[pNode->n], nPrefix); + } + pNode->n += sqlite3Fts3PutVarint(&pNode->a[pNode->n], nSuffix); + memcpy(&pNode->a[pNode->n], &zTerm[nPrefix], nSuffix); + pNode->n += nSuffix; + + if( aDoclist ){ + pNode->n += sqlite3Fts3PutVarint(&pNode->a[pNode->n], nDoclist); + memcpy(&pNode->a[pNode->n], aDoclist, nDoclist); + pNode->n += nDoclist; + } + + assert( pNode->n<=pNode->nAlloc ); + + return SQLITE_OK; +} + +/* +** Append the current term and doclist pointed to by cursor pCsr to the +** appendable b-tree segment opened for writing by pWriter. +** +** Return SQLITE_OK if successful, or an SQLite error code otherwise. +*/ +static int fts3IncrmergeAppend( + Fts3Table *p, /* Fts3 table handle */ + IncrmergeWriter *pWriter, /* Writer object */ + Fts3MultiSegReader *pCsr /* Cursor containing term and doclist */ +){ + const char *zTerm = pCsr->zTerm; + int nTerm = pCsr->nTerm; + const char *aDoclist = pCsr->aDoclist; + int nDoclist = pCsr->nDoclist; + int rc = SQLITE_OK; /* Return code */ + int nSpace; /* Total space in bytes required on leaf */ + int nPrefix; /* Size of prefix shared with previous term */ + int nSuffix; /* Size of suffix (nTerm - nPrefix) */ + NodeWriter *pLeaf; /* Object used to write leaf nodes */ + + pLeaf = &pWriter->aNodeWriter[0]; + nPrefix = fts3PrefixCompress(pLeaf->key.a, pLeaf->key.n, zTerm, nTerm); + nSuffix = nTerm - nPrefix; + + nSpace = sqlite3Fts3VarintLen(nPrefix); + nSpace += sqlite3Fts3VarintLen(nSuffix) + nSuffix; + nSpace += sqlite3Fts3VarintLen(nDoclist) + nDoclist; + + /* If the current block is not empty, and if adding this term/doclist + ** to the current block would make it larger than Fts3Table.nNodeSize + ** bytes, write this block out to the database. */ + if( pLeaf->block.n>0 && (pLeaf->block.n + nSpace)>p->nNodeSize ){ + rc = fts3WriteSegment(p, pLeaf->iBlock, pLeaf->block.a, pLeaf->block.n); + pWriter->nWork++; + + /* Add the current term to the parent node. The term added to the + ** parent must: + ** + ** a) be greater than the largest term on the leaf node just written + ** to the database (still available in pLeaf->key), and + ** + ** b) be less than or equal to the term about to be added to the new + ** leaf node (zTerm/nTerm). + ** + ** In other words, it must be the prefix of zTerm 1 byte longer than + ** the common prefix (if any) of zTerm and pWriter->zTerm. 
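+    **
+    ** For example (illustrative terms): if the largest key on the leaf just
+    ** flushed is "apple" and zTerm is "apricot", the common prefix is "ap",
+    ** so the 3-byte key "apr" is pushed to the parent - greater than
+    ** "apple", and not greater than "apricot".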
+ */ + if( rc==SQLITE_OK ){ + rc = fts3IncrmergePush(p, pWriter, zTerm, nPrefix+1); + } + + /* Advance to the next output block */ + pLeaf->iBlock++; + pLeaf->key.n = 0; + pLeaf->block.n = 0; + + nSuffix = nTerm; + nSpace = 1; + nSpace += sqlite3Fts3VarintLen(nSuffix) + nSuffix; + nSpace += sqlite3Fts3VarintLen(nDoclist) + nDoclist; + } + + pWriter->nLeafData += nSpace; + blobGrowBuffer(&pLeaf->block, pLeaf->block.n + nSpace, &rc); + if( rc==SQLITE_OK ){ + if( pLeaf->block.n==0 ){ + pLeaf->block.n = 1; + pLeaf->block.a[0] = '\0'; + } + rc = fts3AppendToNode( + &pLeaf->block, &pLeaf->key, zTerm, nTerm, aDoclist, nDoclist + ); + } + + return rc; +} + +/* +** This function is called to release all dynamic resources held by the +** merge-writer object pWriter, and if no error has occurred, to flush +** all outstanding node buffers held by pWriter to disk. +** +** If *pRc is not SQLITE_OK when this function is called, then no attempt +** is made to write any data to disk. Instead, this function serves only +** to release outstanding resources. +** +** Otherwise, if *pRc is initially SQLITE_OK and an error occurs while +** flushing buffers to disk, *pRc is set to an SQLite error code before +** returning. +*/ +static void fts3IncrmergeRelease( + Fts3Table *p, /* FTS3 table handle */ + IncrmergeWriter *pWriter, /* Merge-writer object */ + int *pRc /* IN/OUT: Error code */ +){ + int i; /* Used to iterate through non-root layers */ + int iRoot; /* Index of root in pWriter->aNodeWriter */ + NodeWriter *pRoot; /* NodeWriter for root node */ + int rc = *pRc; /* Error code */ + + /* Set iRoot to the index in pWriter->aNodeWriter[] of the output segment + ** root node. If the segment fits entirely on a single leaf node, iRoot + ** will be set to 0. If the root node is the parent of the leaves, iRoot + ** will be 1. And so on. */ + for(iRoot=FTS_MAX_APPENDABLE_HEIGHT-1; iRoot>=0; iRoot--){ + NodeWriter *pNode = &pWriter->aNodeWriter[iRoot]; + if( pNode->block.n>0 ) break; + assert( *pRc || pNode->block.nAlloc==0 ); + assert( *pRc || pNode->key.nAlloc==0 ); + sqlite3_free(pNode->block.a); + sqlite3_free(pNode->key.a); + } + + /* Empty output segment. This is a no-op. */ + if( iRoot<0 ) return; + + /* The entire output segment fits on a single node. Normally, this means + ** the node would be stored as a blob in the "root" column of the %_segdir + ** table. However, this is not permitted in this case. The problem is that + ** space has already been reserved in the %_segments table, and so the + ** start_block and end_block fields of the %_segdir table must be populated. + ** And, by design or by accident, released versions of FTS cannot handle + ** segments that fit entirely on the root node with start_block!=0. + ** + ** Instead, create a synthetic root node that contains nothing but a + ** pointer to the single content node. So that the segment consists of a + ** single leaf and a single interior (root) node. + ** + ** Todo: Better might be to defer allocating space in the %_segments + ** table until we are sure it is needed. + */ + if( iRoot==0 ){ + Blob *pBlock = &pWriter->aNodeWriter[1].block; + blobGrowBuffer(pBlock, 1 + FTS3_VARINT_MAX, &rc); + if( rc==SQLITE_OK ){ + pBlock->a[0] = 0x01; + pBlock->n = 1 + sqlite3Fts3PutVarint( + &pBlock->a[1], pWriter->aNodeWriter[0].iBlock + ); + } + iRoot = 1; + } + pRoot = &pWriter->aNodeWriter[iRoot]; + + /* Flush all currently outstanding nodes to disk. 
*/ + for(i=0; i<iRoot; i++){ + NodeWriter *pNode = &pWriter->aNodeWriter[i]; + if( pNode->block.n>0 && rc==SQLITE_OK ){ + rc = fts3WriteSegment(p, pNode->iBlock, pNode->block.a, pNode->block.n); + } + sqlite3_free(pNode->block.a); + sqlite3_free(pNode->key.a); + } + + /* Write the %_segdir record. */ + if( rc==SQLITE_OK ){ + rc = fts3WriteSegdir(p, + pWriter->iAbsLevel+1, /* level */ + pWriter->iIdx, /* idx */ + pWriter->iStart, /* start_block */ + pWriter->aNodeWriter[0].iBlock, /* leaves_end_block */ + pWriter->iEnd, /* end_block */ + (pWriter->bNoLeafData==0 ? pWriter->nLeafData : 0), /* end_block */ + pRoot->block.a, pRoot->block.n /* root */ + ); + } + sqlite3_free(pRoot->block.a); + sqlite3_free(pRoot->key.a); + + *pRc = rc; +} + +/* +** Compare the term in buffer zLhs (size in bytes nLhs) with that in +** zRhs (size in bytes nRhs) using memcmp. If one term is a prefix of +** the other, it is considered to be smaller than the other. +** +** Return -ve if zLhs is smaller than zRhs, 0 if it is equal, or +ve +** if it is greater. +*/ +static int fts3TermCmp( + const char *zLhs, int nLhs, /* LHS of comparison */ + const char *zRhs, int nRhs /* RHS of comparison */ +){ + int nCmp = MIN(nLhs, nRhs); + int res; + + res = memcmp(zLhs, zRhs, nCmp); + if( res==0 ) res = nLhs - nRhs; + + return res; +} + + +/* +** Query to see if the entry in the %_segments table with blockid iEnd is +** NULL. If no error occurs and the entry is NULL, set *pbRes 1 before +** returning. Otherwise, set *pbRes to 0. +** +** Or, if an error occurs while querying the database, return an SQLite +** error code. The final value of *pbRes is undefined in this case. +** +** This is used to test if a segment is an "appendable" segment. If it +** is, then a NULL entry has been inserted into the %_segments table +** with blockid %_segdir.end_block. +*/ +static int fts3IsAppendable(Fts3Table *p, sqlite3_int64 iEnd, int *pbRes){ + int bRes = 0; /* Result to set *pbRes to */ + sqlite3_stmt *pCheck = 0; /* Statement to query database with */ + int rc; /* Return code */ + + rc = fts3SqlStmt(p, SQL_SEGMENT_IS_APPENDABLE, &pCheck, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pCheck, 1, iEnd); + if( SQLITE_ROW==sqlite3_step(pCheck) ) bRes = 1; + rc = sqlite3_reset(pCheck); + } + + *pbRes = bRes; + return rc; +} + +/* +** This function is called when initializing an incremental-merge operation. +** It checks if the existing segment with index value iIdx at absolute level +** (iAbsLevel+1) can be appended to by the incremental merge. If it can, the +** merge-writer object *pWriter is initialized to write to it. +** +** An existing segment can be appended to by an incremental merge if: +** +** * It was initially created as an appendable segment (with all required +** space pre-allocated), and +** +** * The first key read from the input (arguments zKey and nKey) is +** greater than the largest key currently stored in the potential +** output segment. 
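**
** (Editor's note, restating how the two conditions map onto the code rather
** than adding behaviour: the first condition is tested by fts3IsAppendable()
** above, which looks for the reserved NULL row at blockid %_segdir.end_block
** in the %_segments table; the second is tested below by reading the
** right-most leaf (leaves_end_block) of the candidate segment and comparing
** its final term against zKey/nKey using fts3TermCmp().)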
+*/ +static int fts3IncrmergeLoad( + Fts3Table *p, /* Fts3 table handle */ + sqlite3_int64 iAbsLevel, /* Absolute level of input segments */ + int iIdx, /* Index of candidate output segment */ + const char *zKey, /* First key to write */ + int nKey, /* Number of bytes in nKey */ + IncrmergeWriter *pWriter /* Populate this object */ +){ + int rc; /* Return code */ + sqlite3_stmt *pSelect = 0; /* SELECT to read %_segdir entry */ + + rc = fts3SqlStmt(p, SQL_SELECT_SEGDIR, &pSelect, 0); + if( rc==SQLITE_OK ){ + sqlite3_int64 iStart = 0; /* Value of %_segdir.start_block */ + sqlite3_int64 iLeafEnd = 0; /* Value of %_segdir.leaves_end_block */ + sqlite3_int64 iEnd = 0; /* Value of %_segdir.end_block */ + const char *aRoot = 0; /* Pointer to %_segdir.root buffer */ + int nRoot = 0; /* Size of aRoot[] in bytes */ + int rc2; /* Return code from sqlite3_reset() */ + int bAppendable = 0; /* Set to true if segment is appendable */ + + /* Read the %_segdir entry for index iIdx absolute level (iAbsLevel+1) */ + sqlite3_bind_int64(pSelect, 1, iAbsLevel+1); + sqlite3_bind_int(pSelect, 2, iIdx); + if( sqlite3_step(pSelect)==SQLITE_ROW ){ + iStart = sqlite3_column_int64(pSelect, 1); + iLeafEnd = sqlite3_column_int64(pSelect, 2); + fts3ReadEndBlockField(pSelect, 3, &iEnd, &pWriter->nLeafData); + if( pWriter->nLeafData<0 ){ + pWriter->nLeafData = pWriter->nLeafData * -1; + } + pWriter->bNoLeafData = (pWriter->nLeafData==0); + nRoot = sqlite3_column_bytes(pSelect, 4); + aRoot = sqlite3_column_blob(pSelect, 4); + }else{ + return sqlite3_reset(pSelect); + } + + /* Check for the zero-length marker in the %_segments table */ + rc = fts3IsAppendable(p, iEnd, &bAppendable); + + /* Check that zKey/nKey is larger than the largest key the candidate */ + if( rc==SQLITE_OK && bAppendable ){ + char *aLeaf = 0; + int nLeaf = 0; + + rc = sqlite3Fts3ReadBlock(p, iLeafEnd, &aLeaf, &nLeaf, 0); + if( rc==SQLITE_OK ){ + NodeReader reader; + for(rc = nodeReaderInit(&reader, aLeaf, nLeaf); + rc==SQLITE_OK && reader.aNode; + rc = nodeReaderNext(&reader) + ){ + assert( reader.aNode ); + } + if( fts3TermCmp(zKey, nKey, reader.term.a, reader.term.n)<=0 ){ + bAppendable = 0; + } + nodeReaderRelease(&reader); + } + sqlite3_free(aLeaf); + } + + if( rc==SQLITE_OK && bAppendable ){ + /* It is possible to append to this segment. Set up the IncrmergeWriter + ** object to do so. 
*/ + int i; + int nHeight = (int)aRoot[0]; + NodeWriter *pNode; + + pWriter->nLeafEst = (int)((iEnd - iStart) + 1)/FTS_MAX_APPENDABLE_HEIGHT; + pWriter->iStart = iStart; + pWriter->iEnd = iEnd; + pWriter->iAbsLevel = iAbsLevel; + pWriter->iIdx = iIdx; + + for(i=nHeight+1; i<FTS_MAX_APPENDABLE_HEIGHT; i++){ + pWriter->aNodeWriter[i].iBlock = pWriter->iStart + i*pWriter->nLeafEst; + } + + pNode = &pWriter->aNodeWriter[nHeight]; + pNode->iBlock = pWriter->iStart + pWriter->nLeafEst*nHeight; + blobGrowBuffer(&pNode->block, MAX(nRoot, p->nNodeSize), &rc); + if( rc==SQLITE_OK ){ + memcpy(pNode->block.a, aRoot, nRoot); + pNode->block.n = nRoot; + } + + for(i=nHeight; i>=0 && rc==SQLITE_OK; i--){ + NodeReader reader; + pNode = &pWriter->aNodeWriter[i]; + + rc = nodeReaderInit(&reader, pNode->block.a, pNode->block.n); + while( reader.aNode && rc==SQLITE_OK ) rc = nodeReaderNext(&reader); + blobGrowBuffer(&pNode->key, reader.term.n, &rc); + if( rc==SQLITE_OK ){ + memcpy(pNode->key.a, reader.term.a, reader.term.n); + pNode->key.n = reader.term.n; + if( i>0 ){ + char *aBlock = 0; + int nBlock = 0; + pNode = &pWriter->aNodeWriter[i-1]; + pNode->iBlock = reader.iChild; + rc = sqlite3Fts3ReadBlock(p, reader.iChild, &aBlock, &nBlock, 0); + blobGrowBuffer(&pNode->block, MAX(nBlock, p->nNodeSize), &rc); + if( rc==SQLITE_OK ){ + memcpy(pNode->block.a, aBlock, nBlock); + pNode->block.n = nBlock; + } + sqlite3_free(aBlock); + } + } + nodeReaderRelease(&reader); + } + } + + rc2 = sqlite3_reset(pSelect); + if( rc==SQLITE_OK ) rc = rc2; + } + + return rc; +} + +/* +** Determine the largest segment index value that exists within absolute +** level iAbsLevel+1. If no error occurs, set *piIdx to this value plus +** one before returning SQLITE_OK. Or, if there are no segments at all +** within level iAbsLevel, set *piIdx to zero. +** +** If an error occurs, return an SQLite error code. The final value of +** *piIdx is undefined in this case. +*/ +static int fts3IncrmergeOutputIdx( + Fts3Table *p, /* FTS Table handle */ + sqlite3_int64 iAbsLevel, /* Absolute index of input segments */ + int *piIdx /* OUT: Next free index at iAbsLevel+1 */ +){ + int rc; + sqlite3_stmt *pOutputIdx = 0; /* SQL used to find output index */ + + rc = fts3SqlStmt(p, SQL_NEXT_SEGMENT_INDEX, &pOutputIdx, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pOutputIdx, 1, iAbsLevel+1); + sqlite3_step(pOutputIdx); + *piIdx = sqlite3_column_int(pOutputIdx, 0); + rc = sqlite3_reset(pOutputIdx); + } + + return rc; +} + +/* +** Allocate an appendable output segment on absolute level iAbsLevel+1 +** with idx value iIdx. +** +** In the %_segdir table, a segment is defined by the values in three +** columns: +** +** start_block +** leaves_end_block +** end_block +** +** When an appendable segment is allocated, it is estimated that the +** maximum number of leaf blocks that may be required is the sum of the +** number of leaf blocks consumed by the input segments, plus the number +** of input segments, multiplied by two. This value is stored in stack +** variable nLeafEst. +** +** A total of 16*nLeafEst blocks are allocated when an appendable segment +** is created ((1 + end_block - start_block)==16*nLeafEst). The contiguous +** array of leaf nodes starts at the first block allocated. The array +** of interior nodes that are parents of the leaf nodes start at block +** (start_block + (1 + end_block - start_block) / 16). And so on. +** +** In the actual code below, the value "16" is replaced with the +** pre-processor macro FTS_MAX_APPENDABLE_HEIGHT. 
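**
** (Editor's worked example, not part of this check-in: with nLeafEst==10
** and start_block==1000, the reserved region is blocks 1000..1159
** (16*10 blocks). Leaves are written from block 1000 upwards, their
** parents from block 1010 upwards, grandparents from block 1020, and so
** on -- i.e. aNodeWriter[i].iBlock begins at start_block + i*nLeafEst.)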
+*/ +static int fts3IncrmergeWriter( + Fts3Table *p, /* Fts3 table handle */ + sqlite3_int64 iAbsLevel, /* Absolute level of input segments */ + int iIdx, /* Index of new output segment */ + Fts3MultiSegReader *pCsr, /* Cursor that data will be read from */ + IncrmergeWriter *pWriter /* Populate this object */ +){ + int rc; /* Return Code */ + int i; /* Iterator variable */ + int nLeafEst = 0; /* Blocks allocated for leaf nodes */ + sqlite3_stmt *pLeafEst = 0; /* SQL used to determine nLeafEst */ + sqlite3_stmt *pFirstBlock = 0; /* SQL used to determine first block */ + + /* Calculate nLeafEst. */ + rc = fts3SqlStmt(p, SQL_MAX_LEAF_NODE_ESTIMATE, &pLeafEst, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pLeafEst, 1, iAbsLevel); + sqlite3_bind_int64(pLeafEst, 2, pCsr->nSegment); + if( SQLITE_ROW==sqlite3_step(pLeafEst) ){ + nLeafEst = sqlite3_column_int(pLeafEst, 0); + } + rc = sqlite3_reset(pLeafEst); + } + if( rc!=SQLITE_OK ) return rc; + + /* Calculate the first block to use in the output segment */ + rc = fts3SqlStmt(p, SQL_NEXT_SEGMENTS_ID, &pFirstBlock, 0); + if( rc==SQLITE_OK ){ + if( SQLITE_ROW==sqlite3_step(pFirstBlock) ){ + pWriter->iStart = sqlite3_column_int64(pFirstBlock, 0); + pWriter->iEnd = pWriter->iStart - 1; + pWriter->iEnd += nLeafEst * FTS_MAX_APPENDABLE_HEIGHT; + } + rc = sqlite3_reset(pFirstBlock); + } + if( rc!=SQLITE_OK ) return rc; + + /* Insert the marker in the %_segments table to make sure nobody tries + ** to steal the space just allocated. This is also used to identify + ** appendable segments. */ + rc = fts3WriteSegment(p, pWriter->iEnd, 0, 0); + if( rc!=SQLITE_OK ) return rc; + + pWriter->iAbsLevel = iAbsLevel; + pWriter->nLeafEst = nLeafEst; + pWriter->iIdx = iIdx; + + /* Set up the array of NodeWriter objects */ + for(i=0; i<FTS_MAX_APPENDABLE_HEIGHT; i++){ + pWriter->aNodeWriter[i].iBlock = pWriter->iStart + i*pWriter->nLeafEst; + } + return SQLITE_OK; +} + +/* +** Remove an entry from the %_segdir table. This involves running the +** following two statements: +** +** DELETE FROM %_segdir WHERE level = :iAbsLevel AND idx = :iIdx +** UPDATE %_segdir SET idx = idx - 1 WHERE level = :iAbsLevel AND idx > :iIdx +** +** The DELETE statement removes the specific %_segdir level. The UPDATE +** statement ensures that the remaining segments have contiguously allocated +** idx values. +*/ +static int fts3RemoveSegdirEntry( + Fts3Table *p, /* FTS3 table handle */ + sqlite3_int64 iAbsLevel, /* Absolute level to delete from */ + int iIdx /* Index of %_segdir entry to delete */ +){ + int rc; /* Return code */ + sqlite3_stmt *pDelete = 0; /* DELETE statement */ + + rc = fts3SqlStmt(p, SQL_DELETE_SEGDIR_ENTRY, &pDelete, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pDelete, 1, iAbsLevel); + sqlite3_bind_int(pDelete, 2, iIdx); + sqlite3_step(pDelete); + rc = sqlite3_reset(pDelete); + } + + return rc; +} + +/* +** One or more segments have just been removed from absolute level iAbsLevel. +** Update the 'idx' values of the remaining segments in the level so that +** the idx values are a contiguous sequence starting from 0. 
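**
** (Editor's worked example, not part of this check-in: if the surviving
** segments on the level have idx values {0, 2, 5}, the loop below leaves
** idx 0 alone and rewrites 2 -> 1 and 5 -> 2, giving the contiguous
** sequence {0, 1, 2}.)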
+*/ +static int fts3RepackSegdirLevel( + Fts3Table *p, /* FTS3 table handle */ + sqlite3_int64 iAbsLevel /* Absolute level to repack */ +){ + int rc; /* Return code */ + int *aIdx = 0; /* Array of remaining idx values */ + int nIdx = 0; /* Valid entries in aIdx[] */ + int nAlloc = 0; /* Allocated size of aIdx[] */ + int i; /* Iterator variable */ + sqlite3_stmt *pSelect = 0; /* Select statement to read idx values */ + sqlite3_stmt *pUpdate = 0; /* Update statement to modify idx values */ + + rc = fts3SqlStmt(p, SQL_SELECT_INDEXES, &pSelect, 0); + if( rc==SQLITE_OK ){ + int rc2; + sqlite3_bind_int64(pSelect, 1, iAbsLevel); + while( SQLITE_ROW==sqlite3_step(pSelect) ){ + if( nIdx>=nAlloc ){ + int *aNew; + nAlloc += 16; + aNew = sqlite3_realloc(aIdx, nAlloc*sizeof(int)); + if( !aNew ){ + rc = SQLITE_NOMEM; + break; + } + aIdx = aNew; + } + aIdx[nIdx++] = sqlite3_column_int(pSelect, 0); + } + rc2 = sqlite3_reset(pSelect); + if( rc==SQLITE_OK ) rc = rc2; + } + + if( rc==SQLITE_OK ){ + rc = fts3SqlStmt(p, SQL_SHIFT_SEGDIR_ENTRY, &pUpdate, 0); + } + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pUpdate, 2, iAbsLevel); + } + + assert( p->bIgnoreSavepoint==0 ); + p->bIgnoreSavepoint = 1; + for(i=0; rc==SQLITE_OK && i<nIdx; i++){ + if( aIdx[i]!=i ){ + sqlite3_bind_int(pUpdate, 3, aIdx[i]); + sqlite3_bind_int(pUpdate, 1, i); + sqlite3_step(pUpdate); + rc = sqlite3_reset(pUpdate); + } + } + p->bIgnoreSavepoint = 0; + + sqlite3_free(aIdx); + return rc; +} + +static void fts3StartNode(Blob *pNode, int iHeight, sqlite3_int64 iChild){ + pNode->a[0] = (char)iHeight; + if( iChild ){ + assert( pNode->nAlloc>=1+sqlite3Fts3VarintLen(iChild) ); + pNode->n = 1 + sqlite3Fts3PutVarint(&pNode->a[1], iChild); + }else{ + assert( pNode->nAlloc>=1 ); + pNode->n = 1; + } +} + +/* +** The first two arguments are a pointer to and the size of a segment b-tree +** node. The node may be a leaf or an internal node. +** +** This function creates a new node image in blob object *pNew by copying +** all terms that are greater than or equal to zTerm/nTerm (for leaf nodes) +** or greater than zTerm/nTerm (for internal nodes) from aNode/nNode. 
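**
** (Editor's illustration, not part of this check-in: given a node holding
** the terms "apple", "banana" and "cherry" with zTerm=="banana", the new
** image of a leaf node keeps "banana" and "cherry" (terms >= zTerm), while
** the new image of an internal node keeps only "cherry" (terms strictly
** greater than zTerm).)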
+*/ +static int fts3TruncateNode( + const char *aNode, /* Current node image */ + int nNode, /* Size of aNode in bytes */ + Blob *pNew, /* OUT: Write new node image here */ + const char *zTerm, /* Omit all terms smaller than this */ + int nTerm, /* Size of zTerm in bytes */ + sqlite3_int64 *piBlock /* OUT: Block number in next layer down */ +){ + NodeReader reader; /* Reader object */ + Blob prev = {0, 0, 0}; /* Previous term written to new node */ + int rc = SQLITE_OK; /* Return code */ + int bLeaf = aNode[0]=='\0'; /* True for a leaf node */ + + /* Allocate required output space */ + blobGrowBuffer(pNew, nNode, &rc); + if( rc!=SQLITE_OK ) return rc; + pNew->n = 0; + + /* Populate new node buffer */ + for(rc = nodeReaderInit(&reader, aNode, nNode); + rc==SQLITE_OK && reader.aNode; + rc = nodeReaderNext(&reader) + ){ + if( pNew->n==0 ){ + int res = fts3TermCmp(reader.term.a, reader.term.n, zTerm, nTerm); + if( res<0 || (bLeaf==0 && res==0) ) continue; + fts3StartNode(pNew, (int)aNode[0], reader.iChild); + *piBlock = reader.iChild; + } + rc = fts3AppendToNode( + pNew, &prev, reader.term.a, reader.term.n, + reader.aDoclist, reader.nDoclist + ); + if( rc!=SQLITE_OK ) break; + } + if( pNew->n==0 ){ + fts3StartNode(pNew, (int)aNode[0], reader.iChild); + *piBlock = reader.iChild; + } + assert( pNew->n<=pNew->nAlloc ); + + nodeReaderRelease(&reader); + sqlite3_free(prev.a); + return rc; +} + +/* +** Remove all terms smaller than zTerm/nTerm from segment iIdx in absolute +** level iAbsLevel. This may involve deleting entries from the %_segments +** table, and modifying existing entries in both the %_segments and %_segdir +** tables. +** +** SQLITE_OK is returned if the segment is updated successfully. Or an +** SQLite error code otherwise. +*/ +static int fts3TruncateSegment( + Fts3Table *p, /* FTS3 table handle */ + sqlite3_int64 iAbsLevel, /* Absolute level of segment to modify */ + int iIdx, /* Index within level of segment to modify */ + const char *zTerm, /* Remove terms smaller than this */ + int nTerm /* Number of bytes in buffer zTerm */ +){ + int rc = SQLITE_OK; /* Return code */ + Blob root = {0,0,0}; /* New root page image */ + Blob block = {0,0,0}; /* Buffer used for any other block */ + sqlite3_int64 iBlock = 0; /* Block id */ + sqlite3_int64 iNewStart = 0; /* New value for iStartBlock */ + sqlite3_int64 iOldStart = 0; /* Old value for iStartBlock */ + sqlite3_stmt *pFetch = 0; /* Statement used to fetch segdir */ + + rc = fts3SqlStmt(p, SQL_SELECT_SEGDIR, &pFetch, 0); + if( rc==SQLITE_OK ){ + int rc2; /* sqlite3_reset() return code */ + sqlite3_bind_int64(pFetch, 1, iAbsLevel); + sqlite3_bind_int(pFetch, 2, iIdx); + if( SQLITE_ROW==sqlite3_step(pFetch) ){ + const char *aRoot = sqlite3_column_blob(pFetch, 4); + int nRoot = sqlite3_column_bytes(pFetch, 4); + iOldStart = sqlite3_column_int64(pFetch, 1); + rc = fts3TruncateNode(aRoot, nRoot, &root, zTerm, nTerm, &iBlock); + } + rc2 = sqlite3_reset(pFetch); + if( rc==SQLITE_OK ) rc = rc2; + } + + while( rc==SQLITE_OK && iBlock ){ + char *aBlock = 0; + int nBlock = 0; + iNewStart = iBlock; + + rc = sqlite3Fts3ReadBlock(p, iBlock, &aBlock, &nBlock, 0); + if( rc==SQLITE_OK ){ + rc = fts3TruncateNode(aBlock, nBlock, &block, zTerm, nTerm, &iBlock); + } + if( rc==SQLITE_OK ){ + rc = fts3WriteSegment(p, iNewStart, block.a, block.n); + } + sqlite3_free(aBlock); + } + + /* Variable iNewStart now contains the first valid leaf node. 
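  **
  ** (Editor's illustration, not part of this check-in: if the segment's
  ** leaves originally occupied blocks 100..180 and the first leaf that
  ** still contains terms >= zTerm is block 140, then blocks 100..139 are
  ** deleted from %_segments below and the segment's start_block is moved
  ** up to 140 via the SQL_CHOMP_SEGDIR statement.)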
*/ + if( rc==SQLITE_OK && iNewStart ){ + sqlite3_stmt *pDel = 0; + rc = fts3SqlStmt(p, SQL_DELETE_SEGMENTS_RANGE, &pDel, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pDel, 1, iOldStart); + sqlite3_bind_int64(pDel, 2, iNewStart-1); + sqlite3_step(pDel); + rc = sqlite3_reset(pDel); + } + } + + if( rc==SQLITE_OK ){ + sqlite3_stmt *pChomp = 0; + rc = fts3SqlStmt(p, SQL_CHOMP_SEGDIR, &pChomp, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pChomp, 1, iNewStart); + sqlite3_bind_blob(pChomp, 2, root.a, root.n, SQLITE_STATIC); + sqlite3_bind_int64(pChomp, 3, iAbsLevel); + sqlite3_bind_int(pChomp, 4, iIdx); + sqlite3_step(pChomp); + rc = sqlite3_reset(pChomp); + } + } + + sqlite3_free(root.a); + sqlite3_free(block.a); + return rc; +} + +/* +** This function is called after an incrmental-merge operation has run to +** merge (or partially merge) two or more segments from absolute level +** iAbsLevel. +** +** Each input segment is either removed from the db completely (if all of +** its data was copied to the output segment by the incrmerge operation) +** or modified in place so that it no longer contains those entries that +** have been duplicated in the output segment. +*/ +static int fts3IncrmergeChomp( + Fts3Table *p, /* FTS table handle */ + sqlite3_int64 iAbsLevel, /* Absolute level containing segments */ + Fts3MultiSegReader *pCsr, /* Chomp all segments opened by this cursor */ + int *pnRem /* Number of segments not deleted */ +){ + int i; + int nRem = 0; + int rc = SQLITE_OK; + + for(i=pCsr->nSegment-1; i>=0 && rc==SQLITE_OK; i--){ + Fts3SegReader *pSeg = 0; + int j; + + /* Find the Fts3SegReader object with Fts3SegReader.iIdx==i. It is hiding + ** somewhere in the pCsr->apSegment[] array. */ + for(j=0; ALWAYS(j<pCsr->nSegment); j++){ + pSeg = pCsr->apSegment[j]; + if( pSeg->iIdx==i ) break; + } + assert( j<pCsr->nSegment && pSeg->iIdx==i ); + + if( pSeg->aNode==0 ){ + /* Seg-reader is at EOF. Remove the entire input segment. */ + rc = fts3DeleteSegment(p, pSeg); + if( rc==SQLITE_OK ){ + rc = fts3RemoveSegdirEntry(p, iAbsLevel, pSeg->iIdx); + } + *pnRem = 0; + }else{ + /* The incremental merge did not copy all the data from this + ** segment to the upper level. The segment is modified in place + ** so that it contains no keys smaller than zTerm/nTerm. */ + const char *zTerm = pSeg->zTerm; + int nTerm = pSeg->nTerm; + rc = fts3TruncateSegment(p, iAbsLevel, pSeg->iIdx, zTerm, nTerm); + nRem++; + } + } + + if( rc==SQLITE_OK && nRem!=pCsr->nSegment ){ + rc = fts3RepackSegdirLevel(p, iAbsLevel); + } + + *pnRem = nRem; + return rc; +} + +/* +** Store an incr-merge hint in the database. +*/ +static int fts3IncrmergeHintStore(Fts3Table *p, Blob *pHint){ + sqlite3_stmt *pReplace = 0; + int rc; /* Return code */ + + rc = fts3SqlStmt(p, SQL_REPLACE_STAT, &pReplace, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int(pReplace, 1, FTS_STAT_INCRMERGEHINT); + sqlite3_bind_blob(pReplace, 2, pHint->a, pHint->n, SQLITE_STATIC); + sqlite3_step(pReplace); + rc = sqlite3_reset(pReplace); + } + + return rc; +} + +/* +** Load an incr-merge hint from the database. The incr-merge hint, if one +** exists, is stored in the rowid==1 row of the %_stat table. +** +** If successful, populate blob *pHint with the value read from the %_stat +** table and return SQLITE_OK. Otherwise, if an error occurs, return an +** SQLite error code. 
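**
** (Editor's illustration, not part of this check-in: the hint blob is a
** stack of (iAbsLevel, nInput) varint pairs, appended by
** fts3IncrmergeHintPush() and consumed from the end by
** fts3IncrmergeHintPop() below. For example, pushing (2, 4) and then
** (35, 4) leaves the four bytes 0x02 0x04 0x23 0x04 in the blob, and the
** next Pop() returns iAbsLevel==35, nInput==4.)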
+*/ +static int fts3IncrmergeHintLoad(Fts3Table *p, Blob *pHint){ + sqlite3_stmt *pSelect = 0; + int rc; + + pHint->n = 0; + rc = fts3SqlStmt(p, SQL_SELECT_STAT, &pSelect, 0); + if( rc==SQLITE_OK ){ + int rc2; + sqlite3_bind_int(pSelect, 1, FTS_STAT_INCRMERGEHINT); + if( SQLITE_ROW==sqlite3_step(pSelect) ){ + const char *aHint = sqlite3_column_blob(pSelect, 0); + int nHint = sqlite3_column_bytes(pSelect, 0); + if( aHint ){ + blobGrowBuffer(pHint, nHint, &rc); + if( rc==SQLITE_OK ){ + memcpy(pHint->a, aHint, nHint); + pHint->n = nHint; + } + } + } + rc2 = sqlite3_reset(pSelect); + if( rc==SQLITE_OK ) rc = rc2; + } + + return rc; +} + +/* +** If *pRc is not SQLITE_OK when this function is called, it is a no-op. +** Otherwise, append an entry to the hint stored in blob *pHint. Each entry +** consists of two varints, the absolute level number of the input segments +** and the number of input segments. +** +** If successful, leave *pRc set to SQLITE_OK and return. If an error occurs, +** set *pRc to an SQLite error code before returning. +*/ +static void fts3IncrmergeHintPush( + Blob *pHint, /* Hint blob to append to */ + i64 iAbsLevel, /* First varint to store in hint */ + int nInput, /* Second varint to store in hint */ + int *pRc /* IN/OUT: Error code */ +){ + blobGrowBuffer(pHint, pHint->n + 2*FTS3_VARINT_MAX, pRc); + if( *pRc==SQLITE_OK ){ + pHint->n += sqlite3Fts3PutVarint(&pHint->a[pHint->n], iAbsLevel); + pHint->n += sqlite3Fts3PutVarint(&pHint->a[pHint->n], (i64)nInput); + } +} + +/* +** Read the last entry (most recently pushed) from the hint blob *pHint +** and then remove the entry. Write the two values read to *piAbsLevel and +** *pnInput before returning. +** +** If no error occurs, return SQLITE_OK. If the hint blob in *pHint does +** not contain at least two valid varints, return SQLITE_CORRUPT_VTAB. +*/ +static int fts3IncrmergeHintPop(Blob *pHint, i64 *piAbsLevel, int *pnInput){ + const int nHint = pHint->n; + int i; + + i = pHint->n-2; + while( i>0 && (pHint->a[i-1] & 0x80) ) i--; + while( i>0 && (pHint->a[i-1] & 0x80) ) i--; + + pHint->n = i; + i += sqlite3Fts3GetVarint(&pHint->a[i], piAbsLevel); + i += fts3GetVarint32(&pHint->a[i], pnInput); + if( i!=nHint ) return FTS_CORRUPT_VTAB; + + return SQLITE_OK; +} + + +/* +** Attempt an incremental merge that writes nMerge leaf blocks. +** +** Incremental merges happen nMin segments at a time. The segments +** to be merged are the nMin oldest segments (the ones with the smallest +** values for the _segdir.idx field) in the highest level that contains +** at least nMin segments. Multiple merges might occur in an attempt to +** write the quota of nMerge leaf blocks. 
+*/ +SQLITE_PRIVATE int sqlite3Fts3Incrmerge(Fts3Table *p, int nMerge, int nMin){ + int rc; /* Return code */ + int nRem = nMerge; /* Number of leaf pages yet to be written */ + Fts3MultiSegReader *pCsr; /* Cursor used to read input data */ + Fts3SegFilter *pFilter; /* Filter used with cursor pCsr */ + IncrmergeWriter *pWriter; /* Writer object */ + int nSeg = 0; /* Number of input segments */ + sqlite3_int64 iAbsLevel = 0; /* Absolute level number to work on */ + Blob hint = {0, 0, 0}; /* Hint read from %_stat table */ + int bDirtyHint = 0; /* True if blob 'hint' has been modified */ + + /* Allocate space for the cursor, filter and writer objects */ + const int nAlloc = sizeof(*pCsr) + sizeof(*pFilter) + sizeof(*pWriter); + pWriter = (IncrmergeWriter *)sqlite3_malloc(nAlloc); + if( !pWriter ) return SQLITE_NOMEM; + pFilter = (Fts3SegFilter *)&pWriter[1]; + pCsr = (Fts3MultiSegReader *)&pFilter[1]; + + rc = fts3IncrmergeHintLoad(p, &hint); + while( rc==SQLITE_OK && nRem>0 ){ + const i64 nMod = FTS3_SEGDIR_MAXLEVEL * p->nIndex; + sqlite3_stmt *pFindLevel = 0; /* SQL used to determine iAbsLevel */ + int bUseHint = 0; /* True if attempting to append */ + int iIdx = 0; /* Largest idx in level (iAbsLevel+1) */ + + /* Search the %_segdir table for the absolute level with the smallest + ** relative level number that contains at least nMin segments, if any. + ** If one is found, set iAbsLevel to the absolute level number and + ** nSeg to nMin. If no level with at least nMin segments can be found, + ** set nSeg to -1. + */ + rc = fts3SqlStmt(p, SQL_FIND_MERGE_LEVEL, &pFindLevel, 0); + sqlite3_bind_int(pFindLevel, 1, nMin); + if( sqlite3_step(pFindLevel)==SQLITE_ROW ){ + iAbsLevel = sqlite3_column_int64(pFindLevel, 0); + nSeg = nMin; + }else{ + nSeg = -1; + } + rc = sqlite3_reset(pFindLevel); + + /* If the hint read from the %_stat table is not empty, check if the + ** last entry in it specifies a relative level smaller than or equal + ** to the level identified by the block above (if any). If so, this + ** iteration of the loop will work on merging at the hinted level. + */ + if( rc==SQLITE_OK && hint.n ){ + int nHint = hint.n; + sqlite3_int64 iHintAbsLevel = 0; /* Hint level */ + int nHintSeg = 0; /* Hint number of segments */ + + rc = fts3IncrmergeHintPop(&hint, &iHintAbsLevel, &nHintSeg); + if( nSeg<0 || (iAbsLevel % nMod) >= (iHintAbsLevel % nMod) ){ + iAbsLevel = iHintAbsLevel; + nSeg = nHintSeg; + bUseHint = 1; + bDirtyHint = 1; + }else{ + /* This undoes the effect of the HintPop() above - so that no entry + ** is removed from the hint blob. */ + hint.n = nHint; + } + } + + /* If nSeg is less that zero, then there is no level with at least + ** nMin segments and no hint in the %_stat table. No work to do. + ** Exit early in this case. */ + if( nSeg<0 ) break; + + /* Open a cursor to iterate through the contents of the oldest nSeg + ** indexes of absolute level iAbsLevel. If this cursor is opened using + ** the 'hint' parameters, it is possible that there are less than nSeg + ** segments available in level iAbsLevel. In this case, no work is + ** done on iAbsLevel - fall through to the next iteration of the loop + ** to start work on some other level. 
*/ + memset(pWriter, 0, nAlloc); + pFilter->flags = FTS3_SEGMENT_REQUIRE_POS; + + if( rc==SQLITE_OK ){ + rc = fts3IncrmergeOutputIdx(p, iAbsLevel, &iIdx); + assert( bUseHint==1 || bUseHint==0 ); + if( iIdx==0 || (bUseHint && iIdx==1) ){ + int bIgnore = 0; + rc = fts3SegmentIsMaxLevel(p, iAbsLevel+1, &bIgnore); + if( bIgnore ){ + pFilter->flags |= FTS3_SEGMENT_IGNORE_EMPTY; + } + } + } + + if( rc==SQLITE_OK ){ + rc = fts3IncrmergeCsr(p, iAbsLevel, nSeg, pCsr); + } + if( SQLITE_OK==rc && pCsr->nSegment==nSeg + && SQLITE_OK==(rc = sqlite3Fts3SegReaderStart(p, pCsr, pFilter)) + && SQLITE_ROW==(rc = sqlite3Fts3SegReaderStep(p, pCsr)) + ){ + if( bUseHint && iIdx>0 ){ + const char *zKey = pCsr->zTerm; + int nKey = pCsr->nTerm; + rc = fts3IncrmergeLoad(p, iAbsLevel, iIdx-1, zKey, nKey, pWriter); + }else{ + rc = fts3IncrmergeWriter(p, iAbsLevel, iIdx, pCsr, pWriter); + } + + if( rc==SQLITE_OK && pWriter->nLeafEst ){ + fts3LogMerge(nSeg, iAbsLevel); + do { + rc = fts3IncrmergeAppend(p, pWriter, pCsr); + if( rc==SQLITE_OK ) rc = sqlite3Fts3SegReaderStep(p, pCsr); + if( pWriter->nWork>=nRem && rc==SQLITE_ROW ) rc = SQLITE_OK; + }while( rc==SQLITE_ROW ); + + /* Update or delete the input segments */ + if( rc==SQLITE_OK ){ + nRem -= (1 + pWriter->nWork); + rc = fts3IncrmergeChomp(p, iAbsLevel, pCsr, &nSeg); + if( nSeg!=0 ){ + bDirtyHint = 1; + fts3IncrmergeHintPush(&hint, iAbsLevel, nSeg, &rc); + } + } + } + + if( nSeg!=0 ){ + pWriter->nLeafData = pWriter->nLeafData * -1; + } + fts3IncrmergeRelease(p, pWriter, &rc); + if( nSeg==0 && pWriter->bNoLeafData==0 ){ + fts3PromoteSegments(p, iAbsLevel+1, pWriter->nLeafData); + } + } + + sqlite3Fts3SegReaderFinish(pCsr); + } + + /* Write the hint values into the %_stat table for the next incr-merger */ + if( bDirtyHint && rc==SQLITE_OK ){ + rc = fts3IncrmergeHintStore(p, &hint); + } + + sqlite3_free(pWriter); + sqlite3_free(hint.a); + return rc; +} + +/* +** Convert the text beginning at *pz into an integer and return +** its value. Advance *pz to point to the first character past +** the integer. +*/ +static int fts3Getint(const char **pz){ + const char *z = *pz; + int i = 0; + while( (*z)>='0' && (*z)<='9' ) i = 10*i + *(z++) - '0'; + *pz = z; + return i; +} + +/* +** Process statements of the form: +** +** INSERT INTO table(table) VALUES('merge=A,B'); +** +** A and B are integers that decode to be the number of leaf pages +** written for the merge, and the minimum number of segments on a level +** before it will be selected for a merge, respectively. +*/ +static int fts3DoIncrmerge( + Fts3Table *p, /* FTS3 table handle */ + const char *zParam /* Nul-terminated string containing "A,B" */ +){ + int rc; + int nMin = (FTS3_MERGE_COUNT / 2); + int nMerge = 0; + const char *z = zParam; + + /* Read the first integer value */ + nMerge = fts3Getint(&z); + + /* If the first integer value is followed by a ',', read the second + ** integer value. */ + if( z[0]==',' && z[1]!='\0' ){ + z++; + nMin = fts3Getint(&z); + } + + if( z[0]!='\0' || nMin<2 ){ + rc = SQLITE_ERROR; + }else{ + rc = SQLITE_OK; + if( !p->bHasStat ){ + assert( p->bFts4==0 ); + sqlite3Fts3CreateStatTable(&rc, p); + } + if( rc==SQLITE_OK ){ + rc = sqlite3Fts3Incrmerge(p, nMerge, nMin); + } + sqlite3Fts3SegmentsClose(p); + } + return rc; +} + +/* +** Process statements of the form: +** +** INSERT INTO table(table) VALUES('automerge=X'); +** +** where X is an integer. X==0 means to turn automerge off. X!=0 means +** turn it on. The setting is persistent. 
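**
** (Editor's usage example, not part of this check-in:
**
**     INSERT INTO t(t) VALUES('automerge=8');
**
** enables automatic incremental merging for table "t". As the code below
** shows, a value of 1, or any value larger than FTS3_MERGE_COUNT, is
** coerced to 8.)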
+*/ +static int fts3DoAutoincrmerge( + Fts3Table *p, /* FTS3 table handle */ + const char *zParam /* Nul-terminated string containing boolean */ +){ + int rc = SQLITE_OK; + sqlite3_stmt *pStmt = 0; + p->nAutoincrmerge = fts3Getint(&zParam); + if( p->nAutoincrmerge==1 || p->nAutoincrmerge>FTS3_MERGE_COUNT ){ + p->nAutoincrmerge = 8; + } + if( !p->bHasStat ){ + assert( p->bFts4==0 ); + sqlite3Fts3CreateStatTable(&rc, p); + if( rc ) return rc; + } + rc = fts3SqlStmt(p, SQL_REPLACE_STAT, &pStmt, 0); + if( rc ) return rc; + sqlite3_bind_int(pStmt, 1, FTS_STAT_AUTOINCRMERGE); + sqlite3_bind_int(pStmt, 2, p->nAutoincrmerge); + sqlite3_step(pStmt); + rc = sqlite3_reset(pStmt); + return rc; +} + +/* +** Return a 64-bit checksum for the FTS index entry specified by the +** arguments to this function. +*/ +static u64 fts3ChecksumEntry( + const char *zTerm, /* Pointer to buffer containing term */ + int nTerm, /* Size of zTerm in bytes */ + int iLangid, /* Language id for current row */ + int iIndex, /* Index (0..Fts3Table.nIndex-1) */ + i64 iDocid, /* Docid for current row. */ + int iCol, /* Column number */ + int iPos /* Position */ +){ + int i; + u64 ret = (u64)iDocid; + + ret += (ret<<3) + iLangid; + ret += (ret<<3) + iIndex; + ret += (ret<<3) + iCol; + ret += (ret<<3) + iPos; + for(i=0; i<nTerm; i++) ret += (ret<<3) + zTerm[i]; + + return ret; +} + +/* +** Return a checksum of all entries in the FTS index that correspond to +** language id iLangid. The checksum is calculated by XORing the checksums +** of each individual entry (see fts3ChecksumEntry()) together. +** +** If successful, the checksum value is returned and *pRc set to SQLITE_OK. +** Otherwise, if an error occurs, *pRc is set to an SQLite error code. The +** return value is undefined in this case. +*/ +static u64 fts3ChecksumIndex( + Fts3Table *p, /* FTS3 table handle */ + int iLangid, /* Language id to return cksum for */ + int iIndex, /* Index to cksum (0..p->nIndex-1) */ + int *pRc /* OUT: Return code */ +){ + Fts3SegFilter filter; + Fts3MultiSegReader csr; + int rc; + u64 cksum = 0; + + assert( *pRc==SQLITE_OK ); + + memset(&filter, 0, sizeof(filter)); + memset(&csr, 0, sizeof(csr)); + filter.flags = FTS3_SEGMENT_REQUIRE_POS|FTS3_SEGMENT_IGNORE_EMPTY; + filter.flags |= FTS3_SEGMENT_SCAN; + + rc = sqlite3Fts3SegReaderCursor( + p, iLangid, iIndex, FTS3_SEGCURSOR_ALL, 0, 0, 0, 1,&csr + ); + if( rc==SQLITE_OK ){ + rc = sqlite3Fts3SegReaderStart(p, &csr, &filter); + } + + if( rc==SQLITE_OK ){ + while( SQLITE_ROW==(rc = sqlite3Fts3SegReaderStep(p, &csr)) ){ + char *pCsr = csr.aDoclist; + char *pEnd = &pCsr[csr.nDoclist]; + + i64 iDocid = 0; + i64 iCol = 0; + i64 iPos = 0; + + pCsr += sqlite3Fts3GetVarint(pCsr, &iDocid); + while( pCsr<pEnd ){ + i64 iVal = 0; + pCsr += sqlite3Fts3GetVarint(pCsr, &iVal); + if( pCsr<pEnd ){ + if( iVal==0 || iVal==1 ){ + iCol = 0; + iPos = 0; + if( iVal ){ + pCsr += sqlite3Fts3GetVarint(pCsr, &iCol); + }else{ + pCsr += sqlite3Fts3GetVarint(pCsr, &iVal); + iDocid += iVal; + } + }else{ + iPos += (iVal - 2); + cksum = cksum ^ fts3ChecksumEntry( + csr.zTerm, csr.nTerm, iLangid, iIndex, iDocid, + (int)iCol, (int)iPos + ); + } + } + } + } + } + sqlite3Fts3SegReaderFinish(&csr); + + *pRc = rc; + return cksum; +} + +/* +** Check if the contents of the FTS index match the current contents of the +** content table. If no error occurs and the contents do match, set *pbOk +** to true and return SQLITE_OK. Or if the contents do not match, set *pbOk +** to false before returning. +** +** If an error occurs (e.g. 
an OOM or IO error), return an SQLite error +** code. The final value of *pbOk is undefined in this case. +*/ +static int fts3IntegrityCheck(Fts3Table *p, int *pbOk){ + int rc = SQLITE_OK; /* Return code */ + u64 cksum1 = 0; /* Checksum based on FTS index contents */ + u64 cksum2 = 0; /* Checksum based on %_content contents */ + sqlite3_stmt *pAllLangid = 0; /* Statement to return all language-ids */ + + /* This block calculates the checksum according to the FTS index. */ + rc = fts3SqlStmt(p, SQL_SELECT_ALL_LANGID, &pAllLangid, 0); + if( rc==SQLITE_OK ){ + int rc2; + sqlite3_bind_int(pAllLangid, 1, p->iPrevLangid); + sqlite3_bind_int(pAllLangid, 2, p->nIndex); + while( rc==SQLITE_OK && sqlite3_step(pAllLangid)==SQLITE_ROW ){ + int iLangid = sqlite3_column_int(pAllLangid, 0); + int i; + for(i=0; i<p->nIndex; i++){ + cksum1 = cksum1 ^ fts3ChecksumIndex(p, iLangid, i, &rc); + } + } + rc2 = sqlite3_reset(pAllLangid); + if( rc==SQLITE_OK ) rc = rc2; + } + + /* This block calculates the checksum according to the %_content table */ + if( rc==SQLITE_OK ){ + sqlite3_tokenizer_module const *pModule = p->pTokenizer->pModule; + sqlite3_stmt *pStmt = 0; + char *zSql; + + zSql = sqlite3_mprintf("SELECT %s" , p->zReadExprlist); + if( !zSql ){ + rc = SQLITE_NOMEM; + }else{ + rc = sqlite3_prepare_v2(p->db, zSql, -1, &pStmt, 0); + sqlite3_free(zSql); + } + + while( rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pStmt) ){ + i64 iDocid = sqlite3_column_int64(pStmt, 0); + int iLang = langidFromSelect(p, pStmt); + int iCol; + + for(iCol=0; rc==SQLITE_OK && iCol<p->nColumn; iCol++){ + if( p->abNotindexed[iCol]==0 ){ + const char *zText = (const char *)sqlite3_column_text(pStmt, iCol+1); + int nText = sqlite3_column_bytes(pStmt, iCol+1); + sqlite3_tokenizer_cursor *pT = 0; + + rc = sqlite3Fts3OpenTokenizer(p->pTokenizer, iLang, zText, nText,&pT); + while( rc==SQLITE_OK ){ + char const *zToken; /* Buffer containing token */ + int nToken = 0; /* Number of bytes in token */ + int iDum1 = 0, iDum2 = 0; /* Dummy variables */ + int iPos = 0; /* Position of token in zText */ + + rc = pModule->xNext(pT, &zToken, &nToken, &iDum1, &iDum2, &iPos); + if( rc==SQLITE_OK ){ + int i; + cksum2 = cksum2 ^ fts3ChecksumEntry( + zToken, nToken, iLang, 0, iDocid, iCol, iPos + ); + for(i=1; i<p->nIndex; i++){ + if( p->aIndex[i].nPrefix<=nToken ){ + cksum2 = cksum2 ^ fts3ChecksumEntry( + zToken, p->aIndex[i].nPrefix, iLang, i, iDocid, iCol, iPos + ); + } + } + } + } + if( pT ) pModule->xClose(pT); + if( rc==SQLITE_DONE ) rc = SQLITE_OK; + } + } + } + + sqlite3_finalize(pStmt); + } + + *pbOk = (cksum1==cksum2); + return rc; +} + +/* +** Run the integrity-check. If no error occurs and the current contents of +** the FTS index are correct, return SQLITE_OK. Or, if the contents of the +** FTS index are incorrect, return SQLITE_CORRUPT_VTAB. +** +** Or, if an error (e.g. an OOM or IO error) occurs, return an SQLite +** error code. +** +** The integrity-check works as follows. For each token and indexed token +** prefix in the document set, a 64-bit checksum is calculated (by code +** in fts3ChecksumEntry()) based on the following: +** +** + The index number (0 for the main index, 1 for the first prefix +** index etc.), +** + The token (or token prefix) text itself, +** + The language-id of the row it appears in, +** + The docid of the row it appears in, +** + The column it appears in, and +** + The tokens position within that column. 
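**
** (Editor's note, not part of this check-in.) The per-entry checksum
** computed by fts3ChecksumEntry() above is a simple shift-and-add mix of
** those inputs:
**
**     ret  = (u64)iDocid;
**     ret += (ret<<3) + iLangid;    ret += (ret<<3) + iIndex;
**     ret += (ret<<3) + iCol;       ret += (ret<<3) + iPos;
**     for(i=0; i<nTerm; i++) ret += (ret<<3) + zTerm[i];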
+** +** The checksums for all entries in the index are XORed together to create +** a single checksum for the entire index. +** +** The integrity-check code calculates the same checksum in two ways: +** +** 1. By scanning the contents of the FTS index, and +** 2. By scanning and tokenizing the content table. +** +** If the two checksums are identical, the integrity-check is deemed to have +** passed. +*/ +static int fts3DoIntegrityCheck( + Fts3Table *p /* FTS3 table handle */ +){ + int rc; + int bOk = 0; + rc = fts3IntegrityCheck(p, &bOk); + if( rc==SQLITE_OK && bOk==0 ) rc = FTS_CORRUPT_VTAB; + return rc; +} /* ** Handle a 'special' INSERT of the form: ** ** "INSERT INTO tbl(tbl) VALUES(<expr>)" @@ -108247,34 +153316,232 @@ int nVal = sqlite3_value_bytes(pVal); if( !zVal ){ return SQLITE_NOMEM; }else if( nVal==8 && 0==sqlite3_strnicmp(zVal, "optimize", 8) ){ - rc = fts3SegmentMerge(p, -1); - if( rc==SQLITE_DONE ){ - rc = SQLITE_OK; - }else{ - sqlite3Fts3PendingTermsClear(p); - } + rc = fts3DoOptimize(p, 0); + }else if( nVal==7 && 0==sqlite3_strnicmp(zVal, "rebuild", 7) ){ + rc = fts3DoRebuild(p); + }else if( nVal==15 && 0==sqlite3_strnicmp(zVal, "integrity-check", 15) ){ + rc = fts3DoIntegrityCheck(p); + }else if( nVal>6 && 0==sqlite3_strnicmp(zVal, "merge=", 6) ){ + rc = fts3DoIncrmerge(p, &zVal[6]); + }else if( nVal>10 && 0==sqlite3_strnicmp(zVal, "automerge=", 10) ){ + rc = fts3DoAutoincrmerge(p, &zVal[10]); #ifdef SQLITE_TEST }else if( nVal>9 && 0==sqlite3_strnicmp(zVal, "nodesize=", 9) ){ p->nNodeSize = atoi(&zVal[9]); rc = SQLITE_OK; }else if( nVal>11 && 0==sqlite3_strnicmp(zVal, "maxpending=", 9) ){ p->nMaxPendingData = atoi(&zVal[11]); + rc = SQLITE_OK; + }else if( nVal>21 && 0==sqlite3_strnicmp(zVal, "test-no-incr-doclist=", 21) ){ + p->bNoIncrDoclist = atoi(&zVal[21]); rc = SQLITE_OK; #endif }else{ rc = SQLITE_ERROR; } return rc; } + +#ifndef SQLITE_DISABLE_FTS4_DEFERRED +/* +** Delete all cached deferred doclists. Deferred doclists are cached +** (allocated) by the sqlite3Fts3CacheDeferredDoclists() function. +*/ +SQLITE_PRIVATE void sqlite3Fts3FreeDeferredDoclists(Fts3Cursor *pCsr){ + Fts3DeferredToken *pDef; + for(pDef=pCsr->pDeferred; pDef; pDef=pDef->pNext){ + fts3PendingListDelete(pDef->pList); + pDef->pList = 0; + } +} + +/* +** Free all entries in the pCsr->pDeffered list. Entries are added to +** this list using sqlite3Fts3DeferToken(). +*/ +SQLITE_PRIVATE void sqlite3Fts3FreeDeferredTokens(Fts3Cursor *pCsr){ + Fts3DeferredToken *pDef; + Fts3DeferredToken *pNext; + for(pDef=pCsr->pDeferred; pDef; pDef=pNext){ + pNext = pDef->pNext; + fts3PendingListDelete(pDef->pList); + sqlite3_free(pDef); + } + pCsr->pDeferred = 0; +} + +/* +** Generate deferred-doclists for all tokens in the pCsr->pDeferred list +** based on the row that pCsr currently points to. +** +** A deferred-doclist is like any other doclist with position information +** included, except that it only contains entries for a single row of the +** table, not for all rows. 
+*/ +SQLITE_PRIVATE int sqlite3Fts3CacheDeferredDoclists(Fts3Cursor *pCsr){ + int rc = SQLITE_OK; /* Return code */ + if( pCsr->pDeferred ){ + int i; /* Used to iterate through table columns */ + sqlite3_int64 iDocid; /* Docid of the row pCsr points to */ + Fts3DeferredToken *pDef; /* Used to iterate through deferred tokens */ + + Fts3Table *p = (Fts3Table *)pCsr->base.pVtab; + sqlite3_tokenizer *pT = p->pTokenizer; + sqlite3_tokenizer_module const *pModule = pT->pModule; + + assert( pCsr->isRequireSeek==0 ); + iDocid = sqlite3_column_int64(pCsr->pStmt, 0); + + for(i=0; i<p->nColumn && rc==SQLITE_OK; i++){ + if( p->abNotindexed[i]==0 ){ + const char *zText = (const char *)sqlite3_column_text(pCsr->pStmt, i+1); + sqlite3_tokenizer_cursor *pTC = 0; + + rc = sqlite3Fts3OpenTokenizer(pT, pCsr->iLangid, zText, -1, &pTC); + while( rc==SQLITE_OK ){ + char const *zToken; /* Buffer containing token */ + int nToken = 0; /* Number of bytes in token */ + int iDum1 = 0, iDum2 = 0; /* Dummy variables */ + int iPos = 0; /* Position of token in zText */ + + rc = pModule->xNext(pTC, &zToken, &nToken, &iDum1, &iDum2, &iPos); + for(pDef=pCsr->pDeferred; pDef && rc==SQLITE_OK; pDef=pDef->pNext){ + Fts3PhraseToken *pPT = pDef->pToken; + if( (pDef->iCol>=p->nColumn || pDef->iCol==i) + && (pPT->bFirst==0 || iPos==0) + && (pPT->n==nToken || (pPT->isPrefix && pPT->n<nToken)) + && (0==memcmp(zToken, pPT->z, pPT->n)) + ){ + fts3PendingListAppend(&pDef->pList, iDocid, i, iPos, &rc); + } + } + } + if( pTC ) pModule->xClose(pTC); + if( rc==SQLITE_DONE ) rc = SQLITE_OK; + } + } + + for(pDef=pCsr->pDeferred; pDef && rc==SQLITE_OK; pDef=pDef->pNext){ + if( pDef->pList ){ + rc = fts3PendingListAppendVarint(&pDef->pList, 0); + } + } + } + + return rc; +} + +SQLITE_PRIVATE int sqlite3Fts3DeferredTokenList( + Fts3DeferredToken *p, + char **ppData, + int *pnData +){ + char *pRet; + int nSkip; + sqlite3_int64 dummy; + + *ppData = 0; + *pnData = 0; + + if( p->pList==0 ){ + return SQLITE_OK; + } + + pRet = (char *)sqlite3_malloc(p->pList->nData); + if( !pRet ) return SQLITE_NOMEM; + + nSkip = sqlite3Fts3GetVarint(p->pList->aData, &dummy); + *pnData = p->pList->nData - nSkip; + *ppData = pRet; + + memcpy(pRet, &p->pList->aData[nSkip], *pnData); + return SQLITE_OK; +} + +/* +** Add an entry for token pToken to the pCsr->pDeferred list. +*/ +SQLITE_PRIVATE int sqlite3Fts3DeferToken( + Fts3Cursor *pCsr, /* Fts3 table cursor */ + Fts3PhraseToken *pToken, /* Token to defer */ + int iCol /* Column that token must appear in (or -1) */ +){ + Fts3DeferredToken *pDeferred; + pDeferred = sqlite3_malloc(sizeof(*pDeferred)); + if( !pDeferred ){ + return SQLITE_NOMEM; + } + memset(pDeferred, 0, sizeof(*pDeferred)); + pDeferred->pToken = pToken; + pDeferred->pNext = pCsr->pDeferred; + pDeferred->iCol = iCol; + pCsr->pDeferred = pDeferred; + + assert( pToken->pDeferred==0 ); + pToken->pDeferred = pDeferred; + + return SQLITE_OK; +} +#endif + +/* +** SQLite value pRowid contains the rowid of a row that may or may not be +** present in the FTS3 table. If it is, delete it and adjust the contents +** of subsiduary data structures accordingly. 
+*/ +static int fts3DeleteByRowid( + Fts3Table *p, + sqlite3_value *pRowid, + int *pnChng, /* IN/OUT: Decrement if row is deleted */ + u32 *aSzDel +){ + int rc = SQLITE_OK; /* Return code */ + int bFound = 0; /* True if *pRowid really is in the table */ + + fts3DeleteTerms(&rc, p, pRowid, aSzDel, &bFound); + if( bFound && rc==SQLITE_OK ){ + int isEmpty = 0; /* Deleting *pRowid leaves the table empty */ + rc = fts3IsEmpty(p, pRowid, &isEmpty); + if( rc==SQLITE_OK ){ + if( isEmpty ){ + /* Deleting this row means the whole table is empty. In this case + ** delete the contents of all three tables and throw away any + ** data in the pendingTerms hash table. */ + rc = fts3DeleteAll(p, 1); + *pnChng = 0; + memset(aSzDel, 0, sizeof(u32) * (p->nColumn+1) * 2); + }else{ + *pnChng = *pnChng - 1; + if( p->zContentTbl==0 ){ + fts3SqlExec(&rc, p, SQL_DELETE_CONTENT, &pRowid); + } + if( p->bHasDocsize ){ + fts3SqlExec(&rc, p, SQL_DELETE_DOCSIZE, &pRowid); + } + } + } + } + + return rc; +} /* ** This function does the work for the xUpdate method of FTS3 virtual -** tables. +** tables. The schema of the virtual table being: +** +** CREATE TABLE <table name>( +** <user columns>, +** <table name> HIDDEN, +** docid HIDDEN, +** <langid> HIDDEN +** ); +** +** */ SQLITE_PRIVATE int sqlite3Fts3UpdateMethod( sqlite3_vtab *pVtab, /* FTS3 vtab object */ int nArg, /* Size of argument array */ sqlite3_value **apVal, /* Array of arguments */ @@ -108281,70 +153548,139 @@ sqlite_int64 *pRowid /* OUT: The affected (or effected) rowid */ ){ Fts3Table *p = (Fts3Table *)pVtab; int rc = SQLITE_OK; /* Return Code */ int isRemove = 0; /* True for an UPDATE or DELETE */ - sqlite3_int64 iRemove = 0; /* Rowid removed by UPDATE or DELETE */ - u32 *aSzIns; /* Sizes of inserted documents */ - u32 *aSzDel; /* Sizes of deleted documents */ + u32 *aSzIns = 0; /* Sizes of inserted documents */ + u32 *aSzDel = 0; /* Sizes of deleted documents */ int nChng = 0; /* Net change in number of documents */ + int bInsertDone = 0; + /* At this point it must be known if the %_stat table exists or not. + ** So bHasStat may not be 2. */ + assert( p->bHasStat==0 || p->bHasStat==1 ); + + assert( p->pSegments==0 ); + assert( + nArg==1 /* DELETE operations */ + || nArg==(2 + p->nColumn + 3) /* INSERT or UPDATE operations */ + ); + + /* Check for a "special" INSERT operation. One of the form: + ** + ** INSERT INTO xyz(xyz) VALUES('command'); + */ + if( nArg>1 + && sqlite3_value_type(apVal[0])==SQLITE_NULL + && sqlite3_value_type(apVal[p->nColumn+2])!=SQLITE_NULL + ){ + rc = fts3SpecialInsert(p, apVal[p->nColumn+2]); + goto update_out; + } + + if( nArg>1 && sqlite3_value_int(apVal[2 + p->nColumn + 2])<0 ){ + rc = SQLITE_CONSTRAINT; + goto update_out; + } /* Allocate space to hold the change in document sizes */ - aSzIns = sqlite3_malloc( sizeof(aSzIns[0])*p->nColumn*2 ); - if( aSzIns==0 ) return SQLITE_NOMEM; - aSzDel = &aSzIns[p->nColumn]; - memset(aSzIns, 0, sizeof(aSzIns[0])*p->nColumn*2); + aSzDel = sqlite3_malloc( sizeof(aSzDel[0])*(p->nColumn+1)*2 ); + if( aSzDel==0 ){ + rc = SQLITE_NOMEM; + goto update_out; + } + aSzIns = &aSzDel[p->nColumn+1]; + memset(aSzDel, 0, sizeof(aSzDel[0])*(p->nColumn+1)*2); + + rc = fts3Writelock(p); + if( rc!=SQLITE_OK ) goto update_out; + + /* If this is an INSERT operation, or an UPDATE that modifies the rowid + ** value, then this operation requires constraint handling. + ** + ** If the on-conflict mode is REPLACE, this means that the existing row + ** should be deleted from the database before inserting the new row. 
Or, + ** if the on-conflict mode is other than REPLACE, then this method must + ** detect the conflict and return SQLITE_CONSTRAINT before beginning to + ** modify the database file. + */ + if( nArg>1 && p->zContentTbl==0 ){ + /* Find the value object that holds the new rowid value. */ + sqlite3_value *pNewRowid = apVal[3+p->nColumn]; + if( sqlite3_value_type(pNewRowid)==SQLITE_NULL ){ + pNewRowid = apVal[1]; + } + + if( sqlite3_value_type(pNewRowid)!=SQLITE_NULL && ( + sqlite3_value_type(apVal[0])==SQLITE_NULL + || sqlite3_value_int64(apVal[0])!=sqlite3_value_int64(pNewRowid) + )){ + /* The new rowid is not NULL (in this case the rowid will be + ** automatically assigned and there is no chance of a conflict), and + ** the statement is either an INSERT or an UPDATE that modifies the + ** rowid column. So if the conflict mode is REPLACE, then delete any + ** existing row with rowid=pNewRowid. + ** + ** Or, if the conflict mode is not REPLACE, insert the new record into + ** the %_content table. If we hit the duplicate rowid constraint (or any + ** other error) while doing so, return immediately. + ** + ** This branch may also run if pNewRowid contains a value that cannot + ** be losslessly converted to an integer. In this case, the eventual + ** call to fts3InsertData() (either just below or further on in this + ** function) will return SQLITE_MISMATCH. If fts3DeleteByRowid is + ** invoked, it will delete zero rows (since no row will have + ** docid=$pNewRowid if $pNewRowid is not an integer value). + */ + if( sqlite3_vtab_on_conflict(p->db)==SQLITE_REPLACE ){ + rc = fts3DeleteByRowid(p, pNewRowid, &nChng, aSzDel); + }else{ + rc = fts3InsertData(p, apVal, pRowid); + bInsertDone = 1; + } + } + } + if( rc!=SQLITE_OK ){ + goto update_out; + } /* If this is a DELETE or UPDATE operation, remove the old record. */ if( sqlite3_value_type(apVal[0])!=SQLITE_NULL ){ - int isEmpty; - rc = fts3IsEmpty(p, apVal, &isEmpty); - if( rc==SQLITE_OK ){ - if( isEmpty ){ - /* Deleting this row means the whole table is empty. In this case - ** delete the contents of all three tables and throw away any - ** data in the pendingTerms hash table. - */ - rc = fts3DeleteAll(p); - }else{ - isRemove = 1; - iRemove = sqlite3_value_int64(apVal[0]); - rc = fts3PendingTermsDocid(p, iRemove); - fts3DeleteTerms(&rc, p, apVal, aSzDel); - fts3SqlExec(&rc, p, SQL_DELETE_CONTENT, apVal); - if( p->bHasDocsize ){ - fts3SqlExec(&rc, p, SQL_DELETE_DOCSIZE, apVal); - nChng--; - } - } - } - }else if( sqlite3_value_type(apVal[p->nColumn+2])!=SQLITE_NULL ){ - sqlite3_free(aSzIns); - return fts3SpecialInsert(p, apVal[p->nColumn+2]); + assert( sqlite3_value_type(apVal[0])==SQLITE_INTEGER ); + rc = fts3DeleteByRowid(p, apVal[0], &nChng, aSzDel); + isRemove = 1; } /* If this is an INSERT or UPDATE operation, insert the new record. 
*/ if( nArg>1 && rc==SQLITE_OK ){ - rc = fts3InsertData(p, apVal, pRowid); - if( rc==SQLITE_OK && (!isRemove || *pRowid!=iRemove) ){ - rc = fts3PendingTermsDocid(p, *pRowid); + int iLangid = sqlite3_value_int(apVal[2 + p->nColumn + 2]); + if( bInsertDone==0 ){ + rc = fts3InsertData(p, apVal, pRowid); + if( rc==SQLITE_CONSTRAINT && p->zContentTbl==0 ){ + rc = FTS_CORRUPT_VTAB; + } + } + if( rc==SQLITE_OK && (!isRemove || *pRowid!=p->iPrevDocid ) ){ + rc = fts3PendingTermsDocid(p, 0, iLangid, *pRowid); } if( rc==SQLITE_OK ){ - rc = fts3InsertTerms(p, apVal, aSzIns); + assert( p->iPrevDocid==*pRowid ); + rc = fts3InsertTerms(p, iLangid, apVal, aSzIns); } if( p->bHasDocsize ){ - nChng++; fts3InsertDocsize(&rc, p, aSzIns); } + nChng++; } - if( p->bHasDocsize ){ + if( p->bFts4 ){ fts3UpdateDocTotals(&rc, p, aSzIns, aSzDel, nChng); } - sqlite3_free(aSzIns); + update_out: + sqlite3_free(aSzDel); + sqlite3Fts3SegmentsClose(p); return rc; } /* ** Flush any data in the pending-terms hash table to disk. If successful, @@ -108353,21 +153689,20 @@ */ SQLITE_PRIVATE int sqlite3Fts3Optimize(Fts3Table *p){ int rc; rc = sqlite3_exec(p->db, "SAVEPOINT fts3", 0, 0, 0); if( rc==SQLITE_OK ){ - rc = fts3SegmentMerge(p, -1); - if( rc==SQLITE_OK ){ - rc = sqlite3_exec(p->db, "RELEASE fts3", 0, 0, 0); - if( rc==SQLITE_OK ){ - sqlite3Fts3PendingTermsClear(p); - } + rc = fts3DoOptimize(p, 1); + if( rc==SQLITE_OK || rc==SQLITE_DONE ){ + int rc2 = sqlite3_exec(p->db, "RELEASE fts3", 0, 0, 0); + if( rc2!=SQLITE_OK ) rc = rc2; }else{ sqlite3_exec(p->db, "ROLLBACK TO fts3", 0, 0, 0); sqlite3_exec(p->db, "RELEASE fts3", 0, 0, 0); } } + sqlite3Fts3SegmentsClose(p); return rc; } #endif @@ -108384,21 +153719,42 @@ ** May you share freely, never taking more than you give. ** ****************************************************************************** */ +/* #include "fts3Int.h" */ #if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) +/* #include <string.h> */ +/* #include <assert.h> */ + +/* +** Characters that may appear in the second argument to matchinfo(). +*/ +#define FTS3_MATCHINFO_NPHRASE 'p' /* 1 value */ +#define FTS3_MATCHINFO_NCOL 'c' /* 1 value */ +#define FTS3_MATCHINFO_NDOC 'n' /* 1 value */ +#define FTS3_MATCHINFO_AVGLENGTH 'a' /* nCol values */ +#define FTS3_MATCHINFO_LENGTH 'l' /* nCol values */ +#define FTS3_MATCHINFO_LCS 's' /* nCol values */ +#define FTS3_MATCHINFO_HITS 'x' /* 3*nCol*nPhrase values */ +#define FTS3_MATCHINFO_LHITS 'y' /* nCol*nPhrase values */ +#define FTS3_MATCHINFO_LHITS_BM 'b' /* nCol*nPhrase values */ + +/* +** The default value for the second argument to matchinfo(). +*/ +#define FTS3_MATCHINFO_DEFAULT "pcx" /* ** Used as an fts3ExprIterate() context when loading phrase doclists to ** Fts3Expr.aDoclist[]/nDoclist. */ typedef struct LoadDoclistCtx LoadDoclistCtx; struct LoadDoclistCtx { - Fts3Table *pTab; /* FTS3 Table */ + Fts3Cursor *pCsr; /* FTS3 Cursor */ int nPhrase; /* Number of phrases seen so far */ int nToken; /* Number of tokens seen so far */ }; /* @@ -108440,13 +153796,28 @@ */ typedef struct MatchInfo MatchInfo; struct MatchInfo { Fts3Cursor *pCursor; /* FTS3 Cursor */ int nCol; /* Number of columns in table */ + int nPhrase; /* Number of matchable phrases in query */ + sqlite3_int64 nDoc; /* Number of docs in database */ + char flag; u32 *aMatchinfo; /* Pre-allocated buffer */ }; +/* +** An instance of this structure is used to manage a pair of buffers, each +** (nElem * sizeof(u32)) bytes in size. See the MatchinfoBuffer code below +** for details. 
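**
** (Editor's note, not part of this check-in: as fts3MIBufferAlloc() below
** shows, the two slots are handed out and reclaimed via the aRef[] flags;
** only when both slots are already in use does the code fall back to a
** separately sqlite3_malloc()'d array, optionally seeded from the cached
** global (bGlobal) data.)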
+*/ +struct MatchinfoBuffer { + u8 aRef[3]; + int nElem; + int bGlobal; /* Set if global data is loaded */ + char *zMatchinfo; + u32 aMatchinfo[1]; +}; /* ** The snippet() and offsets() functions both return text values. An instance ** of the following structure is used to accumulate those values while the @@ -108457,10 +153828,101 @@ char *z; /* Pointer to buffer containing string */ int n; /* Length of z in bytes (excl. nul-term) */ int nAlloc; /* Allocated size of buffer z in bytes */ }; + +/************************************************************************* +** Start of MatchinfoBuffer code. +*/ + +/* +** Allocate a two-slot MatchinfoBuffer object. +*/ +static MatchinfoBuffer *fts3MIBufferNew(int nElem, const char *zMatchinfo){ + MatchinfoBuffer *pRet; + int nByte = sizeof(u32) * (2*nElem + 1) + sizeof(MatchinfoBuffer); + int nStr = (int)strlen(zMatchinfo); + + pRet = sqlite3_malloc(nByte + nStr+1); + if( pRet ){ + memset(pRet, 0, nByte); + pRet->aMatchinfo[0] = (u8*)(&pRet->aMatchinfo[1]) - (u8*)pRet; + pRet->aMatchinfo[1+nElem] = pRet->aMatchinfo[0] + sizeof(u32)*(nElem+1); + pRet->nElem = nElem; + pRet->zMatchinfo = ((char*)pRet) + nByte; + memcpy(pRet->zMatchinfo, zMatchinfo, nStr+1); + pRet->aRef[0] = 1; + } + + return pRet; +} + +static void fts3MIBufferFree(void *p){ + MatchinfoBuffer *pBuf = (MatchinfoBuffer*)((u8*)p - ((u32*)p)[-1]); + + assert( (u32*)p==&pBuf->aMatchinfo[1] + || (u32*)p==&pBuf->aMatchinfo[pBuf->nElem+2] + ); + if( (u32*)p==&pBuf->aMatchinfo[1] ){ + pBuf->aRef[1] = 0; + }else{ + pBuf->aRef[2] = 0; + } + + if( pBuf->aRef[0]==0 && pBuf->aRef[1]==0 && pBuf->aRef[2]==0 ){ + sqlite3_free(pBuf); + } +} + +static void (*fts3MIBufferAlloc(MatchinfoBuffer *p, u32 **paOut))(void*){ + void (*xRet)(void*) = 0; + u32 *aOut = 0; + + if( p->aRef[1]==0 ){ + p->aRef[1] = 1; + aOut = &p->aMatchinfo[1]; + xRet = fts3MIBufferFree; + } + else if( p->aRef[2]==0 ){ + p->aRef[2] = 1; + aOut = &p->aMatchinfo[p->nElem+2]; + xRet = fts3MIBufferFree; + }else{ + aOut = (u32*)sqlite3_malloc(p->nElem * sizeof(u32)); + if( aOut ){ + xRet = sqlite3_free; + if( p->bGlobal ) memcpy(aOut, &p->aMatchinfo[1], p->nElem*sizeof(u32)); + } + } + + *paOut = aOut; + return xRet; +} + +static void fts3MIBufferSetGlobal(MatchinfoBuffer *p){ + p->bGlobal = 1; + memcpy(&p->aMatchinfo[2+p->nElem], &p->aMatchinfo[1], p->nElem*sizeof(u32)); +} + +/* +** Free a MatchinfoBuffer object allocated using fts3MIBufferNew() +*/ +SQLITE_PRIVATE void sqlite3Fts3MIBufferFree(MatchinfoBuffer *p){ + if( p ){ + assert( p->aRef[0]==1 ); + p->aRef[0] = 0; + if( p->aRef[0]==0 && p->aRef[1]==0 && p->aRef[2]==0 ){ + sqlite3_free(p); + } + } +} + +/* +** End of MatchinfoBuffer code. +*************************************************************************/ + /* ** This function is used to help iterate through a position-list. A position ** list is a list of unique integers, sorted from smallest to largest. Each ** element of the list is represented by an FTS3 varint that takes the value @@ -108480,11 +153942,11 @@ ** After it returns, *piPos contains the value of the next element of the ** list and *pp is advanced to the following varint. */ static void fts3GetDeltaPosition(char **pp, int *piPos){ int iVal; - *pp += sqlite3Fts3GetVarint32(*pp, &iVal); + *pp += fts3GetVarint32(*pp, &iVal); *piPos += (iVal-2); } /* ** Helper function for fts3ExprIterate() (see below). 
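(Editor's worked example for fts3GetDeltaPosition() above, not part of this
check-in: positions are stored as deltas plus two, so the position list
3, 10, 12 is encoded as the varints 5, 9, 4. Decoding adds (iVal-2) at each
step, starting from 0: 0+3=3, 3+7=10, 10+2=12.)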
@@ -108494,11 +153956,11 @@ int *piPhrase, /* Pointer to phrase counter */ int (*x)(Fts3Expr*,int,void*), /* Callback function to invoke for phrases */ void *pCtx /* Second argument to pass to callback */ ){ int rc; /* Return code */ - int eType = pExpr->eType; /* Type of expression node pExpr */ + int eType = pExpr->eType; /* Type of expression node pExpr */ if( eType!=FTSQUERY_PHRASE ){ assert( pExpr->pLeft && pExpr->pRight ); rc = fts3ExprIterate2(pExpr->pLeft, piPhrase, x, pCtx); if( rc==SQLITE_OK && eType!=FTSQUERY_NOT ){ @@ -108528,96 +153990,29 @@ ){ int iPhrase = 0; /* Variable used as the phrase counter */ return fts3ExprIterate2(pExpr, &iPhrase, x, pCtx); } -/* -** The argument to this function is always a phrase node. Its doclist -** (Fts3Expr.aDoclist[]) and the doclists associated with all phrase nodes -** to the left of this one in the query tree have already been loaded. -** -** If this phrase node is part of a series of phrase nodes joined by -** NEAR operators (and is not the left-most of said series), then elements are -** removed from the phrases doclist consistent with the NEAR restriction. If -** required, elements may be removed from the doclists of phrases to the -** left of this one that are part of the same series of NEAR operator -** connected phrases. -** -** If an OOM error occurs, SQLITE_NOMEM is returned. Otherwise, SQLITE_OK. -*/ -static int fts3ExprNearTrim(Fts3Expr *pExpr){ - int rc = SQLITE_OK; - Fts3Expr *pParent = pExpr->pParent; - - assert( pExpr->eType==FTSQUERY_PHRASE ); - while( rc==SQLITE_OK - && pParent - && pParent->eType==FTSQUERY_NEAR - && pParent->pRight==pExpr - ){ - /* This expression (pExpr) is the right-hand-side of a NEAR operator. - ** Find the expression to the left of the same operator. - */ - int nNear = pParent->nNear; - Fts3Expr *pLeft = pParent->pLeft; - - if( pLeft->eType!=FTSQUERY_PHRASE ){ - assert( pLeft->eType==FTSQUERY_NEAR ); - assert( pLeft->pRight->eType==FTSQUERY_PHRASE ); - pLeft = pLeft->pRight; - } - - rc = sqlite3Fts3ExprNearTrim(pLeft, pExpr, nNear); - - pExpr = pLeft; - pParent = pExpr->pParent; - } - - return rc; -} /* ** This is an fts3ExprIterate() callback used while loading the doclists ** for each phrase into Fts3Expr.aDoclist[]/nDoclist. See also ** fts3ExprLoadDoclists(). */ -static int fts3ExprLoadDoclistsCb1(Fts3Expr *pExpr, int iPhrase, void *ctx){ +static int fts3ExprLoadDoclistsCb(Fts3Expr *pExpr, int iPhrase, void *ctx){ int rc = SQLITE_OK; + Fts3Phrase *pPhrase = pExpr->pPhrase; LoadDoclistCtx *p = (LoadDoclistCtx *)ctx; UNUSED_PARAMETER(iPhrase); p->nPhrase++; - p->nToken += pExpr->pPhrase->nToken; - - if( pExpr->isLoaded==0 ){ - rc = sqlite3Fts3ExprLoadDoclist(p->pTab, pExpr); - pExpr->isLoaded = 1; - if( rc==SQLITE_OK ){ - rc = fts3ExprNearTrim(pExpr); - } - } + p->nToken += pPhrase->nToken; return rc; } -/* -** This is an fts3ExprIterate() callback used while loading the doclists -** for each phrase into Fts3Expr.aDoclist[]/nDoclist. See also -** fts3ExprLoadDoclists(). -*/ -static int fts3ExprLoadDoclistsCb2(Fts3Expr *pExpr, int iPhrase, void *ctx){ - UNUSED_PARAMETER(iPhrase); - UNUSED_PARAMETER(ctx); - if( pExpr->aDoclist ){ - pExpr->pCurrent = pExpr->aDoclist; - pExpr->iCurrent = 0; - pExpr->pCurrent += sqlite3Fts3GetVarint(pExpr->pCurrent, &pExpr->iCurrent); - } - return SQLITE_OK; -} - /* ** Load the doclists for each phrase in the query associated with FTS3 cursor ** pCsr. 
** ** If pnPhrase is not NULL, then *pnPhrase is set to the number of matchable @@ -108631,19 +154026,27 @@ int *pnPhrase, /* OUT: Number of phrases in query */ int *pnToken /* OUT: Number of tokens in query */ ){ int rc; /* Return Code */ LoadDoclistCtx sCtx = {0,0,0}; /* Context for fts3ExprIterate() */ - sCtx.pTab = (Fts3Table *)pCsr->base.pVtab; - rc = fts3ExprIterate(pCsr->pExpr, fts3ExprLoadDoclistsCb1, (void *)&sCtx); - if( rc==SQLITE_OK ){ - (void)fts3ExprIterate(pCsr->pExpr, fts3ExprLoadDoclistsCb2, 0); - } + sCtx.pCsr = pCsr; + rc = fts3ExprIterate(pCsr->pExpr, fts3ExprLoadDoclistsCb, (void *)&sCtx); if( pnPhrase ) *pnPhrase = sCtx.nPhrase; if( pnToken ) *pnToken = sCtx.nToken; return rc; } + +static int fts3ExprPhraseCountCb(Fts3Expr *pExpr, int iPhrase, void *ctx){ + (*(int *)ctx)++; + pExpr->iPhrase = iPhrase; + return SQLITE_OK; +} +static int fts3ExprPhraseCount(Fts3Expr *pExpr){ + int nPhrase = 0; + (void)fts3ExprIterate(pExpr, fts3ExprPhraseCountCb, (void *)&nPhrase); + return nPhrase; +} /* ** Advance the position list iterator specified by the first two ** arguments so that it points to the first element with a value greater ** than or equal to parameter iNext. @@ -108771,38 +154174,42 @@ */ static int fts3SnippetFindPositions(Fts3Expr *pExpr, int iPhrase, void *ctx){ SnippetIter *p = (SnippetIter *)ctx; SnippetPhrase *pPhrase = &p->aPhrase[iPhrase]; char *pCsr; + int rc; pPhrase->nToken = pExpr->pPhrase->nToken; - - pCsr = sqlite3Fts3FindPositions(pExpr, p->pCsr->iPrevId, p->iCol); + rc = sqlite3Fts3EvalPhrasePoslist(p->pCsr, pExpr, p->iCol, &pCsr); + assert( rc==SQLITE_OK || pCsr==0 ); if( pCsr ){ int iFirst = 0; pPhrase->pList = pCsr; fts3GetDeltaPosition(&pCsr, &iFirst); + assert( iFirst>=0 ); pPhrase->pHead = pCsr; pPhrase->pTail = pCsr; pPhrase->iHead = iFirst; pPhrase->iTail = iFirst; }else{ - assert( pPhrase->pList==0 && pPhrase->pHead==0 && pPhrase->pTail==0 ); + assert( rc!=SQLITE_OK || ( + pPhrase->pList==0 && pPhrase->pHead==0 && pPhrase->pTail==0 + )); } - return SQLITE_OK; + return rc; } /* ** Select the fragment of text consisting of nFragment contiguous tokens ** from column iCol that represent the "best" snippet. The best snippet ** is the snippet with the highest score, where scores are calculated ** by adding: ** -** (a) +1 point for each occurence of a matchable phrase in the snippet. +** (a) +1 point for each occurrence of a matchable phrase in the snippet. ** -** (b) +1000 points for the first occurence of each matchable phrase in +** (b) +1000 points for the first occurrence of each matchable phrase in ** the snippet for which the corresponding mCovered bit is not set. ** ** The selected snippet parameters are stored in structure *pFragment before ** returning. The score of the selected snippet is stored in *piScore ** before returning. @@ -108849,41 +154256,43 @@ sIter.pCsr = pCsr; sIter.iCol = iCol; sIter.nSnippet = nSnippet; sIter.nPhrase = nList; sIter.iCurrent = -1; - (void)fts3ExprIterate(pCsr->pExpr, fts3SnippetFindPositions, (void *)&sIter); - - /* Set the *pmSeen output variable. */ - for(i=0; i<nList; i++){ - if( sIter.aPhrase[i].pHead ){ - *pmSeen |= (u64)1 << i; - } - } - - /* Loop through all candidate snippets. Store the best snippet in - ** *pFragment. Store its associated 'score' in iBestScore. 
- */ - pFragment->iCol = iCol; - while( !fts3SnippetNextCandidate(&sIter) ){ - int iPos; - int iScore; - u64 mCover; - u64 mHighlight; - fts3SnippetDetails(&sIter, mCovered, &iPos, &iScore, &mCover, &mHighlight); - assert( iScore>=0 ); - if( iScore>iBestScore ){ - pFragment->iPos = iPos; - pFragment->hlmask = mHighlight; - pFragment->covered = mCover; - iBestScore = iScore; - } - } - + rc = fts3ExprIterate(pCsr->pExpr, fts3SnippetFindPositions, (void*)&sIter); + if( rc==SQLITE_OK ){ + + /* Set the *pmSeen output variable. */ + for(i=0; i<nList; i++){ + if( sIter.aPhrase[i].pHead ){ + *pmSeen |= (u64)1 << i; + } + } + + /* Loop through all candidate snippets. Store the best snippet in + ** *pFragment. Store its associated 'score' in iBestScore. + */ + pFragment->iCol = iCol; + while( !fts3SnippetNextCandidate(&sIter) ){ + int iPos; + int iScore; + u64 mCover; + u64 mHighlite; + fts3SnippetDetails(&sIter, mCovered, &iPos, &iScore, &mCover,&mHighlite); + assert( iScore>=0 ); + if( iScore>iBestScore ){ + pFragment->iPos = iPos; + pFragment->hlmask = mHighlite; + pFragment->covered = mCover; + iBestScore = iScore; + } + } + + *piScore = iBestScore; + } sqlite3_free(sIter.aPhrase); - *piScore = iBestScore; - return SQLITE_OK; + return rc; } /* ** Append a string to the string-buffer passed as the first argument. @@ -108911,10 +154320,11 @@ return SQLITE_NOMEM; } pStr->z = zNew; pStr->nAlloc = nAlloc; } + assert( pStr->z!=0 && (pStr->nAlloc >= pStr->n+nAppend+1) ); /* Append the data to the string buffer. */ memcpy(&pStr->z[pStr->n], zAppend, nAppend); pStr->n += nAppend; pStr->z[pStr->n] = '\0'; @@ -108942,10 +154352,11 @@ ** is no way for fts3BestSnippet() to know whether or not the document ** actually contains terms that follow the final highlighted term. */ static int fts3SnippetShift( Fts3Table *pTab, /* FTS3 table snippet comes from */ + int iLangid, /* Language id to use in tokenizing */ int nSnippet, /* Number of tokens desired for snippet */ const char *zDoc, /* Document text to extract snippet from */ int nDoc, /* Size of buffer zDoc in bytes */ int *piPos, /* IN/OUT: First token of snippet */ u64 *pHlmask /* IN/OUT: Mask of tokens to highlight */ @@ -108977,17 +154388,16 @@ pMod = (sqlite3_tokenizer_module *)pTab->pTokenizer->pModule; /* Open a cursor on zDoc/nDoc. Check if there are (nSnippet+nDesired) ** or more tokens in zDoc/nDoc. 
*/ - rc = pMod->xOpen(pTab->pTokenizer, zDoc, nDoc, &pC); + rc = sqlite3Fts3OpenTokenizer(pTab->pTokenizer, iLangid, zDoc, nDoc, &pC); if( rc!=SQLITE_OK ){ return rc; } - pC->pTokenizer = pTab->pTokenizer; while( rc==SQLITE_OK && iCurrent<(nSnippet+nDesired) ){ - const char *ZDUMMY; int DUMMY1, DUMMY2, DUMMY3; + const char *ZDUMMY; int DUMMY1 = 0, DUMMY2 = 0, DUMMY3 = 0; rc = pMod->xNext(pC, &ZDUMMY, &DUMMY1, &DUMMY2, &DUMMY3, &iCurrent); } pMod->xClose(pC); if( rc!=SQLITE_OK && rc!=SQLITE_DONE ){ return rc; } @@ -109027,12 +154437,10 @@ int iPos = pFragment->iPos; /* First token of snippet */ u64 hlmask = pFragment->hlmask; /* Highlight-mask for snippet */ int iCol = pFragment->iCol+1; /* Query column to extract text from */ sqlite3_tokenizer_module *pMod; /* Tokenizer module methods object */ sqlite3_tokenizer_cursor *pC; /* Tokenizer cursor open on zDoc/nDoc */ - const char *ZDUMMY; /* Dummy argument used with tokenizer */ - int DUMMY1; /* Dummy argument used with tokenizer */ zDoc = (const char *)sqlite3_column_text(pCsr->pStmt, iCol); if( zDoc==0 ){ if( sqlite3_column_type(pCsr->pStmt, iCol)!=SQLITE_NULL ){ return SQLITE_NOMEM; @@ -109041,21 +154449,33 @@ } nDoc = sqlite3_column_bytes(pCsr->pStmt, iCol); /* Open a token cursor on the document. */ pMod = (sqlite3_tokenizer_module *)pTab->pTokenizer->pModule; - rc = pMod->xOpen(pTab->pTokenizer, zDoc, nDoc, &pC); + rc = sqlite3Fts3OpenTokenizer(pTab->pTokenizer, pCsr->iLangid, zDoc,nDoc,&pC); if( rc!=SQLITE_OK ){ return rc; } - pC->pTokenizer = pTab->pTokenizer; while( rc==SQLITE_OK ){ - int iBegin; /* Offset in zDoc of start of token */ - int iFin; /* Offset in zDoc of end of token */ - int isHighlight; /* True for highlighted terms */ + const char *ZDUMMY; /* Dummy argument used with tokenizer */ + int DUMMY1 = -1; /* Dummy argument used with tokenizer */ + int iBegin = 0; /* Offset in zDoc of start of token */ + int iFin = 0; /* Offset in zDoc of end of token */ + int isHighlight = 0; /* True for highlighted terms */ + /* Variable DUMMY1 is initialized to a negative value above. Elsewhere + ** in the FTS code the variable that the third argument to xNext points to + ** is initialized to zero before the first (*but not necessarily + ** subsequent*) call to xNext(). This is done for a particular application + ** that needs to know whether or not the tokenizer is being used for + ** snippet generation or for some other purpose. + ** + ** Extreme care is required when writing code to depend on this + ** initialization. It is not a documented part of the tokenizer interface. + ** If a tokenizer is used directly by any code outside of FTS, this + ** convention might not be respected. */ rc = pMod->xNext(pC, &ZDUMMY, &DUMMY1, &iBegin, &iFin, &iCurrent); if( rc!=SQLITE_OK ){ if( rc==SQLITE_DONE ){ /* Special case - the last token of the snippet is also the last token ** of the column. Append any punctuation that occurred between the end @@ -109067,19 +154487,25 @@ } if( iCurrent<iPos ){ continue; } if( !isShiftDone ){ int n = nDoc - iBegin; - rc = fts3SnippetShift(pTab, nSnippet, &zDoc[iBegin], n, &iPos, &hlmask); + rc = fts3SnippetShift( + pTab, pCsr->iLangid, nSnippet, &zDoc[iBegin], n, &iPos, &hlmask + ); isShiftDone = 1; /* Now that the shift has been done, check if the initial "..." are ** required. They are required if (a) this is not the first fragment, ** or (b) this fragment does not begin at position 0 of its column. 
*/ - if( rc==SQLITE_OK && (iPos>0 || iFragment>0) ){ - rc = fts3StringAppend(pOut, zEllipsis, -1); + if( rc==SQLITE_OK ){ + if( iPos>0 || iFragment>0 ){ + rc = fts3StringAppend(pOut, zEllipsis, -1); + }else if( iBegin ){ + rc = fts3StringAppend(pOut, zDoc, iBegin); + } } if( rc!=SQLITE_OK || iCurrent<iPos ) continue; } if( iCurrent>=(iPos+nSnippet) ){ @@ -109131,147 +154557,549 @@ *ppCollist = pEnd; return nEntry; } -static void fts3LoadColumnlistCounts(char **pp, u32 *aOut, int isGlobal){ - char *pCsr = *pp; - while( *pCsr ){ - int nHit; - sqlite3_int64 iCol = 0; - if( *pCsr==0x01 ){ - pCsr++; - pCsr += sqlite3Fts3GetVarint(pCsr, &iCol); - } - nHit = fts3ColumnlistCount(&pCsr); - assert( nHit>0 ); - if( isGlobal ){ - aOut[iCol*3+1]++; - } - aOut[iCol*3] += nHit; - } - pCsr++; - *pp = pCsr; +/* +** This function gathers 'y' or 'b' data for a single phrase. +*/ +static void fts3ExprLHits( + Fts3Expr *pExpr, /* Phrase expression node */ + MatchInfo *p /* Matchinfo context */ +){ + Fts3Table *pTab = (Fts3Table *)p->pCursor->base.pVtab; + int iStart; + Fts3Phrase *pPhrase = pExpr->pPhrase; + char *pIter = pPhrase->doclist.pList; + int iCol = 0; + + assert( p->flag==FTS3_MATCHINFO_LHITS_BM || p->flag==FTS3_MATCHINFO_LHITS ); + if( p->flag==FTS3_MATCHINFO_LHITS ){ + iStart = pExpr->iPhrase * p->nCol; + }else{ + iStart = pExpr->iPhrase * ((p->nCol + 31) / 32); + } + + while( 1 ){ + int nHit = fts3ColumnlistCount(&pIter); + if( (pPhrase->iColumn>=pTab->nColumn || pPhrase->iColumn==iCol) ){ + if( p->flag==FTS3_MATCHINFO_LHITS ){ + p->aMatchinfo[iStart + iCol] = (u32)nHit; + }else if( nHit ){ + p->aMatchinfo[iStart + (iCol+1)/32] |= (1 << (iCol&0x1F)); + } + } + assert( *pIter==0x00 || *pIter==0x01 ); + if( *pIter!=0x01 ) break; + pIter++; + pIter += fts3GetVarint32(pIter, &iCol); + } +} + +/* +** Gather the results for matchinfo directives 'y' and 'b'. +*/ +static void fts3ExprLHitGather( + Fts3Expr *pExpr, + MatchInfo *p +){ + assert( (pExpr->pLeft==0)==(pExpr->pRight==0) ); + if( pExpr->bEof==0 && pExpr->iDocid==p->pCursor->iPrevId ){ + if( pExpr->pLeft ){ + fts3ExprLHitGather(pExpr->pLeft, p); + fts3ExprLHitGather(pExpr->pRight, p); + }else{ + fts3ExprLHits(pExpr, p); + } + } } /* ** fts3ExprIterate() callback used to collect the "global" matchinfo stats -** for a single query. The "global" stats are those elements of the matchinfo -** array that are constant for all rows returned by the current query. +** for a single query. +** +** fts3ExprIterate() callback to load the 'global' elements of a +** FTS3_MATCHINFO_HITS matchinfo array. The global stats are those elements +** of the matchinfo array that are constant for all rows returned by the +** current query. +** +** Argument pCtx is actually a pointer to a struct of type MatchInfo. This +** function populates Matchinfo.aMatchinfo[] as follows: +** +** for(iCol=0; iCol<nCol; iCol++){ +** aMatchinfo[3*iPhrase*nCol + 3*iCol + 1] = X; +** aMatchinfo[3*iPhrase*nCol + 3*iCol + 2] = Y; +** } +** +** where X is the number of matches for phrase iPhrase is column iCol of all +** rows of the table. Y is the number of rows for which column iCol contains +** at least one instance of phrase iPhrase. +** +** If the phrase pExpr consists entirely of deferred tokens, then all X and +** Y values are set to nDoc, where nDoc is the number of documents in the +** file system. This is done because the full-text index doclist is required +** to calculate these values properly, and the full-text index doclist is +** not available for deferred tokens. 
*/ -static int fts3ExprGlobalMatchinfoCb( +static int fts3ExprGlobalHitsCb( Fts3Expr *pExpr, /* Phrase expression node */ int iPhrase, /* Phrase number (numbered from zero) */ void *pCtx /* Pointer to MatchInfo structure */ ){ MatchInfo *p = (MatchInfo *)pCtx; - char *pCsr; - char *pEnd; - const int iStart = 2 + (iPhrase * p->nCol * 3) + 1; - - assert( pExpr->isLoaded ); - - /* Fill in the global hit count matrix row for this phrase. */ - pCsr = pExpr->aDoclist; - pEnd = &pExpr->aDoclist[pExpr->nDoclist]; - while( pCsr<pEnd ){ - while( *pCsr++ & 0x80 ); /* Skip past docid. */ - fts3LoadColumnlistCounts(&pCsr, &p->aMatchinfo[iStart], 1); - } - - return SQLITE_OK; + return sqlite3Fts3EvalPhraseStats( + p->pCursor, pExpr, &p->aMatchinfo[3*iPhrase*p->nCol] + ); } /* -** fts3ExprIterate() callback used to collect the "local" matchinfo stats -** for a single query. The "local" stats are those elements of the matchinfo +** fts3ExprIterate() callback used to collect the "local" part of the +** FTS3_MATCHINFO_HITS array. The local stats are those elements of the ** array that are different for each row returned by the query. */ -static int fts3ExprLocalMatchinfoCb( +static int fts3ExprLocalHitsCb( Fts3Expr *pExpr, /* Phrase expression node */ int iPhrase, /* Phrase number */ void *pCtx /* Pointer to MatchInfo structure */ ){ + int rc = SQLITE_OK; MatchInfo *p = (MatchInfo *)pCtx; + int iStart = iPhrase * p->nCol * 3; + int i; - if( pExpr->aDoclist ){ + for(i=0; i<p->nCol && rc==SQLITE_OK; i++){ char *pCsr; - int iStart = 2 + (iPhrase * p->nCol * 3); - int i; - - for(i=0; i<p->nCol; i++) p->aMatchinfo[iStart+i*3] = 0; - - pCsr = sqlite3Fts3FindPositions(pExpr, p->pCursor->iPrevId, -1); + rc = sqlite3Fts3EvalPhrasePoslist(p->pCursor, pExpr, i, &pCsr); if( pCsr ){ - fts3LoadColumnlistCounts(&pCsr, &p->aMatchinfo[iStart], 0); + p->aMatchinfo[iStart+i*3] = fts3ColumnlistCount(&pCsr); + }else{ + p->aMatchinfo[iStart+i*3] = 0; } } + return rc; +} + +static int fts3MatchinfoCheck( + Fts3Table *pTab, + char cArg, + char **pzErr +){ + if( (cArg==FTS3_MATCHINFO_NPHRASE) + || (cArg==FTS3_MATCHINFO_NCOL) + || (cArg==FTS3_MATCHINFO_NDOC && pTab->bFts4) + || (cArg==FTS3_MATCHINFO_AVGLENGTH && pTab->bFts4) + || (cArg==FTS3_MATCHINFO_LENGTH && pTab->bHasDocsize) + || (cArg==FTS3_MATCHINFO_LCS) + || (cArg==FTS3_MATCHINFO_HITS) + || (cArg==FTS3_MATCHINFO_LHITS) + || (cArg==FTS3_MATCHINFO_LHITS_BM) + ){ + return SQLITE_OK; + } + sqlite3Fts3ErrMsg(pzErr, "unrecognized matchinfo request: %c", cArg); + return SQLITE_ERROR; +} + +static int fts3MatchinfoSize(MatchInfo *pInfo, char cArg){ + int nVal; /* Number of integers output by cArg */ + + switch( cArg ){ + case FTS3_MATCHINFO_NDOC: + case FTS3_MATCHINFO_NPHRASE: + case FTS3_MATCHINFO_NCOL: + nVal = 1; + break; + + case FTS3_MATCHINFO_AVGLENGTH: + case FTS3_MATCHINFO_LENGTH: + case FTS3_MATCHINFO_LCS: + nVal = pInfo->nCol; + break; + + case FTS3_MATCHINFO_LHITS: + nVal = pInfo->nCol * pInfo->nPhrase; + break; + + case FTS3_MATCHINFO_LHITS_BM: + nVal = pInfo->nPhrase * ((pInfo->nCol + 31) / 32); + break; + + default: + assert( cArg==FTS3_MATCHINFO_HITS ); + nVal = pInfo->nCol * pInfo->nPhrase * 3; + break; + } + + return nVal; +} + +static int fts3MatchinfoSelectDoctotal( + Fts3Table *pTab, + sqlite3_stmt **ppStmt, + sqlite3_int64 *pnDoc, + const char **paLen +){ + sqlite3_stmt *pStmt; + const char *a; + sqlite3_int64 nDoc; + + if( !*ppStmt ){ + int rc = sqlite3Fts3SelectDoctotal(pTab, ppStmt); + if( rc!=SQLITE_OK ) return rc; + } + pStmt = *ppStmt; + assert( 
sqlite3_data_count(pStmt)==1 ); + + a = sqlite3_column_blob(pStmt, 0); + a += sqlite3Fts3GetVarint(a, &nDoc); + if( nDoc==0 ) return FTS_CORRUPT_VTAB; + *pnDoc = (u32)nDoc; + + if( paLen ) *paLen = a; + return SQLITE_OK; +} + +/* +** An instance of the following structure is used to store state while +** iterating through a multi-column position-list corresponding to the +** hits for a single phrase on a single row in order to calculate the +** values for a matchinfo() FTS3_MATCHINFO_LCS request. +*/ +typedef struct LcsIterator LcsIterator; +struct LcsIterator { + Fts3Expr *pExpr; /* Pointer to phrase expression */ + int iPosOffset; /* Tokens count up to end of this phrase */ + char *pRead; /* Cursor used to iterate through aDoclist */ + int iPos; /* Current position */ +}; + +/* +** If LcsIterator.iCol is set to the following value, the iterator has +** finished iterating through all offsets for all columns. +*/ +#define LCS_ITERATOR_FINISHED 0x7FFFFFFF; + +static int fts3MatchinfoLcsCb( + Fts3Expr *pExpr, /* Phrase expression node */ + int iPhrase, /* Phrase number (numbered from zero) */ + void *pCtx /* Pointer to MatchInfo structure */ +){ + LcsIterator *aIter = (LcsIterator *)pCtx; + aIter[iPhrase].pExpr = pExpr; + return SQLITE_OK; +} + +/* +** Advance the iterator passed as an argument to the next position. Return +** 1 if the iterator is at EOF or if it now points to the start of the +** position list for the next column. +*/ +static int fts3LcsIteratorAdvance(LcsIterator *pIter){ + char *pRead = pIter->pRead; + sqlite3_int64 iRead; + int rc = 0; + + pRead += sqlite3Fts3GetVarint(pRead, &iRead); + if( iRead==0 || iRead==1 ){ + pRead = 0; + rc = 1; + }else{ + pIter->iPos += (int)(iRead-2); + } + + pIter->pRead = pRead; + return rc; +} + +/* +** This function implements the FTS3_MATCHINFO_LCS matchinfo() flag. +** +** If the call is successful, the longest-common-substring lengths for each +** column are written into the first nCol elements of the pInfo->aMatchinfo[] +** array before returning. SQLITE_OK is returned in this case. +** +** Otherwise, if an error occurs, an SQLite error code is returned and the +** data written to the first nCol elements of pInfo->aMatchinfo[] is +** undefined. +*/ +static int fts3MatchinfoLcs(Fts3Cursor *pCsr, MatchInfo *pInfo){ + LcsIterator *aIter; + int i; + int iCol; + int nToken = 0; + + /* Allocate and populate the array of LcsIterator objects. The array + ** contains one element for each matchable phrase in the query. 
+ **/ + aIter = sqlite3_malloc(sizeof(LcsIterator) * pCsr->nPhrase); + if( !aIter ) return SQLITE_NOMEM; + memset(aIter, 0, sizeof(LcsIterator) * pCsr->nPhrase); + (void)fts3ExprIterate(pCsr->pExpr, fts3MatchinfoLcsCb, (void*)aIter); + + for(i=0; i<pInfo->nPhrase; i++){ + LcsIterator *pIter = &aIter[i]; + nToken -= pIter->pExpr->pPhrase->nToken; + pIter->iPosOffset = nToken; + } + + for(iCol=0; iCol<pInfo->nCol; iCol++){ + int nLcs = 0; /* LCS value for this column */ + int nLive = 0; /* Number of iterators in aIter not at EOF */ + + for(i=0; i<pInfo->nPhrase; i++){ + int rc; + LcsIterator *pIt = &aIter[i]; + rc = sqlite3Fts3EvalPhrasePoslist(pCsr, pIt->pExpr, iCol, &pIt->pRead); + if( rc!=SQLITE_OK ) return rc; + if( pIt->pRead ){ + pIt->iPos = pIt->iPosOffset; + fts3LcsIteratorAdvance(&aIter[i]); + nLive++; + } + } + + while( nLive>0 ){ + LcsIterator *pAdv = 0; /* The iterator to advance by one position */ + int nThisLcs = 0; /* LCS for the current iterator positions */ + + for(i=0; i<pInfo->nPhrase; i++){ + LcsIterator *pIter = &aIter[i]; + if( pIter->pRead==0 ){ + /* This iterator is already at EOF for this column. */ + nThisLcs = 0; + }else{ + if( pAdv==0 || pIter->iPos<pAdv->iPos ){ + pAdv = pIter; + } + if( nThisLcs==0 || pIter->iPos==pIter[-1].iPos ){ + nThisLcs++; + }else{ + nThisLcs = 1; + } + if( nThisLcs>nLcs ) nLcs = nThisLcs; + } + } + if( fts3LcsIteratorAdvance(pAdv) ) nLive--; + } + + pInfo->aMatchinfo[iCol] = nLcs; + } + + sqlite3_free(aIter); return SQLITE_OK; } + +/* +** Populate the buffer pInfo->aMatchinfo[] with an array of integers to +** be returned by the matchinfo() function. Argument zArg contains the +** format string passed as the second argument to matchinfo (or the +** default value "pcx" if no second argument was specified). The format +** string has already been validated and the pInfo->aMatchinfo[] array +** is guaranteed to be large enough for the output. +** +** If bGlobal is true, then populate all fields of the matchinfo() output. +** If it is false, then assume that those fields that do not change between +** rows (i.e. FTS3_MATCHINFO_NPHRASE, NCOL, NDOC, AVGLENGTH and part of HITS) +** have already been populated. +** +** Return SQLITE_OK if successful, or an SQLite error code if an error +** occurs. If a value other than SQLITE_OK is returned, the state the +** pInfo->aMatchinfo[] buffer is left in is undefined. 
+*/ +static int fts3MatchinfoValues( + Fts3Cursor *pCsr, /* FTS3 cursor object */ + int bGlobal, /* True to grab the global stats */ + MatchInfo *pInfo, /* Matchinfo context object */ + const char *zArg /* Matchinfo format string */ +){ + int rc = SQLITE_OK; + int i; + Fts3Table *pTab = (Fts3Table *)pCsr->base.pVtab; + sqlite3_stmt *pSelect = 0; + + for(i=0; rc==SQLITE_OK && zArg[i]; i++){ + pInfo->flag = zArg[i]; + switch( zArg[i] ){ + case FTS3_MATCHINFO_NPHRASE: + if( bGlobal ) pInfo->aMatchinfo[0] = pInfo->nPhrase; + break; + + case FTS3_MATCHINFO_NCOL: + if( bGlobal ) pInfo->aMatchinfo[0] = pInfo->nCol; + break; + + case FTS3_MATCHINFO_NDOC: + if( bGlobal ){ + sqlite3_int64 nDoc = 0; + rc = fts3MatchinfoSelectDoctotal(pTab, &pSelect, &nDoc, 0); + pInfo->aMatchinfo[0] = (u32)nDoc; + } + break; + + case FTS3_MATCHINFO_AVGLENGTH: + if( bGlobal ){ + sqlite3_int64 nDoc; /* Number of rows in table */ + const char *a; /* Aggregate column length array */ + + rc = fts3MatchinfoSelectDoctotal(pTab, &pSelect, &nDoc, &a); + if( rc==SQLITE_OK ){ + int iCol; + for(iCol=0; iCol<pInfo->nCol; iCol++){ + u32 iVal; + sqlite3_int64 nToken; + a += sqlite3Fts3GetVarint(a, &nToken); + iVal = (u32)(((u32)(nToken&0xffffffff)+nDoc/2)/nDoc); + pInfo->aMatchinfo[iCol] = iVal; + } + } + } + break; + + case FTS3_MATCHINFO_LENGTH: { + sqlite3_stmt *pSelectDocsize = 0; + rc = sqlite3Fts3SelectDocsize(pTab, pCsr->iPrevId, &pSelectDocsize); + if( rc==SQLITE_OK ){ + int iCol; + const char *a = sqlite3_column_blob(pSelectDocsize, 0); + for(iCol=0; iCol<pInfo->nCol; iCol++){ + sqlite3_int64 nToken; + a += sqlite3Fts3GetVarint(a, &nToken); + pInfo->aMatchinfo[iCol] = (u32)nToken; + } + } + sqlite3_reset(pSelectDocsize); + break; + } + + case FTS3_MATCHINFO_LCS: + rc = fts3ExprLoadDoclists(pCsr, 0, 0); + if( rc==SQLITE_OK ){ + rc = fts3MatchinfoLcs(pCsr, pInfo); + } + break; + + case FTS3_MATCHINFO_LHITS_BM: + case FTS3_MATCHINFO_LHITS: { + int nZero = fts3MatchinfoSize(pInfo, zArg[i]) * sizeof(u32); + memset(pInfo->aMatchinfo, 0, nZero); + fts3ExprLHitGather(pCsr->pExpr, pInfo); + break; + } + + default: { + Fts3Expr *pExpr; + assert( zArg[i]==FTS3_MATCHINFO_HITS ); + pExpr = pCsr->pExpr; + rc = fts3ExprLoadDoclists(pCsr, 0, 0); + if( rc!=SQLITE_OK ) break; + if( bGlobal ){ + if( pCsr->pDeferred ){ + rc = fts3MatchinfoSelectDoctotal(pTab, &pSelect, &pInfo->nDoc, 0); + if( rc!=SQLITE_OK ) break; + } + rc = fts3ExprIterate(pExpr, fts3ExprGlobalHitsCb,(void*)pInfo); + sqlite3Fts3EvalTestDeferred(pCsr, &rc); + if( rc!=SQLITE_OK ) break; + } + (void)fts3ExprIterate(pExpr, fts3ExprLocalHitsCb,(void*)pInfo); + break; + } + } + + pInfo->aMatchinfo += fts3MatchinfoSize(pInfo, zArg[i]); + } + + sqlite3_reset(pSelect); + return rc; +} + /* ** Populate pCsr->aMatchinfo[] with data for the current row. The ** 'matchinfo' data is an array of 32-bit unsigned integers (C type u32). 
*/ -static int fts3GetMatchinfo(Fts3Cursor *pCsr){ +static void fts3GetMatchinfo( + sqlite3_context *pCtx, /* Return results here */ + Fts3Cursor *pCsr, /* FTS3 Cursor object */ + const char *zArg /* Second argument to matchinfo() function */ +){ MatchInfo sInfo; Fts3Table *pTab = (Fts3Table *)pCsr->base.pVtab; int rc = SQLITE_OK; + int bGlobal = 0; /* Collect 'global' stats as well as local */ + u32 *aOut = 0; + void (*xDestroyOut)(void*) = 0; + + memset(&sInfo, 0, sizeof(MatchInfo)); sInfo.pCursor = pCsr; sInfo.nCol = pTab->nColumn; - if( pCsr->aMatchinfo==0 ){ - /* If Fts3Cursor.aMatchinfo[] is NULL, then this is the first time the - ** matchinfo function has been called for this query. In this case - ** allocate the array used to accumulate the matchinfo data and - ** initialize those elements that are constant for every row. - */ - int nPhrase; /* Number of phrases */ - int nMatchinfo; /* Number of u32 elements in match-info */ - - /* Load doclists for each phrase in the query. */ - rc = fts3ExprLoadDoclists(pCsr, &nPhrase, 0); - if( rc!=SQLITE_OK ){ - return rc; - } - nMatchinfo = 2 + 3*sInfo.nCol*nPhrase; - if( pTab->bHasDocsize ){ - nMatchinfo += 1 + 2*pTab->nColumn; - } - - sInfo.aMatchinfo = (u32 *)sqlite3_malloc(sizeof(u32)*nMatchinfo); - if( !sInfo.aMatchinfo ){ - return SQLITE_NOMEM; - } - memset(sInfo.aMatchinfo, 0, sizeof(u32)*nMatchinfo); - - - /* First element of match-info is the number of phrases in the query */ - sInfo.aMatchinfo[0] = nPhrase; - sInfo.aMatchinfo[1] = sInfo.nCol; - (void)fts3ExprIterate(pCsr->pExpr, fts3ExprGlobalMatchinfoCb,(void*)&sInfo); - if( pTab->bHasDocsize ){ - int ofst = 2 + 3*sInfo.aMatchinfo[0]*sInfo.aMatchinfo[1]; - rc = sqlite3Fts3MatchinfoDocsizeGlobal(pCsr, &sInfo.aMatchinfo[ofst]); - } - pCsr->aMatchinfo = sInfo.aMatchinfo; + /* If there is cached matchinfo() data, but the format string for the + ** cache does not match the format string for this request, discard + ** the cached data. */ + if( pCsr->pMIBuffer && strcmp(pCsr->pMIBuffer->zMatchinfo, zArg) ){ + sqlite3Fts3MIBufferFree(pCsr->pMIBuffer); + pCsr->pMIBuffer = 0; + } + + /* If Fts3Cursor.pMIBuffer is NULL, then this is the first time the + ** matchinfo function has been called for this query. In this case + ** allocate the array used to accumulate the matchinfo data and + ** initialize those elements that are constant for every row. + */ + if( pCsr->pMIBuffer==0 ){ + int nMatchinfo = 0; /* Number of u32 elements in match-info */ + int i; /* Used to iterate through zArg */ + + /* Determine the number of phrases in the query */ + pCsr->nPhrase = fts3ExprPhraseCount(pCsr->pExpr); + sInfo.nPhrase = pCsr->nPhrase; + + /* Determine the number of integers in the buffer returned by this call. */ + for(i=0; zArg[i]; i++){ + char *zErr = 0; + if( fts3MatchinfoCheck(pTab, zArg[i], &zErr) ){ + sqlite3_result_error(pCtx, zErr, -1); + sqlite3_free(zErr); + return; + } + nMatchinfo += fts3MatchinfoSize(&sInfo, zArg[i]); + } + + /* Allocate space for Fts3Cursor.aMatchinfo[] and Fts3Cursor.zMatchinfo. 
*/ + pCsr->pMIBuffer = fts3MIBufferNew(nMatchinfo, zArg); + if( !pCsr->pMIBuffer ) rc = SQLITE_NOMEM; + pCsr->isMatchinfoNeeded = 1; + bGlobal = 1; } - sInfo.aMatchinfo = pCsr->aMatchinfo; - if( rc==SQLITE_OK && pCsr->isMatchinfoNeeded ){ - (void)fts3ExprIterate(pCsr->pExpr, fts3ExprLocalMatchinfoCb, (void*)&sInfo); - if( pTab->bHasDocsize ){ - int ofst = 2 + 3*sInfo.aMatchinfo[0]*sInfo.aMatchinfo[1]; - rc = sqlite3Fts3MatchinfoDocsizeLocal(pCsr, &sInfo.aMatchinfo[ofst]); - } - pCsr->isMatchinfoNeeded = 0; + if( rc==SQLITE_OK ){ + xDestroyOut = fts3MIBufferAlloc(pCsr->pMIBuffer, &aOut); + if( xDestroyOut==0 ){ + rc = SQLITE_NOMEM; + } } - return SQLITE_OK; + if( rc==SQLITE_OK ){ + sInfo.aMatchinfo = aOut; + sInfo.nPhrase = pCsr->nPhrase; + rc = fts3MatchinfoValues(pCsr, bGlobal, &sInfo, zArg); + if( bGlobal ){ + fts3MIBufferSetGlobal(pCsr->pMIBuffer); + } + } + + if( rc!=SQLITE_OK ){ + sqlite3_result_error_code(pCtx, rc); + if( xDestroyOut ) xDestroyOut(aOut); + }else{ + int n = pCsr->pMIBuffer->nElem * sizeof(u32); + sqlite3_result_blob(pCtx, aOut, n, xDestroyOut); + } } /* ** Implementation of snippet() function. */ @@ -109328,12 +155156,12 @@ /* Loop through all columns of the table being considered for snippets. ** If the iCol argument to this function was negative, this means all ** columns of the FTS3 table. Otherwise, only column iCol is considered. */ for(iRead=0; iRead<pTab->nColumn; iRead++){ - SnippetFragment sF; - int iS; + SnippetFragment sF = {0, 0, 0, 0}; + int iS = 0; if( iCol>=0 && iRead!=iCol ) continue; /* Find the best snippet of nFToken tokens in column iRead. */ rc = fts3BestSnippet(nFToken, pCsr, iRead, mCovered, &mSeen, &sF, &iS); if( rc!=SQLITE_OK ){ @@ -109362,10 +155190,11 @@ i, (i==nSnippet-1), nFToken, zStart, zEnd, zEllipsis, &res ); } snippet_out: + sqlite3Fts3SegmentsClose(pTab); if( rc!=SQLITE_OK ){ sqlite3_result_error_code(pCtx, rc); sqlite3_free(res.z); }else{ sqlite3_result_text(pCtx, res.z, -1, sqlite3_free); @@ -109381,10 +155210,11 @@ int iPos; /* Position just read from pList */ int iOff; /* Offset of this term from read positions */ }; struct TermOffsetCtx { + Fts3Cursor *pCsr; int iCol; /* Column of table to populate aTerm for */ int iTerm; sqlite3_int64 iDocid; TermOffset *aTerm; }; @@ -109396,13 +155226,14 @@ TermOffsetCtx *p = (TermOffsetCtx *)ctx; int nTerm; /* Number of tokens in phrase */ int iTerm; /* For looping through nTerm phrase terms */ char *pList; /* Pointer to position list for phrase */ int iPos = 0; /* First position in position-list */ + int rc; UNUSED_PARAMETER(iPhrase); - pList = sqlite3Fts3FindPositions(pExpr, p->iDocid, p->iCol); + rc = sqlite3Fts3EvalPhrasePoslist(p->pCsr, pExpr, p->iCol, &pList); nTerm = pExpr->pPhrase->nToken; if( pList ){ fts3GetDeltaPosition(&pList, &iPos); assert( iPos>=0 ); } @@ -109412,11 +155243,11 @@ pT->iOff = nTerm-iTerm-1; pT->pList = pList; pT->iPos = iPos; } - return SQLITE_OK; + return rc; } /* ** Implementation of offsets() function. 
*/ @@ -109424,12 +155255,10 @@ sqlite3_context *pCtx, /* SQLite function call context */ Fts3Cursor *pCsr /* Cursor object */ ){ Fts3Table *pTab = (Fts3Table *)pCsr->base.pVtab; sqlite3_tokenizer_module const *pMod = pTab->pTokenizer->pModule; - const char *ZDUMMY; /* Dummy argument used with xNext() */ - int NDUMMY; /* Dummy argument used with xNext() */ int rc; /* Return Code */ int nToken; /* Number of tokens in query */ int iCol; /* Column currently being processed */ StrBuffer res = {0, 0, 0}; /* Result string */ TermOffsetCtx sCtx; /* Context for fts3ExprTermOffsetInit() */ @@ -109451,29 +155280,32 @@ if( 0==sCtx.aTerm ){ rc = SQLITE_NOMEM; goto offsets_out; } sCtx.iDocid = pCsr->iPrevId; + sCtx.pCsr = pCsr; /* Loop through the table columns, appending offset information to ** string-buffer res for each column. */ for(iCol=0; iCol<pTab->nColumn; iCol++){ sqlite3_tokenizer_cursor *pC; /* Tokenizer cursor */ - int iStart; - int iEnd; - int iCurrent; + const char *ZDUMMY; /* Dummy argument used with xNext() */ + int NDUMMY = 0; /* Dummy argument used with xNext() */ + int iStart = 0; + int iEnd = 0; + int iCurrent = 0; const char *zDoc; int nDoc; /* Initialize the contents of sCtx.aTerm[] for column iCol. There is ** no way that this operation can fail, so the return code from ** fts3ExprIterate() can be discarded. */ sCtx.iCol = iCol; sCtx.iTerm = 0; - (void)fts3ExprIterate(pCsr->pExpr, fts3ExprTermOffsetInit, (void *)&sCtx); + (void)fts3ExprIterate(pCsr->pExpr, fts3ExprTermOffsetInit, (void*)&sCtx); /* Retreive the text stored in column iCol. If an SQL NULL is stored ** in column iCol, jump immediately to the next iteration of the loop. ** If an OOM occurs while retrieving the data (this can happen if SQLite ** needs to transform the data from utf-16 to utf-8), return SQLITE_NOMEM @@ -109488,13 +155320,14 @@ rc = SQLITE_NOMEM; goto offsets_out; } /* Initialize a tokenizer iterator to iterate through column iCol. */ - rc = pMod->xOpen(pTab->pTokenizer, zDoc, nDoc, &pC); + rc = sqlite3Fts3OpenTokenizer(pTab->pTokenizer, pCsr->iLangid, + zDoc, nDoc, &pC + ); if( rc!=SQLITE_OK ) goto offsets_out; - pC->pTokenizer = pTab->pTokenizer; rc = pMod->xNext(pC, &ZDUMMY, &NDUMMY, &iStart, &iEnd, &iCurrent); while( rc==SQLITE_OK ){ int i; /* Used to loop through terms */ int iMinPos = 0x7FFFFFFF; /* Position of next token */ @@ -109508,11 +155341,11 @@ } } if( !pTerm ){ /* All offsets for this column have been gathered. */ - break; + rc = SQLITE_DONE; }else{ assert( iCurrent<=iMinPos ); if( 0==(0xFE&*pTerm->pList) ){ pTerm->pList = 0; }else{ @@ -109525,12 +155358,12 @@ char aBuffer[64]; sqlite3_snprintf(sizeof(aBuffer), aBuffer, "%d %d %d %d ", iCol, pTerm-sCtx.aTerm, iStart, iEnd-iStart ); rc = fts3StringAppend(&res, aBuffer, -1); - }else if( rc==SQLITE_DONE ){ - rc = SQLITE_CORRUPT; + }else if( rc==SQLITE_DONE && pTab->zContentTbl==0 ){ + rc = FTS_CORRUPT_VTAB; } } } if( rc==SQLITE_DONE ){ rc = SQLITE_OK; @@ -109541,10 +155374,11 @@ } offsets_out: sqlite3_free(sCtx.aTerm); assert( rc!=SQLITE_DONE ); + sqlite3Fts3SegmentsClose(pTab); if( rc!=SQLITE_OK ){ sqlite3_result_error_code(pCtx, rc); sqlite3_free(res.z); }else{ sqlite3_result_text(pCtx, res.z, res.n-1, sqlite3_free); @@ -109553,32 +155387,801 @@ } /* ** Implementation of matchinfo() function. 
*/ -SQLITE_PRIVATE void sqlite3Fts3Matchinfo(sqlite3_context *pContext, Fts3Cursor *pCsr){ - int rc; +SQLITE_PRIVATE void sqlite3Fts3Matchinfo( + sqlite3_context *pContext, /* Function call context */ + Fts3Cursor *pCsr, /* FTS3 table cursor */ + const char *zArg /* Second arg to matchinfo() function */ +){ + Fts3Table *pTab = (Fts3Table *)pCsr->base.pVtab; + const char *zFormat; + + if( zArg ){ + zFormat = zArg; + }else{ + zFormat = FTS3_MATCHINFO_DEFAULT; + } + if( !pCsr->pExpr ){ sqlite3_result_blob(pContext, "", 0, SQLITE_STATIC); return; - } - rc = fts3GetMatchinfo(pCsr); - if( rc!=SQLITE_OK ){ - sqlite3_result_error_code(pContext, rc); }else{ - Fts3Table *pTab = (Fts3Table*)pCsr->base.pVtab; - int n = sizeof(u32)*(2+pCsr->aMatchinfo[0]*pCsr->aMatchinfo[1]*3); - if( pTab->bHasDocsize ){ - n += sizeof(u32)*(1 + 2*pTab->nColumn); - } - sqlite3_result_blob(pContext, pCsr->aMatchinfo, n, SQLITE_TRANSIENT); + /* Retrieve matchinfo() data. */ + fts3GetMatchinfo(pContext, pCsr, zFormat); + sqlite3Fts3SegmentsClose(pTab); } } #endif /************** End of fts3_snippet.c ****************************************/ +/************** Begin file fts3_unicode.c ************************************/ +/* +** 2012 May 24 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** Implementation of the "unicode" full-text-search tokenizer. +*/ + +#ifndef SQLITE_DISABLE_FTS3_UNICODE + +/* #include "fts3Int.h" */ +#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) + +/* #include <assert.h> */ +/* #include <stdlib.h> */ +/* #include <stdio.h> */ +/* #include <string.h> */ + +/* #include "fts3_tokenizer.h" */ + +/* +** The following two macros - READ_UTF8 and WRITE_UTF8 - have been copied +** from the sqlite3 source file utf.c. If this file is compiled as part +** of the amalgamation, they are not required. 
+*/ +#ifndef SQLITE_AMALGAMATION + +static const unsigned char sqlite3Utf8Trans1[] = { + 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, + 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, + 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, + 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, + 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, + 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, + 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, + 0x00, 0x01, 0x02, 0x03, 0x00, 0x01, 0x00, 0x00, +}; + +#define READ_UTF8(zIn, zTerm, c) \ + c = *(zIn++); \ + if( c>=0xc0 ){ \ + c = sqlite3Utf8Trans1[c-0xc0]; \ + while( zIn!=zTerm && (*zIn & 0xc0)==0x80 ){ \ + c = (c<<6) + (0x3f & *(zIn++)); \ + } \ + if( c<0x80 \ + || (c&0xFFFFF800)==0xD800 \ + || (c&0xFFFFFFFE)==0xFFFE ){ c = 0xFFFD; } \ + } + +#define WRITE_UTF8(zOut, c) { \ + if( c<0x00080 ){ \ + *zOut++ = (u8)(c&0xFF); \ + } \ + else if( c<0x00800 ){ \ + *zOut++ = 0xC0 + (u8)((c>>6)&0x1F); \ + *zOut++ = 0x80 + (u8)(c & 0x3F); \ + } \ + else if( c<0x10000 ){ \ + *zOut++ = 0xE0 + (u8)((c>>12)&0x0F); \ + *zOut++ = 0x80 + (u8)((c>>6) & 0x3F); \ + *zOut++ = 0x80 + (u8)(c & 0x3F); \ + }else{ \ + *zOut++ = 0xF0 + (u8)((c>>18) & 0x07); \ + *zOut++ = 0x80 + (u8)((c>>12) & 0x3F); \ + *zOut++ = 0x80 + (u8)((c>>6) & 0x3F); \ + *zOut++ = 0x80 + (u8)(c & 0x3F); \ + } \ +} + +#endif /* ifndef SQLITE_AMALGAMATION */ + +typedef struct unicode_tokenizer unicode_tokenizer; +typedef struct unicode_cursor unicode_cursor; + +struct unicode_tokenizer { + sqlite3_tokenizer base; + int bRemoveDiacritic; + int nException; + int *aiException; +}; + +struct unicode_cursor { + sqlite3_tokenizer_cursor base; + const unsigned char *aInput; /* Input text being tokenized */ + int nInput; /* Size of aInput[] in bytes */ + int iOff; /* Current offset within aInput[] */ + int iToken; /* Index of next token to be returned */ + char *zToken; /* storage for current token */ + int nAlloc; /* space allocated at zToken */ +}; + + +/* +** Destroy a tokenizer allocated by unicodeCreate(). +*/ +static int unicodeDestroy(sqlite3_tokenizer *pTokenizer){ + if( pTokenizer ){ + unicode_tokenizer *p = (unicode_tokenizer *)pTokenizer; + sqlite3_free(p->aiException); + sqlite3_free(p); + } + return SQLITE_OK; +} + +/* +** As part of a tokenchars= or separators= option, the CREATE VIRTUAL TABLE +** statement has specified that the tokenizer for this table shall consider +** all characters in string zIn/nIn to be separators (if bAlnum==0) or +** token characters (if bAlnum==1). +** +** For each codepoint in the zIn/nIn string, this function checks if the +** sqlite3FtsUnicodeIsalnum() function already returns the desired result. +** If so, no action is taken. Otherwise, the codepoint is added to the +** unicode_tokenizer.aiException[] array. For the purposes of tokenization, +** the return value of sqlite3FtsUnicodeIsalnum() is inverted for all +** codepoints in the aiException[] array. +** +** If a standalone diacritic mark (one that sqlite3FtsUnicodeIsdiacritic() +** identifies as a diacritic) occurs in the zIn/nIn string it is ignored. +** It is not possible to change the behavior of the tokenizer with respect +** to these codepoints. 
+*/ +static int unicodeAddExceptions( + unicode_tokenizer *p, /* Tokenizer to add exceptions to */ + int bAlnum, /* Replace Isalnum() return value with this */ + const char *zIn, /* Array of characters to make exceptions */ + int nIn /* Length of z in bytes */ +){ + const unsigned char *z = (const unsigned char *)zIn; + const unsigned char *zTerm = &z[nIn]; + int iCode; + int nEntry = 0; + + assert( bAlnum==0 || bAlnum==1 ); + + while( z<zTerm ){ + READ_UTF8(z, zTerm, iCode); + assert( (sqlite3FtsUnicodeIsalnum(iCode) & 0xFFFFFFFE)==0 ); + if( sqlite3FtsUnicodeIsalnum(iCode)!=bAlnum + && sqlite3FtsUnicodeIsdiacritic(iCode)==0 + ){ + nEntry++; + } + } + + if( nEntry ){ + int *aNew; /* New aiException[] array */ + int nNew; /* Number of valid entries in array aNew[] */ + + aNew = sqlite3_realloc(p->aiException, (p->nException+nEntry)*sizeof(int)); + if( aNew==0 ) return SQLITE_NOMEM; + nNew = p->nException; + + z = (const unsigned char *)zIn; + while( z<zTerm ){ + READ_UTF8(z, zTerm, iCode); + if( sqlite3FtsUnicodeIsalnum(iCode)!=bAlnum + && sqlite3FtsUnicodeIsdiacritic(iCode)==0 + ){ + int i, j; + for(i=0; i<nNew && aNew[i]<iCode; i++); + for(j=nNew; j>i; j--) aNew[j] = aNew[j-1]; + aNew[i] = iCode; + nNew++; + } + } + p->aiException = aNew; + p->nException = nNew; + } + + return SQLITE_OK; +} + +/* +** Return true if the p->aiException[] array contains the value iCode. +*/ +static int unicodeIsException(unicode_tokenizer *p, int iCode){ + if( p->nException>0 ){ + int *a = p->aiException; + int iLo = 0; + int iHi = p->nException-1; + + while( iHi>=iLo ){ + int iTest = (iHi + iLo) / 2; + if( iCode==a[iTest] ){ + return 1; + }else if( iCode>a[iTest] ){ + iLo = iTest+1; + }else{ + iHi = iTest-1; + } + } + } + + return 0; +} + +/* +** Return true if, for the purposes of tokenization, codepoint iCode is +** considered a token character (not a separator). +*/ +static int unicodeIsAlnum(unicode_tokenizer *p, int iCode){ + assert( (sqlite3FtsUnicodeIsalnum(iCode) & 0xFFFFFFFE)==0 ); + return sqlite3FtsUnicodeIsalnum(iCode) ^ unicodeIsException(p, iCode); +} + +/* +** Create a new tokenizer instance. +*/ +static int unicodeCreate( + int nArg, /* Size of array argv[] */ + const char * const *azArg, /* Tokenizer creation arguments */ + sqlite3_tokenizer **pp /* OUT: New tokenizer handle */ +){ + unicode_tokenizer *pNew; /* New tokenizer object */ + int i; + int rc = SQLITE_OK; + + pNew = (unicode_tokenizer *) sqlite3_malloc(sizeof(unicode_tokenizer)); + if( pNew==NULL ) return SQLITE_NOMEM; + memset(pNew, 0, sizeof(unicode_tokenizer)); + pNew->bRemoveDiacritic = 1; + + for(i=0; rc==SQLITE_OK && i<nArg; i++){ + const char *z = azArg[i]; + int n = (int)strlen(z); + + if( n==19 && memcmp("remove_diacritics=1", z, 19)==0 ){ + pNew->bRemoveDiacritic = 1; + } + else if( n==19 && memcmp("remove_diacritics=0", z, 19)==0 ){ + pNew->bRemoveDiacritic = 0; + } + else if( n>=11 && memcmp("tokenchars=", z, 11)==0 ){ + rc = unicodeAddExceptions(pNew, 1, &z[11], n-11); + } + else if( n>=11 && memcmp("separators=", z, 11)==0 ){ + rc = unicodeAddExceptions(pNew, 0, &z[11], n-11); + } + else{ + /* Unrecognized argument */ + rc = SQLITE_ERROR; + } + } + + if( rc!=SQLITE_OK ){ + unicodeDestroy((sqlite3_tokenizer *)pNew); + pNew = 0; + } + *pp = (sqlite3_tokenizer *)pNew; + return rc; +} + +/* +** Prepare to begin tokenizing a particular string. The input +** string to be tokenized is pInput[0..nBytes-1]. A cursor +** used to incrementally tokenize this string is returned in +** *ppCursor. 
+*/ +static int unicodeOpen( + sqlite3_tokenizer *p, /* The tokenizer */ + const char *aInput, /* Input string */ + int nInput, /* Size of string aInput in bytes */ + sqlite3_tokenizer_cursor **pp /* OUT: New cursor object */ +){ + unicode_cursor *pCsr; + + pCsr = (unicode_cursor *)sqlite3_malloc(sizeof(unicode_cursor)); + if( pCsr==0 ){ + return SQLITE_NOMEM; + } + memset(pCsr, 0, sizeof(unicode_cursor)); + + pCsr->aInput = (const unsigned char *)aInput; + if( aInput==0 ){ + pCsr->nInput = 0; + }else if( nInput<0 ){ + pCsr->nInput = (int)strlen(aInput); + }else{ + pCsr->nInput = nInput; + } + + *pp = &pCsr->base; + UNUSED_PARAMETER(p); + return SQLITE_OK; +} + +/* +** Close a tokenization cursor previously opened by a call to +** simpleOpen() above. +*/ +static int unicodeClose(sqlite3_tokenizer_cursor *pCursor){ + unicode_cursor *pCsr = (unicode_cursor *) pCursor; + sqlite3_free(pCsr->zToken); + sqlite3_free(pCsr); + return SQLITE_OK; +} + +/* +** Extract the next token from a tokenization cursor. The cursor must +** have been opened by a prior call to simpleOpen(). +*/ +static int unicodeNext( + sqlite3_tokenizer_cursor *pC, /* Cursor returned by simpleOpen */ + const char **paToken, /* OUT: Token text */ + int *pnToken, /* OUT: Number of bytes at *paToken */ + int *piStart, /* OUT: Starting offset of token */ + int *piEnd, /* OUT: Ending offset of token */ + int *piPos /* OUT: Position integer of token */ +){ + unicode_cursor *pCsr = (unicode_cursor *)pC; + unicode_tokenizer *p = ((unicode_tokenizer *)pCsr->base.pTokenizer); + int iCode = 0; + char *zOut; + const unsigned char *z = &pCsr->aInput[pCsr->iOff]; + const unsigned char *zStart = z; + const unsigned char *zEnd; + const unsigned char *zTerm = &pCsr->aInput[pCsr->nInput]; + + /* Scan past any delimiter characters before the start of the next token. + ** Return SQLITE_DONE early if this takes us all the way to the end of + ** the input. */ + while( z<zTerm ){ + READ_UTF8(z, zTerm, iCode); + if( unicodeIsAlnum(p, iCode) ) break; + zStart = z; + } + if( zStart>=zTerm ) return SQLITE_DONE; + + zOut = pCsr->zToken; + do { + int iOut; + + /* Grow the output buffer if required. */ + if( (zOut-pCsr->zToken)>=(pCsr->nAlloc-4) ){ + char *zNew = sqlite3_realloc(pCsr->zToken, pCsr->nAlloc+64); + if( !zNew ) return SQLITE_NOMEM; + zOut = &zNew[zOut - pCsr->zToken]; + pCsr->zToken = zNew; + pCsr->nAlloc += 64; + } + + /* Write the folded case of the last character read to the output */ + zEnd = z; + iOut = sqlite3FtsUnicodeFold(iCode, p->bRemoveDiacritic); + if( iOut ){ + WRITE_UTF8(zOut, iOut); + } + + /* If the cursor is not at EOF, read the next character */ + if( z>=zTerm ) break; + READ_UTF8(z, zTerm, iCode); + }while( unicodeIsAlnum(p, iCode) + || sqlite3FtsUnicodeIsdiacritic(iCode) + ); + + /* Set the output variables and return. */ + pCsr->iOff = (int)(z - pCsr->aInput); + *paToken = pCsr->zToken; + *pnToken = (int)(zOut - pCsr->zToken); + *piStart = (int)(zStart - pCsr->aInput); + *piEnd = (int)(zEnd - pCsr->aInput); + *piPos = pCsr->iToken++; + return SQLITE_OK; +} + +/* +** Set *ppModule to a pointer to the sqlite3_tokenizer_module +** structure for the unicode tokenizer. 
+*/ +SQLITE_PRIVATE void sqlite3Fts3UnicodeTokenizer(sqlite3_tokenizer_module const **ppModule){ + static const sqlite3_tokenizer_module module = { + 0, + unicodeCreate, + unicodeDestroy, + unicodeOpen, + unicodeClose, + unicodeNext, + 0, + }; + *ppModule = &module; +} + +#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) */ +#endif /* ifndef SQLITE_DISABLE_FTS3_UNICODE */ + +/************** End of fts3_unicode.c ****************************************/ +/************** Begin file fts3_unicode2.c ***********************************/ +/* +** 2012 May 25 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +*/ + +/* +** DO NOT EDIT THIS MACHINE GENERATED FILE. +*/ + +#ifndef SQLITE_DISABLE_FTS3_UNICODE +#if defined(SQLITE_ENABLE_FTS3) || defined(SQLITE_ENABLE_FTS4) + +/* #include <assert.h> */ + +/* +** Return true if the argument corresponds to a unicode codepoint +** classified as either a letter or a number. Otherwise false. +** +** The results are undefined if the value passed to this function +** is less than zero. +*/ +SQLITE_PRIVATE int sqlite3FtsUnicodeIsalnum(int c){ + /* Each unsigned integer in the following array corresponds to a contiguous + ** range of unicode codepoints that are not either letters or numbers (i.e. + ** codepoints for which this function should return 0). + ** + ** The most significant 22 bits in each 32-bit value contain the first + ** codepoint in the range. The least significant 10 bits are used to store + ** the size of the range (always at least 1). In other words, the value + ** ((C<<22) + N) represents a range of N codepoints starting with codepoint + ** C. It is not possible to represent a range larger than 1023 codepoints + ** using this format. 
+ */ + static const unsigned int aEntry[] = { + 0x00000030, 0x0000E807, 0x00016C06, 0x0001EC2F, 0x0002AC07, + 0x0002D001, 0x0002D803, 0x0002EC01, 0x0002FC01, 0x00035C01, + 0x0003DC01, 0x000B0804, 0x000B480E, 0x000B9407, 0x000BB401, + 0x000BBC81, 0x000DD401, 0x000DF801, 0x000E1002, 0x000E1C01, + 0x000FD801, 0x00120808, 0x00156806, 0x00162402, 0x00163C01, + 0x00164437, 0x0017CC02, 0x00180005, 0x00181816, 0x00187802, + 0x00192C15, 0x0019A804, 0x0019C001, 0x001B5001, 0x001B580F, + 0x001B9C07, 0x001BF402, 0x001C000E, 0x001C3C01, 0x001C4401, + 0x001CC01B, 0x001E980B, 0x001FAC09, 0x001FD804, 0x00205804, + 0x00206C09, 0x00209403, 0x0020A405, 0x0020C00F, 0x00216403, + 0x00217801, 0x0023901B, 0x00240004, 0x0024E803, 0x0024F812, + 0x00254407, 0x00258804, 0x0025C001, 0x00260403, 0x0026F001, + 0x0026F807, 0x00271C02, 0x00272C03, 0x00275C01, 0x00278802, + 0x0027C802, 0x0027E802, 0x00280403, 0x0028F001, 0x0028F805, + 0x00291C02, 0x00292C03, 0x00294401, 0x0029C002, 0x0029D401, + 0x002A0403, 0x002AF001, 0x002AF808, 0x002B1C03, 0x002B2C03, + 0x002B8802, 0x002BC002, 0x002C0403, 0x002CF001, 0x002CF807, + 0x002D1C02, 0x002D2C03, 0x002D5802, 0x002D8802, 0x002DC001, + 0x002E0801, 0x002EF805, 0x002F1803, 0x002F2804, 0x002F5C01, + 0x002FCC08, 0x00300403, 0x0030F807, 0x00311803, 0x00312804, + 0x00315402, 0x00318802, 0x0031FC01, 0x00320802, 0x0032F001, + 0x0032F807, 0x00331803, 0x00332804, 0x00335402, 0x00338802, + 0x00340802, 0x0034F807, 0x00351803, 0x00352804, 0x00355C01, + 0x00358802, 0x0035E401, 0x00360802, 0x00372801, 0x00373C06, + 0x00375801, 0x00376008, 0x0037C803, 0x0038C401, 0x0038D007, + 0x0038FC01, 0x00391C09, 0x00396802, 0x003AC401, 0x003AD006, + 0x003AEC02, 0x003B2006, 0x003C041F, 0x003CD00C, 0x003DC417, + 0x003E340B, 0x003E6424, 0x003EF80F, 0x003F380D, 0x0040AC14, + 0x00412806, 0x00415804, 0x00417803, 0x00418803, 0x00419C07, + 0x0041C404, 0x0042080C, 0x00423C01, 0x00426806, 0x0043EC01, + 0x004D740C, 0x004E400A, 0x00500001, 0x0059B402, 0x005A0001, + 0x005A6C02, 0x005BAC03, 0x005C4803, 0x005CC805, 0x005D4802, + 0x005DC802, 0x005ED023, 0x005F6004, 0x005F7401, 0x0060000F, + 0x0062A401, 0x0064800C, 0x0064C00C, 0x00650001, 0x00651002, + 0x0066C011, 0x00672002, 0x00677822, 0x00685C05, 0x00687802, + 0x0069540A, 0x0069801D, 0x0069FC01, 0x006A8007, 0x006AA006, + 0x006C0005, 0x006CD011, 0x006D6823, 0x006E0003, 0x006E840D, + 0x006F980E, 0x006FF004, 0x00709014, 0x0070EC05, 0x0071F802, + 0x00730008, 0x00734019, 0x0073B401, 0x0073C803, 0x00770027, + 0x0077F004, 0x007EF401, 0x007EFC03, 0x007F3403, 0x007F7403, + 0x007FB403, 0x007FF402, 0x00800065, 0x0081A806, 0x0081E805, + 0x00822805, 0x0082801A, 0x00834021, 0x00840002, 0x00840C04, + 0x00842002, 0x00845001, 0x00845803, 0x00847806, 0x00849401, + 0x00849C01, 0x0084A401, 0x0084B801, 0x0084E802, 0x00850005, + 0x00852804, 0x00853C01, 0x00864264, 0x00900027, 0x0091000B, + 0x0092704E, 0x00940200, 0x009C0475, 0x009E53B9, 0x00AD400A, + 0x00B39406, 0x00B3BC03, 0x00B3E404, 0x00B3F802, 0x00B5C001, + 0x00B5FC01, 0x00B7804F, 0x00B8C00C, 0x00BA001A, 0x00BA6C59, + 0x00BC00D6, 0x00BFC00C, 0x00C00005, 0x00C02019, 0x00C0A807, + 0x00C0D802, 0x00C0F403, 0x00C26404, 0x00C28001, 0x00C3EC01, + 0x00C64002, 0x00C6580A, 0x00C70024, 0x00C8001F, 0x00C8A81E, + 0x00C94001, 0x00C98020, 0x00CA2827, 0x00CB003F, 0x00CC0100, + 0x01370040, 0x02924037, 0x0293F802, 0x02983403, 0x0299BC10, + 0x029A7C01, 0x029BC008, 0x029C0017, 0x029C8002, 0x029E2402, + 0x02A00801, 0x02A01801, 0x02A02C01, 0x02A08C09, 0x02A0D804, + 0x02A1D004, 0x02A20002, 0x02A2D011, 0x02A33802, 0x02A38012, + 0x02A3E003, 0x02A4980A, 
0x02A51C0D, 0x02A57C01, 0x02A60004, + 0x02A6CC1B, 0x02A77802, 0x02A8A40E, 0x02A90C01, 0x02A93002, + 0x02A97004, 0x02A9DC03, 0x02A9EC01, 0x02AAC001, 0x02AAC803, + 0x02AADC02, 0x02AAF802, 0x02AB0401, 0x02AB7802, 0x02ABAC07, + 0x02ABD402, 0x02AF8C0B, 0x03600001, 0x036DFC02, 0x036FFC02, + 0x037FFC01, 0x03EC7801, 0x03ECA401, 0x03EEC810, 0x03F4F802, + 0x03F7F002, 0x03F8001A, 0x03F88007, 0x03F8C023, 0x03F95013, + 0x03F9A004, 0x03FBFC01, 0x03FC040F, 0x03FC6807, 0x03FCEC06, + 0x03FD6C0B, 0x03FF8007, 0x03FFA007, 0x03FFE405, 0x04040003, + 0x0404DC09, 0x0405E411, 0x0406400C, 0x0407402E, 0x040E7C01, + 0x040F4001, 0x04215C01, 0x04247C01, 0x0424FC01, 0x04280403, + 0x04281402, 0x04283004, 0x0428E003, 0x0428FC01, 0x04294009, + 0x0429FC01, 0x042CE407, 0x04400003, 0x0440E016, 0x04420003, + 0x0442C012, 0x04440003, 0x04449C0E, 0x04450004, 0x04460003, + 0x0446CC0E, 0x04471404, 0x045AAC0D, 0x0491C004, 0x05BD442E, + 0x05BE3C04, 0x074000F6, 0x07440027, 0x0744A4B5, 0x07480046, + 0x074C0057, 0x075B0401, 0x075B6C01, 0x075BEC01, 0x075C5401, + 0x075CD401, 0x075D3C01, 0x075DBC01, 0x075E2401, 0x075EA401, + 0x075F0C01, 0x07BBC002, 0x07C0002C, 0x07C0C064, 0x07C2800F, + 0x07C2C40E, 0x07C3040F, 0x07C3440F, 0x07C4401F, 0x07C4C03C, + 0x07C5C02B, 0x07C7981D, 0x07C8402B, 0x07C90009, 0x07C94002, + 0x07CC0021, 0x07CCC006, 0x07CCDC46, 0x07CE0014, 0x07CE8025, + 0x07CF1805, 0x07CF8011, 0x07D0003F, 0x07D10001, 0x07D108B6, + 0x07D3E404, 0x07D4003E, 0x07D50004, 0x07D54018, 0x07D7EC46, + 0x07D9140B, 0x07DA0046, 0x07DC0074, 0x38000401, 0x38008060, + 0x380400F0, + }; + static const unsigned int aAscii[4] = { + 0xFFFFFFFF, 0xFC00FFFF, 0xF8000001, 0xF8000001, + }; + + if( c<128 ){ + return ( (aAscii[c >> 5] & (1 << (c & 0x001F)))==0 ); + }else if( c<(1<<22) ){ + unsigned int key = (((unsigned int)c)<<10) | 0x000003FF; + int iRes = 0; + int iHi = sizeof(aEntry)/sizeof(aEntry[0]) - 1; + int iLo = 0; + while( iHi>=iLo ){ + int iTest = (iHi + iLo) / 2; + if( key >= aEntry[iTest] ){ + iRes = iTest; + iLo = iTest+1; + }else{ + iHi = iTest-1; + } + } + assert( aEntry[0]<key ); + assert( key>=aEntry[iRes] ); + return (((unsigned int)c) >= ((aEntry[iRes]>>10) + (aEntry[iRes]&0x3FF))); + } + return 1; +} + + +/* +** If the argument is a codepoint corresponding to a lowercase letter +** in the ASCII range with a diacritic added, return the codepoint +** of the ASCII letter only. For example, if passed 235 - "LATIN +** SMALL LETTER E WITH DIAERESIS" - return 65 ("LATIN SMALL LETTER +** E"). The resuls of passing a codepoint that corresponds to an +** uppercase letter are undefined. 
+*/ +static int remove_diacritic(int c){ + unsigned short aDia[] = { + 0, 1797, 1848, 1859, 1891, 1928, 1940, 1995, + 2024, 2040, 2060, 2110, 2168, 2206, 2264, 2286, + 2344, 2383, 2472, 2488, 2516, 2596, 2668, 2732, + 2782, 2842, 2894, 2954, 2984, 3000, 3028, 3336, + 3456, 3696, 3712, 3728, 3744, 3896, 3912, 3928, + 3968, 4008, 4040, 4106, 4138, 4170, 4202, 4234, + 4266, 4296, 4312, 4344, 4408, 4424, 4472, 4504, + 6148, 6198, 6264, 6280, 6360, 6429, 6505, 6529, + 61448, 61468, 61534, 61592, 61642, 61688, 61704, 61726, + 61784, 61800, 61836, 61880, 61914, 61948, 61998, 62122, + 62154, 62200, 62218, 62302, 62364, 62442, 62478, 62536, + 62554, 62584, 62604, 62640, 62648, 62656, 62664, 62730, + 62924, 63050, 63082, 63274, 63390, + }; + char aChar[] = { + '\0', 'a', 'c', 'e', 'i', 'n', 'o', 'u', 'y', 'y', 'a', 'c', + 'd', 'e', 'e', 'g', 'h', 'i', 'j', 'k', 'l', 'n', 'o', 'r', + 's', 't', 'u', 'u', 'w', 'y', 'z', 'o', 'u', 'a', 'i', 'o', + 'u', 'g', 'k', 'o', 'j', 'g', 'n', 'a', 'e', 'i', 'o', 'r', + 'u', 's', 't', 'h', 'a', 'e', 'o', 'y', '\0', '\0', '\0', '\0', + '\0', '\0', '\0', '\0', 'a', 'b', 'd', 'd', 'e', 'f', 'g', 'h', + 'h', 'i', 'k', 'l', 'l', 'm', 'n', 'p', 'r', 'r', 's', 't', + 'u', 'v', 'w', 'w', 'x', 'y', 'z', 'h', 't', 'w', 'y', 'a', + 'e', 'i', 'o', 'u', 'y', + }; + + unsigned int key = (((unsigned int)c)<<3) | 0x00000007; + int iRes = 0; + int iHi = sizeof(aDia)/sizeof(aDia[0]) - 1; + int iLo = 0; + while( iHi>=iLo ){ + int iTest = (iHi + iLo) / 2; + if( key >= aDia[iTest] ){ + iRes = iTest; + iLo = iTest+1; + }else{ + iHi = iTest-1; + } + } + assert( key>=aDia[iRes] ); + return ((c > (aDia[iRes]>>3) + (aDia[iRes]&0x07)) ? c : (int)aChar[iRes]); +} + + +/* +** Return true if the argument interpreted as a unicode codepoint +** is a diacritical modifier character. +*/ +SQLITE_PRIVATE int sqlite3FtsUnicodeIsdiacritic(int c){ + unsigned int mask0 = 0x08029FDF; + unsigned int mask1 = 0x000361F8; + if( c<768 || c>817 ) return 0; + return (c < 768+32) ? + (mask0 & (1 << (c-768))) : + (mask1 & (1 << (c-768-32))); +} + + +/* +** Interpret the argument as a unicode codepoint. If the codepoint +** is an upper case character that has a lower case equivalent, +** return the codepoint corresponding to the lower case version. +** Otherwise, return a copy of the argument. +** +** The results are undefined if the value passed to this function +** is less than zero. +*/ +SQLITE_PRIVATE int sqlite3FtsUnicodeFold(int c, int bRemoveDiacritic){ + /* Each entry in the following array defines a rule for folding a range + ** of codepoints to lower case. The rule applies to a range of nRange + ** codepoints starting at codepoint iCode. + ** + ** If the least significant bit in flags is clear, then the rule applies + ** to all nRange codepoints (i.e. all nRange codepoints are upper case and + ** need to be folded). Or, if it is set, then the rule only applies to + ** every second codepoint in the range, starting with codepoint C. + ** + ** The 7 most significant bits in flags are an index into the aiOff[] + ** array. If a specific codepoint C does require folding, then its lower + ** case equivalent is ((C + aiOff[flags>>1]) & 0xFFFF). + ** + ** The contents of this array are generated by parsing the CaseFolding.txt + ** file distributed as part of the "Unicode Character Database". See + ** http://www.unicode.org for details. 
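+  ** As a worked example using the tables below: the entry {65, 14, 26}
+  ** covers the 26 codepoints 65..90 ('A'..'Z').  The least significant
+  ** bit of flags==14 is clear, so every codepoint in the range is folded,
+  ** and the offset is aiOff[14>>1] = aiOff[7] = 32, which maps 'A' (65)
+  ** to 'a' (97).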
+ */ + static const struct TableEntry { + unsigned short iCode; + unsigned char flags; + unsigned char nRange; + } aEntry[] = { + {65, 14, 26}, {181, 64, 1}, {192, 14, 23}, + {216, 14, 7}, {256, 1, 48}, {306, 1, 6}, + {313, 1, 16}, {330, 1, 46}, {376, 116, 1}, + {377, 1, 6}, {383, 104, 1}, {385, 50, 1}, + {386, 1, 4}, {390, 44, 1}, {391, 0, 1}, + {393, 42, 2}, {395, 0, 1}, {398, 32, 1}, + {399, 38, 1}, {400, 40, 1}, {401, 0, 1}, + {403, 42, 1}, {404, 46, 1}, {406, 52, 1}, + {407, 48, 1}, {408, 0, 1}, {412, 52, 1}, + {413, 54, 1}, {415, 56, 1}, {416, 1, 6}, + {422, 60, 1}, {423, 0, 1}, {425, 60, 1}, + {428, 0, 1}, {430, 60, 1}, {431, 0, 1}, + {433, 58, 2}, {435, 1, 4}, {439, 62, 1}, + {440, 0, 1}, {444, 0, 1}, {452, 2, 1}, + {453, 0, 1}, {455, 2, 1}, {456, 0, 1}, + {458, 2, 1}, {459, 1, 18}, {478, 1, 18}, + {497, 2, 1}, {498, 1, 4}, {502, 122, 1}, + {503, 134, 1}, {504, 1, 40}, {544, 110, 1}, + {546, 1, 18}, {570, 70, 1}, {571, 0, 1}, + {573, 108, 1}, {574, 68, 1}, {577, 0, 1}, + {579, 106, 1}, {580, 28, 1}, {581, 30, 1}, + {582, 1, 10}, {837, 36, 1}, {880, 1, 4}, + {886, 0, 1}, {902, 18, 1}, {904, 16, 3}, + {908, 26, 1}, {910, 24, 2}, {913, 14, 17}, + {931, 14, 9}, {962, 0, 1}, {975, 4, 1}, + {976, 140, 1}, {977, 142, 1}, {981, 146, 1}, + {982, 144, 1}, {984, 1, 24}, {1008, 136, 1}, + {1009, 138, 1}, {1012, 130, 1}, {1013, 128, 1}, + {1015, 0, 1}, {1017, 152, 1}, {1018, 0, 1}, + {1021, 110, 3}, {1024, 34, 16}, {1040, 14, 32}, + {1120, 1, 34}, {1162, 1, 54}, {1216, 6, 1}, + {1217, 1, 14}, {1232, 1, 88}, {1329, 22, 38}, + {4256, 66, 38}, {4295, 66, 1}, {4301, 66, 1}, + {7680, 1, 150}, {7835, 132, 1}, {7838, 96, 1}, + {7840, 1, 96}, {7944, 150, 8}, {7960, 150, 6}, + {7976, 150, 8}, {7992, 150, 8}, {8008, 150, 6}, + {8025, 151, 8}, {8040, 150, 8}, {8072, 150, 8}, + {8088, 150, 8}, {8104, 150, 8}, {8120, 150, 2}, + {8122, 126, 2}, {8124, 148, 1}, {8126, 100, 1}, + {8136, 124, 4}, {8140, 148, 1}, {8152, 150, 2}, + {8154, 120, 2}, {8168, 150, 2}, {8170, 118, 2}, + {8172, 152, 1}, {8184, 112, 2}, {8186, 114, 2}, + {8188, 148, 1}, {8486, 98, 1}, {8490, 92, 1}, + {8491, 94, 1}, {8498, 12, 1}, {8544, 8, 16}, + {8579, 0, 1}, {9398, 10, 26}, {11264, 22, 47}, + {11360, 0, 1}, {11362, 88, 1}, {11363, 102, 1}, + {11364, 90, 1}, {11367, 1, 6}, {11373, 84, 1}, + {11374, 86, 1}, {11375, 80, 1}, {11376, 82, 1}, + {11378, 0, 1}, {11381, 0, 1}, {11390, 78, 2}, + {11392, 1, 100}, {11499, 1, 4}, {11506, 0, 1}, + {42560, 1, 46}, {42624, 1, 24}, {42786, 1, 14}, + {42802, 1, 62}, {42873, 1, 4}, {42877, 76, 1}, + {42878, 1, 10}, {42891, 0, 1}, {42893, 74, 1}, + {42896, 1, 4}, {42912, 1, 10}, {42922, 72, 1}, + {65313, 14, 26}, + }; + static const unsigned short aiOff[] = { + 1, 2, 8, 15, 16, 26, 28, 32, + 37, 38, 40, 48, 63, 64, 69, 71, + 79, 80, 116, 202, 203, 205, 206, 207, + 209, 210, 211, 213, 214, 217, 218, 219, + 775, 7264, 10792, 10795, 23228, 23256, 30204, 54721, + 54753, 54754, 54756, 54787, 54793, 54809, 57153, 57274, + 57921, 58019, 58363, 61722, 65268, 65341, 65373, 65406, + 65408, 65410, 65415, 65424, 65436, 65439, 65450, 65462, + 65472, 65476, 65478, 65480, 65482, 65488, 65506, 65511, + 65514, 65521, 65527, 65528, 65529, + }; + + int ret = c; + + assert( c>=0 ); + assert( sizeof(unsigned short)==2 && sizeof(unsigned char)==1 ); + + if( c<128 ){ + if( c>='A' && c<='Z' ) ret = c + ('a' - 'A'); + }else if( c<65536 ){ + int iHi = sizeof(aEntry)/sizeof(aEntry[0]) - 1; + int iLo = 0; + int iRes = -1; + + while( iHi>=iLo ){ + int iTest = (iHi + iLo) / 2; + int cmp = (c - aEntry[iTest].iCode); + if( cmp>=0 ){ + 
iRes = iTest; + iLo = iTest+1; + }else{ + iHi = iTest-1; + } + } + assert( iRes<0 || c>=aEntry[iRes].iCode ); + + if( iRes>=0 ){ + const struct TableEntry *p = &aEntry[iRes]; + if( c<(p->iCode + p->nRange) && 0==(0x01 & p->flags & (p->iCode ^ c)) ){ + ret = (c + (aiOff[p->flags>>1])) & 0x0000FFFF; + assert( ret>0 ); + } + } + + if( bRemoveDiacritic ) ret = remove_diacritic(ret); + } + + else if( c>=66560 && c<66600 ){ + ret = c + 40; + } + + return ret; +} +#endif /* defined(SQLITE_ENABLE_FTS3) || defined(SQLITE_ENABLE_FTS4) */ +#endif /* !defined(SQLITE_DISABLE_FTS3_UNICODE) */ + +/************** End of fts3_unicode2.c ***************************************/ /************** Begin file rtree.c *******************************************/ /* ** 2001 September 15 ** ** The author disclaims copyright to this source code. In place of @@ -109591,93 +156194,120 @@ ************************************************************************* ** This file contains code for implementations of the r-tree and r*-tree ** algorithms packaged as an SQLite virtual table module. */ -#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_RTREE) - -/* -** This file contains an implementation of a couple of different variants -** of the r-tree algorithm. See the README file for further details. The -** same data-structure is used for all, but the algorithms for insert and -** delete operations vary. The variants used are selected at compile time -** by defining the following symbols: -*/ - -/* Either, both or none of the following may be set to activate -** r*tree variant algorithms. -*/ -#define VARIANT_RSTARTREE_CHOOSESUBTREE 0 -#define VARIANT_RSTARTREE_REINSERT 1 - -/* -** Exactly one of the following must be set to 1. -*/ -#define VARIANT_GUTTMAN_QUADRATIC_SPLIT 0 -#define VARIANT_GUTTMAN_LINEAR_SPLIT 0 -#define VARIANT_RSTARTREE_SPLIT 1 - -#define VARIANT_GUTTMAN_SPLIT \ - (VARIANT_GUTTMAN_LINEAR_SPLIT||VARIANT_GUTTMAN_QUADRATIC_SPLIT) - -#if VARIANT_GUTTMAN_QUADRATIC_SPLIT - #define PickNext QuadraticPickNext - #define PickSeeds QuadraticPickSeeds - #define AssignCells splitNodeGuttman -#endif -#if VARIANT_GUTTMAN_LINEAR_SPLIT - #define PickNext LinearPickNext - #define PickSeeds LinearPickSeeds - #define AssignCells splitNodeGuttman -#endif -#if VARIANT_RSTARTREE_SPLIT - #define AssignCells splitNodeStartree -#endif - +/* +** Database Format of R-Tree Tables +** -------------------------------- +** +** The data structure for a single virtual r-tree table is stored in three +** native SQLite tables declared as follows. In each case, the '%' character +** in the table name is replaced with the user-supplied name of the r-tree +** table. +** +** CREATE TABLE %_node(nodeno INTEGER PRIMARY KEY, data BLOB) +** CREATE TABLE %_parent(nodeno INTEGER PRIMARY KEY, parentnode INTEGER) +** CREATE TABLE %_rowid(rowid INTEGER PRIMARY KEY, nodeno INTEGER) +** +** The data for each node of the r-tree structure is stored in the %_node +** table. For each node that is not the root node of the r-tree, there is +** an entry in the %_parent table associating the node with its parent. +** And for each row of data in the table, there is an entry in the %_rowid +** table that maps from the entries rowid to the id of the node that it +** is stored on. +** +** The root node of an r-tree always exists, even if the r-tree table is +** empty. The nodeno of the root node is always 1. All other nodes in the +** table must be the same size as the root node. The content of each node +** is formatted as follows: +** +** 1. 
If the node is the root node (node 1), then the first 2 bytes +** of the node contain the tree depth as a big-endian integer. +** For non-root nodes, the first 2 bytes are left unused. +** +** 2. The next 2 bytes contain the number of entries currently +** stored in the node. +** +** 3. The remainder of the node contains the node entries. Each entry +** consists of a single 8-byte integer followed by an even number +** of 4-byte coordinates. For leaf nodes the integer is the rowid +** of a record. For internal nodes it is the node number of a +** child page. +*/ + +#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_RTREE) #ifndef SQLITE_CORE +/* #include "sqlite3ext.h" */ SQLITE_EXTENSION_INIT1 #else +/* #include "sqlite3.h" */ #endif +/* #include <string.h> */ +/* #include <assert.h> */ +/* #include <stdio.h> */ #ifndef SQLITE_AMALGAMATION +#include "sqlite3rtree.h" typedef sqlite3_int64 i64; typedef unsigned char u8; +typedef unsigned short u16; typedef unsigned int u32; #endif + +/* The following macro is used to suppress compiler warnings. +*/ +#ifndef UNUSED_PARAMETER +# define UNUSED_PARAMETER(x) (void)(x) +#endif typedef struct Rtree Rtree; typedef struct RtreeCursor RtreeCursor; typedef struct RtreeNode RtreeNode; typedef struct RtreeCell RtreeCell; typedef struct RtreeConstraint RtreeConstraint; +typedef struct RtreeMatchArg RtreeMatchArg; +typedef struct RtreeGeomCallback RtreeGeomCallback; typedef union RtreeCoord RtreeCoord; +typedef struct RtreeSearchPoint RtreeSearchPoint; /* The rtree may have between 1 and RTREE_MAX_DIMENSIONS dimensions. */ #define RTREE_MAX_DIMENSIONS 5 /* Size of hash table Rtree.aHash. This hash table is not expected to ** ever contain very many entries, so a fixed number of buckets is ** used. */ -#define HASHSIZE 128 +#define HASHSIZE 97 + +/* The xBestIndex method of this virtual table requires an estimate of +** the number of rows in the virtual table to calculate the costs of +** various strategies. If possible, this estimate is loaded from the +** sqlite_stat1 table (with RTREE_MIN_ROWEST as a hard-coded minimum). +** Otherwise, if no sqlite_stat1 entry is available, use +** RTREE_DEFAULT_ROWEST. +*/ +#define RTREE_DEFAULT_ROWEST 1048576 +#define RTREE_MIN_ROWEST 100 /* ** An rtree virtual-table object. */ struct Rtree { - sqlite3_vtab base; + sqlite3_vtab base; /* Base class. Must be first */ sqlite3 *db; /* Host database connection */ int iNodeSize; /* Size in bytes of each node in the node table */ - int nDim; /* Number of dimensions */ - int nBytesPerCell; /* Bytes consumed per cell */ + u8 nDim; /* Number of dimensions */ + u8 eCoordType; /* RTREE_COORD_REAL32 or RTREE_COORD_INT32 */ + u8 nBytesPerCell; /* Bytes consumed per cell */ int iDepth; /* Current depth of the r-tree structure */ char *zDb; /* Name of database containing r-tree table */ char *zName; /* Name of r-tree table */ - RtreeNode *aHash[HASHSIZE]; /* Hash table of in-memory nodes. */ int nBusy; /* Current number of users of this structure */ + i64 nRowEst; /* Estimated number of rows in this table */ /* List of nodes removed during a CondenseTree operation. List is ** linked together via the pointer normally used for hash chains - ** RtreeNode.pNext. RtreeNode.iNode stores the depth of the sub-tree ** headed by the node (leaf nodes have RtreeNode.iNode==0). 
@@ -109698,16 +156328,48 @@ /* Statements to read/write/delete a record from xxx_parent */ sqlite3_stmt *pReadParent; sqlite3_stmt *pWriteParent; sqlite3_stmt *pDeleteParent; - int eCoordType; + RtreeNode *aHash[HASHSIZE]; /* Hash table of in-memory nodes. */ }; -/* Possible values for eCoordType: */ +/* Possible values for Rtree.eCoordType: */ #define RTREE_COORD_REAL32 0 #define RTREE_COORD_INT32 1 + +/* +** If SQLITE_RTREE_INT_ONLY is defined, then this virtual table will +** only deal with integer coordinates. No floating point operations +** will be done. +*/ +#ifdef SQLITE_RTREE_INT_ONLY + typedef sqlite3_int64 RtreeDValue; /* High accuracy coordinate */ + typedef int RtreeValue; /* Low accuracy coordinate */ +# define RTREE_ZERO 0 +#else + typedef double RtreeDValue; /* High accuracy coordinate */ + typedef float RtreeValue; /* Low accuracy coordinate */ +# define RTREE_ZERO 0.0 +#endif + +/* +** When doing a search of an r-tree, instances of the following structure +** record intermediate results from the tree walk. +** +** The id is always a node-id. For iLevel>=1 the id is the node-id of +** the node that the RtreeSearchPoint represents. When iLevel==0, however, +** the id is of the parent node and the cell that RtreeSearchPoint +** represents is the iCell-th entry in the parent node. +*/ +struct RtreeSearchPoint { + RtreeDValue rScore; /* The score for this node. Smallest goes first. */ + sqlite3_int64 id; /* Node ID */ + u8 iLevel; /* 0=entries. 1=leaf node. 2+ for higher */ + u8 eWithin; /* PARTLY_WITHIN or FULLY_WITHIN */ + u8 iCell; /* Cell index within the node */ +}; /* ** The minimum number of cells allowed for a node is a third of the ** maximum. In Gutman's notation: ** @@ -109718,88 +156380,164 @@ */ #define RTREE_MINCELLS(p) ((((p)->iNodeSize-4)/(p)->nBytesPerCell)/3) #define RTREE_REINSERT(p) RTREE_MINCELLS(p) #define RTREE_MAXCELLS 51 +/* +** The smallest possible node-size is (512-64)==448 bytes. And the largest +** supported cell size is 48 bytes (8 byte rowid + ten 4 byte coordinates). +** Therefore all non-root nodes must contain at least 3 entries. Since +** 2^40 is greater than 2^64, an r-tree structure always has a depth of +** 40 or less. +*/ +#define RTREE_MAX_DEPTH 40 + + +/* +** Number of entries in the cursor RtreeNode cache. The first entry is +** used to cache the RtreeNode for RtreeCursor.sPoint. The remaining +** entries cache the RtreeNode for the first elements of the priority queue. +*/ +#define RTREE_CACHE_SZ 5 + /* ** An rtree cursor object. */ struct RtreeCursor { - sqlite3_vtab_cursor base; - RtreeNode *pNode; /* Node cursor is currently pointing at */ - int iCell; /* Index of current cell in pNode */ + sqlite3_vtab_cursor base; /* Base class. Must be first */ + u8 atEOF; /* True if at end of search */ + u8 bPoint; /* True if sPoint is valid */ int iStrategy; /* Copy of idxNum search parameter */ int nConstraint; /* Number of entries in aConstraint */ RtreeConstraint *aConstraint; /* Search constraints. 
*/ + int nPointAlloc; /* Number of slots allocated for aPoint[] */ + int nPoint; /* Number of slots used in aPoint[] */ + int mxLevel; /* iLevel value for root of the tree */ + RtreeSearchPoint *aPoint; /* Priority queue for search points */ + RtreeSearchPoint sPoint; /* Cached next search point */ + RtreeNode *aNode[RTREE_CACHE_SZ]; /* Rtree node cache */ + u32 anQueue[RTREE_MAX_DEPTH+1]; /* Number of queued entries by iLevel */ }; +/* Return the Rtree of a RtreeCursor */ +#define RTREE_OF_CURSOR(X) ((Rtree*)((X)->base.pVtab)) + +/* +** A coordinate can be either a floating point number or a integer. All +** coordinates within a single R-Tree are always of the same time. +*/ union RtreeCoord { - float f; - int i; + RtreeValue f; /* Floating point value */ + int i; /* Integer value */ + u32 u; /* Unsigned for byte-order conversions */ }; /* ** The argument is an RtreeCoord. Return the value stored within the RtreeCoord -** formatted as a double. This macro assumes that local variable pRtree points -** to the Rtree structure associated with the RtreeCoord. +** formatted as a RtreeDValue (double or int64). This macro assumes that local +** variable pRtree points to the Rtree structure associated with the +** RtreeCoord. */ -#define DCOORD(coord) ( \ - (pRtree->eCoordType==RTREE_COORD_REAL32) ? \ - ((double)coord.f) : \ - ((double)coord.i) \ -) +#ifdef SQLITE_RTREE_INT_ONLY +# define DCOORD(coord) ((RtreeDValue)coord.i) +#else +# define DCOORD(coord) ( \ + (pRtree->eCoordType==RTREE_COORD_REAL32) ? \ + ((double)coord.f) : \ + ((double)coord.i) \ + ) +#endif /* ** A search constraint. */ struct RtreeConstraint { - int iCoord; /* Index of constrained coordinate */ - int op; /* Constraining operation */ - double rValue; /* Constraint value. */ + int iCoord; /* Index of constrained coordinate */ + int op; /* Constraining operation */ + union { + RtreeDValue rValue; /* Constraint value. */ + int (*xGeom)(sqlite3_rtree_geometry*,int,RtreeDValue*,int*); + int (*xQueryFunc)(sqlite3_rtree_query_info*); + } u; + sqlite3_rtree_query_info *pInfo; /* xGeom and xQueryFunc argument */ }; /* Possible values for RtreeConstraint.op */ -#define RTREE_EQ 0x41 -#define RTREE_LE 0x42 -#define RTREE_LT 0x43 -#define RTREE_GE 0x44 -#define RTREE_GT 0x45 +#define RTREE_EQ 0x41 /* A */ +#define RTREE_LE 0x42 /* B */ +#define RTREE_LT 0x43 /* C */ +#define RTREE_GE 0x44 /* D */ +#define RTREE_GT 0x45 /* E */ +#define RTREE_MATCH 0x46 /* F: Old-style sqlite3_rtree_geometry_callback() */ +#define RTREE_QUERY 0x47 /* G: New-style sqlite3_rtree_query_callback() */ + /* ** An rtree structure node. -** -** Data format (RtreeNode.zData): -** -** 1. If the node is the root node (node 1), then the first 2 bytes -** of the node contain the tree depth as a big-endian integer. -** For non-root nodes, the first 2 bytes are left unused. -** -** 2. The next 2 bytes contain the number of entries currently -** stored in the node. -** -** 3. The remainder of the node contains the node entries. Each entry -** consists of a single 8-byte integer followed by an even number -** of 4-byte coordinates. For leaf nodes the integer is the rowid -** of a record. For internal nodes it is the node number of a -** child page. 
*/ struct RtreeNode { - RtreeNode *pParent; /* Parent node */ - i64 iNode; - int nRef; - int isDirty; - u8 *zData; - RtreeNode *pNext; /* Next node in this hash chain */ + RtreeNode *pParent; /* Parent node */ + i64 iNode; /* The node number */ + int nRef; /* Number of references to this node */ + int isDirty; /* True if the node needs to be written to disk */ + u8 *zData; /* Content of the node, as should be on disk */ + RtreeNode *pNext; /* Next node in this hash collision chain */ }; + +/* Return the number of cells in a node */ #define NCELL(pNode) readInt16(&(pNode)->zData[2]) /* -** Structure to store a deserialized rtree record. +** A single cell from a node, deserialized */ struct RtreeCell { - i64 iRowid; - RtreeCoord aCoord[RTREE_MAX_DIMENSIONS*2]; + i64 iRowid; /* Node or entry ID */ + RtreeCoord aCoord[RTREE_MAX_DIMENSIONS*2]; /* Bounding box coordinates */ +}; + + +/* +** This object becomes the sqlite3_user_data() for the SQL functions +** that are created by sqlite3_rtree_geometry_callback() and +** sqlite3_rtree_query_callback() and which appear on the right of MATCH +** operators in order to constrain a search. +** +** xGeom and xQueryFunc are the callback functions. Exactly one of +** xGeom and xQueryFunc fields is non-NULL, depending on whether the +** SQL function was created using sqlite3_rtree_geometry_callback() or +** sqlite3_rtree_query_callback(). +** +** This object is deleted automatically by the destructor mechanism in +** sqlite3_create_function_v2(). +*/ +struct RtreeGeomCallback { + int (*xGeom)(sqlite3_rtree_geometry*, int, RtreeDValue*, int*); + int (*xQueryFunc)(sqlite3_rtree_query_info*); + void (*xDestructor)(void*); + void *pContext; +}; + + +/* +** Value for the first field of every RtreeMatchArg object. The MATCH +** operator tests that the first field of a blob operand matches this +** value to avoid operating on invalid blobs (which could cause a segfault). +*/ +#define RTREE_GEOMETRY_MAGIC 0x891245AB + +/* +** An instance of this structure (in the form of a BLOB) is returned by +** the SQL functions that sqlite3_rtree_geometry_callback() and +** sqlite3_rtree_query_callback() create, and is read as the right-hand +** operand to the MATCH operator of an R-Tree. +*/ +struct RtreeMatchArg { + u32 magic; /* Always RTREE_GEOMETRY_MAGIC */ + RtreeGeomCallback cb; /* Info about the callback functions */ + int nParam; /* Number of parameters to the SQL function */ + sqlite3_value **apSqlParam; /* Original SQL parameter values */ + RtreeDValue aParam[1]; /* Values for parameters to the SQL function */ }; #ifndef MAX # define MAX(x,y) ((x) < (y) ? (y) : (x)) #endif @@ -109813,17 +156551,16 @@ */ static int readInt16(u8 *p){ return (p[0]<<8) + p[1]; } static void readCoord(u8 *p, RtreeCoord *pCoord){ - u32 i = ( + pCoord->u = ( (((u32)p[0]) << 24) + (((u32)p[1]) << 16) + (((u32)p[2]) << 8) + (((u32)p[3]) << 0) ); - *(u32 *)pCoord = i; } static i64 readInt64(u8 *p){ return ( (((i64)p[0]) << 56) + (((i64)p[1]) << 48) + @@ -109848,11 +156585,11 @@ } static int writeCoord(u8 *p, RtreeCoord *pCoord){ u32 i; assert( sizeof(RtreeCoord)==4 ); assert( sizeof(u32)==4 ); - i = *(u32 *)pCoord; + i = pCoord->u; p[0] = (i>>24)&0xFF; p[1] = (i>>16)&0xFF; p[2] = (i>> 8)&0xFF; p[3] = (i>> 0)&0xFF; return 4; @@ -109880,49 +156617,41 @@ /* ** Clear the content of node p (set all bytes to 0x00). 
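** (Only bytes 2 and later are actually cleared; the first two bytes,
** which hold the tree depth when the node is the root, are preserved.)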
*/ static void nodeZero(Rtree *pRtree, RtreeNode *p){ - if( p ){ - memset(&p->zData[2], 0, pRtree->iNodeSize-2); - p->isDirty = 1; - } + memset(&p->zData[2], 0, pRtree->iNodeSize-2); + p->isDirty = 1; } /* ** Given a node number iNode, return the corresponding key to use ** in the Rtree.aHash table. */ static int nodeHash(i64 iNode){ - return ( - (iNode>>56) ^ (iNode>>48) ^ (iNode>>40) ^ (iNode>>32) ^ - (iNode>>24) ^ (iNode>>16) ^ (iNode>> 8) ^ (iNode>> 0) - ) % HASHSIZE; + return iNode % HASHSIZE; } /* ** Search the node hash table for node iNode. If found, return a pointer ** to it. Otherwise, return 0. */ static RtreeNode *nodeHashLookup(Rtree *pRtree, i64 iNode){ RtreeNode *p; - assert( iNode!=0 ); for(p=pRtree->aHash[nodeHash(iNode)]; p && p->iNode!=iNode; p=p->pNext); return p; } /* ** Add node pNode to the node hash table. */ static void nodeHashInsert(Rtree *pRtree, RtreeNode *pNode){ - if( pNode ){ - int iHash; - assert( pNode->pNext==0 ); - iHash = nodeHash(pNode->iNode); - pNode->pNext = pRtree->aHash[iHash]; - pRtree->aHash[iHash] = pNode; - } + int iHash; + assert( pNode->pNext==0 ); + iHash = nodeHash(pNode->iNode); + pNode->pNext = pRtree->aHash[iHash]; + pRtree->aHash[iHash] = pNode; } /* ** Remove node pNode from the node hash table. */ @@ -109940,15 +156669,15 @@ ** Allocate and return new r-tree node. Initially, (RtreeNode.iNode==0), ** indicating that node has not yet been assigned a node number. It is ** assigned a node number when nodeWrite() is called to write the ** node contents out to the database. */ -static RtreeNode *nodeNew(Rtree *pRtree, RtreeNode *pParent, int zero){ +static RtreeNode *nodeNew(Rtree *pRtree, RtreeNode *pParent){ RtreeNode *pNode; pNode = (RtreeNode *)sqlite3_malloc(sizeof(RtreeNode) + pRtree->iNodeSize); if( pNode ){ - memset(pNode, 0, sizeof(RtreeNode) + (zero?pRtree->iNodeSize:0)); + memset(pNode, 0, sizeof(RtreeNode) + pRtree->iNodeSize); pNode->zData = (u8 *)&pNode[1]; pNode->nRef = 1; pNode->pParent = pParent; pNode->isDirty = 1; nodeReference(pParent); @@ -109957,18 +156686,18 @@ } /* ** Obtain a reference to an r-tree node. */ -static int -nodeAcquire( +static int nodeAcquire( Rtree *pRtree, /* R-tree structure */ i64 iNode, /* Node number to load */ RtreeNode *pParent, /* Either the parent node or NULL */ RtreeNode **ppNode /* OUT: Acquired node */ ){ int rc; + int rc2 = SQLITE_OK; RtreeNode *pNode; /* Check if the requested node is already in the hash table. If so, ** increase its reference count and return it. 
*/ @@ -109981,55 +156710,79 @@ pNode->nRef++; *ppNode = pNode; return SQLITE_OK; } - pNode = (RtreeNode *)sqlite3_malloc(sizeof(RtreeNode) + pRtree->iNodeSize); - if( !pNode ){ - *ppNode = 0; - return SQLITE_NOMEM; - } - pNode->pParent = pParent; - pNode->zData = (u8 *)&pNode[1]; - pNode->nRef = 1; - pNode->iNode = iNode; - pNode->isDirty = 0; - pNode->pNext = 0; - sqlite3_bind_int64(pRtree->pReadNode, 1, iNode); rc = sqlite3_step(pRtree->pReadNode); if( rc==SQLITE_ROW ){ const u8 *zBlob = sqlite3_column_blob(pRtree->pReadNode, 0); - assert( sqlite3_column_bytes(pRtree->pReadNode, 0)==pRtree->iNodeSize ); - memcpy(pNode->zData, zBlob, pRtree->iNodeSize); - nodeReference(pParent); + if( pRtree->iNodeSize==sqlite3_column_bytes(pRtree->pReadNode, 0) ){ + pNode = (RtreeNode *)sqlite3_malloc(sizeof(RtreeNode)+pRtree->iNodeSize); + if( !pNode ){ + rc2 = SQLITE_NOMEM; + }else{ + pNode->pParent = pParent; + pNode->zData = (u8 *)&pNode[1]; + pNode->nRef = 1; + pNode->iNode = iNode; + pNode->isDirty = 0; + pNode->pNext = 0; + memcpy(pNode->zData, zBlob, pRtree->iNodeSize); + nodeReference(pParent); + } + } + } + rc = sqlite3_reset(pRtree->pReadNode); + if( rc==SQLITE_OK ) rc = rc2; + + /* If the root node was just loaded, set pRtree->iDepth to the height + ** of the r-tree structure. A height of zero means all data is stored on + ** the root node. A height of one means the children of the root node + ** are the leaves, and so on. If the depth as specified on the root node + ** is greater than RTREE_MAX_DEPTH, the r-tree structure must be corrupt. + */ + if( pNode && iNode==1 ){ + pRtree->iDepth = readInt16(pNode->zData); + if( pRtree->iDepth>RTREE_MAX_DEPTH ){ + rc = SQLITE_CORRUPT_VTAB; + } + } + + /* If no error has occurred so far, check if the "number of entries" + ** field on the node is too large. If so, set the return code to + ** SQLITE_CORRUPT_VTAB. + */ + if( pNode && rc==SQLITE_OK ){ + if( NCELL(pNode)>((pRtree->iNodeSize-4)/pRtree->nBytesPerCell) ){ + rc = SQLITE_CORRUPT_VTAB; + } + } + + if( rc==SQLITE_OK ){ + if( pNode!=0 ){ + nodeHashInsert(pRtree, pNode); + }else{ + rc = SQLITE_CORRUPT_VTAB; + } + *ppNode = pNode; }else{ sqlite3_free(pNode); - pNode = 0; - } - - *ppNode = pNode; - rc = sqlite3_reset(pRtree->pReadNode); - - if( rc==SQLITE_OK && iNode==1 ){ - pRtree->iDepth = readInt16(pNode->zData); - } - - assert( (rc==SQLITE_OK && pNode) || (pNode==0 && rc!=SQLITE_OK) ); - nodeHashInsert(pRtree, pNode); + *ppNode = 0; + } return rc; } /* ** Overwrite cell iCell of node pNode with the contents of pCell. */ static void nodeOverwriteCell( - Rtree *pRtree, - RtreeNode *pNode, - RtreeCell *pCell, - int iCell + Rtree *pRtree, /* The overall R-Tree */ + RtreeNode *pNode, /* The node into which the cell is to be written */ + RtreeCell *pCell, /* The cell to write */ + int iCell /* Index into pNode into which pCell is written */ ){ int ii; u8 *p = &pNode->zData[4 + pRtree->nBytesPerCell*iCell]; p += writeInt64(p, pCell->iRowid); for(ii=0; ii<(pRtree->nDim*2); ii++){ @@ -110037,11 +156790,11 @@ } pNode->isDirty = 1; } /* -** Remove cell the cell with index iCell from node pNode. +** Remove the cell with index iCell from node pNode. */ static void nodeDeleteCell(Rtree *pRtree, RtreeNode *pNode, int iCell){ u8 *pDst = &pNode->zData[4 + pRtree->nBytesPerCell*iCell]; u8 *pSrc = &pDst[pRtree->nBytesPerCell]; int nByte = (NCELL(pNode) - iCell - 1) * pRtree->nBytesPerCell; @@ -110054,24 +156807,22 @@ ** Insert the contents of cell pCell into node pNode. 
If the insert ** is successful, return SQLITE_OK. ** ** If there is not enough free space in pNode, return SQLITE_FULL. */ -static int -nodeInsertCell( - Rtree *pRtree, - RtreeNode *pNode, - RtreeCell *pCell +static int nodeInsertCell( + Rtree *pRtree, /* The overall R-Tree */ + RtreeNode *pNode, /* Write new cell into this node */ + RtreeCell *pCell /* The cell to be inserted */ ){ int nCell; /* Current number of cells in pNode */ int nMaxCell; /* Maximum number of cells for pNode */ nMaxCell = (pRtree->iNodeSize-4)/pRtree->nBytesPerCell; nCell = NCELL(pNode); - assert(nCell<=nMaxCell); - + assert( nCell<=nMaxCell ); if( nCell<nMaxCell ){ nodeOverwriteCell(pRtree, pNode, pCell, nCell); writeInt16(&pNode->zData[2], nCell+1); pNode->isDirty = 1; } @@ -110080,12 +156831,11 @@ } /* ** If the node is dirty, write it out to the database. */ -static int -nodeWrite(Rtree *pRtree, RtreeNode *pNode){ +static int nodeWrite(Rtree *pRtree, RtreeNode *pNode){ int rc = SQLITE_OK; if( pNode->isDirty ){ sqlite3_stmt *p = pRtree->pWriteNode; if( pNode->iNode ){ sqlite3_bind_int64(p, 1, pNode->iNode); @@ -110106,12 +156856,11 @@ /* ** Release a reference to a node. If the node is dirty and the reference ** count drops to zero, the node data is written to the database. */ -static int -nodeRelease(Rtree *pRtree, RtreeNode *pNode){ +static int nodeRelease(Rtree *pRtree, RtreeNode *pNode){ int rc = SQLITE_OK; if( pNode ){ assert( pNode->nRef>0 ); pNode->nRef--; if( pNode->nRef==0 ){ @@ -110135,45 +156884,49 @@ ** Return the 64-bit integer value associated with cell iCell of ** node pNode. If pNode is a leaf node, this is a rowid. If it is ** an internal node, then the 64-bit integer is a child page number. */ static i64 nodeGetRowid( - Rtree *pRtree, - RtreeNode *pNode, - int iCell + Rtree *pRtree, /* The overall R-Tree */ + RtreeNode *pNode, /* The node from which to extract the ID */ + int iCell /* The cell index from which to extract the ID */ ){ assert( iCell<NCELL(pNode) ); return readInt64(&pNode->zData[4 + pRtree->nBytesPerCell*iCell]); } /* ** Return coordinate iCoord from cell iCell in node pNode. */ static void nodeGetCoord( - Rtree *pRtree, - RtreeNode *pNode, - int iCell, - int iCoord, - RtreeCoord *pCoord /* Space to write result to */ + Rtree *pRtree, /* The overall R-Tree */ + RtreeNode *pNode, /* The node from which to extract a coordinate */ + int iCell, /* The index of the cell within the node */ + int iCoord, /* Which coordinate to extract */ + RtreeCoord *pCoord /* OUT: Space to write result to */ ){ readCoord(&pNode->zData[12 + pRtree->nBytesPerCell*iCell + 4*iCoord], pCoord); } /* ** Deserialize cell iCell of node pNode. Populate the structure pointed ** to by pCell with the results. 
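** (Each cell occupies nBytesPerCell bytes starting at byte offset
** 4 + nBytesPerCell*iCell within the node: an 8-byte big-endian rowid
** or child node number followed by nDim*2 big-endian 4-byte coordinates.
** That is why the coordinate data is read starting at offset
** 12 + nBytesPerCell*iCell below.)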
*/ static void nodeGetCell( - Rtree *pRtree, - RtreeNode *pNode, - int iCell, - RtreeCell *pCell + Rtree *pRtree, /* The overall R-Tree */ + RtreeNode *pNode, /* The node containing the cell to be read */ + int iCell, /* Index of the cell within the node */ + RtreeCell *pCell /* OUT: Write the cell contents here */ ){ + u8 *pData; + RtreeCoord *pCoord; int ii; pCell->iRowid = nodeGetRowid(pRtree, pNode, iCell); + pData = pNode->zData + (12 + pRtree->nBytesPerCell*iCell); + pCoord = pCell->aCoord; for(ii=0; ii<pRtree->nDim*2; ii++){ - nodeGetCoord(pRtree, pNode, iCell, ii, &pCell->aCoord[ii]); + readCoord(&pData[ii*4], &pCoord[ii]); } } /* Forward declaration for the function that does the work of @@ -110287,21 +157040,41 @@ *ppCursor = (sqlite3_vtab_cursor *)pCsr; return rc; } + +/* +** Free the RtreeCursor.aConstraint[] array and its contents. +*/ +static void freeCursorConstraints(RtreeCursor *pCsr){ + if( pCsr->aConstraint ){ + int i; /* Used to iterate through constraint array */ + for(i=0; i<pCsr->nConstraint; i++){ + sqlite3_rtree_query_info *pInfo = pCsr->aConstraint[i].pInfo; + if( pInfo ){ + if( pInfo->xDelUser ) pInfo->xDelUser(pInfo->pUser); + sqlite3_free(pInfo); + } + } + sqlite3_free(pCsr->aConstraint); + pCsr->aConstraint = 0; + } +} + /* ** Rtree virtual table module xClose method. */ static int rtreeClose(sqlite3_vtab_cursor *cur){ Rtree *pRtree = (Rtree *)(cur->pVtab); - int rc; + int ii; RtreeCursor *pCsr = (RtreeCursor *)cur; - sqlite3_free(pCsr->aConstraint); - rc = nodeRelease(pRtree, pCsr->pNode); + freeCursorConstraints(pCsr); + sqlite3_free(pCsr->aPoint); + for(ii=0; ii<RTREE_CACHE_SZ; ii++) nodeRelease(pRtree, pCsr->aNode[ii]); sqlite3_free(pCsr); - return rc; + return SQLITE_OK; } /* ** Rtree virtual table module xEof method. ** @@ -110308,237 +157081,546 @@ ** Return non-zero if the cursor does not currently point to a valid ** record (i.e if the scan has finished), or zero otherwise. */ static int rtreeEof(sqlite3_vtab_cursor *cur){ RtreeCursor *pCsr = (RtreeCursor *)cur; - return (pCsr->pNode==0); -} - -/* -** Cursor pCursor currently points to a cell in a non-leaf page. -** Return true if the sub-tree headed by the cell is filtered -** (excluded) by the constraints in the pCursor->aConstraint[] -** array, or false otherwise. -*/ -static int testRtreeCell(Rtree *pRtree, RtreeCursor *pCursor){ - RtreeCell cell; - int ii; - int bRes = 0; - - nodeGetCell(pRtree, pCursor->pNode, pCursor->iCell, &cell); - for(ii=0; bRes==0 && ii<pCursor->nConstraint; ii++){ - RtreeConstraint *p = &pCursor->aConstraint[ii]; - double cell_min = DCOORD(cell.aCoord[(p->iCoord>>1)*2]); - double cell_max = DCOORD(cell.aCoord[(p->iCoord>>1)*2+1]); - - assert(p->op==RTREE_LE || p->op==RTREE_LT || p->op==RTREE_GE - || p->op==RTREE_GT || p->op==RTREE_EQ - ); - - switch( p->op ){ - case RTREE_LE: case RTREE_LT: bRes = p->rValue<cell_min; break; - case RTREE_GE: case RTREE_GT: bRes = p->rValue>cell_max; break; - case RTREE_EQ: - bRes = (p->rValue>cell_max || p->rValue<cell_min); - break; - } - } - - return bRes; -} - -/* -** Return true if the cell that cursor pCursor currently points to -** would be filtered (excluded) by the constraints in the -** pCursor->aConstraint[] array, or false otherwise. -** -** This function assumes that the cell is part of a leaf node. 
-*/ -static int testRtreeEntry(Rtree *pRtree, RtreeCursor *pCursor){ - RtreeCell cell; - int ii; - - nodeGetCell(pRtree, pCursor->pNode, pCursor->iCell, &cell); - for(ii=0; ii<pCursor->nConstraint; ii++){ - RtreeConstraint *p = &pCursor->aConstraint[ii]; - double coord = DCOORD(cell.aCoord[p->iCoord]); - int res; - assert(p->op==RTREE_LE || p->op==RTREE_LT || p->op==RTREE_GE - || p->op==RTREE_GT || p->op==RTREE_EQ - ); - switch( p->op ){ - case RTREE_LE: res = (coord<=p->rValue); break; - case RTREE_LT: res = (coord<p->rValue); break; - case RTREE_GE: res = (coord>=p->rValue); break; - case RTREE_GT: res = (coord>p->rValue); break; - case RTREE_EQ: res = (coord==p->rValue); break; - } - - if( !res ) return 1; - } - - return 0; -} - -/* -** Cursor pCursor currently points at a node that heads a sub-tree of -** height iHeight (if iHeight==0, then the node is a leaf). Descend -** to point to the left-most cell of the sub-tree that matches the -** configured constraints. -*/ -static int descendToCell( - Rtree *pRtree, - RtreeCursor *pCursor, - int iHeight, - int *pEof /* OUT: Set to true if cannot descend */ -){ - int isEof; - int rc; - int ii; - RtreeNode *pChild; - sqlite3_int64 iRowid; - - RtreeNode *pSavedNode = pCursor->pNode; - int iSavedCell = pCursor->iCell; - - assert( iHeight>=0 ); - - if( iHeight==0 ){ - isEof = testRtreeEntry(pRtree, pCursor); - }else{ - isEof = testRtreeCell(pRtree, pCursor); - } - if( isEof || iHeight==0 ){ - *pEof = isEof; - return SQLITE_OK; - } - - iRowid = nodeGetRowid(pRtree, pCursor->pNode, pCursor->iCell); - rc = nodeAcquire(pRtree, iRowid, pCursor->pNode, &pChild); - if( rc!=SQLITE_OK ){ - return rc; - } - - nodeRelease(pRtree, pCursor->pNode); - pCursor->pNode = pChild; - isEof = 1; - for(ii=0; isEof && ii<NCELL(pChild); ii++){ - pCursor->iCell = ii; - rc = descendToCell(pRtree, pCursor, iHeight-1, &isEof); - if( rc!=SQLITE_OK ){ - return rc; - } - } - - if( isEof ){ - assert( pCursor->pNode==pChild ); - nodeReference(pSavedNode); - nodeRelease(pRtree, pChild); - pCursor->pNode = pSavedNode; - pCursor->iCell = iSavedCell; - } - - *pEof = isEof; - return SQLITE_OK; + return pCsr->atEOF; +} + +/* +** Convert raw bits from the on-disk RTree record into a coordinate value. +** The on-disk format is big-endian and needs to be converted for little- +** endian platforms. The on-disk record stores integer coordinates if +** eInt is true and it stores 32-bit floating point records if eInt is +** false. a[] is the four bytes of the on-disk record to be decoded. +** Store the results in "r". +** +** There are three versions of this macro, one each for little-endian and +** big-endian processors and a third generic implementation. The endian- +** specific implementations are much faster and are preferred if the +** processor endianness is known at compile-time. The SQLITE_BYTEORDER +** macro is part of sqliteInt.h and hence the endian-specific +** implementation will only be used if this module is compiled as part +** of the amalgamation. +*/ +#if defined(SQLITE_BYTEORDER) && SQLITE_BYTEORDER==1234 +#define RTREE_DECODE_COORD(eInt, a, r) { \ + RtreeCoord c; /* Coordinate decoded */ \ + memcpy(&c.u,a,4); \ + c.u = ((c.u>>24)&0xff)|((c.u>>8)&0xff00)| \ + ((c.u&0xff)<<24)|((c.u&0xff00)<<8); \ + r = eInt ? (sqlite3_rtree_dbl)c.i : (sqlite3_rtree_dbl)c.f; \ +} +#elif defined(SQLITE_BYTEORDER) && SQLITE_BYTEORDER==4321 +#define RTREE_DECODE_COORD(eInt, a, r) { \ + RtreeCoord c; /* Coordinate decoded */ \ + memcpy(&c.u,a,4); \ + r = eInt ? 
(sqlite3_rtree_dbl)c.i : (sqlite3_rtree_dbl)c.f; \ +} +#else +#define RTREE_DECODE_COORD(eInt, a, r) { \ + RtreeCoord c; /* Coordinate decoded */ \ + c.u = ((u32)a[0]<<24) + ((u32)a[1]<<16) \ + +((u32)a[2]<<8) + a[3]; \ + r = eInt ? (sqlite3_rtree_dbl)c.i : (sqlite3_rtree_dbl)c.f; \ +} +#endif + +/* +** Check the RTree node or entry given by pCellData and p against the MATCH +** constraint pConstraint. +*/ +static int rtreeCallbackConstraint( + RtreeConstraint *pConstraint, /* The constraint to test */ + int eInt, /* True if RTree holding integer coordinates */ + u8 *pCellData, /* Raw cell content */ + RtreeSearchPoint *pSearch, /* Container of this cell */ + sqlite3_rtree_dbl *prScore, /* OUT: score for the cell */ + int *peWithin /* OUT: visibility of the cell */ +){ + int i; /* Loop counter */ + sqlite3_rtree_query_info *pInfo = pConstraint->pInfo; /* Callback info */ + int nCoord = pInfo->nCoord; /* No. of coordinates */ + int rc; /* Callback return code */ + sqlite3_rtree_dbl aCoord[RTREE_MAX_DIMENSIONS*2]; /* Decoded coordinates */ + + assert( pConstraint->op==RTREE_MATCH || pConstraint->op==RTREE_QUERY ); + assert( nCoord==2 || nCoord==4 || nCoord==6 || nCoord==8 || nCoord==10 ); + + if( pConstraint->op==RTREE_QUERY && pSearch->iLevel==1 ){ + pInfo->iRowid = readInt64(pCellData); + } + pCellData += 8; + for(i=0; i<nCoord; i++, pCellData += 4){ + RTREE_DECODE_COORD(eInt, pCellData, aCoord[i]); + } + if( pConstraint->op==RTREE_MATCH ){ + rc = pConstraint->u.xGeom((sqlite3_rtree_geometry*)pInfo, + nCoord, aCoord, &i); + if( i==0 ) *peWithin = NOT_WITHIN; + *prScore = RTREE_ZERO; + }else{ + pInfo->aCoord = aCoord; + pInfo->iLevel = pSearch->iLevel - 1; + pInfo->rScore = pInfo->rParentScore = pSearch->rScore; + pInfo->eWithin = pInfo->eParentWithin = pSearch->eWithin; + rc = pConstraint->u.xQueryFunc(pInfo); + if( pInfo->eWithin<*peWithin ) *peWithin = pInfo->eWithin; + if( pInfo->rScore<*prScore || *prScore<RTREE_ZERO ){ + *prScore = pInfo->rScore; + } + } + return rc; +} + +/* +** Check the internal RTree node given by pCellData against constraint p. +** If this constraint cannot be satisfied by any child within the node, +** set *peWithin to NOT_WITHIN. +*/ +static void rtreeNonleafConstraint( + RtreeConstraint *p, /* The constraint to test */ + int eInt, /* True if RTree holds integer coordinates */ + u8 *pCellData, /* Raw cell content as appears on disk */ + int *peWithin /* Adjust downward, as appropriate */ +){ + sqlite3_rtree_dbl val; /* Coordinate value convert to a double */ + + /* p->iCoord might point to either a lower or upper bound coordinate + ** in a coordinate pair. But make pCellData point to the lower bound. + */ + pCellData += 8 + 4*(p->iCoord&0xfe); + + assert(p->op==RTREE_LE || p->op==RTREE_LT || p->op==RTREE_GE + || p->op==RTREE_GT || p->op==RTREE_EQ ); + switch( p->op ){ + case RTREE_LE: + case RTREE_LT: + case RTREE_EQ: + RTREE_DECODE_COORD(eInt, pCellData, val); + /* val now holds the lower bound of the coordinate pair */ + if( p->u.rValue>=val ) return; + if( p->op!=RTREE_EQ ) break; /* RTREE_LE and RTREE_LT end here */ + /* Fall through for the RTREE_EQ case */ + + default: /* RTREE_GT or RTREE_GE, or fallthrough of RTREE_EQ */ + pCellData += 4; + RTREE_DECODE_COORD(eInt, pCellData, val); + /* val now holds the upper bound of the coordinate pair */ + if( p->u.rValue<=val ) return; + } + *peWithin = NOT_WITHIN; +} + +/* +** Check the leaf RTree cell given by pCellData against constraint p. +** If this constraint is not satisfied, set *peWithin to NOT_WITHIN. 
+** If the constraint is satisfied, leave *peWithin unchanged. +** +** The constraint is of the form: xN op $val +** +** The op is given by p->op. The xN is p->iCoord-th coordinate in +** pCellData. $val is given by p->u.rValue. +*/ +static void rtreeLeafConstraint( + RtreeConstraint *p, /* The constraint to test */ + int eInt, /* True if RTree holds integer coordinates */ + u8 *pCellData, /* Raw cell content as appears on disk */ + int *peWithin /* Adjust downward, as appropriate */ +){ + RtreeDValue xN; /* Coordinate value converted to a double */ + + assert(p->op==RTREE_LE || p->op==RTREE_LT || p->op==RTREE_GE + || p->op==RTREE_GT || p->op==RTREE_EQ ); + pCellData += 8 + p->iCoord*4; + RTREE_DECODE_COORD(eInt, pCellData, xN); + switch( p->op ){ + case RTREE_LE: if( xN <= p->u.rValue ) return; break; + case RTREE_LT: if( xN < p->u.rValue ) return; break; + case RTREE_GE: if( xN >= p->u.rValue ) return; break; + case RTREE_GT: if( xN > p->u.rValue ) return; break; + default: if( xN == p->u.rValue ) return; break; + } + *peWithin = NOT_WITHIN; } /* ** One of the cells in node pNode is guaranteed to have a 64-bit ** integer value equal to iRowid. Return the index of this cell. */ -static int nodeRowidIndex(Rtree *pRtree, RtreeNode *pNode, i64 iRowid){ +static int nodeRowidIndex( + Rtree *pRtree, + RtreeNode *pNode, + i64 iRowid, + int *piIndex +){ int ii; - for(ii=0; nodeGetRowid(pRtree, pNode, ii)!=iRowid; ii++){ - assert( ii<(NCELL(pNode)-1) ); + int nCell = NCELL(pNode); + assert( nCell<200 ); + for(ii=0; ii<nCell; ii++){ + if( nodeGetRowid(pRtree, pNode, ii)==iRowid ){ + *piIndex = ii; + return SQLITE_OK; + } } - return ii; + return SQLITE_CORRUPT_VTAB; } /* ** Return the index of the cell containing a pointer to node pNode ** in its parent. If pNode is the root node, return -1. */ -static int nodeParentIndex(Rtree *pRtree, RtreeNode *pNode){ +static int nodeParentIndex(Rtree *pRtree, RtreeNode *pNode, int *piIndex){ RtreeNode *pParent = pNode->pParent; if( pParent ){ - return nodeRowidIndex(pRtree, pParent, pNode->iNode); + return nodeRowidIndex(pRtree, pParent, pNode->iNode, piIndex); } - return -1; + *piIndex = -1; + return SQLITE_OK; +} + +/* +** Compare two search points. Return negative, zero, or positive if the first +** is less than, equal to, or greater than the second. +** +** The rScore is the primary key. Smaller rScore values come first. +** If the rScore is a tie, then use iLevel as the tie breaker with smaller +** iLevel values coming first. In this way, if rScore is the same for all +** SearchPoints, then iLevel becomes the deciding factor and the result +** is a depth-first search, which is the desired default behavior. +*/ +static int rtreeSearchPointCompare( + const RtreeSearchPoint *pA, + const RtreeSearchPoint *pB +){ + if( pA->rScore<pB->rScore ) return -1; + if( pA->rScore>pB->rScore ) return +1; + if( pA->iLevel<pB->iLevel ) return -1; + if( pA->iLevel>pB->iLevel ) return +1; + return 0; +} + +/* +** Interchange to search points in a cursor. +*/ +static void rtreeSearchPointSwap(RtreeCursor *p, int i, int j){ + RtreeSearchPoint t = p->aPoint[i]; + assert( i<j ); + p->aPoint[i] = p->aPoint[j]; + p->aPoint[j] = t; + i++; j++; + if( i<RTREE_CACHE_SZ ){ + if( j>=RTREE_CACHE_SZ ){ + nodeRelease(RTREE_OF_CURSOR(p), p->aNode[i]); + p->aNode[i] = 0; + }else{ + RtreeNode *pTemp = p->aNode[i]; + p->aNode[i] = p->aNode[j]; + p->aNode[j] = pTemp; + } + } +} + +/* +** Return the search point with the lowest current score. 
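+** This is either the cached sPoint (when bPoint is set) or the root of
+** the aPoint[] min-heap.  rtreeSearchPointNew() only installs a new
+** sPoint when it scores no worse than the current front of the queue,
+** so checking sPoint first is sufficient.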
+*/ +static RtreeSearchPoint *rtreeSearchPointFirst(RtreeCursor *pCur){ + return pCur->bPoint ? &pCur->sPoint : pCur->nPoint ? pCur->aPoint : 0; +} + +/* +** Get the RtreeNode for the search point with the lowest score. +*/ +static RtreeNode *rtreeNodeOfFirstSearchPoint(RtreeCursor *pCur, int *pRC){ + sqlite3_int64 id; + int ii = 1 - pCur->bPoint; + assert( ii==0 || ii==1 ); + assert( pCur->bPoint || pCur->nPoint ); + if( pCur->aNode[ii]==0 ){ + assert( pRC!=0 ); + id = ii ? pCur->aPoint[0].id : pCur->sPoint.id; + *pRC = nodeAcquire(RTREE_OF_CURSOR(pCur), id, 0, &pCur->aNode[ii]); + } + return pCur->aNode[ii]; +} + +/* +** Push a new element onto the priority queue +*/ +static RtreeSearchPoint *rtreeEnqueue( + RtreeCursor *pCur, /* The cursor */ + RtreeDValue rScore, /* Score for the new search point */ + u8 iLevel /* Level for the new search point */ +){ + int i, j; + RtreeSearchPoint *pNew; + if( pCur->nPoint>=pCur->nPointAlloc ){ + int nNew = pCur->nPointAlloc*2 + 8; + pNew = sqlite3_realloc(pCur->aPoint, nNew*sizeof(pCur->aPoint[0])); + if( pNew==0 ) return 0; + pCur->aPoint = pNew; + pCur->nPointAlloc = nNew; + } + i = pCur->nPoint++; + pNew = pCur->aPoint + i; + pNew->rScore = rScore; + pNew->iLevel = iLevel; + assert( iLevel<=RTREE_MAX_DEPTH ); + while( i>0 ){ + RtreeSearchPoint *pParent; + j = (i-1)/2; + pParent = pCur->aPoint + j; + if( rtreeSearchPointCompare(pNew, pParent)>=0 ) break; + rtreeSearchPointSwap(pCur, j, i); + i = j; + pNew = pParent; + } + return pNew; +} + +/* +** Allocate a new RtreeSearchPoint and return a pointer to it. Return +** NULL if malloc fails. +*/ +static RtreeSearchPoint *rtreeSearchPointNew( + RtreeCursor *pCur, /* The cursor */ + RtreeDValue rScore, /* Score for the new search point */ + u8 iLevel /* Level for the new search point */ +){ + RtreeSearchPoint *pNew, *pFirst; + pFirst = rtreeSearchPointFirst(pCur); + pCur->anQueue[iLevel]++; + if( pFirst==0 + || pFirst->rScore>rScore + || (pFirst->rScore==rScore && pFirst->iLevel>iLevel) + ){ + if( pCur->bPoint ){ + int ii; + pNew = rtreeEnqueue(pCur, rScore, iLevel); + if( pNew==0 ) return 0; + ii = (int)(pNew - pCur->aPoint) + 1; + if( ii<RTREE_CACHE_SZ ){ + assert( pCur->aNode[ii]==0 ); + pCur->aNode[ii] = pCur->aNode[0]; + }else{ + nodeRelease(RTREE_OF_CURSOR(pCur), pCur->aNode[0]); + } + pCur->aNode[0] = 0; + *pNew = pCur->sPoint; + } + pCur->sPoint.rScore = rScore; + pCur->sPoint.iLevel = iLevel; + pCur->bPoint = 1; + return &pCur->sPoint; + }else{ + return rtreeEnqueue(pCur, rScore, iLevel); + } +} + +#if 0 +/* Tracing routines for the RtreeSearchPoint queue */ +static void tracePoint(RtreeSearchPoint *p, int idx, RtreeCursor *pCur){ + if( idx<0 ){ printf(" s"); }else{ printf("%2d", idx); } + printf(" %d.%05lld.%02d %g %d", + p->iLevel, p->id, p->iCell, p->rScore, p->eWithin + ); + idx++; + if( idx<RTREE_CACHE_SZ ){ + printf(" %p\n", pCur->aNode[idx]); + }else{ + printf("\n"); + } +} +static void traceQueue(RtreeCursor *pCur, const char *zPrefix){ + int ii; + printf("=== %9s ", zPrefix); + if( pCur->bPoint ){ + tracePoint(&pCur->sPoint, -1, pCur); + } + for(ii=0; ii<pCur->nPoint; ii++){ + if( ii>0 || pCur->bPoint ) printf(" "); + tracePoint(&pCur->aPoint[ii], ii, pCur); + } +} +# define RTREE_QUEUE_TRACE(A,B) traceQueue(A,B) +#else +# define RTREE_QUEUE_TRACE(A,B) /* no-op */ +#endif + +/* Remove the search point with the lowest current score. 
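+** The aPoint[] array is organized as a binary min-heap ordered by
+** rtreeSearchPointCompare(): the parent of aPoint[i] is aPoint[(i-1)/2].
+** rtreeEnqueue() sifts a new entry up and this routine sifts the
+** replacement entry down, so both operations are O(log nPoint).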
+*/ +static void rtreeSearchPointPop(RtreeCursor *p){ + int i, j, k, n; + i = 1 - p->bPoint; + assert( i==0 || i==1 ); + if( p->aNode[i] ){ + nodeRelease(RTREE_OF_CURSOR(p), p->aNode[i]); + p->aNode[i] = 0; + } + if( p->bPoint ){ + p->anQueue[p->sPoint.iLevel]--; + p->bPoint = 0; + }else if( p->nPoint ){ + p->anQueue[p->aPoint[0].iLevel]--; + n = --p->nPoint; + p->aPoint[0] = p->aPoint[n]; + if( n<RTREE_CACHE_SZ-1 ){ + p->aNode[1] = p->aNode[n+1]; + p->aNode[n+1] = 0; + } + i = 0; + while( (j = i*2+1)<n ){ + k = j+1; + if( k<n && rtreeSearchPointCompare(&p->aPoint[k], &p->aPoint[j])<0 ){ + if( rtreeSearchPointCompare(&p->aPoint[k], &p->aPoint[i])<0 ){ + rtreeSearchPointSwap(p, i, k); + i = k; + }else{ + break; + } + }else{ + if( rtreeSearchPointCompare(&p->aPoint[j], &p->aPoint[i])<0 ){ + rtreeSearchPointSwap(p, i, j); + i = j; + }else{ + break; + } + } + } + } +} + + +/* +** Continue the search on cursor pCur until the front of the queue +** contains an entry suitable for returning as a result-set row, +** or until the RtreeSearchPoint queue is empty, indicating that the +** query has completed. +*/ +static int rtreeStepToLeaf(RtreeCursor *pCur){ + RtreeSearchPoint *p; + Rtree *pRtree = RTREE_OF_CURSOR(pCur); + RtreeNode *pNode; + int eWithin; + int rc = SQLITE_OK; + int nCell; + int nConstraint = pCur->nConstraint; + int ii; + int eInt; + RtreeSearchPoint x; + + eInt = pRtree->eCoordType==RTREE_COORD_INT32; + while( (p = rtreeSearchPointFirst(pCur))!=0 && p->iLevel>0 ){ + pNode = rtreeNodeOfFirstSearchPoint(pCur, &rc); + if( rc ) return rc; + nCell = NCELL(pNode); + assert( nCell<200 ); + while( p->iCell<nCell ){ + sqlite3_rtree_dbl rScore = (sqlite3_rtree_dbl)-1; + u8 *pCellData = pNode->zData + (4+pRtree->nBytesPerCell*p->iCell); + eWithin = FULLY_WITHIN; + for(ii=0; ii<nConstraint; ii++){ + RtreeConstraint *pConstraint = pCur->aConstraint + ii; + if( pConstraint->op>=RTREE_MATCH ){ + rc = rtreeCallbackConstraint(pConstraint, eInt, pCellData, p, + &rScore, &eWithin); + if( rc ) return rc; + }else if( p->iLevel==1 ){ + rtreeLeafConstraint(pConstraint, eInt, pCellData, &eWithin); + }else{ + rtreeNonleafConstraint(pConstraint, eInt, pCellData, &eWithin); + } + if( eWithin==NOT_WITHIN ) break; + } + p->iCell++; + if( eWithin==NOT_WITHIN ) continue; + x.iLevel = p->iLevel - 1; + if( x.iLevel ){ + x.id = readInt64(pCellData); + x.iCell = 0; + }else{ + x.id = p->id; + x.iCell = p->iCell - 1; + } + if( p->iCell>=nCell ){ + RTREE_QUEUE_TRACE(pCur, "POP-S:"); + rtreeSearchPointPop(pCur); + } + if( rScore<RTREE_ZERO ) rScore = RTREE_ZERO; + p = rtreeSearchPointNew(pCur, rScore, x.iLevel); + if( p==0 ) return SQLITE_NOMEM; + p->eWithin = eWithin; + p->id = x.id; + p->iCell = x.iCell; + RTREE_QUEUE_TRACE(pCur, "PUSH-S:"); + break; + } + if( p->iCell>=nCell ){ + RTREE_QUEUE_TRACE(pCur, "POP-Se:"); + rtreeSearchPointPop(pCur); + } + } + pCur->atEOF = p==0; + return SQLITE_OK; } /* ** Rtree virtual table module xNext method. */ static int rtreeNext(sqlite3_vtab_cursor *pVtabCursor){ - Rtree *pRtree = (Rtree *)(pVtabCursor->pVtab); RtreeCursor *pCsr = (RtreeCursor *)pVtabCursor; int rc = SQLITE_OK; - if( pCsr->iStrategy==1 ){ - /* This "scan" is a direct lookup by rowid. There is no next entry. */ - nodeRelease(pRtree, pCsr->pNode); - pCsr->pNode = 0; - } - - else if( pCsr->pNode ){ - /* Move to the next entry that matches the configured constraints. 
*/ - int iHeight = 0; - while( pCsr->pNode ){ - RtreeNode *pNode = pCsr->pNode; - int nCell = NCELL(pNode); - for(pCsr->iCell++; pCsr->iCell<nCell; pCsr->iCell++){ - int isEof; - rc = descendToCell(pRtree, pCsr, iHeight, &isEof); - if( rc!=SQLITE_OK || !isEof ){ - return rc; - } - } - pCsr->pNode = pNode->pParent; - pCsr->iCell = nodeParentIndex(pRtree, pNode); - nodeReference(pCsr->pNode); - nodeRelease(pRtree, pNode); - iHeight++; - } - } - + /* Move to the next entry that matches the configured constraints. */ + RTREE_QUEUE_TRACE(pCsr, "POP-Nx:"); + rtreeSearchPointPop(pCsr); + rc = rtreeStepToLeaf(pCsr); return rc; } /* ** Rtree virtual table module xRowid method. */ static int rtreeRowid(sqlite3_vtab_cursor *pVtabCursor, sqlite_int64 *pRowid){ - Rtree *pRtree = (Rtree *)pVtabCursor->pVtab; RtreeCursor *pCsr = (RtreeCursor *)pVtabCursor; - - assert(pCsr->pNode); - *pRowid = nodeGetRowid(pRtree, pCsr->pNode, pCsr->iCell); - - return SQLITE_OK; + RtreeSearchPoint *p = rtreeSearchPointFirst(pCsr); + int rc = SQLITE_OK; + RtreeNode *pNode = rtreeNodeOfFirstSearchPoint(pCsr, &rc); + if( rc==SQLITE_OK && p ){ + *pRowid = nodeGetRowid(RTREE_OF_CURSOR(pCsr), pNode, p->iCell); + } + return rc; } /* ** Rtree virtual table module xColumn method. */ static int rtreeColumn(sqlite3_vtab_cursor *cur, sqlite3_context *ctx, int i){ Rtree *pRtree = (Rtree *)cur->pVtab; RtreeCursor *pCsr = (RtreeCursor *)cur; + RtreeSearchPoint *p = rtreeSearchPointFirst(pCsr); + RtreeCoord c; + int rc = SQLITE_OK; + RtreeNode *pNode = rtreeNodeOfFirstSearchPoint(pCsr, &rc); + if( rc ) return rc; + if( p==0 ) return SQLITE_OK; if( i==0 ){ - i64 iRowid = nodeGetRowid(pRtree, pCsr->pNode, pCsr->iCell); - sqlite3_result_int64(ctx, iRowid); + sqlite3_result_int64(ctx, nodeGetRowid(pRtree, pNode, p->iCell)); }else{ - RtreeCoord c; - nodeGetCoord(pRtree, pCsr->pNode, pCsr->iCell, i-1, &c); + if( rc ) return rc; + nodeGetCoord(pRtree, pNode, p->iCell, i-1, &c); +#ifndef SQLITE_RTREE_INT_ONLY if( pRtree->eCoordType==RTREE_COORD_REAL32 ){ sqlite3_result_double(ctx, c.f); - }else{ + }else +#endif + { assert( pRtree->eCoordType==RTREE_COORD_INT32 ); sqlite3_result_int(ctx, c.i); } } - return SQLITE_OK; } /* ** Use nodeAcquire() to obtain the leaf node containing the record with @@ -110545,24 +157627,78 @@ ** rowid iRowid. If successful, set *ppLeaf to point to the node and ** return SQLITE_OK. If there is no such record in the table, set ** *ppLeaf to 0 and return SQLITE_OK. If an error occurs, set *ppLeaf ** to zero and return an SQLite error code. */ -static int findLeafNode(Rtree *pRtree, i64 iRowid, RtreeNode **ppLeaf){ +static int findLeafNode( + Rtree *pRtree, /* RTree to search */ + i64 iRowid, /* The rowid searching for */ + RtreeNode **ppLeaf, /* Write the node here */ + sqlite3_int64 *piNode /* Write the node-id here */ +){ int rc; *ppLeaf = 0; sqlite3_bind_int64(pRtree->pReadRowid, 1, iRowid); if( sqlite3_step(pRtree->pReadRowid)==SQLITE_ROW ){ i64 iNode = sqlite3_column_int64(pRtree->pReadRowid, 0); + if( piNode ) *piNode = iNode; rc = nodeAcquire(pRtree, iNode, 0, ppLeaf); sqlite3_reset(pRtree->pReadRowid); }else{ rc = sqlite3_reset(pRtree->pReadRowid); } return rc; } +/* +** This function is called to configure the RtreeConstraint object passed +** as the second argument for a MATCH constraint. The value passed as the +** first argument to this function is the right-hand operand to the MATCH +** operator. 
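+** The operand is expected to be a BLOB created by one of the SQL
+** functions registered via sqlite3_rtree_geometry_callback() or
+** sqlite3_rtree_query_callback().  The BLOB must be large enough to
+** hold an RtreeMatchArg and must begin with the RTREE_GEOMETRY_MAGIC
+** value; otherwise SQLITE_ERROR is returned.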
+*/ +static int deserializeGeometry(sqlite3_value *pValue, RtreeConstraint *pCons){ + RtreeMatchArg *pBlob; /* BLOB returned by geometry function */ + sqlite3_rtree_query_info *pInfo; /* Callback information */ + int nBlob; /* Size of the geometry function blob */ + int nExpected; /* Expected size of the BLOB */ + + /* Check that value is actually a blob. */ + if( sqlite3_value_type(pValue)!=SQLITE_BLOB ) return SQLITE_ERROR; + + /* Check that the blob is roughly the right size. */ + nBlob = sqlite3_value_bytes(pValue); + if( nBlob<(int)sizeof(RtreeMatchArg) ){ + return SQLITE_ERROR; + } + + pInfo = (sqlite3_rtree_query_info*)sqlite3_malloc( sizeof(*pInfo)+nBlob ); + if( !pInfo ) return SQLITE_NOMEM; + memset(pInfo, 0, sizeof(*pInfo)); + pBlob = (RtreeMatchArg*)&pInfo[1]; + + memcpy(pBlob, sqlite3_value_blob(pValue), nBlob); + nExpected = (int)(sizeof(RtreeMatchArg) + + pBlob->nParam*sizeof(sqlite3_value*) + + (pBlob->nParam-1)*sizeof(RtreeDValue)); + if( pBlob->magic!=RTREE_GEOMETRY_MAGIC || nBlob!=nExpected ){ + sqlite3_free(pInfo); + return SQLITE_ERROR; + } + pInfo->pContext = pBlob->cb.pContext; + pInfo->nParam = pBlob->nParam; + pInfo->aParam = pBlob->aParam; + pInfo->apSqlParam = pBlob->apSqlParam; + + if( pBlob->cb.xGeom ){ + pCons->u.xGeom = pBlob->cb.xGeom; + }else{ + pCons->op = RTREE_QUERY; + pCons->u.xQueryFunc = pBlob->cb.xQueryFunc; + } + pCons->pInfo = pInfo; + return SQLITE_OK; +} /* ** Rtree virtual table module xFilter method. */ static int rtreeFilter( @@ -110570,91 +157706,129 @@ int idxNum, const char *idxStr, int argc, sqlite3_value **argv ){ Rtree *pRtree = (Rtree *)pVtabCursor->pVtab; RtreeCursor *pCsr = (RtreeCursor *)pVtabCursor; - RtreeNode *pRoot = 0; int ii; int rc = SQLITE_OK; + int iCell = 0; rtreeReference(pRtree); - sqlite3_free(pCsr->aConstraint); - pCsr->aConstraint = 0; + /* Reset the cursor to the same state as rtreeOpen() leaves it in. */ + freeCursorConstraints(pCsr); + sqlite3_free(pCsr->aPoint); + memset(pCsr, 0, sizeof(RtreeCursor)); + pCsr->base.pVtab = (sqlite3_vtab*)pRtree; + pCsr->iStrategy = idxNum; - if( idxNum==1 ){ /* Special case - lookup by rowid. */ RtreeNode *pLeaf; /* Leaf on which the required cell resides */ + RtreeSearchPoint *p; /* Search point for the the leaf */ i64 iRowid = sqlite3_value_int64(argv[0]); - rc = findLeafNode(pRtree, iRowid, &pLeaf); - pCsr->pNode = pLeaf; - if( pLeaf && rc==SQLITE_OK ){ - pCsr->iCell = nodeRowidIndex(pRtree, pLeaf, iRowid); + i64 iNode = 0; + rc = findLeafNode(pRtree, iRowid, &pLeaf, &iNode); + if( rc==SQLITE_OK && pLeaf!=0 ){ + p = rtreeSearchPointNew(pCsr, RTREE_ZERO, 0); + assert( p!=0 ); /* Always returns pCsr->sPoint */ + pCsr->aNode[0] = pLeaf; + p->id = iNode; + p->eWithin = PARTLY_WITHIN; + rc = nodeRowidIndex(pRtree, pLeaf, iRowid, &iCell); + p->iCell = iCell; + RTREE_QUEUE_TRACE(pCsr, "PUSH-F1:"); + }else{ + pCsr->atEOF = 1; } }else{ /* Normal case - r-tree scan. Set up the RtreeCursor.aConstraint array ** with the configured constraints. 
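    ** Each pair of bytes in idxStr holds an operator code followed by a
    ** coordinate-column index.  For example (hypothetically), idxStr
    ** "B0E3" with two argv[] values would request
    ** "coordinate 0 <= argv[0] AND coordinate 3 > argv[1]".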
*/ - if( argc>0 ){ + rc = nodeAcquire(pRtree, 1, 0, &pRoot); + if( rc==SQLITE_OK && argc>0 ){ pCsr->aConstraint = sqlite3_malloc(sizeof(RtreeConstraint)*argc); pCsr->nConstraint = argc; if( !pCsr->aConstraint ){ rc = SQLITE_NOMEM; }else{ - assert( (idxStr==0 && argc==0) || strlen(idxStr)==argc*2 ); + memset(pCsr->aConstraint, 0, sizeof(RtreeConstraint)*argc); + memset(pCsr->anQueue, 0, sizeof(u32)*(pRtree->iDepth + 1)); + assert( (idxStr==0 && argc==0) + || (idxStr && (int)strlen(idxStr)==argc*2) ); for(ii=0; ii<argc; ii++){ RtreeConstraint *p = &pCsr->aConstraint[ii]; p->op = idxStr[ii*2]; - p->iCoord = idxStr[ii*2+1]-'a'; - p->rValue = sqlite3_value_double(argv[ii]); - } - } - } - - if( rc==SQLITE_OK ){ - pCsr->pNode = 0; - rc = nodeAcquire(pRtree, 1, 0, &pRoot); - } - if( rc==SQLITE_OK ){ - int isEof = 1; - int nCell = NCELL(pRoot); - pCsr->pNode = pRoot; - for(pCsr->iCell=0; rc==SQLITE_OK && pCsr->iCell<nCell; pCsr->iCell++){ - assert( pCsr->pNode==pRoot ); - rc = descendToCell(pRtree, pCsr, pRtree->iDepth, &isEof); - if( !isEof ){ - break; - } - } - if( rc==SQLITE_OK && isEof ){ - assert( pCsr->pNode==pRoot ); - nodeRelease(pRtree, pRoot); - pCsr->pNode = 0; - } - assert( rc!=SQLITE_OK || !pCsr->pNode || pCsr->iCell<NCELL(pCsr->pNode) ); - } - } - + p->iCoord = idxStr[ii*2+1]-'0'; + if( p->op>=RTREE_MATCH ){ + /* A MATCH operator. The right-hand-side must be a blob that + ** can be cast into an RtreeMatchArg object. One created using + ** an sqlite3_rtree_geometry_callback() SQL user function. + */ + rc = deserializeGeometry(argv[ii], p); + if( rc!=SQLITE_OK ){ + break; + } + p->pInfo->nCoord = pRtree->nDim*2; + p->pInfo->anQueue = pCsr->anQueue; + p->pInfo->mxLevel = pRtree->iDepth + 1; + }else{ +#ifdef SQLITE_RTREE_INT_ONLY + p->u.rValue = sqlite3_value_int64(argv[ii]); +#else + p->u.rValue = sqlite3_value_double(argv[ii]); +#endif + } + } + } + } + if( rc==SQLITE_OK ){ + RtreeSearchPoint *pNew; + pNew = rtreeSearchPointNew(pCsr, RTREE_ZERO, pRtree->iDepth+1); + if( pNew==0 ) return SQLITE_NOMEM; + pNew->id = 1; + pNew->iCell = 0; + pNew->eWithin = PARTLY_WITHIN; + assert( pCsr->bPoint==1 ); + pCsr->aNode[0] = pRoot; + pRoot = 0; + RTREE_QUEUE_TRACE(pCsr, "PUSH-Fm:"); + rc = rtreeStepToLeaf(pCsr); + } + } + + nodeRelease(pRtree, pRoot); rtreeRelease(pRtree); return rc; } + +/* +** Set the pIdxInfo->estimatedRows variable to nRow. Unless this +** extension is currently being used by a version of SQLite too old to +** support estimatedRows. In that case this function is a no-op. +*/ +static void setEstimatedRows(sqlite3_index_info *pIdxInfo, i64 nRow){ +#if SQLITE_VERSION_NUMBER>=3008002 + if( sqlite3_libversion_number()>=3008002 ){ + pIdxInfo->estimatedRows = nRow; + } +#endif +} /* ** Rtree virtual table module xBestIndex method. There are three ** table scan strategies to choose from (in order from most to ** least desirable): ** ** idxNum idxStr Strategy ** ------------------------------------------------ ** 1 Unused Direct lookup by rowid. -** 2 See below R-tree query. -** 3 Unused Full table scan. +** 2 See below R-tree query or full-table scan. ** ------------------------------------------------ ** -** If strategy 1 or 3 is used, then idxStr is not meaningful. If strategy +** If strategy 1 is used, then idxStr is not meaningful. If strategy ** 2 is used, idxStr is formatted to contain 2 bytes for each ** constraint used. The first two bytes of idxStr correspond to ** the constraint in sqlite3_index_info.aConstraintUsage[] with ** (argvIndex==1) etc. 
** @@ -110666,29 +157840,45 @@ ** = 0x41 ('A') ** <= 0x42 ('B') ** < 0x43 ('C') ** >= 0x44 ('D') ** > 0x45 ('E') +** MATCH 0x46 ('F') ** ---------------------- ** ** The second of each pair of bytes identifies the coordinate column ** to which the constraint applies. The leftmost coordinate column ** is 'a', the second from the left 'b' etc. */ static int rtreeBestIndex(sqlite3_vtab *tab, sqlite3_index_info *pIdxInfo){ + Rtree *pRtree = (Rtree*)tab; int rc = SQLITE_OK; - int ii, cCol; + int ii; + int bMatch = 0; /* True if there exists a MATCH constraint */ + i64 nRow; /* Estimated rows returned by this scan */ int iIdx = 0; char zIdxStr[RTREE_MAX_DIMENSIONS*8+1]; memset(zIdxStr, 0, sizeof(zIdxStr)); - assert( pIdxInfo->idxStr==0 ); + /* Check if there exists a MATCH constraint - even an unusable one. If there + ** is, do not consider the lookup-by-rowid plan as using such a plan would + ** require the VDBE to evaluate the MATCH constraint, which is not currently + ** possible. */ for(ii=0; ii<pIdxInfo->nConstraint; ii++){ + if( pIdxInfo->aConstraint[ii].op==SQLITE_INDEX_CONSTRAINT_MATCH ){ + bMatch = 1; + } + } + + assert( pIdxInfo->idxStr==0 ); + for(ii=0; ii<pIdxInfo->nConstraint && iIdx<(int)(sizeof(zIdxStr)-1); ii++){ struct sqlite3_index_constraint *p = &pIdxInfo->aConstraint[ii]; - if( p->usable && p->iColumn==0 && p->op==SQLITE_INDEX_CONSTRAINT_EQ ){ + if( bMatch==0 && p->usable + && p->iColumn==0 && p->op==SQLITE_INDEX_CONSTRAINT_EQ + ){ /* We have an equality constraint on the rowid. Use strategy 1. */ int jj; for(jj=0; jj<ii; jj++){ pIdxInfo->aConstraintUsage[jj].argvIndex = 0; pIdxInfo->aConstraintUsage[jj].omit = 0; @@ -110698,89 +157888,69 @@ pIdxInfo->aConstraintUsage[jj].omit = 1; /* This strategy involves a two rowid lookups on an B-Tree structures ** and then a linear search of an R-Tree node. This should be ** considered almost as quick as a direct rowid lookup (for which - ** sqlite uses an internal cost of 0.0). + ** sqlite uses an internal cost of 0.0). It is expected to return + ** a single row. */ - pIdxInfo->estimatedCost = 10.0; + pIdxInfo->estimatedCost = 30.0; + setEstimatedRows(pIdxInfo, 1); return SQLITE_OK; } - if( p->usable && p->iColumn>0 ){ - u8 op = 0; + if( p->usable && (p->iColumn>0 || p->op==SQLITE_INDEX_CONSTRAINT_MATCH) ){ + u8 op; switch( p->op ){ case SQLITE_INDEX_CONSTRAINT_EQ: op = RTREE_EQ; break; case SQLITE_INDEX_CONSTRAINT_GT: op = RTREE_GT; break; case SQLITE_INDEX_CONSTRAINT_LE: op = RTREE_LE; break; case SQLITE_INDEX_CONSTRAINT_LT: op = RTREE_LT; break; case SQLITE_INDEX_CONSTRAINT_GE: op = RTREE_GE; break; - } - if( op ){ - /* Make sure this particular constraint has not been used before. - ** If it has been used before, ignore it. - ** - ** A <= or < can be used if there is a prior >= or >. - ** A >= or > can be used if there is a prior < or <=. - ** A <= or < is disqualified if there is a prior <=, <, or ==. - ** A >= or > is disqualified if there is a prior >=, >, or ==. - ** A == is disqualifed if there is any prior constraint. 
- */ - int j, opmsk; - static const unsigned char compatible[] = { 0, 0, 1, 1, 2, 2 }; - assert( compatible[RTREE_EQ & 7]==0 ); - assert( compatible[RTREE_LT & 7]==1 ); - assert( compatible[RTREE_LE & 7]==1 ); - assert( compatible[RTREE_GT & 7]==2 ); - assert( compatible[RTREE_GE & 7]==2 ); - cCol = p->iColumn - 1 + 'a'; - opmsk = compatible[op & 7]; - for(j=0; j<iIdx; j+=2){ - if( zIdxStr[j+1]==cCol && (compatible[zIdxStr[j] & 7] & opmsk)!=0 ){ - op = 0; - break; - } - } - } - if( op ){ - assert( iIdx<sizeof(zIdxStr)-1 ); - zIdxStr[iIdx++] = op; - zIdxStr[iIdx++] = cCol; - pIdxInfo->aConstraintUsage[ii].argvIndex = (iIdx/2); - pIdxInfo->aConstraintUsage[ii].omit = 1; - } + default: + assert( p->op==SQLITE_INDEX_CONSTRAINT_MATCH ); + op = RTREE_MATCH; + break; + } + zIdxStr[iIdx++] = op; + zIdxStr[iIdx++] = p->iColumn - 1 + '0'; + pIdxInfo->aConstraintUsage[ii].argvIndex = (iIdx/2); + pIdxInfo->aConstraintUsage[ii].omit = 1; } } pIdxInfo->idxNum = 2; pIdxInfo->needToFreeIdxStr = 1; if( iIdx>0 && 0==(pIdxInfo->idxStr = sqlite3_mprintf("%s", zIdxStr)) ){ return SQLITE_NOMEM; } - assert( iIdx>=0 ); - pIdxInfo->estimatedCost = (2000000.0 / (double)(iIdx + 1)); + + nRow = pRtree->nRowEst / (iIdx + 1); + pIdxInfo->estimatedCost = (double)6.0 * (double)nRow; + setEstimatedRows(pIdxInfo, nRow); + return rc; } /* ** Return the N-dimensional volumn of the cell stored in *p. */ -static float cellArea(Rtree *pRtree, RtreeCell *p){ - float area = 1.0; +static RtreeDValue cellArea(Rtree *pRtree, RtreeCell *p){ + RtreeDValue area = (RtreeDValue)1; int ii; for(ii=0; ii<(pRtree->nDim*2); ii+=2){ - area = area * (DCOORD(p->aCoord[ii+1]) - DCOORD(p->aCoord[ii])); + area = (area * (DCOORD(p->aCoord[ii+1]) - DCOORD(p->aCoord[ii]))); } return area; } /* ** Return the margin length of cell p. The margin length is the sum ** of the objects size in each dimension. */ -static float cellMargin(Rtree *pRtree, RtreeCell *p){ - float margin = 0.0; +static RtreeDValue cellMargin(Rtree *pRtree, RtreeCell *p){ + RtreeDValue margin = (RtreeDValue)0; int ii; for(ii=0; ii<(pRtree->nDim*2); ii+=2){ margin += (DCOORD(p->aCoord[ii+1]) - DCOORD(p->aCoord[ii])); } return margin; @@ -110824,71 +157994,45 @@ } /* ** Return the amount cell p would grow by if it were unioned with pCell. 
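** For example, in two dimensions, if *p is the box (0,0)-(2,2) (area 4)
** and *pCell is (1,1)-(3,3), the union is (0,0)-(3,3) (area 9), so the
** value returned is 5.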
*/ -static float cellGrowth(Rtree *pRtree, RtreeCell *p, RtreeCell *pCell){ - float area; +static RtreeDValue cellGrowth(Rtree *pRtree, RtreeCell *p, RtreeCell *pCell){ + RtreeDValue area; RtreeCell cell; memcpy(&cell, p, sizeof(RtreeCell)); area = cellArea(pRtree, &cell); cellUnion(pRtree, &cell, pCell); return (cellArea(pRtree, &cell)-area); } -#if VARIANT_RSTARTREE_CHOOSESUBTREE || VARIANT_RSTARTREE_SPLIT -static float cellOverlap( +static RtreeDValue cellOverlap( Rtree *pRtree, RtreeCell *p, RtreeCell *aCell, - int nCell, - int iExclude + int nCell ){ int ii; - float overlap = 0.0; + RtreeDValue overlap = RTREE_ZERO; for(ii=0; ii<nCell; ii++){ - if( ii!=iExclude ){ - int jj; - float o = 1.0; - for(jj=0; jj<(pRtree->nDim*2); jj+=2){ - double x1; - double x2; - - x1 = MAX(DCOORD(p->aCoord[jj]), DCOORD(aCell[ii].aCoord[jj])); - x2 = MIN(DCOORD(p->aCoord[jj+1]), DCOORD(aCell[ii].aCoord[jj+1])); - - if( x2<x1 ){ - o = 0.0; - break; - }else{ - o = o * (x2-x1); - } - } - overlap += o; - } + int jj; + RtreeDValue o = (RtreeDValue)1; + for(jj=0; jj<(pRtree->nDim*2); jj+=2){ + RtreeDValue x1, x2; + x1 = MAX(DCOORD(p->aCoord[jj]), DCOORD(aCell[ii].aCoord[jj])); + x2 = MIN(DCOORD(p->aCoord[jj+1]), DCOORD(aCell[ii].aCoord[jj+1])); + if( x2<x1 ){ + o = (RtreeDValue)0; + break; + }else{ + o = o * (x2-x1); + } + } + overlap += o; } return overlap; } -#endif - -#if VARIANT_RSTARTREE_CHOOSESUBTREE -static float cellOverlapEnlargement( - Rtree *pRtree, - RtreeCell *p, - RtreeCell *pInsert, - RtreeCell *aCell, - int nCell, - int iExclude -){ - float before; - float after; - before = cellOverlap(pRtree, p, aCell, nCell, iExclude); - cellUnion(pRtree, p, pInsert); - after = cellOverlap(pRtree, p, aCell, nCell, iExclude); - return after-before; -} -#endif /* ** This function implements the ChooseLeaf algorithm from Gutman[84]. ** ChooseSubTree in r*tree terminology. @@ -110904,60 +158048,36 @@ RtreeNode *pNode; rc = nodeAcquire(pRtree, 1, 0, &pNode); for(ii=0; rc==SQLITE_OK && ii<(pRtree->iDepth-iHeight); ii++){ int iCell; - sqlite3_int64 iBest; + sqlite3_int64 iBest = 0; - float fMinGrowth; - float fMinArea; - float fMinOverlap; + RtreeDValue fMinGrowth = RTREE_ZERO; + RtreeDValue fMinArea = RTREE_ZERO; int nCell = NCELL(pNode); RtreeCell cell; RtreeNode *pChild; RtreeCell *aCell = 0; -#if VARIANT_RSTARTREE_CHOOSESUBTREE - if( ii==(pRtree->iDepth-1) ){ - int jj; - aCell = sqlite3_malloc(sizeof(RtreeCell)*nCell); - if( !aCell ){ - rc = SQLITE_NOMEM; - nodeRelease(pRtree, pNode); - pNode = 0; - continue; - } - for(jj=0; jj<nCell; jj++){ - nodeGetCell(pRtree, pNode, jj, &aCell[jj]); - } - } -#endif - /* Select the child node which will be enlarged the least if pCell ** is inserted into it. Resolve ties by choosing the entry with ** the smallest area. 
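** (In this version only growth and area are compared; the R*-tree
** refinement that also minimized overlap enlargement at the leaf level
** has been removed along with the VARIANT_RSTARTREE_CHOOSESUBTREE code.)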
*/ for(iCell=0; iCell<nCell; iCell++){ - float growth; - float area; - float overlap = 0.0; + int bBest = 0; + RtreeDValue growth; + RtreeDValue area; nodeGetCell(pRtree, pNode, iCell, &cell); growth = cellGrowth(pRtree, &cell, pCell); area = cellArea(pRtree, &cell); -#if VARIANT_RSTARTREE_CHOOSESUBTREE - if( ii==(pRtree->iDepth-1) ){ - overlap = cellOverlapEnlargement(pRtree,&cell,pCell,aCell,nCell,iCell); - } -#endif - if( (iCell==0) - || (overlap<fMinOverlap) - || (overlap==fMinOverlap && growth<fMinGrowth) - || (overlap==fMinOverlap && growth==fMinGrowth && area<fMinArea) - ){ - fMinOverlap = overlap; + if( iCell==0||growth<fMinGrowth||(growth==fMinGrowth && area<fMinArea) ){ + bBest = 1; + } + if( bBest ){ fMinGrowth = growth; fMinArea = area; iBest = cell.iRowid; } } @@ -110975,29 +158095,34 @@ /* ** A cell with the same content as pCell has just been inserted into ** the node pNode. This function updates the bounding box cells in ** all ancestor elements. */ -static void AdjustTree( +static int AdjustTree( Rtree *pRtree, /* Rtree table */ RtreeNode *pNode, /* Adjust ancestry of this node. */ RtreeCell *pCell /* This cell was just inserted */ ){ RtreeNode *p = pNode; while( p->pParent ){ - RtreeCell cell; RtreeNode *pParent = p->pParent; - int iCell = nodeParentIndex(pRtree, p); + RtreeCell cell; + int iCell; + + if( nodeParentIndex(pRtree, p, &iCell) ){ + return SQLITE_CORRUPT_VTAB; + } nodeGetCell(pRtree, pParent, iCell, &cell); if( !cellContains(pRtree, &cell, pCell) ){ cellUnion(pRtree, &cell, pCell); nodeOverwriteCell(pRtree, pParent, &cell, iCell); } p = pParent; } + return SQLITE_OK; } /* ** Write mapping (iRowid->iNode) to the <rtree>_rowid table. */ @@ -111018,159 +158143,10 @@ return sqlite3_reset(pRtree->pWriteParent); } static int rtreeInsertCell(Rtree *, RtreeNode *, RtreeCell *, int); -#if VARIANT_GUTTMAN_LINEAR_SPLIT -/* -** Implementation of the linear variant of the PickNext() function from -** Guttman[84]. -*/ -static RtreeCell *LinearPickNext( - Rtree *pRtree, - RtreeCell *aCell, - int nCell, - RtreeCell *pLeftBox, - RtreeCell *pRightBox, - int *aiUsed -){ - int ii; - for(ii=0; aiUsed[ii]; ii++); - aiUsed[ii] = 1; - return &aCell[ii]; -} - -/* -** Implementation of the linear variant of the PickSeeds() function from -** Guttman[84]. -*/ -static void LinearPickSeeds( - Rtree *pRtree, - RtreeCell *aCell, - int nCell, - int *piLeftSeed, - int *piRightSeed -){ - int i; - int iLeftSeed = 0; - int iRightSeed = 1; - float maxNormalInnerWidth = 0.0; - - /* Pick two "seed" cells from the array of cells. The algorithm used - ** here is the LinearPickSeeds algorithm from Gutman[1984]. The - ** indices of the two seed cells in the array are stored in local - ** variables iLeftSeek and iRightSeed. 
- */ - for(i=0; i<pRtree->nDim; i++){ - float x1 = DCOORD(aCell[0].aCoord[i*2]); - float x2 = DCOORD(aCell[0].aCoord[i*2+1]); - float x3 = x1; - float x4 = x2; - int jj; - - int iCellLeft = 0; - int iCellRight = 0; - - for(jj=1; jj<nCell; jj++){ - float left = DCOORD(aCell[jj].aCoord[i*2]); - float right = DCOORD(aCell[jj].aCoord[i*2+1]); - - if( left<x1 ) x1 = left; - if( right>x4 ) x4 = right; - if( left>x3 ){ - x3 = left; - iCellRight = jj; - } - if( right<x2 ){ - x2 = right; - iCellLeft = jj; - } - } - - if( x4!=x1 ){ - float normalwidth = (x3 - x2) / (x4 - x1); - if( normalwidth>maxNormalInnerWidth ){ - iLeftSeed = iCellLeft; - iRightSeed = iCellRight; - } - } - } - - *piLeftSeed = iLeftSeed; - *piRightSeed = iRightSeed; -} -#endif /* VARIANT_GUTTMAN_LINEAR_SPLIT */ - -#if VARIANT_GUTTMAN_QUADRATIC_SPLIT -/* -** Implementation of the quadratic variant of the PickNext() function from -** Guttman[84]. -*/ -static RtreeCell *QuadraticPickNext( - Rtree *pRtree, - RtreeCell *aCell, - int nCell, - RtreeCell *pLeftBox, - RtreeCell *pRightBox, - int *aiUsed -){ - #define FABS(a) ((a)<0.0?-1.0*(a):(a)) - - int iSelect = -1; - float fDiff; - int ii; - for(ii=0; ii<nCell; ii++){ - if( aiUsed[ii]==0 ){ - float left = cellGrowth(pRtree, pLeftBox, &aCell[ii]); - float right = cellGrowth(pRtree, pLeftBox, &aCell[ii]); - float diff = FABS(right-left); - if( iSelect<0 || diff>fDiff ){ - fDiff = diff; - iSelect = ii; - } - } - } - aiUsed[iSelect] = 1; - return &aCell[iSelect]; -} - -/* -** Implementation of the quadratic variant of the PickSeeds() function from -** Guttman[84]. -*/ -static void QuadraticPickSeeds( - Rtree *pRtree, - RtreeCell *aCell, - int nCell, - int *piLeftSeed, - int *piRightSeed -){ - int ii; - int jj; - - int iLeftSeed = 0; - int iRightSeed = 1; - float fWaste = 0.0; - - for(ii=0; ii<nCell; ii++){ - for(jj=ii+1; jj<nCell; jj++){ - float right = cellArea(pRtree, &aCell[jj]); - float growth = cellGrowth(pRtree, &aCell[ii], &aCell[jj]); - float waste = growth - right; - - if( waste>fWaste ){ - iLeftSeed = ii; - iRightSeed = jj; - fWaste = waste; - } - } - } - - *piLeftSeed = iLeftSeed; - *piRightSeed = iRightSeed; -} -#endif /* VARIANT_GUTTMAN_QUADRATIC_SPLIT */ /* ** Arguments aIdx, aDistance and aSpare all point to arrays of size ** nIdx. The aIdx array contains the set of integers from 0 to ** (nIdx-1) in no particular order. This function sorts the values @@ -111188,11 +158164,11 @@ ** sorting algorithm. 
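** For example, if aDistance[] contains {3.0, 1.0, 2.0} and aIdx[] starts
** as {0, 1, 2}, then on return aIdx[] is {1, 2, 0} - the indexes ordered
** by increasing distance.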
*/ static void SortByDistance( int *aIdx, int nIdx, - float *aDistance, + RtreeDValue *aDistance, int *aSpare ){ if( nIdx>1 ){ int iLeft = 0; int iRight = 0; @@ -111214,12 +158190,12 @@ iRight++; }else if( iRight==nRight ){ aIdx[iLeft+iRight] = aLeft[iLeft]; iLeft++; }else{ - float fLeft = aDistance[aLeft[iLeft]]; - float fRight = aDistance[aRight[iRight]]; + RtreeDValue fLeft = aDistance[aLeft[iLeft]]; + RtreeDValue fRight = aDistance[aRight[iRight]]; if( fLeft<fRight ){ aIdx[iLeft+iRight] = aLeft[iLeft]; iLeft++; }else{ aIdx[iLeft+iRight] = aRight[iRight]; @@ -111231,12 +158207,12 @@ #if 0 /* Check that the sort worked */ { int jj; for(jj=1; jj<nIdx; jj++){ - float left = aDistance[aIdx[jj-1]]; - float right = aDistance[aIdx[jj]]; + RtreeDValue left = aDistance[aIdx[jj-1]]; + RtreeDValue right = aDistance[aIdx[jj]]; assert( left<=right ); } } #endif } @@ -111275,14 +158251,14 @@ SortByDimension(pRtree, aRight, nRight, iDim, aCell, aSpare); memcpy(aSpare, aLeft, sizeof(int)*nLeft); aLeft = aSpare; while( iLeft<nLeft || iRight<nRight ){ - double xleft1 = DCOORD(aCell[aLeft[iLeft]].aCoord[iDim*2]); - double xleft2 = DCOORD(aCell[aLeft[iLeft]].aCoord[iDim*2+1]); - double xright1 = DCOORD(aCell[aRight[iRight]].aCoord[iDim*2]); - double xright2 = DCOORD(aCell[aRight[iRight]].aCoord[iDim*2+1]); + RtreeDValue xleft1 = DCOORD(aCell[aLeft[iLeft]].aCoord[iDim*2]); + RtreeDValue xleft2 = DCOORD(aCell[aLeft[iLeft]].aCoord[iDim*2+1]); + RtreeDValue xright1 = DCOORD(aCell[aRight[iRight]].aCoord[iDim*2]); + RtreeDValue xright2 = DCOORD(aCell[aRight[iRight]].aCoord[iDim*2+1]); if( (iLeft!=nLeft) && ((iRight==nRight) || (xleft1<xright1) || (xleft1==xright1 && xleft2<xright2) )){ aIdx[iLeft+iRight] = aLeft[iLeft]; @@ -111296,22 +158272,21 @@ #if 0 /* Check that the sort worked */ { int jj; for(jj=1; jj<nIdx; jj++){ - float xleft1 = aCell[aIdx[jj-1]].aCoord[iDim*2]; - float xleft2 = aCell[aIdx[jj-1]].aCoord[iDim*2+1]; - float xright1 = aCell[aIdx[jj]].aCoord[iDim*2]; - float xright2 = aCell[aIdx[jj]].aCoord[iDim*2+1]; + RtreeDValue xleft1 = aCell[aIdx[jj-1]].aCoord[iDim*2]; + RtreeDValue xleft2 = aCell[aIdx[jj-1]].aCoord[iDim*2+1]; + RtreeDValue xright1 = aCell[aIdx[jj]].aCoord[iDim*2]; + RtreeDValue xright2 = aCell[aIdx[jj]].aCoord[iDim*2+1]; assert( xleft1<=xright1 && (xleft1<xright1 || xleft2<=xright2) ); } } #endif } } -#if VARIANT_RSTARTREE_SPLIT /* ** Implementation of the R*-tree variant of SplitNode from Beckman[1990]. 
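** Roughly: the cells are sorted along each dimension (SortByDimension()),
** and every split that leaves at least RTREE_MINCELLS() entries on each
** side is evaluated. The dimension with the smallest total margin is
** selected as the split axis, and within that dimension the split with
** the least overlap between the two bounding boxes (ties broken by
** smaller total area) is used.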
*/ static int splitNodeStartree( Rtree *pRtree, @@ -111324,13 +158299,13 @@ ){ int **aaSorted; int *aSpare; int ii; - int iBestDim; - int iBestSplit; - float fBestMargin; + int iBestDim = 0; + int iBestSplit = 0; + RtreeDValue fBestMargin = RTREE_ZERO; int nByte = (pRtree->nDim+1)*(sizeof(int*)+nCell*sizeof(int)); aaSorted = (int **)sqlite3_malloc(nByte); if( !aaSorted ){ @@ -111347,14 +158322,14 @@ } SortByDimension(pRtree, aaSorted[ii], nCell, ii, aCell, aSpare); } for(ii=0; ii<pRtree->nDim; ii++){ - float margin = 0.0; - float fBestOverlap; - float fBestArea; - int iBestLeft; + RtreeDValue margin = RTREE_ZERO; + RtreeDValue fBestOverlap = RTREE_ZERO; + RtreeDValue fBestArea = RTREE_ZERO; + int iBestLeft = 0; int nLeft; for( nLeft=RTREE_MINCELLS(pRtree); nLeft<=(nCell-RTREE_MINCELLS(pRtree)); @@ -111361,12 +158336,12 @@ nLeft++ ){ RtreeCell left; RtreeCell right; int kk; - float overlap; - float area; + RtreeDValue overlap; + RtreeDValue area; memcpy(&left, &aCell[aaSorted[ii][0]], sizeof(RtreeCell)); memcpy(&right, &aCell[aaSorted[ii][nCell-1]], sizeof(RtreeCell)); for(kk=1; kk<(nCell-1); kk++){ if( kk<nLeft ){ @@ -111375,11 +158350,11 @@ cellUnion(pRtree, &right, &aCell[aaSorted[ii][kk]]); } } margin += cellMargin(pRtree, &left); margin += cellMargin(pRtree, &right); - overlap = cellOverlap(pRtree, &left, &right, 1, -1); + overlap = cellOverlap(pRtree, &left, &right, 1); area = cellArea(pRtree, &left) + cellArea(pRtree, &right); if( (nLeft==RTREE_MINCELLS(pRtree)) || (overlap<fBestOverlap) || (overlap==fBestOverlap && area<fBestArea) ){ @@ -111407,67 +158382,11 @@ } sqlite3_free(aaSorted); return SQLITE_OK; } -#endif - -#if VARIANT_GUTTMAN_SPLIT -/* -** Implementation of the regular R-tree SplitNode from Guttman[1984]. -*/ -static int splitNodeGuttman( - Rtree *pRtree, - RtreeCell *aCell, - int nCell, - RtreeNode *pLeft, - RtreeNode *pRight, - RtreeCell *pBboxLeft, - RtreeCell *pBboxRight -){ - int iLeftSeed = 0; - int iRightSeed = 1; - int *aiUsed; - int i; - - aiUsed = sqlite3_malloc(sizeof(int)*nCell); - if( !aiUsed ){ - return SQLITE_NOMEM; - } - memset(aiUsed, 0, sizeof(int)*nCell); - - PickSeeds(pRtree, aCell, nCell, &iLeftSeed, &iRightSeed); - - memcpy(pBboxLeft, &aCell[iLeftSeed], sizeof(RtreeCell)); - memcpy(pBboxRight, &aCell[iRightSeed], sizeof(RtreeCell)); - nodeInsertCell(pRtree, pLeft, &aCell[iLeftSeed]); - nodeInsertCell(pRtree, pRight, &aCell[iRightSeed]); - aiUsed[iLeftSeed] = 1; - aiUsed[iRightSeed] = 1; - - for(i=nCell-2; i>0; i--){ - RtreeCell *pNext; - pNext = PickNext(pRtree, aCell, nCell, pBboxLeft, pBboxRight, aiUsed); - float diff = - cellGrowth(pRtree, pBboxLeft, pNext) - - cellGrowth(pRtree, pBboxRight, pNext) - ; - if( (RTREE_MINCELLS(pRtree)-NCELL(pRight)==i) - || (diff>0.0 && (RTREE_MINCELLS(pRtree)-NCELL(pLeft)!=i)) - ){ - nodeInsertCell(pRtree, pRight, pNext); - cellUnion(pRtree, pBboxRight, pNext); - }else{ - nodeInsertCell(pRtree, pLeft, pNext); - cellUnion(pRtree, pBboxLeft, pNext); - } - } - - sqlite3_free(aiUsed); - return SQLITE_OK; -} -#endif + static int updateMapping( Rtree *pRtree, i64 iRowid, RtreeNode *pNode, @@ -111522,18 +158441,18 @@ nodeZero(pRtree, pNode); memcpy(&aCell[nCell], pCell, sizeof(RtreeCell)); nCell++; if( pNode->iNode==1 ){ - pRight = nodeNew(pRtree, pNode, 1); - pLeft = nodeNew(pRtree, pNode, 1); + pRight = nodeNew(pRtree, pNode); + pLeft = nodeNew(pRtree, pNode); pRtree->iDepth++; pNode->isDirty = 1; writeInt16(pNode->zData, pRtree->iDepth); }else{ pLeft = pNode; - pRight = nodeNew(pRtree, pLeft->pParent, 1); + pRight = 
nodeNew(pRtree, pLeft->pParent); nodeReference(pLeft); } if( !pLeft || !pRight ){ rc = SQLITE_NOMEM; @@ -111541,17 +158460,22 @@ } memset(pLeft->zData, 0, pRtree->iNodeSize); memset(pRight->zData, 0, pRtree->iNodeSize); - rc = AssignCells(pRtree, aCell, nCell, pLeft, pRight, &leftbbox, &rightbbox); + rc = splitNodeStartree(pRtree, aCell, nCell, pLeft, pRight, + &leftbbox, &rightbbox); if( rc!=SQLITE_OK ){ goto splitnode_out; } - /* Ensure both child nodes have node numbers assigned to them. */ - if( (0==pRight->iNode && SQLITE_OK!=(rc = nodeWrite(pRtree, pRight))) + /* Ensure both child nodes have node numbers assigned to them by calling + ** nodeWrite(). Node pRight always needs a node number, as it was created + ** by nodeNew() above. But node pLeft sometimes already has a node number. + ** In this case avoid the all to nodeWrite(). + */ + if( SQLITE_OK!=(rc = nodeWrite(pRtree, pRight)) || (0==pLeft->iNode && SQLITE_OK!=(rc = nodeWrite(pRtree, pLeft))) ){ goto splitnode_out; } @@ -111563,13 +158487,19 @@ if( rc!=SQLITE_OK ){ goto splitnode_out; } }else{ RtreeNode *pParent = pLeft->pParent; - int iCell = nodeParentIndex(pRtree, pLeft); - nodeOverwriteCell(pRtree, pParent, &leftbbox, iCell); - AdjustTree(pRtree, pParent, &leftbbox); + int iCell; + rc = nodeParentIndex(pRtree, pLeft, &iCell); + if( rc==SQLITE_OK ){ + nodeOverwriteCell(pRtree, pParent, &leftbbox, iCell); + rc = AdjustTree(pRtree, pParent, &leftbbox); + } + if( rc!=SQLITE_OK ){ + goto splitnode_out; + } } if( (rc = rtreeInsertCell(pRtree, pRight->pParent, &rightbbox, iHeight+1)) ){ goto splitnode_out; } @@ -111609,44 +158539,73 @@ nodeRelease(pRtree, pLeft); sqlite3_free(aCell); return rc; } +/* +** If node pLeaf is not the root of the r-tree and its pParent pointer is +** still NULL, load all ancestor nodes of pLeaf into memory and populate +** the pLeaf->pParent chain all the way up to the root node. +** +** This operation is required when a row is deleted (or updated - an update +** is implemented as a delete followed by an insert). SQLite provides the +** rowid of the row to delete, which can be used to find the leaf on which +** the entry resides (argument pLeaf). Once the leaf is located, this +** function is called to determine its ancestry. +*/ static int fixLeafParent(Rtree *pRtree, RtreeNode *pLeaf){ int rc = SQLITE_OK; - if( pLeaf->iNode!=1 && pLeaf->pParent==0 ){ - sqlite3_bind_int64(pRtree->pReadParent, 1, pLeaf->iNode); - if( sqlite3_step(pRtree->pReadParent)==SQLITE_ROW ){ - i64 iNode = sqlite3_column_int64(pRtree->pReadParent, 0); - rc = nodeAcquire(pRtree, iNode, 0, &pLeaf->pParent); - }else{ - rc = SQLITE_ERROR; - } - sqlite3_reset(pRtree->pReadParent); - if( rc==SQLITE_OK ){ - rc = fixLeafParent(pRtree, pLeaf->pParent); - } + RtreeNode *pChild = pLeaf; + while( rc==SQLITE_OK && pChild->iNode!=1 && pChild->pParent==0 ){ + int rc2 = SQLITE_OK; /* sqlite3_reset() return code */ + sqlite3_bind_int64(pRtree->pReadParent, 1, pChild->iNode); + rc = sqlite3_step(pRtree->pReadParent); + if( rc==SQLITE_ROW ){ + RtreeNode *pTest; /* Used to test for reference loops */ + i64 iNode; /* Node number of parent node */ + + /* Before setting pChild->pParent, test that we are not creating a + ** loop of references (as we would if, say, pChild==pParent). We don't + ** want to do this as it leads to a memory leak when trying to delete + ** the referenced counted node structures. 
+ */ + iNode = sqlite3_column_int64(pRtree->pReadParent, 0); + for(pTest=pLeaf; pTest && pTest->iNode!=iNode; pTest=pTest->pParent); + if( !pTest ){ + rc2 = nodeAcquire(pRtree, iNode, 0, &pChild->pParent); + } + } + rc = sqlite3_reset(pRtree->pReadParent); + if( rc==SQLITE_OK ) rc = rc2; + if( rc==SQLITE_OK && !pChild->pParent ) rc = SQLITE_CORRUPT_VTAB; + pChild = pChild->pParent; } return rc; } static int deleteCell(Rtree *, RtreeNode *, int, int); static int removeNode(Rtree *pRtree, RtreeNode *pNode, int iHeight){ int rc; - RtreeNode *pParent; + int rc2; + RtreeNode *pParent = 0; int iCell; assert( pNode->nRef==1 ); /* Remove the entry in the parent cell. */ - iCell = nodeParentIndex(pRtree, pNode); - pParent = pNode->pParent; - pNode->pParent = 0; - if( SQLITE_OK!=(rc = deleteCell(pRtree, pParent, iCell, iHeight+1)) - || SQLITE_OK!=(rc = nodeRelease(pRtree, pParent)) - ){ + rc = nodeParentIndex(pRtree, pNode, &iCell); + if( rc==SQLITE_OK ){ + pParent = pNode->pParent; + pNode->pParent = 0; + rc = deleteCell(pRtree, pParent, iCell, iHeight+1); + } + rc2 = nodeRelease(pRtree, pParent); + if( rc==SQLITE_OK ){ + rc = rc2; + } + if( rc!=SQLITE_OK ){ return rc; } /* Remove the xxx_node entry. */ sqlite3_bind_int64(pRtree->pDeleteNode, 1, pNode->iNode); @@ -111672,12 +158631,13 @@ pRtree->pDeleted = pNode; return SQLITE_OK; } -static void fixBoundingBox(Rtree *pRtree, RtreeNode *pNode){ +static int fixBoundingBox(Rtree *pRtree, RtreeNode *pNode){ RtreeNode *pParent = pNode->pParent; + int rc = SQLITE_OK; if( pParent ){ int ii; int nCell = NCELL(pNode); RtreeCell box; /* Bounding box for pNode */ nodeGetCell(pRtree, pNode, 0, &box); @@ -111685,21 +158645,25 @@ RtreeCell cell; nodeGetCell(pRtree, pNode, ii, &cell); cellUnion(pRtree, &box, &cell); } box.iRowid = pNode->iNode; - ii = nodeParentIndex(pRtree, pNode); - nodeOverwriteCell(pRtree, pParent, &box, ii); - fixBoundingBox(pRtree, pParent); + rc = nodeParentIndex(pRtree, pNode, &ii); + if( rc==SQLITE_OK ){ + nodeOverwriteCell(pRtree, pParent, &box, ii); + rc = fixBoundingBox(pRtree, pParent); + } } + return rc; } /* ** Delete the cell at index iCell of node pNode. After removing the ** cell, adjust the r-tree data structure if required. */ static int deleteCell(Rtree *pRtree, RtreeNode *pNode, int iCell, int iHeight){ + RtreeNode *pParent; int rc; if( SQLITE_OK!=(rc = fixLeafParent(pRtree, pNode)) ){ return rc; } @@ -111712,18 +158676,17 @@ /* If the node is not the tree root and now has less than the minimum ** number of cells, remove it from the tree. Otherwise, update the ** cell in the parent node so that it tightly contains the updated ** node. 
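** (removeNode() links the emptied node onto the Rtree.pDeleted list;
** its cells are reinserted later by reinsertNodeContent(), called from
** rtreeDeleteRowid().)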
*/ - if( pNode->iNode!=1 ){ - RtreeNode *pParent = pNode->pParent; - if( (pParent->iNode!=1 || NCELL(pParent)!=1) - && (NCELL(pNode)<RTREE_MINCELLS(pRtree)) - ){ + pParent = pNode->pParent; + assert( pParent || pNode->iNode==1 ); + if( pParent ){ + if( NCELL(pNode)<RTREE_MINCELLS(pRtree) ){ rc = removeNode(pRtree, pNode, iHeight); }else{ - fixBoundingBox(pRtree, pNode); + rc = fixBoundingBox(pRtree, pNode); } } return rc; } @@ -111735,36 +158698,38 @@ int iHeight ){ int *aOrder; int *aSpare; RtreeCell *aCell; - float *aDistance; + RtreeDValue *aDistance; int nCell; - float aCenterCoord[RTREE_MAX_DIMENSIONS]; + RtreeDValue aCenterCoord[RTREE_MAX_DIMENSIONS]; int iDim; int ii; int rc = SQLITE_OK; + int n; - memset(aCenterCoord, 0, sizeof(float)*RTREE_MAX_DIMENSIONS); + memset(aCenterCoord, 0, sizeof(RtreeDValue)*RTREE_MAX_DIMENSIONS); nCell = NCELL(pNode)+1; + n = (nCell+1)&(~1); /* Allocate the buffers used by this operation. The allocation is ** relinquished before this function returns. */ - aCell = (RtreeCell *)sqlite3_malloc(nCell * ( - sizeof(RtreeCell) + /* aCell array */ - sizeof(int) + /* aOrder array */ - sizeof(int) + /* aSpare array */ - sizeof(float) /* aDistance array */ + aCell = (RtreeCell *)sqlite3_malloc(n * ( + sizeof(RtreeCell) + /* aCell array */ + sizeof(int) + /* aOrder array */ + sizeof(int) + /* aSpare array */ + sizeof(RtreeDValue) /* aDistance array */ )); if( !aCell ){ return SQLITE_NOMEM; } - aOrder = (int *)&aCell[nCell]; - aSpare = (int *)&aOrder[nCell]; - aDistance = (float *)&aSpare[nCell]; + aOrder = (int *)&aCell[n]; + aSpare = (int *)&aOrder[n]; + aDistance = (RtreeDValue *)&aSpare[n]; for(ii=0; ii<nCell; ii++){ if( ii==(nCell-1) ){ memcpy(&aCell[ii], pCell, sizeof(RtreeCell)); }else{ @@ -111775,18 +158740,18 @@ aCenterCoord[iDim] += DCOORD(aCell[ii].aCoord[iDim*2]); aCenterCoord[iDim] += DCOORD(aCell[ii].aCoord[iDim*2+1]); } } for(iDim=0; iDim<pRtree->nDim; iDim++){ - aCenterCoord[iDim] = aCenterCoord[iDim]/((float)nCell*2.0); + aCenterCoord[iDim] = (aCenterCoord[iDim]/(nCell*(RtreeDValue)2)); } for(ii=0; ii<nCell; ii++){ - aDistance[ii] = 0.0; + aDistance[ii] = RTREE_ZERO; for(iDim=0; iDim<pRtree->nDim; iDim++){ - float coord = DCOORD(aCell[ii].aCoord[iDim*2+1]) - - DCOORD(aCell[ii].aCoord[iDim*2]); + RtreeDValue coord = (DCOORD(aCell[ii].aCoord[iDim*2+1]) - + DCOORD(aCell[ii].aCoord[iDim*2])); aDistance[ii] += (coord-aCenterCoord[iDim])*(coord-aCenterCoord[iDim]); } } SortByDistance(aOrder, nCell, aDistance, aSpare); @@ -111802,11 +158767,11 @@ rc = parentWrite(pRtree, p->iRowid, pNode->iNode); } } } if( rc==SQLITE_OK ){ - fixBoundingBox(pRtree, pNode); + rc = fixBoundingBox(pRtree, pNode); } for(; rc==SQLITE_OK && ii<nCell; ii++){ /* Find a node to store this cell in. pNode->iNode currently contains ** the height of the sub-tree headed by the cell. 
*/ @@ -111845,26 +158810,24 @@ nodeReference(pNode); pChild->pParent = pNode; } } if( nodeInsertCell(pRtree, pNode, pCell) ){ -#if VARIANT_RSTARTREE_REINSERT if( iHeight<=pRtree->iReinsertHeight || pNode->iNode==1){ rc = SplitNode(pRtree, pNode, pCell, iHeight); }else{ pRtree->iReinsertHeight = iHeight; rc = Reinsert(pRtree, pNode, pCell, iHeight); } -#else - rc = SplitNode(pRtree, pNode, pCell, iHeight); -#endif - }else{ - AdjustTree(pRtree, pNode, pCell); - if( iHeight==0 ){ - rc = rowidWrite(pRtree, pCell->iRowid, pNode->iNode); - }else{ - rc = parentWrite(pRtree, pCell->iRowid, pNode->iNode); + }else{ + rc = AdjustTree(pRtree, pNode, pCell); + if( rc==SQLITE_OK ){ + if( iHeight==0 ){ + rc = rowidWrite(pRtree, pCell->iRowid, pNode->iNode); + }else{ + rc = parentWrite(pRtree, pCell->iRowid, pNode->iNode); + } } } return rc; } @@ -111879,14 +158842,14 @@ nodeGetCell(pRtree, pNode, ii, &cell); /* Find a node to store this cell in. pNode->iNode currently contains ** the height of the sub-tree headed by the cell. */ - rc = ChooseLeaf(pRtree, &cell, pNode->iNode, &pInsert); + rc = ChooseLeaf(pRtree, &cell, (int)pNode->iNode, &pInsert); if( rc==SQLITE_OK ){ int rc2; - rc = rtreeInsertCell(pRtree, pInsert, &cell, pNode->iNode); + rc = rtreeInsertCell(pRtree, pInsert, &cell, (int)pNode->iNode); rc2 = nodeRelease(pRtree, pInsert); if( rc==SQLITE_OK ){ rc = rc2; } } @@ -111905,19 +158868,123 @@ rc = sqlite3_reset(pRtree->pWriteRowid); *piRowid = sqlite3_last_insert_rowid(pRtree->db); return rc; } -#ifndef NDEBUG -static int hashIsEmpty(Rtree *pRtree){ - int ii; - for(ii=0; ii<HASHSIZE; ii++){ - assert( !pRtree->aHash[ii] ); - } - return 1; -} -#endif +/* +** Remove the entry with rowid=iDelete from the r-tree structure. +*/ +static int rtreeDeleteRowid(Rtree *pRtree, sqlite3_int64 iDelete){ + int rc; /* Return code */ + RtreeNode *pLeaf = 0; /* Leaf node containing record iDelete */ + int iCell; /* Index of iDelete cell in pLeaf */ + RtreeNode *pRoot; /* Root node of rtree structure */ + + + /* Obtain a reference to the root node to initialize Rtree.iDepth */ + rc = nodeAcquire(pRtree, 1, 0, &pRoot); + + /* Obtain a reference to the leaf node that contains the entry + ** about to be deleted. + */ + if( rc==SQLITE_OK ){ + rc = findLeafNode(pRtree, iDelete, &pLeaf, 0); + } + + /* Delete the cell in question from the leaf node. */ + if( rc==SQLITE_OK ){ + int rc2; + rc = nodeRowidIndex(pRtree, pLeaf, iDelete, &iCell); + if( rc==SQLITE_OK ){ + rc = deleteCell(pRtree, pLeaf, iCell, 0); + } + rc2 = nodeRelease(pRtree, pLeaf); + if( rc==SQLITE_OK ){ + rc = rc2; + } + } + + /* Delete the corresponding entry in the <rtree>_rowid table. */ + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pRtree->pDeleteRowid, 1, iDelete); + sqlite3_step(pRtree->pDeleteRowid); + rc = sqlite3_reset(pRtree->pDeleteRowid); + } + + /* Check if the root node now has exactly one child. If so, remove + ** it, schedule the contents of the child for reinsertion and + ** reduce the tree height by one. + ** + ** This is equivalent to copying the contents of the child into + ** the root node (the operation that Gutman's paper says to perform + ** in this scenario). 
+ */ + if( rc==SQLITE_OK && pRtree->iDepth>0 && NCELL(pRoot)==1 ){ + int rc2; + RtreeNode *pChild; + i64 iChild = nodeGetRowid(pRtree, pRoot, 0); + rc = nodeAcquire(pRtree, iChild, pRoot, &pChild); + if( rc==SQLITE_OK ){ + rc = removeNode(pRtree, pChild, pRtree->iDepth-1); + } + rc2 = nodeRelease(pRtree, pChild); + if( rc==SQLITE_OK ) rc = rc2; + if( rc==SQLITE_OK ){ + pRtree->iDepth--; + writeInt16(pRoot->zData, pRtree->iDepth); + pRoot->isDirty = 1; + } + } + + /* Re-insert the contents of any underfull nodes removed from the tree. */ + for(pLeaf=pRtree->pDeleted; pLeaf; pLeaf=pRtree->pDeleted){ + if( rc==SQLITE_OK ){ + rc = reinsertNodeContent(pRtree, pLeaf); + } + pRtree->pDeleted = pLeaf->pNext; + sqlite3_free(pLeaf); + } + + /* Release the reference to the root node. */ + if( rc==SQLITE_OK ){ + rc = nodeRelease(pRtree, pRoot); + }else{ + nodeRelease(pRtree, pRoot); + } + + return rc; +} + +/* +** Rounding constants for float->double conversion. +*/ +#define RNDTOWARDS (1.0 - 1.0/8388608.0) /* Round towards zero */ +#define RNDAWAY (1.0 + 1.0/8388608.0) /* Round away from zero */ + +#if !defined(SQLITE_RTREE_INT_ONLY) +/* +** Convert an sqlite3_value into an RtreeValue (presumably a float) +** while taking care to round toward negative or positive, respectively. +*/ +static RtreeValue rtreeValueDown(sqlite3_value *v){ + double d = sqlite3_value_double(v); + float f = (float)d; + if( f>d ){ + f = (float)(d*(d<0 ? RNDAWAY : RNDTOWARDS)); + } + return f; +} +static RtreeValue rtreeValueUp(sqlite3_value *v){ + double d = sqlite3_value_double(v); + float f = (float)d; + if( f<d ){ + f = (float)(d*(d<0 ? RNDTOWARDS : RNDAWAY)); + } + return f; +} +#endif /* !defined(SQLITE_RTREE_INT_ONLY) */ + /* ** The xUpdate method for rtree module virtual tables. */ static int rtreeUpdate( @@ -111926,140 +158993,108 @@ sqlite3_value **azData, sqlite_int64 *pRowid ){ Rtree *pRtree = (Rtree *)pVtab; int rc = SQLITE_OK; + RtreeCell cell; /* New cell to insert if nData>1 */ + int bHaveRowid = 0; /* Set to 1 after new rowid is determined */ rtreeReference(pRtree); - assert(nData>=1); - assert(hashIsEmpty(pRtree)); - - /* If azData[0] is not an SQL NULL value, it is the rowid of a - ** record to delete from the r-tree table. The following block does - ** just that. - */ - if( sqlite3_value_type(azData[0])!=SQLITE_NULL ){ - i64 iDelete; /* The rowid to delete */ - RtreeNode *pLeaf; /* Leaf node containing record iDelete */ - int iCell; /* Index of iDelete cell in pLeaf */ - RtreeNode *pRoot; - - /* Obtain a reference to the root node to initialise Rtree.iDepth */ - rc = nodeAcquire(pRtree, 1, 0, &pRoot); - - /* Obtain a reference to the leaf node that contains the entry - ** about to be deleted. - */ - if( rc==SQLITE_OK ){ - iDelete = sqlite3_value_int64(azData[0]); - rc = findLeafNode(pRtree, iDelete, &pLeaf); - } - - /* Delete the cell in question from the leaf node. */ - if( rc==SQLITE_OK ){ - int rc2; - iCell = nodeRowidIndex(pRtree, pLeaf, iDelete); - rc = deleteCell(pRtree, pLeaf, iCell, 0); - rc2 = nodeRelease(pRtree, pLeaf); - if( rc==SQLITE_OK ){ - rc = rc2; - } - } - - /* Delete the corresponding entry in the <rtree>_rowid table. */ - if( rc==SQLITE_OK ){ - sqlite3_bind_int64(pRtree->pDeleteRowid, 1, iDelete); - sqlite3_step(pRtree->pDeleteRowid); - rc = sqlite3_reset(pRtree->pDeleteRowid); - } - - /* Check if the root node now has exactly one child. If so, remove - ** it, schedule the contents of the child for reinsertion and - ** reduce the tree height by one. 
- ** - ** This is equivalent to copying the contents of the child into - ** the root node (the operation that Gutman's paper says to perform - ** in this scenario). - */ - if( rc==SQLITE_OK && pRtree->iDepth>0 ){ - if( rc==SQLITE_OK && NCELL(pRoot)==1 ){ - RtreeNode *pChild; - i64 iChild = nodeGetRowid(pRtree, pRoot, 0); - rc = nodeAcquire(pRtree, iChild, pRoot, &pChild); - if( rc==SQLITE_OK ){ - rc = removeNode(pRtree, pChild, pRtree->iDepth-1); - } - if( rc==SQLITE_OK ){ - pRtree->iDepth--; - writeInt16(pRoot->zData, pRtree->iDepth); - pRoot->isDirty = 1; - } - } - } - - /* Re-insert the contents of any underfull nodes removed from the tree. */ - for(pLeaf=pRtree->pDeleted; pLeaf; pLeaf=pRtree->pDeleted){ - if( rc==SQLITE_OK ){ - rc = reinsertNodeContent(pRtree, pLeaf); - } - pRtree->pDeleted = pLeaf->pNext; - sqlite3_free(pLeaf); - } - - /* Release the reference to the root node. */ - if( rc==SQLITE_OK ){ - rc = nodeRelease(pRtree, pRoot); - }else{ - nodeRelease(pRtree, pRoot); - } - } - - /* If the azData[] array contains more than one element, elements - ** (azData[2]..azData[argc-1]) contain a new record to insert into - ** the r-tree structure. - */ - if( rc==SQLITE_OK && nData>1 ){ - /* Insert a new record into the r-tree */ - RtreeCell cell; + + cell.iRowid = 0; /* Used only to suppress a compiler warning */ + + /* Constraint handling. A write operation on an r-tree table may return + ** SQLITE_CONSTRAINT for two reasons: + ** + ** 1. A duplicate rowid value, or + ** 2. The supplied data violates the "x2>=x1" constraint. + ** + ** In the first case, if the conflict-handling mode is REPLACE, then + ** the conflicting row can be removed before proceeding. In the second + ** case, SQLITE_CONSTRAINT must be returned regardless of the + ** conflict-handling mode specified by the user. + */ + if( nData>1 ){ int ii; - RtreeNode *pLeaf; - /* Populate the cell.aCoord[] array. The first coordinate is azData[3]. */ - assert( nData==(pRtree->nDim*2 + 3) ); + /* Populate the cell.aCoord[] array. The first coordinate is azData[3]. + ** + ** NB: nData can only be less than nDim*2+3 if the rtree is mis-declared + ** with "column" that are interpreted as table constraints. + ** Example: CREATE VIRTUAL TABLE bad USING rtree(x,y,CHECK(y>5)); + ** This problem was discovered after years of use, so we silently ignore + ** these kinds of misdeclared tables to avoid breaking any legacy. + */ + assert( nData<=(pRtree->nDim*2 + 3) ); + +#ifndef SQLITE_RTREE_INT_ONLY if( pRtree->eCoordType==RTREE_COORD_REAL32 ){ - for(ii=0; ii<(pRtree->nDim*2); ii+=2){ - cell.aCoord[ii].f = (float)sqlite3_value_double(azData[ii+3]); - cell.aCoord[ii+1].f = (float)sqlite3_value_double(azData[ii+4]); + for(ii=0; ii<nData-4; ii+=2){ + cell.aCoord[ii].f = rtreeValueDown(azData[ii+3]); + cell.aCoord[ii+1].f = rtreeValueUp(azData[ii+4]); if( cell.aCoord[ii].f>cell.aCoord[ii+1].f ){ rc = SQLITE_CONSTRAINT; goto constraint; } } - }else{ - for(ii=0; ii<(pRtree->nDim*2); ii+=2){ + }else +#endif + { + for(ii=0; ii<nData-4; ii+=2){ cell.aCoord[ii].i = sqlite3_value_int(azData[ii+3]); cell.aCoord[ii+1].i = sqlite3_value_int(azData[ii+4]); if( cell.aCoord[ii].i>cell.aCoord[ii+1].i ){ rc = SQLITE_CONSTRAINT; goto constraint; } } } - /* Figure out the rowid of the new row. */ - if( sqlite3_value_type(azData[2])==SQLITE_NULL ){ - rc = newRowid(pRtree, &cell.iRowid); - }else{ + /* If a rowid value was supplied, check if it is already present in + ** the table. If so, the constraint has failed. 
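** If the conflict handling mode is REPLACE, the existing row is removed
** by the call to rtreeDeleteRowid() below and the insert proceeds;
** otherwise SQLITE_CONSTRAINT is returned.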
*/ + if( sqlite3_value_type(azData[2])!=SQLITE_NULL ){ cell.iRowid = sqlite3_value_int64(azData[2]); - sqlite3_bind_int64(pRtree->pReadRowid, 1, cell.iRowid); - if( SQLITE_ROW==sqlite3_step(pRtree->pReadRowid) ){ - sqlite3_reset(pRtree->pReadRowid); - rc = SQLITE_CONSTRAINT; - goto constraint; + if( sqlite3_value_type(azData[0])==SQLITE_NULL + || sqlite3_value_int64(azData[0])!=cell.iRowid + ){ + int steprc; + sqlite3_bind_int64(pRtree->pReadRowid, 1, cell.iRowid); + steprc = sqlite3_step(pRtree->pReadRowid); + rc = sqlite3_reset(pRtree->pReadRowid); + if( SQLITE_ROW==steprc ){ + if( sqlite3_vtab_on_conflict(pRtree->db)==SQLITE_REPLACE ){ + rc = rtreeDeleteRowid(pRtree, cell.iRowid); + }else{ + rc = SQLITE_CONSTRAINT; + goto constraint; + } + } } - rc = sqlite3_reset(pRtree->pReadRowid); + bHaveRowid = 1; + } + } + + /* If azData[0] is not an SQL NULL value, it is the rowid of a + ** record to delete from the r-tree table. The following block does + ** just that. + */ + if( sqlite3_value_type(azData[0])!=SQLITE_NULL ){ + rc = rtreeDeleteRowid(pRtree, sqlite3_value_int64(azData[0])); + } + + /* If the azData[] array contains more than one element, elements + ** (azData[2]..azData[argc-1]) contain a new record to insert into + ** the r-tree structure. + */ + if( rc==SQLITE_OK && nData>1 ){ + /* Insert the new record into the r-tree */ + RtreeNode *pLeaf = 0; + + /* Figure out the rowid of the new row. */ + if( bHaveRowid==0 ){ + rc = newRowid(pRtree, &cell.iRowid); } *pRowid = cell.iRowid; if( rc==SQLITE_OK ){ rc = ChooseLeaf(pRtree, &cell, 0, &pLeaf); @@ -112098,13 +159133,50 @@ rc = sqlite3_exec(pRtree->db, zSql, 0, 0, 0); sqlite3_free(zSql); } return rc; } + +/* +** This function populates the pRtree->nRowEst variable with an estimate +** of the number of rows in the virtual table. If possible, this is based +** on sqlite_stat1 data. Otherwise, use RTREE_DEFAULT_ROWEST. 
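+** The value is read from the sqlite_stat1 entry for the <name>_rowid
+** shadow table (see the SELECT in the function body); the leading
+** integer of its "stat" column is used, but never less than
+** RTREE_MIN_ROWEST.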
+*/ +static int rtreeQueryStat1(sqlite3 *db, Rtree *pRtree){ + const char *zFmt = "SELECT stat FROM %Q.sqlite_stat1 WHERE tbl = '%q_rowid'"; + char *zSql; + sqlite3_stmt *p; + int rc; + i64 nRow = 0; + + zSql = sqlite3_mprintf(zFmt, pRtree->zDb, pRtree->zName); + if( zSql==0 ){ + rc = SQLITE_NOMEM; + }else{ + rc = sqlite3_prepare_v2(db, zSql, -1, &p, 0); + if( rc==SQLITE_OK ){ + if( sqlite3_step(p)==SQLITE_ROW ) nRow = sqlite3_column_int64(p, 0); + rc = sqlite3_finalize(p); + }else if( rc!=SQLITE_NOMEM ){ + rc = SQLITE_OK; + } + + if( rc==SQLITE_OK ){ + if( nRow==0 ){ + pRtree->nRowEst = RTREE_DEFAULT_ROWEST; + }else{ + pRtree->nRowEst = MAX(nRow, RTREE_MIN_ROWEST); + } + } + sqlite3_free(zSql); + } + + return rc; +} static sqlite3_module rtreeModule = { - 0, /* iVersion */ + 0, /* iVersion */ rtreeCreate, /* xCreate - create a table */ rtreeConnect, /* xConnect - connect to an existing table */ rtreeBestIndex, /* xBestIndex - Determine search strategy */ rtreeDisconnect, /* xDisconnect - Disconnect from a table */ rtreeDestroy, /* xDestroy - Drop a table */ @@ -112119,11 +159191,14 @@ 0, /* xBegin - begin transaction */ 0, /* xSync - sync transaction */ 0, /* xCommit - commit transaction */ 0, /* xRollback - rollback transaction */ 0, /* xFindFunction - function overloading */ - rtreeRename /* xRename - rename the table */ + rtreeRename, /* xRename - rename the table */ + 0, /* xSavepoint */ + 0, /* xRelease */ + 0 /* xRollbackTo */ }; static int rtreeSqlInit( Rtree *pRtree, sqlite3 *db, @@ -112157,11 +159232,12 @@ if( isCreate ){ char *zCreate = sqlite3_mprintf( "CREATE TABLE \"%w\".\"%w_node\"(nodeno INTEGER PRIMARY KEY, data BLOB);" "CREATE TABLE \"%w\".\"%w_rowid\"(rowid INTEGER PRIMARY KEY, nodeno INTEGER);" -"CREATE TABLE \"%w\".\"%w_parent\"(nodeno INTEGER PRIMARY KEY, parentnode INTEGER);" +"CREATE TABLE \"%w\".\"%w_parent\"(nodeno INTEGER PRIMARY KEY," + " parentnode INTEGER);" "INSERT INTO '%q'.'%q_node' VALUES(1, zeroblob(%d))", zDb, zPrefix, zDb, zPrefix, zDb, zPrefix, zDb, zPrefix, pRtree->iNodeSize ); if( !zCreate ){ return SQLITE_NOMEM; @@ -112181,10 +159257,11 @@ appStmt[5] = &pRtree->pDeleteRowid; appStmt[6] = &pRtree->pReadParent; appStmt[7] = &pRtree->pWriteParent; appStmt[8] = &pRtree->pDeleteParent; + rc = rtreeQueryStat1(db, pRtree); for(i=0; i<N_STATEMENT && rc==SQLITE_OK; i++){ char *zSql = sqlite3_mprintf(azSql[i], zDb, zPrefix); if( zSql ){ rc = sqlite3_prepare_v2(db, zSql, -1, appStmt[i], 0); }else{ @@ -112234,30 +159311,36 @@ ** would fit in a single node, use a smaller node-size. 
*/ static int getNodeSize( sqlite3 *db, /* Database handle */ Rtree *pRtree, /* Rtree handle */ - int isCreate /* True for xCreate, false for xConnect */ + int isCreate, /* True for xCreate, false for xConnect */ + char **pzErr /* OUT: Error message, if any */ ){ int rc; char *zSql; if( isCreate ){ - int iPageSize; + int iPageSize = 0; zSql = sqlite3_mprintf("PRAGMA %Q.page_size", pRtree->zDb); rc = getIntFromStmt(db, zSql, &iPageSize); if( rc==SQLITE_OK ){ pRtree->iNodeSize = iPageSize-64; if( (4+pRtree->nBytesPerCell*RTREE_MAXCELLS)<pRtree->iNodeSize ){ pRtree->iNodeSize = 4+pRtree->nBytesPerCell*RTREE_MAXCELLS; } + }else{ + *pzErr = sqlite3_mprintf("%s", sqlite3_errmsg(db)); } }else{ zSql = sqlite3_mprintf( "SELECT length(data) FROM '%q'.'%q_node' WHERE nodeno = 1", pRtree->zDb, pRtree->zName ); rc = getIntFromStmt(db, zSql, &pRtree->iNodeSize); + if( rc!=SQLITE_OK ){ + *pzErr = sqlite3_mprintf("%s", sqlite3_errmsg(db)); + } } sqlite3_free(zSql); return rc; } @@ -112281,11 +159364,11 @@ ){ int rc = SQLITE_OK; Rtree *pRtree; int nDb; /* Length of string argv[1] */ int nName; /* Length of string argv[2] */ - int eCoordType = (int)pAux; + int eCoordType = (pAux ? RTREE_COORD_INT32 : RTREE_COORD_REAL32); const char *aErrMsg[] = { 0, /* 0 */ "Wrong number of columns for an rtree table", /* 1 */ "Too few columns for an rtree table", /* 2 */ @@ -112296,13 +159379,15 @@ if( aErrMsg[iErr] ){ *pzErr = sqlite3_mprintf("%s", aErrMsg[iErr]); return SQLITE_ERROR; } + sqlite3_vtab_config(db, SQLITE_VTAB_CONSTRAINT_SUPPORT, 1); + /* Allocate the sqlite3_vtab structure */ - nDb = strlen(argv[1]); - nName = strlen(argv[2]); + nDb = (int)strlen(argv[1]); + nName = (int)strlen(argv[2]); pRtree = (Rtree *)sqlite3_malloc(sizeof(Rtree)+nDb+nName+2); if( !pRtree ){ return SQLITE_NOMEM; } memset(pRtree, 0, sizeof(Rtree)+nDb+nName+2); @@ -112315,11 +159400,11 @@ pRtree->eCoordType = eCoordType; memcpy(pRtree->zDb, argv[1], nDb); memcpy(pRtree->zName, argv[2], nName); /* Figure out the node size to use. */ - rc = getNodeSize(db, pRtree, isCreate); + rc = getNodeSize(db, pRtree, isCreate, pzErr); /* Create/Connect to the underlying relational database schema. If ** that is successful, call sqlite3_declare_vtab() to configure ** the r-tree table schema. */ @@ -112350,10 +159435,12 @@ } if( rc==SQLITE_OK ){ *ppVtab = (sqlite3_vtab *)pRtree; }else{ + assert( *ppVtab==0 ); + assert( pRtree->nBusy==1 ); rtreeRelease(pRtree); } return rc; } @@ -112360,14 +159447,14 @@ /* ** Implementation of a scalar function that decodes r-tree nodes to ** human readable strings. This can be used for debugging and analysis. ** -** The scalar function takes two arguments, a blob of data containing -** an r-tree node, and the number of dimensions the r-tree indexes. -** For a two-dimensional r-tree structure called "rt", to deserialize -** all nodes, a statement like: +** The scalar function takes two arguments: (1) the number of dimensions +** to the rtree (between 1 and 5, inclusive) and (2) a blob of data containing +** an r-tree node. For a two-dimensional r-tree structure called "rt", to +** deserialize all nodes, a statement like: ** ** SELECT rtreenode(2, data) FROM rt_node; ** ** The human readable string takes the form of a Tcl list with one ** entry for each cell in the r-tree node. 
Each entry is itself a @@ -112378,10 +159465,11 @@ char *zText = 0; RtreeNode node; Rtree tree; int ii; + UNUSED_PARAMETER(nArg); memset(&node, 0, sizeof(RtreeNode)); memset(&tree, 0, sizeof(Rtree)); tree.nDim = sqlite3_value_int(apArg[0]); tree.nBytesPerCell = 8 + 8 * tree.nDim; node.zData = (u8 *)sqlite3_value_blob(apArg[1]); @@ -112391,15 +159479,21 @@ int nCell = 0; RtreeCell cell; int jj; nodeGetCell(&tree, &node, ii, &cell); - sqlite3_snprintf(512-nCell,&zCell[nCell],"%d", cell.iRowid); - nCell = strlen(zCell); + sqlite3_snprintf(512-nCell,&zCell[nCell],"%lld", cell.iRowid); + nCell = (int)strlen(zCell); for(jj=0; jj<tree.nDim*2; jj++){ - sqlite3_snprintf(512-nCell,&zCell[nCell]," %f",(double)cell.aCoord[jj].f); - nCell = strlen(zCell); +#ifndef SQLITE_RTREE_INT_ONLY + sqlite3_snprintf(512-nCell,&zCell[nCell], " %g", + (double)cell.aCoord[jj].f); +#else + sqlite3_snprintf(512-nCell,&zCell[nCell], " %d", + cell.aCoord[jj].i); +#endif + nCell = (int)strlen(zCell); } if( zText ){ char *zTextNew = sqlite3_mprintf("%s {%s}", zText, zCell); sqlite3_free(zText); @@ -112410,11 +159504,21 @@ } sqlite3_result_text(ctx, zText, -1, sqlite3_free); } +/* This routine implements an SQL function that returns the "depth" parameter +** from the front of a blob that is an r-tree node. For example: +** +** SELECT rtreedepth(data) FROM rt_node WHERE nodeno=1; +** +** The depth value is 0 for all nodes other than the root node, and the root +** node always has nodeno=1, so the example above is the primary use for this +** routine. This routine is intended for testing and analysis only. +*/ static void rtreedepth(sqlite3_context *ctx, int nArg, sqlite3_value **apArg){ + UNUSED_PARAMETER(nArg); if( sqlite3_value_type(apArg[0])!=SQLITE_BLOB || sqlite3_value_bytes(apArg[0])<2 ){ sqlite3_result_error(ctx, "Invalid argument to rtreedepth()", -1); }else{ @@ -112427,34 +159531,160 @@ ** Register the r-tree module with database handle db. This creates the ** virtual table module "rtree" and the debugging/analysis scalar ** function "rtreenode". */ SQLITE_PRIVATE int sqlite3RtreeInit(sqlite3 *db){ - int rc = SQLITE_OK; - - if( rc==SQLITE_OK ){ - int utf8 = SQLITE_UTF8; - rc = sqlite3_create_function(db, "rtreenode", 2, utf8, 0, rtreenode, 0, 0); - } - if( rc==SQLITE_OK ){ - int utf8 = SQLITE_UTF8; + const int utf8 = SQLITE_UTF8; + int rc; + + rc = sqlite3_create_function(db, "rtreenode", 2, utf8, 0, rtreenode, 0, 0); + if( rc==SQLITE_OK ){ rc = sqlite3_create_function(db, "rtreedepth", 1, utf8, 0,rtreedepth, 0, 0); } if( rc==SQLITE_OK ){ +#ifdef SQLITE_RTREE_INT_ONLY + void *c = (void *)RTREE_COORD_INT32; +#else void *c = (void *)RTREE_COORD_REAL32; +#endif rc = sqlite3_create_module_v2(db, "rtree", &rtreeModule, c, 0); } if( rc==SQLITE_OK ){ void *c = (void *)RTREE_COORD_INT32; rc = sqlite3_create_module_v2(db, "rtree_i32", &rtreeModule, c, 0); } return rc; } + +/* +** This routine deletes the RtreeGeomCallback object that was attached +** one of the SQL functions create by sqlite3_rtree_geometry_callback() +** or sqlite3_rtree_query_callback(). In other words, this routine is the +** destructor for an RtreeGeomCallback objecct. This routine is called when +** the corresponding SQL function is deleted. +*/ +static void rtreeFreeCallback(void *p){ + RtreeGeomCallback *pInfo = (RtreeGeomCallback*)p; + if( pInfo->xDestructor ) pInfo->xDestructor(pInfo->pContext); + sqlite3_free(p); +} + +/* +** This routine frees the BLOB that is returned by geomCallback(). 
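+** It is passed to sqlite3_result_blob() as the destructor in
+** geomCallback(), and it also releases the sqlite3_value copies made
+** with sqlite3_value_dup().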
+*/ +static void rtreeMatchArgFree(void *pArg){ + int i; + RtreeMatchArg *p = (RtreeMatchArg*)pArg; + for(i=0; i<p->nParam; i++){ + sqlite3_value_free(p->apSqlParam[i]); + } + sqlite3_free(p); +} + +/* +** Each call to sqlite3_rtree_geometry_callback() or +** sqlite3_rtree_query_callback() creates an ordinary SQLite +** scalar function that is implemented by this routine. +** +** All this function does is construct an RtreeMatchArg object that +** contains the geometry-checking callback routines and a list of +** parameters to this function, then return that RtreeMatchArg object +** as a BLOB. +** +** The R-Tree MATCH operator will read the returned BLOB, deserialize +** the RtreeMatchArg object, and use the RtreeMatchArg object to figure +** out which elements of the R-Tree should be returned by the query. +*/ +static void geomCallback(sqlite3_context *ctx, int nArg, sqlite3_value **aArg){ + RtreeGeomCallback *pGeomCtx = (RtreeGeomCallback *)sqlite3_user_data(ctx); + RtreeMatchArg *pBlob; + int nBlob; + int memErr = 0; + + nBlob = sizeof(RtreeMatchArg) + (nArg-1)*sizeof(RtreeDValue) + + nArg*sizeof(sqlite3_value*); + pBlob = (RtreeMatchArg *)sqlite3_malloc(nBlob); + if( !pBlob ){ + sqlite3_result_error_nomem(ctx); + }else{ + int i; + pBlob->magic = RTREE_GEOMETRY_MAGIC; + pBlob->cb = pGeomCtx[0]; + pBlob->apSqlParam = (sqlite3_value**)&pBlob->aParam[nArg]; + pBlob->nParam = nArg; + for(i=0; i<nArg; i++){ + pBlob->apSqlParam[i] = sqlite3_value_dup(aArg[i]); + if( pBlob->apSqlParam[i]==0 ) memErr = 1; +#ifdef SQLITE_RTREE_INT_ONLY + pBlob->aParam[i] = sqlite3_value_int64(aArg[i]); +#else + pBlob->aParam[i] = sqlite3_value_double(aArg[i]); +#endif + } + if( memErr ){ + sqlite3_result_error_nomem(ctx); + rtreeMatchArgFree(pBlob); + }else{ + sqlite3_result_blob(ctx, pBlob, nBlob, rtreeMatchArgFree); + } + } +} + +/* +** Register a new geometry function for use with the r-tree MATCH operator. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_rtree_geometry_callback( + sqlite3 *db, /* Register SQL function on this connection */ + const char *zGeom, /* Name of the new SQL function */ + int (*xGeom)(sqlite3_rtree_geometry*,int,RtreeDValue*,int*), /* Callback */ + void *pContext /* Extra data associated with the callback */ +){ + RtreeGeomCallback *pGeomCtx; /* Context object for new user-function */ + + /* Allocate and populate the context object. */ + pGeomCtx = (RtreeGeomCallback *)sqlite3_malloc(sizeof(RtreeGeomCallback)); + if( !pGeomCtx ) return SQLITE_NOMEM; + pGeomCtx->xGeom = xGeom; + pGeomCtx->xQueryFunc = 0; + pGeomCtx->xDestructor = 0; + pGeomCtx->pContext = pContext; + return sqlite3_create_function_v2(db, zGeom, -1, SQLITE_ANY, + (void *)pGeomCtx, geomCallback, 0, 0, rtreeFreeCallback + ); +} + +/* +** Register a new 2nd-generation geometry function for use with the +** r-tree MATCH operator. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_rtree_query_callback( + sqlite3 *db, /* Register SQL function on this connection */ + const char *zQueryFunc, /* Name of new SQL function */ + int (*xQueryFunc)(sqlite3_rtree_query_info*), /* Callback */ + void *pContext, /* Extra data passed into the callback */ + void (*xDestructor)(void*) /* Destructor for the extra data */ +){ + RtreeGeomCallback *pGeomCtx; /* Context object for new user-function */ + + /* Allocate and populate the context object. 
*/ + pGeomCtx = (RtreeGeomCallback *)sqlite3_malloc(sizeof(RtreeGeomCallback)); + if( !pGeomCtx ) return SQLITE_NOMEM; + pGeomCtx->xGeom = 0; + pGeomCtx->xQueryFunc = xQueryFunc; + pGeomCtx->xDestructor = xDestructor; + pGeomCtx->pContext = pContext; + return sqlite3_create_function_v2(db, zQueryFunc, -1, SQLITE_ANY, + (void *)pGeomCtx, geomCallback, 0, 0, rtreeFreeCallback + ); +} #if !SQLITE_CORE -SQLITE_API int sqlite3_extension_init( +#ifdef _WIN32 +__declspec(dllexport) +#endif +SQLITE_API int SQLITE_STDCALL sqlite3_rtree_init( sqlite3 *db, char **pzErrMsg, const sqlite3_api_routines *pApi ){ SQLITE_EXTENSION_INIT2(pApi) @@ -112488,11 +159718,11 @@ ** operator) using the ICU uregex_XX() APIs. ** ** * Implementations of the SQL scalar upper() and lower() functions ** for case mapping. ** -** * Integration of ICU and SQLite collation seqences. +** * Integration of ICU and SQLite collation sequences. ** ** * An implementation of the LIKE operator that uses ICU to ** provide case-independent matching. */ @@ -112502,14 +159732,17 @@ #include <unicode/utypes.h> #include <unicode/uregex.h> #include <unicode/ustring.h> #include <unicode/ucol.h> +/* #include <assert.h> */ #ifndef SQLITE_CORE +/* #include "sqlite3ext.h" */ SQLITE_EXTENSION_INIT1 #else +/* #include "sqlite3.h" */ #endif /* ** Maximum length (in bytes) of the pattern in a LIKE or GLOB ** operator. @@ -112546,11 +159779,10 @@ while( zPattern[iPattern]!=0 ){ /* Read (and consume) the next character from the input pattern. */ UChar32 uPattern; U8_NEXT_UNSAFE(zPattern, iPattern, uPattern); - assert(uPattern!=0); /* There are now 4 possibilities: ** ** 1. uPattern is an unescaped match-all character "%", ** 2. uPattern is an unescaped match-one character "_", @@ -112709,10 +159941,12 @@ static void icuRegexpFunc(sqlite3_context *p, int nArg, sqlite3_value **apArg){ UErrorCode status = U_ZERO_ERROR; URegularExpression *pExpr; UBool res; const UChar *zString = sqlite3_value_text16(apArg[1]); + + (void)nArg; /* Unused parameter */ /* If the left hand side of the regexp operator is NULL, ** then the result is also NULL. */ if( !zString ){ @@ -112883,10 +160117,11 @@ const char *zName; /* SQL Collation sequence name (eg. "japanese") */ UCollator *pUCollator; /* ICU library collation object */ int rc; /* Return code from sqlite3_create_collation_x() */ assert(nArg==2); + (void)nArg; /* Unused parameter */ zLocale = (const char *)sqlite3_value_text(apArg[0]); zName = (const char *)sqlite3_value_text(apArg[1]); if( !zLocale || !zName ){ return; @@ -112938,11 +160173,11 @@ }; int rc = SQLITE_OK; int i; - for(i=0; rc==SQLITE_OK && i<(sizeof(scalars)/sizeof(struct IcuScalar)); i++){ + for(i=0; rc==SQLITE_OK && i<(int)(sizeof(scalars)/sizeof(scalars[0])); i++){ struct IcuScalar *p = &scalars[i]; rc = sqlite3_create_function( db, p->zName, p->nArg, p->enc, p->pContext, p->xFunc, 0, 0 ); } @@ -112949,11 +160184,14 @@ return rc; } #if !SQLITE_CORE -SQLITE_API int sqlite3_extension_init( +#ifdef _WIN32 +__declspec(dllexport) +#endif +SQLITE_API int SQLITE_STDCALL sqlite3_icu_init( sqlite3 *db, char **pzErrMsg, const sqlite3_api_routines *pApi ){ SQLITE_EXTENSION_INIT2(pApi) @@ -112975,19 +160213,22 @@ ** May you find forgiveness for yourself and forgive others. ** May you share freely, never taking more than you give. ** ************************************************************************* ** This file implements a tokenizer for fts3 based on the ICU library. 
-** -** $Id: fts3_icu.c,v 1.3 2008/09/01 18:34:20 danielk1977 Exp $ */ - +/* #include "fts3Int.h" */ #if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) #ifdef SQLITE_ENABLE_ICU +/* #include <assert.h> */ +/* #include <string.h> */ +/* #include "fts3_tokenizer.h" */ #include <unicode/ubrk.h> +/* #include <unicode/ucol.h> */ +/* #include <unicode/ustring.h> */ #include <unicode/utf16.h> typedef struct IcuTokenizer IcuTokenizer; typedef struct IcuCursor IcuCursor; @@ -113072,25 +160313,28 @@ int iInput = 0; int iOut = 0; *ppCursor = 0; - if( nInput<0 ){ + if( zInput==0 ){ + nInput = 0; + zInput = ""; + }else if( nInput<0 ){ nInput = strlen(zInput); } nChar = nInput+1; pCsr = (IcuCursor *)sqlite3_malloc( sizeof(IcuCursor) + /* IcuCursor */ - nChar * sizeof(UChar) + /* IcuCursor.aChar[] */ + ((nChar+3)&~3) * sizeof(UChar) + /* IcuCursor.aChar[] */ (nChar+1) * sizeof(int) /* IcuCursor.aOffset[] */ ); if( !pCsr ){ return SQLITE_NOMEM; } memset(pCsr, 0, sizeof(IcuCursor)); pCsr->aChar = (UChar *)&pCsr[1]; - pCsr->aOffset = (int *)&pCsr->aChar[nChar]; + pCsr->aOffset = (int *)&pCsr->aChar[(nChar+3)&~3]; pCsr->aOffset[iOut] = iInput; U8_NEXT(zInput, iInput, nInput, c); while( c>0 ){ int isError = 0; @@ -113158,11 +160402,11 @@ return SQLITE_DONE; } while( iStart<iEnd ){ int iWhite = iStart; - U8_NEXT(pCsr->aChar, iWhite, pCsr->nChar, c); + U16_NEXT(pCsr->aChar, iWhite, pCsr->nChar, c); if( u_isspace(c) ){ iStart = iWhite; }else{ break; } @@ -113199,16 +160443,17 @@ /* ** The set of routines that implement the simple tokenizer */ static const sqlite3_tokenizer_module icuTokenizerModule = { - 0, /* iVersion */ - icuCreate, /* xCreate */ - icuDestroy, /* xCreate */ - icuOpen, /* xOpen */ - icuClose, /* xClose */ - icuNext, /* xNext */ + 0, /* iVersion */ + icuCreate, /* xCreate */ + icuDestroy, /* xCreate */ + icuOpen, /* xOpen */ + icuClose, /* xClose */ + icuNext, /* xNext */ + 0, /* xLanguageid */ }; /* ** Set *ppModule to point at the implementation of the ICU tokenizer. */ @@ -113220,5 +160465,27721 @@ #endif /* defined(SQLITE_ENABLE_ICU) */ #endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS3) */ /************** End of fts3_icu.c ********************************************/ +/************** Begin file sqlite3rbu.c **************************************/ +/* +** 2014 August 30 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** +** +** OVERVIEW +** +** The RBU extension requires that the RBU update be packaged as an +** SQLite database. The tables it expects to find are described in +** sqlite3rbu.h. Essentially, for each table xyz in the target database +** that the user wishes to write to, a corresponding data_xyz table is +** created in the RBU database and populated with one row for each row to +** update, insert or delete from the target table. +** +** The update proceeds in three stages: +** +** 1) The database is updated. The modified database pages are written +** to a *-oal file. A *-oal file is just like a *-wal file, except +** that it is named "<database>-oal" instead of "<database>-wal". +** Because regular SQLite clients do not look for file named +** "<database>-oal", they go on using the original database in +** rollback mode while the *-oal file is being generated. 
+** +** During this stage RBU does not update the database by writing +** directly to the target tables. Instead it creates "imposter" +** tables using the SQLITE_TESTCTRL_IMPOSTER interface that it uses +** to update each b-tree individually. All updates required by each +** b-tree are completed before moving on to the next, and all +** updates are done in sorted key order. +** +** 2) The "<database>-oal" file is moved to the equivalent "<database>-wal" +** location using a call to rename(2). Before doing this the RBU +** module takes an EXCLUSIVE lock on the database file, ensuring +** that there are no other active readers. +** +** Once the EXCLUSIVE lock is released, any other database readers +** detect the new *-wal file and read the database in wal mode. At +** this point they see the new version of the database - including +** the updates made as part of the RBU update. +** +** 3) The new *-wal file is checkpointed. This proceeds in the same way +** as a regular database checkpoint, except that a single frame is +** checkpointed each time sqlite3rbu_step() is called. If the RBU +** handle is closed before the entire *-wal file is checkpointed, +** the checkpoint progress is saved in the RBU database and the +** checkpoint can be resumed by another RBU client at some point in +** the future. +** +** POTENTIAL PROBLEMS +** +** The rename() call might not be portable. And RBU is not currently +** syncing the directory after renaming the file. +** +** When state is saved, any commit to the *-oal file and the commit to +** the RBU update database are not atomic. So if the power fails at the +** wrong moment they might get out of sync. As the main database will be +** committed before the RBU update database this will likely either just +** pass unnoticed, or result in SQLITE_CONSTRAINT errors (due to UNIQUE +** constraint violations). +** +** If some client does modify the target database mid RBU update, or some +** other error occurs, the RBU extension will keep throwing errors. It's +** not really clear how to get out of this state. The system could just +** by delete the RBU update database and *-oal file and have the device +** download the update again and start over. +** +** At present, for an UPDATE, both the new.* and old.* records are +** collected in the rbu_xyz table. And for both UPDATEs and DELETEs all +** fields are collected. This means we're probably writing a lot more +** data to disk when saving the state of an ongoing update to the RBU +** update database than is strictly necessary. +** +*/ + +/* #include <assert.h> */ +/* #include <string.h> */ +/* #include <stdio.h> */ + +/* #include "sqlite3.h" */ + +#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_RBU) +/************** Include sqlite3rbu.h in the middle of sqlite3rbu.c ***********/ +/************** Begin file sqlite3rbu.h **************************************/ +/* +** 2014 August 30 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** +** This file contains the public interface for the RBU extension. +*/ + +/* +** SUMMARY +** +** Writing a transaction containing a large number of operations on +** b-tree indexes that are collectively larger than the available cache +** memory can be very inefficient. 
+** +** The problem is that in order to update a b-tree, the leaf page (at least) +** containing the entry being inserted or deleted must be modified. If the +** working set of leaves is larger than the available cache memory, then a +** single leaf that is modified more than once as part of the transaction +** may be loaded from or written to the persistent media multiple times. +** Additionally, because the index updates are likely to be applied in +** random order, access to pages within the database is also likely to be in +** random order, which is itself quite inefficient. +** +** One way to improve the situation is to sort the operations on each index +** by index key before applying them to the b-tree. This leads to an IO +** pattern that resembles a single linear scan through the index b-tree, +** and all but guarantees each modified leaf page is loaded and stored +** exactly once. SQLite uses this trick to improve the performance of +** CREATE INDEX commands. This extension allows it to be used to improve +** the performance of large transactions on existing databases. +** +** Additionally, this extension allows the work involved in writing the +** large transaction to be broken down into sub-transactions performed +** sequentially by separate processes. This is useful if the system cannot +** guarantee that a single update process will run for long enough to apply +** the entire update, for example because the update is being applied on a +** mobile device that is frequently rebooted. Even after the writer process +** has committed one or more sub-transactions, other database clients continue +** to read from the original database snapshot. In other words, partially +** applied transactions are not visible to other clients. +** +** "RBU" stands for "Resumable Bulk Update". As in a large database update +** transmitted via a wireless network to a mobile device. A transaction +** applied using this extension is hence refered to as an "RBU update". +** +** +** LIMITATIONS +** +** An "RBU update" transaction is subject to the following limitations: +** +** * The transaction must consist of INSERT, UPDATE and DELETE operations +** only. +** +** * INSERT statements may not use any default values. +** +** * UPDATE and DELETE statements must identify their target rows by +** non-NULL PRIMARY KEY values. Rows with NULL values stored in PRIMARY +** KEY fields may not be updated or deleted. If the table being written +** has no PRIMARY KEY, affected rows must be identified by rowid. +** +** * UPDATE statements may not modify PRIMARY KEY columns. +** +** * No triggers will be fired. +** +** * No foreign key violations are detected or reported. +** +** * CHECK constraints are not enforced. +** +** * No constraint handling mode except for "OR ROLLBACK" is supported. +** +** +** PREPARATION +** +** An "RBU update" is stored as a separate SQLite database. A database +** containing an RBU update is an "RBU database". For each table in the +** target database to be updated, the RBU database should contain a table +** named "data_<target name>" containing the same set of columns as the +** target table, and one more - "rbu_control". The data_% table should +** have no PRIMARY KEY or UNIQUE constraints, but each column should have +** the same type as the corresponding column in the target database. +** The "rbu_control" column should have no type at all. 
For example, if +** the target database contains: +** +** CREATE TABLE t1(a INTEGER PRIMARY KEY, b TEXT, c UNIQUE); +** +** Then the RBU database should contain: +** +** CREATE TABLE data_t1(a INTEGER, b TEXT, c, rbu_control); +** +** The order of the columns in the data_% table does not matter. +** +** Instead of a regular table, the RBU database may also contain virtual +** tables or view named using the data_<target> naming scheme. +** +** Instead of the plain data_<target> naming scheme, RBU database tables +** may also be named data<integer>_<target>, where <integer> is any sequence +** of zero or more numeric characters (0-9). This can be significant because +** tables within the RBU database are always processed in order sorted by +** name. By judicious selection of the the <integer> portion of the names +** of the RBU tables the user can therefore control the order in which they +** are processed. This can be useful, for example, to ensure that "external +** content" FTS4 tables are updated before their underlying content tables. +** +** If the target database table is a virtual table or a table that has no +** PRIMARY KEY declaration, the data_% table must also contain a column +** named "rbu_rowid". This column is mapped to the tables implicit primary +** key column - "rowid". Virtual tables for which the "rowid" column does +** not function like a primary key value cannot be updated using RBU. For +** example, if the target db contains either of the following: +** +** CREATE VIRTUAL TABLE x1 USING fts3(a, b); +** CREATE TABLE x1(a, b) +** +** then the RBU database should contain: +** +** CREATE TABLE data_x1(a, b, rbu_rowid, rbu_control); +** +** All non-hidden columns (i.e. all columns matched by "SELECT *") of the +** target table must be present in the input table. For virtual tables, +** hidden columns are optional - they are updated by RBU if present in +** the input table, or not otherwise. For example, to write to an fts4 +** table with a hidden languageid column such as: +** +** CREATE VIRTUAL TABLE ft1 USING fts4(a, b, languageid='langid'); +** +** Either of the following input table schemas may be used: +** +** CREATE TABLE data_ft1(a, b, langid, rbu_rowid, rbu_control); +** CREATE TABLE data_ft1(a, b, rbu_rowid, rbu_control); +** +** For each row to INSERT into the target database as part of the RBU +** update, the corresponding data_% table should contain a single record +** with the "rbu_control" column set to contain integer value 0. The +** other columns should be set to the values that make up the new record +** to insert. +** +** If the target database table has an INTEGER PRIMARY KEY, it is not +** possible to insert a NULL value into the IPK column. Attempting to +** do so results in an SQLITE_MISMATCH error. +** +** For each row to DELETE from the target database as part of the RBU +** update, the corresponding data_% table should contain a single record +** with the "rbu_control" column set to contain integer value 1. The +** real primary key values of the row to delete should be stored in the +** corresponding columns of the data_% table. The values stored in the +** other columns are not used. +** +** For each row to UPDATE from the target database as part of the RBU +** update, the corresponding data_% table should contain a single record +** with the "rbu_control" column set to contain a value of type text. 
+** The real primary key values identifying the row to update should be +** stored in the corresponding columns of the data_% table row, as should +** the new values of all columns being update. The text value in the +** "rbu_control" column must contain the same number of characters as +** there are columns in the target database table, and must consist entirely +** of 'x' and '.' characters (or in some special cases 'd' - see below). For +** each column that is being updated, the corresponding character is set to +** 'x'. For those that remain as they are, the corresponding character of the +** rbu_control value should be set to '.'. For example, given the tables +** above, the update statement: +** +** UPDATE t1 SET c = 'usa' WHERE a = 4; +** +** is represented by the data_t1 row created by: +** +** INSERT INTO data_t1(a, b, c, rbu_control) VALUES(4, NULL, 'usa', '..x'); +** +** Instead of an 'x' character, characters of the rbu_control value specified +** for UPDATEs may also be set to 'd'. In this case, instead of updating the +** target table with the value stored in the corresponding data_% column, the +** user-defined SQL function "rbu_delta()" is invoked and the result stored in +** the target table column. rbu_delta() is invoked with two arguments - the +** original value currently stored in the target table column and the +** value specified in the data_xxx table. +** +** For example, this row: +** +** INSERT INTO data_t1(a, b, c, rbu_control) VALUES(4, NULL, 'usa', '..d'); +** +** is similar to an UPDATE statement such as: +** +** UPDATE t1 SET c = rbu_delta(c, 'usa') WHERE a = 4; +** +** Finally, if an 'f' character appears in place of a 'd' or 's' in an +** ota_control string, the contents of the data_xxx table column is assumed +** to be a "fossil delta" - a patch to be applied to a blob value in the +** format used by the fossil source-code management system. In this case +** the existing value within the target database table must be of type BLOB. +** It is replaced by the result of applying the specified fossil delta to +** itself. +** +** If the target database table is a virtual table or a table with no PRIMARY +** KEY, the rbu_control value should not include a character corresponding +** to the rbu_rowid value. For example, this: +** +** INSERT INTO data_ft1(a, b, rbu_rowid, rbu_control) +** VALUES(NULL, 'usa', 12, '.x'); +** +** causes a result similar to: +** +** UPDATE ft1 SET b = 'usa' WHERE rowid = 12; +** +** The data_xxx tables themselves should have no PRIMARY KEY declarations. +** However, RBU is more efficient if reading the rows in from each data_xxx +** table in "rowid" order is roughly the same as reading them sorted by +** the PRIMARY KEY of the corresponding target database table. In other +** words, rows should be sorted using the destination table PRIMARY KEY +** fields before they are inserted into the data_xxx tables. +** +** USAGE +** +** The API declared below allows an application to apply an RBU update +** stored on disk to an existing target database. Essentially, the +** application: +** +** 1) Opens an RBU handle using the sqlite3rbu_open() function. +** +** 2) Registers any required virtual table modules with the database +** handle returned by sqlite3rbu_db(). Also, if required, register +** the rbu_delta() implementation. +** +** 3) Calls the sqlite3rbu_step() function one or more times on +** the new handle. 
Each call to sqlite3rbu_step() performs a single +** b-tree operation, so thousands of calls may be required to apply +** a complete update. +** +** 4) Calls sqlite3rbu_close() to close the RBU update handle. If +** sqlite3rbu_step() has been called enough times to completely +** apply the update to the target database, then the RBU database +** is marked as fully applied. Otherwise, the state of the RBU +** update application is saved in the RBU database for later +** resumption. +** +** See comments below for more detail on APIs. +** +** If an update is only partially applied to the target database by the +** time sqlite3rbu_close() is called, various state information is saved +** within the RBU database. This allows subsequent processes to automatically +** resume the RBU update from where it left off. +** +** To remove all RBU extension state information, returning an RBU database +** to its original contents, it is sufficient to drop all tables that begin +** with the prefix "rbu_" +** +** DATABASE LOCKING +** +** An RBU update may not be applied to a database in WAL mode. Attempting +** to do so is an error (SQLITE_ERROR). +** +** While an RBU handle is open, a SHARED lock may be held on the target +** database file. This means it is possible for other clients to read the +** database, but not to write it. +** +** If an RBU update is started and then suspended before it is completed, +** then an external client writes to the database, then attempting to resume +** the suspended RBU update is also an error (SQLITE_BUSY). +*/ + +#ifndef _SQLITE3RBU_H +#define _SQLITE3RBU_H + +/* #include "sqlite3.h" ** Required for error code definitions ** */ + +#if 0 +extern "C" { +#endif + +typedef struct sqlite3rbu sqlite3rbu; + +/* +** Open an RBU handle. +** +** Argument zTarget is the path to the target database. Argument zRbu is +** the path to the RBU database. Each call to this function must be matched +** by a call to sqlite3rbu_close(). When opening the databases, RBU passes +** the SQLITE_CONFIG_URI flag to sqlite3_open_v2(). So if either zTarget +** or zRbu begin with "file:", it will be interpreted as an SQLite +** database URI, not a regular file name. +** +** If the zState argument is passed a NULL value, the RBU extension stores +** the current state of the update (how many rows have been updated, which +** indexes are yet to be updated etc.) within the RBU database itself. This +** can be convenient, as it means that the RBU application does not need to +** organize removing a separate state file after the update is concluded. +** Or, if zState is non-NULL, it must be a path to a database file in which +** the RBU extension can store the state of the update. +** +** When resuming an RBU update, the zState argument must be passed the same +** value as when the RBU update was started. +** +** Once the RBU update is finished, the RBU extension does not +** automatically remove any zState database file, even if it created it. +** +** By default, RBU uses the default VFS to access the files on disk. To +** use a VFS other than the default, an SQLite "file:" URI containing a +** "vfs=..." option may be passed as the zTarget option. +** +** IMPORTANT NOTE FOR ZIPVFS USERS: The RBU extension works with all of +** SQLite's built-in VFSs, including the multiplexor VFS. However it does +** not work out of the box with zipvfs. Refer to the comment describing +** the zipvfs_create_vfs() API below for details on using RBU with zipvfs. 
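+**
+** A minimal usage sketch (illustrative file names; error handling and
+** resumption are omitted):
+**
+**   char *zErrmsg = 0;
+**   sqlite3rbu *pRbu = sqlite3rbu_open("app.db", "app.rbu", 0);
+**   if( pRbu ){
+**     while( sqlite3rbu_step(pRbu)==SQLITE_OK );
+**     rc = sqlite3rbu_close(pRbu, &zErrmsg);
+**   }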
+*/ +SQLITE_API sqlite3rbu *SQLITE_STDCALL sqlite3rbu_open( + const char *zTarget, + const char *zRbu, + const char *zState +); + +/* +** Internally, each RBU connection uses a separate SQLite database +** connection to access the target and rbu update databases. This +** API allows the application direct access to these database handles. +** +** The first argument passed to this function must be a valid, open, RBU +** handle. The second argument should be passed zero to access the target +** database handle, or non-zero to access the rbu update database handle. +** Accessing the underlying database handles may be useful in the +** following scenarios: +** +** * If any target tables are virtual tables, it may be necessary to +** call sqlite3_create_module() on the target database handle to +** register the required virtual table implementations. +** +** * If the data_xxx tables in the RBU source database are virtual +** tables, the application may need to call sqlite3_create_module() on +** the rbu update db handle to any required virtual table +** implementations. +** +** * If the application uses the "rbu_delta()" feature described above, +** it must use sqlite3_create_function() or similar to register the +** rbu_delta() implementation with the target database handle. +** +** If an error has occurred, either while opening or stepping the RBU object, +** this function may return NULL. The error code and message may be collected +** when sqlite3rbu_close() is called. +** +** Database handles returned by this function remain valid until the next +** call to any sqlite3rbu_xxx() function other than sqlite3rbu_db(). +*/ +SQLITE_API sqlite3 *SQLITE_STDCALL sqlite3rbu_db(sqlite3rbu*, int bRbu); + +/* +** Do some work towards applying the RBU update to the target db. +** +** Return SQLITE_DONE if the update has been completely applied, or +** SQLITE_OK if no error occurs but there remains work to do to apply +** the RBU update. If an error does occur, some other error code is +** returned. +** +** Once a call to sqlite3rbu_step() has returned a value other than +** SQLITE_OK, all subsequent calls on the same RBU handle are no-ops +** that immediately return the same value. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3rbu_step(sqlite3rbu *pRbu); + +/* +** Force RBU to save its state to disk. +** +** If a power failure or application crash occurs during an update, following +** system recovery RBU may resume the update from the point at which the state +** was last saved. In other words, from the most recent successful call to +** sqlite3rbu_close() or this function. +** +** SQLITE_OK is returned if successful, or an SQLite error code otherwise. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3rbu_savestate(sqlite3rbu *pRbu); + +/* +** Close an RBU handle. +** +** If the RBU update has been completely applied, mark the RBU database +** as fully applied. Otherwise, assuming no error has occurred, save the +** current state of the RBU update appliation to the RBU database. +** +** If an error has already occurred as part of an sqlite3rbu_step() +** or sqlite3rbu_open() call, or if one occurs within this function, an +** SQLite error code is returned. Additionally, *pzErrmsg may be set to +** point to a buffer containing a utf-8 formatted English language error +** message. It is the responsibility of the caller to eventually free any +** such buffer using sqlite3_free(). 
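+**
+** For example (a sketch - the fprintf() call is purely illustrative):
+**
+**   char *zErrmsg = 0;
+**   rc = sqlite3rbu_close(pRbu, &zErrmsg);
+**   if( zErrmsg ){
+**     fprintf(stderr, "rbu error: %s\n", zErrmsg);
+**     sqlite3_free(zErrmsg);
+**   }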
+** +** Otherwise, if no error occurs, this function returns SQLITE_OK if the +** update has been partially applied, or SQLITE_DONE if it has been +** completely applied. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3rbu_close(sqlite3rbu *pRbu, char **pzErrmsg); + +/* +** Return the total number of key-value operations (inserts, deletes or +** updates) that have been performed on the target database since the +** current RBU update was started. +*/ +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3rbu_progress(sqlite3rbu *pRbu); + +/* +** Create an RBU VFS named zName that accesses the underlying file-system +** via existing VFS zParent. Or, if the zParent parameter is passed NULL, +** then the new RBU VFS uses the default system VFS to access the file-system. +** The new object is registered as a non-default VFS with SQLite before +** returning. +** +** Part of the RBU implementation uses a custom VFS object. Usually, this +** object is created and deleted automatically by RBU. +** +** The exception is for applications that also use zipvfs. In this case, +** the custom VFS must be explicitly created by the user before the RBU +** handle is opened. The RBU VFS should be installed so that the zipvfs +** VFS uses the RBU VFS, which in turn uses any other VFS layers in use +** (for example multiplexor) to access the file-system. For example, +** to assemble an RBU enabled VFS stack that uses both zipvfs and +** multiplexor (error checking omitted): +** +** // Create a VFS named "multiplex" (not the default). +** sqlite3_multiplex_initialize(0, 0); +** +** // Create an rbu VFS named "rbu" that uses multiplexor. If the +** // second argument were replaced with NULL, the "rbu" VFS would +** // access the file-system via the system default VFS, bypassing the +** // multiplexor. +** sqlite3rbu_create_vfs("rbu", "multiplex"); +** +** // Create a zipvfs VFS named "zipvfs" that uses rbu. +** zipvfs_create_vfs_v3("zipvfs", "rbu", 0, xCompressorAlgorithmDetector); +** +** // Make zipvfs the default VFS. +** sqlite3_vfs_register(sqlite3_vfs_find("zipvfs"), 1); +** +** Because the default VFS created above includes a RBU functionality, it +** may be used by RBU clients. Attempting to use RBU with a zipvfs VFS stack +** that does not include the RBU layer results in an error. +** +** The overhead of adding the "rbu" VFS to the system is negligible for +** non-RBU users. There is no harm in an application accessing the +** file-system via "rbu" all the time, even if it only uses RBU functionality +** occasionally. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3rbu_create_vfs(const char *zName, const char *zParent); + +/* +** Deregister and destroy an RBU vfs created by an earlier call to +** sqlite3rbu_create_vfs(). +** +** VFS objects are not reference counted. If a VFS object is destroyed +** before all database handles that use it have been closed, the results +** are undefined. +*/ +SQLITE_API void SQLITE_STDCALL sqlite3rbu_destroy_vfs(const char *zName); + +#if 0 +} /* end of the 'extern "C"' block */ +#endif + +#endif /* _SQLITE3RBU_H */ + +/************** End of sqlite3rbu.h ******************************************/ +/************** Continuing where we left off in sqlite3rbu.c *****************/ + +#if defined(_WIN32_WCE) +/* #include "windows.h" */ +#endif + +/* Maximum number of prepared UPDATE statements held by this module */ +#define SQLITE_RBU_UPDATE_CACHESIZE 16 + +/* +** Swap two objects of type TYPE. 
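+** For example, SWAP(int, x, y) exchanges the values of the two int
+** variables x and y.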
+*/ +#if !defined(SQLITE_AMALGAMATION) +# define SWAP(TYPE,A,B) {TYPE t=A; A=B; B=t;} +#endif + +/* +** The rbu_state table is used to save the state of a partially applied +** update so that it can be resumed later. The table consists of integer +** keys mapped to values as follows: +** +** RBU_STATE_STAGE: +** May be set to integer values 1, 2, 4 or 5. As follows: +** 1: the *-rbu file is currently under construction. +** 2: the *-rbu file has been constructed, but not yet moved +** to the *-wal path. +** 4: the checkpoint is underway. +** 5: the rbu update has been checkpointed. +** +** RBU_STATE_TBL: +** Only valid if STAGE==1. The target database name of the table +** currently being written. +** +** RBU_STATE_IDX: +** Only valid if STAGE==1. The target database name of the index +** currently being written, or NULL if the main table is currently being +** updated. +** +** RBU_STATE_ROW: +** Only valid if STAGE==1. Number of rows already processed for the current +** table/index. +** +** RBU_STATE_PROGRESS: +** Trbul number of sqlite3rbu_step() calls made so far as part of this +** rbu update. +** +** RBU_STATE_CKPT: +** Valid if STAGE==4. The 64-bit checksum associated with the wal-index +** header created by recovering the *-wal file. This is used to detect +** cases when another client appends frames to the *-wal file in the +** middle of an incremental checkpoint (an incremental checkpoint cannot +** be continued if this happens). +** +** RBU_STATE_COOKIE: +** Valid if STAGE==1. The current change-counter cookie value in the +** target db file. +** +** RBU_STATE_OALSZ: +** Valid if STAGE==1. The size in bytes of the *-oal file. +*/ +#define RBU_STATE_STAGE 1 +#define RBU_STATE_TBL 2 +#define RBU_STATE_IDX 3 +#define RBU_STATE_ROW 4 +#define RBU_STATE_PROGRESS 5 +#define RBU_STATE_CKPT 6 +#define RBU_STATE_COOKIE 7 +#define RBU_STATE_OALSZ 8 + +#define RBU_STAGE_OAL 1 +#define RBU_STAGE_MOVE 2 +#define RBU_STAGE_CAPTURE 3 +#define RBU_STAGE_CKPT 4 +#define RBU_STAGE_DONE 5 + + +#define RBU_CREATE_STATE \ + "CREATE TABLE IF NOT EXISTS %s.rbu_state(k INTEGER PRIMARY KEY, v)" + +typedef struct RbuFrame RbuFrame; +typedef struct RbuObjIter RbuObjIter; +typedef struct RbuState RbuState; +typedef struct rbu_vfs rbu_vfs; +typedef struct rbu_file rbu_file; +typedef struct RbuUpdateStmt RbuUpdateStmt; + +#if !defined(SQLITE_AMALGAMATION) +typedef unsigned int u32; +typedef unsigned char u8; +typedef sqlite3_int64 i64; +#endif + +/* +** These values must match the values defined in wal.c for the equivalent +** locks. These are not magic numbers as they are part of the SQLite file +** format. +*/ +#define WAL_LOCK_WRITE 0 +#define WAL_LOCK_CKPT 1 +#define WAL_LOCK_READ0 3 + +/* +** A structure to store values read from the rbu_state table in memory. +*/ +struct RbuState { + int eStage; + char *zTbl; + char *zIdx; + i64 iWalCksum; + int nRow; + i64 nProgress; + u32 iCookie; + i64 iOalSz; +}; + +struct RbuUpdateStmt { + char *zMask; /* Copy of update mask used with pUpdate */ + sqlite3_stmt *pUpdate; /* Last update statement (or NULL) */ + RbuUpdateStmt *pNext; +}; + +/* +** An iterator of this type is used to iterate through all objects in +** the target database that require updating. For each such table, the +** iterator visits, in order: +** +** * the table itself, +** * each index of the table (zero or more points to visit), and +** * a special "cleanup table" state. +** +** abIndexed: +** If the table has no indexes on it, abIndexed is set to NULL. 
Otherwise, +** it points to an array of flags nTblCol elements in size. The flag is +** set for each column that is either a part of the PK or a part of an +** index. Or clear otherwise. +** +*/ +struct RbuObjIter { + sqlite3_stmt *pTblIter; /* Iterate through tables */ + sqlite3_stmt *pIdxIter; /* Index iterator */ + int nTblCol; /* Size of azTblCol[] array */ + char **azTblCol; /* Array of unquoted target column names */ + char **azTblType; /* Array of target column types */ + int *aiSrcOrder; /* src table col -> target table col */ + u8 *abTblPk; /* Array of flags, set on target PK columns */ + u8 *abNotNull; /* Array of flags, set on NOT NULL columns */ + u8 *abIndexed; /* Array of flags, set on indexed & PK cols */ + int eType; /* Table type - an RBU_PK_XXX value */ + + /* Output variables. zTbl==0 implies EOF. */ + int bCleanup; /* True in "cleanup" state */ + const char *zTbl; /* Name of target db table */ + const char *zDataTbl; /* Name of rbu db table (or null) */ + const char *zIdx; /* Name of target db index (or null) */ + int iTnum; /* Root page of current object */ + int iPkTnum; /* If eType==EXTERNAL, root of PK index */ + int bUnique; /* Current index is unique */ + + /* Statements created by rbuObjIterPrepareAll() */ + int nCol; /* Number of columns in current object */ + sqlite3_stmt *pSelect; /* Source data */ + sqlite3_stmt *pInsert; /* Statement for INSERT operations */ + sqlite3_stmt *pDelete; /* Statement for DELETE ops */ + sqlite3_stmt *pTmpInsert; /* Insert into rbu_tmp_$zDataTbl */ + + /* Last UPDATE used (for PK b-tree updates only), or NULL. */ + RbuUpdateStmt *pRbuUpdate; +}; + +/* +** Values for RbuObjIter.eType +** +** 0: Table does not exist (error) +** 1: Table has an implicit rowid. +** 2: Table has an explicit IPK column. +** 3: Table has an external PK index. +** 4: Table is WITHOUT ROWID. +** 5: Table is a virtual table. +*/ +#define RBU_PK_NOTABLE 0 +#define RBU_PK_NONE 1 +#define RBU_PK_IPK 2 +#define RBU_PK_EXTERNAL 3 +#define RBU_PK_WITHOUT_ROWID 4 +#define RBU_PK_VTAB 5 + + +/* +** Within the RBU_STAGE_OAL stage, each call to sqlite3rbu_step() performs +** one of the following operations. +*/ +#define RBU_INSERT 1 /* Insert on a main table b-tree */ +#define RBU_DELETE 2 /* Delete a row from a main table b-tree */ +#define RBU_IDX_DELETE 3 /* Delete a row from an aux. index b-tree */ +#define RBU_IDX_INSERT 4 /* Insert on an aux. index b-tree */ +#define RBU_UPDATE 5 /* Update a row in a main table b-tree */ + + +/* +** A single step of an incremental checkpoint - frame iWalFrame of the wal +** file should be copied to page iDbPage of the database file. +*/ +struct RbuFrame { + u32 iDbPage; + u32 iWalFrame; +}; + +/* +** RBU handle. 
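+**
+** An instance of this structure is allocated by each successful call to
+** sqlite3rbu_open() and released again by sqlite3rbu_close().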
+*/ +struct sqlite3rbu { + int eStage; /* Value of RBU_STATE_STAGE field */ + sqlite3 *dbMain; /* target database handle */ + sqlite3 *dbRbu; /* rbu database handle */ + char *zTarget; /* Path to target db */ + char *zRbu; /* Path to rbu db */ + char *zState; /* Path to state db (or NULL if zRbu) */ + char zStateDb[5]; /* Db name for state ("stat" or "main") */ + int rc; /* Value returned by last rbu_step() call */ + char *zErrmsg; /* Error message if rc!=SQLITE_OK */ + int nStep; /* Rows processed for current object */ + int nProgress; /* Rows processed for all objects */ + RbuObjIter objiter; /* Iterator for skipping through tbl/idx */ + const char *zVfsName; /* Name of automatically created rbu vfs */ + rbu_file *pTargetFd; /* File handle open on target db */ + i64 iOalSz; + + /* The following state variables are used as part of the incremental + ** checkpoint stage (eStage==RBU_STAGE_CKPT). See comments surrounding + ** function rbuSetupCheckpoint() for details. */ + u32 iMaxFrame; /* Largest iWalFrame value in aFrame[] */ + u32 mLock; + int nFrame; /* Entries in aFrame[] array */ + int nFrameAlloc; /* Allocated size of aFrame[] array */ + RbuFrame *aFrame; + int pgsz; + u8 *aBuf; + i64 iWalCksum; +}; + +/* +** An rbu VFS is implemented using an instance of this structure. +*/ +struct rbu_vfs { + sqlite3_vfs base; /* rbu VFS shim methods */ + sqlite3_vfs *pRealVfs; /* Underlying VFS */ + sqlite3_mutex *mutex; /* Mutex to protect pMain */ + rbu_file *pMain; /* Linked list of main db files */ +}; + +/* +** Each file opened by an rbu VFS is represented by an instance of +** the following structure. +*/ +struct rbu_file { + sqlite3_file base; /* sqlite3_file methods */ + sqlite3_file *pReal; /* Underlying file handle */ + rbu_vfs *pRbuVfs; /* Pointer to the rbu_vfs object */ + sqlite3rbu *pRbu; /* Pointer to rbu object (rbu target only) */ + + int openFlags; /* Flags this file was opened with */ + u32 iCookie; /* Cookie value for main db files */ + u8 iWriteVer; /* "write-version" value for main db files */ + + int nShm; /* Number of entries in apShm[] array */ + char **apShm; /* Array of mmap'd *-shm regions */ + char *zDel; /* Delete this when closing file */ + + const char *zWal; /* Wal filename for this main db file */ + rbu_file *pWalFd; /* Wal file descriptor for this main db */ + rbu_file *pMainNext; /* Next MAIN_DB file */ +}; + + +/************************************************************************* +** The following three functions, found below: +** +** rbuDeltaGetInt() +** rbuDeltaChecksum() +** rbuDeltaApply() +** +** are lifted from the fossil source code (http://fossil-scm.org). They +** are used to implement the scalar SQL function rbu_fossil_delta(). +*/ + +/* +** Read bytes from *pz and convert them into a positive integer. When +** finished, leave *pz pointing to the first character past the end of +** the integer. The *pLen parameter holds the length of the string +** in *pz and is decremented once for each character in the integer. 
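+**
+** A worked example: assuming the base-64 alphabet used by the fossil
+** delta format (digits, upper and lower case letters, '_' and '~' - see
+** the zValue[] table below), the input "A7" decodes to (10<<6)+7 = 647.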
+*/ +static unsigned int rbuDeltaGetInt(const char **pz, int *pLen){ + static const signed char zValue[] = { + -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, + -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, + -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, + 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -1, -1, -1, -1, -1, -1, + -1, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, + 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, -1, -1, -1, -1, 36, + -1, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, + 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, -1, -1, -1, 63, -1, + }; + unsigned int v = 0; + int c; + unsigned char *z = (unsigned char*)*pz; + unsigned char *zStart = z; + while( (c = zValue[0x7f&*(z++)])>=0 ){ + v = (v<<6) + c; + } + z--; + *pLen -= z - zStart; + *pz = (char*)z; + return v; +} + +/* +** Compute a 32-bit checksum on the N-byte buffer. Return the result. +*/ +static unsigned int rbuDeltaChecksum(const char *zIn, size_t N){ + const unsigned char *z = (const unsigned char *)zIn; + unsigned sum0 = 0; + unsigned sum1 = 0; + unsigned sum2 = 0; + unsigned sum3 = 0; + while(N >= 16){ + sum0 += ((unsigned)z[0] + z[4] + z[8] + z[12]); + sum1 += ((unsigned)z[1] + z[5] + z[9] + z[13]); + sum2 += ((unsigned)z[2] + z[6] + z[10]+ z[14]); + sum3 += ((unsigned)z[3] + z[7] + z[11]+ z[15]); + z += 16; + N -= 16; + } + while(N >= 4){ + sum0 += z[0]; + sum1 += z[1]; + sum2 += z[2]; + sum3 += z[3]; + z += 4; + N -= 4; + } + sum3 += (sum2 << 8) + (sum1 << 16) + (sum0 << 24); + switch(N){ + case 3: sum3 += (z[2] << 8); + case 2: sum3 += (z[1] << 16); + case 1: sum3 += (z[0] << 24); + default: ; + } + return sum3; +} + +/* +** Apply a delta. +** +** The output buffer should be big enough to hold the whole output +** file and a NUL terminator at the end. The delta_output_size() +** routine will determine this size for you. +** +** The delta string should be null-terminated. But the delta string +** may contain embedded NUL characters (if the input and output are +** binary files) so we also have to pass in the length of the delta in +** the lenDelta parameter. +** +** This function returns the size of the output file in bytes (excluding +** the final NUL terminator character). Except, if the delta string is +** malformed or intended for use with a source file other than zSrc, +** then this routine returns -1. +** +** Refer to the delta_create() documentation above for a description +** of the delta file format. 
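+**
+** A calling sketch (this is essentially what rbu_fossil_delta() below
+** does; error handling omitted):
+**
+**   int nOut = rbuDeltaOutputSize(zDelta, lenDelta);
+**   char *zOut = sqlite3_malloc(nOut+1);
+**   int nOut2 = rbuDeltaApply(zSrc, lenSrc, zDelta, lenDelta, zOut);
+**
+** A negative nOut, or an nOut2 value that differs from nOut, indicates
+** a corrupt delta.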
+*/ +static int rbuDeltaApply( + const char *zSrc, /* The source or pattern file */ + int lenSrc, /* Length of the source file */ + const char *zDelta, /* Delta to apply to the pattern */ + int lenDelta, /* Length of the delta */ + char *zOut /* Write the output into this preallocated buffer */ +){ + unsigned int limit; + unsigned int total = 0; +#ifndef FOSSIL_OMIT_DELTA_CKSUM_TEST + char *zOrigOut = zOut; +#endif + + limit = rbuDeltaGetInt(&zDelta, &lenDelta); + if( *zDelta!='\n' ){ + /* ERROR: size integer not terminated by "\n" */ + return -1; + } + zDelta++; lenDelta--; + while( *zDelta && lenDelta>0 ){ + unsigned int cnt, ofst; + cnt = rbuDeltaGetInt(&zDelta, &lenDelta); + switch( zDelta[0] ){ + case '@': { + zDelta++; lenDelta--; + ofst = rbuDeltaGetInt(&zDelta, &lenDelta); + if( lenDelta>0 && zDelta[0]!=',' ){ + /* ERROR: copy command not terminated by ',' */ + return -1; + } + zDelta++; lenDelta--; + total += cnt; + if( total>limit ){ + /* ERROR: copy exceeds output file size */ + return -1; + } + if( (int)(ofst+cnt) > lenSrc ){ + /* ERROR: copy extends past end of input */ + return -1; + } + memcpy(zOut, &zSrc[ofst], cnt); + zOut += cnt; + break; + } + case ':': { + zDelta++; lenDelta--; + total += cnt; + if( total>limit ){ + /* ERROR: insert command gives an output larger than predicted */ + return -1; + } + if( (int)cnt>lenDelta ){ + /* ERROR: insert count exceeds size of delta */ + return -1; + } + memcpy(zOut, zDelta, cnt); + zOut += cnt; + zDelta += cnt; + lenDelta -= cnt; + break; + } + case ';': { + zDelta++; lenDelta--; + zOut[0] = 0; +#ifndef FOSSIL_OMIT_DELTA_CKSUM_TEST + if( cnt!=rbuDeltaChecksum(zOrigOut, total) ){ + /* ERROR: bad checksum */ + return -1; + } +#endif + if( total!=limit ){ + /* ERROR: generated size does not match predicted size */ + return -1; + } + return total; + } + default: { + /* ERROR: unknown delta operator */ + return -1; + } + } + } + /* ERROR: unterminated delta */ + return -1; +} + +static int rbuDeltaOutputSize(const char *zDelta, int lenDelta){ + int size; + size = rbuDeltaGetInt(&zDelta, &lenDelta); + if( *zDelta!='\n' ){ + /* ERROR: size integer not terminated by "\n" */ + return -1; + } + return size; +} + +/* +** End of code taken from fossil. +*************************************************************************/ + +/* +** Implementation of SQL scalar function rbu_fossil_delta(). +** +** This function applies a fossil delta patch to a blob. Exactly two +** arguments must be passed to this function. The first is the blob to +** patch and the second the patch to apply. If no error occurs, this +** function returns the patched blob. 
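+**
+** For example, once registered with a database handle, the function may
+** be invoked from SQL roughly as follows (a sketch with placeholder
+** operands):
+**
+**   SELECT rbu_fossil_delta(<original blob>, <fossil delta>);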
+*/ +static void rbuFossilDeltaFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const char *aDelta; + int nDelta; + const char *aOrig; + int nOrig; + + int nOut; + int nOut2; + char *aOut; + + assert( argc==2 ); + + nOrig = sqlite3_value_bytes(argv[0]); + aOrig = (const char*)sqlite3_value_blob(argv[0]); + nDelta = sqlite3_value_bytes(argv[1]); + aDelta = (const char*)sqlite3_value_blob(argv[1]); + + /* Figure out the size of the output */ + nOut = rbuDeltaOutputSize(aDelta, nDelta); + if( nOut<0 ){ + sqlite3_result_error(context, "corrupt fossil delta", -1); + return; + } + + aOut = sqlite3_malloc(nOut+1); + if( aOut==0 ){ + sqlite3_result_error_nomem(context); + }else{ + nOut2 = rbuDeltaApply(aOrig, nOrig, aDelta, nDelta, aOut); + if( nOut2!=nOut ){ + sqlite3_result_error(context, "corrupt fossil delta", -1); + }else{ + sqlite3_result_blob(context, aOut, nOut, sqlite3_free); + } + } +} + + +/* +** Prepare the SQL statement in buffer zSql against database handle db. +** If successful, set *ppStmt to point to the new statement and return +** SQLITE_OK. +** +** Otherwise, if an error does occur, set *ppStmt to NULL and return +** an SQLite error code. Additionally, set output variable *pzErrmsg to +** point to a buffer containing an error message. It is the responsibility +** of the caller to (eventually) free this buffer using sqlite3_free(). +*/ +static int prepareAndCollectError( + sqlite3 *db, + sqlite3_stmt **ppStmt, + char **pzErrmsg, + const char *zSql +){ + int rc = sqlite3_prepare_v2(db, zSql, -1, ppStmt, 0); + if( rc!=SQLITE_OK ){ + *pzErrmsg = sqlite3_mprintf("%s", sqlite3_errmsg(db)); + *ppStmt = 0; + } + return rc; +} + +/* +** Reset the SQL statement passed as the first argument. Return a copy +** of the value returned by sqlite3_reset(). +** +** If an error has occurred, then set *pzErrmsg to point to a buffer +** containing an error message. It is the responsibility of the caller +** to eventually free this buffer using sqlite3_free(). +*/ +static int resetAndCollectError(sqlite3_stmt *pStmt, char **pzErrmsg){ + int rc = sqlite3_reset(pStmt); + if( rc!=SQLITE_OK ){ + *pzErrmsg = sqlite3_mprintf("%s", sqlite3_errmsg(sqlite3_db_handle(pStmt))); + } + return rc; +} + +/* +** Unless it is NULL, argument zSql points to a buffer allocated using +** sqlite3_malloc containing an SQL statement. This function prepares the SQL +** statement against database db and frees the buffer. If statement +** compilation is successful, *ppStmt is set to point to the new statement +** handle and SQLITE_OK is returned. +** +** Otherwise, if an error occurs, *ppStmt is set to NULL and an error code +** returned. In this case, *pzErrmsg may also be set to point to an error +** message. It is the responsibility of the caller to free this error message +** buffer using sqlite3_free(). +** +** If argument zSql is NULL, this function assumes that an OOM has occurred. +** In this case SQLITE_NOMEM is returned and *ppStmt set to NULL. +*/ +static int prepareFreeAndCollectError( + sqlite3 *db, + sqlite3_stmt **ppStmt, + char **pzErrmsg, + char *zSql +){ + int rc; + assert( *pzErrmsg==0 ); + if( zSql==0 ){ + rc = SQLITE_NOMEM; + *ppStmt = 0; + }else{ + rc = prepareAndCollectError(db, ppStmt, pzErrmsg, zSql); + sqlite3_free(zSql); + } + return rc; +} + +/* +** Free the RbuObjIter.azTblCol[] and RbuObjIter.abTblPk[] arrays allocated +** by an earlier call to rbuObjIterCacheTableInfo(). 
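+** All of the per-column arrays are carved out of the single allocation
+** pointed to by azTblCol (see rbuAllocateIterArrays() below), so only the
+** individual column name and type strings and that one block are passed
+** to sqlite3_free() here.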
+*/ +static void rbuObjIterFreeCols(RbuObjIter *pIter){ + int i; + for(i=0; i<pIter->nTblCol; i++){ + sqlite3_free(pIter->azTblCol[i]); + sqlite3_free(pIter->azTblType[i]); + } + sqlite3_free(pIter->azTblCol); + pIter->azTblCol = 0; + pIter->azTblType = 0; + pIter->aiSrcOrder = 0; + pIter->abTblPk = 0; + pIter->abNotNull = 0; + pIter->nTblCol = 0; + pIter->eType = 0; /* Invalid value */ +} + +/* +** Finalize all statements and free all allocations that are specific to +** the current object (table/index pair). +*/ +static void rbuObjIterClearStatements(RbuObjIter *pIter){ + RbuUpdateStmt *pUp; + + sqlite3_finalize(pIter->pSelect); + sqlite3_finalize(pIter->pInsert); + sqlite3_finalize(pIter->pDelete); + sqlite3_finalize(pIter->pTmpInsert); + pUp = pIter->pRbuUpdate; + while( pUp ){ + RbuUpdateStmt *pTmp = pUp->pNext; + sqlite3_finalize(pUp->pUpdate); + sqlite3_free(pUp); + pUp = pTmp; + } + + pIter->pSelect = 0; + pIter->pInsert = 0; + pIter->pDelete = 0; + pIter->pRbuUpdate = 0; + pIter->pTmpInsert = 0; + pIter->nCol = 0; +} + +/* +** Clean up any resources allocated as part of the iterator object passed +** as the only argument. +*/ +static void rbuObjIterFinalize(RbuObjIter *pIter){ + rbuObjIterClearStatements(pIter); + sqlite3_finalize(pIter->pTblIter); + sqlite3_finalize(pIter->pIdxIter); + rbuObjIterFreeCols(pIter); + memset(pIter, 0, sizeof(RbuObjIter)); +} + +/* +** Advance the iterator to the next position. +** +** If no error occurs, SQLITE_OK is returned and the iterator is left +** pointing to the next entry. Otherwise, an error code and message is +** left in the RBU handle passed as the first argument. A copy of the +** error code is returned. +*/ +static int rbuObjIterNext(sqlite3rbu *p, RbuObjIter *pIter){ + int rc = p->rc; + if( rc==SQLITE_OK ){ + + /* Free any SQLite statements used while processing the previous object */ + rbuObjIterClearStatements(pIter); + if( pIter->zIdx==0 ){ + rc = sqlite3_exec(p->dbMain, + "DROP TRIGGER IF EXISTS temp.rbu_insert_tr;" + "DROP TRIGGER IF EXISTS temp.rbu_update1_tr;" + "DROP TRIGGER IF EXISTS temp.rbu_update2_tr;" + "DROP TRIGGER IF EXISTS temp.rbu_delete_tr;" + , 0, 0, &p->zErrmsg + ); + } + + if( rc==SQLITE_OK ){ + if( pIter->bCleanup ){ + rbuObjIterFreeCols(pIter); + pIter->bCleanup = 0; + rc = sqlite3_step(pIter->pTblIter); + if( rc!=SQLITE_ROW ){ + rc = resetAndCollectError(pIter->pTblIter, &p->zErrmsg); + pIter->zTbl = 0; + }else{ + pIter->zTbl = (const char*)sqlite3_column_text(pIter->pTblIter, 0); + pIter->zDataTbl = (const char*)sqlite3_column_text(pIter->pTblIter,1); + rc = (pIter->zDataTbl && pIter->zTbl) ? SQLITE_OK : SQLITE_NOMEM; + } + }else{ + if( pIter->zIdx==0 ){ + sqlite3_stmt *pIdx = pIter->pIdxIter; + rc = sqlite3_bind_text(pIdx, 1, pIter->zTbl, -1, SQLITE_STATIC); + } + if( rc==SQLITE_OK ){ + rc = sqlite3_step(pIter->pIdxIter); + if( rc!=SQLITE_ROW ){ + rc = resetAndCollectError(pIter->pIdxIter, &p->zErrmsg); + pIter->bCleanup = 1; + pIter->zIdx = 0; + }else{ + pIter->zIdx = (const char*)sqlite3_column_text(pIter->pIdxIter, 0); + pIter->iTnum = sqlite3_column_int(pIter->pIdxIter, 1); + pIter->bUnique = sqlite3_column_int(pIter->pIdxIter, 2); + rc = pIter->zIdx ? SQLITE_OK : SQLITE_NOMEM; + } + } + } + } + } + + if( rc!=SQLITE_OK ){ + rbuObjIterFinalize(pIter); + p->rc = rc; + } + return rc; +} + + +/* +** The implementation of the rbu_target_name() SQL function. This function +** accepts one argument - the name of a table in the RBU database. 
If the +** table name matches the pattern: +** +** data[0-9]_<name> +** +** where <name> is any sequence of 1 or more characters, <name> is returned. +** Otherwise, if the only argument does not match the above pattern, an SQL +** NULL is returned. +** +** "data_t1" -> "t1" +** "data0123_t2" -> "t2" +** "dataAB_t3" -> NULL +*/ +static void rbuTargetNameFunc( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + const char *zIn; + assert( argc==1 ); + + zIn = (const char*)sqlite3_value_text(argv[0]); + if( zIn && strlen(zIn)>4 && memcmp("data", zIn, 4)==0 ){ + int i; + for(i=4; zIn[i]>='0' && zIn[i]<='9'; i++); + if( zIn[i]=='_' && zIn[i+1] ){ + sqlite3_result_text(context, &zIn[i+1], -1, SQLITE_STATIC); + } + } +} + +/* +** Initialize the iterator structure passed as the second argument. +** +** If no error occurs, SQLITE_OK is returned and the iterator is left +** pointing to the first entry. Otherwise, an error code and message is +** left in the RBU handle passed as the first argument. A copy of the +** error code is returned. +*/ +static int rbuObjIterFirst(sqlite3rbu *p, RbuObjIter *pIter){ + int rc; + memset(pIter, 0, sizeof(RbuObjIter)); + + rc = prepareAndCollectError(p->dbRbu, &pIter->pTblIter, &p->zErrmsg, + "SELECT rbu_target_name(name) AS target, name FROM sqlite_master " + "WHERE type IN ('table', 'view') AND target IS NOT NULL " + "ORDER BY name" + ); + + if( rc==SQLITE_OK ){ + rc = prepareAndCollectError(p->dbMain, &pIter->pIdxIter, &p->zErrmsg, + "SELECT name, rootpage, sql IS NULL OR substr(8, 6)=='UNIQUE' " + " FROM main.sqlite_master " + " WHERE type='index' AND tbl_name = ?" + ); + } + + pIter->bCleanup = 1; + p->rc = rc; + return rbuObjIterNext(p, pIter); +} + +/* +** This is a wrapper around "sqlite3_mprintf(zFmt, ...)". If an OOM occurs, +** an error code is stored in the RBU handle passed as the first argument. +** +** If an error has already occurred (p->rc is already set to something other +** than SQLITE_OK), then this function returns NULL without modifying the +** stored error code. In this case it still calls sqlite3_free() on any +** printf() parameters associated with %z conversions. +*/ +static char *rbuMPrintf(sqlite3rbu *p, const char *zFmt, ...){ + char *zSql = 0; + va_list ap; + va_start(ap, zFmt); + zSql = sqlite3_vmprintf(zFmt, ap); + if( p->rc==SQLITE_OK ){ + if( zSql==0 ) p->rc = SQLITE_NOMEM; + }else{ + sqlite3_free(zSql); + zSql = 0; + } + va_end(ap); + return zSql; +} + +/* +** Argument zFmt is a sqlite3_mprintf() style format string. The trailing +** arguments are the usual subsitution values. This function performs +** the printf() style substitutions and executes the result as an SQL +** statement on the RBU handles database. +** +** If an error occurs, an error code and error message is stored in the +** RBU handle. If an error has already occurred when this function is +** called, it is a no-op. +*/ +static int rbuMPrintfExec(sqlite3rbu *p, sqlite3 *db, const char *zFmt, ...){ + va_list ap; + char *zSql; + va_start(ap, zFmt); + zSql = sqlite3_vmprintf(zFmt, ap); + if( p->rc==SQLITE_OK ){ + if( zSql==0 ){ + p->rc = SQLITE_NOMEM; + }else{ + p->rc = sqlite3_exec(db, zSql, 0, 0, &p->zErrmsg); + } + } + sqlite3_free(zSql); + va_end(ap); + return p->rc; +} + +/* +** Attempt to allocate and return a pointer to a zeroed block of nByte +** bytes. +** +** If an error (i.e. an OOM condition) occurs, return NULL and leave an +** error code in the rbu handle passed as the first argument. 
Or, if an +** error has already occurred when this function is called, return NULL +** immediately without attempting the allocation or modifying the stored +** error code. +*/ +static void *rbuMalloc(sqlite3rbu *p, int nByte){ + void *pRet = 0; + if( p->rc==SQLITE_OK ){ + assert( nByte>0 ); + pRet = sqlite3_malloc64(nByte); + if( pRet==0 ){ + p->rc = SQLITE_NOMEM; + }else{ + memset(pRet, 0, nByte); + } + } + return pRet; +} + + +/* +** Allocate and zero the pIter->azTblCol[] and abTblPk[] arrays so that +** there is room for at least nCol elements. If an OOM occurs, store an +** error code in the RBU handle passed as the first argument. +*/ +static void rbuAllocateIterArrays(sqlite3rbu *p, RbuObjIter *pIter, int nCol){ + int nByte = (2*sizeof(char*) + sizeof(int) + 3*sizeof(u8)) * nCol; + char **azNew; + + azNew = (char**)rbuMalloc(p, nByte); + if( azNew ){ + pIter->azTblCol = azNew; + pIter->azTblType = &azNew[nCol]; + pIter->aiSrcOrder = (int*)&pIter->azTblType[nCol]; + pIter->abTblPk = (u8*)&pIter->aiSrcOrder[nCol]; + pIter->abNotNull = (u8*)&pIter->abTblPk[nCol]; + pIter->abIndexed = (u8*)&pIter->abNotNull[nCol]; + } +} + +/* +** The first argument must be a nul-terminated string. This function +** returns a copy of the string in memory obtained from sqlite3_malloc(). +** It is the responsibility of the caller to eventually free this memory +** using sqlite3_free(). +** +** If an OOM condition is encountered when attempting to allocate memory, +** output variable (*pRc) is set to SQLITE_NOMEM before returning. Otherwise, +** if the allocation succeeds, (*pRc) is left unchanged. +*/ +static char *rbuStrndup(const char *zStr, int *pRc){ + char *zRet = 0; + + assert( *pRc==SQLITE_OK ); + if( zStr ){ + size_t nCopy = strlen(zStr) + 1; + zRet = (char*)sqlite3_malloc64(nCopy); + if( zRet ){ + memcpy(zRet, zStr, nCopy); + }else{ + *pRc = SQLITE_NOMEM; + } + } + + return zRet; +} + +/* +** Finalize the statement passed as the second argument. +** +** If the sqlite3_finalize() call indicates that an error occurs, and the +** rbu handle error code is not already set, set the error code and error +** message accordingly. +*/ +static void rbuFinalize(sqlite3rbu *p, sqlite3_stmt *pStmt){ + sqlite3 *db = sqlite3_db_handle(pStmt); + int rc = sqlite3_finalize(pStmt); + if( p->rc==SQLITE_OK && rc!=SQLITE_OK ){ + p->rc = rc; + p->zErrmsg = sqlite3_mprintf("%s", sqlite3_errmsg(db)); + } +} + +/* Determine the type of a table. +** +** peType is of type (int*), a pointer to an output parameter of type +** (int). This call sets the output parameter as follows, depending +** on the type of the table specified by parameters dbName and zTbl. +** +** RBU_PK_NOTABLE: No such table. +** RBU_PK_NONE: Table has an implicit rowid. +** RBU_PK_IPK: Table has an explicit IPK column. +** RBU_PK_EXTERNAL: Table has an external PK index. +** RBU_PK_WITHOUT_ROWID: Table is WITHOUT ROWID. +** RBU_PK_VTAB: Table is a virtual table. +** +** Argument *piPk is also of type (int*), and also points to an output +** parameter. Unless the table has an external primary key index +** (i.e. unless *peType is set to 3), then *piPk is set to zero. Or, +** if the table does have an external primary key index, then *piPk +** is set to the root page number of the primary key index before +** returning. 
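+**
+** As a rough illustration, the following schemas map to the following
+** *peType values:
+**
+**   CREATE TABLE t1(a, b);                           -> RBU_PK_NONE
+**   CREATE TABLE t2(a INTEGER PRIMARY KEY, b);       -> RBU_PK_IPK
+**   CREATE TABLE t3(a, b, PRIMARY KEY(a, b));        -> RBU_PK_EXTERNAL
+**   CREATE TABLE t4(a PRIMARY KEY, b) WITHOUT ROWID; -> RBU_PK_WITHOUT_ROWID
+**   CREATE VIRTUAL TABLE t5 USING fts4(a, b);        -> RBU_PK_VTAB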
+** +** ALGORITHM: +** +** if( no entry exists in sqlite_master ){ +** return RBU_PK_NOTABLE +** }else if( sql for the entry starts with "CREATE VIRTUAL" ){ +** return RBU_PK_VTAB +** }else if( "PRAGMA index_list()" for the table contains a "pk" index ){ +** if( the index that is the pk exists in sqlite_master ){ +** *piPK = rootpage of that index. +** return RBU_PK_EXTERNAL +** }else{ +** return RBU_PK_WITHOUT_ROWID +** } +** }else if( "PRAGMA table_info()" lists one or more "pk" columns ){ +** return RBU_PK_IPK +** }else{ +** return RBU_PK_NONE +** } +*/ +static void rbuTableType( + sqlite3rbu *p, + const char *zTab, + int *peType, + int *piTnum, + int *piPk +){ + /* + ** 0) SELECT count(*) FROM sqlite_master where name=%Q AND IsVirtual(%Q) + ** 1) PRAGMA index_list = ? + ** 2) SELECT count(*) FROM sqlite_master where name=%Q + ** 3) PRAGMA table_info = ? + */ + sqlite3_stmt *aStmt[4] = {0, 0, 0, 0}; + + *peType = RBU_PK_NOTABLE; + *piPk = 0; + + assert( p->rc==SQLITE_OK ); + p->rc = prepareFreeAndCollectError(p->dbMain, &aStmt[0], &p->zErrmsg, + sqlite3_mprintf( + "SELECT (sql LIKE 'create virtual%%'), rootpage" + " FROM sqlite_master" + " WHERE name=%Q", zTab + )); + if( p->rc!=SQLITE_OK || sqlite3_step(aStmt[0])!=SQLITE_ROW ){ + /* Either an error, or no such table. */ + goto rbuTableType_end; + } + if( sqlite3_column_int(aStmt[0], 0) ){ + *peType = RBU_PK_VTAB; /* virtual table */ + goto rbuTableType_end; + } + *piTnum = sqlite3_column_int(aStmt[0], 1); + + p->rc = prepareFreeAndCollectError(p->dbMain, &aStmt[1], &p->zErrmsg, + sqlite3_mprintf("PRAGMA index_list=%Q",zTab) + ); + if( p->rc ) goto rbuTableType_end; + while( sqlite3_step(aStmt[1])==SQLITE_ROW ){ + const u8 *zOrig = sqlite3_column_text(aStmt[1], 3); + const u8 *zIdx = sqlite3_column_text(aStmt[1], 1); + if( zOrig && zIdx && zOrig[0]=='p' ){ + p->rc = prepareFreeAndCollectError(p->dbMain, &aStmt[2], &p->zErrmsg, + sqlite3_mprintf( + "SELECT rootpage FROM sqlite_master WHERE name = %Q", zIdx + )); + if( p->rc==SQLITE_OK ){ + if( sqlite3_step(aStmt[2])==SQLITE_ROW ){ + *piPk = sqlite3_column_int(aStmt[2], 0); + *peType = RBU_PK_EXTERNAL; + }else{ + *peType = RBU_PK_WITHOUT_ROWID; + } + } + goto rbuTableType_end; + } + } + + p->rc = prepareFreeAndCollectError(p->dbMain, &aStmt[3], &p->zErrmsg, + sqlite3_mprintf("PRAGMA table_info=%Q",zTab) + ); + if( p->rc==SQLITE_OK ){ + while( sqlite3_step(aStmt[3])==SQLITE_ROW ){ + if( sqlite3_column_int(aStmt[3],5)>0 ){ + *peType = RBU_PK_IPK; /* explicit IPK column */ + goto rbuTableType_end; + } + } + *peType = RBU_PK_NONE; + } + +rbuTableType_end: { + unsigned int i; + for(i=0; i<sizeof(aStmt)/sizeof(aStmt[0]); i++){ + rbuFinalize(p, aStmt[i]); + } + } +} + +/* +** This is a helper function for rbuObjIterCacheTableInfo(). It populates +** the pIter->abIndexed[] array. 
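+** For example (an illustration), for a rowid table created with
+** "CREATE TABLE t(a, b, c)" and a single index "CREATE INDEX i ON t(c)",
+** abIndexed[] is populated with {0, 0, 1}. If the table has no indexes
+** at all, abIndexed is set to NULL instead.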
+*/ +static void rbuObjIterCacheIndexedCols(sqlite3rbu *p, RbuObjIter *pIter){ + sqlite3_stmt *pList = 0; + int bIndex = 0; + + if( p->rc==SQLITE_OK ){ + memcpy(pIter->abIndexed, pIter->abTblPk, sizeof(u8)*pIter->nTblCol); + p->rc = prepareFreeAndCollectError(p->dbMain, &pList, &p->zErrmsg, + sqlite3_mprintf("PRAGMA main.index_list = %Q", pIter->zTbl) + ); + } + + while( p->rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pList) ){ + const char *zIdx = (const char*)sqlite3_column_text(pList, 1); + sqlite3_stmt *pXInfo = 0; + if( zIdx==0 ) break; + p->rc = prepareFreeAndCollectError(p->dbMain, &pXInfo, &p->zErrmsg, + sqlite3_mprintf("PRAGMA main.index_xinfo = %Q", zIdx) + ); + while( p->rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pXInfo) ){ + int iCid = sqlite3_column_int(pXInfo, 1); + if( iCid>=0 ) pIter->abIndexed[iCid] = 1; + } + rbuFinalize(p, pXInfo); + bIndex = 1; + } + + rbuFinalize(p, pList); + if( bIndex==0 ) pIter->abIndexed = 0; +} + + +/* +** If they are not already populated, populate the pIter->azTblCol[], +** pIter->abTblPk[], pIter->nTblCol and pIter->bRowid variables according to +** the table (not index) that the iterator currently points to. +** +** Return SQLITE_OK if successful, or an SQLite error code otherwise. If +** an error does occur, an error code and error message are also left in +** the RBU handle. +*/ +static int rbuObjIterCacheTableInfo(sqlite3rbu *p, RbuObjIter *pIter){ + if( pIter->azTblCol==0 ){ + sqlite3_stmt *pStmt = 0; + int nCol = 0; + int i; /* for() loop iterator variable */ + int bRbuRowid = 0; /* If input table has column "rbu_rowid" */ + int iOrder = 0; + int iTnum = 0; + + /* Figure out the type of table this step will deal with. */ + assert( pIter->eType==0 ); + rbuTableType(p, pIter->zTbl, &pIter->eType, &iTnum, &pIter->iPkTnum); + if( p->rc==SQLITE_OK && pIter->eType==RBU_PK_NOTABLE ){ + p->rc = SQLITE_ERROR; + p->zErrmsg = sqlite3_mprintf("no such table: %s", pIter->zTbl); + } + if( p->rc ) return p->rc; + if( pIter->zIdx==0 ) pIter->iTnum = iTnum; + + assert( pIter->eType==RBU_PK_NONE || pIter->eType==RBU_PK_IPK + || pIter->eType==RBU_PK_EXTERNAL || pIter->eType==RBU_PK_WITHOUT_ROWID + || pIter->eType==RBU_PK_VTAB + ); + + /* Populate the azTblCol[] and nTblCol variables based on the columns + ** of the input table. Ignore any input table columns that begin with + ** "rbu_". */ + p->rc = prepareFreeAndCollectError(p->dbRbu, &pStmt, &p->zErrmsg, + sqlite3_mprintf("SELECT * FROM '%q'", pIter->zDataTbl) + ); + if( p->rc==SQLITE_OK ){ + nCol = sqlite3_column_count(pStmt); + rbuAllocateIterArrays(p, pIter, nCol); + } + for(i=0; p->rc==SQLITE_OK && i<nCol; i++){ + const char *zName = (const char*)sqlite3_column_name(pStmt, i); + if( sqlite3_strnicmp("rbu_", zName, 4) ){ + char *zCopy = rbuStrndup(zName, &p->rc); + pIter->aiSrcOrder[pIter->nTblCol] = pIter->nTblCol; + pIter->azTblCol[pIter->nTblCol++] = zCopy; + } + else if( 0==sqlite3_stricmp("rbu_rowid", zName) ){ + bRbuRowid = 1; + } + } + sqlite3_finalize(pStmt); + pStmt = 0; + + if( p->rc==SQLITE_OK + && bRbuRowid!=(pIter->eType==RBU_PK_VTAB || pIter->eType==RBU_PK_NONE) + ){ + p->rc = SQLITE_ERROR; + p->zErrmsg = sqlite3_mprintf( + "table %q %s rbu_rowid column", pIter->zDataTbl, + (bRbuRowid ? "may not have" : "requires") + ); + } + + /* Check that all non-HIDDEN columns in the destination table are also + ** present in the input table. Populate the abTblPk[], azTblType[] and + ** aiTblOrder[] arrays at the same time. 
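    **
    ** As an illustration (hypothetical names): if the target table is
    ** "CREATE TABLE t1(a INTEGER PRIMARY KEY, b, c)" and the input table
    ** is "data_t1(c, a, b, rbu_control)", then after this loop runs
    ** azTblCol[] holds {"a", "b", "c"} (target order), abTblPk[] holds
    ** {1, 0, 0}, and aiSrcOrder[] holds {1, 2, 0} - the position of each
    ** target column within data_t1.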
*/ + if( p->rc==SQLITE_OK ){ + p->rc = prepareFreeAndCollectError(p->dbMain, &pStmt, &p->zErrmsg, + sqlite3_mprintf("PRAGMA table_info(%Q)", pIter->zTbl) + ); + } + while( p->rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pStmt) ){ + const char *zName = (const char*)sqlite3_column_text(pStmt, 1); + if( zName==0 ) break; /* An OOM - finalize() below returns S_NOMEM */ + for(i=iOrder; i<pIter->nTblCol; i++){ + if( 0==strcmp(zName, pIter->azTblCol[i]) ) break; + } + if( i==pIter->nTblCol ){ + p->rc = SQLITE_ERROR; + p->zErrmsg = sqlite3_mprintf("column missing from %q: %s", + pIter->zDataTbl, zName + ); + }else{ + int iPk = sqlite3_column_int(pStmt, 5); + int bNotNull = sqlite3_column_int(pStmt, 3); + const char *zType = (const char*)sqlite3_column_text(pStmt, 2); + + if( i!=iOrder ){ + SWAP(int, pIter->aiSrcOrder[i], pIter->aiSrcOrder[iOrder]); + SWAP(char*, pIter->azTblCol[i], pIter->azTblCol[iOrder]); + } + + pIter->azTblType[iOrder] = rbuStrndup(zType, &p->rc); + pIter->abTblPk[iOrder] = (iPk!=0); + pIter->abNotNull[iOrder] = (u8)bNotNull || (iPk!=0); + iOrder++; + } + } + + rbuFinalize(p, pStmt); + rbuObjIterCacheIndexedCols(p, pIter); + assert( pIter->eType!=RBU_PK_VTAB || pIter->abIndexed==0 ); + } + + return p->rc; +} + +/* +** This function constructs and returns a pointer to a nul-terminated +** string containing some SQL clause or list based on one or more of the +** column names currently stored in the pIter->azTblCol[] array. +*/ +static char *rbuObjIterGetCollist( + sqlite3rbu *p, /* RBU object */ + RbuObjIter *pIter /* Object iterator for column names */ +){ + char *zList = 0; + const char *zSep = ""; + int i; + for(i=0; i<pIter->nTblCol; i++){ + const char *z = pIter->azTblCol[i]; + zList = rbuMPrintf(p, "%z%s\"%w\"", zList, zSep, z); + zSep = ", "; + } + return zList; +} + +/* +** This function is used to create a SELECT list (the list of SQL +** expressions that follows a SELECT keyword) for a SELECT statement +** used to read from an data_xxx or rbu_tmp_xxx table while updating the +** index object currently indicated by the iterator object passed as the +** second argument. A "PRAGMA index_xinfo = <idxname>" statement is used +** to obtain the required information. +** +** If the index is of the following form: +** +** CREATE INDEX i1 ON t1(c, b COLLATE nocase); +** +** and "t1" is a table with an explicit INTEGER PRIMARY KEY column +** "ipk", the returned string is: +** +** "`c` COLLATE 'BINARY', `b` COLLATE 'NOCASE', `ipk` COLLATE 'BINARY'" +** +** As well as the returned string, three other malloc'd strings are +** returned via output parameters. As follows: +** +** pzImposterCols: ... +** pzImposterPk: ... +** pzWhere: ... +*/ +static char *rbuObjIterGetIndexCols( + sqlite3rbu *p, /* RBU object */ + RbuObjIter *pIter, /* Object iterator for column names */ + char **pzImposterCols, /* OUT: Columns for imposter table */ + char **pzImposterPk, /* OUT: Imposter PK clause */ + char **pzWhere, /* OUT: WHERE clause */ + int *pnBind /* OUT: Trbul number of columns */ +){ + int rc = p->rc; /* Error code */ + int rc2; /* sqlite3_finalize() return code */ + char *zRet = 0; /* String to return */ + char *zImpCols = 0; /* String to return via *pzImposterCols */ + char *zImpPK = 0; /* String to return via *pzImposterPK */ + char *zWhere = 0; /* String to return via *pzWhere */ + int nBind = 0; /* Value to return via *pnBind */ + const char *zCom = ""; /* Set to ", " later on */ + const char *zAnd = ""; /* Set to " AND " later on */ + sqlite3_stmt *pXInfo = 0; /* PRAGMA index_xinfo = ? 
*/ + + if( rc==SQLITE_OK ){ + assert( p->zErrmsg==0 ); + rc = prepareFreeAndCollectError(p->dbMain, &pXInfo, &p->zErrmsg, + sqlite3_mprintf("PRAGMA main.index_xinfo = %Q", pIter->zIdx) + ); + } + + while( rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pXInfo) ){ + int iCid = sqlite3_column_int(pXInfo, 1); + int bDesc = sqlite3_column_int(pXInfo, 3); + const char *zCollate = (const char*)sqlite3_column_text(pXInfo, 4); + const char *zCol; + const char *zType; + + if( iCid<0 ){ + /* An integer primary key. If the table has an explicit IPK, use + ** its name. Otherwise, use "rbu_rowid". */ + if( pIter->eType==RBU_PK_IPK ){ + int i; + for(i=0; pIter->abTblPk[i]==0; i++); + assert( i<pIter->nTblCol ); + zCol = pIter->azTblCol[i]; + }else{ + zCol = "rbu_rowid"; + } + zType = "INTEGER"; + }else{ + zCol = pIter->azTblCol[iCid]; + zType = pIter->azTblType[iCid]; + } + + zRet = sqlite3_mprintf("%z%s\"%w\" COLLATE %Q", zRet, zCom, zCol, zCollate); + if( pIter->bUnique==0 || sqlite3_column_int(pXInfo, 5) ){ + const char *zOrder = (bDesc ? " DESC" : ""); + zImpPK = sqlite3_mprintf("%z%s\"rbu_imp_%d%w\"%s", + zImpPK, zCom, nBind, zCol, zOrder + ); + } + zImpCols = sqlite3_mprintf("%z%s\"rbu_imp_%d%w\" %s COLLATE %Q", + zImpCols, zCom, nBind, zCol, zType, zCollate + ); + zWhere = sqlite3_mprintf( + "%z%s\"rbu_imp_%d%w\" IS ?", zWhere, zAnd, nBind, zCol + ); + if( zRet==0 || zImpPK==0 || zImpCols==0 || zWhere==0 ) rc = SQLITE_NOMEM; + zCom = ", "; + zAnd = " AND "; + nBind++; + } + + rc2 = sqlite3_finalize(pXInfo); + if( rc==SQLITE_OK ) rc = rc2; + + if( rc!=SQLITE_OK ){ + sqlite3_free(zRet); + sqlite3_free(zImpCols); + sqlite3_free(zImpPK); + sqlite3_free(zWhere); + zRet = 0; + zImpCols = 0; + zImpPK = 0; + zWhere = 0; + p->rc = rc; + } + + *pzImposterCols = zImpCols; + *pzImposterPk = zImpPK; + *pzWhere = zWhere; + *pnBind = nBind; + return zRet; +} + +/* +** Assuming the current table columns are "a", "b" and "c", and the zObj +** paramter is passed "old", return a string of the form: +** +** "old.a, old.b, old.b" +** +** With the column names escaped. +** +** For tables with implicit rowids - RBU_PK_EXTERNAL and RBU_PK_NONE, append +** the text ", old._rowid_" to the returned value. +*/ +static char *rbuObjIterGetOldlist( + sqlite3rbu *p, + RbuObjIter *pIter, + const char *zObj +){ + char *zList = 0; + if( p->rc==SQLITE_OK && pIter->abIndexed ){ + const char *zS = ""; + int i; + for(i=0; i<pIter->nTblCol; i++){ + if( pIter->abIndexed[i] ){ + const char *zCol = pIter->azTblCol[i]; + zList = sqlite3_mprintf("%z%s%s.\"%w\"", zList, zS, zObj, zCol); + }else{ + zList = sqlite3_mprintf("%z%sNULL", zList, zS); + } + zS = ", "; + if( zList==0 ){ + p->rc = SQLITE_NOMEM; + break; + } + } + + /* For a table with implicit rowids, append "old._rowid_" to the list. */ + if( pIter->eType==RBU_PK_EXTERNAL || pIter->eType==RBU_PK_NONE ){ + zList = rbuMPrintf(p, "%z, %s._rowid_", zList, zObj); + } + } + return zList; +} + +/* +** Return an expression that can be used in a WHERE clause to match the +** primary key of the current table. 
For example, if the table is: +** +** CREATE TABLE t1(a, b, c, PRIMARY KEY(b, c)); +** +** Return the string: +** +** "b = ?1 AND c = ?2" +*/ +static char *rbuObjIterGetWhere( + sqlite3rbu *p, + RbuObjIter *pIter +){ + char *zList = 0; + if( pIter->eType==RBU_PK_VTAB || pIter->eType==RBU_PK_NONE ){ + zList = rbuMPrintf(p, "_rowid_ = ?%d", pIter->nTblCol+1); + }else if( pIter->eType==RBU_PK_EXTERNAL ){ + const char *zSep = ""; + int i; + for(i=0; i<pIter->nTblCol; i++){ + if( pIter->abTblPk[i] ){ + zList = rbuMPrintf(p, "%z%sc%d=?%d", zList, zSep, i, i+1); + zSep = " AND "; + } + } + zList = rbuMPrintf(p, + "_rowid_ = (SELECT id FROM rbu_imposter2 WHERE %z)", zList + ); + + }else{ + const char *zSep = ""; + int i; + for(i=0; i<pIter->nTblCol; i++){ + if( pIter->abTblPk[i] ){ + const char *zCol = pIter->azTblCol[i]; + zList = rbuMPrintf(p, "%z%s\"%w\"=?%d", zList, zSep, zCol, i+1); + zSep = " AND "; + } + } + } + return zList; +} + +/* +** The SELECT statement iterating through the keys for the current object +** (p->objiter.pSelect) currently points to a valid row. However, there +** is something wrong with the rbu_control value in the rbu_control value +** stored in the (p->nCol+1)'th column. Set the error code and error message +** of the RBU handle to something reflecting this. +*/ +static void rbuBadControlError(sqlite3rbu *p){ + p->rc = SQLITE_ERROR; + p->zErrmsg = sqlite3_mprintf("invalid rbu_control value"); +} + + +/* +** Return a nul-terminated string containing the comma separated list of +** assignments that should be included following the "SET" keyword of +** an UPDATE statement used to update the table object that the iterator +** passed as the second argument currently points to if the rbu_control +** column of the data_xxx table entry is set to zMask. +** +** The memory for the returned string is obtained from sqlite3_malloc(). +** It is the responsibility of the caller to eventually free it using +** sqlite3_free(). +** +** If an OOM error is encountered when allocating space for the new +** string, an error code is left in the rbu handle passed as the first +** argument and NULL is returned. Or, if an error has already occurred +** when this function is called, NULL is returned immediately, without +** attempting the allocation or modifying the stored error code. +*/ +static char *rbuObjIterGetSetlist( + sqlite3rbu *p, + RbuObjIter *pIter, + const char *zMask +){ + char *zList = 0; + if( p->rc==SQLITE_OK ){ + int i; + + if( (int)strlen(zMask)!=pIter->nTblCol ){ + rbuBadControlError(p); + }else{ + const char *zSep = ""; + for(i=0; i<pIter->nTblCol; i++){ + char c = zMask[pIter->aiSrcOrder[i]]; + if( c=='x' ){ + zList = rbuMPrintf(p, "%z%s\"%w\"=?%d", + zList, zSep, pIter->azTblCol[i], i+1 + ); + zSep = ", "; + } + else if( c=='d' ){ + zList = rbuMPrintf(p, "%z%s\"%w\"=rbu_delta(\"%w\", ?%d)", + zList, zSep, pIter->azTblCol[i], pIter->azTblCol[i], i+1 + ); + zSep = ", "; + } + else if( c=='f' ){ + zList = rbuMPrintf(p, "%z%s\"%w\"=rbu_fossil_delta(\"%w\", ?%d)", + zList, zSep, pIter->azTblCol[i], pIter->azTblCol[i], i+1 + ); + zSep = ", "; + } + } + } + } + return zList; +} + +/* +** Return a nul-terminated string consisting of nByte comma separated +** "?" expressions. For example, if nByte is 3, return a pointer to +** a buffer containing the string "?,?,?". +** +** The memory for the returned string is obtained from sqlite3_malloc(). +** It is the responsibility of the caller to eventually free it using +** sqlite3_free(). 
+** +** If an OOM error is encountered when allocating space for the new +** string, an error code is left in the rbu handle passed as the first +** argument and NULL is returned. Or, if an error has already occurred +** when this function is called, NULL is returned immediately, without +** attempting the allocation or modifying the stored error code. +*/ +static char *rbuObjIterGetBindlist(sqlite3rbu *p, int nBind){ + char *zRet = 0; + int nByte = nBind*2 + 1; + + zRet = (char*)rbuMalloc(p, nByte); + if( zRet ){ + int i; + for(i=0; i<nBind; i++){ + zRet[i*2] = '?'; + zRet[i*2+1] = (i+1==nBind) ? '\0' : ','; + } + } + return zRet; +} + +/* +** The iterator currently points to a table (not index) of type +** RBU_PK_WITHOUT_ROWID. This function creates the PRIMARY KEY +** declaration for the corresponding imposter table. For example, +** if the iterator points to a table created as: +** +** CREATE TABLE t1(a, b, c, PRIMARY KEY(b, a DESC)) WITHOUT ROWID +** +** this function returns: +** +** PRIMARY KEY("b", "a" DESC) +*/ +static char *rbuWithoutRowidPK(sqlite3rbu *p, RbuObjIter *pIter){ + char *z = 0; + assert( pIter->zIdx==0 ); + if( p->rc==SQLITE_OK ){ + const char *zSep = "PRIMARY KEY("; + sqlite3_stmt *pXList = 0; /* PRAGMA index_list = (pIter->zTbl) */ + sqlite3_stmt *pXInfo = 0; /* PRAGMA index_xinfo = <pk-index> */ + + p->rc = prepareFreeAndCollectError(p->dbMain, &pXList, &p->zErrmsg, + sqlite3_mprintf("PRAGMA main.index_list = %Q", pIter->zTbl) + ); + while( p->rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pXList) ){ + const char *zOrig = (const char*)sqlite3_column_text(pXList,3); + if( zOrig && strcmp(zOrig, "pk")==0 ){ + const char *zIdx = (const char*)sqlite3_column_text(pXList,1); + if( zIdx ){ + p->rc = prepareFreeAndCollectError(p->dbMain, &pXInfo, &p->zErrmsg, + sqlite3_mprintf("PRAGMA main.index_xinfo = %Q", zIdx) + ); + } + break; + } + } + rbuFinalize(p, pXList); + + while( p->rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pXInfo) ){ + if( sqlite3_column_int(pXInfo, 5) ){ + /* int iCid = sqlite3_column_int(pXInfo, 0); */ + const char *zCol = (const char*)sqlite3_column_text(pXInfo, 2); + const char *zDesc = sqlite3_column_int(pXInfo, 3) ? " DESC" : ""; + z = rbuMPrintf(p, "%z%s\"%w\"%s", z, zSep, zCol, zDesc); + zSep = ", "; + } + } + z = rbuMPrintf(p, "%z)", z); + rbuFinalize(p, pXInfo); + } + return z; +} + +/* +** This function creates the second imposter table used when writing to +** a table b-tree where the table has an external primary key. If the +** iterator passed as the second argument does not currently point to +** a table (not index) with an external primary key, this function is a +** no-op. +** +** Assuming the iterator does point to a table with an external PK, this +** function creates a WITHOUT ROWID imposter table named "rbu_imposter2" +** used to access that PK index. For example, if the target table is +** declared as follows: +** +** CREATE TABLE t1(a, b TEXT, c REAL, PRIMARY KEY(b, c)); +** +** then the imposter table schema is: +** +** CREATE TABLE rbu_imposter2(c1 TEXT, c2 REAL, id INTEGER) WITHOUT ROWID; +** +*/ +static void rbuCreateImposterTable2(sqlite3rbu *p, RbuObjIter *pIter){ + if( p->rc==SQLITE_OK && pIter->eType==RBU_PK_EXTERNAL ){ + int tnum = pIter->iPkTnum; /* Root page of PK index */ + sqlite3_stmt *pQuery = 0; /* SELECT name ... 
WHERE rootpage = $tnum */ + const char *zIdx = 0; /* Name of PK index */ + sqlite3_stmt *pXInfo = 0; /* PRAGMA main.index_xinfo = $zIdx */ + const char *zComma = ""; + char *zCols = 0; /* Used to build up list of table cols */ + char *zPk = 0; /* Used to build up table PK declaration */ + + /* Figure out the name of the primary key index for the current table. + ** This is needed for the argument to "PRAGMA index_xinfo". Set + ** zIdx to point to a nul-terminated string containing this name. */ + p->rc = prepareAndCollectError(p->dbMain, &pQuery, &p->zErrmsg, + "SELECT name FROM sqlite_master WHERE rootpage = ?" + ); + if( p->rc==SQLITE_OK ){ + sqlite3_bind_int(pQuery, 1, tnum); + if( SQLITE_ROW==sqlite3_step(pQuery) ){ + zIdx = (const char*)sqlite3_column_text(pQuery, 0); + } + } + if( zIdx ){ + p->rc = prepareFreeAndCollectError(p->dbMain, &pXInfo, &p->zErrmsg, + sqlite3_mprintf("PRAGMA main.index_xinfo = %Q", zIdx) + ); + } + rbuFinalize(p, pQuery); + + while( p->rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pXInfo) ){ + int bKey = sqlite3_column_int(pXInfo, 5); + if( bKey ){ + int iCid = sqlite3_column_int(pXInfo, 1); + int bDesc = sqlite3_column_int(pXInfo, 3); + const char *zCollate = (const char*)sqlite3_column_text(pXInfo, 4); + zCols = rbuMPrintf(p, "%z%sc%d %s COLLATE %s", zCols, zComma, + iCid, pIter->azTblType[iCid], zCollate + ); + zPk = rbuMPrintf(p, "%z%sc%d%s", zPk, zComma, iCid, bDesc?" DESC":""); + zComma = ", "; + } + } + zCols = rbuMPrintf(p, "%z, id INTEGER", zCols); + rbuFinalize(p, pXInfo); + + sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, p->dbMain, "main", 1, tnum); + rbuMPrintfExec(p, p->dbMain, + "CREATE TABLE rbu_imposter2(%z, PRIMARY KEY(%z)) WITHOUT ROWID", + zCols, zPk + ); + sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, p->dbMain, "main", 0, 0); + } +} + +/* +** If an error has already occurred when this function is called, it +** immediately returns zero (without doing any work). Or, if an error +** occurs during the execution of this function, it sets the error code +** in the sqlite3rbu object indicated by the first argument and returns +** zero. +** +** The iterator passed as the second argument is guaranteed to point to +** a table (not an index) when this function is called. This function +** attempts to create any imposter table required to write to the main +** table b-tree of the table before returning. Non-zero is returned if +** an imposter table are created, or zero otherwise. +** +** An imposter table is required in all cases except RBU_PK_VTAB. Only +** virtual tables are written to directly. The imposter table has the +** same schema as the actual target table (less any UNIQUE constraints). +** More precisely, the "same schema" means the same columns, types, +** collation sequences. For tables that do not have an external PRIMARY +** KEY, it also means the same PRIMARY KEY declaration. 
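**
** As an illustrative sketch (hypothetical table name and columns), for a
** target table declared as:
**
**   CREATE TABLE t1(a INTEGER PRIMARY KEY, b TEXT NOT NULL, c);
**
** the imposter created here resembles:
**
**   CREATE TABLE "rbu_imp_t1"(
**     "a" INTEGER PRIMARY KEY COLLATE BINARY,
**     "b" TEXT COLLATE BINARY NOT NULL,
**     "c" COLLATE BINARY
**   );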
+*/ +static void rbuCreateImposterTable(sqlite3rbu *p, RbuObjIter *pIter){ + if( p->rc==SQLITE_OK && pIter->eType!=RBU_PK_VTAB ){ + int tnum = pIter->iTnum; + const char *zComma = ""; + char *zSql = 0; + int iCol; + sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, p->dbMain, "main", 0, 1); + + for(iCol=0; p->rc==SQLITE_OK && iCol<pIter->nTblCol; iCol++){ + const char *zPk = ""; + const char *zCol = pIter->azTblCol[iCol]; + const char *zColl = 0; + + p->rc = sqlite3_table_column_metadata( + p->dbMain, "main", pIter->zTbl, zCol, 0, &zColl, 0, 0, 0 + ); + + if( pIter->eType==RBU_PK_IPK && pIter->abTblPk[iCol] ){ + /* If the target table column is an "INTEGER PRIMARY KEY", add + ** "PRIMARY KEY" to the imposter table column declaration. */ + zPk = "PRIMARY KEY "; + } + zSql = rbuMPrintf(p, "%z%s\"%w\" %s %sCOLLATE %s%s", + zSql, zComma, zCol, pIter->azTblType[iCol], zPk, zColl, + (pIter->abNotNull[iCol] ? " NOT NULL" : "") + ); + zComma = ", "; + } + + if( pIter->eType==RBU_PK_WITHOUT_ROWID ){ + char *zPk = rbuWithoutRowidPK(p, pIter); + if( zPk ){ + zSql = rbuMPrintf(p, "%z, %z", zSql, zPk); + } + } + + sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, p->dbMain, "main", 1, tnum); + rbuMPrintfExec(p, p->dbMain, "CREATE TABLE \"rbu_imp_%w\"(%z)%s", + pIter->zTbl, zSql, + (pIter->eType==RBU_PK_WITHOUT_ROWID ? " WITHOUT ROWID" : "") + ); + sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, p->dbMain, "main", 0, 0); + } +} + +/* +** Prepare a statement used to insert rows into the "rbu_tmp_xxx" table. +** Specifically a statement of the form: +** +** INSERT INTO rbu_tmp_xxx VALUES(?, ?, ? ...); +** +** The number of bound variables is equal to the number of columns in +** the target table, plus one (for the rbu_control column), plus one more +** (for the rbu_rowid column) if the target table is an implicit IPK or +** virtual table. +*/ +static void rbuObjIterPrepareTmpInsert( + sqlite3rbu *p, + RbuObjIter *pIter, + const char *zCollist, + const char *zRbuRowid +){ + int bRbuRowid = (pIter->eType==RBU_PK_EXTERNAL || pIter->eType==RBU_PK_NONE); + char *zBind = rbuObjIterGetBindlist(p, pIter->nTblCol + 1 + bRbuRowid); + if( zBind ){ + assert( pIter->pTmpInsert==0 ); + p->rc = prepareFreeAndCollectError( + p->dbRbu, &pIter->pTmpInsert, &p->zErrmsg, sqlite3_mprintf( + "INSERT INTO %s.'rbu_tmp_%q'(rbu_control,%s%s) VALUES(%z)", + p->zStateDb, pIter->zDataTbl, zCollist, zRbuRowid, zBind + )); + } +} + +static void rbuTmpInsertFunc( + sqlite3_context *pCtx, + int nVal, + sqlite3_value **apVal +){ + sqlite3rbu *p = sqlite3_user_data(pCtx); + int rc = SQLITE_OK; + int i; + + for(i=0; rc==SQLITE_OK && i<nVal; i++){ + rc = sqlite3_bind_value(p->objiter.pTmpInsert, i+1, apVal[i]); + } + if( rc==SQLITE_OK ){ + sqlite3_step(p->objiter.pTmpInsert); + rc = sqlite3_reset(p->objiter.pTmpInsert); + } + + if( rc!=SQLITE_OK ){ + sqlite3_result_error_code(pCtx, rc); + } +} + +/* +** Ensure that the SQLite statement handles required to update the +** target database object currently indicated by the iterator passed +** as the second argument are available. 
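**
** As an illustration (hypothetical names, shown only to indicate the
** shape of the generated SQL), when the iterator points to a table "t1"
** with columns (a, b) and an explicit INTEGER PRIMARY KEY on "a", the
** statements prepared here resemble:
**
**   INSERT INTO "rbu_imp_t1"("a", "b") VALUES(?,?);
**   DELETE FROM "rbu_imp_t1" WHERE "a"=?1;
**   SELECT "a", "b", rbu_control FROM 'data_t1';
**
** UPDATE statements are not prepared here - they are built on demand,
** and cached, by rbuGetUpdateStmt().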
+*/ +static int rbuObjIterPrepareAll( + sqlite3rbu *p, + RbuObjIter *pIter, + int nOffset /* Add "LIMIT -1 OFFSET $nOffset" to SELECT */ +){ + assert( pIter->bCleanup==0 ); + if( pIter->pSelect==0 && rbuObjIterCacheTableInfo(p, pIter)==SQLITE_OK ){ + const int tnum = pIter->iTnum; + char *zCollist = 0; /* List of indexed columns */ + char **pz = &p->zErrmsg; + const char *zIdx = pIter->zIdx; + char *zLimit = 0; + + if( nOffset ){ + zLimit = sqlite3_mprintf(" LIMIT -1 OFFSET %d", nOffset); + if( !zLimit ) p->rc = SQLITE_NOMEM; + } + + if( zIdx ){ + const char *zTbl = pIter->zTbl; + char *zImposterCols = 0; /* Columns for imposter table */ + char *zImposterPK = 0; /* Primary key declaration for imposter */ + char *zWhere = 0; /* WHERE clause on PK columns */ + char *zBind = 0; + int nBind = 0; + + assert( pIter->eType!=RBU_PK_VTAB ); + zCollist = rbuObjIterGetIndexCols( + p, pIter, &zImposterCols, &zImposterPK, &zWhere, &nBind + ); + zBind = rbuObjIterGetBindlist(p, nBind); + + /* Create the imposter table used to write to this index. */ + sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, p->dbMain, "main", 0, 1); + sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, p->dbMain, "main", 1,tnum); + rbuMPrintfExec(p, p->dbMain, + "CREATE TABLE \"rbu_imp_%w\"( %s, PRIMARY KEY( %s ) ) WITHOUT ROWID", + zTbl, zImposterCols, zImposterPK + ); + sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, p->dbMain, "main", 0, 0); + + /* Create the statement to insert index entries */ + pIter->nCol = nBind; + if( p->rc==SQLITE_OK ){ + p->rc = prepareFreeAndCollectError( + p->dbMain, &pIter->pInsert, &p->zErrmsg, + sqlite3_mprintf("INSERT INTO \"rbu_imp_%w\" VALUES(%s)", zTbl, zBind) + ); + } + + /* And to delete index entries */ + if( p->rc==SQLITE_OK ){ + p->rc = prepareFreeAndCollectError( + p->dbMain, &pIter->pDelete, &p->zErrmsg, + sqlite3_mprintf("DELETE FROM \"rbu_imp_%w\" WHERE %s", zTbl, zWhere) + ); + } + + /* Create the SELECT statement to read keys in sorted order */ + if( p->rc==SQLITE_OK ){ + char *zSql; + if( pIter->eType==RBU_PK_EXTERNAL || pIter->eType==RBU_PK_NONE ){ + zSql = sqlite3_mprintf( + "SELECT %s, rbu_control FROM %s.'rbu_tmp_%q' ORDER BY %s%s", + zCollist, p->zStateDb, pIter->zDataTbl, + zCollist, zLimit + ); + }else{ + zSql = sqlite3_mprintf( + "SELECT %s, rbu_control FROM '%q' " + "WHERE typeof(rbu_control)='integer' AND rbu_control!=1 " + "UNION ALL " + "SELECT %s, rbu_control FROM %s.'rbu_tmp_%q' " + "ORDER BY %s%s", + zCollist, pIter->zDataTbl, + zCollist, p->zStateDb, pIter->zDataTbl, + zCollist, zLimit + ); + } + p->rc = prepareFreeAndCollectError(p->dbRbu, &pIter->pSelect, pz, zSql); + } + + sqlite3_free(zImposterCols); + sqlite3_free(zImposterPK); + sqlite3_free(zWhere); + sqlite3_free(zBind); + }else{ + int bRbuRowid = (pIter->eType==RBU_PK_VTAB || pIter->eType==RBU_PK_NONE); + const char *zTbl = pIter->zTbl; /* Table this step applies to */ + const char *zWrite; /* Imposter table name */ + + char *zBindings = rbuObjIterGetBindlist(p, pIter->nTblCol + bRbuRowid); + char *zWhere = rbuObjIterGetWhere(p, pIter); + char *zOldlist = rbuObjIterGetOldlist(p, pIter, "old"); + char *zNewlist = rbuObjIterGetOldlist(p, pIter, "new"); + + zCollist = rbuObjIterGetCollist(p, pIter); + pIter->nCol = pIter->nTblCol; + + /* Create the imposter table or tables (if required). */ + rbuCreateImposterTable(p, pIter); + rbuCreateImposterTable2(p, pIter); + zWrite = (pIter->eType==RBU_PK_VTAB ? 
"" : "rbu_imp_"); + + /* Create the INSERT statement to write to the target PK b-tree */ + if( p->rc==SQLITE_OK ){ + p->rc = prepareFreeAndCollectError(p->dbMain, &pIter->pInsert, pz, + sqlite3_mprintf( + "INSERT INTO \"%s%w\"(%s%s) VALUES(%s)", + zWrite, zTbl, zCollist, (bRbuRowid ? ", _rowid_" : ""), zBindings + ) + ); + } + + /* Create the DELETE statement to write to the target PK b-tree */ + if( p->rc==SQLITE_OK ){ + p->rc = prepareFreeAndCollectError(p->dbMain, &pIter->pDelete, pz, + sqlite3_mprintf( + "DELETE FROM \"%s%w\" WHERE %s", zWrite, zTbl, zWhere + ) + ); + } + + if( pIter->abIndexed ){ + const char *zRbuRowid = ""; + if( pIter->eType==RBU_PK_EXTERNAL || pIter->eType==RBU_PK_NONE ){ + zRbuRowid = ", rbu_rowid"; + } + + /* Create the rbu_tmp_xxx table and the triggers to populate it. */ + rbuMPrintfExec(p, p->dbRbu, + "CREATE TABLE IF NOT EXISTS %s.'rbu_tmp_%q' AS " + "SELECT *%s FROM '%q' WHERE 0;" + , p->zStateDb, pIter->zDataTbl + , (pIter->eType==RBU_PK_EXTERNAL ? ", 0 AS rbu_rowid" : "") + , pIter->zDataTbl + ); + + rbuMPrintfExec(p, p->dbMain, + "CREATE TEMP TRIGGER rbu_delete_tr BEFORE DELETE ON \"%s%w\" " + "BEGIN " + " SELECT rbu_tmp_insert(2, %s);" + "END;" + + "CREATE TEMP TRIGGER rbu_update1_tr BEFORE UPDATE ON \"%s%w\" " + "BEGIN " + " SELECT rbu_tmp_insert(2, %s);" + "END;" + + "CREATE TEMP TRIGGER rbu_update2_tr AFTER UPDATE ON \"%s%w\" " + "BEGIN " + " SELECT rbu_tmp_insert(3, %s);" + "END;", + zWrite, zTbl, zOldlist, + zWrite, zTbl, zOldlist, + zWrite, zTbl, zNewlist + ); + + if( pIter->eType==RBU_PK_EXTERNAL || pIter->eType==RBU_PK_NONE ){ + rbuMPrintfExec(p, p->dbMain, + "CREATE TEMP TRIGGER rbu_insert_tr AFTER INSERT ON \"%s%w\" " + "BEGIN " + " SELECT rbu_tmp_insert(0, %s);" + "END;", + zWrite, zTbl, zNewlist + ); + } + + rbuObjIterPrepareTmpInsert(p, pIter, zCollist, zRbuRowid); + } + + /* Create the SELECT statement to read keys from data_xxx */ + if( p->rc==SQLITE_OK ){ + p->rc = prepareFreeAndCollectError(p->dbRbu, &pIter->pSelect, pz, + sqlite3_mprintf( + "SELECT %s, rbu_control%s FROM '%q'%s", + zCollist, (bRbuRowid ? ", rbu_rowid" : ""), + pIter->zDataTbl, zLimit + ) + ); + } + + sqlite3_free(zWhere); + sqlite3_free(zOldlist); + sqlite3_free(zNewlist); + sqlite3_free(zBindings); + } + sqlite3_free(zCollist); + sqlite3_free(zLimit); + } + + return p->rc; +} + +/* +** Set output variable *ppStmt to point to an UPDATE statement that may +** be used to update the imposter table for the main table b-tree of the +** table object that pIter currently points to, assuming that the +** rbu_control column of the data_xyz table contains zMask. +** +** If the zMask string does not specify any columns to update, then this +** is not an error. Output variable *ppStmt is set to NULL in this case. +*/ +static int rbuGetUpdateStmt( + sqlite3rbu *p, /* RBU handle */ + RbuObjIter *pIter, /* Object iterator */ + const char *zMask, /* rbu_control value ('x.x.') */ + sqlite3_stmt **ppStmt /* OUT: UPDATE statement handle */ +){ + RbuUpdateStmt **pp; + RbuUpdateStmt *pUp = 0; + int nUp = 0; + + /* In case an error occurs */ + *ppStmt = 0; + + /* Search for an existing statement. If one is found, shift it to the front + ** of the LRU queue and return immediately. Otherwise, leave nUp pointing + ** to the number of statements currently in the cache and pUp to the + ** last object in the list. 
*/ + for(pp=&pIter->pRbuUpdate; *pp; pp=&((*pp)->pNext)){ + pUp = *pp; + if( strcmp(pUp->zMask, zMask)==0 ){ + *pp = pUp->pNext; + pUp->pNext = pIter->pRbuUpdate; + pIter->pRbuUpdate = pUp; + *ppStmt = pUp->pUpdate; + return SQLITE_OK; + } + nUp++; + } + assert( pUp==0 || pUp->pNext==0 ); + + if( nUp>=SQLITE_RBU_UPDATE_CACHESIZE ){ + for(pp=&pIter->pRbuUpdate; *pp!=pUp; pp=&((*pp)->pNext)); + *pp = 0; + sqlite3_finalize(pUp->pUpdate); + pUp->pUpdate = 0; + }else{ + pUp = (RbuUpdateStmt*)rbuMalloc(p, sizeof(RbuUpdateStmt)+pIter->nTblCol+1); + } + + if( pUp ){ + char *zWhere = rbuObjIterGetWhere(p, pIter); + char *zSet = rbuObjIterGetSetlist(p, pIter, zMask); + char *zUpdate = 0; + + pUp->zMask = (char*)&pUp[1]; + memcpy(pUp->zMask, zMask, pIter->nTblCol); + pUp->pNext = pIter->pRbuUpdate; + pIter->pRbuUpdate = pUp; + + if( zSet ){ + const char *zPrefix = ""; + + if( pIter->eType!=RBU_PK_VTAB ) zPrefix = "rbu_imp_"; + zUpdate = sqlite3_mprintf("UPDATE \"%s%w\" SET %s WHERE %s", + zPrefix, pIter->zTbl, zSet, zWhere + ); + p->rc = prepareFreeAndCollectError( + p->dbMain, &pUp->pUpdate, &p->zErrmsg, zUpdate + ); + *ppStmt = pUp->pUpdate; + } + sqlite3_free(zWhere); + sqlite3_free(zSet); + } + + return p->rc; +} + +static sqlite3 *rbuOpenDbhandle(sqlite3rbu *p, const char *zName){ + sqlite3 *db = 0; + if( p->rc==SQLITE_OK ){ + const int flags = SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE|SQLITE_OPEN_URI; + p->rc = sqlite3_open_v2(zName, &db, flags, p->zVfsName); + if( p->rc ){ + p->zErrmsg = sqlite3_mprintf("%s", sqlite3_errmsg(db)); + sqlite3_close(db); + db = 0; + } + } + return db; +} + +/* +** Open the database handle and attach the RBU database as "rbu". If an +** error occurs, leave an error code and message in the RBU handle. +*/ +static void rbuOpenDatabase(sqlite3rbu *p){ + assert( p->rc==SQLITE_OK ); + assert( p->dbMain==0 && p->dbRbu==0 ); + + p->eStage = 0; + p->dbMain = rbuOpenDbhandle(p, p->zTarget); + p->dbRbu = rbuOpenDbhandle(p, p->zRbu); + + /* If using separate RBU and state databases, attach the state database to + ** the RBU db handle now. */ + if( p->zState ){ + rbuMPrintfExec(p, p->dbRbu, "ATTACH %Q AS stat", p->zState); + memcpy(p->zStateDb, "stat", 4); + }else{ + memcpy(p->zStateDb, "main", 4); + } + + if( p->rc==SQLITE_OK ){ + p->rc = sqlite3_create_function(p->dbMain, + "rbu_tmp_insert", -1, SQLITE_UTF8, (void*)p, rbuTmpInsertFunc, 0, 0 + ); + } + + if( p->rc==SQLITE_OK ){ + p->rc = sqlite3_create_function(p->dbMain, + "rbu_fossil_delta", 2, SQLITE_UTF8, 0, rbuFossilDeltaFunc, 0, 0 + ); + } + + if( p->rc==SQLITE_OK ){ + p->rc = sqlite3_create_function(p->dbRbu, + "rbu_target_name", 1, SQLITE_UTF8, (void*)p, rbuTargetNameFunc, 0, 0 + ); + } + + if( p->rc==SQLITE_OK ){ + p->rc = sqlite3_file_control(p->dbMain, "main", SQLITE_FCNTL_RBU, (void*)p); + } + rbuMPrintfExec(p, p->dbMain, "SELECT * FROM sqlite_master"); + + /* Mark the database file just opened as an RBU target database. If + ** this call returns SQLITE_NOTFOUND, then the RBU vfs is not in use. + ** This is an error. */ + if( p->rc==SQLITE_OK ){ + p->rc = sqlite3_file_control(p->dbMain, "main", SQLITE_FCNTL_RBU, (void*)p); + } + + if( p->rc==SQLITE_NOTFOUND ){ + p->rc = SQLITE_ERROR; + p->zErrmsg = sqlite3_mprintf("rbu vfs not found"); + } +} + +/* +** This routine is a copy of the sqlite3FileSuffix3() routine from the core. +** It is a no-op unless SQLITE_ENABLE_8_3_NAMES is defined. 
+** +** If SQLITE_ENABLE_8_3_NAMES is set at compile-time and if the database +** filename in zBaseFilename is a URI with the "8_3_names=1" parameter and +** if filename in z[] has a suffix (a.k.a. "extension") that is longer than +** three characters, then shorten the suffix on z[] to be the last three +** characters of the original suffix. +** +** If SQLITE_ENABLE_8_3_NAMES is set to 2 at compile-time, then always +** do the suffix shortening regardless of URI parameter. +** +** Examples: +** +** test.db-journal => test.nal +** test.db-wal => test.wal +** test.db-shm => test.shm +** test.db-mj7f3319fa => test.9fa +*/ +static void rbuFileSuffix3(const char *zBase, char *z){ +#ifdef SQLITE_ENABLE_8_3_NAMES +#if SQLITE_ENABLE_8_3_NAMES<2 + if( sqlite3_uri_boolean(zBase, "8_3_names", 0) ) +#endif + { + int i, sz; + sz = sqlite3Strlen30(z); + for(i=sz-1; i>0 && z[i]!='/' && z[i]!='.'; i--){} + if( z[i]=='.' && ALWAYS(sz>i+4) ) memmove(&z[i+1], &z[sz-3], 4); + } +#endif +} + +/* +** Return the current wal-index header checksum for the target database +** as a 64-bit integer. +** +** The checksum is store in the first page of xShmMap memory as an 8-byte +** blob starting at byte offset 40. +*/ +static i64 rbuShmChecksum(sqlite3rbu *p){ + i64 iRet = 0; + if( p->rc==SQLITE_OK ){ + sqlite3_file *pDb = p->pTargetFd->pReal; + u32 volatile *ptr; + p->rc = pDb->pMethods->xShmMap(pDb, 0, 32*1024, 0, (void volatile**)&ptr); + if( p->rc==SQLITE_OK ){ + iRet = ((i64)ptr[10] << 32) + ptr[11]; + } + } + return iRet; +} + +/* +** This function is called as part of initializing or reinitializing an +** incremental checkpoint. +** +** It populates the sqlite3rbu.aFrame[] array with the set of +** (wal frame -> db page) copy operations required to checkpoint the +** current wal file, and obtains the set of shm locks required to safely +** perform the copy operations directly on the file-system. +** +** If argument pState is not NULL, then the incremental checkpoint is +** being resumed. In this case, if the checksum of the wal-index-header +** following recovery is not the same as the checksum saved in the RbuState +** object, then the rbu handle is set to DONE state. This occurs if some +** other client appends a transaction to the wal file in the middle of +** an incremental checkpoint. +*/ +static void rbuSetupCheckpoint(sqlite3rbu *p, RbuState *pState){ + + /* If pState is NULL, then the wal file may not have been opened and + ** recovered. Running a read-statement here to ensure that doing so + ** does not interfere with the "capture" process below. */ + if( pState==0 ){ + p->eStage = 0; + if( p->rc==SQLITE_OK ){ + p->rc = sqlite3_exec(p->dbMain, "SELECT * FROM sqlite_master", 0, 0, 0); + } + } + + /* Assuming no error has occurred, run a "restart" checkpoint with the + ** sqlite3rbu.eStage variable set to CAPTURE. This turns on the following + ** special behaviour in the rbu VFS: + ** + ** * If the exclusive shm WRITER or READ0 lock cannot be obtained, + ** the checkpoint fails with SQLITE_BUSY (normally SQLite would + ** proceed with running a passive checkpoint instead of failing). + ** + ** * Attempts to read from the *-wal file or write to the database file + ** do not perform any IO. Instead, the frame/page combinations that + ** would be read/written are recorded in the sqlite3rbu.aFrame[] + ** array. + ** + ** * Calls to xShmLock(UNLOCK) to release the exclusive shm WRITER, + ** READ0 and CHECKPOINT locks taken as part of the checkpoint are + ** no-ops. 
These locks will not be released until the connection + ** is closed. + ** + ** * Attempting to xSync() the database file causes an SQLITE_INTERNAL + ** error. + ** + ** As a result, unless an error (i.e. OOM or SQLITE_BUSY) occurs, the + ** checkpoint below fails with SQLITE_INTERNAL, and leaves the aFrame[] + ** array populated with a set of (frame -> page) mappings. Because the + ** WRITER, CHECKPOINT and READ0 locks are still held, it is safe to copy + ** data from the wal file into the database file according to the + ** contents of aFrame[]. + */ + if( p->rc==SQLITE_OK ){ + int rc2; + p->eStage = RBU_STAGE_CAPTURE; + rc2 = sqlite3_exec(p->dbMain, "PRAGMA main.wal_checkpoint=restart", 0, 0,0); + if( rc2!=SQLITE_INTERNAL ) p->rc = rc2; + } + + if( p->rc==SQLITE_OK ){ + p->eStage = RBU_STAGE_CKPT; + p->nStep = (pState ? pState->nRow : 0); + p->aBuf = rbuMalloc(p, p->pgsz); + p->iWalCksum = rbuShmChecksum(p); + } + + if( p->rc==SQLITE_OK && pState && pState->iWalCksum!=p->iWalCksum ){ + p->rc = SQLITE_DONE; + p->eStage = RBU_STAGE_DONE; + } +} + +/* +** Called when iAmt bytes are read from offset iOff of the wal file while +** the rbu object is in capture mode. Record the frame number of the frame +** being read in the aFrame[] array. +*/ +static int rbuCaptureWalRead(sqlite3rbu *pRbu, i64 iOff, int iAmt){ + const u32 mReq = (1<<WAL_LOCK_WRITE)|(1<<WAL_LOCK_CKPT)|(1<<WAL_LOCK_READ0); + u32 iFrame; + + if( pRbu->mLock!=mReq ){ + pRbu->rc = SQLITE_BUSY; + return SQLITE_INTERNAL; + } + + pRbu->pgsz = iAmt; + if( pRbu->nFrame==pRbu->nFrameAlloc ){ + int nNew = (pRbu->nFrameAlloc ? pRbu->nFrameAlloc : 64) * 2; + RbuFrame *aNew; + aNew = (RbuFrame*)sqlite3_realloc64(pRbu->aFrame, nNew * sizeof(RbuFrame)); + if( aNew==0 ) return SQLITE_NOMEM; + pRbu->aFrame = aNew; + pRbu->nFrameAlloc = nNew; + } + + iFrame = (u32)((iOff-32) / (i64)(iAmt+24)) + 1; + if( pRbu->iMaxFrame<iFrame ) pRbu->iMaxFrame = iFrame; + pRbu->aFrame[pRbu->nFrame].iWalFrame = iFrame; + pRbu->aFrame[pRbu->nFrame].iDbPage = 0; + pRbu->nFrame++; + return SQLITE_OK; +} + +/* +** Called when a page of data is written to offset iOff of the database +** file while the rbu handle is in capture mode. Record the page number +** of the page being written in the aFrame[] array. +*/ +static int rbuCaptureDbWrite(sqlite3rbu *pRbu, i64 iOff){ + pRbu->aFrame[pRbu->nFrame-1].iDbPage = (u32)(iOff / pRbu->pgsz) + 1; + return SQLITE_OK; +} + +/* +** This is called as part of an incremental checkpoint operation. Copy +** a single frame of data from the wal file into the database file, as +** indicated by the RbuFrame object. +*/ +static void rbuCheckpointFrame(sqlite3rbu *p, RbuFrame *pFrame){ + sqlite3_file *pWal = p->pTargetFd->pWalFd->pReal; + sqlite3_file *pDb = p->pTargetFd->pReal; + i64 iOff; + + assert( p->rc==SQLITE_OK ); + iOff = (i64)(pFrame->iWalFrame-1) * (p->pgsz + 24) + 32 + 24; + p->rc = pWal->pMethods->xRead(pWal, p->aBuf, p->pgsz, iOff); + if( p->rc ) return; + + iOff = (i64)(pFrame->iDbPage-1) * p->pgsz; + p->rc = pDb->pMethods->xWrite(pDb, p->aBuf, p->pgsz, iOff); +} + + +/* +** Take an EXCLUSIVE lock on the database file. 
+*/ +static void rbuLockDatabase(sqlite3rbu *p){ + sqlite3_file *pReal = p->pTargetFd->pReal; + assert( p->rc==SQLITE_OK ); + p->rc = pReal->pMethods->xLock(pReal, SQLITE_LOCK_SHARED); + if( p->rc==SQLITE_OK ){ + p->rc = pReal->pMethods->xLock(pReal, SQLITE_LOCK_EXCLUSIVE); + } +} + +#if defined(_WIN32_WCE) +static LPWSTR rbuWinUtf8ToUnicode(const char *zFilename){ + int nChar; + LPWSTR zWideFilename; + + nChar = MultiByteToWideChar(CP_UTF8, 0, zFilename, -1, NULL, 0); + if( nChar==0 ){ + return 0; + } + zWideFilename = sqlite3_malloc64( nChar*sizeof(zWideFilename[0]) ); + if( zWideFilename==0 ){ + return 0; + } + memset(zWideFilename, 0, nChar*sizeof(zWideFilename[0])); + nChar = MultiByteToWideChar(CP_UTF8, 0, zFilename, -1, zWideFilename, + nChar); + if( nChar==0 ){ + sqlite3_free(zWideFilename); + zWideFilename = 0; + } + return zWideFilename; +} +#endif + +/* +** The RBU handle is currently in RBU_STAGE_OAL state, with a SHARED lock +** on the database file. This proc moves the *-oal file to the *-wal path, +** then reopens the database file (this time in vanilla, non-oal, WAL mode). +** If an error occurs, leave an error code and error message in the rbu +** handle. +*/ +static void rbuMoveOalFile(sqlite3rbu *p){ + const char *zBase = sqlite3_db_filename(p->dbMain, "main"); + + char *zWal = sqlite3_mprintf("%s-wal", zBase); + char *zOal = sqlite3_mprintf("%s-oal", zBase); + + assert( p->eStage==RBU_STAGE_MOVE ); + assert( p->rc==SQLITE_OK && p->zErrmsg==0 ); + if( zWal==0 || zOal==0 ){ + p->rc = SQLITE_NOMEM; + }else{ + /* Move the *-oal file to *-wal. At this point connection p->db is + ** holding a SHARED lock on the target database file (because it is + ** in WAL mode). So no other connection may be writing the db. + ** + ** In order to ensure that there are no database readers, an EXCLUSIVE + ** lock is obtained here before the *-oal is moved to *-wal. + */ + rbuLockDatabase(p); + if( p->rc==SQLITE_OK ){ + rbuFileSuffix3(zBase, zWal); + rbuFileSuffix3(zBase, zOal); + + /* Re-open the databases. */ + rbuObjIterFinalize(&p->objiter); + sqlite3_close(p->dbMain); + sqlite3_close(p->dbRbu); + p->dbMain = 0; + p->dbRbu = 0; + +#if defined(_WIN32_WCE) + { + LPWSTR zWideOal; + LPWSTR zWideWal; + + zWideOal = rbuWinUtf8ToUnicode(zOal); + if( zWideOal ){ + zWideWal = rbuWinUtf8ToUnicode(zWal); + if( zWideWal ){ + if( MoveFileW(zWideOal, zWideWal) ){ + p->rc = SQLITE_OK; + }else{ + p->rc = SQLITE_IOERR; + } + sqlite3_free(zWideWal); + }else{ + p->rc = SQLITE_IOERR_NOMEM; + } + sqlite3_free(zWideOal); + }else{ + p->rc = SQLITE_IOERR_NOMEM; + } + } +#else + p->rc = rename(zOal, zWal) ? SQLITE_IOERR : SQLITE_OK; +#endif + + if( p->rc==SQLITE_OK ){ + rbuOpenDatabase(p); + rbuSetupCheckpoint(p, 0); + } + } + } + + sqlite3_free(zWal); + sqlite3_free(zOal); +} + +/* +** The SELECT statement iterating through the keys for the current object +** (p->objiter.pSelect) currently points to a valid row. This function +** determines the type of operation requested by this row and returns +** one of the following values to indicate the result: +** +** * RBU_INSERT +** * RBU_DELETE +** * RBU_IDX_DELETE +** * RBU_UPDATE +** +** If RBU_UPDATE is returned, then output variable *pzMask is set to +** point to the text value indicating the columns to update. +** +** If the rbu_control field contains an invalid value, an error code and +** message are left in the RBU handle and zero returned. 
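**
** To summarise the encoding handled below: an integer rbu_control value
** of 0 is RBU_INSERT, 1 is RBU_DELETE, 2 is RBU_IDX_DELETE and 3 is
** RBU_IDX_INSERT, while a text value is treated as an update mask and
** RBU_UPDATE is returned. As an illustration (hypothetical rows):
**
**   INSERT INTO data_t1(a, b, rbu_control) VALUES(1, 'one', 0);    -- insert
**   INSERT INTO data_t1(a, b, rbu_control) VALUES(2, NULL, 1);     -- delete
**   INSERT INTO data_t1(a, b, rbu_control) VALUES(3, 'iii', '.x'); -- update b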
+*/ +static int rbuStepType(sqlite3rbu *p, const char **pzMask){ + int iCol = p->objiter.nCol; /* Index of rbu_control column */ + int res = 0; /* Return value */ + + switch( sqlite3_column_type(p->objiter.pSelect, iCol) ){ + case SQLITE_INTEGER: { + int iVal = sqlite3_column_int(p->objiter.pSelect, iCol); + if( iVal==0 ){ + res = RBU_INSERT; + }else if( iVal==1 ){ + res = RBU_DELETE; + }else if( iVal==2 ){ + res = RBU_IDX_DELETE; + }else if( iVal==3 ){ + res = RBU_IDX_INSERT; + } + break; + } + + case SQLITE_TEXT: { + const unsigned char *z = sqlite3_column_text(p->objiter.pSelect, iCol); + if( z==0 ){ + p->rc = SQLITE_NOMEM; + }else{ + *pzMask = (const char*)z; + } + res = RBU_UPDATE; + + break; + } + + default: + break; + } + + if( res==0 ){ + rbuBadControlError(p); + } + return res; +} + +#ifdef SQLITE_DEBUG +/* +** Assert that column iCol of statement pStmt is named zName. +*/ +static void assertColumnName(sqlite3_stmt *pStmt, int iCol, const char *zName){ + const char *zCol = sqlite3_column_name(pStmt, iCol); + assert( 0==sqlite3_stricmp(zName, zCol) ); +} +#else +# define assertColumnName(x,y,z) +#endif + +/* +** This function does the work for an sqlite3rbu_step() call. +** +** The object-iterator (p->objiter) currently points to a valid object, +** and the input cursor (p->objiter.pSelect) currently points to a valid +** input row. Perform whatever processing is required and return. +** +** If no error occurs, SQLITE_OK is returned. Otherwise, an error code +** and message is left in the RBU handle and a copy of the error code +** returned. +*/ +static int rbuStep(sqlite3rbu *p){ + RbuObjIter *pIter = &p->objiter; + const char *zMask = 0; + int i; + int eType = rbuStepType(p, &zMask); + + if( eType ){ + assert( eType!=RBU_UPDATE || pIter->zIdx==0 ); + + if( pIter->zIdx==0 && eType==RBU_IDX_DELETE ){ + rbuBadControlError(p); + } + else if( + eType==RBU_INSERT + || eType==RBU_DELETE + || eType==RBU_IDX_DELETE + || eType==RBU_IDX_INSERT + ){ + sqlite3_value *pVal; + sqlite3_stmt *pWriter; + + assert( eType!=RBU_UPDATE ); + assert( eType!=RBU_DELETE || pIter->zIdx==0 ); + + if( eType==RBU_IDX_DELETE || eType==RBU_DELETE ){ + pWriter = pIter->pDelete; + }else{ + pWriter = pIter->pInsert; + } + + for(i=0; i<pIter->nCol; i++){ + /* If this is an INSERT into a table b-tree and the table has an + ** explicit INTEGER PRIMARY KEY, check that this is not an attempt + ** to write a NULL into the IPK column. That is not permitted. */ + if( eType==RBU_INSERT + && pIter->zIdx==0 && pIter->eType==RBU_PK_IPK && pIter->abTblPk[i] + && sqlite3_column_type(pIter->pSelect, i)==SQLITE_NULL + ){ + p->rc = SQLITE_MISMATCH; + p->zErrmsg = sqlite3_mprintf("datatype mismatch"); + goto step_out; + } + + if( eType==RBU_DELETE && pIter->abTblPk[i]==0 ){ + continue; + } + + pVal = sqlite3_column_value(pIter->pSelect, i); + p->rc = sqlite3_bind_value(pWriter, i+1, pVal); + if( p->rc ) goto step_out; + } + if( pIter->zIdx==0 + && (pIter->eType==RBU_PK_VTAB || pIter->eType==RBU_PK_NONE) + ){ + /* For a virtual table, or a table with no primary key, the + ** SELECT statement is: + ** + ** SELECT <cols>, rbu_control, rbu_rowid FROM .... + ** + ** Hence column_value(pIter->nCol+1). 
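        **
        ** As a concrete illustration (hypothetical columns), with two
        ** table columns the statement reads:
        **
        **   SELECT "a", "b", rbu_control, rbu_rowid FROM 'data_t1'
        **
        ** so nCol==2, rbu_control is at column index 2 (nCol) and
        ** rbu_rowid is at column index 3 (nCol+1).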
+ */ + assertColumnName(pIter->pSelect, pIter->nCol+1, "rbu_rowid"); + pVal = sqlite3_column_value(pIter->pSelect, pIter->nCol+1); + p->rc = sqlite3_bind_value(pWriter, pIter->nCol+1, pVal); + } + if( p->rc==SQLITE_OK ){ + sqlite3_step(pWriter); + p->rc = resetAndCollectError(pWriter, &p->zErrmsg); + } + }else{ + sqlite3_value *pVal; + sqlite3_stmt *pUpdate = 0; + assert( eType==RBU_UPDATE ); + rbuGetUpdateStmt(p, pIter, zMask, &pUpdate); + if( pUpdate ){ + for(i=0; p->rc==SQLITE_OK && i<pIter->nCol; i++){ + char c = zMask[pIter->aiSrcOrder[i]]; + pVal = sqlite3_column_value(pIter->pSelect, i); + if( pIter->abTblPk[i] || c!='.' ){ + p->rc = sqlite3_bind_value(pUpdate, i+1, pVal); + } + } + if( p->rc==SQLITE_OK + && (pIter->eType==RBU_PK_VTAB || pIter->eType==RBU_PK_NONE) + ){ + /* Bind the rbu_rowid value to column _rowid_ */ + assertColumnName(pIter->pSelect, pIter->nCol+1, "rbu_rowid"); + pVal = sqlite3_column_value(pIter->pSelect, pIter->nCol+1); + p->rc = sqlite3_bind_value(pUpdate, pIter->nCol+1, pVal); + } + if( p->rc==SQLITE_OK ){ + sqlite3_step(pUpdate); + p->rc = resetAndCollectError(pUpdate, &p->zErrmsg); + } + } + } + } + + step_out: + return p->rc; +} + +/* +** Increment the schema cookie of the main database opened by p->dbMain. +*/ +static void rbuIncrSchemaCookie(sqlite3rbu *p){ + if( p->rc==SQLITE_OK ){ + int iCookie = 1000000; + sqlite3_stmt *pStmt; + + p->rc = prepareAndCollectError(p->dbMain, &pStmt, &p->zErrmsg, + "PRAGMA schema_version" + ); + if( p->rc==SQLITE_OK ){ + /* Coverage: it may be that this sqlite3_step() cannot fail. There + ** is already a transaction open, so the prepared statement cannot + ** throw an SQLITE_SCHEMA exception. The only database page the + ** statement reads is page 1, which is guaranteed to be in the cache. + ** And no memory allocations are required. */ + if( SQLITE_ROW==sqlite3_step(pStmt) ){ + iCookie = sqlite3_column_int(pStmt, 0); + } + rbuFinalize(p, pStmt); + } + if( p->rc==SQLITE_OK ){ + rbuMPrintfExec(p, p->dbMain, "PRAGMA schema_version = %d", iCookie+1); + } + } +} + +/* +** Update the contents of the rbu_state table within the rbu database. The +** value stored in the RBU_STATE_STAGE column is eStage. All other values +** are determined by inspecting the rbu handle passed as the first argument. +*/ +static void rbuSaveState(sqlite3rbu *p, int eStage){ + if( p->rc==SQLITE_OK || p->rc==SQLITE_DONE ){ + sqlite3_stmt *pInsert = 0; + int rc; + + assert( p->zErrmsg==0 ); + rc = prepareFreeAndCollectError(p->dbRbu, &pInsert, &p->zErrmsg, + sqlite3_mprintf( + "INSERT OR REPLACE INTO %s.rbu_state(k, v) VALUES " + "(%d, %d), " + "(%d, %Q), " + "(%d, %Q), " + "(%d, %d), " + "(%d, %d), " + "(%d, %lld), " + "(%d, %lld), " + "(%d, %lld) ", + p->zStateDb, + RBU_STATE_STAGE, eStage, + RBU_STATE_TBL, p->objiter.zTbl, + RBU_STATE_IDX, p->objiter.zIdx, + RBU_STATE_ROW, p->nStep, + RBU_STATE_PROGRESS, p->nProgress, + RBU_STATE_CKPT, p->iWalCksum, + RBU_STATE_COOKIE, (i64)p->pTargetFd->iCookie, + RBU_STATE_OALSZ, p->iOalSz + ) + ); + assert( pInsert==0 || rc==SQLITE_OK ); + + if( rc==SQLITE_OK ){ + sqlite3_step(pInsert); + rc = sqlite3_finalize(pInsert); + } + if( rc!=SQLITE_OK ) p->rc = rc; + } +} + + +/* +** Step the RBU object. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3rbu_step(sqlite3rbu *p){ + if( p ){ + switch( p->eStage ){ + case RBU_STAGE_OAL: { + RbuObjIter *pIter = &p->objiter; + while( p->rc==SQLITE_OK && pIter->zTbl ){ + + if( pIter->bCleanup ){ + /* Clean up the rbu_tmp_xxx table for the previous table. 
It + ** cannot be dropped as there are currently active SQL statements. + ** But the contents can be deleted. */ + if( pIter->abIndexed ){ + rbuMPrintfExec(p, p->dbRbu, + "DELETE FROM %s.'rbu_tmp_%q'", p->zStateDb, pIter->zDataTbl + ); + } + }else{ + rbuObjIterPrepareAll(p, pIter, 0); + + /* Advance to the next row to process. */ + if( p->rc==SQLITE_OK ){ + int rc = sqlite3_step(pIter->pSelect); + if( rc==SQLITE_ROW ){ + p->nProgress++; + p->nStep++; + return rbuStep(p); + } + p->rc = sqlite3_reset(pIter->pSelect); + p->nStep = 0; + } + } + + rbuObjIterNext(p, pIter); + } + + if( p->rc==SQLITE_OK ){ + assert( pIter->zTbl==0 ); + rbuSaveState(p, RBU_STAGE_MOVE); + rbuIncrSchemaCookie(p); + if( p->rc==SQLITE_OK ){ + p->rc = sqlite3_exec(p->dbMain, "COMMIT", 0, 0, &p->zErrmsg); + } + if( p->rc==SQLITE_OK ){ + p->rc = sqlite3_exec(p->dbRbu, "COMMIT", 0, 0, &p->zErrmsg); + } + p->eStage = RBU_STAGE_MOVE; + } + break; + } + + case RBU_STAGE_MOVE: { + if( p->rc==SQLITE_OK ){ + rbuMoveOalFile(p); + p->nProgress++; + } + break; + } + + case RBU_STAGE_CKPT: { + if( p->rc==SQLITE_OK ){ + if( p->nStep>=p->nFrame ){ + sqlite3_file *pDb = p->pTargetFd->pReal; + + /* Sync the db file */ + p->rc = pDb->pMethods->xSync(pDb, SQLITE_SYNC_NORMAL); + + /* Update nBackfill */ + if( p->rc==SQLITE_OK ){ + void volatile *ptr; + p->rc = pDb->pMethods->xShmMap(pDb, 0, 32*1024, 0, &ptr); + if( p->rc==SQLITE_OK ){ + ((u32 volatile*)ptr)[24] = p->iMaxFrame; + } + } + + if( p->rc==SQLITE_OK ){ + p->eStage = RBU_STAGE_DONE; + p->rc = SQLITE_DONE; + } + }else{ + RbuFrame *pFrame = &p->aFrame[p->nStep]; + rbuCheckpointFrame(p, pFrame); + p->nStep++; + } + p->nProgress++; + } + break; + } + + default: + break; + } + return p->rc; + }else{ + return SQLITE_NOMEM; + } +} + +/* +** Free an RbuState object allocated by rbuLoadState(). +*/ +static void rbuFreeState(RbuState *p){ + if( p ){ + sqlite3_free(p->zTbl); + sqlite3_free(p->zIdx); + sqlite3_free(p); + } +} + +/* +** Allocate an RbuState object and load the contents of the rbu_state +** table into it. Return a pointer to the new object. It is the +** responsibility of the caller to eventually free the object using +** sqlite3_free(). +** +** If an error occurs, leave an error code and message in the rbu handle +** and return NULL. 
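**
** The rbu_state table itself is a simple key/value store maintained by
** rbuSaveState(). As an illustration (hypothetical values), a partly
** applied update might contain rows equivalent to:
**
**   (RBU_STATE_STAGE,    RBU_STAGE_OAL)  -- still writing the *-oal file
**   (RBU_STATE_TBL,      't1')           -- table currently being processed
**   (RBU_STATE_IDX,      NULL)           -- no index currently being processed
**   (RBU_STATE_ROW,      135)            -- input rows already applied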
+*/ +static RbuState *rbuLoadState(sqlite3rbu *p){ + RbuState *pRet = 0; + sqlite3_stmt *pStmt = 0; + int rc; + int rc2; + + pRet = (RbuState*)rbuMalloc(p, sizeof(RbuState)); + if( pRet==0 ) return 0; + + rc = prepareFreeAndCollectError(p->dbRbu, &pStmt, &p->zErrmsg, + sqlite3_mprintf("SELECT k, v FROM %s.rbu_state", p->zStateDb) + ); + while( rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pStmt) ){ + switch( sqlite3_column_int(pStmt, 0) ){ + case RBU_STATE_STAGE: + pRet->eStage = sqlite3_column_int(pStmt, 1); + if( pRet->eStage!=RBU_STAGE_OAL + && pRet->eStage!=RBU_STAGE_MOVE + && pRet->eStage!=RBU_STAGE_CKPT + ){ + p->rc = SQLITE_CORRUPT; + } + break; + + case RBU_STATE_TBL: + pRet->zTbl = rbuStrndup((char*)sqlite3_column_text(pStmt, 1), &rc); + break; + + case RBU_STATE_IDX: + pRet->zIdx = rbuStrndup((char*)sqlite3_column_text(pStmt, 1), &rc); + break; + + case RBU_STATE_ROW: + pRet->nRow = sqlite3_column_int(pStmt, 1); + break; + + case RBU_STATE_PROGRESS: + pRet->nProgress = sqlite3_column_int64(pStmt, 1); + break; + + case RBU_STATE_CKPT: + pRet->iWalCksum = sqlite3_column_int64(pStmt, 1); + break; + + case RBU_STATE_COOKIE: + pRet->iCookie = (u32)sqlite3_column_int64(pStmt, 1); + break; + + case RBU_STATE_OALSZ: + pRet->iOalSz = (u32)sqlite3_column_int64(pStmt, 1); + break; + + default: + rc = SQLITE_CORRUPT; + break; + } + } + rc2 = sqlite3_finalize(pStmt); + if( rc==SQLITE_OK ) rc = rc2; + + p->rc = rc; + return pRet; +} + +/* +** Compare strings z1 and z2, returning 0 if they are identical, or non-zero +** otherwise. Either or both argument may be NULL. Two NULL values are +** considered equal, and NULL is considered distinct from all other values. +*/ +static int rbuStrCompare(const char *z1, const char *z2){ + if( z1==0 && z2==0 ) return 0; + if( z1==0 || z2==0 ) return 1; + return (sqlite3_stricmp(z1, z2)!=0); +} + +/* +** This function is called as part of sqlite3rbu_open() when initializing +** an rbu handle in OAL stage. If the rbu update has not started (i.e. +** the rbu_state table was empty) it is a no-op. Otherwise, it arranges +** things so that the next call to sqlite3rbu_step() continues on from +** where the previous rbu handle left off. +** +** If an error occurs, an error code and error message are left in the +** rbu handle passed as the first argument. +*/ +static void rbuSetupOal(sqlite3rbu *p, RbuState *pState){ + assert( p->rc==SQLITE_OK ); + if( pState->zTbl ){ + RbuObjIter *pIter = &p->objiter; + int rc = SQLITE_OK; + + while( rc==SQLITE_OK && pIter->zTbl && (pIter->bCleanup + || rbuStrCompare(pIter->zIdx, pState->zIdx) + || rbuStrCompare(pIter->zTbl, pState->zTbl) + )){ + rc = rbuObjIterNext(p, pIter); + } + + if( rc==SQLITE_OK && !pIter->zTbl ){ + rc = SQLITE_ERROR; + p->zErrmsg = sqlite3_mprintf("rbu_state mismatch error"); + } + + if( rc==SQLITE_OK ){ + p->nStep = pState->nRow; + rc = rbuObjIterPrepareAll(p, &p->objiter, p->nStep); + } + + p->rc = rc; + } +} + +/* +** If there is a "*-oal" file in the file-system corresponding to the +** target database in the file-system, delete it. If an error occurs, +** leave an error code and error message in the rbu handle. +*/ +static void rbuDeleteOalFile(sqlite3rbu *p){ + char *zOal = rbuMPrintf(p, "%s-oal", p->zTarget); + if( zOal ){ + sqlite3_vfs *pVfs = sqlite3_vfs_find(0); + assert( pVfs && p->rc==SQLITE_OK && p->zErrmsg==0 ); + pVfs->xDelete(pVfs, zOal, 0); + sqlite3_free(zOal); + } +} + +/* +** Allocate a private rbu VFS for the rbu handle passed as the only +** argument. 
This VFS will be used unless the call to sqlite3rbu_open() +** specified a URI with a vfs=? option in place of a target database +** file name. +*/ +static void rbuCreateVfs(sqlite3rbu *p){ + int rnd; + char zRnd[64]; + + assert( p->rc==SQLITE_OK ); + sqlite3_randomness(sizeof(int), (void*)&rnd); + sqlite3_snprintf(sizeof(zRnd), zRnd, "rbu_vfs_%d", rnd); + p->rc = sqlite3rbu_create_vfs(zRnd, 0); + if( p->rc==SQLITE_OK ){ + sqlite3_vfs *pVfs = sqlite3_vfs_find(zRnd); + assert( pVfs ); + p->zVfsName = pVfs->zName; + } +} + +/* +** Destroy the private VFS created for the rbu handle passed as the only +** argument by an earlier call to rbuCreateVfs(). +*/ +static void rbuDeleteVfs(sqlite3rbu *p){ + if( p->zVfsName ){ + sqlite3rbu_destroy_vfs(p->zVfsName); + p->zVfsName = 0; + } +} + +/* +** Open and return a new RBU handle. +*/ +SQLITE_API sqlite3rbu *SQLITE_STDCALL sqlite3rbu_open( + const char *zTarget, + const char *zRbu, + const char *zState +){ + sqlite3rbu *p; + size_t nTarget = strlen(zTarget); + size_t nRbu = strlen(zRbu); + size_t nState = zState ? strlen(zState) : 0; + size_t nByte = sizeof(sqlite3rbu) + nTarget+1 + nRbu+1+ nState+1; + + p = (sqlite3rbu*)sqlite3_malloc64(nByte); + if( p ){ + RbuState *pState = 0; + + /* Create the custom VFS. */ + memset(p, 0, sizeof(sqlite3rbu)); + rbuCreateVfs(p); + + /* Open the target database */ + if( p->rc==SQLITE_OK ){ + p->zTarget = (char*)&p[1]; + memcpy(p->zTarget, zTarget, nTarget+1); + p->zRbu = &p->zTarget[nTarget+1]; + memcpy(p->zRbu, zRbu, nRbu+1); + if( zState ){ + p->zState = &p->zRbu[nRbu+1]; + memcpy(p->zState, zState, nState+1); + } + rbuOpenDatabase(p); + } + + /* If it has not already been created, create the rbu_state table */ + rbuMPrintfExec(p, p->dbRbu, RBU_CREATE_STATE, p->zStateDb); + + if( p->rc==SQLITE_OK ){ + pState = rbuLoadState(p); + assert( pState || p->rc!=SQLITE_OK ); + if( p->rc==SQLITE_OK ){ + + if( pState->eStage==0 ){ + rbuDeleteOalFile(p); + p->eStage = RBU_STAGE_OAL; + }else{ + p->eStage = pState->eStage; + } + p->nProgress = pState->nProgress; + p->iOalSz = pState->iOalSz; + } + } + assert( p->rc!=SQLITE_OK || p->eStage!=0 ); + + if( p->rc==SQLITE_OK && p->pTargetFd->pWalFd ){ + if( p->eStage==RBU_STAGE_OAL ){ + p->rc = SQLITE_ERROR; + p->zErrmsg = sqlite3_mprintf("cannot update wal mode database"); + }else if( p->eStage==RBU_STAGE_MOVE ){ + p->eStage = RBU_STAGE_CKPT; + p->nStep = 0; + } + } + + if( p->rc==SQLITE_OK + && (p->eStage==RBU_STAGE_OAL || p->eStage==RBU_STAGE_MOVE) + && pState->eStage!=0 && p->pTargetFd->iCookie!=pState->iCookie + ){ + /* At this point (pTargetFd->iCookie) contains the value of the + ** change-counter cookie (the thing that gets incremented when a + ** transaction is committed in rollback mode) currently stored on + ** page 1 of the database file. */ + p->rc = SQLITE_BUSY; + p->zErrmsg = sqlite3_mprintf("database modified during rbu update"); + } + + if( p->rc==SQLITE_OK ){ + if( p->eStage==RBU_STAGE_OAL ){ + sqlite3 *db = p->dbMain; + + /* Open transactions both databases. The *-oal file is opened or + ** created at this point. */ + p->rc = sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, &p->zErrmsg); + if( p->rc==SQLITE_OK ){ + p->rc = sqlite3_exec(p->dbRbu, "BEGIN IMMEDIATE", 0, 0, &p->zErrmsg); + } + + /* Check if the main database is a zipvfs db. If it is, set the upper + ** level pager to use "journal_mode=off". This prevents it from + ** generating a large journal using a temp file. 
*/ + if( p->rc==SQLITE_OK ){ + int frc = sqlite3_file_control(db, "main", SQLITE_FCNTL_ZIPVFS, 0); + if( frc==SQLITE_OK ){ + p->rc = sqlite3_exec(db, "PRAGMA journal_mode=off",0,0,&p->zErrmsg); + } + } + + /* Point the object iterator at the first object */ + if( p->rc==SQLITE_OK ){ + p->rc = rbuObjIterFirst(p, &p->objiter); + } + + /* If the RBU database contains no data_xxx tables, declare the RBU + ** update finished. */ + if( p->rc==SQLITE_OK && p->objiter.zTbl==0 ){ + p->rc = SQLITE_DONE; + } + + if( p->rc==SQLITE_OK ){ + rbuSetupOal(p, pState); + } + + }else if( p->eStage==RBU_STAGE_MOVE ){ + /* no-op */ + }else if( p->eStage==RBU_STAGE_CKPT ){ + rbuSetupCheckpoint(p, pState); + }else if( p->eStage==RBU_STAGE_DONE ){ + p->rc = SQLITE_DONE; + }else{ + p->rc = SQLITE_CORRUPT; + } + } + + rbuFreeState(pState); + } + + return p; +} + + +/* +** Return the database handle used by pRbu. +*/ +SQLITE_API sqlite3 *SQLITE_STDCALL sqlite3rbu_db(sqlite3rbu *pRbu, int bRbu){ + sqlite3 *db = 0; + if( pRbu ){ + db = (bRbu ? pRbu->dbRbu : pRbu->dbMain); + } + return db; +} + + +/* +** If the error code currently stored in the RBU handle is SQLITE_CONSTRAINT, +** then edit any error message string so as to remove all occurrences of +** the pattern "rbu_imp_[0-9]*". +*/ +static void rbuEditErrmsg(sqlite3rbu *p){ + if( p->rc==SQLITE_CONSTRAINT && p->zErrmsg ){ + int i; + size_t nErrmsg = strlen(p->zErrmsg); + for(i=0; i<(nErrmsg-8); i++){ + if( memcmp(&p->zErrmsg[i], "rbu_imp_", 8)==0 ){ + int nDel = 8; + while( p->zErrmsg[i+nDel]>='0' && p->zErrmsg[i+nDel]<='9' ) nDel++; + memmove(&p->zErrmsg[i], &p->zErrmsg[i+nDel], nErrmsg + 1 - i - nDel); + nErrmsg -= nDel; + } + } + } +} + +/* +** Close the RBU handle. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3rbu_close(sqlite3rbu *p, char **pzErrmsg){ + int rc; + if( p ){ + + /* Commit the transaction to the *-oal file. */ + if( p->rc==SQLITE_OK && p->eStage==RBU_STAGE_OAL ){ + p->rc = sqlite3_exec(p->dbMain, "COMMIT", 0, 0, &p->zErrmsg); + } + + rbuSaveState(p, p->eStage); + + if( p->rc==SQLITE_OK && p->eStage==RBU_STAGE_OAL ){ + p->rc = sqlite3_exec(p->dbRbu, "COMMIT", 0, 0, &p->zErrmsg); + } + + /* Close any open statement handles. */ + rbuObjIterFinalize(&p->objiter); + + /* Close the open database handle and VFS object. */ + sqlite3_close(p->dbMain); + sqlite3_close(p->dbRbu); + rbuDeleteVfs(p); + sqlite3_free(p->aBuf); + sqlite3_free(p->aFrame); + + rbuEditErrmsg(p); + rc = p->rc; + *pzErrmsg = p->zErrmsg; + sqlite3_free(p); + }else{ + rc = SQLITE_NOMEM; + *pzErrmsg = 0; + } + return rc; +} + +/* +** Return the total number of key-value operations (inserts, deletes or +** updates) that have been performed on the target database since the +** current RBU update was started. 
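+**
+** For illustration only -- a minimal sketch of how a caller typically drives
+** an RBU update and polls this counter. The file names are placeholders and
+** most error handling is elided:
+**
+**   int rc; char *zErr = 0;
+**   sqlite3rbu *pRbu = sqlite3rbu_open("target.db", "changes.rbu", 0);
+**   if( pRbu ){
+**     while( sqlite3rbu_step(pRbu)==SQLITE_OK ){
+**       printf("%lld operations applied\n",
+**              (long long)sqlite3rbu_progress(pRbu));
+**     }
+**     rc = sqlite3rbu_close(pRbu, &zErr);  /* SQLITE_DONE if update finished */
+**     sqlite3_free(zErr);
+**   }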
+*/ +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3rbu_progress(sqlite3rbu *pRbu){ + return pRbu->nProgress; +} + +SQLITE_API int SQLITE_STDCALL sqlite3rbu_savestate(sqlite3rbu *p){ + int rc = p->rc; + + if( rc==SQLITE_DONE ) return SQLITE_OK; + + assert( p->eStage>=RBU_STAGE_OAL && p->eStage<=RBU_STAGE_DONE ); + if( p->eStage==RBU_STAGE_OAL ){ + assert( rc!=SQLITE_DONE ); + if( rc==SQLITE_OK ) rc = sqlite3_exec(p->dbMain, "COMMIT", 0, 0, 0); + } + + p->rc = rc; + rbuSaveState(p, p->eStage); + rc = p->rc; + + if( p->eStage==RBU_STAGE_OAL ){ + assert( rc!=SQLITE_DONE ); + if( rc==SQLITE_OK ) rc = sqlite3_exec(p->dbRbu, "COMMIT", 0, 0, 0); + if( rc==SQLITE_OK ) rc = sqlite3_exec(p->dbRbu, "BEGIN IMMEDIATE", 0, 0, 0); + if( rc==SQLITE_OK ) rc = sqlite3_exec(p->dbMain, "BEGIN IMMEDIATE", 0, 0,0); + } + + p->rc = rc; + return rc; +} + +/************************************************************************** +** Beginning of RBU VFS shim methods. The VFS shim modifies the behaviour +** of a standard VFS in the following ways: +** +** 1. Whenever the first page of a main database file is read or +** written, the value of the change-counter cookie is stored in +** rbu_file.iCookie. Similarly, the value of the "write-version" +** database header field is stored in rbu_file.iWriteVer. This ensures +** that the values are always trustworthy within an open transaction. +** +** 2. Whenever an SQLITE_OPEN_WAL file is opened, the (rbu_file.pWalFd) +** member variable of the associated database file descriptor is set +** to point to the new file. A mutex protected linked list of all main +** db fds opened using a particular RBU VFS is maintained at +** rbu_vfs.pMain to facilitate this. +** +** 3. Using a new file-control "SQLITE_FCNTL_RBU", a main db rbu_file +** object can be marked as the target database of an RBU update. This +** turns on the following extra special behaviour: +** +** 3a. If xAccess() is called to check if there exists a *-wal file +** associated with an RBU target database currently in RBU_STAGE_OAL +** stage (preparing the *-oal file), the following special handling +** applies: +** +** * if the *-wal file does exist, return SQLITE_CANTOPEN. An RBU +** target database may not be in wal mode already. +** +** * if the *-wal file does not exist, set the output parameter to +** non-zero (to tell SQLite that it does exist) anyway. +** +** Then, when xOpen() is called to open the *-wal file associated with +** the RBU target in RBU_STAGE_OAL stage, instead of opening the *-wal +** file, the rbu vfs opens the corresponding *-oal file instead. +** +** 3b. The *-shm pages returned by xShmMap() for a target db file in +** RBU_STAGE_OAL mode are actually stored in heap memory. This is to +** avoid creating a *-shm file on disk. Additionally, xShmLock() calls +** are no-ops on target database files in RBU_STAGE_OAL mode. This is +** because assert() statements in some VFS implementations fail if +** xShmLock() is called before xShmMap(). +** +** 3c. If an EXCLUSIVE lock is attempted on a target database file in any +** mode except RBU_STAGE_DONE (all work completed and checkpointed), it +** fails with an SQLITE_BUSY error. This is to stop RBU connections +** from automatically checkpointing a *-wal (or *-oal) file from within +** sqlite3_close(). +** +** 3d. In RBU_STAGE_CAPTURE mode, all xRead() calls on the wal file, and +** all xWrite() calls on the target database file perform no IO. +** Instead the frame and page numbers that would be read and written +** are recorded. 
Additionally, successful attempts to obtain exclusive +** xShmLock() WRITER, CHECKPOINTER and READ0 locks on the target +** database file are recorded. xShmLock() calls to unlock the same +** locks are no-ops (so that once obtained, these locks are never +** relinquished). Finally, calls to xSync() on the target database +** file fail with SQLITE_INTERNAL errors. +*/ + +static void rbuUnlockShm(rbu_file *p){ + if( p->pRbu ){ + int (*xShmLock)(sqlite3_file*,int,int,int) = p->pReal->pMethods->xShmLock; + int i; + for(i=0; i<SQLITE_SHM_NLOCK;i++){ + if( (1<<i) & p->pRbu->mLock ){ + xShmLock(p->pReal, i, 1, SQLITE_SHM_UNLOCK|SQLITE_SHM_EXCLUSIVE); + } + } + p->pRbu->mLock = 0; + } +} + +/* +** Close an rbu file. +*/ +static int rbuVfsClose(sqlite3_file *pFile){ + rbu_file *p = (rbu_file*)pFile; + int rc; + int i; + + /* Free the contents of the apShm[] array. And the array itself. */ + for(i=0; i<p->nShm; i++){ + sqlite3_free(p->apShm[i]); + } + sqlite3_free(p->apShm); + p->apShm = 0; + sqlite3_free(p->zDel); + + if( p->openFlags & SQLITE_OPEN_MAIN_DB ){ + rbu_file **pp; + sqlite3_mutex_enter(p->pRbuVfs->mutex); + for(pp=&p->pRbuVfs->pMain; *pp!=p; pp=&((*pp)->pMainNext)); + *pp = p->pMainNext; + sqlite3_mutex_leave(p->pRbuVfs->mutex); + rbuUnlockShm(p); + p->pReal->pMethods->xShmUnmap(p->pReal, 0); + } + + /* Close the underlying file handle */ + rc = p->pReal->pMethods->xClose(p->pReal); + return rc; +} + + +/* +** Read and return an unsigned 32-bit big-endian integer from the buffer +** passed as the only argument. +*/ +static u32 rbuGetU32(u8 *aBuf){ + return ((u32)aBuf[0] << 24) + + ((u32)aBuf[1] << 16) + + ((u32)aBuf[2] << 8) + + ((u32)aBuf[3]); +} + +/* +** Read data from an rbuVfs-file. +*/ +static int rbuVfsRead( + sqlite3_file *pFile, + void *zBuf, + int iAmt, + sqlite_int64 iOfst +){ + rbu_file *p = (rbu_file*)pFile; + sqlite3rbu *pRbu = p->pRbu; + int rc; + + if( pRbu && pRbu->eStage==RBU_STAGE_CAPTURE ){ + assert( p->openFlags & SQLITE_OPEN_WAL ); + rc = rbuCaptureWalRead(p->pRbu, iOfst, iAmt); + }else{ + if( pRbu && pRbu->eStage==RBU_STAGE_OAL + && (p->openFlags & SQLITE_OPEN_WAL) + && iOfst>=pRbu->iOalSz + ){ + rc = SQLITE_OK; + memset(zBuf, 0, iAmt); + }else{ + rc = p->pReal->pMethods->xRead(p->pReal, zBuf, iAmt, iOfst); + } + if( rc==SQLITE_OK && iOfst==0 && (p->openFlags & SQLITE_OPEN_MAIN_DB) ){ + /* These look like magic numbers. But they are stable, as they are part + ** of the definition of the SQLite file format, which may not change. */ + u8 *pBuf = (u8*)zBuf; + p->iCookie = rbuGetU32(&pBuf[24]); + p->iWriteVer = pBuf[19]; + } + } + return rc; +} + +/* +** Write data to an rbuVfs-file. +*/ +static int rbuVfsWrite( + sqlite3_file *pFile, + const void *zBuf, + int iAmt, + sqlite_int64 iOfst +){ + rbu_file *p = (rbu_file*)pFile; + sqlite3rbu *pRbu = p->pRbu; + int rc; + + if( pRbu && pRbu->eStage==RBU_STAGE_CAPTURE ){ + assert( p->openFlags & SQLITE_OPEN_MAIN_DB ); + rc = rbuCaptureDbWrite(p->pRbu, iOfst); + }else{ + if( pRbu && pRbu->eStage==RBU_STAGE_OAL + && (p->openFlags & SQLITE_OPEN_WAL) + && iOfst>=pRbu->iOalSz + ){ + pRbu->iOalSz = iAmt + iOfst; + } + rc = p->pReal->pMethods->xWrite(p->pReal, zBuf, iAmt, iOfst); + if( rc==SQLITE_OK && iOfst==0 && (p->openFlags & SQLITE_OPEN_MAIN_DB) ){ + /* These look like magic numbers. But they are stable, as they are part + ** of the definition of the SQLite file format, which may not change. 
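+      ** (Specifically: the 4-byte big-endian value at byte offset 24 of the
+      ** database header is the file change counter, and the byte at offset 19
+      ** is one of the two file-format version fields -- a value of 2 there
+      ** indicates a WAL-mode database.)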
*/ + u8 *pBuf = (u8*)zBuf; + p->iCookie = rbuGetU32(&pBuf[24]); + p->iWriteVer = pBuf[19]; + } + } + return rc; +} + +/* +** Truncate an rbuVfs-file. +*/ +static int rbuVfsTruncate(sqlite3_file *pFile, sqlite_int64 size){ + rbu_file *p = (rbu_file*)pFile; + return p->pReal->pMethods->xTruncate(p->pReal, size); +} + +/* +** Sync an rbuVfs-file. +*/ +static int rbuVfsSync(sqlite3_file *pFile, int flags){ + rbu_file *p = (rbu_file *)pFile; + if( p->pRbu && p->pRbu->eStage==RBU_STAGE_CAPTURE ){ + if( p->openFlags & SQLITE_OPEN_MAIN_DB ){ + return SQLITE_INTERNAL; + } + return SQLITE_OK; + } + return p->pReal->pMethods->xSync(p->pReal, flags); +} + +/* +** Return the current file-size of an rbuVfs-file. +*/ +static int rbuVfsFileSize(sqlite3_file *pFile, sqlite_int64 *pSize){ + rbu_file *p = (rbu_file *)pFile; + return p->pReal->pMethods->xFileSize(p->pReal, pSize); +} + +/* +** Lock an rbuVfs-file. +*/ +static int rbuVfsLock(sqlite3_file *pFile, int eLock){ + rbu_file *p = (rbu_file*)pFile; + sqlite3rbu *pRbu = p->pRbu; + int rc = SQLITE_OK; + + assert( p->openFlags & (SQLITE_OPEN_MAIN_DB|SQLITE_OPEN_TEMP_DB) ); + if( pRbu && eLock==SQLITE_LOCK_EXCLUSIVE && pRbu->eStage!=RBU_STAGE_DONE ){ + /* Do not allow EXCLUSIVE locks. Preventing SQLite from taking this + ** prevents it from checkpointing the database from sqlite3_close(). */ + rc = SQLITE_BUSY; + }else{ + rc = p->pReal->pMethods->xLock(p->pReal, eLock); + } + + return rc; +} + +/* +** Unlock an rbuVfs-file. +*/ +static int rbuVfsUnlock(sqlite3_file *pFile, int eLock){ + rbu_file *p = (rbu_file *)pFile; + return p->pReal->pMethods->xUnlock(p->pReal, eLock); +} + +/* +** Check if another file-handle holds a RESERVED lock on an rbuVfs-file. +*/ +static int rbuVfsCheckReservedLock(sqlite3_file *pFile, int *pResOut){ + rbu_file *p = (rbu_file *)pFile; + return p->pReal->pMethods->xCheckReservedLock(p->pReal, pResOut); +} + +/* +** File control method. For custom operations on an rbuVfs-file. +*/ +static int rbuVfsFileControl(sqlite3_file *pFile, int op, void *pArg){ + rbu_file *p = (rbu_file *)pFile; + int (*xControl)(sqlite3_file*,int,void*) = p->pReal->pMethods->xFileControl; + int rc; + + assert( p->openFlags & (SQLITE_OPEN_MAIN_DB|SQLITE_OPEN_TEMP_DB) + || p->openFlags & (SQLITE_OPEN_TRANSIENT_DB|SQLITE_OPEN_TEMP_JOURNAL) + ); + if( op==SQLITE_FCNTL_RBU ){ + sqlite3rbu *pRbu = (sqlite3rbu*)pArg; + + /* First try to find another RBU vfs lower down in the vfs stack. If + ** one is found, this vfs will operate in pass-through mode. The lower + ** level vfs will do the special RBU handling. */ + rc = xControl(p->pReal, op, pArg); + + if( rc==SQLITE_NOTFOUND ){ + /* Now search for a zipvfs instance lower down in the VFS stack. If + ** one is found, this is an error. */ + void *dummy = 0; + rc = xControl(p->pReal, SQLITE_FCNTL_ZIPVFS, &dummy); + if( rc==SQLITE_OK ){ + rc = SQLITE_ERROR; + pRbu->zErrmsg = sqlite3_mprintf("rbu/zipvfs setup error"); + }else if( rc==SQLITE_NOTFOUND ){ + pRbu->pTargetFd = p; + p->pRbu = pRbu; + if( p->pWalFd ) p->pWalFd->pRbu = pRbu; + rc = SQLITE_OK; + } + } + return rc; + } + + rc = xControl(p->pReal, op, pArg); + if( rc==SQLITE_OK && op==SQLITE_FCNTL_VFSNAME ){ + rbu_vfs *pRbuVfs = p->pRbuVfs; + char *zIn = *(char**)pArg; + char *zOut = sqlite3_mprintf("rbu(%s)/%z", pRbuVfs->base.zName, zIn); + *(char**)pArg = zOut; + if( zOut==0 ) rc = SQLITE_NOMEM; + } + + return rc; +} + +/* +** Return the sector-size in bytes for an rbuVfs-file. 
+*/ +static int rbuVfsSectorSize(sqlite3_file *pFile){ + rbu_file *p = (rbu_file *)pFile; + return p->pReal->pMethods->xSectorSize(p->pReal); +} + +/* +** Return the device characteristic flags supported by an rbuVfs-file. +*/ +static int rbuVfsDeviceCharacteristics(sqlite3_file *pFile){ + rbu_file *p = (rbu_file *)pFile; + return p->pReal->pMethods->xDeviceCharacteristics(p->pReal); +} + +/* +** Take or release a shared-memory lock. +*/ +static int rbuVfsShmLock(sqlite3_file *pFile, int ofst, int n, int flags){ + rbu_file *p = (rbu_file*)pFile; + sqlite3rbu *pRbu = p->pRbu; + int rc = SQLITE_OK; + +#ifdef SQLITE_AMALGAMATION + assert( WAL_CKPT_LOCK==1 ); +#endif + + assert( p->openFlags & (SQLITE_OPEN_MAIN_DB|SQLITE_OPEN_TEMP_DB) ); + if( pRbu && (pRbu->eStage==RBU_STAGE_OAL || pRbu->eStage==RBU_STAGE_MOVE) ){ + /* Magic number 1 is the WAL_CKPT_LOCK lock. Preventing SQLite from + ** taking this lock also prevents any checkpoints from occurring. + ** todo: really, it's not clear why this might occur, as + ** wal_autocheckpoint ought to be turned off. */ + if( ofst==WAL_LOCK_CKPT && n==1 ) rc = SQLITE_BUSY; + }else{ + int bCapture = 0; + if( n==1 && (flags & SQLITE_SHM_EXCLUSIVE) + && pRbu && pRbu->eStage==RBU_STAGE_CAPTURE + && (ofst==WAL_LOCK_WRITE || ofst==WAL_LOCK_CKPT || ofst==WAL_LOCK_READ0) + ){ + bCapture = 1; + } + + if( bCapture==0 || 0==(flags & SQLITE_SHM_UNLOCK) ){ + rc = p->pReal->pMethods->xShmLock(p->pReal, ofst, n, flags); + if( bCapture && rc==SQLITE_OK ){ + pRbu->mLock |= (1 << ofst); + } + } + } + + return rc; +} + +/* +** Obtain a pointer to a mapping of a single 32KiB page of the *-shm file. +*/ +static int rbuVfsShmMap( + sqlite3_file *pFile, + int iRegion, + int szRegion, + int isWrite, + void volatile **pp +){ + rbu_file *p = (rbu_file*)pFile; + int rc = SQLITE_OK; + int eStage = (p->pRbu ? p->pRbu->eStage : 0); + + /* If not in RBU_STAGE_OAL, allow this call to pass through. Or, if this + ** rbu is in the RBU_STAGE_OAL state, use heap memory for *-shm space + ** instead of a file on disk. */ + assert( p->openFlags & (SQLITE_OPEN_MAIN_DB|SQLITE_OPEN_TEMP_DB) ); + if( eStage==RBU_STAGE_OAL || eStage==RBU_STAGE_MOVE ){ + if( iRegion<=p->nShm ){ + int nByte = (iRegion+1) * sizeof(char*); + char **apNew = (char**)sqlite3_realloc64(p->apShm, nByte); + if( apNew==0 ){ + rc = SQLITE_NOMEM; + }else{ + memset(&apNew[p->nShm], 0, sizeof(char*) * (1 + iRegion - p->nShm)); + p->apShm = apNew; + p->nShm = iRegion+1; + } + } + + if( rc==SQLITE_OK && p->apShm[iRegion]==0 ){ + char *pNew = (char*)sqlite3_malloc64(szRegion); + if( pNew==0 ){ + rc = SQLITE_NOMEM; + }else{ + memset(pNew, 0, szRegion); + p->apShm[iRegion] = pNew; + } + } + + if( rc==SQLITE_OK ){ + *pp = p->apShm[iRegion]; + }else{ + *pp = 0; + } + }else{ + assert( p->apShm==0 ); + rc = p->pReal->pMethods->xShmMap(p->pReal, iRegion, szRegion, isWrite, pp); + } + + return rc; +} + +/* +** Memory barrier. +*/ +static void rbuVfsShmBarrier(sqlite3_file *pFile){ + rbu_file *p = (rbu_file *)pFile; + p->pReal->pMethods->xShmBarrier(p->pReal); +} + +/* +** The xShmUnmap method. +*/ +static int rbuVfsShmUnmap(sqlite3_file *pFile, int delFlag){ + rbu_file *p = (rbu_file*)pFile; + int rc = SQLITE_OK; + int eStage = (p->pRbu ? 
p->pRbu->eStage : 0); + + assert( p->openFlags & (SQLITE_OPEN_MAIN_DB|SQLITE_OPEN_TEMP_DB) ); + if( eStage==RBU_STAGE_OAL || eStage==RBU_STAGE_MOVE ){ + /* no-op */ + }else{ + /* Release the checkpointer and writer locks */ + rbuUnlockShm(p); + rc = p->pReal->pMethods->xShmUnmap(p->pReal, delFlag); + } + return rc; +} + +/* +** Given that zWal points to a buffer containing a wal file name passed to +** either the xOpen() or xAccess() VFS method, return a pointer to the +** file-handle opened by the same database connection on the corresponding +** database file. +*/ +static rbu_file *rbuFindMaindb(rbu_vfs *pRbuVfs, const char *zWal){ + rbu_file *pDb; + sqlite3_mutex_enter(pRbuVfs->mutex); + for(pDb=pRbuVfs->pMain; pDb && pDb->zWal!=zWal; pDb=pDb->pMainNext); + sqlite3_mutex_leave(pRbuVfs->mutex); + return pDb; +} + +/* +** Open an rbu file handle. +*/ +static int rbuVfsOpen( + sqlite3_vfs *pVfs, + const char *zName, + sqlite3_file *pFile, + int flags, + int *pOutFlags +){ + static sqlite3_io_methods rbuvfs_io_methods = { + 2, /* iVersion */ + rbuVfsClose, /* xClose */ + rbuVfsRead, /* xRead */ + rbuVfsWrite, /* xWrite */ + rbuVfsTruncate, /* xTruncate */ + rbuVfsSync, /* xSync */ + rbuVfsFileSize, /* xFileSize */ + rbuVfsLock, /* xLock */ + rbuVfsUnlock, /* xUnlock */ + rbuVfsCheckReservedLock, /* xCheckReservedLock */ + rbuVfsFileControl, /* xFileControl */ + rbuVfsSectorSize, /* xSectorSize */ + rbuVfsDeviceCharacteristics, /* xDeviceCharacteristics */ + rbuVfsShmMap, /* xShmMap */ + rbuVfsShmLock, /* xShmLock */ + rbuVfsShmBarrier, /* xShmBarrier */ + rbuVfsShmUnmap, /* xShmUnmap */ + 0, 0 /* xFetch, xUnfetch */ + }; + rbu_vfs *pRbuVfs = (rbu_vfs*)pVfs; + sqlite3_vfs *pRealVfs = pRbuVfs->pRealVfs; + rbu_file *pFd = (rbu_file *)pFile; + int rc = SQLITE_OK; + const char *zOpen = zName; + + memset(pFd, 0, sizeof(rbu_file)); + pFd->pReal = (sqlite3_file*)&pFd[1]; + pFd->pRbuVfs = pRbuVfs; + pFd->openFlags = flags; + if( zName ){ + if( flags & SQLITE_OPEN_MAIN_DB ){ + /* A main database has just been opened. The following block sets + ** (pFd->zWal) to point to a buffer owned by SQLite that contains + ** the name of the *-wal file this db connection will use. SQLite + ** happens to pass a pointer to this buffer when using xAccess() + ** or xOpen() to operate on the *-wal file. */ + int n = (int)strlen(zName); + const char *z = &zName[n]; + if( flags & SQLITE_OPEN_URI ){ + int odd = 0; + while( 1 ){ + if( z[0]==0 ){ + odd = 1 - odd; + if( odd && z[1]==0 ) break; + } + z++; + } + z += 2; + }else{ + while( *z==0 ) z++; + } + z += (n + 8 + 1); + pFd->zWal = z; + } + else if( flags & SQLITE_OPEN_WAL ){ + rbu_file *pDb = rbuFindMaindb(pRbuVfs, zName); + if( pDb ){ + if( pDb->pRbu && pDb->pRbu->eStage==RBU_STAGE_OAL ){ + /* This call is to open a *-wal file. Intead, open the *-oal. This + ** code ensures that the string passed to xOpen() is terminated by a + ** pair of '\0' bytes in case the VFS attempts to extract a URI + ** parameter from it. */ + size_t nCopy = strlen(zName); + char *zCopy = sqlite3_malloc64(nCopy+2); + if( zCopy ){ + memcpy(zCopy, zName, nCopy); + zCopy[nCopy-3] = 'o'; + zCopy[nCopy] = '\0'; + zCopy[nCopy+1] = '\0'; + zOpen = (const char*)(pFd->zDel = zCopy); + }else{ + rc = SQLITE_NOMEM; + } + pFd->pRbu = pDb->pRbu; + } + pDb->pWalFd = pFd; + } + } + } + + if( rc==SQLITE_OK ){ + rc = pRealVfs->xOpen(pRealVfs, zOpen, pFd->pReal, flags, pOutFlags); + } + if( pFd->pReal->pMethods ){ + /* The xOpen() operation has succeeded. 
Set the sqlite3_file.pMethods + ** pointer and, if the file is a main database file, link it into the + ** mutex protected linked list of all such files. */ + pFile->pMethods = &rbuvfs_io_methods; + if( flags & SQLITE_OPEN_MAIN_DB ){ + sqlite3_mutex_enter(pRbuVfs->mutex); + pFd->pMainNext = pRbuVfs->pMain; + pRbuVfs->pMain = pFd; + sqlite3_mutex_leave(pRbuVfs->mutex); + } + }else{ + sqlite3_free(pFd->zDel); + } + + return rc; +} + +/* +** Delete the file located at zPath. +*/ +static int rbuVfsDelete(sqlite3_vfs *pVfs, const char *zPath, int dirSync){ + sqlite3_vfs *pRealVfs = ((rbu_vfs*)pVfs)->pRealVfs; + return pRealVfs->xDelete(pRealVfs, zPath, dirSync); +} + +/* +** Test for access permissions. Return true if the requested permission +** is available, or false otherwise. +*/ +static int rbuVfsAccess( + sqlite3_vfs *pVfs, + const char *zPath, + int flags, + int *pResOut +){ + rbu_vfs *pRbuVfs = (rbu_vfs*)pVfs; + sqlite3_vfs *pRealVfs = pRbuVfs->pRealVfs; + int rc; + + rc = pRealVfs->xAccess(pRealVfs, zPath, flags, pResOut); + + /* If this call is to check if a *-wal file associated with an RBU target + ** database connection exists, and the RBU update is in RBU_STAGE_OAL, + ** the following special handling is activated: + ** + ** a) if the *-wal file does exist, return SQLITE_CANTOPEN. This + ** ensures that the RBU extension never tries to update a database + ** in wal mode, even if the first page of the database file has + ** been damaged. + ** + ** b) if the *-wal file does not exist, claim that it does anyway, + ** causing SQLite to call xOpen() to open it. This call will also + ** be intercepted (see the rbuVfsOpen() function) and the *-oal + ** file opened instead. + */ + if( rc==SQLITE_OK && flags==SQLITE_ACCESS_EXISTS ){ + rbu_file *pDb = rbuFindMaindb(pRbuVfs, zPath); + if( pDb && pDb->pRbu && pDb->pRbu->eStage==RBU_STAGE_OAL ){ + if( *pResOut ){ + rc = SQLITE_CANTOPEN; + }else{ + *pResOut = 1; + } + } + } + + return rc; +} + +/* +** Populate buffer zOut with the full canonical pathname corresponding +** to the pathname in zPath. zOut is guaranteed to point to a buffer +** of at least (DEVSYM_MAX_PATHNAME+1) bytes. +*/ +static int rbuVfsFullPathname( + sqlite3_vfs *pVfs, + const char *zPath, + int nOut, + char *zOut +){ + sqlite3_vfs *pRealVfs = ((rbu_vfs*)pVfs)->pRealVfs; + return pRealVfs->xFullPathname(pRealVfs, zPath, nOut, zOut); +} + +#ifndef SQLITE_OMIT_LOAD_EXTENSION +/* +** Open the dynamic library located at zPath and return a handle. +*/ +static void *rbuVfsDlOpen(sqlite3_vfs *pVfs, const char *zPath){ + sqlite3_vfs *pRealVfs = ((rbu_vfs*)pVfs)->pRealVfs; + return pRealVfs->xDlOpen(pRealVfs, zPath); +} + +/* +** Populate the buffer zErrMsg (size nByte bytes) with a human readable +** utf-8 string describing the most recent error encountered associated +** with dynamic libraries. +*/ +static void rbuVfsDlError(sqlite3_vfs *pVfs, int nByte, char *zErrMsg){ + sqlite3_vfs *pRealVfs = ((rbu_vfs*)pVfs)->pRealVfs; + pRealVfs->xDlError(pRealVfs, nByte, zErrMsg); +} + +/* +** Return a pointer to the symbol zSymbol in the dynamic library pHandle. +*/ +static void (*rbuVfsDlSym( + sqlite3_vfs *pVfs, + void *pArg, + const char *zSym +))(void){ + sqlite3_vfs *pRealVfs = ((rbu_vfs*)pVfs)->pRealVfs; + return pRealVfs->xDlSym(pRealVfs, pArg, zSym); +} + +/* +** Close the dynamic library handle pHandle. 
+*/ +static void rbuVfsDlClose(sqlite3_vfs *pVfs, void *pHandle){ + sqlite3_vfs *pRealVfs = ((rbu_vfs*)pVfs)->pRealVfs; + pRealVfs->xDlClose(pRealVfs, pHandle); +} +#endif /* SQLITE_OMIT_LOAD_EXTENSION */ + +/* +** Populate the buffer pointed to by zBufOut with nByte bytes of +** random data. +*/ +static int rbuVfsRandomness(sqlite3_vfs *pVfs, int nByte, char *zBufOut){ + sqlite3_vfs *pRealVfs = ((rbu_vfs*)pVfs)->pRealVfs; + return pRealVfs->xRandomness(pRealVfs, nByte, zBufOut); +} + +/* +** Sleep for nMicro microseconds. Return the number of microseconds +** actually slept. +*/ +static int rbuVfsSleep(sqlite3_vfs *pVfs, int nMicro){ + sqlite3_vfs *pRealVfs = ((rbu_vfs*)pVfs)->pRealVfs; + return pRealVfs->xSleep(pRealVfs, nMicro); +} + +/* +** Return the current time as a Julian Day number in *pTimeOut. +*/ +static int rbuVfsCurrentTime(sqlite3_vfs *pVfs, double *pTimeOut){ + sqlite3_vfs *pRealVfs = ((rbu_vfs*)pVfs)->pRealVfs; + return pRealVfs->xCurrentTime(pRealVfs, pTimeOut); +} + +/* +** No-op. +*/ +static int rbuVfsGetLastError(sqlite3_vfs *pVfs, int a, char *b){ + return 0; +} + +/* +** Deregister and destroy an RBU vfs created by an earlier call to +** sqlite3rbu_create_vfs(). +*/ +SQLITE_API void SQLITE_STDCALL sqlite3rbu_destroy_vfs(const char *zName){ + sqlite3_vfs *pVfs = sqlite3_vfs_find(zName); + if( pVfs && pVfs->xOpen==rbuVfsOpen ){ + sqlite3_mutex_free(((rbu_vfs*)pVfs)->mutex); + sqlite3_vfs_unregister(pVfs); + sqlite3_free(pVfs); + } +} + +/* +** Create an RBU VFS named zName that accesses the underlying file-system +** via existing VFS zParent. The new object is registered as a non-default +** VFS with SQLite before returning. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3rbu_create_vfs(const char *zName, const char *zParent){ + + /* Template for VFS */ + static sqlite3_vfs vfs_template = { + 1, /* iVersion */ + 0, /* szOsFile */ + 0, /* mxPathname */ + 0, /* pNext */ + 0, /* zName */ + 0, /* pAppData */ + rbuVfsOpen, /* xOpen */ + rbuVfsDelete, /* xDelete */ + rbuVfsAccess, /* xAccess */ + rbuVfsFullPathname, /* xFullPathname */ + +#ifndef SQLITE_OMIT_LOAD_EXTENSION + rbuVfsDlOpen, /* xDlOpen */ + rbuVfsDlError, /* xDlError */ + rbuVfsDlSym, /* xDlSym */ + rbuVfsDlClose, /* xDlClose */ +#else + 0, 0, 0, 0, +#endif + + rbuVfsRandomness, /* xRandomness */ + rbuVfsSleep, /* xSleep */ + rbuVfsCurrentTime, /* xCurrentTime */ + rbuVfsGetLastError, /* xGetLastError */ + 0, /* xCurrentTimeInt64 (version 2) */ + 0, 0, 0 /* Unimplemented version 3 methods */ + }; + + rbu_vfs *pNew = 0; /* Newly allocated VFS */ + int rc = SQLITE_OK; + size_t nName; + size_t nByte; + + nName = strlen(zName); + nByte = sizeof(rbu_vfs) + nName + 1; + pNew = (rbu_vfs*)sqlite3_malloc64(nByte); + if( pNew==0 ){ + rc = SQLITE_NOMEM; + }else{ + sqlite3_vfs *pParent; /* Parent VFS */ + memset(pNew, 0, nByte); + pParent = sqlite3_vfs_find(zParent); + if( pParent==0 ){ + rc = SQLITE_NOTFOUND; + }else{ + char *zSpace; + memcpy(&pNew->base, &vfs_template, sizeof(sqlite3_vfs)); + pNew->base.mxPathname = pParent->mxPathname; + pNew->base.szOsFile = sizeof(rbu_file) + pParent->szOsFile; + pNew->pRealVfs = pParent; + pNew->base.zName = (const char*)(zSpace = (char*)&pNew[1]); + memcpy(zSpace, zName, nName); + + /* Allocate the mutex and register the new VFS (not as the default) */ + pNew->mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_RECURSIVE); + if( pNew->mutex==0 ){ + rc = SQLITE_NOMEM; + }else{ + rc = sqlite3_vfs_register(&pNew->base, 0); + } + } + + if( rc!=SQLITE_OK ){ + sqlite3_mutex_free(pNew->mutex); + 
sqlite3_free(pNew); + } + } + + return rc; +} + + +/**************************************************************************/ + +#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_RBU) */ + +/************** End of sqlite3rbu.c ******************************************/ +/************** Begin file dbstat.c ******************************************/ +/* +** 2010 July 12 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This file contains an implementation of the "dbstat" virtual table. +** +** The dbstat virtual table is used to extract low-level formatting +** information from an SQLite database in order to implement the +** "sqlite3_analyzer" utility. See the ../tool/spaceanal.tcl script +** for an example implementation. +** +** Additional information is available on the "dbstat.html" page of the +** official SQLite documentation. +*/ + +/* #include "sqliteInt.h" ** Requires access to internal data structures ** */ +#if (defined(SQLITE_ENABLE_DBSTAT_VTAB) || defined(SQLITE_TEST)) \ + && !defined(SQLITE_OMIT_VIRTUALTABLE) + +/* +** Page paths: +** +** The value of the 'path' column describes the path taken from the +** root-node of the b-tree structure to each page. The value of the +** root-node path is '/'. +** +** The value of the path for the left-most child page of the root of +** a b-tree is '/000/'. (Btrees store content ordered from left to right +** so the pages to the left have smaller keys than the pages to the right.) +** The next to left-most child of the root page is +** '/001', and so on, each sibling page identified by a 3-digit hex +** value. The children of the 451st left-most sibling have paths such +** as '/1c2/000/, '/1c2/001/' etc. +** +** Overflow pages are specified by appending a '+' character and a +** six-digit hexadecimal value to the path to the cell they are linked +** from. 
For example, the three overflow pages in a chain linked from +** the left-most cell of the 450th child of the root page are identified +** by the paths: +** +** '/1c2/000+000000' // First page in overflow chain +** '/1c2/000+000001' // Second page in overflow chain +** '/1c2/000+000002' // Third page in overflow chain +** +** If the paths are sorted using the BINARY collation sequence, then +** the overflow pages associated with a cell will appear earlier in the +** sort-order than its child page: +** +** '/1c2/000/' // Left-most child of 451st child of root +*/ +#define VTAB_SCHEMA \ + "CREATE TABLE xx( " \ + " name STRING, /* Name of table or index */" \ + " path INTEGER, /* Path to page from root */" \ + " pageno INTEGER, /* Page number */" \ + " pagetype STRING, /* 'internal', 'leaf' or 'overflow' */" \ + " ncell INTEGER, /* Cells on page (0 for overflow) */" \ + " payload INTEGER, /* Bytes of payload on this page */" \ + " unused INTEGER, /* Bytes of unused space on this page */" \ + " mx_payload INTEGER, /* Largest payload size of all cells */" \ + " pgoffset INTEGER, /* Offset of page in file */" \ + " pgsize INTEGER, /* Size of the page */" \ + " schema TEXT HIDDEN /* Database schema being analyzed */" \ + ");" + + +typedef struct StatTable StatTable; +typedef struct StatCursor StatCursor; +typedef struct StatPage StatPage; +typedef struct StatCell StatCell; + +struct StatCell { + int nLocal; /* Bytes of local payload */ + u32 iChildPg; /* Child node (or 0 if this is a leaf) */ + int nOvfl; /* Entries in aOvfl[] */ + u32 *aOvfl; /* Array of overflow page numbers */ + int nLastOvfl; /* Bytes of payload on final overflow page */ + int iOvfl; /* Iterates through aOvfl[] */ +}; + +struct StatPage { + u32 iPgno; + DbPage *pPg; + int iCell; + + char *zPath; /* Path to this page */ + + /* Variables populated by statDecodePage(): */ + u8 flags; /* Copy of flags byte */ + int nCell; /* Number of cells on page */ + int nUnused; /* Number of unused bytes on page */ + StatCell *aCell; /* Array of parsed cells */ + u32 iRightChildPg; /* Right-child page number (or 0) */ + int nMxPayload; /* Largest payload of any cell on this page */ +}; + +struct StatCursor { + sqlite3_vtab_cursor base; + sqlite3_stmt *pStmt; /* Iterates through set of root pages */ + int isEof; /* After pStmt has returned SQLITE_DONE */ + int iDb; /* Schema used for this query */ + + StatPage aPage[32]; + int iPage; /* Current entry in aPage[] */ + + /* Values to return. */ + char *zName; /* Value of 'name' column */ + char *zPath; /* Value of 'path' column */ + u32 iPageno; /* Value of 'pageno' column */ + char *zPagetype; /* Value of 'pagetype' column */ + int nCell; /* Value of 'ncell' column */ + int nPayload; /* Value of 'payload' column */ + int nUnused; /* Value of 'unused' column */ + int nMxPayload; /* Value of 'mx_payload' column */ + i64 iOffset; /* Value of 'pgOffset' column */ + int szPage; /* Value of 'pgSize' column */ +}; + +struct StatTable { + sqlite3_vtab base; + sqlite3 *db; + int iDb; /* Index of database to analyze */ +}; + +#ifndef get2byte +# define get2byte(x) ((x)[0]<<8 | (x)[1]) +#endif + +/* +** Connect to or create a statvfs virtual table. 
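+**
+** For example (illustrative sketch only; requires a build in which this
+** module is enabled, and the table and schema names are placeholders):
+**
+**   CREATE VIRTUAL TABLE temp.stat USING dbstat(main);
+**   SELECT name, path, pageno, pagetype, ncell, payload, unused
+**     FROM stat ORDER BY name, path;
+**
+** returns one row for each b-tree page and overflow page in schema "main".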
+*/ +static int statConnect( + sqlite3 *db, + void *pAux, + int argc, const char *const*argv, + sqlite3_vtab **ppVtab, + char **pzErr +){ + StatTable *pTab = 0; + int rc = SQLITE_OK; + int iDb; + + if( argc>=4 ){ + Token nm; + sqlite3TokenInit(&nm, (char*)argv[3]); + iDb = sqlite3FindDb(db, &nm); + if( iDb<0 ){ + *pzErr = sqlite3_mprintf("no such database: %s", argv[3]); + return SQLITE_ERROR; + } + }else{ + iDb = 0; + } + rc = sqlite3_declare_vtab(db, VTAB_SCHEMA); + if( rc==SQLITE_OK ){ + pTab = (StatTable *)sqlite3_malloc64(sizeof(StatTable)); + if( pTab==0 ) rc = SQLITE_NOMEM; + } + + assert( rc==SQLITE_OK || pTab==0 ); + if( rc==SQLITE_OK ){ + memset(pTab, 0, sizeof(StatTable)); + pTab->db = db; + pTab->iDb = iDb; + } + + *ppVtab = (sqlite3_vtab*)pTab; + return rc; +} + +/* +** Disconnect from or destroy a statvfs virtual table. +*/ +static int statDisconnect(sqlite3_vtab *pVtab){ + sqlite3_free(pVtab); + return SQLITE_OK; +} + +/* +** There is no "best-index". This virtual table always does a linear +** scan. However, a schema=? constraint should cause this table to +** operate on a different database schema, so check for it. +** +** idxNum is normally 0, but will be 1 if a schema=? constraint exists. +*/ +static int statBestIndex(sqlite3_vtab *tab, sqlite3_index_info *pIdxInfo){ + int i; + + pIdxInfo->estimatedCost = 1.0e6; /* Initial cost estimate */ + + /* Look for a valid schema=? constraint. If found, change the idxNum to + ** 1 and request the value of that constraint be sent to xFilter. And + ** lower the cost estimate to encourage the constrained version to be + ** used. + */ + for(i=0; i<pIdxInfo->nConstraint; i++){ + if( pIdxInfo->aConstraint[i].usable==0 ) continue; + if( pIdxInfo->aConstraint[i].op!=SQLITE_INDEX_CONSTRAINT_EQ ) continue; + if( pIdxInfo->aConstraint[i].iColumn!=10 ) continue; + pIdxInfo->idxNum = 1; + pIdxInfo->estimatedCost = 1.0; + pIdxInfo->aConstraintUsage[i].argvIndex = 1; + pIdxInfo->aConstraintUsage[i].omit = 1; + break; + } + + + /* Records are always returned in ascending order of (name, path). + ** If this will satisfy the client, set the orderByConsumed flag so that + ** SQLite does not do an external sort. + */ + if( ( pIdxInfo->nOrderBy==1 + && pIdxInfo->aOrderBy[0].iColumn==0 + && pIdxInfo->aOrderBy[0].desc==0 + ) || + ( pIdxInfo->nOrderBy==2 + && pIdxInfo->aOrderBy[0].iColumn==0 + && pIdxInfo->aOrderBy[0].desc==0 + && pIdxInfo->aOrderBy[1].iColumn==1 + && pIdxInfo->aOrderBy[1].desc==0 + ) + ){ + pIdxInfo->orderByConsumed = 1; + } + + return SQLITE_OK; +} + +/* +** Open a new statvfs cursor. +*/ +static int statOpen(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCursor){ + StatTable *pTab = (StatTable *)pVTab; + StatCursor *pCsr; + + pCsr = (StatCursor *)sqlite3_malloc64(sizeof(StatCursor)); + if( pCsr==0 ){ + return SQLITE_NOMEM; + }else{ + memset(pCsr, 0, sizeof(StatCursor)); + pCsr->base.pVtab = pVTab; + pCsr->iDb = pTab->iDb; + } + + *ppCursor = (sqlite3_vtab_cursor *)pCsr; + return SQLITE_OK; +} + +static void statClearPage(StatPage *p){ + int i; + if( p->aCell ){ + for(i=0; i<p->nCell; i++){ + sqlite3_free(p->aCell[i].aOvfl); + } + sqlite3_free(p->aCell); + } + sqlite3PagerUnref(p->pPg); + sqlite3_free(p->zPath); + memset(p, 0, sizeof(StatPage)); +} + +static void statResetCsr(StatCursor *pCsr){ + int i; + sqlite3_reset(pCsr->pStmt); + for(i=0; i<ArraySize(pCsr->aPage); i++){ + statClearPage(&pCsr->aPage[i]); + } + pCsr->iPage = 0; + sqlite3_free(pCsr->zPath); + pCsr->zPath = 0; + pCsr->isEof = 0; +} + +/* +** Close a statvfs cursor. 
+*/ +static int statClose(sqlite3_vtab_cursor *pCursor){ + StatCursor *pCsr = (StatCursor *)pCursor; + statResetCsr(pCsr); + sqlite3_finalize(pCsr->pStmt); + sqlite3_free(pCsr); + return SQLITE_OK; +} + +static void getLocalPayload( + int nUsable, /* Usable bytes per page */ + u8 flags, /* Page flags */ + int nTotal, /* Total record (payload) size */ + int *pnLocal /* OUT: Bytes stored locally */ +){ + int nLocal; + int nMinLocal; + int nMaxLocal; + + if( flags==0x0D ){ /* Table leaf node */ + nMinLocal = (nUsable - 12) * 32 / 255 - 23; + nMaxLocal = nUsable - 35; + }else{ /* Index interior and leaf nodes */ + nMinLocal = (nUsable - 12) * 32 / 255 - 23; + nMaxLocal = (nUsable - 12) * 64 / 255 - 23; + } + + nLocal = nMinLocal + (nTotal - nMinLocal) % (nUsable - 4); + if( nLocal>nMaxLocal ) nLocal = nMinLocal; + *pnLocal = nLocal; +} + +static int statDecodePage(Btree *pBt, StatPage *p){ + int nUnused; + int iOff; + int nHdr; + int isLeaf; + int szPage; + + u8 *aData = sqlite3PagerGetData(p->pPg); + u8 *aHdr = &aData[p->iPgno==1 ? 100 : 0]; + + p->flags = aHdr[0]; + p->nCell = get2byte(&aHdr[3]); + p->nMxPayload = 0; + + isLeaf = (p->flags==0x0A || p->flags==0x0D); + nHdr = 12 - isLeaf*4 + (p->iPgno==1)*100; + + nUnused = get2byte(&aHdr[5]) - nHdr - 2*p->nCell; + nUnused += (int)aHdr[7]; + iOff = get2byte(&aHdr[1]); + while( iOff ){ + nUnused += get2byte(&aData[iOff+2]); + iOff = get2byte(&aData[iOff]); + } + p->nUnused = nUnused; + p->iRightChildPg = isLeaf ? 0 : sqlite3Get4byte(&aHdr[8]); + szPage = sqlite3BtreeGetPageSize(pBt); + + if( p->nCell ){ + int i; /* Used to iterate through cells */ + int nUsable; /* Usable bytes per page */ + + sqlite3BtreeEnter(pBt); + nUsable = szPage - sqlite3BtreeGetReserveNoMutex(pBt); + sqlite3BtreeLeave(pBt); + p->aCell = sqlite3_malloc64((p->nCell+1) * sizeof(StatCell)); + if( p->aCell==0 ) return SQLITE_NOMEM; + memset(p->aCell, 0, (p->nCell+1) * sizeof(StatCell)); + + for(i=0; i<p->nCell; i++){ + StatCell *pCell = &p->aCell[i]; + + iOff = get2byte(&aData[nHdr+i*2]); + if( !isLeaf ){ + pCell->iChildPg = sqlite3Get4byte(&aData[iOff]); + iOff += 4; + } + if( p->flags==0x05 ){ + /* A table interior node. nPayload==0. */ + }else{ + u32 nPayload; /* Bytes of payload total (local+overflow) */ + int nLocal; /* Bytes of payload stored locally */ + iOff += getVarint32(&aData[iOff], nPayload); + if( p->flags==0x0D ){ + u64 dummy; + iOff += sqlite3GetVarint(&aData[iOff], &dummy); + } + if( nPayload>(u32)p->nMxPayload ) p->nMxPayload = nPayload; + getLocalPayload(nUsable, p->flags, nPayload, &nLocal); + pCell->nLocal = nLocal; + assert( nLocal>=0 ); + assert( nPayload>=(u32)nLocal ); + assert( nLocal<=(nUsable-35) ); + if( nPayload>(u32)nLocal ){ + int j; + int nOvfl = ((nPayload - nLocal) + nUsable-4 - 1) / (nUsable - 4); + pCell->nLastOvfl = (nPayload-nLocal) - (nOvfl-1) * (nUsable-4); + pCell->nOvfl = nOvfl; + pCell->aOvfl = sqlite3_malloc64(sizeof(u32)*nOvfl); + if( pCell->aOvfl==0 ) return SQLITE_NOMEM; + pCell->aOvfl[0] = sqlite3Get4byte(&aData[iOff+nLocal]); + for(j=1; j<nOvfl; j++){ + int rc; + u32 iPrev = pCell->aOvfl[j-1]; + DbPage *pPg = 0; + rc = sqlite3PagerGet(sqlite3BtreePager(pBt), iPrev, &pPg, 0); + if( rc!=SQLITE_OK ){ + assert( pPg==0 ); + return rc; + } + pCell->aOvfl[j] = sqlite3Get4byte(sqlite3PagerGetData(pPg)); + sqlite3PagerUnref(pPg); + } + } + } + } + } + + return SQLITE_OK; +} + +/* +** Populate the pCsr->iOffset and pCsr->szPage member variables. Based on +** the current value of pCsr->iPageno. 
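+**
+** For an ordinary database these are simply the b-tree page size and
+** (iPageno-1)*page-size. When a ZIPVFS backend sits underneath the pager,
+** both values are instead obtained from the lower-level file via a
+** ZIPVFS-specific file-control, since pages may then be stored compressed
+** at other offsets in the file.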
+*/ +static void statSizeAndOffset(StatCursor *pCsr){ + StatTable *pTab = (StatTable *)((sqlite3_vtab_cursor *)pCsr)->pVtab; + Btree *pBt = pTab->db->aDb[pTab->iDb].pBt; + Pager *pPager = sqlite3BtreePager(pBt); + sqlite3_file *fd; + sqlite3_int64 x[2]; + + /* The default page size and offset */ + pCsr->szPage = sqlite3BtreeGetPageSize(pBt); + pCsr->iOffset = (i64)pCsr->szPage * (pCsr->iPageno - 1); + + /* If connected to a ZIPVFS backend, override the page size and + ** offset with actual values obtained from ZIPVFS. + */ + fd = sqlite3PagerFile(pPager); + x[0] = pCsr->iPageno; + if( fd->pMethods!=0 && sqlite3OsFileControl(fd, 230440, &x)==SQLITE_OK ){ + pCsr->iOffset = x[0]; + pCsr->szPage = (int)x[1]; + } +} + +/* +** Move a statvfs cursor to the next entry in the file. +*/ +static int statNext(sqlite3_vtab_cursor *pCursor){ + int rc; + int nPayload; + char *z; + StatCursor *pCsr = (StatCursor *)pCursor; + StatTable *pTab = (StatTable *)pCursor->pVtab; + Btree *pBt = pTab->db->aDb[pCsr->iDb].pBt; + Pager *pPager = sqlite3BtreePager(pBt); + + sqlite3_free(pCsr->zPath); + pCsr->zPath = 0; + +statNextRestart: + if( pCsr->aPage[0].pPg==0 ){ + rc = sqlite3_step(pCsr->pStmt); + if( rc==SQLITE_ROW ){ + int nPage; + u32 iRoot = (u32)sqlite3_column_int64(pCsr->pStmt, 1); + sqlite3PagerPagecount(pPager, &nPage); + if( nPage==0 ){ + pCsr->isEof = 1; + return sqlite3_reset(pCsr->pStmt); + } + rc = sqlite3PagerGet(pPager, iRoot, &pCsr->aPage[0].pPg, 0); + pCsr->aPage[0].iPgno = iRoot; + pCsr->aPage[0].iCell = 0; + pCsr->aPage[0].zPath = z = sqlite3_mprintf("/"); + pCsr->iPage = 0; + if( z==0 ) rc = SQLITE_NOMEM; + }else{ + pCsr->isEof = 1; + return sqlite3_reset(pCsr->pStmt); + } + }else{ + + /* Page p itself has already been visited. */ + StatPage *p = &pCsr->aPage[pCsr->iPage]; + + while( p->iCell<p->nCell ){ + StatCell *pCell = &p->aCell[p->iCell]; + if( pCell->iOvfl<pCell->nOvfl ){ + int nUsable; + sqlite3BtreeEnter(pBt); + nUsable = sqlite3BtreeGetPageSize(pBt) - + sqlite3BtreeGetReserveNoMutex(pBt); + sqlite3BtreeLeave(pBt); + pCsr->zName = (char *)sqlite3_column_text(pCsr->pStmt, 0); + pCsr->iPageno = pCell->aOvfl[pCell->iOvfl]; + pCsr->zPagetype = "overflow"; + pCsr->nCell = 0; + pCsr->nMxPayload = 0; + pCsr->zPath = z = sqlite3_mprintf( + "%s%.3x+%.6x", p->zPath, p->iCell, pCell->iOvfl + ); + if( pCell->iOvfl<pCell->nOvfl-1 ){ + pCsr->nUnused = 0; + pCsr->nPayload = nUsable - 4; + }else{ + pCsr->nPayload = pCell->nLastOvfl; + pCsr->nUnused = nUsable - 4 - pCsr->nPayload; + } + pCell->iOvfl++; + statSizeAndOffset(pCsr); + return z==0 ? SQLITE_NOMEM : SQLITE_OK; + } + if( p->iRightChildPg ) break; + p->iCell++; + } + + if( !p->iRightChildPg || p->iCell>p->nCell ){ + statClearPage(p); + if( pCsr->iPage==0 ) return statNext(pCursor); + pCsr->iPage--; + goto statNextRestart; /* Tail recursion */ + } + pCsr->iPage++; + assert( p==&pCsr->aPage[pCsr->iPage-1] ); + + if( p->iCell==p->nCell ){ + p[1].iPgno = p->iRightChildPg; + }else{ + p[1].iPgno = p->aCell[p->iCell].iChildPg; + } + rc = sqlite3PagerGet(pPager, p[1].iPgno, &p[1].pPg, 0); + p[1].iCell = 0; + p[1].zPath = z = sqlite3_mprintf("%s%.3x/", p->zPath, p->iCell); + p->iCell++; + if( z==0 ) rc = SQLITE_NOMEM; + } + + + /* Populate the StatCursor fields with the values to be returned + ** by the xColumn() and xRowid() methods. 
+ */ + if( rc==SQLITE_OK ){ + int i; + StatPage *p = &pCsr->aPage[pCsr->iPage]; + pCsr->zName = (char *)sqlite3_column_text(pCsr->pStmt, 0); + pCsr->iPageno = p->iPgno; + + rc = statDecodePage(pBt, p); + if( rc==SQLITE_OK ){ + statSizeAndOffset(pCsr); + + switch( p->flags ){ + case 0x05: /* table internal */ + case 0x02: /* index internal */ + pCsr->zPagetype = "internal"; + break; + case 0x0D: /* table leaf */ + case 0x0A: /* index leaf */ + pCsr->zPagetype = "leaf"; + break; + default: + pCsr->zPagetype = "corrupted"; + break; + } + pCsr->nCell = p->nCell; + pCsr->nUnused = p->nUnused; + pCsr->nMxPayload = p->nMxPayload; + pCsr->zPath = z = sqlite3_mprintf("%s", p->zPath); + if( z==0 ) rc = SQLITE_NOMEM; + nPayload = 0; + for(i=0; i<p->nCell; i++){ + nPayload += p->aCell[i].nLocal; + } + pCsr->nPayload = nPayload; + } + } + + return rc; +} + +static int statEof(sqlite3_vtab_cursor *pCursor){ + StatCursor *pCsr = (StatCursor *)pCursor; + return pCsr->isEof; +} + +static int statFilter( + sqlite3_vtab_cursor *pCursor, + int idxNum, const char *idxStr, + int argc, sqlite3_value **argv +){ + StatCursor *pCsr = (StatCursor *)pCursor; + StatTable *pTab = (StatTable*)(pCursor->pVtab); + char *zSql; + int rc = SQLITE_OK; + char *zMaster; + + if( idxNum==1 ){ + const char *zDbase = (const char*)sqlite3_value_text(argv[0]); + pCsr->iDb = sqlite3FindDbName(pTab->db, zDbase); + if( pCsr->iDb<0 ){ + sqlite3_free(pCursor->pVtab->zErrMsg); + pCursor->pVtab->zErrMsg = sqlite3_mprintf("no such schema: %s", zDbase); + return pCursor->pVtab->zErrMsg ? SQLITE_ERROR : SQLITE_NOMEM; + } + }else{ + pCsr->iDb = pTab->iDb; + } + statResetCsr(pCsr); + sqlite3_finalize(pCsr->pStmt); + pCsr->pStmt = 0; + zMaster = pCsr->iDb==1 ? "sqlite_temp_master" : "sqlite_master"; + zSql = sqlite3_mprintf( + "SELECT 'sqlite_master' AS name, 1 AS rootpage, 'table' AS type" + " UNION ALL " + "SELECT name, rootpage, type" + " FROM \"%w\".%s WHERE rootpage!=0" + " ORDER BY name", pTab->db->aDb[pCsr->iDb].zName, zMaster); + if( zSql==0 ){ + return SQLITE_NOMEM; + }else{ + rc = sqlite3_prepare_v2(pTab->db, zSql, -1, &pCsr->pStmt, 0); + sqlite3_free(zSql); + } + + if( rc==SQLITE_OK ){ + rc = statNext(pCursor); + } + return rc; +} + +static int statColumn( + sqlite3_vtab_cursor *pCursor, + sqlite3_context *ctx, + int i +){ + StatCursor *pCsr = (StatCursor *)pCursor; + switch( i ){ + case 0: /* name */ + sqlite3_result_text(ctx, pCsr->zName, -1, SQLITE_TRANSIENT); + break; + case 1: /* path */ + sqlite3_result_text(ctx, pCsr->zPath, -1, SQLITE_TRANSIENT); + break; + case 2: /* pageno */ + sqlite3_result_int64(ctx, pCsr->iPageno); + break; + case 3: /* pagetype */ + sqlite3_result_text(ctx, pCsr->zPagetype, -1, SQLITE_STATIC); + break; + case 4: /* ncell */ + sqlite3_result_int(ctx, pCsr->nCell); + break; + case 5: /* payload */ + sqlite3_result_int(ctx, pCsr->nPayload); + break; + case 6: /* unused */ + sqlite3_result_int(ctx, pCsr->nUnused); + break; + case 7: /* mx_payload */ + sqlite3_result_int(ctx, pCsr->nMxPayload); + break; + case 8: /* pgoffset */ + sqlite3_result_int64(ctx, pCsr->iOffset); + break; + case 9: /* pgsize */ + sqlite3_result_int(ctx, pCsr->szPage); + break; + default: { /* schema */ + sqlite3 *db = sqlite3_context_db_handle(ctx); + int iDb = pCsr->iDb; + sqlite3_result_text(ctx, db->aDb[iDb].zName, -1, SQLITE_STATIC); + break; + } + } + return SQLITE_OK; +} + +static int statRowid(sqlite3_vtab_cursor *pCursor, sqlite_int64 *pRowid){ + StatCursor *pCsr = (StatCursor *)pCursor; + *pRowid = pCsr->iPageno; + return 
SQLITE_OK; +} + +/* +** Invoke this routine to register the "dbstat" virtual table module +*/ +SQLITE_PRIVATE int sqlite3DbstatRegister(sqlite3 *db){ + static sqlite3_module dbstat_module = { + 0, /* iVersion */ + statConnect, /* xCreate */ + statConnect, /* xConnect */ + statBestIndex, /* xBestIndex */ + statDisconnect, /* xDisconnect */ + statDisconnect, /* xDestroy */ + statOpen, /* xOpen - open a cursor */ + statClose, /* xClose - close a cursor */ + statFilter, /* xFilter - configure scan constraints */ + statNext, /* xNext - advance a cursor */ + statEof, /* xEof - check for end of scan */ + statColumn, /* xColumn - read data */ + statRowid, /* xRowid - read data */ + 0, /* xUpdate */ + 0, /* xBegin */ + 0, /* xSync */ + 0, /* xCommit */ + 0, /* xRollback */ + 0, /* xFindMethod */ + 0, /* xRename */ + }; + return sqlite3_create_module(db, "dbstat", &dbstat_module, 0); +} +#elif defined(SQLITE_ENABLE_DBSTAT_VTAB) +SQLITE_PRIVATE int sqlite3DbstatRegister(sqlite3 *db){ return SQLITE_OK; } +#endif /* SQLITE_ENABLE_DBSTAT_VTAB */ + +/************** End of dbstat.c **********************************************/ +/************** Begin file json1.c *******************************************/ +/* +** 2015-08-12 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This SQLite extension implements JSON functions. The interface is +** modeled after MySQL JSON functions: +** +** https://dev.mysql.com/doc/refman/5.7/en/json.html +** +** For the time being, all JSON is stored as pure text. (We might add +** a JSONB type in the future which stores a binary encoding of JSON in +** a BLOB, but there is no support for JSONB in the current implementation. +** This implementation parses JSON text at 250 MB/s, so it is hard to see +** how JSONB might improve on that.) +*/ +#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_JSON1) +#if !defined(_SQLITEINT_H_) +/* #include "sqlite3ext.h" */ +#endif +SQLITE_EXTENSION_INIT1 +/* #include <assert.h> */ +/* #include <string.h> */ +/* #include <stdlib.h> */ +/* #include <stdarg.h> */ + +/* Mark a function parameter as unused, to suppress nuisance compiler +** warnings. */ +#ifndef UNUSED_PARAM +# define UNUSED_PARAM(X) (void)(X) +#endif + +#ifndef LARGEST_INT64 +# define LARGEST_INT64 (0xffffffff|(((sqlite3_int64)0x7fffffff)<<32)) +# define SMALLEST_INT64 (((sqlite3_int64)-1) - LARGEST_INT64) +#endif + +/* +** Versions of isspace(), isalnum() and isdigit() to which it is safe +** to pass signed char values. +*/ +#ifdef sqlite3Isdigit + /* Use the SQLite core versions if this routine is part of the + ** SQLite amalgamation */ +# define safe_isdigit(x) sqlite3Isdigit(x) +# define safe_isalnum(x) sqlite3Isalnum(x) +#else + /* Use the standard library for separate compilation */ +#include <ctype.h> /* amalgamator: keep */ +# define safe_isdigit(x) isdigit((unsigned char)(x)) +# define safe_isalnum(x) isalnum((unsigned char)(x)) +#endif + +/* +** Growing our own isspace() routine this way is twice as fast as +** the library isspace() function, resulting in a 7% overall performance +** increase for the parser. (Ubuntu14.10 gcc 4.8.4 x64 with -Os). 
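+**
+** The table flags exactly the four byte values that JSON treats as
+** whitespace (RFC 7159): tab (0x09), line-feed (0x0A), carriage-return
+** (0x0D) and space (0x20). Every other byte value maps to zero.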
+*/ +static const char jsonIsSpace[] = { + 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, +}; +#define safe_isspace(x) (jsonIsSpace[(unsigned char)x]) + +#ifndef SQLITE_AMALGAMATION + /* Unsigned integer types. These are already defined in the sqliteInt.h, + ** but the definitions need to be repeated for separate compilation. */ + typedef sqlite3_uint64 u64; + typedef unsigned int u32; + typedef unsigned char u8; +#endif + +/* Objects */ +typedef struct JsonString JsonString; +typedef struct JsonNode JsonNode; +typedef struct JsonParse JsonParse; + +/* An instance of this object represents a JSON string +** under construction. Really, this is a generic string accumulator +** that can be and is used to create strings other than JSON. +*/ +struct JsonString { + sqlite3_context *pCtx; /* Function context - put error messages here */ + char *zBuf; /* Append JSON content here */ + u64 nAlloc; /* Bytes of storage available in zBuf[] */ + u64 nUsed; /* Bytes of zBuf[] currently used */ + u8 bStatic; /* True if zBuf is static space */ + u8 bErr; /* True if an error has been encountered */ + char zSpace[100]; /* Initial static space */ +}; + +/* JSON type values +*/ +#define JSON_NULL 0 +#define JSON_TRUE 1 +#define JSON_FALSE 2 +#define JSON_INT 3 +#define JSON_REAL 4 +#define JSON_STRING 5 +#define JSON_ARRAY 6 +#define JSON_OBJECT 7 + +/* The "subtype" set for JSON values */ +#define JSON_SUBTYPE 74 /* Ascii for "J" */ + +/* +** Names of the various JSON types: +*/ +static const char * const jsonType[] = { + "null", "true", "false", "integer", "real", "text", "array", "object" +}; + +/* Bit values for the JsonNode.jnFlag field +*/ +#define JNODE_RAW 0x01 /* Content is raw, not JSON encoded */ +#define JNODE_ESCAPE 0x02 /* Content is text with \ escapes */ +#define JNODE_REMOVE 0x04 /* Do not output */ +#define JNODE_REPLACE 0x08 /* Replace with JsonNode.iVal */ +#define JNODE_APPEND 0x10 /* More ARRAY/OBJECT entries at u.iAppend */ +#define JNODE_LABEL 0x20 /* Is a label of an object */ + + +/* A single node of parsed JSON +*/ +struct JsonNode { + u8 eType; /* One of the JSON_ type values */ + u8 jnFlags; /* JNODE flags */ + u8 iVal; /* Replacement value when JNODE_REPLACE */ + u32 n; /* Bytes of content, or number of sub-nodes */ + union { + const char *zJContent; /* Content for INT, REAL, and STRING */ + u32 iAppend; /* More terms for ARRAY and OBJECT */ + u32 iKey; /* Key for ARRAY objects in json_tree() */ + } u; +}; + +/* A completely parsed JSON string +*/ +struct JsonParse { + u32 nNode; /* Number of slots of aNode[] used */ + u32 nAlloc; /* Number of slots of aNode[] allocated */ + JsonNode *aNode; /* Array of nodes containing the parse */ + const char *zJson; /* Original JSON string */ + u32 *aUp; /* Index of parent of each node */ + u8 oom; /* Set 
to true if out of memory */ + u8 nErr; /* Number of errors seen */ +}; + +/************************************************************************** +** Utility routines for dealing with JsonString objects +**************************************************************************/ + +/* Set the JsonString object to an empty string +*/ +static void jsonZero(JsonString *p){ + p->zBuf = p->zSpace; + p->nAlloc = sizeof(p->zSpace); + p->nUsed = 0; + p->bStatic = 1; +} + +/* Initialize the JsonString object +*/ +static void jsonInit(JsonString *p, sqlite3_context *pCtx){ + p->pCtx = pCtx; + p->bErr = 0; + jsonZero(p); +} + + +/* Free all allocated memory and reset the JsonString object back to its +** initial state. +*/ +static void jsonReset(JsonString *p){ + if( !p->bStatic ) sqlite3_free(p->zBuf); + jsonZero(p); +} + + +/* Report an out-of-memory (OOM) condition +*/ +static void jsonOom(JsonString *p){ + p->bErr = 1; + sqlite3_result_error_nomem(p->pCtx); + jsonReset(p); +} + +/* Enlarge pJson->zBuf so that it can hold at least N more bytes. +** Return zero on success. Return non-zero on an OOM error +*/ +static int jsonGrow(JsonString *p, u32 N){ + u64 nTotal = N<p->nAlloc ? p->nAlloc*2 : p->nAlloc+N+10; + char *zNew; + if( p->bStatic ){ + if( p->bErr ) return 1; + zNew = sqlite3_malloc64(nTotal); + if( zNew==0 ){ + jsonOom(p); + return SQLITE_NOMEM; + } + memcpy(zNew, p->zBuf, (size_t)p->nUsed); + p->zBuf = zNew; + p->bStatic = 0; + }else{ + zNew = sqlite3_realloc64(p->zBuf, nTotal); + if( zNew==0 ){ + jsonOom(p); + return SQLITE_NOMEM; + } + p->zBuf = zNew; + } + p->nAlloc = nTotal; + return SQLITE_OK; +} + +/* Append N bytes from zIn onto the end of the JsonString string. +*/ +static void jsonAppendRaw(JsonString *p, const char *zIn, u32 N){ + if( (N+p->nUsed >= p->nAlloc) && jsonGrow(p,N)!=0 ) return; + memcpy(p->zBuf+p->nUsed, zIn, N); + p->nUsed += N; +} + +/* Append formatted text (not to exceed N bytes) to the JsonString. +*/ +static void jsonPrintf(int N, JsonString *p, const char *zFormat, ...){ + va_list ap; + if( (p->nUsed + N >= p->nAlloc) && jsonGrow(p, N) ) return; + va_start(ap, zFormat); + sqlite3_vsnprintf(N, p->zBuf+p->nUsed, zFormat, ap); + va_end(ap); + p->nUsed += (int)strlen(p->zBuf+p->nUsed); +} + +/* Append a single character +*/ +static void jsonAppendChar(JsonString *p, char c){ + if( p->nUsed>=p->nAlloc && jsonGrow(p,1)!=0 ) return; + p->zBuf[p->nUsed++] = c; +} + +/* Append a comma separator to the output buffer, if the previous +** character is not '[' or '{'. +*/ +static void jsonAppendSeparator(JsonString *p){ + char c; + if( p->nUsed==0 ) return; + c = p->zBuf[p->nUsed-1]; + if( c!='[' && c!='{' ) jsonAppendChar(p, ','); +} + +/* Append the N-byte string in zIn to the end of the JsonString string +** under construction. Enclose the string in "..." and escape +** any double-quotes or backslash characters contained within the +** string. 
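+**
+** For example (illustrative): the 12-byte input he said "hi" is appended
+** as "he said \"hi\"", an embedded newline becomes the two-character
+** escape \n, and control characters with no short escape are emitted as
+** \u00XX sequences.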
+*/ +static void jsonAppendString(JsonString *p, const char *zIn, u32 N){ + u32 i; + if( (N+p->nUsed+2 >= p->nAlloc) && jsonGrow(p,N+2)!=0 ) return; + p->zBuf[p->nUsed++] = '"'; + for(i=0; i<N; i++){ + unsigned char c = ((unsigned const char*)zIn)[i]; + if( c=='"' || c=='\\' ){ + json_simple_escape: + if( (p->nUsed+N+3-i > p->nAlloc) && jsonGrow(p,N+3-i)!=0 ) return; + p->zBuf[p->nUsed++] = '\\'; + }else if( c<=0x1f ){ + static const char aSpecial[] = { + 0, 0, 0, 0, 0, 0, 0, 0, 'b', 't', 'n', 0, 'f', 'r', 0, 0, + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 + }; + assert( sizeof(aSpecial)==32 ); + assert( aSpecial['\b']=='b' ); + assert( aSpecial['\f']=='f' ); + assert( aSpecial['\n']=='n' ); + assert( aSpecial['\r']=='r' ); + assert( aSpecial['\t']=='t' ); + if( aSpecial[c] ){ + c = aSpecial[c]; + goto json_simple_escape; + } + if( (p->nUsed+N+7+i > p->nAlloc) && jsonGrow(p,N+7-i)!=0 ) return; + p->zBuf[p->nUsed++] = '\\'; + p->zBuf[p->nUsed++] = 'u'; + p->zBuf[p->nUsed++] = '0'; + p->zBuf[p->nUsed++] = '0'; + p->zBuf[p->nUsed++] = '0' + (c>>4); + c = "0123456789abcdef"[c&0xf]; + } + p->zBuf[p->nUsed++] = c; + } + p->zBuf[p->nUsed++] = '"'; + assert( p->nUsed<p->nAlloc ); +} + +/* +** Append a function parameter value to the JSON string under +** construction. +*/ +static void jsonAppendValue( + JsonString *p, /* Append to this JSON string */ + sqlite3_value *pValue /* Value to append */ +){ + switch( sqlite3_value_type(pValue) ){ + case SQLITE_NULL: { + jsonAppendRaw(p, "null", 4); + break; + } + case SQLITE_INTEGER: + case SQLITE_FLOAT: { + const char *z = (const char*)sqlite3_value_text(pValue); + u32 n = (u32)sqlite3_value_bytes(pValue); + jsonAppendRaw(p, z, n); + break; + } + case SQLITE_TEXT: { + const char *z = (const char*)sqlite3_value_text(pValue); + u32 n = (u32)sqlite3_value_bytes(pValue); + if( sqlite3_value_subtype(pValue)==JSON_SUBTYPE ){ + jsonAppendRaw(p, z, n); + }else{ + jsonAppendString(p, z, n); + } + break; + } + default: { + if( p->bErr==0 ){ + sqlite3_result_error(p->pCtx, "JSON cannot hold BLOB values", -1); + p->bErr = 2; + jsonReset(p); + } + break; + } + } +} + + +/* Make the JSON in p the result of the SQL function. +*/ +static void jsonResult(JsonString *p){ + if( p->bErr==0 ){ + sqlite3_result_text64(p->pCtx, p->zBuf, p->nUsed, + p->bStatic ? SQLITE_TRANSIENT : sqlite3_free, + SQLITE_UTF8); + jsonZero(p); + } + assert( p->bStatic ); +} + +/************************************************************************** +** Utility routines for dealing with JsonNode and JsonParse objects +**************************************************************************/ + +/* +** Return the number of consecutive JsonNode slots need to represent +** the parsed JSON at pNode. The minimum answer is 1. For ARRAY and +** OBJECT types, the number might be larger. +** +** Appended elements are not counted. The value returned is the number +** by which the JsonNode counter should increment in order to go to the +** next peer value. +*/ +static u32 jsonNodeSize(JsonNode *pNode){ + return pNode->eType>=JSON_ARRAY ? pNode->n+1 : 1; +} + +/* +** Reclaim all memory allocated by a JsonParse object. But do not +** delete the JsonParse object itself. +*/ +static void jsonParseReset(JsonParse *pParse){ + sqlite3_free(pParse->aNode); + pParse->aNode = 0; + pParse->nNode = 0; + pParse->nAlloc = 0; + sqlite3_free(pParse->aUp); + pParse->aUp = 0; +} + +/* +** Convert the JsonNode pNode into a pure JSON string and +** append to pOut. Subsubstructure is also included. 
Return +** the number of JsonNode objects that are encoded. +*/ +static void jsonRenderNode( + JsonNode *pNode, /* The node to render */ + JsonString *pOut, /* Write JSON here */ + sqlite3_value **aReplace /* Replacement values */ +){ + switch( pNode->eType ){ + default: { + assert( pNode->eType==JSON_NULL ); + jsonAppendRaw(pOut, "null", 4); + break; + } + case JSON_TRUE: { + jsonAppendRaw(pOut, "true", 4); + break; + } + case JSON_FALSE: { + jsonAppendRaw(pOut, "false", 5); + break; + } + case JSON_STRING: { + if( pNode->jnFlags & JNODE_RAW ){ + jsonAppendString(pOut, pNode->u.zJContent, pNode->n); + break; + } + /* Fall through into the next case */ + } + case JSON_REAL: + case JSON_INT: { + jsonAppendRaw(pOut, pNode->u.zJContent, pNode->n); + break; + } + case JSON_ARRAY: { + u32 j = 1; + jsonAppendChar(pOut, '['); + for(;;){ + while( j<=pNode->n ){ + if( pNode[j].jnFlags & (JNODE_REMOVE|JNODE_REPLACE) ){ + if( pNode[j].jnFlags & JNODE_REPLACE ){ + jsonAppendSeparator(pOut); + jsonAppendValue(pOut, aReplace[pNode[j].iVal]); + } + }else{ + jsonAppendSeparator(pOut); + jsonRenderNode(&pNode[j], pOut, aReplace); + } + j += jsonNodeSize(&pNode[j]); + } + if( (pNode->jnFlags & JNODE_APPEND)==0 ) break; + pNode = &pNode[pNode->u.iAppend]; + j = 1; + } + jsonAppendChar(pOut, ']'); + break; + } + case JSON_OBJECT: { + u32 j = 1; + jsonAppendChar(pOut, '{'); + for(;;){ + while( j<=pNode->n ){ + if( (pNode[j+1].jnFlags & JNODE_REMOVE)==0 ){ + jsonAppendSeparator(pOut); + jsonRenderNode(&pNode[j], pOut, aReplace); + jsonAppendChar(pOut, ':'); + if( pNode[j+1].jnFlags & JNODE_REPLACE ){ + jsonAppendValue(pOut, aReplace[pNode[j+1].iVal]); + }else{ + jsonRenderNode(&pNode[j+1], pOut, aReplace); + } + } + j += 1 + jsonNodeSize(&pNode[j+1]); + } + if( (pNode->jnFlags & JNODE_APPEND)==0 ) break; + pNode = &pNode[pNode->u.iAppend]; + j = 1; + } + jsonAppendChar(pOut, '}'); + break; + } + } +} + +/* +** Return a JsonNode and all its descendents as a JSON string. +*/ +static void jsonReturnJson( + JsonNode *pNode, /* Node to return */ + sqlite3_context *pCtx, /* Return value for this function */ + sqlite3_value **aReplace /* Array of replacement values */ +){ + JsonString s; + jsonInit(&s, pCtx); + jsonRenderNode(pNode, &s, aReplace); + jsonResult(&s); + sqlite3_result_subtype(pCtx, JSON_SUBTYPE); +} + +/* +** Make the JsonNode the return value of the function. 
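+**
+** In rough terms: a JSON null becomes an SQL NULL, true and false become
+** INTEGER 1 and 0, numeric literals become INTEGER or REAL (an integer
+** literal that overflows 64 bits falls back to REAL), strings become TEXT
+** with any backslash escapes decoded, and arrays or objects are returned
+** as JSON text via jsonReturnJson().
+**
+** The \uXXXX decoder below emits standard UTF-8. The following standalone
+** sketch (utf8Write() is a hypothetical name, not part of this file) shows
+** the same encoding scheme for code points below 0x10000:
+**
+**     static int utf8Write(unsigned v, unsigned char *z){
+**       if( v<=0x7f ){                 // one byte: 0xxxxxxx
+**         z[0] = (unsigned char)v;
+**         return 1;
+**       }else if( v<=0x7ff ){          // two bytes: 110xxxxx 10xxxxxx
+**         z[0] = (unsigned char)(0xc0 | (v>>6));
+**         z[1] = (unsigned char)(0x80 | (v&0x3f));
+**         return 2;
+**       }else{                         // three bytes: 1110xxxx 10xxxxxx 10xxxxxx
+**         z[0] = (unsigned char)(0xe0 | (v>>12));
+**         z[1] = (unsigned char)(0x80 | ((v>>6)&0x3f));
+**         z[2] = (unsigned char)(0x80 | (v&0x3f));
+**         return 3;
+**       }
+**     }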
+*/ +static void jsonReturn( + JsonNode *pNode, /* Node to return */ + sqlite3_context *pCtx, /* Return value for this function */ + sqlite3_value **aReplace /* Array of replacement values */ +){ + switch( pNode->eType ){ + default: { + assert( pNode->eType==JSON_NULL ); + sqlite3_result_null(pCtx); + break; + } + case JSON_TRUE: { + sqlite3_result_int(pCtx, 1); + break; + } + case JSON_FALSE: { + sqlite3_result_int(pCtx, 0); + break; + } + case JSON_INT: { + sqlite3_int64 i = 0; + const char *z = pNode->u.zJContent; + if( z[0]=='-' ){ z++; } + while( z[0]>='0' && z[0]<='9' ){ + unsigned v = *(z++) - '0'; + if( i>=LARGEST_INT64/10 ){ + if( i>LARGEST_INT64/10 ) goto int_as_real; + if( z[0]>='0' && z[0]<='9' ) goto int_as_real; + if( v==9 ) goto int_as_real; + if( v==8 ){ + if( pNode->u.zJContent[0]=='-' ){ + sqlite3_result_int64(pCtx, SMALLEST_INT64); + goto int_done; + }else{ + goto int_as_real; + } + } + } + i = i*10 + v; + } + if( pNode->u.zJContent[0]=='-' ){ i = -i; } + sqlite3_result_int64(pCtx, i); + int_done: + break; + int_as_real: /* fall through to real */; + } + case JSON_REAL: { + double r; +#ifdef SQLITE_AMALGAMATION + const char *z = pNode->u.zJContent; + sqlite3AtoF(z, &r, sqlite3Strlen30(z), SQLITE_UTF8); +#else + r = strtod(pNode->u.zJContent, 0); +#endif + sqlite3_result_double(pCtx, r); + break; + } + case JSON_STRING: { +#if 0 /* Never happens because JNODE_RAW is only set by json_set(), + ** json_insert() and json_replace() and those routines do not + ** call jsonReturn() */ + if( pNode->jnFlags & JNODE_RAW ){ + sqlite3_result_text(pCtx, pNode->u.zJContent, pNode->n, + SQLITE_TRANSIENT); + }else +#endif + assert( (pNode->jnFlags & JNODE_RAW)==0 ); + if( (pNode->jnFlags & JNODE_ESCAPE)==0 ){ + /* JSON formatted without any backslash-escapes */ + sqlite3_result_text(pCtx, pNode->u.zJContent+1, pNode->n-2, + SQLITE_TRANSIENT); + }else{ + /* Translate JSON formatted string into raw text */ + u32 i; + u32 n = pNode->n; + const char *z = pNode->u.zJContent; + char *zOut; + u32 j; + zOut = sqlite3_malloc( n+1 ); + if( zOut==0 ){ + sqlite3_result_error_nomem(pCtx); + break; + } + for(i=1, j=0; i<n-1; i++){ + char c = z[i]; + if( c!='\\' ){ + zOut[j++] = c; + }else{ + c = z[++i]; + if( c=='u' ){ + u32 v = 0, k; + for(k=0; k<4 && i<n-2; i++, k++){ + c = z[i+1]; + if( c>='0' && c<='9' ) v = v*16 + c - '0'; + else if( c>='A' && c<='F' ) v = v*16 + c - 'A' + 10; + else if( c>='a' && c<='f' ) v = v*16 + c - 'a' + 10; + else break; + } + if( v==0 ) break; + if( v<=0x7f ){ + zOut[j++] = (char)v; + }else if( v<=0x7ff ){ + zOut[j++] = (char)(0xc0 | (v>>6)); + zOut[j++] = 0x80 | (v&0x3f); + }else{ + zOut[j++] = (char)(0xe0 | (v>>12)); + zOut[j++] = 0x80 | ((v>>6)&0x3f); + zOut[j++] = 0x80 | (v&0x3f); + } + }else{ + if( c=='b' ){ + c = '\b'; + }else if( c=='f' ){ + c = '\f'; + }else if( c=='n' ){ + c = '\n'; + }else if( c=='r' ){ + c = '\r'; + }else if( c=='t' ){ + c = '\t'; + } + zOut[j++] = c; + } + } + } + zOut[j] = 0; + sqlite3_result_text(pCtx, zOut, j, sqlite3_free); + } + break; + } + case JSON_ARRAY: + case JSON_OBJECT: { + jsonReturnJson(pNode, pCtx, aReplace); + break; + } + } +} + +/* Forward reference */ +static int jsonParseAddNode(JsonParse*,u32,u32,const char*); + +/* +** A macro to hint to the compiler that a function should not be +** inlined. 
+*/ +#if defined(__GNUC__) +# define JSON_NOINLINE __attribute__((noinline)) +#elif defined(_MSC_VER) && _MSC_VER>=1310 +# define JSON_NOINLINE __declspec(noinline) +#else +# define JSON_NOINLINE +#endif + + +static JSON_NOINLINE int jsonParseAddNodeExpand( + JsonParse *pParse, /* Append the node to this object */ + u32 eType, /* Node type */ + u32 n, /* Content size or sub-node count */ + const char *zContent /* Content */ +){ + u32 nNew; + JsonNode *pNew; + assert( pParse->nNode>=pParse->nAlloc ); + if( pParse->oom ) return -1; + nNew = pParse->nAlloc*2 + 10; + pNew = sqlite3_realloc(pParse->aNode, sizeof(JsonNode)*nNew); + if( pNew==0 ){ + pParse->oom = 1; + return -1; + } + pParse->nAlloc = nNew; + pParse->aNode = pNew; + assert( pParse->nNode<pParse->nAlloc ); + return jsonParseAddNode(pParse, eType, n, zContent); +} + +/* +** Create a new JsonNode instance based on the arguments and append that +** instance to the JsonParse. Return the index in pParse->aNode[] of the +** new node, or -1 if a memory allocation fails. +*/ +static int jsonParseAddNode( + JsonParse *pParse, /* Append the node to this object */ + u32 eType, /* Node type */ + u32 n, /* Content size or sub-node count */ + const char *zContent /* Content */ +){ + JsonNode *p; + if( pParse->nNode>=pParse->nAlloc ){ + return jsonParseAddNodeExpand(pParse, eType, n, zContent); + } + p = &pParse->aNode[pParse->nNode]; + p->eType = (u8)eType; + p->jnFlags = 0; + p->iVal = 0; + p->n = n; + p->u.zJContent = zContent; + return pParse->nNode++; +} + +/* +** Parse a single JSON value which begins at pParse->zJson[i]. Return the +** index of the first character past the end of the value parsed. +** +** Return negative for a syntax error. Special cases: return -2 if the +** first non-whitespace character is '}' and return -3 if the first +** non-whitespace character is ']'. 
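+**
+** The -2 and -3 returns are what make empty containers work: when parsing
+** "{}" or "[]", the first recursive call for a would-be element sees the
+** closing bracket, returns the sentinel, and the caller accepts an object
+** or array with zero children.
+**
+** Illustrative trace for the input "[true,7]":
+**
+**     jsonParseValue(p,0)  sees '['   adds node 0 = JSON_ARRAY
+**       jsonParseValue(p,1)  "true"   adds node 1 = JSON_TRUE,  returns 5
+**       jsonParseValue(p,6)  "7"      adds node 2 = JSON_INT,   returns 7
+**     node 0 gets n = 2 and the outer call returns 8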
+*/ +static int jsonParseValue(JsonParse *pParse, u32 i){ + char c; + u32 j; + int iThis; + int x; + JsonNode *pNode; + while( safe_isspace(pParse->zJson[i]) ){ i++; } + if( (c = pParse->zJson[i])=='{' ){ + /* Parse object */ + iThis = jsonParseAddNode(pParse, JSON_OBJECT, 0, 0); + if( iThis<0 ) return -1; + for(j=i+1;;j++){ + while( safe_isspace(pParse->zJson[j]) ){ j++; } + x = jsonParseValue(pParse, j); + if( x<0 ){ + if( x==(-2) && pParse->nNode==(u32)iThis+1 ) return j+1; + return -1; + } + if( pParse->oom ) return -1; + pNode = &pParse->aNode[pParse->nNode-1]; + if( pNode->eType!=JSON_STRING ) return -1; + pNode->jnFlags |= JNODE_LABEL; + j = x; + while( safe_isspace(pParse->zJson[j]) ){ j++; } + if( pParse->zJson[j]!=':' ) return -1; + j++; + x = jsonParseValue(pParse, j); + if( x<0 ) return -1; + j = x; + while( safe_isspace(pParse->zJson[j]) ){ j++; } + c = pParse->zJson[j]; + if( c==',' ) continue; + if( c!='}' ) return -1; + break; + } + pParse->aNode[iThis].n = pParse->nNode - (u32)iThis - 1; + return j+1; + }else if( c=='[' ){ + /* Parse array */ + iThis = jsonParseAddNode(pParse, JSON_ARRAY, 0, 0); + if( iThis<0 ) return -1; + for(j=i+1;;j++){ + while( safe_isspace(pParse->zJson[j]) ){ j++; } + x = jsonParseValue(pParse, j); + if( x<0 ){ + if( x==(-3) && pParse->nNode==(u32)iThis+1 ) return j+1; + return -1; + } + j = x; + while( safe_isspace(pParse->zJson[j]) ){ j++; } + c = pParse->zJson[j]; + if( c==',' ) continue; + if( c!=']' ) return -1; + break; + } + pParse->aNode[iThis].n = pParse->nNode - (u32)iThis - 1; + return j+1; + }else if( c=='"' ){ + /* Parse string */ + u8 jnFlags = 0; + j = i+1; + for(;;){ + c = pParse->zJson[j]; + if( c==0 ) return -1; + if( c=='\\' ){ + c = pParse->zJson[++j]; + if( c==0 ) return -1; + jnFlags = JNODE_ESCAPE; + }else if( c=='"' ){ + break; + } + j++; + } + jsonParseAddNode(pParse, JSON_STRING, j+1-i, &pParse->zJson[i]); + if( !pParse->oom ) pParse->aNode[pParse->nNode-1].jnFlags = jnFlags; + return j+1; + }else if( c=='n' + && strncmp(pParse->zJson+i,"null",4)==0 + && !safe_isalnum(pParse->zJson[i+4]) ){ + jsonParseAddNode(pParse, JSON_NULL, 0, 0); + return i+4; + }else if( c=='t' + && strncmp(pParse->zJson+i,"true",4)==0 + && !safe_isalnum(pParse->zJson[i+4]) ){ + jsonParseAddNode(pParse, JSON_TRUE, 0, 0); + return i+4; + }else if( c=='f' + && strncmp(pParse->zJson+i,"false",5)==0 + && !safe_isalnum(pParse->zJson[i+5]) ){ + jsonParseAddNode(pParse, JSON_FALSE, 0, 0); + return i+5; + }else if( c=='-' || (c>='0' && c<='9') ){ + /* Parse number */ + u8 seenDP = 0; + u8 seenE = 0; + j = i+1; + for(;; j++){ + c = pParse->zJson[j]; + if( c>='0' && c<='9' ) continue; + if( c=='.' ){ + if( pParse->zJson[j-1]=='-' ) return -1; + if( seenDP ) return -1; + seenDP = 1; + continue; + } + if( c=='e' || c=='E' ){ + if( pParse->zJson[j-1]<'0' ) return -1; + if( seenE ) return -1; + seenDP = seenE = 1; + c = pParse->zJson[j+1]; + if( c=='+' || c=='-' ){ + j++; + c = pParse->zJson[j+1]; + } + if( c<'0' || c>'9' ) return -1; + continue; + } + break; + } + if( pParse->zJson[j-1]<'0' ) return -1; + jsonParseAddNode(pParse, seenDP ? JSON_REAL : JSON_INT, + j - i, &pParse->zJson[i]); + return j; + }else if( c=='}' ){ + return -2; /* End of {...} */ + }else if( c==']' ){ + return -3; /* End of [...] */ + }else if( c==0 ){ + return 0; /* End of file */ + }else{ + return -1; /* Syntax error */ + } +} + +/* +** Parse a complete JSON string. Return 0 on success or non-zero if there +** are any errors. If an error occurs, free all memory associated with +** pParse. 
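+**
+** A typical caller in this file (see the scalar functions below) follows a
+** sketch along these lines, where ctx stands for an sqlite3_context and
+** zJson for the input text:
+**
+**     JsonParse x;
+**     if( jsonParse(&x, ctx, zJson) ) return;   // error already reported via ctx
+**     // ... walk x.aNode[0] through x.aNode[x.nNode-1] here ...
+**     jsonParseReset(&x);                       // release aNode[] and aUp[]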
+** +** pParse is uninitialized when this routine is called. +*/ +static int jsonParse( + JsonParse *pParse, /* Initialize and fill this JsonParse object */ + sqlite3_context *pCtx, /* Report errors here */ + const char *zJson /* Input JSON text to be parsed */ +){ + int i; + memset(pParse, 0, sizeof(*pParse)); + if( zJson==0 ) return 1; + pParse->zJson = zJson; + i = jsonParseValue(pParse, 0); + if( pParse->oom ) i = -1; + if( i>0 ){ + while( safe_isspace(zJson[i]) ) i++; + if( zJson[i] ) i = -1; + } + if( i<=0 ){ + if( pCtx!=0 ){ + if( pParse->oom ){ + sqlite3_result_error_nomem(pCtx); + }else{ + sqlite3_result_error(pCtx, "malformed JSON", -1); + } + } + jsonParseReset(pParse); + return 1; + } + return 0; +} + +/* Mark node i of pParse as being a child of iParent. Call recursively +** to fill in all the descendants of node i. +*/ +static void jsonParseFillInParentage(JsonParse *pParse, u32 i, u32 iParent){ + JsonNode *pNode = &pParse->aNode[i]; + u32 j; + pParse->aUp[i] = iParent; + switch( pNode->eType ){ + case JSON_ARRAY: { + for(j=1; j<=pNode->n; j += jsonNodeSize(pNode+j)){ + jsonParseFillInParentage(pParse, i+j, i); + } + break; + } + case JSON_OBJECT: { + for(j=1; j<=pNode->n; j += jsonNodeSize(pNode+j+1)+1){ + pParse->aUp[i+j] = i; + jsonParseFillInParentage(pParse, i+j+1, i); + } + break; + } + default: { + break; + } + } +} + +/* +** Compute the parentage of all nodes in a completed parse. +*/ +static int jsonParseFindParents(JsonParse *pParse){ + u32 *aUp; + assert( pParse->aUp==0 ); + aUp = pParse->aUp = sqlite3_malloc( sizeof(u32)*pParse->nNode ); + if( aUp==0 ){ + pParse->oom = 1; + return SQLITE_NOMEM; + } + jsonParseFillInParentage(pParse, 0, 0); + return SQLITE_OK; +} + +/* +** Compare the OBJECT label at pNode against zKey,nKey. Return true on +** a match. +*/ +static int jsonLabelCompare(JsonNode *pNode, const char *zKey, u32 nKey){ + if( pNode->jnFlags & JNODE_RAW ){ + if( pNode->n!=nKey ) return 0; + return strncmp(pNode->u.zJContent, zKey, nKey)==0; + }else{ + if( pNode->n!=nKey+2 ) return 0; + return strncmp(pNode->u.zJContent+1, zKey, nKey)==0; + } +} + +/* forward declaration */ +static JsonNode *jsonLookupAppend(JsonParse*,const char*,int*,const char**); + +/* +** Search along zPath to find the node specified. Return a pointer +** to that node, or NULL if zPath is malformed or if there is no such +** node. +** +** If pApnd!=0, then try to append new nodes to complete zPath if it is +** possible to do so and if no existing node corresponds to zPath. If +** new nodes are appended *pApnd is set to 1. +*/ +static JsonNode *jsonLookupStep( + JsonParse *pParse, /* The JSON to search */ + u32 iRoot, /* Begin the search at this node */ + const char *zPath, /* The path to search */ + int *pApnd, /* Append nodes to complete path if not NULL */ + const char **pzErr /* Make *pzErr point to any syntax error in zPath */ +){ + u32 i, j, nKey; + const char *zKey; + JsonNode *pRoot = &pParse->aNode[iRoot]; + if( zPath[0]==0 ) return pRoot; + if( zPath[0]=='.' ){ + if( pRoot->eType!=JSON_OBJECT ) return 0; + zPath++; + if( zPath[0]=='"' ){ + zKey = zPath + 1; + for(i=1; zPath[i] && zPath[i]!='"'; i++){} + nKey = i-1; + if( zPath[i] ){ + i++; + }else{ + *pzErr = zPath; + return 0; + } + }else{ + zKey = zPath; + for(i=0; zPath[i] && zPath[i]!='.' 
&& zPath[i]!='['; i++){} + nKey = i; + } + if( nKey==0 ){ + *pzErr = zPath; + return 0; + } + j = 1; + for(;;){ + while( j<=pRoot->n ){ + if( jsonLabelCompare(pRoot+j, zKey, nKey) ){ + return jsonLookupStep(pParse, iRoot+j+1, &zPath[i], pApnd, pzErr); + } + j++; + j += jsonNodeSize(&pRoot[j]); + } + if( (pRoot->jnFlags & JNODE_APPEND)==0 ) break; + iRoot += pRoot->u.iAppend; + pRoot = &pParse->aNode[iRoot]; + j = 1; + } + if( pApnd ){ + u32 iStart, iLabel; + JsonNode *pNode; + iStart = jsonParseAddNode(pParse, JSON_OBJECT, 2, 0); + iLabel = jsonParseAddNode(pParse, JSON_STRING, i, zPath); + zPath += i; + pNode = jsonLookupAppend(pParse, zPath, pApnd, pzErr); + if( pParse->oom ) return 0; + if( pNode ){ + pRoot = &pParse->aNode[iRoot]; + pRoot->u.iAppend = iStart - iRoot; + pRoot->jnFlags |= JNODE_APPEND; + pParse->aNode[iLabel].jnFlags |= JNODE_RAW; + } + return pNode; + } + }else if( zPath[0]=='[' && safe_isdigit(zPath[1]) ){ + if( pRoot->eType!=JSON_ARRAY ) return 0; + i = 0; + j = 1; + while( safe_isdigit(zPath[j]) ){ + i = i*10 + zPath[j] - '0'; + j++; + } + if( zPath[j]!=']' ){ + *pzErr = zPath; + return 0; + } + zPath += j + 1; + j = 1; + for(;;){ + while( j<=pRoot->n && (i>0 || (pRoot[j].jnFlags & JNODE_REMOVE)!=0) ){ + if( (pRoot[j].jnFlags & JNODE_REMOVE)==0 ) i--; + j += jsonNodeSize(&pRoot[j]); + } + if( (pRoot->jnFlags & JNODE_APPEND)==0 ) break; + iRoot += pRoot->u.iAppend; + pRoot = &pParse->aNode[iRoot]; + j = 1; + } + if( j<=pRoot->n ){ + return jsonLookupStep(pParse, iRoot+j, zPath, pApnd, pzErr); + } + if( i==0 && pApnd ){ + u32 iStart; + JsonNode *pNode; + iStart = jsonParseAddNode(pParse, JSON_ARRAY, 1, 0); + pNode = jsonLookupAppend(pParse, zPath, pApnd, pzErr); + if( pParse->oom ) return 0; + if( pNode ){ + pRoot = &pParse->aNode[iRoot]; + pRoot->u.iAppend = iStart - iRoot; + pRoot->jnFlags |= JNODE_APPEND; + } + return pNode; + } + }else{ + *pzErr = zPath; + } + return 0; +} + +/* +** Append content to pParse that will complete zPath. Return a pointer +** to the inserted node, or return NULL if the append fails. +*/ +static JsonNode *jsonLookupAppend( + JsonParse *pParse, /* Append content to the JSON parse */ + const char *zPath, /* Description of content to append */ + int *pApnd, /* Set this flag to 1 */ + const char **pzErr /* Make this point to any syntax error */ +){ + *pApnd = 1; + if( zPath[0]==0 ){ + jsonParseAddNode(pParse, JSON_NULL, 0, 0); + return pParse->oom ? 0 : &pParse->aNode[pParse->nNode-1]; + } + if( zPath[0]=='.' ){ + jsonParseAddNode(pParse, JSON_OBJECT, 0, 0); + }else if( strncmp(zPath,"[0]",3)==0 ){ + jsonParseAddNode(pParse, JSON_ARRAY, 0, 0); + }else{ + return 0; + } + if( pParse->oom ) return 0; + return jsonLookupStep(pParse, pParse->nNode-1, zPath, pApnd, pzErr); +} + +/* +** Return the text of a syntax error message on a JSON path. Space is +** obtained from sqlite3_malloc(). +*/ +static char *jsonPathSyntaxError(const char *zErr){ + return sqlite3_mprintf("JSON path error near '%q'", zErr); +} + +/* +** Do a node lookup using zPath. Return a pointer to the node on success. +** Return NULL if not found or if there is an error. +** +** On an error, write an error message into pCtx and increment the +** pParse->nErr counter. +** +** If pApnd!=NULL then try to append missing nodes and set *pApnd = 1 if +** nodes are appended. 
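+**
+** A path starts with '$' (the document itself), ".NAME" or ."NAME" descends
+** into an object member, and "[NNN]" selects an array element. For example,
+** against the JSON text '{"a":{"b":[10,20]}}':
+**
+**     $            the whole document
+**     $.a.b        the array [10,20]
+**     $.a.b[1]     the integer 20
+**     $."a".b[0]   the integer 10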
+*/ +static JsonNode *jsonLookup( + JsonParse *pParse, /* The JSON to search */ + const char *zPath, /* The path to search */ + int *pApnd, /* Append nodes to complete path if not NULL */ + sqlite3_context *pCtx /* Report errors here, if not NULL */ +){ + const char *zErr = 0; + JsonNode *pNode = 0; + char *zMsg; + + if( zPath==0 ) return 0; + if( zPath[0]!='$' ){ + zErr = zPath; + goto lookup_err; + } + zPath++; + pNode = jsonLookupStep(pParse, 0, zPath, pApnd, &zErr); + if( zErr==0 ) return pNode; + +lookup_err: + pParse->nErr++; + assert( zErr!=0 && pCtx!=0 ); + zMsg = jsonPathSyntaxError(zErr); + if( zMsg ){ + sqlite3_result_error(pCtx, zMsg, -1); + sqlite3_free(zMsg); + }else{ + sqlite3_result_error_nomem(pCtx); + } + return 0; +} + + +/* +** Report the wrong number of arguments for json_insert(), json_replace() +** or json_set(). +*/ +static void jsonWrongNumArgs( + sqlite3_context *pCtx, + const char *zFuncName +){ + char *zMsg = sqlite3_mprintf("json_%s() needs an odd number of arguments", + zFuncName); + sqlite3_result_error(pCtx, zMsg, -1); + sqlite3_free(zMsg); +} + + +/**************************************************************************** +** SQL functions used for testing and debugging +****************************************************************************/ + +#ifdef SQLITE_DEBUG +/* +** The json_parse(JSON) function returns a string which describes +** a parse of the JSON provided. Or it returns NULL if JSON is not +** well-formed. +*/ +static void jsonParseFunc( + sqlite3_context *ctx, + int argc, + sqlite3_value **argv +){ + JsonString s; /* Output string - not real JSON */ + JsonParse x; /* The parse */ + u32 i; + + assert( argc==1 ); + if( jsonParse(&x, ctx, (const char*)sqlite3_value_text(argv[0])) ) return; + jsonParseFindParents(&x); + jsonInit(&s, ctx); + for(i=0; i<x.nNode; i++){ + const char *zType; + if( x.aNode[i].jnFlags & JNODE_LABEL ){ + assert( x.aNode[i].eType==JSON_STRING ); + zType = "label"; + }else{ + zType = jsonType[x.aNode[i].eType]; + } + jsonPrintf(100, &s,"node %3u: %7s n=%-4d up=%-4d", + i, zType, x.aNode[i].n, x.aUp[i]); + if( x.aNode[i].u.zJContent!=0 ){ + jsonAppendRaw(&s, " ", 1); + jsonAppendRaw(&s, x.aNode[i].u.zJContent, x.aNode[i].n); + } + jsonAppendRaw(&s, "\n", 1); + } + jsonParseReset(&x); + jsonResult(&s); +} + +/* +** The json_test1(JSON) function return true (1) if the input is JSON +** text generated by another json function. It returns (0) if the input +** is not known to be JSON. +*/ +static void jsonTest1Func( + sqlite3_context *ctx, + int argc, + sqlite3_value **argv +){ + UNUSED_PARAM(argc); + sqlite3_result_int(ctx, sqlite3_value_subtype(argv[0])==JSON_SUBTYPE); +} +#endif /* SQLITE_DEBUG */ + +/**************************************************************************** +** Scalar SQL function implementations +****************************************************************************/ + +/* +** Implementation of the json_array(VALUE,...) function. Return a JSON +** array that contains all values given in arguments. Or if any argument +** is a BLOB, throw an error. 
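+**
+** For example, json_array(1, 2.5, 'three', NULL) produces the JSON text
+**
+**     [1,2.5,"three",null]
+**
+** while json_array(x'0102') fails with "JSON cannot hold BLOB values".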
+*/ +static void jsonArrayFunc( + sqlite3_context *ctx, + int argc, + sqlite3_value **argv +){ + int i; + JsonString jx; + + jsonInit(&jx, ctx); + jsonAppendChar(&jx, '['); + for(i=0; i<argc; i++){ + jsonAppendSeparator(&jx); + jsonAppendValue(&jx, argv[i]); + } + jsonAppendChar(&jx, ']'); + jsonResult(&jx); + sqlite3_result_subtype(ctx, JSON_SUBTYPE); +} + + +/* +** json_array_length(JSON) +** json_array_length(JSON, PATH) +** +** Return the number of elements in the top-level JSON array. +** Return 0 if the input is not a well-formed JSON array. +*/ +static void jsonArrayLengthFunc( + sqlite3_context *ctx, + int argc, + sqlite3_value **argv +){ + JsonParse x; /* The parse */ + sqlite3_int64 n = 0; + u32 i; + JsonNode *pNode; + + if( jsonParse(&x, ctx, (const char*)sqlite3_value_text(argv[0])) ) return; + assert( x.nNode ); + if( argc==2 ){ + const char *zPath = (const char*)sqlite3_value_text(argv[1]); + pNode = jsonLookup(&x, zPath, 0, ctx); + }else{ + pNode = x.aNode; + } + if( pNode==0 ){ + x.nErr = 1; + }else if( pNode->eType==JSON_ARRAY ){ + assert( (pNode->jnFlags & JNODE_APPEND)==0 ); + for(i=1; i<=pNode->n; n++){ + i += jsonNodeSize(&pNode[i]); + } + } + if( x.nErr==0 ) sqlite3_result_int64(ctx, n); + jsonParseReset(&x); +} + +/* +** json_extract(JSON, PATH, ...) +** +** Return the element described by PATH. Return NULL if there is no +** PATH element. If there are multiple PATHs, then return a JSON array +** with the result from each path. Throw an error if the JSON or any PATH +** is malformed. +*/ +static void jsonExtractFunc( + sqlite3_context *ctx, + int argc, + sqlite3_value **argv +){ + JsonParse x; /* The parse */ + JsonNode *pNode; + const char *zPath; + JsonString jx; + int i; + + if( argc<2 ) return; + if( jsonParse(&x, ctx, (const char*)sqlite3_value_text(argv[0])) ) return; + jsonInit(&jx, ctx); + jsonAppendChar(&jx, '['); + for(i=1; i<argc; i++){ + zPath = (const char*)sqlite3_value_text(argv[i]); + pNode = jsonLookup(&x, zPath, 0, ctx); + if( x.nErr ) break; + if( argc>2 ){ + jsonAppendSeparator(&jx); + if( pNode ){ + jsonRenderNode(pNode, &jx, 0); + }else{ + jsonAppendRaw(&jx, "null", 4); + } + }else if( pNode ){ + jsonReturn(pNode, ctx, 0); + } + } + if( argc>2 && i==argc ){ + jsonAppendChar(&jx, ']'); + jsonResult(&jx); + sqlite3_result_subtype(ctx, JSON_SUBTYPE); + } + jsonReset(&jx); + jsonParseReset(&x); +} + +/* +** Implementation of the json_object(NAME,VALUE,...) function. Return a JSON +** object that contains all name/value given in arguments. Or if any name +** is not a string or if any value is a BLOB, throw an error. +*/ +static void jsonObjectFunc( + sqlite3_context *ctx, + int argc, + sqlite3_value **argv +){ + int i; + JsonString jx; + const char *z; + u32 n; + + if( argc&1 ){ + sqlite3_result_error(ctx, "json_object() requires an even number " + "of arguments", -1); + return; + } + jsonInit(&jx, ctx); + jsonAppendChar(&jx, '{'); + for(i=0; i<argc; i+=2){ + if( sqlite3_value_type(argv[i])!=SQLITE_TEXT ){ + sqlite3_result_error(ctx, "json_object() labels must be TEXT", -1); + jsonReset(&jx); + return; + } + jsonAppendSeparator(&jx); + z = (const char*)sqlite3_value_text(argv[i]); + n = (u32)sqlite3_value_bytes(argv[i]); + jsonAppendString(&jx, z, n); + jsonAppendChar(&jx, ':'); + jsonAppendValue(&jx, argv[i+1]); + } + jsonAppendChar(&jx, '}'); + jsonResult(&jx); + sqlite3_result_subtype(ctx, JSON_SUBTYPE); +} + + +/* +** json_remove(JSON, PATH, ...) +** +** Remove the named elements from JSON and return the result. 
malformed +** JSON or PATH arguments result in an error. +*/ +static void jsonRemoveFunc( + sqlite3_context *ctx, + int argc, + sqlite3_value **argv +){ + JsonParse x; /* The parse */ + JsonNode *pNode; + const char *zPath; + u32 i; + + if( argc<1 ) return; + if( jsonParse(&x, ctx, (const char*)sqlite3_value_text(argv[0])) ) return; + assert( x.nNode ); + for(i=1; i<(u32)argc; i++){ + zPath = (const char*)sqlite3_value_text(argv[i]); + if( zPath==0 ) goto remove_done; + pNode = jsonLookup(&x, zPath, 0, ctx); + if( x.nErr ) goto remove_done; + if( pNode ) pNode->jnFlags |= JNODE_REMOVE; + } + if( (x.aNode[0].jnFlags & JNODE_REMOVE)==0 ){ + jsonReturnJson(x.aNode, ctx, 0); + } +remove_done: + jsonParseReset(&x); +} + +/* +** json_replace(JSON, PATH, VALUE, ...) +** +** Replace the value at PATH with VALUE. If PATH does not already exist, +** this routine is a no-op. If JSON or PATH is malformed, throw an error. +*/ +static void jsonReplaceFunc( + sqlite3_context *ctx, + int argc, + sqlite3_value **argv +){ + JsonParse x; /* The parse */ + JsonNode *pNode; + const char *zPath; + u32 i; + + if( argc<1 ) return; + if( (argc&1)==0 ) { + jsonWrongNumArgs(ctx, "replace"); + return; + } + if( jsonParse(&x, ctx, (const char*)sqlite3_value_text(argv[0])) ) return; + assert( x.nNode ); + for(i=1; i<(u32)argc; i+=2){ + zPath = (const char*)sqlite3_value_text(argv[i]); + pNode = jsonLookup(&x, zPath, 0, ctx); + if( x.nErr ) goto replace_err; + if( pNode ){ + pNode->jnFlags |= (u8)JNODE_REPLACE; + pNode->iVal = (u8)(i+1); + } + } + if( x.aNode[0].jnFlags & JNODE_REPLACE ){ + sqlite3_result_value(ctx, argv[x.aNode[0].iVal]); + }else{ + jsonReturnJson(x.aNode, ctx, argv); + } +replace_err: + jsonParseReset(&x); +} + +/* +** json_set(JSON, PATH, VALUE, ...) +** +** Set the value at PATH to VALUE. Create the PATH if it does not already +** exist. Overwrite existing values that do exist. +** If JSON or PATH is malformed, throw an error. +** +** json_insert(JSON, PATH, VALUE, ...) +** +** Create PATH and initialize it to VALUE. If PATH already exists, this +** routine is a no-op. If JSON or PATH is malformed, throw an error. +*/ +static void jsonSetFunc( + sqlite3_context *ctx, + int argc, + sqlite3_value **argv +){ + JsonParse x; /* The parse */ + JsonNode *pNode; + const char *zPath; + u32 i; + int bApnd; + int bIsSet = *(int*)sqlite3_user_data(ctx); + + if( argc<1 ) return; + if( (argc&1)==0 ) { + jsonWrongNumArgs(ctx, bIsSet ? "set" : "insert"); + return; + } + if( jsonParse(&x, ctx, (const char*)sqlite3_value_text(argv[0])) ) return; + assert( x.nNode ); + for(i=1; i<(u32)argc; i+=2){ + zPath = (const char*)sqlite3_value_text(argv[i]); + bApnd = 0; + pNode = jsonLookup(&x, zPath, &bApnd, ctx); + if( x.oom ){ + sqlite3_result_error_nomem(ctx); + goto jsonSetDone; + }else if( x.nErr ){ + goto jsonSetDone; + }else if( pNode && (bApnd || bIsSet) ){ + pNode->jnFlags |= (u8)JNODE_REPLACE; + pNode->iVal = (u8)(i+1); + } + } + if( x.aNode[0].jnFlags & JNODE_REPLACE ){ + sqlite3_result_value(ctx, argv[x.aNode[0].iVal]); + }else{ + jsonReturnJson(x.aNode, ctx, argv); + } +jsonSetDone: + jsonParseReset(&x); +} + +/* +** json_type(JSON) +** json_type(JSON, PATH) +** +** Return the top-level "type" of a JSON string. Throw an error if +** either the JSON or PATH inputs are not well-formed. 
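+**
+** The result is one of the strings in the jsonType[] table: 'null', 'true',
+** 'false', 'integer', 'real', 'text', 'array' or 'object'. For example:
+**
+**     json_type('{"a":[2,3.5,true,null,"x"]}')          ->  'object'
+**     json_type('{"a":[2,3.5,true,null,"x"]}','$.a')    ->  'array'
+**     json_type('{"a":[2,3.5,true,null,"x"]}','$.a[1]') ->  'real'
+**     json_type('{"a":[2,3.5,true,null,"x"]}','$.a[4]') ->  'text'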
+*/ +static void jsonTypeFunc( + sqlite3_context *ctx, + int argc, + sqlite3_value **argv +){ + JsonParse x; /* The parse */ + const char *zPath; + JsonNode *pNode; + + if( jsonParse(&x, ctx, (const char*)sqlite3_value_text(argv[0])) ) return; + assert( x.nNode ); + if( argc==2 ){ + zPath = (const char*)sqlite3_value_text(argv[1]); + pNode = jsonLookup(&x, zPath, 0, ctx); + }else{ + pNode = x.aNode; + } + if( pNode ){ + sqlite3_result_text(ctx, jsonType[pNode->eType], -1, SQLITE_STATIC); + } + jsonParseReset(&x); +} + +/* +** json_valid(JSON) +** +** Return 1 if JSON is a well-formed JSON string according to RFC-7159. +** Return 0 otherwise. +*/ +static void jsonValidFunc( + sqlite3_context *ctx, + int argc, + sqlite3_value **argv +){ + JsonParse x; /* The parse */ + int rc = 0; + + UNUSED_PARAM(argc); + if( jsonParse(&x, 0, (const char*)sqlite3_value_text(argv[0]))==0 ){ + rc = 1; + } + jsonParseReset(&x); + sqlite3_result_int(ctx, rc); +} + + +/**************************************************************************** +** Aggregate SQL function implementations +****************************************************************************/ +/* +** json_group_array(VALUE) +** +** Return a JSON array composed of all values in the aggregate. +*/ +static void jsonArrayStep( + sqlite3_context *ctx, + int argc, + sqlite3_value **argv +){ + JsonString *pStr; + UNUSED_PARAM(argc); + pStr = (JsonString*)sqlite3_aggregate_context(ctx, sizeof(*pStr)); + if( pStr ){ + if( pStr->zBuf==0 ){ + jsonInit(pStr, ctx); + jsonAppendChar(pStr, '['); + }else{ + jsonAppendChar(pStr, ','); + pStr->pCtx = ctx; + } + jsonAppendValue(pStr, argv[0]); + } +} +static void jsonArrayFinal(sqlite3_context *ctx){ + JsonString *pStr; + pStr = (JsonString*)sqlite3_aggregate_context(ctx, 0); + if( pStr ){ + pStr->pCtx = ctx; + jsonAppendChar(pStr, ']'); + if( pStr->bErr ){ + if( pStr->bErr==1 ) sqlite3_result_error_nomem(ctx); + assert( pStr->bStatic ); + }else{ + sqlite3_result_text(ctx, pStr->zBuf, pStr->nUsed, + pStr->bStatic ? SQLITE_TRANSIENT : sqlite3_free); + pStr->bStatic = 1; + } + }else{ + sqlite3_result_text(ctx, "[]", 2, SQLITE_STATIC); + } + sqlite3_result_subtype(ctx, JSON_SUBTYPE); +} + +/* +** json_group_obj(NAME,VALUE) +** +** Return a JSON object composed of all names and values in the aggregate. +*/ +static void jsonObjectStep( + sqlite3_context *ctx, + int argc, + sqlite3_value **argv +){ + JsonString *pStr; + const char *z; + u32 n; + UNUSED_PARAM(argc); + pStr = (JsonString*)sqlite3_aggregate_context(ctx, sizeof(*pStr)); + if( pStr ){ + if( pStr->zBuf==0 ){ + jsonInit(pStr, ctx); + jsonAppendChar(pStr, '{'); + }else{ + jsonAppendChar(pStr, ','); + pStr->pCtx = ctx; + } + z = (const char*)sqlite3_value_text(argv[0]); + n = (u32)sqlite3_value_bytes(argv[0]); + jsonAppendString(pStr, z, n); + jsonAppendChar(pStr, ':'); + jsonAppendValue(pStr, argv[1]); + } +} +static void jsonObjectFinal(sqlite3_context *ctx){ + JsonString *pStr; + pStr = (JsonString*)sqlite3_aggregate_context(ctx, 0); + if( pStr ){ + jsonAppendChar(pStr, '}'); + if( pStr->bErr ){ + if( pStr->bErr==0 ) sqlite3_result_error_nomem(ctx); + assert( pStr->bStatic ); + }else{ + sqlite3_result_text(ctx, pStr->zBuf, pStr->nUsed, + pStr->bStatic ? 
SQLITE_TRANSIENT : sqlite3_free); + pStr->bStatic = 1; + } + }else{ + sqlite3_result_text(ctx, "{}", 2, SQLITE_STATIC); + } + sqlite3_result_subtype(ctx, JSON_SUBTYPE); +} + + +#ifndef SQLITE_OMIT_VIRTUALTABLE +/**************************************************************************** +** The json_each virtual table +****************************************************************************/ +typedef struct JsonEachCursor JsonEachCursor; +struct JsonEachCursor { + sqlite3_vtab_cursor base; /* Base class - must be first */ + u32 iRowid; /* The rowid */ + u32 iBegin; /* The first node of the scan */ + u32 i; /* Index in sParse.aNode[] of current row */ + u32 iEnd; /* EOF when i equals or exceeds this value */ + u8 eType; /* Type of top-level element */ + u8 bRecursive; /* True for json_tree(). False for json_each() */ + char *zJson; /* Input JSON */ + char *zRoot; /* Path by which to filter zJson */ + JsonParse sParse; /* Parse of the input JSON */ +}; + +/* Constructor for the json_each virtual table */ +static int jsonEachConnect( + sqlite3 *db, + void *pAux, + int argc, const char *const*argv, + sqlite3_vtab **ppVtab, + char **pzErr +){ + sqlite3_vtab *pNew; + int rc; + +/* Column numbers */ +#define JEACH_KEY 0 +#define JEACH_VALUE 1 +#define JEACH_TYPE 2 +#define JEACH_ATOM 3 +#define JEACH_ID 4 +#define JEACH_PARENT 5 +#define JEACH_FULLKEY 6 +#define JEACH_PATH 7 +#define JEACH_JSON 8 +#define JEACH_ROOT 9 + + UNUSED_PARAM(pzErr); + UNUSED_PARAM(argv); + UNUSED_PARAM(argc); + UNUSED_PARAM(pAux); + rc = sqlite3_declare_vtab(db, + "CREATE TABLE x(key,value,type,atom,id,parent,fullkey,path," + "json HIDDEN,root HIDDEN)"); + if( rc==SQLITE_OK ){ + pNew = *ppVtab = sqlite3_malloc( sizeof(*pNew) ); + if( pNew==0 ) return SQLITE_NOMEM; + memset(pNew, 0, sizeof(*pNew)); + } + return rc; +} + +/* destructor for json_each virtual table */ +static int jsonEachDisconnect(sqlite3_vtab *pVtab){ + sqlite3_free(pVtab); + return SQLITE_OK; +} + +/* constructor for a JsonEachCursor object for json_each(). */ +static int jsonEachOpenEach(sqlite3_vtab *p, sqlite3_vtab_cursor **ppCursor){ + JsonEachCursor *pCur; + + UNUSED_PARAM(p); + pCur = sqlite3_malloc( sizeof(*pCur) ); + if( pCur==0 ) return SQLITE_NOMEM; + memset(pCur, 0, sizeof(*pCur)); + *ppCursor = &pCur->base; + return SQLITE_OK; +} + +/* constructor for a JsonEachCursor object for json_tree(). */ +static int jsonEachOpenTree(sqlite3_vtab *p, sqlite3_vtab_cursor **ppCursor){ + int rc = jsonEachOpenEach(p, ppCursor); + if( rc==SQLITE_OK ){ + JsonEachCursor *pCur = (JsonEachCursor*)*ppCursor; + pCur->bRecursive = 1; + } + return rc; +} + +/* Reset a JsonEachCursor back to its original state. Free any memory +** held. 
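+** This is called both from jsonEachClose() and at the top of
+** jsonEachFilter(), so a single cursor can be reused across scans. For
+** orientation, one such scan (illustrative) over
+**
+**     SELECT key, value, type, fullkey FROM json_each('{"a":1,"b":[2,3]}');
+**
+** visits two rows: ('a', 1, 'integer', '$.a') and ('b', '[2,3]', 'array', '$.b').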
*/ +static void jsonEachCursorReset(JsonEachCursor *p){ + sqlite3_free(p->zJson); + sqlite3_free(p->zRoot); + jsonParseReset(&p->sParse); + p->iRowid = 0; + p->i = 0; + p->iEnd = 0; + p->eType = 0; + p->zJson = 0; + p->zRoot = 0; +} + +/* Destructor for a jsonEachCursor object */ +static int jsonEachClose(sqlite3_vtab_cursor *cur){ + JsonEachCursor *p = (JsonEachCursor*)cur; + jsonEachCursorReset(p); + sqlite3_free(cur); + return SQLITE_OK; +} + +/* Return TRUE if the jsonEachCursor object has been advanced off the end +** of the JSON object */ +static int jsonEachEof(sqlite3_vtab_cursor *cur){ + JsonEachCursor *p = (JsonEachCursor*)cur; + return p->i >= p->iEnd; +} + +/* Advance the cursor to the next element for json_tree() */ +static int jsonEachNext(sqlite3_vtab_cursor *cur){ + JsonEachCursor *p = (JsonEachCursor*)cur; + if( p->bRecursive ){ + if( p->sParse.aNode[p->i].jnFlags & JNODE_LABEL ) p->i++; + p->i++; + p->iRowid++; + if( p->i<p->iEnd ){ + u32 iUp = p->sParse.aUp[p->i]; + JsonNode *pUp = &p->sParse.aNode[iUp]; + p->eType = pUp->eType; + if( pUp->eType==JSON_ARRAY ){ + if( iUp==p->i-1 ){ + pUp->u.iKey = 0; + }else{ + pUp->u.iKey++; + } + } + } + }else{ + switch( p->eType ){ + case JSON_ARRAY: { + p->i += jsonNodeSize(&p->sParse.aNode[p->i]); + p->iRowid++; + break; + } + case JSON_OBJECT: { + p->i += 1 + jsonNodeSize(&p->sParse.aNode[p->i+1]); + p->iRowid++; + break; + } + default: { + p->i = p->iEnd; + break; + } + } + } + return SQLITE_OK; +} + +/* Append the name of the path for element i to pStr +*/ +static void jsonEachComputePath( + JsonEachCursor *p, /* The cursor */ + JsonString *pStr, /* Write the path here */ + u32 i /* Path to this element */ +){ + JsonNode *pNode, *pUp; + u32 iUp; + if( i==0 ){ + jsonAppendChar(pStr, '$'); + return; + } + iUp = p->sParse.aUp[i]; + jsonEachComputePath(p, pStr, iUp); + pNode = &p->sParse.aNode[i]; + pUp = &p->sParse.aNode[iUp]; + if( pUp->eType==JSON_ARRAY ){ + jsonPrintf(30, pStr, "[%d]", pUp->u.iKey); + }else{ + assert( pUp->eType==JSON_OBJECT ); + if( (pNode->jnFlags & JNODE_LABEL)==0 ) pNode--; + assert( pNode->eType==JSON_STRING ); + assert( pNode->jnFlags & JNODE_LABEL ); + jsonPrintf(pNode->n+1, pStr, ".%.*s", pNode->n-2, pNode->u.zJContent+1); + } +} + +/* Return the value of a column */ +static int jsonEachColumn( + sqlite3_vtab_cursor *cur, /* The cursor */ + sqlite3_context *ctx, /* First argument to sqlite3_result_...() */ + int i /* Which column to return */ +){ + JsonEachCursor *p = (JsonEachCursor*)cur; + JsonNode *pThis = &p->sParse.aNode[p->i]; + switch( i ){ + case JEACH_KEY: { + if( p->i==0 ) break; + if( p->eType==JSON_OBJECT ){ + jsonReturn(pThis, ctx, 0); + }else if( p->eType==JSON_ARRAY ){ + u32 iKey; + if( p->bRecursive ){ + if( p->iRowid==0 ) break; + iKey = p->sParse.aNode[p->sParse.aUp[p->i]].u.iKey; + }else{ + iKey = p->iRowid; + } + sqlite3_result_int64(ctx, (sqlite3_int64)iKey); + } + break; + } + case JEACH_VALUE: { + if( pThis->jnFlags & JNODE_LABEL ) pThis++; + jsonReturn(pThis, ctx, 0); + break; + } + case JEACH_TYPE: { + if( pThis->jnFlags & JNODE_LABEL ) pThis++; + sqlite3_result_text(ctx, jsonType[pThis->eType], -1, SQLITE_STATIC); + break; + } + case JEACH_ATOM: { + if( pThis->jnFlags & JNODE_LABEL ) pThis++; + if( pThis->eType>=JSON_ARRAY ) break; + jsonReturn(pThis, ctx, 0); + break; + } + case JEACH_ID: { + sqlite3_result_int64(ctx, + (sqlite3_int64)p->i + ((pThis->jnFlags & JNODE_LABEL)!=0)); + break; + } + case JEACH_PARENT: { + if( p->i>p->iBegin && p->bRecursive ){ + 
sqlite3_result_int64(ctx, (sqlite3_int64)p->sParse.aUp[p->i]); + } + break; + } + case JEACH_FULLKEY: { + JsonString x; + jsonInit(&x, ctx); + if( p->bRecursive ){ + jsonEachComputePath(p, &x, p->i); + }else{ + if( p->zRoot ){ + jsonAppendRaw(&x, p->zRoot, (int)strlen(p->zRoot)); + }else{ + jsonAppendChar(&x, '$'); + } + if( p->eType==JSON_ARRAY ){ + jsonPrintf(30, &x, "[%d]", p->iRowid); + }else{ + jsonPrintf(pThis->n, &x, ".%.*s", pThis->n-2, pThis->u.zJContent+1); + } + } + jsonResult(&x); + break; + } + case JEACH_PATH: { + if( p->bRecursive ){ + JsonString x; + jsonInit(&x, ctx); + jsonEachComputePath(p, &x, p->sParse.aUp[p->i]); + jsonResult(&x); + break; + } + /* For json_each() path and root are the same so fall through + ** into the root case */ + } + case JEACH_ROOT: { + const char *zRoot = p->zRoot; + if( zRoot==0 ) zRoot = "$"; + sqlite3_result_text(ctx, zRoot, -1, SQLITE_STATIC); + break; + } + case JEACH_JSON: { + assert( i==JEACH_JSON ); + sqlite3_result_text(ctx, p->sParse.zJson, -1, SQLITE_STATIC); + break; + } + } + return SQLITE_OK; +} + +/* Return the current rowid value */ +static int jsonEachRowid(sqlite3_vtab_cursor *cur, sqlite_int64 *pRowid){ + JsonEachCursor *p = (JsonEachCursor*)cur; + *pRowid = p->iRowid; + return SQLITE_OK; +} + +/* The query strategy is to look for an equality constraint on the json +** column. Without such a constraint, the table cannot operate. idxNum is +** 1 if the constraint is found, 3 if the constraint and zRoot are found, +** and 0 otherwise. +*/ +static int jsonEachBestIndex( + sqlite3_vtab *tab, + sqlite3_index_info *pIdxInfo +){ + int i; + int jsonIdx = -1; + int rootIdx = -1; + const struct sqlite3_index_constraint *pConstraint; + + UNUSED_PARAM(tab); + pConstraint = pIdxInfo->aConstraint; + for(i=0; i<pIdxInfo->nConstraint; i++, pConstraint++){ + if( pConstraint->usable==0 ) continue; + if( pConstraint->op!=SQLITE_INDEX_CONSTRAINT_EQ ) continue; + switch( pConstraint->iColumn ){ + case JEACH_JSON: jsonIdx = i; break; + case JEACH_ROOT: rootIdx = i; break; + default: /* no-op */ break; + } + } + if( jsonIdx<0 ){ + pIdxInfo->idxNum = 0; + pIdxInfo->estimatedCost = 1e99; + }else{ + pIdxInfo->estimatedCost = 1.0; + pIdxInfo->aConstraintUsage[jsonIdx].argvIndex = 1; + pIdxInfo->aConstraintUsage[jsonIdx].omit = 1; + if( rootIdx<0 ){ + pIdxInfo->idxNum = 1; + }else{ + pIdxInfo->aConstraintUsage[rootIdx].argvIndex = 2; + pIdxInfo->aConstraintUsage[rootIdx].omit = 1; + pIdxInfo->idxNum = 3; + } + } + return SQLITE_OK; +} + +/* Start a search on a new JSON string */ +static int jsonEachFilter( + sqlite3_vtab_cursor *cur, + int idxNum, const char *idxStr, + int argc, sqlite3_value **argv +){ + JsonEachCursor *p = (JsonEachCursor*)cur; + const char *z; + const char *zRoot = 0; + sqlite3_int64 n; + + UNUSED_PARAM(idxStr); + UNUSED_PARAM(argc); + jsonEachCursorReset(p); + if( idxNum==0 ) return SQLITE_OK; + z = (const char*)sqlite3_value_text(argv[0]); + if( z==0 ) return SQLITE_OK; + n = sqlite3_value_bytes(argv[0]); + p->zJson = sqlite3_malloc64( n+1 ); + if( p->zJson==0 ) return SQLITE_NOMEM; + memcpy(p->zJson, z, (size_t)n+1); + if( jsonParse(&p->sParse, 0, p->zJson) ){ + int rc = SQLITE_NOMEM; + if( p->sParse.oom==0 ){ + sqlite3_free(cur->pVtab->zErrMsg); + cur->pVtab->zErrMsg = sqlite3_mprintf("malformed JSON"); + if( cur->pVtab->zErrMsg ) rc = SQLITE_ERROR; + } + jsonEachCursorReset(p); + return rc; + }else if( p->bRecursive && jsonParseFindParents(&p->sParse) ){ + jsonEachCursorReset(p); + return SQLITE_NOMEM; + }else{ + JsonNode *pNode 
= 0; + if( idxNum==3 ){ + const char *zErr = 0; + zRoot = (const char*)sqlite3_value_text(argv[1]); + if( zRoot==0 ) return SQLITE_OK; + n = sqlite3_value_bytes(argv[1]); + p->zRoot = sqlite3_malloc64( n+1 ); + if( p->zRoot==0 ) return SQLITE_NOMEM; + memcpy(p->zRoot, zRoot, (size_t)n+1); + if( zRoot[0]!='$' ){ + zErr = zRoot; + }else{ + pNode = jsonLookupStep(&p->sParse, 0, p->zRoot+1, 0, &zErr); + } + if( zErr ){ + sqlite3_free(cur->pVtab->zErrMsg); + cur->pVtab->zErrMsg = jsonPathSyntaxError(zErr); + jsonEachCursorReset(p); + return cur->pVtab->zErrMsg ? SQLITE_ERROR : SQLITE_NOMEM; + }else if( pNode==0 ){ + return SQLITE_OK; + } + }else{ + pNode = p->sParse.aNode; + } + p->iBegin = p->i = (int)(pNode - p->sParse.aNode); + p->eType = pNode->eType; + if( p->eType>=JSON_ARRAY ){ + pNode->u.iKey = 0; + p->iEnd = p->i + pNode->n + 1; + if( p->bRecursive ){ + p->eType = p->sParse.aNode[p->sParse.aUp[p->i]].eType; + if( p->i>0 && (p->sParse.aNode[p->i-1].jnFlags & JNODE_LABEL)!=0 ){ + p->i--; + } + }else{ + p->i++; + } + }else{ + p->iEnd = p->i+1; + } + } + return SQLITE_OK; +} + +/* The methods of the json_each virtual table */ +static sqlite3_module jsonEachModule = { + 0, /* iVersion */ + 0, /* xCreate */ + jsonEachConnect, /* xConnect */ + jsonEachBestIndex, /* xBestIndex */ + jsonEachDisconnect, /* xDisconnect */ + 0, /* xDestroy */ + jsonEachOpenEach, /* xOpen - open a cursor */ + jsonEachClose, /* xClose - close a cursor */ + jsonEachFilter, /* xFilter - configure scan constraints */ + jsonEachNext, /* xNext - advance a cursor */ + jsonEachEof, /* xEof - check for end of scan */ + jsonEachColumn, /* xColumn - read data */ + jsonEachRowid, /* xRowid - read data */ + 0, /* xUpdate */ + 0, /* xBegin */ + 0, /* xSync */ + 0, /* xCommit */ + 0, /* xRollback */ + 0, /* xFindMethod */ + 0, /* xRename */ + 0, /* xSavepoint */ + 0, /* xRelease */ + 0 /* xRollbackTo */ +}; + +/* The methods of the json_tree virtual table. */ +static sqlite3_module jsonTreeModule = { + 0, /* iVersion */ + 0, /* xCreate */ + jsonEachConnect, /* xConnect */ + jsonEachBestIndex, /* xBestIndex */ + jsonEachDisconnect, /* xDisconnect */ + 0, /* xDestroy */ + jsonEachOpenTree, /* xOpen - open a cursor */ + jsonEachClose, /* xClose - close a cursor */ + jsonEachFilter, /* xFilter - configure scan constraints */ + jsonEachNext, /* xNext - advance a cursor */ + jsonEachEof, /* xEof - check for end of scan */ + jsonEachColumn, /* xColumn - read data */ + jsonEachRowid, /* xRowid - read data */ + 0, /* xUpdate */ + 0, /* xBegin */ + 0, /* xSync */ + 0, /* xCommit */ + 0, /* xRollback */ + 0, /* xFindMethod */ + 0, /* xRename */ + 0, /* xSavepoint */ + 0, /* xRelease */ + 0 /* xRollbackTo */ +}; +#endif /* SQLITE_OMIT_VIRTUALTABLE */ + +/**************************************************************************** +** The following routines are the only publically visible identifiers in this +** file. Call the following routines in order to register the various SQL +** functions and the virtual table implemented by this file. 
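+**
+** When this file is compiled into SQLite with SQLITE_ENABLE_JSON1, the
+** library calls sqlite3Json1Init() itself while opening a connection; when
+** built as a loadable extension, sqlite3_json_init() below is the entry
+** point. Either way, ordinary application code can then use the functions,
+** along the lines of this sketch (error checking and #include lines
+** omitted):
+**
+**     sqlite3 *db;
+**     sqlite3_stmt *pStmt;
+**     sqlite3_open(":memory:", &db);
+**     sqlite3_prepare_v2(db,
+**         "SELECT json_extract('{\"a\":[4,5,6]}', '$.a[2]')", -1, &pStmt, 0);
+**     if( sqlite3_step(pStmt)==SQLITE_ROW ){
+**       printf("%s\n", (const char*)sqlite3_column_text(pStmt, 0));  // prints 6
+**     }
+**     sqlite3_finalize(pStmt);
+**     sqlite3_close(db);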
+****************************************************************************/ + +SQLITE_PRIVATE int sqlite3Json1Init(sqlite3 *db){ + int rc = SQLITE_OK; + unsigned int i; + static const struct { + const char *zName; + int nArg; + int flag; + void (*xFunc)(sqlite3_context*,int,sqlite3_value**); + } aFunc[] = { + { "json", 1, 0, jsonRemoveFunc }, + { "json_array", -1, 0, jsonArrayFunc }, + { "json_array_length", 1, 0, jsonArrayLengthFunc }, + { "json_array_length", 2, 0, jsonArrayLengthFunc }, + { "json_extract", -1, 0, jsonExtractFunc }, + { "json_insert", -1, 0, jsonSetFunc }, + { "json_object", -1, 0, jsonObjectFunc }, + { "json_remove", -1, 0, jsonRemoveFunc }, + { "json_replace", -1, 0, jsonReplaceFunc }, + { "json_set", -1, 1, jsonSetFunc }, + { "json_type", 1, 0, jsonTypeFunc }, + { "json_type", 2, 0, jsonTypeFunc }, + { "json_valid", 1, 0, jsonValidFunc }, + +#if SQLITE_DEBUG + /* DEBUG and TESTING functions */ + { "json_parse", 1, 0, jsonParseFunc }, + { "json_test1", 1, 0, jsonTest1Func }, +#endif + }; + static const struct { + const char *zName; + int nArg; + void (*xStep)(sqlite3_context*,int,sqlite3_value**); + void (*xFinal)(sqlite3_context*); + } aAgg[] = { + { "json_group_array", 1, jsonArrayStep, jsonArrayFinal }, + { "json_group_object", 2, jsonObjectStep, jsonObjectFinal }, + }; +#ifndef SQLITE_OMIT_VIRTUALTABLE + static const struct { + const char *zName; + sqlite3_module *pModule; + } aMod[] = { + { "json_each", &jsonEachModule }, + { "json_tree", &jsonTreeModule }, + }; +#endif + for(i=0; i<sizeof(aFunc)/sizeof(aFunc[0]) && rc==SQLITE_OK; i++){ + rc = sqlite3_create_function(db, aFunc[i].zName, aFunc[i].nArg, + SQLITE_UTF8 | SQLITE_DETERMINISTIC, + (void*)&aFunc[i].flag, + aFunc[i].xFunc, 0, 0); + } + for(i=0; i<sizeof(aAgg)/sizeof(aAgg[0]) && rc==SQLITE_OK; i++){ + rc = sqlite3_create_function(db, aAgg[i].zName, aAgg[i].nArg, + SQLITE_UTF8 | SQLITE_DETERMINISTIC, 0, + 0, aAgg[i].xStep, aAgg[i].xFinal); + } +#ifndef SQLITE_OMIT_VIRTUALTABLE + for(i=0; i<sizeof(aMod)/sizeof(aMod[0]) && rc==SQLITE_OK; i++){ + rc = sqlite3_create_module(db, aMod[i].zName, aMod[i].pModule, 0); + } +#endif + return rc; +} + + +#ifndef SQLITE_CORE +#ifdef _WIN32 +__declspec(dllexport) +#endif +SQLITE_API int SQLITE_STDCALL sqlite3_json_init( + sqlite3 *db, + char **pzErrMsg, + const sqlite3_api_routines *pApi +){ + SQLITE_EXTENSION_INIT2(pApi); + (void)pzErrMsg; /* Unused parameter */ + return sqlite3Json1Init(db); +} +#endif +#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_JSON1) */ + +/************** End of json1.c ***********************************************/ +/************** Begin file fts5.c ********************************************/ + + +#if !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS5) + +#if !defined(NDEBUG) && !defined(SQLITE_DEBUG) +# define NDEBUG 1 +#endif +#if defined(NDEBUG) && defined(SQLITE_DEBUG) +# undef NDEBUG +#endif + +/* +** 2014 May 31 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** Interfaces to extend FTS5. Using the interfaces defined in this file, +** FTS5 may be extended with: +** +** * custom tokenizers, and +** * custom auxiliary functions. 
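+**
+** Both kinds of extension are registered through the fts5_api object that
+** is declared near the end of this header. As an illustrative sketch of one
+** way application code has obtained that object (fts5_api_from_db() is a
+** hypothetical helper name; adapt to the interfaces actually available in
+** your build):
+**
+**     fts5_api *fts5_api_from_db(sqlite3 *db){
+**       fts5_api *pRet = 0;
+**       sqlite3_stmt *pStmt = 0;
+**       if( SQLITE_OK==sqlite3_prepare(db, "SELECT fts5()", -1, &pStmt, 0)
+**        && SQLITE_ROW==sqlite3_step(pStmt)
+**        && sizeof(pRet)==sqlite3_column_bytes(pStmt, 0)
+**       ){
+**         memcpy(&pRet, sqlite3_column_blob(pStmt, 0), sizeof(pRet));
+**       }
+**       sqlite3_finalize(pStmt);
+**       return pRet;
+**     }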
+*/ + + +#ifndef _FTS5_H +#define _FTS5_H + +/* #include "sqlite3.h" */ + +#if 0 +extern "C" { +#endif + +/************************************************************************* +** CUSTOM AUXILIARY FUNCTIONS +** +** Virtual table implementations may overload SQL functions by implementing +** the sqlite3_module.xFindFunction() method. +*/ + +typedef struct Fts5ExtensionApi Fts5ExtensionApi; +typedef struct Fts5Context Fts5Context; +typedef struct Fts5PhraseIter Fts5PhraseIter; + +typedef void (*fts5_extension_function)( + const Fts5ExtensionApi *pApi, /* API offered by current FTS version */ + Fts5Context *pFts, /* First arg to pass to pApi functions */ + sqlite3_context *pCtx, /* Context for returning result/error */ + int nVal, /* Number of values in apVal[] array */ + sqlite3_value **apVal /* Array of trailing arguments */ +); + +struct Fts5PhraseIter { + const unsigned char *a; + const unsigned char *b; +}; + +/* +** EXTENSION API FUNCTIONS +** +** xUserData(pFts): +** Return a copy of the context pointer the extension function was +** registered with. +** +** xColumnTotalSize(pFts, iCol, pnToken): +** If parameter iCol is less than zero, set output variable *pnToken +** to the total number of tokens in the FTS5 table. Or, if iCol is +** non-negative but less than the number of columns in the table, return +** the total number of tokens in column iCol, considering all rows in +** the FTS5 table. +** +** If parameter iCol is greater than or equal to the number of columns +** in the table, SQLITE_RANGE is returned. Or, if an error occurs (e.g. +** an OOM condition or IO error), an appropriate SQLite error code is +** returned. +** +** xColumnCount(pFts): +** Return the number of columns in the table. +** +** xColumnSize(pFts, iCol, pnToken): +** If parameter iCol is less than zero, set output variable *pnToken +** to the total number of tokens in the current row. Or, if iCol is +** non-negative but less than the number of columns in the table, set +** *pnToken to the number of tokens in column iCol of the current row. +** +** If parameter iCol is greater than or equal to the number of columns +** in the table, SQLITE_RANGE is returned. Or, if an error occurs (e.g. +** an OOM condition or IO error), an appropriate SQLite error code is +** returned. +** +** This function may be quite inefficient if used with an FTS5 table +** created with the "columnsize=0" option. +** +** xColumnText: +** This function attempts to retrieve the text of column iCol of the +** current document. If successful, (*pz) is set to point to a buffer +** containing the text in utf-8 encoding, (*pn) is set to the size in bytes +** (not characters) of the buffer and SQLITE_OK is returned. Otherwise, +** if an error occurs, an SQLite error code is returned and the final values +** of (*pz) and (*pn) are undefined. +** +** xPhraseCount: +** Returns the number of phrases in the current query expression. +** +** xPhraseSize: +** Returns the number of tokens in phrase iPhrase of the query. Phrases +** are numbered starting from zero. +** +** xInstCount: +** Set *pnInst to the total number of occurrences of all phrases within +** the query within the current row. Return SQLITE_OK if successful, or +** an error code (i.e. SQLITE_NOMEM) if an error occurs. +** +** This API can be quite slow if used with an FTS5 table created with the +** "detail=none" or "detail=column" option. If the FTS5 table is created +** with either "detail=none" or "detail=column" and "content=" option +** (i.e. 
if it is a contentless table), then this API always returns 0. +** +** xInst: +** Query for the details of phrase match iIdx within the current row. +** Phrase matches are numbered starting from zero, so the iIdx argument +** should be greater than or equal to zero and smaller than the value +** output by xInstCount(). +** +** Usually, output parameter *piPhrase is set to the phrase number, *piCol +** to the column in which it occurs and *piOff the token offset of the +** first token of the phrase. The exception is if the table was created +** with the offsets=0 option specified. In this case *piOff is always +** set to -1. +** +** Returns SQLITE_OK if successful, or an error code (i.e. SQLITE_NOMEM) +** if an error occurs. +** +** This API can be quite slow if used with an FTS5 table created with the +** "detail=none" or "detail=column" option. +** +** xRowid: +** Returns the rowid of the current row. +** +** xTokenize: +** Tokenize text using the tokenizer belonging to the FTS5 table. +** +** xQueryPhrase(pFts5, iPhrase, pUserData, xCallback): +** This API function is used to query the FTS table for phrase iPhrase +** of the current query. Specifically, a query equivalent to: +** +** ... FROM ftstable WHERE ftstable MATCH $p ORDER BY rowid +** +** with $p set to a phrase equivalent to the phrase iPhrase of the +** current query is executed. For each row visited, the callback function +** passed as the fourth argument is invoked. The context and API objects +** passed to the callback function may be used to access the properties of +** each matched row. Invoking Api.xUserData() returns a copy of the pointer +** passed as the third argument to pUserData. +** +** If the callback function returns any value other than SQLITE_OK, the +** query is abandoned and the xQueryPhrase function returns immediately. +** If the returned value is SQLITE_DONE, xQueryPhrase returns SQLITE_OK. +** Otherwise, the error code is propagated upwards. +** +** If the query runs to completion without incident, SQLITE_OK is returned. +** Or, if some error occurs before the query completes or is aborted by +** the callback, an SQLite error code is returned. +** +** +** xSetAuxdata(pFts5, pAux, xDelete) +** +** Save the pointer passed as the second argument as the extension functions +** "auxiliary data". The pointer may then be retrieved by the current or any +** future invocation of the same fts5 extension function made as part of +** of the same MATCH query using the xGetAuxdata() API. +** +** Each extension function is allocated a single auxiliary data slot for +** each FTS query (MATCH expression). If the extension function is invoked +** more than once for a single FTS query, then all invocations share a +** single auxiliary data context. +** +** If there is already an auxiliary data pointer when this function is +** invoked, then it is replaced by the new pointer. If an xDelete callback +** was specified along with the original pointer, it is invoked at this +** point. +** +** The xDelete callback, if one is specified, is also invoked on the +** auxiliary data pointer after the FTS5 query has finished. +** +** If an error (e.g. an OOM condition) occurs within this function, an +** the auxiliary data is set to NULL and an error code returned. If the +** xDelete parameter was not NULL, it is invoked on the auxiliary data +** pointer before returning. +** +** +** xGetAuxdata(pFts5, bClear) +** +** Returns the current auxiliary data pointer for the fts5 extension +** function. 
See the xSetAuxdata() method for details. +** +** If the bClear argument is non-zero, then the auxiliary data is cleared +** (set to NULL) before this function returns. In this case the xDelete, +** if any, is not invoked. +** +** +** xRowCount(pFts5, pnRow) +** +** This function is used to retrieve the total number of rows in the table. +** In other words, the same value that would be returned by: +** +** SELECT count(*) FROM ftstable; +** +** xPhraseFirst() +** This function is used, along with type Fts5PhraseIter and the xPhraseNext +** method, to iterate through all instances of a single query phrase within +** the current row. This is the same information as is accessible via the +** xInstCount/xInst APIs. While the xInstCount/xInst APIs are more convenient +** to use, this API may be faster under some circumstances. To iterate +** through instances of phrase iPhrase, use the following code: +** +** Fts5PhraseIter iter; +** int iCol, iOff; +** for(pApi->xPhraseFirst(pFts, iPhrase, &iter, &iCol, &iOff); +** iCol>=0; +** pApi->xPhraseNext(pFts, &iter, &iCol, &iOff) +** ){ +** // An instance of phrase iPhrase at offset iOff of column iCol +** } +** +** The Fts5PhraseIter structure is defined above. Applications should not +** modify this structure directly - it should only be used as shown above +** with the xPhraseFirst() and xPhraseNext() API methods (and by +** xPhraseFirstColumn() and xPhraseNextColumn() as illustrated below). +** +** This API can be quite slow if used with an FTS5 table created with the +** "detail=none" or "detail=column" option. If the FTS5 table is created +** with either "detail=none" or "detail=column" and "content=" option +** (i.e. if it is a contentless table), then this API always iterates +** through an empty set (all calls to xPhraseFirst() set iCol to -1). +** +** xPhraseNext() +** See xPhraseFirst above. +** +** xPhraseFirstColumn() +** This function and xPhraseNextColumn() are similar to the xPhraseFirst() +** and xPhraseNext() APIs described above. The difference is that instead +** of iterating through all instances of a phrase in the current row, these +** APIs are used to iterate through the set of columns in the current row +** that contain one or more instances of a specified phrase. For example: +** +** Fts5PhraseIter iter; +** int iCol; +** for(pApi->xPhraseFirstColumn(pFts, iPhrase, &iter, &iCol); +** iCol>=0; +** pApi->xPhraseNextColumn(pFts, &iter, &iCol) +** ){ +** // Column iCol contains at least one instance of phrase iPhrase +** } +** +** This API can be quite slow if used with an FTS5 table created with the +** "detail=none" option. If the FTS5 table is created with either +** "detail=none" "content=" option (i.e. if it is a contentless table), +** then this API always iterates through an empty set (all calls to +** xPhraseFirstColumn() set iCol to -1). +** +** The information accessed using this API and its companion +** xPhraseFirstColumn() may also be obtained using xPhraseFirst/xPhraseNext +** (or xInst/xInstCount). The chief advantage of this API is that it is +** significantly more efficient than those alternatives when used with +** "detail=column" tables. +** +** xPhraseNextColumn() +** See xPhraseFirstColumn above. 
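+**
+** As a minimal illustration of this API (a hypothetical auxiliary function,
+** not part of FTS5 itself), the following returns the total number of
+** phrase matches in the current row using xInstCount():
+**
+**     // hypothetical example function, shown for illustration only
+**     static void matchcountFunc(
+**       const Fts5ExtensionApi *pApi,   // API offered by current FTS version
+**       Fts5Context *pFts,              // First arg to pass to pApi functions
+**       sqlite3_context *pCtx,          // Context for returning result/error
+**       int nVal,                       // Number of trailing arguments (unused)
+**       sqlite3_value **apVal           // Array of trailing arguments (unused)
+**     ){
+**       int nInst = 0;
+**       int rc = pApi->xInstCount(pFts, &nInst);
+**       if( rc==SQLITE_OK ){
+**         sqlite3_result_int(pCtx, nInst);
+**       }else{
+**         sqlite3_result_error_code(pCtx, rc);
+**       }
+**     }
+**
+** Such a function would be registered with fts5_api.xCreateFunction() and
+** could then be invoked from SQL as, for example, matchcount(ftstable).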
+*/ +struct Fts5ExtensionApi { + int iVersion; /* Currently always set to 3 */ + + void *(*xUserData)(Fts5Context*); + + int (*xColumnCount)(Fts5Context*); + int (*xRowCount)(Fts5Context*, sqlite3_int64 *pnRow); + int (*xColumnTotalSize)(Fts5Context*, int iCol, sqlite3_int64 *pnToken); + + int (*xTokenize)(Fts5Context*, + const char *pText, int nText, /* Text to tokenize */ + void *pCtx, /* Context passed to xToken() */ + int (*xToken)(void*, int, const char*, int, int, int) /* Callback */ + ); + + int (*xPhraseCount)(Fts5Context*); + int (*xPhraseSize)(Fts5Context*, int iPhrase); + + int (*xInstCount)(Fts5Context*, int *pnInst); + int (*xInst)(Fts5Context*, int iIdx, int *piPhrase, int *piCol, int *piOff); + + sqlite3_int64 (*xRowid)(Fts5Context*); + int (*xColumnText)(Fts5Context*, int iCol, const char **pz, int *pn); + int (*xColumnSize)(Fts5Context*, int iCol, int *pnToken); + + int (*xQueryPhrase)(Fts5Context*, int iPhrase, void *pUserData, + int(*)(const Fts5ExtensionApi*,Fts5Context*,void*) + ); + int (*xSetAuxdata)(Fts5Context*, void *pAux, void(*xDelete)(void*)); + void *(*xGetAuxdata)(Fts5Context*, int bClear); + + int (*xPhraseFirst)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*, int*); + void (*xPhraseNext)(Fts5Context*, Fts5PhraseIter*, int *piCol, int *piOff); + + int (*xPhraseFirstColumn)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*); + void (*xPhraseNextColumn)(Fts5Context*, Fts5PhraseIter*, int *piCol); +}; + +/* +** CUSTOM AUXILIARY FUNCTIONS +*************************************************************************/ + +/************************************************************************* +** CUSTOM TOKENIZERS +** +** Applications may also register custom tokenizer types. A tokenizer +** is registered by providing fts5 with a populated instance of the +** following structure. All structure methods must be defined, setting +** any member of the fts5_tokenizer struct to NULL leads to undefined +** behaviour. The structure methods are expected to function as follows: +** +** xCreate: +** This function is used to allocate and inititalize a tokenizer instance. +** A tokenizer instance is required to actually tokenize text. +** +** The first argument passed to this function is a copy of the (void*) +** pointer provided by the application when the fts5_tokenizer object +** was registered with FTS5 (the third argument to xCreateTokenizer()). +** The second and third arguments are an array of nul-terminated strings +** containing the tokenizer arguments, if any, specified following the +** tokenizer name as part of the CREATE VIRTUAL TABLE statement used +** to create the FTS5 table. +** +** The final argument is an output variable. If successful, (*ppOut) +** should be set to point to the new tokenizer handle and SQLITE_OK +** returned. If an error occurs, some value other than SQLITE_OK should +** be returned. In this case, fts5 assumes that the final value of *ppOut +** is undefined. +** +** xDelete: +** This function is invoked to delete a tokenizer handle previously +** allocated using xCreate(). Fts5 guarantees that this function will +** be invoked exactly once for each successful call to xCreate(). +** +** xTokenize: +** This function is expected to tokenize the nText byte string indicated +** by argument pText. pText may or may not be nul-terminated. The first +** argument passed to this function is a pointer to an Fts5Tokenizer object +** returned by an earlier call to xCreate(). 
+**
+** The second argument indicates the reason that FTS5 is requesting
+** tokenization of the supplied text. This is always one of the following
+** four values:
+**
+** <ul><li> <b>FTS5_TOKENIZE_DOCUMENT</b> - A document is being inserted into
+**          or removed from the FTS table. The tokenizer is being invoked to
+**          determine the set of tokens to add to (or delete from) the
+**          FTS index.
+**
+**     <li> <b>FTS5_TOKENIZE_QUERY</b> - A MATCH query is being executed
+**          against the FTS index. The tokenizer is being called to tokenize
+**          a bareword or quoted string specified as part of the query.
+**
+**     <li> <b>(FTS5_TOKENIZE_QUERY | FTS5_TOKENIZE_PREFIX)</b> - Same as
+**          FTS5_TOKENIZE_QUERY, except that the bareword or quoted string is
+**          followed by a "*" character, indicating that the last token
+**          returned by the tokenizer will be treated as a token prefix.
+**
+**     <li> <b>FTS5_TOKENIZE_AUX</b> - The tokenizer is being invoked to
+**          satisfy an fts5_api.xTokenize() request made by an auxiliary
+**          function. Or an fts5_api.xColumnSize() request made by the same
+**          on a columnsize=0 database.
+** </ul>
+**
+** For each token in the input string, the supplied callback xToken() must
+** be invoked. The first argument to it should be a copy of the pointer
+** passed as the second argument to xTokenize(). The third and fourth
+** arguments are a pointer to a buffer containing the token text, and the
+** size of the token in bytes. The 4th and 5th arguments are the byte offsets
+** of the first byte of and first byte immediately following the text from
+** which the token is derived within the input.
+**
+** The second argument passed to the xToken() callback ("tflags") should
+** normally be set to 0. The exception is if the tokenizer supports
+** synonyms. In this case, see the discussion below for details.
+**
+** FTS5 assumes the xToken() callback is invoked for each token in the
+** order that they occur within the input text.
+**
+** If an xToken() callback returns any value other than SQLITE_OK, then
+** the tokenization should be abandoned and the xTokenize() method should
+** immediately return a copy of the xToken() return value. Or, if the
+** input buffer is exhausted, xTokenize() should return SQLITE_OK. Finally,
+** if an error occurs with the xTokenize() implementation itself, it
+** may abandon the tokenization and return any error code other than
+** SQLITE_OK or SQLITE_DONE.
+**
+** SYNONYM SUPPORT
+**
+** Custom tokenizers may also support synonyms. Consider a case in which a
+** user wishes to query for a phrase such as "first place". Using the
+** built-in tokenizers, the FTS5 query 'first + place' will match instances
+** of "first place" within the document set, but not alternative forms
+** such as "1st place". In some applications, it would be better to match
+** all instances of "first place" or "1st place" regardless of which form
+** the user specified in the MATCH query text.
+**
+** There are several ways to approach this in FTS5:
+**
+** <ol><li> By mapping all synonyms to a single token. In this case, using
+**          the above example, this means that the tokenizer returns the
+**          same token for inputs "first" and "1st". Say that token is in
+**          fact "first", so that when the user inserts the document "I won
+**          1st place" entries are added to the index for tokens "i", "won",
+**          "first" and "place". If the user then queries for '1st + place',
+**          the tokenizer substitutes "first" for "1st" and the query works
+**          as expected.
+**
+**     <li> By querying the index for all synonyms of each query term
+**          separately. In this case, when tokenizing query text, the
+**          tokenizer may provide multiple synonyms for a single term
+**          within the document. FTS5 then queries the index for each
+**          synonym individually. For example, faced with the query:
+**
+**   <codeblock>
+**     ... MATCH 'first place'</codeblock>
+**
+**          the tokenizer offers both "1st" and "first" as synonyms for the
+**          first token in the MATCH query and FTS5 effectively runs a query
+**          similar to:
+**
+**   <codeblock>
+**     ... MATCH '(first OR 1st) place'</codeblock>
+**
+**          except that, for the purposes of auxiliary functions, the query
+**          still appears to contain just two phrases - "(first OR 1st)"
+**          being treated as a single phrase.
+**
+**     <li> By adding multiple synonyms for a single term to the FTS index.
+**          Using this method, when tokenizing document text, the tokenizer
+**          provides multiple synonyms for each token. So that when a
+**          document such as "I won first place" is tokenized, entries are
+**          added to the FTS index for "i", "won", "first", "1st" and
+**          "place".
+**
+**          This way, even if the tokenizer does not provide synonyms
+**          when tokenizing query text (it should not - to do so would be
+**          inefficient), it doesn't matter if the user queries for
+**          'first + place' or '1st + place', as there are entries in the
+**          FTS index corresponding to both forms of the first token.
+** </ol>
+**
+** Whether it is parsing document or query text, any call to xToken that
+** specifies a <i>tflags</i> argument with the FTS5_TOKEN_COLOCATED bit
+** is considered to supply a synonym for the previous token. For example,
+** when parsing the document "I won first place", a tokenizer that supports
+** synonyms would call xToken() 5 times, as follows:
+**
+** <codeblock>
+**   xToken(pCtx, 0, "i", 1, 0, 1);
+**   xToken(pCtx, 0, "won", 3, 2, 5);
+**   xToken(pCtx, 0, "first", 5, 6, 11);
+**   xToken(pCtx, FTS5_TOKEN_COLOCATED, "1st", 3, 6, 11);
+**   xToken(pCtx, 0, "place", 5, 12, 17);
+**</codeblock>
+**
+** It is an error to specify the FTS5_TOKEN_COLOCATED flag the first time
+** xToken() is called. Multiple synonyms may be specified for a single token
+** by making multiple calls to xToken(FTS5_TOKEN_COLOCATED) in sequence.
+** There is no limit to the number of synonyms that may be provided for a
+** single token.
+**
+** In many cases, method (1) above is the best approach. It does not add
+** extra data to the FTS index or require FTS5 to query for multiple terms,
+** so it is efficient in terms of disk space and query speed. However, it
+** does not support prefix queries very well. If, as suggested above, the
+** token "first" is substituted for "1st" by the tokenizer, then the query:
+**
+** <codeblock>
+**   ... MATCH '1s*'</codeblock>
+**
+** will not match documents that contain the token "1st" (as the tokenizer
+** will probably not map "1s" to any prefix of "first").
+**
+** For full prefix support, method (3) may be preferred. In this case,
+** because the index contains entries for both "first" and "1st", prefix
+** queries such as 'fi*' or '1s*' will match correctly. However, because
+** extra entries are added to the FTS index, this method uses more space
+** within the database.
+**
+** Method (2) offers a midpoint between (1) and (3). Using this method,
+** a query such as '1s*' will match documents that contain the literal
+** token "1st", but not "first" (assuming the tokenizer is not able to
+** provide synonyms for prefixes).
However, a non-prefix query like '1st' +** will match against "1st" and "first". This method does not require +** extra disk space, as no extra entries are added to the FTS index. +** On the other hand, it may require more CPU cycles to run MATCH queries, +** as separate queries of the FTS index are required for each synonym. +** +** When using methods (2) or (3), it is important that the tokenizer only +** provide synonyms when tokenizing document text (method (2)) or query +** text (method (3)), not both. Doing so will not cause any errors, but is +** inefficient. +*/ +typedef struct Fts5Tokenizer Fts5Tokenizer; +typedef struct fts5_tokenizer fts5_tokenizer; +struct fts5_tokenizer { + int (*xCreate)(void*, const char **azArg, int nArg, Fts5Tokenizer **ppOut); + void (*xDelete)(Fts5Tokenizer*); + int (*xTokenize)(Fts5Tokenizer*, + void *pCtx, + int flags, /* Mask of FTS5_TOKENIZE_* flags */ + const char *pText, int nText, + int (*xToken)( + void *pCtx, /* Copy of 2nd argument to xTokenize() */ + int tflags, /* Mask of FTS5_TOKEN_* flags */ + const char *pToken, /* Pointer to buffer containing token */ + int nToken, /* Size of token in bytes */ + int iStart, /* Byte offset of token within input text */ + int iEnd /* Byte offset of end of token within input text */ + ) + ); +}; + +/* Flags that may be passed as the third argument to xTokenize() */ +#define FTS5_TOKENIZE_QUERY 0x0001 +#define FTS5_TOKENIZE_PREFIX 0x0002 +#define FTS5_TOKENIZE_DOCUMENT 0x0004 +#define FTS5_TOKENIZE_AUX 0x0008 + +/* Flags that may be passed by the tokenizer implementation back to FTS5 +** as the third argument to the supplied xToken callback. */ +#define FTS5_TOKEN_COLOCATED 0x0001 /* Same position as prev. token */ + +/* +** END OF CUSTOM TOKENIZERS +*************************************************************************/ + +/************************************************************************* +** FTS5 EXTENSION REGISTRATION API +*/ +typedef struct fts5_api fts5_api; +struct fts5_api { + int iVersion; /* Currently always set to 2 */ + + /* Create a new tokenizer */ + int (*xCreateTokenizer)( + fts5_api *pApi, + const char *zName, + void *pContext, + fts5_tokenizer *pTokenizer, + void (*xDestroy)(void*) + ); + + /* Find an existing tokenizer */ + int (*xFindTokenizer)( + fts5_api *pApi, + const char *zName, + void **ppContext, + fts5_tokenizer *pTokenizer + ); + + /* Create a new auxiliary function */ + int (*xCreateFunction)( + fts5_api *pApi, + const char *zName, + void *pContext, + fts5_extension_function xFunction, + void (*xDestroy)(void*) + ); +}; + +/* +** END OF REGISTRATION API +*************************************************************************/ + +#if 0 +} /* end of the 'extern "C"' block */ +#endif + +#endif /* _FTS5_H */ + + +/* +** 2014 May 31 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. 
+** +****************************************************************************** +** +*/ +#ifndef _FTS5INT_H +#define _FTS5INT_H + +/* #include "fts5.h" */ +/* #include "sqlite3ext.h" */ +SQLITE_EXTENSION_INIT1 + +/* #include <string.h> */ +/* #include <assert.h> */ + +#ifndef SQLITE_AMALGAMATION + +typedef unsigned char u8; +typedef unsigned int u32; +typedef unsigned short u16; +typedef short i16; +typedef sqlite3_int64 i64; +typedef sqlite3_uint64 u64; + +#define ArraySize(x) ((int)(sizeof(x) / sizeof(x[0]))) + +#define testcase(x) +#define ALWAYS(x) 1 +#define NEVER(x) 0 + +#define MIN(x,y) (((x) < (y)) ? (x) : (y)) +#define MAX(x,y) (((x) > (y)) ? (x) : (y)) + +/* +** Constants for the largest and smallest possible 64-bit signed integers. +*/ +# define LARGEST_INT64 (0xffffffff|(((i64)0x7fffffff)<<32)) +# define SMALLEST_INT64 (((i64)-1) - LARGEST_INT64) + +#endif + + +/* +** Maximum number of prefix indexes on single FTS5 table. This must be +** less than 32. If it is set to anything large than that, an #error +** directive in fts5_index.c will cause the build to fail. +*/ +#define FTS5_MAX_PREFIX_INDEXES 31 + +#define FTS5_DEFAULT_NEARDIST 10 +#define FTS5_DEFAULT_RANK "bm25" + +/* Name of rank and rowid columns */ +#define FTS5_RANK_NAME "rank" +#define FTS5_ROWID_NAME "rowid" + +#ifdef SQLITE_DEBUG +# define FTS5_CORRUPT sqlite3Fts5Corrupt() +static int sqlite3Fts5Corrupt(void); +#else +# define FTS5_CORRUPT SQLITE_CORRUPT_VTAB +#endif + +/* +** The assert_nc() macro is similar to the assert() macro, except that it +** is used for assert() conditions that are true only if it can be +** guranteed that the database is not corrupt. +*/ +#ifdef SQLITE_DEBUG +SQLITE_API extern int sqlite3_fts5_may_be_corrupt; +# define assert_nc(x) assert(sqlite3_fts5_may_be_corrupt || (x)) +#else +# define assert_nc(x) assert(x) +#endif + +/* Mark a function parameter as unused, to suppress nuisance compiler +** warnings. */ +#ifndef UNUSED_PARAM +# define UNUSED_PARAM(X) (void)(X) +#endif + +#ifndef UNUSED_PARAM2 +# define UNUSED_PARAM2(X, Y) (void)(X), (void)(Y) +#endif + +typedef struct Fts5Global Fts5Global; +typedef struct Fts5Colset Fts5Colset; + +/* If a NEAR() clump or phrase may only match a specific set of columns, +** then an object of the following type is used to record the set of columns. +** Each entry in the aiCol[] array is a column that may be matched. +** +** This object is used by fts5_expr.c and fts5_index.c. +*/ +struct Fts5Colset { + int nCol; + int aiCol[1]; +}; + + + +/************************************************************************** +** Interface to code in fts5_config.c. fts5_config.c contains contains code +** to parse the arguments passed to the CREATE VIRTUAL TABLE statement. +*/ + +typedef struct Fts5Config Fts5Config; + +/* +** An instance of the following structure encodes all information that can +** be gleaned from the CREATE VIRTUAL TABLE statement. +** +** And all information loaded from the %_config table. +** +** nAutomerge: +** The minimum number of segments that an auto-merge operation should +** attempt to merge together. A value of 1 sets the object to use the +** compile time default. Zero disables auto-merge altogether. +** +** zContent: +** +** zContentRowid: +** The value of the content_rowid= option, if one was specified. Or +** the string "rowid" otherwise. This text is not quoted - if it is +** used as part of an SQL statement it needs to be quoted appropriately. 
+** +** zContentExprlist: +** +** pzErrmsg: +** This exists in order to allow the fts5_index.c module to return a +** decent error message if it encounters a file-format version it does +** not understand. +** +** bColumnsize: +** True if the %_docsize table is created. +** +** bPrefixIndex: +** This is only used for debugging. If set to false, any prefix indexes +** are ignored. This value is configured using: +** +** INSERT INTO tbl(tbl, rank) VALUES('prefix-index', $bPrefixIndex); +** +*/ +struct Fts5Config { + sqlite3 *db; /* Database handle */ + char *zDb; /* Database holding FTS index (e.g. "main") */ + char *zName; /* Name of FTS index */ + int nCol; /* Number of columns */ + char **azCol; /* Column names */ + u8 *abUnindexed; /* True for unindexed columns */ + int nPrefix; /* Number of prefix indexes */ + int *aPrefix; /* Sizes in bytes of nPrefix prefix indexes */ + int eContent; /* An FTS5_CONTENT value */ + char *zContent; /* content table */ + char *zContentRowid; /* "content_rowid=" option value */ + int bColumnsize; /* "columnsize=" option value (dflt==1) */ + int eDetail; /* FTS5_DETAIL_XXX value */ + char *zContentExprlist; + Fts5Tokenizer *pTok; + fts5_tokenizer *pTokApi; + + /* Values loaded from the %_config table */ + int iCookie; /* Incremented when %_config is modified */ + int pgsz; /* Approximate page size used in %_data */ + int nAutomerge; /* 'automerge' setting */ + int nCrisisMerge; /* Maximum allowed segments per level */ + int nHashSize; /* Bytes of memory for in-memory hash */ + char *zRank; /* Name of rank function */ + char *zRankArgs; /* Arguments to rank function */ + + /* If non-NULL, points to sqlite3_vtab.base.zErrmsg. Often NULL. */ + char **pzErrmsg; + +#ifdef SQLITE_DEBUG + int bPrefixIndex; /* True to use prefix-indexes */ +#endif +}; + +/* Current expected value of %_config table 'version' field */ +#define FTS5_CURRENT_VERSION 4 + +#define FTS5_CONTENT_NORMAL 0 +#define FTS5_CONTENT_NONE 1 +#define FTS5_CONTENT_EXTERNAL 2 + +#define FTS5_DETAIL_FULL 0 +#define FTS5_DETAIL_NONE 1 +#define FTS5_DETAIL_COLUMNS 2 + + + +static int sqlite3Fts5ConfigParse( + Fts5Global*, sqlite3*, int, const char **, Fts5Config**, char** +); +static void sqlite3Fts5ConfigFree(Fts5Config*); + +static int sqlite3Fts5ConfigDeclareVtab(Fts5Config *pConfig); + +static int sqlite3Fts5Tokenize( + Fts5Config *pConfig, /* FTS5 Configuration object */ + int flags, /* FTS5_TOKENIZE_* flags */ + const char *pText, int nText, /* Text to tokenize */ + void *pCtx, /* Context passed to xToken() */ + int (*xToken)(void*, int, const char*, int, int, int) /* Callback */ +); + +static void sqlite3Fts5Dequote(char *z); + +/* Load the contents of the %_config table */ +static int sqlite3Fts5ConfigLoad(Fts5Config*, int); + +/* Set the value of a single config attribute */ +static int sqlite3Fts5ConfigSetValue(Fts5Config*, const char*, sqlite3_value*, int*); + +static int sqlite3Fts5ConfigParseRank(const char*, char**, char**); + +/* +** End of interface to code in fts5_config.c. +**************************************************************************/ + +/************************************************************************** +** Interface to code in fts5_buffer.c. +*/ + +/* +** Buffer object for the incremental building of string data. 
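+**
+** As an illustrative sketch only (not required usage), a caller typically
+** zeroes the structure, appends data using the helpers declared below,
+** and frees the allocation when done:
+**
+**   int rc = SQLITE_OK;
+**   Fts5Buffer buf;
+**   memset(&buf, 0, sizeof(buf));
+**   sqlite3Fts5BufferAppendString(&rc, &buf, "term");
+**   sqlite3Fts5BufferAppendVarint(&rc, &buf, 42);
+**   // on success, buf.p points to buf.n bytes of accumulated data
+**   sqlite3Fts5BufferFree(&buf);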
+*/ +typedef struct Fts5Buffer Fts5Buffer; +struct Fts5Buffer { + u8 *p; + int n; + int nSpace; +}; + +static int sqlite3Fts5BufferSize(int*, Fts5Buffer*, u32); +static void sqlite3Fts5BufferAppendVarint(int*, Fts5Buffer*, i64); +static void sqlite3Fts5BufferAppendBlob(int*, Fts5Buffer*, u32, const u8*); +static void sqlite3Fts5BufferAppendString(int *, Fts5Buffer*, const char*); +static void sqlite3Fts5BufferFree(Fts5Buffer*); +static void sqlite3Fts5BufferZero(Fts5Buffer*); +static void sqlite3Fts5BufferSet(int*, Fts5Buffer*, int, const u8*); +static void sqlite3Fts5BufferAppendPrintf(int *, Fts5Buffer*, char *zFmt, ...); + +static char *sqlite3Fts5Mprintf(int *pRc, const char *zFmt, ...); + +#define fts5BufferZero(x) sqlite3Fts5BufferZero(x) +#define fts5BufferAppendVarint(a,b,c) sqlite3Fts5BufferAppendVarint(a,b,c) +#define fts5BufferFree(a) sqlite3Fts5BufferFree(a) +#define fts5BufferAppendBlob(a,b,c,d) sqlite3Fts5BufferAppendBlob(a,b,c,d) +#define fts5BufferSet(a,b,c,d) sqlite3Fts5BufferSet(a,b,c,d) + +#define fts5BufferGrow(pRc,pBuf,nn) ( \ + (u32)((pBuf)->n) + (u32)(nn) <= (u32)((pBuf)->nSpace) ? 0 : \ + sqlite3Fts5BufferSize((pRc),(pBuf),(nn)+(pBuf)->n) \ +) + +/* Write and decode big-endian 32-bit integer values */ +static void sqlite3Fts5Put32(u8*, int); +static int sqlite3Fts5Get32(const u8*); + +#define FTS5_POS2COLUMN(iPos) (int)(iPos >> 32) +#define FTS5_POS2OFFSET(iPos) (int)(iPos & 0xFFFFFFFF) + +typedef struct Fts5PoslistReader Fts5PoslistReader; +struct Fts5PoslistReader { + /* Variables used only by sqlite3Fts5PoslistIterXXX() functions. */ + const u8 *a; /* Position list to iterate through */ + int n; /* Size of buffer at a[] in bytes */ + int i; /* Current offset in a[] */ + + u8 bFlag; /* For client use (any custom purpose) */ + + /* Output variables */ + u8 bEof; /* Set to true at EOF */ + i64 iPos; /* (iCol<<32) + iPos */ +}; +static int sqlite3Fts5PoslistReaderInit( + const u8 *a, int n, /* Poslist buffer to iterate through */ + Fts5PoslistReader *pIter /* Iterator object to initialize */ +); +static int sqlite3Fts5PoslistReaderNext(Fts5PoslistReader*); + +typedef struct Fts5PoslistWriter Fts5PoslistWriter; +struct Fts5PoslistWriter { + i64 iPrev; +}; +static int sqlite3Fts5PoslistWriterAppend(Fts5Buffer*, Fts5PoslistWriter*, i64); +static void sqlite3Fts5PoslistSafeAppend(Fts5Buffer*, i64*, i64); + +static int sqlite3Fts5PoslistNext64( + const u8 *a, int n, /* Buffer containing poslist */ + int *pi, /* IN/OUT: Offset within a[] */ + i64 *piOff /* IN/OUT: Current offset */ +); + +/* Malloc utility */ +static void *sqlite3Fts5MallocZero(int *pRc, int nByte); +static char *sqlite3Fts5Strndup(int *pRc, const char *pIn, int nIn); + +/* Character set tests (like isspace(), isalpha() etc.) */ +static int sqlite3Fts5IsBareword(char t); + + +/* Bucket of terms object used by the integrity-check in offsets=0 mode. */ +typedef struct Fts5Termset Fts5Termset; +static int sqlite3Fts5TermsetNew(Fts5Termset**); +static int sqlite3Fts5TermsetAdd(Fts5Termset*, int, const char*, int, int *pbPresent); +static void sqlite3Fts5TermsetFree(Fts5Termset*); + +/* +** End of interface to code in fts5_buffer.c. +**************************************************************************/ + +/************************************************************************** +** Interface to code in fts5_index.c. fts5_index.c contains contains code +** to access the data stored in the %_data table. 
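+**
+** As a rough sketch of how the query interface below fits together
+** (illustrative only; p is an open Fts5Index handle and error handling
+** is abbreviated), every rowid containing the token "abc" may be visited
+** like so:
+**
+**   Fts5IndexIter *pIter = 0;
+**   int rc = sqlite3Fts5IndexQuery(p, "abc", 3, 0, 0, &pIter);
+**   while( rc==SQLITE_OK && 0==sqlite3Fts5IterEof(pIter) ){
+**     // pIter->iRowid is the rowid of a matching document
+**     rc = sqlite3Fts5IterNext(pIter);
+**   }
+**   sqlite3Fts5IterClose(pIter);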
+*/ + +typedef struct Fts5Index Fts5Index; +typedef struct Fts5IndexIter Fts5IndexIter; + +struct Fts5IndexIter { + i64 iRowid; + const u8 *pData; + int nData; + u8 bEof; +}; + +#define sqlite3Fts5IterEof(x) ((x)->bEof) + +/* +** Values used as part of the flags argument passed to IndexQuery(). +*/ +#define FTS5INDEX_QUERY_PREFIX 0x0001 /* Prefix query */ +#define FTS5INDEX_QUERY_DESC 0x0002 /* Docs in descending rowid order */ +#define FTS5INDEX_QUERY_TEST_NOIDX 0x0004 /* Do not use prefix index */ +#define FTS5INDEX_QUERY_SCAN 0x0008 /* Scan query (fts5vocab) */ + +/* The following are used internally by the fts5_index.c module. They are +** defined here only to make it easier to avoid clashes with the flags +** above. */ +#define FTS5INDEX_QUERY_SKIPEMPTY 0x0010 +#define FTS5INDEX_QUERY_NOOUTPUT 0x0020 + +/* +** Create/destroy an Fts5Index object. +*/ +static int sqlite3Fts5IndexOpen(Fts5Config *pConfig, int bCreate, Fts5Index**, char**); +static int sqlite3Fts5IndexClose(Fts5Index *p); + +/* +** Return a simple checksum value based on the arguments. +*/ +static u64 sqlite3Fts5IndexEntryCksum( + i64 iRowid, + int iCol, + int iPos, + int iIdx, + const char *pTerm, + int nTerm +); + +/* +** Argument p points to a buffer containing utf-8 text that is n bytes in +** size. Return the number of bytes in the nChar character prefix of the +** buffer, or 0 if there are less than nChar characters in total. +*/ +static int sqlite3Fts5IndexCharlenToBytelen( + const char *p, + int nByte, + int nChar +); + +/* +** Open a new iterator to iterate though all rowids that match the +** specified token or token prefix. +*/ +static int sqlite3Fts5IndexQuery( + Fts5Index *p, /* FTS index to query */ + const char *pToken, int nToken, /* Token (or prefix) to query for */ + int flags, /* Mask of FTS5INDEX_QUERY_X flags */ + Fts5Colset *pColset, /* Match these columns only */ + Fts5IndexIter **ppIter /* OUT: New iterator object */ +); + +/* +** The various operations on open token or token prefix iterators opened +** using sqlite3Fts5IndexQuery(). +*/ +static int sqlite3Fts5IterNext(Fts5IndexIter*); +static int sqlite3Fts5IterNextFrom(Fts5IndexIter*, i64 iMatch); + +/* +** Close an iterator opened by sqlite3Fts5IndexQuery(). +*/ +static void sqlite3Fts5IterClose(Fts5IndexIter*); + +/* +** This interface is used by the fts5vocab module. +*/ +static const char *sqlite3Fts5IterTerm(Fts5IndexIter*, int*); +static int sqlite3Fts5IterNextScan(Fts5IndexIter*); + + +/* +** Insert or remove data to or from the index. Each time a document is +** added to or removed from the index, this function is called one or more +** times. +** +** For an insert, it must be called once for each token in the new document. +** If the operation is a delete, it must be called (at least) once for each +** unique token in the document with an iCol value less than zero. The iPos +** argument is ignored for a delete. +*/ +static int sqlite3Fts5IndexWrite( + Fts5Index *p, /* Index to write to */ + int iCol, /* Column token appears in (-ve -> delete) */ + int iPos, /* Position of token within column */ + const char *pToken, int nToken /* Token to add or remove to or from index */ +); + +/* +** Indicate that subsequent calls to sqlite3Fts5IndexWrite() pertain to +** document iDocid. 
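+**
+** For example (an illustrative sketch only), inserting the single-column
+** document "abc def" as rowid 22 involves a call sequence along these
+** lines, with one sqlite3Fts5IndexWrite() call per token:
+**
+**   sqlite3Fts5IndexBeginWrite(p, 0, 22);
+**   sqlite3Fts5IndexWrite(p, 0, 0, "abc", 3);
+**   sqlite3Fts5IndexWrite(p, 0, 1, "def", 3);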
+*/ +static int sqlite3Fts5IndexBeginWrite( + Fts5Index *p, /* Index to write to */ + int bDelete, /* True if current operation is a delete */ + i64 iDocid /* Docid to add or remove data from */ +); + +/* +** Flush any data stored in the in-memory hash tables to the database. +** If the bCommit flag is true, also close any open blob handles. +*/ +static int sqlite3Fts5IndexSync(Fts5Index *p, int bCommit); + +/* +** Discard any data stored in the in-memory hash tables. Do not write it +** to the database. Additionally, assume that the contents of the %_data +** table may have changed on disk. So any in-memory caches of %_data +** records must be invalidated. +*/ +static int sqlite3Fts5IndexRollback(Fts5Index *p); + +/* +** Get or set the "averages" values. +*/ +static int sqlite3Fts5IndexGetAverages(Fts5Index *p, i64 *pnRow, i64 *anSize); +static int sqlite3Fts5IndexSetAverages(Fts5Index *p, const u8*, int); + +/* +** Functions called by the storage module as part of integrity-check. +*/ +static int sqlite3Fts5IndexIntegrityCheck(Fts5Index*, u64 cksum); + +/* +** Called during virtual module initialization to register UDF +** fts5_decode() with SQLite +*/ +static int sqlite3Fts5IndexInit(sqlite3*); + +static int sqlite3Fts5IndexSetCookie(Fts5Index*, int); + +/* +** Return the total number of entries read from the %_data table by +** this connection since it was created. +*/ +static int sqlite3Fts5IndexReads(Fts5Index *p); + +static int sqlite3Fts5IndexReinit(Fts5Index *p); +static int sqlite3Fts5IndexOptimize(Fts5Index *p); +static int sqlite3Fts5IndexMerge(Fts5Index *p, int nMerge); + +static int sqlite3Fts5IndexLoadConfig(Fts5Index *p); + +/* +** End of interface to code in fts5_index.c. +**************************************************************************/ + +/************************************************************************** +** Interface to code in fts5_varint.c. +*/ +static int sqlite3Fts5GetVarint32(const unsigned char *p, u32 *v); +static int sqlite3Fts5GetVarintLen(u32 iVal); +static u8 sqlite3Fts5GetVarint(const unsigned char*, u64*); +static int sqlite3Fts5PutVarint(unsigned char *p, u64 v); + +#define fts5GetVarint32(a,b) sqlite3Fts5GetVarint32(a,(u32*)&b) +#define fts5GetVarint sqlite3Fts5GetVarint + +#define fts5FastGetVarint32(a, iOff, nVal) { \ + nVal = (a)[iOff++]; \ + if( nVal & 0x80 ){ \ + iOff--; \ + iOff += fts5GetVarint32(&(a)[iOff], nVal); \ + } \ +} + + +/* +** End of interface to code in fts5_varint.c. +**************************************************************************/ + + +/************************************************************************** +** Interface to code in fts5.c. +*/ + +static int sqlite3Fts5GetTokenizer( + Fts5Global*, + const char **azArg, + int nArg, + Fts5Tokenizer**, + fts5_tokenizer**, + char **pzErr +); + +static Fts5Index *sqlite3Fts5IndexFromCsrid(Fts5Global*, i64, Fts5Config **); + +/* +** End of interface to code in fts5.c. +**************************************************************************/ + +/************************************************************************** +** Interface to code in fts5_hash.c. +*/ +typedef struct Fts5Hash Fts5Hash; + +/* +** Create a hash table, free a hash table. 
+*/ +static int sqlite3Fts5HashNew(Fts5Config*, Fts5Hash**, int *pnSize); +static void sqlite3Fts5HashFree(Fts5Hash*); + +static int sqlite3Fts5HashWrite( + Fts5Hash*, + i64 iRowid, /* Rowid for this entry */ + int iCol, /* Column token appears in (-ve -> delete) */ + int iPos, /* Position of token within column */ + char bByte, + const char *pToken, int nToken /* Token to add or remove to or from index */ +); + +/* +** Empty (but do not delete) a hash table. +*/ +static void sqlite3Fts5HashClear(Fts5Hash*); + +static int sqlite3Fts5HashQuery( + Fts5Hash*, /* Hash table to query */ + const char *pTerm, int nTerm, /* Query term */ + const u8 **ppDoclist, /* OUT: Pointer to doclist for pTerm */ + int *pnDoclist /* OUT: Size of doclist in bytes */ +); + +static int sqlite3Fts5HashScanInit( + Fts5Hash*, /* Hash table to query */ + const char *pTerm, int nTerm /* Query prefix */ +); +static void sqlite3Fts5HashScanNext(Fts5Hash*); +static int sqlite3Fts5HashScanEof(Fts5Hash*); +static void sqlite3Fts5HashScanEntry(Fts5Hash *, + const char **pzTerm, /* OUT: term (nul-terminated) */ + const u8 **ppDoclist, /* OUT: pointer to doclist */ + int *pnDoclist /* OUT: size of doclist in bytes */ +); + + +/* +** End of interface to code in fts5_hash.c. +**************************************************************************/ + +/************************************************************************** +** Interface to code in fts5_storage.c. fts5_storage.c contains contains +** code to access the data stored in the %_content and %_docsize tables. +*/ + +#define FTS5_STMT_SCAN_ASC 0 /* SELECT rowid, * FROM ... ORDER BY 1 ASC */ +#define FTS5_STMT_SCAN_DESC 1 /* SELECT rowid, * FROM ... ORDER BY 1 DESC */ +#define FTS5_STMT_LOOKUP 2 /* SELECT rowid, * FROM ... WHERE rowid=? */ + +typedef struct Fts5Storage Fts5Storage; + +static int sqlite3Fts5StorageOpen(Fts5Config*, Fts5Index*, int, Fts5Storage**, char**); +static int sqlite3Fts5StorageClose(Fts5Storage *p); +static int sqlite3Fts5StorageRename(Fts5Storage*, const char *zName); + +static int sqlite3Fts5DropAll(Fts5Config*); +static int sqlite3Fts5CreateTable(Fts5Config*, const char*, const char*, int, char **); + +static int sqlite3Fts5StorageDelete(Fts5Storage *p, i64, sqlite3_value**); +static int sqlite3Fts5StorageContentInsert(Fts5Storage *p, sqlite3_value**, i64*); +static int sqlite3Fts5StorageIndexInsert(Fts5Storage *p, sqlite3_value**, i64); + +static int sqlite3Fts5StorageIntegrity(Fts5Storage *p); + +static int sqlite3Fts5StorageStmt(Fts5Storage *p, int eStmt, sqlite3_stmt**, char**); +static void sqlite3Fts5StorageStmtRelease(Fts5Storage *p, int eStmt, sqlite3_stmt*); + +static int sqlite3Fts5StorageDocsize(Fts5Storage *p, i64 iRowid, int *aCol); +static int sqlite3Fts5StorageSize(Fts5Storage *p, int iCol, i64 *pnAvg); +static int sqlite3Fts5StorageRowCount(Fts5Storage *p, i64 *pnRow); + +static int sqlite3Fts5StorageSync(Fts5Storage *p, int bCommit); +static int sqlite3Fts5StorageRollback(Fts5Storage *p); + +static int sqlite3Fts5StorageConfigValue( + Fts5Storage *p, const char*, sqlite3_value*, int +); + +static int sqlite3Fts5StorageDeleteAll(Fts5Storage *p); +static int sqlite3Fts5StorageRebuild(Fts5Storage *p); +static int sqlite3Fts5StorageOptimize(Fts5Storage *p); +static int sqlite3Fts5StorageMerge(Fts5Storage *p, int nMerge); + +/* +** End of interface to code in fts5_storage.c. 
+**************************************************************************/ + + +/************************************************************************** +** Interface to code in fts5_expr.c. +*/ +typedef struct Fts5Expr Fts5Expr; +typedef struct Fts5ExprNode Fts5ExprNode; +typedef struct Fts5Parse Fts5Parse; +typedef struct Fts5Token Fts5Token; +typedef struct Fts5ExprPhrase Fts5ExprPhrase; +typedef struct Fts5ExprNearset Fts5ExprNearset; + +struct Fts5Token { + const char *p; /* Token text (not NULL terminated) */ + int n; /* Size of buffer p in bytes */ +}; + +/* Parse a MATCH expression. */ +static int sqlite3Fts5ExprNew( + Fts5Config *pConfig, + const char *zExpr, + Fts5Expr **ppNew, + char **pzErr +); + +/* +** for(rc = sqlite3Fts5ExprFirst(pExpr, pIdx, bDesc); +** rc==SQLITE_OK && 0==sqlite3Fts5ExprEof(pExpr); +** rc = sqlite3Fts5ExprNext(pExpr) +** ){ +** // The document with rowid iRowid matches the expression! +** i64 iRowid = sqlite3Fts5ExprRowid(pExpr); +** } +*/ +static int sqlite3Fts5ExprFirst(Fts5Expr*, Fts5Index *pIdx, i64 iMin, int bDesc); +static int sqlite3Fts5ExprNext(Fts5Expr*, i64 iMax); +static int sqlite3Fts5ExprEof(Fts5Expr*); +static i64 sqlite3Fts5ExprRowid(Fts5Expr*); + +static void sqlite3Fts5ExprFree(Fts5Expr*); + +/* Called during startup to register a UDF with SQLite */ +static int sqlite3Fts5ExprInit(Fts5Global*, sqlite3*); + +static int sqlite3Fts5ExprPhraseCount(Fts5Expr*); +static int sqlite3Fts5ExprPhraseSize(Fts5Expr*, int iPhrase); +static int sqlite3Fts5ExprPoslist(Fts5Expr*, int, const u8 **); + +typedef struct Fts5PoslistPopulator Fts5PoslistPopulator; +static Fts5PoslistPopulator *sqlite3Fts5ExprClearPoslists(Fts5Expr*, int); +static int sqlite3Fts5ExprPopulatePoslists( + Fts5Config*, Fts5Expr*, Fts5PoslistPopulator*, int, const char*, int +); +static void sqlite3Fts5ExprCheckPoslists(Fts5Expr*, i64); +static void sqlite3Fts5ExprClearEof(Fts5Expr*); + +static int sqlite3Fts5ExprClonePhrase(Fts5Expr*, int, Fts5Expr**); + +static int sqlite3Fts5ExprPhraseCollist(Fts5Expr *, int, const u8 **, int *); + +/******************************************* +** The fts5_expr.c API above this point is used by the other hand-written +** C code in this module. The interfaces below this point are called by +** the parser code in fts5parse.y. */ + +static void sqlite3Fts5ParseError(Fts5Parse *pParse, const char *zFmt, ...); + +static Fts5ExprNode *sqlite3Fts5ParseNode( + Fts5Parse *pParse, + int eType, + Fts5ExprNode *pLeft, + Fts5ExprNode *pRight, + Fts5ExprNearset *pNear +); + +static Fts5ExprPhrase *sqlite3Fts5ParseTerm( + Fts5Parse *pParse, + Fts5ExprPhrase *pPhrase, + Fts5Token *pToken, + int bPrefix +); + +static Fts5ExprNearset *sqlite3Fts5ParseNearset( + Fts5Parse*, + Fts5ExprNearset*, + Fts5ExprPhrase* +); + +static Fts5Colset *sqlite3Fts5ParseColset( + Fts5Parse*, + Fts5Colset*, + Fts5Token * +); + +static void sqlite3Fts5ParsePhraseFree(Fts5ExprPhrase*); +static void sqlite3Fts5ParseNearsetFree(Fts5ExprNearset*); +static void sqlite3Fts5ParseNodeFree(Fts5ExprNode*); + +static void sqlite3Fts5ParseSetDistance(Fts5Parse*, Fts5ExprNearset*, Fts5Token*); +static void sqlite3Fts5ParseSetColset(Fts5Parse*, Fts5ExprNearset*, Fts5Colset*); +static void sqlite3Fts5ParseFinished(Fts5Parse *pParse, Fts5ExprNode *p); +static void sqlite3Fts5ParseNear(Fts5Parse *pParse, Fts5Token*); + +/* +** End of interface to code in fts5_expr.c. 
+**************************************************************************/ + + + +/************************************************************************** +** Interface to code in fts5_aux.c. +*/ + +static int sqlite3Fts5AuxInit(fts5_api*); +/* +** End of interface to code in fts5_aux.c. +**************************************************************************/ + +/************************************************************************** +** Interface to code in fts5_tokenizer.c. +*/ + +static int sqlite3Fts5TokenizerInit(fts5_api*); +/* +** End of interface to code in fts5_tokenizer.c. +**************************************************************************/ + +/************************************************************************** +** Interface to code in fts5_vocab.c. +*/ + +static int sqlite3Fts5VocabInit(Fts5Global*, sqlite3*); + +/* +** End of interface to code in fts5_vocab.c. +**************************************************************************/ + + +/************************************************************************** +** Interface to automatically generated code in fts5_unicode2.c. +*/ +static int sqlite3Fts5UnicodeIsalnum(int c); +static int sqlite3Fts5UnicodeIsdiacritic(int c); +static int sqlite3Fts5UnicodeFold(int c, int bRemoveDiacritic); +/* +** End of interface to code in fts5_unicode2.c. +**************************************************************************/ + +#endif + +#define FTS5_OR 1 +#define FTS5_AND 2 +#define FTS5_NOT 3 +#define FTS5_TERM 4 +#define FTS5_COLON 5 +#define FTS5_LP 6 +#define FTS5_RP 7 +#define FTS5_LCP 8 +#define FTS5_RCP 9 +#define FTS5_STRING 10 +#define FTS5_COMMA 11 +#define FTS5_PLUS 12 +#define FTS5_STAR 13 + +/* +** 2000-05-29 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +** Driver template for the LEMON parser generator. +** +** The "lemon" program processes an LALR(1) input grammar file, then uses +** this template to construct a parser. The "lemon" program inserts text +** at each "%%" line. Also, any "P-a-r-s-e" identifer prefix (without the +** interstitial "-" characters) contained in this template is changed into +** the value of the %name directive from the grammar. Otherwise, the content +** of this template is copied straight through into the generate parser +** source file. +** +** The following is the concatenation of all %include directives from the +** input grammar file: +*/ +/* #include <stdio.h> */ +/************ Begin %include sections from the grammar ************************/ + +/* #include "fts5Int.h" */ +/* #include "fts5parse.h" */ + +/* +** Disable all error recovery processing in the parser push-down +** automaton. +*/ +#define fts5YYNOERRORRECOVERY 1 + +/* +** Make fts5yytestcase() the same as testcase() +*/ +#define fts5yytestcase(X) testcase(X) + +/* +** Indicate that sqlite3ParserFree() will never be called with a null +** pointer. +*/ +#define fts5YYPARSEFREENOTNULL 1 + +/* +** Alternative datatype for the argument to the malloc() routine passed +** into sqlite3ParserAlloc(). The default is size_t. 
+*/ +#define fts5YYMALLOCARGTYPE u64 + +/**************** End of %include directives **********************************/ +/* These constants specify the various numeric values for terminal symbols +** in a format understandable to "makeheaders". This section is blank unless +** "lemon" is run with the "-m" command-line option. +***************** Begin makeheaders token definitions *************************/ +/**************** End makeheaders token definitions ***************************/ + +/* The next sections is a series of control #defines. +** various aspects of the generated parser. +** fts5YYCODETYPE is the data type used to store the integer codes +** that represent terminal and non-terminal symbols. +** "unsigned char" is used if there are fewer than +** 256 symbols. Larger types otherwise. +** fts5YYNOCODE is a number of type fts5YYCODETYPE that is not used for +** any terminal or nonterminal symbol. +** fts5YYFALLBACK If defined, this indicates that one or more tokens +** (also known as: "terminal symbols") have fall-back +** values which should be used if the original symbol +** would not parse. This permits keywords to sometimes +** be used as identifiers, for example. +** fts5YYACTIONTYPE is the data type used for "action codes" - numbers +** that indicate what to do in response to the next +** token. +** sqlite3Fts5ParserFTS5TOKENTYPE is the data type used for minor type for terminal +** symbols. Background: A "minor type" is a semantic +** value associated with a terminal or non-terminal +** symbols. For example, for an "ID" terminal symbol, +** the minor type might be the name of the identifier. +** Each non-terminal can have a different minor type. +** Terminal symbols all have the same minor type, though. +** This macros defines the minor type for terminal +** symbols. +** fts5YYMINORTYPE is the data type used for all minor types. +** This is typically a union of many types, one of +** which is sqlite3Fts5ParserFTS5TOKENTYPE. The entry in the union +** for terminal symbols is called "fts5yy0". +** fts5YYSTACKDEPTH is the maximum depth of the parser's stack. If +** zero the stack is dynamically sized using realloc() +** sqlite3Fts5ParserARG_SDECL A static variable declaration for the %extra_argument +** sqlite3Fts5ParserARG_PDECL A parameter declaration for the %extra_argument +** sqlite3Fts5ParserARG_STORE Code to store %extra_argument into fts5yypParser +** sqlite3Fts5ParserARG_FETCH Code to extract %extra_argument from fts5yypParser +** fts5YYERRORSYMBOL is the code number of the error symbol. If not +** defined, then do no error processing. +** fts5YYNSTATE the combined number of states. 
+** fts5YYNRULE the number of rules in the grammar +** fts5YY_MAX_SHIFT Maximum value for shift actions +** fts5YY_MIN_SHIFTREDUCE Minimum value for shift-reduce actions +** fts5YY_MAX_SHIFTREDUCE Maximum value for shift-reduce actions +** fts5YY_MIN_REDUCE Maximum value for reduce actions +** fts5YY_ERROR_ACTION The fts5yy_action[] code for syntax error +** fts5YY_ACCEPT_ACTION The fts5yy_action[] code for accept +** fts5YY_NO_ACTION The fts5yy_action[] code for no-op +*/ +#ifndef INTERFACE +# define INTERFACE 1 +#endif +/************* Begin control #defines *****************************************/ +#define fts5YYCODETYPE unsigned char +#define fts5YYNOCODE 27 +#define fts5YYACTIONTYPE unsigned char +#define sqlite3Fts5ParserFTS5TOKENTYPE Fts5Token +typedef union { + int fts5yyinit; + sqlite3Fts5ParserFTS5TOKENTYPE fts5yy0; + Fts5Colset* fts5yy3; + Fts5ExprPhrase* fts5yy11; + Fts5ExprNode* fts5yy18; + int fts5yy20; + Fts5ExprNearset* fts5yy26; +} fts5YYMINORTYPE; +#ifndef fts5YYSTACKDEPTH +#define fts5YYSTACKDEPTH 100 +#endif +#define sqlite3Fts5ParserARG_SDECL Fts5Parse *pParse; +#define sqlite3Fts5ParserARG_PDECL ,Fts5Parse *pParse +#define sqlite3Fts5ParserARG_FETCH Fts5Parse *pParse = fts5yypParser->pParse +#define sqlite3Fts5ParserARG_STORE fts5yypParser->pParse = pParse +#define fts5YYNSTATE 26 +#define fts5YYNRULE 24 +#define fts5YY_MAX_SHIFT 25 +#define fts5YY_MIN_SHIFTREDUCE 40 +#define fts5YY_MAX_SHIFTREDUCE 63 +#define fts5YY_MIN_REDUCE 64 +#define fts5YY_MAX_REDUCE 87 +#define fts5YY_ERROR_ACTION 88 +#define fts5YY_ACCEPT_ACTION 89 +#define fts5YY_NO_ACTION 90 +/************* End control #defines *******************************************/ + +/* The fts5yyzerominor constant is used to initialize instances of +** fts5YYMINORTYPE objects to zero. */ +static const fts5YYMINORTYPE fts5yyzerominor = { 0 }; + +/* Define the fts5yytestcase() macro to be a no-op if is not already defined +** otherwise. +** +** Applications can choose to define fts5yytestcase() in the %include section +** to a macro that can assist in verifying code coverage. For production +** code the fts5yytestcase() macro should be turned off. But it is useful +** for testing. +*/ +#ifndef fts5yytestcase +# define fts5yytestcase(X) +#endif + + +/* Next are the tables used to determine what action to take based on the +** current state and lookahead token. These tables are used to implement +** functions that take a state number and lookahead value and return an +** action integer. +** +** Suppose the action integer is N. Then the action is determined as +** follows +** +** 0 <= N <= fts5YY_MAX_SHIFT Shift N. That is, push the lookahead +** token onto the stack and goto state N. +** +** N between fts5YY_MIN_SHIFTREDUCE Shift to an arbitrary state then +** and fts5YY_MAX_SHIFTREDUCE reduce by rule N-fts5YY_MIN_SHIFTREDUCE. +** +** N between fts5YY_MIN_REDUCE Reduce by rule N-fts5YY_MIN_REDUCE +** and fts5YY_MAX_REDUCE + +** N == fts5YY_ERROR_ACTION A syntax error has occurred. +** +** N == fts5YY_ACCEPT_ACTION The parser accepts its input. +** +** N == fts5YY_NO_ACTION No such action. Denotes unused +** slots in the fts5yy_action[] table. +** +** The action table is constructed as a single large table named fts5yy_action[]. 
+** Given state S and lookahead X, the action is computed as +** +** fts5yy_action[ fts5yy_shift_ofst[S] + X ] +** +** If the index value fts5yy_shift_ofst[S]+X is out of range or if the value +** fts5yy_lookahead[fts5yy_shift_ofst[S]+X] is not equal to X or if fts5yy_shift_ofst[S] +** is equal to fts5YY_SHIFT_USE_DFLT, it means that the action is not in the table +** and that fts5yy_default[S] should be used instead. +** +** The formula above is for computing the action when the lookahead is +** a terminal symbol. If the lookahead is a non-terminal (as occurs after +** a reduce action) then the fts5yy_reduce_ofst[] array is used in place of +** the fts5yy_shift_ofst[] array and fts5YY_REDUCE_USE_DFLT is used in place of +** fts5YY_SHIFT_USE_DFLT. +** +** The following are the tables generated in this section: +** +** fts5yy_action[] A single table containing all actions. +** fts5yy_lookahead[] A table containing the lookahead for each entry in +** fts5yy_action. Used to detect hash collisions. +** fts5yy_shift_ofst[] For each state, the offset into fts5yy_action for +** shifting terminals. +** fts5yy_reduce_ofst[] For each state, the offset into fts5yy_action for +** shifting non-terminals after a reduce. +** fts5yy_default[] Default action for each state. +** +*********** Begin parsing tables **********************************************/ +#define fts5YY_ACTTAB_COUNT (78) +static const fts5YYACTIONTYPE fts5yy_action[] = { + /* 0 */ 89, 15, 46, 5, 48, 24, 12, 19, 23, 14, + /* 10 */ 46, 5, 48, 24, 20, 21, 23, 43, 46, 5, + /* 20 */ 48, 24, 6, 18, 23, 17, 46, 5, 48, 24, + /* 30 */ 75, 7, 23, 25, 46, 5, 48, 24, 62, 47, + /* 40 */ 23, 48, 24, 7, 11, 23, 9, 3, 4, 2, + /* 50 */ 62, 50, 52, 44, 64, 3, 4, 2, 49, 4, + /* 60 */ 2, 1, 23, 11, 16, 9, 12, 2, 10, 61, + /* 70 */ 53, 59, 62, 60, 22, 13, 55, 8, +}; +static const fts5YYCODETYPE fts5yy_lookahead[] = { + /* 0 */ 15, 16, 17, 18, 19, 20, 10, 11, 23, 16, + /* 10 */ 17, 18, 19, 20, 23, 24, 23, 16, 17, 18, + /* 20 */ 19, 20, 22, 23, 23, 16, 17, 18, 19, 20, + /* 30 */ 5, 6, 23, 16, 17, 18, 19, 20, 13, 17, + /* 40 */ 23, 19, 20, 6, 8, 23, 10, 1, 2, 3, + /* 50 */ 13, 9, 10, 7, 0, 1, 2, 3, 19, 2, + /* 60 */ 3, 6, 23, 8, 21, 10, 10, 3, 10, 25, + /* 70 */ 10, 10, 13, 25, 12, 10, 7, 5, +}; +#define fts5YY_SHIFT_USE_DFLT (-5) +#define fts5YY_SHIFT_COUNT (25) +#define fts5YY_SHIFT_MIN (-4) +#define fts5YY_SHIFT_MAX (72) +static const signed char fts5yy_shift_ofst[] = { + /* 0 */ 55, 55, 55, 55, 55, 36, -4, 56, 58, 25, + /* 10 */ 37, 60, 59, 59, 46, 54, 42, 57, 62, 61, + /* 20 */ 62, 69, 65, 62, 72, 64, +}; +#define fts5YY_REDUCE_USE_DFLT (-16) +#define fts5YY_REDUCE_COUNT (13) +#define fts5YY_REDUCE_MIN (-15) +#define fts5YY_REDUCE_MAX (48) +static const signed char fts5yy_reduce_ofst[] = { + /* 0 */ -15, -7, 1, 9, 17, 22, -9, 0, 39, 44, + /* 10 */ 44, 43, 44, 48, +}; +static const fts5YYACTIONTYPE fts5yy_default[] = { + /* 0 */ 88, 88, 88, 88, 88, 69, 82, 88, 88, 87, + /* 10 */ 87, 88, 87, 87, 88, 88, 88, 66, 80, 88, + /* 20 */ 81, 88, 88, 78, 88, 65, +}; +/********** End of lemon-generated parsing tables *****************************/ + +/* The next table maps tokens (terminal symbols) into fallback tokens. +** If a construct like the following: +** +** %fallback ID X Y Z. +** +** appears in the grammar, then ID becomes a fallback token for X, Y, +** and Z. Whenever one of the tokens X, Y, or Z is input to the parser +** but it does not parse, the type of the token is changed to ID and +** the parse is retried before an error is thrown. 
+** +** This feature can be used, for example, to cause some keywords in a language +** to revert to identifiers if they keyword does not apply in the context where +** it appears. +*/ +#ifdef fts5YYFALLBACK +static const fts5YYCODETYPE fts5yyFallback[] = { +}; +#endif /* fts5YYFALLBACK */ + +/* The following structure represents a single element of the +** parser's stack. Information stored includes: +** +** + The state number for the parser at this level of the stack. +** +** + The value of the token stored at this level of the stack. +** (In other words, the "major" token.) +** +** + The semantic value stored at this level of the stack. This is +** the information used by the action routines in the grammar. +** It is sometimes called the "minor" token. +** +** After the "shift" half of a SHIFTREDUCE action, the stateno field +** actually contains the reduce action for the second half of the +** SHIFTREDUCE. +*/ +struct fts5yyStackEntry { + fts5YYACTIONTYPE stateno; /* The state-number, or reduce action in SHIFTREDUCE */ + fts5YYCODETYPE major; /* The major token value. This is the code + ** number for the token at this stack level */ + fts5YYMINORTYPE minor; /* The user-supplied minor token value. This + ** is the value of the token */ +}; +typedef struct fts5yyStackEntry fts5yyStackEntry; + +/* The state of the parser is completely contained in an instance of +** the following structure */ +struct fts5yyParser { + int fts5yyidx; /* Index of top element in stack */ +#ifdef fts5YYTRACKMAXSTACKDEPTH + int fts5yyidxMax; /* Maximum value of fts5yyidx */ +#endif + int fts5yyerrcnt; /* Shifts left before out of the error */ + sqlite3Fts5ParserARG_SDECL /* A place to hold %extra_argument */ +#if fts5YYSTACKDEPTH<=0 + int fts5yystksz; /* Current side of the stack */ + fts5yyStackEntry *fts5yystack; /* The parser's stack */ +#else + fts5yyStackEntry fts5yystack[fts5YYSTACKDEPTH]; /* The parser's stack */ +#endif +}; +typedef struct fts5yyParser fts5yyParser; + +#ifndef NDEBUG +/* #include <stdio.h> */ +static FILE *fts5yyTraceFILE = 0; +static char *fts5yyTracePrompt = 0; +#endif /* NDEBUG */ + +#ifndef NDEBUG +/* +** Turn parser tracing on by giving a stream to which to write the trace +** and a prompt to preface each trace message. Tracing is turned off +** by making either argument NULL +** +** Inputs: +** <ul> +** <li> A FILE* to which trace output should be written. +** If NULL, then tracing is turned off. +** <li> A prefix string written at the beginning of every +** line of trace output. If NULL, then tracing is +** turned off. +** </ul> +** +** Outputs: +** None. +*/ +static void sqlite3Fts5ParserTrace(FILE *TraceFILE, char *zTracePrompt){ + fts5yyTraceFILE = TraceFILE; + fts5yyTracePrompt = zTracePrompt; + if( fts5yyTraceFILE==0 ) fts5yyTracePrompt = 0; + else if( fts5yyTracePrompt==0 ) fts5yyTraceFILE = 0; +} +#endif /* NDEBUG */ + +#ifndef NDEBUG +/* For tracing shifts, the names of all terminals and nonterminals +** are required. The following table supplies these names */ +static const char *const fts5yyTokenName[] = { + "$", "OR", "AND", "NOT", + "TERM", "COLON", "LP", "RP", + "LCP", "RCP", "STRING", "COMMA", + "PLUS", "STAR", "error", "input", + "expr", "cnearset", "exprlist", "nearset", + "colset", "colsetlist", "nearphrases", "phrase", + "neardist_opt", "star_opt", +}; +#endif /* NDEBUG */ + +#ifndef NDEBUG +/* For tracing reduce actions, the names of all rules are required. 
+*/ +static const char *const fts5yyRuleName[] = { + /* 0 */ "input ::= expr", + /* 1 */ "expr ::= expr AND expr", + /* 2 */ "expr ::= expr OR expr", + /* 3 */ "expr ::= expr NOT expr", + /* 4 */ "expr ::= LP expr RP", + /* 5 */ "expr ::= exprlist", + /* 6 */ "exprlist ::= cnearset", + /* 7 */ "exprlist ::= exprlist cnearset", + /* 8 */ "cnearset ::= nearset", + /* 9 */ "cnearset ::= colset COLON nearset", + /* 10 */ "colset ::= LCP colsetlist RCP", + /* 11 */ "colset ::= STRING", + /* 12 */ "colsetlist ::= colsetlist STRING", + /* 13 */ "colsetlist ::= STRING", + /* 14 */ "nearset ::= phrase", + /* 15 */ "nearset ::= STRING LP nearphrases neardist_opt RP", + /* 16 */ "nearphrases ::= phrase", + /* 17 */ "nearphrases ::= nearphrases phrase", + /* 18 */ "neardist_opt ::=", + /* 19 */ "neardist_opt ::= COMMA STRING", + /* 20 */ "phrase ::= phrase PLUS STRING star_opt", + /* 21 */ "phrase ::= STRING star_opt", + /* 22 */ "star_opt ::= STAR", + /* 23 */ "star_opt ::=", +}; +#endif /* NDEBUG */ + + +#if fts5YYSTACKDEPTH<=0 +/* +** Try to increase the size of the parser stack. +*/ +static void fts5yyGrowStack(fts5yyParser *p){ + int newSize; + fts5yyStackEntry *pNew; + + newSize = p->fts5yystksz*2 + 100; + pNew = realloc(p->fts5yystack, newSize*sizeof(pNew[0])); + if( pNew ){ + p->fts5yystack = pNew; + p->fts5yystksz = newSize; +#ifndef NDEBUG + if( fts5yyTraceFILE ){ + fprintf(fts5yyTraceFILE,"%sStack grows to %d entries!\n", + fts5yyTracePrompt, p->fts5yystksz); + } +#endif + } +} +#endif + +/* Datatype of the argument to the memory allocated passed as the +** second argument to sqlite3Fts5ParserAlloc() below. This can be changed by +** putting an appropriate #define in the %include section of the input +** grammar. +*/ +#ifndef fts5YYMALLOCARGTYPE +# define fts5YYMALLOCARGTYPE size_t +#endif + +/* +** This function allocates a new parser. +** The only argument is a pointer to a function which works like +** malloc. +** +** Inputs: +** A pointer to the function used to allocate memory. +** +** Outputs: +** A pointer to a parser. This pointer is used in subsequent calls +** to sqlite3Fts5Parser and sqlite3Fts5ParserFree. +*/ +static void *sqlite3Fts5ParserAlloc(void *(*mallocProc)(fts5YYMALLOCARGTYPE)){ + fts5yyParser *pParser; + pParser = (fts5yyParser*)(*mallocProc)( (fts5YYMALLOCARGTYPE)sizeof(fts5yyParser) ); + if( pParser ){ + pParser->fts5yyidx = -1; +#ifdef fts5YYTRACKMAXSTACKDEPTH + pParser->fts5yyidxMax = 0; +#endif +#if fts5YYSTACKDEPTH<=0 + pParser->fts5yystack = NULL; + pParser->fts5yystksz = 0; + fts5yyGrowStack(pParser); +#endif + } + return pParser; +} + +/* The following function deletes the "minor type" or semantic value +** associated with a symbol. The symbol can be either a terminal +** or nonterminal. "fts5yymajor" is the symbol code, and "fts5yypminor" is +** a pointer to the value to be deleted. The code used to do the +** deletions is derived from the %destructor and/or %token_destructor +** directives of the input grammar. +*/ +static void fts5yy_destructor( + fts5yyParser *fts5yypParser, /* The parser */ + fts5YYCODETYPE fts5yymajor, /* Type code for object to destroy */ + fts5YYMINORTYPE *fts5yypminor /* The object to be destroyed */ +){ + sqlite3Fts5ParserARG_FETCH; + switch( fts5yymajor ){ + /* Here is inserted the actions which take place when a + ** terminal or non-terminal is destroyed. 
This can happen + ** when the symbol is popped from the stack during a + ** reduce or during error processing or when a parser is + ** being destroyed before it is finished parsing. + ** + ** Note: during a reduce, the only symbols destroyed are those + ** which appear on the RHS of the rule, but which are *not* used + ** inside the C code. + */ +/********* Begin destructor definitions ***************************************/ + case 15: /* input */ +{ + (void)pParse; +} + break; + case 16: /* expr */ + case 17: /* cnearset */ + case 18: /* exprlist */ +{ + sqlite3Fts5ParseNodeFree((fts5yypminor->fts5yy18)); +} + break; + case 19: /* nearset */ + case 22: /* nearphrases */ +{ + sqlite3Fts5ParseNearsetFree((fts5yypminor->fts5yy26)); +} + break; + case 20: /* colset */ + case 21: /* colsetlist */ +{ + sqlite3_free((fts5yypminor->fts5yy3)); +} + break; + case 23: /* phrase */ +{ + sqlite3Fts5ParsePhraseFree((fts5yypminor->fts5yy11)); +} + break; +/********* End destructor definitions *****************************************/ + default: break; /* If no destructor action specified: do nothing */ + } +} + +/* +** Pop the parser's stack once. +** +** If there is a destructor routine associated with the token which +** is popped from the stack, then call it. +*/ +static void fts5yy_pop_parser_stack(fts5yyParser *pParser){ + fts5yyStackEntry *fts5yytos; + assert( pParser->fts5yyidx>=0 ); + fts5yytos = &pParser->fts5yystack[pParser->fts5yyidx--]; +#ifndef NDEBUG + if( fts5yyTraceFILE ){ + fprintf(fts5yyTraceFILE,"%sPopping %s\n", + fts5yyTracePrompt, + fts5yyTokenName[fts5yytos->major]); + } +#endif + fts5yy_destructor(pParser, fts5yytos->major, &fts5yytos->minor); +} + +/* +** Deallocate and destroy a parser. Destructors are called for +** all stack elements before shutting the parser down. +** +** If the fts5YYPARSEFREENEVERNULL macro exists (for example because it +** is defined in a %include section of the input grammar) then it is +** assumed that the input pointer is never NULL. +*/ +static void sqlite3Fts5ParserFree( + void *p, /* The parser to be deleted */ + void (*freeProc)(void*) /* Function used to reclaim memory */ +){ + fts5yyParser *pParser = (fts5yyParser*)p; +#ifndef fts5YYPARSEFREENEVERNULL + if( pParser==0 ) return; +#endif + while( pParser->fts5yyidx>=0 ) fts5yy_pop_parser_stack(pParser); +#if fts5YYSTACKDEPTH<=0 + free(pParser->fts5yystack); +#endif + (*freeProc)((void*)pParser); +} + +/* +** Return the peak depth of the stack for a parser. +*/ +#ifdef fts5YYTRACKMAXSTACKDEPTH +static int sqlite3Fts5ParserStackPeak(void *p){ + fts5yyParser *pParser = (fts5yyParser*)p; + return pParser->fts5yyidxMax; +} +#endif + +/* +** Find the appropriate action for a parser given the terminal +** look-ahead token iLookAhead. 
+*/ +static int fts5yy_find_shift_action( + fts5yyParser *pParser, /* The parser */ + fts5YYCODETYPE iLookAhead /* The look-ahead token */ +){ + int i; + int stateno = pParser->fts5yystack[pParser->fts5yyidx].stateno; + + if( stateno>=fts5YY_MIN_REDUCE ) return stateno; + assert( stateno <= fts5YY_SHIFT_COUNT ); + do{ + i = fts5yy_shift_ofst[stateno]; + if( i==fts5YY_SHIFT_USE_DFLT ) return fts5yy_default[stateno]; + assert( iLookAhead!=fts5YYNOCODE ); + i += iLookAhead; + if( i<0 || i>=fts5YY_ACTTAB_COUNT || fts5yy_lookahead[i]!=iLookAhead ){ + if( iLookAhead>0 ){ +#ifdef fts5YYFALLBACK + fts5YYCODETYPE iFallback; /* Fallback token */ + if( iLookAhead<sizeof(fts5yyFallback)/sizeof(fts5yyFallback[0]) + && (iFallback = fts5yyFallback[iLookAhead])!=0 ){ +#ifndef NDEBUG + if( fts5yyTraceFILE ){ + fprintf(fts5yyTraceFILE, "%sFALLBACK %s => %s\n", + fts5yyTracePrompt, fts5yyTokenName[iLookAhead], fts5yyTokenName[iFallback]); + } +#endif + assert( fts5yyFallback[iFallback]==0 ); /* Fallback loop must terminate */ + iLookAhead = iFallback; + continue; + } +#endif +#ifdef fts5YYWILDCARD + { + int j = i - iLookAhead + fts5YYWILDCARD; + if( +#if fts5YY_SHIFT_MIN+fts5YYWILDCARD<0 + j>=0 && +#endif +#if fts5YY_SHIFT_MAX+fts5YYWILDCARD>=fts5YY_ACTTAB_COUNT + j<fts5YY_ACTTAB_COUNT && +#endif + fts5yy_lookahead[j]==fts5YYWILDCARD + ){ +#ifndef NDEBUG + if( fts5yyTraceFILE ){ + fprintf(fts5yyTraceFILE, "%sWILDCARD %s => %s\n", + fts5yyTracePrompt, fts5yyTokenName[iLookAhead], + fts5yyTokenName[fts5YYWILDCARD]); + } +#endif /* NDEBUG */ + return fts5yy_action[j]; + } + } +#endif /* fts5YYWILDCARD */ + } + return fts5yy_default[stateno]; + }else{ + return fts5yy_action[i]; + } + }while(1); +} + +/* +** Find the appropriate action for a parser given the non-terminal +** look-ahead token iLookAhead. +*/ +static int fts5yy_find_reduce_action( + int stateno, /* Current state number */ + fts5YYCODETYPE iLookAhead /* The look-ahead token */ +){ + int i; +#ifdef fts5YYERRORSYMBOL + if( stateno>fts5YY_REDUCE_COUNT ){ + return fts5yy_default[stateno]; + } +#else + assert( stateno<=fts5YY_REDUCE_COUNT ); +#endif + i = fts5yy_reduce_ofst[stateno]; + assert( i!=fts5YY_REDUCE_USE_DFLT ); + assert( iLookAhead!=fts5YYNOCODE ); + i += iLookAhead; +#ifdef fts5YYERRORSYMBOL + if( i<0 || i>=fts5YY_ACTTAB_COUNT || fts5yy_lookahead[i]!=iLookAhead ){ + return fts5yy_default[stateno]; + } +#else + assert( i>=0 && i<fts5YY_ACTTAB_COUNT ); + assert( fts5yy_lookahead[i]==iLookAhead ); +#endif + return fts5yy_action[i]; +} + +/* +** The following routine is called if the stack overflows. 
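+** It pops and destroys every entry remaining on the stack, then runs the
+** %stack_overflow code from the grammar, which for the fts5 grammar simply
+** reports "fts5: parser stack overflow" via sqlite3Fts5ParseError().  The
+** parse is effectively abandoned at that point.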
+*/ +static void fts5yyStackOverflow(fts5yyParser *fts5yypParser, fts5YYMINORTYPE *fts5yypMinor){ + sqlite3Fts5ParserARG_FETCH; + fts5yypParser->fts5yyidx--; +#ifndef NDEBUG + if( fts5yyTraceFILE ){ + fprintf(fts5yyTraceFILE,"%sStack Overflow!\n",fts5yyTracePrompt); + } +#endif + while( fts5yypParser->fts5yyidx>=0 ) fts5yy_pop_parser_stack(fts5yypParser); + /* Here code is inserted which will execute if the parser + ** stack every overflows */ +/******** Begin %stack_overflow code ******************************************/ + + UNUSED_PARAM(fts5yypMinor); /* Silence a compiler warning */ + sqlite3Fts5ParseError(pParse, "fts5: parser stack overflow"); +/******** End %stack_overflow code ********************************************/ + sqlite3Fts5ParserARG_STORE; /* Suppress warning about unused %extra_argument var */ +} + +/* +** Print tracing information for a SHIFT action +*/ +#ifndef NDEBUG +static void fts5yyTraceShift(fts5yyParser *fts5yypParser, int fts5yyNewState){ + if( fts5yyTraceFILE ){ + if( fts5yyNewState<fts5YYNSTATE ){ + fprintf(fts5yyTraceFILE,"%sShift '%s', go to state %d\n", + fts5yyTracePrompt,fts5yyTokenName[fts5yypParser->fts5yystack[fts5yypParser->fts5yyidx].major], + fts5yyNewState); + }else{ + fprintf(fts5yyTraceFILE,"%sShift '%s'\n", + fts5yyTracePrompt,fts5yyTokenName[fts5yypParser->fts5yystack[fts5yypParser->fts5yyidx].major]); + } + } +} +#else +# define fts5yyTraceShift(X,Y) +#endif + +/* +** Perform a shift action. +*/ +static void fts5yy_shift( + fts5yyParser *fts5yypParser, /* The parser to be shifted */ + int fts5yyNewState, /* The new state to shift in */ + int fts5yyMajor, /* The major token to shift in */ + fts5YYMINORTYPE *fts5yypMinor /* Pointer to the minor token to shift in */ +){ + fts5yyStackEntry *fts5yytos; + fts5yypParser->fts5yyidx++; +#ifdef fts5YYTRACKMAXSTACKDEPTH + if( fts5yypParser->fts5yyidx>fts5yypParser->fts5yyidxMax ){ + fts5yypParser->fts5yyidxMax = fts5yypParser->fts5yyidx; + } +#endif +#if fts5YYSTACKDEPTH>0 + if( fts5yypParser->fts5yyidx>=fts5YYSTACKDEPTH ){ + fts5yyStackOverflow(fts5yypParser, fts5yypMinor); + return; + } +#else + if( fts5yypParser->fts5yyidx>=fts5yypParser->fts5yystksz ){ + fts5yyGrowStack(fts5yypParser); + if( fts5yypParser->fts5yyidx>=fts5yypParser->fts5yystksz ){ + fts5yyStackOverflow(fts5yypParser, fts5yypMinor); + return; + } + } +#endif + fts5yytos = &fts5yypParser->fts5yystack[fts5yypParser->fts5yyidx]; + fts5yytos->stateno = (fts5YYACTIONTYPE)fts5yyNewState; + fts5yytos->major = (fts5YYCODETYPE)fts5yyMajor; + fts5yytos->minor = *fts5yypMinor; + fts5yyTraceShift(fts5yypParser, fts5yyNewState); +} + +/* The following table contains information about every rule that +** is used during the reduce. +*/ +static const struct { + fts5YYCODETYPE lhs; /* Symbol on the left-hand side of the rule */ + unsigned char nrhs; /* Number of right-hand side symbols in the rule */ +} fts5yyRuleInfo[] = { + { 15, 1 }, + { 16, 3 }, + { 16, 3 }, + { 16, 3 }, + { 16, 3 }, + { 16, 1 }, + { 18, 1 }, + { 18, 2 }, + { 17, 1 }, + { 17, 3 }, + { 20, 3 }, + { 20, 1 }, + { 21, 2 }, + { 21, 1 }, + { 19, 1 }, + { 19, 5 }, + { 22, 1 }, + { 22, 2 }, + { 24, 0 }, + { 24, 2 }, + { 23, 4 }, + { 23, 2 }, + { 25, 1 }, + { 25, 0 }, +}; + +static void fts5yy_accept(fts5yyParser*); /* Forward Declaration */ + +/* +** Perform a reduce action and the shift that must immediately +** follow the reduce. 
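+**
+** The right-hand-side symbols of the rule are popped from the stack, the
+** user-supplied action code runs to compute the value of the left-hand
+** side, and the left-hand-side nonterminal is then shifted.  If the rule
+** popped at least one stack entry, the vacated slot is reused for that
+** shift so the overflow check in fts5yy_shift() can be skipped.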
+*/ +static void fts5yy_reduce( + fts5yyParser *fts5yypParser, /* The parser */ + int fts5yyruleno /* Number of the rule by which to reduce */ +){ + int fts5yygoto; /* The next state */ + int fts5yyact; /* The next action */ + fts5YYMINORTYPE fts5yygotominor; /* The LHS of the rule reduced */ + fts5yyStackEntry *fts5yymsp; /* The top of the parser's stack */ + int fts5yysize; /* Amount to pop the stack */ + sqlite3Fts5ParserARG_FETCH; + fts5yymsp = &fts5yypParser->fts5yystack[fts5yypParser->fts5yyidx]; +#ifndef NDEBUG + if( fts5yyTraceFILE && fts5yyruleno>=0 + && fts5yyruleno<(int)(sizeof(fts5yyRuleName)/sizeof(fts5yyRuleName[0])) ){ + fts5yysize = fts5yyRuleInfo[fts5yyruleno].nrhs; + fprintf(fts5yyTraceFILE, "%sReduce [%s], go to state %d.\n", fts5yyTracePrompt, + fts5yyRuleName[fts5yyruleno], fts5yymsp[-fts5yysize].stateno); + } +#endif /* NDEBUG */ + fts5yygotominor = fts5yyzerominor; + + switch( fts5yyruleno ){ + /* Beginning here are the reduction cases. A typical example + ** follows: + ** case 0: + ** #line <lineno> <grammarfile> + ** { ... } // User supplied code + ** #line <lineno> <thisfile> + ** break; + */ +/********** Begin reduce actions **********************************************/ + case 0: /* input ::= expr */ +{ sqlite3Fts5ParseFinished(pParse, fts5yymsp[0].minor.fts5yy18); } + break; + case 1: /* expr ::= expr AND expr */ +{ + fts5yygotominor.fts5yy18 = sqlite3Fts5ParseNode(pParse, FTS5_AND, fts5yymsp[-2].minor.fts5yy18, fts5yymsp[0].minor.fts5yy18, 0); +} + break; + case 2: /* expr ::= expr OR expr */ +{ + fts5yygotominor.fts5yy18 = sqlite3Fts5ParseNode(pParse, FTS5_OR, fts5yymsp[-2].minor.fts5yy18, fts5yymsp[0].minor.fts5yy18, 0); +} + break; + case 3: /* expr ::= expr NOT expr */ +{ + fts5yygotominor.fts5yy18 = sqlite3Fts5ParseNode(pParse, FTS5_NOT, fts5yymsp[-2].minor.fts5yy18, fts5yymsp[0].minor.fts5yy18, 0); +} + break; + case 4: /* expr ::= LP expr RP */ +{fts5yygotominor.fts5yy18 = fts5yymsp[-1].minor.fts5yy18;} + break; + case 5: /* expr ::= exprlist */ + case 6: /* exprlist ::= cnearset */ fts5yytestcase(fts5yyruleno==6); +{fts5yygotominor.fts5yy18 = fts5yymsp[0].minor.fts5yy18;} + break; + case 7: /* exprlist ::= exprlist cnearset */ +{ + fts5yygotominor.fts5yy18 = sqlite3Fts5ParseNode(pParse, FTS5_AND, fts5yymsp[-1].minor.fts5yy18, fts5yymsp[0].minor.fts5yy18, 0); +} + break; + case 8: /* cnearset ::= nearset */ +{ + fts5yygotominor.fts5yy18 = sqlite3Fts5ParseNode(pParse, FTS5_STRING, 0, 0, fts5yymsp[0].minor.fts5yy26); +} + break; + case 9: /* cnearset ::= colset COLON nearset */ +{ + sqlite3Fts5ParseSetColset(pParse, fts5yymsp[0].minor.fts5yy26, fts5yymsp[-2].minor.fts5yy3); + fts5yygotominor.fts5yy18 = sqlite3Fts5ParseNode(pParse, FTS5_STRING, 0, 0, fts5yymsp[0].minor.fts5yy26); +} + break; + case 10: /* colset ::= LCP colsetlist RCP */ +{ fts5yygotominor.fts5yy3 = fts5yymsp[-1].minor.fts5yy3; } + break; + case 11: /* colset ::= STRING */ +{ + fts5yygotominor.fts5yy3 = sqlite3Fts5ParseColset(pParse, 0, &fts5yymsp[0].minor.fts5yy0); +} + break; + case 12: /* colsetlist ::= colsetlist STRING */ +{ + fts5yygotominor.fts5yy3 = sqlite3Fts5ParseColset(pParse, fts5yymsp[-1].minor.fts5yy3, &fts5yymsp[0].minor.fts5yy0); } + break; + case 13: /* colsetlist ::= STRING */ +{ + fts5yygotominor.fts5yy3 = sqlite3Fts5ParseColset(pParse, 0, &fts5yymsp[0].minor.fts5yy0); +} + break; + case 14: /* nearset ::= phrase */ +{ fts5yygotominor.fts5yy26 = sqlite3Fts5ParseNearset(pParse, 0, fts5yymsp[0].minor.fts5yy11); } + break; + case 15: /* nearset ::= STRING LP nearphrases 
neardist_opt RP */ +{ + sqlite3Fts5ParseNear(pParse, &fts5yymsp[-4].minor.fts5yy0); + sqlite3Fts5ParseSetDistance(pParse, fts5yymsp[-2].minor.fts5yy26, &fts5yymsp[-1].minor.fts5yy0); + fts5yygotominor.fts5yy26 = fts5yymsp[-2].minor.fts5yy26; +} + break; + case 16: /* nearphrases ::= phrase */ +{ + fts5yygotominor.fts5yy26 = sqlite3Fts5ParseNearset(pParse, 0, fts5yymsp[0].minor.fts5yy11); +} + break; + case 17: /* nearphrases ::= nearphrases phrase */ +{ + fts5yygotominor.fts5yy26 = sqlite3Fts5ParseNearset(pParse, fts5yymsp[-1].minor.fts5yy26, fts5yymsp[0].minor.fts5yy11); +} + break; + case 18: /* neardist_opt ::= */ +{ fts5yygotominor.fts5yy0.p = 0; fts5yygotominor.fts5yy0.n = 0; } + break; + case 19: /* neardist_opt ::= COMMA STRING */ +{ fts5yygotominor.fts5yy0 = fts5yymsp[0].minor.fts5yy0; } + break; + case 20: /* phrase ::= phrase PLUS STRING star_opt */ +{ + fts5yygotominor.fts5yy11 = sqlite3Fts5ParseTerm(pParse, fts5yymsp[-3].minor.fts5yy11, &fts5yymsp[-1].minor.fts5yy0, fts5yymsp[0].minor.fts5yy20); +} + break; + case 21: /* phrase ::= STRING star_opt */ +{ + fts5yygotominor.fts5yy11 = sqlite3Fts5ParseTerm(pParse, 0, &fts5yymsp[-1].minor.fts5yy0, fts5yymsp[0].minor.fts5yy20); +} + break; + case 22: /* star_opt ::= STAR */ +{ fts5yygotominor.fts5yy20 = 1; } + break; + case 23: /* star_opt ::= */ +{ fts5yygotominor.fts5yy20 = 0; } + break; + default: + break; +/********** End reduce actions ************************************************/ + }; + assert( fts5yyruleno>=0 && fts5yyruleno<sizeof(fts5yyRuleInfo)/sizeof(fts5yyRuleInfo[0]) ); + fts5yygoto = fts5yyRuleInfo[fts5yyruleno].lhs; + fts5yysize = fts5yyRuleInfo[fts5yyruleno].nrhs; + fts5yypParser->fts5yyidx -= fts5yysize; + fts5yyact = fts5yy_find_reduce_action(fts5yymsp[-fts5yysize].stateno,(fts5YYCODETYPE)fts5yygoto); + if( fts5yyact <= fts5YY_MAX_SHIFTREDUCE ){ + if( fts5yyact>fts5YY_MAX_SHIFT ) fts5yyact += fts5YY_MIN_REDUCE - fts5YY_MIN_SHIFTREDUCE; + /* If the reduce action popped at least + ** one element off the stack, then we can push the new element back + ** onto the stack here, and skip the stack overflow test in fts5yy_shift(). + ** That gives a significant speed improvement. */ + if( fts5yysize ){ + fts5yypParser->fts5yyidx++; + fts5yymsp -= fts5yysize-1; + fts5yymsp->stateno = (fts5YYACTIONTYPE)fts5yyact; + fts5yymsp->major = (fts5YYCODETYPE)fts5yygoto; + fts5yymsp->minor = fts5yygotominor; + fts5yyTraceShift(fts5yypParser, fts5yyact); + }else{ + fts5yy_shift(fts5yypParser,fts5yyact,fts5yygoto,&fts5yygotominor); + } + }else{ + assert( fts5yyact == fts5YY_ACCEPT_ACTION ); + fts5yy_accept(fts5yypParser); + } +} + +/* +** The following code executes when the parse fails +*/ +#ifndef fts5YYNOERRORRECOVERY +static void fts5yy_parse_failed( + fts5yyParser *fts5yypParser /* The parser */ +){ + sqlite3Fts5ParserARG_FETCH; +#ifndef NDEBUG + if( fts5yyTraceFILE ){ + fprintf(fts5yyTraceFILE,"%sFail!\n",fts5yyTracePrompt); + } +#endif + while( fts5yypParser->fts5yyidx>=0 ) fts5yy_pop_parser_stack(fts5yypParser); + /* Here code is inserted which will be executed whenever the + ** parser fails */ +/************ Begin %parse_failure code ***************************************/ +/************ End %parse_failure code *****************************************/ + sqlite3Fts5ParserARG_STORE; /* Suppress warning about unused %extra_argument variable */ +} +#endif /* fts5YYNOERRORRECOVERY */ + +/* +** The following code executes when a syntax error first occurs. 
+*/ +static void fts5yy_syntax_error( + fts5yyParser *fts5yypParser, /* The parser */ + int fts5yymajor, /* The major type of the error token */ + fts5YYMINORTYPE fts5yyminor /* The minor type of the error token */ +){ + sqlite3Fts5ParserARG_FETCH; +#define FTS5TOKEN (fts5yyminor.fts5yy0) +/************ Begin %syntax_error code ****************************************/ + + UNUSED_PARAM(fts5yymajor); /* Silence a compiler warning */ + sqlite3Fts5ParseError( + pParse, "fts5: syntax error near \"%.*s\"",FTS5TOKEN.n,FTS5TOKEN.p + ); +/************ End %syntax_error code ******************************************/ + sqlite3Fts5ParserARG_STORE; /* Suppress warning about unused %extra_argument variable */ +} + +/* +** The following is executed when the parser accepts +*/ +static void fts5yy_accept( + fts5yyParser *fts5yypParser /* The parser */ +){ + sqlite3Fts5ParserARG_FETCH; +#ifndef NDEBUG + if( fts5yyTraceFILE ){ + fprintf(fts5yyTraceFILE,"%sAccept!\n",fts5yyTracePrompt); + } +#endif + while( fts5yypParser->fts5yyidx>=0 ) fts5yy_pop_parser_stack(fts5yypParser); + /* Here code is inserted which will be executed whenever the + ** parser accepts */ +/*********** Begin %parse_accept code *****************************************/ +/*********** End %parse_accept code *******************************************/ + sqlite3Fts5ParserARG_STORE; /* Suppress warning about unused %extra_argument variable */ +} + +/* The main parser program. +** The first argument is a pointer to a structure obtained from +** "sqlite3Fts5ParserAlloc" which describes the current state of the parser. +** The second argument is the major token number. The third is +** the minor token. The fourth optional argument is whatever the +** user wants (and specified in the grammar) and is available for +** use by the action routines. +** +** Inputs: +** <ul> +** <li> A pointer to the parser (an opaque structure.) +** <li> The major token number. +** <li> The minor token number. +** <li> An option argument of a grammar-specified type. +** </ul> +** +** Outputs: +** None. +*/ +static void sqlite3Fts5Parser( + void *fts5yyp, /* The parser */ + int fts5yymajor, /* The major token code number */ + sqlite3Fts5ParserFTS5TOKENTYPE fts5yyminor /* The value for the token */ + sqlite3Fts5ParserARG_PDECL /* Optional %extra_argument parameter */ +){ + fts5YYMINORTYPE fts5yyminorunion; + int fts5yyact; /* The parser action. */ +#if !defined(fts5YYERRORSYMBOL) && !defined(fts5YYNOERRORRECOVERY) + int fts5yyendofinput; /* True if we are at the end of input */ +#endif +#ifdef fts5YYERRORSYMBOL + int fts5yyerrorhit = 0; /* True if fts5yymajor has invoked an error */ +#endif + fts5yyParser *fts5yypParser; /* The parser */ + + /* (re)initialize the parser, if necessary */ + fts5yypParser = (fts5yyParser*)fts5yyp; + if( fts5yypParser->fts5yyidx<0 ){ +#if fts5YYSTACKDEPTH<=0 + if( fts5yypParser->fts5yystksz <=0 ){ + /*memset(&fts5yyminorunion, 0, sizeof(fts5yyminorunion));*/ + fts5yyminorunion = fts5yyzerominor; + fts5yyStackOverflow(fts5yypParser, &fts5yyminorunion); + return; + } +#endif + fts5yypParser->fts5yyidx = 0; + fts5yypParser->fts5yyerrcnt = -1; + fts5yypParser->fts5yystack[0].stateno = 0; + fts5yypParser->fts5yystack[0].major = 0; +#ifndef NDEBUG + if( fts5yyTraceFILE ){ + fprintf(fts5yyTraceFILE,"%sInitialize. Empty stack. 
State 0\n", + fts5yyTracePrompt); + } +#endif + } + fts5yyminorunion.fts5yy0 = fts5yyminor; +#if !defined(fts5YYERRORSYMBOL) && !defined(fts5YYNOERRORRECOVERY) + fts5yyendofinput = (fts5yymajor==0); +#endif + sqlite3Fts5ParserARG_STORE; + +#ifndef NDEBUG + if( fts5yyTraceFILE ){ + fprintf(fts5yyTraceFILE,"%sInput '%s'\n",fts5yyTracePrompt,fts5yyTokenName[fts5yymajor]); + } +#endif + + do{ + fts5yyact = fts5yy_find_shift_action(fts5yypParser,(fts5YYCODETYPE)fts5yymajor); + if( fts5yyact <= fts5YY_MAX_SHIFTREDUCE ){ + if( fts5yyact > fts5YY_MAX_SHIFT ) fts5yyact += fts5YY_MIN_REDUCE - fts5YY_MIN_SHIFTREDUCE; + fts5yy_shift(fts5yypParser,fts5yyact,fts5yymajor,&fts5yyminorunion); + fts5yypParser->fts5yyerrcnt--; + fts5yymajor = fts5YYNOCODE; + }else if( fts5yyact <= fts5YY_MAX_REDUCE ){ + fts5yy_reduce(fts5yypParser,fts5yyact-fts5YY_MIN_REDUCE); + }else{ + assert( fts5yyact == fts5YY_ERROR_ACTION ); +#ifdef fts5YYERRORSYMBOL + int fts5yymx; +#endif +#ifndef NDEBUG + if( fts5yyTraceFILE ){ + fprintf(fts5yyTraceFILE,"%sSyntax Error!\n",fts5yyTracePrompt); + } +#endif +#ifdef fts5YYERRORSYMBOL + /* A syntax error has occurred. + ** The response to an error depends upon whether or not the + ** grammar defines an error token "ERROR". + ** + ** This is what we do if the grammar does define ERROR: + ** + ** * Call the %syntax_error function. + ** + ** * Begin popping the stack until we enter a state where + ** it is legal to shift the error symbol, then shift + ** the error symbol. + ** + ** * Set the error count to three. + ** + ** * Begin accepting and shifting new tokens. No new error + ** processing will occur until three tokens have been + ** shifted successfully. + ** + */ + if( fts5yypParser->fts5yyerrcnt<0 ){ + fts5yy_syntax_error(fts5yypParser,fts5yymajor,fts5yyminorunion); + } + fts5yymx = fts5yypParser->fts5yystack[fts5yypParser->fts5yyidx].major; + if( fts5yymx==fts5YYERRORSYMBOL || fts5yyerrorhit ){ +#ifndef NDEBUG + if( fts5yyTraceFILE ){ + fprintf(fts5yyTraceFILE,"%sDiscard input token %s\n", + fts5yyTracePrompt,fts5yyTokenName[fts5yymajor]); + } +#endif + fts5yy_destructor(fts5yypParser, (fts5YYCODETYPE)fts5yymajor,&fts5yyminorunion); + fts5yymajor = fts5YYNOCODE; + }else{ + while( + fts5yypParser->fts5yyidx >= 0 && + fts5yymx != fts5YYERRORSYMBOL && + (fts5yyact = fts5yy_find_reduce_action( + fts5yypParser->fts5yystack[fts5yypParser->fts5yyidx].stateno, + fts5YYERRORSYMBOL)) >= fts5YY_MIN_REDUCE + ){ + fts5yy_pop_parser_stack(fts5yypParser); + } + if( fts5yypParser->fts5yyidx < 0 || fts5yymajor==0 ){ + fts5yy_destructor(fts5yypParser,(fts5YYCODETYPE)fts5yymajor,&fts5yyminorunion); + fts5yy_parse_failed(fts5yypParser); + fts5yymajor = fts5YYNOCODE; + }else if( fts5yymx!=fts5YYERRORSYMBOL ){ + fts5YYMINORTYPE u2; + u2.fts5YYERRSYMDT = 0; + fts5yy_shift(fts5yypParser,fts5yyact,fts5YYERRORSYMBOL,&u2); + } + } + fts5yypParser->fts5yyerrcnt = 3; + fts5yyerrorhit = 1; +#elif defined(fts5YYNOERRORRECOVERY) + /* If the fts5YYNOERRORRECOVERY macro is defined, then do not attempt to + ** do any kind of error recovery. Instead, simply invoke the syntax + ** error routine and continue going as if nothing had happened. + ** + ** Applications can set this macro (for example inside %include) if + ** they intend to abandon the parse upon the first syntax error seen. 
+ */ + fts5yy_syntax_error(fts5yypParser,fts5yymajor,fts5yyminorunion); + fts5yy_destructor(fts5yypParser,(fts5YYCODETYPE)fts5yymajor,&fts5yyminorunion); + fts5yymajor = fts5YYNOCODE; + +#else /* fts5YYERRORSYMBOL is not defined */ + /* This is what we do if the grammar does not define ERROR: + ** + ** * Report an error message, and throw away the input token. + ** + ** * If the input token is $, then fail the parse. + ** + ** As before, subsequent error messages are suppressed until + ** three input tokens have been successfully shifted. + */ + if( fts5yypParser->fts5yyerrcnt<=0 ){ + fts5yy_syntax_error(fts5yypParser,fts5yymajor,fts5yyminorunion); + } + fts5yypParser->fts5yyerrcnt = 3; + fts5yy_destructor(fts5yypParser,(fts5YYCODETYPE)fts5yymajor,&fts5yyminorunion); + if( fts5yyendofinput ){ + fts5yy_parse_failed(fts5yypParser); + } + fts5yymajor = fts5YYNOCODE; +#endif + } + }while( fts5yymajor!=fts5YYNOCODE && fts5yypParser->fts5yyidx>=0 ); +#ifndef NDEBUG + if( fts5yyTraceFILE ){ + int i; + fprintf(fts5yyTraceFILE,"%sReturn. Stack=",fts5yyTracePrompt); + for(i=1; i<=fts5yypParser->fts5yyidx; i++) + fprintf(fts5yyTraceFILE,"%c%s", i==1 ? '[' : ' ', + fts5yyTokenName[fts5yypParser->fts5yystack[i].major]); + fprintf(fts5yyTraceFILE,"]\n"); + } +#endif + return; +} + +/* +** 2014 May 31 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +*/ + + +/* #include "fts5Int.h" */ +#include <math.h> /* amalgamator: keep */ + +/* +** Object used to iterate through all "coalesced phrase instances" in +** a single column of the current row. If the phrase instances in the +** column being considered do not overlap, this object simply iterates +** through them. Or, if they do overlap (share one or more tokens in +** common), each set of overlapping instances is treated as a single +** match. See documentation for the highlight() auxiliary function for +** details. +** +** Usage is: +** +** for(rc = fts5CInstIterNext(pApi, pFts, iCol, &iter); +** (rc==SQLITE_OK && 0==fts5CInstIterEof(&iter); +** rc = fts5CInstIterNext(&iter) +** ){ +** printf("instance starts at %d, ends at %d\n", iter.iStart, iter.iEnd); +** } +** +*/ +typedef struct CInstIter CInstIter; +struct CInstIter { + const Fts5ExtensionApi *pApi; /* API offered by current FTS version */ + Fts5Context *pFts; /* First arg to pass to pApi functions */ + int iCol; /* Column to search */ + int iInst; /* Next phrase instance index */ + int nInst; /* Total number of phrase instances */ + + /* Output variables */ + int iStart; /* First token in coalesced phrase instance */ + int iEnd; /* Last token in coalesced phrase instance */ +}; + +/* +** Advance the iterator to the next coalesced phrase instance. Return +** an SQLite error code if an error occurs, or SQLITE_OK otherwise. 
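+**
+** If the iterator is successfully advanced, pIter->iStart and pIter->iEnd
+** are set to the first and last token offsets of the coalesced instance.
+** If there are no further instances in the column, both are set to -1.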
+*/ +static int fts5CInstIterNext(CInstIter *pIter){ + int rc = SQLITE_OK; + pIter->iStart = -1; + pIter->iEnd = -1; + + while( rc==SQLITE_OK && pIter->iInst<pIter->nInst ){ + int ip; int ic; int io; + rc = pIter->pApi->xInst(pIter->pFts, pIter->iInst, &ip, &ic, &io); + if( rc==SQLITE_OK ){ + if( ic==pIter->iCol ){ + int iEnd = io - 1 + pIter->pApi->xPhraseSize(pIter->pFts, ip); + if( pIter->iStart<0 ){ + pIter->iStart = io; + pIter->iEnd = iEnd; + }else if( io<=pIter->iEnd ){ + if( iEnd>pIter->iEnd ) pIter->iEnd = iEnd; + }else{ + break; + } + } + pIter->iInst++; + } + } + + return rc; +} + +/* +** Initialize the iterator object indicated by the final parameter to +** iterate through coalesced phrase instances in column iCol. +*/ +static int fts5CInstIterInit( + const Fts5ExtensionApi *pApi, + Fts5Context *pFts, + int iCol, + CInstIter *pIter +){ + int rc; + + memset(pIter, 0, sizeof(CInstIter)); + pIter->pApi = pApi; + pIter->pFts = pFts; + pIter->iCol = iCol; + rc = pApi->xInstCount(pFts, &pIter->nInst); + + if( rc==SQLITE_OK ){ + rc = fts5CInstIterNext(pIter); + } + + return rc; +} + + + +/************************************************************************* +** Start of highlight() implementation. +*/ +typedef struct HighlightContext HighlightContext; +struct HighlightContext { + CInstIter iter; /* Coalesced Instance Iterator */ + int iPos; /* Current token offset in zIn[] */ + int iRangeStart; /* First token to include */ + int iRangeEnd; /* If non-zero, last token to include */ + const char *zOpen; /* Opening highlight */ + const char *zClose; /* Closing highlight */ + const char *zIn; /* Input text */ + int nIn; /* Size of input text in bytes */ + int iOff; /* Current offset within zIn[] */ + char *zOut; /* Output value */ +}; + +/* +** Append text to the HighlightContext output string - p->zOut. Argument +** z points to a buffer containing n bytes of text to append. If n is +** negative, everything up until the first '\0' is appended to the output. +** +** If *pRc is set to any value other than SQLITE_OK when this function is +** called, it is a no-op. If an error (i.e. an OOM condition) is encountered, +** *pRc is set to an error code before returning. +*/ +static void fts5HighlightAppend( + int *pRc, + HighlightContext *p, + const char *z, int n +){ + if( *pRc==SQLITE_OK ){ + if( n<0 ) n = (int)strlen(z); + p->zOut = sqlite3_mprintf("%z%.*s", p->zOut, n, z); + if( p->zOut==0 ) *pRc = SQLITE_NOMEM; + } +} + +/* +** Tokenizer callback used by implementation of highlight() function. 
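+** The callback copies the column text to the output buffer as it is
+** tokenized: text up to the start of each coalesced phrase instance is
+** appended, followed by the zOpen marker, and when the instance ends the
+** matched text and the zClose marker are appended.  Tokens falling outside
+** the optional [iRangeStart..iRangeEnd] window (used by snippet()) are
+** ignored.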
+*/ +static int fts5HighlightCb( + void *pContext, /* Pointer to HighlightContext object */ + int tflags, /* Mask of FTS5_TOKEN_* flags */ + const char *pToken, /* Buffer containing token */ + int nToken, /* Size of token in bytes */ + int iStartOff, /* Start offset of token */ + int iEndOff /* End offset of token */ +){ + HighlightContext *p = (HighlightContext*)pContext; + int rc = SQLITE_OK; + int iPos; + + UNUSED_PARAM2(pToken, nToken); + + if( tflags & FTS5_TOKEN_COLOCATED ) return SQLITE_OK; + iPos = p->iPos++; + + if( p->iRangeEnd>0 ){ + if( iPos<p->iRangeStart || iPos>p->iRangeEnd ) return SQLITE_OK; + if( p->iRangeStart && iPos==p->iRangeStart ) p->iOff = iStartOff; + } + + if( iPos==p->iter.iStart ){ + fts5HighlightAppend(&rc, p, &p->zIn[p->iOff], iStartOff - p->iOff); + fts5HighlightAppend(&rc, p, p->zOpen, -1); + p->iOff = iStartOff; + } + + if( iPos==p->iter.iEnd ){ + if( p->iRangeEnd && p->iter.iStart<p->iRangeStart ){ + fts5HighlightAppend(&rc, p, p->zOpen, -1); + } + fts5HighlightAppend(&rc, p, &p->zIn[p->iOff], iEndOff - p->iOff); + fts5HighlightAppend(&rc, p, p->zClose, -1); + p->iOff = iEndOff; + if( rc==SQLITE_OK ){ + rc = fts5CInstIterNext(&p->iter); + } + } + + if( p->iRangeEnd>0 && iPos==p->iRangeEnd ){ + fts5HighlightAppend(&rc, p, &p->zIn[p->iOff], iEndOff - p->iOff); + p->iOff = iEndOff; + if( iPos<p->iter.iEnd ){ + fts5HighlightAppend(&rc, p, p->zClose, -1); + } + } + + return rc; +} + +/* +** Implementation of highlight() function. +*/ +static void fts5HighlightFunction( + const Fts5ExtensionApi *pApi, /* API offered by current FTS version */ + Fts5Context *pFts, /* First arg to pass to pApi functions */ + sqlite3_context *pCtx, /* Context for returning result/error */ + int nVal, /* Number of values in apVal[] array */ + sqlite3_value **apVal /* Array of trailing arguments */ +){ + HighlightContext ctx; + int rc; + int iCol; + + if( nVal!=3 ){ + const char *zErr = "wrong number of arguments to function highlight()"; + sqlite3_result_error(pCtx, zErr, -1); + return; + } + + iCol = sqlite3_value_int(apVal[0]); + memset(&ctx, 0, sizeof(HighlightContext)); + ctx.zOpen = (const char*)sqlite3_value_text(apVal[1]); + ctx.zClose = (const char*)sqlite3_value_text(apVal[2]); + rc = pApi->xColumnText(pFts, iCol, &ctx.zIn, &ctx.nIn); + + if( ctx.zIn ){ + if( rc==SQLITE_OK ){ + rc = fts5CInstIterInit(pApi, pFts, iCol, &ctx.iter); + } + + if( rc==SQLITE_OK ){ + rc = pApi->xTokenize(pFts, ctx.zIn, ctx.nIn, (void*)&ctx,fts5HighlightCb); + } + fts5HighlightAppend(&rc, &ctx, &ctx.zIn[ctx.iOff], ctx.nIn - ctx.iOff); + + if( rc==SQLITE_OK ){ + sqlite3_result_text(pCtx, (const char*)ctx.zOut, -1, SQLITE_TRANSIENT); + } + sqlite3_free(ctx.zOut); + } + if( rc!=SQLITE_OK ){ + sqlite3_result_error_code(pCtx, rc); + } +} +/* +** End of highlight() implementation. +**************************************************************************/ + +/* +** Implementation of snippet() function. 
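+**
+** The trailing SQL arguments are, in order: the index of the column to
+** extract the snippet from (or a negative value meaning any column), the
+** text to insert before and after each phrase match, the ellipsis text,
+** and the maximum number of tokens in the snippet.  A typical invocation
+** might look like the following, where "ft" stands in for the name of the
+** fts5 table:
+**
+**   SELECT snippet(ft, -1, '<b>', '</b>', '...', 10) FROM ft WHERE ft MATCH ?;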
+*/ +static void fts5SnippetFunction( + const Fts5ExtensionApi *pApi, /* API offered by current FTS version */ + Fts5Context *pFts, /* First arg to pass to pApi functions */ + sqlite3_context *pCtx, /* Context for returning result/error */ + int nVal, /* Number of values in apVal[] array */ + sqlite3_value **apVal /* Array of trailing arguments */ +){ + HighlightContext ctx; + int rc = SQLITE_OK; /* Return code */ + int iCol; /* 1st argument to snippet() */ + const char *zEllips; /* 4th argument to snippet() */ + int nToken; /* 5th argument to snippet() */ + int nInst = 0; /* Number of instance matches this row */ + int i; /* Used to iterate through instances */ + int nPhrase; /* Number of phrases in query */ + unsigned char *aSeen; /* Array of "seen instance" flags */ + int iBestCol; /* Column containing best snippet */ + int iBestStart = 0; /* First token of best snippet */ + int iBestLast; /* Last token of best snippet */ + int nBestScore = 0; /* Score of best snippet */ + int nColSize = 0; /* Total size of iBestCol in tokens */ + + if( nVal!=5 ){ + const char *zErr = "wrong number of arguments to function snippet()"; + sqlite3_result_error(pCtx, zErr, -1); + return; + } + + memset(&ctx, 0, sizeof(HighlightContext)); + iCol = sqlite3_value_int(apVal[0]); + ctx.zOpen = (const char*)sqlite3_value_text(apVal[1]); + ctx.zClose = (const char*)sqlite3_value_text(apVal[2]); + zEllips = (const char*)sqlite3_value_text(apVal[3]); + nToken = sqlite3_value_int(apVal[4]); + iBestLast = nToken-1; + + iBestCol = (iCol>=0 ? iCol : 0); + nPhrase = pApi->xPhraseCount(pFts); + aSeen = sqlite3_malloc(nPhrase); + if( aSeen==0 ){ + rc = SQLITE_NOMEM; + } + + if( rc==SQLITE_OK ){ + rc = pApi->xInstCount(pFts, &nInst); + } + for(i=0; rc==SQLITE_OK && i<nInst; i++){ + int ip, iSnippetCol, iStart; + memset(aSeen, 0, nPhrase); + rc = pApi->xInst(pFts, i, &ip, &iSnippetCol, &iStart); + if( rc==SQLITE_OK && (iCol<0 || iSnippetCol==iCol) ){ + int nScore = 1000; + int iLast = iStart - 1 + pApi->xPhraseSize(pFts, ip); + int j; + aSeen[ip] = 1; + + for(j=i+1; rc==SQLITE_OK && j<nInst; j++){ + int ic; int io; int iFinal; + rc = pApi->xInst(pFts, j, &ip, &ic, &io); + iFinal = io + pApi->xPhraseSize(pFts, ip) - 1; + if( rc==SQLITE_OK && ic==iSnippetCol && iLast<iStart+nToken ){ + nScore += aSeen[ip] ? 
1000 : 1; + aSeen[ip] = 1; + if( iFinal>iLast ) iLast = iFinal; + } + } + + if( rc==SQLITE_OK && nScore>nBestScore ){ + iBestCol = iSnippetCol; + iBestStart = iStart; + iBestLast = iLast; + nBestScore = nScore; + } + } + } + + if( rc==SQLITE_OK ){ + rc = pApi->xColumnSize(pFts, iBestCol, &nColSize); + } + if( rc==SQLITE_OK ){ + rc = pApi->xColumnText(pFts, iBestCol, &ctx.zIn, &ctx.nIn); + } + if( ctx.zIn ){ + if( rc==SQLITE_OK ){ + rc = fts5CInstIterInit(pApi, pFts, iBestCol, &ctx.iter); + } + + if( (iBestStart+nToken-1)>iBestLast ){ + iBestStart -= (iBestStart+nToken-1-iBestLast) / 2; + } + if( iBestStart+nToken>nColSize ){ + iBestStart = nColSize - nToken; + } + if( iBestStart<0 ) iBestStart = 0; + + ctx.iRangeStart = iBestStart; + ctx.iRangeEnd = iBestStart + nToken - 1; + + if( iBestStart>0 ){ + fts5HighlightAppend(&rc, &ctx, zEllips, -1); + } + if( rc==SQLITE_OK ){ + rc = pApi->xTokenize(pFts, ctx.zIn, ctx.nIn, (void*)&ctx,fts5HighlightCb); + } + if( ctx.iRangeEnd>=(nColSize-1) ){ + fts5HighlightAppend(&rc, &ctx, &ctx.zIn[ctx.iOff], ctx.nIn - ctx.iOff); + }else{ + fts5HighlightAppend(&rc, &ctx, zEllips, -1); + } + + if( rc==SQLITE_OK ){ + sqlite3_result_text(pCtx, (const char*)ctx.zOut, -1, SQLITE_TRANSIENT); + }else{ + sqlite3_result_error_code(pCtx, rc); + } + sqlite3_free(ctx.zOut); + } + sqlite3_free(aSeen); +} + +/************************************************************************/ + +/* +** The first time the bm25() function is called for a query, an instance +** of the following structure is allocated and populated. +*/ +typedef struct Fts5Bm25Data Fts5Bm25Data; +struct Fts5Bm25Data { + int nPhrase; /* Number of phrases in query */ + double avgdl; /* Average number of tokens in each row */ + double *aIDF; /* IDF for each phrase */ + double *aFreq; /* Array used to calculate phrase freq. */ +}; + +/* +** Callback used by fts5Bm25GetData() to count the number of rows in the +** table matched by each individual phrase within the query. +*/ +static int fts5CountCb( + const Fts5ExtensionApi *pApi, + Fts5Context *pFts, + void *pUserData /* Pointer to sqlite3_int64 variable */ +){ + sqlite3_int64 *pn = (sqlite3_int64*)pUserData; + UNUSED_PARAM2(pApi, pFts); + (*pn)++; + return SQLITE_OK; +} + +/* +** Set *ppData to point to the Fts5Bm25Data object for the current query. +** If the object has not already been allocated, allocate and populate it +** now. 
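+**
+** Populating the object involves counting, for each phrase in the query,
+** the number of rows in the table that match it, then computing
+** IDF = log( (N - nHit + 0.5) / (nHit + 0.5) ) for that phrase.  As an
+** illustrative example, with N=1000 rows of which nHit=10 contain the
+** phrase, the IDF is log(990.5/10.5), or roughly 4.55 (natural log).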
+*/ +static int fts5Bm25GetData( + const Fts5ExtensionApi *pApi, + Fts5Context *pFts, + Fts5Bm25Data **ppData /* OUT: bm25-data object for this query */ +){ + int rc = SQLITE_OK; /* Return code */ + Fts5Bm25Data *p; /* Object to return */ + + p = pApi->xGetAuxdata(pFts, 0); + if( p==0 ){ + int nPhrase; /* Number of phrases in query */ + sqlite3_int64 nRow = 0; /* Number of rows in table */ + sqlite3_int64 nToken = 0; /* Number of tokens in table */ + int nByte; /* Bytes of space to allocate */ + int i; + + /* Allocate the Fts5Bm25Data object */ + nPhrase = pApi->xPhraseCount(pFts); + nByte = sizeof(Fts5Bm25Data) + nPhrase*2*sizeof(double); + p = (Fts5Bm25Data*)sqlite3_malloc(nByte); + if( p==0 ){ + rc = SQLITE_NOMEM; + }else{ + memset(p, 0, nByte); + p->nPhrase = nPhrase; + p->aIDF = (double*)&p[1]; + p->aFreq = &p->aIDF[nPhrase]; + } + + /* Calculate the average document length for this FTS5 table */ + if( rc==SQLITE_OK ) rc = pApi->xRowCount(pFts, &nRow); + if( rc==SQLITE_OK ) rc = pApi->xColumnTotalSize(pFts, -1, &nToken); + if( rc==SQLITE_OK ) p->avgdl = (double)nToken / (double)nRow; + + /* Calculate an IDF for each phrase in the query */ + for(i=0; rc==SQLITE_OK && i<nPhrase; i++){ + sqlite3_int64 nHit = 0; + rc = pApi->xQueryPhrase(pFts, i, (void*)&nHit, fts5CountCb); + if( rc==SQLITE_OK ){ + /* Calculate the IDF (Inverse Document Frequency) for phrase i. + ** This is done using the standard BM25 formula as found on wikipedia: + ** + ** IDF = log( (N - nHit + 0.5) / (nHit + 0.5) ) + ** + ** where "N" is the total number of documents in the set and nHit + ** is the number that contain at least one instance of the phrase + ** under consideration. + ** + ** The problem with this is that if (N < 2*nHit), the IDF is + ** negative. Which is undesirable. So the mimimum allowable IDF is + ** (1e-6) - roughly the same as a term that appears in just over + ** half of set of 5,000,000 documents. */ + double idf = log( (nRow - nHit + 0.5) / (nHit + 0.5) ); + if( idf<=0.0 ) idf = 1e-6; + p->aIDF[i] = idf; + } + } + + if( rc!=SQLITE_OK ){ + sqlite3_free(p); + }else{ + rc = pApi->xSetAuxdata(pFts, p, sqlite3_free); + } + if( rc!=SQLITE_OK ) p = 0; + } + *ppData = p; + return rc; +} + +/* +** Implementation of bm25() function. +*/ +static void fts5Bm25Function( + const Fts5ExtensionApi *pApi, /* API offered by current FTS version */ + Fts5Context *pFts, /* First arg to pass to pApi functions */ + sqlite3_context *pCtx, /* Context for returning result/error */ + int nVal, /* Number of values in apVal[] array */ + sqlite3_value **apVal /* Array of trailing arguments */ +){ + const double k1 = 1.2; /* Constant "k1" from BM25 formula */ + const double b = 0.75; /* Constant "b" from BM25 formula */ + int rc = SQLITE_OK; /* Error code */ + double score = 0.0; /* SQL function return value */ + Fts5Bm25Data *pData; /* Values allocated/calculated once only */ + int i; /* Iterator variable */ + int nInst = 0; /* Value returned by xInstCount() */ + double D = 0.0; /* Total number of tokens in row */ + double *aFreq = 0; /* Array of phrase freq. for current row */ + + /* Calculate the phrase frequency (symbol "f(qi,D)" in the documentation) + ** for each phrase in the query for the current row. 
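+  ** Each instance of phrase i found in column c adds the weight passed as
+  ** the c'th trailing argument to bm25() (or 1.0 if no such argument was
+  ** supplied) to aFreq[i].  The score computed further down is then,
+  ** informally:
+  **
+  **   score = sum over phrases i of
+  **     IDF(i) * f(qi,D)*(k1+1) / ( f(qi,D) + k1*(1 - b + b*D/avgdl) )
+  **
+  ** and the value returned to SQL is -score, so that smaller (better) rank
+  ** values sort first in an ORDER BY rank query.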
*/ + rc = fts5Bm25GetData(pApi, pFts, &pData); + if( rc==SQLITE_OK ){ + aFreq = pData->aFreq; + memset(aFreq, 0, sizeof(double) * pData->nPhrase); + rc = pApi->xInstCount(pFts, &nInst); + } + for(i=0; rc==SQLITE_OK && i<nInst; i++){ + int ip; int ic; int io; + rc = pApi->xInst(pFts, i, &ip, &ic, &io); + if( rc==SQLITE_OK ){ + double w = (nVal > ic) ? sqlite3_value_double(apVal[ic]) : 1.0; + aFreq[ip] += w; + } + } + + /* Figure out the total size of the current row in tokens. */ + if( rc==SQLITE_OK ){ + int nTok; + rc = pApi->xColumnSize(pFts, -1, &nTok); + D = (double)nTok; + } + + /* Determine the BM25 score for the current row. */ + for(i=0; rc==SQLITE_OK && i<pData->nPhrase; i++){ + score += pData->aIDF[i] * ( + ( aFreq[i] * (k1 + 1.0) ) / + ( aFreq[i] + k1 * (1 - b + b * D / pData->avgdl) ) + ); + } + + /* If no error has occurred, return the calculated score. Otherwise, + ** throw an SQL exception. */ + if( rc==SQLITE_OK ){ + sqlite3_result_double(pCtx, -1.0 * score); + }else{ + sqlite3_result_error_code(pCtx, rc); + } +} + +static int sqlite3Fts5AuxInit(fts5_api *pApi){ + struct Builtin { + const char *zFunc; /* Function name (nul-terminated) */ + void *pUserData; /* User-data pointer */ + fts5_extension_function xFunc;/* Callback function */ + void (*xDestroy)(void*); /* Destructor function */ + } aBuiltin [] = { + { "snippet", 0, fts5SnippetFunction, 0 }, + { "highlight", 0, fts5HighlightFunction, 0 }, + { "bm25", 0, fts5Bm25Function, 0 }, + }; + int rc = SQLITE_OK; /* Return code */ + int i; /* To iterate through builtin functions */ + + for(i=0; rc==SQLITE_OK && i<ArraySize(aBuiltin); i++){ + rc = pApi->xCreateFunction(pApi, + aBuiltin[i].zFunc, + aBuiltin[i].pUserData, + aBuiltin[i].xFunc, + aBuiltin[i].xDestroy + ); + } + + return rc; +} + + + +/* +** 2014 May 31 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +*/ + + + +/* #include "fts5Int.h" */ + +static int sqlite3Fts5BufferSize(int *pRc, Fts5Buffer *pBuf, u32 nByte){ + if( (u32)pBuf->nSpace<nByte ){ + u32 nNew = pBuf->nSpace ? pBuf->nSpace : 64; + u8 *pNew; + while( nNew<nByte ){ + nNew = nNew * 2; + } + pNew = sqlite3_realloc(pBuf->p, nNew); + if( pNew==0 ){ + *pRc = SQLITE_NOMEM; + return 1; + }else{ + pBuf->nSpace = nNew; + pBuf->p = pNew; + } + } + return 0; +} + + +/* +** Encode value iVal as an SQLite varint and append it to the buffer object +** pBuf. If an OOM error occurs, set the error code in p. +*/ +static void sqlite3Fts5BufferAppendVarint(int *pRc, Fts5Buffer *pBuf, i64 iVal){ + if( fts5BufferGrow(pRc, pBuf, 9) ) return; + pBuf->n += sqlite3Fts5PutVarint(&pBuf->p[pBuf->n], iVal); +} + +static void sqlite3Fts5Put32(u8 *aBuf, int iVal){ + aBuf[0] = (iVal>>24) & 0x00FF; + aBuf[1] = (iVal>>16) & 0x00FF; + aBuf[2] = (iVal>> 8) & 0x00FF; + aBuf[3] = (iVal>> 0) & 0x00FF; +} + +static int sqlite3Fts5Get32(const u8 *aBuf){ + return (aBuf[0] << 24) + (aBuf[1] << 16) + (aBuf[2] << 8) + aBuf[3]; +} + +/* +** Append buffer nData/pData to buffer pBuf. If an OOM error occurs, set +** the error code in p. If an error has already occurred when this function +** is called, it is a no-op. 
+*/ +static void sqlite3Fts5BufferAppendBlob( + int *pRc, + Fts5Buffer *pBuf, + u32 nData, + const u8 *pData +){ + assert_nc( *pRc || nData>=0 ); + if( fts5BufferGrow(pRc, pBuf, nData) ) return; + memcpy(&pBuf->p[pBuf->n], pData, nData); + pBuf->n += nData; +} + +/* +** Append the nul-terminated string zStr to the buffer pBuf. This function +** ensures that the byte following the buffer data is set to 0x00, even +** though this byte is not included in the pBuf->n count. +*/ +static void sqlite3Fts5BufferAppendString( + int *pRc, + Fts5Buffer *pBuf, + const char *zStr +){ + int nStr = (int)strlen(zStr); + sqlite3Fts5BufferAppendBlob(pRc, pBuf, nStr+1, (const u8*)zStr); + pBuf->n--; +} + +/* +** Argument zFmt is a printf() style format string. This function performs +** the printf() style processing, then appends the results to buffer pBuf. +** +** Like sqlite3Fts5BufferAppendString(), this function ensures that the byte +** following the buffer data is set to 0x00, even though this byte is not +** included in the pBuf->n count. +*/ +static void sqlite3Fts5BufferAppendPrintf( + int *pRc, + Fts5Buffer *pBuf, + char *zFmt, ... +){ + if( *pRc==SQLITE_OK ){ + char *zTmp; + va_list ap; + va_start(ap, zFmt); + zTmp = sqlite3_vmprintf(zFmt, ap); + va_end(ap); + + if( zTmp==0 ){ + *pRc = SQLITE_NOMEM; + }else{ + sqlite3Fts5BufferAppendString(pRc, pBuf, zTmp); + sqlite3_free(zTmp); + } + } +} + +static char *sqlite3Fts5Mprintf(int *pRc, const char *zFmt, ...){ + char *zRet = 0; + if( *pRc==SQLITE_OK ){ + va_list ap; + va_start(ap, zFmt); + zRet = sqlite3_vmprintf(zFmt, ap); + va_end(ap); + if( zRet==0 ){ + *pRc = SQLITE_NOMEM; + } + } + return zRet; +} + + +/* +** Free any buffer allocated by pBuf. Zero the structure before returning. +*/ +static void sqlite3Fts5BufferFree(Fts5Buffer *pBuf){ + sqlite3_free(pBuf->p); + memset(pBuf, 0, sizeof(Fts5Buffer)); +} + +/* +** Zero the contents of the buffer object. But do not free the associated +** memory allocation. +*/ +static void sqlite3Fts5BufferZero(Fts5Buffer *pBuf){ + pBuf->n = 0; +} + +/* +** Set the buffer to contain nData/pData. If an OOM error occurs, leave an +** the error code in p. If an error has already occurred when this function +** is called, it is a no-op. +*/ +static void sqlite3Fts5BufferSet( + int *pRc, + Fts5Buffer *pBuf, + int nData, + const u8 *pData +){ + pBuf->n = 0; + sqlite3Fts5BufferAppendBlob(pRc, pBuf, nData, pData); +} + +static int sqlite3Fts5PoslistNext64( + const u8 *a, int n, /* Buffer containing poslist */ + int *pi, /* IN/OUT: Offset within a[] */ + i64 *piOff /* IN/OUT: Current offset */ +){ + int i = *pi; + if( i>=n ){ + /* EOF */ + *piOff = -1; + return 1; + }else{ + i64 iOff = *piOff; + int iVal; + fts5FastGetVarint32(a, i, iVal); + if( iVal==1 ){ + fts5FastGetVarint32(a, i, iVal); + iOff = ((i64)iVal) << 32; + fts5FastGetVarint32(a, i, iVal); + } + *piOff = iOff + (iVal-2); + *pi = i; + return 0; + } +} + + +/* +** Advance the iterator object passed as the only argument. Return true +** if the iterator reaches EOF, or false otherwise. 
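+**
+** A position list is a sequence of varints: within a column, each token
+** offset is stored as (delta from the previous offset)+2, and the reserved
+** value 0x01 is followed by a varint column number that applies to the
+** offsets that follow.  See sqlite3Fts5PoslistNext64() above and
+** sqlite3Fts5PoslistSafeAppend() below.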
+*/ +static int sqlite3Fts5PoslistReaderNext(Fts5PoslistReader *pIter){ + if( sqlite3Fts5PoslistNext64(pIter->a, pIter->n, &pIter->i, &pIter->iPos) ){ + pIter->bEof = 1; + } + return pIter->bEof; +} + +static int sqlite3Fts5PoslistReaderInit( + const u8 *a, int n, /* Poslist buffer to iterate through */ + Fts5PoslistReader *pIter /* Iterator object to initialize */ +){ + memset(pIter, 0, sizeof(*pIter)); + pIter->a = a; + pIter->n = n; + sqlite3Fts5PoslistReaderNext(pIter); + return pIter->bEof; +} + +/* +** Append position iPos to the position list being accumulated in buffer +** pBuf, which must be already be large enough to hold the new data. +** The previous position written to this list is *piPrev. *piPrev is set +** to iPos before returning. +*/ +static void sqlite3Fts5PoslistSafeAppend( + Fts5Buffer *pBuf, + i64 *piPrev, + i64 iPos +){ + static const i64 colmask = ((i64)(0x7FFFFFFF)) << 32; + if( (iPos & colmask) != (*piPrev & colmask) ){ + pBuf->p[pBuf->n++] = 1; + pBuf->n += sqlite3Fts5PutVarint(&pBuf->p[pBuf->n], (iPos>>32)); + *piPrev = (iPos & colmask); + } + pBuf->n += sqlite3Fts5PutVarint(&pBuf->p[pBuf->n], (iPos-*piPrev)+2); + *piPrev = iPos; +} + +static int sqlite3Fts5PoslistWriterAppend( + Fts5Buffer *pBuf, + Fts5PoslistWriter *pWriter, + i64 iPos +){ + int rc = 0; /* Initialized only to suppress erroneous warning from Clang */ + if( fts5BufferGrow(&rc, pBuf, 5+5+5) ) return rc; + sqlite3Fts5PoslistSafeAppend(pBuf, &pWriter->iPrev, iPos); + return SQLITE_OK; +} + +static void *sqlite3Fts5MallocZero(int *pRc, int nByte){ + void *pRet = 0; + if( *pRc==SQLITE_OK ){ + pRet = sqlite3_malloc(nByte); + if( pRet==0 && nByte>0 ){ + *pRc = SQLITE_NOMEM; + }else{ + memset(pRet, 0, nByte); + } + } + return pRet; +} + +/* +** Return a nul-terminated copy of the string indicated by pIn. If nIn +** is non-negative, then it is the length of the string in bytes. Otherwise, +** the length of the string is determined using strlen(). +** +** It is the responsibility of the caller to eventually free the returned +** buffer using sqlite3_free(). If an OOM error occurs, NULL is returned. +*/ +static char *sqlite3Fts5Strndup(int *pRc, const char *pIn, int nIn){ + char *zRet = 0; + if( *pRc==SQLITE_OK ){ + if( nIn<0 ){ + nIn = (int)strlen(pIn); + } + zRet = (char*)sqlite3_malloc(nIn+1); + if( zRet ){ + memcpy(zRet, pIn, nIn); + zRet[nIn] = '\0'; + }else{ + *pRc = SQLITE_NOMEM; + } + } + return zRet; +} + + +/* +** Return true if character 't' may be part of an FTS5 bareword, or false +** otherwise. Characters that may be part of barewords: +** +** * All non-ASCII characters, +** * The 52 upper and lower case ASCII characters, and +** * The 10 integer ASCII characters. +** * The underscore character "_" (0x5F). +** * The unicode "subsitute" character (0x1A). +*/ +static int sqlite3Fts5IsBareword(char t){ + u8 aBareword[128] = { + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 0x00 .. 0x0F */ + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, /* 0x10 .. 0x1F */ + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 0x20 .. 0x2F */ + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, /* 0x30 .. 0x3F */ + 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 0x40 .. 0x4F */ + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, /* 0x50 .. 0x5F */ + 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 0x60 .. 0x6F */ + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0 /* 0x70 .. 
0x7F */ + }; + + return (t & 0x80) || aBareword[(int)t]; +} + + +/************************************************************************* +*/ +typedef struct Fts5TermsetEntry Fts5TermsetEntry; +struct Fts5TermsetEntry { + char *pTerm; + int nTerm; + int iIdx; /* Index (main or aPrefix[] entry) */ + Fts5TermsetEntry *pNext; +}; + +struct Fts5Termset { + Fts5TermsetEntry *apHash[512]; +}; + +static int sqlite3Fts5TermsetNew(Fts5Termset **pp){ + int rc = SQLITE_OK; + *pp = sqlite3Fts5MallocZero(&rc, sizeof(Fts5Termset)); + return rc; +} + +static int sqlite3Fts5TermsetAdd( + Fts5Termset *p, + int iIdx, + const char *pTerm, int nTerm, + int *pbPresent +){ + int rc = SQLITE_OK; + *pbPresent = 0; + if( p ){ + int i; + u32 hash = 13; + Fts5TermsetEntry *pEntry; + + /* Calculate a hash value for this term. This is the same hash checksum + ** used by the fts5_hash.c module. This is not important for correct + ** operation of the module, but is necessary to ensure that some tests + ** designed to produce hash table collisions really do work. */ + for(i=nTerm-1; i>=0; i--){ + hash = (hash << 3) ^ hash ^ pTerm[i]; + } + hash = (hash << 3) ^ hash ^ iIdx; + hash = hash % ArraySize(p->apHash); + + for(pEntry=p->apHash[hash]; pEntry; pEntry=pEntry->pNext){ + if( pEntry->iIdx==iIdx + && pEntry->nTerm==nTerm + && memcmp(pEntry->pTerm, pTerm, nTerm)==0 + ){ + *pbPresent = 1; + break; + } + } + + if( pEntry==0 ){ + pEntry = sqlite3Fts5MallocZero(&rc, sizeof(Fts5TermsetEntry) + nTerm); + if( pEntry ){ + pEntry->pTerm = (char*)&pEntry[1]; + pEntry->nTerm = nTerm; + pEntry->iIdx = iIdx; + memcpy(pEntry->pTerm, pTerm, nTerm); + pEntry->pNext = p->apHash[hash]; + p->apHash[hash] = pEntry; + } + } + } + + return rc; +} + +static void sqlite3Fts5TermsetFree(Fts5Termset *p){ + if( p ){ + u32 i; + for(i=0; i<ArraySize(p->apHash); i++){ + Fts5TermsetEntry *pEntry = p->apHash[i]; + while( pEntry ){ + Fts5TermsetEntry *pDel = pEntry; + pEntry = pEntry->pNext; + sqlite3_free(pDel); + } + } + sqlite3_free(p); + } +} + +/* +** 2014 Jun 09 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This is an SQLite module implementing full-text search. +*/ + + +/* #include "fts5Int.h" */ + +#define FTS5_DEFAULT_PAGE_SIZE 4050 +#define FTS5_DEFAULT_AUTOMERGE 4 +#define FTS5_DEFAULT_CRISISMERGE 16 +#define FTS5_DEFAULT_HASHSIZE (1024*1024) + +/* Maximum allowed page size */ +#define FTS5_MAX_PAGE_SIZE (128*1024) + +static int fts5_iswhitespace(char x){ + return (x==' '); +} + +static int fts5_isopenquote(char x){ + return (x=='"' || x=='\'' || x=='[' || x=='`'); +} + +/* +** Argument pIn points to a character that is part of a nul-terminated +** string. Return a pointer to the first character following *pIn in +** the string that is not a white-space character. +*/ +static const char *fts5ConfigSkipWhitespace(const char *pIn){ + const char *p = pIn; + if( p ){ + while( fts5_iswhitespace(*p) ){ p++; } + } + return p; +} + +/* +** Argument pIn points to a character that is part of a nul-terminated +** string. Return a pointer to the first character following *pIn in +** the string that is not a "bareword" character. 
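+**
+** Or, if *pIn is not itself a bareword character (so the bareword would be
+** zero bytes long), return NULL.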
+*/ +static const char *fts5ConfigSkipBareword(const char *pIn){ + const char *p = pIn; + while ( sqlite3Fts5IsBareword(*p) ) p++; + if( p==pIn ) p = 0; + return p; +} + +static int fts5_isdigit(char a){ + return (a>='0' && a<='9'); +} + + + +static const char *fts5ConfigSkipLiteral(const char *pIn){ + const char *p = pIn; + switch( *p ){ + case 'n': case 'N': + if( sqlite3_strnicmp("null", p, 4)==0 ){ + p = &p[4]; + }else{ + p = 0; + } + break; + + case 'x': case 'X': + p++; + if( *p=='\'' ){ + p++; + while( (*p>='a' && *p<='f') + || (*p>='A' && *p<='F') + || (*p>='0' && *p<='9') + ){ + p++; + } + if( *p=='\'' && 0==((p-pIn)%2) ){ + p++; + }else{ + p = 0; + } + }else{ + p = 0; + } + break; + + case '\'': + p++; + while( p ){ + if( *p=='\'' ){ + p++; + if( *p!='\'' ) break; + } + p++; + if( *p==0 ) p = 0; + } + break; + + default: + /* maybe a number */ + if( *p=='+' || *p=='-' ) p++; + while( fts5_isdigit(*p) ) p++; + + /* At this point, if the literal was an integer, the parse is + ** finished. Or, if it is a floating point value, it may continue + ** with either a decimal point or an 'E' character. */ + if( *p=='.' && fts5_isdigit(p[1]) ){ + p += 2; + while( fts5_isdigit(*p) ) p++; + } + if( p==pIn ) p = 0; + + break; + } + + return p; +} + +/* +** The first character of the string pointed to by argument z is guaranteed +** to be an open-quote character (see function fts5_isopenquote()). +** +** This function searches for the corresponding close-quote character within +** the string and, if found, dequotes the string in place and adds a new +** nul-terminator byte. +** +** If the close-quote is found, the value returned is the byte offset of +** the character immediately following it. Or, if the close-quote is not +** found, -1 is returned. If -1 is returned, the buffer is left in an +** undefined state. +*/ +static int fts5Dequote(char *z){ + char q; + int iIn = 1; + int iOut = 0; + q = z[0]; + + /* Set stack variable q to the close-quote character */ + assert( q=='[' || q=='\'' || q=='"' || q=='`' ); + if( q=='[' ) q = ']'; + + while( ALWAYS(z[iIn]) ){ + if( z[iIn]==q ){ + if( z[iIn+1]!=q ){ + /* Character iIn was the close quote. */ + iIn++; + break; + }else{ + /* Character iIn and iIn+1 form an escaped quote character. Skip + ** the input cursor past both and copy a single quote character + ** to the output buffer. */ + iIn += 2; + z[iOut++] = q; + } + }else{ + z[iOut++] = z[iIn++]; + } + } + + z[iOut] = '\0'; + return iIn; +} + +/* +** Convert an SQL-style quoted string into a normal string by removing +** the quote characters. The conversion is done in-place. If the +** input does not begin with a quote character, then this routine +** is a no-op. +** +** Examples: +** +** "abc" becomes abc +** 'xyz' becomes xyz +** [pqr] becomes pqr +** `mno` becomes mno +*/ +static void sqlite3Fts5Dequote(char *z){ + char quote; /* Quote character (if any ) */ + + assert( 0==fts5_iswhitespace(z[0]) ); + quote = z[0]; + if( quote=='[' || quote=='\'' || quote=='"' || quote=='`' ){ + fts5Dequote(z); + } +} + + +struct Fts5Enum { + const char *zName; + int eVal; +}; +typedef struct Fts5Enum Fts5Enum; + +static int fts5ConfigSetEnum( + const Fts5Enum *aEnum, + const char *zEnum, + int *peVal +){ + int nEnum = (int)strlen(zEnum); + int i; + int iVal = -1; + + for(i=0; aEnum[i].zName; i++){ + if( sqlite3_strnicmp(aEnum[i].zName, zEnum, nEnum)==0 ){ + if( iVal>=0 ) return SQLITE_ERROR; + iVal = aEnum[i].eVal; + } + } + + *peVal = iVal; + return iVal<0 ? 
SQLITE_ERROR : SQLITE_OK; +} + +/* +** Parse a "special" CREATE VIRTUAL TABLE directive and update +** configuration object pConfig as appropriate. +** +** If successful, object pConfig is updated and SQLITE_OK returned. If +** an error occurs, an SQLite error code is returned and an error message +** may be left in *pzErr. It is the responsibility of the caller to +** eventually free any such error message using sqlite3_free(). +*/ +static int fts5ConfigParseSpecial( + Fts5Global *pGlobal, + Fts5Config *pConfig, /* Configuration object to update */ + const char *zCmd, /* Special command to parse */ + const char *zArg, /* Argument to parse */ + char **pzErr /* OUT: Error message */ +){ + int rc = SQLITE_OK; + int nCmd = (int)strlen(zCmd); + if( sqlite3_strnicmp("prefix", zCmd, nCmd)==0 ){ + const int nByte = sizeof(int) * FTS5_MAX_PREFIX_INDEXES; + const char *p; + int bFirst = 1; + if( pConfig->aPrefix==0 ){ + pConfig->aPrefix = sqlite3Fts5MallocZero(&rc, nByte); + if( rc ) return rc; + } + + p = zArg; + while( 1 ){ + int nPre = 0; + + while( p[0]==' ' ) p++; + if( bFirst==0 && p[0]==',' ){ + p++; + while( p[0]==' ' ) p++; + }else if( p[0]=='\0' ){ + break; + } + if( p[0]<'0' || p[0]>'9' ){ + *pzErr = sqlite3_mprintf("malformed prefix=... directive"); + rc = SQLITE_ERROR; + break; + } + + if( pConfig->nPrefix==FTS5_MAX_PREFIX_INDEXES ){ + *pzErr = sqlite3_mprintf( + "too many prefix indexes (max %d)", FTS5_MAX_PREFIX_INDEXES + ); + rc = SQLITE_ERROR; + break; + } + + while( p[0]>='0' && p[0]<='9' && nPre<1000 ){ + nPre = nPre*10 + (p[0] - '0'); + p++; + } + + if( nPre<=0 || nPre>=1000 ){ + *pzErr = sqlite3_mprintf("prefix length out of range (max 999)"); + rc = SQLITE_ERROR; + break; + } + + pConfig->aPrefix[pConfig->nPrefix] = nPre; + pConfig->nPrefix++; + bFirst = 0; + } + assert( pConfig->nPrefix<=FTS5_MAX_PREFIX_INDEXES ); + return rc; + } + + if( sqlite3_strnicmp("tokenize", zCmd, nCmd)==0 ){ + const char *p = (const char*)zArg; + int nArg = (int)strlen(zArg) + 1; + char **azArg = sqlite3Fts5MallocZero(&rc, sizeof(char*) * nArg); + char *pDel = sqlite3Fts5MallocZero(&rc, nArg * 2); + char *pSpace = pDel; + + if( azArg && pSpace ){ + if( pConfig->pTok ){ + *pzErr = sqlite3_mprintf("multiple tokenize=... directives"); + rc = SQLITE_ERROR; + }else{ + for(nArg=0; p && *p; nArg++){ + const char *p2 = fts5ConfigSkipWhitespace(p); + if( *p2=='\'' ){ + p = fts5ConfigSkipLiteral(p2); + }else{ + p = fts5ConfigSkipBareword(p2); + } + if( p ){ + memcpy(pSpace, p2, p-p2); + azArg[nArg] = pSpace; + sqlite3Fts5Dequote(pSpace); + pSpace += (p - p2) + 1; + p = fts5ConfigSkipWhitespace(p); + } + } + if( p==0 ){ + *pzErr = sqlite3_mprintf("parse error in tokenize directive"); + rc = SQLITE_ERROR; + }else{ + rc = sqlite3Fts5GetTokenizer(pGlobal, + (const char**)azArg, nArg, &pConfig->pTok, &pConfig->pTokApi, + pzErr + ); + } + } + } + + sqlite3_free(azArg); + sqlite3_free(pDel); + return rc; + } + + if( sqlite3_strnicmp("content", zCmd, nCmd)==0 ){ + if( pConfig->eContent!=FTS5_CONTENT_NORMAL ){ + *pzErr = sqlite3_mprintf("multiple content=... directives"); + rc = SQLITE_ERROR; + }else{ + if( zArg[0] ){ + pConfig->eContent = FTS5_CONTENT_EXTERNAL; + pConfig->zContent = sqlite3Fts5Mprintf(&rc, "%Q.%Q", pConfig->zDb,zArg); + }else{ + pConfig->eContent = FTS5_CONTENT_NONE; + } + } + return rc; + } + + if( sqlite3_strnicmp("content_rowid", zCmd, nCmd)==0 ){ + if( pConfig->zContentRowid ){ + *pzErr = sqlite3_mprintf("multiple content_rowid=... 
directives"); + rc = SQLITE_ERROR; + }else{ + pConfig->zContentRowid = sqlite3Fts5Strndup(&rc, zArg, -1); + } + return rc; + } + + if( sqlite3_strnicmp("columnsize", zCmd, nCmd)==0 ){ + if( (zArg[0]!='0' && zArg[0]!='1') || zArg[1]!='\0' ){ + *pzErr = sqlite3_mprintf("malformed columnsize=... directive"); + rc = SQLITE_ERROR; + }else{ + pConfig->bColumnsize = (zArg[0]=='1'); + } + return rc; + } + + if( sqlite3_strnicmp("detail", zCmd, nCmd)==0 ){ + const Fts5Enum aDetail[] = { + { "none", FTS5_DETAIL_NONE }, + { "full", FTS5_DETAIL_FULL }, + { "columns", FTS5_DETAIL_COLUMNS }, + { 0, 0 } + }; + + if( (rc = fts5ConfigSetEnum(aDetail, zArg, &pConfig->eDetail)) ){ + *pzErr = sqlite3_mprintf("malformed detail=... directive"); + } + return rc; + } + + *pzErr = sqlite3_mprintf("unrecognized option: \"%.*s\"", nCmd, zCmd); + return SQLITE_ERROR; +} + +/* +** Allocate an instance of the default tokenizer ("simple") at +** Fts5Config.pTokenizer. Return SQLITE_OK if successful, or an SQLite error +** code if an error occurs. +*/ +static int fts5ConfigDefaultTokenizer(Fts5Global *pGlobal, Fts5Config *pConfig){ + assert( pConfig->pTok==0 && pConfig->pTokApi==0 ); + return sqlite3Fts5GetTokenizer( + pGlobal, 0, 0, &pConfig->pTok, &pConfig->pTokApi, 0 + ); +} + +/* +** Gobble up the first bareword or quoted word from the input buffer zIn. +** Return a pointer to the character immediately following the last in +** the gobbled word if successful, or a NULL pointer otherwise (failed +** to find close-quote character). +** +** Before returning, set pzOut to point to a new buffer containing a +** nul-terminated, dequoted copy of the gobbled word. If the word was +** quoted, *pbQuoted is also set to 1 before returning. +** +** If *pRc is other than SQLITE_OK when this function is called, it is +** a no-op (NULL is returned). Otherwise, if an OOM occurs within this +** function, *pRc is set to SQLITE_NOMEM before returning. *pRc is *not* +** set if a parse error (failed to find close quote) occurs. +*/ +static const char *fts5ConfigGobbleWord( + int *pRc, /* IN/OUT: Error code */ + const char *zIn, /* Buffer to gobble string/bareword from */ + char **pzOut, /* OUT: malloc'd buffer containing str/bw */ + int *pbQuoted /* OUT: Set to true if dequoting required */ +){ + const char *zRet = 0; + + int nIn = (int)strlen(zIn); + char *zOut = sqlite3_malloc(nIn+1); + + assert( *pRc==SQLITE_OK ); + *pbQuoted = 0; + *pzOut = 0; + + if( zOut==0 ){ + *pRc = SQLITE_NOMEM; + }else{ + memcpy(zOut, zIn, nIn+1); + if( fts5_isopenquote(zOut[0]) ){ + int ii = fts5Dequote(zOut); + zRet = &zIn[ii]; + *pbQuoted = 1; + }else{ + zRet = fts5ConfigSkipBareword(zIn); + zOut[zRet-zIn] = '\0'; + } + } + + if( zRet==0 ){ + sqlite3_free(zOut); + }else{ + *pzOut = zOut; + } + + return zRet; +} + +static int fts5ConfigParseColumn( + Fts5Config *p, + char *zCol, + char *zArg, + char **pzErr +){ + int rc = SQLITE_OK; + if( 0==sqlite3_stricmp(zCol, FTS5_RANK_NAME) + || 0==sqlite3_stricmp(zCol, FTS5_ROWID_NAME) + ){ + *pzErr = sqlite3_mprintf("reserved fts5 column name: %s", zCol); + rc = SQLITE_ERROR; + }else if( zArg ){ + if( 0==sqlite3_stricmp(zArg, "unindexed") ){ + p->abUnindexed[p->nCol] = 1; + }else{ + *pzErr = sqlite3_mprintf("unrecognized column option: %s", zArg); + rc = SQLITE_ERROR; + } + } + + p->azCol[p->nCol++] = zCol; + return rc; +} + +/* +** Populate the Fts5Config.zContentExprlist string. 
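+**
+** The result is the comma-separated list of expressions used to read a row
+** from the content table, e.g. "T.'rowid', T.c0, T.c1" for an ordinary
+** fts5 table, or "T.'rowid', T.'x', T.'y'" when an external content= table
+** with columns x and y is used.  For contentless (content='') tables only
+** the rowid expression is emitted.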
+*/ +static int fts5ConfigMakeExprlist(Fts5Config *p){ + int i; + int rc = SQLITE_OK; + Fts5Buffer buf = {0, 0, 0}; + + sqlite3Fts5BufferAppendPrintf(&rc, &buf, "T.%Q", p->zContentRowid); + if( p->eContent!=FTS5_CONTENT_NONE ){ + for(i=0; i<p->nCol; i++){ + if( p->eContent==FTS5_CONTENT_EXTERNAL ){ + sqlite3Fts5BufferAppendPrintf(&rc, &buf, ", T.%Q", p->azCol[i]); + }else{ + sqlite3Fts5BufferAppendPrintf(&rc, &buf, ", T.c%d", i); + } + } + } + + assert( p->zContentExprlist==0 ); + p->zContentExprlist = (char*)buf.p; + return rc; +} + +/* +** Arguments nArg/azArg contain the string arguments passed to the xCreate +** or xConnect method of the virtual table. This function attempts to +** allocate an instance of Fts5Config containing the results of parsing +** those arguments. +** +** If successful, SQLITE_OK is returned and *ppOut is set to point to the +** new Fts5Config object. If an error occurs, an SQLite error code is +** returned, *ppOut is set to NULL and an error message may be left in +** *pzErr. It is the responsibility of the caller to eventually free any +** such error message using sqlite3_free(). +*/ +static int sqlite3Fts5ConfigParse( + Fts5Global *pGlobal, + sqlite3 *db, + int nArg, /* Number of arguments */ + const char **azArg, /* Array of nArg CREATE VIRTUAL TABLE args */ + Fts5Config **ppOut, /* OUT: Results of parse */ + char **pzErr /* OUT: Error message */ +){ + int rc = SQLITE_OK; /* Return code */ + Fts5Config *pRet; /* New object to return */ + int i; + int nByte; + + *ppOut = pRet = (Fts5Config*)sqlite3_malloc(sizeof(Fts5Config)); + if( pRet==0 ) return SQLITE_NOMEM; + memset(pRet, 0, sizeof(Fts5Config)); + pRet->db = db; + pRet->iCookie = -1; + + nByte = nArg * (sizeof(char*) + sizeof(u8)); + pRet->azCol = (char**)sqlite3Fts5MallocZero(&rc, nByte); + pRet->abUnindexed = (u8*)&pRet->azCol[nArg]; + pRet->zDb = sqlite3Fts5Strndup(&rc, azArg[1], -1); + pRet->zName = sqlite3Fts5Strndup(&rc, azArg[2], -1); + pRet->bColumnsize = 1; + pRet->eDetail = FTS5_DETAIL_FULL; +#ifdef SQLITE_DEBUG + pRet->bPrefixIndex = 1; +#endif + if( rc==SQLITE_OK && sqlite3_stricmp(pRet->zName, FTS5_RANK_NAME)==0 ){ + *pzErr = sqlite3_mprintf("reserved fts5 table name: %s", pRet->zName); + rc = SQLITE_ERROR; + } + + for(i=3; rc==SQLITE_OK && i<nArg; i++){ + const char *zOrig = azArg[i]; + const char *z; + char *zOne = 0; + char *zTwo = 0; + int bOption = 0; + int bMustBeCol = 0; + + z = fts5ConfigGobbleWord(&rc, zOrig, &zOne, &bMustBeCol); + z = fts5ConfigSkipWhitespace(z); + if( z && *z=='=' ){ + bOption = 1; + z++; + if( bMustBeCol ) z = 0; + } + z = fts5ConfigSkipWhitespace(z); + if( z && z[0] ){ + int bDummy; + z = fts5ConfigGobbleWord(&rc, z, &zTwo, &bDummy); + if( z && z[0] ) z = 0; + } + + if( rc==SQLITE_OK ){ + if( z==0 ){ + *pzErr = sqlite3_mprintf("parse error in \"%s\"", zOrig); + rc = SQLITE_ERROR; + }else{ + if( bOption ){ + rc = fts5ConfigParseSpecial(pGlobal, pRet, zOne, zTwo?zTwo:"", pzErr); + }else{ + rc = fts5ConfigParseColumn(pRet, zOne, zTwo, pzErr); + zOne = 0; + } + } + } + + sqlite3_free(zOne); + sqlite3_free(zTwo); + } + + /* If a tokenizer= option was successfully parsed, the tokenizer has + ** already been allocated. Otherwise, allocate an instance of the default + ** tokenizer (unicode61) now. */ + if( rc==SQLITE_OK && pRet->pTok==0 ){ + rc = fts5ConfigDefaultTokenizer(pGlobal, pRet); + } + + /* If no zContent option was specified, fill in the default values. 
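+  **
+  ** For example (illustration only): for "CREATE VIRTUAL TABLE t1 USING
+  ** fts5(a)" in database "main" with no content= option, zContent below
+  ** becomes 'main'.'t1_content'. For a contentless table only the
+  ** 'main'.'t1_docsize' name is filled in, and then only if the
+  ** columnsize option has not been turned off.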
*/ + if( rc==SQLITE_OK && pRet->zContent==0 ){ + const char *zTail = 0; + assert( pRet->eContent==FTS5_CONTENT_NORMAL + || pRet->eContent==FTS5_CONTENT_NONE + ); + if( pRet->eContent==FTS5_CONTENT_NORMAL ){ + zTail = "content"; + }else if( pRet->bColumnsize ){ + zTail = "docsize"; + } + + if( zTail ){ + pRet->zContent = sqlite3Fts5Mprintf( + &rc, "%Q.'%q_%s'", pRet->zDb, pRet->zName, zTail + ); + } + } + + if( rc==SQLITE_OK && pRet->zContentRowid==0 ){ + pRet->zContentRowid = sqlite3Fts5Strndup(&rc, "rowid", -1); + } + + /* Formulate the zContentExprlist text */ + if( rc==SQLITE_OK ){ + rc = fts5ConfigMakeExprlist(pRet); + } + + if( rc!=SQLITE_OK ){ + sqlite3Fts5ConfigFree(pRet); + *ppOut = 0; + } + return rc; +} + +/* +** Free the configuration object passed as the only argument. +*/ +static void sqlite3Fts5ConfigFree(Fts5Config *pConfig){ + if( pConfig ){ + int i; + if( pConfig->pTok ){ + pConfig->pTokApi->xDelete(pConfig->pTok); + } + sqlite3_free(pConfig->zDb); + sqlite3_free(pConfig->zName); + for(i=0; i<pConfig->nCol; i++){ + sqlite3_free(pConfig->azCol[i]); + } + sqlite3_free(pConfig->azCol); + sqlite3_free(pConfig->aPrefix); + sqlite3_free(pConfig->zRank); + sqlite3_free(pConfig->zRankArgs); + sqlite3_free(pConfig->zContent); + sqlite3_free(pConfig->zContentRowid); + sqlite3_free(pConfig->zContentExprlist); + sqlite3_free(pConfig); + } +} + +/* +** Call sqlite3_declare_vtab() based on the contents of the configuration +** object passed as the only argument. Return SQLITE_OK if successful, or +** an SQLite error code if an error occurs. +*/ +static int sqlite3Fts5ConfigDeclareVtab(Fts5Config *pConfig){ + int i; + int rc = SQLITE_OK; + char *zSql; + + zSql = sqlite3Fts5Mprintf(&rc, "CREATE TABLE x("); + for(i=0; zSql && i<pConfig->nCol; i++){ + const char *zSep = (i==0?"":", "); + zSql = sqlite3Fts5Mprintf(&rc, "%z%s%Q", zSql, zSep, pConfig->azCol[i]); + } + zSql = sqlite3Fts5Mprintf(&rc, "%z, %Q HIDDEN, %s HIDDEN)", + zSql, pConfig->zName, FTS5_RANK_NAME + ); + + assert( zSql || rc==SQLITE_NOMEM ); + if( zSql ){ + rc = sqlite3_declare_vtab(pConfig->db, zSql); + sqlite3_free(zSql); + } + + return rc; +} + +/* +** Tokenize the text passed via the second and third arguments. +** +** The callback is invoked once for each token in the input text. The +** arguments passed to it are, in order: +** +** void *pCtx // Copy of 4th argument to sqlite3Fts5Tokenize() +** const char *pToken // Pointer to buffer containing token +** int nToken // Size of token in bytes +** int iStart // Byte offset of start of token within input text +** int iEnd // Byte offset of end of token within input text +** int iPos // Position of token in input (first token is 0) +** +** If the callback returns a non-zero value the tokenization is abandoned +** and no further callbacks are issued. +** +** This function returns SQLITE_OK if successful or an SQLite error code +** if an error occurs. If the tokenization was abandoned early because +** the callback returned SQLITE_DONE, this is not an error and this function +** still returns SQLITE_OK. Or, if the tokenization was abandoned early +** because the callback returned another non-zero value, it is assumed +** to be an SQLite error code and returned to the caller. 
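+**
+** As an illustration only (the function name is hypothetical, not part of
+** these sources), a callback matching the xToken pointer type declared
+** below that merely counts tokens could look like:
+**
+**   static int fts5CountTokenCb(void *pCtx, int tflags, const char *pTok,
+**                               int nTok, int iStart, int iEnd){
+**     (*(int*)pCtx)++;
+**     return SQLITE_OK;
+**   }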
+*/ +static int sqlite3Fts5Tokenize( + Fts5Config *pConfig, /* FTS5 Configuration object */ + int flags, /* FTS5_TOKENIZE_* flags */ + const char *pText, int nText, /* Text to tokenize */ + void *pCtx, /* Context passed to xToken() */ + int (*xToken)(void*, int, const char*, int, int, int) /* Callback */ +){ + if( pText==0 ) return SQLITE_OK; + return pConfig->pTokApi->xTokenize( + pConfig->pTok, pCtx, flags, pText, nText, xToken + ); +} + +/* +** Argument pIn points to the first character in what is expected to be +** a comma-separated list of SQL literals followed by a ')' character. +** If it actually is this, return a pointer to the ')'. Otherwise, return +** NULL to indicate a parse error. +*/ +static const char *fts5ConfigSkipArgs(const char *pIn){ + const char *p = pIn; + + while( 1 ){ + p = fts5ConfigSkipWhitespace(p); + p = fts5ConfigSkipLiteral(p); + p = fts5ConfigSkipWhitespace(p); + if( p==0 || *p==')' ) break; + if( *p!=',' ){ + p = 0; + break; + } + p++; + } + + return p; +} + +/* +** Parameter zIn contains a rank() function specification. The format of +** this is: +** +** + Bareword (function name) +** + Open parenthesis - "(" +** + Zero or more SQL literals in a comma separated list +** + Close parenthesis - ")" +*/ +static int sqlite3Fts5ConfigParseRank( + const char *zIn, /* Input string */ + char **pzRank, /* OUT: Rank function name */ + char **pzRankArgs /* OUT: Rank function arguments */ +){ + const char *p = zIn; + const char *pRank; + char *zRank = 0; + char *zRankArgs = 0; + int rc = SQLITE_OK; + + *pzRank = 0; + *pzRankArgs = 0; + + if( p==0 ){ + rc = SQLITE_ERROR; + }else{ + p = fts5ConfigSkipWhitespace(p); + pRank = p; + p = fts5ConfigSkipBareword(p); + + if( p ){ + zRank = sqlite3Fts5MallocZero(&rc, 1 + p - pRank); + if( zRank ) memcpy(zRank, pRank, p-pRank); + }else{ + rc = SQLITE_ERROR; + } + + if( rc==SQLITE_OK ){ + p = fts5ConfigSkipWhitespace(p); + if( *p!='(' ) rc = SQLITE_ERROR; + p++; + } + if( rc==SQLITE_OK ){ + const char *pArgs; + p = fts5ConfigSkipWhitespace(p); + pArgs = p; + if( *p!=')' ){ + p = fts5ConfigSkipArgs(p); + if( p==0 ){ + rc = SQLITE_ERROR; + }else{ + zRankArgs = sqlite3Fts5MallocZero(&rc, 1 + p - pArgs); + if( zRankArgs ) memcpy(zRankArgs, pArgs, p-pArgs); + } + } + } + } + + if( rc!=SQLITE_OK ){ + sqlite3_free(zRank); + assert( zRankArgs==0 ); + }else{ + *pzRank = zRank; + *pzRankArgs = zRankArgs; + } + return rc; +} + +static int sqlite3Fts5ConfigSetValue( + Fts5Config *pConfig, + const char *zKey, + sqlite3_value *pVal, + int *pbBadkey +){ + int rc = SQLITE_OK; + + if( 0==sqlite3_stricmp(zKey, "pgsz") ){ + int pgsz = 0; + if( SQLITE_INTEGER==sqlite3_value_numeric_type(pVal) ){ + pgsz = sqlite3_value_int(pVal); + } + if( pgsz<=0 || pgsz>FTS5_MAX_PAGE_SIZE ){ + *pbBadkey = 1; + }else{ + pConfig->pgsz = pgsz; + } + } + + else if( 0==sqlite3_stricmp(zKey, "hashsize") ){ + int nHashSize = -1; + if( SQLITE_INTEGER==sqlite3_value_numeric_type(pVal) ){ + nHashSize = sqlite3_value_int(pVal); + } + if( nHashSize<=0 ){ + *pbBadkey = 1; + }else{ + pConfig->nHashSize = nHashSize; + } + } + + else if( 0==sqlite3_stricmp(zKey, "automerge") ){ + int nAutomerge = -1; + if( SQLITE_INTEGER==sqlite3_value_numeric_type(pVal) ){ + nAutomerge = sqlite3_value_int(pVal); + } + if( nAutomerge<0 || nAutomerge>64 ){ + *pbBadkey = 1; + }else{ + if( nAutomerge==1 ) nAutomerge = FTS5_DEFAULT_AUTOMERGE; + pConfig->nAutomerge = nAutomerge; + } + } + + else if( 0==sqlite3_stricmp(zKey, "crisismerge") ){ + int nCrisisMerge = -1; + if( 
SQLITE_INTEGER==sqlite3_value_numeric_type(pVal) ){ + nCrisisMerge = sqlite3_value_int(pVal); + } + if( nCrisisMerge<0 ){ + *pbBadkey = 1; + }else{ + if( nCrisisMerge<=1 ) nCrisisMerge = FTS5_DEFAULT_CRISISMERGE; + pConfig->nCrisisMerge = nCrisisMerge; + } + } + + else if( 0==sqlite3_stricmp(zKey, "rank") ){ + const char *zIn = (const char*)sqlite3_value_text(pVal); + char *zRank; + char *zRankArgs; + rc = sqlite3Fts5ConfigParseRank(zIn, &zRank, &zRankArgs); + if( rc==SQLITE_OK ){ + sqlite3_free(pConfig->zRank); + sqlite3_free(pConfig->zRankArgs); + pConfig->zRank = zRank; + pConfig->zRankArgs = zRankArgs; + }else if( rc==SQLITE_ERROR ){ + rc = SQLITE_OK; + *pbBadkey = 1; + } + }else{ + *pbBadkey = 1; + } + return rc; +} + +/* +** Load the contents of the %_config table into memory. +*/ +static int sqlite3Fts5ConfigLoad(Fts5Config *pConfig, int iCookie){ + const char *zSelect = "SELECT k, v FROM %Q.'%q_config'"; + char *zSql; + sqlite3_stmt *p = 0; + int rc = SQLITE_OK; + int iVersion = 0; + + /* Set default values */ + pConfig->pgsz = FTS5_DEFAULT_PAGE_SIZE; + pConfig->nAutomerge = FTS5_DEFAULT_AUTOMERGE; + pConfig->nCrisisMerge = FTS5_DEFAULT_CRISISMERGE; + pConfig->nHashSize = FTS5_DEFAULT_HASHSIZE; + + zSql = sqlite3Fts5Mprintf(&rc, zSelect, pConfig->zDb, pConfig->zName); + if( zSql ){ + rc = sqlite3_prepare_v2(pConfig->db, zSql, -1, &p, 0); + sqlite3_free(zSql); + } + + assert( rc==SQLITE_OK || p==0 ); + if( rc==SQLITE_OK ){ + while( SQLITE_ROW==sqlite3_step(p) ){ + const char *zK = (const char*)sqlite3_column_text(p, 0); + sqlite3_value *pVal = sqlite3_column_value(p, 1); + if( 0==sqlite3_stricmp(zK, "version") ){ + iVersion = sqlite3_value_int(pVal); + }else{ + int bDummy = 0; + sqlite3Fts5ConfigSetValue(pConfig, zK, pVal, &bDummy); + } + } + rc = sqlite3_finalize(p); + } + + if( rc==SQLITE_OK && iVersion!=FTS5_CURRENT_VERSION ){ + rc = SQLITE_ERROR; + if( pConfig->pzErrmsg ){ + assert( 0==*pConfig->pzErrmsg ); + *pConfig->pzErrmsg = sqlite3_mprintf( + "invalid fts5 file format (found %d, expected %d) - run 'rebuild'", + iVersion, FTS5_CURRENT_VERSION + ); + } + } + + if( rc==SQLITE_OK ){ + pConfig->iCookie = iCookie; + } + return rc; +} + +/* +** 2014 May 31 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +*/ + + + +/* #include "fts5Int.h" */ +/* #include "fts5parse.h" */ + +/* +** All token types in the generated fts5parse.h file are greater than 0. +*/ +#define FTS5_EOF 0 + +#define FTS5_LARGEST_INT64 (0xffffffff|(((i64)0x7fffffff)<<32)) + +typedef struct Fts5ExprTerm Fts5ExprTerm; + +/* +** Functions generated by lemon from fts5parse.y. +*/ +static void *sqlite3Fts5ParserAlloc(void *(*mallocProc)(u64)); +static void sqlite3Fts5ParserFree(void*, void (*freeProc)(void*)); +static void sqlite3Fts5Parser(void*, int, Fts5Token, Fts5Parse*); +#ifndef NDEBUG +/* #include <stdio.h> */ +static void sqlite3Fts5ParserTrace(FILE*, char*); +#endif + + +struct Fts5Expr { + Fts5Index *pIndex; + Fts5Config *pConfig; + Fts5ExprNode *pRoot; + int bDesc; /* Iterate in descending rowid order */ + int nPhrase; /* Number of phrases in expression */ + Fts5ExprPhrase **apExprPhrase; /* Pointers to phrase objects */ +}; + +/* +** eType: +** Expression node type. 
Always one of: +** +** FTS5_AND (nChild, apChild valid) +** FTS5_OR (nChild, apChild valid) +** FTS5_NOT (nChild, apChild valid) +** FTS5_STRING (pNear valid) +** FTS5_TERM (pNear valid) +*/ +struct Fts5ExprNode { + int eType; /* Node type */ + int bEof; /* True at EOF */ + int bNomatch; /* True if entry is not a match */ + + /* Next method for this node. */ + int (*xNext)(Fts5Expr*, Fts5ExprNode*, int, i64); + + i64 iRowid; /* Current rowid */ + Fts5ExprNearset *pNear; /* For FTS5_STRING - cluster of phrases */ + + /* Child nodes. For a NOT node, this array always contains 2 entries. For + ** AND or OR nodes, it contains 2 or more entries. */ + int nChild; /* Number of child nodes */ + Fts5ExprNode *apChild[1]; /* Array of child nodes */ +}; + +#define Fts5NodeIsString(p) ((p)->eType==FTS5_TERM || (p)->eType==FTS5_STRING) + +/* +** Invoke the xNext method of an Fts5ExprNode object. This macro should be +** used as if it has the same signature as the xNext() methods themselves. +*/ +#define fts5ExprNodeNext(a,b,c,d) (b)->xNext((a), (b), (c), (d)) + +/* +** An instance of the following structure represents a single search term +** or term prefix. +*/ +struct Fts5ExprTerm { + int bPrefix; /* True for a prefix term */ + char *zTerm; /* nul-terminated term */ + Fts5IndexIter *pIter; /* Iterator for this term */ + Fts5ExprTerm *pSynonym; /* Pointer to first in list of synonyms */ +}; + +/* +** A phrase. One or more terms that must appear in a contiguous sequence +** within a document for it to match. +*/ +struct Fts5ExprPhrase { + Fts5ExprNode *pNode; /* FTS5_STRING node this phrase is part of */ + Fts5Buffer poslist; /* Current position list */ + int nTerm; /* Number of entries in aTerm[] */ + Fts5ExprTerm aTerm[1]; /* Terms that make up this phrase */ +}; + +/* +** One or more phrases that must appear within a certain token distance of +** each other within each matching document. +*/ +struct Fts5ExprNearset { + int nNear; /* NEAR parameter */ + Fts5Colset *pColset; /* Columns to search (NULL -> all columns) */ + int nPhrase; /* Number of entries in aPhrase[] array */ + Fts5ExprPhrase *apPhrase[1]; /* Array of phrase pointers */ +}; + + +/* +** Parse context. +*/ +struct Fts5Parse { + Fts5Config *pConfig; + char *zErr; + int rc; + int nPhrase; /* Size of apPhrase array */ + Fts5ExprPhrase **apPhrase; /* Array of all phrases */ + Fts5ExprNode *pExpr; /* Result of a successful parse */ +}; + +static void sqlite3Fts5ParseError(Fts5Parse *pParse, const char *zFmt, ...){ + va_list ap; + va_start(ap, zFmt); + if( pParse->rc==SQLITE_OK ){ + pParse->zErr = sqlite3_vmprintf(zFmt, ap); + pParse->rc = SQLITE_ERROR; + } + va_end(ap); +} + +static int fts5ExprIsspace(char t){ + return t==' ' || t=='\t' || t=='\n' || t=='\r'; +} + +/* +** Read the first token from the nul-terminated string at *pz. 
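+**
+** For example, the query text
+**
+**     abc AND "one two"*
+**
+** produces the token sequence FTS5_STRING ("abc"), FTS5_AND,
+** FTS5_STRING (the quoted "one two", quotes included), FTS5_STAR and
+** finally FTS5_EOF.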
+*/ +static int fts5ExprGetToken( + Fts5Parse *pParse, + const char **pz, /* IN/OUT: Pointer into buffer */ + Fts5Token *pToken +){ + const char *z = *pz; + int tok; + + /* Skip past any whitespace */ + while( fts5ExprIsspace(*z) ) z++; + + pToken->p = z; + pToken->n = 1; + switch( *z ){ + case '(': tok = FTS5_LP; break; + case ')': tok = FTS5_RP; break; + case '{': tok = FTS5_LCP; break; + case '}': tok = FTS5_RCP; break; + case ':': tok = FTS5_COLON; break; + case ',': tok = FTS5_COMMA; break; + case '+': tok = FTS5_PLUS; break; + case '*': tok = FTS5_STAR; break; + case '\0': tok = FTS5_EOF; break; + + case '"': { + const char *z2; + tok = FTS5_STRING; + + for(z2=&z[1]; 1; z2++){ + if( z2[0]=='"' ){ + z2++; + if( z2[0]!='"' ) break; + } + if( z2[0]=='\0' ){ + sqlite3Fts5ParseError(pParse, "unterminated string"); + return FTS5_EOF; + } + } + pToken->n = (z2 - z); + break; + } + + default: { + const char *z2; + if( sqlite3Fts5IsBareword(z[0])==0 ){ + sqlite3Fts5ParseError(pParse, "fts5: syntax error near \"%.1s\"", z); + return FTS5_EOF; + } + tok = FTS5_STRING; + for(z2=&z[1]; sqlite3Fts5IsBareword(*z2); z2++); + pToken->n = (z2 - z); + if( pToken->n==2 && memcmp(pToken->p, "OR", 2)==0 ) tok = FTS5_OR; + if( pToken->n==3 && memcmp(pToken->p, "NOT", 3)==0 ) tok = FTS5_NOT; + if( pToken->n==3 && memcmp(pToken->p, "AND", 3)==0 ) tok = FTS5_AND; + break; + } + } + + *pz = &pToken->p[pToken->n]; + return tok; +} + +static void *fts5ParseAlloc(u64 t){ return sqlite3_malloc((int)t); } +static void fts5ParseFree(void *p){ sqlite3_free(p); } + +static int sqlite3Fts5ExprNew( + Fts5Config *pConfig, /* FTS5 Configuration */ + const char *zExpr, /* Expression text */ + Fts5Expr **ppNew, + char **pzErr +){ + Fts5Parse sParse; + Fts5Token token; + const char *z = zExpr; + int t; /* Next token type */ + void *pEngine; + Fts5Expr *pNew; + + *ppNew = 0; + *pzErr = 0; + memset(&sParse, 0, sizeof(sParse)); + pEngine = sqlite3Fts5ParserAlloc(fts5ParseAlloc); + if( pEngine==0 ){ return SQLITE_NOMEM; } + sParse.pConfig = pConfig; + + do { + t = fts5ExprGetToken(&sParse, &z, &token); + sqlite3Fts5Parser(pEngine, t, token, &sParse); + }while( sParse.rc==SQLITE_OK && t!=FTS5_EOF ); + sqlite3Fts5ParserFree(pEngine, fts5ParseFree); + + assert( sParse.rc!=SQLITE_OK || sParse.zErr==0 ); + if( sParse.rc==SQLITE_OK ){ + *ppNew = pNew = sqlite3_malloc(sizeof(Fts5Expr)); + if( pNew==0 ){ + sParse.rc = SQLITE_NOMEM; + sqlite3Fts5ParseNodeFree(sParse.pExpr); + }else{ + if( !sParse.pExpr ){ + const int nByte = sizeof(Fts5ExprNode); + pNew->pRoot = (Fts5ExprNode*)sqlite3Fts5MallocZero(&sParse.rc, nByte); + if( pNew->pRoot ){ + pNew->pRoot->bEof = 1; + } + }else{ + pNew->pRoot = sParse.pExpr; + } + pNew->pIndex = 0; + pNew->pConfig = pConfig; + pNew->apExprPhrase = sParse.apPhrase; + pNew->nPhrase = sParse.nPhrase; + sParse.apPhrase = 0; + } + } + + sqlite3_free(sParse.apPhrase); + *pzErr = sParse.zErr; + return sParse.rc; +} + +/* +** Free the expression node object passed as the only argument. +*/ +static void sqlite3Fts5ParseNodeFree(Fts5ExprNode *p){ + if( p ){ + int i; + for(i=0; i<p->nChild; i++){ + sqlite3Fts5ParseNodeFree(p->apChild[i]); + } + sqlite3Fts5ParseNearsetFree(p->pNear); + sqlite3_free(p); + } +} + +/* +** Free the expression object passed as the only argument. +*/ +static void sqlite3Fts5ExprFree(Fts5Expr *p){ + if( p ){ + sqlite3Fts5ParseNodeFree(p->pRoot); + sqlite3_free(p->apExprPhrase); + sqlite3_free(p); + } +} + +/* +** Argument pTerm must be a synonym iterator. 
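+** (A synonym iterator is the first Fts5ExprTerm in a list linked through
+** pSynonym: the term itself, plus one entry for each token that the
+** tokenizer reported with FTS5_TOKEN_COLOCATED.)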
Return the current rowid +** that it points to. +*/ +static i64 fts5ExprSynonymRowid(Fts5ExprTerm *pTerm, int bDesc, int *pbEof){ + i64 iRet = 0; + int bRetValid = 0; + Fts5ExprTerm *p; + + assert( pTerm->pSynonym ); + assert( bDesc==0 || bDesc==1 ); + for(p=pTerm; p; p=p->pSynonym){ + if( 0==sqlite3Fts5IterEof(p->pIter) ){ + i64 iRowid = p->pIter->iRowid; + if( bRetValid==0 || (bDesc!=(iRowid<iRet)) ){ + iRet = iRowid; + bRetValid = 1; + } + } + } + + if( pbEof && bRetValid==0 ) *pbEof = 1; + return iRet; +} + +/* +** Argument pTerm must be a synonym iterator. +*/ +static int fts5ExprSynonymList( + Fts5ExprTerm *pTerm, + i64 iRowid, + Fts5Buffer *pBuf, /* Use this buffer for space if required */ + u8 **pa, int *pn +){ + Fts5PoslistReader aStatic[4]; + Fts5PoslistReader *aIter = aStatic; + int nIter = 0; + int nAlloc = 4; + int rc = SQLITE_OK; + Fts5ExprTerm *p; + + assert( pTerm->pSynonym ); + for(p=pTerm; p; p=p->pSynonym){ + Fts5IndexIter *pIter = p->pIter; + if( sqlite3Fts5IterEof(pIter)==0 && pIter->iRowid==iRowid ){ + if( pIter->nData==0 ) continue; + if( nIter==nAlloc ){ + int nByte = sizeof(Fts5PoslistReader) * nAlloc * 2; + Fts5PoslistReader *aNew = (Fts5PoslistReader*)sqlite3_malloc(nByte); + if( aNew==0 ){ + rc = SQLITE_NOMEM; + goto synonym_poslist_out; + } + memcpy(aNew, aIter, sizeof(Fts5PoslistReader) * nIter); + nAlloc = nAlloc*2; + if( aIter!=aStatic ) sqlite3_free(aIter); + aIter = aNew; + } + sqlite3Fts5PoslistReaderInit(pIter->pData, pIter->nData, &aIter[nIter]); + assert( aIter[nIter].bEof==0 ); + nIter++; + } + } + + if( nIter==1 ){ + *pa = (u8*)aIter[0].a; + *pn = aIter[0].n; + }else{ + Fts5PoslistWriter writer = {0}; + i64 iPrev = -1; + fts5BufferZero(pBuf); + while( 1 ){ + int i; + i64 iMin = FTS5_LARGEST_INT64; + for(i=0; i<nIter; i++){ + if( aIter[i].bEof==0 ){ + if( aIter[i].iPos==iPrev ){ + if( sqlite3Fts5PoslistReaderNext(&aIter[i]) ) continue; + } + if( aIter[i].iPos<iMin ){ + iMin = aIter[i].iPos; + } + } + } + if( iMin==FTS5_LARGEST_INT64 || rc!=SQLITE_OK ) break; + rc = sqlite3Fts5PoslistWriterAppend(pBuf, &writer, iMin); + iPrev = iMin; + } + if( rc==SQLITE_OK ){ + *pa = pBuf->p; + *pn = pBuf->n; + } + } + + synonym_poslist_out: + if( aIter!=aStatic ) sqlite3_free(aIter); + return rc; +} + + +/* +** All individual term iterators in pPhrase are guaranteed to be valid and +** pointing to the same rowid when this function is called. This function +** checks if the current rowid really is a match, and if so populates +** the pPhrase->poslist buffer accordingly. Output parameter *pbMatch +** is set to true if this is really a match, or false otherwise. +** +** SQLITE_OK is returned if an error occurs, or an SQLite error code +** otherwise. It is not considered an error code if the current rowid is +** not a match. +*/ +static int fts5ExprPhraseIsMatch( + Fts5ExprNode *pNode, /* Node pPhrase belongs to */ + Fts5ExprPhrase *pPhrase, /* Phrase object to initialize */ + int *pbMatch /* OUT: Set to true if really a match */ +){ + Fts5PoslistWriter writer = {0}; + Fts5PoslistReader aStatic[4]; + Fts5PoslistReader *aIter = aStatic; + int i; + int rc = SQLITE_OK; + + fts5BufferZero(&pPhrase->poslist); + + /* If the aStatic[] array is not large enough, allocate a large array + ** using sqlite3_malloc(). This approach could be improved upon. 
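+  **
+  ** (The loop that follows then searches for positions iPos at which term
+  ** i of the phrase occurs at exactly iPos+i for every i. For example,
+  ** for the three-term phrase "one two three", a hit at iPos==7 requires
+  ** "one" at 7, "two" at 8 and "three" at 9.)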
*/ + if( pPhrase->nTerm>ArraySize(aStatic) ){ + int nByte = sizeof(Fts5PoslistReader) * pPhrase->nTerm; + aIter = (Fts5PoslistReader*)sqlite3_malloc(nByte); + if( !aIter ) return SQLITE_NOMEM; + } + memset(aIter, 0, sizeof(Fts5PoslistReader) * pPhrase->nTerm); + + /* Initialize a term iterator for each term in the phrase */ + for(i=0; i<pPhrase->nTerm; i++){ + Fts5ExprTerm *pTerm = &pPhrase->aTerm[i]; + int n = 0; + int bFlag = 0; + u8 *a = 0; + if( pTerm->pSynonym ){ + Fts5Buffer buf = {0, 0, 0}; + rc = fts5ExprSynonymList(pTerm, pNode->iRowid, &buf, &a, &n); + if( rc ){ + sqlite3_free(a); + goto ismatch_out; + } + if( a==buf.p ) bFlag = 1; + }else{ + a = (u8*)pTerm->pIter->pData; + n = pTerm->pIter->nData; + } + sqlite3Fts5PoslistReaderInit(a, n, &aIter[i]); + aIter[i].bFlag = (u8)bFlag; + if( aIter[i].bEof ) goto ismatch_out; + } + + while( 1 ){ + int bMatch; + i64 iPos = aIter[0].iPos; + do { + bMatch = 1; + for(i=0; i<pPhrase->nTerm; i++){ + Fts5PoslistReader *pPos = &aIter[i]; + i64 iAdj = iPos + i; + if( pPos->iPos!=iAdj ){ + bMatch = 0; + while( pPos->iPos<iAdj ){ + if( sqlite3Fts5PoslistReaderNext(pPos) ) goto ismatch_out; + } + if( pPos->iPos>iAdj ) iPos = pPos->iPos-i; + } + } + }while( bMatch==0 ); + + /* Append position iPos to the output */ + rc = sqlite3Fts5PoslistWriterAppend(&pPhrase->poslist, &writer, iPos); + if( rc!=SQLITE_OK ) goto ismatch_out; + + for(i=0; i<pPhrase->nTerm; i++){ + if( sqlite3Fts5PoslistReaderNext(&aIter[i]) ) goto ismatch_out; + } + } + + ismatch_out: + *pbMatch = (pPhrase->poslist.n>0); + for(i=0; i<pPhrase->nTerm; i++){ + if( aIter[i].bFlag ) sqlite3_free((u8*)aIter[i].a); + } + if( aIter!=aStatic ) sqlite3_free(aIter); + return rc; +} + +typedef struct Fts5LookaheadReader Fts5LookaheadReader; +struct Fts5LookaheadReader { + const u8 *a; /* Buffer containing position list */ + int n; /* Size of buffer a[] in bytes */ + int i; /* Current offset in position list */ + i64 iPos; /* Current position */ + i64 iLookahead; /* Next position */ +}; + +#define FTS5_LOOKAHEAD_EOF (((i64)1) << 62) + +static int fts5LookaheadReaderNext(Fts5LookaheadReader *p){ + p->iPos = p->iLookahead; + if( sqlite3Fts5PoslistNext64(p->a, p->n, &p->i, &p->iLookahead) ){ + p->iLookahead = FTS5_LOOKAHEAD_EOF; + } + return (p->iPos==FTS5_LOOKAHEAD_EOF); +} + +static int fts5LookaheadReaderInit( + const u8 *a, int n, /* Buffer to read position list from */ + Fts5LookaheadReader *p /* Iterator object to initialize */ +){ + memset(p, 0, sizeof(Fts5LookaheadReader)); + p->a = a; + p->n = n; + fts5LookaheadReaderNext(p); + return fts5LookaheadReaderNext(p); +} + +typedef struct Fts5NearTrimmer Fts5NearTrimmer; +struct Fts5NearTrimmer { + Fts5LookaheadReader reader; /* Input iterator */ + Fts5PoslistWriter writer; /* Writer context */ + Fts5Buffer *pOut; /* Output poslist */ +}; + +/* +** The near-set object passed as the first argument contains more than +** one phrase. All phrases currently point to the same row. The +** Fts5ExprPhrase.poslist buffers are populated accordingly. This function +** tests if the current row contains instances of each phrase sufficiently +** close together to meet the NEAR constraint. Non-zero is returned if it +** does, or zero otherwise. +** +** If in/out parameter (*pRc) is set to other than SQLITE_OK when this +** function is called, it is a no-op. Or, if an error (e.g. SQLITE_NOMEM) +** occurs within this function (*pRc) is set accordingly before returning. +** The return value is undefined in both these cases. 
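+**
+** The window test itself works as follows: for each phrase the code below
+** computes iMin = iMax - nTerm - nNear and requires that phrase to have a
+** position inside [iMin, iMax], enlarging iMax and advancing lagging
+** phrases until such a set of positions is found. For example, with
+** NEAR(p1 p2, 10) the matching positions of p1 and p2 must lie within
+** roughly 10 tokens of one another (the exact window also accounts for
+** the number of terms in each phrase).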
+** +** If no error occurs and non-zero (a match) is returned, the position-list +** of each phrase object is edited to contain only those entries that +** meet the constraint before returning. +*/ +static int fts5ExprNearIsMatch(int *pRc, Fts5ExprNearset *pNear){ + Fts5NearTrimmer aStatic[4]; + Fts5NearTrimmer *a = aStatic; + Fts5ExprPhrase **apPhrase = pNear->apPhrase; + + int i; + int rc = *pRc; + int bMatch; + + assert( pNear->nPhrase>1 ); + + /* If the aStatic[] array is not large enough, allocate a large array + ** using sqlite3_malloc(). This approach could be improved upon. */ + if( pNear->nPhrase>ArraySize(aStatic) ){ + int nByte = sizeof(Fts5NearTrimmer) * pNear->nPhrase; + a = (Fts5NearTrimmer*)sqlite3Fts5MallocZero(&rc, nByte); + }else{ + memset(aStatic, 0, sizeof(aStatic)); + } + if( rc!=SQLITE_OK ){ + *pRc = rc; + return 0; + } + + /* Initialize a lookahead iterator for each phrase. After passing the + ** buffer and buffer size to the lookaside-reader init function, zero + ** the phrase poslist buffer. The new poslist for the phrase (containing + ** the same entries as the original with some entries removed on account + ** of the NEAR constraint) is written over the original even as it is + ** being read. This is safe as the entries for the new poslist are a + ** subset of the old, so it is not possible for data yet to be read to + ** be overwritten. */ + for(i=0; i<pNear->nPhrase; i++){ + Fts5Buffer *pPoslist = &apPhrase[i]->poslist; + fts5LookaheadReaderInit(pPoslist->p, pPoslist->n, &a[i].reader); + pPoslist->n = 0; + a[i].pOut = pPoslist; + } + + while( 1 ){ + int iAdv; + i64 iMin; + i64 iMax; + + /* This block advances the phrase iterators until they point to a set of + ** entries that together comprise a match. */ + iMax = a[0].reader.iPos; + do { + bMatch = 1; + for(i=0; i<pNear->nPhrase; i++){ + Fts5LookaheadReader *pPos = &a[i].reader; + iMin = iMax - pNear->apPhrase[i]->nTerm - pNear->nNear; + if( pPos->iPos<iMin || pPos->iPos>iMax ){ + bMatch = 0; + while( pPos->iPos<iMin ){ + if( fts5LookaheadReaderNext(pPos) ) goto ismatch_out; + } + if( pPos->iPos>iMax ) iMax = pPos->iPos; + } + } + }while( bMatch==0 ); + + /* Add an entry to each output position list */ + for(i=0; i<pNear->nPhrase; i++){ + i64 iPos = a[i].reader.iPos; + Fts5PoslistWriter *pWriter = &a[i].writer; + if( a[i].pOut->n==0 || iPos!=pWriter->iPrev ){ + sqlite3Fts5PoslistWriterAppend(a[i].pOut, pWriter, iPos); + } + } + + iAdv = 0; + iMin = a[0].reader.iLookahead; + for(i=0; i<pNear->nPhrase; i++){ + if( a[i].reader.iLookahead < iMin ){ + iMin = a[i].reader.iLookahead; + iAdv = i; + } + } + if( fts5LookaheadReaderNext(&a[iAdv].reader) ) goto ismatch_out; + } + + ismatch_out: { + int bRet = a[0].pOut->n>0; + *pRc = rc; + if( a!=aStatic ) sqlite3_free(a); + return bRet; + } +} + +/* +** Advance iterator pIter until it points to a value equal to or laster +** than the initial value of *piLast. If this means the iterator points +** to a value laster than *piLast, update *piLast to the new lastest value. +** +** If the iterator reaches EOF, set *pbEof to true before returning. If +** an error occurs, set *pRc to an error code. If either *pbEof or *pRc +** are set, return a non-zero value. Otherwise, return zero. 
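+**
+** (In this context "laster" simply means further along in the iteration
+** order: numerically larger rowids when iterating in ascending order
+** (bDesc==0), numerically smaller rowids when iterating in descending
+** order.)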
+*/ +static int fts5ExprAdvanceto( + Fts5IndexIter *pIter, /* Iterator to advance */ + int bDesc, /* True if iterator is "rowid DESC" */ + i64 *piLast, /* IN/OUT: Lastest rowid seen so far */ + int *pRc, /* OUT: Error code */ + int *pbEof /* OUT: Set to true if EOF */ +){ + i64 iLast = *piLast; + i64 iRowid; + + iRowid = pIter->iRowid; + if( (bDesc==0 && iLast>iRowid) || (bDesc && iLast<iRowid) ){ + int rc = sqlite3Fts5IterNextFrom(pIter, iLast); + if( rc || sqlite3Fts5IterEof(pIter) ){ + *pRc = rc; + *pbEof = 1; + return 1; + } + iRowid = pIter->iRowid; + assert( (bDesc==0 && iRowid>=iLast) || (bDesc==1 && iRowid<=iLast) ); + } + *piLast = iRowid; + + return 0; +} + +static int fts5ExprSynonymAdvanceto( + Fts5ExprTerm *pTerm, /* Term iterator to advance */ + int bDesc, /* True if iterator is "rowid DESC" */ + i64 *piLast, /* IN/OUT: Lastest rowid seen so far */ + int *pRc /* OUT: Error code */ +){ + int rc = SQLITE_OK; + i64 iLast = *piLast; + Fts5ExprTerm *p; + int bEof = 0; + + for(p=pTerm; rc==SQLITE_OK && p; p=p->pSynonym){ + if( sqlite3Fts5IterEof(p->pIter)==0 ){ + i64 iRowid = p->pIter->iRowid; + if( (bDesc==0 && iLast>iRowid) || (bDesc && iLast<iRowid) ){ + rc = sqlite3Fts5IterNextFrom(p->pIter, iLast); + } + } + } + + if( rc!=SQLITE_OK ){ + *pRc = rc; + bEof = 1; + }else{ + *piLast = fts5ExprSynonymRowid(pTerm, bDesc, &bEof); + } + return bEof; +} + + +static int fts5ExprNearTest( + int *pRc, + Fts5Expr *pExpr, /* Expression that pNear is a part of */ + Fts5ExprNode *pNode /* The "NEAR" node (FTS5_STRING) */ +){ + Fts5ExprNearset *pNear = pNode->pNear; + int rc = *pRc; + + if( pExpr->pConfig->eDetail!=FTS5_DETAIL_FULL ){ + Fts5ExprTerm *pTerm; + Fts5ExprPhrase *pPhrase = pNear->apPhrase[0]; + pPhrase->poslist.n = 0; + for(pTerm=&pPhrase->aTerm[0]; pTerm; pTerm=pTerm->pSynonym){ + Fts5IndexIter *pIter = pTerm->pIter; + if( sqlite3Fts5IterEof(pIter)==0 ){ + if( pIter->iRowid==pNode->iRowid && pIter->nData>0 ){ + pPhrase->poslist.n = 1; + } + } + } + return pPhrase->poslist.n; + }else{ + int i; + + /* Check that each phrase in the nearset matches the current row. + ** Populate the pPhrase->poslist buffers at the same time. If any + ** phrase is not a match, break out of the loop early. */ + for(i=0; rc==SQLITE_OK && i<pNear->nPhrase; i++){ + Fts5ExprPhrase *pPhrase = pNear->apPhrase[i]; + if( pPhrase->nTerm>1 || pPhrase->aTerm[0].pSynonym || pNear->pColset ){ + int bMatch = 0; + rc = fts5ExprPhraseIsMatch(pNode, pPhrase, &bMatch); + if( bMatch==0 ) break; + }else{ + Fts5IndexIter *pIter = pPhrase->aTerm[0].pIter; + fts5BufferSet(&rc, &pPhrase->poslist, pIter->nData, pIter->pData); + } + } + + *pRc = rc; + if( i==pNear->nPhrase && (i==1 || fts5ExprNearIsMatch(pRc, pNear)) ){ + return 1; + } + return 0; + } +} + + +/* +** Initialize all term iterators in the pNear object. If any term is found +** to match no documents at all, return immediately without initializing any +** further iterators. 
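+**
+** (Each term is opened with sqlite3Fts5IndexQuery(), passing the
+** FTS5INDEX_QUERY_PREFIX flag for trailing-"*" terms and
+** FTS5INDEX_QUERY_DESC when the expression iterates in descending rowid
+** order. A term with synonyms is only considered at EOF once every one
+** of its synonym iterators is at EOF.)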
+*/ +static int fts5ExprNearInitAll( + Fts5Expr *pExpr, + Fts5ExprNode *pNode +){ + Fts5ExprNearset *pNear = pNode->pNear; + int i, j; + int rc = SQLITE_OK; + + assert( pNode->bNomatch==0 ); + for(i=0; rc==SQLITE_OK && i<pNear->nPhrase; i++){ + Fts5ExprPhrase *pPhrase = pNear->apPhrase[i]; + for(j=0; j<pPhrase->nTerm; j++){ + Fts5ExprTerm *pTerm = &pPhrase->aTerm[j]; + Fts5ExprTerm *p; + int bEof = 1; + + for(p=pTerm; p && rc==SQLITE_OK; p=p->pSynonym){ + if( p->pIter ){ + sqlite3Fts5IterClose(p->pIter); + p->pIter = 0; + } + rc = sqlite3Fts5IndexQuery( + pExpr->pIndex, p->zTerm, (int)strlen(p->zTerm), + (pTerm->bPrefix ? FTS5INDEX_QUERY_PREFIX : 0) | + (pExpr->bDesc ? FTS5INDEX_QUERY_DESC : 0), + pNear->pColset, + &p->pIter + ); + assert( rc==SQLITE_OK || p->pIter==0 ); + if( p->pIter && 0==sqlite3Fts5IterEof(p->pIter) ){ + bEof = 0; + } + } + + if( bEof ){ + pNode->bEof = 1; + return rc; + } + } + } + + return rc; +} + +/* +** If pExpr is an ASC iterator, this function returns a value with the +** same sign as: +** +** (iLhs - iRhs) +** +** Otherwise, if this is a DESC iterator, the opposite is returned: +** +** (iRhs - iLhs) +*/ +static int fts5RowidCmp( + Fts5Expr *pExpr, + i64 iLhs, + i64 iRhs +){ + assert( pExpr->bDesc==0 || pExpr->bDesc==1 ); + if( pExpr->bDesc==0 ){ + if( iLhs<iRhs ) return -1; + return (iLhs > iRhs); + }else{ + if( iLhs>iRhs ) return -1; + return (iLhs < iRhs); + } +} + +static void fts5ExprSetEof(Fts5ExprNode *pNode){ + int i; + pNode->bEof = 1; + pNode->bNomatch = 0; + for(i=0; i<pNode->nChild; i++){ + fts5ExprSetEof(pNode->apChild[i]); + } +} + +static void fts5ExprNodeZeroPoslist(Fts5ExprNode *pNode){ + if( pNode->eType==FTS5_STRING || pNode->eType==FTS5_TERM ){ + Fts5ExprNearset *pNear = pNode->pNear; + int i; + for(i=0; i<pNear->nPhrase; i++){ + Fts5ExprPhrase *pPhrase = pNear->apPhrase[i]; + pPhrase->poslist.n = 0; + } + }else{ + int i; + for(i=0; i<pNode->nChild; i++){ + fts5ExprNodeZeroPoslist(pNode->apChild[i]); + } + } +} + + + +/* +** Compare the values currently indicated by the two nodes as follows: +** +** res = (*p1) - (*p2) +** +** Nodes that point to values that come later in the iteration order are +** considered to be larger. Nodes at EOF are the largest of all. +** +** This means that if the iteration order is ASC, then numerically larger +** rowids are considered larger. Or if it is the default DESC, numerically +** smaller rowids are larger. +*/ +static int fts5NodeCompare( + Fts5Expr *pExpr, + Fts5ExprNode *p1, + Fts5ExprNode *p2 +){ + if( p2->bEof ) return -1; + if( p1->bEof ) return +1; + return fts5RowidCmp(pExpr, p1->iRowid, p2->iRowid); +} + +/* +** All individual term iterators in pNear are guaranteed to be valid when +** this function is called. This function checks if all term iterators +** point to the same rowid, and if not, advances them until they do. +** If an EOF is reached before this happens, *pbEof is set to true before +** returning. +** +** SQLITE_OK is returned if an error occurs, or an SQLite error code +** otherwise. It is not considered an error code if an iterator reaches +** EOF. 
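+**
+** (In the code below the EOF condition is recorded in pNode->bEof rather
+** than via an output parameter; reaching EOF on its own still produces an
+** SQLITE_OK return.)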
+*/ +static int fts5ExprNodeTest_STRING( + Fts5Expr *pExpr, /* Expression pPhrase belongs to */ + Fts5ExprNode *pNode +){ + Fts5ExprNearset *pNear = pNode->pNear; + Fts5ExprPhrase *pLeft = pNear->apPhrase[0]; + int rc = SQLITE_OK; + i64 iLast; /* Lastest rowid any iterator points to */ + int i, j; /* Phrase and token index, respectively */ + int bMatch; /* True if all terms are at the same rowid */ + const int bDesc = pExpr->bDesc; + + /* Check that this node should not be FTS5_TERM */ + assert( pNear->nPhrase>1 + || pNear->apPhrase[0]->nTerm>1 + || pNear->apPhrase[0]->aTerm[0].pSynonym + ); + + /* Initialize iLast, the "lastest" rowid any iterator points to. If the + ** iterator skips through rowids in the default ascending order, this means + ** the maximum rowid. Or, if the iterator is "ORDER BY rowid DESC", then it + ** means the minimum rowid. */ + if( pLeft->aTerm[0].pSynonym ){ + iLast = fts5ExprSynonymRowid(&pLeft->aTerm[0], bDesc, 0); + }else{ + iLast = pLeft->aTerm[0].pIter->iRowid; + } + + do { + bMatch = 1; + for(i=0; i<pNear->nPhrase; i++){ + Fts5ExprPhrase *pPhrase = pNear->apPhrase[i]; + for(j=0; j<pPhrase->nTerm; j++){ + Fts5ExprTerm *pTerm = &pPhrase->aTerm[j]; + if( pTerm->pSynonym ){ + i64 iRowid = fts5ExprSynonymRowid(pTerm, bDesc, 0); + if( iRowid==iLast ) continue; + bMatch = 0; + if( fts5ExprSynonymAdvanceto(pTerm, bDesc, &iLast, &rc) ){ + pNode->bNomatch = 0; + pNode->bEof = 1; + return rc; + } + }else{ + Fts5IndexIter *pIter = pPhrase->aTerm[j].pIter; + if( pIter->iRowid==iLast ) continue; + bMatch = 0; + if( fts5ExprAdvanceto(pIter, bDesc, &iLast, &rc, &pNode->bEof) ){ + return rc; + } + } + } + } + }while( bMatch==0 ); + + pNode->iRowid = iLast; + pNode->bNomatch = ((0==fts5ExprNearTest(&rc, pExpr, pNode)) && rc==SQLITE_OK); + assert( pNode->bEof==0 || pNode->bNomatch==0 ); + + return rc; +} + +/* +** Advance the first term iterator in the first phrase of pNear. Set output +** variable *pbEof to true if it reaches EOF or if an error occurs. +** +** Return SQLITE_OK if successful, or an SQLite error code if an error +** occurs. +*/ +static int fts5ExprNodeNext_STRING( + Fts5Expr *pExpr, /* Expression pPhrase belongs to */ + Fts5ExprNode *pNode, /* FTS5_STRING or FTS5_TERM node */ + int bFromValid, + i64 iFrom +){ + Fts5ExprTerm *pTerm = &pNode->pNear->apPhrase[0]->aTerm[0]; + int rc = SQLITE_OK; + + pNode->bNomatch = 0; + if( pTerm->pSynonym ){ + int bEof = 1; + Fts5ExprTerm *p; + + /* Find the firstest rowid any synonym points to. */ + i64 iRowid = fts5ExprSynonymRowid(pTerm, pExpr->bDesc, 0); + + /* Advance each iterator that currently points to iRowid. Or, if iFrom + ** is valid - each iterator that points to a rowid before iFrom. */ + for(p=pTerm; p; p=p->pSynonym){ + if( sqlite3Fts5IterEof(p->pIter)==0 ){ + i64 ii = p->pIter->iRowid; + if( ii==iRowid + || (bFromValid && ii!=iFrom && (ii>iFrom)==pExpr->bDesc) + ){ + if( bFromValid ){ + rc = sqlite3Fts5IterNextFrom(p->pIter, iFrom); + }else{ + rc = sqlite3Fts5IterNext(p->pIter); + } + if( rc!=SQLITE_OK ) break; + if( sqlite3Fts5IterEof(p->pIter)==0 ){ + bEof = 0; + } + }else{ + bEof = 0; + } + } + } + + /* Set the EOF flag if either all synonym iterators are at EOF or an + ** error has occurred. 
*/ + pNode->bEof = (rc || bEof); + }else{ + Fts5IndexIter *pIter = pTerm->pIter; + + assert( Fts5NodeIsString(pNode) ); + if( bFromValid ){ + rc = sqlite3Fts5IterNextFrom(pIter, iFrom); + }else{ + rc = sqlite3Fts5IterNext(pIter); + } + + pNode->bEof = (rc || sqlite3Fts5IterEof(pIter)); + } + + if( pNode->bEof==0 ){ + assert( rc==SQLITE_OK ); + rc = fts5ExprNodeTest_STRING(pExpr, pNode); + } + + return rc; +} + + +static int fts5ExprNodeTest_TERM( + Fts5Expr *pExpr, /* Expression that pNear is a part of */ + Fts5ExprNode *pNode /* The "NEAR" node (FTS5_TERM) */ +){ + /* As this "NEAR" object is actually a single phrase that consists + ** of a single term only, grab pointers into the poslist managed by the + ** fts5_index.c iterator object. This is much faster than synthesizing + ** a new poslist the way we have to for more complicated phrase or NEAR + ** expressions. */ + Fts5ExprPhrase *pPhrase = pNode->pNear->apPhrase[0]; + Fts5IndexIter *pIter = pPhrase->aTerm[0].pIter; + + assert( pNode->eType==FTS5_TERM ); + assert( pNode->pNear->nPhrase==1 && pPhrase->nTerm==1 ); + assert( pPhrase->aTerm[0].pSynonym==0 ); + + pPhrase->poslist.n = pIter->nData; + if( pExpr->pConfig->eDetail==FTS5_DETAIL_FULL ){ + pPhrase->poslist.p = (u8*)pIter->pData; + } + pNode->iRowid = pIter->iRowid; + pNode->bNomatch = (pPhrase->poslist.n==0); + return SQLITE_OK; +} + +/* +** xNext() method for a node of type FTS5_TERM. +*/ +static int fts5ExprNodeNext_TERM( + Fts5Expr *pExpr, + Fts5ExprNode *pNode, + int bFromValid, + i64 iFrom +){ + int rc; + Fts5IndexIter *pIter = pNode->pNear->apPhrase[0]->aTerm[0].pIter; + + assert( pNode->bEof==0 ); + if( bFromValid ){ + rc = sqlite3Fts5IterNextFrom(pIter, iFrom); + }else{ + rc = sqlite3Fts5IterNext(pIter); + } + if( rc==SQLITE_OK && sqlite3Fts5IterEof(pIter)==0 ){ + rc = fts5ExprNodeTest_TERM(pExpr, pNode); + }else{ + pNode->bEof = 1; + pNode->bNomatch = 0; + } + return rc; +} + +static void fts5ExprNodeTest_OR( + Fts5Expr *pExpr, /* Expression of which pNode is a part */ + Fts5ExprNode *pNode /* Expression node to test */ +){ + Fts5ExprNode *pNext = pNode->apChild[0]; + int i; + + for(i=1; i<pNode->nChild; i++){ + Fts5ExprNode *pChild = pNode->apChild[i]; + int cmp = fts5NodeCompare(pExpr, pNext, pChild); + if( cmp>0 || (cmp==0 && pChild->bNomatch==0) ){ + pNext = pChild; + } + } + pNode->iRowid = pNext->iRowid; + pNode->bEof = pNext->bEof; + pNode->bNomatch = pNext->bNomatch; +} + +static int fts5ExprNodeNext_OR( + Fts5Expr *pExpr, + Fts5ExprNode *pNode, + int bFromValid, + i64 iFrom +){ + int i; + i64 iLast = pNode->iRowid; + + for(i=0; i<pNode->nChild; i++){ + Fts5ExprNode *p1 = pNode->apChild[i]; + assert( p1->bEof || fts5RowidCmp(pExpr, p1->iRowid, iLast)>=0 ); + if( p1->bEof==0 ){ + if( (p1->iRowid==iLast) + || (bFromValid && fts5RowidCmp(pExpr, p1->iRowid, iFrom)<0) + ){ + int rc = fts5ExprNodeNext(pExpr, p1, bFromValid, iFrom); + if( rc!=SQLITE_OK ) return rc; + } + } + } + + fts5ExprNodeTest_OR(pExpr, pNode); + return SQLITE_OK; +} + +/* +** Argument pNode is an FTS5_AND node. 
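+**
+** (The loop below repeatedly advances whichever children lag behind the
+** "lastest" rowid seen so far until every child reports the same rowid or
+** one of them reaches EOF. For example, with two children at rowids 3 and
+** 7 in ascending order, the child at 3 is advanced to 7 or past it before
+** the comparison is repeated.)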
+*/ +static int fts5ExprNodeTest_AND( + Fts5Expr *pExpr, /* Expression pPhrase belongs to */ + Fts5ExprNode *pAnd /* FTS5_AND node to advance */ +){ + int iChild; + i64 iLast = pAnd->iRowid; + int rc = SQLITE_OK; + int bMatch; + + assert( pAnd->bEof==0 ); + do { + pAnd->bNomatch = 0; + bMatch = 1; + for(iChild=0; iChild<pAnd->nChild; iChild++){ + Fts5ExprNode *pChild = pAnd->apChild[iChild]; + int cmp = fts5RowidCmp(pExpr, iLast, pChild->iRowid); + if( cmp>0 ){ + /* Advance pChild until it points to iLast or laster */ + rc = fts5ExprNodeNext(pExpr, pChild, 1, iLast); + if( rc!=SQLITE_OK ) return rc; + } + + /* If the child node is now at EOF, so is the parent AND node. Otherwise, + ** the child node is guaranteed to have advanced at least as far as + ** rowid iLast. So if it is not at exactly iLast, pChild->iRowid is the + ** new lastest rowid seen so far. */ + assert( pChild->bEof || fts5RowidCmp(pExpr, iLast, pChild->iRowid)<=0 ); + if( pChild->bEof ){ + fts5ExprSetEof(pAnd); + bMatch = 1; + break; + }else if( iLast!=pChild->iRowid ){ + bMatch = 0; + iLast = pChild->iRowid; + } + + if( pChild->bNomatch ){ + pAnd->bNomatch = 1; + } + } + }while( bMatch==0 ); + + if( pAnd->bNomatch && pAnd!=pExpr->pRoot ){ + fts5ExprNodeZeroPoslist(pAnd); + } + pAnd->iRowid = iLast; + return SQLITE_OK; +} + +static int fts5ExprNodeNext_AND( + Fts5Expr *pExpr, + Fts5ExprNode *pNode, + int bFromValid, + i64 iFrom +){ + int rc = fts5ExprNodeNext(pExpr, pNode->apChild[0], bFromValid, iFrom); + if( rc==SQLITE_OK ){ + rc = fts5ExprNodeTest_AND(pExpr, pNode); + } + return rc; +} + +static int fts5ExprNodeTest_NOT( + Fts5Expr *pExpr, /* Expression pPhrase belongs to */ + Fts5ExprNode *pNode /* FTS5_NOT node to advance */ +){ + int rc = SQLITE_OK; + Fts5ExprNode *p1 = pNode->apChild[0]; + Fts5ExprNode *p2 = pNode->apChild[1]; + assert( pNode->nChild==2 ); + + while( rc==SQLITE_OK && p1->bEof==0 ){ + int cmp = fts5NodeCompare(pExpr, p1, p2); + if( cmp>0 ){ + rc = fts5ExprNodeNext(pExpr, p2, 1, p1->iRowid); + cmp = fts5NodeCompare(pExpr, p1, p2); + } + assert( rc!=SQLITE_OK || cmp<=0 ); + if( cmp || p2->bNomatch ) break; + rc = fts5ExprNodeNext(pExpr, p1, 0, 0); + } + pNode->bEof = p1->bEof; + pNode->bNomatch = p1->bNomatch; + pNode->iRowid = p1->iRowid; + if( p1->bEof ){ + fts5ExprNodeZeroPoslist(p2); + } + return rc; +} + +static int fts5ExprNodeNext_NOT( + Fts5Expr *pExpr, + Fts5ExprNode *pNode, + int bFromValid, + i64 iFrom +){ + int rc = fts5ExprNodeNext(pExpr, pNode->apChild[0], bFromValid, iFrom); + if( rc==SQLITE_OK ){ + rc = fts5ExprNodeTest_NOT(pExpr, pNode); + } + return rc; +} + +/* +** If pNode currently points to a match, this function returns SQLITE_OK +** without modifying it. Otherwise, pNode is advanced until it does point +** to a match or EOF is reached. +*/ +static int fts5ExprNodeTest( + Fts5Expr *pExpr, /* Expression of which pNode is a part */ + Fts5ExprNode *pNode /* Expression node to test */ +){ + int rc = SQLITE_OK; + if( pNode->bEof==0 ){ + switch( pNode->eType ){ + + case FTS5_STRING: { + rc = fts5ExprNodeTest_STRING(pExpr, pNode); + break; + } + + case FTS5_TERM: { + rc = fts5ExprNodeTest_TERM(pExpr, pNode); + break; + } + + case FTS5_AND: { + rc = fts5ExprNodeTest_AND(pExpr, pNode); + break; + } + + case FTS5_OR: { + fts5ExprNodeTest_OR(pExpr, pNode); + break; + } + + default: assert( pNode->eType==FTS5_NOT ); { + rc = fts5ExprNodeTest_NOT(pExpr, pNode); + break; + } + } + } + return rc; +} + + +/* +** Set node pNode, which is part of expression pExpr, to point to the first +** match. 
If there are no matches, set the Node.bEof flag to indicate EOF. +** +** Return an SQLite error code if an error occurs, or SQLITE_OK otherwise. +** It is not an error if there are no matches. +*/ +static int fts5ExprNodeFirst(Fts5Expr *pExpr, Fts5ExprNode *pNode){ + int rc = SQLITE_OK; + pNode->bEof = 0; + pNode->bNomatch = 0; + + if( Fts5NodeIsString(pNode) ){ + /* Initialize all term iterators in the NEAR object. */ + rc = fts5ExprNearInitAll(pExpr, pNode); + }else{ + int i; + int nEof = 0; + for(i=0; i<pNode->nChild && rc==SQLITE_OK; i++){ + Fts5ExprNode *pChild = pNode->apChild[i]; + rc = fts5ExprNodeFirst(pExpr, pNode->apChild[i]); + assert( pChild->bEof==0 || pChild->bEof==1 ); + nEof += pChild->bEof; + } + pNode->iRowid = pNode->apChild[0]->iRowid; + + switch( pNode->eType ){ + case FTS5_AND: + if( nEof>0 ) fts5ExprSetEof(pNode); + break; + + case FTS5_OR: + if( pNode->nChild==nEof ) fts5ExprSetEof(pNode); + break; + + default: + assert( pNode->eType==FTS5_NOT ); + pNode->bEof = pNode->apChild[0]->bEof; + break; + } + } + + if( rc==SQLITE_OK ){ + rc = fts5ExprNodeTest(pExpr, pNode); + } + return rc; +} + + +/* +** Begin iterating through the set of documents in index pIdx matched by +** the MATCH expression passed as the first argument. If the "bDesc" +** parameter is passed a non-zero value, iteration is in descending rowid +** order. Or, if it is zero, in ascending order. +** +** If iterating in ascending rowid order (bDesc==0), the first document +** visited is that with the smallest rowid that is larger than or equal +** to parameter iFirst. Or, if iterating in ascending order (bDesc==1), +** then the first document visited must have a rowid smaller than or +** equal to iFirst. +** +** Return SQLITE_OK if successful, or an SQLite error code otherwise. It +** is not considered an error if the query does not match any documents. +*/ +static int sqlite3Fts5ExprFirst(Fts5Expr *p, Fts5Index *pIdx, i64 iFirst, int bDesc){ + Fts5ExprNode *pRoot = p->pRoot; + int rc = SQLITE_OK; + if( pRoot->xNext ){ + p->pIndex = pIdx; + p->bDesc = bDesc; + rc = fts5ExprNodeFirst(p, pRoot); + + /* If not at EOF but the current rowid occurs earlier than iFirst in + ** the iteration order, move to document iFirst or later. */ + if( pRoot->bEof==0 && fts5RowidCmp(p, pRoot->iRowid, iFirst)<0 ){ + rc = fts5ExprNodeNext(p, pRoot, 1, iFirst); + } + + /* If the iterator is not at a real match, skip forward until it is. */ + while( pRoot->bNomatch ){ + assert( pRoot->bEof==0 && rc==SQLITE_OK ); + rc = fts5ExprNodeNext(p, pRoot, 0, 0); + } + } + return rc; +} + +/* +** Move to the next document +** +** Return SQLITE_OK if successful, or an SQLite error code otherwise. It +** is not considered an error if the query does not match any documents. +*/ +static int sqlite3Fts5ExprNext(Fts5Expr *p, i64 iLast){ + int rc; + Fts5ExprNode *pRoot = p->pRoot; + assert( pRoot->bEof==0 && pRoot->bNomatch==0 ); + do { + rc = fts5ExprNodeNext(p, pRoot, 0, 0); + assert( pRoot->bNomatch==0 || (rc==SQLITE_OK && pRoot->bEof==0) ); + }while( pRoot->bNomatch ); + if( fts5RowidCmp(p, pRoot->iRowid, iLast)>0 ){ + pRoot->bEof = 1; + } + return rc; +} + +static int sqlite3Fts5ExprEof(Fts5Expr *p){ + return p->pRoot->bEof; +} + +static i64 sqlite3Fts5ExprRowid(Fts5Expr *p){ + return p->pRoot->iRowid; +} + +static int fts5ParseStringFromToken(Fts5Token *pToken, char **pz){ + int rc = SQLITE_OK; + *pz = sqlite3Fts5Strndup(&rc, pToken->p, pToken->n); + return rc; +} + +/* +** Free the phrase object passed as the only argument. 
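+**
+** (Synonym terms are allocated by fts5ParseTokenize() as a single block
+** holding the Fts5ExprTerm, an Fts5Buffer and the term text, which is why
+** the code below frees the buffer located at &pSyn[1] before freeing the
+** synonym itself.)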
+*/ +static void fts5ExprPhraseFree(Fts5ExprPhrase *pPhrase){ + if( pPhrase ){ + int i; + for(i=0; i<pPhrase->nTerm; i++){ + Fts5ExprTerm *pSyn; + Fts5ExprTerm *pNext; + Fts5ExprTerm *pTerm = &pPhrase->aTerm[i]; + sqlite3_free(pTerm->zTerm); + sqlite3Fts5IterClose(pTerm->pIter); + for(pSyn=pTerm->pSynonym; pSyn; pSyn=pNext){ + pNext = pSyn->pSynonym; + sqlite3Fts5IterClose(pSyn->pIter); + fts5BufferFree((Fts5Buffer*)&pSyn[1]); + sqlite3_free(pSyn); + } + } + if( pPhrase->poslist.nSpace>0 ) fts5BufferFree(&pPhrase->poslist); + sqlite3_free(pPhrase); + } +} + +/* +** If argument pNear is NULL, then a new Fts5ExprNearset object is allocated +** and populated with pPhrase. Or, if pNear is not NULL, phrase pPhrase is +** appended to it and the results returned. +** +** If an OOM error occurs, both the pNear and pPhrase objects are freed and +** NULL returned. +*/ +static Fts5ExprNearset *sqlite3Fts5ParseNearset( + Fts5Parse *pParse, /* Parse context */ + Fts5ExprNearset *pNear, /* Existing nearset, or NULL */ + Fts5ExprPhrase *pPhrase /* Recently parsed phrase */ +){ + const int SZALLOC = 8; + Fts5ExprNearset *pRet = 0; + + if( pParse->rc==SQLITE_OK ){ + if( pPhrase==0 ){ + return pNear; + } + if( pNear==0 ){ + int nByte = sizeof(Fts5ExprNearset) + SZALLOC * sizeof(Fts5ExprPhrase*); + pRet = sqlite3_malloc(nByte); + if( pRet==0 ){ + pParse->rc = SQLITE_NOMEM; + }else{ + memset(pRet, 0, nByte); + } + }else if( (pNear->nPhrase % SZALLOC)==0 ){ + int nNew = pNear->nPhrase + SZALLOC; + int nByte = sizeof(Fts5ExprNearset) + nNew * sizeof(Fts5ExprPhrase*); + + pRet = (Fts5ExprNearset*)sqlite3_realloc(pNear, nByte); + if( pRet==0 ){ + pParse->rc = SQLITE_NOMEM; + } + }else{ + pRet = pNear; + } + } + + if( pRet==0 ){ + assert( pParse->rc!=SQLITE_OK ); + sqlite3Fts5ParseNearsetFree(pNear); + sqlite3Fts5ParsePhraseFree(pPhrase); + }else{ + pRet->apPhrase[pRet->nPhrase++] = pPhrase; + } + return pRet; +} + +typedef struct TokenCtx TokenCtx; +struct TokenCtx { + Fts5ExprPhrase *pPhrase; + int rc; +}; + +/* +** Callback for tokenizing terms used by ParseTerm(). +*/ +static int fts5ParseTokenize( + void *pContext, /* Pointer to Fts5InsertCtx object */ + int tflags, /* Mask of FTS5_TOKEN_* flags */ + const char *pToken, /* Buffer containing token */ + int nToken, /* Size of token in bytes */ + int iUnused1, /* Start offset of token */ + int iUnused2 /* End offset of token */ +){ + int rc = SQLITE_OK; + const int SZALLOC = 8; + TokenCtx *pCtx = (TokenCtx*)pContext; + Fts5ExprPhrase *pPhrase = pCtx->pPhrase; + + UNUSED_PARAM2(iUnused1, iUnused2); + + /* If an error has already occurred, this is a no-op */ + if( pCtx->rc!=SQLITE_OK ) return pCtx->rc; + + assert( pPhrase==0 || pPhrase->nTerm>0 ); + if( pPhrase && (tflags & FTS5_TOKEN_COLOCATED) ){ + Fts5ExprTerm *pSyn; + int nByte = sizeof(Fts5ExprTerm) + sizeof(Fts5Buffer) + nToken+1; + pSyn = (Fts5ExprTerm*)sqlite3_malloc(nByte); + if( pSyn==0 ){ + rc = SQLITE_NOMEM; + }else{ + memset(pSyn, 0, nByte); + pSyn->zTerm = ((char*)pSyn) + sizeof(Fts5ExprTerm) + sizeof(Fts5Buffer); + memcpy(pSyn->zTerm, pToken, nToken); + pSyn->pSynonym = pPhrase->aTerm[pPhrase->nTerm-1].pSynonym; + pPhrase->aTerm[pPhrase->nTerm-1].pSynonym = pSyn; + } + }else{ + Fts5ExprTerm *pTerm; + if( pPhrase==0 || (pPhrase->nTerm % SZALLOC)==0 ){ + Fts5ExprPhrase *pNew; + int nNew = SZALLOC + (pPhrase ? 
pPhrase->nTerm : 0); + + pNew = (Fts5ExprPhrase*)sqlite3_realloc(pPhrase, + sizeof(Fts5ExprPhrase) + sizeof(Fts5ExprTerm) * nNew + ); + if( pNew==0 ){ + rc = SQLITE_NOMEM; + }else{ + if( pPhrase==0 ) memset(pNew, 0, sizeof(Fts5ExprPhrase)); + pCtx->pPhrase = pPhrase = pNew; + pNew->nTerm = nNew - SZALLOC; + } + } + + if( rc==SQLITE_OK ){ + pTerm = &pPhrase->aTerm[pPhrase->nTerm++]; + memset(pTerm, 0, sizeof(Fts5ExprTerm)); + pTerm->zTerm = sqlite3Fts5Strndup(&rc, pToken, nToken); + } + } + + pCtx->rc = rc; + return rc; +} + + +/* +** Free the phrase object passed as the only argument. +*/ +static void sqlite3Fts5ParsePhraseFree(Fts5ExprPhrase *pPhrase){ + fts5ExprPhraseFree(pPhrase); +} + +/* +** Free the phrase object passed as the second argument. +*/ +static void sqlite3Fts5ParseNearsetFree(Fts5ExprNearset *pNear){ + if( pNear ){ + int i; + for(i=0; i<pNear->nPhrase; i++){ + fts5ExprPhraseFree(pNear->apPhrase[i]); + } + sqlite3_free(pNear->pColset); + sqlite3_free(pNear); + } +} + +static void sqlite3Fts5ParseFinished(Fts5Parse *pParse, Fts5ExprNode *p){ + assert( pParse->pExpr==0 ); + pParse->pExpr = p; +} + +/* +** This function is called by the parser to process a string token. The +** string may or may not be quoted. In any case it is tokenized and a +** phrase object consisting of all tokens returned. +*/ +static Fts5ExprPhrase *sqlite3Fts5ParseTerm( + Fts5Parse *pParse, /* Parse context */ + Fts5ExprPhrase *pAppend, /* Phrase to append to */ + Fts5Token *pToken, /* String to tokenize */ + int bPrefix /* True if there is a trailing "*" */ +){ + Fts5Config *pConfig = pParse->pConfig; + TokenCtx sCtx; /* Context object passed to callback */ + int rc; /* Tokenize return code */ + char *z = 0; + + memset(&sCtx, 0, sizeof(TokenCtx)); + sCtx.pPhrase = pAppend; + + rc = fts5ParseStringFromToken(pToken, &z); + if( rc==SQLITE_OK ){ + int flags = FTS5_TOKENIZE_QUERY | (bPrefix ? FTS5_TOKENIZE_QUERY : 0); + int n; + sqlite3Fts5Dequote(z); + n = (int)strlen(z); + rc = sqlite3Fts5Tokenize(pConfig, flags, z, n, &sCtx, fts5ParseTokenize); + } + sqlite3_free(z); + if( rc || (rc = sCtx.rc) ){ + pParse->rc = rc; + fts5ExprPhraseFree(sCtx.pPhrase); + sCtx.pPhrase = 0; + }else if( sCtx.pPhrase ){ + + if( pAppend==0 ){ + if( (pParse->nPhrase % 8)==0 ){ + int nByte = sizeof(Fts5ExprPhrase*) * (pParse->nPhrase + 8); + Fts5ExprPhrase **apNew; + apNew = (Fts5ExprPhrase**)sqlite3_realloc(pParse->apPhrase, nByte); + if( apNew==0 ){ + pParse->rc = SQLITE_NOMEM; + fts5ExprPhraseFree(sCtx.pPhrase); + return 0; + } + pParse->apPhrase = apNew; + } + pParse->nPhrase++; + } + + pParse->apPhrase[pParse->nPhrase-1] = sCtx.pPhrase; + assert( sCtx.pPhrase->nTerm>0 ); + sCtx.pPhrase->aTerm[sCtx.pPhrase->nTerm-1].bPrefix = bPrefix; + } + + return sCtx.pPhrase; +} + +/* +** Create a new FTS5 expression by cloning phrase iPhrase of the +** expression passed as the second argument. 
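+**
+** (The clone is a minimal expression: a single FTS5_TERM or FTS5_STRING
+** root node whose nearset holds one copy of the phrase, rebuilt by
+** feeding each term, and any synonyms, back through fts5ParseTokenize().)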
+*/ +static int sqlite3Fts5ExprClonePhrase( + Fts5Expr *pExpr, + int iPhrase, + Fts5Expr **ppNew +){ + int rc = SQLITE_OK; /* Return code */ + Fts5ExprPhrase *pOrig; /* The phrase extracted from pExpr */ + int i; /* Used to iterate through phrase terms */ + Fts5Expr *pNew = 0; /* Expression to return via *ppNew */ + TokenCtx sCtx = {0,0}; /* Context object for fts5ParseTokenize */ + + pOrig = pExpr->apExprPhrase[iPhrase]; + pNew = (Fts5Expr*)sqlite3Fts5MallocZero(&rc, sizeof(Fts5Expr)); + if( rc==SQLITE_OK ){ + pNew->apExprPhrase = (Fts5ExprPhrase**)sqlite3Fts5MallocZero(&rc, + sizeof(Fts5ExprPhrase*)); + } + if( rc==SQLITE_OK ){ + pNew->pRoot = (Fts5ExprNode*)sqlite3Fts5MallocZero(&rc, + sizeof(Fts5ExprNode)); + } + if( rc==SQLITE_OK ){ + pNew->pRoot->pNear = (Fts5ExprNearset*)sqlite3Fts5MallocZero(&rc, + sizeof(Fts5ExprNearset) + sizeof(Fts5ExprPhrase*)); + } + + for(i=0; rc==SQLITE_OK && i<pOrig->nTerm; i++){ + int tflags = 0; + Fts5ExprTerm *p; + for(p=&pOrig->aTerm[i]; p && rc==SQLITE_OK; p=p->pSynonym){ + const char *zTerm = p->zTerm; + rc = fts5ParseTokenize((void*)&sCtx, tflags, zTerm, (int)strlen(zTerm), + 0, 0); + tflags = FTS5_TOKEN_COLOCATED; + } + if( rc==SQLITE_OK ){ + sCtx.pPhrase->aTerm[i].bPrefix = pOrig->aTerm[i].bPrefix; + } + } + + if( rc==SQLITE_OK ){ + /* All the allocations succeeded. Put the expression object together. */ + pNew->pIndex = pExpr->pIndex; + pNew->pConfig = pExpr->pConfig; + pNew->nPhrase = 1; + pNew->apExprPhrase[0] = sCtx.pPhrase; + pNew->pRoot->pNear->apPhrase[0] = sCtx.pPhrase; + pNew->pRoot->pNear->nPhrase = 1; + sCtx.pPhrase->pNode = pNew->pRoot; + + if( pOrig->nTerm==1 && pOrig->aTerm[0].pSynonym==0 ){ + pNew->pRoot->eType = FTS5_TERM; + pNew->pRoot->xNext = fts5ExprNodeNext_TERM; + }else{ + pNew->pRoot->eType = FTS5_STRING; + pNew->pRoot->xNext = fts5ExprNodeNext_STRING; + } + }else{ + sqlite3Fts5ExprFree(pNew); + fts5ExprPhraseFree(sCtx.pPhrase); + pNew = 0; + } + + *ppNew = pNew; + return rc; +} + + +/* +** Token pTok has appeared in a MATCH expression where the NEAR operator +** is expected. If token pTok does not contain "NEAR", store an error +** in the pParse object. +*/ +static void sqlite3Fts5ParseNear(Fts5Parse *pParse, Fts5Token *pTok){ + if( pTok->n!=4 || memcmp("NEAR", pTok->p, 4) ){ + sqlite3Fts5ParseError( + pParse, "fts5: syntax error near \"%.*s\"", pTok->n, pTok->p + ); + } +} + +static void sqlite3Fts5ParseSetDistance( + Fts5Parse *pParse, + Fts5ExprNearset *pNear, + Fts5Token *p +){ + int nNear = 0; + int i; + if( p->n ){ + for(i=0; i<p->n; i++){ + char c = (char)p->p[i]; + if( c<'0' || c>'9' ){ + sqlite3Fts5ParseError( + pParse, "expected integer, got \"%.*s\"", p->n, p->p + ); + return; + } + nNear = nNear * 10 + (p->p[i] - '0'); + } + }else{ + nNear = FTS5_DEFAULT_NEARDIST; + } + pNear->nNear = nNear; +} + +/* +** The second argument passed to this function may be NULL, or it may be +** an existing Fts5Colset object. This function returns a pointer to +** a new colset object containing the contents of (p) with new value column +** number iCol appended. +** +** If an OOM error occurs, store an error code in pParse and return NULL. +** The old colset object (if any) is not freed in this case. +*/ +static Fts5Colset *fts5ParseColset( + Fts5Parse *pParse, /* Store SQLITE_NOMEM here if required */ + Fts5Colset *p, /* Existing colset object */ + int iCol /* New column to add to colset object */ +){ + int nCol = p ? p->nCol : 0; /* Num. 
columns already in colset object */ + Fts5Colset *pNew; /* New colset object to return */ + + assert( pParse->rc==SQLITE_OK ); + assert( iCol>=0 && iCol<pParse->pConfig->nCol ); + + pNew = sqlite3_realloc(p, sizeof(Fts5Colset) + sizeof(int)*nCol); + if( pNew==0 ){ + pParse->rc = SQLITE_NOMEM; + }else{ + int *aiCol = pNew->aiCol; + int i, j; + for(i=0; i<nCol; i++){ + if( aiCol[i]==iCol ) return pNew; + if( aiCol[i]>iCol ) break; + } + for(j=nCol; j>i; j--){ + aiCol[j] = aiCol[j-1]; + } + aiCol[i] = iCol; + pNew->nCol = nCol+1; + +#ifndef NDEBUG + /* Check that the array is in order and contains no duplicate entries. */ + for(i=1; i<pNew->nCol; i++) assert( pNew->aiCol[i]>pNew->aiCol[i-1] ); +#endif + } + + return pNew; +} + +static Fts5Colset *sqlite3Fts5ParseColset( + Fts5Parse *pParse, /* Store SQLITE_NOMEM here if required */ + Fts5Colset *pColset, /* Existing colset object */ + Fts5Token *p +){ + Fts5Colset *pRet = 0; + int iCol; + char *z; /* Dequoted copy of token p */ + + z = sqlite3Fts5Strndup(&pParse->rc, p->p, p->n); + if( pParse->rc==SQLITE_OK ){ + Fts5Config *pConfig = pParse->pConfig; + sqlite3Fts5Dequote(z); + for(iCol=0; iCol<pConfig->nCol; iCol++){ + if( 0==sqlite3_stricmp(pConfig->azCol[iCol], z) ) break; + } + if( iCol==pConfig->nCol ){ + sqlite3Fts5ParseError(pParse, "no such column: %s", z); + }else{ + pRet = fts5ParseColset(pParse, pColset, iCol); + } + sqlite3_free(z); + } + + if( pRet==0 ){ + assert( pParse->rc!=SQLITE_OK ); + sqlite3_free(pColset); + } + + return pRet; +} + +static void sqlite3Fts5ParseSetColset( + Fts5Parse *pParse, + Fts5ExprNearset *pNear, + Fts5Colset *pColset +){ + if( pParse->pConfig->eDetail==FTS5_DETAIL_NONE ){ + pParse->rc = SQLITE_ERROR; + pParse->zErr = sqlite3_mprintf( + "fts5: column queries are not supported (detail=none)" + ); + sqlite3_free(pColset); + return; + } + + if( pNear ){ + pNear->pColset = pColset; + }else{ + sqlite3_free(pColset); + } +} + +static void fts5ExprAssignXNext(Fts5ExprNode *pNode){ + switch( pNode->eType ){ + case FTS5_STRING: { + Fts5ExprNearset *pNear = pNode->pNear; + if( pNear->nPhrase==1 && pNear->apPhrase[0]->nTerm==1 + && pNear->apPhrase[0]->aTerm[0].pSynonym==0 + ){ + pNode->eType = FTS5_TERM; + pNode->xNext = fts5ExprNodeNext_TERM; + }else{ + pNode->xNext = fts5ExprNodeNext_STRING; + } + break; + }; + + case FTS5_OR: { + pNode->xNext = fts5ExprNodeNext_OR; + break; + }; + + case FTS5_AND: { + pNode->xNext = fts5ExprNodeNext_AND; + break; + }; + + default: assert( pNode->eType==FTS5_NOT ); { + pNode->xNext = fts5ExprNodeNext_NOT; + break; + }; + } +} + +static void fts5ExprAddChildren(Fts5ExprNode *p, Fts5ExprNode *pSub){ + if( p->eType!=FTS5_NOT && pSub->eType==p->eType ){ + int nByte = sizeof(Fts5ExprNode*) * pSub->nChild; + memcpy(&p->apChild[p->nChild], pSub->apChild, nByte); + p->nChild += pSub->nChild; + sqlite3_free(pSub); + }else{ + p->apChild[p->nChild++] = pSub; + } +} + +/* +** Allocate and return a new expression object. If anything goes wrong (i.e. +** OOM error), leave an error code in pParse and return NULL. 
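+**
+** For example, a query string such as "(one AND two) OR three" results in
+** an FTS5_OR root node with two children: an FTS5_AND node (with an
+** FTS5_TERM child for each of "one" and "two") and the node for "three".
+** Because fts5ExprAddChildren() folds a child of the same type into its
+** parent, "one AND two AND three" produces a single FTS5_AND node with
+** three children rather than a nested pair of AND nodes.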
+*/ +static Fts5ExprNode *sqlite3Fts5ParseNode( + Fts5Parse *pParse, /* Parse context */ + int eType, /* FTS5_STRING, AND, OR or NOT */ + Fts5ExprNode *pLeft, /* Left hand child expression */ + Fts5ExprNode *pRight, /* Right hand child expression */ + Fts5ExprNearset *pNear /* For STRING expressions, the near cluster */ +){ + Fts5ExprNode *pRet = 0; + + if( pParse->rc==SQLITE_OK ){ + int nChild = 0; /* Number of children of returned node */ + int nByte; /* Bytes of space to allocate for this node */ + + assert( (eType!=FTS5_STRING && !pNear) + || (eType==FTS5_STRING && !pLeft && !pRight) + ); + if( eType==FTS5_STRING && pNear==0 ) return 0; + if( eType!=FTS5_STRING && pLeft==0 ) return pRight; + if( eType!=FTS5_STRING && pRight==0 ) return pLeft; + + if( eType==FTS5_NOT ){ + nChild = 2; + }else if( eType==FTS5_AND || eType==FTS5_OR ){ + nChild = 2; + if( pLeft->eType==eType ) nChild += pLeft->nChild-1; + if( pRight->eType==eType ) nChild += pRight->nChild-1; + } + + nByte = sizeof(Fts5ExprNode) + sizeof(Fts5ExprNode*)*(nChild-1); + pRet = (Fts5ExprNode*)sqlite3Fts5MallocZero(&pParse->rc, nByte); + + if( pRet ){ + pRet->eType = eType; + pRet->pNear = pNear; + fts5ExprAssignXNext(pRet); + if( eType==FTS5_STRING ){ + int iPhrase; + for(iPhrase=0; iPhrase<pNear->nPhrase; iPhrase++){ + pNear->apPhrase[iPhrase]->pNode = pRet; + } + + if( pParse->pConfig->eDetail!=FTS5_DETAIL_FULL + && (pNear->nPhrase!=1 || pNear->apPhrase[0]->nTerm!=1) + ){ + assert( pParse->rc==SQLITE_OK ); + pParse->rc = SQLITE_ERROR; + assert( pParse->zErr==0 ); + pParse->zErr = sqlite3_mprintf( + "fts5: %s queries are not supported (detail!=full)", + pNear->nPhrase==1 ? "phrase": "NEAR" + ); + sqlite3_free(pRet); + pRet = 0; + } + + }else{ + fts5ExprAddChildren(pRet, pLeft); + fts5ExprAddChildren(pRet, pRight); + } + } + } + + if( pRet==0 ){ + assert( pParse->rc!=SQLITE_OK ); + sqlite3Fts5ParseNodeFree(pLeft); + sqlite3Fts5ParseNodeFree(pRight); + sqlite3Fts5ParseNearsetFree(pNear); + } + return pRet; +} + +static char *fts5ExprTermPrint(Fts5ExprTerm *pTerm){ + int nByte = 0; + Fts5ExprTerm *p; + char *zQuoted; + + /* Determine the maximum amount of space required. */ + for(p=pTerm; p; p=p->pSynonym){ + nByte += (int)strlen(pTerm->zTerm) * 2 + 3 + 2; + } + zQuoted = sqlite3_malloc(nByte); + + if( zQuoted ){ + int i = 0; + for(p=pTerm; p; p=p->pSynonym){ + char *zIn = p->zTerm; + zQuoted[i++] = '"'; + while( *zIn ){ + if( *zIn=='"' ) zQuoted[i++] = '"'; + zQuoted[i++] = *zIn++; + } + zQuoted[i++] = '"'; + if( p->pSynonym ) zQuoted[i++] = '|'; + } + if( pTerm->bPrefix ){ + zQuoted[i++] = ' '; + zQuoted[i++] = '*'; + } + zQuoted[i++] = '\0'; + } + return zQuoted; +} + +static char *fts5PrintfAppend(char *zApp, const char *zFmt, ...){ + char *zNew; + va_list ap; + va_start(ap, zFmt); + zNew = sqlite3_vmprintf(zFmt, ap); + va_end(ap); + if( zApp && zNew ){ + char *zNew2 = sqlite3_mprintf("%s%s", zApp, zNew); + sqlite3_free(zNew); + zNew = zNew2; + } + sqlite3_free(zApp); + return zNew; +} + +/* +** Compose a tcl-readable representation of expression pExpr. Return a +** pointer to a buffer containing that representation. It is the +** responsibility of the caller to at some point free the buffer using +** sqlite3_free(). 
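+**
+** For example, using the default nearset command name, the expression
+** compiled from the query string "one AND two" is rendered roughly as:
+**
+**   AND [nearset -- {one}] [nearset -- {two}]
+**
+** A column filter on a phrase adds a "-col <column number>" option to the
+** corresponding nearset invocation.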
+*/ +static char *fts5ExprPrintTcl( + Fts5Config *pConfig, + const char *zNearsetCmd, + Fts5ExprNode *pExpr +){ + char *zRet = 0; + if( pExpr->eType==FTS5_STRING || pExpr->eType==FTS5_TERM ){ + Fts5ExprNearset *pNear = pExpr->pNear; + int i; + int iTerm; + + zRet = fts5PrintfAppend(zRet, "%s ", zNearsetCmd); + if( zRet==0 ) return 0; + if( pNear->pColset ){ + int *aiCol = pNear->pColset->aiCol; + int nCol = pNear->pColset->nCol; + if( nCol==1 ){ + zRet = fts5PrintfAppend(zRet, "-col %d ", aiCol[0]); + }else{ + zRet = fts5PrintfAppend(zRet, "-col {%d", aiCol[0]); + for(i=1; i<pNear->pColset->nCol; i++){ + zRet = fts5PrintfAppend(zRet, " %d", aiCol[i]); + } + zRet = fts5PrintfAppend(zRet, "} "); + } + if( zRet==0 ) return 0; + } + + if( pNear->nPhrase>1 ){ + zRet = fts5PrintfAppend(zRet, "-near %d ", pNear->nNear); + if( zRet==0 ) return 0; + } + + zRet = fts5PrintfAppend(zRet, "--"); + if( zRet==0 ) return 0; + + for(i=0; i<pNear->nPhrase; i++){ + Fts5ExprPhrase *pPhrase = pNear->apPhrase[i]; + + zRet = fts5PrintfAppend(zRet, " {"); + for(iTerm=0; zRet && iTerm<pPhrase->nTerm; iTerm++){ + char *zTerm = pPhrase->aTerm[iTerm].zTerm; + zRet = fts5PrintfAppend(zRet, "%s%s", iTerm==0?"":" ", zTerm); + if( pPhrase->aTerm[iTerm].bPrefix ){ + zRet = fts5PrintfAppend(zRet, "*"); + } + } + + if( zRet ) zRet = fts5PrintfAppend(zRet, "}"); + if( zRet==0 ) return 0; + } + + }else{ + char const *zOp = 0; + int i; + switch( pExpr->eType ){ + case FTS5_AND: zOp = "AND"; break; + case FTS5_NOT: zOp = "NOT"; break; + default: + assert( pExpr->eType==FTS5_OR ); + zOp = "OR"; + break; + } + + zRet = sqlite3_mprintf("%s", zOp); + for(i=0; zRet && i<pExpr->nChild; i++){ + char *z = fts5ExprPrintTcl(pConfig, zNearsetCmd, pExpr->apChild[i]); + if( !z ){ + sqlite3_free(zRet); + zRet = 0; + }else{ + zRet = fts5PrintfAppend(zRet, " [%z]", z); + } + } + } + + return zRet; +} + +static char *fts5ExprPrint(Fts5Config *pConfig, Fts5ExprNode *pExpr){ + char *zRet = 0; + if( pExpr->eType==FTS5_STRING || pExpr->eType==FTS5_TERM ){ + Fts5ExprNearset *pNear = pExpr->pNear; + int i; + int iTerm; + + if( pNear->pColset ){ + int iCol = pNear->pColset->aiCol[0]; + zRet = fts5PrintfAppend(zRet, "%s : ", pConfig->azCol[iCol]); + if( zRet==0 ) return 0; + } + + if( pNear->nPhrase>1 ){ + zRet = fts5PrintfAppend(zRet, "NEAR("); + if( zRet==0 ) return 0; + } + + for(i=0; i<pNear->nPhrase; i++){ + Fts5ExprPhrase *pPhrase = pNear->apPhrase[i]; + if( i!=0 ){ + zRet = fts5PrintfAppend(zRet, " "); + if( zRet==0 ) return 0; + } + for(iTerm=0; iTerm<pPhrase->nTerm; iTerm++){ + char *zTerm = fts5ExprTermPrint(&pPhrase->aTerm[iTerm]); + if( zTerm ){ + zRet = fts5PrintfAppend(zRet, "%s%s", iTerm==0?"":" + ", zTerm); + sqlite3_free(zTerm); + } + if( zTerm==0 || zRet==0 ){ + sqlite3_free(zRet); + return 0; + } + } + } + + if( pNear->nPhrase>1 ){ + zRet = fts5PrintfAppend(zRet, ", %d)", pNear->nNear); + if( zRet==0 ) return 0; + } + + }else{ + char const *zOp = 0; + int i; + + switch( pExpr->eType ){ + case FTS5_AND: zOp = " AND "; break; + case FTS5_NOT: zOp = " NOT "; break; + default: + assert( pExpr->eType==FTS5_OR ); + zOp = " OR "; + break; + } + + for(i=0; i<pExpr->nChild; i++){ + char *z = fts5ExprPrint(pConfig, pExpr->apChild[i]); + if( z==0 ){ + sqlite3_free(zRet); + zRet = 0; + }else{ + int e = pExpr->apChild[i]->eType; + int b = (e!=FTS5_STRING && e!=FTS5_TERM); + zRet = fts5PrintfAppend(zRet, "%s%s%z%s", + (i==0 ? 
"" : zOp), + (b?"(":""), z, (b?")":"") + ); + } + if( zRet==0 ) break; + } + } + + return zRet; +} + +/* +** The implementation of user-defined scalar functions fts5_expr() (bTcl==0) +** and fts5_expr_tcl() (bTcl!=0). +*/ +static void fts5ExprFunction( + sqlite3_context *pCtx, /* Function call context */ + int nArg, /* Number of args */ + sqlite3_value **apVal, /* Function arguments */ + int bTcl +){ + Fts5Global *pGlobal = (Fts5Global*)sqlite3_user_data(pCtx); + sqlite3 *db = sqlite3_context_db_handle(pCtx); + const char *zExpr = 0; + char *zErr = 0; + Fts5Expr *pExpr = 0; + int rc; + int i; + + const char **azConfig; /* Array of arguments for Fts5Config */ + const char *zNearsetCmd = "nearset"; + int nConfig; /* Size of azConfig[] */ + Fts5Config *pConfig = 0; + int iArg = 1; + + if( nArg<1 ){ + zErr = sqlite3_mprintf("wrong number of arguments to function %s", + bTcl ? "fts5_expr_tcl" : "fts5_expr" + ); + sqlite3_result_error(pCtx, zErr, -1); + sqlite3_free(zErr); + return; + } + + if( bTcl && nArg>1 ){ + zNearsetCmd = (const char*)sqlite3_value_text(apVal[1]); + iArg = 2; + } + + nConfig = 3 + (nArg-iArg); + azConfig = (const char**)sqlite3_malloc(sizeof(char*) * nConfig); + if( azConfig==0 ){ + sqlite3_result_error_nomem(pCtx); + return; + } + azConfig[0] = 0; + azConfig[1] = "main"; + azConfig[2] = "tbl"; + for(i=3; iArg<nArg; iArg++){ + azConfig[i++] = (const char*)sqlite3_value_text(apVal[iArg]); + } + + zExpr = (const char*)sqlite3_value_text(apVal[0]); + + rc = sqlite3Fts5ConfigParse(pGlobal, db, nConfig, azConfig, &pConfig, &zErr); + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5ExprNew(pConfig, zExpr, &pExpr, &zErr); + } + if( rc==SQLITE_OK ){ + char *zText; + if( pExpr->pRoot->xNext==0 ){ + zText = sqlite3_mprintf(""); + }else if( bTcl ){ + zText = fts5ExprPrintTcl(pConfig, zNearsetCmd, pExpr->pRoot); + }else{ + zText = fts5ExprPrint(pConfig, pExpr->pRoot); + } + if( zText==0 ){ + rc = SQLITE_NOMEM; + }else{ + sqlite3_result_text(pCtx, zText, -1, SQLITE_TRANSIENT); + sqlite3_free(zText); + } + } + + if( rc!=SQLITE_OK ){ + if( zErr ){ + sqlite3_result_error(pCtx, zErr, -1); + sqlite3_free(zErr); + }else{ + sqlite3_result_error_code(pCtx, rc); + } + } + sqlite3_free((void *)azConfig); + sqlite3Fts5ConfigFree(pConfig); + sqlite3Fts5ExprFree(pExpr); +} + +static void fts5ExprFunctionHr( + sqlite3_context *pCtx, /* Function call context */ + int nArg, /* Number of args */ + sqlite3_value **apVal /* Function arguments */ +){ + fts5ExprFunction(pCtx, nArg, apVal, 0); +} +static void fts5ExprFunctionTcl( + sqlite3_context *pCtx, /* Function call context */ + int nArg, /* Number of args */ + sqlite3_value **apVal /* Function arguments */ +){ + fts5ExprFunction(pCtx, nArg, apVal, 1); +} + +/* +** The implementation of an SQLite user-defined-function that accepts a +** single integer as an argument. If the integer is an alpha-numeric +** unicode code point, 1 is returned. Otherwise 0. 
+*/ +static void fts5ExprIsAlnum( + sqlite3_context *pCtx, /* Function call context */ + int nArg, /* Number of args */ + sqlite3_value **apVal /* Function arguments */ +){ + int iCode; + if( nArg!=1 ){ + sqlite3_result_error(pCtx, + "wrong number of arguments to function fts5_isalnum", -1 + ); + return; + } + iCode = sqlite3_value_int(apVal[0]); + sqlite3_result_int(pCtx, sqlite3Fts5UnicodeIsalnum(iCode)); +} + +static void fts5ExprFold( + sqlite3_context *pCtx, /* Function call context */ + int nArg, /* Number of args */ + sqlite3_value **apVal /* Function arguments */ +){ + if( nArg!=1 && nArg!=2 ){ + sqlite3_result_error(pCtx, + "wrong number of arguments to function fts5_fold", -1 + ); + }else{ + int iCode; + int bRemoveDiacritics = 0; + iCode = sqlite3_value_int(apVal[0]); + if( nArg==2 ) bRemoveDiacritics = sqlite3_value_int(apVal[1]); + sqlite3_result_int(pCtx, sqlite3Fts5UnicodeFold(iCode, bRemoveDiacritics)); + } +} + +/* +** This is called during initialization to register the fts5_expr() scalar +** UDF with the SQLite handle passed as the only argument. +*/ +static int sqlite3Fts5ExprInit(Fts5Global *pGlobal, sqlite3 *db){ + struct Fts5ExprFunc { + const char *z; + void (*x)(sqlite3_context*,int,sqlite3_value**); + } aFunc[] = { + { "fts5_expr", fts5ExprFunctionHr }, + { "fts5_expr_tcl", fts5ExprFunctionTcl }, + { "fts5_isalnum", fts5ExprIsAlnum }, + { "fts5_fold", fts5ExprFold }, + }; + int i; + int rc = SQLITE_OK; + void *pCtx = (void*)pGlobal; + + for(i=0; rc==SQLITE_OK && i<ArraySize(aFunc); i++){ + struct Fts5ExprFunc *p = &aFunc[i]; + rc = sqlite3_create_function(db, p->z, -1, SQLITE_UTF8, pCtx, p->x, 0, 0); + } + + /* Avoid a warning indicating that sqlite3Fts5ParserTrace() is unused */ +#ifndef NDEBUG + (void)sqlite3Fts5ParserTrace; +#endif + + return rc; +} + +/* +** Return the number of phrases in expression pExpr. +*/ +static int sqlite3Fts5ExprPhraseCount(Fts5Expr *pExpr){ + return (pExpr ? pExpr->nPhrase : 0); +} + +/* +** Return the number of terms in the iPhrase'th phrase in pExpr. +*/ +static int sqlite3Fts5ExprPhraseSize(Fts5Expr *pExpr, int iPhrase){ + if( iPhrase<0 || iPhrase>=pExpr->nPhrase ) return 0; + return pExpr->apExprPhrase[iPhrase]->nTerm; +} + +/* +** This function is used to access the current position list for phrase +** iPhrase. 
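+**
+** A minimal calling sketch, assuming pExpr is currently positioned on a
+** row:
+**
+**   const u8 *a = 0;
+**   int n = sqlite3Fts5ExprPoslist(pExpr, 0, &a);
+**
+** On return, a/n describe the position list, if any, for phrase 0 on the
+** current row. The buffer is owned by the expression object and should
+** not be used after the expression is advanced to another row.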
+*/ +static int sqlite3Fts5ExprPoslist(Fts5Expr *pExpr, int iPhrase, const u8 **pa){ + int nRet; + Fts5ExprPhrase *pPhrase = pExpr->apExprPhrase[iPhrase]; + Fts5ExprNode *pNode = pPhrase->pNode; + if( pNode->bEof==0 && pNode->iRowid==pExpr->pRoot->iRowid ){ + *pa = pPhrase->poslist.p; + nRet = pPhrase->poslist.n; + }else{ + *pa = 0; + nRet = 0; + } + return nRet; +} + +struct Fts5PoslistPopulator { + Fts5PoslistWriter writer; + int bOk; /* True if ok to populate */ + int bMiss; +}; + +static Fts5PoslistPopulator *sqlite3Fts5ExprClearPoslists(Fts5Expr *pExpr, int bLive){ + Fts5PoslistPopulator *pRet; + pRet = sqlite3_malloc(sizeof(Fts5PoslistPopulator)*pExpr->nPhrase); + if( pRet ){ + int i; + memset(pRet, 0, sizeof(Fts5PoslistPopulator)*pExpr->nPhrase); + for(i=0; i<pExpr->nPhrase; i++){ + Fts5Buffer *pBuf = &pExpr->apExprPhrase[i]->poslist; + Fts5ExprNode *pNode = pExpr->apExprPhrase[i]->pNode; + assert( pExpr->apExprPhrase[i]->nTerm==1 ); + if( bLive && + (pBuf->n==0 || pNode->iRowid!=pExpr->pRoot->iRowid || pNode->bEof) + ){ + pRet[i].bMiss = 1; + }else{ + pBuf->n = 0; + } + } + } + return pRet; +} + +struct Fts5ExprCtx { + Fts5Expr *pExpr; + Fts5PoslistPopulator *aPopulator; + i64 iOff; +}; +typedef struct Fts5ExprCtx Fts5ExprCtx; + +/* +** TODO: Make this more efficient! +*/ +static int fts5ExprColsetTest(Fts5Colset *pColset, int iCol){ + int i; + for(i=0; i<pColset->nCol; i++){ + if( pColset->aiCol[i]==iCol ) return 1; + } + return 0; +} + +static int fts5ExprPopulatePoslistsCb( + void *pCtx, /* Copy of 2nd argument to xTokenize() */ + int tflags, /* Mask of FTS5_TOKEN_* flags */ + const char *pToken, /* Pointer to buffer containing token */ + int nToken, /* Size of token in bytes */ + int iUnused1, /* Byte offset of token within input text */ + int iUnused2 /* Byte offset of end of token within input text */ +){ + Fts5ExprCtx *p = (Fts5ExprCtx*)pCtx; + Fts5Expr *pExpr = p->pExpr; + int i; + + UNUSED_PARAM2(iUnused1, iUnused2); + + if( (tflags & FTS5_TOKEN_COLOCATED)==0 ) p->iOff++; + for(i=0; i<pExpr->nPhrase; i++){ + Fts5ExprTerm *pTerm; + if( p->aPopulator[i].bOk==0 ) continue; + for(pTerm=&pExpr->apExprPhrase[i]->aTerm[0]; pTerm; pTerm=pTerm->pSynonym){ + int nTerm = strlen(pTerm->zTerm); + if( (nTerm==nToken || (nTerm<nToken && pTerm->bPrefix)) + && memcmp(pTerm->zTerm, pToken, nTerm)==0 + ){ + int rc = sqlite3Fts5PoslistWriterAppend( + &pExpr->apExprPhrase[i]->poslist, &p->aPopulator[i].writer, p->iOff + ); + if( rc ) return rc; + break; + } + } + } + return SQLITE_OK; +} + +static int sqlite3Fts5ExprPopulatePoslists( + Fts5Config *pConfig, + Fts5Expr *pExpr, + Fts5PoslistPopulator *aPopulator, + int iCol, + const char *z, int n +){ + int i; + Fts5ExprCtx sCtx; + sCtx.pExpr = pExpr; + sCtx.aPopulator = aPopulator; + sCtx.iOff = (((i64)iCol) << 32) - 1; + + for(i=0; i<pExpr->nPhrase; i++){ + Fts5ExprNode *pNode = pExpr->apExprPhrase[i]->pNode; + Fts5Colset *pColset = pNode->pNear->pColset; + if( (pColset && 0==fts5ExprColsetTest(pColset, iCol)) + || aPopulator[i].bMiss + ){ + aPopulator[i].bOk = 0; + }else{ + aPopulator[i].bOk = 1; + } + } + + return sqlite3Fts5Tokenize(pConfig, + FTS5_TOKENIZE_DOCUMENT, z, n, (void*)&sCtx, fts5ExprPopulatePoslistsCb + ); +} + +static void fts5ExprClearPoslists(Fts5ExprNode *pNode){ + if( pNode->eType==FTS5_TERM || pNode->eType==FTS5_STRING ){ + pNode->pNear->apPhrase[0]->poslist.n = 0; + }else{ + int i; + for(i=0; i<pNode->nChild; i++){ + fts5ExprClearPoslists(pNode->apChild[i]); + } + } +} + +static int fts5ExprCheckPoslists(Fts5ExprNode 
*pNode, i64 iRowid){ + pNode->iRowid = iRowid; + pNode->bEof = 0; + switch( pNode->eType ){ + case FTS5_TERM: + case FTS5_STRING: + return (pNode->pNear->apPhrase[0]->poslist.n>0); + + case FTS5_AND: { + int i; + for(i=0; i<pNode->nChild; i++){ + if( fts5ExprCheckPoslists(pNode->apChild[i], iRowid)==0 ){ + fts5ExprClearPoslists(pNode); + return 0; + } + } + break; + } + + case FTS5_OR: { + int i; + int bRet = 0; + for(i=0; i<pNode->nChild; i++){ + if( fts5ExprCheckPoslists(pNode->apChild[i], iRowid) ){ + bRet = 1; + } + } + return bRet; + } + + default: { + assert( pNode->eType==FTS5_NOT ); + if( 0==fts5ExprCheckPoslists(pNode->apChild[0], iRowid) + || 0!=fts5ExprCheckPoslists(pNode->apChild[1], iRowid) + ){ + fts5ExprClearPoslists(pNode); + return 0; + } + break; + } + } + return 1; +} + +static void sqlite3Fts5ExprCheckPoslists(Fts5Expr *pExpr, i64 iRowid){ + fts5ExprCheckPoslists(pExpr->pRoot, iRowid); +} + +static void fts5ExprClearEof(Fts5ExprNode *pNode){ + int i; + for(i=0; i<pNode->nChild; i++){ + fts5ExprClearEof(pNode->apChild[i]); + } + pNode->bEof = 0; +} +static void sqlite3Fts5ExprClearEof(Fts5Expr *pExpr){ + fts5ExprClearEof(pExpr->pRoot); +} + +/* +** This function is only called for detail=columns tables. +*/ +static int sqlite3Fts5ExprPhraseCollist( + Fts5Expr *pExpr, + int iPhrase, + const u8 **ppCollist, + int *pnCollist +){ + Fts5ExprPhrase *pPhrase = pExpr->apExprPhrase[iPhrase]; + Fts5ExprNode *pNode = pPhrase->pNode; + int rc = SQLITE_OK; + + assert( iPhrase>=0 && iPhrase<pExpr->nPhrase ); + assert( pExpr->pConfig->eDetail==FTS5_DETAIL_COLUMNS ); + + if( pNode->bEof==0 + && pNode->iRowid==pExpr->pRoot->iRowid + && pPhrase->poslist.n>0 + ){ + Fts5ExprTerm *pTerm = &pPhrase->aTerm[0]; + if( pTerm->pSynonym ){ + Fts5Buffer *pBuf = (Fts5Buffer*)&pTerm->pSynonym[1]; + rc = fts5ExprSynonymList( + pTerm, pNode->iRowid, pBuf, (u8**)ppCollist, pnCollist + ); + }else{ + *ppCollist = pPhrase->aTerm[0].pIter->pData; + *pnCollist = pPhrase->aTerm[0].pIter->nData; + } + }else{ + *ppCollist = 0; + *pnCollist = 0; + } + + return rc; +} + + +/* +** 2014 August 11 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +*/ + + + +/* #include "fts5Int.h" */ + +typedef struct Fts5HashEntry Fts5HashEntry; + +/* +** This file contains the implementation of an in-memory hash table used +** to accumuluate "term -> doclist" content before it is flused to a level-0 +** segment. +*/ + + +struct Fts5Hash { + int eDetail; /* Copy of Fts5Config.eDetail */ + int *pnByte; /* Pointer to bytes counter */ + int nEntry; /* Number of entries currently in hash */ + int nSlot; /* Size of aSlot[] array */ + Fts5HashEntry *pScan; /* Current ordered scan item */ + Fts5HashEntry **aSlot; /* Array of hash slots */ +}; + +/* +** Each entry in the hash table is represented by an object of the +** following type. Each object, its key (zKey[]) and its current data +** are stored in a single memory allocation. The position list data +** immediately follows the key data in memory. +** +** The data that follows the key is in a similar, but not identical format +** to the doclist data stored in the database. It is: +** +** * Rowid, as a varint +** * Position list, without 0x00 terminator. 
+** * Size of previous position list and rowid, as a 4 byte +** big-endian integer. +** +** iRowidOff: +** Offset of last rowid written to data area. Relative to first byte of +** structure. +** +** nData: +** Bytes of data written since iRowidOff. +*/ +struct Fts5HashEntry { + Fts5HashEntry *pHashNext; /* Next hash entry with same hash-key */ + Fts5HashEntry *pScanNext; /* Next entry in sorted order */ + + int nAlloc; /* Total size of allocation */ + int iSzPoslist; /* Offset of space for 4-byte poslist size */ + int nData; /* Total bytes of data (incl. structure) */ + int nKey; /* Length of zKey[] in bytes */ + u8 bDel; /* Set delete-flag @ iSzPoslist */ + u8 bContent; /* Set content-flag (detail=none mode) */ + i16 iCol; /* Column of last value written */ + int iPos; /* Position of last value written */ + i64 iRowid; /* Rowid of last value written */ + char zKey[8]; /* Nul-terminated entry key */ +}; + +/* +** Size of Fts5HashEntry without the zKey[] array. +*/ +#define FTS5_HASHENTRYSIZE (sizeof(Fts5HashEntry)-8) + + + +/* +** Allocate a new hash table. +*/ +static int sqlite3Fts5HashNew(Fts5Config *pConfig, Fts5Hash **ppNew, int *pnByte){ + int rc = SQLITE_OK; + Fts5Hash *pNew; + + *ppNew = pNew = (Fts5Hash*)sqlite3_malloc(sizeof(Fts5Hash)); + if( pNew==0 ){ + rc = SQLITE_NOMEM; + }else{ + int nByte; + memset(pNew, 0, sizeof(Fts5Hash)); + pNew->pnByte = pnByte; + pNew->eDetail = pConfig->eDetail; + + pNew->nSlot = 1024; + nByte = sizeof(Fts5HashEntry*) * pNew->nSlot; + pNew->aSlot = (Fts5HashEntry**)sqlite3_malloc(nByte); + if( pNew->aSlot==0 ){ + sqlite3_free(pNew); + *ppNew = 0; + rc = SQLITE_NOMEM; + }else{ + memset(pNew->aSlot, 0, nByte); + } + } + return rc; +} + +/* +** Free a hash table object. +*/ +static void sqlite3Fts5HashFree(Fts5Hash *pHash){ + if( pHash ){ + sqlite3Fts5HashClear(pHash); + sqlite3_free(pHash->aSlot); + sqlite3_free(pHash); + } +} + +/* +** Empty (but do not delete) a hash table. +*/ +static void sqlite3Fts5HashClear(Fts5Hash *pHash){ + int i; + for(i=0; i<pHash->nSlot; i++){ + Fts5HashEntry *pNext; + Fts5HashEntry *pSlot; + for(pSlot=pHash->aSlot[i]; pSlot; pSlot=pNext){ + pNext = pSlot->pHashNext; + sqlite3_free(pSlot); + } + } + memset(pHash->aSlot, 0, pHash->nSlot * sizeof(Fts5HashEntry*)); + pHash->nEntry = 0; +} + +static unsigned int fts5HashKey(int nSlot, const u8 *p, int n){ + int i; + unsigned int h = 13; + for(i=n-1; i>=0; i--){ + h = (h << 3) ^ h ^ p[i]; + } + return (h % nSlot); +} + +static unsigned int fts5HashKey2(int nSlot, u8 b, const u8 *p, int n){ + int i; + unsigned int h = 13; + for(i=n-1; i>=0; i--){ + h = (h << 3) ^ h ^ p[i]; + } + h = (h << 3) ^ h ^ b; + return (h % nSlot); +} + +/* +** Resize the hash table by doubling the number of slots. 
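+**
+** Resizing is triggered from sqlite3Fts5HashWrite() once the number of
+** entries reaches half the number of slots. All existing entries are
+** redistributed by recomputing fts5HashKey() against the new slot count.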
+*/ +static int fts5HashResize(Fts5Hash *pHash){ + int nNew = pHash->nSlot*2; + int i; + Fts5HashEntry **apNew; + Fts5HashEntry **apOld = pHash->aSlot; + + apNew = (Fts5HashEntry**)sqlite3_malloc(nNew*sizeof(Fts5HashEntry*)); + if( !apNew ) return SQLITE_NOMEM; + memset(apNew, 0, nNew*sizeof(Fts5HashEntry*)); + + for(i=0; i<pHash->nSlot; i++){ + while( apOld[i] ){ + int iHash; + Fts5HashEntry *p = apOld[i]; + apOld[i] = p->pHashNext; + iHash = fts5HashKey(nNew, (u8*)p->zKey, (int)strlen(p->zKey)); + p->pHashNext = apNew[iHash]; + apNew[iHash] = p; + } + } + + sqlite3_free(apOld); + pHash->nSlot = nNew; + pHash->aSlot = apNew; + return SQLITE_OK; +} + +static void fts5HashAddPoslistSize(Fts5Hash *pHash, Fts5HashEntry *p){ + if( p->iSzPoslist ){ + u8 *pPtr = (u8*)p; + if( pHash->eDetail==FTS5_DETAIL_NONE ){ + assert( p->nData==p->iSzPoslist ); + if( p->bDel ){ + pPtr[p->nData++] = 0x00; + if( p->bContent ){ + pPtr[p->nData++] = 0x00; + } + } + }else{ + int nSz = (p->nData - p->iSzPoslist - 1); /* Size in bytes */ + int nPos = nSz*2 + p->bDel; /* Value of nPos field */ + + assert( p->bDel==0 || p->bDel==1 ); + if( nPos<=127 ){ + pPtr[p->iSzPoslist] = (u8)nPos; + }else{ + int nByte = sqlite3Fts5GetVarintLen((u32)nPos); + memmove(&pPtr[p->iSzPoslist + nByte], &pPtr[p->iSzPoslist + 1], nSz); + sqlite3Fts5PutVarint(&pPtr[p->iSzPoslist], nPos); + p->nData += (nByte-1); + } + } + + p->iSzPoslist = 0; + p->bDel = 0; + p->bContent = 0; + } +} + +/* +** Add an entry to the in-memory hash table. The key is the concatenation +** of bByte and (pToken/nToken). The value is (iRowid/iCol/iPos). +** +** (bByte || pToken) -> (iRowid,iCol,iPos) +** +** Or, if iCol is negative, then the value is a delete marker. +*/ +static int sqlite3Fts5HashWrite( + Fts5Hash *pHash, + i64 iRowid, /* Rowid for this entry */ + int iCol, /* Column token appears in (-ve -> delete) */ + int iPos, /* Position of token within column */ + char bByte, /* First byte of token */ + const char *pToken, int nToken /* Token to add or remove to or from index */ +){ + unsigned int iHash; + Fts5HashEntry *p; + u8 *pPtr; + int nIncr = 0; /* Amount to increment (*pHash->pnByte) by */ + int bNew; /* If non-delete entry should be written */ + + bNew = (pHash->eDetail==FTS5_DETAIL_FULL); + + /* Attempt to locate an existing hash entry */ + iHash = fts5HashKey2(pHash->nSlot, (u8)bByte, (const u8*)pToken, nToken); + for(p=pHash->aSlot[iHash]; p; p=p->pHashNext){ + if( p->zKey[0]==bByte + && p->nKey==nToken + && memcmp(&p->zKey[1], pToken, nToken)==0 + ){ + break; + } + } + + /* If an existing hash entry cannot be found, create a new one. */ + if( p==0 ){ + /* Figure out how much space to allocate */ + int nByte = FTS5_HASHENTRYSIZE + (nToken+1) + 1 + 64; + if( nByte<128 ) nByte = 128; + + /* Grow the Fts5Hash.aSlot[] array if necessary. */ + if( (pHash->nEntry*2)>=pHash->nSlot ){ + int rc = fts5HashResize(pHash); + if( rc!=SQLITE_OK ) return rc; + iHash = fts5HashKey2(pHash->nSlot, (u8)bByte, (const u8*)pToken, nToken); + } + + /* Allocate new Fts5HashEntry and add it to the hash table. 
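+    ** The entry is created as a single allocation laid out as: the
+    ** Fts5HashEntry header, then zKey[] holding the one-byte prefix
+    ** (bByte), the token text and a nul terminator, then the area into
+    ** which the doclist is accumulated (p->nData is the offset of the
+    ** next free byte within the allocation).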
*/ + p = (Fts5HashEntry*)sqlite3_malloc(nByte); + if( !p ) return SQLITE_NOMEM; + memset(p, 0, FTS5_HASHENTRYSIZE); + p->nAlloc = nByte; + p->zKey[0] = bByte; + memcpy(&p->zKey[1], pToken, nToken); + assert( iHash==fts5HashKey(pHash->nSlot, (u8*)p->zKey, nToken+1) ); + p->nKey = nToken; + p->zKey[nToken+1] = '\0'; + p->nData = nToken+1 + 1 + FTS5_HASHENTRYSIZE; + p->pHashNext = pHash->aSlot[iHash]; + pHash->aSlot[iHash] = p; + pHash->nEntry++; + + /* Add the first rowid field to the hash-entry */ + p->nData += sqlite3Fts5PutVarint(&((u8*)p)[p->nData], iRowid); + p->iRowid = iRowid; + + p->iSzPoslist = p->nData; + if( pHash->eDetail!=FTS5_DETAIL_NONE ){ + p->nData += 1; + p->iCol = (pHash->eDetail==FTS5_DETAIL_FULL ? 0 : -1); + } + + nIncr += p->nData; + }else{ + + /* Appending to an existing hash-entry. Check that there is enough + ** space to append the largest possible new entry. Worst case scenario + ** is: + ** + ** + 9 bytes for a new rowid, + ** + 4 byte reserved for the "poslist size" varint. + ** + 1 byte for a "new column" byte, + ** + 3 bytes for a new column number (16-bit max) as a varint, + ** + 5 bytes for the new position offset (32-bit max). + */ + if( (p->nAlloc - p->nData) < (9 + 4 + 1 + 3 + 5) ){ + int nNew = p->nAlloc * 2; + Fts5HashEntry *pNew; + Fts5HashEntry **pp; + pNew = (Fts5HashEntry*)sqlite3_realloc(p, nNew); + if( pNew==0 ) return SQLITE_NOMEM; + pNew->nAlloc = nNew; + for(pp=&pHash->aSlot[iHash]; *pp!=p; pp=&(*pp)->pHashNext); + *pp = pNew; + p = pNew; + } + nIncr -= p->nData; + } + assert( (p->nAlloc - p->nData) >= (9 + 4 + 1 + 3 + 5) ); + + pPtr = (u8*)p; + + /* If this is a new rowid, append the 4-byte size field for the previous + ** entry, and the new rowid for this entry. */ + if( iRowid!=p->iRowid ){ + fts5HashAddPoslistSize(pHash, p); + p->nData += sqlite3Fts5PutVarint(&pPtr[p->nData], iRowid - p->iRowid); + p->iRowid = iRowid; + bNew = 1; + p->iSzPoslist = p->nData; + if( pHash->eDetail!=FTS5_DETAIL_NONE ){ + p->nData += 1; + p->iCol = (pHash->eDetail==FTS5_DETAIL_FULL ? 0 : -1); + p->iPos = 0; + } + } + + if( iCol>=0 ){ + if( pHash->eDetail==FTS5_DETAIL_NONE ){ + p->bContent = 1; + }else{ + /* Append a new column value, if necessary */ + assert( iCol>=p->iCol ); + if( iCol!=p->iCol ){ + if( pHash->eDetail==FTS5_DETAIL_FULL ){ + pPtr[p->nData++] = 0x01; + p->nData += sqlite3Fts5PutVarint(&pPtr[p->nData], iCol); + p->iCol = iCol; + p->iPos = 0; + }else{ + bNew = 1; + p->iCol = iPos = iCol; + } + } + + /* Append the new position offset, if necessary */ + if( bNew ){ + p->nData += sqlite3Fts5PutVarint(&pPtr[p->nData], iPos - p->iPos + 2); + p->iPos = iPos; + } + } + }else{ + /* This is a delete. Set the delete flag. */ + p->bDel = 1; + } + + nIncr += p->nData; + *pHash->pnByte += nIncr; + return SQLITE_OK; +} + + +/* +** Arguments pLeft and pRight point to linked-lists of hash-entry objects, +** each sorted in key order. This function merges the two lists into a +** single list and returns a pointer to its first element. 
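+**
+** For example, merging the list ("abc" -> "abx") with the list ("abd")
+** produces ("abc" -> "abd" -> "abx"). Keys are compared byte by byte, so
+** a key that is a prefix of another sorts first (its nul terminator
+** compares less than any non-zero key byte).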
+*/ +static Fts5HashEntry *fts5HashEntryMerge( + Fts5HashEntry *pLeft, + Fts5HashEntry *pRight +){ + Fts5HashEntry *p1 = pLeft; + Fts5HashEntry *p2 = pRight; + Fts5HashEntry *pRet = 0; + Fts5HashEntry **ppOut = &pRet; + + while( p1 || p2 ){ + if( p1==0 ){ + *ppOut = p2; + p2 = 0; + }else if( p2==0 ){ + *ppOut = p1; + p1 = 0; + }else{ + int i = 0; + while( p1->zKey[i]==p2->zKey[i] ) i++; + + if( ((u8)p1->zKey[i])>((u8)p2->zKey[i]) ){ + /* p2 is smaller */ + *ppOut = p2; + ppOut = &p2->pScanNext; + p2 = p2->pScanNext; + }else{ + /* p1 is smaller */ + *ppOut = p1; + ppOut = &p1->pScanNext; + p1 = p1->pScanNext; + } + *ppOut = 0; + } + } + + return pRet; +} + +/* +** Extract all tokens from hash table iHash and link them into a list +** in sorted order. The hash table is cleared before returning. It is +** the responsibility of the caller to free the elements of the returned +** list. +*/ +static int fts5HashEntrySort( + Fts5Hash *pHash, + const char *pTerm, int nTerm, /* Query prefix, if any */ + Fts5HashEntry **ppSorted +){ + const int nMergeSlot = 32; + Fts5HashEntry **ap; + Fts5HashEntry *pList; + int iSlot; + int i; + + *ppSorted = 0; + ap = sqlite3_malloc(sizeof(Fts5HashEntry*) * nMergeSlot); + if( !ap ) return SQLITE_NOMEM; + memset(ap, 0, sizeof(Fts5HashEntry*) * nMergeSlot); + + for(iSlot=0; iSlot<pHash->nSlot; iSlot++){ + Fts5HashEntry *pIter; + for(pIter=pHash->aSlot[iSlot]; pIter; pIter=pIter->pHashNext){ + if( pTerm==0 || 0==memcmp(pIter->zKey, pTerm, nTerm) ){ + Fts5HashEntry *pEntry = pIter; + pEntry->pScanNext = 0; + for(i=0; ap[i]; i++){ + pEntry = fts5HashEntryMerge(pEntry, ap[i]); + ap[i] = 0; + } + ap[i] = pEntry; + } + } + } + + pList = 0; + for(i=0; i<nMergeSlot; i++){ + pList = fts5HashEntryMerge(pList, ap[i]); + } + + pHash->nEntry = 0; + sqlite3_free(ap); + *ppSorted = pList; + return SQLITE_OK; +} + +/* +** Query the hash table for a doclist associated with term pTerm/nTerm. 
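+**
+** A minimal calling sketch, assuming zKey/nKey hold a key consisting of
+** the one-byte index prefix (e.g. FTS5_MAIN_PREFIX) followed by the term
+** text:
+**
+**   const u8 *aDoclist = 0;
+**   int nDoclist = 0;
+**   int rc = sqlite3Fts5HashQuery(pHash, zKey, nKey, &aDoclist, &nDoclist);
+**
+** If the term is not present in the hash table, *ppDoclist is set to NULL
+** and *pnDoclist to zero.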
+*/ +static int sqlite3Fts5HashQuery( + Fts5Hash *pHash, /* Hash table to query */ + const char *pTerm, int nTerm, /* Query term */ + const u8 **ppDoclist, /* OUT: Pointer to doclist for pTerm */ + int *pnDoclist /* OUT: Size of doclist in bytes */ +){ + unsigned int iHash = fts5HashKey(pHash->nSlot, (const u8*)pTerm, nTerm); + Fts5HashEntry *p; + + for(p=pHash->aSlot[iHash]; p; p=p->pHashNext){ + if( memcmp(p->zKey, pTerm, nTerm)==0 && p->zKey[nTerm]==0 ) break; + } + + if( p ){ + fts5HashAddPoslistSize(pHash, p); + *ppDoclist = (const u8*)&p->zKey[nTerm+1]; + *pnDoclist = p->nData - (FTS5_HASHENTRYSIZE + nTerm + 1); + }else{ + *ppDoclist = 0; + *pnDoclist = 0; + } + + return SQLITE_OK; +} + +static int sqlite3Fts5HashScanInit( + Fts5Hash *p, /* Hash table to query */ + const char *pTerm, int nTerm /* Query prefix */ +){ + return fts5HashEntrySort(p, pTerm, nTerm, &p->pScan); +} + +static void sqlite3Fts5HashScanNext(Fts5Hash *p){ + assert( !sqlite3Fts5HashScanEof(p) ); + p->pScan = p->pScan->pScanNext; +} + +static int sqlite3Fts5HashScanEof(Fts5Hash *p){ + return (p->pScan==0); +} + +static void sqlite3Fts5HashScanEntry( + Fts5Hash *pHash, + const char **pzTerm, /* OUT: term (nul-terminated) */ + const u8 **ppDoclist, /* OUT: pointer to doclist */ + int *pnDoclist /* OUT: size of doclist in bytes */ +){ + Fts5HashEntry *p; + if( (p = pHash->pScan) ){ + int nTerm = (int)strlen(p->zKey); + fts5HashAddPoslistSize(pHash, p); + *pzTerm = p->zKey; + *ppDoclist = (const u8*)&p->zKey[nTerm+1]; + *pnDoclist = p->nData - (FTS5_HASHENTRYSIZE + nTerm + 1); + }else{ + *pzTerm = 0; + *ppDoclist = 0; + *pnDoclist = 0; + } +} + + +/* +** 2014 May 31 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** Low level access to the FTS index stored in the database file. The +** routines in this file file implement all read and write access to the +** %_data table. Other parts of the system access this functionality via +** the interface defined in fts5Int.h. +*/ + + +/* #include "fts5Int.h" */ + +/* +** Overview: +** +** The %_data table contains all the FTS indexes for an FTS5 virtual table. +** As well as the main term index, there may be up to 31 prefix indexes. +** The format is similar to FTS3/4, except that: +** +** * all segment b-tree leaf data is stored in fixed size page records +** (e.g. 1000 bytes). A single doclist may span multiple pages. Care is +** taken to ensure it is possible to iterate in either direction through +** the entries in a doclist, or to seek to a specific entry within a +** doclist, without loading it into memory. +** +** * large doclists that span many pages have associated "doclist index" +** records that contain a copy of the first rowid on each page spanned by +** the doclist. This is used to speed up seek operations, and merges of +** large doclists with very small doclists. +** +** * extra fields in the "structure record" record the state of ongoing +** incremental merge operations. 
+** +*/ + + +#define FTS5_OPT_WORK_UNIT 1000 /* Number of leaf pages per optimize step */ +#define FTS5_WORK_UNIT 64 /* Number of leaf pages in unit of work */ + +#define FTS5_MIN_DLIDX_SIZE 4 /* Add dlidx if this many empty pages */ + +#define FTS5_MAIN_PREFIX '0' + +#if FTS5_MAX_PREFIX_INDEXES > 31 +# error "FTS5_MAX_PREFIX_INDEXES is too large" +#endif + +/* +** Details: +** +** The %_data table managed by this module, +** +** CREATE TABLE %_data(id INTEGER PRIMARY KEY, block BLOB); +** +** , contains the following 5 types of records. See the comments surrounding +** the FTS5_*_ROWID macros below for a description of how %_data rowids are +** assigned to each fo them. +** +** 1. Structure Records: +** +** The set of segments that make up an index - the index structure - are +** recorded in a single record within the %_data table. The record consists +** of a single 32-bit configuration cookie value followed by a list of +** SQLite varints. If the FTS table features more than one index (because +** there are one or more prefix indexes), it is guaranteed that all share +** the same cookie value. +** +** Immediately following the configuration cookie, the record begins with +** three varints: +** +** + number of levels, +** + total number of segments on all levels, +** + value of write counter. +** +** Then, for each level from 0 to nMax: +** +** + number of input segments in ongoing merge. +** + total number of segments in level. +** + for each segment from oldest to newest: +** + segment id (always > 0) +** + first leaf page number (often 1, always greater than 0) +** + final leaf page number +** +** 2. The Averages Record: +** +** A single record within the %_data table. The data is a list of varints. +** The first value is the number of rows in the index. Then, for each column +** from left to right, the total number of tokens in the column for all +** rows of the table. +** +** 3. Segment leaves: +** +** TERM/DOCLIST FORMAT: +** +** Most of each segment leaf is taken up by term/doclist data. The +** general format of term/doclist, starting with the first term +** on the leaf page, is: +** +** varint : size of first term +** blob: first term data +** doclist: first doclist +** zero-or-more { +** varint: number of bytes in common with previous term +** varint: number of bytes of new term data (nNew) +** blob: nNew bytes of new term data +** doclist: next doclist +** } +** +** doclist format: +** +** varint: first rowid +** poslist: first poslist +** zero-or-more { +** varint: rowid delta (always > 0) +** poslist: next poslist +** } +** +** poslist format: +** +** varint: size of poslist in bytes multiplied by 2, not including +** this field. Plus 1 if this entry carries the "delete" flag. +** collist: collist for column 0 +** zero-or-more { +** 0x01 byte +** varint: column number (I) +** collist: collist for column I +** } +** +** collist format: +** +** varint: first offset + 2 +** zero-or-more { +** varint: offset delta + 2 +** } +** +** PAGE FORMAT +** +** Each leaf page begins with a 4-byte header containing 2 16-bit +** unsigned integer fields in big-endian format. They are: +** +** * The byte offset of the first rowid on the page, if it exists +** and occurs before the first term (otherwise 0). +** +** * The byte offset of the start of the page footer. If the page +** footer is 0 bytes in size, then this field is the same as the +** size of the leaf page in bytes. +** +** The page footer consists of a single varint for each term located +** on the page. 
Each varint is the byte offset of the current term +** within the page, delta-compressed against the previous value. In +** other words, the first varint in the footer is the byte offset of +** the first term, the second is the byte offset of the second less that +** of the first, and so on. +** +** The term/doclist format described above is accurate if the entire +** term/doclist data fits on a single leaf page. If this is not the case, +** the format is changed in two ways: +** +** + if the first rowid on a page occurs before the first term, it +** is stored as a literal value: +** +** varint: first rowid +** +** + the first term on each page is stored in the same way as the +** very first term of the segment: +** +** varint : size of first term +** blob: first term data +** +** 5. Segment doclist indexes: +** +** Doclist indexes are themselves b-trees, however they usually consist of +** a single leaf record only. The format of each doclist index leaf page +** is: +** +** * Flags byte. Bits are: +** 0x01: Clear if leaf is also the root page, otherwise set. +** +** * Page number of fts index leaf page. As a varint. +** +** * First rowid on page indicated by previous field. As a varint. +** +** * A list of varints, one for each subsequent termless page. A +** positive delta if the termless page contains at least one rowid, +** or an 0x00 byte otherwise. +** +** Internal doclist index nodes are: +** +** * Flags byte. Bits are: +** 0x01: Clear for root page, otherwise set. +** +** * Page number of first child page. As a varint. +** +** * Copy of first rowid on page indicated by previous field. As a varint. +** +** * A list of delta-encoded varints - the first rowid on each subsequent +** child page. +** +*/ + +/* +** Rowids for the averages and structure records in the %_data table. +*/ +#define FTS5_AVERAGES_ROWID 1 /* Rowid used for the averages record */ +#define FTS5_STRUCTURE_ROWID 10 /* The structure record */ + +/* +** Macros determining the rowids used by segment leaves and dlidx leaves +** and nodes. All nodes and leaves are stored in the %_data table with large +** positive rowids. +** +** Each segment has a unique non-zero 16-bit id. +** +** The rowid for each segment leaf is found by passing the segment id and +** the leaf page number to the FTS5_SEGMENT_ROWID macro. Leaves are numbered +** sequentially starting from 1. +*/ +#define FTS5_DATA_ID_B 16 /* Max seg id number 65535 */ +#define FTS5_DATA_DLI_B 1 /* Doclist-index flag (1 bit) */ +#define FTS5_DATA_HEIGHT_B 5 /* Max dlidx tree height of 32 */ +#define FTS5_DATA_PAGE_B 31 /* Max page number of 2147483648 */ + +#define fts5_dri(segid, dlidx, height, pgno) ( \ + ((i64)(segid) << (FTS5_DATA_PAGE_B+FTS5_DATA_HEIGHT_B+FTS5_DATA_DLI_B)) + \ + ((i64)(dlidx) << (FTS5_DATA_PAGE_B + FTS5_DATA_HEIGHT_B)) + \ + ((i64)(height) << (FTS5_DATA_PAGE_B)) + \ + ((i64)(pgno)) \ +) + +#define FTS5_SEGMENT_ROWID(segid, pgno) fts5_dri(segid, 0, 0, pgno) +#define FTS5_DLIDX_ROWID(segid, height, pgno) fts5_dri(segid, 1, height, pgno) + +/* +** Maximum segments permitted in a single index +*/ +#define FTS5_MAX_SEGMENT 2000 + +#ifdef SQLITE_DEBUG +static int sqlite3Fts5Corrupt() { return SQLITE_CORRUPT_VTAB; } +#endif + + +/* +** Each time a blob is read from the %_data table, it is padded with this +** many zero bytes. This makes it easier to decode the various record formats +** without overreading if the records are corrupt. 
+*/ +#define FTS5_DATA_ZERO_PADDING 8 +#define FTS5_DATA_PADDING 20 + +typedef struct Fts5Data Fts5Data; +typedef struct Fts5DlidxIter Fts5DlidxIter; +typedef struct Fts5DlidxLvl Fts5DlidxLvl; +typedef struct Fts5DlidxWriter Fts5DlidxWriter; +typedef struct Fts5Iter Fts5Iter; +typedef struct Fts5PageWriter Fts5PageWriter; +typedef struct Fts5SegIter Fts5SegIter; +typedef struct Fts5DoclistIter Fts5DoclistIter; +typedef struct Fts5SegWriter Fts5SegWriter; +typedef struct Fts5Structure Fts5Structure; +typedef struct Fts5StructureLevel Fts5StructureLevel; +typedef struct Fts5StructureSegment Fts5StructureSegment; + +struct Fts5Data { + u8 *p; /* Pointer to buffer containing record */ + int nn; /* Size of record in bytes */ + int szLeaf; /* Size of leaf without page-index */ +}; + +/* +** One object per %_data table. +*/ +struct Fts5Index { + Fts5Config *pConfig; /* Virtual table configuration */ + char *zDataTbl; /* Name of %_data table */ + int nWorkUnit; /* Leaf pages in a "unit" of work */ + + /* + ** Variables related to the accumulation of tokens and doclists within the + ** in-memory hash tables before they are flushed to disk. + */ + Fts5Hash *pHash; /* Hash table for in-memory data */ + int nPendingData; /* Current bytes of pending data */ + i64 iWriteRowid; /* Rowid for current doc being written */ + int bDelete; /* Current write is a delete */ + + /* Error state. */ + int rc; /* Current error code */ + + /* State used by the fts5DataXXX() functions. */ + sqlite3_blob *pReader; /* RO incr-blob open on %_data table */ + sqlite3_stmt *pWriter; /* "INSERT ... %_data VALUES(?,?)" */ + sqlite3_stmt *pDeleter; /* "DELETE FROM %_data ... id>=? AND id<=?" */ + sqlite3_stmt *pIdxWriter; /* "INSERT ... %_idx VALUES(?,?,?,?)" */ + sqlite3_stmt *pIdxDeleter; /* "DELETE FROM %_idx WHERE segid=? */ + sqlite3_stmt *pIdxSelect; + int nRead; /* Total number of blocks read */ +}; + +struct Fts5DoclistIter { + u8 *aEof; /* Pointer to 1 byte past end of doclist */ + + /* Output variables. aPoslist==0 at EOF */ + i64 iRowid; + u8 *aPoslist; + int nPoslist; + int nSize; +}; + +/* +** The contents of the "structure" record for each index are represented +** using an Fts5Structure record in memory. Which uses instances of the +** other Fts5StructureXXX types as components. +*/ +struct Fts5StructureSegment { + int iSegid; /* Segment id */ + int pgnoFirst; /* First leaf page number in segment */ + int pgnoLast; /* Last leaf page number in segment */ +}; +struct Fts5StructureLevel { + int nMerge; /* Number of segments in incr-merge */ + int nSeg; /* Total number of segments on level */ + Fts5StructureSegment *aSeg; /* Array of segments. aSeg[0] is oldest. */ +}; +struct Fts5Structure { + int nRef; /* Object reference count */ + u64 nWriteCounter; /* Total leaves written to level 0 */ + int nSegment; /* Total segments in this structure */ + int nLevel; /* Number of levels in this index */ + Fts5StructureLevel aLevel[1]; /* Array of nLevel level objects */ +}; + +/* +** An object of type Fts5SegWriter is used to write to segments. 
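+**
+** Each Fts5SegWriter owns a single Fts5PageWriter (the leaf page currently
+** being assembled, together with its page-index and a copy of the previous
+** term written to the page) and an array of Fts5DlidxWriter objects, one
+** for each level of the doclist-index b-tree accumulated as the segment is
+** written.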
+*/ +struct Fts5PageWriter { + int pgno; /* Page number for this page */ + int iPrevPgidx; /* Previous value written into pgidx */ + Fts5Buffer buf; /* Buffer containing leaf data */ + Fts5Buffer pgidx; /* Buffer containing page-index */ + Fts5Buffer term; /* Buffer containing previous term on page */ +}; +struct Fts5DlidxWriter { + int pgno; /* Page number for this page */ + int bPrevValid; /* True if iPrev is valid */ + i64 iPrev; /* Previous rowid value written to page */ + Fts5Buffer buf; /* Buffer containing page data */ +}; +struct Fts5SegWriter { + int iSegid; /* Segid to write to */ + Fts5PageWriter writer; /* PageWriter object */ + i64 iPrevRowid; /* Previous rowid written to current leaf */ + u8 bFirstRowidInDoclist; /* True if next rowid is first in doclist */ + u8 bFirstRowidInPage; /* True if next rowid is first in page */ + /* TODO1: Can use (writer.pgidx.n==0) instead of bFirstTermInPage */ + u8 bFirstTermInPage; /* True if next term will be first in leaf */ + int nLeafWritten; /* Number of leaf pages written */ + int nEmpty; /* Number of contiguous term-less nodes */ + + int nDlidx; /* Allocated size of aDlidx[] array */ + Fts5DlidxWriter *aDlidx; /* Array of Fts5DlidxWriter objects */ + + /* Values to insert into the %_idx table */ + Fts5Buffer btterm; /* Next term to insert into %_idx table */ + int iBtPage; /* Page number corresponding to btterm */ +}; + +typedef struct Fts5CResult Fts5CResult; +struct Fts5CResult { + u16 iFirst; /* aSeg[] index of firstest iterator */ + u8 bTermEq; /* True if the terms are equal */ +}; + +/* +** Object for iterating through a single segment, visiting each term/rowid +** pair in the segment. +** +** pSeg: +** The segment to iterate through. +** +** iLeafPgno: +** Current leaf page number within segment. +** +** iLeafOffset: +** Byte offset within the current leaf that is the first byte of the +** position list data (one byte passed the position-list size field). +** rowid field of the current entry. Usually this is the size field of the +** position list data. The exception is if the rowid for the current entry +** is the last thing on the leaf page. +** +** pLeaf: +** Buffer containing current leaf page data. Set to NULL at EOF. +** +** iTermLeafPgno, iTermLeafOffset: +** Leaf page number containing the last term read from the segment. And +** the offset immediately following the term data. +** +** flags: +** Mask of FTS5_SEGITER_XXX values. Interpreted as follows: +** +** FTS5_SEGITER_ONETERM: +** If set, set the iterator to point to EOF after the current doclist +** has been exhausted. Do not proceed to the next term in the segment. +** +** FTS5_SEGITER_REVERSE: +** This flag is only ever set if FTS5_SEGITER_ONETERM is also set. If +** it is set, iterate through rowid in descending order instead of the +** default ascending order. +** +** iRowidOffset/nRowidOffset/aRowidOffset: +** These are used if the FTS5_SEGITER_REVERSE flag is set. +** +** For each rowid on the page corresponding to the current term, the +** corresponding aRowidOffset[] entry is set to the byte offset of the +** start of the "position-list-size" field within the page. +** +** iTermIdx: +** Index of current term on iTermLeafPgno. 
+*/ +struct Fts5SegIter { + Fts5StructureSegment *pSeg; /* Segment to iterate through */ + int flags; /* Mask of configuration flags */ + int iLeafPgno; /* Current leaf page number */ + Fts5Data *pLeaf; /* Current leaf data */ + Fts5Data *pNextLeaf; /* Leaf page (iLeafPgno+1) */ + int iLeafOffset; /* Byte offset within current leaf */ + + /* Next method */ + void (*xNext)(Fts5Index*, Fts5SegIter*, int*); + + /* The page and offset from which the current term was read. The offset + ** is the offset of the first rowid in the current doclist. */ + int iTermLeafPgno; + int iTermLeafOffset; + + int iPgidxOff; /* Next offset in pgidx */ + int iEndofDoclist; + + /* The following are only used if the FTS5_SEGITER_REVERSE flag is set. */ + int iRowidOffset; /* Current entry in aRowidOffset[] */ + int nRowidOffset; /* Allocated size of aRowidOffset[] array */ + int *aRowidOffset; /* Array of offset to rowid fields */ + + Fts5DlidxIter *pDlidx; /* If there is a doclist-index */ + + /* Variables populated based on current entry. */ + Fts5Buffer term; /* Current term */ + i64 iRowid; /* Current rowid */ + int nPos; /* Number of bytes in current position list */ + u8 bDel; /* True if the delete flag is set */ +}; + +/* +** Argument is a pointer to an Fts5Data structure that contains a +** leaf page. +*/ +#define ASSERT_SZLEAF_OK(x) assert( \ + (x)->szLeaf==(x)->nn || (x)->szLeaf==fts5GetU16(&(x)->p[2]) \ +) + +#define FTS5_SEGITER_ONETERM 0x01 +#define FTS5_SEGITER_REVERSE 0x02 + +/* +** Argument is a pointer to an Fts5Data structure that contains a leaf +** page. This macro evaluates to true if the leaf contains no terms, or +** false if it contains at least one term. +*/ +#define fts5LeafIsTermless(x) ((x)->szLeaf >= (x)->nn) + +#define fts5LeafTermOff(x, i) (fts5GetU16(&(x)->p[(x)->szLeaf + (i)*2])) + +#define fts5LeafFirstRowidOff(x) (fts5GetU16((x)->p)) + +/* +** Object for iterating through the merged results of one or more segments, +** visiting each term/rowid pair in the merged data. +** +** nSeg is always a power of two greater than or equal to the number of +** segments that this object is merging data from. Both the aSeg[] and +** aFirst[] arrays are sized at nSeg entries. The aSeg[] array is padded +** with zeroed objects - these are handled as if they were iterators opened +** on empty segments. +** +** The results of comparing segments aSeg[N] and aSeg[N+1], where N is an +** even number, is stored in aFirst[(nSeg+N)/2]. The "result" of the +** comparison in this context is the index of the iterator that currently +** points to the smaller term/rowid combination. Iterators at EOF are +** considered to be greater than all other iterators. +** +** aFirst[1] contains the index in aSeg[] of the iterator that points to +** the smallest key overall. aFirst[0] is unused. +** +** poslist: +** Used by sqlite3Fts5IterPoslist() when the poslist needs to be buffered. +** There is no way to tell if this is populated or not. +*/ +struct Fts5Iter { + Fts5IndexIter base; /* Base class containing output vars */ + + Fts5Index *pIndex; /* Index that owns this iterator */ + Fts5Structure *pStruct; /* Database structure for this iterator */ + Fts5Buffer poslist; /* Buffer containing current poslist */ + Fts5Colset *pColset; /* Restrict matches to these columns */ + + /* Invoked to set output variables. 
*/ + void (*xSetOutputs)(Fts5Iter*, Fts5SegIter*); + + int nSeg; /* Size of aSeg[] array */ + int bRev; /* True to iterate in reverse order */ + u8 bSkipEmpty; /* True to skip deleted entries */ + + i64 iSwitchRowid; /* Firstest rowid of other than aFirst[1] */ + Fts5CResult *aFirst; /* Current merge state (see above) */ + Fts5SegIter aSeg[1]; /* Array of segment iterators */ +}; + + +/* +** An instance of the following type is used to iterate through the contents +** of a doclist-index record. +** +** pData: +** Record containing the doclist-index data. +** +** bEof: +** Set to true once iterator has reached EOF. +** +** iOff: +** Set to the current offset within record pData. +*/ +struct Fts5DlidxLvl { + Fts5Data *pData; /* Data for current page of this level */ + int iOff; /* Current offset into pData */ + int bEof; /* At EOF already */ + int iFirstOff; /* Used by reverse iterators */ + + /* Output variables */ + int iLeafPgno; /* Page number of current leaf page */ + i64 iRowid; /* First rowid on leaf iLeafPgno */ +}; +struct Fts5DlidxIter { + int nLvl; + int iSegid; + Fts5DlidxLvl aLvl[1]; +}; + +static void fts5PutU16(u8 *aOut, u16 iVal){ + aOut[0] = (iVal>>8); + aOut[1] = (iVal&0xFF); +} + +static u16 fts5GetU16(const u8 *aIn){ + return ((u16)aIn[0] << 8) + aIn[1]; +} + +/* +** Allocate and return a buffer at least nByte bytes in size. +** +** If an OOM error is encountered, return NULL and set the error code in +** the Fts5Index handle passed as the first argument. +*/ +static void *fts5IdxMalloc(Fts5Index *p, int nByte){ + return sqlite3Fts5MallocZero(&p->rc, nByte); +} + +/* +** Compare the contents of the pLeft buffer with the pRight/nRight blob. +** +** Return -ve if pLeft is smaller than pRight, 0 if they are equal or +** +ve if pRight is smaller than pLeft. In other words: +** +** res = *pLeft - *pRight +*/ +#ifdef SQLITE_DEBUG +static int fts5BufferCompareBlob( + Fts5Buffer *pLeft, /* Left hand side of comparison */ + const u8 *pRight, int nRight /* Right hand side of comparison */ +){ + int nCmp = MIN(pLeft->n, nRight); + int res = memcmp(pLeft->p, pRight, nCmp); + return (res==0 ? (pLeft->n - nRight) : res); +} +#endif + +/* +** Compare the contents of the two buffers using memcmp(). If one buffer +** is a prefix of the other, it is considered the lesser. +** +** Return -ve if pLeft is smaller than pRight, 0 if they are equal or +** +ve if pRight is smaller than pLeft. In other words: +** +** res = *pLeft - *pRight +*/ +static int fts5BufferCompare(Fts5Buffer *pLeft, Fts5Buffer *pRight){ + int nCmp = MIN(pLeft->n, pRight->n); + int res = memcmp(pLeft->p, pRight->p, nCmp); + return (res==0 ? (pLeft->n - pRight->n) : res); +} + +static int fts5LeafFirstTermOff(Fts5Data *pLeaf){ + int ret; + fts5GetVarint32(&pLeaf->p[pLeaf->szLeaf], ret); + return ret; +} + +/* +** Close the read-only blob handle, if it is open. +*/ +static void fts5CloseReader(Fts5Index *p){ + if( p->pReader ){ + sqlite3_blob *pReader = p->pReader; + p->pReader = 0; + sqlite3_blob_close(pReader); + } +} + + +/* +** Retrieve a record from the %_data table. +** +** If an error occurs, NULL is returned and an error left in the +** Fts5Index object. +*/ +static Fts5Data *fts5DataRead(Fts5Index *p, i64 iRowid){ + Fts5Data *pRet = 0; + if( p->rc==SQLITE_OK ){ + int rc = SQLITE_OK; + + if( p->pReader ){ + /* This call may return SQLITE_ABORT if there has been a savepoint + ** rollback since it was last used. In this case a new blob handle + ** is required. 
*/ + sqlite3_blob *pBlob = p->pReader; + p->pReader = 0; + rc = sqlite3_blob_reopen(pBlob, iRowid); + assert( p->pReader==0 ); + p->pReader = pBlob; + if( rc!=SQLITE_OK ){ + fts5CloseReader(p); + } + if( rc==SQLITE_ABORT ) rc = SQLITE_OK; + } + + /* If the blob handle is not open at this point, open it and seek + ** to the requested entry. */ + if( p->pReader==0 && rc==SQLITE_OK ){ + Fts5Config *pConfig = p->pConfig; + rc = sqlite3_blob_open(pConfig->db, + pConfig->zDb, p->zDataTbl, "block", iRowid, 0, &p->pReader + ); + } + + /* If either of the sqlite3_blob_open() or sqlite3_blob_reopen() calls + ** above returned SQLITE_ERROR, return SQLITE_CORRUPT_VTAB instead. + ** All the reasons those functions might return SQLITE_ERROR - missing + ** table, missing row, non-blob/text in block column - indicate + ** backing store corruption. */ + if( rc==SQLITE_ERROR ) rc = FTS5_CORRUPT; + + if( rc==SQLITE_OK ){ + u8 *aOut = 0; /* Read blob data into this buffer */ + int nByte = sqlite3_blob_bytes(p->pReader); + int nAlloc = sizeof(Fts5Data) + nByte + FTS5_DATA_PADDING; + pRet = (Fts5Data*)sqlite3_malloc(nAlloc); + if( pRet ){ + pRet->nn = nByte; + aOut = pRet->p = (u8*)&pRet[1]; + }else{ + rc = SQLITE_NOMEM; + } + + if( rc==SQLITE_OK ){ + rc = sqlite3_blob_read(p->pReader, aOut, nByte, 0); + } + if( rc!=SQLITE_OK ){ + sqlite3_free(pRet); + pRet = 0; + }else{ + /* TODO1: Fix this */ + pRet->szLeaf = fts5GetU16(&pRet->p[2]); + } + } + p->rc = rc; + p->nRead++; + } + + assert( (pRet==0)==(p->rc!=SQLITE_OK) ); + return pRet; +} + +/* +** Release a reference to data record returned by an earlier call to +** fts5DataRead(). +*/ +static void fts5DataRelease(Fts5Data *pData){ + sqlite3_free(pData); +} + +static int fts5IndexPrepareStmt( + Fts5Index *p, + sqlite3_stmt **ppStmt, + char *zSql +){ + if( p->rc==SQLITE_OK ){ + if( zSql ){ + p->rc = sqlite3_prepare_v2(p->pConfig->db, zSql, -1, ppStmt, 0); + }else{ + p->rc = SQLITE_NOMEM; + } + } + sqlite3_free(zSql); + return p->rc; +} + + +/* +** INSERT OR REPLACE a record into the %_data table. +*/ +static void fts5DataWrite(Fts5Index *p, i64 iRowid, const u8 *pData, int nData){ + if( p->rc!=SQLITE_OK ) return; + + if( p->pWriter==0 ){ + Fts5Config *pConfig = p->pConfig; + fts5IndexPrepareStmt(p, &p->pWriter, sqlite3_mprintf( + "REPLACE INTO '%q'.'%q_data'(id, block) VALUES(?,?)", + pConfig->zDb, pConfig->zName + )); + if( p->rc ) return; + } + + sqlite3_bind_int64(p->pWriter, 1, iRowid); + sqlite3_bind_blob(p->pWriter, 2, pData, nData, SQLITE_STATIC); + sqlite3_step(p->pWriter); + p->rc = sqlite3_reset(p->pWriter); +} + +/* +** Execute the following SQL: +** +** DELETE FROM %_data WHERE id BETWEEN $iFirst AND $iLast +*/ +static void fts5DataDelete(Fts5Index *p, i64 iFirst, i64 iLast){ + if( p->rc!=SQLITE_OK ) return; + + if( p->pDeleter==0 ){ + int rc; + Fts5Config *pConfig = p->pConfig; + char *zSql = sqlite3_mprintf( + "DELETE FROM '%q'.'%q_data' WHERE id>=? AND id<=?", + pConfig->zDb, pConfig->zName + ); + if( zSql==0 ){ + rc = SQLITE_NOMEM; + }else{ + rc = sqlite3_prepare_v2(pConfig->db, zSql, -1, &p->pDeleter, 0); + sqlite3_free(zSql); + } + if( rc!=SQLITE_OK ){ + p->rc = rc; + return; + } + } + + sqlite3_bind_int64(p->pDeleter, 1, iFirst); + sqlite3_bind_int64(p->pDeleter, 2, iLast); + sqlite3_step(p->pDeleter); + p->rc = sqlite3_reset(p->pDeleter); +} + +/* +** Remove all records associated with segment iSegid. 
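+**
+** This is roughly equivalent to running:
+**
+**   DELETE FROM %_data WHERE id BETWEEN FTS5_SEGMENT_ROWID(iSegid, 0)
+**                                   AND FTS5_SEGMENT_ROWID(iSegid+1, 0)-1;
+**   DELETE FROM %_idx WHERE segid = iSegid;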
+*/ +static void fts5DataRemoveSegment(Fts5Index *p, int iSegid){ + i64 iFirst = FTS5_SEGMENT_ROWID(iSegid, 0); + i64 iLast = FTS5_SEGMENT_ROWID(iSegid+1, 0)-1; + fts5DataDelete(p, iFirst, iLast); + if( p->pIdxDeleter==0 ){ + Fts5Config *pConfig = p->pConfig; + fts5IndexPrepareStmt(p, &p->pIdxDeleter, sqlite3_mprintf( + "DELETE FROM '%q'.'%q_idx' WHERE segid=?", + pConfig->zDb, pConfig->zName + )); + } + if( p->rc==SQLITE_OK ){ + sqlite3_bind_int(p->pIdxDeleter, 1, iSegid); + sqlite3_step(p->pIdxDeleter); + p->rc = sqlite3_reset(p->pIdxDeleter); + } +} + +/* +** Release a reference to an Fts5Structure object returned by an earlier +** call to fts5StructureRead() or fts5StructureDecode(). +*/ +static void fts5StructureRelease(Fts5Structure *pStruct){ + if( pStruct && 0>=(--pStruct->nRef) ){ + int i; + assert( pStruct->nRef==0 ); + for(i=0; i<pStruct->nLevel; i++){ + sqlite3_free(pStruct->aLevel[i].aSeg); + } + sqlite3_free(pStruct); + } +} + +static void fts5StructureRef(Fts5Structure *pStruct){ + pStruct->nRef++; +} + +/* +** Deserialize and return the structure record currently stored in serialized +** form within buffer pData/nData. +** +** The Fts5Structure.aLevel[] and each Fts5StructureLevel.aSeg[] array +** are over-allocated by one slot. This allows the structure contents +** to be more easily edited. +** +** If an error occurs, *ppOut is set to NULL and an SQLite error code +** returned. Otherwise, *ppOut is set to point to the new object and +** SQLITE_OK returned. +*/ +static int fts5StructureDecode( + const u8 *pData, /* Buffer containing serialized structure */ + int nData, /* Size of buffer pData in bytes */ + int *piCookie, /* Configuration cookie value */ + Fts5Structure **ppOut /* OUT: Deserialized object */ +){ + int rc = SQLITE_OK; + int i = 0; + int iLvl; + int nLevel = 0; + int nSegment = 0; + int nByte; /* Bytes of space to allocate at pRet */ + Fts5Structure *pRet = 0; /* Structure object to return */ + + /* Grab the cookie value */ + if( piCookie ) *piCookie = sqlite3Fts5Get32(pData); + i = 4; + + /* Read the total number of levels and segments from the start of the + ** structure record. 
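+**
+** For illustration, the serialized format (also produced by
+** fts5StructureWrite() below) is roughly:
+**
+**     4 bytes    configuration cookie
+**     varint     number of levels (nLevel)
+**     varint     total number of segments (nSegment)
+**     varint     write counter
+**     for each level:
+**       varint nMerge, varint nSeg, then for each segment:
+**         varint iSegid, varint pgnoFirst, varint pgnoLast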
*/ + i += fts5GetVarint32(&pData[i], nLevel); + i += fts5GetVarint32(&pData[i], nSegment); + nByte = ( + sizeof(Fts5Structure) + /* Main structure */ + sizeof(Fts5StructureLevel) * (nLevel-1) /* aLevel[] array */ + ); + pRet = (Fts5Structure*)sqlite3Fts5MallocZero(&rc, nByte); + + if( pRet ){ + pRet->nRef = 1; + pRet->nLevel = nLevel; + pRet->nSegment = nSegment; + i += sqlite3Fts5GetVarint(&pData[i], &pRet->nWriteCounter); + + for(iLvl=0; rc==SQLITE_OK && iLvl<nLevel; iLvl++){ + Fts5StructureLevel *pLvl = &pRet->aLevel[iLvl]; + int nTotal; + int iSeg; + + if( i>=nData ){ + rc = FTS5_CORRUPT; + }else{ + i += fts5GetVarint32(&pData[i], pLvl->nMerge); + i += fts5GetVarint32(&pData[i], nTotal); + assert( nTotal>=pLvl->nMerge ); + pLvl->aSeg = (Fts5StructureSegment*)sqlite3Fts5MallocZero(&rc, + nTotal * sizeof(Fts5StructureSegment) + ); + } + + if( rc==SQLITE_OK ){ + pLvl->nSeg = nTotal; + for(iSeg=0; iSeg<nTotal; iSeg++){ + if( i>=nData ){ + rc = FTS5_CORRUPT; + break; + } + i += fts5GetVarint32(&pData[i], pLvl->aSeg[iSeg].iSegid); + i += fts5GetVarint32(&pData[i], pLvl->aSeg[iSeg].pgnoFirst); + i += fts5GetVarint32(&pData[i], pLvl->aSeg[iSeg].pgnoLast); + } + } + } + if( rc!=SQLITE_OK ){ + fts5StructureRelease(pRet); + pRet = 0; + } + } + + *ppOut = pRet; + return rc; +} + +/* +** +*/ +static void fts5StructureAddLevel(int *pRc, Fts5Structure **ppStruct){ + if( *pRc==SQLITE_OK ){ + Fts5Structure *pStruct = *ppStruct; + int nLevel = pStruct->nLevel; + int nByte = ( + sizeof(Fts5Structure) + /* Main structure */ + sizeof(Fts5StructureLevel) * (nLevel+1) /* aLevel[] array */ + ); + + pStruct = sqlite3_realloc(pStruct, nByte); + if( pStruct ){ + memset(&pStruct->aLevel[nLevel], 0, sizeof(Fts5StructureLevel)); + pStruct->nLevel++; + *ppStruct = pStruct; + }else{ + *pRc = SQLITE_NOMEM; + } + } +} + +/* +** Extend level iLvl so that there is room for at least nExtra more +** segments. +*/ +static void fts5StructureExtendLevel( + int *pRc, + Fts5Structure *pStruct, + int iLvl, + int nExtra, + int bInsert +){ + if( *pRc==SQLITE_OK ){ + Fts5StructureLevel *pLvl = &pStruct->aLevel[iLvl]; + Fts5StructureSegment *aNew; + int nByte; + + nByte = (pLvl->nSeg + nExtra) * sizeof(Fts5StructureSegment); + aNew = sqlite3_realloc(pLvl->aSeg, nByte); + if( aNew ){ + if( bInsert==0 ){ + memset(&aNew[pLvl->nSeg], 0, sizeof(Fts5StructureSegment) * nExtra); + }else{ + int nMove = pLvl->nSeg * sizeof(Fts5StructureSegment); + memmove(&aNew[nExtra], aNew, nMove); + memset(aNew, 0, sizeof(Fts5StructureSegment) * nExtra); + } + pLvl->aSeg = aNew; + }else{ + *pRc = SQLITE_NOMEM; + } + } +} + +/* +** Read, deserialize and return the structure record. +** +** The Fts5Structure.aLevel[] and each Fts5StructureLevel.aSeg[] array +** are over-allocated as described for function fts5StructureDecode() +** above. +** +** If an error occurs, NULL is returned and an error code left in the +** Fts5Index handle. If an error has already occurred when this function +** is called, it is a no-op. +*/ +static Fts5Structure *fts5StructureRead(Fts5Index *p){ + Fts5Config *pConfig = p->pConfig; + Fts5Structure *pRet = 0; /* Object to return */ + int iCookie; /* Configuration cookie */ + Fts5Data *pData; + + pData = fts5DataRead(p, FTS5_STRUCTURE_ROWID); + if( p->rc ) return 0; + /* TODO: Do we need this if the leaf-index is appended? Probably... 
*/ + memset(&pData->p[pData->nn], 0, FTS5_DATA_PADDING); + p->rc = fts5StructureDecode(pData->p, pData->nn, &iCookie, &pRet); + if( p->rc==SQLITE_OK && pConfig->iCookie!=iCookie ){ + p->rc = sqlite3Fts5ConfigLoad(pConfig, iCookie); + } + + fts5DataRelease(pData); + if( p->rc!=SQLITE_OK ){ + fts5StructureRelease(pRet); + pRet = 0; + } + return pRet; +} + +/* +** Return the total number of segments in index structure pStruct. This +** function is only ever used as part of assert() conditions. +*/ +#ifdef SQLITE_DEBUG +static int fts5StructureCountSegments(Fts5Structure *pStruct){ + int nSegment = 0; /* Total number of segments */ + if( pStruct ){ + int iLvl; /* Used to iterate through levels */ + for(iLvl=0; iLvl<pStruct->nLevel; iLvl++){ + nSegment += pStruct->aLevel[iLvl].nSeg; + } + } + + return nSegment; +} +#endif + +#define fts5BufferSafeAppendBlob(pBuf, pBlob, nBlob) { \ + assert( (pBuf)->nSpace>=((pBuf)->n+nBlob) ); \ + memcpy(&(pBuf)->p[(pBuf)->n], pBlob, nBlob); \ + (pBuf)->n += nBlob; \ +} + +#define fts5BufferSafeAppendVarint(pBuf, iVal) { \ + (pBuf)->n += sqlite3Fts5PutVarint(&(pBuf)->p[(pBuf)->n], (iVal)); \ + assert( (pBuf)->nSpace>=(pBuf)->n ); \ +} + + +/* +** Serialize and store the "structure" record. +** +** If an error occurs, leave an error code in the Fts5Index object. If an +** error has already occurred, this function is a no-op. +*/ +static void fts5StructureWrite(Fts5Index *p, Fts5Structure *pStruct){ + if( p->rc==SQLITE_OK ){ + Fts5Buffer buf; /* Buffer to serialize record into */ + int iLvl; /* Used to iterate through levels */ + int iCookie; /* Cookie value to store */ + + assert( pStruct->nSegment==fts5StructureCountSegments(pStruct) ); + memset(&buf, 0, sizeof(Fts5Buffer)); + + /* Append the current configuration cookie */ + iCookie = p->pConfig->iCookie; + if( iCookie<0 ) iCookie = 0; + + if( 0==sqlite3Fts5BufferSize(&p->rc, &buf, 4+9+9+9) ){ + sqlite3Fts5Put32(buf.p, iCookie); + buf.n = 4; + fts5BufferSafeAppendVarint(&buf, pStruct->nLevel); + fts5BufferSafeAppendVarint(&buf, pStruct->nSegment); + fts5BufferSafeAppendVarint(&buf, (i64)pStruct->nWriteCounter); + } + + for(iLvl=0; iLvl<pStruct->nLevel; iLvl++){ + int iSeg; /* Used to iterate through segments */ + Fts5StructureLevel *pLvl = &pStruct->aLevel[iLvl]; + fts5BufferAppendVarint(&p->rc, &buf, pLvl->nMerge); + fts5BufferAppendVarint(&p->rc, &buf, pLvl->nSeg); + assert( pLvl->nMerge<=pLvl->nSeg ); + + for(iSeg=0; iSeg<pLvl->nSeg; iSeg++){ + fts5BufferAppendVarint(&p->rc, &buf, pLvl->aSeg[iSeg].iSegid); + fts5BufferAppendVarint(&p->rc, &buf, pLvl->aSeg[iSeg].pgnoFirst); + fts5BufferAppendVarint(&p->rc, &buf, pLvl->aSeg[iSeg].pgnoLast); + } + } + + fts5DataWrite(p, FTS5_STRUCTURE_ROWID, buf.p, buf.n); + fts5BufferFree(&buf); + } +} + +#if 0 +static void fts5DebugStructure(int*,Fts5Buffer*,Fts5Structure*); +static void fts5PrintStructure(const char *zCaption, Fts5Structure *pStruct){ + int rc = SQLITE_OK; + Fts5Buffer buf; + memset(&buf, 0, sizeof(buf)); + fts5DebugStructure(&rc, &buf, pStruct); + fprintf(stdout, "%s: %s\n", zCaption, buf.p); + fflush(stdout); + fts5BufferFree(&buf); +} +#else +# define fts5PrintStructure(x,y) +#endif + +static int fts5SegmentSize(Fts5StructureSegment *pSeg){ + return 1 + pSeg->pgnoLast - pSeg->pgnoFirst; +} + +/* +** Return a copy of index structure pStruct. Except, promote as many +** segments as possible to level iPromote. If an OOM occurs, NULL is +** returned. 
+*/ +static void fts5StructurePromoteTo( + Fts5Index *p, + int iPromote, + int szPromote, + Fts5Structure *pStruct +){ + int il, is; + Fts5StructureLevel *pOut = &pStruct->aLevel[iPromote]; + + if( pOut->nMerge==0 ){ + for(il=iPromote+1; il<pStruct->nLevel; il++){ + Fts5StructureLevel *pLvl = &pStruct->aLevel[il]; + if( pLvl->nMerge ) return; + for(is=pLvl->nSeg-1; is>=0; is--){ + int sz = fts5SegmentSize(&pLvl->aSeg[is]); + if( sz>szPromote ) return; + fts5StructureExtendLevel(&p->rc, pStruct, iPromote, 1, 1); + if( p->rc ) return; + memcpy(pOut->aSeg, &pLvl->aSeg[is], sizeof(Fts5StructureSegment)); + pOut->nSeg++; + pLvl->nSeg--; + } + } + } +} + +/* +** A new segment has just been written to level iLvl of index structure +** pStruct. This function determines if any segments should be promoted +** as a result. Segments are promoted in two scenarios: +** +** a) If the segment just written is smaller than one or more segments +** within the previous populated level, it is promoted to the previous +** populated level. +** +** b) If the segment just written is larger than the newest segment on +** the next populated level, then that segment, and any other adjacent +** segments that are also smaller than the one just written, are +** promoted. +** +** If one or more segments are promoted, the structure object is updated +** to reflect this. +*/ +static void fts5StructurePromote( + Fts5Index *p, /* FTS5 backend object */ + int iLvl, /* Index level just updated */ + Fts5Structure *pStruct /* Index structure */ +){ + if( p->rc==SQLITE_OK ){ + int iTst; + int iPromote = -1; + int szPromote = 0; /* Promote anything this size or smaller */ + Fts5StructureSegment *pSeg; /* Segment just written */ + int szSeg; /* Size of segment just written */ + int nSeg = pStruct->aLevel[iLvl].nSeg; + + if( nSeg==0 ) return; + pSeg = &pStruct->aLevel[iLvl].aSeg[pStruct->aLevel[iLvl].nSeg-1]; + szSeg = (1 + pSeg->pgnoLast - pSeg->pgnoFirst); + + /* Check for condition (a) */ + for(iTst=iLvl-1; iTst>=0 && pStruct->aLevel[iTst].nSeg==0; iTst--); + if( iTst>=0 ){ + int i; + int szMax = 0; + Fts5StructureLevel *pTst = &pStruct->aLevel[iTst]; + assert( pTst->nMerge==0 ); + for(i=0; i<pTst->nSeg; i++){ + int sz = pTst->aSeg[i].pgnoLast - pTst->aSeg[i].pgnoFirst + 1; + if( sz>szMax ) szMax = sz; + } + if( szMax>=szSeg ){ + /* Condition (a) is true. Promote the newest segment on level + ** iLvl to level iTst. */ + iPromote = iTst; + szPromote = szMax; + } + } + + /* If condition (a) is not met, assume (b) is true. StructurePromoteTo() + ** is a no-op if it is not. */ + if( iPromote<0 ){ + iPromote = iLvl; + szPromote = szSeg; + } + fts5StructurePromoteTo(p, iPromote, szPromote, pStruct); + } +} + + +/* +** Advance the iterator passed as the only argument. If the end of the +** doclist-index page is reached, return non-zero. 
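+**
+** As read by this routine, a doclist-index level record begins with a
+** flags byte, followed by a varint leaf page number and a varint rowid
+** for the first entry. After that, each leaf page is represented either
+** by a single 0x00 byte (no rowid to record for that page) or by a
+** non-zero varint holding the delta to add to the previous rowid.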
+*/ +static int fts5DlidxLvlNext(Fts5DlidxLvl *pLvl){ + Fts5Data *pData = pLvl->pData; + + if( pLvl->iOff==0 ){ + assert( pLvl->bEof==0 ); + pLvl->iOff = 1; + pLvl->iOff += fts5GetVarint32(&pData->p[1], pLvl->iLeafPgno); + pLvl->iOff += fts5GetVarint(&pData->p[pLvl->iOff], (u64*)&pLvl->iRowid); + pLvl->iFirstOff = pLvl->iOff; + }else{ + int iOff; + for(iOff=pLvl->iOff; iOff<pData->nn; iOff++){ + if( pData->p[iOff] ) break; + } + + if( iOff<pData->nn ){ + i64 iVal; + pLvl->iLeafPgno += (iOff - pLvl->iOff) + 1; + iOff += fts5GetVarint(&pData->p[iOff], (u64*)&iVal); + pLvl->iRowid += iVal; + pLvl->iOff = iOff; + }else{ + pLvl->bEof = 1; + } + } + + return pLvl->bEof; +} + +/* +** Advance the iterator passed as the only argument. +*/ +static int fts5DlidxIterNextR(Fts5Index *p, Fts5DlidxIter *pIter, int iLvl){ + Fts5DlidxLvl *pLvl = &pIter->aLvl[iLvl]; + + assert( iLvl<pIter->nLvl ); + if( fts5DlidxLvlNext(pLvl) ){ + if( (iLvl+1) < pIter->nLvl ){ + fts5DlidxIterNextR(p, pIter, iLvl+1); + if( pLvl[1].bEof==0 ){ + fts5DataRelease(pLvl->pData); + memset(pLvl, 0, sizeof(Fts5DlidxLvl)); + pLvl->pData = fts5DataRead(p, + FTS5_DLIDX_ROWID(pIter->iSegid, iLvl, pLvl[1].iLeafPgno) + ); + if( pLvl->pData ) fts5DlidxLvlNext(pLvl); + } + } + } + + return pIter->aLvl[0].bEof; +} +static int fts5DlidxIterNext(Fts5Index *p, Fts5DlidxIter *pIter){ + return fts5DlidxIterNextR(p, pIter, 0); +} + +/* +** The iterator passed as the first argument has the following fields set +** as follows. This function sets up the rest of the iterator so that it +** points to the first rowid in the doclist-index. +** +** pData: +** pointer to doclist-index record, +** +** When this function is called pIter->iLeafPgno is the page number the +** doclist is associated with (the one featuring the term). +*/ +static int fts5DlidxIterFirst(Fts5DlidxIter *pIter){ + int i; + for(i=0; i<pIter->nLvl; i++){ + fts5DlidxLvlNext(&pIter->aLvl[i]); + } + return pIter->aLvl[0].bEof; +} + + +static int fts5DlidxIterEof(Fts5Index *p, Fts5DlidxIter *pIter){ + return p->rc!=SQLITE_OK || pIter->aLvl[0].bEof; +} + +static void fts5DlidxIterLast(Fts5Index *p, Fts5DlidxIter *pIter){ + int i; + + /* Advance each level to the last entry on the last page */ + for(i=pIter->nLvl-1; p->rc==SQLITE_OK && i>=0; i--){ + Fts5DlidxLvl *pLvl = &pIter->aLvl[i]; + while( fts5DlidxLvlNext(pLvl)==0 ); + pLvl->bEof = 0; + + if( i>0 ){ + Fts5DlidxLvl *pChild = &pLvl[-1]; + fts5DataRelease(pChild->pData); + memset(pChild, 0, sizeof(Fts5DlidxLvl)); + pChild->pData = fts5DataRead(p, + FTS5_DLIDX_ROWID(pIter->iSegid, i-1, pLvl->iLeafPgno) + ); + } + } +} + +/* +** Move the iterator passed as the only argument to the previous entry. +*/ +static int fts5DlidxLvlPrev(Fts5DlidxLvl *pLvl){ + int iOff = pLvl->iOff; + + assert( pLvl->bEof==0 ); + if( iOff<=pLvl->iFirstOff ){ + pLvl->bEof = 1; + }else{ + u8 *a = pLvl->pData->p; + i64 iVal; + int iLimit; + int ii; + int nZero = 0; + + /* Currently iOff points to the first byte of a varint. This block + ** decrements iOff until it points to the first byte of the previous + ** varint. Taking care not to read any memory locations that occur + ** before the buffer in memory. */ + iLimit = (iOff>9 ? iOff-9 : 0); + for(iOff--; iOff>iLimit; iOff--){ + if( (a[iOff-1] & 0x80)==0 ) break; + } + + fts5GetVarint(&a[iOff], (u64*)&iVal); + pLvl->iRowid -= iVal; + pLvl->iLeafPgno--; + + /* Skip backwards past any 0x00 varints. 
*/ + for(ii=iOff-1; ii>=pLvl->iFirstOff && a[ii]==0x00; ii--){ + nZero++; + } + if( ii>=pLvl->iFirstOff && (a[ii] & 0x80) ){ + /* The byte immediately before the last 0x00 byte has the 0x80 bit + ** set. So the last 0x00 is only a varint 0 if there are 8 more 0x80 + ** bytes before a[ii]. */ + int bZero = 0; /* True if last 0x00 counts */ + if( (ii-8)>=pLvl->iFirstOff ){ + int j; + for(j=1; j<=8 && (a[ii-j] & 0x80); j++); + bZero = (j>8); + } + if( bZero==0 ) nZero--; + } + pLvl->iLeafPgno -= nZero; + pLvl->iOff = iOff - nZero; + } + + return pLvl->bEof; +} + +static int fts5DlidxIterPrevR(Fts5Index *p, Fts5DlidxIter *pIter, int iLvl){ + Fts5DlidxLvl *pLvl = &pIter->aLvl[iLvl]; + + assert( iLvl<pIter->nLvl ); + if( fts5DlidxLvlPrev(pLvl) ){ + if( (iLvl+1) < pIter->nLvl ){ + fts5DlidxIterPrevR(p, pIter, iLvl+1); + if( pLvl[1].bEof==0 ){ + fts5DataRelease(pLvl->pData); + memset(pLvl, 0, sizeof(Fts5DlidxLvl)); + pLvl->pData = fts5DataRead(p, + FTS5_DLIDX_ROWID(pIter->iSegid, iLvl, pLvl[1].iLeafPgno) + ); + if( pLvl->pData ){ + while( fts5DlidxLvlNext(pLvl)==0 ); + pLvl->bEof = 0; + } + } + } + } + + return pIter->aLvl[0].bEof; +} +static int fts5DlidxIterPrev(Fts5Index *p, Fts5DlidxIter *pIter){ + return fts5DlidxIterPrevR(p, pIter, 0); +} + +/* +** Free a doclist-index iterator object allocated by fts5DlidxIterInit(). +*/ +static void fts5DlidxIterFree(Fts5DlidxIter *pIter){ + if( pIter ){ + int i; + for(i=0; i<pIter->nLvl; i++){ + fts5DataRelease(pIter->aLvl[i].pData); + } + sqlite3_free(pIter); + } +} + +static Fts5DlidxIter *fts5DlidxIterInit( + Fts5Index *p, /* Fts5 Backend to iterate within */ + int bRev, /* True for ORDER BY ASC */ + int iSegid, /* Segment id */ + int iLeafPg /* Leaf page number to load dlidx for */ +){ + Fts5DlidxIter *pIter = 0; + int i; + int bDone = 0; + + for(i=0; p->rc==SQLITE_OK && bDone==0; i++){ + int nByte = sizeof(Fts5DlidxIter) + i * sizeof(Fts5DlidxLvl); + Fts5DlidxIter *pNew; + + pNew = (Fts5DlidxIter*)sqlite3_realloc(pIter, nByte); + if( pNew==0 ){ + p->rc = SQLITE_NOMEM; + }else{ + i64 iRowid = FTS5_DLIDX_ROWID(iSegid, i, iLeafPg); + Fts5DlidxLvl *pLvl = &pNew->aLvl[i]; + pIter = pNew; + memset(pLvl, 0, sizeof(Fts5DlidxLvl)); + pLvl->pData = fts5DataRead(p, iRowid); + if( pLvl->pData && (pLvl->pData->p[0] & 0x0001)==0 ){ + bDone = 1; + } + pIter->nLvl = i+1; + } + } + + if( p->rc==SQLITE_OK ){ + pIter->iSegid = iSegid; + if( bRev==0 ){ + fts5DlidxIterFirst(pIter); + }else{ + fts5DlidxIterLast(p, pIter); + } + } + + if( p->rc!=SQLITE_OK ){ + fts5DlidxIterFree(pIter); + pIter = 0; + } + + return pIter; +} + +static i64 fts5DlidxIterRowid(Fts5DlidxIter *pIter){ + return pIter->aLvl[0].iRowid; +} +static int fts5DlidxIterPgno(Fts5DlidxIter *pIter){ + return pIter->aLvl[0].iLeafPgno; +} + +/* +** Load the next leaf page into the segment iterator. 
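+**
+** If a page has already been buffered in Fts5SegIter.pNextLeaf it is
+** used directly; otherwise the next page is read from the %_data table.
+** If the segment contains no further pages, Fts5SegIter.pLeaf is set to
+** NULL to indicate EOF.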
+*/ +static void fts5SegIterNextPage( + Fts5Index *p, /* FTS5 backend object */ + Fts5SegIter *pIter /* Iterator to advance to next page */ +){ + Fts5Data *pLeaf; + Fts5StructureSegment *pSeg = pIter->pSeg; + fts5DataRelease(pIter->pLeaf); + pIter->iLeafPgno++; + if( pIter->pNextLeaf ){ + pIter->pLeaf = pIter->pNextLeaf; + pIter->pNextLeaf = 0; + }else if( pIter->iLeafPgno<=pSeg->pgnoLast ){ + pIter->pLeaf = fts5DataRead(p, + FTS5_SEGMENT_ROWID(pSeg->iSegid, pIter->iLeafPgno) + ); + }else{ + pIter->pLeaf = 0; + } + pLeaf = pIter->pLeaf; + + if( pLeaf ){ + pIter->iPgidxOff = pLeaf->szLeaf; + if( fts5LeafIsTermless(pLeaf) ){ + pIter->iEndofDoclist = pLeaf->nn+1; + }else{ + pIter->iPgidxOff += fts5GetVarint32(&pLeaf->p[pIter->iPgidxOff], + pIter->iEndofDoclist + ); + } + } +} + +/* +** Argument p points to a buffer containing a varint to be interpreted as a +** position list size field. Read the varint and return the number of bytes +** read. Before returning, set *pnSz to the number of bytes in the position +** list, and *pbDel to true if the delete flag is set, or false otherwise. +*/ +static int fts5GetPoslistSize(const u8 *p, int *pnSz, int *pbDel){ + int nSz; + int n = 0; + fts5FastGetVarint32(p, n, nSz); + assert_nc( nSz>=0 ); + *pnSz = nSz/2; + *pbDel = nSz & 0x0001; + return n; +} + +/* +** Fts5SegIter.iLeafOffset currently points to the first byte of a +** position-list size field. Read the value of the field and store it +** in the following variables: +** +** Fts5SegIter.nPos +** Fts5SegIter.bDel +** +** Leave Fts5SegIter.iLeafOffset pointing to the first byte of the +** position list content (if any). +*/ +static void fts5SegIterLoadNPos(Fts5Index *p, Fts5SegIter *pIter){ + if( p->rc==SQLITE_OK ){ + int iOff = pIter->iLeafOffset; /* Offset to read at */ + ASSERT_SZLEAF_OK(pIter->pLeaf); + if( p->pConfig->eDetail==FTS5_DETAIL_NONE ){ + int iEod = MIN(pIter->iEndofDoclist, pIter->pLeaf->szLeaf); + pIter->bDel = 0; + pIter->nPos = 1; + if( iOff<iEod && pIter->pLeaf->p[iOff]==0 ){ + pIter->bDel = 1; + iOff++; + if( iOff<iEod && pIter->pLeaf->p[iOff]==0 ){ + pIter->nPos = 1; + iOff++; + }else{ + pIter->nPos = 0; + } + } + }else{ + int nSz; + fts5FastGetVarint32(pIter->pLeaf->p, iOff, nSz); + pIter->bDel = (nSz & 0x0001); + pIter->nPos = nSz>>1; + assert_nc( pIter->nPos>=0 ); + } + pIter->iLeafOffset = iOff; + } +} + +static void fts5SegIterLoadRowid(Fts5Index *p, Fts5SegIter *pIter){ + u8 *a = pIter->pLeaf->p; /* Buffer to read data from */ + int iOff = pIter->iLeafOffset; + + ASSERT_SZLEAF_OK(pIter->pLeaf); + if( iOff>=pIter->pLeaf->szLeaf ){ + fts5SegIterNextPage(p, pIter); + if( pIter->pLeaf==0 ){ + if( p->rc==SQLITE_OK ) p->rc = FTS5_CORRUPT; + return; + } + iOff = 4; + a = pIter->pLeaf->p; + } + iOff += sqlite3Fts5GetVarint(&a[iOff], (u64*)&pIter->iRowid); + pIter->iLeafOffset = iOff; +} + +/* +** Fts5SegIter.iLeafOffset currently points to the first byte of the +** "nSuffix" field of a term. Function parameter nKeep contains the value +** of the "nPrefix" field (if there was one - it is passed 0 if this is +** the first term in the segment). +** +** This function populates: +** +** Fts5SegIter.term +** Fts5SegIter.rowid +** +** accordingly and leaves (Fts5SegIter.iLeafOffset) set to the content of +** the first position list. The position list belonging to document +** (Fts5SegIter.iRowid). 
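+**
+** Terms are prefix-compressed. For example, if the previous term was
+** "abcde" and the current entry has nPrefix==3 and nSuffix==2 with
+** suffix bytes "xy", the term loaded here is "abcxy".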
+*/ +static void fts5SegIterLoadTerm(Fts5Index *p, Fts5SegIter *pIter, int nKeep){ + u8 *a = pIter->pLeaf->p; /* Buffer to read data from */ + int iOff = pIter->iLeafOffset; /* Offset to read at */ + int nNew; /* Bytes of new data */ + + iOff += fts5GetVarint32(&a[iOff], nNew); + if( iOff+nNew>pIter->pLeaf->nn ){ + p->rc = FTS5_CORRUPT; + return; + } + pIter->term.n = nKeep; + fts5BufferAppendBlob(&p->rc, &pIter->term, nNew, &a[iOff]); + iOff += nNew; + pIter->iTermLeafOffset = iOff; + pIter->iTermLeafPgno = pIter->iLeafPgno; + pIter->iLeafOffset = iOff; + + if( pIter->iPgidxOff>=pIter->pLeaf->nn ){ + pIter->iEndofDoclist = pIter->pLeaf->nn+1; + }else{ + int nExtra; + pIter->iPgidxOff += fts5GetVarint32(&a[pIter->iPgidxOff], nExtra); + pIter->iEndofDoclist += nExtra; + } + + fts5SegIterLoadRowid(p, pIter); +} + +static void fts5SegIterNext(Fts5Index*, Fts5SegIter*, int*); +static void fts5SegIterNext_Reverse(Fts5Index*, Fts5SegIter*, int*); +static void fts5SegIterNext_None(Fts5Index*, Fts5SegIter*, int*); + +static void fts5SegIterSetNext(Fts5Index *p, Fts5SegIter *pIter){ + if( pIter->flags & FTS5_SEGITER_REVERSE ){ + pIter->xNext = fts5SegIterNext_Reverse; + }else if( p->pConfig->eDetail==FTS5_DETAIL_NONE ){ + pIter->xNext = fts5SegIterNext_None; + }else{ + pIter->xNext = fts5SegIterNext; + } +} + +/* +** Initialize the iterator object pIter to iterate through the entries in +** segment pSeg. The iterator is left pointing to the first entry when +** this function returns. +** +** If an error occurs, Fts5Index.rc is set to an appropriate error code. If +** an error has already occurred when this function is called, it is a no-op. +*/ +static void fts5SegIterInit( + Fts5Index *p, /* FTS index object */ + Fts5StructureSegment *pSeg, /* Description of segment */ + Fts5SegIter *pIter /* Object to populate */ +){ + if( pSeg->pgnoFirst==0 ){ + /* This happens if the segment is being used as an input to an incremental + ** merge and all data has already been "trimmed". See function + ** fts5TrimSegments() for details. In this case leave the iterator empty. + ** The caller will see the (pIter->pLeaf==0) and assume the iterator is + ** at EOF already. */ + assert( pIter->pLeaf==0 ); + return; + } + + if( p->rc==SQLITE_OK ){ + memset(pIter, 0, sizeof(*pIter)); + fts5SegIterSetNext(p, pIter); + pIter->pSeg = pSeg; + pIter->iLeafPgno = pSeg->pgnoFirst-1; + fts5SegIterNextPage(p, pIter); + } + + if( p->rc==SQLITE_OK ){ + pIter->iLeafOffset = 4; + assert_nc( pIter->pLeaf->nn>4 ); + assert( fts5LeafFirstTermOff(pIter->pLeaf)==4 ); + pIter->iPgidxOff = pIter->pLeaf->szLeaf+1; + fts5SegIterLoadTerm(p, pIter, 0); + fts5SegIterLoadNPos(p, pIter); + } +} + +/* +** This function is only ever called on iterators created by calls to +** Fts5IndexQuery() with the FTS5INDEX_QUERY_DESC flag set. +** +** The iterator is in an unusual state when this function is called: the +** Fts5SegIter.iLeafOffset variable is set to the offset of the start of +** the position-list size field for the first relevant rowid on the page. +** Fts5SegIter.rowid is set, but nPos and bDel are not. +** +** This function advances the iterator so that it points to the last +** relevant rowid on the page and, if necessary, initializes the +** aRowidOffset[] and iRowidOffset variables. At this point the iterator +** is in its regular state - Fts5SegIter.iLeafOffset points to the first +** byte of the position list content associated with said rowid. 
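+**
+** The aRowidOffset[] array is populated with the offsets of the
+** position-list size fields for the earlier rowids on the page, so that
+** fts5SegIterNext_Reverse() can later step backwards through them.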
+*/ +static void fts5SegIterReverseInitPage(Fts5Index *p, Fts5SegIter *pIter){ + int eDetail = p->pConfig->eDetail; + int n = pIter->pLeaf->szLeaf; + int i = pIter->iLeafOffset; + u8 *a = pIter->pLeaf->p; + int iRowidOffset = 0; + + if( n>pIter->iEndofDoclist ){ + n = pIter->iEndofDoclist; + } + + ASSERT_SZLEAF_OK(pIter->pLeaf); + while( 1 ){ + i64 iDelta = 0; + + if( eDetail==FTS5_DETAIL_NONE ){ + /* todo */ + if( i<n && a[i]==0 ){ + i++; + if( i<n && a[i]==0 ) i++; + } + }else{ + int nPos; + int bDummy; + i += fts5GetPoslistSize(&a[i], &nPos, &bDummy); + i += nPos; + } + if( i>=n ) break; + i += fts5GetVarint(&a[i], (u64*)&iDelta); + pIter->iRowid += iDelta; + + /* If necessary, grow the pIter->aRowidOffset[] array. */ + if( iRowidOffset>=pIter->nRowidOffset ){ + int nNew = pIter->nRowidOffset + 8; + int *aNew = (int*)sqlite3_realloc(pIter->aRowidOffset, nNew*sizeof(int)); + if( aNew==0 ){ + p->rc = SQLITE_NOMEM; + break; + } + pIter->aRowidOffset = aNew; + pIter->nRowidOffset = nNew; + } + + pIter->aRowidOffset[iRowidOffset++] = pIter->iLeafOffset; + pIter->iLeafOffset = i; + } + pIter->iRowidOffset = iRowidOffset; + fts5SegIterLoadNPos(p, pIter); +} + +/* +** +*/ +static void fts5SegIterReverseNewPage(Fts5Index *p, Fts5SegIter *pIter){ + assert( pIter->flags & FTS5_SEGITER_REVERSE ); + assert( pIter->flags & FTS5_SEGITER_ONETERM ); + + fts5DataRelease(pIter->pLeaf); + pIter->pLeaf = 0; + while( p->rc==SQLITE_OK && pIter->iLeafPgno>pIter->iTermLeafPgno ){ + Fts5Data *pNew; + pIter->iLeafPgno--; + pNew = fts5DataRead(p, FTS5_SEGMENT_ROWID( + pIter->pSeg->iSegid, pIter->iLeafPgno + )); + if( pNew ){ + /* iTermLeafOffset may be equal to szLeaf if the term is the last + ** thing on the page - i.e. the first rowid is on the following page. + ** In this case leave pIter->pLeaf==0, this iterator is at EOF. */ + if( pIter->iLeafPgno==pIter->iTermLeafPgno ){ + assert( pIter->pLeaf==0 ); + if( pIter->iTermLeafOffset<pNew->szLeaf ){ + pIter->pLeaf = pNew; + pIter->iLeafOffset = pIter->iTermLeafOffset; + } + }else{ + int iRowidOff; + iRowidOff = fts5LeafFirstRowidOff(pNew); + if( iRowidOff ){ + pIter->pLeaf = pNew; + pIter->iLeafOffset = iRowidOff; + } + } + + if( pIter->pLeaf ){ + u8 *a = &pIter->pLeaf->p[pIter->iLeafOffset]; + pIter->iLeafOffset += fts5GetVarint(a, (u64*)&pIter->iRowid); + break; + }else{ + fts5DataRelease(pNew); + } + } + } + + if( pIter->pLeaf ){ + pIter->iEndofDoclist = pIter->pLeaf->nn+1; + fts5SegIterReverseInitPage(p, pIter); + } +} + +/* +** Return true if the iterator passed as the second argument currently +** points to a delete marker. A delete marker is an entry with a 0 byte +** position-list. +*/ +static int fts5MultiIterIsEmpty(Fts5Index *p, Fts5Iter *pIter){ + Fts5SegIter *pSeg = &pIter->aSeg[pIter->aFirst[1].iFirst]; + return (p->rc==SQLITE_OK && pSeg->pLeaf && pSeg->nPos==0); +} + +/* +** Advance iterator pIter to the next entry. +** +** This version of fts5SegIterNext() is only used by reverse iterators. 
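+**
+** Within a page, entries are visited in descending rowid order by
+** stepping backwards through the aRowidOffset[] array. Once the array
+** is exhausted, fts5SegIterReverseNewPage() is called to move to the
+** preceding leaf page of the doclist.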
+*/ +static void fts5SegIterNext_Reverse( + Fts5Index *p, /* FTS5 backend object */ + Fts5SegIter *pIter, /* Iterator to advance */ + int *pbUnused /* Unused */ +){ + assert( pIter->flags & FTS5_SEGITER_REVERSE ); + assert( pIter->pNextLeaf==0 ); + UNUSED_PARAM(pbUnused); + + if( pIter->iRowidOffset>0 ){ + u8 *a = pIter->pLeaf->p; + int iOff; + i64 iDelta; + + pIter->iRowidOffset--; + pIter->iLeafOffset = pIter->aRowidOffset[pIter->iRowidOffset]; + fts5SegIterLoadNPos(p, pIter); + iOff = pIter->iLeafOffset; + if( p->pConfig->eDetail!=FTS5_DETAIL_NONE ){ + iOff += pIter->nPos; + } + fts5GetVarint(&a[iOff], (u64*)&iDelta); + pIter->iRowid -= iDelta; + }else{ + fts5SegIterReverseNewPage(p, pIter); + } +} + +/* +** Advance iterator pIter to the next entry. +** +** This version of fts5SegIterNext() is only used if detail=none and the +** iterator is not a reverse direction iterator. +*/ +static void fts5SegIterNext_None( + Fts5Index *p, /* FTS5 backend object */ + Fts5SegIter *pIter, /* Iterator to advance */ + int *pbNewTerm /* OUT: Set for new term */ +){ + int iOff; + + assert( p->rc==SQLITE_OK ); + assert( (pIter->flags & FTS5_SEGITER_REVERSE)==0 ); + assert( p->pConfig->eDetail==FTS5_DETAIL_NONE ); + + ASSERT_SZLEAF_OK(pIter->pLeaf); + iOff = pIter->iLeafOffset; + + /* Next entry is on the next page */ + if( pIter->pSeg && iOff>=pIter->pLeaf->szLeaf ){ + fts5SegIterNextPage(p, pIter); + if( p->rc || pIter->pLeaf==0 ) return; + pIter->iRowid = 0; + iOff = 4; + } + + if( iOff<pIter->iEndofDoclist ){ + /* Next entry is on the current page */ + i64 iDelta; + iOff += sqlite3Fts5GetVarint(&pIter->pLeaf->p[iOff], (u64*)&iDelta); + pIter->iLeafOffset = iOff; + pIter->iRowid += iDelta; + }else if( (pIter->flags & FTS5_SEGITER_ONETERM)==0 ){ + if( pIter->pSeg ){ + int nKeep = 0; + if( iOff!=fts5LeafFirstTermOff(pIter->pLeaf) ){ + iOff += fts5GetVarint32(&pIter->pLeaf->p[iOff], nKeep); + } + pIter->iLeafOffset = iOff; + fts5SegIterLoadTerm(p, pIter, nKeep); + }else{ + const u8 *pList = 0; + const char *zTerm = 0; + int nList; + sqlite3Fts5HashScanNext(p->pHash); + sqlite3Fts5HashScanEntry(p->pHash, &zTerm, &pList, &nList); + if( pList==0 ) goto next_none_eof; + pIter->pLeaf->p = (u8*)pList; + pIter->pLeaf->nn = nList; + pIter->pLeaf->szLeaf = nList; + pIter->iEndofDoclist = nList; + sqlite3Fts5BufferSet(&p->rc,&pIter->term, (int)strlen(zTerm), (u8*)zTerm); + pIter->iLeafOffset = fts5GetVarint(pList, (u64*)&pIter->iRowid); + } + + if( pbNewTerm ) *pbNewTerm = 1; + }else{ + goto next_none_eof; + } + + fts5SegIterLoadNPos(p, pIter); + + return; + next_none_eof: + fts5DataRelease(pIter->pLeaf); + pIter->pLeaf = 0; +} + + +/* +** Advance iterator pIter to the next entry. +** +** If an error occurs, Fts5Index.rc is set to an appropriate error code. It +** is not considered an error if the iterator reaches EOF. If an error has +** already occurred when this function is called, it is a no-op. +*/ +static void fts5SegIterNext( + Fts5Index *p, /* FTS5 backend object */ + Fts5SegIter *pIter, /* Iterator to advance */ + int *pbNewTerm /* OUT: Set for new term */ +){ + Fts5Data *pLeaf = pIter->pLeaf; + int iOff; + int bNewTerm = 0; + int nKeep = 0; + u8 *a; + int n; + + assert( pbNewTerm==0 || *pbNewTerm==0 ); + assert( p->pConfig->eDetail!=FTS5_DETAIL_NONE ); + + /* Search for the end of the position list within the current page. */ + a = pLeaf->p; + n = pLeaf->szLeaf; + + ASSERT_SZLEAF_OK(pLeaf); + iOff = pIter->iLeafOffset + pIter->nPos; + + if( iOff<n ){ + /* The next entry is on the current page. 
*/ + assert_nc( iOff<=pIter->iEndofDoclist ); + if( iOff>=pIter->iEndofDoclist ){ + bNewTerm = 1; + if( iOff!=fts5LeafFirstTermOff(pLeaf) ){ + iOff += fts5GetVarint32(&a[iOff], nKeep); + } + }else{ + u64 iDelta; + iOff += sqlite3Fts5GetVarint(&a[iOff], &iDelta); + pIter->iRowid += iDelta; + assert_nc( iDelta>0 ); + } + pIter->iLeafOffset = iOff; + + }else if( pIter->pSeg==0 ){ + const u8 *pList = 0; + const char *zTerm = 0; + int nList = 0; + assert( (pIter->flags & FTS5_SEGITER_ONETERM) || pbNewTerm ); + if( 0==(pIter->flags & FTS5_SEGITER_ONETERM) ){ + sqlite3Fts5HashScanNext(p->pHash); + sqlite3Fts5HashScanEntry(p->pHash, &zTerm, &pList, &nList); + } + if( pList==0 ){ + fts5DataRelease(pIter->pLeaf); + pIter->pLeaf = 0; + }else{ + pIter->pLeaf->p = (u8*)pList; + pIter->pLeaf->nn = nList; + pIter->pLeaf->szLeaf = nList; + pIter->iEndofDoclist = nList+1; + sqlite3Fts5BufferSet(&p->rc, &pIter->term, (int)strlen(zTerm), + (u8*)zTerm); + pIter->iLeafOffset = fts5GetVarint(pList, (u64*)&pIter->iRowid); + *pbNewTerm = 1; + } + }else{ + iOff = 0; + /* Next entry is not on the current page */ + while( iOff==0 ){ + fts5SegIterNextPage(p, pIter); + pLeaf = pIter->pLeaf; + if( pLeaf==0 ) break; + ASSERT_SZLEAF_OK(pLeaf); + if( (iOff = fts5LeafFirstRowidOff(pLeaf)) && iOff<pLeaf->szLeaf ){ + iOff += sqlite3Fts5GetVarint(&pLeaf->p[iOff], (u64*)&pIter->iRowid); + pIter->iLeafOffset = iOff; + + if( pLeaf->nn>pLeaf->szLeaf ){ + pIter->iPgidxOff = pLeaf->szLeaf + fts5GetVarint32( + &pLeaf->p[pLeaf->szLeaf], pIter->iEndofDoclist + ); + } + + } + else if( pLeaf->nn>pLeaf->szLeaf ){ + pIter->iPgidxOff = pLeaf->szLeaf + fts5GetVarint32( + &pLeaf->p[pLeaf->szLeaf], iOff + ); + pIter->iLeafOffset = iOff; + pIter->iEndofDoclist = iOff; + bNewTerm = 1; + } + assert_nc( iOff<pLeaf->szLeaf ); + if( iOff>pLeaf->szLeaf ){ + p->rc = FTS5_CORRUPT; + return; + } + } + } + + /* Check if the iterator is now at EOF. If so, return early. */ + if( pIter->pLeaf ){ + if( bNewTerm ){ + if( pIter->flags & FTS5_SEGITER_ONETERM ){ + fts5DataRelease(pIter->pLeaf); + pIter->pLeaf = 0; + }else{ + fts5SegIterLoadTerm(p, pIter, nKeep); + fts5SegIterLoadNPos(p, pIter); + if( pbNewTerm ) *pbNewTerm = 1; + } + }else{ + /* The following could be done by calling fts5SegIterLoadNPos(). But + ** this block is particularly performance critical, so equivalent + ** code is inlined. + ** + ** Later: Switched back to fts5SegIterLoadNPos() because it supports + ** detail=none mode. Not ideal. + */ + int nSz; + assert( p->rc==SQLITE_OK ); + fts5FastGetVarint32(pIter->pLeaf->p, pIter->iLeafOffset, nSz); + pIter->bDel = (nSz & 0x0001); + pIter->nPos = nSz>>1; + assert_nc( pIter->nPos>=0 ); + } + } +} + +#define SWAPVAL(T, a, b) { T tmp; tmp=a; a=b; b=tmp; } + +#define fts5IndexSkipVarint(a, iOff) { \ + int iEnd = iOff+9; \ + while( (a[iOff++] & 0x80) && iOff<iEnd ); \ +} + +/* +** Iterator pIter currently points to the first rowid in a doclist. This +** function sets the iterator up so that iterates in reverse order through +** the doclist. +*/ +static void fts5SegIterReverse(Fts5Index *p, Fts5SegIter *pIter){ + Fts5DlidxIter *pDlidx = pIter->pDlidx; + Fts5Data *pLast = 0; + int pgnoLast = 0; + + if( pDlidx ){ + int iSegid = pIter->pSeg->iSegid; + pgnoLast = fts5DlidxIterPgno(pDlidx); + pLast = fts5DataRead(p, FTS5_SEGMENT_ROWID(iSegid, pgnoLast)); + }else{ + Fts5Data *pLeaf = pIter->pLeaf; /* Current leaf data */ + + /* Currently, Fts5SegIter.iLeafOffset points to the first byte of + ** position-list content for the current rowid. 
Back it up so that it + ** points to the start of the position-list size field. */ + int iPoslist; + if( pIter->iTermLeafPgno==pIter->iLeafPgno ){ + iPoslist = pIter->iTermLeafOffset; + }else{ + iPoslist = 4; + } + fts5IndexSkipVarint(pLeaf->p, iPoslist); + pIter->iLeafOffset = iPoslist; + + /* If this condition is true then the largest rowid for the current + ** term may not be stored on the current page. So search forward to + ** see where said rowid really is. */ + if( pIter->iEndofDoclist>=pLeaf->szLeaf ){ + int pgno; + Fts5StructureSegment *pSeg = pIter->pSeg; + + /* The last rowid in the doclist may not be on the current page. Search + ** forward to find the page containing the last rowid. */ + for(pgno=pIter->iLeafPgno+1; !p->rc && pgno<=pSeg->pgnoLast; pgno++){ + i64 iAbs = FTS5_SEGMENT_ROWID(pSeg->iSegid, pgno); + Fts5Data *pNew = fts5DataRead(p, iAbs); + if( pNew ){ + int iRowid, bTermless; + iRowid = fts5LeafFirstRowidOff(pNew); + bTermless = fts5LeafIsTermless(pNew); + if( iRowid ){ + SWAPVAL(Fts5Data*, pNew, pLast); + pgnoLast = pgno; + } + fts5DataRelease(pNew); + if( bTermless==0 ) break; + } + } + } + } + + /* If pLast is NULL at this point, then the last rowid for this doclist + ** lies on the page currently indicated by the iterator. In this case + ** pIter->iLeafOffset is already set to point to the position-list size + ** field associated with the first relevant rowid on the page. + ** + ** Or, if pLast is non-NULL, then it is the page that contains the last + ** rowid. In this case configure the iterator so that it points to the + ** first rowid on this page. + */ + if( pLast ){ + int iOff; + fts5DataRelease(pIter->pLeaf); + pIter->pLeaf = pLast; + pIter->iLeafPgno = pgnoLast; + iOff = fts5LeafFirstRowidOff(pLast); + iOff += fts5GetVarint(&pLast->p[iOff], (u64*)&pIter->iRowid); + pIter->iLeafOffset = iOff; + + if( fts5LeafIsTermless(pLast) ){ + pIter->iEndofDoclist = pLast->nn+1; + }else{ + pIter->iEndofDoclist = fts5LeafFirstTermOff(pLast); + } + + } + + fts5SegIterReverseInitPage(p, pIter); +} + +/* +** Iterator pIter currently points to the first rowid of a doclist. +** There is a doclist-index associated with the final term on the current +** page. If the current term is the last term on the page, load the +** doclist-index from disk and initialize an iterator at (pIter->pDlidx). +*/ +static void fts5SegIterLoadDlidx(Fts5Index *p, Fts5SegIter *pIter){ + int iSeg = pIter->pSeg->iSegid; + int bRev = (pIter->flags & FTS5_SEGITER_REVERSE); + Fts5Data *pLeaf = pIter->pLeaf; /* Current leaf data */ + + assert( pIter->flags & FTS5_SEGITER_ONETERM ); + assert( pIter->pDlidx==0 ); + + /* Check if the current doclist ends on this page. If it does, return + ** early without loading the doclist-index (as it belongs to a different + ** term. */ + if( pIter->iTermLeafPgno==pIter->iLeafPgno + && pIter->iEndofDoclist<pLeaf->szLeaf + ){ + return; + } + + pIter->pDlidx = fts5DlidxIterInit(p, bRev, iSeg, pIter->iTermLeafPgno); +} + +/* +** The iterator object passed as the second argument currently contains +** no valid values except for the Fts5SegIter.pLeaf member variable. This +** function searches the leaf page for a term matching (pTerm/nTerm). +** +** If the specified term is found on the page, then the iterator is left +** pointing to it. If argument bGe is zero and the term is not found, +** the iterator is left pointing at EOF. 
+** +** If bGe is non-zero and the specified term is not found, then the +** iterator is left pointing to the smallest term in the segment that +** is larger than the specified term, even if this term is not on the +** current page. +*/ +static void fts5LeafSeek( + Fts5Index *p, /* Leave any error code here */ + int bGe, /* True for a >= search */ + Fts5SegIter *pIter, /* Iterator to seek */ + const u8 *pTerm, int nTerm /* Term to search for */ +){ + int iOff; + const u8 *a = pIter->pLeaf->p; + int szLeaf = pIter->pLeaf->szLeaf; + int n = pIter->pLeaf->nn; + + int nMatch = 0; + int nKeep = 0; + int nNew = 0; + int iTermOff; + int iPgidx; /* Current offset in pgidx */ + int bEndOfPage = 0; + + assert( p->rc==SQLITE_OK ); + + iPgidx = szLeaf; + iPgidx += fts5GetVarint32(&a[iPgidx], iTermOff); + iOff = iTermOff; + + while( 1 ){ + + /* Figure out how many new bytes are in this term */ + fts5FastGetVarint32(a, iOff, nNew); + if( nKeep<nMatch ){ + goto search_failed; + } + + assert( nKeep>=nMatch ); + if( nKeep==nMatch ){ + int nCmp; + int i; + nCmp = MIN(nNew, nTerm-nMatch); + for(i=0; i<nCmp; i++){ + if( a[iOff+i]!=pTerm[nMatch+i] ) break; + } + nMatch += i; + + if( nTerm==nMatch ){ + if( i==nNew ){ + goto search_success; + }else{ + goto search_failed; + } + }else if( i<nNew && a[iOff+i]>pTerm[nMatch] ){ + goto search_failed; + } + } + + if( iPgidx>=n ){ + bEndOfPage = 1; + break; + } + + iPgidx += fts5GetVarint32(&a[iPgidx], nKeep); + iTermOff += nKeep; + iOff = iTermOff; + + /* Read the nKeep field of the next term. */ + fts5FastGetVarint32(a, iOff, nKeep); + } + + search_failed: + if( bGe==0 ){ + fts5DataRelease(pIter->pLeaf); + pIter->pLeaf = 0; + return; + }else if( bEndOfPage ){ + do { + fts5SegIterNextPage(p, pIter); + if( pIter->pLeaf==0 ) return; + a = pIter->pLeaf->p; + if( fts5LeafIsTermless(pIter->pLeaf)==0 ){ + iPgidx = pIter->pLeaf->szLeaf; + iPgidx += fts5GetVarint32(&pIter->pLeaf->p[iPgidx], iOff); + if( iOff<4 || iOff>=pIter->pLeaf->szLeaf ){ + p->rc = FTS5_CORRUPT; + }else{ + nKeep = 0; + iTermOff = iOff; + n = pIter->pLeaf->nn; + iOff += fts5GetVarint32(&a[iOff], nNew); + break; + } + } + }while( 1 ); + } + + search_success: + + pIter->iLeafOffset = iOff + nNew; + pIter->iTermLeafOffset = pIter->iLeafOffset; + pIter->iTermLeafPgno = pIter->iLeafPgno; + + fts5BufferSet(&p->rc, &pIter->term, nKeep, pTerm); + fts5BufferAppendBlob(&p->rc, &pIter->term, nNew, &a[iOff]); + + if( iPgidx>=n ){ + pIter->iEndofDoclist = pIter->pLeaf->nn+1; + }else{ + int nExtra; + iPgidx += fts5GetVarint32(&a[iPgidx], nExtra); + pIter->iEndofDoclist = iTermOff + nExtra; + } + pIter->iPgidxOff = iPgidx; + + fts5SegIterLoadRowid(p, pIter); + fts5SegIterLoadNPos(p, pIter); +} + +/* +** Initialize the object pIter to point to term pTerm/nTerm within segment +** pSeg. If there is no such term in the index, the iterator is set to EOF. +** +** If an error occurs, Fts5Index.rc is set to an appropriate error code. If +** an error has already occurred when this function is called, it is a no-op. 
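+**
+** The leaf page at which to begin the search is found by querying the
+** %_idx table for the largest indexed term that is <= (pTerm/nTerm).
+** As interpreted here, the stored pgno value holds the page number in
+** its upper bits, with bit 0x01 set if a doclist-index record exists
+** for that term.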
+*/ +static void fts5SegIterSeekInit( + Fts5Index *p, /* FTS5 backend */ + const u8 *pTerm, int nTerm, /* Term to seek to */ + int flags, /* Mask of FTS5INDEX_XXX flags */ + Fts5StructureSegment *pSeg, /* Description of segment */ + Fts5SegIter *pIter /* Object to populate */ +){ + int iPg = 1; + int bGe = (flags & FTS5INDEX_QUERY_SCAN); + int bDlidx = 0; /* True if there is a doclist-index */ + + assert( bGe==0 || (flags & FTS5INDEX_QUERY_DESC)==0 ); + assert( pTerm && nTerm ); + memset(pIter, 0, sizeof(*pIter)); + pIter->pSeg = pSeg; + + /* This block sets stack variable iPg to the leaf page number that may + ** contain term (pTerm/nTerm), if it is present in the segment. */ + if( p->pIdxSelect==0 ){ + Fts5Config *pConfig = p->pConfig; + fts5IndexPrepareStmt(p, &p->pIdxSelect, sqlite3_mprintf( + "SELECT pgno FROM '%q'.'%q_idx' WHERE " + "segid=? AND term<=? ORDER BY term DESC LIMIT 1", + pConfig->zDb, pConfig->zName + )); + } + if( p->rc ) return; + sqlite3_bind_int(p->pIdxSelect, 1, pSeg->iSegid); + sqlite3_bind_blob(p->pIdxSelect, 2, pTerm, nTerm, SQLITE_STATIC); + if( SQLITE_ROW==sqlite3_step(p->pIdxSelect) ){ + i64 val = sqlite3_column_int(p->pIdxSelect, 0); + iPg = (int)(val>>1); + bDlidx = (val & 0x0001); + } + p->rc = sqlite3_reset(p->pIdxSelect); + + if( iPg<pSeg->pgnoFirst ){ + iPg = pSeg->pgnoFirst; + bDlidx = 0; + } + + pIter->iLeafPgno = iPg - 1; + fts5SegIterNextPage(p, pIter); + + if( pIter->pLeaf ){ + fts5LeafSeek(p, bGe, pIter, pTerm, nTerm); + } + + if( p->rc==SQLITE_OK && bGe==0 ){ + pIter->flags |= FTS5_SEGITER_ONETERM; + if( pIter->pLeaf ){ + if( flags & FTS5INDEX_QUERY_DESC ){ + pIter->flags |= FTS5_SEGITER_REVERSE; + } + if( bDlidx ){ + fts5SegIterLoadDlidx(p, pIter); + } + if( flags & FTS5INDEX_QUERY_DESC ){ + fts5SegIterReverse(p, pIter); + } + } + } + + fts5SegIterSetNext(p, pIter); + + /* Either: + ** + ** 1) an error has occurred, or + ** 2) the iterator points to EOF, or + ** 3) the iterator points to an entry with term (pTerm/nTerm), or + ** 4) the FTS5INDEX_QUERY_SCAN flag was set and the iterator points + ** to an entry with a term greater than or equal to (pTerm/nTerm). + */ + assert( p->rc!=SQLITE_OK /* 1 */ + || pIter->pLeaf==0 /* 2 */ + || fts5BufferCompareBlob(&pIter->term, pTerm, nTerm)==0 /* 3 */ + || (bGe && fts5BufferCompareBlob(&pIter->term, pTerm, nTerm)>0) /* 4 */ + ); +} + +/* +** Initialize the object pIter to point to term pTerm/nTerm within the +** in-memory hash table. If there is no such term in the hash-table, the +** iterator is set to EOF. +** +** If an error occurs, Fts5Index.rc is set to an appropriate error code. If +** an error has already occurred when this function is called, it is a no-op. +*/ +static void fts5SegIterHashInit( + Fts5Index *p, /* FTS5 backend */ + const u8 *pTerm, int nTerm, /* Term to seek to */ + int flags, /* Mask of FTS5INDEX_XXX flags */ + Fts5SegIter *pIter /* Object to populate */ +){ + const u8 *pList = 0; + int nList = 0; + const u8 *z = 0; + int n = 0; + + assert( p->pHash ); + assert( p->rc==SQLITE_OK ); + + if( pTerm==0 || (flags & FTS5INDEX_QUERY_SCAN) ){ + p->rc = sqlite3Fts5HashScanInit(p->pHash, (const char*)pTerm, nTerm); + sqlite3Fts5HashScanEntry(p->pHash, (const char**)&z, &pList, &nList); + n = (z ? 
(int)strlen((const char*)z) : 0); + }else{ + pIter->flags |= FTS5_SEGITER_ONETERM; + sqlite3Fts5HashQuery(p->pHash, (const char*)pTerm, nTerm, &pList, &nList); + z = pTerm; + n = nTerm; + } + + if( pList ){ + Fts5Data *pLeaf; + sqlite3Fts5BufferSet(&p->rc, &pIter->term, n, z); + pLeaf = fts5IdxMalloc(p, sizeof(Fts5Data)); + if( pLeaf==0 ) return; + pLeaf->p = (u8*)pList; + pLeaf->nn = pLeaf->szLeaf = nList; + pIter->pLeaf = pLeaf; + pIter->iLeafOffset = fts5GetVarint(pLeaf->p, (u64*)&pIter->iRowid); + pIter->iEndofDoclist = pLeaf->nn; + + if( flags & FTS5INDEX_QUERY_DESC ){ + pIter->flags |= FTS5_SEGITER_REVERSE; + fts5SegIterReverseInitPage(p, pIter); + }else{ + fts5SegIterLoadNPos(p, pIter); + } + } + + fts5SegIterSetNext(p, pIter); +} + +/* +** Zero the iterator passed as the only argument. +*/ +static void fts5SegIterClear(Fts5SegIter *pIter){ + fts5BufferFree(&pIter->term); + fts5DataRelease(pIter->pLeaf); + fts5DataRelease(pIter->pNextLeaf); + fts5DlidxIterFree(pIter->pDlidx); + sqlite3_free(pIter->aRowidOffset); + memset(pIter, 0, sizeof(Fts5SegIter)); +} + +#ifdef SQLITE_DEBUG + +/* +** This function is used as part of the big assert() procedure implemented by +** fts5AssertMultiIterSetup(). It ensures that the result currently stored +** in *pRes is the correct result of comparing the current positions of the +** two iterators. +*/ +static void fts5AssertComparisonResult( + Fts5Iter *pIter, + Fts5SegIter *p1, + Fts5SegIter *p2, + Fts5CResult *pRes +){ + int i1 = p1 - pIter->aSeg; + int i2 = p2 - pIter->aSeg; + + if( p1->pLeaf || p2->pLeaf ){ + if( p1->pLeaf==0 ){ + assert( pRes->iFirst==i2 ); + }else if( p2->pLeaf==0 ){ + assert( pRes->iFirst==i1 ); + }else{ + int nMin = MIN(p1->term.n, p2->term.n); + int res = memcmp(p1->term.p, p2->term.p, nMin); + if( res==0 ) res = p1->term.n - p2->term.n; + + if( res==0 ){ + assert( pRes->bTermEq==1 ); + assert( p1->iRowid!=p2->iRowid ); + res = ((p1->iRowid > p2->iRowid)==pIter->bRev) ? -1 : 1; + }else{ + assert( pRes->bTermEq==0 ); + } + + if( res<0 ){ + assert( pRes->iFirst==i1 ); + }else{ + assert( pRes->iFirst==i2 ); + } + } + } +} + +/* +** This function is a no-op unless SQLITE_DEBUG is defined when this module +** is compiled. In that case, this function is essentially an assert() +** statement used to verify that the contents of the pIter->aFirst[] array +** are correct. +*/ +static void fts5AssertMultiIterSetup(Fts5Index *p, Fts5Iter *pIter){ + if( p->rc==SQLITE_OK ){ + Fts5SegIter *pFirst = &pIter->aSeg[ pIter->aFirst[1].iFirst ]; + int i; + + assert( (pFirst->pLeaf==0)==pIter->base.bEof ); + + /* Check that pIter->iSwitchRowid is set correctly. */ + for(i=0; i<pIter->nSeg; i++){ + Fts5SegIter *p1 = &pIter->aSeg[i]; + assert( p1==pFirst + || p1->pLeaf==0 + || fts5BufferCompare(&pFirst->term, &p1->term) + || p1->iRowid==pIter->iSwitchRowid + || (p1->iRowid<pIter->iSwitchRowid)==pIter->bRev + ); + } + + for(i=0; i<pIter->nSeg; i+=2){ + Fts5SegIter *p1 = &pIter->aSeg[i]; + Fts5SegIter *p2 = &pIter->aSeg[i+1]; + Fts5CResult *pRes = &pIter->aFirst[(pIter->nSeg + i) / 2]; + fts5AssertComparisonResult(pIter, p1, p2, pRes); + } + + for(i=1; i<(pIter->nSeg / 2); i+=2){ + Fts5SegIter *p1 = &pIter->aSeg[ pIter->aFirst[i*2].iFirst ]; + Fts5SegIter *p2 = &pIter->aSeg[ pIter->aFirst[i*2+1].iFirst ]; + Fts5CResult *pRes = &pIter->aFirst[i]; + fts5AssertComparisonResult(pIter, p1, p2, pRes); + } + } +} +#else +# define fts5AssertMultiIterSetup(x,y) +#endif + +/* +** Do the comparison necessary to populate pIter->aFirst[iOut]. 
+** +** If the returned value is non-zero, then it is the index of an entry +** in the pIter->aSeg[] array that is (a) not at EOF, and (b) pointing +** to a key that is a duplicate of another, higher priority, +** segment-iterator in the pSeg->aSeg[] array. +*/ +static int fts5MultiIterDoCompare(Fts5Iter *pIter, int iOut){ + int i1; /* Index of left-hand Fts5SegIter */ + int i2; /* Index of right-hand Fts5SegIter */ + int iRes; + Fts5SegIter *p1; /* Left-hand Fts5SegIter */ + Fts5SegIter *p2; /* Right-hand Fts5SegIter */ + Fts5CResult *pRes = &pIter->aFirst[iOut]; + + assert( iOut<pIter->nSeg && iOut>0 ); + assert( pIter->bRev==0 || pIter->bRev==1 ); + + if( iOut>=(pIter->nSeg/2) ){ + i1 = (iOut - pIter->nSeg/2) * 2; + i2 = i1 + 1; + }else{ + i1 = pIter->aFirst[iOut*2].iFirst; + i2 = pIter->aFirst[iOut*2+1].iFirst; + } + p1 = &pIter->aSeg[i1]; + p2 = &pIter->aSeg[i2]; + + pRes->bTermEq = 0; + if( p1->pLeaf==0 ){ /* If p1 is at EOF */ + iRes = i2; + }else if( p2->pLeaf==0 ){ /* If p2 is at EOF */ + iRes = i1; + }else{ + int res = fts5BufferCompare(&p1->term, &p2->term); + if( res==0 ){ + assert( i2>i1 ); + assert( i2!=0 ); + pRes->bTermEq = 1; + if( p1->iRowid==p2->iRowid ){ + p1->bDel = p2->bDel; + return i2; + } + res = ((p1->iRowid > p2->iRowid)==pIter->bRev) ? -1 : +1; + } + assert( res!=0 ); + if( res<0 ){ + iRes = i1; + }else{ + iRes = i2; + } + } + + pRes->iFirst = (u16)iRes; + return 0; +} + +/* +** Move the seg-iter so that it points to the first rowid on page iLeafPgno. +** It is an error if leaf iLeafPgno does not exist or contains no rowids. +*/ +static void fts5SegIterGotoPage( + Fts5Index *p, /* FTS5 backend object */ + Fts5SegIter *pIter, /* Iterator to advance */ + int iLeafPgno +){ + assert( iLeafPgno>pIter->iLeafPgno ); + + if( iLeafPgno>pIter->pSeg->pgnoLast ){ + p->rc = FTS5_CORRUPT; + }else{ + fts5DataRelease(pIter->pNextLeaf); + pIter->pNextLeaf = 0; + pIter->iLeafPgno = iLeafPgno-1; + fts5SegIterNextPage(p, pIter); + assert( p->rc!=SQLITE_OK || pIter->iLeafPgno==iLeafPgno ); + + if( p->rc==SQLITE_OK ){ + int iOff; + u8 *a = pIter->pLeaf->p; + int n = pIter->pLeaf->szLeaf; + + iOff = fts5LeafFirstRowidOff(pIter->pLeaf); + if( iOff<4 || iOff>=n ){ + p->rc = FTS5_CORRUPT; + }else{ + iOff += fts5GetVarint(&a[iOff], (u64*)&pIter->iRowid); + pIter->iLeafOffset = iOff; + fts5SegIterLoadNPos(p, pIter); + } + } + } +} + +/* +** Advance the iterator passed as the second argument until it is at or +** past rowid iFrom. Regardless of the value of iFrom, the iterator is +** always advanced at least once. 
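+**
+** The doclist-index is used to skip directly to the leaf page that may
+** contain the first qualifying rowid, rather than stepping through the
+** intervening entries; the iterator is then advanced entry by entry
+** until a qualifying rowid is reached.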
+*/ +static void fts5SegIterNextFrom( + Fts5Index *p, /* FTS5 backend object */ + Fts5SegIter *pIter, /* Iterator to advance */ + i64 iMatch /* Advance iterator at least this far */ +){ + int bRev = (pIter->flags & FTS5_SEGITER_REVERSE); + Fts5DlidxIter *pDlidx = pIter->pDlidx; + int iLeafPgno = pIter->iLeafPgno; + int bMove = 1; + + assert( pIter->flags & FTS5_SEGITER_ONETERM ); + assert( pIter->pDlidx ); + assert( pIter->pLeaf ); + + if( bRev==0 ){ + while( !fts5DlidxIterEof(p, pDlidx) && iMatch>fts5DlidxIterRowid(pDlidx) ){ + iLeafPgno = fts5DlidxIterPgno(pDlidx); + fts5DlidxIterNext(p, pDlidx); + } + assert_nc( iLeafPgno>=pIter->iLeafPgno || p->rc ); + if( iLeafPgno>pIter->iLeafPgno ){ + fts5SegIterGotoPage(p, pIter, iLeafPgno); + bMove = 0; + } + }else{ + assert( pIter->pNextLeaf==0 ); + assert( iMatch<pIter->iRowid ); + while( !fts5DlidxIterEof(p, pDlidx) && iMatch<fts5DlidxIterRowid(pDlidx) ){ + fts5DlidxIterPrev(p, pDlidx); + } + iLeafPgno = fts5DlidxIterPgno(pDlidx); + + assert( fts5DlidxIterEof(p, pDlidx) || iLeafPgno<=pIter->iLeafPgno ); + + if( iLeafPgno<pIter->iLeafPgno ){ + pIter->iLeafPgno = iLeafPgno+1; + fts5SegIterReverseNewPage(p, pIter); + bMove = 0; + } + } + + do{ + if( bMove && p->rc==SQLITE_OK ) pIter->xNext(p, pIter, 0); + if( pIter->pLeaf==0 ) break; + if( bRev==0 && pIter->iRowid>=iMatch ) break; + if( bRev!=0 && pIter->iRowid<=iMatch ) break; + bMove = 1; + }while( p->rc==SQLITE_OK ); +} + + +/* +** Free the iterator object passed as the second argument. +*/ +static void fts5MultiIterFree(Fts5Iter *pIter){ + if( pIter ){ + int i; + for(i=0; i<pIter->nSeg; i++){ + fts5SegIterClear(&pIter->aSeg[i]); + } + fts5StructureRelease(pIter->pStruct); + fts5BufferFree(&pIter->poslist); + sqlite3_free(pIter); + } +} + +static void fts5MultiIterAdvanced( + Fts5Index *p, /* FTS5 backend to iterate within */ + Fts5Iter *pIter, /* Iterator to update aFirst[] array for */ + int iChanged, /* Index of sub-iterator just advanced */ + int iMinset /* Minimum entry in aFirst[] to set */ +){ + int i; + for(i=(pIter->nSeg+iChanged)/2; i>=iMinset && p->rc==SQLITE_OK; i=i/2){ + int iEq; + if( (iEq = fts5MultiIterDoCompare(pIter, i)) ){ + Fts5SegIter *pSeg = &pIter->aSeg[iEq]; + assert( p->rc==SQLITE_OK ); + pSeg->xNext(p, pSeg, 0); + i = pIter->nSeg + iEq; + } + } +} + +/* +** Sub-iterator iChanged of iterator pIter has just been advanced. It still +** points to the same term though - just a different rowid. This function +** attempts to update the contents of the pIter->aFirst[] accordingly. +** If it does so successfully, 0 is returned. Otherwise 1. +** +** If non-zero is returned, the caller should call fts5MultiIterAdvanced() +** on the iterator instead. That function does the same as this one, except +** that it deals with more complicated cases as well. +*/ +static int fts5MultiIterAdvanceRowid( + Fts5Iter *pIter, /* Iterator to update aFirst[] array for */ + int iChanged, /* Index of sub-iterator just advanced */ + Fts5SegIter **ppFirst +){ + Fts5SegIter *pNew = &pIter->aSeg[iChanged]; + + if( pNew->iRowid==pIter->iSwitchRowid + || (pNew->iRowid<pIter->iSwitchRowid)==pIter->bRev + ){ + int i; + Fts5SegIter *pOther = &pIter->aSeg[iChanged ^ 0x0001]; + pIter->iSwitchRowid = pIter->bRev ? 
SMALLEST_INT64 : LARGEST_INT64; + for(i=(pIter->nSeg+iChanged)/2; 1; i=i/2){ + Fts5CResult *pRes = &pIter->aFirst[i]; + + assert( pNew->pLeaf ); + assert( pRes->bTermEq==0 || pOther->pLeaf ); + + if( pRes->bTermEq ){ + if( pNew->iRowid==pOther->iRowid ){ + return 1; + }else if( (pOther->iRowid>pNew->iRowid)==pIter->bRev ){ + pIter->iSwitchRowid = pOther->iRowid; + pNew = pOther; + }else if( (pOther->iRowid>pIter->iSwitchRowid)==pIter->bRev ){ + pIter->iSwitchRowid = pOther->iRowid; + } + } + pRes->iFirst = (u16)(pNew - pIter->aSeg); + if( i==1 ) break; + + pOther = &pIter->aSeg[ pIter->aFirst[i ^ 0x0001].iFirst ]; + } + } + + *ppFirst = pNew; + return 0; +} + +/* +** Set the pIter->bEof variable based on the state of the sub-iterators. +*/ +static void fts5MultiIterSetEof(Fts5Iter *pIter){ + Fts5SegIter *pSeg = &pIter->aSeg[ pIter->aFirst[1].iFirst ]; + pIter->base.bEof = pSeg->pLeaf==0; + pIter->iSwitchRowid = pSeg->iRowid; +} + +/* +** Move the iterator to the next entry. +** +** If an error occurs, an error code is left in Fts5Index.rc. It is not +** considered an error if the iterator reaches EOF, or if it is already at +** EOF when this function is called. +*/ +static void fts5MultiIterNext( + Fts5Index *p, + Fts5Iter *pIter, + int bFrom, /* True if argument iFrom is valid */ + i64 iFrom /* Advance at least as far as this */ +){ + int bUseFrom = bFrom; + while( p->rc==SQLITE_OK ){ + int iFirst = pIter->aFirst[1].iFirst; + int bNewTerm = 0; + Fts5SegIter *pSeg = &pIter->aSeg[iFirst]; + assert( p->rc==SQLITE_OK ); + if( bUseFrom && pSeg->pDlidx ){ + fts5SegIterNextFrom(p, pSeg, iFrom); + }else{ + pSeg->xNext(p, pSeg, &bNewTerm); + } + + if( pSeg->pLeaf==0 || bNewTerm + || fts5MultiIterAdvanceRowid(pIter, iFirst, &pSeg) + ){ + fts5MultiIterAdvanced(p, pIter, iFirst, 1); + fts5MultiIterSetEof(pIter); + pSeg = &pIter->aSeg[pIter->aFirst[1].iFirst]; + if( pSeg->pLeaf==0 ) return; + } + + fts5AssertMultiIterSetup(p, pIter); + assert( pSeg==&pIter->aSeg[pIter->aFirst[1].iFirst] && pSeg->pLeaf ); + if( pIter->bSkipEmpty==0 || pSeg->nPos ){ + pIter->xSetOutputs(pIter, pSeg); + return; + } + bUseFrom = 0; + } +} + +static void fts5MultiIterNext2( + Fts5Index *p, + Fts5Iter *pIter, + int *pbNewTerm /* OUT: True if *might* be new term */ +){ + assert( pIter->bSkipEmpty ); + if( p->rc==SQLITE_OK ){ + do { + int iFirst = pIter->aFirst[1].iFirst; + Fts5SegIter *pSeg = &pIter->aSeg[iFirst]; + int bNewTerm = 0; + + assert( p->rc==SQLITE_OK ); + pSeg->xNext(p, pSeg, &bNewTerm); + if( pSeg->pLeaf==0 || bNewTerm + || fts5MultiIterAdvanceRowid(pIter, iFirst, &pSeg) + ){ + fts5MultiIterAdvanced(p, pIter, iFirst, 1); + fts5MultiIterSetEof(pIter); + *pbNewTerm = 1; + }else{ + *pbNewTerm = 0; + } + fts5AssertMultiIterSetup(p, pIter); + + }while( fts5MultiIterIsEmpty(p, pIter) ); + } +} + +static void fts5IterSetOutputs_Noop(Fts5Iter *pUnused1, Fts5SegIter *pUnused2){ + UNUSED_PARAM2(pUnused1, pUnused2); +} + +static Fts5Iter *fts5MultiIterAlloc( + Fts5Index *p, /* FTS5 backend to iterate within */ + int nSeg +){ + Fts5Iter *pNew; + int nSlot; /* Power of two >= nSeg */ + + for(nSlot=2; nSlot<nSeg; nSlot=nSlot*2); + pNew = fts5IdxMalloc(p, + sizeof(Fts5Iter) + /* pNew */ + sizeof(Fts5SegIter) * (nSlot-1) + /* pNew->aSeg[] */ + sizeof(Fts5CResult) * nSlot /* pNew->aFirst[] */ + ); + if( pNew ){ + pNew->nSeg = nSlot; + pNew->aFirst = (Fts5CResult*)&pNew->aSeg[nSlot]; + pNew->pIndex = p; + pNew->xSetOutputs = fts5IterSetOutputs_Noop; + } + return pNew; +} + +static void fts5PoslistCallback( + Fts5Index *pUnused, + 
void *pContext,
+  const u8 *pChunk, int nChunk
+){
+  UNUSED_PARAM(pUnused);
+  assert_nc( nChunk>=0 );
+  if( nChunk>0 ){
+    fts5BufferSafeAppendBlob((Fts5Buffer*)pContext, pChunk, nChunk);
+  }
+}
+
+typedef struct PoslistCallbackCtx PoslistCallbackCtx;
+struct PoslistCallbackCtx {
+  Fts5Buffer *pBuf;               /* Append to this buffer */
+  Fts5Colset *pColset;            /* Restrict matches to this column */
+  int eState;                     /* See above */
+};
+
+typedef struct PoslistOffsetsCtx PoslistOffsetsCtx;
+struct PoslistOffsetsCtx {
+  Fts5Buffer *pBuf;               /* Append to this buffer */
+  Fts5Colset *pColset;            /* Restrict matches to this column */
+  int iRead;
+  int iWrite;
+};
+
+/*
+** TODO: Make this more efficient!
+*/
+static int fts5IndexColsetTest(Fts5Colset *pColset, int iCol){
+  int i;
+  for(i=0; i<pColset->nCol; i++){
+    if( pColset->aiCol[i]==iCol ) return 1;
+  }
+  return 0;
+}
+
+static void fts5PoslistOffsetsCallback(
+  Fts5Index *pUnused,
+  void *pContext,
+  const u8 *pChunk, int nChunk
+){
+  PoslistOffsetsCtx *pCtx = (PoslistOffsetsCtx*)pContext;
+  UNUSED_PARAM(pUnused);
+  assert_nc( nChunk>=0 );
+  if( nChunk>0 ){
+    int i = 0;
+    while( i<nChunk ){
+      int iVal;
+      i += fts5GetVarint32(&pChunk[i], iVal);
+      iVal += pCtx->iRead - 2;
+      pCtx->iRead = iVal;
+      if( fts5IndexColsetTest(pCtx->pColset, iVal) ){
+        fts5BufferSafeAppendVarint(pCtx->pBuf, iVal + 2 - pCtx->iWrite);
+        pCtx->iWrite = iVal;
+      }
+    }
+  }
+}
+
+static void fts5PoslistFilterCallback(
+  Fts5Index *pUnused,
+  void *pContext,
+  const u8 *pChunk, int nChunk
+){
+  PoslistCallbackCtx *pCtx = (PoslistCallbackCtx*)pContext;
+  UNUSED_PARAM(pUnused);
+  assert_nc( nChunk>=0 );
+  if( nChunk>0 ){
+    /* Search through to find the first varint with value 1. This is the
+    ** start of the next column's hits. */
+    int i = 0;
+    int iStart = 0;
+
+    if( pCtx->eState==2 ){
+      int iCol;
+      fts5FastGetVarint32(pChunk, i, iCol);
+      if( fts5IndexColsetTest(pCtx->pColset, iCol) ){
+        pCtx->eState = 1;
+        fts5BufferSafeAppendVarint(pCtx->pBuf, 1);
+      }else{
+        pCtx->eState = 0;
+      }
+    }
+
+    do {
+      while( i<nChunk && pChunk[i]!=0x01 ){
+        while( pChunk[i] & 0x80 ) i++;
+        i++;
+      }
+      if( pCtx->eState ){
+        fts5BufferSafeAppendBlob(pCtx->pBuf, &pChunk[iStart], i-iStart);
+      }
+      if( i<nChunk ){
+        int iCol;
+        iStart = i;
+        i++;
+        if( i>=nChunk ){
+          pCtx->eState = 2;
+        }else{
+          fts5FastGetVarint32(pChunk, i, iCol);
+          pCtx->eState = fts5IndexColsetTest(pCtx->pColset, iCol);
+          if( pCtx->eState ){
+            fts5BufferSafeAppendBlob(pCtx->pBuf, &pChunk[iStart], i-iStart);
+            iStart = i;
+          }
+        }
+      }
+    }while( i<nChunk );
+  }
+}
+
+static void fts5ChunkIterate(
+  Fts5Index *p,                   /* Index object */
+  Fts5SegIter *pSeg,              /* Poslist of this iterator */
+  void *pCtx,                     /* Context pointer for xChunk callback */
+  void (*xChunk)(Fts5Index*, void*, const u8*, int)
+){
+  int nRem = pSeg->nPos;          /* Number of bytes still to come */
+  Fts5Data *pData = 0;
+  u8 *pChunk = &pSeg->pLeaf->p[pSeg->iLeafOffset];
+  int nChunk = MIN(nRem, pSeg->pLeaf->szLeaf - pSeg->iLeafOffset);
+  int pgno = pSeg->iLeafPgno;
+  int pgnoSave = 0;
+
+  /* This function does not work with detail=none databases.
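+  **
+  ** The loop below walks the position list chunk by chunk: the bytes that
+  ** lie on the current leaf are passed to xChunk first, then each following
+  ** leaf is read and its payload (everything after the 4-byte page header)
+  ** is passed along until nPos bytes have been delivered. As a rough,
+  ** invented illustration: a 1000-byte poslist with 300 bytes on the
+  ** current leaf produces one callback of 300 bytes followed by callbacks
+  ** of up to (szLeaf-4) bytes each from the subsequent leaves.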
*/ + assert( p->pConfig->eDetail!=FTS5_DETAIL_NONE ); + + if( (pSeg->flags & FTS5_SEGITER_REVERSE)==0 ){ + pgnoSave = pgno+1; + } + + while( 1 ){ + xChunk(p, pCtx, pChunk, nChunk); + nRem -= nChunk; + fts5DataRelease(pData); + if( nRem<=0 ){ + break; + }else{ + pgno++; + pData = fts5DataRead(p, FTS5_SEGMENT_ROWID(pSeg->pSeg->iSegid, pgno)); + if( pData==0 ) break; + pChunk = &pData->p[4]; + nChunk = MIN(nRem, pData->szLeaf - 4); + if( pgno==pgnoSave ){ + assert( pSeg->pNextLeaf==0 ); + pSeg->pNextLeaf = pData; + pData = 0; + } + } + } +} + +/* +** Iterator pIter currently points to a valid entry (not EOF). This +** function appends the position list data for the current entry to +** buffer pBuf. It does not make a copy of the position-list size +** field. +*/ +static void fts5SegiterPoslist( + Fts5Index *p, + Fts5SegIter *pSeg, + Fts5Colset *pColset, + Fts5Buffer *pBuf +){ + if( 0==fts5BufferGrow(&p->rc, pBuf, pSeg->nPos) ){ + if( pColset==0 ){ + fts5ChunkIterate(p, pSeg, (void*)pBuf, fts5PoslistCallback); + }else{ + if( p->pConfig->eDetail==FTS5_DETAIL_FULL ){ + PoslistCallbackCtx sCtx; + sCtx.pBuf = pBuf; + sCtx.pColset = pColset; + sCtx.eState = fts5IndexColsetTest(pColset, 0); + assert( sCtx.eState==0 || sCtx.eState==1 ); + fts5ChunkIterate(p, pSeg, (void*)&sCtx, fts5PoslistFilterCallback); + }else{ + PoslistOffsetsCtx sCtx; + memset(&sCtx, 0, sizeof(sCtx)); + sCtx.pBuf = pBuf; + sCtx.pColset = pColset; + fts5ChunkIterate(p, pSeg, (void*)&sCtx, fts5PoslistOffsetsCallback); + } + } + } +} + +/* +** IN/OUT parameter (*pa) points to a position list n bytes in size. If +** the position list contains entries for column iCol, then (*pa) is set +** to point to the sub-position-list for that column and the number of +** bytes in it returned. Or, if the argument position list does not +** contain any entries for column iCol, return 0. +*/ +static int fts5IndexExtractCol( + const u8 **pa, /* IN/OUT: Pointer to poslist */ + int n, /* IN: Size of poslist in bytes */ + int iCol /* Column to extract from poslist */ +){ + int iCurrent = 0; /* Anything before the first 0x01 is col 0 */ + const u8 *p = *pa; + const u8 *pEnd = &p[n]; /* One byte past end of position list */ + + while( iCol>iCurrent ){ + /* Advance pointer p until it points to pEnd or an 0x01 byte that is + ** not part of a varint. Note that it is not possible for a negative + ** or extremely large varint to occur within an uncorrupted position + ** list. So the last byte of each varint may be assumed to have a clear + ** 0x80 bit. */ + while( *p!=0x01 ){ + while( *p++ & 0x80 ); + if( p>=pEnd ) return 0; + } + *pa = p++; + iCurrent = *p++; + if( iCurrent & 0x80 ){ + p--; + p += fts5GetVarint32(p, iCurrent); + } + } + if( iCol!=iCurrent ) return 0; + + /* Advance pointer p until it points to pEnd or an 0x01 byte that is + ** not part of a varint */ + while( p<pEnd && *p!=0x01 ){ + while( *p++ & 0x80 ); + } + + return p - (*pa); +} + +static int fts5IndexExtractColset ( + Fts5Colset *pColset, /* Colset to filter on */ + const u8 *pPos, int nPos, /* Position list */ + Fts5Buffer *pBuf /* Output buffer */ +){ + int rc = SQLITE_OK; + int i; + + fts5BufferZero(pBuf); + for(i=0; i<pColset->nCol; i++){ + const u8 *pSub = pPos; + int nSub = fts5IndexExtractCol(&pSub, nPos, pColset->aiCol[i]); + if( nSub ){ + fts5BufferAppendBlob(&rc, pBuf, nSub, pSub); + } + } + return rc; +} + +/* +** xSetOutputs callback used by detail=none tables. 
+*/ +static void fts5IterSetOutputs_None(Fts5Iter *pIter, Fts5SegIter *pSeg){ + assert( pIter->pIndex->pConfig->eDetail==FTS5_DETAIL_NONE ); + pIter->base.iRowid = pSeg->iRowid; + pIter->base.nData = pSeg->nPos; +} + +/* +** xSetOutputs callback used by detail=full and detail=col tables when no +** column filters are specified. +*/ +static void fts5IterSetOutputs_Nocolset(Fts5Iter *pIter, Fts5SegIter *pSeg){ + pIter->base.iRowid = pSeg->iRowid; + pIter->base.nData = pSeg->nPos; + + assert( pIter->pIndex->pConfig->eDetail!=FTS5_DETAIL_NONE ); + assert( pIter->pColset==0 ); + + if( pSeg->iLeafOffset+pSeg->nPos<=pSeg->pLeaf->szLeaf ){ + /* All data is stored on the current page. Populate the output + ** variables to point into the body of the page object. */ + pIter->base.pData = &pSeg->pLeaf->p[pSeg->iLeafOffset]; + }else{ + /* The data is distributed over two or more pages. Copy it into the + ** Fts5Iter.poslist buffer and then set the output pointer to point + ** to this buffer. */ + fts5BufferZero(&pIter->poslist); + fts5SegiterPoslist(pIter->pIndex, pSeg, 0, &pIter->poslist); + pIter->base.pData = pIter->poslist.p; + } +} + +/* +** xSetOutputs callback used by detail=col when there is a column filter +** and there are 100 or more columns. Also called as a fallback from +** fts5IterSetOutputs_Col100 if the column-list spans more than one page. +*/ +static void fts5IterSetOutputs_Col(Fts5Iter *pIter, Fts5SegIter *pSeg){ + fts5BufferZero(&pIter->poslist); + fts5SegiterPoslist(pIter->pIndex, pSeg, pIter->pColset, &pIter->poslist); + pIter->base.iRowid = pSeg->iRowid; + pIter->base.pData = pIter->poslist.p; + pIter->base.nData = pIter->poslist.n; +} + +/* +** xSetOutputs callback used when: +** +** * detail=col, +** * there is a column filter, and +** * the table contains 100 or fewer columns. +** +** The last point is to ensure all column numbers are stored as +** single-byte varints. +*/ +static void fts5IterSetOutputs_Col100(Fts5Iter *pIter, Fts5SegIter *pSeg){ + + assert( pIter->pIndex->pConfig->eDetail==FTS5_DETAIL_COLUMNS ); + assert( pIter->pColset ); + + if( pSeg->iLeafOffset+pSeg->nPos>pSeg->pLeaf->szLeaf ){ + fts5IterSetOutputs_Col(pIter, pSeg); + }else{ + u8 *a = (u8*)&pSeg->pLeaf->p[pSeg->iLeafOffset]; + u8 *pEnd = (u8*)&a[pSeg->nPos]; + int iPrev = 0; + int *aiCol = pIter->pColset->aiCol; + int *aiColEnd = &aiCol[pIter->pColset->nCol]; + + u8 *aOut = pIter->poslist.p; + int iPrevOut = 0; + + pIter->base.iRowid = pSeg->iRowid; + + while( a<pEnd ){ + iPrev += (int)a++[0] - 2; + while( *aiCol<iPrev ){ + aiCol++; + if( aiCol==aiColEnd ) goto setoutputs_col_out; + } + if( *aiCol==iPrev ){ + *aOut++ = (iPrev - iPrevOut) + 2; + iPrevOut = iPrev; + } + } + +setoutputs_col_out: + pIter->base.pData = pIter->poslist.p; + pIter->base.nData = aOut - pIter->poslist.p; + } +} + +/* +** xSetOutputs callback used by detail=full when there is a column filter. +*/ +static void fts5IterSetOutputs_Full(Fts5Iter *pIter, Fts5SegIter *pSeg){ + Fts5Colset *pColset = pIter->pColset; + pIter->base.iRowid = pSeg->iRowid; + + assert( pIter->pIndex->pConfig->eDetail==FTS5_DETAIL_FULL ); + assert( pColset ); + + if( pSeg->iLeafOffset+pSeg->nPos<=pSeg->pLeaf->szLeaf ){ + /* All data is stored on the current page. Populate the output + ** variables to point into the body of the page object. 
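+    **
+    ** For example, when the column filter names a single column,
+    ** fts5IndexExtractCol() below locates that column's sub-list in place
+    ** (everything before the first 0x01 byte belongs to column 0, and each
+    ** 0x01 byte is followed by the next column number), so pData can point
+    ** straight into the leaf with no copying. This describes the fast path
+    ** only; multi-column filters are copied into the poslist buffer below.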
*/ + const u8 *a = &pSeg->pLeaf->p[pSeg->iLeafOffset]; + if( pColset->nCol==1 ){ + pIter->base.nData = fts5IndexExtractCol(&a, pSeg->nPos,pColset->aiCol[0]); + pIter->base.pData = a; + }else{ + fts5BufferZero(&pIter->poslist); + fts5IndexExtractColset(pColset, a, pSeg->nPos, &pIter->poslist); + pIter->base.pData = pIter->poslist.p; + pIter->base.nData = pIter->poslist.n; + } + }else{ + /* The data is distributed over two or more pages. Copy it into the + ** Fts5Iter.poslist buffer and then set the output pointer to point + ** to this buffer. */ + fts5BufferZero(&pIter->poslist); + fts5SegiterPoslist(pIter->pIndex, pSeg, pColset, &pIter->poslist); + pIter->base.pData = pIter->poslist.p; + pIter->base.nData = pIter->poslist.n; + } +} + +static void fts5IterSetOutputCb(int *pRc, Fts5Iter *pIter){ + if( *pRc==SQLITE_OK ){ + Fts5Config *pConfig = pIter->pIndex->pConfig; + if( pConfig->eDetail==FTS5_DETAIL_NONE ){ + pIter->xSetOutputs = fts5IterSetOutputs_None; + } + + else if( pIter->pColset==0 ){ + pIter->xSetOutputs = fts5IterSetOutputs_Nocolset; + } + + else if( pConfig->eDetail==FTS5_DETAIL_FULL ){ + pIter->xSetOutputs = fts5IterSetOutputs_Full; + } + + else{ + assert( pConfig->eDetail==FTS5_DETAIL_COLUMNS ); + if( pConfig->nCol<=100 ){ + pIter->xSetOutputs = fts5IterSetOutputs_Col100; + sqlite3Fts5BufferSize(pRc, &pIter->poslist, pConfig->nCol); + }else{ + pIter->xSetOutputs = fts5IterSetOutputs_Col; + } + } + } +} + + +/* +** Allocate a new Fts5Iter object. +** +** The new object will be used to iterate through data in structure pStruct. +** If iLevel is -ve, then all data in all segments is merged. Or, if iLevel +** is zero or greater, data from the first nSegment segments on level iLevel +** is merged. +** +** The iterator initially points to the first term/rowid entry in the +** iterated data. +*/ +static void fts5MultiIterNew( + Fts5Index *p, /* FTS5 backend to iterate within */ + Fts5Structure *pStruct, /* Structure of specific index */ + int flags, /* FTS5INDEX_QUERY_XXX flags */ + Fts5Colset *pColset, /* Colset to filter on (or NULL) */ + const u8 *pTerm, int nTerm, /* Term to seek to (or NULL/0) */ + int iLevel, /* Level to iterate (-1 for all) */ + int nSegment, /* Number of segments to merge (iLevel>=0) */ + Fts5Iter **ppOut /* New object */ +){ + int nSeg = 0; /* Number of segment-iters in use */ + int iIter = 0; /* */ + int iSeg; /* Used to iterate through segments */ + Fts5StructureLevel *pLvl; + Fts5Iter *pNew; + + assert( (pTerm==0 && nTerm==0) || iLevel<0 ); + + /* Allocate space for the new multi-seg-iterator. */ + if( p->rc==SQLITE_OK ){ + if( iLevel<0 ){ + assert( pStruct->nSegment==fts5StructureCountSegments(pStruct) ); + nSeg = pStruct->nSegment; + nSeg += (p->pHash ? 1 : 0); + }else{ + nSeg = MIN(pStruct->aLevel[iLevel].nSeg, nSegment); + } + } + *ppOut = pNew = fts5MultiIterAlloc(p, nSeg); + if( pNew==0 ) return; + pNew->bRev = (0!=(flags & FTS5INDEX_QUERY_DESC)); + pNew->bSkipEmpty = (0!=(flags & FTS5INDEX_QUERY_SKIPEMPTY)); + pNew->pStruct = pStruct; + pNew->pColset = pColset; + fts5StructureRef(pStruct); + if( (flags & FTS5INDEX_QUERY_NOOUTPUT)==0 ){ + fts5IterSetOutputCb(&p->rc, pNew); + } + + /* Initialize each of the component segment iterators. */ + if( p->rc==SQLITE_OK ){ + if( iLevel<0 ){ + Fts5StructureLevel *pEnd = &pStruct->aLevel[pStruct->nLevel]; + if( p->pHash ){ + /* Add a segment iterator for the current contents of the hash table. 
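+      ** The hash table holds rows that have not yet been flushed to a
+      ** level-0 segment, which is why one extra sub-iterator slot was
+      ** reserved when nSeg was calculated above; this keeps pending
+      ** writes visible to queries alongside the on-disk segments.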
*/ + Fts5SegIter *pIter = &pNew->aSeg[iIter++]; + fts5SegIterHashInit(p, pTerm, nTerm, flags, pIter); + } + for(pLvl=&pStruct->aLevel[0]; pLvl<pEnd; pLvl++){ + for(iSeg=pLvl->nSeg-1; iSeg>=0; iSeg--){ + Fts5StructureSegment *pSeg = &pLvl->aSeg[iSeg]; + Fts5SegIter *pIter = &pNew->aSeg[iIter++]; + if( pTerm==0 ){ + fts5SegIterInit(p, pSeg, pIter); + }else{ + fts5SegIterSeekInit(p, pTerm, nTerm, flags, pSeg, pIter); + } + } + } + }else{ + pLvl = &pStruct->aLevel[iLevel]; + for(iSeg=nSeg-1; iSeg>=0; iSeg--){ + fts5SegIterInit(p, &pLvl->aSeg[iSeg], &pNew->aSeg[iIter++]); + } + } + assert( iIter==nSeg ); + } + + /* If the above was successful, each component iterators now points + ** to the first entry in its segment. In this case initialize the + ** aFirst[] array. Or, if an error has occurred, free the iterator + ** object and set the output variable to NULL. */ + if( p->rc==SQLITE_OK ){ + for(iIter=pNew->nSeg-1; iIter>0; iIter--){ + int iEq; + if( (iEq = fts5MultiIterDoCompare(pNew, iIter)) ){ + Fts5SegIter *pSeg = &pNew->aSeg[iEq]; + if( p->rc==SQLITE_OK ) pSeg->xNext(p, pSeg, 0); + fts5MultiIterAdvanced(p, pNew, iEq, iIter); + } + } + fts5MultiIterSetEof(pNew); + fts5AssertMultiIterSetup(p, pNew); + + if( pNew->bSkipEmpty && fts5MultiIterIsEmpty(p, pNew) ){ + fts5MultiIterNext(p, pNew, 0, 0); + }else if( pNew->base.bEof==0 ){ + Fts5SegIter *pSeg = &pNew->aSeg[pNew->aFirst[1].iFirst]; + pNew->xSetOutputs(pNew, pSeg); + } + + }else{ + fts5MultiIterFree(pNew); + *ppOut = 0; + } +} + +/* +** Create an Fts5Iter that iterates through the doclist provided +** as the second argument. +*/ +static void fts5MultiIterNew2( + Fts5Index *p, /* FTS5 backend to iterate within */ + Fts5Data *pData, /* Doclist to iterate through */ + int bDesc, /* True for descending rowid order */ + Fts5Iter **ppOut /* New object */ +){ + Fts5Iter *pNew; + pNew = fts5MultiIterAlloc(p, 2); + if( pNew ){ + Fts5SegIter *pIter = &pNew->aSeg[1]; + + pIter->flags = FTS5_SEGITER_ONETERM; + if( pData->szLeaf>0 ){ + pIter->pLeaf = pData; + pIter->iLeafOffset = fts5GetVarint(pData->p, (u64*)&pIter->iRowid); + pIter->iEndofDoclist = pData->nn; + pNew->aFirst[1].iFirst = 1; + if( bDesc ){ + pNew->bRev = 1; + pIter->flags |= FTS5_SEGITER_REVERSE; + fts5SegIterReverseInitPage(p, pIter); + }else{ + fts5SegIterLoadNPos(p, pIter); + } + pData = 0; + }else{ + pNew->base.bEof = 1; + } + fts5SegIterSetNext(p, pIter); + + *ppOut = pNew; + } + + fts5DataRelease(pData); +} + +/* +** Return true if the iterator is at EOF or if an error has occurred. +** False otherwise. +*/ +static int fts5MultiIterEof(Fts5Index *p, Fts5Iter *pIter){ + assert( p->rc + || (pIter->aSeg[ pIter->aFirst[1].iFirst ].pLeaf==0)==pIter->base.bEof + ); + return (p->rc || pIter->base.bEof); +} + +/* +** Return the rowid of the entry that the iterator currently points +** to. If the iterator points to EOF when this function is called the +** results are undefined. +*/ +static i64 fts5MultiIterRowid(Fts5Iter *pIter){ + assert( pIter->aSeg[ pIter->aFirst[1].iFirst ].pLeaf ); + return pIter->aSeg[ pIter->aFirst[1].iFirst ].iRowid; +} + +/* +** Move the iterator to the next entry at or following iMatch. 
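+**
+** Sub-iterators whose segment has a doclist-index use it (via the bFrom
+** argument to fts5MultiIterNext()) to jump between leaves by rowid rather
+** than stepping entry by entry. As an invented example, an iterator sitting
+** on rowid 5 that is asked for rowid 100000 can seek directly to the leaf
+** whose first rowid covers 100000 instead of visiting every rowid in
+** between.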
+*/ +static void fts5MultiIterNextFrom( + Fts5Index *p, + Fts5Iter *pIter, + i64 iMatch +){ + while( 1 ){ + i64 iRowid; + fts5MultiIterNext(p, pIter, 1, iMatch); + if( fts5MultiIterEof(p, pIter) ) break; + iRowid = fts5MultiIterRowid(pIter); + if( pIter->bRev==0 && iRowid>=iMatch ) break; + if( pIter->bRev!=0 && iRowid<=iMatch ) break; + } +} + +/* +** Return a pointer to a buffer containing the term associated with the +** entry that the iterator currently points to. +*/ +static const u8 *fts5MultiIterTerm(Fts5Iter *pIter, int *pn){ + Fts5SegIter *p = &pIter->aSeg[ pIter->aFirst[1].iFirst ]; + *pn = p->term.n; + return p->term.p; +} + +/* +** Allocate a new segment-id for the structure pStruct. The new segment +** id must be between 1 and 65335 inclusive, and must not be used by +** any currently existing segment. If a free segment id cannot be found, +** SQLITE_FULL is returned. +** +** If an error has already occurred, this function is a no-op. 0 is +** returned in this case. +*/ +static int fts5AllocateSegid(Fts5Index *p, Fts5Structure *pStruct){ + int iSegid = 0; + + if( p->rc==SQLITE_OK ){ + if( pStruct->nSegment>=FTS5_MAX_SEGMENT ){ + p->rc = SQLITE_FULL; + }else{ + while( iSegid==0 ){ + int iLvl, iSeg; + sqlite3_randomness(sizeof(u32), (void*)&iSegid); + iSegid = iSegid & ((1 << FTS5_DATA_ID_B)-1); + for(iLvl=0; iLvl<pStruct->nLevel; iLvl++){ + for(iSeg=0; iSeg<pStruct->aLevel[iLvl].nSeg; iSeg++){ + if( iSegid==pStruct->aLevel[iLvl].aSeg[iSeg].iSegid ){ + iSegid = 0; + } + } + } + } + } + } + + return iSegid; +} + +/* +** Discard all data currently cached in the hash-tables. +*/ +static void fts5IndexDiscardData(Fts5Index *p){ + assert( p->pHash || p->nPendingData==0 ); + if( p->pHash ){ + sqlite3Fts5HashClear(p->pHash); + p->nPendingData = 0; + } +} + +/* +** Return the size of the prefix, in bytes, that buffer +** (pNew/<length-unknown>) shares with buffer (pOld/nOld). +** +** Buffer (pNew/<length-unknown>) is guaranteed to be greater +** than buffer (pOld/nOld). +*/ +static int fts5PrefixCompress(int nOld, const u8 *pOld, const u8 *pNew){ + int i; + for(i=0; i<nOld; i++){ + if( pOld[i]!=pNew[i] ) break; + } + return i; +} + +static void fts5WriteDlidxClear( + Fts5Index *p, + Fts5SegWriter *pWriter, + int bFlush /* If true, write dlidx to disk */ +){ + int i; + assert( bFlush==0 || (pWriter->nDlidx>0 && pWriter->aDlidx[0].buf.n>0) ); + for(i=0; i<pWriter->nDlidx; i++){ + Fts5DlidxWriter *pDlidx = &pWriter->aDlidx[i]; + if( pDlidx->buf.n==0 ) break; + if( bFlush ){ + assert( pDlidx->pgno!=0 ); + fts5DataWrite(p, + FTS5_DLIDX_ROWID(pWriter->iSegid, i, pDlidx->pgno), + pDlidx->buf.p, pDlidx->buf.n + ); + } + sqlite3Fts5BufferZero(&pDlidx->buf); + pDlidx->bPrevValid = 0; + } +} + +/* +** Grow the pWriter->aDlidx[] array to at least nLvl elements in size. +** Any new array elements are zeroed before returning. +*/ +static int fts5WriteDlidxGrow( + Fts5Index *p, + Fts5SegWriter *pWriter, + int nLvl +){ + if( p->rc==SQLITE_OK && nLvl>=pWriter->nDlidx ){ + Fts5DlidxWriter *aDlidx = (Fts5DlidxWriter*)sqlite3_realloc( + pWriter->aDlidx, sizeof(Fts5DlidxWriter) * nLvl + ); + if( aDlidx==0 ){ + p->rc = SQLITE_NOMEM; + }else{ + int nByte = sizeof(Fts5DlidxWriter) * (nLvl - pWriter->nDlidx); + memset(&aDlidx[pWriter->nDlidx], 0, nByte); + pWriter->aDlidx = aDlidx; + pWriter->nDlidx = nLvl; + } + } + return p->rc; +} + +/* +** If the current doclist-index accumulating in pWriter->aDlidx[] is large +** enough, flush it to disk and return 1. Otherwise discard it and return +** zero. 
+*/ +static int fts5WriteFlushDlidx(Fts5Index *p, Fts5SegWriter *pWriter){ + int bFlag = 0; + + /* If there were FTS5_MIN_DLIDX_SIZE or more empty leaf pages written + ** to the database, also write the doclist-index to disk. */ + if( pWriter->aDlidx[0].buf.n>0 && pWriter->nEmpty>=FTS5_MIN_DLIDX_SIZE ){ + bFlag = 1; + } + fts5WriteDlidxClear(p, pWriter, bFlag); + pWriter->nEmpty = 0; + return bFlag; +} + +/* +** This function is called whenever processing of the doclist for the +** last term on leaf page (pWriter->iBtPage) is completed. +** +** The doclist-index for that term is currently stored in-memory within the +** Fts5SegWriter.aDlidx[] array. If it is large enough, this function +** writes it out to disk. Or, if it is too small to bother with, discards +** it. +** +** Fts5SegWriter.btterm currently contains the first term on page iBtPage. +*/ +static void fts5WriteFlushBtree(Fts5Index *p, Fts5SegWriter *pWriter){ + int bFlag; + + assert( pWriter->iBtPage || pWriter->nEmpty==0 ); + if( pWriter->iBtPage==0 ) return; + bFlag = fts5WriteFlushDlidx(p, pWriter); + + if( p->rc==SQLITE_OK ){ + const char *z = (pWriter->btterm.n>0?(const char*)pWriter->btterm.p:""); + /* The following was already done in fts5WriteInit(): */ + /* sqlite3_bind_int(p->pIdxWriter, 1, pWriter->iSegid); */ + sqlite3_bind_blob(p->pIdxWriter, 2, z, pWriter->btterm.n, SQLITE_STATIC); + sqlite3_bind_int64(p->pIdxWriter, 3, bFlag + ((i64)pWriter->iBtPage<<1)); + sqlite3_step(p->pIdxWriter); + p->rc = sqlite3_reset(p->pIdxWriter); + } + pWriter->iBtPage = 0; +} + +/* +** This is called once for each leaf page except the first that contains +** at least one term. Argument (nTerm/pTerm) is the split-key - a term that +** is larger than all terms written to earlier leaves, and equal to or +** smaller than the first term on the new leaf. +** +** If an error occurs, an error code is left in Fts5Index.rc. If an error +** has already occurred when this function is called, it is a no-op. +*/ +static void fts5WriteBtreeTerm( + Fts5Index *p, /* FTS5 backend object */ + Fts5SegWriter *pWriter, /* Writer object */ + int nTerm, const u8 *pTerm /* First term on new page */ +){ + fts5WriteFlushBtree(p, pWriter); + fts5BufferSet(&p->rc, &pWriter->btterm, nTerm, pTerm); + pWriter->iBtPage = pWriter->writer.pgno; +} + +/* +** This function is called when flushing a leaf page that contains no +** terms at all to disk. +*/ +static void fts5WriteBtreeNoTerm( + Fts5Index *p, /* FTS5 backend object */ + Fts5SegWriter *pWriter /* Writer object */ +){ + /* If there were no rowids on the leaf page either and the doclist-index + ** has already been started, append an 0x00 byte to it. */ + if( pWriter->bFirstRowidInPage && pWriter->aDlidx[0].buf.n>0 ){ + Fts5DlidxWriter *pDlidx = &pWriter->aDlidx[0]; + assert( pDlidx->bPrevValid ); + sqlite3Fts5BufferAppendVarint(&p->rc, &pDlidx->buf, 0); + } + + /* Increment the "number of sequential leaves without a term" counter. */ + pWriter->nEmpty++; +} + +static i64 fts5DlidxExtractFirstRowid(Fts5Buffer *pBuf){ + i64 iRowid; + int iOff; + + iOff = 1 + fts5GetVarint(&pBuf->p[1], (u64*)&iRowid); + fts5GetVarint(&pBuf->p[iOff], (u64*)&iRowid); + return iRowid; +} + +/* +** Rowid iRowid has just been appended to the current leaf page. It is the +** first on the page. This function appends an appropriate entry to the current +** doclist-index. 
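+**
+** Roughly, each doclist-index node is written as a flag varint (0x01 once
+** the node is known not to be the root), the number of the first page it
+** covers, that page's first rowid, and then one delta per later leaf (or a
+** 0x00 byte for a leaf holding no rowids at all). As a purely invented
+** example, leaves whose first rowids are 50, 92 and 130 would contribute
+** the values 50, 42 and 38.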
+*/ +static void fts5WriteDlidxAppend( + Fts5Index *p, + Fts5SegWriter *pWriter, + i64 iRowid +){ + int i; + int bDone = 0; + + for(i=0; p->rc==SQLITE_OK && bDone==0; i++){ + i64 iVal; + Fts5DlidxWriter *pDlidx = &pWriter->aDlidx[i]; + + if( pDlidx->buf.n>=p->pConfig->pgsz ){ + /* The current doclist-index page is full. Write it to disk and push + ** a copy of iRowid (which will become the first rowid on the next + ** doclist-index leaf page) up into the next level of the b-tree + ** hierarchy. If the node being flushed is currently the root node, + ** also push its first rowid upwards. */ + pDlidx->buf.p[0] = 0x01; /* Not the root node */ + fts5DataWrite(p, + FTS5_DLIDX_ROWID(pWriter->iSegid, i, pDlidx->pgno), + pDlidx->buf.p, pDlidx->buf.n + ); + fts5WriteDlidxGrow(p, pWriter, i+2); + pDlidx = &pWriter->aDlidx[i]; + if( p->rc==SQLITE_OK && pDlidx[1].buf.n==0 ){ + i64 iFirst = fts5DlidxExtractFirstRowid(&pDlidx->buf); + + /* This was the root node. Push its first rowid up to the new root. */ + pDlidx[1].pgno = pDlidx->pgno; + sqlite3Fts5BufferAppendVarint(&p->rc, &pDlidx[1].buf, 0); + sqlite3Fts5BufferAppendVarint(&p->rc, &pDlidx[1].buf, pDlidx->pgno); + sqlite3Fts5BufferAppendVarint(&p->rc, &pDlidx[1].buf, iFirst); + pDlidx[1].bPrevValid = 1; + pDlidx[1].iPrev = iFirst; + } + + sqlite3Fts5BufferZero(&pDlidx->buf); + pDlidx->bPrevValid = 0; + pDlidx->pgno++; + }else{ + bDone = 1; + } + + if( pDlidx->bPrevValid ){ + iVal = iRowid - pDlidx->iPrev; + }else{ + i64 iPgno = (i==0 ? pWriter->writer.pgno : pDlidx[-1].pgno); + assert( pDlidx->buf.n==0 ); + sqlite3Fts5BufferAppendVarint(&p->rc, &pDlidx->buf, !bDone); + sqlite3Fts5BufferAppendVarint(&p->rc, &pDlidx->buf, iPgno); + iVal = iRowid; + } + + sqlite3Fts5BufferAppendVarint(&p->rc, &pDlidx->buf, iVal); + pDlidx->bPrevValid = 1; + pDlidx->iPrev = iRowid; + } +} + +static void fts5WriteFlushLeaf(Fts5Index *p, Fts5SegWriter *pWriter){ + static const u8 zero[] = { 0x00, 0x00, 0x00, 0x00 }; + Fts5PageWriter *pPage = &pWriter->writer; + i64 iRowid; + + assert( (pPage->pgidx.n==0)==(pWriter->bFirstTermInPage) ); + + /* Set the szLeaf header field. */ + assert( 0==fts5GetU16(&pPage->buf.p[2]) ); + fts5PutU16(&pPage->buf.p[2], (u16)pPage->buf.n); + + if( pWriter->bFirstTermInPage ){ + /* No term was written to this page. */ + assert( pPage->pgidx.n==0 ); + fts5WriteBtreeNoTerm(p, pWriter); + }else{ + /* Append the pgidx to the page buffer. Set the szLeaf header field. */ + fts5BufferAppendBlob(&p->rc, &pPage->buf, pPage->pgidx.n, pPage->pgidx.p); + } + + /* Write the page out to disk */ + iRowid = FTS5_SEGMENT_ROWID(pWriter->iSegid, pPage->pgno); + fts5DataWrite(p, iRowid, pPage->buf.p, pPage->buf.n); + + /* Initialize the next page. */ + fts5BufferZero(&pPage->buf); + fts5BufferZero(&pPage->pgidx); + fts5BufferAppendBlob(&p->rc, &pPage->buf, 4, zero); + pPage->iPrevPgidx = 0; + pPage->pgno++; + + /* Increase the leaves written counter */ + pWriter->nLeafWritten++; + + /* The new leaf holds no terms or rowids */ + pWriter->bFirstTermInPage = 1; + pWriter->bFirstRowidInPage = 1; +} + +/* +** Append term pTerm/nTerm to the segment being written by the writer passed +** as the second argument. +** +** If an error occurs, set the Fts5Index.rc error code. If an error has +** already occurred, this function is a no-op. 
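+**
+** Terms on a leaf are prefix-compressed against the previous term on the
+** same page. As an invented example, if the previous term was "sqlite" and
+** the new term is "sqlitefts", fts5PrefixCompress() reports 6 matching
+** bytes and the entry is written as varint(6) varint(3) "fts"; only the
+** first term on each leaf is stored in full.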
+*/ +static void fts5WriteAppendTerm( + Fts5Index *p, + Fts5SegWriter *pWriter, + int nTerm, const u8 *pTerm +){ + int nPrefix; /* Bytes of prefix compression for term */ + Fts5PageWriter *pPage = &pWriter->writer; + Fts5Buffer *pPgidx = &pWriter->writer.pgidx; + + assert( p->rc==SQLITE_OK ); + assert( pPage->buf.n>=4 ); + assert( pPage->buf.n>4 || pWriter->bFirstTermInPage ); + + /* If the current leaf page is full, flush it to disk. */ + if( (pPage->buf.n + pPgidx->n + nTerm + 2)>=p->pConfig->pgsz ){ + if( pPage->buf.n>4 ){ + fts5WriteFlushLeaf(p, pWriter); + } + fts5BufferGrow(&p->rc, &pPage->buf, nTerm+FTS5_DATA_PADDING); + } + + /* TODO1: Updating pgidx here. */ + pPgidx->n += sqlite3Fts5PutVarint( + &pPgidx->p[pPgidx->n], pPage->buf.n - pPage->iPrevPgidx + ); + pPage->iPrevPgidx = pPage->buf.n; +#if 0 + fts5PutU16(&pPgidx->p[pPgidx->n], pPage->buf.n); + pPgidx->n += 2; +#endif + + if( pWriter->bFirstTermInPage ){ + nPrefix = 0; + if( pPage->pgno!=1 ){ + /* This is the first term on a leaf that is not the leftmost leaf in + ** the segment b-tree. In this case it is necessary to add a term to + ** the b-tree hierarchy that is (a) larger than the largest term + ** already written to the segment and (b) smaller than or equal to + ** this term. In other words, a prefix of (pTerm/nTerm) that is one + ** byte longer than the longest prefix (pTerm/nTerm) shares with the + ** previous term. + ** + ** Usually, the previous term is available in pPage->term. The exception + ** is if this is the first term written in an incremental-merge step. + ** In this case the previous term is not available, so just write a + ** copy of (pTerm/nTerm) into the parent node. This is slightly + ** inefficient, but still correct. */ + int n = nTerm; + if( pPage->term.n ){ + n = 1 + fts5PrefixCompress(pPage->term.n, pPage->term.p, pTerm); + } + fts5WriteBtreeTerm(p, pWriter, n, pTerm); + pPage = &pWriter->writer; + } + }else{ + nPrefix = fts5PrefixCompress(pPage->term.n, pPage->term.p, pTerm); + fts5BufferAppendVarint(&p->rc, &pPage->buf, nPrefix); + } + + /* Append the number of bytes of new data, then the term data itself + ** to the page. */ + fts5BufferAppendVarint(&p->rc, &pPage->buf, nTerm - nPrefix); + fts5BufferAppendBlob(&p->rc, &pPage->buf, nTerm - nPrefix, &pTerm[nPrefix]); + + /* Update the Fts5PageWriter.term field. */ + fts5BufferSet(&p->rc, &pPage->term, nTerm, pTerm); + pWriter->bFirstTermInPage = 0; + + pWriter->bFirstRowidInPage = 0; + pWriter->bFirstRowidInDoclist = 1; + + assert( p->rc || (pWriter->nDlidx>0 && pWriter->aDlidx[0].buf.n==0) ); + pWriter->aDlidx[0].pgno = pPage->pgno; +} + +/* +** Append a rowid and position-list size field to the writers output. +*/ +static void fts5WriteAppendRowid( + Fts5Index *p, + Fts5SegWriter *pWriter, + i64 iRowid +){ + if( p->rc==SQLITE_OK ){ + Fts5PageWriter *pPage = &pWriter->writer; + + if( (pPage->buf.n + pPage->pgidx.n)>=p->pConfig->pgsz ){ + fts5WriteFlushLeaf(p, pWriter); + } + + /* If this is to be the first rowid written to the page, set the + ** rowid-pointer in the page-header. Also append a value to the dlidx + ** buffer, in case a doclist-index is required. */ + if( pWriter->bFirstRowidInPage ){ + fts5PutU16(pPage->buf.p, (u16)pPage->buf.n); + fts5WriteDlidxAppend(p, pWriter, iRowid); + } + + /* Write the rowid. 
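+    ** The first rowid in a doclist or on a page is stored verbatim; every
+    ** later rowid is stored as a (strictly positive) delta from the
+    ** previous one, as the assert below checks. For example, with invented
+    ** rowids 100, 105 and 112 the varints written are 100, 5 and 7.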
*/ + if( pWriter->bFirstRowidInDoclist || pWriter->bFirstRowidInPage ){ + fts5BufferAppendVarint(&p->rc, &pPage->buf, iRowid); + }else{ + assert( p->rc || iRowid>pWriter->iPrevRowid ); + fts5BufferAppendVarint(&p->rc, &pPage->buf, iRowid - pWriter->iPrevRowid); + } + pWriter->iPrevRowid = iRowid; + pWriter->bFirstRowidInDoclist = 0; + pWriter->bFirstRowidInPage = 0; + } +} + +static void fts5WriteAppendPoslistData( + Fts5Index *p, + Fts5SegWriter *pWriter, + const u8 *aData, + int nData +){ + Fts5PageWriter *pPage = &pWriter->writer; + const u8 *a = aData; + int n = nData; + + assert( p->pConfig->pgsz>0 ); + while( p->rc==SQLITE_OK + && (pPage->buf.n + pPage->pgidx.n + n)>=p->pConfig->pgsz + ){ + int nReq = p->pConfig->pgsz - pPage->buf.n - pPage->pgidx.n; + int nCopy = 0; + while( nCopy<nReq ){ + i64 dummy; + nCopy += fts5GetVarint(&a[nCopy], (u64*)&dummy); + } + fts5BufferAppendBlob(&p->rc, &pPage->buf, nCopy, a); + a += nCopy; + n -= nCopy; + fts5WriteFlushLeaf(p, pWriter); + } + if( n>0 ){ + fts5BufferAppendBlob(&p->rc, &pPage->buf, n, a); + } +} + +/* +** Flush any data cached by the writer object to the database. Free any +** allocations associated with the writer. +*/ +static void fts5WriteFinish( + Fts5Index *p, + Fts5SegWriter *pWriter, /* Writer object */ + int *pnLeaf /* OUT: Number of leaf pages in b-tree */ +){ + int i; + Fts5PageWriter *pLeaf = &pWriter->writer; + if( p->rc==SQLITE_OK ){ + assert( pLeaf->pgno>=1 ); + if( pLeaf->buf.n>4 ){ + fts5WriteFlushLeaf(p, pWriter); + } + *pnLeaf = pLeaf->pgno-1; + fts5WriteFlushBtree(p, pWriter); + } + fts5BufferFree(&pLeaf->term); + fts5BufferFree(&pLeaf->buf); + fts5BufferFree(&pLeaf->pgidx); + fts5BufferFree(&pWriter->btterm); + + for(i=0; i<pWriter->nDlidx; i++){ + sqlite3Fts5BufferFree(&pWriter->aDlidx[i].buf); + } + sqlite3_free(pWriter->aDlidx); +} + +static void fts5WriteInit( + Fts5Index *p, + Fts5SegWriter *pWriter, + int iSegid +){ + const int nBuffer = p->pConfig->pgsz + FTS5_DATA_PADDING; + + memset(pWriter, 0, sizeof(Fts5SegWriter)); + pWriter->iSegid = iSegid; + + fts5WriteDlidxGrow(p, pWriter, 1); + pWriter->writer.pgno = 1; + pWriter->bFirstTermInPage = 1; + pWriter->iBtPage = 1; + + assert( pWriter->writer.buf.n==0 ); + assert( pWriter->writer.pgidx.n==0 ); + + /* Grow the two buffers to pgsz + padding bytes in size. */ + sqlite3Fts5BufferSize(&p->rc, &pWriter->writer.pgidx, nBuffer); + sqlite3Fts5BufferSize(&p->rc, &pWriter->writer.buf, nBuffer); + + if( p->pIdxWriter==0 ){ + Fts5Config *pConfig = p->pConfig; + fts5IndexPrepareStmt(p, &p->pIdxWriter, sqlite3_mprintf( + "INSERT INTO '%q'.'%q_idx'(segid,term,pgno) VALUES(?,?,?)", + pConfig->zDb, pConfig->zName + )); + } + + if( p->rc==SQLITE_OK ){ + /* Initialize the 4-byte leaf-page header to 0x00. */ + memset(pWriter->writer.buf.p, 0, 4); + pWriter->writer.buf.n = 4; + + /* Bind the current output segment id to the index-writer. This is an + ** optimization over binding the same value over and over as rows are + ** inserted into %_idx by the current writer. */ + sqlite3_bind_int(p->pIdxWriter, 1, pWriter->iSegid); + } +} + +/* +** Iterator pIter was used to iterate through the input segments of on an +** incremental merge operation. This function is called if the incremental +** merge step has finished but the input has not been completely exhausted. 
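+**
+** For each input segment that still holds entries, the first leaf that was
+** not fully consumed is rewritten with the current term stored in full and
+** the segment's pgnoFirst is advanced to that leaf, so a later merge step
+** can resume reading exactly where this one stopped. Segments that were
+** fully consumed simply have both page numbers set to zero.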
+*/ +static void fts5TrimSegments(Fts5Index *p, Fts5Iter *pIter){ + int i; + Fts5Buffer buf; + memset(&buf, 0, sizeof(Fts5Buffer)); + for(i=0; i<pIter->nSeg; i++){ + Fts5SegIter *pSeg = &pIter->aSeg[i]; + if( pSeg->pSeg==0 ){ + /* no-op */ + }else if( pSeg->pLeaf==0 ){ + /* All keys from this input segment have been transfered to the output. + ** Set both the first and last page-numbers to 0 to indicate that the + ** segment is now empty. */ + pSeg->pSeg->pgnoLast = 0; + pSeg->pSeg->pgnoFirst = 0; + }else{ + int iOff = pSeg->iTermLeafOffset; /* Offset on new first leaf page */ + i64 iLeafRowid; + Fts5Data *pData; + int iId = pSeg->pSeg->iSegid; + u8 aHdr[4] = {0x00, 0x00, 0x00, 0x00}; + + iLeafRowid = FTS5_SEGMENT_ROWID(iId, pSeg->iTermLeafPgno); + pData = fts5DataRead(p, iLeafRowid); + if( pData ){ + fts5BufferZero(&buf); + fts5BufferGrow(&p->rc, &buf, pData->nn); + fts5BufferAppendBlob(&p->rc, &buf, sizeof(aHdr), aHdr); + fts5BufferAppendVarint(&p->rc, &buf, pSeg->term.n); + fts5BufferAppendBlob(&p->rc, &buf, pSeg->term.n, pSeg->term.p); + fts5BufferAppendBlob(&p->rc, &buf, pData->szLeaf-iOff, &pData->p[iOff]); + if( p->rc==SQLITE_OK ){ + /* Set the szLeaf field */ + fts5PutU16(&buf.p[2], (u16)buf.n); + } + + /* Set up the new page-index array */ + fts5BufferAppendVarint(&p->rc, &buf, 4); + if( pSeg->iLeafPgno==pSeg->iTermLeafPgno + && pSeg->iEndofDoclist<pData->szLeaf + ){ + int nDiff = pData->szLeaf - pSeg->iEndofDoclist; + fts5BufferAppendVarint(&p->rc, &buf, buf.n - 1 - nDiff - 4); + fts5BufferAppendBlob(&p->rc, &buf, + pData->nn - pSeg->iPgidxOff, &pData->p[pSeg->iPgidxOff] + ); + } + + fts5DataRelease(pData); + pSeg->pSeg->pgnoFirst = pSeg->iTermLeafPgno; + fts5DataDelete(p, FTS5_SEGMENT_ROWID(iId, 1), iLeafRowid); + fts5DataWrite(p, iLeafRowid, buf.p, buf.n); + } + } + } + fts5BufferFree(&buf); +} + +static void fts5MergeChunkCallback( + Fts5Index *p, + void *pCtx, + const u8 *pChunk, int nChunk +){ + Fts5SegWriter *pWriter = (Fts5SegWriter*)pCtx; + fts5WriteAppendPoslistData(p, pWriter, pChunk, nChunk); +} + +/* +** +*/ +static void fts5IndexMergeLevel( + Fts5Index *p, /* FTS5 backend object */ + Fts5Structure **ppStruct, /* IN/OUT: Stucture of index */ + int iLvl, /* Level to read input from */ + int *pnRem /* Write up to this many output leaves */ +){ + Fts5Structure *pStruct = *ppStruct; + Fts5StructureLevel *pLvl = &pStruct->aLevel[iLvl]; + Fts5StructureLevel *pLvlOut; + Fts5Iter *pIter = 0; /* Iterator to read input data */ + int nRem = pnRem ? *pnRem : 0; /* Output leaf pages left to write */ + int nInput; /* Number of input segments */ + Fts5SegWriter writer; /* Writer object */ + Fts5StructureSegment *pSeg; /* Output segment */ + Fts5Buffer term; + int bOldest; /* True if the output segment is the oldest */ + int eDetail = p->pConfig->eDetail; + const int flags = FTS5INDEX_QUERY_NOOUTPUT; + + assert( iLvl<pStruct->nLevel ); + assert( pLvl->nMerge<=pLvl->nSeg ); + + memset(&writer, 0, sizeof(Fts5SegWriter)); + memset(&term, 0, sizeof(Fts5Buffer)); + if( pLvl->nMerge ){ + pLvlOut = &pStruct->aLevel[iLvl+1]; + assert( pLvlOut->nSeg>0 ); + nInput = pLvl->nMerge; + pSeg = &pLvlOut->aSeg[pLvlOut->nSeg-1]; + + fts5WriteInit(p, &writer, pSeg->iSegid); + writer.writer.pgno = pSeg->pgnoLast+1; + writer.iBtPage = 0; + }else{ + int iSegid = fts5AllocateSegid(p, pStruct); + + /* Extend the Fts5Structure object as required to ensure the output + ** segment exists. 
*/ + if( iLvl==pStruct->nLevel-1 ){ + fts5StructureAddLevel(&p->rc, ppStruct); + pStruct = *ppStruct; + } + fts5StructureExtendLevel(&p->rc, pStruct, iLvl+1, 1, 0); + if( p->rc ) return; + pLvl = &pStruct->aLevel[iLvl]; + pLvlOut = &pStruct->aLevel[iLvl+1]; + + fts5WriteInit(p, &writer, iSegid); + + /* Add the new segment to the output level */ + pSeg = &pLvlOut->aSeg[pLvlOut->nSeg]; + pLvlOut->nSeg++; + pSeg->pgnoFirst = 1; + pSeg->iSegid = iSegid; + pStruct->nSegment++; + + /* Read input from all segments in the input level */ + nInput = pLvl->nSeg; + } + bOldest = (pLvlOut->nSeg==1 && pStruct->nLevel==iLvl+2); + + assert( iLvl>=0 ); + for(fts5MultiIterNew(p, pStruct, flags, 0, 0, 0, iLvl, nInput, &pIter); + fts5MultiIterEof(p, pIter)==0; + fts5MultiIterNext(p, pIter, 0, 0) + ){ + Fts5SegIter *pSegIter = &pIter->aSeg[ pIter->aFirst[1].iFirst ]; + int nPos; /* position-list size field value */ + int nTerm; + const u8 *pTerm; + + /* Check for key annihilation. */ + if( pSegIter->nPos==0 && (bOldest || pSegIter->bDel==0) ) continue; + + pTerm = fts5MultiIterTerm(pIter, &nTerm); + if( nTerm!=term.n || memcmp(pTerm, term.p, nTerm) ){ + if( pnRem && writer.nLeafWritten>nRem ){ + break; + } + + /* This is a new term. Append a term to the output segment. */ + fts5WriteAppendTerm(p, &writer, nTerm, pTerm); + fts5BufferSet(&p->rc, &term, nTerm, pTerm); + } + + /* Append the rowid to the output */ + /* WRITEPOSLISTSIZE */ + fts5WriteAppendRowid(p, &writer, fts5MultiIterRowid(pIter)); + + if( eDetail==FTS5_DETAIL_NONE ){ + if( pSegIter->bDel ){ + fts5BufferAppendVarint(&p->rc, &writer.writer.buf, 0); + if( pSegIter->nPos>0 ){ + fts5BufferAppendVarint(&p->rc, &writer.writer.buf, 0); + } + } + }else{ + /* Append the position-list data to the output */ + nPos = pSegIter->nPos*2 + pSegIter->bDel; + fts5BufferAppendVarint(&p->rc, &writer.writer.buf, nPos); + fts5ChunkIterate(p, pSegIter, (void*)&writer, fts5MergeChunkCallback); + } + } + + /* Flush the last leaf page to disk. Set the output segment b-tree height + ** and last leaf page number at the same time. */ + fts5WriteFinish(p, &writer, &pSeg->pgnoLast); + + if( fts5MultiIterEof(p, pIter) ){ + int i; + + /* Remove the redundant segments from the %_data table */ + for(i=0; i<nInput; i++){ + fts5DataRemoveSegment(p, pLvl->aSeg[i].iSegid); + } + + /* Remove the redundant segments from the input level */ + if( pLvl->nSeg!=nInput ){ + int nMove = (pLvl->nSeg - nInput) * sizeof(Fts5StructureSegment); + memmove(pLvl->aSeg, &pLvl->aSeg[nInput], nMove); + } + pStruct->nSegment -= nInput; + pLvl->nSeg -= nInput; + pLvl->nMerge = 0; + if( pSeg->pgnoLast==0 ){ + pLvlOut->nSeg--; + pStruct->nSegment--; + } + }else{ + assert( pSeg->pgnoLast>0 ); + fts5TrimSegments(p, pIter); + pLvl->nMerge = nInput; + } + + fts5MultiIterFree(pIter); + fts5BufferFree(&term); + if( pnRem ) *pnRem -= writer.nLeafWritten; +} + +/* +** Do up to nPg pages of automerge work on the index. +*/ +static void fts5IndexMerge( + Fts5Index *p, /* FTS5 backend object */ + Fts5Structure **ppStruct, /* IN/OUT: Current structure of index */ + int nPg /* Pages of work to do */ +){ + int nRem = nPg; + Fts5Structure *pStruct = *ppStruct; + while( nRem>0 && p->rc==SQLITE_OK ){ + int iLvl; /* To iterate through levels */ + int iBestLvl = 0; /* Level offering the most input segments */ + int nBest = 0; /* Number of input segments on best level */ + + /* Set iBestLvl to the level to read input segments from. 
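+    ** The scan below prefers finishing an existing merge: the first level
+    ** found with nMerge>0 ends the search. Otherwise the level holding the
+    ** most segments is chosen. As an invented example, with levels of 3, 7
+    ** and 2 segments and no merge in progress, level 1 is selected, and no
+    ** work is done at all unless its 7 segments reach the automerge
+    ** threshold tested after the loop.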
*/ + assert( pStruct->nLevel>0 ); + for(iLvl=0; iLvl<pStruct->nLevel; iLvl++){ + Fts5StructureLevel *pLvl = &pStruct->aLevel[iLvl]; + if( pLvl->nMerge ){ + if( pLvl->nMerge>nBest ){ + iBestLvl = iLvl; + nBest = pLvl->nMerge; + } + break; + } + if( pLvl->nSeg>nBest ){ + nBest = pLvl->nSeg; + iBestLvl = iLvl; + } + } + + /* If nBest is still 0, then the index must be empty. */ +#ifdef SQLITE_DEBUG + for(iLvl=0; nBest==0 && iLvl<pStruct->nLevel; iLvl++){ + assert( pStruct->aLevel[iLvl].nSeg==0 ); + } +#endif + + if( nBest<p->pConfig->nAutomerge + && pStruct->aLevel[iBestLvl].nMerge==0 + ){ + break; + } + fts5IndexMergeLevel(p, &pStruct, iBestLvl, &nRem); + if( p->rc==SQLITE_OK && pStruct->aLevel[iBestLvl].nMerge==0 ){ + fts5StructurePromote(p, iBestLvl+1, pStruct); + } + } + *ppStruct = pStruct; +} + +/* +** A total of nLeaf leaf pages of data has just been flushed to a level-0 +** segment. This function updates the write-counter accordingly and, if +** necessary, performs incremental merge work. +** +** If an error occurs, set the Fts5Index.rc error code. If an error has +** already occurred, this function is a no-op. +*/ +static void fts5IndexAutomerge( + Fts5Index *p, /* FTS5 backend object */ + Fts5Structure **ppStruct, /* IN/OUT: Current structure of index */ + int nLeaf /* Number of output leaves just written */ +){ + if( p->rc==SQLITE_OK && p->pConfig->nAutomerge>0 ){ + Fts5Structure *pStruct = *ppStruct; + u64 nWrite; /* Initial value of write-counter */ + int nWork; /* Number of work-quanta to perform */ + int nRem; /* Number of leaf pages left to write */ + + /* Update the write-counter. While doing so, set nWork. */ + nWrite = pStruct->nWriteCounter; + nWork = (int)(((nWrite + nLeaf) / p->nWorkUnit) - (nWrite / p->nWorkUnit)); + pStruct->nWriteCounter += nLeaf; + nRem = (int)(p->nWorkUnit * nWork * pStruct->nLevel); + + fts5IndexMerge(p, ppStruct, nRem); + } +} + +static void fts5IndexCrisismerge( + Fts5Index *p, /* FTS5 backend object */ + Fts5Structure **ppStruct /* IN/OUT: Current structure of index */ +){ + const int nCrisis = p->pConfig->nCrisisMerge; + Fts5Structure *pStruct = *ppStruct; + int iLvl = 0; + + assert( p->rc!=SQLITE_OK || pStruct->nLevel>0 ); + while( p->rc==SQLITE_OK && pStruct->aLevel[iLvl].nSeg>=nCrisis ){ + fts5IndexMergeLevel(p, &pStruct, iLvl, 0); + assert( p->rc!=SQLITE_OK || pStruct->nLevel>(iLvl+1) ); + fts5StructurePromote(p, iLvl+1, pStruct); + iLvl++; + } + *ppStruct = pStruct; +} + +static int fts5IndexReturn(Fts5Index *p){ + int rc = p->rc; + p->rc = SQLITE_OK; + return rc; +} + +typedef struct Fts5FlushCtx Fts5FlushCtx; +struct Fts5FlushCtx { + Fts5Index *pIdx; + Fts5SegWriter writer; +}; + +/* +** Buffer aBuf[] contains a list of varints, all small enough to fit +** in a 32-bit integer. Return the size of the largest prefix of this +** list nMax bytes or less in size. +*/ +static int fts5PoslistPrefix(const u8 *aBuf, int nMax){ + int ret; + u32 dummy; + ret = fts5GetVarint32(aBuf, dummy); + if( ret<nMax ){ + while( 1 ){ + int i = fts5GetVarint32(&aBuf[ret], dummy); + if( (ret + i) > nMax ) break; + ret += i; + } + } + return ret; +} + +/* +** Flush the contents of in-memory hash table iHash to a new level-0 +** segment on disk. Also update the corresponding structure record. +** +** If an error occurs, set the Fts5Index.rc error code. If an error has +** already occurred, this function is a no-op. 
+*/ +static void fts5FlushOneHash(Fts5Index *p){ + Fts5Hash *pHash = p->pHash; + Fts5Structure *pStruct; + int iSegid; + int pgnoLast = 0; /* Last leaf page number in segment */ + + /* Obtain a reference to the index structure and allocate a new segment-id + ** for the new level-0 segment. */ + pStruct = fts5StructureRead(p); + iSegid = fts5AllocateSegid(p, pStruct); + + if( iSegid ){ + const int pgsz = p->pConfig->pgsz; + int eDetail = p->pConfig->eDetail; + Fts5StructureSegment *pSeg; /* New segment within pStruct */ + Fts5Buffer *pBuf; /* Buffer in which to assemble leaf page */ + Fts5Buffer *pPgidx; /* Buffer in which to assemble pgidx */ + + Fts5SegWriter writer; + fts5WriteInit(p, &writer, iSegid); + + pBuf = &writer.writer.buf; + pPgidx = &writer.writer.pgidx; + + /* fts5WriteInit() should have initialized the buffers to (most likely) + ** the maximum space required. */ + assert( p->rc || pBuf->nSpace>=(pgsz + FTS5_DATA_PADDING) ); + assert( p->rc || pPgidx->nSpace>=(pgsz + FTS5_DATA_PADDING) ); + + /* Begin scanning through hash table entries. This loop runs once for each + ** term/doclist currently stored within the hash table. */ + if( p->rc==SQLITE_OK ){ + p->rc = sqlite3Fts5HashScanInit(pHash, 0, 0); + } + while( p->rc==SQLITE_OK && 0==sqlite3Fts5HashScanEof(pHash) ){ + const char *zTerm; /* Buffer containing term */ + const u8 *pDoclist; /* Pointer to doclist for this term */ + int nDoclist; /* Size of doclist in bytes */ + + /* Write the term for this entry to disk. */ + sqlite3Fts5HashScanEntry(pHash, &zTerm, &pDoclist, &nDoclist); + fts5WriteAppendTerm(p, &writer, (int)strlen(zTerm), (const u8*)zTerm); + + assert( writer.bFirstRowidInPage==0 ); + if( pgsz>=(pBuf->n + pPgidx->n + nDoclist + 1) ){ + /* The entire doclist will fit on the current leaf. */ + fts5BufferSafeAppendBlob(pBuf, pDoclist, nDoclist); + }else{ + i64 iRowid = 0; + i64 iDelta = 0; + int iOff = 0; + + /* The entire doclist will not fit on this leaf. The following + ** loop iterates through the poslists that make up the current + ** doclist. */ + while( p->rc==SQLITE_OK && iOff<nDoclist ){ + iOff += fts5GetVarint(&pDoclist[iOff], (u64*)&iDelta); + iRowid += iDelta; + + if( writer.bFirstRowidInPage ){ + fts5PutU16(&pBuf->p[0], (u16)pBuf->n); /* first rowid on page */ + pBuf->n += sqlite3Fts5PutVarint(&pBuf->p[pBuf->n], iRowid); + writer.bFirstRowidInPage = 0; + fts5WriteDlidxAppend(p, &writer, iRowid); + }else{ + pBuf->n += sqlite3Fts5PutVarint(&pBuf->p[pBuf->n], iDelta); + } + assert( pBuf->n<=pBuf->nSpace ); + + if( eDetail==FTS5_DETAIL_NONE ){ + if( iOff<nDoclist && pDoclist[iOff]==0 ){ + pBuf->p[pBuf->n++] = 0; + iOff++; + if( iOff<nDoclist && pDoclist[iOff]==0 ){ + pBuf->p[pBuf->n++] = 0; + iOff++; + } + } + if( (pBuf->n + pPgidx->n)>=pgsz ){ + fts5WriteFlushLeaf(p, &writer); + } + }else{ + int bDummy; + int nPos; + int nCopy = fts5GetPoslistSize(&pDoclist[iOff], &nPos, &bDummy); + nCopy += nPos; + if( (pBuf->n + pPgidx->n + nCopy) <= pgsz ){ + /* The entire poslist will fit on the current leaf. So copy + ** it in one go. */ + fts5BufferSafeAppendBlob(pBuf, &pDoclist[iOff], nCopy); + }else{ + /* The entire poslist will not fit on this leaf. So it needs + ** to be broken into sections. The only qualification being + ** that each varint must be stored contiguously. 
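+            ** fts5PoslistPrefix() below finds the largest run of whole
+            ** varints that fits in the space left on the leaf. As an
+            ** invented example, with 10 free bytes and upcoming varints of
+            ** 3, 4 and 5 bytes, 7 bytes are copied, the leaf is flushed,
+            ** and copying resumes with the 5-byte varint on the next leaf.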
*/ + const u8 *pPoslist = &pDoclist[iOff]; + int iPos = 0; + while( p->rc==SQLITE_OK ){ + int nSpace = pgsz - pBuf->n - pPgidx->n; + int n = 0; + if( (nCopy - iPos)<=nSpace ){ + n = nCopy - iPos; + }else{ + n = fts5PoslistPrefix(&pPoslist[iPos], nSpace); + } + assert( n>0 ); + fts5BufferSafeAppendBlob(pBuf, &pPoslist[iPos], n); + iPos += n; + if( (pBuf->n + pPgidx->n)>=pgsz ){ + fts5WriteFlushLeaf(p, &writer); + } + if( iPos>=nCopy ) break; + } + } + iOff += nCopy; + } + } + } + + /* TODO2: Doclist terminator written here. */ + /* pBuf->p[pBuf->n++] = '\0'; */ + assert( pBuf->n<=pBuf->nSpace ); + sqlite3Fts5HashScanNext(pHash); + } + sqlite3Fts5HashClear(pHash); + fts5WriteFinish(p, &writer, &pgnoLast); + + /* Update the Fts5Structure. It is written back to the database by the + ** fts5StructureRelease() call below. */ + if( pStruct->nLevel==0 ){ + fts5StructureAddLevel(&p->rc, &pStruct); + } + fts5StructureExtendLevel(&p->rc, pStruct, 0, 1, 0); + if( p->rc==SQLITE_OK ){ + pSeg = &pStruct->aLevel[0].aSeg[ pStruct->aLevel[0].nSeg++ ]; + pSeg->iSegid = iSegid; + pSeg->pgnoFirst = 1; + pSeg->pgnoLast = pgnoLast; + pStruct->nSegment++; + } + fts5StructurePromote(p, 0, pStruct); + } + + fts5IndexAutomerge(p, &pStruct, pgnoLast); + fts5IndexCrisismerge(p, &pStruct); + fts5StructureWrite(p, pStruct); + fts5StructureRelease(pStruct); +} + +/* +** Flush any data stored in the in-memory hash tables to the database. +*/ +static void fts5IndexFlush(Fts5Index *p){ + /* Unless it is empty, flush the hash table to disk */ + if( p->nPendingData ){ + assert( p->pHash ); + p->nPendingData = 0; + fts5FlushOneHash(p); + } +} + + +static int sqlite3Fts5IndexOptimize(Fts5Index *p){ + Fts5Structure *pStruct; + Fts5Structure *pNew = 0; + int nSeg = 0; + + assert( p->rc==SQLITE_OK ); + fts5IndexFlush(p); + pStruct = fts5StructureRead(p); + + if( pStruct ){ + assert( pStruct->nSegment==fts5StructureCountSegments(pStruct) ); + nSeg = pStruct->nSegment; + if( nSeg>1 ){ + int nByte = sizeof(Fts5Structure); + nByte += (pStruct->nLevel+1) * sizeof(Fts5StructureLevel); + pNew = (Fts5Structure*)sqlite3Fts5MallocZero(&p->rc, nByte); + } + } + if( pNew ){ + Fts5StructureLevel *pLvl; + int nByte = nSeg * sizeof(Fts5StructureSegment); + pNew->nLevel = pStruct->nLevel+1; + pNew->nRef = 1; + pNew->nWriteCounter = pStruct->nWriteCounter; + pLvl = &pNew->aLevel[pStruct->nLevel]; + pLvl->aSeg = (Fts5StructureSegment*)sqlite3Fts5MallocZero(&p->rc, nByte); + if( pLvl->aSeg ){ + int iLvl, iSeg; + int iSegOut = 0; + for(iLvl=0; iLvl<pStruct->nLevel; iLvl++){ + for(iSeg=0; iSeg<pStruct->aLevel[iLvl].nSeg; iSeg++){ + pLvl->aSeg[iSegOut] = pStruct->aLevel[iLvl].aSeg[iSeg]; + iSegOut++; + } + } + pNew->nSegment = pLvl->nSeg = nSeg; + }else{ + sqlite3_free(pNew); + pNew = 0; + } + } + + if( pNew ){ + int iLvl = pNew->nLevel-1; + while( p->rc==SQLITE_OK && pNew->aLevel[iLvl].nSeg>0 ){ + int nRem = FTS5_OPT_WORK_UNIT; + fts5IndexMergeLevel(p, &pNew, iLvl, &nRem); + } + + fts5StructureWrite(p, pNew); + fts5StructureRelease(pNew); + } + + fts5StructureRelease(pStruct); + return fts5IndexReturn(p); +} + +static int sqlite3Fts5IndexMerge(Fts5Index *p, int nMerge){ + Fts5Structure *pStruct; + + pStruct = fts5StructureRead(p); + if( pStruct && pStruct->nLevel ){ + fts5IndexMerge(p, &pStruct, nMerge); + fts5StructureWrite(p, pStruct); + } + fts5StructureRelease(pStruct); + + return fts5IndexReturn(p); +} + +static void fts5AppendRowid( + Fts5Index *p, + i64 iDelta, + Fts5Iter *pUnused, + Fts5Buffer *pBuf +){ + UNUSED_PARAM(pUnused); + 
fts5BufferAppendVarint(&p->rc, pBuf, iDelta); +} + +static void fts5AppendPoslist( + Fts5Index *p, + i64 iDelta, + Fts5Iter *pMulti, + Fts5Buffer *pBuf +){ + int nData = pMulti->base.nData; + assert( nData>0 ); + if( p->rc==SQLITE_OK && 0==fts5BufferGrow(&p->rc, pBuf, nData+9+9) ){ + fts5BufferSafeAppendVarint(pBuf, iDelta); + fts5BufferSafeAppendVarint(pBuf, nData*2); + fts5BufferSafeAppendBlob(pBuf, pMulti->base.pData, nData); + } +} + + +static void fts5DoclistIterNext(Fts5DoclistIter *pIter){ + u8 *p = pIter->aPoslist + pIter->nSize + pIter->nPoslist; + + assert( pIter->aPoslist ); + if( p>=pIter->aEof ){ + pIter->aPoslist = 0; + }else{ + i64 iDelta; + + p += fts5GetVarint(p, (u64*)&iDelta); + pIter->iRowid += iDelta; + + /* Read position list size */ + if( p[0] & 0x80 ){ + int nPos; + pIter->nSize = fts5GetVarint32(p, nPos); + pIter->nPoslist = (nPos>>1); + }else{ + pIter->nPoslist = ((int)(p[0])) >> 1; + pIter->nSize = 1; + } + + pIter->aPoslist = p; + } +} + +static void fts5DoclistIterInit( + Fts5Buffer *pBuf, + Fts5DoclistIter *pIter +){ + memset(pIter, 0, sizeof(*pIter)); + pIter->aPoslist = pBuf->p; + pIter->aEof = &pBuf->p[pBuf->n]; + fts5DoclistIterNext(pIter); +} + +#if 0 +/* +** Append a doclist to buffer pBuf. +** +** This function assumes that space within the buffer has already been +** allocated. +*/ +static void fts5MergeAppendDocid( + Fts5Buffer *pBuf, /* Buffer to write to */ + i64 *piLastRowid, /* IN/OUT: Previous rowid written (if any) */ + i64 iRowid /* Rowid to append */ +){ + assert( pBuf->n!=0 || (*piLastRowid)==0 ); + fts5BufferSafeAppendVarint(pBuf, iRowid - *piLastRowid); + *piLastRowid = iRowid; +} +#endif + +#define fts5MergeAppendDocid(pBuf, iLastRowid, iRowid) { \ + assert( (pBuf)->n!=0 || (iLastRowid)==0 ); \ + fts5BufferSafeAppendVarint((pBuf), (iRowid) - (iLastRowid)); \ + (iLastRowid) = (iRowid); \ +} + +/* +** Swap the contents of buffer *p1 with that of *p2. +*/ +static void fts5BufferSwap(Fts5Buffer *p1, Fts5Buffer *p2){ + Fts5Buffer tmp = *p1; + *p1 = *p2; + *p2 = tmp; +} + +static void fts5NextRowid(Fts5Buffer *pBuf, int *piOff, i64 *piRowid){ + int i = *piOff; + if( i>=pBuf->n ){ + *piOff = -1; + }else{ + u64 iVal; + *piOff = i + sqlite3Fts5GetVarint(&pBuf->p[i], &iVal); + *piRowid += iVal; + } +} + +/* +** This is the equivalent of fts5MergePrefixLists() for detail=none mode. +** In this case the buffers consist of a delta-encoded list of rowids only. +*/ +static void fts5MergeRowidLists( + Fts5Index *p, /* FTS5 backend object */ + Fts5Buffer *p1, /* First list to merge */ + Fts5Buffer *p2 /* Second list to merge */ +){ + int i1 = 0; + int i2 = 0; + i64 iRowid1 = 0; + i64 iRowid2 = 0; + i64 iOut = 0; + + Fts5Buffer out; + memset(&out, 0, sizeof(out)); + sqlite3Fts5BufferSize(&p->rc, &out, p1->n + p2->n); + if( p->rc ) return; + + fts5NextRowid(p1, &i1, &iRowid1); + fts5NextRowid(p2, &i2, &iRowid2); + while( i1>=0 || i2>=0 ){ + if( i1>=0 && (i2<0 || iRowid1<iRowid2) ){ + assert( iOut==0 || iRowid1>iOut ); + fts5BufferSafeAppendVarint(&out, iRowid1 - iOut); + iOut = iRowid1; + fts5NextRowid(p1, &i1, &iRowid1); + }else{ + assert( iOut==0 || iRowid2>iOut ); + fts5BufferSafeAppendVarint(&out, iRowid2 - iOut); + iOut = iRowid2; + if( i1>=0 && iRowid1==iRowid2 ){ + fts5NextRowid(p1, &i1, &iRowid1); + } + fts5NextRowid(p2, &i2, &iRowid2); + } + } + + fts5BufferSwap(&out, p1); + fts5BufferFree(&out); +} + +/* +** Buffers p1 and p2 contain doclists. 
This function merges the content +** of the two doclists together and sets buffer p1 to the result before +** returning. +** +** If an error occurs, an error code is left in p->rc. If an error has +** already occurred, this function is a no-op. +*/ +static void fts5MergePrefixLists( + Fts5Index *p, /* FTS5 backend object */ + Fts5Buffer *p1, /* First list to merge */ + Fts5Buffer *p2 /* Second list to merge */ +){ + if( p2->n ){ + i64 iLastRowid = 0; + Fts5DoclistIter i1; + Fts5DoclistIter i2; + Fts5Buffer out = {0, 0, 0}; + Fts5Buffer tmp = {0, 0, 0}; + + if( sqlite3Fts5BufferSize(&p->rc, &out, p1->n + p2->n) ) return; + fts5DoclistIterInit(p1, &i1); + fts5DoclistIterInit(p2, &i2); + + while( 1 ){ + if( i1.iRowid<i2.iRowid ){ + /* Copy entry from i1 */ + fts5MergeAppendDocid(&out, iLastRowid, i1.iRowid); + fts5BufferSafeAppendBlob(&out, i1.aPoslist, i1.nPoslist+i1.nSize); + fts5DoclistIterNext(&i1); + if( i1.aPoslist==0 ) break; + } + else if( i2.iRowid!=i1.iRowid ){ + /* Copy entry from i2 */ + fts5MergeAppendDocid(&out, iLastRowid, i2.iRowid); + fts5BufferSafeAppendBlob(&out, i2.aPoslist, i2.nPoslist+i2.nSize); + fts5DoclistIterNext(&i2); + if( i2.aPoslist==0 ) break; + } + else{ + /* Merge the two position lists. */ + i64 iPos1 = 0; + i64 iPos2 = 0; + int iOff1 = 0; + int iOff2 = 0; + u8 *a1 = &i1.aPoslist[i1.nSize]; + u8 *a2 = &i2.aPoslist[i2.nSize]; + + i64 iPrev = 0; + Fts5PoslistWriter writer; + memset(&writer, 0, sizeof(writer)); + + fts5MergeAppendDocid(&out, iLastRowid, i2.iRowid); + fts5BufferZero(&tmp); + sqlite3Fts5BufferSize(&p->rc, &tmp, i1.nPoslist + i2.nPoslist); + if( p->rc ) break; + + sqlite3Fts5PoslistNext64(a1, i1.nPoslist, &iOff1, &iPos1); + sqlite3Fts5PoslistNext64(a2, i2.nPoslist, &iOff2, &iPos2); + assert( iPos1>=0 && iPos2>=0 ); + + if( iPos1<iPos2 ){ + sqlite3Fts5PoslistSafeAppend(&tmp, &iPrev, iPos1); + sqlite3Fts5PoslistNext64(a1, i1.nPoslist, &iOff1, &iPos1); + }else{ + sqlite3Fts5PoslistSafeAppend(&tmp, &iPrev, iPos2); + sqlite3Fts5PoslistNext64(a2, i2.nPoslist, &iOff2, &iPos2); + } + + if( iPos1>=0 && iPos2>=0 ){ + while( 1 ){ + if( iPos1<iPos2 ){ + if( iPos1!=iPrev ){ + sqlite3Fts5PoslistSafeAppend(&tmp, &iPrev, iPos1); + } + sqlite3Fts5PoslistNext64(a1, i1.nPoslist, &iOff1, &iPos1); + if( iPos1<0 ) break; + }else{ + assert( iPos2!=iPrev ); + sqlite3Fts5PoslistSafeAppend(&tmp, &iPrev, iPos2); + sqlite3Fts5PoslistNext64(a2, i2.nPoslist, &iOff2, &iPos2); + if( iPos2<0 ) break; + } + } + } + + if( iPos1>=0 ){ + if( iPos1!=iPrev ){ + sqlite3Fts5PoslistSafeAppend(&tmp, &iPrev, iPos1); + } + fts5BufferSafeAppendBlob(&tmp, &a1[iOff1], i1.nPoslist-iOff1); + }else{ + assert( iPos2>=0 && iPos2!=iPrev ); + sqlite3Fts5PoslistSafeAppend(&tmp, &iPrev, iPos2); + fts5BufferSafeAppendBlob(&tmp, &a2[iOff2], i2.nPoslist-iOff2); + } + + /* WRITEPOSLISTSIZE */ + fts5BufferSafeAppendVarint(&out, tmp.n * 2); + fts5BufferSafeAppendBlob(&out, tmp.p, tmp.n); + fts5DoclistIterNext(&i1); + fts5DoclistIterNext(&i2); + if( i1.aPoslist==0 || i2.aPoslist==0 ) break; + } + } + + if( i1.aPoslist ){ + fts5MergeAppendDocid(&out, iLastRowid, i1.iRowid); + fts5BufferSafeAppendBlob(&out, i1.aPoslist, i1.aEof - i1.aPoslist); + } + else if( i2.aPoslist ){ + fts5MergeAppendDocid(&out, iLastRowid, i2.iRowid); + fts5BufferSafeAppendBlob(&out, i2.aPoslist, i2.aEof - i2.aPoslist); + } + + fts5BufferSet(&p->rc, p1, out.n, out.p); + fts5BufferFree(&tmp); + fts5BufferFree(&out); + } +} + +static void fts5SetupPrefixIter( + Fts5Index *p, /* Index to read from */ + int bDesc, /* True for "ORDER BY rowid 
DESC" */ + const u8 *pToken, /* Buffer containing prefix to match */ + int nToken, /* Size of buffer pToken in bytes */ + Fts5Colset *pColset, /* Restrict matches to these columns */ + Fts5Iter **ppIter /* OUT: New iterator */ +){ + Fts5Structure *pStruct; + Fts5Buffer *aBuf; + const int nBuf = 32; + + void (*xMerge)(Fts5Index*, Fts5Buffer*, Fts5Buffer*); + void (*xAppend)(Fts5Index*, i64, Fts5Iter*, Fts5Buffer*); + if( p->pConfig->eDetail==FTS5_DETAIL_NONE ){ + xMerge = fts5MergeRowidLists; + xAppend = fts5AppendRowid; + }else{ + xMerge = fts5MergePrefixLists; + xAppend = fts5AppendPoslist; + } + + aBuf = (Fts5Buffer*)fts5IdxMalloc(p, sizeof(Fts5Buffer)*nBuf); + pStruct = fts5StructureRead(p); + + if( aBuf && pStruct ){ + const int flags = FTS5INDEX_QUERY_SCAN + | FTS5INDEX_QUERY_SKIPEMPTY + | FTS5INDEX_QUERY_NOOUTPUT; + int i; + i64 iLastRowid = 0; + Fts5Iter *p1 = 0; /* Iterator used to gather data from index */ + Fts5Data *pData; + Fts5Buffer doclist; + int bNewTerm = 1; + + memset(&doclist, 0, sizeof(doclist)); + fts5MultiIterNew(p, pStruct, flags, pColset, pToken, nToken, -1, 0, &p1); + fts5IterSetOutputCb(&p->rc, p1); + for( /* no-op */ ; + fts5MultiIterEof(p, p1)==0; + fts5MultiIterNext2(p, p1, &bNewTerm) + ){ + Fts5SegIter *pSeg = &p1->aSeg[ p1->aFirst[1].iFirst ]; + int nTerm = pSeg->term.n; + const u8 *pTerm = pSeg->term.p; + p1->xSetOutputs(p1, pSeg); + + assert_nc( memcmp(pToken, pTerm, MIN(nToken, nTerm))<=0 ); + if( bNewTerm ){ + if( nTerm<nToken || memcmp(pToken, pTerm, nToken) ) break; + } + + if( p1->base.nData==0 ) continue; + + if( p1->base.iRowid<=iLastRowid && doclist.n>0 ){ + for(i=0; p->rc==SQLITE_OK && doclist.n; i++){ + assert( i<nBuf ); + if( aBuf[i].n==0 ){ + fts5BufferSwap(&doclist, &aBuf[i]); + fts5BufferZero(&doclist); + }else{ + xMerge(p, &doclist, &aBuf[i]); + fts5BufferZero(&aBuf[i]); + } + } + iLastRowid = 0; + } + + xAppend(p, p1->base.iRowid-iLastRowid, p1, &doclist); + iLastRowid = p1->base.iRowid; + } + + for(i=0; i<nBuf; i++){ + if( p->rc==SQLITE_OK ){ + xMerge(p, &doclist, &aBuf[i]); + } + fts5BufferFree(&aBuf[i]); + } + fts5MultiIterFree(p1); + + pData = fts5IdxMalloc(p, sizeof(Fts5Data) + doclist.n); + if( pData ){ + pData->p = (u8*)&pData[1]; + pData->nn = pData->szLeaf = doclist.n; + memcpy(pData->p, doclist.p, doclist.n); + fts5MultiIterNew2(p, pData, bDesc, ppIter); + } + fts5BufferFree(&doclist); + } + + fts5StructureRelease(pStruct); + sqlite3_free(aBuf); +} + + +/* +** Indicate that all subsequent calls to sqlite3Fts5IndexWrite() pertain +** to the document with rowid iRowid. +*/ +static int sqlite3Fts5IndexBeginWrite(Fts5Index *p, int bDelete, i64 iRowid){ + assert( p->rc==SQLITE_OK ); + + /* Allocate the hash table if it has not already been allocated */ + if( p->pHash==0 ){ + p->rc = sqlite3Fts5HashNew(p->pConfig, &p->pHash, &p->nPendingData); + } + + /* Flush the hash table to disk if required */ + if( iRowid<p->iWriteRowid + || (iRowid==p->iWriteRowid && p->bDelete==0) + || (p->nPendingData > p->pConfig->nHashSize) + ){ + fts5IndexFlush(p); + } + + p->iWriteRowid = iRowid; + p->bDelete = bDelete; + return fts5IndexReturn(p); +} + +/* +** Commit data to disk. +*/ +static int sqlite3Fts5IndexSync(Fts5Index *p, int bCommit){ + assert( p->rc==SQLITE_OK ); + fts5IndexFlush(p); + if( bCommit ) fts5CloseReader(p); + return fts5IndexReturn(p); +} + +/* +** Discard any data stored in the in-memory hash tables. Do not write it +** to the database. Additionally, assume that the contents of the %_data +** table may have changed on disk. 
So any in-memory caches of %_data +** records must be invalidated. +*/ +static int sqlite3Fts5IndexRollback(Fts5Index *p){ + fts5CloseReader(p); + fts5IndexDiscardData(p); + /* assert( p->rc==SQLITE_OK ); */ + return SQLITE_OK; +} + +/* +** The %_data table is completely empty when this function is called. This +** function populates it with the initial structure objects for each index, +** and the initial version of the "averages" record (a zero-byte blob). +*/ +static int sqlite3Fts5IndexReinit(Fts5Index *p){ + Fts5Structure s; + memset(&s, 0, sizeof(Fts5Structure)); + fts5DataWrite(p, FTS5_AVERAGES_ROWID, (const u8*)"", 0); + fts5StructureWrite(p, &s); + return fts5IndexReturn(p); +} + +/* +** Open a new Fts5Index handle. If the bCreate argument is true, create +** and initialize the underlying %_data table. +** +** If successful, set *pp to point to the new object and return SQLITE_OK. +** Otherwise, set *pp to NULL and return an SQLite error code. +*/ +static int sqlite3Fts5IndexOpen( + Fts5Config *pConfig, + int bCreate, + Fts5Index **pp, + char **pzErr +){ + int rc = SQLITE_OK; + Fts5Index *p; /* New object */ + + *pp = p = (Fts5Index*)sqlite3Fts5MallocZero(&rc, sizeof(Fts5Index)); + if( rc==SQLITE_OK ){ + p->pConfig = pConfig; + p->nWorkUnit = FTS5_WORK_UNIT; + p->zDataTbl = sqlite3Fts5Mprintf(&rc, "%s_data", pConfig->zName); + if( p->zDataTbl && bCreate ){ + rc = sqlite3Fts5CreateTable( + pConfig, "data", "id INTEGER PRIMARY KEY, block BLOB", 0, pzErr + ); + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5CreateTable(pConfig, "idx", + "segid, term, pgno, PRIMARY KEY(segid, term)", + 1, pzErr + ); + } + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5IndexReinit(p); + } + } + } + + assert( rc!=SQLITE_OK || p->rc==SQLITE_OK ); + if( rc ){ + sqlite3Fts5IndexClose(p); + *pp = 0; + } + return rc; +} + +/* +** Close a handle opened by an earlier call to sqlite3Fts5IndexOpen(). +*/ +static int sqlite3Fts5IndexClose(Fts5Index *p){ + int rc = SQLITE_OK; + if( p ){ + assert( p->pReader==0 ); + sqlite3_finalize(p->pWriter); + sqlite3_finalize(p->pDeleter); + sqlite3_finalize(p->pIdxWriter); + sqlite3_finalize(p->pIdxDeleter); + sqlite3_finalize(p->pIdxSelect); + sqlite3Fts5HashFree(p->pHash); + sqlite3_free(p->zDataTbl); + sqlite3_free(p); + } + return rc; +} + +/* +** Argument p points to a buffer containing utf-8 text that is n bytes in +** size. Return the number of bytes in the nChar character prefix of the +** buffer, or 0 if there are less than nChar characters in total. +*/ +static int sqlite3Fts5IndexCharlenToBytelen( + const char *p, + int nByte, + int nChar +){ + int n = 0; + int i; + for(i=0; i<nChar; i++){ + if( n>=nByte ) return 0; /* Input contains fewer than nChar chars */ + if( (unsigned char)p[n++]>=0xc0 ){ + while( (p[n] & 0xc0)==0x80 ) n++; + } + } + return n; +} + +/* +** pIn is a UTF-8 encoded string, nIn bytes in size. Return the number of +** unicode characters in the string. +*/ +static int fts5IndexCharlen(const char *pIn, int nIn){ + int nChar = 0; + int i = 0; + while( i<nIn ){ + if( (unsigned char)pIn[i++]>=0xc0 ){ + while( i<nIn && (pIn[i] & 0xc0)==0x80 ) i++; + } + nChar++; + } + return nChar; +} + +/* +** Insert or remove data to or from the index. Each time a document is +** added to or removed from the index, this function is called one or more +** times. +** +** For an insert, it must be called once for each token in the new document. 
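+** For example, adding a hypothetical two-token document as rowid 1,
+** column 0 might involve a call sequence along the lines of:
+**
+**   sqlite3Fts5IndexBeginWrite(p, 0, 1);
+**   sqlite3Fts5IndexWrite(p, 0, 0, "hello", 5);
+**   sqlite3Fts5IndexWrite(p, 0, 1, "world", 5);
+**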
+** If the operation is a delete, it must be called (at least) once for each +** unique token in the document with an iCol value less than zero. The iPos +** argument is ignored for a delete. +*/ +static int sqlite3Fts5IndexWrite( + Fts5Index *p, /* Index to write to */ + int iCol, /* Column token appears in (-ve -> delete) */ + int iPos, /* Position of token within column */ + const char *pToken, int nToken /* Token to add or remove to or from index */ +){ + int i; /* Used to iterate through indexes */ + int rc = SQLITE_OK; /* Return code */ + Fts5Config *pConfig = p->pConfig; + + assert( p->rc==SQLITE_OK ); + assert( (iCol<0)==p->bDelete ); + + /* Add the entry to the main terms index. */ + rc = sqlite3Fts5HashWrite( + p->pHash, p->iWriteRowid, iCol, iPos, FTS5_MAIN_PREFIX, pToken, nToken + ); + + for(i=0; i<pConfig->nPrefix && rc==SQLITE_OK; i++){ + const int nChar = pConfig->aPrefix[i]; + int nByte = sqlite3Fts5IndexCharlenToBytelen(pToken, nToken, nChar); + if( nByte ){ + rc = sqlite3Fts5HashWrite(p->pHash, + p->iWriteRowid, iCol, iPos, (char)(FTS5_MAIN_PREFIX+i+1), pToken, + nByte + ); + } + } + + return rc; +} + +/* +** Open a new iterator to iterate though all rowid that match the +** specified token or token prefix. +*/ +static int sqlite3Fts5IndexQuery( + Fts5Index *p, /* FTS index to query */ + const char *pToken, int nToken, /* Token (or prefix) to query for */ + int flags, /* Mask of FTS5INDEX_QUERY_X flags */ + Fts5Colset *pColset, /* Match these columns only */ + Fts5IndexIter **ppIter /* OUT: New iterator object */ +){ + Fts5Config *pConfig = p->pConfig; + Fts5Iter *pRet = 0; + Fts5Buffer buf = {0, 0, 0}; + + /* If the QUERY_SCAN flag is set, all other flags must be clear. */ + assert( (flags & FTS5INDEX_QUERY_SCAN)==0 || flags==FTS5INDEX_QUERY_SCAN ); + + if( sqlite3Fts5BufferSize(&p->rc, &buf, nToken+1)==0 ){ + int iIdx = 0; /* Index to search */ + memcpy(&buf.p[1], pToken, nToken); + + /* Figure out which index to search and set iIdx accordingly. If this + ** is a prefix query for which there is no prefix index, set iIdx to + ** greater than pConfig->nPrefix to indicate that the query will be + ** satisfied by scanning multiple terms in the main index. + ** + ** If the QUERY_TEST_NOIDX flag was specified, then this must be a + ** prefix-query. Instead of using a prefix-index (if one exists), + ** evaluate the prefix query using the main FTS index. This is used + ** for internal sanity checking by the integrity-check in debug + ** mode only. 
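+    **
+    ** Note that iIdx==0 selects the main terms index and values 1..nPrefix
+    ** select the corresponding prefix index. All indexes share a single
+    ** key space: each term stored in index iIdx is prefixed with the
+    ** single byte (FTS5_MAIN_PREFIX + iIdx), which is why buf.p[0] is set
+    ** below before the lookup or scan is run.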
*/ +#ifdef SQLITE_DEBUG + if( pConfig->bPrefixIndex==0 || (flags & FTS5INDEX_QUERY_TEST_NOIDX) ){ + assert( flags & FTS5INDEX_QUERY_PREFIX ); + iIdx = 1+pConfig->nPrefix; + }else +#endif + if( flags & FTS5INDEX_QUERY_PREFIX ){ + int nChar = fts5IndexCharlen(pToken, nToken); + for(iIdx=1; iIdx<=pConfig->nPrefix; iIdx++){ + if( pConfig->aPrefix[iIdx-1]==nChar ) break; + } + } + + if( iIdx<=pConfig->nPrefix ){ + /* Straight index lookup */ + Fts5Structure *pStruct = fts5StructureRead(p); + buf.p[0] = (u8)(FTS5_MAIN_PREFIX + iIdx); + if( pStruct ){ + fts5MultiIterNew(p, pStruct, flags | FTS5INDEX_QUERY_SKIPEMPTY, + pColset, buf.p, nToken+1, -1, 0, &pRet + ); + fts5StructureRelease(pStruct); + } + }else{ + /* Scan multiple terms in the main index */ + int bDesc = (flags & FTS5INDEX_QUERY_DESC)!=0; + buf.p[0] = FTS5_MAIN_PREFIX; + fts5SetupPrefixIter(p, bDesc, buf.p, nToken+1, pColset, &pRet); + assert( p->rc!=SQLITE_OK || pRet->pColset==0 ); + fts5IterSetOutputCb(&p->rc, pRet); + if( p->rc==SQLITE_OK ){ + Fts5SegIter *pSeg = &pRet->aSeg[pRet->aFirst[1].iFirst]; + if( pSeg->pLeaf ) pRet->xSetOutputs(pRet, pSeg); + } + } + + if( p->rc ){ + sqlite3Fts5IterClose(&pRet->base); + pRet = 0; + fts5CloseReader(p); + } + + *ppIter = &pRet->base; + sqlite3Fts5BufferFree(&buf); + } + return fts5IndexReturn(p); +} + +/* +** Return true if the iterator passed as the only argument is at EOF. +*/ +/* +** Move to the next matching rowid. +*/ +static int sqlite3Fts5IterNext(Fts5IndexIter *pIndexIter){ + Fts5Iter *pIter = (Fts5Iter*)pIndexIter; + assert( pIter->pIndex->rc==SQLITE_OK ); + fts5MultiIterNext(pIter->pIndex, pIter, 0, 0); + return fts5IndexReturn(pIter->pIndex); +} + +/* +** Move to the next matching term/rowid. Used by the fts5vocab module. +*/ +static int sqlite3Fts5IterNextScan(Fts5IndexIter *pIndexIter){ + Fts5Iter *pIter = (Fts5Iter*)pIndexIter; + Fts5Index *p = pIter->pIndex; + + assert( pIter->pIndex->rc==SQLITE_OK ); + + fts5MultiIterNext(p, pIter, 0, 0); + if( p->rc==SQLITE_OK ){ + Fts5SegIter *pSeg = &pIter->aSeg[ pIter->aFirst[1].iFirst ]; + if( pSeg->pLeaf && pSeg->term.p[0]!=FTS5_MAIN_PREFIX ){ + fts5DataRelease(pSeg->pLeaf); + pSeg->pLeaf = 0; + pIter->base.bEof = 1; + } + } + + return fts5IndexReturn(pIter->pIndex); +} + +/* +** Move to the next matching rowid that occurs at or after iMatch. The +** definition of "at or after" depends on whether this iterator iterates +** in ascending or descending rowid order. +*/ +static int sqlite3Fts5IterNextFrom(Fts5IndexIter *pIndexIter, i64 iMatch){ + Fts5Iter *pIter = (Fts5Iter*)pIndexIter; + fts5MultiIterNextFrom(pIter->pIndex, pIter, iMatch); + return fts5IndexReturn(pIter->pIndex); +} + +/* +** Return the current term. +*/ +static const char *sqlite3Fts5IterTerm(Fts5IndexIter *pIndexIter, int *pn){ + int n; + const char *z = (const char*)fts5MultiIterTerm((Fts5Iter*)pIndexIter, &n); + *pn = n-1; + return &z[1]; +} + +/* +** Close an iterator opened by an earlier call to sqlite3Fts5IndexQuery(). +*/ +static void sqlite3Fts5IterClose(Fts5IndexIter *pIndexIter){ + if( pIndexIter ){ + Fts5Iter *pIter = (Fts5Iter*)pIndexIter; + Fts5Index *pIndex = pIter->pIndex; + fts5MultiIterFree(pIter); + fts5CloseReader(pIndex); + } +} + +/* +** Read and decode the "averages" record from the database. +** +** Parameter anSize must point to an array of size nCol, where nCol is +** the number of user defined columns in the FTS table. 
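+**
+** The "averages" record itself is simply a list of varints: the total
+** number of rows in the table followed by one value per column (the
+** values returned via anSize[]). The initial version of the record, as
+** written by sqlite3Fts5IndexReinit(), is a zero-byte blob, which decodes
+** to zero rows and all-zero column values.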
+*/ +static int sqlite3Fts5IndexGetAverages(Fts5Index *p, i64 *pnRow, i64 *anSize){ + int nCol = p->pConfig->nCol; + Fts5Data *pData; + + *pnRow = 0; + memset(anSize, 0, sizeof(i64) * nCol); + pData = fts5DataRead(p, FTS5_AVERAGES_ROWID); + if( p->rc==SQLITE_OK && pData->nn ){ + int i = 0; + int iCol; + i += fts5GetVarint(&pData->p[i], (u64*)pnRow); + for(iCol=0; i<pData->nn && iCol<nCol; iCol++){ + i += fts5GetVarint(&pData->p[i], (u64*)&anSize[iCol]); + } + } + + fts5DataRelease(pData); + return fts5IndexReturn(p); +} + +/* +** Replace the current "averages" record with the contents of the buffer +** supplied as the second argument. +*/ +static int sqlite3Fts5IndexSetAverages(Fts5Index *p, const u8 *pData, int nData){ + assert( p->rc==SQLITE_OK ); + fts5DataWrite(p, FTS5_AVERAGES_ROWID, pData, nData); + return fts5IndexReturn(p); +} + +/* +** Return the total number of blocks this module has read from the %_data +** table since it was created. +*/ +static int sqlite3Fts5IndexReads(Fts5Index *p){ + return p->nRead; +} + +/* +** Set the 32-bit cookie value stored at the start of all structure +** records to the value passed as the second argument. +** +** Return SQLITE_OK if successful, or an SQLite error code if an error +** occurs. +*/ +static int sqlite3Fts5IndexSetCookie(Fts5Index *p, int iNew){ + int rc; /* Return code */ + Fts5Config *pConfig = p->pConfig; /* Configuration object */ + u8 aCookie[4]; /* Binary representation of iNew */ + sqlite3_blob *pBlob = 0; + + assert( p->rc==SQLITE_OK ); + sqlite3Fts5Put32(aCookie, iNew); + + rc = sqlite3_blob_open(pConfig->db, pConfig->zDb, p->zDataTbl, + "block", FTS5_STRUCTURE_ROWID, 1, &pBlob + ); + if( rc==SQLITE_OK ){ + sqlite3_blob_write(pBlob, aCookie, 4, 0); + rc = sqlite3_blob_close(pBlob); + } + + return rc; +} + +static int sqlite3Fts5IndexLoadConfig(Fts5Index *p){ + Fts5Structure *pStruct; + pStruct = fts5StructureRead(p); + fts5StructureRelease(pStruct); + return fts5IndexReturn(p); +} + + +/************************************************************************* +************************************************************************** +** Below this point is the implementation of the integrity-check +** functionality. +*/ + +/* +** Return a simple checksum value based on the arguments. +*/ +static u64 sqlite3Fts5IndexEntryCksum( + i64 iRowid, + int iCol, + int iPos, + int iIdx, + const char *pTerm, + int nTerm +){ + int i; + u64 ret = iRowid; + ret += (ret<<3) + iCol; + ret += (ret<<3) + iPos; + if( iIdx>=0 ) ret += (ret<<3) + (FTS5_MAIN_PREFIX + iIdx); + for(i=0; i<nTerm; i++) ret += (ret<<3) + pTerm[i]; + return ret; +} + +#ifdef SQLITE_DEBUG +/* +** This function is purely an internal test. It does not contribute to +** FTS functionality, or even the integrity-check, in any way. +** +** Instead, it tests that the same set of pgno/rowid combinations are +** visited regardless of whether the doclist-index identified by parameters +** iSegid/iLeaf is iterated in forwards or reverse order. 
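+**
+** It does so by folding each (rowid, pgno) pair visited into an
+** order-independent checksum, once while iterating forwards and once
+** while iterating in reverse. If the two checksum values do not match,
+** p->rc is set to FTS5_CORRUPT.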
+*/ +static void fts5TestDlidxReverse( + Fts5Index *p, + int iSegid, /* Segment id to load from */ + int iLeaf /* Load doclist-index for this leaf */ +){ + Fts5DlidxIter *pDlidx = 0; + u64 cksum1 = 13; + u64 cksum2 = 13; + + for(pDlidx=fts5DlidxIterInit(p, 0, iSegid, iLeaf); + fts5DlidxIterEof(p, pDlidx)==0; + fts5DlidxIterNext(p, pDlidx) + ){ + i64 iRowid = fts5DlidxIterRowid(pDlidx); + int pgno = fts5DlidxIterPgno(pDlidx); + assert( pgno>iLeaf ); + cksum1 += iRowid + ((i64)pgno<<32); + } + fts5DlidxIterFree(pDlidx); + pDlidx = 0; + + for(pDlidx=fts5DlidxIterInit(p, 1, iSegid, iLeaf); + fts5DlidxIterEof(p, pDlidx)==0; + fts5DlidxIterPrev(p, pDlidx) + ){ + i64 iRowid = fts5DlidxIterRowid(pDlidx); + int pgno = fts5DlidxIterPgno(pDlidx); + assert( fts5DlidxIterPgno(pDlidx)>iLeaf ); + cksum2 += iRowid + ((i64)pgno<<32); + } + fts5DlidxIterFree(pDlidx); + pDlidx = 0; + + if( p->rc==SQLITE_OK && cksum1!=cksum2 ) p->rc = FTS5_CORRUPT; +} + +static int fts5QueryCksum( + Fts5Index *p, /* Fts5 index object */ + int iIdx, + const char *z, /* Index key to query for */ + int n, /* Size of index key in bytes */ + int flags, /* Flags for Fts5IndexQuery */ + u64 *pCksum /* IN/OUT: Checksum value */ +){ + int eDetail = p->pConfig->eDetail; + u64 cksum = *pCksum; + Fts5IndexIter *pIter = 0; + int rc = sqlite3Fts5IndexQuery(p, z, n, flags, 0, &pIter); + + while( rc==SQLITE_OK && 0==sqlite3Fts5IterEof(pIter) ){ + i64 rowid = pIter->iRowid; + + if( eDetail==FTS5_DETAIL_NONE ){ + cksum ^= sqlite3Fts5IndexEntryCksum(rowid, 0, 0, iIdx, z, n); + }else{ + Fts5PoslistReader sReader; + for(sqlite3Fts5PoslistReaderInit(pIter->pData, pIter->nData, &sReader); + sReader.bEof==0; + sqlite3Fts5PoslistReaderNext(&sReader) + ){ + int iCol = FTS5_POS2COLUMN(sReader.iPos); + int iOff = FTS5_POS2OFFSET(sReader.iPos); + cksum ^= sqlite3Fts5IndexEntryCksum(rowid, iCol, iOff, iIdx, z, n); + } + } + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5IterNext(pIter); + } + } + sqlite3Fts5IterClose(pIter); + + *pCksum = cksum; + return rc; +} + + +/* +** This function is also purely an internal test. It does not contribute to +** FTS functionality, or even the integrity-check, in any way. +*/ +static void fts5TestTerm( + Fts5Index *p, + Fts5Buffer *pPrev, /* Previous term */ + const char *z, int n, /* Possibly new term to test */ + u64 expected, + u64 *pCksum +){ + int rc = p->rc; + if( pPrev->n==0 ){ + fts5BufferSet(&rc, pPrev, n, (const u8*)z); + }else + if( rc==SQLITE_OK && (pPrev->n!=n || memcmp(pPrev->p, z, n)) ){ + u64 cksum3 = *pCksum; + const char *zTerm = (const char*)&pPrev->p[1]; /* term sans prefix-byte */ + int nTerm = pPrev->n-1; /* Size of zTerm in bytes */ + int iIdx = (pPrev->p[0] - FTS5_MAIN_PREFIX); + int flags = (iIdx==0 ? 0 : FTS5INDEX_QUERY_PREFIX); + u64 ck1 = 0; + u64 ck2 = 0; + + /* Check that the results returned for ASC and DESC queries are + ** the same. If not, call this corruption. */ + rc = fts5QueryCksum(p, iIdx, zTerm, nTerm, flags, &ck1); + if( rc==SQLITE_OK ){ + int f = flags|FTS5INDEX_QUERY_DESC; + rc = fts5QueryCksum(p, iIdx, zTerm, nTerm, f, &ck2); + } + if( rc==SQLITE_OK && ck1!=ck2 ) rc = FTS5_CORRUPT; + + /* If this is a prefix query, check that the results returned if the + ** the index is disabled are the same. In both ASC and DESC order. + ** + ** This check may only be performed if the hash table is empty. This + ** is because the hash table only supports a single scan query at + ** a time, and the multi-iter loop from which this function is called + ** is already performing such a scan. 
*/ + if( p->nPendingData==0 ){ + if( iIdx>0 && rc==SQLITE_OK ){ + int f = flags|FTS5INDEX_QUERY_TEST_NOIDX; + ck2 = 0; + rc = fts5QueryCksum(p, iIdx, zTerm, nTerm, f, &ck2); + if( rc==SQLITE_OK && ck1!=ck2 ) rc = FTS5_CORRUPT; + } + if( iIdx>0 && rc==SQLITE_OK ){ + int f = flags|FTS5INDEX_QUERY_TEST_NOIDX|FTS5INDEX_QUERY_DESC; + ck2 = 0; + rc = fts5QueryCksum(p, iIdx, zTerm, nTerm, f, &ck2); + if( rc==SQLITE_OK && ck1!=ck2 ) rc = FTS5_CORRUPT; + } + } + + cksum3 ^= ck1; + fts5BufferSet(&rc, pPrev, n, (const u8*)z); + + if( rc==SQLITE_OK && cksum3!=expected ){ + rc = FTS5_CORRUPT; + } + *pCksum = cksum3; + } + p->rc = rc; +} + +#else +# define fts5TestDlidxReverse(x,y,z) +# define fts5TestTerm(u,v,w,x,y,z) +#endif + +/* +** Check that: +** +** 1) All leaves of pSeg between iFirst and iLast (inclusive) exist and +** contain zero terms. +** 2) All leaves of pSeg between iNoRowid and iLast (inclusive) exist and +** contain zero rowids. +*/ +static void fts5IndexIntegrityCheckEmpty( + Fts5Index *p, + Fts5StructureSegment *pSeg, /* Segment to check internal consistency */ + int iFirst, + int iNoRowid, + int iLast +){ + int i; + + /* Now check that the iter.nEmpty leaves following the current leaf + ** (a) exist and (b) contain no terms. */ + for(i=iFirst; p->rc==SQLITE_OK && i<=iLast; i++){ + Fts5Data *pLeaf = fts5DataRead(p, FTS5_SEGMENT_ROWID(pSeg->iSegid, i)); + if( pLeaf ){ + if( !fts5LeafIsTermless(pLeaf) ) p->rc = FTS5_CORRUPT; + if( i>=iNoRowid && 0!=fts5LeafFirstRowidOff(pLeaf) ) p->rc = FTS5_CORRUPT; + } + fts5DataRelease(pLeaf); + } +} + +static void fts5IntegrityCheckPgidx(Fts5Index *p, Fts5Data *pLeaf){ + int iTermOff = 0; + int ii; + + Fts5Buffer buf1 = {0,0,0}; + Fts5Buffer buf2 = {0,0,0}; + + ii = pLeaf->szLeaf; + while( ii<pLeaf->nn && p->rc==SQLITE_OK ){ + int res; + int iOff; + int nIncr; + + ii += fts5GetVarint32(&pLeaf->p[ii], nIncr); + iTermOff += nIncr; + iOff = iTermOff; + + if( iOff>=pLeaf->szLeaf ){ + p->rc = FTS5_CORRUPT; + }else if( iTermOff==nIncr ){ + int nByte; + iOff += fts5GetVarint32(&pLeaf->p[iOff], nByte); + if( (iOff+nByte)>pLeaf->szLeaf ){ + p->rc = FTS5_CORRUPT; + }else{ + fts5BufferSet(&p->rc, &buf1, nByte, &pLeaf->p[iOff]); + } + }else{ + int nKeep, nByte; + iOff += fts5GetVarint32(&pLeaf->p[iOff], nKeep); + iOff += fts5GetVarint32(&pLeaf->p[iOff], nByte); + if( nKeep>buf1.n || (iOff+nByte)>pLeaf->szLeaf ){ + p->rc = FTS5_CORRUPT; + }else{ + buf1.n = nKeep; + fts5BufferAppendBlob(&p->rc, &buf1, nByte, &pLeaf->p[iOff]); + } + + if( p->rc==SQLITE_OK ){ + res = fts5BufferCompare(&buf1, &buf2); + if( res<=0 ) p->rc = FTS5_CORRUPT; + } + } + fts5BufferSet(&p->rc, &buf2, buf1.n, buf1.p); + } + + fts5BufferFree(&buf1); + fts5BufferFree(&buf2); +} + +static void fts5IndexIntegrityCheckSegment( + Fts5Index *p, /* FTS5 backend object */ + Fts5StructureSegment *pSeg /* Segment to check internal consistency */ +){ + Fts5Config *pConfig = p->pConfig; + sqlite3_stmt *pStmt = 0; + int rc2; + int iIdxPrevLeaf = pSeg->pgnoFirst-1; + int iDlidxPrevLeaf = pSeg->pgnoLast; + + if( pSeg->pgnoFirst==0 ) return; + + fts5IndexPrepareStmt(p, &pStmt, sqlite3_mprintf( + "SELECT segid, term, (pgno>>1), (pgno&1) FROM %Q.'%q_idx' WHERE segid=%d", + pConfig->zDb, pConfig->zName, pSeg->iSegid + )); + + /* Iterate through the b-tree hierarchy. 
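+  ** Each row visited here is one b-tree entry from the %_idx table: the
+  ** split-key term, the first leaf page on which that term appears
+  ** (pgno>>1), and a flag indicating whether or not the term has a
+  ** doclist-index attached (pgno&1).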
*/ + while( p->rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pStmt) ){ + i64 iRow; /* Rowid for this leaf */ + Fts5Data *pLeaf; /* Data for this leaf */ + + int nIdxTerm = sqlite3_column_bytes(pStmt, 1); + const char *zIdxTerm = (const char*)sqlite3_column_text(pStmt, 1); + int iIdxLeaf = sqlite3_column_int(pStmt, 2); + int bIdxDlidx = sqlite3_column_int(pStmt, 3); + + /* If the leaf in question has already been trimmed from the segment, + ** ignore this b-tree entry. Otherwise, load it into memory. */ + if( iIdxLeaf<pSeg->pgnoFirst ) continue; + iRow = FTS5_SEGMENT_ROWID(pSeg->iSegid, iIdxLeaf); + pLeaf = fts5DataRead(p, iRow); + if( pLeaf==0 ) break; + + /* Check that the leaf contains at least one term, and that it is equal + ** to or larger than the split-key in zIdxTerm. Also check that if there + ** is also a rowid pointer within the leaf page header, it points to a + ** location before the term. */ + if( pLeaf->nn<=pLeaf->szLeaf ){ + p->rc = FTS5_CORRUPT; + }else{ + int iOff; /* Offset of first term on leaf */ + int iRowidOff; /* Offset of first rowid on leaf */ + int nTerm; /* Size of term on leaf in bytes */ + int res; /* Comparison of term and split-key */ + + iOff = fts5LeafFirstTermOff(pLeaf); + iRowidOff = fts5LeafFirstRowidOff(pLeaf); + if( iRowidOff>=iOff ){ + p->rc = FTS5_CORRUPT; + }else{ + iOff += fts5GetVarint32(&pLeaf->p[iOff], nTerm); + res = memcmp(&pLeaf->p[iOff], zIdxTerm, MIN(nTerm, nIdxTerm)); + if( res==0 ) res = nTerm - nIdxTerm; + if( res<0 ) p->rc = FTS5_CORRUPT; + } + + fts5IntegrityCheckPgidx(p, pLeaf); + } + fts5DataRelease(pLeaf); + if( p->rc ) break; + + /* Now check that the iter.nEmpty leaves following the current leaf + ** (a) exist and (b) contain no terms. */ + fts5IndexIntegrityCheckEmpty( + p, pSeg, iIdxPrevLeaf+1, iDlidxPrevLeaf+1, iIdxLeaf-1 + ); + if( p->rc ) break; + + /* If there is a doclist-index, check that it looks right. */ + if( bIdxDlidx ){ + Fts5DlidxIter *pDlidx = 0; /* For iterating through doclist index */ + int iPrevLeaf = iIdxLeaf; + int iSegid = pSeg->iSegid; + int iPg = 0; + i64 iKey; + + for(pDlidx=fts5DlidxIterInit(p, 0, iSegid, iIdxLeaf); + fts5DlidxIterEof(p, pDlidx)==0; + fts5DlidxIterNext(p, pDlidx) + ){ + + /* Check any rowid-less pages that occur before the current leaf. */ + for(iPg=iPrevLeaf+1; iPg<fts5DlidxIterPgno(pDlidx); iPg++){ + iKey = FTS5_SEGMENT_ROWID(iSegid, iPg); + pLeaf = fts5DataRead(p, iKey); + if( pLeaf ){ + if( fts5LeafFirstRowidOff(pLeaf)!=0 ) p->rc = FTS5_CORRUPT; + fts5DataRelease(pLeaf); + } + } + iPrevLeaf = fts5DlidxIterPgno(pDlidx); + + /* Check that the leaf page indicated by the iterator really does + ** contain the rowid suggested by the same. 
*/ + iKey = FTS5_SEGMENT_ROWID(iSegid, iPrevLeaf); + pLeaf = fts5DataRead(p, iKey); + if( pLeaf ){ + i64 iRowid; + int iRowidOff = fts5LeafFirstRowidOff(pLeaf); + ASSERT_SZLEAF_OK(pLeaf); + if( iRowidOff>=pLeaf->szLeaf ){ + p->rc = FTS5_CORRUPT; + }else{ + fts5GetVarint(&pLeaf->p[iRowidOff], (u64*)&iRowid); + if( iRowid!=fts5DlidxIterRowid(pDlidx) ) p->rc = FTS5_CORRUPT; + } + fts5DataRelease(pLeaf); + } + } + + iDlidxPrevLeaf = iPg; + fts5DlidxIterFree(pDlidx); + fts5TestDlidxReverse(p, iSegid, iIdxLeaf); + }else{ + iDlidxPrevLeaf = pSeg->pgnoLast; + /* TODO: Check there is no doclist index */ + } + + iIdxPrevLeaf = iIdxLeaf; + } + + rc2 = sqlite3_finalize(pStmt); + if( p->rc==SQLITE_OK ) p->rc = rc2; + + /* Page iter.iLeaf must now be the rightmost leaf-page in the segment */ +#if 0 + if( p->rc==SQLITE_OK && iter.iLeaf!=pSeg->pgnoLast ){ + p->rc = FTS5_CORRUPT; + } +#endif +} + + +/* +** Run internal checks to ensure that the FTS index (a) is internally +** consistent and (b) contains entries for which the XOR of the checksums +** as calculated by sqlite3Fts5IndexEntryCksum() is cksum. +** +** Return SQLITE_CORRUPT if any of the internal checks fail, or if the +** checksum does not match. Return SQLITE_OK if all checks pass without +** error, or some other SQLite error code if another error (e.g. OOM) +** occurs. +*/ +static int sqlite3Fts5IndexIntegrityCheck(Fts5Index *p, u64 cksum){ + int eDetail = p->pConfig->eDetail; + u64 cksum2 = 0; /* Checksum based on contents of indexes */ + Fts5Buffer poslist = {0,0,0}; /* Buffer used to hold a poslist */ + Fts5Iter *pIter; /* Used to iterate through entire index */ + Fts5Structure *pStruct; /* Index structure */ + +#ifdef SQLITE_DEBUG + /* Used by extra internal tests only run if NDEBUG is not defined */ + u64 cksum3 = 0; /* Checksum based on contents of indexes */ + Fts5Buffer term = {0,0,0}; /* Buffer used to hold most recent term */ +#endif + const int flags = FTS5INDEX_QUERY_NOOUTPUT; + + /* Load the FTS index structure */ + pStruct = fts5StructureRead(p); + + /* Check that the internal nodes of each segment match the leaves */ + if( pStruct ){ + int iLvl, iSeg; + for(iLvl=0; iLvl<pStruct->nLevel; iLvl++){ + for(iSeg=0; iSeg<pStruct->aLevel[iLvl].nSeg; iSeg++){ + Fts5StructureSegment *pSeg = &pStruct->aLevel[iLvl].aSeg[iSeg]; + fts5IndexIntegrityCheckSegment(p, pSeg); + } + } + } + + /* The cksum argument passed to this function is a checksum calculated + ** based on all expected entries in the FTS index (including prefix index + ** entries). This block checks that a checksum calculated based on the + ** actual contents of FTS index is identical. + ** + ** Two versions of the same checksum are calculated. The first (stack + ** variable cksum2) based on entries extracted from the full-text index + ** while doing a linear scan of each individual index in turn. + ** + ** As each term visited by the linear scans, a separate query for the + ** same term is performed. cksum3 is calculated based on the entries + ** extracted by these queries. + */ + for(fts5MultiIterNew(p, pStruct, flags, 0, 0, 0, -1, 0, &pIter); + fts5MultiIterEof(p, pIter)==0; + fts5MultiIterNext(p, pIter, 0, 0) + ){ + int n; /* Size of term in bytes */ + i64 iPos = 0; /* Position read from poslist */ + int iOff = 0; /* Offset within poslist */ + i64 iRowid = fts5MultiIterRowid(pIter); + char *z = (char*)fts5MultiIterTerm(pIter, &n); + + /* If this is a new term, query for it. Update cksum3 with the results. 
*/ + fts5TestTerm(p, &term, z, n, cksum2, &cksum3); + + if( eDetail==FTS5_DETAIL_NONE ){ + if( 0==fts5MultiIterIsEmpty(p, pIter) ){ + cksum2 ^= sqlite3Fts5IndexEntryCksum(iRowid, 0, 0, -1, z, n); + } + }else{ + poslist.n = 0; + fts5SegiterPoslist(p, &pIter->aSeg[pIter->aFirst[1].iFirst], 0, &poslist); + while( 0==sqlite3Fts5PoslistNext64(poslist.p, poslist.n, &iOff, &iPos) ){ + int iCol = FTS5_POS2COLUMN(iPos); + int iTokOff = FTS5_POS2OFFSET(iPos); + cksum2 ^= sqlite3Fts5IndexEntryCksum(iRowid, iCol, iTokOff, -1, z, n); + } + } + } + fts5TestTerm(p, &term, 0, 0, cksum2, &cksum3); + + fts5MultiIterFree(pIter); + if( p->rc==SQLITE_OK && cksum!=cksum2 ) p->rc = FTS5_CORRUPT; + + fts5StructureRelease(pStruct); +#ifdef SQLITE_DEBUG + fts5BufferFree(&term); +#endif + fts5BufferFree(&poslist); + return fts5IndexReturn(p); +} + +/************************************************************************* +************************************************************************** +** Below this point is the implementation of the fts5_decode() scalar +** function only. +*/ + +/* +** Decode a segment-data rowid from the %_data table. This function is +** the opposite of macro FTS5_SEGMENT_ROWID(). +*/ +static void fts5DecodeRowid( + i64 iRowid, /* Rowid from %_data table */ + int *piSegid, /* OUT: Segment id */ + int *pbDlidx, /* OUT: Dlidx flag */ + int *piHeight, /* OUT: Height */ + int *piPgno /* OUT: Page number */ +){ + *piPgno = (int)(iRowid & (((i64)1 << FTS5_DATA_PAGE_B) - 1)); + iRowid >>= FTS5_DATA_PAGE_B; + + *piHeight = (int)(iRowid & (((i64)1 << FTS5_DATA_HEIGHT_B) - 1)); + iRowid >>= FTS5_DATA_HEIGHT_B; + + *pbDlidx = (int)(iRowid & 0x0001); + iRowid >>= FTS5_DATA_DLI_B; + + *piSegid = (int)(iRowid & (((i64)1 << FTS5_DATA_ID_B) - 1)); +} + +static void fts5DebugRowid(int *pRc, Fts5Buffer *pBuf, i64 iKey){ + int iSegid, iHeight, iPgno, bDlidx; /* Rowid compenents */ + fts5DecodeRowid(iKey, &iSegid, &bDlidx, &iHeight, &iPgno); + + if( iSegid==0 ){ + if( iKey==FTS5_AVERAGES_ROWID ){ + sqlite3Fts5BufferAppendPrintf(pRc, pBuf, "{averages} "); + }else{ + sqlite3Fts5BufferAppendPrintf(pRc, pBuf, "{structure}"); + } + } + else{ + sqlite3Fts5BufferAppendPrintf(pRc, pBuf, "{%ssegid=%d h=%d pgno=%d}", + bDlidx ? "dlidx " : "", iSegid, iHeight, iPgno + ); + } +} + +static void fts5DebugStructure( + int *pRc, /* IN/OUT: error code */ + Fts5Buffer *pBuf, + Fts5Structure *p +){ + int iLvl, iSeg; /* Iterate through levels, segments */ + + for(iLvl=0; iLvl<p->nLevel; iLvl++){ + Fts5StructureLevel *pLvl = &p->aLevel[iLvl]; + sqlite3Fts5BufferAppendPrintf(pRc, pBuf, + " {lvl=%d nMerge=%d nSeg=%d", iLvl, pLvl->nMerge, pLvl->nSeg + ); + for(iSeg=0; iSeg<pLvl->nSeg; iSeg++){ + Fts5StructureSegment *pSeg = &pLvl->aSeg[iSeg]; + sqlite3Fts5BufferAppendPrintf(pRc, pBuf, " {id=%d leaves=%d..%d}", + pSeg->iSegid, pSeg->pgnoFirst, pSeg->pgnoLast + ); + } + sqlite3Fts5BufferAppendPrintf(pRc, pBuf, "}"); + } +} + +/* +** This is part of the fts5_decode() debugging aid. +** +** Arguments pBlob/nBlob contain a serialized Fts5Structure object. This +** function appends a human-readable representation of the same object +** to the buffer passed as the second argument. 
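+**
+** For example, a structure consisting of a single level containing a
+** single segment might be rendered as something like:
+**
+**   " {lvl=0 nMerge=0 nSeg=1 {id=1 leaves=1..10}}"
+**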
+*/ +static void fts5DecodeStructure( + int *pRc, /* IN/OUT: error code */ + Fts5Buffer *pBuf, + const u8 *pBlob, int nBlob +){ + int rc; /* Return code */ + Fts5Structure *p = 0; /* Decoded structure object */ + + rc = fts5StructureDecode(pBlob, nBlob, 0, &p); + if( rc!=SQLITE_OK ){ + *pRc = rc; + return; + } + + fts5DebugStructure(pRc, pBuf, p); + fts5StructureRelease(p); +} + +/* +** This is part of the fts5_decode() debugging aid. +** +** Arguments pBlob/nBlob contain an "averages" record. This function +** appends a human-readable representation of record to the buffer passed +** as the second argument. +*/ +static void fts5DecodeAverages( + int *pRc, /* IN/OUT: error code */ + Fts5Buffer *pBuf, + const u8 *pBlob, int nBlob +){ + int i = 0; + const char *zSpace = ""; + + while( i<nBlob ){ + u64 iVal; + i += sqlite3Fts5GetVarint(&pBlob[i], &iVal); + sqlite3Fts5BufferAppendPrintf(pRc, pBuf, "%s%d", zSpace, (int)iVal); + zSpace = " "; + } +} + +/* +** Buffer (a/n) is assumed to contain a list of serialized varints. Read +** each varint and append its string representation to buffer pBuf. Return +** after either the input buffer is exhausted or a 0 value is read. +** +** The return value is the number of bytes read from the input buffer. +*/ +static int fts5DecodePoslist(int *pRc, Fts5Buffer *pBuf, const u8 *a, int n){ + int iOff = 0; + while( iOff<n ){ + int iVal; + iOff += fts5GetVarint32(&a[iOff], iVal); + sqlite3Fts5BufferAppendPrintf(pRc, pBuf, " %d", iVal); + } + return iOff; +} + +/* +** The start of buffer (a/n) contains the start of a doclist. The doclist +** may or may not finish within the buffer. This function appends a text +** representation of the part of the doclist that is present to buffer +** pBuf. +** +** The return value is the number of bytes read from the input buffer. +*/ +static int fts5DecodeDoclist(int *pRc, Fts5Buffer *pBuf, const u8 *a, int n){ + i64 iDocid = 0; + int iOff = 0; + + if( n>0 ){ + iOff = sqlite3Fts5GetVarint(a, (u64*)&iDocid); + sqlite3Fts5BufferAppendPrintf(pRc, pBuf, " id=%lld", iDocid); + } + while( iOff<n ){ + int nPos; + int bDel; + iOff += fts5GetPoslistSize(&a[iOff], &nPos, &bDel); + sqlite3Fts5BufferAppendPrintf(pRc, pBuf, " nPos=%d%s", nPos, bDel?"*":""); + iOff += fts5DecodePoslist(pRc, pBuf, &a[iOff], MIN(n-iOff, nPos)); + if( iOff<n ){ + i64 iDelta; + iOff += sqlite3Fts5GetVarint(&a[iOff], (u64*)&iDelta); + iDocid += iDelta; + sqlite3Fts5BufferAppendPrintf(pRc, pBuf, " id=%lld", iDocid); + } + } + + return iOff; +} + +/* +** This function is part of the fts5_decode() debugging function. It is +** only ever used with detail=none tables. +** +** Buffer (pData/nData) contains a doclist in the format used by detail=none +** tables. This function appends a human-readable version of that list to +** buffer pBuf. +** +** If *pRc is other than SQLITE_OK when this function is called, it is a +** no-op. If an OOM or other error occurs within this function, *pRc is +** set to an SQLite error code before returning. The final state of buffer +** pBuf is undefined in this case. 
+*/ +static void fts5DecodeRowidList( + int *pRc, /* IN/OUT: Error code */ + Fts5Buffer *pBuf, /* Buffer to append text to */ + const u8 *pData, int nData /* Data to decode list-of-rowids from */ +){ + int i = 0; + i64 iRowid = 0; + + while( i<nData ){ + const char *zApp = ""; + u64 iVal; + i += sqlite3Fts5GetVarint(&pData[i], &iVal); + iRowid += iVal; + + if( i<nData && pData[i]==0x00 ){ + i++; + if( i<nData && pData[i]==0x00 ){ + i++; + zApp = "+"; + }else{ + zApp = "*"; + } + } + + sqlite3Fts5BufferAppendPrintf(pRc, pBuf, " %lld%s", iRowid, zApp); + } +} + +/* +** The implementation of user-defined scalar function fts5_decode(). +*/ +static void fts5DecodeFunction( + sqlite3_context *pCtx, /* Function call context */ + int nArg, /* Number of args (always 2) */ + sqlite3_value **apVal /* Function arguments */ +){ + i64 iRowid; /* Rowid for record being decoded */ + int iSegid,iHeight,iPgno,bDlidx;/* Rowid components */ + const u8 *aBlob; int n; /* Record to decode */ + u8 *a = 0; + Fts5Buffer s; /* Build up text to return here */ + int rc = SQLITE_OK; /* Return code */ + int nSpace = 0; + int eDetailNone = (sqlite3_user_data(pCtx)!=0); + + assert( nArg==2 ); + UNUSED_PARAM(nArg); + memset(&s, 0, sizeof(Fts5Buffer)); + iRowid = sqlite3_value_int64(apVal[0]); + + /* Make a copy of the second argument (a blob) in aBlob[]. The aBlob[] + ** copy is followed by FTS5_DATA_ZERO_PADDING 0x00 bytes, which prevents + ** buffer overreads even if the record is corrupt. */ + n = sqlite3_value_bytes(apVal[1]); + aBlob = sqlite3_value_blob(apVal[1]); + nSpace = n + FTS5_DATA_ZERO_PADDING; + a = (u8*)sqlite3Fts5MallocZero(&rc, nSpace); + if( a==0 ) goto decode_out; + memcpy(a, aBlob, n); + + + fts5DecodeRowid(iRowid, &iSegid, &bDlidx, &iHeight, &iPgno); + + fts5DebugRowid(&rc, &s, iRowid); + if( bDlidx ){ + Fts5Data dlidx; + Fts5DlidxLvl lvl; + + dlidx.p = a; + dlidx.nn = n; + + memset(&lvl, 0, sizeof(Fts5DlidxLvl)); + lvl.pData = &dlidx; + lvl.iLeafPgno = iPgno; + + for(fts5DlidxLvlNext(&lvl); lvl.bEof==0; fts5DlidxLvlNext(&lvl)){ + sqlite3Fts5BufferAppendPrintf(&rc, &s, + " %d(%lld)", lvl.iLeafPgno, lvl.iRowid + ); + } + }else if( iSegid==0 ){ + if( iRowid==FTS5_AVERAGES_ROWID ){ + fts5DecodeAverages(&rc, &s, a, n); + }else{ + fts5DecodeStructure(&rc, &s, a, n); + } + }else if( eDetailNone ){ + Fts5Buffer term; /* Current term read from page */ + int szLeaf; + int iPgidxOff = szLeaf = fts5GetU16(&a[2]); + int iTermOff; + int nKeep = 0; + int iOff; + + memset(&term, 0, sizeof(Fts5Buffer)); + + /* Decode any entries that occur before the first term. 
*/ + if( szLeaf<n ){ + iPgidxOff += fts5GetVarint32(&a[iPgidxOff], iTermOff); + }else{ + iTermOff = szLeaf; + } + fts5DecodeRowidList(&rc, &s, &a[4], iTermOff-4); + + iOff = iTermOff; + while( iOff<szLeaf ){ + int nAppend; + + /* Read the term data for the next term*/ + iOff += fts5GetVarint32(&a[iOff], nAppend); + term.n = nKeep; + fts5BufferAppendBlob(&rc, &term, nAppend, &a[iOff]); + sqlite3Fts5BufferAppendPrintf( + &rc, &s, " term=%.*s", term.n, (const char*)term.p + ); + iOff += nAppend; + + /* Figure out where the doclist for this term ends */ + if( iPgidxOff<n ){ + int nIncr; + iPgidxOff += fts5GetVarint32(&a[iPgidxOff], nIncr); + iTermOff += nIncr; + }else{ + iTermOff = szLeaf; + } + + fts5DecodeRowidList(&rc, &s, &a[iOff], iTermOff-iOff); + iOff = iTermOff; + if( iOff<szLeaf ){ + iOff += fts5GetVarint32(&a[iOff], nKeep); + } + } + + fts5BufferFree(&term); + }else{ + Fts5Buffer term; /* Current term read from page */ + int szLeaf; /* Offset of pgidx in a[] */ + int iPgidxOff; + int iPgidxPrev = 0; /* Previous value read from pgidx */ + int iTermOff = 0; + int iRowidOff = 0; + int iOff; + int nDoclist; + + memset(&term, 0, sizeof(Fts5Buffer)); + + if( n<4 ){ + sqlite3Fts5BufferSet(&rc, &s, 7, (const u8*)"corrupt"); + goto decode_out; + }else{ + iRowidOff = fts5GetU16(&a[0]); + iPgidxOff = szLeaf = fts5GetU16(&a[2]); + if( iPgidxOff<n ){ + fts5GetVarint32(&a[iPgidxOff], iTermOff); + } + } + + /* Decode the position list tail at the start of the page */ + if( iRowidOff!=0 ){ + iOff = iRowidOff; + }else if( iTermOff!=0 ){ + iOff = iTermOff; + }else{ + iOff = szLeaf; + } + fts5DecodePoslist(&rc, &s, &a[4], iOff-4); + + /* Decode any more doclist data that appears on the page before the + ** first term. */ + nDoclist = (iTermOff ? iTermOff : szLeaf) - iOff; + fts5DecodeDoclist(&rc, &s, &a[iOff], nDoclist); + + while( iPgidxOff<n ){ + int bFirst = (iPgidxOff==szLeaf); /* True for first term on page */ + int nByte; /* Bytes of data */ + int iEnd; + + iPgidxOff += fts5GetVarint32(&a[iPgidxOff], nByte); + iPgidxPrev += nByte; + iOff = iPgidxPrev; + + if( iPgidxOff<n ){ + fts5GetVarint32(&a[iPgidxOff], nByte); + iEnd = iPgidxPrev + nByte; + }else{ + iEnd = szLeaf; + } + + if( bFirst==0 ){ + iOff += fts5GetVarint32(&a[iOff], nByte); + term.n = nByte; + } + iOff += fts5GetVarint32(&a[iOff], nByte); + fts5BufferAppendBlob(&rc, &term, nByte, &a[iOff]); + iOff += nByte; + + sqlite3Fts5BufferAppendPrintf( + &rc, &s, " term=%.*s", term.n, (const char*)term.p + ); + iOff += fts5DecodeDoclist(&rc, &s, &a[iOff], iEnd-iOff); + } + + fts5BufferFree(&term); + } + + decode_out: + sqlite3_free(a); + if( rc==SQLITE_OK ){ + sqlite3_result_text(pCtx, (const char*)s.p, s.n, SQLITE_TRANSIENT); + }else{ + sqlite3_result_error_code(pCtx, rc); + } + fts5BufferFree(&s); +} + +/* +** The implementation of user-defined scalar function fts5_rowid(). 
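+**
+** For example, the following returns the rowid used within the %_data
+** table to store page 1 of segment 4:
+**
+**   SELECT fts5_rowid('segment', 4, 1);
+**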
+*/ +static void fts5RowidFunction( + sqlite3_context *pCtx, /* Function call context */ + int nArg, /* Number of args (always 2) */ + sqlite3_value **apVal /* Function arguments */ +){ + const char *zArg; + if( nArg==0 ){ + sqlite3_result_error(pCtx, "should be: fts5_rowid(subject, ....)", -1); + }else{ + zArg = (const char*)sqlite3_value_text(apVal[0]); + if( 0==sqlite3_stricmp(zArg, "segment") ){ + i64 iRowid; + int segid, pgno; + if( nArg!=3 ){ + sqlite3_result_error(pCtx, + "should be: fts5_rowid('segment', segid, pgno))", -1 + ); + }else{ + segid = sqlite3_value_int(apVal[1]); + pgno = sqlite3_value_int(apVal[2]); + iRowid = FTS5_SEGMENT_ROWID(segid, pgno); + sqlite3_result_int64(pCtx, iRowid); + } + }else{ + sqlite3_result_error(pCtx, + "first arg to fts5_rowid() must be 'segment'" , -1 + ); + } + } +} + +/* +** This is called as part of registering the FTS5 module with database +** connection db. It registers several user-defined scalar functions useful +** with FTS5. +** +** If successful, SQLITE_OK is returned. If an error occurs, some other +** SQLite error code is returned instead. +*/ +static int sqlite3Fts5IndexInit(sqlite3 *db){ + int rc = sqlite3_create_function( + db, "fts5_decode", 2, SQLITE_UTF8, 0, fts5DecodeFunction, 0, 0 + ); + + if( rc==SQLITE_OK ){ + rc = sqlite3_create_function( + db, "fts5_decode_none", 2, + SQLITE_UTF8, (void*)db, fts5DecodeFunction, 0, 0 + ); + } + + if( rc==SQLITE_OK ){ + rc = sqlite3_create_function( + db, "fts5_rowid", -1, SQLITE_UTF8, 0, fts5RowidFunction, 0, 0 + ); + } + return rc; +} + +/* +** 2014 Jun 09 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This is an SQLite module implementing full-text search. +*/ + + +/* #include "fts5Int.h" */ + +/* +** This variable is set to false when running tests for which the on disk +** structures should not be corrupt. Otherwise, true. If it is false, extra +** assert() conditions in the fts5 code are activated - conditions that are +** only true if it is guaranteed that the fts5 database is not corrupt. +*/ +SQLITE_API int sqlite3_fts5_may_be_corrupt = 1; + + +typedef struct Fts5Auxdata Fts5Auxdata; +typedef struct Fts5Auxiliary Fts5Auxiliary; +typedef struct Fts5Cursor Fts5Cursor; +typedef struct Fts5Sorter Fts5Sorter; +typedef struct Fts5Table Fts5Table; +typedef struct Fts5TokenizerModule Fts5TokenizerModule; + +/* +** NOTES ON TRANSACTIONS: +** +** SQLite invokes the following virtual table methods as transactions are +** opened and closed by the user: +** +** xBegin(): Start of a new transaction. +** xSync(): Initial part of two-phase commit. +** xCommit(): Final part of two-phase commit. +** xRollback(): Rollback the transaction. +** +** Anything that is required as part of a commit that may fail is performed +** in the xSync() callback. Current versions of SQLite ignore any errors +** returned by xCommit(). +** +** And as sub-transactions are opened/closed: +** +** xSavepoint(int S): Open savepoint S. +** xRelease(int S): Commit and close savepoint S. +** xRollbackTo(int S): Rollback to start of savepoint S. +** +** During a write-transaction the fts5_index.c module may cache some data +** in-memory. It is flushed to disk whenever xSync(), xRelease() or +** xSavepoint() is called. 
And discarded whenever xRollback() or xRollbackTo() +** is called. +** +** Additionally, if SQLITE_DEBUG is defined, an instance of the following +** structure is used to record the current transaction state. This information +** is not required, but it is used in the assert() statements executed by +** function fts5CheckTransactionState() (see below). +*/ +struct Fts5TransactionState { + int eState; /* 0==closed, 1==open, 2==synced */ + int iSavepoint; /* Number of open savepoints (0 -> none) */ +}; + +/* +** A single object of this type is allocated when the FTS5 module is +** registered with a database handle. It is used to store pointers to +** all registered FTS5 extensions - tokenizers and auxiliary functions. +*/ +struct Fts5Global { + fts5_api api; /* User visible part of object (see fts5.h) */ + sqlite3 *db; /* Associated database connection */ + i64 iNextId; /* Used to allocate unique cursor ids */ + Fts5Auxiliary *pAux; /* First in list of all aux. functions */ + Fts5TokenizerModule *pTok; /* First in list of all tokenizer modules */ + Fts5TokenizerModule *pDfltTok; /* Default tokenizer module */ + Fts5Cursor *pCsr; /* First in list of all open cursors */ +}; + +/* +** Each auxiliary function registered with the FTS5 module is represented +** by an object of the following type. All such objects are stored as part +** of the Fts5Global.pAux list. +*/ +struct Fts5Auxiliary { + Fts5Global *pGlobal; /* Global context for this function */ + char *zFunc; /* Function name (nul-terminated) */ + void *pUserData; /* User-data pointer */ + fts5_extension_function xFunc; /* Callback function */ + void (*xDestroy)(void*); /* Destructor function */ + Fts5Auxiliary *pNext; /* Next registered auxiliary function */ +}; + +/* +** Each tokenizer module registered with the FTS5 module is represented +** by an object of the following type. All such objects are stored as part +** of the Fts5Global.pTok list. +*/ +struct Fts5TokenizerModule { + char *zName; /* Name of tokenizer */ + void *pUserData; /* User pointer passed to xCreate() */ + fts5_tokenizer x; /* Tokenizer functions */ + void (*xDestroy)(void*); /* Destructor function */ + Fts5TokenizerModule *pNext; /* Next registered tokenizer module */ +}; + +/* +** Virtual-table object. +*/ +struct Fts5Table { + sqlite3_vtab base; /* Base class used by SQLite core */ + Fts5Config *pConfig; /* Virtual table configuration */ + Fts5Index *pIndex; /* Full-text index */ + Fts5Storage *pStorage; /* Document store */ + Fts5Global *pGlobal; /* Global (connection wide) data */ + Fts5Cursor *pSortCsr; /* Sort data from this cursor */ +#ifdef SQLITE_DEBUG + struct Fts5TransactionState ts; +#endif +}; + +struct Fts5MatchPhrase { + Fts5Buffer *pPoslist; /* Pointer to current poslist */ + int nTerm; /* Size of phrase in terms */ +}; + +/* +** pStmt: +** SELECT rowid, <fts> FROM <fts> ORDER BY +rank; +** +** aIdx[]: +** There is one entry in the aIdx[] array for each phrase in the query, +** the value of which is the offset within aPoslist[] following the last +** byte of the position list for the corresponding phrase. +*/ +struct Fts5Sorter { + sqlite3_stmt *pStmt; + i64 iRowid; /* Current rowid */ + const u8 *aPoslist; /* Position lists for current row */ + int nIdx; /* Number of entries in aIdx[] */ + int aIdx[1]; /* Offsets into aPoslist for current row */ +}; + + +/* +** Virtual-table cursor object. +** +** iSpecial: +** If this is a 'special' query (refer to function fts5SpecialMatch()), +** then this variable contains the result of the query. 
+** +** iFirstRowid, iLastRowid: +** These variables are only used for FTS5_PLAN_MATCH cursors. Assuming the +** cursor iterates in ascending order of rowids, iFirstRowid is the lower +** limit of rowids to return, and iLastRowid the upper. In other words, the +** WHERE clause in the user's query might have been: +** +** <tbl> MATCH <expr> AND rowid BETWEEN $iFirstRowid AND $iLastRowid +** +** If the cursor iterates in descending order of rowid, iFirstRowid +** is the upper limit (i.e. the "first" rowid visited) and iLastRowid +** the lower. +*/ +struct Fts5Cursor { + sqlite3_vtab_cursor base; /* Base class used by SQLite core */ + Fts5Cursor *pNext; /* Next cursor in Fts5Cursor.pCsr list */ + int *aColumnSize; /* Values for xColumnSize() */ + i64 iCsrId; /* Cursor id */ + + /* Zero from this point onwards on cursor reset */ + int ePlan; /* FTS5_PLAN_XXX value */ + int bDesc; /* True for "ORDER BY rowid DESC" queries */ + i64 iFirstRowid; /* Return no rowids earlier than this */ + i64 iLastRowid; /* Return no rowids later than this */ + sqlite3_stmt *pStmt; /* Statement used to read %_content */ + Fts5Expr *pExpr; /* Expression for MATCH queries */ + Fts5Sorter *pSorter; /* Sorter for "ORDER BY rank" queries */ + int csrflags; /* Mask of cursor flags (see below) */ + i64 iSpecial; /* Result of special query */ + + /* "rank" function. Populated on demand from vtab.xColumn(). */ + char *zRank; /* Custom rank function */ + char *zRankArgs; /* Custom rank function args */ + Fts5Auxiliary *pRank; /* Rank callback (or NULL) */ + int nRankArg; /* Number of trailing arguments for rank() */ + sqlite3_value **apRankArg; /* Array of trailing arguments */ + sqlite3_stmt *pRankArgStmt; /* Origin of objects in apRankArg[] */ + + /* Auxiliary data storage */ + Fts5Auxiliary *pAux; /* Currently executing extension function */ + Fts5Auxdata *pAuxdata; /* First in linked list of saved aux-data */ + + /* Cache used by auxiliary functions xInst() and xInstCount() */ + Fts5PoslistReader *aInstIter; /* One for each phrase */ + int nInstAlloc; /* Size of aInst[] array (entries / 3) */ + int nInstCount; /* Number of phrase instances */ + int *aInst; /* 3 integers per phrase instance */ +}; + +/* +** Bits that make up the "idxNum" parameter passed indirectly by +** xBestIndex() to xFilter(). +*/ +#define FTS5_BI_MATCH 0x0001 /* <tbl> MATCH ? */ +#define FTS5_BI_RANK 0x0002 /* rank MATCH ? */ +#define FTS5_BI_ROWID_EQ 0x0004 /* rowid == ? */ +#define FTS5_BI_ROWID_LE 0x0008 /* rowid <= ? */ +#define FTS5_BI_ROWID_GE 0x0010 /* rowid >= ? */ + +#define FTS5_BI_ORDER_RANK 0x0020 +#define FTS5_BI_ORDER_ROWID 0x0040 +#define FTS5_BI_ORDER_DESC 0x0080 + +/* +** Values for Fts5Cursor.csrflags +*/ +#define FTS5CSR_EOF 0x01 +#define FTS5CSR_REQUIRE_CONTENT 0x02 +#define FTS5CSR_REQUIRE_DOCSIZE 0x04 +#define FTS5CSR_REQUIRE_INST 0x08 +#define FTS5CSR_FREE_ZRANK 0x10 +#define FTS5CSR_REQUIRE_RESEEK 0x20 +#define FTS5CSR_REQUIRE_POSLIST 0x40 + +#define BitFlagAllTest(x,y) (((x) & (y))==(y)) +#define BitFlagTest(x,y) (((x) & (y))!=0) + + +/* +** Macros to Set(), Clear() and Test() cursor flags. 
+*/ +#define CsrFlagSet(pCsr, flag) ((pCsr)->csrflags |= (flag)) +#define CsrFlagClear(pCsr, flag) ((pCsr)->csrflags &= ~(flag)) +#define CsrFlagTest(pCsr, flag) ((pCsr)->csrflags & (flag)) + +struct Fts5Auxdata { + Fts5Auxiliary *pAux; /* Extension to which this belongs */ + void *pPtr; /* Pointer value */ + void(*xDelete)(void*); /* Destructor */ + Fts5Auxdata *pNext; /* Next object in linked list */ +}; + +#ifdef SQLITE_DEBUG +#define FTS5_BEGIN 1 +#define FTS5_SYNC 2 +#define FTS5_COMMIT 3 +#define FTS5_ROLLBACK 4 +#define FTS5_SAVEPOINT 5 +#define FTS5_RELEASE 6 +#define FTS5_ROLLBACKTO 7 +static void fts5CheckTransactionState(Fts5Table *p, int op, int iSavepoint){ + switch( op ){ + case FTS5_BEGIN: + assert( p->ts.eState==0 ); + p->ts.eState = 1; + p->ts.iSavepoint = -1; + break; + + case FTS5_SYNC: + assert( p->ts.eState==1 ); + p->ts.eState = 2; + break; + + case FTS5_COMMIT: + assert( p->ts.eState==2 ); + p->ts.eState = 0; + break; + + case FTS5_ROLLBACK: + assert( p->ts.eState==1 || p->ts.eState==2 || p->ts.eState==0 ); + p->ts.eState = 0; + break; + + case FTS5_SAVEPOINT: + assert( p->ts.eState==1 ); + assert( iSavepoint>=0 ); + assert( iSavepoint>p->ts.iSavepoint ); + p->ts.iSavepoint = iSavepoint; + break; + + case FTS5_RELEASE: + assert( p->ts.eState==1 ); + assert( iSavepoint>=0 ); + assert( iSavepoint<=p->ts.iSavepoint ); + p->ts.iSavepoint = iSavepoint-1; + break; + + case FTS5_ROLLBACKTO: + assert( p->ts.eState==1 ); + assert( iSavepoint>=0 ); + assert( iSavepoint<=p->ts.iSavepoint ); + p->ts.iSavepoint = iSavepoint; + break; + } +} +#else +# define fts5CheckTransactionState(x,y,z) +#endif + +/* +** Return true if pTab is a contentless table. +*/ +static int fts5IsContentless(Fts5Table *pTab){ + return pTab->pConfig->eContent==FTS5_CONTENT_NONE; +} + +/* +** Delete a virtual table handle allocated by fts5InitVtab(). +*/ +static void fts5FreeVtab(Fts5Table *pTab){ + if( pTab ){ + sqlite3Fts5IndexClose(pTab->pIndex); + sqlite3Fts5StorageClose(pTab->pStorage); + sqlite3Fts5ConfigFree(pTab->pConfig); + sqlite3_free(pTab); + } +} + +/* +** The xDisconnect() virtual table method. +*/ +static int fts5DisconnectMethod(sqlite3_vtab *pVtab){ + fts5FreeVtab((Fts5Table*)pVtab); + return SQLITE_OK; +} + +/* +** The xDestroy() virtual table method. +*/ +static int fts5DestroyMethod(sqlite3_vtab *pVtab){ + Fts5Table *pTab = (Fts5Table*)pVtab; + int rc = sqlite3Fts5DropAll(pTab->pConfig); + if( rc==SQLITE_OK ){ + fts5FreeVtab((Fts5Table*)pVtab); + } + return rc; +} + +/* +** This function is the implementation of both the xConnect and xCreate +** methods of the FTS3 virtual table. +** +** The argv[] array contains the following: +** +** argv[0] -> module name ("fts5") +** argv[1] -> database name +** argv[2] -> table name +** argv[...] -> "column name" and other module argument fields. 
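+**
+** For example, "CREATE VIRTUAL TABLE ft USING fts5(a, b)" issued against
+** the "main" database results in something like:
+**
+**   argv[0]="fts5"  argv[1]="main"  argv[2]="ft"  argv[3]="a"  argv[4]="b"
+**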
+*/ +static int fts5InitVtab( + int bCreate, /* True for xCreate, false for xConnect */ + sqlite3 *db, /* The SQLite database connection */ + void *pAux, /* Hash table containing tokenizers */ + int argc, /* Number of elements in argv array */ + const char * const *argv, /* xCreate/xConnect argument array */ + sqlite3_vtab **ppVTab, /* Write the resulting vtab structure here */ + char **pzErr /* Write any error message here */ +){ + Fts5Global *pGlobal = (Fts5Global*)pAux; + const char **azConfig = (const char**)argv; + int rc = SQLITE_OK; /* Return code */ + Fts5Config *pConfig = 0; /* Results of parsing argc/argv */ + Fts5Table *pTab = 0; /* New virtual table object */ + + /* Allocate the new vtab object and parse the configuration */ + pTab = (Fts5Table*)sqlite3Fts5MallocZero(&rc, sizeof(Fts5Table)); + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5ConfigParse(pGlobal, db, argc, azConfig, &pConfig, pzErr); + assert( (rc==SQLITE_OK && *pzErr==0) || pConfig==0 ); + } + if( rc==SQLITE_OK ){ + pTab->pConfig = pConfig; + pTab->pGlobal = pGlobal; + } + + /* Open the index sub-system */ + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5IndexOpen(pConfig, bCreate, &pTab->pIndex, pzErr); + } + + /* Open the storage sub-system */ + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5StorageOpen( + pConfig, pTab->pIndex, bCreate, &pTab->pStorage, pzErr + ); + } + + /* Call sqlite3_declare_vtab() */ + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5ConfigDeclareVtab(pConfig); + } + + /* Load the initial configuration */ + if( rc==SQLITE_OK ){ + assert( pConfig->pzErrmsg==0 ); + pConfig->pzErrmsg = pzErr; + rc = sqlite3Fts5IndexLoadConfig(pTab->pIndex); + sqlite3Fts5IndexRollback(pTab->pIndex); + pConfig->pzErrmsg = 0; + } + + if( rc!=SQLITE_OK ){ + fts5FreeVtab(pTab); + pTab = 0; + }else if( bCreate ){ + fts5CheckTransactionState(pTab, FTS5_BEGIN, 0); + } + *ppVTab = (sqlite3_vtab*)pTab; + return rc; +} + +/* +** The xConnect() and xCreate() methods for the virtual table. All the +** work is done in function fts5InitVtab(). +*/ +static int fts5ConnectMethod( + sqlite3 *db, /* Database connection */ + void *pAux, /* Pointer to tokenizer hash table */ + int argc, /* Number of elements in argv array */ + const char * const *argv, /* xCreate/xConnect argument array */ + sqlite3_vtab **ppVtab, /* OUT: New sqlite3_vtab object */ + char **pzErr /* OUT: sqlite3_malloc'd error message */ +){ + return fts5InitVtab(0, db, pAux, argc, argv, ppVtab, pzErr); +} +static int fts5CreateMethod( + sqlite3 *db, /* Database connection */ + void *pAux, /* Pointer to tokenizer hash table */ + int argc, /* Number of elements in argv array */ + const char * const *argv, /* xCreate/xConnect argument array */ + sqlite3_vtab **ppVtab, /* OUT: New sqlite3_vtab object */ + char **pzErr /* OUT: sqlite3_malloc'd error message */ +){ + return fts5InitVtab(1, db, pAux, argc, argv, ppVtab, pzErr); +} + +/* +** The different query plans. +*/ +#define FTS5_PLAN_MATCH 1 /* (<tbl> MATCH ?) */ +#define FTS5_PLAN_SOURCE 2 /* A source cursor for SORTED_MATCH */ +#define FTS5_PLAN_SPECIAL 3 /* An internal query */ +#define FTS5_PLAN_SORTED_MATCH 4 /* (<tbl> MATCH ? ORDER BY rank) */ +#define FTS5_PLAN_SCAN 5 /* No usable constraint */ +#define FTS5_PLAN_ROWID 6 /* (rowid = ?) */ + +/* +** Set the SQLITE_INDEX_SCAN_UNIQUE flag in pIdxInfo->flags. Unless this +** extension is currently being used by a version of SQLite too old to +** support index-info flags. In that case this function is a no-op. 
+*/ +static void fts5SetUniqueFlag(sqlite3_index_info *pIdxInfo){ +#if SQLITE_VERSION_NUMBER>=3008012 +#ifndef SQLITE_CORE + if( sqlite3_libversion_number()>=3008012 ) +#endif + { + pIdxInfo->idxFlags |= SQLITE_INDEX_SCAN_UNIQUE; + } +#endif +} + +/* +** Implementation of the xBestIndex method for FTS5 tables. Within the +** WHERE constraint, it searches for the following: +** +** 1. A MATCH constraint against the special column. +** 2. A MATCH constraint against the "rank" column. +** 3. An == constraint against the rowid column. +** 4. A < or <= constraint against the rowid column. +** 5. A > or >= constraint against the rowid column. +** +** Within the ORDER BY, either: +** +** 5. ORDER BY rank [ASC|DESC] +** 6. ORDER BY rowid [ASC|DESC] +** +** Costs are assigned as follows: +** +** a) If an unusable MATCH operator is present in the WHERE clause, the +** cost is unconditionally set to 1e50 (a really big number). +** +** a) If a MATCH operator is present, the cost depends on the other +** constraints also present. As follows: +** +** * No other constraints: cost=1000.0 +** * One rowid range constraint: cost=750.0 +** * Both rowid range constraints: cost=500.0 +** * An == rowid constraint: cost=100.0 +** +** b) Otherwise, if there is no MATCH: +** +** * No other constraints: cost=1000000.0 +** * One rowid range constraint: cost=750000.0 +** * Both rowid range constraints: cost=250000.0 +** * An == rowid constraint: cost=10.0 +** +** Costs are not modified by the ORDER BY clause. +*/ +static int fts5BestIndexMethod(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo){ + Fts5Table *pTab = (Fts5Table*)pVTab; + Fts5Config *pConfig = pTab->pConfig; + int idxFlags = 0; /* Parameter passed through to xFilter() */ + int bHasMatch; + int iNext; + int i; + + struct Constraint { + int op; /* Mask against sqlite3_index_constraint.op */ + int fts5op; /* FTS5 mask for idxFlags */ + int iCol; /* 0==rowid, 1==tbl, 2==rank */ + int omit; /* True to omit this if found */ + int iConsIndex; /* Index in pInfo->aConstraint[] */ + } aConstraint[] = { + {SQLITE_INDEX_CONSTRAINT_MATCH|SQLITE_INDEX_CONSTRAINT_EQ, + FTS5_BI_MATCH, 1, 1, -1}, + {SQLITE_INDEX_CONSTRAINT_MATCH|SQLITE_INDEX_CONSTRAINT_EQ, + FTS5_BI_RANK, 2, 1, -1}, + {SQLITE_INDEX_CONSTRAINT_EQ, FTS5_BI_ROWID_EQ, 0, 0, -1}, + {SQLITE_INDEX_CONSTRAINT_LT|SQLITE_INDEX_CONSTRAINT_LE, + FTS5_BI_ROWID_LE, 0, 0, -1}, + {SQLITE_INDEX_CONSTRAINT_GT|SQLITE_INDEX_CONSTRAINT_GE, + FTS5_BI_ROWID_GE, 0, 0, -1}, + }; + + int aColMap[3]; + aColMap[0] = -1; + aColMap[1] = pConfig->nCol; + aColMap[2] = pConfig->nCol+1; + + /* Set idxFlags flags for all WHERE clause terms that will be used. */ + for(i=0; i<pInfo->nConstraint; i++){ + struct sqlite3_index_constraint *p = &pInfo->aConstraint[i]; + int j; + for(j=0; j<ArraySize(aConstraint); j++){ + struct Constraint *pC = &aConstraint[j]; + if( p->iColumn==aColMap[pC->iCol] && p->op & pC->op ){ + if( p->usable ){ + pC->iConsIndex = i; + idxFlags |= pC->fts5op; + }else if( j==0 ){ + /* As there exists an unusable MATCH constraint this is an + ** unusable plan. Set a prohibitively high cost. 
*/ + pInfo->estimatedCost = 1e50; + return SQLITE_OK; + } + } + } + } + + /* Set idxFlags flags for the ORDER BY clause */ + if( pInfo->nOrderBy==1 ){ + int iSort = pInfo->aOrderBy[0].iColumn; + if( iSort==(pConfig->nCol+1) && BitFlagTest(idxFlags, FTS5_BI_MATCH) ){ + idxFlags |= FTS5_BI_ORDER_RANK; + }else if( iSort==-1 ){ + idxFlags |= FTS5_BI_ORDER_ROWID; + } + if( BitFlagTest(idxFlags, FTS5_BI_ORDER_RANK|FTS5_BI_ORDER_ROWID) ){ + pInfo->orderByConsumed = 1; + if( pInfo->aOrderBy[0].desc ){ + idxFlags |= FTS5_BI_ORDER_DESC; + } + } + } + + /* Calculate the estimated cost based on the flags set in idxFlags. */ + bHasMatch = BitFlagTest(idxFlags, FTS5_BI_MATCH); + if( BitFlagTest(idxFlags, FTS5_BI_ROWID_EQ) ){ + pInfo->estimatedCost = bHasMatch ? 100.0 : 10.0; + if( bHasMatch==0 ) fts5SetUniqueFlag(pInfo); + }else if( BitFlagAllTest(idxFlags, FTS5_BI_ROWID_LE|FTS5_BI_ROWID_GE) ){ + pInfo->estimatedCost = bHasMatch ? 500.0 : 250000.0; + }else if( BitFlagTest(idxFlags, FTS5_BI_ROWID_LE|FTS5_BI_ROWID_GE) ){ + pInfo->estimatedCost = bHasMatch ? 750.0 : 750000.0; + }else{ + pInfo->estimatedCost = bHasMatch ? 1000.0 : 1000000.0; + } + + /* Assign argvIndex values to each constraint in use. */ + iNext = 1; + for(i=0; i<ArraySize(aConstraint); i++){ + struct Constraint *pC = &aConstraint[i]; + if( pC->iConsIndex>=0 ){ + pInfo->aConstraintUsage[pC->iConsIndex].argvIndex = iNext++; + pInfo->aConstraintUsage[pC->iConsIndex].omit = (unsigned char)pC->omit; + } + } + + pInfo->idxNum = idxFlags; + return SQLITE_OK; +} + +/* +** Implementation of xOpen method. +*/ +static int fts5OpenMethod(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCsr){ + Fts5Table *pTab = (Fts5Table*)pVTab; + Fts5Config *pConfig = pTab->pConfig; + Fts5Cursor *pCsr; /* New cursor object */ + int nByte; /* Bytes of space to allocate */ + int rc = SQLITE_OK; /* Return code */ + + nByte = sizeof(Fts5Cursor) + pConfig->nCol * sizeof(int); + pCsr = (Fts5Cursor*)sqlite3_malloc(nByte); + if( pCsr ){ + Fts5Global *pGlobal = pTab->pGlobal; + memset(pCsr, 0, nByte); + pCsr->aColumnSize = (int*)&pCsr[1]; + pCsr->pNext = pGlobal->pCsr; + pGlobal->pCsr = pCsr; + pCsr->iCsrId = ++pGlobal->iNextId; + }else{ + rc = SQLITE_NOMEM; + } + *ppCsr = (sqlite3_vtab_cursor*)pCsr; + return rc; +} + +static int fts5StmtType(Fts5Cursor *pCsr){ + if( pCsr->ePlan==FTS5_PLAN_SCAN ){ + return (pCsr->bDesc) ? FTS5_STMT_SCAN_DESC : FTS5_STMT_SCAN_ASC; + } + return FTS5_STMT_LOOKUP; +} + +/* +** This function is called after the cursor passed as the only argument +** is moved to point at a different row. It clears all cached data +** specific to the previous row stored by the cursor object. 
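fts5OpenMethod() above sizes one allocation to hold both the Fts5Cursor and its trailing aColumnSize[] array, then points aColumnSize at the memory immediately after the struct. A generic sketch of that single-allocation idiom, with invented names (it is not part of the FTS5 sources):

#include <string.h>
#include <sqlite3.h>

typedef struct Example Example;
struct Example {
  int nCol;                /* Number of entries in aColumnSize[] */
  int *aColumnSize;        /* Points into the same allocation as the struct */
};

static Example *exampleNew(int nCol){
  int nByte = (int)(sizeof(Example) + nCol*sizeof(int));
  Example *p = (Example*)sqlite3_malloc(nByte);
  if( p ){
    memset(p, 0, nByte);
    p->nCol = nCol;
    p->aColumnSize = (int*)&p[1];  /* Trailing array lives after the struct */
  }
  return p;                        /* Freed with a single sqlite3_free() */
}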
+*/ +static void fts5CsrNewrow(Fts5Cursor *pCsr){ + CsrFlagSet(pCsr, + FTS5CSR_REQUIRE_CONTENT + | FTS5CSR_REQUIRE_DOCSIZE + | FTS5CSR_REQUIRE_INST + | FTS5CSR_REQUIRE_POSLIST + ); +} + +static void fts5FreeCursorComponents(Fts5Cursor *pCsr){ + Fts5Table *pTab = (Fts5Table*)(pCsr->base.pVtab); + Fts5Auxdata *pData; + Fts5Auxdata *pNext; + + sqlite3_free(pCsr->aInstIter); + sqlite3_free(pCsr->aInst); + if( pCsr->pStmt ){ + int eStmt = fts5StmtType(pCsr); + sqlite3Fts5StorageStmtRelease(pTab->pStorage, eStmt, pCsr->pStmt); + } + if( pCsr->pSorter ){ + Fts5Sorter *pSorter = pCsr->pSorter; + sqlite3_finalize(pSorter->pStmt); + sqlite3_free(pSorter); + } + + if( pCsr->ePlan!=FTS5_PLAN_SOURCE ){ + sqlite3Fts5ExprFree(pCsr->pExpr); + } + + for(pData=pCsr->pAuxdata; pData; pData=pNext){ + pNext = pData->pNext; + if( pData->xDelete ) pData->xDelete(pData->pPtr); + sqlite3_free(pData); + } + + sqlite3_finalize(pCsr->pRankArgStmt); + sqlite3_free(pCsr->apRankArg); + + if( CsrFlagTest(pCsr, FTS5CSR_FREE_ZRANK) ){ + sqlite3_free(pCsr->zRank); + sqlite3_free(pCsr->zRankArgs); + } + + memset(&pCsr->ePlan, 0, sizeof(Fts5Cursor) - ((u8*)&pCsr->ePlan - (u8*)pCsr)); +} + + +/* +** Close the cursor. For additional information see the documentation +** on the xClose method of the virtual table interface. +*/ +static int fts5CloseMethod(sqlite3_vtab_cursor *pCursor){ + if( pCursor ){ + Fts5Table *pTab = (Fts5Table*)(pCursor->pVtab); + Fts5Cursor *pCsr = (Fts5Cursor*)pCursor; + Fts5Cursor **pp; + + fts5FreeCursorComponents(pCsr); + /* Remove the cursor from the Fts5Global.pCsr list */ + for(pp=&pTab->pGlobal->pCsr; (*pp)!=pCsr; pp=&(*pp)->pNext); + *pp = pCsr->pNext; + + sqlite3_free(pCsr); + } + return SQLITE_OK; +} + +static int fts5SorterNext(Fts5Cursor *pCsr){ + Fts5Sorter *pSorter = pCsr->pSorter; + int rc; + + rc = sqlite3_step(pSorter->pStmt); + if( rc==SQLITE_DONE ){ + rc = SQLITE_OK; + CsrFlagSet(pCsr, FTS5CSR_EOF); + }else if( rc==SQLITE_ROW ){ + const u8 *a; + const u8 *aBlob; + int nBlob; + int i; + int iOff = 0; + rc = SQLITE_OK; + + pSorter->iRowid = sqlite3_column_int64(pSorter->pStmt, 0); + nBlob = sqlite3_column_bytes(pSorter->pStmt, 1); + aBlob = a = sqlite3_column_blob(pSorter->pStmt, 1); + + /* nBlob==0 in detail=none mode. */ + if( nBlob>0 ){ + for(i=0; i<(pSorter->nIdx-1); i++){ + int iVal; + a += fts5GetVarint32(a, iVal); + iOff += iVal; + pSorter->aIdx[i] = iOff; + } + pSorter->aIdx[i] = &aBlob[nBlob] - a; + pSorter->aPoslist = a; + } + + fts5CsrNewrow(pCsr); + } + + return rc; +} + + +/* +** Set the FTS5CSR_REQUIRE_RESEEK flag on all FTS5_PLAN_MATCH cursors +** open on table pTab. +*/ +static void fts5TripCursors(Fts5Table *pTab){ + Fts5Cursor *pCsr; + for(pCsr=pTab->pGlobal->pCsr; pCsr; pCsr=pCsr->pNext){ + if( pCsr->ePlan==FTS5_PLAN_MATCH + && pCsr->base.pVtab==(sqlite3_vtab*)pTab + ){ + CsrFlagSet(pCsr, FTS5CSR_REQUIRE_RESEEK); + } + } +} + +/* +** If the REQUIRE_RESEEK flag is set on the cursor passed as the first +** argument, close and reopen all Fts5IndexIter iterators that the cursor +** is using. Then attempt to move the cursor to a rowid equal to or laster +** (in the cursors sort order - ASC or DESC) than the current rowid. +** +** If the new rowid is not equal to the old, set output parameter *pbSkip +** to 1 before returning. Otherwise, leave it unchanged. +** +** Return SQLITE_OK if successful or if no reseek was required, or an +** error code if an error occurred. 
+*/ +static int fts5CursorReseek(Fts5Cursor *pCsr, int *pbSkip){ + int rc = SQLITE_OK; + assert( *pbSkip==0 ); + if( CsrFlagTest(pCsr, FTS5CSR_REQUIRE_RESEEK) ){ + Fts5Table *pTab = (Fts5Table*)(pCsr->base.pVtab); + int bDesc = pCsr->bDesc; + i64 iRowid = sqlite3Fts5ExprRowid(pCsr->pExpr); + + rc = sqlite3Fts5ExprFirst(pCsr->pExpr, pTab->pIndex, iRowid, bDesc); + if( rc==SQLITE_OK && iRowid!=sqlite3Fts5ExprRowid(pCsr->pExpr) ){ + *pbSkip = 1; + } + + CsrFlagClear(pCsr, FTS5CSR_REQUIRE_RESEEK); + fts5CsrNewrow(pCsr); + if( sqlite3Fts5ExprEof(pCsr->pExpr) ){ + CsrFlagSet(pCsr, FTS5CSR_EOF); + *pbSkip = 1; + } + } + return rc; +} + + +/* +** Advance the cursor to the next row in the table that matches the +** search criteria. +** +** Return SQLITE_OK if nothing goes wrong. SQLITE_OK is returned +** even if we reach end-of-file. The fts5EofMethod() will be called +** subsequently to determine whether or not an EOF was hit. +*/ +static int fts5NextMethod(sqlite3_vtab_cursor *pCursor){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCursor; + int rc; + + assert( (pCsr->ePlan<3)== + (pCsr->ePlan==FTS5_PLAN_MATCH || pCsr->ePlan==FTS5_PLAN_SOURCE) + ); + assert( !CsrFlagTest(pCsr, FTS5CSR_EOF) ); + + if( pCsr->ePlan<3 ){ + int bSkip = 0; + if( (rc = fts5CursorReseek(pCsr, &bSkip)) || bSkip ) return rc; + rc = sqlite3Fts5ExprNext(pCsr->pExpr, pCsr->iLastRowid); + CsrFlagSet(pCsr, sqlite3Fts5ExprEof(pCsr->pExpr)); + fts5CsrNewrow(pCsr); + }else{ + switch( pCsr->ePlan ){ + case FTS5_PLAN_SPECIAL: { + CsrFlagSet(pCsr, FTS5CSR_EOF); + rc = SQLITE_OK; + break; + } + + case FTS5_PLAN_SORTED_MATCH: { + rc = fts5SorterNext(pCsr); + break; + } + + default: + rc = sqlite3_step(pCsr->pStmt); + if( rc!=SQLITE_ROW ){ + CsrFlagSet(pCsr, FTS5CSR_EOF); + rc = sqlite3_reset(pCsr->pStmt); + }else{ + rc = SQLITE_OK; + } + break; + } + } + + return rc; +} + + +static int fts5PrepareStatement( + sqlite3_stmt **ppStmt, + Fts5Config *pConfig, + const char *zFmt, + ... +){ + sqlite3_stmt *pRet = 0; + int rc; + char *zSql; + va_list ap; + + va_start(ap, zFmt); + zSql = sqlite3_vmprintf(zFmt, ap); + if( zSql==0 ){ + rc = SQLITE_NOMEM; + }else{ + rc = sqlite3_prepare_v2(pConfig->db, zSql, -1, &pRet, 0); + if( rc!=SQLITE_OK ){ + *pConfig->pzErrmsg = sqlite3_mprintf("%s", sqlite3_errmsg(pConfig->db)); + } + sqlite3_free(zSql); + } + + va_end(ap); + *ppStmt = pRet; + return rc; +} + +static int fts5CursorFirstSorted(Fts5Table *pTab, Fts5Cursor *pCsr, int bDesc){ + Fts5Config *pConfig = pTab->pConfig; + Fts5Sorter *pSorter; + int nPhrase; + int nByte; + int rc; + const char *zRank = pCsr->zRank; + const char *zRankArgs = pCsr->zRankArgs; + + nPhrase = sqlite3Fts5ExprPhraseCount(pCsr->pExpr); + nByte = sizeof(Fts5Sorter) + sizeof(int) * (nPhrase-1); + pSorter = (Fts5Sorter*)sqlite3_malloc(nByte); + if( pSorter==0 ) return SQLITE_NOMEM; + memset(pSorter, 0, nByte); + pSorter->nIdx = nPhrase; + + /* TODO: It would be better to have some system for reusing statement + ** handles here, rather than preparing a new one for each query. But that + ** is not possible as SQLite reference counts the virtual table objects. + ** And since the statement required here reads from this very virtual + ** table, saving it creates a circular reference. + ** + ** If SQLite a built-in statement cache, this wouldn't be a problem. */ + rc = fts5PrepareStatement(&pSorter->pStmt, pConfig, + "SELECT rowid, rank FROM %Q.%Q ORDER BY %s(%s%s%s) %s", + pConfig->zDb, pConfig->zName, zRank, pConfig->zName, + (zRankArgs ? ", " : ""), + (zRankArgs ? zRankArgs : ""), + bDesc ? 
"DESC" : "ASC" + ); + + pCsr->pSorter = pSorter; + if( rc==SQLITE_OK ){ + assert( pTab->pSortCsr==0 ); + pTab->pSortCsr = pCsr; + rc = fts5SorterNext(pCsr); + pTab->pSortCsr = 0; + } + + if( rc!=SQLITE_OK ){ + sqlite3_finalize(pSorter->pStmt); + sqlite3_free(pSorter); + pCsr->pSorter = 0; + } + + return rc; +} + +static int fts5CursorFirst(Fts5Table *pTab, Fts5Cursor *pCsr, int bDesc){ + int rc; + Fts5Expr *pExpr = pCsr->pExpr; + rc = sqlite3Fts5ExprFirst(pExpr, pTab->pIndex, pCsr->iFirstRowid, bDesc); + if( sqlite3Fts5ExprEof(pExpr) ){ + CsrFlagSet(pCsr, FTS5CSR_EOF); + } + fts5CsrNewrow(pCsr); + return rc; +} + +/* +** Process a "special" query. A special query is identified as one with a +** MATCH expression that begins with a '*' character. The remainder of +** the text passed to the MATCH operator are used as the special query +** parameters. +*/ +static int fts5SpecialMatch( + Fts5Table *pTab, + Fts5Cursor *pCsr, + const char *zQuery +){ + int rc = SQLITE_OK; /* Return code */ + const char *z = zQuery; /* Special query text */ + int n; /* Number of bytes in text at z */ + + while( z[0]==' ' ) z++; + for(n=0; z[n] && z[n]!=' '; n++); + + assert( pTab->base.zErrMsg==0 ); + pCsr->ePlan = FTS5_PLAN_SPECIAL; + + if( 0==sqlite3_strnicmp("reads", z, n) ){ + pCsr->iSpecial = sqlite3Fts5IndexReads(pTab->pIndex); + } + else if( 0==sqlite3_strnicmp("id", z, n) ){ + pCsr->iSpecial = pCsr->iCsrId; + } + else{ + /* An unrecognized directive. Return an error message. */ + pTab->base.zErrMsg = sqlite3_mprintf("unknown special query: %.*s", n, z); + rc = SQLITE_ERROR; + } + + return rc; +} + +/* +** Search for an auxiliary function named zName that can be used with table +** pTab. If one is found, return a pointer to the corresponding Fts5Auxiliary +** structure. Otherwise, if no such function exists, return NULL. +*/ +static Fts5Auxiliary *fts5FindAuxiliary(Fts5Table *pTab, const char *zName){ + Fts5Auxiliary *pAux; + + for(pAux=pTab->pGlobal->pAux; pAux; pAux=pAux->pNext){ + if( sqlite3_stricmp(zName, pAux->zFunc)==0 ) return pAux; + } + + /* No function of the specified name was found. Return 0. 
*/ + return 0; +} + + +static int fts5FindRankFunction(Fts5Cursor *pCsr){ + Fts5Table *pTab = (Fts5Table*)(pCsr->base.pVtab); + Fts5Config *pConfig = pTab->pConfig; + int rc = SQLITE_OK; + Fts5Auxiliary *pAux = 0; + const char *zRank = pCsr->zRank; + const char *zRankArgs = pCsr->zRankArgs; + + if( zRankArgs ){ + char *zSql = sqlite3Fts5Mprintf(&rc, "SELECT %s", zRankArgs); + if( zSql ){ + sqlite3_stmt *pStmt = 0; + rc = sqlite3_prepare_v2(pConfig->db, zSql, -1, &pStmt, 0); + sqlite3_free(zSql); + assert( rc==SQLITE_OK || pCsr->pRankArgStmt==0 ); + if( rc==SQLITE_OK ){ + if( SQLITE_ROW==sqlite3_step(pStmt) ){ + int nByte; + pCsr->nRankArg = sqlite3_column_count(pStmt); + nByte = sizeof(sqlite3_value*)*pCsr->nRankArg; + pCsr->apRankArg = (sqlite3_value**)sqlite3Fts5MallocZero(&rc, nByte); + if( rc==SQLITE_OK ){ + int i; + for(i=0; i<pCsr->nRankArg; i++){ + pCsr->apRankArg[i] = sqlite3_column_value(pStmt, i); + } + } + pCsr->pRankArgStmt = pStmt; + }else{ + rc = sqlite3_finalize(pStmt); + assert( rc!=SQLITE_OK ); + } + } + } + } + + if( rc==SQLITE_OK ){ + pAux = fts5FindAuxiliary(pTab, zRank); + if( pAux==0 ){ + assert( pTab->base.zErrMsg==0 ); + pTab->base.zErrMsg = sqlite3_mprintf("no such function: %s", zRank); + rc = SQLITE_ERROR; + } + } + + pCsr->pRank = pAux; + return rc; +} + + +static int fts5CursorParseRank( + Fts5Config *pConfig, + Fts5Cursor *pCsr, + sqlite3_value *pRank +){ + int rc = SQLITE_OK; + if( pRank ){ + const char *z = (const char*)sqlite3_value_text(pRank); + char *zRank = 0; + char *zRankArgs = 0; + + if( z==0 ){ + if( sqlite3_value_type(pRank)==SQLITE_NULL ) rc = SQLITE_ERROR; + }else{ + rc = sqlite3Fts5ConfigParseRank(z, &zRank, &zRankArgs); + } + if( rc==SQLITE_OK ){ + pCsr->zRank = zRank; + pCsr->zRankArgs = zRankArgs; + CsrFlagSet(pCsr, FTS5CSR_FREE_ZRANK); + }else if( rc==SQLITE_ERROR ){ + pCsr->base.pVtab->zErrMsg = sqlite3_mprintf( + "parse error in rank function: %s", z + ); + } + }else{ + if( pConfig->zRank ){ + pCsr->zRank = (char*)pConfig->zRank; + pCsr->zRankArgs = (char*)pConfig->zRankArgs; + }else{ + pCsr->zRank = (char*)FTS5_DEFAULT_RANK; + pCsr->zRankArgs = 0; + } + } + return rc; +} + +static i64 fts5GetRowidLimit(sqlite3_value *pVal, i64 iDefault){ + if( pVal ){ + int eType = sqlite3_value_numeric_type(pVal); + if( eType==SQLITE_INTEGER ){ + return sqlite3_value_int64(pVal); + } + } + return iDefault; +} + +/* +** This is the xFilter interface for the virtual table. See +** the virtual table xFilter method documentation for additional +** information. +** +** There are three possible query strategies: +** +** 1. Full-text search using a MATCH operator. +** 2. A by-rowid lookup. +** 3. A full-table scan. +*/ +static int fts5FilterMethod( + sqlite3_vtab_cursor *pCursor, /* The cursor used for this query */ + int idxNum, /* Strategy index */ + const char *zUnused, /* Unused */ + int nVal, /* Number of elements in apVal */ + sqlite3_value **apVal /* Arguments for the indexing scheme */ +){ + Fts5Table *pTab = (Fts5Table*)(pCursor->pVtab); + Fts5Config *pConfig = pTab->pConfig; + Fts5Cursor *pCsr = (Fts5Cursor*)pCursor; + int rc = SQLITE_OK; /* Error code */ + int iVal = 0; /* Counter for apVal[] */ + int bDesc; /* True if ORDER BY [rank|rowid] DESC */ + int bOrderByRank; /* True if ORDER BY rank */ + sqlite3_value *pMatch = 0; /* <tbl> MATCH ? expression (or NULL) */ + sqlite3_value *pRank = 0; /* rank MATCH ? expression (or NULL) */ + sqlite3_value *pRowidEq = 0; /* rowid = ? expression (or NULL) */ + sqlite3_value *pRowidLe = 0; /* rowid <= ? 
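fts5CursorParseRank() accepts the rank function from two places: a per-query "rank MATCH ..." constraint, or the table's persistent 'rank' setting (stored via fts5SpecialInsert() further below). A sketch with a hypothetical two-column table, so the two bm25() weights line up with two columns:

/* Per-query: select the rank function and its arguments with a MATCH
** constraint on the hidden "rank" column. */
static const char *zQueryRank =
  "SELECT rowid FROM doc WHERE doc MATCH 'sqlite' "
  "AND rank MATCH 'bm25(10.0, 5.0)' ORDER BY rank";

/* Persistent default: store the rank function in the table configuration. */
static const char *zDefaultRank =
  "INSERT INTO doc(doc, rank) VALUES('rank', 'bm25(10.0, 5.0)')";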
expression (or NULL) */ + sqlite3_value *pRowidGe = 0; /* rowid >= ? expression (or NULL) */ + char **pzErrmsg = pConfig->pzErrmsg; + + UNUSED_PARAM(zUnused); + UNUSED_PARAM(nVal); + + if( pCsr->ePlan ){ + fts5FreeCursorComponents(pCsr); + memset(&pCsr->ePlan, 0, sizeof(Fts5Cursor) - ((u8*)&pCsr->ePlan-(u8*)pCsr)); + } + + assert( pCsr->pStmt==0 ); + assert( pCsr->pExpr==0 ); + assert( pCsr->csrflags==0 ); + assert( pCsr->pRank==0 ); + assert( pCsr->zRank==0 ); + assert( pCsr->zRankArgs==0 ); + + assert( pzErrmsg==0 || pzErrmsg==&pTab->base.zErrMsg ); + pConfig->pzErrmsg = &pTab->base.zErrMsg; + + /* Decode the arguments passed through to this function. + ** + ** Note: The following set of if(...) statements must be in the same + ** order as the corresponding entries in the struct at the top of + ** fts5BestIndexMethod(). */ + if( BitFlagTest(idxNum, FTS5_BI_MATCH) ) pMatch = apVal[iVal++]; + if( BitFlagTest(idxNum, FTS5_BI_RANK) ) pRank = apVal[iVal++]; + if( BitFlagTest(idxNum, FTS5_BI_ROWID_EQ) ) pRowidEq = apVal[iVal++]; + if( BitFlagTest(idxNum, FTS5_BI_ROWID_LE) ) pRowidLe = apVal[iVal++]; + if( BitFlagTest(idxNum, FTS5_BI_ROWID_GE) ) pRowidGe = apVal[iVal++]; + assert( iVal==nVal ); + bOrderByRank = ((idxNum & FTS5_BI_ORDER_RANK) ? 1 : 0); + pCsr->bDesc = bDesc = ((idxNum & FTS5_BI_ORDER_DESC) ? 1 : 0); + + /* Set the cursor upper and lower rowid limits. Only some strategies + ** actually use them. This is ok, as the xBestIndex() method leaves the + ** sqlite3_index_constraint.omit flag clear for range constraints + ** on the rowid field. */ + if( pRowidEq ){ + pRowidLe = pRowidGe = pRowidEq; + } + if( bDesc ){ + pCsr->iFirstRowid = fts5GetRowidLimit(pRowidLe, LARGEST_INT64); + pCsr->iLastRowid = fts5GetRowidLimit(pRowidGe, SMALLEST_INT64); + }else{ + pCsr->iLastRowid = fts5GetRowidLimit(pRowidLe, LARGEST_INT64); + pCsr->iFirstRowid = fts5GetRowidLimit(pRowidGe, SMALLEST_INT64); + } + + if( pTab->pSortCsr ){ + /* If pSortCsr is non-NULL, then this call is being made as part of + ** processing for a "... MATCH <expr> ORDER BY rank" query (ePlan is + ** set to FTS5_PLAN_SORTED_MATCH). pSortCsr is the cursor that will + ** return results to the user for this query. The current cursor + ** (pCursor) is used to execute the query issued by function + ** fts5CursorFirstSorted() above. */ + assert( pRowidEq==0 && pRowidLe==0 && pRowidGe==0 && pRank==0 ); + assert( nVal==0 && pMatch==0 && bOrderByRank==0 && bDesc==0 ); + assert( pCsr->iLastRowid==LARGEST_INT64 ); + assert( pCsr->iFirstRowid==SMALLEST_INT64 ); + pCsr->ePlan = FTS5_PLAN_SOURCE; + pCsr->pExpr = pTab->pSortCsr->pExpr; + rc = fts5CursorFirst(pTab, pCsr, bDesc); + sqlite3Fts5ExprClearEof(pCsr->pExpr); + }else if( pMatch ){ + const char *zExpr = (const char*)sqlite3_value_text(apVal[0]); + if( zExpr==0 ) zExpr = ""; + + rc = fts5CursorParseRank(pConfig, pCsr, pRank); + if( rc==SQLITE_OK ){ + if( zExpr[0]=='*' ){ + /* The user has issued a query of the form "MATCH '*...'". This + ** indicates that the MATCH expression is not a full text query, + ** but a request for an internal parameter. 
*/ + rc = fts5SpecialMatch(pTab, pCsr, &zExpr[1]); + }else{ + char **pzErr = &pTab->base.zErrMsg; + rc = sqlite3Fts5ExprNew(pConfig, zExpr, &pCsr->pExpr, pzErr); + if( rc==SQLITE_OK ){ + if( bOrderByRank ){ + pCsr->ePlan = FTS5_PLAN_SORTED_MATCH; + rc = fts5CursorFirstSorted(pTab, pCsr, bDesc); + }else{ + pCsr->ePlan = FTS5_PLAN_MATCH; + rc = fts5CursorFirst(pTab, pCsr, bDesc); + } + } + } + } + }else if( pConfig->zContent==0 ){ + *pConfig->pzErrmsg = sqlite3_mprintf( + "%s: table does not support scanning", pConfig->zName + ); + rc = SQLITE_ERROR; + }else{ + /* This is either a full-table scan (ePlan==FTS5_PLAN_SCAN) or a lookup + ** by rowid (ePlan==FTS5_PLAN_ROWID). */ + pCsr->ePlan = (pRowidEq ? FTS5_PLAN_ROWID : FTS5_PLAN_SCAN); + rc = sqlite3Fts5StorageStmt( + pTab->pStorage, fts5StmtType(pCsr), &pCsr->pStmt, &pTab->base.zErrMsg + ); + if( rc==SQLITE_OK ){ + if( pCsr->ePlan==FTS5_PLAN_ROWID ){ + sqlite3_bind_value(pCsr->pStmt, 1, apVal[0]); + }else{ + sqlite3_bind_int64(pCsr->pStmt, 1, pCsr->iFirstRowid); + sqlite3_bind_int64(pCsr->pStmt, 2, pCsr->iLastRowid); + } + rc = fts5NextMethod(pCursor); + } + } + + pConfig->pzErrmsg = pzErrmsg; + return rc; +} + +/* +** This is the xEof method of the virtual table. SQLite calls this +** routine to find out if it has reached the end of a result set. +*/ +static int fts5EofMethod(sqlite3_vtab_cursor *pCursor){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCursor; + return (CsrFlagTest(pCsr, FTS5CSR_EOF) ? 1 : 0); +} + +/* +** Return the rowid that the cursor currently points to. +*/ +static i64 fts5CursorRowid(Fts5Cursor *pCsr){ + assert( pCsr->ePlan==FTS5_PLAN_MATCH + || pCsr->ePlan==FTS5_PLAN_SORTED_MATCH + || pCsr->ePlan==FTS5_PLAN_SOURCE + ); + if( pCsr->pSorter ){ + return pCsr->pSorter->iRowid; + }else{ + return sqlite3Fts5ExprRowid(pCsr->pExpr); + } +} + +/* +** This is the xRowid method. The SQLite core calls this routine to +** retrieve the rowid for the current row of the result set. fts5 +** exposes %_content.rowid as the rowid for the virtual table. The +** rowid should be written to *pRowid. +*/ +static int fts5RowidMethod(sqlite3_vtab_cursor *pCursor, sqlite_int64 *pRowid){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCursor; + int ePlan = pCsr->ePlan; + + assert( CsrFlagTest(pCsr, FTS5CSR_EOF)==0 ); + switch( ePlan ){ + case FTS5_PLAN_SPECIAL: + *pRowid = 0; + break; + + case FTS5_PLAN_SOURCE: + case FTS5_PLAN_MATCH: + case FTS5_PLAN_SORTED_MATCH: + *pRowid = fts5CursorRowid(pCsr); + break; + + default: + *pRowid = sqlite3_column_int64(pCsr->pStmt, 0); + break; + } + + return SQLITE_OK; +} + +/* +** If the cursor requires seeking (bSeekRequired flag is set), seek it. +** Return SQLITE_OK if no error occurs, or an SQLite error code otherwise. +** +** If argument bErrormsg is true and an error occurs, an error message may +** be left in sqlite3_vtab.zErrMsg. +*/ +static int fts5SeekCursor(Fts5Cursor *pCsr, int bErrormsg){ + int rc = SQLITE_OK; + + /* If the cursor does not yet have a statement handle, obtain one now. 
*/ + if( pCsr->pStmt==0 ){ + Fts5Table *pTab = (Fts5Table*)(pCsr->base.pVtab); + int eStmt = fts5StmtType(pCsr); + rc = sqlite3Fts5StorageStmt( + pTab->pStorage, eStmt, &pCsr->pStmt, (bErrormsg?&pTab->base.zErrMsg:0) + ); + assert( rc!=SQLITE_OK || pTab->base.zErrMsg==0 ); + assert( CsrFlagTest(pCsr, FTS5CSR_REQUIRE_CONTENT) ); + } + + if( rc==SQLITE_OK && CsrFlagTest(pCsr, FTS5CSR_REQUIRE_CONTENT) ){ + assert( pCsr->pExpr ); + sqlite3_reset(pCsr->pStmt); + sqlite3_bind_int64(pCsr->pStmt, 1, fts5CursorRowid(pCsr)); + rc = sqlite3_step(pCsr->pStmt); + if( rc==SQLITE_ROW ){ + rc = SQLITE_OK; + CsrFlagClear(pCsr, FTS5CSR_REQUIRE_CONTENT); + }else{ + rc = sqlite3_reset(pCsr->pStmt); + if( rc==SQLITE_OK ){ + rc = FTS5_CORRUPT; + } + } + } + return rc; +} + +static void fts5SetVtabError(Fts5Table *p, const char *zFormat, ...){ + va_list ap; /* ... printf arguments */ + va_start(ap, zFormat); + assert( p->base.zErrMsg==0 ); + p->base.zErrMsg = sqlite3_vmprintf(zFormat, ap); + va_end(ap); +} + +/* +** This function is called to handle an FTS INSERT command. In other words, +** an INSERT statement of the form: +** +** INSERT INTO fts(fts) VALUES($pCmd) +** INSERT INTO fts(fts, rank) VALUES($pCmd, $pVal) +** +** Argument pVal is the value assigned to column "fts" by the INSERT +** statement. This function returns SQLITE_OK if successful, or an SQLite +** error code if an error occurs. +** +** The commands implemented by this function are documented in the "Special +** INSERT Directives" section of the documentation. It should be updated if +** more commands are added to this function. +*/ +static int fts5SpecialInsert( + Fts5Table *pTab, /* Fts5 table object */ + const char *zCmd, /* Text inserted into table-name column */ + sqlite3_value *pVal /* Value inserted into rank column */ +){ + Fts5Config *pConfig = pTab->pConfig; + int rc = SQLITE_OK; + int bError = 0; + + if( 0==sqlite3_stricmp("delete-all", zCmd) ){ + if( pConfig->eContent==FTS5_CONTENT_NORMAL ){ + fts5SetVtabError(pTab, + "'delete-all' may only be used with a " + "contentless or external content fts5 table" + ); + rc = SQLITE_ERROR; + }else{ + rc = sqlite3Fts5StorageDeleteAll(pTab->pStorage); + } + }else if( 0==sqlite3_stricmp("rebuild", zCmd) ){ + if( pConfig->eContent==FTS5_CONTENT_NONE ){ + fts5SetVtabError(pTab, + "'rebuild' may not be used with a contentless fts5 table" + ); + rc = SQLITE_ERROR; + }else{ + rc = sqlite3Fts5StorageRebuild(pTab->pStorage); + } + }else if( 0==sqlite3_stricmp("optimize", zCmd) ){ + rc = sqlite3Fts5StorageOptimize(pTab->pStorage); + }else if( 0==sqlite3_stricmp("merge", zCmd) ){ + int nMerge = sqlite3_value_int(pVal); + rc = sqlite3Fts5StorageMerge(pTab->pStorage, nMerge); + }else if( 0==sqlite3_stricmp("integrity-check", zCmd) ){ + rc = sqlite3Fts5StorageIntegrity(pTab->pStorage); +#ifdef SQLITE_DEBUG + }else if( 0==sqlite3_stricmp("prefix-index", zCmd) ){ + pConfig->bPrefixIndex = sqlite3_value_int(pVal); +#endif + }else{ + rc = sqlite3Fts5IndexLoadConfig(pTab->pIndex); + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5ConfigSetValue(pTab->pConfig, zCmd, pVal, &bError); + } + if( rc==SQLITE_OK ){ + if( bError ){ + rc = SQLITE_ERROR; + }else{ + rc = sqlite3Fts5StorageConfigValue(pTab->pStorage, zCmd, pVal, 0); + } + } + } + return rc; +} + +static int fts5SpecialDelete( + Fts5Table *pTab, + sqlite3_value **apVal +){ + int rc = SQLITE_OK; + int eType1 = sqlite3_value_type(apVal[1]); + if( eType1==SQLITE_INTEGER ){ + sqlite3_int64 iDel = sqlite3_value_int64(apVal[1]); + rc = 
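The commands handled by fts5SpecialInsert() are issued as INSERTs that leave the rowid NULL and write the command name into the hidden column named after the table. A sketch of the common maintenance commands against a hypothetical table "doc" (note that 'delete-all' is rejected unless the table is contentless or uses external content):

#include <sqlite3.h>

static int run_fts5_maintenance(sqlite3 *db){
  return sqlite3_exec(db,
      "INSERT INTO doc(doc) VALUES('optimize');"            /* merge the whole index */
      "INSERT INTO doc(doc, rank) VALUES('merge', 8);"      /* incremental merge step */
      "INSERT INTO doc(doc) VALUES('integrity-check');"     /* verify the index */
      "INSERT INTO doc(doc, rank) VALUES('pgsz', 4096);",   /* set a config value */
      0, 0, 0
  );
}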
sqlite3Fts5StorageDelete(pTab->pStorage, iDel, &apVal[2]); + } + return rc; +} + +static void fts5StorageInsert( + int *pRc, + Fts5Table *pTab, + sqlite3_value **apVal, + i64 *piRowid +){ + int rc = *pRc; + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5StorageContentInsert(pTab->pStorage, apVal, piRowid); + } + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5StorageIndexInsert(pTab->pStorage, apVal, *piRowid); + } + *pRc = rc; +} + +/* +** This function is the implementation of the xUpdate callback used by +** FTS3 virtual tables. It is invoked by SQLite each time a row is to be +** inserted, updated or deleted. +** +** A delete specifies a single argument - the rowid of the row to remove. +** +** Update and insert operations pass: +** +** 1. The "old" rowid, or NULL. +** 2. The "new" rowid. +** 3. Values for each of the nCol matchable columns. +** 4. Values for the two hidden columns (<tablename> and "rank"). +*/ +static int fts5UpdateMethod( + sqlite3_vtab *pVtab, /* Virtual table handle */ + int nArg, /* Size of argument array */ + sqlite3_value **apVal, /* Array of arguments */ + sqlite_int64 *pRowid /* OUT: The affected (or effected) rowid */ +){ + Fts5Table *pTab = (Fts5Table*)pVtab; + Fts5Config *pConfig = pTab->pConfig; + int eType0; /* value_type() of apVal[0] */ + int rc = SQLITE_OK; /* Return code */ + + /* A transaction must be open when this is called. */ + assert( pTab->ts.eState==1 ); + + assert( pVtab->zErrMsg==0 ); + assert( nArg==1 || nArg==(2+pConfig->nCol+2) ); + assert( nArg==1 + || sqlite3_value_type(apVal[1])==SQLITE_INTEGER + || sqlite3_value_type(apVal[1])==SQLITE_NULL + ); + assert( pTab->pConfig->pzErrmsg==0 ); + pTab->pConfig->pzErrmsg = &pTab->base.zErrMsg; + + /* Put any active cursors into REQUIRE_SEEK state. */ + fts5TripCursors(pTab); + + eType0 = sqlite3_value_type(apVal[0]); + if( eType0==SQLITE_NULL + && sqlite3_value_type(apVal[2+pConfig->nCol])!=SQLITE_NULL + ){ + /* A "special" INSERT op. These are handled separately. */ + const char *z = (const char*)sqlite3_value_text(apVal[2+pConfig->nCol]); + if( pConfig->eContent!=FTS5_CONTENT_NORMAL + && 0==sqlite3_stricmp("delete", z) + ){ + rc = fts5SpecialDelete(pTab, apVal); + }else{ + rc = fts5SpecialInsert(pTab, z, apVal[2 + pConfig->nCol + 1]); + } + }else{ + /* A regular INSERT, UPDATE or DELETE statement. The trick here is that + ** any conflict on the rowid value must be detected before any + ** modifications are made to the database file. There are 4 cases: + ** + ** 1) DELETE + ** 2) UPDATE (rowid not modified) + ** 3) UPDATE (rowid modified) + ** 4) INSERT + ** + ** Cases 3 and 4 may violate the rowid constraint. + */ + int eConflict = SQLITE_ABORT; + if( pConfig->eContent==FTS5_CONTENT_NORMAL ){ + eConflict = sqlite3_vtab_on_conflict(pConfig->db); + } + + assert( eType0==SQLITE_INTEGER || eType0==SQLITE_NULL ); + assert( nArg!=1 || eType0==SQLITE_INTEGER ); + + /* Filter out attempts to run UPDATE or DELETE on contentless tables. + ** This is not suported. */ + if( eType0==SQLITE_INTEGER && fts5IsContentless(pTab) ){ + pTab->base.zErrMsg = sqlite3_mprintf( + "cannot %s contentless fts5 table: %s", + (nArg>1 ? 
"UPDATE" : "DELETE from"), pConfig->zName + ); + rc = SQLITE_ERROR; + } + + /* Case 1: DELETE */ + else if( nArg==1 ){ + i64 iDel = sqlite3_value_int64(apVal[0]); /* Rowid to delete */ + rc = sqlite3Fts5StorageDelete(pTab->pStorage, iDel, 0); + } + + /* Case 2: INSERT */ + else if( eType0!=SQLITE_INTEGER ){ + /* If this is a REPLACE, first remove the current entry (if any) */ + if( eConflict==SQLITE_REPLACE + && sqlite3_value_type(apVal[1])==SQLITE_INTEGER + ){ + i64 iNew = sqlite3_value_int64(apVal[1]); /* Rowid to delete */ + rc = sqlite3Fts5StorageDelete(pTab->pStorage, iNew, 0); + } + fts5StorageInsert(&rc, pTab, apVal, pRowid); + } + + /* Case 2: UPDATE */ + else{ + i64 iOld = sqlite3_value_int64(apVal[0]); /* Old rowid */ + i64 iNew = sqlite3_value_int64(apVal[1]); /* New rowid */ + if( iOld!=iNew ){ + if( eConflict==SQLITE_REPLACE ){ + rc = sqlite3Fts5StorageDelete(pTab->pStorage, iOld, 0); + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5StorageDelete(pTab->pStorage, iNew, 0); + } + fts5StorageInsert(&rc, pTab, apVal, pRowid); + }else{ + rc = sqlite3Fts5StorageContentInsert(pTab->pStorage, apVal, pRowid); + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5StorageDelete(pTab->pStorage, iOld, 0); + } + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5StorageIndexInsert(pTab->pStorage, apVal, *pRowid); + } + } + }else{ + rc = sqlite3Fts5StorageDelete(pTab->pStorage, iOld, 0); + fts5StorageInsert(&rc, pTab, apVal, pRowid); + } + } + } + + pTab->pConfig->pzErrmsg = 0; + return rc; +} + +/* +** Implementation of xSync() method. +*/ +static int fts5SyncMethod(sqlite3_vtab *pVtab){ + int rc; + Fts5Table *pTab = (Fts5Table*)pVtab; + fts5CheckTransactionState(pTab, FTS5_SYNC, 0); + pTab->pConfig->pzErrmsg = &pTab->base.zErrMsg; + fts5TripCursors(pTab); + rc = sqlite3Fts5StorageSync(pTab->pStorage, 1); + pTab->pConfig->pzErrmsg = 0; + return rc; +} + +/* +** Implementation of xBegin() method. +*/ +static int fts5BeginMethod(sqlite3_vtab *pVtab){ + UNUSED_PARAM(pVtab); /* Call below is a no-op for NDEBUG builds */ + fts5CheckTransactionState((Fts5Table*)pVtab, FTS5_BEGIN, 0); + return SQLITE_OK; +} + +/* +** Implementation of xCommit() method. This is a no-op. The contents of +** the pending-terms hash-table have already been flushed into the database +** by fts5SyncMethod(). +*/ +static int fts5CommitMethod(sqlite3_vtab *pVtab){ + UNUSED_PARAM(pVtab); /* Call below is a no-op for NDEBUG builds */ + fts5CheckTransactionState((Fts5Table*)pVtab, FTS5_COMMIT, 0); + return SQLITE_OK; +} + +/* +** Implementation of xRollback(). Discard the contents of the pending-terms +** hash-table. Any changes made to the database are reverted by SQLite. 
+*/ +static int fts5RollbackMethod(sqlite3_vtab *pVtab){ + int rc; + Fts5Table *pTab = (Fts5Table*)pVtab; + fts5CheckTransactionState(pTab, FTS5_ROLLBACK, 0); + rc = sqlite3Fts5StorageRollback(pTab->pStorage); + return rc; +} + +static int fts5CsrPoslist(Fts5Cursor*, int, const u8**, int*); + +static void *fts5ApiUserData(Fts5Context *pCtx){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + return pCsr->pAux->pUserData; +} + +static int fts5ApiColumnCount(Fts5Context *pCtx){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + return ((Fts5Table*)(pCsr->base.pVtab))->pConfig->nCol; +} + +static int fts5ApiColumnTotalSize( + Fts5Context *pCtx, + int iCol, + sqlite3_int64 *pnToken +){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + Fts5Table *pTab = (Fts5Table*)(pCsr->base.pVtab); + return sqlite3Fts5StorageSize(pTab->pStorage, iCol, pnToken); +} + +static int fts5ApiRowCount(Fts5Context *pCtx, i64 *pnRow){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + Fts5Table *pTab = (Fts5Table*)(pCsr->base.pVtab); + return sqlite3Fts5StorageRowCount(pTab->pStorage, pnRow); +} + +static int fts5ApiTokenize( + Fts5Context *pCtx, + const char *pText, int nText, + void *pUserData, + int (*xToken)(void*, int, const char*, int, int, int) +){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + Fts5Table *pTab = (Fts5Table*)(pCsr->base.pVtab); + return sqlite3Fts5Tokenize( + pTab->pConfig, FTS5_TOKENIZE_AUX, pText, nText, pUserData, xToken + ); +} + +static int fts5ApiPhraseCount(Fts5Context *pCtx){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + return sqlite3Fts5ExprPhraseCount(pCsr->pExpr); +} + +static int fts5ApiPhraseSize(Fts5Context *pCtx, int iPhrase){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + return sqlite3Fts5ExprPhraseSize(pCsr->pExpr, iPhrase); +} + +static int fts5ApiColumnText( + Fts5Context *pCtx, + int iCol, + const char **pz, + int *pn +){ + int rc = SQLITE_OK; + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + if( fts5IsContentless((Fts5Table*)(pCsr->base.pVtab)) ){ + *pz = 0; + *pn = 0; + }else{ + rc = fts5SeekCursor(pCsr, 0); + if( rc==SQLITE_OK ){ + *pz = (const char*)sqlite3_column_text(pCsr->pStmt, iCol+1); + *pn = sqlite3_column_bytes(pCsr->pStmt, iCol+1); + } + } + return rc; +} + +static int fts5CsrPoslist( + Fts5Cursor *pCsr, + int iPhrase, + const u8 **pa, + int *pn +){ + Fts5Config *pConfig = ((Fts5Table*)(pCsr->base.pVtab))->pConfig; + int rc = SQLITE_OK; + int bLive = (pCsr->pSorter==0); + + if( CsrFlagTest(pCsr, FTS5CSR_REQUIRE_POSLIST) ){ + + if( pConfig->eDetail!=FTS5_DETAIL_FULL ){ + Fts5PoslistPopulator *aPopulator; + int i; + aPopulator = sqlite3Fts5ExprClearPoslists(pCsr->pExpr, bLive); + if( aPopulator==0 ) rc = SQLITE_NOMEM; + for(i=0; i<pConfig->nCol && rc==SQLITE_OK; i++){ + int n; const char *z; + rc = fts5ApiColumnText((Fts5Context*)pCsr, i, &z, &n); + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5ExprPopulatePoslists( + pConfig, pCsr->pExpr, aPopulator, i, z, n + ); + } + } + sqlite3_free(aPopulator); + + if( pCsr->pSorter ){ + sqlite3Fts5ExprCheckPoslists(pCsr->pExpr, pCsr->pSorter->iRowid); + } + } + CsrFlagClear(pCsr, FTS5CSR_REQUIRE_POSLIST); + } + + if( pCsr->pSorter && pConfig->eDetail==FTS5_DETAIL_FULL ){ + Fts5Sorter *pSorter = pCsr->pSorter; + int i1 = (iPhrase==0 ? 0 : pSorter->aIdx[iPhrase-1]); + *pn = pSorter->aIdx[iPhrase] - i1; + *pa = &pSorter->aPoslist[i1]; + }else{ + *pn = sqlite3Fts5ExprPoslist(pCsr->pExpr, iPhrase, pa); + } + + return rc; +} + +/* +** Ensure that the Fts5Cursor.nInstCount and aInst[] variables are populated +** correctly for the current view. 
Return SQLITE_OK if successful, or an +** SQLite error code otherwise. +*/ +static int fts5CacheInstArray(Fts5Cursor *pCsr){ + int rc = SQLITE_OK; + Fts5PoslistReader *aIter; /* One iterator for each phrase */ + int nIter; /* Number of iterators/phrases */ + + nIter = sqlite3Fts5ExprPhraseCount(pCsr->pExpr); + if( pCsr->aInstIter==0 ){ + int nByte = sizeof(Fts5PoslistReader) * nIter; + pCsr->aInstIter = (Fts5PoslistReader*)sqlite3Fts5MallocZero(&rc, nByte); + } + aIter = pCsr->aInstIter; + + if( aIter ){ + int nInst = 0; /* Number instances seen so far */ + int i; + + /* Initialize all iterators */ + for(i=0; i<nIter && rc==SQLITE_OK; i++){ + const u8 *a; + int n; + rc = fts5CsrPoslist(pCsr, i, &a, &n); + if( rc==SQLITE_OK ){ + sqlite3Fts5PoslistReaderInit(a, n, &aIter[i]); + } + } + + if( rc==SQLITE_OK ){ + while( 1 ){ + int *aInst; + int iBest = -1; + for(i=0; i<nIter; i++){ + if( (aIter[i].bEof==0) + && (iBest<0 || aIter[i].iPos<aIter[iBest].iPos) + ){ + iBest = i; + } + } + if( iBest<0 ) break; + + nInst++; + if( nInst>=pCsr->nInstAlloc ){ + pCsr->nInstAlloc = pCsr->nInstAlloc ? pCsr->nInstAlloc*2 : 32; + aInst = (int*)sqlite3_realloc( + pCsr->aInst, pCsr->nInstAlloc*sizeof(int)*3 + ); + if( aInst ){ + pCsr->aInst = aInst; + }else{ + rc = SQLITE_NOMEM; + break; + } + } + + aInst = &pCsr->aInst[3 * (nInst-1)]; + aInst[0] = iBest; + aInst[1] = FTS5_POS2COLUMN(aIter[iBest].iPos); + aInst[2] = FTS5_POS2OFFSET(aIter[iBest].iPos); + sqlite3Fts5PoslistReaderNext(&aIter[iBest]); + } + } + + pCsr->nInstCount = nInst; + CsrFlagClear(pCsr, FTS5CSR_REQUIRE_INST); + } + return rc; +} + +static int fts5ApiInstCount(Fts5Context *pCtx, int *pnInst){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + int rc = SQLITE_OK; + if( CsrFlagTest(pCsr, FTS5CSR_REQUIRE_INST)==0 + || SQLITE_OK==(rc = fts5CacheInstArray(pCsr)) ){ + *pnInst = pCsr->nInstCount; + } + return rc; +} + +static int fts5ApiInst( + Fts5Context *pCtx, + int iIdx, + int *piPhrase, + int *piCol, + int *piOff +){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + int rc = SQLITE_OK; + if( CsrFlagTest(pCsr, FTS5CSR_REQUIRE_INST)==0 + || SQLITE_OK==(rc = fts5CacheInstArray(pCsr)) + ){ + if( iIdx<0 || iIdx>=pCsr->nInstCount ){ + rc = SQLITE_RANGE; +#if 0 + }else if( fts5IsOffsetless((Fts5Table*)pCsr->base.pVtab) ){ + *piPhrase = pCsr->aInst[iIdx*3]; + *piCol = pCsr->aInst[iIdx*3 + 2]; + *piOff = -1; +#endif + }else{ + *piPhrase = pCsr->aInst[iIdx*3]; + *piCol = pCsr->aInst[iIdx*3 + 1]; + *piOff = pCsr->aInst[iIdx*3 + 2]; + } + } + return rc; +} + +static sqlite3_int64 fts5ApiRowid(Fts5Context *pCtx){ + return fts5CursorRowid((Fts5Cursor*)pCtx); +} + +static int fts5ColumnSizeCb( + void *pContext, /* Pointer to int */ + int tflags, + const char *pUnused, /* Buffer containing token */ + int nUnused, /* Size of token in bytes */ + int iUnused1, /* Start offset of token */ + int iUnused2 /* End offset of token */ +){ + int *pCnt = (int*)pContext; + UNUSED_PARAM2(pUnused, nUnused); + UNUSED_PARAM2(iUnused1, iUnused2); + if( (tflags & FTS5_TOKEN_COLOCATED)==0 ){ + (*pCnt)++; + } + return SQLITE_OK; +} + +static int fts5ApiColumnSize(Fts5Context *pCtx, int iCol, int *pnToken){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + Fts5Table *pTab = (Fts5Table*)(pCsr->base.pVtab); + Fts5Config *pConfig = pTab->pConfig; + int rc = SQLITE_OK; + + if( CsrFlagTest(pCsr, FTS5CSR_REQUIRE_DOCSIZE) ){ + if( pConfig->bColumnsize ){ + i64 iRowid = fts5CursorRowid(pCsr); + rc = sqlite3Fts5StorageDocsize(pTab->pStorage, iRowid, pCsr->aColumnSize); + }else if( pConfig->zContent==0 ){ + int i; + 
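The fts5Api* functions above back the Fts5ExtensionApi handed to auxiliary functions. Below is a minimal auxiliary-function sketch (the name matchcount() is invented) that uses xInstCount(), i.e. fts5ApiInstCount(), to report how many phrase hits the current row contains; the Fts5ExtensionApi and Fts5Context declarations come from the fts5.h block embedded earlier in this amalgamation:

/* matchcount(tbl): number of phrase instances in the current row. */
static void matchcountFunc(
  const Fts5ExtensionApi *pApi,   /* API offered by current FTS version */
  Fts5Context *pFts,              /* First arg to pass to pApi functions */
  sqlite3_context *pCtx,          /* Context for returning result/error */
  int nVal,                       /* Number of values in apVal[] array */
  sqlite3_value **apVal           /* Array of trailing arguments */
){
  int nInst = 0;
  int rc;
  (void)nVal; (void)apVal;        /* This function takes no trailing args */
  rc = pApi->xInstCount(pFts, &nInst);
  if( rc==SQLITE_OK ){
    sqlite3_result_int(pCtx, nInst);
  }else{
    sqlite3_result_error_code(pCtx, rc);
  }
}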
for(i=0; i<pConfig->nCol; i++){ + if( pConfig->abUnindexed[i]==0 ){ + pCsr->aColumnSize[i] = -1; + } + } + }else{ + int i; + for(i=0; rc==SQLITE_OK && i<pConfig->nCol; i++){ + if( pConfig->abUnindexed[i]==0 ){ + const char *z; int n; + void *p = (void*)(&pCsr->aColumnSize[i]); + pCsr->aColumnSize[i] = 0; + rc = fts5ApiColumnText(pCtx, i, &z, &n); + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5Tokenize( + pConfig, FTS5_TOKENIZE_AUX, z, n, p, fts5ColumnSizeCb + ); + } + } + } + } + CsrFlagClear(pCsr, FTS5CSR_REQUIRE_DOCSIZE); + } + if( iCol<0 ){ + int i; + *pnToken = 0; + for(i=0; i<pConfig->nCol; i++){ + *pnToken += pCsr->aColumnSize[i]; + } + }else if( iCol<pConfig->nCol ){ + *pnToken = pCsr->aColumnSize[iCol]; + }else{ + *pnToken = 0; + rc = SQLITE_RANGE; + } + return rc; +} + +/* +** Implementation of the xSetAuxdata() method. +*/ +static int fts5ApiSetAuxdata( + Fts5Context *pCtx, /* Fts5 context */ + void *pPtr, /* Pointer to save as auxdata */ + void(*xDelete)(void*) /* Destructor for pPtr (or NULL) */ +){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + Fts5Auxdata *pData; + + /* Search through the cursors list of Fts5Auxdata objects for one that + ** corresponds to the currently executing auxiliary function. */ + for(pData=pCsr->pAuxdata; pData; pData=pData->pNext){ + if( pData->pAux==pCsr->pAux ) break; + } + + if( pData ){ + if( pData->xDelete ){ + pData->xDelete(pData->pPtr); + } + }else{ + int rc = SQLITE_OK; + pData = (Fts5Auxdata*)sqlite3Fts5MallocZero(&rc, sizeof(Fts5Auxdata)); + if( pData==0 ){ + if( xDelete ) xDelete(pPtr); + return rc; + } + pData->pAux = pCsr->pAux; + pData->pNext = pCsr->pAuxdata; + pCsr->pAuxdata = pData; + } + + pData->xDelete = xDelete; + pData->pPtr = pPtr; + return SQLITE_OK; +} + +static void *fts5ApiGetAuxdata(Fts5Context *pCtx, int bClear){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + Fts5Auxdata *pData; + void *pRet = 0; + + for(pData=pCsr->pAuxdata; pData; pData=pData->pNext){ + if( pData->pAux==pCsr->pAux ) break; + } + + if( pData ){ + pRet = pData->pPtr; + if( bClear ){ + pData->pPtr = 0; + pData->xDelete = 0; + } + } + + return pRet; +} + +static void fts5ApiPhraseNext( + Fts5Context *pUnused, + Fts5PhraseIter *pIter, + int *piCol, int *piOff +){ + UNUSED_PARAM(pUnused); + if( pIter->a>=pIter->b ){ + *piCol = -1; + *piOff = -1; + }else{ + int iVal; + pIter->a += fts5GetVarint32(pIter->a, iVal); + if( iVal==1 ){ + pIter->a += fts5GetVarint32(pIter->a, iVal); + *piCol = iVal; + *piOff = 0; + pIter->a += fts5GetVarint32(pIter->a, iVal); + } + *piOff += (iVal-2); + } +} + +static int fts5ApiPhraseFirst( + Fts5Context *pCtx, + int iPhrase, + Fts5PhraseIter *pIter, + int *piCol, int *piOff +){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + int n; + int rc = fts5CsrPoslist(pCsr, iPhrase, &pIter->a, &n); + if( rc==SQLITE_OK ){ + pIter->b = &pIter->a[n]; + *piCol = 0; + *piOff = 0; + fts5ApiPhraseNext(pCtx, pIter, piCol, piOff); + } + return rc; +} + +static void fts5ApiPhraseNextColumn( + Fts5Context *pCtx, + Fts5PhraseIter *pIter, + int *piCol +){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + Fts5Config *pConfig = ((Fts5Table*)(pCsr->base.pVtab))->pConfig; + + if( pConfig->eDetail==FTS5_DETAIL_COLUMNS ){ + if( pIter->a>=pIter->b ){ + *piCol = -1; + }else{ + int iIncr; + pIter->a += fts5GetVarint32(&pIter->a[0], iIncr); + *piCol += (iIncr-2); + } + }else{ + while( 1 ){ + int dummy; + if( pIter->a>=pIter->b ){ + *piCol = -1; + return; + } + if( pIter->a[0]==0x01 ) break; + pIter->a += fts5GetVarint32(pIter->a, dummy); + } + pIter->a += 1 + fts5GetVarint32(&pIter->a[1], 
*piCol); + } +} + +static int fts5ApiPhraseFirstColumn( + Fts5Context *pCtx, + int iPhrase, + Fts5PhraseIter *pIter, + int *piCol +){ + int rc = SQLITE_OK; + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + Fts5Config *pConfig = ((Fts5Table*)(pCsr->base.pVtab))->pConfig; + + if( pConfig->eDetail==FTS5_DETAIL_COLUMNS ){ + Fts5Sorter *pSorter = pCsr->pSorter; + int n; + if( pSorter ){ + int i1 = (iPhrase==0 ? 0 : pSorter->aIdx[iPhrase-1]); + n = pSorter->aIdx[iPhrase] - i1; + pIter->a = &pSorter->aPoslist[i1]; + }else{ + rc = sqlite3Fts5ExprPhraseCollist(pCsr->pExpr, iPhrase, &pIter->a, &n); + } + if( rc==SQLITE_OK ){ + pIter->b = &pIter->a[n]; + *piCol = 0; + fts5ApiPhraseNextColumn(pCtx, pIter, piCol); + } + }else{ + int n; + rc = fts5CsrPoslist(pCsr, iPhrase, &pIter->a, &n); + if( rc==SQLITE_OK ){ + pIter->b = &pIter->a[n]; + if( n<=0 ){ + *piCol = -1; + }else if( pIter->a[0]==0x01 ){ + pIter->a += 1 + fts5GetVarint32(&pIter->a[1], *piCol); + }else{ + *piCol = 0; + } + } + } + + return rc; +} + + +static int fts5ApiQueryPhrase(Fts5Context*, int, void*, + int(*)(const Fts5ExtensionApi*, Fts5Context*, void*) +); + +static const Fts5ExtensionApi sFts5Api = { + 2, /* iVersion */ + fts5ApiUserData, + fts5ApiColumnCount, + fts5ApiRowCount, + fts5ApiColumnTotalSize, + fts5ApiTokenize, + fts5ApiPhraseCount, + fts5ApiPhraseSize, + fts5ApiInstCount, + fts5ApiInst, + fts5ApiRowid, + fts5ApiColumnText, + fts5ApiColumnSize, + fts5ApiQueryPhrase, + fts5ApiSetAuxdata, + fts5ApiGetAuxdata, + fts5ApiPhraseFirst, + fts5ApiPhraseNext, + fts5ApiPhraseFirstColumn, + fts5ApiPhraseNextColumn, +}; + +/* +** Implementation of API function xQueryPhrase(). +*/ +static int fts5ApiQueryPhrase( + Fts5Context *pCtx, + int iPhrase, + void *pUserData, + int(*xCallback)(const Fts5ExtensionApi*, Fts5Context*, void*) +){ + Fts5Cursor *pCsr = (Fts5Cursor*)pCtx; + Fts5Table *pTab = (Fts5Table*)(pCsr->base.pVtab); + int rc; + Fts5Cursor *pNew = 0; + + rc = fts5OpenMethod(pCsr->base.pVtab, (sqlite3_vtab_cursor**)&pNew); + if( rc==SQLITE_OK ){ + pNew->ePlan = FTS5_PLAN_MATCH; + pNew->iFirstRowid = SMALLEST_INT64; + pNew->iLastRowid = LARGEST_INT64; + pNew->base.pVtab = (sqlite3_vtab*)pTab; + rc = sqlite3Fts5ExprClonePhrase(pCsr->pExpr, iPhrase, &pNew->pExpr); + } + + if( rc==SQLITE_OK ){ + for(rc = fts5CursorFirst(pTab, pNew, 0); + rc==SQLITE_OK && CsrFlagTest(pNew, FTS5CSR_EOF)==0; + rc = fts5NextMethod((sqlite3_vtab_cursor*)pNew) + ){ + rc = xCallback(&sFts5Api, (Fts5Context*)pNew, pUserData); + if( rc!=SQLITE_OK ){ + if( rc==SQLITE_DONE ) rc = SQLITE_OK; + break; + } + } + } + + fts5CloseMethod((sqlite3_vtab_cursor*)pNew); + return rc; +} + +static void fts5ApiInvoke( + Fts5Auxiliary *pAux, + Fts5Cursor *pCsr, + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + assert( pCsr->pAux==0 ); + pCsr->pAux = pAux; + pAux->xFunc(&sFts5Api, (Fts5Context*)pCsr, context, argc, argv); + pCsr->pAux = 0; +} + +static Fts5Cursor *fts5CursorFromCsrid(Fts5Global *pGlobal, i64 iCsrId){ + Fts5Cursor *pCsr; + for(pCsr=pGlobal->pCsr; pCsr; pCsr=pCsr->pNext){ + if( pCsr->iCsrId==iCsrId ) break; + } + return pCsr; +} + +static void fts5ApiCallback( + sqlite3_context *context, + int argc, + sqlite3_value **argv +){ + + Fts5Auxiliary *pAux; + Fts5Cursor *pCsr; + i64 iCsrId; + + assert( argc>=1 ); + pAux = (Fts5Auxiliary*)sqlite3_user_data(context); + iCsrId = sqlite3_value_int64(argv[0]); + + pCsr = fts5CursorFromCsrid(pAux->pGlobal, iCsrId); + if( pCsr==0 ){ + char *zErr = sqlite3_mprintf("no such cursor: %lld", iCsrId); + 
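fts5ApiQueryPhrase() below runs a second, single-phrase query over the whole table and invokes a callback once per matching row. A sketch of how an auxiliary function might use it to compute a document frequency for phrase 0 (names invented):

/* xQueryPhrase() callback: count one matched row. */
static int countRowCb(
  const Fts5ExtensionApi *pApi,   /* Extension API (unused here) */
  Fts5Context *pFts,              /* Cursor positioned on a matched row */
  void *pUserData                 /* Pointer to the running count */
){
  (void)pApi; (void)pFts;
  (*(sqlite3_int64*)pUserData)++;
  return SQLITE_OK;
}

/* Count the rows in the table that contain phrase 0 of the current query. */
static int docFrequency(
  const Fts5ExtensionApi *pApi,
  Fts5Context *pFts,
  sqlite3_int64 *pnRow
){
  *pnRow = 0;
  return pApi->xQueryPhrase(pFts, 0, (void*)pnRow, countRowCb);
}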
sqlite3_result_error(context, zErr, -1); + sqlite3_free(zErr); + }else{ + fts5ApiInvoke(pAux, pCsr, context, argc-1, &argv[1]); + } +} + + +/* +** Given cursor id iId, return a pointer to the corresponding Fts5Index +** object. Or NULL If the cursor id does not exist. +** +** If successful, set *ppConfig to point to the associated config object +** before returning. +*/ +static Fts5Index *sqlite3Fts5IndexFromCsrid( + Fts5Global *pGlobal, /* FTS5 global context for db handle */ + i64 iCsrId, /* Id of cursor to find */ + Fts5Config **ppConfig /* OUT: Configuration object */ +){ + Fts5Cursor *pCsr; + Fts5Table *pTab; + + pCsr = fts5CursorFromCsrid(pGlobal, iCsrId); + pTab = (Fts5Table*)pCsr->base.pVtab; + *ppConfig = pTab->pConfig; + + return pTab->pIndex; +} + +/* +** Return a "position-list blob" corresponding to the current position of +** cursor pCsr via sqlite3_result_blob(). A position-list blob contains +** the current position-list for each phrase in the query associated with +** cursor pCsr. +** +** A position-list blob begins with (nPhrase-1) varints, where nPhrase is +** the number of phrases in the query. Following the varints are the +** concatenated position lists for each phrase, in order. +** +** The first varint (if it exists) contains the size of the position list +** for phrase 0. The second (same disclaimer) contains the size of position +** list 1. And so on. There is no size field for the final position list, +** as it can be derived from the total size of the blob. +*/ +static int fts5PoslistBlob(sqlite3_context *pCtx, Fts5Cursor *pCsr){ + int i; + int rc = SQLITE_OK; + int nPhrase = sqlite3Fts5ExprPhraseCount(pCsr->pExpr); + Fts5Buffer val; + + memset(&val, 0, sizeof(Fts5Buffer)); + switch( ((Fts5Table*)(pCsr->base.pVtab))->pConfig->eDetail ){ + case FTS5_DETAIL_FULL: + + /* Append the varints */ + for(i=0; i<(nPhrase-1); i++){ + const u8 *dummy; + int nByte = sqlite3Fts5ExprPoslist(pCsr->pExpr, i, &dummy); + sqlite3Fts5BufferAppendVarint(&rc, &val, nByte); + } + + /* Append the position lists */ + for(i=0; i<nPhrase; i++){ + const u8 *pPoslist; + int nPoslist; + nPoslist = sqlite3Fts5ExprPoslist(pCsr->pExpr, i, &pPoslist); + sqlite3Fts5BufferAppendBlob(&rc, &val, nPoslist, pPoslist); + } + break; + + case FTS5_DETAIL_COLUMNS: + + /* Append the varints */ + for(i=0; rc==SQLITE_OK && i<(nPhrase-1); i++){ + const u8 *dummy; + int nByte; + rc = sqlite3Fts5ExprPhraseCollist(pCsr->pExpr, i, &dummy, &nByte); + sqlite3Fts5BufferAppendVarint(&rc, &val, nByte); + } + + /* Append the position lists */ + for(i=0; rc==SQLITE_OK && i<nPhrase; i++){ + const u8 *pPoslist; + int nPoslist; + rc = sqlite3Fts5ExprPhraseCollist(pCsr->pExpr, i, &pPoslist, &nPoslist); + sqlite3Fts5BufferAppendBlob(&rc, &val, nPoslist, pPoslist); + } + break; + + default: + break; + } + + sqlite3_result_blob(pCtx, val.p, val.n, sqlite3_free); + return rc; +} + +/* +** This is the xColumn method, called by SQLite to request a value from +** the row that the supplied cursor currently points to. 
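The blob layout described above is internal; it is produced here and read back by fts5SorterNext() when sorting by rank. As a reading aid only, a sketch of splitting such a blob into per-phrase position lists using the internal fts5GetVarint32() helper (detail=full layout assumed; not intended as application code):

static void splitPoslistBlob(
  const u8 *aBlob, int nBlob,     /* Position-list blob and its size */
  int nPhrase,                    /* Number of phrases in the query */
  const u8 **apPoslist,           /* OUT: one poslist pointer per phrase */
  int *anPoslist                  /* OUT: one poslist size per phrase */
){
  const u8 *a = aBlob;
  int i;
  int iOff = 0;

  /* The blob begins with (nPhrase-1) varint sizes... */
  for(i=0; i<nPhrase-1; i++){
    int nByte;
    a += fts5GetVarint32(a, nByte);
    anPoslist[i] = nByte;
  }

  /* ...followed by the concatenated position lists. The size of the last
  ** list is whatever remains of the blob. */
  for(i=0; i<nPhrase; i++){
    if( i==nPhrase-1 ) anPoslist[i] = (int)(&aBlob[nBlob] - &a[iOff]);
    apPoslist[i] = &a[iOff];
    iOff += anPoslist[i];
  }
}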
+*/ +static int fts5ColumnMethod( + sqlite3_vtab_cursor *pCursor, /* Cursor to retrieve value from */ + sqlite3_context *pCtx, /* Context for sqlite3_result_xxx() calls */ + int iCol /* Index of column to read value from */ +){ + Fts5Table *pTab = (Fts5Table*)(pCursor->pVtab); + Fts5Config *pConfig = pTab->pConfig; + Fts5Cursor *pCsr = (Fts5Cursor*)pCursor; + int rc = SQLITE_OK; + + assert( CsrFlagTest(pCsr, FTS5CSR_EOF)==0 ); + + if( pCsr->ePlan==FTS5_PLAN_SPECIAL ){ + if( iCol==pConfig->nCol ){ + sqlite3_result_int64(pCtx, pCsr->iSpecial); + } + }else + + if( iCol==pConfig->nCol ){ + /* User is requesting the value of the special column with the same name + ** as the table. Return the cursor integer id number. This value is only + ** useful in that it may be passed as the first argument to an FTS5 + ** auxiliary function. */ + sqlite3_result_int64(pCtx, pCsr->iCsrId); + }else if( iCol==pConfig->nCol+1 ){ + + /* The value of the "rank" column. */ + if( pCsr->ePlan==FTS5_PLAN_SOURCE ){ + fts5PoslistBlob(pCtx, pCsr); + }else if( + pCsr->ePlan==FTS5_PLAN_MATCH + || pCsr->ePlan==FTS5_PLAN_SORTED_MATCH + ){ + if( pCsr->pRank || SQLITE_OK==(rc = fts5FindRankFunction(pCsr)) ){ + fts5ApiInvoke(pCsr->pRank, pCsr, pCtx, pCsr->nRankArg, pCsr->apRankArg); + } + } + }else if( !fts5IsContentless(pTab) ){ + rc = fts5SeekCursor(pCsr, 1); + if( rc==SQLITE_OK ){ + sqlite3_result_value(pCtx, sqlite3_column_value(pCsr->pStmt, iCol+1)); + } + } + return rc; +} + + +/* +** This routine implements the xFindFunction method for the FTS3 +** virtual table. +*/ +static int fts5FindFunctionMethod( + sqlite3_vtab *pVtab, /* Virtual table handle */ + int nUnused, /* Number of SQL function arguments */ + const char *zName, /* Name of SQL function */ + void (**pxFunc)(sqlite3_context*,int,sqlite3_value**), /* OUT: Result */ + void **ppArg /* OUT: User data for *pxFunc */ +){ + Fts5Table *pTab = (Fts5Table*)pVtab; + Fts5Auxiliary *pAux; + + UNUSED_PARAM(nUnused); + pAux = fts5FindAuxiliary(pTab, zName); + if( pAux ){ + *pxFunc = fts5ApiCallback; + *ppArg = (void*)pAux; + return 1; + } + + /* No function of the specified name was found. Return 0. */ + return 0; +} + +/* +** Implementation of FTS5 xRename method. Rename an fts5 table. +*/ +static int fts5RenameMethod( + sqlite3_vtab *pVtab, /* Virtual table handle */ + const char *zName /* New name of table */ +){ + Fts5Table *pTab = (Fts5Table*)pVtab; + return sqlite3Fts5StorageRename(pTab->pStorage, zName); +} + +/* +** The xSavepoint() method. +** +** Flush the contents of the pending-terms table to disk. +*/ +static int fts5SavepointMethod(sqlite3_vtab *pVtab, int iSavepoint){ + Fts5Table *pTab = (Fts5Table*)pVtab; + UNUSED_PARAM(iSavepoint); /* Call below is a no-op for NDEBUG builds */ + fts5CheckTransactionState(pTab, FTS5_SAVEPOINT, iSavepoint); + fts5TripCursors(pTab); + return sqlite3Fts5StorageSync(pTab->pStorage, 0); +} + +/* +** The xRelease() method. +** +** This is a no-op. +*/ +static int fts5ReleaseMethod(sqlite3_vtab *pVtab, int iSavepoint){ + Fts5Table *pTab = (Fts5Table*)pVtab; + UNUSED_PARAM(iSavepoint); /* Call below is a no-op for NDEBUG builds */ + fts5CheckTransactionState(pTab, FTS5_RELEASE, iSavepoint); + fts5TripCursors(pTab); + return sqlite3Fts5StorageSync(pTab->pStorage, 0); +} + +/* +** The xRollbackTo() method. +** +** Discard the contents of the pending terms table. 
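From SQL, an auxiliary function is called with the fts5 table name as its first argument: fts5FindFunctionMethod() overloads the function for the virtual table, and fts5ColumnMethod() supplies the cursor id that fts5ApiCallback() uses to locate the cursor. A sketch using the standard highlight() and bm25() functions registered by sqlite3Fts5AuxInit() (table and column layout hypothetical):

#include <stdio.h>
#include <sqlite3.h>

static void print_highlighted(sqlite3 *db){
  sqlite3_stmt *pStmt = 0;
  const char *zSql =
    "SELECT highlight(doc, 0, '<b>', '</b>'), bm25(doc) "
    "FROM doc WHERE doc MATCH 'sqlite' ORDER BY rank";
  if( sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0)==SQLITE_OK ){
    while( sqlite3_step(pStmt)==SQLITE_ROW ){
      printf("%s (score %f)\n",
          (const char*)sqlite3_column_text(pStmt, 0),
          sqlite3_column_double(pStmt, 1));
    }
  }
  sqlite3_finalize(pStmt);
}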
+*/ +static int fts5RollbackToMethod(sqlite3_vtab *pVtab, int iSavepoint){ + Fts5Table *pTab = (Fts5Table*)pVtab; + UNUSED_PARAM(iSavepoint); /* Call below is a no-op for NDEBUG builds */ + fts5CheckTransactionState(pTab, FTS5_ROLLBACKTO, iSavepoint); + fts5TripCursors(pTab); + return sqlite3Fts5StorageRollback(pTab->pStorage); +} + +/* +** Register a new auxiliary function with global context pGlobal. +*/ +static int fts5CreateAux( + fts5_api *pApi, /* Global context (one per db handle) */ + const char *zName, /* Name of new function */ + void *pUserData, /* User data for aux. function */ + fts5_extension_function xFunc, /* Aux. function implementation */ + void(*xDestroy)(void*) /* Destructor for pUserData */ +){ + Fts5Global *pGlobal = (Fts5Global*)pApi; + int rc = sqlite3_overload_function(pGlobal->db, zName, -1); + if( rc==SQLITE_OK ){ + Fts5Auxiliary *pAux; + int nName; /* Size of zName in bytes, including \0 */ + int nByte; /* Bytes of space to allocate */ + + nName = (int)strlen(zName) + 1; + nByte = sizeof(Fts5Auxiliary) + nName; + pAux = (Fts5Auxiliary*)sqlite3_malloc(nByte); + if( pAux ){ + memset(pAux, 0, nByte); + pAux->zFunc = (char*)&pAux[1]; + memcpy(pAux->zFunc, zName, nName); + pAux->pGlobal = pGlobal; + pAux->pUserData = pUserData; + pAux->xFunc = xFunc; + pAux->xDestroy = xDestroy; + pAux->pNext = pGlobal->pAux; + pGlobal->pAux = pAux; + }else{ + rc = SQLITE_NOMEM; + } + } + + return rc; +} + +/* +** Register a new tokenizer. This is the implementation of the +** fts5_api.xCreateTokenizer() method. +*/ +static int fts5CreateTokenizer( + fts5_api *pApi, /* Global context (one per db handle) */ + const char *zName, /* Name of new function */ + void *pUserData, /* User data for aux. function */ + fts5_tokenizer *pTokenizer, /* Tokenizer implementation */ + void(*xDestroy)(void*) /* Destructor for pUserData */ +){ + Fts5Global *pGlobal = (Fts5Global*)pApi; + Fts5TokenizerModule *pNew; + int nName; /* Size of zName and its \0 terminator */ + int nByte; /* Bytes of space to allocate */ + int rc = SQLITE_OK; + + nName = (int)strlen(zName) + 1; + nByte = sizeof(Fts5TokenizerModule) + nName; + pNew = (Fts5TokenizerModule*)sqlite3_malloc(nByte); + if( pNew ){ + memset(pNew, 0, nByte); + pNew->zName = (char*)&pNew[1]; + memcpy(pNew->zName, zName, nName); + pNew->pUserData = pUserData; + pNew->x = *pTokenizer; + pNew->xDestroy = xDestroy; + pNew->pNext = pGlobal->pTok; + pGlobal->pTok = pNew; + if( pNew->pNext==0 ){ + pGlobal->pDfltTok = pNew; + } + }else{ + rc = SQLITE_NOMEM; + } + + return rc; +} + +static Fts5TokenizerModule *fts5LocateTokenizer( + Fts5Global *pGlobal, + const char *zName +){ + Fts5TokenizerModule *pMod = 0; + + if( zName==0 ){ + pMod = pGlobal->pDfltTok; + }else{ + for(pMod=pGlobal->pTok; pMod; pMod=pMod->pNext){ + if( sqlite3_stricmp(zName, pMod->zName)==0 ) break; + } + } + + return pMod; +} + +/* +** Find a tokenizer. This is the implementation of the +** fts5_api.xFindTokenizer() method. 
+*/ +static int fts5FindTokenizer( + fts5_api *pApi, /* Global context (one per db handle) */ + const char *zName, /* Name of new function */ + void **ppUserData, + fts5_tokenizer *pTokenizer /* Populate this object */ +){ + int rc = SQLITE_OK; + Fts5TokenizerModule *pMod; + + pMod = fts5LocateTokenizer((Fts5Global*)pApi, zName); + if( pMod ){ + *pTokenizer = pMod->x; + *ppUserData = pMod->pUserData; + }else{ + memset(pTokenizer, 0, sizeof(fts5_tokenizer)); + rc = SQLITE_ERROR; + } + + return rc; +} + +static int sqlite3Fts5GetTokenizer( + Fts5Global *pGlobal, + const char **azArg, + int nArg, + Fts5Tokenizer **ppTok, + fts5_tokenizer **ppTokApi, + char **pzErr +){ + Fts5TokenizerModule *pMod; + int rc = SQLITE_OK; + + pMod = fts5LocateTokenizer(pGlobal, nArg==0 ? 0 : azArg[0]); + if( pMod==0 ){ + assert( nArg>0 ); + rc = SQLITE_ERROR; + *pzErr = sqlite3_mprintf("no such tokenizer: %s", azArg[0]); + }else{ + rc = pMod->x.xCreate(pMod->pUserData, &azArg[1], (nArg?nArg-1:0), ppTok); + *ppTokApi = &pMod->x; + if( rc!=SQLITE_OK && pzErr ){ + *pzErr = sqlite3_mprintf("error in tokenizer constructor"); + } + } + + if( rc!=SQLITE_OK ){ + *ppTokApi = 0; + *ppTok = 0; + } + + return rc; +} + +static void fts5ModuleDestroy(void *pCtx){ + Fts5TokenizerModule *pTok, *pNextTok; + Fts5Auxiliary *pAux, *pNextAux; + Fts5Global *pGlobal = (Fts5Global*)pCtx; + + for(pAux=pGlobal->pAux; pAux; pAux=pNextAux){ + pNextAux = pAux->pNext; + if( pAux->xDestroy ) pAux->xDestroy(pAux->pUserData); + sqlite3_free(pAux); + } + + for(pTok=pGlobal->pTok; pTok; pTok=pNextTok){ + pNextTok = pTok->pNext; + if( pTok->xDestroy ) pTok->xDestroy(pTok->pUserData); + sqlite3_free(pTok); + } + + sqlite3_free(pGlobal); +} + +static void fts5Fts5Func( + sqlite3_context *pCtx, /* Function call context */ + int nArg, /* Number of args */ + sqlite3_value **apUnused /* Function arguments */ +){ + Fts5Global *pGlobal = (Fts5Global*)sqlite3_user_data(pCtx); + char buf[8]; + UNUSED_PARAM2(nArg, apUnused); + assert( nArg==0 ); + assert( sizeof(buf)>=sizeof(pGlobal) ); + memcpy(buf, (void*)&pGlobal, sizeof(pGlobal)); + sqlite3_result_blob(pCtx, buf, sizeof(pGlobal), SQLITE_TRANSIENT); +} + +/* +** Implementation of fts5_source_id() function. 
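+**
+** (Editorial note, not part of the original sources: from SQL this is simply
+**
+**       SELECT fts5_source_id();
+**
+** which returns one text value identifying the FTS5 build - the hard-coded
+** "fts5: 2016-02-15 ..." string in the function body below.)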
+*/ +static void fts5SourceIdFunc( + sqlite3_context *pCtx, /* Function call context */ + int nArg, /* Number of args */ + sqlite3_value **apUnused /* Function arguments */ +){ + assert( nArg==0 ); + UNUSED_PARAM2(nArg, apUnused); + sqlite3_result_text(pCtx, "fts5: 2016-02-15 17:29:24 3d862f207e3adc00f78066799ac5a8c282430a5f", -1, SQLITE_TRANSIENT); +} + +static int fts5Init(sqlite3 *db){ + static const sqlite3_module fts5Mod = { + /* iVersion */ 2, + /* xCreate */ fts5CreateMethod, + /* xConnect */ fts5ConnectMethod, + /* xBestIndex */ fts5BestIndexMethod, + /* xDisconnect */ fts5DisconnectMethod, + /* xDestroy */ fts5DestroyMethod, + /* xOpen */ fts5OpenMethod, + /* xClose */ fts5CloseMethod, + /* xFilter */ fts5FilterMethod, + /* xNext */ fts5NextMethod, + /* xEof */ fts5EofMethod, + /* xColumn */ fts5ColumnMethod, + /* xRowid */ fts5RowidMethod, + /* xUpdate */ fts5UpdateMethod, + /* xBegin */ fts5BeginMethod, + /* xSync */ fts5SyncMethod, + /* xCommit */ fts5CommitMethod, + /* xRollback */ fts5RollbackMethod, + /* xFindFunction */ fts5FindFunctionMethod, + /* xRename */ fts5RenameMethod, + /* xSavepoint */ fts5SavepointMethod, + /* xRelease */ fts5ReleaseMethod, + /* xRollbackTo */ fts5RollbackToMethod, + }; + + int rc; + Fts5Global *pGlobal = 0; + + pGlobal = (Fts5Global*)sqlite3_malloc(sizeof(Fts5Global)); + if( pGlobal==0 ){ + rc = SQLITE_NOMEM; + }else{ + void *p = (void*)pGlobal; + memset(pGlobal, 0, sizeof(Fts5Global)); + pGlobal->db = db; + pGlobal->api.iVersion = 2; + pGlobal->api.xCreateFunction = fts5CreateAux; + pGlobal->api.xCreateTokenizer = fts5CreateTokenizer; + pGlobal->api.xFindTokenizer = fts5FindTokenizer; + rc = sqlite3_create_module_v2(db, "fts5", &fts5Mod, p, fts5ModuleDestroy); + if( rc==SQLITE_OK ) rc = sqlite3Fts5IndexInit(db); + if( rc==SQLITE_OK ) rc = sqlite3Fts5ExprInit(pGlobal, db); + if( rc==SQLITE_OK ) rc = sqlite3Fts5AuxInit(&pGlobal->api); + if( rc==SQLITE_OK ) rc = sqlite3Fts5TokenizerInit(&pGlobal->api); + if( rc==SQLITE_OK ) rc = sqlite3Fts5VocabInit(pGlobal, db); + if( rc==SQLITE_OK ){ + rc = sqlite3_create_function( + db, "fts5", 0, SQLITE_UTF8, p, fts5Fts5Func, 0, 0 + ); + } + if( rc==SQLITE_OK ){ + rc = sqlite3_create_function( + db, "fts5_source_id", 0, SQLITE_UTF8, p, fts5SourceIdFunc, 0, 0 + ); + } + } + return rc; +} + +/* +** The following functions are used to register the module with SQLite. If +** this module is being built as part of the SQLite core (SQLITE_CORE is +** defined), then sqlite3_open() will call sqlite3Fts5Init() directly. +** +** Or, if this module is being built as a loadable extension, +** sqlite3Fts5Init() is omitted and the two standard entry points +** sqlite3_fts_init() and sqlite3_fts5_init() defined instead. +*/ +#ifndef SQLITE_CORE +#ifdef _WIN32 +__declspec(dllexport) +#endif +SQLITE_API int SQLITE_STDCALL sqlite3_fts_init( + sqlite3 *db, + char **pzErrMsg, + const sqlite3_api_routines *pApi +){ + SQLITE_EXTENSION_INIT2(pApi); + (void)pzErrMsg; /* Unused parameter */ + return fts5Init(db); +} + +#ifdef _WIN32 +__declspec(dllexport) +#endif +SQLITE_API int SQLITE_STDCALL sqlite3_fts5_init( + sqlite3 *db, + char **pzErrMsg, + const sqlite3_api_routines *pApi +){ + SQLITE_EXTENSION_INIT2(pApi); + (void)pzErrMsg; /* Unused parameter */ + return fts5Init(db); +} +#else +SQLITE_PRIVATE int sqlite3Fts5Init(sqlite3 *db){ + return fts5Init(db); +} +#endif + +/* +** 2014 May 31 +** +** The author disclaims copyright to this source code. 
In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +*/ + + + +/* #include "fts5Int.h" */ + +struct Fts5Storage { + Fts5Config *pConfig; + Fts5Index *pIndex; + int bTotalsValid; /* True if nTotalRow/aTotalSize[] are valid */ + i64 nTotalRow; /* Total number of rows in FTS table */ + i64 *aTotalSize; /* Total sizes of each column */ + sqlite3_stmt *aStmt[11]; +}; + + +#if FTS5_STMT_SCAN_ASC!=0 +# error "FTS5_STMT_SCAN_ASC mismatch" +#endif +#if FTS5_STMT_SCAN_DESC!=1 +# error "FTS5_STMT_SCAN_DESC mismatch" +#endif +#if FTS5_STMT_LOOKUP!=2 +# error "FTS5_STMT_LOOKUP mismatch" +#endif + +#define FTS5_STMT_INSERT_CONTENT 3 +#define FTS5_STMT_REPLACE_CONTENT 4 +#define FTS5_STMT_DELETE_CONTENT 5 +#define FTS5_STMT_REPLACE_DOCSIZE 6 +#define FTS5_STMT_DELETE_DOCSIZE 7 +#define FTS5_STMT_LOOKUP_DOCSIZE 8 +#define FTS5_STMT_REPLACE_CONFIG 9 +#define FTS5_STMT_SCAN 10 + +/* +** Prepare the two insert statements - Fts5Storage.pInsertContent and +** Fts5Storage.pInsertDocsize - if they have not already been prepared. +** Return SQLITE_OK if successful, or an SQLite error code if an error +** occurs. +*/ +static int fts5StorageGetStmt( + Fts5Storage *p, /* Storage handle */ + int eStmt, /* FTS5_STMT_XXX constant */ + sqlite3_stmt **ppStmt, /* OUT: Prepared statement handle */ + char **pzErrMsg /* OUT: Error message (if any) */ +){ + int rc = SQLITE_OK; + + /* If there is no %_docsize table, there should be no requests for + ** statements to operate on it. */ + assert( p->pConfig->bColumnsize || ( + eStmt!=FTS5_STMT_REPLACE_DOCSIZE + && eStmt!=FTS5_STMT_DELETE_DOCSIZE + && eStmt!=FTS5_STMT_LOOKUP_DOCSIZE + )); + + assert( eStmt>=0 && eStmt<ArraySize(p->aStmt) ); + if( p->aStmt[eStmt]==0 ){ + const char *azStmt[] = { + "SELECT %s FROM %s T WHERE T.%Q >= ? AND T.%Q <= ? ORDER BY T.%Q ASC", + "SELECT %s FROM %s T WHERE T.%Q <= ? AND T.%Q >= ? 
ORDER BY T.%Q DESC", + "SELECT %s FROM %s T WHERE T.%Q=?", /* LOOKUP */ + + "INSERT INTO %Q.'%q_content' VALUES(%s)", /* INSERT_CONTENT */ + "REPLACE INTO %Q.'%q_content' VALUES(%s)", /* REPLACE_CONTENT */ + "DELETE FROM %Q.'%q_content' WHERE id=?", /* DELETE_CONTENT */ + "REPLACE INTO %Q.'%q_docsize' VALUES(?,?)", /* REPLACE_DOCSIZE */ + "DELETE FROM %Q.'%q_docsize' WHERE id=?", /* DELETE_DOCSIZE */ + + "SELECT sz FROM %Q.'%q_docsize' WHERE id=?", /* LOOKUP_DOCSIZE */ + + "REPLACE INTO %Q.'%q_config' VALUES(?,?)", /* REPLACE_CONFIG */ + "SELECT %s FROM %s AS T", /* SCAN */ + }; + Fts5Config *pC = p->pConfig; + char *zSql = 0; + + switch( eStmt ){ + case FTS5_STMT_SCAN: + zSql = sqlite3_mprintf(azStmt[eStmt], + pC->zContentExprlist, pC->zContent + ); + break; + + case FTS5_STMT_SCAN_ASC: + case FTS5_STMT_SCAN_DESC: + zSql = sqlite3_mprintf(azStmt[eStmt], pC->zContentExprlist, + pC->zContent, pC->zContentRowid, pC->zContentRowid, + pC->zContentRowid + ); + break; + + case FTS5_STMT_LOOKUP: + zSql = sqlite3_mprintf(azStmt[eStmt], + pC->zContentExprlist, pC->zContent, pC->zContentRowid + ); + break; + + case FTS5_STMT_INSERT_CONTENT: + case FTS5_STMT_REPLACE_CONTENT: { + int nCol = pC->nCol + 1; + char *zBind; + int i; + + zBind = sqlite3_malloc(1 + nCol*2); + if( zBind ){ + for(i=0; i<nCol; i++){ + zBind[i*2] = '?'; + zBind[i*2 + 1] = ','; + } + zBind[i*2-1] = '\0'; + zSql = sqlite3_mprintf(azStmt[eStmt], pC->zDb, pC->zName, zBind); + sqlite3_free(zBind); + } + break; + } + + default: + zSql = sqlite3_mprintf(azStmt[eStmt], pC->zDb, pC->zName); + break; + } + + if( zSql==0 ){ + rc = SQLITE_NOMEM; + }else{ + rc = sqlite3_prepare_v2(pC->db, zSql, -1, &p->aStmt[eStmt], 0); + sqlite3_free(zSql); + if( rc!=SQLITE_OK && pzErrMsg ){ + *pzErrMsg = sqlite3_mprintf("%s", sqlite3_errmsg(pC->db)); + } + } + } + + *ppStmt = p->aStmt[eStmt]; + return rc; +} + + +static int fts5ExecPrintf( + sqlite3 *db, + char **pzErr, + const char *zFormat, + ... +){ + int rc; + va_list ap; /* ... printf arguments */ + char *zSql; + + va_start(ap, zFormat); + zSql = sqlite3_vmprintf(zFormat, ap); + + if( zSql==0 ){ + rc = SQLITE_NOMEM; + }else{ + rc = sqlite3_exec(db, zSql, 0, 0, pzErr); + sqlite3_free(zSql); + } + + va_end(ap); + return rc; +} + +/* +** Drop all shadow tables. Return SQLITE_OK if successful or an SQLite error +** code otherwise. +*/ +static int sqlite3Fts5DropAll(Fts5Config *pConfig){ + int rc = fts5ExecPrintf(pConfig->db, 0, + "DROP TABLE IF EXISTS %Q.'%q_data';" + "DROP TABLE IF EXISTS %Q.'%q_idx';" + "DROP TABLE IF EXISTS %Q.'%q_config';", + pConfig->zDb, pConfig->zName, + pConfig->zDb, pConfig->zName, + pConfig->zDb, pConfig->zName + ); + if( rc==SQLITE_OK && pConfig->bColumnsize ){ + rc = fts5ExecPrintf(pConfig->db, 0, + "DROP TABLE IF EXISTS %Q.'%q_docsize';", + pConfig->zDb, pConfig->zName + ); + } + if( rc==SQLITE_OK && pConfig->eContent==FTS5_CONTENT_NORMAL ){ + rc = fts5ExecPrintf(pConfig->db, 0, + "DROP TABLE IF EXISTS %Q.'%q_content';", + pConfig->zDb, pConfig->zName + ); + } + return rc; +} + +static void fts5StorageRenameOne( + Fts5Config *pConfig, /* Current FTS5 configuration */ + int *pRc, /* IN/OUT: Error code */ + const char *zTail, /* Tail of table name e.g. 
"data", "config" */ + const char *zName /* New name of FTS5 table */ +){ + if( *pRc==SQLITE_OK ){ + *pRc = fts5ExecPrintf(pConfig->db, 0, + "ALTER TABLE %Q.'%q_%s' RENAME TO '%q_%s';", + pConfig->zDb, pConfig->zName, zTail, zName, zTail + ); + } +} + +static int sqlite3Fts5StorageRename(Fts5Storage *pStorage, const char *zName){ + Fts5Config *pConfig = pStorage->pConfig; + int rc = sqlite3Fts5StorageSync(pStorage, 1); + + fts5StorageRenameOne(pConfig, &rc, "data", zName); + fts5StorageRenameOne(pConfig, &rc, "idx", zName); + fts5StorageRenameOne(pConfig, &rc, "config", zName); + if( pConfig->bColumnsize ){ + fts5StorageRenameOne(pConfig, &rc, "docsize", zName); + } + if( pConfig->eContent==FTS5_CONTENT_NORMAL ){ + fts5StorageRenameOne(pConfig, &rc, "content", zName); + } + return rc; +} + +/* +** Create the shadow table named zPost, with definition zDefn. Return +** SQLITE_OK if successful, or an SQLite error code otherwise. +*/ +static int sqlite3Fts5CreateTable( + Fts5Config *pConfig, /* FTS5 configuration */ + const char *zPost, /* Shadow table to create (e.g. "content") */ + const char *zDefn, /* Columns etc. for shadow table */ + int bWithout, /* True for without rowid */ + char **pzErr /* OUT: Error message */ +){ + int rc; + char *zErr = 0; + + rc = fts5ExecPrintf(pConfig->db, &zErr, "CREATE TABLE %Q.'%q_%q'(%s)%s", + pConfig->zDb, pConfig->zName, zPost, zDefn, bWithout?" WITHOUT ROWID":"" + ); + if( zErr ){ + *pzErr = sqlite3_mprintf( + "fts5: error creating shadow table %q_%s: %s", + pConfig->zName, zPost, zErr + ); + sqlite3_free(zErr); + } + + return rc; +} + +/* +** Open a new Fts5Index handle. If the bCreate argument is true, create +** and initialize the underlying tables +** +** If successful, set *pp to point to the new object and return SQLITE_OK. +** Otherwise, set *pp to NULL and return an SQLite error code. +*/ +static int sqlite3Fts5StorageOpen( + Fts5Config *pConfig, + Fts5Index *pIndex, + int bCreate, + Fts5Storage **pp, + char **pzErr /* OUT: Error message */ +){ + int rc = SQLITE_OK; + Fts5Storage *p; /* New object */ + int nByte; /* Bytes of space to allocate */ + + nByte = sizeof(Fts5Storage) /* Fts5Storage object */ + + pConfig->nCol * sizeof(i64); /* Fts5Storage.aTotalSize[] */ + *pp = p = (Fts5Storage*)sqlite3_malloc(nByte); + if( !p ) return SQLITE_NOMEM; + + memset(p, 0, nByte); + p->aTotalSize = (i64*)&p[1]; + p->pConfig = pConfig; + p->pIndex = pIndex; + + if( bCreate ){ + if( pConfig->eContent==FTS5_CONTENT_NORMAL ){ + int nDefn = 32 + pConfig->nCol*10; + char *zDefn = sqlite3_malloc(32 + pConfig->nCol * 10); + if( zDefn==0 ){ + rc = SQLITE_NOMEM; + }else{ + int i; + int iOff; + sqlite3_snprintf(nDefn, zDefn, "id INTEGER PRIMARY KEY"); + iOff = (int)strlen(zDefn); + for(i=0; i<pConfig->nCol; i++){ + sqlite3_snprintf(nDefn-iOff, &zDefn[iOff], ", c%d", i); + iOff += (int)strlen(&zDefn[iOff]); + } + rc = sqlite3Fts5CreateTable(pConfig, "content", zDefn, 0, pzErr); + } + sqlite3_free(zDefn); + } + + if( rc==SQLITE_OK && pConfig->bColumnsize ){ + rc = sqlite3Fts5CreateTable( + pConfig, "docsize", "id INTEGER PRIMARY KEY, sz BLOB", 0, pzErr + ); + } + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5CreateTable( + pConfig, "config", "k PRIMARY KEY, v", 1, pzErr + ); + } + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5StorageConfigValue(p, "version", 0, FTS5_CURRENT_VERSION); + } + } + + if( rc ){ + sqlite3Fts5StorageClose(p); + *pp = 0; + } + return rc; +} + +/* +** Close a handle opened by an earlier call to sqlite3Fts5StorageOpen(). 
+*/ +static int sqlite3Fts5StorageClose(Fts5Storage *p){ + int rc = SQLITE_OK; + if( p ){ + int i; + + /* Finalize all SQL statements */ + for(i=0; i<ArraySize(p->aStmt); i++){ + sqlite3_finalize(p->aStmt[i]); + } + + sqlite3_free(p); + } + return rc; +} + +typedef struct Fts5InsertCtx Fts5InsertCtx; +struct Fts5InsertCtx { + Fts5Storage *pStorage; + int iCol; + int szCol; /* Size of column value in tokens */ +}; + +/* +** Tokenization callback used when inserting tokens into the FTS index. +*/ +static int fts5StorageInsertCallback( + void *pContext, /* Pointer to Fts5InsertCtx object */ + int tflags, + const char *pToken, /* Buffer containing token */ + int nToken, /* Size of token in bytes */ + int iUnused1, /* Start offset of token */ + int iUnused2 /* End offset of token */ +){ + Fts5InsertCtx *pCtx = (Fts5InsertCtx*)pContext; + Fts5Index *pIdx = pCtx->pStorage->pIndex; + UNUSED_PARAM2(iUnused1, iUnused2); + if( (tflags & FTS5_TOKEN_COLOCATED)==0 || pCtx->szCol==0 ){ + pCtx->szCol++; + } + return sqlite3Fts5IndexWrite(pIdx, pCtx->iCol, pCtx->szCol-1, pToken, nToken); +} + +/* +** If a row with rowid iDel is present in the %_content table, add the +** delete-markers to the FTS index necessary to delete it. Do not actually +** remove the %_content row at this time though. +*/ +static int fts5StorageDeleteFromIndex( + Fts5Storage *p, + i64 iDel, + sqlite3_value **apVal +){ + Fts5Config *pConfig = p->pConfig; + sqlite3_stmt *pSeek = 0; /* SELECT to read row iDel from %_data */ + int rc; /* Return code */ + int rc2; /* sqlite3_reset() return code */ + int iCol; + Fts5InsertCtx ctx; + + if( apVal==0 ){ + rc = fts5StorageGetStmt(p, FTS5_STMT_LOOKUP, &pSeek, 0); + if( rc!=SQLITE_OK ) return rc; + sqlite3_bind_int64(pSeek, 1, iDel); + if( sqlite3_step(pSeek)!=SQLITE_ROW ){ + return sqlite3_reset(pSeek); + } + } + + ctx.pStorage = p; + ctx.iCol = -1; + rc = sqlite3Fts5IndexBeginWrite(p->pIndex, 1, iDel); + for(iCol=1; rc==SQLITE_OK && iCol<=pConfig->nCol; iCol++){ + if( pConfig->abUnindexed[iCol-1]==0 ){ + const char *zText; + int nText; + if( pSeek ){ + zText = (const char*)sqlite3_column_text(pSeek, iCol); + nText = sqlite3_column_bytes(pSeek, iCol); + }else{ + zText = (const char*)sqlite3_value_text(apVal[iCol-1]); + nText = sqlite3_value_bytes(apVal[iCol-1]); + } + ctx.szCol = 0; + rc = sqlite3Fts5Tokenize(pConfig, FTS5_TOKENIZE_DOCUMENT, + zText, nText, (void*)&ctx, fts5StorageInsertCallback + ); + p->aTotalSize[iCol-1] -= (i64)ctx.szCol; + } + } + p->nTotalRow--; + + rc2 = sqlite3_reset(pSeek); + if( rc==SQLITE_OK ) rc = rc2; + return rc; +} + + +/* +** Insert a record into the %_docsize table. Specifically, do: +** +** INSERT OR REPLACE INTO %_docsize(id, sz) VALUES(iRowid, pBuf); +** +** If there is no %_docsize table (as happens if the columnsize=0 option +** is specified when the FTS5 table is created), this function is a no-op. +*/ +static int fts5StorageInsertDocsize( + Fts5Storage *p, /* Storage module to write to */ + i64 iRowid, /* id value */ + Fts5Buffer *pBuf /* sz value */ +){ + int rc = SQLITE_OK; + if( p->pConfig->bColumnsize ){ + sqlite3_stmt *pReplace = 0; + rc = fts5StorageGetStmt(p, FTS5_STMT_REPLACE_DOCSIZE, &pReplace, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pReplace, 1, iRowid); + sqlite3_bind_blob(pReplace, 2, pBuf->p, pBuf->n, SQLITE_STATIC); + sqlite3_step(pReplace); + rc = sqlite3_reset(pReplace); + } + } + return rc; +} + +/* +** Load the contents of the "averages" record from disk into the +** p->nTotalRow and p->aTotalSize[] variables. 
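+** (Editorial aside, inferred from fts5StorageSaveTotals() below rather than
+** stated in the original comment: the record appears to consist of a varint
+** holding nTotalRow followed by one varint per column giving that column's
+** total token count.)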
If successful, and if +** argument bCache is true, set the p->bTotalsValid flag to indicate +** that the contents of aTotalSize[] and nTotalRow are valid until +** further notice. +** +** Return SQLITE_OK if successful, or an SQLite error code if an error +** occurs. +*/ +static int fts5StorageLoadTotals(Fts5Storage *p, int bCache){ + int rc = SQLITE_OK; + if( p->bTotalsValid==0 ){ + rc = sqlite3Fts5IndexGetAverages(p->pIndex, &p->nTotalRow, p->aTotalSize); + p->bTotalsValid = bCache; + } + return rc; +} + +/* +** Store the current contents of the p->nTotalRow and p->aTotalSize[] +** variables in the "averages" record on disk. +** +** Return SQLITE_OK if successful, or an SQLite error code if an error +** occurs. +*/ +static int fts5StorageSaveTotals(Fts5Storage *p){ + int nCol = p->pConfig->nCol; + int i; + Fts5Buffer buf; + int rc = SQLITE_OK; + memset(&buf, 0, sizeof(buf)); + + sqlite3Fts5BufferAppendVarint(&rc, &buf, p->nTotalRow); + for(i=0; i<nCol; i++){ + sqlite3Fts5BufferAppendVarint(&rc, &buf, p->aTotalSize[i]); + } + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5IndexSetAverages(p->pIndex, buf.p, buf.n); + } + sqlite3_free(buf.p); + + return rc; +} + +/* +** Remove a row from the FTS table. +*/ +static int sqlite3Fts5StorageDelete(Fts5Storage *p, i64 iDel, sqlite3_value **apVal){ + Fts5Config *pConfig = p->pConfig; + int rc; + sqlite3_stmt *pDel = 0; + + assert( pConfig->eContent!=FTS5_CONTENT_NORMAL || apVal==0 ); + rc = fts5StorageLoadTotals(p, 1); + + /* Delete the index records */ + if( rc==SQLITE_OK ){ + rc = fts5StorageDeleteFromIndex(p, iDel, apVal); + } + + /* Delete the %_docsize record */ + if( rc==SQLITE_OK && pConfig->bColumnsize ){ + rc = fts5StorageGetStmt(p, FTS5_STMT_DELETE_DOCSIZE, &pDel, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pDel, 1, iDel); + sqlite3_step(pDel); + rc = sqlite3_reset(pDel); + } + } + + /* Delete the %_content record */ + if( pConfig->eContent==FTS5_CONTENT_NORMAL ){ + if( rc==SQLITE_OK ){ + rc = fts5StorageGetStmt(p, FTS5_STMT_DELETE_CONTENT, &pDel, 0); + } + if( rc==SQLITE_OK ){ + sqlite3_bind_int64(pDel, 1, iDel); + sqlite3_step(pDel); + rc = sqlite3_reset(pDel); + } + } + + /* Write the averages record */ + if( rc==SQLITE_OK ){ + rc = fts5StorageSaveTotals(p); + } + + return rc; +} + +/* +** Delete all entries in the FTS5 index. +*/ +static int sqlite3Fts5StorageDeleteAll(Fts5Storage *p){ + Fts5Config *pConfig = p->pConfig; + int rc; + + /* Delete the contents of the %_data and %_docsize tables. */ + rc = fts5ExecPrintf(pConfig->db, 0, + "DELETE FROM %Q.'%q_data';" + "DELETE FROM %Q.'%q_idx';", + pConfig->zDb, pConfig->zName, + pConfig->zDb, pConfig->zName + ); + if( rc==SQLITE_OK && pConfig->bColumnsize ){ + rc = fts5ExecPrintf(pConfig->db, 0, + "DELETE FROM %Q.'%q_docsize';", + pConfig->zDb, pConfig->zName + ); + } + + /* Reinitialize the %_data table. This call creates the initial structure + ** and averages records. 
*/ + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5IndexReinit(p->pIndex); + } + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5StorageConfigValue(p, "version", 0, FTS5_CURRENT_VERSION); + } + return rc; +} + +static int sqlite3Fts5StorageRebuild(Fts5Storage *p){ + Fts5Buffer buf = {0,0,0}; + Fts5Config *pConfig = p->pConfig; + sqlite3_stmt *pScan = 0; + Fts5InsertCtx ctx; + int rc; + + memset(&ctx, 0, sizeof(Fts5InsertCtx)); + ctx.pStorage = p; + rc = sqlite3Fts5StorageDeleteAll(p); + if( rc==SQLITE_OK ){ + rc = fts5StorageLoadTotals(p, 1); + } + + if( rc==SQLITE_OK ){ + rc = fts5StorageGetStmt(p, FTS5_STMT_SCAN, &pScan, 0); + } + + while( rc==SQLITE_OK && SQLITE_ROW==sqlite3_step(pScan) ){ + i64 iRowid = sqlite3_column_int64(pScan, 0); + + sqlite3Fts5BufferZero(&buf); + rc = sqlite3Fts5IndexBeginWrite(p->pIndex, 0, iRowid); + for(ctx.iCol=0; rc==SQLITE_OK && ctx.iCol<pConfig->nCol; ctx.iCol++){ + ctx.szCol = 0; + if( pConfig->abUnindexed[ctx.iCol]==0 ){ + rc = sqlite3Fts5Tokenize(pConfig, + FTS5_TOKENIZE_DOCUMENT, + (const char*)sqlite3_column_text(pScan, ctx.iCol+1), + sqlite3_column_bytes(pScan, ctx.iCol+1), + (void*)&ctx, + fts5StorageInsertCallback + ); + } + sqlite3Fts5BufferAppendVarint(&rc, &buf, ctx.szCol); + p->aTotalSize[ctx.iCol] += (i64)ctx.szCol; + } + p->nTotalRow++; + + if( rc==SQLITE_OK ){ + rc = fts5StorageInsertDocsize(p, iRowid, &buf); + } + } + sqlite3_free(buf.p); + + /* Write the averages record */ + if( rc==SQLITE_OK ){ + rc = fts5StorageSaveTotals(p); + } + return rc; +} + +static int sqlite3Fts5StorageOptimize(Fts5Storage *p){ + return sqlite3Fts5IndexOptimize(p->pIndex); +} + +static int sqlite3Fts5StorageMerge(Fts5Storage *p, int nMerge){ + return sqlite3Fts5IndexMerge(p->pIndex, nMerge); +} + +/* +** Allocate a new rowid. This is used for "external content" tables when +** a NULL value is inserted into the rowid column. The new rowid is allocated +** by inserting a dummy row into the %_docsize table. The dummy will be +** overwritten later. +** +** If the %_docsize table does not exist, SQLITE_MISMATCH is returned. In +** this case the user is required to provide a rowid explicitly. +*/ +static int fts5StorageNewRowid(Fts5Storage *p, i64 *piRowid){ + int rc = SQLITE_MISMATCH; + if( p->pConfig->bColumnsize ){ + sqlite3_stmt *pReplace = 0; + rc = fts5StorageGetStmt(p, FTS5_STMT_REPLACE_DOCSIZE, &pReplace, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_null(pReplace, 1); + sqlite3_bind_null(pReplace, 2); + sqlite3_step(pReplace); + rc = sqlite3_reset(pReplace); + } + if( rc==SQLITE_OK ){ + *piRowid = sqlite3_last_insert_rowid(p->pConfig->db); + } + } + return rc; +} + +/* +** Insert a new row into the FTS content table. +*/ +static int sqlite3Fts5StorageContentInsert( + Fts5Storage *p, + sqlite3_value **apVal, + i64 *piRowid +){ + Fts5Config *pConfig = p->pConfig; + int rc = SQLITE_OK; + + /* Insert the new row into the %_content table. 
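+  ** (Editorial note, not part of the original comment: for external-content
+  ** and contentless tables - i.e. when eContent!=FTS5_CONTENT_NORMAL - no
+  ** row is written here; the rowid is taken from apVal[1] if it is an
+  ** integer, or else allocated by fts5StorageNewRowid().)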
*/ + if( pConfig->eContent!=FTS5_CONTENT_NORMAL ){ + if( sqlite3_value_type(apVal[1])==SQLITE_INTEGER ){ + *piRowid = sqlite3_value_int64(apVal[1]); + }else{ + rc = fts5StorageNewRowid(p, piRowid); + } + }else{ + sqlite3_stmt *pInsert = 0; /* Statement to write %_content table */ + int i; /* Counter variable */ + rc = fts5StorageGetStmt(p, FTS5_STMT_INSERT_CONTENT, &pInsert, 0); + for(i=1; rc==SQLITE_OK && i<=pConfig->nCol+1; i++){ + rc = sqlite3_bind_value(pInsert, i, apVal[i]); + } + if( rc==SQLITE_OK ){ + sqlite3_step(pInsert); + rc = sqlite3_reset(pInsert); + } + *piRowid = sqlite3_last_insert_rowid(pConfig->db); + } + + return rc; +} + +/* +** Insert new entries into the FTS index and %_docsize table. +*/ +static int sqlite3Fts5StorageIndexInsert( + Fts5Storage *p, + sqlite3_value **apVal, + i64 iRowid +){ + Fts5Config *pConfig = p->pConfig; + int rc = SQLITE_OK; /* Return code */ + Fts5InsertCtx ctx; /* Tokenization callback context object */ + Fts5Buffer buf; /* Buffer used to build up %_docsize blob */ + + memset(&buf, 0, sizeof(Fts5Buffer)); + ctx.pStorage = p; + rc = fts5StorageLoadTotals(p, 1); + + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5IndexBeginWrite(p->pIndex, 0, iRowid); + } + for(ctx.iCol=0; rc==SQLITE_OK && ctx.iCol<pConfig->nCol; ctx.iCol++){ + ctx.szCol = 0; + if( pConfig->abUnindexed[ctx.iCol]==0 ){ + rc = sqlite3Fts5Tokenize(pConfig, + FTS5_TOKENIZE_DOCUMENT, + (const char*)sqlite3_value_text(apVal[ctx.iCol+2]), + sqlite3_value_bytes(apVal[ctx.iCol+2]), + (void*)&ctx, + fts5StorageInsertCallback + ); + } + sqlite3Fts5BufferAppendVarint(&rc, &buf, ctx.szCol); + p->aTotalSize[ctx.iCol] += (i64)ctx.szCol; + } + p->nTotalRow++; + + /* Write the %_docsize record */ + if( rc==SQLITE_OK ){ + rc = fts5StorageInsertDocsize(p, iRowid, &buf); + } + sqlite3_free(buf.p); + + /* Write the averages record */ + if( rc==SQLITE_OK ){ + rc = fts5StorageSaveTotals(p); + } + + return rc; +} + +static int fts5StorageCount(Fts5Storage *p, const char *zSuffix, i64 *pnRow){ + Fts5Config *pConfig = p->pConfig; + char *zSql; + int rc; + + zSql = sqlite3_mprintf("SELECT count(*) FROM %Q.'%q_%s'", + pConfig->zDb, pConfig->zName, zSuffix + ); + if( zSql==0 ){ + rc = SQLITE_NOMEM; + }else{ + sqlite3_stmt *pCnt = 0; + rc = sqlite3_prepare_v2(pConfig->db, zSql, -1, &pCnt, 0); + if( rc==SQLITE_OK ){ + if( SQLITE_ROW==sqlite3_step(pCnt) ){ + *pnRow = sqlite3_column_int64(pCnt, 0); + } + rc = sqlite3_finalize(pCnt); + } + } + + sqlite3_free(zSql); + return rc; +} + +/* +** Context object used by sqlite3Fts5StorageIntegrity(). +*/ +typedef struct Fts5IntegrityCtx Fts5IntegrityCtx; +struct Fts5IntegrityCtx { + i64 iRowid; + int iCol; + int szCol; + u64 cksum; + Fts5Termset *pTermset; + Fts5Config *pConfig; +}; + + +/* +** Tokenization callback used by integrity check. 
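+**
+** (Editorial note, not part of the original sources: each token, and each of
+** its configured prefixes, contributes an entry checksum computed by
+** sqlite3Fts5IndexEntryCksum() that is XOR-ed into ctx.cksum. The
+** Fts5Termset appears to be used so that, in the FTS5_DETAIL_NONE and
+** FTS5_DETAIL_COLUMNS modes, a term occurring more than once in the same
+** row or column is counted only once. sqlite3Fts5StorageIntegrity() later
+** hands the accumulated value to sqlite3Fts5IndexIntegrityCheck() for
+** comparison against a checksum derived from the index itself.)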
+*/ +static int fts5StorageIntegrityCallback( + void *pContext, /* Pointer to Fts5IntegrityCtx object */ + int tflags, + const char *pToken, /* Buffer containing token */ + int nToken, /* Size of token in bytes */ + int iUnused1, /* Start offset of token */ + int iUnused2 /* End offset of token */ +){ + Fts5IntegrityCtx *pCtx = (Fts5IntegrityCtx*)pContext; + Fts5Termset *pTermset = pCtx->pTermset; + int bPresent; + int ii; + int rc = SQLITE_OK; + int iPos; + int iCol; + + UNUSED_PARAM2(iUnused1, iUnused2); + + if( (tflags & FTS5_TOKEN_COLOCATED)==0 || pCtx->szCol==0 ){ + pCtx->szCol++; + } + + switch( pCtx->pConfig->eDetail ){ + case FTS5_DETAIL_FULL: + iPos = pCtx->szCol-1; + iCol = pCtx->iCol; + break; + + case FTS5_DETAIL_COLUMNS: + iPos = pCtx->iCol; + iCol = 0; + break; + + default: + assert( pCtx->pConfig->eDetail==FTS5_DETAIL_NONE ); + iPos = 0; + iCol = 0; + break; + } + + rc = sqlite3Fts5TermsetAdd(pTermset, 0, pToken, nToken, &bPresent); + if( rc==SQLITE_OK && bPresent==0 ){ + pCtx->cksum ^= sqlite3Fts5IndexEntryCksum( + pCtx->iRowid, iCol, iPos, 0, pToken, nToken + ); + } + + for(ii=0; rc==SQLITE_OK && ii<pCtx->pConfig->nPrefix; ii++){ + const int nChar = pCtx->pConfig->aPrefix[ii]; + int nByte = sqlite3Fts5IndexCharlenToBytelen(pToken, nToken, nChar); + if( nByte ){ + rc = sqlite3Fts5TermsetAdd(pTermset, ii+1, pToken, nByte, &bPresent); + if( bPresent==0 ){ + pCtx->cksum ^= sqlite3Fts5IndexEntryCksum( + pCtx->iRowid, iCol, iPos, ii+1, pToken, nByte + ); + } + } + } + + return rc; +} + +/* +** Check that the contents of the FTS index match that of the %_content +** table. Return SQLITE_OK if they do, or SQLITE_CORRUPT if not. Return +** some other SQLite error code if an error occurs while attempting to +** determine this. +*/ +static int sqlite3Fts5StorageIntegrity(Fts5Storage *p){ + Fts5Config *pConfig = p->pConfig; + int rc; /* Return code */ + int *aColSize; /* Array of size pConfig->nCol */ + i64 *aTotalSize; /* Array of size pConfig->nCol */ + Fts5IntegrityCtx ctx; + sqlite3_stmt *pScan; + + memset(&ctx, 0, sizeof(Fts5IntegrityCtx)); + ctx.pConfig = p->pConfig; + aTotalSize = (i64*)sqlite3_malloc(pConfig->nCol * (sizeof(int)+sizeof(i64))); + if( !aTotalSize ) return SQLITE_NOMEM; + aColSize = (int*)&aTotalSize[pConfig->nCol]; + memset(aTotalSize, 0, sizeof(i64) * pConfig->nCol); + + /* Generate the expected index checksum based on the contents of the + ** %_content table. This block stores the checksum in ctx.cksum. 
*/ + rc = fts5StorageGetStmt(p, FTS5_STMT_SCAN, &pScan, 0); + if( rc==SQLITE_OK ){ + int rc2; + while( SQLITE_ROW==sqlite3_step(pScan) ){ + int i; + ctx.iRowid = sqlite3_column_int64(pScan, 0); + ctx.szCol = 0; + if( pConfig->bColumnsize ){ + rc = sqlite3Fts5StorageDocsize(p, ctx.iRowid, aColSize); + } + if( rc==SQLITE_OK && pConfig->eDetail==FTS5_DETAIL_NONE ){ + rc = sqlite3Fts5TermsetNew(&ctx.pTermset); + } + for(i=0; rc==SQLITE_OK && i<pConfig->nCol; i++){ + if( pConfig->abUnindexed[i] ) continue; + ctx.iCol = i; + ctx.szCol = 0; + if( pConfig->eDetail==FTS5_DETAIL_COLUMNS ){ + rc = sqlite3Fts5TermsetNew(&ctx.pTermset); + } + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5Tokenize(pConfig, + FTS5_TOKENIZE_DOCUMENT, + (const char*)sqlite3_column_text(pScan, i+1), + sqlite3_column_bytes(pScan, i+1), + (void*)&ctx, + fts5StorageIntegrityCallback + ); + } + if( rc==SQLITE_OK && pConfig->bColumnsize && ctx.szCol!=aColSize[i] ){ + rc = FTS5_CORRUPT; + } + aTotalSize[i] += ctx.szCol; + if( pConfig->eDetail==FTS5_DETAIL_COLUMNS ){ + sqlite3Fts5TermsetFree(ctx.pTermset); + ctx.pTermset = 0; + } + } + sqlite3Fts5TermsetFree(ctx.pTermset); + ctx.pTermset = 0; + + if( rc!=SQLITE_OK ) break; + } + rc2 = sqlite3_reset(pScan); + if( rc==SQLITE_OK ) rc = rc2; + } + + /* Test that the "totals" (sometimes called "averages") record looks Ok */ + if( rc==SQLITE_OK ){ + int i; + rc = fts5StorageLoadTotals(p, 0); + for(i=0; rc==SQLITE_OK && i<pConfig->nCol; i++){ + if( p->aTotalSize[i]!=aTotalSize[i] ) rc = FTS5_CORRUPT; + } + } + + /* Check that the %_docsize and %_content tables contain the expected + ** number of rows. */ + if( rc==SQLITE_OK && pConfig->eContent==FTS5_CONTENT_NORMAL ){ + i64 nRow = 0; + rc = fts5StorageCount(p, "content", &nRow); + if( rc==SQLITE_OK && nRow!=p->nTotalRow ) rc = FTS5_CORRUPT; + } + if( rc==SQLITE_OK && pConfig->bColumnsize ){ + i64 nRow = 0; + rc = fts5StorageCount(p, "docsize", &nRow); + if( rc==SQLITE_OK && nRow!=p->nTotalRow ) rc = FTS5_CORRUPT; + } + + /* Pass the expected checksum down to the FTS index module. It will + ** verify, amongst other things, that it matches the checksum generated by + ** inspecting the index itself. */ + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5IndexIntegrityCheck(p->pIndex, ctx.cksum); + } + + sqlite3_free(aTotalSize); + return rc; +} + +/* +** Obtain an SQLite statement handle that may be used to read data from the +** %_content table. +*/ +static int sqlite3Fts5StorageStmt( + Fts5Storage *p, + int eStmt, + sqlite3_stmt **pp, + char **pzErrMsg +){ + int rc; + assert( eStmt==FTS5_STMT_SCAN_ASC + || eStmt==FTS5_STMT_SCAN_DESC + || eStmt==FTS5_STMT_LOOKUP + ); + rc = fts5StorageGetStmt(p, eStmt, pp, pzErrMsg); + if( rc==SQLITE_OK ){ + assert( p->aStmt[eStmt]==*pp ); + p->aStmt[eStmt] = 0; + } + return rc; +} + +/* +** Release an SQLite statement handle obtained via an earlier call to +** sqlite3Fts5StorageStmt(). The eStmt parameter passed to this function +** must match that passed to the sqlite3Fts5StorageStmt() call. 
+*/ +static void sqlite3Fts5StorageStmtRelease( + Fts5Storage *p, + int eStmt, + sqlite3_stmt *pStmt +){ + assert( eStmt==FTS5_STMT_SCAN_ASC + || eStmt==FTS5_STMT_SCAN_DESC + || eStmt==FTS5_STMT_LOOKUP + ); + if( p->aStmt[eStmt]==0 ){ + sqlite3_reset(pStmt); + p->aStmt[eStmt] = pStmt; + }else{ + sqlite3_finalize(pStmt); + } +} + +static int fts5StorageDecodeSizeArray( + int *aCol, int nCol, /* Array to populate */ + const u8 *aBlob, int nBlob /* Record to read varints from */ +){ + int i; + int iOff = 0; + for(i=0; i<nCol; i++){ + if( iOff>=nBlob ) return 1; + iOff += fts5GetVarint32(&aBlob[iOff], aCol[i]); + } + return (iOff!=nBlob); +} + +/* +** Argument aCol points to an array of integers containing one entry for +** each table column. This function reads the %_docsize record for the +** specified rowid and populates aCol[] with the results. +** +** An SQLite error code is returned if an error occurs, or SQLITE_OK +** otherwise. +*/ +static int sqlite3Fts5StorageDocsize(Fts5Storage *p, i64 iRowid, int *aCol){ + int nCol = p->pConfig->nCol; /* Number of user columns in table */ + sqlite3_stmt *pLookup = 0; /* Statement to query %_docsize */ + int rc; /* Return Code */ + + assert( p->pConfig->bColumnsize ); + rc = fts5StorageGetStmt(p, FTS5_STMT_LOOKUP_DOCSIZE, &pLookup, 0); + if( rc==SQLITE_OK ){ + int bCorrupt = 1; + sqlite3_bind_int64(pLookup, 1, iRowid); + if( SQLITE_ROW==sqlite3_step(pLookup) ){ + const u8 *aBlob = sqlite3_column_blob(pLookup, 0); + int nBlob = sqlite3_column_bytes(pLookup, 0); + if( 0==fts5StorageDecodeSizeArray(aCol, nCol, aBlob, nBlob) ){ + bCorrupt = 0; + } + } + rc = sqlite3_reset(pLookup); + if( bCorrupt && rc==SQLITE_OK ){ + rc = FTS5_CORRUPT; + } + } + + return rc; +} + +static int sqlite3Fts5StorageSize(Fts5Storage *p, int iCol, i64 *pnToken){ + int rc = fts5StorageLoadTotals(p, 0); + if( rc==SQLITE_OK ){ + *pnToken = 0; + if( iCol<0 ){ + int i; + for(i=0; i<p->pConfig->nCol; i++){ + *pnToken += p->aTotalSize[i]; + } + }else if( iCol<p->pConfig->nCol ){ + *pnToken = p->aTotalSize[iCol]; + }else{ + rc = SQLITE_RANGE; + } + } + return rc; +} + +static int sqlite3Fts5StorageRowCount(Fts5Storage *p, i64 *pnRow){ + int rc = fts5StorageLoadTotals(p, 0); + if( rc==SQLITE_OK ){ + *pnRow = p->nTotalRow; + } + return rc; +} + +/* +** Flush any data currently held in-memory to disk. +*/ +static int sqlite3Fts5StorageSync(Fts5Storage *p, int bCommit){ + if( bCommit && p->bTotalsValid ){ + int rc = fts5StorageSaveTotals(p); + p->bTotalsValid = 0; + if( rc!=SQLITE_OK ) return rc; + } + return sqlite3Fts5IndexSync(p->pIndex, bCommit); +} + +static int sqlite3Fts5StorageRollback(Fts5Storage *p){ + p->bTotalsValid = 0; + return sqlite3Fts5IndexRollback(p->pIndex); +} + +static int sqlite3Fts5StorageConfigValue( + Fts5Storage *p, + const char *z, + sqlite3_value *pVal, + int iVal +){ + sqlite3_stmt *pReplace = 0; + int rc = fts5StorageGetStmt(p, FTS5_STMT_REPLACE_CONFIG, &pReplace, 0); + if( rc==SQLITE_OK ){ + sqlite3_bind_text(pReplace, 1, z, -1, SQLITE_STATIC); + if( pVal ){ + sqlite3_bind_value(pReplace, 2, pVal); + }else{ + sqlite3_bind_int(pReplace, 2, iVal); + } + sqlite3_step(pReplace); + rc = sqlite3_reset(pReplace); + } + if( rc==SQLITE_OK && pVal ){ + int iNew = p->pConfig->iCookie + 1; + rc = sqlite3Fts5IndexSetCookie(p->pIndex, iNew); + if( rc==SQLITE_OK ){ + p->pConfig->iCookie = iNew; + } + } + return rc; +} + + + +/* +** 2014 May 31 +** +** The author disclaims copyright to this source code. 
In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +*/ + + +/* #include "fts5Int.h" */ + +/************************************************************************** +** Start of ascii tokenizer implementation. +*/ + +/* +** For tokenizers with no "unicode" modifier, the set of token characters +** is the same as the set of ASCII range alphanumeric characters. +*/ +static unsigned char aAsciiTokenChar[128] = { + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 0x00..0x0F */ + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 0x10..0x1F */ + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 0x20..0x2F */ + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, /* 0x30..0x3F */ + 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 0x40..0x4F */ + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, /* 0x50..0x5F */ + 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, /* 0x60..0x6F */ + 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, /* 0x70..0x7F */ +}; + +typedef struct AsciiTokenizer AsciiTokenizer; +struct AsciiTokenizer { + unsigned char aTokenChar[128]; +}; + +static void fts5AsciiAddExceptions( + AsciiTokenizer *p, + const char *zArg, + int bTokenChars +){ + int i; + for(i=0; zArg[i]; i++){ + if( (zArg[i] & 0x80)==0 ){ + p->aTokenChar[(int)zArg[i]] = (unsigned char)bTokenChars; + } + } +} + +/* +** Delete a "ascii" tokenizer. +*/ +static void fts5AsciiDelete(Fts5Tokenizer *p){ + sqlite3_free(p); +} + +/* +** Create an "ascii" tokenizer. +*/ +static int fts5AsciiCreate( + void *pUnused, + const char **azArg, int nArg, + Fts5Tokenizer **ppOut +){ + int rc = SQLITE_OK; + AsciiTokenizer *p = 0; + UNUSED_PARAM(pUnused); + if( nArg%2 ){ + rc = SQLITE_ERROR; + }else{ + p = sqlite3_malloc(sizeof(AsciiTokenizer)); + if( p==0 ){ + rc = SQLITE_NOMEM; + }else{ + int i; + memset(p, 0, sizeof(AsciiTokenizer)); + memcpy(p->aTokenChar, aAsciiTokenChar, sizeof(aAsciiTokenChar)); + for(i=0; rc==SQLITE_OK && i<nArg; i+=2){ + const char *zArg = azArg[i+1]; + if( 0==sqlite3_stricmp(azArg[i], "tokenchars") ){ + fts5AsciiAddExceptions(p, zArg, 1); + }else + if( 0==sqlite3_stricmp(azArg[i], "separators") ){ + fts5AsciiAddExceptions(p, zArg, 0); + }else{ + rc = SQLITE_ERROR; + } + } + if( rc!=SQLITE_OK ){ + fts5AsciiDelete((Fts5Tokenizer*)p); + p = 0; + } + } + } + + *ppOut = (Fts5Tokenizer*)p; + return rc; +} + + +static void asciiFold(char *aOut, const char *aIn, int nByte){ + int i; + for(i=0; i<nByte; i++){ + char c = aIn[i]; + if( c>='A' && c<='Z' ) c += 32; + aOut[i] = c; + } +} + +/* +** Tokenize some text using the ascii tokenizer. +*/ +static int fts5AsciiTokenize( + Fts5Tokenizer *pTokenizer, + void *pCtx, + int iUnused, + const char *pText, int nText, + int (*xToken)(void*, int, const char*, int nToken, int iStart, int iEnd) +){ + AsciiTokenizer *p = (AsciiTokenizer*)pTokenizer; + int rc = SQLITE_OK; + int ie; + int is = 0; + + char aFold[64]; + int nFold = sizeof(aFold); + char *pFold = aFold; + unsigned char *a = p->aTokenChar; + + UNUSED_PARAM(iUnused); + + while( is<nText && rc==SQLITE_OK ){ + int nByte; + + /* Skip any leading divider characters. 
*/ + while( is<nText && ((pText[is]&0x80)==0 && a[(int)pText[is]]==0) ){ + is++; + } + if( is==nText ) break; + + /* Count the token characters */ + ie = is+1; + while( ie<nText && ((pText[ie]&0x80) || a[(int)pText[ie]] ) ){ + ie++; + } + + /* Fold to lower case */ + nByte = ie-is; + if( nByte>nFold ){ + if( pFold!=aFold ) sqlite3_free(pFold); + pFold = sqlite3_malloc(nByte*2); + if( pFold==0 ){ + rc = SQLITE_NOMEM; + break; + } + nFold = nByte*2; + } + asciiFold(pFold, &pText[is], nByte); + + /* Invoke the token callback */ + rc = xToken(pCtx, 0, pFold, nByte, is, ie); + is = ie+1; + } + + if( pFold!=aFold ) sqlite3_free(pFold); + if( rc==SQLITE_DONE ) rc = SQLITE_OK; + return rc; +} + +/************************************************************************** +** Start of unicode61 tokenizer implementation. +*/ + + +/* +** The following two macros - READ_UTF8 and WRITE_UTF8 - have been copied +** from the sqlite3 source file utf.c. If this file is compiled as part +** of the amalgamation, they are not required. +*/ +#ifndef SQLITE_AMALGAMATION + +static const unsigned char sqlite3Utf8Trans1[] = { + 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, + 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, + 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, + 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, + 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, + 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, + 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, + 0x00, 0x01, 0x02, 0x03, 0x00, 0x01, 0x00, 0x00, +}; + +#define READ_UTF8(zIn, zTerm, c) \ + c = *(zIn++); \ + if( c>=0xc0 ){ \ + c = sqlite3Utf8Trans1[c-0xc0]; \ + while( zIn!=zTerm && (*zIn & 0xc0)==0x80 ){ \ + c = (c<<6) + (0x3f & *(zIn++)); \ + } \ + if( c<0x80 \ + || (c&0xFFFFF800)==0xD800 \ + || (c&0xFFFFFFFE)==0xFFFE ){ c = 0xFFFD; } \ + } + + +#define WRITE_UTF8(zOut, c) { \ + if( c<0x00080 ){ \ + *zOut++ = (unsigned char)(c&0xFF); \ + } \ + else if( c<0x00800 ){ \ + *zOut++ = 0xC0 + (unsigned char)((c>>6)&0x1F); \ + *zOut++ = 0x80 + (unsigned char)(c & 0x3F); \ + } \ + else if( c<0x10000 ){ \ + *zOut++ = 0xE0 + (unsigned char)((c>>12)&0x0F); \ + *zOut++ = 0x80 + (unsigned char)((c>>6) & 0x3F); \ + *zOut++ = 0x80 + (unsigned char)(c & 0x3F); \ + }else{ \ + *zOut++ = 0xF0 + (unsigned char)((c>>18) & 0x07); \ + *zOut++ = 0x80 + (unsigned char)((c>>12) & 0x3F); \ + *zOut++ = 0x80 + (unsigned char)((c>>6) & 0x3F); \ + *zOut++ = 0x80 + (unsigned char)(c & 0x3F); \ + } \ +} + +#endif /* ifndef SQLITE_AMALGAMATION */ + +typedef struct Unicode61Tokenizer Unicode61Tokenizer; +struct Unicode61Tokenizer { + unsigned char aTokenChar[128]; /* ASCII range token characters */ + char *aFold; /* Buffer to fold text into */ + int nFold; /* Size of aFold[] in bytes */ + int bRemoveDiacritic; /* True if remove_diacritics=1 is set */ + int nException; + int *aiException; +}; + +static int fts5UnicodeAddExceptions( + Unicode61Tokenizer *p, /* Tokenizer object */ + const char *z, /* Characters to treat as exceptions */ + int bTokenChars /* 1 for 'tokenchars', 0 for 'separators' */ +){ + int rc = SQLITE_OK; + int n = (int)strlen(z); + int *aNew; + + if( n>0 ){ + aNew = (int*)sqlite3_realloc(p->aiException, (n+p->nException)*sizeof(int)); + if( aNew ){ + int nNew = p->nException; + const unsigned char *zCsr = (const unsigned char*)z; + const unsigned char *zTerm = (const unsigned char*)&z[n]; + while( zCsr<zTerm ){ + int iCode; + int bToken; + READ_UTF8(zCsr, zTerm, iCode); + if( iCode<128 ){ + p->aTokenChar[iCode] = (unsigned char)bTokenChars; + }else{ + bToken = 
sqlite3Fts5UnicodeIsalnum(iCode); + assert( (bToken==0 || bToken==1) ); + assert( (bTokenChars==0 || bTokenChars==1) ); + if( bToken!=bTokenChars && sqlite3Fts5UnicodeIsdiacritic(iCode)==0 ){ + int i; + for(i=0; i<nNew; i++){ + if( aNew[i]>iCode ) break; + } + memmove(&aNew[i+1], &aNew[i], (nNew-i)*sizeof(int)); + aNew[i] = iCode; + nNew++; + } + } + } + p->aiException = aNew; + p->nException = nNew; + }else{ + rc = SQLITE_NOMEM; + } + } + + return rc; +} + +/* +** Return true if the p->aiException[] array contains the value iCode. +*/ +static int fts5UnicodeIsException(Unicode61Tokenizer *p, int iCode){ + if( p->nException>0 ){ + int *a = p->aiException; + int iLo = 0; + int iHi = p->nException-1; + + while( iHi>=iLo ){ + int iTest = (iHi + iLo) / 2; + if( iCode==a[iTest] ){ + return 1; + }else if( iCode>a[iTest] ){ + iLo = iTest+1; + }else{ + iHi = iTest-1; + } + } + } + + return 0; +} + +/* +** Delete a "unicode61" tokenizer. +*/ +static void fts5UnicodeDelete(Fts5Tokenizer *pTok){ + if( pTok ){ + Unicode61Tokenizer *p = (Unicode61Tokenizer*)pTok; + sqlite3_free(p->aiException); + sqlite3_free(p->aFold); + sqlite3_free(p); + } + return; +} + +/* +** Create a "unicode61" tokenizer. +*/ +static int fts5UnicodeCreate( + void *pUnused, + const char **azArg, int nArg, + Fts5Tokenizer **ppOut +){ + int rc = SQLITE_OK; /* Return code */ + Unicode61Tokenizer *p = 0; /* New tokenizer object */ + + UNUSED_PARAM(pUnused); + + if( nArg%2 ){ + rc = SQLITE_ERROR; + }else{ + p = (Unicode61Tokenizer*)sqlite3_malloc(sizeof(Unicode61Tokenizer)); + if( p ){ + int i; + memset(p, 0, sizeof(Unicode61Tokenizer)); + memcpy(p->aTokenChar, aAsciiTokenChar, sizeof(aAsciiTokenChar)); + p->bRemoveDiacritic = 1; + p->nFold = 64; + p->aFold = sqlite3_malloc(p->nFold * sizeof(char)); + if( p->aFold==0 ){ + rc = SQLITE_NOMEM; + } + for(i=0; rc==SQLITE_OK && i<nArg; i+=2){ + const char *zArg = azArg[i+1]; + if( 0==sqlite3_stricmp(azArg[i], "remove_diacritics") ){ + if( (zArg[0]!='0' && zArg[0]!='1') || zArg[1] ){ + rc = SQLITE_ERROR; + } + p->bRemoveDiacritic = (zArg[0]=='1'); + }else + if( 0==sqlite3_stricmp(azArg[i], "tokenchars") ){ + rc = fts5UnicodeAddExceptions(p, zArg, 1); + }else + if( 0==sqlite3_stricmp(azArg[i], "separators") ){ + rc = fts5UnicodeAddExceptions(p, zArg, 0); + }else{ + rc = SQLITE_ERROR; + } + } + }else{ + rc = SQLITE_NOMEM; + } + if( rc!=SQLITE_OK ){ + fts5UnicodeDelete((Fts5Tokenizer*)p); + p = 0; + } + *ppOut = (Fts5Tokenizer*)p; + } + return rc; +} + +/* +** Return true if, for the purposes of tokenizing with the tokenizer +** passed as the first argument, codepoint iCode is considered a token +** character (not a separator). 
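+**
+** (Editorial note, not part of the original comment: sqlite3Fts5UnicodeIsalnum()
+** supplies the default 0/1 classification, and fts5UnicodeIsException()
+** returns 1 exactly for codepoints registered through the "tokenchars" or
+** "separators" options, so the XOR in the return statement below flips the
+** default classification for those exception codepoints only.)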
+*/ +static int fts5UnicodeIsAlnum(Unicode61Tokenizer *p, int iCode){ + assert( (sqlite3Fts5UnicodeIsalnum(iCode) & 0xFFFFFFFE)==0 ); + return sqlite3Fts5UnicodeIsalnum(iCode) ^ fts5UnicodeIsException(p, iCode); +} + +static int fts5UnicodeTokenize( + Fts5Tokenizer *pTokenizer, + void *pCtx, + int iUnused, + const char *pText, int nText, + int (*xToken)(void*, int, const char*, int nToken, int iStart, int iEnd) +){ + Unicode61Tokenizer *p = (Unicode61Tokenizer*)pTokenizer; + int rc = SQLITE_OK; + unsigned char *a = p->aTokenChar; + + unsigned char *zTerm = (unsigned char*)&pText[nText]; + unsigned char *zCsr = (unsigned char *)pText; + + /* Output buffer */ + char *aFold = p->aFold; + int nFold = p->nFold; + const char *pEnd = &aFold[nFold-6]; + + UNUSED_PARAM(iUnused); + + /* Each iteration of this loop gobbles up a contiguous run of separators, + ** then the next token. */ + while( rc==SQLITE_OK ){ + int iCode; /* non-ASCII codepoint read from input */ + char *zOut = aFold; + int is; + int ie; + + /* Skip any separator characters. */ + while( 1 ){ + if( zCsr>=zTerm ) goto tokenize_done; + if( *zCsr & 0x80 ) { + /* A character outside of the ascii range. Skip past it if it is + ** a separator character. Or break out of the loop if it is not. */ + is = zCsr - (unsigned char*)pText; + READ_UTF8(zCsr, zTerm, iCode); + if( fts5UnicodeIsAlnum(p, iCode) ){ + goto non_ascii_tokenchar; + } + }else{ + if( a[*zCsr] ){ + is = zCsr - (unsigned char*)pText; + goto ascii_tokenchar; + } + zCsr++; + } + } + + /* Run through the tokenchars. Fold them into the output buffer along + ** the way. */ + while( zCsr<zTerm ){ + + /* Grow the output buffer so that there is sufficient space to fit the + ** largest possible utf-8 character. */ + if( zOut>pEnd ){ + aFold = sqlite3_malloc(nFold*2); + if( aFold==0 ){ + rc = SQLITE_NOMEM; + goto tokenize_done; + } + zOut = &aFold[zOut - p->aFold]; + memcpy(aFold, p->aFold, nFold); + sqlite3_free(p->aFold); + p->aFold = aFold; + p->nFold = nFold = nFold*2; + pEnd = &aFold[nFold-6]; + } + + if( *zCsr & 0x80 ){ + /* An non-ascii-range character. Fold it into the output buffer if + ** it is a token character, or break out of the loop if it is not. */ + READ_UTF8(zCsr, zTerm, iCode); + if( fts5UnicodeIsAlnum(p,iCode)||sqlite3Fts5UnicodeIsdiacritic(iCode) ){ + non_ascii_tokenchar: + iCode = sqlite3Fts5UnicodeFold(iCode, p->bRemoveDiacritic); + if( iCode ) WRITE_UTF8(zOut, iCode); + }else{ + break; + } + }else if( a[*zCsr]==0 ){ + /* An ascii-range separator character. End of token. */ + break; + }else{ + ascii_tokenchar: + if( *zCsr>='A' && *zCsr<='Z' ){ + *zOut++ = *zCsr + 32; + }else{ + *zOut++ = *zCsr; + } + zCsr++; + } + ie = zCsr - (unsigned char*)pText; + } + + /* Invoke the token callback */ + rc = xToken(pCtx, 0, aFold, zOut-aFold, is, ie); + } + + tokenize_done: + if( rc==SQLITE_DONE ) rc = SQLITE_OK; + return rc; +} + +/************************************************************************** +** Start of porter stemmer implementation. +*/ + +/* Any tokens larger than this (in bytes) are passed through without +** stemming. */ +#define FTS5_PORTER_MAX_TOKEN 64 + +typedef struct PorterTokenizer PorterTokenizer; +struct PorterTokenizer { + fts5_tokenizer tokenizer; /* Parent tokenizer module */ + Fts5Tokenizer *pTokenizer; /* Parent tokenizer instance */ + char aBuf[FTS5_PORTER_MAX_TOKEN + 64]; +}; + +/* +** Delete a "porter" tokenizer. 
+*/ +static void fts5PorterDelete(Fts5Tokenizer *pTok){ + if( pTok ){ + PorterTokenizer *p = (PorterTokenizer*)pTok; + if( p->pTokenizer ){ + p->tokenizer.xDelete(p->pTokenizer); + } + sqlite3_free(p); + } +} + +/* +** Create a "porter" tokenizer. +*/ +static int fts5PorterCreate( + void *pCtx, + const char **azArg, int nArg, + Fts5Tokenizer **ppOut +){ + fts5_api *pApi = (fts5_api*)pCtx; + int rc = SQLITE_OK; + PorterTokenizer *pRet; + void *pUserdata = 0; + const char *zBase = "unicode61"; + + if( nArg>0 ){ + zBase = azArg[0]; + } + + pRet = (PorterTokenizer*)sqlite3_malloc(sizeof(PorterTokenizer)); + if( pRet ){ + memset(pRet, 0, sizeof(PorterTokenizer)); + rc = pApi->xFindTokenizer(pApi, zBase, &pUserdata, &pRet->tokenizer); + }else{ + rc = SQLITE_NOMEM; + } + if( rc==SQLITE_OK ){ + int nArg2 = (nArg>0 ? nArg-1 : 0); + const char **azArg2 = (nArg2 ? &azArg[1] : 0); + rc = pRet->tokenizer.xCreate(pUserdata, azArg2, nArg2, &pRet->pTokenizer); + } + + if( rc!=SQLITE_OK ){ + fts5PorterDelete((Fts5Tokenizer*)pRet); + pRet = 0; + } + *ppOut = (Fts5Tokenizer*)pRet; + return rc; +} + +typedef struct PorterContext PorterContext; +struct PorterContext { + void *pCtx; + int (*xToken)(void*, int, const char*, int, int, int); + char *aBuf; +}; + +typedef struct PorterRule PorterRule; +struct PorterRule { + const char *zSuffix; + int nSuffix; + int (*xCond)(char *zStem, int nStem); + const char *zOutput; + int nOutput; +}; + +#if 0 +static int fts5PorterApply(char *aBuf, int *pnBuf, PorterRule *aRule){ + int ret = -1; + int nBuf = *pnBuf; + PorterRule *p; + + for(p=aRule; p->zSuffix; p++){ + assert( strlen(p->zSuffix)==p->nSuffix ); + assert( strlen(p->zOutput)==p->nOutput ); + if( nBuf<p->nSuffix ) continue; + if( 0==memcmp(&aBuf[nBuf - p->nSuffix], p->zSuffix, p->nSuffix) ) break; + } + + if( p->zSuffix ){ + int nStem = nBuf - p->nSuffix; + if( p->xCond==0 || p->xCond(aBuf, nStem) ){ + memcpy(&aBuf[nStem], p->zOutput, p->nOutput); + *pnBuf = nStem + p->nOutput; + ret = p - aRule; + } + } + + return ret; +} +#endif + +static int fts5PorterIsVowel(char c, int bYIsVowel){ + return ( + c=='a' || c=='e' || c=='i' || c=='o' || c=='u' || (bYIsVowel && c=='y') + ); +} + +static int fts5PorterGobbleVC(char *zStem, int nStem, int bPrevCons){ + int i; + int bCons = bPrevCons; + + /* Scan for a vowel */ + for(i=0; i<nStem; i++){ + if( 0==(bCons = !fts5PorterIsVowel(zStem[i], bCons)) ) break; + } + + /* Scan for a consonent */ + for(i++; i<nStem; i++){ + if( (bCons = !fts5PorterIsVowel(zStem[i], bCons)) ) return i+1; + } + return 0; +} + +/* porter rule condition: (m > 0) */ +static int fts5Porter_MGt0(char *zStem, int nStem){ + return !!fts5PorterGobbleVC(zStem, nStem, 0); +} + +/* porter rule condition: (m > 1) */ +static int fts5Porter_MGt1(char *zStem, int nStem){ + int n; + n = fts5PorterGobbleVC(zStem, nStem, 0); + if( n && fts5PorterGobbleVC(&zStem[n], nStem-n, 1) ){ + return 1; + } + return 0; +} + +/* porter rule condition: (m = 1) */ +static int fts5Porter_MEq1(char *zStem, int nStem){ + int n; + n = fts5PorterGobbleVC(zStem, nStem, 0); + if( n && 0==fts5PorterGobbleVC(&zStem[n], nStem-n, 1) ){ + return 1; + } + return 0; +} + +/* porter rule condition: (*o) */ +static int fts5Porter_Ostar(char *zStem, int nStem){ + if( zStem[nStem-1]=='w' || zStem[nStem-1]=='x' || zStem[nStem-1]=='y' ){ + return 0; + }else{ + int i; + int mask = 0; + int bCons = 0; + for(i=0; i<nStem; i++){ + bCons = !fts5PorterIsVowel(zStem[i], bCons); + assert( bCons==0 || bCons==1 ); + mask = (mask << 1) + bCons; + } + return 
((mask & 0x0007)==0x0005); + } +} + +/* porter rule condition: (m > 1 and (*S or *T)) */ +static int fts5Porter_MGt1_and_S_or_T(char *zStem, int nStem){ + assert( nStem>0 ); + return (zStem[nStem-1]=='s' || zStem[nStem-1]=='t') + && fts5Porter_MGt1(zStem, nStem); +} + +/* porter rule condition: (*v*) */ +static int fts5Porter_Vowel(char *zStem, int nStem){ + int i; + for(i=0; i<nStem; i++){ + if( fts5PorterIsVowel(zStem[i], i>0) ){ + return 1; + } + } + return 0; +} + + +/************************************************************************** +*************************************************************************** +** GENERATED CODE STARTS HERE (mkportersteps.tcl) +*/ + +static int fts5PorterStep4(char *aBuf, int *pnBuf){ + int ret = 0; + int nBuf = *pnBuf; + switch( aBuf[nBuf-2] ){ + + case 'a': + if( nBuf>2 && 0==memcmp("al", &aBuf[nBuf-2], 2) ){ + if( fts5Porter_MGt1(aBuf, nBuf-2) ){ + *pnBuf = nBuf - 2; + } + } + break; + + case 'c': + if( nBuf>4 && 0==memcmp("ance", &aBuf[nBuf-4], 4) ){ + if( fts5Porter_MGt1(aBuf, nBuf-4) ){ + *pnBuf = nBuf - 4; + } + }else if( nBuf>4 && 0==memcmp("ence", &aBuf[nBuf-4], 4) ){ + if( fts5Porter_MGt1(aBuf, nBuf-4) ){ + *pnBuf = nBuf - 4; + } + } + break; + + case 'e': + if( nBuf>2 && 0==memcmp("er", &aBuf[nBuf-2], 2) ){ + if( fts5Porter_MGt1(aBuf, nBuf-2) ){ + *pnBuf = nBuf - 2; + } + } + break; + + case 'i': + if( nBuf>2 && 0==memcmp("ic", &aBuf[nBuf-2], 2) ){ + if( fts5Porter_MGt1(aBuf, nBuf-2) ){ + *pnBuf = nBuf - 2; + } + } + break; + + case 'l': + if( nBuf>4 && 0==memcmp("able", &aBuf[nBuf-4], 4) ){ + if( fts5Porter_MGt1(aBuf, nBuf-4) ){ + *pnBuf = nBuf - 4; + } + }else if( nBuf>4 && 0==memcmp("ible", &aBuf[nBuf-4], 4) ){ + if( fts5Porter_MGt1(aBuf, nBuf-4) ){ + *pnBuf = nBuf - 4; + } + } + break; + + case 'n': + if( nBuf>3 && 0==memcmp("ant", &aBuf[nBuf-3], 3) ){ + if( fts5Porter_MGt1(aBuf, nBuf-3) ){ + *pnBuf = nBuf - 3; + } + }else if( nBuf>5 && 0==memcmp("ement", &aBuf[nBuf-5], 5) ){ + if( fts5Porter_MGt1(aBuf, nBuf-5) ){ + *pnBuf = nBuf - 5; + } + }else if( nBuf>4 && 0==memcmp("ment", &aBuf[nBuf-4], 4) ){ + if( fts5Porter_MGt1(aBuf, nBuf-4) ){ + *pnBuf = nBuf - 4; + } + }else if( nBuf>3 && 0==memcmp("ent", &aBuf[nBuf-3], 3) ){ + if( fts5Porter_MGt1(aBuf, nBuf-3) ){ + *pnBuf = nBuf - 3; + } + } + break; + + case 'o': + if( nBuf>3 && 0==memcmp("ion", &aBuf[nBuf-3], 3) ){ + if( fts5Porter_MGt1_and_S_or_T(aBuf, nBuf-3) ){ + *pnBuf = nBuf - 3; + } + }else if( nBuf>2 && 0==memcmp("ou", &aBuf[nBuf-2], 2) ){ + if( fts5Porter_MGt1(aBuf, nBuf-2) ){ + *pnBuf = nBuf - 2; + } + } + break; + + case 's': + if( nBuf>3 && 0==memcmp("ism", &aBuf[nBuf-3], 3) ){ + if( fts5Porter_MGt1(aBuf, nBuf-3) ){ + *pnBuf = nBuf - 3; + } + } + break; + + case 't': + if( nBuf>3 && 0==memcmp("ate", &aBuf[nBuf-3], 3) ){ + if( fts5Porter_MGt1(aBuf, nBuf-3) ){ + *pnBuf = nBuf - 3; + } + }else if( nBuf>3 && 0==memcmp("iti", &aBuf[nBuf-3], 3) ){ + if( fts5Porter_MGt1(aBuf, nBuf-3) ){ + *pnBuf = nBuf - 3; + } + } + break; + + case 'u': + if( nBuf>3 && 0==memcmp("ous", &aBuf[nBuf-3], 3) ){ + if( fts5Porter_MGt1(aBuf, nBuf-3) ){ + *pnBuf = nBuf - 3; + } + } + break; + + case 'v': + if( nBuf>3 && 0==memcmp("ive", &aBuf[nBuf-3], 3) ){ + if( fts5Porter_MGt1(aBuf, nBuf-3) ){ + *pnBuf = nBuf - 3; + } + } + break; + + case 'z': + if( nBuf>3 && 0==memcmp("ize", &aBuf[nBuf-3], 3) ){ + if( fts5Porter_MGt1(aBuf, nBuf-3) ){ + *pnBuf = nBuf - 3; + } + } + break; + + } + return ret; +} + + +static int fts5PorterStep1B2(char *aBuf, int *pnBuf){ + int ret = 0; + int nBuf = *pnBuf; + switch( 
aBuf[nBuf-2] ){ + + case 'a': + if( nBuf>2 && 0==memcmp("at", &aBuf[nBuf-2], 2) ){ + memcpy(&aBuf[nBuf-2], "ate", 3); + *pnBuf = nBuf - 2 + 3; + ret = 1; + } + break; + + case 'b': + if( nBuf>2 && 0==memcmp("bl", &aBuf[nBuf-2], 2) ){ + memcpy(&aBuf[nBuf-2], "ble", 3); + *pnBuf = nBuf - 2 + 3; + ret = 1; + } + break; + + case 'i': + if( nBuf>2 && 0==memcmp("iz", &aBuf[nBuf-2], 2) ){ + memcpy(&aBuf[nBuf-2], "ize", 3); + *pnBuf = nBuf - 2 + 3; + ret = 1; + } + break; + + } + return ret; +} + + +static int fts5PorterStep2(char *aBuf, int *pnBuf){ + int ret = 0; + int nBuf = *pnBuf; + switch( aBuf[nBuf-2] ){ + + case 'a': + if( nBuf>7 && 0==memcmp("ational", &aBuf[nBuf-7], 7) ){ + if( fts5Porter_MGt0(aBuf, nBuf-7) ){ + memcpy(&aBuf[nBuf-7], "ate", 3); + *pnBuf = nBuf - 7 + 3; + } + }else if( nBuf>6 && 0==memcmp("tional", &aBuf[nBuf-6], 6) ){ + if( fts5Porter_MGt0(aBuf, nBuf-6) ){ + memcpy(&aBuf[nBuf-6], "tion", 4); + *pnBuf = nBuf - 6 + 4; + } + } + break; + + case 'c': + if( nBuf>4 && 0==memcmp("enci", &aBuf[nBuf-4], 4) ){ + if( fts5Porter_MGt0(aBuf, nBuf-4) ){ + memcpy(&aBuf[nBuf-4], "ence", 4); + *pnBuf = nBuf - 4 + 4; + } + }else if( nBuf>4 && 0==memcmp("anci", &aBuf[nBuf-4], 4) ){ + if( fts5Porter_MGt0(aBuf, nBuf-4) ){ + memcpy(&aBuf[nBuf-4], "ance", 4); + *pnBuf = nBuf - 4 + 4; + } + } + break; + + case 'e': + if( nBuf>4 && 0==memcmp("izer", &aBuf[nBuf-4], 4) ){ + if( fts5Porter_MGt0(aBuf, nBuf-4) ){ + memcpy(&aBuf[nBuf-4], "ize", 3); + *pnBuf = nBuf - 4 + 3; + } + } + break; + + case 'g': + if( nBuf>4 && 0==memcmp("logi", &aBuf[nBuf-4], 4) ){ + if( fts5Porter_MGt0(aBuf, nBuf-4) ){ + memcpy(&aBuf[nBuf-4], "log", 3); + *pnBuf = nBuf - 4 + 3; + } + } + break; + + case 'l': + if( nBuf>3 && 0==memcmp("bli", &aBuf[nBuf-3], 3) ){ + if( fts5Porter_MGt0(aBuf, nBuf-3) ){ + memcpy(&aBuf[nBuf-3], "ble", 3); + *pnBuf = nBuf - 3 + 3; + } + }else if( nBuf>4 && 0==memcmp("alli", &aBuf[nBuf-4], 4) ){ + if( fts5Porter_MGt0(aBuf, nBuf-4) ){ + memcpy(&aBuf[nBuf-4], "al", 2); + *pnBuf = nBuf - 4 + 2; + } + }else if( nBuf>5 && 0==memcmp("entli", &aBuf[nBuf-5], 5) ){ + if( fts5Porter_MGt0(aBuf, nBuf-5) ){ + memcpy(&aBuf[nBuf-5], "ent", 3); + *pnBuf = nBuf - 5 + 3; + } + }else if( nBuf>3 && 0==memcmp("eli", &aBuf[nBuf-3], 3) ){ + if( fts5Porter_MGt0(aBuf, nBuf-3) ){ + memcpy(&aBuf[nBuf-3], "e", 1); + *pnBuf = nBuf - 3 + 1; + } + }else if( nBuf>5 && 0==memcmp("ousli", &aBuf[nBuf-5], 5) ){ + if( fts5Porter_MGt0(aBuf, nBuf-5) ){ + memcpy(&aBuf[nBuf-5], "ous", 3); + *pnBuf = nBuf - 5 + 3; + } + } + break; + + case 'o': + if( nBuf>7 && 0==memcmp("ization", &aBuf[nBuf-7], 7) ){ + if( fts5Porter_MGt0(aBuf, nBuf-7) ){ + memcpy(&aBuf[nBuf-7], "ize", 3); + *pnBuf = nBuf - 7 + 3; + } + }else if( nBuf>5 && 0==memcmp("ation", &aBuf[nBuf-5], 5) ){ + if( fts5Porter_MGt0(aBuf, nBuf-5) ){ + memcpy(&aBuf[nBuf-5], "ate", 3); + *pnBuf = nBuf - 5 + 3; + } + }else if( nBuf>4 && 0==memcmp("ator", &aBuf[nBuf-4], 4) ){ + if( fts5Porter_MGt0(aBuf, nBuf-4) ){ + memcpy(&aBuf[nBuf-4], "ate", 3); + *pnBuf = nBuf - 4 + 3; + } + } + break; + + case 's': + if( nBuf>5 && 0==memcmp("alism", &aBuf[nBuf-5], 5) ){ + if( fts5Porter_MGt0(aBuf, nBuf-5) ){ + memcpy(&aBuf[nBuf-5], "al", 2); + *pnBuf = nBuf - 5 + 2; + } + }else if( nBuf>7 && 0==memcmp("iveness", &aBuf[nBuf-7], 7) ){ + if( fts5Porter_MGt0(aBuf, nBuf-7) ){ + memcpy(&aBuf[nBuf-7], "ive", 3); + *pnBuf = nBuf - 7 + 3; + } + }else if( nBuf>7 && 0==memcmp("fulness", &aBuf[nBuf-7], 7) ){ + if( fts5Porter_MGt0(aBuf, nBuf-7) ){ + memcpy(&aBuf[nBuf-7], "ful", 3); + *pnBuf = nBuf - 7 + 3; + } + 
}else if( nBuf>7 && 0==memcmp("ousness", &aBuf[nBuf-7], 7) ){ + if( fts5Porter_MGt0(aBuf, nBuf-7) ){ + memcpy(&aBuf[nBuf-7], "ous", 3); + *pnBuf = nBuf - 7 + 3; + } + } + break; + + case 't': + if( nBuf>5 && 0==memcmp("aliti", &aBuf[nBuf-5], 5) ){ + if( fts5Porter_MGt0(aBuf, nBuf-5) ){ + memcpy(&aBuf[nBuf-5], "al", 2); + *pnBuf = nBuf - 5 + 2; + } + }else if( nBuf>5 && 0==memcmp("iviti", &aBuf[nBuf-5], 5) ){ + if( fts5Porter_MGt0(aBuf, nBuf-5) ){ + memcpy(&aBuf[nBuf-5], "ive", 3); + *pnBuf = nBuf - 5 + 3; + } + }else if( nBuf>6 && 0==memcmp("biliti", &aBuf[nBuf-6], 6) ){ + if( fts5Porter_MGt0(aBuf, nBuf-6) ){ + memcpy(&aBuf[nBuf-6], "ble", 3); + *pnBuf = nBuf - 6 + 3; + } + } + break; + + } + return ret; +} + + +static int fts5PorterStep3(char *aBuf, int *pnBuf){ + int ret = 0; + int nBuf = *pnBuf; + switch( aBuf[nBuf-2] ){ + + case 'a': + if( nBuf>4 && 0==memcmp("ical", &aBuf[nBuf-4], 4) ){ + if( fts5Porter_MGt0(aBuf, nBuf-4) ){ + memcpy(&aBuf[nBuf-4], "ic", 2); + *pnBuf = nBuf - 4 + 2; + } + } + break; + + case 's': + if( nBuf>4 && 0==memcmp("ness", &aBuf[nBuf-4], 4) ){ + if( fts5Porter_MGt0(aBuf, nBuf-4) ){ + *pnBuf = nBuf - 4; + } + } + break; + + case 't': + if( nBuf>5 && 0==memcmp("icate", &aBuf[nBuf-5], 5) ){ + if( fts5Porter_MGt0(aBuf, nBuf-5) ){ + memcpy(&aBuf[nBuf-5], "ic", 2); + *pnBuf = nBuf - 5 + 2; + } + }else if( nBuf>5 && 0==memcmp("iciti", &aBuf[nBuf-5], 5) ){ + if( fts5Porter_MGt0(aBuf, nBuf-5) ){ + memcpy(&aBuf[nBuf-5], "ic", 2); + *pnBuf = nBuf - 5 + 2; + } + } + break; + + case 'u': + if( nBuf>3 && 0==memcmp("ful", &aBuf[nBuf-3], 3) ){ + if( fts5Porter_MGt0(aBuf, nBuf-3) ){ + *pnBuf = nBuf - 3; + } + } + break; + + case 'v': + if( nBuf>5 && 0==memcmp("ative", &aBuf[nBuf-5], 5) ){ + if( fts5Porter_MGt0(aBuf, nBuf-5) ){ + *pnBuf = nBuf - 5; + } + } + break; + + case 'z': + if( nBuf>5 && 0==memcmp("alize", &aBuf[nBuf-5], 5) ){ + if( fts5Porter_MGt0(aBuf, nBuf-5) ){ + memcpy(&aBuf[nBuf-5], "al", 2); + *pnBuf = nBuf - 5 + 2; + } + } + break; + + } + return ret; +} + + +static int fts5PorterStep1B(char *aBuf, int *pnBuf){ + int ret = 0; + int nBuf = *pnBuf; + switch( aBuf[nBuf-2] ){ + + case 'e': + if( nBuf>3 && 0==memcmp("eed", &aBuf[nBuf-3], 3) ){ + if( fts5Porter_MGt0(aBuf, nBuf-3) ){ + memcpy(&aBuf[nBuf-3], "ee", 2); + *pnBuf = nBuf - 3 + 2; + } + }else if( nBuf>2 && 0==memcmp("ed", &aBuf[nBuf-2], 2) ){ + if( fts5Porter_Vowel(aBuf, nBuf-2) ){ + *pnBuf = nBuf - 2; + ret = 1; + } + } + break; + + case 'n': + if( nBuf>3 && 0==memcmp("ing", &aBuf[nBuf-3], 3) ){ + if( fts5Porter_Vowel(aBuf, nBuf-3) ){ + *pnBuf = nBuf - 3; + ret = 1; + } + } + break; + + } + return ret; +} + +/* +** GENERATED CODE ENDS HERE (mkportersteps.tcl) +*************************************************************************** +**************************************************************************/ + +static void fts5PorterStep1A(char *aBuf, int *pnBuf){ + int nBuf = *pnBuf; + if( aBuf[nBuf-1]=='s' ){ + if( aBuf[nBuf-2]=='e' ){ + if( (nBuf>4 && aBuf[nBuf-4]=='s' && aBuf[nBuf-3]=='s') + || (nBuf>3 && aBuf[nBuf-3]=='i' ) + ){ + *pnBuf = nBuf-2; + }else{ + *pnBuf = nBuf-1; + } + } + else if( aBuf[nBuf-2]!='s' ){ + *pnBuf = nBuf-1; + } + } +} + +static int fts5PorterCb( + void *pCtx, + int tflags, + const char *pToken, + int nToken, + int iStart, + int iEnd +){ + PorterContext *p = (PorterContext*)pCtx; + + char *aBuf; + int nBuf; + + if( nToken>FTS5_PORTER_MAX_TOKEN || nToken<3 ) goto pass_through; + aBuf = p->aBuf; + nBuf = nToken; + memcpy(aBuf, pToken, nBuf); + + /* Step 1. 
*/ + fts5PorterStep1A(aBuf, &nBuf); + if( fts5PorterStep1B(aBuf, &nBuf) ){ + if( fts5PorterStep1B2(aBuf, &nBuf)==0 ){ + char c = aBuf[nBuf-1]; + if( fts5PorterIsVowel(c, 0)==0 + && c!='l' && c!='s' && c!='z' && c==aBuf[nBuf-2] + ){ + nBuf--; + }else if( fts5Porter_MEq1(aBuf, nBuf) && fts5Porter_Ostar(aBuf, nBuf) ){ + aBuf[nBuf++] = 'e'; + } + } + } + + /* Step 1C. */ + if( aBuf[nBuf-1]=='y' && fts5Porter_Vowel(aBuf, nBuf-1) ){ + aBuf[nBuf-1] = 'i'; + } + + /* Steps 2 through 4. */ + fts5PorterStep2(aBuf, &nBuf); + fts5PorterStep3(aBuf, &nBuf); + fts5PorterStep4(aBuf, &nBuf); + + /* Step 5a. */ + assert( nBuf>0 ); + if( aBuf[nBuf-1]=='e' ){ + if( fts5Porter_MGt1(aBuf, nBuf-1) + || (fts5Porter_MEq1(aBuf, nBuf-1) && !fts5Porter_Ostar(aBuf, nBuf-1)) + ){ + nBuf--; + } + } + + /* Step 5b. */ + if( nBuf>1 && aBuf[nBuf-1]=='l' + && aBuf[nBuf-2]=='l' && fts5Porter_MGt1(aBuf, nBuf-1) + ){ + nBuf--; + } + + return p->xToken(p->pCtx, tflags, aBuf, nBuf, iStart, iEnd); + + pass_through: + return p->xToken(p->pCtx, tflags, pToken, nToken, iStart, iEnd); +} + +/* +** Tokenize using the porter tokenizer. +*/ +static int fts5PorterTokenize( + Fts5Tokenizer *pTokenizer, + void *pCtx, + int flags, + const char *pText, int nText, + int (*xToken)(void*, int, const char*, int nToken, int iStart, int iEnd) +){ + PorterTokenizer *p = (PorterTokenizer*)pTokenizer; + PorterContext sCtx; + sCtx.xToken = xToken; + sCtx.pCtx = pCtx; + sCtx.aBuf = p->aBuf; + return p->tokenizer.xTokenize( + p->pTokenizer, (void*)&sCtx, flags, pText, nText, fts5PorterCb + ); +} + +/* +** Register all built-in tokenizers with FTS5. +*/ +static int sqlite3Fts5TokenizerInit(fts5_api *pApi){ + struct BuiltinTokenizer { + const char *zName; + fts5_tokenizer x; + } aBuiltin[] = { + { "unicode61", {fts5UnicodeCreate, fts5UnicodeDelete, fts5UnicodeTokenize}}, + { "ascii", {fts5AsciiCreate, fts5AsciiDelete, fts5AsciiTokenize }}, + { "porter", {fts5PorterCreate, fts5PorterDelete, fts5PorterTokenize }}, + }; + + int rc = SQLITE_OK; /* Return code */ + int i; /* To iterate through builtin functions */ + + for(i=0; rc==SQLITE_OK && i<ArraySize(aBuiltin); i++){ + rc = pApi->xCreateTokenizer(pApi, + aBuiltin[i].zName, + (void*)pApi, + &aBuiltin[i].x, + 0 + ); + } + + return rc; +} + + + +/* +** 2012 May 25 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +*/ + +/* +** DO NOT EDIT THIS MACHINE GENERATED FILE. +*/ + + +/* #include <assert.h> */ + +/* +** Return true if the argument corresponds to a unicode codepoint +** classified as either a letter or a number. Otherwise false. +** +** The results are undefined if the value passed to this function +** is less than zero. +*/ +static int sqlite3Fts5UnicodeIsalnum(int c){ + /* Each unsigned integer in the following array corresponds to a contiguous + ** range of unicode codepoints that are not either letters or numbers (i.e. + ** codepoints for which this function should return 0). + ** + ** The most significant 22 bits in each 32-bit value contain the first + ** codepoint in the range. The least significant 10 bits are used to store + ** the size of the range (always at least 1). In other words, the value + ** ((C<<22) + N) represents a range of N codepoints starting with codepoint + ** C. 
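+  ** For example, the first entry in the table below, 0x00000030, decodes
+  ** to C=0 and N=48: codepoints 0 through 47 (the ASCII control
+  ** characters, space and punctuation up to '/') are neither letters nor
+  ** numbers, while the digits beginning at codepoint 48 ('0') fall in no
+  ** range and so are reported as alphanumeric.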
It is not possible to represent a range larger than 1023 codepoints + ** using this format. + */ + static const unsigned int aEntry[] = { + 0x00000030, 0x0000E807, 0x00016C06, 0x0001EC2F, 0x0002AC07, + 0x0002D001, 0x0002D803, 0x0002EC01, 0x0002FC01, 0x00035C01, + 0x0003DC01, 0x000B0804, 0x000B480E, 0x000B9407, 0x000BB401, + 0x000BBC81, 0x000DD401, 0x000DF801, 0x000E1002, 0x000E1C01, + 0x000FD801, 0x00120808, 0x00156806, 0x00162402, 0x00163C01, + 0x00164437, 0x0017CC02, 0x00180005, 0x00181816, 0x00187802, + 0x00192C15, 0x0019A804, 0x0019C001, 0x001B5001, 0x001B580F, + 0x001B9C07, 0x001BF402, 0x001C000E, 0x001C3C01, 0x001C4401, + 0x001CC01B, 0x001E980B, 0x001FAC09, 0x001FD804, 0x00205804, + 0x00206C09, 0x00209403, 0x0020A405, 0x0020C00F, 0x00216403, + 0x00217801, 0x0023901B, 0x00240004, 0x0024E803, 0x0024F812, + 0x00254407, 0x00258804, 0x0025C001, 0x00260403, 0x0026F001, + 0x0026F807, 0x00271C02, 0x00272C03, 0x00275C01, 0x00278802, + 0x0027C802, 0x0027E802, 0x00280403, 0x0028F001, 0x0028F805, + 0x00291C02, 0x00292C03, 0x00294401, 0x0029C002, 0x0029D401, + 0x002A0403, 0x002AF001, 0x002AF808, 0x002B1C03, 0x002B2C03, + 0x002B8802, 0x002BC002, 0x002C0403, 0x002CF001, 0x002CF807, + 0x002D1C02, 0x002D2C03, 0x002D5802, 0x002D8802, 0x002DC001, + 0x002E0801, 0x002EF805, 0x002F1803, 0x002F2804, 0x002F5C01, + 0x002FCC08, 0x00300403, 0x0030F807, 0x00311803, 0x00312804, + 0x00315402, 0x00318802, 0x0031FC01, 0x00320802, 0x0032F001, + 0x0032F807, 0x00331803, 0x00332804, 0x00335402, 0x00338802, + 0x00340802, 0x0034F807, 0x00351803, 0x00352804, 0x00355C01, + 0x00358802, 0x0035E401, 0x00360802, 0x00372801, 0x00373C06, + 0x00375801, 0x00376008, 0x0037C803, 0x0038C401, 0x0038D007, + 0x0038FC01, 0x00391C09, 0x00396802, 0x003AC401, 0x003AD006, + 0x003AEC02, 0x003B2006, 0x003C041F, 0x003CD00C, 0x003DC417, + 0x003E340B, 0x003E6424, 0x003EF80F, 0x003F380D, 0x0040AC14, + 0x00412806, 0x00415804, 0x00417803, 0x00418803, 0x00419C07, + 0x0041C404, 0x0042080C, 0x00423C01, 0x00426806, 0x0043EC01, + 0x004D740C, 0x004E400A, 0x00500001, 0x0059B402, 0x005A0001, + 0x005A6C02, 0x005BAC03, 0x005C4803, 0x005CC805, 0x005D4802, + 0x005DC802, 0x005ED023, 0x005F6004, 0x005F7401, 0x0060000F, + 0x0062A401, 0x0064800C, 0x0064C00C, 0x00650001, 0x00651002, + 0x0066C011, 0x00672002, 0x00677822, 0x00685C05, 0x00687802, + 0x0069540A, 0x0069801D, 0x0069FC01, 0x006A8007, 0x006AA006, + 0x006C0005, 0x006CD011, 0x006D6823, 0x006E0003, 0x006E840D, + 0x006F980E, 0x006FF004, 0x00709014, 0x0070EC05, 0x0071F802, + 0x00730008, 0x00734019, 0x0073B401, 0x0073C803, 0x00770027, + 0x0077F004, 0x007EF401, 0x007EFC03, 0x007F3403, 0x007F7403, + 0x007FB403, 0x007FF402, 0x00800065, 0x0081A806, 0x0081E805, + 0x00822805, 0x0082801A, 0x00834021, 0x00840002, 0x00840C04, + 0x00842002, 0x00845001, 0x00845803, 0x00847806, 0x00849401, + 0x00849C01, 0x0084A401, 0x0084B801, 0x0084E802, 0x00850005, + 0x00852804, 0x00853C01, 0x00864264, 0x00900027, 0x0091000B, + 0x0092704E, 0x00940200, 0x009C0475, 0x009E53B9, 0x00AD400A, + 0x00B39406, 0x00B3BC03, 0x00B3E404, 0x00B3F802, 0x00B5C001, + 0x00B5FC01, 0x00B7804F, 0x00B8C00C, 0x00BA001A, 0x00BA6C59, + 0x00BC00D6, 0x00BFC00C, 0x00C00005, 0x00C02019, 0x00C0A807, + 0x00C0D802, 0x00C0F403, 0x00C26404, 0x00C28001, 0x00C3EC01, + 0x00C64002, 0x00C6580A, 0x00C70024, 0x00C8001F, 0x00C8A81E, + 0x00C94001, 0x00C98020, 0x00CA2827, 0x00CB003F, 0x00CC0100, + 0x01370040, 0x02924037, 0x0293F802, 0x02983403, 0x0299BC10, + 0x029A7C01, 0x029BC008, 0x029C0017, 0x029C8002, 0x029E2402, + 0x02A00801, 0x02A01801, 0x02A02C01, 0x02A08C09, 0x02A0D804, + 
0x02A1D004, 0x02A20002, 0x02A2D011, 0x02A33802, 0x02A38012, + 0x02A3E003, 0x02A4980A, 0x02A51C0D, 0x02A57C01, 0x02A60004, + 0x02A6CC1B, 0x02A77802, 0x02A8A40E, 0x02A90C01, 0x02A93002, + 0x02A97004, 0x02A9DC03, 0x02A9EC01, 0x02AAC001, 0x02AAC803, + 0x02AADC02, 0x02AAF802, 0x02AB0401, 0x02AB7802, 0x02ABAC07, + 0x02ABD402, 0x02AF8C0B, 0x03600001, 0x036DFC02, 0x036FFC02, + 0x037FFC01, 0x03EC7801, 0x03ECA401, 0x03EEC810, 0x03F4F802, + 0x03F7F002, 0x03F8001A, 0x03F88007, 0x03F8C023, 0x03F95013, + 0x03F9A004, 0x03FBFC01, 0x03FC040F, 0x03FC6807, 0x03FCEC06, + 0x03FD6C0B, 0x03FF8007, 0x03FFA007, 0x03FFE405, 0x04040003, + 0x0404DC09, 0x0405E411, 0x0406400C, 0x0407402E, 0x040E7C01, + 0x040F4001, 0x04215C01, 0x04247C01, 0x0424FC01, 0x04280403, + 0x04281402, 0x04283004, 0x0428E003, 0x0428FC01, 0x04294009, + 0x0429FC01, 0x042CE407, 0x04400003, 0x0440E016, 0x04420003, + 0x0442C012, 0x04440003, 0x04449C0E, 0x04450004, 0x04460003, + 0x0446CC0E, 0x04471404, 0x045AAC0D, 0x0491C004, 0x05BD442E, + 0x05BE3C04, 0x074000F6, 0x07440027, 0x0744A4B5, 0x07480046, + 0x074C0057, 0x075B0401, 0x075B6C01, 0x075BEC01, 0x075C5401, + 0x075CD401, 0x075D3C01, 0x075DBC01, 0x075E2401, 0x075EA401, + 0x075F0C01, 0x07BBC002, 0x07C0002C, 0x07C0C064, 0x07C2800F, + 0x07C2C40E, 0x07C3040F, 0x07C3440F, 0x07C4401F, 0x07C4C03C, + 0x07C5C02B, 0x07C7981D, 0x07C8402B, 0x07C90009, 0x07C94002, + 0x07CC0021, 0x07CCC006, 0x07CCDC46, 0x07CE0014, 0x07CE8025, + 0x07CF1805, 0x07CF8011, 0x07D0003F, 0x07D10001, 0x07D108B6, + 0x07D3E404, 0x07D4003E, 0x07D50004, 0x07D54018, 0x07D7EC46, + 0x07D9140B, 0x07DA0046, 0x07DC0074, 0x38000401, 0x38008060, + 0x380400F0, + }; + static const unsigned int aAscii[4] = { + 0xFFFFFFFF, 0xFC00FFFF, 0xF8000001, 0xF8000001, + }; + + if( (unsigned int)c<128 ){ + return ( (aAscii[c >> 5] & (1 << (c & 0x001F)))==0 ); + }else if( (unsigned int)c<(1<<22) ){ + unsigned int key = (((unsigned int)c)<<10) | 0x000003FF; + int iRes = 0; + int iHi = sizeof(aEntry)/sizeof(aEntry[0]) - 1; + int iLo = 0; + while( iHi>=iLo ){ + int iTest = (iHi + iLo) / 2; + if( key >= aEntry[iTest] ){ + iRes = iTest; + iLo = iTest+1; + }else{ + iHi = iTest-1; + } + } + assert( aEntry[0]<key ); + assert( key>=aEntry[iRes] ); + return (((unsigned int)c) >= ((aEntry[iRes]>>10) + (aEntry[iRes]&0x3FF))); + } + return 1; +} + + +/* +** If the argument is a codepoint corresponding to a lowercase letter +** in the ASCII range with a diacritic added, return the codepoint +** of the ASCII letter only. For example, if passed 235 - "LATIN +** SMALL LETTER E WITH DIAERESIS" - return 65 ("LATIN SMALL LETTER +** E"). The resuls of passing a codepoint that corresponds to an +** uppercase letter are undefined. 
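+** For example, passing 232 ("LATIN SMALL LETTER E WITH GRAVE") returns
+** 101 ('e').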
+*/ +static int fts5_remove_diacritic(int c){ + unsigned short aDia[] = { + 0, 1797, 1848, 1859, 1891, 1928, 1940, 1995, + 2024, 2040, 2060, 2110, 2168, 2206, 2264, 2286, + 2344, 2383, 2472, 2488, 2516, 2596, 2668, 2732, + 2782, 2842, 2894, 2954, 2984, 3000, 3028, 3336, + 3456, 3696, 3712, 3728, 3744, 3896, 3912, 3928, + 3968, 4008, 4040, 4106, 4138, 4170, 4202, 4234, + 4266, 4296, 4312, 4344, 4408, 4424, 4472, 4504, + 6148, 6198, 6264, 6280, 6360, 6429, 6505, 6529, + 61448, 61468, 61534, 61592, 61642, 61688, 61704, 61726, + 61784, 61800, 61836, 61880, 61914, 61948, 61998, 62122, + 62154, 62200, 62218, 62302, 62364, 62442, 62478, 62536, + 62554, 62584, 62604, 62640, 62648, 62656, 62664, 62730, + 62924, 63050, 63082, 63274, 63390, + }; + char aChar[] = { + '\0', 'a', 'c', 'e', 'i', 'n', 'o', 'u', 'y', 'y', 'a', 'c', + 'd', 'e', 'e', 'g', 'h', 'i', 'j', 'k', 'l', 'n', 'o', 'r', + 's', 't', 'u', 'u', 'w', 'y', 'z', 'o', 'u', 'a', 'i', 'o', + 'u', 'g', 'k', 'o', 'j', 'g', 'n', 'a', 'e', 'i', 'o', 'r', + 'u', 's', 't', 'h', 'a', 'e', 'o', 'y', '\0', '\0', '\0', '\0', + '\0', '\0', '\0', '\0', 'a', 'b', 'd', 'd', 'e', 'f', 'g', 'h', + 'h', 'i', 'k', 'l', 'l', 'm', 'n', 'p', 'r', 'r', 's', 't', + 'u', 'v', 'w', 'w', 'x', 'y', 'z', 'h', 't', 'w', 'y', 'a', + 'e', 'i', 'o', 'u', 'y', + }; + + unsigned int key = (((unsigned int)c)<<3) | 0x00000007; + int iRes = 0; + int iHi = sizeof(aDia)/sizeof(aDia[0]) - 1; + int iLo = 0; + while( iHi>=iLo ){ + int iTest = (iHi + iLo) / 2; + if( key >= aDia[iTest] ){ + iRes = iTest; + iLo = iTest+1; + }else{ + iHi = iTest-1; + } + } + assert( key>=aDia[iRes] ); + return ((c > (aDia[iRes]>>3) + (aDia[iRes]&0x07)) ? c : (int)aChar[iRes]); +} + + +/* +** Return true if the argument interpreted as a unicode codepoint +** is a diacritical modifier character. +*/ +static int sqlite3Fts5UnicodeIsdiacritic(int c){ + unsigned int mask0 = 0x08029FDF; + unsigned int mask1 = 0x000361F8; + if( c<768 || c>817 ) return 0; + return (c < 768+32) ? + (mask0 & (1 << (c-768))) : + (mask1 & (1 << (c-768-32))); +} + + +/* +** Interpret the argument as a unicode codepoint. If the codepoint +** is an upper case character that has a lower case equivalent, +** return the codepoint corresponding to the lower case version. +** Otherwise, return a copy of the argument. +** +** The results are undefined if the value passed to this function +** is less than zero. +*/ +static int sqlite3Fts5UnicodeFold(int c, int bRemoveDiacritic){ + /* Each entry in the following array defines a rule for folding a range + ** of codepoints to lower case. The rule applies to a range of nRange + ** codepoints starting at codepoint iCode. + ** + ** If the least significant bit in flags is clear, then the rule applies + ** to all nRange codepoints (i.e. all nRange codepoints are upper case and + ** need to be folded). Or, if it is set, then the rule only applies to + ** every second codepoint in the range, starting with codepoint C. + ** + ** The 7 most significant bits in flags are an index into the aiOff[] + ** array. If a specific codepoint C does require folding, then its lower + ** case equivalent is ((C + aiOff[flags>>1]) & 0xFFFF). + ** + ** The contents of this array are generated by parsing the CaseFolding.txt + ** file distributed as part of the "Unicode Character Database". See + ** http://www.unicode.org for details. 
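+  ** As an illustration, the first entry below, {65, 14, 26}, covers the
+  ** 26 codepoints 65..90 ('A'..'Z').  Bit 0 of the flags value (14) is
+  ** clear, so every codepoint in the range is folded, and since
+  ** aiOff[14>>1]==32 each codepoint C maps to (C+32)&0xFFFF, i.e. the
+  ** corresponding character in 'a'..'z'.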
+ */ + static const struct TableEntry { + unsigned short iCode; + unsigned char flags; + unsigned char nRange; + } aEntry[] = { + {65, 14, 26}, {181, 64, 1}, {192, 14, 23}, + {216, 14, 7}, {256, 1, 48}, {306, 1, 6}, + {313, 1, 16}, {330, 1, 46}, {376, 116, 1}, + {377, 1, 6}, {383, 104, 1}, {385, 50, 1}, + {386, 1, 4}, {390, 44, 1}, {391, 0, 1}, + {393, 42, 2}, {395, 0, 1}, {398, 32, 1}, + {399, 38, 1}, {400, 40, 1}, {401, 0, 1}, + {403, 42, 1}, {404, 46, 1}, {406, 52, 1}, + {407, 48, 1}, {408, 0, 1}, {412, 52, 1}, + {413, 54, 1}, {415, 56, 1}, {416, 1, 6}, + {422, 60, 1}, {423, 0, 1}, {425, 60, 1}, + {428, 0, 1}, {430, 60, 1}, {431, 0, 1}, + {433, 58, 2}, {435, 1, 4}, {439, 62, 1}, + {440, 0, 1}, {444, 0, 1}, {452, 2, 1}, + {453, 0, 1}, {455, 2, 1}, {456, 0, 1}, + {458, 2, 1}, {459, 1, 18}, {478, 1, 18}, + {497, 2, 1}, {498, 1, 4}, {502, 122, 1}, + {503, 134, 1}, {504, 1, 40}, {544, 110, 1}, + {546, 1, 18}, {570, 70, 1}, {571, 0, 1}, + {573, 108, 1}, {574, 68, 1}, {577, 0, 1}, + {579, 106, 1}, {580, 28, 1}, {581, 30, 1}, + {582, 1, 10}, {837, 36, 1}, {880, 1, 4}, + {886, 0, 1}, {902, 18, 1}, {904, 16, 3}, + {908, 26, 1}, {910, 24, 2}, {913, 14, 17}, + {931, 14, 9}, {962, 0, 1}, {975, 4, 1}, + {976, 140, 1}, {977, 142, 1}, {981, 146, 1}, + {982, 144, 1}, {984, 1, 24}, {1008, 136, 1}, + {1009, 138, 1}, {1012, 130, 1}, {1013, 128, 1}, + {1015, 0, 1}, {1017, 152, 1}, {1018, 0, 1}, + {1021, 110, 3}, {1024, 34, 16}, {1040, 14, 32}, + {1120, 1, 34}, {1162, 1, 54}, {1216, 6, 1}, + {1217, 1, 14}, {1232, 1, 88}, {1329, 22, 38}, + {4256, 66, 38}, {4295, 66, 1}, {4301, 66, 1}, + {7680, 1, 150}, {7835, 132, 1}, {7838, 96, 1}, + {7840, 1, 96}, {7944, 150, 8}, {7960, 150, 6}, + {7976, 150, 8}, {7992, 150, 8}, {8008, 150, 6}, + {8025, 151, 8}, {8040, 150, 8}, {8072, 150, 8}, + {8088, 150, 8}, {8104, 150, 8}, {8120, 150, 2}, + {8122, 126, 2}, {8124, 148, 1}, {8126, 100, 1}, + {8136, 124, 4}, {8140, 148, 1}, {8152, 150, 2}, + {8154, 120, 2}, {8168, 150, 2}, {8170, 118, 2}, + {8172, 152, 1}, {8184, 112, 2}, {8186, 114, 2}, + {8188, 148, 1}, {8486, 98, 1}, {8490, 92, 1}, + {8491, 94, 1}, {8498, 12, 1}, {8544, 8, 16}, + {8579, 0, 1}, {9398, 10, 26}, {11264, 22, 47}, + {11360, 0, 1}, {11362, 88, 1}, {11363, 102, 1}, + {11364, 90, 1}, {11367, 1, 6}, {11373, 84, 1}, + {11374, 86, 1}, {11375, 80, 1}, {11376, 82, 1}, + {11378, 0, 1}, {11381, 0, 1}, {11390, 78, 2}, + {11392, 1, 100}, {11499, 1, 4}, {11506, 0, 1}, + {42560, 1, 46}, {42624, 1, 24}, {42786, 1, 14}, + {42802, 1, 62}, {42873, 1, 4}, {42877, 76, 1}, + {42878, 1, 10}, {42891, 0, 1}, {42893, 74, 1}, + {42896, 1, 4}, {42912, 1, 10}, {42922, 72, 1}, + {65313, 14, 26}, + }; + static const unsigned short aiOff[] = { + 1, 2, 8, 15, 16, 26, 28, 32, + 37, 38, 40, 48, 63, 64, 69, 71, + 79, 80, 116, 202, 203, 205, 206, 207, + 209, 210, 211, 213, 214, 217, 218, 219, + 775, 7264, 10792, 10795, 23228, 23256, 30204, 54721, + 54753, 54754, 54756, 54787, 54793, 54809, 57153, 57274, + 57921, 58019, 58363, 61722, 65268, 65341, 65373, 65406, + 65408, 65410, 65415, 65424, 65436, 65439, 65450, 65462, + 65472, 65476, 65478, 65480, 65482, 65488, 65506, 65511, + 65514, 65521, 65527, 65528, 65529, + }; + + int ret = c; + + assert( sizeof(unsigned short)==2 && sizeof(unsigned char)==1 ); + + if( c<128 ){ + if( c>='A' && c<='Z' ) ret = c + ('a' - 'A'); + }else if( c<65536 ){ + const struct TableEntry *p; + int iHi = sizeof(aEntry)/sizeof(aEntry[0]) - 1; + int iLo = 0; + int iRes = -1; + + assert( c>aEntry[0].iCode ); + while( iHi>=iLo ){ + int iTest = (iHi + iLo) / 2; + int cmp = (c 
- aEntry[iTest].iCode); + if( cmp>=0 ){ + iRes = iTest; + iLo = iTest+1; + }else{ + iHi = iTest-1; + } + } + + assert( iRes>=0 && c>=aEntry[iRes].iCode ); + p = &aEntry[iRes]; + if( c<(p->iCode + p->nRange) && 0==(0x01 & p->flags & (p->iCode ^ c)) ){ + ret = (c + (aiOff[p->flags>>1])) & 0x0000FFFF; + assert( ret>0 ); + } + + if( bRemoveDiacritic ) ret = fts5_remove_diacritic(ret); + } + + else if( c>=66560 && c<66600 ){ + ret = c + 40; + } + + return ret; +} + +/* +** 2015 May 30 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** Routines for varint serialization and deserialization. +*/ + + +/* #include "fts5Int.h" */ + +/* +** This is a copy of the sqlite3GetVarint32() routine from the SQLite core. +** Except, this version does handle the single byte case that the core +** version depends on being handled before its function is called. +*/ +static int sqlite3Fts5GetVarint32(const unsigned char *p, u32 *v){ + u32 a,b; + + /* The 1-byte case. Overwhelmingly the most common. */ + a = *p; + /* a: p0 (unmasked) */ + if (!(a&0x80)) + { + /* Values between 0 and 127 */ + *v = a; + return 1; + } + + /* The 2-byte case */ + p++; + b = *p; + /* b: p1 (unmasked) */ + if (!(b&0x80)) + { + /* Values between 128 and 16383 */ + a &= 0x7f; + a = a<<7; + *v = a | b; + return 2; + } + + /* The 3-byte case */ + p++; + a = a<<14; + a |= *p; + /* a: p0<<14 | p2 (unmasked) */ + if (!(a&0x80)) + { + /* Values between 16384 and 2097151 */ + a &= (0x7f<<14)|(0x7f); + b &= 0x7f; + b = b<<7; + *v = a | b; + return 3; + } + + /* A 32-bit varint is used to store size information in btrees. + ** Objects are rarely larger than 2MiB limit of a 3-byte varint. + ** A 3-byte varint is sufficient, for example, to record the size + ** of a 1048569-byte BLOB or string. + ** + ** We only unroll the first 1-, 2-, and 3- byte cases. The very + ** rare larger cases can be handled by the slower 64-bit varint + ** routine. + */ + { + u64 v64; + u8 n; + p -= 2; + n = sqlite3Fts5GetVarint(p, &v64); + *v = (u32)v64; + assert( n>3 && n<=9 ); + return n; + } +} + + +/* +** Bitmasks used by sqlite3GetVarint(). These precomputed constants +** are defined here rather than simply putting the constant expressions +** inline in order to work around bugs in the RVT compiler. +** +** SLOT_2_0 A mask for (0x7f<<14) | 0x7f +** +** SLOT_4_2_0 A mask for (0x7f<<28) | SLOT_2_0 +*/ +#define SLOT_2_0 0x001fc07f +#define SLOT_4_2_0 0xf01fc07f + +/* +** Read a 64-bit variable-length integer from memory starting at p[0]. +** Return the number of bytes read. The value is stored in *v. 
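+** For example, the single byte 0x07 decodes to the value 7 (one byte
+** read), and the two-byte sequence 0x81 0x00 decodes to 128 (two bytes
+** read): the high bit of 0x81 marks a continuation byte, and the low
+** seven bits of each byte are combined most-significant-first.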
+*/ +static u8 sqlite3Fts5GetVarint(const unsigned char *p, u64 *v){ + u32 a,b,s; + + a = *p; + /* a: p0 (unmasked) */ + if (!(a&0x80)) + { + *v = a; + return 1; + } + + p++; + b = *p; + /* b: p1 (unmasked) */ + if (!(b&0x80)) + { + a &= 0x7f; + a = a<<7; + a |= b; + *v = a; + return 2; + } + + /* Verify that constants are precomputed correctly */ + assert( SLOT_2_0 == ((0x7f<<14) | (0x7f)) ); + assert( SLOT_4_2_0 == ((0xfU<<28) | (0x7f<<14) | (0x7f)) ); + + p++; + a = a<<14; + a |= *p; + /* a: p0<<14 | p2 (unmasked) */ + if (!(a&0x80)) + { + a &= SLOT_2_0; + b &= 0x7f; + b = b<<7; + a |= b; + *v = a; + return 3; + } + + /* CSE1 from below */ + a &= SLOT_2_0; + p++; + b = b<<14; + b |= *p; + /* b: p1<<14 | p3 (unmasked) */ + if (!(b&0x80)) + { + b &= SLOT_2_0; + /* moved CSE1 up */ + /* a &= (0x7f<<14)|(0x7f); */ + a = a<<7; + a |= b; + *v = a; + return 4; + } + + /* a: p0<<14 | p2 (masked) */ + /* b: p1<<14 | p3 (unmasked) */ + /* 1:save off p0<<21 | p1<<14 | p2<<7 | p3 (masked) */ + /* moved CSE1 up */ + /* a &= (0x7f<<14)|(0x7f); */ + b &= SLOT_2_0; + s = a; + /* s: p0<<14 | p2 (masked) */ + + p++; + a = a<<14; + a |= *p; + /* a: p0<<28 | p2<<14 | p4 (unmasked) */ + if (!(a&0x80)) + { + /* we can skip these cause they were (effectively) done above in calc'ing s */ + /* a &= (0x7f<<28)|(0x7f<<14)|(0x7f); */ + /* b &= (0x7f<<14)|(0x7f); */ + b = b<<7; + a |= b; + s = s>>18; + *v = ((u64)s)<<32 | a; + return 5; + } + + /* 2:save off p0<<21 | p1<<14 | p2<<7 | p3 (masked) */ + s = s<<7; + s |= b; + /* s: p0<<21 | p1<<14 | p2<<7 | p3 (masked) */ + + p++; + b = b<<14; + b |= *p; + /* b: p1<<28 | p3<<14 | p5 (unmasked) */ + if (!(b&0x80)) + { + /* we can skip this cause it was (effectively) done above in calc'ing s */ + /* b &= (0x7f<<28)|(0x7f<<14)|(0x7f); */ + a &= SLOT_2_0; + a = a<<7; + a |= b; + s = s>>18; + *v = ((u64)s)<<32 | a; + return 6; + } + + p++; + a = a<<14; + a |= *p; + /* a: p2<<28 | p4<<14 | p6 (unmasked) */ + if (!(a&0x80)) + { + a &= SLOT_4_2_0; + b &= SLOT_2_0; + b = b<<7; + a |= b; + s = s>>11; + *v = ((u64)s)<<32 | a; + return 7; + } + + /* CSE2 from below */ + a &= SLOT_2_0; + p++; + b = b<<14; + b |= *p; + /* b: p3<<28 | p5<<14 | p7 (unmasked) */ + if (!(b&0x80)) + { + b &= SLOT_4_2_0; + /* moved CSE2 up */ + /* a &= (0x7f<<14)|(0x7f); */ + a = a<<7; + a |= b; + s = s>>4; + *v = ((u64)s)<<32 | a; + return 8; + } + + p++; + a = a<<15; + a |= *p; + /* a: p4<<29 | p6<<15 | p8 (unmasked) */ + + /* moved CSE2 up */ + /* a &= (0x7f<<29)|(0x7f<<15)|(0xff); */ + b &= SLOT_2_0; + b = b<<8; + a |= b; + + s = s<<4; + b = p[-4]; + b &= 0x7f; + b = b>>3; + s |= b; + + *v = ((u64)s)<<32 | a; + + return 9; +} + +/* +** The variable-length integer encoding is as follows: +** +** KEY: +** A = 0xxxxxxx 7 bits of data and one flag bit +** B = 1xxxxxxx 7 bits of data and one flag bit +** C = xxxxxxxx 8 bits of data +** +** 7 bits - A +** 14 bits - BA +** 21 bits - BBA +** 28 bits - BBBA +** 35 bits - BBBBA +** 42 bits - BBBBBA +** 49 bits - BBBBBBA +** 56 bits - BBBBBBBA +** 64 bits - BBBBBBBBC +*/ + +#ifdef SQLITE_NOINLINE +# define FTS5_NOINLINE SQLITE_NOINLINE +#else +# define FTS5_NOINLINE +#endif + +/* +** Write a 64-bit variable-length integer to memory starting at p[0]. +** The length of data write will be between 1 and 9 bytes. The number +** of bytes written is returned. +** +** A variable-length integer consists of the lower 7 bits of each byte +** for all bytes that have the 8th bit set and one byte with the 8th +** bit clear. 
Except, if we get to the 9th byte, it stores the full +** 8 bits and is the last byte. +*/ +static int FTS5_NOINLINE fts5PutVarint64(unsigned char *p, u64 v){ + int i, j, n; + u8 buf[10]; + if( v & (((u64)0xff000000)<<32) ){ + p[8] = (u8)v; + v >>= 8; + for(i=7; i>=0; i--){ + p[i] = (u8)((v & 0x7f) | 0x80); + v >>= 7; + } + return 9; + } + n = 0; + do{ + buf[n++] = (u8)((v & 0x7f) | 0x80); + v >>= 7; + }while( v!=0 ); + buf[0] &= 0x7f; + assert( n<=9 ); + for(i=0, j=n-1; j>=0; j--, i++){ + p[i] = buf[j]; + } + return n; +} + +static int sqlite3Fts5PutVarint(unsigned char *p, u64 v){ + if( v<=0x7f ){ + p[0] = v&0x7f; + return 1; + } + if( v<=0x3fff ){ + p[0] = ((v>>7)&0x7f)|0x80; + p[1] = v&0x7f; + return 2; + } + return fts5PutVarint64(p,v); +} + + +static int sqlite3Fts5GetVarintLen(u32 iVal){ +#if 0 + if( iVal<(1 << 7 ) ) return 1; +#endif + assert( iVal>=(1 << 7) ); + if( iVal<(1 << 14) ) return 2; + if( iVal<(1 << 21) ) return 3; + if( iVal<(1 << 28) ) return 4; + return 5; +} + + +/* +** 2015 May 08 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** This is an SQLite virtual table module implementing direct access to an +** existing FTS5 index. The module may create several different types of +** tables: +** +** col: +** CREATE TABLE vocab(term, col, doc, cnt, PRIMARY KEY(term, col)); +** +** One row for each term/column combination. The value of $doc is set to +** the number of fts5 rows that contain at least one instance of term +** $term within column $col. Field $cnt is set to the total number of +** instances of term $term in column $col (in any row of the fts5 table). +** +** row: +** CREATE TABLE vocab(term, doc, cnt, PRIMARY KEY(term)); +** +** One row for each term in the database. The value of $doc is set to +** the number of fts5 rows that contain at least one instance of term +** $term. Field $cnt is set to the total number of instances of term +** $term in the database. 
+*/ + + +/* #include "fts5Int.h" */ + + +typedef struct Fts5VocabTable Fts5VocabTable; +typedef struct Fts5VocabCursor Fts5VocabCursor; + +struct Fts5VocabTable { + sqlite3_vtab base; + char *zFts5Tbl; /* Name of fts5 table */ + char *zFts5Db; /* Db containing fts5 table */ + sqlite3 *db; /* Database handle */ + Fts5Global *pGlobal; /* FTS5 global object for this database */ + int eType; /* FTS5_VOCAB_COL or ROW */ +}; + +struct Fts5VocabCursor { + sqlite3_vtab_cursor base; + sqlite3_stmt *pStmt; /* Statement holding lock on pIndex */ + Fts5Index *pIndex; /* Associated FTS5 index */ + + int bEof; /* True if this cursor is at EOF */ + Fts5IndexIter *pIter; /* Term/rowid iterator object */ + + int nLeTerm; /* Size of zLeTerm in bytes */ + char *zLeTerm; /* (term <= $zLeTerm) paramater, or NULL */ + + /* These are used by 'col' tables only */ + Fts5Config *pConfig; /* Fts5 table configuration */ + int iCol; + i64 *aCnt; + i64 *aDoc; + + /* Output values used by 'row' and 'col' tables */ + i64 rowid; /* This table's current rowid value */ + Fts5Buffer term; /* Current value of 'term' column */ +}; + +#define FTS5_VOCAB_COL 0 +#define FTS5_VOCAB_ROW 1 + +#define FTS5_VOCAB_COL_SCHEMA "term, col, doc, cnt" +#define FTS5_VOCAB_ROW_SCHEMA "term, doc, cnt" + +/* +** Bits for the mask used as the idxNum value by xBestIndex/xFilter. +*/ +#define FTS5_VOCAB_TERM_EQ 0x01 +#define FTS5_VOCAB_TERM_GE 0x02 +#define FTS5_VOCAB_TERM_LE 0x04 + + +/* +** Translate a string containing an fts5vocab table type to an +** FTS5_VOCAB_XXX constant. If successful, set *peType to the output +** value and return SQLITE_OK. Otherwise, set *pzErr to an error message +** and return SQLITE_ERROR. +*/ +static int fts5VocabTableType(const char *zType, char **pzErr, int *peType){ + int rc = SQLITE_OK; + char *zCopy = sqlite3Fts5Strndup(&rc, zType, -1); + if( rc==SQLITE_OK ){ + sqlite3Fts5Dequote(zCopy); + if( sqlite3_stricmp(zCopy, "col")==0 ){ + *peType = FTS5_VOCAB_COL; + }else + + if( sqlite3_stricmp(zCopy, "row")==0 ){ + *peType = FTS5_VOCAB_ROW; + }else + { + *pzErr = sqlite3_mprintf("fts5vocab: unknown table type: %Q", zCopy); + rc = SQLITE_ERROR; + } + sqlite3_free(zCopy); + } + + return rc; +} + + +/* +** The xDisconnect() virtual table method. +*/ +static int fts5VocabDisconnectMethod(sqlite3_vtab *pVtab){ + Fts5VocabTable *pTab = (Fts5VocabTable*)pVtab; + sqlite3_free(pTab); + return SQLITE_OK; +} + +/* +** The xDestroy() virtual table method. +*/ +static int fts5VocabDestroyMethod(sqlite3_vtab *pVtab){ + Fts5VocabTable *pTab = (Fts5VocabTable*)pVtab; + sqlite3_free(pTab); + return SQLITE_OK; +} + +/* +** This function is the implementation of both the xConnect and xCreate +** methods of the FTS3 virtual table. +** +** The argv[] array contains the following: +** +** argv[0] -> module name ("fts5vocab") +** argv[1] -> database name +** argv[2] -> table name +** +** then: +** +** argv[3] -> name of fts5 table +** argv[4] -> type of fts5vocab table +** +** or, for tables in the TEMP schema only. 
+** +** argv[3] -> name of fts5 tables database +** argv[4] -> name of fts5 table +** argv[5] -> type of fts5vocab table +*/ +static int fts5VocabInitVtab( + sqlite3 *db, /* The SQLite database connection */ + void *pAux, /* Pointer to Fts5Global object */ + int argc, /* Number of elements in argv array */ + const char * const *argv, /* xCreate/xConnect argument array */ + sqlite3_vtab **ppVTab, /* Write the resulting vtab structure here */ + char **pzErr /* Write any error message here */ +){ + const char *azSchema[] = { + "CREATE TABlE vocab(" FTS5_VOCAB_COL_SCHEMA ")", + "CREATE TABlE vocab(" FTS5_VOCAB_ROW_SCHEMA ")" + }; + + Fts5VocabTable *pRet = 0; + int rc = SQLITE_OK; /* Return code */ + int bDb; + + bDb = (argc==6 && strlen(argv[1])==4 && memcmp("temp", argv[1], 4)==0); + + if( argc!=5 && bDb==0 ){ + *pzErr = sqlite3_mprintf("wrong number of vtable arguments"); + rc = SQLITE_ERROR; + }else{ + int nByte; /* Bytes of space to allocate */ + const char *zDb = bDb ? argv[3] : argv[1]; + const char *zTab = bDb ? argv[4] : argv[3]; + const char *zType = bDb ? argv[5] : argv[4]; + int nDb = (int)strlen(zDb)+1; + int nTab = (int)strlen(zTab)+1; + int eType = 0; + + rc = fts5VocabTableType(zType, pzErr, &eType); + if( rc==SQLITE_OK ){ + assert( eType>=0 && eType<ArraySize(azSchema) ); + rc = sqlite3_declare_vtab(db, azSchema[eType]); + } + + nByte = sizeof(Fts5VocabTable) + nDb + nTab; + pRet = sqlite3Fts5MallocZero(&rc, nByte); + if( pRet ){ + pRet->pGlobal = (Fts5Global*)pAux; + pRet->eType = eType; + pRet->db = db; + pRet->zFts5Tbl = (char*)&pRet[1]; + pRet->zFts5Db = &pRet->zFts5Tbl[nTab]; + memcpy(pRet->zFts5Tbl, zTab, nTab); + memcpy(pRet->zFts5Db, zDb, nDb); + sqlite3Fts5Dequote(pRet->zFts5Tbl); + sqlite3Fts5Dequote(pRet->zFts5Db); + } + } + + *ppVTab = (sqlite3_vtab*)pRet; + return rc; +} + + +/* +** The xConnect() and xCreate() methods for the virtual table. All the +** work is done in function fts5VocabInitVtab(). +*/ +static int fts5VocabConnectMethod( + sqlite3 *db, /* Database connection */ + void *pAux, /* Pointer to tokenizer hash table */ + int argc, /* Number of elements in argv array */ + const char * const *argv, /* xCreate/xConnect argument array */ + sqlite3_vtab **ppVtab, /* OUT: New sqlite3_vtab object */ + char **pzErr /* OUT: sqlite3_malloc'd error message */ +){ + return fts5VocabInitVtab(db, pAux, argc, argv, ppVtab, pzErr); +} +static int fts5VocabCreateMethod( + sqlite3 *db, /* Database connection */ + void *pAux, /* Pointer to tokenizer hash table */ + int argc, /* Number of elements in argv array */ + const char * const *argv, /* xCreate/xConnect argument array */ + sqlite3_vtab **ppVtab, /* OUT: New sqlite3_vtab object */ + char **pzErr /* OUT: sqlite3_malloc'd error message */ +){ + return fts5VocabInitVtab(db, pAux, argc, argv, ppVtab, pzErr); +} + +/* +** Implementation of the xBestIndex method. 
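+**
+** Constraints on the "term" column are mapped to the FTS5_VOCAB_TERM_EQ,
+** FTS5_VOCAB_TERM_GE and FTS5_VOCAB_TERM_LE bits of the idxNum value.
+** For example, a query containing "term >= ?1 AND term < ?2" results in
+** an idxNum of (FTS5_VOCAB_TERM_GE|FTS5_VOCAB_TERM_LE) with two values
+** passed to xFilter, the lower bound first and the upper bound second.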
+*/ +static int fts5VocabBestIndexMethod( + sqlite3_vtab *pUnused, + sqlite3_index_info *pInfo +){ + int i; + int iTermEq = -1; + int iTermGe = -1; + int iTermLe = -1; + int idxNum = 0; + int nArg = 0; + + UNUSED_PARAM(pUnused); + + for(i=0; i<pInfo->nConstraint; i++){ + struct sqlite3_index_constraint *p = &pInfo->aConstraint[i]; + if( p->usable==0 ) continue; + if( p->iColumn==0 ){ /* term column */ + if( p->op==SQLITE_INDEX_CONSTRAINT_EQ ) iTermEq = i; + if( p->op==SQLITE_INDEX_CONSTRAINT_LE ) iTermLe = i; + if( p->op==SQLITE_INDEX_CONSTRAINT_LT ) iTermLe = i; + if( p->op==SQLITE_INDEX_CONSTRAINT_GE ) iTermGe = i; + if( p->op==SQLITE_INDEX_CONSTRAINT_GT ) iTermGe = i; + } + } + + if( iTermEq>=0 ){ + idxNum |= FTS5_VOCAB_TERM_EQ; + pInfo->aConstraintUsage[iTermEq].argvIndex = ++nArg; + pInfo->estimatedCost = 100; + }else{ + pInfo->estimatedCost = 1000000; + if( iTermGe>=0 ){ + idxNum |= FTS5_VOCAB_TERM_GE; + pInfo->aConstraintUsage[iTermGe].argvIndex = ++nArg; + pInfo->estimatedCost = pInfo->estimatedCost / 2; + } + if( iTermLe>=0 ){ + idxNum |= FTS5_VOCAB_TERM_LE; + pInfo->aConstraintUsage[iTermLe].argvIndex = ++nArg; + pInfo->estimatedCost = pInfo->estimatedCost / 2; + } + } + + pInfo->idxNum = idxNum; + + return SQLITE_OK; +} + +/* +** Implementation of xOpen method. +*/ +static int fts5VocabOpenMethod( + sqlite3_vtab *pVTab, + sqlite3_vtab_cursor **ppCsr +){ + Fts5VocabTable *pTab = (Fts5VocabTable*)pVTab; + Fts5Index *pIndex = 0; + Fts5Config *pConfig = 0; + Fts5VocabCursor *pCsr = 0; + int rc = SQLITE_OK; + sqlite3_stmt *pStmt = 0; + char *zSql = 0; + + zSql = sqlite3Fts5Mprintf(&rc, + "SELECT t.%Q FROM %Q.%Q AS t WHERE t.%Q MATCH '*id'", + pTab->zFts5Tbl, pTab->zFts5Db, pTab->zFts5Tbl, pTab->zFts5Tbl + ); + if( zSql ){ + rc = sqlite3_prepare_v2(pTab->db, zSql, -1, &pStmt, 0); + } + sqlite3_free(zSql); + assert( rc==SQLITE_OK || pStmt==0 ); + if( rc==SQLITE_ERROR ) rc = SQLITE_OK; + + if( pStmt && sqlite3_step(pStmt)==SQLITE_ROW ){ + i64 iId = sqlite3_column_int64(pStmt, 0); + pIndex = sqlite3Fts5IndexFromCsrid(pTab->pGlobal, iId, &pConfig); + } + + if( rc==SQLITE_OK && pIndex==0 ){ + rc = sqlite3_finalize(pStmt); + pStmt = 0; + if( rc==SQLITE_OK ){ + pVTab->zErrMsg = sqlite3_mprintf( + "no such fts5 table: %s.%s", pTab->zFts5Db, pTab->zFts5Tbl + ); + rc = SQLITE_ERROR; + } + } + + if( rc==SQLITE_OK ){ + int nByte = pConfig->nCol * sizeof(i64) * 2 + sizeof(Fts5VocabCursor); + pCsr = (Fts5VocabCursor*)sqlite3Fts5MallocZero(&rc, nByte); + } + + if( pCsr ){ + pCsr->pIndex = pIndex; + pCsr->pStmt = pStmt; + pCsr->pConfig = pConfig; + pCsr->aCnt = (i64*)&pCsr[1]; + pCsr->aDoc = &pCsr->aCnt[pConfig->nCol]; + }else{ + sqlite3_finalize(pStmt); + } + + *ppCsr = (sqlite3_vtab_cursor*)pCsr; + return rc; +} + +static void fts5VocabResetCursor(Fts5VocabCursor *pCsr){ + pCsr->rowid = 0; + sqlite3Fts5IterClose(pCsr->pIter); + pCsr->pIter = 0; + sqlite3_free(pCsr->zLeTerm); + pCsr->nLeTerm = -1; + pCsr->zLeTerm = 0; +} + +/* +** Close the cursor. For additional information see the documentation +** on the xClose method of the virtual table interface. +*/ +static int fts5VocabCloseMethod(sqlite3_vtab_cursor *pCursor){ + Fts5VocabCursor *pCsr = (Fts5VocabCursor*)pCursor; + fts5VocabResetCursor(pCsr); + sqlite3Fts5BufferFree(&pCsr->term); + sqlite3_finalize(pCsr->pStmt); + sqlite3_free(pCsr); + return SQLITE_OK; +} + + +/* +** Advance the cursor to the next row in the table. 
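+**
+** For "row" tables this simply moves to the next distinct term.  For
+** "col" tables the cursor first steps through any remaining columns of
+** the current term that have a non-zero doc count, and only then
+** advances the underlying term iterator.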
+*/ +static int fts5VocabNextMethod(sqlite3_vtab_cursor *pCursor){ + Fts5VocabCursor *pCsr = (Fts5VocabCursor*)pCursor; + Fts5VocabTable *pTab = (Fts5VocabTable*)pCursor->pVtab; + int rc = SQLITE_OK; + int nCol = pCsr->pConfig->nCol; + + pCsr->rowid++; + + if( pTab->eType==FTS5_VOCAB_COL ){ + for(pCsr->iCol++; pCsr->iCol<nCol; pCsr->iCol++){ + if( pCsr->aDoc[pCsr->iCol] ) break; + } + } + + if( pTab->eType==FTS5_VOCAB_ROW || pCsr->iCol>=nCol ){ + if( sqlite3Fts5IterEof(pCsr->pIter) ){ + pCsr->bEof = 1; + }else{ + const char *zTerm; + int nTerm; + + zTerm = sqlite3Fts5IterTerm(pCsr->pIter, &nTerm); + if( pCsr->nLeTerm>=0 ){ + int nCmp = MIN(nTerm, pCsr->nLeTerm); + int bCmp = memcmp(pCsr->zLeTerm, zTerm, nCmp); + if( bCmp<0 || (bCmp==0 && pCsr->nLeTerm<nTerm) ){ + pCsr->bEof = 1; + return SQLITE_OK; + } + } + + sqlite3Fts5BufferSet(&rc, &pCsr->term, nTerm, (const u8*)zTerm); + memset(pCsr->aCnt, 0, nCol * sizeof(i64)); + memset(pCsr->aDoc, 0, nCol * sizeof(i64)); + pCsr->iCol = 0; + + assert( pTab->eType==FTS5_VOCAB_COL || pTab->eType==FTS5_VOCAB_ROW ); + while( rc==SQLITE_OK ){ + const u8 *pPos; int nPos; /* Position list */ + i64 iPos = 0; /* 64-bit position read from poslist */ + int iOff = 0; /* Current offset within position list */ + + pPos = pCsr->pIter->pData; + nPos = pCsr->pIter->nData; + switch( pCsr->pConfig->eDetail ){ + case FTS5_DETAIL_FULL: + pPos = pCsr->pIter->pData; + nPos = pCsr->pIter->nData; + if( pTab->eType==FTS5_VOCAB_ROW ){ + while( 0==sqlite3Fts5PoslistNext64(pPos, nPos, &iOff, &iPos) ){ + pCsr->aCnt[0]++; + } + pCsr->aDoc[0]++; + }else{ + int iCol = -1; + while( 0==sqlite3Fts5PoslistNext64(pPos, nPos, &iOff, &iPos) ){ + int ii = FTS5_POS2COLUMN(iPos); + pCsr->aCnt[ii]++; + if( iCol!=ii ){ + if( ii>=nCol ){ + rc = FTS5_CORRUPT; + break; + } + pCsr->aDoc[ii]++; + iCol = ii; + } + } + } + break; + + case FTS5_DETAIL_COLUMNS: + if( pTab->eType==FTS5_VOCAB_ROW ){ + pCsr->aDoc[0]++; + }else{ + while( 0==sqlite3Fts5PoslistNext64(pPos, nPos, &iOff,&iPos) ){ + assert_nc( iPos>=0 && iPos<nCol ); + if( iPos>=nCol ){ + rc = FTS5_CORRUPT; + break; + } + pCsr->aDoc[iPos]++; + } + } + break; + + default: + assert( pCsr->pConfig->eDetail==FTS5_DETAIL_NONE ); + pCsr->aDoc[0]++; + break; + } + + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5IterNextScan(pCsr->pIter); + } + + if( rc==SQLITE_OK ){ + zTerm = sqlite3Fts5IterTerm(pCsr->pIter, &nTerm); + if( nTerm!=pCsr->term.n || memcmp(zTerm, pCsr->term.p, nTerm) ){ + break; + } + if( sqlite3Fts5IterEof(pCsr->pIter) ) break; + } + } + } + } + + if( rc==SQLITE_OK && pCsr->bEof==0 && pTab->eType==FTS5_VOCAB_COL ){ + while( pCsr->aDoc[pCsr->iCol]==0 ) pCsr->iCol++; + assert( pCsr->iCol<pCsr->pConfig->nCol ); + } + return rc; +} + +/* +** This is the xFilter implementation for the virtual table. 
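+**
+** The bits set in idxNum determine how many values appear in apVal[] and
+** how each is used: an EQ value is passed directly to
+** sqlite3Fts5IndexQuery(), a GE value supplies the starting term for a
+** scan, and an LE value is copied into Fts5VocabCursor.zLeTerm and
+** checked against each term visited by xNext.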
+*/ +static int fts5VocabFilterMethod( + sqlite3_vtab_cursor *pCursor, /* The cursor used for this query */ + int idxNum, /* Strategy index */ + const char *zUnused, /* Unused */ + int nUnused, /* Number of elements in apVal */ + sqlite3_value **apVal /* Arguments for the indexing scheme */ +){ + Fts5VocabCursor *pCsr = (Fts5VocabCursor*)pCursor; + int rc = SQLITE_OK; + + int iVal = 0; + int f = FTS5INDEX_QUERY_SCAN; + const char *zTerm = 0; + int nTerm = 0; + + sqlite3_value *pEq = 0; + sqlite3_value *pGe = 0; + sqlite3_value *pLe = 0; + + UNUSED_PARAM2(zUnused, nUnused); + + fts5VocabResetCursor(pCsr); + if( idxNum & FTS5_VOCAB_TERM_EQ ) pEq = apVal[iVal++]; + if( idxNum & FTS5_VOCAB_TERM_GE ) pGe = apVal[iVal++]; + if( idxNum & FTS5_VOCAB_TERM_LE ) pLe = apVal[iVal++]; + + if( pEq ){ + zTerm = (const char *)sqlite3_value_text(pEq); + nTerm = sqlite3_value_bytes(pEq); + f = 0; + }else{ + if( pGe ){ + zTerm = (const char *)sqlite3_value_text(pGe); + nTerm = sqlite3_value_bytes(pGe); + } + if( pLe ){ + const char *zCopy = (const char *)sqlite3_value_text(pLe); + pCsr->nLeTerm = sqlite3_value_bytes(pLe); + pCsr->zLeTerm = sqlite3_malloc(pCsr->nLeTerm+1); + if( pCsr->zLeTerm==0 ){ + rc = SQLITE_NOMEM; + }else{ + memcpy(pCsr->zLeTerm, zCopy, pCsr->nLeTerm+1); + } + } + } + + + if( rc==SQLITE_OK ){ + rc = sqlite3Fts5IndexQuery(pCsr->pIndex, zTerm, nTerm, f, 0, &pCsr->pIter); + } + if( rc==SQLITE_OK ){ + rc = fts5VocabNextMethod(pCursor); + } + + return rc; +} + +/* +** This is the xEof method of the virtual table. SQLite calls this +** routine to find out if it has reached the end of a result set. +*/ +static int fts5VocabEofMethod(sqlite3_vtab_cursor *pCursor){ + Fts5VocabCursor *pCsr = (Fts5VocabCursor*)pCursor; + return pCsr->bEof; +} + +static int fts5VocabColumnMethod( + sqlite3_vtab_cursor *pCursor, /* Cursor to retrieve value from */ + sqlite3_context *pCtx, /* Context for sqlite3_result_xxx() calls */ + int iCol /* Index of column to read value from */ +){ + Fts5VocabCursor *pCsr = (Fts5VocabCursor*)pCursor; + int eDetail = pCsr->pConfig->eDetail; + int eType = ((Fts5VocabTable*)(pCursor->pVtab))->eType; + i64 iVal = 0; + + if( iCol==0 ){ + sqlite3_result_text( + pCtx, (const char*)pCsr->term.p, pCsr->term.n, SQLITE_TRANSIENT + ); + }else if( eType==FTS5_VOCAB_COL ){ + assert( iCol==1 || iCol==2 || iCol==3 ); + if( iCol==1 ){ + if( eDetail!=FTS5_DETAIL_NONE ){ + const char *z = pCsr->pConfig->azCol[pCsr->iCol]; + sqlite3_result_text(pCtx, z, -1, SQLITE_STATIC); + } + }else if( iCol==2 ){ + iVal = pCsr->aDoc[pCsr->iCol]; + }else{ + iVal = pCsr->aCnt[pCsr->iCol]; + } + }else{ + assert( iCol==1 || iCol==2 ); + if( iCol==1 ){ + iVal = pCsr->aDoc[0]; + }else{ + iVal = pCsr->aCnt[0]; + } + } + + if( iVal>0 ) sqlite3_result_int64(pCtx, iVal); + return SQLITE_OK; +} + +/* +** This is the xRowid method. The SQLite core calls this routine to +** retrieve the rowid for the current row of the result set. The +** rowid should be written to *pRowid. 
+*/ +static int fts5VocabRowidMethod( + sqlite3_vtab_cursor *pCursor, + sqlite_int64 *pRowid +){ + Fts5VocabCursor *pCsr = (Fts5VocabCursor*)pCursor; + *pRowid = pCsr->rowid; + return SQLITE_OK; +} + +static int sqlite3Fts5VocabInit(Fts5Global *pGlobal, sqlite3 *db){ + static const sqlite3_module fts5Vocab = { + /* iVersion */ 2, + /* xCreate */ fts5VocabCreateMethod, + /* xConnect */ fts5VocabConnectMethod, + /* xBestIndex */ fts5VocabBestIndexMethod, + /* xDisconnect */ fts5VocabDisconnectMethod, + /* xDestroy */ fts5VocabDestroyMethod, + /* xOpen */ fts5VocabOpenMethod, + /* xClose */ fts5VocabCloseMethod, + /* xFilter */ fts5VocabFilterMethod, + /* xNext */ fts5VocabNextMethod, + /* xEof */ fts5VocabEofMethod, + /* xColumn */ fts5VocabColumnMethod, + /* xRowid */ fts5VocabRowidMethod, + /* xUpdate */ 0, + /* xBegin */ 0, + /* xSync */ 0, + /* xCommit */ 0, + /* xRollback */ 0, + /* xFindFunction */ 0, + /* xRename */ 0, + /* xSavepoint */ 0, + /* xRelease */ 0, + /* xRollbackTo */ 0, + }; + void *p = (void*)pGlobal; + + return sqlite3_create_module_v2(db, "fts5vocab", &fts5Vocab, p, 0); +} + + + + + +#endif /* !defined(SQLITE_CORE) || defined(SQLITE_ENABLE_FTS5) */ + +/************** End of fts5.c ************************************************/ Index: src/sqlite3.h ================================================================== --- src/sqlite3.h +++ src/sqlite3.h @@ -21,11 +21,11 @@ ** to experimental interfaces but reserve the right to make minor changes ** if experience from use "in the wild" suggest such changes are prudent. ** ** The official C-language API documentation for SQLite is derived ** from comments in this file. This file is the authoritative source -** on how SQLite interfaces are suppose to operate. +** on how SQLite interfaces are supposed to operate. ** ** The name of this file under configuration management is "sqlite.h.in". ** The makefile makes some minor changes to this file (such as inserting ** the version number) and changes its name to "sqlite3.h" as ** part of the build process. @@ -41,25 +41,29 @@ extern "C" { #endif /* -** Add the ability to override 'extern' +** Provide the ability to override linkage features of the interface. */ #ifndef SQLITE_EXTERN # define SQLITE_EXTERN extern #endif - #ifndef SQLITE_API # define SQLITE_API #endif - +#ifndef SQLITE_CDECL +# define SQLITE_CDECL +#endif +#ifndef SQLITE_STDCALL +# define SQLITE_STDCALL +#endif /* ** These no-op macros are used in front of interfaces to mark those ** interfaces as either deprecated or experimental. New applications -** should not use deprecated interfaces - they are support for backwards +** should not use deprecated interfaces - they are supported for backwards ** compatibility only. Application writers should be aware that ** experimental interfaces are subject to change in point releases. ** ** These macros used to resolve to various kinds of compiler magic that ** would generate warning messages when they were used. But that @@ -95,23 +99,23 @@ ** be held constant and Z will be incremented or else Y will be incremented ** and Z will be reset to zero. ** ** Since version 3.6.18, SQLite source code has been stored in the ** <a href="http://www.fossil-scm.org/">Fossil configuration management -** system</a>. ^The SQLITE_SOURCE_ID macro evalutes to +** system</a>. ^The SQLITE_SOURCE_ID macro evaluates to ** a string which identifies a particular check-in of SQLite ** within its configuration management system. 
^The SQLITE_SOURCE_ID ** string contains the date and time of the check-in (UTC) and an SHA1 ** hash of the entire source tree. ** ** See also: [sqlite3_libversion()], ** [sqlite3_libversion_number()], [sqlite3_sourceid()], ** [sqlite_version()] and [sqlite_source_id()]. */ -#define SQLITE_VERSION "3.6.23" -#define SQLITE_VERSION_NUMBER 3006023 -#define SQLITE_SOURCE_ID "2010-04-15 23:24:29 f96782b389b5b97b488dc5814f7082e0393f64cd" +#define SQLITE_VERSION "3.11.0" +#define SQLITE_VERSION_NUMBER 3011000 +#define SQLITE_SOURCE_ID "2016-02-15 17:29:24 3d862f207e3adc00f78066799ac5a8c282430a5f" /* ** CAPI3REF: Run-Time Library Version Numbers ** KEYWORDS: sqlite3_version, sqlite3_sourceid ** @@ -118,11 +122,11 @@ ** These interfaces provide the same information as the [SQLITE_VERSION], ** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros ** but are associated with the library instead of the header file. ^(Cautious ** programmers might include assert() statements in their application to ** verify that values returned by these interfaces match the macros in -** the header, and thus insure that the application is +** the header, and thus ensure that the application is ** compiled with matching library and header files. ** ** <blockquote><pre> ** assert( sqlite3_libversion_number()==SQLITE_VERSION_NUMBER ); ** assert( strcmp(sqlite3_sourceid(),SQLITE_SOURCE_ID)==0 ); @@ -140,46 +144,46 @@ ** [SQLITE_SOURCE_ID] C preprocessor macro. ** ** See also: [sqlite_version()] and [sqlite_source_id()]. */ SQLITE_API SQLITE_EXTERN const char sqlite3_version[]; -SQLITE_API const char *sqlite3_libversion(void); -SQLITE_API const char *sqlite3_sourceid(void); -SQLITE_API int sqlite3_libversion_number(void); +SQLITE_API const char *SQLITE_STDCALL sqlite3_libversion(void); +SQLITE_API const char *SQLITE_STDCALL sqlite3_sourceid(void); +SQLITE_API int SQLITE_STDCALL sqlite3_libversion_number(void); /* ** CAPI3REF: Run-Time Library Compilation Options Diagnostics ** ** ^The sqlite3_compileoption_used() function returns 0 or 1 ** indicating whether the specified option was defined at ** compile time. ^The SQLITE_ prefix may be omitted from the ** option name passed to sqlite3_compileoption_used(). ** -** ^The sqlite3_compileoption_get() function allows interating +** ^The sqlite3_compileoption_get() function allows iterating ** over the list of options that were defined at compile time by ** returning the N-th compile time option string. ^If N is out of range, ** sqlite3_compileoption_get() returns a NULL pointer. ^The SQLITE_ ** prefix is omitted from any strings returned by ** sqlite3_compileoption_get(). ** ** ^Support for the diagnostic functions sqlite3_compileoption_used() -** and sqlite3_compileoption_get() may be omitted by specifing the +** and sqlite3_compileoption_get() may be omitted by specifying the ** [SQLITE_OMIT_COMPILEOPTION_DIAGS] option at compile time. ** ** See also: SQL functions [sqlite_compileoption_used()] and ** [sqlite_compileoption_get()] and the [compile_options pragma]. 
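**
** For example, an application built against a library compiled with
** -DSQLITE_ENABLE_FTS5 could check for that option at runtime using:
**
**   assert( sqlite3_compileoption_used("ENABLE_FTS5") );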
*/ #ifndef SQLITE_OMIT_COMPILEOPTION_DIAGS -SQLITE_API int sqlite3_compileoption_used(const char *zOptName); -SQLITE_API const char *sqlite3_compileoption_get(int N); +SQLITE_API int SQLITE_STDCALL sqlite3_compileoption_used(const char *zOptName); +SQLITE_API const char *SQLITE_STDCALL sqlite3_compileoption_get(int N); #endif /* ** CAPI3REF: Test To See If The Library Is Threadsafe ** ** ^The sqlite3_threadsafe() function returns zero if and only if -** SQLite was compiled mutexing code omitted due to the +** SQLite was compiled with mutexing code omitted due to the ** [SQLITE_THREADSAFE] compile-time option being set to 0. ** ** SQLite can be compiled with or without mutexes. When ** the [SQLITE_THREADSAFE] C preprocessor macro is 1 or 2, mutexes ** are enabled and SQLite is threadsafe. When the @@ -199,29 +203,30 @@ ** This interface only reports on the compile-time mutex setting ** of the [SQLITE_THREADSAFE] flag. If SQLite is compiled with ** SQLITE_THREADSAFE=1 or =2 then mutexes are enabled by default but ** can be fully or partially disabled using a call to [sqlite3_config()] ** with the verbs [SQLITE_CONFIG_SINGLETHREAD], [SQLITE_CONFIG_MULTITHREAD], -** or [SQLITE_CONFIG_MUTEX]. ^(The return value of the +** or [SQLITE_CONFIG_SERIALIZED]. ^(The return value of the ** sqlite3_threadsafe() function shows only the compile-time setting of ** thread safety, not any run-time changes to that setting made by ** sqlite3_config(). In other words, the return value from sqlite3_threadsafe() ** is unchanged by calls to sqlite3_config().)^ ** ** See the [threading mode] documentation for additional information. */ -SQLITE_API int sqlite3_threadsafe(void); +SQLITE_API int SQLITE_STDCALL sqlite3_threadsafe(void); /* ** CAPI3REF: Database Connection Handle ** KEYWORDS: {database connection} {database connections} ** ** Each open SQLite database is represented by a pointer to an instance of ** the opaque structure named "sqlite3". It is useful to think of an sqlite3 ** pointer as an object. The [sqlite3_open()], [sqlite3_open16()], and ** [sqlite3_open_v2()] interfaces are its constructors, and [sqlite3_close()] -** is its destructor. There are many other interfaces (such as +** and [sqlite3_close_v2()] are its destructors. There are many other +** interfaces (such as ** [sqlite3_prepare_v2()], [sqlite3_create_function()], and ** [sqlite3_busy_timeout()] to name but three) that are methods on an ** sqlite3 object. */ typedef struct sqlite3 sqlite3; @@ -263,33 +268,52 @@ # define double sqlite3_int64 #endif /* ** CAPI3REF: Closing A Database Connection -** -** ^The sqlite3_close() routine is the destructor for the [sqlite3] object. -** ^Calls to sqlite3_close() return SQLITE_OK if the [sqlite3] object is -** successfullly destroyed and all associated resources are deallocated. -** -** Applications must [sqlite3_finalize | finalize] all [prepared statements] -** and [sqlite3_blob_close | close] all [BLOB handles] associated with -** the [sqlite3] object prior to attempting to close the object. ^If -** sqlite3_close() is called on a [database connection] that still has -** outstanding [prepared statements] or [BLOB handles], then it returns -** SQLITE_BUSY. -** -** ^If [sqlite3_close()] is invoked while a transaction is open, +** DESTRUCTOR: sqlite3 +** +** ^The sqlite3_close() and sqlite3_close_v2() routines are destructors +** for the [sqlite3] object. 
+** ^Calls to sqlite3_close() and sqlite3_close_v2() return [SQLITE_OK] if +** the [sqlite3] object is successfully destroyed and all associated +** resources are deallocated. +** +** ^If the database connection is associated with unfinalized prepared +** statements or unfinished sqlite3_backup objects then sqlite3_close() +** will leave the database connection open and return [SQLITE_BUSY]. +** ^If sqlite3_close_v2() is called with unfinalized prepared statements +** and/or unfinished sqlite3_backups, then the database connection becomes +** an unusable "zombie" which will automatically be deallocated when the +** last prepared statement is finalized or the last sqlite3_backup is +** finished. The sqlite3_close_v2() interface is intended for use with +** host languages that are garbage collected, and where the order in which +** destructors are called is arbitrary. +** +** Applications should [sqlite3_finalize | finalize] all [prepared statements], +** [sqlite3_blob_close | close] all [BLOB handles], and +** [sqlite3_backup_finish | finish] all [sqlite3_backup] objects associated +** with the [sqlite3] object prior to attempting to close the object. ^If +** sqlite3_close_v2() is called on a [database connection] that still has +** outstanding [prepared statements], [BLOB handles], and/or +** [sqlite3_backup] objects then it returns [SQLITE_OK] and the deallocation +** of resources is deferred until all [prepared statements], [BLOB handles], +** and [sqlite3_backup] objects are also destroyed. +** +** ^If an [sqlite3] object is destroyed while a transaction is open, ** the transaction is automatically rolled back. ** -** The C parameter to [sqlite3_close(C)] must be either a NULL +** The C parameter to [sqlite3_close(C)] and [sqlite3_close_v2(C)] +** must be either a NULL ** pointer or an [sqlite3] object pointer obtained ** from [sqlite3_open()], [sqlite3_open16()], or ** [sqlite3_open_v2()], and not previously closed. -** ^Calling sqlite3_close() with a NULL pointer argument is a -** harmless no-op. +** ^Calling sqlite3_close() or sqlite3_close_v2() with a NULL pointer +** argument is a harmless no-op. */ -SQLITE_API int sqlite3_close(sqlite3 *); +SQLITE_API int SQLITE_STDCALL sqlite3_close(sqlite3*); +SQLITE_API int SQLITE_STDCALL sqlite3_close_v2(sqlite3*); /* ** The type for a callback function. ** This is legacy and deprecated. It is included for historical ** compatibility and is not documented. @@ -296,10 +320,11 @@ */ typedef int (*sqlite3_callback)(void*,int,char**, char**); /* ** CAPI3REF: One-Step Query Execution Interface +** METHOD: sqlite3 ** ** The sqlite3_exec() interface is a convenience wrapper around ** [sqlite3_prepare_v2()], [sqlite3_step()], and [sqlite3_finalize()], ** that allows an application to run multiple statements of SQL ** without having to use a lot of C code. @@ -308,11 +333,11 @@ ** semicolon-separate SQL statements passed into its 2nd argument, ** in the context of the [database connection] passed in as its 1st ** argument. ^If the callback function of the 3rd argument to ** sqlite3_exec() is not NULL, then it is invoked for each result row ** coming out of the evaluated SQL statements. ^The 4th argument to -** to sqlite3_exec() is relayed through to the 1st argument of each +** sqlite3_exec() is relayed through to the 1st argument of each ** callback invocation. ^If the callback pointer to sqlite3_exec() ** is NULL, then no callback is ever invoked and result rows are ** ignored. 
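**
** A minimal (illustrative) call that runs a single statement and frees
** any error message might look like:
**
**   char *zErr = 0;
**   int rc = sqlite3_exec(db, "CREATE TABLE t1(x,y)", 0, 0, &zErr);
**   if( rc!=SQLITE_OK ){ fprintf(stderr, "%s\n", zErr); sqlite3_free(zErr); }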
** ** ^If an error occurs while evaluating the SQL statements passed into @@ -320,11 +345,11 @@ ** subsequent statements are skipped. ^If the 5th parameter to sqlite3_exec() ** is not NULL then any error message is written into memory obtained ** from [sqlite3_malloc()] and passed back through the 5th parameter. ** To avoid memory leaks, the application should invoke [sqlite3_free()] ** on error message strings returned through the 5th parameter of -** of sqlite3_exec() after the error message string is no longer needed. +** sqlite3_exec() after the error message string is no longer needed. ** ^If the 5th parameter to sqlite3_exec() is not NULL and no errors ** occur, then sqlite3_exec() sets the pointer in its 5th parameter to ** NULL before returning. ** ** ^If an sqlite3_exec() callback returns non-zero, the sqlite3_exec() @@ -347,37 +372,36 @@ ** is not changed. ** ** Restrictions: ** ** <ul> -** <li> The application must insure that the 1st parameter to sqlite3_exec() +** <li> The application must ensure that the 1st parameter to sqlite3_exec() ** is a valid and open [database connection]. -** <li> The application must not close [database connection] specified by +** <li> The application must not close the [database connection] specified by ** the 1st parameter to sqlite3_exec() while sqlite3_exec() is running. ** <li> The application must not modify the SQL statement text passed into ** the 2nd parameter of sqlite3_exec() while sqlite3_exec() is running. ** </ul> */ -SQLITE_API int sqlite3_exec( +SQLITE_API int SQLITE_STDCALL sqlite3_exec( sqlite3*, /* An open database */ const char *sql, /* SQL to be evaluated */ int (*callback)(void*,int,char**,char**), /* Callback function */ void *, /* 1st argument to callback */ char **errmsg /* Error msg written here */ ); /* ** CAPI3REF: Result Codes -** KEYWORDS: SQLITE_OK {error code} {error codes} -** KEYWORDS: {result code} {result codes} +** KEYWORDS: {result code definitions} ** ** Many SQLite functions return an integer result code from the set shown -** here in order to indicates success or failure. +** here in order to indicate success or failure. ** ** New error codes may be added in future versions of SQLite. ** -** See also: [SQLITE_IOERR_READ | extended result codes] +** See also: [extended result code definitions] */ #define SQLITE_OK 0 /* Successful result */ /* beginning-of-error-codes */ #define SQLITE_ERROR 1 /* SQL error or missing database */ #define SQLITE_INTERNAL 2 /* Internal logic error in SQLite */ @@ -388,14 +412,14 @@ #define SQLITE_NOMEM 7 /* A malloc() failed */ #define SQLITE_READONLY 8 /* Attempt to write a readonly database */ #define SQLITE_INTERRUPT 9 /* Operation terminated by sqlite3_interrupt()*/ #define SQLITE_IOERR 10 /* Some kind of disk I/O error occurred */ #define SQLITE_CORRUPT 11 /* The database disk image is malformed */ -#define SQLITE_NOTFOUND 12 /* NOT USED. Table or record not found */ +#define SQLITE_NOTFOUND 12 /* Unknown opcode in sqlite3_file_control() */ #define SQLITE_FULL 13 /* Insertion failed because database is full */ #define SQLITE_CANTOPEN 14 /* Unable to open the database file */ -#define SQLITE_PROTOCOL 15 /* NOT USED. 
Database lock protocol error */ +#define SQLITE_PROTOCOL 15 /* Database lock protocol error */ #define SQLITE_EMPTY 16 /* Database is empty */ #define SQLITE_SCHEMA 17 /* The database schema changed */ #define SQLITE_TOOBIG 18 /* String or BLOB exceeds size limit */ #define SQLITE_CONSTRAINT 19 /* Abort due to constraint violation */ #define SQLITE_MISMATCH 20 /* Data type mismatch */ @@ -403,36 +427,31 @@ #define SQLITE_NOLFS 22 /* Uses OS features not supported on host */ #define SQLITE_AUTH 23 /* Authorization denied */ #define SQLITE_FORMAT 24 /* Auxiliary database format error */ #define SQLITE_RANGE 25 /* 2nd parameter to sqlite3_bind out of range */ #define SQLITE_NOTADB 26 /* File opened that is not a database file */ +#define SQLITE_NOTICE 27 /* Notifications from sqlite3_log() */ +#define SQLITE_WARNING 28 /* Warnings from sqlite3_log() */ #define SQLITE_ROW 100 /* sqlite3_step() has another row ready */ #define SQLITE_DONE 101 /* sqlite3_step() has finished executing */ /* end-of-error-codes */ /* ** CAPI3REF: Extended Result Codes -** KEYWORDS: {extended error code} {extended error codes} -** KEYWORDS: {extended result code} {extended result codes} +** KEYWORDS: {extended result code definitions} ** -** In its default configuration, SQLite API routines return one of 26 integer -** [SQLITE_OK | result codes]. However, experience has shown that many of +** In its default configuration, SQLite API routines return one of 30 integer +** [result codes]. However, experience has shown that many of ** these result codes are too coarse-grained. They do not provide as ** much information about problems as programmers might like. In an effort to ** address this, newer versions of SQLite (version 3.3.8 and later) include ** support for additional result codes that provide more detailed information -** about errors. The extended result codes are enabled or disabled +** about errors. These [extended result codes] are enabled or disabled ** on a per database connection basis using the -** [sqlite3_extended_result_codes()] API. -** -** Some of the available extended result codes are listed here. -** One may expect the number of extended result codes will be expand -** over time. Software that uses extended result codes should expect -** to see new result codes in future releases of SQLite. -** -** The SQLITE_OK result code will never be extended. It will always -** be exactly zero. +** [sqlite3_extended_result_codes()] API. Or, the extended code for +** the most recent error can be obtained using +** [sqlite3_extended_errcode()]. 
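For example, a sketch of opting in to the extended codes on one connection and reading the extended code after a failure (the open handle db and the table t are assumptions):

    #include <stdio.h>
    #include "sqlite3.h"

    static void demo_extended(sqlite3 *db){
      sqlite3_extended_result_codes(db, 1);            /* enable on this db */
      int rc = sqlite3_exec(db, "INSERT INTO t VALUES(1)", 0, 0, 0);
      if( rc!=SQLITE_OK ){
        /* rc may now be an extended code such as SQLITE_IOERR_WRITE;
        ** sqlite3_extended_errcode() reports the same information. */
        printf("rc=%d extended=%d\n", rc, sqlite3_extended_errcode(db));
      }
    }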
*/ #define SQLITE_IOERR_READ (SQLITE_IOERR | (1<<8)) #define SQLITE_IOERR_SHORT_READ (SQLITE_IOERR | (2<<8)) #define SQLITE_IOERR_WRITE (SQLITE_IOERR | (3<<8)) #define SQLITE_IOERR_FSYNC (SQLITE_IOERR | (4<<8)) @@ -447,26 +466,64 @@ #define SQLITE_IOERR_ACCESS (SQLITE_IOERR | (13<<8)) #define SQLITE_IOERR_CHECKRESERVEDLOCK (SQLITE_IOERR | (14<<8)) #define SQLITE_IOERR_LOCK (SQLITE_IOERR | (15<<8)) #define SQLITE_IOERR_CLOSE (SQLITE_IOERR | (16<<8)) #define SQLITE_IOERR_DIR_CLOSE (SQLITE_IOERR | (17<<8)) -#define SQLITE_LOCKED_SHAREDCACHE (SQLITE_LOCKED | (1<<8) ) +#define SQLITE_IOERR_SHMOPEN (SQLITE_IOERR | (18<<8)) +#define SQLITE_IOERR_SHMSIZE (SQLITE_IOERR | (19<<8)) +#define SQLITE_IOERR_SHMLOCK (SQLITE_IOERR | (20<<8)) +#define SQLITE_IOERR_SHMMAP (SQLITE_IOERR | (21<<8)) +#define SQLITE_IOERR_SEEK (SQLITE_IOERR | (22<<8)) +#define SQLITE_IOERR_DELETE_NOENT (SQLITE_IOERR | (23<<8)) +#define SQLITE_IOERR_MMAP (SQLITE_IOERR | (24<<8)) +#define SQLITE_IOERR_GETTEMPPATH (SQLITE_IOERR | (25<<8)) +#define SQLITE_IOERR_CONVPATH (SQLITE_IOERR | (26<<8)) +#define SQLITE_IOERR_VNODE (SQLITE_IOERR | (27<<8)) +#define SQLITE_IOERR_AUTH (SQLITE_IOERR | (28<<8)) +#define SQLITE_LOCKED_SHAREDCACHE (SQLITE_LOCKED | (1<<8)) +#define SQLITE_BUSY_RECOVERY (SQLITE_BUSY | (1<<8)) +#define SQLITE_BUSY_SNAPSHOT (SQLITE_BUSY | (2<<8)) +#define SQLITE_CANTOPEN_NOTEMPDIR (SQLITE_CANTOPEN | (1<<8)) +#define SQLITE_CANTOPEN_ISDIR (SQLITE_CANTOPEN | (2<<8)) +#define SQLITE_CANTOPEN_FULLPATH (SQLITE_CANTOPEN | (3<<8)) +#define SQLITE_CANTOPEN_CONVPATH (SQLITE_CANTOPEN | (4<<8)) +#define SQLITE_CORRUPT_VTAB (SQLITE_CORRUPT | (1<<8)) +#define SQLITE_READONLY_RECOVERY (SQLITE_READONLY | (1<<8)) +#define SQLITE_READONLY_CANTLOCK (SQLITE_READONLY | (2<<8)) +#define SQLITE_READONLY_ROLLBACK (SQLITE_READONLY | (3<<8)) +#define SQLITE_READONLY_DBMOVED (SQLITE_READONLY | (4<<8)) +#define SQLITE_ABORT_ROLLBACK (SQLITE_ABORT | (2<<8)) +#define SQLITE_CONSTRAINT_CHECK (SQLITE_CONSTRAINT | (1<<8)) +#define SQLITE_CONSTRAINT_COMMITHOOK (SQLITE_CONSTRAINT | (2<<8)) +#define SQLITE_CONSTRAINT_FOREIGNKEY (SQLITE_CONSTRAINT | (3<<8)) +#define SQLITE_CONSTRAINT_FUNCTION (SQLITE_CONSTRAINT | (4<<8)) +#define SQLITE_CONSTRAINT_NOTNULL (SQLITE_CONSTRAINT | (5<<8)) +#define SQLITE_CONSTRAINT_PRIMARYKEY (SQLITE_CONSTRAINT | (6<<8)) +#define SQLITE_CONSTRAINT_TRIGGER (SQLITE_CONSTRAINT | (7<<8)) +#define SQLITE_CONSTRAINT_UNIQUE (SQLITE_CONSTRAINT | (8<<8)) +#define SQLITE_CONSTRAINT_VTAB (SQLITE_CONSTRAINT | (9<<8)) +#define SQLITE_CONSTRAINT_ROWID (SQLITE_CONSTRAINT |(10<<8)) +#define SQLITE_NOTICE_RECOVER_WAL (SQLITE_NOTICE | (1<<8)) +#define SQLITE_NOTICE_RECOVER_ROLLBACK (SQLITE_NOTICE | (2<<8)) +#define SQLITE_WARNING_AUTOINDEX (SQLITE_WARNING | (1<<8)) +#define SQLITE_AUTH_USER (SQLITE_AUTH | (1<<8)) /* ** CAPI3REF: Flags For File Open Operations ** ** These bit values are intended for use in the ** 3rd parameter to the [sqlite3_open_v2()] interface and -** in the 4th parameter to the xOpen method of the -** [sqlite3_vfs] object. +** in the 4th parameter to the [sqlite3_vfs.xOpen] method. 
*/ #define SQLITE_OPEN_READONLY 0x00000001 /* Ok for sqlite3_open_v2() */ #define SQLITE_OPEN_READWRITE 0x00000002 /* Ok for sqlite3_open_v2() */ #define SQLITE_OPEN_CREATE 0x00000004 /* Ok for sqlite3_open_v2() */ #define SQLITE_OPEN_DELETEONCLOSE 0x00000008 /* VFS only */ #define SQLITE_OPEN_EXCLUSIVE 0x00000010 /* VFS only */ #define SQLITE_OPEN_AUTOPROXY 0x00000020 /* VFS only */ +#define SQLITE_OPEN_URI 0x00000040 /* Ok for sqlite3_open_v2() */ +#define SQLITE_OPEN_MEMORY 0x00000080 /* Ok for sqlite3_open_v2() */ #define SQLITE_OPEN_MAIN_DB 0x00000100 /* VFS only */ #define SQLITE_OPEN_TEMP_DB 0x00000200 /* VFS only */ #define SQLITE_OPEN_TRANSIENT_DB 0x00000400 /* VFS only */ #define SQLITE_OPEN_MAIN_JOURNAL 0x00000800 /* VFS only */ #define SQLITE_OPEN_TEMP_JOURNAL 0x00001000 /* VFS only */ @@ -474,16 +531,19 @@ #define SQLITE_OPEN_MASTER_JOURNAL 0x00004000 /* VFS only */ #define SQLITE_OPEN_NOMUTEX 0x00008000 /* Ok for sqlite3_open_v2() */ #define SQLITE_OPEN_FULLMUTEX 0x00010000 /* Ok for sqlite3_open_v2() */ #define SQLITE_OPEN_SHAREDCACHE 0x00020000 /* Ok for sqlite3_open_v2() */ #define SQLITE_OPEN_PRIVATECACHE 0x00040000 /* Ok for sqlite3_open_v2() */ +#define SQLITE_OPEN_WAL 0x00080000 /* VFS only */ + +/* Reserved: 0x00F00000 */ /* ** CAPI3REF: Device Characteristics ** -** The xDeviceCapabilities method of the [sqlite3_io_methods] -** object returns an integer which is a vector of the these +** The xDeviceCharacteristics method of the [sqlite3_io_methods] +** object returns an integer which is a vector of these ** bit values expressing I/O characteristics of the mass storage ** device that holds the file that the [sqlite3_io_methods] ** refers to. ** ** The SQLITE_IOCAP_ATOMIC property means that all writes of @@ -493,23 +553,34 @@ ** nnn are atomic. The SQLITE_IOCAP_SAFE_APPEND value means ** that when data is appended to a file, the data is appended ** first then the size of the file is extended, never the other ** way around. The SQLITE_IOCAP_SEQUENTIAL property means that ** information is written to disk in the same order as calls -** to xWrite(). +** to xWrite(). The SQLITE_IOCAP_POWERSAFE_OVERWRITE property means that +** after reboot following a crash or power loss, the only bytes in a +** file that were written at the application level might have changed +** and that adjacent bytes, even bytes within the same sector are +** guaranteed to be unchanged. The SQLITE_IOCAP_UNDELETABLE_WHEN_OPEN +** flag indicate that a file cannot be deleted when open. The +** SQLITE_IOCAP_IMMUTABLE flag indicates that the file is on +** read-only media and cannot be changed even by processes with +** elevated privileges. 
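A short sketch combining flags above that are marked "Ok for sqlite3_open_v2()"; the filename demo.db is an assumption, and passing 0 for the final argument selects the default VFS:

    #include "sqlite3.h"

    static int demo_open(sqlite3 **ppDb){
      /* read/write, create if missing, serialized access to this handle */
      return sqlite3_open_v2("demo.db", ppDb,
                             SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
                             SQLITE_OPEN_FULLMUTEX,
                             0);
    }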
*/ -#define SQLITE_IOCAP_ATOMIC 0x00000001 -#define SQLITE_IOCAP_ATOMIC512 0x00000002 -#define SQLITE_IOCAP_ATOMIC1K 0x00000004 -#define SQLITE_IOCAP_ATOMIC2K 0x00000008 -#define SQLITE_IOCAP_ATOMIC4K 0x00000010 -#define SQLITE_IOCAP_ATOMIC8K 0x00000020 -#define SQLITE_IOCAP_ATOMIC16K 0x00000040 -#define SQLITE_IOCAP_ATOMIC32K 0x00000080 -#define SQLITE_IOCAP_ATOMIC64K 0x00000100 -#define SQLITE_IOCAP_SAFE_APPEND 0x00000200 -#define SQLITE_IOCAP_SEQUENTIAL 0x00000400 +#define SQLITE_IOCAP_ATOMIC 0x00000001 +#define SQLITE_IOCAP_ATOMIC512 0x00000002 +#define SQLITE_IOCAP_ATOMIC1K 0x00000004 +#define SQLITE_IOCAP_ATOMIC2K 0x00000008 +#define SQLITE_IOCAP_ATOMIC4K 0x00000010 +#define SQLITE_IOCAP_ATOMIC8K 0x00000020 +#define SQLITE_IOCAP_ATOMIC16K 0x00000040 +#define SQLITE_IOCAP_ATOMIC32K 0x00000080 +#define SQLITE_IOCAP_ATOMIC64K 0x00000100 +#define SQLITE_IOCAP_SAFE_APPEND 0x00000200 +#define SQLITE_IOCAP_SEQUENTIAL 0x00000400 +#define SQLITE_IOCAP_UNDELETABLE_WHEN_OPEN 0x00000800 +#define SQLITE_IOCAP_POWERSAFE_OVERWRITE 0x00001000 +#define SQLITE_IOCAP_IMMUTABLE 0x00002000 /* ** CAPI3REF: File Locking Levels ** ** SQLite uses one of these integer values as the second @@ -533,10 +604,22 @@ ** sync operation only needs to flush data to mass storage. Inode ** information need not be flushed. If the lower four bits of the flag ** equal SQLITE_SYNC_NORMAL, that means to use normal fsync() semantics. ** If the lower four bits equal SQLITE_SYNC_FULL, that means ** to use Mac OS X style fullsync instead of fsync(). +** +** Do not confuse the SQLITE_SYNC_NORMAL and SQLITE_SYNC_FULL flags +** with the [PRAGMA synchronous]=NORMAL and [PRAGMA synchronous]=FULL +** settings. The [synchronous pragma] determines when calls to the +** xSync VFS method occur and applies uniformly across all platforms. +** The SQLITE_SYNC_NORMAL and SQLITE_SYNC_FULL flags determine how +** energetic or rigorous or forceful the sync operations are and +** only make a difference on Mac OSX for the default SQLite code. +** (Third-party VFS implementations might also make the distinction +** between SQLITE_SYNC_NORMAL and SQLITE_SYNC_FULL, but among the +** operating systems natively supported by SQLite, only Mac OSX +** cares about the difference.) */ #define SQLITE_SYNC_NORMAL 0x00002 #define SQLITE_SYNC_FULL 0x00003 #define SQLITE_SYNC_DATAONLY 0x00010 @@ -557,21 +640,22 @@ }; /* ** CAPI3REF: OS Interface File Virtual Methods Object ** -** Every file opened by the [sqlite3_vfs] xOpen method populates an +** Every file opened by the [sqlite3_vfs.xOpen] method populates an ** [sqlite3_file] object (or, more commonly, a subclass of the ** [sqlite3_file] object) with a pointer to an instance of this object. ** This object defines the methods used to perform various operations ** against the open file represented by the [sqlite3_file] object. ** -** If the xOpen method sets the sqlite3_file.pMethods element +** If the [sqlite3_vfs.xOpen] method sets the sqlite3_file.pMethods element ** to a non-NULL pointer, then the sqlite3_io_methods.xClose method -** may be invoked even if the xOpen reported that it failed. The -** only way to prevent a call to xClose following a failed xOpen -** is for the xOpen to set the sqlite3_file.pMethods element to NULL. +** may be invoked even if the [sqlite3_vfs.xOpen] reported that it failed. The +** only way to prevent a call to xClose following a failed [sqlite3_vfs.xOpen] +** is for the [sqlite3_vfs.xOpen] to set the sqlite3_file.pMethods element +** to NULL. 
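A sketch of how a VFS might honor the SQLITE_SYNC_* values in its xSync method; the DemoFile wrapper and its fd member are hypothetical, and fdatasync() assumes a POSIX-like platform:

    #include <unistd.h>
    #include "sqlite3.h"

    typedef struct DemoFile DemoFile;
    struct DemoFile {
      sqlite3_file base;     /* must come first so SQLite sees an sqlite3_file */
      int fd;                /* underlying file descriptor */
    };

    static int demoSync(sqlite3_file *pFile, int flags){
      DemoFile *p = (DemoFile*)pFile;
      int rc = (flags & SQLITE_SYNC_DATAONLY)
                   ? fdatasync(p->fd)     /* flush file data only */
                   : fsync(p->fd);        /* flush data and inode */
      return rc ? SQLITE_IOERR_FSYNC : SQLITE_OK;
    }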
** ** The flags argument to xSync may be one of [SQLITE_SYNC_NORMAL] or ** [SQLITE_SYNC_FULL]. The first choice is the normal fsync(). ** The second choice is a Mac OS X style fullsync. The [SQLITE_SYNC_DATAONLY] ** flag may be ORed in to indicate that only the data of the file @@ -599,13 +683,15 @@ ** write return values. Potential uses for xFileControl() might be ** functions to enable blocking locks with timeouts, to change the ** locking strategy (for example to use dot-file locks), to inquire ** about the status of a lock, or to break stale locks. The SQLite ** core reserves all opcodes less than 100 for its own use. -** A [SQLITE_FCNTL_LOCKSTATE | list of opcodes] less than 100 is available. +** A [file control opcodes | list of opcodes] less than 100 is available. ** Applications that define a custom xFileControl method should use opcodes -** greater than 100 to avoid conflicts. +** greater than 100 to avoid conflicts. VFS implementations should +** return [SQLITE_NOTFOUND] for file control opcodes that they do not +** recognize. ** ** The xSectorSize() method returns the sector size of the ** device that underlies the file. The sector size is the ** minimum write that can be performed without disturbing ** other bytes in the file. The xDeviceCharacteristics() @@ -656,32 +742,288 @@ int (*xUnlock)(sqlite3_file*, int); int (*xCheckReservedLock)(sqlite3_file*, int *pResOut); int (*xFileControl)(sqlite3_file*, int op, void *pArg); int (*xSectorSize)(sqlite3_file*); int (*xDeviceCharacteristics)(sqlite3_file*); + /* Methods above are valid for version 1 */ + int (*xShmMap)(sqlite3_file*, int iPg, int pgsz, int, void volatile**); + int (*xShmLock)(sqlite3_file*, int offset, int n, int flags); + void (*xShmBarrier)(sqlite3_file*); + int (*xShmUnmap)(sqlite3_file*, int deleteFlag); + /* Methods above are valid for version 2 */ + int (*xFetch)(sqlite3_file*, sqlite3_int64 iOfst, int iAmt, void **pp); + int (*xUnfetch)(sqlite3_file*, sqlite3_int64 iOfst, void *p); + /* Methods above are valid for version 3 */ /* Additional methods may be added in future releases */ }; /* ** CAPI3REF: Standard File Control Opcodes +** KEYWORDS: {file control opcodes} {file control opcode} ** ** These integer constants are opcodes for the xFileControl method ** of the [sqlite3_io_methods] object and for the [sqlite3_file_control()] ** interface. ** +** <ul> +** <li>[[SQLITE_FCNTL_LOCKSTATE]] ** The [SQLITE_FCNTL_LOCKSTATE] opcode is used for debugging. This ** opcode causes the xFileControl method to write the current state of ** the lock (one of [SQLITE_LOCK_NONE], [SQLITE_LOCK_SHARED], ** [SQLITE_LOCK_RESERVED], [SQLITE_LOCK_PENDING], or [SQLITE_LOCK_EXCLUSIVE]) ** into an integer that the pArg argument points to. This capability -** is used during testing and only needs to be supported when SQLITE_TEST -** is defined. +** is used during testing and is only available when the SQLITE_TEST +** compile-time option is used. +** +** <li>[[SQLITE_FCNTL_SIZE_HINT]] +** The [SQLITE_FCNTL_SIZE_HINT] opcode is used by SQLite to give the VFS +** layer a hint of how large the database file will grow to be during the +** current transaction. This hint is not guaranteed to be accurate but it +** is often close. The underlying VFS might choose to preallocate database +** file space based on this hint in order to help writes to the database +** file run faster. 
+** +** <li>[[SQLITE_FCNTL_CHUNK_SIZE]] +** The [SQLITE_FCNTL_CHUNK_SIZE] opcode is used to request that the VFS +** extends and truncates the database file in chunks of a size specified +** by the user. The fourth argument to [sqlite3_file_control()] should +** point to an integer (type int) containing the new chunk-size to use +** for the nominated database. Allocating database file space in large +** chunks (say 1MB at a time), may reduce file-system fragmentation and +** improve performance on some systems. +** +** <li>[[SQLITE_FCNTL_FILE_POINTER]] +** The [SQLITE_FCNTL_FILE_POINTER] opcode is used to obtain a pointer +** to the [sqlite3_file] object associated with a particular database +** connection. See also [SQLITE_FCNTL_JOURNAL_POINTER]. +** +** <li>[[SQLITE_FCNTL_JOURNAL_POINTER]] +** The [SQLITE_FCNTL_JOURNAL_POINTER] opcode is used to obtain a pointer +** to the [sqlite3_file] object associated with the journal file (either +** the [rollback journal] or the [write-ahead log]) for a particular database +** connection. See also [SQLITE_FCNTL_FILE_POINTER]. +** +** <li>[[SQLITE_FCNTL_SYNC_OMITTED]] +** No longer in use. +** +** <li>[[SQLITE_FCNTL_SYNC]] +** The [SQLITE_FCNTL_SYNC] opcode is generated internally by SQLite and +** sent to the VFS immediately before the xSync method is invoked on a +** database file descriptor. Or, if the xSync method is not invoked +** because the user has configured SQLite with +** [PRAGMA synchronous | PRAGMA synchronous=OFF] it is invoked in place +** of the xSync method. In most cases, the pointer argument passed with +** this file-control is NULL. However, if the database file is being synced +** as part of a multi-database commit, the argument points to a nul-terminated +** string containing the transactions master-journal file name. VFSes that +** do not need this signal should silently ignore this opcode. Applications +** should not call [sqlite3_file_control()] with this opcode as doing so may +** disrupt the operation of the specialized VFSes that do require it. +** +** <li>[[SQLITE_FCNTL_COMMIT_PHASETWO]] +** The [SQLITE_FCNTL_COMMIT_PHASETWO] opcode is generated internally by SQLite +** and sent to the VFS after a transaction has been committed immediately +** but before the database is unlocked. VFSes that do not need this signal +** should silently ignore this opcode. Applications should not call +** [sqlite3_file_control()] with this opcode as doing so may disrupt the +** operation of the specialized VFSes that do require it. +** +** <li>[[SQLITE_FCNTL_WIN32_AV_RETRY]] +** ^The [SQLITE_FCNTL_WIN32_AV_RETRY] opcode is used to configure automatic +** retry counts and intervals for certain disk I/O operations for the +** windows [VFS] in order to provide robustness in the presence of +** anti-virus programs. By default, the windows VFS will retry file read, +** file write, and file delete operations up to 10 times, with a delay +** of 25 milliseconds before the first retry and with the delay increasing +** by an additional 25 milliseconds with each subsequent retry. This +** opcode allows these two values (10 retries and 25 milliseconds of delay) +** to be adjusted. The values are changed for all database connections +** within the same process. The argument is a pointer to an array of two +** integers where the first integer i the new retry count and the second +** integer is the delay. 
If either integer is negative, then the setting +** is not changed but instead the prior value of that setting is written +** into the array entry, allowing the current retry settings to be +** interrogated. The zDbName parameter is ignored. +** +** <li>[[SQLITE_FCNTL_PERSIST_WAL]] +** ^The [SQLITE_FCNTL_PERSIST_WAL] opcode is used to set or query the +** persistent [WAL | Write Ahead Log] setting. By default, the auxiliary +** write ahead log and shared memory files used for transaction control +** are automatically deleted when the latest connection to the database +** closes. Setting persistent WAL mode causes those files to persist after +** close. Persisting the files is useful when other processes that do not +** have write permission on the directory containing the database file want +** to read the database file, as the WAL and shared memory files must exist +** in order for the database to be readable. The fourth parameter to +** [sqlite3_file_control()] for this opcode should be a pointer to an integer. +** That integer is 0 to disable persistent WAL mode or 1 to enable persistent +** WAL mode. If the integer is -1, then it is overwritten with the current +** WAL persistence setting. +** +** <li>[[SQLITE_FCNTL_POWERSAFE_OVERWRITE]] +** ^The [SQLITE_FCNTL_POWERSAFE_OVERWRITE] opcode is used to set or query the +** persistent "powersafe-overwrite" or "PSOW" setting. The PSOW setting +** determines the [SQLITE_IOCAP_POWERSAFE_OVERWRITE] bit of the +** xDeviceCharacteristics methods. The fourth parameter to +** [sqlite3_file_control()] for this opcode should be a pointer to an integer. +** That integer is 0 to disable zero-damage mode or 1 to enable zero-damage +** mode. If the integer is -1, then it is overwritten with the current +** zero-damage mode setting. +** +** <li>[[SQLITE_FCNTL_OVERWRITE]] +** ^The [SQLITE_FCNTL_OVERWRITE] opcode is invoked by SQLite after opening +** a write transaction to indicate that, unless it is rolled back for some +** reason, the entire database file will be overwritten by the current +** transaction. This is used by VACUUM operations. +** +** <li>[[SQLITE_FCNTL_VFSNAME]] +** ^The [SQLITE_FCNTL_VFSNAME] opcode can be used to obtain the names of +** all [VFSes] in the VFS stack. The names are of all VFS shims and the +** final bottom-level VFS are written into memory obtained from +** [sqlite3_malloc()] and the result is stored in the char* variable +** that the fourth parameter of [sqlite3_file_control()] points to. +** The caller is responsible for freeing the memory when done. As with +** all file-control actions, there is no guarantee that this will actually +** do anything. Callers should initialize the char* variable to a NULL +** pointer in case this file-control is not implemented. This file-control +** is intended for diagnostic use only. +** +** <li>[[SQLITE_FCNTL_VFS_POINTER]] +** ^The [SQLITE_FCNTL_VFS_POINTER] opcode finds a pointer to the top-level +** [VFSes] currently in use. ^(The argument X in +** sqlite3_file_control(db,SQLITE_FCNTL_VFS_POINTER,X) must be +** of type "[sqlite3_vfs] **". This opcodes will set *X +** to a pointer to the top-level VFS.)^ +** ^When there are multiple VFS shims in the stack, this opcode finds the +** upper-most shim only. +** +** <li>[[SQLITE_FCNTL_PRAGMA]] +** ^Whenever a [PRAGMA] statement is parsed, an [SQLITE_FCNTL_PRAGMA] +** file control is sent to the open [sqlite3_file] object corresponding +** to the database file to which the pragma statement refers. 
^The argument +** to the [SQLITE_FCNTL_PRAGMA] file control is an array of +** pointers to strings (char**) in which the second element of the array +** is the name of the pragma and the third element is the argument to the +** pragma or NULL if the pragma has no argument. ^The handler for an +** [SQLITE_FCNTL_PRAGMA] file control can optionally make the first element +** of the char** argument point to a string obtained from [sqlite3_mprintf()] +** or the equivalent and that string will become the result of the pragma or +** the error message if the pragma fails. ^If the +** [SQLITE_FCNTL_PRAGMA] file control returns [SQLITE_NOTFOUND], then normal +** [PRAGMA] processing continues. ^If the [SQLITE_FCNTL_PRAGMA] +** file control returns [SQLITE_OK], then the parser assumes that the +** VFS has handled the PRAGMA itself and the parser generates a no-op +** prepared statement if result string is NULL, or that returns a copy +** of the result string if the string is non-NULL. +** ^If the [SQLITE_FCNTL_PRAGMA] file control returns +** any result code other than [SQLITE_OK] or [SQLITE_NOTFOUND], that means +** that the VFS encountered an error while handling the [PRAGMA] and the +** compilation of the PRAGMA fails with an error. ^The [SQLITE_FCNTL_PRAGMA] +** file control occurs at the beginning of pragma statement analysis and so +** it is able to override built-in [PRAGMA] statements. +** +** <li>[[SQLITE_FCNTL_BUSYHANDLER]] +** ^The [SQLITE_FCNTL_BUSYHANDLER] +** file-control may be invoked by SQLite on the database file handle +** shortly after it is opened in order to provide a custom VFS with access +** to the connections busy-handler callback. The argument is of type (void **) +** - an array of two (void *) values. The first (void *) actually points +** to a function of type (int (*)(void *)). In order to invoke the connections +** busy-handler, this function should be invoked with the second (void *) in +** the array as the only argument. If it returns non-zero, then the operation +** should be retried. If it returns zero, the custom VFS should abandon the +** current operation. +** +** <li>[[SQLITE_FCNTL_TEMPFILENAME]] +** ^Application can invoke the [SQLITE_FCNTL_TEMPFILENAME] file-control +** to have SQLite generate a +** temporary filename using the same algorithm that is followed to generate +** temporary filenames for TEMP tables and other internal uses. The +** argument should be a char** which will be filled with the filename +** written into memory obtained from [sqlite3_malloc()]. The caller should +** invoke [sqlite3_free()] on the result to avoid a memory leak. +** +** <li>[[SQLITE_FCNTL_MMAP_SIZE]] +** The [SQLITE_FCNTL_MMAP_SIZE] file control is used to query or set the +** maximum number of bytes that will be used for memory-mapped I/O. +** The argument is a pointer to a value of type sqlite3_int64 that +** is an advisory maximum number of bytes in the file to memory map. The +** pointer is overwritten with the old value. The limit is not changed if +** the value originally pointed to is negative, and so the current limit +** can be queried by passing in a pointer to a negative number. This +** file-control is used internally to implement [PRAGMA mmap_size]. +** +** <li>[[SQLITE_FCNTL_TRACE]] +** The [SQLITE_FCNTL_TRACE] file control provides advisory information +** to the VFS about what the higher layers of the SQLite stack are doing. +** This file control is used by some VFS activity tracing [shims]. +** The argument is a zero-terminated string. 
Higher layers in the +** SQLite stack may generate instances of this file control if +** the [SQLITE_USE_FCNTL_TRACE] compile-time option is enabled. +** +** <li>[[SQLITE_FCNTL_HAS_MOVED]] +** The [SQLITE_FCNTL_HAS_MOVED] file control interprets its argument as a +** pointer to an integer and it writes a boolean into that integer depending +** on whether or not the file has been renamed, moved, or deleted since it +** was first opened. +** +** <li>[[SQLITE_FCNTL_WIN32_SET_HANDLE]] +** The [SQLITE_FCNTL_WIN32_SET_HANDLE] opcode is used for debugging. This +** opcode causes the xFileControl method to swap the file handle with the one +** pointed to by the pArg argument. This capability is used during testing +** and only needs to be supported when SQLITE_TEST is defined. +** +** <li>[[SQLITE_FCNTL_WAL_BLOCK]] +** The [SQLITE_FCNTL_WAL_BLOCK] is a signal to the VFS layer that it might +** be advantageous to block on the next WAL lock if the lock is not immediately +** available. The WAL subsystem issues this signal during rare +** circumstances in order to fix a problem with priority inversion. +** Applications should <em>not</em> use this file-control. +** +** <li>[[SQLITE_FCNTL_ZIPVFS]] +** The [SQLITE_FCNTL_ZIPVFS] opcode is implemented by zipvfs only. All other +** VFS should return SQLITE_NOTFOUND for this opcode. +** +** <li>[[SQLITE_FCNTL_RBU]] +** The [SQLITE_FCNTL_RBU] opcode is implemented by the special VFS used by +** the RBU extension only. All other VFS should return SQLITE_NOTFOUND for +** this opcode. +** </ul> */ -#define SQLITE_FCNTL_LOCKSTATE 1 -#define SQLITE_GET_LOCKPROXYFILE 2 -#define SQLITE_SET_LOCKPROXYFILE 3 -#define SQLITE_LAST_ERRNO 4 +#define SQLITE_FCNTL_LOCKSTATE 1 +#define SQLITE_FCNTL_GET_LOCKPROXYFILE 2 +#define SQLITE_FCNTL_SET_LOCKPROXYFILE 3 +#define SQLITE_FCNTL_LAST_ERRNO 4 +#define SQLITE_FCNTL_SIZE_HINT 5 +#define SQLITE_FCNTL_CHUNK_SIZE 6 +#define SQLITE_FCNTL_FILE_POINTER 7 +#define SQLITE_FCNTL_SYNC_OMITTED 8 +#define SQLITE_FCNTL_WIN32_AV_RETRY 9 +#define SQLITE_FCNTL_PERSIST_WAL 10 +#define SQLITE_FCNTL_OVERWRITE 11 +#define SQLITE_FCNTL_VFSNAME 12 +#define SQLITE_FCNTL_POWERSAFE_OVERWRITE 13 +#define SQLITE_FCNTL_PRAGMA 14 +#define SQLITE_FCNTL_BUSYHANDLER 15 +#define SQLITE_FCNTL_TEMPFILENAME 16 +#define SQLITE_FCNTL_MMAP_SIZE 18 +#define SQLITE_FCNTL_TRACE 19 +#define SQLITE_FCNTL_HAS_MOVED 20 +#define SQLITE_FCNTL_SYNC 21 +#define SQLITE_FCNTL_COMMIT_PHASETWO 22 +#define SQLITE_FCNTL_WIN32_SET_HANDLE 23 +#define SQLITE_FCNTL_WAL_BLOCK 24 +#define SQLITE_FCNTL_ZIPVFS 25 +#define SQLITE_FCNTL_RBU 26 +#define SQLITE_FCNTL_VFS_POINTER 27 +#define SQLITE_FCNTL_JOURNAL_POINTER 28 + +/* deprecated names */ +#define SQLITE_GET_LOCKPROXYFILE SQLITE_FCNTL_GET_LOCKPROXYFILE +#define SQLITE_SET_LOCKPROXYFILE SQLITE_FCNTL_SET_LOCKPROXYFILE +#define SQLITE_LAST_ERRNO SQLITE_FCNTL_LAST_ERRNO + /* ** CAPI3REF: Mutex Handle ** ** The mutex module within SQLite defines [sqlite3_mutex] to be an @@ -696,11 +1038,12 @@ /* ** CAPI3REF: OS Interface Object ** ** An instance of the sqlite3_vfs object defines the interface between ** the SQLite core and the underlying operating system. The "vfs" -** in the name of the object stands for "virtual file system". +** in the name of the object stands for "virtual file system". See +** the [VFS | VFS documentation] for further information. ** ** The value of the iVersion field is initially 1 but may be larger in ** future versions of SQLite. 
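As an illustration of the file-control mechanism, a sketch that queries the persistent-WAL setting of the "main" database without changing it; an open connection db is assumed, and per the description above a value of -1 means "report the current setting":

    #include <stdio.h>
    #include "sqlite3.h"

    static void demo_persist_wal(sqlite3 *db){
      int bPersist = -1;                       /* query only, do not change */
      if( sqlite3_file_control(db, "main",
                               SQLITE_FCNTL_PERSIST_WAL, &bPersist)==SQLITE_OK ){
        printf("persistent WAL: %s\n", bPersist ? "on" : "off");
      }
    }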
Additional fields may be appended to this ** object when the iVersion value is increased. Note that the structure ** of the sqlite3_vfs object changes in the transaction between @@ -725,19 +1068,24 @@ ** object once the object has been registered. ** ** The zName field holds the name of the VFS module. The name must ** be unique across all VFS modules. ** -** SQLite will guarantee that the zFilename parameter to xOpen +** [[sqlite3_vfs.xOpen]] +** ^SQLite guarantees that the zFilename parameter to xOpen ** is either a NULL pointer or string obtained -** from xFullPathname(). SQLite further guarantees that +** from xFullPathname() with an optional suffix added. +** ^If a suffix is added to the zFilename parameter, it will +** consist of a single "-" character followed by no more than +** 11 alphanumeric and/or "-" characters. +** ^SQLite further guarantees that ** the string will be valid and unchanged until xClose() is ** called. Because of the previous sentence, ** the [sqlite3_file] can safely store a pointer to the ** filename if it needs to remember the filename for some reason. -** If the zFilename parameter is xOpen is a NULL pointer then xOpen -** must invent its own temporary name for the file. Whenever the +** If the zFilename parameter to xOpen is a NULL pointer then xOpen +** must invent its own temporary name for the file. ^Whenever the ** xFilename parameter is NULL it will also be the case that the ** flags parameter will include [SQLITE_OPEN_DELETEONCLOSE]. ** ** The flags argument to xOpen() includes all bits set in ** the flags argument to [sqlite3_open_v2()]. Or if [sqlite3_open()] @@ -744,11 +1092,11 @@ ** or [sqlite3_open16()] is used, then flags includes at least ** [SQLITE_OPEN_READWRITE] | [SQLITE_OPEN_CREATE]. ** If xOpen() opens a file read-only then it sets *pOutFlags to ** include [SQLITE_OPEN_READONLY]. Other bits in *pOutFlags may be set. ** -** SQLite will also add one of the following flags to the xOpen() +** ^(SQLite will also add one of the following flags to the xOpen() ** call, depending on the object being opened: ** ** <ul> ** <li> [SQLITE_OPEN_MAIN_DB] ** <li> [SQLITE_OPEN_MAIN_JOURNAL] @@ -755,11 +1103,12 @@ ** <li> [SQLITE_OPEN_TEMP_DB] ** <li> [SQLITE_OPEN_TEMP_JOURNAL] ** <li> [SQLITE_OPEN_TRANSIENT_DB] ** <li> [SQLITE_OPEN_SUBJOURNAL] ** <li> [SQLITE_OPEN_MASTER_JOURNAL] -** </ul> +** <li> [SQLITE_OPEN_WAL] +** </ul>)^ ** ** The file I/O implementation can use the object type flags to ** change the way it deals with files. For example, an application ** that does not care about crash recovery or rollback might make ** the open of a journal file a no-op. Writes to this journal would @@ -774,59 +1123,81 @@ ** <li> [SQLITE_OPEN_DELETEONCLOSE] ** <li> [SQLITE_OPEN_EXCLUSIVE] ** </ul> ** ** The [SQLITE_OPEN_DELETEONCLOSE] flag means the file should be -** deleted when it is closed. The [SQLITE_OPEN_DELETEONCLOSE] -** will be set for TEMP databases, journals and for subjournals. +** deleted when it is closed. ^The [SQLITE_OPEN_DELETEONCLOSE] +** will be set for TEMP databases and their journals, transient +** databases, and subjournals. ** -** The [SQLITE_OPEN_EXCLUSIVE] flag is always used in conjunction +** ^The [SQLITE_OPEN_EXCLUSIVE] flag is always used in conjunction ** with the [SQLITE_OPEN_CREATE] flag, which are both directly ** analogous to the O_EXCL and O_CREAT flags of the POSIX open() ** API. 
The SQLITE_OPEN_EXCLUSIVE flag, when paired with the ** SQLITE_OPEN_CREATE, is used to indicate that file should always ** be created, and that it is an error if it already exists. ** It is <i>not</i> used to indicate the file should be opened ** for exclusive access. ** -** At least szOsFile bytes of memory are allocated by SQLite +** ^At least szOsFile bytes of memory are allocated by SQLite ** to hold the [sqlite3_file] structure passed as the third ** argument to xOpen. The xOpen method does not have to ** allocate the structure; it should just fill it in. Note that ** the xOpen method must set the sqlite3_file.pMethods to either ** a valid [sqlite3_io_methods] object or to NULL. xOpen must do ** this even if the open fails. SQLite expects that the sqlite3_file.pMethods ** element will be valid after xOpen returns regardless of the success ** or failure of the xOpen call. ** -** The flags argument to xAccess() may be [SQLITE_ACCESS_EXISTS] +** [[sqlite3_vfs.xAccess]] +** ^The flags argument to xAccess() may be [SQLITE_ACCESS_EXISTS] ** to test for the existence of a file, or [SQLITE_ACCESS_READWRITE] to ** test whether a file is readable and writable, or [SQLITE_ACCESS_READ] ** to test whether a file is at least readable. The file can be a ** directory. ** -** SQLite will always allocate at least mxPathname+1 bytes for the +** ^SQLite will always allocate at least mxPathname+1 bytes for the ** output buffer xFullPathname. The exact size of the output buffer ** is also passed as a parameter to both methods. If the output buffer ** is not large enough, [SQLITE_CANTOPEN] should be returned. Since this is ** handled as a fatal error by SQLite, vfs implementations should endeavor ** to prevent this by setting mxPathname to a sufficiently large value. ** -** The xRandomness(), xSleep(), and xCurrentTime() interfaces -** are not strictly a part of the filesystem, but they are +** The xRandomness(), xSleep(), xCurrentTime(), and xCurrentTimeInt64() +** interfaces are not strictly a part of the filesystem, but they are ** included in the VFS structure for completeness. ** The xRandomness() function attempts to return nBytes bytes ** of good-quality randomness into zOut. The return value is ** the actual number of bytes of randomness obtained. ** The xSleep() method causes the calling thread to sleep for at -** least the number of microseconds given. The xCurrentTime() -** method returns a Julian Day Number for the current date and time. +** least the number of microseconds given. ^The xCurrentTime() +** method returns a Julian Day Number for the current date and time as +** a floating point value. +** ^The xCurrentTimeInt64() method returns, as an integer, the Julian +** Day Number multiplied by 86400000 (the number of milliseconds in +** a 24-hour day). +** ^SQLite will use the xCurrentTimeInt64() method to get the current +** date and time if that method is available (if iVersion is 2 or +** greater and the function pointer is not NULL) and will fall back +** to xCurrentTime() if xCurrentTimeInt64() is unavailable. ** +** ^The xSetSystemCall(), xGetSystemCall(), and xNestSystemCall() interfaces +** are not used by the SQLite core. These optional interfaces are provided +** by some VFSes to facilitate testing of the VFS code. By overriding +** system calls with functions under its control, a test program can +** simulate faults and error conditions that would otherwise be difficult +** or impossible to induce. 
The set of system calls that can be overridden +** varies from one VFS to another, and from one version of the same VFS to the +** next. Applications that use these interfaces must be prepared for any +** or all of these interfaces to be NULL or for their behavior to change +** from one release to the next. Applications must not attempt to access +** any of these methods if the iVersion of the VFS is less than 3. */ typedef struct sqlite3_vfs sqlite3_vfs; +typedef void (*sqlite3_syscall_ptr)(void); struct sqlite3_vfs { - int iVersion; /* Structure version number */ + int iVersion; /* Structure version number (currently 3) */ int szOsFile; /* Size of subclassed sqlite3_file */ int mxPathname; /* Maximum file pathname length */ sqlite3_vfs *pNext; /* Next registered VFS */ const char *zName; /* Name of this virtual file system */ void *pAppData; /* Pointer to application-specific data */ @@ -841,12 +1212,27 @@ void (*xDlClose)(sqlite3_vfs*, void*); int (*xRandomness)(sqlite3_vfs*, int nByte, char *zOut); int (*xSleep)(sqlite3_vfs*, int microseconds); int (*xCurrentTime)(sqlite3_vfs*, double*); int (*xGetLastError)(sqlite3_vfs*, int, char *); - /* New fields may be appended in figure versions. The iVersion - ** value will increment whenever this happens. */ + /* + ** The methods above are in version 1 of the sqlite_vfs object + ** definition. Those that follow are added in version 2 or later + */ + int (*xCurrentTimeInt64)(sqlite3_vfs*, sqlite3_int64*); + /* + ** The methods above are in versions 1 and 2 of the sqlite_vfs object. + ** Those below are for version 3 and greater. + */ + int (*xSetSystemCall)(sqlite3_vfs*, const char *zName, sqlite3_syscall_ptr); + sqlite3_syscall_ptr (*xGetSystemCall)(sqlite3_vfs*, const char *zName); + const char *(*xNextSystemCall)(sqlite3_vfs*, const char *zName); + /* + ** The methods above are in versions 1 through 3 of the sqlite_vfs object. + ** New fields may be appended in figure versions. The iVersion + ** value will increment whenever this happens. + */ }; /* ** CAPI3REF: Flags for the xAccess VFS method ** @@ -854,17 +1240,62 @@ ** the xAccess method of an [sqlite3_vfs] object. They determine ** what kind of permissions the xAccess method is looking for. ** With SQLITE_ACCESS_EXISTS, the xAccess method ** simply checks whether the file exists. ** With SQLITE_ACCESS_READWRITE, the xAccess method -** checks whether the file is both readable and writable. +** checks whether the named directory is both readable and writable +** (in other words, if files can be added, removed, and renamed within +** the directory). +** The SQLITE_ACCESS_READWRITE constant is currently used only by the +** [temp_store_directory pragma], though this could change in a future +** release of SQLite. ** With SQLITE_ACCESS_READ, the xAccess method -** checks whether the file is readable. +** checks whether the file is readable. The SQLITE_ACCESS_READ constant is +** currently unused, though it might be used in a future release of +** SQLite. */ #define SQLITE_ACCESS_EXISTS 0 -#define SQLITE_ACCESS_READWRITE 1 -#define SQLITE_ACCESS_READ 2 +#define SQLITE_ACCESS_READWRITE 1 /* Used by PRAGMA temp_store_directory */ +#define SQLITE_ACCESS_READ 2 /* Unused */ + +/* +** CAPI3REF: Flags for the xShmLock VFS method +** +** These integer constants define the various locking operations +** allowed by the xShmLock method of [sqlite3_io_methods]. 
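A sketch of the version-checked fallback described above, reading the current time from a given VFS; callers must not touch fields beyond what iVersion promises:

    #include "sqlite3.h"

    static sqlite3_int64 demo_now_ms(sqlite3_vfs *pVfs){
      sqlite3_int64 t = 0;
      if( pVfs->iVersion>=2 && pVfs->xCurrentTimeInt64 ){
        pVfs->xCurrentTimeInt64(pVfs, &t);   /* Julian Day * 86400000 */
      }else{
        double r = 0.0;
        pVfs->xCurrentTime(pVfs, &r);        /* Julian Day as a double */
        t = (sqlite3_int64)(r*86400000.0);
      }
      return t;
    }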
The +** following are the only legal combinations of flags to the +** xShmLock method: +** +** <ul> +** <li> SQLITE_SHM_LOCK | SQLITE_SHM_SHARED +** <li> SQLITE_SHM_LOCK | SQLITE_SHM_EXCLUSIVE +** <li> SQLITE_SHM_UNLOCK | SQLITE_SHM_SHARED +** <li> SQLITE_SHM_UNLOCK | SQLITE_SHM_EXCLUSIVE +** </ul> +** +** When unlocking, the same SHARED or EXCLUSIVE flag must be supplied as +** was given on the corresponding lock. +** +** The xShmLock method can transition between unlocked and SHARED or +** between unlocked and EXCLUSIVE. It cannot transition between SHARED +** and EXCLUSIVE. +*/ +#define SQLITE_SHM_UNLOCK 1 +#define SQLITE_SHM_LOCK 2 +#define SQLITE_SHM_SHARED 4 +#define SQLITE_SHM_EXCLUSIVE 8 + +/* +** CAPI3REF: Maximum xShmLock index +** +** The xShmLock method on [sqlite3_io_methods] may use values +** between 0 and this upper bound as its "offset" argument. +** The SQLite core will never attempt to acquire or release a +** lock outside of this range +*/ +#define SQLITE_SHM_NLOCK 8 + /* ** CAPI3REF: Initialize The SQLite Library ** ** ^The sqlite3_initialize() routine initializes the @@ -937,14 +1368,14 @@ ** sqlite3_os_init() and sqlite3_os_end(). An application-supplied ** implementation of sqlite3_os_init() or sqlite3_os_end() ** must return [SQLITE_OK] on success and some other [error code] upon ** failure. */ -SQLITE_API int sqlite3_initialize(void); -SQLITE_API int sqlite3_shutdown(void); -SQLITE_API int sqlite3_os_init(void); -SQLITE_API int sqlite3_os_end(void); +SQLITE_API int SQLITE_STDCALL sqlite3_initialize(void); +SQLITE_API int SQLITE_STDCALL sqlite3_shutdown(void); +SQLITE_API int SQLITE_STDCALL sqlite3_os_init(void); +SQLITE_API int SQLITE_STDCALL sqlite3_os_end(void); /* ** CAPI3REF: Configuring The SQLite Library ** ** The sqlite3_config() interface is used to make global configuration @@ -951,54 +1382,52 @@ ** changes to SQLite in order to tune SQLite to the specific needs of ** the application. The default configuration is recommended for most ** applications and so this routine is usually not necessary. It is ** provided to support rare applications with unusual needs. ** -** The sqlite3_config() interface is not threadsafe. The application -** must insure that no other SQLite interfaces are invoked by other -** threads while sqlite3_config() is running. Furthermore, sqlite3_config() +** <b>The sqlite3_config() interface is not threadsafe. The application +** must ensure that no other SQLite interfaces are invoked by other +** threads while sqlite3_config() is running.</b> +** +** The sqlite3_config() interface ** may only be invoked prior to library initialization using ** [sqlite3_initialize()] or after shutdown by [sqlite3_shutdown()]. ** ^If sqlite3_config() is called after [sqlite3_initialize()] and before ** [sqlite3_shutdown()] then it will return SQLITE_MISUSE. ** Note, however, that ^sqlite3_config() can be called as part of the ** implementation of an application-defined [sqlite3_os_init()]. ** ** The first argument to sqlite3_config() is an integer -** [SQLITE_CONFIG_SINGLETHREAD | configuration option] that determines +** [configuration option] that determines ** what property of SQLite is to be configured. Subsequent arguments -** vary depending on the [SQLITE_CONFIG_SINGLETHREAD | configuration option] +** vary depending on the [configuration option] ** in the first argument. ** ** ^When a configuration option is set, sqlite3_config() returns [SQLITE_OK]. 
** ^If the option is unknown or SQLite is unable to set the option ** then this routine returns a non-zero [error code]. */ -SQLITE_API int sqlite3_config(int, ...); +SQLITE_API int SQLITE_CDECL sqlite3_config(int, ...); /* ** CAPI3REF: Configure database connections +** METHOD: sqlite3 ** ** The sqlite3_db_config() interface is used to make configuration ** changes to a [database connection]. The interface is similar to ** [sqlite3_config()] except that the changes apply to a single -** [database connection] (specified in the first argument). The -** sqlite3_db_config() interface should only be used immediately after -** the database connection is created using [sqlite3_open()], -** [sqlite3_open16()], or [sqlite3_open_v2()]. +** [database connection] (specified in the first argument). ** ** The second argument to sqlite3_db_config(D,V,...) is the -** configuration verb - an integer code that indicates what -** aspect of the [database connection] is being configured. -** The only choice for this value is [SQLITE_DBCONFIG_LOOKASIDE]. -** New verbs are likely to be added in future releases of SQLite. -** Additional arguments depend on the verb. +** [SQLITE_DBCONFIG_LOOKASIDE | configuration verb] - an integer code +** that indicates what aspect of the [database connection] is being configured. +** Subsequent arguments vary depending on the configuration verb. ** ** ^Calls to sqlite3_db_config() return SQLITE_OK if and only if ** the call is considered successful. */ -SQLITE_API int sqlite3_db_config(sqlite3*, int op, ...); +SQLITE_API int SQLITE_CDECL sqlite3_db_config(sqlite3*, int op, ...); /* ** CAPI3REF: Memory Allocation Routines ** ** An instance of this object defines the interface between SQLite @@ -1021,20 +1450,14 @@ ** also used during testing of SQLite in order to specify an alternative ** memory allocator that simulates memory out-of-memory conditions in ** order to verify that SQLite recovers gracefully from such ** conditions. ** -** The xMalloc and xFree methods must work like the -** malloc() and free() functions from the standard C library. -** The xRealloc method must work like realloc() from the standard C library -** with the exception that if the second argument to xRealloc is zero, -** xRealloc must be a no-op - it must not perform any allocation or -** deallocation. ^SQLite guarantees that the second argument to +** The xMalloc, xRealloc, and xFree methods must work like the +** malloc(), realloc() and free() functions from the standard C library. +** ^SQLite guarantees that the second argument to ** xRealloc is always a value returned by a prior call to xRoundup. -** And so in cases where xRoundup always returns a positive number, -** xRealloc can perform exactly as the standard library realloc() and -** still be in compliance with this specification. ** ** xSize should return the allocated size of a memory allocation ** previously obtained from xMalloc or xRealloc. The allocated size ** is always at least as big as the requested size but may be larger. ** @@ -1044,11 +1467,11 @@ ** of 8. Some allocators round up to a larger multiple or to a power of 2. ** Every memory allocation request coming in through [sqlite3_malloc()] ** or [sqlite3_realloc()] first calls xRoundup. If xRoundup returns 0, ** that causes the corresponding memory allocation to fail. ** -** The xInit method initializes the memory allocator. (For example, +** The xInit method initializes the memory allocator. 
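A sketch of the required ordering: global configuration is done from a single thread before the library is initialized (SQLITE_CONFIG_SERIALIZED takes no extra arguments, as described below):

    #include "sqlite3.h"

    static int demo_startup(void){
      if( sqlite3_config(SQLITE_CONFIG_SERIALIZED)!=SQLITE_OK ){
        return 1;      /* e.g. SQLITE_ERROR in a SQLITE_THREADSAFE=0 build */
      }
      return sqlite3_initialize()==SQLITE_OK ? 0 : 1;
    }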
For example, ** it might allocate any require mutexes or initialize internal data ** structures. The xShutdown method is invoked (indirectly) by ** [sqlite3_shutdown()] and should deallocate any resources acquired ** by xInit. The pAppData pointer is used as the only parameter to ** xInit and xShutdown. @@ -1079,10 +1502,11 @@ void *pAppData; /* Argument to xInit() and xShutdown() */ }; /* ** CAPI3REF: Configuration Options +** KEYWORDS: {configuration option} ** ** These constants are the available integer configuration options that ** can be passed as the first argument to the [sqlite3_config()] interface. ** ** New configuration options may be added in future releases of SQLite. @@ -1091,11 +1515,11 @@ ** the call worked. The [sqlite3_config()] interface will return a ** non-zero [error code] if a discontinued or unsupported configuration option ** is invoked. ** ** <dl> -** <dt>SQLITE_CONFIG_SINGLETHREAD</dt> +** [[SQLITE_CONFIG_SINGLETHREAD]] <dt>SQLITE_CONFIG_SINGLETHREAD</dt> ** <dd>There are no arguments to this option. ^This option sets the ** [threading mode] to Single-thread. In other words, it disables ** all mutexing and puts SQLite into a mode where it can only be used ** by a single thread. ^If SQLite is compiled with ** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then @@ -1102,11 +1526,11 @@ ** it is not possible to change the [threading mode] from its default ** value of Single-thread and so [sqlite3_config()] will return ** [SQLITE_ERROR] if called with the SQLITE_CONFIG_SINGLETHREAD ** configuration option.</dd> ** -** <dt>SQLITE_CONFIG_MULTITHREAD</dt> +** [[SQLITE_CONFIG_MULTITHREAD]] <dt>SQLITE_CONFIG_MULTITHREAD</dt> ** <dd>There are no arguments to this option. ^This option sets the ** [threading mode] to Multi-thread. In other words, it disables ** mutexing on [database connection] and [prepared statement] objects. ** The application is responsible for serializing access to ** [database connections] and [prepared statements]. But other mutexes @@ -1116,11 +1540,11 @@ ** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then ** it is not possible to set the Multi-thread [threading mode] and ** [sqlite3_config()] will return [SQLITE_ERROR] if called with the ** SQLITE_CONFIG_MULTITHREAD configuration option.</dd> ** -** <dt>SQLITE_CONFIG_SERIALIZED</dt> +** [[SQLITE_CONFIG_SERIALIZED]] <dt>SQLITE_CONFIG_SERIALIZED</dt> ** <dd>There are no arguments to this option. ^This option sets the ** [threading mode] to Serialized. In other words, this option enables ** all mutexes including the recursive ** mutexes on [database connection] and [prepared statement] objects. ** In this mode (which is the default when SQLite is compiled with @@ -1132,111 +1556,132 @@ ** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then ** it is not possible to set the Serialized [threading mode] and ** [sqlite3_config()] will return [SQLITE_ERROR] if called with the ** SQLITE_CONFIG_SERIALIZED configuration option.</dd> ** -** <dt>SQLITE_CONFIG_MALLOC</dt> -** <dd> ^(This option takes a single argument which is a pointer to an -** instance of the [sqlite3_mem_methods] structure. The argument specifies +** [[SQLITE_CONFIG_MALLOC]] <dt>SQLITE_CONFIG_MALLOC</dt> +** <dd> ^(The SQLITE_CONFIG_MALLOC option takes a single argument which is +** a pointer to an instance of the [sqlite3_mem_methods] structure. 
+** The argument specifies ** alternative low-level memory allocation routines to be used in place of ** the memory allocation routines built into SQLite.)^ ^SQLite makes ** its own private copy of the content of the [sqlite3_mem_methods] structure ** before the [sqlite3_config()] call returns.</dd> ** -** <dt>SQLITE_CONFIG_GETMALLOC</dt> -** <dd> ^(This option takes a single argument which is a pointer to an -** instance of the [sqlite3_mem_methods] structure. The [sqlite3_mem_methods] +** [[SQLITE_CONFIG_GETMALLOC]] <dt>SQLITE_CONFIG_GETMALLOC</dt> +** <dd> ^(The SQLITE_CONFIG_GETMALLOC option takes a single argument which +** is a pointer to an instance of the [sqlite3_mem_methods] structure. +** The [sqlite3_mem_methods] ** structure is filled with the currently defined memory allocation routines.)^ ** This option can be used to overload the default memory allocation ** routines with a wrapper that simulations memory allocation failure or ** tracks memory usage, for example. </dd> ** -** <dt>SQLITE_CONFIG_MEMSTATUS</dt> -** <dd> ^This option takes single argument of type int, interpreted as a -** boolean, which enables or disables the collection of memory allocation -** statistics. ^(When memory allocation statistics are disabled, the -** following SQLite interfaces become non-operational: +** [[SQLITE_CONFIG_MEMSTATUS]] <dt>SQLITE_CONFIG_MEMSTATUS</dt> +** <dd> ^The SQLITE_CONFIG_MEMSTATUS option takes single argument of type int, +** interpreted as a boolean, which enables or disables the collection of +** memory allocation statistics. ^(When memory allocation statistics are +** disabled, the following SQLite interfaces become non-operational: ** <ul> ** <li> [sqlite3_memory_used()] ** <li> [sqlite3_memory_highwater()] -** <li> [sqlite3_soft_heap_limit()] -** <li> [sqlite3_status()] +** <li> [sqlite3_soft_heap_limit64()] +** <li> [sqlite3_status64()] ** </ul>)^ ** ^Memory allocation statistics are enabled by default unless SQLite is ** compiled with [SQLITE_DEFAULT_MEMSTATUS]=0 in which case memory ** allocation statistics are disabled by default. ** </dd> ** -** <dt>SQLITE_CONFIG_SCRATCH</dt> -** <dd> ^This option specifies a static memory buffer that SQLite can use for -** scratch memory. There are three arguments: A pointer an 8-byte -** aligned memory buffer from which the scrach allocations will be +** [[SQLITE_CONFIG_SCRATCH]] <dt>SQLITE_CONFIG_SCRATCH</dt> +** <dd> ^The SQLITE_CONFIG_SCRATCH option specifies a static memory buffer +** that SQLite can use for scratch memory. ^(There are three arguments +** to SQLITE_CONFIG_SCRATCH: A pointer an 8-byte +** aligned memory buffer from which the scratch allocations will be ** drawn, the size of each scratch allocation (sz), -** and the maximum number of scratch allocations (N). The sz -** argument must be a multiple of 16. The sz parameter should be a few bytes -** larger than the actual scratch space required due to internal overhead. +** and the maximum number of scratch allocations (N).)^ ** The first argument must be a pointer to an 8-byte aligned buffer ** of at least sz*N bytes of memory. -** ^SQLite will use no more than one scratch buffer per thread. So -** N should be set to the expected maximum number of threads. ^SQLite will -** never require a scratch buffer that is more than 6 times the database -** page size. 
^If SQLite needs needs additional scratch memory beyond -** what is provided by this configuration option, then -** [sqlite3_malloc()] will be used to obtain the memory needed.</dd> +** ^SQLite will not use more than one scratch buffers per thread. +** ^SQLite will never request a scratch buffer that is more than 6 +** times the database page size. +** ^If SQLite needs needs additional +** scratch memory beyond what is provided by this configuration option, then +** [sqlite3_malloc()] will be used to obtain the memory needed.<p> +** ^When the application provides any amount of scratch memory using +** SQLITE_CONFIG_SCRATCH, SQLite avoids unnecessary large +** [sqlite3_malloc|heap allocations]. +** This can help [Robson proof|prevent memory allocation failures] due to heap +** fragmentation in low-memory embedded systems. +** </dd> ** -** <dt>SQLITE_CONFIG_PAGECACHE</dt> -** <dd> ^This option specifies a static memory buffer that SQLite can use for -** the database page cache with the default page cache implemenation. -** This configuration should not be used if an application-define page -** cache implementation is loaded using the SQLITE_CONFIG_PCACHE option. -** There are three arguments to this option: A pointer to 8-byte aligned -** memory, the size of each page buffer (sz), and the number of pages (N). +** [[SQLITE_CONFIG_PAGECACHE]] <dt>SQLITE_CONFIG_PAGECACHE</dt> +** <dd> ^The SQLITE_CONFIG_PAGECACHE option specifies a memory pool +** that SQLite can use for the database page cache with the default page +** cache implementation. +** This configuration option is a no-op if an application-define page +** cache implementation is loaded using the [SQLITE_CONFIG_PCACHE2]. +** ^There are three arguments to SQLITE_CONFIG_PAGECACHE: A pointer to +** 8-byte aligned memory (pMem), the size of each page cache line (sz), +** and the number of cache lines (N). ** The sz argument should be the size of the largest database page -** (a power of two between 512 and 32768) plus a little extra for each -** page header. ^The page header size is 20 to 40 bytes depending on -** the host architecture. ^It is harmless, apart from the wasted memory, -** to make sz a little too large. The first -** argument should point to an allocation of at least sz*N bytes of memory. -** ^SQLite will use the memory provided by the first argument to satisfy its -** memory needs for the first N pages that it adds to cache. ^If additional -** page cache memory is needed beyond what is provided by this option, then -** SQLite goes to [sqlite3_malloc()] for the additional storage space. -** ^The implementation might use one or more of the N buffers to hold -** memory accounting information. The pointer in the first argument must -** be aligned to an 8-byte boundary or subsequent behavior of SQLite -** will be undefined.</dd> +** (a power of two between 512 and 65536) plus some extra bytes for each +** page header. ^The number of extra bytes needed by the page header +** can be determined using [SQLITE_CONFIG_PCACHE_HDRSZ]. +** ^It is harmless, apart from the wasted memory, +** for the sz parameter to be larger than necessary. The pMem +** argument must be either a NULL pointer or a pointer to an 8-byte +** aligned block of memory of at least sz*N bytes, otherwise +** subsequent behavior is undefined. 
+** ^When pMem is not NULL, SQLite will strive to use the memory provided +** to satisfy page cache needs, falling back to [sqlite3_malloc()] if +** a page cache line is larger than sz bytes or if all of the pMem buffer +** is exhausted. +** ^If pMem is NULL and N is non-zero, then each database connection +** does an initial bulk allocation for page cache memory +** from [sqlite3_malloc()] sufficient for N cache lines if N is positive or +** of -1024*N bytes if N is negative, . ^If additional +** page cache memory is needed beyond what is provided by the initial +** allocation, then SQLite goes to [sqlite3_malloc()] separately for each +** additional cache line. </dd> ** -** <dt>SQLITE_CONFIG_HEAP</dt> -** <dd> ^This option specifies a static memory buffer that SQLite will use -** for all of its dynamic memory allocation needs beyond those provided -** for by [SQLITE_CONFIG_SCRATCH] and [SQLITE_CONFIG_PAGECACHE]. -** There are three arguments: An 8-byte aligned pointer to the memory, +** [[SQLITE_CONFIG_HEAP]] <dt>SQLITE_CONFIG_HEAP</dt> +** <dd> ^The SQLITE_CONFIG_HEAP option specifies a static memory buffer +** that SQLite will use for all of its dynamic memory allocation needs +** beyond those provided for by [SQLITE_CONFIG_SCRATCH] and +** [SQLITE_CONFIG_PAGECACHE]. +** ^The SQLITE_CONFIG_HEAP option is only available if SQLite is compiled +** with either [SQLITE_ENABLE_MEMSYS3] or [SQLITE_ENABLE_MEMSYS5] and returns +** [SQLITE_ERROR] if invoked otherwise. +** ^There are three arguments to SQLITE_CONFIG_HEAP: +** An 8-byte aligned pointer to the memory, ** the number of bytes in the memory buffer, and the minimum allocation size. ** ^If the first pointer (the memory pointer) is NULL, then SQLite reverts ** to using its default memory allocator (the system malloc() implementation), ** undoing any prior invocation of [SQLITE_CONFIG_MALLOC]. ^If the -** memory pointer is not NULL and either [SQLITE_ENABLE_MEMSYS3] or -** [SQLITE_ENABLE_MEMSYS5] are defined, then the alternative memory +** memory pointer is not NULL then the alternative memory ** allocator is engaged to handle all of SQLites memory allocation needs. ** The first pointer (the memory pointer) must be aligned to an 8-byte -** boundary or subsequent behavior of SQLite will be undefined.</dd> +** boundary or subsequent behavior of SQLite will be undefined. +** The minimum allocation size is capped at 2**12. Reasonable values +** for the minimum allocation size are 2**5 through 2**8.</dd> ** -** <dt>SQLITE_CONFIG_MUTEX</dt> -** <dd> ^(This option takes a single argument which is a pointer to an -** instance of the [sqlite3_mutex_methods] structure. The argument specifies -** alternative low-level mutex routines to be used in place -** the mutex routines built into SQLite.)^ ^SQLite makes a copy of the -** content of the [sqlite3_mutex_methods] structure before the call to +** [[SQLITE_CONFIG_MUTEX]] <dt>SQLITE_CONFIG_MUTEX</dt> +** <dd> ^(The SQLITE_CONFIG_MUTEX option takes a single argument which is a +** pointer to an instance of the [sqlite3_mutex_methods] structure. +** The argument specifies alternative low-level mutex routines to be used +** in place the mutex routines built into SQLite.)^ ^SQLite makes a copy of +** the content of the [sqlite3_mutex_methods] structure before the call to ** [sqlite3_config()] returns. 
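[Editor's note] A sketch of SQLITE_CONFIG_HEAP under the stated restriction: it only has an effect when the library was built with SQLITE_ENABLE_MEMSYS5 (or MEMSYS3), otherwise sqlite3_config() returns SQLITE_ERROR. The 10 MiB arena size, the 64-byte minimum allocation, and use_private_heap() are illustrative assumptions.

    /* Sketch: give SQLite one static arena for all of its allocations.
    ** Requires a build with SQLITE_ENABLE_MEMSYS5 or SQLITE_ENABLE_MEMSYS3. */
    #include <sqlite3.h>

    static sqlite3_uint64 aHeap[(10*1024*1024)/8];   /* 10 MiB, 8-byte aligned */

    int use_private_heap(void){
      /* Arguments: buffer, size in bytes, minimum allocation size (the text
      ** above suggests 2**5 through 2**8).  Call before sqlite3_initialize(). */
      return sqlite3_config(SQLITE_CONFIG_HEAP,
                            (void*)aHeap, (int)sizeof(aHeap), 64);
    }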
^If SQLite is compiled with ** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then ** the entire mutexing subsystem is omitted from the build and hence calls to ** [sqlite3_config()] with the SQLITE_CONFIG_MUTEX configuration option will ** return [SQLITE_ERROR].</dd> ** -** <dt>SQLITE_CONFIG_GETMUTEX</dt> -** <dd> ^(This option takes a single argument which is a pointer to an -** instance of the [sqlite3_mutex_methods] structure. The +** [[SQLITE_CONFIG_GETMUTEX]] <dt>SQLITE_CONFIG_GETMUTEX</dt> +** <dd> ^(The SQLITE_CONFIG_GETMUTEX option takes a single argument which +** is a pointer to an instance of the [sqlite3_mutex_methods] structure. The ** [sqlite3_mutex_methods] ** structure is filled with the currently defined mutex routines.)^ ** This option can be used to overload the default mutex allocation ** routines with a wrapper used to track mutex usage for performance ** profiling or testing, for example. ^If SQLite is compiled with @@ -1243,33 +1688,35 @@ ** the [SQLITE_THREADSAFE | SQLITE_THREADSAFE=0] compile-time option then ** the entire mutexing subsystem is omitted from the build and hence calls to ** [sqlite3_config()] with the SQLITE_CONFIG_GETMUTEX configuration option will ** return [SQLITE_ERROR].</dd> ** -** <dt>SQLITE_CONFIG_LOOKASIDE</dt> -** <dd> ^(This option takes two arguments that determine the default -** memory allocation for the lookaside memory allocator on each -** [database connection]. The first argument is the +** [[SQLITE_CONFIG_LOOKASIDE]] <dt>SQLITE_CONFIG_LOOKASIDE</dt> +** <dd> ^(The SQLITE_CONFIG_LOOKASIDE option takes two arguments that determine +** the default size of lookaside memory on each [database connection]. +** The first argument is the ** size of each lookaside buffer slot and the second is the number of -** slots allocated to each database connection.)^ ^(This option sets the -** <i>default</i> lookaside size. The [SQLITE_DBCONFIG_LOOKASIDE] -** verb to [sqlite3_db_config()] can be used to change the lookaside +** slots allocated to each database connection.)^ ^(SQLITE_CONFIG_LOOKASIDE +** sets the <i>default</i> lookaside size. The [SQLITE_DBCONFIG_LOOKASIDE] +** option to [sqlite3_db_config()] can be used to change the lookaside ** configuration on individual connections.)^ </dd> ** -** <dt>SQLITE_CONFIG_PCACHE</dt> -** <dd> ^(This option takes a single argument which is a pointer to -** an [sqlite3_pcache_methods] object. This object specifies the interface -** to a custom page cache implementation.)^ ^SQLite makes a copy of the -** object and uses it for page cache memory allocations.</dd> -** -** <dt>SQLITE_CONFIG_GETPCACHE</dt> -** <dd> ^(This option takes a single argument which is a pointer to an -** [sqlite3_pcache_methods] object. SQLite copies of the current -** page cache implementation into that object.)^ </dd> -** -** <dt>SQLITE_CONFIG_LOG</dt> -** <dd> ^The SQLITE_CONFIG_LOG option takes two arguments: a pointer to a +** [[SQLITE_CONFIG_PCACHE2]] <dt>SQLITE_CONFIG_PCACHE2</dt> +** <dd> ^(The SQLITE_CONFIG_PCACHE2 option takes a single argument which is +** a pointer to an [sqlite3_pcache_methods2] object. This object specifies +** the interface to a custom page cache implementation.)^ +** ^SQLite makes a copy of the [sqlite3_pcache_methods2] object.</dd> +** +** [[SQLITE_CONFIG_GETPCACHE2]] <dt>SQLITE_CONFIG_GETPCACHE2</dt> +** <dd> ^(The SQLITE_CONFIG_GETPCACHE2 option takes a single argument which +** is a pointer to an [sqlite3_pcache_methods2] object. 
SQLite copies of +** the current page cache implementation into that object.)^ </dd> +** +** [[SQLITE_CONFIG_LOG]] <dt>SQLITE_CONFIG_LOG</dt> +** <dd> The SQLITE_CONFIG_LOG option is used to configure the SQLite +** global [error log]. +** (^The SQLITE_CONFIG_LOG option takes two arguments: a pointer to a ** function with a call signature of void(*)(void*,int,const char*), ** and a pointer to void. ^If the function pointer is not NULL, it is ** invoked by [sqlite3_log()] to process each logging event. ^If the ** function pointer is NULL, the [sqlite3_log()] interface becomes a no-op. ** ^The void pointer that is the second argument to SQLITE_CONFIG_LOG is @@ -1282,10 +1729,99 @@ ** The SQLite logging interface is not reentrant; the logger function ** supplied by the application must not invoke any SQLite interface. ** In a multi-threaded application, the application-defined logger ** function must be threadsafe. </dd> ** +** [[SQLITE_CONFIG_URI]] <dt>SQLITE_CONFIG_URI +** <dd>^(The SQLITE_CONFIG_URI option takes a single argument of type int. +** If non-zero, then URI handling is globally enabled. If the parameter is zero, +** then URI handling is globally disabled.)^ ^If URI handling is globally +** enabled, all filenames passed to [sqlite3_open()], [sqlite3_open_v2()], +** [sqlite3_open16()] or +** specified as part of [ATTACH] commands are interpreted as URIs, regardless +** of whether or not the [SQLITE_OPEN_URI] flag is set when the database +** connection is opened. ^If it is globally disabled, filenames are +** only interpreted as URIs if the SQLITE_OPEN_URI flag is set when the +** database connection is opened. ^(By default, URI handling is globally +** disabled. The default value may be changed by compiling with the +** [SQLITE_USE_URI] symbol defined.)^ +** +** [[SQLITE_CONFIG_COVERING_INDEX_SCAN]] <dt>SQLITE_CONFIG_COVERING_INDEX_SCAN +** <dd>^The SQLITE_CONFIG_COVERING_INDEX_SCAN option takes a single integer +** argument which is interpreted as a boolean in order to enable or disable +** the use of covering indices for full table scans in the query optimizer. +** ^The default setting is determined +** by the [SQLITE_ALLOW_COVERING_INDEX_SCAN] compile-time option, or is "on" +** if that compile-time option is omitted. +** The ability to disable the use of covering indices for full table scans +** is because some incorrectly coded legacy applications might malfunction +** when the optimization is enabled. Providing the ability to +** disable the optimization allows the older, buggy application code to work +** without change even with newer versions of SQLite. +** +** [[SQLITE_CONFIG_PCACHE]] [[SQLITE_CONFIG_GETPCACHE]] +** <dt>SQLITE_CONFIG_PCACHE and SQLITE_CONFIG_GETPCACHE +** <dd> These options are obsolete and should not be used by new code. +** They are retained for backwards compatibility but are now no-ops. +** </dd> +** +** [[SQLITE_CONFIG_SQLLOG]] +** <dt>SQLITE_CONFIG_SQLLOG +** <dd>This option is only available if sqlite is compiled with the +** [SQLITE_ENABLE_SQLLOG] pre-processor macro defined. The first argument should +** be a pointer to a function of type void(*)(void*,sqlite3*,const char*, int). +** The second should be of type (void*). The callback is invoked by the library +** in three separate circumstances, identified by the value passed as the +** fourth parameter. If the fourth parameter is 0, then the database connection +** passed as the second argument has just been opened. 
The third argument +** points to a buffer containing the name of the main database file. If the +** fourth parameter is 1, then the SQL statement that the third parameter +** points to has just been executed. Or, if the fourth parameter is 2, then +** the connection being passed as the second parameter is being closed. The +** third parameter is passed NULL In this case. An example of using this +** configuration option can be seen in the "test_sqllog.c" source file in +** the canonical SQLite source tree.</dd> +** +** [[SQLITE_CONFIG_MMAP_SIZE]] +** <dt>SQLITE_CONFIG_MMAP_SIZE +** <dd>^SQLITE_CONFIG_MMAP_SIZE takes two 64-bit integer (sqlite3_int64) values +** that are the default mmap size limit (the default setting for +** [PRAGMA mmap_size]) and the maximum allowed mmap size limit. +** ^The default setting can be overridden by each database connection using +** either the [PRAGMA mmap_size] command, or by using the +** [SQLITE_FCNTL_MMAP_SIZE] file control. ^(The maximum allowed mmap size +** will be silently truncated if necessary so that it does not exceed the +** compile-time maximum mmap size set by the +** [SQLITE_MAX_MMAP_SIZE] compile-time option.)^ +** ^If either argument to this option is negative, then that argument is +** changed to its compile-time default. +** +** [[SQLITE_CONFIG_WIN32_HEAPSIZE]] +** <dt>SQLITE_CONFIG_WIN32_HEAPSIZE +** <dd>^The SQLITE_CONFIG_WIN32_HEAPSIZE option is only available if SQLite is +** compiled for Windows with the [SQLITE_WIN32_MALLOC] pre-processor macro +** defined. ^SQLITE_CONFIG_WIN32_HEAPSIZE takes a 32-bit unsigned integer value +** that specifies the maximum size of the created heap. +** +** [[SQLITE_CONFIG_PCACHE_HDRSZ]] +** <dt>SQLITE_CONFIG_PCACHE_HDRSZ +** <dd>^The SQLITE_CONFIG_PCACHE_HDRSZ option takes a single parameter which +** is a pointer to an integer and writes into that integer the number of extra +** bytes per page required for each page in [SQLITE_CONFIG_PAGECACHE]. +** The amount of extra space required can change depending on the compiler, +** target platform, and SQLite version. +** +** [[SQLITE_CONFIG_PMASZ]] +** <dt>SQLITE_CONFIG_PMASZ +** <dd>^The SQLITE_CONFIG_PMASZ option takes a single parameter which +** is an unsigned integer and sets the "Minimum PMA Size" for the multithreaded +** sorter to that integer. The default minimum PMA Size is set by the +** [SQLITE_SORTER_PMASZ] compile-time option. New threads are launched +** to help with sort operations when multithreaded sorting +** is enabled (using the [PRAGMA threads] command) and the amount of content +** to be sorted exceeds the page size times the minimum of the +** [PRAGMA cache_size] setting and this value. ** </dl> */ #define SQLITE_CONFIG_SINGLETHREAD 1 /* nil */ #define SQLITE_CONFIG_MULTITHREAD 2 /* nil */ #define SQLITE_CONFIG_SERIALIZED 3 /* nil */ @@ -1297,13 +1833,22 @@ #define SQLITE_CONFIG_MEMSTATUS 9 /* boolean */ #define SQLITE_CONFIG_MUTEX 10 /* sqlite3_mutex_methods* */ #define SQLITE_CONFIG_GETMUTEX 11 /* sqlite3_mutex_methods* */ /* previously SQLITE_CONFIG_CHUNKALLOC 12 which is now unused. 
*/ #define SQLITE_CONFIG_LOOKASIDE 13 /* int int */ -#define SQLITE_CONFIG_PCACHE 14 /* sqlite3_pcache_methods* */ -#define SQLITE_CONFIG_GETPCACHE 15 /* sqlite3_pcache_methods* */ +#define SQLITE_CONFIG_PCACHE 14 /* no-op */ +#define SQLITE_CONFIG_GETPCACHE 15 /* no-op */ #define SQLITE_CONFIG_LOG 16 /* xFunc, void* */ +#define SQLITE_CONFIG_URI 17 /* int */ +#define SQLITE_CONFIG_PCACHE2 18 /* sqlite3_pcache_methods2* */ +#define SQLITE_CONFIG_GETPCACHE2 19 /* sqlite3_pcache_methods2* */ +#define SQLITE_CONFIG_COVERING_INDEX_SCAN 20 /* int */ +#define SQLITE_CONFIG_SQLLOG 21 /* xSqllog, void* */ +#define SQLITE_CONFIG_MMAP_SIZE 22 /* sqlite3_int64, sqlite3_int64 */ +#define SQLITE_CONFIG_WIN32_HEAPSIZE 23 /* int nByte */ +#define SQLITE_CONFIG_PCACHE_HDRSZ 24 /* int *psz */ +#define SQLITE_CONFIG_PMASZ 25 /* unsigned int szPma */ /* ** CAPI3REF: Database Connection Configuration Options ** ** These constants are the available integer configuration options that @@ -1319,55 +1864,91 @@ ** <dl> ** <dt>SQLITE_DBCONFIG_LOOKASIDE</dt> ** <dd> ^This option takes three additional arguments that determine the ** [lookaside memory allocator] configuration for the [database connection]. ** ^The first argument (the third parameter to [sqlite3_db_config()] is a -** pointer to an memory buffer to use for lookaside memory. +** pointer to a memory buffer to use for lookaside memory. ** ^The first argument after the SQLITE_DBCONFIG_LOOKASIDE verb ** may be NULL in which case SQLite will allocate the ** lookaside buffer itself using [sqlite3_malloc()]. ^The second argument is the ** size of each lookaside buffer slot. ^The third argument is the number of ** slots. The size of the buffer in the first argument must be greater than ** or equal to the product of the second and third arguments. The buffer ** must be aligned to an 8-byte boundary. ^If the second argument to ** SQLITE_DBCONFIG_LOOKASIDE is not a multiple of 8, it is internally -** rounded down to the next smaller -** multiple of 8. See also: [SQLITE_CONFIG_LOOKASIDE]</dd> +** rounded down to the next smaller multiple of 8. ^(The lookaside memory +** configuration for a database connection can only be changed when that +** connection is not currently using lookaside memory, or in other words +** when the "current value" returned by +** [sqlite3_db_status](D,[SQLITE_CONFIG_LOOKASIDE],...) is zero. +** Any attempt to change the lookaside memory configuration when lookaside +** memory is in use leaves the configuration unchanged and returns +** [SQLITE_BUSY].)^</dd> +** +** <dt>SQLITE_DBCONFIG_ENABLE_FKEY</dt> +** <dd> ^This option is used to enable or disable the enforcement of +** [foreign key constraints]. There should be two additional arguments. +** The first argument is an integer which is 0 to disable FK enforcement, +** positive to enable FK enforcement or negative to leave FK enforcement +** unchanged. The second parameter is a pointer to an integer into which +** is written 0 or 1 to indicate whether FK enforcement is off or on +** following this call. The second parameter may be a NULL pointer, in +** which case the FK enforcement setting is not reported back. </dd> +** +** <dt>SQLITE_DBCONFIG_ENABLE_TRIGGER</dt> +** <dd> ^This option is used to enable or disable [CREATE TRIGGER | triggers]. +** There should be two additional arguments. +** The first argument is an integer which is 0 to disable triggers, +** positive to enable triggers or negative to leave the setting unchanged. 
+** The second parameter is a pointer to an integer into which +** is written 0 or 1 to indicate whether triggers are disabled or enabled +** following this call. The second parameter may be a NULL pointer, in +** which case the trigger setting is not reported back. </dd> ** ** </dl> */ -#define SQLITE_DBCONFIG_LOOKASIDE 1001 /* void* int int */ +#define SQLITE_DBCONFIG_LOOKASIDE 1001 /* void* int int */ +#define SQLITE_DBCONFIG_ENABLE_FKEY 1002 /* int int* */ +#define SQLITE_DBCONFIG_ENABLE_TRIGGER 1003 /* int int* */ /* ** CAPI3REF: Enable Or Disable Extended Result Codes +** METHOD: sqlite3 ** ** ^The sqlite3_extended_result_codes() routine enables or disables the ** [extended result codes] feature of SQLite. ^The extended result ** codes are disabled by default for historical compatibility. */ -SQLITE_API int sqlite3_extended_result_codes(sqlite3*, int onoff); +SQLITE_API int SQLITE_STDCALL sqlite3_extended_result_codes(sqlite3*, int onoff); /* ** CAPI3REF: Last Insert Rowid +** METHOD: sqlite3 ** -** ^Each entry in an SQLite table has a unique 64-bit signed +** ^Each entry in most SQLite tables (except for [WITHOUT ROWID] tables) +** has a unique 64-bit signed ** integer key called the [ROWID | "rowid"]. ^The rowid is always available ** as an undeclared column named ROWID, OID, or _ROWID_ as long as those ** names are not also used by explicitly declared columns. ^If ** the table has a column of type [INTEGER PRIMARY KEY] then that column ** is another alias for the rowid. ** -** ^This routine returns the [rowid] of the most recent -** successful [INSERT] into the database from the [database connection] -** in the first argument. ^If no successful [INSERT]s -** have ever occurred on that database connection, zero is returned. +** ^The sqlite3_last_insert_rowid(D) interface returns the [rowid] of the +** most recent successful [INSERT] into a rowid table or [virtual table] +** on database connection D. +** ^Inserts into [WITHOUT ROWID] tables are not recorded. +** ^If no successful [INSERT]s into rowid tables +** have ever occurred on the database connection D, +** then sqlite3_last_insert_rowid(D) returns zero. ** -** ^(If an [INSERT] occurs within a trigger, then the [rowid] of the inserted -** row is returned by this routine as long as the trigger is running. -** But once the trigger terminates, the value returned by this routine -** reverts to the last value inserted before the trigger fired.)^ +** ^(If an [INSERT] occurs within a trigger or within a [virtual table] +** method, then this routine will return the [rowid] of the inserted +** row as long as the trigger or virtual table method is running. +** But once the trigger or virtual table method ends, the value returned +** by this routine reverts to what it was before the trigger or virtual +** table method began.)^ ** ** ^An [INSERT] that fails due to a constraint violation is not a ** successful [INSERT] and does not change the value returned by this ** routine. ^Thus INSERT OR FAIL, INSERT OR IGNORE, INSERT OR ROLLBACK, ** and INSERT OR ABORT make no changes to the return value of this @@ -1388,94 +1969,92 @@ ** function is running and thus changes the last insert [rowid], ** then the value returned by [sqlite3_last_insert_rowid()] is ** unpredictable and might not equal either the old or the new ** last insert [rowid]. 
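[Editor's note] A sketch combining SQLITE_DBCONFIG_ENABLE_FKEY with sqlite3_last_insert_rowid(), both documented above. The table "t1", its column "x", and demo_insert() are hypothetical names introduced only for the example.

    /* Sketch: enable foreign-key enforcement on one connection, insert a
    ** row, and read back the rowid of that insert. */
    #include <stdio.h>
    #include <sqlite3.h>

    int demo_insert(sqlite3 *db){
      int fkOn = -1;    /* receives 0 or 1 showing the resulting FK setting */
      int rc = sqlite3_db_config(db, SQLITE_DBCONFIG_ENABLE_FKEY, 1, &fkOn);
      if( rc!=SQLITE_OK ) return rc;

      rc = sqlite3_exec(db, "INSERT INTO t1(x) VALUES(42)", 0, 0, 0);
      if( rc==SQLITE_OK ){
        printf("fk=%d new rowid=%lld\n", fkOn,
               (long long)sqlite3_last_insert_rowid(db));
      }
      return rc;
    }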
*/ -SQLITE_API sqlite3_int64 sqlite3_last_insert_rowid(sqlite3*); +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_last_insert_rowid(sqlite3*); /* ** CAPI3REF: Count The Number Of Rows Modified -** -** ^This function returns the number of database rows that were changed -** or inserted or deleted by the most recently completed SQL statement -** on the [database connection] specified by the first parameter. -** ^(Only changes that are directly specified by the [INSERT], [UPDATE], -** or [DELETE] statement are counted. Auxiliary changes caused by -** triggers or [foreign key actions] are not counted.)^ Use the -** [sqlite3_total_changes()] function to find the total number of changes -** including changes caused by triggers and foreign key actions. -** -** ^Changes to a view that are simulated by an [INSTEAD OF trigger] -** are not counted. Only real table changes are counted. -** -** ^(A "row change" is a change to a single row of a single table -** caused by an INSERT, DELETE, or UPDATE statement. Rows that -** are changed as side effects of [REPLACE] constraint resolution, -** rollback, ABORT processing, [DROP TABLE], or by any other -** mechanisms do not count as direct row changes.)^ -** -** A "trigger context" is a scope of execution that begins and -** ends with the script of a [CREATE TRIGGER | trigger]. -** Most SQL statements are -** evaluated outside of any trigger. This is the "top level" -** trigger context. If a trigger fires from the top level, a -** new trigger context is entered for the duration of that one -** trigger. Subtriggers create subcontexts for their duration. -** -** ^Calling [sqlite3_exec()] or [sqlite3_step()] recursively does -** not create a new trigger context. -** -** ^This function returns the number of direct row changes in the -** most recent INSERT, UPDATE, or DELETE statement within the same -** trigger context. -** -** ^Thus, when called from the top level, this function returns the -** number of changes in the most recent INSERT, UPDATE, or DELETE -** that also occurred at the top level. ^(Within the body of a trigger, -** the sqlite3_changes() interface can be called to find the number of -** changes in the most recently completed INSERT, UPDATE, or DELETE -** statement within the body of the same trigger. -** However, the number returned does not include changes -** caused by subtriggers since those have their own context.)^ +** METHOD: sqlite3 +** +** ^This function returns the number of rows modified, inserted or +** deleted by the most recently completed INSERT, UPDATE or DELETE +** statement on the database connection specified by the only parameter. +** ^Executing any other type of SQL statement does not modify the value +** returned by this function. +** +** ^Only changes made directly by the INSERT, UPDATE or DELETE statement are +** considered - auxiliary changes caused by [CREATE TRIGGER | triggers], +** [foreign key actions] or [REPLACE] constraint resolution are not counted. +** +** Changes to a view that are intercepted by +** [INSTEAD OF trigger | INSTEAD OF triggers] are not counted. ^The value +** returned by sqlite3_changes() immediately after an INSERT, UPDATE or +** DELETE statement run on a view is always zero. Only changes made to real +** tables are counted. +** +** Things are more complicated if the sqlite3_changes() function is +** executed while a trigger program is running. This may happen if the +** program uses the [changes() SQL function], or if some other callback +** function invokes sqlite3_changes() directly. 
Essentially: +** +** <ul> +** <li> ^(Before entering a trigger program the value returned by +** sqlite3_changes() function is saved. After the trigger program +** has finished, the original value is restored.)^ +** +** <li> ^(Within a trigger program each INSERT, UPDATE and DELETE +** statement sets the value returned by sqlite3_changes() +** upon completion as normal. Of course, this value will not include +** any changes performed by sub-triggers, as the sqlite3_changes() +** value will be saved and restored after each sub-trigger has run.)^ +** </ul> +** +** ^This means that if the changes() SQL function (or similar) is used +** by the first INSERT, UPDATE or DELETE statement within a trigger, it +** returns the value as set when the calling statement began executing. +** ^If it is used by the second or subsequent such statement within a trigger +** program, the value returned reflects the number of rows modified by the +** previous INSERT, UPDATE or DELETE statement within the same trigger. ** ** See also the [sqlite3_total_changes()] interface, the ** [count_changes pragma], and the [changes() SQL function]. ** ** If a separate thread makes changes on the same database connection ** while [sqlite3_changes()] is running then the value returned ** is unpredictable and not meaningful. */ -SQLITE_API int sqlite3_changes(sqlite3*); +SQLITE_API int SQLITE_STDCALL sqlite3_changes(sqlite3*); /* ** CAPI3REF: Total Number Of Rows Modified -** -** ^This function returns the number of row changes caused by [INSERT], -** [UPDATE] or [DELETE] statements since the [database connection] was opened. -** ^(The count returned by sqlite3_total_changes() includes all changes -** from all [CREATE TRIGGER | trigger] contexts and changes made by -** [foreign key actions]. However, -** the count does not include changes used to implement [REPLACE] constraints, -** do rollbacks or ABORT processing, or [DROP TABLE] processing. The -** count does not include rows of views that fire an [INSTEAD OF trigger], -** though if the INSTEAD OF trigger makes changes of its own, those changes -** are counted.)^ -** ^The sqlite3_total_changes() function counts the changes as soon as -** the statement that makes them is completed (when the statement handle -** is passed to [sqlite3_reset()] or [sqlite3_finalize()]). -** +** METHOD: sqlite3 +** +** ^This function returns the total number of rows inserted, modified or +** deleted by all [INSERT], [UPDATE] or [DELETE] statements completed +** since the database connection was opened, including those executed as +** part of trigger programs. ^Executing any other type of SQL statement +** does not affect the value returned by sqlite3_total_changes(). +** +** ^Changes made as part of [foreign key actions] are included in the +** count, but those made as part of REPLACE constraint resolution are +** not. ^Changes to a view that are intercepted by INSTEAD OF triggers +** are not counted. +** ** See also the [sqlite3_changes()] interface, the ** [count_changes pragma], and the [total_changes() SQL function]. ** ** If a separate thread makes changes on the same database connection ** while [sqlite3_total_changes()] is running then the value ** returned is unpredictable and not meaningful. */ -SQLITE_API int sqlite3_total_changes(sqlite3*); +SQLITE_API int SQLITE_STDCALL sqlite3_total_changes(sqlite3*); /* ** CAPI3REF: Interrupt A Long-Running Query +** METHOD: sqlite3 ** ** ^This function causes any pending database operation to abort and ** return at its earliest opportunity. 
This routine is typically ** called in response to a user action such as pressing "Cancel" ** or Ctrl-C where the user wants a long query operation to halt @@ -1507,11 +2086,11 @@ ** that are started after the sqlite3_interrupt() call returns. ** ** If the database connection closes while [sqlite3_interrupt()] ** is running then bad things will likely happen. */ -SQLITE_API void sqlite3_interrupt(sqlite3*); +SQLITE_API void SQLITE_STDCALL sqlite3_interrupt(sqlite3*); /* ** CAPI3REF: Determine If An SQL Statement Is Complete ** ** These routines are useful during command-line input to determine if the @@ -1542,37 +2121,45 @@ ** UTF-8 string. ** ** The input to [sqlite3_complete16()] must be a zero-terminated ** UTF-16 string in native byte order. */ -SQLITE_API int sqlite3_complete(const char *sql); -SQLITE_API int sqlite3_complete16(const void *sql); +SQLITE_API int SQLITE_STDCALL sqlite3_complete(const char *sql); +SQLITE_API int SQLITE_STDCALL sqlite3_complete16(const void *sql); /* ** CAPI3REF: Register A Callback To Handle SQLITE_BUSY Errors +** KEYWORDS: {busy-handler callback} {busy handler} +** METHOD: sqlite3 ** -** ^This routine sets a callback function that might be invoked whenever -** an attempt is made to open a database table that another thread -** or process has locked. +** ^The sqlite3_busy_handler(D,X,P) routine sets a callback function X +** that might be invoked with argument P whenever +** an attempt is made to access a database table associated with +** [database connection] D when another thread +** or process has the table locked. +** The sqlite3_busy_handler() interface is used to implement +** [sqlite3_busy_timeout()] and [PRAGMA busy_timeout]. ** -** ^If the busy callback is NULL, then [SQLITE_BUSY] or [SQLITE_IOERR_BLOCKED] +** ^If the busy callback is NULL, then [SQLITE_BUSY] ** is returned immediately upon encountering the lock. ^If the busy callback ** is not NULL, then the callback might be invoked with two arguments. ** ** ^The first argument to the busy handler is a copy of the void* pointer which ** is the third argument to sqlite3_busy_handler(). ^The second argument to ** the busy handler callback is the number of times that the busy handler has -** been invoked for this locking event. ^If the +** been invoked previously for the same locking event. ^If the ** busy callback returns 0, then no additional attempts are made to -** access the database and [SQLITE_BUSY] or [SQLITE_IOERR_BLOCKED] is returned. +** access the database and [SQLITE_BUSY] is returned +** to the application. ** ^If the callback returns non-zero, then another attempt -** is made to open the database for reading and the cycle repeats. +** is made to access the database and the cycle repeats. ** ** The presence of a busy handler does not guarantee that it will be invoked ** when there is lock contention. ^If SQLite determines that invoking the busy ** handler could result in a deadlock, it will go ahead and return [SQLITE_BUSY] -** or [SQLITE_IOERR_BLOCKED] instead of invoking the busy handler. +** to the application instead of invoking the +** busy handler. ** Consider a scenario where one process is holding a read lock that ** it is trying to promote to a reserved lock and ** a second process is holding a reserved lock that it is trying ** to promote to an exclusive lock. 
The first process cannot proceed ** because it is blocked by the second and the second process cannot @@ -1582,61 +2169,55 @@ ** will induce the first process to release its read lock and allow ** the second process to proceed. ** ** ^The default busy callback is NULL. ** -** ^The [SQLITE_BUSY] error is converted to [SQLITE_IOERR_BLOCKED] -** when SQLite is in the middle of a large transaction where all the -** changes will not fit into the in-memory cache. SQLite will -** already hold a RESERVED lock on the database file, but it needs -** to promote this lock to EXCLUSIVE so that it can spill cache -** pages into the database file without harm to concurrent -** readers. ^If it is unable to promote the lock, then the in-memory -** cache will be left in an inconsistent state and so the error -** code is promoted from the relatively benign [SQLITE_BUSY] to -** the more severe [SQLITE_IOERR_BLOCKED]. ^This error code promotion -** forces an automatic rollback of the changes. See the -** <a href="/cvstrac/wiki?p=CorruptionFollowingBusyError"> -** CorruptionFollowingBusyError</a> wiki page for a discussion of why -** this is important. -** ** ^(There can only be a single busy handler defined for each ** [database connection]. Setting a new busy handler clears any ** previously set handler.)^ ^Note that calling [sqlite3_busy_timeout()] -** will also set or clear the busy handler. +** or evaluating [PRAGMA busy_timeout=N] will change the +** busy handler and thus clear any previously set busy handler. ** ** The busy callback should not take any actions which modify the -** database connection that invoked the busy handler. Any such actions +** database connection that invoked the busy handler. In other words, +** the busy handler is not reentrant. Any such actions ** result in undefined behavior. ** ** A busy handler must not close the database connection ** or [prepared statement] that invoked the busy handler. */ -SQLITE_API int sqlite3_busy_handler(sqlite3*, int(*)(void*,int), void*); +SQLITE_API int SQLITE_STDCALL sqlite3_busy_handler(sqlite3*, int(*)(void*,int), void*); /* ** CAPI3REF: Set A Busy Timeout +** METHOD: sqlite3 ** ** ^This routine sets a [sqlite3_busy_handler | busy handler] that sleeps ** for a specified amount of time when a table is locked. ^The handler ** will sleep multiple times until at least "ms" milliseconds of sleeping ** have accumulated. ^After at least "ms" milliseconds of sleeping, ** the handler returns 0 which causes [sqlite3_step()] to return -** [SQLITE_BUSY] or [SQLITE_IOERR_BLOCKED]. +** [SQLITE_BUSY]. ** ** ^Calling this routine with an argument less than or equal to zero ** turns off all busy handlers. ** ** ^(There can only be a single busy handler for a particular -** [database connection] any any given moment. If another busy handler +** [database connection] at any given moment. If another busy handler ** was defined (using [sqlite3_busy_handler()]) prior to calling ** this routine, that other busy handler is cleared.)^ +** +** See also: [PRAGMA busy_timeout] */ -SQLITE_API int sqlite3_busy_timeout(sqlite3*, int ms); +SQLITE_API int SQLITE_STDCALL sqlite3_busy_timeout(sqlite3*, int ms); /* ** CAPI3REF: Convenience Routines For Running Queries +** METHOD: sqlite3 +** +** This is a legacy interface that is preserved for backwards compatibility. +** Use of this interface is not recommended. ** ** Definition: A <b>result table</b> is memory data structure created by the ** [sqlite3_get_table()] interface. 
A result table records the ** complete query results from one or more queries. ** @@ -1654,11 +2235,11 @@ ** ** A result table might consist of one or more memory allocations. ** It is not safe to pass a result table directly to [sqlite3_free()]. ** A result table should be deallocated using [sqlite3_free_table()]. ** -** As an example of the result table format, suppose a query result +** ^(As an example of the result table format, suppose a query result ** is as follows: ** ** <blockquote><pre> ** Name | Age ** ----------------------- @@ -1678,56 +2259,60 @@ ** azResult[3] = "43"; ** azResult[4] = "Bob"; ** azResult[5] = "28"; ** azResult[6] = "Cindy"; ** azResult[7] = "21"; -** </pre></blockquote> +** </pre></blockquote>)^ ** ** ^The sqlite3_get_table() function evaluates one or more ** semicolon-separated SQL statements in the zero-terminated UTF-8 ** string of its 2nd parameter and returns a result table to the ** pointer given in its 3rd parameter. ** ** After the application has finished with the result from sqlite3_get_table(), -** it should pass the result table pointer to sqlite3_free_table() in order to +** it must pass the result table pointer to sqlite3_free_table() in order to ** release the memory that was malloced. Because of the way the ** [sqlite3_malloc()] happens within sqlite3_get_table(), the calling ** function must not try to call [sqlite3_free()] directly. Only ** [sqlite3_free_table()] is able to release the memory properly and safely. ** -** ^(The sqlite3_get_table() interface is implemented as a wrapper around +** The sqlite3_get_table() interface is implemented as a wrapper around ** [sqlite3_exec()]. The sqlite3_get_table() routine does not have access ** to any internal data structures of SQLite. It uses only the public ** interface defined here. As a consequence, errors that occur in the ** wrapper layer outside of the internal [sqlite3_exec()] call are not ** reflected in subsequent calls to [sqlite3_errcode()] or -** [sqlite3_errmsg()].)^ +** [sqlite3_errmsg()]. */ -SQLITE_API int sqlite3_get_table( +SQLITE_API int SQLITE_STDCALL sqlite3_get_table( sqlite3 *db, /* An open database */ const char *zSql, /* SQL to be evaluated */ char ***pazResult, /* Results of the query */ int *pnRow, /* Number of result rows written here */ int *pnColumn, /* Number of result columns written here */ char **pzErrmsg /* Error msg written here */ ); -SQLITE_API void sqlite3_free_table(char **result); +SQLITE_API void SQLITE_STDCALL sqlite3_free_table(char **result); /* ** CAPI3REF: Formatted String Printing Functions ** ** These routines are work-alikes of the "printf()" family of functions ** from the standard C library. +** These routines understand most of the common K&R formatting options, +** plus some additional non-standard formats, detailed below. +** Note that some of the more obscure formatting options from recent +** C-library standards are omitted from this implementation. ** ** ^The sqlite3_mprintf() and sqlite3_vmprintf() routines write their ** results into memory obtained from [sqlite3_malloc()]. ** The strings returned by these two routines should be ** released by [sqlite3_free()]. ^Both routines return a ** NULL pointer if [sqlite3_malloc()] is unable to allocate enough ** memory to hold the resulting string. ** -** ^(In sqlite3_snprintf() routine is similar to "snprintf()" from +** ^(The sqlite3_snprintf() routine is similar to "snprintf()" from ** the standard C library. 
The result is written into the ** buffer supplied as the second parameter whose size is given by ** the first parameter. Note that the order of the ** first two parameters is reversed from snprintf().)^ This is an ** historical accident that cannot be fixed without breaking @@ -1742,16 +2327,18 @@ ** guarantees that the buffer is always zero-terminated. ^The first ** parameter "n" is the total size of the buffer, including space for ** the zero terminator. So the longest string that can be completely ** written will be n-1 characters. ** +** ^The sqlite3_vsnprintf() routine is a varargs version of sqlite3_snprintf(). +** ** These routines all implement some additional formatting ** options that are useful for constructing SQL statements. ** All of the usual printf() formatting options apply. In addition, there -** is are "%q", "%Q", and "%z" options. +** is are "%q", "%Q", "%w" and "%z" options. ** -** ^(The %q option works like %s in that it substitutes a null-terminated +** ^(The %q option works like %s in that it substitutes a nul-terminated ** string from the argument list. But %q also doubles every '\'' character. ** %q is designed for use inside a string literal.)^ By doubling each '\'' ** character it escapes that character and allows it to be inserted into ** the string. ** @@ -1797,18 +2384,25 @@ ** sqlite3_free(zSQL); ** </pre></blockquote> ** ** The code above will render a correct SQL statement in the zSQL ** variable even if the zText variable is a NULL pointer. +** +** ^(The "%w" formatting option is like "%q" except that it expects to +** be contained within double-quotes instead of single quotes, and it +** escapes the double-quote character instead of the single-quote +** character.)^ The "%w" formatting option is intended for safely inserting +** table and column names into a constructed SQL statement. ** ** ^(The "%z" formatting option works like "%s" but with the ** addition that after the string has been read and copied into ** the result, [sqlite3_free()] is called on the input string.)^ */ -SQLITE_API char *sqlite3_mprintf(const char*,...); -SQLITE_API char *sqlite3_vmprintf(const char*, va_list); -SQLITE_API char *sqlite3_snprintf(int,char*,const char*, ...); +SQLITE_API char *SQLITE_CDECL sqlite3_mprintf(const char*,...); +SQLITE_API char *SQLITE_STDCALL sqlite3_vmprintf(const char*, va_list); +SQLITE_API char *SQLITE_CDECL sqlite3_snprintf(int,char*,const char*, ...); +SQLITE_API char *SQLITE_STDCALL sqlite3_vsnprintf(int,char*,const char*, va_list); /* ** CAPI3REF: Memory Allocation Subsystem ** ** The SQLite core uses these three routines for all of its own @@ -1820,10 +2414,14 @@ ** of memory at least N bytes in length, where N is the parameter. ** ^If sqlite3_malloc() is unable to obtain sufficient free ** memory, it returns a NULL pointer. ^If the parameter N to ** sqlite3_malloc() is zero or negative then sqlite3_malloc() returns ** a NULL pointer. +** +** ^The sqlite3_malloc64(N) routine works just like +** sqlite3_malloc(N) except that N is an unsigned 64-bit integer instead +** of a signed 32-bit integer. ** ** ^Calling sqlite3_free() with a pointer previously returned ** by sqlite3_malloc() or sqlite3_realloc() releases that memory so ** that it might be reused. ^The sqlite3_free() routine is ** a no-op if is called with a NULL pointer. Passing a NULL pointer @@ -1832,41 +2430,57 @@ ** memory might result in a segmentation fault or other severe error. 
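[Editor's note] A short sketch of the "%q" and "%w" formatting options described above: "%w" quotes an identifier for use inside double quotes, "%q" escapes a string literal for use inside single quotes. The column name "note" and insert_text() are placeholders for the example.

    /* Sketch: build an INSERT safely even when zTable or zText contain
    ** quote characters or zText is NULL. */
    #include <sqlite3.h>

    int insert_text(sqlite3 *db, const char *zTable, const char *zText){
      char *zSql = sqlite3_mprintf(
          "INSERT INTO \"%w\"(note) VALUES('%q')", zTable, zText);
      int rc = zSql ? sqlite3_exec(db, zSql, 0, 0, 0) : SQLITE_NOMEM;
      sqlite3_free(zSql);          /* sqlite3_free(NULL) is a harmless no-op */
      return rc;
    }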
** Memory corruption, a segmentation fault, or other severe error ** might result if sqlite3_free() is called with a non-NULL pointer that ** was not obtained from sqlite3_malloc() or sqlite3_realloc(). ** -** ^(The sqlite3_realloc() interface attempts to resize a -** prior memory allocation to be at least N bytes, where N is the -** second parameter. The memory allocation to be resized is the first -** parameter.)^ ^ If the first parameter to sqlite3_realloc() +** ^The sqlite3_realloc(X,N) interface attempts to resize a +** prior memory allocation X to be at least N bytes. +** ^If the X parameter to sqlite3_realloc(X,N) ** is a NULL pointer then its behavior is identical to calling -** sqlite3_malloc(N) where N is the second parameter to sqlite3_realloc(). -** ^If the second parameter to sqlite3_realloc() is zero or +** sqlite3_malloc(N). +** ^If the N parameter to sqlite3_realloc(X,N) is zero or ** negative then the behavior is exactly the same as calling -** sqlite3_free(P) where P is the first parameter to sqlite3_realloc(). -** ^sqlite3_realloc() returns a pointer to a memory allocation -** of at least N bytes in size or NULL if sufficient memory is unavailable. +** sqlite3_free(X). +** ^sqlite3_realloc(X,N) returns a pointer to a memory allocation +** of at least N bytes in size or NULL if insufficient memory is available. ** ^If M is the size of the prior allocation, then min(N,M) bytes ** of the prior allocation are copied into the beginning of buffer returned -** by sqlite3_realloc() and the prior allocation is freed. -** ^If sqlite3_realloc() returns NULL, then the prior allocation -** is not freed. +** by sqlite3_realloc(X,N) and the prior allocation is freed. +** ^If sqlite3_realloc(X,N) returns NULL and N is positive, then the +** prior allocation is not freed. ** -** ^The memory returned by sqlite3_malloc() and sqlite3_realloc() -** is always aligned to at least an 8 byte boundary. +** ^The sqlite3_realloc64(X,N) interfaces works the same as +** sqlite3_realloc(X,N) except that N is a 64-bit unsigned integer instead +** of a 32-bit signed integer. +** +** ^If X is a memory allocation previously obtained from sqlite3_malloc(), +** sqlite3_malloc64(), sqlite3_realloc(), or sqlite3_realloc64(), then +** sqlite3_msize(X) returns the size of that memory allocation in bytes. +** ^The value returned by sqlite3_msize(X) might be larger than the number +** of bytes requested when X was allocated. ^If X is a NULL pointer then +** sqlite3_msize(X) returns zero. If X points to something that is not +** the beginning of memory allocation, or if it points to a formerly +** valid memory allocation that has now been freed, then the behavior +** of sqlite3_msize(X) is undefined and possibly harmful. +** +** ^The memory returned by sqlite3_malloc(), sqlite3_realloc(), +** sqlite3_malloc64(), and sqlite3_realloc64() +** is always aligned to at least an 8 byte boundary, or to a +** 4 byte boundary if the [SQLITE_4_BYTE_ALIGNED_MALLOC] compile-time +** option is used. ** ** In SQLite version 3.5.0 and 3.5.1, it was possible to define ** the SQLITE_OMIT_MEMORY_ALLOCATION which would cause the built-in ** implementation of these routines to be omitted. That capability ** is no longer provided. Only built-in memory allocators can be used. 
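[Editor's note] A sketch of the 64-bit allocator routines and sqlite3_msize() described above. append_bytes() is a hypothetical helper; it assumes p is either NULL or was obtained from the sqlite3_malloc()/sqlite3_realloc() family, as the text requires.

    /* Sketch: grow a buffer only when its true allocated size (per
    ** sqlite3_msize()) is too small for the appended bytes. */
    #include <string.h>
    #include <sqlite3.h>

    char *append_bytes(char *p, sqlite3_uint64 nOld,
                       const char *zExtra, sqlite3_uint64 nExtra){
      if( sqlite3_msize(p) < nOld + nExtra ){        /* msize(NULL)==0 */
        char *pNew = sqlite3_realloc64(p, nOld + nExtra);
        if( pNew==0 ){                /* prior allocation is still valid ... */
          sqlite3_free(p);            /* ... so release it before giving up  */
          return 0;
        }
        p = pNew;
      }
      memcpy(p + nOld, zExtra, (size_t)nExtra);
      return p;
    }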
** -** The Windows OS interface layer calls +** Prior to SQLite version 3.7.10, the Windows OS interface layer called ** the system malloc() and free() directly when converting ** filenames between the UTF-8 encoding used by SQLite ** and whatever filename encoding is used by the particular Windows -** installation. Memory allocation errors are detected, but -** they are reported back as [SQLITE_CANTOPEN] or +** installation. Memory allocation errors were detected, but +** they were reported back as [SQLITE_CANTOPEN] or ** [SQLITE_IOERR] rather than [SQLITE_NOMEM]. ** ** The pointer arguments to [sqlite3_free()] and [sqlite3_realloc()] ** must be either NULL or else pointers obtained from a prior ** invocation of [sqlite3_malloc()] or [sqlite3_realloc()] that have @@ -1874,13 +2488,16 @@ ** ** The application must not read or write any part of ** a block of memory after it has been released using ** [sqlite3_free()] or [sqlite3_realloc()]. */ -SQLITE_API void *sqlite3_malloc(int); -SQLITE_API void *sqlite3_realloc(void*, int); -SQLITE_API void sqlite3_free(void*); +SQLITE_API void *SQLITE_STDCALL sqlite3_malloc(int); +SQLITE_API void *SQLITE_STDCALL sqlite3_malloc64(sqlite3_uint64); +SQLITE_API void *SQLITE_STDCALL sqlite3_realloc(void*, int); +SQLITE_API void *SQLITE_STDCALL sqlite3_realloc64(void*, sqlite3_uint64); +SQLITE_API void SQLITE_STDCALL sqlite3_free(void*); +SQLITE_API sqlite3_uint64 SQLITE_STDCALL sqlite3_msize(void*); /* ** CAPI3REF: Memory Allocator Statistics ** ** SQLite provides these two interfaces for reporting on the status @@ -1901,12 +2518,12 @@ ** [sqlite3_memory_used()] if and only if the parameter to ** [sqlite3_memory_highwater()] is true. ^The value returned ** by [sqlite3_memory_highwater(1)] is the high-water mark ** prior to the reset. */ -SQLITE_API sqlite3_int64 sqlite3_memory_used(void); -SQLITE_API sqlite3_int64 sqlite3_memory_highwater(int resetFlag); +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_memory_used(void); +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_memory_highwater(int resetFlag); /* ** CAPI3REF: Pseudo-Random Number Generator ** ** SQLite contains a high-quality pseudo-random number generator (PRNG) used to @@ -1914,24 +2531,28 @@ ** already uses the largest possible [ROWID]. The PRNG is also used for ** the build-in random() and randomblob() SQL functions. This interface allows ** applications to access the same PRNG for other purposes. ** ** ^A call to this routine stores N bytes of randomness into buffer P. +** ^The P parameter can be a NULL pointer. ** -** ^The first time this routine is invoked (either internally or by -** the application) the PRNG is seeded using randomness obtained -** from the xRandomness method of the default [sqlite3_vfs] object. -** ^On all subsequent invocations, the pseudo-randomness is generated +** ^If this routine has not been previously called or if the previous +** call had N less than one or a NULL pointer for P, then the PRNG is +** seeded using randomness obtained from the xRandomness method of +** the default [sqlite3_vfs] object. +** ^If the previous call to this routine had an N of 1 or more and a +** non-NULL P then the pseudo-randomness is generated ** internally and without recourse to the [sqlite3_vfs] xRandomness ** method. 
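[Editor's note] A trivial sketch of sqlite3_randomness() as documented above; make_token() and the 16-byte length are arbitrary choices for the example.

    /* Sketch: pull 16 random bytes from SQLite's PRNG. */
    #include <sqlite3.h>

    void make_token(unsigned char aOut[16]){
      sqlite3_randomness(16, aOut);
    }

    void reseed_prng(void){
      /* Per the text above, a call with N<1 or a NULL buffer causes the
      ** next call to re-seed from the VFS xRandomness method. */
      sqlite3_randomness(0, 0);
    }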
*/ -SQLITE_API void sqlite3_randomness(int N, void *P); +SQLITE_API void SQLITE_STDCALL sqlite3_randomness(int N, void *P); /* ** CAPI3REF: Compile-Time Authorization Callbacks +** METHOD: sqlite3 ** -** ^This routine registers a authorizer callback with a particular +** ^This routine registers an authorizer callback with a particular ** [database connection], supplied in the first argument. ** ^The authorizer callback is invoked as SQL statements are being compiled ** by [sqlite3_prepare()] or its variants [sqlite3_prepare_v2()], ** [sqlite3_prepare16()] and [sqlite3_prepare16_v2()]. ^At various ** points during the compilation process, as logic is being created @@ -2004,11 +2625,11 @@ ** [sqlite3_prepare()] or its variants. Authorization is not ** performed during statement evaluation in [sqlite3_step()], unless ** as stated in the previous paragraph, sqlite3_step() invokes ** sqlite3_prepare_v2() to reprepare a statement after a schema change. */ -SQLITE_API int sqlite3_set_authorizer( +SQLITE_API int SQLITE_STDCALL sqlite3_set_authorizer( sqlite3*, int (*xAuth)(void*,int,const char*,const char*,const char*,const char*), void *pUserData ); @@ -2018,10 +2639,13 @@ ** The [sqlite3_set_authorizer | authorizer callback function] must ** return either [SQLITE_OK] or one of these two constants in order ** to signal SQLite whether or not the action is permitted. See the ** [sqlite3_set_authorizer | authorizer documentation] for additional ** information. +** +** Note that SQLITE_IGNORE is also used as a [conflict resolution mode] +** returned from the [sqlite3_vtab_on_conflict()] interface. */ #define SQLITE_DENY 1 /* Abort the SQL statement with an error */ #define SQLITE_IGNORE 2 /* Don't allow access, but don't generate an error */ /* @@ -2075,13 +2699,15 @@ #define SQLITE_CREATE_VTABLE 29 /* Table Name Module Name */ #define SQLITE_DROP_VTABLE 30 /* Table Name Module Name */ #define SQLITE_FUNCTION 31 /* NULL Function Name */ #define SQLITE_SAVEPOINT 32 /* Operation Savepoint Name */ #define SQLITE_COPY 0 /* No longer used */ +#define SQLITE_RECURSIVE 33 /* NULL NULL */ /* ** CAPI3REF: Tracing And Profiling Functions +** METHOD: sqlite3 ** ** These routines register callback functions that can be used for ** tracing and profiling the execution of SQL statements. ** ** ^The callback function registered by sqlite3_trace() is invoked at @@ -2090,44 +2716,67 @@ ** SQL statement text as the statement first begins executing. ** ^(Additional sqlite3_trace() callbacks might occur ** as each triggered subprogram is entered. The callbacks for triggers ** contain a UTF-8 SQL comment that identifies the trigger.)^ ** +** The [SQLITE_TRACE_SIZE_LIMIT] compile-time option can be used to limit +** the length of [bound parameter] expansion in the output of sqlite3_trace(). +** ** ^The callback function registered by sqlite3_profile() is invoked ** as each SQL statement finishes. ^The profile callback contains ** the original statement text and an estimate of wall-clock time -** of how long that statement took to run. +** of how long that statement took to run. ^The profile callback +** time is in units of nanoseconds, however the current implementation +** is only capable of millisecond resolution so the six least significant +** digits in the time are meaningless. Future versions of SQLite +** might provide greater resolution on the profiler callback. The +** sqlite3_profile() function is considered experimental and is +** subject to change in future versions of SQLite. 
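[Editor's note] A sketch registering the trace and profile callbacks described above. The callback names and the use of stderr are assumptions; the nanosecond unit and millisecond resolution are as stated in the text.

    /* Sketch: print each statement as it starts and its elapsed time when
    ** it finishes. */
    #include <stdio.h>
    #include <sqlite3.h>

    static void traceCb(void *pArg, const char *zSql){
      (void)pArg;
      fprintf(stderr, "SQL: %s\n", zSql);
    }
    static void profileCb(void *pArg, const char *zSql, sqlite3_uint64 ns){
      (void)pArg;
      fprintf(stderr, "%llu ns: %s\n", (unsigned long long)ns, zSql);
    }

    void enable_tracing(sqlite3 *db){
      sqlite3_trace(db, traceCb, 0);
      sqlite3_profile(db, profileCb, 0);
    }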
*/ -SQLITE_API void *sqlite3_trace(sqlite3*, void(*xTrace)(void*,const char*), void*); -SQLITE_API SQLITE_EXPERIMENTAL void *sqlite3_profile(sqlite3*, +SQLITE_API void *SQLITE_STDCALL sqlite3_trace(sqlite3*, void(*xTrace)(void*,const char*), void*); +SQLITE_API SQLITE_EXPERIMENTAL void *SQLITE_STDCALL sqlite3_profile(sqlite3*, void(*xProfile)(void*,const char*,sqlite3_uint64), void*); /* ** CAPI3REF: Query Progress Callbacks +** METHOD: sqlite3 ** -** ^This routine configures a callback function - the -** progress callback - that is invoked periodically during long -** running calls to [sqlite3_exec()], [sqlite3_step()] and -** [sqlite3_get_table()]. An example use for this +** ^The sqlite3_progress_handler(D,N,X,P) interface causes the callback +** function X to be invoked periodically during long running calls to +** [sqlite3_exec()], [sqlite3_step()] and [sqlite3_get_table()] for +** database connection D. An example use for this ** interface is to keep a GUI updated during a large query. +** +** ^The parameter P is passed through as the only parameter to the +** callback function X. ^The parameter N is the approximate number of +** [virtual machine instructions] that are evaluated between successive +** invocations of the callback X. ^If N is less than one then the progress +** handler is disabled. +** +** ^Only a single progress handler may be defined at one time per +** [database connection]; setting a new progress handler cancels the +** old one. ^Setting parameter X to NULL disables the progress handler. +** ^The progress handler is also disabled by setting N to a value less +** than 1. ** ** ^If the progress callback returns non-zero, the operation is ** interrupted. This feature can be used to implement a ** "Cancel" button on a GUI progress dialog box. ** -** The progress handler must not do anything that will modify +** The progress handler callback must not do anything that will modify ** the database connection that invoked the progress handler. ** Note that [sqlite3_prepare_v2()] and [sqlite3_step()] both modify their ** database connections for the meaning of "modify" in this paragraph. ** */ -SQLITE_API void sqlite3_progress_handler(sqlite3*, int, int(*)(void*), void*); +SQLITE_API void SQLITE_STDCALL sqlite3_progress_handler(sqlite3*, int, int(*)(void*), void*); /* ** CAPI3REF: Opening A New Database Connection +** CONSTRUCTOR: sqlite3 ** -** ^These routines open an SQLite database file whose name is given by the +** ^These routines open an SQLite database file as specified by the ** filename argument. ^The filename argument is interpreted as UTF-8 for ** sqlite3_open() and sqlite3_open_v2() and as UTF-16 in the native byte ** order for sqlite3_open16(). ^(A [database connection] handle is usually ** returned in *ppDb, even if an error occurs. The only exception is that ** if SQLite is unable to allocate memory to hold the [sqlite3] object, @@ -2136,13 +2785,13 @@ ** [SQLITE_OK] is returned. Otherwise an [error code] is returned.)^ ^The ** [sqlite3_errmsg()] or [sqlite3_errmsg16()] routines can be used to obtain ** an English language description of the error following a failure of any ** of the sqlite3_open() routines. ** -** ^The default encoding for the database will be UTF-8 if -** sqlite3_open() or sqlite3_open_v2() is called and -** UTF-16 in the native byte order if sqlite3_open16() is used. +** ^The default encoding will be UTF-8 for databases created using +** sqlite3_open() or sqlite3_open_v2(). 
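[Editor's note] A sketch of the progress handler described above: a cancel flag polled roughly every 10000 virtual-machine instructions. The flag, the interval, and watch_for_cancel() are assumptions for the example; a non-zero return interrupts the pending statement.

    /* Sketch: let a UI thread cancel a long-running query. */
    #include <sqlite3.h>

    static volatile int g_cancel = 0;     /* set from another thread */

    static int checkCancel(void *pArg){
      (void)pArg;
      return g_cancel;                    /* non-zero aborts the operation */
    }

    void watch_for_cancel(sqlite3 *db){
      sqlite3_progress_handler(db, 10000, checkCancel, 0);
    }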
^The default encoding for databases +** created using sqlite3_open16() will be UTF-16 in the native byte order. ** ** Whether or not an error occurs when it is opened, resources ** associated with the [database connection] handle should be released by ** passing it to [sqlite3_close()] when it is no longer required. ** @@ -2150,11 +2799,11 @@ ** except that it accepts two additional parameters for additional control ** over the new database connection. ^(The flags parameter to ** sqlite3_open_v2() can take one of ** the following three values, optionally combined with the ** [SQLITE_OPEN_NOMUTEX], [SQLITE_OPEN_FULLMUTEX], [SQLITE_OPEN_SHAREDCACHE], -** and/or [SQLITE_OPEN_PRIVATECACHE] flags:)^ +** [SQLITE_OPEN_PRIVATECACHE], and/or [SQLITE_OPEN_URI] flags:)^ ** ** <dl> ** ^(<dt>[SQLITE_OPEN_READONLY]</dt> ** <dd>The database is opened in read-only mode. If the database does not ** already exist, an error is returned.</dd>)^ @@ -2163,19 +2812,18 @@ ** <dd>The database is opened for reading and writing if possible, or reading ** only if the file is write protected by the operating system. In either ** case the database must already exist, otherwise an error is returned.</dd>)^ ** ** ^(<dt>[SQLITE_OPEN_READWRITE] | [SQLITE_OPEN_CREATE]</dt> -** <dd>The database is opened for reading and writing, and is creates it if +** <dd>The database is opened for reading and writing, and is created if ** it does not already exist. This is the behavior that is always used for ** sqlite3_open() and sqlite3_open16().</dd>)^ ** </dl> ** ** If the 3rd parameter to sqlite3_open_v2() is not one of the -** combinations shown above or one of the combinations shown above combined -** with the [SQLITE_OPEN_NOMUTEX], [SQLITE_OPEN_FULLMUTEX], -** [SQLITE_OPEN_SHAREDCACHE] and/or [SQLITE_OPEN_SHAREDCACHE] flags, +** combinations shown above optionally combined with other +** [SQLITE_OPEN_READONLY | SQLITE_OPEN_* bits] ** then the behavior is undefined. ** ** ^If the [SQLITE_OPEN_NOMUTEX] flag is set, then the database connection ** opens in the multi-thread [threading mode] as long as the single-thread ** mode has not been set at compile-time or start-time. ^If the @@ -2185,10 +2833,15 @@ ** ^The [SQLITE_OPEN_SHAREDCACHE] flag causes the database connection to be ** eligible to use [shared cache mode], regardless of whether or not shared ** cache is enabled using [sqlite3_enable_shared_cache()]. ^The ** [SQLITE_OPEN_PRIVATECACHE] flag causes the database connection to not ** participate in [shared cache mode] even if it is enabled. +** +** ^The fourth parameter to sqlite3_open_v2() is the name of the +** [sqlite3_vfs] object that defines the operating system interface that +** the new database connection should use. ^If the fourth parameter is +** a NULL pointer then the default [sqlite3_vfs] object is used. ** ** ^If the filename is ":memory:", then a private, temporary in-memory database ** is created for the connection. ^This in-memory database will vanish when ** the database connection is closed. Future versions of SQLite might ** make use of additional special filenames that begin with the ":" character. @@ -2198,44 +2851,224 @@ ** ** ^If the filename is an empty string, then a private, temporary ** on-disk database will be created. ^This private database will be ** automatically deleted as soon as the database connection is closed. 
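[Editor's note] A sketch of sqlite3_open_v2() with explicit flags, the default VFS, and a busy timeout. The filename "app.db", the 2-second timeout, and open_db() are placeholder choices.

    /* Sketch: open (creating if necessary) for read/write access. */
    #include <sqlite3.h>

    int open_db(sqlite3 **ppDb){
      int rc = sqlite3_open_v2("app.db", ppDb,
                               SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
                               0);              /* NULL => default VFS */
      if( rc!=SQLITE_OK ){
        sqlite3_close(*ppDb);   /* a handle is usually returned even on error */
        *ppDb = 0;
        return rc;
      }
      return sqlite3_busy_timeout(*ppDb, 2000);
    }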
** -** ^The fourth parameter to sqlite3_open_v2() is the name of the -** [sqlite3_vfs] object that defines the operating system interface that -** the new database connection should use. ^If the fourth parameter is -** a NULL pointer then the default [sqlite3_vfs] object is used. +** [[URI filenames in sqlite3_open()]] <h3>URI Filenames</h3> +** +** ^If [URI filename] interpretation is enabled, and the filename argument +** begins with "file:", then the filename is interpreted as a URI. ^URI +** filename interpretation is enabled if the [SQLITE_OPEN_URI] flag is +** set in the fourth argument to sqlite3_open_v2(), or if it has +** been enabled globally using the [SQLITE_CONFIG_URI] option with the +** [sqlite3_config()] method or by the [SQLITE_USE_URI] compile-time option. +** As of SQLite version 3.7.7, URI filename interpretation is turned off +** by default, but future releases of SQLite might enable URI filename +** interpretation by default. See "[URI filenames]" for additional +** information. +** +** URI filenames are parsed according to RFC 3986. ^If the URI contains an +** authority, then it must be either an empty string or the string +** "localhost". ^If the authority is not an empty string or "localhost", an +** error is returned to the caller. ^The fragment component of a URI, if +** present, is ignored. +** +** ^SQLite uses the path component of the URI as the name of the disk file +** which contains the database. ^If the path begins with a '/' character, +** then it is interpreted as an absolute path. ^If the path does not begin +** with a '/' (meaning that the authority section is omitted from the URI) +** then the path is interpreted as a relative path. +** ^(On windows, the first component of an absolute path +** is a drive specification (e.g. "C:").)^ +** +** [[core URI query parameters]] +** The query component of a URI may contain parameters that are interpreted +** either by SQLite itself, or by a [VFS | custom VFS implementation]. +** SQLite and its built-in [VFSes] interpret the +** following query parameters: +** +** <ul> +** <li> <b>vfs</b>: ^The "vfs" parameter may be used to specify the name of +** a VFS object that provides the operating system interface that should +** be used to access the database file on disk. ^If this option is set to +** an empty string the default VFS object is used. ^Specifying an unknown +** VFS is an error. ^If sqlite3_open_v2() is used and the vfs option is +** present, then the VFS specified by the option takes precedence over +** the value passed as the fourth parameter to sqlite3_open_v2(). +** +** <li> <b>mode</b>: ^(The mode parameter may be set to either "ro", "rw", +** "rwc", or "memory". Attempting to set it to any other value is +** an error)^. +** ^If "ro" is specified, then the database is opened for read-only +** access, just as if the [SQLITE_OPEN_READONLY] flag had been set in the +** third argument to sqlite3_open_v2(). ^If the mode option is set to +** "rw", then the database is opened for read-write (but not create) +** access, as if SQLITE_OPEN_READWRITE (but not SQLITE_OPEN_CREATE) had +** been set. ^Value "rwc" is equivalent to setting both +** SQLITE_OPEN_READWRITE and SQLITE_OPEN_CREATE. ^If the mode option is +** set to "memory" then a pure [in-memory database] that never reads +** or writes from disk is used. ^It is an error to specify a value for +** the mode parameter that is less restrictive than that specified by +** the flags passed in the third parameter to sqlite3_open_v2(). 
+** +** <li> <b>cache</b>: ^The cache parameter may be set to either "shared" or +** "private". ^Setting it to "shared" is equivalent to setting the +** SQLITE_OPEN_SHAREDCACHE bit in the flags argument passed to +** sqlite3_open_v2(). ^Setting the cache parameter to "private" is +** equivalent to setting the SQLITE_OPEN_PRIVATECACHE bit. +** ^If sqlite3_open_v2() is used and the "cache" parameter is present in +** a URI filename, its value overrides any behavior requested by setting +** SQLITE_OPEN_PRIVATECACHE or SQLITE_OPEN_SHAREDCACHE flag. +** +** <li> <b>psow</b>: ^The psow parameter indicates whether or not the +** [powersafe overwrite] property does or does not apply to the +** storage media on which the database file resides. +** +** <li> <b>nolock</b>: ^The nolock parameter is a boolean query parameter +** which if set disables file locking in rollback journal modes. This +** is useful for accessing a database on a filesystem that does not +** support locking. Caution: Database corruption might result if two +** or more processes write to the same database and any one of those +** processes uses nolock=1. +** +** <li> <b>immutable</b>: ^The immutable parameter is a boolean query +** parameter that indicates that the database file is stored on +** read-only media. ^When immutable is set, SQLite assumes that the +** database file cannot be changed, even by a process with higher +** privilege, and so the database is opened read-only and all locking +** and change detection is disabled. Caution: Setting the immutable +** property on a database file that does in fact change can result +** in incorrect query results and/or [SQLITE_CORRUPT] errors. +** See also: [SQLITE_IOCAP_IMMUTABLE]. +** +** </ul> +** +** ^Specifying an unknown parameter in the query component of a URI is not an +** error. Future versions of SQLite might understand additional query +** parameters. See "[query parameters with special meaning to SQLite]" for +** additional information. +** +** [[URI filename examples]] <h3>URI filename examples</h3> +** +** <table border="1" align=center cellpadding=5> +** <tr><th> URI filenames <th> Results +** <tr><td> file:data.db <td> +** Open the file "data.db" in the current directory. +** <tr><td> file:/home/fred/data.db<br> +** file:///home/fred/data.db <br> +** file://localhost/home/fred/data.db <br> <td> +** Open the database file "/home/fred/data.db". +** <tr><td> file://darkstar/home/fred/data.db <td> +** An error. "darkstar" is not a recognized authority. +** <tr><td style="white-space:nowrap"> +** file:///C:/Documents%20and%20Settings/fred/Desktop/data.db +** <td> Windows only: Open the file "data.db" on fred's desktop on drive +** C:. Note that the %20 escaping in this example is not strictly +** necessary - space characters can be used literally +** in URI filenames. +** <tr><td> file:data.db?mode=ro&cache=private <td> +** Open file "data.db" in the current directory for read-only access. +** Regardless of whether or not shared-cache mode is enabled by +** default, use a private cache. +** <tr><td> file:/home/fred/data.db?vfs=unix-dotfile <td> +** Open file "/home/fred/data.db". Use the special VFS "unix-dotfile" +** that uses dot-files in place of posix advisory locking. +** <tr><td> file:data.db?mode=readonly <td> +** An error. "readonly" is not a valid option for the "mode" parameter. +** </table> +** +** ^URI hexadecimal escape sequences (%HH) are supported within the path and +** query components of a URI. 
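A short sketch of one of the URI examples tabulated above, assuming URI handling has not been enabled globally, so SQLITE_OPEN_URI is passed explicitly in the flags; the helper name is illustrative only.

    #include "sqlite3.h"

    int open_readonly_private_cache(sqlite3 **ppDb){
      /* mode=ro is consistent with SQLITE_OPEN_READONLY; cache=private
      ** requests a private page cache regardless of the global default. */
      return sqlite3_open_v2("file:data.db?mode=ro&cache=private", ppDb,
                             SQLITE_OPEN_READONLY | SQLITE_OPEN_URI,
                             0);
    }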
A hexadecimal escape sequence consists of a +** percent sign - "%" - followed by exactly two hexadecimal digits +** specifying an octet value. ^Before the path or query components of a +** URI filename are interpreted, they are encoded using UTF-8 and all +** hexadecimal escape sequences replaced by a single byte containing the +** corresponding octet. If this process generates an invalid UTF-8 encoding, +** the results are undefined. ** ** <b>Note to Windows users:</b> The encoding used for the filename argument ** of sqlite3_open() and sqlite3_open_v2() must be UTF-8, not whatever ** codepage is currently defined. Filenames containing international ** characters must be converted to UTF-8 prior to passing them into ** sqlite3_open() or sqlite3_open_v2(). +** +** <b>Note to Windows Runtime users:</b> The temporary directory must be set +** prior to calling sqlite3_open() or sqlite3_open_v2(). Otherwise, various +** features that require the use of temporary files may fail. +** +** See also: [sqlite3_temp_directory] */ -SQLITE_API int sqlite3_open( +SQLITE_API int SQLITE_STDCALL sqlite3_open( const char *filename, /* Database filename (UTF-8) */ sqlite3 **ppDb /* OUT: SQLite db handle */ ); -SQLITE_API int sqlite3_open16( +SQLITE_API int SQLITE_STDCALL sqlite3_open16( const void *filename, /* Database filename (UTF-16) */ sqlite3 **ppDb /* OUT: SQLite db handle */ ); -SQLITE_API int sqlite3_open_v2( +SQLITE_API int SQLITE_STDCALL sqlite3_open_v2( const char *filename, /* Database filename (UTF-8) */ sqlite3 **ppDb, /* OUT: SQLite db handle */ int flags, /* Flags */ const char *zVfs /* Name of VFS module to use */ ); + +/* +** CAPI3REF: Obtain Values For URI Parameters +** +** These are utility routines, useful to VFS implementations, that check +** to see if a database file was a URI that contained a specific query +** parameter, and if so obtains the value of that query parameter. +** +** If F is the database filename pointer passed into the xOpen() method of +** a VFS implementation when the flags parameter to xOpen() has one or +** more of the [SQLITE_OPEN_URI] or [SQLITE_OPEN_MAIN_DB] bits set and +** P is the name of the query parameter, then +** sqlite3_uri_parameter(F,P) returns the value of the P +** parameter if it exists or a NULL pointer if P does not appear as a +** query parameter on F. If P is a query parameter of F +** has no explicit value, then sqlite3_uri_parameter(F,P) returns +** a pointer to an empty string. +** +** The sqlite3_uri_boolean(F,P,B) routine assumes that P is a boolean +** parameter and returns true (1) or false (0) according to the value +** of P. The sqlite3_uri_boolean(F,P,B) routine returns true (1) if the +** value of query parameter P is one of "yes", "true", or "on" in any +** case or if the value begins with a non-zero number. The +** sqlite3_uri_boolean(F,P,B) routines returns false (0) if the value of +** query parameter P is one of "no", "false", or "off" in any case or +** if the value begins with a numeric zero. If P is not a query +** parameter on F or if the value of P is does not match any of the +** above, then sqlite3_uri_boolean(F,P,B) returns (B!=0). +** +** The sqlite3_uri_int64(F,P,D) routine converts the value of P into a +** 64-bit signed integer and returns that integer, or D if P does not +** exist. If the value of P is something other than an integer, then +** zero is returned. +** +** If F is a NULL pointer, then sqlite3_uri_parameter(F,P) returns NULL and +** sqlite3_uri_boolean(F,P,B) returns B. 
If F is not a NULL pointer and +** is not a database file pathname pointer that SQLite passed into the xOpen +** VFS method, then the behavior of this routine is undefined and probably +** undesirable. +*/ +SQLITE_API const char *SQLITE_STDCALL sqlite3_uri_parameter(const char *zFilename, const char *zParam); +SQLITE_API int SQLITE_STDCALL sqlite3_uri_boolean(const char *zFile, const char *zParam, int bDefault); +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_uri_int64(const char*, const char*, sqlite3_int64); + /* ** CAPI3REF: Error Codes And Messages +** METHOD: sqlite3 ** -** ^The sqlite3_errcode() interface returns the numeric [result code] or -** [extended result code] for the most recent failed sqlite3_* API call -** associated with a [database connection]. If a prior API call failed -** but the most recent API call succeeded, the return value from -** sqlite3_errcode() is undefined. ^The sqlite3_extended_errcode() +** ^If the most recent sqlite3_* API call associated with +** [database connection] D failed, then the sqlite3_errcode(D) interface +** returns the numeric [result code] or [extended result code] for that +** API call. +** If the most recent API call was successful, +** then the return value from sqlite3_errcode() is undefined. +** ^The sqlite3_extended_errcode() ** interface is the same except that it always returns the ** [extended result code] even when extended result codes are ** disabled. ** ** ^The sqlite3_errmsg() and sqlite3_errmsg16() return English-language @@ -2242,10 +3075,15 @@ ** text that describes the error, as either UTF-8 or UTF-16 respectively. ** ^(Memory to hold the error message string is managed internally. ** The application does not need to worry about freeing the result. ** However, the error string might be overwritten or deallocated by ** subsequent calls to other SQLite interface functions.)^ +** +** ^The sqlite3_errstr() interface returns the English-language text +** that describes the [result code], as UTF-8. +** ^(Memory to hold the error message string is managed internally +** and must not be freed by the application)^. ** ** When the serialized [threading mode] is in use, it might be the ** case that a second error occurs on a separate thread in between ** the time of the first error and the call to these interfaces. ** When that happens, the second error will be reported since these @@ -2257,60 +3095,67 @@ ** ** If an interface fails with SQLITE_MISUSE, that means the interface ** was invoked incorrectly by the application. In that case, the ** error code and message may or may not be set. */ -SQLITE_API int sqlite3_errcode(sqlite3 *db); -SQLITE_API int sqlite3_extended_errcode(sqlite3 *db); -SQLITE_API const char *sqlite3_errmsg(sqlite3*); -SQLITE_API const void *sqlite3_errmsg16(sqlite3*); +SQLITE_API int SQLITE_STDCALL sqlite3_errcode(sqlite3 *db); +SQLITE_API int SQLITE_STDCALL sqlite3_extended_errcode(sqlite3 *db); +SQLITE_API const char *SQLITE_STDCALL sqlite3_errmsg(sqlite3*); +SQLITE_API const void *SQLITE_STDCALL sqlite3_errmsg16(sqlite3*); +SQLITE_API const char *SQLITE_STDCALL sqlite3_errstr(int); /* -** CAPI3REF: SQL Statement Object +** CAPI3REF: Prepared Statement Object ** KEYWORDS: {prepared statement} {prepared statements} ** -** An instance of this object represents a single SQL statement. -** This object is variously known as a "prepared statement" or a -** "compiled SQL statement" or simply as a "statement". 
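A small sketch of error reporting using the error-code and error-message routines documented above; the helper name is illustrative only.

    #include <stdio.h>
    #include "sqlite3.h"

    static void report_error(sqlite3 *db){
      int rc = sqlite3_extended_errcode(db);        /* most recent extended code */
      fprintf(stderr, "error %d (%s): %s\n",
              rc, sqlite3_errstr(rc), sqlite3_errmsg(db));
    }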
+** An instance of this object represents a single SQL statement that +** has been compiled into binary form and is ready to be evaluated. ** -** The life of a statement object goes something like this: +** Think of each SQL statement as a separate computer program. The +** original SQL text is source code. A prepared statement object +** is the compiled object code. All SQL must be converted into a +** prepared statement before it can be run. +** +** The life-cycle of a prepared statement object usually goes like this: ** ** <ol> -** <li> Create the object using [sqlite3_prepare_v2()] or a related -** function. -** <li> Bind values to [host parameters] using the sqlite3_bind_*() +** <li> Create the prepared statement object using [sqlite3_prepare_v2()]. +** <li> Bind values to [parameters] using the sqlite3_bind_*() ** interfaces. ** <li> Run the SQL by calling [sqlite3_step()] one or more times. -** <li> Reset the statement using [sqlite3_reset()] then go back +** <li> Reset the prepared statement using [sqlite3_reset()] then go back ** to step 2. Do this zero or more times. ** <li> Destroy the object using [sqlite3_finalize()]. ** </ol> -** -** Refer to documentation on individual methods above for additional -** information. */ typedef struct sqlite3_stmt sqlite3_stmt; /* ** CAPI3REF: Run-time Limits +** METHOD: sqlite3 ** ** ^(This interface allows the size of various constructs to be limited ** on a connection by connection basis. The first parameter is the ** [database connection] whose limit is to be set or queried. The ** second parameter is one of the [limit categories] that define a ** class of constructs to be size limited. The third parameter is the -** new limit for that construct. The function returns the old limit.)^ +** new limit for that construct.)^ ** ** ^If the new limit is a negative number, the limit is unchanged. -** ^(For the limit category of SQLITE_LIMIT_XYZ there is a +** ^(For each limit category SQLITE_LIMIT_<i>NAME</i> there is a ** [limits | hard upper bound] -** set by a compile-time C preprocessor macro named -** [limits | SQLITE_MAX_XYZ]. +** set at compile-time by a C preprocessor macro called +** [limits | SQLITE_MAX_<i>NAME</i>]. ** (The "_LIMIT_" in the name is changed to "_MAX_".))^ ** ^Attempts to increase a limit above its hard upper bound are ** silently truncated to the hard upper bound. ** +** ^Regardless of whether or not the limit was changed, the +** [sqlite3_limit()] interface returns the prior value of the limit. +** ^Hence, to find the current value of a limit without changing it, +** simply invoke this interface with the third parameter set to -1. +** ** Run-time limits are intended for use in applications that manage ** both their own internal database and also databases that are controlled ** by untrusted external sources. An example application might be a ** web browser that has its own databases for storing history and ** separate databases controlled by JavaScript applications downloaded @@ -2322,11 +3167,11 @@ ** created by an untrusted script can be contained using the ** [max_page_count] [PRAGMA]. ** ** New run-time limit categories may be added in future releases. */ -SQLITE_API int sqlite3_limit(sqlite3*, int id, int newVal); +SQLITE_API int SQLITE_STDCALL sqlite3_limit(sqlite3*, int id, int newVal); /* ** CAPI3REF: Run-Time Limit Categories ** KEYWORDS: {limit category} {*limit categories} ** @@ -2334,47 +3179,54 @@ ** that can be lowered at run-time using [sqlite3_limit()]. 
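A compact sketch of the five-step life cycle listed above, assuming a hypothetical table t1 with columns id and score; error handling is kept minimal.

    #include "sqlite3.h"

    int sum_scores(sqlite3 *db, int minId, sqlite3_int64 *pSum){
      sqlite3_stmt *pStmt = 0;
      int rc = sqlite3_prepare_v2(db,                          /* step 1: create  */
                  "SELECT score FROM t1 WHERE id>=?1", -1, &pStmt, 0);
      if( rc!=SQLITE_OK ) return rc;
      sqlite3_bind_int(pStmt, 1, minId);                       /* step 2: bind    */
      *pSum = 0;
      while( (rc = sqlite3_step(pStmt))==SQLITE_ROW ){         /* step 3: run     */
        *pSum += sqlite3_column_int64(pStmt, 0);
      }
      rc = (rc==SQLITE_DONE) ? SQLITE_OK : rc;
      sqlite3_finalize(pStmt);                                 /* step 5: destroy */
      return rc;
    }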
** The synopsis of the meanings of the various limits is shown below. ** Additional information is available at [limits | Limits in SQLite]. ** ** <dl> -** ^(<dt>SQLITE_LIMIT_LENGTH</dt> -** <dd>The maximum size of any string or BLOB or table row.<dd>)^ +** [[SQLITE_LIMIT_LENGTH]] ^(<dt>SQLITE_LIMIT_LENGTH</dt> +** <dd>The maximum size of any string or BLOB or table row, in bytes.<dd>)^ ** -** ^(<dt>SQLITE_LIMIT_SQL_LENGTH</dt> +** [[SQLITE_LIMIT_SQL_LENGTH]] ^(<dt>SQLITE_LIMIT_SQL_LENGTH</dt> ** <dd>The maximum length of an SQL statement, in bytes.</dd>)^ ** -** ^(<dt>SQLITE_LIMIT_COLUMN</dt> +** [[SQLITE_LIMIT_COLUMN]] ^(<dt>SQLITE_LIMIT_COLUMN</dt> ** <dd>The maximum number of columns in a table definition or in the ** result set of a [SELECT] or the maximum number of columns in an index ** or in an ORDER BY or GROUP BY clause.</dd>)^ ** -** ^(<dt>SQLITE_LIMIT_EXPR_DEPTH</dt> +** [[SQLITE_LIMIT_EXPR_DEPTH]] ^(<dt>SQLITE_LIMIT_EXPR_DEPTH</dt> ** <dd>The maximum depth of the parse tree on any expression.</dd>)^ ** -** ^(<dt>SQLITE_LIMIT_COMPOUND_SELECT</dt> +** [[SQLITE_LIMIT_COMPOUND_SELECT]] ^(<dt>SQLITE_LIMIT_COMPOUND_SELECT</dt> ** <dd>The maximum number of terms in a compound SELECT statement.</dd>)^ ** -** ^(<dt>SQLITE_LIMIT_VDBE_OP</dt> +** [[SQLITE_LIMIT_VDBE_OP]] ^(<dt>SQLITE_LIMIT_VDBE_OP</dt> ** <dd>The maximum number of instructions in a virtual machine program -** used to implement an SQL statement.</dd>)^ +** used to implement an SQL statement. This limit is not currently +** enforced, though that might be added in some future release of +** SQLite.</dd>)^ ** -** ^(<dt>SQLITE_LIMIT_FUNCTION_ARG</dt> +** [[SQLITE_LIMIT_FUNCTION_ARG]] ^(<dt>SQLITE_LIMIT_FUNCTION_ARG</dt> ** <dd>The maximum number of arguments on a function.</dd>)^ ** -** ^(<dt>SQLITE_LIMIT_ATTACHED</dt> +** [[SQLITE_LIMIT_ATTACHED]] ^(<dt>SQLITE_LIMIT_ATTACHED</dt> ** <dd>The maximum number of [ATTACH | attached databases].)^</dd> ** +** [[SQLITE_LIMIT_LIKE_PATTERN_LENGTH]] ** ^(<dt>SQLITE_LIMIT_LIKE_PATTERN_LENGTH</dt> ** <dd>The maximum length of the pattern argument to the [LIKE] or ** [GLOB] operators.</dd>)^ ** +** [[SQLITE_LIMIT_VARIABLE_NUMBER]] ** ^(<dt>SQLITE_LIMIT_VARIABLE_NUMBER</dt> -** <dd>The maximum number of variables in an SQL statement that can -** be bound.</dd>)^ +** <dd>The maximum index number of any [parameter] in an SQL statement.)^ ** -** ^(<dt>SQLITE_LIMIT_TRIGGER_DEPTH</dt> +** [[SQLITE_LIMIT_TRIGGER_DEPTH]] ^(<dt>SQLITE_LIMIT_TRIGGER_DEPTH</dt> ** <dd>The maximum depth of recursion for triggers.</dd>)^ +** +** [[SQLITE_LIMIT_WORKER_THREADS]] ^(<dt>SQLITE_LIMIT_WORKER_THREADS</dt> +** <dd>The maximum number of auxiliary worker threads that a single +** [prepared statement] may start.</dd>)^ ** </dl> */ #define SQLITE_LIMIT_LENGTH 0 #define SQLITE_LIMIT_SQL_LENGTH 1 #define SQLITE_LIMIT_COLUMN 2 @@ -2384,14 +3236,17 @@ #define SQLITE_LIMIT_FUNCTION_ARG 6 #define SQLITE_LIMIT_ATTACHED 7 #define SQLITE_LIMIT_LIKE_PATTERN_LENGTH 8 #define SQLITE_LIMIT_VARIABLE_NUMBER 9 #define SQLITE_LIMIT_TRIGGER_DEPTH 10 +#define SQLITE_LIMIT_WORKER_THREADS 11 /* ** CAPI3REF: Compiling An SQL Statement ** KEYWORDS: {SQL statement compiler} +** METHOD: sqlite3 +** CONSTRUCTOR: sqlite3_stmt ** ** To execute an SQL query, it must first be compiled into a byte-code ** program using one of these routines. ** ** The first argument, "db", is a [database connection] obtained from a @@ -2401,19 +3256,18 @@ ** The second argument, "zSql", is the statement to be compiled, encoded ** as either UTF-8 or UTF-16. 
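A brief sketch of the query-then-lower pattern for sqlite3_limit() described above; passing -1 as the third parameter returns the current value without changing it, and the one-million-byte threshold here is an arbitrary example.

    #include "sqlite3.h"

    void restrict_lengths(sqlite3 *db){
      int cur = sqlite3_limit(db, SQLITE_LIMIT_LENGTH, -1);    /* query only */
      if( cur > 1000000 ){
        sqlite3_limit(db, SQLITE_LIMIT_LENGTH, 1000000);       /* lower the cap */
      }
    }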
The sqlite3_prepare() and sqlite3_prepare_v2() ** interfaces use UTF-8, and sqlite3_prepare16() and sqlite3_prepare16_v2() ** use UTF-16. ** -** ^If the nByte argument is less than zero, then zSql is read up to the -** first zero terminator. ^If nByte is non-negative, then it is the maximum -** number of bytes read from zSql. ^When nByte is non-negative, the -** zSql string ends at either the first '\000' or '\u0000' character or -** the nByte-th byte, whichever comes first. If the caller knows -** that the supplied string is nul-terminated, then there is a small -** performance advantage to be gained by passing an nByte parameter that -** is equal to the number of bytes in the input string <i>including</i> -** the nul-terminator bytes. +** ^If the nByte argument is negative, then zSql is read up to the +** first zero terminator. ^If nByte is positive, then it is the +** number of bytes read from zSql. ^If nByte is zero, then no prepared +** statement is generated. +** If the caller knows that the supplied string is nul-terminated, then +** there is a small performance advantage to passing an nByte parameter that +** is the number of bytes in the input string <i>including</i> +** the nul-terminator. ** ** ^If pzTail is not NULL then *pzTail is made to point to the first byte ** past the end of the first SQL statement in zSql. These routines only ** compile the first statement in zSql, so *pzTail is left pointing to ** what remains uncompiled. @@ -2439,16 +3293,12 @@ ** ** <ol> ** <li> ** ^If the database schema changes, instead of returning [SQLITE_SCHEMA] as it ** always used to do, [sqlite3_step()] will automatically recompile the SQL -** statement and try to run it again. ^If the schema has changed in -** a way that makes the statement no longer valid, [sqlite3_step()] will still -** return [SQLITE_SCHEMA]. But unlike the legacy behavior, [SQLITE_SCHEMA] is -** now a fatal error. Calling [sqlite3_prepare_v2()] again will not make the -** error go away. Note: use [sqlite3_errmsg()] to find the text -** of the parsing error that results in an [SQLITE_SCHEMA] return. +** statement and try to run it again. As many as [SQLITE_MAX_SCHEMA_RETRY] +** retries will occur before sqlite3_step() gives up and returns an error. ** </li> ** ** <li> ** ^When an error occurs, [sqlite3_step()] will return one of the detailed ** [error codes] or [extended error codes]. ^The legacy behavior was that @@ -2457,55 +3307,113 @@ ** in order to find the underlying cause of the problem. With the "v2" prepare ** interfaces, the underlying reason for the error is returned immediately. ** </li> ** ** <li> -** ^If the value of a [parameter | host parameter] in the WHERE clause might -** change the query plan for a statement, then the statement may be -** automatically recompiled (as if there had been a schema change) on the first -** [sqlite3_step()] call following any change to the -** [sqlite3_bind_text | bindings] of the [parameter]. +** ^If the specific value bound to [parameter | host parameter] in the +** WHERE clause might influence the choice of query plan for a statement, +** then the statement will be automatically recompiled, as if there had been +** a schema change, on the first [sqlite3_step()] call following any change +** to the [sqlite3_bind_text | bindings] of that [parameter]. 
+** ^The specific value of WHERE-clause [parameter] might influence the +** choice of query plan if the parameter is the left-hand side of a [LIKE] +** or [GLOB] operator or if the parameter is compared to an indexed column +** and the [SQLITE_ENABLE_STAT3] compile-time option is enabled. ** </li> ** </ol> */ -SQLITE_API int sqlite3_prepare( +SQLITE_API int SQLITE_STDCALL sqlite3_prepare( sqlite3 *db, /* Database handle */ const char *zSql, /* SQL statement, UTF-8 encoded */ int nByte, /* Maximum length of zSql in bytes. */ sqlite3_stmt **ppStmt, /* OUT: Statement handle */ const char **pzTail /* OUT: Pointer to unused portion of zSql */ ); -SQLITE_API int sqlite3_prepare_v2( +SQLITE_API int SQLITE_STDCALL sqlite3_prepare_v2( sqlite3 *db, /* Database handle */ const char *zSql, /* SQL statement, UTF-8 encoded */ int nByte, /* Maximum length of zSql in bytes. */ sqlite3_stmt **ppStmt, /* OUT: Statement handle */ const char **pzTail /* OUT: Pointer to unused portion of zSql */ ); -SQLITE_API int sqlite3_prepare16( +SQLITE_API int SQLITE_STDCALL sqlite3_prepare16( sqlite3 *db, /* Database handle */ const void *zSql, /* SQL statement, UTF-16 encoded */ int nByte, /* Maximum length of zSql in bytes. */ sqlite3_stmt **ppStmt, /* OUT: Statement handle */ const void **pzTail /* OUT: Pointer to unused portion of zSql */ ); -SQLITE_API int sqlite3_prepare16_v2( +SQLITE_API int SQLITE_STDCALL sqlite3_prepare16_v2( sqlite3 *db, /* Database handle */ const void *zSql, /* SQL statement, UTF-16 encoded */ int nByte, /* Maximum length of zSql in bytes. */ sqlite3_stmt **ppStmt, /* OUT: Statement handle */ const void **pzTail /* OUT: Pointer to unused portion of zSql */ ); /* ** CAPI3REF: Retrieving Statement SQL +** METHOD: sqlite3_stmt ** ** ^This interface can be used to retrieve a saved copy of the original ** SQL text used to create a [prepared statement] if that statement was ** compiled using either [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()]. */ -SQLITE_API const char *sqlite3_sql(sqlite3_stmt *pStmt); +SQLITE_API const char *SQLITE_STDCALL sqlite3_sql(sqlite3_stmt *pStmt); + +/* +** CAPI3REF: Determine If An SQL Statement Writes The Database +** METHOD: sqlite3_stmt +** +** ^The sqlite3_stmt_readonly(X) interface returns true (non-zero) if +** and only if the [prepared statement] X makes no direct changes to +** the content of the database file. +** +** Note that [application-defined SQL functions] or +** [virtual tables] might change the database indirectly as a side effect. +** ^(For example, if an application defines a function "eval()" that +** calls [sqlite3_exec()], then the following SQL statement would +** change the database file through side-effects: +** +** <blockquote><pre> +** SELECT eval('DELETE FROM t1') FROM t2; +** </pre></blockquote> +** +** But because the [SELECT] statement does not change the database file +** directly, sqlite3_stmt_readonly() would still return true.)^ +** +** ^Transaction control statements such as [BEGIN], [COMMIT], [ROLLBACK], +** [SAVEPOINT], and [RELEASE] cause sqlite3_stmt_readonly() to return true, +** since the statements themselves do not actually modify the database but +** rather they control the timing of when other statements modify the +** database. ^The [ATTACH] and [DETACH] statements also cause +** sqlite3_stmt_readonly() to return true since, while those statements +** change the configuration of a database connection, they do not make +** changes to the content of the database files on disk. 
+*/ +SQLITE_API int SQLITE_STDCALL sqlite3_stmt_readonly(sqlite3_stmt *pStmt); + +/* +** CAPI3REF: Determine If A Prepared Statement Has Been Reset +** METHOD: sqlite3_stmt +** +** ^The sqlite3_stmt_busy(S) interface returns true (non-zero) if the +** [prepared statement] S has been stepped at least once using +** [sqlite3_step(S)] but has neither run to completion (returned +** [SQLITE_DONE] from [sqlite3_step(S)]) nor +** been reset using [sqlite3_reset(S)]. ^The sqlite3_stmt_busy(S) +** interface returns false if S is a NULL pointer. If S is not a +** NULL pointer and is not a pointer to a valid [prepared statement] +** object, then the behavior is undefined and probably undesirable. +** +** This interface can be used in combination [sqlite3_next_stmt()] +** to locate all prepared statements associated with a database +** connection that are in need of being reset. This can be used, +** for example, in diagnostic routines to search for prepared +** statements that are holding a transaction open. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_stmt_busy(sqlite3_stmt*); /* ** CAPI3REF: Dynamically Typed Value Object ** KEYWORDS: {protected sqlite3_value} {unprotected sqlite3_value} ** @@ -2516,23 +3424,25 @@ ** ** An sqlite3_value object may be either "protected" or "unprotected". ** Some interfaces require a protected sqlite3_value. Other interfaces ** will accept either a protected or an unprotected sqlite3_value. ** Every interface that accepts sqlite3_value arguments specifies -** whether or not it requires a protected sqlite3_value. +** whether or not it requires a protected sqlite3_value. The +** [sqlite3_value_dup()] interface can be used to construct a new +** protected sqlite3_value from an unprotected sqlite3_value. ** ** The terms "protected" and "unprotected" refer to whether or not -** a mutex is held. A internal mutex is held for a protected +** a mutex is held. An internal mutex is held for a protected ** sqlite3_value object but no mutex is held for an unprotected ** sqlite3_value object. If SQLite is compiled to be single-threaded ** (with [SQLITE_THREADSAFE=0] and with [sqlite3_threadsafe()] returning 0) ** or if SQLite is run in one of reduced mutex modes ** [SQLITE_CONFIG_SINGLETHREAD] or [SQLITE_CONFIG_MULTITHREAD] ** then there is no distinction between protected and unprotected ** sqlite3_value objects and they can be used interchangeably. However, ** for maximum code portability it is recommended that applications -** still make the distinction between between protected and unprotected +** still make the distinction between protected and unprotected ** sqlite3_value objects even when not strictly required. ** ** ^The sqlite3_value objects that are passed as parameters into the ** implementation of [application-defined SQL functions] are protected. 
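A sketch of the diagnostic pattern suggested above: walk a connection's prepared statements with sqlite3_next_stmt() and report any that sqlite3_stmt_busy() says are still mid-evaluation. The function name is illustrative only.

    #include <stdio.h>
    #include "sqlite3.h"

    void report_busy_statements(sqlite3 *db){
      sqlite3_stmt *pStmt;
      for(pStmt=sqlite3_next_stmt(db, 0); pStmt; pStmt=sqlite3_next_stmt(db, pStmt)){
        if( sqlite3_stmt_busy(pStmt) ){
          const char *zSql = sqlite3_sql(pStmt);   /* may be NULL for legacy prepare */
          printf("busy statement: %s\n", zSql ? zSql : "(sql unavailable)");
        }
      }
    }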
** ^The sqlite3_value object returned by @@ -2560,10 +3470,11 @@ /* ** CAPI3REF: Binding Values To Prepared Statements ** KEYWORDS: {host parameter} {host parameters} {host parameter name} ** KEYWORDS: {SQL parameter} {SQL parameters} {parameter binding} +** METHOD: sqlite3_stmt ** ** ^(In the SQL statement text input to [sqlite3_prepare_v2()] and its variants, ** literals may be replaced by a [parameter] that matches one of following ** templates: ** @@ -2574,11 +3485,11 @@ ** <li> @VVV ** <li> $VVV ** </ul> ** ** In the templates above, NNN represents an integer literal, -** and VVV represents an alphanumeric identifer.)^ ^The values of these +** and VVV represents an alphanumeric identifier.)^ ^The values of these ** parameters (also called "host parameter names" or "SQL parameters") ** can be set using the sqlite3_bind_*() routines defined here. ** ** ^The first argument to the sqlite3_bind_*() routines is always ** a pointer to the [sqlite3_stmt] object returned from @@ -2593,25 +3504,49 @@ ** for "?NNN" parameters is the value of NNN. ** ^The NNN value must be between 1 and the [sqlite3_limit()] ** parameter [SQLITE_LIMIT_VARIABLE_NUMBER] (default value: 999). ** ** ^The third argument is the value to bind to the parameter. +** ^If the third parameter to sqlite3_bind_text() or sqlite3_bind_text16() +** or sqlite3_bind_blob() is a NULL pointer then the fourth parameter +** is ignored and the end result is the same as sqlite3_bind_null(). ** ** ^(In those routines that have a fourth argument, its value is the ** number of bytes in the parameter. To be clear: the value is the ** number of <u>bytes</u> in the value, not the number of characters.)^ -** ^If the fourth parameter is negative, the length of the string is +** ^If the fourth parameter to sqlite3_bind_text() or sqlite3_bind_text16() +** is negative, then the length of the string is ** the number of bytes up to the first zero terminator. +** If the fourth parameter to sqlite3_bind_blob() is negative, then +** the behavior is undefined. +** If a non-negative fourth parameter is provided to sqlite3_bind_text() +** or sqlite3_bind_text16() or sqlite3_bind_text64() then +** that parameter must be the byte offset +** where the NUL terminator would occur assuming the string were NUL +** terminated. If any NUL characters occur at byte offsets less than +** the value of the fourth parameter then the resulting string value will +** contain embedded NULs. The result of expressions involving strings +** with embedded NULs is undefined. ** -** ^The fifth argument to sqlite3_bind_blob(), sqlite3_bind_text(), and -** sqlite3_bind_text16() is a destructor used to dispose of the BLOB or -** string after SQLite has finished with it. ^If the fifth argument is +** ^The fifth argument to the BLOB and string binding interfaces +** is a destructor used to dispose of the BLOB or +** string after SQLite has finished with it. ^The destructor is called +** to dispose of the BLOB or string even if the call to bind API fails. +** ^If the fifth argument is ** the special value [SQLITE_STATIC], then SQLite assumes that the ** information is in static, unmanaged space and does not need to be freed. ** ^If the fifth argument has the value [SQLITE_TRANSIENT], then ** SQLite makes its own private copy of the data immediately, before ** the sqlite3_bind_*() routine returns. 
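A small sketch contrasting SQLITE_TRANSIENT and SQLITE_STATIC as the fifth argument to sqlite3_bind_text(); the parameter indexes, the zGreeting constant, and the helper name are placeholders.

    #include "sqlite3.h"

    static const char zGreeting[] = "hello";   /* lives for the whole program */

    int bind_two_ways(sqlite3_stmt *pStmt, const char *zUser){
      /* zUser may change or be freed after this call, so SQLite copies it now. */
      int rc = sqlite3_bind_text(pStmt, 1, zUser, -1, SQLITE_TRANSIENT);
      if( rc!=SQLITE_OK ) return rc;
      /* zGreeting is static storage, so no copy is needed. */
      return sqlite3_bind_text(pStmt, 2, zGreeting, -1, SQLITE_STATIC);
    }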
+** +** ^The sixth argument to sqlite3_bind_text64() must be one of +** [SQLITE_UTF8], [SQLITE_UTF16], [SQLITE_UTF16BE], or [SQLITE_UTF16LE] +** to specify the encoding of the text in the third parameter. If +** the sixth argument to sqlite3_bind_text64() is not one of the +** allowed values shown above, or if the text encoding is different +** from the encoding specified by the sixth parameter, then the behavior +** is undefined. ** ** ^The sqlite3_bind_zeroblob() routine binds a BLOB of length N that ** is filled with zeroes. ^A zeroblob uses a fixed amount of memory ** (just an integer to hold its size) while it is being processed. ** Zeroblobs are intended to serve as placeholders for BLOBs whose @@ -2629,28 +3564,37 @@ ** ^Bindings are not cleared by the [sqlite3_reset()] routine. ** ^Unbound parameters are interpreted as NULL. ** ** ^The sqlite3_bind_* routines return [SQLITE_OK] on success or an ** [error code] if anything goes wrong. +** ^[SQLITE_TOOBIG] might be returned if the size of a string or BLOB +** exceeds limits imposed by [sqlite3_limit]([SQLITE_LIMIT_LENGTH]) or +** [SQLITE_MAX_LENGTH]. ** ^[SQLITE_RANGE] is returned if the parameter ** index is out of range. ^[SQLITE_NOMEM] is returned if malloc() fails. ** ** See also: [sqlite3_bind_parameter_count()], ** [sqlite3_bind_parameter_name()], and [sqlite3_bind_parameter_index()]. */ -SQLITE_API int sqlite3_bind_blob(sqlite3_stmt*, int, const void*, int n, void(*)(void*)); -SQLITE_API int sqlite3_bind_double(sqlite3_stmt*, int, double); -SQLITE_API int sqlite3_bind_int(sqlite3_stmt*, int, int); -SQLITE_API int sqlite3_bind_int64(sqlite3_stmt*, int, sqlite3_int64); -SQLITE_API int sqlite3_bind_null(sqlite3_stmt*, int); -SQLITE_API int sqlite3_bind_text(sqlite3_stmt*, int, const char*, int n, void(*)(void*)); -SQLITE_API int sqlite3_bind_text16(sqlite3_stmt*, int, const void*, int, void(*)(void*)); -SQLITE_API int sqlite3_bind_value(sqlite3_stmt*, int, const sqlite3_value*); -SQLITE_API int sqlite3_bind_zeroblob(sqlite3_stmt*, int, int n); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_blob(sqlite3_stmt*, int, const void*, int n, void(*)(void*)); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_blob64(sqlite3_stmt*, int, const void*, sqlite3_uint64, + void(*)(void*)); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_double(sqlite3_stmt*, int, double); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_int(sqlite3_stmt*, int, int); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_int64(sqlite3_stmt*, int, sqlite3_int64); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_null(sqlite3_stmt*, int); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_text(sqlite3_stmt*,int,const char*,int,void(*)(void*)); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_text16(sqlite3_stmt*, int, const void*, int, void(*)(void*)); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_text64(sqlite3_stmt*, int, const char*, sqlite3_uint64, + void(*)(void*), unsigned char encoding); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_value(sqlite3_stmt*, int, const sqlite3_value*); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_zeroblob(sqlite3_stmt*, int, int n); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_zeroblob64(sqlite3_stmt*, int, sqlite3_uint64); /* ** CAPI3REF: Number Of SQL Parameters +** METHOD: sqlite3_stmt ** ** ^This routine can be used to find the number of [SQL parameters] ** in a [prepared statement]. 
SQL parameters are tokens of the ** form "?", "?NNN", ":AAA", "$AAA", or "@AAA" that serve as ** placeholders for values that are [sqlite3_bind_blob | bound] @@ -2663,14 +3607,15 @@ ** ** See also: [sqlite3_bind_blob|sqlite3_bind()], ** [sqlite3_bind_parameter_name()], and ** [sqlite3_bind_parameter_index()]. */ -SQLITE_API int sqlite3_bind_parameter_count(sqlite3_stmt*); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_parameter_count(sqlite3_stmt*); /* ** CAPI3REF: Name Of A Host Parameter +** METHOD: sqlite3_stmt ** ** ^The sqlite3_bind_parameter_name(P,N) interface returns ** the name of the N-th [SQL parameter] in the [prepared statement] P. ** ^(SQL parameters of the form "?NNN" or ":AAA" or "@AAA" or "$AAA" ** have a name which is the string "?NNN" or ":AAA" or "@AAA" or "$AAA" @@ -2690,14 +3635,15 @@ ** ** See also: [sqlite3_bind_blob|sqlite3_bind()], ** [sqlite3_bind_parameter_count()], and ** [sqlite3_bind_parameter_index()]. */ -SQLITE_API const char *sqlite3_bind_parameter_name(sqlite3_stmt*, int); +SQLITE_API const char *SQLITE_STDCALL sqlite3_bind_parameter_name(sqlite3_stmt*, int); /* ** CAPI3REF: Index Of A Parameter With A Given Name +** METHOD: sqlite3_stmt ** ** ^Return the index of an SQL parameter given its name. ^The ** index value returned is suitable for use as the second ** parameter to [sqlite3_bind_blob|sqlite3_bind()]. ^A zero ** is returned if no matching parameter is found. ^The parameter @@ -2704,34 +3650,39 @@ ** name must be given in UTF-8 even if the original statement ** was prepared from UTF-16 text using [sqlite3_prepare16_v2()]. ** ** See also: [sqlite3_bind_blob|sqlite3_bind()], ** [sqlite3_bind_parameter_count()], and -** [sqlite3_bind_parameter_index()]. +** [sqlite3_bind_parameter_name()]. */ -SQLITE_API int sqlite3_bind_parameter_index(sqlite3_stmt*, const char *zName); +SQLITE_API int SQLITE_STDCALL sqlite3_bind_parameter_index(sqlite3_stmt*, const char *zName); /* ** CAPI3REF: Reset All Bindings On A Prepared Statement +** METHOD: sqlite3_stmt ** ** ^Contrary to the intuition of many, [sqlite3_reset()] does not reset ** the [sqlite3_bind_blob | bindings] on a [prepared statement]. ** ^Use this routine to reset all host parameters to NULL. */ -SQLITE_API int sqlite3_clear_bindings(sqlite3_stmt*); +SQLITE_API int SQLITE_STDCALL sqlite3_clear_bindings(sqlite3_stmt*); /* ** CAPI3REF: Number Of Columns In A Result Set +** METHOD: sqlite3_stmt ** ** ^Return the number of columns in the result set returned by the ** [prepared statement]. ^This routine returns 0 if pStmt is an SQL ** statement that does not return data (for example an [UPDATE]). +** +** See also: [sqlite3_data_count()] */ -SQLITE_API int sqlite3_column_count(sqlite3_stmt *pStmt); +SQLITE_API int SQLITE_STDCALL sqlite3_column_count(sqlite3_stmt *pStmt); /* ** CAPI3REF: Column Names In A Result Set +** METHOD: sqlite3_stmt ** ** ^These routines return the name assigned to a particular column ** in the result set of a [SELECT] statement. ^The sqlite3_column_name() ** interface returns a pointer to a zero-terminated UTF-8 string ** and sqlite3_column_name16() returns a pointer to a zero-terminated @@ -2738,11 +3689,13 @@ ** UTF-16 string. ^The first parameter is the [prepared statement] ** that implements the [SELECT] statement. ^The second parameter is the ** column number. ^The leftmost column is number 0. 
** ** ^The returned string pointer is valid until either the [prepared statement] -** is destroyed by [sqlite3_finalize()] or until the next call to +** is destroyed by [sqlite3_finalize()] or until the statement is automatically +** reprepared by the first call to [sqlite3_step()] for a particular run +** or until the next call to ** sqlite3_column_name() or sqlite3_column_name16() on the same column. ** ** ^If sqlite3_malloc() fails during the processing of either routine ** (for example during a conversion from UTF-8 to UTF-16) then a ** NULL pointer is returned. @@ -2750,25 +3703,28 @@ ** ^The name of a result column is the value of the "AS" clause for ** that column, if there is an AS clause. If there is no AS clause ** then the name of the column is unspecified and may change from ** one release of SQLite to the next. */ -SQLITE_API const char *sqlite3_column_name(sqlite3_stmt*, int N); -SQLITE_API const void *sqlite3_column_name16(sqlite3_stmt*, int N); +SQLITE_API const char *SQLITE_STDCALL sqlite3_column_name(sqlite3_stmt*, int N); +SQLITE_API const void *SQLITE_STDCALL sqlite3_column_name16(sqlite3_stmt*, int N); /* ** CAPI3REF: Source Of Data In A Query Result +** METHOD: sqlite3_stmt ** ** ^These routines provide a means to determine the database, table, and ** table column that is the origin of a particular result column in ** [SELECT] statement. ** ^The name of the database or table or column can be returned as ** either a UTF-8 or UTF-16 string. ^The _database_ routines return ** the database name, the _table_ routines return the table name, and ** the origin_ routines return the column name. ** ^The returned string is valid until the [prepared statement] is destroyed -** using [sqlite3_finalize()] or until the same information is requested +** using [sqlite3_finalize()] or until the statement is automatically +** reprepared by the first call to [sqlite3_step()] for a particular run +** or until the same information is requested ** again in a different encoding. ** ** ^The names returned are the original un-aliased names of the ** database, table, and column. ** @@ -2796,19 +3752,20 @@ ** If two or more threads call one or more ** [sqlite3_column_database_name | column metadata interfaces] ** for the same [prepared statement] and result column ** at the same time then the results are undefined. */ -SQLITE_API const char *sqlite3_column_database_name(sqlite3_stmt*,int); -SQLITE_API const void *sqlite3_column_database_name16(sqlite3_stmt*,int); -SQLITE_API const char *sqlite3_column_table_name(sqlite3_stmt*,int); -SQLITE_API const void *sqlite3_column_table_name16(sqlite3_stmt*,int); -SQLITE_API const char *sqlite3_column_origin_name(sqlite3_stmt*,int); -SQLITE_API const void *sqlite3_column_origin_name16(sqlite3_stmt*,int); +SQLITE_API const char *SQLITE_STDCALL sqlite3_column_database_name(sqlite3_stmt*,int); +SQLITE_API const void *SQLITE_STDCALL sqlite3_column_database_name16(sqlite3_stmt*,int); +SQLITE_API const char *SQLITE_STDCALL sqlite3_column_table_name(sqlite3_stmt*,int); +SQLITE_API const void *SQLITE_STDCALL sqlite3_column_table_name16(sqlite3_stmt*,int); +SQLITE_API const char *SQLITE_STDCALL sqlite3_column_origin_name(sqlite3_stmt*,int); +SQLITE_API const void *SQLITE_STDCALL sqlite3_column_origin_name16(sqlite3_stmt*,int); /* ** CAPI3REF: Declared Datatype Of A Query Result +** METHOD: sqlite3_stmt ** ** ^(The first parameter is a [prepared statement]. 
** If this statement is a [SELECT] statement and the Nth column of the ** returned result set of that [SELECT] is a table column (not an ** expression or subquery) then the declared type of the table @@ -2832,15 +3789,16 @@ ** data stored in that column is of the declared type. SQLite is ** strongly typed, but the typing is dynamic not static. ^Type ** is associated with individual values, not with the containers ** used to hold those values. */ -SQLITE_API const char *sqlite3_column_decltype(sqlite3_stmt*,int); -SQLITE_API const void *sqlite3_column_decltype16(sqlite3_stmt*,int); +SQLITE_API const char *SQLITE_STDCALL sqlite3_column_decltype(sqlite3_stmt*,int); +SQLITE_API const void *SQLITE_STDCALL sqlite3_column_decltype16(sqlite3_stmt*,int); /* ** CAPI3REF: Evaluate An SQL Statement +** METHOD: sqlite3_stmt ** ** After a [prepared statement] has been prepared using either ** [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()] or one of the legacy ** interfaces [sqlite3_prepare()] or [sqlite3_prepare16()], this function ** must be called one or more times to evaluate the statement. @@ -2858,11 +3816,11 @@ ** [extended result codes] might be returned as well. ** ** ^[SQLITE_BUSY] means that the database engine was unable to acquire the ** database locks it needs to do its job. ^If the statement is a [COMMIT] ** or occurs outside of an explicit transaction, then you can retry the -** statement. If the statement is not a [COMMIT] and occurs within a +** statement. If the statement is not a [COMMIT] and occurs within an ** explicit transaction then you should rollback the transaction before ** continuing. ** ** ^[SQLITE_DONE] means that the statement has finished executing ** successfully. sqlite3_step() should not be called again on this virtual @@ -2887,10 +3845,22 @@ ** Perhaps it was called on a [prepared statement] that has ** already been [sqlite3_finalize | finalized] or on one that had ** previously returned [SQLITE_ERROR] or [SQLITE_DONE]. Or it could ** be the case that the same database connection is being used by two or ** more threads at the same moment in time. +** +** For all versions of SQLite up to and including 3.6.23.1, a call to +** [sqlite3_reset()] was required after sqlite3_step() returned anything +** other than [SQLITE_ROW] before any subsequent invocation of +** sqlite3_step(). Failure to reset the prepared statement using +** [sqlite3_reset()] would result in an [SQLITE_MISUSE] return from +** sqlite3_step(). But after version 3.6.23.1, sqlite3_step() began +** calling [sqlite3_reset()] automatically in this circumstance rather +** than returning [SQLITE_MISUSE]. This is not considered a compatibility +** break because any application that ever receives an SQLITE_MISUSE error +** is broken by definition. The [SQLITE_OMIT_AUTORESET] compile-time option +** can be used to restore the legacy behavior. ** ** <b>Goofy Interface Alert:</b> In the legacy interface, the sqlite3_step() ** API always returns a generic error code, [SQLITE_ERROR], following any ** error other than [SQLITE_BUSY] and [SQLITE_MISUSE]. You must call ** [sqlite3_reset()] or [sqlite3_finalize()] in order to find one of the @@ -2900,19 +3870,32 @@ ** using either [sqlite3_prepare_v2()] or [sqlite3_prepare16_v2()] instead ** of the legacy [sqlite3_prepare()] and [sqlite3_prepare16()] interfaces, ** then the more specific [error codes] are returned directly ** by sqlite3_step(). The use of the "v2" interface is recommended. 
*/ -SQLITE_API int sqlite3_step(sqlite3_stmt*); +SQLITE_API int SQLITE_STDCALL sqlite3_step(sqlite3_stmt*); /* ** CAPI3REF: Number of columns in a result set +** METHOD: sqlite3_stmt ** -** ^The sqlite3_data_count(P) the number of columns in the -** of the result set of [prepared statement] P. +** ^The sqlite3_data_count(P) interface returns the number of columns in the +** current row of the result set of [prepared statement] P. +** ^If prepared statement P does not have results ready to return +** (via calls to the [sqlite3_column_int | sqlite3_column_*()] of +** interfaces) then sqlite3_data_count(P) returns 0. +** ^The sqlite3_data_count(P) routine also returns 0 if P is a NULL pointer. +** ^The sqlite3_data_count(P) routine returns 0 if the previous call to +** [sqlite3_step](P) returned [SQLITE_DONE]. ^The sqlite3_data_count(P) +** will return non-zero if previous call to [sqlite3_step](P) returned +** [SQLITE_ROW], except in the case of the [PRAGMA incremental_vacuum] +** where it always returns zero since each step of that multi-step +** pragma returns 0 columns of data. +** +** See also: [sqlite3_column_count()] */ -SQLITE_API int sqlite3_data_count(sqlite3_stmt *pStmt); +SQLITE_API int SQLITE_STDCALL sqlite3_data_count(sqlite3_stmt *pStmt); /* ** CAPI3REF: Fundamental Datatypes ** KEYWORDS: SQLITE_TEXT ** @@ -2945,12 +3928,11 @@ #define SQLITE3_TEXT 3 /* ** CAPI3REF: Result Values From A Query ** KEYWORDS: {column access functions} -** -** These routines form the "result set" interface. +** METHOD: sqlite3_stmt ** ** ^These routines return information about a single column of the current ** result row of a query. ^In every case the first argument is a pointer ** to the [prepared statement] that is being evaluated (the [sqlite3_stmt*] ** that was returned from [sqlite3_prepare_v2()] or one of its variants) @@ -2986,30 +3968,39 @@ ** ^If the result is a UTF-16 string, then sqlite3_column_bytes() converts ** the string to UTF-8 and then returns the number of bytes. ** ^If the result is a numeric value then sqlite3_column_bytes() uses ** [sqlite3_snprintf()] to convert that value to a UTF-8 string and returns ** the number of bytes in that string. -** ^The value returned does not include the zero terminator at the end -** of the string. ^For clarity: the value returned is the number of +** ^If the result is NULL, then sqlite3_column_bytes() returns zero. +** +** ^If the result is a BLOB or UTF-16 string then the sqlite3_column_bytes16() +** routine returns the number of bytes in that BLOB or string. +** ^If the result is a UTF-8 string, then sqlite3_column_bytes16() converts +** the string to UTF-16 and then returns the number of bytes. +** ^If the result is a numeric value then sqlite3_column_bytes16() uses +** [sqlite3_snprintf()] to convert that value to a UTF-16 string and returns +** the number of bytes in that string. +** ^If the result is NULL, then sqlite3_column_bytes16() returns zero. +** +** ^The values returned by [sqlite3_column_bytes()] and +** [sqlite3_column_bytes16()] do not include the zero terminators at the end +** of the string. ^For clarity: the values returned by +** [sqlite3_column_bytes()] and [sqlite3_column_bytes16()] are the number of ** bytes in the string, not the number of characters. ** ** ^Strings returned by sqlite3_column_text() and sqlite3_column_text16(), -** even empty strings, are always zero terminated. ^The return -** value from sqlite3_column_blob() for a zero-length BLOB is an arbitrary -** pointer, possibly even a NULL pointer. 
-** -** ^The sqlite3_column_bytes16() routine is similar to sqlite3_column_bytes() -** but leaves the result in UTF-16 in native byte order instead of UTF-8. -** ^The zero terminator is not included in this count. -** -** ^The object returned by [sqlite3_column_value()] is an -** [unprotected sqlite3_value] object. An unprotected sqlite3_value object -** may only be used with [sqlite3_bind_value()] and [sqlite3_result_value()]. +** even empty strings, are always zero-terminated. ^The return +** value from sqlite3_column_blob() for a zero-length BLOB is a NULL pointer. +** +** <b>Warning:</b> ^The object returned by [sqlite3_column_value()] is an +** [unprotected sqlite3_value] object. In a multithreaded environment, +** an unprotected sqlite3_value object may only be used safely with +** [sqlite3_bind_value()] and [sqlite3_result_value()]. ** If the [unprotected sqlite3_value] object returned by ** [sqlite3_column_value()] is used in any other way, including calls ** to routines like [sqlite3_value_int()], [sqlite3_value_text()], -** or [sqlite3_value_bytes()], then the behavior is undefined. +** or [sqlite3_value_bytes()], the behavior is not threadsafe. ** ** These routines attempt to convert the value where appropriate. ^For ** example, if the internal representation is FLOAT and a text result ** is requested, [sqlite3_snprintf()] is used internally to perform the ** conversion automatically. ^(The following table details the conversions @@ -3019,37 +4010,31 @@ ** <table border="1"> ** <tr><th> Internal<br>Type <th> Requested<br>Type <th> Conversion ** ** <tr><td> NULL <td> INTEGER <td> Result is 0 ** <tr><td> NULL <td> FLOAT <td> Result is 0.0 -** <tr><td> NULL <td> TEXT <td> Result is NULL pointer -** <tr><td> NULL <td> BLOB <td> Result is NULL pointer +** <tr><td> NULL <td> TEXT <td> Result is a NULL pointer +** <tr><td> NULL <td> BLOB <td> Result is a NULL pointer ** <tr><td> INTEGER <td> FLOAT <td> Convert from integer to float ** <tr><td> INTEGER <td> TEXT <td> ASCII rendering of the integer ** <tr><td> INTEGER <td> BLOB <td> Same as INTEGER->TEXT -** <tr><td> FLOAT <td> INTEGER <td> Convert from float to integer +** <tr><td> FLOAT <td> INTEGER <td> [CAST] to INTEGER ** <tr><td> FLOAT <td> TEXT <td> ASCII rendering of the float -** <tr><td> FLOAT <td> BLOB <td> Same as FLOAT->TEXT -** <tr><td> TEXT <td> INTEGER <td> Use atoi() -** <tr><td> TEXT <td> FLOAT <td> Use atof() +** <tr><td> FLOAT <td> BLOB <td> [CAST] to BLOB +** <tr><td> TEXT <td> INTEGER <td> [CAST] to INTEGER +** <tr><td> TEXT <td> FLOAT <td> [CAST] to REAL ** <tr><td> TEXT <td> BLOB <td> No change -** <tr><td> BLOB <td> INTEGER <td> Convert to TEXT then use atoi() -** <tr><td> BLOB <td> FLOAT <td> Convert to TEXT then use atof() +** <tr><td> BLOB <td> INTEGER <td> [CAST] to INTEGER +** <tr><td> BLOB <td> FLOAT <td> [CAST] to REAL ** <tr><td> BLOB <td> TEXT <td> Add a zero terminator if needed ** </table> ** </blockquote>)^ ** -** The table above makes reference to standard C library functions atoi() -** and atof(). SQLite does not really use these functions. It has its -** own equivalent internal routines. The atoi() and atof() names are -** used in the table for brevity and because they are familiar to most -** C programmers. -** -** ^Note that when type conversions occur, pointers returned by prior +** Note that when type conversions occur, pointers returned by prior ** calls to sqlite3_column_blob(), sqlite3_column_text(), and/or ** sqlite3_column_text16() may be invalidated. 
-** ^(Type conversions and pointer invalidations might occur +** Type conversions and pointer invalidations might occur ** in the following cases: ** ** <ul> ** <li> The initial content is a BLOB and sqlite3_column_text() or ** sqlite3_column_text16() is called. A zero-terminator might @@ -3058,26 +4043,26 @@ ** sqlite3_column_text16() is called. The content must be converted ** to UTF-16.</li> ** <li> The initial content is UTF-16 text and sqlite3_column_bytes() or ** sqlite3_column_text() is called. The content must be converted ** to UTF-8.</li> -** </ul>)^ +** </ul> ** ** ^Conversions between UTF-16be and UTF-16le are always done in place and do ** not invalidate a prior pointer, though of course the content of the buffer -** that the prior pointer points to will have been modified. Other kinds +** that the prior pointer references will have been modified. Other kinds ** of conversion are done in place when it is possible, but sometimes they ** are not possible and in those cases prior pointers are invalidated. ** -** ^(The safest and easiest to remember policy is to invoke these routines +** The safest policy is to invoke these routines ** in one of the following ways: ** ** <ul> ** <li>sqlite3_column_text() followed by sqlite3_column_bytes()</li> ** <li>sqlite3_column_blob() followed by sqlite3_column_bytes()</li> ** <li>sqlite3_column_text16() followed by sqlite3_column_bytes16()</li> -** </ul>)^ +** </ul> ** ** In other words, you should call sqlite3_column_text(), ** sqlite3_column_blob(), or sqlite3_column_text16() first to force the result ** into the desired format, then invoke sqlite3_column_bytes() or ** sqlite3_column_bytes16() to find the size of the result. Do not mix calls @@ -3086,51 +4071,62 @@ ** with calls to sqlite3_column_bytes(). ** ** ^The pointers returned are valid until a type conversion occurs as ** described above, or until [sqlite3_step()] or [sqlite3_reset()] or ** [sqlite3_finalize()] is called. ^The memory space used to hold strings -** and BLOBs is freed automatically. Do <b>not</b> pass the pointers returned -** [sqlite3_column_blob()], [sqlite3_column_text()], etc. into +** and BLOBs is freed automatically. Do <em>not</em> pass the pointers returned +** from [sqlite3_column_blob()], [sqlite3_column_text()], etc. into ** [sqlite3_free()]. ** ** ^(If a memory allocation error occurs during the evaluation of any ** of these routines, a default value is returned. The default value ** is either the integer 0, the floating point number 0.0, or a NULL ** pointer. 
Subsequent calls to [sqlite3_errcode()] will return ** [SQLITE_NOMEM].)^ */ -SQLITE_API const void *sqlite3_column_blob(sqlite3_stmt*, int iCol); -SQLITE_API int sqlite3_column_bytes(sqlite3_stmt*, int iCol); -SQLITE_API int sqlite3_column_bytes16(sqlite3_stmt*, int iCol); -SQLITE_API double sqlite3_column_double(sqlite3_stmt*, int iCol); -SQLITE_API int sqlite3_column_int(sqlite3_stmt*, int iCol); -SQLITE_API sqlite3_int64 sqlite3_column_int64(sqlite3_stmt*, int iCol); -SQLITE_API const unsigned char *sqlite3_column_text(sqlite3_stmt*, int iCol); -SQLITE_API const void *sqlite3_column_text16(sqlite3_stmt*, int iCol); -SQLITE_API int sqlite3_column_type(sqlite3_stmt*, int iCol); -SQLITE_API sqlite3_value *sqlite3_column_value(sqlite3_stmt*, int iCol); +SQLITE_API const void *SQLITE_STDCALL sqlite3_column_blob(sqlite3_stmt*, int iCol); +SQLITE_API int SQLITE_STDCALL sqlite3_column_bytes(sqlite3_stmt*, int iCol); +SQLITE_API int SQLITE_STDCALL sqlite3_column_bytes16(sqlite3_stmt*, int iCol); +SQLITE_API double SQLITE_STDCALL sqlite3_column_double(sqlite3_stmt*, int iCol); +SQLITE_API int SQLITE_STDCALL sqlite3_column_int(sqlite3_stmt*, int iCol); +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_column_int64(sqlite3_stmt*, int iCol); +SQLITE_API const unsigned char *SQLITE_STDCALL sqlite3_column_text(sqlite3_stmt*, int iCol); +SQLITE_API const void *SQLITE_STDCALL sqlite3_column_text16(sqlite3_stmt*, int iCol); +SQLITE_API int SQLITE_STDCALL sqlite3_column_type(sqlite3_stmt*, int iCol); +SQLITE_API sqlite3_value *SQLITE_STDCALL sqlite3_column_value(sqlite3_stmt*, int iCol); /* ** CAPI3REF: Destroy A Prepared Statement Object +** DESTRUCTOR: sqlite3_stmt ** ** ^The sqlite3_finalize() function is called to delete a [prepared statement]. -** ^If the statement was executed successfully or not executed at all, then -** SQLITE_OK is returned. ^If execution of the statement failed then an -** [error code] or [extended error code] is returned. +** ^If the most recent evaluation of the statement encountered no errors +** or if the statement is never been evaluated, then sqlite3_finalize() returns +** SQLITE_OK. ^If the most recent evaluation of statement S failed, then +** sqlite3_finalize(S) returns the appropriate [error code] or +** [extended error code]. ** -** ^This routine can be called at any point during the execution of the -** [prepared statement]. ^If the virtual machine has not -** completed execution when this routine is called, that is like -** encountering an error or an [sqlite3_interrupt | interrupt]. -** ^Incomplete updates may be rolled back and transactions canceled, -** depending on the circumstances, and the -** [error code] returned will be [SQLITE_ABORT]. +** ^The sqlite3_finalize(S) routine can be called at any point during +** the life cycle of [prepared statement] S: +** before statement S is ever evaluated, after +** one or more calls to [sqlite3_reset()], or after any call +** to [sqlite3_step()] regardless of whether or not the statement has +** completed execution. +** +** ^Invoking sqlite3_finalize() on a NULL pointer is a harmless no-op. +** +** The application must finalize every [prepared statement] in order to avoid +** resource leaks. It is a grievous error for the application to try to use +** a prepared statement after it has been finalized. Any use of a prepared +** statement after it has been finalized can result in undefined and +** undesirable behavior such as segfaults and heap corruption. 
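To make the finalize/reset life cycle concrete, here is a minimal sketch that evaluates one statement twice and then finalizes it exactly once; the table, column, and bound values are assumptions for the example.

  #include <sqlite3.h>

  /* Hypothetical table "log(msg)"; illustrative only. */
  static int insert_twice(sqlite3 *db){
    sqlite3_stmt *pStmt = 0;
    int rc = sqlite3_prepare_v2(db, "INSERT INTO log(msg) VALUES(?1)", -1, &pStmt, 0);
    if( rc!=SQLITE_OK ) return rc;
    sqlite3_bind_text(pStmt, 1, "first", -1, SQLITE_STATIC);
    rc = sqlite3_step(pStmt);                /* expect SQLITE_DONE */
    if( rc==SQLITE_DONE ){
      sqlite3_reset(pStmt);                  /* bindings are retained */
      sqlite3_bind_text(pStmt, 1, "second", -1, SQLITE_STATIC);
      rc = sqlite3_step(pStmt);
    }
    /* finalize exactly once; it reports any error from the last evaluation */
    int rc2 = sqlite3_finalize(pStmt);
    return rc==SQLITE_DONE ? rc2 : rc;
  }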
*/ -SQLITE_API int sqlite3_finalize(sqlite3_stmt *pStmt); +SQLITE_API int SQLITE_STDCALL sqlite3_finalize(sqlite3_stmt *pStmt); /* ** CAPI3REF: Reset A Prepared Statement Object +** METHOD: sqlite3_stmt ** ** The sqlite3_reset() function is called to reset a [prepared statement] ** object back to its initial state, ready to be re-executed. ** ^Any SQL statement variables that had values bound to them using ** the [sqlite3_bind_blob | sqlite3_bind_*() API] retain their values. @@ -3149,66 +4145,89 @@ ** [sqlite3_reset(S)] returns an appropriate [error code]. ** ** ^The [sqlite3_reset(S)] interface does not change the values ** of any [sqlite3_bind_blob|bindings] on the [prepared statement] S. */ -SQLITE_API int sqlite3_reset(sqlite3_stmt *pStmt); +SQLITE_API int SQLITE_STDCALL sqlite3_reset(sqlite3_stmt *pStmt); /* ** CAPI3REF: Create Or Redefine SQL Functions ** KEYWORDS: {function creation routines} ** KEYWORDS: {application-defined SQL function} ** KEYWORDS: {application-defined SQL functions} +** METHOD: sqlite3 ** -** ^These two functions (collectively known as "function creation routines") +** ^These functions (collectively known as "function creation routines") ** are used to add SQL functions or aggregates or to redefine the behavior -** of existing SQL functions or aggregates. The only difference between the -** two is that the second parameter, the name of the (scalar) function or -** aggregate, is encoded in UTF-8 for sqlite3_create_function() and UTF-16 -** for sqlite3_create_function16(). +** of existing SQL functions or aggregates. The only differences between +** these routines are the text encoding expected for +** the second parameter (the name of the function being created) +** and the presence or absence of a destructor callback for +** the application data pointer. ** ** ^The first parameter is the [database connection] to which the SQL ** function is to be added. ^If an application uses more than one database ** connection then application-defined SQL functions must be added ** to each database connection separately. ** -** The second parameter is the name of the SQL function to be created or -** redefined. ^The length of the name is limited to 255 bytes, exclusive of -** the zero-terminator. Note that the name length limit is in bytes, not -** characters. ^Any attempt to create a function with a longer name -** will result in [SQLITE_ERROR] being returned. +** ^The second parameter is the name of the SQL function to be created or +** redefined. ^The length of the name is limited to 255 bytes in a UTF-8 +** representation, exclusive of the zero-terminator. ^Note that the name +** length limit is in UTF-8 bytes, not characters nor UTF-16 bytes. +** ^Any attempt to create a function with a longer name +** will result in [SQLITE_MISUSE] being returned. ** ** ^The third parameter (nArg) ** is the number of arguments that the SQL function or ** aggregate takes. ^If this parameter is -1, then the SQL function or ** aggregate may take any number of arguments between 0 and the limit ** set by [sqlite3_limit]([SQLITE_LIMIT_FUNCTION_ARG]). If the third ** parameter is less than -1 or greater than 127 then the behavior is ** undefined. ** -** The fourth parameter, eTextRep, specifies what +** ^The fourth parameter, eTextRep, specifies what ** [SQLITE_UTF8 | text encoding] this SQL function prefers for -** its parameters. Any SQL function implementation should be able to work -** work with UTF-8, UTF-16le, or UTF-16be. 
But some implementations may be -** more efficient with one encoding than another. ^An application may -** invoke sqlite3_create_function() or sqlite3_create_function16() multiple -** times with the same function but with different values of eTextRep. +** its parameters. The application should set this parameter to +** [SQLITE_UTF16LE] if the function implementation invokes +** [sqlite3_value_text16le()] on an input, or [SQLITE_UTF16BE] if the +** implementation invokes [sqlite3_value_text16be()] on an input, or +** [SQLITE_UTF16] if [sqlite3_value_text16()] is used, or [SQLITE_UTF8] +** otherwise. ^The same SQL function may be registered multiple times using +** different preferred text encodings, with different implementations for +** each encoding. ** ^When multiple implementations of the same function are available, SQLite ** will pick the one that involves the least amount of data conversion. -** If there is only a single implementation which does not care what text -** encoding is used, then the fourth argument should be [SQLITE_ANY]. +** +** ^The fourth parameter may optionally be ORed with [SQLITE_DETERMINISTIC] +** to signal that the function will always return the same result given +** the same inputs within a single SQL statement. Most SQL functions are +** deterministic. The built-in [random()] SQL function is an example of a +** function that is not deterministic. The SQLite query planner is able to +** perform additional optimizations on deterministic functions, so use +** of the [SQLITE_DETERMINISTIC] flag is recommended where possible. ** ** ^(The fifth parameter is an arbitrary pointer. The implementation of the ** function can gain access to this pointer using [sqlite3_user_data()].)^ ** -** The seventh, eighth and ninth parameters, xFunc, xStep and xFinal, are +** ^The sixth, seventh and eighth parameters, xFunc, xStep and xFinal, are ** pointers to C-language functions that implement the SQL function or ** aggregate. ^A scalar SQL function requires an implementation of the xFunc -** callback only; NULL pointers should be passed as the xStep and xFinal +** callback only; NULL pointers must be passed as the xStep and xFinal ** parameters. ^An aggregate SQL function requires an implementation of xStep -** and xFinal and NULL should be passed for xFunc. ^To delete an existing -** SQL function or aggregate, pass NULL for all three function callbacks. +** and xFinal and NULL pointer must be passed for xFunc. ^To delete an existing +** SQL function or aggregate, pass NULL pointers for all three function +** callbacks. +** +** ^(If the ninth parameter to sqlite3_create_function_v2() is not NULL, +** then it is destructor for the application data pointer. +** The destructor is invoked when the function is deleted, either by being +** overloaded or when the database connection closes.)^ +** ^The destructor is also invoked if the call to +** sqlite3_create_function_v2() fails. +** ^When the destructor callback of the tenth parameter is invoked, it +** is passed a single argument which is a copy of the application data +** pointer which was the fifth parameter to sqlite3_create_function_v2(). ** ** ^It is permitted to register multiple implementations of the same ** functions with the same name but with either differing numbers of ** arguments or differing preferred text encodings. 
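The sketch below pins down the parameter list of sqlite3_create_function_v2() with a one-argument deterministic scalar function; the function name half(), its behavior, and the application-data pointer are made up for the example.

  #include <sqlite3.h>
  #include <stdlib.h>

  /* Hypothetical scalar function: half(X) returns X/2.0, or NULL for NULL. */
  static void halfFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    (void)argc;
    if( sqlite3_value_type(argv[0])==SQLITE_NULL ){
      sqlite3_result_null(ctx);
    }else{
      sqlite3_result_double(ctx, sqlite3_value_double(argv[0])/2.0);
    }
  }

  /* Invoked when the function is deleted, when the connection closes,
  ** or if registration itself fails. */
  static void halfDestroy(void *pApp){ free(pApp); }

  static int register_half(sqlite3 *db){
    void *pApp = malloc(8);                  /* stand-in application data */
    return sqlite3_create_function_v2(db, "half", 1,
               SQLITE_UTF8 | SQLITE_DETERMINISTIC,
               pApp, halfFunc, 0, 0, halfDestroy);
  }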
^SQLite will use ** the implementation that most closely matches the way in which the @@ -3220,95 +4239,113 @@ ** ^A function where the encoding difference is between UTF16le and UTF16be ** is a closer match than a function where the encoding difference is ** between UTF8 and UTF16. ** ** ^Built-in functions may be overloaded by new application-defined functions. -** ^The first application-defined function with a given name overrides all -** built-in functions in the same [database connection] with the same name. -** ^Subsequent application-defined functions of the same name only override -** prior application-defined functions that are an exact match for the -** number of parameters and preferred encoding. ** ** ^An application-defined function is permitted to call other ** SQLite interfaces. However, such calls must not ** close the database connection nor finalize or reset the prepared ** statement in which the function is running. */ -SQLITE_API int sqlite3_create_function( +SQLITE_API int SQLITE_STDCALL sqlite3_create_function( sqlite3 *db, const char *zFunctionName, int nArg, int eTextRep, void *pApp, void (*xFunc)(sqlite3_context*,int,sqlite3_value**), void (*xStep)(sqlite3_context*,int,sqlite3_value**), void (*xFinal)(sqlite3_context*) ); -SQLITE_API int sqlite3_create_function16( +SQLITE_API int SQLITE_STDCALL sqlite3_create_function16( sqlite3 *db, const void *zFunctionName, int nArg, int eTextRep, void *pApp, void (*xFunc)(sqlite3_context*,int,sqlite3_value**), void (*xStep)(sqlite3_context*,int,sqlite3_value**), void (*xFinal)(sqlite3_context*) ); +SQLITE_API int SQLITE_STDCALL sqlite3_create_function_v2( + sqlite3 *db, + const char *zFunctionName, + int nArg, + int eTextRep, + void *pApp, + void (*xFunc)(sqlite3_context*,int,sqlite3_value**), + void (*xStep)(sqlite3_context*,int,sqlite3_value**), + void (*xFinal)(sqlite3_context*), + void(*xDestroy)(void*) +); /* ** CAPI3REF: Text Encodings ** ** These constant define integer codes that represent the various ** text encodings supported by SQLite. */ -#define SQLITE_UTF8 1 -#define SQLITE_UTF16LE 2 -#define SQLITE_UTF16BE 3 +#define SQLITE_UTF8 1 /* IMP: R-37514-35566 */ +#define SQLITE_UTF16LE 2 /* IMP: R-03371-37637 */ +#define SQLITE_UTF16BE 3 /* IMP: R-51971-34154 */ #define SQLITE_UTF16 4 /* Use native byte order */ -#define SQLITE_ANY 5 /* sqlite3_create_function only */ +#define SQLITE_ANY 5 /* Deprecated */ #define SQLITE_UTF16_ALIGNED 8 /* sqlite3_create_collation only */ +/* +** CAPI3REF: Function Flags +** +** These constants may be ORed together with the +** [SQLITE_UTF8 | preferred text encoding] as the fourth argument +** to [sqlite3_create_function()], [sqlite3_create_function16()], or +** [sqlite3_create_function_v2()]. +*/ +#define SQLITE_DETERMINISTIC 0x800 + /* ** CAPI3REF: Deprecated Functions ** DEPRECATED ** ** These functions are [deprecated]. In order to maintain ** backwards compatibility with older code, these functions continue ** to be supported. However, new applications should avoid -** the use of these functions. To help encourage people to avoid -** using these functions, we are not going to tell you what they do. +** the use of these functions. To encourage programmers to avoid +** these functions, we will not explain what they do. 
*/ #ifndef SQLITE_OMIT_DEPRECATED -SQLITE_API SQLITE_DEPRECATED int sqlite3_aggregate_count(sqlite3_context*); -SQLITE_API SQLITE_DEPRECATED int sqlite3_expired(sqlite3_stmt*); -SQLITE_API SQLITE_DEPRECATED int sqlite3_transfer_bindings(sqlite3_stmt*, sqlite3_stmt*); -SQLITE_API SQLITE_DEPRECATED int sqlite3_global_recover(void); -SQLITE_API SQLITE_DEPRECATED void sqlite3_thread_cleanup(void); -SQLITE_API SQLITE_DEPRECATED int sqlite3_memory_alarm(void(*)(void*,sqlite3_int64,int),void*,sqlite3_int64); +SQLITE_API SQLITE_DEPRECATED int SQLITE_STDCALL sqlite3_aggregate_count(sqlite3_context*); +SQLITE_API SQLITE_DEPRECATED int SQLITE_STDCALL sqlite3_expired(sqlite3_stmt*); +SQLITE_API SQLITE_DEPRECATED int SQLITE_STDCALL sqlite3_transfer_bindings(sqlite3_stmt*, sqlite3_stmt*); +SQLITE_API SQLITE_DEPRECATED int SQLITE_STDCALL sqlite3_global_recover(void); +SQLITE_API SQLITE_DEPRECATED void SQLITE_STDCALL sqlite3_thread_cleanup(void); +SQLITE_API SQLITE_DEPRECATED int SQLITE_STDCALL sqlite3_memory_alarm(void(*)(void*,sqlite3_int64,int), + void*,sqlite3_int64); #endif /* -** CAPI3REF: Obtaining SQL Function Parameter Values +** CAPI3REF: Obtaining SQL Values +** METHOD: sqlite3_value ** ** The C-language implementation of SQL functions and aggregates uses ** this set of interface routines to access the parameter values on -** the function or aggregate. +** the function or aggregate. ** ** The xFunc (for scalar functions) or xStep (for aggregates) parameters ** to [sqlite3_create_function()] and [sqlite3_create_function16()] ** define callbacks that implement the SQL functions and aggregates. -** The 4th parameter to these callbacks is an array of pointers to +** The 3rd parameter to these callbacks is an array of pointers to ** [protected sqlite3_value] objects. There is one [sqlite3_value] object for ** each parameter to the SQL function. These routines are used to ** extract values from the [sqlite3_value] objects. ** ** These routines work only with [protected sqlite3_value] objects. ** Any attempt to use these routines on an [unprotected sqlite3_value] ** object results in undefined behavior. ** ** ^These routines work just like the corresponding [column access functions] -** except that these routines take a single [protected sqlite3_value] object +** except that these routines take a single [protected sqlite3_value] object ** pointer instead of a [sqlite3_stmt*] pointer and an integer column number. ** ** ^The sqlite3_value_text16() interface extracts a UTF-16 string ** in the native byte-order of the host machine. ^The ** sqlite3_value_text16be() and sqlite3_value_text16le() interfaces @@ -3329,27 +4366,61 @@ ** or [sqlite3_value_text16()]. ** ** These routines must be called from the same thread as ** the SQL function that supplied the [sqlite3_value*] parameters. 
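As a small sketch of the value accessors inside a scalar implementation, the hypothetical function below returns the UTF-8 byte length of its argument, calling sqlite3_value_text() before sqlite3_value_bytes() in the same order recommended for the column accessors.

  #include <sqlite3.h>

  /* Hypothetical scalar: utf8len(X). */
  static void utf8lenFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    (void)argc;
    if( sqlite3_value_type(argv[0])==SQLITE_NULL ){
      sqlite3_result_null(ctx);
      return;
    }
    const unsigned char *z = sqlite3_value_text(argv[0]);  /* force UTF-8 */
    int n = sqlite3_value_bytes(argv[0]);
    if( z==0 ){
      sqlite3_result_error_nomem(ctx);       /* text() returns NULL on OOM */
    }else{
      sqlite3_result_int(ctx, n);
    }
  }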
*/ -SQLITE_API const void *sqlite3_value_blob(sqlite3_value*); -SQLITE_API int sqlite3_value_bytes(sqlite3_value*); -SQLITE_API int sqlite3_value_bytes16(sqlite3_value*); -SQLITE_API double sqlite3_value_double(sqlite3_value*); -SQLITE_API int sqlite3_value_int(sqlite3_value*); -SQLITE_API sqlite3_int64 sqlite3_value_int64(sqlite3_value*); -SQLITE_API const unsigned char *sqlite3_value_text(sqlite3_value*); -SQLITE_API const void *sqlite3_value_text16(sqlite3_value*); -SQLITE_API const void *sqlite3_value_text16le(sqlite3_value*); -SQLITE_API const void *sqlite3_value_text16be(sqlite3_value*); -SQLITE_API int sqlite3_value_type(sqlite3_value*); -SQLITE_API int sqlite3_value_numeric_type(sqlite3_value*); +SQLITE_API const void *SQLITE_STDCALL sqlite3_value_blob(sqlite3_value*); +SQLITE_API int SQLITE_STDCALL sqlite3_value_bytes(sqlite3_value*); +SQLITE_API int SQLITE_STDCALL sqlite3_value_bytes16(sqlite3_value*); +SQLITE_API double SQLITE_STDCALL sqlite3_value_double(sqlite3_value*); +SQLITE_API int SQLITE_STDCALL sqlite3_value_int(sqlite3_value*); +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_value_int64(sqlite3_value*); +SQLITE_API const unsigned char *SQLITE_STDCALL sqlite3_value_text(sqlite3_value*); +SQLITE_API const void *SQLITE_STDCALL sqlite3_value_text16(sqlite3_value*); +SQLITE_API const void *SQLITE_STDCALL sqlite3_value_text16le(sqlite3_value*); +SQLITE_API const void *SQLITE_STDCALL sqlite3_value_text16be(sqlite3_value*); +SQLITE_API int SQLITE_STDCALL sqlite3_value_type(sqlite3_value*); +SQLITE_API int SQLITE_STDCALL sqlite3_value_numeric_type(sqlite3_value*); + +/* +** CAPI3REF: Finding The Subtype Of SQL Values +** METHOD: sqlite3_value +** +** The sqlite3_value_subtype(V) function returns the subtype for +** an [application-defined SQL function] argument V. The subtype +** information can be used to pass a limited amount of context from +** one SQL function to another. Use the [sqlite3_result_subtype()] +** routine to set the subtype for the return value of an SQL function. +** +** SQLite makes no use of subtype itself. It merely passes the subtype +** from the result of one [application-defined SQL function] into the +** input of another. +*/ +SQLITE_API unsigned int SQLITE_STDCALL sqlite3_value_subtype(sqlite3_value*); + +/* +** CAPI3REF: Copy And Free SQL Values +** METHOD: sqlite3_value +** +** ^The sqlite3_value_dup(V) interface makes a copy of the [sqlite3_value] +** object D and returns a pointer to that copy. ^The [sqlite3_value] returned +** is a [protected sqlite3_value] object even if the input is not. +** ^The sqlite3_value_dup(V) interface returns NULL if V is NULL or if a +** memory allocation fails. +** +** ^The sqlite3_value_free(V) interface frees an [sqlite3_value] object +** previously obtained from [sqlite3_value_dup()]. ^If V is a NULL pointer +** then sqlite3_value_free(V) is a harmless no-op. +*/ +SQLITE_API sqlite3_value *SQLITE_STDCALL sqlite3_value_dup(const sqlite3_value*); +SQLITE_API void SQLITE_STDCALL sqlite3_value_free(sqlite3_value*); /* ** CAPI3REF: Obtain Aggregate Function Context +** METHOD: sqlite3_context ** -** Implementions of aggregate SQL functions use this +** Implementations of aggregate SQL functions use this ** routine to allocate memory for storing their state. 
** ** ^The first time the sqlite3_aggregate_context(C,N) routine is called ** for a particular aggregate function, SQLite ** allocates N of memory, zeroes out that memory, and returns a pointer @@ -3361,18 +4432,21 @@ ** an aggregate query, the xStep() callback of the aggregate function ** implementation is never called and xFinal() is called exactly once. ** In those cases, sqlite3_aggregate_context() might be called for the ** first time from within xFinal().)^ ** -** ^The sqlite3_aggregate_context(C,N) routine returns a NULL pointer if N is -** less than or equal to zero or if a memory allocate error occurs. +** ^The sqlite3_aggregate_context(C,N) routine returns a NULL pointer +** when first called if N is less than or equal to zero or if a memory +** allocate error occurs. ** ** ^(The amount of space allocated by sqlite3_aggregate_context(C,N) is ** determined by the N parameter on first successful call. Changing the ** value of N in subsequent call to sqlite3_aggregate_context() within ** the same aggregate function instance will not resize the memory -** allocation.)^ +** allocation.)^ Within the xFinal callback, it is customary to set +** N=0 in calls to sqlite3_aggregate_context(C,N) so that no +** pointless memory allocations occur. ** ** ^SQLite automatically frees the memory allocated by ** sqlite3_aggregate_context() when the aggregate query concludes. ** ** The first parameter must be a copy of the @@ -3381,14 +4455,15 @@ ** function. ** ** This routine must be called from the same thread in which ** the aggregate SQL function is running. */ -SQLITE_API void *sqlite3_aggregate_context(sqlite3_context*, int nBytes); +SQLITE_API void *SQLITE_STDCALL sqlite3_aggregate_context(sqlite3_context*, int nBytes); /* ** CAPI3REF: User Data For Functions +** METHOD: sqlite3_context ** ** ^The sqlite3_user_data() interface returns a copy of ** the pointer that was the pUserData parameter (the 5th parameter) ** of the [sqlite3_create_function()] ** and [sqlite3_create_function16()] routines that originally @@ -3395,67 +4470,77 @@ ** registered the application defined function. ** ** This routine must be called from the same thread in which ** the application-defined function is running. */ -SQLITE_API void *sqlite3_user_data(sqlite3_context*); +SQLITE_API void *SQLITE_STDCALL sqlite3_user_data(sqlite3_context*); /* ** CAPI3REF: Database Connection For Functions +** METHOD: sqlite3_context ** ** ^The sqlite3_context_db_handle() interface returns a copy of ** the pointer to the [database connection] (the 1st parameter) ** of the [sqlite3_create_function()] ** and [sqlite3_create_function16()] routines that originally ** registered the application defined function. */ -SQLITE_API sqlite3 *sqlite3_context_db_handle(sqlite3_context*); +SQLITE_API sqlite3 *SQLITE_STDCALL sqlite3_context_db_handle(sqlite3_context*); /* ** CAPI3REF: Function Auxiliary Data +** METHOD: sqlite3_context ** -** The following two functions may be used by scalar SQL functions to +** These functions may be used by (non-aggregate) SQL functions to ** associate metadata with argument values. If the same value is passed to ** multiple invocations of the same SQL function during query execution, under -** some circumstances the associated metadata may be preserved. This may -** be used, for example, to add a regular-expression matching scalar -** function. The compiled version of the regular expression is stored as -** metadata associated with the SQL value passed as the regular expression -** pattern. 
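A minimal sketch of the aggregate-state pattern described for sqlite3_aggregate_context(): the hypothetical aggregate count_nonnull(X) keeps a single counter, allocates it lazily in xStep, and asks for zero bytes in xFinal to avoid a pointless allocation.

  #include <sqlite3.h>

  typedef struct CountCtx { sqlite3_int64 n; } CountCtx;

  static void cntStep(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    (void)argc;
    CountCtx *p = sqlite3_aggregate_context(ctx, sizeof(*p)); /* zeroed on first call */
    if( p==0 ) return;                       /* allocation failed */
    if( sqlite3_value_type(argv[0])!=SQLITE_NULL ) p->n++;
  }

  static void cntFinal(sqlite3_context *ctx){
    CountCtx *p = sqlite3_aggregate_context(ctx, 0);  /* N==0: never allocates */
    sqlite3_result_int64(ctx, p ? p->n : 0);
  }

  static int register_count_nonnull(sqlite3 *db){
    return sqlite3_create_function_v2(db, "count_nonnull", 1, SQLITE_UTF8,
                                      0, 0, cntStep, cntFinal, 0);
  }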
The compiled regular expression can be reused on multiple -** invocations of the same function so that the original pattern string -** does not need to be recompiled on each invocation. +** some circumstances the associated metadata may be preserved. An example +** of where this might be useful is in a regular-expression matching +** function. The compiled version of the regular expression can be stored as +** metadata associated with the pattern string. +** Then as long as the pattern string remains the same, +** the compiled regular expression can be reused on multiple +** invocations of the same function. ** ** ^The sqlite3_get_auxdata() interface returns a pointer to the metadata ** associated by the sqlite3_set_auxdata() function with the Nth argument -** value to the application-defined function. ^If no metadata has been ever -** been set for the Nth argument of the function, or if the corresponding -** function parameter has changed since the meta-data was set, -** then sqlite3_get_auxdata() returns a NULL pointer. -** -** ^The sqlite3_set_auxdata() interface saves the metadata -** pointed to by its 3rd parameter as the metadata for the N-th -** argument of the application-defined function. Subsequent -** calls to sqlite3_get_auxdata() might return this data, if it has -** not been destroyed. -** ^If it is not NULL, SQLite will invoke the destructor -** function given by the 4th parameter to sqlite3_set_auxdata() on -** the metadata when the corresponding function parameter changes -** or when the SQL statement completes, whichever comes first. -** -** SQLite is free to call the destructor and drop metadata on any -** parameter of any function at any time. ^The only guarantee is that -** the destructor will be called before the metadata is dropped. +** value to the application-defined function. ^If there is no metadata +** associated with the function argument, this sqlite3_get_auxdata() interface +** returns a NULL pointer. +** +** ^The sqlite3_set_auxdata(C,N,P,X) interface saves P as metadata for the N-th +** argument of the application-defined function. ^Subsequent +** calls to sqlite3_get_auxdata(C,N) return P from the most recent +** sqlite3_set_auxdata(C,N,P,X) call if the metadata is still valid or +** NULL if the metadata has been discarded. +** ^After each call to sqlite3_set_auxdata(C,N,P,X) where X is not NULL, +** SQLite will invoke the destructor function X with parameter P exactly +** once, when the metadata is discarded. +** SQLite is free to discard the metadata at any time, including: <ul> +** <li> when the corresponding function parameter changes, or +** <li> when [sqlite3_reset()] or [sqlite3_finalize()] is called for the +** SQL statement, or +** <li> when sqlite3_set_auxdata() is invoked again on the same parameter, or +** <li> during the original sqlite3_set_auxdata() call when a memory +** allocation error occurs. </ul>)^ +** +** Note the last bullet in particular. The destructor X in +** sqlite3_set_auxdata(C,N,P,X) might be called immediately, before the +** sqlite3_set_auxdata() interface even returns. Hence sqlite3_set_auxdata() +** should be called near the end of the function implementation and the +** function implementation should not make any use of P after +** sqlite3_set_auxdata() has been called. ** ** ^(In practice, metadata is preserved between function calls for -** expressions that are constant at compile time. 
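Here is a sketch of the auxiliary-data caching pattern; a plain copy of the pattern argument stands in for a compiled regular expression, and sqlite3_set_auxdata() is deliberately called last, after the cached object has been used, since the destructor may run immediately.

  #include <sqlite3.h>

  /* Hypothetical matcher: match(X, PATTERN); the "compile" step is faked. */
  static void matchFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    int isNew = 0;
    char *zPat = sqlite3_get_auxdata(ctx, 1);
    (void)argc;
    if( zPat==0 ){
      const unsigned char *zIn = sqlite3_value_text(argv[1]);
      if( zIn==0 ){ sqlite3_result_error_nomem(ctx); return; }
      zPat = sqlite3_mprintf("%s", zIn);     /* stand-in for compiling */
      if( zPat==0 ){ sqlite3_result_error_nomem(ctx); return; }
      isNew = 1;
    }
    /* ... evaluate zPat against argv[0] here ... */
    sqlite3_result_int(ctx, zPat[0]!=0);
    /* hand ownership to SQLite last; do not touch zPat afterwards */
    if( isNew ) sqlite3_set_auxdata(ctx, 1, zPat, sqlite3_free);
  }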
This includes literal -** values and [parameters].)^ +** function parameters that are compile-time constants, including literal +** values and [parameters] and expressions composed from the same.)^ ** ** These routines must be called from the same thread in which ** the SQL function is running. */ -SQLITE_API void *sqlite3_get_auxdata(sqlite3_context*, int N); -SQLITE_API void sqlite3_set_auxdata(sqlite3_context*, int N, void*, void (*)(void*)); +SQLITE_API void *SQLITE_STDCALL sqlite3_get_auxdata(sqlite3_context*, int N); +SQLITE_API void SQLITE_STDCALL sqlite3_set_auxdata(sqlite3_context*, int N, void*, void (*)(void*)); /* ** CAPI3REF: Constants Defining Special Destructor Behavior ** @@ -3466,18 +4551,19 @@ ** SQLITE_TRANSIENT value means that the content will likely change in ** the near future and that SQLite should make its own private copy of ** the content before returning. ** ** The typedef is necessary to work around problems in certain -** C++ compilers. See ticket #2191. +** C++ compilers. */ typedef void (*sqlite3_destructor_type)(void*); #define SQLITE_STATIC ((sqlite3_destructor_type)0) #define SQLITE_TRANSIENT ((sqlite3_destructor_type)-1) /* ** CAPI3REF: Setting The Result Of An SQL Function +** METHOD: sqlite3_context ** ** These routines are used by the xFunc or xFinal callbacks that ** implement SQL functions and aggregates. See ** [sqlite3_create_function()] and [sqlite3_create_function16()] ** for additional information. @@ -3489,13 +4575,13 @@ ** ^The sqlite3_result_blob() interface sets the result from ** an application-defined function to be the BLOB whose content is pointed ** to by the second parameter and which is N bytes long where N is the ** third parameter. ** -** ^The sqlite3_result_zeroblob() interfaces set the result of -** the application-defined function to be a BLOB containing all zero -** bytes and N bytes in size, where N is the value of the 2nd parameter. +** ^The sqlite3_result_zeroblob(C,N) and sqlite3_result_zeroblob64(C,N) +** interfaces set the result of the application-defined function to be +** a BLOB containing all zero bytes and N bytes in size. ** ** ^The sqlite3_result_double() interface sets the result from ** an application-defined function to be a floating point value specified ** by its 2nd argument. ** @@ -3519,15 +4605,15 @@ ** ^The sqlite3_result_error_code() function changes the error code ** returned by SQLite as a result of an error in a function. ^By default, ** the error code is SQLITE_ERROR. ^A subsequent call to sqlite3_result_error() ** or sqlite3_result_error16() resets the error code to SQLITE_ERROR. ** -** ^The sqlite3_result_toobig() interface causes SQLite to throw an error -** indicating that a string or BLOB is too long to represent. +** ^The sqlite3_result_error_toobig() interface causes SQLite to throw an +** error indicating that a string or BLOB is too long to represent. ** -** ^The sqlite3_result_nomem() interface causes SQLite to throw an error -** indicating that a memory allocation failed. +** ^The sqlite3_result_error_nomem() interface causes SQLite to throw an +** error indicating that a memory allocation failed. ** ** ^The sqlite3_result_int() interface sets the return value ** of the application-defined function to be the 32-bit signed integer ** value given in the 2nd argument. 
** ^The sqlite3_result_int64() interface sets the return value @@ -3540,19 +4626,28 @@ ** ^The sqlite3_result_text(), sqlite3_result_text16(), ** sqlite3_result_text16le(), and sqlite3_result_text16be() interfaces ** set the return value of the application-defined function to be ** a text string which is represented as UTF-8, UTF-16 native byte order, ** UTF-16 little endian, or UTF-16 big endian, respectively. +** ^The sqlite3_result_text64() interface sets the return value of an +** application-defined function to be a text string in an encoding +** specified by the fifth (and last) parameter, which must be one +** of [SQLITE_UTF8], [SQLITE_UTF16], [SQLITE_UTF16BE], or [SQLITE_UTF16LE]. ** ^SQLite takes the text result from the application from ** the 2nd parameter of the sqlite3_result_text* interfaces. ** ^If the 3rd parameter to the sqlite3_result_text* interfaces ** is negative, then SQLite takes result text from the 2nd parameter ** through the first zero character. ** ^If the 3rd parameter to the sqlite3_result_text* interfaces ** is non-negative, then as many bytes (not characters) of the text ** pointed to by the 2nd parameter are taken as the application-defined -** function result. +** function result. If the 3rd parameter is non-negative, then it +** must be the byte offset into the string where the NUL terminator would +** appear if the string where NUL terminated. If any NUL characters occur +** in the string at a byte offset that is less than the value of the 3rd +** parameter, then the resulting string will contain embedded NULs and the +** result of expressions operating on strings with embedded NULs is undefined. ** ^If the 4th parameter to the sqlite3_result_text* interfaces ** or sqlite3_result_blob is a non-NULL pointer, then SQLite calls that ** function as the destructor on the text or BLOB result when it has ** finished using that result. ** ^If the 4th parameter to the sqlite3_result_text* interfaces or to @@ -3564,11 +4659,11 @@ ** or sqlite3_result_blob is the special constant SQLITE_TRANSIENT ** then SQLite makes a copy of the result into space obtained from ** from [sqlite3_malloc()] before it returns. ** ** ^The sqlite3_result_value() interface sets the result of -** the application-defined function to be a copy the +** the application-defined function to be a copy of the ** [unprotected sqlite3_value] object specified by the 2nd parameter. ^The ** sqlite3_result_value() interface makes a copy of the [sqlite3_value] ** so that the [sqlite3_value] specified in the parameter may change or ** be deallocated after sqlite3_result_value() returns without harm. ** ^A [protected sqlite3_value] object may always be used where an @@ -3577,98 +4672,153 @@ ** ** If these routines are called from within the different thread ** than the one containing the application-defined function that received ** the [sqlite3_context] pointer, the results are undefined. 
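To illustrate the destructor argument of the sqlite3_result_text() family, the hypothetical function below builds its result with sqlite3_mprintf() and passes sqlite3_free as the destructor; a stack buffer would instead use SQLITE_TRANSIENT so that SQLite makes its own copy.

  #include <sqlite3.h>

  /* Hypothetical scalar greet(X): returns 'hello, X'. */
  static void greetFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    (void)argc;
    if( sqlite3_value_type(argv[0])==SQLITE_NULL ){
      sqlite3_result_null(ctx);
      return;
    }
    const unsigned char *zName = sqlite3_value_text(argv[0]);
    if( zName==0 ){ sqlite3_result_error_nomem(ctx); return; }
    char *z = sqlite3_mprintf("hello, %s", zName);
    if( z==0 ){
      sqlite3_result_error_nomem(ctx);
    }else{
      sqlite3_result_text(ctx, z, -1, sqlite3_free);  /* SQLite frees z later */
    }
  }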
*/ -SQLITE_API void sqlite3_result_blob(sqlite3_context*, const void*, int, void(*)(void*)); -SQLITE_API void sqlite3_result_double(sqlite3_context*, double); -SQLITE_API void sqlite3_result_error(sqlite3_context*, const char*, int); -SQLITE_API void sqlite3_result_error16(sqlite3_context*, const void*, int); -SQLITE_API void sqlite3_result_error_toobig(sqlite3_context*); -SQLITE_API void sqlite3_result_error_nomem(sqlite3_context*); -SQLITE_API void sqlite3_result_error_code(sqlite3_context*, int); -SQLITE_API void sqlite3_result_int(sqlite3_context*, int); -SQLITE_API void sqlite3_result_int64(sqlite3_context*, sqlite3_int64); -SQLITE_API void sqlite3_result_null(sqlite3_context*); -SQLITE_API void sqlite3_result_text(sqlite3_context*, const char*, int, void(*)(void*)); -SQLITE_API void sqlite3_result_text16(sqlite3_context*, const void*, int, void(*)(void*)); -SQLITE_API void sqlite3_result_text16le(sqlite3_context*, const void*, int,void(*)(void*)); -SQLITE_API void sqlite3_result_text16be(sqlite3_context*, const void*, int,void(*)(void*)); -SQLITE_API void sqlite3_result_value(sqlite3_context*, sqlite3_value*); -SQLITE_API void sqlite3_result_zeroblob(sqlite3_context*, int n); +SQLITE_API void SQLITE_STDCALL sqlite3_result_blob(sqlite3_context*, const void*, int, void(*)(void*)); +SQLITE_API void SQLITE_STDCALL sqlite3_result_blob64(sqlite3_context*,const void*, + sqlite3_uint64,void(*)(void*)); +SQLITE_API void SQLITE_STDCALL sqlite3_result_double(sqlite3_context*, double); +SQLITE_API void SQLITE_STDCALL sqlite3_result_error(sqlite3_context*, const char*, int); +SQLITE_API void SQLITE_STDCALL sqlite3_result_error16(sqlite3_context*, const void*, int); +SQLITE_API void SQLITE_STDCALL sqlite3_result_error_toobig(sqlite3_context*); +SQLITE_API void SQLITE_STDCALL sqlite3_result_error_nomem(sqlite3_context*); +SQLITE_API void SQLITE_STDCALL sqlite3_result_error_code(sqlite3_context*, int); +SQLITE_API void SQLITE_STDCALL sqlite3_result_int(sqlite3_context*, int); +SQLITE_API void SQLITE_STDCALL sqlite3_result_int64(sqlite3_context*, sqlite3_int64); +SQLITE_API void SQLITE_STDCALL sqlite3_result_null(sqlite3_context*); +SQLITE_API void SQLITE_STDCALL sqlite3_result_text(sqlite3_context*, const char*, int, void(*)(void*)); +SQLITE_API void SQLITE_STDCALL sqlite3_result_text64(sqlite3_context*, const char*,sqlite3_uint64, + void(*)(void*), unsigned char encoding); +SQLITE_API void SQLITE_STDCALL sqlite3_result_text16(sqlite3_context*, const void*, int, void(*)(void*)); +SQLITE_API void SQLITE_STDCALL sqlite3_result_text16le(sqlite3_context*, const void*, int,void(*)(void*)); +SQLITE_API void SQLITE_STDCALL sqlite3_result_text16be(sqlite3_context*, const void*, int,void(*)(void*)); +SQLITE_API void SQLITE_STDCALL sqlite3_result_value(sqlite3_context*, sqlite3_value*); +SQLITE_API void SQLITE_STDCALL sqlite3_result_zeroblob(sqlite3_context*, int n); +SQLITE_API int SQLITE_STDCALL sqlite3_result_zeroblob64(sqlite3_context*, sqlite3_uint64 n); + + +/* +** CAPI3REF: Setting The Subtype Of An SQL Function +** METHOD: sqlite3_context +** +** The sqlite3_result_subtype(C,T) function causes the subtype of +** the result from the [application-defined SQL function] with +** [sqlite3_context] C to be the value T. Only the lower 8 bits +** of the subtype T are preserved in current versions of SQLite; +** higher order bits are discarded. +** The number of subtype bytes preserved by SQLite might increase +** in future releases of SQLite. 
+*/ +SQLITE_API void SQLITE_STDCALL sqlite3_result_subtype(sqlite3_context*,unsigned int); /* ** CAPI3REF: Define New Collating Sequences +** METHOD: sqlite3 ** -** These functions are used to add new collation sequences to the -** [database connection] specified as the first argument. +** ^These functions add, remove, or modify a [collation] associated +** with the [database connection] specified as the first argument. ** -** ^The name of the new collation sequence is specified as a UTF-8 string +** ^The name of the collation is a UTF-8 string ** for sqlite3_create_collation() and sqlite3_create_collation_v2() -** and a UTF-16 string for sqlite3_create_collation16(). ^In all cases -** the name is passed as the second function argument. -** -** ^The third argument may be one of the constants [SQLITE_UTF8], -** [SQLITE_UTF16LE], or [SQLITE_UTF16BE], indicating that the user-supplied -** routine expects to be passed pointers to strings encoded using UTF-8, -** UTF-16 little-endian, or UTF-16 big-endian, respectively. ^The -** third argument might also be [SQLITE_UTF16] to indicate that the routine -** expects pointers to be UTF-16 strings in the native byte order, or the -** argument can be [SQLITE_UTF16_ALIGNED] if the -** the routine expects pointers to 16-bit word aligned strings -** of UTF-16 in the native byte order. -** -** A pointer to the user supplied routine must be passed as the fifth -** argument. ^If it is NULL, this is the same as deleting the collation -** sequence (so that SQLite cannot call it anymore). -** ^Each time the application supplied function is invoked, it is passed -** as its first parameter a copy of the void* passed as the fourth argument -** to sqlite3_create_collation() or sqlite3_create_collation16(). -** -** ^The remaining arguments to the application-supplied routine are two strings, -** each represented by a (length, data) pair and encoded in the encoding -** that was passed as the third argument when the collation sequence was -** registered. The application defined collation routine should -** return negative, zero or positive if the first string is less than, -** equal to, or greater than the second string. i.e. (STRING1 - STRING2). +** and a UTF-16 string in native byte order for sqlite3_create_collation16(). +** ^Collation names that compare equal according to [sqlite3_strnicmp()] are +** considered to be the same name. +** +** ^(The third argument (eTextRep) must be one of the constants: +** <ul> +** <li> [SQLITE_UTF8], +** <li> [SQLITE_UTF16LE], +** <li> [SQLITE_UTF16BE], +** <li> [SQLITE_UTF16], or +** <li> [SQLITE_UTF16_ALIGNED]. +** </ul>)^ +** ^The eTextRep argument determines the encoding of strings passed +** to the collating function callback, xCallback. +** ^The [SQLITE_UTF16] and [SQLITE_UTF16_ALIGNED] values for eTextRep +** force strings to be UTF16 with native byte order. +** ^The [SQLITE_UTF16_ALIGNED] value for eTextRep forces strings to begin +** on an even byte address. +** +** ^The fourth argument, pArg, is an application data pointer that is passed +** through as the first argument to the collating function callback. +** +** ^The fifth argument, xCallback, is a pointer to the collating function. +** ^Multiple collating functions can be registered using the same name but +** with different eTextRep parameters and SQLite will use whichever +** function requires the least amount of data transformation. +** ^If the xCallback argument is NULL then the collating function is +** deleted. 
^When all collating functions having the same name are deleted, +** that collation is no longer usable. +** +** ^The collating function callback is invoked with a copy of the pArg +** application data pointer and with two strings in the encoding specified +** by the eTextRep argument. The collating function must return an +** integer that is negative, zero, or positive +** if the first string is less than, equal to, or greater than the second, +** respectively. A collating function must always return the same answer +** given the same inputs. If two or more collating functions are registered +** to the same collation name (using different eTextRep values) then all +** must give an equivalent answer when invoked with equivalent strings. +** The collating function must obey the following properties for all +** strings A, B, and C: +** +** <ol> +** <li> If A==B then B==A. +** <li> If A==B and B==C then A==C. +** <li> If A<B THEN B>A. +** <li> If A<B and B<C then A<C. +** </ol> +** +** If a collating function fails any of the above constraints and that +** collating function is registered and used, then the behavior of SQLite +** is undefined. ** ** ^The sqlite3_create_collation_v2() works like sqlite3_create_collation() -** except that it takes an extra argument which is a destructor for -** the collation. ^The destructor is called when the collation is -** destroyed and is passed a copy of the fourth parameter void* pointer -** of the sqlite3_create_collation_v2(). -** ^Collations are destroyed when they are overridden by later calls to the -** collation creation functions or when the [database connection] is closed -** using [sqlite3_close()]. +** with the addition that the xDestroy callback is invoked on pArg when +** the collating function is deleted. +** ^Collating functions are deleted when they are overridden by later +** calls to the collation creation functions or when the +** [database connection] is closed using [sqlite3_close()]. +** +** ^The xDestroy callback is <u>not</u> called if the +** sqlite3_create_collation_v2() function fails. Applications that invoke +** sqlite3_create_collation_v2() with a non-NULL xDestroy argument should +** check the return code and dispose of the application data pointer +** themselves rather than expecting SQLite to deal with it for them. +** This is different from every other SQLite interface. The inconsistency +** is unfortunate but cannot be changed without breaking backwards +** compatibility. ** ** See also: [sqlite3_collation_needed()] and [sqlite3_collation_needed16()]. 
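A sketch of a collating function that satisfies the ordering rules listed above: a case-insensitive ASCII comparison over length-counted (not zero-terminated) strings, registered under the hypothetical name "nocase_ascii".

  #include <sqlite3.h>
  #include <ctype.h>

  static int nocaseCmp(void *pArg, int n1, const void *p1, int n2, const void *p2){
    const unsigned char *z1 = p1, *z2 = p2;
    int i, n = n1<n2 ? n1 : n2;
    (void)pArg;
    for(i=0; i<n; i++){
      int c = tolower(z1[i]) - tolower(z2[i]);
      if( c ) return c;
    }
    return n1 - n2;                          /* shorter string sorts first */
  }

  static int register_nocase_ascii(sqlite3 *db){
    return sqlite3_create_collation_v2(db, "nocase_ascii", SQLITE_UTF8,
                                       0, nocaseCmp, 0);
  }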
*/ -SQLITE_API int sqlite3_create_collation( +SQLITE_API int SQLITE_STDCALL sqlite3_create_collation( sqlite3*, const char *zName, int eTextRep, - void*, + void *pArg, int(*xCompare)(void*,int,const void*,int,const void*) ); -SQLITE_API int sqlite3_create_collation_v2( +SQLITE_API int SQLITE_STDCALL sqlite3_create_collation_v2( sqlite3*, const char *zName, int eTextRep, - void*, + void *pArg, int(*xCompare)(void*,int,const void*,int,const void*), void(*xDestroy)(void*) ); -SQLITE_API int sqlite3_create_collation16( +SQLITE_API int SQLITE_STDCALL sqlite3_create_collation16( sqlite3*, const void *zName, int eTextRep, - void*, + void *pArg, int(*xCompare)(void*,int,const void*,int,const void*) ); /* ** CAPI3REF: Collation Needed Callbacks +** METHOD: sqlite3 ** ** ^To avoid having to register all collation sequences before a database ** can be used, a single callback function may be registered with the ** [database connection] to be invoked whenever an undefined collation ** sequence is required. @@ -3689,16 +4839,16 @@ ** ** The callback function should register the desired collation using ** [sqlite3_create_collation()], [sqlite3_create_collation16()], or ** [sqlite3_create_collation_v2()]. */ -SQLITE_API int sqlite3_collation_needed( +SQLITE_API int SQLITE_STDCALL sqlite3_collation_needed( sqlite3*, void*, void(*)(void*,sqlite3*,int eTextRep,const char*) ); -SQLITE_API int sqlite3_collation_needed16( +SQLITE_API int SQLITE_STDCALL sqlite3_collation_needed16( sqlite3*, void*, void(*)(void*,sqlite3*,int eTextRep,const void*) ); @@ -3708,12 +4858,17 @@ ** called right after sqlite3_open(). ** ** The code to implement this API is not available in the public release ** of SQLite. */ -SQLITE_API int sqlite3_key( +SQLITE_API int SQLITE_STDCALL sqlite3_key( + sqlite3 *db, /* Database to be rekeyed */ + const void *pKey, int nKey /* The key */ +); +SQLITE_API int SQLITE_STDCALL sqlite3_key_v2( sqlite3 *db, /* Database to be rekeyed */ + const char *zDbName, /* Name of the database */ const void *pKey, int nKey /* The key */ ); /* ** Change the key on an open database. If the current database is not @@ -3721,49 +4876,57 @@ ** database is decrypted. ** ** The code to implement this API is not available in the public release ** of SQLite. */ -SQLITE_API int sqlite3_rekey( +SQLITE_API int SQLITE_STDCALL sqlite3_rekey( + sqlite3 *db, /* Database to be rekeyed */ + const void *pKey, int nKey /* The new key */ +); +SQLITE_API int SQLITE_STDCALL sqlite3_rekey_v2( sqlite3 *db, /* Database to be rekeyed */ + const char *zDbName, /* Name of the database */ const void *pKey, int nKey /* The new key */ ); /* ** Specify the activation key for a SEE database. Unless ** activated, none of the SEE routines will work. */ -SQLITE_API void sqlite3_activate_see( +SQLITE_API void SQLITE_STDCALL sqlite3_activate_see( const char *zPassPhrase /* Activation phrase */ ); #endif #ifdef SQLITE_ENABLE_CEROD /* ** Specify the activation key for a CEROD database. Unless ** activated, none of the CEROD routines will work. */ -SQLITE_API void sqlite3_activate_cerod( +SQLITE_API void SQLITE_STDCALL sqlite3_activate_cerod( const char *zPassPhrase /* Activation phrase */ ); #endif /* ** CAPI3REF: Suspend Execution For A Short Time ** -** ^The sqlite3_sleep() function causes the current thread to suspend execution +** The sqlite3_sleep() function causes the current thread to suspend execution ** for at least a number of milliseconds specified in its parameter. 
** -** ^If the operating system does not support sleep requests with +** If the operating system does not support sleep requests with ** millisecond time resolution, then the time will be rounded up to -** the nearest second. ^The number of milliseconds of sleep actually +** the nearest second. The number of milliseconds of sleep actually ** requested from the operating system is returned. ** ** ^SQLite implements this interface by calling the xSleep() -** method of the default [sqlite3_vfs] object. +** method of the default [sqlite3_vfs] object. If the xSleep() method +** of the default VFS is not implemented correctly, or not implemented at +** all, then the behavior of sqlite3_sleep() may deviate from the description +** in the previous paragraphs. */ -SQLITE_API int sqlite3_sleep(int); +SQLITE_API int SQLITE_STDCALL sqlite3_sleep(int); /* ** CAPI3REF: Name Of The Folder Holding Temporary Files ** ** ^(If this global variable is made to point to a string which is @@ -3770,10 +4933,17 @@ ** the name of a folder (a.k.a. directory), then all temporary files ** created by SQLite when using a built-in [sqlite3_vfs | VFS] ** will be placed in that directory.)^ ^If this variable ** is a NULL pointer, then SQLite performs a search for an appropriate ** temporary file directory. +** +** Applications are strongly discouraged from using this global variable. +** It is required to set a temporary folder on Windows Runtime (WinRT). +** But for all other platforms, it is highly recommended that applications +** neither read nor write this variable. This global variable is a relic +** that exists for backwards compatibility of legacy applications and should +** be avoided in new projects. ** ** It is not safe to read or modify this variable in more than one ** thread at a time. It is not safe to read or modify this variable ** if a [database connection] is being used at the same time in a separate ** thread. @@ -3789,16 +4959,74 @@ ** [sqlite3_malloc] and the pragma may attempt to free that memory ** using [sqlite3_free]. ** Hence, if this variable is modified directly, either it should be ** made NULL or made to point to memory obtained from [sqlite3_malloc] ** or else the use of the [temp_store_directory pragma] should be avoided. +** Except when requested by the [temp_store_directory pragma], SQLite +** does not free the memory that sqlite3_temp_directory points to. If +** the application wants that memory to be freed, it must do +** so itself, taking care to only do so after all [database connection] +** objects have been destroyed. +** +** <b>Note to Windows Runtime users:</b> The temporary directory must be set +** prior to calling [sqlite3_open] or [sqlite3_open_v2]. Otherwise, various +** features that require the use of temporary files may fail. Here is an +** example of how to do this using C++ with the Windows Runtime: +** +** <blockquote><pre> +** LPCWSTR zPath = Windows::Storage::ApplicationData::Current-> +**   TemporaryFolder->Path->Data(); +** char zPathBuf[MAX_PATH + 1]; +** memset(zPathBuf, 0, sizeof(zPathBuf)); +** WideCharToMultiByte(CP_UTF8, 0, zPath, -1, zPathBuf, sizeof(zPathBuf), +**   NULL, NULL); +** sqlite3_temp_directory = sqlite3_mprintf("%s", zPathBuf); +** </pre></blockquote> */ SQLITE_API SQLITE_EXTERN char *sqlite3_temp_directory; +/* +** CAPI3REF: Name Of The Folder Holding Database Files +** +** ^(If this global variable is made to point to a string which is +** the name of a folder (a.k.a. 
directory), then all database files +** specified with a relative pathname and created or accessed by +** SQLite when using a built-in windows [sqlite3_vfs | VFS] will be assumed +** to be relative to that directory.)^ ^If this variable is a NULL +** pointer, then SQLite assumes that all database files specified +** with a relative pathname are relative to the current directory +** for the process. Only the windows VFS makes use of this global +** variable; it is ignored by the unix VFS. +** +** Changing the value of this variable while a database connection is +** open can result in a corrupt database. +** +** It is not safe to read or modify this variable in more than one +** thread at a time. It is not safe to read or modify this variable +** if a [database connection] is being used at the same time in a separate +** thread. +** It is intended that this variable be set once +** as part of process initialization and before any SQLite interface +** routines have been called and that this variable remain unchanged +** thereafter. +** +** ^The [data_store_directory pragma] may modify this variable and cause +** it to point to memory obtained from [sqlite3_malloc]. ^Furthermore, +** the [data_store_directory pragma] always assumes that any string +** that this variable points to is held in memory obtained from +** [sqlite3_malloc] and the pragma may attempt to free that memory +** using [sqlite3_free]. +** Hence, if this variable is modified directly, either it should be +** made NULL or made to point to memory obtained from [sqlite3_malloc] +** or else the use of the [data_store_directory pragma] should be avoided. +*/ +SQLITE_API SQLITE_EXTERN char *sqlite3_data_directory; + /* ** CAPI3REF: Test For Auto-Commit Mode ** KEYWORDS: {autocommit mode} +** METHOD: sqlite3 ** ** ^The sqlite3_get_autocommit() interface returns non-zero or ** zero if the given database connection is or is not in autocommit mode, ** respectively. ^Autocommit mode is on by default. ** ^Autocommit mode is disabled by a [BEGIN] statement. @@ -3813,26 +5041,55 @@ ** ** If another thread changes the autocommit status of the database ** connection while this routine is running, then the return value ** is undefined. */ -SQLITE_API int sqlite3_get_autocommit(sqlite3*); +SQLITE_API int SQLITE_STDCALL sqlite3_get_autocommit(sqlite3*); /* ** CAPI3REF: Find The Database Handle Of A Prepared Statement +** METHOD: sqlite3_stmt ** ** ^The sqlite3_db_handle interface returns the [database connection] handle ** to which a [prepared statement] belongs. ^The [database connection] ** returned by sqlite3_db_handle is the same [database connection] ** that was the first argument ** to the [sqlite3_prepare_v2()] call (or its variants) that was used to ** create the statement in the first place. */ -SQLITE_API sqlite3 *sqlite3_db_handle(sqlite3_stmt*); +SQLITE_API sqlite3 *SQLITE_STDCALL sqlite3_db_handle(sqlite3_stmt*); + +/* +** CAPI3REF: Return The Filename For A Database Connection +** METHOD: sqlite3 +** +** ^The sqlite3_db_filename(D,N) interface returns a pointer to a filename +** associated with database N of connection D. ^The main database file +** has the name "main". If there is no attached database N on the database +** connection D, or if database N is a temporary or in-memory database, then +** a NULL pointer is returned. +** +** ^The filename returned by this function is the output of the +** xFullPathname method of the [VFS]. 
^In other words, the filename +** will be an absolute pathname, even if the filename used +** to open the database originally was a URI or relative pathname. +*/ +SQLITE_API const char *SQLITE_STDCALL sqlite3_db_filename(sqlite3 *db, const char *zDbName); + +/* +** CAPI3REF: Determine if a database is read-only +** METHOD: sqlite3 +** +** ^The sqlite3_db_readonly(D,N) interface returns 1 if the database N +** of connection D is read-only, 0 if it is read/write, or -1 if N is not +** the name of a database on connection D. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_db_readonly(sqlite3 *db, const char *zDbName); /* ** CAPI3REF: Find the next prepared statement +** METHOD: sqlite3 ** ** ^This interface returns a pointer to the next [prepared statement] after ** pStmt associated with the [database connection] pDb. ^If pStmt is NULL ** then this interface returns a pointer to the first prepared statement ** associated with the database connection pDb. ^If no prepared statement @@ -3840,14 +5097,15 @@ ** ** The [database connection] pointer D in a call to ** [sqlite3_next_stmt(D,S)] must refer to an open database ** connection and in particular must not be a NULL pointer. */ -SQLITE_API sqlite3_stmt *sqlite3_next_stmt(sqlite3 *pDb, sqlite3_stmt *pStmt); +SQLITE_API sqlite3_stmt *SQLITE_STDCALL sqlite3_next_stmt(sqlite3 *pDb, sqlite3_stmt *pStmt); /* ** CAPI3REF: Commit And Rollback Notification Callbacks +** METHOD: sqlite3 ** ** ^The sqlite3_commit_hook() interface registers a callback ** function to be invoked whenever a transaction is [COMMIT | committed]. ** ^Any callback set by a previous call to sqlite3_commit_hook() ** for the same database connection is overridden. @@ -3862,17 +5120,19 @@ ** ^The sqlite3_commit_hook(D,C,P) and sqlite3_rollback_hook(D,C,P) functions ** return the P argument from the previous call of the same function ** on the same [database connection] D, or NULL for ** the first call for each function on D. ** +** The commit and rollback hook callbacks are not reentrant. ** The callback implementation must not do anything that will modify ** the database connection that invoked the callback. Any actions ** to modify the database connection must be deferred until after the ** completion of the [sqlite3_step()] call that triggered the commit ** or rollback hook in the first place. -** Note that [sqlite3_prepare_v2()] and [sqlite3_step()] both modify their -** database connections for the meaning of "modify" in this paragraph. +** Note that running any other SQL statements, including SELECT statements, +** or merely calling [sqlite3_prepare_v2()] and [sqlite3_step()] will modify +** the database connections for the meaning of "modify" in this paragraph. ** ** ^Registering a NULL function disables the callback. ** ** ^When the commit hook callback routine returns zero, the [COMMIT] ** operation is allowed to continue normally. ^If the commit hook @@ -3886,24 +5146,26 @@ ** ^The rollback callback is not invoked if a transaction is ** automatically rolled back because the database connection is closed. ** ** See also the [sqlite3_update_hook()] interface. 
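For a quick picture of these per-connection queries, the sketch below prints the main database's file name, its read-only status, and the autocommit flag; the output wording is illustrative.

  #include <sqlite3.h>
  #include <stdio.h>

  static void show_connection_info(sqlite3 *db){
    const char *zFile = sqlite3_db_filename(db, "main");
    printf("main database file: %s\n",
           (zFile && zFile[0]) ? zFile : "(temporary or in-memory)");
    printf("read-only: %d\n", sqlite3_db_readonly(db, "main"));
    printf("autocommit: %d\n", sqlite3_get_autocommit(db));
  }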
*/ -SQLITE_API void *sqlite3_commit_hook(sqlite3*, int(*)(void*), void*); -SQLITE_API void *sqlite3_rollback_hook(sqlite3*, void(*)(void *), void*); +SQLITE_API void *SQLITE_STDCALL sqlite3_commit_hook(sqlite3*, int(*)(void*), void*); +SQLITE_API void *SQLITE_STDCALL sqlite3_rollback_hook(sqlite3*, void(*)(void *), void*); /* ** CAPI3REF: Data Change Notification Callbacks +** METHOD: sqlite3 ** ** ^The sqlite3_update_hook() interface registers a callback function ** with the [database connection] identified by the first argument -** to be invoked whenever a row is updated, inserted or deleted. +** to be invoked whenever a row is updated, inserted or deleted in +** a rowid table. ** ^Any callback set by a previous call to this function ** for the same database connection is overridden. ** ** ^The second argument is a pointer to the function to invoke when a -** row is updated, inserted or deleted. +** row is updated, inserted or deleted in a rowid table. ** ^The first argument to the callback is a copy of the third argument ** to sqlite3_update_hook(). ** ^The second callback argument is one of [SQLITE_INSERT], [SQLITE_DELETE], ** or [SQLITE_UPDATE], depending on the operation that caused the callback ** to be invoked. @@ -3912,10 +5174,11 @@ ** ^The final callback parameter is the [rowid] of the row. ** ^In the case of an update, this is the [rowid] after the update takes place. ** ** ^(The update hook is not invoked when internal system tables are ** modified (i.e. sqlite_master and sqlite_sequence).)^ +** ^The update hook is not invoked when [WITHOUT ROWID] tables are modified. ** ** ^In the current implementation, the update hook ** is not invoked when duplication rows are deleted because of an ** [ON CONFLICT | ON CONFLICT REPLACE] clause. ^Nor is the update hook ** invoked when rows are deleted using the [truncate optimization]. @@ -3935,19 +5198,18 @@ ** the first call on D. ** ** See also the [sqlite3_commit_hook()] and [sqlite3_rollback_hook()] ** interfaces. */ -SQLITE_API void *sqlite3_update_hook( +SQLITE_API void *SQLITE_STDCALL sqlite3_update_hook( sqlite3*, void(*)(void *,int ,char const *,char const *,sqlite3_int64), void* ); /* ** CAPI3REF: Enable Or Disable Shared Pager Cache -** KEYWORDS: {shared cache} ** ** ^(This routine enables or disables the sharing of the database cache ** and schema data structures between [database connection | connections] ** to the same database. Sharing is enabled if the argument is true ** and disabled if the argument is false.)^ @@ -3965,14 +5227,22 @@ ** successfully. An [error code] is returned otherwise.)^ ** ** ^Shared cache is disabled by default. But this might change in ** future releases of SQLite. Applications that care about shared ** cache setting should set it explicitly. +** +** Note: This method is disabled on MacOS X 10.7 and iOS version 5.0 +** and will always return SQLITE_MISUSE. On those systems, +** shared cache mode should be enabled per-database connection via +** [sqlite3_open_v2()] with [SQLITE_OPEN_SHAREDCACHE]. +** +** This interface is threadsafe on processors where writing a +** 32-bit integer is atomic. ** ** See Also: [SQLite Shared-Cache Mode] */ -SQLITE_API int sqlite3_enable_shared_cache(int); +SQLITE_API int SQLITE_STDCALL sqlite3_enable_shared_cache(int); /* ** CAPI3REF: Attempt To Free Heap Memory ** ** ^The sqlite3_release_memory() interface attempts to free N bytes @@ -3979,62 +5249,120 @@ ** of heap memory by deallocating non-essential memory allocations ** held by the database library. 
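The sketch below registers a data-change callback with sqlite3_update_hook(); it only logs the change and, per the restrictions above, does not touch the database connection from inside the callback.

  #include <sqlite3.h>
  #include <stdio.h>

  static void changeLog(void *pArg, int op, const char *zDb,
                        const char *zTbl, sqlite3_int64 rowid){
    const char *zOp = op==SQLITE_INSERT ? "INSERT" :
                      op==SQLITE_DELETE ? "DELETE" : "UPDATE";
    (void)pArg;
    fprintf(stderr, "%s on %s.%s rowid=%lld\n", zOp, zDb, zTbl, (long long)rowid);
  }

  static void install_change_log(sqlite3 *db){
    sqlite3_update_hook(db, changeLog, 0);
  }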
Memory used to cache database ** pages to improve performance is an example of non-essential memory. ** ^sqlite3_release_memory() returns the number of bytes actually freed, ** which might be more or less than the amount requested. +** ^The sqlite3_release_memory() routine is a no-op returning zero +** if SQLite is not compiled with [SQLITE_ENABLE_MEMORY_MANAGEMENT]. +** +** See also: [sqlite3_db_release_memory()] */ -SQLITE_API int sqlite3_release_memory(int); +SQLITE_API int SQLITE_STDCALL sqlite3_release_memory(int); + +/* +** CAPI3REF: Free Memory Used By A Database Connection +** METHOD: sqlite3 +** +** ^The sqlite3_db_release_memory(D) interface attempts to free as much heap +** memory as possible from database connection D. Unlike the +** [sqlite3_release_memory()] interface, this interface is in effect even +** when the [SQLITE_ENABLE_MEMORY_MANAGEMENT] compile-time option is +** omitted. +** +** See also: [sqlite3_release_memory()] +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_db_release_memory(sqlite3*); /* ** CAPI3REF: Impose A Limit On Heap Size ** -** ^The sqlite3_soft_heap_limit() interface places a "soft" limit -** on the amount of heap memory that may be allocated by SQLite. -** ^If an internal allocation is requested that would exceed the -** soft heap limit, [sqlite3_release_memory()] is invoked one or -** more times to free up some space before the allocation is performed. -** -** ^The limit is called "soft" because if [sqlite3_release_memory()] -** cannot free sufficient memory to prevent the limit from being exceeded, -** the memory is allocated anyway and the current operation proceeds. -** -** ^A negative or zero value for N means that there is no soft heap limit and -** [sqlite3_release_memory()] will only be called when memory is exhausted. -** ^The default value for the soft heap limit is zero. -** -** ^(SQLite makes a best effort to honor the soft heap limit. -** But if the soft heap limit cannot be honored, execution will -** continue without error or notification.)^ This is why the limit is -** called a "soft" limit. It is advisory only. -** -** Prior to SQLite version 3.5.0, this routine only constrained the memory -** allocated by a single thread - the same thread in which this routine -** runs. Beginning with SQLite version 3.5.0, the soft heap limit is -** applied to all threads. The value specified for the soft heap limit -** is an upper bound on the total memory allocation for all threads. In -** version 3.5.0 there is no mechanism for limiting the heap usage for -** individual threads. -*/ -SQLITE_API void sqlite3_soft_heap_limit(int); +** ^The sqlite3_soft_heap_limit64() interface sets and/or queries the +** soft limit on the amount of heap memory that may be allocated by SQLite. +** ^SQLite strives to keep heap memory utilization below the soft heap +** limit by reducing the number of pages held in the page cache +** as heap memory usages approaches the limit. +** ^The soft heap limit is "soft" because even though SQLite strives to stay +** below the limit, it will exceed the limit rather than generate +** an [SQLITE_NOMEM] error. In other words, the soft heap limit +** is advisory only. +** +** ^The return value from sqlite3_soft_heap_limit64() is the size of +** the soft heap limit prior to the call, or negative in the case of an +** error. ^If the argument N is negative +** then no change is made to the soft heap limit. 
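For example, a query-then-set sequence along these lines uses the negative-argument convention just described; the 8 MB figure is an arbitrary choice:

    #include <stdio.h>
    #include <sqlite3.h>

    static void show_and_set_heap_limit(void){
      /* A negative argument queries the current limit without changing it. */
      sqlite3_int64 cur = sqlite3_soft_heap_limit64(-1);
      printf("current soft heap limit: %lld bytes\n", (long long)cur);

      /* Ask SQLite to try to stay below roughly 8 MB; 0 would disable it. */
      sqlite3_soft_heap_limit64(8 * 1024 * 1024);
    }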
Hence, the current +** size of the soft heap limit can be determined by invoking +** sqlite3_soft_heap_limit64() with a negative argument. +** +** ^If the argument N is zero then the soft heap limit is disabled. +** +** ^(The soft heap limit is not enforced in the current implementation +** if one or more of following conditions are true: +** +** <ul> +** <li> The soft heap limit is set to zero. +** <li> Memory accounting is disabled using a combination of the +** [sqlite3_config]([SQLITE_CONFIG_MEMSTATUS],...) start-time option and +** the [SQLITE_DEFAULT_MEMSTATUS] compile-time option. +** <li> An alternative page cache implementation is specified using +** [sqlite3_config]([SQLITE_CONFIG_PCACHE2],...). +** <li> The page cache allocates from its own memory pool supplied +** by [sqlite3_config]([SQLITE_CONFIG_PAGECACHE],...) rather than +** from the heap. +** </ul>)^ +** +** Beginning with SQLite version 3.7.3, the soft heap limit is enforced +** regardless of whether or not the [SQLITE_ENABLE_MEMORY_MANAGEMENT] +** compile-time option is invoked. With [SQLITE_ENABLE_MEMORY_MANAGEMENT], +** the soft heap limit is enforced on every memory allocation. Without +** [SQLITE_ENABLE_MEMORY_MANAGEMENT], the soft heap limit is only enforced +** when memory is allocated by the page cache. Testing suggests that because +** the page cache is the predominate memory user in SQLite, most +** applications will achieve adequate soft heap limit enforcement without +** the use of [SQLITE_ENABLE_MEMORY_MANAGEMENT]. +** +** The circumstances under which SQLite will enforce the soft heap limit may +** changes in future releases of SQLite. +*/ +SQLITE_API sqlite3_int64 SQLITE_STDCALL sqlite3_soft_heap_limit64(sqlite3_int64 N); + +/* +** CAPI3REF: Deprecated Soft Heap Limit Interface +** DEPRECATED +** +** This is a deprecated version of the [sqlite3_soft_heap_limit64()] +** interface. This routine is provided for historical compatibility +** only. All new applications should use the +** [sqlite3_soft_heap_limit64()] interface rather than this one. +*/ +SQLITE_API SQLITE_DEPRECATED void SQLITE_STDCALL sqlite3_soft_heap_limit(int N); + /* ** CAPI3REF: Extract Metadata About A Column Of A Table +** METHOD: sqlite3 ** -** ^This routine returns metadata about a specific column of a specific -** database table accessible using the [database connection] handle -** passed as the first function argument. +** ^(The sqlite3_table_column_metadata(X,D,T,C,....) routine returns +** information about column C of table T in database D +** on [database connection] X.)^ ^The sqlite3_table_column_metadata() +** interface returns SQLITE_OK and fills in the non-NULL pointers in +** the final five arguments with appropriate values if the specified +** column exists. ^The sqlite3_table_column_metadata() interface returns +** SQLITE_ERROR and if the specified column does not exist. +** ^If the column-name parameter to sqlite3_table_column_metadata() is a +** NULL pointer, then this routine simply checks for the existance of the +** table and returns SQLITE_OK if the table exists and SQLITE_ERROR if it +** does not. ** ** ^The column is identified by the second, third and fourth parameters to -** this function. ^The second parameter is either the name of the database +** this function. ^(The second parameter is either the name of the database ** (i.e. "main", "temp", or an attached database) containing the specified -** table or NULL. 
^If it is NULL, then all attached databases are searched +** table or NULL.)^ ^If it is NULL, then all attached databases are searched ** for the table using the same algorithm used by the database engine to ** resolve unqualified table references. ** ** ^The third and fourth parameters to this function are the table and column -** name of the desired column, respectively. Neither of these parameters -** may be NULL. +** name of the desired column, respectively. ** ** ^Metadata is returned by writing to the memory locations passed as the 5th ** and subsequent parameters to this function. ^Any of these arguments may be ** NULL, in which case the corresponding element of metadata is omitted. ** @@ -4049,38 +5377,35 @@ ** <tr><td> 9th <td> int <td> True if column is [AUTOINCREMENT] ** </table> ** </blockquote>)^ ** ** ^The memory pointed to by the character pointers returned for the -** declaration type and collation sequence is valid only until the next +** declaration type and collation sequence is valid until the next ** call to any SQLite API function. ** ** ^If the specified table is actually a view, an [error code] is returned. ** -** ^If the specified column is "rowid", "oid" or "_rowid_" and an +** ^If the specified column is "rowid", "oid" or "_rowid_" and the table +** is not a [WITHOUT ROWID] table and an ** [INTEGER PRIMARY KEY] column has been explicitly declared, then the output ** parameters are set for the explicitly declared column. ^(If there is no -** explicitly declared [INTEGER PRIMARY KEY] column, then the output -** parameters are set as follows: +** [INTEGER PRIMARY KEY] column, then the outputs +** for the [rowid] are set as follows: ** ** <pre> ** data type: "INTEGER" ** collation sequence: "BINARY" ** not null: 0 ** primary key: 1 ** auto increment: 0 ** </pre>)^ ** -** ^(This function may load one or more schemas from database files. If an -** error occurs during this process, or if the requested table or column -** cannot be found, an [error code] is returned and an error message left -** in the [database connection] (to be retrieved using sqlite3_errmsg()).)^ -** -** ^This API is only available if the library was compiled with the -** [SQLITE_ENABLE_COLUMN_METADATA] C-preprocessor symbol defined. +** ^This function causes all database schemas to be read from disk and +** parsed, if that has not already been done, and returns an error if +** any errors are encountered while loading the schema. */ -SQLITE_API int sqlite3_table_column_metadata( +SQLITE_API int SQLITE_STDCALL sqlite3_table_column_metadata( sqlite3 *db, /* Connection handle */ const char *zDbName, /* Database name or NULL */ const char *zTableName, /* Table name */ const char *zColumnName, /* Column name */ char const **pzDataType, /* OUTPUT: Declared data type */ @@ -4090,19 +5415,29 @@ int *pAutoinc /* OUTPUT: True if column is auto-increment */ ); /* ** CAPI3REF: Load An Extension +** METHOD: sqlite3 ** ** ^This interface loads an SQLite extension library from the named file. ** ** ^The sqlite3_load_extension() interface attempts to load an -** SQLite extension library contained in the file zFile. +** [SQLite extension] library contained in the file zFile. If +** the file cannot be loaded directly, attempts are made to load +** with various operating-system specific extensions added. +** So for example, if "samplelib" cannot be loaded, then names like +** "samplelib.so" or "samplelib.dylib" or "samplelib.dll" might +** be tried also. ** ** ^The entry point is zProc. 
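As a sketch of the sqlite3_table_column_metadata() call described above, the following looks up a hypothetical column "a" of table "t1" in the "main" database and prints the returned metadata:

    #include <stdio.h>
    #include <sqlite3.h>

    static void describe_column(sqlite3 *db){
      const char *zType = 0, *zColl = 0;
      int notnull = 0, pk = 0, autoinc = 0;
      /* "t1" and "a" are placeholder table and column names. */
      int rc = sqlite3_table_column_metadata(db, "main", "t1", "a",
                                             &zType, &zColl,
                                             &notnull, &pk, &autoinc);
      if( rc==SQLITE_OK ){
        printf("a: type=%s coll=%s notnull=%d pk=%d autoinc=%d\n",
               zType, zColl, notnull, pk, autoinc);
      }
    }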
-** ^zProc may be 0, in which case the name of the entry point -** defaults to "sqlite3_extension_init". +** ^(zProc may be 0, in which case SQLite will try to come up with an +** entry point name on its own. It first tries "sqlite3_extension_init". +** If that does not work, it constructs a name "sqlite3_X_init" where the +** X is consists of the lower-case equivalent of all ASCII alphabetic +** characters in the filename from the last "/" to the first following +** "." and omitting any initial "lib".)^ ** ^The sqlite3_load_extension() interface returns ** [SQLITE_OK] on success and [SQLITE_ERROR] if something goes wrong. ** ^If an error occurs and pzErrMsg is not 0, then the ** [sqlite3_load_extension()] interface shall attempt to ** fill *pzErrMsg with error message text stored in memory @@ -4113,63 +5448,90 @@ ** [sqlite3_enable_load_extension()] prior to calling this API, ** otherwise an error will be returned. ** ** See also the [load_extension() SQL function]. */ -SQLITE_API int sqlite3_load_extension( +SQLITE_API int SQLITE_STDCALL sqlite3_load_extension( sqlite3 *db, /* Load the extension into this database connection */ const char *zFile, /* Name of the shared library containing extension */ const char *zProc, /* Entry point. Derived from zFile if 0 */ char **pzErrMsg /* Put error message here if not 0 */ ); /* ** CAPI3REF: Enable Or Disable Extension Loading +** METHOD: sqlite3 ** ** ^So as not to open security holes in older applications that are -** unprepared to deal with extension loading, and as a means of disabling -** extension loading while evaluating user-entered SQL, the following API +** unprepared to deal with [extension loading], and as a means of disabling +** [extension loading] while evaluating user-entered SQL, the following API ** is provided to turn the [sqlite3_load_extension()] mechanism on and off. ** -** ^Extension loading is off by default. See ticket #1863. +** ^Extension loading is off by default. ** ^Call the sqlite3_enable_load_extension() routine with onoff==1 ** to turn extension loading on and call it with onoff==0 to turn ** it back off again. */ -SQLITE_API int sqlite3_enable_load_extension(sqlite3 *db, int onoff); +SQLITE_API int SQLITE_STDCALL sqlite3_enable_load_extension(sqlite3 *db, int onoff); /* -** CAPI3REF: Automatically Load An Extensions -** -** ^This API can be invoked at program startup in order to register -** one or more statically linked extensions that will be available -** to all new [database connections]. -** -** ^(This routine stores a pointer to the extension entry point -** in an array that is obtained from [sqlite3_malloc()]. That memory -** is deallocated by [sqlite3_reset_auto_extension()].)^ -** -** ^This function registers an extension entry point that is -** automatically invoked whenever a new [database connection] -** is opened using [sqlite3_open()], [sqlite3_open16()], -** or [sqlite3_open_v2()]. -** ^Duplicate extensions are detected so calling this routine -** multiple times with the same extension is harmless. -** ^Automatic extensions apply across all threads. -*/ -SQLITE_API int sqlite3_auto_extension(void (*xEntryPoint)(void)); +** CAPI3REF: Automatically Load Statically Linked Extensions +** +** ^This interface causes the xEntryPoint() function to be invoked for +** each new [database connection] that is created. The idea here is that +** xEntryPoint() is the entry point for a statically linked [SQLite extension] +** that is to be automatically loaded into all new database connections. 
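A minimal sketch of loading an extension might look as follows; "samplelib" is the placeholder library name used earlier in this comment, and passing 0 for zProc lets SQLite derive the entry point name:

    #include <stdio.h>
    #include <sqlite3.h>

    static int load_one_extension(sqlite3 *db){
      char *zErr = 0;
      int rc;

      sqlite3_enable_load_extension(db, 1);     /* loading is off by default */
      rc = sqlite3_load_extension(db, "samplelib", 0, &zErr);
      if( rc!=SQLITE_OK ){
        fprintf(stderr, "load failed: %s\n", zErr ? zErr : "unknown error");
        sqlite3_free(zErr);                     /* error text uses sqlite3_malloc() */
      }
      sqlite3_enable_load_extension(db, 0);     /* re-disable when done */
      return rc;
    }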
+** +** ^(Even though the function prototype shows that xEntryPoint() takes +** no arguments and returns void, SQLite invokes xEntryPoint() with three +** arguments and expects and integer result as if the signature of the +** entry point where as follows: +** +** <blockquote><pre> +**   int xEntryPoint( +**   sqlite3 *db, +**   const char **pzErrMsg, +**   const struct sqlite3_api_routines *pThunk +**   ); +** </pre></blockquote>)^ +** +** If the xEntryPoint routine encounters an error, it should make *pzErrMsg +** point to an appropriate error message (obtained from [sqlite3_mprintf()]) +** and return an appropriate [error code]. ^SQLite ensures that *pzErrMsg +** is NULL before calling the xEntryPoint(). ^SQLite will invoke +** [sqlite3_free()] on *pzErrMsg after xEntryPoint() returns. ^If any +** xEntryPoint() returns an error, the [sqlite3_open()], [sqlite3_open16()], +** or [sqlite3_open_v2()] call that provoked the xEntryPoint() will fail. +** +** ^Calling sqlite3_auto_extension(X) with an entry point X that is already +** on the list of automatic extensions is a harmless no-op. ^No entry point +** will be called more than once for each database connection that is opened. +** +** See also: [sqlite3_reset_auto_extension()] +** and [sqlite3_cancel_auto_extension()] +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_auto_extension(void (*xEntryPoint)(void)); + +/* +** CAPI3REF: Cancel Automatic Extension Loading +** +** ^The [sqlite3_cancel_auto_extension(X)] interface unregisters the +** initialization routine X that was registered using a prior call to +** [sqlite3_auto_extension(X)]. ^The [sqlite3_cancel_auto_extension(X)] +** routine returns 1 if initialization routine X was successfully +** unregistered and it returns 0 if X was not on the list of initialization +** routines. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_cancel_auto_extension(void (*xEntryPoint)(void)); /* ** CAPI3REF: Reset Automatic Extension Loading ** -** ^(This function disables all previously registered automatic -** extensions. It undoes the effect of all prior -** [sqlite3_auto_extension()] calls.)^ -** -** ^This function disables automatic extensions in all threads. +** ^This interface disables all automatic extensions previously +** registered using [sqlite3_auto_extension()]. */ -SQLITE_API void sqlite3_reset_auto_extension(void); +SQLITE_API void SQLITE_STDCALL sqlite3_reset_auto_extension(void); /* ** The interface to the virtual-table mechanism is currently considered ** to be experimental. The interface might change in incompatible ways. ** If this is a problem for you, do not use the interface at this time. @@ -4188,11 +5550,11 @@ /* ** CAPI3REF: Virtual Table Object ** KEYWORDS: sqlite3_module {virtual table module} ** -** This structure, sometimes called a a "virtual table module", +** This structure, sometimes called a "virtual table module", ** defines the implementation of a [virtual tables]. ** This structure consists mostly of methods for the module. ** ** ^A virtual table module is created by filling in a persistent ** instance of this structure and passing a pointer to that instance @@ -4228,28 +5590,36 @@ int (*xRollback)(sqlite3_vtab *pVTab); int (*xFindFunction)(sqlite3_vtab *pVtab, int nArg, const char *zName, void (**pxFunc)(sqlite3_context*,int,sqlite3_value**), void **ppArg); int (*xRename)(sqlite3_vtab *pVtab, const char *zNew); + /* The methods above are in version 1 of the sqlite_module object. Those + ** below are for version 2 and greater. 
*/ + int (*xSavepoint)(sqlite3_vtab *pVTab, int); + int (*xRelease)(sqlite3_vtab *pVTab, int); + int (*xRollbackTo)(sqlite3_vtab *pVTab, int); }; /* ** CAPI3REF: Virtual Table Indexing Information ** KEYWORDS: sqlite3_index_info ** -** The sqlite3_index_info structure and its substructures is used to +** The sqlite3_index_info structure and its substructures is used as part +** of the [virtual table] interface to ** pass information into and receive the reply from the [xBestIndex] ** method of a [virtual table module]. The fields under **Inputs** are the ** inputs to xBestIndex and are read-only. xBestIndex inserts its ** results into the **Outputs** fields. ** ** ^(The aConstraint[] array records WHERE clause constraints of the form: ** -** <pre>column OP expr</pre> +** <blockquote>column OP expr</blockquote> ** ** where OP is =, <, <=, >, or >=.)^ ^(The particular operator is -** stored in aConstraint[].op.)^ ^(The index of the column is stored in +** stored in aConstraint[].op using one of the +** [SQLITE_INDEX_CONSTRAINT_EQ | SQLITE_INDEX_CONSTRAINT_ values].)^ +** ^(The index of the column is stored in ** aConstraint[].iColumn.)^ ^(aConstraint[].usable is TRUE if the ** expr on the right-hand side can be evaluated (and thus the constraint ** is usable) and false if it cannot.)^ ** ** ^The optimizer automatically inverts terms of the form "expr OP column" @@ -4258,10 +5628,21 @@ ** ^The aConstraint[] array only reports WHERE clause terms that are ** relevant to the particular virtual table being queried. ** ** ^Information about the ORDER BY clause is stored in aOrderBy[]. ** ^Each term of aOrderBy records a column of the ORDER BY clause. +** +** The colUsed field indicates which columns of the virtual table may be +** required by the current scan. Virtual table columns are numbered from +** zero in the order in which they appear within the CREATE TABLE statement +** passed to sqlite3_declare_vtab(). For the first 63 columns (columns 0-62), +** the corresponding bit is set within the colUsed mask if the column may be +** required by SQLite. If the table has at least 64 columns and any column +** to the right of the first 63 is required, then bit 63 of colUsed is also +** set. In other words, column iCol may be required if the expression +** (colUsed & ((sqlite3_uint64)1 << (iCol>=63 ? 63 : iCol))) evaluates to +** non-zero. ** ** The [xBestIndex] method must fill aConstraintUsage[] with information ** about what parameters to pass to xFilter. ^If argvIndex>0 then ** the right-hand side of the corresponding aConstraint[] is evaluated ** and becomes the argvIndex-th entry in argv. ^(If aConstraintUsage[].omit @@ -4275,20 +5656,50 @@ ** ** ^The orderByConsumed means that output from [xFilter]/[xNext] will occur in ** the correct order to satisfy the ORDER BY clause so that no separate ** sorting step is required. ** -** ^The estimatedCost value is an estimate of the cost of doing the -** particular lookup. A full scan of a table with N entries should have -** a cost of N. A binary search of a table of N entries should have a -** cost of approximately log(N). +** ^The estimatedCost value is an estimate of the cost of a particular +** strategy. A cost of N indicates that the cost of the strategy is similar +** to a linear scan of an SQLite table with N rows. A cost of log(N) +** indicates that the expense of the operation is similar to that of a +** binary search on a unique indexed field of an SQLite table with N rows. 
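A bare-bones xBestIndex implementation using only the fields described so far might look like this sketch; it prefers an equality constraint on a hypothetical column 0 and otherwise reports a full-scan cost:

    #include <sqlite3.h>

    static int sampleBestIndex(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo){
      int i;
      (void)pVTab;
      pInfo->idxNum = 0;                    /* 0 means "full scan" to xFilter */
      pInfo->estimatedCost = 1000000.0;     /* comparable to scanning 1M rows */
      for(i=0; i<pInfo->nConstraint; i++){
        if( pInfo->aConstraint[i].usable
         && pInfo->aConstraint[i].iColumn==0
         && pInfo->aConstraint[i].op==SQLITE_INDEX_CONSTRAINT_EQ ){
          pInfo->aConstraintUsage[i].argvIndex = 1;  /* RHS becomes argv[0] of xFilter */
          pInfo->aConstraintUsage[i].omit = 1;       /* SQLite need not re-check it */
          pInfo->idxNum = 1;                         /* 1 means "keyed lookup" */
          pInfo->estimatedCost = 10.0;               /* like a binary search */
          break;
        }
      }
      return SQLITE_OK;
    }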
+** +** ^The estimatedRows value is an estimate of the number of rows that +** will be returned by the strategy. +** +** The xBestIndex method may optionally populate the idxFlags field with a +** mask of SQLITE_INDEX_SCAN_* flags. Currently there is only one such flag - +** SQLITE_INDEX_SCAN_UNIQUE. If the xBestIndex method sets this flag, SQLite +** assumes that the strategy may visit at most one row. +** +** Additionally, if xBestIndex sets the SQLITE_INDEX_SCAN_UNIQUE flag, then +** SQLite also assumes that if a call to the xUpdate() method is made as +** part of the same statement to delete or update a virtual table row and the +** implementation returns SQLITE_CONSTRAINT, then there is no need to rollback +** any database changes. In other words, if the xUpdate() returns +** SQLITE_CONSTRAINT, the database contents must be exactly as they were +** before xUpdate was called. By contrast, if SQLITE_INDEX_SCAN_UNIQUE is not +** set and xUpdate returns SQLITE_CONSTRAINT, any database changes made by +** the xUpdate method are automatically rolled back by SQLite. +** +** IMPORTANT: The estimatedRows field was added to the sqlite3_index_info +** structure for SQLite version 3.8.2. If a virtual table extension is +** used with an SQLite version earlier than 3.8.2, the results of attempting +** to read or write the estimatedRows field are undefined (but are likely +** to included crashing the application). The estimatedRows field should +** therefore only be used if [sqlite3_libversion_number()] returns a +** value greater than or equal to 3008002. Similarly, the idxFlags field +** was added for version 3.9.0. It may therefore only be used if +** sqlite3_libversion_number() returns a value greater than or equal to +** 3009000. */ struct sqlite3_index_info { /* Inputs */ int nConstraint; /* Number of entries in aConstraint */ struct sqlite3_index_constraint { - int iColumn; /* Column on left-hand side of constraint */ + int iColumn; /* Column constrained. 
-1 for ROWID */ unsigned char op; /* Constraint operator */ unsigned char usable; /* True if this constraint is usable */ int iTermOffset; /* Used internally - xBestIndex should ignore */ } *aConstraint; /* Table of WHERE clause constraints */ int nOrderBy; /* Number of terms in the ORDER BY clause */ @@ -4303,21 +5714,45 @@ } *aConstraintUsage; int idxNum; /* Number used to identify the index */ char *idxStr; /* String, possibly obtained from sqlite3_malloc */ int needToFreeIdxStr; /* Free idxStr using sqlite3_free() if true */ int orderByConsumed; /* True if output is already ordered */ - double estimatedCost; /* Estimated cost of using this index */ + double estimatedCost; /* Estimated cost of using this index */ + /* Fields below are only available in SQLite 3.8.2 and later */ + sqlite3_int64 estimatedRows; /* Estimated number of rows returned */ + /* Fields below are only available in SQLite 3.9.0 and later */ + int idxFlags; /* Mask of SQLITE_INDEX_SCAN_* flags */ + /* Fields below are only available in SQLite 3.10.0 and later */ + sqlite3_uint64 colUsed; /* Input: Mask of columns used by statement */ }; -#define SQLITE_INDEX_CONSTRAINT_EQ 2 -#define SQLITE_INDEX_CONSTRAINT_GT 4 -#define SQLITE_INDEX_CONSTRAINT_LE 8 -#define SQLITE_INDEX_CONSTRAINT_LT 16 -#define SQLITE_INDEX_CONSTRAINT_GE 32 -#define SQLITE_INDEX_CONSTRAINT_MATCH 64 + +/* +** CAPI3REF: Virtual Table Scan Flags +*/ +#define SQLITE_INDEX_SCAN_UNIQUE 1 /* Scan visits at most 1 row */ + +/* +** CAPI3REF: Virtual Table Constraint Operator Codes +** +** These macros defined the allowed values for the +** [sqlite3_index_info].aConstraint[].op field. Each value represents +** an operator that is part of a constraint term in the wHERE clause of +** a query that uses a [virtual table]. +*/ +#define SQLITE_INDEX_CONSTRAINT_EQ 2 +#define SQLITE_INDEX_CONSTRAINT_GT 4 +#define SQLITE_INDEX_CONSTRAINT_LE 8 +#define SQLITE_INDEX_CONSTRAINT_LT 16 +#define SQLITE_INDEX_CONSTRAINT_GE 32 +#define SQLITE_INDEX_CONSTRAINT_MATCH 64 +#define SQLITE_INDEX_CONSTRAINT_LIKE 65 +#define SQLITE_INDEX_CONSTRAINT_GLOB 66 +#define SQLITE_INDEX_CONSTRAINT_REGEXP 67 /* ** CAPI3REF: Register A Virtual Table Implementation +** METHOD: sqlite3 ** ** ^These routines are used to register a new [virtual table module] name. ** ^Module names must be registered before ** creating a new [virtual table] using the module and before using a ** preexisting [virtual table] for the module. @@ -4331,21 +5766,23 @@ ** when a new virtual table is be being created or reinitialized. ** ** ^The sqlite3_create_module_v2() interface has a fifth parameter which ** is a pointer to a destructor for the pClientData. ^SQLite will ** invoke the destructor function (if it is not NULL) when SQLite -** no longer needs the pClientData pointer. ^The sqlite3_create_module() +** no longer needs the pClientData pointer. ^The destructor will also +** be invoked if the call to sqlite3_create_module_v2() fails. +** ^The sqlite3_create_module() ** interface is equivalent to sqlite3_create_module_v2() with a NULL ** destructor. 
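A registration sketch for sqlite3_create_module_v2() follows; the module object and its methods are assumed to be defined elsewhere, and the destructor releases the client data even if registration itself fails:

    #include <sqlite3.h>

    /* Hypothetical module object whose xCreate/xConnect/... methods are
    ** defined elsewhere. */
    extern const sqlite3_module sampleModule;

    static void free_client_data(void *p){
      sqlite3_free(p);        /* also invoked if registration fails */
    }

    static int register_sample_module(sqlite3 *db){
      void *pClientData = sqlite3_malloc(64);   /* per-module state, if any */
      if( pClientData==0 ) return SQLITE_NOMEM;
      return sqlite3_create_module_v2(db, "sample", &sampleModule,
                                      pClientData, free_client_data);
    }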
*/ -SQLITE_API int sqlite3_create_module( +SQLITE_API int SQLITE_STDCALL sqlite3_create_module( sqlite3 *db, /* SQLite connection to register module with */ const char *zName, /* Name of the module */ const sqlite3_module *p, /* Methods for the module */ void *pClientData /* Client data for xCreate/xConnect */ ); -SQLITE_API int sqlite3_create_module_v2( +SQLITE_API int SQLITE_STDCALL sqlite3_create_module_v2( sqlite3 *db, /* SQLite connection to register module with */ const char *zName, /* Name of the module */ const sqlite3_module *p, /* Methods for the module */ void *pClientData, /* Client data for xCreate/xConnect */ void(*xDestroy)(void*) /* Module destructor function */ @@ -4369,11 +5806,11 @@ ** is delivered up to the client application, the string will be automatically ** freed by sqlite3_free() and the zErrMsg field will be zeroed. */ struct sqlite3_vtab { const sqlite3_module *pModule; /* The module for this virtual table */ - int nRef; /* NO LONGER USED */ + int nRef; /* Number of open cursors */ char *zErrMsg; /* Error message from sqlite3_mprintf() */ /* Virtual table implementations will typically add additional fields */ }; /* @@ -4404,14 +5841,15 @@ ** ^The [xCreate] and [xConnect] methods of a ** [virtual table module] call this interface ** to declare the format (the names and datatypes of the columns) of ** the virtual tables they implement. */ -SQLITE_API int sqlite3_declare_vtab(sqlite3*, const char *zSQL); +SQLITE_API int SQLITE_STDCALL sqlite3_declare_vtab(sqlite3*, const char *zSQL); /* ** CAPI3REF: Overload A Function For A Virtual Table +** METHOD: sqlite3 ** ** ^(Virtual tables can provide alternative implementations of functions ** using the [xFindFunction] method of the [virtual table module]. ** But global versions of those functions ** must exist in order to be overloaded.)^ @@ -4422,11 +5860,11 @@ ** of the new function always causes an exception to be thrown. So ** the new function is not good for anything by itself. Its only ** purpose is to be a placeholder function that can be overloaded ** by a [virtual table]. */ -SQLITE_API int sqlite3_overload_function(sqlite3*, const char *zFuncName, int nArg); +SQLITE_API int SQLITE_STDCALL sqlite3_overload_function(sqlite3*, const char *zFuncName, int nArg); /* ** The interface to the virtual-table mechanism defined above (back up ** to a comment remarkably similar to this one) is currently considered ** to be experimental. The interface might change in incompatible ways. @@ -4450,47 +5888,65 @@ */ typedef struct sqlite3_blob sqlite3_blob; /* ** CAPI3REF: Open A BLOB For Incremental I/O +** METHOD: sqlite3 +** CONSTRUCTOR: sqlite3_blob ** ** ^(This interfaces opens a [BLOB handle | handle] to the BLOB located ** in row iRow, column zColumn, table zTable in database zDb; ** in other words, the same BLOB that would be selected by: ** ** <pre> ** SELECT zColumn FROM zDb.zTable WHERE [rowid] = iRow; ** </pre>)^ +** +** ^(Parameter zDb is not the filename that contains the database, but +** rather the symbolic name of the database. For attached databases, this is +** the name that appears after the AS keyword in the [ATTACH] statement. +** For the main database file, the database name is "main". For TEMP +** tables, the database name is "temp".)^ ** ** ^If the flags parameter is non-zero, then the BLOB is opened for read -** and write access. ^If it is zero, the BLOB is opened for read access. -** ^It is not possible to open a column that is part of an index or primary -** key for writing. 
^If [foreign key constraints] are enabled, it is -** not possible to open a column that is part of a [child key] for writing. -** -** ^Note that the database name is not the filename that contains -** the database but rather the symbolic name of the database that -** appears after the AS keyword when the database is connected using [ATTACH]. -** ^For the main database file, the database name is "main". -** ^For TEMP tables, the database name is "temp". -** -** ^(On success, [SQLITE_OK] is returned and the new [BLOB handle] is written -** to *ppBlob. Otherwise an [error code] is returned and *ppBlob is set -** to be a null pointer.)^ -** ^This function sets the [database connection] error code and message -** accessible via [sqlite3_errcode()] and [sqlite3_errmsg()] and related -** functions. ^Note that the *ppBlob variable is always initialized in a -** way that makes it safe to invoke [sqlite3_blob_close()] on *ppBlob -** regardless of the success or failure of this routine. +** and write access. ^If the flags parameter is zero, the BLOB is opened for +** read-only access. +** +** ^(On success, [SQLITE_OK] is returned and the new [BLOB handle] is stored +** in *ppBlob. Otherwise an [error code] is returned and, unless the error +** code is SQLITE_MISUSE, *ppBlob is set to NULL.)^ ^This means that, provided +** the API is not misused, it is always safe to call [sqlite3_blob_close()] +** on *ppBlob after this function it returns. +** +** This function fails with SQLITE_ERROR if any of the following are true: +** <ul> +** <li> ^(Database zDb does not exist)^, +** <li> ^(Table zTable does not exist within database zDb)^, +** <li> ^(Table zTable is a WITHOUT ROWID table)^, +** <li> ^(Column zColumn does not exist)^, +** <li> ^(Row iRow is not present in the table)^, +** <li> ^(The specified column of row iRow contains a value that is not +** a TEXT or BLOB value)^, +** <li> ^(Column zColumn is part of an index, PRIMARY KEY or UNIQUE +** constraint and the blob is being opened for read/write access)^, +** <li> ^([foreign key constraints | Foreign key constraints] are enabled, +** column zColumn is part of a [child key] definition and the blob is +** being opened for read/write access)^. +** </ul> +** +** ^Unless it returns SQLITE_MISUSE, this function sets the +** [database connection] error code and message accessible via +** [sqlite3_errcode()] and [sqlite3_errmsg()] and related functions. +** ** ** ^(If the row that a BLOB handle points to is modified by an ** [UPDATE], [DELETE], or by [ON CONFLICT] side-effects ** then the BLOB handle is marked as "expired". ** This is true if any column of the row is changed, even a column ** other than the one the BLOB handle is open on.)^ ** ^Calls to [sqlite3_blob_read()] and [sqlite3_blob_write()] for -** a expired BLOB handle fail with an return code of [SQLITE_ABORT]. +** an expired BLOB handle fail with a return code of [SQLITE_ABORT]. ** ^(Changes written into a BLOB prior to the BLOB expiring are not ** rolled back by the expiration of the BLOB. Such changes will eventually ** commit if the transaction continues to completion.)^ ** ** ^Use the [sqlite3_blob_bytes()] interface to determine the size of @@ -4497,53 +5953,77 @@ ** the opened blob. ^The size of a blob may not be changed by this ** interface. Use the [UPDATE] SQL command to change the size of a ** blob. 
** ** ^The [sqlite3_bind_zeroblob()] and [sqlite3_result_zeroblob()] interfaces -** and the built-in [zeroblob] SQL function can be used, if desired, -** to create an empty, zero-filled blob in which to read or write using -** this interface. +** and the built-in [zeroblob] SQL function may be used to create a +** zero-filled blob to read or write using the incremental-blob interface. ** ** To avoid a resource leak, every open [BLOB handle] should eventually ** be released by a call to [sqlite3_blob_close()]. */ -SQLITE_API int sqlite3_blob_open( +SQLITE_API int SQLITE_STDCALL sqlite3_blob_open( sqlite3*, const char *zDb, const char *zTable, const char *zColumn, sqlite3_int64 iRow, int flags, sqlite3_blob **ppBlob ); +/* +** CAPI3REF: Move a BLOB Handle to a New Row +** METHOD: sqlite3_blob +** +** ^This function is used to move an existing blob handle so that it points +** to a different row of the same database table. ^The new row is identified +** by the rowid value passed as the second argument. Only the row can be +** changed. ^The database, table and column on which the blob handle is open +** remain the same. Moving an existing blob handle to a new row can be +** faster than closing the existing handle and opening a new one. +** +** ^(The new row must meet the same criteria as for [sqlite3_blob_open()] - +** it must exist and there must be either a blob or text value stored in +** the nominated column.)^ ^If the new row is not present in the table, or if +** it does not contain a blob or text value, or if another error occurs, an +** SQLite error code is returned and the blob handle is considered aborted. +** ^All subsequent calls to [sqlite3_blob_read()], [sqlite3_blob_write()] or +** [sqlite3_blob_reopen()] on an aborted blob handle immediately return +** SQLITE_ABORT. ^Calling [sqlite3_blob_bytes()] on an aborted blob handle +** always returns zero. +** +** ^This function sets the database handle error code and message. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_blob_reopen(sqlite3_blob *, sqlite3_int64); + /* ** CAPI3REF: Close A BLOB Handle -** -** ^Closes an open [BLOB handle]. -** -** ^Closing a BLOB shall cause the current transaction to commit -** if there are no other BLOBs, no pending prepared statements, and the -** database connection is in [autocommit mode]. -** ^If any writes were made to the BLOB, they might be held in cache -** until the close operation if they will fit. -** -** ^(Closing the BLOB often forces the changes -** out to disk and so if any I/O errors occur, they will likely occur -** at the time when the BLOB is closed. Any errors that occur during -** closing are reported as a non-zero return value.)^ -** -** ^(The BLOB is closed unconditionally. Even if this routine returns -** an error code, the BLOB is still closed.)^ -** -** ^Calling this routine with a null pointer (such as would be returned -** by a failed call to [sqlite3_blob_open()]) is a harmless no-op. +** DESTRUCTOR: sqlite3_blob +** +** ^This function closes an open [BLOB handle]. ^(The BLOB handle is closed +** unconditionally. Even if this routine returns an error code, the +** handle is still closed.)^ +** +** ^If the blob handle being closed was opened for read-write access, and if +** the database is in auto-commit mode and there are no other open read-write +** blob handles or active write statements, the current transaction is +** committed. ^If an error occurs while committing the transaction, an error +** code is returned and the transaction rolled back. 
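Putting the incremental-blob interfaces together, a read-only sketch might look like this; the table and column names are hypothetical, and sqlite3_blob_close() is safe to call on a NULL handle:

    #include <string.h>
    #include <sqlite3.h>

    static int dump_blob_prefixes(sqlite3 *db){
      sqlite3_blob *pBlob = 0;
      char buf[32];
      /* Open t1.content of rowid 1 read-only (flags==0). */
      int rc = sqlite3_blob_open(db, "main", "t1", "content", 1, 0, &pBlob);
      if( rc==SQLITE_OK ){
        int n = sqlite3_blob_bytes(pBlob);
        if( n>(int)sizeof(buf) ) n = (int)sizeof(buf);
        rc = sqlite3_blob_read(pBlob, buf, n, 0);

        /* Reuse the same handle for another row instead of reopening. */
        if( rc==SQLITE_OK ) rc = sqlite3_blob_reopen(pBlob, 2);
      }
      sqlite3_blob_close(pBlob);    /* harmless no-op if pBlob is NULL */
      return rc;
    }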
+** +** Calling this function with an argument that is not a NULL pointer or an +** open blob handle results in undefined behaviour. ^Calling this routine +** with a null pointer (such as would be returned by a failed call to +** [sqlite3_blob_open()]) is a harmless no-op. ^Otherwise, if this function +** is passed a valid open blob handle, the values returned by the +** sqlite3_errcode() and sqlite3_errmsg() functions are set before returning. */ -SQLITE_API int sqlite3_blob_close(sqlite3_blob *); +SQLITE_API int SQLITE_STDCALL sqlite3_blob_close(sqlite3_blob *); /* ** CAPI3REF: Return The Size Of An Open BLOB +** METHOD: sqlite3_blob ** ** ^Returns the size in bytes of the BLOB accessible via the ** successfully opened [BLOB handle] in its only argument. ^The ** incremental blob I/O routines can only read or overwriting existing ** blob content; they cannot change the size of a blob. @@ -4551,14 +6031,15 @@ ** This routine only works on a [BLOB handle] which has been created ** by a prior successful call to [sqlite3_blob_open()] and which has not ** been closed by [sqlite3_blob_close()]. Passing any other pointer in ** to this routine results in undefined and probably undesirable behavior. */ -SQLITE_API int sqlite3_blob_bytes(sqlite3_blob *); +SQLITE_API int SQLITE_STDCALL sqlite3_blob_bytes(sqlite3_blob *); /* ** CAPI3REF: Read Data From A BLOB Incrementally +** METHOD: sqlite3_blob ** ** ^(This function is used to read data from an open [BLOB handle] into a ** caller-supplied buffer. N bytes of data are copied into buffer Z ** from the open BLOB, starting at offset iOffset.)^ ** @@ -4579,49 +6060,53 @@ ** been closed by [sqlite3_blob_close()]. Passing any other pointer in ** to this routine results in undefined and probably undesirable behavior. ** ** See also: [sqlite3_blob_write()]. */ -SQLITE_API int sqlite3_blob_read(sqlite3_blob *, void *Z, int N, int iOffset); +SQLITE_API int SQLITE_STDCALL sqlite3_blob_read(sqlite3_blob *, void *Z, int N, int iOffset); /* ** CAPI3REF: Write Data Into A BLOB Incrementally +** METHOD: sqlite3_blob ** -** ^This function is used to write data into an open [BLOB handle] from a -** caller-supplied buffer. ^N bytes of data are copied from the buffer Z -** into the open BLOB, starting at offset iOffset. +** ^(This function is used to write data into an open [BLOB handle] from a +** caller-supplied buffer. N bytes of data are copied from the buffer Z +** into the open BLOB, starting at offset iOffset.)^ +** +** ^(On success, sqlite3_blob_write() returns SQLITE_OK. +** Otherwise, an [error code] or an [extended error code] is returned.)^ +** ^Unless SQLITE_MISUSE is returned, this function sets the +** [database connection] error code and message accessible via +** [sqlite3_errcode()] and [sqlite3_errmsg()] and related functions. ** ** ^If the [BLOB handle] passed as the first argument was not opened for ** writing (the flags parameter to [sqlite3_blob_open()] was zero), ** this function returns [SQLITE_READONLY]. ** -** ^This function may only modify the contents of the BLOB; it is +** This function may only modify the contents of the BLOB; it is ** not possible to increase the size of a BLOB using this API. ** ^If offset iOffset is less than N bytes from the end of the BLOB, -** [SQLITE_ERROR] is returned and no data is written. ^If N is -** less than zero [SQLITE_ERROR] is returned and no data is written. -** The size of the BLOB (and hence the maximum value of N+iOffset) -** can be determined using the [sqlite3_blob_bytes()] interface. 
+** [SQLITE_ERROR] is returned and no data is written. The size of the +** BLOB (and hence the maximum value of N+iOffset) can be determined +** using the [sqlite3_blob_bytes()] interface. ^If N or iOffset are less +** than zero [SQLITE_ERROR] is returned and no data is written. ** ** ^An attempt to write to an expired [BLOB handle] fails with an ** error code of [SQLITE_ABORT]. ^Writes to the BLOB that occurred ** before the [BLOB handle] expired are not rolled back by the ** expiration of the handle, though of course those changes might ** have been overwritten by the statement that expired the BLOB handle ** or by other independent statements. ** -** ^(On success, sqlite3_blob_write() returns SQLITE_OK. -** Otherwise, an [error code] or an [extended error code] is returned.)^ -** ** This routine only works on a [BLOB handle] which has been created ** by a prior successful call to [sqlite3_blob_open()] and which has not ** been closed by [sqlite3_blob_close()]. Passing any other pointer in ** to this routine results in undefined and probably undesirable behavior. ** ** See also: [sqlite3_blob_read()]. */ -SQLITE_API int sqlite3_blob_write(sqlite3_blob *, const void *z, int n, int iOffset); +SQLITE_API int SQLITE_STDCALL sqlite3_blob_write(sqlite3_blob *, const void *z, int n, int iOffset); /* ** CAPI3REF: Virtual File System Objects ** ** A virtual filesystem (VFS) is an [sqlite3_vfs] object @@ -4648,13 +6133,13 @@ ** ** ^Unregister a VFS with the sqlite3_vfs_unregister() interface. ** ^(If the default VFS is unregistered, another VFS is chosen as ** the default. The choice for the new VFS is arbitrary.)^ */ -SQLITE_API sqlite3_vfs *sqlite3_vfs_find(const char *zVfsName); -SQLITE_API int sqlite3_vfs_register(sqlite3_vfs*, int makeDflt); -SQLITE_API int sqlite3_vfs_unregister(sqlite3_vfs*); +SQLITE_API sqlite3_vfs *SQLITE_STDCALL sqlite3_vfs_find(const char *zVfsName); +SQLITE_API int SQLITE_STDCALL sqlite3_vfs_register(sqlite3_vfs*, int makeDflt); +SQLITE_API int SQLITE_STDCALL sqlite3_vfs_unregister(sqlite3_vfs*); /* ** CAPI3REF: Mutexes ** ** The SQLite core uses these routines for thread @@ -4662,139 +6147,139 @@ ** use by SQLite, code that links against SQLite is ** permitted to use any of these routines. ** ** The SQLite source code contains multiple implementations ** of these mutex routines. An appropriate implementation -** is selected automatically at compile-time. ^(The following +** is selected automatically at compile-time. The following ** implementations are available in the SQLite core: ** ** <ul> -** <li> SQLITE_MUTEX_OS2 -** <li> SQLITE_MUTEX_PTHREAD +** <li> SQLITE_MUTEX_PTHREADS ** <li> SQLITE_MUTEX_W32 ** <li> SQLITE_MUTEX_NOOP -** </ul>)^ +** </ul> ** -** ^The SQLITE_MUTEX_NOOP implementation is a set of routines +** The SQLITE_MUTEX_NOOP implementation is a set of routines ** that does no real locking and is appropriate for use in -** a single-threaded application. ^The SQLITE_MUTEX_OS2, -** SQLITE_MUTEX_PTHREAD, and SQLITE_MUTEX_W32 implementations -** are appropriate for use on OS/2, Unix, and Windows. +** a single-threaded application. The SQLITE_MUTEX_PTHREADS and +** SQLITE_MUTEX_W32 implementations are appropriate for use on Unix +** and Windows. ** -** ^(If SQLite is compiled with the SQLITE_MUTEX_APPDEF preprocessor +** If SQLite is compiled with the SQLITE_MUTEX_APPDEF preprocessor ** macro defined (with "-DSQLITE_MUTEX_APPDEF=1"), then no mutex ** implementation is included with the library. 
In this case the ** application must supply a custom mutex implementation using the ** [SQLITE_CONFIG_MUTEX] option of the sqlite3_config() function ** before calling sqlite3_initialize() or any other public sqlite3_ -** function that calls sqlite3_initialize().)^ +** function that calls sqlite3_initialize(). ** ** ^The sqlite3_mutex_alloc() routine allocates a new -** mutex and returns a pointer to it. ^If it returns NULL -** that means that a mutex could not be allocated. ^SQLite -** will unwind its stack and return an error. ^(The argument -** to sqlite3_mutex_alloc() is one of these integer constants: +** mutex and returns a pointer to it. ^The sqlite3_mutex_alloc() +** routine returns NULL if it is unable to allocate the requested +** mutex. The argument to sqlite3_mutex_alloc() must one of these +** integer constants: ** ** <ul> ** <li> SQLITE_MUTEX_FAST ** <li> SQLITE_MUTEX_RECURSIVE ** <li> SQLITE_MUTEX_STATIC_MASTER ** <li> SQLITE_MUTEX_STATIC_MEM -** <li> SQLITE_MUTEX_STATIC_MEM2 +** <li> SQLITE_MUTEX_STATIC_OPEN ** <li> SQLITE_MUTEX_STATIC_PRNG ** <li> SQLITE_MUTEX_STATIC_LRU -** <li> SQLITE_MUTEX_STATIC_LRU2 -** </ul>)^ +** <li> SQLITE_MUTEX_STATIC_PMEM +** <li> SQLITE_MUTEX_STATIC_APP1 +** <li> SQLITE_MUTEX_STATIC_APP2 +** <li> SQLITE_MUTEX_STATIC_APP3 +** <li> SQLITE_MUTEX_STATIC_VFS1 +** <li> SQLITE_MUTEX_STATIC_VFS2 +** <li> SQLITE_MUTEX_STATIC_VFS3 +** </ul> ** ** ^The first two constants (SQLITE_MUTEX_FAST and SQLITE_MUTEX_RECURSIVE) ** cause sqlite3_mutex_alloc() to create ** a new mutex. ^The new mutex is recursive when SQLITE_MUTEX_RECURSIVE ** is used but not necessarily so when SQLITE_MUTEX_FAST is used. ** The mutex implementation does not need to make a distinction ** between SQLITE_MUTEX_RECURSIVE and SQLITE_MUTEX_FAST if it does -** not want to. ^SQLite will only request a recursive mutex in -** cases where it really needs one. ^If a faster non-recursive mutex +** not want to. SQLite will only request a recursive mutex in +** cases where it really needs one. If a faster non-recursive mutex ** implementation is available on the host platform, the mutex subsystem ** might return such a mutex in response to SQLITE_MUTEX_FAST. ** ** ^The other allowed parameters to sqlite3_mutex_alloc() (anything other ** than SQLITE_MUTEX_FAST and SQLITE_MUTEX_RECURSIVE) each return -** a pointer to a static preexisting mutex. ^Six static mutexes are +** a pointer to a static preexisting mutex. ^Nine static mutexes are ** used by the current version of SQLite. Future versions of SQLite ** may add additional static mutexes. Static mutexes are for internal ** use by SQLite only. Applications that use SQLite mutexes should ** use only the dynamic mutexes returned by SQLITE_MUTEX_FAST or ** SQLITE_MUTEX_RECURSIVE. ** ** ^Note that if one of the dynamic mutex parameters (SQLITE_MUTEX_FAST ** or SQLITE_MUTEX_RECURSIVE) is used then sqlite3_mutex_alloc() -** returns a different mutex on every call. ^But for the static +** returns a different mutex on every call. ^For the static ** mutex types, the same mutex is returned on every call that has ** the same type number. ** ** ^The sqlite3_mutex_free() routine deallocates a previously -** allocated dynamic mutex. ^SQLite is careful to deallocate every -** dynamic mutex that it allocates. The dynamic mutexes must not be in -** use when they are deallocated. Attempting to deallocate a static -** mutex results in undefined behavior. ^SQLite never deallocates -** a static mutex. +** allocated dynamic mutex. 
Attempting to deallocate a static +** mutex results in undefined behavior. ** ** ^The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt ** to enter a mutex. ^If another thread is already within the mutex, ** sqlite3_mutex_enter() will block and sqlite3_mutex_try() will return ** SQLITE_BUSY. ^The sqlite3_mutex_try() interface returns [SQLITE_OK] ** upon successful entry. ^(Mutexes created using ** SQLITE_MUTEX_RECURSIVE can be entered multiple times by the same thread. -** In such cases the, +** In such cases, the ** mutex must be exited an equal number of times before another thread -** can enter.)^ ^(If the same thread tries to enter any other -** kind of mutex more than once, the behavior is undefined. -** SQLite will never exhibit -** such behavior in its own use of mutexes.)^ +** can enter.)^ If the same thread tries to enter any mutex other +** than an SQLITE_MUTEX_RECURSIVE more than once, the behavior is undefined. ** ** ^(Some systems (for example, Windows 95) do not support the operation ** implemented by sqlite3_mutex_try(). On those systems, sqlite3_mutex_try() -** will always return SQLITE_BUSY. The SQLite core only ever uses -** sqlite3_mutex_try() as an optimization so this is acceptable behavior.)^ +** will always return SQLITE_BUSY. The SQLite core only ever uses +** sqlite3_mutex_try() as an optimization so this is acceptable +** behavior.)^ ** ** ^The sqlite3_mutex_leave() routine exits a mutex that was -** previously entered by the same thread. ^(The behavior +** previously entered by the same thread. The behavior ** is undefined if the mutex is not currently entered by the -** calling thread or is not currently allocated. SQLite will -** never do either.)^ +** calling thread or is not currently allocated. ** ** ^If the argument to sqlite3_mutex_enter(), sqlite3_mutex_try(), or ** sqlite3_mutex_leave() is a NULL pointer, then all three routines ** behave as no-ops. ** ** See also: [sqlite3_mutex_held()] and [sqlite3_mutex_notheld()]. */ -SQLITE_API sqlite3_mutex *sqlite3_mutex_alloc(int); -SQLITE_API void sqlite3_mutex_free(sqlite3_mutex*); -SQLITE_API void sqlite3_mutex_enter(sqlite3_mutex*); -SQLITE_API int sqlite3_mutex_try(sqlite3_mutex*); -SQLITE_API void sqlite3_mutex_leave(sqlite3_mutex*); +SQLITE_API sqlite3_mutex *SQLITE_STDCALL sqlite3_mutex_alloc(int); +SQLITE_API void SQLITE_STDCALL sqlite3_mutex_free(sqlite3_mutex*); +SQLITE_API void SQLITE_STDCALL sqlite3_mutex_enter(sqlite3_mutex*); +SQLITE_API int SQLITE_STDCALL sqlite3_mutex_try(sqlite3_mutex*); +SQLITE_API void SQLITE_STDCALL sqlite3_mutex_leave(sqlite3_mutex*); /* ** CAPI3REF: Mutex Methods Object ** ** An instance of this structure defines the low-level routines ** used to allocate and use mutexes. ** ** Usually, the default mutex implementations provided by SQLite are -** sufficient, however the user has the option of substituting a custom +** sufficient, however the application has the option of substituting a custom ** implementation for specialized deployments or systems for which SQLite -** does not provide a suitable implementation. In this case, the user +** does not provide a suitable implementation. In this case, the application ** creates and populates an instance of this structure to pass ** to sqlite3_config() along with the [SQLITE_CONFIG_MUTEX] option. ** Additionally, an instance of this structure can be used as an ** output variable when querying the system for the current mutex ** implementation, using the [SQLITE_CONFIG_GETMUTEX] option. 
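A short sketch of the dynamic-mutex interfaces described above; only a mutex obtained with SQLITE_MUTEX_FAST or SQLITE_MUTEX_RECURSIVE may be freed:

    #include <sqlite3.h>

    static void protect_shared_state(void){
      /* A dynamic mutex; SQLITE_MUTEX_STATIC_* mutexes must never be freed. */
      sqlite3_mutex *pMutex = sqlite3_mutex_alloc(SQLITE_MUTEX_FAST);
      if( pMutex==0 ) return;          /* allocation can fail */

      sqlite3_mutex_enter(pMutex);
      /* ... touch state shared between threads here ... */
      sqlite3_mutex_leave(pMutex);

      sqlite3_mutex_free(pMutex);      /* must not be held when freed */
    }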
** ** ^The xMutexInit method defined by this structure is invoked as ** part of system initialization by the sqlite3_initialize() function. -** ^The xMutexInit routine is calle by SQLite exactly once for each +** ^The xMutexInit routine is called by SQLite exactly once for each ** effective call to [sqlite3_initialize()]. ** ** ^The xMutexEnd method defined by this structure is invoked as ** part of system shutdown by the sqlite3_shutdown() function. The ** implementation of this method is expected to release all outstanding @@ -4822,17 +6307,17 @@ ** by this structure are not required to handle this case, the results ** of passing a NULL pointer instead of a valid mutex handle are undefined ** (i.e. it is acceptable to provide an implementation that segfaults if ** it is passed a NULL pointer). ** -** The xMutexInit() method must be threadsafe. ^It must be harmless to -** invoke xMutexInit() mutiple times within the same process and without +** The xMutexInit() method must be threadsafe. It must be harmless to +** invoke xMutexInit() multiple times within the same process and without ** intervening calls to xMutexEnd(). Second and subsequent calls to ** xMutexInit() must be no-ops. ** -** ^xMutexInit() must not use SQLite memory allocation ([sqlite3_malloc()] -** and its associates). ^Similarly, xMutexAlloc() must not use SQLite memory +** xMutexInit() must not use SQLite memory allocation ([sqlite3_malloc()] +** and its associates). Similarly, xMutexAlloc() must not use SQLite memory ** allocation for a static mutex. ^However xMutexAlloc() may use SQLite ** memory allocation for a fast or recursive mutex. ** ** ^SQLite will invoke the xMutexEnd() method when [sqlite3_shutdown()] is ** called, but only if the prior call to xMutexInit returned SQLITE_OK. @@ -4854,38 +6339,38 @@ /* ** CAPI3REF: Mutex Verification Routines ** ** The sqlite3_mutex_held() and sqlite3_mutex_notheld() routines -** are intended for use inside assert() statements. ^The SQLite core +** are intended for use inside assert() statements. The SQLite core ** never uses these routines except inside an assert() and applications -** are advised to follow the lead of the core. ^The SQLite core only +** are advised to follow the lead of the core. The SQLite core only ** provides implementations for these routines when it is compiled -** with the SQLITE_DEBUG flag. ^External mutex implementations +** with the SQLITE_DEBUG flag. External mutex implementations ** are only required to provide these routines if SQLITE_DEBUG is ** defined and if NDEBUG is not defined. ** -** ^These routines should return true if the mutex in their argument +** These routines should return true if the mutex in their argument ** is held or not held, respectively, by the calling thread. ** -** ^The implementation is not required to provided versions of these +** The implementation is not required to provide versions of these ** routines that actually work. If the implementation does not provide working ** versions of these routines, it should at least provide stubs that always ** return true so that one does not get spurious assertion failures. ** -** ^If the argument to sqlite3_mutex_held() is a NULL pointer then +** If the argument to sqlite3_mutex_held() is a NULL pointer then ** the routine should return 1. This seems counter-intuitive since -** clearly the mutex cannot be held if it does not exist. But the +** clearly the mutex cannot be held if it does not exist. 
But ** the reason the mutex does not exist is because the build is not ** using mutexes. And we do not want the assert() containing the ** call to sqlite3_mutex_held() to fail, so a non-zero return is -** the appropriate thing to do. ^The sqlite3_mutex_notheld() +** the appropriate thing to do. The sqlite3_mutex_notheld() ** interface should also return 1 when given a NULL pointer. */ #ifndef NDEBUG -SQLITE_API int sqlite3_mutex_held(sqlite3_mutex*); -SQLITE_API int sqlite3_mutex_notheld(sqlite3_mutex*); +SQLITE_API int SQLITE_STDCALL sqlite3_mutex_held(sqlite3_mutex*); +SQLITE_API int SQLITE_STDCALL sqlite3_mutex_notheld(sqlite3_mutex*); #endif /* ** CAPI3REF: Mutex Types ** @@ -4902,38 +6387,53 @@ #define SQLITE_MUTEX_STATIC_MEM 3 /* sqlite3_malloc() */ #define SQLITE_MUTEX_STATIC_MEM2 4 /* NOT USED */ #define SQLITE_MUTEX_STATIC_OPEN 4 /* sqlite3BtreeOpen() */ #define SQLITE_MUTEX_STATIC_PRNG 5 /* sqlite3_random() */ #define SQLITE_MUTEX_STATIC_LRU 6 /* lru page list */ -#define SQLITE_MUTEX_STATIC_LRU2 7 /* lru page list */ +#define SQLITE_MUTEX_STATIC_LRU2 7 /* NOT USED */ +#define SQLITE_MUTEX_STATIC_PMEM 7 /* sqlite3PageMalloc() */ +#define SQLITE_MUTEX_STATIC_APP1 8 /* For use by application */ +#define SQLITE_MUTEX_STATIC_APP2 9 /* For use by application */ +#define SQLITE_MUTEX_STATIC_APP3 10 /* For use by application */ +#define SQLITE_MUTEX_STATIC_VFS1 11 /* For use by built-in VFS */ +#define SQLITE_MUTEX_STATIC_VFS2 12 /* For use by extension VFS */ +#define SQLITE_MUTEX_STATIC_VFS3 13 /* For use by application VFS */ /* ** CAPI3REF: Retrieve the mutex for a database connection +** METHOD: sqlite3 ** ** ^This interface returns a pointer the [sqlite3_mutex] object that ** serializes access to the [database connection] given in the argument ** when the [threading mode] is Serialized. ** ^If the [threading mode] is Single-thread or Multi-thread then this ** routine returns a NULL pointer. */ -SQLITE_API sqlite3_mutex *sqlite3_db_mutex(sqlite3*); +SQLITE_API sqlite3_mutex *SQLITE_STDCALL sqlite3_db_mutex(sqlite3*); /* ** CAPI3REF: Low-Level Control Of Database Files +** METHOD: sqlite3 ** ** ^The [sqlite3_file_control()] interface makes a direct call to the ** xFileControl method for the [sqlite3_io_methods] object associated ** with a particular database identified by the second argument. ^The -** name of the database "main" for the main database or "temp" for the +** name of the database is "main" for the main database or "temp" for the ** TEMP database, or the name that appears after the AS keyword for ** databases that are added using the [ATTACH] SQL command. ** ^A NULL pointer can be used in place of "main" to refer to the ** main database file. ** ^The third and fourth parameters to this routine ** are passed directly through to the second and third parameters of ** the xFileControl method. ^The return value of the xFileControl ** method becomes the return value of this routine. +** +** ^The SQLITE_FCNTL_FILE_POINTER value for the op parameter causes +** a pointer to the underlying [sqlite3_file] object to be written into +** the space pointed to by the 4th parameter. ^The SQLITE_FCNTL_FILE_POINTER +** case is a short-circuit path which does not actually invoke the +** underlying sqlite3_io_methods.xFileControl method. ** ** ^If the second parameter (zDbName) does not match the name of any ** open database file, then SQLITE_ERROR is returned. ^This error ** code is not remembered and will not be recalled by [sqlite3_errcode()] ** or [sqlite3_errmsg()]. 
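For instance, the SQLITE_FCNTL_FILE_POINTER short-circuit mentioned above can be used to fetch the underlying sqlite3_file object, roughly as follows:

    #include <sqlite3.h>

    /* Fetch the sqlite3_file object behind the "main" database, or NULL. */
    static sqlite3_file *main_file_handle(sqlite3 *db){
      sqlite3_file *pFile = 0;
      if( sqlite3_file_control(db, "main",
                               SQLITE_FCNTL_FILE_POINTER, &pFile)!=SQLITE_OK ){
        return 0;
      }
      return pFile;
    }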
The underlying xFileControl method might @@ -4941,11 +6441,11 @@ ** an incorrect zDbName and an SQLITE_ERROR return from the underlying ** xFileControl method. ** ** See also: [SQLITE_FCNTL_LOCKSTATE] */ -SQLITE_API int sqlite3_file_control(sqlite3*, const char *zDbName, int op, void*); +SQLITE_API int SQLITE_STDCALL sqlite3_file_control(sqlite3*, const char *zDbName, int op, void*); /* ** CAPI3REF: Testing Interface ** ** ^The sqlite3_test_control() interface is used to read out internal @@ -4960,11 +6460,11 @@ ** The details of the operation codes, their meanings, the parameters ** they take, and what they do are all subject to change without notice. ** Unlike most of the SQLite API, this function is not guaranteed to ** operate consistently from one release to the next. */ -SQLITE_API int sqlite3_test_control(int op, ...); +SQLITE_API int SQLITE_CDECL sqlite3_test_control(int op, ...); /* ** CAPI3REF: Testing Interface Operation Codes ** ** These constants are the valid operation code parameters used @@ -4986,115 +6486,134 @@ #define SQLITE_TESTCTRL_ASSERT 12 #define SQLITE_TESTCTRL_ALWAYS 13 #define SQLITE_TESTCTRL_RESERVE 14 #define SQLITE_TESTCTRL_OPTIMIZATIONS 15 #define SQLITE_TESTCTRL_ISKEYWORD 16 -#define SQLITE_TESTCTRL_LAST 16 +#define SQLITE_TESTCTRL_SCRATCHMALLOC 17 +#define SQLITE_TESTCTRL_LOCALTIME_FAULT 18 +#define SQLITE_TESTCTRL_EXPLAIN_STMT 19 /* NOT USED */ +#define SQLITE_TESTCTRL_NEVER_CORRUPT 20 +#define SQLITE_TESTCTRL_VDBE_COVERAGE 21 +#define SQLITE_TESTCTRL_BYTEORDER 22 +#define SQLITE_TESTCTRL_ISINIT 23 +#define SQLITE_TESTCTRL_SORTER_MMAP 24 +#define SQLITE_TESTCTRL_IMPOSTER 25 +#define SQLITE_TESTCTRL_LAST 25 /* ** CAPI3REF: SQLite Runtime Status ** -** ^This interface is used to retrieve runtime status information -** about the preformance of SQLite, and optionally to reset various +** ^These interfaces are used to retrieve runtime status information +** about the performance of SQLite, and optionally to reset various ** highwater marks. ^The first argument is an integer code for ** the specific parameter to measure. ^(Recognized integer codes -** are of the form [SQLITE_STATUS_MEMORY_USED | SQLITE_STATUS_...].)^ +** are of the form [status parameters | SQLITE_STATUS_...].)^ ** ^The current value of the parameter is returned into *pCurrent. ** ^The highest recorded value is returned in *pHighwater. ^If the ** resetFlag is true, then the highest record value is reset after ** *pHighwater is written. ^(Some parameters do not record the highest ** value. For those parameters ** nothing is written into *pHighwater and the resetFlag is ignored.)^ ** ^(Other parameters record only the highwater mark and not the current ** value. For these latter parameters nothing is written into *pCurrent.)^ ** -** ^The sqlite3_db_status() routine returns SQLITE_OK on success and a -** non-zero [error code] on failure. +** ^The sqlite3_status() and sqlite3_status64() routines return +** SQLITE_OK on success and a non-zero [error code] on failure. ** -** This routine is threadsafe but is not atomic. This routine can be -** called while other threads are running the same or different SQLite -** interfaces. However the values returned in *pCurrent and -** *pHighwater reflect the status of SQLite at different points in time -** and it is possible that another thread might change the parameter -** in between the times when *pCurrent and *pHighwater are written. 
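
The following sketch shows the usage pattern described above; the variable names and the decision not to reset the highwater mark (resetFlag==0) are illustrative only:

    int nCur = 0, nHi = 0;
    /* Read the current and highwater heap usage without resetting the
    ** highwater mark. */
    if( sqlite3_status(SQLITE_STATUS_MEMORY_USED, &nCur, &nHi, 0)==SQLITE_OK ){
      /* nCur is the heap memory currently checked out via sqlite3_malloc();
      ** nHi is the largest value that nCur has reached. */
    }
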
+** If either the current value or the highwater mark is too large to +** be represented by a 32-bit integer, then the values returned by +** sqlite3_status() are undefined. ** ** See also: [sqlite3_db_status()] */ -SQLITE_API int sqlite3_status(int op, int *pCurrent, int *pHighwater, int resetFlag); +SQLITE_API int SQLITE_STDCALL sqlite3_status(int op, int *pCurrent, int *pHighwater, int resetFlag); +SQLITE_API int SQLITE_STDCALL sqlite3_status64( + int op, + sqlite3_int64 *pCurrent, + sqlite3_int64 *pHighwater, + int resetFlag +); /* ** CAPI3REF: Status Parameters +** KEYWORDS: {status parameters} ** ** These integer constants designate various run-time status parameters ** that can be returned by [sqlite3_status()]. ** ** <dl> -** ^(<dt>SQLITE_STATUS_MEMORY_USED</dt> +** [[SQLITE_STATUS_MEMORY_USED]] ^(<dt>SQLITE_STATUS_MEMORY_USED</dt> ** <dd>This parameter is the current amount of memory checked out ** using [sqlite3_malloc()], either directly or indirectly. The ** figure includes calls made to [sqlite3_malloc()] by the application ** and internal memory usage by the SQLite library. Scratch memory ** controlled by [SQLITE_CONFIG_SCRATCH] and auxiliary page-cache ** memory controlled by [SQLITE_CONFIG_PAGECACHE] is not included in ** this parameter. The amount returned is the sum of the allocation ** sizes as reported by the xSize method in [sqlite3_mem_methods].</dd>)^ ** -** ^(<dt>SQLITE_STATUS_MALLOC_SIZE</dt> +** [[SQLITE_STATUS_MALLOC_SIZE]] ^(<dt>SQLITE_STATUS_MALLOC_SIZE</dt> ** <dd>This parameter records the largest memory allocation request ** handed to [sqlite3_malloc()] or [sqlite3_realloc()] (or their ** internal equivalents). Only the value returned in the ** *pHighwater parameter to [sqlite3_status()] is of interest. ** The value written into the *pCurrent parameter is undefined.</dd>)^ ** -** ^(<dt>SQLITE_STATUS_PAGECACHE_USED</dt> +** [[SQLITE_STATUS_MALLOC_COUNT]] ^(<dt>SQLITE_STATUS_MALLOC_COUNT</dt> +** <dd>This parameter records the number of separate memory allocations +** currently checked out.</dd>)^ +** +** [[SQLITE_STATUS_PAGECACHE_USED]] ^(<dt>SQLITE_STATUS_PAGECACHE_USED</dt> ** <dd>This parameter returns the number of pages used out of the ** [pagecache memory allocator] that was configured using ** [SQLITE_CONFIG_PAGECACHE]. The ** value returned is in pages, not in bytes.</dd>)^ ** +** [[SQLITE_STATUS_PAGECACHE_OVERFLOW]] ** ^(<dt>SQLITE_STATUS_PAGECACHE_OVERFLOW</dt> ** <dd>This parameter returns the number of bytes of page cache -** allocation which could not be statisfied by the [SQLITE_CONFIG_PAGECACHE] +** allocation which could not be satisfied by the [SQLITE_CONFIG_PAGECACHE] ** buffer and where forced to overflow to [sqlite3_malloc()]. The ** returned value includes allocations that overflowed because they ** where too large (they were larger than the "sz" parameter to ** [SQLITE_CONFIG_PAGECACHE]) and allocations that overflowed because ** no space was left in the page cache.</dd>)^ ** -** ^(<dt>SQLITE_STATUS_PAGECACHE_SIZE</dt> +** [[SQLITE_STATUS_PAGECACHE_SIZE]] ^(<dt>SQLITE_STATUS_PAGECACHE_SIZE</dt> ** <dd>This parameter records the largest memory allocation request ** handed to [pagecache memory allocator]. Only the value returned in the ** *pHighwater parameter to [sqlite3_status()] is of interest. 
** The value written into the *pCurrent parameter is undefined.</dd>)^ ** -** ^(<dt>SQLITE_STATUS_SCRATCH_USED</dt> +** [[SQLITE_STATUS_SCRATCH_USED]] ^(<dt>SQLITE_STATUS_SCRATCH_USED</dt> ** <dd>This parameter returns the number of allocations used out of the ** [scratch memory allocator] configured using ** [SQLITE_CONFIG_SCRATCH]. The value returned is in allocations, not ** in bytes. Since a single thread may only have one scratch allocation ** outstanding at time, this parameter also reports the number of threads ** using scratch memory at the same time.</dd>)^ ** -** ^(<dt>SQLITE_STATUS_SCRATCH_OVERFLOW</dt> +** [[SQLITE_STATUS_SCRATCH_OVERFLOW]] ^(<dt>SQLITE_STATUS_SCRATCH_OVERFLOW</dt> ** <dd>This parameter returns the number of bytes of scratch memory -** allocation which could not be statisfied by the [SQLITE_CONFIG_SCRATCH] +** allocation which could not be satisfied by the [SQLITE_CONFIG_SCRATCH] ** buffer and where forced to overflow to [sqlite3_malloc()]. The values ** returned include overflows because the requested allocation was too ** larger (that is, because the requested allocation was larger than the ** "sz" parameter to [SQLITE_CONFIG_SCRATCH]) and because no scratch buffer ** slots were available. ** </dd>)^ ** -** ^(<dt>SQLITE_STATUS_SCRATCH_SIZE</dt> +** [[SQLITE_STATUS_SCRATCH_SIZE]] ^(<dt>SQLITE_STATUS_SCRATCH_SIZE</dt> ** <dd>This parameter records the largest memory allocation request ** handed to [scratch memory allocator]. Only the value returned in the ** *pHighwater parameter to [sqlite3_status()] is of interest. ** The value written into the *pCurrent parameter is undefined.</dd>)^ ** -** ^(<dt>SQLITE_STATUS_PARSER_STACK</dt> -** <dd>This parameter records the deepest parser stack. It is only +** [[SQLITE_STATUS_PARSER_STACK]] ^(<dt>SQLITE_STATUS_PARSER_STACK</dt> +** <dd>The *pHighwater parameter records the deepest parser stack. +** The *pCurrent value is undefined. The *pHighwater value is only ** meaningful if SQLite is compiled with [YYTRACKMAXSTACKDEPTH].</dd>)^ ** </dl> ** ** New status parameters may be added from time to time. */ @@ -5105,34 +6624,40 @@ #define SQLITE_STATUS_SCRATCH_OVERFLOW 4 #define SQLITE_STATUS_MALLOC_SIZE 5 #define SQLITE_STATUS_PARSER_STACK 6 #define SQLITE_STATUS_PAGECACHE_SIZE 7 #define SQLITE_STATUS_SCRATCH_SIZE 8 +#define SQLITE_STATUS_MALLOC_COUNT 9 /* ** CAPI3REF: Database Connection Status +** METHOD: sqlite3 ** ** ^This interface is used to retrieve runtime status information ** about a single [database connection]. ^The first argument is the ** database connection object to be interrogated. ^The second argument ** is an integer constant, taken from the set of -** [SQLITE_DBSTATUS_LOOKASIDE_USED | SQLITE_DBSTATUS_*] macros, that -** determiness the parameter to interrogate. The set of -** [SQLITE_DBSTATUS_LOOKASIDE_USED | SQLITE_DBSTATUS_*] macros is likely +** [SQLITE_DBSTATUS options], that +** determines the parameter to interrogate. The set of +** [SQLITE_DBSTATUS options] is likely ** to grow in future releases of SQLite. ** ** ^The current value of the requested parameter is written into *pCur ** and the highest instantaneous value is written into *pHiwtr. ^If ** the resetFlg is true, then the highest instantaneous value is ** reset back down to the current value. +** +** ^The sqlite3_db_status() routine returns SQLITE_OK on success and a +** non-zero [error code] on failure. ** ** See also: [sqlite3_status()] and [sqlite3_stmt_status()]. 
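
A minimal sketch of the interface just described, assuming "db" is an open [database connection]; the choice of SQLITE_DBSTATUS_CACHE_USED is only an example:

    int nCur = 0, nHi = 0;
    int rc = sqlite3_db_status(db, SQLITE_DBSTATUS_CACHE_USED, &nCur, &nHi, 0);
    if( rc==SQLITE_OK ){
      /* nCur holds the approximate bytes of pager-cache memory used by this
      ** connection; the highwater mark for this parameter is always 0. */
    }
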
*/ -SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int resetFlg); +SQLITE_API int SQLITE_STDCALL sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int resetFlg); /* ** CAPI3REF: Status Parameters for database connections +** KEYWORDS: {SQLITE_DBSTATUS options} ** ** These constants are the available integer "verbs" that can be passed as ** the second argument to the [sqlite3_db_status()] interface. ** ** New verbs may be added in future releases of SQLite. Existing verbs @@ -5140,31 +6665,106 @@ ** [sqlite3_db_status()] to make sure that the call worked. ** The [sqlite3_db_status()] interface will return a non-zero error code ** if a discontinued or unsupported verb is invoked. ** ** <dl> -** ^(<dt>SQLITE_DBSTATUS_LOOKASIDE_USED</dt> +** [[SQLITE_DBSTATUS_LOOKASIDE_USED]] ^(<dt>SQLITE_DBSTATUS_LOOKASIDE_USED</dt> ** <dd>This parameter returns the number of lookaside memory slots currently ** checked out.</dd>)^ ** -** <dt>SQLITE_DBSTATUS_CACHE_USED</dt> -** <dd>^This parameter returns the approximate number of of bytes of heap -** memory used by all pager caches associated with the database connection. +** [[SQLITE_DBSTATUS_LOOKASIDE_HIT]] ^(<dt>SQLITE_DBSTATUS_LOOKASIDE_HIT</dt> +** <dd>This parameter returns the number malloc attempts that were +** satisfied using lookaside memory. Only the high-water value is meaningful; +** the current value is always zero.)^ +** +** [[SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE]] +** ^(<dt>SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE</dt> +** <dd>This parameter returns the number malloc attempts that might have +** been satisfied using lookaside memory but failed due to the amount of +** memory requested being larger than the lookaside slot size. +** Only the high-water value is meaningful; +** the current value is always zero.)^ +** +** [[SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL]] +** ^(<dt>SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL</dt> +** <dd>This parameter returns the number malloc attempts that might have +** been satisfied using lookaside memory but failed due to all lookaside +** memory already being in use. +** Only the high-water value is meaningful; +** the current value is always zero.)^ +** +** [[SQLITE_DBSTATUS_CACHE_USED]] ^(<dt>SQLITE_DBSTATUS_CACHE_USED</dt> +** <dd>This parameter returns the approximate number of bytes of heap +** memory used by all pager caches associated with the database connection.)^ ** ^The highwater mark associated with SQLITE_DBSTATUS_CACHE_USED is always 0. -** checked out.</dd>)^ +** +** [[SQLITE_DBSTATUS_SCHEMA_USED]] ^(<dt>SQLITE_DBSTATUS_SCHEMA_USED</dt> +** <dd>This parameter returns the approximate number of bytes of heap +** memory used to store the schema for all databases associated +** with the connection - main, temp, and any [ATTACH]-ed databases.)^ +** ^The full amount of memory used by the schemas is reported, even if the +** schema memory is shared with other database connections due to +** [shared cache mode] being enabled. +** ^The highwater mark associated with SQLITE_DBSTATUS_SCHEMA_USED is always 0. +** +** [[SQLITE_DBSTATUS_STMT_USED]] ^(<dt>SQLITE_DBSTATUS_STMT_USED</dt> +** <dd>This parameter returns the approximate number of bytes of heap +** and lookaside memory used by all prepared statements associated with +** the database connection.)^ +** ^The highwater mark associated with SQLITE_DBSTATUS_STMT_USED is always 0. 
+** </dd> +** +** [[SQLITE_DBSTATUS_CACHE_HIT]] ^(<dt>SQLITE_DBSTATUS_CACHE_HIT</dt> +** <dd>This parameter returns the number of pager cache hits that have +** occurred.)^ ^The highwater mark associated with SQLITE_DBSTATUS_CACHE_HIT +** is always 0. +** </dd> +** +** [[SQLITE_DBSTATUS_CACHE_MISS]] ^(<dt>SQLITE_DBSTATUS_CACHE_MISS</dt> +** <dd>This parameter returns the number of pager cache misses that have +** occurred.)^ ^The highwater mark associated with SQLITE_DBSTATUS_CACHE_MISS +** is always 0. +** </dd> +** +** [[SQLITE_DBSTATUS_CACHE_WRITE]] ^(<dt>SQLITE_DBSTATUS_CACHE_WRITE</dt> +** <dd>This parameter returns the number of dirty cache entries that have +** been written to disk. Specifically, the number of pages written to the +** wal file in wal mode databases, or the number of pages written to the +** database file in rollback mode databases. Any pages written as part of +** transaction rollback or database recovery operations are not included. +** If an IO or other error occurs while writing a page to disk, the effect +** on subsequent SQLITE_DBSTATUS_CACHE_WRITE requests is undefined.)^ ^The +** highwater mark associated with SQLITE_DBSTATUS_CACHE_WRITE is always 0. +** </dd> +** +** [[SQLITE_DBSTATUS_DEFERRED_FKS]] ^(<dt>SQLITE_DBSTATUS_DEFERRED_FKS</dt> +** <dd>This parameter returns zero for the current value if and only if +** all foreign key constraints (deferred or immediate) have been +** resolved.)^ ^The highwater mark is always 0. +** </dd> ** </dl> */ -#define SQLITE_DBSTATUS_LOOKASIDE_USED 0 -#define SQLITE_DBSTATUS_CACHE_USED 1 -#define SQLITE_DBSTATUS_MAX 1 /* Largest defined DBSTATUS */ +#define SQLITE_DBSTATUS_LOOKASIDE_USED 0 +#define SQLITE_DBSTATUS_CACHE_USED 1 +#define SQLITE_DBSTATUS_SCHEMA_USED 2 +#define SQLITE_DBSTATUS_STMT_USED 3 +#define SQLITE_DBSTATUS_LOOKASIDE_HIT 4 +#define SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE 5 +#define SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL 6 +#define SQLITE_DBSTATUS_CACHE_HIT 7 +#define SQLITE_DBSTATUS_CACHE_MISS 8 +#define SQLITE_DBSTATUS_CACHE_WRITE 9 +#define SQLITE_DBSTATUS_DEFERRED_FKS 10 +#define SQLITE_DBSTATUS_MAX 10 /* Largest defined DBSTATUS */ /* ** CAPI3REF: Prepared Statement Status +** METHOD: sqlite3_stmt ** ** ^(Each prepared statement maintains various -** [SQLITE_STMTSTATUS_SORT | counters] that measure the number +** [SQLITE_STMTSTATUS counters] that measure the number ** of times it has performed specific operations.)^ These counters can ** be used to monitor the performance characteristics of the prepared ** statements. For example, if the number of table steps greatly exceeds ** the number of table searches or result rows, that would tend to indicate ** that the prepared statement is using a full table scan rather than @@ -5171,51 +6771,61 @@ ** an index. ** ** ^(This interface is used to retrieve and reset counter values from ** a [prepared statement]. The first argument is the prepared statement ** object to be interrogated. The second argument -** is an integer code for a specific [SQLITE_STMTSTATUS_SORT | counter] +** is an integer code for a specific [SQLITE_STMTSTATUS counter] ** to be interrogated.)^ ** ^The current value of the requested counter is returned. ** ^If the resetFlg is true, then the counter is reset to zero after this ** interface call returns. ** ** See also: [sqlite3_status()] and [sqlite3_db_status()]. 
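
For example (a sketch; "pStmt" stands for a [prepared statement] that has already been run, and the interpretation of the counters is only a heuristic):

    int nFullScan = sqlite3_stmt_status(pStmt, SQLITE_STMTSTATUS_FULLSCAN_STEP, 0);
    int nSort     = sqlite3_stmt_status(pStmt, SQLITE_STMTSTATUS_SORT, 0);
    if( nFullScan>0 || nSort>0 ){
      /* The statement performed full-table-scan steps or sorts; this may
      ** indicate an opportunity to add an index. */
    }
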
*/ -SQLITE_API int sqlite3_stmt_status(sqlite3_stmt*, int op,int resetFlg); +SQLITE_API int SQLITE_STDCALL sqlite3_stmt_status(sqlite3_stmt*, int op,int resetFlg); /* ** CAPI3REF: Status Parameters for prepared statements +** KEYWORDS: {SQLITE_STMTSTATUS counter} {SQLITE_STMTSTATUS counters} ** ** These preprocessor macros define integer codes that name counter ** values associated with the [sqlite3_stmt_status()] interface. ** The meanings of the various counters are as follows: ** ** <dl> -** <dt>SQLITE_STMTSTATUS_FULLSCAN_STEP</dt> +** [[SQLITE_STMTSTATUS_FULLSCAN_STEP]] <dt>SQLITE_STMTSTATUS_FULLSCAN_STEP</dt> ** <dd>^This is the number of times that SQLite has stepped forward in ** a table as part of a full table scan. Large numbers for this counter ** may indicate opportunities for performance improvement through ** careful use of indices.</dd> ** -** <dt>SQLITE_STMTSTATUS_SORT</dt> +** [[SQLITE_STMTSTATUS_SORT]] <dt>SQLITE_STMTSTATUS_SORT</dt> ** <dd>^This is the number of sort operations that have occurred. ** A non-zero value in this counter may indicate an opportunity to ** improvement performance through careful use of indices.</dd> ** -** <dt>SQLITE_STMTSTATUS_AUTOINDEX</dt> +** [[SQLITE_STMTSTATUS_AUTOINDEX]] <dt>SQLITE_STMTSTATUS_AUTOINDEX</dt> ** <dd>^This is the number of rows inserted into transient indices that ** were created automatically in order to help joins run faster. ** A non-zero value in this counter may indicate an opportunity to ** improvement performance by adding permanent indices that do not ** need to be reinitialized each time the statement is run.</dd> ** +** [[SQLITE_STMTSTATUS_VM_STEP]] <dt>SQLITE_STMTSTATUS_VM_STEP</dt> +** <dd>^This is the number of virtual machine operations executed +** by the prepared statement if that number is less than or equal +** to 2147483647. The number of virtual machine operations can be +** used as a proxy for the total work done by the prepared statement. +** If the number of virtual machine operations exceeds 2147483647 +** then the value returned by this statement status code is undefined. +** </dd> ** </dl> */ #define SQLITE_STMTSTATUS_FULLSCAN_STEP 1 #define SQLITE_STMTSTATUS_SORT 2 #define SQLITE_STMTSTATUS_AUTOINDEX 3 +#define SQLITE_STMTSTATUS_VM_STEP 4 /* ** CAPI3REF: Custom Page Cache Object ** ** The sqlite3_pcache type is opaque. It is implemented by @@ -5222,140 +6832,211 @@ ** the pluggable module. The SQLite core has no knowledge of ** its size or internal structure and never deals with the ** sqlite3_pcache object except by holding and passing pointers ** to the object. ** -** See [sqlite3_pcache_methods] for additional information. +** See [sqlite3_pcache_methods2] for additional information. */ typedef struct sqlite3_pcache sqlite3_pcache; + +/* +** CAPI3REF: Custom Page Cache Object +** +** The sqlite3_pcache_page object represents a single page in the +** page cache. The page cache will allocate instances of this +** object. Various methods of the page cache use pointers to instances +** of this object as parameters or as their return value. +** +** See [sqlite3_pcache_methods2] for additional information. +*/ +typedef struct sqlite3_pcache_page sqlite3_pcache_page; +struct sqlite3_pcache_page { + void *pBuf; /* The content of the page */ + void *pExtra; /* Extra information associated with the page */ +}; /* ** CAPI3REF: Application Defined Page Cache. ** KEYWORDS: {page cache} ** -** ^(The [sqlite3_config]([SQLITE_CONFIG_PCACHE], ...) 
interface can +** ^(The [sqlite3_config]([SQLITE_CONFIG_PCACHE2], ...) interface can ** register an alternative page cache implementation by passing in an -** instance of the sqlite3_pcache_methods structure.)^ The majority of the -** heap memory used by SQLite is used by the page cache to cache data read -** from, or ready to be written to, the database file. By implementing a -** custom page cache using this API, an application can control more -** precisely the amount of memory consumed by SQLite, the way in which +** instance of the sqlite3_pcache_methods2 structure.)^ +** In many applications, most of the heap memory allocated by +** SQLite is used for the page cache. +** By implementing a +** custom page cache using this API, an application can better control +** the amount of memory consumed by SQLite, the way in which ** that memory is allocated and released, and the policies used to ** determine exactly which parts of a database file are cached and for ** how long. ** -** ^(The contents of the sqlite3_pcache_methods structure are copied to an +** The alternative page cache mechanism is an +** extreme measure that is only needed by the most demanding applications. +** The built-in page cache is recommended for most uses. +** +** ^(The contents of the sqlite3_pcache_methods2 structure are copied to an ** internal buffer by SQLite within the call to [sqlite3_config]. Hence ** the application may discard the parameter after the call to ** [sqlite3_config()] returns.)^ ** -** ^The xInit() method is called once for each call to [sqlite3_initialize()] +** [[the xInit() page cache method]] +** ^(The xInit() method is called once for each effective +** call to [sqlite3_initialize()])^ ** (usually only once during the lifetime of the process). ^(The xInit() -** method is passed a copy of the sqlite3_pcache_methods.pArg value.)^ -** ^The xInit() method can set up up global structures and/or any mutexes +** method is passed a copy of the sqlite3_pcache_methods2.pArg value.)^ +** The intent of the xInit() method is to set up global data structures ** required by the custom page cache implementation. +** ^(If the xInit() method is NULL, then the +** built-in default page cache is used instead of the application defined +** page cache.)^ ** -** ^The xShutdown() method is called from within [sqlite3_shutdown()], -** if the application invokes this API. It can be used to clean up +** [[the xShutdown() page cache method]] +** ^The xShutdown() method is called by [sqlite3_shutdown()]. +** It can be used to clean up ** any outstanding resources before process shutdown, if required. +** ^The xShutdown() method may be NULL. ** -** ^SQLite holds a [SQLITE_MUTEX_RECURSIVE] mutex when it invokes -** the xInit method, so the xInit method need not be threadsafe. ^The +** ^SQLite automatically serializes calls to the xInit method, +** so the xInit method need not be threadsafe. ^The ** xShutdown method is only called from [sqlite3_shutdown()] so it does ** not need to be threadsafe either. All other methods must be threadsafe ** in multithreaded applications. ** ** ^SQLite will never invoke xInit() more than once without an intervening ** call to xShutdown(). ** -** ^The xCreate() method is used to construct a new cache instance. SQLite -** will typically create one cache instance for each open database file, +** [[the xCreate() page cache methods]] +** ^SQLite invokes the xCreate() method to construct a new cache instance. 
+** SQLite will typically create one cache instance for each open database file, ** though this is not guaranteed. ^The ** first parameter, szPage, is the size in bytes of the pages that must -** be allocated by the cache. ^szPage will not be a power of two. ^szPage -** will the page size of the database file that is to be cached plus an -** increment (here called "R") of about 100 or 200. ^SQLite will use the -** extra R bytes on each page to store metadata about the underlying -** database page on disk. The value of R depends +** be allocated by the cache. ^szPage will always a power of two. ^The +** second parameter szExtra is a number of bytes of extra storage +** associated with each page cache entry. ^The szExtra parameter will +** a number less than 250. SQLite will use the +** extra szExtra bytes on each page to store metadata about the underlying +** database page on disk. The value passed into szExtra depends ** on the SQLite version, the target platform, and how SQLite was compiled. -** ^R is constant for a particular build of SQLite. ^The second argument to -** xCreate(), bPurgeable, is true if the cache being created will -** be used to cache database pages of a file stored on disk, or -** false if it is used for an in-memory database. ^The cache implementation +** ^The third argument to xCreate(), bPurgeable, is true if the cache being +** created will be used to cache database pages of a file stored on disk, or +** false if it is used for an in-memory database. The cache implementation ** does not have to do anything special based with the value of bPurgeable; ** it is purely advisory. ^On a cache where bPurgeable is false, SQLite will ** never invoke xUnpin() except to deliberately delete a page. -** ^In other words, a cache created with bPurgeable set to false will +** ^In other words, calls to xUnpin() on a cache with bPurgeable set to +** false will always have the "discard" flag set to true. +** ^Hence, a cache created with bPurgeable false will ** never contain any unpinned pages. ** +** [[the xCachesize() page cache method]] ** ^(The xCachesize() method may be called at any time by SQLite to set the ** suggested maximum cache-size (number of pages stored by) the cache ** instance passed as the first argument. This is the value configured using -** the SQLite "[PRAGMA cache_size]" command.)^ ^As with the bPurgeable +** the SQLite "[PRAGMA cache_size]" command.)^ As with the bPurgeable ** parameter, the implementation is not required to do anything with this ** value; it is advisory only. ** -** ^The xPagecount() method should return the number of pages currently -** stored in the cache. -** -** ^The xFetch() method is used to fetch a page and return a pointer to it. -** ^A 'page', in this context, is a buffer of szPage bytes aligned at an -** 8-byte boundary. ^The page to be fetched is determined by the key. ^The -** mimimum key value is 1. After it has been retrieved using xFetch, the page -** is considered to be "pinned". -** -** ^If the requested page is already in the page cache, then the page cache -** implementation must return a pointer to the page buffer with its content -** intact. ^(If the requested page is not already in the cache, then the -** behavior of the cache implementation is determined by the value of the -** createFlag parameter passed to xFetch, according to the following table: +** [[the xPagecount() page cache methods]] +** The xPagecount() method must return the number of pages currently +** stored in the cache, both pinned and unpinned. 
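
The sketch below shows only how an application-defined page cache is registered; "myPCacheMethods" stands for a fully populated sqlite3_pcache_methods2 object supplied by the application, whose method implementations are omitted here:

    extern sqlite3_pcache_methods2 myPCacheMethods;  /* application-supplied */

    /* Registration must happen during application start-up, before SQLite
    ** is initialized, via sqlite3_config(). */
    int rc = sqlite3_config(SQLITE_CONFIG_PCACHE2, &myPCacheMethods);
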
+** +** [[the xFetch() page cache methods]] +** The xFetch() method locates a page in the cache and returns a pointer to +** an sqlite3_pcache_page object associated with that page, or a NULL pointer. +** The pBuf element of the returned sqlite3_pcache_page object will be a +** pointer to a buffer of szPage bytes used to store the content of a +** single database page. The pExtra element of sqlite3_pcache_page will be +** a pointer to the szExtra bytes of extra storage that SQLite has requested +** for each entry in the page cache. +** +** The page to be fetched is determined by the key. ^The minimum key value +** is 1. After it has been retrieved using xFetch, the page is considered +** to be "pinned". +** +** If the requested page is already in the page cache, then the page cache +** implementation must return a pointer to the page buffer with its content +** intact. If the requested page is not already in the cache, then the +** cache implementation should use the value of the createFlag +** parameter to help it determined what action to take: ** ** <table border=1 width=85% align=center> -** <tr><th> createFlag <th> Behaviour when page is not already in cache +** <tr><th> createFlag <th> Behavior when page is not already in cache ** <tr><td> 0 <td> Do not allocate a new page. Return NULL. ** <tr><td> 1 <td> Allocate a new page if it easy and convenient to do so. ** Otherwise return NULL. ** <tr><td> 2 <td> Make every effort to allocate a new page. Only return ** NULL if allocating a new page is effectively impossible. -** </table>)^ +** </table> ** -** SQLite will normally invoke xFetch() with a createFlag of 0 or 1. If -** a call to xFetch() with createFlag==1 returns NULL, then SQLite will +** ^(SQLite will normally invoke xFetch() with a createFlag of 0 or 1. SQLite +** will only use a createFlag of 2 after a prior call with a createFlag of 1 +** failed.)^ In between the to xFetch() calls, SQLite may ** attempt to unpin one or more cache pages by spilling the content of -** pinned pages to disk and synching the operating system disk cache. After -** attempting to unpin pages, the xFetch() method will be invoked again with -** a createFlag of 2. +** pinned pages to disk and synching the operating system disk cache. ** +** [[the xUnpin() page cache method]] ** ^xUnpin() is called by SQLite with a pointer to a currently pinned page -** as its second argument. ^(If the third parameter, discard, is non-zero, -** then the page should be evicted from the cache. In this case SQLite -** assumes that the next time the page is retrieved from the cache using -** the xFetch() method, it will be zeroed.)^ ^If the discard parameter is -** zero, then the page is considered to be unpinned. ^The cache implementation +** as its second argument. If the third parameter, discard, is non-zero, +** then the page must be evicted from the cache. +** ^If the discard parameter is +** zero, then the page may be discarded or retained at the discretion of +** page cache implementation. ^The page cache implementation ** may choose to evict unpinned pages at any time. ** -** ^(The cache is not required to perform any reference counting. A single +** The cache must not perform any reference counting. A single ** call to xUnpin() unpins the page regardless of the number of prior calls -** to xFetch().)^ +** to xFetch(). ** -** ^The xRekey() method is used to change the key value associated with the -** page passed as the second argument from oldKey to newKey. 
^If the cache -** previously contains an entry associated with newKey, it should be +** [[the xRekey() page cache methods]] +** The xRekey() method is used to change the key value associated with the +** page passed as the second argument. If the cache +** previously contains an entry associated with newKey, it must be ** discarded. ^Any prior cache entry associated with newKey is guaranteed not ** to be pinned. ** -** ^When SQLite calls the xTruncate() method, the cache must discard all +** When SQLite calls the xTruncate() method, the cache must discard all ** existing cache entries with page numbers (keys) greater than or equal -** to the value of the iLimit parameter passed to xTruncate(). ^If any +** to the value of the iLimit parameter passed to xTruncate(). If any ** of these pages are pinned, they are implicitly unpinned, meaning that ** they can be safely discarded. ** +** [[the xDestroy() page cache method]] ** ^The xDestroy() method is used to delete a cache allocated by xCreate(). ** All resources associated with the specified cache should be freed. ^After ** calling the xDestroy() method, SQLite considers the [sqlite3_pcache*] -** handle invalid, and will not use it with any other sqlite3_pcache_methods +** handle invalid, and will not use it with any other sqlite3_pcache_methods2 ** functions. +** +** [[the xShrink() page cache method]] +** ^SQLite invokes the xShrink() method when it wants the page cache to +** free up as much of heap memory as possible. The page cache implementation +** is not obligated to free any memory, but well-behaved implementations should +** do their best. +*/ +typedef struct sqlite3_pcache_methods2 sqlite3_pcache_methods2; +struct sqlite3_pcache_methods2 { + int iVersion; + void *pArg; + int (*xInit)(void*); + void (*xShutdown)(void*); + sqlite3_pcache *(*xCreate)(int szPage, int szExtra, int bPurgeable); + void (*xCachesize)(sqlite3_pcache*, int nCachesize); + int (*xPagecount)(sqlite3_pcache*); + sqlite3_pcache_page *(*xFetch)(sqlite3_pcache*, unsigned key, int createFlag); + void (*xUnpin)(sqlite3_pcache*, sqlite3_pcache_page*, int discard); + void (*xRekey)(sqlite3_pcache*, sqlite3_pcache_page*, + unsigned oldKey, unsigned newKey); + void (*xTruncate)(sqlite3_pcache*, unsigned iLimit); + void (*xDestroy)(sqlite3_pcache*); + void (*xShrink)(sqlite3_pcache*); +}; + +/* +** This is the obsolete pcache_methods object that has now been replaced +** by sqlite3_pcache_methods2. This object is not used by SQLite. It is +** retained in the header file for backwards compatibility only. */ typedef struct sqlite3_pcache_methods sqlite3_pcache_methods; struct sqlite3_pcache_methods { void *pArg; int (*xInit)(void*); @@ -5367,10 +7048,11 @@ void (*xUnpin)(sqlite3_pcache*, void*, int discard); void (*xRekey)(sqlite3_pcache*, void*, unsigned oldKey, unsigned newKey); void (*xTruncate)(sqlite3_pcache*, unsigned iLimit); void (*xDestroy)(sqlite3_pcache*); }; + /* ** CAPI3REF: Online Backup Object ** ** The sqlite3_backup object records state information about an ongoing @@ -5389,15 +7071,16 @@ ** It is useful either for creating backups of databases or ** for copying in-memory databases to or from persistent files. ** ** See Also: [Using the SQLite Online Backup API] ** -** ^Exclusive access is required to the destination database for the -** duration of the operation. ^However the source database is only -** read-locked while it is actually being read; it is not locked -** continuously for the entire backup operation. 
^Thus, the backup may be -** performed on a live source database without preventing other users from +** ^SQLite holds a write transaction open on the destination database file +** for the duration of the backup operation. +** ^The source database is read-locked only while it is being read; +** it is not locked continuously for the entire backup operation. +** ^Thus, the backup may be performed on a live source database without +** preventing other database connections from ** reading or writing to the source database while the backup is underway. ** ** ^(To perform a backup operation: ** <ol> ** <li><b>sqlite3_backup_init()</b> is called once to initialize the @@ -5408,11 +7091,11 @@ ** associated with the backup operation. ** </ol>)^ ** There should be exactly one call to sqlite3_backup_finish() for each ** successful call to sqlite3_backup_init(). ** -** <b>sqlite3_backup_init()</b> +** [[sqlite3_backup_init()]] <b>sqlite3_backup_init()</b> ** ** ^The D and N arguments to sqlite3_backup_init(D,N,S,M) are the ** [database connection] associated with the destination database ** and the database name, respectively. ** ^The database name is "main" for the main database, "temp" for the @@ -5420,15 +7103,19 @@ ** an [ATTACH] statement for an attached database. ** ^The S and M arguments passed to ** sqlite3_backup_init(D,N,S,M) identify the [database connection] ** and database name of the source database, respectively. ** ^The source and destination [database connections] (parameters S and D) -** must be different or else sqlite3_backup_init(D,N,S,M) will file with +** must be different or else sqlite3_backup_init(D,N,S,M) will fail with ** an error. +** +** ^A call to sqlite3_backup_init() will fail, returning SQLITE_ERROR, if +** there is already a read or read-write transaction open on the +** destination database. ** ** ^If an error occurs within sqlite3_backup_init(D,N,S,M), then NULL is -** returned and an error code and error message are store3d in the +** returned and an error code and error message are stored in the ** destination [database connection] D. ** ^The error code and message for the failed call to sqlite3_backup_init() ** can be retrieved using the [sqlite3_errcode()], [sqlite3_errmsg()], and/or ** [sqlite3_errmsg16()] functions. ** ^A successful call to sqlite3_backup_init() returns a pointer to an @@ -5435,29 +7122,33 @@ ** [sqlite3_backup] object. ** ^The [sqlite3_backup] object may be used with the sqlite3_backup_step() and ** sqlite3_backup_finish() functions to perform the specified backup ** operation. ** -** <b>sqlite3_backup_step()</b> +** [[sqlite3_backup_step()]] <b>sqlite3_backup_step()</b> ** ** ^Function sqlite3_backup_step(B,N) will copy up to N pages between ** the source and destination databases specified by [sqlite3_backup] object B. ** ^If N is negative, all remaining source pages are copied. ** ^If sqlite3_backup_step(B,N) successfully copies N pages and there -** are still more pages to be copied, then the function resturns [SQLITE_OK]. +** are still more pages to be copied, then the function returns [SQLITE_OK]. ** ^If sqlite3_backup_step(B,N) successfully finishes copying all pages ** from source to destination, then it returns [SQLITE_DONE]. ** ^If an error occurs while running sqlite3_backup_step(B,N), ** then an [error code] is returned. 
^As well as [SQLITE_OK] and ** [SQLITE_DONE], a call to sqlite3_backup_step() may return [SQLITE_READONLY], ** [SQLITE_NOMEM], [SQLITE_BUSY], [SQLITE_LOCKED], or an ** [SQLITE_IOERR_ACCESS | SQLITE_IOERR_XXX] extended error code. ** -** ^The sqlite3_backup_step() might return [SQLITE_READONLY] if the destination -** database was opened read-only or if -** the destination is an in-memory database with a different page size -** from the source database. +** ^(The sqlite3_backup_step() might return [SQLITE_READONLY] if +** <ol> +** <li> the destination database was opened read-only, or +** <li> the destination database is using write-ahead-log journaling +** and the destination and source page sizes differ, or +** <li> the destination database is an in-memory database and the +** destination and source page sizes differ. +** </ol>)^ ** ** ^If sqlite3_backup_step() cannot obtain a required file-system lock, then ** the [sqlite3_busy_handler | busy-handler function] ** is invoked (if one is specified). ^If the ** busy-handler returns non-zero before the lock is available, then @@ -5488,11 +7179,11 @@ ** restarted by the next call to sqlite3_backup_step(). ^If the source ** database is modified by the using the same database connection as is used ** by the backup operation, then the backup database is automatically ** updated at the same time. ** -** <b>sqlite3_backup_finish()</b> +** [[sqlite3_backup_finish()]] <b>sqlite3_backup_finish()</b> ** ** When sqlite3_backup_step() has returned [SQLITE_DONE], or when the ** application wishes to abandon the backup operation, the application ** should destroy the [sqlite3_backup] by passing it to sqlite3_backup_finish(). ** ^The sqlite3_backup_finish() interfaces releases all @@ -5511,23 +7202,24 @@ ** ** ^A return of [SQLITE_BUSY] or [SQLITE_LOCKED] from sqlite3_backup_step() ** is not a permanent error and does not affect the return value of ** sqlite3_backup_finish(). ** -** <b>sqlite3_backup_remaining(), sqlite3_backup_pagecount()</b> -** -** ^Each call to sqlite3_backup_step() sets two values inside -** the [sqlite3_backup] object: the number of pages still to be backed -** up and the total number of pages in the source databae file. -** The sqlite3_backup_remaining() and sqlite3_backup_pagecount() interfaces -** retrieve these two values, respectively. -** -** ^The values returned by these functions are only updated by -** sqlite3_backup_step(). ^If the source database is modified during a backup -** operation, then the values are not updated to account for any extra -** pages that need to be updated or the size of the source database file -** changing. +** [[sqlite3_backup_remaining()]] [[sqlite3_backup_pagecount()]] +** <b>sqlite3_backup_remaining() and sqlite3_backup_pagecount()</b> +** +** ^The sqlite3_backup_remaining() routine returns the number of pages still +** to be backed up at the conclusion of the most recent sqlite3_backup_step(). +** ^The sqlite3_backup_pagecount() routine returns the total number of pages +** in the source database at the conclusion of the most recent +** sqlite3_backup_step(). +** ^(The values returned by these functions are only updated by +** sqlite3_backup_step(). 
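
The function below is a compact sketch of the init/step/finish pattern described above, copying the "main" database of an open connection "pSrc" into the file named by zDest; the 100-page chunk size and 250 millisecond retry delay are arbitrary:

    int backupDb(sqlite3 *pSrc, const char *zDest){
      sqlite3 *pDest = 0;
      int rc = sqlite3_open(zDest, &pDest);
      if( rc==SQLITE_OK ){
        sqlite3_backup *p = sqlite3_backup_init(pDest, "main", pSrc, "main");
        if( p ){
          do{
            rc = sqlite3_backup_step(p, 100);      /* copy up to 100 pages */
            /* progress, if desired: sqlite3_backup_remaining(p) pages left
            ** out of sqlite3_backup_pagecount(p) total */
            if( rc==SQLITE_BUSY || rc==SQLITE_LOCKED ) sqlite3_sleep(250);
          }while( rc==SQLITE_OK || rc==SQLITE_BUSY || rc==SQLITE_LOCKED );
          sqlite3_backup_finish(p);                /* release the backup object */
        }
        rc = sqlite3_errcode(pDest);               /* error, if any, from the backup */
      }
      sqlite3_close(pDest);
      return rc;
    }
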
If the source database is modified in a way that +** changes the size of the source database or the number of pages remaining, +** those changes are not reflected in the output of sqlite3_backup_pagecount() +** and sqlite3_backup_remaining() until after the next +** sqlite3_backup_step().)^ ** ** <b>Concurrent Usage of Database Handles</b> ** ** ^The source [database connection] may be used by the application for other ** purposes while a backup operation is underway or being initialized. @@ -5556,23 +7248,24 @@ ** However, the sqlite3_backup_remaining() and sqlite3_backup_pagecount() ** APIs are not strictly speaking threadsafe. If they are invoked at the ** same time as another thread is invoking sqlite3_backup_step() it is ** possible that they return invalid values. */ -SQLITE_API sqlite3_backup *sqlite3_backup_init( +SQLITE_API sqlite3_backup *SQLITE_STDCALL sqlite3_backup_init( sqlite3 *pDest, /* Destination database handle */ const char *zDestName, /* Destination database name */ sqlite3 *pSource, /* Source database handle */ const char *zSourceName /* Source database name */ ); -SQLITE_API int sqlite3_backup_step(sqlite3_backup *p, int nPage); -SQLITE_API int sqlite3_backup_finish(sqlite3_backup *p); -SQLITE_API int sqlite3_backup_remaining(sqlite3_backup *p); -SQLITE_API int sqlite3_backup_pagecount(sqlite3_backup *p); +SQLITE_API int SQLITE_STDCALL sqlite3_backup_step(sqlite3_backup *p, int nPage); +SQLITE_API int SQLITE_STDCALL sqlite3_backup_finish(sqlite3_backup *p); +SQLITE_API int SQLITE_STDCALL sqlite3_backup_remaining(sqlite3_backup *p); +SQLITE_API int SQLITE_STDCALL sqlite3_backup_pagecount(sqlite3_backup *p); /* ** CAPI3REF: Unlock Notification +** METHOD: sqlite3 ** ** ^When running in shared-cache mode, a database operation may fail with ** an [SQLITE_LOCKED] error if the required locks on the shared-cache or ** individual tables within the shared-cache cannot be obtained. See ** [SQLite Shared-Cache Mode] for a description of shared-cache locking. @@ -5611,11 +7304,11 @@ ** ^(There may be at most one unlock-notify callback registered by a ** blocked connection. If sqlite3_unlock_notify() is called when the ** blocked connection already has a registered unlock-notify callback, ** then the new callback replaces the old.)^ ^If sqlite3_unlock_notify() is ** called with a NULL pointer as its second argument, then any existing -** unlock-notify callback is cancelled. ^The blocked connections +** unlock-notify callback is canceled. ^The blocked connections ** unlock-notify callback may also be canceled by closing the blocked ** connection using [sqlite3_close()]. ** ** The unlock-notify callback is not reentrant. If an application invokes ** any sqlite3_xxx API functions from within an unlock-notify callback, a @@ -5681,31 +7374,72 @@ ** by an sqlite3_step() call. ^(If there is a blocking connection, then the ** extended error code is set to SQLITE_LOCKED_SHAREDCACHE. 
Otherwise, in ** the special "DROP TABLE/INDEX" case, the extended error code is just ** SQLITE_LOCKED.)^ */ -SQLITE_API int sqlite3_unlock_notify( +SQLITE_API int SQLITE_STDCALL sqlite3_unlock_notify( sqlite3 *pBlocked, /* Waiting connection */ void (*xNotify)(void **apArg, int nArg), /* Callback function to invoke */ void *pNotifyArg /* Argument to pass to xNotify */ ); /* ** CAPI3REF: String Comparison ** -** ^The [sqlite3_strnicmp()] API allows applications and extensions to -** compare the contents of two buffers containing UTF-8 strings in a -** case-indendent fashion, using the same definition of case independence -** that SQLite uses internally when comparing identifiers. +** ^The [sqlite3_stricmp()] and [sqlite3_strnicmp()] APIs allow applications +** and extensions to compare the contents of two buffers containing UTF-8 +** strings in a case-independent fashion, using the same definition of "case +** independence" that SQLite uses internally when comparing identifiers. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_stricmp(const char *, const char *); +SQLITE_API int SQLITE_STDCALL sqlite3_strnicmp(const char *, const char *, int); + +/* +** CAPI3REF: String Globbing +* +** ^The [sqlite3_strglob(P,X)] interface returns zero if and only if +** string X matches the [GLOB] pattern P. +** ^The definition of [GLOB] pattern matching used in +** [sqlite3_strglob(P,X)] is the same as for the "X GLOB P" operator in the +** SQL dialect understood by SQLite. ^The [sqlite3_strglob(P,X)] function +** is case sensitive. +** +** Note that this routine returns zero on a match and non-zero if the strings +** do not match, the same as [sqlite3_stricmp()] and [sqlite3_strnicmp()]. +** +** See also: [sqlite3_strlike()]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_strglob(const char *zGlob, const char *zStr); + +/* +** CAPI3REF: String LIKE Matching +* +** ^The [sqlite3_strlike(P,X,E)] interface returns zero if and only if +** string X matches the [LIKE] pattern P with escape character E. +** ^The definition of [LIKE] pattern matching used in +** [sqlite3_strlike(P,X,E)] is the same as for the "X LIKE P ESCAPE E" +** operator in the SQL dialect understood by SQLite. ^For "X LIKE P" without +** the ESCAPE clause, set the E parameter of [sqlite3_strlike(P,X,E)] to 0. +** ^As with the LIKE operator, the [sqlite3_strlike(P,X,E)] function is case +** insensitive - equivalent upper and lower case ASCII characters match +** one another. +** +** ^The [sqlite3_strlike(P,X,E)] function matches Unicode characters, though +** only ASCII characters are case folded. +** +** Note that this routine returns zero on a match and non-zero if the strings +** do not match, the same as [sqlite3_stricmp()] and [sqlite3_strnicmp()]. +** +** See also: [sqlite3_strglob()]. */ -SQLITE_API int sqlite3_strnicmp(const char *, const char *, int); +SQLITE_API int SQLITE_STDCALL sqlite3_strlike(const char *zGlob, const char *zStr, unsigned int cEsc); /* ** CAPI3REF: Error Logging Interface ** -** ^The [sqlite3_log()] interface writes a message into the error log +** ^The [sqlite3_log()] interface writes a message into the [error log] ** established by the [SQLITE_CONFIG_LOG] option to [sqlite3_config()]. ** ^If logging is enabled, the zFormat string and subsequent arguments are ** used with [sqlite3_snprintf()] to generate the final output string. ** ** The sqlite3_log() interface is intended for use by extensions such as @@ -5719,11 +7453,532 @@ ** will not use dynamically allocated memory. 
The log message is stored in ** a fixed-length buffer on the stack. If the log message is longer than ** a few hundred characters, it will be truncated to the length of the ** buffer. */ -SQLITE_API void sqlite3_log(int iErrCode, const char *zFormat, ...); +SQLITE_API void SQLITE_CDECL sqlite3_log(int iErrCode, const char *zFormat, ...); + +/* +** CAPI3REF: Write-Ahead Log Commit Hook +** METHOD: sqlite3 +** +** ^The [sqlite3_wal_hook()] function is used to register a callback that +** is invoked each time data is committed to a database in wal mode. +** +** ^(The callback is invoked by SQLite after the commit has taken place and +** the associated write-lock on the database released)^, so the implementation +** may read, write or [checkpoint] the database as required. +** +** ^The first parameter passed to the callback function when it is invoked +** is a copy of the third parameter passed to sqlite3_wal_hook() when +** registering the callback. ^The second is a copy of the database handle. +** ^The third parameter is the name of the database that was written to - +** either "main" or the name of an [ATTACH]-ed database. ^The fourth parameter +** is the number of pages currently in the write-ahead log file, +** including those that were just committed. +** +** The callback function should normally return [SQLITE_OK]. ^If an error +** code is returned, that error will propagate back up through the +** SQLite code base to cause the statement that provoked the callback +** to report an error, though the commit will have still occurred. If the +** callback returns [SQLITE_ROW] or [SQLITE_DONE], or if it returns a value +** that does not correspond to any valid SQLite error code, the results +** are undefined. +** +** A single database handle may have at most a single write-ahead log callback +** registered at one time. ^Calling [sqlite3_wal_hook()] replaces any +** previously registered write-ahead log callback. ^Note that the +** [sqlite3_wal_autocheckpoint()] interface and the +** [wal_autocheckpoint pragma] both invoke [sqlite3_wal_hook()] and will +** those overwrite any prior [sqlite3_wal_hook()] settings. +*/ +SQLITE_API void *SQLITE_STDCALL sqlite3_wal_hook( + sqlite3*, + int(*)(void *,sqlite3*,const char*,int), + void* +); + +/* +** CAPI3REF: Configure an auto-checkpoint +** METHOD: sqlite3 +** +** ^The [sqlite3_wal_autocheckpoint(D,N)] is a wrapper around +** [sqlite3_wal_hook()] that causes any database on [database connection] D +** to automatically [checkpoint] +** after committing a transaction if there are N or +** more frames in the [write-ahead log] file. ^Passing zero or +** a negative value as the nFrame parameter disables automatic +** checkpoints entirely. +** +** ^The callback registered by this function replaces any existing callback +** registered using [sqlite3_wal_hook()]. ^Likewise, registering a callback +** using [sqlite3_wal_hook()] disables the automatic checkpoint mechanism +** configured by this function. +** +** ^The [wal_autocheckpoint pragma] can be used to invoke this interface +** from SQL. +** +** ^Checkpoints initiated by this mechanism are +** [sqlite3_wal_checkpoint_v2|PASSIVE]. +** +** ^Every new [database connection] defaults to having the auto-checkpoint +** enabled with a threshold of 1000 or [SQLITE_DEFAULT_WAL_AUTOCHECKPOINT] +** pages. The use of this interface +** is only necessary if the default setting is found to be suboptimal +** for a particular application. 
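
As a sketch of the callback described above (the function name, the 1000-frame threshold, and the decision to checkpoint from inside the hook are illustrative only):

    /* A write-ahead log hook that requests a PASSIVE checkpoint once the log
    ** holds roughly 1000 frames.  Registering this hook replaces the automatic
    ** checkpoint callback configured by sqlite3_wal_autocheckpoint(). */
    static int walCallback(void *pArg, sqlite3 *db, const char *zDb, int nFrame){
      (void)pArg;
      if( nFrame>=1000 ){
        sqlite3_wal_checkpoint(db, zDb);           /* errors are ignored here */
      }
      return SQLITE_OK;
    }

    /* ... after opening the connection "db": */
    sqlite3_wal_hook(db, walCallback, 0);
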
+*/ +SQLITE_API int SQLITE_STDCALL sqlite3_wal_autocheckpoint(sqlite3 *db, int N); + +/* +** CAPI3REF: Checkpoint a database +** METHOD: sqlite3 +** +** ^(The sqlite3_wal_checkpoint(D,X) is equivalent to +** [sqlite3_wal_checkpoint_v2](D,X,[SQLITE_CHECKPOINT_PASSIVE],0,0).)^ +** +** In brief, sqlite3_wal_checkpoint(D,X) causes the content in the +** [write-ahead log] for database X on [database connection] D to be +** transferred into the database file and for the write-ahead log to +** be reset. See the [checkpointing] documentation for addition +** information. +** +** This interface used to be the only way to cause a checkpoint to +** occur. But then the newer and more powerful [sqlite3_wal_checkpoint_v2()] +** interface was added. This interface is retained for backwards +** compatibility and as a convenience for applications that need to manually +** start a callback but which do not need the full power (and corresponding +** complication) of [sqlite3_wal_checkpoint_v2()]. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_wal_checkpoint(sqlite3 *db, const char *zDb); + +/* +** CAPI3REF: Checkpoint a database +** METHOD: sqlite3 +** +** ^(The sqlite3_wal_checkpoint_v2(D,X,M,L,C) interface runs a checkpoint +** operation on database X of [database connection] D in mode M. Status +** information is written back into integers pointed to by L and C.)^ +** ^(The M parameter must be a valid [checkpoint mode]:)^ +** +** <dl> +** <dt>SQLITE_CHECKPOINT_PASSIVE<dd> +** ^Checkpoint as many frames as possible without waiting for any database +** readers or writers to finish, then sync the database file if all frames +** in the log were checkpointed. ^The [busy-handler callback] +** is never invoked in the SQLITE_CHECKPOINT_PASSIVE mode. +** ^On the other hand, passive mode might leave the checkpoint unfinished +** if there are concurrent readers or writers. +** +** <dt>SQLITE_CHECKPOINT_FULL<dd> +** ^This mode blocks (it invokes the +** [sqlite3_busy_handler|busy-handler callback]) until there is no +** database writer and all readers are reading from the most recent database +** snapshot. ^It then checkpoints all frames in the log file and syncs the +** database file. ^This mode blocks new database writers while it is pending, +** but new database readers are allowed to continue unimpeded. +** +** <dt>SQLITE_CHECKPOINT_RESTART<dd> +** ^This mode works the same way as SQLITE_CHECKPOINT_FULL with the addition +** that after checkpointing the log file it blocks (calls the +** [busy-handler callback]) +** until all readers are reading from the database file only. ^This ensures +** that the next writer will restart the log file from the beginning. +** ^Like SQLITE_CHECKPOINT_FULL, this mode blocks new +** database writer attempts while it is pending, but does not impede readers. +** +** <dt>SQLITE_CHECKPOINT_TRUNCATE<dd> +** ^This mode works the same way as SQLITE_CHECKPOINT_RESTART with the +** addition that it also truncates the log file to zero bytes just prior +** to a successful return. +** </dl> +** +** ^If pnLog is not NULL, then *pnLog is set to the total number of frames in +** the log file or to -1 if the checkpoint could not run because +** of an error or because the database is not in [WAL mode]. ^If pnCkpt is not +** NULL,then *pnCkpt is set to the total number of checkpointed frames in the +** log file (including any that were already checkpointed before the function +** was called) or to -1 if the checkpoint could not run due to an error or +** because the database is not in WAL mode. 
^Note that upon successful +** completion of an SQLITE_CHECKPOINT_TRUNCATE, the log file will have been +** truncated to zero bytes and so both *pnLog and *pnCkpt will be set to zero. +** +** ^All calls obtain an exclusive "checkpoint" lock on the database file. ^If +** any other process is running a checkpoint operation at the same time, the +** lock cannot be obtained and SQLITE_BUSY is returned. ^Even if there is a +** busy-handler configured, it will not be invoked in this case. +** +** ^The SQLITE_CHECKPOINT_FULL, RESTART and TRUNCATE modes also obtain the +** exclusive "writer" lock on the database file. ^If the writer lock cannot be +** obtained immediately, and a busy-handler is configured, it is invoked and +** the writer lock retried until either the busy-handler returns 0 or the lock +** is successfully obtained. ^The busy-handler is also invoked while waiting for +** database readers as described above. ^If the busy-handler returns 0 before +** the writer lock is obtained or while waiting for database readers, the +** checkpoint operation proceeds from that point in the same way as +** SQLITE_CHECKPOINT_PASSIVE - checkpointing as many frames as possible +** without blocking any further. ^SQLITE_BUSY is returned in this case. +** +** ^If parameter zDb is NULL or points to a zero length string, then the +** specified operation is attempted on all WAL databases [attached] to +** [database connection] db. In this case the +** values written to output parameters *pnLog and *pnCkpt are undefined. ^If +** an SQLITE_BUSY error is encountered when processing one or more of the +** attached WAL databases, the operation is still attempted on any remaining +** attached databases and SQLITE_BUSY is returned at the end. ^If any other +** error occurs while processing an attached database, processing is abandoned +** and the error code is returned to the caller immediately. ^If no error +** (SQLITE_BUSY or otherwise) is encountered while processing the attached +** databases, SQLITE_OK is returned. +** +** ^If database zDb is the name of an attached database that is not in WAL +** mode, SQLITE_OK is returned and both *pnLog and *pnCkpt set to -1. ^If +** zDb is not NULL (or a zero length string) and is not the name of any +** attached database, SQLITE_ERROR is returned to the caller. +** +** ^Unless it returns SQLITE_MISUSE, +** the sqlite3_wal_checkpoint_v2() interface +** sets the error information that is queried by +** [sqlite3_errcode()] and [sqlite3_errmsg()]. +** +** ^The [PRAGMA wal_checkpoint] command can be used to invoke this interface +** from SQL. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_wal_checkpoint_v2( + sqlite3 *db, /* Database handle */ + const char *zDb, /* Name of attached database (or NULL) */ + int eMode, /* SQLITE_CHECKPOINT_* value */ + int *pnLog, /* OUT: Size of WAL log in frames */ + int *pnCkpt /* OUT: Total number of frames checkpointed */ +); + +/* +** CAPI3REF: Checkpoint Mode Values +** KEYWORDS: {checkpoint mode} +** +** These constants define all valid values for the "checkpoint mode" passed +** as the third parameter to the [sqlite3_wal_checkpoint_v2()] interface. +** See the [sqlite3_wal_checkpoint_v2()] documentation for details on the +** meaning of each of these checkpoint modes. 
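
A short sketch of the interface described above, assuming "db" is an open connection whose "main" database is in WAL mode; the choice of SQLITE_CHECKPOINT_TRUNCATE is only an example:

    int nLog = 0, nCkpt = 0;
    int rc = sqlite3_wal_checkpoint_v2(db, "main", SQLITE_CHECKPOINT_TRUNCATE,
                                       &nLog, &nCkpt);
    if( rc==SQLITE_OK ){
      /* On success in TRUNCATE mode the log has been reset, so both nLog and
      ** nCkpt are zero. */
    }else if( rc==SQLITE_BUSY ){
      /* Another connection held the checkpoint or writer lock, or a reader
      ** prevented the checkpoint from completing. */
    }
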
+*/ +#define SQLITE_CHECKPOINT_PASSIVE 0 /* Do as much as possible w/o blocking */ +#define SQLITE_CHECKPOINT_FULL 1 /* Wait for writers, then checkpoint */ +#define SQLITE_CHECKPOINT_RESTART 2 /* Like FULL but wait for for readers */ +#define SQLITE_CHECKPOINT_TRUNCATE 3 /* Like RESTART but also truncate WAL */ + +/* +** CAPI3REF: Virtual Table Interface Configuration +** +** This function may be called by either the [xConnect] or [xCreate] method +** of a [virtual table] implementation to configure +** various facets of the virtual table interface. +** +** If this interface is invoked outside the context of an xConnect or +** xCreate virtual table method then the behavior is undefined. +** +** At present, there is only one option that may be configured using +** this function. (See [SQLITE_VTAB_CONSTRAINT_SUPPORT].) Further options +** may be added in the future. +*/ +SQLITE_API int SQLITE_CDECL sqlite3_vtab_config(sqlite3*, int op, ...); + +/* +** CAPI3REF: Virtual Table Configuration Options +** +** These macros define the various options to the +** [sqlite3_vtab_config()] interface that [virtual table] implementations +** can use to customize and optimize their behavior. +** +** <dl> +** <dt>SQLITE_VTAB_CONSTRAINT_SUPPORT +** <dd>Calls of the form +** [sqlite3_vtab_config](db,SQLITE_VTAB_CONSTRAINT_SUPPORT,X) are supported, +** where X is an integer. If X is zero, then the [virtual table] whose +** [xCreate] or [xConnect] method invoked [sqlite3_vtab_config()] does not +** support constraints. In this configuration (which is the default) if +** a call to the [xUpdate] method returns [SQLITE_CONSTRAINT], then the entire +** statement is rolled back as if [ON CONFLICT | OR ABORT] had been +** specified as part of the users SQL statement, regardless of the actual +** ON CONFLICT mode specified. +** +** If X is non-zero, then the virtual table implementation guarantees +** that if [xUpdate] returns [SQLITE_CONSTRAINT], it will do so before +** any modifications to internal or persistent data structures have been made. +** If the [ON CONFLICT] mode is ABORT, FAIL, IGNORE or ROLLBACK, SQLite +** is able to roll back a statement or database transaction, and abandon +** or continue processing the current SQL statement as appropriate. +** If the ON CONFLICT mode is REPLACE and the [xUpdate] method returns +** [SQLITE_CONSTRAINT], SQLite handles this as if the ON CONFLICT mode +** had been ABORT. +** +** Virtual table implementations that are required to handle OR REPLACE +** must do so within the [xUpdate] method. If a call to the +** [sqlite3_vtab_on_conflict()] function indicates that the current ON +** CONFLICT policy is REPLACE, the virtual table implementation should +** silently replace the appropriate rows within the xUpdate callback and +** return SQLITE_OK. Or, if this is not possible, it may return +** SQLITE_CONSTRAINT, in which case SQLite falls back to OR ABORT +** constraint handling. +** </dl> +*/ +#define SQLITE_VTAB_CONSTRAINT_SUPPORT 1 + +/* +** CAPI3REF: Determine The Virtual Table Conflict Policy +** +** This function may only be called from within a call to the [xUpdate] method +** of a [virtual table] implementation for an INSERT or UPDATE operation. ^The +** value returned is one of [SQLITE_ROLLBACK], [SQLITE_IGNORE], [SQLITE_FAIL], +** [SQLITE_ABORT], or [SQLITE_REPLACE], according to the [ON CONFLICT] mode +** of the SQL statement that triggered the call to the [xUpdate] method of the +** [virtual table]. 
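+**
+** A brief sketch of how an [xUpdate] implementation might consult this
+** interface (illustrative only; "db" is assumed to be the [database
+** connection] that was supplied to the xCreate or xConnect method):
+**
+**    if( sqlite3_vtab_on_conflict(db)==SQLITE_REPLACE ){
+**      // silently replace the conflicting row, then return SQLITE_OK
+**    }else{
+**      // return SQLITE_CONSTRAINT and let the active ON CONFLICT
+**      // mode determine how the statement is resolved
+**    }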
+*/ +SQLITE_API int SQLITE_STDCALL sqlite3_vtab_on_conflict(sqlite3 *); + +/* +** CAPI3REF: Conflict resolution modes +** KEYWORDS: {conflict resolution mode} +** +** These constants are returned by [sqlite3_vtab_on_conflict()] to +** inform a [virtual table] implementation what the [ON CONFLICT] mode +** is for the SQL statement being evaluated. +** +** Note that the [SQLITE_IGNORE] constant is also used as a potential +** return value from the [sqlite3_set_authorizer()] callback and that +** [SQLITE_ABORT] is also a [result code]. +*/ +#define SQLITE_ROLLBACK 1 +/* #define SQLITE_IGNORE 2 // Also used by sqlite3_authorizer() callback */ +#define SQLITE_FAIL 3 +/* #define SQLITE_ABORT 4 // Also an error code */ +#define SQLITE_REPLACE 5 + +/* +** CAPI3REF: Prepared Statement Scan Status Opcodes +** KEYWORDS: {scanstatus options} +** +** The following constants can be used for the T parameter to the +** [sqlite3_stmt_scanstatus(S,X,T,V)] interface. Each constant designates a +** different metric for sqlite3_stmt_scanstatus() to return. +** +** When the value returned to V is a string, space to hold that string is +** managed by the prepared statement S and will be automatically freed when +** S is finalized. +** +** <dl> +** [[SQLITE_SCANSTAT_NLOOP]] <dt>SQLITE_SCANSTAT_NLOOP</dt> +** <dd>^The [sqlite3_int64] variable pointed to by the T parameter will be +** set to the total number of times that the X-th loop has run.</dd> +** +** [[SQLITE_SCANSTAT_NVISIT]] <dt>SQLITE_SCANSTAT_NVISIT</dt> +** <dd>^The [sqlite3_int64] variable pointed to by the T parameter will be set +** to the total number of rows examined by all iterations of the X-th loop.</dd> +** +** [[SQLITE_SCANSTAT_EST]] <dt>SQLITE_SCANSTAT_EST</dt> +** <dd>^The "double" variable pointed to by the T parameter will be set to the +** query planner's estimate for the average number of rows output from each +** iteration of the X-th loop. If the query planner's estimates was accurate, +** then this value will approximate the quotient NVISIT/NLOOP and the +** product of this value for all prior loops with the same SELECTID will +** be the NLOOP value for the current loop. +** +** [[SQLITE_SCANSTAT_NAME]] <dt>SQLITE_SCANSTAT_NAME</dt> +** <dd>^The "const char *" variable pointed to by the T parameter will be set +** to a zero-terminated UTF-8 string containing the name of the index or table +** used for the X-th loop. +** +** [[SQLITE_SCANSTAT_EXPLAIN]] <dt>SQLITE_SCANSTAT_EXPLAIN</dt> +** <dd>^The "const char *" variable pointed to by the T parameter will be set +** to a zero-terminated UTF-8 string containing the [EXPLAIN QUERY PLAN] +** description for the X-th loop. +** +** [[SQLITE_SCANSTAT_SELECTID]] <dt>SQLITE_SCANSTAT_SELECT</dt> +** <dd>^The "int" variable pointed to by the T parameter will be set to the +** "select-id" for the X-th loop. The select-id identifies which query or +** subquery the loop is part of. The main query has a select-id of zero. +** The select-id is the same value as is output in the first column +** of an [EXPLAIN QUERY PLAN] query. +** </dl> +*/ +#define SQLITE_SCANSTAT_NLOOP 0 +#define SQLITE_SCANSTAT_NVISIT 1 +#define SQLITE_SCANSTAT_EST 2 +#define SQLITE_SCANSTAT_NAME 3 +#define SQLITE_SCANSTAT_EXPLAIN 4 +#define SQLITE_SCANSTAT_SELECTID 5 + +/* +** CAPI3REF: Prepared Statement Scan Status +** METHOD: sqlite3_stmt +** +** This interface returns information about the predicted and measured +** performance for pStmt. 
Advanced applications can use this +** interface to compare the predicted and the measured performance and +** issue warnings and/or rerun [ANALYZE] if discrepancies are found. +** +** Since this interface is expected to be rarely used, it is only +** available if SQLite is compiled using the [SQLITE_ENABLE_STMT_SCANSTATUS] +** compile-time option. +** +** The "iScanStatusOp" parameter determines which status information to return. +** The "iScanStatusOp" must be one of the [scanstatus options] or the behavior +** of this interface is undefined. +** ^The requested measurement is written into a variable pointed to by +** the "pOut" parameter. +** Parameter "idx" identifies the specific loop to retrieve statistics for. +** Loops are numbered starting from zero. ^If idx is out of range - less than +** zero or greater than or equal to the total number of loops used to implement +** the statement - a non-zero value is returned and the variable that pOut +** points to is unchanged. +** +** ^Statistics might not be available for all loops in all statements. ^In cases +** where there exist loops with no available statistics, this function behaves +** as if the loop did not exist - it returns non-zero and leave the variable +** that pOut points to unchanged. +** +** See also: [sqlite3_stmt_scanstatus_reset()] +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_stmt_scanstatus( + sqlite3_stmt *pStmt, /* Prepared statement for which info desired */ + int idx, /* Index of loop to report on */ + int iScanStatusOp, /* Information desired. SQLITE_SCANSTAT_* */ + void *pOut /* Result written here */ +); + +/* +** CAPI3REF: Zero Scan-Status Counters +** METHOD: sqlite3_stmt +** +** ^Zero all [sqlite3_stmt_scanstatus()] related event counters. +** +** This API is only available if the library is built with pre-processor +** symbol [SQLITE_ENABLE_STMT_SCANSTATUS] defined. +*/ +SQLITE_API void SQLITE_STDCALL sqlite3_stmt_scanstatus_reset(sqlite3_stmt*); + +/* +** CAPI3REF: Flush caches to disk mid-transaction +** +** ^If a write-transaction is open on [database connection] D when the +** [sqlite3_db_cacheflush(D)] interface invoked, any dirty +** pages in the pager-cache that are not currently in use are written out +** to disk. A dirty page may be in use if a database cursor created by an +** active SQL statement is reading from it, or if it is page 1 of a database +** file (page 1 is always "in use"). ^The [sqlite3_db_cacheflush(D)] +** interface flushes caches for all schemas - "main", "temp", and +** any [attached] databases. +** +** ^If this function needs to obtain extra database locks before dirty pages +** can be flushed to disk, it does so. ^If those locks cannot be obtained +** immediately and there is a busy-handler callback configured, it is invoked +** in the usual manner. ^If the required lock still cannot be obtained, then +** the database is skipped and an attempt made to flush any dirty pages +** belonging to the next (if any) database. ^If any databases are skipped +** because locks cannot be obtained, but no other error occurs, this +** function returns SQLITE_BUSY. +** +** ^If any other error occurs while flushing dirty pages to disk (for +** example an IO error or out-of-memory condition), then processing is +** abandoned and an SQLite [error code] is returned to the caller immediately. +** +** ^Otherwise, if no error occurs, [sqlite3_db_cacheflush()] returns SQLITE_OK. 
+** +** ^This function does not set the database handle error code or message +** returned by the [sqlite3_errcode()] and [sqlite3_errmsg()] functions. +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_db_cacheflush(sqlite3*); + +/* +** CAPI3REF: Database Snapshot +** KEYWORDS: {snapshot} +** EXPERIMENTAL +** +** An instance of the snapshot object records the state of a [WAL mode] +** database for some specific point in history. +** +** In [WAL mode], multiple [database connections] that are open on the +** same database file can each be reading a different historical version +** of the database file. When a [database connection] begins a read +** transaction, that connection sees an unchanging copy of the database +** as it existed for the point in time when the transaction first started. +** Subsequent changes to the database from other connections are not seen +** by the reader until a new read transaction is started. +** +** The sqlite3_snapshot object records state information about an historical +** version of the database file so that it is possible to later open a new read +** transaction that sees that historical version of the database rather than +** the most recent version. +** +** The constructor for this object is [sqlite3_snapshot_get()]. The +** [sqlite3_snapshot_open()] method causes a fresh read transaction to refer +** to an historical snapshot (if possible). The destructor for +** sqlite3_snapshot objects is [sqlite3_snapshot_free()]. +*/ +typedef struct sqlite3_snapshot sqlite3_snapshot; + +/* +** CAPI3REF: Record A Database Snapshot +** EXPERIMENTAL +** +** ^The [sqlite3_snapshot_get(D,S,P)] interface attempts to make a +** new [sqlite3_snapshot] object that records the current state of +** schema S in database connection D. ^On success, the +** [sqlite3_snapshot_get(D,S,P)] interface writes a pointer to the newly +** created [sqlite3_snapshot] object into *P and returns SQLITE_OK. +** ^If schema S of [database connection] D is not a [WAL mode] database +** that is in a read transaction, then [sqlite3_snapshot_get(D,S,P)] +** leaves the *P value unchanged and returns an appropriate [error code]. +** +** The [sqlite3_snapshot] object returned from a successful call to +** [sqlite3_snapshot_get()] must be freed using [sqlite3_snapshot_free()] +** to avoid a memory leak. +** +** The [sqlite3_snapshot_get()] interface is only available when the +** SQLITE_ENABLE_SNAPSHOT compile-time option is used. +*/ +SQLITE_API SQLITE_EXPERIMENTAL int SQLITE_STDCALL sqlite3_snapshot_get( + sqlite3 *db, + const char *zSchema, + sqlite3_snapshot **ppSnapshot +); + +/* +** CAPI3REF: Start a read transaction on an historical snapshot +** EXPERIMENTAL +** +** ^The [sqlite3_snapshot_open(D,S,P)] interface attempts to move the +** read transaction that is currently open on schema S of +** [database connection] D so that it refers to historical [snapshot] P. +** ^The [sqlite3_snapshot_open()] interface returns SQLITE_OK on success +** or an appropriate [error code] if it fails. +** +** ^In order to succeed, a call to [sqlite3_snapshot_open(D,S,P)] must be +** the first operation, apart from other sqlite3_snapshot_open() calls, +** following the [BEGIN] that starts a new read transaction. +** ^A [snapshot] will fail to open if it has been overwritten by a +** [checkpoint]. +** +** The [sqlite3_snapshot_open()] interface is only available when the +** SQLITE_ENABLE_SNAPSHOT compile-time option is used. 
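+**
+** A condensed sketch of the intended call sequence (illustrative only;
+** assumes "db" is a connection to a [WAL mode] database, that the
+** SQLITE_ENABLE_SNAPSHOT option is in use, and that error checking is
+** elided):
+**
+**    sqlite3_snapshot *pSnap = 0;
+**    sqlite3_exec(db, "BEGIN", 0, 0, 0);
+**    sqlite3_exec(db, "SELECT 1 FROM sqlite_master", 0, 0, 0); // start the read transaction
+**    sqlite3_snapshot_get(db, "main", &pSnap);      // record the current state
+**    sqlite3_exec(db, "COMMIT", 0, 0, 0);
+**    // ... later ...
+**    sqlite3_exec(db, "BEGIN", 0, 0, 0);
+**    sqlite3_snapshot_open(db, "main", pSnap);      // read the historical version
+**    // ... queries in this transaction see the snapshot ...
+**    sqlite3_exec(db, "COMMIT", 0, 0, 0);
+**    sqlite3_snapshot_free(pSnap);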
+*/ +SQLITE_API SQLITE_EXPERIMENTAL int SQLITE_STDCALL sqlite3_snapshot_open( + sqlite3 *db, + const char *zSchema, + sqlite3_snapshot *pSnapshot +); + +/* +** CAPI3REF: Destroy a snapshot +** EXPERIMENTAL +** +** ^The [sqlite3_snapshot_free(P)] interface destroys [sqlite3_snapshot] P. +** The application must eventually free every [sqlite3_snapshot] object +** using this routine to avoid a memory leak. +** +** The [sqlite3_snapshot_free()] interface is only available when the +** SQLITE_ENABLE_SNAPSHOT compile-time option is used. +*/ +SQLITE_API SQLITE_EXPERIMENTAL void SQLITE_STDCALL sqlite3_snapshot_free(sqlite3_snapshot*); /* ** Undo the hack that converts floating point types to integer for ** builds on processors without floating point support. */ @@ -5732,7 +7987,702 @@ #endif #ifdef __cplusplus } /* End of the 'extern "C"' block */ #endif +#endif /* _SQLITE3_H_ */ + +/* +** 2010 August 30 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +************************************************************************* +*/ + +#ifndef _SQLITE3RTREE_H_ +#define _SQLITE3RTREE_H_ + + +#ifdef __cplusplus +extern "C" { +#endif + +typedef struct sqlite3_rtree_geometry sqlite3_rtree_geometry; +typedef struct sqlite3_rtree_query_info sqlite3_rtree_query_info; + +/* The double-precision datatype used by RTree depends on the +** SQLITE_RTREE_INT_ONLY compile-time option. +*/ +#ifdef SQLITE_RTREE_INT_ONLY + typedef sqlite3_int64 sqlite3_rtree_dbl; +#else + typedef double sqlite3_rtree_dbl; +#endif + +/* +** Register a geometry callback named zGeom that can be used as part of an +** R-Tree geometry query as follows: +** +** SELECT ... FROM <rtree> WHERE <rtree col> MATCH $zGeom(... params ...) +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_rtree_geometry_callback( + sqlite3 *db, + const char *zGeom, + int (*xGeom)(sqlite3_rtree_geometry*, int, sqlite3_rtree_dbl*,int*), + void *pContext +); + + +/* +** A pointer to a structure of the following type is passed as the first +** argument to callbacks registered using rtree_geometry_callback(). +*/ +struct sqlite3_rtree_geometry { + void *pContext; /* Copy of pContext passed to s_r_g_c() */ + int nParam; /* Size of array aParam[] */ + sqlite3_rtree_dbl *aParam; /* Parameters passed to SQL geom function */ + void *pUser; /* Callback implementation user data */ + void (*xDelUser)(void *); /* Called by SQLite to clean up pUser */ +}; + +/* +** Register a 2nd-generation geometry callback named zScore that can be +** used as part of an R-Tree geometry query as follows: +** +** SELECT ... FROM <rtree> WHERE <rtree col> MATCH $zQueryFunc(... params ...) +*/ +SQLITE_API int SQLITE_STDCALL sqlite3_rtree_query_callback( + sqlite3 *db, + const char *zQueryFunc, + int (*xQueryFunc)(sqlite3_rtree_query_info*), + void *pContext, + void (*xDestructor)(void*) +); + + +/* +** A pointer to a structure of the following type is passed as the +** argument to scored geometry callback registered using +** sqlite3_rtree_query_callback(). +** +** Note that the first 5 fields of this structure are identical to +** sqlite3_rtree_geometry. This structure is a subclass of +** sqlite3_rtree_geometry. 
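+**
+** A skeletal callback is sketched below (illustrative only; the decision
+** logic is a placeholder - a real implementation would test the bounding
+** coordinates in aCoord[] against the query parameters in aParam[]):
+**
+**    static int sampleQueryFunc(sqlite3_rtree_query_info *p){
+**      // hypothetical test of the current node or entry against the query
+**      p->eWithin = PARTLY_WITHIN;   // NOT_WITHIN, PARTLY_WITHIN or FULLY_WITHIN
+**      p->rScore = (sqlite3_rtree_dbl)p->iLevel;   // priority score for this entry
+**      return SQLITE_OK;
+**    }
+**
+**    // registered with:
+**    //   sqlite3_rtree_query_callback(db, "sample", sampleQueryFunc, 0, 0);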
+*/ +struct sqlite3_rtree_query_info { + void *pContext; /* pContext from when function registered */ + int nParam; /* Number of function parameters */ + sqlite3_rtree_dbl *aParam; /* value of function parameters */ + void *pUser; /* callback can use this, if desired */ + void (*xDelUser)(void*); /* function to free pUser */ + sqlite3_rtree_dbl *aCoord; /* Coordinates of node or entry to check */ + unsigned int *anQueue; /* Number of pending entries in the queue */ + int nCoord; /* Number of coordinates */ + int iLevel; /* Level of current node or entry */ + int mxLevel; /* The largest iLevel value in the tree */ + sqlite3_int64 iRowid; /* Rowid for current entry */ + sqlite3_rtree_dbl rParentScore; /* Score of parent node */ + int eParentWithin; /* Visibility of parent node */ + int eWithin; /* OUT: Visiblity */ + sqlite3_rtree_dbl rScore; /* OUT: Write the score here */ + /* The following fields are only available in 3.8.11 and later */ + sqlite3_value **apSqlParam; /* Original SQL values of parameters */ +}; + +/* +** Allowed values for sqlite3_rtree_query.eWithin and .eParentWithin. +*/ +#define NOT_WITHIN 0 /* Object completely outside of query region */ +#define PARTLY_WITHIN 1 /* Object partially overlaps query region */ +#define FULLY_WITHIN 2 /* Object fully contained within query region */ + + +#ifdef __cplusplus +} /* end of the 'extern "C"' block */ +#endif + +#endif /* ifndef _SQLITE3RTREE_H_ */ + +/* +** 2014 May 31 +** +** The author disclaims copyright to this source code. In place of +** a legal notice, here is a blessing: +** +** May you do good and not evil. +** May you find forgiveness for yourself and forgive others. +** May you share freely, never taking more than you give. +** +****************************************************************************** +** +** Interfaces to extend FTS5. Using the interfaces defined in this file, +** FTS5 may be extended with: +** +** * custom tokenizers, and +** * custom auxiliary functions. +*/ + + +#ifndef _FTS5_H +#define _FTS5_H + + +#ifdef __cplusplus +extern "C" { +#endif + +/************************************************************************* +** CUSTOM AUXILIARY FUNCTIONS +** +** Virtual table implementations may overload SQL functions by implementing +** the sqlite3_module.xFindFunction() method. +*/ + +typedef struct Fts5ExtensionApi Fts5ExtensionApi; +typedef struct Fts5Context Fts5Context; +typedef struct Fts5PhraseIter Fts5PhraseIter; + +typedef void (*fts5_extension_function)( + const Fts5ExtensionApi *pApi, /* API offered by current FTS version */ + Fts5Context *pFts, /* First arg to pass to pApi functions */ + sqlite3_context *pCtx, /* Context for returning result/error */ + int nVal, /* Number of values in apVal[] array */ + sqlite3_value **apVal /* Array of trailing arguments */ +); + +struct Fts5PhraseIter { + const unsigned char *a; + const unsigned char *b; +}; + +/* +** EXTENSION API FUNCTIONS +** +** xUserData(pFts): +** Return a copy of the context pointer the extension function was +** registered with. +** +** xColumnTotalSize(pFts, iCol, pnToken): +** If parameter iCol is less than zero, set output variable *pnToken +** to the total number of tokens in the FTS5 table. Or, if iCol is +** non-negative but less than the number of columns in the table, return +** the total number of tokens in column iCol, considering all rows in +** the FTS5 table. +** +** If parameter iCol is greater than or equal to the number of columns +** in the table, SQLITE_RANGE is returned. Or, if an error occurs (e.g. 
+** an OOM condition or IO error), an appropriate SQLite error code is +** returned. +** +** xColumnCount(pFts): +** Return the number of columns in the table. +** +** xColumnSize(pFts, iCol, pnToken): +** If parameter iCol is less than zero, set output variable *pnToken +** to the total number of tokens in the current row. Or, if iCol is +** non-negative but less than the number of columns in the table, set +** *pnToken to the number of tokens in column iCol of the current row. +** +** If parameter iCol is greater than or equal to the number of columns +** in the table, SQLITE_RANGE is returned. Or, if an error occurs (e.g. +** an OOM condition or IO error), an appropriate SQLite error code is +** returned. +** +** This function may be quite inefficient if used with an FTS5 table +** created with the "columnsize=0" option. +** +** xColumnText: +** This function attempts to retrieve the text of column iCol of the +** current document. If successful, (*pz) is set to point to a buffer +** containing the text in utf-8 encoding, (*pn) is set to the size in bytes +** (not characters) of the buffer and SQLITE_OK is returned. Otherwise, +** if an error occurs, an SQLite error code is returned and the final values +** of (*pz) and (*pn) are undefined. +** +** xPhraseCount: +** Returns the number of phrases in the current query expression. +** +** xPhraseSize: +** Returns the number of tokens in phrase iPhrase of the query. Phrases +** are numbered starting from zero. +** +** xInstCount: +** Set *pnInst to the total number of occurrences of all phrases within +** the query within the current row. Return SQLITE_OK if successful, or +** an error code (i.e. SQLITE_NOMEM) if an error occurs. +** +** This API can be quite slow if used with an FTS5 table created with the +** "detail=none" or "detail=column" option. If the FTS5 table is created +** with either "detail=none" or "detail=column" and "content=" option +** (i.e. if it is a contentless table), then this API always returns 0. +** +** xInst: +** Query for the details of phrase match iIdx within the current row. +** Phrase matches are numbered starting from zero, so the iIdx argument +** should be greater than or equal to zero and smaller than the value +** output by xInstCount(). +** +** Usually, output parameter *piPhrase is set to the phrase number, *piCol +** to the column in which it occurs and *piOff the token offset of the +** first token of the phrase. The exception is if the table was created +** with the offsets=0 option specified. In this case *piOff is always +** set to -1. +** +** Returns SQLITE_OK if successful, or an error code (i.e. SQLITE_NOMEM) +** if an error occurs. +** +** This API can be quite slow if used with an FTS5 table created with the +** "detail=none" or "detail=column" option. +** +** xRowid: +** Returns the rowid of the current row. +** +** xTokenize: +** Tokenize text using the tokenizer belonging to the FTS5 table. +** +** xQueryPhrase(pFts5, iPhrase, pUserData, xCallback): +** This API function is used to query the FTS table for phrase iPhrase +** of the current query. Specifically, a query equivalent to: +** +** ... FROM ftstable WHERE ftstable MATCH $p ORDER BY rowid +** +** with $p set to a phrase equivalent to the phrase iPhrase of the +** current query is executed. For each row visited, the callback function +** passed as the fourth argument is invoked. The context and API objects +** passed to the callback function may be used to access the properties of +** each matched row. 
Invoking Api.xUserData() returns a copy of the pointer +** passed as the third argument to pUserData. +** +** If the callback function returns any value other than SQLITE_OK, the +** query is abandoned and the xQueryPhrase function returns immediately. +** If the returned value is SQLITE_DONE, xQueryPhrase returns SQLITE_OK. +** Otherwise, the error code is propagated upwards. +** +** If the query runs to completion without incident, SQLITE_OK is returned. +** Or, if some error occurs before the query completes or is aborted by +** the callback, an SQLite error code is returned. +** +** +** xSetAuxdata(pFts5, pAux, xDelete) +** +** Save the pointer passed as the second argument as the extension functions +** "auxiliary data". The pointer may then be retrieved by the current or any +** future invocation of the same fts5 extension function made as part of +** of the same MATCH query using the xGetAuxdata() API. +** +** Each extension function is allocated a single auxiliary data slot for +** each FTS query (MATCH expression). If the extension function is invoked +** more than once for a single FTS query, then all invocations share a +** single auxiliary data context. +** +** If there is already an auxiliary data pointer when this function is +** invoked, then it is replaced by the new pointer. If an xDelete callback +** was specified along with the original pointer, it is invoked at this +** point. +** +** The xDelete callback, if one is specified, is also invoked on the +** auxiliary data pointer after the FTS5 query has finished. +** +** If an error (e.g. an OOM condition) occurs within this function, an +** the auxiliary data is set to NULL and an error code returned. If the +** xDelete parameter was not NULL, it is invoked on the auxiliary data +** pointer before returning. +** +** +** xGetAuxdata(pFts5, bClear) +** +** Returns the current auxiliary data pointer for the fts5 extension +** function. See the xSetAuxdata() method for details. +** +** If the bClear argument is non-zero, then the auxiliary data is cleared +** (set to NULL) before this function returns. In this case the xDelete, +** if any, is not invoked. +** +** +** xRowCount(pFts5, pnRow) +** +** This function is used to retrieve the total number of rows in the table. +** In other words, the same value that would be returned by: +** +** SELECT count(*) FROM ftstable; +** +** xPhraseFirst() +** This function is used, along with type Fts5PhraseIter and the xPhraseNext +** method, to iterate through all instances of a single query phrase within +** the current row. This is the same information as is accessible via the +** xInstCount/xInst APIs. While the xInstCount/xInst APIs are more convenient +** to use, this API may be faster under some circumstances. To iterate +** through instances of phrase iPhrase, use the following code: +** +** Fts5PhraseIter iter; +** int iCol, iOff; +** for(pApi->xPhraseFirst(pFts, iPhrase, &iter, &iCol, &iOff); +** iCol>=0; +** pApi->xPhraseNext(pFts, &iter, &iCol, &iOff) +** ){ +** // An instance of phrase iPhrase at offset iOff of column iCol +** } +** +** The Fts5PhraseIter structure is defined above. Applications should not +** modify this structure directly - it should only be used as shown above +** with the xPhraseFirst() and xPhraseNext() API methods (and by +** xPhraseFirstColumn() and xPhraseNextColumn() as illustrated below). +** +** This API can be quite slow if used with an FTS5 table created with the +** "detail=none" or "detail=column" option. 
If the FTS5 table is created +** with either "detail=none" or "detail=column" and "content=" option +** (i.e. if it is a contentless table), then this API always iterates +** through an empty set (all calls to xPhraseFirst() set iCol to -1). +** +** xPhraseNext() +** See xPhraseFirst above. +** +** xPhraseFirstColumn() +** This function and xPhraseNextColumn() are similar to the xPhraseFirst() +** and xPhraseNext() APIs described above. The difference is that instead +** of iterating through all instances of a phrase in the current row, these +** APIs are used to iterate through the set of columns in the current row +** that contain one or more instances of a specified phrase. For example: +** +** Fts5PhraseIter iter; +** int iCol; +** for(pApi->xPhraseFirstColumn(pFts, iPhrase, &iter, &iCol); +** iCol>=0; +** pApi->xPhraseNextColumn(pFts, &iter, &iCol) +** ){ +** // Column iCol contains at least one instance of phrase iPhrase +** } +** +** This API can be quite slow if used with an FTS5 table created with the +** "detail=none" option. If the FTS5 table is created with either +** "detail=none" "content=" option (i.e. if it is a contentless table), +** then this API always iterates through an empty set (all calls to +** xPhraseFirstColumn() set iCol to -1). +** +** The information accessed using this API and its companion +** xPhraseFirstColumn() may also be obtained using xPhraseFirst/xPhraseNext +** (or xInst/xInstCount). The chief advantage of this API is that it is +** significantly more efficient than those alternatives when used with +** "detail=column" tables. +** +** xPhraseNextColumn() +** See xPhraseFirstColumn above. +*/ +struct Fts5ExtensionApi { + int iVersion; /* Currently always set to 3 */ + + void *(*xUserData)(Fts5Context*); + + int (*xColumnCount)(Fts5Context*); + int (*xRowCount)(Fts5Context*, sqlite3_int64 *pnRow); + int (*xColumnTotalSize)(Fts5Context*, int iCol, sqlite3_int64 *pnToken); + + int (*xTokenize)(Fts5Context*, + const char *pText, int nText, /* Text to tokenize */ + void *pCtx, /* Context passed to xToken() */ + int (*xToken)(void*, int, const char*, int, int, int) /* Callback */ + ); + + int (*xPhraseCount)(Fts5Context*); + int (*xPhraseSize)(Fts5Context*, int iPhrase); + + int (*xInstCount)(Fts5Context*, int *pnInst); + int (*xInst)(Fts5Context*, int iIdx, int *piPhrase, int *piCol, int *piOff); + + sqlite3_int64 (*xRowid)(Fts5Context*); + int (*xColumnText)(Fts5Context*, int iCol, const char **pz, int *pn); + int (*xColumnSize)(Fts5Context*, int iCol, int *pnToken); + + int (*xQueryPhrase)(Fts5Context*, int iPhrase, void *pUserData, + int(*)(const Fts5ExtensionApi*,Fts5Context*,void*) + ); + int (*xSetAuxdata)(Fts5Context*, void *pAux, void(*xDelete)(void*)); + void *(*xGetAuxdata)(Fts5Context*, int bClear); + + int (*xPhraseFirst)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*, int*); + void (*xPhraseNext)(Fts5Context*, Fts5PhraseIter*, int *piCol, int *piOff); + + int (*xPhraseFirstColumn)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*); + void (*xPhraseNextColumn)(Fts5Context*, Fts5PhraseIter*, int *piCol); +}; + +/* +** CUSTOM AUXILIARY FUNCTIONS +*************************************************************************/ + +/************************************************************************* +** CUSTOM TOKENIZERS +** +** Applications may also register custom tokenizer types. A tokenizer +** is registered by providing fts5 with a populated instance of the +** following structure. 
All structure methods must be defined, setting +** any member of the fts5_tokenizer struct to NULL leads to undefined +** behaviour. The structure methods are expected to function as follows: +** +** xCreate: +** This function is used to allocate and inititalize a tokenizer instance. +** A tokenizer instance is required to actually tokenize text. +** +** The first argument passed to this function is a copy of the (void*) +** pointer provided by the application when the fts5_tokenizer object +** was registered with FTS5 (the third argument to xCreateTokenizer()). +** The second and third arguments are an array of nul-terminated strings +** containing the tokenizer arguments, if any, specified following the +** tokenizer name as part of the CREATE VIRTUAL TABLE statement used +** to create the FTS5 table. +** +** The final argument is an output variable. If successful, (*ppOut) +** should be set to point to the new tokenizer handle and SQLITE_OK +** returned. If an error occurs, some value other than SQLITE_OK should +** be returned. In this case, fts5 assumes that the final value of *ppOut +** is undefined. +** +** xDelete: +** This function is invoked to delete a tokenizer handle previously +** allocated using xCreate(). Fts5 guarantees that this function will +** be invoked exactly once for each successful call to xCreate(). +** +** xTokenize: +** This function is expected to tokenize the nText byte string indicated +** by argument pText. pText may or may not be nul-terminated. The first +** argument passed to this function is a pointer to an Fts5Tokenizer object +** returned by an earlier call to xCreate(). +** +** The second argument indicates the reason that FTS5 is requesting +** tokenization of the supplied text. This is always one of the following +** four values: +** +** <ul><li> <b>FTS5_TOKENIZE_DOCUMENT</b> - A document is being inserted into +** or removed from the FTS table. The tokenizer is being invoked to +** determine the set of tokens to add to (or delete from) the +** FTS index. +** +** <li> <b>FTS5_TOKENIZE_QUERY</b> - A MATCH query is being executed +** against the FTS index. The tokenizer is being called to tokenize +** a bareword or quoted string specified as part of the query. +** +** <li> <b>(FTS5_TOKENIZE_QUERY | FTS5_TOKENIZE_PREFIX)</b> - Same as +** FTS5_TOKENIZE_QUERY, except that the bareword or quoted string is +** followed by a "*" character, indicating that the last token +** returned by the tokenizer will be treated as a token prefix. +** +** <li> <b>FTS5_TOKENIZE_AUX</b> - The tokenizer is being invoked to +** satisfy an fts5_api.xTokenize() request made by an auxiliary +** function. Or an fts5_api.xColumnSize() request made by the same +** on a columnsize=0 database. +** </ul> +** +** For each token in the input string, the supplied callback xToken() must +** be invoked. The first argument to it should be a copy of the pointer +** passed as the second argument to xTokenize(). The third and fourth +** arguments are a pointer to a buffer containing the token text, and the +** size of the token in bytes. The 4th and 5th arguments are the byte offsets +** of the first byte of and first byte immediately following the text from +** which the token is derived within the input. +** +** The second argument passed to the xToken() callback ("tflags") should +** normally be set to 0. The exception is if the tokenizer supports +** synonyms. In this case see the discussion below for details. 
+** +** FTS5 assumes the xToken() callback is invoked for each token in the +** order that they occur within the input text. +** +** If an xToken() callback returns any value other than SQLITE_OK, then +** the tokenization should be abandoned and the xTokenize() method should +** immediately return a copy of the xToken() return value. Or, if the +** input buffer is exhausted, xTokenize() should return SQLITE_OK. Finally, +** if an error occurs with the xTokenize() implementation itself, it +** may abandon the tokenization and return any error code other than +** SQLITE_OK or SQLITE_DONE. +** +** SYNONYM SUPPORT +** +** Custom tokenizers may also support synonyms. Consider a case in which a +** user wishes to query for a phrase such as "first place". Using the +** built-in tokenizers, the FTS5 query 'first + place' will match instances +** of "first place" within the document set, but not alternative forms +** such as "1st place". In some applications, it would be better to match +** all instances of "first place" or "1st place" regardless of which form +** the user specified in the MATCH query text. +** +** There are several ways to approach this in FTS5: +** +** <ol><li> By mapping all synonyms to a single token. In this case, the +** In the above example, this means that the tokenizer returns the +** same token for inputs "first" and "1st". Say that token is in +** fact "first", so that when the user inserts the document "I won +** 1st place" entries are added to the index for tokens "i", "won", +** "first" and "place". If the user then queries for '1st + place', +** the tokenizer substitutes "first" for "1st" and the query works +** as expected. +** +** <li> By adding multiple synonyms for a single term to the FTS index. +** In this case, when tokenizing query text, the tokenizer may +** provide multiple synonyms for a single term within the document. +** FTS5 then queries the index for each synonym individually. For +** example, faced with the query: +** +** <codeblock> +** ... MATCH 'first place'</codeblock> +** +** the tokenizer offers both "1st" and "first" as synonyms for the +** first token in the MATCH query and FTS5 effectively runs a query +** similar to: +** +** <codeblock> +** ... MATCH '(first OR 1st) place'</codeblock> +** +** except that, for the purposes of auxiliary functions, the query +** still appears to contain just two phrases - "(first OR 1st)" +** being treated as a single phrase. +** +** <li> By adding multiple synonyms for a single term to the FTS index. +** Using this method, when tokenizing document text, the tokenizer +** provides multiple synonyms for each token. So that when a +** document such as "I won first place" is tokenized, entries are +** added to the FTS index for "i", "won", "first", "1st" and +** "place". +** +** This way, even if the tokenizer does not provide synonyms +** when tokenizing query text (it should not - to do would be +** inefficient), it doesn't matter if the user queries for +** 'first + place' or '1st + place', as there are entires in the +** FTS index corresponding to both forms of the first token. +** </ol> +** +** Whether it is parsing document or query text, any call to xToken that +** specifies a <i>tflags</i> argument with the FTS5_TOKEN_COLOCATED bit +** is considered to supply a synonym for the previous token. 
For example, +** when parsing the document "I won first place", a tokenizer that supports +** synonyms would call xToken() 5 times, as follows: +** +** <codeblock> +** xToken(pCtx, 0, "i", 1, 0, 1); +** xToken(pCtx, 0, "won", 3, 2, 5); +** xToken(pCtx, 0, "first", 5, 6, 11); +** xToken(pCtx, FTS5_TOKEN_COLOCATED, "1st", 3, 6, 11); +** xToken(pCtx, 0, "place", 5, 12, 17); +**</codeblock> +** +** It is an error to specify the FTS5_TOKEN_COLOCATED flag the first time +** xToken() is called. Multiple synonyms may be specified for a single token +** by making multiple calls to xToken(FTS5_TOKEN_COLOCATED) in sequence. +** There is no limit to the number of synonyms that may be provided for a +** single token. +** +** In many cases, method (1) above is the best approach. It does not add +** extra data to the FTS index or require FTS5 to query for multiple terms, +** so it is efficient in terms of disk space and query speed. However, it +** does not support prefix queries very well. If, as suggested above, the +** token "first" is subsituted for "1st" by the tokenizer, then the query: +** +** <codeblock> +** ... MATCH '1s*'</codeblock> +** +** will not match documents that contain the token "1st" (as the tokenizer +** will probably not map "1s" to any prefix of "first"). +** +** For full prefix support, method (3) may be preferred. In this case, +** because the index contains entries for both "first" and "1st", prefix +** queries such as 'fi*' or '1s*' will match correctly. However, because +** extra entries are added to the FTS index, this method uses more space +** within the database. +** +** Method (2) offers a midpoint between (1) and (3). Using this method, +** a query such as '1s*' will match documents that contain the literal +** token "1st", but not "first" (assuming the tokenizer is not able to +** provide synonyms for prefixes). However, a non-prefix query like '1st' +** will match against "1st" and "first". This method does not require +** extra disk space, as no extra entries are added to the FTS index. +** On the other hand, it may require more CPU cycles to run MATCH queries, +** as separate queries of the FTS index are required for each synonym. +** +** When using methods (2) or (3), it is important that the tokenizer only +** provide synonyms when tokenizing document text (method (2)) or query +** text (method (3)), not both. Doing so will not cause any errors, but is +** inefficient. +*/ +typedef struct Fts5Tokenizer Fts5Tokenizer; +typedef struct fts5_tokenizer fts5_tokenizer; +struct fts5_tokenizer { + int (*xCreate)(void*, const char **azArg, int nArg, Fts5Tokenizer **ppOut); + void (*xDelete)(Fts5Tokenizer*); + int (*xTokenize)(Fts5Tokenizer*, + void *pCtx, + int flags, /* Mask of FTS5_TOKENIZE_* flags */ + const char *pText, int nText, + int (*xToken)( + void *pCtx, /* Copy of 2nd argument to xTokenize() */ + int tflags, /* Mask of FTS5_TOKEN_* flags */ + const char *pToken, /* Pointer to buffer containing token */ + int nToken, /* Size of token in bytes */ + int iStart, /* Byte offset of token within input text */ + int iEnd /* Byte offset of end of token within input text */ + ) + ); +}; + +/* Flags that may be passed as the third argument to xTokenize() */ +#define FTS5_TOKENIZE_QUERY 0x0001 +#define FTS5_TOKENIZE_PREFIX 0x0002 +#define FTS5_TOKENIZE_DOCUMENT 0x0004 +#define FTS5_TOKENIZE_AUX 0x0008 + +/* Flags that may be passed by the tokenizer implementation back to FTS5 +** as the third argument to the supplied xToken callback. 
*/ +#define FTS5_TOKEN_COLOCATED 0x0001 /* Same position as prev. token */ + +/* +** END OF CUSTOM TOKENIZERS +*************************************************************************/ + +/************************************************************************* +** FTS5 EXTENSION REGISTRATION API +*/ +typedef struct fts5_api fts5_api; +struct fts5_api { + int iVersion; /* Currently always set to 2 */ + + /* Create a new tokenizer */ + int (*xCreateTokenizer)( + fts5_api *pApi, + const char *zName, + void *pContext, + fts5_tokenizer *pTokenizer, + void (*xDestroy)(void*) + ); + + /* Find an existing tokenizer */ + int (*xFindTokenizer)( + fts5_api *pApi, + const char *zName, + void **ppContext, + fts5_tokenizer *pTokenizer + ); + + /* Create a new auxiliary function */ + int (*xCreateFunction)( + fts5_api *pApi, + const char *zName, + void *pContext, + fts5_extension_function xFunction, + void (*xDestroy)(void*) + ); +}; + +/* +** END OF REGISTRATION API +*************************************************************************/ + +#ifdef __cplusplus +} /* end of the 'extern "C"' block */ #endif + +#endif /* _FTS5_H */ + ADDED src/stash.c Index: src/stash.c ================================================================== --- src/stash.c +++ src/stash.c @@ -0,0 +1,687 @@ +/* +** Copyright (c) 2010 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@sqlite.org +** +******************************************************************************* +** +** This file contains code used to implement the "stash" command. +*/ +#include "config.h" +#include "stash.h" +#include <assert.h> + + +/* +** SQL code to implement the tables needed by the stash. +*/ +static const char zStashInit[] = +@ CREATE TABLE IF NOT EXISTS "%w".stash( +@ stashid INTEGER PRIMARY KEY, -- Unique stash identifier +@ vid INTEGER, -- The baseline check-out for this stash +@ comment TEXT, -- Comment for this stash. Or NULL +@ ctime TIMESTAMP -- When the stash was created +@ ); +@ CREATE TABLE IF NOT EXISTS "%w".stashfile( +@ stashid INTEGER REFERENCES stash, -- Stash that contains this file +@ rid INTEGER, -- Baseline content in BLOB table or 0. +@ isAdded BOOLEAN, -- True if this is an added file +@ isRemoved BOOLEAN, -- True if this file is deleted +@ isExec BOOLEAN, -- True if file is executable +@ isLink BOOLEAN, -- True if file is a symlink +@ origname TEXT, -- Original filename +@ newname TEXT, -- New name for file at next check-in +@ delta BLOB, -- Delta from baseline. Content if rid=0 +@ PRIMARY KEY(origname, stashid) +@ ); +@ INSERT OR IGNORE INTO vvar(name, value) VALUES('stash-next', 1); +; + +/* +** Add zFName to the stash given by stashid. zFName might be the name of a +** file or a directory. If a directory, add all changed files contained +** within that directory. 
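+**
+** For example (hypothetical calls, mirroring how stash_create() below
+** uses this routine):
+**
+**    stash_add_file_or_dir(stashid, vid, "src/main.c");   // a single file
+**    stash_add_file_or_dir(stashid, vid, g.zLocalRoot);   // the whole tree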
+*/ +static void stash_add_file_or_dir(int stashid, int vid, const char *zFName){ + char *zFile; /* Normalized filename */ + char *zTreename; /* Name of the file in the tree */ + Blob fname; /* Filename relative to root */ + Blob sql; /* Query statement text */ + Stmt q; /* Query against the vfile table */ + Stmt ins; /* Insert statement */ + + zFile = mprintf("%/", zFName); + file_tree_name(zFile, &fname, 0, 1); + zTreename = blob_str(&fname); + blob_zero(&sql); + blob_append_sql(&sql, + "SELECT deleted, isexe, islink, mrid, pathname, coalesce(origname,pathname)" + " FROM vfile" + " WHERE vid=%d AND (chnged OR deleted OR origname NOT NULL OR mrid==0)", + vid + ); + if( fossil_strcmp(zTreename,".")!=0 ){ + blob_append_sql(&sql, + " AND (pathname GLOB '%q/*' OR origname GLOB '%q/*'" + " OR pathname=%Q OR origname=%Q)", + zTreename, zTreename, zTreename, zTreename + ); + } + db_prepare(&q, "%s", blob_sql_text(&sql)); + blob_reset(&sql); + db_prepare(&ins, + "INSERT INTO stashfile(stashid, rid, isAdded, isRemoved, isExec, isLink," + "origname, newname, delta)" + "VALUES(%d,:rid,:isadd,:isrm,:isexe,:islink,:orig,:new,:content)", + stashid + ); + while( db_step(&q)==SQLITE_ROW ){ + int deleted = db_column_int(&q, 0); + int rid = db_column_int(&q, 3); + const char *zName = db_column_text(&q, 4); + const char *zOrig = db_column_text(&q, 5); + char *zPath = mprintf("%s%s", g.zLocalRoot, zName); + Blob content; + int isNewLink = file_wd_islink(zPath); + + db_bind_int(&ins, ":rid", rid); + db_bind_int(&ins, ":isadd", rid==0); + db_bind_int(&ins, ":isrm", deleted); + db_bind_int(&ins, ":isexe", db_column_int(&q, 1)); + db_bind_int(&ins, ":islink", db_column_int(&q, 2)); + db_bind_text(&ins, ":orig", zOrig); + db_bind_text(&ins, ":new", zName); + + if( rid==0 ){ + /* A new file */ + if( isNewLink ){ + blob_read_link(&content, zPath); + }else{ + blob_read_from_file(&content, zPath); + } + db_bind_blob(&ins, ":content", &content); + }else if( deleted ){ + blob_zero(&content); + db_bind_null(&ins, ":content"); + }else{ + /* A modified file */ + Blob orig; + Blob disk; + + if( isNewLink ){ + blob_read_link(&disk, zPath); + }else{ + blob_read_from_file(&disk, zPath); + } + content_get(rid, &orig); + blob_delta_create(&orig, &disk, &content); + blob_reset(&orig); + blob_reset(&disk); + db_bind_blob(&ins, ":content", &content); + } + db_bind_int(&ins, ":islink", isNewLink); + db_step(&ins); + db_reset(&ins); + fossil_free(zPath); + blob_reset(&content); + } + db_finalize(&ins); + db_finalize(&q); + fossil_free(zFile); + blob_reset(&fname); +} + +/* +** Create a new stash based on the uncommitted changes currently in +** the working directory. +** +** If the "-m" or "--comment" command-line option is present, gather +** its argument as the stash comment. +** +** If files are named on the command-line, then only stash the named +** files. +*/ +static int stash_create(void){ + const char *zComment; /* Comment to add to the stash */ + int stashid; /* ID of the new stash */ + int vid; /* Current checkout */ + + zComment = find_option("comment", "m", 1); + verify_all_options(); + if( zComment==0 ){ + Blob prompt; /* Prompt for stash comment */ + Blob comment; /* User comment reply */ +#if defined(_WIN32) || defined(__CYGWIN__) + int bomSize; + const unsigned char *bom = get_utf8_bom(&bomSize); + blob_init(&prompt, (const char *) bom, bomSize); +#else + blob_zero(&prompt); +#endif + blob_append(&prompt, + "\n" + "# Enter a description of what is being stashed. Lines beginning\n" + "# with \"#\" are ignored. 
Stash comments are plain text except\n" + "# newlines are not preserved.\n", + -1); + prompt_for_user_comment(&comment, &prompt); + blob_reset(&prompt); + zComment = blob_str(&comment); + } + stashid = db_lget_int("stash-next", 1); + db_lset_int("stash-next", stashid+1); + vid = db_lget_int("checkout", 0); + vfile_check_signature(vid, 0); + db_multi_exec( + "INSERT INTO stash(stashid,vid,comment,ctime)" + "VALUES(%d,%d,%Q,julianday('now'))", + stashid, vid, zComment + ); + if( g.argc>3 ){ + int i; + for(i=3; i<g.argc; i++){ + stash_add_file_or_dir(stashid, vid, g.argv[i]); + } + }else{ + stash_add_file_or_dir(stashid, vid, g.zLocalRoot); + } + return stashid; +} + +/* +** Apply a stash to the current check-out. +*/ +static void stash_apply(int stashid, int nConflict){ + int vid; + Stmt q; + db_prepare(&q, + "SELECT rid, isRemoved, isExec, isLink, origname, newname, delta" + " FROM stashfile WHERE stashid=%d", + stashid + ); + vid = db_lget_int("checkout",0); + db_multi_exec("CREATE TEMP TABLE sfile(x TEXT PRIMARY KEY %s)", + filename_collation()); + while( db_step(&q)==SQLITE_ROW ){ + int rid = db_column_int(&q, 0); + int isRemoved = db_column_int(&q, 1); + int isExec = db_column_int(&q, 2); + int isLink = db_column_int(&q, 3); + const char *zOrig = db_column_text(&q, 4); + const char *zNew = db_column_text(&q, 5); + char *zOPath = mprintf("%s%s", g.zLocalRoot, zOrig); + char *zNPath = mprintf("%s%s", g.zLocalRoot, zNew); + Blob delta; + undo_save(zNew); + blob_zero(&delta); + if( rid==0 ){ + db_multi_exec("INSERT OR IGNORE INTO sfile(x) VALUES(%Q)", zNew); + db_ephemeral_blob(&q, 6, &delta); + blob_write_to_file(&delta, zNPath); + file_wd_setexe(zNPath, isExec); + }else if( isRemoved ){ + fossil_print("DELETE %s\n", zOrig); + file_delete(zOPath); + }else{ + Blob a, b, out, disk; + int isNewLink = file_wd_islink(zOPath); + db_ephemeral_blob(&q, 6, &delta); + if( isNewLink ){ + blob_read_link(&disk, zOPath); + }else{ + blob_read_from_file(&disk, zOPath); + } + content_get(rid, &a); + blob_delta_apply(&a, &delta, &b); + if( isLink == isNewLink && blob_compare(&disk, &a)==0 ){ + if( isLink || isNewLink ){ + file_delete(zNPath); + } + if( isLink ){ + symlink_create(blob_str(&b), zNPath); + }else{ + blob_write_to_file(&b, zNPath); + } + file_wd_setexe(zNPath, isExec); + fossil_print("UPDATE %s\n", zNew); + }else{ + int rc; + if( isLink || isNewLink ){ + rc = -1; + blob_zero(&b); /* because we reset it later */ + fossil_print("***** Cannot merge symlink %s\n", zNew); + }else{ + rc = merge_3way(&a, zOPath, &b, &out, 0); + blob_write_to_file(&out, zNPath); + blob_reset(&out); + file_wd_setexe(zNPath, isExec); + } + if( rc ){ + fossil_print("CONFLICT %s\n", zNew); + nConflict++; + }else{ + fossil_print("MERGE %s\n", zNew); + } + } + blob_reset(&a); + blob_reset(&b); + blob_reset(&disk); + } + blob_reset(&delta); + if( fossil_strcmp(zOrig,zNew)!=0 ){ + undo_save(zOrig); + file_delete(zOPath); + } + } + stash_add_files_in_sfile(vid); + db_finalize(&q); + if( nConflict ){ + fossil_print( + "WARNING: %d merge conflicts - see messages above for details.\n", + nConflict); + } +} + +/* +** Show the diffs associate with a single stash. 
+*/ +static void stash_diff( + int stashid, /* The stash entry to diff */ + const char *zDiffCmd, /* Command used for diffing */ + const char *zBinGlob, /* GLOB pattern to determine binary files */ + int fBaseline, /* Diff against original baseline check-in if true */ + int fIncludeBinary, /* Do diffs against binary files */ + u64 diffFlags /* Other diff flags */ +){ + Stmt q; + Blob empty; + blob_zero(&empty); + db_prepare(&q, + "SELECT rid, isRemoved, isExec, isLink, origname, newname, delta" + " FROM stashfile WHERE stashid=%d", + stashid + ); + while( db_step(&q)==SQLITE_ROW ){ + int rid = db_column_int(&q, 0); + int isRemoved = db_column_int(&q, 1); + int isLink = db_column_int(&q, 3); + int isBin1, isBin2; + const char *zOrig = db_column_text(&q, 4); + const char *zNew = db_column_text(&q, 5); + char *zOPath = mprintf("%s%s", g.zLocalRoot, zOrig); + Blob a, b; + if( rid==0 ){ + db_ephemeral_blob(&q, 6, &a); + fossil_print("ADDED %s\n", zNew); + diff_print_index(zNew, diffFlags); + isBin1 = 0; + isBin2 = fIncludeBinary ? 0 : looks_like_binary(&a); + diff_file_mem(&empty, &a, isBin1, isBin2, zNew, zDiffCmd, + zBinGlob, fIncludeBinary, diffFlags); + }else if( isRemoved ){ + fossil_print("DELETE %s\n", zOrig); + if( fBaseline==0 ){ + if( file_wd_islink(zOPath) ){ + blob_read_link(&a, zOPath); + }else{ + blob_read_from_file(&a, zOPath); + } + }else{ + content_get(rid, &a); + } + diff_print_index(zNew, diffFlags); + isBin1 = fIncludeBinary ? 0 : looks_like_binary(&a); + isBin2 = 0; + diff_file_mem(&a, &empty, isBin1, isBin2, zOrig, zDiffCmd, + zBinGlob, fIncludeBinary, diffFlags); + }else{ + Blob delta, disk; + int isOrigLink = file_wd_islink(zOPath); + db_ephemeral_blob(&q, 6, &delta); + if( fBaseline==0 ){ + if( isOrigLink ){ + blob_read_link(&disk, zOPath); + }else{ + blob_read_from_file(&disk, zOPath); + } + } + fossil_print("CHANGED %s\n", zNew); + if( !isOrigLink != !isLink ){ + diff_print_index(zNew, diffFlags); + diff_print_filenames(zOrig, zNew, diffFlags); + printf(DIFF_CANNOT_COMPUTE_SYMLINK); + }else{ + Blob *pBase = fBaseline ? &a : &disk; + content_get(rid, &a); + blob_delta_apply(&a, &delta, &b); + isBin1 = fIncludeBinary ? 0 : looks_like_binary(pBase); + isBin2 = fIncludeBinary ? 0 : looks_like_binary(&b); + diff_file_mem(fBaseline? &a : &disk, &b, isBin1, isBin2, zNew, + zDiffCmd, zBinGlob, fIncludeBinary, diffFlags); + blob_reset(&a); + blob_reset(&b); + } + if( !fBaseline ) blob_reset(&disk); + blob_reset(&delta); + } + } + db_finalize(&q); +} + +/* +** Drop the indicated stash +*/ +static void stash_drop(int stashid){ + db_multi_exec( + "DELETE FROM stash WHERE stashid=%d;" + "DELETE FROM stashfile WHERE stashid=%d;", + stashid, stashid + ); +} + +/* +** If zStashId is non-NULL then interpret is as a stash number and +** return that number. Or throw a fatal error if it is not a valid +** stash number. If it is NULL, return the most recent stash or +** throw an error if the stash is empty. +*/ +static int stash_get_id(const char *zStashId){ + int stashid = 0; + if( zStashId==0 ){ + stashid = db_int(0, "SELECT max(stashid) FROM stash"); + if( stashid==0 ) fossil_fatal("empty stash"); + }else{ + stashid = atoi(zStashId); + if( !db_exists("SELECT 1 FROM stash WHERE stashid=%d", stashid) ){ + fossil_fatal("no such stash: %d\n", stashid); + } + } + return stashid; +} + +/* +** COMMAND: stash +** +** Usage: %fossil stash SUBCOMMAND ARGS... +** +** fossil stash +** fossil stash save ?-m|--comment COMMENT? ?FILES...? +** fossil stash snapshot ?-m|--comment COMMENT? ?FILES...? 
+** +** Save the current changes in the working tree as a new stash. +** Then revert the changes back to the last check-in. If FILES +** are listed, then only stash and revert the named files. The +** "save" verb can be omitted if and only if there are no other +** arguments. The "snapshot" verb works the same as "save" but +** omits the revert, keeping the check-out unchanged. +** +** fossil stash list ?-v|--verbose? +** fossil stash ls ?-v|--verbose? +** +** List all changes sets currently stashed. Show information about +** individual files in each changeset if -v or --verbose is used. +** +** fossil stash show|cat ?STASHID? ?DIFF-FLAGS? +** +** Show the content of a stash +** +** fossil stash pop +** fossil stash apply ?STASHID? +** +** Apply STASHID or the most recently create stash to the current +** working check-out. The "pop" command deletes that changeset from +** the stash after applying it but the "apply" command retains the +** changeset. +** +** fossil stash goto ?STASHID? +** +** Update to the baseline checkout for STASHID then apply the +** changes of STASHID. Keep STASHID so that it can be reused +** This command is undoable. +** +** fossil stash drop ?STASHID? ?-a|--all? +** fossil stash rm ?STASHID? ?-a|--all? +** +** Forget everything about STASHID. Forget the whole stash if the +** -a|--all flag is used. Individual drops are undoable but -a|--all +** is not. +** +** fossil stash diff ?STASHID? +** fossil stash gdiff ?STASHID? +** +** Show diffs of the current working directory and what that +** directory would be if STASHID were applied. +** +** SUMMARY: +** fossil stash +** fossil stash save ?-m|--comment COMMENT? ?FILES...? +** fossil stash snapshot ?-m|--comment COMMENT? ?FILES...? +** fossil stash list|ls ?-v|--verbose? ?-W|--width <num>? +** fossil stash show|cat ?STASHID? ?DIFF-OPTIONS? +** fossil stash pop +** fossil stash apply ?STASHID? +** fossil stash goto ?STASHID? +** fossil stash rm|drop ?STASHID? ?-a|--all? +** fossil stash [g]diff ?STASHID? ?DIFF-OPTIONS? 
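+**
+** Example (hypothetical session, shown for illustration only):
+**
+**    fossil stash save -m "wip: parser refactor"
+**    fossil stash list -v
+**    fossil stash apply 1
+**    fossil stash drop 1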
+*/ +void stash_cmd(void){ + const char *zDb; + const char *zCmd; + int nCmd; + int stashid = 0; + undo_capture_command_line(); + db_must_be_within_tree(); + db_open_config(0); + db_begin_transaction(); + zDb = db_name("localdb"); + db_multi_exec(zStashInit /*works-like:"%w,%w"*/, zDb, zDb); + if( g.argc<=2 ){ + zCmd = "save"; + }else{ + zCmd = g.argv[2]; + } + nCmd = strlen(zCmd); + if( memcmp(zCmd, "save", nCmd)==0 ){ + stashid = stash_create(); + undo_disable(); + if( g.argc>=2 ){ + int nFile = db_int(0, "SELECT count(*) FROM stashfile WHERE stashid=%d", + stashid); + char **newArgv = fossil_malloc( sizeof(char*)*(nFile+2) ); + int i = 2; + Stmt q; + db_prepare(&q,"SELECT origname FROM stashfile WHERE stashid=%d", stashid); + while( db_step(&q)==SQLITE_ROW ){ + newArgv[i++] = mprintf("%s%s", g.zLocalRoot, db_column_text(&q, 0)); + } + db_finalize(&q); + newArgv[0] = g.argv[0]; + newArgv[1] = 0; + g.argv = newArgv; + g.argc = nFile+2; + if( nFile==0 ) return; + } + g.argv[1] = "revert"; + revert_cmd(); + }else + if( memcmp(zCmd, "snapshot", nCmd)==0 ){ + stash_create(); + }else + if( memcmp(zCmd, "list", nCmd)==0 || memcmp(zCmd, "ls", nCmd)==0 ){ + Stmt q, q2; + int n = 0, width; + int verboseFlag = find_option("verbose","v",0)!=0; + const char *zWidth = find_option("width","W",1); + + if( zWidth ){ + width = atoi(zWidth); + if( (width!=0) && (width<=46) ){ + fossil_fatal("-W|--width value must be >46 or 0"); + } + }else{ + width = -1; + } + if( !verboseFlag ){ + verboseFlag = find_option("detail","l",0)!=0; /* deprecated */ + } + verify_all_options(); + db_prepare(&q, + "SELECT stashid, (SELECT uuid FROM blob WHERE rid=vid)," + " comment, datetime(ctime) FROM stash" + " ORDER BY ctime" + ); + if( verboseFlag ){ + db_prepare(&q2, "SELECT isAdded, isRemoved, origname, newname" + " FROM stashfile WHERE stashid=$id"); + } + while( db_step(&q)==SQLITE_ROW ){ + int stashid = db_column_int(&q, 0); + const char *zCom; + n++; + fossil_print("%5d: [%.14s] on %s\n", + stashid, + db_column_text(&q, 1), + db_column_text(&q, 3) + ); + zCom = db_column_text(&q, 2); + if( zCom && zCom[0] ){ + fossil_print(" "); + comment_print(zCom, 0, 7, width, g.comFmtFlags); + } + if( verboseFlag ){ + db_bind_int(&q2, "$id", stashid); + while( db_step(&q2)==SQLITE_ROW ){ + int isAdded = db_column_int(&q2, 0); + int isRemoved = db_column_int(&q2, 1); + const char *zOrig = db_column_text(&q2, 2); + const char *zNew = db_column_text(&q2, 3); + if( isAdded ){ + fossil_print(" ADD %s\n", zNew); + }else if( isRemoved ){ + fossil_print(" REMOVE %s\n", zOrig); + }else if( fossil_strcmp(zOrig,zNew)!=0 ){ + fossil_print(" RENAME %s -> %s\n", zOrig, zNew); + }else{ + fossil_print(" EDIT %s\n", zOrig); + } + } + db_reset(&q2); + } + } + db_finalize(&q); + if( verboseFlag ) db_finalize(&q2); + if( n==0 ) fossil_print("empty stash\n"); + }else + if( memcmp(zCmd, "drop", nCmd)==0 || memcmp(zCmd, "rm", nCmd)==0 ){ + int allFlag = find_option("all", "a", 0)!=0; + if( allFlag ){ + Blob ans; + char cReply; + prompt_user("This action is not undoable. Continue (y/N)? 
", &ans); + cReply = blob_str(&ans)[0]; + if( cReply=='y' || cReply=='Y' ){ + db_multi_exec("DELETE FROM stash; DELETE FROM stashfile;"); + } + }else if( g.argc>=4 ){ + int i; + undo_begin(); + for(i=3; i<g.argc; i++){ + stashid = stash_get_id(g.argv[i]); + undo_save_stash(stashid); + stash_drop(stashid); + } + undo_finish(); + }else{ + undo_begin(); + undo_save_stash(0); + stash_drop(stashid); + undo_finish(); + } + }else + if( memcmp(zCmd, "pop", nCmd)==0 ){ + if( g.argc>3 ) usage("pop"); + stashid = stash_get_id(0); + undo_begin(); + stash_apply(stashid, 0); + undo_save_stash(stashid); + undo_finish(); + stash_drop(stashid); + }else + if( memcmp(zCmd, "apply", nCmd)==0 ){ + if( g.argc>4 ) usage("apply STASHID"); + stashid = stash_get_id(g.argc==4 ? g.argv[3] : 0); + undo_begin(); + stash_apply(stashid, 0); + undo_finish(); + }else + if( memcmp(zCmd, "goto", nCmd)==0 ){ + int nConflict; + int vid; + if( g.argc>4 ) usage("apply STASHID"); + stashid = stash_get_id(g.argc==4 ? g.argv[3] : 0); + undo_begin(); + vid = db_int(0, "SELECT vid FROM stash WHERE stashid=%d", stashid); + nConflict = update_to(vid); + stash_apply(stashid, nConflict); + db_multi_exec("UPDATE vfile SET mtime=0 WHERE pathname IN " + "(SELECT origname FROM stashfile WHERE stashid=%d)", + stashid); + undo_finish(); + }else + if( memcmp(zCmd, "diff", nCmd)==0 + || memcmp(zCmd, "gdiff", nCmd)==0 + || memcmp(zCmd, "show", nCmd)==0 + || memcmp(zCmd, "cat", nCmd)==0 + ){ + const char *zDiffCmd = 0; + const char *zBinGlob = 0; + int fIncludeBinary = 0; + u64 diffFlags; + + if( find_option("tk",0,0)!=0 ){ + db_close(0); + switch (zCmd[0]) { + case 's': + case 'c': + diff_tk("stash show", 3); + break; + + default: + diff_tk("stash diff", 3); + } + return; + } + if( find_option("internal","i",0)==0 ){ + zDiffCmd = diff_command_external(memcmp(zCmd, "gdiff", nCmd)==0); + } + diffFlags = diff_options(); + if( find_option("verbose","v",0)!=0 ) diffFlags |= DIFF_VERBOSE; + if( g.argc>4 ) usage(mprintf("%s STASHID", zCmd)); + if( zDiffCmd ){ + zBinGlob = diff_get_binary_glob(); + fIncludeBinary = diff_include_binary_files(); + } + stashid = stash_get_id(g.argc==4 ? g.argv[3] : 0); + stash_diff(stashid, zDiffCmd, zBinGlob, zCmd[0]=='s', fIncludeBinary, + diffFlags); + }else + if( memcmp(zCmd, "help", nCmd)==0 ){ + g.argv[1] = "help"; + g.argv[2] = "stash"; + g.argc = 3; + help_cmd(); + }else + { + usage("SUBCOMMAND ARGS..."); + } + db_end_transaction(0); +} Index: src/stat.c ================================================================== --- src/stat.c +++ src/stat.c @@ -16,78 +16,436 @@ ******************************************************************************* ** ** This file contains code to implement the stat web page ** */ -#include <string.h> +#include "VERSION.h" #include "config.h" +#include <string.h> #include "stat.h" + +/* +** For a sufficiently large integer, provide an alternative +** representation as MB or GB or TB. +*/ +void bigSizeName(int nOut, char *zOut, sqlite3_int64 v){ + if( v<100000 ){ + sqlite3_snprintf(nOut, zOut, "%lld bytes", v); + }else if( v<1000000000 ){ + sqlite3_snprintf(nOut, zOut, "%lld bytes (%.1fMB)", + v, (double)v/1000000.0); + }else{ + sqlite3_snprintf(nOut, zOut, "%lld bytes (%.1fGB)", + v, (double)v/1000000000.0); + } +} + +/* +** Return the approximate size as KB, MB, GB, or TB. 
+*/ +void approxSizeName(int nOut, char *zOut, sqlite3_int64 v){ + if( v<1000 ){ + sqlite3_snprintf(nOut, zOut, "%lld bytes", v); + }else if( v<1000000 ){ + sqlite3_snprintf(nOut, zOut, "%.1fKB", (double)v/1000.0); + }else if( v<1000000000 ){ + sqlite3_snprintf(nOut, zOut, "%.1fMB", (double)v/1000000.0); + }else{ + sqlite3_snprintf(nOut, zOut, "%.1fGB", (double)v/1000000000.0); + } +} /* ** WEBPAGE: stat ** ** Show statistics and global information about the repository. */ void stat_page(void){ - i64 t; - int n, m, fsize; + i64 t, fsize; + int n, m; + int szMax, szAvg; + const char *zDb; + int brief; char zBuf[100]; + const char *p; + login_check_credentials(); - if( !g.okRead ){ login_needed(); return; } + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } + brief = P("brief")!=0; style_header("Repository Statistics"); - @ <p><table class="label-value"> + style_adunit_config(ADUNIT_RIGHT_OK); + if( g.perm.Admin ){ + style_submenu_element("URLs", "URLs and Checkouts", "urllist"); + style_submenu_element("Schema", "Repository Schema", "repo_schema"); + style_submenu_element("Web-Cache", "Web-Cache Stats", "cachestat"); + } + style_submenu_element("Activity Reports", 0, "reports"); + style_submenu_element("SHA1 Collisions", 0, "hash-collisions"); + if( sqlite3_compileoption_used("ENABLE_DBSTAT_VTAB") ){ + style_submenu_element("Table Sizes", 0, "repo-tabsize"); + } + @ <table class="label-value"> @ <tr><th>Repository Size:</th><td> fsize = file_size(g.zRepositoryName); - @ %d(fsize) bytes - @ </td></tr> - @ <tr><th>Number Of Artifacts:</th><td> - n = db_int(0, "SELECT count(*) FROM blob"); - m = db_int(0, "SELECT count(*) FROM delta"); - @ %d(n) (stored as %d(n-m) full text and %d(m) delta blobs) - @ </td></tr> - if( n>0 ){ - int a, b; - @ <tr><th>Uncompressed Artifact Size:</th><td> - t = db_int64(0, "SELECT total(size) FROM blob WHERE size>0"); - sqlite3_snprintf(sizeof(zBuf), zBuf, "%lld", t); - @ %d((int)(((double)t)/(double)n)) bytes average, %s(zBuf) bytes total - @ </td></tr> - @ <tr><th>Compression Ratio:</th><td> - if( t/fsize < 5 ){ - b = 10; - fsize /= 10; - }else{ - b = 1; - } - a = t/fsize; - @ %d(a):%d(b) - @ </td></tr> - } - @ <tr><th>Number Of Check-ins:</th><td> - n = db_int(0, "SELECT count(distinct mid) FROM mlink"); - @ %d(n) - @ </td></tr> - @ <tr><th>Number Of Files:</th><td> - n = db_int(0, "SELECT count(*) FROM filename"); - @ %d(n) - @ </td></tr> - @ <tr><th>Number Of Wiki Pages:</th><td> - n = db_int(0, "SELECT count(*) FROM tag WHERE +tagname GLOB 'wiki-*'"); - @ %d(n) - @ </td></tr> - @ <tr><th>Number Of Tickets:</th><td> - n = db_int(0, "SELECT count(*) FROM tag WHERE +tagname GLOB 'tkt-*'"); - @ %d(n) - @ </td></tr> - @ <tr><th>Duration Of Project:</th><td> - n = db_int(0, "SELECT julianday('now') - (SELECT min(mtime) FROM event) + 0.99"); - @ %d(n) days - @ </td></tr> - @ <tr><th>Project ID:</th><td> - @ %h(db_get("project-code","")) - @ </td></tr> - @ <tr><th>Server ID:</th><td> - @ %h(db_get("server-code","")) - @ </td></tr> - @ </table></p> + bigSizeName(sizeof(zBuf), zBuf, fsize); + @ %s(zBuf) + @ </td></tr> + if( !brief ){ + @ <tr><th>Number Of Artifacts:</th><td> + n = db_int(0, "SELECT count(*) FROM blob"); + m = db_int(0, "SELECT count(*) FROM delta"); + @ %d(n) (%d(n-m) fulltext and %d(m) deltas) + @ </td></tr> + if( n>0 ){ + int a, b; + Stmt q; + @ <tr><th>Uncompressed Artifact Size:</th><td> + db_prepare(&q, "SELECT total(size), avg(size), max(size)" + " FROM blob WHERE size>0 /*scan*/"); + db_step(&q); + t = db_column_int64(&q, 0); + szAvg = 
db_column_int(&q, 1); + szMax = db_column_int(&q, 2); + db_finalize(&q); + bigSizeName(sizeof(zBuf), zBuf, t); + @ %d(szAvg) bytes average, %d(szMax) bytes max, %s(zBuf) total + @ </td></tr> + @ <tr><th>Compression Ratio:</th><td> + if( t/fsize < 5 ){ + b = 10; + fsize /= 10; + }else{ + b = 1; + } + a = t/fsize; + @ %d(a):%d(b) + @ </td></tr> + } + @ <tr><th>Number Of Check-ins:</th><td> + n = db_int(0, "SELECT count(*) FROM event WHERE type='ci' /*scan*/"); + @ %d(n) + @ </td></tr> + @ <tr><th>Number Of Files:</th><td> + n = db_int(0, "SELECT count(*) FROM filename /*scan*/"); + @ %d(n) + @ </td></tr> + @ <tr><th>Number Of Wiki Pages:</th><td> + n = db_int(0, "SELECT count(*) FROM tag /*scan*/" + " WHERE +tagname GLOB 'wiki-*'"); + @ %d(n) + @ </td></tr> + @ <tr><th>Number Of Tickets:</th><td> + n = db_int(0, "SELECT count(*) FROM tag /*scan*/" + " WHERE +tagname GLOB 'tkt-*'"); + @ %d(n) + @ </td></tr> + } + @ <tr><th>Duration Of Project:</th><td> + n = db_int(0, "SELECT julianday('now') - (SELECT min(mtime) FROM event)" + " + 0.99"); + @ %d(n) days or approximately %.2f(n/365.2425) years. + @ </td></tr> + p = db_get("project-code", 0); + if( p ){ + @ <tr><th>Project ID:</th><td>%h(p)</td></tr> + } + @ <tr><th>Server ID:</th><td>%h(db_get("server-code",""))</td></tr> + @ <tr><th>Fossil Version:</th><td> + @ %h(MANIFEST_DATE) %h(MANIFEST_VERSION) + @ (%h(RELEASE_VERSION)) [compiled using %h(COMPILER_NAME)] + @ </td></tr> + @ <tr><th>SQLite Version:</th><td>%.19s(sqlite3_sourceid()) + @ [%.10s(&sqlite3_sourceid()[20])] (%s(sqlite3_libversion()))</td></tr> + @ <tr><th>Schema Version:</th><td>%h(g.zAuxSchema)</td></tr> + @ <tr><th>Repository Rebuilt:</th><td> + @ %h(db_get_mtime("rebuilt","%Y-%m-%d %H:%M:%S","Never")) + @ By Fossil %h(db_get("rebuilt","Unknown"))</td></tr> + @ <tr><th>Database Stats:</th><td> + zDb = db_name("repository"); + @ %d(db_int(0, "PRAGMA \"%w\".page_count", zDb)) pages, + @ %d(db_int(0, "PRAGMA \"%w\".page_size", zDb)) bytes/page, + @ %d(db_int(0, "PRAGMA \"%w\".freelist_count", zDb)) free pages, + @ %s(db_text(0, "PRAGMA \"%w\".encoding", zDb)), + @ %s(db_text(0, "PRAGMA \"%w\".journal_mode", zDb)) mode + @ </td></tr> + + @ </table> + style_footer(); +} + +/* +** COMMAND: dbstat* +** +** Usage: %fossil dbstat OPTIONS +** +** Shows statistics and global information about the repository. +** +** Options: +** +** --brief|-b Only show essential elements +** --db-check Run a PRAGMA quick_check on the repository database +** --omit-version-info Omit the SQLite and Fossil version information +*/ +void dbstat_cmd(void){ + i64 t, fsize; + int n, m; + int szMax, szAvg; + const char *zDb; + int brief; + int omitVers; /* Omit Fossil and SQLite version information */ + int dbCheck; /* True for the --db-check option */ + char zBuf[100]; + const int colWidth = -19 /* printf alignment/width for left column */; + const char *p, *z; + + brief = find_option("brief", "b",0)!=0; + omitVers = find_option("omit-version-info", 0, 0)!=0; + dbCheck = find_option("db-check",0,0)!=0; + db_find_and_open_repository(0,0); + + /* We should be done with options.. 
*/ + verify_all_options(); + + if( (z = db_get("project-name",0))!=0 + || (z = db_get("short-project-name",0))!=0 + ){ + fossil_print("%*s%s\n", colWidth, "project-name:", z); + } + fsize = file_size(g.zRepositoryName); + bigSizeName(sizeof(zBuf), zBuf, fsize); + fossil_print( "%*s%s\n", colWidth, "repository-size:", zBuf ); + if( !brief ){ + n = db_int(0, "SELECT count(*) FROM blob"); + m = db_int(0, "SELECT count(*) FROM delta"); + fossil_print("%*s%d (stored as %d full text and %d delta blobs)\n", + colWidth, "artifact-count:", + n, n-m, m); + if( n>0 ){ + int a, b; + Stmt q; + db_prepare(&q, "SELECT total(size), avg(size), max(size)" + " FROM blob WHERE size>0"); + db_step(&q); + t = db_column_int64(&q, 0); + szAvg = db_column_int(&q, 1); + szMax = db_column_int(&q, 2); + db_finalize(&q); + bigSizeName(sizeof(zBuf), zBuf, t); + fossil_print( "%*s%d average, " + "%d max, %s total\n", + colWidth, "artifact-sizes:", + szAvg, szMax, zBuf); + if( t/fsize < 5 ){ + b = 10; + fsize /= 10; + }else{ + b = 1; + } + a = t/fsize; + fossil_print("%*s%d:%d\n", colWidth, "compression-ratio:", a, b); + } + n = db_int(0, "SELECT COUNT(*) FROM event e WHERE e.type='ci'"); + fossil_print("%*s%d\n", colWidth, "check-ins:", n); + n = db_int(0, "SELECT count(*) FROM filename /*scan*/"); + fossil_print("%*s%d across all branches\n", colWidth, "files:", n); + n = db_int(0, "SELECT count(*) FROM tag /*scan*/" + " WHERE tagname GLOB 'wiki-*'"); + m = db_int(0, "SELECT COUNT(*) FROM event WHERE type='w'"); + fossil_print("%*s%d (%d changes)\n", colWidth, "wiki-pages:", n, m); + n = db_int(0, "SELECT count(*) FROM tag /*scan*/" + " WHERE tagname GLOB 'tkt-*'"); + m = db_int(0, "SELECT COUNT(*) FROM event WHERE type='t'"); + fossil_print("%*s%d (%d changes)\n", colWidth, "tickets:", n, m); + n = db_int(0, "SELECT COUNT(*) FROM event WHERE type='e'"); + fossil_print("%*s%d\n", colWidth, "events:", n); + n = db_int(0, "SELECT COUNT(*) FROM event WHERE type='g'"); + fossil_print("%*s%d\n", colWidth, "tag-changes:", n); + z = db_text(0, "SELECT datetime(mtime) || ' - about ' ||" + " CAST(julianday('now') - mtime AS INTEGER)" + " || ' days ago' FROM event " + " ORDER BY mtime DESC LIMIT 1"); + fossil_print("%*s%s\n", colWidth, "latest-change:", z); + } + n = db_int(0, "SELECT julianday('now') - (SELECT min(mtime) FROM event)" + " + 0.99"); + fossil_print("%*s%d days or approximately %.2f years.\n", + colWidth, "project-age:", n, n/365.2425); + p = db_get("project-code", 0); + if( p ){ + fossil_print("%*s%s\n", colWidth, "project-id:", p); + } +#if 0 + /* Server-id is not useful information any more */ + fossil_print("%*s%s\n", colWidth, "server-id:", db_get("server-code", 0)); +#endif + fossil_print("%*s%s\n", colWidth, "schema-version:", g.zAuxSchema); + if( !omitVers ){ + fossil_print("%*s%s %s [%s] (%s)\n", + colWidth, "fossil-version:", + MANIFEST_DATE, MANIFEST_VERSION, RELEASE_VERSION, + COMPILER_NAME); + fossil_print("%*s%.19s [%.10s] (%s)\n", + colWidth, "sqlite-version:", + sqlite3_sourceid(), &sqlite3_sourceid()[20], + sqlite3_libversion()); + } + zDb = db_name("repository"); + fossil_print("%*s%d pages, %d bytes/pg, %d free pages, " + "%s, %s mode\n", + colWidth, "database-stats:", + db_int(0, "PRAGMA \"%w\".page_count", zDb), + db_int(0, "PRAGMA \"%w\".page_size", zDb), + db_int(0, "PRAGMA \"%w\".freelist_count", zDb), + db_text(0, "PRAGMA \"%w\".encoding", zDb), + db_text(0, "PRAGMA \"%w\".journal_mode", zDb)); + if( dbCheck ){ + fossil_print("%*s%s\n", colWidth, "database-check:", + db_text(0, "PRAGMA 
quick_check(1)")); + } +} + +/* +** WEBPAGE: urllist +** +** Show ways in which this repository has been accessed +*/ +void urllist_page(void){ + Stmt q; + int cnt; + login_check_credentials(); + if( !g.perm.Admin ){ login_needed(0); return; } + + style_header("URLs and Checkouts"); + style_adunit_config(ADUNIT_RIGHT_OK); + style_submenu_element("Stat", "Repository Stats", "stat"); + style_submenu_element("Schema", "Repository Schema", "repo_schema"); + @ <div class="section">URLs</div> + @ <table border="0" width='100%%'> + db_prepare(&q, "SELECT substr(name,9), datetime(mtime,'unixepoch')" + " FROM config WHERE name GLOB 'baseurl:*' ORDER BY 2 DESC"); + cnt = 0; + while( db_step(&q)==SQLITE_ROW ){ + @ <tr><td width='100%%'>%h(db_column_text(&q,0))</td> + @ <td><nobr>%h(db_column_text(&q,1))</nobr></td></tr> + cnt++; + } + db_finalize(&q); + if( cnt==0 ){ + @ <tr><td>(none)</td> + } + @ </table> + @ <div class="section">Checkouts</div> + @ <table border="0" width='100%%'> + db_prepare(&q, "SELECT substr(name,7), datetime(mtime,'unixepoch')" + " FROM config WHERE name GLOB 'ckout:*' ORDER BY 2 DESC"); + cnt = 0; + while( db_step(&q)==SQLITE_ROW ){ + @ <tr><td width='100%%'>%h(db_column_text(&q,0))</td> + @ <td><nobr>%h(db_column_text(&q,1))</nobr></td></tr> + cnt++; + } + db_finalize(&q); + if( cnt==0 ){ + @ <tr><td>(none)</td> + } + @ </table> + style_footer(); +} + +/* +** WEBPAGE: repo_schema +** +** Show the repository schema +*/ +void repo_schema_page(void){ + Stmt q; + login_check_credentials(); + if( !g.perm.Admin ){ login_needed(0); return; } + + style_header("Repository Schema"); + style_adunit_config(ADUNIT_RIGHT_OK); + style_submenu_element("Stat", "Repository Stats", "stat"); + style_submenu_element("URLs", "URLs and Checkouts", "urllist"); + db_prepare(&q, "SELECT sql FROM %s.sqlite_master WHERE sql IS NOT NULL", + db_name("repository")); + @ <pre> + while( db_step(&q)==SQLITE_ROW ){ + @ %h(db_column_text(&q, 0)); + } + @ </pre> + db_finalize(&q); + style_footer(); +} + +/* +** WEBPAGE: repo-tabsize +** +** Show relative sizes of tables in the repository database. 
+*/ +void repo_tabsize_page(void){ + int nPageFree; + sqlite3_int64 fsize; + char zBuf[100]; + + login_check_credentials(); + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } + style_header("Repository Table Sizes"); + style_adunit_config(ADUNIT_RIGHT_OK); + style_submenu_element("Stat", "Repository Stats", "stat"); + db_multi_exec( + "CREATE VIRTUAL TABLE temp.dbx USING dbstat(%s);" + "CREATE TEMP TABLE trans(name TEXT PRIMARY KEY, tabname TEXT)WITHOUT ROWID;" + "INSERT INTO trans(name,tabname)" + " SELECT name, tbl_name FROM %s.sqlite_master;" + "CREATE TEMP TABLE piechart(amt REAL, label TEXT);" + "INSERT INTO piechart(amt,label)" + " SELECT count(*), " + " coalesce((SELECT tabname FROM trans WHERE trans.name=dbx.name),name)" + " FROM dbx" + " GROUP BY 2 ORDER BY 2;", + db_name("repository"), db_name("repository") + ); + nPageFree = db_int(0, "PRAGMA \"%w\".freelist_count", db_name("repository")); + if( nPageFree>0 ){ + db_multi_exec( + "INSERT INTO piechart(amt,label) VALUES(%d,'freelist')", + nPageFree + ); + } + fsize = file_size(g.zRepositoryName); + approxSizeName(sizeof(zBuf), zBuf, fsize); + @ <h2>Repository Size: %s(zBuf)</h2> + @ <center><svg width='800' height='500'> + piechart_render(800,500,PIE_OTHER|PIE_PERCENT); + @ </svg></center> + + if( g.localOpen ){ + db_multi_exec( + "DROP TABLE temp.dbx;" + "CREATE VIRTUAL TABLE temp.dbx USING dbstat(%s);" + "DELETE FROM trans;" + "INSERT INTO trans(name,tabname)" + " SELECT name, tbl_name FROM %s.sqlite_master;" + "DELETE FROM piechart;" + "INSERT INTO piechart(amt,label)" + " SELECT count(*), " + " coalesce((SELECT tabname FROM trans WHERE trans.name=dbx.name),name)" + " FROM dbx" + " GROUP BY 2 ORDER BY 2;", + db_name("localdb"), db_name("localdb") + ); + nPageFree = db_int(0, "PRAGMA \"%s\".freelist_count", db_name("localdb")); + if( nPageFree>0 ){ + db_multi_exec( + "INSERT INTO piechart(amt,label) VALUES(%d,'freelist')", + nPageFree + ); + } + fsize = file_size(g.zLocalDbName); + approxSizeName(sizeof(zBuf), zBuf, fsize); + @ <h2>%h(file_tail(g.zLocalDbName)) Size: %s(zBuf)</h2> + @ <center><svg width='800' height='500'> + piechart_render(800,500,PIE_OTHER|PIE_PERCENT); + @ </svg></center> + } style_footer(); } ADDED src/statrep.c Index: src/statrep.c ================================================================== --- src/statrep.c +++ src/statrep.c @@ -0,0 +1,761 @@ +/* +** Copyright (c) 2013 Stephan Beal +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code to implement the /reports web page. +** +*/ +#include "config.h" +#include <string.h> +#include <time.h> +#include "statrep.h" + + +/* +** Used by stats_report_xxxxx() to remember which type of events +** to show. Populated by stats_report_init_view() and holds the +** return value of that function. +*/ +static int statsReportType = 0; + +/* +** Set by stats_report_init_view() to one of the y=XXXX values +** accepted by /timeline?y=XXXX. 
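+** Typical values are "ci" when only check-ins are selected and "a"
+** when all event types are shown.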
+*/ +static const char *statsReportTimelineYFlag = NULL; + + +/* +** Creates a TEMP VIEW named v_reports which is a wrapper around the +** EVENT table filtered on event.type. It looks for the request +** parameter 'type' (reminder: we "should" use 'y' for consistency +** with /timeline, but /reports uses 'y' for the year) and expects it +** to contain one of the conventional values from event.type or the +** value "all", which is treated as equivalent to "*". By default (if +** no 'y' is specified), "*" is assumed (that is also the default for +** invalid/unknown filter values). That 'y' filter is the one used for +** the event list. Note that a filter of "*" or "all" is equivalent to +** querying against the full event table. The view, however, adds an +** abstraction level to simplify the implementation code for the +** various /reports pages. +** +** Returns one of: 'c', 'w', 'g', 't', 'e', representing the type of +** filter it applies, or '*' if no filter is applied (i.e. if "all" is +** used). +*/ +static int stats_report_init_view(){ + const char *zType = PD("type","*"); /* analog to /timeline?y=... */ + const char *zRealType = NULL; /* normalized form of zType */ + int rc = 0; /* result code */ + assert( !statsReportType && "Must not be called more than once." ); + switch( (zType && *zType) ? *zType : 0 ){ + case 'c': + case 'C': + zRealType = "ci"; + rc = *zRealType; + break; + case 'e': + case 'E': + zRealType = "e"; + rc = *zRealType; + break; + case 'g': + case 'G': + zRealType = "g"; + rc = *zRealType; + break; + case 't': + case 'T': + zRealType = "t"; + rc = *zRealType; + break; + case 'w': + case 'W': + zRealType = "w"; + rc = *zRealType; + break; + default: + rc = '*'; + break; + } + assert(0 != rc); + if(zRealType){ + statsReportTimelineYFlag = zRealType; + db_multi_exec("CREATE TEMP VIEW v_reports AS " + "SELECT * FROM event WHERE type GLOB %Q", + zRealType); + }else{ + statsReportTimelineYFlag = "a"; + db_multi_exec("CREATE TEMP VIEW v_reports AS " + "SELECT * FROM event"); + } + return statsReportType = rc; +} + +/* +** Returns a string suitable (for a given value of suitable) for +** use in a label with the header of the /reports pages, dependent +** on the 'type' flag. See stats_report_init_view(). +** The returned bytes are static. +*/ +static const char *stats_report_label_for_type(){ + assert( statsReportType && "Must call stats_report_init_view() first." ); + switch( statsReportType ){ + case 'c': + return "check-ins"; + case 'e': + return "technotes"; + case 'w': + return "wiki changes"; + case 't': + return "ticket changes"; + case 'g': + return "tag changes"; + default: + return "all types"; + } +} + + +/* +** Helper for stats_report_by_month_year(), which generates a list of +** week numbers. zTimeframe should be either a timeframe in the form YYYY +** or YYYY-MM. +*/ +static void stats_report_output_week_links(const char *zTimeframe){ + Stmt stWeek = empty_Stmt; + char yearPart[5] = {0,0,0,0,0}; + memcpy(yearPart, zTimeframe, 4); + db_prepare(&stWeek, + "SELECT DISTINCT strftime('%%W',mtime) AS wk, " + "count(*) AS n, " + "substr(date(mtime),1,%d) AS ym " + "FROM v_reports " + "WHERE ym=%Q AND mtime < current_timestamp " + "GROUP BY wk ORDER BY wk", + strlen(zTimeframe), + zTimeframe); + while( SQLITE_ROW == db_step(&stWeek) ){ + const char *zWeek = db_column_text(&stWeek,0); + const int nCount = db_column_int(&stWeek,1); + cgi_printf("<a href='%R/timeline?" 
+ "yw=%t-%t&n=%d&y=%s'>%s</a>", + yearPart, zWeek, + nCount, statsReportTimelineYFlag, zWeek); + } + db_finalize(&stWeek); +} + +/* +** Implements the "byyear" and "bymonth" reports for /reports. +** If includeMonth is true then it generates the "bymonth" report, +** else the "byyear" report. If zUserName is not NULL then the report is +** restricted to events created by the named user account. +*/ +static void stats_report_by_month_year(char includeMonth, + char includeWeeks, + const char *zUserName){ + Stmt query = empty_Stmt; + int nRowNumber = 0; /* current TR number */ + int nEventTotal = 0; /* Total event count */ + int rowClass = 0; /* counter for alternating + row colors */ + const char *zTimeLabel = includeMonth ? "Year/Month" : "Year"; + char zPrevYear[5] = {0}; /* For keeping track of when + we change years while looping */ + int nEventsPerYear = 0; /* Total event count for the + current year */ + char showYearTotal = 0; /* Flag telling us when to show + the per-year event totals */ + int nMaxEvents = 1; /* for calculating length of graph + bars. */ + int iterations = 0; /* number of weeks/months we iterate + over */ + Blob userFilter = empty_blob; /* Optional user=johndoe query string */ + stats_report_init_view(); + if( zUserName ){ + blob_appendf(&userFilter, "user=%s", zUserName); + } + blob_reset(&userFilter); + db_prepare(&query, + "SELECT substr(date(mtime),1,%d) AS timeframe," + " count(*) AS eventCount" + " FROM v_reports" + " WHERE ifnull(coalesce(euser,user,'')=%Q,1)" + " GROUP BY timeframe" + " ORDER BY timeframe DESC", + includeMonth ? 7 : 4, zUserName); + @ <h1>Timeline Events (%s(stats_report_label_for_type())) + @ by year%s(includeMonth ? "/month" : "") + if( zUserName ){ + @ for user %h(zUserName) + } + @ </h1> + @ <table class='statistics-report-table-events' border='0' cellpadding='2' + @ cellspacing='0' id='statsTable'> + @ <thead> + @ <th>%s(zTimeLabel)</th> + @ <th>Events</th> + @ <th width='90%%'><!-- relative commits graph --></th> + @ </thead><tbody> + /* + Run the query twice. The first time we calculate the maximum + number of events for a given row. Maybe someone with better SQL + Fu can re-implement this with a single query. + */ + while( SQLITE_ROW == db_step(&query) ){ + const int nCount = db_column_int(&query, 1); + if(nCount>nMaxEvents){ + nMaxEvents = nCount; + } + ++iterations; + } + db_reset(&query); + while( SQLITE_ROW == db_step(&query) ){ + const char *zTimeframe = db_column_text(&query, 0); + const int nCount = db_column_int(&query, 1); + int nSize = nCount + ? (int)(100 * nCount / nMaxEvents) + : 1; + showYearTotal = 0; + if(!nSize) nSize = 1; + if(includeMonth){ + /* For Month/year view, add a separator for each distinct year. */ + if(!*zPrevYear || + (0!=fossil_strncmp(zPrevYear,zTimeframe,4))){ + showYearTotal = *zPrevYear; + if(showYearTotal){ + rowClass = ++nRowNumber % 2; + @ <tr class='row%d(rowClass)'> + @ <td></td> + @ <td colspan='2'>Yearly total: %d(nEventsPerYear)</td> + @</tr> + showYearTotal = 0; + } + nEventsPerYear = 0; + memcpy(zPrevYear,zTimeframe,4); + rowClass = ++nRowNumber % 2; + @ <tr class='row%d(rowClass)'> + @ <th colspan='3' class='statistics-report-row-year'>%s(zPrevYear)</th> + @ </tr> + } + } + rowClass = ++nRowNumber % 2; + nEventTotal += nCount; + nEventsPerYear += nCount; + @<tr class='row%d(rowClass)'> + @ <td> + if(includeMonth){ + cgi_printf("<a href='%R/timeline?" 
+ "ym=%t&n=%d&y=%s", + zTimeframe, nCount, + statsReportTimelineYFlag ); + /* Reminder: n=nCount is not actually correct for bymonth unless + that was the only user who caused events. + */ + if( zUserName ){ + cgi_printf("&u=%t", zUserName); + } + cgi_printf("' target='_new'>%s</a>",zTimeframe); + }else { + cgi_printf("<a href='?view=byweek&y=%s&type=%c", + zTimeframe, (char)statsReportType); + if( zUserName ){ + cgi_printf("&u=%t", zUserName); + } + cgi_printf("'>%s</a>", zTimeframe); + } + @ </td><td>%d(nCount)</td> + @ <td> + @ <div class='statistics-report-graph-line' + @ style='width:%d(nSize)%%;'> </div> + @ </td> + @</tr> + if(includeWeeks){ + /* This part works fine for months but it terribly slow (4.5s on my PC), + so it's only shown for by-year for now. Suggestions/patches for + a better/faster layout are welcomed. */ + @ <tr class='row%d(rowClass)'> + @ <td colspan='2' class='statistics-report-week-number-label'>Week #:</td> + @ <td class='statistics-report-week-of-year-list'> + stats_report_output_week_links(zTimeframe); + @ </td></tr> + } + + /* + Potential improvement: calculate the min/max event counts and + use percent-based graph bars. + */ + } + db_finalize(&query); + if(includeMonth && !showYearTotal && *zPrevYear){ + /* Add final year total separator. */ + rowClass = ++nRowNumber % 2; + @ <tr class='row%d(rowClass)'> + @ <td></td> + @ <td colspan='2'>Yearly total: %d(nEventsPerYear)</td> + @</tr> + } + @ </tbody></table> + if(nEventTotal){ + const char *zAvgLabel = includeMonth ? "month" : "year"; + int nAvg = iterations ? (nEventTotal/iterations) : 0; + @ <br><div>Total events: %d(nEventTotal) + @ <br>Average per active %s(zAvgLabel): %d(nAvg) + @ </div> + } + if( !includeMonth ){ + output_table_sorting_javascript("statsTable","tnx",-1); + } +} + +/* +** Implements the "byuser" view for /reports. +*/ +static void stats_report_by_user(){ + Stmt query = empty_Stmt; + int nRowNumber = 0; /* current TR number */ + int nEventTotal = 0; /* Total event count */ + int rowClass = 0; /* counter for alternating + row colors */ + int nMaxEvents = 1; /* max number of events for + all rows. */ + stats_report_init_view(); + @ <h1>Timeline Events + @ (%s(stats_report_label_for_type())) by User</h1> + db_multi_exec( + "CREATE TEMP VIEW piechart(amt,label) AS" + " SELECT count(*), ifnull(euser,user) FROM v_reports" + " GROUP BY ifnull(euser,user) ORDER BY count(*) DESC;" + ); + if( db_int(0, "SELECT count(*) FROM piechart")>=2 ){ + @ <center><svg width=700 height=400> + piechart_render(700, 400, PIE_OTHER|PIE_PERCENT); + @ </svg></centre><hr/> + } + @ <table class='statistics-report-table-events' border='0' + @ cellpadding='2' cellspacing='0' id='statsTable'> + @ <thead><tr> + @ <th>User</th> + @ <th>Events</th> + @ <th width='90%%'><!-- relative commits graph --></th> + @ </tr></thead><tbody> + db_prepare(&query, + "SELECT ifnull(euser,user), " + "COUNT(*) AS eventCount " + "FROM v_reports " + "GROUP BY ifnull(euser,user) ORDER BY eventCount DESC"); + while( SQLITE_ROW == db_step(&query) ){ + const int nCount = db_column_int(&query, 1); + if(nCount>nMaxEvents){ + nMaxEvents = nCount; + } + } + db_reset(&query); + while( SQLITE_ROW == db_step(&query) ){ + const char *zUser = db_column_text(&query, 0); + const int nCount = db_column_int(&query, 1); + char y = (char)statsReportType; + int nSize = nCount + ? (int)(100 * nCount / nMaxEvents) + : 0; + if(!nCount) continue /* arguable! Possible? 
*/; + else if(!nSize) nSize = 1; + rowClass = ++nRowNumber % 2; + nEventTotal += nCount; + @ <tr class='row%d(rowClass)'> + @ <td> + @ <a href="?view=bymonth&user=%h(zUser)&type=%c(y)">%h(zUser)</a> + @ </td><td data-sortkey='%08x(-nCount)'>%d(nCount)</td> + @ <td> + @ <div class='statistics-report-graph-line' + @ style='width:%d(nSize)%%;'> </div> + @ </td> + @</tr> + /* + Potential improvement: calculate the min/max event counts and + use percent-based graph bars. + */ + } + @ </tbody></table> + db_finalize(&query); + output_table_sorting_javascript("statsTable","tkx",2); +} + +/* +** Implements the "byfile" view for /reports. If zUserName is not NULL then the +** report is restricted to events created by the named user account. +*/ +static void stats_report_by_file(const char *zUserName){ + Stmt query; + int mxEvent = 1; /* max number of events across all rows */ + int nRowNumber = 0; + + db_multi_exec( + "CREATE TEMP TABLE statrep(filename, cnt);" + "INSERT INTO statrep(filename, cnt)" + " SELECT filename.name, count(distinct mlink.mid)" + " FROM filename, mlink, event" + " WHERE filename.fnid=mlink.fnid" + " AND mlink.mid=event.objid" + " AND ifnull(coalesce(euser,user,'')=%Q,1)" + " GROUP BY 1", zUserName + ); + db_prepare(&query, + "SELECT filename, cnt FROM statrep ORDER BY cnt DESC, filename /*sort*/" + ); + mxEvent = db_int(1, "SELECT max(cnt) FROM statrep"); + @ <h1>Check-ins Per File + if( zUserName ){ + @ for user %h(zUserName) + } + @ </h1> + @ <table class='statistics-report-table-events' border='0' + @ cellpadding='2' cellspacing='0' id='statsTable'> + @ <thead><tr> + @ <th>File</th> + @ <th>Check-ins</th> + @ <th width='90%%'><!-- relative commits graph --></th> + @ </tr></thead><tbody> + while( SQLITE_ROW == db_step(&query) ){ + const char *zFile = db_column_text(&query, 0); + const int n = db_column_int(&query, 1); + int sz; + if( n<=0 ) continue; + sz = (int)(100*n/mxEvent); + if( sz==0 ) sz = 1; + @<tr class='row%d(++nRowNumber%2)'> + @ <td>%z(href("%R/finfo?name=%T",zFile))%h(zFile)</a></td> + @ <td>%d(n)</td> + @ <td> + @ <div class='statistics-report-graph-line' + @ style='width:%d(sz)%%;'> </div> + @ </td> + @</tr> + } + @ </tbody></table> + db_finalize(&query); + output_table_sorting_javascript("statsTable","tNx",2); +} + +/* +** Implements the "byweekday" view for /reports. If zUserName is not NULL then +** the report is restricted to events created by the named user account. +*/ +static void stats_report_day_of_week(const char *zUserName){ + Stmt query = empty_Stmt; + int nRowNumber = 0; /* current TR number */ + int nEventTotal = 0; /* Total event count */ + int rowClass = 0; /* counter for alternating + row colors */ + int nMaxEvents = 1; /* max number of events for + all rows. 
*/ + Blob userFilter = empty_blob; /* Optional user=johndoe query string */ + static const char *const daysOfWeek[] = { + "Sunday", "Monday", "Tuesday", "Wednesday", + "Thursday", "Friday", "Saturday" + }; + + stats_report_init_view(); + if( zUserName ){ + blob_appendf(&userFilter, "user=%s", zUserName); + } + db_prepare(&query, + "SELECT cast(strftime('%%w', mtime) AS INTEGER) dow," + " COUNT(*) AS eventCount" + " FROM v_reports" + " WHERE ifnull(coalesce(euser,user,'')=%Q,1)" + " GROUP BY dow ORDER BY dow", zUserName); + @ <h1>Timeline Events (%h(stats_report_label_for_type())) by Day of the Week + if( zUserName ){ + @ for user %h(zUserName) + } + @ </h1> + db_multi_exec( + "CREATE TEMP VIEW piechart(amt,label) AS" + " SELECT count(*)," + " CASE cast(strftime('%%w', mtime) AS INT)" + " WHEN 0 THEN 'Sunday'" + " WHEN 1 THEN 'Monday'" + " WHEN 2 THEN 'Tuesday'" + " WHEN 3 THEN 'Wednesday'" + " WHEN 4 THEN 'Thursday'" + " WHEN 5 THEN 'Friday'" + " WHEN 6 THEN 'Saturday'" + " ELSE 'ERROR'" + " END" + " FROM v_reports" + " WHERE ifnull(coalesce(euser,user,'')=%Q,1)" + " GROUP BY 2 ORDER BY cast(strftime('%%w', mtime) AS INT);" + , zUserName + ); + if( db_int(0, "SELECT count(*) FROM piechart")>=2 ){ + @ <center><svg width=700 height=400> + piechart_render(700, 400, PIE_OTHER|PIE_PERCENT); + @ </svg></centre><hr/> + } + @ <table class='statistics-report-table-events' border='0' + @ cellpadding='2' cellspacing='0' id='statsTable'> + @ <thead><tr> + @ <th>DoW</th> + @ <th>Day</th> + @ <th>Events</th> + @ <th width='90%%'><!-- relative commits graph --></th> + @ </tr></thead><tbody> + while( SQLITE_ROW == db_step(&query) ){ + const int nCount = db_column_int(&query, 1); + if(nCount>nMaxEvents){ + nMaxEvents = nCount; + } + } + db_reset(&query); + while( SQLITE_ROW == db_step(&query) ){ + const int dayNum =db_column_int(&query, 0); + const int nCount = db_column_int(&query, 1); + int nSize = nCount + ? (int)(100 * nCount / nMaxEvents) + : 0; + if(!nCount) continue /* arguable! Possible? */; + else if(!nSize) nSize = 1; + rowClass = ++nRowNumber % 2; + nEventTotal += nCount; + @<tr class='row%d(rowClass)'> + @ <td>%d(dayNum)</td> + @ <td>%s(daysOfWeek[dayNum])</td> + @ <td>%d(nCount)</td> + @ <td> + @ <div class='statistics-report-graph-line' + @ style='width:%d(nSize)%%;'> </div> + @ </td> + @</tr> + } + @ </tbody></table> + db_finalize(&query); + output_table_sorting_javascript("statsTable","ntnx",1); +} + + +/* +** Helper for stats_report_by_month_year(), which generates a list of +** week numbers. zTimeframe should be either a timeframe in the form YYYY +** or YYYY-MM. If zUserName is not NULL then the report is restricted to events +** created by the named user account. +*/ +static void stats_report_year_weeks(const char *zUserName){ + const char *zYear = P("y"); /* Year for which report shown */ + Stmt q; + int nMaxEvents = 1; /* max number of events for + all rows. */ + int iterations = 0; /* # of active time periods. 
*/ + int rowCount = 0; + int total = 0; + + stats_report_init_view(); + style_submenu_sql("y", "Year:", + "WITH RECURSIVE a(b) AS (" + " SELECT substr(date('now'),1,4) UNION ALL" + " SELECT b-1 FROM a" + " WHERE b>0+(SELECT substr(date(min(mtime)),1,4) FROM event)" + ") SELECT b, b FROM a ORDER BY b DESC"); + if( zYear==0 || strlen(zYear)!=4 ){ + zYear = db_text("1970","SELECT substr(date('now'),1,4);"); + } + cgi_printf("<br/>"); + db_prepare(&q, + "SELECT DISTINCT strftime('%%W',mtime) AS wk, " + " count(*) AS n " + " FROM v_reports " + " WHERE %Q=substr(date(mtime),1,4) " + " AND mtime < current_timestamp " + " AND ifnull(coalesce(euser,user,'')=%Q,1)" + " GROUP BY wk ORDER BY wk DESC", zYear, zUserName); + @ <h1>Timeline events (%h(stats_report_label_for_type())) + @ for the calendar weeks of %h(zYear) + if( zUserName ){ + @ for user %h(zUserName) + } + @ </h1> + cgi_printf("<table class='statistics-report-table-events' " + "border='0' cellpadding='2' width='100%%' " + "cellspacing='0' id='statsTable'>"); + cgi_printf("<thead><tr>" + "<th>Week</th>" + "<th>Events</th>" + "<th width='90%%'><!-- relative commits graph --></th>" + "</tr></thead>" + "<tbody>"); + while( SQLITE_ROW == db_step(&q) ){ + const int nCount = db_column_int(&q, 1); + if(nCount>nMaxEvents){ + nMaxEvents = nCount; + } + ++iterations; + } + db_reset(&q); + while( SQLITE_ROW == db_step(&q) ){ + const char *zWeek = db_column_text(&q,0); + const int nCount = db_column_int(&q,1); + int nSize = nCount + ? (int)(100 * nCount / nMaxEvents) + : 0; + if(!nSize) nSize = 1; + total += nCount; + cgi_printf("<tr class='row%d'>", ++rowCount % 2 ); + cgi_printf("<td><a href='%R/timeline?yw=%t-%s&n=%d&y=%s", + zYear, zWeek, nCount, + statsReportTimelineYFlag); + if( zUserName ){ + cgi_printf("&u=%t",zUserName); + } + cgi_printf("'>%s</a></td>",zWeek); + + cgi_printf("<td>%d</td>",nCount); + cgi_printf("<td>"); + if(nCount){ + cgi_printf("<div class='statistics-report-graph-line'" + "style='width:%d%%;'> </div>", + nSize); + } + cgi_printf("</td></tr>"); + } + db_finalize(&q); + cgi_printf("</tbody></table>"); + if(total){ + int nAvg = iterations ? (total/iterations) : 0; + cgi_printf("<br><div>Total events: %d<br>" + "Average per active week: %d</div>", + total, nAvg); + } + output_table_sorting_javascript("statsTable","tnx",-1); +} + +/* Report types +*/ +#define RPT_BYFILE 1 +#define RPT_BYMONTH 2 +#define RPT_BYUSER 3 +#define RPT_BYWEEK 4 +#define RPT_BYWEEKDAY 5 +#define RPT_BYYEAR 6 +#define RPT_NONE 0 /* None of the above */ + +/* +** WEBPAGE: reports +** +** Shows activity reports for the repository. +** +** Query Parameters: +** +** view=REPORT_NAME Valid values: bymonth, byyear, byuser +** user=NAME Restricts statistics to the given user +** type=TYPE Restricts the report to a specific event type: +** ci (check-in), w (wiki), t (ticket), g (tag) +** Defaulting to all event types. +** +** The view-specific query parameters include: +** +** view=byweek: +** +** y=YYYY The year to report (default is the server's +** current year). +*/ +void stats_report_page(){ + const char *zView = P("view"); /* Which view/report to show. 
*/ + int eType = RPT_NONE; /* Numeric code for view/report to show */ + int i; /* Loop counter */ + const char *zUserName; /* Name of user */ + const char *azView[16]; /* Drop-down menu of view types */ + static const struct { + const char *zName; /* Name of view= screen type */ + const char *zVal; /* Value of view= query parameter */ + int eType; /* Corresponding RPT_* define */ + } aViewType[] = { + { "File Changes","byfile", RPT_BYFILE }, + { "By Month", "bymonth", RPT_BYMONTH }, + { "By User", "byuser", RPT_BYUSER }, + { "By Week", "byweek", RPT_BYWEEK }, + { "By Weekday", "byweekday", RPT_BYWEEKDAY }, + { "By Year", "byyear", RPT_BYYEAR }, + }; + static const char *const azType[] = { + "a", "All Changes", + "ci", "Check-ins", + "g", "Tags", + "e", "Tech Notes", + "t", "Tickets", + "w", "Wiki" + }; + + login_check_credentials(); + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } + zUserName = P("user"); + if( zUserName==0 ) zUserName = P("u"); + if( zUserName && zUserName[0]==0 ) zUserName = 0; + if( zView==0 ){ + zView = "byuser"; + cgi_replace_query_parameter("view","byuser"); + } + for(i=0; i<ArraySize(aViewType); i++){ + if( fossil_strcmp(zView, aViewType[i].zVal)==0 ){ + eType = aViewType[i].eType; + break; + } + } + if( eType!=RPT_NONE ){ + int nView = 0; /* Slots used in azView[] */ + for(i=0; i<ArraySize(aViewType); i++){ + azView[nView++] = aViewType[i].zVal; + azView[nView++] = aViewType[i].zName; + } + if( eType!=RPT_BYFILE ){ + style_submenu_multichoice("type", ArraySize(azType)/2, azType, 0); + } + style_submenu_multichoice("view", nView/2, azView, 0); + if( eType!=RPT_BYUSER ){ + style_submenu_sql("user","User:", + "SELECT '', 'All Users' UNION ALL " + "SELECT x, x FROM (" + " SELECT DISTINCT trim(coalesce(euser,user)) AS x FROM event %s" + " ORDER BY 1 COLLATE nocase) WHERE x!=''", + eType==RPT_BYFILE ? "WHERE type='ci'" : "" + ); + } + } + style_submenu_element("Stats", "Stats", "%R/stat"); + style_header("Activity Reports"); + switch( eType ){ + case RPT_BYYEAR: + stats_report_by_month_year(0, 0, zUserName); + break; + case RPT_BYMONTH: + stats_report_by_month_year(1, 0, zUserName); + break; + case RPT_BYWEEK: + stats_report_year_weeks(zUserName); + break; + default: + case RPT_BYUSER: + stats_report_by_user(); + break; + case RPT_BYWEEKDAY: + stats_report_day_of_week(zUserName); + break; + case RPT_BYFILE: + stats_report_by_file(zUserName); + break; + } + style_footer(); +} Index: src/style.c ================================================================== --- src/style.c +++ src/style.c @@ -16,33 +16,218 @@ ******************************************************************************* ** ** This file contains code to implement the basic web page look and feel. ** */ +#include "VERSION.h" #include "config.h" #include "style.h" /* ** Elements of the submenu are collected into the following -** structure and displayed below the main menu by style_header(). +** structure and displayed below the main menu. +** +** Populate these structure with calls to +** +** style_submenu_element() +** style_submenu_entry() +** style_submenu_checkbox() +** style_submenu_multichoice() ** -** Populate this structure with calls to style_submenu_element() -** prior to calling style_header(). +** prior to calling style_footer(). The style_footer() routine +** will generate the appropriate HTML text just below the main +** menu. 
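+**
+** For example, a page implementation might add a plain button and a
+** drop-down before generating its footer (the values and local names
+** shown here are illustrative only):
+**
+**    style_submenu_element("Stats", "Repository Stats", "%R/stat");
+**    style_submenu_multichoice("type", nChoice, azChoice, 0);
+**    ...
+**    style_footer();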
*/ static struct Submenu { - const char *zLabel; + const char *zLabel; /* Button label */ const char *zTitle; - const char *zLink; + const char *zLink; /* Jump to this link when button is pressed */ } aSubmenu[30]; -static int nSubmenu = 0; +static int nSubmenu = 0; /* Number of buttons */ +static struct SubmenuCtrl { + const char *zName; /* Form query parameter */ + const char *zLabel; /* Label. Might be NULL for FF_MULTI */ + unsigned char eType; /* FF_ENTRY, FF_MULTI, FF_BINARY */ + unsigned char isDisabled; /* True if this control is grayed out */ + short int iSize; /* Width for FF_ENTRY. Count for FF_MULTI */ + const char *const *azChoice;/* value/display pairs for FF_MULTI */ + const char *zFalse; /* FF_BINARY label when false */ +} aSubmenuCtrl[20]; +static int nSubmenuCtrl = 0; +#define FF_ENTRY 1 +#define FF_MULTI 2 +#define FF_BINARY 3 /* ** Remember that the header has been generated. The footer is omitted ** if an error occurs before the header. */ static int headerHasBeenGenerated = 0; + +/* +** remember, if a sidebox was used +*/ +static int sideboxUsed = 0; + +/* +** Ad-unit styles. +*/ +static unsigned adUnitFlags = 0; + + +/* +** List of hyperlinks and forms that need to be resolved by javascript in +** the footer. +*/ +char **aHref = 0; +int nHref = 0; +int nHrefAlloc = 0; +char **aFormAction = 0; +int nFormAction = 0; + +/* +** Generate and return a anchor tag like this: +** +** <a href="URL"> +** or <a id="ID"> +** +** The form of the anchor tag is determined by the g.javascriptHyperlink +** variable. The href="URL" form is used if g.javascriptHyperlink is false. +** If g.javascriptHyperlink is true then the +** id="ID" form is used and javascript is generated in the footer to cause +** href values to be inserted after the page has loaded. If +** g.perm.History is false, then the <a id="ID"> form is still +** generated but the javascript is not generated so the links never +** activate. +** +** If the user lacks the Hyperlink (h) property and the "auto-hyperlink" +** setting is true, then g.perm.Hyperlink is changed from 0 to 1 and +** g.javascriptHyperlink is set to 1. The g.javascriptHyperlink defaults +** to 0 and only changes to one if the user lacks the Hyperlink (h) property +** and the "auto-hyperlink" setting is enabled. +** +** Filling in the href="URL" using javascript is a defense against bots. +** +** The name of this routine is deliberately kept short so that can be +** easily used within @-lines. Example: +** +** @ %z(href("%R/artifact/%s",zUuid))%h(zFN)</a> +** +** Note %z format. The string returned by this function is always +** obtained from fossil_malloc() so rendering it with %z will reclaim +** that memory space. +** +** There are two versions of this routine: href() does a plain hyperlink +** and xhref() adds extra attribute text. +** +** g.perm.Hyperlink is true if the user has the Hyperlink (h) property. +** Most logged in users should have this property, since we can assume +** that a logged in user is not a bot. Only "nobody" lacks g.perm.Hyperlink, +** typically. 
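+**
+** As an illustrative sketch (the attribute text and path shown are
+** examples only), xhref() is used the same way but takes the extra
+** attribute text as its first argument:
+**
+**    @ %z(xhref("target='_blank'","%R/doc/trunk/%s",zFile))%h(zFile)</a>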
+*/ +char *xhref(const char *zExtra, const char *zFormat, ...){ + char *zUrl; + va_list ap; + va_start(ap, zFormat); + zUrl = vmprintf(zFormat, ap); + va_end(ap); + if( g.perm.Hyperlink && !g.javascriptHyperlink ){ + char *zHUrl = mprintf("<a %s href=\"%h\">", zExtra, zUrl); + fossil_free(zUrl); + return zHUrl; + } + if( nHref>=nHrefAlloc ){ + nHrefAlloc = nHrefAlloc*2 + 10; + aHref = fossil_realloc(aHref, nHrefAlloc*sizeof(aHref[0])); + } + aHref[nHref++] = zUrl; + return mprintf("<a %s id='a%d' href='%R/honeypot'>", zExtra, nHref); +} +char *href(const char *zFormat, ...){ + char *zUrl; + va_list ap; + va_start(ap, zFormat); + zUrl = vmprintf(zFormat, ap); + va_end(ap); + if( g.perm.Hyperlink && !g.javascriptHyperlink ){ + char *zHUrl = mprintf("<a href=\"%h\">", zUrl); + fossil_free(zUrl); + return zHUrl; + } + if( nHref>=nHrefAlloc ){ + nHrefAlloc = nHrefAlloc*2 + 10; + aHref = fossil_realloc(aHref, nHrefAlloc*sizeof(aHref[0])); + } + aHref[nHref++] = zUrl; + return mprintf("<a id='a%d' href='%R/honeypot'>", nHref); +} + +/* +** Generate <form method="post" action=ARG>. The ARG value is inserted +** by javascript. +*/ +void form_begin(const char *zOtherArgs, const char *zAction, ...){ + char *zLink; + va_list ap; + if( zOtherArgs==0 ) zOtherArgs = ""; + va_start(ap, zAction); + zLink = vmprintf(zAction, ap); + va_end(ap); + if( g.perm.Hyperlink && !g.javascriptHyperlink ){ + @ <form method="POST" action="%z(zLink)" %s(zOtherArgs)> + }else{ + int n; + aFormAction = fossil_realloc(aFormAction, (nFormAction+1)*sizeof(char*)); + aFormAction[nFormAction++] = zLink; + n = nFormAction; + @ <form id="form%d(n)" method="POST" action='%R/login' %s(zOtherArgs)> + } +} + +/* +** Generate javascript that will set the href= attribute on all anchors. +*/ +void style_resolve_href(void){ + int i; + int nDelay = db_get_int("auto-hyperlink-delay",10); + if( !g.perm.Hyperlink ) return; + if( nHref==0 && nFormAction==0 ) return; + @ <script> + @ function setAllHrefs(){ + if( g.javascriptHyperlink ){ + for(i=0; i<nHref; i++){ + @ gebi("a%d(i+1)").href="%s(aHref[i])"; + } + } + for(i=0; i<nFormAction; i++){ + @ gebi("form%d(i+1)").action="%s(aFormAction[i])"; + } + @ } + if( sqlite3_strglob("*Opera Mini/[1-9]*", P("HTTP_USER_AGENT"))==0 ){ + /* Special case for Opera Mini, which executes JS server-side */ + @ var isOperaMini = Object.prototype.toString.call(window.operamini) + @ === "[object OperaMini]"; + @ if( isOperaMini ){ + @ setTimeout("setAllHrefs();",%d(nDelay)); + @ } + }else if( db_get_boolean("auto-hyperlink-ishuman",0) && g.isHuman ){ + /* Active hyperlinks after a delay */ + @ setTimeout("setAllHrefs();",%d(nDelay)); + }else if( db_get_boolean("auto-hyperlink-mouseover",0) ){ + /* Require mouse movement before starting the teim that will + ** activating hyperlinks */ + @ document.getElementsByTagName("body")[0].onmousemove=function(){ + @ setTimeout("setAllHrefs();",%d(nDelay)); + @ this.onmousemove = null; + @ } + }else{ + /* Active hyperlinks after a delay */ + @ setTimeout("setAllHrefs();",%d(nDelay)); + } + @ </script> +} /* ** Add a new element to the submenu */ void style_submenu_element( @@ -52,358 +237,1395 @@ ... ){ va_list ap; assert( nSubmenu < sizeof(aSubmenu)/sizeof(aSubmenu[0]) ); aSubmenu[nSubmenu].zLabel = zLabel; - aSubmenu[nSubmenu].zTitle = zTitle; + aSubmenu[nSubmenu].zTitle = zTitle ? 
zTitle : zLabel; va_start(ap, zLink); aSubmenu[nSubmenu].zLink = vmprintf(zLink, ap); va_end(ap); nSubmenu++; } +void style_submenu_entry( + const char *zName, /* Query parameter name */ + const char *zLabel, /* Label before the entry box */ + int iSize, /* Size of the entry box */ + int isDisabled /* True if disabled */ +){ + assert( nSubmenuCtrl < ArraySize(aSubmenuCtrl) ); + aSubmenuCtrl[nSubmenuCtrl].zName = zName; + aSubmenuCtrl[nSubmenuCtrl].zLabel = zLabel; + aSubmenuCtrl[nSubmenuCtrl].iSize = iSize; + aSubmenuCtrl[nSubmenuCtrl].isDisabled = isDisabled; + aSubmenuCtrl[nSubmenuCtrl].eType = FF_ENTRY; + nSubmenuCtrl++; +} +void style_submenu_binary( + const char *zName, /* Query parameter name */ + const char *zTrue, /* Label to show when parameter is true */ + const char *zFalse, /* Label to show when the parameter is false */ + int isDisabled /* True if this control is disabled */ +){ + assert( nSubmenuCtrl < ArraySize(aSubmenuCtrl) ); + aSubmenuCtrl[nSubmenuCtrl].zName = zName; + aSubmenuCtrl[nSubmenuCtrl].zLabel = zTrue; + aSubmenuCtrl[nSubmenuCtrl].zFalse = zFalse; + aSubmenuCtrl[nSubmenuCtrl].isDisabled = isDisabled; + aSubmenuCtrl[nSubmenuCtrl].eType = FF_BINARY; + nSubmenuCtrl++; +} +void style_submenu_multichoice( + const char *zName, /* Query parameter name */ + int nChoice, /* Number of options */ + const char *const *azChoice,/* value/display pairs. 2*nChoice entries */ + int isDisabled /* True if this control is disabled */ +){ + assert( nSubmenuCtrl < ArraySize(aSubmenuCtrl) ); + aSubmenuCtrl[nSubmenuCtrl].zName = zName; + aSubmenuCtrl[nSubmenuCtrl].iSize = nChoice; + aSubmenuCtrl[nSubmenuCtrl].azChoice = azChoice; + aSubmenuCtrl[nSubmenuCtrl].isDisabled = isDisabled; + aSubmenuCtrl[nSubmenuCtrl].eType = FF_MULTI; + nSubmenuCtrl++; +} +void style_submenu_sql( + const char *zName, /* Query parameter name */ + const char *zLabel, /* Label on the control */ + const char *zFormat, /* Format string for SQL command for choices */ + ... /* Arguments to the format string */ +){ + Stmt q; + int n = 0; + int nAlloc = 0; + char **az = 0; + va_list ap; + + va_start(ap, zFormat); + db_vprepare(&q, 0, zFormat, ap); + va_end(ap); + while( SQLITE_ROW==db_step(&q) ){ + if( n+2>=nAlloc ){ + nAlloc += nAlloc + 20; + az = fossil_realloc(az, sizeof(char*)*nAlloc); + } + az[n++] = fossil_strdup(db_column_text(&q,0)); + az[n++] = fossil_strdup(db_column_text(&q,1)); + } + db_finalize(&q); + if( n>0 ){ + aSubmenuCtrl[nSubmenuCtrl].zName = zName; + aSubmenuCtrl[nSubmenuCtrl].zLabel = zLabel; + aSubmenuCtrl[nSubmenuCtrl].iSize = n/2; + aSubmenuCtrl[nSubmenuCtrl].azChoice = (const char *const *)az; + aSubmenuCtrl[nSubmenuCtrl].isDisabled = 0; + aSubmenuCtrl[nSubmenuCtrl].eType = FF_MULTI; + nSubmenuCtrl++; + } +} + /* ** Compare two submenu items for sorting purposes */ static int submenuCompare(const void *a, const void *b){ const struct Submenu *A = (const struct Submenu*)a; const struct Submenu *B = (const struct Submenu*)b; - return strcmp(A->zLabel, B->zLabel); + return fossil_strcmp(A->zLabel, B->zLabel); +} + +/* Use this for the $current_page variable if it is not NULL. If it is +** NULL then use g.zPath. 
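+** This is normally set via style_set_current_page(), for example
+** (arguments shown are illustrative):
+**
+**    style_set_current_page("%T?name=%T", g.zPath, zPageName);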
+*/ +static char *local_zCurrentPage = 0; + +/* +** Set the desired $current_page to something other than g.zPath +*/ +void style_set_current_page(const char *zFormat, ...){ + fossil_free(local_zCurrentPage); + if( zFormat==0 ){ + local_zCurrentPage = 0; + }else{ + va_list ap; + va_start(ap, zFormat); + local_zCurrentPage = vmprintf(zFormat, ap); + va_end(ap); + } +} + +/* +** Create a TH1 variable containing the URL for the specified config resource. +** The resulting variable name will be of the form $[zVarPrefix]_url. +*/ +static void url_var( + const char *zVarPrefix, + const char *zConfigName, + const char *zPageName +){ + char *zVarName = mprintf("%s_url", zVarPrefix); + char *zUrl = mprintf("%R/%s?id=%x", zPageName, + skin_id(zConfigName)); + Th_Store(zVarName, zUrl); + free(zUrl); + free(zVarName); +} + +/* +** Create a TH1 variable containing the URL for the specified config image. +** The resulting variable name will be of the form $[zImageName]_image_url. +*/ +static void image_url_var(const char *zImageName){ + char *zVarPrefix = mprintf("%s_image", zImageName); + char *zConfigName = mprintf("%s-image", zImageName); + url_var(zVarPrefix, zConfigName, zImageName); + free(zVarPrefix); + free(zConfigName); } /* ** Draw the header. */ void style_header(const char *zTitleFormat, ...){ va_list ap; char *zTitle; - const char *zHeader = db_get("header", (char*)zDefaultHeader); + const char *zHeader = skin_get("header"); login_check_credentials(); va_start(ap, zTitleFormat); zTitle = vmprintf(zTitleFormat, ap); va_end(ap); - + cgi_destination(CGI_HEADER); - cgi_printf("%s", - "<!DOCTYPE html PUBLIC \"-//W3C/DTD XHTML 1.0 Strict//EN\"" - " \"http://www.x3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">"); - + + @ <!DOCTYPE html> + if( g.thTrace ) Th_Trace("BEGIN_HEADER<br />\n", -1); /* Generate the header up through the main menu */ Th_Store("project_name", db_get("project-name","Unnamed Fossil Project")); Th_Store("title", zTitle); Th_Store("baseurl", g.zBaseURL); + Th_Store("secureurl", login_wants_https_redirect()? g.zHttpsURL: g.zBaseURL); + Th_Store("home", g.zTop); Th_Store("index_page", db_get("index-page","/home")); - Th_Store("current_page", g.zPath); + if( local_zCurrentPage==0 ) style_set_current_page("%T", g.zPath); + Th_Store("current_page", local_zCurrentPage); + Th_Store("csrf_token", g.zCsrfToken); + Th_Store("release_version", RELEASE_VERSION); Th_Store("manifest_version", MANIFEST_VERSION); Th_Store("manifest_date", MANIFEST_DATE); - if( g.zLogin ){ + Th_Store("compiler_name", COMPILER_NAME); + url_var("stylesheet", "css", "style.css"); + image_url_var("logo"); + image_url_var("background"); + if( !login_is_nobody() ){ Th_Store("login", g.zLogin); } if( g.thTrace ) Th_Trace("BEGIN_HEADER_SCRIPT<br />\n", -1); Th_Render(zHeader); if( g.thTrace ) Th_Trace("END_HEADER<br />\n", -1); Th_Unstore("title"); /* Avoid collisions with ticket field names */ cgi_destination(CGI_BODY); g.cgiOutput = 1; headerHasBeenGenerated = 1; + sideboxUsed = 0; + + /* Make the gebi(x) function available as an almost-alias for + ** document.getElementById(x) (except that it throws an error + ** if the element is not found). + ** + ** Maintenance note: this function must of course be available + ** before it is called. It "should" go in the HEAD so that client + ** HEAD code can make use of it, but because the client can replace + ** the HEAD, and some fossil pages rely on gebi(), we put it here. 
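+  ** Client code can then write, for example, gebi("f01").submit()
+  ** rather than spelling out document.getElementById("f01").submit().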
+ */ + @ <script> + @ function gebi(x){ + @ if(x.substr(0,1)=='#') x = x.substr(1); + @ var e = document.getElementById(x); + @ if(!e) throw new Error('Expecting element with ID '+x); + @ else return e;} + @ </script> +} + +#if INTERFACE +/* Allowed parameters for style_adunit() */ +#define ADUNIT_OFF 0x0001 /* Do not allow ads on this page */ +#define ADUNIT_RIGHT_OK 0x0002 /* Right-side vertical ads ok here */ +#endif + +/* +** Various page implementations can invoke this interface to let the +** style manager know what kinds of ads are appropriate for this page. +*/ +void style_adunit_config(unsigned int mFlags){ + adUnitFlags = mFlags; +} + +/* +** Return the text of an ad-unit, if one should be rendered. Return +** NULL if no ad-unit is desired. +** +** The *pAdFlag value might be set to ADUNIT_RIGHT_OK if this is +** a right-hand vertical ad. +*/ +static const char *style_adunit_text(unsigned int *pAdFlag){ + const char *zAd = 0; + *pAdFlag = 0; + if( adUnitFlags & ADUNIT_OFF ) return 0; /* Disallow ads on this page */ + if( g.perm.Admin && db_get_boolean("adunit-omit-if-admin",0) ){ + return 0; + } + if( !login_is_nobody() + && fossil_strcmp(g.zLogin,"anonymous")!=0 + && db_get_boolean("adunit-omit-if-user",0) + ){ + return 0; + } + if( (adUnitFlags & ADUNIT_RIGHT_OK)!=0 + && !fossil_all_whitespace(zAd = db_get("adunit-right", 0)) + && !cgi_body_contains("<table") + ){ + *pAdFlag = ADUNIT_RIGHT_OK; + return zAd; + }else if( !fossil_all_whitespace(zAd = db_get("adunit",0)) ){ + return zAd; + } + return 0; } /* ** Draw the footer at the bottom of the page. */ void style_footer(void){ const char *zFooter; + const char *zAd = 0; + unsigned int mAdFlags = 0; if( !headerHasBeenGenerated ) return; - + /* Go back and put the submenu at the top of the page. We delay the ** creation of the submenu until the end so that we can add elements ** to the submenu while generating page text. 
*/ cgi_destination(CGI_HEADER); - if( nSubmenu>0 ){ + if( nSubmenu+nSubmenuCtrl>0 ){ int i; + if( nSubmenuCtrl ){ + cgi_printf("<form id='f01' method='GET' action='%R/%s'>", g.zPath); + } @ <div class="submenu"> - qsort(aSubmenu, nSubmenu, sizeof(aSubmenu[0]), submenuCompare); - for(i=0; i<nSubmenu; i++){ - struct Submenu *p = &aSubmenu[i]; - if( p->zLink==0 ){ - @ <span class="label">%h(p->zLabel)</span> - }else{ - @ <a class="label" href="%s(p->zLink)">%h(p->zLabel)</a> + if( nSubmenu>0 ){ + qsort(aSubmenu, nSubmenu, sizeof(aSubmenu[0]), submenuCompare); + for(i=0; i<nSubmenu; i++){ + struct Submenu *p = &aSubmenu[i]; + if( p->zLink==0 ){ + @ <span class="label">%h(p->zLabel)</span> + }else{ + @ <a class="label" href="%h(p->zLink)">%h(p->zLabel)</a> + } + } + } + if( nSubmenuCtrl>0 ){ + for(i=0; i<nSubmenuCtrl; i++){ + const char *zQPN = aSubmenuCtrl[i].zName; + const char *zDisabled = " disabled"; + if( !aSubmenuCtrl[i].isDisabled ){ + zDisabled = ""; + cgi_tag_query_parameter(zQPN); + } + switch( aSubmenuCtrl[i].eType ){ + case FF_ENTRY: { + cgi_printf( + "<span class='submenuctrl'>" + " %h<input type='text' name='%s' size='%d' maxlength='%d'" + " value='%h'%s></span>\n", + aSubmenuCtrl[i].zLabel, + zQPN, + aSubmenuCtrl[i].iSize, aSubmenuCtrl[i].iSize, + PD(zQPN,""), + zDisabled + ); + break; + } + case FF_MULTI: { + int j; + const char *zVal = P(zQPN); + if( aSubmenuCtrl[i].zLabel ){ + cgi_printf(" %h", aSubmenuCtrl[i].zLabel); + } + cgi_printf( + "<select class='submenuctrl' size='1' name='%s'%s " + "onchange='gebi(\"f01\").submit();'>\n", + zQPN, zDisabled + ); + for(j=0; j<aSubmenuCtrl[i].iSize*2; j+=2){ + const char *zQPV = aSubmenuCtrl[i].azChoice[j]; + cgi_printf( + "<option value='%h'%s>%h</option>\n", + zQPV, + fossil_strcmp(zVal,zQPV)==0 ? " selected" : "", + aSubmenuCtrl[i].azChoice[j+1] + ); + } + @ </select> + break; + } + case FF_BINARY: { + int isTrue = PB(zQPN); + cgi_printf( + "<select class='submenuctrl' size='1' name='%s'%s " + "onchange='gebi(\"f01\").submit();'>\n", + zQPN, zDisabled + ); + cgi_printf( + "<option value='1'%s>%h</option>\n", + isTrue ? " selected":"", aSubmenuCtrl[i].zLabel + ); + cgi_printf( + "<option value='0'%s>%h</option>\n", + (!isTrue) ? " selected":"", aSubmenuCtrl[i].zFalse + ); + @ </select> + break; + } + } } } @ </div> + if( nSubmenuCtrl ){ + cgi_query_parameters_to_hidden(); + cgi_tag_query_parameter(0); + @ </form> + } } - @ <div class="content"> + + zAd = style_adunit_text(&mAdFlags); + if( (mAdFlags & ADUNIT_RIGHT_OK)!=0 ){ + @ <div class="content adunit_right_container"> + @ <div class="adunit_right"> + cgi_append_content(zAd, -1); + @ </div> + }else{ + if( zAd ){ + @ <div class="adunit_banner"> + cgi_append_content(zAd, -1); + @ </div> + } + @ <div class="content"> + } cgi_destination(CGI_BODY); - /* Put the footer at the bottom of the page. - */ - @ </div><br clear="both"/> - zFooter = db_get("footer", (char*)zDefaultFooter); + if( sideboxUsed ){ + /* Put the footer at the bottom of the page. + ** the additional clear/both is needed to extend the content + ** part to the end of an optional sidebox. + */ + @ <div class="endContent"></div> + } + @ </div> + + /* Set the href= field on hyperlinks. Do this before the footer since + ** the footer will be generating </html> */ + style_resolve_href(); + + zFooter = skin_get("footer"); if( g.thTrace ) Th_Trace("BEGIN_FOOTER<br />\n", -1); Th_Render(zFooter); if( g.thTrace ) Th_Trace("END_FOOTER<br />\n", -1); - + /* Render trace log if TH1 tracing is enabled. 
*/ if( g.thTrace ){ - cgi_append_content("<font color=\"red\"><hr>\n", -1); + cgi_append_content("<span class=\"thTrace\"><hr />\n", -1); cgi_append_content(blob_str(&g.thLog), blob_size(&g.thLog)); - cgi_append_content("</font>\n", -1); + cgi_append_content("</span>\n", -1); } } /* ** Begin a side-box on the right-hand side of a page. The title and ** the width of the box are given as arguments. The width is usually ** a percentage of total screen width. */ void style_sidebox_begin(const char *zTitle, const char *zWidth){ - @ <table width="%s(zWidth)" align="right" border="1" cellpadding=5 - @ vspace=5 hspace=5> - @ <tr><td> - @ <b>%h(zTitle)</b> + sideboxUsed = 1; + @ <div class="sidebox" style="width:%s(zWidth)"> + @ <div class="sideboxTitle">%h(zTitle)</div> } /* End the side-box */ void style_sidebox_end(void){ - @ </td></tr></table> -} - -/* @-comment: // */ -/* -** The default page header. -*/ -const char zDefaultHeader[] = -@ <html> -@ <head> -@ <title>$<project_name>: $<title> -@ -@ -@ -@ -@
      -@ -@
      $</div> -@ <div class="status"><nobr><th1> -@ if {[info exists login]} { -@ puts "Logged in as $login" -@ } else { -@ puts "Not logged in" -@ } -@ </th1></nobr></div> -@ </div> -@ <div class="mainmenu"><th1> -@ html "<a href='$baseurl$index_page'>Home</a> " -@ if {[anycap jor]} { -@ html "<a href='$baseurl/timeline'>Timeline</a> " -@ } -@ if {[hascap oh]} { -@ html "<a href='$baseurl/dir?ci=tip'>Files</a> " -@ } -@ if {[hascap o]} { -@ html "<a href='$baseurl/leaves'>Leaves</a> " -@ html "<a href='$baseurl/brlist'>Branches</a> " -@ html "<a href='$baseurl/taglist'>Tags</a> " -@ } -@ if {[hascap r]} { -@ html "<a href='$baseurl/reportlist'>Tickets</a> " -@ } -@ if {[hascap j]} { -@ html "<a href='$baseurl/wiki'>Wiki</a> " -@ } -@ if {[hascap s]} { -@ html "<a href='$baseurl/setup'>Admin</a> " -@ } elseif {[hascap a]} { -@ html "<a href='$baseurl/setup_ulist'>Users</a> " -@ } -@ if {[info exists login]} { -@ html "<a href='$baseurl/login'>Logout</a> " -@ } else { -@ html "<a href='$baseurl/login'>Login</a> " -@ } -@ </th1></div> -; - -/* -** The default page footer -*/ -const char zDefaultFooter[] = -@ <div class="footer"> -@ Fossil version $manifest_version $manifest_date -@ </div> -@ </body></html> -; - -/* -** The default Cascading Style Sheet. -*/ -const char zDefaultCSS[] = -@ /* General settings for the entire page */ -@ body { -@ margin: 0ex 1ex; -@ padding: 0px; -@ background-color: white; -@ font-family: sans-serif; -@ } -@ -@ /* The project logo in the upper left-hand corner of each page */ -@ div.logo { -@ display: table-cell; -@ text-align: center; -@ vertical-align: bottom; -@ font-weight: bold; -@ color: #558195; -@ } -@ -@ /* The page title centered at the top of each page */ -@ div.title { -@ display: table-cell; -@ font-size: 2em; -@ font-weight: bold; -@ text-align: left; -@ padding: 0 0 0 1em; -@ color: #558195; -@ vertical-align: bottom; -@ width: 100%; -@ } -@ -@ /* The login status message in the top right-hand corner */ -@ div.status { -@ display: table-cell; -@ text-align: right; -@ vertical-align: bottom; -@ color: #558195; -@ font-size: 0.8em; -@ font-weight: bold; -@ } -@ -@ /* The header across the top of the page */ -@ div.header { -@ display: table; -@ width: 100%; -@ } -@ -@ /* The main menu bar that appears at the top of the page beneath -@ ** the header */ -@ div.mainmenu { -@ padding: 5px 10px 5px 10px; -@ font-size: 0.9em; -@ font-weight: bold; -@ text-align: center; -@ letter-spacing: 1px; -@ background-color: #558195; -@ color: white; -@ } -@ -@ /* The submenu bar that *sometimes* appears below the main menu */ -@ div.submenu { -@ padding: 3px 10px 3px 0px; -@ font-size: 0.9em; -@ text-align: center; -@ background-color: #456878; -@ color: white; -@ } -@ div.mainmenu a, div.mainmenu a:visited, div.submenu a, div.submenu a:visited { -@ padding: 3px 10px 3px 10px; -@ color: white; -@ text-decoration: none; -@ } -@ div.mainmenu a:hover, div.submenu a:hover { -@ color: #558195; -@ background-color: white; -@ } -@ -@ /* All page content from the bottom of the menu or submenu down to -@ ** the footer */ -@ div.content { -@ padding: 0ex 1ex 0ex 2ex; -@ } -@ -@ /* Some pages have section dividers */ -@ div.section { -@ margin-bottom: 0px; -@ margin-top: 1em; -@ padding: 1px 1px 1px 1px; -@ font-size: 1.2em; -@ font-weight: bold; -@ background-color: #558195; -@ color: white; -@ } -@ -@ /* The "Date" that occurs on the left hand side of timelines */ -@ div.divider { -@ background: #a1c4d4; -@ border: 2px #558195 solid; -@ font-size: 1em; font-weight: normal; 
-@ padding: .25em; -@ margin: .2em 0 .2em 0; -@ float: left; -@ clear: left; -@ } -@ -@ /* The footer at the very bottom of the page */ -@ div.footer { -@ font-size: 0.8em; -@ margin-top: 12px; -@ padding: 5px 10px 5px 10px; -@ text-align: right; -@ background-color: #558195; -@ color: white; -@ } -@ -@ /* Hyperlink colors in the footer */ -@ div.footer a { color: white; } -@ div.footer a:link { color: white; } -@ div.footer a:visited { color: white; } -@ div.footer a:hover { background-color: white; color: #558195; } -@ -@ /* <verbatim> blocks */ -@ pre.verbatim { -@ background-color: #f5f5f5; -@ padding: 0.5em; -@} -@ -@ /* The label/value pairs on (for example) the ci page */ -@ table.label-value th { -@ vertical-align: top; -@ text-align: right; -@ padding: 0.2ex 2ex; -@ } -; + @ </div> +} + + +/* The following table contains bits of default CSS that must +** be included if they are not found in the application-defined +** CSS. +*/ +const struct strctCssDefaults { + const char *elementClass; /* Name of element needed */ + const char *comment; /* Comment text */ + const char *value; /* CSS text */ +} cssDefaultList[] = { + { "div.sidebox", + "The nomenclature sidebox for branches,..", + @ float: right; + @ background-color: white; + @ border-width: medium; + @ border-style: double; + @ margin: 10px; + }, + { "div.sideboxTitle", + "The nomenclature title in sideboxes for branches,..", + @ display: inline; + @ font-weight: bold; + }, + { "div.sideboxDescribed", + "The defined element in sideboxes for branches,..", + @ display: inline; + @ font-weight: bold; + }, + { "span.disabled", + "The defined element in sideboxes for branches,..", + @ color: red; + }, + { "span.timelineDisabled", + "The suppressed duplicates lines in timeline, ..", + @ font-style: italic; + @ font-size: small; + }, + { "table.timelineTable", + "the format for the timeline data table", + @ border: 0; + @ border-collapse: collapse; + }, + { "td.timelineTableCell", + "the format for the timeline data cells", + @ vertical-align: top; + @ text-align: left; + }, + { "tr.timelineCurrent td.timelineTableCell", + "the format for the timeline data cell of the current checkout", + @ padding: .1em .2em; + @ border: 1px dashed #446979; + }, + { "tr.timelineSelected", + "The row in the timeline table that contains the entry of interest", + @ padding: .1em .2em; + @ border: 2px solid lightgray; + @ background-color: #ffc; + @ box-shadow: 4px 4px 2px #888; + }, + { "tr.timelineSpacer", + "An extra row inserted to give vertical space between two rows", + @ height: 1ex; + }, + { "span.timelineLeaf", + "the format for the timeline leaf marks", + @ font-weight: bold; + }, + { "a.timelineHistLink", + "the format for the timeline version links", + @ + }, + { "span.timelineHistDsp", + "the format for the timeline version display(no history permission!)", + @ font-weight: bold; + }, + { "td.timelineTime", + "the format for the timeline time display", + @ vertical-align: top; + @ text-align: right; + @ white-space: nowrap; + }, + { "td.timelineGraph", + "the format for the graph placeholder cells in timelines", + @ width: 20px; + @ text-align: left; + @ vertical-align: top; + }, + { ".tl-canvas", + "timeline graph canvas", + @ margin: 0 6px 0 10px; + }, + { ".tl-rail", + "maximum rail spacing", + @ width: 18px; + }, + { ".tl-mergeoffset", + "maximum spacing between merge risers and primary child risers", + @ width: 2px; + }, + { ".tl-nodemark", + "adjusts the vertical position of graph nodes", + @ margin-top: 5px; + }, + { ".tl-node", + 
"commit node", + @ width: 10px; + @ height: 10px; + @ border: 1px solid #000; + @ background: #fff; + @ cursor: pointer; + }, + { ".tl-node.leaf:after", + "leaf commit marker", + @ content: ''; + @ position: absolute; + @ top: 3px; + @ left: 3px; + @ width: 4px; + @ height: 4px; + @ background: #000; + }, + { ".tl-node.sel:after", + "selected commit node marker", + @ content: ''; + @ position: absolute; + @ top: 2px; + @ left: 2px; + @ width: 6px; + @ height: 6px; + @ background: red; + }, + { ".tl-arrow", + "arrow", + @ width: 0; + @ height: 0; + @ transform: scale(.999); + @ border: 0 solid transparent; + }, + { ".tl-arrow.u", + "up arrow", + @ margin-top: -1px; + @ border-width: 0 3px; + @ border-bottom: 7px solid #000; + }, + { ".tl-arrow.u.sm", + "small up arrow", + @ border-bottom: 5px solid #000; + }, + { ".tl-line", + "line", + @ background: #000; + @ width: 2px; + }, + { ".tl-arrow.merge", + "merge arrow", + @ height: 1px; + @ border-width: 2px 0; + }, + { ".tl-arrow.merge.l", + "left merge arrow", + @ border-right: 3px solid #000; + }, + { ".tl-arrow.merge.r", + "right merge arrow", + @ border-left: 3px solid #000; + }, + { ".tl-line.merge", + "merge line", + @ width: 1px; + }, + { ".tl-arrow.warp", + "timewarp arrow", + @ margin-left: 1px; + @ border-width: 3px 0; + @ border-left: 7px solid #600000; + }, + { ".tl-line.warp", + "timewarp line", + @ background: #600000; + }, + { "a.tagLink", + "the format for the tag links", + @ + }, + { "span.tagDsp", + "the format for the tag display(no history permission!)", + @ font-weight: bold; + }, + { "span.wikiError", + "the format for wiki errors", + @ font-weight: bold; + @ color: red; + }, + { "span.infoTagCancelled", + "the format for fixed/canceled tags,..", + @ font-weight: bold; + @ text-decoration: line-through; + }, + { "span.infoTag", + "the format for tags,..", + @ font-weight: bold; + }, + { "span.wikiTagCancelled", + "the format for fixed/cancelled tags,.. 
on wiki pages", + @ text-decoration: line-through; + }, + { "table.browser", + "format for the file display table", + @ /* the format for wiki errors */ + @ width: 100%; + @ border: 0; + }, + { "td.browser", + "format for cells in the file browser", + @ width: 24%; + @ vertical-align: top; + }, + { ".filetree", + "tree-view file browser", + @ margin: 1em 0; + @ line-height: 1.5; + }, + { + ".filetree > ul", + "tree-view top-level list", + @ display: inline-block; + }, + { ".filetree ul", + "tree-view lists", + @ margin: 0; + @ padding: 0; + @ list-style: none; + }, + { ".filetree ul.collapsed", + "tree-view collapsed list", + @ display: none; + }, + { ".filetree ul ul", + "tree-view lists below the root", + @ position: relative; + @ margin: 0 0 0 21px; + }, + { ".filetree li", + "tree-view lists items", + @ position: relative; + @ margin: 0; + @ padding: 0; + }, + { ".filetree li li:before", + "tree-view node lines", + @ content: ''; + @ position: absolute; + @ top: -.8em; + @ left: -14px; + @ width: 14px; + @ height: 1.5em; + @ border-left: 2px solid #aaa; + @ border-bottom: 2px solid #aaa; + }, + { ".filetree li > ul:before", + "tree-view directory lines", + @ content: ''; + @ position: absolute; + @ top: -1.5em; + @ bottom: 0; + @ left: -35px; + @ border-left: 2px solid #aaa; + }, + { ".filetree li.last > ul:before", + "hide lines for last-child directories", + @ display: none; + }, + { ".filetree a", + "tree-view links", + " position: relative;\n" + " z-index: 1;\n" + " display: table-cell;\n" + " min-height: 16px;\n" + " padding-left: 21px;\n" + " background-image: url(data:image/gif;base64,R0lGODlhEAAQAJEAAP" + "\\/\\/\\/yEhIf\\/\\/\\/wAAACH5BAEHAAIALAAAAAAQABAAAAIvlIKpxqcfmg" + "OUvoaqDSCxrEEfF14GqFXImJZsu73wepJzVMNxrtNTj3NATMKhpwAAOw==);\n" + " background-position: center left;\n" + " background-repeat: no-repeat;\n" + }, + { "ul.browser", + "list of files in the 'flat-view' file browser", + @ list-style-type: none; + @ padding: 10px; + @ margin: 0px; + @ white-space: nowrap; + }, + { "ul.browser li.file", + "List element in the 'flat-view' file browser for a file", + " background-image: url(data:image/gif;base64,R0lGODlhEAAQAJEAAP" + "\\/\\/\\/yEhIf\\/\\/\\/wAAACH5BAEHAAIALAAAAAAQABAAAAIvlIKpxqcfm" + "gOUvoaqDSCxrEEfF14GqFXImJZsu73wepJzVMNxrtNTj3NATMKhpwAAOw==);\n" + " background-repeat: no-repeat;\n" + " background-position: 0px center;\n" + " padding-left: 20px;\n" + " padding-top: 2px;\n" + }, + { "ul.browser li.dir", + "List element in the 'flat-view file browser for a directory", + " background-image: url(data:image/gif;base64,R0lGODlhEAAQAJEAAP/WVCIi" + "Iv\\/\\/\\/wAAACH5BAEHAAIALAAAAAAQABAAAAInlI9pwa3XYniCgQtkrAFfLXkiFo1jaX" + "po+jUs6b5Z/K4siDu5RPUFADs=);\n" + " background-repeat: no-repeat;\n" + " background-position: 0px center;\n" + " padding-left: 20px;\n" + " padding-top: 2px;\n" + }, + { "div.filetreeline", + "line of a file tree", + @ display: table; + @ width: 100%; + @ white-space: nowrap; + }, + { ".filetree .dir > div.filetreeline > a", + "tree-view directory links", + " background-image: url(data:image/gif;base64,R0lGODlhEAAQAJEAAP/WVCIi" + "Iv\\/\\/\\/wAAACH5BAEHAAIALAAAAAAQABAAAAInlI9pwa3XYniCgQtkrAFfLXkiFo1jaXp" + "o+jUs6b5Z/K4siDu5RPUFADs=);\n" + }, + { "div.filetreeage", + "Last change floating display on the right", + @ display: table-cell; + @ padding-left: 3em; + @ text-align: right; + }, + { "div.filetreeline:hover", + "Highlight the line of a file tree", + @ background-color: #eee; + }, + { "table.login_out", + "table format for login/out label/input 
table", + @ text-align: left; + @ margin-right: 10px; + @ margin-left: 10px; + @ margin-top: 10px; + }, + { "div.captcha", + "captcha display options", + @ text-align: center; + @ padding: 1ex; + }, + { "table.captcha", + "format for the layout table, used for the captcha display", + @ margin: auto; + @ padding: 10px; + @ border-width: 4px; + @ border-style: double; + @ border-color: black; + }, + { "td.login_out_label", + "format for the label cells in the login/out table", + @ text-align: center; + }, + { "span.loginError", + "format for login error messages", + @ color: red; + }, + { "span.note", + "format for leading text for notes", + @ font-weight: bold; + }, + { "span.textareaLabel", + "format for textarea labels", + @ font-weight: bold; + }, + { "table.usetupLayoutTable", + "format for the user setup layout table", + @ outline-style: none; + @ padding: 0; + @ margin: 25px; + }, + { "td.usetupColumnLayout", + "format of the columns on the user setup list page", + @ vertical-align: top + }, + { "table.usetupUserList", + "format for the user list table on the user setup page", + @ outline-style: double; + @ outline-width: 1px; + @ padding: 10px; + }, + { "th.usetupListUser", + "format for table header user in user list on user setup page", + @ text-align: right; + @ padding-right: 20px; + }, + { "th.usetupListCap", + "format for table header capabilities in user list on user setup page", + @ text-align: center; + @ padding-right: 15px; + }, + { "th.usetupListCon", + "format for table header contact info in user list on user setup page", + @ text-align: left; + }, + { "td.usetupListUser", + "format for table cell user in user list on user setup page", + @ text-align: right; + @ padding-right: 20px; + @ white-space:nowrap; + }, + { "td.usetupListCap", + "format for table cell capabilities in user list on user setup page", + @ text-align: center; + @ padding-right: 15px; + }, + { "td.usetupListCon", + "format for table cell contact info in user list on user setup page", + @ text-align: left + }, + { "div.ueditCapBox", + "layout definition for the capabilities box on the user edit detail page", + @ float: left; + @ margin-right: 20px; + @ margin-bottom: 20px; + }, + { "td.usetupEditLabel", + "format of the label cells in the detailed user edit page", + @ text-align: right; + @ vertical-align: top; + @ white-space: nowrap; + }, + { "span.ueditInheritNobody", + "color for capabilities, inherited by nobody", + @ color: green; + @ padding: .2em; + }, + { "span.ueditInheritDeveloper", + "color for capabilities, inherited by developer", + @ color: red; + @ padding: .2em; + }, + { "span.ueditInheritReader", + "color for capabilities, inherited by reader", + @ color: black; + @ padding: .2em; + }, + { "span.ueditInheritAnonymous", + "color for capabilities, inherited by anonymous", + @ color: blue; + @ padding: .2em; + }, + { "span.capability", + "format for capabilities, mentioned on the user edit page", + @ font-weight: bold; + }, + { "span.usertype", + "format for different user types, mentioned on the user edit page", + @ font-weight: bold; + }, + { "span.usertype:before", + "leading text for user types, mentioned on the user edit page", + @ content:"'"; + }, + { "span.usertype:after", + "trailing text for user types, mentioned on the user edit page", + @ content:"'"; + }, + { "div.selectedText", + "selected lines of text within a linenumbered artifact display", + @ font-weight: bold; + @ color: blue; + @ background-color: #d5d5ff; + @ border: 1px blue solid; + }, + { "p.missingPriv", + 
"format for missing privileges note on user setup page", + @ color: blue; + }, + { "span.wikiruleHead", + "format for leading text in wikirules definitions", + @ font-weight: bold; + }, + { "td.tktDspLabel", + "format for labels on ticket display page", + @ text-align: right; + }, + { "td.tktDspValue", + "format for values on ticket display page", + @ text-align: left; + @ vertical-align: top; + @ background-color: #d0d0d0; + }, + { "span.tktError", + "format for ticket error messages", + @ color: red; + @ font-weight: bold; + }, + { "table.rpteditex", + "format for example tables on the report edit page", + @ float: right; + @ margin: 0; + @ padding: 0; + @ width: 125px; + @ text-align: center; + @ border-collapse: collapse; + @ border-spacing: 0; + }, + { "table.report", + "Ticket report table formatting", + @ border-collapse:collapse; + @ border: 1px solid #999; + @ margin: 1em 0 1em 0; + @ cursor: pointer; + }, + { "td.rpteditex", + "format for example table cells on the report edit page", + @ border-width: thin; + @ border-color: #000000; + @ border-style: solid; + }, + { "input.checkinUserColor", + "format for user color input on check-in edit page", + @ /* no special definitions, class defined, to enable color pickers, f.e.: + @ ** add the color picker found at http:jscolor.com as java script include + @ ** to the header and configure the java script file with + @ ** 1. use as bindClass :checkinUserColor + @ ** 2. change the default hash adding behaviour to ON + @ ** or change the class defition of element identified by id="clrcust" + @ ** to a standard jscolor definition with java script in the footer. */ + }, + { "div.endContent", + "format for end of content area, to be used to clear page flow.", + @ clear: both; + }, + { "p.generalError", + "format for general errors", + @ color: red; + }, + { "p.tktsetupError", + "format for tktsetup errors", + @ color: red; + @ font-weight: bold; + }, + { "p.xfersetupError", + "format for xfersetup errors", + @ color: red; + @ font-weight: bold; + }, + { "p.thmainError", + "format for th script errors", + @ color: red; + @ font-weight: bold; + }, + { "span.thTrace", + "format for th script trace messages", + @ color: red; + }, + { "p.reportError", + "format for report configuration errors", + @ color: red; + @ font-weight: bold; + }, + { "blockquote.reportError", + "format for report configuration errors", + @ color: red; + @ font-weight: bold; + }, + { "p.noMoreShun", + "format for artifact lines, no longer shunned", + @ color: blue; + }, + { "p.shunned", + "format for artifact lines beeing shunned", + @ color: blue; + }, + { "span.brokenlink", + "a broken hyperlink", + @ color: red; + }, + { "ul.filelist", + "List of files in a timeline", + @ margin-top: 3px; + @ line-height: 100%; + }, + { "ul.filelist li", + "List of files in a timeline", + @ padding-top: 1px; + }, + { "table.sbsdiffcols", + "side-by-side diff display (column-based)", + @ width: 90%; + @ border-spacing: 0; + @ font-size: xx-small; + }, + { "table.sbsdiffcols td", + "sbs diff table cell", + @ padding: 0; + @ vertical-align: top; + }, + { "table.sbsdiffcols pre", + "sbs diff pre block", + @ margin: 0; + @ padding: 0; + @ border: 0; + @ font-size: inherit; + @ background: inherit; + @ color: inherit; + }, + { "div.difflncol", + "diff line number column", + @ padding-right: 1em; + @ text-align: right; + @ color: #a0a0a0; + }, + { "div.difftxtcol", + "diff text column", + @ width: 45em; + @ overflow-x: auto; + }, + { "div.diffmkrcol", + "diff marker column", + @ padding: 0 1em; 
+ }, + { "span.diffchng", + "changes in a diff", + @ background-color: #c0c0ff; + }, + { "span.diffadd", + "added code in a diff", + @ background-color: #c0ffc0; + }, + { "span.diffrm", + "deleted in a diff", + @ background-color: #ffc8c8; + }, + { "span.diffhr", + "suppressed lines in a diff", + @ display: inline-block; + @ margin: .5em 0 1em; + @ color: #0000ff; + }, + { "span.diffln", + "line numbers in a diff", + @ color: #a0a0a0; + }, + { "span.modpending", + "Moderation Pending message on timeline", + @ color: #b03800; + @ font-style: italic; + }, + { "pre.th1result", + "format for th1 script results", + @ white-space: pre-wrap; + @ word-wrap: break-word; + }, + { "pre.th1error", + "format for th1 script errors", + @ white-space: pre-wrap; + @ word-wrap: break-word; + @ color: red; + }, + { "table.label-value th", + "The label/value pairs on (for example) the ci page", + @ vertical-align: top; + @ text-align: right; + @ padding: 0.2ex 2ex; + }, + { ".statistics-report-graph-line", + "for the /reports views", + @ background-color: #446979; + }, + { ".statistics-report-table-events th", + "", + @ padding: 0 1em 0 1em; + }, + { ".statistics-report-table-events td", + "", + @ padding: 0.1em 1em 0.1em 1em; + }, + { ".statistics-report-row-year", + "", + @ text-align: left; + }, + { ".statistics-report-week-number-label", + "for the /stats_report views", + @ text-align: right; + @ font-size: 0.8em; + }, + { ".statistics-report-week-of-year-list", + "for the /stats_report views", + @ font-size: 0.8em; + }, + { "tr.row0", + "even table row color", + @ /* use default */ + }, + { "tr.row1", + "odd table row color", + @ /* Use default */ + }, + { "#usetupEditCapability", + "format for capabilities string, mentioned on the user edit page", + @ font-weight: bold; + }, + { "table.adminLogTable", + "Class for the /admin_log table", + @ text-align: left; + }, + { ".adminLogTable .adminTime", + "Class for the /admin_log table", + @ text-align: left; + @ vertical-align: top; + @ white-space: nowrap; + }, + { ".fileage table", + "The fileage table", + @ border-spacing: 0; + }, + { ".fileage tr:hover", + "Mouse-over effects for the file-age table", + @ background-color: #eee; + }, + { ".fileage td", + "fileage table cells", + @ vertical-align: top; + @ text-align: left; + @ border-top: 1px solid #ddd; + @ padding-top: 3px; + }, + { ".fileage td:first-child", + "fileage first column (the age)", + @ white-space: nowrap; + }, + { ".fileage td:nth-child(2)", + "fileage second column (the filename)", + @ padding-left: 1em; + @ padding-right: 1em; + }, + { ".fileage td:nth-child(3)", + "fileage third column (the check-in comment)", + @ word-wrap: break-word; + @ max-width: 50%; + }, + { ".brlist table", "The list of branches", + @ border-spacing: 0; + }, + { ".brlist table th", "Branch list table headers", + @ text-align: left; + @ padding: 0px 1em 0.5ex 0px; + }, + { ".brlist table td", "Branch list table headers", + @ padding: 0px 2em 0px 0px; + @ white-space: nowrap; + }, + { "th.sort:after", + "General styles for sortable column marker", + @ margin-left: .4em; + @ cursor: pointer; + @ text-shadow: 0 0 0 #000; /* Makes arrow darker */ + }, + { "th.sort.none:after", + "None sort column marker", + @ content: '\2666'; + }, + { "th.sort.asc:after", + "Ascending sort column marker", + @ content: '\2193'; + }, + { "th.sort.desc:after", + "Descending sort column marker", + @ content: '\2191'; + }, + { "span.snippet>mark", + "Search markup", + @ background-color: inherit; + @ font-weight: bold; + }, + { 
"div.searchForm", + "Container for the search terms entry box", + @ text-align: center; + }, + { "p.searchEmpty", + "Message explaining that there are no search results", + @ font-style: italic; + }, + { 0, + 0, + 0 + } +}; + +/* +** Append all of the default CSS to the CGI output. +*/ +void cgi_append_default_css(void) { + int i; + + cgi_printf("%s", builtin_text("skins/default/css.txt")); + for( i=0; cssDefaultList[i].elementClass; i++ ){ + if( cssDefaultList[i].elementClass[0] ){ + cgi_printf("/* %s */\n%s {\n%s\n}\n\n", + cssDefaultList[i].comment, + cssDefaultList[i].elementClass, + cssDefaultList[i].value + ); + } + } +} + +/* +** Search string zCss for zSelector. +** +** Return true if found. Return false if not found +*/ +static int containsSelector(const char *zCss, const char *zSelector){ + const char *z; + int n; + int selectorLen = (int)strlen(zSelector); + + for(z=zCss; *z; z+=selectorLen){ + z = strstr(z, zSelector); + if( z==0 ) return 0; + if( z!=zCss ){ + for( n=-1; z+n!=zCss && fossil_isspace(z[n]); n--); + if( z+n!=zCss && z[n]!=',' && z[n]!= '}' && z[n]!='/' ) continue; + } + for( n=selectorLen; z[n] && fossil_isspace(z[n]); n++ ); + if( z[n]==',' || z[n]=='{' || z[n]=='/' ) return 1; + } + return 0; +} + +/* +** COMMAND: test-contains-selector +** +** Usage: %fossil test-contains-selector FILENAME SELECTOR +** +** Determine if the CSS stylesheet FILENAME contains SELECTOR. +*/ +void contains_selector_cmd(void){ + int found; + char *zSelector; + Blob css; + if( g.argc!=4 ) usage("FILENAME SELECTOR"); + blob_read_from_file(&css, g.argv[2]); + zSelector = g.argv[3]; + found = containsSelector(blob_str(&css), zSelector); + fossil_print("%s %s\n", zSelector, found ? "found" : "not found"); + blob_reset(&css); +} + /* ** WEBPAGE: style.css +** +** Return the style sheet. */ void page_style_css(void){ - char *zCSS = 0; + Blob css; + int i; cgi_set_content_type("text/css"); - zCSS = db_get("css",(char*)zDefaultCSS); - cgi_append_content(zCSS, -1); + blob_init(&css,skin_get("css"),-1); + + /* add special missing definitions */ + for(i=1; cssDefaultList[i].elementClass; i++){ + char *z = blob_str(&css); + if( !containsSelector(z, cssDefaultList[i].elementClass) ){ + blob_appendf(&css, "/* %s */\n%s {\n%s}\n", + cssDefaultList[i].comment, + cssDefaultList[i].elementClass, + cssDefaultList[i].value); + } + } + + /* Process through TH1 in order to give an opportunity to substitute + ** variables such as $baseurl. + */ + Th_Store("baseurl", g.zBaseURL); + Th_Store("secureurl", login_wants_https_redirect()? g.zHttpsURL: g.zBaseURL); + Th_Store("home", g.zTop); + image_url_var("logo"); + image_url_var("background"); + Th_Render(blob_str(&css)); + + /* Tell CGI that the content returned by this page is considered cacheable */ g.isConst = 1; } /* ** WEBPAGE: test_env +** +** Display CGI-variables and other aspects of the run-time +** environment, for debugging and trouble-shooting purposes. 
*/ void page_test_env(void){ + char c; + int i; + int showAll; + char zCap[30]; + static const char *const azCgiVars[] = { + "COMSPEC", "DOCUMENT_ROOT", "GATEWAY_INTERFACE", + "HTTP_ACCEPT", "HTTP_ACCEPT_CHARSET", "HTTP_ACCEPT_ENCODING", + "HTTP_ACCEPT_LANGUAGE", "HTTP_CONNECTION", "HTTP_HOST", + "HTTP_USER_AGENT", "HTTP_REFERER", "PATH_INFO", "PATH_TRANSLATED", + "QUERY_STRING", "REMOTE_ADDR", "REMOTE_PORT", "REQUEST_METHOD", + "REQUEST_URI", "SCRIPT_FILENAME", "SCRIPT_NAME", "SERVER_PROTOCOL", + "HOME", "FOSSIL_HOME", "USERNAME", "USER", "FOSSIL_USER", + "SQLITE_TMPDIR", "TMPDIR", + "TEMP", "TMP", "FOSSIL_VFS" + }; + + login_check_credentials(); + if( !g.perm.Admin && !g.perm.Setup && !db_get_boolean("test_env_enable",0) ){ + login_needed(0); + return; + } + for(i=0; i<count(azCgiVars); i++) (void)P(azCgiVars[i]); style_header("Environment Test"); -#if !defined(__MINGW32__) - @ uid=%d(getuid()), gid=%d(getgid())<br> + showAll = atoi(PD("showall","0")); + if( !showAll ){ + style_submenu_element("Show Cookies", 0, "%R/test_env?showall=1"); + }else{ + style_submenu_element("Hide Cookies", 0, "%R/test_env"); + } +#if !defined(_WIN32) + @ uid=%d(getuid()), gid=%d(getgid())<br /> #endif - @ g.zBaseURL = %h(g.zBaseURL)<br> - @ g.zTop = %h(g.zTop)<br> - @ g.zRepositoryName = %h(g.zRepositoryName)<br> - cgi_print_all(); + @ g.zBaseURL = %h(g.zBaseURL)<br /> + @ g.zHttpsURL = %h(g.zHttpsURL)<br /> + @ g.zTop = %h(g.zTop)<br /> + @ g.zPath = %h(g.zPath)<br /> + for(i=0, c='a'; c<='z'; c++){ + if( login_has_capability(&c, 1, 0) ) zCap[i++] = c; + } + zCap[i] = 0; + @ g.userUid = %d(g.userUid)<br /> + @ g.zLogin = %h(g.zLogin)<br /> + @ g.isHuman = %d(g.isHuman)<br /> + @ capabilities = %s(zCap)<br /> + for(i=0, c='a'; c<='z'; c++){ + if( login_has_capability(&c, 1, LOGIN_ANON) + && !login_has_capability(&c, 1, 0) ) zCap[i++] = c; + } + zCap[i] = 0; + if( i>0 ){ + @ anonymous-adds = %s(zCap)<br /> + } + @ g.zRepositoryName = %h(g.zRepositoryName)<br /> + @ load_average() = %f(load_average())<br /> + @ <hr> + P("HTTP_USER_AGENT"); + cgi_print_all(showAll); + if( showAll && blob_size(&g.httpHeader)>0 ){ + @ <hr> + @ <pre> + @ %h(blob_str(&g.httpHeader)) + @ </pre> + } + if( g.perm.Setup ){ + const char *zRedir = P("redirect"); + if( zRedir ) cgi_redirect(zRedir); + } style_footer(); + if( g.perm.Admin && P("err") ) fossil_fatal("%s", P("err")); +} + +/* +** WEBPAGE: honeypot +** This page is a honeypot for spiders and bots. +*/ +void honeypot_page(void){ + cgi_set_status(403, "Forbidden"); + @ <p>Please enable javascript or log in to see this content</p> } Index: src/sync.c ================================================================== --- src/sync.c +++ src/sync.c @@ -19,175 +19,280 @@ */ #include "config.h" #include "sync.h" #include <assert.h> -#if INTERFACE -/* -** Flags used to determine which direction(s) an autosync goes in. -*/ -#define AUTOSYNC_PUSH 1 -#define AUTOSYNC_PULL 2 - -#endif /* INTERFACE */ - -/* -** If the respository is configured for autosyncing, then do an +/* +** If the repository is configured for autosyncing, then do an ** autosync. This will be a pull if the argument is true or a push ** if the argument is false. +** +** Return the number of errors. 
*/ -void autosync(int flags){ - const char *zUrl; +int autosync(int flags){ const char *zAutosync; - const char *zPw; + int rc; + int configSync = 0; /* configuration changes transferred */ if( g.fNoSync ){ - return; + return 0; + } + if( flags==SYNC_PUSH && db_get_boolean("dont-push",0) ){ + return 0; } zAutosync = db_get("autosync", 0); if( zAutosync ){ - if( (flags & AUTOSYNC_PUSH)!=0 && memcmp(zAutosync,"pull",4)==0 ){ - return; /* Do not auto-push when autosync=pullonly */ + if( (flags & SYNC_PUSH)!=0 && fossil_strncmp(zAutosync,"pull",4)==0 ){ + return 0; /* Do not auto-push when autosync=pullonly */ } if( is_false(zAutosync) ){ - return; /* Autosync is completely off */ + return 0; /* Autosync is completely off */ } }else{ /* Autosync defaults on. To make it default off, "return" here. */ } - zUrl = db_get("last-sync-url", 0); - if( zUrl==0 ){ - return; /* No default server */ - } - zPw = db_get("last-sync-pw", 0); - url_parse(zUrl); - if( g.urlUser!=0 && g.urlPasswd==0 ){ - g.urlPasswd = mprintf("%s", zPw); - } - printf("Autosync: %s\n", g.urlCanonical); + url_parse(0, URL_REMEMBER); + if( g.url.protocol==0 ) return 0; + if( g.url.user!=0 && g.url.passwd==0 ){ + g.url.passwd = unobscure(db_get("last-sync-pw", 0)); + g.url.flags |= URL_PROMPT_PW; + url_prompt_for_password(); + } + g.zHttpAuth = get_httpauth(); + url_remember(); +#if 0 /* Disabled for now */ + if( (flags & AUTOSYNC_PULL)!=0 && db_get_boolean("auto-shun",1) ){ + /* When doing an automatic pull, also automatically pull shuns from + ** the server if pull_shuns is enabled. + ** + ** TODO: What happens if the shun list gets really big? + ** Maybe the shunning list should only be pulled on every 10th + ** autosync, or something? + */ + configSync = CONFIGSET_SHUN; + } +#endif + if( find_option("verbose","v",0)!=0 ) flags |= SYNC_VERBOSE; + fossil_print("Autosync: %s\n", g.url.canonical); url_enable_proxy("via proxy: "); - client_sync((flags & AUTOSYNC_PUSH)!=0, 1, 0, 0, 0); + rc = client_sync(flags, configSync, 0); + return rc; +} + +/* +** This routine will try a number of times to perform autosync with a +** 0.5 second sleep between attempts; returning the last autosync status. +*/ +int autosync_loop(int flags, int nTries){ + int n = 0; + int rc = 0; + while( (n==0 || n<nTries) && (rc=autosync(flags)) ){ + if( rc ){ + if( ++n<nTries ){ + fossil_warning("Autosync failed, making another attempt."); + sqlite3_sleep(500); + }else{ + fossil_warning("Autosync failed."); + } + } + } + return rc; } /* ** This routine processes the command-line argument for push, pull, ** and sync. If a command-line argument is given, that is the URL ** of a server to sync against. If no argument is given, use the ** most recently synced URL. Remember the current URL for next time. 
*/ -void process_sync_args(void){ +static void process_sync_args(unsigned *pConfigFlags, unsigned *pSyncFlags){ const char *zUrl = 0; - const char *zPw = 0; - int urlOptional = find_option("autourl",0,0)!=0; - g.dontKeepUrl = find_option("once",0,0)!=0; + const char *zHttpAuth = 0; + unsigned configSync = 0; + unsigned urlFlags = URL_REMEMBER | URL_PROMPT_PW; + int urlOptional = 0; + if( find_option("autourl",0,0)!=0 ){ + urlOptional = 1; + urlFlags = 0; + } + zHttpAuth = find_option("httpauth","B",1); + if( find_option("once",0,0)!=0 ) urlFlags &= ~URL_REMEMBER; + if( find_option("private",0,0)!=0 ){ + *pSyncFlags |= SYNC_PRIVATE; + } + if( find_option("verbose","v",0)!=0 ){ + *pSyncFlags |= SYNC_VERBOSE; + } + /* The --verily option to sync, push, and pull forces extra igot cards + ** to be exchanged. This can overcome malfunctions in the sync protocol. + */ + if( find_option("verily",0,0)!=0 ){ + *pSyncFlags |= SYNC_RESYNC; + } url_proxy_options(); - db_find_and_open_repository(1); + clone_ssh_find_options(); + db_find_and_open_repository(0, 0); db_open_config(0); if( g.argc==2 ){ - zUrl = db_get("last-sync-url", 0); - zPw = db_get("last-sync-pw", 0); + if( db_get_boolean("auto-shun",1) ) configSync = CONFIGSET_SHUN; }else if( g.argc==3 ){ zUrl = g.argv[2]; } - if( zUrl==0 ){ - if( urlOptional ) exit(0); - usage("URL"); - } - url_parse(zUrl); - if( !g.dontKeepUrl ){ - db_set("last-sync-url", g.urlCanonical, 0); - if( g.urlPasswd ) db_set("last-sync-pw", g.urlPasswd, 0); - } - if( g.urlUser!=0 && g.urlPasswd==0 ){ - if( zPw==0 ){ - url_prompt_for_password(); - }else{ - g.urlPasswd = mprintf("%s", zPw); - } + if( urlFlags & URL_REMEMBER ){ + clone_ssh_db_set_options(); + } + url_parse(zUrl, urlFlags); + remember_or_get_http_auth(zHttpAuth, urlFlags & URL_REMEMBER, zUrl); + url_remember(); + if( g.url.protocol==0 ){ + if( urlOptional ) fossil_exit(0); + usage("URL"); } user_select(); if( g.argc==2 ){ - printf("Server: %s\n", g.urlCanonical); + if( ((*pSyncFlags) & (SYNC_PUSH|SYNC_PULL))==(SYNC_PUSH|SYNC_PULL) ){ + fossil_print("Sync with %s\n", g.url.canonical); + }else if( (*pSyncFlags) & SYNC_PUSH ){ + fossil_print("Push to %s\n", g.url.canonical); + }else if( (*pSyncFlags) & SYNC_PULL ){ + fossil_print("Pull from %s\n", g.url.canonical); + } } url_enable_proxy("via proxy: "); + *pConfigFlags |= configSync; } /* ** COMMAND: pull ** ** Usage: %fossil pull ?URL? ?options? ** -** Pull changes from a remote repository into the local repository. -** Use the "-R REPO" or "--repository REPO" command-line options -** to specify an alternative repository file. -** -** If the URL is not specified, then the URL from the most recent -** clone, push, pull, remote-url, or sync command is used. -** -** The URL specified normally becomes the new "remote-url" used for -** subsequent push, pull, and sync operations. However, the "--once" -** command-line option makes the URL a one-time-use URL that is not -** saved. -** -** See also: clone, push, sync, remote-url +** Pull all sharable changes from a remote repository into the local repository. +** Sharable changes include public check-ins, and wiki, ticket, and tech-note +** edits. Add the --private option to pull private branches. Use the +** "configuration pull" command to pull website configuration details. +** +** If URL is not specified, then the URL from the most recent clone, push, +** pull, remote-url, or sync command is used. See "fossil help clone" for +** details on the URL formats. 
+** +** Options: +** +** -B|--httpauth USER:PASS Credentials for the simple HTTP auth protocol, +** if required by the remote website +** --ipv4 Use only IPv4, not IPv6 +** --once Do not remember URL for subsequent syncs +** --proxy PROXY Use the specified HTTP proxy +** --private Pull private branches too +** -R|--repository REPO Repository to pull into +** --ssl-identity FILE Local SSL credentials, if requested by remote +** --ssh-command SSH Use SSH as the "ssh" command +** -v|--verbose Additional (debugging) output +** --verily Exchange extra information with the remote +** to ensure no content is overlooked +** +** See also: clone, config pull, push, remote-url, sync */ void pull_cmd(void){ - process_sync_args(); - client_sync(0,1,0,0,0); + unsigned configFlags = 0; + unsigned syncFlags = SYNC_PULL; + process_sync_args(&configFlags, &syncFlags); + + /* We should be done with options.. */ + verify_all_options(); + + client_sync(syncFlags, configFlags, 0); } /* ** COMMAND: push ** ** Usage: %fossil push ?URL? ?options? ** -** Push changes in the local repository over into a remote repository. -** Use the "-R REPO" or "--repository REPO" command-line options -** to specify an alternative repository file. -** -** If the URL is not specified, then the URL from the most recent -** clone, push, pull, remote-url, or sync command is used. -** -** The URL specified normally becomes the new "remote-url" used for -** subsequent push, pull, and sync operations. However, the "--once" -** command-line option makes the URL a one-time-use URL that is not -** saved. -** -** See also: clone, pull, sync, remote-url +** Push all sharable changes from the local repository to a remote repository. +** Sharable changes include public check-ins, and wiki, ticket, and tech-note +** edits. Use --private to also push private branches. Use the +** "configuration pull" command to push website configuration details. +** +** If URL is not specified, then the URL from the most recent clone, push, +** pull, remote-url, or sync command is used. See "fossil help clone" for +** details on the URL formats. +** +** Options: +** +** -B|--httpauth USER:PASS Credentials for the simple HTTP auth protocol, +** if required by the remote website +** --ipv4 Use only IPv4, not IPv6 +** --once Do not remember URL for subsequent syncs +** --proxy PROXY Use the specified HTTP proxy +** --private Pull private branches too +** -R|--repository REPO Repository to pull into +** --ssl-identity FILE Local SSL credentials, if requested by remote +** --ssh-command SSH Use SSH as the "ssh" command +** -v|--verbose Additional (debugging) output +** --verily Exchange extra information with the remote +** to ensure no content is overlooked +** +** See also: clone, config push, pull, remote-url, sync */ void push_cmd(void){ - process_sync_args(); - client_sync(1,0,0,0,0); + unsigned configFlags = 0; + unsigned syncFlags = SYNC_PUSH; + process_sync_args(&configFlags, &syncFlags); + + /* We should be done with options.. */ + verify_all_options(); + + if( db_get_boolean("dont-push",0) ){ + fossil_fatal("pushing is prohibited: the 'dont-push' option is set"); + } + client_sync(syncFlags, 0, 0); } /* ** COMMAND: sync ** ** Usage: %fossil sync ?URL? ?options? ** -** Synchronize the local repository with a remote repository. This is -** the equivalent of running both "push" and "pull" at the same time. -** Use the "-R REPO" or "--repository REPO" command-line options -** to specify an alternative repository file. 
-** -** If a user-id and password are required, specify them as follows: -** -** http://userid:password@www.domain.com:1234/path -** -** If the URL is not specified, then the URL from the most recent successful -** clone, push, pull, remote-url, or sync command is used. -** -** The URL specified normally becomes the new "remote-url" used for -** subsequent push, pull, and sync operations. However, the "--once" -** command-line option makes the URL a one-time-use URL that is not -** saved. -** -** See also: clone, push, pull, remote-url +** Synchronize all sharable changes between the local repository and a +** a remote repository. Sharable changes include public check-ins, and wiki, +** ticket, and tech-note edits. +** +** If URL is not specified, then the URL from the most recent clone, push, +** pull, remote-url, or sync command is used. See "fossil help clone" for +** details on the URL formats. +** +** Options: +** +** -B|--httpauth USER:PASS Credentials for the simple HTTP auth protocol, +** if required by the remote website +** --ipv4 Use only IPv4, not IPv6 +** --once Do not remember URL for subsequent syncs +** --proxy PROXY Use the specified HTTP proxy +** --private Pull private branches too +** -R|--repository REPO Repository to pull into +** --ssl-identity FILE Local SSL credentials, if requested by remote +** --ssh-command SSH Use SSH as the "ssh" command +** -v|--verbose Additional (debugging) output +** --verily Exchange extra information with the remote +** to ensure no content is overlooked +** +** See also: clone, pull, push, remote-url */ void sync_cmd(void){ - process_sync_args(); - client_sync(1,1,0,0,0); + unsigned configFlags = 0; + unsigned syncFlags = SYNC_PUSH|SYNC_PULL; + process_sync_args(&configFlags, &syncFlags); + + /* We should be done with options.. */ + verify_all_options(); + + if( db_get_boolean("dont-push",0) ) syncFlags &= ~SYNC_PUSH; + client_sync(syncFlags, configFlags, 0); + if( (syncFlags & SYNC_PUSH)==0 ){ + fossil_warning("pull only: the 'dont-push' option is set"); + } } /* ** COMMAND: remote-url ** @@ -199,39 +304,36 @@ ** The remote-url is set automatically by a "clone" command or by any ** "sync", "push", or "pull" command that specifies an explicit URL. ** The default remote-url is used by auto-syncing and by "sync", "push", ** "pull" that omit the server URL. ** +** See "fossil help clone" for further information about URL formats +** ** See also: clone, push, pull, sync */ void remote_url_cmd(void){ char *zUrl; - db_find_and_open_repository(1); + db_find_and_open_repository(0, 0); + + /* We should be done with options.. 
*/ + verify_all_options(); + if( g.argc!=2 && g.argc!=3 ){ usage("remote-url ?URL|off?"); } if( g.argc==3 ){ - if( strcmp(g.argv[2],"off")==0 ){ - db_unset("last-sync-url", 0); - db_unset("last-sync-pw", 0); - }else{ - url_parse(g.argv[2]); - if( g.urlUser && g.urlPasswd==0 ){ - url_prompt_for_password(); - } - db_set("last-sync-url", g.urlCanonical, 0); - if( g.urlPasswd ){ - db_set("last-sync-pw", g.urlPasswd, 0); - }else{ - db_unset("last-sync-pw", 0); - } - } - } + db_unset("last-sync-url", 0); + db_unset("last-sync-pw", 0); + db_unset("http-auth", 0); + if( is_false(g.argv[2]) ) return; + url_parse(g.argv[2], URL_REMEMBER|URL_PROMPT_PW|URL_ASK_REMEMBER_PW); + } + url_remember(); zUrl = db_get("last-sync-url", 0); if( zUrl==0 ){ - printf("off\n"); + fossil_print("off\n"); return; }else{ - url_parse(zUrl); - printf("%s\n", g.urlCanonical); + url_parse(zUrl, 0); + fossil_print("%s\n", g.url.canonical); } } Index: src/tag.c ================================================================== --- src/tag.c +++ src/tag.c @@ -26,43 +26,53 @@ ** ** This routine assumes that tagid is a tag that should be ** propagated and that the tag is already present in pid. ** ** If tagtype is 2 then the tag is being propagated from an -** ancestor node. If tagtype is 0 it means a branch tag is -** being cancelled. +** ancestor node. If tagtype is 0 it means a propagating tag is +** being blocked. */ -void tag_propagate( +static void tag_propagate( int pid, /* Propagate the tag to children of this node */ int tagid, /* Tag to propagate */ int tagType, /* 2 for a propagating tag. 0 for an antitag */ int origId, /* Artifact of tag, when tagType==2 */ const char *zValue, /* Value of the tag. Might be NULL */ double mtime /* Timestamp on the tag */ ){ - PQueue queue; - Stmt s, ins, eventupdate; + PQueue queue; /* Queue of check-ins to be tagged */ + Stmt s; /* Query the children of :pid to which to propagate */ + Stmt ins; /* INSERT INTO tagxref */ + Stmt eventupdate; /* UPDATE event */ assert( tagType==0 || tagType==2 ); - pqueue_init(&queue); - pqueue_insert(&queue, pid, 0.0); - db_prepare(&s, + pqueuex_init(&queue); + pqueuex_insert(&queue, pid, 0.0, 0); + + /* Query for children of :pid to which to propagate the tag. + ** Three returns: (1) rid of the child. (2) timestamp of child. + ** (3) True to propagate or false to block. 
+ */ + db_prepare(&s, "SELECT cid, plink.mtime," " coalesce(srcid=0 AND tagxref.mtime<:mtime, %d) AS doit" " FROM plink LEFT JOIN tagxref ON cid=rid AND tagid=%d" " WHERE pid=:pid AND isprim", - tagType!=0, tagid + tagType==2, tagid ); db_bind_double(&s, ":mtime", mtime); + if( tagType==2 ){ + /* Set the propagated tag marker on check-in :rid */ db_prepare(&ins, "REPLACE INTO tagxref(tagid, tagtype, srcid, origid, value, mtime, rid)" "VALUES(%d,2,0,%d,%Q,:mtime,:rid)", tagid, origId, zValue ); db_bind_double(&ins, ":mtime", mtime); }else{ + /* Remove all references to the tag from check-in :rid */ zValue = 0; db_prepare(&ins, "DELETE FROM tagxref WHERE tagid=%d AND rid=:rid", tagid ); } @@ -69,55 +79,58 @@ if( tagid==TAG_BGCOLOR ){ db_prepare(&eventupdate, "UPDATE event SET bgcolor=%Q WHERE objid=:rid", zValue ); } - while( (pid = pqueue_extract(&queue))!=0 ){ + while( (pid = pqueuex_extract(&queue, 0))!=0 ){ db_bind_int(&s, ":pid", pid); while( db_step(&s)==SQLITE_ROW ){ int doit = db_column_int(&s, 2); if( doit ){ int cid = db_column_int(&s, 0); double mtime = db_column_double(&s, 1); - pqueue_insert(&queue, cid, mtime); + pqueuex_insert(&queue, cid, mtime, 0); db_bind_int(&ins, ":rid", cid); db_step(&ins); db_reset(&ins); if( tagid==TAG_BGCOLOR ){ db_bind_int(&eventupdate, ":rid", cid); db_step(&eventupdate); db_reset(&eventupdate); } + if( tagid==TAG_BRANCH ){ + leaf_eventually_check(cid); + } } } db_reset(&s); } - pqueue_clear(&queue); + pqueuex_clear(&queue); db_finalize(&ins); db_finalize(&s); if( tagid==TAG_BGCOLOR ){ db_finalize(&eventupdate); } } /* -** Propagate all propagatable tags in pid to its children. +** Propagate all propagatable tags in pid to the children of pid. */ void tag_propagate_all(int pid){ Stmt q; db_prepare(&q, "SELECT tagid, tagtype, mtime, value, origid FROM tagxref" - " WHERE rid=%d" - " AND (tagtype=0 OR tagtype=2)", + " WHERE rid=%d", pid ); while( db_step(&q)==SQLITE_ROW ){ int tagid = db_column_int(&q, 0); int tagtype = db_column_int(&q, 1); double mtime = db_column_double(&q, 2); const char *zValue = db_column_text(&q, 3); int origid = db_column_int(&q, 4); + if( tagtype==1 ) tagtype = 0; tag_propagate(pid, tagid, tagtype, origid, zValue, mtime); } db_finalize(&q); } @@ -166,18 +179,19 @@ db_finalize(&s); if( rc==SQLITE_ROW ){ /* Another entry that is more recent already exists. 
Do nothing */ return tagid; } - db_prepare(&s, + db_prepare(&s, "REPLACE INTO tagxref(tagid,tagtype,srcId,origid,value,mtime,rid)" " VALUES(%d,%d,%d,%d,%Q,:mtime,%d)", tagid, tagtype, srcId, rid, zValue, rid ); db_bind_double(&s, ":mtime", mtime); db_step(&s); db_finalize(&s); + if( tagid==TAG_BRANCH ) leaf_eventually_check(rid); if( tagtype==0 ){ zValue = 0; } zCol = 0; switch( tagid ){ @@ -191,26 +205,35 @@ } case TAG_USER: { zCol = "euser"; break; } + case TAG_PRIVATE: { + db_multi_exec( + "INSERT OR IGNORE INTO private(rid) VALUES(%d);", + rid + ); + } } if( zCol ){ - db_multi_exec("UPDATE event SET %s=%Q WHERE objid=%d", zCol, zValue, rid); + db_multi_exec("UPDATE event SET \"%w\"=%Q WHERE objid=%d", + zCol, zValue, rid); if( tagid==TAG_COMMENT ){ char *zCopy = mprintf("%s", zValue); wiki_extract_links(zCopy, rid, 0, mtime, 1, WIKI_INLINE); free(zCopy); } } if( tagid==TAG_DATE ){ - db_multi_exec("UPDATE event SET mtime=julianday(%Q) WHERE objid=%d", + db_multi_exec("UPDATE event " + " SET mtime=julianday(%Q)," + " omtime=coalesce(omtime,mtime)" + " WHERE objid=%d", zValue, rid); } - if( tagtype==0 || tagtype==2 ){ - tag_propagate(rid, tagid, tagtype, rid, zValue, mtime); - } + if( tagtype==1 ) tagtype = 0; + tag_propagate(rid, tagid, tagtype, rid, zValue, mtime); return tagid; } /* @@ -234,11 +257,11 @@ zTag = g.argv[2]; switch( zTag[0] ){ case '+': tagtype = 1; break; case '*': tagtype = 2; break; case '-': tagtype = 0; break; - default: + default: fossil_fatal("tag should begin with '+', '*', or '-'"); return; } rid = name_to_rid(g.argv[3]); if( rid==0 ){ @@ -246,11 +269,11 @@ } g.markPrivate = content_is_private(rid); zValue = g.argc==5 ? g.argv[4] : 0; db_begin_transaction(); tag_insert(zTag, tagtype, zValue, -1, 0.0, rid); - db_end_transaction(0); + db_end_transaction(0); } /* ** Add a control record to the repository that either creates ** or cancels a tag. @@ -258,11 +281,13 @@ void tag_add_artifact( const char *zPrefix, /* Prefix to prepend to tag name */ const char *zTagname, /* The tag to add or cancel */ const char *zObjName, /* Name of object attached to */ const char *zValue, /* Value for the tag. Might be NULL */ - int tagtype /* 0:cancel 1:singleton 2:propagated */ + int tagtype, /* 0:cancel 1:singleton 2:propagated */ + const char *zDateOvrd, /* Override date string */ + const char *zUserOvrd /* Override user name */ ){ int rid; int nrid; char *zDate; Blob uuid; @@ -272,11 +297,11 @@ assert( tagtype>=0 && tagtype<=2 ); user_select(); blob_zero(&uuid); blob_append(&uuid, zObjName, -1); - if( name_to_uuid(&uuid, 9) ){ + if( name_to_uuid(&uuid, 9, "*") ){ fossil_fatal("%s", g.zErrMsg); return; } rid = name_to_rid(blob_str(&uuid)); g.markPrivate = content_is_private(rid); @@ -289,25 +314,25 @@ " a hexadecimal artifact ID", zTagname ); } #endif - zDate = db_text(0, "SELECT datetime('now')"); - zDate[10] = 'T'; + zDate = date_in_standard_format(zDateOvrd ? zDateOvrd : "now"); blob_appendf(&ctrl, "D %s\n", zDate); blob_appendf(&ctrl, "T %c%s%F %b", zTagtype[tagtype], zPrefix, zTagname, &uuid); if( tagtype>0 && zValue && zValue[0] ){ blob_appendf(&ctrl, " %F\n", zValue); }else{ blob_appendf(&ctrl, "\n"); } - blob_appendf(&ctrl, "U %F\n", g.zLogin); + blob_appendf(&ctrl, "U %F\n", zUserOvrd ? 
zUserOvrd : login_name()); md5sum_blob(&ctrl, &cksum); blob_appendf(&ctrl, "Z %b\n", &cksum); - nrid = content_put(&ctrl, 0, 0); - manifest_crosslink(nrid, &ctrl); + nrid = content_put(&ctrl); + manifest_crosslink(nrid, &ctrl, MC_PERMIT_HOOKS); + assert( blob_is_reset(&ctrl) ); } /* ** COMMAND: tag ** Usage: %fossil tag SUBCOMMAND ... @@ -317,22 +342,24 @@ ** %fossil tag add ?--raw? ?--propagate? TAGNAME CHECK-IN ?VALUE? ** ** Add a new tag or property to CHECK-IN. The tag will ** be usable instead of a CHECK-IN in commands such as ** update and merge. If the --propagate flag is present, -** the tag value propages to all descendants of CHECK-IN +** the tag value propagates to all descendants of CHECK-IN ** ** %fossil tag cancel ?--raw? TAGNAME CHECK-IN ** ** Remove the tag TAGNAME from CHECK-IN, and also remove ** the propagation of the tag to any descendants. ** -** %fossil tag find ?--raw? TAGNAME +** %fossil tag find ?--raw? ?-t|--type TYPE? ?-n|--limit #? TAGNAME ** -** List all check-ins that use TAGNAME +** List all objects that use TAGNAME. TYPE can be "ci" for +** check-ins or "e" for events. The limit option limits the number +** of results to the given value. ** -** %fossil tag list ?--raw? ?CHECK-IN? +** %fossil tag list|ls ?--raw? ?CHECK-IN? ** ** List all tags, or if CHECK-IN is supplied, list ** all tags and their values for CHECK-IN. ** ** The option --raw allows the manipulation of all types of tags @@ -352,18 +379,24 @@ ** ** fossil update tag:decaf ** ** will assume that "decaf" is a tag/branch name. ** +** only allow --date-override and --user-override in +** %fossil tag add --date-override 'YYYY-MMM-DD HH:MM:SS' \\ +** --user-override user +** in order to import history from other scm systems */ void tag_cmd(void){ int n; int fRaw = find_option("raw","",0)!=0; int fPropagate = find_option("propagate","",0)!=0; const char *zPrefix = fRaw ? "" : "sym-"; + const char *zFindLimit = find_option("limit","n",1); + const int nFindLimit = zFindLimit ? atoi(zFindLimit) : -2000; - db_find_and_open_repository(1); + db_find_and_open_repository(0, 0); if( g.argc<3 ){ goto tag_cmd_usage; } n = strlen(g.argv[2]); if( n==0 ){ @@ -370,16 +403,19 @@ goto tag_cmd_usage; } if( strncmp(g.argv[2],"add",n)==0 ){ char *zValue; + const char *zDateOvrd = find_option("date-override",0,1); + const char *zUserOvrd = find_option("user-override",0,1); if( g.argc!=5 && g.argc!=6 ){ usage("add ?--raw? ?--propagate? TAGNAME CHECK-IN ?VALUE?"); } zValue = g.argc==6 ? g.argv[5] : 0; db_begin_transaction(); - tag_add_artifact(zPrefix, g.argv[3], g.argv[4], zValue, 1+fPropagate); + tag_add_artifact(zPrefix, g.argv[3], g.argv[4], zValue, + 1+fPropagate,zDateOvrd,zUserOvrd); db_end_transaction(0); }else if( strncmp(g.argv[2],"branch",n)==0 ){ fossil_fatal("the \"fossil tag branch\" command is discontinued\n" @@ -389,66 +425,77 @@ if( strncmp(g.argv[2],"cancel",n)==0 ){ if( g.argc!=5 ){ usage("cancel ?--raw? TAGNAME CHECK-IN"); } db_begin_transaction(); - tag_add_artifact(zPrefix, g.argv[3], g.argv[4], 0, 0); + tag_add_artifact(zPrefix, g.argv[3], g.argv[4], 0, 0, 0, 0); db_end_transaction(0); }else if( strncmp(g.argv[2],"find",n)==0 ){ Stmt q; + const char *zType = find_option("type","t",1); + Blob sql = empty_blob; + if( zType==0 || zType[0]==0 ) zType = "*"; if( g.argc!=4 ){ - usage("find ?--raw? TAGNAME"); + usage("find ?--raw? ?-t|--type TYPE? ?-n|--limit #? 
TAGNAME"); } if( fRaw ){ - db_prepare(&q, + blob_append_sql(&sql, "SELECT blob.uuid FROM tagxref, blob" " WHERE tagid=(SELECT tagid FROM tag WHERE tagname=%Q)" " AND tagxref.tagtype>0" " AND blob.rid=tagxref.rid", g.argv[3] ); + if( nFindLimit>0 ){ + blob_append_sql(&sql, " LIMIT %d", nFindLimit); + } + db_prepare(&q, "%s", blob_sql_text(&sql)); + blob_reset(&sql); while( db_step(&q)==SQLITE_ROW ){ - printf("%s\n", db_column_text(&q, 0)); + fossil_print("%s\n", db_column_text(&q, 0)); } db_finalize(&q); }else{ int tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname='sym-%q'", g.argv[3]); if( tagid>0 ){ - db_prepare(&q, + blob_append_sql(&sql, "%s" + " AND event.type GLOB '%q'" " AND blob.rid IN (" " SELECT rid FROM tagxref" " WHERE tagtype>0 AND tagid=%d" ")" " ORDER BY event.mtime DESC", - timeline_query_for_tty(), tagid + timeline_query_for_tty(), zType, tagid ); - print_timeline(&q, 2000); + db_prepare(&q, "%s", blob_sql_text(&sql)); + blob_reset(&sql); + print_timeline(&q, nFindLimit, 79, 0); db_finalize(&q); } } }else - if( strncmp(g.argv[2],"list",n)==0 ){ + if(( strncmp(g.argv[2],"list",n)==0 )||( strncmp(g.argv[2],"ls",n)==0 )){ Stmt q; if( g.argc==3 ){ - db_prepare(&q, + db_prepare(&q, "SELECT tagname FROM tag" " WHERE EXISTS(SELECT 1 FROM tagxref" " WHERE tagid=tag.tagid" " AND tagtype>0)" " ORDER BY tagname" ); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); if( fRaw ){ - printf("%s\n", zName); + fossil_print("%s\n", zName); }else if( strncmp(zName, "sym-", 4)==0 ){ - printf("%s\n", &zName[4]); + fossil_print("%s\n", &zName[4]); } } db_finalize(&q); }else if( g.argc==4 ){ int rid = name_to_rid(g.argv[3]); @@ -466,13 +513,13 @@ if( fRaw==0 ){ if( strncmp(zName, "sym-", 4)!=0 ) continue; zName += 4; } if( zValue && zValue[0] ){ - printf("%s=%s\n", zName, zValue); + fossil_print("%s=%s\n", zName, zValue); }else{ - printf("%s\n", zName); + fossil_print("%s\n", zName); } } db_finalize(&q); }else{ usage("tag list ?CHECK-IN?"); @@ -488,21 +535,24 @@ tag_cmd_usage: usage("add|cancel|find|list ..."); } /* -** WEBPAGE: /taglist +** WEBPAGE: taglist +** +** List all non-propagating symbolic tags. */ void taglist_page(void){ Stmt q; login_check_credentials(); - if( !g.okRead ){ - login_needed(); + if( !g.perm.Read ){ + login_needed(g.anon.Read); } login_anonymous_available(); style_header("Tags"); + style_adunit_config(ADUNIT_RIGHT_OK); style_submenu_element("Timeline", "Timeline", "tagtimeline"); @ <h2>Non-propagating tags:</h2> db_prepare(&q, "SELECT substr(tagname,5)" " FROM tag" @@ -513,55 +563,33 @@ " ORDER BY tagname" ); @ <ul> while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); - if( g.okHistory ){ - @ <li><a href=%s(g.zBaseURL)/timeline?t=%T(zName)>%h(zName)</a></li> + if( g.perm.Hyperlink ){ + @ <li>%z(xhref("class='taglink'","%R/timeline?t=%T&n=200",zName)) + @ %h(zName)</a></li> }else{ - @ <li><strong>%h(zName)</strong></li> + @ <li><span class="tagDsp">%h(zName)</span></li> } } @ </ul> db_finalize(&q); style_footer(); } -/* -** Draw the names of all tags added to check-in rid. Only tags -** that are directly applied to rid are named. Propagated tags -** are omitted. 
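For instance, the extended find options and the new ls alias described above can be invoked as follows (the tag and check-in names are placeholders, not taken from the diff):

    fossil tag find --type ci --limit 10 release
    fossil tag ls trunk

The first form prints a timeline of the ten most recent check-ins carrying the symbolic tag; the second lists the tags attached to the named check-in.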
-*/ -static void tagtimeline_extra(int rid){ - Stmt q; - db_prepare(&q, - "SELECT substr(tagname,5) FROM tagxref, tag" - " WHERE tagxref.rid=%d" - " AND tagxref.tagid=tag.tagid" - " AND tagxref.tagtype>0 AND tagxref.srcid>0" - " AND tag.tagname GLOB 'sym-*'", - rid - ); - while( db_step(&q)==SQLITE_ROW ){ - const char *zTagName = db_column_text(&q, 0); - if( g.okHistory ){ - @ <a href="%s(g.zBaseURL)/timeline?t=%T(zTagName)">[%h(zTagName)]</a> - }else{ - @ <b>[%h(zTagName)]</b> - } - } - db_finalize(&q); -} - /* ** WEBPAGE: /tagtimeline +** +** Render a timeline with all check-ins that contain non-propagating +** symbolic tags. */ void tagtimeline_page(void){ Stmt q; login_check_credentials(); - if( !g.okRead ){ login_needed(); return; } + if( !g.perm.Read ){ login_needed(g.anon.Read); return; } style_header("Tagged Check-ins"); style_submenu_element("List", "List", "taglist"); login_anonymous_available(); @ <h2>Check-ins with non-propagating tags:</h2> @@ -571,16 +599,10 @@ " AND tagid IN (SELECT tagid FROM tag " " WHERE tagname GLOB 'sym-*'))" " ORDER BY event.mtime DESC", timeline_query_for_www() ); - www_print_timeline(&q, 0, tagtimeline_extra); + www_print_timeline(&q, 0, 0, 0, 0, 0); db_finalize(&q); - @ <br clear="both"> - @ <script> - @ function xin(id){ - @ } - @ function xout(id){ - @ } - @ </script> + @ <br /> style_footer(); } ADDED src/tar.c Index: src/tar.c ================================================================== --- src/tar.c +++ src/tar.c @@ -0,0 +1,665 @@ +/* +** Copyright (c) 2011 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code used to generate tarballs. +*/ +#include "config.h" +#include <assert.h> +#if defined(FOSSIL_ENABLE_MINIZ) +# define MINIZ_HEADER_FILE_ONLY +# include "miniz.c" +#else +# include <zlib.h> +#endif +#include "tar.h" + +/* +** State information for the tarball builder. +*/ +static struct tarball_t { + unsigned char *aHdr; /* Space for building headers */ + char *zSpaces; /* Spaces for padding */ + char *zPrevDir; /* Name of directory for previous entry */ + int nPrevDirAlloc; /* size of zPrevDir */ + Blob pax; /* PAX data */ +} tball; + + +/* +** field lengths of 'ustar' name and prefix fields. +*/ +#define USTAR_NAME_LEN 100 +#define USTAR_PREFIX_LEN 155 + + +/* +** Begin the process of generating a tarball. +** +** Initialize the GZIP compressor and the table of directory names. 
+*/ +static void tar_begin(sqlite3_int64 mTime){ + assert( tball.aHdr==0 ); + tball.aHdr = fossil_malloc(512+512); + memset(tball.aHdr, 0, 512+512); + tball.zSpaces = (char*)&tball.aHdr[512]; + /* zPrevDir init */ + tball.zPrevDir = NULL; + tball.nPrevDirAlloc = 0; + /* scratch buffer init */ + blob_zero(&tball.pax); + + memcpy(&tball.aHdr[108], "0000000", 8); /* Owner ID */ + memcpy(&tball.aHdr[116], "0000000", 8); /* Group ID */ + memcpy(&tball.aHdr[257], "ustar\00000", 8); /* POSIX.1 format */ + memcpy(&tball.aHdr[265], "nobody", 7); /* Owner name */ + memcpy(&tball.aHdr[297], "nobody", 7); /* Group name */ + gzip_begin(mTime); + db_multi_exec( + "CREATE TEMP TABLE dir(name UNIQUE);" + ); +} + + +/* +** verify that lla characters in 'zName' are in the +** ISO646 (=ASCII) character set. +*/ +static int is_iso646_name( + const char *zName, /* file path */ + int nName /* path length */ +){ + int i; + for(i = 0; i < nName; i++){ + unsigned char c = (unsigned char)zName[i]; + if( c>0x7e ) return 0; + } + return 1; +} + + +/* +** copy string pSrc into pDst, truncating or padding with 0 if necessary +*/ +static void padded_copy( + char *pDest, + int nDest, + const char *pSrc, + int nSrc +){ + if(nSrc >= nDest){ + memcpy(pDest, pSrc, nDest); + }else{ + memcpy(pDest, pSrc, nSrc); + memset(&pDest[nSrc], 0, nDest - nSrc); + } +} + + + +/****************************************************************************** +** +** The 'tar' format has evolved over time. Initially the name was stored +** in a 100 byte null-terminated field 'name'. File path names were +** limited to 99 bytes. +** +** The Posix.1 'ustar' format added a 155 byte field 'prefix', allowing +** for up to 255 characters to be stored. The full file path is formed by +** concatenating the field 'prefix', a slash, and the field 'name'. This +** gives some measure of compatibility with programs that only understand +** the oldest format. +** +** The latest Posix extension is called the 'pax Interchange Format'. +** It removes all the limitations of the previous two formats by allowing +** the storage of arbitrary-length attributes in a separate object that looks +** like a file to programs that do not understand this extension. So the +** contents of the 'name' and 'prefix' fields should contain values that allow +** versions of tar that do not understand this extension to still do +** something useful. +** +******************************************************************************/ + +/* +** The position we use to split a file path into the 'name' and 'prefix' +** fields needs to meet the following criteria: +** +** - not at the beginning or end of the string +** - the position must contain a slash +** - no more than 100 characters follow the slash +** - no more than 155 characters precede it +** +** The routine 'find_split_pos' finds a split position. It will meet the +** criteria of listed above if such a position exists. If no such +** position exists it generates one that useful for generating the +** values used for backward compatibility. 
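As a hypothetical illustration of the split rule above (names invented, not from the Fossil tree): for a path longer than 100 bytes, say

    <long-directory-part>/<final-components-totalling-at-most-100-bytes>

tar_split_path() stores the part after the chosen '/' in the 100-byte 'name' field and the part before it (at most 155 bytes, the slash itself dropped) in 'prefix', so a pax-unaware tar reconstructs the path as prefix + '/' + name; if no slash satisfies both limits, the path is emitted via a pax header instead.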
+*/ +static int find_split_pos( + const char *zName, /* file path */ + int nName /* path length */ +){ + int i, split = 0; + /* only search if the string needs splitting */ + if(nName > USTAR_NAME_LEN){ + for(i = 1; i+1 < nName; i++) + if(zName[i] == '/'){ + split = i+1; + /* if the split position is within USTAR_NAME_LEN bytes from + * the end we can quit */ + if(nName - split <= USTAR_NAME_LEN) break; + } + } + return split; +} + + +/* +** attempt to split the file name path to meet 'ustar' header +** criteria. +*/ +static int tar_split_path( + const char *zName, /* path */ + int nName, /* path length */ + char *pName, /* name field */ + char *pPrefix /* prefix field */ +){ + int split = find_split_pos(zName, nName); + /* check whether both pieces fit */ + if(nName - split > USTAR_NAME_LEN || split > USTAR_PREFIX_LEN+1){ + return 0; /* no */ + } + + /* extract name */ + padded_copy(pName, USTAR_NAME_LEN, &zName[split], nName - split); + + /* extract prefix */ + padded_copy(pPrefix, USTAR_PREFIX_LEN, zName, (split > 0 ? split - 1 : 0)); + + return 1; /* success */ +} + + +/* +** When using an extension header we still need to put something +** reasonable in the name and prefix fields. This is probably as +** good as it gets. +*/ +static void approximate_split_path( + const char *zName, /* path */ + int nName, /* path length */ + char *pName, /* name field */ + char *pPrefix, /* prefix field */ + int bHeader /* is this a 'x' type tar header? */ +){ + int split; + + /* if this is a Pax Interchange header prepend "PaxHeader/" + ** so we can tell files apart from metadata */ + if( bHeader ){ + blob_reset(&tball.pax); + blob_appendf(&tball.pax, "PaxHeader/%*.*s", nName, nName, zName); + zName = blob_buffer(&tball.pax); + nName = blob_size(&tball.pax); + } + + /* find the split position */ + split = find_split_pos(zName, nName); + + /* extract a name, truncate if needed */ + padded_copy(pName, USTAR_NAME_LEN, &zName[split], nName - split); + + /* extract a prefix field, truncate when needed */ + padded_copy(pPrefix, USTAR_PREFIX_LEN, zName, (split > 0 ? split-1 : 0)); +} + + +/* +** add a Pax Interchange header to the scratch buffer +** +** format: <length> <key>=<value>\n +** the tricky part is that each header contains its own +** size in decimal, counting that length. +*/ +static void add_pax_header( + const char *zField, + const char *zValue, + int nValue +){ + /* calculate length without length field */ + int blen = strlen(zField) + nValue + 3; + /* calculate the length of the length field */ + int next10 = 1; + int n; + for(n = blen; n > 0; ){ + blen++; next10 *= 10; + n /= 10; + } + /* adding the length extended the length field? */ + if(blen > next10){ + blen++; + } + /* build the string */ + blob_appendf(&tball.pax, "%d %s=%*.*s\n", blen, zField, nValue, nValue, zValue); + /* this _must_ be right */ + if(blob_size(&tball.pax) != blen){ + fossil_fatal("internal error: PAX tar header has bad length"); + } +} + + +/* +** set the header type, calculate the checksum and output +** the header +*/ +static void cksum_and_write_header( + char cType +){ + unsigned int cksum = 0; + int i; + memset(&tball.aHdr[148], ' ', 8); + tball.aHdr[156] = cType; + for(i=0; i<512; i++) cksum += tball.aHdr[i]; + sqlite3_snprintf(8, (char*)&tball.aHdr[148], "%07o", cksum); + tball.aHdr[155] = 0; + gzip_step((char*)tball.aHdr, 512); +} + + +/* +** Build a header for a file or directory and write that header +** into the growing tarball. 
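The self-counting length used by add_pax_header() above can be checked with a small worked example (the key and value are illustrative): for the key "path" and the 10-byte value "src/main.c", the record before the length digits is 4 + 10 + 3 = 17 bytes (space, '=' and newline included); 17 has two digits, so the final length is 19 and the emitted record is

    19 path=src/main.c\n

which is indeed 19 bytes long. When appending the digits pushes the total past the next power of ten, the code adds one further byte for the extra digit.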
+*/ +static void tar_add_header( + const char *zName, /* Name of the object */ + int nName, /* Number of characters in zName */ + int iMode, /* Mode. 0644 or 0755 */ + unsigned int mTime, /* File modification time */ + int iSize, /* Size of the object in bytes */ + char cType /* Type of object: + '0'==file. '2'==symlink. '5'==directory */ +){ + /* set mode and modification time */ + sqlite3_snprintf(8, (char*)&tball.aHdr[100], "%07o", iMode); + sqlite3_snprintf(12, (char*)&tball.aHdr[136], "%011o", mTime); + + /* see if we need to output a Pax Interchange Header */ + if( !is_iso646_name(zName, nName) + || !tar_split_path(zName, nName, (char*)tball.aHdr, (char*)&tball.aHdr[345]) + ){ + int lastPage; + /* add a file name for interoperability with older programs */ + approximate_split_path(zName, nName, (char*)tball.aHdr, + (char*)&tball.aHdr[345], 1); + + /* generate the Pax Interchange path header */ + blob_reset(&tball.pax); + add_pax_header("path", zName, nName); + + /* set the header length, and write the header */ + sqlite3_snprintf(12, (char*)&tball.aHdr[124], "%011o", + blob_size(&tball.pax)); + cksum_and_write_header('x'); + + /* write the Pax Interchange data */ + gzip_step(blob_buffer(&tball.pax), blob_size(&tball.pax)); + lastPage = blob_size(&tball.pax) % 512; + if( lastPage!=0 ){ + gzip_step(tball.zSpaces, 512 - lastPage); + } + + /* generate an approximate path for the regular header */ + approximate_split_path(zName, nName, (char*)tball.aHdr, + (char*)&tball.aHdr[345], 0); + } + /* set the size */ + sqlite3_snprintf(12, (char*)&tball.aHdr[124], "%011o", iSize); + + /* write the regular header */ + cksum_and_write_header(cType); +} + + +/* +** Recursively add an directory entry for the given file if those +** directories have not previously been seen. +*/ +static void tar_add_directory_of( + const char *zName, /* Name of directory including final "/" */ + int nName, /* Characters in zName */ + unsigned int mTime /* Modification time */ +){ + int i; + for(i=nName-1; i>0 && zName[i]!='/'; i--){} + if( i<=0 ) return; + if( i<tball.nPrevDirAlloc + && strncmp(tball.zPrevDir, zName, i)==0 + && tball.zPrevDir[i]==0 ) return; + db_multi_exec("INSERT OR IGNORE INTO dir VALUES('%#q')", i, zName); + if( sqlite3_changes(g.db)==0 ) return; + tar_add_directory_of(zName, i-1, mTime); + tar_add_header(zName, i, 0755, mTime, 0, '5'); + if( i >= tball.nPrevDirAlloc ){ + int nsize = tball.nPrevDirAlloc * 2; + if(i+1 > nsize) + nsize = i+1; + tball.zPrevDir = fossil_realloc(tball.zPrevDir, nsize); + tball.nPrevDirAlloc = nsize; + } + memcpy(tball.zPrevDir, zName, i); + tball.zPrevDir[i] = 0; +} + + +/* +** Add a single file to the growing tarball. +*/ +static void tar_add_file( + const char *zName, /* Name of the file. nul-terminated */ + Blob *pContent, /* Content of the file */ + int mPerm, /* 1: executable file, 2: symlink */ + unsigned int mTime /* Last modification time of the file */ +){ + int nName = strlen(zName); + int n = blob_size(pContent); + int lastPage; + char cType = '0'; + + /* length check moved to tar_split_path */ + tar_add_directory_of(zName, nName, mTime); + + /* + * If we have a symlink, write its destination path (which is stored in + * pContent) into header, and set content length to 0 to avoid storing path + * as file content in the next step. Since 'linkname' header is limited to + * 100 bytes (-1 byte for terminating zero), if path is greater than that, + * store symlink as a plain-text file. (Not sure how TAR handles long links.) 
+ */ + if( mPerm == PERM_LNK && n <= 100 ){ + sqlite3_snprintf(100, (char*)&tball.aHdr[157], "%s", blob_str(pContent)); + cType = '2'; + n = 0; + } + + tar_add_header(zName, nName, ( mPerm==PERM_EXE ) ? 0755 : 0644, + mTime, n, cType); + if( n ){ + gzip_step(blob_buffer(pContent), n); + lastPage = n % 512; + if( lastPage!=0 ){ + gzip_step(tball.zSpaces, 512 - lastPage); + } + } +} + +/* +** Finish constructing the tarball. Put the content of the tarball +** in Blob pOut. +*/ +static void tar_finish(Blob *pOut){ + db_multi_exec("DROP TABLE dir"); + gzip_step(tball.zSpaces, 512); + gzip_step(tball.zSpaces, 512); + gzip_finish(pOut); + fossil_free(tball.aHdr); + tball.aHdr = 0; + fossil_free(tball.zPrevDir); + tball.zPrevDir = NULL; + tball.nPrevDirAlloc = 0; + blob_reset(&tball.pax); +} + + +/* +** COMMAND: test-tarball +** +** Generate a GZIP-compressed tarball in the file given by the first argument +** that contains files given in the second and subsequent arguments. +*/ +void test_tarball_cmd(void){ + int i; + Blob zip; + if( g.argc<3 ){ + usage("ARCHIVE FILE...."); + } + sqlite3_open(":memory:", &g.db); + tar_begin(-1); + for(i=3; i<g.argc; i++){ + Blob file; + blob_zero(&file); + blob_read_from_file(&file, g.argv[i]); + tar_add_file(g.argv[i], &file, file_wd_perm(0), file_wd_mtime(0)); + blob_reset(&file); + } + tar_finish(&zip); + blob_write_to_file(&zip, g.argv[2]); +} + +/* +** Given the RID for a check-in, construct a tarball containing +** all files in that check-in +** +** If RID is for an object that is not a real manifest, then the +** resulting tarball contains a single file which is the RID +** object. +** +** If the RID object does not exist in the repository, then +** pTar is zeroed. +** +** zDir is a "synthetic" subdirectory which all files get +** added to as part of the tarball. It may be 0 or an empty string, in +** which case it is ignored. The intention is to create a tarball which +** politely expands into a subdir instead of filling your current dir +** with source files. For example, pass a UUID or "ProjectName". 
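As a quick way to exercise the tarball builder, the test-tarball command shown above can be run directly against a few local files (file names are placeholders):

    fossil test-tarball out.tar.gz src/tar.c src/tag.c

This writes a gzip-compressed archive named out.tar.gz containing the listed files, using an in-memory database for the temporary dir table.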
+** +*/ +void tarball_of_checkin(int rid, Blob *pTar, const char *zDir){ + Blob mfile, hash, file; + Manifest *pManifest; + ManifestFile *pFile; + Blob filename; + int nPrefix; + char *zName; + unsigned int mTime; + + content_get(rid, &mfile); + if( blob_size(&mfile)==0 ){ + blob_zero(pTar); + return; + } + blob_zero(&hash); + blob_zero(&filename); + + if( zDir && zDir[0] ){ + blob_appendf(&filename, "%s/", zDir); + } + nPrefix = blob_size(&filename); + + pManifest = manifest_get(rid, CFTYPE_MANIFEST, 0); + if( pManifest ){ + mTime = (pManifest->rDate - 2440587.5)*86400.0; + tar_begin(mTime); + if( db_get_boolean("manifest", 0) ){ + blob_append(&filename, "manifest", -1); + zName = blob_str(&filename); + sha1sum_blob(&mfile, &hash); + sterilize_manifest(&mfile); + tar_add_file(zName, &mfile, 0, mTime); + blob_reset(&mfile); + blob_append(&hash, "\n", 1); + blob_resize(&filename, nPrefix); + blob_append(&filename, "manifest.uuid", -1); + zName = blob_str(&filename); + tar_add_file(zName, &hash, 0, mTime); + blob_reset(&hash); + } + manifest_file_rewind(pManifest); + while( (pFile = manifest_file_next(pManifest,0))!=0 ){ + int fid = uuid_to_rid(pFile->zUuid, 0); + if( fid ){ + content_get(fid, &file); + blob_resize(&filename, nPrefix); + blob_append(&filename, pFile->zName, -1); + zName = blob_str(&filename); + tar_add_file(zName, &file, manifest_file_mperm(pFile), mTime); + blob_reset(&file); + } + } + }else{ + sha1sum_blob(&mfile, &hash); + blob_append(&filename, blob_str(&hash), 16); + zName = blob_str(&filename); + mTime = db_int64(0, "SELECT (julianday('now') - 2440587.5)*86400.0;"); + tar_begin(mTime); + tar_add_file(zName, &mfile, 0, mTime); + } + manifest_destroy(pManifest); + blob_reset(&mfile); + blob_reset(&filename); + tar_finish(pTar); +} + +/* +** COMMAND: tarball* +** +** Usage: %fossil tarball VERSION OUTPUTFILE +** +** Generate a compressed tarball for a specified version. If the --name +** option is used, its argument becomes the name of the top-level directory +** in the resulting tarball. If --name is omitted, the top-level directory +** named is derived from the project name, the check-in date and time, and +** the artifact ID of the check-in. +** +** Options: +** --name DIRECTORYNAME The name of the top-level directory in the archive +** -R REPOSITORY Specify a Fossil repository +*/ +void tarball_cmd(void){ + int rid; + Blob tarball; + const char *zName; + zName = find_option("name", 0, 1); + db_find_and_open_repository(0, 0); + + /* We should be done with options.. */ + verify_all_options(); + + if( g.argc!=4 ){ + usage("VERSION OUTPUTFILE"); + } + rid = name_to_typed_rid(g.argv[2], "ci"); + if( rid==0 ){ + fossil_fatal("Check-in not found: %s", g.argv[2]); + return; + } + + if( zName==0 ){ + zName = db_text("default-name", + "SELECT replace(%Q,' ','_') " + " || strftime('_%%Y-%%m-%%d_%%H%%M%%S_', event.mtime) " + " || substr(blob.uuid, 1, 10)" + " FROM event, blob" + " WHERE event.objid=%d" + " AND blob.rid=%d", + db_get("project-name", "unnamed"), rid, rid + ); + } + tarball_of_checkin(rid, &tarball, zName); + blob_write_to_file(&tarball, g.argv[3]); + blob_reset(&tarball); +} + +/* +** WEBPAGE: tarball +** URL: /tarball/RID.tar.gz +** +** Generate a compressed tarball for a check-in. +** Return that tarball as the HTTP reply content. +** +** Optional URL Parameters: +** +** - name=NAME[.tar.gz] is base name of the output file. Defaults to +** something project/version-specific. 
The prefix of the name, up to +** the last '.', are used as the top-most directory name in the tar +** output. +** +** - uuid=the version to tar (may be a tag/branch name). +** Defaults to "trunk". +** +*/ +void tarball_page(void){ + int rid; + char *zName, *zRid, *zKey; + int nName, nRid; + Blob tarball; + + login_check_credentials(); + if( !g.perm.Zip ){ login_needed(g.anon.Zip); return; } + load_control(); + zName = mprintf("%s", PD("name","")); + nName = strlen(zName); + zRid = mprintf("%s", PD("uuid","trunk")); + nRid = strlen(zRid); + if( nName>7 && fossil_strcmp(&zName[nName-7], ".tar.gz")==0 ){ + /* Special case: Remove the ".tar.gz" suffix. */ + nName -= 7; + zName[nName] = 0; + }else{ + /* If the file suffix is not ".tar.gz" then just remove the + ** suffix up to and including the last "." */ + for(nName=strlen(zName)-1; nName>5; nName--){ + if( zName[nName]=='.' ){ + zName[nName] = 0; + break; + } + } + } + rid = name_to_typed_rid(nRid?zRid:zName, "ci"); + if( rid==0 ){ + @ Not found + return; + } + if( nRid==0 && nName>10 ) zName[10] = 0; + zKey = db_text(0, "SELECT '/tarball/'||uuid||'/%q'" + " FROM blob WHERE rid=%d",zName,rid); + if( P("debug")!=0 ){ + style_header("Tarball Generator Debug Screen"); + @ zName = "%h(zName)"<br> + @ rid = %d(rid)<br> + @ zKey = "%h(zKey)" + style_footer(); + return; + } + if( referred_from_login() ){ + style_header("Tarball Download"); + @ <form action='%R/tarball'> + cgi_query_parameters_to_hidden(); + @ <p>Tarball named <b>%h(zName).tar.gz</b> holding the content + @ of check-in <b>%h(zRid)</b>: + @ <input type="submit" value="Download" /> + @ </form> + style_footer(); + return; + } + blob_zero(&tarball); + if( cache_read(&tarball, zKey)==0 ){ + tarball_of_checkin(rid, &tarball, zName); + cache_write(&tarball, zKey); + } + free( zName ); + free( zRid ); + free( zKey ); + cgi_set_content(&tarball); + cgi_set_content_type("application/x-compressed"); +} Index: src/th.c ================================================================== --- src/th.c +++ src/th.c @@ -1,25 +1,47 @@ /* -** The implementation of the TH core. This file contains the parser, and +** The implementation of the TH core. This file contains the parser, and ** the implementation of the interface in th.h. */ +#include "config.h" #include "th.h" #include <string.h> #include <assert.h> -typedef struct Th_Command Th_Command; -typedef struct Th_Frame Th_Frame; -typedef struct Th_Variable Th_Variable; +/* +** Values used for element values in the tcl_platform array. +*/ + +#if !defined(TH_ENGINE) +# define TH_ENGINE "TH1" +#endif + +#if !defined(TH_PLATFORM) +# if defined(_WIN32) || defined(WIN32) +# define TH_PLATFORM "windows" +# else +# define TH_PLATFORM "unix" +# endif +#endif + +/* +** Forward declarations for structures defined below. +*/ + +typedef struct Th_Command Th_Command; +typedef struct Th_Frame Th_Frame; +typedef struct Th_Variable Th_Variable; +typedef struct Th_InterpAndList Th_InterpAndList; /* ** Interpreter structure. 
*/ struct Th_Interp { Th_Vtab *pVtab; /* Copy of the argument passed to Th_CreateInterp() */ - char *zResult; /* Current interpreter result (Th_Malloc()ed) */ + char *zResult; /* Current interpreter result (Th_Malloc()ed) */ int nResult; /* number of bytes in zResult */ Th_Hash *paCmd; /* Table of registered commands */ Th_Frame *pFrame; /* Current execution frame */ int isListMode; /* True if thSplitList() should operate in "list" mode */ }; @@ -41,25 +63,25 @@ ** are stored in the Th_Frame.paVar hash table member of the associated ** stack frame object. ** ** When an interpreter is created, a single Th_Frame structure is also ** allocated - the global variable scope. Th_Interp.pFrame (the current -** interpreter frame) is initialised to point to this Th_Frame. It is -** not deleted for the lifetime of the interpreter (because the global +** interpreter frame) is initialised to point to this Th_Frame. It is +** not deleted for the lifetime of the interpreter (because the global ** frame never goes out of scope). ** ** New stack frames are created by the Th_InFrame() function. Before ** invoking its callback function, Th_InFrame() allocates a new Th_Frame ** structure with pCaller set to the current frame (Th_Interp.pFrame), ** and sets the current frame to the new frame object. After the callback ** has been invoked, the allocated Th_Frame is deleted and the value ** of the current frame pointer restored. -** -** By default, the Th_SetVar(), Th_UnsetVar() and Th_GetVar() functions -** access variable values in the current frame. If they need to access +** +** By default, the Th_SetVar(), Th_UnsetVar() and Th_GetVar() functions +** access variable values in the current frame. If they need to access ** the global frame, they do so by traversing the pCaller pointer list. -** Likewise, the Th_LinkVar() function uses the pCaller pointers to +** Likewise, the Th_LinkVar() function uses the pCaller pointers to ** link to variables located in the global or other stack frames. */ struct Th_Frame { Th_Hash *paVar; /* Variables defined in this scope */ Th_Frame *pCaller; /* Calling frame */ @@ -77,19 +99,30 @@ ** For scalar variables, Th_Variable.zData is never 0. Th_Variable.nData ** stores the number of bytes in the value pointed to by zData. ** ** For an array variable, Th_Variable.zData is 0 and pHash points to ** a hash table mapping between array key name (a th1 string) and -** a the pointer to the Th_Variable structure holding the scalar +** a pointer to the Th_Variable structure holding the scalar ** value. */ struct Th_Variable { int nRef; /* Number of references to this structure */ int nData; /* Number of bytes at Th_Variable.zData */ - char *zData; /* Data for scalar variables */ + char *zData; /* Data for scalar variables */ Th_Hash *pHash; /* Data for array variables */ }; + +/* +** This structure is used to pass complete context information to the +** hash iteration callback functions that need a Th_Interp and a list +** to operate on, e.g. thListAppendHashKey(). 
+*/ +struct Th_InterpAndList { + Th_Interp *interp; /* Associated interpreter context */ + char **pzList; /* IN/OUT: Ptr to ptr to list */ + int *pnList; /* IN/OUT: Current length of *pzList */ +}; /* ** Hash table API: */ #define TH_HASHSIZE 257 @@ -104,51 +137,52 @@ static int thEndOfLine(const char *, int); static int thPushFrame(Th_Interp*, Th_Frame*); static void thPopFrame(Th_Interp*); -static void thFreeVariable(Th_HashEntry*, void*); -static void thFreeCommand(Th_HashEntry*, void*); +static int thFreeVariable(Th_HashEntry*, void*); +static int thFreeCommand(Th_HashEntry*, void*); /* ** The following are used by both the expression and language parsers. -** Given that the start of the input string (z, n) is a language +** Given that the start of the input string (z, n) is a language ** construct of the relevant type (a command enclosed in [], an escape ** sequence etc.), these functions determine the number of bytes ** of the input consumed by the construct. For example: ** ** int nByte; ** thNextCommand(interp, "[expr $a+1] $nIter", 18, &nByte); ** -** results in variable nByte being set to 11. Or, +** results in variable nByte being set to 11. Or, ** ** thNextVarname(interp, "$a+1", 4, &nByte); ** ** results in nByte being set to 2. */ static int thNextCommand(Th_Interp*, const char *z, int n, int *pN); static int thNextEscape (Th_Interp*, const char *z, int n, int *pN); static int thNextVarname(Th_Interp*, const char *z, int n, int *pN); static int thNextNumber (Th_Interp*, const char *z, int n, int *pN); +static int thNextInteger (Th_Interp*, const char *z, int n, int *pN); static int thNextSpace (Th_Interp*, const char *z, int n, int *pN); /* ** Given that the input string (z, n) contains a language construct of -** the relevant type (a command enclosed in [], an escape sequence +** the relevant type (a command enclosed in [], an escape sequence ** like "\xFF" or a variable reference like "${varname}", perform ** substitution on the string and store the resulting string in ** the interpreter result. */ static int thSubstCommand(Th_Interp*, const char *z, int n); static int thSubstEscape (Th_Interp*, const char *z, int n); static int thSubstVarname(Th_Interp*, const char *z, int n); /* -** Given that there is a th1 word located at the start of the input +** Given that there is a th1 word located at the start of the input ** string (z, n), determine the length in bytes of that word. If the -** isCmd argument is non-zero, then an unescaped ";" byte not -** located inside of a block or quoted string is considered to mark +** isCmd argument is non-zero, then an unescaped ";" byte not +** located inside of a block or quoted string is considered to mark ** the end of the word. */ static int thNextWord(Th_Interp*, const char *z, int n, int *pN, int isCmd); /* @@ -175,13 +209,13 @@ ** Append nAdd bytes of content copied from zAdd to the end of buffer ** pBuffer. If there is not enough space currently allocated, resize ** the allocation to make space. */ static int thBufferWrite( - Th_Interp *interp, - Buffer *pBuffer, - const char *zAdd, + Th_Interp *interp, + Buffer *pBuffer, + const char *zAdd, int nAdd ){ int nReq; if( nAdd<0 ){ @@ -252,17 +286,19 @@ return -1; } /* ** Argument pEntry points to an entry in a stack frame hash table -** (Th_Frame.paVar). Decrement the refrerence count of the Th_Variable +** (Th_Frame.paVar). Decrement the reference count of the Th_Variable ** structure that the entry points to. Free the Th_Variable if its ** reference count reaches 0. 
** ** Argument pContext is a pointer to the interpreter structure. +** +** Returns non-zero if the Th_Variable was actually freed. */ -static void thFreeVariable(Th_HashEntry *pEntry, void *pContext){ +static int thFreeVariable(Th_HashEntry *pEntry, void *pContext){ Th_Variable *pValue = (Th_Variable *)pEntry->pData; pValue->nRef--; assert( pValue->nRef>=0 ); if( pValue->nRef==0 ){ Th_Interp *interp = (Th_Interp *)pContext; @@ -270,27 +306,48 @@ if( pValue->pHash ){ Th_HashIterate(interp, pValue->pHash, thFreeVariable, pContext); Th_HashDelete(interp, pValue->pHash); } Th_Free(interp, pValue); + pEntry->pData = 0; + return 1; } + return 0; } /* ** Argument pEntry points to an entry in the command hash table ** (Th_Interp.paCmd). Delete the Th_Command structure that the ** entry points to. ** ** Argument pContext is a pointer to the interpreter structure. +** +** Always returns non-zero. */ -static void thFreeCommand(Th_HashEntry *pEntry, void *pContext){ +static int thFreeCommand(Th_HashEntry *pEntry, void *pContext){ Th_Command *pCommand = (Th_Command *)pEntry->pData; if( pCommand->xDel ){ pCommand->xDel((Th_Interp *)pContext, pCommand->pContext); } Th_Free((Th_Interp *)pContext, pEntry->pData); pEntry->pData = 0; + return 1; +} + +/* +** Argument pEntry points to an entry in a hash table. The key is +** the list element to be added. +** +** Argument pContext is a pointer to the Th_InterpAndList structure. +** +** Always returns non-zero. +*/ +static int thListAppendHashKey(Th_HashEntry *pEntry, void *pContext){ + Th_InterpAndList *pInterpAndList = (Th_InterpAndList *)pContext; + Th_ListAppend(pInterpAndList->interp, pInterpAndList->pzList, + pInterpAndList->pnList, pEntry->zKey, pEntry->nKey); + return 1; } /* ** Push a new frame onto the stack. */ @@ -310,19 +367,19 @@ Th_HashDelete(interp, pFrame->paVar); interp->pFrame = pFrame->pCaller; } /* -** The first part of the string (zInput,nInput) contains an escape +** The first part of the string (zInput,nInput) contains an escape ** sequence. Set *pnEscape to the number of bytes in the escape sequence. ** If there is a parse error, return TH_ERROR and set the interpreter ** result to an error message. Otherwise return TH_OK. */ static int thNextEscape( Th_Interp *interp, - const char *zInput, - int nInput, + const char *zInput, + int nInput, int *pnEscape ){ int i = 2; assert(nInput>0); @@ -343,18 +400,18 @@ return TH_OK; } /* ** The first part of the string (zInput,nInput) contains a variable -** reference. Set *pnVarname to the number of bytes in the variable -** reference. If there is a parse error, return TH_ERROR and set the +** reference. Set *pnVarname to the number of bytes in the variable +** reference. If there is a parse error, return TH_ERROR and set the ** interpreter result to an error message. Otherwise return TH_OK. */ int thNextVarname( Th_Interp *interp, - const char *zInput, - int nInput, + const char *zInput, + int nInput, int *pnVarname ){ int i; assert(nInput>0); @@ -400,19 +457,19 @@ return TH_OK; } /* ** The first part of the string (zInput,nInput) contains a command -** enclosed in a "[]" block. Set *pnCommand to the number of bytes in -** the variable reference. If there is a parse error, return TH_ERROR -** and set the interpreter result to an error message. Otherwise return +** enclosed in a "[]" block. Set *pnCommand to the number of bytes in +** the variable reference. If there is a parse error, return TH_ERROR +** and set the interpreter result to an error message. Otherwise return ** TH_OK. 
*/ int thNextCommand( Th_Interp *interp, - const char *zInput, - int nInput, + const char *zInput, + int nInput, int *pnCommand ){ int nBrace = 0; int nSquare = 0; int i; @@ -437,17 +494,17 @@ return TH_OK; } /* -** Set *pnSpace to the number of whitespace bytes at the start of +** Set *pnSpace to the number of whitespace bytes at the start of ** input string (zInput, nInput). Always return TH_OK. */ int thNextSpace( Th_Interp *interp, - const char *zInput, - int nInput, + const char *zInput, + int nInput, int *pnSpace ){ int i; for(i=0; i<nInput && th_isspace(zInput[i]); i++); *pnSpace = i; @@ -456,21 +513,21 @@ /* ** The first byte of the string (zInput,nInput) is not white-space. ** Set *pnWord to the number of bytes in the th1 word that starts ** with this byte. If a complete word cannot be parsed or some other -** error occurs, return TH_ERROR and set the interpreter result to +** error occurs, return TH_ERROR and set the interpreter result to ** an error message. Otherwise return TH_OK. ** -** If the isCmd argument is non-zero, then an unescaped ";" byte not -** located inside of a block or quoted string is considered to mark +** If the isCmd argument is non-zero, then an unescaped ";" byte not +** located inside of a block or quoted string is considered to mark ** the end of the word. */ static int thNextWord( Th_Interp *interp, - const char *zInput, - int nInput, + const char *zInput, + int nInput, int *pnWord, int isCmd ){ int iEnd = 0; @@ -501,16 +558,18 @@ } iEnd++; } if( nBrace>0 || nSq>0 ){ /* Parse error */ + Th_SetResult(interp, "parse error", -1); return TH_ERROR; } } if( iEnd>nInput ){ /* Parse error */ + Th_SetResult(interp, "parse error", -1); return TH_ERROR; } *pnWord = iEnd; return TH_OK; } @@ -530,12 +589,12 @@ return thEvalLocal(interp, &zWord[1], nWord-2); } /* ** The input string (zWord, nWord) contains a th1 variable reference -** (a '$' byte followed by a variable name). Perform substitution on -** the input string and store the resulting string in the interpreter +** (a '$' byte followed by a variable name). Perform substitution on +** the input string and store the resulting string in the interpreter ** result. */ static int thSubstVarname( Th_Interp *interp, const char *zWord, @@ -571,11 +630,11 @@ return Th_GetVar(interp, &zWord[1], nWord-1); } /* ** The input string (zWord, nWord) contains a th1 escape sequence. -** Perform substitution on the input string and store the resulting +** Perform substitution on the input string and store the resulting ** string in the interpreter result. */ static int thSubstEscape( Th_Interp *interp, const char *zWord, @@ -607,11 +666,11 @@ return TH_OK; } /* ** The input string (zWord, nWord) contains a th1 word. Perform -** substitution on the input string and store the resulting +** substitution on the input string and store the resulting ** string in the interpreter result. 
*/ static int thSubstWord( Th_Interp *interp, const char *zWord, @@ -639,20 +698,20 @@ int (*xGet)(Th_Interp *, const char*, int, int *) = 0; int (*xSubst)(Th_Interp *, const char*, int) = 0; switch( zWord[i] ){ case '\\': - xGet = thNextEscape; xSubst = thSubstEscape; + xGet = thNextEscape; xSubst = thSubstEscape; break; case '[': if( !interp->isListMode ){ - xGet = thNextCommand; xSubst = thSubstCommand; + xGet = thNextCommand; xSubst = thSubstCommand; break; } case '$': if( !interp->isListMode ){ - xGet = thNextVarname; xSubst = thSubstVarname; + xGet = thNextVarname; xSubst = thSubstVarname; break; } default: { thBufferWrite(interp, &output, &zWord[i], 1); continue; /* Go to the next iteration of the for(...) loop */ @@ -684,11 +743,11 @@ ** Return true if one of the following is true of the buffer pointed ** to by zInput, length nInput: ** ** + It is empty, or ** + It contains nothing but white-space, or -** + It contains no non-white-space characters before the first +** + It contains no non-white-space characters before the first ** newline character. ** ** Otherwise return false. */ static int thEndOfLine(const char *zInput, int nInput){ @@ -724,16 +783,16 @@ ** // Free all memory allocated by Th_SplitList(). The arrays pointed ** // to by argv and argl are invalidated by this call. ** // ** Th_Free(interp, argv); ** -*/ +*/ static int thSplitList( Th_Interp *interp, /* Interpreter context */ - const char *zList, /* Pointer to buffer containing input list */ + const char *zList, /* Pointer to buffer containing input list */ int nList, /* Size of buffer pointed to by zList */ - char ***pazElem, /* OUT: Array of list elements */ + char ***pazElem, /* OUT: Array of list elements */ int **panElem, /* OUT: Lengths of each list element */ int *pnCount /* OUT: Number of list elements */ ){ int rc = TH_OK; @@ -773,14 +832,14 @@ assert((lenbuf.nBuf/sizeof(int))==nCount); assert((pazElem && panElem) || (!pazElem && !panElem)); if( pazElem && rc==TH_OK ){ int i; - char *zElem; + char *zElem; int *anElem; char **azElem = Th_Malloc(interp, - sizeof(char*) * nCount + /* azElem */ + sizeof(char*) * nCount + /* azElem */ sizeof(int) * nCount + /* anElem */ strbuf.nBuf /* space for list element strings */ ); anElem = (int *)&azElem[nCount]; zElem = (char *)&anElem[nCount]; @@ -794,11 +853,11 @@ *panElem = anElem; } if( pnCount ){ *pnCount = nCount; } - + finish: thBufferFree(interp, &strbuf); thBufferFree(interp, &lenbuf); return rc; } @@ -846,11 +905,11 @@ /* Gobble up input a word at a time until the end of the command ** (a semi-colon or end of line). */ while( rc==TH_OK && *zInput!=';' && !thEndOfLine(zInput, nInput) ){ - int nWord; + int nWord=0; thNextSpace(interp, zInput, nInput, &nSpace); rc = thNextWord(interp, &zInput[nSpace], nInput-nSpace, &nWord, 1); zInput += (nSpace+nWord); nInput -= (nSpace+nWord); } @@ -875,18 +934,18 @@ if( rc==TH_OK ){ Th_Command *p = (Th_Command *)(pEntry->pData); const char **azArg = (const char **)argv; rc = p->xProc(interp, p->pContext, argc, azArg, argl); } - - /* If an error occured, add this command to the stack trace report. */ + + /* If an error occurred, add this command to the stack trace report. 
*/ if( rc==TH_ERROR ){ char *zRes; int nRes; char *zStack = 0; int nStack = 0; - + zRes = Th_TakeResult(interp, &nRes); if( TH_OK==Th_GetVar(interp, (char *)"::th_stack_trace", -1) ){ zStack = Th_TakeResult(interp, &nStack); } Th_ListAppend(interp, &zStack, &nStack, zFirst, zInput-zFirst); @@ -911,15 +970,15 @@ ** ** Argument iFrame is interpreted as follows: ** ** * If iFrame is 0, this means the current frame. ** -** * If iFrame is negative, then the nth frame up the stack, where -** n is the absolute value of iFrame. A value of -1 means the +** * If iFrame is negative, then the nth frame up the stack, where +** n is the absolute value of iFrame. A value of -1 means the ** calling procedure. ** -** * If iFrame is +ve, then the nth frame from the bottom of the +** * If iFrame is +ve, then the nth frame from the bottom of the ** stack. An iFrame value of 1 means the toplevel (global) frame. */ static Th_Frame *getFrame(Th_Interp *interp, int iFrame){ Th_Frame *p = interp->pFrame; int i; @@ -947,28 +1006,28 @@ /* ** Evaluate th1 script (zProgram, nProgram) in the frame identified by ** argument iFrame. Leave either an error message or a result in the -** interpreter result and return a th1 error code (TH_OK, TH_ERROR, +** interpreter result and return a th1 error code (TH_OK, TH_ERROR, ** TH_RETURN, TH_CONTINUE or TH_BREAK). */ int Th_Eval(Th_Interp *interp, int iFrame, const char *zProgram, int nProgram){ int rc = TH_OK; Th_Frame *pSavedFrame = interp->pFrame; - /* Set Th_Interp.pFrame to the frame that this script is to be + /* Set Th_Interp.pFrame to the frame that this script is to be ** evaluated in. The current frame is saved in pSavedFrame and will ** be restored before this function returns. */ interp->pFrame = getFrame(interp, iFrame); if( !interp->pFrame ){ rc = TH_ERROR; }else{ int nInput = nProgram; - + if( nInput<0 ){ nInput = th_strlen(zProgram); } rc = thEvalLocal(interp, zProgram, nInput); } @@ -994,13 +1053,13 @@ ** array key name. */ static int thAnalyseVarname( const char *zVarname, int nVarname, - const char **pzOuter, /* OUT: Pointer to scalar/array name */ + const char **pzOuter, /* OUT: Pointer to scalar/array name */ int *pnOuter, /* OUT: Number of bytes at *pzOuter */ - const char **pzInner, /* OUT: Pointer to array key (or null) */ + const char **pzInner, /* OUT: Pointer to array key (or null) */ int *pnInner, /* OUT: Number of bytes at *pzInner */ int *pisGlobal /* OUT: Set to true if this is a global ref */ ){ const char *zOuter = zVarname; int nOuter; @@ -1041,13 +1100,28 @@ *pnInner = nInner; *pisGlobal = isGlobal; return TH_OK; } +/* +** The Find structure is used to return extra information to callers of the +** thFindValue function. The fields within it are populated by thFindValue +** as soon as the necessary information is available. Callers should check +** each field of interest upon return. +*/ + +struct Find { + Th_HashEntry *pValueEntry; /* Pointer to the scalar or array hash entry */ + Th_HashEntry *pElemEntry; /* Pointer to array element hash entry, if any */ + const char *zElem; /* Name of array element, if applicable */ + int nElem; /* Length of array element name, if applicable */ +}; +typedef struct Find Find; + /* ** Input string (zVar, nVar) contains a variable name. This function locates -** the Th_Variable structure associated with the named variable. The +** the Th_Variable structure associated with the named variable. 
The ** variable name may be a global or local scalar or array variable ** ** If the create argument is non-zero and the named variable does not exist ** it is created. Otherwise, an error is left in the interpreter result ** and NULL returned. @@ -1054,16 +1128,19 @@ ** ** If the arrayok argument is false and the named variable is an array, ** an error is left in the interpreter result and NULL returned. If ** arrayok is true an array name is Ok. */ + static Th_Variable *thFindValue( Th_Interp *interp, - const char *zVar, /* Pointer to variable name */ - int nVar, /* Number of bytes at nVar */ - int create, /* If true, create the variable if not found */ - int arrayok /* If true, an array is Ok. Othewise array==error */ + const char *zVar, /* Pointer to variable name */ + int nVar, /* Number of bytes at nVar */ + int create, /* If true, create the variable if not found */ + int arrayok, /* If true, an array is Ok. Otherwise array==error */ + int noerror, /* If false, set interpreter result to error */ + Find *pFind /* If non-zero, place output here */ ){ const char *zOuter; int nOuter; const char *zInner; int nInner; @@ -1072,16 +1149,24 @@ Th_HashEntry *pEntry; Th_Frame *pFrame = interp->pFrame; Th_Variable *pValue; thAnalyseVarname(zVar, nVar, &zOuter, &nOuter, &zInner, &nInner, &isGlobal); + if( pFind ){ + memset(pFind, 0, sizeof(Find)); + pFind->zElem = zInner; + pFind->nElem = nInner; + } if( isGlobal ){ while( pFrame->pCaller ) pFrame = pFrame->pCaller; } pEntry = Th_HashFind(interp, pFrame->paVar, zOuter, nOuter, create); - assert(pEntry || !create); + assert(pEntry || create<=0); + if( pFind ){ + pFind->pValueEntry = pEntry; + } if( !pEntry ){ goto no_such_var; } pValue = (Th_Variable *)pEntry->pData; @@ -1092,20 +1177,26 @@ pEntry->pData = (void *)pValue; } if( zInner ){ if( pValue->zData ){ - Th_ErrorMessage(interp, "variable is a scalar:", zOuter, nOuter); + if( !noerror ){ + Th_ErrorMessage(interp, "variable is a scalar:", zOuter, nOuter); + } return 0; } if( !pValue->pHash ){ if( !create ){ goto no_such_var; } pValue->pHash = Th_HashNew(interp); } pEntry = Th_HashFind(interp, pValue->pHash, zInner, nInner, create); + assert(pEntry || create<=0); + if( pFind ){ + pFind->pElemEntry = pEntry; + } if( !pEntry ){ goto no_such_var; } pValue = (Th_Variable *)pEntry->pData; if( !pValue ){ @@ -1114,34 +1205,38 @@ pValue->nRef = 1; pEntry->pData = (void *)pValue; } }else{ if( pValue->pHash && !arrayok ){ - Th_ErrorMessage(interp, "variable is an array:", zOuter, nOuter); + if( !noerror ){ + Th_ErrorMessage(interp, "variable is an array:", zOuter, nOuter); + } return 0; } } return pValue; no_such_var: - Th_ErrorMessage(interp, "no such variable:", zVar, nVar); + if( !noerror ){ + Th_ErrorMessage(interp, "no such variable:", zVar, nVar); + } return 0; } /* -** String (zVar, nVar) must contain the name of a scalar variable or -** array member. Look up the variable, store its current value in +** String (zVar, nVar) must contain the name of a scalar variable or +** array member. Look up the variable, store its current value in ** the interpreter result and return TH_OK. ** ** If the named variable does not exist, return TH_ERROR and leave ** an error message in the interpreter result. 
*/ int Th_GetVar(Th_Interp *interp, const char *zVar, int nVar){ Th_Variable *pValue; - pValue = thFindValue(interp, zVar, nVar, 0, 0); + pValue = thFindValue(interp, zVar, nVar, 0, 0, 0, 0); if( !pValue ){ return TH_ERROR; } if( !pValue->zData ){ Th_ErrorMessage(interp, "no such variable:", zVar, nVar); @@ -1148,10 +1243,26 @@ return TH_ERROR; } return Th_SetResult(interp, pValue->zData, pValue->nData); } + +/* +** Return true if variable (zVar, nVar) exists. +*/ +int Th_ExistsVar(Th_Interp *interp, const char *zVar, int nVar){ + Th_Variable *pValue = thFindValue(interp, zVar, nVar, 0, 1, 1, 0); + return pValue && (pValue->zData || pValue->pHash); +} + +/* +** Return true if array variable (zVar, nVar) exists. +*/ +int Th_ExistsArrayVar(Th_Interp *interp, const char *zVar, int nVar){ + Th_Variable *pValue = thFindValue(interp, zVar, nVar, 0, 1, 1, 0); + return pValue && !pValue->zData && pValue->pHash; +} /* ** String (zVar, nVar) must contain the name of a scalar variable or ** array member. If the variable does not exist it is created. The ** variable is set to the value supplied in string (zValue, nValue). @@ -1158,19 +1269,19 @@ ** ** If (zVar, nVar) refers to an existing array, TH_ERROR is returned ** and an error message left in the interpreter result. */ int Th_SetVar( - Th_Interp *interp, - const char *zVar, + Th_Interp *interp, + const char *zVar, int nVar, const char *zValue, int nValue ){ Th_Variable *pValue; - pValue = thFindValue(interp, zVar, nVar, 1, 0); + pValue = thFindValue(interp, zVar, nVar, 1, 0, 0, 0); if( !pValue ){ return TH_ERROR; } if( nValue<0 ){ @@ -1194,13 +1305,13 @@ ** Create a variable link so that accessing variable (zLocal, nLocal) is ** the same as accessing variable (zLink, nLink) in stack frame iFrame. */ int Th_LinkVar( Th_Interp *interp, /* Interpreter */ - const char *zLocal, int nLocal, /* Local varname */ + const char *zLocal, int nLocal, /* Local varname */ int iFrame, /* Stack frame of linked var */ - const char *zLink, int nLink /* Linked varname */ + const char *zLink, int nLink /* Linked varname */ ){ Th_Frame *pSavedFrame = interp->pFrame; Th_Frame *pFrame; Th_HashEntry *pEntry; Th_Variable *pValue; @@ -1209,11 +1320,11 @@ if( !pFrame ){ return TH_ERROR; } pSavedFrame = interp->pFrame; interp->pFrame = pFrame; - pValue = thFindValue(interp, zLink, nLink, 1, 1); + pValue = thFindValue(interp, zLink, nLink, 1, 1, 0, 0); interp->pFrame = pSavedFrame; pEntry = Th_HashFind(interp, interp->pFrame->paVar, zLocal, nLocal, 1); if( pEntry->pData ){ Th_ErrorMessage(interp, "variable exists:", zLocal, nLocal); @@ -1230,25 +1341,68 @@ ** an array, or an array member. If the identified variable exists, it ** is deleted and TH_OK returned. Otherwise, an error message is left ** in the interpreter result and TH_ERROR is returned. 
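A minimal sketch (not part of the diff) of how the scalar, array and global naming described above maps onto these entry points; it assumes an already-created Th_Interp *interp, and the variable names are invented for illustration:

    Th_SetVar(interp, "x", -1, "42", -1);               /* scalar in the current frame */
    Th_SetVar(interp, "::cfg(engine)", -1, "TH1", -1);  /* element of a global array */
    if( Th_ExistsVar(interp, "x", -1) ){
      Th_GetVar(interp, "x", -1);  /* on TH_OK, "42" is left in the interpreter result */
    }
    Th_UnsetVar(interp, "::cfg(engine)", -1);           /* removes just that element */

A negative length means the name or value is nul-terminated, as in the ::th_stack_trace and tcl_platform calls elsewhere in this file.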
*/ int Th_UnsetVar(Th_Interp *interp, const char *zVar, int nVar){ + Find find; Th_Variable *pValue; + Th_HashEntry *pEntry; + int rc = TH_ERROR; - pValue = thFindValue(interp, zVar, nVar, 1, 1); + pValue = thFindValue(interp, zVar, nVar, 0, 1, 0, &find); if( !pValue ){ - return TH_ERROR; + return rc; } - Th_Free(interp, pValue->zData); - pValue->zData = 0; - if( pValue->pHash ){ - Th_HashIterate(interp, pValue->pHash, thFreeVariable, (void *)interp); - Th_HashDelete(interp, pValue->pHash); - pValue->pHash = 0; - } - return TH_OK; + if( pValue->zData || pValue->pHash ){ + rc = TH_OK; + }else { + Th_ErrorMessage(interp, "no such variable:", zVar, nVar); + } + + /* + ** The variable may be shared by more than one frame; therefore, make sure + ** it is actually freed prior to freeing the parent structure. The values + ** for the variable must be freed now so the variable appears undefined in + ** all frames. The hash entry in the current frame must also be deleted + ** now; otherwise, if the current stack frame is later popped, it will try + ** to delete a variable which has already been freed. + */ + if( find.zElem ){ + pEntry = find.pElemEntry; + }else{ + pEntry = find.pValueEntry; + } + assert( pEntry ); + assert( pValue ); + if( thFreeVariable(pEntry, (void *)interp) ){ + if( find.zElem ){ + Th_Variable *pValue2 = find.pValueEntry->pData; + Th_HashFind(interp, pValue2->pHash, find.zElem, find.nElem, -1); + }else if( pEntry->pData ){ + Th_Free(interp, pEntry->pData); + pEntry->pData = 0; + } + }else{ + if( pValue->zData ){ + Th_Free(interp, pValue->zData); + pValue->zData = 0; + } + if( pValue->pHash ){ + Th_HashIterate(interp, pValue->pHash, thFreeVariable, (void *)interp); + Th_HashDelete(interp, pValue->pHash); + pValue->pHash = 0; + } + if( find.zElem ){ + Th_Variable *pValue2 = find.pValueEntry->pData; + Th_HashFind(interp, pValue2->pHash, find.zElem, find.nElem, -1); + } + } + if( !find.zElem ){ + Th_HashFind(interp, interp->pFrame->paVar, zVar, nVar, -1); + } + return rc; } /* ** Return an allocated buffer containing a copy of string (z, n). The ** caller is responsible for eventually calling Th_Free() to free @@ -1283,11 +1437,11 @@ if( interp ){ char *zRes = 0; int nRes = 0; Th_SetVar(interp, (char *)"::th_stack_trace", -1, 0, 0); - + Th_StringAppend(interp, &zRes, &nRes, zPre, -1); if( zRes[nRes-1]=='"' ){ Th_StringAppend(interp, &zRes, &nRes, z, n); Th_StringAppend(interp, &zRes, &nRes, (const char *)"\"", 1); }else{ @@ -1365,12 +1519,12 @@ return (char *)Th_Malloc(pInterp, 1); } } -/* -** Wrappers around the supplied malloc() and free() +/* +** Wrappers around the supplied malloc() and free() */ void *Th_Malloc(Th_Interp *pInterp, int nByte){ void *p = pInterp->pVtab->xMalloc(nByte); if( p ){ memset(p, 0, nByte); @@ -1382,16 +1536,16 @@ pInterp->pVtab->xFree(z); } } /* -** Install a new th1 command. +** Install a new th1 command. ** ** If a command of the same name already exists, it is deleted automatically. */ int Th_CreateCommand( - Th_Interp *interp, + Th_Interp *interp, const char *zName, /* New command name */ Th_CommandProc xProc, /* Command callback proc */ void *pContext, /* Value to pass as second arg to xProc */ void (*xDel)(Th_Interp *, void *) /* Command destructor callback */ ){ @@ -1409,27 +1563,27 @@ } pCommand->xProc = xProc; pCommand->pContext = pContext; pCommand->xDel = xDel; pEntry->pData = (void *)pCommand; - + return TH_OK; } /* -** Rename the existing command (zName, nName) to (zNew, nNew). 
If nNew is 0, +** Rename the existing command (zName, nName) to (zNew, nNew). If nNew is 0, ** the command is deleted instead of renamed. ** ** If successful, TH_OK is returned. If command zName does not exist, or -** if command zNew already exists, an error message is left in the +** if command zNew already exists, an error message is left in the ** interpreter result and TH_ERROR is returned. */ int Th_RenameCommand( - Th_Interp *interp, - const char *zName, /* Existing command name */ + Th_Interp *interp, + const char *zName, /* Existing command name */ int nName, /* Number of bytes at zName */ - const char *zNew, /* New command name */ + const char *zNew, /* New command name */ int nNew /* Number of bytes at zNew */ ){ Th_HashEntry *pEntry; Th_HashEntry *pNewEntry; @@ -1483,11 +1637,11 @@ ** If an error occurs (if (zList, nList) is not a valid list) an error ** message is left in the interpreter result and TH_ERROR returned. ** ** If successful, *pnCount is set to the number of elements in the list. ** panElem is set to point at an array of *pnCount integers - the lengths -** of the element values. *pazElem is set to point at an array of +** of the element values. *pazElem is set to point at an array of ** pointers to buffers containing the array element's data. ** ** To free the arrays allocated at *pazElem and *panElem, the caller ** should call Th_Free() on *pazElem only. Exactly one such call to ** Th_Free() must be made per call to Th_SplitList(). @@ -1509,13 +1663,13 @@ ** Th_Free(interp, azElem); ** */ int Th_SplitList( Th_Interp *interp, - const char *zList, /* Pointer to buffer containing list */ + const char *zList, /* Pointer to buffer containing list */ int nList, /* Number of bytes at zList */ - char ***pazElem, /* OUT: Array of pointers to element data */ + char ***pazElem, /* OUT: Array of pointers to element data */ int **panElem, /* OUT: Array of element data lengths */ int *pnCount /* OUT: Number of elements in list */ ){ int rc; interp->isListMode = 1; @@ -1526,16 +1680,16 @@ } return rc; } /* -** Append a new element to an existing th1 list. The element to append +** Append a new element to an existing th1 list. The element to append ** to the list is (zElem, nElem). ** ** A pointer to the existing list must be stored at *pzList when this -** function is called. The length must be stored in *pnList. The value -** of *pzList must either be NULL (in which case *pnList must be 0), or +** function is called. The length must be stored in *pnList. The value +** of *pzList must either be NULL (in which case *pnList must be 0), or ** a pointer to memory obtained from Th_Malloc(). ** ** This function calls Th_Free() to free the buffer at *pzList and sets ** *pzList to point to a new buffer containing the new list value. *pnList ** is similarly updated before returning. The return value is always TH_OK. @@ -1552,13 +1706,13 @@ ** Th_Free(interp, zList); ** */ int Th_ListAppend( Th_Interp *interp, /* Interpreter context */ - char **pzList, /* IN/OUT: Ptr to ptr to list */ + char **pzList, /* IN/OUT: Ptr to ptr to list */ int *pnList, /* IN/OUT: Current length of *pzList */ - const char *zElem, /* Data to append */ + const char *zElem, /* Data to append */ int nElem /* Length of nElem */ ){ Buffer output; int i; @@ -1607,13 +1761,13 @@ ** Append a new element to an existing th1 string. This function uses ** the same interface as the Th_ListAppend() function. 
*/ int Th_StringAppend( Th_Interp *interp, /* Interpreter context */ - char **pzStr, /* IN/OUT: Ptr to ptr to list */ + char **pzStr, /* IN/OUT: Ptr to ptr to list */ int *pnStr, /* IN/OUT: Current length of *pzStr */ - const char *zElem, /* Data to append */ + const char *zElem, /* Data to append */ int nElem /* Length of nElem */ ){ char *zNew; int nNew; @@ -1631,11 +1785,23 @@ *pnStr = nNew; return TH_OK; } -/* +/* +** Initialize an interpreter. +*/ +static int thInitialize(Th_Interp *interp){ + assert(interp->pFrame); + + Th_SetVar(interp, (char *)"::tcl_platform(engine)", -1, TH_ENGINE, -1); + Th_SetVar(interp, (char *)"::tcl_platform(platform)", -1, TH_PLATFORM, -1); + + return TH_OK; +} + +/* ** Delete an interpreter. */ void Th_DeleteInterp(Th_Interp *interp){ assert(interp->pFrame); assert(0==interp->pFrame->pCaller); @@ -1652,11 +1818,11 @@ /* Delete the interpreter structure itself. */ Th_Free(interp, (void *)interp); } -/* +/* ** Create a new interpreter. */ Th_Interp * Th_CreateInterp(Th_Vtab *pVtab){ Th_Interp *p; @@ -1664,10 +1830,11 @@ p = pVtab->xMalloc(sizeof(Th_Interp) + sizeof(Th_Frame)); memset(p, 0, sizeof(Th_Interp)); p->pVtab = pVtab; p->paCmd = Th_HashNew(p); thPushFrame(p, (Th_Frame *)&p[1]); + thInitialize(p); return p; } /* @@ -1675,10 +1842,11 @@ ** the expression module means the Th_Expr() and exprXXX() functions. */ typedef struct Operator Operator; struct Operator { const char *zOp; + int nOp; int eOp; int iPrecedence; int eArgType; }; typedef struct Expr Expr; @@ -1686,11 +1854,11 @@ Operator *pOp; Expr *pParent; Expr *pLeft; Expr *pRight; - char *zValue; /* Pointer to literal value */ + char *zValue; /* Pointer to literal value */ int nValue; /* Length of literal value buffer */ }; /* Unary operators */ #define OP_UNARY_MINUS 2 @@ -1732,62 +1900,95 @@ #define ARG_NUMBER 2 #define ARG_STRING 3 static Operator aOperator[] = { - {"(", OP_OPEN_BRACKET, -1, 0}, - {")", OP_CLOSE_BRACKET, -1, 0}, + {"(", 1, OP_OPEN_BRACKET, -1, 0}, + {")", 1, OP_CLOSE_BRACKET, -1, 0}, /* Note: all unary operators have (iPrecedence==1) */ - {"-", OP_UNARY_MINUS, 1, ARG_NUMBER}, - {"+", OP_UNARY_PLUS, 1, ARG_NUMBER}, - {"~", OP_BITWISE_NOT, 1, ARG_INTEGER}, - {"!", OP_LOGICAL_NOT, 1, ARG_INTEGER}, + {"-", 1, OP_UNARY_MINUS, 1, ARG_NUMBER}, + {"+", 1, OP_UNARY_PLUS, 1, ARG_NUMBER}, + {"~", 1, OP_BITWISE_NOT, 1, ARG_INTEGER}, + {"!", 1, OP_LOGICAL_NOT, 1, ARG_INTEGER}, /* Binary operators. It is important to the parsing in Th_Expr() that - * the two-character symbols ("==") appear before the one-character + * the two-character symbols ("==") appear before the one-character * ones ("="). And that the priorities of all binary operators are * integers between 2 and 12. 
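   * For example, when "1<=2" is scanned, "<=" must be matched as a single
   * operator; if "<" were listed first it would be consumed on its own,
   * leaving a stray "=" that cannot be parsed.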
*/ - {"<<", OP_LEFTSHIFT, 4, ARG_INTEGER}, - {">>", OP_RIGHTSHIFT, 4, ARG_INTEGER}, - {"<=", OP_LE, 5, ARG_NUMBER}, - {">=", OP_GE, 5, ARG_NUMBER}, - {"==", OP_EQ, 6, ARG_NUMBER}, - {"!=", OP_NE, 6, ARG_NUMBER}, - {"eq", OP_SEQ, 7, ARG_STRING}, - {"ne", OP_SNE, 7, ARG_STRING}, - {"&&", OP_LOGICAL_AND, 11, ARG_INTEGER}, - {"||", OP_LOGICAL_OR, 12, ARG_INTEGER}, - - {"*", OP_MULTIPLY, 2, ARG_NUMBER}, - {"/", OP_DIVIDE, 2, ARG_NUMBER}, - {"%", OP_MODULUS, 2, ARG_INTEGER}, - {"+", OP_ADD, 3, ARG_NUMBER}, - {"-", OP_SUBTRACT, 3, ARG_NUMBER}, - {"<", OP_LT, 5, ARG_NUMBER}, - {">", OP_GT, 5, ARG_NUMBER}, - {"&", OP_BITWISE_AND, 8, ARG_INTEGER}, - {"^", OP_BITWISE_XOR, 9, ARG_INTEGER}, - {"|", OP_BITWISE_OR, 10, ARG_INTEGER}, - - {0,0,0} + {"<<", 2, OP_LEFTSHIFT, 4, ARG_INTEGER}, + {">>", 2, OP_RIGHTSHIFT, 4, ARG_INTEGER}, + {"<=", 2, OP_LE, 5, ARG_NUMBER}, + {">=", 2, OP_GE, 5, ARG_NUMBER}, + {"==", 2, OP_EQ, 6, ARG_NUMBER}, + {"!=", 2, OP_NE, 6, ARG_NUMBER}, + {"eq", 2, OP_SEQ, 7, ARG_STRING}, + {"ne", 2, OP_SNE, 7, ARG_STRING}, + {"&&", 2, OP_LOGICAL_AND, 11, ARG_INTEGER}, + {"||", 2, OP_LOGICAL_OR, 12, ARG_INTEGER}, + + {"*", 1, OP_MULTIPLY, 2, ARG_NUMBER}, + {"/", 1, OP_DIVIDE, 2, ARG_NUMBER}, + {"%", 1, OP_MODULUS, 2, ARG_INTEGER}, + {"+", 1, OP_ADD, 3, ARG_NUMBER}, + {"-", 1, OP_SUBTRACT, 3, ARG_NUMBER}, + {"<", 1, OP_LT, 5, ARG_NUMBER}, + {">", 1, OP_GT, 5, ARG_NUMBER}, + {"&", 1, OP_BITWISE_AND, 8, ARG_INTEGER}, + {"^", 1, OP_BITWISE_XOR, 9, ARG_INTEGER}, + {"|", 1, OP_BITWISE_OR, 10, ARG_INTEGER}, + + {0,0,0,0,0} }; /* -** The first part of the string (zInput,nInput) contains a number. -** Set *pnVarname to the number of bytes in the numeric string. +** The first part of the string (zInput,nInput) contains an integer. +** Set *pnVarname to the number of bytes in the numeric string. */ -static int thNextNumber( - Th_Interp *interp, - const char *zInput, - int nInput, +static int thNextInteger( + Th_Interp *interp, + const char *zInput, + int nInput, int *pnLiteral ){ int i; + int (*isdigit)(char) = th_isdigit; + char c; + + if( nInput<2) return TH_ERROR; + assert(zInput[0]=='0'); + c = zInput[1]; + if( c>='A' && c<='Z' ) c += 'a' - 'A'; + if( c=='x' ){ + isdigit = th_ishexdig; + }else if( c!='o' && c!='b' ){ + return TH_ERROR; + } + for(i=2; i<nInput; i++){ + c = zInput[i]; + if( !isdigit(c) ){ + break; + } + } + *pnLiteral = i; + return TH_OK; +} + +/* +** The first part of the string (zInput,nInput) contains a number. +** Set *pnVarname to the number of bytes in the numeric string. +*/ +static int thNextNumber( + Th_Interp *interp, + const char *zInput, + int nInput, + int *pnLiteral +){ + int i = 0; int seenDot = 0; - for(i=0; i<nInput; i++){ + for(; i<nInput; i++){ char c = zInput[i]; if( (seenDot || c!='.') && !th_isdigit(c) ) break; if( c=='.' ) seenDot = 1; } *pnLiteral = i; @@ -1817,12 +2018,12 @@ rc = thSubstWord(interp, pExpr->zValue, pExpr->nValue); }else{ int eArgType = 0; /* Actual type of arguments */ /* Argument values */ - int iLeft; - int iRight; + int iLeft = 0; + int iRight = 0; double fLeft; double fRight; /* Left and right arguments as strings */ char *zLeft = 0; int nLeft = 0; @@ -1848,11 +2049,11 @@ if( eArgType==ARG_NUMBER ){ if( (zLeft==0 || TH_OK==Th_ToInt(0, zLeft, nLeft, &iLeft)) && (zRight==0 || TH_OK==Th_ToInt(0, zRight, nRight, &iRight)) ){ eArgType = ARG_INTEGER; - }else if( + }else if( (zLeft && TH_OK!=Th_ToDouble(interp, zLeft, nLeft, &fLeft)) || (zRight && TH_OK!=Th_ToDouble(interp, zRight, nRight, &fRight)) ){ /* A type error. 
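        ** For example, an operand such as "abc" that Th_ToDouble() cannot
        ** convert to a number falls into this case.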
*/ rc = TH_ERROR; @@ -1860,19 +2061,33 @@ }else if( eArgType==ARG_INTEGER ){ rc = Th_ToInt(interp, zLeft, nLeft, &iLeft); if( rc==TH_OK && zRight ){ rc = Th_ToInt(interp, zRight, nRight, &iRight); } - } + } } if( rc==TH_OK && eArgType==ARG_INTEGER ){ int iRes = 0; switch( pExpr->pOp->eOp ) { case OP_MULTIPLY: iRes = iLeft*iRight; break; - case OP_DIVIDE: iRes = iLeft/iRight; break; - case OP_MODULUS: iRes = iLeft%iRight; break; + case OP_DIVIDE: + if( !iRight ){ + Th_ErrorMessage(interp, "Divide by 0:", zLeft, nLeft); + rc = TH_ERROR; + goto finish; + } + iRes = iLeft/iRight; + break; + case OP_MODULUS: + if( !iRight ){ + Th_ErrorMessage(interp, "Modulo by 0:", zLeft, nLeft); + rc = TH_ERROR; + goto finish; + } + iRes = iLeft%iRight; + break; case OP_ADD: iRes = iLeft+iRight; break; case OP_SUBTRACT: iRes = iLeft-iRight; break; case OP_LEFTSHIFT: iRes = iLeft<<iRight; break; case OP_RIGHTSHIFT: iRes = iLeft>>iRight; break; case OP_LT: iRes = iLeft<iRight; break; @@ -1886,26 +2101,36 @@ case OP_BITWISE_OR: iRes = iLeft|iRight; break; case OP_LOGICAL_AND: iRes = iLeft&&iRight; break; case OP_LOGICAL_OR: iRes = iLeft||iRight; break; case OP_UNARY_MINUS: iRes = -iLeft; break; case OP_UNARY_PLUS: iRes = +iLeft; break; + case OP_BITWISE_NOT: iRes = ~iLeft; break; case OP_LOGICAL_NOT: iRes = !iLeft; break; default: assert(!"Internal error"); } Th_SetResultInt(interp, iRes); }else if( rc==TH_OK && eArgType==ARG_NUMBER ){ switch( pExpr->pOp->eOp ) { - case OP_MULTIPLY: Th_SetResultDouble(interp, fLeft*fRight); break; - case OP_DIVIDE: Th_SetResultDouble(interp, fLeft/fRight); break; - case OP_ADD: Th_SetResultDouble(interp, fLeft+fRight); break; - case OP_SUBTRACT: Th_SetResultDouble(interp, fLeft-fRight); break; - case OP_LT: Th_SetResultInt(interp, fLeft<fRight); break; - case OP_GT: Th_SetResultInt(interp, fLeft>fRight); break; - case OP_LE: Th_SetResultInt(interp, fLeft<=fRight); break; - case OP_GE: Th_SetResultInt(interp, fLeft>=fRight); break; - case OP_EQ: Th_SetResultInt(interp, fLeft==fRight); break; - case OP_NE: Th_SetResultInt(interp, fLeft!=fRight); break; + case OP_MULTIPLY: Th_SetResultDouble(interp, fLeft*fRight); break; + case OP_DIVIDE: + if( fRight==0.0 ){ + Th_ErrorMessage(interp, "Divide by 0:", zLeft, nLeft); + rc = TH_ERROR; + goto finish; + } + Th_SetResultDouble(interp, fLeft/fRight); + break; + case OP_ADD: Th_SetResultDouble(interp, fLeft+fRight); break; + case OP_SUBTRACT: Th_SetResultDouble(interp, fLeft-fRight); break; + case OP_LT: Th_SetResultInt(interp, fLeft<fRight); break; + case OP_GT: Th_SetResultInt(interp, fLeft>fRight); break; + case OP_LE: Th_SetResultInt(interp, fLeft<=fRight); break; + case OP_GE: Th_SetResultInt(interp, fLeft>=fRight); break; + case OP_EQ: Th_SetResultInt(interp, fLeft==fRight); break; + case OP_NE: Th_SetResultInt(interp, fLeft!=fRight); break; + case OP_UNARY_MINUS: Th_SetResultDouble(interp, -fLeft); break; + case OP_UNARY_PLUS: Th_SetResultDouble(interp, +fLeft); break; default: assert(!"Internal error"); } }else if( rc==TH_OK ){ int iEqual = 0; assert( eArgType==ARG_STRING ); @@ -1916,10 +2141,12 @@ case OP_SEQ: Th_SetResultInt(interp, iEqual); break; case OP_SNE: Th_SetResultInt(interp, !iEqual); break; default: assert(!"Internal error"); } } + + finish: Th_Free(interp, zLeft); Th_Free(interp, zRight); } @@ -1939,11 +2166,11 @@ #define ISTERM(x) (apToken[x] && (!apToken[x]->pOp || apToken[x]->pLeft)) for(jj=0; jj<nToken; jj++){ if( apToken[jj]->pOp && apToken[jj]->pOp->eOp==OP_OPEN_BRACKET ){ int nNest = 1; - int iLeft = jj; + int 
iLeft = jj; for(jj++; jj<nToken; jj++){ Operator *pOp = apToken[jj]->pOp; if( pOp && pOp->eOp==OP_OPEN_BRACKET ) nNest++; if( pOp && pOp->eOp==OP_CLOSE_BRACKET ) nNest--; @@ -1965,11 +2192,12 @@ } iLeft = 0; for(jj=nToken-1; jj>=0; jj--){ if( apToken[jj] ){ - if( apToken[jj]->pOp && apToken[jj]->pOp->iPrecedence==1 && iLeft>0 ){ + if( apToken[jj]->pOp && apToken[jj]->pOp->iPrecedence==1 + && iLeft>0 && ISTERM(iLeft) ){ apToken[jj]->pLeft = apToken[iLeft]; apToken[jj]->pLeft->pParent = apToken[jj]; apToken[iLeft] = 0; } iLeft = jj; @@ -1980,13 +2208,11 @@ for(jj=0; jj<nToken; jj++){ Expr *pToken = apToken[jj]; if( apToken[jj] ){ if( pToken->pOp && !pToken->pLeft && pToken->pOp->iPrecedence==i ){ int iRight = jj+1; - - iRight = jj+1; - for(iRight=jj+1; !apToken[iRight] && iRight<nToken; iRight++); + for(; !apToken[iRight] && iRight<nToken; iRight++); if( iRight==nToken || iLeft<0 || !ISTERM(iRight) || !ISTERM(iLeft) ){ return TH_ERROR; } pToken->pLeft = apToken[iLeft]; apToken[iLeft] = 0; @@ -2013,18 +2239,19 @@ /* ** Parse a string containing a TH expression to a list of tokens. */ static int exprParse( Th_Interp *interp, /* Interpreter to leave error message in */ - const char *zExpr, /* Pointer to input string */ + const char *zExpr, /* Pointer to input string */ int nExpr, /* Number of bytes at zExpr */ Expr ***papToken, /* OUT: Array of tokens. */ int *pnToken /* OUT: Size of token array */ ){ int i; int rc = TH_OK; + int nNest = 0; int nToken = 0; Expr **apToken = 0; for(i=0; rc==TH_OK && i<nExpr; ){ char c = zExpr[i]; @@ -2033,12 +2260,17 @@ }else{ Expr *pNew = (Expr *)Th_Malloc(interp, sizeof(Expr)); const char *z = &zExpr[i]; switch (c) { - case '0': case '1': case '2': case '3': case '4': - case '5': case '6': case '7': case '8': case '9': + case '0': + if( thNextInteger(interp, z, nExpr-i, &pNew->nValue)==TH_OK ){ + break; + } + /* fall through */ + case '1': case '2': case '3': case '4': case '5': + case '6': case '7': case '8': case '9': thNextNumber(interp, z, nExpr-i, &pNew->nValue); break; case '$': thNextVarname(interp, z, nExpr-i, &pNew->nValue); @@ -2060,20 +2292,40 @@ break; } default: { int j; - for(j=0; aOperator[j].zOp; j++){ - int nOp; - if( aOperator[j].iPrecedence==1 && nToken>0 ){ + const char *zOp; + for(j=0; (zOp=aOperator[j].zOp); j++){ + int nOp = aOperator[j].nOp; + int nRemain = nExpr - i; + int isMatch = 0; + if( nRemain>=nOp && 0==memcmp(zOp, &zExpr[i], nOp) ){ + isMatch = 1; + } + if( isMatch ){ + if( aOperator[j].eOp==OP_CLOSE_BRACKET ){ + nNest--; + }else if( nRemain>nOp ){ + if( aOperator[j].eOp==OP_OPEN_BRACKET ){ + nNest++; + } + }else{ + /* + ** This is not really a match because this operator cannot + ** legally appear at the end of the string. + */ + isMatch = 0; + } + } + if( nToken>0 && aOperator[j].iPrecedence==1 ){ Expr *pPrev = apToken[nToken-1]; if( !pPrev->pOp || pPrev->pOp->eOp==OP_CLOSE_BRACKET ){ continue; } } - nOp = th_strlen((const char *)aOperator[j].zOp); - if( (nExpr-i)>=nOp && 0==memcmp(aOperator[j].zOp, &zExpr[i], nOp) ){ + if( isMatch ){ pNew->pOp = &aOperator[j]; i += nOp; break; } } @@ -2088,11 +2340,11 @@ memcpy(pNew->zValue, z, pNew->nValue); i += pNew->nValue; } if( (nToken%16)==0 ){ /* Grow the apToken array. 
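      ** The array is (re)allocated in chunks of 16 Expr pointers and the
      ** existing entries are copied over below.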
*/ - Expr **apTokenOld = apToken; + Expr **apTokenOld = apToken; apToken = Th_Malloc(interp, sizeof(Expr *)*(nToken+16)); memcpy(apToken, apTokenOld, sizeof(Expr *)*nToken); } /* Put the new token at the end of the apToken array */ @@ -2102,10 +2354,14 @@ Th_Free(interp, pNew); rc = TH_ERROR; } } } + + if( nNest!=0 ){ + rc = TH_ERROR; + } *papToken = apToken; *pnToken = nToken; return rc; } @@ -2113,11 +2369,11 @@ /* ** Evaluate the string (zExpr, nExpr) as a Th expression. Store ** the result in the interpreter interp and return TH_OK if ** successful. If an error occurs, store an error message in ** the interpreter result and return an error code. -*/ +*/ int Th_Expr(Th_Interp *interp, const char *zExpr, int nExpr){ int rc; /* Return Code */ int i; /* Loop counter */ int nToken = 0; @@ -2130,11 +2386,11 @@ /* Parse the expression to a list of tokens. */ rc = exprParse(interp, zExpr, nExpr, &apToken, &nToken); /* If the parsing was successful, create an expression tree from ** the parsed list of tokens. If successful, apToken[0] is set - ** to point to the root of the expression tree. + ** to point to the root of the expression tree. */ if( rc==TH_OK ){ rc = exprMakeTree(interp, apToken, nToken); } @@ -2168,16 +2424,17 @@ /* ** Iterate through all values currently stored in the hash table. Invoke ** the callback function xCallback for each entry. The second argument ** passed to xCallback is a copy of the fourth argument passed to this -** function. +** function. The return value from the callback function xCallback is +** ignored. */ void Th_HashIterate( - Th_Interp *interp, + Th_Interp *interp, Th_Hash *pHash, - void (*xCallback)(Th_HashEntry *pEntry, void *pContext), + int (*xCallback)(Th_HashEntry *pEntry, void *pContext), void *pContext ){ int i; for(i=0; i<TH_HASHSIZE; i++){ Th_HashEntry *pEntry; @@ -2188,14 +2445,15 @@ } } } /* -** Helper function for Th_HashDelete(). +** Helper function for Th_HashDelete(). Always returns non-zero. */ -static void xFreeHashEntry(Th_HashEntry *pEntry, void *pContext){ +static int xFreeHashEntry(Th_HashEntry *pEntry, void *pContext){ Th_Free((Th_Interp *)pContext, (void *)pEntry); + return 1; } /* ** Free a hash-table previously allocated by Th_HashNew(). */ @@ -2205,14 +2463,14 @@ Th_Free(interp, pHash); } } /* -** This function is used to insert or delete hash table items, or to +** This function is used to insert or delete hash table items, or to ** query a hash table for an existing item. ** -** If parameter op is less than zero, then the hash-table element +** If parameter op is less than zero, then the hash-table element ** identified by (zKey, nKey) is removed from the hash-table if it ** exists. NULL is returned. ** ** Otherwise, if the hash-table contains an item with key (zKey, nKey), ** a pointer to the associated Th_HashEntry is returned. If parameter @@ -2219,11 +2477,11 @@ ** op is greater than zero, then a new entry is added if one cannot ** be found. If op is zero, then NULL is returned if the item is ** not already present in the hash-table. */ Th_HashEntry *Th_HashFind( - Th_Interp *interp, + Th_Interp *interp, Th_Hash *pHash, const char *zKey, int nKey, int op /* -ve = delete, 0 = find, +ve = insert */ ){ @@ -2287,11 +2545,12 @@ ** '\f' 0x0C ** '\r' 0x0D ** ** Whitespace characters have the 0x01 flag set. Decimal digits have the ** 0x2 flag set. Single byte printable characters have the 0x4 flag set. -** Alphabet characters have the 0x8 bit set. +** Alphabet characters have the 0x8 bit set. 
Hexadecimal digits have the +** 0x20 flag set. ** ** The special list characters have the 0x10 flag set ** ** { } [ ] \ ; ' " ** @@ -2300,14 +2559,14 @@ */ static unsigned char aCharProp[256] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, /* 0x0. */ 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 0x1. */ 5, 4, 20, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, /* 0x2. */ - 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 4, 20, 4, 4, 4, 4, /* 0x3. */ - 4, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, /* 0x4. */ + 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 4, 20, 4, 4, 4, 4, /* 0x3. */ + 4, 44, 44, 44, 44, 44, 44, 12, 12, 12, 12, 12, 12, 12, 12, 12, /* 0x4. */ 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 20, 20, 20, 4, 4, /* 0x5. */ - 4, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, /* 0x6. */ + 4, 44, 44, 44, 44, 44, 44, 12, 12, 12, 12, 12, 12, 12, 12, 12, /* 0x6. */ 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 20, 4, 20, 4, 4, /* 0x7. */ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 0x8. */ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 0x9. */ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* 0xA. */ @@ -2331,15 +2590,26 @@ return (aCharProp[(unsigned char)c] & 0x11); } int th_isalnum(char c){ return (aCharProp[(unsigned char)c] & 0x0A); } +int th_isalpha(char c){ + return (aCharProp[(unsigned char)c] & 0x08); +} +int th_ishexdig(char c){ + return (aCharProp[(unsigned char)c] & 0x20); +} +int th_isoctdig(char c){ + return ((c|7) == '7'); +} +int th_isbindig(char c){ + return ((c|1) == '1'); +} #ifndef LONGDOUBLE_TYPE # define LONGDOUBLE_TYPE long double #endif -typedef char u8; /* ** Return TRUE if z is a pure numeric string. Return FALSE if the ** string contains any character which is not part of a number. If @@ -2439,33 +2709,58 @@ return z - zBegin; } /* ** Try to convert the string passed as arguments (z, n) to an integer. -** If successful, store the result in *piOut and return TH_OK. +** If successful, store the result in *piOut and return TH_OK. ** -** If the string cannot be converted to an integer, return TH_ERROR. -** If the interp argument is not NULL, leave an error message in the +** If the string cannot be converted to an integer, return TH_ERROR. +** If the interp argument is not NULL, leave an error message in the ** interpreter result too. */ int Th_ToInt(Th_Interp *interp, const char *z, int n, int *piOut){ int i = 0; int iOut = 0; + int base = 10; + int (*isdigit)(char) = th_isdigit; if( n<0 ){ n = th_strlen(z); } - if( n>0 && (z[0]=='-' || z[0]=='+') ){ + if( n>1 && (z[0]=='-' || z[0]=='+') ){ i = 1; + } + if( (n-i)>2 && z[i]=='0' ){ + if( z[i+1]=='x' || z[i+1]=='X' ){ + i += 2; + base = 16; + isdigit = th_ishexdig; + }else if( z[i+1]=='o' || z[i+1]=='O' ){ + i += 2; + base = 8; + isdigit = th_isoctdig; + }else if( z[i+1]=='b' || z[i+1]=='B' ){ + i += 2; + base = 2; + isdigit = th_isbindig; + } } for(; i<n; i++){ - if( !th_isdigit(z[i]) ){ + char c = z[i]; + if( !isdigit(c) ){ Th_ErrorMessage(interp, "expected integer, got: \"", z, n); return TH_ERROR; } - iOut = iOut * 10 + (z[i] - 48); + if( c>='a' ){ + c -= 'a'-10; + }else if( c>='A' ){ + c -= 'A'-10; + }else{ + c -= '0'; + } + iOut = iOut * base + c; } if( n>0 && z[0]=='-' ){ iOut *= -1; } @@ -2474,20 +2769,20 @@ return TH_OK; } /* ** Try to convert the string passed as arguments (z, n) to a double. -** If successful, store the result in *pfOut and return TH_OK. +** If successful, store the result in *pfOut and return TH_OK. ** -** If the string cannot be converted to a double, return TH_ERROR. 
-** If the interp argument is not NULL, leave an error message in the +** If the string cannot be converted to a double, return TH_ERROR. +** If the interp argument is not NULL, leave an error message in the ** interpreter result too. */ int Th_ToDouble( - Th_Interp *interp, - const char *z, - int n, + Th_Interp *interp, + const char *z, + int n, double *pfOut ){ if( !sqlite3IsNumber((const char *)z, 0) ){ Th_ErrorMessage(interp, "expected number, got: \"", z, n); return TH_ERROR; @@ -2509,13 +2804,13 @@ if( iVal<0 ){ isNegative = 1; iVal = iVal * -1; } *(--z) = '\0'; - *(--z) = (char)(48+(iVal%10)); - while( (iVal = (iVal/10))>0 ){ - *(--z) = (char)(48+(iVal%10)); + *(--z) = (char)(48+((unsigned)iVal%10)); + while( (iVal = ((unsigned)iVal/10))>0 ){ + *(--z) = (char)(48+((unsigned)iVal%10)); assert(z>zBuf); } if( isNegative ){ *(--z) = '-'; } @@ -2528,33 +2823,33 @@ ** the double fVal and return TH_OK. */ int Th_SetResultDouble(Th_Interp *interp, double fVal){ int i; /* Iterator variable */ double v = fVal; /* Input value */ - char zBuf[128]; /* Output buffer */ - char *z = zBuf; /* Output cursor */ + char zBuf[128]; /* Output buffer */ + char *z = zBuf; /* Output cursor */ int iDot = 0; /* Digit after which to place decimal point */ int iExp = 0; /* Exponent (NN in eNN) */ - const char *zExp; /* String representation of iExp */ + const char *zExp; /* String representation of iExp */ /* Precision: */ #define INSIGNIFICANT 0.000000000001 #define ROUNDER 0.0000000000005 double insignificant = INSIGNIFICANT; /* If the real value is negative, write a '-' character to the * output and transform v to the corresponding positive number. - */ + */ if( v<0.0 ){ *z++ = '-'; v *= -1.0; } - /* Normalize v to a value between 1.0 and 10.0. Integer + /* Normalize v to a value between 1.0 and 10.0. Integer * variable iExp is set to the exponent. i.e the original * value is (v * 10^iExp) (or the negative thereof). - */ + */ if( v>0.0 ){ while( (v+ROUNDER)>=10.0 ) { iExp++; v *= 0.1; } while( (v+ROUNDER)<1.0 ) { iExp--; v *= 10.0; } } v += ROUNDER; @@ -2605,5 +2900,81 @@ } *z = '\0'; return Th_SetResult(interp, zBuf, -1); } + +/* +** Appends all currently registered command names to the specified list +** and returns TH_OK upon success. Any other return value indicates an +** error. +*/ +int Th_ListAppendCommands( + Th_Interp *interp, /* Interpreter context */ + char **pzList, /* OUT: List of command names */ + int *pnList /* OUT: Number of command names */ +){ + Th_InterpAndList *p = (Th_InterpAndList *)Th_Malloc( + interp, sizeof(Th_InterpAndList) + ); + p->interp = interp; + p->pzList = pzList; + p->pnList = pnList; + Th_HashIterate(interp, interp->paCmd, thListAppendHashKey, p); + Th_Free(interp, p); + return TH_OK; +} + +/* +** Appends all variable names for the current frame to the specified list +** and returns TH_OK upon success. Any other return value indicates an +** error. If the current frame cannot be obtained, TH_ERROR is returned. 
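+**
+** A minimal usage sketch (this mirrors what the [info vars] command does;
+** the local names below are illustrative):
+**
+**   char *zList = 0;
+**   int nList = 0;
+**
+**   if( Th_ListAppendVariables(interp, &zList, &nList)==TH_OK ){
+**     Th_SetResult(interp, zList, nList);
+**   }
+**   if( zList ) Th_Free(interp, zList);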
+*/ +int Th_ListAppendVariables( + Th_Interp *interp, /* Interpreter context */ + char **pzList, /* OUT: List of variable names */ + int *pnList /* OUT: Number of variable names */ +){ + Th_Frame *pFrame = getFrame(interp, 0); + if( pFrame ){ + Th_InterpAndList *p = (Th_InterpAndList *)Th_Malloc( + interp, sizeof(Th_InterpAndList) + ); + p->interp = interp; + p->pzList = pzList; + p->pnList = pnList; + Th_HashIterate(interp, pFrame->paVar, thListAppendHashKey, p); + Th_Free(interp, p); + return TH_OK; + }else{ + return TH_ERROR; + } +} + +/* +** Appends all array element names for the specified array variable to the +** specified list and returns TH_OK upon success. Any other return value +** indicates an error. +*/ +int Th_ListAppendArray( + Th_Interp *interp, /* Interpreter context */ + const char *zVar, /* Pointer to variable name */ + int nVar, /* Number of bytes at nVar */ + char **pzList, /* OUT: List of array element names */ + int *pnList /* OUT: Number of array element names */ +){ + Th_Variable *pValue = thFindValue(interp, zVar, nVar, 0, 1, 1, 0); + if( pValue && !pValue->zData && pValue->pHash ){ + Th_InterpAndList *p = (Th_InterpAndList *)Th_Malloc( + interp, sizeof(Th_InterpAndList) + ); + p->interp = interp; + p->pzList = pzList; + p->pnList = pnList; + Th_HashIterate(interp, pValue->pHash, thListAppendHashKey, p); + Th_Free(interp, p); + }else{ + *pzList = 0; + *pnList = 0; + } + return TH_OK; +} Index: src/th.h ================================================================== --- src/th.h +++ src/th.h @@ -1,86 +1,88 @@ /* This header file defines the external interface to the custom Scripting -** Language (TH) interpreter. TH is very similar to TCL but is not an +** Language (TH) interpreter. TH is very similar to Tcl but is not an ** exact clone. */ /* ** Before creating an interpreter, the application must allocate and ** populate an instance of the following structure. It must remain valid ** for the lifetime of the interpreter. */ +typedef struct Th_Vtab Th_Vtab; struct Th_Vtab { void *(*xMalloc)(unsigned int); void (*xFree)(void *); }; -typedef struct Th_Vtab Th_Vtab; /* ** Opaque handle for interpeter. */ typedef struct Th_Interp Th_Interp; -/* -** Create and delete interpreters. +/* +** Create and delete interpreters. */ Th_Interp * Th_CreateInterp(Th_Vtab *pVtab); void Th_DeleteInterp(Th_Interp *); -/* +/* ** Evaluate an TH program in the stack frame identified by parameter ** iFrame, according to the following rules: ** ** * If iFrame is 0, this means the current frame. ** -** * If iFrame is negative, then the nth frame up the stack, where n is +** * If iFrame is negative, then the nth frame up the stack, where n is ** the absolute value of iFrame. A value of -1 means the calling ** procedure. ** ** * If iFrame is +ve, then the nth frame from the bottom of the stack. ** An iFrame value of 1 means the toplevel (global) frame. */ int Th_Eval(Th_Interp *interp, int iFrame, const char *zProg, int nProg); /* -** Evaluate a TH expression. The result is stored in the +** Evaluate a TH expression. The result is stored in the ** interpreter result. */ int Th_Expr(Th_Interp *interp, const char *, int); -/* +/* ** Access TH variables in the current stack frame. If the variable name -** begins with "::", the lookup is in the top level (global) frame. +** begins with "::", the lookup is in the top level (global) frame. 
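**
** For example (a sketch; as elsewhere in this interface, a length of -1
** means the string is zero-terminated):
**
**   Th_SetVar(interp, "::x", -1, "123", -1);
**   Th_GetVar(interp, "::x", -1);            leaves "123" in the result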
*/ +int Th_ExistsVar(Th_Interp *, const char *, int); +int Th_ExistsArrayVar(Th_Interp *, const char *, int); int Th_GetVar(Th_Interp *, const char *, int); int Th_SetVar(Th_Interp *, const char *, int, const char *, int); int Th_LinkVar(Th_Interp *, const char *, int, int, const char *, int); int Th_UnsetVar(Th_Interp *, const char *, int); typedef int (*Th_CommandProc)(Th_Interp *, void *, int, const char **, int *); -/* -** Register new commands. +/* +** Register new commands. */ int Th_CreateCommand( - Th_Interp *interp, - const char *zName, + Th_Interp *interp, + const char *zName, /* int (*xProc)(Th_Interp *, void *, int, const char **, int *), */ Th_CommandProc xProc, void *pContext, void (*xDel)(Th_Interp *, void *) ); -/* +/* ** Delete or rename commands. */ int Th_RenameCommand(Th_Interp *, const char *, int, const char *, int); -/* -** Push a new stack frame (local variable context) onto the interpreter -** stack, call the function supplied as parameter xCall with the two -** context arguments, +/* +** Push a new stack frame (local variable context) onto the interpreter +** stack, call the function supplied as parameter xCall with the two +** context arguments, ** ** xCall(interp, pContext1, pContext2) ** ** , then pop the frame off of the interpreter stack. The value returned ** by the xCall() function is returned as the result of this function. @@ -92,21 +94,21 @@ int (*xCall)(Th_Interp *, void *pContext1, void *pContext2), void *pContext1, void *pContext2 ); -/* +/* ** Valid return codes for xProc callbacks. */ #define TH_OK 0 #define TH_ERROR 1 #define TH_BREAK 2 #define TH_RETURN 3 #define TH_CONTINUE 4 -/* -** Set and get the interpreter result. +/* +** Set and get the interpreter result. */ int Th_SetResult(Th_Interp *, const char *, int); const char *Th_GetResult(Th_Interp *, int *); char *Th_TakeResult(Th_Interp *, int *); @@ -114,50 +116,70 @@ ** Set an error message as the interpreter result. This also ** sets the global stack-trace variable $::th_stack_trace. */ int Th_ErrorMessage(Th_Interp *, const char *, const char *, int); -/* +/* ** Access the memory management functions associated with the specified ** interpreter. */ void *Th_Malloc(Th_Interp *, int); void Th_Free(Th_Interp *, void *); -/* +/* ** Functions for handling TH lists. */ int Th_ListAppend(Th_Interp *, char **, int *, const char *, int); int Th_SplitList(Th_Interp *, const char *, int, char ***, int **, int *); int Th_StringAppend(Th_Interp *, char **, int *, const char *, int); -/* +/* ** Functions for handling numbers and pointers. */ int Th_ToInt(Th_Interp *, const char *, int, int *); int Th_ToDouble(Th_Interp *, const char *, int, double *); int Th_SetResultInt(Th_Interp *, int); int Th_SetResultDouble(Th_Interp *, double); +/* +** Functions for handling command and variable introspection. +*/ +int Th_ListAppendCommands(Th_Interp *, char **, int *); +int Th_ListAppendVariables(Th_Interp *, char **, int *); +int Th_ListAppendArray(Th_Interp *, const char *, int, char **, int *); + /* ** Drop in replacements for the corresponding standard library functions. */ int th_strlen(const char *); int th_isdigit(char); int th_isspace(char); int th_isalnum(char); +int th_isalpha(char); int th_isspecial(char); +int th_ishexdig(char); +int th_isoctdig(char); +int th_isbindig(char); char *th_strdup(Th_Interp *interp, const char *z, int n); /* ** Interfaces to register the language extensions. 
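**
** A typical embedding creates the interpreter first and then registers
** whichever extensions it needs, e.g. (sketch, where vtab is a Th_Vtab
** supplying the malloc/free callbacks):
**
**   Th_Interp *interp = Th_CreateInterp(&vtab);
**   th_register_language(interp);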
*/ int th_register_language(Th_Interp *interp); /* th_lang.c */ int th_register_sqlite(Th_Interp *interp); /* th_sqlite.c */ int th_register_vfs(Th_Interp *interp); /* th_vfs.c */ int th_register_testvfs(Th_Interp *interp); /* th_testvfs.c */ + +#ifdef FOSSIL_ENABLE_TCL +/* +** Interfaces to the full Tcl core library from "th_tcl.c". +*/ +int th_register_tcl(Th_Interp *, void *); +int unloadTcl(Th_Interp *, void *); +int evaluateTclWithEvents(Th_Interp *,void *,const char *,int,int,int,int); +#endif /* ** General purpose hash table from th_lang.c. */ typedef struct Th_Hash Th_Hash; @@ -168,15 +190,15 @@ int nKey; Th_HashEntry *pNext; /* Internal use only */ }; Th_Hash *Th_HashNew(Th_Interp *); void Th_HashDelete(Th_Interp *, Th_Hash *); -void Th_HashIterate(Th_Interp*,Th_Hash*,void (*x)(Th_HashEntry*, void*),void*); +void Th_HashIterate(Th_Interp*,Th_Hash*,int (*x)(Th_HashEntry*, void*),void*); Th_HashEntry *Th_HashFind(Th_Interp*, Th_Hash*, const char*, int, int); /* ** Useful functions from th_lang.c. */ int Th_WrongNumArgs(Th_Interp *interp, const char *zMsg); -typedef struct Th_SubCommand {char *zName; Th_CommandProc xProc;} Th_SubCommand; -int Th_CallSubCommand(Th_Interp*,void*,int,const char**,int*,Th_SubCommand*); +typedef struct Th_SubCommand {const char *zName; Th_CommandProc xProc;} Th_SubCommand; +int Th_CallSubCommand(Th_Interp*,void*,int,const char**,int*,const Th_SubCommand*); Index: src/th_lang.c ================================================================== --- src/th_lang.c +++ src/th_lang.c @@ -1,16 +1,17 @@ /* -** This file contains the implementation of all of the TH language -** built-in commands. +** This file contains the implementation of all of the TH language +** built-in commands. ** -** All built-in commands are implemented using the public interface -** declared in th.h, so this file serves as both a part of the language +** All built-in commands are implemented using the public interface +** declared in th.h, so this file serves as both a part of the language ** implementation and an example of how to extend the language with ** new commands. */ +#include "config.h" #include "th.h" #include <string.h> #include <assert.h> int Th_WrongNumArgs(Th_Interp *interp, const char *zMsg){ @@ -17,19 +18,19 @@ Th_ErrorMessage(interp, "wrong # args: should be \"", zMsg, -1); return TH_ERROR; } /* -** Syntax: +** Syntax: ** ** catch script ?varname? */ static int catch_command( - Th_Interp *interp, - void *ctx, - int argc, - const char **argv, + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, int *argl ){ int rc; if( argc!=2 && argc!=3 ){ @@ -46,19 +47,19 @@ Th_SetResultInt(interp, rc); return TH_OK; } /* -** TH Syntax: +** TH Syntax: ** ** if expr1 body1 ?elseif expr2 body2? ? ?else? bodyN? */ static int if_command( - Th_Interp *interp, - void *ctx, - int argc, - const char **argv, + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, int *argl ){ int rc = TH_OK; int iCond; /* Result of evaluating expression */ @@ -93,19 +94,19 @@ wrong_args: return Th_WrongNumArgs(interp, "if ..."); } /* -** TH Syntax: +** TH Syntax: ** ** expr expr */ static int expr_command( - Th_Interp *interp, - void *ctx, - int argc, - const char **argv, + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, int *argl ){ if( argc!=2 ){ return Th_WrongNumArgs(interp, "expr expression"); } @@ -112,11 +113,11 @@ return Th_Expr(interp, argv[1], argl[1]); } /* -** Evaluate the th1 script (zBody, nBody) in the local stack frame. 
+** Evaluate the th1 script (zBody, nBody) in the local stack frame. ** Return the result of the evaluation, except if the result ** is TH_CONTINUE, return TH_OK instead. */ static int eval_loopbody(Th_Interp *interp, const char *zBody, int nBody){ int rc = Th_Eval(interp, 0, zBody, nBody); @@ -125,19 +126,19 @@ } return rc; } /* -** TH Syntax: +** TH Syntax: ** ** for init condition incr script */ static int for_command( - Th_Interp *interp, - void *ctx, - int argc, - const char **argv, + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, int *argl ){ int rc; int iCond; @@ -146,11 +147,11 @@ } /* Evaluate the 'init' script */ rc = Th_Eval(interp, 0, argv[1], -1); - while( rc==TH_OK + while( rc==TH_OK && TH_OK==(rc = Th_Expr(interp, argv[2], -1)) && TH_OK==(rc = Th_ToInt(interp, Th_GetResult(interp, 0), -1, &iCond)) && iCond && TH_OK==(rc = eval_loopbody(interp, argv[4], argl[4])) ){ @@ -160,45 +161,45 @@ if( rc==TH_BREAK ) rc = TH_OK; return rc; } /* -** TH Syntax: +** TH Syntax: ** ** list ?arg1 ?arg2? ...? */ static int list_command( - Th_Interp *interp, - void *ctx, - int argc, - const char **argv, + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, int *argl ){ char *zList = 0; int nList = 0; int i; for(i=1; i<argc; i++){ Th_ListAppend(interp, &zList, &nList, argv[i], argl[i]); } - + Th_SetResult(interp, zList, nList); Th_Free(interp, zList); return TH_OK; } /* -** TH Syntax: +** TH Syntax: ** ** lindex list index */ static int lindex_command( - Th_Interp *interp, - void *ctx, - int argc, - const char **argv, + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, int *argl ){ int iElem; int rc; @@ -226,19 +227,19 @@ return rc; } /* -** TH Syntax: +** TH Syntax: ** ** llength list */ static int llength_command( - Th_Interp *interp, - void *ctx, - int argc, - const char **argv, + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, int *argl ){ int nElem; int rc; @@ -253,19 +254,56 @@ return rc; } /* -** TH Syntax: +** TH Syntax: +** +** lsearch list string +*/ +static int lsearch_command( + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, + int *argl +){ + int rc; + char **azElem; + int *anElem; + int nCount; + int i; + + if( argc!=3 ){ + return Th_WrongNumArgs(interp, "lsearch list string"); + } + + rc = Th_SplitList(interp, argv[1], argl[1], &azElem, &anElem, &nCount); + if( rc==TH_OK ){ + Th_SetResultInt(interp, -1); + for(i=0; i<nCount; i++){ + if( anElem[i]==argl[2] && 0==memcmp(azElem[i], argv[2], argl[2]) ){ + Th_SetResultInt(interp, i); + break; + } + } + Th_Free(interp, azElem); + } + + return rc; +} + +/* +** TH Syntax: ** ** set varname ?value? */ static int set_command( - Th_Interp *interp, - void *ctx, - int argc, - const char **argv, + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, int *argl ){ if( argc!=2 && argc!=3 ){ return Th_WrongNumArgs(interp, "set varname ?value?"); } @@ -276,30 +314,30 @@ return Th_GetVar(interp, argv[1], argl[1]); } /* ** When a new command is created using the built-in [proc] command, an -** instance of the following structure is allocated and populated. A -** pointer to the structure is passed as the context (second) argument +** instance of the following structure is allocated and populated. A +** pointer to the structure is passed as the context (second) argument ** to function proc_call1() when the new command is executed. 
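**
** For example (illustrative only), the script
**
**   proc greet {name} { return "hello $name" }
**
** results in a ProcDefn with nParam==1, azParam[0]=="name" and zProgram set
** to the body, registered under the command name "greet" with proc_call1()
** as the callback and proc_del() as the destructor.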
*/ typedef struct ProcDefn ProcDefn; struct ProcDefn { int nParam; /* Number of formal (non "args") parameters */ - char **azParam; /* Parameter names */ + char **azParam; /* Parameter names */ int *anParam; /* Lengths of parameter names */ - char **azDefault; /* Default values */ + char **azDefault; /* Default values */ int *anDefault; /* Lengths of default values */ int hasArgs; /* True if there is an "args" parameter */ - char *zProgram; /* Body of proc */ + char *zProgram; /* Body of proc */ int nProgram; /* Number of bytes at zProgram */ - char *zUsage; /* Usage message */ + char *zUsage; /* Usage message */ int nUsage; /* Number of bytes at zUsage */ }; -/* This structure is used to temporarily store arguments passed to an -** invocation of a command created using [proc]. A pointer to an +/* This structure is used to temporarily store arguments passed to an +** invocation of a command created using [proc]. A pointer to an ** instance is passed as the second argument to the proc_call2() function. */ typedef struct ProcArgs ProcArgs; struct ProcArgs { int argc; @@ -322,11 +360,11 @@ ProcArgs *pArgs = (ProcArgs *)pContext2; /* Check if there are the right number of arguments. If there are ** not, generate a usage message for the command. */ - if( (pArgs->argc>(p->nParam+1) && !p->hasArgs) + if( (pArgs->argc>(p->nParam+1) && !p->hasArgs) || (pArgs->argc<=(p->nParam) && !p->azDefault[pArgs->argc-1]) ){ char *zUsage = 0; int nUsage = 0; Th_StringAppend(interp, &zUsage, &nUsage, pArgs->argv[0], pArgs->argl[0]); @@ -357,10 +395,13 @@ int nArgs = 0; for(i=p->nParam+1; i<pArgs->argc; i++){ Th_ListAppend(interp, &zArgs, &nArgs, pArgs->argv[i], pArgs->argl[i]); } Th_SetVar(interp, (const char *)"args", -1, zArgs, nArgs); + if(zArgs){ + Th_Free(interp, zArgs); + } } Th_SetResult(interp, 0, 0); return Th_Eval(interp, 0, p->zProgram, p->nProgram); } @@ -370,12 +411,12 @@ ** created using the [proc] command. The second argument, pContext, ** is a pointer to the associated ProcDefn structure. */ static int proc_call1( Th_Interp *interp, - void *pContext, - int argc, + void *pContext, + int argc, const char **argv, int *argl ){ int rc; @@ -396,36 +437,36 @@ } return rc; } /* -** This function is registered as the delete callback for all commands -** created using the built-in [proc] command. It is called automatically -** when a command created using [proc] is deleted. +** This function is registered as the delete callback for all commands +** created using the built-in [proc] command. It is called automatically +** when a command created using [proc] is deleted. ** ** It frees the ProcDefn structure allocated when the command was created. -*/ +*/ static void proc_del(Th_Interp *interp, void *pContext){ ProcDefn *p = (ProcDefn *)pContext; Th_Free(interp, (void *)p->zUsage); Th_Free(interp, (void *)p); } /* -** TH Syntax: +** TH Syntax: ** ** proc name arglist code */ static int proc_command( - Th_Interp *interp, - void *ctx, + Th_Interp *interp, + void *ctx, int argc, - const char **argv, + const char **argv, int *argl ){ int rc; - char *zName; + const char *zName; ProcDefn *p; int nByte; int i; char *zSpace; @@ -432,11 +473,11 @@ char **azParam; int *anParam; int nParam; - char *zUsage = 0; /* Build up a usage message here */ + char *zUsage = 0; /* Build up a usage message here */ int nUsage = 0; /* Number of bytes at zUsage */ if( argc!=4 ){ return Th_WrongNumArgs(interp, "proc name arglist code"); } @@ -444,23 +485,25 @@ return TH_ERROR; } /* Allocate the new ProcDefn structure. 
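  ** The parameter-name and default-value arrays, the copy of the body and
  ** the copies of the parameter names are all carved out of this single
  ** allocation; see the pointer arithmetic below.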
*/ nByte = sizeof(ProcDefn) + /* ProcDefn structure */ - (sizeof(char *) + sizeof(int)) * nParam + /* azParam, anParam */ - (sizeof(char *) + sizeof(int)) * nParam + /* azDefault, anDefault */ + (sizeof(char *) + sizeof(int)) * nParam + /* azParam, anParam */ + (sizeof(char *) + sizeof(int)) * nParam + /* azDefault, anDefault */ argl[3] + /* zProgram */ - argl[2]; /* Space for copies of parameter names and default values */ + argl[2]; /* Space for copies of parameter names and default values */ p = (ProcDefn *)Th_Malloc(interp, nByte); /* If the last parameter in the parameter list is "args", then set the ** ProcDefn.hasArgs flag. The "args" parameter does not require an ** entry in the ProcDefn.azParam[] or ProcDefn.azDefault[] arrays. */ - if( anParam[nParam-1]==4 && 0==memcmp(azParam[nParam-1], "args", 4) ){ - p->hasArgs = 1; - nParam--; + if( nParam>0 ){ + if( anParam[nParam-1]==4 && 0==memcmp(azParam[nParam-1], "args", 4) ){ + p->hasArgs = 1; + nParam--; + } } p->nParam = nParam; p->azParam = (char **)&p[1]; p->anParam = (int *)&p->azParam[nParam]; @@ -468,11 +511,11 @@ p->anDefault = (int *)&p->azDefault[nParam]; p->zProgram = (char *)&p->anDefault[nParam]; memcpy(p->zProgram, argv[3], argl[3]); p->nProgram = argl[3]; zSpace = &p->zProgram[p->nProgram]; - + for(i=0; i<nParam; i++){ char **az; int *an; int n; if( Th_SplitList(interp, azParam[i], anParam[i], &az, &an, &n) ){ @@ -517,11 +560,11 @@ } p->zUsage = zUsage; p->nUsage = nUsage; /* Register the new command with the th1 interpreter. */ - zName = (char *)argv[1]; + zName = argv[1]; rc = Th_CreateCommand(interp, zName, proc_call1, (void *)p, proc_del); if( rc==TH_OK ){ Th_SetResult(interp, 0, 0); } @@ -533,61 +576,61 @@ Th_Free(interp, zUsage); return TH_ERROR; } /* -** TH Syntax: +** TH Syntax: ** ** rename oldcmd newcmd */ static int rename_command( - Th_Interp *interp, - void *ctx, + Th_Interp *interp, + void *ctx, int argc, - const char **argv, + const char **argv, int *argl ){ if( argc!=3 ){ return Th_WrongNumArgs(interp, "rename oldcmd newcmd"); } return Th_RenameCommand(interp, argv[1], argl[1], argv[2], argl[2]); } /* -** TH Syntax: +** TH Syntax: ** ** break ?value...? ** continue ?value...? ** ok ?value...? ** error ?value...? */ static int simple_command( - Th_Interp *interp, - void *ctx, - int argc, - const char **argv, + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, int *argl ){ if( argc!=1 && argc!=2 ){ return Th_WrongNumArgs(interp, "return ?value?"); } if( argc==2 ){ Th_SetResult(interp, argv[1], argl[1]); } - return (int)ctx; + return FOSSIL_PTR_TO_INT(ctx); } /* -** TH Syntax: +** TH Syntax: ** ** return ?-code code? ?value? */ static int return_command( - Th_Interp *interp, - void *ctx, - int argc, - const char **argv, + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, int *argl ){ int iCode = TH_RETURN; if( argc<1 || argc>4 ){ return Th_WrongNumArgs(interp, "return ?-code code? 
?value?"); @@ -634,11 +677,11 @@ iRes = nLeft-nRight; } if( iRes<0 ) iRes = -1; if( iRes>0 ) iRes = 1; - + return Th_SetResultInt(interp, iRes); } /* ** TH Syntax: @@ -646,33 +689,34 @@ ** string first NEEDLE HAYSTACK */ static int string_first_command( Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl ){ - const char *zNeedle; int nNeedle; - const char *zHaystack; int nHaystack; - int i; int iRes = -1; if( argc!=4 ){ return Th_WrongNumArgs(interp, "string first needle haystack"); } - zNeedle = argv[2]; nNeedle = argl[2]; - zHaystack = argv[3]; nHaystack = argl[3]; - for(i=0; i<(nHaystack-nNeedle); i++){ - if( 0==memcmp(zNeedle, &zHaystack[i], nNeedle) ){ - iRes = i; - break; + if( nNeedle && nHaystack && nNeedle<=nHaystack ){ + const char *zNeedle = argv[2]; + const char *zHaystack = argv[3]; + int i; + + for(i=0; i<=(nHaystack-nNeedle); i++){ + if( 0==memcmp(zNeedle, &zHaystack[i], nNeedle) ){ + iRes = i; + break; + } } } - + return Th_SetResultInt(interp, iRes); } /* ** TH Syntax: @@ -680,27 +724,46 @@ ** string is CLASS STRING */ static int string_is_command( Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl ){ - int i; - int iRes = 1; if( argc!=4 ){ return Th_WrongNumArgs(interp, "string is class string"); } - if( argl[2]!=5 || 0!=memcmp(argv[2], "alnum", 5) ){ - Th_ErrorMessage(interp, "Expected alnum, got: ", argv[2], argl[2]); + if( argl[2]==5 && 0==memcmp(argv[2], "alnum", 5) ){ + int i; + int iRes = 1; + + for(i=0; i<argl[3]; i++){ + if( !th_isalnum(argv[3][i]) ){ + iRes = 0; + } + } + + return Th_SetResultInt(interp, iRes); + }else if( argl[2]==6 && 0==memcmp(argv[2], "double", 6) ){ + double fVal; + if( Th_ToDouble(interp, argv[3], argl[3], &fVal)==TH_OK ){ + return Th_SetResultInt(interp, 1); + } + return Th_SetResultInt(interp, 0); + }else if( argl[2]==7 && 0==memcmp(argv[2], "integer", 7) ){ + int iVal; + if( Th_ToInt(interp, argv[3], argl[3], &iVal)==TH_OK ){ + return Th_SetResultInt(interp, 1); + } + return Th_SetResultInt(interp, 0); + }else if( argl[2]==4 && 0==memcmp(argv[2], "list", 4) ){ + if( Th_SplitList(interp, argv[3], argl[3], 0, 0, 0)==TH_OK ){ + return Th_SetResultInt(interp, 1); + } + return Th_SetResultInt(interp, 0); + }else{ + Th_ErrorMessage(interp, + "Expected alnum, double, integer, or list, got:", argv[2], argl[2]); return TH_ERROR; } - - for(i=0; i<argl[3]; i++){ - if( !th_isalnum(argv[3][i]) ){ - iRes = 0; - } - } - - return Th_SetResultInt(interp, iRes); } /* ** TH Syntax: ** @@ -707,33 +770,34 @@ ** string last NEEDLE HAYSTACK */ static int string_last_command( Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl ){ - const char *zNeedle; int nNeedle; - const char *zHaystack; int nHaystack; - int i; int iRes = -1; if( argc!=4 ){ - return Th_WrongNumArgs(interp, "string first needle haystack"); + return Th_WrongNumArgs(interp, "string last needle haystack"); } - zNeedle = argv[2]; nNeedle = argl[2]; - zHaystack = argv[3]; nHaystack = argl[3]; - for(i=nHaystack-nNeedle-1; i>=0; i--){ - if( 0==memcmp(zNeedle, &zHaystack[i], nNeedle) ){ - iRes = i; - break; + if( nNeedle && nHaystack && nNeedle<=nHaystack ){ + const char *zNeedle = argv[2]; + const char *zHaystack = argv[3]; + int i; + + for(i=nHaystack-nNeedle; i>=0; i--){ + if( 0==memcmp(zNeedle, &zHaystack[i], nNeedle) ){ + iRes = i; + break; + } } } - + return Th_SetResultInt(interp, iRes); } /* ** TH Syntax: @@ -814,32 +878,150 @@ } /* ** TH Syntax: ** -** info exists VAR +** string trim STRING +** string trimleft STRING +** string trimright 
STRING +*/ +static int string_trim_command( + Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl +){ + int n; + const char *z; + + if( argc!=3 ){ + return Th_WrongNumArgs(interp, "string trim string"); + } + z = argv[2]; + n = argl[2]; + if( argl[1]<5 || argv[1][4]=='l' ){ + while( n && th_isspace(z[0]) ){ z++; n--; } + } + if( argl[1]<5 || argv[1][4]=='r' ){ + while( n && th_isspace(z[n-1]) ){ n--; } + } + Th_SetResult(interp, z, n); + return TH_OK; +} + +/* +** TH Syntax: +** +** info exists VARNAME */ static int info_exists_command( Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl ){ int rc; if( argc!=3 ){ return Th_WrongNumArgs(interp, "info exists var"); } - rc = Th_GetVar(interp, argv[2], argl[2]); - Th_SetResultInt(interp, rc?0:1); + rc = Th_ExistsVar(interp, argv[2], argl[2]); + Th_SetResultInt(interp, rc); + return TH_OK; +} + +/* +** TH Syntax: +** +** info commands +*/ +static int info_commands_command( + Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl +){ + int rc; + char *zElem = 0; + int nElem = 0; + + if( argc!=2 ){ + return Th_WrongNumArgs(interp, "info commands"); + } + rc = Th_ListAppendCommands(interp, &zElem, &nElem); + if( rc!=TH_OK ){ + return rc; + } + Th_SetResult(interp, zElem, nElem); + if( zElem ) Th_Free(interp, zElem); + return TH_OK; +} + +/* +** TH Syntax: +** +** info vars +*/ +static int info_vars_command( + Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl +){ + int rc; + char *zElem = 0; + int nElem = 0; + + if( argc!=2 ){ + return Th_WrongNumArgs(interp, "info vars"); + } + rc = Th_ListAppendVariables(interp, &zElem, &nElem); + if( rc!=TH_OK ){ + return rc; + } + Th_SetResult(interp, zElem, nElem); + if( zElem ) Th_Free(interp, zElem); + return TH_OK; +} + +/* +** TH Syntax: +** +** array exists VARNAME +*/ +static int array_exists_command( + Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl +){ + int rc; + + if( argc!=3 ){ + return Th_WrongNumArgs(interp, "array exists var"); + } + rc = Th_ExistsArrayVar(interp, argv[2], argl[2]); + Th_SetResultInt(interp, rc); + return TH_OK; +} + +/* +** TH Syntax: +** +** array names VARNAME +*/ +static int array_names_command( + Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl +){ + int rc; + char *zElem = 0; + int nElem = 0; + + if( argc!=3 ){ + return Th_WrongNumArgs(interp, "array names varname"); + } + rc = Th_ListAppendArray(interp, argv[2], argl[2], &zElem, &nElem); + if( rc!=TH_OK ){ + return rc; + } + Th_SetResult(interp, zElem, nElem); + if( zElem ) Th_Free(interp, zElem); return TH_OK; } /* ** TH Syntax: ** -** unset VAR +** unset VARNAME */ static int unset_command( - Th_Interp *interp, + Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl ){ @@ -848,26 +1030,31 @@ } return Th_UnsetVar(interp, argv[1], argl[1]); } int Th_CallSubCommand( - Th_Interp *interp, + Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl, - Th_SubCommand *aSub + const Th_SubCommand *aSub ){ - int i; - for(i=0; aSub[i].zName; i++){ - char *zName = (char *)aSub[i].zName; - if( th_strlen(zName)==argl[1] && 0==memcmp(zName, argv[1], argl[1]) ){ - return aSub[i].xProc(interp, ctx, argc, argv, argl); + if( argc>1 ){ + int i; + for(i=0; aSub[i].zName; i++){ + const char *zName = aSub[i].zName; + if( th_strlen(zName)==argl[1] && 0==memcmp(zName, argv[1], argl[1]) ){ + return aSub[i].xProc(interp, ctx, argc, argv, argl); + } } } - - Th_ErrorMessage(interp, "Expected sub-command, got:", 
argv[1], argl[1]); + if(argc<2){ + Th_ErrorMessage(interp, "Expected sub-command for", argv[0], argl[0]); + }else{ + Th_ErrorMessage(interp, "Expected sub-command, got:", argv[1], argl[1]); + } return TH_ERROR; } /* ** TH Syntax: @@ -879,59 +1066,87 @@ ** string length STRING ** string range STRING FIRST LAST ** string repeat STRING COUNT */ static int string_command( - Th_Interp *interp, + Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl ){ - Th_SubCommand aSub[] = { + static const Th_SubCommand aSub[] = { { "compare", string_compare_command }, { "first", string_first_command }, { "is", string_is_command }, { "last", string_last_command }, { "length", string_length_command }, { "range", string_range_command }, { "repeat", string_repeat_command }, + { "trim", string_trim_command }, + { "trimleft", string_trim_command }, + { "trimright", string_trim_command }, + { 0, 0 } + }; + return Th_CallSubCommand(interp, ctx, argc, argv, argl, aSub); +} + +/* +** TH Syntax: +** +** info commands +** info exists VARNAME +** info vars +*/ +static int info_command( + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, + int *argl +){ + static const Th_SubCommand aSub[] = { + { "commands", info_commands_command }, + { "exists", info_exists_command }, + { "vars", info_vars_command }, { 0, 0 } }; return Th_CallSubCommand(interp, ctx, argc, argv, argl, aSub); } /* ** TH Syntax: ** -** info exists VARNAME +** array exists VARNAME +** array names VARNAME */ -static int info_command( - Th_Interp *interp, +static int array_command( + Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl ){ - Th_SubCommand aSub[] = { - { "exists", info_exists_command }, + static const Th_SubCommand aSub[] = { + { "exists", array_exists_command }, + { "names", array_names_command }, { 0, 0 } }; return Th_CallSubCommand(interp, ctx, argc, argv, argl, aSub); } /* -** Convert the script level frame specification (used by the commands -** [uplevel] and [upvar]) in (zFrame, nFrame) to an integer frame as +** Convert the script level frame specification (used by the commands +** [uplevel] and [upvar]) in (zFrame, nFrame) to an integer frame as ** used by Th_LinkVar() and Th_Eval(). If successful, write the integer ** frame level to *piFrame and return TH_OK. Otherwise, return TH_ERROR ** and leave an error message in the interpreter result. */ static int thToFrame( - Th_Interp *interp, - const char *zFrame, - int nFrame, + Th_Interp *interp, + const char *zFrame, + int nFrame, int *piFrame ){ int iFrame; if( th_isdigit(zFrame[0]) ){ int rc = Th_ToInt(interp, zFrame, nFrame, &iFrame); @@ -952,11 +1167,11 @@ ** TH Syntax: ** ** uplevel ?LEVEL? SCRIPT */ static int uplevel_command( - Th_Interp *interp, + Th_Interp *interp, void *ctx, int argc, const char **argv, int *argl ){ @@ -970,19 +1185,19 @@ } return Th_Eval(interp, iFrame, argv[argc-1], -1); } /* -** TH Syntax: +** TH Syntax: ** ** upvar ?FRAME? OTHERVAR MYVAR ?OTHERVAR MYVAR ...? 
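**
** For example, inside a [proc] body, [upvar x myX] links the local variable
** "myX" to the variable "x" in the calling frame (FRAME defaults to -1,
** i.e. the caller), so that [set myX 5] also updates the caller's $x.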
*/ static int upvar_command( - Th_Interp *interp, - void *ctx, - int argc, - const char **argv, + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, int *argl ){ int iVar = 1; int iFrame = -1; int rc = TH_OK; @@ -990,32 +1205,32 @@ if( TH_OK==thToFrame(0, argv[1], argl[1], &iFrame) ){ iVar++; } if( argc==iVar || (argc-iVar)%2 ){ - return Th_WrongNumArgs(interp, + return Th_WrongNumArgs(interp, "upvar frame othervar myvar ?othervar myvar...?"); } for(i=iVar; rc==TH_OK && i<argc; i=i+2){ rc = Th_LinkVar(interp, argv[i+1], argl[i+1], iFrame, argv[i], argl[i]); } return rc; } /* -** TH Syntax: +** TH Syntax: ** ** breakpoint ARGS ** ** This command does nothing at all. Its purpose in life is to serve ** as a point for setting breakpoints in a debugger. */ static int breakpoint_command( - Th_Interp *interp, - void *ctx, - int argc, - const char **argv, + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, int *argl ){ int cnt = 0; cnt++; return TH_OK; @@ -1030,19 +1245,21 @@ struct _Command { const char *zName; Th_CommandProc xProc; void *pContext; } aCommand[] = { + {"array", array_command, 0}, {"catch", catch_command, 0}, {"expr", expr_command, 0}, {"for", for_command, 0}, {"if", if_command, 0}, {"info", info_command, 0}, {"lindex", lindex_command, 0}, {"list", list_command, 0}, {"llength", llength_command, 0}, - {"proc", proc_command, 0}, + {"lsearch", lsearch_command, 0}, + {"proc", proc_command, 0}, {"rename", rename_command, 0}, {"set", set_command, 0}, {"string", string_command, 0}, {"unset", unset_command, 0}, {"uplevel", uplevel_command, 0}, @@ -1049,21 +1266,23 @@ {"upvar", upvar_command, 0}, {"breakpoint", breakpoint_command, 0}, {"return", return_command, 0}, - {"break", simple_command, (void *)TH_BREAK}, - {"continue", simple_command, (void *)TH_CONTINUE}, - {"error", simple_command, (void *)TH_ERROR}, + {"break", simple_command, (void *)TH_BREAK}, + {"continue", simple_command, (void *)TH_CONTINUE}, + {"error", simple_command, (void *)TH_ERROR}, - {0, 0} + {0, 0, 0} }; - int i; + size_t i; /* Add the language commands. */ for(i=0; i<(sizeof(aCommand)/sizeof(aCommand[0])); i++){ - void *ctx = aCommand[i].pContext; + void *ctx; + if ( !aCommand[i].zName || !aCommand[i].xProc ) continue; + ctx = aCommand[i].pContext; Th_CreateCommand(interp, aCommand[i].zName, aCommand[i].xProc, ctx, 0); } return TH_OK; } Index: src/th_main.c ================================================================== --- src/th_main.c +++ src/th_main.c @@ -18,10 +18,57 @@ ** This file contains an interface between the TH scripting language ** (an independent project) and fossil. */ #include "config.h" #include "th_main.h" +#include "sqlite3.h" + +#if INTERFACE +/* +** Flag parameters to the Th_FossilInit() routine used to control the +** interpreter creation and initialization process. +*/ +#define TH_INIT_NONE ((u32)0x00000000) /* No flags. */ +#define TH_INIT_NEED_CONFIG ((u32)0x00000001) /* Open configuration first? */ +#define TH_INIT_FORCE_TCL ((u32)0x00000002) /* Force Tcl to be enabled? */ +#define TH_INIT_FORCE_RESET ((u32)0x00000004) /* Force TH1 commands re-added? */ +#define TH_INIT_FORCE_SETUP ((u32)0x00000008) /* Force eval of setup script? */ +#define TH_INIT_MASK ((u32)0x0000000F) /* All possible init flags. */ + +/* +** Useful and/or "well-known" combinations of flag values. +*/ +#define TH_INIT_DEFAULT (TH_INIT_NONE) /* Default flags. 
*/ +#define TH_INIT_HOOK (TH_INIT_NEED_CONFIG | TH_INIT_FORCE_SETUP) +#define TH_INIT_FORBID_MASK (TH_INIT_FORCE_TCL) /* Illegal from a script. */ +#endif + +/* +** Flags set by functions in this file to keep track of integration state +** information. These flags should not be used outside of this file. +*/ +#define TH_STATE_CONFIG ((u32)0x00000010) /* We opened the config. */ +#define TH_STATE_REPOSITORY ((u32)0x00000020) /* We opened the repository. */ +#define TH_STATE_MASK ((u32)0x00000030) /* All possible state flags. */ + +#ifdef FOSSIL_ENABLE_TH1_HOOKS +/* +** These are the "well-known" TH1 error messages that occur when no hook is +** registered to be called prior to executing a command or processing a web +** page, respectively. If one of these errors is seen, it will not be sent +** or displayed to the remote user or local interactive user, respectively. +*/ +#define NO_COMMAND_HOOK_ERROR "no such command: command_hook" +#define NO_WEBPAGE_HOOK_ERROR "no such command: webpage_hook" +#endif + +/* +** These macros are used within this file to detect if the repository and +** configuration ("user") database are currently open. +*/ +#define Th_IsRepositoryOpen() (g.repositoryOpen) +#define Th_IsConfigOpen() (g.zConfigDbName!=0) /* ** Global variable counting the number of outstanding calls to malloc() ** made by the th1 implementation. This is used to catch memory leaks ** in the interpreter. Obviously, it also means th1 is not threadsafe. @@ -30,11 +77,11 @@ /* ** Implementations of malloc() and free() to pass to the interpreter. */ static void *xMalloc(unsigned int n){ - void *p = malloc(n); + void *p = fossil_malloc(n); if( p ){ nOutstandingMalloc++; } return p; } @@ -43,10 +90,17 @@ nOutstandingMalloc--; } free(p); } static Th_Vtab vtab = { xMalloc, xFree }; + +/* +** Returns the number of outstanding TH1 memory allocations. +*/ +int Th_GetOutstandingMalloc(){ + return nOutstandingMalloc; +} /* ** Generate a TH1 trace message if debugging is enabled. */ void Th_Trace(const char *zFormat, ...){ @@ -54,37 +108,212 @@ va_start(ap, zFormat); blob_vappendf(&g.thLog, zFormat, ap); va_end(ap); } +/* +** Forces input and output to be done via the CGI subsystem. +*/ +void Th_ForceCgi(int fullHttpReply){ + g.httpOut = stdout; + g.httpIn = stdin; + fossil_binary_mode(g.httpOut); + fossil_binary_mode(g.httpIn); + g.cgiOutput = 1; + g.fullHttpReply = fullHttpReply; +} + +/* +** Checks if the TH1 trace log needs to be enabled. If so, prepares +** it for use. +*/ +void Th_InitTraceLog(){ + g.thTrace = find_option("th-trace", 0, 0)!=0; + if( g.thTrace ){ + blob_zero(&g.thLog); + } +} + +/* +** Prints the entire contents of the TH1 trace log to the standard +** output channel. 
+*/
+void Th_PrintTraceLog(){
+  if( g.thTrace ){
+    fossil_print("\n------------------ BEGIN TRACE LOG ------------------\n");
+    fossil_print("%s", blob_str(&g.thLog));
+    fossil_print("\n------------------- END TRACE LOG -------------------\n");
+  }
+}
+
+/*
+** - adapted from ls_cmd_rev in checkin.c
+** - adapted commands/error handling for usage within th1
+** - interface adapted to allow result creation as a TH1 list
+**
+** Takes a check-in identifier in zRev and an optional glob pattern in zGlob
+** as parameters and returns in pzList,pnList a TH1 list of the filenames
+** within that check-in which match the glob pattern.
+*/
+static void dir_cmd_rev(
+  Th_Interp *interp,
+  char **pzList,
+  int *pnList,
+  const char *zRev,         /* Revision string given */
+  const char *zGlob,        /* Glob pattern given */
+  int bDetails
+){
+  Stmt q;
+  char *zOrderBy = "pathname COLLATE nocase";
+  int rid;
+
+  rid = th1_name_to_typed_rid(interp, zRev, "ci");
+  compute_fileage(rid, zGlob);
+  db_prepare(&q,
+    "SELECT datetime(fileage.mtime, toLocal()), fileage.pathname,\n"
+    "       blob.size\n"
+    "  FROM fileage, blob\n"
+    " WHERE blob.rid=fileage.fid \n"
+    " ORDER BY %s;", zOrderBy /*safe-for-%s*/
+  );
+  while( db_step(&q)==SQLITE_ROW ){
+    const char *zFile = db_column_text(&q, 1);
+    if( bDetails ){
+      const char *zTime = db_column_text(&q, 0);
+      int size = db_column_int(&q, 2);
+      char zSize[50];
+      char *zSubList = 0;
+      int nSubList = 0;
+      sqlite3_snprintf(sizeof(zSize), zSize, "%d", size);
+      Th_ListAppend(interp, &zSubList, &nSubList, zFile, -1);
+      Th_ListAppend(interp, &zSubList, &nSubList, zSize, -1);
+      Th_ListAppend(interp, &zSubList, &nSubList, zTime, -1);
+      Th_ListAppend(interp, pzList, pnList, zSubList, -1);
+      Th_Free(interp, zSubList);
+    }else{
+      Th_ListAppend(interp, pzList, pnList, zFile, -1);
+    }
+  }
+  db_finalize(&q);
+}
+
+/*
+** TH1 command: dir CHECKIN ?GLOB? ?DETAILS?
+**
+** Returns a list containing all files in CHECKIN. If GLOB is given only
+** the files matching the pattern GLOB within CHECKIN will be returned.
+** If DETAILS is non-zero, the result will be a list-of-lists, with each
+** element containing at least three elements: the file name, the file
+** size (in bytes), and the file last modification time (relative to the
+** time zone configured for the repository).
+*/
+static int dirCmd(
+  Th_Interp *interp,
+  void *ctx,
+  int argc,
+  const char **argv,
+  int *argl
+){
+  const char *zGlob = 0;
+  int bDetails = 0;
+
+  if( argc<2 || argc>4 ){
+    return Th_WrongNumArgs(interp, "dir CHECKIN ?GLOB? ?DETAILS?");
+  }
+  if( argc>=3 ){
+    zGlob = argv[2];
+  }
+  if( argc>=4 && Th_ToInt(interp, argv[3], argl[3], &bDetails) ){
+    return TH_ERROR;
+  }
+  if( Th_IsRepositoryOpen() ){
+    char *zList = 0;
+    int nList = 0;
+    dir_cmd_rev(interp, &zList, &nList, argv[1], zGlob, bDetails);
+    Th_SetResult(interp, zList, nList);
+    Th_Free(interp, zList);
+    return TH_OK;
+  }else{
+    Th_SetResult(interp, "repository unavailable", -1);
+    return TH_ERROR;
+  }
+}
+
+/*
+** TH1 command: httpize STRING
+**
+** Escape all characters of STRING which have special meaning in URI
+** components. Return a new string result.
+*/
+static int httpizeCmd(
+  Th_Interp *interp,
+  void *p,
+  int argc,
+  const char **argv,
+  int *argl
+){
+  char *zOut;
+  if( argc!=2 ){
+    return Th_WrongNumArgs(interp, "httpize STRING");
+  }
+  zOut = httpize((char*)argv[1], argl[1]);
+  Th_SetResult(interp, zOut, -1);
+  free(zOut);
+  return TH_OK;
+}

/*
** True if output is enabled. False if disabled.
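+** The TH1 [enable_output] command defined just below toggles this flag from
+** a script.  A minimal hypothetical sketch:
+**
+**   enable_output 0
+**   puts "this text is suppressed"
+**   enable_output 1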
*/ static int enableOutput = 1; /* -** TH command: enable_output BOOLEAN +** TH1 command: enable_output BOOLEAN ** -** Enable or disable the puts and hputs commands. +** Enable or disable the puts and wiki commands. */ static int enableOutputCmd( - Th_Interp *interp, - void *p, - int argc, - const char **argv, + Th_Interp *interp, + void *p, + int argc, + const char **argv, int *argl ){ - if( argc!=2 ){ - return Th_WrongNumArgs(interp, "enable_output BOOLEAN"); + int rc; + if( argc<2 || argc>3 ){ + return Th_WrongNumArgs(interp, "enable_output [LABEL] BOOLEAN"); } - return Th_ToInt(interp, argv[1], argl[1], &enableOutput); + rc = Th_ToInt(interp, argv[argc-1], argl[argc-1], &enableOutput); + if( g.thTrace ){ + Th_Trace("enable_output {%.*s} -> %d<br>\n", argl[1],argv[1],enableOutput); + } + return rc; +} + +/* +** Returns a name for a TH1 return code. +*/ +const char *Th_ReturnCodeName(int rc, int nullIfOk){ + static char zRc[32]; + + switch( rc ){ + case TH_OK: return nullIfOk ? 0 : "TH_OK"; + case TH_ERROR: return "TH_ERROR"; + case TH_BREAK: return "TH_BREAK"; + case TH_RETURN: return "TH_RETURN"; + case TH_CONTINUE: return "TH_CONTINUE"; + default: { + sqlite3_snprintf(sizeof(zRc), zRc, "TH1 return code %d", rc); + } + } + return zRc; } /* ** Send text to the appropriate output: Either to the console -** or to the CGI reply buffer. +** or to the CGI reply buffer. Escape all characters with special +** meaning to HTML if the encode parameter is true. */ static void sendText(const char *z, int n, int encode){ if( enableOutput && n ){ if( n<0 ) n = strlen(z); if( encode ){ @@ -93,70 +322,251 @@ } if( g.cgiOutput ){ cgi_append_content(z, n); }else{ fwrite(z, 1, n, stdout); + fflush(stdout); } if( encode ) free((char*)z); } } + +static void sendError(const char *z, int n, int forceCgi){ + int savedEnable = enableOutput; + enableOutput = 1; + if( forceCgi || g.cgiOutput ){ + sendText("<hr><p class=\"thmainError\">", -1, 0); + } + sendText("ERROR: ", -1, 0); + sendText((char*)z, n, 1); + sendText(forceCgi || g.cgiOutput ? "</p>" : "\n", -1, 0); + enableOutput = savedEnable; +} + +/* +** Convert name to an rid. This function was copied from name_to_typed_rid() +** in name.c; however, it has been modified to report TH1 script errors instead +** of "fatal errors". +*/ +int th1_name_to_typed_rid( + Th_Interp *interp, + const char *zName, + const char *zType +){ + int rid; + + if( zName==0 || zName[0]==0 ) return 0; + rid = symbolic_name_to_rid(zName, zType); + if( rid<0 ){ + Th_SetResult(interp, "ambiguous name", -1); + }else if( rid==0 ){ + Th_SetResult(interp, "name not found", -1); + } + return rid; +} + +/* +** Attempt to lookup the specified check-in and file name into an rid. +** This function was copied from artifact_from_ci_and_filename() in +** info.c; however, it has been modified to report TH1 script errors +** instead of "fatal errors". 
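+**
+** This helper backs the TH1 [artifact] command defined later in this file.
+** A hypothetical use of that command from a script (the check-in name and
+** file name shown are illustrative):
+**
+**   set src [artifact trunk src/main.c]
+**   puts $src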
+*/ +int th1_artifact_from_ci_and_filename( + Th_Interp *interp, + const char *zCI, + const char *zFilename +){ + int cirid; + Blob err; + Manifest *pManifest; + ManifestFile *pFile; + + if( zCI==0 ){ + Th_SetResult(interp, "invalid check-in", -1); + return 0; + } + if( zFilename==0 ){ + Th_SetResult(interp, "invalid file name", -1); + return 0; + } + cirid = th1_name_to_typed_rid(interp, zCI, "*"); + blob_zero(&err); + pManifest = manifest_get(cirid, CFTYPE_MANIFEST, &err); + if( pManifest==0 ){ + if( blob_size(&err)>0 ){ + Th_SetResult(interp, blob_str(&err), blob_size(&err)); + }else{ + Th_SetResult(interp, "manifest not found", -1); + } + blob_reset(&err); + return 0; + } + blob_reset(&err); + manifest_file_rewind(pManifest); + while( (pFile = manifest_file_next(pManifest,0))!=0 ){ + if( fossil_strcmp(zFilename, pFile->zName)==0 ){ + int rid = db_int(0, "SELECT rid FROM blob WHERE uuid=%Q", pFile->zUuid); + manifest_destroy(pManifest); + return rid; + } + } + Th_SetResult(interp, "file name not found in manifest", -1); + return 0; +} /* -** TH command: puts STRING -** TH command: html STRING +** TH1 command: puts STRING +** TH1 command: html STRING ** -** Output STRING as HTML (html) or unchanged (puts). +** Output STRING escaped for HTML (html) or unchanged (puts). */ static int putsCmd( - Th_Interp *interp, - void *pConvert, - int argc, - const char **argv, + Th_Interp *interp, + void *pConvert, + int argc, + const char **argv, int *argl ){ if( argc!=2 ){ return Th_WrongNumArgs(interp, "puts STRING"); } - sendText((char*)argv[1], argl[1], pConvert!=0); + sendText((char*)argv[1], argl[1], *(unsigned int*)pConvert); + return TH_OK; +} + +/* +** TH1 command: redirect URL +** +** Issues an HTTP redirect (302) to the specified URL and then exits the +** process. +*/ +static int redirectCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + if( argc!=2 ){ + return Th_WrongNumArgs(interp, "redirect URL"); + } + cgi_redirect(argv[1]); + Th_SetResult(interp, argv[1], argl[1]); /* NOT REACHED */ + return TH_OK; +} + +/* +** TH1 command: insertCsrf +** +** While rendering a form, call this command to add the Anti-CSRF token +** as a hidden element of the form. +*/ +static int insertCsrfCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + if( argc!=1 ){ + return Th_WrongNumArgs(interp, "insertCsrf"); + } + login_insert_csrf_secret(); + return TH_OK; +} + +/* +** TH1 command: verifyCsrf +** +** Before using the results of a form, first call this command to verify +** that this Anti-CSRF token is present and is valid. If the Anti-CSRF token +** is missing or is incorrect, that indicates a cross-site scripting attack. +** If the event of an attack is detected, an error message is generated and +** all further processing is aborted. +*/ +static int verifyCsrfCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + if( argc!=1 ){ + return Th_WrongNumArgs(interp, "verifyCsrf"); + } + login_verify_csrf_secret(); + return TH_OK; +} + +/* +** TH1 command: markdown STRING +** +** Renders the input string as markdown. The result is a two-element list. +** The first element is the text-only title string. The second element +** contains the body, rendered as HTML. 
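+**
+** A hypothetical TH1 fragment, assuming a variable "mdtext" already holds
+** some markdown source:
+**
+**   set parts [markdown $mdtext]
+**   html [lindex $parts 1]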
+*/ +static int markdownCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + Blob src, title, body; + char *zValue = 0; + int nValue = 0; + if( argc!=2 ){ + return Th_WrongNumArgs(interp, "markdown STRING"); + } + blob_zero(&src); + blob_init(&src, (char*)argv[1], argl[1]); + blob_zero(&title); blob_zero(&body); + markdown_to_html(&src, &title, &body); + Th_ListAppend(interp, &zValue, &nValue, blob_str(&title), blob_size(&title)); + Th_ListAppend(interp, &zValue, &nValue, blob_str(&body), blob_size(&body)); + Th_SetResult(interp, zValue, nValue); return TH_OK; } /* -** TH command: wiki STRING +** TH1 command: decorate STRING +** TH1 command: wiki STRING ** -** Render the input string as wiki. +** Render the input string as wiki. For the decorate command, only links +** are handled. */ static int wikiCmd( - Th_Interp *interp, - void *p, - int argc, - const char **argv, + Th_Interp *interp, + void *p, + int argc, + const char **argv, int *argl ){ + int flags = WIKI_INLINE | WIKI_NOBADLINKS | *(unsigned int*)p; if( argc!=2 ){ return Th_WrongNumArgs(interp, "wiki STRING"); } if( enableOutput ){ Blob src; blob_init(&src, (char*)argv[1], argl[1]); - wiki_convert(&src, 0, WIKI_INLINE); + wiki_convert(&src, 0, flags); blob_reset(&src); } return TH_OK; } /* -** TH command: htmlize STRING +** TH1 command: htmlize STRING ** ** Escape all characters of STRING which have special meaning in HTML. ** Return a new string result. */ static int htmlizeCmd( - Th_Interp *interp, - void *p, - int argc, - const char **argv, + Th_Interp *interp, + void *p, + int argc, + const char **argv, int *argl ){ char *zOut; if( argc!=2 ){ return Th_WrongNumArgs(interp, "htmlize STRING"); @@ -166,80 +576,314 @@ free(zOut); return TH_OK; } /* -** TH command: date +** TH1 command: encode64 STRING +** +** Encode the specified string using Base64 and return the result. +*/ +static int encode64Cmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + char *zOut; + if( argc!=2 ){ + return Th_WrongNumArgs(interp, "encode64 STRING"); + } + zOut = encode64((char*)argv[1], argl[1]); + Th_SetResult(interp, zOut, -1); + free(zOut); + return TH_OK; +} + +/* +** TH1 command: date ** -** Return a string which is the current time and date. +** Return a string which is the current time and date. If the +** -local option is used, the date appears using localtime instead +** of UTC. */ static int dateCmd( - Th_Interp *interp, - void *p, - int argc, - const char **argv, + Th_Interp *interp, + void *p, + int argc, + const char **argv, int *argl ){ - char *zOut = db_text("??", "SELECT datetime('now')"); + char *zOut; + if( argc>=2 && argl[1]==6 && memcmp(argv[1],"-local",6)==0 ){ + zOut = db_text("??", "SELECT datetime('now',toLocal())"); + }else{ + zOut = db_text("??", "SELECT datetime('now')"); + } Th_SetResult(interp, zOut, -1); free(zOut); return TH_OK; } /* -** TH command: hascap STRING +** TH1 command: hascap STRING... +** TH1 command: anoncap STRING... ** -** Return true if the user has all of the capabilities listed in STRING. +** Return true if the current user (hascap) or if the anonymous user +** (anoncap) has all of the capabilities listed in STRING. 
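+**
+** A hypothetical sketch (the capability letters shown are illustrative):
+**
+**   if {[hascap i]} {
+**     puts "the current user may check in"
+**   }
+**   if {[anoncap h]} {
+**     puts "anonymous users get hyperlinks"
+**   }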
*/ static int hascapCmd( - Th_Interp *interp, - void *p, - int argc, - const char **argv, + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + int rc = 0, i; + if( argc<2 ){ + return Th_WrongNumArgs(interp, "hascap STRING ..."); + } + for(i=1; i<argc && rc==0; i++){ + rc = login_has_capability((char*)argv[i],argl[i],*(int*)p); + } + if( g.thTrace ){ + Th_Trace("[hascap %#h] => %d<br />\n", argl[1], argv[1], rc); + } + Th_SetResultInt(interp, rc); + return TH_OK; +} + +/* +** TH1 command: searchable STRING... +** +** Return true if searching in any of the document classes identified +** by STRING is enabled for the repository and user has the necessary +** capabilities to perform the search. +** +** Document classes: +** +** c Check-in comments +** d Embedded documentation +** t Tickets +** w Wiki +** +** To be clear, only one of the document classes identified by each STRING +** needs to be searchable in order for that argument to be true. But +** all arguments must be true for this routine to return true. Hence, to +** see if ALL document classes are searchable: +** +** if {[searchable c d t w]} {...} +** +** But to see if ANY document class is searchable: +** +** if {[searchable cdtw]} {...} +** +** This command is useful for enabling or disabling a "Search" entry +** on the menu bar. +*/ +static int searchableCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + int rc = 1, i, j; + unsigned int searchCap = search_restrict(SRCH_ALL); + if( argc<2 ){ + return Th_WrongNumArgs(interp, "hascap STRING ..."); + } + for(i=1; i<argc && rc; i++){ + int match = 0; + for(j=0; j<argl[i]; j++){ + switch( argv[i][j] ){ + case 'c': match |= searchCap & SRCH_CKIN; break; + case 'd': match |= searchCap & SRCH_DOC; break; + case 't': match |= searchCap & SRCH_TKT; break; + case 'w': match |= searchCap & SRCH_WIKI; break; + } + } + if( !match ) rc = 0; + } + if( g.thTrace ){ + Th_Trace("[searchable %#h] => %d<br />\n", argl[1], argv[1], rc); + } + Th_SetResultInt(interp, rc); + return TH_OK; +} + +/* +** TH1 command: hasfeature STRING +** +** Return true if the fossil binary has the given compile-time feature +** enabled. The set of features includes: +** +** "ssl" = FOSSIL_ENABLE_SSL +** "legacyMvRm" = FOSSIL_ENABLE_LEGACY_MV_RM +** "execRelPaths" = FOSSIL_ENABLE_EXEC_REL_PATHS +** "th1Docs" = FOSSIL_ENABLE_TH1_DOCS +** "th1Hooks" = FOSSIL_ENABLE_TH1_HOOKS +** "tcl" = FOSSIL_ENABLE_TCL +** "useTclStubs" = USE_TCL_STUBS +** "tclStubs" = FOSSIL_ENABLE_TCL_STUBS +** "tclPrivateStubs" = FOSSIL_ENABLE_TCL_PRIVATE_STUBS +** "json" = FOSSIL_ENABLE_JSON +** "markdown" = FOSSIL_ENABLE_MARKDOWN +** "unicodeCmdLine" = !BROKEN_MINGW_CMDLINE +** "dynamicBuild" = FOSSIL_DYNAMIC_BUILD +** +** Specifying an unknown feature will return a value of false, it will not +** raise a script error. +*/ +static int hasfeatureCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, int *argl ){ - int rc; + int rc = 0; + const char *zArg; if( argc!=2 ){ - return Th_WrongNumArgs(interp, "hascap STRING"); + return Th_WrongNumArgs(interp, "hasfeature STRING"); } - rc = login_has_capability((char*)argv[1],argl[1]); + zArg = (const char *)argv[1]; + if(NULL==zArg){ + /* placeholder for following ifdefs... 
*/ + } +#if defined(FOSSIL_ENABLE_SSL) + else if( 0 == fossil_strnicmp( zArg, "ssl\0", 4 ) ){ + rc = 1; + } +#endif +#if defined(FOSSIL_ENABLE_LEGACY_MV_RM) + else if( 0 == fossil_strnicmp( zArg, "legacyMvRm\0", 11 ) ){ + rc = 1; + } +#endif +#if defined(FOSSIL_ENABLE_EXEC_REL_PATHS) + else if( 0 == fossil_strnicmp( zArg, "execRelPaths\0", 13 ) ){ + rc = 1; + } +#endif +#if defined(FOSSIL_ENABLE_TH1_DOCS) + else if( 0 == fossil_strnicmp( zArg, "th1Docs\0", 8 ) ){ + rc = 1; + } +#endif +#if defined(FOSSIL_ENABLE_TH1_HOOKS) + else if( 0 == fossil_strnicmp( zArg, "th1Hooks\0", 9 ) ){ + rc = 1; + } +#endif +#if defined(FOSSIL_ENABLE_TCL) + else if( 0 == fossil_strnicmp( zArg, "tcl\0", 4 ) ){ + rc = 1; + } +#endif +#if defined(USE_TCL_STUBS) + else if( 0 == fossil_strnicmp( zArg, "useTclStubs\0", 12 ) ){ + rc = 1; + } +#endif +#if defined(FOSSIL_ENABLE_TCL_STUBS) + else if( 0 == fossil_strnicmp( zArg, "tclStubs\0", 9 ) ){ + rc = 1; + } +#endif +#if defined(FOSSIL_ENABLE_TCL_PRIVATE_STUBS) + else if( 0 == fossil_strnicmp( zArg, "tclPrivateStubs\0", 16 ) ){ + rc = 1; + } +#endif +#if defined(FOSSIL_ENABLE_JSON) + else if( 0 == fossil_strnicmp( zArg, "json\0", 5 ) ){ + rc = 1; + } +#endif +#if !defined(BROKEN_MINGW_CMDLINE) + else if( 0 == fossil_strnicmp( zArg, "unicodeCmdLine\0", 15 ) ){ + rc = 1; + } +#endif +#if defined(FOSSIL_DYNAMIC_BUILD) + else if( 0 == fossil_strnicmp( zArg, "dynamicBuild\0", 13 ) ){ + rc = 1; + } +#endif + else if( 0 == fossil_strnicmp( zArg, "markdown\0", 9 ) ){ + rc = 1; + } + if( g.thTrace ){ + Th_Trace("[hasfeature %#h] => %d<br />\n", argl[1], zArg, rc); + } + Th_SetResultInt(interp, rc); + return TH_OK; +} + + +/* +** TH1 command: tclReady +** +** Return true if the fossil binary has the Tcl integration feature +** enabled and it is currently available for use by TH1 scripts. +** +*/ +static int tclReadyCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + int rc = 0; + if( argc!=1 ){ + return Th_WrongNumArgs(interp, "tclReady"); + } +#if defined(FOSSIL_ENABLE_TCL) + if( g.tcl.interp ){ + rc = 1; + } +#endif if( g.thTrace ){ - Th_Trace("[hascap %.*h] => %d<br />\n", argl[1], argv[1], rc); + Th_Trace("[tclReady] => %d<br />\n", rc); } Th_SetResultInt(interp, rc); return TH_OK; } + /* -** TH command: anycap STRING +** TH1 command: anycap STRING ** -** Return true if the user has any one of the capabilities listed in STRING. +** Return true if the current user user +** has any one of the capabilities listed in STRING. */ static int anycapCmd( - Th_Interp *interp, - void *p, - int argc, - const char **argv, + Th_Interp *interp, + void *p, + int argc, + const char **argv, int *argl ){ int rc = 0; int i; if( argc!=2 ){ return Th_WrongNumArgs(interp, "anycap STRING"); } for(i=0; rc==0 && i<argl[1]; i++){ - rc = login_has_capability((char*)&argv[1][i],1); + rc = login_has_capability((char*)&argv[1][i],1,0); } if( g.thTrace ){ - Th_Trace("[hascap %.*h] => %d<br />\n", argl[1], argv[1], rc); + Th_Trace("[hascap %#h] => %d<br />\n", argl[1], argv[1], rc); } Th_SetResultInt(interp, rc); return TH_OK; } /* -** TH1 command: combobox NAME TEXT-LIST NUMLINES +** TH1 command: combobox NAME TEXT-LIST NUMLINES ** ** Generate an HTML combobox. NAME is both the name of the ** CGI parameter and the name of a variable that contains the ** currently selected value. TEXT-LIST is a list of possible ** values for the combobox. NUMLINES is 1 for a true combobox. 
@@ -246,13 +890,13 @@ ** If NUMLINES is greater than one then the display is a listbox ** with the number of lines given. */ static int comboboxCmd( Th_Interp *interp, - void *p, - int argc, - const char **argv, + void *p, + int argc, + const char **argv, int *argl ){ if( argc!=4 ){ return Th_WrongNumArgs(interp, "combobox NAME TEXT-LIST NUMLINES"); } @@ -269,20 +913,22 @@ if( Th_ToInt(interp, argv[3], argl[3], &height) ) return TH_ERROR; Th_SplitList(interp, argv[2], argl[2], &azElem, &aszElem, &nElem); blob_init(&name, (char*)argv[1], argl[1]); zValue = Th_Fetch(blob_str(&name), &nValue); - z = mprintf("<select name=\"%z\" size=\"%d\">", - htmlize(blob_buffer(&name), blob_size(&name)), height); + zH = htmlize(blob_buffer(&name), blob_size(&name)); + z = mprintf("<select id=\"%s\" name=\"%s\" size=\"%d\">", zH, zH, height); + free(zH); sendText(z, -1, 0); free(z); blob_reset(&name); for(i=0; i<nElem; i++){ zH = htmlize((char*)azElem[i], aszElem[i]); - if( zValue && aszElem[i]==nValue + if( zValue && aszElem[i]==nValue && memcmp(zValue, azElem[i], nValue)==0 ){ - z = mprintf("<option value=\"%s\" selected>%s</option>", zH, zH); + z = mprintf("<option value=\"%s\" selected=\"selected\">%s</option>", + zH, zH); }else{ z = mprintf("<option value=\"%s\">%s</option>", zH, zH); } free(zH); sendText(z, -1, 0); @@ -293,20 +939,20 @@ } return TH_OK; } /* -** TH1 command: linecount STRING MAX MIN +** TH1 command: linecount STRING MAX MIN ** ** Return one more than the number of \n characters in STRING. But ** never return less than MIN or more than MAX. */ static int linecntCmd( Th_Interp *interp, - void *p, - int argc, - const char **argv, + void *p, + int argc, + const char **argv, int *argl ){ const char *z; int size, n, i; int iMin, iMax; @@ -326,57 +972,1027 @@ if( n<iMin ) n = iMin; if( n>iMax ) n = iMax; Th_SetResultInt(interp, n); return TH_OK; } + +/* +** TH1 command: repository ?BOOLEAN? +** +** Return the fully qualified file name of the open repository or an empty +** string if one is not currently open. Optionally, it will attempt to open +** the repository if the boolean argument is non-zero. +*/ +static int repositoryCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + if( argc!=1 && argc!=2 ){ + return Th_WrongNumArgs(interp, "repository ?BOOLEAN?"); + } + if( argc==2 ){ + int openRepository = 0; + if( Th_ToInt(interp, argv[1], argl[1], &openRepository) ){ + return TH_ERROR; + } + if( openRepository ) db_find_and_open_repository(OPEN_OK_NOT_FOUND, 0); + } + Th_SetResult(interp, g.zRepositoryName, -1); + return TH_OK; +} + +/* +** TH1 command: checkout ?BOOLEAN? +** +** Return the fully qualified directory name of the current checkout or an +** empty string if it is not available. Optionally, it will attempt to find +** the current checkout, opening the configuration ("user") database and the +** repository as necessary, if the boolean argument is non-zero. +*/ +static int checkoutCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + if( argc!=1 && argc!=2 ){ + return Th_WrongNumArgs(interp, "checkout ?BOOLEAN?"); + } + if( argc==2 ){ + int openCheckout = 0; + if( Th_ToInt(interp, argv[1], argl[1], &openCheckout) ){ + return TH_ERROR; + } + if( openCheckout ) db_open_local(0); + } + Th_SetResult(interp, g.zLocalRoot, -1); + return TH_OK; +} + +/* +** TH1 command: trace STRING +** +** Generate a TH1 trace message if debugging is enabled. 
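+** The message is only recorded when tracing is active (e.g. via the
+** --th-trace option handled by Th_InitTraceLog above).  A hypothetical use:
+**
+**   trace "custom header evaluated"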
+*/ +static int traceCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + if( argc!=2 ){ + return Th_WrongNumArgs(interp, "trace STRING"); + } + if( g.thTrace ){ + Th_Trace("%s", argv[1]); + } + Th_SetResult(interp, 0, 0); + return TH_OK; +} + +/* +** TH1 command: globalState NAME ?DEFAULT? +** +** Returns a string containing the value of the specified global state +** variable -OR- the specified default value. Currently, the supported +** items are: +** +** "checkout" = The active local checkout directory, if any. +** "configuration" = The active configuration database file name, +** if any. +** "executable" = The fully qualified executable file name. +** "flags" = The TH1 initialization flags. +** "log" = The error log file name, if any. +** "repository" = The active local repository file name, if +** any. +** "top" = The base path for the active server instance, +** if applicable. +** "user" = The active user name, if any. +** "vfs" = The SQLite VFS in use, if overridden. +** +** Attempts to query for unsupported global state variables will result +** in a script error. Additional global state variables may be exposed +** in the future. +** +** See also: checkout, repository, setting +*/ +static int globalStateCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + const char *zDefault = 0; + if( argc!=2 && argc!=3 ){ + return Th_WrongNumArgs(interp, "globalState NAME ?DEFAULT?"); + } + if( argc==3 ){ + zDefault = argv[2]; + } + if( fossil_strnicmp(argv[1], "checkout\0", 9)==0 ){ + Th_SetResult(interp, g.zLocalRoot ? g.zLocalRoot : zDefault, -1); + return TH_OK; + }else if( fossil_strnicmp(argv[1], "configuration\0", 14)==0 ){ + Th_SetResult(interp, g.zConfigDbName ? g.zConfigDbName : zDefault, -1); + return TH_OK; + }else if( fossil_strnicmp(argv[1], "executable\0", 11)==0 ){ + Th_SetResult(interp, g.nameOfExe ? g.nameOfExe : zDefault, -1); + return TH_OK; + }else if( fossil_strnicmp(argv[1], "flags\0", 6)==0 ){ + Th_SetResultInt(interp, g.th1Flags); + return TH_OK; + }else if( fossil_strnicmp(argv[1], "log\0", 4)==0 ){ + Th_SetResult(interp, g.zErrlog ? g.zErrlog : zDefault, -1); + return TH_OK; + }else if( fossil_strnicmp(argv[1], "repository\0", 11)==0 ){ + Th_SetResult(interp, g.zRepositoryName ? g.zRepositoryName : zDefault, -1); + return TH_OK; + }else if( fossil_strnicmp(argv[1], "top\0", 4)==0 ){ + Th_SetResult(interp, g.zTop ? g.zTop : zDefault, -1); + return TH_OK; + }else if( fossil_strnicmp(argv[1], "user\0", 5)==0 ){ + Th_SetResult(interp, g.zLogin ? g.zLogin : zDefault, -1); + return TH_OK; + }else if( fossil_strnicmp(argv[1], "vfs\0", 4)==0 ){ + Th_SetResult(interp, g.zVfsName ? g.zVfsName : zDefault, -1); + return TH_OK; + }else{ + Th_ErrorMessage(interp, "unsupported global state:", argv[1], argl[1]); + return TH_ERROR; + } +} + +/* +** TH1 command: getParameter NAME ?DEFAULT? +** +** Return the value of the specified query parameter or the specified default +** value when there is no matching query parameter. +*/ +static int getParameterCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + const char *zDefault = 0; + if( argc!=2 && argc!=3 ){ + return Th_WrongNumArgs(interp, "getParameter NAME ?DEFAULT?"); + } + if( argc==3 ){ + zDefault = argv[2]; + } + Th_SetResult(interp, cgi_parameter(argv[1], zDefault), -1); + return TH_OK; +} + +/* +** TH1 command: setParameter NAME VALUE +** +** Sets the value of the specified query parameter. 
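+**
+** A hypothetical fragment pairing this with getParameter (the parameter
+** name "skin" is illustrative):
+**
+**   setParameter skin default
+**   puts [getParameter skin]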
+*/ +static int setParameterCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + if( argc!=3 ){ + return Th_WrongNumArgs(interp, "setParameter NAME VALUE"); + } + cgi_replace_parameter(mprintf("%s", argv[1]), mprintf("%s", argv[2])); + return TH_OK; +} + +/* +** TH1 command: reinitialize ?FLAGS? +** +** Reinitializes the TH1 interpreter using the specified flags. +*/ +static int reinitializeCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + u32 flags = TH_INIT_DEFAULT; + if( argc!=1 && argc!=2 ){ + return Th_WrongNumArgs(interp, "reinitialize ?FLAGS?"); + } + if( argc==2 ){ + int iFlags; + if( Th_ToInt(interp, argv[1], argl[1], &iFlags) ){ + return TH_ERROR; + }else{ + flags = (u32)iFlags; + } + } + Th_FossilInit(flags & ~TH_INIT_FORBID_MASK); + Th_SetResult(interp, 0, 0); + return TH_OK; +} + +/* +** TH1 command: render STRING +** +** Renders the template and writes the results. +*/ +static int renderCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + int rc; + if( argc!=2 ){ + return Th_WrongNumArgs(interp, "render STRING"); + } + rc = Th_Render(argv[1]); + Th_SetResult(interp, 0, 0); + return rc; +} + +/* +** TH1 command: styleHeader TITLE +** +** Render the configured style header. +*/ +static int styleHeaderCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + if( argc!=2 ){ + return Th_WrongNumArgs(interp, "styleHeader TITLE"); + } + if( Th_IsRepositoryOpen() ){ + style_header("%s", argv[1]); + Th_SetResult(interp, 0, 0); + return TH_OK; + }else{ + Th_SetResult(interp, "repository unavailable", -1); + return TH_ERROR; + } +} + +/* +** TH1 command: styleFooter +** +** Render the configured style footer. +*/ +static int styleFooterCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + if( argc!=1 ){ + return Th_WrongNumArgs(interp, "styleFooter"); + } + if( Th_IsRepositoryOpen() ){ + style_footer(); + Th_SetResult(interp, 0, 0); + return TH_OK; + }else{ + Th_SetResult(interp, "repository unavailable", -1); + return TH_ERROR; + } +} + +/* +** TH1 command: artifact ID ?FILENAME? +** +** Attempts to locate the specified artifact and return its contents. An +** error is generated if the repository is not open or the artifact cannot +** be found. +*/ +static int artifactCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + if( argc!=2 && argc!=3 ){ + return Th_WrongNumArgs(interp, "artifact ID ?FILENAME?"); + } + if( Th_IsRepositoryOpen() ){ + int rid; + Blob content; + if( argc==3 ){ + rid = th1_artifact_from_ci_and_filename(interp, argv[1], argv[2]); + }else{ + rid = th1_name_to_typed_rid(interp, argv[1], "*"); + } + if( rid!=0 && content_get(rid, &content) ){ + Th_SetResult(interp, blob_str(&content), blob_size(&content)); + blob_reset(&content); + return TH_OK; + }else{ + return TH_ERROR; + } + }else{ + Th_SetResult(interp, "repository unavailable", -1); + return TH_ERROR; + } +} + +#ifdef _WIN32 +# include <windows.h> +#else +# include <sys/time.h> +# include <sys/resource.h> +#endif + +/* +** Get user and kernel times in microseconds. 
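+** These values back the TH1 [utime] and [stime] commands defined below.
+** A hypothetical TH1 timing sketch:
+**
+**   set t0 [utime]
+**   # ... work to be measured ...
+**   set t1 [utime]
+**   puts [expr {$t1 - $t0}]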
+*/
+static void getCpuTimes(sqlite3_uint64 *piUser, sqlite3_uint64 *piKernel){
+#ifdef _WIN32
+  FILETIME not_used;
+  FILETIME kernel_time;
+  FILETIME user_time;
+  GetProcessTimes(GetCurrentProcess(), &not_used, &not_used,
+                  &kernel_time, &user_time);
+  if( piUser ){
+    *piUser = ((((sqlite3_uint64)user_time.dwHighDateTime)<<32) +
+               (sqlite3_uint64)user_time.dwLowDateTime + 5)/10;
+  }
+  if( piKernel ){
+    *piKernel = ((((sqlite3_uint64)kernel_time.dwHighDateTime)<<32) +
+                 (sqlite3_uint64)kernel_time.dwLowDateTime + 5)/10;
+  }
+#else
+  struct rusage s;
+  getrusage(RUSAGE_SELF, &s);
+  if( piUser ){
+    *piUser = ((sqlite3_uint64)s.ru_utime.tv_sec)*1000000 + s.ru_utime.tv_usec;
+  }
+  if( piKernel ){
+    *piKernel =
+      ((sqlite3_uint64)s.ru_stime.tv_sec)*1000000 + s.ru_stime.tv_usec;
+  }
+#endif
+}
+
+/*
+** TH1 command: utime
+**
+** Return the number of microseconds of CPU time consumed by the current
+** process in user space.
+*/
+static int utimeCmd(
+  Th_Interp *interp,
+  void *p,
+  int argc,
+  const char **argv,
+  int *argl
+){
+  sqlite3_uint64 x;
+  char zUTime[50];
+  getCpuTimes(&x, 0);
+  sqlite3_snprintf(sizeof(zUTime), zUTime, "%llu", x);
+  Th_SetResult(interp, zUTime, -1);
+  return TH_OK;
+}
+
+/*
+** TH1 command: stime
+**
+** Return the number of microseconds of CPU time consumed by the current
+** process in system space.
+*/
+static int stimeCmd(
+  Th_Interp *interp,
+  void *p,
+  int argc,
+  const char **argv,
+  int *argl
+){
+  sqlite3_uint64 x;
+  char zUTime[50];
+  getCpuTimes(0, &x);
+  sqlite3_snprintf(sizeof(zUTime), zUTime, "%llu", x);
+  Th_SetResult(interp, zUTime, -1);
+  return TH_OK;
+}
+
+
+/*
+** TH1 command: randhex N
+**
+** Return N*2 random hexadecimal digits with N<50. If N is omitted,
+** use a value of 10.
+*/
+static int randhexCmd(
+  Th_Interp *interp,
+  void *p,
+  int argc,
+  const char **argv,
+  int *argl
+){
+  int n;
+  unsigned char aRand[50];
+  unsigned char zOut[100];
+  if( argc!=1 && argc!=2 ){
+    return Th_WrongNumArgs(interp, "randhex ?N?");
+  }
+  if( argc==2 ){
+    if( Th_ToInt(interp, argv[1], argl[1], &n) ){
+      return TH_ERROR;
+    }
+    if( n<1 ) n = 1;
+    if( n>sizeof(aRand) ) n = sizeof(aRand);
+  }else{
+    n = 10;
+  }
+  sqlite3_randomness(n, aRand);
+  encode16(aRand, zOut, n);
+  Th_SetResult(interp, (const char *)zOut, -1);
+  return TH_OK;
+}
+
+/*
+** TH1 command: query SQL CODE
+**
+** Run the SQL query given by the SQL argument. For each row in the result
+** set, run CODE.
+**
+** In SQL, parameters such as $var are filled in using the value of variable
+** "var". Result values are stored in variables with the column name prior
+** to each invocation of CODE.
+*/
+static int queryCmd(
+  Th_Interp *interp,
+  void *p,
+  int argc,
+  const char **argv,
+  int *argl
+){
+  sqlite3_stmt *pStmt;
+  int rc;
+  const char *zSql;
+  int nSql;
+  const char *zTail;
+  int n, i;
+  int res = TH_OK;
+  int nVar;
+  char *zErr = 0;
+
+  if( argc!=3 ){
+    return Th_WrongNumArgs(interp, "query SQL CODE");
+  }
+  if( g.db==0 ){
+    Th_ErrorMessage(interp, "database is not open", 0, 0);
+    return TH_ERROR;
+  }
+  zSql = argv[1];
+  nSql = argl[1];
+  while( res==TH_OK && nSql>0 ){
+    zErr = 0;
+    sqlite3_set_authorizer(g.db, report_query_authorizer, (void*)&zErr);
+    rc = sqlite3_prepare_v2(g.db, argv[1], argl[1], &pStmt, &zTail);
+    sqlite3_set_authorizer(g.db, 0, 0);
+    if( rc!=0 || zErr!=0 ){
+      Th_ErrorMessage(interp, "SQL error: ",
+                      zErr ?
zErr : sqlite3_errmsg(g.db), -1); + return TH_ERROR; + } + n = (int)(zTail - zSql); + zSql += n; + nSql -= n; + if( pStmt==0 ) continue; + nVar = sqlite3_bind_parameter_count(pStmt); + for(i=1; i<=nVar; i++){ + const char *zVar = sqlite3_bind_parameter_name(pStmt, i); + int szVar = zVar ? th_strlen(zVar) : 0; + if( szVar>1 && zVar[0]=='$' + && Th_GetVar(interp, zVar+1, szVar-1)==TH_OK ){ + int nVal; + const char *zVal = Th_GetResult(interp, &nVal); + sqlite3_bind_text(pStmt, i, zVal, nVal, SQLITE_TRANSIENT); + } + } + while( res==TH_OK && sqlite3_step(pStmt)==SQLITE_ROW ){ + int nCol = sqlite3_column_count(pStmt); + for(i=0; i<nCol; i++){ + const char *zCol = sqlite3_column_name(pStmt, i); + int szCol = th_strlen(zCol); + const char *zVal = (const char*)sqlite3_column_text(pStmt, i); + int szVal = sqlite3_column_bytes(pStmt, i); + Th_SetVar(interp, zCol, szCol, zVal, szVal); + } + res = Th_Eval(interp, 0, argv[2], argl[2]); + if( res==TH_BREAK || res==TH_CONTINUE ) res = TH_OK; + } + rc = sqlite3_finalize(pStmt); + if( rc!=SQLITE_OK ){ + Th_ErrorMessage(interp, "SQL error: ", sqlite3_errmsg(g.db), -1); + return TH_ERROR; + } + } + return res; +} + +/* +** TH1 command: setting name +** +** Gets and returns the value of the specified Fossil setting. +*/ +#define SETTING_WRONGNUMARGS "setting ?-strict? ?--? name" +static int settingCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + int rc; + int strict = 0; + int nArg = 1; + char *zValue; + if( argc<2 || argc>4 ){ + return Th_WrongNumArgs(interp, SETTING_WRONGNUMARGS); + } + if( fossil_strcmp(argv[nArg], "-strict")==0 ){ + strict = 1; nArg++; + } + if( fossil_strcmp(argv[nArg], "--")==0 ) nArg++; + if( nArg+1!=argc ){ + return Th_WrongNumArgs(interp, SETTING_WRONGNUMARGS); + } + zValue = db_get(argv[nArg], 0); + if( zValue!=0 ){ + Th_SetResult(interp, zValue, -1); + rc = TH_OK; + }else if( strict ){ + Th_ErrorMessage(interp, "no value for setting \"", argv[nArg], -1); + rc = TH_ERROR; + }else{ + Th_SetResult(interp, 0, 0); + rc = TH_OK; + } + if( g.thTrace ){ + Th_Trace("[setting %s%#h] => %d<br />\n", strict ? "strict " : "", + argl[nArg], argv[nArg], rc); + } + return rc; +} + +/* +** TH1 command: glob_match ?-one? ?--? patternList string +** +** Checks the string against the specified glob pattern -OR- list of glob +** patterns and returns non-zero if there is a match. +*/ +#define GLOB_MATCH_WRONGNUMARGS "glob_match ?-one? ?--? patternList string" +static int globMatchCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + int rc; + int one = 0; + int nArg = 1; + Glob *pGlob = 0; + if( argc<3 || argc>5 ){ + return Th_WrongNumArgs(interp, GLOB_MATCH_WRONGNUMARGS); + } + if( fossil_strcmp(argv[nArg], "-one")==0 ){ + one = 1; nArg++; + } + if( fossil_strcmp(argv[nArg], "--")==0 ) nArg++; + if( nArg+2!=argc ){ + return Th_WrongNumArgs(interp, GLOB_MATCH_WRONGNUMARGS); + } + if( one ){ + Th_SetResultInt(interp, sqlite3_strglob(argv[nArg], argv[nArg+1])==0); + rc = TH_OK; + }else{ + pGlob = glob_create(argv[nArg]); + if( pGlob ){ + Th_SetResultInt(interp, glob_match(pGlob, argv[nArg+1])); + rc = TH_OK; + }else{ + Th_SetResult(interp, "unable to create glob from pattern list", -1); + rc = TH_ERROR; + } + glob_free(pGlob); + } + return rc; +} + +/* +** TH1 command: regexp ?-nocase? ?--? exp string +** +** Checks the string against the specified regular expression and returns +** non-zero if it matches. 
If the regular expression is invalid or cannot +** be compiled, an error will be generated. +*/ +#define REGEXP_WRONGNUMARGS "regexp ?-nocase? ?--? exp string" +static int regexpCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + int rc; + int noCase = 0; + int nArg = 1; + ReCompiled *pRe = 0; + const char *zErr; + if( argc<3 || argc>5 ){ + return Th_WrongNumArgs(interp, REGEXP_WRONGNUMARGS); + } + if( fossil_strcmp(argv[nArg], "-nocase")==0 ){ + noCase = 1; nArg++; + } + if( fossil_strcmp(argv[nArg], "--")==0 ) nArg++; + if( nArg+2!=argc ){ + return Th_WrongNumArgs(interp, REGEXP_WRONGNUMARGS); + } + zErr = re_compile(&pRe, argv[nArg], noCase); + if( !zErr ){ + Th_SetResultInt(interp, re_match(pRe, + (const unsigned char *)argv[nArg+1], argl[nArg+1])); + rc = TH_OK; + }else{ + Th_SetResult(interp, zErr, -1); + rc = TH_ERROR; + } + re_free(pRe); + return rc; +} + +/* +** TH1 command: http ?-asynchronous? ?--? url ?payload? +** +** Perform an HTTP or HTTPS request for the specified URL. If a +** payload is present, it will be interpreted as text/plain and +** the POST method will be used; otherwise, the GET method will +** be used. Upon success, if the -asynchronous option is used, an +** empty string is returned as the result; otherwise, the response +** from the server is returned as the result. Synchronous requests +** are not currently implemented. +*/ +#define HTTP_WRONGNUMARGS "http ?-asynchronous? ?--? url ?payload?" +static int httpCmd( + Th_Interp *interp, + void *p, + int argc, + const char **argv, + int *argl +){ + int nArg = 1; + int fAsynchronous = 0; + const char *zType, *zRegexp; + Blob payload; + ReCompiled *pRe = 0; + UrlData urlData; + + if( argc<2 || argc>5 ){ + return Th_WrongNumArgs(interp, HTTP_WRONGNUMARGS); + } + if( fossil_strnicmp(argv[nArg], "-asynchronous", argl[nArg])==0 ){ + fAsynchronous = 1; nArg++; + } + if( fossil_strcmp(argv[nArg], "--")==0 ) nArg++; + if( nArg+1!=argc && nArg+2!=argc ){ + return Th_WrongNumArgs(interp, REGEXP_WRONGNUMARGS); + } + memset(&urlData, '\0', sizeof(urlData)); + url_parse_local(argv[nArg], 0, &urlData); + if( urlData.isSsh || urlData.isFile ){ + Th_ErrorMessage(interp, "url must be http:// or https://", 0, 0); + return TH_ERROR; + } + zRegexp = db_get("th1-uri-regexp", 0); + if( zRegexp && zRegexp[0] ){ + const char *zErr = re_compile(&pRe, zRegexp, 0); + if( zErr ){ + Th_SetResult(interp, zErr, -1); + return TH_ERROR; + } + } + if( !pRe || !re_match(pRe, (const unsigned char *)urlData.canonical, -1) ){ + Th_SetResult(interp, "url not allowed", -1); + re_free(pRe); + return TH_ERROR; + } + re_free(pRe); + blob_zero(&payload); + if( nArg+2==argc ){ + blob_append(&payload, argv[nArg+1], argl[nArg+1]); + zType = "POST"; + }else{ + zType = "GET"; + } + if( fAsynchronous ){ + const char *zSep, *zParams; + Blob hdr; + zParams = strrchr(argv[nArg], '?'); + if( strlen(urlData.path)>0 && zParams!=argv[nArg] ){ + zSep = ""; + }else{ + zSep = "/"; + } + blob_zero(&hdr); + blob_appendf(&hdr, "%s %s%s%s HTTP/1.0\r\n", + zType, zSep, urlData.path, zParams ? 
zParams : ""); + if( urlData.proxyAuth ){ + blob_appendf(&hdr, "Proxy-Authorization: %s\r\n", urlData.proxyAuth); + } + if( urlData.passwd && urlData.user && urlData.passwd[0]=='#' ){ + char *zCredentials = mprintf("%s:%s", urlData.user, &urlData.passwd[1]); + char *zEncoded = encode64(zCredentials, -1); + blob_appendf(&hdr, "Authorization: Basic %s\r\n", zEncoded); + fossil_free(zEncoded); + fossil_free(zCredentials); + } + blob_appendf(&hdr, "Host: %s\r\n" + "User-Agent: %s\r\n", urlData.hostname, get_user_agent()); + if( zType[0]=='P' ){ + blob_appendf(&hdr, "Content-Type: application/x-www-form-urlencoded\r\n" + "Content-Length: %d\r\n\r\n", blob_size(&payload)); + }else{ + blob_appendf(&hdr, "\r\n"); + } + if( transport_open(&urlData) ){ + Th_ErrorMessage(interp, transport_errmsg(&urlData), 0, 0); + blob_reset(&hdr); + blob_reset(&payload); + return TH_ERROR; + } + transport_send(&urlData, &hdr); + transport_send(&urlData, &payload); + blob_reset(&hdr); + blob_reset(&payload); + transport_close(&urlData); + Th_SetResult(interp, 0, 0); /* NOTE: Asynchronous, no results. */ + return TH_OK; + }else{ + Th_ErrorMessage(interp, + "synchronous requests are not yet implemented", 0, 0); + blob_reset(&payload); + return TH_ERROR; + } +} + +/* +** Attempts to open the configuration ("user") database. Optionally, also +** attempts to try to find the repository and open it. +*/ +void Th_OpenConfig( + int openRepository +){ + if( openRepository && !Th_IsRepositoryOpen() ){ + db_find_and_open_repository(OPEN_ANY_SCHEMA | OPEN_OK_NOT_FOUND, 0); + if( Th_IsRepositoryOpen() ){ + g.th1Flags |= TH_STATE_REPOSITORY; + }else{ + g.th1Flags &= ~TH_STATE_REPOSITORY; + } + } + if( !Th_IsConfigOpen() ){ + db_open_config(0); + if( Th_IsConfigOpen() ){ + g.th1Flags |= TH_STATE_CONFIG; + }else{ + g.th1Flags &= ~TH_STATE_CONFIG; + } + } +} + +/* +** Attempts to close the configuration ("user") database. Optionally, also +** attempts to close the repository. +*/ +void Th_CloseConfig( + int closeRepository +){ + if( g.th1Flags & TH_STATE_CONFIG ){ + db_close_config(); + g.th1Flags &= ~TH_STATE_CONFIG; + } + if( closeRepository && (g.th1Flags & TH_STATE_REPOSITORY) ){ + db_close(1); + g.th1Flags &= ~TH_STATE_REPOSITORY; + } +} /* ** Make sure the interpreter has been initialized. Initialize it if ** it has not been already. ** ** The interpreter is stored in the g.interp global variable. 
*/ -void Th_FossilInit(void){ +void Th_FossilInit(u32 flags){ + int wasInit = 0; + int needConfig = flags & TH_INIT_NEED_CONFIG; + int forceReset = flags & TH_INIT_FORCE_RESET; + int forceTcl = flags & TH_INIT_FORCE_TCL; + int forceSetup = flags & TH_INIT_FORCE_SETUP; + static unsigned int aFlags[] = { 0, 1, WIKI_LINKSONLY }; + static int anonFlag = LOGIN_ANON; + static int zeroInt = 0; static struct _Command { const char *zName; Th_CommandProc xProc; void *pContext; } aCommand[] = { + {"anoncap", hascapCmd, (void*)&anonFlag}, {"anycap", anycapCmd, 0}, + {"artifact", artifactCmd, 0}, + {"checkout", checkoutCmd, 0}, {"combobox", comboboxCmd, 0}, - {"enable_output", enableOutputCmd, 0}, - {"linecount", linecntCmd, 0}, - {"hascap", hascapCmd, 0}, - {"htmlize", htmlizeCmd, 0}, {"date", dateCmd, 0}, - {"html", putsCmd, 0}, - {"puts", putsCmd, (void*)1}, - {"wiki", wikiCmd, 0}, + {"decorate", wikiCmd, (void*)&aFlags[2]}, + {"dir", dirCmd, 0}, + {"enable_output", enableOutputCmd, 0}, + {"encode64", encode64Cmd, 0}, + {"getParameter", getParameterCmd, 0}, + {"glob_match", globMatchCmd, 0}, + {"globalState", globalStateCmd, 0}, + {"httpize", httpizeCmd, 0}, + {"hascap", hascapCmd, (void*)&zeroInt}, + {"hasfeature", hasfeatureCmd, 0}, + {"html", putsCmd, (void*)&aFlags[0]}, + {"htmlize", htmlizeCmd, 0}, + {"http", httpCmd, 0}, + {"insertCsrf", insertCsrfCmd, 0}, + {"linecount", linecntCmd, 0}, + {"markdown", markdownCmd, 0}, + {"puts", putsCmd, (void*)&aFlags[1]}, + {"query", queryCmd, 0}, + {"randhex", randhexCmd, 0}, + {"redirect", redirectCmd, 0}, + {"regexp", regexpCmd, 0}, + {"reinitialize", reinitializeCmd, 0}, + {"render", renderCmd, 0}, + {"repository", repositoryCmd, 0}, + {"searchable", searchableCmd, 0}, + {"setParameter", setParameterCmd, 0}, + {"setting", settingCmd, 0}, + {"styleHeader", styleHeaderCmd, 0}, + {"styleFooter", styleFooterCmd, 0}, + {"tclReady", tclReadyCmd, 0}, + {"trace", traceCmd, 0}, + {"stime", stimeCmd, 0}, + {"utime", utimeCmd, 0}, + {"verifyCsrf", verifyCsrfCmd, 0}, + {"wiki", wikiCmd, (void*)&aFlags[0]}, + {0, 0, 0} }; - if( g.interp==0 ){ + if( g.thTrace ){ + Th_Trace("th1-init 0x%x => 0x%x<br />\n", g.th1Flags, flags); + } + if( needConfig ){ + /* + ** This function uses several settings which may be defined in the + ** repository and/or the global configuration. Since the caller + ** passed a non-zero value for the needConfig parameter, make sure + ** the necessary database connections are open prior to continuing. + */ + Th_OpenConfig(1); + } + if( forceReset || forceTcl || g.interp==0 ){ + int created = 0; int i; - g.interp = Th_CreateInterp(&vtab); - th_register_language(g.interp); /* Basic scripting commands. */ + if( g.interp==0 ){ + g.interp = Th_CreateInterp(&vtab); + created = 1; + } + if( forceReset || created ){ + th_register_language(g.interp); /* Basic scripting commands. */ + } +#ifdef FOSSIL_ENABLE_TCL + if( forceTcl || fossil_getenv("TH1_ENABLE_TCL")!=0 || + db_get_boolean("tcl", 0) ){ + if( !g.tcl.setup ){ + g.tcl.setup = db_get("tcl-setup", 0); /* Grab Tcl setup script. */ + } + th_register_tcl(g.interp, &g.tcl); /* Tcl integration commands. */ + } +#endif for(i=0; i<sizeof(aCommand)/sizeof(aCommand[0]); i++){ + if ( !aCommand[i].zName || !aCommand[i].xProc ) continue; Th_CreateCommand(g.interp, aCommand[i].zName, aCommand[i].xProc, aCommand[i].pContext, 0); } + }else{ + wasInit = 1; + } + if( forceSetup || !wasInit ){ + int rc = TH_OK; + if( !g.th1Setup ){ + g.th1Setup = db_get("th1-setup", 0); /* Grab TH1 setup script. 
*/ + } + if( g.th1Setup ){ + rc = Th_Eval(g.interp, 0, g.th1Setup, -1); + if( rc==TH_ERROR ){ + int nResult = 0; + char *zResult = (char*)Th_GetResult(g.interp, &nResult); + sendError(zResult, nResult, 0); + } + } + if( g.thTrace ){ + Th_Trace("th1-setup {%h} => %h<br />\n", g.th1Setup, + Th_ReturnCodeName(rc, 0)); + } } + g.th1Flags &= ~TH_INIT_MASK; + g.th1Flags |= (flags & TH_INIT_MASK); } /* ** Store a string value in a variable in the interpreter. */ void Th_Store(const char *zName, const char *zValue){ - Th_FossilInit(); + Th_FossilInit(TH_INIT_DEFAULT); if( zValue ){ if( g.thTrace ){ Th_Trace("set %h {%h}<br />\n", zName, zValue); } Th_SetVar(g.interp, zName, -1, zValue, strlen(zValue)); } } + +/* +** Appends an element to a TH1 list value. This function is called by the +** transfer subsystem; therefore, it must be very careful to avoid doing +** any unnecessary work. To that end, the TH1 subsystem will not be called +** or initialized if the list pointer is zero (i.e. which will be the case +** when TH1 transfer hooks are disabled). +*/ +void Th_AppendToList( + char **pzList, + int *pnList, + const char *zElem, + int nElem +){ + if( pzList && zElem ){ + Th_FossilInit(TH_INIT_DEFAULT); + Th_ListAppend(g.interp, pzList, pnList, zElem, nElem); + } +} + +/* +** Stores a list value in the specified TH1 variable using the specified +** array of strings as the source of the element values. +*/ +void Th_StoreList( + const char *zName, + char **pzList, + int nList +){ + Th_FossilInit(TH_INIT_DEFAULT); + if( pzList ){ + char *zValue = 0; + int nValue = 0; + int i; + for(i=0; i<nList; i++){ + Th_ListAppend(g.interp, &zValue, &nValue, pzList[i], -1); + } + if( g.thTrace ){ + Th_Trace("set %h {%h}<br />\n", zName, zValue); + } + Th_SetVar(g.interp, zName, -1, zValue, nValue); + Th_Free(g.interp, zValue); + } +} + +/* +** Store an integer value in a variable in the interpreter. +*/ +void Th_StoreInt(const char *zName, int iValue){ + Blob value; + char *zValue; + Th_FossilInit(TH_INIT_DEFAULT); + blob_zero(&value); + blob_appendf(&value, "%d", iValue); + zValue = blob_str(&value); + if( g.thTrace ){ + Th_Trace("set %h {%h}<br />\n", zName, zValue); + } + Th_SetVar(g.interp, zName, -1, zValue, strlen(zValue)); + blob_reset(&value); +} /* ** Unset a variable. */ void Th_Unstore(const char *zName){ @@ -389,11 +2005,11 @@ ** Retrieve a string value from the interpreter. If no such ** variable exists, return NULL. */ char *Th_Fetch(const char *zName, int *pSize){ int rc; - Th_FossilInit(); + Th_FossilInit(TH_INIT_DEFAULT); rc = Th_GetVar(g.interp, (char*)zName, -1); if( rc==TH_OK ){ return (char*)Th_GetResult(g.interp, pSize); }else{ return 0; @@ -434,33 +2050,242 @@ int inBracket = 0; if( z[0]=='<' ){ inBracket = 1; z++; } - if( z[0]==':' && z[1]==':' && isalpha(z[2]) ){ + if( z[0]==':' && z[1]==':' && fossil_isalpha(z[2]) ){ z += 3; i += 3; - }else if( isalpha(z[0]) ){ + }else if( fossil_isalpha(z[0]) ){ z ++; i += 1; }else{ return 0; } - while( isalnum(z[0]) || z[0]=='_' ){ + while( fossil_isalnum(z[0]) || z[0]=='_' ){ z++; i++; } if( inBracket ){ if( z[0]!='>' ) return 0; i += 2; } return i; } + +#ifdef FOSSIL_ENABLE_TH1_HOOKS +/* +** This function determines if TH1 hooks are enabled for the repository. It +** may be necessary to open the repository and/or the configuration ("user") +** database from within this function. Before this function returns, any +** database opened will be closed again. 
This is very important because some +** commands do not expect the repository and/or the configuration ("user") +** database to be open prior to their own code doing so. +*/ +int Th_AreHooksEnabled(void){ + int rc; + if( fossil_getenv("TH1_ENABLE_HOOKS")!=0 ){ + return 1; + } + Th_OpenConfig(1); + rc = db_get_boolean("th1-hooks", 0); + Th_CloseConfig(1); + return rc; +} + +/* +** This function is called by Fossil just prior to dispatching a command. +** Returning a value other than TH_OK from this function (i.e. via an +** evaluated script raising an error or calling [break]/[continue]) will +** cause the actual command execution to be skipped. +*/ +int Th_CommandHook( + const char *zName, + char cmdFlags +){ + int rc = TH_OK; + if( !Th_AreHooksEnabled() ) return rc; + Th_FossilInit(TH_INIT_HOOK); + Th_Store("cmd_name", zName); + Th_StoreList("cmd_args", g.argv, g.argc); + Th_StoreInt("cmd_flags", cmdFlags); + rc = Th_Eval(g.interp, 0, "command_hook", -1); + if( rc==TH_ERROR ){ + int nResult = 0; + char *zResult = (char*)Th_GetResult(g.interp, &nResult); + /* + ** Make sure that the TH1 script error was not caused by a "missing" + ** command hook handler as that is not actually an error condition. + */ + if( memcmp(zResult, NO_COMMAND_HOOK_ERROR, nResult)!=0 ){ + sendError(zResult, nResult, 0); + }else{ + /* + ** There is no command hook handler "installed". This situation + ** is NOT actually an error. + */ + rc = TH_OK; + } + } + /* + ** If the script returned TH_ERROR (e.g. the "command_hook" TH1 command does + ** not exist because commands are not being hooked), return TH_OK because we + ** do not want to skip executing essential commands unless the called command + ** (i.e. "command_hook") explicitly forbids this by successfully returning + ** TH_BREAK or TH_CONTINUE. + */ + if( g.thTrace ){ + Th_Trace("[command_hook {%h}] => %h<br />\n", zName, + Th_ReturnCodeName(rc, 0)); + } + /* + ** Does our call to Th_FossilInit() result in opening a database? If so, + ** clean it up now. This is very important because some commands do not + ** expect the repository and/or the configuration ("user") database to be + ** open prior to their own code doing so. + */ + if( TH_INIT_HOOK & TH_INIT_NEED_CONFIG ) Th_CloseConfig(1); + return rc; +} + +/* +** This function is called by Fossil just after dispatching a command. +** Returning a value other than TH_OK from this function (i.e. via an +** evaluated script raising an error or calling [break]/[continue]) may +** cause an error message to be displayed to the local interactive user. +** Currently, TH1 error messages generated by this function are ignored. +*/ +int Th_CommandNotify( + const char *zName, + char cmdFlags +){ + int rc = TH_OK; + if( !Th_AreHooksEnabled() ) return rc; + Th_FossilInit(TH_INIT_HOOK); + Th_Store("cmd_name", zName); + Th_StoreList("cmd_args", g.argv, g.argc); + Th_StoreInt("cmd_flags", cmdFlags); + rc = Th_Eval(g.interp, 0, "command_notify", -1); + if( g.thTrace ){ + Th_Trace("[command_notify {%h}] => %h<br />\n", zName, + Th_ReturnCodeName(rc, 0)); + } + /* + ** Does our call to Th_FossilInit() result in opening a database? If so, + ** clean it up now. This is very important because some commands do not + ** expect the repository and/or the configuration ("user") database to be + ** open prior to their own code doing so. + */ + if( TH_INIT_HOOK & TH_INIT_NEED_CONFIG ) Th_CloseConfig(1); + return rc; +} + +/* +** This function is called by Fossil just prior to processing a web page. 
+** Returning a value other than TH_OK from this function (i.e. via an +** evaluated script raising an error or calling [break]/[continue]) will +** cause the actual web page processing to be skipped. +*/ +int Th_WebpageHook( + const char *zName, + char cmdFlags +){ + int rc = TH_OK; + if( !Th_AreHooksEnabled() ) return rc; + Th_FossilInit(TH_INIT_HOOK); + Th_Store("web_name", zName); + Th_StoreList("web_args", g.argv, g.argc); + Th_StoreInt("web_flags", cmdFlags); + rc = Th_Eval(g.interp, 0, "webpage_hook", -1); + if( rc==TH_ERROR ){ + int nResult = 0; + char *zResult = (char*)Th_GetResult(g.interp, &nResult); + /* + ** Make sure that the TH1 script error was not caused by a "missing" + ** webpage hook handler as that is not actually an error condition. + */ + if( memcmp(zResult, NO_WEBPAGE_HOOK_ERROR, nResult)!=0 ){ + sendError(zResult, nResult, 1); + }else{ + /* + ** There is no webpage hook handler "installed". This situation + ** is NOT actually an error. + */ + rc = TH_OK; + } + } + /* + ** If the script returned TH_ERROR (e.g. the "webpage_hook" TH1 command does + ** not exist because commands are not being hooked), return TH_OK because we + ** do not want to skip processing essential web pages unless the called + ** command (i.e. "webpage_hook") explicitly forbids this by successfully + ** returning TH_BREAK or TH_CONTINUE. + */ + if( g.thTrace ){ + Th_Trace("[webpage_hook {%h}] => %h<br />\n", zName, + Th_ReturnCodeName(rc, 0)); + } + /* + ** Does our call to Th_FossilInit() result in opening a database? If so, + ** clean it up now. This is very important because some commands do not + ** expect the repository and/or the configuration ("user") database to be + ** open prior to their own code doing so. + */ + if( TH_INIT_HOOK & TH_INIT_NEED_CONFIG ) Th_CloseConfig(1); + return rc; +} + +/* +** This function is called by Fossil just after processing a web page. +** Returning a value other than TH_OK from this function (i.e. via an +** evaluated script raising an error or calling [break]/[continue]) may +** cause an error message to be displayed to the remote user. +** Currently, TH1 error messages generated by this function are ignored. +*/ +int Th_WebpageNotify( + const char *zName, + char cmdFlags +){ + int rc = TH_OK; + if( !Th_AreHooksEnabled() ) return rc; + Th_FossilInit(TH_INIT_HOOK); + Th_Store("web_name", zName); + Th_StoreList("web_args", g.argv, g.argc); + Th_StoreInt("web_flags", cmdFlags); + rc = Th_Eval(g.interp, 0, "webpage_notify", -1); + if( g.thTrace ){ + Th_Trace("[webpage_notify {%h}] => %h<br />\n", zName, + Th_ReturnCodeName(rc, 0)); + } + /* + ** Does our call to Th_FossilInit() result in opening a database? If so, + ** clean it up now. This is very important because some commands do not + ** expect the repository and/or the configuration ("user") database to be + ** open prior to their own code doing so. + */ + if( TH_INIT_HOOK & TH_INIT_NEED_CONFIG ) Th_CloseConfig(1); + return rc; +} +#endif + + +#ifdef FOSSIL_ENABLE_TH1_DOCS +/* +** This function determines if TH1 docs are enabled for the repository. +*/ +int Th_AreDocsEnabled(void){ + if( fossil_getenv("TH1_ENABLE_DOCS")!=0 ){ + return 1; + } + return db_get_boolean("th1-docs", 0); +} +#endif + /* ** The z[] input contains text mixed with TH1 scripts. -** The TH1 scripts are contained within <th1>...</th1>. +** The TH1 scripts are contained within <th1>...</th1>. ** TH1 variables are $aaa or $<aaa>. The first form of ** variable is literal. The second is run through htmlize ** before being inserted. 
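+**
+** A hypothetical template fragment illustrating both forms (the "name"
+** query parameter is illustrative):
+**
+**   <th1> set fullname [getParameter name Anonymous] </th1>
+**   Hello, $<fullname>.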
** ** This routine processes the template and writes the results @@ -469,34 +2294,39 @@ int Th_Render(const char *z){ int i = 0; int n; int rc = TH_OK; char *zResult; - Th_FossilInit(); + Th_FossilInit(TH_INIT_DEFAULT); while( z[i] ){ if( z[i]=='$' && (n = validVarName(&z[i+1]))>0 ){ const char *zVar; int nVar; + int encode = 1; sendText(z, i, 0); if( z[i+1]=='<' ){ - /* Variables of the form $<aaa> */ + /* Variables of the form $<aaa> are html escaped */ zVar = &z[i+2]; nVar = n-2; }else{ - /* Variables of the form $aaa */ + /* Variables of the form $aaa are output raw */ zVar = &z[i+1]; nVar = n; + encode = 0; } rc = Th_GetVar(g.interp, (char*)zVar, nVar); z += i+1+n; i = 0; zResult = (char*)Th_GetResult(g.interp, &n); - sendText((char*)zResult, n, n>nVar); + sendText((char*)zResult, n, encode); }else if( z[i]=='<' && isBeginScriptTag(&z[i]) ){ sendText(z, i, 0); z += i+5; for(i=0; z[i] && (z[i]!='<' || !isEndScriptTag(&z[i])); i++){} + if( g.thTrace ){ + Th_Trace("eval {<pre>%#h</pre>}<br>", i, z); + } rc = Th_Eval(g.interp, 0, (const char*)z, i); if( rc!=TH_OK ) break; z += i; if( z[0] ){ z += 6; } i = 0; @@ -503,27 +2333,210 @@ }else{ i++; } } if( rc==TH_ERROR ){ - sendText("<hr><p><font color=\"red\"><b>ERROR: ", -1, 0); zResult = (char*)Th_GetResult(g.interp, &n); - sendText((char*)zResult, n, 1); - sendText("</b></font></p>", -1, 0); + sendError(zResult, n, 1); }else{ sendText(z, i, 0); } return rc; } /* ** COMMAND: test-th-render +** +** Usage: %fossil test-th-render FILE +** +** Read the content of the file named "FILE" as if it were a header or +** footer or ticket rendering script, evaluate it, and show the results +** on standard output. +** +** Options: +** +** --cgi Include a CGI response header in the output +** --http Include an HTTP response header in the output +** --open-config Open the configuration database +** --th-trace Trace TH1 execution (for debugging purposes) */ void test_th_render(void){ + int forceCgi, fullHttpReply; Blob in; + Th_InitTraceLog(); + forceCgi = find_option("cgi", 0, 0)!=0; + fullHttpReply = find_option("http", 0, 0)!=0; + if( fullHttpReply ) forceCgi = 1; + if( forceCgi ) Th_ForceCgi(fullHttpReply); + if( find_option("open-config", 0, 0)!=0 ){ + Th_OpenConfig(1); + } + verify_all_options(); if( g.argc<3 ){ usage("FILE"); } blob_zero(&in); blob_read_from_file(&in, g.argv[2]); Th_Render(blob_str(&in)); + Th_PrintTraceLog(); + if( forceCgi ) cgi_reply(); +} + +/* +** COMMAND: test-th-eval +** +** Usage: %fossil test-th-eval SCRIPT +** +** Evaluate SCRIPT as if it were a header or footer or ticket rendering +** script and show the results on standard output. +** +** Options: +** +** --cgi Include a CGI response header in the output +** --http Include an HTTP response header in the output +** --open-config Open the configuration database +** --th-trace Trace TH1 execution (for debugging purposes) +*/ +void test_th_eval(void){ + int rc; + const char *zRc; + int forceCgi, fullHttpReply; + Th_InitTraceLog(); + forceCgi = find_option("cgi", 0, 0)!=0; + fullHttpReply = find_option("http", 0, 0)!=0; + if( fullHttpReply ) forceCgi = 1; + if( forceCgi ) Th_ForceCgi(fullHttpReply); + if( find_option("open-config", 0, 0)!=0 ){ + Th_OpenConfig(1); + } + verify_all_options(); + if( g.argc!=3 ){ + usage("script"); + } + Th_FossilInit(TH_INIT_DEFAULT); + rc = Th_Eval(g.interp, 0, g.argv[2], -1); + zRc = Th_ReturnCodeName(rc, 1); + fossil_print("%s%s%s\n", zRc, zRc ? 
": " : "", Th_GetResult(g.interp, 0)); + Th_PrintTraceLog(); + if( forceCgi ) cgi_reply(); +} + +/* +** COMMAND: test-th-source +** +** Usage: %fossil test-th-source FILE +** +** Evaluate the contents of the file named "FILE" as if it were a header +** or footer or ticket rendering script and show the results on standard +** output. +** +** Options: +** +** --cgi Include a CGI response header in the output +** --http Include an HTTP response header in the output +** --open-config Open the configuration database +** --th-trace Trace TH1 execution (for debugging purposes) +*/ +void test_th_source(void){ + int rc; + const char *zRc; + int forceCgi, fullHttpReply; + Blob in; + Th_InitTraceLog(); + forceCgi = find_option("cgi", 0, 0)!=0; + fullHttpReply = find_option("http", 0, 0)!=0; + if( fullHttpReply ) forceCgi = 1; + if( forceCgi ) Th_ForceCgi(fullHttpReply); + if( find_option("open-config", 0, 0)!=0 ){ + Th_OpenConfig(1); + } + verify_all_options(); + if( g.argc!=3 ){ + usage("file"); + } + blob_zero(&in); + blob_read_from_file(&in, g.argv[2]); + Th_FossilInit(TH_INIT_DEFAULT); + rc = Th_Eval(g.interp, 0, blob_str(&in), -1); + zRc = Th_ReturnCodeName(rc, 1); + fossil_print("%s%s%s\n", zRc, zRc ? ": " : "", Th_GetResult(g.interp, 0)); + Th_PrintTraceLog(); + if( forceCgi ) cgi_reply(); +} + +#ifdef FOSSIL_ENABLE_TH1_HOOKS +/* +** COMMAND: test-th-hook +** +** Usage: %fossil test-th-hook TYPE NAME FLAGS +** +** Evaluates the TH1 script configured for the pre-operation (i.e. a command +** or web page) "hook" or post-operation "notification". The results of the +** script evaluation, if any, will be printed to the standard output channel. +** The NAME argument must be the name of a command or web page; however, it +** does not necessarily have to be a command or web page that is normally +** recognized by Fossil. The FLAGS argument will be used to set the value +** of the "cmd_flags" and/or "web_flags" TH1 variables, if applicable. The +** TYPE argument must be one of the following: +** +** cmdhook Executes the TH1 procedure [command_hook], after +** setting the TH1 variables "cmd_name", "cmd_args", +** and "cmd_flags" to appropriate values. +** +** cmdnotify Executes the TH1 procedure [command_notify], after +** setting the TH1 variables "cmd_name", "cmd_args", +** and "cmd_flags" to appropriate values. +** +** webhook Executes the TH1 procedure [webpage_hook], after +** setting the TH1 variables "web_name", "web_args", +** and "web_flags" to appropriate values. +** +** webnotify Executes the TH1 procedure [webpage_notify], after +** setting the TH1 variables "web_name", "web_args", +** and "web_flags" to appropriate values. 
+** +** Options: +** +** --cgi Include a CGI response header in the output +** --http Include an HTTP response header in the output +** --th-trace Trace TH1 execution (for debugging purposes) +*/ +void test_th_hook(void){ + int rc = TH_OK; + int nResult = 0; + char *zResult = 0; + int forceCgi, fullHttpReply; + Th_InitTraceLog(); + forceCgi = find_option("cgi", 0, 0)!=0; + fullHttpReply = find_option("http", 0, 0)!=0; + if( fullHttpReply ) forceCgi = 1; + if( forceCgi ) Th_ForceCgi(fullHttpReply); + verify_all_options(); + if( g.argc<5 ){ + usage("TYPE NAME FLAGS"); + } + if( fossil_stricmp(g.argv[2], "cmdhook")==0 ){ + rc = Th_CommandHook(g.argv[3], (char)atoi(g.argv[4])); + }else if( fossil_stricmp(g.argv[2], "cmdnotify")==0 ){ + rc = Th_CommandNotify(g.argv[3], (char)atoi(g.argv[4])); + }else if( fossil_stricmp(g.argv[2], "webhook")==0 ){ + rc = Th_WebpageHook(g.argv[3], (char)atoi(g.argv[4])); + }else if( fossil_stricmp(g.argv[2], "webnotify")==0 ){ + rc = Th_WebpageNotify(g.argv[3], (char)atoi(g.argv[4])); + }else{ + fossil_fatal("Unknown TH1 hook %s\n", g.argv[2]); + } + if( g.interp ){ + zResult = (char*)Th_GetResult(g.interp, &nResult); + } + sendText("RESULT (", -1, 0); + sendText(Th_ReturnCodeName(rc, 0), -1, 0); + sendText(")", -1, 0); + if( zResult && nResult>0 ){ + sendText(": ", -1, 0); + sendText(zResult, nResult, 0); + } + sendText("\n", -1, 0); + Th_PrintTraceLog(); + if( forceCgi ) cgi_reply(); } +#endif ADDED src/th_tcl.c Index: src/th_tcl.c ================================================================== --- src/th_tcl.c +++ src/th_tcl.c @@ -0,0 +1,1277 @@ +/* +** Copyright (c) 2011 D. Richard Hipp +** Copyright (c) 2011 Joe Mistachkin +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code used to bridge the TH1 and Tcl scripting languages. +*/ +#include "config.h" + +#ifdef FOSSIL_ENABLE_TCL + +#include "sqlite3.h" +#include "th.h" +#include "tcl.h" + +/* +** This macro is used to verify that the header version of Tcl meets some +** minimum requirement. +*/ +#define MINIMUM_TCL_VERSION(major, minor) \ + ((TCL_MAJOR_VERSION > (major)) || \ + ((TCL_MAJOR_VERSION == (major)) && (TCL_MINOR_VERSION >= (minor)))) + +/* +** These macros are designed to reduce the redundant code required to marshal +** arguments from TH1 to Tcl. +*/ +#define USE_ARGV_TO_OBJV() \ + int objc; \ + Tcl_Obj **objv; \ + int obji; + +#define COPY_ARGV_TO_OBJV() \ + objc = argc-1; \ + objv = (Tcl_Obj **)ckalloc((unsigned)(objc * sizeof(Tcl_Obj *))); \ + for(obji=1; obji<argc; obji++){ \ + objv[obji-1] = Tcl_NewStringObj(argv[obji], argl[obji]); \ + Tcl_IncrRefCount(objv[obji-1]); \ + } + +#define FREE_ARGV_TO_OBJV() \ + for(obji=1; obji<argc; obji++){ \ + Tcl_DecrRefCount(objv[obji-1]); \ + objv[obji-1] = 0; \ + } \ + ckfree((char *)objv); \ + objv = 0; + +/* +** Fetch the Tcl interpreter from the specified void pointer, cast to a Tcl +** context. 
+*/ +#define GET_CTX_TCL_INTERP(ctx) \ + ((struct TclContext *)(ctx))->interp + +/* +** Fetch the (logically boolean) value from the specified void pointer that +** indicates whether or not we can/should use direct objProc calls. +*/ +#define GET_CTX_TCL_USEOBJPROC(ctx) \ + ((struct TclContext *)(ctx))->useObjProc + +/* +** This is the name of an environment variable that may refer to a Tcl library +** directory or file name. If this environment variable is set [to anything], +** its value will be used when searching for a Tcl library to load. +*/ +#ifndef TCL_PATH_ENV_VAR_NAME +# define TCL_PATH_ENV_VAR_NAME "FOSSIL_TCL_PATH" +#endif + +/* +** Define the Tcl shared library name, some exported function names, and some +** cross-platform macros for use with the Tcl stubs mechanism, when enabled. +*/ +#if defined(USE_TCL_STUBS) +# if defined(_WIN32) +# if !defined(WIN32_LEAN_AND_MEAN) +# define WIN32_LEAN_AND_MEAN +# endif +# if !defined(_WIN32_WINNT) || (_WIN32_WINNT < 0x0502) +# undef _WIN32_WINNT +# define _WIN32_WINNT 0x0502 /* SetDllDirectory, Windows XP SP2 */ +# endif +# include <windows.h> +# ifndef TCL_DIRECTORY_SEP +# define TCL_DIRECTORY_SEP '\\' +# endif +# ifndef TCL_LIBRARY_NAME +# define TCL_LIBRARY_NAME "tcl86.dll\0" +# endif +# ifndef TCL_MINOR_OFFSET +# define TCL_MINOR_OFFSET (4) +# endif +# ifndef dlopen +# define dlopen(a,b) (void *)LoadLibrary((a)) +# endif +# ifndef dlsym +# define dlsym(a,b) GetProcAddress((HANDLE)(a),(b)) +# endif +# ifndef dlclose +# define dlclose(a) FreeLibrary((HANDLE)(a)) +# endif +# else +# include <dlfcn.h> +# ifndef TCL_DIRECTORY_SEP +# define TCL_DIRECTORY_SEP '/' +# endif +# if defined(__CYGWIN__) +# ifndef TCL_LIBRARY_NAME +# define TCL_LIBRARY_NAME "libtcl8.6.dll\0" +# endif +# ifndef TCL_MINOR_OFFSET +# define TCL_MINOR_OFFSET (8) +# endif +# elif defined(__APPLE__) +# ifndef TCL_LIBRARY_NAME +# define TCL_LIBRARY_NAME "libtcl8.6.dylib\0" +# endif +# ifndef TCL_MINOR_OFFSET +# define TCL_MINOR_OFFSET (8) +# endif +# else +# ifndef TCL_LIBRARY_NAME +# define TCL_LIBRARY_NAME "libtcl8.6.so\0" +# endif +# ifndef TCL_MINOR_OFFSET +# define TCL_MINOR_OFFSET (8) +# endif +# endif /* defined(__CYGWIN__) */ +# endif /* defined(_WIN32) */ +# ifndef TCL_FINDEXECUTABLE_NAME +# define TCL_FINDEXECUTABLE_NAME "_Tcl_FindExecutable\0" +# endif +# ifndef TCL_CREATEINTERP_NAME +# define TCL_CREATEINTERP_NAME "_Tcl_CreateInterp\0" +# endif +# ifndef TCL_DELETEINTERP_NAME +# define TCL_DELETEINTERP_NAME "_Tcl_DeleteInterp\0" +# endif +# ifndef TCL_FINALIZE_NAME +# define TCL_FINALIZE_NAME "_Tcl_Finalize\0" +# endif +#endif /* defined(USE_TCL_STUBS) */ + +/* +** If this constant is defined to non-zero, the Win32 SetDllDirectory function +** will be used during the Tcl library loading process if the path environment +** variable for Tcl was set. +*/ +#ifndef TCL_USE_SET_DLL_DIRECTORY +# if defined(_WIN32) && defined(_WIN32_WINNT) && (_WIN32_WINNT >= 0x0502) +# define TCL_USE_SET_DLL_DIRECTORY (1) +# else +# define TCL_USE_SET_DLL_DIRECTORY (0) +# endif +#endif /* TCL_USE_SET_DLL_DIRECTORY */ + +/* +** The function types for Tcl_FindExecutable and Tcl_CreateInterp are needed +** when the Tcl library is being loaded dynamically by a stubs-enabled +** application (i.e. the inverse of using a stubs-enabled package). These are +** the only Tcl API functions that MUST be called prior to being able to call +** Tcl_InitStubs (i.e. because it requires a Tcl interpreter). 
For complete +** cleanup if the Tcl stubs initialization fails somehow, the Tcl_DeleteInterp +** and Tcl_Finalize function types are also required. +*/ +typedef void (tcl_FindExecutableProc) (const char *); +typedef Tcl_Interp *(tcl_CreateInterpProc) (void); +typedef void (tcl_DeleteInterpProc) (Tcl_Interp *); +typedef void (tcl_FinalizeProc) (void); + +/* +** The function types for the "hook" functions to be called before and after a +** TH1 command makes a call to evaluate a Tcl script. If the "pre" function +** returns anything but TH_OK, then evaluation of the Tcl script is skipped and +** that value is used as the return code. If the "post" function returns +** anything other than its rc argument, that will become the new return code +** for the command. +*/ +typedef int (tcl_NotifyProc) ( + void *pContext, /* The context for this notification. */ + Th_Interp *interp, /* The TH1 interpreter being used. */ + void *ctx, /* The original TH1 command context. */ + int argc, /* Number of arguments for the TH1 command. */ + const char **argv, /* Array of arguments for the TH1 command. */ + int *argl, /* Array of lengths for the TH1 command arguments. */ + int rc /* Recommended notification return value. */ +); + +/* +** Are we using our own private implementation of the Tcl stubs mechanism? If +** this is enabled, it prevents the user from having to link against the Tcl +** stubs library for the target platform, which may not be readily available. +*/ +#if defined(FOSSIL_ENABLE_TCL_PRIVATE_STUBS) +/* +** HACK: Using some preprocessor magic and a private static variable, redirect +** the Tcl API calls [found within this file] to the function pointers +** that will be contained in our private Tcl stubs table. This takes +** advantage of the fact that the Tcl headers always define the Tcl API +** functions in terms of the "tclStubsPtr" variable when the define +** USE_TCL_STUBS is present during compilation. +*/ +#define tclStubsPtr privateTclStubsPtr +static const TclStubs *tclStubsPtr = NULL; + +/* +** Create a Tcl interpreter structure that mirrors just enough fields to get +** it up and running successfully with our private implementation of the Tcl +** stubs mechanism. +*/ +struct PrivateTclInterp { + char *result; + Tcl_FreeProc *freeProc; + int errorLine; + const struct TclStubs *stubTable; +}; + +/* +** Fossil can now be compiled without linking to the actual Tcl stubs library. +** In that case, this function will be used to perform those steps that would +** normally be performed within the Tcl stubs library. +*/ +static int initTclStubs( + Th_Interp *interp, + Tcl_Interp *tclInterp +){ + tclStubsPtr = ((struct PrivateTclInterp *)tclInterp)->stubTable; + if( !tclStubsPtr || (tclStubsPtr->magic!=TCL_STUB_MAGIC) ){ + Th_ErrorMessage(interp, + "could not initialize Tcl stubs: incompatible mechanism", + (const char *)"", 0); + return TH_ERROR; + } + /* NOTE: At this point, the Tcl API functions should be available. */ + if( Tcl_PkgRequireEx(tclInterp, "Tcl", "8.4", 0, (void *)&tclStubsPtr)==0 ){ + Th_ErrorMessage(interp, + "could not initialize Tcl stubs: incompatible version", + (const char *)"", 0); + return TH_ERROR; + } + return TH_OK; +} +#endif /* defined(FOSSIL_ENABLE_TCL_PRIVATE_STUBS) */ + +/* +** Is the loaded version of Tcl one where querying and/or calling the objProc +** for a command does not work for some reason? The following special cases +** are currently handled by this function: +** +** 1. 
All versions of Tcl 8.4 have a bug that causes a crash when calling into +** the Tcl_GetCommandFromObj function via stubs (i.e. the stubs table entry +** is NULL). +** +** 2. Various beta builds of Tcl 8.6, namely 1 and 2, have an NRE-specific bug +** in Tcl_EvalObjCmd (SF bug #3399564) that cause a panic when calling into +** the objProc directly. +** +** For both of the above cases, the Tcl_EvalObjv function must be used instead +** of the more direct route of querying and calling the objProc directly. +*/ +static int canUseObjProc(){ + int major = -1, minor = -1, patchLevel = -1, type = -1; + + Tcl_GetVersion(&major, &minor, &patchLevel, &type); + if( major<0 || minor<0 || patchLevel<0 || type<0 ){ + return 0; /* NOTE: Invalid version info, assume bad. */ + } + if( major==8 && minor==4 ){ + return 0; /* NOTE: Disabled on Tcl 8.4, missing public API. */ + } + if( major==8 && minor==6 && type==TCL_BETA_RELEASE && patchLevel<3 ){ + return 0; /* NOTE: Disabled on Tcl 8.6b1/b2, SF bug #3399564. */ + } + return 1; /* NOTE: For all other cases, assume good. */ +} + +/* +** Is the loaded version of Tcl one where TIP #285 (asynchronous script +** cancellation) is available? This should return non-zero only for Tcl +** 8.6 and higher. +*/ +static int canUseTip285(){ +#if MINIMUM_TCL_VERSION(8, 6) + int major = -1, minor = -1, patchLevel = -1, type = -1; + + Tcl_GetVersion(&major, &minor, &patchLevel, &type); + if( major<0 || minor<0 || patchLevel<0 || type<0 ){ + return 0; /* NOTE: Invalid version info, assume bad. */ + } + return (major>8 || (major==8 && minor>=6)); +#else + return 0; +#endif +} + +/* +** Creates and initializes a Tcl interpreter for use with the specified TH1 +** interpreter. Stores the created Tcl interpreter in the Tcl context supplied +** by the caller. This must be declared here because quite a few functions in +** this file need to use it before it can be defined. +*/ +static int createTclInterp(Th_Interp *interp, void *pContext); + +/* +** Returns the TH1 return code corresponding to the specified Tcl +** return code. +*/ +static int getTh1ReturnCode( + int rc /* The Tcl return code value to convert. */ +){ + switch( rc ){ + case /*0*/ TCL_OK: return /*0*/ TH_OK; + case /*1*/ TCL_ERROR: return /*1*/ TH_ERROR; + case /*2*/ TCL_RETURN: return /*3*/ TH_RETURN; + case /*3*/ TCL_BREAK: return /*2*/ TH_BREAK; + case /*4*/ TCL_CONTINUE: return /*4*/ TH_CONTINUE; + default /*?*/: return /*?*/ rc; + } +} + +/* +** Returns the Tcl return code corresponding to the specified TH1 +** return code. +*/ +static int getTclReturnCode( + int rc /* The TH1 return code value to convert. */ +){ + switch( rc ){ + case /*0*/ TH_OK: return /*0*/ TCL_OK; + case /*1*/ TH_ERROR: return /*1*/ TCL_ERROR; + case /*2*/ TH_BREAK: return /*3*/ TCL_BREAK; + case /*3*/ TH_RETURN: return /*2*/ TCL_RETURN; + case /*4*/ TH_CONTINUE: return /*4*/ TCL_CONTINUE; + default /*?*/: return /*?*/ rc; + } +} + +/* +** Returns a name for a Tcl return code. +*/ +static const char *getTclReturnCodeName( + int rc, + int nullIfOk +){ + static char zRc[TCL_INTEGER_SPACE + 17]; /* "Tcl return code\0" */ + + switch( rc ){ + case TCL_OK: return nullIfOk ? 0 : "TCL_OK"; + case TCL_ERROR: return "TCL_ERROR"; + case TCL_RETURN: return "TCL_RETURN"; + case TCL_BREAK: return "TCL_BREAK"; + case TCL_CONTINUE: return "TCL_CONTINUE"; + default: { + sqlite3_snprintf(sizeof(zRc), zRc, "Tcl return code %d", rc); + } + } + return zRc; +} + +/* +** Returns the Tcl interpreter result as a string with the associated length. 
+** If the Tcl interpreter or the Tcl result are NULL, the length will be 0. +** If the length pointer is NULL, the length will not be stored. +*/ +static char *getTclResult( + Tcl_Interp *pInterp, + int *pN +){ + Tcl_Obj *resultPtr; + + if( !pInterp ){ /* This should not happen. */ + if( pN ) *pN = 0; + return 0; + } + resultPtr = Tcl_GetObjResult(pInterp); + if( !resultPtr ){ /* This should not happen either? */ + if( pN ) *pN = 0; + return 0; + } + return Tcl_GetStringFromObj(resultPtr, pN); +} + +/* +** Tcl context information used by TH1. This structure definition has been +** copied from and should be kept in sync with the one in "main.c". +*/ +struct TclContext { + int argc; /* Number of original arguments. */ + char **argv; /* Full copy of the original arguments. */ + void *hLibrary; /* The Tcl library module handle. */ + tcl_FindExecutableProc *xFindExecutable; /* Tcl_FindExecutable() pointer. */ + tcl_CreateInterpProc *xCreateInterp; /* Tcl_CreateInterp() pointer. */ + tcl_DeleteInterpProc *xDeleteInterp; /* Tcl_DeleteInterp() pointer. */ + tcl_FinalizeProc *xFinalize; /* Tcl_Finalize() pointer. */ + Tcl_Interp *interp; /* The on-demand created Tcl interpreter. */ + int useObjProc; /* Non-zero if an objProc can be called directly. */ + int useTip285; /* Non-zero if TIP #285 is available. */ + const char *setup; /* The optional Tcl setup script. */ + tcl_NotifyProc *xPreEval; /* Optional, called before Tcl_Eval*(). */ + void *pPreContext; /* Optional, provided to xPreEval(). */ + tcl_NotifyProc *xPostEval; /* Optional, called after Tcl_Eval*(). */ + void *pPostContext; /* Optional, provided to xPostEval(). */ +}; + +/* +** This function calls the configured xPreEval or xPostEval functions, if any. +** May have arbitrary side-effects. This function returns the result of the +** called notification function or the value of the rc argument if there is no +** notification function configured. +*/ +static int notifyPreOrPostEval( + int bIsPost, + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, + int *argl, + int rc +){ + struct TclContext *tclContext = (struct TclContext *)ctx; + tcl_NotifyProc *xNotifyProc; + + if( !tclContext ){ + Th_ErrorMessage(interp, + "invalid Tcl context", (const char *)"", 0); + return TH_ERROR; + } + xNotifyProc = bIsPost ? tclContext->xPostEval : tclContext->xPreEval; + if( xNotifyProc ){ + rc = xNotifyProc(bIsPost ? + tclContext->pPostContext : tclContext->pPreContext, + interp, ctx, argc, argv, argl, rc); + } + return rc; +} + +/* +** TH1 command: tclEval arg ?arg ...? +** +** Evaluates the Tcl script and returns its result verbatim. If a Tcl script +** error is generated, it will be transformed into a TH1 script error. The +** Tcl interpreter will be created automatically if it has not been already. 
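+**
+** A minimal TH1 usage sketch (assumes Fossil was compiled with Tcl
+** support enabled):
+**
+**     tclEval {expr {2 + 2}}
+**
+** which leaves the string "4" in the TH1 result.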
+*/ +static int tclEval_command( + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, + int *argl +){ + Tcl_Interp *tclInterp; + Tcl_Obj *objPtr; + int rc = TH_OK; + int nResult; + const char *zResult; + + if( createTclInterp(interp, ctx)!=TH_OK ){ + return TH_ERROR; + } + if( argc<2 ){ + return Th_WrongNumArgs(interp, "tclEval arg ?arg ...?"); + } + tclInterp = GET_CTX_TCL_INTERP(ctx); + if( !tclInterp || Tcl_InterpDeleted(tclInterp) ){ + Th_ErrorMessage(interp, "invalid Tcl interpreter", (const char *)"", 0); + return TH_ERROR; + } + rc = notifyPreOrPostEval(0, interp, ctx, argc, argv, argl, rc); + if( rc!=TH_OK ){ + return rc; + } + Tcl_Preserve((ClientData)tclInterp); + if( argc==2 ){ + objPtr = Tcl_NewStringObj(argv[1], argl[1]); + Tcl_IncrRefCount(objPtr); + rc = Tcl_EvalObjEx(tclInterp, objPtr, 0); + Tcl_DecrRefCount(objPtr); objPtr = 0; + }else{ + USE_ARGV_TO_OBJV(); + COPY_ARGV_TO_OBJV(); + objPtr = Tcl_ConcatObj(objc, objv); + Tcl_IncrRefCount(objPtr); + rc = Tcl_EvalObjEx(tclInterp, objPtr, 0); + Tcl_DecrRefCount(objPtr); objPtr = 0; + FREE_ARGV_TO_OBJV(); + } + zResult = getTclResult(tclInterp, &nResult); + Th_SetResult(interp, zResult, nResult); + Tcl_Release((ClientData)tclInterp); + rc = notifyPreOrPostEval(1, interp, ctx, argc, argv, argl, + getTh1ReturnCode(rc)); + return rc; +} + +/* +** TH1 command: tclExpr arg ?arg ...? +** +** Evaluates the Tcl expression and returns its result verbatim. If a Tcl +** script error is generated, it will be transformed into a TH1 script error. +** The Tcl interpreter will be created automatically if it has not been +** already. +*/ +static int tclExpr_command( + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, + int *argl +){ + Tcl_Interp *tclInterp; + Tcl_Obj *objPtr; + Tcl_Obj *resultObjPtr; + int rc = TH_OK; + int nResult; + const char *zResult; + + if( createTclInterp(interp, ctx)!=TH_OK ){ + return TH_ERROR; + } + if( argc<2 ){ + return Th_WrongNumArgs(interp, "tclExpr arg ?arg ...?"); + } + tclInterp = GET_CTX_TCL_INTERP(ctx); + if( !tclInterp || Tcl_InterpDeleted(tclInterp) ){ + Th_ErrorMessage(interp, "invalid Tcl interpreter", (const char *)"", 0); + return TH_ERROR; + } + rc = notifyPreOrPostEval(0, interp, ctx, argc, argv, argl, rc); + if( rc!=TH_OK ){ + return rc; + } + Tcl_Preserve((ClientData)tclInterp); + if( argc==2 ){ + objPtr = Tcl_NewStringObj(argv[1], argl[1]); + Tcl_IncrRefCount(objPtr); + rc = Tcl_ExprObj(tclInterp, objPtr, &resultObjPtr); + Tcl_DecrRefCount(objPtr); objPtr = 0; + }else{ + USE_ARGV_TO_OBJV(); + COPY_ARGV_TO_OBJV(); + objPtr = Tcl_ConcatObj(objc, objv); + Tcl_IncrRefCount(objPtr); + rc = Tcl_ExprObj(tclInterp, objPtr, &resultObjPtr); + Tcl_DecrRefCount(objPtr); objPtr = 0; + FREE_ARGV_TO_OBJV(); + } + if( rc==TCL_OK ){ + zResult = Tcl_GetStringFromObj(resultObjPtr, &nResult); + }else{ + zResult = getTclResult(tclInterp, &nResult); + } + Th_SetResult(interp, zResult, nResult); + if( rc==TCL_OK ){ + Tcl_DecrRefCount(resultObjPtr); resultObjPtr = 0; + } + Tcl_Release((ClientData)tclInterp); + rc = notifyPreOrPostEval(1, interp, ctx, argc, argv, argl, + getTh1ReturnCode(rc)); + return rc; +} + +/* +** TH1 command: tclInvoke command ?arg ...? +** +** Invokes the Tcl command using the supplied arguments. No additional +** substitutions are performed on the arguments. The Tcl interpreter +** will be created automatically if it has not been already. 
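+**
+** A minimal TH1 usage sketch (assumes Fossil was compiled with Tcl
+** support enabled):
+**
+**     tclInvoke string length fossil
+**
+** which invokes the Tcl [string] command directly and returns "6".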
+*/ +static int tclInvoke_command( + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, + int *argl +){ + Tcl_Interp *tclInterp; + int rc = TH_OK; + int nResult; + const char *zResult; + USE_ARGV_TO_OBJV(); + + if( createTclInterp(interp, ctx)!=TH_OK ){ + return TH_ERROR; + } + if( argc<2 ){ + return Th_WrongNumArgs(interp, "tclInvoke command ?arg ...?"); + } + tclInterp = GET_CTX_TCL_INTERP(ctx); + if( !tclInterp || Tcl_InterpDeleted(tclInterp) ){ + Th_ErrorMessage(interp, "invalid Tcl interpreter", (const char *)"", 0); + return TH_ERROR; + } + rc = notifyPreOrPostEval(0, interp, ctx, argc, argv, argl, rc); + if( rc!=TH_OK ){ + return rc; + } + Tcl_Preserve((ClientData)tclInterp); +#if !defined(USE_TCL_EVALOBJV) || !USE_TCL_EVALOBJV + if( GET_CTX_TCL_USEOBJPROC(ctx) ){ + Tcl_Command command; + Tcl_CmdInfo cmdInfo; + Tcl_Obj *objPtr = Tcl_NewStringObj(argv[1], argl[1]); + Tcl_IncrRefCount(objPtr); + command = Tcl_GetCommandFromObj(tclInterp, objPtr); + if( !command || Tcl_GetCommandInfoFromToken(command, &cmdInfo)==0 ){ + Th_ErrorMessage(interp, "Tcl command not found:", argv[1], argl[1]); + Tcl_DecrRefCount(objPtr); objPtr = 0; + Tcl_Release((ClientData)tclInterp); + return TH_ERROR; + } + if( !cmdInfo.objProc ){ + Th_ErrorMessage(interp, "cannot invoke Tcl command:", argv[1], argl[1]); + Tcl_DecrRefCount(objPtr); objPtr = 0; + Tcl_Release((ClientData)tclInterp); + return TH_ERROR; + } + Tcl_DecrRefCount(objPtr); objPtr = 0; + COPY_ARGV_TO_OBJV(); + Tcl_ResetResult(tclInterp); + rc = cmdInfo.objProc(cmdInfo.objClientData, tclInterp, objc, objv); + FREE_ARGV_TO_OBJV(); + }else +#endif /* !defined(USE_TCL_EVALOBJV) || !USE_TCL_EVALOBJV */ + { + COPY_ARGV_TO_OBJV(); + rc = Tcl_EvalObjv(tclInterp, objc, objv, 0); + FREE_ARGV_TO_OBJV(); + } + zResult = getTclResult(tclInterp, &nResult); + Th_SetResult(interp, zResult, nResult); + Tcl_Release((ClientData)tclInterp); + rc = notifyPreOrPostEval(1, interp, ctx, argc, argv, argl, + getTh1ReturnCode(rc)); + return rc; +} + +/* +** TH1 command: tclIsSafe +** +** Returns non-zero if the Tcl interpreter is "safe". The Tcl interpreter +** will be created automatically if it has not been already. +*/ +static int tclIsSafe_command( + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, + int *argl +){ + Tcl_Interp *tclInterp; + + if( createTclInterp(interp, ctx)!=TH_OK ){ + return TH_ERROR; + } + if( argc!=1 ){ + return Th_WrongNumArgs(interp, "tclIsSafe"); + } + tclInterp = GET_CTX_TCL_INTERP(ctx); + if( !tclInterp || Tcl_InterpDeleted(tclInterp) ){ + Th_ErrorMessage(interp, "invalid Tcl interpreter", (const char *)"", 0); + return TH_ERROR; + } + Th_SetResultInt(interp, Tcl_IsSafe(tclInterp)); + return TH_OK; +} + +/* +** TH1 command: tclMakeSafe +** +** Forces the Tcl interpreter into "safe" mode by removing all "unsafe" +** commands and variables. This operation cannot be undone. The Tcl +** interpreter will remain "safe" until the process terminates. 
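+**
+** A minimal TH1 usage sketch (illustrative):
+**
+**     tclMakeSafe
+**
+** After this call, [tclIsSafe] reports 1 and unsafe Tcl commands such
+** as [exec] are no longer available in the bridged interpreter.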
+*/ +static int tclMakeSafe_command( + Th_Interp *interp, + void *ctx, + int argc, + const char **argv, + int *argl +){ + static int registerChans = 1; + Tcl_Interp *tclInterp; + int rc = TH_OK; + + if( createTclInterp(interp, ctx)!=TH_OK ){ + return TH_ERROR; + } + if( argc!=1 ){ + return Th_WrongNumArgs(interp, "tclMakeSafe"); + } + tclInterp = GET_CTX_TCL_INTERP(ctx); + if( !tclInterp || Tcl_InterpDeleted(tclInterp) ){ + Th_ErrorMessage(interp, "invalid Tcl interpreter", (const char *)"", 0); + return TH_ERROR; + } + if( Tcl_IsSafe(tclInterp) ){ + Th_ErrorMessage(interp, + "Tcl interpreter is already 'safe'", (const char *)"", 0); + return TH_ERROR; + } + if( registerChans ){ + /* + ** HACK: Prevent the call to Tcl_MakeSafe() from actually closing the + ** standard channels instead of simply unregistering them from + ** the Tcl interpreter. This should only need to be done once + ** per thread (process?). + */ + registerChans = 0; + Tcl_RegisterChannel(NULL, Tcl_GetStdChannel(TCL_STDIN)); + Tcl_RegisterChannel(NULL, Tcl_GetStdChannel(TCL_STDOUT)); + Tcl_RegisterChannel(NULL, Tcl_GetStdChannel(TCL_STDERR)); + } + Tcl_Preserve((ClientData)tclInterp); + if( Tcl_MakeSafe(tclInterp)!=TCL_OK ){ + int nResult; + const char *zResult = getTclResult(tclInterp, &nResult); + Th_ErrorMessage(interp, + "could not make Tcl interpreter 'safe':", zResult, nResult); + rc = TH_ERROR; + }else{ + Th_SetResult(interp, 0, 0); + } + Tcl_Release((ClientData)tclInterp); + return rc; +} + +/* +** Tcl command: th1Eval arg +** +** Evaluates the TH1 script and returns its result verbatim. If a TH1 script +** error is generated, it will be transformed into a Tcl script error. +*/ +static int Th1EvalObjCmd( + ClientData clientData, + Tcl_Interp *interp, + int objc, + Tcl_Obj *const objv[] +){ + Th_Interp *th1Interp; + int nArg; + const char *arg; + int rc; + + if( objc!=2 ){ + Tcl_WrongNumArgs(interp, 1, objv, "arg"); + return TCL_ERROR; + } + th1Interp = (Th_Interp *)clientData; + if( !th1Interp ){ + Tcl_AppendResult(interp, "invalid TH1 interpreter", NULL); + return TCL_ERROR; + } + arg = Tcl_GetStringFromObj(objv[1], &nArg); + rc = Th_Eval(th1Interp, 0, arg, nArg); + arg = Th_GetResult(th1Interp, &nArg); + Tcl_SetObjResult(interp, Tcl_NewStringObj(arg, nArg)); + return getTclReturnCode(rc); +} + +/* +** Tcl command: th1Expr arg +** +** Evaluates the TH1 expression and returns its result verbatim. If a TH1 +** script error is generated, it will be transformed into a Tcl script error. +*/ +static int Th1ExprObjCmd( + ClientData clientData, + Tcl_Interp *interp, + int objc, + Tcl_Obj *const objv[] +){ + Th_Interp *th1Interp; + int nArg; + const char *arg; + int rc; + + if( objc!=2 ){ + Tcl_WrongNumArgs(interp, 1, objv, "arg"); + return TCL_ERROR; + } + th1Interp = (Th_Interp *)clientData; + if( !th1Interp ){ + Tcl_AppendResult(interp, "invalid TH1 interpreter", NULL); + return TCL_ERROR; + } + arg = Tcl_GetStringFromObj(objv[1], &nArg); + rc = Th_Expr(th1Interp, arg, nArg); + arg = Th_GetResult(th1Interp, &nArg); + Tcl_SetObjResult(interp, Tcl_NewStringObj(arg, nArg)); + return getTclReturnCode(rc); +} + +/* +** Array of Tcl integration commands. Used when adding or removing the Tcl +** integration commands from TH1. 
+*/ +static struct _Command { + const char *zName; + Th_CommandProc xProc; + void *pContext; +} aCommand[] = { + {"tclEval", tclEval_command, 0}, + {"tclExpr", tclExpr_command, 0}, + {"tclInvoke", tclInvoke_command, 0}, + {"tclIsSafe", tclIsSafe_command, 0}, + {"tclMakeSafe", tclMakeSafe_command, 0}, + {0, 0, 0} +}; + +/* +** Called if the Tcl interpreter is deleted. Removes the Tcl integration +** commands from the TH1 interpreter. +*/ +static void Th1DeleteProc( + ClientData clientData, + Tcl_Interp *interp +){ + int i; + Th_Interp *th1Interp = (Th_Interp *)clientData; + + if( !th1Interp ) return; + /* Remove the Tcl integration commands. */ + for(i=0; i<(sizeof(aCommand)/sizeof(aCommand[0])); i++){ + Th_RenameCommand(th1Interp, aCommand[i].zName, -1, NULL, 0); + } +} + +/* +** When Tcl stubs support is enabled, attempts to dynamically load the Tcl +** shared library and fetch the function pointers necessary to create an +** interpreter and initialize the stubs mechanism; otherwise, simply setup +** the function pointers provided by the caller with the statically linked +** functions. +*/ +char *fossil_getenv(const char *zName); /* file.h */ +int file_isdir(const char *zPath); /* file.h */ +char *file_dirname(const char *zPath); /* file.h */ +void fossil_free(void *p); /* util.h */ + +static int loadTcl( + Th_Interp *interp, + void **phLibrary, + tcl_FindExecutableProc **pxFindExecutable, + tcl_CreateInterpProc **pxCreateInterp, + tcl_DeleteInterpProc **pxDeleteInterp, + tcl_FinalizeProc **pxFinalize +){ +#if defined(USE_TCL_STUBS) + const char *zEnvPath = fossil_getenv(TCL_PATH_ENV_VAR_NAME); + char aFileName[] = TCL_LIBRARY_NAME; +#endif /* defined(USE_TCL_STUBS) */ + + if( !phLibrary || !pxFindExecutable || !pxCreateInterp || + !pxDeleteInterp || !pxFinalize ){ + Th_ErrorMessage(interp, + "invalid Tcl loader argument(s)", (const char *)"", 0); + return TH_ERROR; + } +#if defined(USE_TCL_STUBS) + do { + char *zFileName; + void *hLibrary; + if( !zEnvPath ){ + zFileName = aFileName; /* NOTE: Assume present in PATH. */ + }else if( file_isdir(zEnvPath)==1 ){ +#if TCL_USE_SET_DLL_DIRECTORY + SetDllDirectory(zEnvPath); /* NOTE: Maybe needed for "zlib1.dll". */ +#endif /* TCL_USE_SET_DLL_DIRECTORY */ + /* NOTE: The environment variable contains a directory name. */ + zFileName = sqlite3_mprintf("%s%c%s%c", zEnvPath, TCL_DIRECTORY_SEP, + aFileName, '\0'); + }else{ +#if TCL_USE_SET_DLL_DIRECTORY + char *zDirName = file_dirname(zEnvPath); + if( zDirName ){ + SetDllDirectory(zDirName); /* NOTE: Maybe needed for "zlib1.dll". */ + } +#endif /* TCL_USE_SET_DLL_DIRECTORY */ + /* NOTE: The environment variable might contain a file name. */ + zFileName = sqlite3_mprintf("%s%c", zEnvPath, '\0'); +#if TCL_USE_SET_DLL_DIRECTORY + if( zDirName ){ + fossil_free(zDirName); zDirName = 0; + } +#endif /* TCL_USE_SET_DLL_DIRECTORY */ + } + if( !zFileName ) break; + hLibrary = dlopen(zFileName, RTLD_NOW | RTLD_GLOBAL); + /* NOTE: If the file name was allocated, free it now. 
*/ + if( zFileName!=aFileName ){ + sqlite3_free(zFileName); zFileName = 0; + } + if( hLibrary ){ + tcl_FindExecutableProc *xFindExecutable; + tcl_CreateInterpProc *xCreateInterp; + tcl_DeleteInterpProc *xDeleteInterp; + tcl_FinalizeProc *xFinalize; + const char *procName = TCL_FINDEXECUTABLE_NAME; + xFindExecutable = (tcl_FindExecutableProc *)dlsym(hLibrary, procName+1); + if( !xFindExecutable ){ + xFindExecutable = (tcl_FindExecutableProc *)dlsym(hLibrary, procName); + } + if( !xFindExecutable ){ + Th_ErrorMessage(interp, + "could not locate Tcl_FindExecutable", (const char *)"", 0); + dlclose(hLibrary); hLibrary = 0; + return TH_ERROR; + } + procName = TCL_CREATEINTERP_NAME; + xCreateInterp = (tcl_CreateInterpProc *)dlsym(hLibrary, procName+1); + if( !xCreateInterp ){ + xCreateInterp = (tcl_CreateInterpProc *)dlsym(hLibrary, procName); + } + if( !xCreateInterp ){ + Th_ErrorMessage(interp, + "could not locate Tcl_CreateInterp", (const char *)"", 0); + dlclose(hLibrary); hLibrary = 0; + return TH_ERROR; + } + procName = TCL_DELETEINTERP_NAME; + xDeleteInterp = (tcl_DeleteInterpProc *)dlsym(hLibrary, procName+1); + if( !xDeleteInterp ){ + xDeleteInterp = (tcl_DeleteInterpProc *)dlsym(hLibrary, procName); + } + if( !xDeleteInterp ){ + Th_ErrorMessage(interp, + "could not locate Tcl_DeleteInterp", (const char *)"", 0); + dlclose(hLibrary); hLibrary = 0; + return TH_ERROR; + } + procName = TCL_FINALIZE_NAME; + xFinalize = (tcl_FinalizeProc *)dlsym(hLibrary, procName+1); + if( !xFinalize ){ + xFinalize = (tcl_FinalizeProc *)dlsym(hLibrary, procName); + } + if( !xFinalize ){ + Th_ErrorMessage(interp, + "could not locate Tcl_Finalize", (const char *)"", 0); + dlclose(hLibrary); hLibrary = 0; + return TH_ERROR; + } + *phLibrary = hLibrary; + *pxFindExecutable = xFindExecutable; + *pxCreateInterp = xCreateInterp; + *pxDeleteInterp = xDeleteInterp; + *pxFinalize = xFinalize; + return TH_OK; + } + } while( --aFileName[TCL_MINOR_OFFSET]>'3' ); /* Tcl 8.4+ */ + aFileName[TCL_MINOR_OFFSET] = 'x'; + Th_ErrorMessage(interp, + "could not load any supported Tcl 8.6, 8.5, or 8.4 shared library \"", + aFileName, -1); + return TH_ERROR; +#else + *phLibrary = 0; + *pxFindExecutable = Tcl_FindExecutable; + *pxCreateInterp = Tcl_CreateInterp; + *pxDeleteInterp = Tcl_DeleteInterp; + *pxFinalize = Tcl_Finalize; + return TH_OK; +#endif /* defined(USE_TCL_STUBS) */ +} + +/* +** Sets the "argv0", "argc", and "argv" script variables in the Tcl interpreter +** based on the supplied command line arguments. 
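+**
+** For example (illustrative), an invocation such as
+** "fossil ui /path/to/repo" would leave the Tcl variables set roughly
+** as follows:
+**
+**     argv0 = "fossil"            (the binary name as invoked)
+**     argc  = 2
+**     argv  = {ui /path/to/repo}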
+*/ +static int setTclArguments( + Tcl_Interp *pInterp, + int argc, + char **argv +){ + Tcl_Obj *objPtr; + Tcl_Obj *resultObjPtr; + Tcl_Obj *listPtr; + int rc = TCL_OK; + + if( argc<=0 || !argv ){ + return TCL_OK; + } + objPtr = Tcl_NewStringObj(argv[0], -1); + Tcl_IncrRefCount(objPtr); + resultObjPtr = Tcl_SetVar2Ex(pInterp, "argv0", NULL, objPtr, + TCL_GLOBAL_ONLY|TCL_LEAVE_ERR_MSG); + Tcl_DecrRefCount(objPtr); objPtr = 0; + if( !resultObjPtr ){ + return TCL_ERROR; + } + objPtr = Tcl_NewIntObj(argc - 1); + Tcl_IncrRefCount(objPtr); + resultObjPtr = Tcl_SetVar2Ex(pInterp, "argc", NULL, objPtr, + TCL_GLOBAL_ONLY|TCL_LEAVE_ERR_MSG); + Tcl_DecrRefCount(objPtr); objPtr = 0; + if( !resultObjPtr ){ + return TCL_ERROR; + } + listPtr = Tcl_NewListObj(0, NULL); + Tcl_IncrRefCount(listPtr); + if( argc>1 ){ + while( --argc ){ + objPtr = Tcl_NewStringObj(*++argv, -1); + Tcl_IncrRefCount(objPtr); + rc = Tcl_ListObjAppendElement(pInterp, listPtr, objPtr); + Tcl_DecrRefCount(objPtr); objPtr = 0; + if( rc!=TCL_OK ){ + break; + } + } + } + if( rc==TCL_OK ){ + resultObjPtr = Tcl_SetVar2Ex(pInterp, "argv", NULL, listPtr, + TCL_GLOBAL_ONLY|TCL_LEAVE_ERR_MSG); + if( !resultObjPtr ){ + rc = TCL_ERROR; + } + } + Tcl_DecrRefCount(listPtr); listPtr = 0; + return rc; +} + +/* +** Evaluate a Tcl script, creating the Tcl interpreter if necessary. If the +** Tcl script succeeds, start a Tcl event loop until there are no more events +** remaining to process -OR- the script calls [exit]. If the bWait argument +** is zero, only process events that are already in the queue; otherwise, +** process events until the script terminates the Tcl event loop. +*/ +void fossil_print(const char *zFormat, ...); /* printf.h */ + +int evaluateTclWithEvents( + Th_Interp *interp, + void *pContext, + const char *zScript, + int nScript, + int bCancel, + int bWait, + int bVerbose +){ + struct TclContext *tclContext = (struct TclContext *)pContext; + Tcl_Interp *tclInterp; + int rc; + int flags = TCL_ALL_EVENTS; + int useTip285; + + if( createTclInterp(interp, pContext)!=TH_OK ){ + return TH_ERROR; + } + tclInterp = tclContext->interp; + useTip285 = bCancel ? tclContext->useTip285 : 0; + rc = Tcl_EvalEx(tclInterp, zScript, nScript, TCL_EVAL_GLOBAL); + if( rc!=TCL_OK ){ + if( bVerbose ){ + const char *zResult = getTclResult(tclInterp, 0); + fossil_print("%s: ", getTclReturnCodeName(rc, 0)); + fossil_print("%s\n", zResult); + } + return rc; + } + if( !bWait ) flags |= TCL_DONT_WAIT; + Tcl_Preserve((ClientData)tclInterp); + while( Tcl_DoOneEvent(flags) ){ + if( Tcl_InterpDeleted(tclInterp) ){ + break; + } +#if MINIMUM_TCL_VERSION(8, 6) + if( useTip285 && Tcl_Canceled(tclInterp, 0)!=TCL_OK ){ + break; + } +#endif + } + Tcl_Release((ClientData)tclInterp); + return rc; +} + +/* +** Creates and initializes a Tcl interpreter for use with the specified TH1 +** interpreter. Stores the created Tcl interpreter in the Tcl context supplied +** by the caller. 
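+**
+** Creation is lazy: the interpreter is built on the first TH1-side use
+** of a bridge command, e.g. (illustrative)
+**
+**     tclEval {info patchlevel}
+**
+** and later calls to this function return immediately because the
+** interpreter is cached in the Tcl context.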
+*/ +static int createTclInterp( + Th_Interp *interp, + void *pContext +){ + struct TclContext *tclContext = (struct TclContext *)pContext; + int argc; + char **argv; + char *argv0 = 0; + Tcl_Interp *tclInterp; + const char *setup; + + if( !tclContext ){ + Th_ErrorMessage(interp, + "invalid Tcl context", (const char *)"", 0); + return TH_ERROR; + } + if( tclContext->interp ){ + return TH_OK; + } + if( loadTcl(interp, &tclContext->hLibrary, &tclContext->xFindExecutable, + &tclContext->xCreateInterp, &tclContext->xDeleteInterp, + &tclContext->xFinalize)!=TH_OK ){ + return TH_ERROR; + } + argc = tclContext->argc; + argv = tclContext->argv; + if( argc>0 && argv ){ + argv0 = argv[0]; + } + tclContext->xFindExecutable(argv0); + tclInterp = tclContext->xCreateInterp(); + if( !tclInterp ){ + Th_ErrorMessage(interp, + "could not create Tcl interpreter", (const char *)"", 0); + return TH_ERROR; + } +#if defined(USE_TCL_STUBS) +#if defined(FOSSIL_ENABLE_TCL_PRIVATE_STUBS) + if( initTclStubs(interp, tclInterp)!=TH_OK ){ + tclContext->xDeleteInterp(tclInterp); + tclInterp = 0; + return TH_ERROR; + } +#else + if( !Tcl_InitStubs(tclInterp, "8.4", 0) ){ + Th_ErrorMessage(interp, + "could not initialize Tcl stubs", (const char *)"", 0); + tclContext->xDeleteInterp(tclInterp); + tclInterp = 0; + return TH_ERROR; + } +#endif /* defined(FOSSIL_ENABLE_TCL_PRIVATE_STUBS) */ +#endif /* defined(USE_TCL_STUBS) */ + if( Tcl_InterpDeleted(tclInterp) ){ + Th_ErrorMessage(interp, + "Tcl interpreter appears to be deleted", (const char *)"", 0); + Tcl_DeleteInterp(tclInterp); /* TODO: Redundant? */ + tclInterp = 0; + return TH_ERROR; + } + tclContext->interp = tclInterp; + if( Tcl_Init(tclInterp)!=TCL_OK ){ + Th_ErrorMessage(interp, + "Tcl initialization error:", Tcl_GetStringResult(tclInterp), -1); + Tcl_DeleteInterp(tclInterp); + tclContext->interp = tclInterp = 0; + return TH_ERROR; + } + if( setTclArguments(tclInterp, argc, argv)!=TCL_OK ){ + Th_ErrorMessage(interp, + "Tcl error setting arguments:", Tcl_GetStringResult(tclInterp), -1); + Tcl_DeleteInterp(tclInterp); + tclContext->interp = tclInterp = 0; + return TH_ERROR; + } + /* + ** Determine (and cache) if an objProc can be called directly for a Tcl + ** command invoked via the tclInvoke TH1 command. + */ + tclContext->useObjProc = canUseObjProc(); + /* + ** Determine (and cache) whether or not we can use TIP #285 (asynchronous + ** script cancellation). + */ + tclContext->useTip285 = canUseTip285(); + /* Add the TH1 integration commands to Tcl. */ + Tcl_CallWhenDeleted(tclInterp, Th1DeleteProc, interp); + Tcl_CreateObjCommand(tclInterp, "th1Eval", Th1EvalObjCmd, interp, NULL); + Tcl_CreateObjCommand(tclInterp, "th1Expr", Th1ExprObjCmd, interp, NULL); + /* If necessary, evaluate the custom Tcl setup script. */ + setup = tclContext->setup; + if( setup && Tcl_Eval(tclInterp, setup)!=TCL_OK ){ + Th_ErrorMessage(interp, + "Tcl setup script error:", Tcl_GetStringResult(tclInterp), -1); + Tcl_DeleteInterp(tclInterp); + tclContext->interp = tclInterp = 0; + return TH_ERROR; + } + return TH_OK; +} + +/* +** Finalizes and unloads the previously loaded Tcl library, if applicable. 
+*/ +int unloadTcl( + Th_Interp *interp, + void *pContext +){ + struct TclContext *tclContext = (struct TclContext *)pContext; + Tcl_Interp *tclInterp; + tcl_FinalizeProc *xFinalize; +#if defined(USE_TCL_STUBS) + void *hLibrary; +#endif /* defined(USE_TCL_STUBS) */ + + if( !tclContext ){ + Th_ErrorMessage(interp, + "invalid Tcl context", (const char *)"", 0); + return TH_ERROR; + } + /* + ** Grab the Tcl_Finalize function pointer prior to deleting the Tcl + ** interpreter because the memory backing the Tcl stubs table will + ** be going away. + */ + xFinalize = tclContext->xFinalize; + /* + ** If the Tcl interpreter has been created, formally delete it now. + */ + tclInterp = tclContext->interp; + if( tclInterp ){ + Tcl_DeleteInterp(tclInterp); + tclContext->interp = tclInterp = 0; + } + /* + ** If the Tcl library is not finalized prior to unloading it, a deadlock + ** can occur in some circumstances (i.e. the [clock] thread is running). + */ + if( xFinalize ) xFinalize(); +#if defined(USE_TCL_STUBS) + /* + ** If Tcl is compiled on Windows using the latest MinGW, Fossil can crash + ** when exiting while a stubs-enabled Tcl is still loaded. This is due to + ** a bug in MinGW, see: + ** + ** http://comments.gmane.org/gmane.comp.gnu.mingw.user/41724 + ** + ** The workaround is to manually unload the loaded Tcl library prior to + ** exiting the process. + */ + hLibrary = tclContext->hLibrary; + if( hLibrary ){ + dlclose(hLibrary); + tclContext->hLibrary = hLibrary = 0; + } +#endif /* defined(USE_TCL_STUBS) */ + return TH_OK; +} + +/* +** Register the Tcl language commands with interpreter interp. +** Usually this is called soon after interpreter creation. +*/ +int th_register_tcl( + Th_Interp *interp, + void *pContext +){ + int i; + + /* Add the Tcl integration commands to TH1. */ + for(i=0; i<(sizeof(aCommand)/sizeof(aCommand[0])); i++){ + void *ctx; + if( !aCommand[i].zName || !aCommand[i].xProc ) continue; + ctx = aCommand[i].pContext; + /* Use Tcl interpreter for context? */ + if( !ctx ) ctx = pContext; + Th_CreateCommand(interp, aCommand[i].zName, aCommand[i].xProc, ctx, 0); + } + return TH_OK; +} + +#endif /* FOSSIL_ENABLE_TCL */ Index: src/timeline.c ================================================================== --- src/timeline.c +++ src/timeline.c @@ -16,89 +16,48 @@ ******************************************************************************* ** ** This file contains code to implement the timeline web page ** */ +#include "config.h" #include <string.h> #include <time.h> -#include "config.h" #include "timeline.h" /* -** Shorten a UUID so that is the minimum length needed to contain -** at least one digit in the range 'a'..'f'. The minimum length is 10. +** The value of one second in julianday notation +*/ +#define ONE_SECOND (1.0/86400.0) + +/* +** Add an appropriate tag to the output if "rid" is unpublished (private) */ -static void shorten_uuid(char *zDest, const char *zSrc){ - int i; - for(i=0; i<10 && zSrc[i]<='9'; i++){} - memcpy(zDest, zSrc, 10); - if( i==10 && zSrc[i] ){ - do{ - zDest[i] = zSrc[i]; - i++; - }while( zSrc[i-1]<='9' ); - }else{ - i = 10; - } - zDest[i] = 0; -} - +#define UNPUB_TAG "<em>(unpublished)</em>" +void tag_private_status(int rid){ + if( content_is_private(rid) ){ + cgi_printf("%s", UNPUB_TAG); + } +} /* ** Generate a hyperlink to a version. 
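**
** For users with hyperlink permission the emitted markup is roughly
** (the hash shown here is made up):
**
**     <a class="timelineHistLink" href=".../info/0123456789">[0123456789]</a>
**
** otherwise a plain <span class="timelineHistDsp"> wrapper is used.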
*/ void hyperlink_to_uuid(const char *zUuid){ - char zShortUuid[UUID_SIZE+1]; - shorten_uuid(zShortUuid, zUuid); - if( g.okHistory ){ - @ <a href="%s(g.zBaseURL)/info/%s(zShortUuid)">[%s(zShortUuid)]</a> - }else{ - @ <b>[%s(zShortUuid)]</b> - } -} - -/* -** Generate a hyperlink that invokes javascript to highlight -** a version on mouseover. -*/ -void hyperlink_to_uuid_with_mouseover( - const char *zUuid, /* The UUID to display */ - const char *zIn, /* Javascript proc for mouseover */ - const char *zOut, /* Javascript proc for mouseout */ - int id /* Argument to javascript procs */ -){ - char zShortUuid[UUID_SIZE+1]; - shorten_uuid(zShortUuid, zUuid); - if( g.okHistory ){ - @ <a onmouseover='%s(zIn)("m%d(id)")' onmouseout='%s(zOut)("m%d(id)")' - @ href="%s(g.zBaseURL)/vinfo/%s(zShortUuid)">[%s(zShortUuid)]</a> - }else{ - @ <b onmouseover='%s(zIn)("m%d(id)")' onmouseout='%s(zOut)("m%d(id)")'> - @ [%s(zShortUuid)]</b> - } -} - -/* -** Generate a hyperlink to a diff between two versions. -*/ -void hyperlink_to_diff(const char *zV1, const char *zV2){ - if( g.okHistory ){ - if( zV2==0 ){ - @ <a href="%s(g.zBaseURL)/diff?v2=%s(zV1)">[diff]</a> - }else{ - @ <a href="%s(g.zBaseURL)/diff?v1=%s(zV1)&v2=%s(zV2)">[diff]</a> - } + if( g.perm.Hyperlink ){ + @ %z(xhref("class='timelineHistLink'","%R/info/%!S",zUuid))[%S(zUuid)]</a> + }else{ + @ <span class="timelineHistDsp">[%S(zUuid)]</span> } } /* ** Generate a hyperlink to a date & time. */ void hyperlink_to_date(const char *zDate, const char *zSuffix){ if( zSuffix==0 ) zSuffix = ""; - if( g.okHistory ){ - @ <a href="%s(g.zTop)/timeline?c=%T(zDate)">%s(zDate)</a>%s(zSuffix) + if( g.perm.Hyperlink ){ + @ %z(href("%R/timeline?c=%T",zDate))%s(zDate)</a>%s(zSuffix) }else{ @ %s(zDate)%s(zSuffix) } } @@ -106,54 +65,137 @@ ** Generate a hyperlink to a user. This will link to a timeline showing ** events by that user. If the date+time is specified, then the timeline ** is centered on that date+time. */ void hyperlink_to_user(const char *zU, const char *zD, const char *zSuf){ + if( zU==0 || zU[0]==0 ) zU = "anonymous"; if( zSuf==0 ) zSuf = ""; - if( g.okHistory ){ + if( g.perm.Hyperlink ){ if( zD && zD[0] ){ - @ <a href="%s(g.zTop)/timeline?c=%T(zD)&u=%T(zU)">%h(zU)</a>%s(zSuf) + @ %z(href("%R/timeline?c=%T&u=%T",zD,zU))%h(zU)</a>%s(zSuf) }else{ - @ <a href="%s(g.zTop)/timeline?u=%T(zU)">%h(zU)</a>%s(zSuf) + @ %z(href("%R/timeline?u=%T",zU))%h(zU)</a>%s(zSuf) } }else{ @ %s(zU) } } -/* -** Count the number of primary non-branch children for the given check-in. -** -** A primary child is one where the parent is the primary parent, not -** a merge parent. -** -** A non-branch child is one which is on the same branch as the parent. 
-*/ -int count_nonbranch_children(int pid){ - int nNonBranch; - static const char zSql[] = - @ SELECT count(*) FROM plink - @ WHERE pid=%d AND isprim - @ AND coalesce((SELECT value FROM tagxref - @ WHERE tagid=%d AND rid=plink.pid), 'trunk') - @ =coalesce((SELECT value FROM tagxref - @ WHERE tagid=%d AND rid=plink.cid), 'trunk') - ; - nNonBranch = db_int(0, zSql, pid, TAG_BRANCH, TAG_BRANCH); - return nNonBranch; -} - /* ** Allowed flags for the tmFlags argument to www_print_timeline */ #if INTERFACE #define TIMELINE_ARTID 0x0001 /* Show artifact IDs on non-check-in lines */ #define TIMELINE_LEAFONLY 0x0002 /* Show "Leaf", but not "Merge", "Fork" etc */ #define TIMELINE_BRIEF 0x0004 /* Combine adjacent elements of same object */ #define TIMELINE_GRAPH 0x0008 /* Compute a graph */ #define TIMELINE_DISJOINT 0x0010 /* Elements are not contiguous */ +#define TIMELINE_FCHANGES 0x0020 /* Detail file changes */ +#define TIMELINE_BRCOLOR 0x0040 /* Background color by branch name */ +#define TIMELINE_UCOLOR 0x0080 /* Background color by user */ +#define TIMELINE_FRENAMES 0x0100 /* Detail only file name changes */ +#define TIMELINE_UNHIDE 0x0200 /* Unhide check-ins with "hidden" tag */ +#define TIMELINE_SHOWRID 0x0400 /* Show RID values in addition to UUIDs */ +#define TIMELINE_BISECT 0x0800 /* Show supplimental bisect information */ #endif + +/* +** Hash a string and use the hash to determine a background color. +*/ +char *hash_color(const char *z){ + int i; /* Loop counter */ + unsigned int h = 0; /* Hash on the branch name */ + int r, g, b; /* Values for red, green, and blue */ + int h1, h2, h3, h4; /* Elements of the hash value */ + int mx, mn; /* Components of HSV */ + static char zColor[10]; /* The resulting color */ + static int ix[2] = {0,0}; /* Color chooser parameters */ + + if( ix[0]==0 ){ + if( skin_detail_boolean("white-foreground") ){ + ix[0] = 140; + ix[1] = 40; + }else{ + ix[0] = 216; + ix[1] = 16; + } + } + for(i=0; z[i]; i++ ){ + h = (h<<11) ^ (h<<1) ^ (h>>3) ^ z[i]; + } + h1 = h % 6; h /= 6; + h3 = h % 30; h /= 30; + h4 = h % 40; h /= 40; + mx = ix[0] - h3; + mn = mx - h4 - ix[1]; + h2 = (h%(mx - mn)) + mn; + switch( h1 ){ + case 0: r = mx; g = h2, b = mn; break; + case 1: r = h2; g = mx, b = mn; break; + case 2: r = mn; g = mx, b = h2; break; + case 3: r = mn; g = h2, b = mx; break; + case 4: r = h2; g = mn, b = mx; break; + default: r = mx; g = mn, b = h2; break; + } + sqlite3_snprintf(8, zColor, "#%02x%02x%02x", r,g,b); + return zColor; +} + +/* +** COMMAND: test-hash-color +** +** Usage: %fossil test-hash-color TAG ... +** +** Print out the color names associated with each tag. Used for +** testing the hash_color() function. +*/ +void test_hash_color(void){ + int i; + for(i=2; i<g.argc; i++){ + fossil_print("%20s: %s\n", g.argv[i], hash_color(g.argv[i])); + } +} + +/* +** WEBPAGE: hash-color-test +** +** Print out the color names associated with each tag. Used for +** testing the hash_color() function. 
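+**
+** An illustrative request URL (the b0..b9 query parameters match the
+** form fields generated below):
+**
+**     /hash-color-test?b0=trunk&b1=experimental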
+*/ +void test_hash_color_page(void){ + const char *zBr; + char zNm[10]; + int i, cnt; + login_check_credentials(); + + style_header("Hash Color Test"); + for(i=cnt=0; i<10; i++){ + sqlite3_snprintf(sizeof(zNm),zNm,"b%d",i); + zBr = P(zNm); + if( zBr && zBr[0] ){ + @ <p style='border:1px solid;background-color:%s(hash_color(zBr));'> + @ %h(zBr) - %s(hash_color(zBr)) - + @ Omnes nos quasi oves erravimus unusquisque in viam + @ suam declinavit.</p> + cnt++; + } + } + if( cnt ){ + @ <hr> + } + @ <form method="post" action="%s(g.zTop)/hash-color-test"> + @ <p>Enter candidate branch names below and see them displayed in their + @ default background colors above.</p> + for(i=0; i<10; i++){ + sqlite3_snprintf(sizeof(zNm),zNm,"b%d",i); + zBr = P(zNm); + @ <input type="text" size="30" name='%s(zNm)' value='%h(PD(zNm,""))'><br /> + } + @ <input type="submit"> + @ </form> + style_footer(); +} /* ** Output a timeline in the web format given a query. The query ** should return these columns: ** @@ -162,41 +204,57 @@ ** 2. Date/Time ** 3. Comment string ** 4. User ** 5. True if is a leaf ** 6. background color -** 7. type ("ci", "w", "t") +** 7. type ("ci", "w", "t", "e", "g", "div") ** 8. list of symbolic tags. -** 9. tagid for ticket or wiki +** 9. tagid for ticket or wiki or event ** 10. Short comment to user for repeated tickets and wiki */ void www_print_timeline( Stmt *pQuery, /* Query to implement the timeline */ int tmFlags, /* Flags controlling display behavior */ + const char *zThisUser, /* Suppress links to this user */ + const char *zThisTag, /* Suppress links to this tag */ + int selectedRid, /* Highlight the line with this RID value */ void (*xExtra)(int) /* Routine to call on each line of display */ ){ - int wikiFlags; int mxWikiLen; Blob comment; int prevTagid = 0; int suppressCnt = 0; char zPrevDate[20]; GraphContext *pGraph = 0; + int prevWasDivider = 0; /* True if previous output row was <hr> */ + int fchngQueryInit = 0; /* True if fchngQuery is initialized */ + Stmt fchngQuery; /* Query for file changes on check-ins */ + static Stmt qbranch; + int pendingEndTr = 0; /* True if a </td></tr> is needed */ + int vid = 0; /* Current checkout version */ + int dateFormat = 0; /* 0: HH:MM (default) */ + int bCommentGitStyle = 0; /* Only show comments through first blank line */ + const char *zDateFmt; + if( fossil_strcmp(g.zIpAddr, "127.0.0.1")==0 && db_open_local(0) ){ + vid = db_lget_int("checkout", 0); + } zPrevDate[0] = 0; mxWikiLen = db_get_int("timeline-max-comment", 0); - if( db_get_boolean("timeline-block-markup", 0) ){ - wikiFlags = WIKI_INLINE; - }else{ - wikiFlags = WIKI_INLINE | WIKI_NOBLOCK; - } + dateFormat = db_get_int("timeline-date-format", 0); + bCommentGitStyle = db_get_int("timeline-truncate-at-blank", 0); + zDateFmt = P("datefmt"); + if( zDateFmt ) dateFormat = atoi(zDateFmt); if( tmFlags & TIMELINE_GRAPH ){ pGraph = graph_init(); - @ <div id="canvas" style="position:relative;width:1px;height:1px;"></div> } + db_static_prepare(&qbranch, + "SELECT value FROM tagxref WHERE tagid=%d AND tagtype>0 AND rid=:rid", + TAG_BRANCH + ); - @ <table cellspacing=0 border=0 cellpadding=0> + @ <table id="timelineTable" class="timelineTable"> blob_zero(&comment); while( db_step(pQuery)==SQLITE_ROW ){ int rid = db_column_int(pQuery, 0); const char *zUuid = db_column_text(pQuery, 1); int isLeaf = db_column_int(pQuery, 5); @@ -204,13 +262,23 @@ const char *zDate = db_column_text(pQuery, 2); const char *zType = db_column_text(pQuery, 7); const char *zUser = db_column_text(pQuery, 4); const 
char *zTagList = db_column_text(pQuery, 8); int tagid = db_column_int(pQuery, 9); + const char *zDispUser = zUser && zUser[0] ? zUser : "anonymous"; + const char *zBr = 0; /* Branch */ int commentColumn = 3; /* Column containing comment text */ - char zTime[8]; + int modPending; /* Pending moderation */ + char *zDateLink; /* URL for the link on the timestamp */ + char zTime[20]; + + if( zDate==0 ){ + zDate = "YYYY-MM-DD HH:MM:SS"; /* Something wrong with the repo */ + } + modPending = moderation_pending(rid); if( tagid ){ + if( modPending ) tagid = -tagid; if( tagid==prevTagid ){ if( tmFlags & TIMELINE_BRIEF ){ suppressCnt++; continue; }else{ @@ -218,319 +286,743 @@ } } } prevTagid = tagid; if( suppressCnt ){ - @ <tr><td><td><td> - @ <small><i>... %d(suppressCnt) similar - @ event%s(suppressCnt>1?"s":"") omitted.</i></small></tr> + @ <span class="timelineDisabled">... %d(suppressCnt) similar + @ event%s(suppressCnt>1?"s":"") omitted.</span> suppressCnt = 0; } - if( strcmp(zType,"div")==0 ){ - @ <tr><td colspan=3><hr></td></tr> + if( pendingEndTr ){ + @ </td></tr> + if( pendingEndTr>1 ){ + @ <tr class="timelineSpacer"></tr> + } + pendingEndTr = 0; + } + if( fossil_strcmp(zType,"div")==0 ){ + if( !prevWasDivider ){ + @ <tr><td colspan="3"><hr class="timelineMarker"/></td></tr> + } + prevWasDivider = 1; continue; } - if( memcmp(zDate, zPrevDate, 10) ){ - sprintf(zPrevDate, "%.10s", zDate); - @ <tr><td> - @ <div class="divider"><nobr>%s(zPrevDate)</nobr></div> - @ </td></tr> - } - memcpy(zTime, &zDate[11], 5); - zTime[5] = 0; - @ <tr> - @ <td valign="top" align="right">%s(zTime)</td> - @ <td width="20" align="left" valign="top"> - if( pGraph && zType[0]=='c' ){ - int nParent = 0; - int aParent[32]; - const char *zBr; - int gidx; - static Stmt qparent; - static Stmt qbranch; - db_static_prepare(&qparent, - "SELECT pid FROM plink WHERE cid=:rid ORDER BY isprim DESC /*sort*/" - ); - db_static_prepare(&qbranch, - "SELECT value FROM tagxref WHERE tagid=%d AND tagtype>0 AND rid=:rid", - TAG_BRANCH - ); - db_bind_int(&qparent, ":rid", rid); - while( db_step(&qparent)==SQLITE_ROW && nParent<32 ){ - aParent[nParent++] = db_column_int(&qparent, 0); - } - db_reset(&qparent); + prevWasDivider = 0; + /* Date format codes: + ** (0) HH:MM + ** (1) HH:MM:SS + ** (2) YYYY-MM-DD HH:MM + ** (3) YYMMDD HH:MM + ** (4) (off) + */ + if( dateFormat<2 ){ + if( fossil_strnicmp(zDate, zPrevDate, 10) ){ + sqlite3_snprintf(sizeof(zPrevDate), zPrevDate, "%.10s", zDate); + @ <tr><td> + @ <div class="divider timelineDate">%s(zPrevDate)</div> + @ </td><td></td><td></td></tr> + } + memcpy(zTime, &zDate[11], 5+dateFormat*3); + zTime[5+dateFormat*3] = 0; + }else if( 2==dateFormat ){ + /* YYYY-MM-DD HH:MM */ + sqlite3_snprintf(sizeof(zTime), zTime, "%.16s", zDate); + }else if( 3==dateFormat ){ + /* YYMMDD HH:MM */ + int pos = 0; + zTime[pos++] = zDate[2]; zTime[pos++] = zDate[3]; /* YY */ + zTime[pos++] = zDate[5]; zTime[pos++] = zDate[6]; /* MM */ + zTime[pos++] = zDate[8]; zTime[pos++] = zDate[9]; /* DD */ + zTime[pos++] = ' '; + zTime[pos++] = zDate[11]; zTime[pos++] = zDate[12]; /* HH */ + zTime[pos++] = ':'; + zTime[pos++] = zDate[14]; zTime[pos++] = zDate[15]; /* MM */ + zTime[pos++] = 0; + }else{ + zTime[0] = 0; + } + pendingEndTr = 1; + if( rid==selectedRid ){ + @ <tr class="timelineSpacer"></tr> + @ <tr class="timelineSelected"> + pendingEndTr = 2; + }else if( rid==vid ){ + @ <tr class="timelineCurrent"> + }else { + @ <tr> + } + zDateLink = href("%R/timeline?c=%!S&unhide", zUuid); + @ <td 
class="timelineTime">%z(zDateLink)%s(zTime)</a></td> + @ <td class="timelineGraph"> + if( tmFlags & TIMELINE_UCOLOR ) zBgClr = zUser ? hash_color(zUser) : 0; + if( zType[0]=='c' + && (pGraph || zBgClr==0 || (tmFlags & TIMELINE_BRCOLOR)!=0) + ){ + db_reset(&qbranch); db_bind_int(&qbranch, ":rid", rid); if( db_step(&qbranch)==SQLITE_ROW ){ zBr = db_column_text(&qbranch, 0); }else{ zBr = "trunk"; } - gidx = graph_add_row(pGraph, rid, nParent, aParent, zBr, zBgClr); + if( zBgClr==0 || (tmFlags & TIMELINE_BRCOLOR)!=0 ){ + if( zBr==0 || strcmp(zBr,"trunk")==0 ){ + zBgClr = 0; + }else{ + zBgClr = hash_color(zBr); + } + } + } + if( zType[0]=='c' && (pGraph || (tmFlags & TIMELINE_BRCOLOR)!=0) ){ + int nParent = 0; + int aParent[GR_MAX_RAIL]; + int gidx; + static Stmt qparent; + db_static_prepare(&qparent, + "SELECT pid FROM plink" + " WHERE cid=:rid AND pid NOT IN phantom" + " ORDER BY isprim DESC /*sort*/" + ); + db_bind_int(&qparent, ":rid", rid); + while( db_step(&qparent)==SQLITE_ROW && nParent<ArraySize(aParent) ){ + aParent[nParent++] = db_column_int(&qparent, 0); + } + db_reset(&qparent); + gidx = graph_add_row(pGraph, rid, nParent, aParent, zBr, zBgClr, + zUuid, isLeaf); db_reset(&qbranch); - @ <div id="m%d(gidx)"></div> + @ <div id="m%d(gidx)" class="tl-nodemark"></div> } - if( zBgClr && zBgClr[0] ){ - @ <td valign="top" align="left" bgcolor="%h(zBgClr)"> + @</td> + if( zBgClr && zBgClr[0] && rid!=selectedRid ){ + @ <td class="timelineTableCell" style="background-color: %h(zBgClr);"> }else{ - @ <td valign="top" align="left"> + @ <td class="timelineTableCell"> + } + if( pGraph && zType[0]!='c' ){ + @ • + } + if( modPending ){ + @ <span class="modpending">(Awaiting Moderator Approval)</span> } if( zType[0]=='c' ){ + if( tmFlags & TIMELINE_BISECT ){ + static Stmt bisectQuery; + db_prepare(&bisectQuery, "SELECT seq, stat FROM bilog WHERE rid=:rid"); + db_bind_int(&bisectQuery, ":rid", rid); + if( db_step(&bisectQuery)==SQLITE_ROW ){ + @ <b>%s(db_column_text(&bisectQuery,1))</b> + @ (%d(db_column_int(&bisectQuery,0))) + } + db_reset(&bisectQuery); + } hyperlink_to_uuid(zUuid); if( isLeaf ){ if( db_exists("SELECT 1 FROM tagxref" " WHERE rid=%d AND tagid=%d AND tagtype>0", rid, TAG_CLOSED) ){ - @ <b>Closed-Leaf:</b> + @ <span class="timelineLeaf">Closed-Leaf:</span> }else{ - @ <b>Leaf:</b> + @ <span class="timelineLeaf">Leaf:</span> } } + }else if( zType[0]=='e' && tagid ){ + hyperlink_to_event_tagid(tagid<0?-tagid:tagid); }else if( (tmFlags & TIMELINE_ARTID)!=0 ){ hyperlink_to_uuid(zUuid); + } + if( tmFlags & TIMELINE_SHOWRID ){ + @ (%d(rid)) } db_column_blob(pQuery, commentColumn, &comment); - if( mxWikiLen>0 && blob_size(&comment)>mxWikiLen ){ + if( zType[0]!='c' ){ + /* Comments for anything other than a check-in are generated by + ** "fossil rebuild" and expect to be rendered as text/x-fossil-wiki */ + wiki_convert(&comment, 0, WIKI_INLINE); + }else if( bCommentGitStyle ){ + /* Truncate comment at first blank line */ + int ii, jj; + int n = blob_size(&comment); + char *z = blob_str(&comment); + for(ii=0; ii<n; ii++){ + if( z[ii]=='\n' ){ + for(jj=ii+1; jj<n && z[jj]!='\n' && fossil_isspace(z[jj]); jj++){} + if( z[jj]=='\n' ) break; + } + } + z[ii] = 0; + @ <span class="timelineComment">%W(z)</span> + }else if( mxWikiLen>0 && blob_size(&comment)>mxWikiLen ){ Blob truncated; blob_zero(&truncated); blob_append(&truncated, blob_buffer(&comment), mxWikiLen); blob_append(&truncated, "...", 3); - wiki_convert(&truncated, 0, wikiFlags); + @ <span 
class="timelineComment">%W(blob_str(&truncated))</span> blob_reset(&truncated); }else{ - wiki_convert(&comment, 0, wikiFlags); + @ <span class="timelineComment">%W(blob_str(&comment))</span> } blob_reset(&comment); - if( zTagList && zTagList[0] ){ - @ (user: %h(zUser), tags: %h(zTagList)) + + /* Generate the "user: USERNAME" at the end of the comment, together + ** with a hyperlink to another timeline for that user. + */ + if( zTagList && zTagList[0]==0 ) zTagList = 0; + if( g.perm.Hyperlink && fossil_strcmp(zDispUser, zThisUser)!=0 ){ + char *zLink = mprintf("%R/timeline?u=%h&c=%t&nd&n=200", zDispUser, zDate); + @ (user: %z(href("%z",zLink))%h(zDispUser)</a>%s(zTagList?",":"\051") }else{ - @ (user: %h(zUser)) + @ (user: %h(zDispUser)%s(zTagList?",":"\051") } + + /* Generate a "detail" link for tags. */ + if( (zType[0]=='g' || zType[0]=='w' || zType[0]=='t') && g.perm.Hyperlink ){ + @ [%z(href("%R/info/%!S",zUuid))details</a>] + } + + /* Generate the "tags: TAGLIST" at the end of the comment, together + ** with hyperlinks to the tag list. + */ + if( zTagList ){ + if( g.perm.Hyperlink ){ + int i; + const char *z = zTagList; + Blob links; + blob_zero(&links); + while( z && z[0] ){ + for(i=0; z[i] && (z[i]!=',' || z[i+1]!=' '); i++){} + if( zThisTag==0 || memcmp(z, zThisTag, i)!=0 || zThisTag[i]!=0 ){ + blob_appendf(&links, + "%z%#h</a>%.2s", + href("%R/timeline?r=%#t&nd&c=%t&n=200",i,z,zDate), i,z, &z[i] + ); + }else{ + blob_appendf(&links, "%#h", i+2, z); + } + if( z[i]==0 ) break; + z += i+2; + } + @ tags: %s(blob_str(&links))) + blob_reset(&links); + }else{ + @ tags: %h(zTagList)) + } + } + tag_private_status(rid); + + /* Generate extra hyperlinks at the end of the comment */ if( xExtra ){ xExtra(rid); } - @ </td></tr> + + /* Generate the file-change list if requested */ + if( (tmFlags & (TIMELINE_FCHANGES|TIMELINE_FRENAMES))!=0 + && zType[0]=='c' && g.perm.Hyperlink + ){ + int inUl = 0; + if( !fchngQueryInit ){ + db_prepare(&fchngQuery, + "SELECT pid," + " fid," + " (SELECT name FROM filename WHERE fnid=mlink.fnid) AS name," + " (SELECT uuid FROM blob WHERE rid=fid)," + " (SELECT uuid FROM blob WHERE rid=pid)," + " (SELECT name FROM filename WHERE fnid=mlink.pfnid) AS oldnm" + " FROM mlink" + " WHERE mid=:mid AND (pid!=fid OR pfnid>0)" + " AND (fid>0 OR" + " fnid NOT IN (SELECT pfnid FROM mlink WHERE mid=:mid))" + " AND NOT mlink.isaux" + " ORDER BY 3 /*sort*/" + ); + fchngQueryInit = 1; + } + db_bind_int(&fchngQuery, ":mid", rid); + while( db_step(&fchngQuery)==SQLITE_ROW ){ + const char *zFilename = db_column_text(&fchngQuery, 2); + int isNew = db_column_int(&fchngQuery, 0)<=0; + int isMergeNew = db_column_int(&fchngQuery, 0)<0; + int fid = db_column_int(&fchngQuery, 1); + int isDel = fid==0; + const char *zOldName = db_column_text(&fchngQuery, 5); + const char *zOld = db_column_text(&fchngQuery, 4); + const char *zNew = db_column_text(&fchngQuery, 3); + const char *zUnpub = ""; + char *zA; + char zId[20]; + if( !inUl ){ + @ <ul class="filelist"> + inUl = 1; + } + if( tmFlags & TIMELINE_SHOWRID ){ + sqlite3_snprintf(sizeof(zId), zId, " (%d) ", fid); + }else{ + zId[0] = 0; + } + if( (tmFlags & TIMELINE_FRENAMES)!=0 ){ + if( !isNew && !isDel && zOldName!=0 ){ + @ <li> %h(zOldName) → %h(zFilename)%s(zId) + } + continue; + } + zA = href("%R/artifact/%!S",fid?zNew:zOld); + if( content_is_private(fid) ){ + zUnpub = UNPUB_TAG; + } + if( isNew ){ + @ <li> %s(zA)%h(zFilename)</a>%s(zId) %s(zUnpub) + if( isMergeNew ){ + @ (added by merge) + }else{ + @ (new file) + } + @   
%z(href("%R/artifact/%!S",zNew))[view]</a></li> + }else if( isDel ){ + @ <li> %s(zA)%h(zFilename)</a> (deleted)</li> + }else if( fossil_strcmp(zOld,zNew)==0 && zOldName!=0 ){ + @ <li> %h(zOldName) → %s(zA)%h(zFilename)</a>%s(zId) + @ %s(zUnpub) %z(href("%R/artifact/%!S",zNew))[view]</a></li> + }else{ + if( zOldName!=0 ){ + @ <li>%h(zOldName) → %s(zA)%h(zFilename)%s(zId)</a> %s(zUnpub) + }else{ + @ <li>%s(zA)%h(zFilename)</a>%s(zId)   %s(zUnpub) + } + @ %z(href("%R/fdiff?sbs=1&v1=%!S&v2=%!S",zOld,zNew))[diff]</a></li> + } + fossil_free(zA); + } + db_reset(&fchngQuery); + if( inUl ){ + @ </ul> + } + } } if( suppressCnt ){ - @ <tr><td><td><td> - @ <small><i>... %d(suppressCnt) similar - @ event%s(suppressCnt>1?"s":"") omitted.</i></small></tr> + @ <span class="timelineDisabled">... %d(suppressCnt) similar + @ event%s(suppressCnt>1?"s":"") omitted.</span> suppressCnt = 0; + } + if( pendingEndTr ){ + @ </td></tr> } if( pGraph ){ graph_finish(pGraph, (tmFlags & TIMELINE_DISJOINT)!=0); if( pGraph->nErr ){ graph_free(pGraph); pGraph = 0; }else{ - @ <tr><td><td><div style="width:%d(pGraph->mxRail*20+30)px;"></div> + @ <tr class="timelineBottom"><td></td><td></td><td></td></tr> } } @ </table> - timeline_output_graph_javascript(pGraph); + if( fchngQueryInit ) db_finalize(&fchngQuery); + timeline_output_graph_javascript(pGraph, (tmFlags & TIMELINE_DISJOINT)!=0, 0); +} + +/* +** Change the RGB background color given in the argument in a foreground +** color with the same hue. +*/ +static const char *bg_to_fg(const char *zIn){ + int i; + unsigned int x[3]; + unsigned int mx = 0; + static int whiteFg = -1; + static char zRes[10]; + if( strlen(zIn)!=7 || zIn[0]!='#' ) return zIn; + zIn++; + for(i=0; i<3; i++){ + x[i] = hex_digit_value(zIn[0])*16 + hex_digit_value(zIn[1]); + zIn += 2; + if( x[i]>mx ) mx = x[i]; + } + if( whiteFg<0 ) whiteFg = skin_detail_boolean("white-foreground"); + if( whiteFg ){ + /* Make the color lighter */ + static const unsigned int t = 215; + if( mx<t ) for(i=0; i<3; i++) x[i] += t - mx; + }else{ + /* Make the color darker */ + static const unsigned int t = 128; + if( mx>t ) for(i=0; i<3; i++) x[i] -= mx - t; + } + sqlite3_snprintf(sizeof(zRes),zRes,"#%02x%02x%02x",x[0],x[1],x[2]); + return zRes; } /* ** Generate all of the necessary javascript to generate a timeline ** graph. */ -void timeline_output_graph_javascript(GraphContext *pGraph){ - if( pGraph && pGraph->nErr==0 ){ - GraphRow *pRow; - int i; - char cSep; - @ <script type="text/JavaScript"> - cgi_printf("var rowinfo = [\n"); - for(pRow=pGraph->pFirst; pRow; pRow=pRow->pNext){ - cgi_printf("{id:\"m%d\",bg:\"%s\",r:%d,d:%d,mo:%d,mu:%d,u:%d,au:", - pRow->idx, - pRow->zBgClr, - pRow->iRail, - pRow->bDescender, - pRow->mergeOut, - pRow->mergeUpto, - pRow->aiRaiser[pRow->iRail] - ); - cSep = '['; - for(i=0; i<GR_MAX_RAIL; i++){ - if( i==pRow->iRail ) continue; - if( pRow->aiRaiser[i]>0 ){ - cgi_printf("%c%d,%d", cSep, pGraph->railMap[i], pRow->aiRaiser[i]); - cSep = ','; - } - } - if( cSep=='[' ) cgi_printf("["); - cgi_printf("],mi:"); - cSep = '['; - for(i=0; i<GR_MAX_RAIL; i++){ - if( pRow->mergeIn & (1<<i) ){ - cgi_printf("%c%d", cSep, pGraph->railMap[i]); - cSep = ','; - } - } - if( cSep=='[' ) cgi_printf("["); - cgi_printf("]}%s", pRow->pNext ? 
",\n" : "];\n"); - } - cgi_printf("var nrail = %d\n", pGraph->mxRail+1); - graph_free(pGraph); - @ var canvasDiv = document.getElementById("canvas"); - @ var realCanvas = null; - @ function drawBox(color,x0,y0,x1,y1){ - @ var n = document.createElement("div"); - @ if( x0>x1 ){ var t=x0; x0=x1; x1=t; } - @ if( y0>y1 ){ var t=y0; y0=y1; y1=t; } - @ var w = x1-x0+1; - @ var h = y1-y0+1; - @ n.style.position = "absolute"; - @ n.style.overflow = "hidden"; - @ n.style.left = x0+"px"; - @ n.style.top = y0+"px"; - @ n.style.width = w+"px"; - @ n.style.height = h+"px"; - @ n.style.backgroundColor = color; - @ canvasDiv.appendChild(n); - @ } - @ function absoluteY(id){ - @ var obj = document.getElementById(id); - @ if( !obj ) return; +void timeline_output_graph_javascript( + GraphContext *pGraph, /* The graph to be displayed */ + int omitDescenders, /* True to omit descenders */ + int fileDiff /* True for file diff. False for check-in diff */ +){ + if( pGraph && pGraph->nErr==0 && pGraph->nRow>0 ){ + GraphRow *pRow; + int i; + char cSep; + int iRailPitch; /* Pixels between consecutive rails */ + int showArrowheads; /* True to draw arrowheads. False to omit. */ + int circleNodes; /* True for circle nodes. False for square nodes */ + int colorGraph; /* Use colors for graph lines */ + + iRailPitch = atoi(PD("railpitch","0")); + showArrowheads = skin_detail_boolean("timeline-arrowheads"); + circleNodes = skin_detail_boolean("timeline-circle-nodes"); + colorGraph = skin_detail_boolean("timeline-color-graph-lines"); + + @ <script>(function(){ + @ "use strict"; + @ var css = ""; + if( circleNodes ){ + @ css += ".tl-node, .tl-node:after { border-radius: 50%%; }"; + } + if( !showArrowheads ){ + @ css += ".tl-arrow.u { display: none; }"; + } + @ if( css!=="" ){ + @ var style = document.createElement("style"); + @ style.textContent = css; + @ document.querySelector("head").appendChild(style); + @ } + /* the rowinfo[] array contains all the information needed to generate + ** the graph. Each entry contains information for a single row: + ** + ** id: The id of the <div> element for the row. This is an integer. + ** to get an actual id, prepend "m" to the integer. The top node + ** is 1 and numbers increase moving down the timeline. + ** bg: The background color for this row + ** r: The "rail" that the node for this row sits on. The left-most + ** rail is 0 and the number increases to the right. + ** d: True if there is a "descender" - an arrow coming from the bottom + ** of the page straight up to this node. + ** mo: "merge-out". If non-negative, this is the rail position + ** for the upward portion of a merge arrow. The merge arrow goes up + ** to the row identified by mu:. If this value is negative then + ** node has no merge children and no merge-out line is drawn. + ** mu: The id of the row which is the top of the merge-out arrow. + ** u: Draw a thick child-line out of the top of this node and up to + ** the node with an id equal to this value. 0 if it is straight to + ** the top of the page, -1 if there is no thick-line riser. + ** f: 0x01: a leaf node. + ** au: An array of integers that define thick-line risers for branches. + ** The integers are in pairs. For each pair, the first integer is + ** is the rail on which the riser should run and the second integer + ** is the id of the node upto which the riser should run. + ** mi: "merge-in". An array of integer rail positions from which + ** merge arrows should be drawn into this node. 
If the value is + ** negative, then the rail position is the absolute value of mi[] + ** and a thin merge-arrow descender is drawn to the bottom of + ** the screen. + ** h: The SHA1 hash of the object being graphed + */ + cgi_printf("var rowinfo = [\n"); + for(pRow=pGraph->pFirst; pRow; pRow=pRow->pNext){ + cgi_printf("{id:%d,bg:\"%s\",r:%d,d:%d,mo:%d,mu:%d,u:%d,f:%d,au:", + pRow->idx, /* id */ + pRow->zBgClr, /* bg */ + pRow->iRail, /* r */ + pRow->bDescender, /* d */ + pRow->mergeOut, /* mo */ + pRow->mergeUpto, /* mu */ + pRow->aiRiser[pRow->iRail], /* u */ + pRow->isLeaf ? 1 : 0 /* f */ + ); + /* u */ + cSep = '['; + for(i=0; i<GR_MAX_RAIL; i++){ + if( i==pRow->iRail ) continue; + if( pRow->aiRiser[i]>0 ){ + cgi_printf("%c%d,%d", cSep, i, pRow->aiRiser[i]); + cSep = ','; + } + } + if( cSep=='[' ) cgi_printf("["); + cgi_printf("],"); + if( colorGraph && pRow->zBgClr[0]=='#' ){ + cgi_printf("fg:\"%s\",", bg_to_fg(pRow->zBgClr)); + } + /* mi */ + cgi_printf("mi:"); + cSep = '['; + for(i=0; i<GR_MAX_RAIL; i++){ + if( pRow->mergeIn[i] ){ + int mi = i; + if( pRow->mergeDown & (1<<i) ) mi = -mi; + cgi_printf("%c%d", cSep, mi); + cSep = ','; + } + } + if( cSep=='[' ) cgi_printf("["); + cgi_printf("],h:\"%!S\"}%s", pRow->zUuid, pRow->pNext ? ",\n" : "];\n"); + } + cgi_printf("var nrail = %d\n", pGraph->mxRail+1); + graph_free(pGraph); + @ var canvasDiv; + @ var railPitch; + @ var mergeOffset; + @ var node, arrow, arrowSmall, line, mArrow, mLine, wArrow, wLine; + @ function initGraph(){ + @ var parent = gebi("timelineTable").rows[0].cells[1]; + @ parent.style.verticalAlign = "top"; + @ canvasDiv = document.createElement("div"); + @ canvasDiv.className = "tl-canvas"; + @ canvasDiv.style.position = "absolute"; + @ parent.appendChild(canvasDiv); + @ + @ var elems = {}; + @ var elemClasses = [ + @ "rail", "mergeoffset", "node", "arrow u", "arrow u sm", "line", + @ "arrow merge r", "line merge", "arrow warp", "line warp" + @ ]; + @ for( var i=0; i<elemClasses.length; i++ ){ + @ var cls = elemClasses[i]; + @ var elem = document.createElement("div"); + @ elem.className = "tl-" + cls; + @ if( cls.indexOf("line")==0 ) elem.className += " v"; + @ canvasDiv.appendChild(elem); + @ var k = cls.replace(/\s/g, "_"); + @ var r = elem.getBoundingClientRect(); + @ var w = Math.round(r.right - r.left); + @ var h = Math.round(r.bottom - r.top); + @ elems[k] = {w: w, h: h, cls: cls}; + @ } + @ node = elems.node; + @ arrow = elems.arrow_u; + @ arrowSmall = elems.arrow_u_sm; + @ line = elems.line; + @ mArrow = elems.arrow_merge_r; + @ mLine = elems.line_merge; + @ wArrow = elems.arrow_warp; + @ wLine = elems.line_warp; + @ + @ var minRailPitch = Math.ceil((node.w+line.w)/2 + mArrow.w + 1); + if( iRailPitch ){ + @ railPitch = %d(iRailPitch); + }else{ + @ railPitch = elems.rail.w; + @ railPitch -= Math.floor((nrail-1)*(railPitch-minRailPitch)/21); + } + @ railPitch = Math.max(railPitch, minRailPitch); + @ + if( PB("nomo") ){ + @ mergeOffset = 0; + }else{ + @ mergeOffset = railPitch-minRailPitch-mLine.w; + @ mergeOffset = Math.min(mergeOffset, elems.mergeoffset.w); + @ mergeOffset = mergeOffset>0 ? mergeOffset + line.w/2 : 0; + } + @ + @ var canvasWidth = (nrail-1)*railPitch + node.w; + @ canvasDiv.style.width = canvasWidth + "px"; + @ canvasDiv.style.position = "relative"; + @ } + @ function drawBox(cls,color,x0,y0,x1,y1){ + @ var n = document.createElement("div"); + @ x0 = Math.floor(x0); + @ y0 = Math.floor(y0); + @ x1 = x1 || x1===0 ? Math.floor(x1) : x0; + @ y1 = y1 || y1===0 ? 
Math.floor(y1) : y0; + @ if( x0>x1 ){ var t=x0; x0=x1; x1=t; } + @ if( y0>y1 ){ var t=y0; y0=y1; y1=t; } + @ var w = x1-x0; + @ var h = y1-y0; + @ n.style.position = "absolute"; + @ n.style.left = x0+"px"; + @ n.style.top = y0+"px"; + @ if( w ) n.style.width = w+"px"; + @ if( h ) n.style.height = h+"px"; + @ if( color ) n.style.backgroundColor = color; + @ n.className = "tl-"+cls; + @ canvasDiv.appendChild(n); + @ return n; + @ } + @ function absoluteY(obj){ @ var top = 0; @ if( obj.offsetParent ){ @ do{ @ top += obj.offsetTop; @ }while( obj = obj.offsetParent ); @ } @ return top; @ } - @ function absoluteX(id){ - @ var obj = document.getElementById(id); - @ if( !obj ) return; - @ var left = 0; - @ if( obj.offsetParent ){ - @ do{ - @ left += obj.offsetLeft; - @ }while( obj = obj.offsetParent ); - @ } - @ return left; - @ } - @ function drawUpArrow(x,y0,y1){ - @ drawBox("black",x,y0,x+1,y1); - @ if( y0+8>=y1 ){ - @ drawBox("black",x-1,y0+1,x+2,y0+2); - @ drawBox("black",x-2,y0+3,x+3,y0+4); - @ }else{ - @ drawBox("black",x-1,y0+2,x+2,y0+4); - @ drawBox("black",x-2,y0+5,x+3,y0+7); - @ } - @ } - @ function drawThinArrow(y,xFrom,xTo){ - @ if( xFrom<xTo ){ - @ drawBox("black",xFrom,y,xTo,y); - @ drawBox("black",xTo-4,y-1,xTo-2,y+1); - @ if( xTo>xFrom-8 ) drawBox("black",xTo-6,y-2,xTo-5,y+2); - @ }else{ - @ drawBox("black",xTo,y,xFrom,y); - @ drawBox("black",xTo+2,y-1,xTo+4,y+1); - @ if( xTo+8<xFrom ) drawBox("black",xTo+5,y-2,xTo+6,y+2); - @ } - @ } - @ function drawThinLine(x0,y0,x1,y1){ - @ drawBox("black",x0,y0,x1,y1); - @ } - @ function drawNode(p, left, btm){ - @ drawBox("black",p.x-5,p.y-5,p.x+6,p.y+6); - @ drawBox(p.bg,p.x-4,p.y-4,p.x+5,p.y+5); - @ if( p.u>0 ){ - @ var u = rowinfo[p.u-1]; - @ drawUpArrow(p.x, u.y+6, p.y-5); - @ } - @ if( p.d ){ - @ drawUpArrow(p.x, p.y+6, btm); - @ } - @ if( p.mo>=0 ){ - @ var x1 = p.mo*20 + left; - @ var y1 = p.y-3; - @ var x0 = x1>p.x ? p.x+7 : p.x-6; - @ var u = rowinfo[p.mu-1]; - @ var y0 = u.y+5; - @ drawThinLine(x0,y1,x1,y1); - @ drawThinLine(x1,y0,x1,y1); - @ } - @ var n = p.au.length; - @ for(var i=0; i<n; i+=2){ - @ var x1 = p.au[i]*20 + left; - @ var x0 = x1>p.x ? 
p.x+7 : p.x-6; - @ drawBox("black",x0,p.y,x1,p.y+1); - @ var u = rowinfo[p.au[i+1]-1]; - @ drawUpArrow(x1, u.y+6, p.y); - @ } - @ for(var j in p.mi){ - @ var y0 = p.y+5; - @ var mx = p.mi[j]*20 + left; - @ if( mx>p.x ){ - @ drawThinArrow(y0,mx,p.x+5); - @ }else{ - @ drawThinArrow(y0,mx,p.x-5); - @ } - @ } - @ } - @ function renderGraph(){ - @ var canvasDiv = document.getElementById("canvas"); - @ while( canvasDiv.hasChildNodes() ){ - @ canvasDiv.removeChild(canvasDiv.firstChild); - @ } - @ var canvasY = absoluteY("canvas"); - @ var left = absoluteX(rowinfo[0].id) - absoluteX("canvas") + 15; - @ var width = nrail*20; - @ for(var i in rowinfo){ - @ rowinfo[i].y = absoluteY(rowinfo[i].id) + 10 - canvasY; - @ rowinfo[i].x = left + rowinfo[i].r*20; - @ } - @ var btm = rowinfo[rowinfo.length-1].y + 20; - @ if( btm<32768 ){ - @ canvasDiv.innerHTML = '<canvas id="timeline-canvas" '+ - @ 'style="position:absolute;left:'+(left-5)+'px;"' + - @ ' width="'+width+'" height="'+btm+'"></canvas>'; - @ realCanvas = document.getElementById('timeline-canvas'); - @ }else{ - @ realCanvas = 0; - @ } - @ var context; - @ if( realCanvas && realCanvas.getContext - @ && (context = realCanvas.getContext('2d'))) { - @ drawBox = function(color,x0,y0,x1,y1) { - @ if( y0>32767 || y1>32767 ) return; - @ if( x0>x1 ){ var t=x0; x0=x1; x1=t; } - @ if( y0>y1 ){ var t=y0; y0=y1; y1=t; } - @ if(isNaN(x0) || isNaN(y0) || isNaN(x1) || isNaN(y1)) return; - @ context.fillStyle = color; - @ context.fillRect(x0-left+5,y0,x1-x0+1,y1-y0+1); - @ }; - @ } - @ for(var i in rowinfo){ - @ drawNode(rowinfo[i], left, btm); - @ } - @ } - @ var lastId = rowinfo[rowinfo.length-1].id; - @ var lastY = 0; - @ function checkHeight(){ - @ var h = absoluteY(lastId); + @ function miLineY(p){ + @ return p.y + node.h - mLine.w - 1; + @ } + @ function drawLine(elem,color,x0,y0,x1,y1){ + @ var cls = elem.cls + " "; + @ if( x1===null ){ + @ x1 = x0+elem.w; + @ cls += "v"; + @ }else{ + @ y1 = y0+elem.w; + @ cls += "h"; + @ } + @ drawBox(cls,color,x0,y0,x1,y1); + @ } + @ function drawUpArrow(from,to,color){ + @ var y = to.y + node.h; + @ var arrowSpace = from.y - y + (!from.id || from.r!=to.r ? node.h/2 : 0); + @ var arw = arrowSpace < arrow.h*1.5 ? arrowSmall : arrow; + @ var x = to.x + (node.w-line.w)/2; + @ var y0 = from.y + node.h/2; + @ var y1 = Math.ceil(to.y + node.h + arw.h/2); + @ drawLine(line,color,x,y0,null,y1); + @ x = to.x + (node.w-arw.w)/2; + @ var n = drawBox(arw.cls,null,x,y); + @ n.style.borderBottomColor = color; + @ } + @ function drawMergeLine(x0,y0,x1,y1){ + @ drawLine(mLine,null,x0,y0,x1,y1); + @ } + @ function drawMergeArrow(p,rail){ + @ var x0 = rail*railPitch + node.w/2; + @ if( rail in mergeLines ){ + @ x0 += mergeLines[rail]; + @ if( p.r<rail ) x0 += mLine.w; + @ }else{ + @ x0 += (p.r<rail ? -1 : 1)*line.w/2; + @ } + @ var x1 = mArrow.w ? mArrow.w/2 : -node.w/2; + @ x1 = p.x + (p.r<rail ? node.w + Math.ceil(x1) : -x1); + @ var y = miLineY(p); + @ drawMergeLine(x0,y,x1,null); + @ var x = p.x + (p.r<rail ? node.w : -mArrow.w); + @ var cls = "arrow merge " + (p.r<rail ? 
"l" : "r"); + @ drawBox(cls,null,x,y+(mLine.w-mArrow.h)/2); + @ } + @ function drawNode(p, btm){ + @ if( p.u>0 ) drawUpArrow(p,rowinfo[p.u-1],p.fg); + @ var cls = node.cls; + @ if( p.mi.length ) cls += " merge"; + @ if( p.f&1 ) cls += " leaf"; + @ var n = drawBox(cls,p.bg,p.x,p.y); + @ n.id = "tln"+p.id; + @ n.onclick = clickOnNode; + @ n.style.zIndex = 10; + if( !omitDescenders ){ + @ if( p.u==0 ) drawUpArrow(p,{x: p.x, y: -node.h},p.fg); + @ if( p.d ) drawUpArrow({x: p.x, y: btm-node.h/2},p,p.fg); + } + @ if( p.mo>=0 ){ + @ var x0 = p.x + node.w/2; + @ var x1 = p.mo*railPitch + node.w/2; + @ var u = rowinfo[p.mu-1]; + @ var y1 = miLineY(u); + @ if( p.u<0 || p.mo!=p.r ){ + @ x1 += mergeLines[p.mo] = -mLine.w/2; + @ var y0 = p.y+2; + @ if( p.r!=p.mo ) drawMergeLine(x0,y0,x1+(x0<x1 ? mLine.w : 0),null); + @ drawMergeLine(x1,y0+mLine.w,null,y1); + @ }else if( mergeOffset ){ + @ mergeLines[p.mo] = u.r<p.r ? -mergeOffset-mLine.w : mergeOffset; + @ x1 += mergeLines[p.mo]; + @ drawMergeLine(x1,p.y+node.h/2,null,y1); + @ }else{ + @ delete mergeLines[p.mo]; + @ } + @ } + @ for( var i=0; i<p.au.length; i+=2 ){ + @ var rail = p.au[i]; + @ var x0 = p.x + node.w/2; + @ var x1 = rail*railPitch + (node.w-line.w)/2; + @ if( x0<x1 ){ + @ x0 = Math.ceil(x0); + @ x1 += line.w; + @ } + @ var y0 = p.y + (node.h-line.w)/2; + @ var u = rowinfo[p.au[i+1]-1]; + @ if( u.id<p.id ){ + @ drawLine(line,u.fg,x0,y0,x1,null); + @ drawUpArrow(p,u,u.fg); + @ }else{ + @ var y1 = u.y + (node.h-line.w)/2; + @ drawLine(wLine,u.fg,x0,y0,x1,null); + @ drawLine(wLine,u.fg,x1-line.w,y0,null,y1+line.w); + @ drawLine(wLine,u.fg,x1,y1,u.x-wArrow.w/2,null); + @ var x = u.x-wArrow.w; + @ var y = u.y+(node.h-wArrow.h)/2; + @ var n = drawBox(wArrow.cls,null,x,y); + @ if( u.fg ) n.style.borderLeftColor = u.fg; + @ } + @ } + @ for( var i=0; i<p.mi.length; i++ ){ + @ var rail = p.mi[i]; + @ if( rail<0 ){ + @ rail = -rail; + @ mergeLines[rail] = -mLine.w/2; + @ var x = rail*railPitch + (node.w-mLine.w)/2; + @ drawMergeLine(x,miLineY(p),null,btm); + @ } + @ drawMergeArrow(p,rail); + @ } + @ } + @ var mergeLines; + @ function renderGraph(){ + @ mergeLines = {}; + @ canvasDiv.innerHTML = ""; + @ var canvasY = absoluteY(canvasDiv); + @ for( var i=0; i<rowinfo.length; i++ ){ + @ rowinfo[i].y = absoluteY(gebi("m"+rowinfo[i].id)) - canvasY; + @ rowinfo[i].x = rowinfo[i].r*railPitch; + @ } + @ var tlBtm = document.querySelector(".timelineBottom"); + @ if( tlBtm.offsetHeight<node.h ){ + @ tlBtm.style.height = node.h + "px"; + @ } + @ var btm = absoluteY(tlBtm) - canvasY + tlBtm.offsetHeight; + @ for( var i=rowinfo.length-1; i>=0; i-- ){ + @ drawNode(rowinfo[i], btm); + @ } + @ } + @ var selRow; + @ function clickOnNode(){ + @ var p = rowinfo[parseInt(this.id.match(/\d+$/)[0], 10)-1]; + @ if( !selRow ){ + @ selRow = p; + @ this.className += " sel"; + @ canvasDiv.className += " sel"; + @ }else if( selRow==p ){ + @ selRow = null; + @ this.className = this.className.replace(" sel", ""); + @ canvasDiv.className = canvasDiv.className.replace(" sel", ""); + @ }else{ + if( fileDiff ){ + @ location.href="%R/fdiff?v1="+selRow.h+"&v2="+p.h+"&sbs=1"; + }else{ + if( db_get_boolean("show-version-diffs", 0)==0 ){ + @ location.href="%R/vdiff?from="+selRow.h+"&to="+p.h+"&sbs=0"; + }else{ + @ location.href="%R/vdiff?from="+selRow.h+"&to="+p.h+"&sbs=1"; + } + } + @ } + @ } + @ var lastRow = gebi("m"+rowinfo[rowinfo.length-1].id); + @ var lastY = 0; + @ function checkHeight(){ + @ var h = absoluteY(lastRow); @ if( h!=lastY ){ @ renderGraph(); @ lastY = h; @ } - @ 
setTimeout("checkHeight();", 1000); + @ setTimeout(checkHeight, 1000); @ } + @ initGraph(); @ checkHeight(); - @ </script> + @ }())</script> } } /* ** Create a temporary table suitable for storing timeline data. */ static void timeline_temp_table(void){ - static const char zSql[] = + static const char zSql[] = @ CREATE TEMP TABLE IF NOT EXISTS timeline( @ rid INTEGER PRIMARY KEY, @ uuid TEXT, @ timestamp TEXT, @ comment TEXT, @@ -538,48 +1030,41 @@ @ isleaf BOOLEAN, @ bgcolor TEXT, @ etype TEXT, @ taglist TEXT, @ tagid INTEGER, - @ short TEXT + @ short TEXT, + @ sortby REAL @ ) ; - db_multi_exec(zSql); + db_multi_exec("%s", zSql/*safe-for-%s*/); } /* ** Return a pointer to a constant string that forms the basis ** for a timeline query for the WWW interface. */ const char *timeline_query_for_www(void){ - static char *zBase = 0; - static const char zBaseSql[] = + static const char zBase[] = @ SELECT - @ blob.rid, - @ uuid, - @ datetime(event.mtime,'localtime') AS timestamp, - @ coalesce(ecomment, comment), - @ coalesce(euser, user), - @ NOT EXISTS(SELECT 1 FROM plink - @ WHERE pid=blob.rid - @ AND coalesce((SELECT value FROM tagxref - @ WHERE tagid=%d AND rid=plink.pid), 'trunk') - @ = coalesce((SELECT value FROM tagxref - @ WHERE tagid=%d AND rid=plink.cid), 'trunk')), - @ bgcolor, - @ event.type, + @ blob.rid AS blobRid, + @ uuid AS uuid, + @ datetime(event.mtime,toLocal()) AS timestamp, + @ coalesce(ecomment, comment) AS comment, + @ coalesce(euser, user) AS user, + @ blob.rid IN leaf AS leaf, + @ bgcolor AS bgColor, + @ event.type AS eventType, @ (SELECT group_concat(substr(tagname,5), ', ') FROM tag, tagxref @ WHERE tagname GLOB 'sym-*' AND tag.tagid=tagxref.tagid - @ AND tagxref.rid=blob.rid AND tagxref.tagtype>0), - @ tagid, - @ brief - @ FROM event JOIN blob + @ AND tagxref.rid=blob.rid AND tagxref.tagtype>0) AS tags, + @ tagid AS tagid, + @ brief AS brief, + @ event.mtime AS mtime + @ FROM event CROSS JOIN blob @ WHERE blob.rid=event.objid ; - if( zBase==0 ){ - zBase = mprintf(zBaseSql, TAG_BRANCH, TAG_BRANCH); - } return zBase; } /* ** Generate a submenu element with a single parameter change. @@ -595,368 +1080,804 @@ url_render(pUrl, zParam, zValue, zRemove, 0)); } /* -** zDate is a localtime date. Insert records into the -** "timeline" table to cause <hr> to be inserted before and after -** entries of that date. -*/ -static void timeline_add_dividers(const char *zDate){ - db_multi_exec( - "INSERT INTO timeline(rid,timestamp,etype)" - "VALUES(-1,datetime(%Q,'-1 second') || '.9','div')", - zDate - ); - db_multi_exec( - "INSERT INTO timeline(rid,timestamp,etype)" - "VALUES(-2,datetime(%Q) || '.1','div')", - zDate - ); -} - +** Convert a symbolic name used as an argument to the a=, b=, or c= +** query parameters of timeline into a julianday mtime value. +*/ +double symbolic_name_to_mtime(const char *z){ + double mtime; + int rid; + if( z==0 ) return -1.0; + if( fossil_isdate(z) ){ + mtime = db_double(0.0, "SELECT julianday(%Q,fromLocal())", z); + if( mtime>0.0 ) return mtime; + } + rid = symbolic_name_to_rid(z, "*"); + if( rid ){ + mtime = db_double(0.0, "SELECT mtime FROM event WHERE objid=%d", rid); + }else{ + mtime = db_double(-1.0, + "SELECT max(event.mtime) FROM event, tag, tagxref" + " WHERE tag.tagname GLOB 'event-%q*'" + " AND tagxref.tagid=tag.tagid AND tagxref.tagtype" + " AND event.objid=tagxref.rid", + z + ); + } + return mtime; +} + +/* +** zDate is a localtime date. Insert records into the +** "timeline" table to cause <hr> to be inserted on zDate. 
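The divider helper that follows picks the existing timeline row whose sortby value lies closest to the requested time and only inserts a synthetic 'div' row when no such row exists. A minimal standalone sketch of that nearest-row lookup, written against the public SQLite C API rather than Fossil's db_* wrappers (the table layout here is illustrative, not the repository schema):

#include <stdio.h>
#include <sqlite3.h>

/* Return the rid of the row whose sortby is closest to rDate, or -1 if
** the table is empty.  Same ORDER BY abs(sortby-?) LIMIT 1 idiom as the
** divider helper below. */
static int nearest_row(sqlite3 *db, double rDate){
  sqlite3_stmt *pStmt;
  int rid = -1;
  const char *zSql = "SELECT rid FROM timeline ORDER BY abs(sortby-?1) LIMIT 1";
  if( sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0)!=SQLITE_OK ) return -1;
  sqlite3_bind_double(pStmt, 1, rDate);
  if( sqlite3_step(pStmt)==SQLITE_ROW ) rid = sqlite3_column_int(pStmt, 0);
  sqlite3_finalize(pStmt);
  return rid;
}

int main(void){
  sqlite3 *db;
  sqlite3_open(":memory:", &db);
  sqlite3_exec(db,
    "CREATE TABLE timeline(rid INTEGER PRIMARY KEY, sortby REAL);"
    "INSERT INTO timeline VALUES(100, 2457000.5),(101, 2457010.5);",
    0, 0, 0);
  printf("nearest rid: %d\n", nearest_row(db, 2457009.9)); /* prints 101 */
  sqlite3_close(db);
  return 0;
}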
+*/ +static int timeline_add_divider(double rDate){ + int rid = db_int(-1, + "SELECT rid FROM timeline ORDER BY abs(sortby-%.16g) LIMIT 1", rDate + ); + if( rid>0 ) return rid; + db_multi_exec( + "INSERT INTO timeline(rid,sortby,etype) VALUES(-1,%.16g,'div')", + rDate + ); + return -1; +} + +/* +** Return all possible names for file zUuid. +*/ +char *names_of_file(const char *zUuid){ + Stmt q; + Blob out; + const char *zSep = ""; + db_prepare(&q, + "SELECT DISTINCT filename.name FROM mlink, filename" + " WHERE mlink.fid=(SELECT rid FROM blob WHERE uuid=%Q)" + " AND filename.fnid=mlink.fnid", + zUuid + ); + blob_zero(&out); + while( db_step(&q)==SQLITE_ROW ){ + const char *zFN = db_column_text(&q, 0); + blob_appendf(&out, "%s%z%h</a>", zSep, + href("%R/finfo?name=%t", zFN), zFN); + zSep = " or "; + } + db_finalize(&q); + return blob_str(&out); +} + + +/* +** Add the select/option box to the timeline submenu that is used to +** set the y= parameter that determines which elements to display +** on the timeline. +*/ +static void timeline_y_submenu(int isDisabled){ + static int i = 0; + static const char *az[12]; + if( i==0 ){ + az[0] = "all"; + az[1] = "Any Type"; + i = 2; + if( g.perm.Read ){ + az[i++] = "ci"; + az[i++] = "Check-ins"; + az[i++] = "g"; + az[i++] = "Tags"; + } + if( g.perm.RdWiki ){ + az[i++] = "e"; + az[i++] = "Tech Notes"; + } + if( g.perm.RdTkt ){ + az[i++] = "t"; + az[i++] = "Tickets"; + } + if( g.perm.RdWiki ){ + az[i++] = "w"; + az[i++] = "Wiki"; + } + assert( i<=ArraySize(az) ); + } + if( i>2 ){ + style_submenu_multichoice("y", i/2, az, isDisabled); + } +} /* ** WEBPAGE: timeline ** ** Query parameters: ** -** a=TIMESTAMP after this date -** b=TIMESTAMP before this date. -** c=TIMESTAMP "circa" this date. -** n=COUNT number of events in output -** p=RID artifact RID and up to COUNT parents and ancestors -** d=RID artifact RID and up to COUNT descendants +** a=TIMEORTAG after this event +** b=TIMEORTAG before this event +** c=TIMEORTAG "circa" this event +** m=TIMEORTAG mark this event +** n=COUNT max number of events in output +** p=UUID artifact and up to COUNT parents and ancestors +** d=UUID artifact and up to COUNT descendants +** dp=UUID The same as d=UUID&p=UUID ** t=TAGID show only check-ins with the given tagid +** r=TAGID show check-ins related to tagid ** u=USER only if belonging to this user -** y=TYPE 'ci', 'w', 't' +** y=TYPE 'ci', 'w', 't', 'e', or (default) 'all' ** s=TEXT string search (comment and brief) ** ng Suppress the graph if present +** nd Suppress "divider" lines +** v Show details of files changed +** f=UUID Show family (immediate parents and children) of UUID +** from=UUID Path from... +** to=UUID ... to this +** shortest ... show only the shortest path +** uf=FUUID Show only check-ins that use given file version +** brbg Background color from branch name +** ubg Background color from user +** namechng Show only check-ins that filename changes +** forks Show only forks and their children +** ym=YYYY-MM Show only events for the given year/month. +** ymd=YYYY-MM-DD Show only events on the given day +** datefmt=N Override the date format +** bisect Show the check-ins that are in the current bisect ** ** p= and d= can appear individually or together. If either p= or d= ** appear, then u=, y=, a=, and b= are ignored. ** -** If a= and b= appear, only a= is used. If neither appear, the most -** recent events are choosen. +** If both a= and b= appear then both upper and lower bounds are honored. ** -** If n= is missing, the default count is 20. 
+** If n= is missing, the default count is 50 for most queries but +** drops to 11 for c= queries. */ void page_timeline(void){ Stmt q; /* Query used to generate the timeline */ Blob sql; /* text of SQL used to generate timeline */ Blob desc; /* Description of the timeline */ - int nEntry = atoi(PD("n","20")); /* Max number of entries on timeline */ - int p_rid = name_to_rid(P("p")); /* artifact p and its parents */ - int d_rid = name_to_rid(P("d")); /* artifact d and its descendants */ + int nEntry; /* Max number of entries on timeline */ + int p_rid = name_to_typed_rid(P("p"),"ci"); /* artifact p and its parents */ + int d_rid = name_to_typed_rid(P("d"),"ci"); /* artifact d and descendants */ + int f_rid = name_to_typed_rid(P("f"),"ci"); /* artifact f and close family */ const char *zUser = P("u"); /* All entries by this user if not NULL */ const char *zType = PD("y","all"); /* Type of events. All if NULL */ const char *zAfter = P("a"); /* Events after this time */ const char *zBefore = P("b"); /* Events before this time */ const char *zCirca = P("c"); /* Events near this time */ + const char *zMark = P("m"); /* Mark this event or an event this time */ const char *zTagName = P("t"); /* Show events with this tag */ + const char *zBrName = P("r"); /* Show events related to this tag */ const char *zSearch = P("s"); /* Search string */ - HQuery url; /* URL for various branch links */ + const char *zUses = P("uf"); /* Only show check-ins hold this file */ + const char *zYearMonth = P("ym"); /* Show check-ins for the given YYYY-MM */ + const char *zYearWeek = P("yw"); /* Check-ins for YYYY-WW (week-of-year) */ + const char *zDay = P("ymd"); /* Check-ins for the day YYYY-MM-DD */ + int useDividers = P("nd")==0; /* Show dividers if "nd" is missing */ + int renameOnly = P("namechng")!=0; /* Show only check-ins that rename files */ + int forkOnly = PB("forks"); /* Show only forks and their children */ + int bisectOnly = PB("bisect"); /* Show the check-ins of the bisect */ int tagid; /* Tag ID */ - int tmFlags; /* Timeline flags */ + int tmFlags = 0; /* Timeline flags */ + const char *zThisTag = 0; /* Suppress links to this tag */ + const char *zThisUser = 0; /* Suppress links to this user */ + HQuery url; /* URL for various branch links */ + int from_rid = name_to_typed_rid(P("from"),"ci"); /* from= for paths */ + int to_rid = name_to_typed_rid(P("to"),"ci"); /* to= for path timelines */ + int noMerge = P("shortest")==0; /* Follow merge links if shorter */ + int me_rid = name_to_typed_rid(P("me"),"ci"); /* me= for common ancestory */ + int you_rid = name_to_typed_rid(P("you"),"ci");/* you= for common ancst */ + int pd_rid; + double rBefore, rAfter, rCirca; /* Boundary times */ + const char *z; + char *zOlderButton = 0; /* URL for Older button at the bottom */ + int selectedRid = -9999999; /* Show a highlight on this RID */ + int disableY = 0; /* Disable type selector on submenu */ + + /* Set number of rows to display */ + z = P("n"); + if( z ){ + if( fossil_strcmp(z,"all")==0 ){ + nEntry = 0; + }else{ + nEntry = atoi(z); + if( nEntry<=0 ){ + cgi_replace_query_parameter("n","10"); + nEntry = 10; + } + } + }else if( zCirca ){ + cgi_replace_query_parameter("n","11"); + nEntry = 11; + }else{ + cgi_replace_query_parameter("n","50"); + nEntry = 50; + } /* To view the timeline, must have permission to read project data. 
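The n= handling just above reduces to a small decision: "all" means unlimited, a bad value falls back to 10, and the default is 50 except for c= (circa) timelines, which center 11 rows around the mark. Restated as a standalone helper (the function name and the isCirca flag are illustrative, not part of Fossil):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Decide how many timeline rows to show.  zN is the raw "n=" query
** parameter (possibly NULL); isCirca is true when c= was given.
** A return of 0 means "no limit", matching n=all above. */
static int parse_entry_limit(const char *zN, int isCirca){
  if( zN ){
    if( strcmp(zN, "all")==0 ) return 0;
    int n = atoi(zN);
    return n>0 ? n : 10;          /* fall back to 10 on a bad value */
  }
  return isCirca ? 11 : 50;       /* defaults */
}

int main(void){
  printf("%d %d %d %d\n",
         parse_entry_limit("all", 0),   /* 0  */
         parse_entry_limit("-3", 0),    /* 10 */
         parse_entry_limit(0, 1),       /* 11 */
         parse_entry_limit(0, 0));      /* 50 */
  return 0;
}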
*/ + pd_rid = name_to_typed_rid(P("dp"),"ci"); + if( pd_rid ){ + p_rid = d_rid = pd_rid; + } login_check_credentials(); - if( !g.okRead && !g.okRdTkt && !g.okRdWiki ){ login_needed(); return; } - if( zTagName && g.okRead ){ - tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname='sym-%q'", zTagName); + if( (!g.perm.Read && !g.perm.RdTkt && !g.perm.RdWiki) + || (bisectOnly && !g.perm.Setup) + ){ + login_needed(g.anon.Read && g.anon.RdTkt && g.anon.RdWiki); + return; + } + url_initialize(&url, "timeline"); + cgi_query_parameters_to_url(&url); + if( zTagName && g.perm.Read ){ + tagid = db_int(-1,"SELECT tagid FROM tag WHERE tagname='sym-%q'",zTagName); + zThisTag = zTagName; + style_submenu_element("Related", "Related", "%s", + url_render(&url, "r", zTagName, "t", 0)); + }else if( zBrName && g.perm.Read ){ + tagid = db_int(-1,"SELECT tagid FROM tag WHERE tagname='sym-%q'",zBrName); + zThisTag = zBrName; + style_submenu_element("Branch Only", "only", "%s", + url_render(&url, "t", zBrName, "r", 0)); }else{ tagid = 0; + } + if( zMark && zMark[0]==0 ){ + if( zAfter ) zMark = zAfter; + if( zBefore ) zMark = zBefore; + if( zCirca ) zMark = zCirca; + } + if( tagid + && db_int(0,"SELECT count(*) FROM tagxref WHERE tagid=%d",tagid)<=nEntry + ){ + nEntry = -1; + zCirca = 0; } if( zType[0]=='a' ){ - tmFlags = TIMELINE_BRIEF | TIMELINE_GRAPH; + tmFlags |= TIMELINE_BRIEF | TIMELINE_GRAPH; }else{ - tmFlags = TIMELINE_GRAPH; + tmFlags |= TIMELINE_GRAPH; } - if( P("ng")!=0 || zSearch!=0 ){ + if( PB("ng") || zSearch!=0 ){ tmFlags &= ~TIMELINE_GRAPH; + } + if( PB("brbg") ){ + tmFlags |= TIMELINE_BRCOLOR; + } + if( PB("unhide") ){ + tmFlags |= TIMELINE_UNHIDE; + } + if( PB("ubg") ){ + tmFlags |= TIMELINE_UCOLOR; + } + if( zUses!=0 ){ + int ufid = db_int(0, "SELECT rid FROM blob WHERE uuid GLOB '%q*'", zUses); + if( ufid ){ + zUses = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", ufid); + db_multi_exec("CREATE TEMP TABLE usesfile(rid INTEGER PRIMARY KEY)"); + compute_uses_file("usesfile", ufid, 0); + zType = "ci"; + disableY = 1; + }else{ + zUses = 0; + } + } + if( renameOnly ){ + db_multi_exec( + "CREATE TEMP TABLE rnfile(rid INTEGER PRIMARY KEY);" + "INSERT OR IGNORE INTO rnfile" + " SELECT mid FROM mlink WHERE pfnid>0 AND pfnid!=fnid;" + ); + disableY = 1; + } + if( forkOnly ){ + db_multi_exec( + "CREATE TEMP TABLE rnfork(rid INTEGER PRIMARY KEY);\n" + "INSERT OR IGNORE INTO rnfork(rid)\n" + " SELECT pid FROM plink\n" + " WHERE (SELECT value FROM tagxref WHERE tagid=%d AND rid=cid)==" + " (SELECT value FROM tagxref WHERE tagid=%d AND rid=pid)\n" + " GROUP BY pid" + " HAVING count(*)>1;\n" + "INSERT OR IGNORE INTO rnfork(rid)" + " SELECT cid FROM plink\n" + " WHERE (SELECT value FROM tagxref WHERE tagid=%d AND rid=cid)==" + " (SELECT value FROM tagxref WHERE tagid=%d AND rid=pid)\n" + " AND pid IN rnfork;", + TAG_BRANCH, TAG_BRANCH, TAG_BRANCH, TAG_BRANCH + ); + tmFlags |= TIMELINE_UNHIDE; + zType = "ci"; + disableY = 1; + } + if( bisectOnly + && fossil_strcmp(g.zIpAddr,"127.0.0.1")==0 + && db_open_local(0) + ){ + int iCurrent = db_lget_int("checkout",0); + bisect_create_bilog_table(iCurrent); + tmFlags |= TIMELINE_UNHIDE | TIMELINE_BISECT; + zType = "ci"; + disableY = 1; + }else{ + bisectOnly = 0; } style_header("Timeline"); login_anonymous_available(); timeline_temp_table(); blob_zero(&sql); blob_zero(&desc); blob_append(&sql, "INSERT OR IGNORE INTO timeline ", -1); blob_append(&sql, timeline_query_for_www(), -1); - if( (p_rid || d_rid) && g.okRead ){ + if( PB("fc") || PB("v") || PB("detail") ){ + tmFlags 
|= TIMELINE_FCHANGES; + } + if( (tmFlags & TIMELINE_UNHIDE)==0 ){ + blob_append_sql(&sql, + " AND NOT EXISTS(SELECT 1 FROM tagxref" + " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid)\n", + TAG_HIDDEN + ); + } + if( ((from_rid && to_rid) || (me_rid && you_rid)) && g.perm.Read ){ + /* If from= and to= are present, display all nodes on a path connecting + ** the two */ + PathNode *p = 0; + const char *zFrom = 0; + const char *zTo = 0; + + if( from_rid && to_rid ){ + p = path_shortest(from_rid, to_rid, noMerge, 0); + zFrom = P("from"); + zTo = P("to"); + }else{ + if( path_common_ancestor(me_rid, you_rid) ){ + p = path_first(); + } + zFrom = P("me"); + zTo = P("you"); + } + blob_append(&sql, " AND event.objid IN (0", -1); + while( p ){ + blob_append_sql(&sql, ",%d", p->rid); + p = p->u.pTo; + } + blob_append(&sql, ")", -1); + path_reset(); + blob_append(&desc, "All nodes on the path from ", -1); + blob_appendf(&desc, "%z[%h]</a>", href("%R/info/%h", zFrom), zFrom); + blob_append(&desc, " to ", -1); + blob_appendf(&desc, "%z[%h]</a>", href("%R/info/%h",zTo), zTo); + tmFlags |= TIMELINE_DISJOINT; + db_multi_exec("%s", blob_sql_text(&sql)); + }else if( (p_rid || d_rid) && g.perm.Read ){ /* If p= or d= is present, ignore all other parameters other than n= */ char *zUuid; int np, nd; + tmFlags |= TIMELINE_DISJOINT; if( p_rid && d_rid ){ if( p_rid!=d_rid ) p_rid = d_rid; if( P("n")==0 ) nEntry = 10; } db_multi_exec( "CREATE TEMP TABLE IF NOT EXISTS ok(rid INTEGER PRIMARY KEY)" ); zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", p_rid ? p_rid : d_rid); - blob_appendf(&sql, " AND event.objid IN ok"); + blob_append_sql(&sql, " AND event.objid IN ok"); nd = 0; if( d_rid ){ compute_descendants(d_rid, nEntry+1); nd = db_int(0, "SELECT count(*)-1 FROM ok"); - if( nd>=0 ){ - db_multi_exec("%s", blob_str(&sql)); - blob_appendf(&desc, "%d descendant%s", nd,(1==nd)?"":"s"); - } - timeline_add_dividers( - db_text("1","SELECT datetime(mtime,'localtime') FROM event" - " WHERE objid=%d", d_rid) - ); + if( nd>=0 ) db_multi_exec("%s", blob_sql_text(&sql)); + if( nd>0 ) blob_appendf(&desc, "%d descendant%s", nd,(1==nd)?"":"s"); + if( useDividers ) selectedRid = d_rid; db_multi_exec("DELETE FROM ok"); } if( p_rid ){ - compute_ancestors(p_rid, nEntry+1); + compute_ancestors(p_rid, nEntry+1, 0); np = db_int(0, "SELECT count(*)-1 FROM ok"); if( np>0 ){ if( nd>0 ) blob_appendf(&desc, " and "); blob_appendf(&desc, "%d ancestors", np); - db_multi_exec("%s", blob_str(&sql)); - } - if( d_rid==0 ){ - timeline_add_dividers( - db_text("1","SELECT datetime(mtime,'localtime') FROM event" - " WHERE objid=%d", p_rid) - ); - } - } - if( g.okHistory ){ - blob_appendf(&desc, " of <a href='%s/info/%s'>[%.10s]</a>", - g.zBaseURL, zUuid, zUuid); - }else{ - blob_appendf(&desc, " of check-in [%.10s]", zUuid); - } - }else{ - int n; - const char *zEType = "event"; - char *zDate; - char *zNEntry = mprintf("%d", nEntry); - url_initialize(&url, "timeline"); - url_add_parameter(&url, "n", zNEntry); - if( tagid>0 ){ - zType = "ci"; - url_add_parameter(&url, "t", zTagName); - blob_appendf(&sql, " AND EXISTS (SELECT 1 FROM tagxref WHERE tagid=%d" - " AND tagtype>0 AND rid=blob.rid)", - tagid); - } - if( (zType[0]=='w' && !g.okRdWiki) - || (zType[0]=='t' && !g.okRdTkt) - || (zType[0]=='c' && !g.okRead) + db_multi_exec("%s", blob_sql_text(&sql)); + } + if( useDividers ) selectedRid = p_rid; + } + blob_appendf(&desc, " of %z[%S]</a>", + href("%R/info/%!S", zUuid), zUuid); + if( d_rid ){ + if( p_rid ){ + /* If both p= and d= are set, we don't 
have the uuid of d yet. */ + zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", d_rid); + } + } + style_submenu_entry("n","Max:",4,0); + timeline_y_submenu(1); + style_submenu_binary("v","With Files","Without Files", + zType[0]!='a' && zType[0]!='c'); + }else if( f_rid && g.perm.Read ){ + /* If f= is present, ignore all other parameters other than n= */ + char *zUuid; + db_multi_exec( + "CREATE TEMP TABLE IF NOT EXISTS ok(rid INTEGER PRIMARY KEY);" + "INSERT INTO ok VALUES(%d);" + "INSERT OR IGNORE INTO ok SELECT pid FROM plink WHERE cid=%d;" + "INSERT OR IGNORE INTO ok SELECT cid FROM plink WHERE pid=%d;", + f_rid, f_rid, f_rid + ); + blob_append_sql(&sql, " AND event.objid IN ok"); + db_multi_exec("%s", blob_sql_text(&sql)); + if( useDividers ) selectedRid = f_rid; + blob_appendf(&desc, "Parents and children of check-in "); + zUuid = db_text("", "SELECT uuid FROM blob WHERE rid=%d", f_rid); + blob_appendf(&desc, "%z[%S]</a>", href("%R/info/%!S", zUuid), zUuid); + tmFlags |= TIMELINE_DISJOINT; + style_submenu_binary("v","With Files","Without Files", + zType[0]!='a' && zType[0]!='c'); + if( (tmFlags & TIMELINE_UNHIDE)==0 ){ + timeline_submenu(&url, "Unhide", "unhide", "", 0); + } + }else{ + /* Otherwise, a timeline based on a span of time */ + int n, nBefore, nAfter; + const char *zEType = "timeline item"; + char *zDate; + if( zUses ){ + blob_append_sql(&sql, " AND event.objid IN usesfile "); + } + if( renameOnly ){ + blob_append_sql(&sql, " AND event.objid IN rnfile "); + } + if( forkOnly ){ + blob_append_sql(&sql, " AND event.objid IN rnfork "); + } + if( bisectOnly ){ + blob_append_sql(&sql, " AND event.objid IN (SELECT rid FROM bilog) "); + } + if( zYearMonth ){ + blob_append_sql(&sql, " AND %Q=strftime('%%Y-%%m',event.mtime) ", + zYearMonth); + } + else if( zYearWeek ){ + blob_append_sql(&sql, " AND %Q=strftime('%%Y-%%W',event.mtime) ", + zYearWeek); + } + else if( zDay ){ + blob_append_sql(&sql, " AND %Q=strftime('%%Y-%%m-%%d',event.mtime) ", + zDay); + } + if( tagid ){ + blob_append_sql(&sql, + " AND (EXISTS(SELECT 1 FROM tagxref" + " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid)\n", tagid); + + if( zBrName ){ + /* The next two blob_appendf() calls add SQL that causes check-ins that + ** are not part of the branch which are parents or children of the + ** branch to be included in the report. This related check-ins are + ** useful in helping to visualize what has happened on a quiescent + ** branch that is infrequently merged with a much more activate branch. 
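The ym= and ymd= filters above compare the query parameter against strftime() output inside SQL. For an ISO-8601 text timestamp the month test is just a prefix comparison; the sketch below is for illustration only, since the actual filtering stays in SQL:

#include <stdio.h>
#include <string.h>

/* True if zDate ("YYYY-MM-DD HH:MM:SS") falls in the month zYearMonth
** ("YYYY-MM"), equivalent in spirit to the SQL test
** zYearMonth = strftime('%Y-%m', event.mtime). */
static int in_month(const char *zDate, const char *zYearMonth){
  return strlen(zYearMonth)==7
      && strncmp(zDate, zYearMonth, 7)==0
      && zDate[7]=='-';
}

int main(void){
  printf("%d\n", in_month("2015-09-18 07:21:21", "2015-09")); /* 1 */
  printf("%d\n", in_month("2015-10-01 00:00:00", "2015-09")); /* 0 */
  return 0;
}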
+ */ + blob_append_sql(&sql, + " OR EXISTS(SELECT 1 FROM plink CROSS JOIN tagxref ON rid=cid" + " WHERE tagid=%d AND tagtype>0 AND pid=blob.rid)\n", + tagid + ); + if( (tmFlags & TIMELINE_UNHIDE)==0 ){ + blob_append_sql(&sql, + " AND NOT EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=cid" + " WHERE tagid=%d AND tagtype>0 AND pid=blob.rid)\n", + TAG_HIDDEN + ); + } + if( P("mionly")==0 ){ + blob_append_sql(&sql, + " OR EXISTS(SELECT 1 FROM plink CROSS JOIN tagxref ON rid=pid" + " WHERE tagid=%d AND tagtype>0 AND cid=blob.rid)\n", + tagid + ); + if( (tmFlags & TIMELINE_UNHIDE)==0 ){ + blob_append_sql(&sql, + " AND NOT EXISTS(SELECT 1 FROM plink JOIN tagxref ON rid=pid" + " WHERE tagid=%d AND tagtype>0 AND cid=blob.rid)\n", + TAG_HIDDEN + ); + } + } + } + blob_append_sql(&sql, ")"); + } + if( (zType[0]=='w' && !g.perm.RdWiki) + || (zType[0]=='t' && !g.perm.RdTkt) + || (zType[0]=='e' && !g.perm.RdWiki) + || (zType[0]=='c' && !g.perm.Read) + || (zType[0]=='g' && !g.perm.Read) ){ zType = "all"; } if( zType[0]=='a' ){ - if( !g.okRead || !g.okRdWiki || !g.okRdTkt ){ - char cSep = '('; - blob_appendf(&sql, " AND event.type IN "); - if( g.okRead ){ - blob_appendf(&sql, "%c'ci'", cSep); - cSep = ','; - } - if( g.okRdWiki ){ - blob_appendf(&sql, "%c'w'", cSep); - cSep = ','; - } - if( g.okRdTkt ){ - blob_appendf(&sql, "%c't'", cSep); - cSep = ','; - } - blob_appendf(&sql, ")"); - } - }else{ /* zType!="all" */ - blob_appendf(&sql, " AND event.type=%Q", zType); - url_add_parameter(&url, "y", zType); - if( zType[0]=='c' ){ - zEType = "checkin"; + if( !g.perm.Read || !g.perm.RdWiki || !g.perm.RdTkt ){ + char cSep = '('; + blob_append_sql(&sql, " AND event.type IN "); + if( g.perm.Read ){ + blob_append_sql(&sql, "%c'ci','g'", cSep); + cSep = ','; + } + if( g.perm.RdWiki ){ + blob_append_sql(&sql, "%c'w','e'", cSep); + cSep = ','; + } + if( g.perm.RdTkt ){ + blob_append_sql(&sql, "%c't'", cSep); + cSep = ','; + } + blob_append_sql(&sql, ")"); + } + }else{ /* zType!="all" */ + blob_append_sql(&sql, " AND event.type=%Q", zType); + if( zType[0]=='c' ){ + zEType = "check-in"; }else if( zType[0]=='w' ){ zEType = "wiki edit"; }else if( zType[0]=='t' ){ zEType = "ticket change"; + }else if( zType[0]=='e' ){ + zEType = "technical note"; + }else if( zType[0]=='g' ){ + zEType = "tag"; } } if( zUser ){ - blob_appendf(&sql, " AND event.user=%Q", zUser); - url_add_parameter(&url, "u", zUser); + int n = db_int(0,"SELECT count(*) FROM event" + " WHERE user=%Q OR euser=%Q", zUser, zUser); + if( n<=nEntry ){ + zCirca = zBefore = zAfter = 0; + nEntry = -1; + } + blob_append_sql(&sql, " AND (event.user=%Q OR event.euser=%Q)", + zUser, zUser); + zThisUser = zUser; } - if ( zSearch ){ - blob_appendf(&sql, + if( zSearch ){ + blob_append_sql(&sql, " AND (event.comment LIKE '%%%q%%' OR event.brief LIKE '%%%q%%')", zSearch, zSearch); - url_add_parameter(&url, "s", zSearch); - } - if( zAfter ){ - while( isspace(zAfter[0]) ){ zAfter++; } - if( zAfter[0] ){ - blob_appendf(&sql, - " AND event.mtime>=(SELECT julianday(%Q, 'utc'))" - " ORDER BY event.mtime ASC", zAfter); - url_add_parameter(&url, "a", zAfter); - zBefore = 0; - }else{ - zAfter = 0; - } - }else if( zBefore ){ - while( isspace(zBefore[0]) ){ zBefore++; } - if( zBefore[0] ){ - blob_appendf(&sql, - " AND event.mtime<=(SELECT julianday(%Q, 'utc'))" - " ORDER BY event.mtime DESC", zBefore); - url_add_parameter(&url, "b", zBefore); - }else{ - zBefore = 0; - } - }else if( zCirca ){ - while( isspace(zCirca[0]) ){ zCirca++; } - if( zCirca[0] ){ - double rCirca = db_double(0.0, 
"SELECT julianday(%Q, 'utc')", zCirca); - Blob sql2; - blob_init(&sql2, blob_str(&sql), -1); - blob_appendf(&sql2, - " AND event.mtime<=%f ORDER BY event.mtime DESC LIMIT %d", - rCirca, (nEntry+1)/2 - ); - db_multi_exec("%s", blob_str(&sql2)); - blob_reset(&sql2); - blob_appendf(&sql, - " AND event.mtime>=%f ORDER BY event.mtime ASC", - rCirca - ); - nEntry -= (nEntry+1)/2; - timeline_add_dividers(zCirca); - url_add_parameter(&url, "c", zCirca); - }else{ - zCirca = 0; - } - }else{ - blob_appendf(&sql, " ORDER BY event.mtime DESC"); - } - blob_appendf(&sql, " LIMIT %d", nEntry); - db_multi_exec("%s", blob_str(&sql)); - - n = db_int(0, "SELECT count(*) FROM timeline /*scan*/"); - if( n<nEntry && zAfter ){ - cgi_redirect(url_render(&url, "a", 0, "b", 0)); - } - if( zAfter==0 && zBefore==0 && zCirca==0 ){ + } + rBefore = symbolic_name_to_mtime(zBefore); + rAfter = symbolic_name_to_mtime(zAfter); + rCirca = symbolic_name_to_mtime(zCirca); + if( rAfter>0.0 ){ + if( rBefore>0.0 ){ + blob_append_sql(&sql, + " AND event.mtime>=%.17g AND event.mtime<=%.17g" + " ORDER BY event.mtime ASC", rAfter-ONE_SECOND, rBefore+ONE_SECOND); + nEntry = -1; + }else{ + blob_append_sql(&sql, + " AND event.mtime>=%.17g ORDER BY event.mtime ASC", + rAfter-ONE_SECOND); + } + zCirca = 0; + url_add_parameter(&url, "c", 0); + }else if( rBefore>0.0 ){ + blob_append_sql(&sql, + " AND event.mtime<=%.17g ORDER BY event.mtime DESC", + rBefore+ONE_SECOND); + zCirca = 0; + url_add_parameter(&url, "c", 0); + }else if( rCirca>0.0 ){ + Blob sql2; + blob_init(&sql2, blob_sql_text(&sql), -1); + blob_append_sql(&sql2, + " AND event.mtime<=%f ORDER BY event.mtime DESC", rCirca); + if( nEntry>0 ){ + blob_append_sql(&sql2," LIMIT %d", (nEntry+1)/2); + nEntry -= (nEntry+1)/2; + } + if( PB("showsql") ){ + @ <pre>%h(blob_sql_text(&sql2))</pre> + } + db_multi_exec("%s", blob_sql_text(&sql2)); + blob_reset(&sql2); + blob_append_sql(&sql, + " AND event.mtime>=%f ORDER BY event.mtime ASC", + rCirca + ); + if( zMark==0 ) zMark = zCirca; + }else{ + blob_append_sql(&sql, " ORDER BY event.mtime DESC"); + } + if( nEntry>0 ) blob_append_sql(&sql, " LIMIT %d", nEntry); + db_multi_exec("%s", blob_sql_text(&sql)); + + n = db_int(0, "SELECT count(*) FROM timeline WHERE etype!='div' /*scan*/"); + if( zYearMonth ){ + blob_appendf(&desc, "%s events for %h", zEType, zYearMonth); + }else if( zYearWeek ){ + blob_appendf(&desc, "%s events for year/week %h", zEType, zYearWeek); + }else if( zDay ){ + blob_appendf(&desc, "%s events occurring on %h", zEType, zDay); + }else if( zBefore==0 && zCirca==0 && n>=nEntry && nEntry>0 ){ blob_appendf(&desc, "%d most recent %ss", n, zEType); }else{ blob_appendf(&desc, "%d %ss", n, zEType); } + if( zUses ){ + char *zFilenames = names_of_file(zUses); + blob_appendf(&desc, " using file %s version %z%S</a>", zFilenames, + href("%R/artifact/%!S",zUses), zUses); + tmFlags |= TIMELINE_DISJOINT; + } + if( renameOnly ){ + blob_appendf(&desc, " that contain filename changes"); + tmFlags |= TIMELINE_DISJOINT|TIMELINE_FRENAMES; + } + if( forkOnly ){ + blob_appendf(&desc, " associated with forks"); + tmFlags |= TIMELINE_DISJOINT; + } + if( bisectOnly ){ + blob_appendf(&desc, " in the most recent bisect"); + tmFlags |= TIMELINE_DISJOINT; + } if( zUser ){ blob_appendf(&desc, " by user %h", zUser); tmFlags |= TIMELINE_DISJOINT; } - if( tagid>0 ){ + if( zTagName ){ blob_appendf(&desc, " tagged with \"%h\"", zTagName); tmFlags |= TIMELINE_DISJOINT; + }else if( zBrName ){ + blob_appendf(&desc, " related to \"%h\"", zBrName); + tmFlags |= 
TIMELINE_DISJOINT; } - if( zAfter ){ - blob_appendf(&desc, " occurring on or after %h.<br>", zAfter); - }else if( zBefore ){ - blob_appendf(&desc, " occurring on or before %h.<br>", zBefore); - }else if( zCirca ){ - blob_appendf(&desc, " occurring around %h.<br>", zCirca); + if( rAfter>0.0 ){ + if( rBefore>0.0 ){ + blob_appendf(&desc, " occurring between %h and %h.<br>", + zAfter, zBefore); + }else{ + blob_appendf(&desc, " occurring on or after %h.<br />", zAfter); + } + }else if( rBefore>0.0 ){ + blob_appendf(&desc, " occurring on or before %h.<br />", zBefore); + }else if( rCirca>0.0 ){ + blob_appendf(&desc, " occurring around %h.<br />", zCirca); } if( zSearch ){ blob_appendf(&desc, " matching \"%h\"", zSearch); } - if( g.okHistory ){ - if( zAfter || n==nEntry ){ - zDate = db_text(0, "SELECT min(timestamp) FROM timeline /*scan*/"); - timeline_submenu(&url, "Older", "b", zDate, "a"); - free(zDate); - } - if( zBefore || (zAfter && n==nEntry) ){ - zDate = db_text(0, "SELECT max(timestamp) FROM timeline /*scan*/"); - timeline_submenu(&url, "Newer", "a", zDate, "b"); - free(zDate); - }else if( tagid==0 ){ - if( zType[0]!='a' ){ - timeline_submenu(&url, "All Types", "y", "all", 0); - } - if( zType[0]!='w' && g.okRdWiki ){ - timeline_submenu(&url, "Wiki Only", "y", "w", 0); - } - if( zType[0]!='c' && g.okRead ){ - timeline_submenu(&url, "Checkins Only", "y", "ci", 0); - } - if( zType[0]!='t' && g.okRdTkt ){ - timeline_submenu(&url, "Tickets Only", "y", "t", 0); - } - } - if( nEntry>20 ){ - timeline_submenu(&url, "20 Events", "n", "20", 0); - } - if( nEntry<200 ){ - timeline_submenu(&url, "200 Events", "n", "200", 0); - } - } - } - blob_zero(&sql); - db_prepare(&q, "SELECT * FROM timeline ORDER BY timestamp DESC /*scan*/"); - @ <h2>%b(&desc)</h2> - blob_reset(&desc); - www_print_timeline(&q, tmFlags, 0); - db_finalize(&q); + if( g.perm.Hyperlink ){ + if( zCirca && rCirca ){ + nBefore = db_int(0, + "SELECT count(*) FROM timeline WHERE etype!='div'" + " AND sortby<=%f /*scan*/", rCirca); + nAfter = db_int(0, + "SELECT count(*) FROM timeline WHERE etype!='div'" + " AND sortby>=%f /*scan*/", rCirca); + zDate = db_text(0, "SELECT min(timestamp) FROM timeline /*scan*/"); + if( nBefore>=nEntry ){ + timeline_submenu(&url, "Older", "b", zDate, "c"); + zOlderButton = fossil_strdup(url_render(&url, "b", zDate, "c", 0)); + } + if( nAfter>=nEntry ){ + timeline_submenu(&url, "Newer", "a", zDate, "c"); + } + free(zDate); + }else{ + if( zAfter || n==nEntry ){ + zDate = db_text(0, "SELECT min(timestamp) FROM timeline /*scan*/"); + timeline_submenu(&url, "Older", "b", zDate, "a"); + zOlderButton = fossil_strdup(url_render(&url, "b", zDate, "a", 0)); + free(zDate); + } + if( zBefore || (zAfter && n==nEntry) ){ + zDate = db_text(0, "SELECT max(timestamp) FROM timeline /*scan*/"); + timeline_submenu(&url, "Newer", "a", zDate, "b"); + free(zDate); + } + } + if( zType[0]=='a' || zType[0]=='c' ){ + if( (tmFlags & TIMELINE_UNHIDE)==0 ){ + timeline_submenu(&url, "Unhide", "unhide", "", 0); + } + } + style_submenu_entry("n","Max:",4,0); + timeline_y_submenu(disableY); + style_submenu_binary("v","With Files","Without Files", + zType[0]!='a' && zType[0]!='c'); + } + } + if( PB("showsql") ){ + @ <pre>%h(blob_sql_text(&sql))</pre> + } + if( search_restrict(SRCH_CKIN)!=0 ){ + style_submenu_element("Search", 0, "%R/search?y=c"); + } + if( PB("showid") ) tmFlags |= TIMELINE_SHOWRID; + if( useDividers && zMark && zMark[0] ){ + double r = symbolic_name_to_mtime(zMark); + if( r>0.0 ) selectedRid = timeline_add_divider(r); + } + 
blob_zero(&sql); + db_prepare(&q, "SELECT * FROM timeline ORDER BY sortby DESC /*scan*/"); + @ <h2>%b(&desc)</h2> + blob_reset(&desc); + www_print_timeline(&q, tmFlags, zThisUser, zThisTag, selectedRid, 0); + db_finalize(&q); + if( zOlderButton ){ + @ %z(xhref("class='button'","%z",zOlderButton))Older</a> + } style_footer(); } /* ** The input query q selects various records. Print a human-readable ** summary of those records. ** -** Limit the number of entries printed to nLine. -** +** Limit number of lines or entries printed to nLimit. If nLimit is zero +** there is no limit. If nLimit is greater than zero, limit the number of +** complete entries printed. If nLimit is less than zero, attempt to limit +** the number of lines printed (this is basically the legacy behavior). +** The line limit, if used, is approximate because it is only checked on a +** per-entry basis. If verbose mode, the file name details are considered +** to be part of the entry. +** ** The query should return these columns: ** ** 0. rid ** 1. uuid ** 2. Date/Time ** 3. Comment string and user ** 4. Number of non-merge children ** 5. Number of parents +** 6. mtime +** 7. branch */ -void print_timeline(Stmt *q, int mxLine){ +void print_timeline(Stmt *q, int nLimit, int width, int verboseFlag){ + int nAbsLimit = (nLimit >= 0) ? nLimit : -nLimit; int nLine = 0; + int nEntry = 0; char zPrevDate[20]; - const char *zCurrentUuid=0; - zPrevDate[0] = 0; + const char *zCurrentUuid = 0; + int fchngQueryInit = 0; /* True if fchngQuery is initialized */ + Stmt fchngQuery; /* Query for file changes on check-ins */ + int rc; + zPrevDate[0] = 0; if( g.localOpen ){ int rid = db_lget_int("checkout", 0); zCurrentUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); } - while( db_step(q)==SQLITE_ROW && nLine<=mxLine ){ + while( (rc=db_step(q))==SQLITE_ROW ){ int rid = db_column_int(q, 0); const char *zId = db_column_text(q, 1); const char *zDate = db_column_text(q, 2); const char *zCom = db_column_text(q, 3); int nChild = db_column_int(q, 4); int nParent = db_column_int(q, 5); char *zFree = 0; int n = 0; char zPrefix[80]; - char zUuid[UUID_SIZE+1]; - - sprintf(zUuid, "%.10s", zId); - if( memcmp(zDate, zPrevDate, 10) ){ - printf("=== %.10s ===\n", zDate); + + if( nAbsLimit!=0 ){ + if( nLimit<0 && nLine>=nAbsLimit ){ + fossil_print("--- line limit (%d) reached ---\n", nAbsLimit); + break; /* line count limit hit, stop. */ + }else if( nEntry>=nAbsLimit ){ + fossil_print("--- entry limit (%d) reached ---\n", nAbsLimit); + break; /* entry count limit hit, stop. 
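print_timeline() above gives nLimit a dual meaning: positive values cap complete entries, negative values cap printed lines (checked per entry, so approximate), and zero means unlimited. The stopping rule in isolation (the helper name is illustrative, not part of the code above):

#include <stdio.h>
#include <stdlib.h>

/* Non-zero when output should stop.  nLimit>0 limits entries, nLimit<0
** limits lines (the legacy behavior), nLimit==0 means no limit. */
static int limit_reached(int nLimit, int nLine, int nEntry){
  int nAbs = abs(nLimit);
  if( nLimit==0 ) return 0;
  if( nLimit<0 ) return nLine>=nAbs;
  return nEntry>=nAbs;
}

int main(void){
  printf("%d\n", limit_reached(-20, 20, 3));   /* 1: line limit hit */
  printf("%d\n", limit_reached(5, 100, 4));    /* 0: only 4 entries so far */
  printf("%d\n", limit_reached(0, 999, 999));  /* 0: unlimited */
  return 0;
}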
*/ + } + } + if( fossil_strnicmp(zDate, zPrevDate, 10) ){ + fossil_print("=== %.10s ===\n", zDate); memcpy(zPrevDate, zDate, 10); - nLine++; + nLine++; /* record another line */ } if( zCom==0 ) zCom = ""; - printf("%.8s ", &zDate[11]); + fossil_print("%.8s ", &zDate[11]); zPrefix[0] = 0; if( nParent>1 ){ sqlite3_snprintf(sizeof(zPrefix), zPrefix, "*MERGE* "); n = strlen(zPrefix); } @@ -968,42 +1889,95 @@ zBrType = "*BRANCH* "; } sqlite3_snprintf(sizeof(zPrefix)-n, &zPrefix[n], zBrType); n = strlen(zPrefix); } - if( zCurrentUuid && strcmp(zCurrentUuid,zId)==0 ){ + if( fossil_strcmp(zCurrentUuid,zId)==0 ){ sqlite3_snprintf(sizeof(zPrefix)-n, &zPrefix[n], "*CURRENT* "); - n += strlen(zPrefix); + n += strlen(zPrefix+n); + } + if( content_is_private(rid) ){ + sqlite3_snprintf(sizeof(zPrefix)-n, &zPrefix[n], "*UNPUBLISHED* "); + n += strlen(zPrefix+n); + } + zFree = mprintf("[%S] %s%s", zId, zPrefix, zCom); + /* record another X lines */ + nLine += comment_print(zFree, zCom, 9, width, g.comFmtFlags); + fossil_free(zFree); + + if(verboseFlag){ + if( !fchngQueryInit ){ + db_prepare(&fchngQuery, + "SELECT (pid<=0) AS isnew," + " (fid==0) AS isdel," + " (SELECT name FROM filename WHERE fnid=mlink.fnid) AS name," + " (SELECT uuid FROM blob WHERE rid=fid)," + " (SELECT uuid FROM blob WHERE rid=pid)" + " FROM mlink" + " WHERE mid=:mid AND pid!=fid AND NOT mlink.isaux" + " ORDER BY 3 /*sort*/" + ); + fchngQueryInit = 1; + } + db_bind_int(&fchngQuery, ":mid", rid); + while( db_step(&fchngQuery)==SQLITE_ROW ){ + const char *zFilename = db_column_text(&fchngQuery, 2); + int isNew = db_column_int(&fchngQuery, 0); + int isDel = db_column_int(&fchngQuery, 1); + if( isNew ){ + fossil_print(" ADDED %s\n",zFilename); + }else if( isDel ){ + fossil_print(" DELETED %s\n",zFilename); + }else{ + fossil_print(" EDITED %s\n", zFilename); + } + nLine++; /* record another line */ + } + db_reset(&fchngQuery); } - zFree = sqlite3_mprintf("[%.10s] %s%s", zUuid, zPrefix, zCom); - nLine += comment_print(zFree, 9, 79); - sqlite3_free(zFree); + nEntry++; /* record another complete entry */ + } + if( rc==SQLITE_DONE ){ + /* Did the underlying query actually have all entries? */ + if( nAbsLimit==0 ){ + fossil_print("+++ end of timeline (%d) +++\n", nEntry); + }else{ + fossil_print("+++ no more data (%d) +++\n", nEntry); + } } + if( fchngQueryInit ) db_finalize(&fchngQuery); } /* ** Return a pointer to a static string that forms the basis for ** a timeline query for display on a TTY. 
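The *MERGE*, *BRANCH*, *CURRENT* and *UNPUBLISHED* markers above are accumulated into one fixed-size prefix buffer by repeatedly formatting at the current end of the string. The same accumulation pattern, reduced to two illustrative flags (a sketch, not the code above):

#include <stdio.h>
#include <string.h>

/* Build a "*MERGE* *CURRENT* " style prefix into zOut without overflow,
** writing each marker at the current end of the buffer. */
static void build_prefix(char *zOut, size_t nOut, int isMerge, int isCurrent){
  size_t n = 0;
  zOut[0] = 0;
  if( isMerge ){
    snprintf(zOut+n, nOut-n, "*MERGE* ");
    n += strlen(zOut+n);
  }
  if( isCurrent ){
    snprintf(zOut+n, nOut-n, "*CURRENT* ");
    n += strlen(zOut+n);
  }
}

int main(void){
  char zPrefix[80];
  build_prefix(zPrefix, sizeof(zPrefix), 1, 1);
  printf("[%s]\n", zPrefix);   /* [*MERGE* *CURRENT* ] */
  return 0;
}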
*/ const char *timeline_query_for_tty(void){ - static const char zBaseSql[] = + static const char zBaseSql[] = @ SELECT - @ blob.rid, + @ blob.rid AS rid, @ uuid, - @ datetime(event.mtime,'localtime'), + @ datetime(event.mtime,toLocal()) AS mDateTime, @ coalesce(ecomment,comment) @ || ' (user: ' || coalesce(euser,user,'?') @ || (SELECT case when length(x)>0 then ' tags: ' || x else '' end @ FROM (SELECT group_concat(substr(tagname,5), ', ') AS x @ FROM tag, tagxref @ WHERE tagname GLOB 'sym-*' AND tag.tagid=tagxref.tagid @ AND tagxref.rid=blob.rid AND tagxref.tagtype>0)) - @ || ')', - @ (SELECT count(*) FROM plink WHERE pid=blob.rid AND isprim), - @ (SELECT count(*) FROM plink WHERE cid=blob.rid) - @ FROM event, blob + @ || ')' as comment, + @ (SELECT count(*) FROM plink WHERE pid=blob.rid AND isprim) + @ AS primPlinkCount, + @ (SELECT count(*) FROM plink WHERE cid=blob.rid) AS plinkCount, + @ event.mtime AS mtime, + @ tagxref.value AS branch + @ FROM tag CROSS JOIN event CROSS JOIN blob + @ LEFT JOIN tagxref ON tagxref.tagid=tag.tagid + @ AND tagxref.tagtype>0 + @ AND tagxref.rid=blob.rid @ WHERE blob.rid=event.objid + @ AND tag.tagname='branch' ; return zBaseSql; } /* @@ -1012,61 +1986,104 @@ */ static int isIsoDate(const char *z){ return strlen(z)==10 && z[4]=='-' && z[7]=='-' - && isdigit(z[0]) - && isdigit(z[5]); + && fossil_isdigit(z[0]) + && fossil_isdigit(z[5]); } /* ** COMMAND: timeline ** -** Usage: %fossil timeline ?WHEN? ?BASELINE|DATETIME? ?-n N? ?-t TYPE? +** Usage: %fossil timeline ?WHEN? ?CHECKIN|DATETIME? ?OPTIONS? ** ** Print a summary of activity going backwards in date and time ** specified or from the current date and time if no arguments -** are given. Show as many as N (default 20) check-ins. The -** WHEN argument can be any unique abbreviation of one of these -** keywords: +** are given. The WHEN argument can be any unique abbreviation +** of one of these keywords: ** ** before ** after ** descendants | children ** ancestors | parents ** -** The BASELINE can be any unique prefix of 4 characters or more. +** The CHECKIN can be any unique prefix of 4 characters or more. ** The DATETIME should be in the ISO8601 format. For ** examples: "2007-08-18 07:21:21". You can also say "current" ** for the current version or "now" for the current time. ** -** The optional TYPE argument may any types supported by the /timeline -** page. For example: -** -** w = wiki commits only -** ci = file commits only -** t = tickets only +** Options: +** -n|--limit N Output the first N entries (default 20 lines). +** N=0 means no limit. +** -p|--path PATH Output items affecting PATH only. +** PATH can be a file or a sub directory. +** --offset P skip P changes +** -t|--type TYPE Output items from the given types only, such as: +** ci = file commits only +** e = technical notes only +** t = tickets only +** w = wiki commits only +** -v|--verbose Output the list of files changed by each commit +** and the type of each change (edited, deleted, +** etc.) after the check-in comment. +** -W|--width <num> Width of lines (default is to auto-detect). Must be +** >20 or 0 (= no limit, resulting in a single line per +** entry). +** -R REPO_FILE Specifies the repository db to use. Default is +** the current checkout's repository. 
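+**
+** For illustration, a typical invocation (the limit, type, and path
+** below are made-up values) might be:
+**
+**     %fossil timeline -n 10 -t ci -v -p src/tkt.c
+**
+** which would list the ten most recent check-ins affecting src/tkt.c,
+** followed by the files each one changed.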
*/ void timeline_cmd(void){ Stmt q; - int n, k; - const char *zCount; + int n, k, width; + const char *zLimit; + const char *zWidth; + const char *zOffset; const char *zType; char *zOrigin; char *zDate; Blob sql; int objid = 0; Blob uuid; int mode = 0 ; /* 0:none 1: before 2:after 3:children 4:parents */ - db_find_and_open_repository(1); - zCount = find_option("count","n",1); + int verboseFlag = 0 ; + int iOffset; + const char *zFilePattern = 0; + Blob treeName; + + verboseFlag = find_option("verbose","v", 0)!=0; + if( !verboseFlag){ + verboseFlag = find_option("showfiles","f", 0)!=0; /* deprecated */ + } + db_find_and_open_repository(0, 0); + zLimit = find_option("limit","n",1); + zWidth = find_option("width","W",1); zType = find_option("type","t",1); - if( zCount ){ - n = atoi(zCount); + zFilePattern = find_option("path","p",1); + + if( !zLimit ){ + zLimit = find_option("count",0,1); + } + if( zLimit ){ + n = atoi(zLimit); + }else{ + n = -20; + } + if( zWidth ){ + width = atoi(zWidth); + if( (width!=0) && (width<=20) ){ + fossil_fatal("-W|--width value must be >20 or 0"); + } }else{ - n = 20; + width = -1; } + zOffset = find_option("offset",0,1); + iOffset = zOffset ? atoi(zOffset) : 0; + + /* We should be done with options.. */ + verify_all_options(); + if( g.argc>=4 ){ k = strlen(g.argv[2]); if( strncmp(g.argv[2],"before",k)==0 ){ mode = 1; }else if( strncmp(g.argv[2],"after",k)==0 && k>1 ){ @@ -1077,27 +2094,28 @@ mode = 3; }else if( strncmp(g.argv[2],"ancestors",k)==0 && k>1 ){ mode = 4; }else if( strncmp(g.argv[2],"parents",k)==0 ){ mode = 4; - }else if(!zType && !zCount){ - usage("?WHEN? ?BASELINE|DATETIME? ?-n|--count N? ?-t TYPE?"); + }else if(!zType && !zLimit){ + usage("?WHEN? ?CHECKIN|DATETIME? ?-n|--limit #? ?-t|--type TYPE? " + "?-W|--width WIDTH? ?-p|--path PATH"); } if( '-' != *g.argv[3] ){ - zOrigin = g.argv[3]; + zOrigin = g.argv[3]; }else{ - zOrigin = "now"; + zOrigin = "now"; } }else if( g.argc==3 ){ zOrigin = g.argv[2]; }else{ zOrigin = "now"; } k = strlen(zOrigin); blob_zero(&uuid); blob_append(&uuid, zOrigin, -1); - if( strcmp(zOrigin, "now")==0 ){ + if( fossil_strcmp(zOrigin, "now")==0 ){ if( mode==3 || mode==4 ){ fossil_fatal("cannot compute descendants or ancestors of a date"); } zDate = mprintf("(SELECT datetime('now'))"); }else if( strncmp(zOrigin, "current", k)==0 ){ @@ -1104,11 +2122,11 @@ if( !g.localOpen ){ fossil_fatal("must be within a local checkout to use 'current'"); } objid = db_lget_int("checkout",0); zDate = mprintf("(SELECT mtime FROM plink WHERE cid=%d)", objid); - }else if( name_to_uuid(&uuid, 0)==0 ){ + }else if( name_to_uuid(&uuid, 0, "*")==0 ){ objid = db_int(0, "SELECT rid FROM blob WHERE uuid=%B", &uuid); zDate = mprintf("(SELECT mtime FROM plink WHERE cid=%d)", objid); }else{ const char *zShift = ""; if( mode==3 || mode==4 ){ @@ -1115,59 +2133,178 @@ fossil_fatal("cannot compute descendants or ancestors of a date"); } if( mode==0 ){ if( isIsoDate(zOrigin) ) zShift = ",'+1 day'"; } - zDate = mprintf("(SELECT julianday(%Q%s, 'utc'))", zOrigin, zShift); + zDate = mprintf("(SELECT julianday(%Q%s, fromLocal()))", zOrigin, zShift); + } + + if( zFilePattern ){ + if( zType==0 ){ + /* When zFilePattern is specified and type is not specified, only show + * file check-ins */ + zType="ci"; + } + file_tree_name(zFilePattern, &treeName, 0, 1); + if( fossil_strcmp(blob_str(&treeName), ".")==0 ){ + /* When zTreeName refers to g.zLocalRoot, it's like not specifying + * zFilePattern. 
*/ + zFilePattern = 0; + } } + if( mode==0 ) mode = 1; blob_zero(&sql); blob_append(&sql, timeline_query_for_tty(), -1); - blob_appendf(&sql, " AND event.mtime %s %s", + blob_append_sql(&sql, "\n AND event.mtime %s %s", (mode==1 || mode==4) ? "<=" : ">=", - zDate + zDate /*safe-for-%s*/ ); if( mode==3 || mode==4 ){ db_multi_exec("CREATE TEMP TABLE ok(rid INTEGER PRIMARY KEY)"); if( mode==3 ){ compute_descendants(objid, n); }else{ - compute_ancestors(objid, n); + compute_ancestors(objid, n, 0); } - blob_appendf(&sql, " AND blob.rid IN ok"); + blob_append_sql(&sql, "\n AND blob.rid IN ok"); } if( zType && (zType[0]!='a') ){ - blob_appendf(&sql, " AND event.type=%Q ", zType); + blob_append_sql(&sql, "\n AND event.type=%Q ", zType); + } + if( zFilePattern ){ + blob_append(&sql, + "\n AND EXISTS(SELECT 1 FROM mlink\n" + " WHERE mlink.mid=event.objid\n" + " AND mlink.fnid IN ", -1); + if( filenames_are_case_sensitive() ){ + blob_append_sql(&sql, + "(SELECT fnid FROM filename" + " WHERE name=%Q" + " OR name GLOB '%q/*')", + blob_str(&treeName), blob_str(&treeName)); + }else{ + blob_append_sql(&sql, + "(SELECT fnid FROM filename" + " WHERE name=%Q COLLATE nocase" + " OR lower(name) GLOB lower('%q/*'))", + blob_str(&treeName), blob_str(&treeName)); + } + blob_append(&sql, ")", -1); + } + blob_append_sql(&sql, "\nORDER BY event.mtime DESC"); + if( iOffset>0 ){ + /* Don't handle LIMIT here, otherwise print_timeline() + * will not determine the end-marker correctly! */ + blob_append_sql(&sql, "\n LIMIT -1 OFFSET %d", iOffset); } - - blob_appendf(&sql, " ORDER BY event.mtime DESC"); - db_prepare(&q, blob_str(&sql)); + db_prepare(&q, "%s", blob_sql_text(&sql)); blob_reset(&sql); - print_timeline(&q, n); + print_timeline(&q, n, width, verboseFlag); + db_finalize(&q); +} + + +/* +** COMMAND: test-timewarp-list +** +** Usage: %fossil test-timewarp-list ?-v|---verbose? +** +** Display all instances of child check-ins that appear earlier in time +** than their parent. If the -v|--verbose option is provided, both the +** parent and child check-ins and their times are shown. +*/ +void test_timewarp_cmd(void){ + Stmt q; + int verboseFlag; + + db_find_and_open_repository(0, 0); + verboseFlag = find_option("verbose", "v", 0)!=0; + if( !verboseFlag ){ + verboseFlag = find_option("detail", 0, 0)!=0; /* deprecated */ + } + db_prepare(&q, + "SELECT (SELECT uuid FROM blob WHERE rid=p.cid)," + " (SELECT uuid FROM blob WHERE rid=c.cid)," + " datetime(p.mtime), datetime(c.mtime)" + " FROM plink p, plink c" + " WHERE p.cid=c.pid AND p.mtime>c.mtime" + ); + while( db_step(&q)==SQLITE_ROW ){ + if( !verboseFlag ){ + fossil_print("%s\n", db_column_text(&q, 1)); + }else{ + fossil_print("%.14s -> %.14s %s -> %s\n", + db_column_text(&q, 0), + db_column_text(&q, 1), + db_column_text(&q, 2), + db_column_text(&q, 3)); + } + } db_finalize(&q); } /* -** This is a version of the "localtime()" function from the standard -** C library. It converts a unix timestamp (seconds since 1970) into -** a broken-out local time structure. +** WEBPAGE: timewarps ** -** This modified version of localtime() works like the library localtime() -** by default. Except if the timeline-utc property is set, this routine -** uses gmttime() instead. Thus by setting the timeline-utc property, we -** can get all localtimes to be displayed at UTC time. +** Show all check-ins that are "timewarps". A timewarp is a +** check-in that occurs before its parent, according to the +** timestamp information on the check-in. 
This can only actually +** happen, of course, if a users system clock is set incorrectly. */ -struct tm *fossil_localtime(const time_t *clock){ - if( g.fTimeFormat==0 ){ - if( db_get_int("timeline-utc", 1) ){ - g.fTimeFormat = 1; - }else{ - g.fTimeFormat = 2; - } - } - if( g.fTimeFormat==1 ){ - return gmtime(clock); - }else{ - return localtime(clock); - } +void test_timewarp_page(void){ + Stmt q; + int cnt = 0; + + login_check_credentials(); + if( !g.perm.Read || !g.perm.Hyperlink ){ + login_needed(g.anon.Read && g.anon.Hyperlink); + return; + } + style_header("Instances of timewarp"); + db_prepare(&q, + "SELECT blob.uuid, " + " date(ce.mtime)," + " pe.mtime>ce.mtime," + " coalesce(ce.euser,ce.user)" + " FROM plink p, plink c, blob, event pe, event ce" + " WHERE p.cid=c.pid AND p.mtime>c.mtime" + " AND blob.rid=c.cid" + " AND pe.objid=p.cid" + " AND ce.objid=c.cid" + " ORDER BY 2 DESC" + ); + while( db_step(&q)==SQLITE_ROW ){ + const char *zCkin = db_column_text(&q, 0); + const char *zDate = db_column_text(&q, 1); + const char *zStatus = db_column_int(&q,2) ? "Open" + : "Resolved by editing date"; + const char *zUser = db_column_text(&q, 3); + char *zHref = href("%R/timeline?c=%S", zCkin); + if( cnt==0 ){ + @ <div class="brlist"><table id="timewarptable"> + @ <thead><tr> + @ <th>Check-in</th> + @ <th>Date</th> + @ <th>User</th> + @ <th>Status</th> + @ </tr></thead><tbody> + } + @ <tr> + @ <td>%s(zHref)%S(zCkin)</a></td> + @ <td>%s(zHref)%s(zDate)</a></td> + @ <td>%h(zUser)</td> + @ <td>%s(zStatus)</td> + @ </tr> + fossil_free(zHref); + cnt++; + } + db_finalize(&q); + if( cnt==0 ){ + @ <p>No timewarps in this repository</p> + }else{ + @ </tbody></table></div> + output_table_sorting_javascript("timewarptable","tttt",2); + } + style_footer(); } Index: src/tkt.c ================================================================== --- src/tkt.c +++ src/tkt.c @@ -26,65 +26,98 @@ ** The list of database user-defined fields in the TICKET table. ** The real table also contains some addition fields for internal ** used. The internal-use fields begin with "tkt_". */ static int nField = 0; -static char **azField = 0; /* Names of database fields */ -static char **azValue = 0; /* Original values */ -static char **azAppend = 0; /* Value to be appended */ +static struct tktFieldInfo { + char *zName; /* Name of the database field */ + char *zValue; /* Value to store */ + char *zAppend; /* Value to append */ + unsigned mUsed; /* 01: TICKET 02: TICKETCHNG */ +} *aField; +#define USEDBY_TICKET 01 +#define USEDBY_TICKETCHNG 02 +#define USEDBY_BOTH 03 +static u8 haveTicket = 0; /* True if the TICKET table exists */ +static u8 haveTicketCTime = 0; /* True if TICKET.TKT_CTIME exists */ +static u8 haveTicketChng = 0; /* True if the TICKETCHNG table exists */ +static u8 haveTicketChngRid = 0; /* True if TICKETCHNG.TKT_RID exists */ /* -** Compare two entries in azField for sorting purposes +** Compare two entries in aField[] for sorting purposes */ static int nameCmpr(const void *a, const void *b){ - return strcmp(*(char**)a, *(char**)b); + return fossil_strcmp(((const struct tktFieldInfo*)a)->zName, + ((const struct tktFieldInfo*)b)->zName); +} + +/* +** Return the index into aField[] of the given field name. +** Return -1 if zFieldName is not in aField[]. +*/ +static int fieldId(const char *zFieldName){ + int i; + for(i=0; i<nField; i++){ + if( fossil_strcmp(aField[i].zName, zFieldName)==0 ) return i; + } + return -1; } /* -** Obtain a list of all fields of the TICKET table. 
Put them -** in sorted order in azField[]. +** Obtain a list of all fields of the TICKET and TICKETCHNG tables. Put them +** in sorted order in aField[]. ** -** Also allocate space for azValue[] and azAppend[] and initialize -** all the values there to zero. +** The haveTicket and haveTicketChng variables are set to 1 if the TICKET and +** TICKETCHANGE tables exist, respectively. */ static void getAllTicketFields(void){ Stmt q; int i; - if( nField>0 ) return; + static int once = 0; + if( once ) return; + once = 1; db_prepare(&q, "PRAGMA table_info(ticket)"); while( db_step(&q)==SQLITE_ROW ){ - const char *zField = db_column_text(&q, 1); - if( strncmp(zField,"tkt_",4)==0 ) continue; + const char *zFieldName = db_column_text(&q, 1); + haveTicket = 1; + if( memcmp(zFieldName,"tkt_",4)==0 ){ + if( strcmp(zFieldName, "tkt_ctime")==0 ) haveTicketCTime = 1; + continue; + } + if( nField%10==0 ){ + aField = fossil_realloc(aField, sizeof(aField[0])*(nField+10) ); + } + aField[nField].zName = mprintf("%s", zFieldName); + aField[nField].mUsed = USEDBY_TICKET; + nField++; + } + db_finalize(&q); + db_prepare(&q, "PRAGMA table_info(ticketchng)"); + while( db_step(&q)==SQLITE_ROW ){ + const char *zFieldName = db_column_text(&q, 1); + haveTicketChng = 1; + if( memcmp(zFieldName,"tkt_",4)==0 ){ + if( strcmp(zFieldName,"tkt_rid")==0 ) haveTicketChngRid = 1; + continue; + } + if( (i = fieldId(zFieldName))>=0 ){ + aField[i].mUsed |= USEDBY_TICKETCHNG; + continue; + } if( nField%10==0 ){ - azField = realloc(azField, sizeof(azField)*3*(nField+10) ); - if( azField==0 ){ - fossil_fatal("out of memory"); - } + aField = fossil_realloc(aField, sizeof(aField[0])*(nField+10) ); } - azField[nField] = mprintf("%s", zField); + aField[nField].zName = mprintf("%s", zFieldName); + aField[nField].mUsed = USEDBY_TICKETCHNG; nField++; } db_finalize(&q); - qsort(azField, nField, sizeof(azField[0]), nameCmpr); - azAppend = &azField[nField]; - memset(azAppend, 0, sizeof(azAppend[0])*nField); - azValue = &azAppend[nField]; - for(i=0; i<nField; i++){ - azValue[i] = ""; - } -} - -/* -** Return the index into azField[] of the given field name. -** Return -1 if zField is not in azField[]. -*/ -static int fieldId(const char *zField){ - int i; - for(i=0; i<nField; i++){ - if( strcmp(azField[i], zField)==0 ) return i; - } - return -1; + qsort(aField, nField, sizeof(aField[0]), nameCmpr); + for(i=0; i<nField; i++){ + aField[i].zValue = ""; + aField[i].zAppend = 0; + } } /* ** Query the database for all TICKET fields for the specific ** ticket whose name is given by the "name" CGI parameter. @@ -92,24 +125,25 @@ ** ** Only load those fields which do not already exist as ** variables. ** ** Fields of the TICKET table that begin with "private_" are -** expanded using the db_reveal() function. If g.okRdAddr is +** expanded using the db_reveal() function. If g.perm.RdAddr is ** true, then the db_reveal() function will decode the content -** using the CONCEALED table so that the content legable. +** using the CONCEALED table so that the content legible. ** Otherwise, db_reveal() is a no-op and the content remains ** obscured. 
*/ static void initializeVariablesFromDb(void){ const char *zName; Stmt q; int i, n, size, j; zName = PD("name","-none-"); - db_prepare(&q, "SELECT datetime(tkt_mtime) AS tkt_datetime, *" - " FROM ticket WHERE tkt_uuid GLOB '%q*'", zName); + db_prepare(&q, "SELECT datetime(tkt_mtime,toLocal()) AS tkt_datetime, *" + " FROM ticket WHERE tkt_uuid GLOB '%q*'", + zName); if( db_step(&q)==SQLITE_ROW ){ n = db_column_count(&q); for(i=0; i<n; i++){ const char *zVal = db_column_text(&q, i); const char *zName = db_column_name(&q, i); @@ -117,38 +151,24 @@ if( zVal==0 ){ zVal = ""; }else if( strncmp(zName, "private_", 8)==0 ){ zVal = zRevealed = db_reveal(zVal); } - for(j=0; j<nField; j++){ - if( strcmp(azField[j],zName)==0 ){ - azValue[j] = mprintf("%s", zVal); - break; - } - } - if( Th_Fetch(zName, &size)==0 ){ + if( (j = fieldId(zName))>=0 ){ + aField[j].zValue = mprintf("%s", zVal); + }else if( memcmp(zName, "tkt_", 4)==0 && Th_Fetch(zName, &size)==0 ){ Th_Store(zName, zVal); } free(zRevealed); } - }else{ - db_finalize(&q); - db_prepare(&q, "PRAGMA table_info(ticket)"); - if( Th_Fetch("tkt_uuid",&size)==0 ){ - Th_Store("tkt_uuid",zName); - } - while( db_step(&q)==SQLITE_ROW ){ - const char *zField = db_column_text(&q, 1); - if( Th_Fetch(zField, &size)==0 ){ - Th_Store(zField, ""); - } - } - if( Th_Fetch("tkt_datetime",&size)==0 ){ - Th_Store("tkt_datetime",""); - } - } - db_finalize(&q); + } + db_finalize(&q); + for(i=0; i<nField; i++){ + if( Th_Fetch(aField[i].zName, &size)==0 ){ + Th_Store(aField[i].zName, aField[i].zValue); + } + } } /* ** Transfer all CGI parameters to variables in the interpreter. */ @@ -160,114 +180,207 @@ Th_Store(z, P(z)); } } /* -** Update an entry of the TICKET table according to the information -** in the control file given in p. Attempt to create the appropriate -** TICKET table entry if createFlag is true. If createFlag is false, -** that means we already know the entry exists and so we can save the -** work of trying to create it. +** Update an entry of the TICKET and TICKETCHNG tables according to the +** information in the ticket artifact given in p. Attempt to create +** the appropriate TICKET table entry if tktid is zero. If tktid is nonzero +** then it will be the ROWID of an existing TICKET entry. +** +** Parameter rid is the recordID for the ticket artifact in the BLOB table. ** -** Return TRUE if a new TICKET entry was created and FALSE if an -** existing entry was revised. +** Return the new rowid of the TICKET table entry. 
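+**
+** Roughly speaking (a summary of the code below): fields present in the
+** TICKET table are folded into an UPDATE of that ticket's row, while
+** fields present in TICKETCHNG are collected into a new TICKETCHNG row
+** keyed by tkt_id and tkt_mtime; a field defined in both tables feeds both.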
*/ -int ticket_insert(const Manifest *p, int createFlag, int rid){ - Blob sql; +static int ticket_insert(const Manifest *p, int rid, int tktid){ + Blob sql1, sql2, sql3; Stmt q; - int i; - const char *zSep; - int rc = 0; + int i, j; + char *aUsed; - getAllTicketFields(); - if( createFlag ){ - db_multi_exec("INSERT OR IGNORE INTO ticket(tkt_uuid, tkt_mtime) " + if( tktid==0 ){ + db_multi_exec("INSERT INTO ticket(tkt_uuid, tkt_mtime) " "VALUES(%Q, 0)", p->zTicketUuid); - rc = db_changes(); + tktid = db_last_insert_rowid(); + } + blob_zero(&sql1); + blob_zero(&sql2); + blob_zero(&sql3); + blob_append_sql(&sql1, "UPDATE OR REPLACE ticket SET tkt_mtime=:mtime"); + if( haveTicketCTime ){ + blob_append_sql(&sql1, ", tkt_ctime=coalesce(tkt_ctime,:mtime)"); } - blob_zero(&sql); - blob_appendf(&sql, "UPDATE OR REPLACE ticket SET tkt_mtime=:mtime"); - zSep = "SET"; + aUsed = fossil_malloc( nField ); + memset(aUsed, 0, nField); for(i=0; i<p->nField; i++){ const char *zName = p->aField[i].zName; - if( zName[0]=='+' ){ - zName++; - if( fieldId(zName)<0 ) continue; - blob_appendf(&sql,", %s=coalesce(%s,'') || %Q", - zName, zName, p->aField[i].zValue); - }else{ - if( fieldId(zName)<0 ) continue; - blob_appendf(&sql,", %s=%Q", zName, p->aField[i].zValue); + const char *zBaseName = zName[0]=='+' ? zName+1 : zName; + j = fieldId(zBaseName); + if( j<0 ) continue; + aUsed[j] = 1; + if( aField[j].mUsed & USEDBY_TICKET ){ + if( zName[0]=='+' ){ + zName++; + blob_append_sql(&sql1,", \"%w\"=coalesce(\"%w\",'') || %Q", + zName, zName, p->aField[i].zValue); + }else{ + blob_append_sql(&sql1,", \"%w\"=%Q", zName, p->aField[i].zValue); + } + } + if( aField[j].mUsed & USEDBY_TICKETCHNG ){ + blob_append_sql(&sql2, ",\"%w\"", zName); + blob_append_sql(&sql3, ",%Q", p->aField[i].zValue); } if( rid>0 ){ wiki_extract_links(p->aField[i].zValue, rid, 1, p->rDate, i==0, 0); } } - blob_appendf(&sql, " WHERE tkt_uuid='%s' AND tkt_mtime<:mtime", - p->zTicketUuid); - db_prepare(&q, "%s", blob_str(&sql)); + blob_append_sql(&sql1, " WHERE tkt_id=%d", tktid); + db_prepare(&q, "%s", blob_sql_text(&sql1)); db_bind_double(&q, ":mtime", p->rDate); db_step(&q); db_finalize(&q); - blob_reset(&sql); - return rc; + blob_reset(&sql1); + if( blob_size(&sql2)>0 || haveTicketChngRid ){ + int fromTkt = 0; + if( haveTicketChngRid ){ + blob_append(&sql2, ",tkt_rid", -1); + blob_append_sql(&sql3, ",%d", rid); + } + for(i=0; i<nField; i++){ + if( aUsed[i]==0 + && (aField[i].mUsed & USEDBY_BOTH)==USEDBY_BOTH + ){ + const char *z = aField[i].zName; + if( z[0]=='+' ) z++; + fromTkt = 1; + blob_append_sql(&sql2, ",\"%w\"", z); + blob_append_sql(&sql3, ",\"%w\"", z); + } + } + if( fromTkt ){ + db_prepare(&q, "INSERT INTO ticketchng(tkt_id,tkt_mtime%s)" + "SELECT %d,:mtime%s FROM ticket WHERE tkt_id=%d", + blob_sql_text(&sql2), tktid, + blob_sql_text(&sql3), tktid); + }else{ + db_prepare(&q, "INSERT INTO ticketchng(tkt_id,tkt_mtime%s)" + "VALUES(%d,:mtime%s)", + blob_sql_text(&sql2), tktid, blob_sql_text(&sql3)); + } + db_bind_double(&q, ":mtime", p->rDate); + db_step(&q); + db_finalize(&q); + } + blob_reset(&sql2); + blob_reset(&sql3); + fossil_free(aUsed); + return tktid; +} + +/* +** Returns non-zero if moderation is required for ticket changes and ticket +** attachments. +*/ +int ticket_need_moderation( + int localUser /* Are we being called for a local interactive user? 
*/ +){ + /* + ** If the FOSSIL_FORCE_TICKET_MODERATION variable is set, *ALL* changes for + ** tickets will be required to go through moderation (even those performed + ** by the local interactive user via the command line). This can be useful + ** for local (or remote) testing of the moderation subsystem and its impact + ** on the contents and status of tickets. + */ + if( fossil_getenv("FOSSIL_FORCE_TICKET_MODERATION")!=0 ){ + return 1; + } + if( localUser ){ + return 0; + } + return g.perm.ModTkt==0 && db_get_boolean("modreq-tkt",0)==1; } /* ** Rebuild an entire entry in the TICKET table */ void ticket_rebuild_entry(const char *zTktUuid){ char *zTag = mprintf("tkt-%s", zTktUuid); int tagid = tag_findid(zTag, 1); Stmt q; - Manifest manifest; - Blob content; + Manifest *pTicket; + int tktid; int createFlag = 1; - - db_multi_exec( - "DELETE FROM ticket WHERE tkt_uuid=%Q", zTktUuid - ); + + fossil_free(zTag); + getAllTicketFields(); + if( haveTicket==0 ) return; + tktid = db_int(0, "SELECT tkt_id FROM ticket WHERE tkt_uuid=%Q", zTktUuid); + search_doc_touch('t', tktid, 0); + if( haveTicketChng ){ + db_multi_exec("DELETE FROM ticketchng WHERE tkt_id=%d;", tktid); + } + db_multi_exec("DELETE FROM ticket WHERE tkt_id=%d", tktid); + tktid = 0; db_prepare(&q, "SELECT rid FROM tagxref WHERE tagid=%d ORDER BY mtime",tagid); while( db_step(&q)==SQLITE_ROW ){ int rid = db_column_int(&q, 0); - content_get(rid, &content); - manifest_parse(&manifest, &content); - ticket_insert(&manifest, createFlag, rid); - manifest_ticket_event(rid, &manifest, createFlag, tagid); - manifest_clear(&manifest); + pTicket = manifest_get(rid, CFTYPE_TICKET, 0); + if( pTicket ){ + tktid = ticket_insert(pTicket, rid, tktid); + manifest_ticket_event(rid, pTicket, createFlag, tagid); + manifest_destroy(pTicket); + } createFlag = 0; } db_finalize(&q); } + /* -** Create the subscript interpreter and load the "common" code. +** Create the TH1 interpreter and load the "common" code. */ void ticket_init(void){ const char *zConfig; - Th_FossilInit(); + Th_FossilInit(TH_INIT_DEFAULT); zConfig = ticket_common_code(); Th_Eval(g.interp, 0, zConfig, -1); } /* -** Recreate the ticket table. +** Create the TH1 interpreter and load the "change" code. +*/ +int ticket_change(const char *zUuid){ + const char *zConfig; + Th_FossilInit(TH_INIT_DEFAULT); + Th_Store("uuid", zUuid); + zConfig = ticket_change_code(); + return Th_Eval(g.interp, 0, zConfig, -1); +} + +/* +** Recreate the TICKET and TICKETCHNG tables. */ void ticket_create_table(int separateConnection){ const char *zSql; - db_multi_exec("DROP TABLE IF EXISTS ticket;"); + db_multi_exec( + "DROP TABLE IF EXISTS ticket;" + "DROP TABLE IF EXISTS ticketchng;" + ); zSql = ticket_table_schema(); if( separateConnection ){ + db_end_transaction(0); db_init_database(g.zRepositoryName, zSql, 0); }else{ - db_multi_exec("%s", zSql); + db_multi_exec("%s", zSql/*safe-for-%s*/); } } /* -** Repopulate the ticket table +** Repopulate the TICKET and TICKETCHNG tables from scratch using all +** available ticket artifacts. */ void ticket_rebuild(void){ Stmt q; ticket_create_table(1); db_begin_transaction(); @@ -281,104 +394,128 @@ ticket_rebuild_entry(zName); } db_finalize(&q); db_end_transaction(0); } + +/* +** COMMAND: test-ticket-rebuild +** +** Usage: %fossil test-ticket-rebuild TICKETID|all +** +** Rebuild the TICKET and TICKETCHNG tables for the given ticket ID +** or for ALL. 
+*/ +void test_ticket_rebuild(void){ + db_find_and_open_repository(0, 0); + if( g.argc!=3 ) usage("TICKETID|all"); + if( fossil_strcmp(g.argv[2], "all")==0 ){ + ticket_rebuild(); + }else{ + const char *zUuid; + zUuid = db_text(0, "SELECT substr(tagname,5) FROM tag" + " WHERE tagname GLOB 'tkt-%q*'", g.argv[2]); + if( zUuid==0 ) fossil_fatal("no such ticket: %s", g.argv[2]); + ticket_rebuild_entry(zUuid); + } +} + +/* +** For trouble-shooting purposes, render a dump of the aField[] table to +** the webpage currently under construction. +*/ +static void showAllFields(void){ + int i; + @ <font color="blue"> + @ <p>Database fields:</p><ul> + for(i=0; i<nField; i++){ + @ <li>aField[%d(i)].zName = "%h(aField[i].zName)"; + @ originally = "%h(aField[i].zValue)"; + @ currently = "%h(PD(aField[i].zName,""))"; + if( aField[i].zAppend ){ + @ zAppend = "%h(aField[i].zAppend)"; + } + @ mUsed = %d(aField[i].mUsed); + } + @ </ul></font> +} /* ** WEBPAGE: tktview ** URL: tktview?name=UUID ** -** View a ticket. +** View a ticket identified by the name= query parameter. */ void tktview_page(void){ const char *zScript; char *zFullName; const char *zUuid = PD("name",""); login_check_credentials(); - if( !g.okRdTkt ){ login_needed(); return; } - if( g.okWrTkt || g.okApndTkt ){ + if( !g.perm.RdTkt ){ login_needed(g.anon.RdTkt); return; } + if( g.anon.WrTkt || g.anon.ApndTkt ){ style_submenu_element("Edit", "Edit The Ticket", "%s/tktedit?name=%T", g.zTop, PD("name","")); } - if( g.okHistory ){ - style_submenu_element("History", "History Of This Ticket", + if( g.perm.Hyperlink ){ + style_submenu_element("History", "History Of This Ticket", "%s/tkthistory/%T", g.zTop, zUuid); - style_submenu_element("Timeline", "Timeline Of This Ticket", + style_submenu_element("Timeline", "Timeline Of This Ticket", "%s/tkttimeline/%T", g.zTop, zUuid); - style_submenu_element("Check-ins", "Check-ins Of This Ticket", + style_submenu_element("Check-ins", "Check-ins Of This Ticket", "%s/tkttimeline/%T?y=ci", g.zTop, zUuid); } - if( g.okNewTkt ){ + if( g.anon.NewTkt ){ style_submenu_element("New Ticket", "Create a new ticket", "%s/tktnew", g.zTop); } - if( g.okApndTkt && g.okAttach ){ + if( g.anon.ApndTkt && g.anon.Attach ){ style_submenu_element("Attach", "Add An Attachment", - "%s/attachadd?tkt=%T&from=%s/tktview%%3fname=%t", + "%s/attachadd?tkt=%T&from=%s/tktview/%t", g.zTop, zUuid, g.zTop, zUuid); } + if( P("plaintext") ){ + style_submenu_element("Formatted", "Formatted", "%R/tktview/%s", zUuid); + }else{ + style_submenu_element("Plaintext", "Plaintext", + "%R/tktview/%s?plaintext", zUuid); + } style_header("View Ticket"); if( g.thTrace ) Th_Trace("BEGIN_TKTVIEW<br />\n", -1); ticket_init(); + initializeVariablesFromCGI(); + getAllTicketFields(); initializeVariablesFromDb(); zScript = ticket_viewpage_code(); + if( P("showfields")!=0 ) showAllFields(); if( g.thTrace ) Th_Trace("BEGIN_TKTVIEW_SCRIPT<br />\n", -1); Th_Render(zScript); if( g.thTrace ) Th_Trace("END_TKTVIEW<br />\n", -1); - zFullName = db_text(0, + zFullName = db_text(0, "SELECT tkt_uuid FROM ticket" " WHERE tkt_uuid GLOB '%q*'", zUuid); if( zFullName ){ - int cnt = 0; - Stmt q; - db_prepare(&q, - "SELECT datetime(mtime,'localtime'), filename, user" - " FROM attachment" - " WHERE isLatest AND src!='' AND target=%Q" - " ORDER BY mtime DESC", - zFullName); - while( db_step(&q)==SQLITE_ROW ){ - const char *zDate = db_column_text(&q, 0); - const char *zFile = db_column_text(&q, 1); - const char *zUser = db_column_text(&q, 2); - if( cnt==0 ){ - @ <hr><h2>Attachments:</h2> - 
@ <ul> - } - cnt++; - @ <li><a href="%s(g.zTop)/attachview?tkt=%s(zFullName)&file=%t(zFile)"> - @ %h(zFile)</a> add by %h(zUser) on - hyperlink_to_date(zDate, "."); - if( g.okWrTkt && g.okAttach ){ - @ [<a href="%s(g.zTop)/attachdelete?tkt=%s(zFullName)&file=%t(zFile)&from=%s(g.zTop)/tktview%%3fname=%s(zFullName)">delete</a>] - } - } - if( cnt ){ - @ </ul> - } - db_finalize(&q); - } - + attachment_list(zFullName, "<hr /><h2>Attachments:</h2><ul>"); + } + style_footer(); } /* -** TH command: append_field FIELD STRING +** TH1 command: append_field FIELD STRING ** ** FIELD is the name of a database column to which we might want ** to append text. STRING is the text to be appended to that ** column. The append does not actually occur until the ** submit_ticket command is run. */ static int appendRemarkCmd( - Th_Interp *interp, - void *p, - int argc, - const char **argv, + Th_Interp *interp, + void *p, + int argc, + const char **argv, int *argl ){ int idx; if( argc!=3 ){ @@ -387,22 +524,56 @@ if( g.thTrace ){ Th_Trace("append_field %#h {%#h}<br />\n", argl[1], argv[1], argl[2], argv[2]); } for(idx=0; idx<nField; idx++){ - if( strncmp(azField[idx], argv[1], argl[1])==0 - && azField[idx][argl[1]]==0 ){ + if( memcmp(aField[idx].zName, argv[1], argl[1])==0 + && aField[idx].zName[argl[1]]==0 ){ break; } } if( idx>=nField ){ Th_ErrorMessage(g.interp, "no such TICKET column: ", argv[1], argl[1]); return TH_ERROR; } - azAppend[idx] = mprintf("%.*s", argl[2], argv[2]); + aField[idx].zAppend = mprintf("%.*s", argl[2], argv[2]); return TH_OK; } + +/* +** Write a ticket into the repository. +*/ +static int ticket_put( + Blob *pTicket, /* The text of the ticket change record */ + const char *zTktId, /* The ticket to which this change is applied */ + int needMod /* True if moderation is needed */ +){ + int result; + int rid = content_put_ex(pTicket, 0, 0, 0, needMod); + if( rid==0 ){ + fossil_fatal("trouble committing ticket: %s", g.zErrMsg); + } + if( needMod ){ + moderation_table_create(); + db_multi_exec( + "INSERT INTO modreq(objid, tktid) VALUES(%d,%Q)", + rid, zTktId + ); + }else{ + db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d);", rid); + db_multi_exec("INSERT OR IGNORE INTO unclustered VALUES(%d);", rid); + } + manifest_crosslink_begin(); + result = (manifest_crosslink(rid, pTicket, MC_NONE)==0); + assert( blob_is_reset(pTicket) ); + if( !result ){ + result = manifest_crosslink_end(MC_PERMIT_HOOKS); + }else{ + manifest_crosslink_end(MC_NONE); + } + return result; +} /* ** Subscript command: submit_ticket ** ** Construct and submit a new ticket artifact. The fields of the artifact @@ -410,84 +581,104 @@ ** taken from TH variables. If the content is unchanged, the field is ** omitted from the artifact. Fields whose names begin with "private_" ** are concealed using the db_conceal() function. 
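+**
+** The artifact written out is a sequence of cards; an illustrative,
+** entirely made-up example (an appended field would use "J +FIELD"):
+**
+**     D 2014-01-15T20:30:00
+**     J status Closed
+**     K 7b9a0614a0bde1e4e5b65b9b3e9f8c21e8d3a7f0
+**     U alice
+**     Z 8f3c5a1b9d2e4c6a7b8d9e0f1a2b3c4d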
*/ static int submitTicketCmd( - Th_Interp *interp, - void *pUuid, - int argc, - const char **argv, + Th_Interp *interp, + void *pUuid, + int argc, + const char **argv, int *argl ){ char *zDate; const char *zUuid; int i; - int rid; + int nJ = 0; Blob tktchng, cksum; + int needMod; login_verify_csrf_secret(); + if( !captcha_is_correct() ){ + @ <p class="generalError">Error: Incorrect security code.</p> + return TH_OK; + } zUuid = (const char *)pUuid; blob_zero(&tktchng); - zDate = db_text(0, "SELECT datetime('now')"); - zDate[10] = 'T'; + zDate = date_in_standard_format("now"); blob_appendf(&tktchng, "D %s\n", zDate); free(zDate); + for(i=0; i<nField; i++){ + if( aField[i].zAppend ){ + blob_appendf(&tktchng, "J +%s %z\n", aField[i].zName, + fossilize(aField[i].zAppend, -1)); + ++nJ; + } + } for(i=0; i<nField; i++){ const char *zValue; int nValue; - if( azAppend[i] ){ - blob_appendf(&tktchng, "J +%s %z\n", azField[i], - fossilize(azAppend[i], -1)); - }else{ - zValue = Th_Fetch(azField[i], &nValue); - if( zValue ){ - while( nValue>0 && isspace(zValue[nValue-1]) ){ nValue--; } - if( strncmp(zValue, azValue[i], nValue) || strlen(azValue[i])!=nValue ){ - if( strncmp(azField[i], "private_", 8)==0 ){ - zValue = db_conceal(zValue, nValue); - blob_appendf(&tktchng, "J %s %s\n", azField[i], zValue); - }else{ - blob_appendf(&tktchng, "J %s %#F\n", azField[i], nValue, zValue); - } - } + if( aField[i].zAppend ) continue; + zValue = Th_Fetch(aField[i].zName, &nValue); + if( zValue ){ + while( nValue>0 && fossil_isspace(zValue[nValue-1]) ){ nValue--; } + if( ((aField[i].mUsed & USEDBY_TICKETCHNG)!=0 && nValue>0) + || memcmp(zValue, aField[i].zValue, nValue)!=0 + || strlen(aField[i].zValue)!=nValue + ){ + if( memcmp(aField[i].zName, "private_", 8)==0 ){ + zValue = db_conceal(zValue, nValue); + blob_appendf(&tktchng, "J %s %s\n", aField[i].zName, zValue); + }else{ + blob_appendf(&tktchng, "J %s %#F\n", aField[i].zName, nValue, zValue); + } + nJ++; } } } if( *(char**)pUuid ){ - zUuid = db_text(0, - "SELECT tkt_uuid FROM ticket WHERE tkt_uuid GLOB '%s*'", P("name") + zUuid = db_text(0, + "SELECT tkt_uuid FROM ticket WHERE tkt_uuid GLOB '%q*'", P("name") ); }else{ zUuid = db_text(0, "SELECT lower(hex(randomblob(20)))"); } *(const char**)pUuid = zUuid; blob_appendf(&tktchng, "K %s\n", zUuid); - blob_appendf(&tktchng, "U %F\n", g.zLogin ? g.zLogin : ""); + blob_appendf(&tktchng, "U %F\n", login_name()); md5sum_blob(&tktchng, &cksum); blob_appendf(&tktchng, "Z %b\n", &cksum); - if( g.thTrace ){ - Th_Trace("submit_ticket {\n<blockquote><pre>\n%h\n</pre></blockquote>\n" - "}<br />\n", - blob_str(&tktchng)); - }else{ - rid = content_put(&tktchng, 0, 0); - if( rid==0 ){ - fossil_panic("trouble committing ticket: %s", g.zErrMsg); - } - manifest_crosslink_begin(); - manifest_crosslink(rid, &tktchng); - manifest_crosslink_end(); - } - return TH_RETURN; + if( nJ==0 ){ + blob_reset(&tktchng); + return TH_OK; + } + needMod = ticket_need_moderation(0); + if( g.zPath[0]=='d' ){ + const char *zNeedMod = needMod ? "required" : "skipped"; + /* If called from /debug_tktnew or /debug_tktedit... 
*/ + @ <font color="blue"> + @ <p>Ticket artifact that would have been submitted:</p> + @ <blockquote><pre>%h(blob_str(&tktchng))</pre></blockquote> + @ <blockquote><pre>Moderation would be %h(zNeedMod).</pre></blockquote> + @ <hr /></font> + return TH_OK; + }else{ + if( g.thTrace ){ + Th_Trace("submit_ticket {\n<blockquote><pre>\n%h\n</pre></blockquote>\n" + "}<br />\n", + blob_str(&tktchng)); + } + ticket_put(&tktchng, zUuid, needMod); + } + return ticket_change(zUuid); } /* ** WEBPAGE: tktnew ** WEBPAGE: debug_tktnew ** -** Enter a new ticket. the tktnew_template script in the ticket +** Enter a new ticket. The tktnew_template script in the ticket ** configuration is used. The /tktnew page is the official ticket ** entry page. The /debug_tktnew page is used for debugging the ** tktnew_template in the ticket configuration. /debug_tktnew works ** just like /tktnew except that it does not really save the new ticket ** when you press submit - it just prints the ticket artifact at the @@ -496,32 +687,38 @@ void tktnew_page(void){ const char *zScript; char *zNewUuid = 0; login_check_credentials(); - if( !g.okNewTkt ){ login_needed(); return; } + if( !g.perm.NewTkt ){ login_needed(g.anon.NewTkt); return; } if( P("cancel") ){ cgi_redirect("home"); } style_header("New Ticket"); + ticket_standard_submenu(T_ALL_BUT(T_NEW)); if( g.thTrace ) Th_Trace("BEGIN_TKTNEW<br />\n", -1); ticket_init(); + initializeVariablesFromCGI(); getAllTicketFields(); initializeVariablesFromDb(); - initializeVariablesFromCGI(); - @ <form method="POST" action="%s(g.zBaseURL)/%s(g.zPath)"> + if( g.zPath[0]=='d' ) showAllFields(); + form_begin(0, "%R/%s", g.zPath); login_insert_csrf_secret(); + if( P("date_override") && g.perm.Setup ){ + @ <input type="hidden" name="date_override" value="%h(P("date_override"))"> + } zScript = ticket_newpage_code(); - Th_Store("login", g.zLogin); + Th_Store("login", login_name()); Th_Store("date", db_text(0, "SELECT datetime('now')")); Th_CreateCommand(g.interp, "submit_ticket", submitTicketCmd, (void*)&zNewUuid, 0); if( g.thTrace ) Th_Trace("BEGIN_TKTNEW_SCRIPT<br />\n", -1); if( Th_Render(zScript)==TH_RETURN && !g.thTrace && zNewUuid ){ - cgi_redirect(mprintf("%s/tktview/%s", g.zBaseURL, zNewUuid)); + cgi_redirect(mprintf("%s/tktview/%s", g.zTop, zNewUuid)); return; } + captcha_generate(0); @ </form> if( g.thTrace ) Th_Trace("END_TKTVIEW<br />\n", -1); style_footer(); } @@ -541,52 +738,58 @@ int nName; const char *zName; int nRec; login_check_credentials(); - if( !g.okApndTkt && !g.okWrTkt ){ login_needed(); return; } + if( !g.perm.ApndTkt && !g.perm.WrTkt ){ + login_needed(g.anon.ApndTkt || g.anon.WrTkt); + return; + } zName = P("name"); if( P("cancel") ){ cgi_redirectf("tktview?name=%T", zName); } style_header("Edit Ticket"); if( zName==0 || (nName = strlen(zName))<4 || nName>UUID_SIZE || !validate16(zName,nName) ){ - @ <font color="red"><b>Not a valid ticket id: \"%h(zName)\"</b></font> + @ <span class="tktError">Not a valid ticket id: \"%h(zName)\"</span> style_footer(); return; } nRec = db_int(0, "SELECT count(*) FROM ticket WHERE tkt_uuid GLOB '%q*'", zName); if( nRec==0 ){ - @ <font color="red"><b>No such ticket: \"%h(zName)\"</b></font> + @ <span class="tktError">No such ticket: \"%h(zName)\"</span> style_footer(); return; } if( nRec>1 ){ - @ <font color="red"><b>%d(nRec) tickets begin with: \"%h(zName)\"</b></font> + @ <span class="tktError">%d(nRec) tickets begin with: + @ \"%h(zName)\"</span> style_footer(); return; } if( g.thTrace ) Th_Trace("BEGIN_TKTEDIT<br />\n", -1); 
ticket_init(); getAllTicketFields(); initializeVariablesFromCGI(); initializeVariablesFromDb(); - @ <form method="POST" action="%s(g.zBaseURL)/%s(g.zPath)"> - @ <input type="hidden" name="name" value="%s(zName)"> + if( g.zPath[0]=='d' ) showAllFields(); + form_begin(0, "%R/%s", g.zPath); + @ <input type="hidden" name="name" value="%s(zName)" /> login_insert_csrf_secret(); zScript = ticket_editpage_code(); - Th_Store("login", g.zLogin); + Th_Store("login", login_name()); Th_Store("date", db_text(0, "SELECT datetime('now')")); Th_CreateCommand(g.interp, "append_field", appendRemarkCmd, 0, 0); Th_CreateCommand(g.interp, "submit_ticket", submitTicketCmd, (void*)&zName,0); if( g.thTrace ) Th_Trace("BEGIN_TKTEDIT_SCRIPT<br />\n", -1); if( Th_Render(zScript)==TH_RETURN && !g.thTrace && zName ){ - cgi_redirect(mprintf("%s/tktview/%s", g.zBaseURL, zName)); + cgi_redirect(mprintf("%s/tktview/%s", g.zTop, zName)); return; } + captcha_generate(0); @ </form> if( g.thTrace ) Th_Trace("BEGIN_TKTEDIT<br />\n", -1); style_footer(); } @@ -607,18 +810,23 @@ sqlite3_close(db); return zErr; } rc = sqlite3_exec(db, "SELECT tkt_id, tkt_uuid, tkt_mtime FROM ticket", 0, 0, 0); - sqlite3_close(db); if( rc!=SQLITE_OK ){ - zErr = mprintf("schema fails to define a valid ticket table " - "containing all required fields"); - return zErr; + zErr = mprintf("schema fails to define valid a TICKET " + "table containing all required fields"); + }else{ + rc = sqlite3_exec(db, "SELECT tkt_id, tkt_mtime FROM ticketchng", 0,0,0); + if( rc!=SQLITE_OK ){ + zErr = mprintf("schema fails to define valid a TICKETCHNG " + "table containing all required fields"); + } } + sqlite3_close(db); } - return 0; + return zErr; } /* ** WEBPAGE: tkttimeline ** URL: /tkttimeline?name=TICKETUUID&y=TYPE @@ -634,11 +842,14 @@ int tagid; char zGlobPattern[50]; const char *zType; login_check_credentials(); - if( !g.okHistory || !g.okRdTkt ){ login_needed(); return; } + if( !g.perm.Hyperlink || !g.perm.RdTkt ){ + login_needed(g.anon.Hyperlink && g.anon.RdTkt); + return; + } zUuid = PD("name",""); zType = PD("y","a"); if( zType[0]!='c' ){ style_submenu_element("Check-ins", "Check-ins", "%s/tkttimeline?name=%T&y=ci", g.zTop, zUuid); @@ -649,16 +860,15 @@ style_submenu_element("History", "History", "%s/tkthistory/%s", g.zTop, zUuid); style_submenu_element("Status", "Status", "%s/info/%s", g.zTop, zUuid); if( zType[0]=='c' ){ - zTitle = mprintf("Check-Ins Associated With Ticket %h", zUuid); + zTitle = mprintf("Check-ins Associated With Ticket %h", zUuid); }else{ zTitle = mprintf("Timeline Of Ticket %h", zUuid); } - style_header(zTitle); - free(zTitle); + style_header("%z", zTitle); sqlite3_snprintf(6, zGlobPattern, "%s", zUuid); canonical16(zGlobPattern, strlen(zGlobPattern)); tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname GLOB 'tkt-%q*'",zUuid); if( tagid==0 ){ @@ -687,13 +897,13 @@ " WHERE target=%Q) " "ORDER BY mtime DESC", timeline_query_for_www(), tagid, zFullUuid, zFullUuid, zFullUuid ); } - db_prepare(&q, zSQL); - free(zSQL); - www_print_timeline(&q, TIMELINE_ARTID, 0); + db_prepare(&q, "%z", zSQL/*safe-for-%s*/); + www_print_timeline(&q, TIMELINE_ARTID|TIMELINE_DISJOINT|TIMELINE_GRAPH, + 0, 0, 0, 0); db_finalize(&q); style_footer(); } /* @@ -705,83 +915,97 @@ void tkthistory_page(void){ Stmt q; char *zTitle; const char *zUuid; int tagid; + int nChng = 0; login_check_credentials(); - if( !g.okHistory || !g.okRdTkt ){ login_needed(); return; } + if( !g.perm.Hyperlink || !g.perm.RdTkt ){ + login_needed(g.anon.Hyperlink && g.anon.RdTkt); + return; 
+ } zUuid = PD("name",""); zTitle = mprintf("History Of Ticket %h", zUuid); style_submenu_element("Status", "Status", "%s/info/%s", g.zTop, zUuid); style_submenu_element("Check-ins", "Check-ins", - "%s/tkttimeline?name=%s?y=ci", g.zTop, zUuid); + "%s/tkttimeline?name=%s&y=ci", g.zTop, zUuid); style_submenu_element("Timeline", "Timeline", "%s/tkttimeline?name=%s", g.zTop, zUuid); - style_header(zTitle); - free(zTitle); + if( P("plaintext")!=0 ){ + style_submenu_element("Formatted", "Formatted", + "%R/tkthistory/%s", zUuid); + }else{ + style_submenu_element("Plaintext", "Plaintext", + "%R/tkthistory/%s?plaintext", zUuid); + } + style_header("%z", zTitle); tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname GLOB 'tkt-%q*'",zUuid); if( tagid==0 ){ @ No such ticket: %h(zUuid) style_footer(); return; } db_prepare(&q, - "SELECT datetime(mtime,'localtime'), objid, uuid, NULL, NULL, NULL" + "SELECT datetime(mtime,toLocal()), objid, uuid, NULL, NULL, NULL" " FROM event, blob" " WHERE objid IN (SELECT rid FROM tagxref WHERE tagid=%d)" " AND blob.rid=event.objid" " UNION " - "SELECT datetime(mtime,'localtime'), attachid, uuid, src, filename, user" + "SELECT datetime(mtime,toLocal()), attachid, uuid, src, filename, user" " FROM attachment, blob" " WHERE target=(SELECT substr(tagname,5) FROM tag WHERE tagid=%d)" " AND blob.rid=attachid" - " ORDER BY 1 DESC", + " ORDER BY 1", tagid, tagid ); while( db_step(&q)==SQLITE_ROW ){ - Blob content; - Manifest m; - char zShort[12]; + Manifest *pTicket; const char *zDate = db_column_text(&q, 0); int rid = db_column_int(&q, 1); const char *zChngUuid = db_column_text(&q, 2); const char *zFile = db_column_text(&q, 4); - memcpy(zShort, zChngUuid, 10); - zShort[10] = 0; + if( nChng==0 ){ + @ <ol> + } + nChng++; if( zFile!=0 ){ const char *zSrc = db_column_text(&q, 3); const char *zUser = db_column_text(&q, 5); if( zSrc==0 || zSrc[0]==0 ){ - @ - @ <p>Delete attachment "%h(zFile)" + @ + @ <li><p>Delete attachment "%h(zFile)" }else{ - @ - @ <p>Add attachment "%h(zFile)" + @ + @ <li><p>Add attachment + @ "%z(href("%R/artifact/%!S",zSrc))%s(zFile)</a>" } - @ [<a href="%s(g.zTop)/artifact/%T(zChngUuid)">%s(zShort)</a>] + @ [%z(href("%R/artifact/%!S",zChngUuid))%S(zChngUuid)</a>] @ (rid %d(rid)) by hyperlink_to_user(zUser,zDate," on"); hyperlink_to_date(zDate, ".</p>"); }else{ - content_get(rid, &content); - if( manifest_parse(&m, &content) && m.type==CFTYPE_TICKET ){ + pTicket = manifest_get(rid, CFTYPE_TICKET, 0); + if( pTicket ){ @ - @ <p>Ticket change - @ [<a href="%s(g.zTop)/artifact/%T(zChngUuid)">%s(zShort)</a>] + @ <li><p>Ticket change + @ [%z(href("%R/artifact/%!S",zChngUuid))%S(zChngUuid)</a>] @ (rid %d(rid)) by - hyperlink_to_user(m.zUser,zDate," on"); + hyperlink_to_user(pTicket->zUser,zDate," on"); hyperlink_to_date(zDate, ":"); - ticket_output_change_artifact(&m); @ </p> + ticket_output_change_artifact(pTicket, "a"); } - manifest_clear(&m); + manifest_destroy(pTicket); } } db_finalize(&q); + if( nChng ){ + @ </ol> + } style_footer(); } /* ** Return TRUE if the given BLOB contains a newline character. @@ -797,28 +1021,430 @@ /* ** The pTkt object is a ticket change artifact. Output a detailed ** description of this object. 
*/ -void ticket_output_change_artifact(Manifest *pTkt){ +void ticket_output_change_artifact(Manifest *pTkt, const char *zListType){ int i; - @ <ol> + int wikiFlags = WIKI_NOBADLINKS; + const char *zBlock = "<blockquote>"; + const char *zEnd = "</blockquote>"; + if( P("plaintext")!=0 ){ + wikiFlags |= WIKI_LINKSONLY; + zBlock = "<blockquote><pre class='verbatim'>"; + zEnd = "</pre></blockquote>"; + } + if( zListType==0 ) zListType = "1"; + @ <ol type="%s(zListType)"> for(i=0; i<pTkt->nField; i++){ Blob val; const char *z; z = pTkt->aField[i].zName; blob_set(&val, pTkt->aField[i].zValue); if( z[0]=='+' ){ - @ <li>Appended to %h(&z[1]):<blockquote> - wiki_convert(&val, 0, 0); - @ </blockquote></li> - }else if( blob_size(&val)<=50 && contains_newline(&val) ){ - @ <li>Change %h(z) to:<blockquote> - wiki_convert(&val, 0, 0); - @ </blockquote></li> + @ <li>Appended to %h(&z[1]):%s(zBlock) + wiki_convert(&val, 0, wikiFlags); + @ %s(zEnd)</li> + }else if( blob_size(&val)>50 || contains_newline(&val) ){ + @ <li>Change %h(z) to:%s(zBlock) + wiki_convert(&val, 0, wikiFlags); + @ %s(zEnd)</li> }else{ @ <li>Change %h(z) to "%h(blob_str(&val))"</li> } blob_reset(&val); } @ </ol> } + +/* +** COMMAND: ticket* +** Usage: %fossil ticket SUBCOMMAND ... +** +** Run various subcommands to control tickets +** +** %fossil ticket show (REPORTTITLE|REPORTNR) ?TICKETFILTER? ?options? +** +** options can be: +** ?-l|--limit LIMITCHAR? +** ?-q|--quote? +** ?-R|--repository FILE? +** +** Run the ticket report, identified by the report format title +** used in the gui. The data is written as flat file on stdout, +** using TAB as separator. The separator can be changed using +** the -l or --limit option. +** +** If TICKETFILTER is given on the commandline, the query is +** limited with a new WHERE-condition. +** example: Report lists a column # with the uuid +** TICKETFILTER may be [#]='uuuuuuuuu' +** example: Report only lists rows with status not open +** TICKETFILTER: status != 'open' +** If the option -q|--quote is used, the tickets are encoded by +** quoting special chars(space -> \\s, tab -> \\t, newline -> \\n, +** cr -> \\r, formfeed -> \\f, vtab -> \\v, nul -> \\0, \\ -> \\\\). +** Otherwise, the simplified encoding as on the show report raw +** page in the gui is used. This has no effect in JSON mode. +** +** Instead of the report title its possible to use the report +** number. Using the special report number 0 list all columns, +** defined in the ticket table. +** +** %fossil ticket list fields +** %fossil ticket ls fields +** +** list all fields, defined for ticket in the fossil repository +** +** %fossil ticket list reports +** %fossil ticket ls reports +** +** list all ticket reports, defined in the fossil repository +** +** %fossil ticket set TICKETUUID (FIELD VALUE)+ ?-q|--quote? +** %fossil ticket change TICKETUUID (FIELD VALUE)+ ?-q|--quote? +** +** change ticket identified by TICKETUUID and set the value of +** field FIELD to VALUE. +** +** Field names as defined in the TICKET table. By default, these +** names include: type, status, subsystem, priority, severity, foundin, +** resolution, title, and comment, but other field names can be added +** or substituted in customized installations. +** +** If you use +FIELD, the VALUE Is appended to the field FIELD. +** You can use more than one field/value pair on the commandline. +** Using -q|--quote enables the special character decoding as +** in "ticket show". So it's possible, to set multiline text or +** text with special characters. 
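+**
+**     A hypothetical example (the ticket prefix and field values are
+**     made up; field names depend on the local ticket schema):
+**
+**       %fossil ticket set a1b2c3d4 status Closed resolution Fixed
+**       %fossil ticket change a1b2c3d4 +comment "Fix confirmed on trunk."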
+** +** %fossil ticket add FIELD VALUE ?FIELD VALUE .. ? ?-q|--quote? +** +** like set, but create a new ticket with the given values. +** +** %fossil ticket history TICKETUUID +** +** Show the complete change history for the ticket +** +** The values in set|add are not validated against the definitions +** given in "Ticket Common Script". +*/ +void ticket_cmd(void){ + int n; + const char *zUser; + const char *zDate; + const char *zTktUuid; + + /* do some ints, we want to be inside a checkout */ + db_find_and_open_repository(0, 0); + user_select(); + + zUser = find_option("user-override",0,1); + if( zUser==0 ) zUser = login_name(); + zDate = find_option("date-override",0,1); + if( zDate==0 ) zDate = "now"; + zDate = date_in_standard_format(zDate); + zTktUuid = find_option("uuid-override",0,1); + if( zTktUuid && (strlen(zTktUuid)!=40 || !validate16(zTktUuid,40)) ){ + fossil_fatal("invalid --uuid-override: must be 40 characters of hex"); + } + + /* + ** Check that the user exists. + */ + if( !db_exists("SELECT 1 FROM user WHERE login=%Q", zUser) ){ + fossil_fatal("no such user: %s", zUser); + } + + if( g.argc<3 ){ + usage("add|change|list|set|show|history"); + } + n = strlen(g.argv[2]); + if( n==1 && g.argv[2][0]=='s' ){ + /* set/show cannot be distinguished, so show the usage */ + usage("add|change|list|set|show|history"); + } + if(( strncmp(g.argv[2],"list",n)==0 ) || ( strncmp(g.argv[2],"ls",n)==0 )){ + if( g.argc==3 ){ + usage("list fields|reports"); + }else{ + n = strlen(g.argv[3]); + if( !strncmp(g.argv[3],"fields",n) ){ + /* simply show all field names */ + int i; + + /* read all available ticket fields */ + getAllTicketFields(); + for(i=0; i<nField; i++){ + printf("%s\n",aField[i].zName); + } + }else if( !strncmp(g.argv[3],"reports",n) ){ + rpt_list_reports(); + }else{ + fossil_fatal("unknown ticket list option '%s'!",g.argv[3]); + } + } + }else{ + /* add a new ticket or set fields on existing tickets */ + tTktShowEncoding tktEncoding; + + tktEncoding = find_option("quote","q",0) ? tktFossilize : tktNoTab; + + if( strncmp(g.argv[2],"show",n)==0 ){ + if( g.argc==3 ){ + usage("show REPORTNR"); + }else{ + const char *zRep = 0; + const char *zSep = 0; + const char *zFilterUuid = 0; + zSep = find_option("limit","l",1); + zRep = g.argv[3]; + if( !strcmp(zRep,"0") ){ + zRep = 0; + } + if( g.argc>4 ){ + zFilterUuid = g.argv[4]; + } + rptshow( zRep, zSep, zFilterUuid, tktEncoding ); + } + }else{ + /* add a new ticket or update an existing ticket */ + enum { set,add,history,err } eCmd = err; + int i = 0; + Blob tktchng, cksum; + + /* get command type (set/add) and get uuid, if needed for set */ + if( strncmp(g.argv[2],"set",n)==0 || strncmp(g.argv[2],"change",n)==0 || + strncmp(g.argv[2],"history",n)==0 ){ + if( strncmp(g.argv[2],"history",n)==0 ){ + eCmd = history; + }else{ + eCmd = set; + } + if( g.argc==3 ){ + usage("set|change|history TICKETUUID"); + } + zTktUuid = db_text(0, + "SELECT tkt_uuid FROM ticket WHERE tkt_uuid GLOB '%q*'", g.argv[3] + ); + if( !zTktUuid ){ + fossil_fatal("unknown ticket: '%s'!",g.argv[3]); + } + i=4; + }else if( strncmp(g.argv[2],"add",n)==0 ){ + eCmd = add; + i = 3; + if( zTktUuid==0 ){ + zTktUuid = db_text(0, "SELECT lower(hex(randomblob(20)))"); + } + } + /* none of set/add, so show the usage! 
*/ + if( eCmd==err ){ + usage("add|fieldlist|set|show|history"); + } + + /* we just handle history separately here, does not get out */ + if( eCmd==history ){ + Stmt q; + int tagid; + + if( i != g.argc ){ + fossil_fatal("no other parameters expected to %s!",g.argv[2]); + } + tagid = db_int(0, "SELECT tagid FROM tag WHERE tagname GLOB 'tkt-%q*'", + zTktUuid); + if( tagid==0 ){ + fossil_fatal("no such ticket %h", zTktUuid); + } + db_prepare(&q, + "SELECT datetime(mtime,toLocal()), objid, NULL, NULL, NULL" + " FROM event, blob" + " WHERE objid IN (SELECT rid FROM tagxref WHERE tagid=%d)" + " AND blob.rid=event.objid" + " UNION " + "SELECT datetime(mtime,toLocal()), attachid, filename, " + " src, user" + " FROM attachment, blob" + " WHERE target=(SELECT substr(tagname,5) FROM tag WHERE tagid=%d)" + " AND blob.rid=attachid" + " ORDER BY 1 DESC", + tagid, tagid + ); + while( db_step(&q)==SQLITE_ROW ){ + Manifest *pTicket; + const char *zDate = db_column_text(&q, 0); + int rid = db_column_int(&q, 1); + const char *zFile = db_column_text(&q, 2); + if( zFile!=0 ){ + const char *zSrc = db_column_text(&q, 3); + const char *zUser = db_column_text(&q, 4); + if( zSrc==0 || zSrc[0]==0 ){ + fossil_print("Delete attachment %s\n", zFile); + }else{ + fossil_print("Add attachment %s\n", zFile); + } + fossil_print(" by %s on %s\n", zUser, zDate); + }else{ + pTicket = manifest_get(rid, CFTYPE_TICKET, 0); + if( pTicket ){ + int i; + + fossil_print("Ticket Change by %s on %s:\n", + pTicket->zUser, zDate); + for(i=0; i<pTicket->nField; i++){ + Blob val; + const char *z; + z = pTicket->aField[i].zName; + blob_set(&val, pTicket->aField[i].zValue); + if( z[0]=='+' ){ + fossil_print(" Append to "); + z++; + }else{ + fossil_print(" Change "); + } + fossil_print("%h: ",z); + if( blob_size(&val)>50 || contains_newline(&val)) { + fossil_print("\n "); + comment_print(blob_str(&val),0,4,-1,g.comFmtFlags); + }else{ + fossil_print("%s\n",blob_str(&val)); + } + blob_reset(&val); + } + } + manifest_destroy(pTicket); + } + } + db_finalize(&q); + return; + } + /* read all given ticket field/value pairs from command line */ + if( i==g.argc ){ + fossil_fatal("empty %s command aborted!",g.argv[2]); + } + getAllTicketFields(); + /* read commandline and assign fields in the aField[].zValue array */ + while( i<g.argc ){ + char *zFName; + char *zFValue; + int j; + int append = 0; + + zFName = g.argv[i++]; + if( i==g.argc ){ + fossil_fatal("missing value for '%s'!",zFName); + } + zFValue = g.argv[i++]; + if( tktEncoding == tktFossilize ){ + zFValue=mprintf("%s",zFValue); + defossilize(zFValue); + } + append = (zFName[0] == '+'); + if( append ){ + zFName++; + } + j = fieldId(zFName); + if( j == -1 ){ + fossil_fatal("unknown field name '%s'!",zFName); + }else{ + if( append ){ + aField[j].zAppend = zFValue; + }else{ + aField[j].zValue = zFValue; + } + } + } + + /* now add the needed artifacts to the repository */ + blob_zero(&tktchng); + /* add the time to the ticket manifest */ + blob_appendf(&tktchng, "D %s\n", zDate); + /* append defined elements */ + for(i=0; i<nField; i++){ + char *zValue = 0; + char *zPfx; + + if( aField[i].zAppend && aField[i].zAppend[0] ){ + zPfx = " +"; + zValue = aField[i].zAppend; + }else if( aField[i].zValue && aField[i].zValue[0] ){ + zPfx = " "; + zValue = aField[i].zValue; + }else{ + continue; + } + if( memcmp(aField[i].zName, "private_", 8)==0 ){ + zValue = db_conceal(zValue, strlen(zValue)); + blob_appendf(&tktchng, "J%s%s %s\n", zPfx, aField[i].zName, zValue); + }else{ + blob_appendf(&tktchng, "J%s%s 
%#F\n", zPfx, + aField[i].zName, strlen(zValue), zValue); + } + } + blob_appendf(&tktchng, "K %s\n", zTktUuid); + blob_appendf(&tktchng, "U %F\n", zUser); + md5sum_blob(&tktchng, &cksum); + blob_appendf(&tktchng, "Z %b\n", &cksum); + if( ticket_put(&tktchng, zTktUuid, ticket_need_moderation(1)) ){ + fossil_fatal("%s\n", g.zErrMsg); + }else{ + fossil_print("ticket %s succeeded for %s\n", + (eCmd==set?"set":"add"),zTktUuid); + } + } + } +} + + +#if INTERFACE +/* Standard submenu items for wiki pages */ +#define T_SRCH 0x00001 +#define T_REPLIST 0x00002 +#define T_NEW 0x00004 +#define T_ALL 0x00007 +#define T_ALL_BUT(x) (T_ALL&~(x)) +#endif + +/* +** Add some standard submenu elements for ticket screens. +*/ +void ticket_standard_submenu(unsigned int ok){ + if( (ok & T_SRCH)!=0 && search_restrict(SRCH_TKT)!=0 ){ + style_submenu_element("Search","Search","%R/tktsrch"); + } + if( (ok & T_REPLIST)!=0 ){ + style_submenu_element("Reports","Reports","%R/reportlist"); + } + if( (ok & T_NEW)!=0 && g.anon.NewTkt ){ + style_submenu_element("New","New","%R/tktnew"); + } +} + +/* +** WEBPAGE: ticket +** +** This is intended to be the primary "Ticket" page. Render as +** either ticket-search (if search is enabled) or as the +** /reportlist page (if ticket search is disabled). +*/ +void tkt_home_page(void){ + login_check_credentials(); + if( search_restrict(SRCH_TKT)!=0 ){ + tkt_srchpage(); + }else{ + view_list(); + } +} + +/* +** WEBPAGE: tktsrch +** Usage: /tktsrch?s=PATTERN +** +** Full-text search of all current tickets +*/ +void tkt_srchpage(void){ + login_check_credentials(); + style_header("Ticket Search"); + ticket_standard_submenu(T_ALL_BUT(T_SRCH)); + search_screen(SRCH_TKT, 0); + style_footer(); +} Index: src/tktsetup.c ================================================================== --- src/tktsetup.c +++ src/tktsetup.c @@ -21,17 +21,18 @@ #include "config.h" #include "tktsetup.h" #include <assert.h> /* -** Main sub-menu for configuring the ticketing system. ** WEBPAGE: tktsetup +** Main sub-menu for configuring the ticketing system. 
*/ void tktsetup_page(void){ login_check_credentials(); - if( !g.okSetup ){ - login_needed(); + if( !g.perm.Setup ){ + login_needed(0); + return; } style_header("Ticket Setup"); @ <table border="0" cellspacing="20"> setup_menu_entry("Table", "tktsetup_tab", @@ -38,10 +39,12 @@ "Specify the schema of the \"ticket\" table in the database."); setup_menu_entry("Timeline", "tktsetup_timeline", "How to display ticket status in the timeline"); setup_menu_entry("Common", "tktsetup_com", "Common TH1 code run before all ticket processing."); + setup_menu_entry("Change", "tktsetup_change", + "The TH1 code run after a ticket is edited or created."); setup_menu_entry("New Ticket Page", "tktsetup_newpage", "HTML with embedded TH1 code for the \"new ticket\" webpage."); setup_menu_entry("View Ticket Page", "tktsetup_viewpage", "HTML with embedded TH1 code for the \"view ticket\" webpage."); setup_menu_entry("Edit Ticket Page", "tktsetup_editpage", @@ -65,11 +68,12 @@ @ CREATE TABLE ticket( @ -- Do not change any column that begins with tkt_ @ tkt_id INTEGER PRIMARY KEY, @ tkt_uuid TEXT UNIQUE, @ tkt_mtime DATE, -@ -- Add as many field as required below this line +@ tkt_ctime DATE, +@ -- Add as many fields as required below this line @ type TEXT, @ status TEXT, @ subsystem TEXT, @ priority TEXT, @ severity TEXT, @@ -77,17 +81,29 @@ @ private_contact TEXT, @ resolution TEXT, @ title TEXT, @ comment TEXT @ ); +@ CREATE TABLE ticketchng( +@ -- Do not change any column that begins with tkt_ +@ tkt_id INTEGER REFERENCES ticket, +@ tkt_rid INTEGER REFERENCES blob, +@ tkt_mtime DATE, +@ -- Add as many fields as required below this line +@ login TEXT, +@ username TEXT, +@ mimetype TEXT, +@ icomment TEXT +@ ); +@ CREATE INDEX ticketchng_idx1 ON ticketchng(tkt_id, tkt_mtime); ; /* ** Return the ticket table definition */ const char *ticket_table_schema(void){ - return db_get("ticket-table", (char*)zDefaultTicketTable); + return db_get("ticket-table", zDefaultTicketTable); } /* ** Common implementation for the ticket setup editor pages. 
*/ @@ -94,72 +110,75 @@ static void tktsetup_generic( const char *zTitle, /* Page title */ const char *zDbField, /* Configuration field being edited */ const char *zDfltValue, /* Default text value */ const char *zDesc, /* Description of this field */ - char *(*xText)(const char*), /* Validitity test or NULL */ - void (*xRebuild)(void), /* Run after successulf update */ + char *(*xText)(const char*), /* Validity test or NULL */ + void (*xRebuild)(void), /* Run after successful update */ int height /* Height of the edit box */ ){ const char *z; int isSubmit; - + login_check_credentials(); - if( !g.okSetup ){ - login_needed(); + if( !g.perm.Setup ){ + login_needed(0); + return; } - if( P("setup") ){ + if( PB("setup") ){ cgi_redirect("tktsetup"); } isSubmit = P("submit")!=0; z = P("x"); if( z==0 ){ - z = db_get(zDbField, (char*)zDfltValue); + z = db_get(zDbField, zDfltValue); } style_header("Edit %s", zTitle); if( P("clear")!=0 ){ login_verify_csrf_secret(); db_unset(zDbField, 0); if( xRebuild ) xRebuild(); - z = zDfltValue; + cgi_redirect("tktsetup"); }else if( isSubmit ){ char *zErr = 0; login_verify_csrf_secret(); if( xText && (zErr = xText(z))!=0 ){ - @ <p><font color="red"><b>ERROR: %h(zErr)</b></font></p> + @ <p class="tktsetupError">ERROR: %h(zErr)</p> }else{ db_set(zDbField, z, 0); if( xRebuild ) xRebuild(); cgi_redirect("tktsetup"); } } - @ <form action="%s(g.zBaseURL)/%s(g.zPath)" method="POST"> + @ <form action="%s(g.zTop)/%s(g.zPath)" method="post"><div> login_insert_csrf_secret(); @ <p>%s(zDesc)</p> @ <textarea name="x" rows="%d(height)" cols="80">%h(z)</textarea> - @ <blockquote> - @ <input type="submit" name="submit" value="Apply Changes"> - @ <input type="submit" name="clear" value="Revert To Default"> - @ <input type="submit" name="setup" value="Cancel"> - @ </blockquote> - @ </form> - @ <hr> + @ <blockquote><p> + @ <input type="submit" name="submit" value="Apply Changes" /> + @ <input type="submit" name="clear" value="Revert To Default" /> + @ <input type="submit" name="setup" value="Cancel" /> + @ </p></blockquote> + @ </div></form> + @ <hr /> @ <h2>Default %s(zTitle)</h2> @ <blockquote><pre> @ %h(zDfltValue) @ </pre></blockquote> style_footer(); } /* ** WEBPAGE: tktsetup_tab +** Administrative page for defining the "ticket" table used +** to hold ticket information. */ void tktsetup_tab_page(void){ static const char zDesc[] = - @ <p>Enter a valid CREATE TABLE statement for the "ticket" table. The + @ Enter a valid CREATE TABLE statement for the "ticket" table. The @ table must contain columns named "tkt_id", "tkt_uuid", and "tkt_mtime" - @ with an unique index on "tkt_uuid" and "tkt_mtime".</p> + @ with an unique index on "tkt_uuid" and "tkt_mtime". ; tktsetup_generic( "Ticket Table Schema", "ticket-table", zDefaultTicketTable, @@ -221,20 +240,22 @@ /* ** Return the ticket common code. */ const char *ticket_common_code(void){ - return db_get("ticket-common", (char*)zDefaultTicketCommon); + return db_get("ticket-common", zDefaultTicketCommon); } /* ** WEBPAGE: tktsetup_com +** Administrative page used to define TH1 script that is +** common to all ticket screens. */ void tktsetup_com_page(void){ static const char zDesc[] = - @ <p>Enter TH1 script that initializes variables prior to generating - @ any of the ticket view, edit, or creation pages.</p> + @ Enter TH1 script that initializes variables prior to generating + @ any of the ticket view, edit, or creation pages. 
; tktsetup_generic( "Ticket Common Script", "ticket-common", zDefaultTicketCommon, @@ -242,88 +263,154 @@ 0, 0, 30 ); } + +static const char zDefaultTicketChange[] = +@ return +; + +/* +** Return the ticket change code. +*/ +const char *ticket_change_code(void){ + return db_get("ticket-change", zDefaultTicketChange); +} + +/* +** WEBPAGE: tktsetup_change +** Adminstrative screen used to view or edit the TH1 script +** that shows ticket changes. +*/ +void tktsetup_change_page(void){ + static const char zDesc[] = + @ Enter TH1 script that runs after processing the ticket editing + @ and creation pages. + ; + tktsetup_generic( + "Ticket Change Script", + "ticket-change", + zDefaultTicketChange, + zDesc, + 0, + 0, + 30 + ); +} static const char zDefaultNew[] = @ <th1> +@ if {![info exists mutype]} {set mutype {[links only]}} @ if {[info exists submit]} { @ set status Open +@ if {$mutype eq "HTML"} { +@ set mimetype "text/html" +@ } elseif {$mutype eq "Wiki"} { +@ set mimetype "text/x-fossil-wiki" +@ } elseif {$mutype eq {[links only]}} { +@ set mimetype "text/x-fossil-plain" +@ } else { +@ set mimetype "text/plain" +@ } @ submit_ticket +@ set preview 1 @ } @ </th1> -@ <h1 align="center">Enter A New Ticket</h1> +@ <h1 style="text-align: center;">Enter A New Ticket</h1> @ <table cellpadding="5"> @ <tr> -@ <td colspan="2"> -@ Enter a one-line summary of the ticket:<br> -@ <input type="text" name="title" size="60" value="$<title>"> -@ </td> -@ </tr> -@ -@ <tr> -@ <td align="right">Type: -@ <th1>combobox type $type_choices 1</th1> -@ </td> -@ <td>What type of ticket is this?</td> -@ </tr> -@ -@ <tr> -@ <td align="right">Version: -@ <input type="text" name="foundin" size="20" value="$<foundin>"> -@ </td> -@ <td>In what version or build number do you observe the problem?</td> -@ </tr> -@ -@ <tr> -@ <td align="right">Severity: -@ <th1>combobox severity $severity_choices 1</th1> -@ </td> -@ <td>How debilitating is the problem? How badly does the problem +@ <td colspan="3"> +@ Enter a one-line summary of the ticket:<br /> +@ <input type="text" name="title" size="60" value="$<title>" /> +@ </td> +@ </tr> +@ +@ <tr> +@ <td align="right">Type:</td> +@ <td align="left"><th1>combobox type $type_choices 1</th1></td> +@ <td align="left">What type of ticket is this?</td> +@ </tr> +@ +@ <tr> +@ <td align="right">Version:</td> +@ <td align="left"> +@ <input type="text" name="foundin" size="20" value="$<foundin>" /> +@ </td> +@ <td align="left">In what version or build number do you observe +@ the problem?</td> +@ </tr> +@ +@ <tr> +@ <td align="right">Severity:</td> +@ <td align="left"><th1>combobox severity $severity_choices 1</th1></td> +@ <td align="left">How debilitating is the problem? How badly does the problem @ affect the operation of the product?</td> @ </tr> -@ +@ @ <tr> -@ <td align="right">EMail: -@ <input type="text" name="private_contact" value="$<private_contact>" size="30"> +@ <td align="right">EMail:</td> +@ <td align="left"> +@ <input type="text" name="private_contact" value="$<private_contact>" +@ size="30" /> @ </td> -@ <td><u>Not publicly visible</u>. Used by developers to contact you with -@ questions.</td> +@ <td align="left"><u>Not publicly visible</u> +@ Used by developers to contact you with questions.</td> @ </tr> -@ +@ @ <tr> -@ <td colspan="2"> +@ <td colspan="3"> @ Enter a detailed description of the problem. @ For code defects, be sure to provide details on exactly how @ the problem can be reproduced. Provide as much detail as -@ possible. -@ <br> +@ possible. 
Format: +@ <th1>combobox mutype {Wiki HTML {Plain Text} {[links only]}} 1</th1> +@ <br /> @ <th1>set nline [linecount $comment 50 10]</th1> -@ <textarea name="comment" cols="80" rows="$nline" -@ wrap="virtual" class="wikiedit">$<comment></textarea><br> -@ <input type="submit" name="preview" value="Preview"> +@ <textarea name="icomment" cols="80" rows="$nline" +@ wrap="virtual" class="wikiedit">$<icomment></textarea><br /> +@ </tr> +@ +@ <th1>enable_output [info exists preview]</th1> +@ <tr><td colspan="3"> +@ Description Preview:<br /><hr /> +@ <th1> +@ if {$mutype eq "Wiki"} { +@ wiki $icomment +@ } elseif {$mutype eq "Plain Text"} { +@ set r [randhex] +@ wiki "<verbatim-$r>[string trimright $icomment]\n</verbatim-$r>" +@ } elseif {$mutype eq {[links only]}} { +@ set r [randhex] +@ wiki "<verbatim-$r links>[string trimright $icomment]\n</verbatim-$r>" +@ } else { +@ wiki "<nowiki>$icomment\n</nowiki>" +@ } +@ </th1> +@ <hr /></td></tr> +@ <th1>enable_output 1</th1> +@ +@ <tr> +@ <td><td align="left"> +@ <input type="submit" name="preview" value="Preview" /> +@ </td> +@ <td align="left">See how the description will appear after formatting.</td> @ </tr> @ @ <th1>enable_output [info exists preview]</th1> -@ <tr><td colspan="2"> -@ Description Preview:<br><hr> -@ <th1>wiki $comment</th1> -@ <hr> -@ </td></tr> +@ <tr> +@ <td><td align="left"> +@ <input type="submit" name="submit" value="Submit" /> +@ </td> +@ <td align="left">After filling in the information above, press this +@ button to create the new ticket</td> +@ </tr> @ <th1>enable_output 1</th1> -@ -@ <tr> -@ <td align="right"> -@ <input type="submit" name="submit" value="Submit"> -@ </td> -@ <td>After filling in the information above, press this button to create -@ the new ticket</td> -@ </tr> -@ <tr> -@ <td align="right"> -@ <input type="submit" name="cancel" value="Cancel"> +@ +@ <tr> +@ <td><td align="left"> +@ <input type="submit" name="cancel" value="Cancel" /> @ </td> @ <td>Abandon and forget this ticket</td> @ </tr> @ </table> ; @@ -330,20 +417,22 @@ /* ** Return the code used to generate the new ticket page */ const char *ticket_newpage_code(void){ - return db_get("ticket-newpage", (char*)zDefaultNew); + return db_get("ticket-newpage", zDefaultNew); } /* ** WEBPAGE: tktsetup_newpage +** Administrative page used to view or edit the TH1 script used +** to enter new tickets. 
*/ void tktsetup_newpage_page(void){ static const char zDesc[] = - @ <p>Enter HTML with embedded TH1 script that will render the "new ticket" - @ page</p> + @ Enter HTML with embedded TH1 script that will render the "new ticket" + @ page ; tktsetup_generic( "HTML For New Tickets", "ticket-newpage", zDefaultNew, @@ -354,70 +443,136 @@ ); } static const char zDefaultView[] = @ <table cellpadding="5"> -@ <tr><td align="right">Ticket UUID:</td><td bgcolor="#d0d0d0" colspan="3"> -@ $<tkt_uuid> -@ </td></tr> -@ <tr><td align="right">Title:</td> -@ <td bgcolor="#d0d0d0" colspan="3" valign="top"> -@ <th1>wiki $title</th1> -@ </td></tr> -@ <tr><td align="right">Status:</td><td bgcolor="#d0d0d0"> +@ <tr><td class="tktDspLabel">Ticket UUID:</td> +@ <th1> +@ if {[info exists tkt_uuid]} { +@ if {[hascap s]} { +@ html "<td class='tktDspValue' colspan='3'>$tkt_uuid " +@ html "($tkt_id)</td></tr>\n" +@ } else { +@ html "<td class='tktDspValue' colspan='3'>$tkt_uuid</td></tr>\n" +@ } +@ } else { +@ if {[hascap s]} { +@ html "<td class='tktDspValue' colspan='3'>Deleted " +@ html "(0)</td></tr>\n" +@ } else { +@ html "<td class='tktDspValue' colspan='3'>Deleted</td></tr>\n" +@ } +@ } +@ </th1> +@ <tr><td class="tktDspLabel">Title:</td> +@ <td class="tktDspValue" colspan="3"> +@ $<title> +@ </td></tr> +@ <tr><td class="tktDspLabel">Status:</td><td class="tktDspValue"> @ $<status> @ </td> -@ <td align="right">Type:</td><td bgcolor="#d0d0d0"> +@ <td class="tktDspLabel">Type:</td><td class="tktDspValue"> @ $<type> @ </td></tr> -@ <tr><td align="right">Severity:</td><td bgcolor="#d0d0d0"> +@ <tr><td class="tktDspLabel">Severity:</td><td class="tktDspValue"> @ $<severity> @ </td> -@ <td align="right">Priority:</td><td bgcolor="#d0d0d0"> +@ <td class="tktDspLabel">Priority:</td><td class="tktDspValue"> @ $<priority> @ </td></tr> -@ <tr><td align="right">Subsystem:</td><td bgcolor="#d0d0d0"> +@ <tr><td class="tktDspLabel">Subsystem:</td><td class="tktDspValue"> @ $<subsystem> @ </td> -@ <td align="right">Resolution:</td><td bgcolor="#d0d0d0"> +@ <td class="tktDspLabel">Resolution:</td><td class="tktDspValue"> @ $<resolution> @ </td></tr> -@ <tr><td align="right">Last Modified:</td><td bgcolor="#d0d0d0"> -@ $<tkt_datetime> +@ <tr><td class="tktDspLabel">Last Modified:</td><td class="tktDspValue"> +@ <th1> +@ if {[info exists tkt_datetime]} { +@ html $tkt_datetime +@ } +@ </th1> @ </td> @ <th1>enable_output [hascap e]</th1> -@ <td align="right">Contact:</td><td bgcolor="#d0d0d0"> +@ <td class="tktDspLabel">Contact:</td><td class="tktDspValue"> @ $<private_contact> @ </td> @ <th1>enable_output 1</th1> @ </tr> -@ <tr><td align="right">Version Found In:</td> -@ <td colspan="3" valign="top" bgcolor="#d0d0d0"> +@ <tr><td class="tktDspLabel">Version Found In:</td> +@ <td colspan="3" valign="top" class="tktDspValue"> @ $<foundin> @ </td></tr> -@ <tr><td>Description & Comments:</td></tr> -@ <tr><td colspan="4" bgcolor="#d0d0d0"> -@ <span bgcolor="#d0d0d0"><th1>wiki $comment</th1></span> -@ </td></tr> +@ +@ <th1> +@ if {[info exists comment]} { +@ if {[string length $comment]>10} { +@ html { +@ <tr><td class="tktDspLabel">Description:</td></tr> +@ <tr><td colspan="5" class="tktDspValue"> +@ } +@ if {[info exists plaintext]} { +@ set r [randhex] +@ wiki "<verbatim-$r links>\n$comment\n</verbatim-$r>" +@ } else { +@ wiki $comment +@ } +@ } +@ } +@ set seenRow 0 +@ set alwaysPlaintext [info exists plaintext] +@ query {SELECT datetime(tkt_mtime) AS xdate, login AS xlogin, +@ mimetype as xmimetype, icomment AS xcomment, +@ username 
AS xusername +@ FROM ticketchng +@ WHERE tkt_id=$tkt_id AND length(icomment)>0} { +@ if {$seenRow} { +@ html "<hr>\n" +@ } else { +@ html "<tr><td class='tktDspLabel'>User Comments:</td></tr>\n" +@ html "<tr><td colspan='5' class='tktDspValue'>\n" +@ set seenRow 1 +@ } +@ html "[htmlize $xlogin]" +@ if {$xlogin ne $xusername && [string length $xusername]>0} { +@ html " (claiming to be [htmlize $xusername])" +@ } +@ html " added on $xdate:\n" +@ if {$alwaysPlaintext || $xmimetype eq "text/plain"} { +@ set r [randhex] +@ if {$xmimetype ne "text/plain"} {html "([htmlize $xmimetype])\n"} +@ wiki "<verbatim-$r>[string trimright $xcomment]</verbatim-$r>\n" +@ } elseif {$xmimetype eq "text/x-fossil-wiki"} { +@ wiki "<p>\n[string trimright $xcomment]\n</p>\n" +@ } elseif {$xmimetype eq "text/html"} { +@ wiki "<p><nowiki>\n[string trimright $xcomment]\n</nowiki>\n" +@ } else { +@ set r [randhex] +@ wiki "<verbatim-$r links>[string trimright $xcomment]</verbatim-$r>\n" +@ } +@ } +@ if {$seenRow} {html "</td></tr>\n"} +@ </th1> @ </table> ; /* ** Return the code used to generate the view ticket page */ const char *ticket_viewpage_code(void){ - return db_get("ticket-viewpage", (char*)zDefaultView); + return db_get("ticket-viewpage", zDefaultView); } /* ** WEBPAGE: tktsetup_viewpage +** Administrative page used to view or edit the TH1 script that +** displays individual tickets. */ void tktsetup_viewpage_page(void){ static const char zDesc[] = - @ <p>Enter HTML with embedded TH1 script that will render the "view ticket" - @ page</p> + @ Enter HTML with embedded TH1 script that will render the "view ticket" page ; tktsetup_generic( "HTML For Viewing Tickets", "ticket-viewpage", zDefaultView, @@ -428,119 +583,137 @@ ); } static const char zDefaultEdit[] = @ <th1> -@ if {![info exists username]} {set username $login} -@ if {[info exists submit]} { -@ if {[info exists cmappnd]} { -@ if {[string length $cmappnd]>0} { -@ set ctxt "\n\n<hr><i>[htmlize $login]" -@ if {$username ne $login} { -@ set ctxt "$ctxt claiming to be [htmlize $username]" -@ } -@ set ctxt "$ctxt added on [date]:</i><br>\n$cmappnd" -@ append_field comment $ctxt -@ } -@ } -@ submit_ticket -@ } -@ </th1> -@ <table cellpadding="5"> -@ <tr><td align="right">Title:</td><td> -@ <input type="text" name="title" value="$<title>" size="60"> -@ </td></tr> -@ <tr><td align="right">Status:</td><td> -@ <th1>combobox status $status_choices 1</th1> -@ </td></tr> -@ <tr><td align="right">Type:</td><td> -@ <th1>combobox type $type_choices 1</th1> -@ </td></tr> -@ <tr><td align="right">Severity:</td><td> -@ <th1>combobox severity $severity_choices 1</th1> -@ </td></tr> -@ <tr><td align="right">Priority:</td><td> -@ <th1>combobox priority $priority_choices 1</th1> -@ </td></tr> -@ <tr><td align="right">Resolution:</td><td> -@ <th1>combobox resolution $resolution_choices 1</th1> -@ </td></tr> -@ <tr><td align="right">Subsystem:</td><td> -@ <th1>combobox subsystem $subsystem_choices 1</th1> -@ </td></tr> -@ <th1>enable_output [hascap e]</th1> -@ <tr><td align="right">Contact:</td><td> -@ <input type="text" name="private_contact" size="40" -@ value="$<private_contact>"> -@ </td></tr> -@ <th1>enable_output 1</th1> -@ <tr><td align="right">Version Found In:</td><td> -@ <input type="text" name="foundin" size="50" value="$<foundin>"> -@ </td></tr> -@ <tr><td colspan="2"> -@ <th1> -@ if {![info exists eall]} {set eall 0} -@ if {[info exists aonlybtn]} {set eall 0} -@ if {[info exists eallbtn]} {set eall 1} -@ if {![hascap w]} {set eall 0} -@ if {![info exists 
cmappnd]} {set cmappnd {}} -@ set nline [linecount $comment 15 10] -@ enable_output $eall -@ </th1> -@ Description And Comments:<br> -@ <textarea name="comment" cols="80" rows="$nline" -@ wrap="virtual" class="wikiedit">$<comment></textarea><br> -@ <input type="hidden" name="eall" value="1"> -@ <input type="submit" name="aonlybtn" value="Append Remark"> -@ <input type="submit" name="preview1btn" value="Preview"> -@ <th1>enable_output [expr {!$eall}]</th1> -@ Append Remark from -@ <input type="text" name="username" value="$<username>" size="30">:<br> -@ <textarea name="cmappnd" cols="80" rows="15" -@ wrap="virtual" class="wikiedit">$<cmappnd></textarea><br> -@ <th1>enable_output [expr {[hascap w] && !$eall}]</th1> -@ <input type="submit" name="eallbtn" value="Edit All"> -@ <th1>enable_output [expr {!$eall}]</th1> -@ <input type="submit" name="preview2btn" value="Preview"> -@ <th1>enable_output 1</th1> -@ </td></tr> -@ -@ <th1>enable_output [info exists preview1btn]</th1> -@ <tr><td colspan="2"> -@ Description Preview:<br><hr> -@ <th1>wiki $comment</th1> -@ <hr> -@ </td></tr> -@ <th1>enable_output [info exists preview2btn]</th1> -@ <tr><td colspan="2"> -@ Description Preview:<br><hr> -@ <th1>wiki $cmappnd</th1> -@ <hr> -@ </td></tr> -@ <th1>enable_output 1</th1> -@ -@ <tr><td align="right"></td><td> -@ <input type="submit" name="submit" value="Submit Changes"> -@ <input type="submit" name="cancel" value="Cancel"> -@ </td></tr> +@ if {![info exists mutype]} {set mutype {[links only]}} +@ if {![info exists icomment]} {set icomment {}} +@ if {![info exists username]} {set username $login} +@ if {[info exists submit]} { +@ if {$mutype eq "Wiki"} { +@ set mimetype text/x-fossil-wiki +@ } elseif {$mutype eq "HTML"} { +@ set mimetype text/html +@ } elseif {$mutype eq {[links only]}} { +@ set mimetype text/x-fossil-plain +@ } else { +@ set mimetype text/plain +@ } +@ submit_ticket +@ set preview 1 +@ } +@ </th1> +@ <table cellpadding="5"> +@ <tr><td class="tktDspLabel">Title:</td><td> +@ <input type="text" name="title" value="$<title>" size="60" /> +@ </td></tr> +@ +@ <tr><td class="tktDspLabel">Status:</td><td> +@ <th1>combobox status $status_choices 1</th1> +@ </td></tr> +@ +@ <tr><td class="tktDspLabel">Type:</td><td> +@ <th1>combobox type $type_choices 1</th1> +@ </td></tr> +@ +@ <tr><td class="tktDspLabel">Severity:</td><td> +@ <th1>combobox severity $severity_choices 1</th1> +@ </td></tr> +@ +@ <tr><td class="tktDspLabel">Priority:</td><td> +@ <th1>combobox priority $priority_choices 1</th1> +@ </td></tr> +@ +@ <tr><td class="tktDspLabel">Resolution:</td><td> +@ <th1>combobox resolution $resolution_choices 1</th1> +@ </td></tr> +@ +@ <tr><td class="tktDspLabel">Subsystem:</td><td> +@ <th1>combobox subsystem $subsystem_choices 1</th1> +@ </td></tr> +@ +@ <th1>enable_output [hascap e]</th1> +@ <tr><td class="tktDspLabel">Contact:</td><td> +@ <input type="text" name="private_contact" size="40" +@ value="$<private_contact>" /> +@ </td></tr> +@ <th1>enable_output 1</th1> +@ +@ <tr><td class="tktDspLabel">Version Found In:</td><td> +@ <input type="text" name="foundin" size="50" value="$<foundin>" /> +@ </td></tr> +@ +@ <tr><td colspan="2"> +@ Append Remark with format +@ <th1>combobox mutype {Wiki HTML {Plain Text} {[links only]}} 1</th1> +@ from +@ <input type="text" name="username" value="$<username>" size="30" />:<br /> +@ <textarea name="icomment" cols="80" rows="15" +@ wrap="virtual" class="wikiedit">$<icomment></textarea> +@ </td></tr> +@ +@ <th1>enable_output [info exists preview]</th1> +@ 
<tr><td colspan="2"> +@ Description Preview:<br><hr> +@ <th1> +@ if {$mutype eq "Wiki"} { +@ wiki $icomment +@ } elseif {$mutype eq "Plain Text"} { +@ set r [randhex] +@ wiki "<verbatim-$r>\n[string trimright $icomment]\n</verbatim-$r>" +@ } elseif {$mutype eq {[links only]}} { +@ set r [randhex] +@ wiki "<verbatim-$r links>\n[string trimright $icomment]</verbatim-$r>" +@ } else { +@ wiki "<nowiki>\n[string trimright $icomment]\n</nowiki>" +@ } +@ </th1> +@ <hr> +@ </td></tr> +@ <th1>enable_output 1</th1> +@ +@ <tr> +@ <td align="right"> +@ <input type="submit" name="preview" value="Preview" /> +@ </td> +@ <td align="left">See how the description will appear after formatting.</td> +@ </tr> +@ +@ <th1>enable_output [info exists preview]</th1> +@ <tr> +@ <td align="right"> +@ <input type="submit" name="submit" value="Submit" /> +@ </td> +@ <td align="left">Apply the changes shown above</td> +@ </tr> +@ <th1>enable_output 1</th1> +@ +@ <tr> +@ <td align="right"> +@ <input type="submit" name="cancel" value="Cancel" /> +@ </td> +@ <td>Abandon this edit</td> +@ </tr> +@ @ </table> ; /* ** Return the code used to generate the edit ticket page */ const char *ticket_editpage_code(void){ - return db_get("ticket-editpage", (char*)zDefaultEdit); + return db_get("ticket-editpage", zDefaultEdit); } /* ** WEBPAGE: tktsetup_editpage +** Administrative page for viewing or editing the TH1 script that +** drives the ticket editing page. */ void tktsetup_editpage_page(void){ static const char zDesc[] = - @ <p>Enter HTML with embedded TH1 script that will render the "edit ticket" - @ page</p> + @ Enter HTML with embedded TH1 script that will render the "edit ticket" page ; tktsetup_generic( "HTML For Editing Tickets", "ticket-editpage", zDefaultEdit, @@ -554,43 +727,49 @@ /* ** The default report list page */ static const char zDefaultReportList[] = @ <th1> -@ if {[hascap n]} { +@ if {[anoncap n]} { @ html "<p>Enter a new ticket:</p>" @ html "<ul><li><a href='tktnew'>New ticket</a></li></ul>" @ } @ </th1> -@ +@ @ <p>Choose a report format from the following list:</p> @ <ol> @ <th1>html $report_items</th1> @ </ol> -@ +@ @ <th1> -@ if {[hascap t]} { -@ html "<p>Create a new ticket display format:</p>" -@ html "<ul><li><a href='rptnew'>New report format</a></li></ul>" +@ if {[anoncap t q]} { +@ html "<p>Other options:</p>\n<ul>\n" +@ if {[anoncap t]} { +@ html "<li><a href='rptnew'>New report format</a></li>\n" +@ } +@ if {[anoncap q]} { +@ html "<li><a href='modreq'>Tend to pending moderation requests</a></li>\n" +@ } @ } @ </th1> ; /* ** Return the code used to generate the report list */ const char *ticket_reportlist_code(void){ - return db_get("ticket-reportlist", (char*)zDefaultReportList); + return db_get("ticket-reportlist", zDefaultReportList); } /* ** WEBPAGE: tktsetup_reportlist +** Administrative page used to view or edit the TH1 script that +** defines the "report list" page. 
*/ void tktsetup_reportlist(void){ static const char zDesc[] = - @ <p>Enter HTML with embedded TH1 script that will render the "report list" - @ page</p> + @ Enter HTML with embedded TH1 script that will render the "report list" page ; tktsetup_generic( "HTML For Report List", "ticket-reportlist", zDefaultReportList, @@ -602,11 +781,11 @@ } /* ** The default template ticket report format: */ -static char zDefaultReport[] = +static char zDefaultReport[] = @ SELECT @ CASE WHEN status IN ('Open','Verified') THEN '#f2dcdc' @ WHEN status='Review' THEN '#e8e8e8' @ WHEN status='Fixed' THEN '#cfe8bd' @ WHEN status='Tested' THEN '#bde5d6' @@ -630,16 +809,19 @@ return db_get("ticket-report-template", zDefaultReport); } /* ** WEBPAGE: tktsetup_rpttplt +** +** Administrative page used to view or edit the ticket report +** template. */ void tktsetup_rpttplt_page(void){ static const char zDesc[] = - @ <p>Enter the default ticket report format template. This is the - @ the template report format that initially appears when creating a - @ new ticket summary report.</p> + @ Enter the default ticket report format template. This is the + @ template report format that initially appears when creating a + @ new ticket summary report. ; tktsetup_generic( "Default Report Template", "ticket-report-template", zDefaultReport, @@ -651,11 +833,11 @@ } /* ** The default template ticket key: */ -static const char zDefaultKey[] = +static const char zDefaultKey[] = @ #ffffff Key: @ #f2dcdc Active @ #e8e8e8 Review @ #cfe8bd Fixed @ #bde5d6 Tested @@ -666,21 +848,24 @@ /* ** Return the template ticket report format: */ const char *ticket_key_template(void){ - return db_get("ticket-key-template", (char*)zDefaultKey); + return db_get("ticket-key-template", zDefaultKey); } /* ** WEBPAGE: tktsetup_keytplt +** +** Administrative page used to view or edit the Key template +** for tickets. */ void tktsetup_keytplt_page(void){ static const char zDesc[] = - @ <p>Enter the default ticket report color-key template. This is the + @ Enter the default ticket report color-key template. This is the @ the color-key that initially appears when creating a - @ new ticket summary report.</p> + @ new ticket summary report. ; tktsetup_generic( "Default Report Color-Key Template", "ticket-key-template", zDefaultKey, @@ -691,46 +876,52 @@ ); } /* ** WEBPAGE: tktsetup_timeline +** +** Administrative page used ot configure how tickets are +** rendered on timeline views. */ void tktsetup_timeline_page(void){ login_check_credentials(); - if( !g.okSetup ){ - login_needed(); + if( !g.perm.Setup ){ + login_needed(0); + return; } if( P("setup") ){ cgi_redirect("tktsetup"); } style_header("Ticket Display On Timelines"); db_begin_transaction(); - @ <form action="%s(g.zBaseURL)/tktsetup_timeline" method="POST"> + @ <form action="%s(g.zTop)/tktsetup_timeline" method="post"><div> login_insert_csrf_secret(); - @ <hr> - entry_attribute("Ticket Title", 40, "ticket-title-expr", "t", "title"); + @ <hr /> + entry_attribute("Ticket Title", 40, "ticket-title-expr", "t", + "title", 0); @ <p>An SQL expression in a query against the TICKET table that will @ return the title of the ticket for display purposes.</p> - @ <hr> - entry_attribute("Ticket Status", 40, "ticket-status-column", "s", "status"); + @ <hr /> + entry_attribute("Ticket Status", 40, "ticket-status-column", "s", + "status", 0); @ <p>The name of the column in the TICKET table that contains the ticket @ status in human-readable form. 
Case sensitive.</p> - @ <hr> + @ <hr /> entry_attribute("Ticket Closed", 40, "ticket-closed-expr", "c", - "status='Closed'"); + "status='Closed'", 0); @ <p>An SQL expression that evaluates to true in a TICKET table query if @ the ticket is closed.</p> - @ <hr> + @ <hr /> @ <p> - @ <input type="submit" name="submit" value="Apply Changes"> - @ <input type="submit" name="setup" value="Cancel"> + @ <input type="submit" name="submit" value="Apply Changes" /> + @ <input type="submit" name="setup" value="Cancel" /> @ </p> - @ </form> + @ </div></form> db_end_transaction(0); style_footer(); - + } Index: src/translate.c ================================================================== --- src/translate.c +++ src/translate.c @@ -13,12 +13,15 @@ ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** -** begin with the "@" character are translated into cgi_printf() statements -** and the translated code is written on standard output. +** SYNOPSIS: +** +** Input lines that begin with the "@" character are translated into +** either cgi_printf() statements or string literals and the +** translated code is written on standard output. ** ** The problem this program is attempt to solve is as follows: When ** writing CGI programs in C, we typically want to output a lot of HTML ** text to standard output. In pure C code, this involves doing a ** printf() with a big string containing all that text. But we have @@ -27,10 +30,26 @@ ** ** This tool allows us to put raw HTML, without the special codes, in ** the middle of a C program. This program then translates the text ** into standard C by inserting all necessary backslashes and other ** punctuation. +** +** Enhancement #1: +** +** If the last non-whitespace character prior to the first "@" of a +** @-block is "=" or "," then the @-block is a string literal initializer +** rather than text that is to be output via cgi_printf(). Render it +** as such. +** +** Enhancement #2: +** +** Comments of the form: "|* @-comment: CC" (where "|" is really "/") +** cause CC to become a comment character for the @-substitution. +** Typical values for CC are "--" (for SQL text) or "#" (for Tcl script) +** or "//" (for C++ code). Lines of subsequent @-blocks that begin with +** CC are omitted from the output. 
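
(Illustrative sketch only; the variable name and the SQL text below are invented for this example and do not appear in the Fossil sources.)  Given an input block such as

    /* @-comment: -- */
    static const char zExampleSchema[] =
    @ -- this SQL comment line is omitted ("--" is also the default)
    @ CREATE TABLE example(x);
    ;

the translator emits the @-block as a string-literal initializer, because the last non-whitespace character before the first "@" is "=", roughly:

    static const char zExampleSchema[] =
    "CREATE TABLE example(x);\n"
    ;

The same @-lines in ordinary statement position would instead be turned into a cgi_printf() call, as described below.
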
+** */ #include <stdio.h> #include <ctype.h> #include <stdlib.h> #include <string.h> @@ -66,15 +85,16 @@ /* ** Translate the input stream into the output stream */ static void trans(FILE *in, FILE *out){ - int i, j, k; /* Loop counters */ - char c1, c2; /* Characters used to start a comment */ - int lastWasEq = 0; /* True if last non-whitespace character was "=" */ - char zLine[2000]; /* A single line of input */ - char zOut[4000]; /* The input line translated into appropriate output */ + int i, j, k; /* Loop counters */ + char c1, c2; /* Characters used to start a comment */ + int lastWasEq = 0; /* True if last non-whitespace character was "=" */ + int lastWasComma = 0; /* True if last non-whitespace character was "," */ + char zLine[2000]; /* A single line of input */ + char zOut[4000]; /* The input line translated into appropriate output */ c1 = c2 = '-'; while( fgets(zLine, sizeof(zLine), in) ){ for(i=0; zLine[i] && isspace(zLine[i]); i++){} if( zLine[i]!='@' ){ @@ -85,14 +105,16 @@ c1 = zLine[14]; c2 = zLine[15]; } i += strlen(&zLine[i]); while( i>0 && isspace(zLine[i-1]) ){ i--; } - lastWasEq = i>0 && zLine[i-1]=='='; - }else if( lastWasEq ){ + lastWasEq = i>0 && zLine[i-1]=='='; + lastWasComma = i>0 && zLine[i-1]==','; + }else if( lastWasEq || lastWasComma){ /* If the last non-whitespace character before the first @ was - ** an "=" then generate a string literal. But skip comments + ** an "="(var init/set) or a ","(const definition in list) then + ** generate a string literal. But skip comments ** consisting of all text between c1 and c2 (default "--") ** and end of line. */ int indent, omitline; i++; @@ -100,11 +122,11 @@ indent = i - 2; if( indent<0 ) indent = 0; omitline = 0; for(j=0; zLine[i] && zLine[i]!='\r' && zLine[i]!='\n'; i++){ if( zLine[i]==c1 && (c2==' ' || zLine[i+1]==c2) ){ - omitline = 1; break; + omitline = 1; break; } if( zLine[i]=='"' || zLine[i]=='\\' ){ zOut[j++] = '\\'; } zOut[j++] = zLine[i]; } while( j>0 && isspace(zOut[j-1]) ){ j--; } @@ -115,58 +137,70 @@ fprintf(out,"%*s\"%s\\n\"\n",indent, "", zOut); } }else{ /* Otherwise (if the last non-whitespace was not '=') then generate ** a cgi_printf() statement whose format is the text following the '@'. - ** Substrings of the form "%C(...)" where C is any character will - ** puts "%C" in the format and add the "..." as an argument to the - ** cgi_printf call. + ** Substrings of the form "%C(...)" (where C is any sequence of + ** characters other than \000 and '(') will put "%C" in the + ** format and add the "(...)" as an argument to the cgi_printf call. 
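
(Illustrative sketch only; the HTML text and the zName/nMsg expressions are invented for this example.)  An @-line such as

    @ <p>Hello, %h(zName)! You have %d(nMsg) messages.</p>

is translated, roughly, into

    cgi_printf("<p>Hello, %h! You have %d messages.</p>\n",(zName),(nMsg));

Each "%C" conversion stays in the format string while the parenthesized expression that follows it is moved into the cgi_printf() argument list.
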
*/ int indent; + int nC; + char c; i++; if( isspace(zLine[i]) ){ i++; } indent = i; for(j=0; zLine[i] && zLine[i]!='\r' && zLine[i]!='\n'; i++){ if( zLine[i]=='"' || zLine[i]=='\\' ){ zOut[j++] = '\\'; } zOut[j++] = zLine[i]; if( zLine[i]!='%' || zLine[i+1]=='%' || zLine[i+1]==0 ) continue; - if( zLine[i+2]!='(' ) continue; - i++; - zOut[j++] = zLine[i]; + for(nC=1; zLine[i+nC] && zLine[i+nC]!='('; nC++){} + if( zLine[i+nC]!='(' || !isalpha(zLine[i+nC-1]) ) continue; + while( --nC ) zOut[j++] = zLine[++i]; zArg[nArg++] = ','; - i += 2; - k = 1; - while( zLine[i] ){ - if( zLine[i]==')' ){ + k = 0; i++; + while( (c = zLine[i])!=0 ){ + zArg[nArg++] = c; + if( c==')' ){ k--; if( k==0 ) break; - }else if( zLine[i]=='(' ){ + }else if( c=='(' ){ k++; } - zArg[nArg++] = zLine[i++]; + i++; } } zOut[j] = 0; if( !inPrint ){ fprintf(out,"%*scgi_printf(\"%s\\n\"",indent-2,"", zOut); inPrint = 1; }else{ fprintf(out,"\n%*s\"%s\\n\"",indent+5, "", zOut); } - } + } } } int main(int argc, char **argv){ if( argc==2 ){ + char *arg; FILE *in = fopen(argv[1], "r"); if( in==0 ){ fprintf(stderr,"can not open %s\n", argv[1]); exit(1); } + printf("#line 1 \""); + for(arg=argv[1]; *arg; arg++){ + if( *arg!='\\' ){ + printf("%c", *arg); + }else{ + printf("\\\\"); + } + } + printf("\"\n"); trans(in, stdout); fclose(in); }else{ trans(stdin, stdout); } return 0; } Index: src/undo.c ================================================================== --- src/undo.c +++ src/undo.c @@ -12,17 +12,26 @@ ** Author contact information: ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* -** +** ** This file implements the undo/redo functionality. */ #include "config.h" #include "undo.h" - +#if INTERFACE +/* +** Possible return values from the undo_maybe_save() routine. +*/ +#define UNDO_NONE (0) /* Placeholder only used to initialize vars. */ +#define UNDO_SAVED_OK (1) /* The specified file was saved succesfully. */ +#define UNDO_DISABLED (2) /* File not saved, subsystem is disabled. */ +#define UNDO_INACTIVE (3) /* File not saved, subsystem is not active. */ +#define UNDO_TOOBIG (4) /* File not saved, it exceeded a size limit. */ +#endif /* ** Undo the change to the file zPathname. zPathname is the pathname ** of the file relative to the root of the repository. If redoFlag is ** true the redo a change. If there is nothing to undo (or redo) then @@ -30,48 +39,71 @@ */ static void undo_one(const char *zPathname, int redoFlag){ Stmt q; char *zFullname; db_prepare(&q, - "SELECT content, existsflag FROM undo WHERE pathname=%Q AND redoflag=%d", + "SELECT content, existsflag, isExe, isLink FROM undo" + " WHERE pathname=%Q AND redoflag=%d", zPathname, redoFlag ); if( db_step(&q)==SQLITE_ROW ){ int old_exists; int new_exists; + int old_exe; + int new_exe; + int new_link; + int old_link; Blob current; Blob new; zFullname = mprintf("%s/%s", g.zLocalRoot, zPathname); - new_exists = file_size(zFullname)>=0; + old_link = db_column_int(&q, 3); + new_exists = file_wd_size(zFullname)>=0; + new_link = file_wd_islink(0); if( new_exists ){ - blob_read_from_file(¤t, zFullname); + if( new_link ){ + blob_read_link(¤t, zFullname); + }else{ + blob_read_from_file(¤t, zFullname); + } + new_exe = file_wd_isexe(0); }else{ blob_zero(¤t); + new_exe = 0; } blob_zero(&new); old_exists = db_column_int(&q, 1); + old_exe = db_column_int(&q, 2); if( old_exists ){ db_ephemeral_blob(&q, 0, &new); } if( old_exists ){ if( new_exists ){ - printf("%s %s\n", redoFlag ? 
"REDO" : "UNDO", zPathname); - }else{ - printf("NEW %s\n", zPathname); - } - blob_write_to_file(&new, zFullname); - }else{ - printf("DELETE %s\n", zPathname); - unlink(zFullname); + fossil_print("%s %s\n", redoFlag ? "REDO" : "UNDO", zPathname); + }else{ + fossil_print("NEW %s\n", zPathname); + } + if( new_exists && (new_link || old_link) ){ + file_delete(zFullname); + } + if( old_link ){ + symlink_create(blob_str(&new), zFullname); + }else{ + blob_write_to_file(&new, zFullname); + } + file_wd_setexe(zFullname, old_exe); + }else{ + fossil_print("DELETE %s\n", zPathname); + file_delete(zFullname); } blob_reset(&new); free(zFullname); db_finalize(&q); - db_prepare(&q, - "UPDATE undo SET content=:c, existsflag=%d, redoflag=NOT redoflag" + db_prepare(&q, + "UPDATE undo SET content=:c, existsflag=%d, isExe=%d, isLink=%d," + " redoflag=NOT redoflag" " WHERE pathname=%Q", - new_exists, zPathname + new_exists, new_exe, new_link, zPathname ); if( new_exists ){ db_bind_blob(&q, ":c", ¤t); } db_step(&q); @@ -119,60 +151,113 @@ "INSERT INTO vmerge SELECT * FROM undo_vmerge;" "DELETE FROM undo_vmerge;" "INSERT INTO undo_vmerge SELECT * FROM undo_vmerge_2;" "DROP TABLE undo_vmerge_2;" ); + if( db_table_exists("localdb", "undo_stash") ){ + if( redoFlag ){ + db_multi_exec( + "DELETE FROM stash WHERE stashid IN (SELECT stashid FROM undo_stash);" + "DELETE FROM stashfile" + " WHERE stashid NOT IN (SELECT stashid FROM stash);" + ); + }else{ + db_multi_exec( + "INSERT OR IGNORE INTO stash SELECT * FROM undo_stash;" + "INSERT OR IGNORE INTO stashfile SELECT * FROM undo_stashfile;" + ); + } + } ncid = db_lget_int("undo_checkout", 0); ucid = db_lget_int("checkout", 0); db_lset_int("undo_checkout", ucid); db_lset_int("checkout", ncid); } /* -** Reset the the undo memory. +** Reset the undo memory. */ void undo_reset(void){ static const char zSql[] = @ DROP TABLE IF EXISTS undo; @ DROP TABLE IF EXISTS undo_vfile; @ DROP TABLE IF EXISTS undo_vmerge; - @ DROP TABLE IF EXISTS undo_pending; + @ DROP TABLE IF EXISTS undo_stash; + @ DROP TABLE IF EXISTS undo_stashfile; ; - db_multi_exec(zSql); + db_multi_exec(zSql /*works-like:""*/); db_lset_int("undo_available", 0); db_lset_int("undo_checkout", 0); } +/* +** The following variable stores the original command-line of the +** command that is a candidate to be undone. +*/ +static char *undoCmd = 0; + /* ** This flag is true if we are in the process of collecting file changes ** for undo. When this flag is false, undo_save() is a no-op. +** +** The undoDisable flag, if set, prevents undo from being activated. */ static int undoActive = 0; +static int undoDisable = 0; + + +/* +** Capture the current command-line and store it as part of the undo +** state. This routine is called before options are extracted from the +** command-line so that we can record the complete command-line. +*/ +void undo_capture_command_line(void){ + Blob cmdline; + int i; + if( undoCmd!=0 || undoDisable ) return; + blob_zero(&cmdline); + for(i=1; i<g.argc; i++){ + if( i>1 ) blob_append(&cmdline, " ", 1); + blob_append(&cmdline, g.argv[i], -1); + } + undoCmd = blob_str(&cmdline); +} /* ** Begin capturing a snapshot that can be undone. */ void undo_begin(void){ int cid; - static const char zSql[] = - @ CREATE TABLE undo( + const char *zDb = db_name("localdb"); + static const char zSql[] = + @ CREATE TABLE "%w".undo( @ pathname TEXT UNIQUE, -- Name of the file @ redoflag BOOLEAN, -- 0 for undoable. 
1 for redoable @ existsflag BOOLEAN, -- True if the file exists + @ isExe BOOLEAN, -- True if the file is executable + @ isLink BOOLEAN, -- True if the file is symlink @ content BLOB -- Saved content @ ); - @ CREATE TABLE undo_vfile AS SELECT * FROM vfile; - @ CREATE TABLE undo_vmerge AS SELECT * FROM vmerge; - @ CREATE TABLE undo_pending(undoId INTEGER PRIMARY KEY); + @ CREATE TABLE "%w".undo_vfile AS SELECT * FROM vfile; + @ CREATE TABLE "%w".undo_vmerge AS SELECT * FROM vmerge; ; + if( undoDisable ) return; undo_reset(); - db_multi_exec(zSql); + db_multi_exec(zSql/*works-like:"%w,%w,%w"*/, zDb, zDb, zDb); cid = db_lget_int("checkout", 0); db_lset_int("undo_checkout", cid); db_lset_int("undo_available", 1); + db_lset("undo_cmdline", undoCmd); undoActive = 1; } + +/* +** Permanently disable undo +*/ +void undo_disable(void){ + undoDisable = 1; +} /* ** This flag is true if one or more files have changed and have been ** recorded in the undo log but the undo log has not yet been committed. ** @@ -185,41 +270,143 @@ ** Save the current content of the file zPathname so that it ** will be undoable. The name is relative to the root of the ** tree. */ void undo_save(const char *zPathname){ + if( undoDisable ) return; + if( undo_maybe_save(zPathname, -1)!=UNDO_SAVED_OK ){ + fossil_panic("failed to save undo information for path: %s", + zPathname); + } +} + +/* +** Possibly save the current content of the file zPathname so +** that it will be undoable. The name is relative to the root +** of the tree. The limit argument may be used to specify the +** maximum size for the file to be saved. If the size of the +** specified file exceeds this size limit (in bytes), it will +** not be saved and an appropriate code will be returned. +** +** WARNING: Please do NOT call this function with a limit +** value less than zero, call the undo_save() +** function instead. +** +** The return value of this function will always be one of the +** following codes: +** +** UNDO_SAVED_OK: The specified file was saved succesfully. +** +** UNDO_DISABLED: The specified file was NOT saved, because the +** "undo subsystem" is disabled. This error may +** indicate that a call to undo_disable() was +** issued. +** +** UNDO_INACTIVE: The specified file was NOT saved, because the +** "undo subsystem" is not active. This error +** may indicate that a call to undo_begin() is +** missing. +** +** UNDO_TOOBIG: The specified file was NOT saved, because it +** exceeded the specified size limit. It is +** impossible for this value to be returned if +** the specified size limit is less than zero +** (i.e. unlimited). 
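
(Illustrative sketch only; the 10MiB limit and the zFilename variable are assumptions made for this example.)  A caller that wants to record undo information only for reasonably small files might combine undo_maybe_save() with undo_save_message(), which is written to follow the word "because":

    int rc = undo_maybe_save(zFilename, 10*1024*1024);
    if( rc!=UNDO_SAVED_OK ){
      fossil_print("NOTE: %s was not saved for undo because %s\n",
                   zFilename, undo_save_message(rc));
    }
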
+*/ +int undo_maybe_save(const char *zPathname, i64 limit){ char *zFullname; - Blob content; - int existsFlag; - Stmt q; - - if( !undoActive ) return; - zFullname = mprintf("%s/%s", g.zLocalRoot, zPathname); - existsFlag = file_size(zFullname)>=0; - db_prepare(&q, - "REPLACE INTO undo(pathname,redoflag,existsflag,content)" - " VALUES(%Q,0,%d,:c)", - zPathname, existsFlag - ); - if( existsFlag ){ - blob_read_from_file(&content, zFullname); - db_bind_blob(&q, ":c", &content); + i64 size; + int result; + + if( undoDisable ) return UNDO_DISABLED; + if( !undoActive ) return UNDO_INACTIVE; + zFullname = mprintf("%s%s", g.zLocalRoot, zPathname); + size = file_wd_size(zFullname); + if( limit<0 || size<=limit ){ + int existsFlag = (size>=0); + int isLink = file_wd_islink(zFullname); + Stmt q; + Blob content; + db_prepare(&q, + "INSERT OR IGNORE INTO" + " undo(pathname,redoflag,existsflag,isExe,isLink,content)" + " VALUES(%Q,0,%d,%d,%d,:c)", + zPathname, existsFlag, file_wd_isexe(zFullname), isLink + ); + if( existsFlag ){ + if( isLink ){ + blob_read_link(&content, zFullname); + }else{ + blob_read_from_file(&content, zFullname); + } + db_bind_blob(&q, ":c", &content); + } + db_step(&q); + db_finalize(&q); + if( existsFlag ){ + blob_reset(&content); + } + undoNeedRollback = 1; + result = UNDO_SAVED_OK; + }else{ + result = UNDO_TOOBIG; } free(zFullname); - db_step(&q); - db_finalize(&q); - if( existsFlag ){ - blob_reset(&content); + return result; +} + +/* +** Returns an error message for the undo_maybe_save() return code. +** Currently, this function assumes that the caller is using the +** returned error message in a context prefixed with "because". +*/ +const char *undo_save_message(int rc){ + static char zRc[32]; + + switch( rc ){ + case UNDO_NONE: return "undo is disabled for this operation"; + case UNDO_SAVED_OK: return "the save operation was successful"; + case UNDO_DISABLED: return "the undo subsystem is disabled"; + case UNDO_INACTIVE: return "the undo subsystem is inactive"; + case UNDO_TOOBIG: return "the file is too big"; + default: { + sqlite3_snprintf(sizeof(zRc), zRc, "of error code %d", rc); + } } - undoNeedRollback = 1; + return zRc; +} + +/* +** Make the current state of stashid undoable. +*/ +void undo_save_stash(int stashid){ + const char *zDb = db_name("localdb"); + db_multi_exec( + "CREATE TABLE IF NOT EXISTS \"%w\".undo_stash" + " AS SELECT * FROM stash WHERE 0;" + "INSERT INTO undo_stash" + " SELECT * FROM stash WHERE stashid=%d;", + zDb, stashid + ); + db_multi_exec( + "CREATE TABLE IF NOT EXISTS \"%w\".undo_stashfile" + " AS SELECT * FROM stashfile WHERE 0;" + "INSERT INTO undo_stashfile" + " SELECT * FROM stashfile WHERE stashid=%d;", + zDb, stashid + ); } /* ** Complete the undo process is one is currently in process. */ void undo_finish(void){ if( undoActive ){ + if( undoNeedRollback ){ + fossil_print(" \"fossil undo\" is available to undo changes" + " to the working checkout.\n"); + } undoActive = 0; undoNeedRollback = 0; } } @@ -235,87 +422,115 @@ void undo_rollback(void){ if( !undoNeedRollback ) return; assert( undoActive ); undoNeedRollback = 0; undoActive = 0; - printf("Rolling back prior filesystem changes...\n"); + fossil_print("Rolling back prior filesystem changes...\n"); undo_all_filesystem(0); } /* ** COMMAND: undo -** -** Usage: %fossil undo ?FILENAME...? -** -** Undo the most recent update or merge or revert operation. If FILENAME is -** specified then restore the content of the named file(s) but otherwise -** leave the update or merge or revert in effect. 
-** -** A single level of undo/redo is supported. The undo/redo stack -** is cleared by the commit and checkout commands. -*/ -void undo_cmd(void){ - int undo_available; - db_must_be_within_tree(); - db_begin_transaction(); - undo_available = db_lget_int("undo_available", 0); - if( g.argc==2 ){ - if( undo_available!=1 ){ - fossil_fatal("no update or merge operation is available to undo"); - } - undo_all(0); - db_lset_int("undo_available", 2); - }else if( g.argc>=3 ){ - int i; - if( undo_available==0 ){ - fossil_fatal("no update or merge operation is available to undo"); - } - for(i=2; i<g.argc; i++){ - const char *zFile = g.argv[i]; - Blob path; - file_tree_name(zFile, &path, 1); - undo_one(blob_str(&path), 0); - blob_reset(&path); - } - } - db_end_transaction(0); -} - -/* -** COMMAND: redo -** -** Usage: %fossil redo ?FILENAME...? -** -** Redo the an update or merge or revert operation that has been undone -** by the undo command. If FILENAME is specified then restore the changes -** associated with the named file(s) but otherwise leave the update -** or merge undone. -** -** A single level of undo/redo is supported. The undo/redo stack -** is cleared by the commit and checkout commands. -*/ -void redo_cmd(void){ - int undo_available; - db_must_be_within_tree(); - db_begin_transaction(); - undo_available = db_lget_int("undo_available", 0); - if( g.argc==2 ){ - if( undo_available!=2 ){ - fossil_fatal("no undone update or merge operation is available to redo"); - } - undo_all(1); - db_lset_int("undo_available", 1); - }else if( g.argc>=3 ){ - int i; - if( undo_available==0 ){ - fossil_fatal("no update or merge operation is available to redo"); - } - for(i=2; i<g.argc; i++){ - const char *zFile = g.argv[i]; - Blob path; - file_tree_name(zFile, &path, 1); - undo_one(blob_str(&path), 0); - blob_reset(&path); +** COMMAND: redo* +** +** Usage: %fossil undo ?OPTIONS? ?FILENAME...? +** or: %fossil redo ?OPTIONS? ?FILENAME...? +** +** Undo the changes to the working checkout caused by the most recent +** of the following operations: +** +** (1) fossil update (5) fossil stash apply +** (2) fossil merge (6) fossil stash drop +** (3) fossil revert (7) fossil stash goto +** (4) fossil stash pop +** +** The "fossil clean" operation can also be undone; however, this is +** currently limited to files that are less than 10MiB in size. +** +** If FILENAME is specified then restore the content of the named +** file(s) but otherwise leave the update or merge or revert in effect. +** The redo command undoes the effect of the most recent undo. +** +** If the -n|--dry-run option is present, no changes are made and instead +** the undo or redo command explains what actions the undo or redo would +** have done had the -n|--dry-run been omitted. +** +** A single level of undo/redo is supported. The undo/redo stack +** is cleared by the commit and checkout commands. +** +** Options: +** -n|--dry-run do not make changes but show what would be done +** +** See also: commit, status +*/ +void undo_cmd(void){ + int isRedo = g.argv[1][0]=='r'; + int undo_available; + int dryRunFlag = find_option("dry-run", "n", 0)!=0; + const char *zCmd = isRedo ? 
"redo" : "undo"; + + if( !dryRunFlag ){ + dryRunFlag = find_option("explain", 0, 0)!=0; + } + db_must_be_within_tree(); + verify_all_options(); + db_begin_transaction(); + undo_available = db_lget_int("undo_available", 0); + if( dryRunFlag ){ + if( undo_available==0 ){ + fossil_print("No undo or redo is available\n"); + }else{ + Stmt q; + int nChng = 0; + zCmd = undo_available==1 ? "undo" : "redo"; + fossil_print("A %s is available for the following command:\n\n" + " %s %s\n\n", + zCmd, g.argv[0], db_lget("undo_cmdline", "???")); + db_prepare(&q, + "SELECT existsflag, pathname FROM undo ORDER BY pathname" + ); + while( db_step(&q)==SQLITE_ROW ){ + if( nChng==0 ){ + fossil_print("The following file changes would occur if the " + "command above is %sne:\n\n", zCmd); + } + nChng++; + fossil_print("%s %s\n", + db_column_int(&q,0) ? "UPDATE" : "DELETE", + db_column_text(&q, 1) + ); + } + db_finalize(&q); + if( nChng==0 ){ + fossil_print("No file changes would occur with this undo/redo.\n"); + } + } + }else{ + int vid1 = db_lget_int("checkout", 0); + int vid2; + if( g.argc==2 ){ + if( undo_available!=(1+isRedo) ){ + fossil_fatal("nothing to %s", zCmd); + } + undo_all(isRedo); + db_lset_int("undo_available", 2-isRedo); + }else if( g.argc>=3 ){ + int i; + if( undo_available==0 ){ + fossil_fatal("nothing to %s", zCmd); + } + for(i=2; i<g.argc; i++){ + const char *zFile = g.argv[i]; + Blob path; + file_tree_name(zFile, &path, 0, 1); + undo_one(blob_str(&path), isRedo); + blob_reset(&path); + } + } + vid2 = db_lget_int("checkout", 0); + if( vid1!=vid2 ){ + fossil_print("--------------------\n"); + show_common_info(vid2, "updated-to:", 1, 0); } } db_end_transaction(0); } ADDED src/unicode.c Index: src/unicode.c ================================================================== --- src/unicode.c +++ src/unicode.c @@ -0,0 +1,382 @@ +/* +** Copyright (c) 2013 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file is copied from ext/fts3/fts3_unicode2.c of SQLite3 with +** minor changes. +*/ +#include "config.h" +#include "unicode.h" + +/* +** Return true if the argument corresponds to a unicode codepoint +** classified as either a letter or a number. Otherwise false. +** +** The results are undefined if the value passed to this function +** is less than zero. +*/ +int unicode_isalnum(int c){ + /* Each unsigned integer in the following array corresponds to a contiguous + ** range of unicode codepoints that are not either letters or numbers (i.e. + ** codepoints for which this function should return 0). + ** + ** The most significant 22 bits in each 32-bit value contain the first + ** codepoint in the range. The least significant 10 bits are used to store + ** the size of the range (always at least 1). In other words, the value + ** ((C<<22) + N) represents a range of N codepoints starting with codepoint + ** C. It is not possible to represent a range larger than 1023 codepoints + ** using this format. 
+ */ + static const unsigned int aEntry[] = { + 0x00000030, 0x0000E807, 0x00016C06, 0x0001EC2F, 0x0002AC07, + 0x0002D001, 0x0002D803, 0x0002EC01, 0x0002FC01, 0x00035C01, + 0x0003DC01, 0x000B0804, 0x000B480E, 0x000B9407, 0x000BB401, + 0x000BBC81, 0x000DD401, 0x000DF801, 0x000E1002, 0x000E1C01, + 0x000FD801, 0x00120808, 0x00156806, 0x00162402, 0x00163403, + 0x00164437, 0x0017CC02, 0x0018001D, 0x00187802, 0x00192C15, + 0x0019A804, 0x0019C001, 0x001B5001, 0x001B580F, 0x001B9C07, + 0x001BF402, 0x001C000E, 0x001C3C01, 0x001C4401, 0x001CC01B, + 0x001E980B, 0x001FAC09, 0x001FD804, 0x00205804, 0x00206C09, + 0x00209403, 0x0020A405, 0x0020C00F, 0x00216403, 0x00217801, + 0x00238C21, 0x0024E803, 0x0024F812, 0x00254407, 0x00258804, + 0x0025C001, 0x00260403, 0x0026F001, 0x0026F807, 0x00271C02, + 0x00272C03, 0x00275C01, 0x00278802, 0x0027C802, 0x0027E802, + 0x00280403, 0x0028F001, 0x0028F805, 0x00291C02, 0x00292C03, + 0x00294401, 0x0029C002, 0x0029D401, 0x002A0403, 0x002AF001, + 0x002AF808, 0x002B1C03, 0x002B2C03, 0x002B8802, 0x002BC002, + 0x002C0403, 0x002CF001, 0x002CF807, 0x002D1C02, 0x002D2C03, + 0x002D5802, 0x002D8802, 0x002DC001, 0x002E0801, 0x002EF805, + 0x002F1803, 0x002F2804, 0x002F5C01, 0x002FCC08, 0x00300004, + 0x0030F807, 0x00311803, 0x00312804, 0x00315402, 0x00318802, + 0x0031FC01, 0x00320403, 0x0032F001, 0x0032F807, 0x00331803, + 0x00332804, 0x00335402, 0x00338802, 0x00340403, 0x0034F807, + 0x00351803, 0x00352804, 0x00355C01, 0x00358802, 0x0035E401, + 0x00360802, 0x00372801, 0x00373C06, 0x00375801, 0x00376008, + 0x0037C803, 0x0038C401, 0x0038D007, 0x0038FC01, 0x00391C09, + 0x00396802, 0x003AC401, 0x003AD006, 0x003AEC02, 0x003B2006, + 0x003C041F, 0x003CD00C, 0x003DC417, 0x003E340B, 0x003E6424, + 0x003EF80F, 0x003F380D, 0x0040AC14, 0x00412806, 0x00415804, + 0x00417803, 0x00418803, 0x00419C07, 0x0041C404, 0x0042080C, + 0x00423C01, 0x00426806, 0x0043EC01, 0x004D740C, 0x004E400A, + 0x00500001, 0x0059B402, 0x005A0001, 0x005A6C02, 0x005BAC03, + 0x005C4803, 0x005CC805, 0x005D4802, 0x005DC802, 0x005ED023, + 0x005F6004, 0x005F7401, 0x0060000F, 0x0062A401, 0x0064800C, + 0x0064C00C, 0x00650001, 0x00651002, 0x00677822, 0x00685C05, + 0x00687802, 0x0069540A, 0x0069801D, 0x0069FC01, 0x006A8007, + 0x006AA006, 0x006AC00F, 0x006C0005, 0x006CD011, 0x006D6823, + 0x006E0003, 0x006E840D, 0x006F980E, 0x006FF004, 0x00709014, + 0x0070EC05, 0x0071F802, 0x00730008, 0x00734019, 0x0073B401, + 0x0073C803, 0x0073E002, 0x00770036, 0x0077F004, 0x007EF401, + 0x007EFC03, 0x007F3403, 0x007F7403, 0x007FB403, 0x007FF402, + 0x00800065, 0x0081980A, 0x0081E805, 0x00822805, 0x0082801F, + 0x00834021, 0x00840002, 0x00840C04, 0x00842002, 0x00845001, + 0x00845803, 0x00847806, 0x00849401, 0x00849C01, 0x0084A401, + 0x0084B801, 0x0084E802, 0x00850005, 0x00852804, 0x00853C01, + 0x00862802, 0x0086426B, 0x00900027, 0x0091000B, 0x0092704E, + 0x00940276, 0x009E53E0, 0x00ADD820, 0x00AE6022, 0x00AEF40C, + 0x00AF2808, 0x00AFB004, 0x00B39406, 0x00B3BC03, 0x00B3E404, + 0x00B3F802, 0x00B5C001, 0x00B5FC01, 0x00B7804F, 0x00B8C013, + 0x00BA001A, 0x00BA6C59, 0x00BC00D6, 0x00BFC00C, 0x00C00005, + 0x00C02019, 0x00C0A807, 0x00C0D802, 0x00C0F403, 0x00C26404, + 0x00C28001, 0x00C3EC01, 0x00C64002, 0x00C6580A, 0x00C70024, + 0x00C8001F, 0x00C8A81E, 0x00C94001, 0x00C98020, 0x00CA2827, + 0x00CB003F, 0x00CC0100, 0x01370040, 0x02924037, 0x0293F802, + 0x02983403, 0x0299BC10, 0x029A7802, 0x029BC008, 0x029C0017, + 0x029C8002, 0x029E2402, 0x02A00801, 0x02A01801, 0x02A02C01, + 0x02A08C09, 0x02A0D804, 0x02A1D004, 0x02A20002, 0x02A2D011, + 0x02A33802, 0x02A38012, 
0x02A3E003, 0x02A3F001, 0x02A4980A, + 0x02A51C0D, 0x02A57C01, 0x02A60004, 0x02A6CC1B, 0x02A77802, + 0x02A79401, 0x02A8A40E, 0x02A90C01, 0x02A93002, 0x02A97004, + 0x02A9DC03, 0x02A9EC03, 0x02AAC001, 0x02AAC803, 0x02AADC02, + 0x02AAF802, 0x02AB0401, 0x02AB7802, 0x02ABAC07, 0x02ABD402, + 0x02AD6C01, 0x02AF8C0B, 0x03600001, 0x036DFC02, 0x036FFC02, + 0x037FFC01, 0x03EC7801, 0x03ECA401, 0x03EEC810, 0x03F4F802, + 0x03F7F002, 0x03F8001A, 0x03F88033, 0x03F95013, 0x03F9A004, + 0x03FBFC01, 0x03FC040F, 0x03FC6807, 0x03FCEC06, 0x03FD6C0B, + 0x03FF8007, 0x03FFA007, 0x03FFE405, 0x04040003, 0x0404DC09, + 0x0405E411, 0x04063001, 0x0406400C, 0x04068001, 0x0407402E, + 0x040B8001, 0x040DD805, 0x040E7C01, 0x040F4001, 0x0415BC01, + 0x04215C01, 0x0421DC02, 0x04247C01, 0x0424FC01, 0x04280403, + 0x04281402, 0x04283004, 0x0428E003, 0x0428FC01, 0x04294009, + 0x0429FC01, 0x042B2001, 0x042B9402, 0x042BC007, 0x042CE407, + 0x042E6404, 0x04400003, 0x0440E016, 0x0441FC04, 0x0442C012, + 0x04440003, 0x04449C0E, 0x04450004, 0x0445CC03, 0x04460003, + 0x0446CC0E, 0x04471409, 0x04476C01, 0x04477403, 0x0448B012, + 0x044AA401, 0x044B7C0C, 0x044C0004, 0x044CF001, 0x044CF807, + 0x044D1C02, 0x044D2C03, 0x044D5C01, 0x044D8802, 0x044D9807, + 0x044DC005, 0x0452C014, 0x04531801, 0x0456BC07, 0x0456E020, + 0x04577002, 0x0458C014, 0x045AAC0D, 0x045C740F, 0x045CF004, + 0x0491C005, 0x05A9B802, 0x05ABC006, 0x05ACC010, 0x05AD1002, + 0x05BD442E, 0x05BE3C04, 0x06F27008, 0x074000F6, 0x07440027, + 0x0744A4C0, 0x07480046, 0x074C0057, 0x075B0401, 0x075B6C01, + 0x075BEC01, 0x075C5401, 0x075CD401, 0x075D3C01, 0x075DBC01, + 0x075E2401, 0x075EA401, 0x075F0C01, 0x0760028C, 0x076A6C05, + 0x076A840F, 0x07A34007, 0x07BBC002, 0x07C0002C, 0x07C0C064, + 0x07C2800F, 0x07C2C40F, 0x07C3040F, 0x07C34425, 0x07C4401F, + 0x07C4C03C, 0x07C5C02B, 0x07C7981D, 0x07C8402B, 0x07C90009, + 0x07C94002, 0x07CC027A, 0x07D5EC29, 0x07D6952C, 0x07DB800D, + 0x07DBC004, 0x07DC0074, 0x07DE0055, 0x07E0000C, 0x07E04038, + 0x07E1400A, 0x07E18028, 0x07E2401E, 0x07E44009, 0x07E60005, + 0x07E70001, 0x38000401, 0x38008060, 0x380400F0, + }; + static const unsigned int aAscii[4] = { + 0xFFFFFFFF, 0xFC00FFFF, 0xF8000001, 0xF8000001, + }; + + if( c<128 ){ + return ( (aAscii[c >> 5] & (1 << (c & 0x001F)))==0 ); + }else if( c<(1<<22) ){ + unsigned int key = (((unsigned int)c)<<10) | 0x000003FF; + int iRes = 0; + int iHi = sizeof(aEntry)/sizeof(aEntry[0]) - 1; + int iLo = 0; + while( iHi>=iLo ){ + int iTest = (iHi + iLo) / 2; + if( key >= aEntry[iTest] ){ + iRes = iTest; + iLo = iTest+1; + }else{ + iHi = iTest-1; + } + } + assert( aEntry[0]<key ); + assert( key>=aEntry[iRes] ); + return (((unsigned int)c) >= ((aEntry[iRes]>>10) + (aEntry[iRes]&0x3FF))); + } + return 1; +} + + +/* +** If the argument is a codepoint corresponding to a lowercase letter +** in the ASCII range with a diacritic added, return the codepoint +** of the ASCII letter only. For example, if passed 235 - "LATIN +** SMALL LETTER E WITH DIAERESIS" - return 65 ("LATIN SMALL LETTER +** E"). The resuls of passing a codepoint that corresponds to an +** uppercase letter are undefined. 
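
(Illustrative sketch only, mirroring the lookup below: each aDia[] value packs a starting codepoint in its upper bits and a range length in its low three bits.)  For the entry 1859:

    int start = 1859 >> 3;            /* first codepoint: 232 */
    int last  = start + (1859 & 7);   /* last codepoint:  235 */

so codepoints 232 through 235 (U+00E8..U+00EB, "e" with various diacritics) all map to the corresponding aChar[] value 'e'.
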
+*/ +static int unicode_remove_diacritic(int c){ + static const unsigned short aDia[] = { + 0, 1797, 1848, 1859, 1891, 1928, 1940, 1995, + 2024, 2040, 2060, 2110, 2168, 2206, 2264, 2286, + 2344, 2383, 2472, 2488, 2516, 2596, 2668, 2732, + 2782, 2842, 2894, 2954, 2984, 3000, 3028, 3336, + 3456, 3696, 3712, 3728, 3744, 3896, 3912, 3928, + 3968, 4008, 4040, 4106, 4138, 4170, 4202, 4234, + 4266, 4296, 4312, 4344, 4408, 4424, 4472, 4504, + 6148, 6198, 6264, 6280, 6360, 6429, 6505, 6529, + 61448, 61468, 61534, 61592, 61642, 61688, 61704, 61726, + 61784, 61800, 61836, 61880, 61914, 61948, 61998, 62122, + 62154, 62200, 62218, 62302, 62364, 62442, 62478, 62536, + 62554, 62584, 62604, 62640, 62648, 62656, 62664, 62730, + 62924, 63050, 63082, 63274, 63390, + }; + static const char aChar[] = { + '\0', 'a', 'c', 'e', 'i', 'n', 'o', 'u', 'y', 'y', 'a', 'c', + 'd', 'e', 'e', 'g', 'h', 'i', 'j', 'k', 'l', 'n', 'o', 'r', + 's', 't', 'u', 'u', 'w', 'y', 'z', 'o', 'u', 'a', 'i', 'o', + 'u', 'g', 'k', 'o', 'j', 'g', 'n', 'a', 'e', 'i', 'o', 'r', + 'u', 's', 't', 'h', 'a', 'e', 'o', 'y', '\0', '\0', '\0', '\0', + '\0', '\0', '\0', '\0', 'a', 'b', 'd', 'd', 'e', 'f', 'g', 'h', + 'h', 'i', 'k', 'l', 'l', 'm', 'n', 'p', 'r', 'r', 's', 't', + 'u', 'v', 'w', 'w', 'x', 'y', 'z', 'h', 't', 'w', 'y', 'a', + 'e', 'i', 'o', 'u', 'y', + }; + + unsigned int key = (((unsigned int)c)<<3) | 0x00000007; + int iRes = 0; + int iHi = sizeof(aDia)/sizeof(aDia[0]) - 1; + int iLo = 0; + while( iHi>=iLo ){ + int iTest = (iHi + iLo) / 2; + if( key >= aDia[iTest] ){ + iRes = iTest; + iLo = iTest+1; + }else{ + iHi = iTest-1; + } + } + assert( key>=aDia[iRes] ); + return ((c > (aDia[iRes]>>3) + (aDia[iRes]&0x07)) ? c : (int)aChar[iRes]); +} + + +/* +** Return true if the argument interpreted as a unicode codepoint +** is a diacritical modifier character. +*/ +int unicode_is_diacritic(int c){ + unsigned int mask0 = 0x08029FDF; + unsigned int mask1 = 0x000361F8; + if( c<768 || c>817 ) return 0; + return (c < 768+32) ? + (mask0 & (1 << (c-768))) : + (mask1 & (1 << (c-768-32))); +} + + +/* +** Interpret the argument as a unicode codepoint. If the codepoint +** is an upper case character that has a lower case equivalent, +** return the codepoint corresponding to the lower case version. +** Otherwise, return a copy of the argument. +** +** The results are undefined if the value passed to this function +** is less than zero. +*/ +int unicode_fold(int c, int bRemoveDiacritic){ + /* Each entry in the following array defines a rule for folding a range + ** of codepoints to lower case. The rule applies to a range of nRange + ** codepoints starting at codepoint iCode. + ** + ** If the least significant bit in flags is clear, then the rule applies + ** to all nRange codepoints (i.e. all nRange codepoints are upper case and + ** need to be folded). Or, if it is set, then the rule only applies to + ** every second codepoint in the range, starting with codepoint C. + ** + ** The 7 most significant bits in flags are an index into the aiOff[] + ** array. If a specific codepoint C does require folding, then its lower + ** case equivalent is ((C + aiOff[flags>>1]) & 0xFFFF). + ** + ** The contents of this array are generated by parsing the CaseFolding.txt + ** file distributed as part of the "Unicode Character Database". See + ** http://www.unicode.org for details. 
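
As a worked illustration of the rule spelled out above, consider two entries that appear in the table below: {65, 14, 26} covers 'A'..'Z'; its flags value 14 has the low bit clear, so every codepoint in the range folds, and 14>>1 selects aiOff[7], which is 32, so 'A' (65) folds to 97 ('a'). Entry {256, 1, 48} has the low bit set, so only every second codepoint, starting with 256 itself, folds, each by aiOff[0], which is 1 (U+0100 becomes U+0101, while U+0101 is left alone). A small self-contained sketch of that rule, with the two entries and the first eight aiOff[] values copied by hand and everything else invented as glue:

    #include <assert.h>

    struct DemoEntry { unsigned short iCode; unsigned char flags; unsigned char nRange; };
    static const struct DemoEntry aDemo[] = { {65, 14, 26}, {256, 1, 48} };
    static const unsigned short aDemoOff[] = { 1, 2, 8, 15, 16, 26, 28, 32 };

    /* Apply the folding rule described above for a single table entry. */
    static int demo_fold(const struct DemoEntry *p, int c){
      if( c<p->iCode || c>=p->iCode + p->nRange ) return c;  /* outside range */
      if( 0x01 & p->flags & (p->iCode ^ c) ) return c;       /* "every second" rule */
      return (c + aDemoOff[p->flags>>1]) & 0xFFFF;
    }

    int main(void){
      assert( demo_fold(&aDemo[0], 'A')=='a' );       /* 65 + aiOff[7] (32) = 97    */
      assert( demo_fold(&aDemo[1], 0x0100)==0x0101 ); /* even codepoint: folds by 1 */
      assert( demo_fold(&aDemo[1], 0x0101)==0x0101 ); /* odd codepoint: unchanged   */
      return 0;
    }
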
+ */ + static const struct TableEntry { + unsigned short iCode; + unsigned char flags; + unsigned char nRange; + } aEntry[] = { + {65, 14, 26}, {181, 64, 1}, {192, 14, 23}, + {216, 14, 7}, {256, 1, 48}, {306, 1, 6}, + {313, 1, 16}, {330, 1, 46}, {376, 132, 1}, + {377, 1, 6}, {383, 120, 1}, {385, 50, 1}, + {386, 1, 4}, {390, 44, 1}, {391, 0, 1}, + {393, 42, 2}, {395, 0, 1}, {398, 32, 1}, + {399, 38, 1}, {400, 40, 1}, {401, 0, 1}, + {403, 42, 1}, {404, 46, 1}, {406, 52, 1}, + {407, 48, 1}, {408, 0, 1}, {412, 52, 1}, + {413, 54, 1}, {415, 56, 1}, {416, 1, 6}, + {422, 60, 1}, {423, 0, 1}, {425, 60, 1}, + {428, 0, 1}, {430, 60, 1}, {431, 0, 1}, + {433, 58, 2}, {435, 1, 4}, {439, 62, 1}, + {440, 0, 1}, {444, 0, 1}, {452, 2, 1}, + {453, 0, 1}, {455, 2, 1}, {456, 0, 1}, + {458, 2, 1}, {459, 1, 18}, {478, 1, 18}, + {497, 2, 1}, {498, 1, 4}, {502, 138, 1}, + {503, 150, 1}, {504, 1, 40}, {544, 126, 1}, + {546, 1, 18}, {570, 72, 1}, {571, 0, 1}, + {573, 124, 1}, {574, 70, 1}, {577, 0, 1}, + {579, 122, 1}, {580, 28, 1}, {581, 30, 1}, + {582, 1, 10}, {837, 36, 1}, {880, 1, 4}, + {886, 0, 1}, {895, 36, 1}, {902, 18, 1}, + {904, 16, 3}, {908, 26, 1}, {910, 24, 2}, + {913, 14, 17}, {931, 14, 9}, {962, 0, 1}, + {975, 4, 1}, {976, 156, 1}, {977, 158, 1}, + {981, 162, 1}, {982, 160, 1}, {984, 1, 24}, + {1008, 152, 1}, {1009, 154, 1}, {1012, 146, 1}, + {1013, 144, 1}, {1015, 0, 1}, {1017, 168, 1}, + {1018, 0, 1}, {1021, 126, 3}, {1024, 34, 16}, + {1040, 14, 32}, {1120, 1, 34}, {1162, 1, 54}, + {1216, 6, 1}, {1217, 1, 14}, {1232, 1, 96}, + {1329, 22, 38}, {4256, 68, 38}, {4295, 68, 1}, + {4301, 68, 1}, {5112, 166, 6}, {7680, 1, 150}, + {7835, 148, 1}, {7838, 112, 1}, {7840, 1, 96}, + {7944, 166, 8}, {7960, 166, 6}, {7976, 166, 8}, + {7992, 166, 8}, {8008, 166, 6}, {8025, 167, 8}, + {8040, 166, 8}, {8072, 166, 8}, {8088, 166, 8}, + {8104, 166, 8}, {8120, 166, 2}, {8122, 142, 2}, + {8124, 164, 1}, {8126, 116, 1}, {8136, 140, 4}, + {8140, 164, 1}, {8152, 166, 2}, {8154, 136, 2}, + {8168, 166, 2}, {8170, 134, 2}, {8172, 168, 1}, + {8184, 128, 2}, {8186, 130, 2}, {8188, 164, 1}, + {8486, 114, 1}, {8490, 108, 1}, {8491, 110, 1}, + {8498, 12, 1}, {8544, 8, 16}, {8579, 0, 1}, + {9398, 10, 26}, {11264, 22, 47}, {11360, 0, 1}, + {11362, 104, 1}, {11363, 118, 1}, {11364, 106, 1}, + {11367, 1, 6}, {11373, 100, 1}, {11374, 102, 1}, + {11375, 96, 1}, {11376, 98, 1}, {11378, 0, 1}, + {11381, 0, 1}, {11390, 94, 2}, {11392, 1, 100}, + {11499, 1, 4}, {11506, 0, 1}, {42560, 1, 46}, + {42624, 1, 28}, {42786, 1, 14}, {42802, 1, 62}, + {42873, 1, 4}, {42877, 92, 1}, {42878, 1, 10}, + {42891, 0, 1}, {42893, 84, 1}, {42896, 1, 4}, + {42902, 1, 20}, {42922, 78, 1}, {42923, 74, 1}, + {42924, 76, 1}, {42925, 80, 1}, {42928, 88, 1}, + {42929, 82, 1}, {42930, 86, 1}, {42931, 66, 1}, + {42932, 1, 4}, {43888, 90, 80}, {65313, 14, 26}, + }; + static const unsigned short aiOff[] = { + 1, 2, 8, 15, 16, 26, 28, 32, + 37, 38, 40, 48, 63, 64, 69, 71, + 79, 80, 116, 202, 203, 205, 206, 207, + 209, 210, 211, 213, 214, 217, 218, 219, + 775, 928, 7264, 10792, 10795, 23217, 23221, 23228, + 23231, 23254, 23256, 23275, 23278, 26672, 30204, 54721, + 54753, 54754, 54756, 54787, 54793, 54809, 57153, 57274, + 57921, 58019, 58363, 61722, 65268, 65341, 65373, 65406, + 65408, 65410, 65415, 65424, 65436, 65439, 65450, 65462, + 65472, 65476, 65478, 65480, 65482, 65488, 65506, 65511, + 65514, 65521, 65527, 65528, 65529, + }; + + int ret = c; + + assert( c>=0 ); + assert( sizeof(unsigned short)==2 && sizeof(unsigned char)==1 ); + + if( c<128 ){ + if( c>='A' && 
c<='Z' ) ret = c + ('a' - 'A'); + }else if( c<65536 ){ + int iHi = sizeof(aEntry)/sizeof(aEntry[0]) - 1; + int iLo = 0; + int iRes = -1; + + while( iHi>=iLo ){ + int iTest = (iHi + iLo) / 2; + int cmp = (c - aEntry[iTest].iCode); + if( cmp>=0 ){ + iRes = iTest; + iLo = iTest+1; + }else{ + iHi = iTest-1; + } + } + assert( iRes<0 || c>=aEntry[iRes].iCode ); + + if( iRes>=0 ){ + const struct TableEntry *p = &aEntry[iRes]; + if( c<(p->iCode + p->nRange) && 0==(0x01 & p->flags & (p->iCode ^ c)) ){ + ret = (c + (aiOff[p->flags>>1])) & 0x0000FFFF; + assert( ret>0 ); + } + } + + if( bRemoveDiacritic ) ret = unicode_remove_diacritic(ret); + } + + else if( c>=66560 && c<66600 ){ + ret = c + 40; + } + else if( c>=68736 && c<68787 ){ + ret = c + 64; + } + else if( c>=71840 && c<71872 ){ + ret = c + 32; + } + + return ret; +} Index: src/update.c ================================================================== --- src/update.c +++ src/update.c @@ -24,342 +24,709 @@ /* ** Return true if artifact rid is a version */ int is_a_version(int rid){ - return db_exists("SELECT 1 FROM plink WHERE cid=%d", rid); + return db_exists("SELECT 1 FROM event WHERE objid=%d AND type='ci'", rid); +} + +/* This variable is set if we are doing an internal update. It is clear +** when running the "update" command. +*/ +static int internalUpdate = 0; +static int internalConflictCnt = 0; + +/* +** Do an update to version vid. +** +** Start an undo session but do not terminate it. Do not autosync. +*/ +int update_to(int vid){ + int savedArgc; + char **savedArgv; + char *newArgv[3]; + newArgv[0] = g.argv[0]; + newArgv[1] = "update"; + newArgv[2] = 0; + savedArgv = g.argv; + savedArgc = g.argc; + g.argc = 2; + g.argv = newArgv; + internalUpdate = vid; + internalConflictCnt = 0; + update_cmd(); + g.argc = savedArgc; + g.argv = savedArgv; + return internalConflictCnt; } /* ** COMMAND: update ** -** Usage: %fossil update ?VERSION? ?FILES...? -** -** Change the version of the current checkout to VERSION. Any uncommitted -** changes are retained and applied to the new checkout. -** -** The VERSION argument can be a specific version or tag or branch name. -** If the VERSION argument is omitted, then the leaf of the the subtree -** that begins at the current version is used, if there is only a single -** leaf. VERSION can also be "current" to select the leaf of the current -** version or "latest" to select the most recent check-in. +** Usage: %fossil update ?OPTIONS? ?VERSION? ?FILES...? +** +** Change the version of the current checkout to VERSION. Any +** uncommitted changes are retained and applied to the new checkout. +** +** The VERSION argument can be a specific version or tag or branch +** name. If the VERSION argument is omitted, then the leaf of the +** subtree that begins at the current version is used, if there is +** only a single leaf. VERSION can also be "current" to select the +** leaf of the current version or "latest" to select the most recent +** check-in. ** ** If one or more FILES are listed after the VERSION then only the -** named files are candidates to be updated. If FILES is omitted, all -** files in the current checkout are subject to be updated. -** -** The -n or --nochange option causes this command to do a "dry run". It -** prints out what would have happened but does not actually make any -** changes to the current checkout or the repository. -** -** The -v or --verbose option prints status information about unchanged -** files in addition to those file that actually do change. 
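
The update_to() helper above re-enters the regular "update" command under a synthesized two-element argument vector and reports the number of merge conflicts back through internalConflictCnt, so other commands can move the checkout programmatically. A minimal standalone sketch of that save/substitute/restore pattern around a global argument vector, with made-up names in place of g.argc and g.argv:

    #include <stdio.h>

    /* Stand-ins for the real g.argc/g.argv globals. */
    static int demo_argc;
    static char **demo_argv;

    static void demo_cmd(void){
      printf("dispatched as: %s %s\n", demo_argv[0], demo_argv[1]);
    }

    /* Re-enter a command handler under a substitute argument vector,
    ** then restore the caller's view, mirroring update_to() above. */
    static void demo_run_as(char *zSubcommand){
      char *newArgv[3];
      int savedArgc = demo_argc;
      char **savedArgv = demo_argv;
      newArgv[0] = demo_argv[0];
      newArgv[1] = zSubcommand;
      newArgv[2] = 0;
      demo_argc = 2;
      demo_argv = newArgv;
      demo_cmd();
      demo_argc = savedArgc;
      demo_argv = savedArgv;
    }

    int main(int argc, char **argv){
      demo_argc = argc;
      demo_argv = argv;
      demo_run_as("update");
      return 0;
    }
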
+** named files are candidates to be updated, and any updates to them +** will be treated as edits to the current version. Using a directory +** name for one of the FILES arguments is the same as using every +** subdirectory and file beneath that directory. +** +** If FILES is omitted, all files in the current checkout are subject +** to being updated and the version of the current checkout is changed +** to VERSION. Any uncommitted changes are retained and applied to the +** new checkout. +** +** The -n or --dry-run option causes this command to do a "dry run". +** It prints out what would have happened but does not actually make +** any changes to the current checkout or the repository. +** +** The -v or --verbose option prints status information about +** unchanged files in addition to those file that actually do change. +** +** Options: +** --case-sensitive <BOOL> override case-sensitive setting +** --debug print debug information on stdout +** --latest acceptable in place of VERSION, update to latest version +** --force-missing force update if missing content after sync +** -n|--dry-run If given, display instead of run actions +** -v|--verbose print status information about all files +** -W|--width <num> Width of lines (default is to auto-detect). Must be >20 +** or 0 (= no limit, resulting in a single line per entry). +** +** See also: revert */ void update_cmd(void){ int vid; /* Current version */ int tid=0; /* Target version - version we are changing to */ Stmt q; int latestFlag; /* --latest. Pick the latest version if true */ - int nochangeFlag; /* -n or --nochange. Do a dry run */ + int dryRunFlag; /* -n or --dry-run. Do a dry run */ int verboseFlag; /* -v or --verbose. Output extra information */ + int forceMissingFlag; /* --force-missing. Continue if missing content */ + int debugFlag; /* --debug option */ + int setmtimeFlag; /* --setmtime. Set mtimes on files */ + int nChng; /* Number of file renames */ + int *aChng; /* Array of file renames */ + int i; /* Loop counter */ + int nConflict = 0; /* Number of merge conflicts */ + int nOverwrite = 0; /* Number of unmanaged files overwritten */ + int nUpdate = 0; /* Number of changes of any kind */ + int width; /* Width of printed comment lines */ + Stmt mtimeXfer; /* Statement to transfer mtimes */ + const char *zWidth; /* Width option string value */ - url_proxy_options(); + if( !internalUpdate ){ + undo_capture_command_line(); + url_proxy_options(); + } + zWidth = find_option("width","W",1); + if( zWidth ){ + width = atoi(zWidth); + if( (width!=0) && (width<=20) ){ + fossil_fatal("-W|--width value must be >20 or 0"); + } + }else{ + width = -1; + } latestFlag = find_option("latest",0, 0)!=0; - nochangeFlag = find_option("nochange","n",0)!=0; + dryRunFlag = find_option("dry-run","n",0)!=0; + if( !dryRunFlag ){ + dryRunFlag = find_option("nochange",0,0)!=0; /* deprecated */ + } verboseFlag = find_option("verbose","v",0)!=0; + forceMissingFlag = find_option("force-missing",0,0)!=0; + debugFlag = find_option("debug",0,0)!=0; + setmtimeFlag = find_option("setmtime",0,0)!=0; + + /* We should be done with options.. 
*/ + verify_all_options(); + db_must_be_within_tree(); vid = db_lget_int("checkout", 0); - if( vid==0 ){ - fossil_fatal("cannot find current version"); - } - if( db_exists("SELECT 1 FROM vmerge") ){ - fossil_fatal("cannot update an uncommitted merge"); - } - if( !nochangeFlag ) autosync(AUTOSYNC_PULL); - - if( g.argc>=3 ){ - if( strcmp(g.argv[2], "current")==0 ){ + user_select(); + if( !dryRunFlag && !internalUpdate ){ + if( autosync_loop(SYNC_PULL + SYNC_VERBOSE*verboseFlag, + db_get_int("autosync-tries", 1)) ){ + fossil_fatal("Cannot proceed with update"); + } + } + + /* Create any empty directories now, as well as after the update, + ** so changes in settings are reflected now */ + if( !dryRunFlag ) ensure_empty_dirs_created(); + + if( internalUpdate ){ + tid = internalUpdate; + }else if( g.argc>=3 ){ + if( fossil_strcmp(g.argv[2], "current")==0 ){ /* If VERSION is "current", then use the same algorithm to find the ** target as if VERSION were omitted. */ - }else if( strcmp(g.argv[2], "latest")==0 ){ + }else if( fossil_strcmp(g.argv[2], "latest")==0 ){ /* If VERSION is "latest", then use the same algorithm to find the ** target as if VERSION were omitted and the --latest flag is present. */ latestFlag = 1; }else{ - tid = name_to_rid(g.argv[2]); - if( tid==0 ){ - fossil_fatal("no such version: %s", g.argv[2]); - }else if( !is_a_version(tid) ){ - fossil_fatal("no such version: %s", g.argv[2]); - } - } - } - - if( tid==0 ){ - compute_leaves(vid, 1); - if( !latestFlag && db_int(0, "SELECT count(*) FROM leaves")>1 ){ - db_prepare(&q, - "%s " - " AND event.objid IN leaves" - " ORDER BY event.mtime DESC", - timeline_query_for_tty() - ); - print_timeline(&q, 100); - db_finalize(&q); - fossil_fatal("Multiple descendants"); + tid = name_to_typed_rid(g.argv[2],"ci"); + if( tid==0 || !is_a_version(tid) ){ + fossil_fatal("no such check-in: %s", g.argv[2]); + } + } + } + + /* If no VERSION is specified on the command-line, then look for a + ** descendent of the current version. If there are multiple descendants, + ** look for one from the same branch as the current version. If there + ** are still multiple descendants, show them all and refuse to update + ** until the user selects one. 
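
In other words, with the checkout on branch "trunk" and two open leaves, one on "trunk" and one on "feature-x", a bare "fossil update" picks the trunk leaf; only if the ambiguity survives the branch filter is the timeline printed and the update refused. A small self-contained sketch of that selection rule, using hypothetical leaf data in place of the leaves table and tagxref query used below:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical open leaves: artifact id plus branch name. */
    struct DemoLeaf { int rid; const char *zBranch; };

    /* Keep only leaves on the current branch; succeed when exactly one
    ** candidate remains, otherwise signal that the caller must
    ** disambiguate (rid 0). */
    static int demo_pick_leaf(const struct DemoLeaf *aLeaf, int nLeaf,
                              const char *zCurrentBranch){
      int i, rid = 0, nMatch = 0;
      if( nLeaf==1 ) return aLeaf[0].rid;
      for(i=0; i<nLeaf; i++){
        if( strcmp(aLeaf[i].zBranch, zCurrentBranch)==0 ){
          rid = aLeaf[i].rid;
          nMatch++;
        }
      }
      return nMatch==1 ? rid : 0;
    }

    int main(void){
      static const struct DemoLeaf aLeaf[] = { {101,"trunk"}, {102,"feature-x"} };
      printf("update target rid = %d\n", demo_pick_leaf(aLeaf, 2, "trunk"));
      return 0;
    }
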
+ */ + if( tid==0 ){ + int closeCode = 1; + compute_leaves(vid, closeCode); + if( !db_exists("SELECT 1 FROM leaves") ){ + closeCode = 0; + compute_leaves(vid, closeCode); + } + if( !latestFlag && db_int(0, "SELECT count(*) FROM leaves")>1 ){ + db_multi_exec( + "DELETE FROM leaves WHERE rid NOT IN" + " (SELECT leaves.rid FROM leaves, tagxref" + " WHERE leaves.rid=tagxref.rid AND tagxref.tagid=%d" + " AND tagxref.value==(SELECT value FROM tagxref" + " WHERE tagid=%d AND rid=%d))", + TAG_BRANCH, TAG_BRANCH, vid + ); + if( db_int(0, "SELECT count(*) FROM leaves")>1 ){ + compute_leaves(vid, closeCode); + db_prepare(&q, + "%s " + " AND event.objid IN leaves" + " ORDER BY event.mtime DESC", + timeline_query_for_tty() + ); + print_timeline(&q, -100, width, 0); + db_finalize(&q); + fossil_fatal("Multiple descendants"); + } } tid = db_int(0, "SELECT rid FROM leaves, event" " WHERE event.objid=leaves.rid" - " ORDER BY event.mtime DESC"); + " ORDER BY event.mtime DESC"); + if( tid==0 ) tid = vid; + } + + if( tid==0 ){ + return; } db_begin_transaction(); - vfile_check_signature(vid, 1); - if( !nochangeFlag ) undo_begin(); - load_vfile_from_rid(tid); + vfile_check_signature(vid, CKSIG_ENOTFILE); + if( !dryRunFlag && !internalUpdate ) undo_begin(); + if( load_vfile_from_rid(tid) && !forceMissingFlag ){ + fossil_fatal("missing content, unable to update"); + }; /* ** The record.fn field is used to match files against each other. The ** FV table contains one row for each each unique filename in ** in the current checkout, the pivot, and the version being merged. */ db_multi_exec( "DROP TABLE IF EXISTS fv;" "CREATE TEMP TABLE fv(" - " fn TEXT PRIMARY KEY," /* The filename relative to root */ + " fn TEXT %s PRIMARY KEY," /* The filename relative to root */ " idv INTEGER," /* VFILE entry for current version */ " idt INTEGER," /* VFILE entry for target version */ " chnged BOOLEAN," /* True if current version has been edited */ + " islinkv BOOLEAN," /* True if current file is a link */ + " islinkt BOOLEAN," /* True if target file is a link */ " ridv INTEGER," /* Record ID for current version */ - " ridt INTEGER " /* Record ID for target */ - ");" - "INSERT OR IGNORE INTO fv" - " SELECT pathname, 0, 0, 0, 0, 0 FROM vfile" - ); - db_prepare(&q, - "SELECT id, pathname, rid FROM vfile" - " WHERE vid=%d", tid - ); - while( db_step(&q)==SQLITE_ROW ){ - int id = db_column_int(&q, 0); - const char *fn = db_column_text(&q, 1); - int rid = db_column_int(&q, 2); - db_multi_exec( - "UPDATE fv SET idt=%d, ridt=%d WHERE fn=%Q", - id, rid, fn - ); - } - db_finalize(&q); - db_prepare(&q, - "SELECT id, pathname, rid, chnged FROM vfile" - " WHERE vid=%d", vid - ); - while( db_step(&q)==SQLITE_ROW ){ - int id = db_column_int(&q, 0); - const char *fn = db_column_text(&q, 1); - int rid = db_column_int(&q, 2); - int chnged = db_column_int(&q, 3); - db_multi_exec( - "UPDATE fv SET idv=%d, ridv=%d, chnged=%d WHERE fn=%Q", - id, rid, chnged, fn - ); - } - db_finalize(&q); + " ridt INTEGER," /* Record ID for target */ + " isexe BOOLEAN," /* Does target have execute permission? 
*/ + " deleted BOOLEAN DEFAULT 0,"/* File marked by "rm" to become unmanaged */ + " fnt TEXT %s" /* Filename of same file on target version */ + ");", + filename_collation(), filename_collation() + ); + + /* Add files found in the current version + */ + db_multi_exec( + "INSERT OR IGNORE INTO fv(fn,fnt,idv,idt,ridv,ridt,isexe,chnged,deleted)" + " SELECT pathname, pathname, id, 0, rid, 0, isexe, chnged, deleted" + " FROM vfile WHERE vid=%d", + vid + ); + + /* Compute file name changes on V->T. Record name changes in files that + ** have changed locally. + */ + if( vid ){ + find_filename_changes(vid, tid, 1, &nChng, &aChng, debugFlag ? "V->T": 0); + if( nChng ){ + for(i=0; i<nChng; i++){ + db_multi_exec( + "UPDATE fv" + " SET fnt=(SELECT name FROM filename WHERE fnid=%d)" + " WHERE fn=(SELECT name FROM filename WHERE fnid=%d) AND chnged", + aChng[i*2+1], aChng[i*2] + ); + } + fossil_free(aChng); + } + } + + /* Add files found in the target version T but missing from the current + ** version V. + */ + db_multi_exec( + "INSERT OR IGNORE INTO fv(fn,fnt,idv,idt,ridv,ridt,isexe,chnged)" + " SELECT pathname, pathname, 0, 0, 0, 0, isexe, 0 FROM vfile" + " WHERE vid=%d" + " AND pathname %s NOT IN (SELECT fnt FROM fv)", + tid, filename_collation() + ); + + /* + ** Compute the file version ids for T + */ + db_multi_exec( + "UPDATE fv SET" + " idt=coalesce((SELECT id FROM vfile WHERE vid=%d AND fnt=pathname),0)," + " ridt=coalesce((SELECT rid FROM vfile WHERE vid=%d AND fnt=pathname),0)", + tid, tid + ); + + /* + ** Add islink information + */ + db_multi_exec( + "UPDATE fv SET" + " islinkv=coalesce((SELECT islink FROM vfile" + " WHERE vid=%d AND fnt=pathname),0)," + " islinkt=coalesce((SELECT islink FROM vfile" + " WHERE vid=%d AND fnt=pathname),0)", + vid, tid + ); + + + if( debugFlag ){ + db_prepare(&q, + "SELECT rowid, fn, fnt, chnged, ridv, ridt, isexe," + " islinkv, islinkt FROM fv" + ); + while( db_step(&q)==SQLITE_ROW ){ + fossil_print("%3d: ridv=%-4d ridt=%-4d chnged=%d isexe=%d" + " islinkv=%d islinkt=%d\n", + db_column_int(&q, 0), + db_column_int(&q, 4), + db_column_int(&q, 5), + db_column_int(&q, 3), + db_column_int(&q, 6), + db_column_int(&q, 7), + db_column_int(&q, 8)); + fossil_print(" fnv = [%s]\n", db_column_text(&q, 1)); + fossil_print(" fnt = [%s]\n", db_column_text(&q, 2)); + } + db_finalize(&q); + } /* If FILES appear on the command-line, remove from the "fv" table - ** every entry that is not named on the command-line. + ** every entry that is not named on the command-line or which is not + ** in a directory named on the command-line. */ if( g.argc>=4 ){ Blob sql; /* SQL statement to purge unwanted entries */ - char *zSep = "("; /* Separator in the list of filenames */ Blob treename; /* Normalized filename */ int i; /* Loop counter */ + const char *zSep; /* Term separator */ blob_zero(&sql); - blob_append(&sql, "DELETE FROM fv WHERE fn NOT IN ", -1); + blob_append(&sql, "DELETE FROM fv WHERE ", -1); + zSep = ""; for(i=3; i<g.argc; i++){ - file_tree_name(g.argv[i], &treename, 1); - blob_appendf(&sql, "%s'%q'", zSep, blob_str(&treename)); + file_tree_name(g.argv[i], &treename, 0, 1); + if( file_wd_isdir(g.argv[i])==1 ){ + if( blob_size(&treename) != 1 || blob_str(&treename)[0] != '.' 
){ + blob_append_sql(&sql, "%sfn NOT GLOB '%q/*' ", + zSep /*safe-for-%s*/, blob_str(&treename)); + }else{ + blob_reset(&sql); + break; + } + }else{ + blob_append_sql(&sql, "%sfn<>%Q ", + zSep /*safe-for-%s*/, blob_str(&treename)); + } + zSep = "AND "; blob_reset(&treename); - zSep = ","; } - blob_append(&sql, ")", -1); - db_multi_exec(blob_str(&sql)); + db_multi_exec("%s", blob_sql_text(&sql)); blob_reset(&sql); } - db_prepare(&q, - "SELECT fn, idv, ridv, idt, ridt, chnged FROM fv ORDER BY 1" + /* + ** Alter the content of the checkout so that it conforms with the + ** target + */ + db_prepare(&q, + "SELECT fn, idv, ridv, idt, ridt, chnged, fnt," + " isexe, islinkv, islinkt, deleted FROM fv ORDER BY 1" + ); + db_prepare(&mtimeXfer, + "UPDATE vfile SET mtime=(SELECT mtime FROM vfile WHERE id=:idv)" + " WHERE id=:idt" ); assert( g.zLocalRoot!=0 ); - assert( strlen(g.zLocalRoot)>1 ); + assert( strlen(g.zLocalRoot)>0 ); assert( g.zLocalRoot[strlen(g.zLocalRoot)-1]=='/' ); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); /* The filename from root */ int idv = db_column_int(&q, 1); /* VFILE entry for current */ int ridv = db_column_int(&q, 2); /* RecordID for current */ int idt = db_column_int(&q, 3); /* VFILE entry for target */ int ridt = db_column_int(&q, 4); /* RecordID for target */ int chnged = db_column_int(&q, 5); /* Current is edited */ + const char *zNewName = db_column_text(&q,6);/* New filename */ + int isexe = db_column_int(&q, 7); /* EXE perm for new file */ + int islinkv = db_column_int(&q, 8); /* Is current file is a link */ + int islinkt = db_column_int(&q, 9); /* Is target file is a link */ + int deleted = db_column_int(&q, 10); /* Marked for deletion */ char *zFullPath; /* Full pathname of the file */ + char *zFullNewPath; /* Full pathname of dest */ + char nameChng; /* True if the name changed */ zFullPath = mprintf("%s%s", g.zLocalRoot, zName); + zFullNewPath = mprintf("%s%s", g.zLocalRoot, zNewName); + nameChng = fossil_strcmp(zName, zNewName); + nUpdate++; + if( deleted ){ + db_multi_exec("UPDATE vfile SET deleted=1 WHERE id=%d", idt); + } if( idv>0 && ridv==0 && idt>0 && ridt>0 ){ /* Conflict. This file has been added to the current checkout ** but also exists in the target checkout. Use the current version. */ - printf("CONFLICT %s\n", zName); + fossil_print("CONFLICT %s\n", zName); + nConflict++; }else if( idt>0 && idv==0 ){ /* File added in the target. */ - printf("ADD %s\n", zName); - undo_save(zName); - if( !nochangeFlag ) vfile_to_disk(0, idt, 0); - }else if( idt>0 && idv>0 && ridt!=ridv && chnged==0 ){ + if( file_wd_isfile_or_link(zFullPath) ){ + fossil_print("ADD %s - overwrites an unmanaged file\n", zName); + nOverwrite++; + }else{ + fossil_print("ADD %s\n", zName); + } + if( !dryRunFlag && !internalUpdate ) undo_save(zName); + if( !dryRunFlag ) vfile_to_disk(0, idt, 0, 0); + }else if( idt>0 && idv>0 && ridt!=ridv && (chnged==0 || deleted) ){ /* The file is unedited. Change it to the target version */ - printf("UPDATE %s\n", zName); - undo_save(zName); - if( !nochangeFlag ) vfile_to_disk(0, idt, 0); - }else if( idt>0 && idv>0 && file_size(zFullPath)<0 ){ + if( deleted ){ + fossil_print("UPDATE %s - change to unmanaged file\n", zName); + }else{ + fossil_print("UPDATE %s\n", zName); + } + if( !dryRunFlag && !internalUpdate ) undo_save(zName); + if( !dryRunFlag ) vfile_to_disk(0, idt, 0, 0); + }else if( idt>0 && idv>0 && !deleted && file_wd_size(zFullPath)<0 ){ /* The file missing from the local check-out. 
Restore it to the ** version that appears in the target. */ - printf("UPDATE %s\n", zName); - undo_save(zName); - if( !nochangeFlag ) vfile_to_disk(0, idt, 0); + fossil_print("UPDATE %s\n", zName); + if( !dryRunFlag && !internalUpdate ) undo_save(zName); + if( !dryRunFlag ) vfile_to_disk(0, idt, 0, 0); }else if( idt==0 && idv>0 ){ if( ridv==0 ){ /* Added in current checkout. Continue to hold the file as ** as an addition */ db_multi_exec("UPDATE vfile SET vid=%d WHERE id=%d", tid, idv); }else if( chnged ){ - /* Edited locally but deleted from the target. Delete it. */ - printf("CONFLICT %s\n", zName); + /* Edited locally but deleted from the target. Do not track the + ** file but keep the edited version around. */ + fossil_print("CONFLICT %s - edited locally but deleted by update\n", + zName); + nConflict++; }else{ - char *zFullPath; - printf("REMOVE %s\n", zName); - undo_save(zName); - zFullPath = mprintf("%s/%s", g.zLocalRoot, zName); - if( !nochangeFlag ) unlink(zFullPath); - free(zFullPath); + fossil_print("REMOVE %s\n", zName); + if( !dryRunFlag && !internalUpdate ) undo_save(zName); + if( !dryRunFlag ) file_delete(zFullPath); } }else if( idt>0 && idv>0 && ridt!=ridv && chnged ){ /* Merge the changes in the current tree into the target version */ - Blob e, r, t, v; + Blob r, t, v; int rc; - printf("MERGE %s\n", zName); - undo_save(zName); - content_get(ridt, &t); - content_get(ridv, &v); - blob_zero(&e); - blob_read_from_file(&e, zFullPath); - rc = blob_merge(&v, &e, &t, &r); - if( rc>=0 ){ - if( !nochangeFlag ) blob_write_to_file(&r, zFullPath); - if( rc>0 ){ - printf("***** %d merge conflicts in %s\n", rc, zName); - } - }else{ - printf("***** Cannot merge binary file %s\n", zName); - } - blob_reset(&v); - blob_reset(&e); + if( nameChng ){ + fossil_print("MERGE %s -> %s\n", zName, zNewName); + }else{ + fossil_print("MERGE %s\n", zName); + } + if( islinkv || islinkt /* || file_wd_islink(zFullPath) */ ){ + fossil_print("***** Cannot merge symlink %s\n", zNewName); + nConflict++; + }else{ + unsigned mergeFlags = dryRunFlag ? MERGE_DRYRUN : 0; + if( !dryRunFlag && !internalUpdate ) undo_save(zName); + content_get(ridt, &t); + content_get(ridv, &v); + rc = merge_3way(&v, zFullPath, &t, &r, mergeFlags); + if( rc>=0 ){ + if( !dryRunFlag ){ + blob_write_to_file(&r, zFullNewPath); + file_wd_setexe(zFullNewPath, isexe); + } + if( rc>0 ){ + fossil_print("***** %d merge conflicts in %s\n", rc, zNewName); + nConflict++; + } + }else{ + if( !dryRunFlag ){ + blob_write_to_file(&t, zFullNewPath); + file_wd_setexe(zFullNewPath, isexe); + } + fossil_print("***** Cannot merge binary file %s\n", zNewName); + nConflict++; + } + } + if( nameChng && !dryRunFlag ) file_delete(zFullPath); + blob_reset(&v); blob_reset(&t); blob_reset(&r); - }else if( verboseFlag ){ - printf("UNCHANGED %s\n", zName); + }else{ + nUpdate--; + if( chnged ){ + if( verboseFlag ) fossil_print("EDITED %s\n", zName); + }else{ + db_bind_int(&mtimeXfer, ":idv", idv); + db_bind_int(&mtimeXfer, ":idt", idt); + db_step(&mtimeXfer); + db_reset(&mtimeXfer); + if( verboseFlag ) fossil_print("UNCHANGED %s\n", zName); + } } free(zFullPath); + free(zFullNewPath); } db_finalize(&q); - + db_finalize(&mtimeXfer); + fossil_print("%.79c\n",'-'); + if( nUpdate==0 ){ + show_common_info(tid, "checkout:", 1, 0); + fossil_print("%-13s None. Already up-to-date\n", "changes:"); + }else{ + show_common_info(tid, "updated-to:", 1, 0); + fossil_print("%-13s %d file%s modified.\n", "changes:", + nUpdate, nUpdate>1 ? 
"s" : ""); + } + + /* Report on conflicts + */ + if( !dryRunFlag ){ + Stmt q; + int nMerge = 0; + db_prepare(&q, "SELECT uuid, id FROM vmerge JOIN blob ON merge=rid" + " WHERE id<=0"); + while( db_step(&q)==SQLITE_ROW ){ + const char *zLabel = "merge"; + switch( db_column_int(&q, 1) ){ + case -1: zLabel = "cherrypick merge"; break; + case -2: zLabel = "backout merge"; break; + } + fossil_warning("uncommitted %s against %S.", + zLabel, db_column_text(&q, 0)); + nMerge++; + } + db_finalize(&q); + leaf_ambiguity_warning(tid, tid); + + if( nConflict ){ + if( internalUpdate ){ + internalConflictCnt = nConflict; + nConflict = 0; + }else{ + fossil_warning("WARNING: %d merge conflicts", nConflict); + } + } + if( nOverwrite ){ + fossil_warning("WARNING: %d unmanaged files were overwritten", + nOverwrite); + } + if( nMerge ){ + fossil_warning("WARNING: %d uncommitted prior merges", nMerge); + } + } + /* ** Clean up the mid and pid VFILE entries. Then commit the changes. */ - if( nochangeFlag ){ - db_end_transaction(1); /* With --nochange, rollback changes */ + if( dryRunFlag ){ + db_end_transaction(1); /* With --dry-run, rollback changes */ }else{ + ensure_empty_dirs_created(); if( g.argc<=3 ){ /* All files updated. Shift the current checkout to the target. */ db_multi_exec("DELETE FROM vfile WHERE vid!=%d", tid); + checkout_set_all_exe(tid); manifest_to_disk(tid); db_lset_int("checkout", tid); }else{ /* A subset of files have been checked out. Keep the current ** checkout unchanged. */ db_multi_exec("DELETE FROM vfile WHERE vid!=%d", vid); } - undo_finish(); + if( !internalUpdate ) undo_finish(); + if( setmtimeFlag ) vfile_check_signature(tid, CKSIG_SETMTIME); db_end_transaction(0); } } + +/* +** Create empty directories specified by the empty-dirs setting. +*/ +void ensure_empty_dirs_created(void){ + char *zEmptyDirs = db_get("empty-dirs", 0); + if( zEmptyDirs!=0 ){ + int i; + Blob dirName; + Blob dirsList; + + zEmptyDirs = fossil_strdup(zEmptyDirs); + for(i=0; zEmptyDirs[i]; i++){ + if( zEmptyDirs[i]==',' ) zEmptyDirs[i] = ' '; + } + blob_init(&dirsList, zEmptyDirs, -1); + while( blob_token(&dirsList, &dirName) ){ + char *zDir = blob_str(&dirName); + char *zPath = mprintf("%s/%s", g.zLocalRoot, zDir); + switch( file_wd_isdir(zPath) ){ + case 0: { /* doesn't exist */ + fossil_free(zPath); + zPath = mprintf("%s/%s/x", g.zLocalRoot, zDir); + if( file_mkfolder(zPath, 0, 1)!=0 ) { + fossil_warning("couldn't create directory %s as " + "required by empty-dirs setting", zDir); + } + break; + } + case 1: { /* exists, and is a directory */ + /* do nothing - required directory exists already */ + break; + } + case 2: { /* exists, but isn't a directory */ + fossil_warning("file %s found, but a directory is required " + "by empty-dirs setting", zDir); + } + } + fossil_free(zPath); + blob_reset(&dirName); + } + blob_reset(&dirsList); + fossil_free(zEmptyDirs); + } +} /* -** Get the contents of a file within a given revision. +** Get the contents of a file within the check-in "revision". If +** revision==NULL then get the file content for the current checkout. */ int historical_version_of_file( - const char *revision, /* The baseline name containing the file */ + const char *revision, /* The check-in containing the file */ const char *file, /* Full treename of the file */ Blob *content, /* Put the content here */ - int errCode /* Error code if file not found. Panic if 0. */ + int *pIsLink, /* Set to true if file is link. 
*/ + int *pIsExe, /* Set to true if file is executable */ + int *pIsBin, /* Set to true if file is binary */ + int errCode /* Error code if file not found. Panic if <= 0. */ ){ - Blob mfile; - Manifest m; - int i, rid=0; - + Manifest *pManifest; + ManifestFile *pFile; + int rid=0; + if( revision ){ - rid = name_to_rid(revision); + rid = name_to_typed_rid(revision,"ci"); + }else if( !g.localOpen ){ + rid = name_to_typed_rid(db_get("main-branch","trunk"),"ci"); }else{ rid = db_lget_int("checkout", 0); } if( !is_a_version(rid) ){ if( errCode>0 ) return errCode; - fossil_fatal("no such check-out: %s", revision); - } - content_get(rid, &mfile); - - if( manifest_parse(&m, &mfile) ){ - for(i=0; i<m.nFile; i++){ - if( strcmp(m.aFile[i].zName, file)==0 ){ - rid = uuid_to_rid(m.aFile[i].zUuid, 0); - return content_get(rid, content); - } - } - if( errCode<=0 ){ - fossil_fatal("file %s does not exist in baseline: %s", file, revision); + fossil_fatal("no such check-in: %s", revision); + } + pManifest = manifest_get(rid, CFTYPE_MANIFEST, 0); + + if( pManifest ){ + pFile = manifest_file_find(pManifest, file); + if( pFile ){ + int rc; + rid = uuid_to_rid(pFile->zUuid, 0); + if( pIsExe ) *pIsExe = ( manifest_file_mperm(pFile)==PERM_EXE ); + if( pIsLink ) *pIsLink = ( manifest_file_mperm(pFile)==PERM_LNK ); + manifest_destroy(pManifest); + rc = content_get(rid, content); + if( rc && pIsBin ){ + *pIsBin = looks_like_binary(content); + } + return rc; + } + manifest_destroy(pManifest); + if( errCode<=0 ){ + fossil_fatal("file %s does not exist in check-in: %s", file, revision); } }else if( errCode<=0 ){ - fossil_panic("could not parse manifest for baseline: %s", revision); + if( revision==0 ){ + revision = db_text("current", "SELECT uuid FROM blob WHERE rid=%d", rid); + } + fossil_fatal("could not parse manifest for check-in: %s", revision); } return errCode; } /* ** COMMAND: revert ** -** Usage: %fossil revert ?-r REVISION? FILE ... +** Usage: %fossil revert ?-r REVISION? ?FILE ...? ** ** Revert to the current repository version of FILE, or to ** the version associated with baseline REVISION if the -r flag ** appears. +** +** If FILE was part of a rename operation, both the original file +** and the renamed file are reverted. +** +** Revert all files if no file name is provided. ** ** If a file is reverted accidently, it can be restored using ** the "fossil undo" command. +** +** Options: +** -r REVISION revert given FILE(s) back to given REVISION +** +** See also: redo, undo, update */ void revert_cmd(void){ const char *zFile; const char *zRevision; Blob record; int i; int errCode; - int rid = 0; Stmt q; - + + undo_capture_command_line(); zRevision = find_option("revision", "r", 1); verify_all_options(); - + if( g.argc<2 ){ usage("?OPTIONS? 
[FILE] ..."); } if( zRevision && g.argc<3 ){ fossil_fatal("the --revision option does not work for the entire tree"); @@ -371,52 +738,93 @@ if( g.argc>2 ){ for(i=2; i<g.argc; i++){ Blob fname; zFile = mprintf("%/", g.argv[i]); - file_tree_name(zFile, &fname, 1); - db_multi_exec("REPLACE INTO torevert VALUES(%B)", &fname); + blob_zero(&fname); + file_tree_name(zFile, &fname, 0, 1); + db_multi_exec( + "REPLACE INTO torevert VALUES(%B);" + "INSERT OR IGNORE INTO torevert" + " SELECT pathname" + " FROM vfile" + " WHERE origname=%B;", + &fname, &fname + ); blob_reset(&fname); } }else{ int vid; vid = db_lget_int("checkout", 0); vfile_check_signature(vid, 0); db_multi_exec( - "INSERT INTO torevert " - "SELECT pathname" - " FROM vfile " - " WHERE chnged OR deleted OR rid=0 OR pathname!=origname" + "DELETE FROM vmerge;" + "INSERT OR IGNORE INTO torevert " + " SELECT pathname" + " FROM vfile " + " WHERE chnged OR deleted OR rid=0 OR pathname!=origname;" ); } + db_multi_exec( + "INSERT OR IGNORE INTO torevert" + " SELECT origname" + " FROM vfile" + " WHERE origname!=pathname AND pathname IN (SELECT name FROM torevert);" + ); blob_zero(&record); db_prepare(&q, "SELECT name FROM torevert"); + if( zRevision==0 ){ + int vid = db_lget_int("checkout", 0); + zRevision = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", vid); + } while( db_step(&q)==SQLITE_ROW ){ + int isExe = 0; + int isLink = 0; + char *zFull; zFile = db_column_text(&q, 0); - if( zRevision!=0 ){ - errCode = historical_version_of_file(zRevision, zFile, &record, 2); - }else{ - rid = db_int(0, "SELECT rid FROM vfile WHERE pathname=%Q", zFile); - if( rid==0 ){ - errCode = 2; - }else{ - content_get(rid, &record); - errCode = 0; - } - } - + zFull = mprintf("%/%/", g.zLocalRoot, zFile); + errCode = historical_version_of_file(zRevision, zFile, &record, + &isLink, &isExe, 0, 2); if( errCode==2 ){ - fossil_warning("file not in repository: %s", zFile); + if( db_int(0, "SELECT rid FROM vfile WHERE pathname=%Q OR origname=%Q", + zFile, zFile)==0 ){ + fossil_print("UNMANAGE %s\n", zFile); + }else{ + undo_save(zFile); + file_delete(zFull); + fossil_print("DELETE %s\n", zFile); + } + db_multi_exec( + "UPDATE OR REPLACE vfile" + " SET pathname=origname, origname=NULL" + " WHERE pathname=%Q AND origname!=pathname;" + "DELETE FROM vfile WHERE pathname=%Q", + zFile, zFile + ); }else{ - char *zFull = mprintf("%//%/", g.zLocalRoot, zFile); + sqlite3_int64 mtime; undo_save(zFile); - blob_write_to_file(&record, zFull); - printf("REVERTED: %s\n", zFile); - free(zFull); + if( file_wd_size(zFull)>=0 && (isLink || file_wd_islink(0)) ){ + file_delete(zFull); + } + if( isLink ){ + symlink_create(blob_str(&record), zFull); + }else{ + blob_write_to_file(&record, zFull); + } + file_wd_setexe(zFull, isExe); + fossil_print("REVERT %s\n", zFile); + mtime = file_wd_mtime(zFull); + db_multi_exec( + "UPDATE vfile" + " SET mtime=%lld, chnged=0, deleted=0, isexe=%d, islink=%d,mrid=rid" + " WHERE pathname=%Q OR origname=%Q", + mtime, isExe, isLink, zFile, zFile + ); } blob_reset(&record); + free(zFull); } db_finalize(&q); undo_finish(); db_end_transaction(0); - printf("\"fossil undo\" is available to undo the changes shown above.\n"); } Index: src/url.c ================================================================== --- src/url.c +++ src/url.c @@ -17,154 +17,341 @@ ** ** This file contains code for parsing URLs that appear on the command-line */ #include "config.h" #include "url.h" +#include <stdio.h> + +#ifdef _WIN32 +#include <io.h> +#ifndef isatty +#define isatty(d) 
_isatty(d) +#endif +#ifndef fileno +#define fileno(s) _fileno(s) +#endif +#endif + +#if INTERFACE +/* +** Flags for url_parse() +*/ +#define URL_PROMPT_PW 0x001 /* Prompt for password if needed */ +#define URL_REMEMBER 0x002 /* Remember the url for later reuse */ +#define URL_ASK_REMEMBER_PW 0x004 /* Ask whether to remember prompted pw */ +#define URL_REMEMBER_PW 0x008 /* Should remember pw */ +#define URL_PROMPTED 0x010 /* Prompted for PW already */ + +/* +** The URL related data used with this subsystem. +*/ +struct UrlData { + int isFile; /* True if a "file:" url */ + int isHttps; /* True if a "https:" url */ + int isSsh; /* True if an "ssh:" url */ + char *name; /* Hostname for http: or filename for file: */ + char *hostname; /* The HOST: parameter on http headers */ + char *protocol; /* "http" or "https" */ + int port; /* TCP port number for http: or https: */ + int dfltPort; /* The default port for the given protocol */ + char *path; /* Pathname for http: */ + char *user; /* User id for http: */ + char *passwd; /* Password for http: */ + char *canonical; /* Canonical representation of the URL */ + char *proxyAuth; /* Proxy-Authorizer: string */ + char *fossil; /* The fossil query parameter on ssh: */ + unsigned flags; /* Boolean flags controlling URL processing */ + int useProxy; /* Used to remember that a proxy is in use */ + char *proxyUrlPath; + int proxyOrigPort; /* Tunneled port number for https through proxy */ +}; +#endif /* INTERFACE */ + + +/* +** Convert a string to lower-case. +*/ +static void url_tolower(char *z){ + while( *z ){ + *z = fossil_tolower(*z); + z++; + } +} /* -** Parse the given URL. Populate variables in the global "g" structure. -** -** g.urlIsFile True if FILE: -** g.urlIsHttps True if HTTPS: -** g.urlProtocol "http" or "https" or "file" -** g.urlName Hostname for HTTP: or HTTPS:. Filename for FILE: -** g.urlPort TCP port number for HTTP or HTTPS. -** g.urlDfltPort Default TCP port number (80 or 443). -** g.urlPath Path name for HTTP or HTTPS. -** g.urlUser Userid. -** g.urlPasswd Password. -** g.urlHostname HOST:PORT or just HOST if port is the default. -** g.urlCanonical The URL in canonical form, omitting the password -** -** HTTP url format is: -** -** http://userid:password@host:port/path?query#fragment +** Parse the given URL. Populate members of the provided UrlData structure +** as follows: +** +** isFile True if FILE: +** isHttps True if HTTPS: +** isSsh True if SSH: +** protocol "http" or "https" or "file" +** name Hostname for HTTP:, HTTPS:, SSH:. Filename for FILE: +** port TCP port number for HTTP or HTTPS. +** dfltPort Default TCP port number (80 or 443). +** path Path name for HTTP or HTTPS. +** user Userid. +** passwd Password. +** hostname HOST:PORT or just HOST if port is the default. 
+** canonical The URL in canonical form, omitting the password ** */ -void url_parse(const char *zUrl){ +void url_parse_local( + const char *zUrl, + unsigned int urlFlags, + UrlData *pUrlData +){ int i, j, c; char *zFile = 0; - if( strncmp(zUrl, "http://", 7)==0 || strncmp(zUrl, "https://", 8)==0 ){ + + if( zUrl==0 ){ + zUrl = db_get("last-sync-url", 0); + if( zUrl==0 ) return; + if( pUrlData->passwd==0 ){ + pUrlData->passwd = unobscure(db_get("last-sync-pw", 0)); + } + } + + if( strncmp(zUrl, "http://", 7)==0 + || strncmp(zUrl, "https://", 8)==0 + || strncmp(zUrl, "ssh://", 6)==0 + ){ int iStart; char *zLogin; - g.urlIsFile = 0; + char *zExe; + char cQuerySep = '?'; + + pUrlData->isFile = 0; + pUrlData->useProxy = 0; if( zUrl[4]=='s' ){ - g.urlIsHttps = 1; - g.urlProtocol = "https"; - g.urlDfltPort = 443; + pUrlData->isHttps = 1; + pUrlData->protocol = "https"; + pUrlData->dfltPort = 443; iStart = 8; + }else if( zUrl[0]=='s' ){ + pUrlData->isSsh = 1; + pUrlData->protocol = "ssh"; + pUrlData->dfltPort = 22; + pUrlData->fossil = "fossil"; + iStart = 6; }else{ - g.urlIsHttps = 0; - g.urlProtocol = "http"; - g.urlDfltPort = 80; + pUrlData->isHttps = 0; + pUrlData->protocol = "http"; + pUrlData->dfltPort = 80; iStart = 7; } for(i=iStart; (c=zUrl[i])!=0 && c!='/' && c!='@'; i++){} if( c=='@' ){ + /* Parse up the user-id and password */ for(j=iStart; j<i && zUrl[j]!=':'; j++){} - g.urlUser = mprintf("%.*s", j-iStart, &zUrl[iStart]); - dehttpize(g.urlUser); + pUrlData->user = mprintf("%.*s", j-iStart, &zUrl[iStart]); + dehttpize(pUrlData->user); if( j<i ){ - g.urlPasswd = mprintf("%.*s", i-j-1, &zUrl[j+1]); - dehttpize(g.urlPasswd); + if( ( urlFlags & URL_REMEMBER ) && pUrlData->isSsh==0 ){ + urlFlags |= URL_ASK_REMEMBER_PW; + } + pUrlData->passwd = mprintf("%.*s", i-j-1, &zUrl[j+1]); + dehttpize(pUrlData->passwd); + } + if( pUrlData->isSsh ){ + urlFlags &= ~URL_ASK_REMEMBER_PW; } + zLogin = mprintf("%t@", pUrlData->user); for(j=i+1; (c=zUrl[j])!=0 && c!='/' && c!=':'; j++){} - g.urlName = mprintf("%.*s", j-i-1, &zUrl[i+1]); + pUrlData->name = mprintf("%.*s", j-i-1, &zUrl[i+1]); i = j; - zLogin = mprintf("%t@", g.urlUser); }else{ - for(i=iStart; (c=zUrl[i])!=0 && c!='/' && c!=':'; i++){} - g.urlName = mprintf("%.*s", i-iStart, &zUrl[iStart]); + int inSquare = 0; + int n; + for(i=iStart; (c=zUrl[i])!=0 && c!='/' && (inSquare || c!=':'); i++){ + if( c=='[' ) inSquare = 1; + if( c==']' ) inSquare = 0; + } + pUrlData->name = mprintf("%.*s", i-iStart, &zUrl[iStart]); + n = strlen(pUrlData->name); + if( pUrlData->name[0]=='[' && n>2 && pUrlData->name[n-1]==']' ){ + pUrlData->name++; + pUrlData->name[n-2] = 0; + } zLogin = mprintf(""); } - for(j=0; g.urlName[j]; j++){ g.urlName[j] = tolower(g.urlName[j]); } - if( c==':' ){ - g.urlPort = 0; - i++; - while( (c = zUrl[i])!=0 && isdigit(c) ){ - g.urlPort = g.urlPort*10 + c - '0'; - i++; - } - g.urlHostname = mprintf("%s:%d", g.urlName, g.urlPort); - }else{ - g.urlPort = g.urlDfltPort; - g.urlHostname = g.urlName; - } - g.urlPath = mprintf(&zUrl[i]); - dehttpize(g.urlName); - dehttpize(g.urlPath); - if( g.urlDfltPort==g.urlPort ){ - g.urlCanonical = mprintf( - "%s://%s%T%T", - g.urlProtocol, zLogin, g.urlName, g.urlPath - ); - }else{ - g.urlCanonical = mprintf( - "%s://%s%T:%d%T", - g.urlProtocol, zLogin, g.urlName, g.urlPort, g.urlPath - ); - } - free(zLogin); - }else if( strncmp(zUrl, "file:", 5)==0 ){ - g.urlIsFile = 1; + url_tolower(pUrlData->name); + if( c==':' ){ + pUrlData->port = 0; + i++; + while( (c = zUrl[i])!=0 && fossil_isdigit(c) ){ + 
pUrlData->port = pUrlData->port*10 + c - '0'; + i++; + } + pUrlData->hostname = mprintf("%s:%d", pUrlData->name, pUrlData->port); + }else{ + pUrlData->port = pUrlData->dfltPort; + pUrlData->hostname = pUrlData->name; + } + dehttpize(pUrlData->name); + pUrlData->path = mprintf("%s", &zUrl[i]); + for(i=0; pUrlData->path[i] && pUrlData->path[i]!='?'; i++){} + if( pUrlData->path[i] ){ + pUrlData->path[i] = 0; + i++; + } + zExe = mprintf(""); + while( pUrlData->path[i]!=0 ){ + char *zName, *zValue; + zName = &pUrlData->path[i]; + zValue = zName; + while( pUrlData->path[i] && pUrlData->path[i]!='=' ){ i++; } + if( pUrlData->path[i]=='=' ){ + pUrlData->path[i] = 0; + i++; + zValue = &pUrlData->path[i]; + while( pUrlData->path[i] && pUrlData->path[i]!='&' ){ i++; } + } + if( pUrlData->path[i] ){ + pUrlData->path[i] = 0; + i++; + } + if( fossil_strcmp(zName,"fossil")==0 ){ + pUrlData->fossil = zValue; + dehttpize(pUrlData->fossil); + zExe = mprintf("%cfossil=%T", cQuerySep, pUrlData->fossil); + cQuerySep = '&'; + } + } + + dehttpize(pUrlData->path); + if( pUrlData->dfltPort==pUrlData->port ){ + pUrlData->canonical = mprintf( + "%s://%s%T%T%s", + pUrlData->protocol, zLogin, pUrlData->name, pUrlData->path, zExe + ); + }else{ + pUrlData->canonical = mprintf( + "%s://%s%T:%d%T%s", + pUrlData->protocol, zLogin, pUrlData->name, pUrlData->port, + pUrlData->path, zExe + ); + } + if( pUrlData->isSsh && pUrlData->path[1] ) pUrlData->path++; + free(zLogin); + }else if( strncmp(zUrl, "file:", 5)==0 ){ + pUrlData->isFile = 1; if( zUrl[5]=='/' && zUrl[6]=='/' ){ i = 7; }else{ i = 5; } zFile = mprintf("%s", &zUrl[i]); }else if( file_isfile(zUrl) ){ - g.urlIsFile = 1; + pUrlData->isFile = 1; zFile = mprintf("%s", zUrl); }else if( file_isdir(zUrl)==1 ){ zFile = mprintf("%s/FOSSIL", zUrl); if( file_isfile(zFile) ){ - g.urlIsFile = 1; - }else{ - free(zFile); - fossil_panic("unknown repository: %s", zUrl); - } - }else{ - fossil_panic("unknown repository: %s", zUrl); - } - if( g.urlIsFile ){ - Blob cfile; - dehttpize(zFile); - file_canonical_name(zFile, &cfile); - free(zFile); - g.urlProtocol = "file"; - g.urlPath = ""; - g.urlName = mprintf("%b", &cfile); - g.urlCanonical = mprintf("file://%T", g.urlName); - blob_reset(&cfile); - } + pUrlData->isFile = 1; + }else{ + free(zFile); + zFile = 0; + fossil_fatal("unknown repository: %s", zUrl); + } + }else{ + fossil_fatal("unknown repository: %s", zUrl); + } + if( urlFlags ) pUrlData->flags = urlFlags; + if( pUrlData->isFile ){ + Blob cfile; + dehttpize(zFile); + file_canonical_name(zFile, &cfile, 0); + free(zFile); + zFile = 0; + pUrlData->protocol = "file"; + pUrlData->path = ""; + pUrlData->name = mprintf("%b", &cfile); + pUrlData->canonical = mprintf("file://%T", pUrlData->name); + blob_reset(&cfile); + }else if( pUrlData->user!=0 && pUrlData->passwd==0 && (urlFlags & URL_PROMPT_PW) ){ + url_prompt_for_password_local(pUrlData); + }else if( pUrlData->user!=0 && ( urlFlags & URL_ASK_REMEMBER_PW ) ){ + if( isatty(fileno(stdin)) ){ + if( save_password_prompt(pUrlData->passwd) ){ + pUrlData->flags = urlFlags |= URL_REMEMBER_PW; + }else{ + pUrlData->flags = urlFlags &= ~URL_REMEMBER_PW; + } + } + } +} + +/* +** Parse the given URL, which describes a sync server. Populate variables +** in the global "g" structure as follows: +** +** g.url.isFile True if FILE: +** g.url.isHttps True if HTTPS: +** g.url.isSsh True if SSH: +** g.url.protocol "http" or "https" or "file" +** g.url.name Hostname for HTTP:, HTTPS:, SSH:. 
Filename for FILE: +** g.url.port TCP port number for HTTP or HTTPS. +** g.url.dfltPort Default TCP port number (80 or 443). +** g.url.path Path name for HTTP or HTTPS. +** g.url.user Userid. +** g.url.passwd Password. +** g.url.hostname HOST:PORT or just HOST if port is the default. +** g.url.canonical The URL in canonical form, omitting the password +** +** HTTP url format as follows (HTTPS is the same with a different scheme): +** +** http://userid:password@host:port/path +** +** SSH url format is: +** +** ssh://userid@host:port/path?fossil=path/to/fossil.exe +** +*/ +void url_parse(const char *zUrl, unsigned int urlFlags){ + url_parse_local(zUrl, urlFlags, &g.url); } /* ** COMMAND: test-urlparser +** +** Usage: %fossil test-urlparser URL ?options? +** +** --remember Store results in last-sync-url +** --prompt-pw Prompt for password if missing */ void cmd_test_urlparser(void){ int i; + unsigned fg = 0; url_proxy_options(); + if( find_option("remember",0,0) ){ + db_must_be_within_tree(); + fg |= URL_REMEMBER; + } + if( find_option("prompt-pw",0,0) ) fg |= URL_PROMPT_PW; if( g.argc!=3 && g.argc!=4 ){ usage("URL"); } - url_parse(g.argv[2]); + url_parse(g.argv[2], fg); for(i=0; i<2; i++){ - printf("g.urlIsFile = %d\n", g.urlIsFile); - printf("g.urlIsHttps = %d\n", g.urlIsHttps); - printf("g.urlProtocol = %s\n", g.urlProtocol); - printf("g.urlName = %s\n", g.urlName); - printf("g.urlPort = %d\n", g.urlPort); - printf("g.urlDfltPort = %d\n", g.urlDfltPort); - printf("g.urlHostname = %s\n", g.urlHostname); - printf("g.urlPath = %s\n", g.urlPath); - printf("g.urlUser = %s\n", g.urlUser); - printf("g.urlPasswd = %s\n", g.urlPasswd); - printf("g.urlCanonical = %s\n", g.urlCanonical); + fossil_print("g.url.isFile = %d\n", g.url.isFile); + fossil_print("g.url.isHttps = %d\n", g.url.isHttps); + fossil_print("g.url.isSsh = %d\n", g.url.isSsh); + fossil_print("g.url.protocol = %s\n", g.url.protocol); + fossil_print("g.url.name = %s\n", g.url.name); + fossil_print("g.url.port = %d\n", g.url.port); + fossil_print("g.url.dfltPort = %d\n", g.url.dfltPort); + fossil_print("g.url.hostname = %s\n", g.url.hostname); + fossil_print("g.url.path = %s\n", g.url.path); + fossil_print("g.url.user = %s\n", g.url.user); + fossil_print("g.url.passwd = %s\n", g.url.passwd); + fossil_print("g.url.canonical = %s\n", g.url.canonical); + fossil_print("g.url.fossil = %s\n", g.url.fossil); + fossil_print("g.url.flags = 0x%02x\n", g.url.flags); + if( g.url.isFile || g.url.isSsh ) break; if( i==0 ){ - printf("********\n"); + fossil_print("********\n"); url_enable_proxy("Using proxy: "); } } } @@ -185,10 +372,11 @@ ** feature. 
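
As a concrete illustration of the field breakdown documented above, a password-free URL such as http://alice@www.example.com:8080/cgi-bin/repo (user, host and path invented for this example) is split up as:

    g.url.protocol  = "http"
    g.url.user      = "alice"
    g.url.name      = "www.example.com"
    g.url.port      = 8080            (g.url.dfltPort = 80)
    g.url.hostname  = "www.example.com:8080"
    g.url.path      = "/cgi-bin/repo"
    g.url.canonical = "http://alice@www.example.com:8080/cgi-bin/repo"

The test-urlparser command above can be used to confirm the breakdown for any URL of interest.
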
*/ void url_proxy_options(void){ zProxyOpt = find_option("proxy", 0, 1); if( find_option("nosync",0,0) ) g.fNoSync = 1; + if( find_option("ipv4",0,0) ) g.fIPv4 = 1; } /* ** If the "proxy" setting is defined, then change the URL settings ** (initialized by a prior call to url_parse()) so that the HTTP @@ -201,33 +389,43 @@ void url_enable_proxy(const char *zMsg){ const char *zProxy; zProxy = zProxyOpt; if( zProxy==0 ){ zProxy = db_get("proxy", 0); - if( zProxy==0 || zProxy[0]==0 || is_truth(zProxy) ){ - zProxy = getenv("http_proxy"); - } - } - if( zProxy && zProxy[0] && !is_false(zProxy) ){ - char *zOriginalUrl = g.urlCanonical; - char *zOriginalHost = g.urlHostname; - char *zOriginalUser = g.urlUser; - char *zOriginalPasswd = g.urlPasswd; - g.urlUser = 0; - g.urlPasswd = ""; - url_parse(zProxy); - if( zMsg ) printf("%s%s\n", zMsg, g.urlCanonical); - g.urlPath = zOriginalUrl; - g.urlHostname = zOriginalHost; - if( g.urlUser ){ - char *zCredentials1 = mprintf("%s:%s", g.urlUser, g.urlPasswd); + if( zProxy==0 || zProxy[0]==0 || is_false(zProxy) ){ + zProxy = fossil_getenv("http_proxy"); + } + } + if( zProxy && zProxy[0] && !is_false(zProxy) + && !g.url.isSsh && !g.url.isFile ){ + char *zOriginalUrl = g.url.canonical; + char *zOriginalHost = g.url.hostname; + int fOriginalIsHttps = g.url.isHttps; + char *zOriginalUser = g.url.user; + char *zOriginalPasswd = g.url.passwd; + char *zOriginalUrlPath = g.url.path; + int iOriginalPort = g.url.port; + unsigned uOriginalFlags = g.url.flags; + g.url.user = 0; + g.url.passwd = ""; + url_parse(zProxy, 0); + if( zMsg ) fossil_print("%s%s\n", zMsg, g.url.canonical); + g.url.path = zOriginalUrl; + g.url.hostname = zOriginalHost; + if( g.url.user ){ + char *zCredentials1 = mprintf("%s:%s", g.url.user, g.url.passwd); char *zCredentials2 = encode64(zCredentials1, -1); - g.urlProxyAuth = mprintf("Basic %z", zCredentials2); + g.url.proxyAuth = mprintf("Basic %z", zCredentials2); free(zCredentials1); } - g.urlUser = zOriginalUser; - g.urlPasswd = zOriginalPasswd; + g.url.user = zOriginalUser; + g.url.passwd = zOriginalPasswd; + g.url.isHttps = fOriginalIsHttps; + g.url.useProxy = 1; + g.url.proxyUrlPath = zOriginalUrlPath; + g.url.proxyOrigPort = iOriginalPort; + g.url.flags = uOriginalFlags; } } #if INTERFACE /* @@ -234,38 +432,73 @@ ** An instance of this object is used to build a URL with query parameters. */ struct HQuery { Blob url; /* The URL */ const char *zBase; /* The base URL */ - int nParam; /* Number of parameters. Max 10 */ - const char *azName[10]; /* Parameter names */ - const char *azValue[10]; /* Parameter values */ + int nParam; /* Number of parameters. */ + int nAlloc; /* Number of allocated slots */ + const char **azName; /* Parameter names */ + const char **azValue; /* Parameter values */ }; #endif /* ** Initialize the URL object. */ void url_initialize(HQuery *p, const char *zBase){ + memset(p, 0, sizeof(*p)); blob_zero(&p->url); p->zBase = zBase; - p->nParam = 0; +} + +/* +** Resets the given URL object, deallocating any memory +** it uses. +*/ +void url_reset(HQuery *p){ + blob_reset(&p->url); + fossil_free((void *)p->azName); + fossil_free((void *)p->azValue); + url_initialize(p, p->zBase); } /* -** Add a fixed parameter to an HQuery. +** Add a fixed parameter to an HQuery. Or remove the parameters if zValue==0. 
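
A short usage fragment for this small API (the page name and parameter names are invented, and the surrounding page code is omitted):

    /* Build a base URL with two fixed query parameters, render it as-is
    ** and once more with "n" overridden, then release the memory. */
    HQuery url;
    url_initialize(&url, "timeline");
    url_add_parameter(&url, "y", "ci");
    url_add_parameter(&url, "n", "50");
    fossil_print("%s\n", url_render(&url, 0, 0, 0, 0));       /* .../timeline?y=ci&n=50  */
    fossil_print("%s\n", url_render(&url, "n", "200", 0, 0)); /* .../timeline?y=ci&n=200 */
    url_add_parameter(&url, "n", 0);   /* zValue==0 removes the parameter again */
    url_reset(&url);

Compared with the earlier fixed array of ten slots, the reallocating azName/azValue arrays let a caller add any number of parameters, and url_reset() gives it a way to release that memory.
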
*/ void url_add_parameter(HQuery *p, const char *zName, const char *zValue){ - assert( p->nParam < count(p->azName) ); - assert( p->nParam < count(p->azValue) ); - p->azName[p->nParam] = zName; - p->azValue[p->nParam] = zValue; + int i; + for(i=0; i<p->nParam; i++){ + if( fossil_strcmp(p->azName[i],zName)==0 ){ + if( zValue==0 ){ + p->nParam--; + p->azValue[i] = p->azValue[p->nParam]; + p->azName[i] = p->azName[p->nParam]; + }else{ + p->azValue[i] = zValue; + } + return; + } + } + assert( i==p->nParam ); + if( zValue==0 ) return; + if( i>=p->nAlloc ){ + p->nAlloc = p->nAlloc*2 + 10; + p->azName = fossil_realloc((void *)p->azName, + sizeof(p->azName[0])*p->nAlloc); + p->azValue = fossil_realloc((void *)p->azValue, + sizeof(p->azValue[0])*p->nAlloc); + } + p->azName[i] = zName; + p->azValue[i] = zValue; p->nParam++; } /* ** Render the URL with a parameter override. +** +** Returned memory is transient and is overwritten on the next call to this +** routine for the same HQuery, or until the HQuery object is destroyed. */ char *url_render( HQuery *p, /* Base URL */ const char *zName1, /* First override */ const char *zValue1, /* First override value */ @@ -272,49 +505,94 @@ const char *zName2, /* Second override */ const char *zValue2 /* Second override value */ ){ const char *zSep = "?"; int i; - + blob_reset(&p->url); - blob_appendf(&p->url, "%s/%s", g.zBaseURL, p->zBase); + blob_appendf(&p->url, "%s/%s", g.zTop, p->zBase); for(i=0; i<p->nParam; i++){ const char *z = p->azValue[i]; - if( zName1 && strcmp(zName1,p->azName[i])==0 ){ + if( zName1 && fossil_strcmp(zName1,p->azName[i])==0 ){ zName1 = 0; z = zValue1; if( z==0 ) continue; } - if( zName2 && strcmp(zName2,p->azName[i])==0 ){ + if( zName2 && fossil_strcmp(zName2,p->azName[i])==0 ){ zName2 = 0; z = zValue2; if( z==0 ) continue; } - blob_appendf(&p->url, "%s%s=%T", zSep, p->azName[i], z); + blob_appendf(&p->url, "%s%s", zSep, p->azName[i]); + if( z && z[0] ) blob_appendf(&p->url, "=%T", z); zSep = "&"; } if( zName1 && zValue1 ){ - blob_appendf(&p->url, "%s%s=%T", zSep, zName1, zValue1); + blob_appendf(&p->url, "%s%s", zSep, zName1); + if( zValue1[0] ) blob_appendf(&p->url, "=%T", zValue1); } if( zName2 && zValue2 ){ - blob_appendf(&p->url, "%s%s=%T", zSep, zName2, zValue2); + blob_appendf(&p->url, "%s%s", zSep, zName2); + if( zValue2[0] ) blob_appendf(&p->url, "=%T", zValue2); } return blob_str(&p->url); } /* -** Prompt the user for the password for g.urlUser. Store the result -** in g.urlPasswd. +** Prompt the user for the password that corresponds to the "user" member of +** the provided UrlData structure. Store the result into the "passwd" member +** of the provided UrlData structure. 
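
Whether this prompt fires at all is governed by the flag bits defined near the top of this file: URL_PROMPT_PW requests the prompt, URL_PROMPTED suppresses a second prompt for the same UrlData, and URL_REMEMBER together with URL_REMEMBER_PW controls whether url_remember() later stores the URL and the obscured password. A hypothetical fragment showing how a sync-style caller might combine them (the example URL is invented):

    /* Parse the server URL, prompting for a password if the URL names a
    ** user but carries no password, and offer to remember the answer. */
    url_parse("https://alice@example.org/repo", URL_PROMPT_PW|URL_REMEMBER);
    /* ... network traffic against g.url happens here ... */
    url_remember();   /* persists last-sync-url, plus last-sync-pw when
                      ** URL_REMEMBER_PW ended up set */
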
*/ -void url_prompt_for_password(void){ - if( isatty(fileno(stdin)) ){ - char *zPrompt = mprintf("password for %s: ", g.urlUser); - Blob x; - prompt_for_password(zPrompt, &x, 0); - free(zPrompt); - g.urlPasswd = mprintf("%b", &x); - blob_reset(&x); +void url_prompt_for_password_local(UrlData *pUrlData){ + if( pUrlData->isSsh || pUrlData->isFile ) return; + if( isatty(fileno(stdin)) + && (pUrlData->flags & URL_PROMPT_PW)!=0 + && (pUrlData->flags & URL_PROMPTED)==0 + ){ + pUrlData->flags |= URL_PROMPTED; + pUrlData->passwd = prompt_for_user_password(pUrlData->user); + if( pUrlData->passwd[0] + && (pUrlData->flags & (URL_REMEMBER|URL_ASK_REMEMBER_PW))!=0 + ){ + if( save_password_prompt(pUrlData->passwd) ){ + pUrlData->flags |= URL_REMEMBER_PW; + }else{ + pUrlData->flags &= ~URL_REMEMBER_PW; + } + } }else{ fossil_fatal("missing or incorrect password for user \"%s\"", - g.urlUser); + pUrlData->user); + } +} + +/* +** Prompt the user for the password for g.url.user. Store the result +** in g.url.passwd. +*/ +void url_prompt_for_password(void){ + url_prompt_for_password_local(&g.url); +} + +/* +** Remember the URL and password if requested. +*/ +void url_remember(void){ + if( g.url.flags & URL_REMEMBER ){ + db_set("last-sync-url", g.url.canonical, 0); + if( g.url.user!=0 && g.url.passwd!=0 && ( g.url.flags & URL_REMEMBER_PW ) ){ + db_set("last-sync-pw", obscure(g.url.passwd), 0); + } + } +} + +/* Preemptively prompt for a password if a username is given in the +** URL but no password. +*/ +void url_get_password_if_needed(void){ + if( (g.url.user && g.url.user[0]) + && (g.url.passwd==0 || g.url.passwd[0]==0) + && isatty(fileno(stdin)) + ){ + url_prompt_for_password(); } } Index: src/user.c ================================================================== --- src/user.c +++ src/user.c @@ -27,34 +27,41 @@ ** onto the end of a blob. */ static void strip_string(Blob *pBlob, char *z){ int i; blob_reset(pBlob); - while( isspace(*z) ){ z++; } + while( fossil_isspace(*z) ){ z++; } for(i=0; z[i]; i++){ if( z[i]=='\r' || z[i]=='\n' ){ - while( i>0 && isspace(z[i-1]) ){ i--; } + while( i>0 && fossil_isspace(z[i-1]) ){ i--; } z[i] = 0; break; } - if( z[i]<' ' ) z[i] = ' '; + if( z[i]>0 && z[i]<' ' ) z[i] = ' '; } blob_append(pBlob, z, -1); } +#if defined(_WIN32) || defined(__BIONIC__) #ifdef __MINGW32__ +#include <conio.h> +#endif /* -** getpass for Windows +** getpass for Windows and Android */ static char *getpass(const char *prompt){ static char pwd[64]; size_t i; fputs(prompt,stderr); fflush(stderr); for(i=0; i<sizeof(pwd)-1; ++i){ +#if defined(_WIN32) pwd[i] = _getch(); +#else + pwd[i] = getc(stdin); +#endif if(pwd[i]=='\r' || pwd[i]=='\n'){ break; } /* BS or DEL */ else if(i>0 && (pwd[i]==8 || pwd[i]==127)){ @@ -112,38 +119,72 @@ blob_zero(&secondTry); while(1){ prompt_for_passphrase(zPrompt, pPassphrase); if( verify==0 ) break; if( verify==1 && blob_size(pPassphrase)==0 ) break; - prompt_for_passphrase("Again: ", &secondTry); + prompt_for_passphrase("Retype new password: ", &secondTry); if( blob_compare(pPassphrase, &secondTry) ){ - printf("Passphrases do not match. Try again...\n"); + fossil_print("Passphrases do not match. Try again...\n"); }else{ break; } } blob_reset(&secondTry); } + +/* +** Prompt to save Fossil user password +*/ +int save_password_prompt(const char *passwd){ + Blob x; + char c; + const char *old = db_get("last-sync-pw", 0); + if( (old!=0) && fossil_strcmp(unobscure(old), passwd)==0 ){ + return 0; + } + prompt_user("remember password (Y/n)? 
", &x); + c = blob_str(&x)[0]; + blob_reset(&x); + return ( c!='n' && c!='N' ); +} + +/* +** Prompt for Fossil user password +*/ +char *prompt_for_user_password(const char *zUser){ + char *zPrompt = mprintf("\rpassword for %s: ", zUser); + char *zPw; + Blob x; + fossil_force_newline(); + prompt_for_password(zPrompt, &x, 0); + free(zPrompt); + zPw = mprintf("%b", &x); + blob_reset(&x); + return zPw; +} /* ** Prompt the user to enter a single line of text. */ void prompt_user(const char *zPrompt, Blob *pIn){ char *z; char zLine[1000]; blob_zero(pIn); - printf("%s", zPrompt); + fossil_force_newline(); + fossil_print("%s", zPrompt); fflush(stdout); z = fgets(zLine, sizeof(zLine), stdin); if( z ){ + int n = (int)strlen(z); + if( n>0 && z[n-1]=='\n' ) fossil_new_line_started(); strip_string(pIn, z); } } /* -** COMMAND: user +** COMMAND: user* ** ** Usage: %fossil user SUBCOMMAND ... ?-R|--repository FILE? ** ** Run various subcommands on users of the open repository or of ** the repository identified by the -R or --repository option. @@ -156,10 +197,11 @@ ** ** Query or set the default user. The default user is the ** user for command-line interaction. ** ** %fossil user list +** %fossil user ls ** ** List all users known to the repository ** ** %fossil user new ?USERNAME? ?CONTACT-INFO? ?PASSWORD? ** @@ -171,18 +213,19 @@ ** ** Change the web access password for a user. */ void user_cmd(void){ int n; - db_find_and_open_repository(1); + db_find_and_open_repository(0, 0); if( g.argc<3 ){ usage("capabilities|default|list|new|password ..."); } n = strlen(g.argv[2]); if( n>=2 && strncmp(g.argv[2],"new",n)==0 ){ - Blob passwd, login, contact; + Blob passwd, login, caps, contact; char *zPw; + blob_init(&caps, db_get("default-perms", "u"), -1); if( g.argc>=4 ){ blob_init(&login, g.argv[3], -1); }else{ prompt_user("login: ", &login); @@ -198,21 +241,21 @@ if( g.argc>=6 ){ blob_init(&passwd, g.argv[5], -1); }else{ prompt_for_password("password: ", &passwd, 1); } - zPw = sha1_shared_secret(blob_str(&passwd), blob_str(&login)); + zPw = sha1_shared_secret(blob_str(&passwd), blob_str(&login), 0); db_multi_exec( - "INSERT INTO user(login,pw,cap,info)" - "VALUES(%B,%Q,'v',%B)", - &login, zPw, &contact + "INSERT INTO user(login,pw,cap,info,mtime)" + "VALUES(%B,%Q,%B,%B,now())", + &login, zPw, &caps, &contact ); free(zPw); }else if( n>=2 && strncmp(g.argv[2],"default",n)==0 ){ - user_select(); if( g.argc==3 ){ - printf("%s\n", g.zLogin); + user_select(); + fossil_print("%s\n", g.zLogin); }else{ if( !db_exists("SELECT 1 FROM user WHERE login=%Q", g.argv[3]) ){ fossil_fatal("no such user: %s", g.argv[3]); } if( g.localOpen ){ @@ -219,15 +262,15 @@ db_lset("default-user", g.argv[3]); }else{ db_set("default-user", g.argv[3], 0); } } - }else if( n>=2 && strncmp(g.argv[2],"list",n)==0 ){ + }else if(( n>=2 && strncmp(g.argv[2],"list",n)==0 ) || ( n>=2 && strncmp(g.argv[2],"ls",n)==0 )){ Stmt q; db_prepare(&q, "SELECT login, info FROM user ORDER BY login"); while( db_step(&q)==SQLITE_ROW ){ - printf("%-12s %s\n", db_column_text(&q, 0), db_column_text(&q, 1)); + fossil_print("%-12s %s\n", db_column_text(&q, 0), db_column_text(&q, 1)); } db_finalize(&q); }else if( n>=2 && strncmp(g.argv[2],"password",2)==0 ){ char *zPrompt; int uid; @@ -238,38 +281,39 @@ fossil_fatal("no such user: %s", g.argv[3]); } if( g.argc==5 ){ blob_init(&pw, g.argv[4], -1); }else{ - zPrompt = mprintf("new passwd for %s: ", g.argv[3]); + zPrompt = mprintf("New password for %s: ", g.argv[3]); prompt_for_password(zPrompt, &pw, 1); } if( 
blob_size(&pw)==0 ){ - printf("password unchanged\n"); + fossil_print("password unchanged\n"); }else{ - char *zSecret = sha1_shared_secret(blob_str(&pw), g.argv[3]); - db_multi_exec("UPDATE user SET pw=%Q WHERE uid=%d", zSecret, uid); + char *zSecret = sha1_shared_secret(blob_str(&pw), g.argv[3], 0); + db_multi_exec("UPDATE user SET pw=%Q, mtime=now() WHERE uid=%d", + zSecret, uid); free(zSecret); } }else if( n>=2 && strncmp(g.argv[2],"capabilities",2)==0 ){ int uid; if( g.argc!=4 && g.argc!=5 ){ - usage("user capabilities USERNAME ?PERMISSIONS?"); + usage("capabilities USERNAME ?PERMISSIONS?"); } uid = db_int(0, "SELECT uid FROM user WHERE login=%Q", g.argv[3]); if( uid==0 ){ fossil_fatal("no such user: %s", g.argv[3]); } if( g.argc==5 ){ db_multi_exec( - "UPDATE user SET cap=%Q WHERE uid=%d", g.argv[4], - uid + "UPDATE user SET cap=%Q, mtime=now() WHERE uid=%d", + g.argv[4], uid ); } - printf("%s\n", db_text(0, "SELECT cap FROM user WHERE uid=%d", uid)); + fossil_print("%s\n", db_text(0, "SELECT cap FROM user WHERE uid=%d", uid)); }else{ - fossil_panic("user subcommand should be one of: " + fossil_fatal("user subcommand should be one of: " "capabilities default list new password"); } } /* @@ -297,74 +341,55 @@ ** ** (2) If the local database is open, check in VVAR. ** ** (3) Check the default user in the repository ** -** (4) Try the USER environment variable. +** (4) Try the FOSSIL_USER environment variable. +** +** (5) Try the USER environment variable. +** +** (6) Try the LOGNAME environment variable. +** +** (7) Try the USERNAME environment variable. ** -** (5) Use the first user in the USER table. +** (8) Check if the user can be extracted from the remote URL. ** ** The user name is stored in g.zLogin. The uid is in g.userUid. */ void user_select(void){ - Stmt s; - if( g.userUid ) return; - if( attempt_user(g.zLogin) ) return; + if( g.zLogin ){ + if( attempt_user(g.zLogin)==0 ){ + fossil_fatal("no such user: %s", g.zLogin); + }else{ + return; + } + } if( g.localOpen && attempt_user(db_lget("default-user",0)) ) return; if( attempt_user(db_get("default-user", 0)) ) return; - if( attempt_user(getenv("USER")) ) return; - - db_prepare(&s, - "SELECT uid, login FROM user" - " WHERE login NOT IN ('anonymous','nobody','reader','developer')" - ); - if( db_step(&s)==SQLITE_ROW ){ - g.userUid = db_column_int(&s, 0); - g.zLogin = mprintf("%s", db_column_text(&s, 1)); - } - db_finalize(&s); - - if( g.userUid==0 ){ - db_prepare(&s, "SELECT uid, login FROM user"); - if( db_step(&s)==SQLITE_ROW ){ - g.userUid = db_column_int(&s, 0); - g.zLogin = mprintf("%s", db_column_text(&s, 1)); - } - db_finalize(&s); - } - - if( g.userUid==0 ){ - db_multi_exec( - "INSERT INTO user(login, pw, cap, info)" - "VALUES('anonymous', '', 'cfghjkmnoqw', '')" - ); - g.userUid = db_last_insert_rowid(); - g.zLogin = "anonymous"; - } -} - -/* -** Compute the shared secret for a user. 
-*/ -static void user_sha1_shared_secret_func( - sqlite3_context *context, - int argc, - sqlite3_value **argv -){ - char *zPw; - char *zLogin; - assert( argc==2 ); - zPw = (char*)sqlite3_value_text(argv[0]); - zLogin = (char*)sqlite3_value_text(argv[1]); - if( zPw && zLogin ){ - sqlite3_result_text(context, sha1_shared_secret(zPw, zLogin), -1, free); - } -} + if( attempt_user(fossil_getenv("FOSSIL_USER")) ) return; + + if( attempt_user(fossil_getenv("USER")) ) return; + + if( attempt_user(fossil_getenv("LOGNAME")) ) return; + + if( attempt_user(fossil_getenv("USERNAME")) ) return; + + url_parse(0, 0); + if( g.url.user && attempt_user(g.url.user) ) return; + + fossil_print( + "Cannot figure out who you are! Consider using the --user\n" + "command line option, setting your USER environment variable,\n" + "or setting a default user with \"fossil user default USER\".\n" + ); + fossil_fatal("cannot determine user"); +} + /* ** COMMAND: test-hash-passwords ** ** Usage: %fossil test-hash-passwords REPOSITORY @@ -374,12 +399,135 @@ ** has are unchanged. */ void user_hash_passwords_cmd(void){ if( g.argc!=3 ) usage("REPOSITORY"); db_open_repository(g.argv[2]); - sqlite3_create_function(g.db, "sha1_shared_secret", 2, SQLITE_UTF8, 0, - user_sha1_shared_secret_func, 0, 0); + sqlite3_create_function(g.db, "shared_secret", 2, SQLITE_UTF8, 0, + sha1_shared_secret_sql_function, 0, 0); db_multi_exec( - "UPDATE user SET pw=sha1_shared_secret(pw,login)" + "UPDATE user SET pw=shared_secret(pw,login), mtime=now()" " WHERE length(pw)>0 AND length(pw)!=40" ); } + +/* +** WEBPAGE: access_log +** +** Show login attempts, including timestamp and IP address. +** Requires Admin privileges. +** +** Query parameters: +** +** y=N 1: success only. 2: failure only. 3: both (default: 3) +** n=N Number of entries to show (default: 200) +** o=N Skip this many entries (default: 0) +*/ +void access_log_page(void){ + int y = atoi(PD("y","3")); + int n = atoi(PD("n","200")); + int skip = atoi(PD("o","0")); + Blob sql; + Stmt q; + int cnt = 0; + int rc; + int fLogEnabled; + + login_check_credentials(); + if( !g.perm.Admin ){ login_needed(0); return; } + create_accesslog_table(); + + + if( P("delall") && P("delallbtn") ){ + db_multi_exec("DELETE FROM accesslog"); + cgi_redirectf("%s/access_log?y=%d&n=%d&o=%o", g.zTop, y, n, skip); + return; + } + if( P("delanon") && P("delanonbtn") ){ + db_multi_exec("DELETE FROM accesslog WHERE uname='anonymous'"); + cgi_redirectf("%s/access_log?y=%d&n=%d&o=%o", g.zTop, y, n, skip); + return; + } + if( P("delfail") && P("delfailbtn") ){ + db_multi_exec("DELETE FROM accesslog WHERE NOT success"); + cgi_redirectf("%s/access_log?y=%d&n=%d&o=%o", g.zTop, y, n, skip); + return; + } + if( P("delold") && P("deloldbtn") ){ + db_multi_exec("DELETE FROM accesslog WHERE rowid in" + "(SELECT rowid FROM accesslog ORDER BY rowid DESC" + " LIMIT -1 OFFSET 200)"); + cgi_redirectf("%s/access_log?y=%d&n=%d", g.zTop, y, n); + return; + } + style_header("Access Log"); + blob_zero(&sql); + blob_append_sql(&sql, + "SELECT uname, ipaddr, datetime(mtime,toLocal()), success" + " FROM accesslog" + ); + if( y==1 ){ + blob_append(&sql, " WHERE success", -1); + }else if( y==2 ){ + blob_append(&sql, " WHERE NOT success", -1); + } + blob_append_sql(&sql," ORDER BY rowid DESC LIMIT %d OFFSET %d", n+1, skip); + if( skip ){ + style_submenu_element("Newer", "Newer entries", + "%s/access_log?o=%d&n=%d&y=%d", g.zTop, skip>=n ? 
skip-n : 0, + n, y); + } + rc = db_prepare_ignore_error(&q, "%s", blob_sql_text(&sql)); + @ <center> + fLogEnabled = db_get_boolean("access-log", 0); + @ <div>Access logging is %s(fLogEnabled?"on":"off"). + @ (Change this on the <a href="setup_settings">settings</a> page.)</div> + @ <table border="1" cellpadding="5" id='logtable'> + @ <thead><tr><th width="33%%">Date</th><th width="34%%">User</th> + @ <th width="33%%">IP Address</th></tr></thead><tbody> + while( rc==SQLITE_OK && db_step(&q)==SQLITE_ROW ){ + const char *zName = db_column_text(&q, 0); + const char *zIP = db_column_text(&q, 1); + const char *zDate = db_column_text(&q, 2); + int bSuccess = db_column_int(&q, 3); + cnt++; + if( cnt>n ){ + style_submenu_element("Older", "Older entries", + "%s/access_log?o=%d&n=%d&y=%d", g.zTop, skip+n, n, y); + break; + } + if( bSuccess ){ + @ <tr> + }else{ + @ <tr bgcolor="#ffacc0"> + } + @ <td>%s(zDate)</td><td>%h(zName)</td><td>%h(zIP)</td></tr> + } + if( skip>0 || cnt>n ){ + style_submenu_element("All", "All entries", + "%s/access_log?n=10000000", g.zTop); + } + @ </tbody></table></center> + db_finalize(&q); + @ <hr> + @ <form method="post" action="%s(g.zTop)/access_log"> + @ <label><input type="checkbox" name="delold"> + @ Delete all but the most recent 200 entries</input></label> + @ <input type="submit" name="deloldbtn" value="Delete"></input> + @ </form> + @ <form method="post" action="%s(g.zTop)/access_log"> + @ <label><input type="checkbox" name="delanon"> + @ Delete all entries for user "anonymous"</input></label> + @ <input type="submit" name="delanonbtn" value="Delete"></input> + @ </form> + @ <form method="post" action="%s(g.zTop)/access_log"> + @ <label><input type="checkbox" name="delfail"> + @ Delete all failed login attempts</input></label> + @ <input type="submit" name="delfailbtn" value="Delete"></input> + @ </form> + @ <form method="post" action="%s(g.zTop)/access_log"> + @ <label><input type="checkbox" name="delall"> + @ Delete all entries</input></label> + @ <input type="submit" name="delallbtn" value="Delete"></input> + @ </form> + output_table_sorting_javascript("logtable", "Ttt", 1); + style_footer(); +} ADDED src/utf8.c Index: src/utf8.c ================================================================== --- src/utf8.c +++ src/utf8.c @@ -0,0 +1,359 @@ +/* +** Copyright (c) 2012 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains utilities for converting text between UTF-8 (which +** is always used internally) and whatever encodings are used by the underlying +** filesystem and operating system. +*/ +#include "config.h" +#include "utf8.h" +#include <sqlite3.h> +#ifdef _WIN32 +# include <windows.h> +#endif +#include "cygsup.h" + +#if defined(_WIN32) || defined(__CYGWIN__) +/* +** Translate MBCS to UTF-8. Return a pointer to the translated text. +** Call fossil_mbcs_free() to deallocate any memory used to store the +** returned pointer when done. 
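+**
+** Illustrative usage (hypothetical caller):
+**
+**     char *zUtf8 = fossil_mbcs_to_utf8(zMbcs);
+**     ... use the UTF-8 text ...
+**     fossil_mbcs_free(zUtf8);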
+*/ +char *fossil_mbcs_to_utf8(const char *zMbcs){ + extern char *sqlite3_win32_mbcs_to_utf8(const char*); + return sqlite3_win32_mbcs_to_utf8(zMbcs); +} + +/* +** After translating from UTF-8 to MBCS, invoke this routine to deallocate +** any memory used to hold the translation +*/ +void fossil_mbcs_free(char *zOld){ + sqlite3_free(zOld); +} +#endif /* _WIN32 */ + +/* +** Translate Unicode text into UTF-8. +** Return a pointer to the translated text. +** Call fossil_unicode_free() to deallocate any memory used to store the +** returned pointer when done. +*/ +char *fossil_unicode_to_utf8(const void *zUnicode){ +#if defined(_WIN32) || defined(__CYGWIN__) + int nByte = WideCharToMultiByte(CP_UTF8, 0, zUnicode, -1, 0, 0, 0, 0); + char *zUtf = fossil_malloc( nByte ); + WideCharToMultiByte(CP_UTF8, 0, zUnicode, -1, zUtf, nByte, 0, 0); + return zUtf; +#else + static Stmt q; + char *zUtf8; + db_static_prepare(&q, "SELECT :utf8"); + db_bind_text16(&q, ":utf8", zUnicode); + db_step(&q); + zUtf8 = fossil_strdup(db_column_text(&q, 0)); + db_reset(&q); + return zUtf8; +#endif +} + +/* +** Translate UTF-8 to unicode for use in system calls. Return a pointer to the +** translated text.. Call fossil_unicode_free() to deallocate any memory +** used to store the returned pointer when done. +*/ +void *fossil_utf8_to_unicode(const char *zUtf8){ +#if defined(_WIN32) || defined(__CYGWIN__) + int nByte = MultiByteToWideChar(CP_UTF8, 0, zUtf8, -1, 0, 0); + wchar_t *zUnicode = fossil_malloc( nByte*2 ); + MultiByteToWideChar(CP_UTF8, 0, zUtf8, -1, zUnicode, nByte); + return zUnicode; +#else + assert( 0 ); /* Never used in unix */ + return fossil_strdup(zUtf8); /* TODO: implement for unix */ +#endif +} + +/* +** Deallocate any memory that was previously allocated by +** fossil_unicode_to_utf8(). +*/ +void fossil_unicode_free(void *pOld){ + fossil_free(pOld); +} + +#if defined(__APPLE__) && !defined(WITHOUT_ICONV) +# include <iconv.h> +#endif + +/* +** Translate text from the filename character set into UTF-8. +** Return a pointer to the translated text. +** Call fossil_path_free() to deallocate any memory used to store the +** returned pointer when done. +** +** This function must not convert '\' to '/' on windows/cygwin, as it is +** used in places where we are not sure it's really filenames we are handling, +** e.g. fossil_getenv() or handling the argv arguments from main(). +** +** On Windows, translate some characters in the in the range +** U+F001 - U+F07F (private use area) to ASCII. Cygwin sometimes +** generates such filenames. See: +** <http://cygwin.com/cygwin-ug-net/using-specialnames.html> +*/ +char *fossil_path_to_utf8(const void *zPath){ +#if defined(_WIN32) + int nByte = WideCharToMultiByte(CP_UTF8, 0, zPath, -1, 0, 0, 0, 0); + char *zUtf = sqlite3_malloc( nByte ); + char *pUtf, *qUtf; + if( zUtf==0 ){ + return 0; + } + WideCharToMultiByte(CP_UTF8, 0, zPath, -1, zUtf, nByte, 0, 0); + pUtf = qUtf = zUtf; + while( *pUtf ) { + if( *pUtf == (char)0xef ){ + wchar_t c = ((pUtf[1]&0x3f)<<6)|(pUtf[2]&0x3f); + /* Only really convert it when the resulting char is in range. 
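+      ** (For instance, U+F03A, the private-use stand-in Cygwin emits for
+      ** ':', decodes to 0x3A here and is therefore converted back.)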
*/ + if( c && ((c < ' ') || wcschr(L"\"*:<>?|", c)) ){ + *qUtf++ = c; pUtf+=3; continue; + } + } + *qUtf++ = *pUtf++; + } + *qUtf = 0; + return zUtf; +#elif defined(__CYGWIN__) + char *zOut; + zOut = fossil_strdup(zPath); + return zOut; +#elif defined(__APPLE__) && !defined(WITHOUT_ICONV) + char *zIn = (char*)zPath; + char *zOut; + iconv_t cd; + size_t n, x; + for(n=0; zIn[n]>0 && zIn[n]<=0x7f; n++){} + if( zIn[n]!=0 && (cd = iconv_open("UTF-8", "UTF-8-MAC"))!=(iconv_t)-1 ){ + char *zOutx; + char *zOrig = zIn; + size_t nIn, nOutx; + nIn = n = strlen(zIn); + nOutx = nIn+100; + zOutx = zOut = fossil_malloc( nOutx+1 ); + x = iconv(cd, &zIn, &nIn, &zOutx, &nOutx); + if( x==(size_t)-1 ){ + fossil_free(zOut); + zOut = fossil_strdup(zOrig); + }else{ + zOut[n+100-nOutx] = 0; + } + iconv_close(cd); + }else{ + zOut = fossil_strdup(zPath); + } + return zOut; +#else + return (char *)zPath; /* No-op on non-mac unix */ +#endif +} + +/* +** Translate text from UTF-8 to the filename character set. +** Return a pointer to the translated text. +** Call fossil_path_free() to deallocate any memory used to store the +** returned pointer when done. +** +** On Windows, characters in the range U+0001 to U+0031 and the +** characters '"', '*', ':', '<', '>', '?' and '|' are invalid +** to be used, except in the 'extended path' prefix ('?') and +** as drive specifier (':'). Therefore, translate those to characters +** in the range U+F001 - U+F07F (private use area), so those +** characters never arrive in any Windows API. The filenames might +** look strange in Windows explorer, but in the cygwin shell +** everything looks as expected. +** +** See: <http://cygwin.com/cygwin-ug-net/using-specialnames.html> +** +*/ +void *fossil_utf8_to_path(const char *zUtf8, int isDir){ +#ifdef _WIN32 + int nReserved = isDir ? 12 : 0; /* For dir, need room for "FILENAME.EXT" */ + int nChar = MultiByteToWideChar(CP_UTF8, 0, zUtf8, -1, 0, 0); + /* Overallocate 6 chars, making some room for extended paths */ + wchar_t *zUnicode = sqlite3_malloc( (nChar+6) * sizeof(wchar_t) ); + wchar_t *wUnicode = zUnicode; + if( zUnicode==0 ){ + return 0; + } + MultiByteToWideChar(CP_UTF8, 0, zUtf8, -1, zUnicode, nChar); + /* + ** If path starts with "//?/" or "\\?\" (extended path), translate + ** any slashes to backslashes but leave the '?' intact + */ + if( (zUtf8[0]=='\\' || zUtf8[0]=='/') && (zUtf8[1]=='\\' || zUtf8[1]=='/') + && zUtf8[2]=='?' && (zUtf8[3]=='\\' || zUtf8[3]=='/')) { + wUnicode[0] = wUnicode[1] = wUnicode[3] = '\\'; + zUtf8 += 4; + wUnicode += 4; + } + /* + ** If there is no "\\?\" prefix but there is a drive or UNC + ** path prefix and the path is larger than MAX_PATH chars, + ** no Win32 API function can handle that unless it is + ** prefixed with the extended path prefix. See: + ** <http://msdn.microsoft.com/en-us/library/aa365247(VS.85).aspx#maxpath> + **/ + if( fossil_isalpha(zUtf8[0]) && zUtf8[1]==':' + && (zUtf8[2]=='\\' || zUtf8[2]=='/') ){ + if( wUnicode==zUnicode && (nChar+nReserved)>MAX_PATH){ + memmove(wUnicode+4, wUnicode, nChar*sizeof(wchar_t)); + memcpy(wUnicode, L"\\\\?\\", 4*sizeof(wchar_t)); + wUnicode += 4; + } + /* + ** If (remainder of) path starts with "<drive>:/" or "<drive>:\", + ** leave the ':' intact but translate the backslash to a slash. 
+ */ + wUnicode[2] = '\\'; + wUnicode += 3; + }else if( wUnicode==zUnicode && (nChar+nReserved)>MAX_PATH + && (zUtf8[0]=='\\' || zUtf8[0]=='/') + && (zUtf8[1]=='\\' || zUtf8[1]=='/') && zUtf8[2]!='?'){ + memmove(wUnicode+6, wUnicode, nChar*sizeof(wchar_t)); + memcpy(wUnicode, L"\\\\?\\UNC", 7*sizeof(wchar_t)); + wUnicode += 7; + } + /* + ** In the remainder of the path, translate invalid characters to + ** characters in the Unicode private use area. This is what makes + ** Win32 fossil.exe work well in a Cygwin environment even when a + ** filename contains characters which are invalid for Win32. + */ + while( *wUnicode != '\0' ){ + if( (*wUnicode < ' ') || wcschr(L"\"*:<>?|", *wUnicode) ){ + *wUnicode |= 0xF000; + }else if( *wUnicode == '/' ){ + *wUnicode = '\\'; + } + ++wUnicode; + } + return zUnicode; +#elif defined(__CYGWIN__) + char *zPath, *p; + if( fossil_isalpha(zUtf8[0]) && (zUtf8[1]==':') + && (zUtf8[2]=='\\' || zUtf8[2]=='/')) { + /* win32 absolute path starting with drive specifier. */ + int nByte; + wchar_t zUnicode[2000]; + wchar_t *wUnicode = zUnicode; + MultiByteToWideChar(CP_UTF8, 0, zUtf8, -1, zUnicode, count(zUnicode)); + while( *wUnicode != '\0' ){ + if( *wUnicode == '/' ){ + *wUnicode = '\\'; + } + ++wUnicode; + } + nByte = cygwin_conv_path(CCP_WIN_W_TO_POSIX, zUnicode, NULL, 0); + zPath = fossil_malloc(nByte); + cygwin_conv_path(CCP_WIN_W_TO_POSIX, zUnicode, zPath, nByte); + }else{ + zPath = fossil_strdup(zUtf8); + zUtf8 = p = zPath; + while( (*p = *zUtf8++) != 0){ + if( *p++ == '\\' ) { + p[-1] = '/'; + } + } + } + return zPath; +#elif defined(__APPLE__) && !defined(WITHOUT_ICONV) + return fossil_strdup(zUtf8); +#else + return (void *)zUtf8; /* No-op on unix */ +#endif +} + +/* +** Deallocate any memory that was previously allocated by +** fossil_path_to_utf8() or fossil_utf8_to_path(). +*/ +void fossil_path_free(void *pOld){ +#if defined(_WIN32) + sqlite3_free(pOld); +#elif (defined(__APPLE__) && !defined(WITHOUT_ICONV)) || defined(__CYGWIN__) + fossil_free(pOld); +#else + /* No-op on all other unix */ +#endif +} + +/* +** Display UTF-8 on the console. Return the number of +** Characters written. If stdout or stderr is redirected +** to a file, -1 is returned and nothing is written +** to the console. +*/ +int fossil_utf8_to_console( + const char *zUtf8, + int nByte, + int toStdErr +){ +#ifdef _WIN32 + int nChar, written = 0; + wchar_t *zUnicode; /* Unicode version of zUtf8 */ + DWORD dummy; + Blob blob; + + static int istty[2] = { -1, -1 }; + if( istty[toStdErr] == -1 ){ + istty[toStdErr] = _isatty(toStdErr + 1) != 0; + } + if( !istty[toStdErr] ){ + /* stdout/stderr is not a console. */ + return -1; + } + + /* If blob to be written to the Windows console is not + * UTF-8, convert it to UTF-8 first. + */ + blob_init(&blob, zUtf8, nByte); + blob_to_utf8_no_bom(&blob, 1); + nChar = MultiByteToWideChar(CP_UTF8, 0, blob_buffer(&blob), + blob_size(&blob), NULL, 0); + zUnicode = fossil_malloc( (nChar+1)*sizeof(zUnicode[0]) ); + if( zUnicode==0 ){ + return 0; + } + nChar = MultiByteToWideChar(CP_UTF8, 0, blob_buffer(&blob), + blob_size(&blob), zUnicode, nChar); + blob_reset(&blob); + /* Split WriteConsoleW output into multiple chunks, if necessary. See: + * <https://connect.microsoft.com/VisualStudio/feedback/details/635230> */ + while( written<nChar ){ + int size = nChar-written; + if( size>26000 ) size = 26000; + WriteConsoleW(GetStdHandle( + toStdErr ? 
STD_ERROR_HANDLE : STD_OUTPUT_HANDLE), + zUnicode + written, size, &dummy, 0); + written += size; + } + fossil_free(zUnicode); + return nChar; +#else + return -1; /* No-op on unix */ +#endif +} ADDED src/util.c Index: src/util.c ================================================================== --- src/util.c +++ src/util.c @@ -0,0 +1,345 @@ +/* +** Copyright (c) 2006 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code for miscellaneous utility routines. +*/ +#include "config.h" +#include "util.h" + +/* +** For the fossil_timer_xxx() family of functions... +*/ +#ifdef _WIN32 +# include <windows.h> +#else +# include <sys/time.h> +# include <sys/resource.h> +# include <unistd.h> +# include <fcntl.h> +# include <errno.h> +#endif + + +/* +** Exit. Take care to close the database first. +*/ +NORETURN void fossil_exit(int rc){ + db_close(1); + exit(rc); +} + +/* +** Malloc and free routines that cannot fail +*/ +void *fossil_malloc(size_t n){ + void *p = malloc(n==0 ? 1 : n); + if( p==0 ) fossil_panic("out of memory"); + return p; +} +void fossil_free(void *p){ + free(p); +} +void *fossil_realloc(void *p, size_t n){ + p = realloc(p, n); + if( p==0 ) fossil_panic("out of memory"); + return p; +} + +/* +** This function implements a cross-platform "system()" interface. +*/ +int fossil_system(const char *zOrigCmd){ + int rc; +#if defined(_WIN32) + /* On windows, we have to put double-quotes around the entire command. + ** Who knows why - this is just the way windows works. + */ + char *zNewCmd = mprintf("\"%s\"", zOrigCmd); + wchar_t *zUnicode = fossil_utf8_to_unicode(zNewCmd); + if( g.fSystemTrace ) { + fossil_trace("SYSTEM: %s\n", zNewCmd); + } + rc = _wsystem(zUnicode); + fossil_unicode_free(zUnicode); + free(zNewCmd); +#else + /* On unix, evaluate the command directly. + */ + if( g.fSystemTrace ) fprintf(stderr, "SYSTEM: %s\n", zOrigCmd); + + /* Unix systems should never shell-out while processing an HTTP request, + ** either via CGI, SCGI, or direct HTTP. The following assert verifies + ** this. And the following assert proves that Fossil is not vulnerable + ** to the ShellShock or BashDoor bug. + */ + assert( g.cgiOutput==0 ); + + /* The regular system() call works to get a shell on unix */ + rc = system(zOrigCmd); +#endif + return rc; +} + +/* +** Like strcmp() except that it accepts NULL pointers. NULL sorts before +** all non-NULL string pointers. Also, this strcmp() is a binary comparison +** that does not consider locale. 
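+**
+** Illustrative results:
+**
+**     fossil_strcmp(0, 0)         ==  0
+**     fossil_strcmp(0, "abc")     <   0    (NULL sorts first)
+**     fossil_strcmp("abc", 0)     >   0
+**     fossil_strcmp("abc", "abd") <   0    (plain byte-wise comparison)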
+*/ +int fossil_strcmp(const char *zA, const char *zB){ + if( zA==0 ){ + if( zB==0 ) return 0; + return -1; + }else if( zB==0 ){ + return +1; + }else{ + int a, b; + do{ + a = *zA++; + b = *zB++; + }while( a==b && a!=0 ); + return ((unsigned char)a) - (unsigned char)b; + } +} +int fossil_strncmp(const char *zA, const char *zB, int nByte){ + if( zA==0 ){ + if( zB==0 ) return 0; + return -1; + }else if( zB==0 ){ + return +1; + }else if( nByte>0 ){ + int a, b; + do{ + a = *zA++; + b = *zB++; + }while( a==b && a!=0 && (--nByte)>0 ); + return ((unsigned char)a) - (unsigned char)b; + }else{ + return 0; + } +} + +/* +** Case insensitive string comparison. +*/ +int fossil_strnicmp(const char *zA, const char *zB, int nByte){ + if( zA==0 ){ + if( zB==0 ) return 0; + return -1; + }else if( zB==0 ){ + return +1; + } + if( nByte<0 ) nByte = strlen(zB); + return sqlite3_strnicmp(zA, zB, nByte); +} +int fossil_stricmp(const char *zA, const char *zB){ + int nByte; + int rc; + if( zA==0 ){ + if( zB==0 ) return 0; + return -1; + }else if( zB==0 ){ + return +1; + } + nByte = strlen(zB); + rc = sqlite3_strnicmp(zA, zB, nByte); + if( rc==0 && zA[nByte] ) rc = 1; + return rc; +} + +/* +** Get user and kernel times in microseconds. +*/ +void fossil_cpu_times(sqlite3_uint64 *piUser, sqlite3_uint64 *piKernel){ +#ifdef _WIN32 + FILETIME not_used; + FILETIME kernel_time; + FILETIME user_time; + GetProcessTimes(GetCurrentProcess(), ¬_used, ¬_used, + &kernel_time, &user_time); + if( piUser ){ + *piUser = ((((sqlite3_uint64)user_time.dwHighDateTime)<<32) + + (sqlite3_uint64)user_time.dwLowDateTime + 5)/10; + } + if( piKernel ){ + *piKernel = ((((sqlite3_uint64)kernel_time.dwHighDateTime)<<32) + + (sqlite3_uint64)kernel_time.dwLowDateTime + 5)/10; + } +#else + struct rusage s; + getrusage(RUSAGE_SELF, &s); + if( piUser ){ + *piUser = ((sqlite3_uint64)s.ru_utime.tv_sec)*1000000 + s.ru_utime.tv_usec; + } + if( piKernel ){ + *piKernel = + ((sqlite3_uint64)s.ru_stime.tv_sec)*1000000 + s.ru_stime.tv_usec; + } +#endif +} + +/* +** Internal helper type for fossil_timer_xxx(). + */ +enum FossilTimerEnum { + FOSSIL_TIMER_COUNT = 10 /* Number of timers we can track. */ +}; +static struct FossilTimer { + sqlite3_uint64 u; /* "User" CPU times */ + sqlite3_uint64 s; /* "System" CPU times */ + int id; /* positive if allocated, else 0. */ +} fossilTimerList[FOSSIL_TIMER_COUNT] = {{0,0,0}}; + +/* +** Stores the current CPU times into the shared timer list +** and returns that timer's internal ID. Pass that ID to +** fossil_timer_fetch() to get the elapsed time for that +** timer. +** +** The system has a fixed number of timers, and they can be +** "deallocated" by passing this function's return value to +** fossil_timer_stop() Adjust FOSSIL_TIMER_COUNT to set the number of +** available timers. +** +** Returns 0 on error (no more timers available), with 1+ being valid +** timer IDs. +*/ +int fossil_timer_start(){ + int i; + static char once = 0; + if(!once){ + once = 1; + memset(&fossilTimerList, 0, + count(fossilTimerList)); + } + for( i = 0; i < FOSSIL_TIMER_COUNT; ++i ){ + struct FossilTimer * ft = &fossilTimerList[i]; + if(ft->id) continue; + ft->id = i+1; + fossil_cpu_times( &ft->u, &ft->s ); + break; + } + return (i<FOSSIL_TIMER_COUNT) ? i+1 : 0; +} + +/* +** Returns the difference in CPU times in microseconds since +** fossil_timer_start() was called and returned the given timer ID (or +** since it was last reset). Returns 0 if timerId is out of range. 
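+**
+** Illustrative usage of the timer API (hypothetical caller):
+**
+**     int tid = fossil_timer_start();
+**     ... do some work ...
+**     sqlite3_uint64 uSec = fossil_timer_fetch(tid);   microseconds so far
+**     fossil_timer_stop(tid);                          release the slot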
+*/ +sqlite3_uint64 fossil_timer_fetch(int timerId){ + if( timerId>0 && timerId<=FOSSIL_TIMER_COUNT ){ + struct FossilTimer * start = &fossilTimerList[timerId-1]; + if( !start->id ){ + fossil_fatal("Invalid call to fetch a non-allocated " + "timer (#%d)", timerId); + /*NOTREACHED*/ + }else{ + sqlite3_uint64 eu = 0, es = 0; + fossil_cpu_times( &eu, &es ); + return (eu - start->u) + (es - start->s); + } + } + return 0; +} + +/* +** Resets the timer associated with the given ID, as obtained via +** fossil_timer_start(), to the current CPU time values. +*/ +sqlite3_uint64 fossil_timer_reset(int timerId){ + if( timerId>0 && timerId<=FOSSIL_TIMER_COUNT ){ + struct FossilTimer * start = &fossilTimerList[timerId-1]; + if( !start->id ){ + fossil_fatal("Invalid call to reset a non-allocated " + "timer (#%d)", timerId); + /*NOTREACHED*/ + }else{ + sqlite3_uint64 const rc = fossil_timer_fetch(timerId); + fossil_cpu_times( &start->u, &start->s ); + return rc; + } + } + return 0; +} + +/** + "Deallocates" the fossil timer identified by the given timer ID. + returns the difference (in uSec) between the last time that timer + was started or reset. Returns 0 if timerId is out of range (but + note that, due to system-level precision restrictions, this + function might return 0 on success, too!). It is not legal to + re-use the passed-in timerId after calling this until/unless it is + re-initialized using fossil_timer_start() (NOT + fossil_timer_reset()). +*/ +sqlite3_uint64 fossil_timer_stop(int timerId){ + if(timerId<1 || timerId>FOSSIL_TIMER_COUNT){ + return 0; + }else{ + sqlite3_uint64 const rc = fossil_timer_fetch(timerId); + struct FossilTimer * t = &fossilTimerList[timerId-1]; + t->id = 0; + t->u = t->s = 0U; + return rc; + } +} + +/* +** Returns true (non-0) if the given timer ID (as returned from +** fossil_timer_start() is currently active. +*/ +int fossil_timer_is_active( int timerId ){ + if(timerId<1 || timerId>FOSSIL_TIMER_COUNT){ + return 0; + }else{ + const int rc = fossilTimerList[timerId-1].id; + assert(!rc || (rc == timerId)); + return fossilTimerList[timerId-1].id; + } +} + +/* +** Return TRUE if fd is a valid open file descriptor. This only +** works on unix. The function always returns true on Windows. +*/ +int is_valid_fd(int fd){ +#ifdef _WIN32 + return 1; +#else + return fcntl(fd, F_GETFL)!=(-1) || errno!=EBADF; +#endif +} + +/* +** Returns TRUE if zSym is exactly UUID_SIZE bytes long and contains +** only lower-case ASCII hexadecimal values. +*/ +int fossil_is_uuid(const char *zSym){ + return zSym + && (UUID_SIZE==strlen(zSym)) + && validate16(zSym, UUID_SIZE); +} + +/* +** Return true if the input string is NULL or all whitespace. +** Return false if the input string contains text. +*/ +int fossil_all_whitespace(const char *z){ + if( z==0 ) return 1; + while( fossil_isspace(z[0]) ){ z++; } + return z[0]==0; +} Index: src/verify.c ================================================================== --- src/verify.c +++ src/verify.c @@ -13,11 +13,11 @@ ** drh@hwaci.com ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** -** This file contains code used to help verify the integrity of the +** This file contains code used to help verify the integrity of ** the repository. ** ** This file primarily implements the verify_before_commit() interface. ** Any function can call verify_before_commit() with a record id (RID) ** as an argument. 
Then before the next change to the database commits, @@ -35,17 +35,17 @@ ** Panic if anything goes wrong. If this procedure returns it means ** that everything is OK. */ static void verify_rid(int rid){ Blob uuid, hash, content; - if( db_int(0, "SELECT size FROM blob WHERE rid=%d", rid)<0 ){ + if( content_size(rid, 0)<0 ){ return; /* No way to verify phantoms */ } blob_zero(&uuid); db_blob(&uuid, "SELECT uuid FROM blob WHERE rid=%d", rid); if( blob_size(&uuid)!=UUID_SIZE ){ - fossil_panic("not a valid rid: %d", rid); + fossil_fatal("not a valid rid: %d", rid); } if( content_get(rid, &content) ){ sha1sum_blob(&content, &hash); blob_reset(&content); if( blob_compare(&uuid, &hash) ){ @@ -63,11 +63,11 @@ */ static Bag toVerify; static int inFinalVerify = 0; /* -** This routine is called just prior to each commit operation. +** This routine is called just prior to each commit operation. ** ** Invoke verify_rid() on every record that has been added or modified ** in the repository, in order to make sure that the repository is sane. */ static int verify_at_commit(void){ @@ -84,11 +84,11 @@ return 0; } /* ** Arrange to verify a particular record prior to committing. -** +** ** If the record rid is less than 1, then just initialize the ** verification system but do not record anything as needing ** verification. */ void verify_before_commit(int rid){ Index: src/vfile.c ================================================================== --- src/vfile.c +++ src/vfile.c @@ -19,184 +19,267 @@ */ #include "config.h" #include "vfile.h" #include <assert.h> #include <sys/types.h> -#include <dirent.h> + +/* +** The input is guaranteed to be a 40-character well-formed UUID. +** Find its rid. +*/ +int fast_uuid_to_rid(const char *zUuid){ + static Stmt q; + int rid; + db_static_prepare(&q, "SELECT rid FROM blob WHERE uuid=:uuid"); + db_bind_text(&q, ":uuid", zUuid); + if( db_step(&q)==SQLITE_ROW ){ + rid = db_column_int(&q, 0); + }else{ + rid = 0; + } + db_reset(&q); + return rid; +} /* ** Given a UUID, return the corresponding record ID. If the UUID ** does not exist, then return 0. ** ** For this routine, the UUID must be exact. For a match against ** user input with mixed case, use resolve_uuid(). ** -** If the UUID is not found and phantomize is 1, then attempt to -** create a phantom record. +** If the UUID is not found and phantomize is 1 or 2, then attempt to +** create a phantom record. A private phantom is created for 2 and +** a public phantom is created for 1. */ int uuid_to_rid(const char *zUuid, int phantomize){ int rid, sz; - static Stmt q; char z[UUID_SIZE+1]; - + sz = strlen(zUuid); if( sz!=UUID_SIZE || !validate16(zUuid, sz) ){ return 0; } - strcpy(z, zUuid); + memcpy(z, zUuid, UUID_SIZE+1); canonical16(z, sz); - db_static_prepare(&q, "SELECT rid FROM blob WHERE uuid=:uuid"); - db_bind_text(&q, ":uuid", z); - if( db_step(&q)==SQLITE_ROW ){ - rid = db_column_int(&q, 0); - }else{ - rid = 0; - } - db_reset(&q); + rid = fast_uuid_to_rid(z); if( rid==0 && phantomize ){ - rid = content_new(zUuid); + rid = content_new(zUuid, phantomize-1); } return rid; } -/* -** Verify that an object is not a phantom. If the object is -** a phantom, output an error message and quick. 
-*/ -void vfile_verify_not_phantom(int rid, const char *zFilename){ - if( db_int(-1, "SELECT size FROM blob WHERE rid=%d", rid)<0 ){ - if( zFilename ){ - fossil_fatal("content missing for %s", zFilename); - }else{ - char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); - if( zUuid ){ - fossil_fatal("content missing for [%.10s]", zUuid); - }else{ - fossil_panic("bad object id: %d", rid); - } - } - } -} - -/* -** Build a catalog of all files in a baseline. -** We scan the baseline file for lines of the form: -** -** F NAME UUID -** -** Each such line makes an entry in the VFILE table. -*/ -void vfile_build(int vid, Blob *p){ - int rid; - char *zName, *zUuid; - Stmt ins; - Blob line, token, name, uuid; - int seenHeader = 0; + +/* +** Load a vfile from a record ID. Return the number of files with +** missing content. +*/ +int load_vfile_from_rid(int vid){ + int rid, size, nMissing; + Stmt ins, ridq; + Manifest *p; + ManifestFile *pFile; + + if( db_exists("SELECT 1 FROM vfile WHERE vid=%d", vid) ){ + return 0; + } + db_begin_transaction(); - vfile_verify_not_phantom(vid, 0); - db_multi_exec("DELETE FROM vfile WHERE vid=%d", vid); + p = manifest_get(vid, CFTYPE_MANIFEST, 0); + if( p==0 ) { + db_end_transaction(1); + return 0; + } db_prepare(&ins, - "INSERT INTO vfile(vid,rid,mrid,pathname) " - " VALUES(:vid,:id,:id,:name)"); + "INSERT INTO vfile(vid,isexe,islink,rid,mrid,pathname) " + " VALUES(:vid,:isexe,:islink,:id,:id,:name)"); + db_prepare(&ridq, "SELECT rid,size FROM blob WHERE uuid=:uuid"); db_bind_int(&ins, ":vid", vid); - while( blob_line(p, &line) ){ - char *z = blob_buffer(&line); - if( z[0]=='-' ){ - if( seenHeader ) break; - while( blob_line(p, &line)>2 ){} - if( blob_line(p, &line)==0 ) break; - } - seenHeader = 1; - if( z[0]!='F' || z[1]!=' ' ) continue; - blob_token(&line, &token); /* Skip the "F" token */ - if( blob_token(&line, &name)==0 ) break; - if( blob_token(&line, &uuid)==0 ) break; - zName = blob_str(&name); - defossilize(zName); - zUuid = blob_str(&uuid); - rid = uuid_to_rid(zUuid, 0); - vfile_verify_not_phantom(rid, zName); - if( rid>0 && file_is_simple_pathname(zName) ){ - db_bind_int(&ins, ":id", rid); - db_bind_text(&ins, ":name", zName); - db_step(&ins); - db_reset(&ins); - } - blob_reset(&name); - blob_reset(&uuid); - } + manifest_file_rewind(p); + nMissing = 0; + while( (pFile = manifest_file_next(p,0))!=0 ){ + if( pFile->zUuid==0 || uuid_is_shunned(pFile->zUuid) ) continue; + db_bind_text(&ridq, ":uuid", pFile->zUuid); + if( db_step(&ridq)==SQLITE_ROW ){ + rid = db_column_int(&ridq, 0); + size = db_column_int(&ridq, 1); + }else{ + rid = 0; + size = 0; + } + db_reset(&ridq); + if( rid==0 || size<0 ){ + fossil_warning("content missing for %s", pFile->zName); + nMissing++; + continue; + } + db_bind_int(&ins, ":isexe", ( manifest_file_mperm(pFile)==PERM_EXE )); + db_bind_int(&ins, ":id", rid); + db_bind_text(&ins, ":name", pFile->zName); + db_bind_int(&ins, ":islink", ( manifest_file_mperm(pFile)==PERM_LNK )); + db_step(&ins); + db_reset(&ins); + } + db_finalize(&ridq); db_finalize(&ins); + manifest_destroy(p); db_end_transaction(0); + return nMissing; } +#if INTERFACE +/* +** The cksigFlags parameter to vfile_check_signature() is an OR-ed +** combination of the following bits: +*/ +#define CKSIG_ENOTFILE 0x001 /* non-file FS objects throw an error */ +#define CKSIG_SHA1 0x002 /* Verify file content using sha1sum */ +#define CKSIG_SETMTIME 0x004 /* Set mtime to last check-out time */ + +#endif /* INTERFACE */ + /* -** Check the file signature of the 
disk image for every VFILE of vid. -** -** Set the VFILE.CHNGED field on every file that has changed. Also -** set VFILE.CHNGED on every folder that contains a file or folder -** that has changed. -** -** If VFILE.DELETED is null or if VFILE.RID is zero, then we can assume -** the file has changed without having the check the on-disk image. -*/ -void vfile_check_signature(int vid, int notFileIsFatal){ +** Look at every VFILE entry with the given vid and update VFILE.CHNGED field +** according to whether or not the file has changed. +** - 0 means no change. +** - 1 means edited. +** - 2 means changed due to a merge. +** - 3 means added by a merge. +** - 4 means changed due to an integrate merge. +** - 5 means added by an integrate merge. +** - 6 means became executable but has unmodified contents. +** - 7 means became a symlink whose target equals its old contents. +** - 8 means lost executable status but has unmodified contents. +** - 9 means lost symlink status and has contents equal to its old target. +** +** If VFILE.DELETED is true or if VFILE.RID is zero, then the file was either +** removed from configuration management via "fossil rm" or added via +** "fossil add", respectively, and in both cases we always know that +** the file has changed without having the check the size, mtime, +** or on-disk content. +** +** If the size of the file has changed, then we always know that the file +** changed without having to look at the mtime or on-disk content. +** +** The mtime of the file is only a factor if the mtime-changes setting +** is false and the useSha1sum flag is false. If the mtime-changes +** setting is true (or undefined - it defaults to true) or if useSha1sum +** is true, then we do not trust the mtime and will examine the on-disk +** content to determine if a file really is the same. +** +** If the mtime is used, it is used only to determine if files are the same. +** If the mtime of a file has changed, we still examine the on-disk content +** to see whether or not the edit was a null-edit. 
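+**
+** Illustrative calls (hypothetical callers):
+**
+**     vfile_check_signature(vid, 0);
+**         rely on size/mtime checks per the mtime-changes setting
+**     vfile_check_signature(vid, CKSIG_SHA1|CKSIG_SETMTIME);
+**         force SHA1 content checks and set mtimes to check-out times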
+*/ +void vfile_check_signature(int vid, unsigned int cksigFlags){ int nErr = 0; Stmt q; Blob fileCksum, origCksum; - int checkMtime = db_get_boolean("mtime-changes", 1); + int useMtime = (cksigFlags & CKSIG_SHA1)==0 + && db_get_boolean("mtime-changes", 1); db_begin_transaction(); db_prepare(&q, "SELECT id, %Q || pathname," - " vfile.mrid, deleted, chnged, uuid, mtime" + " vfile.mrid, deleted, chnged, uuid, size, mtime," + " CASE WHEN isexe THEN %d WHEN islink THEN %d ELSE %d END" " FROM vfile LEFT JOIN blob ON vfile.mrid=blob.rid" - " WHERE vid=%d ", g.zLocalRoot, vid); + " WHERE vid=%d ", g.zLocalRoot, PERM_EXE, PERM_LNK, PERM_REG, + vid); while( db_step(&q)==SQLITE_ROW ){ int id, rid, isDeleted; const char *zName; int chnged = 0; int oldChnged; + int origPerm; + int currentPerm; i64 oldMtime; i64 currentMtime; + i64 origSize; + i64 currentSize; id = db_column_int(&q, 0); zName = db_column_text(&q, 1); rid = db_column_int(&q, 2); isDeleted = db_column_int(&q, 3); - oldChnged = db_column_int(&q, 4); - oldMtime = db_column_int64(&q, 6); - if( isDeleted ){ + oldChnged = chnged = db_column_int(&q, 4); + oldMtime = db_column_int64(&q, 7); + origSize = db_column_int64(&q, 6); + currentSize = file_wd_size(zName); + currentMtime = file_wd_mtime(0); + origPerm = db_column_int(&q, 8); + currentPerm = file_wd_perm(zName); + if( chnged==0 && (isDeleted || rid==0) ){ + /* "fossil rm" or "fossil add" always change the file */ chnged = 1; - }else if( !file_isfile(zName) && file_size(0)>=0 ){ - if( notFileIsFatal ){ + }else if( !file_wd_isfile_or_link(0) && currentSize>=0 ){ + if( cksigFlags & CKSIG_ENOTFILE ){ fossil_warning("not an ordinary file: %s", zName); nErr++; } chnged = 1; - }else if( oldChnged>=2 ){ - chnged = oldChnged; - }else if( rid==0 ){ - chnged = 1; - } - if( chnged!=1 ){ - currentMtime = file_mtime(0); - } - if( chnged!=1 && (checkMtime==0 || currentMtime!=oldMtime) ){ + } + if( origSize!=currentSize ){ + if( chnged!=1 ){ + /* A file size change is definitive - the file has changed. No + ** need to check the mtime or sha1sum */ + chnged = 1; + } + }else if( chnged==1 && rid!=0 && !isDeleted ){ + /* File is believed to have changed but it is the same size. + ** Double check that it really has changed by looking at content. 
*/ + assert( origSize==currentSize ); + db_ephemeral_blob(&q, 5, &origCksum); + if( sha1sum_file(zName, &fileCksum) ){ + blob_zero(&fileCksum); + } + if( blob_compare(&fileCksum, &origCksum)==0 ) chnged = 0; + blob_reset(&origCksum); + blob_reset(&fileCksum); + }else if( (chnged==0 || chnged==2 || chnged==4) + && (useMtime==0 || currentMtime!=oldMtime) ){ + /* For files that were formerly believed to be unchanged or that were + ** changed by merging, if their mtime changes, or unconditionally + ** if --sha1sum is used, check to see if they have been edited by + ** looking at their SHA1 sum */ + assert( origSize==currentSize ); db_ephemeral_blob(&q, 5, &origCksum); if( sha1sum_file(zName, &fileCksum) ){ blob_zero(&fileCksum); } if( blob_compare(&fileCksum, &origCksum) ){ chnged = 1; - }else if( currentMtime!=oldMtime ){ - db_multi_exec("UPDATE vfile SET mtime=%lld WHERE id=%d", - currentMtime, id); } blob_reset(&origCksum); blob_reset(&fileCksum); } - if( chnged!=oldChnged ){ - db_multi_exec("UPDATE vfile SET chnged=%d WHERE id=%d", chnged, id); + if( (cksigFlags & CKSIG_SETMTIME) && (chnged==0 || chnged==2 || chnged==4) ){ + i64 desiredMtime; + if( mtime_of_manifest_file(vid,rid,&desiredMtime)==0 ){ + if( currentMtime!=desiredMtime ){ + file_set_mtime(zName, desiredMtime); + currentMtime = file_wd_mtime(zName); + } + } + } +#ifndef _WIN32 + if( chnged==0 || chnged==6 || chnged==7 || chnged==8 || chnged==9 ){ + if( origPerm == currentPerm ){ + chnged = 0; + }else if( currentPerm == PERM_EXE ){ + chnged = 6; + }else if( currentPerm == PERM_LNK ){ + chnged = 7; + }else if( origPerm == PERM_EXE ){ + chnged = 8; + }else if( origPerm == PERM_LNK ){ + chnged = 9; + } + } +#endif + if( currentMtime!=oldMtime || chnged!=oldChnged ){ + db_multi_exec("UPDATE vfile SET mtime=%lld, chnged=%d WHERE id=%d", + currentMtime, chnged, id); } } db_finalize(&q); if( nErr ) fossil_fatal("abort due to prior errors"); db_end_transaction(0); @@ -204,40 +287,83 @@ /* ** Write all files from vid to the disk. Or if vid==0 and id!=0 ** write just the specific file where VFILE.ID=id. */ -void vfile_to_disk(int vid, int id, int verbose){ +void vfile_to_disk( + int vid, /* vid to write to disk */ + int id, /* Write this one file, if not zero */ + int verbose, /* Output progress information */ + int promptFlag /* Prompt user to confirm overwrites */ +){ Stmt q; Blob content; int nRepos = strlen(g.zLocalRoot); if( vid>0 && id==0 ){ - db_prepare(&q, "SELECT id, %Q || pathname, mrid" + db_prepare(&q, "SELECT id, %Q || pathname, mrid, isexe, islink" " FROM vfile" " WHERE vid=%d AND mrid>0", g.zLocalRoot, vid); }else{ assert( vid==0 && id>0 ); - db_prepare(&q, "SELECT id, %Q || pathname, mrid" + db_prepare(&q, "SELECT id, %Q || pathname, mrid, isexe, islink" " FROM vfile" " WHERE id=%d AND mrid>0", g.zLocalRoot, id); } while( db_step(&q)==SQLITE_ROW ){ - int id, rid; + int id, rid, isExe, isLink; const char *zName; id = db_column_int(&q, 0); zName = db_column_text(&q, 1); rid = db_column_int(&q, 2); + isExe = db_column_int(&q, 3); + isLink = db_column_int(&q, 4); content_get(rid, &content); - if( verbose ) printf("%s\n", &zName[nRepos]); - blob_write_to_file(&content, zName); + if( file_is_the_same(&content, zName) ){ + blob_reset(&content); + if( file_wd_setexe(zName, isExe) ){ + db_multi_exec("UPDATE vfile SET mtime=%lld WHERE id=%d", + file_wd_mtime(zName), id); + } + continue; + } + if( promptFlag && file_wd_size(zName)>=0 ){ + Blob ans; + char *zMsg; + char cReply; + zMsg = mprintf("overwrite %s (a=always/y/N)? 
", zName); + prompt_user(zMsg, &ans); + free(zMsg); + cReply = blob_str(&ans)[0]; + blob_reset(&ans); + if( cReply=='a' || cReply=='A' ){ + promptFlag = 0; + } else if( cReply!='y' && cReply!='Y' ){ + blob_reset(&content); + continue; + } + } + if( verbose ) fossil_print("%s\n", &zName[nRepos]); + if( file_wd_isdir(zName) == 1 ){ + /*TODO(dchest): remove directories? */ + fossil_fatal("%s is directory, cannot overwrite\n", zName); + } + if( file_wd_size(zName)>=0 && (isLink || file_wd_islink(0)) ){ + file_delete(zName); + } + if( isLink ){ + symlink_create(blob_str(&content), zName); + }else{ + blob_write_to_file(&content, zName); + } + file_wd_setexe(zName, isExe); blob_reset(&content); db_multi_exec("UPDATE vfile SET mtime=%lld WHERE id=%d", - file_mtime(zName), id); + file_wd_mtime(zName), id); } db_finalize(&q); } @@ -250,199 +376,580 @@ " WHERE vid=%d AND mrid>0", g.zLocalRoot, vid); while( db_step(&q)==SQLITE_ROW ){ const char *zName; zName = db_column_text(&q, 0); - unlink(zName); + file_delete(zName); } db_finalize(&q); db_multi_exec("UPDATE vfile SET mtime=NULL WHERE vid=%d AND mrid>0", vid); } +/* +** Check to see if the directory named in zPath is the top of a checkout. +** In other words, check to see if directory pPath contains a file named +** "_FOSSIL_" or ".fslckout". Return true or false. +*/ +int vfile_top_of_checkout(const char *zPath){ + char *zFile; + int fileFound = 0; + + zFile = mprintf("%s/_FOSSIL_", zPath); + fileFound = file_size(zFile)>=1024; + fossil_free(zFile); + if( !fileFound ){ + zFile = mprintf("%s/.fslckout", zPath); + fileFound = file_size(zFile)>=1024; + fossil_free(zFile); + } + + /* Check for ".fos" for legacy support. But the use of ".fos" as the + ** per-checkout database name is deprecated. At some point, all support + ** for ".fos" will end and this code should be removed. This comment + ** added on 2012-02-04. + */ + if( !fileFound ){ + zFile = mprintf("%s/.fos", zPath); + fileFound = file_size(zFile)>=1024; + fossil_free(zFile); + } + return fileFound; +} + +/* +** Return TRUE if zFile is a temporary file. Return FALSE if not. +*/ +static int is_temporary_file(const char *zName){ + static const char *const azTemp[] = { + "baseline", + "merge", + "original", + "output", + }; + int i, j, n; + + if( sqlite3_strglob("ci-comment-????????????.txt", zName)==0 ) return 1; + for(; zName[0]!=0; zName++){ + if( zName[0]=='/' && sqlite3_strglob("/ci-comment-????????????.txt", zName)==0 ){ + return 1; + } + if( zName[0]!='-' ) continue; + for(i=0; i<sizeof(azTemp)/sizeof(azTemp[0]); i++){ + n = (int)strlen(azTemp[i]); + if( memcmp(azTemp[i], zName+1, n) ) continue; + if( zName[n+1]==0 ) return 1; + if( zName[n+1]=='-' ){ + for(j=n+2; zName[j] && fossil_isdigit(zName[j]); j++){} + if( zName[j]==0 ) return 1; + } + } + } + return 0; +} + +#if INTERFACE +/* +** Values for the scanFlags parameter to vfile_scan(). +*/ +#define SCAN_ALL 0x001 /* Includes files that begin with "." */ +#define SCAN_TEMP 0x002 /* Only Fossil-generated files like *-baseline */ +#define SCAN_NESTED 0x004 /* Scan for empty dirs in nested checkouts */ +#endif /* INTERFACE */ + /* ** Load into table SFILE the name of every ordinary file in ** the directory pPath. Omit the first nPrefix characters of ** of pPath when inserting into the SFILE table. ** ** Subdirectories are scanned recursively. -** Omit files named in VFILE.vid +** Omit files named in VFILE. +** +** Files whose names begin with "." are omitted unless the SCAN_ALL +** flag is set. 
+** +** Any files or directories that match the glob patterns pIgnore* +** are excluded from the scan. Name matching occurs after the +** first nPrefix characters are elided from the filename. +*/ +void vfile_scan( + Blob *pPath, /* Directory to be scanned */ + int nPrefix, /* Number of bytes in directory name */ + unsigned scanFlags, /* Zero or more SCAN_xxx flags */ + Glob *pIgnore1, /* Do not add files that match this GLOB */ + Glob *pIgnore2 /* Omit files matching this GLOB too */ +){ + DIR *d; + int origSize; + struct dirent *pEntry; + int skipAll = 0; + static Stmt ins; + static int depth = 0; + void *zNative; + + origSize = blob_size(pPath); + if( pIgnore1 || pIgnore2 ){ + blob_appendf(pPath, "/"); + if( glob_match(pIgnore1, &blob_str(pPath)[nPrefix+1]) ) skipAll = 1; + if( glob_match(pIgnore2, &blob_str(pPath)[nPrefix+1]) ) skipAll = 1; + blob_resize(pPath, origSize); + } + if( skipAll ) return; + + if( depth==0 ){ + db_prepare(&ins, + "INSERT OR IGNORE INTO sfile(x) SELECT :file" + " WHERE NOT EXISTS(SELECT 1 FROM vfile WHERE" + " pathname=:file %s)", filename_collation() + ); + } + depth++; + + zNative = fossil_utf8_to_path(blob_str(pPath), 1); + d = opendir(zNative); + if( d ){ + while( (pEntry=readdir(d))!=0 ){ + char *zPath; + char *zUtf8; + if( pEntry->d_name[0]=='.' ){ + if( (scanFlags & SCAN_ALL)==0 ) continue; + if( pEntry->d_name[1]==0 ) continue; + if( pEntry->d_name[1]=='.' && pEntry->d_name[2]==0 ) continue; + } + zUtf8 = fossil_path_to_utf8(pEntry->d_name); + blob_appendf(pPath, "/%s", zUtf8); + zPath = blob_str(pPath); + if( glob_match(pIgnore1, &zPath[nPrefix+1]) || + glob_match(pIgnore2, &zPath[nPrefix+1]) ){ + /* do nothing */ +#ifdef _DIRENT_HAVE_D_TYPE + }else if( (pEntry->d_type==DT_UNKNOWN || pEntry->d_type==DT_LNK) + ? (file_wd_isdir(zPath)==1) : (pEntry->d_type==DT_DIR) ){ +#else + }else if( file_wd_isdir(zPath)==1 ){ +#endif + if( !vfile_top_of_checkout(zPath) ){ + vfile_scan(pPath, nPrefix, scanFlags, pIgnore1, pIgnore2); + } +#ifdef _DIRENT_HAVE_D_TYPE + }else if( (pEntry->d_type==DT_UNKNOWN || pEntry->d_type==DT_LNK) + ? (file_wd_isfile_or_link(zPath)) : (pEntry->d_type==DT_REG) ){ +#else + }else if( file_wd_isfile_or_link(zPath) ){ +#endif + if( (scanFlags & SCAN_TEMP)==0 || is_temporary_file(zUtf8) ){ + db_bind_text(&ins, ":file", &zPath[nPrefix+1]); + db_step(&ins); + db_reset(&ins); + } + } + fossil_path_free(zUtf8); + blob_resize(pPath, origSize); + } + closedir(d); + } + fossil_path_free(zNative); + + depth--; + if( depth==0 ){ + db_finalize(&ins); + } +} + +/* +** Scans the specified base directory for any directories within it, while +** keeping a count of how many files they each contains, either directly or +** indirectly. +** +** Subdirectories are scanned recursively. +** Omit files named in VFILE. +** +** Directories whose names begin with "." are omitted unless the SCAN_ALL +** flag is set. +** +** Any directories that match the glob patterns pIgnore* are excluded from +** the scan. Name matching occurs after the first nPrefix characters are +** elided from the filename. +** +** Returns the total number of files found. 
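+**
+** Behavior note (summarized from the implementation below): directory names
+** and their accumulated file counts are collected in the TEMP table
+** dscan_temp(x,y), which this routine creates on its outermost invocation.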
*/ -void vfile_scan(int vid, Blob *pPath, int nPrefix, int allFlag){ +int vfile_dir_scan( + Blob *pPath, /* Base directory to be scanned */ + int nPrefix, /* Number of bytes in base directory name */ + unsigned scanFlags, /* Zero or more SCAN_xxx flags */ + Glob *pIgnore1, /* Do not add directories that match this GLOB */ + Glob *pIgnore2 /* Omit directories matching this GLOB too */ +){ + int result = 0; DIR *d; int origSize; - const char *zDir; struct dirent *pEntry; - static const char *zSql = "SELECT 1 FROM vfile " - " WHERE pathname=%Q AND NOT deleted"; + int skipAll = 0; + static Stmt ins; + static Stmt upd; + static int depth = 0; + void *zNative; origSize = blob_size(pPath); - zDir = blob_str(pPath); - d = opendir(zDir); + if( pIgnore1 || pIgnore2 ){ + blob_appendf(pPath, "/"); + if( glob_match(pIgnore1, &blob_str(pPath)[nPrefix+1]) ) skipAll = 1; + if( glob_match(pIgnore2, &blob_str(pPath)[nPrefix+1]) ) skipAll = 1; + blob_resize(pPath, origSize); + } + if( skipAll ) return result; + + if( depth==0 ){ + db_multi_exec("DROP TABLE IF EXISTS dscan_temp;" + "CREATE TEMP TABLE dscan_temp(" + " x TEXT PRIMARY KEY %s, y INTEGER)", + filename_collation()); + db_prepare(&ins, + "INSERT OR IGNORE INTO dscan_temp(x, y) SELECT :file, :count" + " WHERE NOT EXISTS(SELECT 1 FROM vfile WHERE" + " pathname GLOB :file || '/*' %s)", filename_collation() + ); + db_prepare(&upd, + "UPDATE OR IGNORE dscan_temp SET y = coalesce(y, 0) + 1" + " WHERE x=:file %s", + filename_collation() + ); + } + depth++; + + zNative = fossil_utf8_to_path(blob_str(pPath), 1); + d = opendir(zNative); if( d ){ while( (pEntry=readdir(d))!=0 ){ + char *zOrigPath; char *zPath; + char *zUtf8; if( pEntry->d_name[0]=='.' ){ - if( !allFlag ) continue; + if( (scanFlags & SCAN_ALL)==0 ) continue; if( pEntry->d_name[1]==0 ) continue; if( pEntry->d_name[1]=='.' && pEntry->d_name[2]==0 ) continue; } - blob_appendf(pPath, "/%s", pEntry->d_name); + zOrigPath = mprintf("%s", blob_str(pPath)); + zUtf8 = fossil_path_to_utf8(pEntry->d_name); + blob_appendf(pPath, "/%s", zUtf8); zPath = blob_str(pPath); - if( file_isdir(zPath)==1 ){ - vfile_scan(vid, pPath, nPrefix, allFlag); - }else if( file_isfile(zPath) && !db_exists(zSql, &zPath[nPrefix+1]) ){ - db_multi_exec("INSERT INTO sfile VALUES(%Q)", &zPath[nPrefix+1]); - } - blob_resize(pPath, origSize); - } - } - closedir(d); + if( glob_match(pIgnore1, &zPath[nPrefix+1]) || + glob_match(pIgnore2, &zPath[nPrefix+1]) ){ + /* do nothing */ +#ifdef _DIRENT_HAVE_D_TYPE + }else if( (pEntry->d_type==DT_UNKNOWN || pEntry->d_type==DT_LNK) + ? (file_wd_isdir(zPath)==1) : (pEntry->d_type==DT_DIR) ){ +#else + }else if( file_wd_isdir(zPath)==1 ){ +#endif + if( (scanFlags & SCAN_NESTED) || !vfile_top_of_checkout(zPath) ){ + char *zSavePath = mprintf("%s", zPath); + int count = vfile_dir_scan(pPath, nPrefix, scanFlags, pIgnore1, + pIgnore2); + db_bind_text(&ins, ":file", &zSavePath[nPrefix+1]); + db_bind_int(&ins, ":count", count); + db_step(&ins); + db_reset(&ins); + fossil_free(zSavePath); + result += count; /* found X normal files? */ + } +#ifdef _DIRENT_HAVE_D_TYPE + }else if( (pEntry->d_type==DT_UNKNOWN || pEntry->d_type==DT_LNK) + ? 
(file_wd_isfile_or_link(zPath)) : (pEntry->d_type==DT_REG) ){ +#else + }else if( file_wd_isfile_or_link(zPath) ){ +#endif + db_bind_text(&upd, ":file", zOrigPath); + db_step(&upd); + db_reset(&upd); + result++; /* found 1 normal file */ + } + fossil_path_free(zUtf8); + blob_resize(pPath, origSize); + fossil_free(zOrigPath); + } + closedir(d); + } + fossil_path_free(zNative); + + depth--; + if( depth==0 ){ + db_finalize(&upd); + db_finalize(&ins); + } + return result; } /* ** Compute an aggregate MD5 checksum over the disk image of every -** file in vid. The file names are part of the checksum. +** file in vid. The file names are part of the checksum. The resulting +** checksum is the same as is expected on the R-card of a manifest. ** ** This function operates differently if the Global.aCommitFile ** variable is not NULL. In that case, the disk image is used for -** each file in aCommitFile[] and the repository image (see -** vfile_aggregate_checksum_repository() is used for all others). +** each file in aCommitFile[] and the repository image +** is used for all others). +** ** Newly added files that are not contained in the repository are -** omitted from the checksum if they are not in Global.aCommitFile. +** omitted from the checksum if they are not in Global.aCommitFile[]. +** +** Newly deleted files are included in the checksum if they are not +** part of Global.aCommitFile[] +** +** Renamed files use their new name if they are in Global.aCommitFile[] +** and their original name if they are not in Global.aCommitFile[] ** ** Return the resulting checksum in blob pOut. */ void vfile_aggregate_checksum_disk(int vid, Blob *pOut){ FILE *in; Stmt q; char zBuf[4096]; db_must_be_within_tree(); - db_prepare(&q, - "SELECT %Q || pathname, pathname, file_is_selected(id), rid FROM vfile" - " WHERE NOT deleted AND vid=%d" - " ORDER BY pathname /*scan*/", + db_prepare(&q, + "SELECT %Q || pathname, pathname, origname, is_selected(id), rid" + " FROM vfile" + " WHERE (NOT deleted OR NOT is_selected(id)) AND vid=%d" + " ORDER BY if_selected(id, pathname, origname) /*scan*/", g.zLocalRoot, vid ); md5sum_init(); while( db_step(&q)==SQLITE_ROW ){ const char *zFullpath = db_column_text(&q, 0); const char *zName = db_column_text(&q, 1); - int isSelected = db_column_int(&q, 2); + int isSelected = db_column_int(&q, 3); if( isSelected ){ md5sum_step_text(zName, -1); - in = fopen(zFullpath,"rb"); - if( in==0 ){ - md5sum_step_text(" 0\n", -1); - continue; - } - fseek(in, 0L, SEEK_END); - sprintf(zBuf, " %ld\n", ftell(in)); - fseek(in, 0L, SEEK_SET); - md5sum_step_text(zBuf, -1); - for(;;){ - int n; - n = fread(zBuf, 1, sizeof(zBuf), in); - if( n<=0 ) break; - md5sum_step_text(zBuf, n); - } - fclose(in); - }else{ - int rid = db_column_int(&q, 3); + if( file_wd_islink(zFullpath) ){ + /* Instead of file content, use link destination path */ + Blob pathBuf; + + sqlite3_snprintf(sizeof(zBuf), zBuf, " %ld\n", + blob_read_link(&pathBuf, zFullpath)); + md5sum_step_text(zBuf, -1); + md5sum_step_text(blob_str(&pathBuf), -1); + blob_reset(&pathBuf); + }else{ + in = fossil_fopen(zFullpath,"rb"); + if( in==0 ){ + md5sum_step_text(" 0\n", -1); + continue; + } + fseek(in, 0L, SEEK_END); + sqlite3_snprintf(sizeof(zBuf), zBuf, " %ld\n", ftell(in)); + fseek(in, 0L, SEEK_SET); + md5sum_step_text(zBuf, -1); + /*printf("%s %s %s",md5sum_current_state(),zName,zBuf); fflush(stdout);*/ + for(;;){ + int n; + n = fread(zBuf, 1, sizeof(zBuf), in); + if( n<=0 ) break; + md5sum_step_text(zBuf, n); + } + fclose(in); + } + }else{ + int rid = 
db_column_int(&q, 4); + const char *zOrigName = db_column_text(&q, 2); char zBuf[100]; Blob file; + if( zOrigName ) zName = zOrigName; if( rid>0 ){ md5sum_step_text(zName, -1); blob_zero(&file); content_get(rid, &file); - sprintf(zBuf, " %d\n", blob_size(&file)); + sqlite3_snprintf(sizeof(zBuf), zBuf, " %d\n", blob_size(&file)); md5sum_step_text(zBuf, -1); md5sum_step_blob(&file); blob_reset(&file); } } } db_finalize(&q); md5sum_finish(pOut); } + +/* +** Write a BLOB into a random filename. Return the name of the file. +*/ +char *write_blob_to_temp_file(Blob *pBlob){ + sqlite3_uint64 r; + char *zOut = 0; + do{ + sqlite3_free(zOut); + sqlite3_randomness(8, &r); + zOut = sqlite3_mprintf("file-%08llx", r); + }while( file_size(zOut)>=0 ); + blob_write_to_file(pBlob, zOut); + return zOut; +} + +/* +** Do a file-by-file comparison of the content of the repository and +** the working check-out on disk. Report any errors. +*/ +void vfile_compare_repository_to_disk(int vid){ + int rc; + Stmt q; + Blob disk, repo; + char *zOut; + + db_must_be_within_tree(); + db_prepare(&q, + "SELECT %Q || pathname, pathname, rid FROM vfile" + " WHERE NOT deleted AND vid=%d AND is_selected(id)" + " ORDER BY if_selected(id, pathname, origname) /*scan*/", + g.zLocalRoot, vid + ); + md5sum_init(); + while( db_step(&q)==SQLITE_ROW ){ + const char *zFullpath = db_column_text(&q, 0); + const char *zName = db_column_text(&q, 1); + int rid = db_column_int(&q, 2); + + blob_zero(&disk); + if( file_wd_islink(zFullpath) ){ + rc = blob_read_link(&disk, zFullpath); + }else{ + rc = blob_read_from_file(&disk, zFullpath); + } + if( rc<0 ){ + fossil_print("ERROR: cannot read file [%s]\n", zFullpath); + blob_reset(&disk); + continue; + } + blob_zero(&repo); + content_get(rid, &repo); + if( blob_size(&repo)!=blob_size(&disk) ){ + fossil_print("ERROR: [%s] is %d bytes on disk but %d in the repository\n", + zName, blob_size(&disk), blob_size(&repo)); + zOut = write_blob_to_temp_file(&repo); + fossil_print("NOTICE: Repository version of [%s] stored in [%s]\n", + zName, zOut); + sqlite3_free(zOut); + blob_reset(&disk); + blob_reset(&repo); + continue; + } + if( blob_compare(&repo, &disk) ){ + fossil_print( + "ERROR: [%s] is different on disk compared to the repository\n", + zName); + zOut = write_blob_to_temp_file(&repo); + fossil_print("NOTICE: Repository version of [%s] stored in [%s]\n", + zName, zOut); + sqlite3_free(zOut); + } + blob_reset(&disk); + blob_reset(&repo); + } + db_finalize(&q); +} /* ** Compute an aggregate MD5 checksum over the repository image of every -** file in vid. The file names are part of the checksum. +** file in vid. The file names are part of the checksum. The resulting +** checksum is suitable for the R-card of a manifest. ** ** Return the resulting checksum in blob pOut. 
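+**
+** For each file, the bytes fed to the MD5 accumulator are, in order:
+** the file name, a single space, the content size in decimal, a
+** newline, and then the file content itself.  As an illustrative
+** example (not an exact byte-for-byte specification), a file "abc.c"
+** holding 123 bytes of text contributes:
+**
+**     abc.c 123\n<the 123 bytes of content>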
*/ void vfile_aggregate_checksum_repository(int vid, Blob *pOut){ Blob file; Stmt q; char zBuf[100]; db_must_be_within_tree(); - - db_prepare(&q, "SELECT pathname, rid FROM vfile" - " WHERE NOT deleted AND rid>0 AND vid=%d" - " ORDER BY pathname /*scan*/", + + db_prepare(&q, "SELECT pathname, origname, rid, is_selected(id)" + " FROM vfile" + " WHERE (NOT deleted OR NOT is_selected(id))" + " AND rid>0 AND vid=%d" + " ORDER BY if_selected(id,pathname,origname) /*scan*/", vid); blob_zero(&file); md5sum_init(); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); - int rid = db_column_int(&q, 1); + const char *zOrigName = db_column_text(&q, 1); + int rid = db_column_int(&q, 2); + int isSelected = db_column_int(&q, 3); + if( zOrigName && !isSelected ) zName = zOrigName; md5sum_step_text(zName, -1); content_get(rid, &file); - sprintf(zBuf, " %d\n", blob_size(&file)); + sqlite3_snprintf(sizeof(zBuf), zBuf, " %d\n", blob_size(&file)); md5sum_step_text(zBuf, -1); + /*printf("%s %s %s",md5sum_current_state(),zName,zBuf); fflush(stdout);*/ md5sum_step_blob(&file); blob_reset(&file); } db_finalize(&q); md5sum_finish(pOut); } /* ** Compute an aggregate MD5 checksum over the repository image of every -** file in manifest vid. The file names are part of the checksum. +** file in manifest vid. The file names are part of the checksum. The +** resulting checksum is suitable for use as the R-card of a manifest. +** ** Return the resulting checksum in blob pOut. ** ** If pManOut is not NULL then fill it with the checksum found in the -** "R" card near the end of the manifest. +** "R" card near the end of the manifest. +** +** In a well-formed manifest, the two checksums computed here, pOut and +** pManOut, should be identical. */ void vfile_aggregate_checksum_manifest(int vid, Blob *pOut, Blob *pManOut){ - int i, fid; - Blob file, mfile; - Manifest m; + int fid; + Blob file; + Blob err; + Manifest *pManifest; + ManifestFile *pFile; char zBuf[100]; blob_zero(pOut); + blob_zero(&err); if( pManOut ){ blob_zero(pManOut); } db_must_be_within_tree(); - content_get(vid, &mfile); - if( manifest_parse(&m, &mfile)==0 ){ - fossil_panic("manifest file (%d) is malformed", vid); - } - for(i=0; i<m.nFile; i++){ - fid = uuid_to_rid(m.aFile[i].zUuid, 0); - md5sum_step_text(m.aFile[i].zName, -1); - content_get(fid, &file); - sprintf(zBuf, " %d\n", blob_size(&file)); + pManifest = manifest_get(vid, CFTYPE_MANIFEST, &err); + if( pManifest==0 ){ + fossil_fatal("manifest file (%d) is malformed:\n%s\n", + vid, blob_str(&err)); + } + manifest_file_rewind(pManifest); + while( (pFile = manifest_file_next(pManifest,0))!=0 ){ + if( pFile->zUuid==0 ) continue; + fid = uuid_to_rid(pFile->zUuid, 0); + md5sum_step_text(pFile->zName, -1); + content_get(fid, &file); + sqlite3_snprintf(sizeof(zBuf), zBuf, " %d\n", blob_size(&file)); md5sum_step_text(zBuf, -1); md5sum_step_blob(&file); blob_reset(&file); } if( pManOut ){ - blob_append(pManOut, m.zRepoCksum, -1); + if( pManifest->zRepoCksum ){ + blob_append(pManOut, pManifest->zRepoCksum, -1); + }else{ + blob_zero(pManOut); + } } - manifest_clear(&m); + manifest_destroy(pManifest); md5sum_finish(pOut); } /* ** COMMAND: test-agg-cksum +** +** Display the aggregate checksum for content computed in several +** different ways. The aggregate checksum is used during "fossil commit" +** to double-check that the information about to be committed to the +** repository exactly matches the information currently in the check-out. 
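+**
+** "Several different ways" presumably refers to the routines defined
+** above: from the files on disk, from the repository blobs, and from
+** the manifest itself.  In a consistent check-out all of those values,
+** together with the R-card stored in the manifest, should agree.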
*/ void test_agg_cksum_cmd(void){ int vid; Blob hash, hash2; db_must_be_within_tree(); Index: src/wiki.c ================================================================== --- src/wiki.c +++ src/wiki.c @@ -16,22 +16,22 @@ ** ******************************************************************************* ** ** This file contains code to do formatting of wiki text. */ +#include "config.h" #include <assert.h> #include <ctype.h> -#include "config.h" #include "wiki.h" /* ** Return true if the input string is a well-formed wiki page name. ** ** Well-formed wiki page names do not begin or end with whitespace, ** and do not contain tabs or other control characters and do not ** contain more than a single space character in a row. Well-formed -** names must be between 3 and 100 chracters in length, inclusive. +** names must be between 1 and 100 characters in length, inclusive. */ int wiki_name_is_wellformed(const unsigned char *z){ int i; if( z[0]<=0x20 ){ return 0; @@ -39,24 +39,24 @@ for(i=1; z[i]; i++){ if( z[i]<0x20 ) return 0; if( z[i]==0x20 && z[i-1]==0x20 ) return 0; } if( z[i-1]==' ' ) return 0; - if( i<3 || i>100 ) return 0; + if( i<1 || i>100 ) return 0; return 1; } /* ** Output rules for well-formed wiki pages */ static void well_formed_wiki_name_rules(void){ @ <ul> - @ <li> Must not begin or end with a space. + @ <li> Must not begin or end with a space.</li> @ <li> Must not contain any control characters, including tab or - @ newline. - @ <li> Must not have two or more spaces in a row internally. - @ <li> Must be between 3 and 100 characters in length. + @ newline.</li> + @ <li> Must not have two or more spaces in a row internally.</li> + @ <li> Must be between 1 and 100 characters in length.</li> @ </ul> } /* ** Check a wiki name. If it is not well-formed, then issue an error @@ -63,12 +63,12 @@ ** and return true. If it is well-formed, return false. */ static int check_name(const char *z){ if( !wiki_name_is_wellformed((const unsigned char *)z) ){ style_header("Wiki Page Name Error"); - @ The wiki name "<b>%h(z)</b>" is not well-formed. Rules for - @ wiki page names: + @ The wiki name "<span class="wikiError">%h(z)</span>" is not well-formed. + @ Rules for wiki page names: well_formed_wiki_name_rules(); style_footer(); return 1; } return 0; @@ -76,41 +76,229 @@ /* ** WEBPAGE: home ** WEBPAGE: index ** WEBPAGE: not_found +** +** The /home, /index, and /not_found pages all redirect to the homepage +** configured by the administrator. */ void home_page(void){ char *zPageName = db_get("project-name",0); + char *zIndexPage = db_get("index-page",0); login_check_credentials(); - if( !g.okRdWiki ){ - cgi_redirectf("%s/login?g=%s/home", g.zBaseURL, g.zBaseURL); + if( zIndexPage ){ + const char *zPathInfo = P("PATH_INFO"); + while( zIndexPage[0]=='/' ) zIndexPage++; + while( zPathInfo[0]=='/' ) zPathInfo++; + if( fossil_strcmp(zIndexPage, zPathInfo)==0 ) zIndexPage = 0; + } + if( zIndexPage ){ + cgi_redirectf("%s/%s", g.zTop, zIndexPage); + } + if( !g.perm.RdWiki ){ + cgi_redirectf("%s/login?g=%s/home", g.zTop, g.zTop); } if( zPageName ){ login_check_credentials(); g.zExtra = zPageName; - cgi_set_parameter_nocopy("name", g.zExtra); + cgi_set_parameter_nocopy("name", g.zExtra, 1); g.isHome = 1; wiki_page(); return; } style_header("Home"); @ <p>This is a stub home-page for the project. @ To fill in this page, first go to - @ <a href="%s(g.zBaseURL)/setup_config">setup/config</a> + @ %z(href("%R/setup_config"))setup/config</a> @ and establish a "Project Name". 
Then create a @ wiki page with that name. The content of that wiki page - @ will be displayed in place of this message. + @ will be displayed in place of this message.</p> style_footer(); } /* ** Return true if the given pagename is the name of the sandbox */ static int is_sandbox(const char *zPagename){ - return strcasecmp(zPagename,"sandbox")==0 || - strcasecmp(zPagename,"sand box")==0; + return fossil_stricmp(zPagename,"sandbox")==0 || + fossil_stricmp(zPagename,"sand box")==0; +} + +/* +** Only allow certain mimetypes through. +** All others become "text/x-fossil-wiki" +*/ +const char *wiki_filter_mimetypes(const char *zMimetype){ + if( zMimetype!=0 && + ( fossil_strcmp(zMimetype, "text/x-markdown")==0 + || fossil_strcmp(zMimetype, "text/plain")==0 ) + ){ + return zMimetype; + } + return "text/x-fossil-wiki"; +} + +/* +** Render wiki text according to its mimetype. +** +** text/x-fossil-wiki Fossil wiki +** text/x-markdown Markdown +** anything else... Plain text +*/ +void wiki_render_by_mimetype(Blob *pWiki, const char *zMimetype){ + if( zMimetype==0 || fossil_strcmp(zMimetype, "text/x-fossil-wiki")==0 ){ + wiki_convert(pWiki, 0, 0); + }else if( fossil_strcmp(zMimetype, "text/x-markdown")==0 ){ + Blob tail = BLOB_INITIALIZER; + markdown_to_html(pWiki, 0, &tail); + @ %s(blob_str(&tail)) + blob_reset(&tail); + }else{ + @ <pre> + @ %h(blob_str(pWiki)) + @ </pre> + } +} + +/* +** WEBPAGE: md_rules +** +** Show a summary of the Markdown wiki formatting rules. +*/ +void markdown_rules_page(void){ + Blob x; + int fTxt = P("txt")!=0; + style_header("Markdown Formatting Rules"); + if( fTxt ){ + style_submenu_element("Formatted", "Formatted", "%R/md_rules"); + }else{ + style_submenu_element("Plain-Text", "Plain-Text", "%R/md_rules?txt=1"); + } + blob_init(&x, builtin_text("markdown.md"), -1); + wiki_render_by_mimetype(&x, fTxt ? "text/plain" : "text/x-markdown"); + blob_reset(&x); + style_footer(); +} + +/* +** Returns non-zero if moderation is required for wiki changes and wiki +** attachments. +*/ +int wiki_need_moderation( + int localUser /* Are we being called for a local interactive user? */ +){ + /* + ** If the FOSSIL_FORCE_WIKI_MODERATION variable is set, *ALL* changes for + ** wiki pages will be required to go through moderation (even those performed + ** by the local interactive user via the command line). This can be useful + ** for local (or remote) testing of the moderation subsystem and its impact + ** on the contents and status of wiki pages. + */ + if( fossil_getenv("FOSSIL_FORCE_WIKI_MODERATION")!=0 ){ + return 1; + } + if( localUser ){ + return 0; + } + return g.perm.ModWiki==0 && db_get_boolean("modreq-wiki",0)==1; +} + +/* Standard submenu items for wiki pages */ +#define W_SRCH 0x00001 +#define W_LIST 0x00002 +#define W_HELP 0x00004 +#define W_NEW 0x00008 +#define W_BLOG 0x00010 +#define W_SANDBOX 0x00020 +#define W_ALL 0x0001f +#define W_ALL_BUT(x) (W_ALL&~(x)) + +/* +** Add some standard submenu elements for wiki screens. 
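+**
+** For example, wiki_helppage() below calls this routine with
+** W_ALL_BUT(W_HELP), offering every standard button except "Help"
+** on the wiki help screen.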
+*/ +static void wiki_standard_submenu(unsigned int ok){ + if( (ok & W_SRCH)!=0 && search_restrict(SRCH_WIKI)!=0 ){ + style_submenu_element("Search","Search","%R/wikisrch"); + } + if( (ok & W_LIST)!=0 ){ + style_submenu_element("List","List","%R/wcontent"); + } + if( (ok & W_HELP)!=0 ){ + style_submenu_element("Help","Help","%R/wikihelp"); + } + if( (ok & W_NEW)!=0 && g.anon.NewWiki ){ + style_submenu_element("New","New","%R/wikinew"); + } +#if 0 + if( (ok & W_BLOG)!=0 +#endif + if( (ok & W_SANDBOX)!=0 ){ + style_submenu_element("Sandbox", "Sandbox", "%R/wiki?name=Sandbox"); + } +} + +/* +** WEBPAGE: wikihelp +** A generic landing page for wiki. +*/ +void wiki_helppage(void){ + login_check_credentials(); + if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } + style_header("Wiki Help"); + wiki_standard_submenu(W_ALL_BUT(W_HELP)); + @ <h2>Wiki Links</h2> + @ <ul> + { char *zWikiHomePageName = db_get("index-page",0); + if( zWikiHomePageName ){ + @ <li> %z(href("%R%s",zWikiHomePageName)) + @ %h(zWikiHomePageName)</a> wiki home page.</li> + } + } + { char *zHomePageName = db_get("project-name",0); + if( zHomePageName ){ + @ <li> %z(href("%R/wiki?name=%t",zHomePageName)) + @ %h(zHomePageName)</a> project home page.</li> + } + } + @ <li> %z(href("%R/timeline?y=w"))Recent changes</a> to wiki pages.</li> + @ <li> Formatting rules for %z(href("%R/wiki_rules"))Fossil Wiki</a> and for + @ %z(href("%R/md_rules"))Markdown Wiki</a>.</li> + @ <li> Use the %z(href("%R/wiki?name=Sandbox"))Sandbox</a> + @ to experiment.</li> + if( g.anon.NewWiki ){ + @ <li> Create a %z(href("%R/wikinew"))new wiki page</a>.</li> + if( g.anon.Write ){ + @ <li> Create a %z(href("%R/technoteedit"))new tech-note</a>.</li> + } + } + @ <li> %z(href("%R/wcontent"))List of All Wiki Pages</a> + @ available on this server.</li> + if( g.anon.ModWiki ){ + @ <li> %z(href("%R/modreq"))Tend to pending moderation requests</a></li> + } + if( search_restrict(SRCH_WIKI)!=0 ){ + @ <li> %z(href("%R/wikisrch"))Search</a> for wiki pages containing key + @ words</li> + } + @ </ul> + style_footer(); + return; +} + +/* +** WEBPAGE: wikisrch +** Usage: /wikisrch?s=PATTERN +** +** Full-text search of all current wiki text +*/ +void wiki_srchpage(void){ + login_check_credentials(); + style_header("Wiki Search"); + wiki_standard_submenu(W_HELP|W_LIST|W_SANDBOX); + search_screen(SRCH_WIKI, 0); + style_footer(); } /* ** WEBPAGE: wiki ** URL: /wiki?name=PAGENAME @@ -117,219 +305,250 @@ */ void wiki_page(void){ char *zTag; int rid = 0; int isSandbox; + char *zUuid; + unsigned submenuFlags = W_ALL; Blob wiki; - Manifest m; + Manifest *pWiki = 0; const char *zPageName; + const char *zMimetype = 0; char *zBody = mprintf("%s","<i>Empty Page</i>"); - Stmt q; - int cnt = 0; login_check_credentials(); - if( !g.okRdWiki ){ login_needed(); return; } + if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } zPageName = P("name"); if( zPageName==0 ){ - style_header("Wiki"); - @ <ul> - { char *zHomePageName = db_get("project-name",0); - if( zHomePageName ){ - @ <li> <a href="%s(g.zBaseURL)/wiki?name=%t(zHomePageName)"> - @ %h(zHomePageName)</a> wiki home page.</li> - } - } - @ <li> <a href="%s(g.zBaseURL)/timeline?y=w">Recent changes</a> to wiki - @ pages. 
</li> - @ <li> <a href="%s(g.zBaseURL)/wiki_rules">Formatting rules</a> for - @ wiki.</li> - @ <li> Use the <a href="%s(g.zBaseURL)/wiki?name=Sandbox">Sandbox</a> - @ to experiment.</li> - if( g.okNewWiki ){ - @ <li> Create a <a href="%s(g.zBaseURL)/wikinew">new wiki page</a>.</li> - } - @ <li> <a href="%s(g.zBaseURL)/wcontent">List of All Wiki Pages</a> - @ available on this server.</li> - @ <li> <form method="GET" action="%s(g.zBaseURL)/wfind"> - @ Search wiki titles: <input type="text" name="title"/> - @   <input type="submit" /> - @ </li> - @ </ul> - style_footer(); + if( search_restrict(SRCH_WIKI)!=0 ){ + wiki_srchpage(); + }else{ + wiki_helppage(); + } return; } if( check_name(zPageName) ) return; isSandbox = is_sandbox(zPageName); if( isSandbox ){ + submenuFlags &= ~W_SANDBOX; zBody = db_get("sandbox",zBody); + zMimetype = db_get("sandbox-mimetype","text/x-fossil-wiki"); + rid = 0; }else{ - zTag = mprintf("wiki-%s", zPageName); - rid = db_int(0, - "SELECT rid FROM tagxref" - " WHERE tagid=(SELECT tagid FROM tag WHERE tagname=%Q)" - " ORDER BY mtime DESC", zTag - ); - free(zTag); - memset(&m, 0, sizeof(m)); - blob_zero(&m.content); - if( rid ){ - Blob content; - content_get(rid, &content); - manifest_parse(&m, &content); - if( m.type==CFTYPE_WIKI && m.zWiki ){ - while( isspace(m.zWiki[0]) ) m.zWiki++; - if( m.zWiki[0] ) zBody = m.zWiki; - } - } - } + const char *zUuid = P("id"); + if( zUuid==0 || (rid = symbolic_name_to_rid(zUuid,"w"))==0 ){ + zTag = mprintf("wiki-%s", zPageName); + rid = db_int(0, + "SELECT rid FROM tagxref" + " WHERE tagid=(SELECT tagid FROM tag WHERE tagname=%Q)" + " ORDER BY mtime DESC", zTag + ); + free(zTag); + } + pWiki = manifest_get(rid, CFTYPE_WIKI, 0); + if( pWiki ){ + zBody = pWiki->zWiki; + zMimetype = pWiki->zMimetype; + } + } + zMimetype = wiki_filter_mimetypes(zMimetype); if( !g.isHome ){ - if( (rid && g.okWrWiki) || (!rid && g.okNewWiki) ){ - style_submenu_element("Edit", "Edit Wiki Page", "%s/wikiedit?name=%T", - g.zTop, zPageName); + if( rid ){ + style_submenu_element("Diff", "Last change", + "%R/wdiff?name=%T&a=%d", zPageName, rid); + zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); + style_submenu_element("Details", "Details", + "%R/info/%s", zUuid); + } + if( (rid && g.anon.WrWiki) || (!rid && g.anon.NewWiki) ){ + if( db_get_boolean("wysiwyg-wiki", 0) ){ + style_submenu_element("Edit", "Edit Wiki Page", + "%s/wikiedit?name=%T&wysiwyg=1", + g.zTop, zPageName); + }else{ + style_submenu_element("Edit", "Edit Wiki Page", + "%s/wikiedit?name=%T", + g.zTop, zPageName); + } } - if( rid && g.okApndWiki && g.okAttach ){ + if( rid && g.anon.ApndWiki && g.anon.Attach ){ style_submenu_element("Attach", "Add An Attachment", "%s/attachadd?page=%T&from=%s/wiki%%3fname=%T", g.zTop, zPageName, g.zTop, zPageName); } - if( rid && g.okApndWiki ){ - style_submenu_element("Append", "Add A Comment", "%s/wikiappend?name=%T", - g.zTop, zPageName); + if( rid && g.anon.ApndWiki ){ + style_submenu_element("Append", "Add A Comment", + "%s/wikiappend?name=%T&mimetype=%s", + g.zTop, zPageName, zMimetype); } - if( g.okHistory ){ + if( g.perm.Hyperlink ){ style_submenu_element("History", "History", "%s/whistory?name=%T", g.zTop, zPageName); } } - style_header(zPageName); + style_set_current_page("%T?name=%T", g.zPath, zPageName); + style_header("%s", zPageName); + wiki_standard_submenu(submenuFlags); blob_init(&wiki, zBody, -1); - wiki_convert(&wiki, 0, 0); + wiki_render_by_mimetype(&wiki, zMimetype); blob_reset(&wiki); - - db_prepare(&q, - "SELECT 
datetime(mtime,'localtime'), filename, user" - " FROM attachment" - " WHERE isLatest AND src!='' AND target=%Q" - " ORDER BY mtime DESC", - zPageName); - while( db_step(&q)==SQLITE_ROW ){ - const char *zDate = db_column_text(&q, 0); - const char *zFile = db_column_text(&q, 1); - const char *zUser = db_column_text(&q, 2); - if( cnt==0 ){ - @ <hr><h2>Attachments:</h2> - @ <ul> - } - cnt++; - if( g.okHistory ){ - @ <li><a href="%s(g.zTop)/attachview?page=%s(zPageName)&file=%t(zFile)"> + attachment_list(zPageName, "<hr /><h2>Attachments:</h2><ul>"); + manifest_destroy(pWiki); + style_footer(); +} + +/* +** Write a wiki artifact into the repository +*/ +static void wiki_put(Blob *pWiki, int parent, int needMod){ + int nrid; + if( !needMod ){ + nrid = content_put_ex(pWiki, 0, 0, 0, 0); + if( parent) content_deltify(parent, nrid, 0); + }else{ + nrid = content_put_ex(pWiki, 0, 0, 0, 1); + moderation_table_create(); + db_multi_exec("INSERT INTO modreq(objid) VALUES(%d)", nrid); + } + db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nrid); + db_multi_exec("INSERT OR IGNORE INTO unclustered VALUES(%d);", nrid); + manifest_crosslink(nrid, pWiki, MC_NONE); +} + +/* +** Formal names and common names for the various wiki styles. +*/ +static const char *const azStyles[] = { + "text/x-fossil-wiki", "Fossil Wiki", + "text/x-markdown", "Markdown", + "text/plain", "Plain Text" +}; + +/* +** Output a selection box from which the user can select the +** wiki mimetype. +*/ +void mimetype_option_menu(const char *zMimetype){ + unsigned i; + @ <select name="mimetype" size="1"> + for(i=0; i<sizeof(azStyles)/sizeof(azStyles[0]); i+=2){ + if( fossil_strcmp(zMimetype,azStyles[i])==0 ){ + @ <option value="%s(azStyles[i])" selected>%s(azStyles[i+1])</option> }else{ - @ <li> - } - @ %h(zFile)</a> add by %h(zUser) on - hyperlink_to_date(zDate, "."); - if( g.okWrWiki && g.okAttach ){ - @ [<a href="%s(g.zTop)/attachdelete?page=%s(zPageName)&file=%t(zFile)&from=%s(g.zTop)/wiki%%3fname=%s(zPageName)">delete</a>] - } - } - if( cnt ){ - @ </ul> - } - db_finalize(&q); - - if( !isSandbox ){ - manifest_clear(&m); - } - style_footer(); + @ <option value="%s(azStyles[i])">%s(azStyles[i+1])</option> + } + } + @ </select> +} + +/* +** Given a mimetype, return its common name. +*/ +static const char *mimetype_common_name(const char *zMimetype){ + int i; + for(i=4; i>=2; i-=2){ + if( zMimetype && fossil_strcmp(zMimetype, azStyles[i])==0 ){ + return azStyles[i+1]; + } + } + return azStyles[1]; } /* ** WEBPAGE: wikiedit ** URL: /wikiedit?name=PAGENAME +** +** Edit a wiki page. 
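+**
+** Among the query parameters read by the code below:
+**
+**    name=PAGENAME     Name of the page being edited
+**    w=TEXT            Replacement page text, on form submission
+**    mimetype=TYPE     Markup style, filtered by wiki_filter_mimetypes()
+**    wysiwyg=1         Edit with the WYSIWYG editor instead of raw markup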
*/ void wikiedit_page(void){ char *zTag; int rid = 0; int isSandbox; Blob wiki; - Manifest m; + Manifest *pWiki = 0; const char *zPageName; - char *zHtmlPageName; int n; const char *z; char *zBody = (char*)P("w"); + const char *zMimetype = wiki_filter_mimetypes(P("mimetype")); + int isWysiwyg = P("wysiwyg")!=0; + int goodCaptcha = 1; + if( P("edit-wysiwyg")!=0 ){ isWysiwyg = 1; zBody = 0; } + if( P("edit-markup")!=0 ){ isWysiwyg = 0; zBody = 0; } if( zBody ){ - zBody = mprintf("%s", zBody); + if( isWysiwyg ){ + Blob body; + blob_zero(&body); + htmlTidy(zBody, &body); + zBody = blob_str(&body); + }else{ + zBody = mprintf("%s", zBody); + } } login_check_credentials(); zPageName = PD("name",""); if( check_name(zPageName) ) return; isSandbox = is_sandbox(zPageName); if( isSandbox ){ - if( !g.okWrWiki ){ - login_needed(); + if( !g.perm.WrWiki ){ + login_needed(g.anon.WrWiki); return; } if( zBody==0 ){ zBody = db_get("sandbox",""); + zMimetype = db_get("sandbox-mimetype","text/x-fossil-wiki"); } }else{ zTag = mprintf("wiki-%s", zPageName); - rid = db_int(0, + rid = db_int(0, "SELECT rid FROM tagxref" " WHERE tagid=(SELECT tagid FROM tag WHERE tagname=%Q)" " ORDER BY mtime DESC", zTag ); free(zTag); - if( (rid && !g.okWrWiki) || (!rid && !g.okNewWiki) ){ - login_needed(); + if( (rid && !g.perm.WrWiki) || (!rid && !g.perm.NewWiki) ){ + login_needed(rid ? g.anon.WrWiki : g.anon.NewWiki); return; } - memset(&m, 0, sizeof(m)); - blob_zero(&m.content); - if( rid && zBody==0 ){ - Blob content; - content_get(rid, &content); - manifest_parse(&m, &content); - if( m.type==CFTYPE_WIKI ){ - zBody = m.zWiki; - } + if( zBody==0 && (pWiki = manifest_get(rid, CFTYPE_WIKI, 0))!=0 ){ + zBody = pWiki->zWiki; + zMimetype = pWiki->zMimetype; } } - if( P("submit")!=0 && zBody!=0 ){ + if( P("submit")!=0 && zBody!=0 + && (goodCaptcha = captcha_is_correct()) + ){ char *zDate; Blob cksum; - int nrid; blob_zero(&wiki); db_begin_transaction(); if( isSandbox ){ db_set("sandbox",zBody,0); + db_set("sandbox-mimetype",zMimetype,0); }else{ login_verify_csrf_secret(); - zDate = db_text(0, "SELECT datetime('now')"); - zDate[10] = 'T'; + zDate = date_in_standard_format("now"); blob_appendf(&wiki, "D %s\n", zDate); free(zDate); blob_appendf(&wiki, "L %F\n", zPageName); + if( fossil_strcmp(zMimetype,"text/x-fossil-wiki")!=0 ){ + blob_appendf(&wiki, "N %s\n", zMimetype); + } if( rid ){ char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); blob_appendf(&wiki, "P %s\n", zUuid); free(zUuid); } - if( g.zLogin ){ - blob_appendf(&wiki, "U %F\n", g.zLogin); + if( !login_is_nobody() ){ + blob_appendf(&wiki, "U %F\n", login_name()); } blob_appendf(&wiki, "W %d\n%s\n", strlen(zBody), zBody); md5sum_blob(&wiki, &cksum); blob_appendf(&wiki, "Z %b\n", &cksum); blob_reset(&cksum); - nrid = content_put(&wiki, 0, 0); - db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nrid); - manifest_crosslink(nrid, &wiki); - blob_reset(&wiki); - content_deltify(rid, nrid, 0); + wiki_put(&wiki, 0, wiki_need_moderation(0)); } db_end_transaction(0); cgi_redirectf("wiki?name=%T", zPageName); } if( P("cancel")!=0 ){ @@ -337,38 +556,68 @@ return; } if( zBody==0 ){ zBody = mprintf("<i>Empty Page</i>"); } - zHtmlPageName = mprintf("Edit: %s", zPageName); - style_header(zHtmlPageName); + style_set_current_page("%T?name=%T", g.zPath, zPageName); + style_header("Edit: %s", zPageName); + if( !goodCaptcha ){ + @ <p class="generalError">Error: Incorrect security code.</p> + } + blob_zero(&wiki); + blob_append(&wiki, zBody, -1); if( P("preview")!=0 ){ - 
blob_zero(&wiki); - blob_append(&wiki, zBody, -1); - @ Preview:<hr> - wiki_convert(&wiki, 0, 0); - @ <hr> + @ Preview:<hr /> + wiki_render_by_mimetype(&wiki, zMimetype); + @ <hr /> blob_reset(&wiki); } for(n=2, z=zBody; z[0]; z++){ if( z[0]=='\n' ) n++; } if( n<20 ) n = 20; - if( n>40 ) n = 40; - @ <form method="POST" action="%s(g.zBaseURL)/wikiedit"> + if( n>30 ) n = 30; + if( !isWysiwyg ){ + /* Traditional markup-only editing */ + form_begin(0, "%R/wikiedit"); + @ <div>Markup style: + mimetype_option_menu(zMimetype); + @ <br /><textarea name="w" class="wikiedit" cols="80" + @ rows="%d(n)" wrap="virtual">%h(zBody)</textarea> + @ <br /> + if( db_get_boolean("wysiwyg-wiki", 0) ){ + @ <input type="submit" name="edit-wysiwyg" value="Wysiwyg Editor" + @ onclick='return confirm("Switching to WYSIWYG-mode\nwill erase your markup\nedits. Continue?")' /> + } + @ <input type="submit" name="preview" value="Preview Your Changes" /> + }else{ + /* Wysiwyg editing */ + Blob html, temp; + form_begin("onsubmit='wysiwygSubmit()'", "%R/wikiedit"); + @ <div> + @ <input type="hidden" name="wysiwyg" value="1" /> + blob_zero(&temp); + wiki_convert(&wiki, &temp, 0); + blob_zero(&html); + htmlTidy(blob_str(&temp), &html); + blob_reset(&temp); + wysiwygEditor("w", blob_str(&html), 60, n); + blob_reset(&html); + @ <br /> + @ <input type="submit" name="edit-markup" value="Markup Editor" + @ onclick='return confirm("Switching to markup-mode\nwill erase your WYSIWYG\nedits. Continue?")' /> + } login_insert_csrf_secret(); - @ <input type="hidden" name="name" value="%h(zPageName)"> - @ <textarea name="w" class="wikiedit" cols="80" - @ rows="%d(n)" wrap="virtual">%h(zBody)</textarea> - @ <br> - @ <input type="submit" name="preview" value="Preview Your Changes"> - @ <input type="submit" name="submit" value="Apply These Changes"> - @ <input type="submit" name="cancel" value="Cancel"> + @ <input type="submit" name="submit" value="Apply These Changes" /> + @ <input type="hidden" name="name" value="%h(zPageName)" /> + @ <input type="submit" name="cancel" value="Cancel" + @ onclick='confirm("Abandon your changes?")' /> + @ </div> + captcha_generate(0); @ </form> - if( !isSandbox ){ - manifest_clear(&m); - } + manifest_destroy(pWiki); + blob_reset(&wiki); style_footer(); } /* ** WEBPAGE: wikinew @@ -377,77 +626,107 @@ ** Prompt the user to enter the name of a new wiki page. Then redirect ** to the wikiedit screen for that new page. 
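+**
+** The optional "mimetype" query parameter selects the markup style for
+** the new page; it is filtered through wiki_filter_mimetypes() below
+** before being passed along to /wikiedit.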
*/ void wikinew_page(void){ const char *zName; + const char *zMimetype; login_check_credentials(); - if( !g.okNewWiki ){ - login_needed(); + if( !g.perm.NewWiki ){ + login_needed(g.anon.NewWiki); return; - } + } zName = PD("name",""); + zMimetype = wiki_filter_mimetypes(P("mimetype")); if( zName[0] && wiki_name_is_wellformed((const unsigned char *)zName) ){ - cgi_redirectf("wikiedit?name=%T", zName); + if( fossil_strcmp(zMimetype,"text/x-fossil-wiki")==0 + && db_get_boolean("wysiwyg-wiki", 0) + ){ + cgi_redirectf("wikiedit?name=%T&wysiwyg=1", zName); + }else{ + cgi_redirectf("wikiedit?name=%T&mimetype=%s", zName, zMimetype); + } } style_header("Create A New Wiki Page"); - @ <p>Rules for wiki page names: + wiki_standard_submenu(W_ALL_BUT(W_NEW)); + @ <p>Rules for wiki page names:</p> well_formed_wiki_name_rules(); - @ </p> - @ <form method="POST" action="%s(g.zBaseURL)/wikinew"> + form_begin(0, "%R/wikinew"); @ <p>Name of new wiki page: - @ <input type="text" width="35" name="name" value="%h(zName)"> - @ <input type="submit" value="Create"> + @ <input style="width: 35;" type="text" name="name" value="%h(zName)" /><br /> + @ Markup style: + mimetype_option_menu("text/x-fossil-wiki"); + @ <br /><input type="submit" value="Create" /> @ </p></form> if( zName[0] ){ - @ <p><b><font color="red"> - @ "%h(zName)" is not a valid wiki page name!</font></b></p> + @ <p><span class="wikiError"> + @ "%h(zName)" is not a valid wiki page name!</span></p> } style_footer(); } /* ** Append the wiki text for an remark to the end of the given BLOB. */ -static void appendRemark(Blob *p){ +static void appendRemark(Blob *p, const char *zMimetype){ char *zDate; const char *zUser; const char *zRemark; char *zId; zDate = db_text(0, "SELECT datetime('now')"); - zId = db_text(0, "SELECT lower(hex(randomblob(8)))"); - blob_appendf(p, "\n\n<hr><div id=\"%s\"><i>On %s UTC %h", - zId, zDate, g.zLogin); - free(zDate); + zRemark = PD("r",""); zUser = PD("u",g.zLogin); - if( zUser[0] && strcmp(zUser,g.zLogin) ){ - blob_appendf(p, " (claiming to be %h)", zUser); + if( fossil_strcmp(zMimetype, "text/x-fossil-wiki")==0 ){ + zId = db_text(0, "SELECT lower(hex(randomblob(8)))"); + blob_appendf(p, "\n\n<hr><div id=\"%s\"><i>On %s UTC %h", + zId, zDate, login_name()); + if( zUser[0] && fossil_strcmp(zUser,login_name()) ){ + blob_appendf(p, " (claiming to be %h)", zUser); + } + blob_appendf(p, " added:</i><br />\n%s</div id=\"%s\">", zRemark, zId); + }else if( fossil_strcmp(zMimetype, "text/x-markdown")==0 ){ + blob_appendf(p, "\n\n------\n*On %s UTC %h", zDate, login_name()); + if( zUser[0] && fossil_strcmp(zUser,login_name()) ){ + blob_appendf(p, " (claiming to be %h)", zUser); + } + blob_appendf(p, " added:*\n\n%s\n", zRemark); + }else{ + blob_appendf(p, "\n\n------------------------------------------------\n" + "On %s UTC %s", zDate, login_name()); + if( zUser[0] && fossil_strcmp(zUser,login_name()) ){ + blob_appendf(p, " (claiming to be %s)", zUser); + } + blob_appendf(p, " added:\n\n%s\n", zRemark); } - zRemark = PD("r",""); - blob_appendf(p, " added:</i><br />\n%s</div id=\"%s\">", zRemark, zId); + fossil_free(zDate); } /* ** WEBPAGE: wikiappend -** URL: /wikiappend?name=PAGENAME +** URL: /wikiappend?name=PAGENAME&mimetype=MIMETYPE +** +** Append text to the end of a wiki page. 
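+**
+** The appended remark is wrapped by appendRemark() above in a form that
+** matches MIMETYPE: an <hr>/<div> block for Fossil wiki, a "------"
+** rule for Markdown, and a dashed separator line for plain text.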
*/ void wikiappend_page(void){ char *zTag; int rid = 0; int isSandbox; const char *zPageName; - char *zHtmlPageName; const char *zUser; + const char *zMimetype; + int goodCaptcha = 1; + const char *zFormat; login_check_credentials(); zPageName = PD("name",""); + zMimetype = wiki_filter_mimetypes(P("mimetype")); if( check_name(zPageName) ) return; isSandbox = is_sandbox(zPageName); if( !isSandbox ){ zTag = mprintf("wiki-%s", zPageName); - rid = db_int(0, + rid = db_int(0, "SELECT rid FROM tagxref" " WHERE tagid=(SELECT tagid FROM tag WHERE tagname=%Q)" " ORDER BY mtime DESC", zTag ); free(zTag); @@ -454,92 +733,95 @@ if( !rid ){ fossil_redirect_home(); return; } } - if( !g.okApndWiki ){ - login_needed(); + if( !g.perm.ApndWiki ){ + login_needed(g.anon.ApndWiki); return; } - if( P("submit")!=0 && P("r")!=0 && P("u")!=0 ){ + if( P("submit")!=0 && P("r")!=0 && P("u")!=0 + && (goodCaptcha = captcha_is_correct()) + ){ char *zDate; Blob cksum; - int nrid; Blob body; - Blob content; Blob wiki; - Manifest m; + Manifest *pWiki = 0; blob_zero(&body); if( isSandbox ){ - blob_appendf(&body, db_get("sandbox","")); - appendRemark(&body); + blob_append(&body, db_get("sandbox",""), -1); + appendRemark(&body, zMimetype); db_set("sandbox", blob_str(&body), 0); }else{ login_verify_csrf_secret(); - content_get(rid, &content); - manifest_parse(&m, &content); - if( m.type==CFTYPE_WIKI ){ - blob_append(&body, m.zWiki, -1); + pWiki = manifest_get(rid, CFTYPE_WIKI, 0); + if( pWiki ){ + blob_append(&body, pWiki->zWiki, -1); + manifest_destroy(pWiki); } - manifest_clear(&m); blob_zero(&wiki); db_begin_transaction(); - zDate = db_text(0, "SELECT datetime('now')"); - zDate[10] = 'T'; + zDate = date_in_standard_format("now"); blob_appendf(&wiki, "D %s\n", zDate); blob_appendf(&wiki, "L %F\n", zPageName); + if( fossil_strcmp(zMimetype, "text/x-fossil-wiki")!=0 ){ + blob_appendf(&wiki, "N %s\n", zMimetype); + } if( rid ){ char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); blob_appendf(&wiki, "P %s\n", zUuid); free(zUuid); } - if( g.zLogin ){ - blob_appendf(&wiki, "U %F\n", g.zLogin); + if( !login_is_nobody() ){ + blob_appendf(&wiki, "U %F\n", login_name()); } - appendRemark(&body); + appendRemark(&body, zMimetype); blob_appendf(&wiki, "W %d\n%s\n", blob_size(&body), blob_str(&body)); md5sum_blob(&wiki, &cksum); blob_appendf(&wiki, "Z %b\n", &cksum); blob_reset(&cksum); - nrid = content_put(&wiki, 0, 0); - db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nrid); - manifest_crosslink(nrid, &wiki); - blob_reset(&wiki); - content_deltify(rid, nrid, 0); + wiki_put(&wiki, rid, wiki_need_moderation(0)); db_end_transaction(0); } cgi_redirectf("wiki?name=%T", zPageName); } if( P("cancel")!=0 ){ cgi_redirectf("wiki?name=%T", zPageName); return; } - zHtmlPageName = mprintf("Append Comment To: %s", zPageName); - style_header(zHtmlPageName); + style_set_current_page("%T?name=%T", g.zPath, zPageName); + style_header("Append Comment To: %s", zPageName); + if( !goodCaptcha ){ + @ <p class="generalError">Error: Incorrect security code.</p> + } if( P("preview")!=0 ){ Blob preview; blob_zero(&preview); - appendRemark(&preview); + appendRemark(&preview, zMimetype); @ Preview:<hr> - wiki_convert(&preview, 0, 0); + wiki_render_by_mimetype(&preview, zMimetype); @ <hr> blob_reset(&preview); } zUser = PD("u", g.zLogin); - @ <form method="POST" action="%s(g.zBaseURL)/wikiappend"> + form_begin(0, "%R/wikiappend"); login_insert_csrf_secret(); - @ <input type="hidden" name="name" value="%h(zPageName)"> + @ <input 
type="hidden" name="name" value="%h(zPageName)" /> + @ <input type="hidden" name="mimetype" value="%h(zMimetype)" /> @ Your Name: - @ <input type="text" name="u" size="20" value="%h(zUser)"><br> - @ Comment to append:<br> - @ <textarea name="r" class="wikiedit" cols="80" + @ <input type="text" name="u" size="20" value="%h(zUser)" /><br /> + zFormat = mimetype_common_name(zMimetype); + @ Comment to append (formatted as %s(zFormat)):<br /> + @ <textarea name="r" class="wikiedit" cols="80" @ rows="10" wrap="virtual">%h(PD("r",""))</textarea> - @ <br> - @ <input type="submit" name="preview" value="Preview Your Comment"> - @ <input type="submit" name="submit" value="Append Your Changes"> - @ <input type="submit" name="cancel" value="Cancel"> + @ <br /> + @ <input type="submit" name="preview" value="Preview Your Comment" /> + @ <input type="submit" name="submit" value="Append Your Changes" /> + @ <input type="submit" name="cancel" value="Cancel" /> + captcha_generate(0); @ </form> style_footer(); } /* @@ -551,11 +833,11 @@ ** Function called to output extra text at the end of each line in ** a wiki history listing. */ static void wiki_history_extra(int rid){ if( db_exists("SELECT 1 FROM tagxref WHERE rid=%d", rid) ){ - @ <a href="%s(g.zTop)/wdiff?name=%t(zWikiPageName)&a=%d(rid)">[diff]</a> + @ %z(href("%R/wdiff?name=%t&a=%d",zWikiPageName,rid))[diff]</a> } } /* ** WEBPAGE: whistory @@ -563,31 +845,25 @@ ** ** Show the complete change history for a single wiki page. */ void whistory_page(void){ Stmt q; - char *zTitle; - char *zSQL; const char *zPageName; login_check_credentials(); - if( !g.okHistory ){ login_needed(); return; } + if( !g.perm.Hyperlink ){ login_needed(g.anon.Hyperlink); return; } zPageName = PD("name",""); - zTitle = mprintf("History Of %s", zPageName); - style_header(zTitle); - free(zTitle); + style_header("History Of %s", zPageName); - zSQL = mprintf("%s AND event.objid IN " + db_prepare(&q, "%s AND event.objid IN " " (SELECT rid FROM tagxref WHERE tagid=" "(SELECT tagid FROM tag WHERE tagname='wiki-%q')" " UNION SELECT attachid FROM attachment" " WHERE target=%Q)" "ORDER BY mtime DESC", timeline_query_for_www(), zPageName, zPageName); - db_prepare(&q, zSQL); - free(zSQL); zWikiPageName = zPageName; - www_print_timeline(&q, TIMELINE_ARTID, wiki_history_extra); + www_print_timeline(&q, TIMELINE_ARTID, 0, 0, 0, wiki_history_extra); db_finalize(&q); style_footer(); } /* @@ -595,26 +871,23 @@ ** URL: /whistory?name=PAGENAME&a=RID1&b=RID2 ** ** Show the difference between two wiki pages. 
*/ void wdiff_page(void){ - char *zTitle; int rid1, rid2; const char *zPageName; - Blob content1, content2; - Manifest m1, m2; + Manifest *pW1, *pW2 = 0; Blob w1, w2, d; + u64 diffFlags; login_check_credentials(); rid1 = atoi(PD("a","0")); - if( !g.okHistory ){ login_needed(); return; } + if( !g.perm.Hyperlink ){ login_needed(g.anon.Hyperlink); return; } if( rid1==0 ) fossil_redirect_home(); rid2 = atoi(PD("b","0")); zPageName = PD("name",""); - zTitle = mprintf("Changes To %s", zPageName); - style_header(zTitle); - free(zTitle); + style_header("Changes To %s", zPageName); if( rid2==0 ){ rid2 = db_int(0, "SELECT objid FROM event JOIN tagxref ON objid=rid AND tagxref.tagid=" "(SELECT tagid FROM tag WHERE tagname='wiki-%q')" @@ -621,30 +894,46 @@ " WHERE event.mtime<(SELECT mtime FROM event WHERE objid=%d)" " ORDER BY event.mtime DESC LIMIT 1", zPageName, rid1 ); } - content_get(rid1, &content1); - manifest_parse(&m1, &content1); - if( m1.type!=CFTYPE_WIKI ) fossil_redirect_home(); - blob_init(&w1, m1.zWiki, -1); + pW1 = manifest_get(rid1, CFTYPE_WIKI, 0); + if( pW1==0 ) fossil_redirect_home(); + blob_init(&w1, pW1->zWiki, -1); blob_zero(&w2); - if( rid2 ){ - content_get(rid2, &content2); - manifest_parse(&m2, &content2); - if( m2.type==CFTYPE_WIKI ){ - blob_init(&w2, m2.zWiki, -1); - } + if( rid2 && (pW2 = manifest_get(rid2, CFTYPE_WIKI, 0))!=0 ){ + blob_init(&w2, pW2->zWiki, -1); } blob_zero(&d); - text_diff(&w2, &w1, &d, 5); + diffFlags = construct_diff_flags(1,0); + text_diff(&w2, &w1, &d, 0, diffFlags | DIFF_HTML | DIFF_LINENO); + @ <pre class="udiff"> + @ %s(blob_str(&d)) @ <pre> - @ %h(blob_str(&d)) - @ </pre> + manifest_destroy(pW1); + manifest_destroy(pW2); style_footer(); } +/* +** prepare()s pStmt with a query requesting: +** +** - wiki page name +** - tagxref (whatever that really is!) +** +** Used by wcontent_page() and the JSON wiki code. 
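+**
+** As used by wcontent_page() below, the tagXref column appears to act
+** as the size of the latest version of the page, with zero or NULL
+** meaning the page has been deleted.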
+*/ +void wiki_prepare_page_list( Stmt * pStmt ){ + db_prepare(pStmt, + "SELECT" + " substr(tagname, 6) as name," + " (SELECT value FROM tagxref WHERE tagid=tag.tagid" + " ORDER BY mtime DESC) as tagXref" + " FROM tag WHERE tagname GLOB 'wiki-*'" + " ORDER BY lower(tagname) /*sort*/" + ); +} /* ** WEBPAGE: wcontent ** ** all=1 Show deleted pages ** @@ -653,32 +942,27 @@ void wcontent_page(void){ Stmt q; int showAll = P("all")!=0; login_check_credentials(); - if( !g.okRdWiki ){ login_needed(); return; } + if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } style_header("Available Wiki Pages"); if( showAll ){ style_submenu_element("Active", "Only Active Pages", "%s/wcontent", g.zTop); }else{ style_submenu_element("All", "All", "%s/wcontent?all=1", g.zTop); } + wiki_standard_submenu(W_ALL_BUT(W_LIST)); @ <ul> - db_prepare(&q, - "SELECT" - " substr(tagname, 6)," - " (SELECT value FROM tagxref WHERE tagid=tag.tagid ORDER BY mtime DESC)" - " FROM tag WHERE tagname GLOB 'wiki-*'" - " ORDER BY lower(tagname) /*sort*/" - ); + wiki_prepare_page_list(&q); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); int size = db_column_int(&q, 1); if( size>0 ){ - @ <li><a href="%s(g.zTop)/wiki?name=%T(zName)">%h(zName)</a></li> + @ <li>%z(href("%R/wiki?name=%T",zName))%h(zName)</a></li> }else if( showAll ){ - @ <li><a href="%s(g.zTop)/wiki?name=%T(zName)"><s>%h(zName)</s></a></li> + @ <li>%z(href("%R/wiki?name=%T",zName))<s>%h(zName)</s></a></li> } } db_finalize(&q); @ </ul> style_footer(); @@ -690,31 +974,33 @@ ** URL: /wfind?title=TITLE ** List all wiki pages whose titles contain the search text */ void wfind_page(void){ Stmt q; - const char * zTitle; + const char *zTitle; login_check_credentials(); - if( !g.okRdWiki ){ login_needed(); return; } + if( !g.perm.RdWiki ){ login_needed(g.anon.RdWiki); return; } zTitle = PD("title","*"); style_header("Wiki Pages Found"); @ <ul> - db_prepare(&q, + db_prepare(&q, "SELECT substr(tagname, 6, 1000) FROM tag WHERE tagname like 'wiki-%%%q%%'" " ORDER BY lower(tagname) /*sort*/" , - zTitle); + zTitle); while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); - @ <li><a href="%s(g.zBaseURL)/wiki?name=%T(zName)">%h(zName)</a></li> + @ <li>%z(href("%R/wiki?name=%T",zName))%h(zName)</a></li> } db_finalize(&q); @ </ul> style_footer(); } /* ** WEBPAGE: wiki_rules +** +** Show the formatting rules for Fossil wiki. */ void wikirules_page(void){ style_header("Wiki Formatting Rules"); @ <h2>Formatting Rule Summary</h2> @ <ol> @@ -721,125 +1007,86 @@ @ <li>Blank lines are paragraph breaks</li> @ <li>Bullets are "*" surrounded by two spaces at the beginning of the @ line.</li> @ <li>Enumeration items are "#" surrounded by two spaces at the beginning of @ a line.</li> - @ <li>Indented pargraphs begin with a tab or two spaces.</li> + @ <li>Indented paragraphs begin with a tab or two spaces.</li> @ <li>Hyperlinks are contained with square brackets: "[target]" or @ "[target|name]".</li> @ <li>Most ordinary HTML works.</li> @ <li><verbatim> and <nowiki>.</li> @ </ol> @ <p>We call the first five rules above "wiki" formatting rules. The @ last two rules are the HTML formatting rule.</p> @ <h2>Formatting Rule Details</h2> @ <ol> - @ <li> <p><b>Paragraphs</b>. Any sequence of one or more blank lines forms + @ <li> <p><span class="wikiruleHead">Paragraphs</span>. + @ Any sequence of one or more blank lines forms @ a paragraph break. 
Centered or right-justified paragraphs are not @ supported by wiki markup, but you can do these things if you need them - @ using HTML.</p> - @ <li> <p><b>Bullet Lists</b>. + @ using HTML.</p></li> + @ <li> <p><span class="wikiruleHead">Bullet Lists</span>. @ A bullet list item is a line that begins with a single "*" character @ surrounded on @ both sides by two or more spaces or by a tab. Only a single level - @ of bullet list is supported by wiki. For nested lists, use HTML.</p> - @ <li> <p><b>Enumeration Lists</b>. + @ of bullet list is supported by wiki. For nested lists, use HTML.</p></li> + @ <li> <p><span class="wikiruleHead">Enumeration Lists</span>. @ An enumeration list item is a line that begins with a single "#" character @ surrounded on both sides by two or more spaces or by a tab. Only a single @ level of enumeration list is supported by wiki. For nested lists or for - @ enumerations that count using letters or roman numerials, use HTML.</p> - @ <li> <p><b>Indented Paragraphs</b>. + @ enumerations that count using letters or roman numerials, use HTML.</p></li> + @ <li> <p><span class="wikiruleHead">Indented Paragraphs</span>. @ Any paragraph that begins with two or more spaces or a tab and - @ which is not a bullet or enumeration list item is rendered + @ which is not a bullet or enumeration list item is rendered @ indented. Only a single level of indentation is supported by wiki; use - @ HTML for deeper indentation.</p> - @ <li> <p><b>Hyperlinks</b>. + @ HTML for deeper indentation.</p></li> + @ <li> <p><span class="wikiruleHead">Hyperlinks</span>. @ Text within square brackets ("[...]") becomes a hyperlink. The @ target can be a wiki page name, the artifact ID of a check-in or ticket, @ the name of an image, or a URL. By default, the target is displayed @ as the text of the hyperlink. But you can specify alternative text @ after the target name separated by a "|" character.</p> - @ <p>You can also link to internal anchor names using [#anchor-name], providing - @ you have added the necessary "<a name="anchor-name"></a>" - @ tag to your wiki page.</p> - @ <li> <p><b>HTML</b>. + @ <p>You can also link to internal anchor names using [#anchor-name], + @ providing + @ you have added the necessary "<a name='anchor-name'></a>" + @ tag to your wiki page.</p></li> + @ <li> <p><span class="wikiruleHead">HTML</span>. @ The following standard HTML elements may be used: - @ <a> - @ <address> - @ <b> - @ <big> - @ <blockquote> - @ <br> - @ <center> - @ <cite> - @ <code> - @ <dd> - @ <dfn> - @ <div> - @ <dl> - @ <dt> - @ <em> - @ <font> - @ <h1> - @ <h2> - @ <h3> - @ <h4> - @ <h5> - @ <h6> - @ <hr> - @ <img> - @ <i> - @ <kbd> - @ <li> - @ <nobr> - @ <ol> - @ <p> - @ <pre> - @ <s> - @ <samp> - @ <small> - @ <strike> - @ <strong> - @ <sub> - @ <sup> - @ <table> - @ <td> - @ <th> - @ <tr> - @ <tt> - @ <u> - @ <ul> - @ <var>. - @ In addition, there are two non-standard elements available: + show_allowed_wiki_markup(); + @ . There are two non-standard elements available: @ <verbatim> and <nowiki>. @ No other elements are allowed. All attributes are checked and @ only a few benign attributes are allowed on each element. @ In particular, any attributes that specify javascript or CSS @ are elided.</p></li> - @ <li><p><b>Special Markup.</b> + @ <li><p><span class="wikiruleHead">Special Markup.</span> @ The <nowiki> tag disables all wiki formatting rules @ through the matching </nowiki> element. 
@ The <verbatim> tag works like <pre> with the addition @ that it also disables all wiki and HTML markup - @ through the matching </verbatim>. + @ through the matching </verbatim>.</p></li> @ </ol> style_footer(); } /* -** Add a new wiki page to the respository. The page name is +** Add a new wiki page to the repository. The page name is ** given by the zPageName parameter. isNew must be true to create ** a new page. If no previous page with the name zPageName exists ** and isNew is false, then this routine throws an error. ** ** The content of the new page is given by the blob pContent. +** +** zMimeType specifies the N-card for the wiki page. If it is 0, +** empty, or "text/x-fossil-wiki" (the default format) then it is +** ignored. */ -int wiki_cmd_commit(char const * zPageName, int isNew, Blob *pContent){ +int wiki_cmd_commit(const char *zPageName, int isNew, Blob *pContent, + const char *zMimeType, int localUser){ Blob wiki; /* Wiki page content */ Blob cksum; /* wiki checksum */ int rid; /* artifact ID of parent page */ - int nrid; /* artifact ID of new wiki page */ char *zDate; /* timestamp */ char *zUuid; /* uuid for rid */ rid = db_int(0, "SELECT x.rid FROM tag t, tagxref x" @@ -846,197 +1093,251 @@ " WHERE x.tagid=t.tagid AND t.tagname='wiki-%q'" " ORDER BY x.mtime DESC LIMIT 1", zPageName ); if( rid==0 && !isNew ){ +#ifdef FOSSIL_ENABLE_JSON + g.json.resultCode = FSL_JSON_E_RESOURCE_NOT_FOUND; +#endif fossil_fatal("no such wiki page: %s", zPageName); } if( rid!=0 && isNew ){ +#ifdef FOSSIL_ENABLE_JSON + g.json.resultCode = FSL_JSON_E_RESOURCE_ALREADY_EXISTS; +#endif fossil_fatal("wiki page %s already exists", zPageName); } blob_zero(&wiki); - zDate = db_text(0, "SELECT datetime('now')"); - zDate[10] = 'T'; + zDate = date_in_standard_format("now"); blob_appendf(&wiki, "D %s\n", zDate); free(zDate); blob_appendf(&wiki, "L %F\n", zPageName ); + if( zMimeType && *zMimeType + && 0!=fossil_strcmp(zMimeType,"text/x-fossil-wiki") ){ + blob_appendf(&wiki, "N %F\n", zMimeType); + } if( rid ){ zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid); blob_appendf(&wiki, "P %s\n", zUuid); free(zUuid); } user_select(); - if( g.zLogin ){ - blob_appendf(&wiki, "U %F\n", g.zLogin); + if( !login_is_nobody() ){ + blob_appendf(&wiki, "U %F\n", login_name()); } blob_appendf( &wiki, "W %d\n%s\n", blob_size(pContent), blob_str(pContent) ); md5sum_blob(&wiki, &cksum); blob_appendf(&wiki, "Z %b\n", &cksum); blob_reset(&cksum); db_begin_transaction(); - nrid = content_put( &wiki, 0, 0 ); - db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", nrid); - manifest_crosslink(nrid,&wiki); - blob_reset(&wiki); - content_deltify(rid,nrid,0); + wiki_put(&wiki, 0, wiki_need_moderation(localUser)); db_end_transaction(0); - autosync(AUTOSYNC_PUSH); return 1; } /* -** COMMAND: wiki -** -** Usage: %fossil wiki (export|create|commit|list) WikiName -** -** Run various subcommands to work with wiki entries. -** -** %fossil wiki export PAGENAME ?FILE? -** -** Sends the latest version of the PAGENAME wiki -** entry to the given file or standard output. -** -** %fossil wiki commit PAGENAME ?FILE? -** -** Commit changes to a wiki page from FILE or from standard -** input. -** -** %fossil wiki create PAGENAME ?FILE? -** -** Create a new wiki page with initial content taken from -** FILE or from standard input. -** -** %fossil wiki list -** -** Lists all wiki entries, one per line, ordered -** case-insentively by name. -** -** TODOs: -** -** %fossil wiki export ?-u ARTIFACT? WikiName ?FILE? 
-** -** Outputs the selected version of WikiName. -** -** %fossil wiki delete ?-m MESSAGE? WikiName -** -** The same as deleting a file entry, but i don't know if fossil -** supports a commit message for Wiki entries. -** -** %fossil wiki ?-u? ?-d? ?-s=[|]? list -** -** Lists the artifact ID and/or Date of last change along with -** each entry name, delimited by the -s char. -** -** %fossil wiki diff ?ARTIFACT? ?-f infile[=stdin]? EntryName -** -** Diffs the local copy of a page with a given version (defaulting -** to the head version). +** COMMAND: wiki* +** +** Usage: ../fossil wiki (export|create|commit|list) WikiName +** +** Run various subcommands to work with wiki entries or tech notes. +** +** ../fossil wiki export ?PAGENAME? ?FILE? [-t|--technote DATETIME ] +** +** Sends the latest version of either the PAGENAME wiki entry +** or the DATETIME tech note to the given file or standard +** output. One of PAGENAME or DATETIME must be specified. +** +** ../fossil wiki (create|commit) PAGENAME ?FILE? ?OPTIONS? +** +** Create a new or commit changes to an existing wiki page or +** technote from FILE or from standard input. +** +** Options: +** -M|--mimetype TEXT-FORMAT The mime type of the update defaulting +** defaulting to the type used by the +** previous version of the page or (for +** new pages) text/x-fossil-wiki. +** -t|--technote DATETIME Specifies the timestamp of the technote +** to be created or updated. +** --technote-tags TAGS The set of tags for a technote. +** --technote-bgcolor COLOR The color used for the technote on the +** timeline. +** +** ../fossil wiki list ?--technote? +** ../fossil wiki ls ?--technote? +** +** Lists all wiki entries, one per line, ordered +** case-insensitively by name. The --technote flag +** specifies that technotes will be listed instead of +** the wiki entries, which will be listed in order +** timestamp. +** */ void wiki_cmd(void){ int n; - db_find_and_open_repository(1); + db_find_and_open_repository(0, 0); if( g.argc<3 ){ goto wiki_cmd_usage; } n = strlen(g.argv[2]); if( n==0 ){ goto wiki_cmd_usage; } if( strncmp(g.argv[2],"export",n)==0 ){ - char const *zPageName; /* Name of the wiki page to export */ - char const *zFile; /* Name of the output file (0=stdout) */ - int rid; /* Artifact ID of the wiki page */ - int i; /* Loop counter */ - char *zBody = 0; /* Wiki page content */ - Manifest m; /* Parsed wiki page content */ - if( (g.argc!=4) && (g.argc!=5) ){ - usage("export PAGENAME ?FILE?"); - } - zPageName = g.argv[3]; - rid = db_int(0, "SELECT x.rid FROM tag t, tagxref x" - " WHERE x.tagid=t.tagid AND t.tagname='wiki-%q'" - " ORDER BY x.mtime DESC LIMIT 1", - zPageName - ); - if( rid ){ - Blob content; - content_get(rid, &content); - manifest_parse(&m, &content); - if( m.type==CFTYPE_WIKI ){ - zBody = m.zWiki; - } - } - if( zBody==0 ){ - fossil_fatal("wiki page [%s] not found",zPageName); - } - for(i=strlen(zBody); i>0 && isspace(zBody[i-1]); i--){} - zFile = (g.argc==4) ? 0 : g.argv[4]; - if( zFile ){ - FILE * zF; - short doClose = 0; - if( (1 == strlen(zFile)) && ('-'==zFile[0]) ){ - zF = stdout; - }else{ - zF = fopen( zFile, "w" ); - doClose = zF ? 1 : 0; - } - if( ! 
zF ){ - fossil_fatal("wiki export could not open output file for writing."); - } - fprintf(zF,"%.*s\n", i, zBody); - if( doClose ) fclose(zF); - }else{ - printf("%.*s\n", i, zBody); - } - return; - }else - if( strncmp(g.argv[2],"commit",n)==0 - || strncmp(g.argv[2],"create",n)==0 ){ - char *zPageName; - Blob content; - if( g.argc!=4 && g.argc!=5 ){ - usage("commit PAGENAME ?FILE?"); + const char *zPageName; /* Name of the wiki page to export */ + const char *zFile; /* Name of the output file (0=stdout) */ + const char *zETime; /* The name of the technote to export */ + int rid; /* Artifact ID of the wiki page */ + int i; /* Loop counter */ + char *zBody = 0; /* Wiki page content */ + Blob body; /* Wiki page content */ + Manifest *pWiki = 0; /* Parsed wiki page content */ + + zETime = find_option("technote","t",1); + if( !zETime ){ + if( (g.argc!=4) && (g.argc!=5) ){ + usage("export PAGENAME ?FILE?"); + } + zPageName = g.argv[3]; + rid = db_int(0, "SELECT x.rid FROM tag t, tagxref x" + " WHERE x.tagid=t.tagid AND t.tagname='wiki-%q'" + " ORDER BY x.mtime DESC LIMIT 1", + zPageName + ); + if( (pWiki = manifest_get(rid, CFTYPE_WIKI, 0))!=0 ){ + zBody = pWiki->zWiki; + } + if( zBody==0 ){ + fossil_fatal("wiki page [%s] not found",zPageName); + } + zFile = (g.argc==4) ? "-" : g.argv[4]; + }else{ + if( (g.argc!=3) && (g.argc!=4) ){ + usage("export ?FILE? --technote DATETIME"); + } + rid = db_int(0, "SELECT objid FROM event" + " WHERE datetime(mtime)=datetime('%q') AND type='e'" + " ORDER BY mtime DESC LIMIT 1", + zETime + ); + if( (pWiki = manifest_get(rid, CFTYPE_EVENT, 0))!=0 ){ + zBody = pWiki->zWiki; + } + if( zBody==0 ){ + fossil_fatal("technote not found"); + } + zFile = (g.argc==3) ? "-" : g.argv[3]; + } + for(i=strlen(zBody); i>0 && fossil_isspace(zBody[i-1]); i--){} + zBody[i] = 0; + blob_init(&body, zBody, -1); + blob_append(&body, "\n", 1); + blob_write_to_file(&body, zFile); + blob_reset(&body); + manifest_destroy(pWiki); + return; + }else if( strncmp(g.argv[2],"commit",n)==0 + || strncmp(g.argv[2],"create",n)==0 ){ + const char *zPageName; /* page name */ + Blob content; /* Input content */ + int rid; + Manifest *pWiki = 0; /* Parsed wiki page content */ + const char *zMimeType = find_option("mimetype", "M", 1); + const char *zETime = find_option("technote", "t", 1); + const char *zTags = find_option("technote-tags", NULL, 1); + const char *zClr = find_option("technote-bgcolor", NULL, 1); + if( g.argc!=4 && g.argc!=5 ){ + usage("commit|create PAGENAME ?FILE? [--mimetype TEXT-FORMAT]" + " [--technote DATETIME] [--technote-tags TAGS]" + " [--technote-bgcolor COLOR]"); } zPageName = g.argv[3]; if( g.argc==4 ){ blob_read_from_channel(&content, stdin, -1); }else{ blob_read_from_file(&content, g.argv[4]); } - if( g.argv[2][1]=='r' ){ - wiki_cmd_commit(zPageName, 1, &content); - printf("Created new wiki page %s.\n", zPageName); + if(!zMimeType || !*zMimeType){ + /* Try to deduce the mime type based on the prior version. 
*/ + if ( !zETime ){ + rid = db_int(0, "SELECT x.rid FROM tag t, tagxref x" + " WHERE x.tagid=t.tagid AND t.tagname='wiki-%q'" + " ORDER BY x.mtime DESC LIMIT 1", + zPageName + ); + if(rid>0 && (pWiki = manifest_get(rid, CFTYPE_WIKI, 0))!=0 + && (pWiki->zMimetype && *pWiki->zMimetype)){ + zMimeType = pWiki->zMimetype; + } + }else{ + rid = db_int(0, "SELECT objid FROM event" + " WHERE datetime(mtime)=datetime('%q') AND type='e'" + " ORDER BY mtime DESC LIMIT 1", + zPageName + ); + if(rid>0 && (pWiki = manifest_get(rid, CFTYPE_EVENT, 0))!=0 + && (pWiki->zMimetype && *pWiki->zMimetype)){ + zMimeType = pWiki->zMimetype; + } + } + } + if( !zETime ){ + if( g.argv[2][1]=='r' ){ + wiki_cmd_commit(zPageName, 1, &content, zMimeType, 1); + fossil_print("Created new wiki page %s.\n", zPageName); + }else{ + wiki_cmd_commit(zPageName, 0, &content, zMimeType, 1); + fossil_print("Updated wiki page %s.\n", zPageName); + } }else{ - wiki_cmd_commit(zPageName, 0, &content); - printf("Updated wiki page %s.\n", zPageName); + char *zMETime; /* Normalized, mutable version of zETime */ + zMETime = db_text(0, "SELECT coalesce(datetime(%Q),datetime('now'))", + zETime); + if( g.argv[2][1]=='r' ){ + event_cmd_commit(zMETime, 1, &content, zMimeType, zPageName, + zTags, zClr); + fossil_print("Created new tech note %s.\n", zMETime); + }else{ + event_cmd_commit(zMETime, 0, &content, zMimeType, zPageName, + zTags, zClr); + fossil_print("Updated tech note %s.\n", zMETime); + } + free(zMETime); } + manifest_destroy(pWiki); blob_reset(&content); - }else - if( strncmp(g.argv[2],"delete",n)==0 ){ + }else if( strncmp(g.argv[2],"delete",n)==0 ){ if( g.argc!=5 ){ usage("delete PAGENAME"); } fossil_fatal("delete not yet implemented."); - }else - if( strncmp(g.argv[2],"list",n)==0 ){ + }else if(( strncmp(g.argv[2],"list",n)==0 ) + || ( strncmp(g.argv[2],"ls",n)==0 )){ Stmt q; - db_prepare(&q, - "SELECT substr(tagname, 6) FROM tag WHERE tagname GLOB 'wiki-*'" - " ORDER BY lower(tagname) /*sort*/" - ); + if ( !find_option("technote","t",0) ){ + db_prepare(&q, + "SELECT substr(tagname, 6) FROM tag WHERE tagname GLOB 'wiki-*'" + " ORDER BY lower(tagname) /*sort*/" + ); + }else{ + db_prepare(&q, + "SELECT datetime(mtime) FROM event WHERE type='e'" + " ORDER BY mtime /*sort*/" + ); + } while( db_step(&q)==SQLITE_ROW ){ const char *zName = db_column_text(&q, 0); - printf( "%s\n",zName); + fossil_print( "%s\n",zName); } db_finalize(&q); - }else - { + }else{ goto wiki_cmd_usage; } return; wiki_cmd_usage: usage("export|create|commit|list ..."); } Index: src/wikiformat.c ================================================================== --- src/wikiformat.c +++ src/wikiformat.c @@ -15,111 +15,132 @@ ** ******************************************************************************* ** ** This file contains code to do formatting of wiki text. */ -#include <assert.h> #include "config.h" +#include <assert.h> #include "wikiformat.h" #if INTERFACE /* ** Allowed wiki transformation operations */ -#define WIKI_NOFOLLOW 0x001 -#define WIKI_HTML 0x002 -#define WIKI_INLINE 0x004 /* Do not surround with <p>..</p> */ -#define WIKI_NOBLOCK 0x008 /* No block markup of any kind */ +#define WIKI_HTMLONLY 0x001 /* HTML markup only. No wiki */ +#define WIKI_INLINE 0x002 /* Do not surround with <p>..</p> */ +#define WIKI_NOBLOCK 0x004 /* No block markup of any kind */ +#define WIKI_BUTTONS 0x008 /* Allow sub-menu buttons */ +#define WIKI_NOBADLINKS 0x010 /* Ignore broken hyperlinks */ +#define WIKI_LINKSONLY 0x020 /* No markup. 
Only decorate links */ #endif /* ** These are the only markup attributes allowed. */ -#define ATTR_ALIGN 1 -#define ATTR_ALT 2 -#define ATTR_BGCOLOR 3 -#define ATTR_BORDER 4 -#define ATTR_CELLPADDING 5 -#define ATTR_CELLSPACING 6 -#define ATTR_CLEAR 7 -#define ATTR_COLOR 8 -#define ATTR_COLSPAN 9 -#define ATTR_COMPACT 10 -#define ATTR_FACE 11 -#define ATTR_HEIGHT 12 -#define ATTR_HREF 13 -#define ATTR_HSPACE 14 -#define ATTR_ID 15 -#define ATTR_NAME 16 -#define ATTR_ROWSPAN 17 -#define ATTR_SIZE 18 -#define ATTR_SRC 19 -#define ATTR_START 20 -#define ATTR_TYPE 21 -#define ATTR_VALIGN 22 -#define ATTR_VALUE 23 -#define ATTR_VSPACE 24 -#define ATTR_WIDTH 25 -#define AMSK_ALIGN 0x0000001 -#define AMSK_ALT 0x0000002 -#define AMSK_BGCOLOR 0x0000004 -#define AMSK_BORDER 0x0000008 -#define AMSK_CELLPADDING 0x0000010 -#define AMSK_CELLSPACING 0x0000020 -#define AMSK_CLEAR 0x0000040 -#define AMSK_COLOR 0x0000080 -#define AMSK_COLSPAN 0x0000100 -#define AMSK_COMPACT 0x0000200 -#define AMSK_FACE 0x0000400 -#define AMSK_HEIGHT 0x0000800 -#define AMSK_HREF 0x0001000 -#define AMSK_HSPACE 0x0002000 -#define AMSK_ID 0x0004000 -#define AMSK_NAME 0x0008000 -#define AMSK_ROWSPAN 0x0010000 -#define AMSK_SIZE 0x0020000 -#define AMSK_SRC 0x0040000 -#define AMSK_START 0x0080000 -#define AMSK_TYPE 0x0100000 -#define AMSK_VALIGN 0x0200000 -#define AMSK_VALUE 0x0400000 -#define AMSK_VSPACE 0x0800000 -#define AMSK_WIDTH 0x1000000 -#define AMSK_CLASS 0x2000000 +enum allowed_attr_t { + ATTR_ALIGN = 1, + ATTR_ALT, + ATTR_BGCOLOR, + ATTR_BORDER, + ATTR_CELLPADDING, + ATTR_CELLSPACING, + ATTR_CLASS, + ATTR_CLEAR, + ATTR_COLOR, + ATTR_COLSPAN, + ATTR_COMPACT, + ATTR_FACE, + ATTR_HEIGHT, + ATTR_HREF, + ATTR_HSPACE, + ATTR_ID, + ATTR_LINKS, + ATTR_NAME, + ATTR_ROWSPAN, + ATTR_SIZE, + ATTR_SRC, + ATTR_START, + ATTR_STYLE, + ATTR_TARGET, + ATTR_TYPE, + ATTR_VALIGN, + ATTR_VALUE, + ATTR_VSPACE, + ATTR_WIDTH +}; + +enum amsk_t { + AMSK_ALIGN = 0x00000001, + AMSK_ALT = 0x00000002, + AMSK_BGCOLOR = 0x00000004, + AMSK_BORDER = 0x00000008, + AMSK_CELLPADDING = 0x00000010, + AMSK_CELLSPACING = 0x00000020, + AMSK_CLASS = 0x00000040, + AMSK_CLEAR = 0x00000080, + AMSK_COLOR = 0x00000100, + AMSK_COLSPAN = 0x00000200, + AMSK_COMPACT = 0x00000400, + /* re-use = 0x00000800, */ + AMSK_FACE = 0x00001000, + AMSK_HEIGHT = 0x00002000, + AMSK_HREF = 0x00004000, + AMSK_HSPACE = 0x00008000, + AMSK_ID = 0x00010000, + AMSK_LINKS = 0x00020000, + AMSK_NAME = 0x00040000, + AMSK_ROWSPAN = 0x00080000, + AMSK_SIZE = 0x00100000, + AMSK_SRC = 0x00200000, + AMSK_START = 0x00400000, + AMSK_STYLE = 0x00800000, + AMSK_TARGET = 0x01000000, + AMSK_TYPE = 0x02000000, + AMSK_VALIGN = 0x04000000, + AMSK_VALUE = 0x08000000, + AMSK_VSPACE = 0x10000000, + AMSK_WIDTH = 0x20000000 +}; static const struct AllowedAttribute { const char *zName; unsigned int iMask; } aAttribute[] = { + /* These indexes MUST line up with their + corresponding allowed_attr_t enum values. 
+ */ { 0, 0 }, - { "align", AMSK_ALIGN, }, - { "alt", AMSK_ALT, }, - { "bgcolor", AMSK_BGCOLOR, }, - { "border", AMSK_BORDER, }, - { "cellpadding", AMSK_CELLPADDING, }, - { "cellspacing", AMSK_CELLSPACING, }, - { "class", AMSK_CLASS, }, - { "clear", AMSK_CLEAR, }, - { "color", AMSK_COLOR, }, - { "colspan", AMSK_COLSPAN, }, - { "compact", AMSK_COMPACT, }, - { "face", AMSK_FACE, }, - { "height", AMSK_HEIGHT, }, - { "href", AMSK_HREF, }, - { "hspace", AMSK_HSPACE, }, - { "id", AMSK_ID, }, - { "name", AMSK_NAME, }, - { "rowspan", AMSK_ROWSPAN, }, - { "size", AMSK_SIZE, }, - { "src", AMSK_SRC, }, - { "start", AMSK_START, }, - { "type", AMSK_TYPE, }, - { "valign", AMSK_VALIGN, }, - { "value", AMSK_VALUE, }, - { "vspace", AMSK_VSPACE, }, - { "width", AMSK_WIDTH, }, + { "align", AMSK_ALIGN }, + { "alt", AMSK_ALT }, + { "bgcolor", AMSK_BGCOLOR }, + { "border", AMSK_BORDER }, + { "cellpadding", AMSK_CELLPADDING }, + { "cellspacing", AMSK_CELLSPACING }, + { "class", AMSK_CLASS }, + { "clear", AMSK_CLEAR }, + { "color", AMSK_COLOR }, + { "colspan", AMSK_COLSPAN }, + { "compact", AMSK_COMPACT }, + { "face", AMSK_FACE }, + { "height", AMSK_HEIGHT }, + { "href", AMSK_HREF }, + { "hspace", AMSK_HSPACE }, + { "id", AMSK_ID }, + { "links", AMSK_LINKS }, + { "name", AMSK_NAME }, + { "rowspan", AMSK_ROWSPAN }, + { "size", AMSK_SIZE }, + { "src", AMSK_SRC }, + { "start", AMSK_START }, + { "style", AMSK_STYLE }, + { "target", AMSK_TARGET }, + { "type", AMSK_TYPE }, + { "valign", AMSK_VALIGN }, + { "value", AMSK_VALUE }, + { "vspace", AMSK_VSPACE }, + { "width", AMSK_WIDTH }, }; /* ** Use binary search to locate a tag in the aAttribute[] table. */ @@ -127,11 +148,11 @@ int i, c, first, last; first = 1; last = sizeof(aAttribute)/sizeof(aAttribute[0]) - 1; while( first<=last ){ i = (first+last)/2; - c = strcmp(aAttribute[i].zName, z); + c = fossil_strcmp(aAttribute[i].zName, z); if( c==0 ){ return i; }else if( c<0 ){ first = i+1; }else{ @@ -149,59 +170,72 @@ ** Except for MARKUP_INVALID, this must all be in alphabetical order ** and in numerical sequence. The first markup type must be zero. ** The value for MARKUP_XYZ must correspond to the <xyz> entry ** in aAllowedMarkup[]. 
*/ -#define MARKUP_INVALID 0 -#define MARKUP_A 1 -#define MARKUP_ADDRESS 2 -#define MARKUP_B 3 -#define MARKUP_BIG 4 -#define MARKUP_BLOCKQUOTE 5 -#define MARKUP_BR 6 -#define MARKUP_CENTER 7 -#define MARKUP_CITE 8 -#define MARKUP_CODE 9 -#define MARKUP_DD 10 -#define MARKUP_DFN 11 -#define MARKUP_DIV 12 -#define MARKUP_DL 13 -#define MARKUP_DT 14 -#define MARKUP_EM 15 -#define MARKUP_FONT 16 -#define MARKUP_H1 17 -#define MARKUP_H2 18 -#define MARKUP_H3 19 -#define MARKUP_H4 20 -#define MARKUP_H5 21 -#define MARKUP_H6 22 -#define MARKUP_HR 23 -#define MARKUP_I 24 -#define MARKUP_IMG 25 -#define MARKUP_KBD 26 -#define MARKUP_LI 27 -#define MARKUP_NOBR 28 -#define MARKUP_NOWIKI 29 -#define MARKUP_OL 30 -#define MARKUP_P 31 -#define MARKUP_PRE 32 -#define MARKUP_S 33 -#define MARKUP_SAMP 34 -#define MARKUP_SMALL 35 -#define MARKUP_STRIKE 36 -#define MARKUP_STRONG 37 -#define MARKUP_SUB 38 -#define MARKUP_SUP 39 -#define MARKUP_TABLE 40 -#define MARKUP_TD 41 -#define MARKUP_TH 42 -#define MARKUP_TR 43 -#define MARKUP_TT 44 -#define MARKUP_U 45 -#define MARKUP_UL 46 -#define MARKUP_VAR 47 -#define MARKUP_VERBATIM 48 +#define MARKUP_INVALID 0 +#define MARKUP_A 1 +#define MARKUP_ADDRESS 2 +#define MARKUP_HTML5_ARTICLE 3 +#define MARKUP_HTML5_ASIDE 4 +#define MARKUP_B 5 +#define MARKUP_BIG 6 +#define MARKUP_BLOCKQUOTE 7 +#define MARKUP_BR 8 +#define MARKUP_CENTER 9 +#define MARKUP_CITE 10 +#define MARKUP_CODE 11 +#define MARKUP_COL 12 +#define MARKUP_COLGROUP 13 +#define MARKUP_DD 14 +#define MARKUP_DFN 15 +#define MARKUP_DIV 16 +#define MARKUP_DL 17 +#define MARKUP_DT 18 +#define MARKUP_EM 19 +#define MARKUP_FONT 20 +#define MARKUP_HTML5_FOOTER 21 +#define MARKUP_H1 22 +#define MARKUP_H2 23 +#define MARKUP_H3 24 +#define MARKUP_H4 25 +#define MARKUP_H5 26 +#define MARKUP_H6 27 +#define MARKUP_HTML5_HEADER 28 +#define MARKUP_HR 29 +#define MARKUP_I 30 +#define MARKUP_IMG 31 +#define MARKUP_KBD 32 +#define MARKUP_LI 33 +#define MARKUP_HTML5_NAV 34 +#define MARKUP_NOBR 35 +#define MARKUP_NOWIKI 36 +#define MARKUP_OL 37 +#define MARKUP_P 38 +#define MARKUP_PRE 39 +#define MARKUP_S 40 +#define MARKUP_SAMP 41 +#define MARKUP_HTML5_SECTION 42 +#define MARKUP_SMALL 43 +#define MARKUP_SPAN 44 +#define MARKUP_STRIKE 45 +#define MARKUP_STRONG 46 +#define MARKUP_SUB 47 +#define MARKUP_SUP 48 +#define MARKUP_TABLE 49 +#define MARKUP_TBODY 50 +#define MARKUP_TD 51 +#define MARKUP_TFOOT 52 +#define MARKUP_TH 53 +#define MARKUP_THEAD 54 +#define MARKUP_TITLE 55 +#define MARKUP_TR 56 +#define MARKUP_TT 57 +#define MARKUP_U 58 +#define MARKUP_UL 59 +#define MARKUP_VAR 60 +#define MARKUP_VERBATIM 61 /* ** The various markup is divided into the following types: */ #define MUTYPE_SINGLE 0x0001 /* <img>, <br>, or <hr> */ @@ -231,73 +265,121 @@ short int iType; /* The MUTYPE_* code */ int allowedAttr; /* Allowed attributes on this markup */ } aMarkup[] = { { 0, MARKUP_INVALID, 0, 0 }, { "a", MARKUP_A, MUTYPE_HYPERLINK, - AMSK_HREF|AMSK_NAME|AMSK_CLASS }, - { "address", MARKUP_ADDRESS, MUTYPE_BLOCK, 0 }, - { "b", MARKUP_B, MUTYPE_FONT, 0 }, - { "big", MARKUP_BIG, MUTYPE_FONT, 0 }, - { "blockquote", MARKUP_BLOCKQUOTE, MUTYPE_BLOCK, 0 }, - { "br", MARKUP_BR, MUTYPE_SINGLE, AMSK_CLEAR }, - { "center", MARKUP_CENTER, MUTYPE_BLOCK, 0 }, - { "cite", MARKUP_CITE, MUTYPE_FONT, 0 }, - { "code", MARKUP_CODE, MUTYPE_FONT, 0 }, - { "dd", MARKUP_DD, MUTYPE_LI, 0 }, - { "dfn", MARKUP_DFN, MUTYPE_FONT, 0 }, - { "div", MARKUP_DIV, MUTYPE_BLOCK, AMSK_ID|AMSK_CLASS }, - { "dl", MARKUP_DL, MUTYPE_LIST, AMSK_COMPACT }, - { "dt", 
MARKUP_DT, MUTYPE_LI, 0 }, - { "em", MARKUP_EM, MUTYPE_FONT, 0 }, + AMSK_HREF|AMSK_NAME|AMSK_CLASS|AMSK_TARGET|AMSK_STYLE }, + { "address", MARKUP_ADDRESS, MUTYPE_BLOCK, AMSK_STYLE }, + { "article", MARKUP_HTML5_ARTICLE, MUTYPE_BLOCK, + AMSK_ID|AMSK_CLASS|AMSK_STYLE }, + { "aside", MARKUP_HTML5_ASIDE, MUTYPE_BLOCK, + AMSK_ID|AMSK_CLASS|AMSK_STYLE }, + + { "b", MARKUP_B, MUTYPE_FONT, AMSK_STYLE }, + { "big", MARKUP_BIG, MUTYPE_FONT, AMSK_STYLE }, + { "blockquote", MARKUP_BLOCKQUOTE, MUTYPE_BLOCK, AMSK_STYLE }, + { "br", MARKUP_BR, MUTYPE_SINGLE, AMSK_CLEAR }, + { "center", MARKUP_CENTER, MUTYPE_BLOCK, AMSK_STYLE }, + { "cite", MARKUP_CITE, MUTYPE_FONT, AMSK_STYLE }, + { "code", MARKUP_CODE, MUTYPE_FONT, AMSK_STYLE }, + { "col", MARKUP_COL, MUTYPE_SINGLE, + AMSK_ALIGN|AMSK_CLASS|AMSK_COLSPAN|AMSK_WIDTH|AMSK_STYLE }, + { "colgroup", MARKUP_COLGROUP, MUTYPE_BLOCK, + AMSK_ALIGN|AMSK_CLASS|AMSK_COLSPAN|AMSK_WIDTH|AMSK_STYLE}, + { "dd", MARKUP_DD, MUTYPE_LI, AMSK_STYLE }, + { "dfn", MARKUP_DFN, MUTYPE_FONT, AMSK_STYLE }, + { "div", MARKUP_DIV, MUTYPE_BLOCK, + AMSK_ID|AMSK_CLASS|AMSK_STYLE }, + { "dl", MARKUP_DL, MUTYPE_LIST, + AMSK_COMPACT|AMSK_STYLE }, + { "dt", MARKUP_DT, MUTYPE_LI, AMSK_STYLE }, + { "em", MARKUP_EM, MUTYPE_FONT, AMSK_STYLE }, { "font", MARKUP_FONT, MUTYPE_FONT, - AMSK_COLOR|AMSK_FACE|AMSK_SIZE }, - { "h1", MARKUP_H1, MUTYPE_BLOCK, AMSK_ALIGN|AMSK_CLASS }, - { "h2", MARKUP_H2, MUTYPE_BLOCK, AMSK_ALIGN|AMSK_CLASS }, - { "h3", MARKUP_H3, MUTYPE_BLOCK, AMSK_ALIGN|AMSK_CLASS }, - { "h4", MARKUP_H4, MUTYPE_BLOCK, AMSK_ALIGN|AMSK_CLASS }, - { "h5", MARKUP_H5, MUTYPE_BLOCK, AMSK_ALIGN|AMSK_CLASS }, - { "h6", MARKUP_H6, MUTYPE_BLOCK, AMSK_ALIGN|AMSK_CLASS }, + AMSK_COLOR|AMSK_FACE|AMSK_SIZE|AMSK_STYLE }, + { "footer", MARKUP_HTML5_FOOTER, MUTYPE_BLOCK, + AMSK_ID|AMSK_CLASS|AMSK_STYLE }, + + { "h1", MARKUP_H1, MUTYPE_BLOCK, + AMSK_ALIGN|AMSK_CLASS|AMSK_STYLE }, + { "h2", MARKUP_H2, MUTYPE_BLOCK, + AMSK_ALIGN|AMSK_CLASS|AMSK_STYLE }, + { "h3", MARKUP_H3, MUTYPE_BLOCK, + AMSK_ALIGN|AMSK_CLASS|AMSK_STYLE }, + { "h4", MARKUP_H4, MUTYPE_BLOCK, + AMSK_ALIGN|AMSK_CLASS|AMSK_STYLE }, + { "h5", MARKUP_H5, MUTYPE_BLOCK, + AMSK_ALIGN|AMSK_CLASS|AMSK_STYLE }, + { "h6", MARKUP_H6, MUTYPE_BLOCK, + AMSK_ALIGN|AMSK_CLASS|AMSK_STYLE }, + + { "header", MARKUP_HTML5_HEADER, MUTYPE_BLOCK, + AMSK_ID|AMSK_CLASS|AMSK_STYLE }, + { "hr", MARKUP_HR, MUTYPE_SINGLE, - AMSK_ALIGN|AMSK_COLOR|AMSK_SIZE|AMSK_WIDTH|AMSK_CLASS }, - { "i", MARKUP_I, MUTYPE_FONT, 0 }, + AMSK_ALIGN|AMSK_COLOR|AMSK_SIZE|AMSK_WIDTH| + AMSK_STYLE|AMSK_CLASS }, + { "i", MARKUP_I, MUTYPE_FONT, AMSK_STYLE }, { "img", MARKUP_IMG, MUTYPE_SINGLE, AMSK_ALIGN|AMSK_ALT|AMSK_BORDER|AMSK_HEIGHT| - AMSK_HSPACE|AMSK_SRC|AMSK_VSPACE|AMSK_WIDTH }, - { "kbd", MARKUP_KBD, MUTYPE_FONT, 0 }, + AMSK_HSPACE|AMSK_SRC|AMSK_VSPACE|AMSK_WIDTH|AMSK_STYLE }, + { "kbd", MARKUP_KBD, MUTYPE_FONT, AMSK_STYLE }, { "li", MARKUP_LI, MUTYPE_LI, - AMSK_TYPE|AMSK_VALUE }, + AMSK_TYPE|AMSK_VALUE|AMSK_STYLE }, + { "nav", MARKUP_HTML5_NAV, MUTYPE_BLOCK, + AMSK_ID|AMSK_CLASS|AMSK_STYLE }, { "nobr", MARKUP_NOBR, MUTYPE_FONT, 0 }, { "nowiki", MARKUP_NOWIKI, MUTYPE_SPECIAL, 0 }, { "ol", MARKUP_OL, MUTYPE_LIST, - AMSK_START|AMSK_TYPE|AMSK_COMPACT }, - { "p", MARKUP_P, MUTYPE_BLOCK, AMSK_ALIGN|AMSK_CLASS }, - { "pre", MARKUP_PRE, MUTYPE_BLOCK, 0 }, - { "s", MARKUP_S, MUTYPE_FONT, 0 }, - { "samp", MARKUP_SAMP, MUTYPE_FONT, 0 }, - { "small", MARKUP_SMALL, MUTYPE_FONT, 0 }, - { "strike", MARKUP_STRIKE, MUTYPE_FONT, 0 }, - { "strong", MARKUP_STRONG, MUTYPE_FONT, 0 }, - { "sub", MARKUP_SUB, 
MUTYPE_FONT, 0 }, - { "sup", MARKUP_SUP, MUTYPE_FONT, 0 }, + AMSK_START|AMSK_TYPE|AMSK_COMPACT|AMSK_STYLE }, + { "p", MARKUP_P, MUTYPE_BLOCK, + AMSK_ALIGN|AMSK_CLASS|AMSK_STYLE }, + { "pre", MARKUP_PRE, MUTYPE_BLOCK, AMSK_STYLE }, + { "s", MARKUP_S, MUTYPE_FONT, AMSK_STYLE }, + { "samp", MARKUP_SAMP, MUTYPE_FONT, AMSK_STYLE }, + { "section", MARKUP_HTML5_SECTION, MUTYPE_BLOCK, + AMSK_ID|AMSK_CLASS|AMSK_STYLE }, + { "small", MARKUP_SMALL, MUTYPE_FONT, AMSK_STYLE }, + { "span", MARKUP_SPAN, MUTYPE_BLOCK, + AMSK_ALIGN|AMSK_CLASS|AMSK_STYLE }, + { "strike", MARKUP_STRIKE, MUTYPE_FONT, AMSK_STYLE }, + { "strong", MARKUP_STRONG, MUTYPE_FONT, AMSK_STYLE }, + { "sub", MARKUP_SUB, MUTYPE_FONT, AMSK_STYLE }, + { "sup", MARKUP_SUP, MUTYPE_FONT, AMSK_STYLE }, { "table", MARKUP_TABLE, MUTYPE_TABLE, AMSK_ALIGN|AMSK_BGCOLOR|AMSK_BORDER|AMSK_CELLPADDING| - AMSK_CELLSPACING|AMSK_HSPACE|AMSK_VSPACE|AMSK_CLASS }, + AMSK_CELLSPACING|AMSK_HSPACE|AMSK_VSPACE|AMSK_CLASS| + AMSK_STYLE }, + { "tbody", MARKUP_TBODY, MUTYPE_BLOCK, + AMSK_ALIGN|AMSK_CLASS|AMSK_STYLE }, { "td", MARKUP_TD, MUTYPE_TD, AMSK_ALIGN|AMSK_BGCOLOR|AMSK_COLSPAN| - AMSK_ROWSPAN|AMSK_VALIGN|AMSK_CLASS }, + AMSK_ROWSPAN|AMSK_VALIGN|AMSK_CLASS|AMSK_STYLE }, + { "tfoot", MARKUP_TFOOT, MUTYPE_BLOCK, + AMSK_ALIGN|AMSK_CLASS|AMSK_STYLE }, { "th", MARKUP_TH, MUTYPE_TD, AMSK_ALIGN|AMSK_BGCOLOR|AMSK_COLSPAN| - AMSK_ROWSPAN|AMSK_VALIGN|AMSK_CLASS }, + AMSK_ROWSPAN|AMSK_VALIGN|AMSK_CLASS|AMSK_STYLE }, + { "thead", MARKUP_THEAD, MUTYPE_BLOCK, + AMSK_ALIGN|AMSK_CLASS|AMSK_STYLE }, + { "title", MARKUP_TITLE, MUTYPE_BLOCK, 0 }, { "tr", MARKUP_TR, MUTYPE_TR, - AMSK_ALIGN|AMSK_BGCOLOR||AMSK_VALIGN|AMSK_CLASS }, - { "tt", MARKUP_TT, MUTYPE_FONT, 0 }, - { "u", MARKUP_U, MUTYPE_FONT, 0 }, + AMSK_ALIGN|AMSK_BGCOLOR|AMSK_VALIGN|AMSK_CLASS|AMSK_STYLE }, + { "tt", MARKUP_TT, MUTYPE_FONT, AMSK_STYLE }, + { "u", MARKUP_U, MUTYPE_FONT, AMSK_STYLE }, { "ul", MARKUP_UL, MUTYPE_LIST, - AMSK_TYPE|AMSK_COMPACT }, - { "var", MARKUP_VAR, MUTYPE_FONT, 0 }, - { "verbatim", MARKUP_VERBATIM, MUTYPE_SPECIAL, AMSK_ID|AMSK_TYPE }, + AMSK_TYPE|AMSK_COMPACT|AMSK_STYLE }, + { "var", MARKUP_VAR, MUTYPE_FONT, AMSK_STYLE }, + { "verbatim", MARKUP_VERBATIM, MUTYPE_SPECIAL, + AMSK_ID|AMSK_TYPE }, }; + +void show_allowed_wiki_markup( void ){ + int i; /* loop over allowedAttr */ + for( i=1 ; i<=sizeof(aMarkup)/sizeof(aMarkup[0]) - 1 ; i++ ){ + @ <%s(aMarkup[i].zName)> + } +} /* ** Use binary search to locate a tag in the aMarkup[] table. */ static int findTag(const char *z){ @@ -304,11 +386,11 @@ int i, c, first, last; first = 1; last = sizeof(aMarkup)/sizeof(aMarkup[0]) - 1; while( first<=last ){ i = (first+last)/2; - c = strcmp(aMarkup[i].zName, z); + c = fossil_strcmp(aMarkup[i].zName, z); if( c==0 ){ assert( aMarkup[i].iCode==i ); return i; }else if( c<0 ){ first = i+1; @@ -320,40 +402,41 @@ } /* ** Token types */ -#define TOKEN_MARKUP 1 /* <...> */ -#define TOKEN_CHARACTER 2 /* "&" or "<" not part of markup */ -#define TOKEN_LINK 3 /* [...] */ -#define TOKEN_PARAGRAPH 4 /* blank lines */ -#define TOKEN_NEWLINE 5 /* A single "\n" */ -#define TOKEN_BUL_LI 6 /* " * " */ -#define TOKEN_NUM_LI 7 /* " # " */ -#define TOKEN_ENUM 8 /* " \(?\d+[.)]? " */ -#define TOKEN_INDENT 9 /* " " */ -#define TOKEN_RAW 10 /* Output exactly (used when wiki-use-html==1) */ -#define TOKEN_TEXT 11 /* None of the above */ +#define TOKEN_MARKUP 1 /* <...> */ +#define TOKEN_CHARACTER 2 /* "&" or "<" not part of markup */ +#define TOKEN_LINK 3 /* [...] 
*/ +#define TOKEN_PARAGRAPH 4 /* blank lines */ +#define TOKEN_NEWLINE 5 /* A single "\n" */ +#define TOKEN_BUL_LI 6 /* " * " */ +#define TOKEN_NUM_LI 7 /* " # " */ +#define TOKEN_ENUM 8 /* " \(?\d+[.)]? " */ +#define TOKEN_INDENT 9 /* " " */ +#define TOKEN_RAW 10 /* Output exactly (used when wiki-use-html==1) */ +#define TOKEN_TEXT 11 /* None of the above */ /* -** State flags +** State flags. Save the lower 16 bits for the WIKI_* flags. */ -#define AT_NEWLINE 0x001 /* At start of a line */ -#define AT_PARAGRAPH 0x002 /* At start of a paragraph */ -#define ALLOW_WIKI 0x004 /* Allow wiki markup */ -#define FONT_MARKUP_ONLY 0x008 /* Only allow MUTYPE_FONT markup */ -#define INLINE_MARKUP_ONLY 0x010 /* Allow only "inline" markup */ -#define IN_LIST 0x020 /* Within wiki <ul> or <ol> */ -#define WIKI_USE_HTML 0x040 /* wiki-use-html option = on */ +#define AT_NEWLINE 0x0010000 /* At start of a line */ +#define AT_PARAGRAPH 0x0020000 /* At start of a paragraph */ +#define ALLOW_WIKI 0x0040000 /* Allow wiki markup */ +#define ALLOW_LINKS 0x0080000 /* Allow [...] hyperlinks */ +#define FONT_MARKUP_ONLY 0x0100000 /* Only allow MUTYPE_FONT markup */ +#define INLINE_MARKUP_ONLY 0x0200000 /* Allow only "inline" markup */ +#define IN_LIST 0x0400000 /* Within wiki <ul> or <ol> */ /* ** Current state of the rendering engine */ typedef struct Renderer Renderer; struct Renderer { Blob *pOut; /* Output appended to this blob */ int state; /* Flag that govern rendering */ + unsigned renderFlags; /* Flags from the client */ int wikiList; /* Current wiki list type */ int inVerbatim; /* True in <verbatim> mode */ int preVerbState; /* Value of state prior to verbatim */ int wantAutoParagraph; /* True if a <p> is desired */ int inAutoParagraph; /* True if within an automatic paragraph */ @@ -378,29 +461,30 @@ static int r = -1; if( r<0 ) r = db_get_boolean("wiki-use-html", 0); return r; } - /* ** z points to a "<" character. Check to see if this is the start of ** a valid markup. If it is, return the total number of characters in ** the markup including the initial "<" and the terminating ">". If ** it is not well-formed markup, return 0. */ -static int markupLength(const char *z){ +int htmlTagLength(const char *z){ int n = 1; int inparen = 0; int c; if( z[n]=='/' ){ n++; } - if( !isalpha(z[n]) ) return 0; - while( isalnum(z[n]) ){ n++; } - if( (c = z[n])!='>' && !isspace(c) ) return 0; + if( !fossil_isalpha(z[n]) ) return 0; + while( fossil_isalnum(z[n]) || z[n]=='-' ){ n++; } + c = z[n]; + if( c=='/' && z[n+1]=='>' ){ return n+2; } + if( c!='>' && !fossil_isspace(c) ) return 0; while( (c = z[n])!=0 && (c!='>' || inparen) ){ if( c==inparen ){ inparen = 0; - }else if( c=='"' || c=='\'' ){ + }else if( inparen==0 && (c=='"' || c=='\'') ){ inparen = c; } n++; } if( z[n]!='>' ) return 0; @@ -413,11 +497,11 @@ ** of characters through the closing "\n". If not, return 0. */ static int paragraphBreakLength(const char *z){ int i, n; int nNewline = 1; - for(i=1, n=0; isspace(z[i]); i++){ + for(i=1, n=0; fossil_isspace(z[i]); i++){ if( z[i]=='\n' ){ nNewline++; n = i; } } @@ -437,18 +521,27 @@ ** < ** & ** \n ** [ ** -** The "[" and "\n" are only considered interesting if the "useWiki" -** flag is set. +** The "[" is only considered if flags contain ALLOW_LINKS or ALLOW_WIKI. +** The "\n" is only considered interesting if the flags constains ALLOW_WIKI. 
*/ -static int textLength(const char *z, int useWiki){ +static int textLength(const char *z, int flags){ int n = 0; - int c; - while( (c = z[0])!=0 && c!='<' && c!='&' && - (useWiki==0 || (c!='[' && c!='\n')) ){ + int c, x1, x2; + + if( flags & ALLOW_WIKI ){ + x1 = '['; + x2 = '\n'; + }else if( flags & ALLOW_LINKS ){ + x1 = '['; + x2 = 0; + }else{ + x1 = x2 = 0; + } + while( (c = z[0])!=0 && c!='<' && c!='&' && c!=x1 && c!=x2 ){ n++; z++; } return n; } @@ -458,14 +551,14 @@ */ static int isElement(const char *z){ int i; assert( z[0]=='&' ); if( z[1]=='#' ){ - for(i=2; isdigit(z[i]); i++){} + for(i=2; fossil_isdigit(z[i]); i++){} return i>2 && z[i]==';'; }else{ - for(i=1; isalpha(z[i]); i++){} + for(i=1; fossil_isalpha(z[i]); i++){} return i>1 && z[i]==';'; } } /* @@ -487,11 +580,11 @@ while( z[n]==' ' || z[n]=='\t' ){ if( z[n]=='\t' ) i++; i++; n++; } - if( i<2 || isspace(z[n]) ) return 0; + if( i<2 || fossil_isspace(z[n]) ) return 0; return n; } /* ** Check to see if the z[] string is the beginning of a enumeration value. @@ -512,11 +605,11 @@ if( z[n]=='\t' ) i++; i++; n++; } if( i<2 ) return 0; - for(i=0; isdigit(z[n]); i++, n++){} + for(i=0; fossil_isdigit(z[n]); i++, n++){} if( i==0 ) return 0; if( z[n]=='.' ){ n++; } i = 0; @@ -523,11 +616,11 @@ while( z[n]==' ' || z[n]=='\t' ){ if( z[n]=='\t' ) i++; i++; n++; } - if( i<2 || isspace(z[n]) ) return 0; + if( i<2 || fossil_isspace(z[n]) ) return 0; return n; } /* ** Check to see if the z[] string is the beginning of an indented @@ -541,11 +634,11 @@ while( z[n]==' ' || z[n]=='\t' ){ if( z[n]=='\t' ) i++; i++; n++; } - if( i<2 || isspace(z[n]) ) return 0; + if( i<2 || fossil_isspace(z[n]) ) return 0; return n; } /* ** Check to see if the z[] string is a wiki hyperlink. If it is, @@ -569,11 +662,11 @@ ** characters in that token. Write the token type into *pTokenType. */ static int nextWikiToken(const char *z, Renderer *p, int *pTokenType){ int n; if( z[0]=='<' ){ - n = markupLength(z); + n = htmlTagLength(z); if( n>0 ){ *pTokenType = TOKEN_MARKUP; return n; }else{ *pTokenType = TOKEN_CHARACTER; @@ -588,16 +681,16 @@ if( z[0]=='\n' ){ n = paragraphBreakLength(z); if( n>0 ){ *pTokenType = TOKEN_PARAGRAPH; return n; - }else if( isspace(z[1]) ){ + }else if( fossil_isspace(z[1]) ){ *pTokenType = TOKEN_NEWLINE; return 1; } } - if( (p->state & AT_NEWLINE)!=0 && isspace(z[0]) ){ + if( (p->state & AT_NEWLINE)!=0 && fossil_isspace(z[0]) ){ n = listItemLength(z, '*'); if( n>0 ){ *pTokenType = TOKEN_BUL_LI; return n; } @@ -610,11 +703,11 @@ if( n>0 ){ *pTokenType = TOKEN_ENUM; return n; } } - if( (p->state & AT_PARAGRAPH)!=0 && isspace(z[0]) ){ + if( (p->state & AT_PARAGRAPH)!=0 && fossil_isspace(z[0]) ){ n = indentLength(z); if( n>0 ){ *pTokenType = TOKEN_INDENT; return n; } @@ -621,13 +714,16 @@ } if( z[0]=='[' && (n = linkLength(z))>0 ){ *pTokenType = TOKEN_LINK; return n; } + }else if( (p->state & ALLOW_LINKS)!=0 && z[0]=='[' && (n = linkLength(z))>0 ){ + *pTokenType = TOKEN_LINK; + return n; } *pTokenType = TOKEN_TEXT; - return 1 + textLength(z+1, p->state & ALLOW_WIKI); + return 1 + textLength(z+1, p->state); } /* ** Parse only Wiki links, return everything else as TOKEN_RAW. ** @@ -666,11 +762,11 @@ ** Parse this element into the p structure. ** ** The content of z[] might be modified by converting characters ** to lowercase and by inserting some "\000" characters. 
*/ -static void parseMarkup(ParsedMarkup *p, char *z){ +static int parseMarkup(ParsedMarkup *p, char *z){ int i, j, c; int iACode; char *zValue; int seen = 0; char zTag[100]; @@ -681,37 +777,48 @@ }else{ p->endTag = 0; i = 1; } j = 0; - while( isalnum(z[i]) ){ - if( j<sizeof(zTag)-1 ) zTag[j++] = tolower(z[i]); + while( fossil_isalnum(z[i]) ){ + if( j<sizeof(zTag)-1 ) zTag[j++] = fossil_tolower(z[i]); i++; } zTag[j] = 0; p->iCode = findTag(zTag); p->iType = aMarkup[p->iCode].iType; p->nAttr = 0; - while( isspace(z[i]) ){ i++; } - while( p->nAttr<8 && isalpha(z[i]) ){ - int attrOk; /* True to preserver attribute. False to ignore it */ + c = 0; + if( z[i]=='-' ){ + p->aAttr[0].iACode = iACode = ATTR_ID; + i++; + p->aAttr[0].zValue = &z[i]; + while( fossil_isalnum(z[i]) ){ i++; } + p->aAttr[0].cTerm = c = z[i]; + z[i++] = 0; + p->nAttr = 1; + if( c=='>' ) return 0; + } + while( fossil_isspace(z[i]) ){ i++; } + while( c!='>' && p->nAttr<8 && fossil_isalpha(z[i]) ){ + int attrOk; /* True to preserve attribute. False to ignore it */ j = 0; - while( isalnum(z[i]) ){ - if( j<sizeof(zTag)-1 ) zTag[j++] = tolower(z[i]); + while( fossil_isalnum(z[i]) ){ + if( j<sizeof(zTag)-1 ) zTag[j++] = fossil_tolower(z[i]); i++; } zTag[j] = 0; p->aAttr[p->nAttr].iACode = iACode = findAttr(zTag); attrOk = iACode!=0 && (seen & aAttribute[iACode].iMask)==0; - while( isspace(z[i]) ){ z++; } + while( fossil_isspace(z[i]) ){ z++; } if( z[i]!='=' ){ p->aAttr[p->nAttr].zValue = 0; p->aAttr[p->nAttr].cTerm = 0; c = 0; }else{ i++; - while( isspace(z[i]) ){ z++; } + while( fossil_isspace(z[i]) ){ z++; } if( z[i]=='"' ){ i++; zValue = &z[i]; while( z[i] && z[i]!='"' ){ i++; } }else if( z[i]=='\'' ){ @@ -718,11 +825,11 @@ i++; zValue = &z[i]; while( z[i] && z[i]!='\'' ){ i++; } }else{ zValue = &z[i]; - while( !isspace(z[i]) && z[i]!='>' ){ z++; } + while( !fossil_isspace(z[i]) && z[i]!='>' ){ z++; } } if( attrOk ){ p->aAttr[p->nAttr].zValue = zValue; p->aAttr[p->nAttr].cTerm = c = z[i]; z[i] = 0; @@ -731,13 +838,14 @@ } if( attrOk ){ seen |= aAttribute[iACode].iMask; p->nAttr++; } - while( isspace(z[i]) ){ i++; } + while( fossil_isspace(z[i]) ){ i++; } if( z[i]=='>' || (z[i]=='/' && z[i+1]=='>') ) break; } + return seen; } /* ** Render markup on the given blob. */ @@ -748,12 +856,20 @@ }else{ blob_appendf(pOut, "<%s", aMarkup[p->iCode].zName); for(i=0; i<p->nAttr; i++){ blob_appendf(pOut, " %s", aAttribute[p->aAttr[i].iACode].zName); if( p->aAttr[i].zValue ){ - blob_appendf(pOut, "=\"%s\"", p->aAttr[i].zValue); + const char *zVal = p->aAttr[i].zValue; + if( p->aAttr[i].iACode==ATTR_SRC && zVal[0]=='/' ){ + blob_appendf(pOut, "=\"%s%s\"", g.zTop, zVal); + }else{ + blob_appendf(pOut, "=\"%s\"", zVal); + } } + } + if (p->iType & MUTYPE_SINGLE){ + blob_append(pOut, " /", 2); } blob_append(pOut, ">", 1); } } @@ -765,27 +881,79 @@ static void unparseMarkup(ParsedMarkup *p){ int i, n; for(i=0; i<p->nAttr; i++){ char *z = p->aAttr[i].zValue; if( z==0 ) continue; - n = strlen(z); - z[n] = p->aAttr[i].cTerm; + if( p->aAttr[i].cTerm ){ + n = strlen(z); + z[n] = p->aAttr[i].cTerm; + } + } +} + +/* +** Return the value of attribute attrId. Return NULL if there is no +** ID attribute. +*/ +static const char *attributeValue(ParsedMarkup *p, int attrId){ + int i; + for(i=0; i<p->nAttr; i++){ + if( p->aAttr[i].iACode==attrId ){ + return p->aAttr[i].zValue; + } } + return 0; } /* ** Return the ID attribute for markup. Return NULL if there is no ** ID attribute. 
*/ static const char *markupId(ParsedMarkup *p){ - int i; - for(i=0; i<p->nAttr; i++){ - if( p->aAttr[i].iACode==ATTR_ID ){ - return p->aAttr[i].zValue; - } - } - return 0; + return attributeValue(p, ATTR_ID); +} + +/* +** Check markup pMarkup to see if it is a hyperlink with class "button" +** that is follows by simple text and an </a> only. Example: +** +** <a class="button" href="../index.wiki">Index</a> +** +** If the markup matches this pattern, and if the WIKI_BUTTONS flag was +** passed to wiki_convert(), then transform this link into a submenu +** button, skip the text, and set *pN equal to the total length of the +** text through the end of </a> and return true. If the markup does +** not match or if WIKI_BUTTONS is not set, then make no changes to *pN +** and return false. +*/ +static int isButtonHyperlink( + Renderer *p, /* Renderer state */ + ParsedMarkup *pMarkup, /* Potential button markup */ + const char *z, /* Complete text of Wiki */ + int *pN /* Characters of z[] consumed */ +){ + const char *zClass; + const char *zHref; + char *zTag; + int i, j; + if( (p->state & WIKI_BUTTONS)==0 ) return 0; + zClass = attributeValue(pMarkup, ATTR_CLASS); + if( zClass==0 ) return 0; + if( fossil_strcmp(zClass, "button")!=0 ) return 0; + zHref = attributeValue(pMarkup, ATTR_HREF); + if( zHref==0 ) return 0; + i = *pN; + while( z[i] && z[i]!='<' ){ i++; } + if( fossil_strnicmp(&z[i], "</a>",4)!=0 ) return 0; + for(j=*pN; fossil_isspace(z[j]); j++){} + zTag = mprintf("%.*s", i-j, &z[j]); + j = (int)strlen(zTag); + while( j>0 && fossil_isspace(zTag[j-1]) ){ j--; } + if( j==0 ) return 0; + style_submenu_element(zTag, zTag, "%s", zHref); + *pN = i+4; + return 1; } /* ** Pop a single element off of the stack. As the element is popped, ** output its end tag if it is not a </div> tag. @@ -793,11 +961,11 @@ static void popStack(Renderer *p){ if( p->nStack ){ int iCode; p->nStack--; iCode = p->aStack[p->nStack].iCode; - if( iCode!=MARKUP_DIV && p->pOut ){ + if( (iCode!=MARKUP_DIV || p->aStack[p->nStack].zId==0) && p->pOut ){ blob_appendf(p->pOut, "</%s>", aMarkup[iCode].zName); } } } @@ -806,14 +974,11 @@ ** if necessary. */ static void pushStackWithId(Renderer *p, int elem, const char *zId, int w){ if( p->nStack>=p->nAlloc ){ p->nAlloc = p->nAlloc*2 + 100; - p->aStack = realloc(p->aStack, p->nAlloc*sizeof(p->aStack[0])); - if( p->aStack==0 ){ - fossil_panic("out of memory"); - } + p->aStack = fossil_realloc(p->aStack, p->nAlloc*sizeof(p->aStack[0])); } p->aStack[p->nStack].iCode = elem; p->aStack[p->nStack].zId = zId; p->aStack[p->nStack].allowWiki = w; p->nStack++; @@ -848,11 +1013,11 @@ int i; assert( zId!=0 ); for(i=p->nStack-1; i>=0; i--){ if( p->aStack[i].iCode!=iTag ) continue; if( p->aStack[i].zId==0 ) continue; - if( strcmp(zId, p->aStack[i].zId)!=0 ) continue; + if( fossil_strcmp(zId, p->aStack[i].zId)!=0 ) continue; break; } return i; } @@ -879,23 +1044,23 @@ /* ** Begin a new paragraph if that something that is needed. */ static void startAutoParagraph(Renderer *p){ - if( p->wantAutoParagraph==0 || p->wikiList==MARKUP_OL || p->wikiList==MARKUP_UL ) return; - blob_appendf(p->pOut, "<p>", -1); - pushStack(p, MARKUP_P); + if( p->wantAutoParagraph==0 ) return; + if( p->state & WIKI_LINKSONLY ) return; + if( p->wikiList==MARKUP_OL || p->wikiList==MARKUP_UL ) return; + blob_append(p->pOut, "<p>", -1); p->wantAutoParagraph = 0; p->inAutoParagraph = 1; } /* ** End a paragraph if we are in one. 
*/ static void endAutoParagraph(Renderer *p){ if( p->inAutoParagraph ){ - popStackToTag(p, MARKUP_P); p->inAutoParagraph = 0; } } /* @@ -906,10 +1071,34 @@ int n = strlen(z); if( n<4 || n>UUID_SIZE ) return 0; if( !validate16(z, n) ) return 0; return 1; } + +/* +** Return TRUE if a UUID corresponds to an artifact in this +** repository. +*/ +static int in_this_repo(const char *zUuid){ + static Stmt q; + int rc; + int n; + char zU2[UUID_SIZE+1]; + db_static_prepare(&q, + "SELECT 1 FROM blob WHERE uuid>=:u AND uuid<:u2" + ); + db_bind_text(&q, ":u", zUuid); + n = (int)strlen(zUuid); + if( n>=sizeof(zU2) ) n = sizeof(zU2)-1; + memcpy(zU2, zUuid, n); + zU2[n-1]++; + zU2[n] = 0; + db_bind_text(&q, ":u2", zU2); + rc = db_step(&q); + db_reset(&q); + return rc==SQLITE_ROW; +} /* ** zTarget is guaranteed to be a UUID. It might be the UUID of a ticket. ** If it is, store in *pClosed a true or false depending on whether or not ** the ticket is closed and return true. If zTarget @@ -933,11 +1122,11 @@ if( once ){ const char *zClosedExpr = db_get("ticket-closed-expr", "status='Closed'"); db_static_prepare(&q, "SELECT %s FROM ticket " " WHERE tkt_uuid>=:lwr AND tkt_uuid<:upr", - zClosedExpr + zClosedExpr /*safe-for-%s*/ ); once = 0; } db_bind_text(&q, ":lwr", zLower); db_bind_text(&q, ":upr", zUpper); @@ -948,16 +1137,41 @@ rc = 0; } db_reset(&q); return rc; } + +/* +** Return a pointer to the name part of zTarget (skipping the "wiki:" prefix +** if there is one) if zTarget is a valid wiki page name. Return NULL if +** zTarget names a page that does not exist. +*/ +static const char *validWikiPageName(Renderer *p, const char *zTarget){ + if( strncmp(zTarget, "wiki:", 5)==0 + && wiki_name_is_wellformed((const unsigned char*)zTarget) ){ + return zTarget+5; + } + if( strcmp(zTarget, "Sandbox")==0 ) return zTarget; + if( wiki_name_is_wellformed((const unsigned char *)zTarget) + && ((p->state & WIKI_NOBADLINKS)==0 || + db_exists("SELECT 1 FROM tag WHERE tagname GLOB 'wiki-%q'" + " AND (SELECT value FROM tagxref WHERE tagid=tag.tagid" + " ORDER BY mtime DESC LIMIT 1) > 0", zTarget)) + ){ + return zTarget; + } + return 0; +} /* ** Resolve a hyperlink. The zTarget argument is the content of the [...] -** in the wiki. Append to the output string whatever text is approprate +** in the wiki. Append to the output string whatever text is appropriate ** for opening the hyperlink. Write into zClose[0...nClose-1] text that will ** close the markup. +** +** If this routine determines that no hyperlink should be generated, then +** set zClose[0] to 0. ** ** Actually, this routine might or might not append the hyperlink, depending ** on current rendering rules: specifically does the current user have ** "History" permission. ** @@ -969,84 +1183,98 @@ ** [/path] ** ** [./relpath] ** ** [WikiPageName] +** [wiki:WikiPageName] ** ** [0123456789abcdef] ** ** [#fragment] ** ** [2010-02-27 07:13] */ static void openHyperlink( Renderer *p, /* Rendering context */ - const char *zTarget, /* Hyperlink traget; text within [...] */ + const char *zTarget, /* Hyperlink target; text within [...] 
*/ char *zClose, /* Write hyperlink closing text here */ - int nClose /* Bytes available in zClose[] */ + int nClose, /* Bytes available in zClose[] */ + const char *zOrig /* Complete document text */ ){ const char *zTerm = "</a>"; - assert( nClose>=20 ); + const char *z; + assert( nClose>=20 ); if( strncmp(zTarget, "http:", 5)==0 || strncmp(zTarget, "https:", 6)==0 || strncmp(zTarget, "ftp:", 4)==0 || strncmp(zTarget, "mailto:", 7)==0 ){ blob_appendf(p->pOut, "<a href=\"%s\">", zTarget); - /* zTerm = "⟾</a>"; // doesn't work on windows */ }else if( zTarget[0]=='/' ){ - if( 1 /* g.okHistory */ ){ - blob_appendf(p->pOut, "<a href=\"%s%h\">", g.zBaseURL, zTarget); - }else{ - zTerm = ""; - } - }else if( zTarget[0]=='.' || zTarget[0]=='#' ){ - if( 1 /* g.okHistory */ ){ - blob_appendf(p->pOut, "<a href=\"%h\">", zTarget); - }else{ - zTerm = ""; - } + blob_appendf(p->pOut, "<a href=\"%R%h\">", zTarget); + }else if( zTarget[0]=='.' + && (zTarget[1]=='/' || (zTarget[1]=='.' && zTarget[2]=='/')) + && (p->state & WIKI_LINKSONLY)==0 ){ + blob_appendf(p->pOut, "<a href=\"%h\">", zTarget); + }else if( zTarget[0]=='#' ){ + blob_appendf(p->pOut, "<a href=\"%h\">", zTarget); }else if( is_valid_uuid(zTarget) ){ int isClosed = 0; if( is_ticket(zTarget, &isClosed) ){ /* Special display processing for tickets. Display the hyperlink ** as crossed out if the ticket is closed. */ if( isClosed ){ - if( g.okHistory ){ - blob_appendf(p->pOut,"<a href=\"%s/info/%s\"><s>", - g.zBaseURL, zTarget - ); - zTerm = "</s></a>"; - }else{ - blob_appendf(p->pOut,"<s>"); - zTerm = "</s>"; - } - }else{ - if( g.okHistory ){ - blob_appendf(p->pOut,"<a href=\"%s/info/%s\">", - g.zBaseURL, zTarget - ); - }else{ - zTerm = ""; - } - } - }else if( g.okHistory ){ - blob_appendf(p->pOut, "<a href=\"%s/info/%s\">", g.zBaseURL, zTarget); - } - }else if( strlen(zTarget)>=10 && isdigit(zTarget[0]) && zTarget[4]=='-' - && db_int(0, "SELECT datetime(%Q) NOT NULL", zTarget) ){ - blob_appendf(p->pOut, "<a href=\"%s/timeline?c=%T\">", g.zBaseURL, zTarget); - }else if( wiki_name_is_wellformed((const unsigned char *)zTarget) ){ - blob_appendf(p->pOut, "<a href=\"%s/wiki?name=%T\">", g.zBaseURL, zTarget); - }else{ - blob_appendf(p->pOut, "[bad-link: %h]", zTarget); - zTerm = ""; + if( g.perm.Hyperlink ){ + blob_appendf(p->pOut, + "%z<span class=\"wikiTagCancelled\">[", + href("%R/info/%s",zTarget) + ); + zTerm = "]</span></a>"; + }else{ + blob_appendf(p->pOut,"<span class=\"wikiTagCancelled\">["); + zTerm = "]</span>"; + } + }else{ + if( g.perm.Hyperlink ){ + blob_appendf(p->pOut,"%z[", href("%R/info/%s", zTarget)); + zTerm = "]</a>"; + }else{ + blob_appendf(p->pOut, "["); + zTerm = "]"; + } + } + }else if( !in_this_repo(zTarget) ){ + if( (p->state & (WIKI_LINKSONLY|WIKI_NOBADLINKS))!=0 ){ + zTerm = ""; + }else{ + blob_appendf(p->pOut, "<span class=\"brokenlink\">["); + zTerm = "]</span>"; + } + }else if( g.perm.Hyperlink ){ + blob_appendf(p->pOut, "%z[",href("%R/info/%s", zTarget)); + zTerm = "]</a>"; + }else{ + zTerm = ""; + } + }else if( strlen(zTarget)>=10 && fossil_isdigit(zTarget[0]) && zTarget[4]=='-' + && db_int(0, "SELECT datetime(%Q) NOT NULL", zTarget) ){ + blob_appendf(p->pOut, "<a href=\"%R/timeline?c=%T\">", zTarget); + }else if( (z = validWikiPageName(p, zTarget))!=0 ){ + blob_appendf(p->pOut, "<a href=\"%R/wiki?name=%T\">", z); + }else if( zTarget>=&zOrig[2] && !fossil_isspace(zTarget[-2]) ){ + /* Probably an array subscript in code */ + zTerm = ""; + }else if( (p->state & (WIKI_NOBADLINKS|WIKI_LINKSONLY))!=0 ){ + zTerm = ""; + 
}else{ + blob_appendf(p->pOut, "<span class=\"brokenlink\">[%h]", zTarget); + zTerm = "</span>"; } assert( strlen(zTerm)<nClose ); - strcpy(zClose, zTerm); + sqlite3_snprintf(nClose, zClose, "%s", zTerm); } /* ** Check to see if the given parsed markup is the correct ** </verbatim> tag. @@ -1057,11 +1285,11 @@ if( pMarkup->iCode!=MARKUP_VERBATIM ) return 0; if( !pMarkup->endTag ) return 0; if( p->zVerbatimId==0 ) return 1; if( pMarkup->nAttr!=1 ) return 0; z = pMarkup->aAttr[0].zValue; - return strcmp(z, p->zVerbatimId)==0; + return fossil_strcmp(z, p->zVerbatimId)==0; } /* ** Return the MUTYPE for the top of the stack. */ @@ -1079,14 +1307,20 @@ static void wiki_render(Renderer *p, char *z){ int tokenType; ParsedMarkup markup; int n; int inlineOnly = (p->state & INLINE_MARKUP_ONLY)!=0; - int wikiUseHtml = (p->state & WIKI_USE_HTML)!=0; + int wikiHtmlOnly = (p->state & (WIKI_HTMLONLY | WIKI_LINKSONLY))!=0; + int linksOnly = (p->state & WIKI_LINKSONLY)!=0; + char *zOrig = z; + + /* Make sure the attribute constants and names still align + ** following changes in the attribute list. */ + assert( fossil_strcmp(aAttribute[ATTR_WIDTH].zName, "width")==0 ); while( z[0] ){ - if( wikiUseHtml ){ + if( wikiHtmlOnly ){ n = nextRawToken(z, p, &tokenType); }else{ n = nextWikiToken(z, p, &tokenType); } p->state &= ~(AT_NEWLINE|AT_PARAGRAPH); @@ -1099,11 +1333,11 @@ if( p->wikiList ){ popStackToTag(p, p->wikiList); p->wikiList = 0; } endAutoParagraph(p); - blob_appendf(p->pOut, "\n\n", 1); + blob_append(p->pOut, "\n\n", 1); p->wantAutoParagraph = 1; } p->state |= AT_PARAGRAPH|AT_NEWLINE; break; } @@ -1118,10 +1352,11 @@ }else{ if( p->wikiList!=MARKUP_UL ){ if( p->wikiList ){ popStackToTag(p, p->wikiList); } + endAutoParagraph(p); pushStack(p, MARKUP_UL); blob_append(p->pOut, "<ul>", 4); p->wikiList = MARKUP_UL; } popStackToTag(p, MARKUP_LI); @@ -1137,10 +1372,11 @@ }else{ if( p->wikiList!=MARKUP_OL ){ if( p->wikiList ){ popStackToTag(p, p->wikiList); } + endAutoParagraph(p); pushStack(p, MARKUP_OL); blob_append(p->pOut, "<ol>", 4); p->wikiList = MARKUP_OL; } popStackToTag(p, MARKUP_LI); @@ -1156,10 +1392,11 @@ }else{ if( p->wikiList!=MARKUP_OL ){ if( p->wikiList ){ popStackToTag(p, p->wikiList); } + endAutoParagraph(p); pushStack(p, MARKUP_OL); blob_append(p->pOut, "<ol>", 4); p->wikiList = MARKUP_OL; } popStackToTag(p, MARKUP_LI); @@ -1192,53 +1429,80 @@ char *zTarget; char *zDisplay = 0; int i, j; int savedState; char zClose[20]; + char cS1 = 0; + int iS1 = 0; startAutoParagraph(p); zTarget = &z[1]; for(i=1; z[i] && z[i]!=']'; i++){ if( z[i]=='|' && zDisplay==0 ){ zDisplay = &z[i+1]; - z[i] = 0; - for(j=i-1; j>0 && isspace(z[j]); j--){ z[j] = 0; } + for(j=i; j>0 && fossil_isspace(z[j-1]); j--){} + iS1 = j; + cS1 = z[j]; + z[j] = 0; } } z[i] = 0; if( zDisplay==0 ){ zDisplay = zTarget; }else{ - while( isspace(*zDisplay) ) zDisplay++; - } - openHyperlink(p, zTarget, zClose, sizeof(zClose)); - savedState = p->state; - p->state &= ~ALLOW_WIKI; - p->state |= FONT_MARKUP_ONLY; - wiki_render(p, zDisplay); - p->state = savedState; - blob_append(p->pOut, zClose, -1); + while( fossil_isspace(*zDisplay) ) zDisplay++; + } + openHyperlink(p, zTarget, zClose, sizeof(zClose), zOrig); + if( linksOnly || zClose[0]==0 || p->inVerbatim ){ + if( cS1 ) z[iS1] = cS1; + if( zClose[0]!=']' ){ + blob_appendf(p->pOut, "[%h]%s", zTarget, zClose); + }else{ + blob_appendf(p->pOut, "%h%s", zTarget, zClose); + } + }else{ + savedState = p->state; + p->state &= ~ALLOW_WIKI; + p->state |= FONT_MARKUP_ONLY; + wiki_render(p, zDisplay); + 
p->state = savedState; + blob_append(p->pOut, zClose, -1); + } break; } case TOKEN_TEXT: { - startAutoParagraph(p); + int i; + for(i=0; i<n && fossil_isspace(z[i]); i++){} + if( i<n ) startAutoParagraph(p); blob_append(p->pOut, z, n); break; } case TOKEN_RAW: { - blob_append(p->pOut, z, n); + if( linksOnly ){ + htmlize_to_blob(p->pOut, z, n); + }else{ + blob_append(p->pOut, z, n); + } break; } case TOKEN_MARKUP: { const char *zId; int iDiv; - parseMarkup(&markup, z); + int mAttr = parseMarkup(&markup, z); + + /* Convert <title> to <h1 align='center'> */ + if( markup.iCode==MARKUP_TITLE && !p->inVerbatim ){ + markup.iCode = MARKUP_H1; + markup.nAttr = 1; + markup.aAttr[0].iACode = AMSK_ALIGN; + markup.aAttr[0].zValue = "center"; + markup.aAttr[0].cTerm = 0; + } /* Markup of the form </div id=ID> where there is a matching - ** ID somewhere on the stack. Exit the verbatim if were are in - ** it. Pop the stack up to the matching <div>. Discard the - ** </div> + ** ID somewhere on the stack. Exit any contained verbatim. + ** Pop the stack up to the matching <div>. Discard the </div> */ if( markup.iCode==MARKUP_DIV && markup.endTag && (zId = markupId(&markup))!=0 && (iDiv = findTagWithId(p, MARKUP_DIV, zId))>=0 ){ @@ -1308,40 +1572,46 @@ popStackToTag(p, markup.iCode); }else /* Push <div> markup onto the stack together with the id=ID attribute. */ - if( markup.iCode==MARKUP_DIV ){ + if( markup.iCode==MARKUP_DIV && (mAttr & ATTR_ID)!=0 ){ pushStackWithId(p, markup.iCode, markupId(&markup), (p->state & ALLOW_WIKI)!=0); }else /* Enter <verbatim> processing. With verbatim enabled, all other ** markup other than the corresponding end-tag with the same ID is ** ignored. */ if( markup.iCode==MARKUP_VERBATIM ){ - int vAttrIdx, vAttrDidAppend=0; + int ii, vAttrDidAppend=0; p->zVerbatimId = 0; p->inVerbatim = 1; p->preVerbState = p->state; p->state &= ~ALLOW_WIKI; - for (vAttrIdx = 0; vAttrIdx < markup.nAttr; vAttrIdx++){ - if( markup.aAttr[vAttrIdx].iACode == ATTR_ID ){ - p->zVerbatimId = markup.aAttr[0].zValue; - }else if( markup.aAttr[vAttrIdx].iACode == ATTR_TYPE ){ + for(ii=0; ii<markup.nAttr; ii++){ + if( markup.aAttr[ii].iACode == ATTR_ID ){ + p->zVerbatimId = markup.aAttr[ii].zValue; + }else if( markup.aAttr[ii].iACode==ATTR_TYPE ){ blob_appendf(p->pOut, "<pre name='code' class='%s'>", - markup.aAttr[vAttrIdx].zValue); + markup.aAttr[ii].zValue); vAttrDidAppend=1; + }else if( markup.aAttr[ii].iACode==ATTR_LINKS + && !is_false(markup.aAttr[ii].zValue) ){ + p->state |= ALLOW_LINKS; } } - if( !vAttrDidAppend ) + if( !vAttrDidAppend ) { + endAutoParagraph(p); blob_append(p->pOut, "<pre class='verbatim'>",-1); + } p->wantAutoParagraph = 0; }else if( markup.iType==MUTYPE_LI ){ if( backupToType(p, MUTYPE_LIST)==0 ){ + endAutoParagraph(p); pushStack(p, MARKUP_UL); blob_append(p->pOut, "<ul>", 4); } pushStack(p, MARKUP_LI); renderMarkup(p->pOut, &markup); @@ -1361,20 +1631,32 @@ pushStack(p, markup.iCode); renderMarkup(p->pOut, &markup); } }else if( markup.iType==MUTYPE_HYPERLINK ){ - popStackToTag(p, markup.iCode); - startAutoParagraph(p); - renderMarkup(p->pOut, &markup); - pushStack(p, markup.iCode); + if( !isButtonHyperlink(p, &markup, z, &n) ){ + popStackToTag(p, markup.iCode); + startAutoParagraph(p); + renderMarkup(p->pOut, &markup); + pushStack(p, markup.iCode); + } }else { if( markup.iType==MUTYPE_FONT ){ startAutoParagraph(p); - }else if( markup.iType==MUTYPE_BLOCK ){ + }else if( markup.iType==MUTYPE_BLOCK || markup.iType==MUTYPE_LIST ){ p->wantAutoParagraph = 0; + } + if( markup.iCode==MARKUP_HR 
+ || markup.iCode==MARKUP_H1 + || markup.iCode==MARKUP_H2 + || markup.iCode==MARKUP_H3 + || markup.iCode==MARKUP_H4 + || markup.iCode==MARKUP_H5 + || markup.iCode==MARKUP_P + ){ + endAutoParagraph(p); } if( (markup.iType & MUTYPE_STACK )!=0 ){ pushStack(p, markup.iCode); } renderMarkup(p->pOut, &markup); @@ -1384,69 +1666,88 @@ } z += n; } } -/* -** Skip over the UTF-8 Byte-Order-Mark that some broken Windows -** tools add to the beginning of text files. -*/ -char *skip_bom(char *z){ - static const char bom[] = { 0xEF, 0xBB, 0xBF }; - if( z && memcmp(z, bom, 3)==0 ) z += 3; - return z; -} - /* ** Transform the text in the pIn blob. Write the results ** into the pOut blob. The pOut blob should already be ** initialized. The output is merely appended to pOut. ** If pOut is NULL, then the output is appended to the CGI ** reply. */ void wiki_convert(Blob *pIn, Blob *pOut, int flags){ - char *z; Renderer renderer; memset(&renderer, 0, sizeof(renderer)); - renderer.state = ALLOW_WIKI|AT_NEWLINE|AT_PARAGRAPH; + renderer.renderFlags = flags; + renderer.state = ALLOW_WIKI|AT_NEWLINE|AT_PARAGRAPH|flags; if( flags & WIKI_NOBLOCK ){ renderer.state |= INLINE_MARKUP_ONLY; } if( flags & WIKI_INLINE ){ renderer.wantAutoParagraph = 0; }else{ renderer.wantAutoParagraph = 1; } if( wikiUsesHtml() ){ - renderer.state |= WIKI_USE_HTML; + renderer.state |= WIKI_HTMLONLY; } if( pOut ){ renderer.pOut = pOut; }else{ renderer.pOut = cgi_output_blob(); } - z = skip_bom(blob_str(pIn)); - wiki_render(&renderer, z); + blob_to_utf8_no_bom(pIn, 0); + wiki_render(&renderer, blob_str(pIn)); endAutoParagraph(&renderer); while( renderer.nStack ){ popStack(&renderer); } blob_append(renderer.pOut, "\n", 1); free(renderer.aStack); } + +/* +** Send a string as wiki to CGI output. +*/ +void wiki_write(const char *zIn, int flags){ + Blob in; + blob_init(&in, zIn, -1); + wiki_convert(&in, 0, flags); + blob_reset(&in); +} /* ** COMMAND: test-wiki-render +** +** %fossil test-wiki-render FILE [OPTIONS] +** +** Options: +** --buttons Set the WIKI_BUTTONS flag +** --htmlonly Set the WIKI_HTMLONLY flag +** --linksonly Set the WIKI_LINKSONLY flag +** --nobadlinks Set the WIKI_NOBADLINKS flag +** --inline Set the WIKI_INLINE flag +** --noblock Set the WIKI_NOBLOCK flag */ void test_wiki_render(void){ Blob in, out; + int flags = 0; + if( find_option("buttons",0,0)!=0 ) flags |= WIKI_BUTTONS; + if( find_option("htmlonly",0,0)!=0 ) flags |= WIKI_HTMLONLY; + if( find_option("linksonly",0,0)!=0 ) flags |= WIKI_LINKSONLY; + if( find_option("nobadlinks",0,0)!=0 ) flags |= WIKI_NOBADLINKS; + if( find_option("inline",0,0)!=0 ) flags |= WIKI_INLINE; + if( find_option("noblock",0,0)!=0 ) flags |= WIKI_NOBLOCK; + db_find_and_open_repository(0,0); + verify_all_options(); if( g.argc!=3 ) usage("FILE"); blob_zero(&out); blob_read_from_file(&in, g.argv[2]); - wiki_convert(&in, &out, 0); + wiki_convert(&in, &out, flags); blob_write_to_file(&out, "-"); } /* ** Search for a <title>... at the beginning of a wiki page. 
@@ -1459,19 +1760,28 @@ */ int wiki_find_title(Blob *pIn, Blob *pTitle, Blob *pTail){ char *z; int i; int iStart; - z = skip_bom(blob_str(pIn)); - for(i=0; isspace(z[i]); i++){} + blob_to_utf8_no_bom(pIn, 0); + z = blob_str(pIn); + for(i=0; fossil_isspace(z[i]); i++){} if( z[i]!='<' ) return 0; i++; if( strncmp(&z[i],"title>", 6)!=0 ) return 0; - iStart = i+6; + for(iStart=i+6; fossil_isspace(z[iStart]); iStart++){} for(i=iStart; z[i] && (z[i]!='<' || strncmp(&z[i],"",8)!=0); i++){} - if( z[i]!='<' ) return 0; - blob_init(pTitle, &z[iStart], i-iStart); + if( strncmp(&z[i],"",8)!=0 ){ + blob_init(pTitle, 0, 0); + blob_init(pTail, &z[iStart], -1); + return 1; + } + if( i-iStart>0 ){ + blob_init(pTitle, &z[iStart], i-iStart); + }else{ + blob_init(pTitle, 0, 0); + } blob_init(pTail, &z[i+8], -1); return 1; } /* @@ -1495,29 +1805,29 @@ Renderer renderer; int tokenType; ParsedMarkup markup; int n; int inlineOnly; - int wikiUseHtml = 0; + int wikiHtmlOnly = 0; memset(&renderer, 0, sizeof(renderer)); renderer.state = ALLOW_WIKI|AT_NEWLINE|AT_PARAGRAPH; if( flags & WIKI_NOBLOCK ){ renderer.state |= INLINE_MARKUP_ONLY; } if( wikiUsesHtml() ){ - renderer.state |= WIKI_USE_HTML; - wikiUseHtml = 1; + renderer.state |= WIKI_HTMLONLY; + wikiHtmlOnly = 1; } inlineOnly = (renderer.state & INLINE_MARKUP_ONLY)!=0; if( replaceFlag ){ db_multi_exec("DELETE FROM backlink WHERE srctype=%d AND srcid=%d", srctype, srcid); } while( z[0] ){ - if( wikiUseHtml ){ + if( wikiHtmlOnly ){ n = nextRawToken(z, &renderer, &tokenType); }else{ n = nextWikiToken(z, &renderer, &tokenType); } switch( tokenType ){ @@ -1625,20 +1935,18 @@ /* Enter processing. With verbatim enabled, all other ** markup other than the corresponding end-tag with the same ID is ** ignored. */ if( markup.iCode==MARKUP_VERBATIM ){ - int vAttrIdx, vAttrDidAppend=0; + int vAttrIdx; renderer.zVerbatimId = 0; renderer.inVerbatim = 1; renderer.preVerbState = renderer.state; renderer.state &= ~ALLOW_WIKI; for (vAttrIdx = 0; vAttrIdx < markup.nAttr; vAttrIdx++){ if( markup.aAttr[vAttrIdx].iACode == ATTR_ID ){ renderer.zVerbatimId = markup.aAttr[0].zValue; - }else if( markup.aAttr[vAttrIdx].iACode == ATTR_TYPE ){ - vAttrDidAppend=1; } } renderer.wantAutoParagraph = 0; } @@ -1650,7 +1958,287 @@ default: { break; } } z += n; + } + free(renderer.aStack); +} + +/* +** Get the next HTML token. +** +** z points to the start of a token. Return the number of +** characters in that token. +*/ +static int nextHtmlToken(const char *z){ + int n; + char c; + if( (c=z[0])=='<' ){ + n = htmlTagLength(z); + if( n<=0 ) n = 1; + }else if( fossil_isspace(c) ){ + for(n=1; z[n] && fossil_isspace(z[n]); n++){} + }else if( c=='&' ){ + n = z[1]=='#' ? 2 : 1; + while( fossil_isalnum(z[n]) ) n++; + if( z[n]==';' ) n++; + }else{ + n = 1; + for(n=1; 1; n++){ + if( (c = z[n]) > '<' ) continue; + if( c=='<' || c=='&' || fossil_isspace(c) || c==0 ) break; + } + } + return n; +} + +/* +** Attempt to reformat messy HTML to be easily readable by humans. +** +** * Try to keep lines less than 80 characters in length +** * Collapse white space into a single space +** * Put a blank line before: +**
          <p>, <blockquote>, <table>, and other block-level markup
+**    *  Put a newline after <br> and <hr>
+**    *  Start each of the following elements on a new line:
+**          <ul>, <ol>, <li>, <tr>, <td>, <th>, and <hr>
+**    *  Except, do not do any reformatting inside of <pre>...</pre>
        +*/ +void htmlTidy(const char *zIn, Blob *pOut){ + int n; + int nPre = 0; + int iCur = 0; + int wantSpace = 0; + int omitSpace = 1; + while( zIn[0] ){ + n = nextHtmlToken(zIn); + if( zIn[0]=='<' && n>1 ){ + int i, j; + int isCloseTag; + int eTag; + int eType; + char zTag[32]; + isCloseTag = zIn[1]=='/'; + for(i=0, j=1+isCloseTag; i<30 && fossil_isalnum(zIn[j]); i++, j++){ + zTag[i] = fossil_tolower(zIn[j]); + } + zTag[i] = 0; + eTag = findTag(zTag); + eType = aMarkup[eTag].iType; + if( eTag==MARKUP_PRE ){ + if( isCloseTag ){ + nPre--; + blob_append(pOut, zIn, n); + zIn += n; + if( nPre==0 ){ blob_append(pOut, "\n", 1); iCur = 0; } + continue; + }else{ + if( iCur && nPre==0 ){ blob_append(pOut, "\n", 1); iCur = 0; } + nPre++; + } + }else if( eType & (MUTYPE_BLOCK|MUTYPE_TABLE) ){ + if( !isCloseTag && nPre==0 && blob_size(pOut)>0 ){ + blob_append(pOut, "\n\n", 1 + (iCur>0)); + iCur = 0; + } + wantSpace = 0; + omitSpace = 1; + }else if( (eType & (MUTYPE_LIST|MUTYPE_LI|MUTYPE_TR|MUTYPE_TD))!=0 + || eTag==MARKUP_HR + ){ + if( nPre==0 && (!isCloseTag || (eType&MUTYPE_LIST)!=0) && iCur>0 ){ + blob_append(pOut, "\n", 1); + iCur = 0; + } + wantSpace = 0; + omitSpace = 1; + } + if( wantSpace && nPre==0 ){ + if( iCur+n+1>=80 ){ + blob_append(pOut, "\n", 1); + iCur = 0; + }else{ + blob_append(pOut, " ", 1); + iCur++; + } + } + blob_append(pOut, zIn, n); + iCur += n; + wantSpace = 0; + if( eTag==MARKUP_BR || eTag==MARKUP_HR ){ + blob_append(pOut, "\n", 1); + iCur = 0; + } + }else if( fossil_isspace(zIn[0]) ){ + if( nPre ){ + blob_append(pOut, zIn, n); + }else{ + wantSpace = !omitSpace; + } + }else{ + if( wantSpace && nPre==0 ){ + if( iCur+n+1>=80 ){ + blob_append(pOut, "\n", 1); + iCur = 0; + }else{ + blob_append(pOut, " ", 1); + iCur++; + } + } + blob_append(pOut, zIn, n); + iCur += n; + wantSpace = omitSpace = 0; + } + zIn += n; + } + if( iCur ) blob_append(pOut, "\n", 1); +} + +/* +** COMMAND: test-html-tidy +** +** Run the htmlTidy() routine on the content of all files named on +** the command-line and write the results to standard output. +*/ +void test_html_tidy(void){ + Blob in, out; + int i; + + for(i=2; i markup. +** If there is no , then create a blank first line. +*/ +void html_to_plaintext(const char *zIn, Blob *pOut){ + int n; + int i, j; + int inTitle = 0; /* True between <title>... */ + int seenText = 0; /* True after first non-whitespace seen */ + int nNL = 0; /* Number of \n characters at the end of pOut */ + int nWS = 0; /* True if pOut ends with whitespace */ + while( fossil_isspace(zIn[0]) ) zIn++; + while( zIn[0] ){ + n = nextHtmlToken(zIn); + if( zIn[0]=='<' && n>1 ){ + int isCloseTag; + int eTag; + int eType; + char zTag[32]; + isCloseTag = zIn[1]=='/'; + for(i=0, j=1+isCloseTag; i<30 && fossil_isalnum(zIn[j]); i++, j++){ + zTag[i] = fossil_tolower(zIn[j]); + } + zTag[i] = 0; + eTag = findTag(zTag); + eType = aMarkup[eTag].iType; + if( eTag==MARKUP_INVALID && fossil_strnicmp(zIn," ' ' within */ + for(i=0; i<n; i++) if( zIn[i]=='\n' ) nNL++; + } + if( !nWS ){ + blob_append(pOut, nNL ? 
"\n" : " ", 1); + nWS = 1; + } + } + }else if( zIn[0]=='&' ){ + char c = '?'; + if( zIn[1]=='#' ){ + int x = atoi(&zIn[1]); + if( x>0 && x<=127 ) c = x; + }else{ + static const struct { int n; char c; char *z; } aEntity[] = { + { 5, '&', "&" }, + { 4, '<', "<" }, + { 4, '>', ">" }, + { 6, ' ', " " }, + }; + int jj; + for(jj=0; jj<ArraySize(aEntity); jj++){ + if( aEntity[jj].n==n && strncmp(aEntity[jj].z,zIn,n)==0 ){ + c = aEntity[jj].c; + break; + } + } + } + if( fossil_isspace(c) ){ + if( nWS==0 && seenText ) blob_append(pOut, &c, 1); + nWS = 1; + nNL = c=='\n'; + }else{ + if( !seenText && !inTitle ) blob_append(pOut, "\n", 1); + seenText = 1; + nNL = nWS = 0; + blob_append(pOut, &c, 1); + } + }else{ + if( !seenText && !inTitle ) blob_append(pOut, "\n", 1); + seenText = 1; + nNL = nWS = 0; + blob_append(pOut, zIn, n); + } + zIn += n; + } + if( nNL==0 ) blob_append(pOut, "\n", 1); +} + +/* +** COMMAND: test-html-to-text +** +** Usage: %fossil test-html-to-text FILE ... +** +** Read all files named on the command-line. Convert the file +** content from HTML to text and write the results on standard +** output. +** +** This command is intended as a test and debug interface for +** the html_to_plaintext() routine. +*/ +void test_html_to_text(void){ + Blob in, out; + int i; + + for(i=2; i<g.argc; i++){ + blob_read_from_file(&in, g.argv[i]); + blob_zero(&out); + html_to_plaintext(blob_str(&in), &out); + blob_reset(&in); + fossil_puts(blob_str(&out), 0); + blob_reset(&out); } } ADDED src/winfile.c Index: src/winfile.c ================================================================== --- src/winfile.c +++ src/winfile.c @@ -0,0 +1,294 @@ +/* +** Copyright (c) 2006 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file implements several non-trivial file handling wrapper functions +** on Windows using the Win32 API. +*/ +#include "config.h" +#ifdef _WIN32 +/* This code is for win32 only */ +#include <sys/stat.h> +#include <windows.h> +#include "winfile.h" + +#ifndef LABEL_SECURITY_INFORMATION +# define LABEL_SECURITY_INFORMATION (0x00000010L) +#endif + +/* +** Fill stat buf with information received from stat() or lstat(). +** lstat() is called on Unix if isWd is TRUE and allow-symlinks setting is on. +** +*/ +int win32_stat(const wchar_t *zFilename, struct fossilStat *buf, int isWd){ + WIN32_FILE_ATTRIBUTE_DATA attr; + int rc = GetFileAttributesExW(zFilename, GetFileExInfoStandard, &attr); + if( rc ){ + ULARGE_INTEGER ull; + ull.LowPart = attr.ftLastWriteTime.dwLowDateTime; + ull.HighPart = attr.ftLastWriteTime.dwHighDateTime; + buf->st_mode = (attr.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) ? + S_IFDIR : S_IFREG; + buf->st_size = (((i64)attr.nFileSizeHigh)<<32) | attr.nFileSizeLow; + buf->st_mtime = ull.QuadPart / 10000000ULL - 11644473600ULL; + } + return !rc; +} + +/* +** Wrapper around the access() system call. This code was copied from Tcl +** 8.6 and then modified. 
+*/ +int win32_access(const wchar_t *zFilename, int flags){ + int rc = 0; + PSECURITY_DESCRIPTOR pSd = NULL; + unsigned long size = 0; + PSID pSid = NULL; + BOOL sidDefaulted; + BOOL impersonated = FALSE; + SID_IDENTIFIER_AUTHORITY unmapped = {{0, 0, 0, 0, 0, 22}}; + GENERIC_MAPPING genMap; + HANDLE hToken = NULL; + DWORD desiredAccess = 0, grantedAccess = 0; + BOOL accessYesNo = FALSE; + PPRIVILEGE_SET pPrivSet = NULL; + DWORD privSetSize = 0; + DWORD attr = GetFileAttributesW(zFilename); + + if( attr==INVALID_FILE_ATTRIBUTES ){ + /* + * File might not exist. + */ + + if( GetLastError()!=ERROR_SHARING_VIOLATION ){ + rc = -1; goto done; + } + } + + if( flags==F_OK ){ + /* + * File exists, nothing else to check. + */ + + goto done; + } + + if( (flags & W_OK) + && (attr & FILE_ATTRIBUTE_READONLY) + && !(attr & FILE_ATTRIBUTE_DIRECTORY) ){ + /* + * The attributes say the file is not writable. If the file is a + * regular file (i.e., not a directory), then the file is not + * writable, full stop. For directories, the read-only bit is + * (mostly) ignored by Windows, so we can't ascertain anything about + * directory access from the attrib data. + */ + + rc = -1; goto done; + } + + /* + * It looks as if the permissions are ok, but if we are on NT, 2000 or XP, + * we have a more complex permissions structure so we try to check that. + * The code below is remarkably complex for such a simple thing as finding + * what permissions the OS has set for a file. + */ + + /* + * First find out how big the buffer needs to be. + */ + + GetFileSecurityW(zFilename, + OWNER_SECURITY_INFORMATION | GROUP_SECURITY_INFORMATION | + DACL_SECURITY_INFORMATION | LABEL_SECURITY_INFORMATION, + 0, 0, &size); + + /* + * Should have failed with ERROR_INSUFFICIENT_BUFFER + */ + + if( GetLastError()!=ERROR_INSUFFICIENT_BUFFER ){ + /* + * Most likely case is ERROR_ACCESS_DENIED, which we will convert to + * EACCES - just what we want! + */ + + rc = -1; goto done; + } + + /* + * Now size contains the size of buffer needed. + */ + + pSd = (PSECURITY_DESCRIPTOR)HeapAlloc(GetProcessHeap(), 0, size); + + if( pSd==NULL ){ + rc = -1; goto done; + } + + /* + * Call GetFileSecurity() for real. + */ + + if( !GetFileSecurityW(zFilename, + OWNER_SECURITY_INFORMATION | GROUP_SECURITY_INFORMATION | + DACL_SECURITY_INFORMATION | LABEL_SECURITY_INFORMATION, + pSd, size, &size) ){ + /* + * Error getting owner SD + */ + + rc = -1; goto done; + } + + /* + * As of Samba 3.0.23 (10-Jul-2006), unmapped users and groups are + * assigned to SID domains S-1-22-1 and S-1-22-2, where "22" is the + * top-level authority. If the file owner and group is unmapped then + * the ACL access check below will only test against world access, + * which is likely to be more restrictive than the actual access + * restrictions. Since the ACL tests are more likely wrong than + * right, skip them. Moreover, the unix owner access permissions are + * usually mapped to the Windows attributes, so if the user is the + * file owner then the attrib checks above are correct (as far as they + * go). + */ + + if( !GetSecurityDescriptorOwner(pSd, &pSid, &sidDefaulted) || + memcmp(GetSidIdentifierAuthority(pSid), &unmapped, + sizeof(SID_IDENTIFIER_AUTHORITY))==0 ){ + goto done; /* Attrib tests say access allowed. */ + } + + /* + * Perform security impersonation of the user and open the resulting + * thread token. + */ + + if( !ImpersonateSelf(SecurityImpersonation) ){ + /* + * Unable to perform security impersonation. 
+ */ + + rc = -1; goto done; + } + impersonated = TRUE; + + if( !OpenThreadToken(GetCurrentThread(), + TOKEN_DUPLICATE | TOKEN_QUERY, FALSE, &hToken) ){ + /* + * Unable to get current thread's token. + */ + + rc = -1; goto done; + } + + /* + * Setup desiredAccess according to the access priveleges we are + * checking. + */ + + if( flags & R_OK ){ + desiredAccess |= FILE_GENERIC_READ; + } + if( flags & W_OK){ + desiredAccess |= FILE_GENERIC_WRITE; + } + + memset(&genMap, 0, sizeof(GENERIC_MAPPING)); + genMap.GenericRead = FILE_GENERIC_READ; + genMap.GenericWrite = FILE_GENERIC_WRITE; + genMap.GenericExecute = FILE_GENERIC_EXECUTE; + genMap.GenericAll = FILE_ALL_ACCESS; + + AccessCheck(pSd, hToken, desiredAccess, &genMap, 0, + &privSetSize, &grantedAccess, &accessYesNo); + /* + * Should have failed with ERROR_INSUFFICIENT_BUFFER + */ + + if( GetLastError()!=ERROR_INSUFFICIENT_BUFFER ){ + rc = -1; goto done; + } + pPrivSet = (PPRIVILEGE_SET)HeapAlloc(GetProcessHeap(), 0, privSetSize); + + if( pPrivSet==NULL ){ + rc = -1; goto done; + } + + /* + * Perform access check using the token. + */ + + if( !AccessCheck(pSd, hToken, desiredAccess, &genMap, pPrivSet, + &privSetSize, &grantedAccess, &accessYesNo) ){ + /* + * Unable to perform access check. + */ + + rc = -1; goto done; + } + if( !accessYesNo ){ + rc = -1; + } + +done: + + if( hToken != NULL ){ + CloseHandle(hToken); + } + if( impersonated ){ + RevertToSelf(); + impersonated = FALSE; + } + if( pPrivSet!=NULL ){ + HeapFree(GetProcessHeap(), 0, pPrivSet); + } + if( pSd!=NULL ){ + HeapFree(GetProcessHeap(), 0, pSd); + } + return rc; +} + +/* +** Wrapper around the chdir() system call. +*/ +int win32_chdir(const wchar_t *zChDir, int bChroot){ + int rc = (int)!SetCurrentDirectoryW(zChDir); + return rc; +} + +/* +** Get the current working directory. +** +** On windows, the name is converted from unicode to UTF8 and all '\\' +** characters are converted to '/'. +*/ +void win32_getcwd(char *zBuf, int nBuf){ + int i; + char *zUtf8; + wchar_t *zWide = fossil_malloc( sizeof(wchar_t)*nBuf ); + if( GetCurrentDirectoryW(nBuf, zWide)==0 ){ + fossil_fatal("cannot find current working directory."); + } + zUtf8 = fossil_path_to_utf8(zWide); + fossil_free(zWide); + for(i=0; zUtf8[i]; i++) if( zUtf8[i]=='\\' ) zUtf8[i] = '/'; + strncpy(zBuf, zUtf8, nBuf); + fossil_path_free(zUtf8); +} +#endif /* _WIN32 -- This code is for win32 only */ Index: src/winhttp.c ================================================================== --- src/winhttp.c +++ src/winhttp.c @@ -14,27 +14,30 @@ ** http://www.hwaci.com/drh/ ** ******************************************************************************* ** ** This file implements a very simple (and low-performance) HTTP server -** for windows. +** for windows. It also implements a Windows Service which allows the HTTP +** server to be run without any user logged on. */ -#ifdef __MINGW32__ /* This code is for win32 only */ #include "config.h" -#include "winhttp.h" +#ifdef _WIN32 +/* This code is for win32 only */ #include <windows.h> +#include "winhttp.h" /* ** The HttpRequest structure holds information about each incoming ** HTTP request. 
*/ typedef struct HttpRequest HttpRequest; struct HttpRequest { - int id; /* ID counter */ - SOCKET s; /* Socket on which to receive data */ - SOCKADDR_IN addr; /* Address from which data is coming */ - const char *zNotFound; /* --notfound option, or an empty string */ + int id; /* ID counter */ + SOCKET s; /* Socket on which to receive data */ + SOCKADDR_IN addr; /* Address from which data is coming */ + int flags; /* Flags passed to win32_http_server() */ + const char *zOptions; /* --notfound and/or --localauth options */ }; /* ** Prefix for a temporary file. */ @@ -47,35 +50,51 @@ */ static int find_content_length(const char *zHdr){ while( *zHdr ){ if( zHdr[0]=='\n' ){ if( zHdr[1]=='\r' ) return 0; - if( strncasecmp(&zHdr[1], "content-length:", 15)==0 ){ + if( fossil_strnicmp(&zHdr[1], "content-length:", 15)==0 ){ return atoi(&zHdr[17]); } } zHdr++; } return 0; } + +/* +** Issue a fatal error. +*/ +static NORETURN void winhttp_fatal( + const char *zOp, + const char *zService, + const char *zErr +){ + fossil_fatal("unable to %s service '%s': %s", zOp, zService, zErr); +} /* ** Process a single incoming HTTP request. */ -void win32_process_one_http_request(void *pAppData){ +static void win32_http_request(void *pAppData){ HttpRequest *p = (HttpRequest*)pAppData; FILE *in = 0, *out = 0; int amt, got; int wanted = 0; char *z; - char zRequestFName[100]; - char zReplyFName[100]; + char zCmdFName[MAX_PATH]; + char zRequestFName[MAX_PATH]; + char zReplyFName[MAX_PATH]; char zCmd[2000]; /* Command-line to process the request */ char zHdr[2000]; /* The HTTP request header */ - sprintf(zRequestFName, "%s_in%d.txt", zTempPrefix, p->id); - sprintf(zReplyFName, "%s_out%d.txt", zTempPrefix, p->id); + sqlite3_snprintf(MAX_PATH, zCmdFName, + "%s_cmd%d.txt", zTempPrefix, p->id); + sqlite3_snprintf(MAX_PATH, zRequestFName, + "%s_in%d.txt", zTempPrefix, p->id); + sqlite3_snprintf(MAX_PATH, zReplyFName, + "%s_out%d.txt", zTempPrefix, p->id); amt = 0; while( amt<sizeof(zHdr) ){ got = recv(p->s, &zHdr[amt], sizeof(zHdr)-1-amt, 0); if( got==SOCKET_ERROR ) goto end_request; if( got==0 ){ @@ -89,11 +108,11 @@ wanted = find_content_length(zHdr) + (&z[4]-zHdr) - amt; break; } } if( amt>=sizeof(zHdr) ) goto end_request; - out = fopen(zRequestFName, "wb"); + out = fossil_fopen(zRequestFName, "wb"); if( out==0 ) goto end_request; fwrite(zHdr, 1, amt, out); while( wanted>0 ){ got = recv(p->s, zHdr, sizeof(zHdr), 0); if( got==SOCKET_ERROR ) goto end_request; @@ -104,16 +123,99 @@ } wanted -= got; } fclose(out); out = 0; - sprintf(zCmd, "\"%s\" http \"%s\" %s %s %s%s", - g.argv[0], g.zRepositoryName, zRequestFName, zReplyFName, - inet_ntoa(p->addr.sin_addr), p->zNotFound + /* + ** The repository name is only needed if there was no open checkout. This + ** is designed to allow the open checkout for the interactive user to work + ** with the local Fossil server started via the "ui" command. 
+ */ + if( (p->flags & HTTP_SERVER_HAD_CHECKOUT)==0 ){ + assert( g.zRepositoryName && g.zRepositoryName[0] ); + sqlite3_snprintf(sizeof(zCmd), zCmd, "%s%s\n%s\n%s\n%s", + get_utf8_bom(0), zRequestFName, zReplyFName, inet_ntoa(p->addr.sin_addr), + g.zRepositoryName + ); + }else{ + sqlite3_snprintf(sizeof(zCmd), zCmd, "%s%s\n%s\n%s", + get_utf8_bom(0), zRequestFName, zReplyFName, inet_ntoa(p->addr.sin_addr) + ); + } + out = fossil_fopen(zCmdFName, "wb"); + if( out==0 ) goto end_request; + fwrite(zCmd, 1, strlen(zCmd), out); + fclose(out); + + sqlite3_snprintf(sizeof(zCmd), zCmd, "\"%s\" http -args \"%s\" --nossl%s", + g.nameOfExe, zCmdFName, p->zOptions + ); + fossil_system(zCmd); + in = fossil_fopen(zReplyFName, "rb"); + if( in ){ + while( (got = fread(zHdr, 1, sizeof(zHdr), in))>0 ){ + send(p->s, zHdr, got, 0); + } + } + +end_request: + if( out ) fclose(out); + if( in ) fclose(in); + closesocket(p->s); + file_delete(zRequestFName); + file_delete(zReplyFName); + file_delete(zCmdFName); + free(p); +} + +/* +** Process a single incoming SCGI request. +*/ +static void win32_scgi_request(void *pAppData){ + HttpRequest *p = (HttpRequest*)pAppData; + FILE *in = 0, *out = 0; + int amt, got, nHdr, i; + int wanted = 0; + char zRequestFName[MAX_PATH]; + char zReplyFName[MAX_PATH]; + char zCmd[2000]; /* Command-line to process the request */ + char zHdr[2000]; /* The SCGI request header */ + + sqlite3_snprintf(MAX_PATH, zRequestFName, + "%s_in%d.txt", zTempPrefix, p->id); + sqlite3_snprintf(MAX_PATH, zReplyFName, + "%s_out%d.txt", zTempPrefix, p->id); + out = fossil_fopen(zRequestFName, "wb"); + if( out==0 ) goto end_request; + amt = 0; + got = recv(p->s, zHdr, sizeof(zHdr), 0); + if( got==SOCKET_ERROR ) goto end_request; + amt = fwrite(zHdr, 1, got, out); + nHdr = 0; + for(i=0; zHdr[i]>='0' && zHdr[i]<='9'; i++){ + nHdr = 10*nHdr + zHdr[i] - '0'; + } + wanted = nHdr + i + 1; + if( strcmp(zHdr+i+1, "CONTENT_LENGTH")==0 ){ + wanted += atoi(zHdr+i+15); + } + while( wanted>amt ){ + got = recv(p->s, zHdr, wanted<sizeof(zHdr) ? wanted : sizeof(zHdr), 0); + if( got<=0 ) break; + fwrite(zHdr, 1, got, out); + wanted += got; + } + fclose(out); + out = 0; + assert( g.zRepositoryName && g.zRepositoryName[0] ); + sqlite3_snprintf(sizeof(zCmd), zCmd, + "\"%s\" http \"%s\" \"%s\" %s \"%s\" --scgi --nossl%s", + g.nameOfExe, zRequestFName, zReplyFName, inet_ntoa(p->addr.sin_addr), + g.zRepositoryName, p->zOptions ); - portable_system(zCmd); - in = fopen(zReplyFName, "rb"); + fossil_system(zCmd); + in = fossil_fopen(zReplyFName, "rb"); if( in ){ while( (got = fread(zHdr, 1, sizeof(zHdr), in))>0 ){ send(p->s, zHdr, got, 0); } } @@ -120,37 +222,50 @@ end_request: if( out ) fclose(out); if( in ) fclose(in); closesocket(p->s); - unlink(zRequestFName); - unlink(zReplyFName); + file_delete(zRequestFName); + file_delete(zReplyFName); free(p); } + /* ** Start a listening socket and process incoming HTTP requests on ** that socket. */ void win32_http_server( int mnPort, int mxPort, /* Range of allowed TCP port numbers */ const char *zBrowser, /* Command to launch browser. 
(Or NULL) */ const char *zStopper, /* Stop server when this file is exists (Or NULL) */ - const char *zNotFound /* The --notfound option, or NULL */ + const char *zNotFound, /* The --notfound option, or NULL */ + const char *zFileGlob, /* The --fileglob option, or NULL */ + const char *zIpAddr, /* Bind to this IP address, if not NULL */ + int flags /* One or more HTTP_SERVER_ flags */ ){ WSADATA wd; SOCKET s = INVALID_SOCKET; SOCKADDR_IN addr; int idCnt = 0; int iPort = mnPort; - char *zNotFoundOption; + Blob options; + wchar_t zTmpPath[MAX_PATH]; - if( zStopper ) unlink(zStopper); + if( zStopper ) file_delete(zStopper); + blob_zero(&options); if( zNotFound ){ - zNotFoundOption = mprintf(" --notfound %s", zNotFound); - }else{ - zNotFoundOption = ""; + blob_appendf(&options, " --notfound %s", zNotFound); + } + if( zFileGlob ){ + blob_appendf(&options, " --files-urlenc %T", zFileGlob); + } + if( g.useLocalauth ){ + blob_appendf(&options, " --localauth"); + } + if( flags & HTTP_SERVER_REPOLIST ){ + blob_appendf(&options, " --repolist"); } if( WSAStartup(MAKEWORD(1,1), &wd) ){ fossil_fatal("unable to initialize winsock"); } while( iPort<=mxPort ){ @@ -158,11 +273,20 @@ if( s==INVALID_SOCKET ){ fossil_fatal("unable to create a socket"); } addr.sin_family = AF_INET; addr.sin_port = htons(iPort); - addr.sin_addr.s_addr = htonl(INADDR_ANY); + if( zIpAddr ){ + addr.sin_addr.s_addr = inet_addr(zIpAddr); + if( addr.sin_addr.s_addr == (-1) ){ + fossil_fatal("not a valid IP address: %s", zIpAddr); + } + }else if( flags & HTTP_SERVER_LOCALHOST ){ + addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); + }else{ + addr.sin_addr.s_addr = htonl(INADDR_ANY); + } if( bind(s, (struct sockaddr*)&addr, sizeof(addr))==SOCKET_ERROR ){ closesocket(s); iPort++; continue; } @@ -179,42 +303,715 @@ }else{ fossil_fatal("unable to open listening socket on any" " port in the range %d..%d", mnPort, mxPort); } } - zTempPrefix = mprintf("fossil_server_P%d_", iPort); - printf("Listening for HTTP requests on TCP port %d\n", iPort); + if( !GetTempPathW(MAX_PATH, zTmpPath) ){ + fossil_fatal("unable to get path to the temporary directory."); + } + zTempPrefix = mprintf("%sfossil_server_P%d_", + fossil_unicode_to_utf8(zTmpPath), iPort); + fossil_print("Listening for %s requests on TCP port %d\n", + (flags&HTTP_SERVER_SCGI)!=0?"SCGI":"HTTP", iPort); if( zBrowser ){ - zBrowser = mprintf(zBrowser, iPort); - printf("Launch webbrowser: %s\n", zBrowser); - portable_system(zBrowser); + zBrowser = mprintf(zBrowser /*works-like:"%d"*/, iPort); + fossil_print("Launch webbrowser: %s\n", zBrowser); + fossil_system(zBrowser); } - printf("Type Ctrl-C to stop the HTTP server\n"); + fossil_print("Type Ctrl-C to stop the HTTP server\n"); + /* Set the service status to running and pass the listener socket to the + ** service handling procedures. */ + win32_http_service_running(s); for(;;){ SOCKET client; SOCKADDR_IN client_addr; HttpRequest *p; int len = sizeof(client_addr); + int wsaError; client = accept(s, (struct sockaddr*)&client_addr, &len); - if( zStopper && file_size(zStopper)>=0 ){ + if( client==INVALID_SOCKET ){ + /* If the service control handler has closed the listener socket, + ** cleanup and return, otherwise report a fatal error. 
*/ + wsaError = WSAGetLastError(); + if( (wsaError==WSAEINTR) || (wsaError==WSAENOTSOCK) ){ + WSACleanup(); + return; + }else{ + closesocket(s); + WSACleanup(); + fossil_fatal("error from accept()"); + } + }else if( zStopper && file_size(zStopper)>=0 ){ break; } - if( client==INVALID_SOCKET ){ - closesocket(s); - fossil_fatal("error from accept()"); - } - p = malloc( sizeof(*p) ); - if( p==0 ){ - fossil_fatal("out of memory"); - } + p = fossil_malloc( sizeof(*p) ); p->id = ++idCnt; p->s = client; p->addr = client_addr; - p->zNotFound = zNotFoundOption; - _beginthread(win32_process_one_http_request, 0, (void*)p); + p->flags = flags; + p->zOptions = blob_str(&options); + if( flags & HTTP_SERVER_SCGI ){ + _beginthread(win32_scgi_request, 0, (void*)p); + }else{ + _beginthread(win32_http_request, 0, (void*)p); + } } closesocket(s); WSACleanup(); } -#endif /* __MINGW32__ -- This code is for win32 only */ +/* +** The HttpService structure is used to pass information to the service main +** function and to the service control handler function. +*/ +typedef struct HttpService HttpService; +struct HttpService { + int port; /* Port on which the http server should run */ + const char *zNotFound; /* The --notfound option, or NULL */ + const char *zFileGlob; /* The --files option, or NULL */ + int flags; /* One or more HTTP_SERVER_ flags */ + int isRunningAsService; /* Are we running as a service ? */ + const wchar_t *zServiceName;/* Name of the service */ + SOCKET s; /* Socket on which the http server listens */ +}; + +/* +** Variables used for running as windows service. +*/ +static HttpService hsData = {8080, NULL, NULL, 0, 0, NULL, INVALID_SOCKET}; +static SERVICE_STATUS ssStatus; +static SERVICE_STATUS_HANDLE sshStatusHandle; + +/* +** Get message string of the last system error. Return a pointer to the +** message string. Call fossil_unicode_free() to deallocate any memory used +** to store the message string when done. +*/ +static char *win32_get_last_errmsg(void){ + DWORD nMsg; + DWORD nErr = GetLastError(); + LPWSTR tmp = NULL; + char *zMsg = NULL; + + /* Try first to get the error text in English. */ + nMsg = FormatMessageW( + FORMAT_MESSAGE_ALLOCATE_BUFFER | + FORMAT_MESSAGE_FROM_SYSTEM | + FORMAT_MESSAGE_IGNORE_INSERTS, + NULL, + nErr, + MAKELANGID(LANG_ENGLISH, SUBLANG_ENGLISH_US), + (LPWSTR) &tmp, + 0, + NULL + ); + if( !nMsg ){ + /* No english, get what the system has available. */ + nMsg = FormatMessageW( + FORMAT_MESSAGE_ALLOCATE_BUFFER | + FORMAT_MESSAGE_FROM_SYSTEM | + FORMAT_MESSAGE_IGNORE_INSERTS, + NULL, + nErr, + 0, + (LPWSTR) &tmp, + 0, + NULL + ); + } + if( nMsg ){ + zMsg = fossil_unicode_to_utf8(tmp); + }else{ + fossil_fatal("unable to get system error message."); + } + if( tmp ){ + LocalFree((HLOCAL) tmp); + } + return zMsg; +} + +/* +** Report the current status of the service to the service control manager. +** Make sure that during service startup no control codes are accepted. 
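+**
+** As an illustration only (the actual calls are made elsewhere in this
+** file), a service instance typically reports the following sequence of
+** states over its lifetime:
+**
+**     win32_report_service_status(SERVICE_START_PENDING, NO_ERROR, 3000);
+**     win32_report_service_status(SERVICE_RUNNING,       NO_ERROR, 0);
+**     win32_report_service_status(SERVICE_STOP_PENDING,  NO_ERROR, 0);
+**     win32_report_service_status(SERVICE_STOPPED,       NO_ERROR, 0);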
+*/ +static void win32_report_service_status( + DWORD dwCurrentState, /* The current state of the service */ + DWORD dwWin32ExitCode, /* The error code to report */ + DWORD dwWaitHint /* The estimated time for a pending operation */ +){ + if( dwCurrentState==SERVICE_START_PENDING ){ + ssStatus.dwControlsAccepted = 0; + }else{ + ssStatus.dwControlsAccepted = SERVICE_ACCEPT_STOP; + } + ssStatus.dwCurrentState = dwCurrentState; + ssStatus.dwWin32ExitCode = dwWin32ExitCode; + ssStatus.dwWaitHint = dwWaitHint; + + if( (dwCurrentState==SERVICE_RUNNING) || + (dwCurrentState==SERVICE_STOPPED) ){ + ssStatus.dwCheckPoint = 0; + }else{ + ssStatus.dwCheckPoint++; + } + SetServiceStatus(sshStatusHandle, &ssStatus); + return ; +} + +/* +** Handle control codes sent from the service control manager. +** The control dispatcher in the main thread of the service process invokes +** this function whenever it receives a control request from the service +** control manager. +*/ +static void WINAPI win32_http_service_ctrl( + DWORD dwCtrlCode +){ + switch( dwCtrlCode ){ + case SERVICE_CONTROL_STOP: { + win32_report_service_status(SERVICE_STOP_PENDING, NO_ERROR, 0); + if( hsData.s != INVALID_SOCKET ){ + closesocket(hsData.s); + } + win32_report_service_status(ssStatus.dwCurrentState, NO_ERROR, 0); + break; + } + default: { + break; + } + } + return; +} + +/* +** This is the main entry point for the service. +** When the service control manager receives a request to start the service, +** it starts the service process (if it is not already running). The main +** thread of the service process calls the StartServiceCtrlDispatcher +** function with a pointer to an array of SERVICE_TABLE_ENTRY structures. +** Then the service control manager sends a start request to the service +** control dispatcher for this service process. The service control dispatcher +** creates a new thread to execute the ServiceMain function (this function) +** of the service being started. +*/ +static void WINAPI win32_http_service_main( + DWORD argc, /* Number of arguments in argv */ + LPWSTR *argv /* Arguments passed */ +){ + + /* Update the service information. */ + hsData.isRunningAsService = 1; + if( argc>0 ){ + hsData.zServiceName = argv[0]; + } + + /* Register the service control handler function */ + sshStatusHandle = RegisterServiceCtrlHandlerW(L"", win32_http_service_ctrl); + if( !sshStatusHandle ){ + win32_report_service_status(SERVICE_STOPPED, NO_ERROR, 0); + return; + } + + /* Set service specific data and report that the service is starting. */ + ssStatus.dwServiceType = SERVICE_WIN32_OWN_PROCESS; + ssStatus.dwServiceSpecificExitCode = 0; + win32_report_service_status(SERVICE_START_PENDING, NO_ERROR, 3000); + + /* Execute the http server */ + win32_http_server(hsData.port, hsData.port, + NULL, NULL, hsData.zNotFound, hsData.zFileGlob, 0, + hsData.flags); + + /* Service has stopped now. */ + win32_report_service_status(SERVICE_STOPPED, NO_ERROR, 0); + return; +} + +/* +** When running as service, update the HttpService structure with the +** listener socket and update the service status. This procedure must be +** called from the http server when he is ready to accept connections. +*/ +LOCAL void win32_http_service_running(SOCKET s){ + if( hsData.isRunningAsService ){ + hsData.s = s; + win32_report_service_status(SERVICE_RUNNING, NO_ERROR, 0); + } +} + +/* +** Try to start the http server as a windows service. If we are running in +** a interactive console session, this routine fails and returns a non zero +** integer value. 
When running as service, this routine does not return until +** the service is stopped. In this case, the return value is zero. +*/ +int win32_http_service( + int nPort, /* TCP port number */ + const char *zNotFound, /* The --notfound option, or NULL */ + const char *zFileGlob, /* The --files option, or NULL */ + int flags /* One or more HTTP_SERVER_ flags */ +){ + /* Define the service table. */ + SERVICE_TABLE_ENTRYW ServiceTable[] = + {{L"", (LPSERVICE_MAIN_FUNCTIONW)win32_http_service_main}, {NULL, NULL}}; + + /* Initialize the HttpService structure. */ + hsData.port = nPort; + hsData.zNotFound = zNotFound; + hsData.zFileGlob = zFileGlob; + hsData.flags = flags; + + /* Try to start the control dispatcher thread for the service. */ + if( !StartServiceCtrlDispatcherW(ServiceTable) ){ + if( GetLastError()==ERROR_FAILED_SERVICE_CONTROLLER_CONNECT ){ + return 1; + }else{ + fossil_fatal("error from StartServiceCtrlDispatcher()"); + } + } + return 0; +} + +/* dupe ifdef needed for mkindex +** COMMAND: winsrv* +** Usage: fossil winsrv METHOD ?SERVICE-NAME? ?OPTIONS? +** +** Where METHOD is one of: create delete show start stop. +** +** The winsrv command manages Fossil as a Windows service. This allows +** (for example) Fossil to be running in the background when no user +** is logged in. +** +** In the following description of the methods, "Fossil-DSCM" will be +** used as the default SERVICE-NAME: +** +** fossil winsrv create ?SERVICE-NAME? ?OPTIONS? +** +** Creates a service. Available options include: +** +** -D|--display DISPLAY-NAME +** +** Sets the display name of the service. This name is shown +** by graphical interface programs. By default, the display name +** equals to the service name. +** +** -S|--start TYPE +** +** Sets the start type of the service. TYPE can be "manual", +** which means you need to start the service yourself with the +** 'fossil winsrv start' command or with the "net start" command +** from the operating system. If TYPE is set to "auto", the service +** will be started automatically by the system during startup. +** +** -U|--username USERNAME +** +** Specifies the user account which will be used to run the +** service. The account needs the "Logon as a service" right +** enabled in its profile. Specify local accounts as follows: +** ".\\USERNAME". By default, the "LocalSystem" account will be +** used. +** +** -W|--password PASSWORD +** +** Password for the user account. +** +** The following options are more or less the same as for the "server" +** command and influence the behaviour of the http server: +** +** -P|--port TCPPORT +** +** Specifies the TCP port (default port is 8080) on which the +** server should listen. +** +** -R|--repository REPOSITORY +** +** Specifies the name of the repository to be served. +** The repository option may be omitted if the working directory +** is within an open checkout. +** The REPOSITORY can be a directory (aka folder) that contains +** one or more repositories with names ending in ".fossil". +** In that case, the first element of the URL is used to select +** among the various repositories. +** +** --notfound URL +** +** If REPOSITORY is a directory that contains one or more +** repositories with names of the form "*.fossil" then the +** first element of the URL pathname selects among the various +** repositories. If the pathname does not select a valid +** repository and the --notfound option is available, +** then the server redirects (HTTP code 302) to the URL of +** --notfound. 
+** +** --localauth +** +** Enables automatic login if the --localauth option is present +** and the "localauth" setting is off and the connection is from +** localhost. +** +** --repolist +** +** If REPOSITORY is directory, URL "/" lists all repositories. +** +** --scgi +** +** Create an SCGI server instead of an HTTP server +** +** +** fossil winsrv delete ?SERVICE-NAME? +** +** Deletes a service. If the service is currently running, it will be +** stopped first and then deleted. +** +** +** fossil winsrv show ?SERVICE-NAME? +** +** Shows how the service is configured and its current state. +** +** +** fossil winsrv start ?SERVICE-NAME? +** +** Start the service. +** +** +** fossil winsrv stop ?SERVICE-NAME? +** +** Stop the service. +** +** +** NOTE: This command is available on Windows operating systems only and +** requires administrative rights on the machine executed. +** +*/ +void cmd_win32_service(void){ + int n; + const char *zMethod; + const char *zSvcName = "Fossil-DSCM"; /* Default service name */ + + if( g.argc<3 ){ + usage("create|delete|show|start|stop ..."); + } + zMethod = g.argv[2]; + n = strlen(zMethod); + + if( strncmp(zMethod, "create", n)==0 ){ + SC_HANDLE hScm; + SC_HANDLE hSvc; + SERVICE_DESCRIPTIONW + svcDescr = {L"Fossil - Distributed Software Configuration Management"}; + DWORD dwStartType = SERVICE_DEMAND_START; + const char *zDisplay = find_option("display", "D", 1); + const char *zStart = find_option("start", "S", 1); + const char *zUsername = find_option("username", "U", 1); + const char *zPassword = find_option("password", "W", 1); + const char *zPort = find_option("port", "P", 1); + const char *zNotFound = find_option("notfound", 0, 1); + const char *zFileGlob = find_option("files", 0, 1); + const char *zLocalAuth = find_option("localauth", 0, 0); + const char *zRepository = find_repository_option(); + int useSCGI = find_option("scgi", 0, 0)!=0; + int allowRepoList = find_option("repolist",0,0)!=0; + Blob binPath; + + verify_all_options(); + if( g.argc==4 ){ + zSvcName = g.argv[3]; + }else if( g.argc>4 ){ + fossil_fatal("too many arguments for create method."); + } + /* Process service creation specific options. */ + if( !zDisplay ){ + zDisplay = zSvcName; + } + /* Per MSDN, the password parameter cannot be NULL. Must use empty + ** string instead (i.e. in the call to CreateServiceW). */ + if( !zPassword ){ + zPassword = ""; + } + if( zStart ){ + if( strncmp(zStart, "auto", strlen(zStart))==0 ){ + dwStartType = SERVICE_AUTO_START; + }else if( strncmp(zStart, "manual", strlen(zStart))==0 ){ + dwStartType = SERVICE_DEMAND_START; + }else{ + winhttp_fatal("create", zSvcName, + "specify 'auto' or 'manual' for the '-S|--start' option"); + } + } + /* Process options for Fossil running as server. */ + if( zPort && (atoi(zPort)<=0) ){ + winhttp_fatal("create", zSvcName, + "port number must be in the range 1 - 65535."); + } + if( !zRepository ){ + db_must_be_within_tree(); + }else if( file_isdir(zRepository)==1 ){ + g.zRepositoryName = mprintf("%s", zRepository); + file_simplify_name(g.zRepositoryName, -1, 0); + }else{ + db_open_repository(zRepository); + } + db_close(0); + /* Build the fully-qualified path to the service binary file. 
*/ + blob_zero(&binPath); + blob_appendf(&binPath, "\"%s\" server", g.nameOfExe); + if( zPort ) blob_appendf(&binPath, " --port %s", zPort); + if( useSCGI ) blob_appendf(&binPath, " --scgi"); + if( allowRepoList ) blob_appendf(&binPath, " --repolist"); + if( zNotFound ) blob_appendf(&binPath, " --notfound \"%s\"", zNotFound); + if( zFileGlob ) blob_appendf(&binPath, " --files-urlenc %T", zFileGlob); + if( zLocalAuth ) blob_append(&binPath, " --localauth", -1); + blob_appendf(&binPath, " \"%s\"", g.zRepositoryName); + /* Create the service. */ + hScm = OpenSCManagerW(NULL, NULL, SC_MANAGER_ALL_ACCESS); + if( !hScm ) winhttp_fatal("create", zSvcName, win32_get_last_errmsg()); + hSvc = CreateServiceW( + hScm, /* Handle to the SCM */ + fossil_utf8_to_unicode(zSvcName), /* Name of the service */ + fossil_utf8_to_unicode(zDisplay), /* Display name */ + SERVICE_ALL_ACCESS, /* Desired access */ + SERVICE_WIN32_OWN_PROCESS, /* Service type */ + dwStartType, /* Start type */ + SERVICE_ERROR_NORMAL, /* Error control */ + fossil_utf8_to_unicode(blob_str(&binPath)), /* Binary path */ + NULL, /* Load ordering group */ + NULL, /* Tag value */ + NULL, /* Service dependencies */ + zUsername ? fossil_utf8_to_unicode(zUsername) : 0, /* Account */ + fossil_utf8_to_unicode(zPassword) /* Account password */ + ); + if( !hSvc ) winhttp_fatal("create", zSvcName, win32_get_last_errmsg()); + /* Set the service description. */ + ChangeServiceConfig2W(hSvc, SERVICE_CONFIG_DESCRIPTION, &svcDescr); + fossil_print("Service '%s' successfully created.\n", zSvcName); + CloseServiceHandle(hSvc); + CloseServiceHandle(hScm); + }else + if( strncmp(zMethod, "delete", n)==0 ){ + SC_HANDLE hScm; + SC_HANDLE hSvc; + SERVICE_STATUS sstat; + + verify_all_options(); + if( g.argc==4 ){ + zSvcName = g.argv[3]; + }else if( g.argc>4 ){ + fossil_fatal("too many arguments for delete method."); + } + hScm = OpenSCManagerW(NULL, NULL, SC_MANAGER_ALL_ACCESS); + if( !hScm ) winhttp_fatal("delete", zSvcName, win32_get_last_errmsg()); + hSvc = OpenServiceW(hScm, fossil_utf8_to_unicode(zSvcName), + SERVICE_ALL_ACCESS); + if( !hSvc ) winhttp_fatal("delete", zSvcName, win32_get_last_errmsg()); + QueryServiceStatus(hSvc, &sstat); + if( sstat.dwCurrentState!=SERVICE_STOPPED ){ + fossil_print("Stopping service '%s'", zSvcName); + if( sstat.dwCurrentState!=SERVICE_STOP_PENDING ){ + if( !ControlService(hSvc, SERVICE_CONTROL_STOP, &sstat) ){ + winhttp_fatal("delete", zSvcName, win32_get_last_errmsg()); + } + } + while( sstat.dwCurrentState!=SERVICE_STOPPED ){ + Sleep(100); + fossil_print("."); + QueryServiceStatus(hSvc, &sstat); + } + fossil_print("\nService '%s' stopped.\n", zSvcName); + } + if( !DeleteService(hSvc) ){ + if( GetLastError()==ERROR_SERVICE_MARKED_FOR_DELETE ){ + fossil_warning("Service '%s' already marked for delete.\n", zSvcName); + }else{ + winhttp_fatal("delete", zSvcName, win32_get_last_errmsg()); + } + }else{ + fossil_print("Service '%s' successfully deleted.\n", zSvcName); + } + CloseServiceHandle(hSvc); + CloseServiceHandle(hScm); + }else + if( strncmp(zMethod, "show", n)==0 ){ + SC_HANDLE hScm; + SC_HANDLE hSvc; + SERVICE_STATUS sstat; + LPQUERY_SERVICE_CONFIGW pSvcConfig; + LPSERVICE_DESCRIPTIONW pSvcDescr; + BOOL bStatus; + DWORD nRequired; + static const char *const zSvcTypes[] = { + "Driver service", + "File system driver service", + "Service runs in its own process", + "Service shares a process with other services", + "Service can interact with the desktop" + }; + const char *zSvcType = ""; + static const char *const 
zSvcStartTypes[] = { + "Started by the system loader", + "Started by the IoInitSystem function", + "Started automatically by the service control manager", + "Started manually", + "Service cannot be started" + }; + const char *zSvcStartType = ""; + static const char *const zSvcStates[] = { + "Stopped", "Starting", "Stopping", "Running", + "Continue pending", "Pause pending", "Paused" + }; + const char *zSvcState = ""; + + verify_all_options(); + if( g.argc==4 ){ + zSvcName = g.argv[3]; + }else if( g.argc>4 ){ + fossil_fatal("too many arguments for show method."); + } + hScm = OpenSCManagerW(NULL, NULL, GENERIC_READ); + if( !hScm ) winhttp_fatal("show", zSvcName, win32_get_last_errmsg()); + hSvc = OpenServiceW(hScm, fossil_utf8_to_unicode(zSvcName), GENERIC_READ); + if( !hSvc ) winhttp_fatal("show", zSvcName, win32_get_last_errmsg()); + /* Get the service configuration */ + bStatus = QueryServiceConfigW(hSvc, NULL, 0, &nRequired); + if( !bStatus && GetLastError()!=ERROR_INSUFFICIENT_BUFFER ){ + winhttp_fatal("show", zSvcName, win32_get_last_errmsg()); + } + pSvcConfig = fossil_malloc(nRequired); + bStatus = QueryServiceConfigW(hSvc, pSvcConfig, nRequired, &nRequired); + if( !bStatus ) winhttp_fatal("show", zSvcName, win32_get_last_errmsg()); + /* Translate the service type */ + switch( pSvcConfig->dwServiceType ){ + case SERVICE_KERNEL_DRIVER: zSvcType = zSvcTypes[0]; break; + case SERVICE_FILE_SYSTEM_DRIVER: zSvcType = zSvcTypes[1]; break; + case SERVICE_WIN32_OWN_PROCESS: zSvcType = zSvcTypes[2]; break; + case SERVICE_WIN32_SHARE_PROCESS: zSvcType = zSvcTypes[3]; break; + case SERVICE_INTERACTIVE_PROCESS: zSvcType = zSvcTypes[4]; break; + } + /* Translate the service start type */ + switch( pSvcConfig->dwStartType ){ + case SERVICE_BOOT_START: zSvcStartType = zSvcStartTypes[0]; break; + case SERVICE_SYSTEM_START: zSvcStartType = zSvcStartTypes[1]; break; + case SERVICE_AUTO_START: zSvcStartType = zSvcStartTypes[2]; break; + case SERVICE_DEMAND_START: zSvcStartType = zSvcStartTypes[3]; break; + case SERVICE_DISABLED: zSvcStartType = zSvcStartTypes[4]; break; + } + /* Get the service description. */ + bStatus = QueryServiceConfig2W(hSvc, SERVICE_CONFIG_DESCRIPTION, + NULL, 0, &nRequired); + if( !bStatus && GetLastError()!=ERROR_INSUFFICIENT_BUFFER ){ + winhttp_fatal("show", zSvcName, win32_get_last_errmsg()); + } + pSvcDescr = fossil_malloc(nRequired); + bStatus = QueryServiceConfig2W(hSvc, SERVICE_CONFIG_DESCRIPTION, + (LPBYTE)pSvcDescr, nRequired, &nRequired); + if( !bStatus ) winhttp_fatal("show", zSvcName, win32_get_last_errmsg()); + /* Retrieves the current status of the specified service. */ + bStatus = QueryServiceStatus(hSvc, &sstat); + if( !bStatus ) winhttp_fatal("show", zSvcName, win32_get_last_errmsg()); + /* Translate the current state. 
*/ + switch( sstat.dwCurrentState ){ + case SERVICE_STOPPED: zSvcState = zSvcStates[0]; break; + case SERVICE_START_PENDING: zSvcState = zSvcStates[1]; break; + case SERVICE_STOP_PENDING: zSvcState = zSvcStates[2]; break; + case SERVICE_RUNNING: zSvcState = zSvcStates[3]; break; + case SERVICE_CONTINUE_PENDING: zSvcState = zSvcStates[4]; break; + case SERVICE_PAUSE_PENDING: zSvcState = zSvcStates[5]; break; + case SERVICE_PAUSED: zSvcState = zSvcStates[6]; break; + } + /* Print service information to terminal */ + fossil_print("Service name .......: %s\n", zSvcName); + fossil_print("Display name .......: %s\n", + fossil_unicode_to_utf8(pSvcConfig->lpDisplayName)); + fossil_print("Service description : %s\n", + fossil_unicode_to_utf8(pSvcDescr->lpDescription)); + fossil_print("Service type .......: %s.\n", zSvcType); + fossil_print("Service start type .: %s.\n", zSvcStartType); + fossil_print("Binary path name ...: %s\n", + fossil_unicode_to_utf8(pSvcConfig->lpBinaryPathName)); + fossil_print("Service username ...: %s\n", + fossil_unicode_to_utf8(pSvcConfig->lpServiceStartName)); + fossil_print("Current state ......: %s.\n", zSvcState); + /* Cleanup */ + fossil_free(pSvcConfig); + fossil_free(pSvcDescr); + CloseServiceHandle(hSvc); + CloseServiceHandle(hScm); + }else + if( strncmp(zMethod, "start", n)==0 ){ + SC_HANDLE hScm; + SC_HANDLE hSvc; + SERVICE_STATUS sstat; + + verify_all_options(); + if( g.argc==4 ){ + zSvcName = g.argv[3]; + }else if( g.argc>4 ){ + fossil_fatal("too many arguments for start method."); + } + hScm = OpenSCManagerW(NULL, NULL, SC_MANAGER_ALL_ACCESS); + if( !hScm ) winhttp_fatal("start", zSvcName, win32_get_last_errmsg()); + hSvc = OpenServiceW(hScm, fossil_utf8_to_unicode(zSvcName), + SERVICE_ALL_ACCESS); + if( !hSvc ) winhttp_fatal("start", zSvcName, win32_get_last_errmsg()); + QueryServiceStatus(hSvc, &sstat); + if( sstat.dwCurrentState!=SERVICE_RUNNING ){ + fossil_print("Starting service '%s'", zSvcName); + if( sstat.dwCurrentState!=SERVICE_START_PENDING ){ + if( !StartServiceW(hSvc, 0, NULL) ){ + winhttp_fatal("start", zSvcName, win32_get_last_errmsg()); + } + } + while( sstat.dwCurrentState!=SERVICE_RUNNING ){ + Sleep(100); + fossil_print("."); + QueryServiceStatus(hSvc, &sstat); + } + fossil_print("\nService '%s' started.\n", zSvcName); + }else{ + fossil_print("Service '%s' is already started.\n", zSvcName); + } + CloseServiceHandle(hSvc); + CloseServiceHandle(hScm); + }else + if( strncmp(zMethod, "stop", n)==0 ){ + SC_HANDLE hScm; + SC_HANDLE hSvc; + SERVICE_STATUS sstat; + + verify_all_options(); + if( g.argc==4 ){ + zSvcName = g.argv[3]; + }else if( g.argc>4 ){ + fossil_fatal("too many arguments for stop method."); + } + hScm = OpenSCManagerW(NULL, NULL, SC_MANAGER_ALL_ACCESS); + if( !hScm ) winhttp_fatal("stop", zSvcName, win32_get_last_errmsg()); + hSvc = OpenServiceW(hScm, fossil_utf8_to_unicode(zSvcName), + SERVICE_ALL_ACCESS); + if( !hSvc ) winhttp_fatal("stop", zSvcName, win32_get_last_errmsg()); + QueryServiceStatus(hSvc, &sstat); + if( sstat.dwCurrentState!=SERVICE_STOPPED ){ + fossil_print("Stopping service '%s'", zSvcName); + if( sstat.dwCurrentState!=SERVICE_STOP_PENDING ){ + if( !ControlService(hSvc, SERVICE_CONTROL_STOP, &sstat) ){ + winhttp_fatal("stop", zSvcName, win32_get_last_errmsg()); + } + } + while( sstat.dwCurrentState!=SERVICE_STOPPED ){ + Sleep(100); + fossil_print("."); + QueryServiceStatus(hSvc, &sstat); + } + fossil_print("\nService '%s' stopped.\n", zSvcName); + }else{ + fossil_print("Service '%s' is already stopped.\n", 
zSvcName); + } + CloseServiceHandle(hSvc); + CloseServiceHandle(hScm); + }else + { + fossil_fatal("METHOD should be one of:" + " create delete show start stop"); + } + return; +} +#endif /* _WIN32 -- This code is for win32 only */ ADDED src/wysiwyg.c Index: src/wysiwyg.c ================================================================== --- src/wysiwyg.c +++ src/wysiwyg.c @@ -0,0 +1,306 @@ +/* +** Copyright (c) 2012 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) +** +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code that generates WYSIWYG text editors on +** web pages. +*/ +#include "config.h" +#include <assert.h> +#include <ctype.h> +#include "wysiwyg.h" + + +/* +** Output code for a WYSIWYG editor. The caller must have already generated +** the <form> that will contain the editor, and the call must generate the +** corresponding </form> after this routine returns. The caller must include +** an onsubmit= attribute on the <form> element that invokes the +** wysiwygSubmit() function. +** +** There can only be a single WYSIWYG editor per frame. +*/ +void wysiwygEditor( + const char *zId, /* ID for this editor */ + const char *zContent, /* Initial content (HTML) */ + int w, int h /* Initial width and height */ +){ + + @ <style type="text/css"> + @ .intLink { cursor: pointer; } + @ img.intLink { border: 0; } + @ #wysiwygBox { + @ border: 1px #000000 solid; + @ padding: 12px; + @ } + @ #editMode label { cursor: pointer; } + @ </style> + + @ <input id="wysiwygValue" type="hidden" name="%s(zId)"> + @ <div id="editModeDiv">Edit mode: + @ <select id="editMode" size=1 onchange="setDocMode(this.selectedIndex)"> + @ <option value="0">WYSIWYG</option> + @ <option value="1">Raw HTML</option> + @ </select></div> + @ <div id="toolBar1"> + @ <select onchange="formatDoc('formatblock',this[this.selectedIndex].value); + @ this.selectedIndex=0;"> + @ <option selected>- formatting -</option> + @ <option value="h1">Title 1 <h1></option> + @ <option value="h2">Title 2 <h2></option> + @ <option value="h3">Title 3 <h3></option> + @ <option value="h4">Title 4 <h4></option> + @ <option value="h5">Title 5 <h5></option> + @ <option value="h6">Subtitle <h6></option> + @ <option value="p">Paragraph <p></option> + @ <option value="pre">Preformatted <pre></option> + @ </select> + @ <select onchange="formatDoc('fontname',this[this.selectedIndex].value); + @ this.selectedIndex=0;"> + @ <option class="heading" selected>- font -</option> + @ <option>Arial</option> + @ <option>Arial Black</option> + @ <option>Courier New</option> + @ <option>Times New Roman</option> + @ </select> + @ <select onchange="formatDoc('fontsize',this[this.selectedIndex].value); + @ this.selectedIndex=0;"> + @ <option class="heading" selected>- size -</option> + @ <option value="1">Very small</option> + @ <option value="2">A bit small</option> + @ <option value="3">Normal</option> + @ <option value="4">Medium-large</option> + @ <option value="5">Big</option> + @ <option value="6">Very big</option> + @ <option value="7">Maximum</option> + @ </select> + @ 
<select onchange="formatDoc('forecolor',this[this.selectedIndex].value); + @ this.selectedIndex=0;"> + @ <option class="heading" selected>- color -</option> + @ <option value="red">Red</option> + @ <option value="blue">Blue</option> + @ <option value="green">Green</option> + @ <option value="black">Black</option> + @ </select> + @ </div> + @ <div id="toolBar2"> + @ <img class="intLink" title="Undo" onclick="formatDoc('undo');" + @ src="data:image/gif;base64,R0lGODlhFgAWAOMKADljwliE33mOrpGjuYKl8aezxqPD+7 + @ /I19DV3NHa7P///////////////////////yH5BAEKAA8ALAAAAAAWABYAAARR8MlJq704680 + @ 7TkaYeJJBnES4EeUJvIGapWYAC0CsocQ7SDlWJkAkCA6ToMYWIARGQF3mRQVIEjkkSVLIbSfE + @ whdRIH4fh/DZMICe3/C4nBQBADs="> + + @ <img class="intLink" title="Redo" onclick="formatDoc('redo');" + @ src="data:image/gif;base64,R0lGODlhFgAWAMIHAB1ChDljwl9vj1iE34Kl8aPD+7/I1/ + @ ///yH5BAEKAAcALAAAAAAWABYAAANKeLrc/jDKSesyphi7SiEgsVXZEATDICqBVJjpqWZt9Na + @ EDNbQK1wCQsxlYnxMAImhyDoFAElJasRRvAZVRqqQXUy7Cgx4TC6bswkAOw=="> + + @ <img class="intLink" title="Remove formatting" + @ onclick="formatDoc('removeFormat')" + @ src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABYAAAAWCAYAAADEtGw7AA + @ AABGdBTUEAALGPC/xhBQAAAAZiS0dEAP8A/wD/oL2nkwAAAAlwSFlzAAAOxAAADsQBlSsOGwA + @ AAAd0SU1FB9oECQMCKPI8CIIAAAAIdEVYdENvbW1lbnQA9syWvwAAAuhJREFUOMtjYBgFxAB5 + @ 01ZWBvVaL2nHnlmk6mXCJbF69zU+Hz/9fB5O1lx+bg45qhl8/fYr5it3XrP/YWTUvvvk3VeqG + @ Xz70TvbJy8+Wv39+2/Hz19/mGwjZzuTYjALuoBv9jImaXHeyD3H7kU8fPj2ICML8z92dlbtMz + @ deiG3fco7J08foH1kurkm3E9iw54YvKwuTuom+LPt/BgbWf3//sf37/1/c02cCG1lB8f//f95 + @ DZx74MTMzshhoSm6szrQ/a6Ir/Z2RkfEjBxuLYFpDiDi6Af///2ckaHBp7+7wmavP5n76+P2C + @ lrLIYl8H9W36auJCbCxM4szMTJac7Kza////R3H1w2cfWAgafPbqs5g7D95++/P1B4+ECK8tA + @ wMDw/1H7159+/7r7ZcvPz4fOHbzEwMDwx8GBgaGnNatfHZx8zqrJ+4VJBh5CQEGOySEua/v3n + @ 7hXmqI8WUGBgYGL3vVG7fuPK3i5GD9/fja7ZsMDAzMG/Ze52mZeSj4yu1XEq/ff7W5dvfVAS1 + @ lsXc4Db7z8C3r8p7Qjf///2dnZGxlqJuyr3rPqQd/Hhyu7oSpYWScylDQsd3kzvnH738wMDzj + @ 5GBN1VIWW4c3KDon7VOvm7S3paB9u5qsU5/x5KUnlY+eexQbkLNsErK61+++VnAJcfkyMTIwf + @ fj0QwZbJDKjcETs1Y8evyd48toz8y/ffzv//vPP4veffxpX77z6l5JewHPu8MqTDAwMDLzyrj + @ b/mZm0JcT5Lj+89+Ybm6zz95oMh7s4XbygN3Sluq4Mj5K8iKMgP4f0////fv77//8nLy+7MCc + @ XmyYDAwODS9jM9tcvPypd35pne3ljdjvj26+H2dhYpuENikgfvQeXNmSl3tqepxXsqhXPyc66 + @ 6s+fv1fMdKR3TK72zpix8nTc7bdfhfkEeVbC9KhbK/9iYWHiErbu6MWbY/7//8/4//9/pgOnH + @ 6jGVazvFDRtq2VgiBIZrUTIBgCk+ivHvuEKwAAAAABJRU5ErkJggg=="> + + @ <img class="intLink" title="Bold" onclick="formatDoc('bold');" + @ src="data:image/gif;base64,R0lGODlhFgAWAID/AMDAwAAAACH5BAEAAAAALAAAAAAWAB + @ YAQAInhI+pa+H9mJy0LhdgtrxzDG5WGFVk6aXqyk6Y9kXvKKNuLbb6zgMFADs=" /> + + @ <img class="intLink" title="Italic" onclick="formatDoc('italic');" + @ src="data:image/gif;base64,R0lGODlhFgAWAKEDAAAAAF9vj5WIbf///yH5BAEAAAMALA + @ AAAAAWABYAAAIjnI+py+0Po5x0gXvruEKHrF2BB1YiCWgbMFIYpsbyTNd2UwAAOw==" /> + + @ <img class="intLink" title="Underline" onclick="formatDoc('underline');" + @ src="data:image/gif;base64,R0lGODlhFgAWAKECAAAAAF9vj////////yH5BAEAAAIALA + @ AAAAAWABYAAAIrlI+py+0Po5zUgAsEzvEeL4Ea15EiJJ5PSqJmuwKBEKgxVuXWtun+DwxCCgA + @ 7" /> + + @ <img class="intLink" title="Left align" + @ onclick="formatDoc('justifyleft');" + @ src="data:image/gif;base64,R0lGODlhFgAWAID/AMDAwAAAACH5BAEAAAAALAAAAAAWAB + @ YAQAIghI+py+0Po5y02ouz3jL4D4JMGELkGYxo+qzl4nKyXAAAOw==" /> + + @ <img class="intLink" title="Center align" + @ onclick="formatDoc('justifycenter');" + @ src="data:image/gif;base64,R0lGODlhFgAWAID/AMDAwAAAACH5BAEAAAAALAAAAAAWAB + @ YAQAIfhI+py+0Po5y02ouz3jL4D4JOGI7kaZ5Bqn4sycVbAQA7" /> + + @ <img 
class="intLink" title="Right align" + @ onclick="formatDoc('justifyright');" + @ src="data:image/gif;base64,R0lGODlhFgAWAID/AMDAwAAAACH5BAEAAAAALAAAAAAWAB + @ YAQAIghI+py+0Po5y02ouz3jL4D4JQGDLkGYxouqzl43JyVgAAOw==" /> + @ <img class="intLink" title="Numbered list" + @ onclick="formatDoc('insertorderedlist');" + @ src="data:image/gif;base64,R0lGODlhFgAWAMIGAAAAADljwliE35GjuaezxtHa7P//// + @ ///yH5BAEAAAcALAAAAAAWABYAAAM2eLrc/jDKSespwjoRFvggCBUBoTFBeq6QIAysQnRHaEO + @ zyaZ07Lu9lUBnC0UGQU1K52s6n5oEADs=" /> + + @ <img class="intLink" title="Dotted list" + @ onclick="formatDoc('insertunorderedlist');" + @ src="data:image/gif;base64,R0lGODlhFgAWAMIGAAAAAB1ChF9vj1iE33mOrqezxv//// + @ ///yH5BAEAAAcALAAAAAAWABYAAAMyeLrc/jDKSesppNhGRlBAKIZRERBbqm6YtnbfMY7lud6 + @ 4UwiuKnigGQliQuWOyKQykgAAOw==" /> + + @ <img class="intLink" title="Quote" + @ onclick="formatDoc('formatblock','blockquote');" + @ src="data:image/gif;base64,R0lGODlhFgAWAIQXAC1NqjFRjkBgmT9nqUJnsk9xrFJ7u2 + @ R9qmKBt1iGzHmOrm6Sz4OXw3Odz4Cl2ZSnw6KxyqO306K63bG70bTB0rDI3bvI4P///////// + @ //////////////////////////yH5BAEKAB8ALAAAAAAWABYAAAVP4CeOZGmeaKqubEs2Cekk + @ ErvEI1zZuOgYFlakECEZFi0GgTGKEBATFmJAVXweVOoKEQgABB9IQDCmrLpjETrQQlhHjINrT + @ q/b7/i8fp8PAQA7" /> + + @ <img class="intLink" title="Delete indentation" + @ onclick="formatDoc('outdent');" + @ src="data:image/gif;base64,R0lGODlhFgAWAMIHAAAAADljwliE35GjuaezxtDV3NHa7P + @ ///yH5BAEAAAcALAAAAAAWABYAAAM2eLrc/jDKCQG9F2i7u8agQgyK1z2EIBil+TWqEMxhMcz + @ sYVJ3e4ahk+sFnAgtxSQDqWw6n5cEADs=" /> + + @ <img class="intLink" title="Add indentation" + @ onclick="formatDoc('indent');" + @ src="data:image/gif;base64,R0lGODlhFgAWAOMIAAAAADljwl9vj1iE35GjuaezxtDV3N + @ Ha7P///////////////////////////////yH5BAEAAAgALAAAAAAWABYAAAQ7EMlJq704650 + @ B/x8gemMpgugwHJNZXodKsO5oqUOgo5KhBwWESyMQsCRDHu9VOyk5TM9zSpFSr9gsJwIAOw=="> + + @ <img class="intLink" title="Hyperlink" + @ onclick="var sLnk=prompt('Target URL:',''); + @ if(sLnk&&sLnk!=''){formatDoc('createlink',sLnk)}" + @ src="data:image/gif;base64,R0lGODlhFgAWAOMKAB1ChDRLY19vj3mOrpGjuaezxrCztb + @ /I19Ha7Pv8/f///////////////////////yH5BAEKAA8ALAAAAAAWABYAAARY8MlJq704682 + @ 7/2BYIQVhHg9pEgVGIklyDEUBy/RlE4FQF4dCj2AQXAiJQDCWQCAEBwIioEMQBgSAFhDAGghG + @ i9XgHAhMNoSZgJkJei33UESv2+/4vD4TAQA7" /> + +#if 0 /* Cut/Copy/Paste requires special browser permissions for security + ** reasons. 
So omit these buttons */ + @ <img class="intLink" title="Cut" + @ onclick="formatDoc('cut');" + @ src="data:image/gif;base64,R0lGODlhFgAWAIQSAB1ChBFNsRJTySJYwjljwkxwl19vj1 + @ dusYODhl6MnHmOrpqbmpGjuaezxrCztcDCxL/I18rL1P///////////////////////////// + @ //////////////////////////yH5BAEAAB8ALAAAAAAWABYAAAVu4CeOZGmeaKqubDs6TNnE + @ bGNApNG0kbGMi5trwcA9GArXh+FAfBAw5UexUDAQESkRsfhJPwaH4YsEGAAJGisRGAQY7UCC9 + @ ZAXBB+74LGCRxIEHwAHdWooDgGJcwpxDisQBQRjIgkDCVlfmZqbmiEAOw==" /> + + @ <img class="intLink" title="Copy" + @ onclick="formatDoc('copy');" + @ src="data:image/gif;base64,R0lGODlhFgAWAIQcAB1ChBFNsTRLYyJYwjljwl9vj1iE31 + @ iGzF6MnHWX9HOdz5GjuYCl2YKl8ZOt4qezxqK63aK/9KPD+7DI3b/I17LM/MrL1MLY9NHa7OP + @ s++bx/Pv8/f///////////////yH5BAEAAB8ALAAAAAAWABYAAAWG4CeOZGmeaKqubOum1SQ/ + @ kPVOW749BeVSus2CgrCxHptLBbOQxCSNCCaF1GUqwQbBd0JGJAyGJJiobE+LnCaDcXAaEoxhQ + @ ACgNw0FQx9kP+wmaRgYFBQNeAoGihCAJQsCkJAKOhgXEw8BLQYciooHf5o7EA+kC40qBKkAAA + @ Grpy+wsbKzIiEAOw==" /> + + @ <img class="intLink" title="Paste" + @ onclick="formatDoc('paste');" + @ src="data:image/gif;base64,R0lGODlhFgAWAIQUAD04KTRLY2tXQF9vj414WZWIbXmOrp + @ qbmpGjudClFaezxsa0cb/I1+3YitHa7PrkIPHvbuPs+/fvrvv8/f///////////////////// + @ //////////////////////////yH5BAEAAB8ALAAAAAAWABYAAAWN4CeOZGmeaKqubGsusPvB + @ SyFJjVDs6nJLB0khR4AkBCmfsCGBQAoCwjF5gwquVykSFbwZE+AwIBV0GhFog2EwIDchjwRiQ + @ o9E2Fx4XD5R+B0DDAEnBXBhBhN2DgwDAQFjJYVhCQYRfgoIDGiQJAWTCQMRiwwMfgicnVcAAA + @ MOaK+bLAOrtLUyt7i5uiUhADs=" /> +#endif + + @ </div> + @ <div id="wysiwygBox" + @ style="resize:both; overflow:auto; width: %d(w)em; height: %d(h)em;" + @ contenteditable="true">%s(zContent)</div> + @ <script> + @ var oDoc; + @ + @ /* Initialize the document editor */ + @ function initDoc() { + @ oDoc = document.getElementById("wysiwygBox"); + @ if (!isWysiwyg()) { setDocMode(true); } + @ } + @ + @ /* Return true if the document editor is in WYSIWYG mode. Return + @ ** false if it is in Markup mode */ + @ function isWysiwyg() { + @ return document.getElementById("editMode").selectedIndex==0; + @ } + @ + @ /* Invoke this routine prior to submitting the HTML content back + @ ** to the server */ + @ function wysiwygSubmit() { + @ if(oDoc.style.whiteSpace=="pre-wrap"){setDocMode(0);} + @ document.getElementById("wysiwygValue").value=oDoc.innerHTML; + @ } + @ + @ /* Run the editing command if in WYSIWYG mode */ + @ function formatDoc(sCmd, sValue) { + @ if (isWysiwyg()){ + @ try { + @ // First, try the W3C draft standard way, which has + @ // been working on all non-IE browsers for a while. + @ // It is also supported by IE11 and higher. + @ document.execCommand("styleWithCSS", false, false); + @ } catch (e) { + @ try { + @ // For IE9 or IE10, this should work. + @ document.execCommand("useCSS", 0, true); + @ } catch (e) { + @ // Ok, that apparently did not work, do nothing. + @ } + @ } + @ document.execCommand(sCmd, false, sValue); + @ oDoc.focus(); + @ } + @ } + @ + @ /* Change the editing mode. Convert to markup if the argument + @ ** is true and wysiwyg if the argument is false. 
*/ + @ function setDocMode(bToMarkup) { + @ var oContent; + @ if (bToMarkup) { + @ /* WYSIWYG -> Markup */ + @ var linebreak = new RegExp("</p><p>","ig"); + @ oContent = document.createTextNode( + @ oDoc.innerHTML.replace(linebreak,"</p>\n\n<p>")); + @ oDoc.innerHTML = ""; + @ oDoc.style.whiteSpace = "pre-wrap"; + @ oDoc.appendChild(oContent); + @ document.getElementById("toolBar1").style.visibility="hidden"; + @ document.getElementById("toolBar2").style.visibility="hidden"; + @ } else { + @ /* Markup -> WYSIWYG */ + @ if (document.all) { + @ oDoc.innerHTML = oDoc.innerText; + @ } else { + @ oContent = document.createRange(); + @ oContent.selectNodeContents(oDoc.firstChild); + @ oDoc.innerHTML = oContent.toString(); + @ } + @ oDoc.style.whiteSpace = "normal"; + @ document.getElementById("toolBar1").style.visibility="visible"; + @ document.getElementById("toolBar2").style.visibility="visible"; + @ } + @ oDoc.focus(); + @ } + @ initDoc(); + @ </script> + +} Index: src/xfer.c ================================================================== --- src/xfer.c +++ src/xfer.c @@ -18,30 +18,42 @@ ** This file contains code to implement the file transfer protocol. */ #include "config.h" #include "xfer.h" +#include <time.h> + +/* +** Maximum number of HTTP redirects that any http_exchange() call will +** follow before throwing a fatal error. Most browsers use a limit of 20. +*/ +#define MAX_REDIRECTS 20 + /* ** This structure holds information about the current state of either ** a client or a server that is participating in xfer. */ typedef struct Xfer Xfer; struct Xfer { Blob *pIn; /* Input text from the other side */ Blob *pOut; /* Compose our reply here */ Blob line; /* The current line of input */ - Blob aToken[5]; /* Tokenized version of line */ + Blob aToken[6]; /* Tokenized version of line */ Blob err; /* Error message text */ int nToken; /* Number of tokens in line */ int nIGotSent; /* Number of "igot" cards sent */ int nGimmeSent; /* Number of gimme cards sent */ int nFileSent; /* Number of files sent */ int nDeltaSent; /* Number of deltas sent */ int nFileRcvd; /* Number of files received */ int nDeltaRcvd; /* Number of deltas received */ int nDanglingFile; /* Number of dangling deltas received */ - int mxSend; /* Stop sending "file" with pOut reaches this size */ + int mxSend; /* Stop sending "file" when pOut reaches this size */ + int resync; /* Send igot cards for all holdings */ + u8 syncPrivate; /* True to enable syncing private content */ + u8 nextIsPrivate; /* If true, next "file" received is a private */ + time_t maxTime; /* Time when this transfer should be finished */ }; /* ** The input blob contains a UUID. Convert it into a record ID. @@ -49,11 +61,11 @@ ** phantomize is true. ** ** Compare to uuid_to_rid(). This routine takes a blob argument ** and does less error checking. */ -static int rid_from_uuid(Blob *pUuid, int phantomize){ +static int rid_from_uuid(Blob *pUuid, int phantomize, int isPrivate){ static Stmt q; int rid; db_static_prepare(&q, "SELECT rid FROM blob WHERE uuid=:uuid"); db_bind_str(&q, ":uuid", pUuid); @@ -62,25 +74,31 @@ }else{ rid = 0; } db_reset(&q); if( rid==0 && phantomize ){ - rid = content_new(blob_str(pUuid)); + rid = content_new(blob_str(pUuid), isPrivate); } return rid; } /* ** Remember that the other side of the connection already has a copy ** of the file rid. 
*/ static void remote_has(int rid){ - if( rid ) db_multi_exec("INSERT OR IGNORE INTO onremote VALUES(%d)", rid); + if( rid ){ + static Stmt q; + db_static_prepare(&q, "INSERT OR IGNORE INTO onremote VALUES(:r)"); + db_bind_int(&q, ":r", rid); + db_step(&q); + db_reset(&q); + } } /* -** The aToken[0..nToken-1] blob array is a parse of a "file" line +** The aToken[0..nToken-1] blob array is a parse of a "file" line ** message. This routine finishes parsing that message and does ** a record insert of the file. ** ** The file line is in one of the following two forms: ** @@ -95,17 +113,25 @@ ** be initialized to an empty string. ** ** Any artifact successfully received by this routine is considered to ** be public and is therefore removed from the "private" table. */ -static void xfer_accept_file(Xfer *pXfer){ +static void xfer_accept_file( + Xfer *pXfer, + int cloneFlag, + char **pzUuidList, + int *pnUuidList +){ int n; int rid; int srcid = 0; Blob content, hash; - - if( pXfer->nToken<3 + int isPriv; + + isPriv = pXfer->nextIsPrivate; + pXfer->nextIsPrivate = 0; + if( pXfer->nToken<3 || pXfer->nToken>4 || !blob_is_uuid(&pXfer->aToken[1]) || !blob_is_int(&pXfer->aToken[pXfer->nToken-1], &n) || n<0 || (pXfer->nToken==4 && !blob_is_uuid(&pXfer->aToken[2])) @@ -114,43 +140,151 @@ return; } blob_zero(&content); blob_zero(&hash); blob_extract(pXfer->pIn, n, &content); - if( uuid_is_shunned(blob_str(&pXfer->aToken[1])) ){ + if( !cloneFlag && uuid_is_shunned(blob_str(&pXfer->aToken[1])) ){ /* Ignore files that have been shunned */ + blob_reset(&content); + return; + } + if( isPriv && !g.perm.Private ){ + /* Do not accept private files if not authorized */ + blob_reset(&content); + return; + } + if( cloneFlag ){ + if( pXfer->nToken==4 ){ + srcid = rid_from_uuid(&pXfer->aToken[2], 1, isPriv); + pXfer->nDeltaRcvd++; + }else{ + srcid = 0; + pXfer->nFileRcvd++; + } + rid = content_put_ex(&content, blob_str(&pXfer->aToken[1]), srcid, + 0, isPriv); + Th_AppendToList(pzUuidList, pnUuidList, blob_str(&pXfer->aToken[1]), + blob_size(&pXfer->aToken[1])); + remote_has(rid); + blob_reset(&content); return; } if( pXfer->nToken==4 ){ - Blob src; - srcid = rid_from_uuid(&pXfer->aToken[2], 1); + Blob src, next; + srcid = rid_from_uuid(&pXfer->aToken[2], 1, isPriv); if( content_get(srcid, &src)==0 ){ - rid = content_put(&content, blob_str(&pXfer->aToken[1]), srcid); + rid = content_put_ex(&content, blob_str(&pXfer->aToken[1]), srcid, + 0, isPriv); + Th_AppendToList(pzUuidList, pnUuidList, blob_str(&pXfer->aToken[1]), + blob_size(&pXfer->aToken[1])); pXfer->nDanglingFile++; db_multi_exec("DELETE FROM phantom WHERE rid=%d", rid); - content_make_public(rid); + if( !isPriv ) content_make_public(rid); + blob_reset(&src); + blob_reset(&content); return; } pXfer->nDeltaRcvd++; - blob_delta_apply(&src, &content, &content); + blob_delta_apply(&src, &content, &next); blob_reset(&src); + blob_reset(&content); + content = next; }else{ pXfer->nFileRcvd++; } sha1sum_blob(&content, &hash); if( !blob_eq_str(&pXfer->aToken[1], blob_str(&hash), -1) ){ - blob_appendf(&pXfer->err, "content does not match sha1 hash"); + blob_appendf(&pXfer->err, + "wrong hash on received artifact: expected %s but got %s", + blob_str(&pXfer->aToken[1]), blob_str(&hash)); } - rid = content_put(&content, blob_str(&hash), 0); + rid = content_put_ex(&content, blob_str(&hash), 0, 0, isPriv); + Th_AppendToList(pzUuidList, pnUuidList, blob_str(&hash), blob_size(&hash)); blob_reset(&hash); if( rid==0 ){ blob_appendf(&pXfer->err, "%s", g.zErrMsg); + 
blob_reset(&content); + }else{ + if( !isPriv ) content_make_public(rid); + manifest_crosslink(rid, &content, MC_NO_ERRORS); + } + assert( blob_is_reset(&content) ); + remote_has(rid); +} + +/* +** The aToken[0..nToken-1] blob array is a parse of a "cfile" line +** message. This routine finishes parsing that message and does +** a record insert of the file. The difference between "file" and +** "cfile" is that with "cfile" the content is already compressed. +** +** The file line is in one of the following two forms: +** +** cfile UUID USIZE CSIZE \n CONTENT +** cfile UUID DELTASRC USIZE CSIZE \n CONTENT +** +** The content is CSIZE bytes immediately following the newline. +** If DELTASRC exists, then the CONTENT is a delta against the +** content of DELTASRC. +** +** The original size of the UUID artifact is USIZE. +** +** If any error occurs, write a message into pErr which has already +** be initialized to an empty string. +** +** Any artifact successfully received by this routine is considered to +** be public and is therefore removed from the "private" table. +*/ +static void xfer_accept_compressed_file( + Xfer *pXfer, + char **pzUuidList, + int *pnUuidList +){ + int szC; /* CSIZE */ + int szU; /* USIZE */ + int rid; + int srcid = 0; + Blob content; + int isPriv; + + isPriv = pXfer->nextIsPrivate; + pXfer->nextIsPrivate = 0; + if( pXfer->nToken<4 + || pXfer->nToken>5 + || !blob_is_uuid(&pXfer->aToken[1]) + || !blob_is_int(&pXfer->aToken[pXfer->nToken-2], &szU) + || !blob_is_int(&pXfer->aToken[pXfer->nToken-1], &szC) + || szC<0 || szU<0 + || (pXfer->nToken==5 && !blob_is_uuid(&pXfer->aToken[2])) + ){ + blob_appendf(&pXfer->err, "malformed cfile line"); + return; + } + if( isPriv && !g.perm.Private ){ + /* Do not accept private files if not authorized */ + return; + } + blob_zero(&content); + blob_extract(pXfer->pIn, szC, &content); + if( uuid_is_shunned(blob_str(&pXfer->aToken[1])) ){ + /* Ignore files that have been shunned */ + blob_reset(&content); + return; + } + if( pXfer->nToken==5 ){ + srcid = rid_from_uuid(&pXfer->aToken[2], 1, isPriv); + pXfer->nDeltaRcvd++; }else{ - content_make_public(rid); - manifest_crosslink(rid, &content); + srcid = 0; + pXfer->nFileRcvd++; } + rid = content_put_ex(&content, blob_str(&pXfer->aToken[1]), srcid, + szC, isPriv); + Th_AppendToList(pzUuidList, pnUuidList, blob_str(&pXfer->aToken[1]), + blob_size(&pXfer->aToken[1])); remote_has(rid); + blob_reset(&content); } /* ** Try to send a file as a delta against its parent. ** If successful, return the number of bytes in the delta. @@ -160,82 +294,86 @@ ** Never send a delta against a private artifact. 
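
For orientation, a rough illustration of the card shapes described above and produced by the send routines that follow; the hashes and byte counts here are made-up placeholders, and the content bytes begin immediately after the newline that ends each card:

    file 4de1a6... 1439
    <1439 bytes of full artifact content>

    file 4de1a6... 7d21f0... 212
    <212 bytes of delta against the artifact whose SHA1 is 7d21f0...>

    cfile 4de1a6... 1439 618
    <618 bytes of compressed content; 1439 bytes once uncompressed>
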
*/ static int send_delta_parent( Xfer *pXfer, /* The transfer context */ int rid, /* record id of the file to send */ + int isPrivate, /* True if rid is a private artifact */ Blob *pContent, /* The content of the file to send */ Blob *pUuid /* The UUID of the file to send */ ){ - static const char *azQuery[] = { + static const char *const azQuery[] = { "SELECT pid FROM plink x" " WHERE cid=%d" - " AND NOT EXISTS(SELECT 1 FROM phantom WHERE rid=pid)" - " AND NOT EXISTS(SELECT 1 FROM plink y" - " WHERE y.pid=x.cid AND y.cid=x.pid)", + " AND NOT EXISTS(SELECT 1 FROM phantom WHERE rid=pid)", - "SELECT pid FROM mlink x" + "SELECT pid, min(mtime) FROM mlink, event ON mlink.mid=event.objid" " WHERE fid=%d" " AND NOT EXISTS(SELECT 1 FROM phantom WHERE rid=pid)" - " AND NOT EXISTS(SELECT 1 FROM mlink y" - " WHERE y.pid=x.fid AND y.fid=x.pid)" }; int i; Blob src, delta; int size = 0; int srcId = 0; for(i=0; srcId==0 && i<count(azQuery); i++){ - srcId = db_int(0, azQuery[i], rid); + srcId = db_int(0, azQuery[i] /*works-like:"%d"*/, rid); } - if( srcId>0 && !content_is_private(srcId) && content_get(srcId, &src) ){ + if( srcId>0 + && (pXfer->syncPrivate || !content_is_private(srcId)) + && content_get(srcId, &src) + ){ char *zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", srcId); blob_delta_create(&src, pContent, &delta); size = blob_size(&delta); if( size>=blob_size(pContent)-50 ){ size = 0; }else if( uuid_is_shunned(zUuid) ){ size = 0; }else{ + if( isPrivate ) blob_append(pXfer->pOut, "private\n", -1); blob_appendf(pXfer->pOut, "file %b %s %d\n", pUuid, zUuid, size); blob_append(pXfer->pOut, blob_buffer(&delta), size); - /* blob_appendf(pXfer->pOut, "\n", 1); */ } blob_reset(&delta); free(zUuid); blob_reset(&src); } return size; } /* -** Try to send a file as a native delta. +** Try to send a file as a native delta. ** If successful, return the number of bytes in the delta. ** If we cannot generate an appropriate delta, then send ** nothing and return zero. ** ** Never send a delta against a private artifact. */ static int send_delta_native( Xfer *pXfer, /* The transfer context */ int rid, /* record id of the file to send */ + int isPrivate, /* True if rid is a private artifact */ Blob *pUuid /* The UUID of the file to send */ ){ Blob src, delta; int size = 0; int srcId; srcId = db_int(0, "SELECT srcid FROM delta WHERE rid=%d", rid); - if( srcId>0 && !content_is_private(srcId) ){ + if( srcId>0 + && (pXfer->syncPrivate || !content_is_private(srcId)) + ){ blob_zero(&src); db_blob(&src, "SELECT uuid FROM blob WHERE rid=%d", srcId); if( uuid_is_shunned(blob_str(&src)) ){ blob_reset(&src); return 0; } blob_zero(&delta); db_blob(&delta, "SELECT content FROM blob WHERE rid=%d", rid); blob_uncompress(&delta, &delta); + if( isPrivate ) blob_append(pXfer->pOut, "private\n", -1); blob_appendf(pXfer->pOut, "file %b %b %d\n", pUuid, &src, blob_size(&delta)); blob_append(pXfer->pOut, blob_buffer(&delta), blob_size(&delta)); size = blob_size(&delta); blob_reset(&delta); @@ -260,12 +398,13 @@ ** this routine becomes a no-op. 
*/ static void send_file(Xfer *pXfer, int rid, Blob *pUuid, int nativeDelta){ Blob content, uuid; int size = 0; + int isPriv = content_is_private(rid); - if( content_is_private(rid) ) return; + if( pXfer->syncPrivate==0 && isPriv ) return; if( db_exists("SELECT 1 FROM onremote WHERE rid=%d", rid) ){ return; } blob_zero(&uuid); db_blob(&uuid, "SELECT uuid FROM blob WHERE rid=%d AND size>=0", rid); @@ -282,53 +421,127 @@ } if( uuid_is_shunned(blob_str(pUuid)) ){ blob_reset(&uuid); return; } - if( pXfer->mxSend<=blob_size(pXfer->pOut) ){ - blob_appendf(pXfer->pOut, "igot %b\n", pUuid); + if( (pXfer->maxTime != -1 && time(NULL) >= pXfer->maxTime) || + pXfer->mxSend<=blob_size(pXfer->pOut) ){ + const char *zFormat = isPriv ? "igot %b 1\n" : "igot %b\n"; + blob_appendf(pXfer->pOut, zFormat /*works-like:"%b"*/, pUuid); pXfer->nIGotSent++; blob_reset(&uuid); return; } if( nativeDelta ){ - size = send_delta_native(pXfer, rid, pUuid); + size = send_delta_native(pXfer, rid, isPriv, pUuid); if( size ){ pXfer->nDeltaSent++; } } if( size==0 ){ content_get(rid, &content); if( !nativeDelta && blob_size(&content)>100 ){ - size = send_delta_parent(pXfer, rid, &content, pUuid); + size = send_delta_parent(pXfer, rid, isPriv, &content, pUuid); } if( size==0 ){ int size = blob_size(&content); + if( isPriv ) blob_append(pXfer->pOut, "private\n", -1); blob_appendf(pXfer->pOut, "file %b %d\n", pUuid, size); blob_append(pXfer->pOut, blob_buffer(&content), size); pXfer->nFileSent++; }else{ pXfer->nDeltaSent++; } + blob_reset(&content); } remote_has(rid); blob_reset(&uuid); +#if 0 + if( blob_buffer(pXfer->pOut)[blob_size(pXfer->pOut)-1]!='\n' ){ + blob_append(pXfer->pOut, "\n", 1); + } +#endif +} + +/* +** Send the file identified by rid as a compressed artifact. Basically, +** send the content exactly as it appears in the BLOB table using +** a "cfile" card. 
+*/ +static void send_compressed_file(Xfer *pXfer, int rid){ + const char *zContent; + const char *zUuid; + const char *zDelta; + int szU; + int szC; + int rc; + int isPrivate; + int srcIsPrivate; + static Stmt q1; + Blob fullContent; + + isPrivate = content_is_private(rid); + if( isPrivate && pXfer->syncPrivate==0 ) return; + db_static_prepare(&q1, + "SELECT uuid, size, content, delta.srcid IN private," + " (SELECT uuid FROM blob WHERE rid=delta.srcid)" + " FROM blob LEFT JOIN delta ON (blob.rid=delta.rid)" + " WHERE blob.rid=:rid" + " AND blob.size>=0" + " AND NOT EXISTS(SELECT 1 FROM shun WHERE shun.uuid=blob.uuid)" + ); + db_bind_int(&q1, ":rid", rid); + rc = db_step(&q1); + if( rc==SQLITE_ROW ){ + zUuid = db_column_text(&q1, 0); + szU = db_column_int(&q1, 1); + szC = db_column_bytes(&q1, 2); + zContent = db_column_raw(&q1, 2); + srcIsPrivate = db_column_int(&q1, 3); + zDelta = db_column_text(&q1, 4); + if( isPrivate ) blob_append(pXfer->pOut, "private\n", -1); + blob_appendf(pXfer->pOut, "cfile %s ", zUuid); + if( !isPrivate && srcIsPrivate ){ + content_get(rid, &fullContent); + szU = blob_size(&fullContent); + blob_compress(&fullContent, &fullContent); + szC = blob_size(&fullContent); + zContent = blob_buffer(&fullContent); + zDelta = 0; + } + if( zDelta ){ + blob_appendf(pXfer->pOut, "%s ", zDelta); + pXfer->nDeltaSent++; + }else{ + pXfer->nFileSent++; + } + blob_appendf(pXfer->pOut, "%d %d\n", szU, szC); + blob_append(pXfer->pOut, zContent, szC); + if( blob_buffer(pXfer->pOut)[blob_size(pXfer->pOut)-1]!='\n' ){ + blob_append(pXfer->pOut, "\n", 1); + } + if( !isPrivate && srcIsPrivate ){ + blob_reset(&fullContent); + } + } + db_reset(&q1); } /* ** Send a gimme message for every phantom. ** -** It should not be possible to have a private phantom. But just to be -** sure, take care not to send any "gimme" messagse on private artifacts. +** Except: do not request shunned artifacts. And do not request +** private artifacts if we are not doing a private transfer. */ static void request_phantoms(Xfer *pXfer, int maxReq){ Stmt q; - db_prepare(&q, - "SELECT uuid FROM phantom JOIN blob USING(rid)" - " WHERE NOT EXISTS(SELECT 1 FROM shun WHERE uuid=blob.uuid)" - " AND NOT EXISTS(SELECT 1 FROM private WHERE rid=blob.rid)" + db_prepare(&q, + "SELECT uuid FROM phantom CROSS JOIN blob USING(rid) /*scan*/" + " WHERE NOT EXISTS(SELECT 1 FROM shun WHERE uuid=blob.uuid) %s", + (pXfer->syncPrivate ? "" : + " AND NOT EXISTS(SELECT 1 FROM private WHERE rid=blob.rid)") ); while( db_step(&q)==SQLITE_ROW && maxReq-- > 0 ){ const char *zUuid = db_column_text(&q, 0); blob_appendf(pXfer->pOut, "gimme %s\n", zUuid); pXfer->nGimmeSent++; @@ -356,24 +569,24 @@ ** Check the signature on an application/x-fossil payload received by ** the HTTP server. The signature is a line of the following form: ** ** login LOGIN NONCE SIGNATURE ** -** The NONCE is the SHA1 hash of the remainder of the input. -** SIGNATURE is the SHA1 checksum of the NONCE concatenated +** The NONCE is the SHA1 hash of the remainder of the input. +** SIGNATURE is the SHA1 checksum of the NONCE concatenated ** with the users password. ** -** The parameters to this routine are ephermeral blobs holding the +** The parameters to this routine are ephemeral blobs holding the ** LOGIN, NONCE and SIGNATURE. ** ** This routine attempts to locate the user and verify the signature. ** If everything checks out, the USER.CAP column for the USER table ** is consulted to set privileges in the global g variable. 
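
To make the scheme above concrete, here is a minimal sketch of how a client could assemble such a login card using the same blob and SHA1 helpers that appear in this file. This is an editor's illustration only: the function name and its arguments are hypothetical, and the real client-side card is produced inside http_exchange().

    static void sketch_login_card(
      Blob *pOut,          /* Append the login card here */
      Blob *pPayload,      /* Everything that will follow the login card */
      const char *zLogin,  /* User name */
      const char *zSecret  /* Shared secret known to client and server */
    ){
      Blob nonce, combined, sig;
      blob_zero(&nonce);
      blob_zero(&combined);
      blob_zero(&sig);
      sha1sum_blob(pPayload, &nonce);        /* NONCE: SHA1 of the payload */
      blob_copy(&combined, &nonce);
      blob_append(&combined, zSecret, -1);   /* NONCE followed by the secret */
      sha1sum_blob(&combined, &sig);         /* SIGNATURE */
      blob_appendf(pOut, "login %s %b %b\n", zLogin, &nonce, &sig);
      blob_reset(&combined);
      blob_reset(&nonce);
      blob_reset(&sig);
    }

The rest of the message is then appended after this card, so that the NONCE the server recomputes over the remainder of the input matches the one that was sent.
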
** ** If anything fails to check out, no changes are made to privileges. ** -** Signature generation on the client side is handled by the +** Signature generation on the client side is handled by the ** http_exchange() routine. ** ** Return non-zero for a login failure and zero for success. */ int check_login(Blob *pLogin, Blob *pNonce, Blob *pSig){ @@ -380,12 +593,16 @@ Stmt q; int rc = -1; char *zLogin = blob_terminate(pLogin); defossilize(zLogin); - if( strcmp(zLogin, "nobody")==0 || strcmp(zLogin,"anonymous")==0 ){ + if( fossil_strcmp(zLogin, "nobody")==0 || fossil_strcmp(zLogin,"anonymous")==0 ){ return 0; /* Anybody is allowed to sync as "nobody" or "anonymous" */ + } + if( fossil_strcmp(P("REMOTE_USER"), zLogin)==0 + && db_get_boolean("remote_user_ok",0) ){ + return 0; /* Accept Basic Authorization */ } db_prepare(&q, "SELECT pw, cap, uid FROM user" " WHERE login=%Q" " AND login NOT IN ('anonymous','nobody','developer','reader')" @@ -401,47 +618,39 @@ blob_zero(&combined); blob_copy(&combined, pNonce); blob_append(&combined, blob_buffer(&pw), szPw); sha1sum_blob(&combined, &hash); assert( blob_size(&hash)==40 ); - rc = blob_compare(&hash, pSig); + rc = blob_constant_time_cmp(&hash, pSig); blob_reset(&hash); blob_reset(&combined); if( rc!=0 && szPw!=40 ){ /* If this server stores cleartext passwords and the password did not ** match, then perhaps the client is sending SHA1 passwords. Try ** again with the SHA1 password. */ const char *zPw = db_column_text(&q, 0); - char *zSecret = sha1_shared_secret(zPw, blob_str(pLogin)); + char *zSecret = sha1_shared_secret(zPw, blob_str(pLogin), 0); blob_zero(&combined); blob_copy(&combined, pNonce); blob_append(&combined, zSecret, -1); free(zSecret); sha1sum_blob(&combined, &hash); - rc = blob_compare(&hash, pSig); + rc = blob_constant_time_cmp(&hash, pSig); blob_reset(&hash); blob_reset(&combined); } if( rc==0 ){ const char *zCap; zCap = db_column_text(&q, 1); - login_set_capabilities(zCap); + login_set_capabilities(zCap, 0); g.userUid = db_column_int(&q, 2); g.zLogin = mprintf("%b", pLogin); g.zNonce = mprintf("%b", pNonce); - if( g.fHttpTrace ){ - fprintf(stderr, "# login [%s] with capabilities [%s]\n", g.zLogin,zCap); - } } } db_finalize(&q); - - if( rc==0 ){ - /* If the login was successful. */ - login_set_anon_nobody_capabilities(); - } return rc; } /* ** Send the content of all files in the unsent table. @@ -465,73 +674,131 @@ ** Check to see if the number of unclustered entries is greater than ** 100 and if it is, form a new cluster. Unclustered phantoms do not ** count toward the 100 total. And phantoms are never added to a new ** cluster. */ -static void create_cluster(void){ +void create_cluster(void){ Blob cluster, cksum; + Blob deleteWhere; Stmt q; int nUncl; + int nRow = 0; + int rid; +#if 0 /* We should not ever get any private artifacts in the unclustered table. ** But if we do (because of a bug) now is a good time to delete them. 
*/ db_multi_exec( "DELETE FROM unclustered WHERE rid IN (SELECT rid FROM private)" ); +#endif - nUncl = db_int(0, "SELECT count(*) FROM unclustered" + nUncl = db_int(0, "SELECT count(*) FROM unclustered /*scan*/" " WHERE NOT EXISTS(SELECT 1 FROM phantom" " WHERE rid=unclustered.rid)"); - if( nUncl<100 ){ - return; - } - blob_zero(&cluster); - db_prepare(&q, "SELECT uuid FROM unclustered, blob" - " WHERE NOT EXISTS(SELECT 1 FROM phantom" - " WHERE rid=unclustered.rid)" - " AND unclustered.rid=blob.rid" - " AND NOT EXISTS(SELECT 1 FROM shun WHERE uuid=blob.uuid)" - " ORDER BY 1"); - while( db_step(&q)==SQLITE_ROW ){ - blob_appendf(&cluster, "M %s\n", db_column_text(&q, 0)); - } - db_finalize(&q); - md5sum_blob(&cluster, &cksum); - blob_appendf(&cluster, "Z %b\n", &cksum); - blob_reset(&cksum); - db_multi_exec("DELETE FROM unclustered"); - content_put(&cluster, 0, 0); - blob_reset(&cluster); + if( nUncl>=100 ){ + blob_zero(&cluster); + blob_zero(&deleteWhere); + db_prepare(&q, "SELECT uuid FROM unclustered, blob" + " WHERE NOT EXISTS(SELECT 1 FROM phantom" + " WHERE rid=unclustered.rid)" + " AND unclustered.rid=blob.rid" + " AND NOT EXISTS(SELECT 1 FROM shun WHERE uuid=blob.uuid)" + " ORDER BY 1"); + while( db_step(&q)==SQLITE_ROW ){ + blob_appendf(&cluster, "M %s\n", db_column_text(&q, 0)); + nRow++; + if( nRow>=800 && nUncl>nRow+100 ){ + md5sum_blob(&cluster, &cksum); + blob_appendf(&cluster, "Z %b\n", &cksum); + blob_reset(&cksum); + rid = content_put(&cluster); + manifest_crosslink(rid, &cluster, MC_NONE); + blob_reset(&cluster); + nUncl -= nRow; + nRow = 0; + blob_append_sql(&deleteWhere, ",%d", rid); + } + } + db_finalize(&q); + db_multi_exec( + "DELETE FROM unclustered WHERE rid NOT IN (0 %s)" + " AND NOT EXISTS(SELECT 1 FROM phantom WHERE rid=unclustered.rid)", + blob_sql_text(&deleteWhere) + ); + blob_reset(&deleteWhere); + if( nRow>0 ){ + md5sum_blob(&cluster, &cksum); + blob_appendf(&cluster, "Z %b\n", &cksum); + blob_reset(&cksum); + rid = content_put(&cluster); + manifest_crosslink(rid, &cluster, MC_NONE); + blob_reset(&cluster); + } + } +} + +/* +** Send igot messages for every private artifact +*/ +static int send_private(Xfer *pXfer){ + int cnt = 0; + Stmt q; + if( pXfer->syncPrivate ){ + db_prepare(&q, "SELECT uuid FROM private JOIN blob USING(rid)"); + while( db_step(&q)==SQLITE_ROW ){ + blob_appendf(pXfer->pOut, "igot %s 1\n", db_column_text(&q,0)); + cnt++; + } + db_finalize(&q); + } + return cnt; } /* ** Send an igot message for every entry in unclustered table. ** Return the number of cards sent. 
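
For reference, the igot card emitted here and by send_private() above comes in two forms; the hash is an abbreviated placeholder:

    igot 4de1a6...        the sender holds this public artifact
    igot 4de1a6... 1      the sender holds this private artifact (sent only
                          when private syncing is enabled)
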
*/ static int send_unclustered(Xfer *pXfer){ Stmt q; int cnt = 0; - db_prepare(&q, - "SELECT uuid FROM unclustered JOIN blob USING(rid)" - " WHERE NOT EXISTS(SELECT 1 FROM shun WHERE uuid=blob.uuid)" - " AND NOT EXISTS(SELECT 1 FROM private WHERE rid=blob.rid)" - " AND NOT EXISTS(SELECT 1 FROM phantom WHERE rid=blob.rid)" - ); + if( pXfer->resync ){ + db_prepare(&q, + "SELECT uuid, rid FROM blob" + " WHERE NOT EXISTS(SELECT 1 FROM shun WHERE uuid=blob.uuid)" + " AND NOT EXISTS(SELECT 1 FROM phantom WHERE rid=blob.rid)" + " AND NOT EXISTS(SELECT 1 FROM private WHERE rid=blob.rid)" + " AND blob.rid<=%d" + " ORDER BY blob.rid DESC", + pXfer->resync + ); + }else{ + db_prepare(&q, + "SELECT uuid FROM unclustered JOIN blob USING(rid)" + " WHERE NOT EXISTS(SELECT 1 FROM shun WHERE uuid=blob.uuid)" + " AND NOT EXISTS(SELECT 1 FROM phantom WHERE rid=blob.rid)" + " AND NOT EXISTS(SELECT 1 FROM private WHERE rid=blob.rid)" + ); + } while( db_step(&q)==SQLITE_ROW ){ blob_appendf(pXfer->pOut, "igot %s\n", db_column_text(&q, 0)); cnt++; + if( pXfer->resync && pXfer->mxSend<blob_size(pXfer->pOut) ){ + pXfer->resync = db_column_int(&q, 1)-1; + } } db_finalize(&q); + if( cnt==0 ) pXfer->resync = 0; return cnt; } /* ** Send an igot message for every artifact. */ static void send_all(Xfer *pXfer){ Stmt q; - db_prepare(&q, + db_prepare(&q, "SELECT uuid FROM blob " " WHERE NOT EXISTS(SELECT 1 FROM shun WHERE uuid=blob.uuid)" " AND NOT EXISTS(SELECT 1 FROM private WHERE rid=blob.rid)" " AND NOT EXISTS(SELECT 1 FROM phantom WHERE rid=blob.rid)" ); @@ -540,13 +807,16 @@ } db_finalize(&q); } /* -** Send a single config card for configuration item zName +** Send a single old-style config card for configuration item zName. +** +** This routine and the functionality it implements is scheduled for +** removal on 2012-05-01. */ -static void send_config_card(Xfer *pXfer, const char *zName){ +static void send_legacy_config_card(Xfer *pXfer, const char *zName){ if( zName[0]!='@' ){ Blob val; blob_zero(&val); db_blob(&val, "SELECT value FROM config WHERE name=%Q", zName); if( blob_size(&val)>0 ){ @@ -563,10 +833,80 @@ blob_size(&content), blob_str(&content)); blob_reset(&content); } } +/* +** Called when there is an attempt to transfer private content to and +** from a server without authorization. +*/ +static void server_private_xfer_not_authorized(void){ + @ error not\sauthorized\sto\ssync\sprivate\scontent +} + +/* +** Return the common TH1 code to evaluate prior to evaluating any other +** TH1 transfer notification scripts. +*/ +const char *xfer_common_code(void){ + return db_get("xfer-common-script", 0); +} + +/* +** Return the TH1 code to evaluate when a push is processed. +*/ +const char *xfer_push_code(void){ + return db_get("xfer-push-script", 0); +} + +/* +** Return the TH1 code to evaluate when a commit is processed. +*/ +const char *xfer_commit_code(void){ + return db_get("xfer-commit-script", 0); +} + +/* +** Return the TH1 code to evaluate when a ticket change is processed. +*/ +const char *xfer_ticket_code(void){ + return db_get("xfer-ticket-script", 0); +} + +/* +** Run the specified TH1 script, if any, and returns 1 on error. +*/ +int xfer_run_script( + const char *zScript, + const char *zUuidOrList, + int bIsList +){ + int rc = TH_OK; + if( !zScript ) return rc; + Th_FossilInit(TH_INIT_DEFAULT); + Th_Store(bIsList ? "uuids" : "uuid", zUuidOrList ? 
zUuidOrList : ""); + rc = Th_Eval(g.interp, 0, zScript, -1); + if( rc!=TH_OK ){ + fossil_error(1, "%s", Th_GetResult(g.interp, 0)); + } + return rc; +} + +/* +** Runs the pre-transfer TH1 script, if any, and returns its return code. +** This script may be run multiple times. If the script performs actions +** that cannot be redone, it should use an internal [if] guard similar to +** the following: +** +** if {![info exists common_done]} { +** # ... code here +** set common_done 1 +** } +*/ +int xfer_run_common_script(void){ + return xfer_run_script(xfer_common_code(), 0, 0); +} /* ** If this variable is set, disable login checks. Used for debugging ** only. */ @@ -575,11 +915,11 @@ /* ** The CGI/HTTP preprocessor always redirects requests with a content-type ** of application/x-fossil or application/x-fossil-debug to this page, ** regardless of what path was specified in the HTTP header. This allows ** clone clients to specify a URL that omits default pathnames, such -** as "http://fossil-scm.morg/" instead of "http://fossil-scm.org/index.cgi". +** as "http://fossil-scm.org/" instead of "http://fossil-scm.org/index.cgi". ** ** WEBPAGE: xfer ** ** This is the transfer handler on the server side. The transfer ** message has been uncompressed and placed in the g.cgiIn blob. @@ -593,30 +933,60 @@ int deltaFlag = 0; int isClone = 0; int nGimme = 0; int size; int recvConfig = 0; + char *zNow; + int rc; + const char *zScript = 0; + char *zUuidList = 0; + int nUuidList = 0; + char **pzUuidList = 0; + int *pnUuidList = 0; - if( strcmp(PD("REQUEST_METHOD","POST"),"POST") ){ + if( fossil_strcmp(PD("REQUEST_METHOD","POST"),"POST") ){ fossil_redirect_home(); } + g.zLogin = "anonymous"; + login_set_anon_nobody_capabilities(); + login_check_credentials(); memset(&xfer, 0, sizeof(xfer)); blobarray_zero(xfer.aToken, count(xfer.aToken)); cgi_set_content_type(g.zContentType); + cgi_reset_content(); + if( db_schema_is_outofdate() ){ + @ error database\sschema\sis\sout-of-date\son\sthe\sserver. + return; + } blob_zero(&xfer.err); xfer.pIn = &g.cgiIn; xfer.pOut = cgi_output_blob(); xfer.mxSend = db_get_int("max-download", 5000000); + xfer.maxTime = db_get_int("max-download-time", 30); + if( xfer.maxTime<1 ) xfer.maxTime = 1; + xfer.maxTime += time(NULL); g.xferPanic = 1; db_begin_transaction(); db_multi_exec( "CREATE TEMP TABLE onremote(rid INTEGER PRIMARY KEY);" ); manifest_crosslink_begin(); + rc = xfer_run_common_script(); + if( rc==TH_ERROR ){ + cgi_reset_content(); + @ error common\sscript\sfailed:\s%F(g.zErrMsg) + nErr++; + } + zScript = xfer_push_code(); + if( zScript ){ /* NOTE: Are TH1 transfer hooks enabled? */ + pzUuidList = &zUuidList; + pnUuidList = &nUuidList; + } while( blob_line(xfer.pIn, &xfer.line) ){ if( blob_buffer(&xfer.line)[0]=='#' ) continue; + if( blob_size(&xfer.line)==0 ) continue; xfer.nToken = blob_tokenize(&xfer.line, xfer.aToken, count(xfer.aToken)); /* file UUID SIZE \n CONTENT ** file UUID DELTASRC SIZE \n CONTENT ** @@ -627,11 +997,32 @@ cgi_reset_content(); @ error not\sauthorized\sto\swrite nErr++; break; } - xfer_accept_file(&xfer); + xfer_accept_file(&xfer, 0, pzUuidList, pnUuidList); + if( blob_size(&xfer.err) ){ + cgi_reset_content(); + @ error %T(blob_str(&xfer.err)) + nErr++; + break; + } + }else + + /* cfile UUID USIZE CSIZE \n CONTENT + ** cfile UUID DELTASRC USIZE CSIZE \n CONTENT + ** + ** Accept a file from the client. 
+ */ + if( blob_eq(&xfer.aToken[0], "cfile") ){ + if( !isPush ){ + cgi_reset_content(); + @ error not\sauthorized\sto\swrite + nErr++; + break; + } + xfer_accept_compressed_file(&xfer, pzUuidList, pnUuidList); if( blob_size(&xfer.err) ){ cgi_reset_content(); @ error %T(blob_str(&xfer.err)) nErr++; break; @@ -646,31 +1037,38 @@ && xfer.nToken==2 && blob_is_uuid(&xfer.aToken[1]) ){ nGimme++; if( isPull ){ - int rid = rid_from_uuid(&xfer.aToken[1], 0); + int rid = rid_from_uuid(&xfer.aToken[1], 0, 0); if( rid ){ send_file(&xfer, rid, &xfer.aToken[1], deltaFlag); } } }else - /* igot UUID + /* igot UUID ?ISPRIVATE? ** - ** Client announces that it has a particular file. + ** Client announces that it has a particular file. If the ISPRIVATE + ** argument exists and is non-zero, then the file is a private file. */ - if( xfer.nToken==2 + if( xfer.nToken>=2 && blob_eq(&xfer.aToken[0], "igot") && blob_is_uuid(&xfer.aToken[1]) ){ if( isPush ){ - rid_from_uuid(&xfer.aToken[1], 1); + if( xfer.nToken==2 || blob_eq(&xfer.aToken[2],"1")==0 ){ + rid_from_uuid(&xfer.aToken[1], 1, 0); + }else if( g.perm.Private ){ + rid_from_uuid(&xfer.aToken[1], 1, 1); + }else{ + server_private_xfer_not_authorized(); + } } }else - - + + /* pull SERVERCODE PROJECTCODE ** push SERVERCODE PROJECTCODE ** ** The client wants either send or receive. The server should ** verify that the project code matches. @@ -691,19 +1089,19 @@ nErr++; break; } login_check_credentials(); if( blob_eq(&xfer.aToken[0], "pull") ){ - if( !g.okRead ){ + if( !g.perm.Read ){ cgi_reset_content(); @ error not\sauthorized\sto\sread nErr++; break; } isPull = 1; }else{ - if( !g.okWrite ){ + if( !g.perm.Write ){ if( !isPull ){ cgi_reset_content(); @ error not\sauthorized\sto\swrite nErr++; }else{ @@ -713,26 +1111,50 @@ isPush = 1; } } }else - /* clone + /* clone ?PROTOCOL-VERSION? ?SEQUENCE-NUMBER? ** ** The client knows nothing. Tell all. 
*/ if( blob_eq(&xfer.aToken[0], "clone") ){ + int iVers; login_check_credentials(); - if( !g.okClone ){ + if( !g.perm.Clone ){ cgi_reset_content(); @ push %s(db_get("server-code", "x")) %s(db_get("project-code", "x")) @ error not\sauthorized\sto\sclone nErr++; break; } - isClone = 1; - isPull = 1; - deltaFlag = 1; + if( xfer.nToken==3 + && blob_is_int(&xfer.aToken[1], &iVers) + && iVers>=2 + ){ + int seqno, max; + if( iVers>=3 ){ + cgi_set_content_type("application/x-fossil-uncompressed"); + } + blob_is_int(&xfer.aToken[2], &seqno); + max = db_int(0, "SELECT max(rid) FROM blob"); + while( xfer.mxSend>blob_size(xfer.pOut) && seqno<=max){ + if( time(NULL) >= xfer.maxTime ) break; + if( iVers>=3 ){ + send_compressed_file(&xfer, seqno); + }else{ + send_file(&xfer, seqno, 0, 1); + } + seqno++; + } + if( seqno>max ) seqno = 0; + @ clone_seqno %d(seqno) + }else{ + isClone = 1; + isPull = 1; + deltaFlag = 1; + } @ push %s(db_get("server-code", "x")) %s(db_get("project-code", "x")) }else /* login USER NONCE SIGNATURE ** @@ -741,38 +1163,45 @@ */ if( blob_eq(&xfer.aToken[0], "login") && xfer.nToken==4 ){ if( disableLogin ){ - g.okRead = g.okWrite = 1; + g.perm.Read = g.perm.Write = g.perm.Private = g.perm.Admin = 1; }else{ if( check_tail_hash(&xfer.aToken[2], xfer.pIn) || check_login(&xfer.aToken[1], &xfer.aToken[2], &xfer.aToken[3]) ){ cgi_reset_content(); @ error login\sfailed nErr++; break; - } + } } }else - + /* reqconfig NAME ** ** Request a configuration value */ if( blob_eq(&xfer.aToken[0], "reqconfig") && xfer.nToken==2 ){ - if( g.okRead ){ + if( g.perm.Read ){ char *zName = blob_str(&xfer.aToken[1]); - if( configure_is_exportable(zName) ){ - send_config_card(&xfer, zName); + if( zName[0]=='/' ){ + /* New style configuration transfer */ + int groupMask = configure_name_to_mask(&zName[1], 0); + if( !g.perm.Admin ) groupMask &= ~CONFIGSET_USER; + if( !g.perm.RdAddr ) groupMask &= ~CONFIGSET_ADDR; + configure_send_group(xfer.pOut, groupMask, 0); + }else if( configure_is_exportable(zName) ){ + /* Old style configuration transfer */ + send_legacy_config_card(&xfer, zName); } } }else - + /* config NAME SIZE \n CONTENT ** ** Receive a configuration value from the client. This is only ** permitted for high-privilege users. */ @@ -780,49 +1209,26 @@ && blob_is_int(&xfer.aToken[2], &size) ){ const char *zName = blob_str(&xfer.aToken[1]); Blob content; blob_zero(&content); blob_extract(xfer.pIn, size, &content); - if( !g.okAdmin ){ + if( !g.perm.Admin ){ cgi_reset_content(); @ error not\sauthorized\sto\spush\sconfiguration nErr++; break; } - if( zName[0]!='@' ){ - if( strcmp(zName, "logo-image")==0 ){ - Stmt ins; - db_prepare(&ins, - "REPLACE INTO config(name, value) VALUES(:name, :value)" - ); - db_bind_text(&ins, ":name", zName); - db_bind_blob(&ins, ":value", &content); - db_step(&ins); - db_finalize(&ins); - }else{ - db_multi_exec( - "REPLACE INTO config(name,value) VALUES(%Q,%Q)", - zName, blob_str(&content) - ); - } - }else{ - /* Notice that we are evaluating arbitrary SQL received from the - ** client. But this can only happen if the client has authenticated - ** as an administrator, so presumably we trust the client at this - ** point. 
- */ - if( !recvConfig ){ - configure_prepare_to_receive(0); - recvConfig = 1; - } - db_multi_exec("%s", blob_str(&content)); - } + if( !recvConfig && zName[0]=='@' ){ + configure_prepare_to_receive(0); + recvConfig = 1; + } + configure_receive(zName, &content, CONFIGSET_ALL); blob_reset(&content); blob_seek(xfer.pIn, 1, BLOB_SEEK_CUR); }else - + /* cookie TEXT ** ** A cookie contains a arbitrary-length argument that is server-defined. ** The argument must be encoded so as not to contain any whitespace. @@ -841,39 +1247,108 @@ */ if( blob_eq(&xfer.aToken[0], "cookie") && xfer.nToken==2 ){ /* Process the cookie */ }else + + /* private + ** + ** This card indicates that the next "file" or "cfile" will contain + ** private content. + */ + if( blob_eq(&xfer.aToken[0], "private") ){ + if( !g.perm.Private ){ + server_private_xfer_not_authorized(); + }else{ + xfer.nextIsPrivate = 1; + } + }else + + + /* pragma NAME VALUE... + ** + ** The client issue pragmas to try to influence the behavior of the + ** server. These are requests only. Unknown pragmas are silently + ** ignored. + */ + if( blob_eq(&xfer.aToken[0], "pragma") && xfer.nToken>=2 ){ + /* pragma send-private + ** + ** If the user has the "x" privilege (which must be set explicitly - + ** it is not automatic with "a" or "s") then this pragma causes + ** private information to be pulled in addition to public records. + */ + if( blob_eq(&xfer.aToken[1], "send-private") ){ + login_check_credentials(); + if( !g.perm.Private ){ + server_private_xfer_not_authorized(); + }else{ + xfer.syncPrivate = 1; + } + } + /* pragma send-catalog + ** + ** Send igot cards for all known artifacts. + */ + if( blob_eq(&xfer.aToken[1], "send-catalog") ){ + xfer.resync = 0x7fffffff; + } + }else + /* Unknown message */ { cgi_reset_content(); @ error bad\scommand:\s%F(blob_str(&xfer.line)) } blobarray_reset(xfer.aToken, xfer.nToken); + blob_reset(&xfer.line); } if( isPush ){ + if( rc==TH_OK ){ + rc = xfer_run_script(zScript, zUuidList, 1); + if( rc==TH_ERROR ){ + cgi_reset_content(); + @ error push\sscript\sfailed:\s%F(g.zErrMsg) + nErr++; + } + } request_phantoms(&xfer, 500); } + if( zUuidList ){ + Th_Free(g.interp, zUuidList); + } if( isClone && nGimme==0 ){ /* The initial "clone" message from client to server contains no ** "gimme" cards. On that initial message, send the client an "igot" - ** card for every artifact currently in the respository. This will + ** card for every artifact currently in the repository. This will ** cause the client to create phantoms for all artifacts, which will ** in turn make sure that the entire repository is sent efficiently ** and expeditiously. */ send_all(&xfer); + if( xfer.syncPrivate ) send_private(&xfer); }else if( isPull ){ create_cluster(); send_unclustered(&xfer); + if( xfer.syncPrivate ) send_private(&xfer); } if( recvConfig ){ configure_finalize_receive(); } - manifest_crosslink_end(); + db_multi_exec("DROP TABLE onremote"); + manifest_crosslink_end(MC_PERMIT_HOOKS); + + /* Send the server timestamp last, in case prior processing happened + ** to use up a significant fraction of our time window. 
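
Putting the cards handled above together, one greatly simplified round trip might look as follows; the user name, hashes, and sizes are invented placeholders, and the timestamp card emitted just below closes the reply:

    client -> server:
        login alice 3f0e2b... 9d7c41...
        pull 8a31c0... 94c2d7...
        gimme 4de1a6...
        # 1c9e5f...                  (randomness; keeps the login nonce unique)

    server -> client:
        file 4de1a6... 1439
        <1439 bytes of artifact content>
        igot 77aab3...
        # timestamp 2011-09-02T15:04:05
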
+ */ + zNow = db_text(0, "SELECT strftime('%%Y-%%m-%%dT%%H:%%M:%%S', 'now')"); + @ # timestamp %s(zNow) + free(zNow); + db_end_transaction(0); + configure_rebuild(); } /* ** COMMAND: test-xfer ** @@ -893,155 +1368,214 @@ ** ** gdb fossil ** r test-xfer out.txt */ void cmd_test_xfer(void){ - int notUsed; + db_find_and_open_repository(0,0); if( g.argc!=2 && g.argc!=3 ){ usage("?MESSAGEFILE?"); } - db_must_be_within_tree(); blob_zero(&g.cgiIn); blob_read_from_file(&g.cgiIn, g.argc==2 ? "-" : g.argv[2]); disableLogin = 1; page_xfer(); - printf("%s\n", cgi_extract_content(¬Used)); + fossil_print("%s\n", cgi_extract_content()); } /* ** Format strings for progress reporting. */ static const char zLabelFormat[] = "%-10s %10s %10s %10s %10s\n"; static const char zValueFormat[] = "\r%-10s %10d %10d %10d %10d\n"; +static const char zBriefFormat[] = + "Round-trips: %d Artifacts sent: %d received: %d\r"; + +#if INTERFACE +/* +** Flag options for controlling client_sync() +*/ +#define SYNC_PUSH 0x0001 +#define SYNC_PULL 0x0002 +#define SYNC_CLONE 0x0004 +#define SYNC_PRIVATE 0x0008 +#define SYNC_VERBOSE 0x0010 +#define SYNC_RESYNC 0x0020 +#endif +/* +** Floating-point absolute value +*/ +static double fossil_fabs(double x){ + return x>0.0 ? x : -x; +} /* -** Sync to the host identified in g.urlName and g.urlPath. This +** Sync to the host identified in g.url.name and g.url.path. This ** routine is called by the client. ** ** Records are pushed to the server if pushFlag is true. Records ** are pulled if pullFlag is true. A full sync occurs if both are ** true. */ -void client_sync( - int pushFlag, /* True to do a push (or a sync) */ - int pullFlag, /* True to do a pull (or a sync) */ - int cloneFlag, /* True if this is a clone */ - int configRcvMask, /* Receive these configuration items */ - int configSendMask /* Send these configuration items */ +int client_sync( + unsigned syncFlags, /* Mask of SYNC_* flags */ + unsigned configRcvMask, /* Receive these configuration items */ + unsigned configSendMask /* Send these configuration items */ ){ int go = 1; /* Loop until zero */ int nCardSent = 0; /* Number of cards sent */ int nCardRcvd = 0; /* Number of cards received */ int nCycle = 0; /* Number of round trips to the server */ int size; /* Size of a config value */ - int nFileSend = 0; int origConfigRcvMask; /* Original value of configRcvMask */ int nFileRecv; /* Number of files received */ int mxPhantomReq = 200; /* Max number of phantoms to request per comm */ const char *zCookie; /* Server cookie */ - int nSent, nRcvd; /* Bytes sent and received (after compression) */ + i64 nSent, nRcvd; /* Bytes sent and received (after compression) */ + int cloneSeqno = 1; /* Sequence number for clones */ Blob send; /* Text we are sending to the server */ Blob recv; /* Reply we got back from the server */ Xfer xfer; /* Transfer data */ + int pctDone; /* Percentage done with a message */ + int lastPctDone = -1; /* Last displayed pctDone */ + double rArrivalTime; /* Time at which a message arrived */ const char *zSCode = db_get("server-code", "x"); const char *zPCode = db_get("project-code", 0); + int nErr = 0; /* Number of errors */ + int nRoundtrip= 0; /* Number of HTTP requests */ + int nArtifactSent = 0; /* Total artifacts sent */ + int nArtifactRcvd = 0; /* Total artifacts received */ + const char *zOpType = 0;/* Push, Pull, Sync, Clone */ + double rSkew = 0.0; /* Maximum time skew */ - if( db_get_boolean("dont-push", 0) ) pushFlag = 0; - if( pushFlag + pullFlag + cloneFlag == 0 - && configRcvMask==0 && 
configSendMask==0 ) return; + if( db_get_boolean("dont-push", 0) ) syncFlags &= ~SYNC_PUSH; + if( (syncFlags & (SYNC_PUSH|SYNC_PULL|SYNC_CLONE))==0 + && configRcvMask==0 && configSendMask==0 ) return 0; transport_stats(0, 0, 1); socket_global_init(); memset(&xfer, 0, sizeof(xfer)); xfer.pIn = &recv; xfer.pOut = &send; xfer.mxSend = db_get_int("max-upload", 250000); + xfer.maxTime = -1; + if( syncFlags & SYNC_PRIVATE ){ + g.perm.Private = 1; + xfer.syncPrivate = 1; + } - assert( pushFlag | pullFlag | cloneFlag | configRcvMask | configSendMask ); - db_begin_transaction(); - db_record_repository_filename(0); - db_multi_exec( - "CREATE TEMP TABLE onremote(rid INTEGER PRIMARY KEY);" - ); blobarray_zero(xfer.aToken, count(xfer.aToken)); blob_zero(&send); blob_zero(&recv); blob_zero(&xfer.err); blob_zero(&xfer.line); origConfigRcvMask = 0; + + /* Send the send-private pragma if we are trying to sync private data */ + if( syncFlags & SYNC_PRIVATE ){ + blob_append(&send, "pragma send-private\n", -1); + } + /* ** Always begin with a clone, pull, or push message */ - if( cloneFlag ){ - blob_appendf(&send, "clone\n"); - pushFlag = 0; - pullFlag = 0; + if( syncFlags & SYNC_CLONE ){ + blob_appendf(&send, "clone 3 %d\n", cloneSeqno); + syncFlags &= ~(SYNC_PUSH|SYNC_PULL); nCardSent++; /* TBD: Request all transferable configuration values */ - }else if( pullFlag ){ + content_enable_dephantomize(0); + zOpType = "Clone"; + }else if( syncFlags & SYNC_PULL ){ blob_appendf(&send, "pull %s %s\n", zSCode, zPCode); nCardSent++; + zOpType = (syncFlags & SYNC_PUSH)?"Sync":"Pull"; + if( (syncFlags & SYNC_RESYNC)!=0 && nCycle<2 ){ + blob_appendf(&send, "pragma send-catalog\n"); + nCardSent++; + } } - if( pushFlag ){ + if( syncFlags & SYNC_PUSH ){ blob_appendf(&send, "push %s %s\n", zSCode, zPCode); nCardSent++; + if( (syncFlags & SYNC_PULL)==0 ) zOpType = "Push"; + if( (syncFlags & SYNC_RESYNC)!=0 ) xfer.resync = 0x7fffffff; } - manifest_crosslink_begin(); - fossil_print(zLabelFormat, "", "Bytes", "Cards", "Artifacts", "Deltas"); + if( syncFlags & SYNC_VERBOSE ){ + fossil_print(zLabelFormat /*works-like:"%s%s%s%s%d"*/, + "", "Bytes", "Cards", "Artifacts", "Deltas"); + } while( go ){ int newPhantom = 0; char *zRandomness; + db_begin_transaction(); + db_record_repository_filename(0); + db_multi_exec( + "CREATE TEMP TABLE onremote(rid INTEGER PRIMARY KEY);" + ); + manifest_crosslink_begin(); /* Send make the most recently received cookie. Let the server ** figure out if this is a cookie that it cares about. */ zCookie = db_get("cookie", 0); if( zCookie ){ blob_appendf(&send, "cookie %s\n", zCookie); } - + /* Generate gimme cards for phantoms and leaf cards ** for all leaves. */ - if( pullFlag || cloneFlag ){ + if( (syncFlags & SYNC_PULL)!=0 + || ((syncFlags & SYNC_CLONE)!=0 && cloneSeqno==1) + ){ request_phantoms(&xfer, mxPhantomReq); } - if( pushFlag ){ + if( syncFlags & SYNC_PUSH ){ send_unsent(&xfer); nCardSent += send_unclustered(&xfer); + if( syncFlags & SYNC_PRIVATE ) send_private(&xfer); } /* Send configuration parameter requests. On a clone, delay sending - ** this until the second cycle since the login card might fail on + ** this until the second cycle since the login card might fail on ** the first cycle. 
*/ - if( configRcvMask && (cloneFlag==0 || nCycle>0) ){ + if( configRcvMask && ((syncFlags & SYNC_CLONE)==0 || nCycle>0) ){ const char *zName; + if( zOpType==0 ) zOpType = "Pull"; zName = configure_first_name(configRcvMask); while( zName ){ blob_appendf(&send, "reqconfig %s\n", zName); zName = configure_next_name(configRcvMask); nCardSent++; } - if( configRcvMask & (CONFIGSET_USER|CONFIGSET_TKT) ){ - configure_prepare_to_receive(0); + if( (configRcvMask & (CONFIGSET_USER|CONFIGSET_TKT))!=0 + && (configRcvMask & CONFIGSET_OLDFORMAT)!=0 + ){ + int overwrite = (configRcvMask & CONFIGSET_OVERWRITE)!=0; + configure_prepare_to_receive(overwrite); } origConfigRcvMask = configRcvMask; configRcvMask = 0; } /* Send configuration parameters being pushed */ if( configSendMask ){ - const char *zName; - zName = configure_first_name(configSendMask); - while( zName ){ - send_config_card(&xfer, zName); - zName = configure_next_name(configSendMask); - nCardSent++; + if( zOpType==0 ) zOpType = "Push"; + if( configSendMask & CONFIGSET_OLDFORMAT ){ + const char *zName; + zName = configure_first_name(configSendMask); + while( zName ){ + send_legacy_config_card(&xfer, zName); + zName = configure_next_name(configSendMask); + nCardSent++; + } + }else{ + nCardSent += configure_send_group(xfer.pOut, configSendMask, 0); } configSendMask = 0; } /* Append randomness to the end of the message. This makes all @@ -1050,57 +1584,108 @@ */ zRandomness = db_text(0, "SELECT hex(randomblob(20))"); blob_appendf(&send, "# %s\n", zRandomness); free(zRandomness); + if( syncFlags & SYNC_VERBOSE ){ + fossil_print("waiting for server..."); + } + fflush(stdout); /* Exchange messages with the server */ - nFileSend = xfer.nFileSent + xfer.nDeltaSent; - fossil_print(zValueFormat, "Send:", - blob_size(&send), nCardSent+xfer.nGimmeSent+xfer.nIGotSent, - xfer.nFileSent, xfer.nDeltaSent); + if( http_exchange(&send, &recv, (syncFlags & SYNC_CLONE)==0 || nCycle>0, + MAX_REDIRECTS) ){ + nErr++; + go = 2; + break; + } + + /* Output current stats */ + if( syncFlags & SYNC_VERBOSE ){ + fossil_print(zValueFormat /*works-like:"%s%d%d%d%d"*/, "Sent:", + blob_size(&send), nCardSent+xfer.nGimmeSent+xfer.nIGotSent, + xfer.nFileSent, xfer.nDeltaSent); + }else{ + nRoundtrip++; + nArtifactSent += xfer.nFileSent + xfer.nDeltaSent; + fossil_print(zBriefFormat /*works-like:"%d%d%d"*/, + nRoundtrip, nArtifactSent, nArtifactRcvd); + } nCardSent = 0; nCardRcvd = 0; xfer.nFileSent = 0; xfer.nDeltaSent = 0; xfer.nGimmeSent = 0; xfer.nIGotSent = 0; - fflush(stdout); - http_exchange(&send, &recv, cloneFlag==0 || nCycle>0); + + lastPctDone = -1; blob_reset(&send); + rArrivalTime = db_double(0.0, "SELECT julianday('now')"); + + /* Send the send-private pragma if we are trying to sync private data */ + if( syncFlags & SYNC_PRIVATE ){ + blob_append(&send, "pragma send-private\n", -1); + } /* Begin constructing the next message (which might never be ** sent) by beginning with the pull or push cards */ - if( pullFlag ){ + if( syncFlags & SYNC_PULL ){ blob_appendf(&send, "pull %s %s\n", zSCode, zPCode); nCardSent++; } - if( pushFlag ){ + if( syncFlags & SYNC_PUSH ){ blob_appendf(&send, "push %s %s\n", zSCode, zPCode); nCardSent++; } go = 0; /* Process the reply that came back from the server */ while( blob_line(&recv, &xfer.line) ){ if( blob_buffer(&xfer.line)[0]=='#' ){ + const char *zLine = blob_buffer(&xfer.line); + if( memcmp(zLine, "# timestamp ", 12)==0 ){ + char zTime[20]; + double rDiff; + sqlite3_snprintf(sizeof(zTime), zTime, "%.19s", &zLine[12]); + rDiff = 
db_double(9e99, "SELECT julianday('%q') - %.17g", + zTime, rArrivalTime); + if( rDiff>9e98 || rDiff<-9e98 ) rDiff = 0.0; + if( rDiff*24.0*3600.0 >= -(blob_size(&recv)/5000.0 + 20) ) rDiff = 0.0; + if( fossil_fabs(rDiff)>fossil_fabs(rSkew) ) rSkew = rDiff; + } + nCardRcvd++; continue; } xfer.nToken = blob_tokenize(&xfer.line, xfer.aToken, count(xfer.aToken)); nCardRcvd++; - if( !g.cgiOutput && !g.fQuiet ){ - printf("\r%d", nCardRcvd); - fflush(stdout); + if( (syncFlags & SYNC_VERBOSE)!=0 && recv.nUsed>0 ){ + pctDone = (recv.iCursor*100)/recv.nUsed; + if( pctDone!=lastPctDone ){ + fossil_print("\rprocessed: %d%% ", pctDone); + lastPctDone = pctDone; + fflush(stdout); + } } /* file UUID SIZE \n CONTENT ** file UUID DELTASRC SIZE \n CONTENT ** ** Receive a file transmitted from the server. */ if( blob_eq(&xfer.aToken[0],"file") ){ - xfer_accept_file(&xfer); + xfer_accept_file(&xfer, (syncFlags & SYNC_CLONE)!=0, 0, 0); + nArtifactRcvd++; + }else + + /* cfile UUID USIZE CSIZE \n CONTENT + ** cfile UUID DELTASRC USIZE CSIZE \n CONTENT + ** + ** Receive a compressed file transmitted from the server. + */ + if( blob_eq(&xfer.aToken[0],"cfile") ){ + xfer_accept_compressed_file(&xfer, 0, 0); + nArtifactRcvd++; }else /* gimme UUID ** ** Server is requesting a file. If the file is a manifest, assume @@ -1109,104 +1694,87 @@ */ if( blob_eq(&xfer.aToken[0], "gimme") && xfer.nToken==2 && blob_is_uuid(&xfer.aToken[1]) ){ - if( pushFlag ){ - int rid = rid_from_uuid(&xfer.aToken[1], 0); + if( syncFlags & SYNC_PUSH ){ + int rid = rid_from_uuid(&xfer.aToken[1], 0, 0); if( rid ) send_file(&xfer, rid, &xfer.aToken[1], 0); } }else - - /* igot UUID + + /* igot UUID ?PRIVATEFLAG? ** ** Server announces that it has a particular file. If this is ** not a file that we have and we are pulling, then create a ** phantom to cause this file to be requested on the next cycle. ** Always remember that the server has this file so that we do ** not transmit it by accident. + ** + ** If the PRIVATE argument exists and is 1, then the file is + ** private. Pretend it does not exists if we are not pulling + ** private files. */ - if( xfer.nToken==2 + if( xfer.nToken>=2 && blob_eq(&xfer.aToken[0], "igot") && blob_is_uuid(&xfer.aToken[1]) ){ - int rid = rid_from_uuid(&xfer.aToken[1], 0); + int rid; + int isPriv = xfer.nToken>=3 && blob_eq(&xfer.aToken[2],"1"); + rid = rid_from_uuid(&xfer.aToken[1], 0, 0); if( rid>0 ){ - content_make_public(rid); - }else if( pullFlag || cloneFlag ){ - rid = content_new(blob_str(&xfer.aToken[1])); + if( !isPriv ) content_make_public(rid); + }else if( isPriv && !g.perm.Private ){ + /* ignore private files */ + }else if( (syncFlags & (SYNC_PULL|SYNC_CLONE))!=0 ){ + rid = content_new(blob_str(&xfer.aToken[1]), isPriv); if( rid ) newPhantom = 1; } remote_has(rid); }else - - + + /* push SERVERCODE PRODUCTCODE ** ** Should only happen in response to a clone. This message tells ** the client what product to use for the new database. */ if( blob_eq(&xfer.aToken[0],"push") && xfer.nToken==3 - && cloneFlag - && blob_is_uuid(&xfer.aToken[1]) + && (syncFlags & SYNC_CLONE)!=0 && blob_is_uuid(&xfer.aToken[2]) ){ - if( blob_eq_str(&xfer.aToken[1], zSCode, -1) ){ - fossil_fatal("server loop"); - } if( zPCode==0 ){ zPCode = mprintf("%b", &xfer.aToken[2]); db_set("project-code", zPCode, 0); } - blob_appendf(&send, "clone\n"); + if( cloneSeqno>0 ) blob_appendf(&send, "clone 3 %d\n", cloneSeqno); nCardSent++; }else - + /* config NAME SIZE \n CONTENT ** ** Receive a configuration value from the server. 
+ ** + ** The received configuration setting is silently ignored if it was + ** not requested by a prior "reqconfig" sent from client to server. */ if( blob_eq(&xfer.aToken[0],"config") && xfer.nToken==3 && blob_is_int(&xfer.aToken[2], &size) ){ const char *zName = blob_str(&xfer.aToken[1]); Blob content; blob_zero(&content); blob_extract(xfer.pIn, size, &content); - g.okAdmin = g.okRdAddr = 1; - if( configure_is_exportable(zName) & origConfigRcvMask ){ - if( zName[0]!='@' ){ - if( strcmp(zName, "logo-image")==0 ){ - Stmt ins; - db_prepare(&ins, - "REPLACE INTO config(name, value) VALUES(:name, :value)" - ); - db_bind_text(&ins, ":name", zName); - db_bind_blob(&ins, ":value", &content); - db_step(&ins); - db_finalize(&ins); - }else{ - db_multi_exec( - "REPLACE INTO config(name,value) VALUES(%Q,%Q)", - zName, blob_str(&content) - ); - } - }else{ - /* Notice that we are evaluating arbitrary SQL received from the - ** server. But this can only happen if we have specifically - ** requested configuration information from the server, so - ** presumably the operator trusts the server. - */ - db_multi_exec("%s", blob_str(&content)); - } - } - nCardSent++; + g.perm.Admin = g.perm.RdAddr = 1; + configure_receive(zName, &content, origConfigRcvMask); + nCardRcvd++; + nArtifactRcvd++; blob_reset(&content); blob_seek(xfer.pIn, 1, BLOB_SEEK_CUR); }else - + /* cookie TEXT ** ** The server might include a cookie in its reply. The client ** should remember this cookie and send it back to the server ** in its next query. @@ -1216,75 +1784,134 @@ */ if( blob_eq(&xfer.aToken[0], "cookie") && xfer.nToken==2 ){ db_set("cookie", blob_str(&xfer.aToken[1]), 0); }else + + /* private + ** + ** This card indicates that the next "file" or "cfile" will contain + ** private content. + */ + if( blob_eq(&xfer.aToken[0], "private") ){ + xfer.nextIsPrivate = 1; + }else + + + /* clone_seqno N + ** + ** When doing a clone, the server tries to send all of its artifacts + ** in sequence. This card indicates the sequence number of the next + ** blob that needs to be sent. If N<=0 that indicates that all blobs + ** have been sent. + */ + if( blob_eq(&xfer.aToken[0], "clone_seqno") && xfer.nToken==2 ){ + blob_is_int(&xfer.aToken[1], &cloneSeqno); + }else + /* message MESSAGE ** ** Print a message. Similar to "error" but does not stop processing. ** ** If the "login failed" message is seen, clear the sync password prior ** to the next cycle. - */ + */ if( blob_eq(&xfer.aToken[0],"message") && xfer.nToken==2 ){ char *zMsg = blob_terminate(&xfer.aToken[1]); defossilize(zMsg); - if( zMsg ) fossil_print("\rServer says: %s\n", zMsg); + if( (syncFlags & SYNC_PUSH) && zMsg && sqlite3_strglob("pull only *", zMsg)==0 ){ + syncFlags &= ~SYNC_PUSH; + zMsg = 0; + } + if( zMsg && zMsg[0] ){ + fossil_force_newline(); + fossil_print("Server says: %s\n", zMsg); + } + }else + + /* pragma NAME VALUE... + ** + ** The server can send pragmas to try to convey meta-information to + ** the client. These are informational only. Unknown pragmas are + ** silently ignored. + */ + if( blob_eq(&xfer.aToken[0], "pragma") && xfer.nToken>=2 ){ }else /* error MESSAGE ** ** Report an error and abandon the sync session. ** ** Except, when cloning we will sometimes get an error on the ** first message exchange because the project-code is unknown ** and so the login card on the request was invalid. 
The project-code - ** is returned in the reply before the error card, so second and + ** is returned in the reply before the error card, so second and ** subsequent messages should be OK. Nevertheless, we need to ignore ** the error card on the first message of a clone. - */ + */ if( blob_eq(&xfer.aToken[0],"error") && xfer.nToken==2 ){ - if( !cloneFlag || nCycle>0 ){ + if( (syncFlags & SYNC_CLONE)==0 || nCycle>0 ){ char *zMsg = blob_terminate(&xfer.aToken[1]); defossilize(zMsg); - if( strcmp(zMsg, "login failed")==0 ){ + fossil_force_newline(); + fossil_print("Error: %s\n", zMsg); + if( fossil_strcmp(zMsg, "login failed")==0 ){ if( nCycle<2 ){ - if( !g.dontKeepUrl ) db_unset("last-sync-pw", 0); + g.url.passwd = 0; go = 1; + if( g.cgiOutput==0 ){ + g.url.flags |= URL_PROMPT_PW; + g.url.flags &= ~URL_PROMPTED; + url_prompt_for_password(); + url_remember(); + } + }else{ + nErr++; } }else{ - blob_appendf(&xfer.err, "\rserver says: %s", zMsg); + blob_appendf(&xfer.err, "server says: %s\n", zMsg); + nErr++; } - fossil_fatal("\rError: %s", zMsg); + break; } }else /* Unknown message */ - { + if( xfer.nToken>0 ){ if( blob_str(&xfer.aToken[0])[0]=='<' ){ - fossil_fatal( + fossil_warning( "server replies with HTML instead of fossil sync protocol:\n%b", &recv ); + nErr++; + break; } - blob_appendf(&xfer.err, "unknown command: %b", &xfer.aToken[0]); + blob_appendf(&xfer.err, "unknown command: [%b]\n", &xfer.aToken[0]); } if( blob_size(&xfer.err) ){ - fossil_fatal("%b", &xfer.err); + fossil_force_newline(); + fossil_warning("%b", &xfer.err); + nErr++; + break; } blobarray_reset(xfer.aToken, xfer.nToken); blob_reset(&xfer.line); } - if( origConfigRcvMask & (CONFIGSET_TKT|CONFIGSET_USER) ){ + if( (configRcvMask & (CONFIGSET_USER|CONFIGSET_TKT))!=0 + && (configRcvMask & CONFIGSET_OLDFORMAT)!=0 + ){ configure_finalize_receive(); } origConfigRcvMask = 0; - if( nCardRcvd>0 ){ - fossil_print(zValueFormat, "Received:", + if( nCardRcvd>0 && (syncFlags & SYNC_VERBOSE) ){ + fossil_print(zValueFormat /*works-like:"%s%d%d%d%d"*/, "Received:", blob_size(&recv), nCardRcvd, xfer.nFileRcvd, xfer.nDeltaRcvd + xfer.nDanglingFile); + }else{ + fossil_print(zBriefFormat /*works-like:"%d%d%d"*/, + nRoundtrip, nArtifactSent, nArtifactRcvd); } blob_reset(&recv); nCycle++; /* If we received one or more files on the previous exchange but @@ -1293,30 +1920,67 @@ nFileRecv = xfer.nFileRcvd + xfer.nDeltaRcvd + xfer.nDanglingFile; if( (nFileRecv>0 || newPhantom) && db_exists("SELECT 1 FROM phantom") ){ go = 1; mxPhantomReq = nFileRecv*2; if( mxPhantomReq<200 ) mxPhantomReq = 200; + }else if( (syncFlags & SYNC_CLONE)!=0 && nFileRecv>0 ){ + go = 1; } nCardRcvd = 0; xfer.nFileRcvd = 0; xfer.nDeltaRcvd = 0; xfer.nDanglingFile = 0; /* If we have one or more files queued to send, then go - ** another round + ** another round */ if( xfer.nFileSent+xfer.nDeltaSent>0 ){ go = 1; } /* If this is a clone, the go at least two rounds */ - if( cloneFlag && nCycle==1 ) go = 1; + if( (syncFlags & SYNC_CLONE)!=0 && nCycle==1 ) go = 1; + + /* Stop the cycle if the server sends a "clone_seqno 0" card and + ** we have gone at least two rounds. Always go at least two rounds + ** on a clone in order to be sure to retrieve the configuration + ** information which is only sent on the second round. 
+ */ + if( cloneSeqno<=0 && nCycle>1 ) go = 0; + db_multi_exec("DROP TABLE onremote"); + if( go ){ + manifest_crosslink_end(MC_PERMIT_HOOKS); + }else{ + manifest_crosslink_end(MC_PERMIT_HOOKS); + content_enable_dephantomize(1); + } + db_end_transaction(0); }; transport_stats(&nSent, &nRcvd, 1); - fossil_print("Total network traffic: %d bytes sent, %d bytes received\n", - nSent, nRcvd); - transport_close(); - transport_global_shutdown(); - db_multi_exec("DROP TABLE onremote"); - manifest_crosslink_end(); - db_end_transaction(0); + if( (rSkew*24.0*3600.0) > 10.0 ){ + fossil_warning("*** time skew *** server is fast by %s", + db_timespan_name(rSkew)); + g.clockSkewSeen = 1; + }else if( rSkew*24.0*3600.0 < -10.0 ){ + fossil_warning("*** time skew *** server is slow by %s", + db_timespan_name(-rSkew)); + g.clockSkewSeen = 1; + } + + fossil_force_newline(); + fossil_print( + "%s done, sent: %lld received: %lld ip: %s\n", + zOpType, nSent, nRcvd, g.zIpAddr); + transport_close(&g.url); + transport_global_shutdown(&g.url); + if( nErr && go==2 ){ + db_multi_exec("DROP TABLE onremote"); + manifest_crosslink_end(MC_PERMIT_HOOKS); + content_enable_dephantomize(1); + db_end_transaction(0); + } + if( (syncFlags & SYNC_CLONE)==0 && g.rcvid && fossil_any_has_fork(g.rcvid) ){ + fossil_warning("***** WARNING: a fork has occurred *****\n" + "use \"fossil leaves -multiple\" for more details."); + } + return nErr; } ADDED src/xfersetup.c Index: src/xfersetup.c ================================================================== --- src/xfersetup.c +++ src/xfersetup.c @@ -0,0 +1,245 @@ +/* +** Copyright (c) 2007 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains code to implement the transfer configuration +** setup screens. +*/ +#include "config.h" +#include "xfersetup.h" +#include <assert.h> + +/* +** WEBPAGE: xfersetup +** Main sub-menu for configuring the transfer system. 
+*/ +void xfersetup_page(void){ + login_check_credentials(); + if( !g.perm.Setup ){ + login_needed(0); + return; + } + + style_header("Transfer Setup"); + + @ <table class="xfersetup"> + setup_menu_entry("Common", "xfersetup_com", + "Common TH1 code run before all transfer request processing."); + setup_menu_entry("Push", "xfersetup_push", + "Specific TH1 code to run after \"push\" transfer requests."); + setup_menu_entry("Commit", "xfersetup_commit", + "Specific TH1 code to run after processing a commit."); + setup_menu_entry("Ticket", "xfersetup_ticket", + "Specific TH1 code to run after processing a ticket change."); + @ </table> + + url_parse(0, 0); + if( g.url.protocol ){ + unsigned syncFlags; + const char *zButton; + char *zWarning; + + if( db_get_boolean("dont-push", 0) ){ + syncFlags = SYNC_PULL; + zButton = "Pull"; + zWarning = 0; + }else{ + syncFlags = SYNC_PUSH | SYNC_PULL; + zButton = "Synchronize"; + zWarning = mprintf("WARNING: Pushing to \"%s\" is enabled.", + g.url.canonical); + } + @ <p>Press the <strong>%h(zButton)</strong> button below to + @ synchronize with the <em>%h(g.url.canonical)</em> repository now.<br/> + @ This may be useful when testing the various transfer scripts.</p> + @ <p>You can use the <code>http -async</code> command in your scripts, but + @ make sure the <code>th1-uri-regexp</code> setting is set first.</p> + if( zWarning ){ + @ + @ <big><b>%h(zWarning)</b></big> + free(zWarning); + } + @ + @ <form method="post" action="%s(g.zTop)/%s(g.zPath)"><div> + login_insert_csrf_secret(); + @ <input type="submit" name="sync" value="%h(zButton)" /> + @ </div></form> + @ + if( P("sync") ){ + user_select(); + url_enable_proxy(0); + @ <pre class="xfersetup"> + client_sync(syncFlags, 0, 0); + @ </pre> + } + } + + style_footer(); +} + +/* +** Common implementation for the transfer setup editor pages. 
+*/ +static void xfersetup_generic( + const char *zTitle, /* Page title */ + const char *zDbField, /* Configuration field being edited */ + const char *zDfltValue, /* Default text value */ + const char *zDesc, /* Description of this field */ + char *(*xText)(const char*), /* Validity test or NULL */ + void (*xRebuild)(void), /* Run after successful update */ + int height /* Height of the edit box */ +){ + const char *z; + int isSubmit; + + login_check_credentials(); + if( !g.perm.Setup ){ + login_needed(0); + return; + } + if( P("setup") ){ + cgi_redirect("xfersetup"); + } + isSubmit = P("submit")!=0; + z = P("x"); + if( z==0 ){ + z = db_get(zDbField, zDfltValue); + } + style_header("Edit %s", zTitle); + if( P("clear")!=0 ){ + login_verify_csrf_secret(); + db_unset(zDbField, 0); + if( xRebuild ) xRebuild(); + z = zDfltValue; + }else if( isSubmit ){ + char *zErr = 0; + login_verify_csrf_secret(); + if( xText && (zErr = xText(z))!=0 ){ + @ <p class="xfersetupError">ERROR: %h(zErr)</p> + }else{ + db_set(zDbField, z, 0); + if( xRebuild ) xRebuild(); + cgi_redirect("xfersetup"); + } + } + @ <form action="%s(g.zTop)/%s(g.zPath)" method="post"><div> + login_insert_csrf_secret(); + @ <p>%s(zDesc)</p> + @ <textarea name="x" rows="%d(height)" cols="80">%h(z)</textarea> + @ <p> + @ <input type="submit" name="submit" value="Apply Changes" /> + @ <input type="submit" name="clear" value="Revert To Default" /> + @ <input type="submit" name="setup" value="Cancel" /> + @ </p> + @ </div></form> + if ( zDfltValue ){ + @ <hr /> + @ <h2>Default %s(zTitle)</h2> + @ <blockquote><pre> + @ %h(zDfltValue) + @ </pre></blockquote> + } + style_footer(); +} + +static const char *zDefaultXferCommon = 0; + +/* +** WEBPAGE: xfersetup_com +** View or edit the TH1 script that runs prior to receiving a +** transfer. +*/ +void xfersetup_com_page(void){ + static const char zDesc[] = + @ Enter TH1 script that initializes variables prior to running + @ any of the transfer request scripts. + ; + xfersetup_generic( + "Transfer Common Script", + "xfer-common-script", + zDefaultXferCommon, + zDesc, + 0, + 0, + 30 + ); +} + +static const char *zDefaultXferPush = 0; + +/* +** WEBPAGE: xfersetup_push +** View or edit the TH1 script that runs after receiving a "push". +*/ +void xfersetup_push_page(void){ + static const char zDesc[] = + @ Enter TH1 script that runs after processing <strong>push</strong> + @ transfer requests. + ; + xfersetup_generic( + "Transfer Push Script", + "xfer-push-script", + zDefaultXferPush, + zDesc, + 0, + 0, + 30 + ); +} + +static const char *zDefaultXferCommit = 0; + +/* +** WEBPAGE: xfersetup_commit +** View or edit the TH1 script that runs when a transfer commit +** is processed. +*/ +void xfersetup_commit_page(void){ + static const char zDesc[] = + @ Enter TH1 script that runs when a commit is processed. + ; + xfersetup_generic( + "Transfer Commit Script", + "xfer-commit-script", + zDefaultXferCommit, + zDesc, + 0, + 0, + 30 + ); +} + +static const char *zDefaultXferTicket = 0; + +/* +** WEBPAGE: xfersetup_ticket +** View or edit the TH1 script that runs when a ticket change artifact +** is processed during a transfer. +*/ +void xfersetup_ticket_page(void){ + static const char zDesc[] = + @ Enter TH1 script that runs when a ticket change is processed. 
+ ; + xfersetup_generic( + "Transfer Ticket Script", + "xfer-ticket-script", + zDefaultXferTicket, + zDesc, + 0, + 0, + 30 + ); +} Index: src/zip.c ================================================================== --- src/zip.c +++ src/zip.c @@ -15,13 +15,18 @@ ** ******************************************************************************* ** ** This file contains code used to generate ZIP archives. */ +#include "config.h" #include <assert.h> -#include <zlib.h> -#include "config.h" +#if defined(FOSSIL_ENABLE_MINIZ) +# define MINIZ_HEADER_FILE_ONLY +# include "miniz.c" +#else +# include <zlib.h> +#endif #include "zip.h" /* ** Write a 16- or 32-bit integer as little-endian into the given buffer. */ @@ -81,11 +86,11 @@ ** Set the date and time from a julian day number. */ void zip_set_timedate(double rDate){ char *zDate = db_text(0, "SELECT datetime(%.17g)", rDate); zip_set_timedate_from_str(zDate); - free(zDate); + fossil_free(zDate); unixTime = (rDate - 2440587.5)*86400.0; } /* ** If the given filename includes one or more directory entries, make @@ -98,17 +103,17 @@ for(i=0; zName[i]; i++){ if( zName[i]=='/' ){ c = zName[i+1]; zName[i+1] = 0; for(j=0; j<nDir; j++){ - if( strcmp(zName, azDir[j])==0 ) break; + if( fossil_strcmp(zName, azDir[j])==0 ) break; } if( j>=nDir ){ nDir++; - azDir = realloc(azDir, sizeof(azDir[0])*nDir); + azDir = fossil_realloc(azDir, sizeof(azDir[0])*nDir); azDir[j] = mprintf("%s", zName); - zip_add_file(zName, 0); + zip_add_file(zName, 0, 0); } zName[i+1] = c; } } } @@ -117,11 +122,11 @@ ** Append a single file to a growing ZIP archive. ** ** pFile is the file to be appended. zName is the name ** that the file should be saved as. */ -void zip_add_file(const char *zName, const Blob *pFile){ +void zip_add_file(const char *zName, const Blob *pFile, int mPerm){ z_stream stream; int nameLen; int toOut = 0; int iStart; int iCRC = 0; @@ -137,34 +142,38 @@ char zOutBuf[100000]; /* Fill in as much of the header as we know. */ nBlob = pFile ? blob_size(pFile) : 0; - if( nBlob>0 ){ - iMethod = 8; - iMode = 0100644; - }else{ + if( pFile ){ /* This is a file, possibly empty... */ + iMethod = (nBlob>0) ? 8 : 0; /* Cannot compress zero bytes. */ + switch( mPerm ){ + case PERM_LNK: iMode = 0120755; break; + case PERM_EXE: iMode = 0100755; break; + default: iMode = 0100644; break; + } + }else{ /* This is a directory, no blob... */ iMethod = 0; iMode = 040755; } nameLen = strlen(zName); memset(zHdr, 0, sizeof(zHdr)); put32(&zHdr[0], 0x04034b50); put16(&zHdr[4], 0x000a); - put16(&zHdr[6], 0); + put16(&zHdr[6], 0x0800); put16(&zHdr[8], iMethod); put16(&zHdr[10], dosTime); put16(&zHdr[12], dosDate); put16(&zHdr[26], nameLen); put16(&zHdr[28], 13); - + put16(&zExTime[0], 0x5455); put16(&zExTime[2], 9); zExTime[4] = 3; put32(&zExTime[5], unixTime); put32(&zExTime[9], unixTime); - + /* Write the header and filename. */ iStart = blob_size(&body); blob_append(&body, zHdr, 30); @@ -198,26 +207,26 @@ blob_append(&body, zOutBuf, toOut); }while( stream.avail_out==0 ); nByte = stream.total_in; nByteCompr = stream.total_out; deflateEnd(&stream); - + /* Go back and write the header, now that we know the compressed file size. 
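  ** (For reference, the offsets patched by the put32() calls just below,
  ** which the earlier zHdr code left as zero: byte 14 of the local file
  ** header holds the CRC-32, byte 18 the compressed size, and byte 22 the
  ** uncompressed size, all stored little-endian.)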
*/ z = &blob_buffer(&body)[iStart]; put32(&z[14], iCRC); put32(&z[18], nByteCompr); put32(&z[22], nByte); } - + /* Make an entry in the tables of contents */ memset(zBuf, 0, sizeof(zBuf)); put32(&zBuf[0], 0x02014b50); put16(&zBuf[4], 0x0317); put16(&zBuf[6], 0x000a); - put16(&zBuf[8], 0); + put16(&zBuf[8], 0x0800); put16(&zBuf[10], iMethod); put16(&zBuf[12], dosTime); put16(&zBuf[14], dosDate); put32(&zBuf[16], iCRC); put32(&zBuf[20], nByteCompr); @@ -263,13 +272,13 @@ blob_reset(&toc); *pZip = body; blob_zero(&body); nEntry = 0; for(i=0; i<nDir; i++){ - free(azDir[i]); + fossil_free(azDir[i]); } - free(azDir); + fossil_free(azDir); nDir = 0; azDir = 0; } /* @@ -287,11 +296,11 @@ } zip_open(); for(i=3; i<g.argc; i++){ blob_zero(&file); blob_read_from_file(&file, g.argv[i]); - zip_add_file(g.argv[i], &file); + zip_add_file(g.argv[i], &file, file_wd_perm(g.argv[i])); blob_reset(&file); } zip_close(&zip); blob_write_to_file(&zip, g.argv[2]); } @@ -313,72 +322,74 @@ ** politely expands into a subdir instead of filling your current dir ** with source files. For example, pass a UUID or "ProjectName". ** */ void zip_of_baseline(int rid, Blob *pZip, const char *zDir){ - int i; - Blob mfile, file, hash; - Manifest m; + Blob mfile, hash, file; + Manifest *pManifest; + ManifestFile *pFile; Blob filename; int nPrefix; - + content_get(rid, &mfile); if( blob_size(&mfile)==0 ){ blob_zero(pZip); return; } - blob_zero(&file); blob_zero(&hash); - blob_copy(&file, &mfile); blob_zero(&filename); zip_open(); if( zDir && zDir[0] ){ blob_appendf(&filename, "%s/", zDir); } nPrefix = blob_size(&filename); - if( manifest_parse(&m, &mfile) ){ + pManifest = manifest_get(rid, CFTYPE_MANIFEST, 0); + if( pManifest ){ char *zName; - zip_set_timedate(m.rDate); - blob_append(&filename, "manifest", -1); - zName = blob_str(&filename); - zip_add_folders(zName); - zip_add_file(zName, &file); - sha1sum_blob(&file, &hash); - blob_reset(&file); - blob_append(&hash, "\n", 1); - blob_resize(&filename, nPrefix); - blob_append(&filename, "manifest.uuid", -1); - zName = blob_str(&filename); - zip_add_file(zName, &hash); - blob_reset(&hash); - for(i=0; i<m.nFile; i++){ - int fid = uuid_to_rid(m.aFile[i].zUuid, 0); + zip_set_timedate(pManifest->rDate); + if( db_get_boolean("manifest", 0) ){ + blob_append(&filename, "manifest", -1); + zName = blob_str(&filename); + zip_add_folders(zName); + sha1sum_blob(&mfile, &hash); + sterilize_manifest(&mfile); + zip_add_file(zName, &mfile, 0); + blob_reset(&mfile); + blob_append(&hash, "\n", 1); + blob_resize(&filename, nPrefix); + blob_append(&filename, "manifest.uuid", -1); + zName = blob_str(&filename); + zip_add_file(zName, &hash, 0); + blob_reset(&hash); + } + manifest_file_rewind(pManifest); + while( (pFile = manifest_file_next(pManifest,0))!=0 ){ + int fid = uuid_to_rid(pFile->zUuid, 0); if( fid ){ content_get(fid, &file); blob_resize(&filename, nPrefix); - blob_append(&filename, m.aFile[i].zName, -1); + blob_append(&filename, pFile->zName, -1); zName = blob_str(&filename); zip_add_folders(zName); - zip_add_file(zName, &file); + zip_add_file(zName, &file, manifest_file_mperm(pFile)); blob_reset(&file); } } - manifest_clear(&m); }else{ blob_reset(&mfile); - blob_reset(&file); } + manifest_destroy(pManifest); blob_reset(&filename); zip_close(pZip); } /* -** COMMAND: zip +** COMMAND: zip* ** -** Usage: %fossil zip VERSION OUTPUTFILE [--name DIRECTORYNAME] +** Usage: %fossil zip VERSION OUTPUTFILE [--name DIRECTORYNAME] [-R|--repository REPO] ** ** Generate a ZIP archive for a specified version. 
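**
** (A usage sketch, assuming the repository has a check-in reachable by
** the symbolic name "trunk":
**
**     fossil zip trunk trunk.zip --name myproject
**
** produces trunk.zip whose top-level directory is "myproject/".)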
If the --name option is ** used, it argument becomes the name of the top-level directory in the ** resulting ZIP archive. If --name is omitted, the top-level directory ** named is derived from the project name, the check-in date and time, and @@ -387,15 +398,19 @@ void baseline_zip_cmd(void){ int rid; Blob zip; const char *zName; zName = find_option("name", 0, 1); - db_find_and_open_repository(1); + db_find_and_open_repository(0, 0); + + /* We should be done with options.. */ + verify_all_options(); + if( g.argc!=4 ){ usage("VERSION OUTPUTFILE"); } - rid = name_to_rid(g.argv[2]); + rid = name_to_typed_rid(g.argv[2],"ci"); if( zName==0 ){ zName = db_text("default-name", "SELECT replace(%Q,' ','_') " " || strftime('_%%Y-%%m-%%d_%%H%%M%%S_', event.mtime) " " || substr(blob.uuid, 1, 10)" @@ -413,36 +428,74 @@ ** WEBPAGE: zip ** URL: /zip/RID.zip ** ** Generate a ZIP archive for the baseline. ** Return that ZIP archive as the HTTP reply content. +** +** Optional URL Parameters: +** +** - name=NAME[.zip] is the name of the output file. Defaults to +** something project/version-specific. The base part of the +** name, up to the last dot, is used as the top-most directory +** name in the output file. +** +** - uuid=the version to zip (may be a tag/branch name). +** Defaults to "trunk". +** */ void baseline_zip_page(void){ int rid; char *zName, *zRid; int nName, nRid; Blob zip; + char *zKey; login_check_credentials(); - if( !g.okZip ){ login_needed(); return; } + if( !g.perm.Zip ){ login_needed(g.anon.Zip); return; } + load_control(); zName = mprintf("%s", PD("name","")); nName = strlen(zName); - zRid = mprintf("%s", PD("uuid","")); + zRid = mprintf("%s", PD("uuid","trunk")); nRid = strlen(zRid); - for(nName=strlen(zName)-1; nName>5; nName--){ - if( zName[nName]=='.' ){ - zName[nName] = 0; - break; + if( nName>4 && fossil_strcmp(&zName[nName-4], ".zip")==0 ){ + /* Special case: Remove the ".zip" suffix. */ + nName -= 4; + zName[nName] = 0; + }else{ + /* If the file suffix is not ".zip" then just remove the + ** suffix up to and including the last "." */ + for(nName=strlen(zName)-1; nName>5; nName--){ + if( zName[nName]=='.' ){ + zName[nName] = 0; + break; + } } } - rid = name_to_rid(nRid?zRid:zName); + rid = name_to_typed_rid(nRid?zRid:zName,"ci"); if( rid==0 ){ @ Not found return; + } + if( referred_from_login() ){ + style_header("ZIP Archive Download"); + @ <form action='%R/zip/%h(zName).zip'> + cgi_query_parameters_to_hidden(); + @ <p>ZIP Archive named <b>%h(zName).zip</b> holding the content + @ of check-in <b>%h(zRid)</b>: + @ <input type="submit" value="Download" /> + @ </form> + style_footer(); + return; } if( nRid==0 && nName>10 ) zName[10] = 0; - zip_of_baseline(rid, &zip, zName); - free( zName ); - free( zRid ); + zKey = db_text(0, "SELECT '/zip/'||uuid||'/%q' FROM blob WHERE rid=%d",zName,rid); + blob_zero(&zip); + if( cache_read(&zip, zKey)==0 ){ + zip_of_baseline(rid, &zip, zName); + cache_write(&zip, zKey); + } + fossil_free( zName ); + fossil_free( zRid ); + fossil_free( zKey ); cgi_set_content(&zip); cgi_set_content_type("application/zip"); } ADDED test/Greek-Lipsum-1.txt Index: test/Greek-Lipsum-1.txt ================================================================== --- test/Greek-Lipsum-1.txt +++ test/Greek-Lipsum-1.txt @@ -0,0 +1,77 @@ +Κυο εξ υνυμ δισπυθανδο, εÏος αλιενυμ κυι θε. Îες εξ ελωκυενθιαμ +ινστÏυσθιοÏ. Î˜ÎµÎ¼Ï€Î¿Ï Î½Î¿ÏƒÎ¸ÎµÏ ÏƒÏ… εως. ΠυÏθο μωφεθ μωδεÏατιυς ατ μελ. Συ δυο +αμετ ειυς. ΠÏι δεσωÏε ινθεγÏε ασυμσαν αδ, Ï€Ïω αν Ïεβυμ ÎµÏ†Ï†Î¹ÏƒÎ¹Î±Î½Î¸Ï…Ï +νεσεσιταθιβυς. 
+ +ÎοσθÏυμ ÏƒÏ…ÏƒÎ¹Ï€Î¹Î±Î½Ï„Ï…Ï Î·Î±Ï‚ ει, οÏναθυς Ïεσυσαβο Ï€Ïι ιδ, Ï€ÎµÏ Î½Î¿Î»Ï…Î¹ÏƒÎµ οπωÏθεÏε +ιδ. Θε παÏτιενδω πεÏτινασια ινσωÏÏυπτε φις. Δισθας φαβυλας γυβεÏγÏεν εως +ιν, αλιι σολυμ ηις θε, ποσθυλανθ ασυσαμυς ετ ηας. Îο ινανι φαβυλας +θχεωπηÏαστυς ναμ, ευμ διστα ηομεÏω εα. Μαγνα φυγιθ υθ πεÏ, εσθ ατ νοσθÏυμ +δεσεÏυισε. + +Φις αυδιαμ λαβοÏες παθÏιοκυε εξ, ετ φευγιαθ δεφινιεβας σιθ. Αμετ εÏιπυιτ +δελισατα υσυ ετ, σενσιβυς φολυπθατιβυς Ï€ÎµÏ ÎµÎ¾. Κυωδ ιγνωθα τιβικυε ατ εαμ, +νυλλα ηωνεσθαθις υθ νες. Φιξ αν μυτατ εξεÏσι λαβωÏε. Σεδ νονυμυ κυοδσι +δελενιτ νε, συμο φιδε εα κυι. Ποπυλω μαιοÏυμ πεÏσεκυεÏις αν Ï€Ïω. + +Σολυμ σωνφενιÏε αδ ηας, αν ευμ σολυτα Ïεγιονε Ï€Ïοδεσεθ. ΦεÏο λαβοÏες +σαλυταθυς θε δυο, ηις νε φεÏο βλανδιτ Ï€Ïαεσενθ, ιδ φις σολεατ φιφενδυμ. Συ +συμ μωδω συμμο δολοÏες. Θε ναμ πωσιθ φευγιαθ τινσιδυνθ. + +Υθ ιψυμ νεμωÏε σαπιενθεμ μεα, ει εφεÏτι εφφισιενδι ηας. Ευμ αλβυσιυς +Ï€Ïαεσενθ συ, δεσωÏε σεθεÏο ινδοστυμ μει ει. Ηις υθ συμμο μαλοÏυμ μανδαμυς, +κυι ιν συαφιθαθε πεÏισυλις, ιισκυε οφφισιις κυο νο. Îε νονυμυ ηαβεμυς +πχιλωσοπηια φις. Ετ ηας Ï…Ï„Î±Î¼Ï…Ï ÏεφοÏμιδανς. ΙνεÏμις δεθÏαξιθ Î½ÎµÎ³Î»ÎµÎ³ÎµÎ½Î¸Ï…Ï +δυο υθ, τωÏκυαθος δισεντιυνθ φιθυπεÏατοÏιβυς φιξ νε. Εα σεδ συας μελιυς, +φιμ Ï€Ïοβο ινδοστυμ ÏεπÏιμικυε ευ. + +ΠÏι ιν λυδυς αυδιÏε, συμμο πεÏτινασια ÏƒÏ‰Î½ÏƒÎµÎ¸ÎµÎ¸Ï…Ï Ï†Î¹Ï‚ ιν, σιθ εξ επισυÏι +μαλυισετ σωνσεπθαμ. Αν δετÏασθο ελειφενδ εξπλισαÏι Ï€Ïω. Ιυδισο σομμοδο συμ +αδ. Δισαμ δισυντ φυλπυτατε ιν Ï€Ïω, εξ ηις δελενιτ μαιεσθατις. Ρεβυμ νονυμυ +αππαÏεατ σιθ εα, σιθ ιδ νυλλα σολεατ πεθενθιυμ, ει οπθιων πεÏσεκυεÏις ευμ. +Υθ νισλ ινσωλενς φιξ, εσθ φεÏι ιισκυε αÏγυμενθυμ συ, σεθεÏο μολεστιε +αδιπισινγ ευ μεα. + +Ετ μεα μυσιυς λατινε, μει ÏƒÎµÎ¼Ï€ÎµÏ Î´ÎµÏƒÎµÏυντ πεÏτινασια αν. Συ φενιαμ ποπυλω +αθωμωÏυμ κυο. Îο ιυς Ïεβυμ φιθυπεÏαθα δισπυτατιονι, ατ αλθεÏυμ χενδÏεÏιτ +φιθυπεÏαθα συμ. Ευμ αυτεμ αππετεÏε αδιπισινγ ετ, νο κυο συας ελειφενδ. Εαμ +θαλε δισαμ εξ. + +Ετ σομμοδο λεγενδως φελ, διαμ φωλυπθαÏια νο μελ, δυο φελιτ νεμωÏε αδ. Αν +εξπετενδα συαφιθαθε φελ, ενιμ ασυμσαν Ï€ÎµÏ Î±Î´, εα φιμ μωδω υνυμ. Εα κυωδ +Ï€Ïοβο πεÏσεσυτι φελ, ευ φεÏι Ï€ÏωπÏιαε ινσιδεÏιντ νες. Εξ νες οδιο δελενιτ, +ελιτ ιυδισο ινθεγÏε δυο ιδ. Μελ αλικυιπ πεÏισυλις ετ, ατ ηας αυγυε λαβοÏες +ασεντιοÏ. + +Συ νυλλα δωσενδι δεφινιτιωνες φελ. ΔωλοÏε δισεÏετ ÏεφοÏμιδανς αδ Ï€Ïω. +ΕφεÏτι Ï€Ïωβατυς Ï…Ïβανιθας νο μελ. Ιν φιξ φασεθε δεθÏαξιθ ομιθταντυÏ, ζÏιλ +υτιναμ παθÏιοκυε συ νες. Κυο ει δισενθιετ ασομμοδαÏε. + +Ηας θε ομνεσκυε δελισαθισιμι. ΕξεÏσι δελισατα ινιμισυς ευμ ευ, ιδ ÎµÎ»Î¹Ï„Ï +μελιοÏε αβχοÏÏεανθ εσθ, εως οπθιων Ï€Ïοδεσεθ ÏƒÎ¿Î½ÏƒÎµÏƒÎ¸ÎµÎ¸Ï…ÎµÏ Î¹Î½. Îαμ διαμ ασυμ +τεμποÏιβυς αν. Σομμυνε δεφινιθιονεμ κυο ιν, ηας νωμιναφι φιφενδυμ ατ. +Ομνεσκυε δεφινιεβας μεα θε. + +Εαμ σανστυς αλβυσιυς ευ, φελ στετ επισυÏι ιν, κυο αδ πεÏτιναξ σενσεÏιτ +τωÏκυαθος. ΛαβωÏε νυσκυαμ ιν κυι, εÏος σαεπε τιβικυε εσθ ατ. ΦεÏο υτιναμ +φελ νε, αδ απεÏιÏι Î¿Î¼Î¹Î¸Ï„Î±Î½Ï„Ï…Ï Î´ÎµÏ†Î¹Î½Î¹Ï„Î¹Ï‰Î½ÎµÏ‚ δυο. ΙνφενιÏε ελειφενδ παθÏιοκυε +εξ ναμ. Ιδ ναμ μινιμ υθÏοκυε. Αδ ναθυμ αππετεÏε σεα. + +Μολλις φολυμυς κυι νο, θε φιμ υβικυε αδιπισι διγνισιμ. Îοβις νοσθÏω +μενανδÏι υσυ νο, Ï€Ïιμα ÎµÎ»Î¹Ï„Ï ÎºÏ…Î±ÎµÎºÏ…Îµ ιδ ηας. ΠÏω εα παÏτεμ δομινγ. Θε +φασεθε αυδιÏε φολυπθατιβυς ιυς. Φις δεθÏαξιθ ινφενιÏε ετ, αν ιυς πωσθεα +μεδιοσÏιθαθεμ. + +Εα αδχυς Ï…Ï„Î±Î¼Ï…Ï Ï†Î¹Ï‚. Σιβω λαυδεμ υσυ αδ, φις λεγιμυς πλασεÏαθ φεÏθεÏεμ συ. +Φιμ ατ ειυς αλθεÏυμ φιθυπεÏατοÏιβυς, ατ λατινε ηαβεμυς φολυτπατ μεα. ΓÏαεσω +λυσιλιυς εα φελ. + +Θε φιξ βÏυτε συμμο, φελ ωμιτθαμ ιμπεÏδιετ εξ. Μεα ιν μωδω νυμκυαμ, σεα +Ï„Ïασθατος εξπετενδα αδ. 
ΓÏαεσε πλαθονεμ Ïεπυδιανδαε φιξ εα, εα ετιαμ +σωνσθιτυθο ασυεφεÏιθ σιθ. Ατ πυÏθο ναθυμ σονγυε φιξ, κυι ετ δισαμ ινεÏμις +ινιμισυς. + +Î ÎµÏ Ï…Î¸ διστα ινθεγÏε, Ï€ÎµÏ Ïεκυε φιεÏενθ αδ. Îε δεσεÏυντ ινφενιÏε ÏƒÏ‰Î½ÏƒÎµÎ¸ÎµÎ¸Ï…Ï +μει, αν ηομεÏω αÏγυμενθυμ Ïεπυδιανδαε πεÏ, ηις σωνσυλ μελιοÏε ινθελλεγαμ +υθ. Îες εα Î»Î±Î²Î¹Î¸Ï…Ï Î´Î¿Î»Î¿Ïεμ υλλαμσοÏπεÏ. Μει εσεντ νεσεσιταθιβυς ιν, αφφεÏθ +σαυσαε ινθεÏεσετ ηας αν. ADDED test/Greek-Lipsum-2.txt Index: test/Greek-Lipsum-2.txt ================================================================== --- test/Greek-Lipsum-2.txt +++ test/Greek-Lipsum-2.txt @@ -0,0 +1,77 @@ +Κυο εξ υνυμ δισπυθανδο, εÏος αλιενυμ κυι θε. Îες εξ ελωκυενθιαμ +ινστÏυσθιοÏ. Î˜ÎµÎ¼Ï€Î¿Ï Î½Î¿ÏƒÎ¸ÎµÏ ÏƒÏ… εως. ΠυÏθο μωφεθ μωδεÏατιυς ατ μελ. Συ δυο +αμετ ειυς. ΠÏι δεσωÏε ινθεγÏε ασυμσαν αδ, Φιξ αν Ïεβυμ ÎµÏ†Ï†Î¹ÏƒÎ¹Î±Î½Î¸Ï…Ï +νεσεσιταθιβυς. + +ÎοσθÏυμ ÏƒÏ…ÏƒÎ¹Ï€Î¹Î±Î½Ï„Ï…Ï Î·Î±Ï‚ ει, οÏναθυς Ïεσυσαβο Ï€Ïι ιδ, Ï€ÎµÏ Î½Î¿Î»Ï…Î¹ÏƒÎµ +οπωÏθεÏε ιδ. Θε παÏτιενδω πεÏτινασια ινσωÏÏυπτε φις. Δισθας φαβυλας +γυβεÏγÏεν εως ιν, αλιι σολυμ ηις θε, ποσθυλανθ ασυσαμυς ετ ηας. Îο ινανι +φαβυλας θχεωπηÏαστυς ναμ, ευμ διστα ηομεÏω εα. Μαγνα φυγιθ υθ πεÏ, εσθ +ατ νοσθÏυμ δεσεÏυισε. + +Φις αυδιαμ λαβοÏες παθÏιοκυε εξ, ετ φευγιαθ δεφινιεβας σιθ. Αμετ εÏιπυιτ +δελισατα υσυ ετ, σενσιβυς φολυπθατιβυς Ï€ÎµÏ ÎµÎ¾. Κυωδ ιγνωθα τιβικυε ατ +εαμ, νυλλα ηωνεσθαθις υθ νες. Φιξ αν μυτατ εξεÏσι λαβωÏε. Σεδ νονυμυ +κυοδσι δελενιτ νε, συμο φιδε εα κυι. Ποπυλω μαιοÏυμ πεÏσεκυεÏις αν Ï€Ïω. + +Σολυμ σωνφενιÏε αδ ηας, αν ευμ σολυτα Ïεγιονε Ï€Ïοδεσεθ. ΦεÏο λαβοÏες +σαλυταθυς θε δυο, ηις νε φεÏο βλανδιτ Ï€Ïαεσενθ, ιδ φις σολεατ φιφενδυμ. +Συ συμ μωδω συμμο δολοÏες. Θε ναμ πωσιθ φευγιαθ τινσιδυνθ. + +Υθ ιψυμ νεμωÏε σαπιενθεμ μεα, ει εφεÏτι εφφισιενδι ηας. Ευμ αλβυσιυς +Ï€Ïαεσενθ συ, δεσωÏε σεθεÏο ινδοστυμ μει ει. Ηις υθ συμμο μαλοÏυμ +μανδαμυς, κυι ιν συαφιθαθε πεÏισυλις, ιισκυε οφφισιις κυο νο. Îε νονυμυ +ηαβεμυς πχιλωσοπηια φις. Ετ ηας Ï…Ï„Î±Î¼Ï…Ï ÏεφοÏμιδανς. ΙνεÏμις δεθÏαξιθ +Î½ÎµÎ³Î»ÎµÎ³ÎµÎ½Î¸Ï…Ï Î´Ï…Î¿ υθ, τωÏκυαθος δισεντιυνθ φιθυπεÏατοÏιβυς φιξ νε. Εα σεδ +συας μελιυς, φιμ Ï€Ïοβο ινδοστυμ ÏεπÏιμικυε ευ. + +ΠÏι ιν λυδυς αυδιÏε, συμμο πεÏτινασια ÏƒÏ‰Î½ÏƒÎµÎ¸ÎµÎ¸Ï…Ï Ï†Î¹Ï‚ ιν, σιθ εξ επισυÏι +μαλυισετ σωνσεπθαμ. Αν δετÏασθο ελειφενδ εξπλισαÏι Ï€Ïω. Ιυδισο σομμοδο +συμ αδ. Δισαμ δισυντ φυλπυτατε ιν Ï€Ïω, εξ ηις δελενιτ μαιεσθατις. Ρεβυμ +νονυμυ αππαÏεατ σιθ εα, σιθ ιδ νυλλα σολεατ πεθενθιυμ, ει οπθιων +πεÏσεκυεÏις ευμ. Υθ νισλ ινσωλενς φιξ, εσθ φεÏι ιισκυε αÏγυμενθυμ συ, +σεθεÏο μολεστιε αδιπισινγ ευ μεα. + +Ετ μεα μυσιυς λατινε, μει ÏƒÎµÎ¼Ï€ÎµÏ Î´ÎµÏƒÎµÏυντ πεÏτινασια αν. Συ φενιαμ +ποπυλω αθωμωÏυμ κυο. Îο ιυς Ïεβυμ φιθυπεÏαθα δισπυτατιονι, ατ αλθεÏυμ +χενδÏεÏιτ φιθυπεÏαθα συμ. Ευμ αυτεμ αππετεÏε αδιπισινγ ετ, νο κυο συας +ελειφενδ. Εαμ θαλε δισαμ εξ. + +Ετ σομμοδο λεγενδως φελ, διαμ φωλυπθαÏια νο μελ, δυο φελιτ νεμωÏε αδ. Αν +εξπετενδα συαφιθαθε φελ, ενιμ ασυμσαν Ï€ÎµÏ Î±Î´, εα φιμ μωδω υνυμ. Εα κυωδ +Ï€Ïοβο πεÏσεσυτι φελ, ευ φεÏι Ï€ÏωπÏιαε ινσιδεÏιντ νες. Εξ νες οδιο +δελενιτ, ελιτ ιυδισο ινθεγÏε δυο ιδ. Μελ αλικυιπ πεÏισυλις ετ, ατ ηας +αυγυε λαβοÏες ασεντιοÏ. + +Συ νυλλα δωσενδι δεφινιτιωνες φελ. ΔωλοÏε δισεÏετ ÏεφοÏμιδανς αδ Ï€Ïω. +ΕφεÏτι Ï€Ïωβατυς Ï…Ïβανιθας νο μελ. Ιν φιξ φασεθε δεθÏαξιθ ομιθταντυÏ, +ζÏιλ υτιναμ παθÏιοκυε συ νες. Κυο ει δισενθιετ ασομμοδαÏε. + +Ηας θε ομνεσκυε δελισαθισιμι. ΕξεÏσι δελισατα ινιμισυς ευμ ευ, ιδ ÎµÎ»Î¹Ï„Ï +μελιοÏε αβχοÏÏεανθ εσθ, εως οπθιων Ï€Ïοδεσεθ ÏƒÎ¿Î½ÏƒÎµÏƒÎ¸ÎµÎ¸Ï…ÎµÏ Î¹Î½. Îαμ διαμ +ασυμ τεμποÏιβυς αν. Σομμυνε δεφινιθιονεμ κυο ιν, ηας νωμιναφι φιφενδυμ +ατ. 
Ομνεσκυε δεφινιεβας μεα θε. + +Εαμ σανστυς αλβυσιυς ευ, φελ στετ επισυÏι ιν, κυο αδ πεÏτιναξ σενσεÏιτ +τωÏκυαθος. ΛαβωÏε νυσκυαμ ιν κυι, εÏος σαεπε τιβικυε εσθ ατ. ΦεÏο υτιναμ +φελ νε, αδ απεÏιÏι Î¿Î¼Î¹Î¸Ï„Î±Î½Ï„Ï…Ï Î´ÎµÏ†Î¹Î½Î¹Ï„Î¹Ï‰Î½ÎµÏ‚ δυο. ΙνφενιÏε ελειφενδ +παθÏιοκυε εξ ναμ. Ιδ ναμ μινιμ υθÏοκυε. Αδ ναθυμ αππετεÏε σεα. + +Μολλις φολυμυς κυι νο, θε φιμ υβικυε αδιπισι διγνισιμ. Îοβις νοσθÏω +μενανδÏι υσυ νο, Ï€Ïιμα ÎµÎ»Î¹Ï„Ï ÎºÏ…Î±ÎµÎºÏ…Îµ ιδ ηας. ΠÏω εα παÏτεμ δομινγ. Θε +φασεθε αυδιÏε φολυπθατιβυς ιυς. Φις δεθÏαξιθ ινφενιÏε ετ, αν ιυς πωσθεα +μεδιοσÏιθαθεμ. + +Εα αδχυς Ï…Ï„Î±Î¼Ï…Ï Ï†Î¹Ï‚. Σιβω λαυδεμ υσυ αδ, φις λεγιμυς πλασεÏαθ φεÏθεÏεμ +συ. Φιμ ατ ειυς αλθεÏυμ φιθυπεÏατοÏιβυς, ατ λατινε ηαβεμυς φολυτπατ +μεα. ΓÏαεσω λυσιλιυς εα φελ. + +Θε φιξ βÏυτε συμμο, φελ ωμιτθαμ ιμπεÏδιετ εξ. Μεα ιν μωδω νυμκυαμ, σεα +Ï„Ïασθατος εξπετενδα αδ. ΓÏαεσε πλαθονεμ Ïεπυδιανδαε φιξ εα, εα ετιαμ +σωνσθιτυθο ασυεφεÏιθ σιθ. Ατ πυÏθο ναθυμ σονγυε φιξ, κυι ετ δισαμ +ινεÏμις ινιμισυς. + +Î ÎµÏ Ï…Î¸ διστα ινθεγÏε, Ï€ÎµÏ Ïεκυε φιεÏενθ αδ. Îε δεσεÏυντ ινφενιÏε +ÏƒÏ‰Î½ÏƒÎµÎ¸ÎµÎ¸Ï…Ï Î¼ÎµÎ¹, αν ηομεÏω αÏγυμενθυμ Ïεπυδιανδαε πεÏ, ηις σωνσυλ μελιοÏε +ινθελλεγαμ υθ. Îες εα Î»Î±Î²Î¹Î¸Ï…Ï Î´Î¿Î»Î¿Ïεμ υλλαμσοÏπεÏ. Μει εσεντ +νεσεσιταθιβυς ιν, αφφεÏθ σαυσαε ινθεÏεσετ ηας αν. ADDED test/amend.test Index: test/amend.test ================================================================== --- test/amend.test +++ test/amend.test @@ -0,0 +1,405 @@ +# +# Copyright (c) 2015 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# Tests for the "amend" command. 
+# + +proc short_uuid {uuid {len 10}} { + string range $uuid 0 $len-1 +} + +proc artifact_from_timeline {res var} { + upvar $var artid + regexp {(?x)[0-9]{2}(?::[0-9]{2}){2}\s+\[([0-9a-f]+)]} $res m artid +} + +proc manifest_comment {comment} { + string map [list { } {\\s} \n {\\n} \r {\\r}] $comment +} + +proc uuid_from_commit {res var} { + upvar $var UUID + regexp {^New_Version: ([0-9a-f]{40})$} $res m UUID +} + +proc uuid_from_branch {res var} { + upvar $var UUID + regexp {^New branch: ([0-9a-f]{40})$} $res m UUID +} + +proc uuid_from_checkout {var} { + global RESULT + upvar $var UUID + fossil status + regexp {checkout:\s+([0-9a-f]{40})} $RESULT m UUID +} + +# Make sure we are not in an open repository and initialize new repository +repo_init + +######################################## +# Setup: Add file and commit # +######################################## + +if {![uuid_from_checkout UUIDINIT]} { + test amend-checkout-failure false + return +} +write_file datafile "data" +fossil add datafile +fossil commit -m "c1" +if {![uuid_from_commit $RESULT UUID]} { + test amend-setup-failure false + return +} + +######################################## +# Test: -branch # +######################################## +set UUIDB UUIDB +write_file datafile "data.file" +fossil commit -m "c2" +if {![uuid_from_commit $RESULT UUIDB]} { + test amend-branch.setup false +} +fossil amend $UUIDB -branch amended-branch +test amend-branch-1.1 {[regexp {tags:\s+amended-branch} $RESULT]} +fossil branch ls +test amend-branch-1.2 {[string first "* amended-branch" $RESULT] != -1} +fossil tag list +test amend-branch-1.3 {[string first amended-branch $RESULT] != -1} +fossil tag list --raw $UUIDB +test amend-branch-1.4 {[string first "branch=amended-branch" $RESULT] != -1} +test amend-branch-1.5 {[string first "sym-amended-branch" $RESULT] != -1} +fossil timeline -n 1 +test amend-branch-1.6 {[string match {*Move*to*branch*amended-branch*} $RESULT]} + +######################################## +# Test: -bgcolor # +######################################## +set tc 0 +foreach {color result} { + 0 0 + a a + abcdef #abcdef + abc123 #abc123 + 123efg 123efg + abcdefg abcdefg + abcdeg abcdeg + blue blue + acf #acf + 123 #123 + #1234 #1234 + 1234 1234 + 123456 #123456 +} { + incr tc + fossil amend $UUID -bgcolor $color + test amend-bgcolor-1.$tc.a {[string match "*uuid:*$UUID*" $RESULT]} + fossil tag list --raw $UUID + test amend-bgcolor-1.$tc.b {[string first "bgcolor=$result" $RESULT] != -1} + fossil timeline -n 1 + test amend-bgcolor-1.$tc.c { + [string match "*Change*background*color*to*\"$result\"*" $RESULT] + } + if {[artifact_from_timeline $RESULT artid]} { + fossil artifact $artid + test amend-bgcolor-1.$tc.d { + [string match "*T +bgcolor $UUID $result*" $RESULT] + } + } else { + if {$VERBOSE} { protOut "No artifact found in timeline output" } + test amend-bgcolor-1.$tc.d false + } +} +fossil amend $UUID -bgcolor {} +test amend-bgcolor-2.1 {[string match "*uuid:*$UUID*" $RESULT]} +fossil tag list --raw $UUID +test amend-bgcolor-2.2 { + [string first "bgcolor=" $RESULT] == -1 && + [string first "bgcolor" $RESULT] != -1 +} +fossil timeline -n 1 +test amend-bgcolor-2.3 {[string match "*Cancel*background*color.*" $RESULT]} +if {[artifact_from_timeline $RESULT artid]} { + fossil artifact $artid + test amend-bgcolor-2.4 {[string match "*T -bgcolor $UUID*" $RESULT]} +} else { + if {$VERBOSE} { protOut "No artifact found in timeline output" } + test amend-bgcolor-2.4 false +} + +######################################## +# Test: 
-branchcolor # +######################################## +set UUID2 UUID2 +fossil branch new brclr $UUID +if {![uuid_from_branch $RESULT UUID2]} { + test amend-branchcolor.setup false +} +fossil update $UUID2 +fossil amend $UUID2 -branchcolor yellow +test amend-branchcolor-1.1 {[string match "*uuid:*$UUID2*" $RESULT]} +fossil tag ls --raw $UUID2 +test amend-branchcolor-1.2 {[string first "bgcolor=yellow" $RESULT] != -1} +fossil timeline -n 1 +test amend-branchcolor-1.3 { + [string match {*Change*branch*background*color*to*"yellow".*} $RESULT] +} +if {[regexp {(?x)[0-9]{2}(?::[0-9]{2}){2}\s+\[([0-9a-f]+)]} $RESULT m artid]} { + fossil artifact $artid + test amend-branchcolor-1.4 { + [string match "*T \*bgcolor $UUID2 yellow*" $RESULT] + } +} else { + if {$VERBOSE} { protOut "No artifact found in timeline output" } + test amend-branchcolor-1.4 false +} + +set UUIDN UUIDN +write_file datafile "brclr" +fossil commit -m "brclr" +if {![uuid_from_commit $RESULT UUIDN]} { + test amend-branchcolor-propagating.setup false +} +write_file datafile "bc1" +fossil commit -m "mc1" +write_file datafile "bc2" +fossil commit -m "mc2" +fossil amend $UUIDN -branchcolor deadbe +test amend-branchcolor-2.1 {[string match "*uuid:*$UUIDN*" $RESULT]} +fossil tag ls --raw current +test amend-branchcolor-2.2 {[string first "bgcolor=#deadbe" $RESULT] != -1} +fossil timeline -n 1 +test amend-branchcolor-2.3 { + [string match {*Change*branch*background*color*to*"#deadbe".*} $RESULT] +} + +######################################## +# Test: -author # +######################################## +fossil amend $UUID -author author-test +test amend-author-1.1 {[string match {*comment:*(user:*author-test)*} $RESULT]} +fossil tag ls --raw $UUID +test amend-author-1.2 {[string first "user=author-test" $RESULT] != -1} +fossil timeline -n 1 +test amend-author-1.3 {[string match {*Change*user*to*"author-test".*} $RESULT]} + +######################################## +# Test: -date # +######################################## +set timestamp [clock scan yesterday] +set date [clock format $timestamp -format "%Y-%m-%d" -gmt 1] +set time [clock format $timestamp -format "%H:%M:%S" -gmt 1] +set datetime "$date $time" +fossil amend $UUIDINIT -date $datetime +test amend-date-1.1 {[string match "*uuid:*$UUIDINIT*$datetime*" $RESULT]} +fossil tag ls --raw $UUIDINIT +test amend-date-1.2 {[string first "date=$datetime" $RESULT] != -1} +fossil timeline -n 1 +test amend-date-1.3 {[string match "*Timestamp*$date*$time*" $RESULT]} +set badformats { + "%+" + "%Y-%m-%d %H:%M%:%S %Z" + "%d/%m/%Y %H:%M%:%S %Z" + "%d/%m/%Y %H:%M%:%S" + "%d/%m/%Y" +} +set sc 0 +foreach badformat $badformats { + incr sc + set datetime [clock format $timestamp -format $badformat -gmt 1] + fossil amend $UUIDINIT -date $datetime -expectError + test amend-date-2.$sc {[string first "YYYY-MM-DD HH:MM:SS" $RESULT] != -1} +} + +######################################## +# Test: -hide # +######################################## +set UUIDH UUIDH +fossil revert +fossil update trunk +fossil branch new tohide current +if {![uuid_from_branch $RESULT UUIDH]} { + test amend-hide-setup false +} +fossil amend $UUIDH -hide +test amend-hide-1.1 {[string match "*uuid:*$UUIDH*" $RESULT]} +fossil tag ls --raw $UUIDH +test amend-hide-1.2 {[string first "hidden" $RESULT] != -1} +fossil timeline -n 1 +test amend-hide-1.3 {[string match {*Add*propagating*"hidden".*} $RESULT]} + +######################################## +# Test: -close # +######################################## +set UUIDC UUIDC +fossil 
branch new cllf $UUID +if {![uuid_from_branch $RESULT UUIDC]} { + test amend-close.setup false +} +fossil update $UUIDC +fossil amend $UUIDC -close +test amend-close-1.1.a {[string match "*uuid:*$UUIDC*" $RESULT]} +test amend-close-1.1.b { + [string match "*comment:*Create*new*branch*named*\"cllf\"*" $RESULT] +} +fossil tag ls --raw $UUIDC +test amend-close-1.2 {[string first "closed" $RESULT] != -1} +fossil timeline -n 1 +test amend-close-1.3 {[string match {*Marked*"Closed".*} $RESULT]} +write_file datafile "cllf" +fossil commit -m "should fail" -expectError +test amend-close-2 {[string first "closed leaf" $RESULT] != -1} + +set UUID3 UUID3 +fossil revert +fossil update trunk +write_file datafile "cb" +fossil commit -m "closed-branch" --branch "closebranch" +if {![uuid_from_commit $RESULT UUID3]} { + test amend-close-3.setup false +} +write_file datafile "b1" +fossil commit -m "m1" +write_file datafile "b2" +fossil commit -m "m2" +fossil amend $UUID3 --close +test amend-close-3.1 {[string match "*uuid:*$UUID3*" $RESULT]} +fossil tag ls --raw current +test amend-close-3.2 {[string first "closed" $RESULT] != -1} +fossil timeline -n 1 +test amend-close-3.3 { + [string match "*Add*propagating*\"closed\".*" $RESULT] +} +write_file datafile "changed" +fossil commit -m "should fail" -expectError +test amend-close-3.4 {[string first "closed leaf" $RESULT] != -1} + +######################################## +# Test: -tag/-cancel # +######################################## +set tagtests { + tagged tagged + {000000 lower Upper alpha 0alpha} {000000 0alpha Upper alpha lower} +} +set tc 0 +foreach {tagt result} $tagtests { + incr tc + set tags {} + set cancels {} + set t1exp "" + set t2exp "*" + set t3exp "*" + set t5exp "*" + foreach tag $tagt { + lappend tags -tag $tag + lappend cancels -cancel $tag + } + foreach res $result { + append t1exp ", $res" + append t2exp "sym-$res*" + append t3exp "Add*tag*\"$res\".*" + append t5exp "Cancel*tag*\"$res\".*" + } + eval fossil amend $UUID $tags + test amend-tag-$tc.1 {[string match "*uuid:*$UUID*tags:*$t1exp*" $RESULT]} + fossil tag ls --raw $UUID + test amend-tag-$tc.2 {[string match $t2exp $RESULT]} + fossil timeline -n 1 + test amend-tag-$tc.3 {[string match $t3exp $RESULT]} + eval fossil amend $UUID $cancels + test amend-tag-$tc.4 {![string match "*tags:*$t1exp*" $RESULT]} + fossil timeline -n 1 + test amend-tag-$tc.5 {[string match $t5exp $RESULT]} +} + +######################################## +# Test: -comment # +######################################## +proc prep-test {comment content} { + global UUID RESULT + + fossil revert + fossil update trunk + write_file datafile $comment + fossil commit -m $content + if {![uuid_from_commit $RESULT UUID]} { + set UUID "" + } +} + +proc test-comment {name UUID comment} { + global VERBOSE RESULT + + test amend-comment-$name.1 { + [string match "*uuid:*$UUID*comment:*$comment*" $RESULT] + } + fossil timeline -n 1 + if {[artifact_from_timeline $RESULT artid]} { + fossil artifact $artid + test amend-comment-$name.2 { + [string match "*T +comment $UUID *[manifest_comment $comment]*" $RESULT] + } + } else { + if {$VERBOSE} { protOut "No artifact found in timeline output: $RESULT" } + test amend-comment-$name.2 false + } + fossil timeline -n 1 + test amend-comment-$name.3 { + [string match "*[short_uuid $UUID]*Edit*check-in*comment.*" $RESULT] + } + fossil info $UUID + test amend-comment-$name.4 { + [string match "*uuid:*$UUID*comment:*$comment*" $RESULT] + } +} + +prep-test "revision 1" "revision 1" +fossil amend 
$UUID -comment "revised revision 1" +test-comment 1 $UUID "revised revision 1" + +prep-test "revision 2" "revision 2" +fossil amend $UUID -m "revised revision 2 with -m" +test-comment 2 $UUID "revised revision 2 with -m" + +prep-test "revision 3" "revision 3" +write_file commitmsg "revision 3 revised" +fossil amend $UUID -message-file commitmsg +test-comment 3 $UUID "revision 3 revised" + +prep-test "revision 4" "revision 4" +write_file commitmsg "revision 4 revised with -M" +fossil amend $UUID -M commitmsg +test-comment 4 $UUID "revision 4 revised with -M" + +prep-test "final comment" "final content" +if {[catch {exec which ed} result]} { + if {$VERBOSE} { protOut "Install ed for interactive comment test: $result" } + test-comment 5 $UUID "ed required for interactive edit" +} else { + fossil settings editor "ed -s" + set comment "interactive edited comment" + fossil_maybe_answer "a\n$comment\n.\nw\nq\n" amend $UUID --edit-comment + test-comment 5 $UUID $comment +} + +######################################## +# Test: NULL UUID # +######################################## +fossil amend {} -close -expectError +test amend-null-uuid {$CODE && [string first "no such check-in" $RESULT] != -1} ADDED test/clean.test Index: test/clean.test ================================================================== --- test/clean.test +++ test/clean.test @@ -0,0 +1,189 @@ +# +# Copyright (c) 2015 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# Tests of the "clean" command, including the ability to undo it. 
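+#
+# (Summary of the behavior the cases below check, inferred from their
+# expected output rather than stated elsewhere: with undo enabled, a
+# file under the 10MiB limit mentioned in the comments is removed
+# without prompting and can be restored with "fossil undo"; with
+# --disable-undo, removal requires either an interactive confirmation
+# or --force, and "fossil undo" afterwards reports nothing to undo.)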
+# + +repo_init + +############################################################################### + +fossil extra +test clean-0 {[normalize_result] eq {}} + +############################################################################### + +write_file f1 "f1 line" +fossil add f1 +fossil commit -m "base file" +fossil ls +test clean-1 {[normalize_result] eq {f1}} + +############################################################################### + +fossil extra +test clean-2 {[normalize_result] eq {}} + +############################################################################### + +write_file f2 "f2 line" +fossil extra +test clean-3 {[normalize_result] eq {f2}} + +############################################################################### + +# clean w/undo enabled, should not prompt, 1 file < 10MiB +fossil clean +test clean-4 {[normalize_result] eq \ +{"fossil undo" is available to undo changes to the working checkout.}} + +############################################################################### + +fossil extra +test clean-5 {[normalize_result] eq {}} + +############################################################################### + +fossil undo +test clean-6 {[normalize_result] eq {NEW f2}} +test clean-7 {[read_file f2] eq "f2 line"} + +############################################################################### + +fossil extra +test clean-8 {[normalize_result] eq {f2}} + +############################################################################### + +write_file f3 [string repeat ABCDEFGHIJK 1048576] +fossil extra +test clean-9 {[normalize_result] eq {f2 +f3}} + +############################################################################### + +# clean w/undo enabled, no prompt, 1 file < 10MiB, 1 file > 10MiB +fossil clean --no-prompt +test clean-10 {[normalize_result] eq \ +{"fossil undo" is available to undo changes to the working checkout.}} + +############################################################################### + +fossil extra +test clean-11 {[normalize_result] eq {f3}} + +############################################################################### + +fossil undo +test clean-12 {[normalize_result] eq {NEW f2}} +test clean-13 {[read_file f2] eq "f2 line"} +test clean-14 {[read_file f3] eq [string repeat ABCDEFGHIJK 1048576]} + +############################################################################### + +fossil extra +test clean-15 {[normalize_result] eq {f2 +f3}} + +############################################################################### + +# clean w/undo enabled, force, 1 file < 10MiB, 1 file > 10MiB +fossil clean --force +test clean-16 {[normalize_result] eq \ +{"fossil undo" is available to undo changes to the working checkout.}} + +############################################################################### + +fossil extra +test clean-17 {[normalize_result] eq {}} + +############################################################################### + +fossil undo +test clean-18 {[normalize_result] eq {NEW f2}} +test clean-19 {[read_file f2] eq "f2 line"} + +############################################################################### + +write_file f4 [string repeat KJIHGFEDCBA 1048576] +fossil extra +test clean-20 {[normalize_result] eq {f2 +f4}} + +############################################################################### + +# clean w/undo disabled, no prompt, 1 file < 10MiB, 1 file > 10MiB +fossil clean --disable-undo --no-prompt +test clean-21 {[normalize_result] eq {}} + 
+############################################################################### + +fossil extra +test clean-22 {[normalize_result] eq {f2 +f4}} + +############################################################################### + +fossil undo -expectError +test clean-23 {[normalize_result] eq {nothing to undo}} + +############################################################################### + +# clean w/undo disabled, force, 1 file < 10MiB, 1 file > 10MiB +fossil clean --disable-undo --force +test clean-24 {[normalize_result] eq {}} + +############################################################################### + +fossil extra +test clean-25 {[normalize_result] eq {}} + +############################################################################### + +fossil undo -expectError +test clean-26 {[normalize_result] eq {nothing to undo}} + +############################################################################### + +write_file f5 "f5 line" +fossil extra +test clean-27 {[normalize_result] eq {f5}} + +############################################################################### + +# clean w/undo disabled, should prompt, 1 file < 10MiB +fossil_maybe_answer Y clean --disable-undo +test clean-28 {[normalize_result] eq \ +{WARNING: Deletion of this file will not be undoable via the 'undo' + command because undo is disabled for this operation. + +Remove unmanaged file "f5" (a=all/y/N)?}} + +############################################################################### + +fossil extra +test clean-29 {[normalize_result] eq {}} + +############################################################################### + +fossil undo -expectError +test clean-30 {[normalize_result] eq {nothing to undo}} + +############################################################################### + +fossil extra +test clean-31 {[normalize_result] eq {}} ADDED test/cmdline.test Index: test/cmdline.test ================================================================== --- test/cmdline.test +++ test/cmdline.test @@ -0,0 +1,30 @@ +# +# Copyright (c) 2012 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# Test command line parsing +# + +proc cmd-line {testname args} { + set i 1 + foreach {cmdline result} $args { + fossil test-echo $cmdline + test cmd-line-$testname.$i {[lrange [split $::RESULT \n] 3 end]=="\{argv\[2\] = \[$result\]\}"} + incr i + } +} +cmd-line 100 abc abc a\"bc a\"bc \"abc\" \"abc\" +cmd-line 101 * * *.* *.* ADDED test/comment.test Index: test/comment.test ================================================================== --- test/comment.test +++ test/comment.test @@ -0,0 +1,318 @@ +# +# Copyright (c) 2014 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) 
+# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# Test comment formatting and printing. +# + +fossil test-comment-format "" "" +test comment-1 {$RESULT eq "\n(1 lines output)"} + +############################################################################### + +fossil test-comment-format --decode "" "" +test comment-2 {$RESULT eq "\n(1 lines output)"} + +############################################################################### + +fossil test-comment-format --width 26 " " "this is a short comment." +test comment-3 {$RESULT eq " this is a short comment.\n(1 lines output)"} + +############################################################################### + +fossil test-comment-format --width 26 --decode " " "this is a short comment." +test comment-4 {$RESULT eq " this is a short comment.\n(1 lines output)"} + +############################################################################### + +fossil test-comment-format --width 26 "*PREFIX* " "this is a short comment." +test comment-5 {$RESULT eq "*PREFIX* this is a short c\n omment.\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 26 --decode "*PREFIX* " "this is a short comment." +test comment-6 {$RESULT eq "*PREFIX* this is a short c\n omment.\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 26 "" "this\\sis\\sa\\sshort\\scomment." +test comment-7 {$RESULT eq "this\\sis\\sa\\sshort\\scommen\nt.\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 26 --decode "" "this\\sis\\sa\\sshort\\scomment." +test comment-8 {$RESULT eq "this is a short comment.\n(1 lines output)"} + +############################################################################### + +fossil test-comment-format --width 78 --decode --trimspace "HH:MM:SS " "this is a long comment that should span multiple lines if the test is working correctly." +test comment-9 {$RESULT eq "HH:MM:SS this is a long comment that should span multiple lines if the test is\n working correctly.\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 78 --decode --trimspace "HH:MM:SS " "this is a long comment that should span multiple lines if the test is working correctly. more text here describing the issue.\\nanother line here..................................................................................*" +test comment-10 {$RESULT eq "HH:MM:SS this is a long comment that should span multiple lines if the test is\n working correctly. 
more text here describing the issue.\n another line here....................................................\n ..............................*\n(4 lines output)"} + +############################################################################### + +fossil test-comment-format --width 78 "HH:MM:SS " "....................................................................................*" +test comment-11 {$RESULT eq "HH:MM:SS .....................................................................\n ...............*\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 78 "HH:MM:SS " ".....................................................................*" 78 +test comment-12 {$RESULT eq "HH:MM:SS .....................................................................\n *\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 26 "*TEST* " "this\tis a test." +test comment-13 {$RESULT eq "*TEST* this\tis a te\n st.\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 "*TEST* " "this is a test......................................................................................................................." +test comment-14 {$RESULT eq "*TEST* this is a test.......................................\n .....................................................\n ...........................\n(3 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 --wordbreak "*TEST* " "this is a test......................................................................................................................." +test comment-15 {$RESULT eq "*TEST* this is a\n test.................................................\n .....................................................\n .................\n(4 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 "*TEST* " "this is a test......................................................................................................................." +test comment-16 {$RESULT eq "*TEST* this is a\n test.................................................\n .....................................................\n .................\n(4 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 --wordbreak "*TEST* " "this is a test......................................................................................................................." 
+test comment-17 {$RESULT eq "*TEST* this is a\n test.................................................\n .....................................................\n .................\n(4 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 "*TEST* " "one two three four five six seven eight nine ten eleven twelve" +test comment-18 {$RESULT eq "*TEST* one two three four five six seven eight nine ten elev\n en twelve\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 --wordbreak "*TEST* " "one two three four five six seven eight nine ten eleven twelve" +test comment-19 {$RESULT eq "*TEST* one two three four five six seven eight nine ten\n eleven twelve\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 "*TEST* " "one two three four five six seven eight nine ten eleven twelve" +test comment-20 {$RESULT eq "*TEST* one two three four five\n six seven eight nine ten\n eleven twelve\n(3 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 --wordbreak "*TEST* " "one two three four five six seven eight nine ten eleven twelve" +test comment-21 {$RESULT eq "*TEST* one two three four five\n six seven eight nine ten\n eleven twelve\n(3 lines output)"} + +############################################################################### + +fossil test-comment-format --legacy "" "" +test comment-22 {$RESULT eq "\n(1 lines output)"} + +############################################################################### + +fossil test-comment-format --legacy --decode "" "" +test comment-23 {$RESULT eq "\n(1 lines output)"} + +############################################################################### + +fossil test-comment-format --width 26 --legacy " " "this is a short comment." +test comment-24 {$RESULT eq " this is a short comment.\n(1 lines output)"} + +############################################################################### + +fossil test-comment-format --width 26 --legacy --decode " " "this is a short comment." +test comment-25 {$RESULT eq " this is a short comment.\n(1 lines output)"} + +############################################################################### + +fossil test-comment-format --width 25 --legacy "*PREFIX* " "this is a short comment." +test comment-26 {$RESULT eq "*PREFIX* this is a short\n comment.\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 25 --legacy --decode "*PREFIX* " "this is a short comment." +test comment-27 {$RESULT eq "*PREFIX* this is a short\n comment.\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 26 --legacy "" "this\\sis\\sa\\sshort\\scomment." +test comment-28 {$RESULT eq "this\\sis\\sa\\sshort\\scommen\nt.\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 26 --legacy --decode "" "this\\sis\\sa\\sshort\\scomment." 
+test comment-29 {$RESULT eq "this is a short comment.\n(1 lines output)"} + +############################################################################### + +fossil test-comment-format --width 78 --legacy --decode "HH:MM:SS " "this is a long comment that should span multiple lines if the test is working correctly." +test comment-30 {$RESULT eq "HH:MM:SS this is a long comment that should span multiple lines if the test\n is working correctly.\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 78 --legacy --decode "HH:MM:SS " "this is a long comment that should span multiple lines if the test is working correctly. more text here describing the issue.\\nanother line here..................................................................................*" +test comment-31 {$RESULT eq "HH:MM:SS this is a long comment that should span multiple lines if the test\n is working correctly. more text here describing the issue. another\n line\n here.................................................................\n .................*\n(5 lines output)"} + +############################################################################### + +fossil test-comment-format --width 78 --legacy "HH:MM:SS " "....................................................................................*" +test comment-32 {$RESULT eq "HH:MM:SS .....................................................................\n ...............*\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 78 --legacy "HH:MM:SS " ".....................................................................*" +test comment-33 {$RESULT eq "HH:MM:SS .....................................................................\n *\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 26 --legacy "*TEST* " "this\tis a test." +test comment-34 {$RESULT eq "*TEST* this is a test.\n(1 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 --legacy "*TEST* " "this is a test......................................................................................................................." +test comment-35 {$RESULT eq "*TEST* this is a\n test.................................................\n .....................................................\n .................\n(4 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 --legacy --wordbreak "*TEST* " "this is a test......................................................................................................................." +test comment-36 {$RESULT eq "*TEST* this is a\n test.................................................\n .....................................................\n .................\n(4 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 --legacy "*TEST* " "this is a test......................................................................................................................." 
+test comment-37 {$RESULT eq "*TEST* this is a\n test.................................................\n .....................................................\n .................\n(4 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 --legacy --wordbreak "*TEST* " "this is a test......................................................................................................................." +test comment-38 {$RESULT eq "*TEST* this is a\n test.................................................\n .....................................................\n .................\n(4 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 --legacy "*TEST* " "one two three four five six seven eight nine ten eleven twelve" +test comment-39 {$RESULT eq "*TEST* one two three four five six seven eight nine ten\n eleven twelve\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 --legacy --wordbreak "*TEST* " "one two three four five six seven eight nine ten eleven twelve" +test comment-40 {$RESULT eq "*TEST* one two three four five six seven eight nine ten\n eleven twelve\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 --legacy "*TEST* " "one two three four five six seven eight nine ten eleven twelve" +test comment-41 {$RESULT eq "*TEST* one two three four five six seven eight nine ten\n eleven twelve\n(2 lines output)"} + +############################################################################### + +fossil test-comment-format --width 60 --legacy --wordbreak "*TEST* " "one two three four five six seven eight nine ten eleven twelve" +test comment-42 {$RESULT eq "*TEST* one two three four five six seven eight nine ten\n eleven twelve\n(2 lines output)"} + +############################################################################### + +set orig "xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\\nxxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\\nxxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\\nxxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\\nxxxxxxx." +fossil test-comment-format --width 73 --decode --origbreak "" $orig +test comment-43 {$RESULT eq "xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\nxxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\nxxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\nxxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\nxxxxxxx.\n(5 lines output)"} + +############################################################################### + +fossil test-comment-format --width 73 --decode --origbreak "" $orig $orig +test comment-44 {$RESULT eq "xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\nxxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\nxxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. 
x\nxxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\nxxxxxxx.\n(5 lines output)"} + +############################################################################### + +fossil test-comment-format --width 73 --decode --origbreak "" "00:00:00 \[0000000000\] *CURRENT* $orig" $orig +test comment-45 {$RESULT eq "00:00:00 \[0000000000\] *CURRENT* \nxxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\nxxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\nxxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\nxxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\nxxxxxxx.\n(6 lines output)"} + +############################################################################### + +fossil test-comment-format --width 82 --indent 9 --decode --origbreak " " $orig +test comment-46 {$RESULT eq " xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\n xxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\n xxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\n xxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\n xxxxxxx.\n(5 lines output)"} + +############################################################################### + +fossil test-comment-format --width 82 --indent 9 --decode --origbreak " " $orig $orig +test comment-47 {$RESULT eq " xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\n xxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\n xxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\n xxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\n xxxxxxx.\n(5 lines output)"} + +############################################################################### + +fossil test-comment-format --width 82 --indent 9 --decode --origbreak "00:00:00 " "\[0000000000\] *CURRENT* $orig" $orig +test comment-48 {$RESULT eq "00:00:00 \[0000000000\] *CURRENT* \n xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\n xxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\n xxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\n xxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\n xxxxxxx.\n(6 lines output)"} + +############################################################################### + +fossil test-comment-format --width 72 --decode --trimspace --origbreak "" $orig +test comment-49 {$RESULT eq "xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\nxxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\nxxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\nxxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\nxxxxxxx.\n(5 lines output)"} + +############################################################################### + +fossil test-comment-format --width 72 --decode --trimspace --origbreak "" $orig $orig +test comment-50 {$RESULT eq "xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\nxxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\nxxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. 
x\nxxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\nxxxxxxx.\n(5 lines output)"} + +############################################################################### + +fossil test-comment-format --width 72 --decode --trimspace --origbreak "" "00:00:00 \[0000000000\] *CURRENT* $orig" $orig +test comment-51 {$RESULT eq "00:00:00 \[0000000000\] *CURRENT* \nxxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\nxxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\nxxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\nxxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\nxxxxxxx.\n(6 lines output)"} + +############################################################################### + +fossil test-comment-format --width 81 --indent 9 --decode --trimspace --origbreak " " $orig +test comment-52 {$RESULT eq " xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\n xxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\n xxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\n xxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\n xxxxxxx.\n(5 lines output)"} + +############################################################################### + +fossil test-comment-format --width 81 --indent 9 --decode --trimspace --origbreak " " $orig $orig +test comment-53 {$RESULT eq " xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\n xxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\n xxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\n xxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\n xxxxxxx.\n(5 lines output)"} + +############################################################################### + +fossil test-comment-format --width 81 --indent 9 --decode --trimspace --origbreak "00:00:00 " "\[0000000000\] *CURRENT* $orig" $orig +test comment-54 {$RESULT eq "00:00:00 \[0000000000\] *CURRENT* \n xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\n xxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\n xxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\n xxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\n xxxxxxx.\n(6 lines output)"} + +############################################################################### + +fossil test-comment-format --width 72 --decode --trimcrlf --origbreak "" $orig +test comment-55 {$RESULT eq "xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\nxxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\nxxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\nxxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\nxxxxxxx.\n(5 lines output)"} + +############################################################################### + +fossil test-comment-format --width 72 --decode --trimcrlf --origbreak "" $orig $orig +test comment-56 {$RESULT eq "xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\nxxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\nxxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. 
x\nxxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\nxxxxxxx.\n(5 lines output)"} + +############################################################################### + +fossil test-comment-format --width 72 --decode --trimcrlf --origbreak "" "00:00:00 \[0000000000\] *CURRENT* $orig" $orig +test comment-57 {$RESULT eq "00:00:00 \[0000000000\] *CURRENT* \nxxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\nxxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\nxxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\nxxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\nxxxxxxx.\n(6 lines output)"} + +############################################################################### + +fossil test-comment-format --width 81 --indent 9 --decode --trimcrlf --origbreak " " $orig +test comment-58 {$RESULT eq " xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\n xxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\n xxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\n xxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\n xxxxxxx.\n(5 lines output)"} + +############################################################################### + +fossil test-comment-format --width 81 --indent 9 --decode --trimcrlf --origbreak " " $orig $orig +test comment-59 {$RESULT eq " xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\n xxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\n xxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\n xxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\n xxxxxxx.\n(5 lines output)"} + +############################################################################### + +fossil test-comment-format --width 81 --indent 9 --decode --trimcrlf --origbreak "00:00:00 " "\[0000000000\] *CURRENT* $orig" $orig +test comment-60 {$RESULT eq "00:00:00 \[0000000000\] *CURRENT* \n xxxx xx xxxxxxx xxxx xxxxxx xxxxxxx, xxxxxxx, x xxxx xxxxxx xx xxxx xxxx\n xxxxxxx xxxxx xxxx xxxx xx xxxxxxx xxxxxxx (xxxxxx xxxxxxxxx x xxxxx).\n xxx'x xxx xxx xx xxxxx xxxx xxx xxx --xxxxxxxxxxx xxxxxx xx xx xxxx. x\n xxxxx x xxxxxx xxxx xxxx xxxx xxxx xxxx x xxxxx xx xxx x xxxxxxxx\n xxxxxxx.\n(6 lines output)"} ADDED test/contains-selector.test Index: test/contains-selector.test ================================================================== --- test/contains-selector.test +++ test/contains-selector.test @@ -0,0 +1,49 @@ +# +# Copyright (c) 2015 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# Test containsSelector() function in src/style.c +# + +proc contains-selector {testId css selectorResultMap} { + set css [string trim $css] + set filename [file join $::tempPath compare-selector.css] + set fh [open $filename w] + puts -nonewline $fh $css + close $fh + foreach {selector found} $selectorResultMap { + set expected "$selector [expr {$found ? 
"found" : "not found"}]" + set result [fossil test-contains-selector $filename $selector] + test "contains-selector $testId $selector" {$result eq $expected} + } + file delete $filename +} + +contains-selector 1 { + .a.b {} + .c .de {} + /* comment */ + .c .d, .e /* comment */ {} +} { + .a 0 + .b 0 + .a.b 1 + .c 0 + .d 0 + {.c.d} 0 + {.c .d} 1 + .e 1 +} Index: test/delta1.test ================================================================== --- test/delta1.test +++ test/delta1.test @@ -1,21 +1,15 @@ # # Copyright (c) 2006 D. Richard Hipp # # This program is free software; you can redistribute it and/or -# modify it under the terms of the GNU General Public -# License version 2 as published by the Free Software Foundation. +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# General Public License for more details. -# -# You should have received a copy of the GNU General Public -# License along with this library; if not, write to the -# Free Software Foundation, Inc., 59 Temple Place - Suite 330, -# Boston, MA 02111-1307, USA. +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # @@ -31,12 +25,12 @@ # two files to make sure that deltas between these two files # work properly. # set filelist [glob $testdir/*] foreach f $filelist { + if {[file isdir $f]} continue set base [file root [file tail $f]] -puts "base=$base f=$f" set f1 [read_file $f] write_file t1 $f1 for {set i 0} {$i<100} {incr i} { write_file t2 [random_changes $f1 1 1 0 0.1] fossil test-delta t1 t2 ADDED test/diff-test-1.wiki Index: test/diff-test-1.wiki ================================================================== --- test/diff-test-1.wiki +++ test/diff-test-1.wiki @@ -0,0 +1,54 @@ +<title>Graph Test One + +This page contains list of URLs of interesting diffs. +Click on all URLs, one by one, to verify +the correct operation of the diff logic. + + * + Multiple edits on a single line. This is an SQLite version + update diff. It is a large diff and contains many other interesting + features. Scan the whole diff. + * Tricky alignment and multiple edits per line. + * Add a column to a table + * Column alignment with multibyte characters. + The edit of a line with multibyte characters is the first chunk. + * Large diff of sqlite3.c. This diff was very + slow prior to the performance enhancement change [9e15437e97]. + * + A difficult indentation change. (show-version-diffs must be enabled) + * Another tricky indentation. Notice especially + lines 59398 and 59407 on the left. + * Inverse of the previous. + * A complex change that is difficult to align, and + hence falls back to the "delete left and insert right" strategy. + * Inverse of the previous. + * sqlite3.c changes + that are difficult to align. + * sqlite3.c changes inverted. + * Lorem Ipsum in Greek. + * Lorem Ipsum in Greek inverted. + +External: + + * + Code indentation change. + * + A complex change (chunk 1) in which the alignment becomes so complex + that it is better for clarity to abandon it and just show the left + and right sides contiguously. + * + An indentation change. 
See especially lines 2313 and 2317 on the right, + that their green indentation addition is left-justified. ADDED test/file1.test Index: test/file1.test ================================================================== --- test/file1.test +++ test/file1.test @@ -0,0 +1,100 @@ +# +# Copyright (c) 2011 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# File utilities +# + +repo_init + +proc simplify-name {testname args} { + set i 1 + foreach {path result} $args { + fossil test-simplify-name $path + test simplify-name-$testname.$i {$::RESULT=="\[$path\] -> \[$result\]"} + incr i + } +} + +proc relative-name {testname args} { + set i 1 + foreach {subdir path result} $args { + fossil test-relative-name --chdir $subdir $path + test relative-name-$testname.$i {$::RESULT==$result} + incr i + } +} + +proc relative-tree-name {testname args} { + set i 1 + foreach {subdir path result} $args { + fossil test-tree-name --chdir $subdir $path + test relative-tree-name-$testname.$i {$::RESULT==$result} + incr i + } +} + +proc absolute-tree-name {testname args} { + set i 1 + foreach {subdir path result} $args { + fossil test-tree-name --chdir $subdir --absolute $path + test absolute-tree-name-$testname.$i {$::RESULT==$result} + incr i + } +} + +simplify-name 100 . . .// . .. .. ..///// .. +simplify-name 101 {} {} / / ///////// / ././././ . +simplify-name 102 x x /x /x ///x //x +simplify-name 103 a/b a/b /a/b /a/b a///b a/b ///a///b///// //a/b +simplify-name 104 a/b/../c/ a/c /a/b/../c /a/c /a/b//../c /a/c /a/b/..///c /a/c +simplify-name 105 a/b/../../x/y x/y /a/b/../../x/y /x/y +simplify-name 106 a/b/../../../x/y ../x/y /a/b/../../../x/y /../x/y +simplify-name 107 a/./b/.././../x/y x/y a//.//b//..//.//..//x//y/// x/y + +if {$::tcl_platform(os)=="Windows NT"} { + simplify-name 108 //?/a:/a/b a:/a/b //?/UNC/a/b //a/b //?/ {} + simplify-name 109 \\\\?\\a:\\a\\b a:/a/b \\\\?\\UNC\\a\\b //a/b \\\\?\\ {} +} + +# This is needed because we are now running outside of the Fossil checkout. +file mkdir file1; set savedPwd [pwd]; cd file1 + +# Those directories are only needed for the testcase being able to "--chdir" to it. +file mkdir test1 +file mkdir test1/test2 + +relative-name 100 . . . test1 [pwd] .. test1 [pwd]/ .. test1 [pwd]/test ../test +relative-name 101 test1/test2 [pwd] ../.. test1/test2 [pwd]/ ../.. test1/test2 [pwd]/test ../../test +relative-name 102 test1 [pwd]/test ../test . [pwd]/file1 ./file1 . [pwd]/file1/file2 ./file1/file2 +relative-name 103 . [pwd] . + +relative-tree-name 100 . . file1 test1 [pwd] file1 test1 [pwd]/ file1 test1 [pwd]/test file1/test +relative-tree-name 101 test1/test2 [pwd] file1 test1/test2 [pwd]/ file1 test1/test2 [pwd]/test file1/test +relative-tree-name 102 test1 [pwd]/test file1/test . [pwd]/file1 file1/file1 . [pwd]/file1/file2 file1/file1/file2 +relative-tree-name 103 . [pwd] file1 + +set dirname [file normalize [file dirname [pwd]]] + +absolute-tree-name 100 . . 
$dirname test1 [pwd] [pwd] test1 [pwd]/ $dirname/file1 test1 [pwd]/test $dirname/file1/test +absolute-tree-name 101 test1/test2 [pwd] $dirname/file1 test1/test2 [pwd]/ $dirname/file1 test1/test2 [pwd]/test $dirname/file1/test +absolute-tree-name 102 test1 [pwd]/test $dirname/file1/test . [pwd]/file1 $dirname/file1/file1 . [pwd]/file1/file2 $dirname/file1/file1/file2 +absolute-tree-name 103 . [pwd] $dirname/file1 + +catch {file delete test1/test2} +catch {file delete test1} + +if {[info exists savedPwd]} {cd $savedPwd; unset savedPwd} ADDED test/fileStat.th1 Index: test/fileStat.th1 ================================================================== --- test/fileStat.th1 +++ test/fileStat.th1 @@ -0,0 +1,102 @@ + + proc doSomeTclSetup {} { + # + # NOTE: Copy repository file name to the Tcl interpreter. This is + # done first (once) because it will be necessary for almost + # everything else later on. + # + tclInvoke set repository [repository] + + # + # NOTE: Create some procedures in the Tcl interpreter to perform + # useful operations. This could also do things like load + # packages, etc. + # + tclEval { + # + # NOTE: Returns an [exec] command for Fossil, using the provided + # sub-command and arguments, suitable for use with [eval] + # or [catch]. + # + proc getFossilCommand { repository user args } { + global env + + lappend result exec [info nameofexecutable] + + if {[info exists env(GATEWAY_INTERFACE)]} then { + # + # NOTE: This option is required when calling + # out to the Fossil executable from a + # CGI process. + # + lappend result -nocgi + } + + eval lappend result $args + + if {[string length $repository] > 0} then { + # + # NOTE: This is almost certainly required + # when calling out to the Fossil + # executable on the server because + # there is almost never an open + # checkout. + # + lappend result -R $repository + } + + if {[string length $user] > 0} then { + lappend result -U $user + } + + # th1Eval [list html $result
        ] + + return $result + } + } + } + + proc getLatestTrunkCheckIn {} { + tclEval { + # + # NOTE: Get the unique Id of the latest check-in on trunk. + # + return [lindex [regexp -line -inline -nocase -- \ + {^uuid:\s+([0-9A-F]{40}) } [eval [getFossilCommand \ + $repository "" info trunk]]] end] + } + } + + proc theSumOfAllFiles { id } { + # + # NOTE: Copy check-in Id value to the Tcl interpreter. + # + tclInvoke set id $id + + tclEval { + set count 0 + + foreach line [split [eval [getFossilCommand \ + $repository "" artifact $id]] \n] { + # + # NOTE: Is this an "F" (file) card? + # + if {[string range $line 0 1] eq "F "} then { + incr count + } + } + + return $count + } + } + + doSomeTclSetup; # perform some extra setup for the Tcl interpreter. + + set checkIn [getLatestTrunkCheckIn] + set totalFiles [theSumOfAllFiles $checkIn] +
        + +
        +As of trunk check-in decorate \[$checkIn\], this +repository contains html $totalFiles files. +
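(The fileStat.th1 page above works by shelling out to the fossil executable from Tcl: getFossilCommand builds an [exec] command line, getLatestTrunkCheckIn scrapes the trunk check-in id from "fossil info trunk", and theSumOfAllFiles counts the "F" (file) cards in that check-in's artifact. A rough standalone Tcl sketch of the same idea, outside TH1, follows; the repository path and the assumption that a "fossil" binary is on PATH are illustrative and not taken from the patch itself.)

    # Sketch only: count the file ("F") cards in the latest trunk check-in.
    # Assumes a "fossil" binary on PATH; $repo is a placeholder path.
    set repo /path/to/repo.fossil
    set info [exec fossil info trunk -R $repo]
    # The "uuid:" label matches the regexp used in fileStat.th1 above;
    # newer fossil versions label the same line "hash:".
    regexp -line -nocase {^(?:uuid|hash):\s+([0-9a-f]+)} $info -> id
    set count 0
    foreach line [split [exec fossil artifact $id -R $repo] \n] {
      if {[string range $line 0 1] eq "F "} { incr count }
    }
    puts "check-in $id contains $count files"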
        ADDED test/fileage-test-1.wiki Index: test/fileage-test-1.wiki ================================================================== --- test/fileage-test-1.wiki +++ test/fileage-test-1.wiki @@ -0,0 +1,14 @@ + +This page contains URLs for file-age computations that have given +trouble in the past. Shif-click on on the links, one-by-one, to verify +that the current implementation works correctly: + + * [/fileage?name=c9df0dcdaa402] - Verify that the many + execute permission changes that occurred about 24 hours before + check-in c9df0dcdaa402 do not appear as file changes. + + * [/tree?ci=c9df0dcdaa40&mtime=0&type=tree] - Verify that all + three skin files (css.txt, footer.txt, and header.txt) appear + in all of the skin/*/ folders. + + * On both of the above, check for excessive computation time. ADDED test/glob.test Index: test/glob.test ================================================================== --- test/glob.test +++ test/glob.test @@ -0,0 +1,184 @@ +# +# Copyright (c) 2013 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# Test glob pattern parsing +# + +proc glob-parse {testname args} { + set i 1 + foreach {pattern string result} $args { + fossil test-glob $pattern $string + test glob-parse-$testname.$i {$::RESULT eq $result} + incr i + } +} + +glob-parse 100 test test [string map [list \r\n \n] \ +{SQL expression: (x GLOB 'test') +pattern[0] = [test] +1 test}] + +glob-parse 101 "one two" one [string map [list \r\n \n] \ +{SQL expression: (x GLOB 'one' OR x GLOB 'two') +pattern[0] = [one] +pattern[1] = [two] +1 one}] + +glob-parse 102 t* test [string map [list \r\n \n] \ +{SQL expression: (x GLOB 't*') +pattern[0] = [t*] +1 test}] + +glob-parse 103 "o* two" one [string map [list \r\n \n] \ +{SQL expression: (x GLOB 'o*' OR x GLOB 'two') +pattern[0] = [o*] +pattern[1] = [two] +1 one}] + +glob-parse 104 {"o* two" "three four"} "one two" [string map [list \r\n \n] \ +{SQL expression: (x GLOB 'o* two' OR x GLOB 'three four') +pattern[0] = [o* two] +pattern[1] = [three four] +1 one two}] + +glob-parse 105 {"o* two" "three four"} "two one" [string map [list \r\n \n] \ +{SQL expression: (x GLOB 'o* two' OR x GLOB 'three four') +pattern[0] = [o* two] +pattern[1] = [three four] +0 two one}] + +glob-parse 106 "\"o*\ntwo\" \"three\nfour\"" "one\ntwo" \ +[string map [list \r\n \n] \ +{SQL expression: (x GLOB 'o* +two' OR x GLOB 'three +four') +pattern[0] = [o* +two] +pattern[1] = [three +four] +1 one +two}] + +glob-parse 107 "\"o*\ntwo\" \"three\nfour\"" "two\none" \ +[string map [list \r\n \n] \ +{SQL expression: (x GLOB 'o* +two' OR x GLOB 'three +four') +pattern[0] = [o* +two] +pattern[1] = [three +four] +0 two +one}] + +glob-parse 108 "\"o*\rtwo\" \"three\rfour\"" "one\rtwo" \ +[string map [list \r\n \n] \ +{SQL expression: (x GLOB 'o* +two' OR x GLOB 'three +four') +pattern[0] = [o* +two] +pattern[1] = [three +four] +1 one +two}] + +glob-parse 109 "\"o*\rtwo\" \"three\rfour\"" "two\rone" \ +[string map [list \r\n \n] \ +{SQL expression: (x GLOB 'o* +two' OR x GLOB 'three 
+four') +pattern[0] = [o* +two] +pattern[1] = [three +four] +0 two +one}] + +glob-parse 110 "'o*\ntwo' 'three\nfour'" "one\ntwo" \ +[string map [list \r\n \n] \ +{SQL expression: (x GLOB 'o* +two' OR x GLOB 'three +four') +pattern[0] = [o* +two] +pattern[1] = [three +four] +1 one +two}] + +glob-parse 111 "'o*\ntwo' 'three\nfour'" "two\none" \ +[string map [list \r\n \n] \ +{SQL expression: (x GLOB 'o* +two' OR x GLOB 'three +four') +pattern[0] = [o* +two] +pattern[1] = [three +four] +0 two +one}] + +glob-parse 112 "\"'o*' 'two'\" \"'three' 'four'\"" "'one' 'two'" \ +[string map [list \r\n \n] \ +{SQL expression: (x GLOB '''o*'' ''two''' OR x GLOB '''three'' ''four''') +pattern[0] = ['o*' 'two'] +pattern[1] = ['three' 'four'] +1 'one' 'two'}] + +glob-parse 113 "\"'o*' 'two'\" \"'three' 'four'\"" "two one" \ +[string map [list \r\n \n] \ +{SQL expression: (x GLOB '''o*'' ''two''' OR x GLOB '''three'' ''four''') +pattern[0] = ['o*' 'two'] +pattern[1] = ['three' 'four'] +0 two one}] + +glob-parse 114 o*,two one [string map [list \r\n \n] \ +{SQL expression: (x GLOB 'o*' OR x GLOB 'two') +pattern[0] = [o*] +pattern[1] = [two] +1 one}] + +glob-parse 115 "o*,two three,four" "one two" [string map [list \r\n \n] \ +{SQL expression: (x GLOB 'o*' OR x GLOB 'two' OR x GLOB 'three' OR x GLOB 'four') +pattern[0] = [o*] +pattern[1] = [two] +pattern[2] = [three] +pattern[3] = [four] +1 one two}] + +glob-parse 116 'o*,two' one [string map [list \r\n \n] \ +{SQL expression: (x GLOB 'o*,two') +pattern[0] = [o*,two] +0 one}] + +glob-parse 117 'o*,two' one,two [string map [list \r\n \n] \ +{SQL expression: (x GLOB 'o*,two') +pattern[0] = [o*,two] +1 one,two}] + +glob-parse 118 "'o*,two three,four'" "one two three,four" \ +[string map [list \r\n \n] \ +{SQL expression: (x GLOB 'o*,two three,four') +pattern[0] = [o*,two three,four] +0 one two three,four}] + +glob-parse 119 "'o*,two three,four'" "one,two three,four" \ +[string map [list \r\n \n] \ +{SQL expression: (x GLOB 'o*,two three,four') +pattern[0] = [o*,two three,four] +1 one,two three,four}] ADDED test/graph-test-1.wiki Index: test/graph-test-1.wiki ================================================================== --- test/graph-test-1.wiki +++ test/graph-test-1.wiki @@ -0,0 +1,84 @@ +Graph Test One + +This page contains examples a list of URLs of timelines with +interesting graphs. Click on all URLs, one by one, to verify +the correct operation of the graph drawing logic. + + * + 20-element timeline, check-ins only, before 2010-11-08 + * + 20-element timeline, check-ins only, no graph, before 2010-11-08 + * + 20-element timeline, check-ins only, file changes, before 2010-11-08 + * + 40-element timeline, check-ins only, before 2010-11-08 + * + 1000-element timeline, check-ins only, before 2010-11-08 + * + 10-elements circa 2010-11-07 10:23:00, with dividers + * + 10-elements circa 2010-11-07 10:23:00, without dividers + * + Parents and children of check-in 3ea66260b5555 + * multiple merge descenders from the penultimate node. + + * + multiple branch risers. + * + multiple branch risers, n=18. + * + multiple branch risers, n=9. + * + Experimental branch with related check-ins. + * + Experimental branch with merge-ins only. + * + Experimental branch check-ins only. + * + Check-ins tagged "release" and related check-ins + * + Check-ins tagged "release" and merge-ins + * + Only check-ins tagged "release" + * + History of source file "Makefile". + * + 20 elements after 1970-01-01. + * + All check-ins - a huge graph. 
+ * + This malformed commit has a + merge parent which is not a valid checkin. + * + From e663bac6f7 to a298a0e2f9 by shortest path. + * + From e663bac6f7 to a298a0e2f9 without merge links. + * + Common ancestor path of e663bac6f7 to a298a0e2f9. + * + Merge on the same branch does not result in a leaf. + + * + This timeline has a hidden commit. Click Unhide to reveal. + * Isolated check-ins. + +External: + + * Timewarp due to a mis-configured system clock. + * Show all three separate deletions of "id.test". + (Scroll down for the third deletion.) + * Merge arrows to the left and to the right + * Previous, with a scrunched graph + * Previous, with a severely scrunched graph ADDED test/json.test Index: test/json.test ================================================================== --- test/json.test +++ test/json.test @@ -0,0 +1,849 @@ +# +# Copyright (c) 2016 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# Test JSON Support +# + +# Make sure we have a build with the json command at all and that it +# is not stubbed out. This assumes the current (as of 2016-01-27) +# practice of eliminating all trace of the fossil json command when +# not configured. If that changes, these conditions might not prevent +# the rest of this file from running. +fossil test-th-eval "hasfeature json" + +if {$::RESULT ne "1"} then { + puts "Fossil was not compiled with JSON support."; return +} + +# We need a JSON parser to effectively test the JSON produced by +# fossil. It looks like the one from tcllib is exactly what we need. +# On ActiveTcl, add it with teacup. On other platforms, YMMV. +# teacup install json +# teacup install json::write +package require json + +proc json2dict {txt} { + set rc [catch {::json::json2dict $txt} result options] + if {$rc != 0} { + protOut "JSON ERROR: $result" + return {} + } + return $result +} + +# and that the json itself smells ok and has the expected API error code in it +fossil json -expectError +set JR [json2dict $RESULT] +if {$JR eq ""} { + puts "Fossil was not compiled with JSON support (bad JSON)."; return +} +test json-1 {[dict exists $JR resultCode] + && [dict get $JR resultCode] eq "FOSSIL-4102"} + +# Use the CLI interface to execute a JSON command. Sets the global +# RESULT to the response text, and JR to a Tcl dict conversion of the +# response body. +# +# Returns "200" or "500". +proc fossil_json {args} { + global RESULT JR + uplevel 1 fossil json {*}$args + set JR [json2dict $RESULT] + return "200" +} + +# Use the HTTP interface to GET a JSON API URL. Sets the globals +# RESULT to the HTTP response body, and JR to a Tcl dict conversion of +# the response body. +# +# Returns the status code from the HTTP header. 
+proc fossil_http_json {url {cookie "Muppet=Monster"} args} { + global RESULT JR + set request "GET $url HTTP/1.1\r\nHost: localhost\r\nUser-Agent: Fossil-http-json\r\nCookie: $cookie" + set RESULT [fossil_maybe_answer $request http {*}$args] + regexp {(?w)(.*)^\s*$(.*)} $RESULT dummy head body + regexp {^HTTP\S+\s+(\d\d\d)\s+(.*)$} $head dummy status msg + if {$status eq "200"} { + set JR [json2dict $body] + } + return $status +} + + +# Use the HTTP interface to POST a JSON API URL. Sets the globals +# RESULT to the HTTP response body, and JR to a Tcl dict conversion of +# the response body. +# +# Returns the status code from the HTTP header. +proc fossil_post_json {url data {cookie "Muppet=Monster"} args} { + global RESULT JR + + # set up a full GET or POST HTTP request + set len [string length $data] + if {$len > 0} { + set request [subst {POST $url HTTP/1.0\r +Host: localhost\r +User-Agent: Fossil-Test\r +Cookie: $cookie\r +Content-Type: application/json +Content-Length $len +\r +$data}] + } else { + set request [subst {GET $url HTTP/1.0\r +Host: localhost\r +User-Agent: Fossil-Test\r +Cookie: $cookie\r +\r +}] + } + + # handle the actual request + flush stdout + #exec $fossilexe + set RESULT [fossil_maybe_answer $request http {*}$args] + + # separate HTTP headers from body + regexp {(?w)(.*)^\s*$(.*)} $RESULT dummy head body + regexp {^HTTP\S+\s+(\d\d\d)\s+(.*)$} $head dummy status msg + if {$status eq "200"} { + if {[string length $body] > 0} { + set JR [json2dict $body] + } else { + set JR "" + } + } + return $status +} + + +# Inspect a dict for keys it must have and keys it must not have +proc test_dict_keys {testname D okfields badfields} { + set i 1 + foreach f $okfields { + test "$testname-$i" {[dict exists $D $f]} + incr i + } + foreach f $badfields { + test "$testname-$i" {![dict exists $D $f]} + incr i + } +} + +# Inspect the envelope part of a returned JSON structure to confirm +# that it has specific fields and that it lacks specific fields. +proc test_json_envelope {testname okfields badfields} { + test_dict_keys $testname $::JR $okfields $badfields +} + +# Inspect the envelope of a normal successful result +proc test_json_envelope_ok {testname} { + test_json_envelope $testname [concat fossil timestamp command procTimeUs \ + procTimeMs payload] [concat resultCode resultText] +} + +# Inspect the payload of a successful result to confirm that it has +# specific fields and that it lacks specific fields. +proc test_json_payload {testname okfields badfields} { + test_dict_keys $testname [dict get $::JR payload] $okfields $badfields +} + +#### VERSION AKA HAI + +# The JSON API generally assumes we have a respository, so let it have one. 
+repo_init + +# Check for basic envelope fields in the result with an error +fossil_json -expectError +test_json_envelope json-enverr [concat resultCode fossil timestamp \ +resultText command procTimeUs procTimeMs] {} +test json-enverr-rc-1 {[dict get $JR resultCode] eq "FOSSIL-3002"} + + +# Check for basic envelope fields in the result with a successful +# command +set HAIfields [concat manifestUuid manifestVersion manifestDate \ +manifestYear releaseVersion releaseVersionNumber \ +resultCodeParanoiaLevel jsonApiVersion] + +fossil_json HAI +test_json_envelope_ok json-HAI +test_json_payload json-HAI $HAIfields {} +test json-HAI-api {[dict get $JR payload jsonApiVersion] >= 20120713} + +# Check for basic envelope fields in a HTTP result with a successful +# command +fossil_http_json /json/HAI +test_json_envelope_ok json-http-HAI +test_json_payload json-http-HAI $HAIfields {} +test json-http-HAI-api {[dict get $JR payload jsonApiVersion] >= 20120713} + +fossil_json version +test_json_envelope_ok json-version +test_json_payload json-version $HAIfields {} +test json-version-api {[dict get $JR payload jsonApiVersion] >= 20120713} + +#### ARTIFACT + +# sha1 of 0 bytes and a file to match in a commit +set UUID_empty da39a3ee5e6b4b0d3255bfef95601890afd80709 +write_file empty "" +fossil add empty +fossil ci -m "empty file" + +# json artifact (checkin) +fossil_json [concat artifact tip] +test_json_envelope_ok json-artifact-checkin-env +test json-artifact-checkin {[dict get $JR payload type] eq "checkin"} +test_json_payload json-artifact \ +[concat type uuid isLeaf timestamp user comment parents tags files] {} + +# json artifact (file) +fossil_json [concat artifact $UUID_empty] +test_json_envelope_ok json-artifact-file-env +test json-artifact-file {[dict get $JR payload type] eq "file"} +test_json_payload json-artifact [concat type uuid size checkins] {} + +# json artifact (wiki) +fossil wiki create Empty <<"-=BLANK=-" +fossil_json wiki get Empty +test json-wiki-get {[dict get $JR payload name] eq "Empty"} +set uuid [dict get $JR payload uuid] +fossil_json artifact $uuid +test_json_envelope_ok json-artifact-wiki-env +test json-artifact-wiki {[dict get $JR payload type] eq "wiki"} +test_json_payload json-artifact-wiki [list type uuid artifact] {} +set artifact [dict get $JR payload artifact] +test_dict_keys json-artifact-wiki-artifact $artifact \ + [list name uuid user timestamp size] {} +# name, uuid, parent?, user, timestamp, size?, content? 
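(Every endpoint section that follows repeats the same shape using the helpers defined above: run a command with fossil_json, validate the envelope with test_json_envelope_ok, validate the payload keys with test_json_payload, then make value-level assertions directly against the $JR dict. A condensed, illustrative restatement of that shape is shown below; the json-example-* test names are placeholders and do not occur elsewhere in this file.)

    # Pattern sketch: exercise an endpoint, then check envelope, keys, values.
    fossil_json HAI                                    ;# any endpoint works here
    test_json_envelope_ok json-example-env             ;# envelope has no resultCode
    test_json_payload json-example {jsonApiVersion} {} ;# required / forbidden keys
    test json-example-api {[dict get $JR payload jsonApiVersion] >= 20120713}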
+ + +#### AUTHENTICATION +fossil_json anonymousPassword +test_json_envelope_ok json-anonymousPassword-env +test_json_payload json-anonymousPassword {seed password} {} +set seed [dict get $JR payload seed] +set pass [dict get $JR payload password] + +write_file anon-1 [subst { +{ + "command":"login", + "payload":{ + "name":"anonymous", + "anonymousSeed":$seed, + "password":"$pass" + } +} +}] +fossil_json --json-input anon-1 +test_json_envelope_ok json-login-a-env +test_json_payload json-login-a {authToken name capabilities loginCookieName} {} +set AuthAnon [dict get $JR payload] +proc test_hascaps {testname need caps} { + foreach n [split $need {}] { + test $testname-$n {[string first $n $caps] >= 0} + } +} +test_hascaps json-login-c "hmnc" [dict get $AuthAnon capabilities] + +fossil user new U1 User-1 Uone +fossil user capabilities U1 s +write_file u1 { +{ + "command":"login", + "payload":{ + "name":"U1", + "password":"Uone" + } +} +} +fossil_json --json-input u1 +test_json_envelope_ok json-login-u1-env +test_json_payload json-login-u1 {authToken name capabilities loginCookieName} {} +set AuthU1 [dict get $JR payload] +test_hascaps json-login-c "s" [dict get $AuthU1 capabilities] + +set U1Cookie [dict get $AuthU1 loginCookieName]=[regsub -all {[/]} [dict get $AuthU1 authToken] {%2F} ] +set AnonCookie [dict get $AuthAnon loginCookieName]=[regsub -all {[/]} [dict get $AuthAnon authToken] {%2F} ] + +# json cap +# The CLI user has all rights, and no auth token affects that. This +# is consistent with the rest of the fossil CLI, and with the +# pragmatic argument that using the CLI implies physical access to +# the repo file itself, which can be taunted with many tools +# including raw SQLite which will also ignore authentication. +write_file anon-2 [subst { + {"command":"cap", + "authToken":"[dict get $AuthAnon authToken]" + } +}] +fossil_json --json-input anon-2 +test_json_envelope_ok json-cap-env +test json-cap-CLI {[dict get $JR payload permissionFlags setup]} + +# json cap via POST with authToken in request envelope +set anon2 [read_file anon-2] +fossil_post_json "/json/cap" $anon2 +test json-cap-POSTenv-env-0 {[string length $JR] > 0} +test_json_envelope_ok json-cap-POSTenv-env +test json-cap-POSTenv-name {[dict get $JR payload name] eq "anonymous"} knownBug +test json-cap-POSTenv-notsetup {![dict get $JR payload permissionFlags setup]} + + +# json cap via GET with authToken in Cookie header +fossil_post_json "/json/cap" {} $AnonCookie +test json-cap-GETcookie-env-0 {[string length $JR] > 0} +test_json_envelope_ok json-cap-GETcookie-env +test json-cap-GETcookie-name {[dict get $JR payload name] eq "anonymous"} +test json-cap-GETcookie-notsetup {![dict get $JR payload permissionFlags setup]} + + +# json cap via GET with authToken in a parameter +fossil_post_json "/json/cap?authToken=[dict get $AuthAnon authToken]" {} +test json-cap-GETcookie-env-0 {[string length $JR] > 0} +test_json_envelope_ok json-cap-GETcookie-env +test json-cap-GETcookie-name {[dict get $JR payload name] eq "anonymous"} +test json-cap-GETcookie-notsetup {![dict get $JR payload permissionFlags setup]} + + +# whoami +# via CLI with no auth token supplied +fossil_json whoami +test_json_envelope_ok json-whoami-cli-env +test_json_payload json-whoami-cli {name capabilities} {} +test json-whoami-cli-name {[dict get $JR payload name] eq "nobody"} +test_hascaps json-whoami-cli-cap "gjorz" [dict get $JR payload capabilities] + +#### BRANCHES +# json branch list +fossil_json branch list +test_json_envelope_ok 
json-branch-list-env +test_json_payload json-branch-list {range current branches} {} +test json-branch-list-cur {[dict get $JR payload current] eq "trunk"} +test json-branch-list-cnt {[llength [dict get $JR payload branches]] == 1} +test json-branch-list-val {[dict get $JR payload branches] eq "trunk"} + +# json branch create +fossil_json branch create alpha --basis trunk +test_json_envelope_ok json-branch-create-env +test_json_payload json-branch-create {name basis rid uuid isPrivate} {} + + +#### CONFIG +# json config get AREA +# AREAs are skin ticket project all skin-backup +foreach a [list skin ticket project all skin-backup] { + fossil_json config get $a + test_json_envelope_ok json-config-$a-env + # payload depends on specific area and may be completely empty +} + +#### DIFFS +# json diff v1 v2 + +write_file fish { +ABCD goldfish +} +fossil add fish +fossil ci -m "goldfish" +fossil_json finfo fish +set fishHist [dict get $JR payload checkins] +set fishV1 [dict get [lindex $fishHist 0] uuid] + +write_file fish { +ABCD goldfish +LMNO goldfish +} +fossil ci -m "goldfish" +fossil_json finfo fish +set fishHist [dict get $JR payload checkins] +set fishV2 [dict get [lindex $fishHist 0] uuid] + +test fossil-diff-setup {$fishV1 ne $fishV2} +fossil_json diff $fishV1 $fishV2 +test_json_envelope_ok json-diff-env +test_json_payload json-diff {from to diff} {} +test json-diff-v1 {[dict get $JR payload from] eq $fishV1} +test json-diff-v2 {[dict get $JR payload to] eq $fishV2} +set diff [dict get $JR payload diff] +test json-diff-diff {[string first "+LMNO goldfish" $diff] >= 0} +protOut [dict get $JR payload diff] + + +#### DIRECTORY LISTING +# json dir DIRNAME +fossil_json dir +test_json_envelope_ok json-dir-env +test_json_payload json-dir {name entries} {} + +#### FILE INFO +# json finfo FILENAME +fossil_json finfo empty +test_json_envelope_ok json-finfo-env +test_json_payload json-finfo {name checkins} {} + +#### QUERY +# json query SQLCODE +fossil_json query {"SELECT * FROM reportfmt"} +test_json_envelope_ok json-query-env +test_json_payload json-query {columns rows} {} + +#### STATS +# json stat +fossil_json stat +test_json_envelope_ok json-stat-env +test_json_payload json-stat {repositorySize ageDays ageYears projectCode compiler sqlite} \ +{blobCount deltaCount uncompressedArtifactSize averageArtifactSize maxArtifactSize \ +compressionRatio checkinCount fileCount wikiPageCount ticketCount} + +fossil_json stat -f +test_json_envelope_ok json-stat-env +test_json_payload json-stat {repositorySize \ +blobCount deltaCount uncompressedArtifactSize averageArtifactSize maxArtifactSize \ +compressionRatio checkinCount fileCount wikiPageCount ticketCount \ +ageDays ageYears projectCode compiler sqlite} {} + + +#### STATUS +# NOTE: Local checkout required +# json status +fossil_json status +test_json_envelope_ok json-status-env +test_json_payload json-status {repository localRoot checkout files errorCount} {} + +#### TAGS + +# json tag add NAME CHECKIN VALUE +fossil_json tag add blue trunk green +test_json_envelope_ok json-tag-add-env +test_json_payload json-tag-add {name value propagate raw appliedTo} {} + + +# json tag cancel NAME CHECKIN +fossil_json tag add cancel alpha +test_json_envelope_ok json-tag-cancel-env +# DOCBUG? Doc says no payload. 
+test_json_payload json-tag-cancel {name value propagate raw appliedTo} {} + +# json tag find NAME +fossil_json tag find alpha +test_json_envelope_ok json-tag-find-env +test_json_payload json-tag-find {name raw type limit artifacts} {} +test json-tag-find-count {[llength [dict get $JR payload artifacts]] >= 1} + +# json tag list CHECKIN +fossil_json tag list +test_json_envelope_ok json-tag-list-env +test_json_payload json-tag-list {raw includeTickets tags} {} +test json-tag-list-count {[llength [dict get $JR payload tags]] >= 2} + + +#### TICKETS +# API Docs say not yet defined, so it isn't quite fair to mark this +# category as TODO for the test cases... + +#### TICKET REPORTS + +# json report get NUMBER +fossil_json report get 1 +test_json_envelope_ok json-report-get-env +test_json_payload json-report-get {report owner title timestamp columns sqlCode} {} + +# json report list +fossil_json report list +test_json_envelope_ok json-report-list-env +#test_json_payload json-report-list {raw includeTickets tags} {} +test json-report-list-count {[llength [dict get $JR payload]] >= 1} + + +# json report run NUMBER +fossil_json report run 1 +test_json_envelope_ok json-report-run-1-env +test_json_payload json-report-list {report title sqlcode columnNames tickets} {} +test json-report-list-count {[llength [dict get $JR payload columnNames]] >= 7} +test json-report-list-count {[llength [dict get $JR payload tickets]] >= 0} + + +#### TIMELINE + +# json timeline checkin +fossil_json timeline checkin +test_json_envelope_ok json-timeline-checkin-env +test_json_payload json-timeline-checkin {limit timeline} {} +set i 0 +foreach t [dict get $JR payload timeline] { + # parents appears only for entries that have a parent + # files appears only if requested by the --files parameter + test_dict_keys json-timeline-checkin-$i $t {type uuid timestamp comment user isLeaf tags} {} + incr i +} + +# json timeline ci +# removed from documentation +#fossil_json timeline ci +#test json-timeline-ci {[dict get $JR resultCode] ne "FOSSIL-1102"} knownBug +#test_json_payload json-timeline-ci {limit timeline} {} + +# json timeline ticket +fossil_json timeline ticket +test_json_envelope_ok json-timeline-ticket-env +test_json_payload json-timeline-ticket {limit timeline} {} + +# json timeline wiki +fossil_json timeline wiki +test_json_envelope_ok json-timeline-wiki-env +test_json_payload json-timeline-wiki {limit timeline} {} + + +#### USER MANAGEMENT + +# json user get +foreach u [list nobody anonymous reader developer U1] { + fossil_json user get $u + test_json_envelope_ok json-user-get-$u-env + test_json_payload json-user-get-$u {uid name capabilities info timestamp} {} +} + +# json user list +fossil_json user list +test_json_envelope_ok json-user-list-env +set i 0 +foreach u [dict get $JR payload] { + test_dict_keys json-user-list-$i $u {uid name capabilities info timestamp} {} + incr i +} + +# json user save +fossil_json user save --uid -1 --name U2 --password Utwo +test_json_envelope_ok json-user-save-env +test_json_payload json-user-save {uid name capabilities info timestamp} {} + + +# DOCBUG? Doc says payload is "same as /json/user/get" but actual +# result was an array of one user similar to /json/user/list. 
+#set i 0 +#foreach u [dict get $JR payload] { +# test_dict_keys json-user-save-$i $u {uid name capabilities info timestamp} {} +# incr i +#} +#test json-user-save-count {$i == 1} + + + +#### WIKI + +# wiki list +fossil_json wiki list +test_json_envelope_ok json-wiki-list-env +set pages [dict get $JR payload] +test json-wiki-1 {[llength $pages] == 1} +test json-wiki-2 {[lindex $pages 0] eq "Empty"} +fossil_json wiki list --verbose +set pages [dict get $JR payload] +test json-wiki-verbose-1 {[llength $pages] == 1} +test_dict_keys json-wiki-verbose-pages [lindex $pages 0] [list name uuid user timestamp size] {} + +# wiki get +fossil_json wiki get Empty +test_json_envelope_ok json-wiki-get-env +# this page has only one version, so no parent should be listed +test_json_payload json-wiki-get [list name uuid user timestamp size content] [list parent] + + +# wiki create +# requires an authToken? Not from CLI. + +write_file req.json { + { + "command":"wiki/create", + "payload":{ + "name":"Page2", + "content":"Lorem ipsum dolor sic amet." + } + } +} +fossil_json --json-input req.json +test_json_envelope_ok json-wiki-create-env +fossil_json wiki get Page2 +test_json_envelope_ok json-wiki-create-get-env +test_json_payload json-wiki-save-get [list name uuid user timestamp size content] {parent} +set uuid1 [dict get $JR payload uuid] + +# wiki save + +write_file req2.json { + { + "command":"wiki/save", + "payload":{ + "name":"Page2", + "content":"Lorem ipsum dolor sic amet.\nconsectetur adipisicing elit." + } + } +} +fossil_json --json-input req2.json +test_json_envelope_ok json-wiki-save-env +fossil_json wiki get Page2 +test_json_envelope_ok json-wiki-save-get-env +test_json_payload json-wiki-save-get [list name uuid user timestamp size parent content] {} +set uuid2 [dict get $JR payload uuid] +test json-wiki-save-parent {[dict get $JR payload parent] eq $uuid1} + +# wiki diff + +fossil_json wiki diff $uuid1 $uuid2 +test_json_envelope_ok json-wiki-diff-env +test_json_payload json-wiki-diff [list v1 v2 diff] {} +test json-wiki-diff-v1 {[dict get $JR payload v1] eq $uuid1} +test json-wiki-diff-v1 {[dict get $JR payload v2] eq $uuid2} +set diff [dict get $JR payload diff] +test json-wiki-diff-diff {[string first "+consectetur adipisicing elit" $diff] >= 0} +#puts [dict get $JR payload diff] + +# wiki preview +# +# takes a string in fossil wiki markup and return an HTML fragment. +# This command does not make use of the actual wiki content (much?) +# at all. +write_file req3.json { + { + "command":"wiki/preview", + "payload":"Lorem ipsum dolor sic amet.\nconsectetur adipisicing elit." + } +} +fossil_json --json-input req3.json +test_json_envelope_ok json-wiki-preview-env +set pv [dict get $JR payload] +test json-wiki-preview-out-1 {[string first "

        Lorem ipsum" $pv] == 0} +test json-wiki-preview-out-2 {[string last "

        " $pv] == 0} + +#### UNAVOIDABLE MISC + +# json g +fossil_json g +test_json_envelope_ok json-g-env +#puts [llength [dict keys [dict get $JR payload]]] +test json-g-g {[llength [dict keys [dict get $JR payload]]] >= 60};# 64 on my PC + +# json rebuild +fossil_json rebuild +test_json_envelope json-rebuild-env [concat fossil timestamp command procTimeUs \ + procTimeMs] [concat payload resultCode resultText] + +# json resultCodes +fossil_json resultCodes +test_json_envelope_ok json-resultCodes-env +set codes [dict get $JR payload] +test json-resultCodes-codes-1 {[llength $codes] >= 35} ;# count as of API 20120713 +# foreach c $codes { +# puts [dict values $c] +# } +foreach r $codes { + protOut "# [dict get $r resultCode] [dict get $r cSymbol]\n# [dict get $r description]" +} + + + +#### From the API Docs + +# Reminder to self: in March 2012 i saw a corner-case which returns +# HTML output. To reproduce: chmod 444 REPO, then submit a request +# which writes something (timeline creates a temp table). The "repo +# is not writable" error comes back as HTML. i don't know if the +# error happens before we have made the determination that the app is +# in JSON mode or if the error handling is incorrectly not +# recognizing JSON mode. +# +#repo_init x.fossil +#catch {exec chmod 444 .rep.fossil}; # Unix. What about Win? +fossil_http_json /json/timeline/checkin $U1Cookie +test json-ROrepo-1-1 {$CODE == 0} +test json-ROrepo-1-2 {[regexp {\}\s*$} $RESULT]} +test json-ROrepo-1-3 {![regexp {SQLITE_[A-Z]+:} $RESULT]} +test_json_envelope_ok json-http-timeline1 +protOut "chmod 444 repo" +catch {exec chmod 444 .rep.fossil}; # Unix +catch {exec attrib +r .rep.fossil}; # Windows +fossil_http_json /json/timeline/checkin $U1Cookie -expectError +test json-ROrepo-2-1 {$CODE != 0} +test json-ROrepo-2-2 {[regexp {\}\s*$} $RESULT]} knownBug +test json-ROrepo-2-3 {![regexp {SQLITE_[A-Z]+:} $RESULT]} knownBug +#test_json_envelope_ok json-http-timeline2 +catch {exec attrib -r .rep.fossil}; # Windows +catch {exec chmod 666 .rep.fossil}; # Unix + + +#### Result Codes +# Test cases designed to stimulate each (documented) error code. + +# FOSSIL-0000 +# Not returned by any command. We generally verify that in the +# test_json_envelope_ok command by verifying that the resultCode +# field is not present. Should any JSON endpoint begin to use the +# range reserved for non-fatal warnings, those tests will fail. +# +# Notice that code is not included in the list returned from +# /json/resultCodes. 
+ + +# FOSSIL-1000 FSL_JSON_E_GENERIC +# Generic error + +# FOSSIL-1101 FSL_JSON_E_INVALID_REQUEST +# Invalid request +write_file e1101.json { + ["command","nope"] +} +fossil_json --json-input e1101.json -expectError +test json-RC-1101-array-CLI-exit {$CODE != 0} +test_json_envelope json-RC-1101-array-env {fossil timestamp command procTimeUs \ +procTimeMs resultCode resultText} {payload} +test json-RC-1101-array-code {[dict get $JR resultCode] eq "FOSSIL-1101"} + +write_file e1101.json { + "Not really a command but more of a suggestion" +} +fossil_json --json-input e1101.json -expectError +test json-RC-1101-string-CLI-exit {$CODE != 0} +test_json_envelope json-RC-1101-string-env {fossil timestamp command procTimeUs \ +procTimeMs resultCode resultText} {payload} +test json-RC-1101-string-code {[dict get $JR resultCode] eq "FOSSIL-1101"} + + + + +# FOSSIL-1102 FSL_JSON_E_UNKNOWN_COMMAND +# Unknown command or subcommand +fossil_json NoSuchEndpoint -expectError +test json-RC-1102-CLI-exit {$CODE != 0} +test_json_envelope json-RC-1102-env {fossil timestamp command procTimeUs \ +procTimeMs resultCode resultText} {payload} +test json-RC-1102-code {[dict get $JR resultCode] eq "FOSSIL-1102"} + +write_file e1102.json { + { + "command":"no/such/endpoint" + } +} +fossil_json --json-input e1102.json -expectError +test json-env-RC-1102-CLI-exit {$CODE != 0} +test_json_envelope json-env-RC-1102-env {fossil timestamp command procTimeUs \ +procTimeMs resultCode resultText} {payload} +test json-env-RC-1102-code {[dict get $JR resultCode] eq "FOSSIL-1102"} + + +# FOSSIL-1103 FSL_JSON_E_UNKNOWN +# Unknown error + +write_file bad.sql { +CREATE TABLE spam(a integer, b text); +} +exec $::fossilexe sqlite3 --no-repository bad.fossil 1000000} { + set x [string range $x 0 1000000] + } + set k 0 + while {[regexp {<[aA] .*?href="(/[a-z].*?)".*?>(.*)$} $x all url tail]} { + # if {$npending>2*($limit - $i)} break + incr k + if {$k>100} break + set u2 [string map {< < > > " \" & &} $url] + if {![info exists seen($u2)]} { + set next($u2) 1 + set seen($u2) 1 + } + set x $tail + } +} ADDED test/markdown-test1.md Index: test/markdown-test1.md ================================================================== --- test/markdown-test1.md +++ test/markdown-test1.md @@ -0,0 +1,25 @@ + +Markdown Formatter Test Document +================================ + +This document is designed to test the markdown formatter. + + * A bullet item. + * A subitem + * Second bullet + +More text + + 1. Enumeration + 1.1. Subitem 1 + 1.2. Subitem 2 + 2. Second enumeration. + +Another paragraph. + + + +Other Features +-------------- + +Text can show *emphasis* or _emphasis_ or **strong emphassis**. Index: test/merge1.test ================================================================== --- test/merge1.test +++ test/merge1.test @@ -1,21 +1,15 @@ # # Copyright (c) 2006 D. Richard Hipp # # This program is free software; you can redistribute it and/or -# modify it under the terms of the GNU General Public -# License version 2 as published by the Free Software Foundation. +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# General Public License for more details. 
-# -# You should have received a copy of the GNU General Public -# License along with this library; if not, write to the -# Free Software Foundation, Inc., 59 Temple Place - Suite 330, -# Boston, MA 02111-1307, USA. +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # @@ -50,13 +44,13 @@ 222 - The second line program line in code - 2222 333 - This is a test OF THE merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } -fossil test-3 t1 t3 t2 a32 +fossil 3-way-merge t1 t3 t2 a32 test merge1-1.1 {[same_file t23 a32]} -fossil test-3 t1 t2 t3 a23 +fossil 3-way-merge t1 t2 t3 a23 test merge1-1.2 {[same_file t23 a23]} write_file_indented t1 { 111 - This is line one of the demo program - 1111 222 - The second line program line in code - 2222 @@ -77,34 +71,38 @@ 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t23 { - >>>>>>> BEGIN MERGE CONFLICT + <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< 111 - This is line ONE of the demo program - 1111 - ============================ + ======= COMMON ANCESTOR content follows ============================ + 111 - This is line one of the demo program - 1111 + ======= MERGED IN content follows ================================== 111 - This is line one OF the demo program - 1111 - <<<<<<< END MERGE CONFLICT + >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t32 { - >>>>>>> BEGIN MERGE CONFLICT + <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< 111 - This is line one OF the demo program - 1111 - ============================ + ======= COMMON ANCESTOR content follows ============================ + 111 - This is line one of the demo program - 1111 + ======= MERGED IN content follows ================================== 111 - This is line ONE of the demo program - 1111 - <<<<<<< END MERGE CONFLICT + >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } -fossil test-3 t1 t3 t2 a32 +fossil 3-way-merge t1 t3 t2 a32 test merge1-2.1 {[same_file t32 a32]} -fossil test-3 t1 t2 t3 a23 +fossil 3-way-merge t1 t2 t3 a23 test merge1-2.2 {[same_file t23 a23]} write_file_indented t1 { 111 - This is line one of the demo program - 1111 222 - The second line program line in code - 2222 @@ -131,13 +129,13 @@ 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } -fossil test-3 t1 t3 t2 a32 +fossil 3-way-merge t1 t3 t2 a32 test merge1-3.1 {[same_file t23 a32]} -fossil test-3 t1 t2 t3 a23 +fossil 3-way-merge t1 t2 t3 a23 test merge1-3.2 {[same_file t23 a23]} write_file_indented t1 { 111 - This is line one of the demo program - 1111 222 - The second line program line in code - 2222 @@ -158,34 +156,38 @@ 333 - This is a 
test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t32 { - >>>>>>> BEGIN MERGE CONFLICT - ============================ + <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< + ======= COMMON ANCESTOR content follows ============================ + 111 - This is line one of the demo program - 1111 + ======= MERGED IN content follows ================================== 000 - Zero lines added to the beginning of - 0000 111 - This is line one of the demo program - 1111 - <<<<<<< END MERGE CONFLICT + >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } write_file_indented t23 { - >>>>>>> BEGIN MERGE CONFLICT + <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< 000 - Zero lines added to the beginning of - 0000 111 - This is line one of the demo program - 1111 - ============================ - <<<<<<< END MERGE CONFLICT + ======= COMMON ANCESTOR content follows ============================ + 111 - This is line one of the demo program - 1111 + ======= MERGED IN content follows ================================== + >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 222 - The second line program line in code - 2222 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 } -fossil test-3 t1 t3 t2 a32 +fossil 3-way-merge t1 t3 t2 a32 test merge1-4.1 {[same_file t32 a32]} -fossil test-3 t1 t2 t3 a23 +fossil 3-way-merge t1 t2 t3 a23 test merge1-4.2 {[same_file t23 a23]} write_file_indented t1 { 111 - This is line one of the demo program - 1111 222 - The second line program line in code - 2222 @@ -212,13 +214,13 @@ 333 - This is a test of the merging algohm - 3333 444 - If all goes well, we will be pleased - 4444 555 - we think it well and other stuff too - 5555 666 - Extra line at the end of the file wi - 6666 } -fossil test-3 t1 t3 t2 a32 +fossil 3-way-merge t1 t3 t2 a32 test merge1-5.1 {[same_file t32 a32]} -fossil test-3 t1 t2 t3 a23 +fossil 3-way-merge t1 t2 t3 a23 test merge1-5.2 {[same_file t32 a23]} write_file_indented t1 { 111 - This is line one of the demo program - 1111 222 - The second line program line in code - 2222 @@ -241,13 +243,13 @@ write_file_indented t32 { 111 - This is line one of the demo program - 1111 333 - This is a test of the merging algohm - 3333 555 - we think it well and other stuff too - 5555 } -fossil test-3 t1 t3 t2 a32 +fossil 3-way-merge t1 t3 t2 a32 test merge1-6.1 {[same_file t32 a32]} -fossil test-3 t1 t2 t3 a23 +fossil 3-way-merge t1 t2 t3 a23 test merge1-6.2 {[same_file t32 a23]} write_file_indented t1 { abcd efgh @@ -293,35 +295,44 @@ STUV XYZ. 
} write_file_indented t23 { abcd - >>>>>>> BEGIN MERGE CONFLICT + <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< efgh 2 ijkl 2 mnop 2 qrst uvwx yzAB 2 CDEF 2 GHIJ 2 - ============================ + ======= COMMON ANCESTOR content follows ============================ + efgh + ijkl + mnop + qrst + uvwx + yzAB + CDEF + GHIJ + ======= MERGED IN content follows ================================== efgh ijkl mnop 3 qrst 3 uvwx 3 yzAB 3 CDEF GHIJ - <<<<<<< END MERGE CONFLICT + >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> KLMN OPQR STUV XYZ. } -fossil test-3 t1 t2 t3 a23 +fossil 3-way-merge t1 t2 t3 a23 test merge1-7.1 {[same_file t23 a23]} write_file_indented t2 { abcd efgh 2 @@ -352,29 +363,40 @@ STUV XYZ. } write_file_indented t23 { abcd + <<<<<<< BEGIN MERGE CONFLICT: local copy shown first <<<<<<<<<<<<<<< efgh 2 ijkl 2 - >>>>>>> BEGIN MERGE CONFLICT - mnop + mnop qrst uvwx yzAB 2 CDEF 2 GHIJ 2 - ============================ + ======= COMMON ANCESTOR content follows ============================ + efgh + ijkl + mnop + qrst + uvwx + yzAB + CDEF + GHIJ + ======= MERGED IN content follows ================================== + efgh + ijkl mnop 3 qrst 3 uvwx 3 yzAB 3 CDEF GHIJ - <<<<<<< END MERGE CONFLICT + >>>>>>> END MERGE CONFLICT >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> KLMN OPQR STUV XYZ. } -fossil test-3 t1 t2 t3 a23 +fossil 3-way-merge t1 t2 t3 a23 test merge1-7.2 {[same_file t23 a23]} Index: test/merge2.test ================================================================== --- test/merge2.test +++ test/merge2.test @@ -1,21 +1,15 @@ # # Copyright (c) 2006 D. Richard Hipp # # This program is free software; you can redistribute it and/or -# modify it under the terms of the GNU General Public -# License version 2 as published by the Free Software Foundation. +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# General Public License for more details. -# -# You should have received a copy of the GNU General Public -# License along with this library; if not, write to the -# Free Software Foundation, Inc., 59 Temple Place - Suite 330, -# Boston, MA 02111-1307, USA. +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # @@ -24,11 +18,13 @@ # Tests of the delta mechanism. 
# set filelist [glob $testdir/*] foreach f $filelist { + if {[file isdir $f]} continue set base [file root [file tail $f]] + if {[string match "utf16*" $base]} continue set f1 [read_file $f] write_file t1 $f1 for {set i 0} {$i<100} {incr i} { expr {srand($i*2)} write_file t2 [set f2 [random_changes $f1 2 4 0 0.1]] @@ -36,11 +32,11 @@ write_file t3 [set f3 [random_changes $f1 2 4 2 0.1]] expr {srand($i*2+1)} write_file t23 [random_changes $f2 2 4 2 0.1] expr {srand($i*2)} write_file t32 [random_changes $f3 2 4 0 0.1] - fossil test-3-way-merge t1 t2 t3 a23 + fossil 3-way-merge t1 t2 t3 a23 test merge-$base-$i-23 {[same_file a23 t23]} - fossil test-3-way-merge t1 t3 t2 a32 + fossil 3-way-merge t1 t3 t2 a32 test merge-$base-$i-32 {[same_file a32 t32]} } } Index: test/merge3.test ================================================================== --- test/merge3.test +++ test/merge3.test @@ -1,21 +1,15 @@ # # Copyright (c) 2009 D. Richard Hipp # # This program is free software; you can redistribute it and/or -# modify it under the terms of the GNU General Public -# License version 2 as published by the Free Software Foundation. +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# General Public License for more details. -# -# You should have received a copy of the GNU General Public -# License along with this library; if not, write to the -# Free Software Foundation, Inc., 59 Temple Place - Suite 330, -# Boston, MA 02111-1307, USA. +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. 
# # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # @@ -26,20 +20,22 @@ proc merge-test {testid basis v1 v2 result} { write_file t1 [join [string trim $basis] \n]\n write_file t2 [join [string trim $v1] \n]\n write_file t3 [join [string trim $v2] \n]\n - fossil test-3-way-merge t1 t2 t3 t4 + fossil 3-way-merge t1 t2 t3 t4 set x [read_file t4] - regsub -all {>>>>>>> BEGIN MERGE CONFLICT} $x {>} x - regsub -all {============================} $x {=} x - regsub -all {<<<<<<< END MERGE CONFLICT} $x {<} x + regsub -all {<<<<<<< BEGIN MERGE CONFLICT: local copy shown first <+} $x \ + {MINE:} x + regsub -all {======= COMMON ANCESTOR content follows =+} $x {COM:} x + regsub -all {======= MERGED IN content follows =+} $x {YOURS:} x + regsub -all {>>>>>>> END MERGE CONFLICT >+} $x {END} x set x [split [string trim $x] \n] set result [string trim $result] if {$x!=$result} { - puts " Expected \[$result\]" - puts " Got \[$x\]" + protOut " Expected \[$result\]" + protOut " Got \[$x\]" test merge3-$testid 0 } else { test merge3-$testid 1 } } @@ -68,56 +64,56 @@ } { 1 2 3b 4b 5b 6 7 8 9 } { 1 2 3 4 5c 6 7 8 9 } { - 1 2 > 3b 4b 5b = 3 4 5c < 6 7 8 9 + 1 2 MINE: 3b 4b 5b COM: 3 4 5 YOURS: 3 4 5c END 6 7 8 9 } merge-test 4 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8 9 } { 1 2 3 4 5c 6 7 8 9 } { - 1 2 > 3b 4b 5b 6b = 3 4 5c 6 < 7 8 9 + 1 2 MINE: 3b 4b 5b 6b COM: 3 4 5 6 YOURS: 3 4 5c 6 END 7 8 9 } merge-test 5 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8 9 } { 1 2 3 4 5c 6c 7c 8 9 } { - 1 2 > 3b 4b 5b 6b 7 = 3 4 5c 6c 7c < 8 9 + 1 2 MINE: 3b 4b 5b 6b 7 COM: 3 4 5 6 7 YOURS: 3 4 5c 6c 7c END 8 9 } merge-test 6 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8b 9 } { 1 2 3 4 5c 6c 7c 8 9 } { - 1 2 > 3b 4b 5b 6b 7 = 3 4 5c 6c 7c < 8b 9 + 1 2 MINE: 3b 4b 5b 6b 7 COM: 3 4 5 6 7 YOURS: 3 4 5c 6c 7c END 8b 9 } merge-test 7 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8b 9 } { 1 2 3 4 5c 6c 7c 8c 9 } { - 1 2 > 3b 4b 5b 6b 7 8b = 3 4 5c 6c 7c 8c < 9 + 1 2 MINE: 3b 4b 5b 6b 7 8b COM: 3 4 5 6 7 8 YOURS: 3 4 5c 6c 7c 8c END 9 } merge-test 8 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5b 6b 7 8b 9b } { 1 2 3 4 5c 6c 7c 8c 9 } { - 1 2 > 3b 4b 5b 6b 7 8b 9b = 3 4 5c 6c 7c 8c 9 < + 1 2 MINE: 3b 4b 5b 6b 7 8b 9b COM: 3 4 5 6 7 8 9 YOURS: 3 4 5c 6c 7c 8c 9 END } merge-test 9 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b 4b 5 6 7 8b 9b @@ -141,11 +137,11 @@ } { 1 2 3b 4b 5 6 7 8b 9b } { 1 2 3b 4c 5 6c 7c 8 9 } { - 1 2 > 3b 4b = 3b 4c < 5 6c 7c 8b 9b + 1 2 MINE: 3b 4b COM: 3 4 YOURS: 3b 4c END 5 6c 7c 8b 9b } merge-test 12 { 1 2 3 4 5 6 7 8 9 } { 1 2 3b4b 5 6 7 8b 9b @@ -196,20 +192,20 @@ } { 1 6 7 8 9 } { 1 2 3 4 9 } { - 1 > 6 7 8 = 2 3 4 < 9 + 1 MINE: 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END 9 } merge-test 25 { 1 2 3 4 5 6 7 8 9 } { 1 7 8 9 } { 1 2 3 9 } { - 1 > 7 8 = 2 3 < 9 + 1 MINE: 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END 9 } merge-test 30 { 1 2 3 4 5 6 7 8 9 } { @@ -251,20 +247,20 @@ } { 1 2 3 4 9 } { 1 6 7 8 9 } { - 1 > 2 3 4 = 6 7 8 < 9 + 1 MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 6 7 8 END 9 } merge-test 35 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 9 } { 1 7 8 9 } { - 1 > 2 3 = 7 8 < 9 + 1 MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 7 8 END 9 } merge-test 40 { 2 3 4 5 6 7 8 } { @@ -306,20 +302,20 @@ } { 6 7 8 } { 2 3 4 } { - > 6 7 8 = 2 3 4 < + MINE: 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END } merge-test 45 { 2 3 4 5 6 7 8 } { 7 8 } { 2 3 } { - > 7 8 = 2 3 < + MINE: 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END } merge-test 50 { 2 3 4 5 6 7 8 } { @@ -360,20 +356,20 @@ } { 2 3 4 } { 6 7 8 } { - > 2 3 4 = 6 7 8 < + MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 6 7 8 END 
} merge-test 55 { 2 3 4 5 6 7 8 } { 2 3 } { 7 8 } { - > 2 3 = 7 8 < + MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 7 8 END } merge-test 60 { 1 2 3 4 5 6 7 8 9 } { @@ -415,20 +411,20 @@ } { 1 2b 3b 4b 5b 6 7 8 9 } { 1 2 3 4 9 } { - 1 > 2b 3b 4b 5b 6 7 8 = 2 3 4 < 9 + 1 MINE: 2b 3b 4b 5b 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END 9 } merge-test 65 { 1 2 3 4 5 6 7 8 9 } { 1 2b 3b 4b 5b 6b 7 8 9 } { 1 2 3 9 } { - 1 > 2b 3b 4b 5b 6b 7 8 = 2 3 < 9 + 1 MINE: 2b 3b 4b 5b 6b 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END 9 } merge-test 70 { 1 2 3 4 5 6 7 8 9 } { @@ -470,20 +466,20 @@ } { 1 2 3 4 9 } { 1 2b 3b 4b 5b 6 7 8 9 } { - 1 > 2 3 4 = 2b 3b 4b 5b 6 7 8 < 9 + 1 MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6 7 8 END 9 } merge-test 75 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 9 } { 1 2b 3b 4b 5b 6b 7 8 9 } { - 1 > 2 3 = 2b 3b 4b 5b 6b 7 8 < 9 + 1 MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6b 7 8 END 9 } merge-test 80 { 2 3 4 5 6 7 8 } { @@ -525,20 +521,20 @@ } { 2b 3b 4b 5b 6 7 8 } { 2 3 4 } { - > 2b 3b 4b 5b 6 7 8 = 2 3 4 < + MINE: 2b 3b 4b 5b 6 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 4 END } merge-test 85 { 2 3 4 5 6 7 8 } { 2b 3b 4b 5b 6b 7 8 } { 2 3 } { - > 2b 3b 4b 5b 6b 7 8 = 2 3 < + MINE: 2b 3b 4b 5b 6b 7 8 COM: 2 3 4 5 6 7 8 YOURS: 2 3 END } merge-test 90 { 2 3 4 5 6 7 8 } { @@ -580,20 +576,20 @@ } { 2 3 4 } { 2b 3b 4b 5b 6 7 8 } { - > 2 3 4 = 2b 3b 4b 5b 6 7 8 < + MINE: 2 3 4 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6 7 8 END } merge-test 95 { 2 3 4 5 6 7 8 } { 2 3 } { 2b 3b 4b 5b 6b 7 8 } { - > 2 3 = 2b 3b 4b 5b 6b 7 8 < + MINE: 2 3 COM: 2 3 4 5 6 7 8 YOURS: 2b 3b 4b 5b 6b 7 8 END } merge-test 100 { 1 2 3 4 5 6 7 8 9 } { @@ -626,16 +622,16 @@ } { 1 2 3 4 5 7 8 9b } { 1 2 3 4 5 7 8 9b a b c d e } { - 1 2 3 4 5 7 8 > 9b = 9b a b c d e < + 1 2 3 4 5 7 8 MINE: 9b COM: 9 YOURS: 9b a b c d e END } merge-test 104 { 1 2 3 4 5 6 7 8 9 } { 1 2 3 4 5 7 8 9b a b c d e } { 1 2 3 4 5 7 8 9b } { - 1 2 3 4 5 7 8 > 9b a b c d e = 9b < + 1 2 3 4 5 7 8 MINE: 9b a b c d e COM: 9 YOURS: 9b END } Index: test/merge4.test ================================================================== --- test/merge4.test +++ test/merge4.test @@ -1,21 +1,15 @@ # # Copyright (c) 2009 D. Richard Hipp # # This program is free software; you can redistribute it and/or -# modify it under the terms of the GNU General Public -# License version 2 as published by the Free Software Foundation. +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# General Public License for more details. -# -# You should have received a copy of the GNU General Public -# License along with this library; if not, write to the -# Free Software Foundation, Inc., 59 Temple Place - Suite 330, -# Boston, MA 02111-1307, USA. +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. 
# # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # @@ -26,32 +20,32 @@ proc merge-test {testid basis v1 v2 result1 result2} { write_file t1 [join [string trim $basis] \n]\n write_file t2 [join [string trim $v1] \n]\n write_file t3 [join [string trim $v2] \n]\n - fossil test-3-way-merge t1 t2 t3 t4 - fossil test-3-way-merge t1 t3 t2 t5 + fossil 3-way-merge t1 t2 t3 t4 + fossil 3-way-merge t1 t3 t2 t5 set x [read_file t4] - regsub -all {>>>>>>> BEGIN MERGE CONFLICT} $x {>} x - regsub -all {============================} $x {=} x - regsub -all {<<<<<<< END MERGE CONFLICT} $x {<} x + regsub -all {<<<<<<< BEGIN MERGE CONFLICT.*<<} $x {>} x + regsub -all {=======.*=======} $x {=} x + regsub -all {>>>>>>> END MERGE CONFLICT.*>>>>} $x {<} x set x [split [string trim $x] \n] set y [read_file t5] - regsub -all {>>>>>>> BEGIN MERGE CONFLICT} $y {>} y - regsub -all {============================} $y {=} y - regsub -all {<<<<<<< END MERGE CONFLICT} $y {<} y + regsub -all {<<<<<<< BEGIN MERGE CONFLICT.*<<} $y {>} y + regsub -all {=======.*=======} $y {=} y + regsub -all {>>>>>>> END MERGE CONFLICT.*>>>>} $y {<} y set y [split [string trim $y] \n] set result1 [string trim $result1] if {$x!=$result1} { - puts " Expected \[$result1\]" - puts " Got \[$x\]" + protOut " Expected \[$result1\]" + protOut " Got \[$x\]" test merge3-$testid 0 } else { set result2 [string trim $result2] if {$y!=$result2} { - puts " Expected \[$result2\]" - puts " Got \[$y\]" + protOut " Expected \[$result2\]" + protOut " Got \[$y\]" test merge3-$testid 0 } else { test merge3-$testid 1 } } ADDED test/merge5.test Index: test/merge5.test ================================================================== --- test/merge5.test +++ test/merge5.test @@ -0,0 +1,313 @@ +# +# Copyright (c) 2010 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# Tests of the "merge" command +# + +# Verify the results of a check-out +# +proc checkout-test {testid expected_content} { + set flist {} + foreach {status filename} [exec $::fossilexe ls -l] { + if {$status!="DELETED"} {lappend flist $filename} + } + eval fossil sha1sum [lsort $flist] + global RESULT + regsub -all {\n *} [string trim $expected_content] "\n " expected + regsub -all {\n *} [string trim $RESULT] "\n " result + if {$result!=$expected} { + protOut " Expected:\n $expected" + protOut " Got:\n $result" + test merge5-$testid 0 + } else { + test merge5-$testid 1 + } +} + +catch {exec $::fossilexe info} res +if {![regexp {use --repository} $res]} { + puts stderr "Cannot run this test within an open checkout" + return +} +# +# Fossil will write data on $HOME, running 'fossil open' here. +# We need not to clutter the $HOME of the test caller. 
+set env(HOME) [pwd] + +# Construct a test repository +# +exec $::fossilexe sqlite3 --no-repository m5.fossil <$testdir/${testfile}_repo.sql +fossil rebuild m5.fossil +fossil open m5.fossil +fossil user default drh --user drh +fossil update baseline +checkout-test 10 { + da5c8346496f3421cb58f84b6e59e9531d9d424d one.txt + ed24d19d726d173f18dbf4a9a0f8514daa3e3ca4 three.txt + 278a402316510f6ae4a77186796a6bde78c7dbc1 two.txt +} + +# Update to the tip of the trunk +# +fossil update trunk +checkout-test 20 { + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} + +# Merge in the change that adds file four.txt +# +fossil merge br1 +checkout-test 30 { + 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} + +# Undo the merge. Verify restoration of former state. +# +fossil undo +checkout-test 40 { + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} + +# Now switch to br1 then merge in the trunk. Verify that the result +# is the same as merging br1 into trunk. +# +fossil update br1 +fossil merge trunk +checkout-test 50 { + 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} +fossil undo +fossil update trunk +checkout-test 60 { + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} + +# Go back to the tip of the trunk and merge branch br1 again. This +# time do a check-in of the merge. Verify that the same content appears +# after the merge. +# +fossil update chng3 +fossil merge br1 +checkout-test 70 { + 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} +fossil commit -tag m1 -m {merge with br1} -nosign -f +checkout-test 71 { + 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} +fossil update chng3 +checkout-test 72 { + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} +fossil update m1 +checkout-test 73 { + 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} + +# Merge br2 into the trunk. br2 contains some independent change to the +# two.txt file. Verify that these are merge in correctly. 
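+# (Expected outcome: two.txt picks up the br2 edits and br2's new file
+# five.txt appears; the subsequent "fossil undo" restores the pre-merge
+# check-out.)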
+# +fossil update m1 +fossil merge br2 +checkout-test 80 { + 8f09bc55a60eb8ca06f10a3b577aafa869b31695 five.txt + 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + 68eeee8b843eaea76e33d3911f416b745d0e5e5c two.txt +} +fossil undo +checkout-test 81 { + 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} + +# Now merge trunk into br2. Verify that the same set of changes result. +# +fossil update br2 +fossil merge trunk +checkout-test 90 { + 8f09bc55a60eb8ca06f10a3b577aafa869b31695 five.txt + 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + 68eeee8b843eaea76e33d3911f416b745d0e5e5c two.txt +} +fossil undo +checkout-test 91 { + 8f09bc55a60eb8ca06f10a3b577aafa869b31695 five.txt + da5c8346496f3421cb58f84b6e59e9531d9d424d one.txt + ed24d19d726d173f18dbf4a9a0f8514daa3e3ca4 three.txt + 85286cb3bc6d9e6f2f586eb5532f6065678f75b9 two.txt +} + +# Starting from chng3, merge in br4. The one file is deleted from br4, so +# the merge should cause the one file to disappear from the checkout. +# +fossil update chng3 +checkout-test 100 { + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} +fossil merge br4 +checkout-test 101 { + 6e167b139c294bed560e2e30b352361b101e1f39 four.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} +fossil undo +checkout-test 102 { + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} + +# Do the same merge of br4 into chng3, but this time check it in as a new +# branch. +# +fossil update chng3 +fossil merge br4 +fossil commit -nosign -branch br4-b -m {merge in br4} -tag m2 +checkout-test 110 { + 6e167b139c294bed560e2e30b352361b101e1f39 four.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} + +# Branches br1 and br4 both add file four.txt. So if we merge them together, +# the version of file four.txt in the original should be preserved. 
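+# (The merge commands below pass -expectError because fossil reports an
+# error for this add/add collision.)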
+# +fossil update br1 +checkout-test 120 { + 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt + da5c8346496f3421cb58f84b6e59e9531d9d424d one.txt + ed24d19d726d173f18dbf4a9a0f8514daa3e3ca4 three.txt + 278a402316510f6ae4a77186796a6bde78c7dbc1 two.txt +} +fossil merge br4 -expectError +checkout-test 121 { + 35815cf5804e8933eab64ae34e00bbb381be72c5 four.txt + ed24d19d726d173f18dbf4a9a0f8514daa3e3ca4 three.txt + 278a402316510f6ae4a77186796a6bde78c7dbc1 two.txt +} +fossil undo +fossil update br4 +checkout-test 122 { + 6e167b139c294bed560e2e30b352361b101e1f39 four.txt + ed24d19d726d173f18dbf4a9a0f8514daa3e3ca4 three.txt + 278a402316510f6ae4a77186796a6bde78c7dbc1 two.txt +} +fossil merge br1 -expectError +checkout-test 123 { + 6e167b139c294bed560e2e30b352361b101e1f39 four.txt + ed24d19d726d173f18dbf4a9a0f8514daa3e3ca4 three.txt + 278a402316510f6ae4a77186796a6bde78c7dbc1 two.txt +} +fossil undo + +# Merge br5 (which includes a file rename) into chng3 +# +fossil update chng3 +checkout-test 130 { + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} +fossil merge br5 +checkout-test 131 { + 7eaf64a2c9141277b4c24259c7766d6a77047af7 one.txt + 98e47f99bb9fed4fdcd407f553615ca7f15a38a2 three.txt + e58c5da3e6007d0e30600ea31611813093ad180f two-rename.txt +} +fossil undo +checkout-test 132 { + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} +fossil merge br5 +checkout-test 133 { + 7eaf64a2c9141277b4c24259c7766d6a77047af7 one.txt + 98e47f99bb9fed4fdcd407f553615ca7f15a38a2 three.txt + e58c5da3e6007d0e30600ea31611813093ad180f two-rename.txt +} +fossil commit -nosign -m {merge with rename} -branch {trunk+br5} +checkout-test 134 { + 7eaf64a2c9141277b4c24259c7766d6a77047af7 one.txt + 98e47f99bb9fed4fdcd407f553615ca7f15a38a2 three.txt + e58c5da3e6007d0e30600ea31611813093ad180f two-rename.txt +} +fossil update chng3 +checkout-test 135 { + 6f525ab779ad66e24474d845c5fb7938be42d50d one.txt + 64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b three.txt + b262fee89ed8a27a23a5e09d3917e0bebe22cd24 two.txt +} +fossil update trunk+br5 +checkout-test 136 { + 7eaf64a2c9141277b4c24259c7766d6a77047af7 one.txt + 98e47f99bb9fed4fdcd407f553615ca7f15a38a2 three.txt + e58c5da3e6007d0e30600ea31611813093ad180f two-rename.txt +} + +# Merge the chng3 check-in into br5, verifying that the change to two.txt +# from chng3 are applies to two-rename.txt in br5. 
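+# (br5 renamed two.txt to two-rename.txt, so the merge must follow the
+# rename to carry the chng3 edit over.)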
+# +fossil update br5 +checkout-test 140 { + e866bb885d5184cba497cfb6a4eb281688519521 one.txt + e09593950837f76e70ca2f8ff2272ae3df0ba017 three.txt + 5ebb3c9ad50740a7382902657b84a6105c32fc7b two-rename.txt +} +fossil merge chng3 +checkout-test 141 { + 7eaf64a2c9141277b4c24259c7766d6a77047af7 one.txt + 98e47f99bb9fed4fdcd407f553615ca7f15a38a2 three.txt + e58c5da3e6007d0e30600ea31611813093ad180f two-rename.txt +} +fossil commit -nosign -m {change to two} -branch br5-2 +checkout-test 142 { + 7eaf64a2c9141277b4c24259c7766d6a77047af7 one.txt + 98e47f99bb9fed4fdcd407f553615ca7f15a38a2 three.txt + e58c5da3e6007d0e30600ea31611813093ad180f two-rename.txt +} ADDED test/merge5_repo.sql Index: test/merge5_repo.sql ================================================================== --- test/merge5_repo.sql +++ test/merge5_repo.sql @@ -0,0 +1,391 @@ +PRAGMA foreign_keys=OFF; +BEGIN TRANSACTION; +CREATE TABLE blob( + rid INTEGER PRIMARY KEY, + rcvid INTEGER, + size INTEGER, + uuid TEXT UNIQUE NOT NULL, + content BLOB, + CHECK( length(uuid)==40 AND rid>0 ) +); +INSERT INTO "blob" VALUES(1,1,160,'85d5623f065ba80b20a035dcb4f9aea045215406',X'000000A0789C1DC8BD0E82301000E0BD4F71334993FE1CB465D507300617C352AED7D0208D011C787BA3E3F75DA0D47294F81A775EDFC739EE34332DB2547105A3B492DA48AD078D7D1B7A6BC54DDC21A14E9E52F059A9C928E4E0950AC133658FC6B118A099B6586986068EED5397DFECE72AFF80463C206DB37802C69CACCB31B8EC983B424D2D2361B04645EE92F802B1932DED'); +INSERT INTO "blob" VALUES(2,2,10,'da5c8346496f3421cb58f84b6e59e9531d9d424d',X'0000000A789C73E472E272E672E172E5020008CA0182'); +INSERT INTO "blob" VALUES(3,2,10,'ed24d19d726d173f18dbf4a9a0f8514daa3e3ca4',X'0000000A789CF3E6F2E1F2E5F2E3F2E7020009F601B4'); +INSERT INTO "blob" VALUES(4,2,10,'278a402316510f6ae4a77186796a6bde78c7dbc1',X'0000000A789C73E372E7F2E0F2E4F2E202000960019B'); +INSERT INTO "blob" VALUES(5,2,325,'69cc5363f67013c09bf506e3fe9533c32d0bc1ef',X'000000AF789C05C1C10AC2300C00D07BBF621FB0439A365B3B2F0351F426CA2EB24BBAA4AC3005D7FD3FBE47645EC3B96191B91EEBAE3AD7F22D47E16DAEB96C5AEFE3B5F5030E00388DF6D1DADF1048A84397A1A3C401120283235992CF9195C1135AF2D09967B308E788392FA187B8B0042F2E893AC60E803499A9917D35EF2670C09853E0A8092893EF859CB33D6A8414351AFCF0ED72FA03D5E231F2'); +INSERT INTO "blob" VALUES(6,3,190,'3bfa21d09f8718adc3195848d57114e1fbf564fe',X'000000BE789C8DCC3B0EC2300C00D03DA7C88E2AF9A32475678E5016C49238B6400206D285DB036261E400EFED2301C2843421AE9816A02595B0C65DAF9BC52CAA89337B2E80AC20CD1364633749CCCAD4A1299AFF24A7F15D003ECB78DEA656875D2FF7FFB77088FD710EC758BA6095590C082BEAECEC4AC58CDEBEA2617801EE253190'); +INSERT INTO "blob" VALUES(7,4,10,'35815cf5804e8933eab64ae34e00bbb381be72c5',X'0000000A789C0BE00AE40AE20AE60AE102000A8C01CD'); +INSERT INTO "blob" VALUES(8,4,364,'341db367709dc7bc5046ab4fda4f7d9423118cba',X'0000016C789C35CE3B4E44310C85E1FEAE221B1864C78F24B4206A84A0413476EC888A918641B07CEE0CA23D3A9FF4DF158B78FB5CC7AFD3CDF9E7BCDD970A0807AC07C467945BE05B94EDA1FC1F0A4947994B3A70F64194E6CA96C409E0EED4D1B3D57931C78FBC9230999D5879E822AE385DFAEAEC9A327208618CE0CAB193F3FB29FF50C6BEE0885635B0D1C21EBED886C1EA821C6694348D2FE8FB7825B57563A8842A084B2DD95AC3AE6DA8A947B63E5BF8C4EDB1E8985348696903A409C3978026AD4B0F4DAA01FB31D7F65422C1AC73A4A3D5BD7CEF07016B16BBB5C9DB4B89D3FBF65A6C7873C33052034A6D94E82CAED85BA4ACED1765FC64F9'); +INSERT INTO "blob" 
VALUES(9,5,294,'9223b4cad40c0bc997b747fe4eee586559a43815',X'00000126789C8D8E310E83300C45774E91B955243B714860EE11E85275B16322A4AA0C0186DEBEA00E5DD9FEF2FE7B37E300C1A2B38803861EA8F7D40CE62295E73C194FA8E2DB18A1D31C2507A096858A3295A81D398F98B2B0918A07B67CDE769FA7B99DB92AAFE379D1BFF7B9FC8231EC2FF630AF759B5FE7DD77A3756A1EC68D0C3E69424DA20E3460804C259172F15E52F305DF884EE5'); +INSERT INTO "blob" VALUES(10,6,10,'8f09bc55a60eb8ca06f10a3b577aafa869b31695',X'0000000A789C0BE50AE30AE78AE08AE402000B2201E6'); +INSERT INTO "blob" VALUES(11,6,426,'3d5bce2e44a6b39205eb514c581aa41d86f48540',X'00000116789C458ECB4A04311444F7F98AAC075BEE23B949DA8DA01F203A832082E471C30C6A2FBA47D1BF37BA7157A7288A23DDBCCC3736B7F6BCF5D3A75E9EBFCEE6D612204C4813E21EFD0C7E266AD78F177C9829C4EC8018C52374C9EA7208182524C9529A8658432B15CD9D9554AB67E12E01902BA4D23D8872D7E4992B538331D46EEE6D23AA320E1CE4C6B1A5CCDDA973181C6BF705CCDEEECA9A977AB43B5B56FAE5EDFB7D1AD1EE064C7F90377D3B2DFADF9CD78FE575E0C1B6F5689E6C57284961887A14A89A628B340C2825E118C020A6F0E0AE7E00A7744B2A'); +INSERT INTO "blob" VALUES(12,7,10,'c6536e40b3d4755ead483c2e0fb2f16ecb751b6a',X'0000000A789C33E032E432E232E632E1020006CC012D'); +INSERT INTO "blob" VALUES(13,7,424,'b95701e0b73e5ebf452a9a7ae00360bf7fd187fd',X'000001A8789C4590414F83310886EFDFAFE879C90C9496B65E359E8DD18BF10285668B3A936F33EABFB753136FBCC003BC5C05317B3A1EF79F17A7CFD3721D22206C316E11EF315F025FC6B8DC84B7839FEBC124F74A8953E3412962D75C474DCA9E9BB74C68CD524C3691BF91A17326F6044A964ACE2E962AF5E830340E64EF5A322ACB444EBBD57FF7B8CD21D8AC44362C34B09A8E244D60D48CC944C8A94B3A431F6F3F482C55124442CE0883C5939482954B6361352FB517D38ECB6DE0D6FB3C8A061740EAD0746460A771B6409DA2C16CF4B1DC85311D4D9AA4E4DEB21924D10AAA0834AD132CF761A3AB1CFA2E6C82AE74D6C7AFD7ED0CC3668AED8F90A3BFEC0FFE9F39ADEF87E7291F82ADBBE5312895341D420637C74AB5CD6F5185E23AC41B2FDF3224768A'); +INSERT INTO "blob" VALUES(14,8,11,'6f525ab779ad66e24474d845c5fb7938be42d50d',X'0000000B789C73E4728AE072E672E172E502000C1801DA'); +INSERT INTO "blob" VALUES(15,8,319,'c19f6ae71a35cddef0d5de3e1e6109c4378c8215',X'000000E0789C0DCA314E04310C05D03EA7C80158E41F27F6646890587A844048681B274E986A4762B6A0E2ECF0EA977FC3B63EC5BED9F56B1C97E3B65F8EFD3AEE6F3FB7708E894027A413F086B292AE00F6C7ED2E7DAC4917CB94185240536C6453C5225AC5A4F9D0A5ABB78EF012A5F65E58788A12B8536DB3900C9EA316E6CEC9E93F8E195EA361641573481EA5D6E900235B422FD49435BC47FFDEC2672469A9CEC98AAEE685672ECD45B28FEA8B590BA99D9FBD3DFC010C0C3CFF'); +INSERT INTO "blob" VALUES(16,9,11,'b262fee89ed8a27a23a5e09d3917e0bebe22cd24',X'0000000B789C73E372E7F288E0F2E4F2E202000C5A01F3'); +INSERT INTO "blob" VALUES(17,9,319,'41a02f2b2277d771d85e7f1202c82d4d6675662f',X'000000D8789C0DCE3B4E03410C00D07E4EB107D820DB63CF8F668B488882068890E8BCB6478806258A142ACE4E0EF0A4C77FE96983F56DDC7E1EAEBFD7745C08100E4807C47794017590206CDFAB8F7062C7EE958A63CD139BEF93B52BCC26C8AE9A239BF265A39715CFC3B0CFA25151B3987B4C70F1C8815110BA71AECD1AA1A4D785F5AE4929B07397CADE729F6462358A36F3745AFCF2953E17C9AD30766AA5CE3DD8E27EA1BD548FD80D9C533E3F9FE6C7E33F8FCE3AED'); +INSERT INTO "blob" VALUES(18,10,187,'6c22c86ee8a603f5e18171efb6ff571b59e35cf3',X'000000BB789C8DCCAD0EC230140650DFA7A8274BFAB5B4B7ADE611862198EEFE30038261787B9843A24F724E3E068409710266E41E6A4776B33FC878A967342B4309236516510B924593420B42E363A2CA3522FF24D76D5FA803FBB2BDEF13AF8F1BFEAEDCD9CB7375176F45D8961C858C9651BED86C080189381621F701B249318A'); +INSERT INTO "blob" 
VALUES(19,11,187,'8b9f82378ea7a5f9ef8e80d31929f6dfab988acc',X'000000BB789C8DCC3D0EC2300C06D03DA7C88E22D95F623BEACC11CA8258F24B1718280BB747DD18790778670F620A8CC0BCB22C9497486EF5A75EDEC3272E84890A987533EE59864D06A165F4D4554D54317F92DB7E2CB6408E65FF3C42DB9E77FC5DB98BEFAFCD5DBD0C9E604819AC31D5A489F2906AA5C5AAA4C57D01A0C52E74'); +INSERT INTO "blob" VALUES(20,12,11,'64a8a5c7320fccfa4b2e5dfc5fd20a5381a86c5b',X'0000000B789CF3E6F2E1F2E5F28BE0F2E702000CA7020C'); +INSERT INTO "blob" VALUES(21,12,321,'5a71ecd64585155b076a4f1e4d489eb5fe3d05f4',X'00000141789C3D8F314E04310C45FB9C221758643B719C6C0BA246081A44E3C40E53ED48BB23C1F1095B50B8F8FA7A4FDF8F716C7AF9F2DBE7EDD8D76D57F787E3E7084F9100E18474427C433E239D81C273DC2FF73E96C9C4DA459A5A294E394BB69A79F0ECD252ED9EC9186C21FFD258B256E52189608E31357772B6B91823504E15B596C1FD0FFADEEF48A742D3BD36B7AA244A49D9A1596A280EDDBB130DA31C5E6246059AD489444C04ADB2CB44021A952CAF99C265D9C2EBB2226371186AA4C35704D7C252C0B06A1FE13DDA750B1F9111E6C4543CF1C4A1136606582F0FE406D424FC021EAB59DA'); +INSERT INTO "blob" VALUES(22,13,187,'a2c1640c98b3059a83df0b597d673ad073b782de',X'000000BB789C8DCC310EC2300C00C03DAFC88E22D9899DA49D794259108B93D874818176E1F7805818994FBAA38F80103006C40579C63863718B3F0CD9D5B314D43E327165646E50B290A1D2A03A6963D334808D7E92CBF65D207E96ED790B7DBD5FD3DF953BF9F158DDD96B1EB1D91B92285649B917AD5627A8066842EE05469C30D4'); +INSERT INTO "blob" VALUES(23,14,10,'6e167b139c294bed560e2e30b352361b101e1f39',X'0000000A789C33E532E332E7B2E0B2E4020007620146'); +INSERT INTO "blob" VALUES(24,14,388,'8015e80a2391d4497530b5d0f36689f359e374ba',X'00000184789C4590314FC4300C85F7FE8ACC2715D971E234C7086246E8585017277174086851DBD3C1BF27BD85CDCFF6A7F7EC075396F97B5CE749EFC7554A19D749AFE35AE7CBD23D1A0B083DDA1EF184FE8874F4DC3D997D78B7FD6C86153924A4986D74498B6750AB0489BC25C684808A956263B6F3A27A83B45857309660B960A08A4349D54914A88347574448298BDBA1EB7C436C18C48125648F5059D449083870882C9C8A862187923276CF8663CE9E982A0740CA1053F5C04A55A327CA640BB445ADDD8B69E611497373AF425407B0FB19A145C6C04CDDC91CD222533E9B83498BDBF5FAFBD5B7D21C9AE86F4256FD7C9FF4BFB32D97E9A3C9D7F6DA73F7666A96907CB351CB9A23D89CAB44675B50715CA8FB033C856E21'); +INSERT INTO "blob" VALUES(25,15,11,'e866bb885d5184cba497cfb6a4eb281688519521',X'0000000B789C73E472E272E672E1728DE002000AF201DA'); +INSERT INTO "blob" VALUES(26,15,372,'b882b17dc56af1e0297bf8a57c9d15050215c64a',X'00000174789C458E414B04310C85EFF32B7A5E1869D23669BD2A9E45D68B78499BD41175166657D47F6F7711841CF2F2DE47DE8D6B8BAC2FF67C3CAC36DD3AF4E067C019600FE97A0CD274E7867775FA3E39CB44B5E69C34418EAD4A2CDC7A25895631030D074A4218C869D9EC0F528C0A451949814387ACB54729E2FB88471509169AC433F475B820C859A2C70094C077128BC20C99B8905055E3DC586B83E9DE51692D050A9DD84368BED49E3C59E85652082DA0FA11B43E3D38EB297266D4D414C427E30E56C2E8D0A1F9D4A7BDDBD54DD6B6B89DAB5B3AEBE3CFC73C56B71B62BE0839DAFBEB6AFF97D3F6B9BE0DF9E8745BA6A7F1463B76C4AC68218D9A1AA216C2D822D4C63CFD0205E667AF'); +INSERT INTO "blob" VALUES(27,16,11,'e09593950837f76e70ca2f8ff2272ae3df0ba017',X'0000000B789CF38EE0F2E1F2E5F2E3F2E702000DB2020C'); +INSERT INTO "blob" VALUES(28,16,341,'9df70ff13b57e70ea2de63f655eb72df7b83121a',X'000000EF789C0DCD3D4E03311040E1DEA7F001089AF17AFE360D52525220040D8A846C8FAD5004A4CD8AE5F86CF99AEFD173B8CDA7D8FD6BBDDCD7EBD2FBF1725FFA77B9F5BDB79F708E09100E980E886F4833F28C80BF4FFE90C69C444B86342113C2E0D273114165312E5CBD8B36F1DA306E71B71ED7BF35BCC4AA9A2A8A37E232B04332A9430B493347028284D43897F01AD5D4B077A326640C5981477611A324FB7A84F7E8CB357C441BD51C9407541D0A669433FBA05C26622E14A6069F1B1FFF019008411F'); +INSERT INTO "blob" 
VALUES(29,17,11,'5ebb3c9ad50740a7382902657b84a6105c32fc7b',X'0000000B789C73E372E7F2E0F28CE4F2E202000C0101F4'); +INSERT INTO "blob" VALUES(30,17,322,'94bd3e6b45cc15e86cc76ecfd2207fddbb675b15',X'00000142789C3DCE314E44310C04D0FE9F221758643B711C6F0BA246081A441327B696825D69F9121C9F8010F5F8CDF836F97CDB5F3FF6CFCBE1EAE7FEEEDB5D224038201D109F908F588F85B6FB7439FBCDFEB5276FB59AB5C693B19561BDA88CB0DA8B1B35AC2B4165C245F6D3D5FF10286B56869625A4BAC0E8142D8248A87B9E01D601E507FDBFF22BD9CDF2D03E19A44097DC48812A8BB5D22B028F4C31C4B687A4330422301BCB5AF04ED36B8ECAAB436885D63212F6ED31E56985D1EB2A96A22DFA8800CC6C485A88FAF69CE6F5B4BD24556990111A87AECBEEEAB50C5F8EAB308FED1BE1CB587B'); +INSERT INTO "blob" VALUES(31,18,11,'85286cb3bc6d9e6f2f586eb5532f6065678f75b9',X'0000000B789C73E372E7F2E0F2E4F28AE402000BA301F4'); +INSERT INTO "blob" VALUES(32,18,360,'333c4d56d218735e20565944b50202f24827e3b8',X'00000168789C25CF3D4E43311004E0DEA7F00582BCDE1FAFD382A8118206D1ECDA5E250D91C213E1F83C9476349F34F398D73C6F9FDFDBED929E722D500E500F006F20472A47C4F49CE3FCB31EB6DF2D6B94EE83D9A42CD76145028AA1736B66612ADD11A4F36E2E5F77328D8722097509A40AC35943C965715F9D11669F5469EE643B5DD71DADB927D067AB32A161804E0FB26E259481A6192E1C46FFE876B94FE3AA321C7DC8EC4BA206AB2C67C61A5284A56934F69E5E324EF6B1EA223271ECB5F0DE031AAC604630558294A9A4D76C4B54753F568B8B16F4D5102946AD4EA1BAD27B9ED753FAC80338F6EBB37A31F568AD888A556519A32B78FA037C076278'); +CREATE TABLE delta( + rid INTEGER PRIMARY KEY, + srcid INTEGER NOT NULL REFERENCES blob +); +INSERT INTO "delta" VALUES(5,8); +INSERT INTO "delta" VALUES(11,32); +INSERT INTO "delta" VALUES(15,17); +INSERT INTO "delta" VALUES(17,21); +INSERT INTO "delta" VALUES(28,30); +CREATE TABLE rcvfrom( + rcvid INTEGER PRIMARY KEY, + uid INTEGER REFERENCES user, + mtime DATETIME, + nonce TEXT UNIQUE, + ipaddr TEXT +); +INSERT INTO "rcvfrom" VALUES(1,1,2455542.12469579,NULL,NULL); +INSERT INTO "rcvfrom" VALUES(2,1,2455542.12638891,NULL,NULL); +INSERT INTO "rcvfrom" VALUES(3,1,2455542.12705387,NULL,'127.0.0.1'); +INSERT INTO "rcvfrom" VALUES(4,1,2455542.12796271,NULL,NULL); +INSERT INTO "rcvfrom" VALUES(5,1,2455542.12818105,NULL,'127.0.0.1'); +INSERT INTO "rcvfrom" VALUES(6,1,2455542.12873891,NULL,NULL); +INSERT INTO "rcvfrom" VALUES(7,1,2455542.1294266,NULL,NULL); +INSERT INTO "rcvfrom" VALUES(8,1,2455542.12999842,NULL,NULL); +INSERT INTO "rcvfrom" VALUES(9,1,2455542.13015219,NULL,NULL); +INSERT INTO "rcvfrom" VALUES(10,1,2455542.13073123,NULL,'127.0.0.1'); +INSERT INTO "rcvfrom" VALUES(11,1,2455542.13090438,NULL,'127.0.0.1'); +INSERT INTO "rcvfrom" VALUES(12,1,2455542.1333612,NULL,NULL); +INSERT INTO "rcvfrom" VALUES(13,1,2455542.13353873,NULL,'127.0.0.1'); +INSERT INTO "rcvfrom" VALUES(14,1,2455542.13467894,NULL,NULL); +INSERT INTO "rcvfrom" VALUES(15,1,2455542.13571911,NULL,NULL); +INSERT INTO "rcvfrom" VALUES(16,1,2455542.13623403,NULL,NULL); +INSERT INTO "rcvfrom" VALUES(17,1,2455542.13660848,NULL,NULL); +INSERT INTO "rcvfrom" VALUES(18,1,2455542.19483546,NULL,NULL); +CREATE TABLE user( + uid INTEGER PRIMARY KEY, + login TEXT, + pw TEXT, + cap TEXT, + cookie TEXT, + ipaddr TEXT, + cexpire DATETIME, + info TEXT, + photo BLOB +); +INSERT INTO "user" VALUES(1,'drh','efbbd0','s',NULL,NULL,NULL,'',NULL); +INSERT INTO "user" VALUES(2,'anonymous','42568D1800352E28','ghmncz',NULL,NULL,NULL,'Anon',NULL); +INSERT INTO "user" VALUES(3,'nobody','','jor',NULL,NULL,NULL,'Nobody',NULL); +INSERT INTO "user" VALUES(4,'developer','','dei',NULL,NULL,NULL,'Dev',NULL); +INSERT INTO "user" VALUES(5,'reader','','kptw',NULL,NULL,NULL,'Reader',NULL); 
+CREATE TABLE config( + name TEXT PRIMARY KEY NOT NULL, + value CLOB, + CHECK( typeof(name)='text' AND length(name)>=1 ) +); +INSERT INTO "config" VALUES('server-code','48d0adf1afd58dae13c5b70ff7aff208fb8b9f53'); +INSERT INTO "config" VALUES('project-code','8201587e22864f2fe6b8eb623626de6f9f747058'); +INSERT INTO "config" VALUES('autosync','1'); +INSERT INTO "config" VALUES('localauth','0'); +INSERT INTO "config" VALUES('project-name','Test Case 1'); +INSERT INTO "config" VALUES('project-description','Fossil repository for testing'); +INSERT INTO "config" VALUES('index-page','/timeline'); +INSERT INTO "config" VALUES('content-schema','1'); +INSERT INTO "config" VALUES('aux-schema','2010-11-24'); +CREATE TABLE shun(uuid UNIQUE); +CREATE TABLE private(rid INTEGER PRIMARY KEY); +CREATE TABLE reportfmt( + rn integer primary key, + owner text, + title text, + cols text, + sqlcode text +); +INSERT INTO "reportfmt" VALUES(1,NULL,'All Tickets','#ffffff Key: +#f2dcdc Active +#e8e8e8 Review +#cfe8bd Fixed +#bde5d6 Tested +#cacae5 Deferred +#c8c8c8 Closed','SELECT + CASE WHEN status IN (''Open'',''Verified'') THEN ''#f2dcdc'' + WHEN status=''Review'' THEN ''#e8e8e8'' + WHEN status=''Fixed'' THEN ''#cfe8bd'' + WHEN status=''Tested'' THEN ''#bde5d6'' + WHEN status=''Deferred'' THEN ''#cacae5'' + ELSE ''#c8c8c8'' END AS ''bgcolor'', + substr(tkt_uuid,1,10) AS ''#'', + datetime(tkt_mtime) AS ''mtime'', + type, + status, + subsystem, + title +FROM ticket'); +CREATE TABLE concealed( + hash TEXT PRIMARY KEY, + content TEXT +); +CREATE TABLE filename( + fnid INTEGER PRIMARY KEY, + name TEXT UNIQUE +); +INSERT INTO "filename" VALUES(1,'four.txt'); +INSERT INTO "filename" VALUES(2,'one.txt'); +INSERT INTO "filename" VALUES(3,'three.txt'); +INSERT INTO "filename" VALUES(4,'two.txt'); +INSERT INTO "filename" VALUES(5,'five.txt'); +INSERT INTO "filename" VALUES(6,'six.txt'); +INSERT INTO "filename" VALUES(7,'two-rename.txt'); +CREATE TABLE mlink( + mid INTEGER REFERENCES blob, + pid INTEGER REFERENCES blob, + fid INTEGER REFERENCES blob, + fnid INTEGER REFERENCES filename, + pfnid INTEGER REFERENCES filename +); +INSERT INTO "mlink" VALUES(8,0,7,1,0); +INSERT INTO "mlink" VALUES(5,0,2,2,0); +INSERT INTO "mlink" VALUES(5,0,3,3,0); +INSERT INTO "mlink" VALUES(5,0,4,4,0); +INSERT INTO "mlink" VALUES(11,0,10,5,0); +INSERT INTO "mlink" VALUES(13,0,12,6,0); +INSERT INTO "mlink" VALUES(21,3,20,3,0); +INSERT INTO "mlink" VALUES(17,4,16,4,0); +INSERT INTO "mlink" VALUES(15,2,14,2,0); +INSERT INTO "mlink" VALUES(24,0,23,1,0); +INSERT INTO "mlink" VALUES(24,2,0,2,0); +INSERT INTO "mlink" VALUES(26,2,25,2,0); +INSERT INTO "mlink" VALUES(30,4,29,7,0); +INSERT INTO "mlink" VALUES(28,3,27,3,0); +INSERT INTO "mlink" VALUES(28,4,4,7,4); +INSERT INTO "mlink" VALUES(28,4,0,4,0); +INSERT INTO "mlink" VALUES(32,4,31,4,0); +CREATE TABLE plink( + pid INTEGER REFERENCES blob, + cid INTEGER REFERENCES blob, + isprim BOOLEAN, + mtime DATETIME, + UNIQUE(pid, cid) +); +INSERT INTO "plink" VALUES(5,8,1,2455542.12795139); +INSERT INTO "plink" VALUES(1,5,1,2455542.12638889); +INSERT INTO "plink" VALUES(5,11,1,2455542.12873843); +INSERT INTO "plink" VALUES(5,13,1,2455542.1294213); +INSERT INTO "plink" VALUES(17,21,1,2455542.13335648); +INSERT INTO "plink" VALUES(15,17,1,2455542.13015046); +INSERT INTO "plink" VALUES(5,15,1,2455542.12998843); +INSERT INTO "plink" VALUES(5,24,1,2455542.13467593); +INSERT INTO "plink" VALUES(5,26,1,2455542.13571759); +INSERT INTO "plink" VALUES(28,30,1,2455542.13659722); +INSERT INTO "plink" 
VALUES(26,28,1,2455542.13622685); +INSERT INTO "plink" VALUES(11,32,1,2455542.19482639); +CREATE TABLE event( + type TEXT, + mtime DATETIME, + objid INTEGER PRIMARY KEY, + tagid INTEGER, + uid INTEGER REFERENCES user, + bgcolor TEXT, + euser TEXT, + user TEXT, + ecomment TEXT, + comment TEXT, + brief TEXT, + omtime DATETIME +); +INSERT INTO "event" VALUES('ci',2455542.1246875,1,NULL,NULL,NULL,NULL,'drh',NULL,'initial empty check-in',NULL,NULL); +INSERT INTO "event" VALUES('ci',2455542.12638889,5,NULL,NULL,NULL,NULL,'drh',NULL,'add three initial files',NULL,NULL); +INSERT INTO "event" VALUES('ci',2455542.12795139,8,NULL,NULL,NULL,NULL,'drh',NULL,'add four.txt',NULL,2455542.12795139); +INSERT INTO "event" VALUES('ci',2455542.12873843,11,NULL,NULL,NULL,NULL,'drh',NULL,'add five.txt',NULL,NULL); +INSERT INTO "event" VALUES('ci',2455542.1294213,13,NULL,NULL,NULL,NULL,'drh',NULL,'add six.txt',NULL,NULL); +INSERT INTO "event" VALUES('ci',2455542.12998843,15,NULL,NULL,NULL,NULL,'drh',NULL,'changes to one.txt',NULL,NULL); +INSERT INTO "event" VALUES('ci',2455542.13015046,17,NULL,NULL,NULL,NULL,'drh',NULL,'changes to two.txt',NULL,NULL); +INSERT INTO "event" VALUES('ci',2455542.13335648,21,NULL,NULL,NULL,NULL,'drh',NULL,'changes to three.txt',NULL,2455542.13335648); +INSERT INTO "event" VALUES('ci',2455542.13467593,24,NULL,NULL,NULL,NULL,'drh',NULL,'drop one; add new four',NULL,NULL); +INSERT INTO "event" VALUES('ci',2455542.13571759,26,NULL,NULL,NULL,NULL,'drh',NULL,'change one',NULL,NULL); +INSERT INTO "event" VALUES('ci',2455542.13622685,28,NULL,NULL,NULL,NULL,'drh',NULL,'edit three; rename two',NULL,NULL); +INSERT INTO "event" VALUES('ci',2455542.13659722,30,NULL,NULL,NULL,NULL,'drh',NULL,'edit two-rename',NULL,NULL); +INSERT INTO "event" VALUES('ci',2455542.19482639,32,NULL,NULL,NULL,NULL,'drh',NULL,'edit two',NULL,NULL); +CREATE TABLE phantom( + rid INTEGER PRIMARY KEY +); +CREATE TABLE orphan( + rid INTEGER PRIMARY KEY, + baseline INTEGER +); +CREATE TABLE unclustered( + rid INTEGER PRIMARY KEY +); +INSERT INTO "unclustered" VALUES(1); +INSERT INTO "unclustered" VALUES(2); +INSERT INTO "unclustered" VALUES(3); +INSERT INTO "unclustered" VALUES(4); +INSERT INTO "unclustered" VALUES(5); +INSERT INTO "unclustered" VALUES(6); +INSERT INTO "unclustered" VALUES(7); +INSERT INTO "unclustered" VALUES(8); +INSERT INTO "unclustered" VALUES(9); +INSERT INTO "unclustered" VALUES(10); +INSERT INTO "unclustered" VALUES(11); +INSERT INTO "unclustered" VALUES(12); +INSERT INTO "unclustered" VALUES(13); +INSERT INTO "unclustered" VALUES(14); +INSERT INTO "unclustered" VALUES(15); +INSERT INTO "unclustered" VALUES(16); +INSERT INTO "unclustered" VALUES(17); +INSERT INTO "unclustered" VALUES(18); +INSERT INTO "unclustered" VALUES(19); +INSERT INTO "unclustered" VALUES(20); +INSERT INTO "unclustered" VALUES(21); +INSERT INTO "unclustered" VALUES(22); +INSERT INTO "unclustered" VALUES(23); +INSERT INTO "unclustered" VALUES(24); +INSERT INTO "unclustered" VALUES(25); +INSERT INTO "unclustered" VALUES(26); +INSERT INTO "unclustered" VALUES(27); +INSERT INTO "unclustered" VALUES(28); +INSERT INTO "unclustered" VALUES(29); +INSERT INTO "unclustered" VALUES(30); +INSERT INTO "unclustered" VALUES(31); +INSERT INTO "unclustered" VALUES(32); +CREATE TABLE unsent( + rid INTEGER PRIMARY KEY +); +INSERT INTO "unsent" VALUES(31); +INSERT INTO "unsent" VALUES(32); +CREATE TABLE tag( + tagid INTEGER PRIMARY KEY, + tagname TEXT UNIQUE +); +INSERT INTO "tag" VALUES(1,'bgcolor'); +INSERT INTO "tag" VALUES(2,'comment'); +INSERT 
INTO "tag" VALUES(3,'user'); +INSERT INTO "tag" VALUES(4,'date'); +INSERT INTO "tag" VALUES(5,'hidden'); +INSERT INTO "tag" VALUES(6,'private'); +INSERT INTO "tag" VALUES(7,'cluster'); +INSERT INTO "tag" VALUES(8,'branch'); +INSERT INTO "tag" VALUES(9,'closed'); +INSERT INTO "tag" VALUES(10,'sym-trunk'); +INSERT INTO "tag" VALUES(11,'sym-baseline'); +INSERT INTO "tag" VALUES(12,'sym-br1'); +INSERT INTO "tag" VALUES(13,'sym-br2'); +INSERT INTO "tag" VALUES(14,'sym-br3'); +INSERT INTO "tag" VALUES(15,'sym-chng1'); +INSERT INTO "tag" VALUES(16,'sym-chng2'); +INSERT INTO "tag" VALUES(17,'sym-chng3'); +INSERT INTO "tag" VALUES(18,'sym-br4'); +INSERT INTO "tag" VALUES(19,'sym-br5'); +CREATE TABLE tagxref( + tagid INTEGER REFERENCES tag, + tagtype INTEGER, + srcid INTEGER REFERENCES blob, + origid INTEGER REFERENCES blob, + value TEXT, + mtime TIMESTAMP, + rid INTEGER REFERENCE blob, + UNIQUE(rid, tagid) +); +INSERT INTO "tagxref" VALUES(8,2,1,1,'trunk',2455542.1246875,1); +INSERT INTO "tagxref" VALUES(10,2,1,1,NULL,2455542.1246875,1); +INSERT INTO "tagxref" VALUES(4,1,6,5,'2010-12-11 15:02:00',2455542.12704861,5); +INSERT INTO "tagxref" VALUES(11,1,6,5,NULL,2455542.12704861,5); +INSERT INTO "tagxref" VALUES(8,2,0,1,'trunk',2455542.1246875,5); +INSERT INTO "tagxref" VALUES(10,2,0,1,NULL,2455542.1246875,5); +INSERT INTO "tagxref" VALUES(8,2,9,8,'br1',2455542.1281713,8); +INSERT INTO "tagxref" VALUES(12,2,9,8,NULL,2455542.1281713,8); +INSERT INTO "tagxref" VALUES(4,1,9,8,'2010-12-11 15:04:15',2455542.1281713,8); +INSERT INTO "tagxref" VALUES(10,0,9,8,NULL,2455542.1281713,8); +INSERT INTO "tagxref" VALUES(8,2,11,11,'br2',2455542.12873843,11); +INSERT INTO "tagxref" VALUES(13,2,11,11,NULL,2455542.12873843,11); +INSERT INTO "tagxref" VALUES(11,0,11,11,NULL,2455542.12873843,11); +INSERT INTO "tagxref" VALUES(10,0,11,11,NULL,2455542.12873843,11); +INSERT INTO "tagxref" VALUES(8,2,13,13,'br3',2455542.1294213,13); +INSERT INTO "tagxref" VALUES(14,2,13,13,NULL,2455542.1294213,13); +INSERT INTO "tagxref" VALUES(11,0,13,13,NULL,2455542.1294213,13); +INSERT INTO "tagxref" VALUES(10,0,13,13,NULL,2455542.1294213,13); +INSERT INTO "tagxref" VALUES(4,1,18,15,'2010-12-11 15:07:11',2455542.13072917,15); +INSERT INTO "tagxref" VALUES(15,1,18,15,NULL,2455542.13072917,15); +INSERT INTO "tagxref" VALUES(4,1,19,17,'2010-12-11 15:07:25',2455542.13090278,17); +INSERT INTO "tagxref" VALUES(16,1,19,17,NULL,2455542.13090278,17); +INSERT INTO "tagxref" VALUES(8,2,0,1,'trunk',2455542.1246875,15); +INSERT INTO "tagxref" VALUES(8,2,0,1,'trunk',2455542.1246875,17); +INSERT INTO "tagxref" VALUES(8,2,0,1,'trunk',2455542.1246875,21); +INSERT INTO "tagxref" VALUES(10,2,0,1,NULL,2455542.1246875,15); +INSERT INTO "tagxref" VALUES(10,2,0,1,NULL,2455542.1246875,17); +INSERT INTO "tagxref" VALUES(10,2,0,1,NULL,2455542.1246875,21); +INSERT INTO "tagxref" VALUES(4,1,22,21,'2010-12-11 15:12:02',2455542.13353009,21); +INSERT INTO "tagxref" VALUES(17,1,22,21,NULL,2455542.13353009,21); +INSERT INTO "tagxref" VALUES(8,2,24,24,'br4',2455542.13467593,24); +INSERT INTO "tagxref" VALUES(18,2,24,24,NULL,2455542.13467593,24); +INSERT INTO "tagxref" VALUES(11,0,24,24,NULL,2455542.13467593,24); +INSERT INTO "tagxref" VALUES(10,0,24,24,NULL,2455542.13467593,24); +INSERT INTO "tagxref" VALUES(8,2,26,26,'br5',2455542.13571759,26); +INSERT INTO "tagxref" VALUES(19,2,26,26,NULL,2455542.13571759,26); +INSERT INTO "tagxref" VALUES(11,0,26,26,NULL,2455542.13571759,26); +INSERT INTO "tagxref" VALUES(10,0,26,26,NULL,2455542.13571759,26); +INSERT INTO "tagxref" 
VALUES(8,2,0,26,'br5',2455542.13571759,28); +INSERT INTO "tagxref" VALUES(8,2,0,26,'br5',2455542.13571759,30); +INSERT INTO "tagxref" VALUES(19,2,0,26,NULL,2455542.13571759,28); +INSERT INTO "tagxref" VALUES(19,2,0,26,NULL,2455542.13571759,30); +INSERT INTO "tagxref" VALUES(8,2,0,11,'br2',2455542.12873843,32); +INSERT INTO "tagxref" VALUES(13,2,0,11,NULL,2455542.12873843,32); +CREATE TABLE backlink( + target TEXT, + srctype INT, + srcid INT, + mtime TIMESTAMP, + UNIQUE(target, srctype, srcid) +); +CREATE TABLE attachment( + attachid INTEGER PRIMARY KEY, + isLatest BOOLEAN DEFAULT 0, + mtime TIMESTAMP, + src TEXT, + target TEXT, + filename TEXT, + comment TEXT, + user TEXT +); +CREATE TABLE ticket( + -- Do not change any column that begins with tkt_ + tkt_id INTEGER PRIMARY KEY, + tkt_uuid TEXT UNIQUE, + tkt_mtime DATE, + -- Add as many field as required below this line + type TEXT, + status TEXT, + subsystem TEXT, + priority TEXT, + severity TEXT, + foundin TEXT, + private_contact TEXT, + resolution TEXT, + title TEXT, + comment TEXT +); +CREATE INDEX delta_i1 ON delta(srcid); +CREATE INDEX mlink_i1 ON mlink(mid); +CREATE INDEX mlink_i2 ON mlink(fnid); +CREATE INDEX mlink_i3 ON mlink(fid); +CREATE INDEX mlink_i4 ON mlink(pid); +CREATE INDEX plink_i2 ON plink(cid,pid); +CREATE INDEX event_i1 ON event(mtime); +CREATE INDEX orphan_baseline ON orphan(baseline); +CREATE INDEX tagxref_i1 ON tagxref(tagid, mtime); +CREATE INDEX backlink_src ON backlink(srcid, srctype); +CREATE INDEX attachment_idx1 ON attachment(target, filename, mtime); +CREATE INDEX attachment_idx2 ON attachment(src); +COMMIT; ADDED test/merge6.test Index: test/merge6.test ================================================================== --- test/merge6.test +++ test/merge6.test @@ -0,0 +1,67 @@ +# +# Copyright (c) 2014 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. 
+# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# Tests of the "merge" command +# + +#################################################################### +# TEST 1: Handle multiple merges each with one or more ADDED files # +#################################################################### + +repo_init +fossil ls + +test merge_multi-0 {[normalize_result] eq {}} + +write_file f1 "f1 line" +fossil add f1 +fossil commit -m "base file" +fossil ls + +test merge_multi-1 {[normalize_result] eq {f1}} + +fossil update trunk +write_file f2 "f2 line" +fossil add f2 +fossil commit -m "branch for file f2" -b branch_for_f2 +fossil ls + +test merge_multi-2 {[normalize_result] eq {f1 +f2}} + +fossil update trunk +write_file f3 "f3 line" +write_file f4 "f4 line" +fossil add f3 +fossil add f4 +fossil commit -m "branch for files f3 and f4" -b branch_for_f3_f4 +fossil ls + +test merge_multi-3 {[normalize_result] eq {f1 +f3 +f4}} + +fossil update trunk +fossil merge branch_for_f2 +fossil merge branch_for_f3_f4 +fossil commit -m "new trunk files f2, f3, and f4 via merge" +fossil ls + +test merge_multi-4 {[normalize_result] eq {f1 +f2 +f3 +f4}} knownBug ADDED test/merge_renames.test Index: test/merge_renames.test ================================================================== --- test/merge_renames.test +++ test/merge_renames.test @@ -0,0 +1,211 @@ +# +# Tests for merging with renames +# +# + +catch {exec $::fossilexe info} res +if {![regexp {use --repository} $res]} { + puts stderr "Cannot run this test within an open checkout" + return +} + +###################################### +# Test 1 # +# Reported: Ticket [554f44ee74e3d] # +###################################### + +repo_init + +write_file f1 "line" +fossil add f1 +fossil commit -m "c1" +fossil tag add pivot current + +write_file f1 "line2" +fossil commit -m "c2" + +write_file f1 "line3" +fossil commit -m "c3" + +write_file f1 "line4" +fossil commit -m "c4" + +write_file f1 "line5" +fossil commit -m "c4" + +write_file f1 "line6" +fossil commit -m "c4" + +fossil update pivot +fossil mv f1 f2 +file rename -force f1 f2 +fossil commit -b rename -m "c5" + +fossil merge trunk +fossil commit -m "trunk merged" + +fossil update pivot +write_file f3 "someline" +fossil add f3 +fossil commit -b branch2 -m "newbranch" + +fossil merge trunk +puts $RESULT + +set deletes 0 +foreach {status filename} $RESULT { + if {$status=="DELETE"} { + set deletes [expr $deletes + 1] + } +} + +if {$deletes!=0} { + # failed + protOut "Error, the merge should not delete any file" + test merge_renames-1 0 +} else { + test merge_renames-1 1 +} + +###################################### +# Test 2 # +# Reported: Ticket [74413366fe5067] # +###################################### + +repo_init + +write_file f1 "line" +fossil add f1 +fossil commit -m "base file" +fossil tag add pivot current + +write_file f2 "line2" +fossil add f2 +fossil commit -m "newfile" + +fossil mv f2 f2new +file rename -force f2 f2new +fossil commit -m "rename" + +fossil update pivot +write_file f1 "line3" +fossil commit -b branch -m "change" + +fossil merge trunk +fossil commit -m "trunk merged" + +fossil update trunk + +fossil merge branch +puts $RESULT + +# Not a nice way to check, but I don't know more tcl now +set deletes 0 +foreach {status filename} $RESULT { + if {$status=="DELETE"} { + set deletes [expr $deletes + 1] + } +} + +if {$deletes!=0} { + # failed + protOut "Error, the merge should not 
delete any file" + test merge_renames-2 0 +} else { + test merge_renames-2 1 +} + +###################################### +# Test 3 # +# Reported: Ticket [30b28cf351] # +###################################### + +repo_init + +write_file f1 "line" +fossil add f1 +fossil commit -m "base file" +fossil tag add pivot current + +write_file f2 "line2" +fossil add f2 +fossil commit -m "newfile" + +fossil mv f2 f2new +file rename -force f2 f2new +fossil commit -m "rename" + +fossil update pivot +write_file f1 "line3" +fossil commit -b branch -m "change" + +fossil merge trunk +fossil commit -m "trunk merged" + +fossil update trunk + +fossil merge branch +puts $RESULT + +# Not a nice way to check, but I don't know more tcl now +set deletes 0 +foreach {status filename} $RESULT { + if {$status=="DELETE"} { + set deletes [expr $deletes + 1] + } +} + +if {$deletes!=0} { + # failed + protOut "Error, the merge should not delete any file" + test merge_renames-3 0 +} else { + test merge_renames-3 1 +} + +###################################### +# Test 4 # +# Reported: Ticket [67176c3aa4] # +###################################### + +# TO BE WRITTEN. + +###################################### +# Test 5 # +# Handle Rename/Add via Merge # +###################################### + +repo_init + +write_file f1 "old f1 line" +fossil add f1 +fossil commit -m "base file" + +write_file f3 "f3 line" +fossil add f3 +fossil commit -m "branch file" -b branch_for_f3 + +fossil update trunk +fossil mv f1 f2 +file rename -force f1 f2 +write_file f1 "new f1 line" +fossil add f1 +fossil commit -m "rename and add file with old name" + +fossil update branch_for_f3 +fossil merge trunk +fossil commit -m "trunk merged, should have 3 files" + +fossil ls + +test merge_renames-5 {[normalize_result] eq {f1 +f2 +f3}} knownBug + +###################################### +# +# Tests for troubles not specifically linked with renames but that I'd like to +# write: +# [c26c63eb1b] - 'merge --backout' does not handle conflicts properly +# [953031915f] - Lack of warning when overwriting extra files +# [4df5f38f1e] - Troubles merging a file delete with a file change ADDED test/mv-rm.test Index: test/mv-rm.test ================================================================== --- test/mv-rm.test +++ test/mv-rm.test @@ -0,0 +1,391 @@ +# +# Copyright (c) 2011 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# MV / RM Commands +# + +catch {exec $::fossilexe info} res +if {![regexp {use --repository} $res]} { + puts stderr "Cannot run this test within an open checkout" + return +} + +######################################## +# Setup: Add Files and Commit # +######################################## + +set rootDir [file normalize [pwd]] + +set undoMsg "\n \"fossil undo\" is\ +available to undo changes to the\ +working checkout." 
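+# undoMsg is the hint text appended to "fossil revert" output; the revert
+# expectations below concatenate it onto the expected file list.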
+ +repo_init + +write_file f1 "f1" +write_file f2 "f2" +write_file f3 "f3" +write_file f4 "f4" +write_file f5 "f5" +write_file f6 "f6" +write_file f7 "f7" +write_file f8 "f8" + +file mkdir [file join $rootDir subdirA] +# NOTE: There are no files in subdirA. + +file mkdir [file join $rootDir subdirB] +write_file [file join $rootDir subdirB f9] "f9" + +file mkdir [file join $rootDir subdirC] +write_file [file join $rootDir subdirC f10] "f10" +write_file [file join $rootDir subdirC f11] "f11" + +fossil add f1 f2 f3 f4 f5 f6 f7 f8 subdirB/f9 subdirC/f10 subdirC/f11 +fossil commit -m "c1" + +######################################## +# Test 1: Soft Move Relative Directory # +######################################## + +file mkdir [file join $rootDir subdir1] +cd [file join $rootDir subdir1] + +fossil mv ../f1 . +test mv-soft-relative-1 {$RESULT eq "RENAME f1 subdir1/f1"} + +fossil revert +test mv-soft-relative-2 { + [normalize_result] eq "DELETE subdir1/f1\nREVERT f1${undoMsg}" +} + +cd $rootDir + +################################### +# Test 2: Soft Move Relative File # +################################### + +file mkdir [file join $rootDir subdir2] +cd [file join $rootDir subdir2] + +fossil mv ../f2 ./f2 +test mv-soft-relative-3 {$RESULT eq "RENAME f2 subdir2/f2"} + +fossil revert +test mv-soft-relative-4 { + [normalize_result] eq "DELETE subdir2/f2\nREVERT f2${undoMsg}" +} + +cd $rootDir + +######################################## +# Test 3: Hard Move Relative Directory # +######################################## + +file mkdir [file join $rootDir subdir3] +cd [file join $rootDir subdir3] + +fossil mv --hard ../f3 . +test mv-hard-relative-1 { + [normalize_result] eq "RENAME f3 subdir3/f3\nMOVED_FILE ${rootDir}/f3" +} + +fossil revert +test mv-hard-relative-2 { + [normalize_result] eq "DELETE subdir3/f3\nREVERT f3${undoMsg}" +} + +cd $rootDir + +################################### +# Test 4: Hard Move Relative File # +################################### + +file mkdir [file join $rootDir subdir4] +cd [file join $rootDir subdir4] + +fossil mv --hard ../f4 ./f4 +test mv-hard-relative-3 { + [normalize_result] eq "RENAME f4 subdir4/f4\nMOVED_FILE ${rootDir}/f4" +} + +fossil revert +test mv-hard-relative-4 { + [normalize_result] eq "DELETE subdir4/f4\nREVERT f4${undoMsg}" +} + +cd $rootDir + +######################################## +# Test 5: Soft Move Absolute Directory # +######################################## + +file mkdir [file join $rootDir subdir5] +cd [file join $rootDir subdir5] + +fossil mv [file join $rootDir f5] [file join $rootDir subdir5] +test mv-soft-absolute-1 {$RESULT eq "RENAME f5 subdir5/f5"} + +fossil revert +test mv-soft-absolute-2 { + [normalize_result] eq "DELETE subdir5/f5\nREVERT f5${undoMsg}" +} + +cd $rootDir + +################################### +# Test 6: Soft Move Absolute File # +################################### + +file mkdir [file join $rootDir subdir6] +cd [file join $rootDir subdir6] + +fossil mv [file join $rootDir f6] [file join $rootDir subdir6 f6] +test mv-soft-absolute-3 {$RESULT eq "RENAME f6 subdir6/f6"} + +fossil revert +test mv-soft-absolute-4 { + [normalize_result] eq "DELETE subdir6/f6\nREVERT f6${undoMsg}" +} + +cd $rootDir + +######################################## +# Test 7: Hard Move Absolute Directory # +######################################## + +file mkdir [file join $rootDir subdir7] +cd [file join $rootDir subdir7] + +fossil mv --hard [file join $rootDir f7] [file join $rootDir subdir7] +test mv-hard-absolute-1 { + [normalize_result] eq 
"RENAME f7 subdir7/f7\nMOVED_FILE ${rootDir}/f7" +} + +fossil revert +test mv-hard-absolute-2 { + [normalize_result] eq "DELETE subdir7/f7\nREVERT f7${undoMsg}" +} + +cd $rootDir + +################################### +# Test 8: Hard Move Absolute File # +################################### + +file mkdir [file join $rootDir subdir8] +cd [file join $rootDir subdir8] + +fossil mv --hard [file join $rootDir f8] [file join $rootDir subdir8 f8] +test mv-hard-absolute-3 { + [normalize_result] eq "RENAME f8 subdir8/f8\nMOVED_FILE ${rootDir}/f8" +} + +fossil revert +test mv-hard-absolute-4 { + [normalize_result] eq "DELETE subdir8/f8\nREVERT f8${undoMsg}" +} + +cd $rootDir + +########################################## +# Test 9: Soft Remove Relative Directory # +########################################## + +file mkdir [file join $rootDir subdir1] +cd [file join $rootDir subdir1] + +fossil rm ../subdirA +test rm-soft-relative-1 {$RESULT eq ""} + +fossil rm ../subdirB +test rm-soft-relative-2 {$RESULT eq "DELETED subdirB/f9"} + +fossil rm ../subdirC +test rm-soft-relative-3 { + [normalize_result] eq "DELETED subdirC/f10\nDELETED subdirC/f11" +} + +fossil revert +test rm-soft-relative-4 { + [normalize_result] eq \ + "REVERT subdirB/f9\nREVERT subdirC/f10\nREVERT subdirC/f11${undoMsg}" +} + +cd $rootDir + +###################################### +# Test 10: Soft Remove Relative File # +###################################### + +file mkdir [file join $rootDir subdir2] +cd [file join $rootDir subdir2] + +fossil rm ../f2 +test rm-soft-relative-5 {$RESULT eq "DELETED f2"} + +fossil revert +test rm-soft-relative-6 { + [normalize_result] eq "REVERT f2${undoMsg}" +} + +cd $rootDir + +########################################### +# Test 11: Hard Remove Relative Directory # +########################################### + +file mkdir [file join $rootDir subdir3] +cd [file join $rootDir subdir3] + +fossil rm --hard ../subdirA +test rm-hard-relative-1 { + [normalize_result] eq "" +} + +fossil rm --hard ../subdirB +test rm-hard-relative-2 { + [normalize_result] eq \ + "DELETED subdirB/f9\nDELETED_FILE ${rootDir}/subdirB/f9" +} + +fossil rm --hard ../subdirC +test rm-hard-relative-3 { + [normalize_result] eq \ + "DELETED subdirC/f10\nDELETED subdirC/f11\nDELETED_FILE\ +${rootDir}/subdirC/f10\nDELETED_FILE ${rootDir}/subdirC/f11" +} + +fossil revert +test rm-hard-relative-4 { + [normalize_result] eq \ + "REVERT subdirB/f9\nREVERT subdirC/f10\nREVERT subdirC/f11${undoMsg}" +} + +cd $rootDir + +###################################### +# Test 12: Hard Remove Relative File # +###################################### + +file mkdir [file join $rootDir subdir4] +cd [file join $rootDir subdir4] + +fossil rm --hard ../f4 +test rm-hard-relative-5 { + [normalize_result] eq "DELETED f4\nDELETED_FILE ${rootDir}/f4" +} + +fossil revert +test rm-hard-relative-6 { + [normalize_result] eq "REVERT f4${undoMsg}" +} + +cd $rootDir + +########################################### +# Test 13: Soft Remove Absolute Directory # +########################################### + +file mkdir [file join $rootDir subdir5] +cd [file join $rootDir subdir5] + +fossil rm [file join $rootDir subdirA] +test rm-soft-absolute-1 {$RESULT eq ""} + +fossil rm [file join $rootDir subdirB] +test rm-soft-absolute-2 {$RESULT eq "DELETED subdirB/f9"} + +fossil rm [file join $rootDir subdirC] +test rm-soft-absolute-3 { + [normalize_result] eq "DELETED subdirC/f10\nDELETED subdirC/f11" +} + +fossil revert +test rm-soft-absolute-4 { + [normalize_result] eq \ + "REVERT 
subdirB/f9\nREVERT subdirC/f10\nREVERT subdirC/f11${undoMsg}" +} + +cd $rootDir + +###################################### +# Test 14: Soft Remove Absolute File # +###################################### + +file mkdir [file join $rootDir subdir6] +cd [file join $rootDir subdir6] + +fossil rm [file join $rootDir f6] +test rm-soft-absolute-5 {$RESULT eq "DELETED f6"} + +fossil revert +test rm-soft-absolute-6 { + [normalize_result] eq "REVERT f6${undoMsg}" +} + +cd $rootDir + +########################################### +# Test 15: Hard Remove Absolute Directory # +########################################### + +file mkdir [file join $rootDir subdir7] +cd [file join $rootDir subdir7] + +fossil rm --hard [file join $rootDir subdirA] +test rm-hard-absolute-1 {$RESULT eq ""} + +fossil rm --hard [file join $rootDir subdirB] +test rm-hard-absolute-2 { + [normalize_result] eq \ + "DELETED subdirB/f9\nDELETED_FILE ${rootDir}/subdirB/f9" +} + +fossil rm --hard [file join $rootDir subdirC] +test rm-hard-absolute-3 { + [normalize_result] eq \ + "DELETED subdirC/f10\nDELETED subdirC/f11\nDELETED_FILE\ +${rootDir}/subdirC/f10\nDELETED_FILE ${rootDir}/subdirC/f11" +} + +fossil revert +test rm-hard-absolute-4 { + [normalize_result] eq \ + "REVERT subdirB/f9\nREVERT subdirC/f10\nREVERT subdirC/f11${undoMsg}" +} + +cd $rootDir + +###################################### +# Test 16: Hard Remove Absolute File # +###################################### + +file mkdir [file join $rootDir subdir8] +cd [file join $rootDir subdir8] + +fossil rm --hard [file join $rootDir f8] +test rm-hard-absolute-5 { + [normalize_result] eq "DELETED f8\nDELETED_FILE ${rootDir}/f8" +} + +fossil revert +test rm-hard-absolute-6 { + [normalize_result] eq "REVERT f8${undoMsg}" +} + +cd $rootDir ADDED test/release-checklist.wiki Index: test/release-checklist.wiki ================================================================== --- test/release-checklist.wiki +++ test/release-checklist.wiki @@ -0,0 +1,96 @@ +Release Checklist + +This file describes the testing procedures for Fossil prior to an +official release. + +

+  1.  From a private directory (not the source tree) run
+      "tclsh $SRC/test/tester.tcl $FOSSIL", where $FOSSIL is the
+      name of the executable under test and $SRC is the source tree.
+      Verify that there are no errors.  (An example invocation is
+      shown after this list.)
+
+  2.  Click on each of the links in the [./graph-test-1.wiki] document
+      and verify that all graphs are rendered correctly.
+
+  3.  Click on each of the links in the [./diff-test-1.wiki] document
+      and verify that all diffs are rendered correctly.
+
+  4.  Click on the following link to verify that it works:
+      [./test-page%2b%2b.wiki | ./test-page++.wiki]
+      (NB: Many web servers automatically block or rewrite URLs that
+      contain "+" characters, even when those "+" characters are
+      encoded as "%2B".  On such web servers, the URL above will not
+      work.  This test is only guaranteed to work when running
+      "fossil ui".)
+
+  5.  Shift-click on each of the links in [./fileage-test-1.wiki] and
+      verify correct operation of the file-age computation.
+
+  6.  Verify correct name-change tracking behavior (no net changes) for:
+
+          fossil test-name-changes --debug b120bc8b262ac 374920b20944b
+
+  7.  Compile for all of the following platforms:
+
+          1.  Linux x86
+          2.  Linux x86_64
+          3.  Mac x86
+          4.  Mac x86_64
+          5.  Windows (mingw)
+          6.  Windows (vc++)
+          7.  OpenBSD
+
+  8.  Run at least one occurrence of the following commands on every
+      platform:
+
+          1.  fossil rebuild
+          2.  fossil sync
+          3.  fossil test-integrity
+
+  9.  Run the following commands on Linux and verify that there are no
+      major memory leaks and no run-time errors or warnings (except for
+      the well-known jump on an uninitialized value that occurs within
+      zlib):
+
+          1.  valgrind fossil rebuild
+          2.  valgrind fossil sync
+
+  10. Inspect
+      [http://www.fossil-scm.org/index.html/vdiff?from=release&to=trunk&sbs=1 | all code changes since the previous release],
+      paying particular attention to the following details:
+
+          1.  Can a malicious HTTP request cause a buffer overrun?
+          2.  Can a malicious HTTP request expose privileged
+              information to unauthorized users?
+
+  11. Use the release candidate version of fossil in production on the
+      [http://www.fossil-scm.org/] website for at least 48 hours
+      (without incident) prior to making the release official.
+
+  12. Verify that the [../www/changes.wiki | Change Log] is correct and
+      up-to-date.
+
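+For illustration, step 1 above might be run as follows.  The paths in
+this example are hypothetical; substitute the location of your own
+source tree and of the executable under test:
+
+      mkdir -p /tmp/fossil-release-test
+      cd /tmp/fossil-release-test
+      tclsh ~/fossil/test/tester.tcl ~/fossil-build/fossil -prot -verbose
+
+The optional -prot and -verbose flags are recognized by
+test/tester.tcl: -prot writes a "prot" log file of the run and
+-verbose echoes the result of each fossil invocation.
+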
        + +Upon successful completion of all tests above, tag the release candidate +with the "release" tag and set its background color to "#d0c0ff". Update +the www/changes.wiki file to show the date of the release. ADDED test/revert.test Index: test/revert.test ================================================================== --- test/revert.test +++ test/revert.test @@ -0,0 +1,195 @@ +# +# Copyright (c) 2013 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# +# Tests for 'fossil revert' +# +# + +# Test 'fossil revert' against expected results from 'fossil changes' and +# 'fossil addremove -n', as well as by verifying the existence of files +# on the file system. 'fossil undo' is called after each test +# +proc revert-test {testid revertArgs expectedRevertOutput args} { + global RESULT + set passed 1 + + set args [dict merge { + -changes {} -addremove {} -exists {} -notexists {} + } $args] + + set result [fossil revert {*}$revertArgs] + test_status_list revert-$testid $result $expectedRevertOutput + + set statusListTests [list -changes changes -addremove {addremove -n}] + foreach {key fossilArgs} $statusListTests { + set expected [dict get $args $key] + set result [fossil {*}$fossilArgs] + test_status_list revert-$testid$key $result $expected + } + + set fileExistsTests [list -exists 1 does -notexists 0 should] + foreach {key expected verb} $fileExistsTests { + foreach path [dict get $args $key] { + if {[file exists $path] != $expected} { + set passed 0 + protOut " Failure: File $verb not exist: $path" + } + } + test revert-$testid$key $passed + } + + fossil undo +} + +catch {exec $::fossilexe info} res +if {![regexp {use --repository} $res]} { + puts stderr "Cannot run this test within an open checkout" + return +} + +repo_init + +# Prepare first commit +# +write_file f1 "f1" +write_file f2 "f2" +write_file f3 "f3" +fossil add f1 f2 f3 +fossil commit -m "c1" + +# Make changes to be reverted +# +# Add f0 +write_file f0 "f0" +fossil add f0 +# Remove f1 +file delete f1 +fossil rm f1 +# Edit f2 +write_file f2 "f2.1" +# Rename f3 to f3n +file rename -force f3 f3n +fossil mv f3 f3n + +# Test 'fossil revert' with no arguments +# +revert-test 1-1 {} { + UNMANAGE f0 + REVERT f1 + REVERT f2 + REVERT f3 + DELETE f3n +} -addremove { + ADDED f0 +} -exists {f0 f1 f2 f3} -notexists f3n + +# Test with a single filename argument +# +revert-test 1-2 f0 { + UNMANAGE f0 +} -changes { + DELETED f1 + EDITED f2 + RENAMED f3n +} -addremove { + ADDED f0 +} -exists {f0 f2 f3n} -notexists f3 + +revert-test 1-3 f1 { + REVERT f1 +} -changes { + ADDED f0 + EDITED f2 + RENAMED f3n +} -exists {f0 f1 f2 f3n} -notexists f3 + +revert-test 1-4 f2 { + REVERT f2 +} -changes { + ADDED f0 + DELETED f1 + RENAMED f3n +} -exists {f0 f2 f3n} -notexists {f1 f3} + +# Both files involved in a rename are reverted regardless of which filename +# is used as an argument to 'fossil revert' +# +revert-test 1-5 f3 { + REVERT f3 + DELETE f3n +} -changes { + ADDED f0 + DELETED f1 + EDITED f2 +} -exists {f0 f2 f3} -notexists {f1 f3n} + 
+revert-test 1-6 f3n { + REVERT f3 + DELETE f3n +} -changes { + ADDED f0 + DELETED f1 + EDITED f2 +} -exists {f0 f2 f3} -notexists {f1 f3n} + +# Test with multiple filename arguments +# +revert-test 1-7 {f0 f2 f3n} { + UNMANAGE f0 + REVERT f2 + REVERT f3 + DELETE f3n +} -changes { + DELETED f1 +} -addremove { + ADDED f0 +} -exists {f0 f2 f3} -notexists {f1 f3n} + + +# Test reverting the combination of a renamed file and an added file that +# uses the renamed file's original filename. +# +repo_init +write_file f1 "f1" +fossil add f1 +fossil commit -m "add f1" + +write_file f1n "f1n" +fossil mv f1 f1n +write_file f1 "f1b" +fossil add f1 + +revert-test 2-1 {} { + REVERT f1 + DELETE f1n +} -exists {f1} -notexists {f1n} + + +# Test reverting a rename in the repo but not completed in the file +# system +repo_init +write_file f1 "f1" +fossil add f1 +fossil commit -m "add f1" +fossil mv --soft f1 f1new +test 3-mv-1 {[file exists f1]} +test 3-mv-2 {![file exists f1new]} +revert-test 3-1 {} { + REVERT f1 + DELETE f1new +} -exists {f1} -notexists {f1n} ADDED test/stash.test Index: test/stash.test ================================================================== --- test/stash.test +++ test/stash.test @@ -0,0 +1,398 @@ +# +# Copyright (c) 2013 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# +# Tests for 'fossil stash' +# +# + +proc knownBug {t tests} { + return [expr {$t in $tests ? "knownBug" : ""}] +} + +# Test 'fossil stash' against expected results from 'fossil changes' and +# 'fossil addremove -n', as well as by verifying the existence of files +# on the file system. Unlike the similar function found in +# revert.test, 'fossil undo' is not called after each test because +# many stash operations aren't undoable, and because further testing +# of the stash content is more likely to be useful. +# +# The extra list "-knownbugs" is a list of areas that should be +# marked as "knownBug" to the inner call to test. Known areas are: +# -code The exit status of fossil stash +# -result The result string didn't match +# -changes The changed file set didn't match +# -addremove The addremove result set didn't match +# -exists One or more listed files don't exist +# -notexists One or more listed files do exist +# +# Also, if the exit status of fossil stash does not match +# expectations, the rest of the areas are not tested. 
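+#
+# For illustration only, a hypothetical call (not one of the actual
+# tests below) could look like the following, immediately after some
+# "fossil stash" command has been run:
+#
+#   test_result_state stash-example "stash save -m example" {
+#     REVERT f1
+#   } -changes {} -addremove {} -exists {f1} -notexists {} \
+#     -knownbugs {-result}
+#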
+proc test_result_state {testid cmdArgs expectedOutput args} { + global RESULT + set passed 1 + + set args [dict merge { + -changes {} -addremove {} -exists {} -notexists {} -knownbugs {} + } $args] + + set knownbugs [dict get $args "-knownbugs"] + set result $::RESULT + set code $::CODE + if {[lindex $cmdArgs end] eq "-expectError"} { + test $testid-CODE {$code} [knownBug "-code" $knownbugs] + if {!$code} { + return + } + } else { + test $testid-CODE {!$code} [knownBug "-code" $knownbugs] + if {$code} { + return + } + } + test_status_list $testid $result $expectedOutput [knownBug "-result" $knownbugs] + + set statusListTests [list -changes changes -addremove {addremove -n}] + foreach {key fossilArgs} $statusListTests { + set expected [dict get $args $key] + set result [fossil {*}$fossilArgs] + test_status_list $testid$key $result $expected [knownBug $key $knownbugs] + } + + set fileExistsTests [list -exists 1 does -notexists 0 should] + foreach {key expected verb} $fileExistsTests { + foreach path [dict get $args $key] { + if {[file exists $path] != $expected} { + set passed 0 + protOut " Failure: File $verb not exist: $path" + } + } + test $testid$key $passed [knownBug $key $knownbugs] + } + + #fossil undo +} + +proc stash-test {testid stashArgs expectedStashOutput args} { + fossil stash {*}$stashArgs + return [test_result_state stash-$testid "stash $stashArgs" $expectedStashOutput {*}$args] +} + +catch {exec $::fossilexe info} res +if {![regexp {use --repository} $res]} { + puts stderr "Cannot run this test within an open checkout" + return +} + +repo_init + +# Prepare first commit +# +write_file f1 "f1" +write_file f2 "f2" +write_file f3 "f3" +fossil add f1 f2 f3 +fossil commit -m "c1" --tag c1 + +######## +# fossil stash +# fossil stash save ?-m|--comment COMMENT? ?FILES...? + +# Make simple changes to stash +# Add f0, remove f1, edit f2, rename f3 to f3n +write_file f0 "f0" +fossil add f0 +file delete f1 +fossil rm f1 +write_file f2 "f2.1" +file rename -force f3 f3n +fossil mv f3 f3n + +# Stash these changes and confirm +stash-test 1 {save -m "stash 1"} { + UNMANAGE f0 + REVERT f1 + REVERT f2 + REVERT f3 + DELETE f3n +} -addremove { + ADDED f0 +} -exists {f0 f1 f2 f3} -notexists {f3n} + +######## +# fossil stash list|ls ?-v|--verbose? ?-W|--width ? + +# Confirm there is a stash saved +fossil stash list +#protOut "{[normalize_result]}" +#{1: [21bc64cff8c702] on 2016-02-10 19:48:44 +# stash 1} +test stash-1-list-1 {[regexp {^1: \[[0-9a-z]+\] on } [first_data_line]]} +test stash-1-list-2 {[regexp {^\s+stash 1\s*$} [second_data_line]]} + +set diff_stash_1 {DELETE f1 +Index: f1 +================================================================== +--- f1 ++++ f1 +@@ -1,1 +0,0 @@ +-f1 + +CHANGED f2 +--- f2 ++++ f2 +@@ -1,1 +1,1 @@ +-f2 ++f2.1 + +CHANGED f3n +--- f3n ++++ f3n + +ADDED f0 +Index: f0 +================================================================== +--- f0 ++++ f0 +@@ -0,0 +1,1 @@ ++f0} + +######## +# fossil stash show|cat ?STASHID? ?DIFF-OPTIONS? +# fossil stash [g]diff ?STASHID? ?DIFF-OPTIONS? 
+ +fossil stash show +test stash-1-show {[normalize_result] eq $diff_stash_1} +fossil stash diff +test stash-1-diff {[normalize_result] eq $diff_stash_1} + +######## +# fossil stash pop + +stash-test 2 pop { + DELETE f1 + UPDATE f2 + UPDATE f3n + ADDED f0 +} -changes { + ADDED f0 + MISSING f1 + EDITED f2 + MISSING f3 +} -addremove { + ADDED f3n + DELETED f1 + DELETED f3 +} -exists {f0 f2 f3n} -notexists {f1 f3} + +# Confirm there is no longer a stash saved +fossil stash list +test stash-2-list {[first_data_line] eq "empty stash"} + + +# Test stashed mv without touching the file system +# Issue reported by email to fossil-users +# from Warren Young, dated Tue, 9 Feb 2016 01:22:54 -0700 +# with checkin [b8c7af5bd9] plus a local patch on CentOS 5 +# 64 bit intel, 8-byte pointer, 4-byte integer +# Stashed renamed file said: +# fossil: ./src/delta.c:231: checksum: Assertion '...' failed. +# Should be triggered by this stash-WY-1 test. +fossil checkout --force c1 +fossil clean +fossil mv --soft f1 f1new +stash-test WY-1 {save -m "Reported 2016-02-09"} { + REVERT f1 + DELETE f1new +} -changes { +} -addremove { +} -exists {f1 f2 f3} -notexists {f1new} -knownbugs {-code -result} +# TODO: add tests that verify the saved stash is sensible. Possibly +# by applying it and checking results. But until the SQLITE_CONSTRAINT +# error is fixed, there is nothing stashed to test. + + + +# Test stashing the combination of a renamed file and an added file that +# uses the renamed file's original filename. I expect to see the same +# behavior as fossil revert: calmly back out both the rename and the +# add, and presumably stash the content of the added file before it +# is replaced by the revert. +# +repo_init +write_file f1 "f1" +fossil add f1 +fossil commit -m "add f1" + +write_file f1n "f1n" +fossil mv f1 f1n +write_file f1 "f1b" +fossil add f1 + +stash-test 2-1 {save -m "f1b"} { + REVERT f1 + DELETE f1n +} -exists {f1} -notexists {f1n} -knownbugs {-code -result} +# TODO: add tests that verify the saved stash is sensible. Possibly +# by applying it and checking results. But until the MISSING file +# error is fixed, there is nothing stashed to test. + + +# Test stashing a newly added (but never committed) file. As with +# fossil revert, fossil stash save unmanages the new file, but +# leaves the copy present on disk. This is undocumented, but +# probably sensible. +repo_init +write_file f1 "f1" +write_file f2 "f2" +fossil add f1 f2 +fossil commit -m "baseline" + +write_file f3 "f3" +fossil add f3 +stash-test 3-1 {save -m f3} { + UNMANAGE f3 +} -addremove { + ADDED f3 +} -exists {f1 f2 f3} -notexists {} +#fossil status +fossil stash show +test stash-3-1-show {[normalize_result] eq {ADDED f3 +Index: f3 +================================================================== +--- f3 ++++ f3 +@@ -0,0 +1,1 @@ ++f3}} +stash-test 3-1-pop {pop} { + ADDED f3 +} -changes { + ADDED f3 +} -addremove { +} -exists {f1 f2 f3} -notexists {} +fossil status + + +# Test stashing a rename of one file with at least one file +# unchanged. This should stash (and revert) just the rename +# operation. Instead it also stores and touches the unchanged file. 
+repo_init +write_file f1 "f1" +write_file f2 "f2" +fossil add f1 f2 +fossil commit -m "baseline" + +fossil mv --hard f2 f2n +test_result_state stash-3-4-mv "mv --hard f2 f2n" { + RENAME f2 f2n + MOVED_FILE f2 +} -changes { + RENAMED f2n +} -addremove { +} -exists {f1 f2n} -notexists {f2} + +stash-test 3-2 {save -m f2n} { + REVERT f2 + DELETE f2n +} -exists {f1 f2} -notexists {f2n} -knownbugs {-result} +fossil stash show +test stash-3-2-show-1 {![regexp {\sf1} $RESULT]} knownBug +test stash-3-2-show-2 {[regexp {\sf2n} $RESULT]} +stash-test 3-2-pop {pop} { + UPDATE f1 + UPDATE f2n +} -changes { + RENAMED f2n +} -addremove { + ADDED f2n + DELETED f2 +} -exists {f1 f2n} -notexists {f2} -knownbugs {-changes} + + + +######## +# fossil stash snapshot ?-m|--comment COMMENT? ?FILES...? + +repo_init +write_file f1 "f1" +write_file f2 "f2" +write_file f3 "f3" +fossil add f1 f2 f3 +fossil commit -m "c1" --tag c1 + +# Make simple changes and snapshot them +# Add f0, edit f2 +write_file f0 "f0" +fossil add f0 +write_file f2 "f2.1" + +# Snapshot these changes and confirm +stash-test 4-1 {snapshot -m "snap 1"} { +} -changes { + ADDED f0 + EDITED f2 +} -addremove { +} -exists {f0 f1 f2 f3} -notexists {} +fossil stash diff +test stash-4-1-diff-CODE {!$::CODE} +fossil stash show +test stash-4-1-show-1 {[regexp {CHANGED f2} $RESULT]} +test stash-4-1-show-2 {[regexp {ADDED f0} $RESULT]} + +# remove f1 and snapshot +file delete f1 +fossil rm f1 +stash-test 4-2 {snapshot -m "snap 2"} { +} -changes { + ADDED f0 + DELETED f1 + EDITED f2 +} -addremove { +} -exists {f0 f2 f3} -notexists {f1} +fossil stash diff +test stash-4-2-diff-CODE {!$::CODE} ;# knownBug +fossil stash show +test stash-4-2-show-1 {[regexp {DELETE f1} $RESULT]} +test stash-4-2-show-2 {[regexp {CHANGED f2} $RESULT]} +test stash-4-2-show-3 {[regexp {ADDED f0} $RESULT]} + + +# rename f3 to f3n and snapshot +file rename -force f3 f3n +fossil mv f3 f3n +stash-test 4-3 {snapshot -m "snap 3"} { +} -changes { + ADDED f0 + DELETED f1 + EDITED f2 + RENAMED f3n +} -addremove { +} -exists {f0 f2 f3n} -notexists {f1 f3} +fossil stash diff +test stash-4-3-diff-CODE {!$::CODE} ;# knownBug +fossil stash show +test stash-4-3-show-1 {[regexp {DELETE f1} $RESULT]} +test stash-4-3-show-2 {[regexp {CHANGED f2} $RESULT]} +test stash-4-3-show-2 {[regexp {CHANGED f3n} $RESULT]} +test stash-4-3-show-3 {[regexp {ADDED f0} $RESULT]} + +# fossil stash apply ?STASHID? +# fossil stash goto ?STASHID? +# fossil stash rm|drop ?STASHID? ?-a|--all? + +#fossil checkout --force c1 +#fossil clean ADDED test/subdir-b/readme.txt Index: test/subdir-b/readme.txt ================================================================== --- test/subdir-b/readme.txt +++ test/subdir-b/readme.txt @@ -0,0 +1,3 @@ +This file exists in order to create the "subdir-b" subdirectory. There is +exists sibling directory "subdir" that is a prefix of this subdirectory. +This file exists for self-testing. ADDED test/subdir/one/two/three/four/five/six/readme.txt Index: test/subdir/one/two/three/four/five/six/readme.txt ================================================================== --- test/subdir/one/two/three/four/five/six/readme.txt +++ test/subdir/one/two/three/four/five/six/readme.txt @@ -0,0 +1,2 @@ +This file exists in order to provide Fossil with a test case of a file +that is deeply nested below many subdirectories. 
ADDED test/test-page++.wiki Index: test/test-page++.wiki ================================================================== --- test/test-page++.wiki +++ test/test-page++.wiki @@ -0,0 +1,7 @@ +Test Page + +The purpose of this page is to test Fossil's ability to deal with +embedded documentation pages that contain characters that should be +escaped in URLs. + +Here is a link to the [./release-checklist.wiki | release checklist]. Index: test/tester.tcl ================================================================== --- test/tester.tcl +++ test/tester.tcl @@ -1,21 +1,15 @@ # # Copyright (c) 2006 D. Richard Hipp # # This program is free software; you can redistribute it and/or -# modify it under the terms of the GNU General Public -# License version 2 as published by the Free Software Foundation. +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) # # This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# General Public License for more details. -# -# You should have received a copy of the GNU General Public -# License along with this library; if not, write to the -# Free Software Foundation, Inc., 59 Temple Place - Suite 330, -# Boston, MA 02111-1307, USA. +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. # # Author contact information: # drh@hwaci.com # http://www.hwaci.com/drh/ # @@ -27,46 +21,147 @@ # # Where ../test/tester.tcl is the name of this file and ../bld/fossil # is the name of the executable to be tested. # -set testdir [file normalize [file dir $argv0]] +set testfiledir [file normalize [file dirname [info script]]] +set testrundir [pwd] +set testdir [file normalize [file dirname $argv0]] set fossilexe [file normalize [lindex $argv 0]] + +if {$tcl_platform(platform) eq "windows" && \ + [string length [file extension $fossilexe]] == 0} { + append fossilexe .exe +} + set argv [lrange $argv 1 end] set i [lsearch $argv -halt] if {$i>=0} { set HALT 1 set argv [lreplace $argv $i $i] } else { set HALT 0 } + +set i [lsearch $argv -prot] +if {$i>=0} { + set PROT 1 + set argv [lreplace $argv $i $i] +} else { + set PROT 0 +} + +set i [lsearch $argv -verbose] +if {$i>=0} { + set VERBOSE 1 + set argv [lreplace $argv $i $i] +} else { + set VERBOSE 0 +} + +set i [lsearch $argv -quiet] +if {$i>=0} { + set QUIET 1 + set argv [lreplace $argv $i $i] +} else { + set QUIET 0 +} + +set i [lsearch $argv -strict] +if {$i>=0} { + set STRICT 1 + set argv [lreplace $argv $i $i] +} else { + set STRICT 0 +} if {[llength $argv]==0} { foreach f [lsort [glob $testdir/*.test]] { set base [file root [file tail $f]] lappend argv $base } } -# Run the fossil program +# start protocol +# +proc protInit {cmd} { + if {$::PROT} { + set out [open [file join $::testrundir prot] w] + fconfigure $out -translation platform + puts $out "starting tests with: $cmd" + close $out + } +} + +# write protocol +# +proc protOut {msg {noQuiet 0}} { + if {$noQuiet || !$::QUIET} { + puts stdout $msg + } + if {$::PROT} { + set out [open [file join $::testrundir prot] a] + fconfigure $out -translation platform + puts $out $msg + close $out + } +} + +# Run the Fossil program with the specified arguments. +# +# Consults the VERBOSE global variable to determine if +# diagnostics should be emitted when no error is seen. 
+# Sets the CODE and RESULT global variables for use in +# test expressions. # proc fossil {args} { + return [uplevel 1 fossil_maybe_answer [list ""] $args] +} + +# Run the Fossil program with the specified arguments +# and possibly answer the first prompt, if any. +# +# Consults the VERBOSE global variable to determine if +# diagnostics should be emitted when no error is seen. +# Sets the CODE and RESULT global variables for use in +# test expressions. +# +proc fossil_maybe_answer {answer args} { global fossilexe set cmd $fossilexe + set expectError 0 + if {[lindex $args end] eq "-expectError"} { + set expectError 1 + set args [lrange $args 0 end-1] + } foreach a $args { lappend cmd $a } - puts $cmd + protOut $cmd + flush stdout - set rc [catch {eval exec $cmd} result] + if {[string length $answer] > 0} { + protOut $answer + set prompt_file [file join $::tempPath fossil_prompt_answer] + write_file $prompt_file $answer\n + set rc [catch {eval exec $cmd <$prompt_file} result] + file delete $prompt_file + } else { + set rc [catch {eval exec $cmd} result] + } global RESULT CODE set CODE $rc + if {($rc && !$expectError) || (!$rc && $expectError)} { + protOut "ERROR: $result" 1 + } elseif {$::VERBOSE} { + protOut "RESULT: $result" + } set RESULT $result } -# Read a file into memory. +# Read a file into memory. # proc read_file {filename} { set in [open $filename r] fconfigure $in -translation binary set txt [read $in [file size $filename]] @@ -93,25 +188,194 @@ regsub -all { +\n} $x \n x set y [read_file $b] regsub -all { +\n} $y \n y return [expr {$x==$y}] } + +# Create and open a new Fossil repository and clean the checkout +# +proc repo_init {{filename ".rep.fossil"}} { + if {$::env(HOME) ne [pwd]} { + catch {exec $::fossilexe info} res + if {![regexp {use --repository} $res]} { + error "In an open checkout: cannot initialize a new repository here." + } + # Fossil will write data on $HOME, running 'fossil new' here. + # We need not to clutter the $HOME of the test caller. + # + set ::env(HOME) [pwd] + } + catch {exec $::fossilexe close -f} + file delete $filename + exec $::fossilexe new $filename + exec $::fossilexe open $filename + exec $::fossilexe clean -f + exec $::fossilexe set mtime-changes off +} + +# This procedure only returns non-zero if the Tcl integration feature was +# enabled at compile-time and is now enabled at runtime. +proc is_tcl_usable_by_fossil {} { + fossil test-th-eval "hasfeature tcl" + if {$::RESULT ne "1"} {return 0} + fossil test-th-eval "setting tcl" + if {$::RESULT eq "1"} {return 1} + fossil test-th-eval --open-config "setting tcl" + if {$::RESULT eq "1"} {return 1} + return [info exists ::env(TH1_ENABLE_TCL)] +} + +# This procedure only returns non-zero if the TH1 hooks feature was enabled +# at compile-time and is now enabled at runtime. +proc are_th1_hooks_usable_by_fossil {} { + fossil test-th-eval "hasfeature th1Hooks" + if {$::RESULT ne "1"} {return 0} + fossil test-th-eval "setting th1-hooks" + if {$::RESULT eq "1"} {return 1} + fossil test-th-eval --open-config "setting th1-hooks" + if {$::RESULT eq "1"} {return 1} + return [info exists ::env(TH1_ENABLE_HOOKS)] +} + +# This (rarely used) procedure is designed to run a test within the Fossil +# source checkout (e.g. one that does NOT modify any state), while saving +# and restoring the current directory (e.g. one used when running a test +# file outside of the Fossil source checkout). 
Please do NOT use this +# procedure unless you are absolutely sure it does not modify the state of +# the repository or source checkout in any way. +# +proc run_in_checkout { script {dir ""} } { + if {[string length $dir] == 0} {set dir $::testfiledir} + set savedPwd [pwd]; cd $dir + set code [catch { + uplevel 1 $script + } result] + cd $savedPwd; unset savedPwd + return -code $code $result +} + +# Normalize file status lists (like those returned by 'fossil changes') +# so they can be compared using simple string comparison +# +proc normalize_status_list {list} { + set normalized [list] + set matches [regexp -all -inline -line {^\s*([A-Z_]+:?)\x20+(\S.*)$} $list] + foreach {_ status file} $matches { + lappend normalized [list $status [string trim $file]] + } + set normalized [lsort -index 1 $normalized] + return $normalized +} + +# Perform a test comparing two status lists +# +proc test_status_list {name result expected {constraints ""}} { + set expected [normalize_status_list $expected] + set result [normalize_status_list $result] + if {$result eq $expected} { + test $name 1 $constraints + } else { + protOut " Expected:\n [join $expected "\n "]" 1 + protOut " Got:\n [join $result "\n "]" 1 + test $name 0 $constraints + } +} + +# Append all arguments into a single value and then returns it. +# +proc appendArgs {args} { + eval append result $args +} + +# Return the name of the versioned settings file containing the TH1 +# setup script. +# +proc getTh1SetupFileName {} { + # + # NOTE: This uses the "testdir" global variable provided by the + # test suite; alternatively, the root of the source tree + # could be obtained directly from Fossil. + # + return [file normalize [file join .fossil-settings th1-setup]] +} + +# Return the saved name of the versioned settings file containing +# the TH1 setup script. +# +proc getSavedTh1SetupFileName {} { + return [appendArgs [getTh1SetupFileName] . [pid]] +} + +# Sets the TH1 setup script to the one provided. Prior to calling +# this, the [saveTh1SetupFile] procedure should be called in order to +# preserve the existing TH1 setup script. Prior to completing the test, +# the [restoreTh1SetupFile] procedure should be called to restore the +# original TH1 setup script. +# +proc writeTh1SetupFile { data } { + set fileName [getTh1SetupFileName] + file mkdir [file dirname $fileName] + return [write_file $fileName $data] +} + +# Saves the TH1 setup script file by renaming it, based on the current +# process ID. +# +proc saveTh1SetupFile {} { + set oldFileName [getTh1SetupFileName] + if {[file exists $oldFileName]} { + set newFileName [getSavedTh1SetupFileName] + catch {file delete $newFileName} + file rename $oldFileName $newFileName + } +} + +# Restores the original TH1 setup script file by renaming it back, based +# on the current process ID. +# +proc restoreTh1SetupFile {} { + set oldFileName [getSavedTh1SetupFileName] + set newFileName [getTh1SetupFileName] + if {[file exists $oldFileName]} { + catch {file delete $newFileName} + file rename $oldFileName $newFileName + } else { + # + # NOTE: There was no TH1 setup script file, delete the test one. + # + file delete $newFileName + } +} # Perform a test # -proc test {name expr} { - global bad_test +set test_count 0 +proc test {name expr {constraints ""}} { + global bad_test ignored_test test_count RESULT + incr test_count + set knownBug [expr {"knownBug" in $constraints}] set r [uplevel 1 [list expr $expr]] if {$r} { - puts "test $name OK" + if {$knownBug && !$::STRICT} { + protOut "test $name OK (knownBug)?" 
+ } else { + protOut "test $name OK" + } } else { - puts "test $name FAILED!" - lappend bad_test $name - if {$::HALT} exit + if {$knownBug && !$::STRICT} { + protOut "test $name FAILED (knownBug)!" 1 + lappend ignored_test $name + } else { + protOut "test $name FAILED!" 1 + if {$::QUIET} {protOut "RESULT: $RESULT" 1} + lappend bad_test $name + if {$::HALT} exit + } } } set bad_test {} +set ignored_test {} # Return a random string N characters long. # set vocabulary 01234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ" append vocabulary " ()*^!.eeeeeeeeaaaaattiioo " @@ -167,10 +431,113 @@ append out \n$line } return [string range $out 1 end] } +# Executes the "fossil http" command. The entire content of the HTTP request +# is read from the data file name, with [subst] being performed on it prior to +# submission. Temporary input and output files are created and deleted. The +# result will be the contents of the temoprary output file. +proc test_fossil_http { repository dataFileName url } { + set suffix [appendArgs [pid] - [getSeqNo] - [clock seconds] .txt] + set inFileName [file join $::tempPath [appendArgs test-http-in- $suffix]] + set outFileName [file join $::tempPath [appendArgs test-http-out- $suffix]] + set data [subst [read_file $dataFileName]] + + write_file $inFileName $data + fossil http $inFileName $outFileName 127.0.0.1 $repository --localauth + set result [expr {[file exists $outFileName] ? [read_file $outFileName] : ""}] + + if {1} { + catch {file delete $inFileName} + catch {file delete $outFileName} + } + + return $result +} + +# obtains and increments a "sequence number" for this test run. +proc getSeqNo {} { + upvar #0 seqNo seqNo + if {![info exists seqNo]} { + set seqNo 0 + } + return [incr seqNo] +} + +# fixup the whitespace in the result to make it easier to compare. +proc normalize_result {} { + return [string map [list \r\n \n] [string trim $::RESULT]] +} + +# returns the first line of the normalized result. +proc first_data_line {} { + return [lindex [split [normalize_result] \n] 0] +} + +# returns the second line of the normalized result. +proc second_data_line {} { + return [lindex [split [normalize_result] \n] 1] +} + +# returns the third line of the normalized result. +proc third_data_line {} { + return [lindex [split [normalize_result] \n] 2] +} + +# returns the last line of the normalized result. +proc last_data_line {} { + return [lindex [split [normalize_result] \n] end] +} + +# returns the second to last line of the normalized result. +proc next_to_last_data_line {} { + return [lindex [split [normalize_result] \n] end-1] +} + +# returns the third to last line of the normalized result. +proc third_to_last_data_line {} { + return [lindex [split [normalize_result] \n] end-2] +} + +set tempPath [expr {[info exists env(TEMP)] ? 
\ + $env(TEMP) : [file dirname [info script]]}] + +if {$tcl_platform(platform) eq "windows"} { + set tempPath [string map [list \\ /] $tempPath] +} + +set tempPath [file normalize $tempPath] + +if {[catch { + write_file [file join $tempPath temporary.txt] [clock seconds] +} error] != 0} { + error "could not write file to directory \"$tempPath\",\ +please set TEMP variable in environment: $error" +} + +protInit $fossilexe foreach testfile $argv { - puts "***** $testfile ******" + set dir [file root [file tail $testfile]] + file delete -force $dir + file mkdir $dir + set origwd [pwd] + cd $dir + protOut "***** $testfile ******" source $testdir/$testfile.test + protOut "***** End of $testfile: [llength $bad_test] errors so far ******" + cd $origwd +} +set nErr [llength $bad_test] +if {$nErr>0 || !$::QUIET} { + protOut "***** Final results: $nErr errors out of $test_count tests" 1 +} +if {$nErr>0} { + protOut "***** Considered failures: $bad_test" 1 +} +set nErr [llength $ignored_test] +if {$nErr>0 || !$::QUIET} { + protOut "***** Ignored results: $nErr ignored errors out of $test_count tests" 1 } -puts "[llength $bad_test] errors: $bad_test" +if {$nErr>0} { + protOut "***** Ignored failures: $ignored_test" 1 +} ADDED test/th1-docs-input.txt Index: test/th1-docs-input.txt ================================================================== --- test/th1-docs-input.txt +++ test/th1-docs-input.txt @@ -0,0 +1,4 @@ +GET ${url} HTTP/1.1 +Host: localhost +User-Agent: Fossil + ADDED test/th1-docs.test Index: test/th1-docs.test ================================================================== --- test/th1-docs.test +++ test/th1-docs.test @@ -0,0 +1,61 @@ +# +# Copyright (c) 2015 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# TH1 Docs +# + +fossil test-th-eval "hasfeature th1Docs" + +if {$::RESULT ne "1"} then { + puts "Fossil was not compiled with TH1 docs support."; return +} + +fossil test-th-eval "hasfeature tcl" + +if {$::RESULT ne "1"} then { + puts "Fossil was not compiled with Tcl support."; return +} + +############################################################################### + +set env(TH1_ENABLE_DOCS) 1; # TH1 docs must be enabled for this test. +set env(TH1_ENABLE_TCL) 1; # Tcl integration must be enabled for this test. 
+ +############################################################################### + +run_in_checkout { + set data [fossil info] +} + +regexp -line -- {^repository: (.*)$} $data dummy repository + +if {[string length $repository] == 0 || ![file exists $repository]} then { + error "unable to locate repository" +} + +set dataFileName [file join $::testdir th1-docs-input.txt] + +############################################################################### + +run_in_checkout { + set RESULT [test_fossil_http \ + $repository $dataFileName /doc/trunk/test/fileStat.th1] +} + +test th1-docs-1a {[regexp {Fossil: test/fileStat.th1} $RESULT]} +test th1-docs-1b {[regexp {>\[[0-9a-f]{40}\]<} $RESULT]} +test th1-docs-1c {[regexp { contains \d+ files\.} $RESULT]} ADDED test/th1-hooks-input.txt Index: test/th1-hooks-input.txt ================================================================== --- test/th1-hooks-input.txt +++ test/th1-hooks-input.txt @@ -0,0 +1,4 @@ +GET ${url} HTTP/1.1 +Host: localhost +User-Agent: Fossil + ADDED test/th1-hooks.test Index: test/th1-hooks.test ================================================================== --- test/th1-hooks.test +++ test/th1-hooks.test @@ -0,0 +1,199 @@ +# +# Copyright (c) 2011 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# TH1 Hooks +# + +fossil test-th-eval "hasfeature th1Hooks" + +if {$::RESULT ne "1"} then { + puts "Fossil was not compiled with TH1 hooks support."; return +} + +############################################################################### + +repo_init +write_file f1 "f1"; fossil add f1; fossil commit -m "c1" + +############################################################################### + +set env(TH1_ENABLE_HOOKS) 1; # TH1 hooks must be enabled for this test. + +############################################################################### + +set testTh1Setup { + proc initialize_hook_log {} { + if {![info exists ::hook_log]} { + set ::hook_log "" + } + } + + proc append_hook_log { args } { + initialize_hook_log + if {[string length $::hook_log] > 0} { + set ::hook_log "$::hook_log " + } + for {set i 0} {$i < [llength $args]} {set i [expr {$i + 1}]} { + set ::hook_log $::hook_log[lindex $args $i] + } + } + + proc emit_hook_log {} { + initialize_hook_log + html "\n

        $::hook_log

        \n" + } + + proc command_hook {} { + append_hook_log command_hook " " $::cmd_name + if {$::cmd_name eq "test1"} { + puts [repository]; continue + } elseif {$::cmd_name eq "test2"} { + error "unsupported command" + } elseif {$::cmd_name eq "test3"} { + emit_hook_log + break "TH_BREAK return code" + } elseif {$::cmd_name eq "test4"} { + emit_hook_log + return -code 2 "TH_RETURN return code" + } elseif {$::cmd_name eq "timeline"} { + set length [llength $::cmd_args] + set length [expr {$length - 1}] + if {[lindex $::cmd_args $length] eq "custom"} { + emit_hook_log + return "custom timeline" + } elseif {[lindex $::cmd_args $length] eq "now"} { + emit_hook_log + return "now timeline" + } else { + emit_hook_log + error "unsupported timeline" + } + } + } + + proc command_notify {} { + append_hook_log command_notify " " $::cmd_name + emit_hook_log + } + + proc webpage_hook {} { + append_hook_log webpage_hook " " $::web_name + if {$::web_name eq "test1"} { + puts [repository]; continue + } + } + + proc webpage_notify {} { + append_hook_log webpage_notify " " $::web_name + emit_hook_log + } +} + +############################################################################### + +set data [fossil info] +regexp -line -- {^repository: (.*)$} $data dummy repository + +if {[string length $repository] == 0 || ![file exists $repository]} then { + error "unable to locate repository" +} + +set dataFileName [file join $::testdir th1-hooks-input.txt] + +############################################################################### + +saveTh1SetupFile; writeTh1SetupFile $testTh1Setup + +############################################################################### + +fossil timeline custom; # NOTE: Bad "WHEN" argument. +test th1-cmd-hooks-1a {[normalize_result] eq \ +{

        command_hook timeline

        ++++ no more data (0) +++ + +

        command_hook timeline command_notify timeline

        }} + +############################################################################### + +fossil timeline +test th1-cmd-hooks-2a {[first_data_line] eq \ + {

        command_hook timeline

        }} + +test th1-cmd-hooks-2b {[second_data_line] eq {ERROR: unsupported timeline}} + +############################################################################### + +fossil timeline -n -1 now +test th1-cmd-hooks-3a {[first_data_line] eq \ + {

        command_hook timeline

        }} + +test th1-cmd-hooks-3b \ + {[regexp -- {=== \d{4}-\d{2}-\d{2} ===} [second_data_line]]} + +test th1-cmd-hooks-3c \ + {[regexp -- {--- line limit \(\d+\) reached ---} [third_to_last_data_line]]} + +test th1-cmd-hooks-3d {[last_data_line] eq \ + {

        command_hook timeline command_notify timeline

        }} + +############################################################################### + +fossil test1 +test th1-custom-cmd-1a {[next_to_last_data_line] eq $repository} + +test th1-custom-cmd-1b {[last_data_line] eq \ + {

        command_hook test1 command_notify test1

        }} + +############################################################################### + +fossil test2 +test th1-custom-cmd-2a {[first_data_line] eq {ERROR: unsupported command}} + +############################################################################### + +fossil test3 +test th1-custom-cmd-3a {[string trim $RESULT] eq \ + {

        command_hook test3

        }} + +############################################################################### + +fossil test4 +test th1-custom-cmd-4a {[string trim $RESULT] eq \ + {

        command_hook test4

        }} + +############################################################################### + +set RESULT [test_fossil_http $repository $dataFileName /timeline] + +test th1-web-hooks-1a {[regexp \ + {Unnamed Fossil Project: Timeline} $RESULT]} + +test th1-web-hooks-1b {[regexp [appendArgs \ + {

        command_hook http webpage_hook timeline} " " \ + {webpage_notify timeline

        }] $RESULT]} + +############################################################################### + +set RESULT [test_fossil_http $repository $dataFileName /test1] +test th1-custom-web-1a {[next_to_last_data_line] eq $repository} + +test th1-custom-web-1b {[last_data_line] eq \ + {

        command_hook http webpage_hook test1 webpage_notify test1

        }} + +############################################################################### + +restoreTh1SetupFile ADDED test/th1-repo.test Index: test/th1-repo.test ================================================================== --- test/th1-repo.test +++ test/th1-repo.test @@ -0,0 +1,90 @@ +# +# Copyright (c) 2011 D. Richard Hipp +# Copyright (c) 2015 Ch. Drexler +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +# Chris Drexler +# +############################################################################ +# +# TH1 tests that may modify the repository +# + +catch {exec $::fossilexe info} res +if {![regexp {use --repository} $res]} { + puts stderr "Cannot run this test within an open checkout" + return +} + +######################################## +# Setup: Add Files and Commit # +######################################## + +set rootDir [file normalize [pwd]] + +repo_init + +write_file f1.md "f1" +write_file f2.md "f2" +write_file f3.txt "f3" +write_file f4.md "f4" + +file mkdir [file join $rootDir subdirA] +# NOTE: There are no files in subdirA. + +file mkdir [file join $rootDir subdirB] +write_file [file join $rootDir subdirB f5.md] "f5" +write_file [file join $rootDir subdirB f6.md] "f6" +write_file [file join $rootDir subdirB f7.txt] "f7" +write_file [file join $rootDir subdirB f8.md] "f8" +write_file [file join $rootDir subdirB f9.wiki] "f9" + +file mkdir [file join $rootDir subdirC] +write_file [file join $rootDir subdirC f10.md] "f10" +write_file [file join $rootDir subdirC f11t.xt] "f11" + +set files_md [list subdirB/f5.md subdirB/f6.md subdirB/f8.md subdirC/f10.md] + +fossil add $rootDir +fossil commit -m "c1" + +set dir [file dirname [info script]] + +############################################################################### + +fossil test-th-eval --open-config "dir trunk subdir*/*.md" +test th1-dir-1 {[llength $RESULT] eq [llength $files_md]} + +set n 1 +foreach i $RESULT j $files_md { + test th1-dir-2.$n {$i eq $j} + set n [expr {$n + 1}] +} + +############################################################################### + +set dateTime {\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}} +fossil test-th-eval --open-config "dir trunk subdir*/*.md 1" +test th1-dir-3.1 {[lindex [lindex $RESULT 0] 0] eq "subdirB/f5.md"} +test th1-dir-3.2 {[lindex [lindex $RESULT 0] 1] == 2} +test th1-dir-3.3 {[regexp -- $dateTime [lindex [lindex $RESULT 0] 2]]} +test th1-dir-3.4 {[lindex [lindex $RESULT 1] 0] eq "subdirB/f6.md"} +test th1-dir-3.5 {[lindex [lindex $RESULT 1] 1] == 2} +test th1-dir-3.6 {[regexp -- $dateTime [lindex [lindex $RESULT 1] 2]]} +test th1-dir-3.7 {[lindex [lindex $RESULT 2] 0] eq "subdirB/f8.md"} +test th1-dir-3.8 {[lindex [lindex $RESULT 2] 1] == 2} +test th1-dir-3.9 {[regexp -- $dateTime [lindex [lindex $RESULT 2] 2]]} +test th1-dir-3.10 {[lindex [lindex $RESULT 3] 0] eq "subdirC/f10.md"} +test th1-dir-3.11 {[lindex [lindex $RESULT 3] 1] == 3} +test th1-dir-3.12 {[regexp -- $dateTime [lindex [lindex $RESULT 3] 2]]} ADDED test/th1-tcl.test Index: test/th1-tcl.test ================================================================== --- test/th1-tcl.test +++ 
test/th1-tcl.test @@ -0,0 +1,177 @@ +# +# Copyright (c) 2011 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# TH1/Tcl integration +# + +set dir [file dirname [info script]] + +############################################################################### + +repo_init + +############################################################################### + +fossil test-th-eval "hasfeature tcl" + +if {$::RESULT ne "1"} then { + puts "Fossil was not compiled with Tcl support."; return +} + +############################################################################### + +set env(TH1_ENABLE_TCL) 1; # Tcl integration must be enabled for this test. + +############################################################################### + +fossil test-th-render --open-config \ + [file nativename [file join $dir th1-tcl1.txt]] + +test th1-tcl-1 {[regexp -- {^tclReady\(before\) = 0 +tclReady\(after\) = 1 +\d+ +\d+ +\d+ +via Tcl invoke +4 +4 +two words +one_word +three words now +\d+ +two words +4 +\d+ +two words +4 +\d+ +one_word +three words now$} [normalize_result]]} + +############################################################################### + +if {[catch {package require sqlite3}] == 0} { + fossil test-th-render --open-config \ + [file nativename [file join $dir th1-tcl2.txt]] + + test th1-tcl-2 {[regexp -- {^\d+$} [normalize_result]]} +} else { + puts stderr "Skipping 'th1-tcl-2', SQLite package for Tcl not available" +} + +############################################################################### + +fossil test-th-render --open-config \ + [file nativename [file join $dir th1-tcl3.txt]] + +test th1-tcl-3 {$RESULT eq {

        ERROR:\ +invalid command name "bad_command"

        }} + +############################################################################### + +fossil test-th-render --open-config \ + [file nativename [file join $dir th1-tcl4.txt]] + +test th1-tcl-4 {$RESULT eq {

        ERROR:\ +divide by zero

        }} + +############################################################################### + +fossil test-th-render --open-config \ + [file nativename [file join $dir th1-tcl5.txt]] + +test th1-tcl-5 {$RESULT eq {

        ERROR:\ +Tcl command not found: bad_command

        } || $RESULT eq {
        ERROR: invalid command name "bad_command"

        }} + +############################################################################### + +fossil test-th-render --open-config \ + [file nativename [file join $dir th1-tcl6.txt]] + +test th1-tcl-6 {$RESULT eq {

        ERROR:\ +no such command: bad_command

        }} + +############################################################################### + +fossil test-th-render --open-config \ + [file nativename [file join $dir th1-tcl7.txt]] + +test th1-tcl-7 {$RESULT eq {

        ERROR:\ +syntax error in expression: "2**0"

        }} + +############################################################################### + +fossil test-th-render --open-config \ + [file nativename [file join $dir th1-tcl8.txt]] + +test th1-tcl-8 {$RESULT eq {

        ERROR:\ +cannot invoke Tcl command: tailcall

        } || $RESULT eq {
        ERROR: tailcall can only be called from a proc or\ +lambda

        } || $RESULT eq {

        ERROR: This test\ +requires Tcl 8.6 or higher.

        }} + +############################################################################### + +fossil test-th-render --open-config \ + [file nativename [file join $dir th1-tcl9.txt]] + +test th1-tcl-9 {[string trim $RESULT] eq [list [file tail $fossilexe] 3 \ +[list test-th-render --open-config [file nativename [file join $dir \ +th1-tcl9.txt]]]]} + +############################################################################### + +fossil test-th-eval "tclMakeSafe a" +test th1-tcl-10 {[normalize_result] eq \ +{TH_ERROR: wrong # args: should be "tclMakeSafe"}} + +############################################################################### + +fossil test-th-eval "list \[tclIsSafe\] \[tclMakeSafe\] \[tclIsSafe\]" +test th1-tcl-11 {[normalize_result] eq {0 {} 1}} + +############################################################################### + +fossil test-th-eval "tclMakeSafe; tclMakeSafe" +test th1-tcl-12 {[normalize_result] eq \ +{TH_ERROR: Tcl interpreter is already 'safe'}} + +############################################################################### + +fossil test-th-eval "tclEval pwd; tclMakeSafe; tclEval pwd" +test th1-tcl-13 {[normalize_result] eq {TH_ERROR: invalid command name "pwd"}} + +############################################################################### + +fossil test-th-eval "tclMakeSafe; tclExpr {0 + \[string length \[pwd\]\]}" +test th1-tcl-14 {[normalize_result] eq {TH_ERROR: invalid command name "pwd"}} + +############################################################################### + +fossil test-th-eval "tclInvoke pwd; tclMakeSafe; tclInvoke pwd" +test th1-tcl-15 {[normalize_result] eq {TH_ERROR: Tcl command not found: pwd}} + +############################################################################### + +fossil test-th-eval "tclMakeSafe; tclEval set x 2" +test th1-tcl-16 {[normalize_result] eq {2}} + +############################################################################### + +fossil test-th-eval "tclMakeSafe; tclEval set x 2; tclEval info vars x" +test th1-tcl-17 {[normalize_result] eq {x}} ADDED test/th1-tcl1.txt Index: test/th1-tcl1.txt ================================================================== --- test/th1-tcl1.txt +++ test/th1-tcl1.txt @@ -0,0 +1,29 @@ + + # + # This is a "TH1 fragment" used to test the Tcl integration features of TH1. + # The corresponding test file executes this file using the test-th-render + # Fossil command. + # + proc doOut {msg} {puts $msg; puts \n} + doOut "tclReady(before) = [tclReady]" + set channel stdout; tclInvoke set channel $channel + doOut "tclReady(after) = [tclReady]" + doOut [tclEval clock seconds] + doOut [tclEval {set x [clock seconds]}] + tclEval {puts $channel "[clock seconds]"} + tclInvoke puts $channel "via Tcl invoke" + doOut [tclExpr 2+2] + doOut [tclExpr 2 + 2] + doOut [tclInvoke set x "two words"] + doOut [tclInvoke eval set y one_word] + doOut [tclInvoke eval {set z "three words now"}] + doOut [set x [tclEval {set x [clock seconds]}]] + doOut [tclInvoke th1Eval {set y "two words"}] + doOut [set z [tclInvoke th1Expr {2+2}]] + doOut $x + doOut $y + doOut $z + doOut [tclEval set x] + doOut [tclEval set y] + doOut [tclEval set z] + ADDED test/th1-tcl2.txt Index: test/th1-tcl2.txt ================================================================== --- test/th1-tcl2.txt +++ test/th1-tcl2.txt @@ -0,0 +1,19 @@ + + # + # This is a "TH1 fragment" used to test the Tcl integration features of TH1. + # The corresponding test file executes this file using the test-th-render + # Fossil command. 
+ # + # NOTE: This test requires that the SQLite package be available for the Tcl + # interpreter that is linked to the Fossil executable. + # + tclInvoke set repository_name [repository 1] + proc doOut {msg} {puts $msg; puts \n} + doOut [tclEval { + package require sqlite3 + sqlite3 db $repository_name -readonly true + set x [db eval {SELECT COUNT(*) FROM user;}] + db close + return $x + }] + ADDED test/th1-tcl3.txt Index: test/th1-tcl3.txt ================================================================== --- test/th1-tcl3.txt +++ test/th1-tcl3.txt @@ -0,0 +1,9 @@ + + # + # This is a "TH1 fragment" used to test the Tcl integration features of TH1. + # The corresponding test file executes this file using the test-th-render + # Fossil command. + # + proc doOut {msg} {puts $msg; puts \n} + doOut [tclEval bad_command] + ADDED test/th1-tcl4.txt Index: test/th1-tcl4.txt ================================================================== --- test/th1-tcl4.txt +++ test/th1-tcl4.txt @@ -0,0 +1,9 @@ + + # + # This is a "TH1 fragment" used to test the Tcl integration features of TH1. + # The corresponding test file executes this file using the test-th-render + # Fossil command. + # + proc doOut {msg} {puts $msg; puts \n} + doOut [tclExpr 2/0] + ADDED test/th1-tcl5.txt Index: test/th1-tcl5.txt ================================================================== --- test/th1-tcl5.txt +++ test/th1-tcl5.txt @@ -0,0 +1,9 @@ + + # + # This is a "TH1 fragment" used to test the Tcl integration features of TH1. + # The corresponding test file executes this file using the test-th-render + # Fossil command. + # + proc doOut {msg} {puts $msg; puts \n} + doOut [tclInvoke bad_command] + ADDED test/th1-tcl6.txt Index: test/th1-tcl6.txt ================================================================== --- test/th1-tcl6.txt +++ test/th1-tcl6.txt @@ -0,0 +1,9 @@ + + # + # This is a "TH1 fragment" used to test the Tcl integration features of TH1. + # The corresponding test file executes this file using the test-th-render + # Fossil command. + # + proc doOut {msg} {puts $msg; puts \n} + doOut [tclEval th1Eval bad_command] + ADDED test/th1-tcl7.txt Index: test/th1-tcl7.txt ================================================================== --- test/th1-tcl7.txt +++ test/th1-tcl7.txt @@ -0,0 +1,19 @@ + + # + # This is a "TH1 fragment" used to test the Tcl integration features of TH1. + # The corresponding test file executes this file using the test-th-render + # Fossil command. + # + proc doOut {msg} {puts $msg; puts \n} + + # + # BUGBUG: Attempting to divide by zero will crash TH1 with the error: + # "child killed: floating-point exception" + # + # doOut [tclEval th1Expr 2/0] + + # + # NOTE: For now, just cause an expression syntax error. + # + doOut [tclEval th1Expr 2**0] + ADDED test/th1-tcl8.txt Index: test/th1-tcl8.txt ================================================================== --- test/th1-tcl8.txt +++ test/th1-tcl8.txt @@ -0,0 +1,14 @@ + + # + # This is a "TH1 fragment" used to test the Tcl integration features of TH1. + # The corresponding test file executes this file using the test-th-render + # Fossil command. + # + proc doOut {msg} {puts $msg; puts \n} + + if {[tclInvoke set tcl_version] >= 8.6} { + doOut [tclInvoke tailcall set x 1] + } else { + error "This test requires Tcl 8.6 or higher." 
+ } + ADDED test/th1-tcl9.txt Index: test/th1-tcl9.txt ================================================================== --- test/th1-tcl9.txt +++ test/th1-tcl9.txt @@ -0,0 +1,9 @@ + + # + # This is a "TH1 fragment" used to test the Tcl integration features of TH1. + # The corresponding test file executes this file using the test-th-render + # Fossil command. + # + set channel stdout; tclInvoke set channel $channel + tclEval {puts $channel [list [file tail $argv0] $argc $argv]} + ADDED test/th1.test Index: test/th1.test ================================================================== --- test/th1.test +++ test/th1.test @@ -0,0 +1,1455 @@ +# +# Copyright (c) 2011 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# TH1 Commands +# + +set dir [file dirname [info script]]; repo_init + +############################################################################### + +set th1Tcl [is_tcl_usable_by_fossil] +set th1Hooks [are_th1_hooks_usable_by_fossil] + +############################################################################### + +fossil test-th-eval --open-config "setting abc" +test th1-setting-1 {$RESULT eq ""} + +############################################################################### + +fossil test-th-eval --open-config "setting -- abc" +test th1-setting-2 {$RESULT eq ""} + +############################################################################### + +fossil test-th-eval --open-config "setting -strict abc" +test th1-setting-3 {$RESULT eq {TH_ERROR: no value for setting "abc"}} + +############################################################################### + +fossil test-th-eval --open-config "setting -strict -- abc" +test th1-setting-4 {$RESULT eq {TH_ERROR: no value for setting "abc"}} + +############################################################################### + +fossil test-th-eval --open-config "setting autosync" +test th1-setting-5 {$RESULT eq 0 || $RESULT eq 1 || $RESULT eq "on"} + +############################################################################### + +fossil test-th-eval --open-config "setting -strict autosync" +test th1-setting-6 {$RESULT eq 0 || $RESULT eq 1 || $RESULT eq "on"} + +############################################################################### + +fossil test-th-eval --open-config "setting --" +test th1-setting-7 {$RESULT eq \ +{TH_ERROR: wrong # args: should be "setting ?-strict? ?--? name"}} + +############################################################################### + +fossil test-th-eval --open-config "setting -strict --" +test th1-setting-8 {$RESULT eq \ +{TH_ERROR: wrong # args: should be "setting ?-strict? ?--? 
name"}} + +############################################################################### + +fossil test-th-eval --open-config "setting -- --" +test th1-setting-9 {$RESULT eq {}} + +############################################################################### + +fossil test-th-eval --open-config "setting -strict -- --" +test th1-setting-10 {$RESULT eq {TH_ERROR: no value for setting "--"}} + +############################################################################### + +fossil test-th-eval "expr 42/0" +test th1-divide-by-zero-1 {$RESULT eq {TH_ERROR: Divide by 0: 42}} + +############################################################################### + +fossil test-th-eval "expr 42/0.0" +test th1-divide-by-zero-2 {$RESULT eq {TH_ERROR: Divide by 0: 42}} + +############################################################################### + +fossil test-th-eval "expr 42.0/0" +test th1-divide-by-zero-3 {$RESULT eq {TH_ERROR: Divide by 0: 42.0}} + +############################################################################### + +fossil test-th-eval "expr 42.0/0.0" +test th1-divide-by-zero-4 {$RESULT eq {TH_ERROR: Divide by 0: 42.0}} + +############################################################################### + +fossil test-th-eval "expr 42%0" +test th1-modulus-by-zero-1 {$RESULT eq {TH_ERROR: Modulo by 0: 42}} + +############################################################################### + +fossil test-th-eval "expr 42%0.0" +test th1-modulus-by-zero-2 {$RESULT eq {TH_ERROR: expected integer, got: "0.0"}} + +############################################################################### + +fossil test-th-eval "expr 42.0%0" +test th1-modulus-by-zero-3 {$RESULT eq \ +{TH_ERROR: expected integer, got: "42.0"}} + +############################################################################### + +fossil test-th-eval "expr 42.0%0.0" +test th1-modulus-by-zero-4 {$RESULT eq \ +{TH_ERROR: expected integer, got: "42.0"}} + +############################################################################### + +fossil test-th-eval "set var 1; info exists var" +test th1-info-exists-1 {$RESULT eq {1}} + +############################################################################### + +fossil test-th-eval "set var 1; unset var; info exists var" +test th1-info-exists-2 {$RESULT eq {0}} + +############################################################################### + +fossil test-th-eval "set var 1; unset var; set var 2; info exists var" +test th1-info-exists-3 {$RESULT eq {1}} + +############################################################################### + +fossil test-th-eval "set var 1; expr {\$var+0}" +test th1-info-exists-4 {$RESULT eq {1}} + +############################################################################### + +fossil test-th-eval "set var 1; unset var; expr {\$var+0}" +test th1-info-exists-5 {$RESULT eq {TH_ERROR: no such variable: var}} + +############################################################################### + +fossil test-th-eval "catch {bad}; info exists var; set th_stack_trace" +test th1-info-exists-6 {$RESULT eq {bad}} + +############################################################################### + +fossil test-th-eval "set var(1) 1; info exists var" +test th1-info-exists-7 {$RESULT eq {1}} + +############################################################################### + +fossil test-th-eval "set var(1) 1; unset var(1); info exists var" +test th1-info-exists-8 {$RESULT eq {1}} + 
+############################################################################### + +fossil test-th-eval "set var(1) 1; unset var; info exists var" +test th1-info-exists-9 {$RESULT eq {0}} + +############################################################################### + +fossil test-th-eval "set var(1) 1; info exists var(1)" +test th1-info-exists-10 {$RESULT eq {1}} + +############################################################################### + +fossil test-th-eval "set var(1) 1; unset var(1); info exists var(1)" +test th1-info-exists-11 {$RESULT eq {0}} + +############################################################################### + +fossil test-th-eval "set var(1) 1; unset var; info exists var(1)" +test th1-info-exists-12 {$RESULT eq {0}} + +############################################################################### + +fossil test-th-eval "set var 1; unset var" +test th1-unset-1 {$RESULT eq {var}} + +############################################################################### + +fossil test-th-eval "unset var" +test th1-unset-2 {$RESULT eq {TH_ERROR: no such variable: var}} + +############################################################################### + +fossil test-th-eval "set var 1; unset var; unset var" +test th1-unset-3 {$RESULT eq {TH_ERROR: no such variable: var}} + +############################################################################### + +fossil test-th-eval "set gv 1; proc p {} {upvar 1 gv lv; unset lv}; p; unset gv" +test th1-unset-4 {$RESULT eq {TH_ERROR: no such variable: gv}} + +############################################################################### + +fossil test-th-eval "set gv 1; upvar 0 gv gv2; info exists gv2" +test th1-unset-5 {$RESULT eq {1}} + +############################################################################### + +fossil test-th-eval "set gv 1; upvar 0 gv gv2; unset gv; unset gv2" +test th1-unset-6 {$RESULT eq {TH_ERROR: no such variable: gv2}} + +############################################################################### + +fossil test-th-eval "set gv 1; upvar 0 gv gv2(1); unset gv; unset gv2(1)" +test th1-unset-7 {$RESULT eq {TH_ERROR: no such variable: gv2(1)}} + +############################################################################### + +fossil test-th-eval "set gv(1) 1; upvar 0 gv(1) gv2; unset gv(1); unset gv2" +test th1-unset-8 {$RESULT eq {TH_ERROR: no such variable: gv2}} + +############################################################################### + +fossil test-th-eval "string first {} {}" +test th1-string-first-1 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "string first {} {a}" +test th1-string-first-2 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "string first {a} {}" +test th1-string-first-3 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "string first {a} {a}" +test th1-string-first-4 {$RESULT eq {0}} + +############################################################################### + +fossil test-th-eval "string first {a} {aa}" +test th1-string-first-5 {$RESULT eq {0}} + +############################################################################### + +fossil test-th-eval "string first {aa} {a}" +test th1-string-first-6 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "string first 
{ab} {abc}" +test th1-string-first-7 {$RESULT eq {0}} + +############################################################################### + +fossil test-th-eval "string first {bc} {abc}" +test th1-string-first-8 {$RESULT eq {1}} + +############################################################################### + +fossil test-th-eval "string first {AB} {abc}" +test th1-string-first-9 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "string last {} {}" +test th1-string-last-1 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "string last {} {a}" +test th1-string-last-2 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "string last {a} {}" +test th1-string-last-3 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "string last {a} {a}" +test th1-string-last-4 {$RESULT eq {0}} + +############################################################################### + +fossil test-th-eval "string last {a} {aa}" +test th1-string-last-5 {$RESULT eq {1}} + +############################################################################### + +fossil test-th-eval "string last {aa} {a}" +test th1-string-last-6 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "string last {ab} {abc}" +test th1-string-last-7 {$RESULT eq {0}} + +############################################################################### + +fossil test-th-eval "string last {bc} {abc}" +test th1-string-last-8 {$RESULT eq {1}} + +############################################################################### + +fossil test-th-eval "string last {AB} {abc}" +test th1-string-last-9 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "expr -2147483649.0" +test th1-expr-1 {$RESULT eq {-2147483649.0}} + +############################################################################### + +fossil test-th-eval "expr -2147483649" +test th1-expr-2 {$RESULT eq {2147483647}} + +############################################################################### + +fossil test-th-eval "expr -2147483648" +test th1-expr-3 {$RESULT eq {-2147483648}} + +############################################################################### + +fossil test-th-eval "expr -2147483647" +test th1-expr-4 {$RESULT eq {-2147483647}} + +############################################################################### + +fossil test-th-eval "expr -1" +test th1-expr-5 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "expr 0" +test th1-expr-6 {$RESULT eq {0}} + +############################################################################### + +fossil test-th-eval "expr 0.0" +test th1-expr-7 {$RESULT eq {0.0}} + +############################################################################### + +fossil test-th-eval "expr 1" +test th1-expr-8 {$RESULT eq {1}} + +############################################################################### + +fossil test-th-eval "expr 2147483647" +test th1-expr-9 {$RESULT eq {2147483647}} + +############################################################################### + +fossil test-th-eval "expr 2147483648" +test th1-expr-10 {$RESULT eq {2147483648}} + 
+############################################################################### + +fossil test-th-eval "expr 2147483649" +test th1-expr-11 {$RESULT eq {2147483649}} + +############################################################################### + +fossil test-th-eval "expr +2147483649" +test th1-expr-12 {$RESULT eq {-2147483647}} + +############################################################################### + +fossil test-th-eval "expr +2147483649.0" +test th1-expr-13 {$RESULT eq {2147483649.0}} + +############################################################################### + +fossil test-th-eval "expr ~(-1)" +test th1-expr-14 {$RESULT eq {0}} + +############################################################################### + +fossil test-th-eval "expr ~-1" +test th1-expr-15 {$RESULT eq {0}} + +############################################################################### + +fossil test-th-eval "expr ~0" +test th1-expr-16 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "expr ~+0" +test th1-expr-17 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "expr ~-0" +test th1-expr-18 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "expr ~(+0)" +test th1-expr-19 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "expr ~(-0)" +test th1-expr-20 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "expr ~1" +test th1-expr-21 {$RESULT eq {-2}} + +############################################################################### + +fossil test-th-eval "expr ~+1" +test th1-expr-22 {$RESULT eq {-2}} + +############################################################################### + +fossil test-th-eval "expr ~(+1)" +test th1-expr-23 {$RESULT eq {-2}} + +############################################################################### + +fossil test-th-eval "expr 0+0b11" +test th1-expr-24 {$RESULT eq 3} + +############################################################################### + +fossil test-th-eval "expr 0+0o15" +test th1-expr-25 {$RESULT eq 13} + +############################################################################### + +fossil test-th-eval "expr 0+0x15" +test th1-expr-26 {$RESULT eq 21} + +############################################################################### + +fossil test-th-eval "expr 0+0b2" +test th1-expr-27 {$RESULT eq {TH_ERROR: expected number, got: "0b2"}} + +############################################################################### + +fossil test-th-eval "expr 0+0o8" +test th1-expr-28 {$RESULT eq {TH_ERROR: expected number, got: "0o8"}} + +############################################################################### + +fossil test-th-eval "expr 0+0xg" +test th1-expr-29 {$RESULT eq {TH_ERROR: syntax error in expression: "0+0xg"}} + +############################################################################### + +fossil test-th-eval "expr 0+0b1." +test th1-expr-30 {$RESULT eq {TH_ERROR: syntax error in expression: "0+0b1."}} + +############################################################################### + +fossil test-th-eval "expr 0+0o1." 
+test th1-expr-31 {$RESULT eq {TH_ERROR: syntax error in expression: "0+0o1."}} + +############################################################################### + +fossil test-th-eval "expr 0+0x1." +test th1-expr-32 {$RESULT eq {TH_ERROR: syntax error in expression: "0+0x1."}} + +############################################################################### + +fossil test-th-eval "expr 0ne5" +test th1-expr-33 {$RESULT eq {1}} + +############################################################################### + +fossil test-th-eval "expr 0b1+5" +test th1-expr-34 {$RESULT eq {6}} + +############################################################################### + +fossil test-th-eval "expr 0+0b" +test th1-expr-35 {$RESULT eq {TH_ERROR: expected number, got: "0b"}} + +############################################################################### + +fossil test-th-eval "expr (-1)+1" +test th1-expr-36 {$RESULT eq {0}} + +############################################################################### + +fossil test-th-eval "expr (((-1)))" +test th1-expr-37 {$RESULT eq {-1}} + +############################################################################### + +fossil test-th-eval "expr (((1)))" +test th1-expr-38 {$RESULT eq {1}} + +############################################################################### + +fossil test-th-eval "expr (((1))" +test th1-expr-39 {$RESULT eq {TH_ERROR: syntax error in expression: "(((1))"}} + +############################################################################### + +fossil test-th-eval "expr ((1)))" +test th1-expr-40 {$RESULT eq {TH_ERROR: syntax error in expression: "((1)))"}} + +############################################################################### + +fossil test-th-eval "expr (((1)*2)*2)" +test th1-expr-41 {$RESULT eq {4}} + +############################################################################### + +fossil test-th-eval "expr +" +test th1-expr-42 {$RESULT eq {TH_ERROR: syntax error in expression: "+"}} + +############################################################################### + +fossil test-th-eval "expr -" +test th1-expr-43 {$RESULT eq {TH_ERROR: syntax error in expression: "-"}} + +############################################################################### + +fossil test-th-eval "expr ++" +test th1-expr-44 {$RESULT eq {TH_ERROR: syntax error in expression: "++"}} + +############################################################################### + +fossil test-th-eval "expr --" +test th1-expr-45 {$RESULT eq {TH_ERROR: syntax error in expression: "--"}} + +############################################################################### + +fossil test-th-eval "lindex list +" +test th1-expr-46 {$RESULT eq {TH_ERROR: expected integer, got: "+"}} + +############################################################################### + +fossil test-th-eval "lindex list -" +test th1-expr-47 {$RESULT eq {TH_ERROR: expected integer, got: "-"}} + +############################################################################### + +fossil test-th-eval "lindex list +0x" +test th1-expr-48 {$RESULT eq {TH_ERROR: expected integer, got: "+0x"}} + +############################################################################### + +fossil test-th-eval "lindex list -0x" +test th1-expr-49 {$RESULT eq {TH_ERROR: expected integer, got: "-0x"}} + +############################################################################### + +run_in_checkout { + # NOTE: The "1" here forces the checkout to be opened. 
+ fossil test-th-eval "checkout 1" +} + +test th1-checkout-1 {[string length $RESULT] > 0} + +############################################################################### + +run_in_checkout { + if {$th1Hooks} { + fossil test-th-eval "checkout" + } else { + # NOTE: No TH1 hooks, force checkout to be populated. + fossil test-th-eval --open-config "checkout" + } +} + +test th1-checkout-2 {[string length $RESULT] > 0} + +############################################################################### + +set savedPwd [pwd]; cd / +fossil test-th-eval "checkout 1" +cd $savedPwd; unset savedPwd +test th1-checkout-3 {[string length $RESULT] == 0} + +############################################################################### + +set savedPwd [pwd]; cd / +fossil test-th-eval "checkout" +cd $savedPwd; unset savedPwd +test th1-checkout-4 {[string length $RESULT] == 0} + +############################################################################### + +fossil test-th-eval "render {}" +test th1-render-1 {$RESULT eq {}} + +############################################################################### + +fossil test-th-eval "render {$ before set x 123 after $ }" +test th1-render-2 {$RESULT eq {no such variable: x before after 123 }} + +############################################################################### + +fossil test-th-eval "trace {}" +test th1-trace-1 {$RESULT eq {}} + +############################################################################### + +fossil test-th-eval --th-trace "trace {}" +if {$th1Hooks} { + test th1-trace-2 {[normalize_result] eq \ +{------------------ BEGIN TRACE LOG ------------------ +th1-init 0x0 => 0x0
        + +------------------- END TRACE LOG -------------------}} +} else { + test th1-trace-2 {[normalize_result] eq \ + {------------------ BEGIN TRACE LOG ------------------ +th1-init 0x0 => 0x0
        +th1-setup {} => TH_OK
        + +------------------- END TRACE LOG -------------------}} +} + +############################################################################### + +fossil test-th-eval "trace {this is a trace message.}" +test th1-trace-3 {$RESULT eq {}} + +############################################################################### + +fossil test-th-eval --th-trace "trace {this is a trace message.}" +if {$th1Hooks} { + test th1-trace-4 {[normalize_result] eq \ + {------------------ BEGIN TRACE LOG ------------------ +th1-init 0x0 => 0x0
        +this is a trace message. +------------------- END TRACE LOG -------------------}} +} else { + test th1-trace-4 {[normalize_result] eq \ + {------------------ BEGIN TRACE LOG ------------------ +th1-init 0x0 => 0x0
        +th1-setup {} => TH_OK
        +this is a trace message. +------------------- END TRACE LOG -------------------}} +} + +############################################################################### + +fossil test-th-eval "styleHeader {Page Title Here}" +test th1-header-1 {$RESULT eq {TH_ERROR: repository unavailable}} + +############################################################################### + +run_in_checkout { + fossil test-th-eval --open-config "styleHeader {Page Title Here}" +} + +test th1-header-2 {[regexp -- {Fossil: Page Title Here} $RESULT]} + +############################################################################### + +fossil test-th-eval "styleFooter" +test th1-footer-1 {$RESULT eq {TH_ERROR: repository unavailable}} + +############################################################################### + +fossil test-th-eval --open-config "styleFooter" +test th1-footer-2 {$RESULT eq {}} + +############################################################################### + +fossil test-th-eval --open-config --cgi "styleHeader {}; styleFooter" +test th1-footer-3 {[regexp -- {} $RESULT]} + +############################################################################### + +fossil test-th-eval "getParameter" +test th1-get-parameter-1 {$RESULT eq \ + {TH_ERROR: wrong # args: should be "getParameter NAME ?DEFAULT?"}} + +############################################################################### + +fossil test-th-eval "getParameter test1" +test th1-get-parameter-2 {$RESULT eq {}} + +############################################################################### + +fossil test-th-eval "getParameter test1 defValue1" +test th1-get-parameter-3 {$RESULT eq {defValue1}} + +############################################################################### + +fossil test-th-eval "setParameter" +test th1-set-parameter-1 {$RESULT eq \ + {TH_ERROR: wrong # args: should be "setParameter NAME VALUE"}} + +############################################################################### + +fossil test-th-eval "setParameter test1 value1; getParameter test1" +test th1-set-parameter-2 {$RESULT eq {value1}} + +############################################################################### + +fossil test-th-eval "setParameter test2 value2; getParameter test1" +test th1-set-parameter-3 {$RESULT eq {}} + +############################################################################### + +fossil test-th-eval "setParameter test3 value3; getParameter test3" +test th1-set-parameter-4 {$RESULT eq {value3}} + +############################################################################### + +fossil test-th-eval "setParameter test3 value3; getParameter test3 defValue3" +test th1-set-parameter-5 {$RESULT eq {value3}} + +############################################################################### + +fossil test-th-eval "setParameter test4 value4; setParameter test4 value5; getParameter test4" +test th1-set-parameter-6 {$RESULT eq {value5}} + +############################################################################### + +fossil test-th-eval "setParameter test4 value4; setParameter test4 value5; getParameter test4 defValue4" +test th1-set-parameter-7 {$RESULT eq {value5}} + +############################################################################### + +fossil test-th-eval "artifact" +test th1-artifact-1 {$RESULT eq \ + {TH_ERROR: wrong # args: should be "artifact ID ?FILENAME?"}} + +############################################################################### + +fossil test-th-eval "artifact tip" +test th1-artifact-2 
{$RESULT eq {TH_ERROR: repository unavailable}} + +############################################################################### + +run_in_checkout { + fossil test-th-eval --open-config "artifact tip" +} + +test th1-artifact-3 {[regexp -- {F test/th1\.test [0-9a-f]{40}} $RESULT]} + +############################################################################### + +fossil test-th-eval "artifact 0000000000" +test th1-artifact-4 {$RESULT eq {TH_ERROR: repository unavailable}} + +############################################################################### + +fossil test-th-eval --open-config "artifact 0000000000" +test th1-artifact-5 {$RESULT eq {TH_ERROR: name not found}} + +############################################################################### + +fossil test-th-eval "artifact tip test/th1.test" +test th1-artifact-6 {$RESULT eq {TH_ERROR: repository unavailable}} + +############################################################################### + +run_in_checkout { + fossil test-th-eval --open-config "artifact tip test/th1.test" +} + +test th1-artifact-7 {[regexp -- {th1-artifact-7} $RESULT]} + +############################################################################### + +fossil test-th-eval "artifact 0000000000 test/th1.test" +test th1-artifact-8 {$RESULT eq {TH_ERROR: repository unavailable}} + +############################################################################### + +fossil test-th-eval --open-config "artifact 0000000000 test/th1.test" +test th1-artifact-9 {$RESULT eq {TH_ERROR: manifest not found}} + +############################################################################### + +run_in_checkout { + if {$th1Hooks} { + fossil test-th-eval "globalState checkout" + } else { + # NOTE: No TH1 hooks, force checkout to be populated. + fossil test-th-eval --open-config "globalState checkout" + } +} + +test th1-globalState-1 {[string length $RESULT] > 0} + +############################################################################### + +run_in_checkout { + if {$th1Hooks} { + fossil test-th-eval "globalState checkout" + test th1-globalState-2 {$RESULT eq [fossil test-th-eval checkout]} + } else { + # NOTE: No TH1 hooks, force checkout to be populated. 
+ fossil test-th-eval --open-config "globalState checkout" + + test th1-globalState-2 {$RESULT eq \ + [fossil test-th-eval --open-config checkout]} + } +} + +############################################################################### + +fossil test-th-eval "globalState configuration" +test th1-globalState-3 {[string length $RESULT] == 0} + +############################################################################### + +fossil test-th-eval --open-config "globalState configuration" +test th1-globalState-4 {[string length $RESULT] > 0} + +############################################################################### + +fossil test-th-eval "globalState executable" +test th1-globalState-5 {[file rootname [file tail $RESULT]] eq "fossil"} + +############################################################################### + +fossil test-th-eval "globalState log" +test th1-globalState-6 {[string length $RESULT] == 0} + +############################################################################### + +fossil test-th-eval --errorlog foserrors.log "globalState log" +test th1-globalState-7 {$RESULT eq "foserrors.log"} + +############################################################################### + +run_in_checkout { + if {$th1Hooks} { + fossil test-th-eval "globalState repository" + } else { + # NOTE: No TH1 hooks, force repository to be populated. + fossil test-th-eval --open-config "globalState repository" + } +} + +test th1-globalState-8 {[string length $RESULT] > 0} + +############################################################################### + +run_in_checkout { + if {$th1Hooks} { + fossil test-th-eval "globalState repository" + test th1-globalState-9 {$RESULT eq [fossil test-th-eval repository]} + } else { + # NOTE: No TH1 hooks, force repository to be populated. 
+ fossil test-th-eval --open-config "globalState repository" + + test th1-globalState-9 {$RESULT eq \ + [fossil test-th-eval --open-config repository]} + } +} + +############################################################################### + +fossil test-th-eval "globalState top" +test th1-globalState-10 {[string length $RESULT] == 0} + +############################################################################### + +fossil test-th-eval "globalState user" +test th1-globalState-11 {[string length $RESULT] == 0} + +############################################################################### + +fossil test-th-eval --user fossil-th1-test "globalState user" +test th1-globalState-12 {$RESULT eq "fossil-th1-test"} + +############################################################################### + +fossil test-th-eval "globalState vfs" +test th1-globalState-13 {[string length $RESULT] == 0} + +############################################################################### + +fossil test-th-eval "globalState vfs" +test th1-globalState-14 {[string length $RESULT] == 0} + +############################################################################### + +if {$tcl_platform(platform) eq "windows"} then { + set altVfs win32-longpath +} else { + set altVfs unix-dotfile +} + +############################################################################### + +fossil test-th-eval --vfs $altVfs "globalState vfs" +test th1-globalState-15 {$RESULT eq $altVfs} + +############################################################################### + +fossil test-th-eval "globalState flags" +test th1-globalState-16 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval "reinitialize; globalState configuration" +test th1-reinitialize-1 {$RESULT eq ""} + +############################################################################### + +fossil test-th-eval "reinitialize 1; globalState configuration" +test th1-reinitialize-2 {$RESULT ne ""} + +############################################################################### + +# +# NOTE: This test will fail if the command names are added to TH1, or +# moved from Tcl builds to plain or the reverse. Sorting the +# command lists eliminates a dependence on order. 
+# +fossil test-th-eval "info commands" +set sorted_result [lsort $RESULT] +protOut "Sorted: $sorted_result" +set base_commands {anoncap anycap array artifact break breakpoint catch\ + checkout combobox continue date decorate dir enable_output encode64\ + error expr for getParameter glob_match globalState hascap hasfeature\ + html htmlize http httpize if info insertCsrf lindex linecount list\ + llength lsearch markdown proc puts query randhex redirect regexp\ + reinitialize rename render repository return searchable set\ + setParameter setting stime string styleFooter styleHeader tclReady\ + trace unset uplevel upvar utime verifyCsrf wiki} +set tcl_commands {tclEval tclExpr tclInvoke tclIsSafe tclMakeSafe} +if {$th1Tcl} { + test th1-info-commands-1 {$sorted_result eq [lsort "$base_commands $tcl_commands"]} +} else { + test th1-info-commands-1 {$sorted_result eq [lsort "$base_commands"]} +} + + +############################################################################### + +fossil test-th-eval "info vars" + +if {$th1Hooks} { + test th1-info-vars-1 {[lsort $RESULT] eq \ + [lsort "th_stack_trace cmd_flags tcl_platform cmd_name cmd_args"]} +} else { + test th1-info-vars-1 {$RESULT eq "tcl_platform"} +} + +############################################################################### + +fossil test-th-eval "set x 1; info vars" + +if {$th1Hooks} { + test th1-info-vars-2 {[lsort $RESULT] eq \ + [lsort "x th_stack_trace cmd_flags tcl_platform cmd_name cmd_args"]} +} else { + test th1-info-vars-2 {[lsort $RESULT] eq [lsort "x tcl_platform"]} +} + +############################################################################### + +fossil test-th-eval "set x 1; unset x; info vars" + +if {$th1Hooks} { + test th1-info-vars-3 {[lsort $RESULT] eq \ + [lsort "th_stack_trace cmd_flags tcl_platform cmd_name cmd_args"]} +} else { + test th1-info-vars-3 {$RESULT eq "tcl_platform"} +} + +############################################################################### + +fossil test-th-eval "proc foo {} {set x 1; info vars}; foo" +test th1-info-vars-4 {$RESULT eq "x"} + +############################################################################### + +fossil test-th-eval "set y 1; proc foo {} {set x 1; uplevel 1 {info vars}}; foo" + +if {$th1Hooks} { + test th1-info-vars-5 {[lsort $RESULT] eq \ + [lsort "th_stack_trace y cmd_flags tcl_platform cmd_name cmd_args"]} +} else { + test th1-info-vars-5 {[lsort $RESULT] eq [lsort "y tcl_platform"]} +} + +############################################################################### + +fossil test-th-eval "array exists foo" +test th1-array-exists-1 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval "set foo(x) 1; array exists foo" +test th1-array-exists-2 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval "set foo(x) 1; unset foo(x); array exists foo" +test th1-array-exists-3 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval "set foo(x) 1; unset foo; array exists foo" +test th1-array-exists-4 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval "set foo 1; array exists foo" +test th1-array-exists-5 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval "array names foo" +test th1-array-names-1 {$RESULT eq ""} + 
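+# +# NOTE: Any of these cases can also be reproduced by hand outside the test +#       harness; assuming a fossil binary on your PATH, running for example +# +#           fossil test-th-eval "array names foo" +# +#       prints the raw TH1 result that the wrappers above capture in $RESULT. +#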
+############################################################################### + +fossil test-th-eval "set foo 2; array names foo" +test th1-array-names-2 {$RESULT eq ""} + +############################################################################### + +fossil test-th-eval "set foo 2; unset foo; set foo(x) 2; array names foo" +test th1-array-names-3 {$RESULT eq "x"} + +############################################################################### + +fossil test-th-eval "set foo(x) 2; array names foo" +test th1-array-names-4 {$RESULT eq "x"} + +############################################################################### + +fossil test-th-eval "set foo(x) 2; set foo(y) 2; array names foo" +test th1-array-names-5 {$RESULT eq "x y"} + +############################################################################### + +fossil test-th-eval "set foo(x) 2; unset foo(x); array names foo" +test th1-array-names-6 {$RESULT eq ""} + +############################################################################### + +fossil test-th-eval "lsearch" +test th1-lsearch-1 {$RESULT eq \ + {TH_ERROR: wrong # args: should be "lsearch list string"}} + +############################################################################### + +fossil test-th-eval "lsearch a" +test th1-lsearch-2 {$RESULT eq \ + {TH_ERROR: wrong # args: should be "lsearch list string"}} + +############################################################################### + +fossil test-th-eval "lsearch a a a" +test th1-lsearch-3 {$RESULT eq \ + {TH_ERROR: wrong # args: should be "lsearch list string"}} + +############################################################################### + +fossil test-th-eval "lsearch {a b c} a" +test th1-lsearch-4 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval "lsearch {a b c} b" +test th1-lsearch-5 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval "lsearch {a b c} c" +test th1-lsearch-6 {$RESULT eq "2"} + +############################################################################### + +fossil test-th-eval "lsearch {a b c} d" +test th1-lsearch-7 {$RESULT eq "-1"} + +############################################################################### + +fossil test-th-eval "lsearch {a b c} aa" +test th1-lsearch-8 {$RESULT eq "-1"} + +############################################################################### + +fossil test-th-eval "lsearch {aa b c} a" +test th1-lsearch-9 {$RESULT eq "-1"} + +############################################################################### + +fossil test-th-eval "lsearch \"\{aa b c\" a" +test th1-lsearch-10 {$RESULT eq "TH_ERROR: Expected list, got: \"\{aa b c\""} + +############################################################################### + +fossil test-th-eval "glob_match" +test th1-glob-match-1 {$RESULT eq \ +{TH_ERROR: wrong # args: should be "glob_match ?-one? ?--? patternList string"}} + +############################################################################### + +fossil test-th-eval "glob_match -one" +test th1-glob-match-2 {$RESULT eq \ +{TH_ERROR: wrong # args: should be "glob_match ?-one? ?--? patternList string"}} + +############################################################################### + +fossil test-th-eval "glob_match --" +test th1-glob-match-3 {$RESULT eq \ +{TH_ERROR: wrong # args: should be "glob_match ?-one? ?--? 
patternList string"}} + +############################################################################### + +fossil test-th-eval "glob_match -one --" +test th1-glob-match-4 {$RESULT eq \ +{TH_ERROR: wrong # args: should be "glob_match ?-one? ?--? patternList string"}} + +############################################################################### + +fossil test-th-eval "glob_match -one -- 1" +test th1-glob-match-5 {$RESULT eq \ +{TH_ERROR: wrong # args: should be "glob_match ?-one? ?--? patternList string"}} + +############################################################################### + +fossil test-th-eval "glob_match -one -- 1 2 3" +test th1-glob-match-6 {$RESULT eq \ +{TH_ERROR: wrong # args: should be "glob_match ?-one? ?--? patternList string"}} + +############################################################################### + +fossil test-th-eval {list [glob_match a A] [glob_match A a]} +test th1-glob-match-7 {$RESULT eq "0 0"} + +############################################################################### + +fossil test-th-eval {list [glob_match a,b a] [glob_match a,b b]} +test th1-glob-match-8 {$RESULT eq "1 2"} + +############################################################################### + +fossil test-th-eval {list [glob_match -one a,b a] [glob_match -one a,b b]} +test th1-glob-match-9 {$RESULT eq "0 0"} + +############################################################################### + +fossil test-th-eval {list [glob_match -one a,b a,b] [glob_match -one a b,a]} +test th1-glob-match-10 {$RESULT eq "1 0"} + +############################################################################### + +fossil test-th-eval {list [glob_match a*c abc] [glob_match abc a*c]} +test th1-glob-match-11 {$RESULT eq "1 0"} + +############################################################################### + +fossil test-th-eval {list [glob_match a?c abc] [glob_match abc a?c]} +test th1-glob-match-12 {$RESULT eq "1 0"} + +############################################################################### + +fossil test-th-eval {list [glob_match {a[bd]c} abc] [glob_match abc {a[bd]c}]} +test th1-glob-match-13 {$RESULT eq "1 0"} + +############################################################################### + +fossil test-th-eval {string is} +test th1-string-is-1 {$RESULT eq \ +{TH_ERROR: wrong # args: should be "string is class string"}} + +############################################################################### + +fossil test-th-eval {string is something} +test th1-string-is-2 {$RESULT eq \ +{TH_ERROR: wrong # args: should be "string is class string"}} + +############################################################################### + +fossil test-th-eval {string is not something else} +test th1-string-is-3 {$RESULT eq \ +{TH_ERROR: wrong # args: should be "string is class string"}} + +############################################################################### + +fossil test-th-eval {string is other 123} +test th1-string-is-4 {$RESULT eq \ +"TH_ERROR: Expected alnum, double, integer, or list, got: other"} + +############################################################################### + +fossil test-th-eval {string is alnum 123} +test th1-string-is-5 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval {string is alnum abc} +test th1-string-is-6 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval {string is alnum 123abc} +test 
th1-string-is-7 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval {string is alnum abc123} +test th1-string-is-8 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval {string is alnum _abc123} +test th1-string-is-9 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval {string is alnum abc.123} +test th1-string-is-10 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval {string is alnum abc123_} +test th1-string-is-11 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval {string is list ""} +test th1-string-is-12 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval {string is list 1} +test th1-string-is-13 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval {string is list "1 2 3"} +test th1-string-is-14 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval {string is list "\{"} +test th1-string-is-15 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval {string is list "1 2 3 \{"} +test th1-string-is-16 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval {string is list "1 2 3 \{\}"} +test th1-string-is-17 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval {string is list "1 2 3 \{\{\}"} +test th1-string-is-18 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval {string is double 123} +test th1-string-is-19 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval {string is double 123.456} +test th1-string-is-20 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval {string is double 123abc} +test th1-string-is-21 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval {string is double 123_456} +test th1-string-is-22 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval {string is integer 123} +test th1-string-is-23 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval {string is integer 123.456} +test th1-string-is-24 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval {string is integer 123abc} +test th1-string-is-25 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval {string is integer 0b11001001} +test th1-string-is-26 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval {string is integer 0b11001002} +test th1-string-is-27 {$RESULT eq "0"} + 
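+# +# NOTE: th1-string-is-26 through th1-string-is-31 document that the integer +#       class accepts the 0b, 0o, and 0x radix prefixes, and that a single +#       digit outside the radix (0b...2, 0o...8, 0x...Z) makes the whole +#       literal invalid. +#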
+############################################################################### + +fossil test-th-eval {string is integer 0o777} +test th1-string-is-28 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval {string is integer 0o778} +test th1-string-is-29 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval {string is integer 0xC0DEF00D} +test th1-string-is-30 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval {string is integer 0xC0DEF00Z} +test th1-string-is-31 {$RESULT eq "0"} + +############################################################################### + +fossil test-th-eval {markdown} +test th1-markdown-1 {$RESULT eq \ +{TH_ERROR: wrong # args: should be "markdown STRING"}} + +############################################################################### + +fossil test-th-eval {markdown one two} +test th1-markdown-2 {$RESULT eq \ +{TH_ERROR: wrong # args: should be "markdown STRING"}} + +############################################################################### + +fossil test-th-eval {markdown "*This is a test.*"} +test th1-markdown-3 {[normalize_result] eq {{} {
        + +

        This is a test.

        + +
        +}}} + +############################################################################### + +fossil test-th-eval {markdown "Test1\n=====\n*This is a test.*"} +test th1-markdown-4 {[normalize_result] eq {Test1 {
        + +

        This is a test.

        + +
        +}}} + +############################################################################### + +set markdown [read_file [file join $dir markdown-test1.md]] +fossil test-th-eval [string map \ + [list %markdown% $markdown] {markdown {%markdown%}}] +test th1-markdown-5 {[normalize_result] eq \ +{{Markdown Formatter Test Document} {
        + +

        This document is designed to test the markdown formatter.

        + +
          +
        • A bullet item. + +
            +
          • A subitem
          • +
        • +
        • Second bullet
        • +
        + +

        More text

        + +
          +
        1. Enumeration +1.1. Subitem 1 +1.2. Subitem 2
        2. +
        3. Second enumeration.
        4. +
        + +

        Another paragraph.

        + +

        Other Features

        +

        Text can show emphasis or emphasis or strong emphassis.

        + +
        +}}} + +############################################################################### + +fossil test-th-eval {encode64 test} +test th1-encode64-1 {$RESULT eq "dGVzdA=="} + +############################################################################### + +fossil test-th-eval {encode64 test\x00} +test th1-encode64-2 {$RESULT eq "dGVzdAA="} + +############################################################################### + +# +# TODO: Modify the result of this test if the source file (i.e. +# "ajax/cgi-bin/fossil-json.cgi.example") changes. +# +run_in_checkout { + fossil test-th-eval --open-config \ + {encode64 [artifact trunk ajax/cgi-bin/fossil-json.cgi.example]} +} + +test th1-encode64-3 {$RESULT eq \ +"IyEvcGF0aC90by9mb3NzaWwvYmluYXJ5CnJlcG9zaXRvcnk6IC9wYXRoL3RvL3JlcG8uZnNsCg=="} + +############################################################################### + +fossil test-th-eval {array exists tcl_platform} +test th1-platform-1 {$RESULT eq "1"} + +############################################################################### + +fossil test-th-eval {array names tcl_platform} +test th1-platform-2 {$RESULT eq "engine platform"} + +############################################################################### + +fossil test-th-eval {set tcl_platform(engine)} +test th1-platform-3 {$RESULT eq "TH1"} + +############################################################################### + +fossil test-th-eval {set tcl_platform(platform)} +test th1-platform-4 {$RESULT eq "windows" || $RESULT eq "unix"} + +############################################################################### + +set th1FileName [file join $::tempPath th1-[pid].th1] + +write_file $th1FileName { + set x "" + for {set i 0} {$i < 10} {set i [expr {$i + 1}]} { + set x "$x $i" + } + return [string trim $x] + set y; # NOTE: Never hit. +} + +fossil test-th-source $th1FileName +test th1-source-1 {$RESULT eq {TH_RETURN: 0 1 2 3 4 5 6 7 8 9}} +file delete $th1FileName ADDED test/update-test-1.sh Index: test/update-test-1.sh ================================================================== --- test/update-test-1.sh +++ test/update-test-1.sh @@ -0,0 +1,44 @@ +#!/bin/sh +# +# Run this script in an empty directory. A single argument is the full +# pathname of the fossil binary. Example: +# +# sh update-test-1.sh /home/drh/fossil/m1/fossil +# +export FOSSIL=$1 +rm -rf aaa bbb update-test-1.fossil + +# Create a test repository +$FOSSIL new update-test-1.fossil + +# In checkout aaa, add file one.txt +mkdir aaa +cd aaa +$FOSSIL open ../update-test-1.fossil +echo one >one.txt +$FOSSIL add one.txt +$FOSSIL commit -m add-one --tag add-one + +# Open checkout bbb. +mkdir ../bbb +cd ../bbb +$FOSSIL open ../update-test-1.fossil + +# Back in aaa, add file two.txt +cd ../aaa +echo two >two.txt +$FOSSIL add two.txt +$FOSSIL commit -m add-two --tag add-two + +# In bbb, delete file one.txt. Then update the change from aaa that +# adds file two. Verify that one.txt says deleted. 
+cd ../bbb +$FOSSIL rm one.txt +$FOSSIL changes +echo '========================================================================' +$FOSSIL update +echo '======== The previous should show "ADD two.txt" ========================' +$FOSSIL changes +echo '======== The previous should show "DELETE one.txt" =====================' +$FOSSIL commit --test -m check-in +echo '======== Only file two.txt is checked in ===============================' ADDED test/update-test-2.sh Index: test/update-test-2.sh ================================================================== --- test/update-test-2.sh +++ test/update-test-2.sh @@ -0,0 +1,44 @@ +#!/bin/sh +# +# Run this script in an empty directory. A single argument is the full +# pathname of the fossil binary. Example: +# +# sh update-test-2.sh /home/drh/fossil/m1/fossil +# +export FOSSIL=$1 +rm -rf aaa bbb update-test-2.fossil + +# Create a test repository +$FOSSIL new update-test-2.fossil + +# In checkout aaa, add file one.txt. +mkdir aaa +cd aaa +$FOSSIL open ../update-test-2.fossil +echo one >one.txt +$FOSSIL add one.txt +$FOSSIL commit -m add-one --tag add-one + +# Create checkout bbb. +mkdir ../bbb +cd ../bbb +$FOSSIL open ../update-test-2.fossil + +# Back in aaa, make changes to one.txt. Add file two.txt. +cd ../aaa +echo change >>one.txt +echo two >two.txt +$FOSSIL add two.txt +$FOSSIL commit -m 'chng one and add two' --tag add-two + +# In bbb, remove one.txt, then update. +cd ../bbb +$FOSSIL rm one.txt +$FOSSIL changes +echo '========================================================================' +$FOSSIL update +echo '======== Previous should show "ADD two.txt" and conflict on one.txt ====' +$FOSSIL changes +echo '======== The previous should show "DELETE one.txt" =====================' +$FOSSIL commit --test -m 'check-in' +echo '======== Only file two.txt is checked in ===============================' ADDED test/utf.test Index: test/utf.test ================================================================== --- test/utf.test +++ test/utf.test @@ -0,0 +1,23494 @@ +# +# Copyright (c) 2013 D. Richard Hipp +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the Simplified BSD License (also +# known as the "2-Clause License" or "FreeBSD License".) +# +# This program is distributed in the hope that it will be useful, +# but without any warranty; without even the implied warranty of +# merchantability or fitness for a particular purpose. +# +# Author contact information: +# drh@hwaci.com +# http://www.hwaci.com/drh/ +# +############################################################################ +# +# Test UTF-8/UTF-16 detection +# + +proc swap_byte_order {str} { + set result "" + for {set i 0} {$i < [string length $str]} {incr i} { + set c [scan [string index $str $i] %c] + set c [expr {(($c << 8) & 0xFF00) | (($c >> 8) & 0xFF)}] + append result [format %c $c] + } + return $result +} + +proc utf-check {testname args} { + global tempPath + set i 1 + foreach {fileName result} $args { + set fileName [file join $tempPath $fileName] + fossil test-looks-like-utf $fileName + set result [string map [list %TEMP% $tempPath \r\n \n] $result] + # if {$::RESULT ne $result} {puts stdout $::RESULT} + test utf-check-$testname.$i {$::RESULT eq $result} + incr i + } +} + +unset -nocomplain enc +array set enc [list \ + 0 binary \ + 1 binary \ + 2 unicode \ + 3 unicode-reverse \ +] + +unset -nocomplain bom +array set bom [list \ + 0 "" \ + 1 \xEF\xBB\xBF \ + 2 [expr {$tcl_platform(byteOrder) eq "littleEndian" ? 
\ + "\xFF\xFE" : "\xFE\xFF"}] \ + 3 [expr {$tcl_platform(byteOrder) eq "littleEndian" ? \ + "\xFE\xFF" : "\xFF\xFE"}] \ +] + +unset -nocomplain data +array set data [list \ + 0 "" \ + 1 \r \ + 2 \n \ + 3 \r\n \ + 4 \rA \ + 5 \rAB \ + 6 \rABC \ + 7 \rABCD \ + 8 \nA \ + 9 \nAB \ + 10 \nABC \ + 11 \nABCD \ + 12 \r\nA \ + 13 \r\nAB \ + 14 \r\nABC \ + 15 \r\nABCD \ + 16 A \ + 17 AB \ + 18 ABC \ + 19 ABCD \ + 20 A\r \ + 21 AB\r \ + 22 ABC\r \ + 23 ABCD\r \ + 24 A\n \ + 25 AB\n \ + 26 ABC\n \ + 27 ABCD\n \ + 28 A\r\n \ + 29 AB\r\n \ + 30 ABC\r\n \ + 31 ABCD\r\n \ + 32 [appendArgs \x00 A] \ + 33 [appendArgs \x00 AB] \ + 34 [appendArgs \x00 ABC] \ + 35 [appendArgs \x00 ABCD] \ + 36 [appendArgs \x00 A\r] \ + 37 [appendArgs \x00 AB\r] \ + 38 [appendArgs \x00 ABC\r] \ + 39 [appendArgs \x00 ABCD\r] \ + 40 [appendArgs \x00 A\n] \ + 41 [appendArgs \x00 AB\n] \ + 42 [appendArgs \x00 ABC\n] \ + 43 [appendArgs \x00 ABCD\n] \ + 44 [appendArgs \x00 A\r\n] \ + 45 [appendArgs \x00 AB\r\n] \ + 46 [appendArgs \x00 ABC\r\n] \ + 47 [appendArgs \x00 ABCD\r\n] \ + 48 A\x00 \ + 49 AB\x00 \ + 50 ABC\x00 \ + 51 ABCD\x00 \ + 52 A\x00\r \ + 53 AB\x00\r \ + 54 ABC\x00\r \ + 55 ABCD\x00\r \ + 56 A\x00\n \ + 57 AB\x00\n \ + 58 ABC\x00\n \ + 59 ABCD\x00\n \ + 60 A\x00\r\n \ + 61 AB\x00\r\n \ + 62 ABC\x00\r\n \ + 63 ABCD\x00\r\n \ + 64 [appendArgs \x00 A\x00] \ + 65 [appendArgs \x00 AB\x00] \ + 66 [appendArgs \x00 ABC\x00] \ + 67 [appendArgs \x00 ABCD\x00] \ + 68 [appendArgs \x00 A\x00\r] \ + 69 [appendArgs \x00 AB\x00\r] \ + 70 [appendArgs \x00 ABC\x00\r] \ + 71 [appendArgs \x00 ABCD\x00\r] \ + 72 [appendArgs \x00 A\x00\n] \ + 73 [appendArgs \x00 AB\x00\n] \ + 74 [appendArgs \x00 ABC\x00\n] \ + 75 [appendArgs \x00 ABCD\x00\n] \ + 76 [appendArgs \x00 A\x00\r\n] \ + 77 [appendArgs \x00 AB\x00\r\n] \ + 78 [appendArgs \x00 ABC\x00\r\n] \ + 79 [appendArgs \x00 ABCD\x00\r\n] \ + 80 [string repeat A 8193] \ + 81 [string repeat A 8193]\r \ + 82 [string repeat A 8193]\n \ + 83 [string repeat A 8193]\r\n \ + 84 [string repeat ABCD 2049] \ + 85 [string repeat ABCD 2049]\r \ + 86 [string repeat ABCD 2049]\n \ + 87 [string repeat ABCD 2049]\r\n \ + 88 \x00[string repeat A 8193] \ + 89 \x00[string repeat A 8193]\r \ + 90 \x00[string repeat A 8193]\n \ + 91 \x00[string repeat A 8193]\r\n \ + 92 \x00[string repeat ABCD 2049] \ + 93 \x00[string repeat ABCD 2049]\r \ + 94 \x00[string repeat ABCD 2049]\n \ + 95 \x00[string repeat ABCD 2049]\r\n \ + 96 [string repeat A 8193]\x00 \ + 97 [string repeat A 8193]\x00\r \ + 98 [string repeat A 8193]\x00\n \ + 99 [string repeat A 8193]\x00\r\n \ + 100 [string repeat ABCD 2049]\x00 \ + 101 [string repeat ABCD 2049]\x00\r \ + 102 [string repeat ABCD 2049]\x00\n \ + 103 [string repeat ABCD 2049]\x00\r\n \ + 104 \x00[string repeat A 8193]\x00 \ + 105 \x00[string repeat A 8193]\x00\r \ + 106 \x00[string repeat A 8193]\x00\n \ + 107 \x00[string repeat A 8193]\x00\r\n \ + 108 \x00[string repeat ABCD 2049]\x00 \ + 109 \x00[string repeat ABCD 2049]\x00\r \ + 110 \x00[string repeat ABCD 2049]\x00\n \ + 111 \x00[string repeat ABCD 2049]\x00\r\n \ + 112 \u000A\u000D \ + 113 \u0A00\u0D00 \ + 114 \u000D\u000A \ + 115 \u0D00\u0A00 \ + 116 \x00\u000A\u000D \ + 117 \x00\u0A00\u0D00 \ + 118 \x00\u000D\u000A \ + 119 \x00\u0D00\u0A00 \ + 120 \u000A\u000D\x00 \ + 121 \u0A00\u0D00\x00 \ + 122 \u000D\u000A\x00 \ + 123 \u0D00\u0A00\x00 \ + 124 \x00\u000A\u000D\x00 \ + 125 \x00\u0A00\u0D00\x00 \ + 126 \x00\u000D\u000A\x00 \ + 127 \x00\u0D00\u0A00\x00 \ + 128 \uD800\uDC00 \ + 129 \uDC00\uD800 \ + 130 \uFEFF \ + 131 \uFFFE \ + 
132 \uFFFF \ + 133 \x80\x00 \ + 134 \x80\r \ + 135 \x80\n \ + 136 \x80\r\n \ + 137 \xC0\x00 \ + 138 \xC0\r \ + 139 \xC0\n \ + 140 \xC0\r\n \ + 141 \xF8\x00 \ + 142 \xF8\r \ + 143 \xF8\n \ + 144 \xF8\r\n \ + 145 \xE0\x80\x00 \ + 146 \xE0\x80\r \ + 147 \xE0\x80\n \ + 148 \xE0\x80\r\n \ + 149 \xF0\x80\x80\x00 \ + 150 \xF0\x80\x80\r \ + 151 \xF0\x80\x80\n \ + 152 \xF0\x80\x80\r\n \ + 153 \xF0\x80\x80\x80\x00 \ + 154 \xF0\x80\x80\x80\r \ + 155 \xF0\x80\x80\x80\n \ + 156 \xF0\x80\x80\x80\r\n \ + 157 \xC0\x80\x00 \ + 158 \xC0\x80\r \ + 159 \xC0\x80\n \ + 160 \xC0\x80\r\n \ + 161 \xC0\x81\x00 \ + 162 \xC0\x81\r \ + 163 \xC0\x81\n \ + 164 \xC0\x81\r\n \ + 165 \xC1\x80\x00 \ + 166 \xC1\x80\r \ + 167 \xC1\x80\n \ + 168 \xC1\x80\r\n \ + 169 \xE0\x80\x80\x00 \ + 170 \xE0\x80\x80\r \ + 171 \xE0\x80\x80\n \ + 172 \xE0\x80\x80\r\n \ + 173 \xF4\x8F\xBF\xBF\x00 \ + 174 \xF4\x8F\xBF\xBF\r \ + 175 \xF4\x8F\xBF\xBF\n \ + 176 \xF4\x8F\xBF\xBF\r\n \ + 177 \xF4\x90\x80\x80\x00 \ + 178 \xF4\x90\x80\x80\r \ + 179 \xF4\x90\x80\x80\n \ + 180 \xF4\x90\x80\x80\r\n \ +] + +unset -nocomplain extraData +array set extraData [list \ + 0 "" \ + 1 Z \ +] + +proc deleteTestFiles {path num} { + set fn $num + for {set i 0} {$i < [array size ::bom]} {incr i} { + for {set j 0} {$j < [array size ::data]} {incr j} { + for {set k 0} {$k < [array size ::extraData]} {incr k} { + file delete [file join $path utf-check-$fn-$i-$j-$k.jnk] + incr fn + } + } + } +} + +proc createTestFiles {path num} { + set fn $num + for {set i 0} {$i < [array size ::bom]} {incr i} { + for {set j 0} {$j < [array size ::data]} {incr j} { + for {set k 0} {$k < [array size ::extraData]} {incr k} { + set f [open [file join $path utf-check-$fn-$i-$j-$k.jnk] \ + {WRONLY CREAT TRUNC}]; incr fn + fconfigure $f -encoding binary -translation binary + puts -nonewline $f $::bom($i) + switch -exact $::enc($i) { + binary { + puts -nonewline $f $::data($j) + } + unicode-reverse { + puts -nonewline $f [swap_byte_order \ + [encoding convertto unicode $::data($j)]] + } + default { + puts -nonewline $f [encoding convertto $::enc($i) $::data($j)] + } + } + puts -nonewline $f $::extraData($k) + flush $f; close $f + } + } + } +} + +# +# NOTE: This procedure is used to generate the actual tests based on the data +# in the test arrays (above). It needs to be used whenever additional +# test data is added (i.e. to regenerate the test and their results with +# the correct numbering). +# +proc createTestResults {path num} { + set f [open [file join $path utf-check.txt] {WRONLY CREAT APPEND}] + set fn $num + for {set i 0} {$i < [array size ::bom]} {incr i} { + for {set j 0} {$j < [array size ::data]} {incr j} { + for {set k 0} {$k < [array size ::extraData]} {incr k} { + fconfigure $f -encoding binary -translation binary + set data \n\n + append data {utf-check $fn } + append data {utf-check-$fn-$i-$j-$k.jnk \\\n} + append data {{%OUT%}} + fossil test-looks-like-utf [file join $path utf-check-$fn-$i-$j-$k.jnk] + puts -nonewline $f [string map [list %OUT% [string map [list \ + $::tempPath %TEMP%] $::RESULT]] [subst $data]] + incr fn + } + } + } + flush $f; close $f +} + +createTestFiles $tempPath 100 +# createTestResults $tempPath 100 +########################### BEGIN GENERATED SECTION ########################### + +utf-check 100 utf-check-100-0-0-0.jnk \ +{File "%TEMP%/utf-check-100-0-0-0.jnk" has 0 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 101 utf-check-101-0-0-1.jnk \ +{File "%TEMP%/utf-check-101-0-0-1.jnk" has 1 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 102 utf-check-102-0-1-0.jnk \ +{File "%TEMP%/utf-check-102-0-1-0.jnk" has 1 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 103 utf-check-103-0-1-1.jnk \ +{File "%TEMP%/utf-check-103-0-1-1.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 104 utf-check-104-0-2-0.jnk \ +{File "%TEMP%/utf-check-104-0-2-0.jnk" has 1 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 105 utf-check-105-0-2-1.jnk \ +{File "%TEMP%/utf-check-105-0-2-1.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 106 utf-check-106-0-3-0.jnk \ +{File "%TEMP%/utf-check-106-0-3-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 107 utf-check-107-0-3-1.jnk \ +{File "%TEMP%/utf-check-107-0-3-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 108 utf-check-108-0-4-0.jnk \ +{File "%TEMP%/utf-check-108-0-4-0.jnk" has 2 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 109 utf-check-109-0-4-1.jnk \ +{File "%TEMP%/utf-check-109-0-4-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 110 utf-check-110-0-5-0.jnk \ +{File "%TEMP%/utf-check-110-0-5-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 111 utf-check-111-0-5-1.jnk \ +{File "%TEMP%/utf-check-111-0-5-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 112 utf-check-112-0-6-0.jnk \ +{File "%TEMP%/utf-check-112-0-6-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 113 utf-check-113-0-6-1.jnk \ +{File "%TEMP%/utf-check-113-0-6-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 114 utf-check-114-0-7-0.jnk \ +{File "%TEMP%/utf-check-114-0-7-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 115 utf-check-115-0-7-1.jnk \ +{File "%TEMP%/utf-check-115-0-7-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 116 utf-check-116-0-8-0.jnk \ +{File "%TEMP%/utf-check-116-0-8-0.jnk" has 2 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 117 utf-check-117-0-8-1.jnk \ +{File "%TEMP%/utf-check-117-0-8-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 118 utf-check-118-0-9-0.jnk \ +{File "%TEMP%/utf-check-118-0-9-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 119 utf-check-119-0-9-1.jnk \ +{File "%TEMP%/utf-check-119-0-9-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 120 utf-check-120-0-10-0.jnk \ +{File "%TEMP%/utf-check-120-0-10-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 121 utf-check-121-0-10-1.jnk \ +{File "%TEMP%/utf-check-121-0-10-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 122 utf-check-122-0-11-0.jnk \ +{File "%TEMP%/utf-check-122-0-11-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 123 utf-check-123-0-11-1.jnk \ +{File "%TEMP%/utf-check-123-0-11-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 124 utf-check-124-0-12-0.jnk \ +{File "%TEMP%/utf-check-124-0-12-0.jnk" has 3 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 125 utf-check-125-0-12-1.jnk \ +{File "%TEMP%/utf-check-125-0-12-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 126 utf-check-126-0-13-0.jnk \ +{File "%TEMP%/utf-check-126-0-13-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 127 utf-check-127-0-13-1.jnk \ +{File "%TEMP%/utf-check-127-0-13-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 128 utf-check-128-0-14-0.jnk \ +{File "%TEMP%/utf-check-128-0-14-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 129 utf-check-129-0-14-1.jnk \ +{File "%TEMP%/utf-check-129-0-14-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 130 utf-check-130-0-15-0.jnk \ +{File "%TEMP%/utf-check-130-0-15-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 131 utf-check-131-0-15-1.jnk \ +{File "%TEMP%/utf-check-131-0-15-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 132 utf-check-132-0-16-0.jnk \ +{File "%TEMP%/utf-check-132-0-16-0.jnk" has 1 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 133 utf-check-133-0-16-1.jnk \ +{File "%TEMP%/utf-check-133-0-16-1.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 134 utf-check-134-0-17-0.jnk \ +{File "%TEMP%/utf-check-134-0-17-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 135 utf-check-135-0-17-1.jnk \ +{File "%TEMP%/utf-check-135-0-17-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 136 utf-check-136-0-18-0.jnk \ +{File "%TEMP%/utf-check-136-0-18-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 137 utf-check-137-0-18-1.jnk \ +{File "%TEMP%/utf-check-137-0-18-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 138 utf-check-138-0-19-0.jnk \ +{File "%TEMP%/utf-check-138-0-19-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 139 utf-check-139-0-19-1.jnk \ +{File "%TEMP%/utf-check-139-0-19-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 140 utf-check-140-0-20-0.jnk \ +{File "%TEMP%/utf-check-140-0-20-0.jnk" has 2 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 141 utf-check-141-0-20-1.jnk \ +{File "%TEMP%/utf-check-141-0-20-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 142 utf-check-142-0-21-0.jnk \ +{File "%TEMP%/utf-check-142-0-21-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 143 utf-check-143-0-21-1.jnk \ +{File "%TEMP%/utf-check-143-0-21-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 144 utf-check-144-0-22-0.jnk \ +{File "%TEMP%/utf-check-144-0-22-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 145 utf-check-145-0-22-1.jnk \ +{File "%TEMP%/utf-check-145-0-22-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 146 utf-check-146-0-23-0.jnk \ +{File "%TEMP%/utf-check-146-0-23-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 147 utf-check-147-0-23-1.jnk \ +{File "%TEMP%/utf-check-147-0-23-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 148 utf-check-148-0-24-0.jnk \ +{File "%TEMP%/utf-check-148-0-24-0.jnk" has 2 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 149 utf-check-149-0-24-1.jnk \ +{File "%TEMP%/utf-check-149-0-24-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 150 utf-check-150-0-25-0.jnk \ +{File "%TEMP%/utf-check-150-0-25-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 151 utf-check-151-0-25-1.jnk \ +{File "%TEMP%/utf-check-151-0-25-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 152 utf-check-152-0-26-0.jnk \ +{File "%TEMP%/utf-check-152-0-26-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 153 utf-check-153-0-26-1.jnk \ +{File "%TEMP%/utf-check-153-0-26-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 154 utf-check-154-0-27-0.jnk \ +{File "%TEMP%/utf-check-154-0-27-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 155 utf-check-155-0-27-1.jnk \ +{File "%TEMP%/utf-check-155-0-27-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 156 utf-check-156-0-28-0.jnk \ +{File "%TEMP%/utf-check-156-0-28-0.jnk" has 3 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 157 utf-check-157-0-28-1.jnk \ +{File "%TEMP%/utf-check-157-0-28-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 158 utf-check-158-0-29-0.jnk \ +{File "%TEMP%/utf-check-158-0-29-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 159 utf-check-159-0-29-1.jnk \ +{File "%TEMP%/utf-check-159-0-29-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 160 utf-check-160-0-30-0.jnk \ +{File "%TEMP%/utf-check-160-0-30-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 161 utf-check-161-0-30-1.jnk \ +{File "%TEMP%/utf-check-161-0-30-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 162 utf-check-162-0-31-0.jnk \ +{File "%TEMP%/utf-check-162-0-31-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 163 utf-check-163-0-31-1.jnk \ +{File "%TEMP%/utf-check-163-0-31-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 164 utf-check-164-0-32-0.jnk \ +{File "%TEMP%/utf-check-164-0-32-0.jnk" has 2 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 165 utf-check-165-0-32-1.jnk \ +{File "%TEMP%/utf-check-165-0-32-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 166 utf-check-166-0-33-0.jnk \ +{File "%TEMP%/utf-check-166-0-33-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 167 utf-check-167-0-33-1.jnk \ +{File "%TEMP%/utf-check-167-0-33-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 168 utf-check-168-0-34-0.jnk \ +{File "%TEMP%/utf-check-168-0-34-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 169 utf-check-169-0-34-1.jnk \ +{File "%TEMP%/utf-check-169-0-34-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 170 utf-check-170-0-35-0.jnk \ +{File "%TEMP%/utf-check-170-0-35-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 171 utf-check-171-0-35-1.jnk \ +{File "%TEMP%/utf-check-171-0-35-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 172 utf-check-172-0-36-0.jnk \ +{File "%TEMP%/utf-check-172-0-36-0.jnk" has 3 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 173 utf-check-173-0-36-1.jnk \ +{File "%TEMP%/utf-check-173-0-36-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 174 utf-check-174-0-37-0.jnk \ +{File "%TEMP%/utf-check-174-0-37-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 175 utf-check-175-0-37-1.jnk \ +{File "%TEMP%/utf-check-175-0-37-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 176 utf-check-176-0-38-0.jnk \ +{File "%TEMP%/utf-check-176-0-38-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 177 utf-check-177-0-38-1.jnk \ +{File "%TEMP%/utf-check-177-0-38-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 178 utf-check-178-0-39-0.jnk \ +{File "%TEMP%/utf-check-178-0-39-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 179 utf-check-179-0-39-1.jnk \ +{File "%TEMP%/utf-check-179-0-39-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 180 utf-check-180-0-40-0.jnk \ +{File "%TEMP%/utf-check-180-0-40-0.jnk" has 3 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 181 utf-check-181-0-40-1.jnk \ +{File "%TEMP%/utf-check-181-0-40-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 182 utf-check-182-0-41-0.jnk \ +{File "%TEMP%/utf-check-182-0-41-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 183 utf-check-183-0-41-1.jnk \ +{File "%TEMP%/utf-check-183-0-41-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 184 utf-check-184-0-42-0.jnk \ +{File "%TEMP%/utf-check-184-0-42-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 185 utf-check-185-0-42-1.jnk \ +{File "%TEMP%/utf-check-185-0-42-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 186 utf-check-186-0-43-0.jnk \ +{File "%TEMP%/utf-check-186-0-43-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 187 utf-check-187-0-43-1.jnk \ +{File "%TEMP%/utf-check-187-0-43-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 188 utf-check-188-0-44-0.jnk \ +{File "%TEMP%/utf-check-188-0-44-0.jnk" has 4 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 189 utf-check-189-0-44-1.jnk \ +{File "%TEMP%/utf-check-189-0-44-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 190 utf-check-190-0-45-0.jnk \ +{File "%TEMP%/utf-check-190-0-45-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 191 utf-check-191-0-45-1.jnk \ +{File "%TEMP%/utf-check-191-0-45-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 192 utf-check-192-0-46-0.jnk \ +{File "%TEMP%/utf-check-192-0-46-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 193 utf-check-193-0-46-1.jnk \ +{File "%TEMP%/utf-check-193-0-46-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 194 utf-check-194-0-47-0.jnk \ +{File "%TEMP%/utf-check-194-0-47-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 195 utf-check-195-0-47-1.jnk \ +{File "%TEMP%/utf-check-195-0-47-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 196 utf-check-196-0-48-0.jnk \ +{File "%TEMP%/utf-check-196-0-48-0.jnk" has 2 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 197 utf-check-197-0-48-1.jnk \ +{File "%TEMP%/utf-check-197-0-48-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 198 utf-check-198-0-49-0.jnk \ +{File "%TEMP%/utf-check-198-0-49-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 199 utf-check-199-0-49-1.jnk \ +{File "%TEMP%/utf-check-199-0-49-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 200 utf-check-200-0-50-0.jnk \ +{File "%TEMP%/utf-check-200-0-50-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 201 utf-check-201-0-50-1.jnk \ +{File "%TEMP%/utf-check-201-0-50-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 202 utf-check-202-0-51-0.jnk \ +{File "%TEMP%/utf-check-202-0-51-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 203 utf-check-203-0-51-1.jnk \ +{File "%TEMP%/utf-check-203-0-51-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 204 utf-check-204-0-52-0.jnk \ +{File "%TEMP%/utf-check-204-0-52-0.jnk" has 3 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 205 utf-check-205-0-52-1.jnk \ +{File "%TEMP%/utf-check-205-0-52-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 206 utf-check-206-0-53-0.jnk \ +{File "%TEMP%/utf-check-206-0-53-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 207 utf-check-207-0-53-1.jnk \ +{File "%TEMP%/utf-check-207-0-53-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 208 utf-check-208-0-54-0.jnk \ +{File "%TEMP%/utf-check-208-0-54-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 209 utf-check-209-0-54-1.jnk \ +{File "%TEMP%/utf-check-209-0-54-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 210 utf-check-210-0-55-0.jnk \ +{File "%TEMP%/utf-check-210-0-55-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 211 utf-check-211-0-55-1.jnk \ +{File "%TEMP%/utf-check-211-0-55-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 212 utf-check-212-0-56-0.jnk \ +{File "%TEMP%/utf-check-212-0-56-0.jnk" has 3 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 213 utf-check-213-0-56-1.jnk \ +{File "%TEMP%/utf-check-213-0-56-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 214 utf-check-214-0-57-0.jnk \ +{File "%TEMP%/utf-check-214-0-57-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 215 utf-check-215-0-57-1.jnk \ +{File "%TEMP%/utf-check-215-0-57-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 216 utf-check-216-0-58-0.jnk \ +{File "%TEMP%/utf-check-216-0-58-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 217 utf-check-217-0-58-1.jnk \ +{File "%TEMP%/utf-check-217-0-58-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 218 utf-check-218-0-59-0.jnk \ +{File "%TEMP%/utf-check-218-0-59-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 219 utf-check-219-0-59-1.jnk \ +{File "%TEMP%/utf-check-219-0-59-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 220 utf-check-220-0-60-0.jnk \ +{File "%TEMP%/utf-check-220-0-60-0.jnk" has 4 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 221 utf-check-221-0-60-1.jnk \ +{File "%TEMP%/utf-check-221-0-60-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 222 utf-check-222-0-61-0.jnk \ +{File "%TEMP%/utf-check-222-0-61-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 223 utf-check-223-0-61-1.jnk \ +{File "%TEMP%/utf-check-223-0-61-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 224 utf-check-224-0-62-0.jnk \ +{File "%TEMP%/utf-check-224-0-62-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 225 utf-check-225-0-62-1.jnk \ +{File "%TEMP%/utf-check-225-0-62-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 226 utf-check-226-0-63-0.jnk \ +{File "%TEMP%/utf-check-226-0-63-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 227 utf-check-227-0-63-1.jnk \ +{File "%TEMP%/utf-check-227-0-63-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 228 utf-check-228-0-64-0.jnk \ +{File "%TEMP%/utf-check-228-0-64-0.jnk" has 3 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 229 utf-check-229-0-64-1.jnk \ +{File "%TEMP%/utf-check-229-0-64-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 230 utf-check-230-0-65-0.jnk \ +{File "%TEMP%/utf-check-230-0-65-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 231 utf-check-231-0-65-1.jnk \ +{File "%TEMP%/utf-check-231-0-65-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 232 utf-check-232-0-66-0.jnk \ +{File "%TEMP%/utf-check-232-0-66-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 233 utf-check-233-0-66-1.jnk \ +{File "%TEMP%/utf-check-233-0-66-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 234 utf-check-234-0-67-0.jnk \ +{File "%TEMP%/utf-check-234-0-67-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 235 utf-check-235-0-67-1.jnk \ +{File "%TEMP%/utf-check-235-0-67-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 236 utf-check-236-0-68-0.jnk \ +{File "%TEMP%/utf-check-236-0-68-0.jnk" has 4 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 237 utf-check-237-0-68-1.jnk \ +{File "%TEMP%/utf-check-237-0-68-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 238 utf-check-238-0-69-0.jnk \ +{File "%TEMP%/utf-check-238-0-69-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 239 utf-check-239-0-69-1.jnk \ +{File "%TEMP%/utf-check-239-0-69-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 240 utf-check-240-0-70-0.jnk \ +{File "%TEMP%/utf-check-240-0-70-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 241 utf-check-241-0-70-1.jnk \ +{File "%TEMP%/utf-check-241-0-70-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 242 utf-check-242-0-71-0.jnk \ +{File "%TEMP%/utf-check-242-0-71-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 243 utf-check-243-0-71-1.jnk \ +{File "%TEMP%/utf-check-243-0-71-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 244 utf-check-244-0-72-0.jnk \ +{File "%TEMP%/utf-check-244-0-72-0.jnk" has 4 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 245 utf-check-245-0-72-1.jnk \ +{File "%TEMP%/utf-check-245-0-72-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 246 utf-check-246-0-73-0.jnk \ +{File "%TEMP%/utf-check-246-0-73-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 247 utf-check-247-0-73-1.jnk \ +{File "%TEMP%/utf-check-247-0-73-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 248 utf-check-248-0-74-0.jnk \ +{File "%TEMP%/utf-check-248-0-74-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 249 utf-check-249-0-74-1.jnk \ +{File "%TEMP%/utf-check-249-0-74-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 250 utf-check-250-0-75-0.jnk \ +{File "%TEMP%/utf-check-250-0-75-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 251 utf-check-251-0-75-1.jnk \ +{File "%TEMP%/utf-check-251-0-75-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 252 utf-check-252-0-76-0.jnk \ +{File "%TEMP%/utf-check-252-0-76-0.jnk" has 5 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 253 utf-check-253-0-76-1.jnk \ +{File "%TEMP%/utf-check-253-0-76-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 254 utf-check-254-0-77-0.jnk \ +{File "%TEMP%/utf-check-254-0-77-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 255 utf-check-255-0-77-1.jnk \ +{File "%TEMP%/utf-check-255-0-77-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 256 utf-check-256-0-78-0.jnk \ +{File "%TEMP%/utf-check-256-0-78-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 257 utf-check-257-0-78-1.jnk \ +{File "%TEMP%/utf-check-257-0-78-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 258 utf-check-258-0-79-0.jnk \ +{File "%TEMP%/utf-check-258-0-79-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 259 utf-check-259-0-79-1.jnk \ +{File "%TEMP%/utf-check-259-0-79-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 260 utf-check-260-0-80-0.jnk \ +{File "%TEMP%/utf-check-260-0-80-0.jnk" has 8193 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 261 utf-check-261-0-80-1.jnk \ +{File "%TEMP%/utf-check-261-0-80-1.jnk" has 8194 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 262 utf-check-262-0-81-0.jnk \ +{File "%TEMP%/utf-check-262-0-81-0.jnk" has 8194 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 263 utf-check-263-0-81-1.jnk \ +{File "%TEMP%/utf-check-263-0-81-1.jnk" has 8195 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 264 utf-check-264-0-82-0.jnk \ +{File "%TEMP%/utf-check-264-0-82-0.jnk" has 8194 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 265 utf-check-265-0-82-1.jnk \ +{File "%TEMP%/utf-check-265-0-82-1.jnk" has 8195 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 266 utf-check-266-0-83-0.jnk \ +{File "%TEMP%/utf-check-266-0-83-0.jnk" has 8195 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 267 utf-check-267-0-83-1.jnk \ +{File "%TEMP%/utf-check-267-0-83-1.jnk" has 8196 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 268 utf-check-268-0-84-0.jnk \ +{File "%TEMP%/utf-check-268-0-84-0.jnk" has 8196 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 269 utf-check-269-0-84-1.jnk \ +{File "%TEMP%/utf-check-269-0-84-1.jnk" has 8197 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 270 utf-check-270-0-85-0.jnk \ +{File "%TEMP%/utf-check-270-0-85-0.jnk" has 8197 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 271 utf-check-271-0-85-1.jnk \ +{File "%TEMP%/utf-check-271-0-85-1.jnk" has 8198 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 272 utf-check-272-0-86-0.jnk \ +{File "%TEMP%/utf-check-272-0-86-0.jnk" has 8197 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 273 utf-check-273-0-86-1.jnk \ +{File "%TEMP%/utf-check-273-0-86-1.jnk" has 8198 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 274 utf-check-274-0-87-0.jnk \ +{File "%TEMP%/utf-check-274-0-87-0.jnk" has 8198 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 275 utf-check-275-0-87-1.jnk \ +{File "%TEMP%/utf-check-275-0-87-1.jnk" has 8199 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 276 utf-check-276-0-88-0.jnk \ +{File "%TEMP%/utf-check-276-0-88-0.jnk" has 8194 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 277 utf-check-277-0-88-1.jnk \ +{File "%TEMP%/utf-check-277-0-88-1.jnk" has 8195 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 278 utf-check-278-0-89-0.jnk \ +{File "%TEMP%/utf-check-278-0-89-0.jnk" has 8195 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 279 utf-check-279-0-89-1.jnk \ +{File "%TEMP%/utf-check-279-0-89-1.jnk" has 8196 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 280 utf-check-280-0-90-0.jnk \ +{File "%TEMP%/utf-check-280-0-90-0.jnk" has 8195 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 281 utf-check-281-0-90-1.jnk \ +{File "%TEMP%/utf-check-281-0-90-1.jnk" has 8196 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 282 utf-check-282-0-91-0.jnk \ +{File "%TEMP%/utf-check-282-0-91-0.jnk" has 8196 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 283 utf-check-283-0-91-1.jnk \ +{File "%TEMP%/utf-check-283-0-91-1.jnk" has 8197 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 284 utf-check-284-0-92-0.jnk \ +{File "%TEMP%/utf-check-284-0-92-0.jnk" has 8197 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 285 utf-check-285-0-92-1.jnk \ +{File "%TEMP%/utf-check-285-0-92-1.jnk" has 8198 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 286 utf-check-286-0-93-0.jnk \ +{File "%TEMP%/utf-check-286-0-93-0.jnk" has 8198 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 287 utf-check-287-0-93-1.jnk \ +{File "%TEMP%/utf-check-287-0-93-1.jnk" has 8199 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 288 utf-check-288-0-94-0.jnk \ +{File "%TEMP%/utf-check-288-0-94-0.jnk" has 8198 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 289 utf-check-289-0-94-1.jnk \ +{File "%TEMP%/utf-check-289-0-94-1.jnk" has 8199 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 290 utf-check-290-0-95-0.jnk \ +{File "%TEMP%/utf-check-290-0-95-0.jnk" has 8199 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 291 utf-check-291-0-95-1.jnk \ +{File "%TEMP%/utf-check-291-0-95-1.jnk" has 8200 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 292 utf-check-292-0-96-0.jnk \ +{File "%TEMP%/utf-check-292-0-96-0.jnk" has 8194 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 293 utf-check-293-0-96-1.jnk \ +{File "%TEMP%/utf-check-293-0-96-1.jnk" has 8195 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 294 utf-check-294-0-97-0.jnk \ +{File "%TEMP%/utf-check-294-0-97-0.jnk" has 8195 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 295 utf-check-295-0-97-1.jnk \ +{File "%TEMP%/utf-check-295-0-97-1.jnk" has 8196 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 296 utf-check-296-0-98-0.jnk \ +{File "%TEMP%/utf-check-296-0-98-0.jnk" has 8195 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 297 utf-check-297-0-98-1.jnk \ +{File "%TEMP%/utf-check-297-0-98-1.jnk" has 8196 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 298 utf-check-298-0-99-0.jnk \ +{File "%TEMP%/utf-check-298-0-99-0.jnk" has 8196 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 299 utf-check-299-0-99-1.jnk \ +{File "%TEMP%/utf-check-299-0-99-1.jnk" has 8197 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 300 utf-check-300-0-100-0.jnk \ +{File "%TEMP%/utf-check-300-0-100-0.jnk" has 8197 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 301 utf-check-301-0-100-1.jnk \ +{File "%TEMP%/utf-check-301-0-100-1.jnk" has 8198 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 302 utf-check-302-0-101-0.jnk \ +{File "%TEMP%/utf-check-302-0-101-0.jnk" has 8198 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 303 utf-check-303-0-101-1.jnk \ +{File "%TEMP%/utf-check-303-0-101-1.jnk" has 8199 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 304 utf-check-304-0-102-0.jnk \ +{File "%TEMP%/utf-check-304-0-102-0.jnk" has 8198 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 305 utf-check-305-0-102-1.jnk \ +{File "%TEMP%/utf-check-305-0-102-1.jnk" has 8199 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 306 utf-check-306-0-103-0.jnk \ +{File "%TEMP%/utf-check-306-0-103-0.jnk" has 8199 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 307 utf-check-307-0-103-1.jnk \ +{File "%TEMP%/utf-check-307-0-103-1.jnk" has 8200 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 308 utf-check-308-0-104-0.jnk \ +{File "%TEMP%/utf-check-308-0-104-0.jnk" has 8195 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 309 utf-check-309-0-104-1.jnk \ +{File "%TEMP%/utf-check-309-0-104-1.jnk" has 8196 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 310 utf-check-310-0-105-0.jnk \ +{File "%TEMP%/utf-check-310-0-105-0.jnk" has 8196 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 311 utf-check-311-0-105-1.jnk \ +{File "%TEMP%/utf-check-311-0-105-1.jnk" has 8197 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 312 utf-check-312-0-106-0.jnk \ +{File "%TEMP%/utf-check-312-0-106-0.jnk" has 8196 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 313 utf-check-313-0-106-1.jnk \ +{File "%TEMP%/utf-check-313-0-106-1.jnk" has 8197 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 314 utf-check-314-0-107-0.jnk \ +{File "%TEMP%/utf-check-314-0-107-0.jnk" has 8197 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 315 utf-check-315-0-107-1.jnk \ +{File "%TEMP%/utf-check-315-0-107-1.jnk" has 8198 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 316 utf-check-316-0-108-0.jnk \ +{File "%TEMP%/utf-check-316-0-108-0.jnk" has 8198 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 317 utf-check-317-0-108-1.jnk \ +{File "%TEMP%/utf-check-317-0-108-1.jnk" has 8199 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 318 utf-check-318-0-109-0.jnk \ +{File "%TEMP%/utf-check-318-0-109-0.jnk" has 8199 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 319 utf-check-319-0-109-1.jnk \ +{File "%TEMP%/utf-check-319-0-109-1.jnk" has 8200 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 320 utf-check-320-0-110-0.jnk \ +{File "%TEMP%/utf-check-320-0-110-0.jnk" has 8199 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 321 utf-check-321-0-110-1.jnk \ +{File "%TEMP%/utf-check-321-0-110-1.jnk" has 8200 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 322 utf-check-322-0-111-0.jnk \ +{File "%TEMP%/utf-check-322-0-111-0.jnk" has 8200 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 323 utf-check-323-0-111-1.jnk \ +{File "%TEMP%/utf-check-323-0-111-1.jnk" has 8201 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 324 utf-check-324-0-112-0.jnk \ +{File "%TEMP%/utf-check-324-0-112-0.jnk" has 2 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 325 utf-check-325-0-112-1.jnk \ +{File "%TEMP%/utf-check-325-0-112-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 326 utf-check-326-0-113-0.jnk \ +{File "%TEMP%/utf-check-326-0-113-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 327 utf-check-327-0-113-1.jnk \ +{File "%TEMP%/utf-check-327-0-113-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 328 utf-check-328-0-114-0.jnk \ +{File "%TEMP%/utf-check-328-0-114-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 329 utf-check-329-0-114-1.jnk \ +{File "%TEMP%/utf-check-329-0-114-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 330 utf-check-330-0-115-0.jnk \ +{File "%TEMP%/utf-check-330-0-115-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 331 utf-check-331-0-115-1.jnk \ +{File "%TEMP%/utf-check-331-0-115-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 332 utf-check-332-0-116-0.jnk \ +{File "%TEMP%/utf-check-332-0-116-0.jnk" has 3 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 333 utf-check-333-0-116-1.jnk \ +{File "%TEMP%/utf-check-333-0-116-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 334 utf-check-334-0-117-0.jnk \ +{File "%TEMP%/utf-check-334-0-117-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 335 utf-check-335-0-117-1.jnk \ +{File "%TEMP%/utf-check-335-0-117-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 336 utf-check-336-0-118-0.jnk \ +{File "%TEMP%/utf-check-336-0-118-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 337 utf-check-337-0-118-1.jnk \ +{File "%TEMP%/utf-check-337-0-118-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 338 utf-check-338-0-119-0.jnk \ +{File "%TEMP%/utf-check-338-0-119-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 339 utf-check-339-0-119-1.jnk \ +{File "%TEMP%/utf-check-339-0-119-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 340 utf-check-340-0-120-0.jnk \ +{File "%TEMP%/utf-check-340-0-120-0.jnk" has 3 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 341 utf-check-341-0-120-1.jnk \ +{File "%TEMP%/utf-check-341-0-120-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 342 utf-check-342-0-121-0.jnk \ +{File "%TEMP%/utf-check-342-0-121-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 343 utf-check-343-0-121-1.jnk \ +{File "%TEMP%/utf-check-343-0-121-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 344 utf-check-344-0-122-0.jnk \ +{File "%TEMP%/utf-check-344-0-122-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 345 utf-check-345-0-122-1.jnk \ +{File "%TEMP%/utf-check-345-0-122-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 346 utf-check-346-0-123-0.jnk \ +{File "%TEMP%/utf-check-346-0-123-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 347 utf-check-347-0-123-1.jnk \ +{File "%TEMP%/utf-check-347-0-123-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 348 utf-check-348-0-124-0.jnk \ +{File "%TEMP%/utf-check-348-0-124-0.jnk" has 4 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 349 utf-check-349-0-124-1.jnk \ +{File "%TEMP%/utf-check-349-0-124-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 350 utf-check-350-0-125-0.jnk \ +{File "%TEMP%/utf-check-350-0-125-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 351 utf-check-351-0-125-1.jnk \ +{File "%TEMP%/utf-check-351-0-125-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 352 utf-check-352-0-126-0.jnk \ +{File "%TEMP%/utf-check-352-0-126-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 353 utf-check-353-0-126-1.jnk \ +{File "%TEMP%/utf-check-353-0-126-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 354 utf-check-354-0-127-0.jnk \ +{File "%TEMP%/utf-check-354-0-127-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 355 utf-check-355-0-127-1.jnk \ +{File "%TEMP%/utf-check-355-0-127-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 356 utf-check-356-0-128-0.jnk \ +{File "%TEMP%/utf-check-356-0-128-0.jnk" has 2 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 357 utf-check-357-0-128-1.jnk \ +{File "%TEMP%/utf-check-357-0-128-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 358 utf-check-358-0-129-0.jnk \ +{File "%TEMP%/utf-check-358-0-129-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 359 utf-check-359-0-129-1.jnk \ +{File "%TEMP%/utf-check-359-0-129-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 360 utf-check-360-0-130-0.jnk \ +{File "%TEMP%/utf-check-360-0-130-0.jnk" has 1 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 361 utf-check-361-0-130-1.jnk \ +{File "%TEMP%/utf-check-361-0-130-1.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 362 utf-check-362-0-131-0.jnk \ +{File "%TEMP%/utf-check-362-0-131-0.jnk" has 1 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 363 utf-check-363-0-131-1.jnk \ +{File "%TEMP%/utf-check-363-0-131-1.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 364 utf-check-364-0-132-0.jnk \ +{File "%TEMP%/utf-check-364-0-132-0.jnk" has 1 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 365 utf-check-365-0-132-1.jnk \ +{File "%TEMP%/utf-check-365-0-132-1.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 366 utf-check-366-0-133-0.jnk \ +{File "%TEMP%/utf-check-366-0-133-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 367 utf-check-367-0-133-1.jnk \ +{File "%TEMP%/utf-check-367-0-133-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 368 utf-check-368-0-134-0.jnk \ +{File "%TEMP%/utf-check-368-0-134-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 369 utf-check-369-0-134-1.jnk \ +{File "%TEMP%/utf-check-369-0-134-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 370 utf-check-370-0-135-0.jnk \ +{File "%TEMP%/utf-check-370-0-135-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 371 utf-check-371-0-135-1.jnk \ +{File "%TEMP%/utf-check-371-0-135-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 372 utf-check-372-0-136-0.jnk \ +{File "%TEMP%/utf-check-372-0-136-0.jnk" has 3 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 373 utf-check-373-0-136-1.jnk \ +{File "%TEMP%/utf-check-373-0-136-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 374 utf-check-374-0-137-0.jnk \ +{File "%TEMP%/utf-check-374-0-137-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 375 utf-check-375-0-137-1.jnk \ +{File "%TEMP%/utf-check-375-0-137-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 376 utf-check-376-0-138-0.jnk \ +{File "%TEMP%/utf-check-376-0-138-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 377 utf-check-377-0-138-1.jnk \ +{File "%TEMP%/utf-check-377-0-138-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 378 utf-check-378-0-139-0.jnk \ +{File "%TEMP%/utf-check-378-0-139-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 379 utf-check-379-0-139-1.jnk \ +{File "%TEMP%/utf-check-379-0-139-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 380 utf-check-380-0-140-0.jnk \ +{File "%TEMP%/utf-check-380-0-140-0.jnk" has 3 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 381 utf-check-381-0-140-1.jnk \ +{File "%TEMP%/utf-check-381-0-140-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 382 utf-check-382-0-141-0.jnk \ +{File "%TEMP%/utf-check-382-0-141-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 383 utf-check-383-0-141-1.jnk \ +{File "%TEMP%/utf-check-383-0-141-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 384 utf-check-384-0-142-0.jnk \ +{File "%TEMP%/utf-check-384-0-142-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 385 utf-check-385-0-142-1.jnk \ +{File "%TEMP%/utf-check-385-0-142-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 386 utf-check-386-0-143-0.jnk \ +{File "%TEMP%/utf-check-386-0-143-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 387 utf-check-387-0-143-1.jnk \ +{File "%TEMP%/utf-check-387-0-143-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 388 utf-check-388-0-144-0.jnk \ +{File "%TEMP%/utf-check-388-0-144-0.jnk" has 3 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 389 utf-check-389-0-144-1.jnk \ +{File "%TEMP%/utf-check-389-0-144-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 390 utf-check-390-0-145-0.jnk \ +{File "%TEMP%/utf-check-390-0-145-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 391 utf-check-391-0-145-1.jnk \ +{File "%TEMP%/utf-check-391-0-145-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 392 utf-check-392-0-146-0.jnk \ +{File "%TEMP%/utf-check-392-0-146-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 393 utf-check-393-0-146-1.jnk \ +{File "%TEMP%/utf-check-393-0-146-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 394 utf-check-394-0-147-0.jnk \ +{File "%TEMP%/utf-check-394-0-147-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 395 utf-check-395-0-147-1.jnk \ +{File "%TEMP%/utf-check-395-0-147-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 396 utf-check-396-0-148-0.jnk \ +{File "%TEMP%/utf-check-396-0-148-0.jnk" has 4 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 397 utf-check-397-0-148-1.jnk \ +{File "%TEMP%/utf-check-397-0-148-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 398 utf-check-398-0-149-0.jnk \ +{File "%TEMP%/utf-check-398-0-149-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 399 utf-check-399-0-149-1.jnk \ +{File "%TEMP%/utf-check-399-0-149-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 400 utf-check-400-0-150-0.jnk \ +{File "%TEMP%/utf-check-400-0-150-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 401 utf-check-401-0-150-1.jnk \ +{File "%TEMP%/utf-check-401-0-150-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 402 utf-check-402-0-151-0.jnk \ +{File "%TEMP%/utf-check-402-0-151-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 403 utf-check-403-0-151-1.jnk \ +{File "%TEMP%/utf-check-403-0-151-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 404 utf-check-404-0-152-0.jnk \ +{File "%TEMP%/utf-check-404-0-152-0.jnk" has 5 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 405 utf-check-405-0-152-1.jnk \ +{File "%TEMP%/utf-check-405-0-152-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 406 utf-check-406-0-153-0.jnk \ +{File "%TEMP%/utf-check-406-0-153-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 407 utf-check-407-0-153-1.jnk \ +{File "%TEMP%/utf-check-407-0-153-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 408 utf-check-408-0-154-0.jnk \ +{File "%TEMP%/utf-check-408-0-154-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 409 utf-check-409-0-154-1.jnk \ +{File "%TEMP%/utf-check-409-0-154-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 410 utf-check-410-0-155-0.jnk \ +{File "%TEMP%/utf-check-410-0-155-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 411 utf-check-411-0-155-1.jnk \ +{File "%TEMP%/utf-check-411-0-155-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 412 utf-check-412-0-156-0.jnk \ +{File "%TEMP%/utf-check-412-0-156-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 413 utf-check-413-0-156-1.jnk \ +{File "%TEMP%/utf-check-413-0-156-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 414 utf-check-414-0-157-0.jnk \ +{File "%TEMP%/utf-check-414-0-157-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 415 utf-check-415-0-157-1.jnk \ +{File "%TEMP%/utf-check-415-0-157-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 416 utf-check-416-0-158-0.jnk \ +{File "%TEMP%/utf-check-416-0-158-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 417 utf-check-417-0-158-1.jnk \ +{File "%TEMP%/utf-check-417-0-158-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 418 utf-check-418-0-159-0.jnk \ +{File "%TEMP%/utf-check-418-0-159-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 419 utf-check-419-0-159-1.jnk \ +{File "%TEMP%/utf-check-419-0-159-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 420 utf-check-420-0-160-0.jnk \ +{File "%TEMP%/utf-check-420-0-160-0.jnk" has 4 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 421 utf-check-421-0-160-1.jnk \ +{File "%TEMP%/utf-check-421-0-160-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 422 utf-check-422-0-161-0.jnk \ +{File "%TEMP%/utf-check-422-0-161-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 423 utf-check-423-0-161-1.jnk \ +{File "%TEMP%/utf-check-423-0-161-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 424 utf-check-424-0-162-0.jnk \ +{File "%TEMP%/utf-check-424-0-162-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 425 utf-check-425-0-162-1.jnk \ +{File "%TEMP%/utf-check-425-0-162-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 426 utf-check-426-0-163-0.jnk \ +{File "%TEMP%/utf-check-426-0-163-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 427 utf-check-427-0-163-1.jnk \ +{File "%TEMP%/utf-check-427-0-163-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 428 utf-check-428-0-164-0.jnk \ +{File "%TEMP%/utf-check-428-0-164-0.jnk" has 4 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 429 utf-check-429-0-164-1.jnk \ +{File "%TEMP%/utf-check-429-0-164-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 430 utf-check-430-0-165-0.jnk \ +{File "%TEMP%/utf-check-430-0-165-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 431 utf-check-431-0-165-1.jnk \ +{File "%TEMP%/utf-check-431-0-165-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 432 utf-check-432-0-166-0.jnk \ +{File "%TEMP%/utf-check-432-0-166-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 433 utf-check-433-0-166-1.jnk \ +{File "%TEMP%/utf-check-433-0-166-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 434 utf-check-434-0-167-0.jnk \ +{File "%TEMP%/utf-check-434-0-167-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 435 utf-check-435-0-167-1.jnk \ +{File "%TEMP%/utf-check-435-0-167-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 436 utf-check-436-0-168-0.jnk \ +{File "%TEMP%/utf-check-436-0-168-0.jnk" has 4 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 437 utf-check-437-0-168-1.jnk \ +{File "%TEMP%/utf-check-437-0-168-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 438 utf-check-438-0-169-0.jnk \ +{File "%TEMP%/utf-check-438-0-169-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 439 utf-check-439-0-169-1.jnk \ +{File "%TEMP%/utf-check-439-0-169-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 440 utf-check-440-0-170-0.jnk \ +{File "%TEMP%/utf-check-440-0-170-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 441 utf-check-441-0-170-1.jnk \ +{File "%TEMP%/utf-check-441-0-170-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 442 utf-check-442-0-171-0.jnk \ +{File "%TEMP%/utf-check-442-0-171-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 443 utf-check-443-0-171-1.jnk \ +{File "%TEMP%/utf-check-443-0-171-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 444 utf-check-444-0-172-0.jnk \ +{File "%TEMP%/utf-check-444-0-172-0.jnk" has 5 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 445 utf-check-445-0-172-1.jnk \ +{File "%TEMP%/utf-check-445-0-172-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 446 utf-check-446-0-173-0.jnk \ +{File "%TEMP%/utf-check-446-0-173-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 447 utf-check-447-0-173-1.jnk \ +{File "%TEMP%/utf-check-447-0-173-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 448 utf-check-448-0-174-0.jnk \ +{File "%TEMP%/utf-check-448-0-174-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 449 utf-check-449-0-174-1.jnk \ +{File "%TEMP%/utf-check-449-0-174-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 450 utf-check-450-0-175-0.jnk \ +{File "%TEMP%/utf-check-450-0-175-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 451 utf-check-451-0-175-1.jnk \ +{File "%TEMP%/utf-check-451-0-175-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 452 utf-check-452-0-176-0.jnk \ +{File "%TEMP%/utf-check-452-0-176-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 453 utf-check-453-0-176-1.jnk \ +{File "%TEMP%/utf-check-453-0-176-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 454 utf-check-454-0-177-0.jnk \ +{File "%TEMP%/utf-check-454-0-177-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 455 utf-check-455-0-177-1.jnk \ +{File "%TEMP%/utf-check-455-0-177-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 456 utf-check-456-0-178-0.jnk \ +{File "%TEMP%/utf-check-456-0-178-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 457 utf-check-457-0-178-1.jnk \ +{File "%TEMP%/utf-check-457-0-178-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 458 utf-check-458-0-179-0.jnk \ +{File "%TEMP%/utf-check-458-0-179-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 459 utf-check-459-0-179-1.jnk \ +{File "%TEMP%/utf-check-459-0-179-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 460 utf-check-460-0-180-0.jnk \ +{File "%TEMP%/utf-check-460-0-180-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 461 utf-check-461-0-180-1.jnk \ +{File "%TEMP%/utf-check-461-0-180-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 462 utf-check-462-1-0-0.jnk \ +{File "%TEMP%/utf-check-462-1-0-0.jnk" has 3 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 463 utf-check-463-1-0-1.jnk \ +{File "%TEMP%/utf-check-463-1-0-1.jnk" has 4 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 464 utf-check-464-1-1-0.jnk \ +{File "%TEMP%/utf-check-464-1-1-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 465 utf-check-465-1-1-1.jnk \ +{File "%TEMP%/utf-check-465-1-1-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 466 utf-check-466-1-2-0.jnk \ +{File "%TEMP%/utf-check-466-1-2-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 467 utf-check-467-1-2-1.jnk \ +{File "%TEMP%/utf-check-467-1-2-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 468 utf-check-468-1-3-0.jnk \ +{File "%TEMP%/utf-check-468-1-3-0.jnk" has 5 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 469 utf-check-469-1-3-1.jnk \ +{File "%TEMP%/utf-check-469-1-3-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 470 utf-check-470-1-4-0.jnk \ +{File "%TEMP%/utf-check-470-1-4-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 471 utf-check-471-1-4-1.jnk \ +{File "%TEMP%/utf-check-471-1-4-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 472 utf-check-472-1-5-0.jnk \ +{File "%TEMP%/utf-check-472-1-5-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 473 utf-check-473-1-5-1.jnk \ +{File "%TEMP%/utf-check-473-1-5-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 474 utf-check-474-1-6-0.jnk \ +{File "%TEMP%/utf-check-474-1-6-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 475 utf-check-475-1-6-1.jnk \ +{File "%TEMP%/utf-check-475-1-6-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 476 utf-check-476-1-7-0.jnk \ +{File "%TEMP%/utf-check-476-1-7-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 477 utf-check-477-1-7-1.jnk \ +{File "%TEMP%/utf-check-477-1-7-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 478 utf-check-478-1-8-0.jnk \ +{File "%TEMP%/utf-check-478-1-8-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 479 utf-check-479-1-8-1.jnk \ +{File "%TEMP%/utf-check-479-1-8-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 480 utf-check-480-1-9-0.jnk \ +{File "%TEMP%/utf-check-480-1-9-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 481 utf-check-481-1-9-1.jnk \ +{File "%TEMP%/utf-check-481-1-9-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 482 utf-check-482-1-10-0.jnk \ +{File "%TEMP%/utf-check-482-1-10-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 483 utf-check-483-1-10-1.jnk \ +{File "%TEMP%/utf-check-483-1-10-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 484 utf-check-484-1-11-0.jnk \ +{File "%TEMP%/utf-check-484-1-11-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 485 utf-check-485-1-11-1.jnk \ +{File "%TEMP%/utf-check-485-1-11-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 486 utf-check-486-1-12-0.jnk \ +{File "%TEMP%/utf-check-486-1-12-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 487 utf-check-487-1-12-1.jnk \ +{File "%TEMP%/utf-check-487-1-12-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 488 utf-check-488-1-13-0.jnk \ +{File "%TEMP%/utf-check-488-1-13-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 489 utf-check-489-1-13-1.jnk \ +{File "%TEMP%/utf-check-489-1-13-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 490 utf-check-490-1-14-0.jnk \ +{File "%TEMP%/utf-check-490-1-14-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 491 utf-check-491-1-14-1.jnk \ +{File "%TEMP%/utf-check-491-1-14-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 492 utf-check-492-1-15-0.jnk \ +{File "%TEMP%/utf-check-492-1-15-0.jnk" has 9 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 493 utf-check-493-1-15-1.jnk \ +{File "%TEMP%/utf-check-493-1-15-1.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 494 utf-check-494-1-16-0.jnk \ +{File "%TEMP%/utf-check-494-1-16-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 495 utf-check-495-1-16-1.jnk \ +{File "%TEMP%/utf-check-495-1-16-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 496 utf-check-496-1-17-0.jnk \ +{File "%TEMP%/utf-check-496-1-17-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 497 utf-check-497-1-17-1.jnk \ +{File "%TEMP%/utf-check-497-1-17-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 498 utf-check-498-1-18-0.jnk \ +{File "%TEMP%/utf-check-498-1-18-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 499 utf-check-499-1-18-1.jnk \ +{File "%TEMP%/utf-check-499-1-18-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 500 utf-check-500-1-19-0.jnk \ +{File "%TEMP%/utf-check-500-1-19-0.jnk" has 7 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 501 utf-check-501-1-19-1.jnk \ +{File "%TEMP%/utf-check-501-1-19-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 502 utf-check-502-1-20-0.jnk \ +{File "%TEMP%/utf-check-502-1-20-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 503 utf-check-503-1-20-1.jnk \ +{File "%TEMP%/utf-check-503-1-20-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 504 utf-check-504-1-21-0.jnk \ +{File "%TEMP%/utf-check-504-1-21-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 505 utf-check-505-1-21-1.jnk \ +{File "%TEMP%/utf-check-505-1-21-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 506 utf-check-506-1-22-0.jnk \ +{File "%TEMP%/utf-check-506-1-22-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 507 utf-check-507-1-22-1.jnk \ +{File "%TEMP%/utf-check-507-1-22-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 508 utf-check-508-1-23-0.jnk \ +{File "%TEMP%/utf-check-508-1-23-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 509 utf-check-509-1-23-1.jnk \ +{File "%TEMP%/utf-check-509-1-23-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 510 utf-check-510-1-24-0.jnk \ +{File "%TEMP%/utf-check-510-1-24-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 511 utf-check-511-1-24-1.jnk \ +{File "%TEMP%/utf-check-511-1-24-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 512 utf-check-512-1-25-0.jnk \ +{File "%TEMP%/utf-check-512-1-25-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 513 utf-check-513-1-25-1.jnk \ +{File "%TEMP%/utf-check-513-1-25-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 514 utf-check-514-1-26-0.jnk \ +{File "%TEMP%/utf-check-514-1-26-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 515 utf-check-515-1-26-1.jnk \ +{File "%TEMP%/utf-check-515-1-26-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 516 utf-check-516-1-27-0.jnk \ +{File "%TEMP%/utf-check-516-1-27-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 517 utf-check-517-1-27-1.jnk \ +{File "%TEMP%/utf-check-517-1-27-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 518 utf-check-518-1-28-0.jnk \ +{File "%TEMP%/utf-check-518-1-28-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 519 utf-check-519-1-28-1.jnk \ +{File "%TEMP%/utf-check-519-1-28-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 520 utf-check-520-1-29-0.jnk \ +{File "%TEMP%/utf-check-520-1-29-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 521 utf-check-521-1-29-1.jnk \ +{File "%TEMP%/utf-check-521-1-29-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 522 utf-check-522-1-30-0.jnk \ +{File "%TEMP%/utf-check-522-1-30-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 523 utf-check-523-1-30-1.jnk \ +{File "%TEMP%/utf-check-523-1-30-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 524 utf-check-524-1-31-0.jnk \ +{File "%TEMP%/utf-check-524-1-31-0.jnk" has 9 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 525 utf-check-525-1-31-1.jnk \ +{File "%TEMP%/utf-check-525-1-31-1.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 526 utf-check-526-1-32-0.jnk \ +{File "%TEMP%/utf-check-526-1-32-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 527 utf-check-527-1-32-1.jnk \ +{File "%TEMP%/utf-check-527-1-32-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 528 utf-check-528-1-33-0.jnk \ +{File "%TEMP%/utf-check-528-1-33-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 529 utf-check-529-1-33-1.jnk \ +{File "%TEMP%/utf-check-529-1-33-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 530 utf-check-530-1-34-0.jnk \ +{File "%TEMP%/utf-check-530-1-34-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 531 utf-check-531-1-34-1.jnk \ +{File "%TEMP%/utf-check-531-1-34-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 532 utf-check-532-1-35-0.jnk \ +{File "%TEMP%/utf-check-532-1-35-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 533 utf-check-533-1-35-1.jnk \ +{File "%TEMP%/utf-check-533-1-35-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 534 utf-check-534-1-36-0.jnk \ +{File "%TEMP%/utf-check-534-1-36-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 535 utf-check-535-1-36-1.jnk \ +{File "%TEMP%/utf-check-535-1-36-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 536 utf-check-536-1-37-0.jnk \ +{File "%TEMP%/utf-check-536-1-37-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 537 utf-check-537-1-37-1.jnk \ +{File "%TEMP%/utf-check-537-1-37-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 538 utf-check-538-1-38-0.jnk \ +{File "%TEMP%/utf-check-538-1-38-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 539 utf-check-539-1-38-1.jnk \ +{File "%TEMP%/utf-check-539-1-38-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 540 utf-check-540-1-39-0.jnk \ +{File "%TEMP%/utf-check-540-1-39-0.jnk" has 9 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 541 utf-check-541-1-39-1.jnk \ +{File "%TEMP%/utf-check-541-1-39-1.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 542 utf-check-542-1-40-0.jnk \ +{File "%TEMP%/utf-check-542-1-40-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 543 utf-check-543-1-40-1.jnk \ +{File "%TEMP%/utf-check-543-1-40-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 544 utf-check-544-1-41-0.jnk \ +{File "%TEMP%/utf-check-544-1-41-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 545 utf-check-545-1-41-1.jnk \ +{File "%TEMP%/utf-check-545-1-41-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 546 utf-check-546-1-42-0.jnk \ +{File "%TEMP%/utf-check-546-1-42-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 547 utf-check-547-1-42-1.jnk \ +{File "%TEMP%/utf-check-547-1-42-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 548 utf-check-548-1-43-0.jnk \ +{File "%TEMP%/utf-check-548-1-43-0.jnk" has 9 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 549 utf-check-549-1-43-1.jnk \ +{File "%TEMP%/utf-check-549-1-43-1.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 550 utf-check-550-1-44-0.jnk \ +{File "%TEMP%/utf-check-550-1-44-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 551 utf-check-551-1-44-1.jnk \ +{File "%TEMP%/utf-check-551-1-44-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 552 utf-check-552-1-45-0.jnk \ +{File "%TEMP%/utf-check-552-1-45-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 553 utf-check-553-1-45-1.jnk \ +{File "%TEMP%/utf-check-553-1-45-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 554 utf-check-554-1-46-0.jnk \ +{File "%TEMP%/utf-check-554-1-46-0.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 555 utf-check-555-1-46-1.jnk \ +{File "%TEMP%/utf-check-555-1-46-1.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 556 utf-check-556-1-47-0.jnk \ +{File "%TEMP%/utf-check-556-1-47-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 557 utf-check-557-1-47-1.jnk \ +{File "%TEMP%/utf-check-557-1-47-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 558 utf-check-558-1-48-0.jnk \ +{File "%TEMP%/utf-check-558-1-48-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 559 utf-check-559-1-48-1.jnk \ +{File "%TEMP%/utf-check-559-1-48-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 560 utf-check-560-1-49-0.jnk \ +{File "%TEMP%/utf-check-560-1-49-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 561 utf-check-561-1-49-1.jnk \ +{File "%TEMP%/utf-check-561-1-49-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 562 utf-check-562-1-50-0.jnk \ +{File "%TEMP%/utf-check-562-1-50-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 563 utf-check-563-1-50-1.jnk \ +{File "%TEMP%/utf-check-563-1-50-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 564 utf-check-564-1-51-0.jnk \ +{File "%TEMP%/utf-check-564-1-51-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 565 utf-check-565-1-51-1.jnk \ +{File "%TEMP%/utf-check-565-1-51-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 566 utf-check-566-1-52-0.jnk \ +{File "%TEMP%/utf-check-566-1-52-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 567 utf-check-567-1-52-1.jnk \ +{File "%TEMP%/utf-check-567-1-52-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 568 utf-check-568-1-53-0.jnk \ +{File "%TEMP%/utf-check-568-1-53-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 569 utf-check-569-1-53-1.jnk \ +{File "%TEMP%/utf-check-569-1-53-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 570 utf-check-570-1-54-0.jnk \ +{File "%TEMP%/utf-check-570-1-54-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 571 utf-check-571-1-54-1.jnk \ +{File "%TEMP%/utf-check-571-1-54-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 572 utf-check-572-1-55-0.jnk \ +{File "%TEMP%/utf-check-572-1-55-0.jnk" has 9 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 573 utf-check-573-1-55-1.jnk \ +{File "%TEMP%/utf-check-573-1-55-1.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 574 utf-check-574-1-56-0.jnk \ +{File "%TEMP%/utf-check-574-1-56-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 575 utf-check-575-1-56-1.jnk \ +{File "%TEMP%/utf-check-575-1-56-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 576 utf-check-576-1-57-0.jnk \ +{File "%TEMP%/utf-check-576-1-57-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 577 utf-check-577-1-57-1.jnk \ +{File "%TEMP%/utf-check-577-1-57-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 578 utf-check-578-1-58-0.jnk \ +{File "%TEMP%/utf-check-578-1-58-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 579 utf-check-579-1-58-1.jnk \ +{File "%TEMP%/utf-check-579-1-58-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 580 utf-check-580-1-59-0.jnk \ +{File "%TEMP%/utf-check-580-1-59-0.jnk" has 9 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 581 utf-check-581-1-59-1.jnk \ +{File "%TEMP%/utf-check-581-1-59-1.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 582 utf-check-582-1-60-0.jnk \ +{File "%TEMP%/utf-check-582-1-60-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 583 utf-check-583-1-60-1.jnk \ +{File "%TEMP%/utf-check-583-1-60-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 584 utf-check-584-1-61-0.jnk \ +{File "%TEMP%/utf-check-584-1-61-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 585 utf-check-585-1-61-1.jnk \ +{File "%TEMP%/utf-check-585-1-61-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 586 utf-check-586-1-62-0.jnk \ +{File "%TEMP%/utf-check-586-1-62-0.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 587 utf-check-587-1-62-1.jnk \ +{File "%TEMP%/utf-check-587-1-62-1.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 588 utf-check-588-1-63-0.jnk \ +{File "%TEMP%/utf-check-588-1-63-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 589 utf-check-589-1-63-1.jnk \ +{File "%TEMP%/utf-check-589-1-63-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 590 utf-check-590-1-64-0.jnk \ +{File "%TEMP%/utf-check-590-1-64-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 591 utf-check-591-1-64-1.jnk \ +{File "%TEMP%/utf-check-591-1-64-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 592 utf-check-592-1-65-0.jnk \ +{File "%TEMP%/utf-check-592-1-65-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 593 utf-check-593-1-65-1.jnk \ +{File "%TEMP%/utf-check-593-1-65-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 594 utf-check-594-1-66-0.jnk \ +{File "%TEMP%/utf-check-594-1-66-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 595 utf-check-595-1-66-1.jnk \ +{File "%TEMP%/utf-check-595-1-66-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 596 utf-check-596-1-67-0.jnk \ +{File "%TEMP%/utf-check-596-1-67-0.jnk" has 9 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 597 utf-check-597-1-67-1.jnk \ +{File "%TEMP%/utf-check-597-1-67-1.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 598 utf-check-598-1-68-0.jnk \ +{File "%TEMP%/utf-check-598-1-68-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 599 utf-check-599-1-68-1.jnk \ +{File "%TEMP%/utf-check-599-1-68-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 600 utf-check-600-1-69-0.jnk \ +{File "%TEMP%/utf-check-600-1-69-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 601 utf-check-601-1-69-1.jnk \ +{File "%TEMP%/utf-check-601-1-69-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 602 utf-check-602-1-70-0.jnk \ +{File "%TEMP%/utf-check-602-1-70-0.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 603 utf-check-603-1-70-1.jnk \ +{File "%TEMP%/utf-check-603-1-70-1.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 604 utf-check-604-1-71-0.jnk \ +{File "%TEMP%/utf-check-604-1-71-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 605 utf-check-605-1-71-1.jnk \ +{File "%TEMP%/utf-check-605-1-71-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 606 utf-check-606-1-72-0.jnk \ +{File "%TEMP%/utf-check-606-1-72-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 607 utf-check-607-1-72-1.jnk \ +{File "%TEMP%/utf-check-607-1-72-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 608 utf-check-608-1-73-0.jnk \ +{File "%TEMP%/utf-check-608-1-73-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 609 utf-check-609-1-73-1.jnk \ +{File "%TEMP%/utf-check-609-1-73-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 610 utf-check-610-1-74-0.jnk \ +{File "%TEMP%/utf-check-610-1-74-0.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 611 utf-check-611-1-74-1.jnk \ +{File "%TEMP%/utf-check-611-1-74-1.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 612 utf-check-612-1-75-0.jnk \ +{File "%TEMP%/utf-check-612-1-75-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 613 utf-check-613-1-75-1.jnk \ +{File "%TEMP%/utf-check-613-1-75-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 614 utf-check-614-1-76-0.jnk \ +{File "%TEMP%/utf-check-614-1-76-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 615 utf-check-615-1-76-1.jnk \ +{File "%TEMP%/utf-check-615-1-76-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 616 utf-check-616-1-77-0.jnk \ +{File "%TEMP%/utf-check-616-1-77-0.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 617 utf-check-617-1-77-1.jnk \ +{File "%TEMP%/utf-check-617-1-77-1.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 618 utf-check-618-1-78-0.jnk \ +{File "%TEMP%/utf-check-618-1-78-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 619 utf-check-619-1-78-1.jnk \ +{File "%TEMP%/utf-check-619-1-78-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 620 utf-check-620-1-79-0.jnk \ +{File "%TEMP%/utf-check-620-1-79-0.jnk" has 11 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 621 utf-check-621-1-79-1.jnk \ +{File "%TEMP%/utf-check-621-1-79-1.jnk" has 12 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 622 utf-check-622-1-80-0.jnk \ +{File "%TEMP%/utf-check-622-1-80-0.jnk" has 8196 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 623 utf-check-623-1-80-1.jnk \ +{File "%TEMP%/utf-check-623-1-80-1.jnk" has 8197 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 624 utf-check-624-1-81-0.jnk \ +{File "%TEMP%/utf-check-624-1-81-0.jnk" has 8197 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 625 utf-check-625-1-81-1.jnk \ +{File "%TEMP%/utf-check-625-1-81-1.jnk" has 8198 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 626 utf-check-626-1-82-0.jnk \ +{File "%TEMP%/utf-check-626-1-82-0.jnk" has 8197 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 627 utf-check-627-1-82-1.jnk \ +{File "%TEMP%/utf-check-627-1-82-1.jnk" has 8198 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 628 utf-check-628-1-83-0.jnk \ +{File "%TEMP%/utf-check-628-1-83-0.jnk" has 8198 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 629 utf-check-629-1-83-1.jnk \ +{File "%TEMP%/utf-check-629-1-83-1.jnk" has 8199 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 630 utf-check-630-1-84-0.jnk \ +{File "%TEMP%/utf-check-630-1-84-0.jnk" has 8199 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 631 utf-check-631-1-84-1.jnk \ +{File "%TEMP%/utf-check-631-1-84-1.jnk" has 8200 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 632 utf-check-632-1-85-0.jnk \ +{File "%TEMP%/utf-check-632-1-85-0.jnk" has 8200 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 633 utf-check-633-1-85-1.jnk \ +{File "%TEMP%/utf-check-633-1-85-1.jnk" has 8201 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 634 utf-check-634-1-86-0.jnk \ +{File "%TEMP%/utf-check-634-1-86-0.jnk" has 8200 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 635 utf-check-635-1-86-1.jnk \ +{File "%TEMP%/utf-check-635-1-86-1.jnk" has 8201 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 636 utf-check-636-1-87-0.jnk \ +{File "%TEMP%/utf-check-636-1-87-0.jnk" has 8201 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 637 utf-check-637-1-87-1.jnk \ +{File "%TEMP%/utf-check-637-1-87-1.jnk" has 8202 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 638 utf-check-638-1-88-0.jnk \ +{File "%TEMP%/utf-check-638-1-88-0.jnk" has 8197 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 639 utf-check-639-1-88-1.jnk \ +{File "%TEMP%/utf-check-639-1-88-1.jnk" has 8198 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 640 utf-check-640-1-89-0.jnk \ +{File "%TEMP%/utf-check-640-1-89-0.jnk" has 8198 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 641 utf-check-641-1-89-1.jnk \ +{File "%TEMP%/utf-check-641-1-89-1.jnk" has 8199 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 642 utf-check-642-1-90-0.jnk \ +{File "%TEMP%/utf-check-642-1-90-0.jnk" has 8198 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 643 utf-check-643-1-90-1.jnk \ +{File "%TEMP%/utf-check-643-1-90-1.jnk" has 8199 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 644 utf-check-644-1-91-0.jnk \ +{File "%TEMP%/utf-check-644-1-91-0.jnk" has 8199 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 645 utf-check-645-1-91-1.jnk \ +{File "%TEMP%/utf-check-645-1-91-1.jnk" has 8200 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 646 utf-check-646-1-92-0.jnk \ +{File "%TEMP%/utf-check-646-1-92-0.jnk" has 8200 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 647 utf-check-647-1-92-1.jnk \ +{File "%TEMP%/utf-check-647-1-92-1.jnk" has 8201 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 648 utf-check-648-1-93-0.jnk \ +{File "%TEMP%/utf-check-648-1-93-0.jnk" has 8201 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 649 utf-check-649-1-93-1.jnk \ +{File "%TEMP%/utf-check-649-1-93-1.jnk" has 8202 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 650 utf-check-650-1-94-0.jnk \ +{File "%TEMP%/utf-check-650-1-94-0.jnk" has 8201 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 651 utf-check-651-1-94-1.jnk \ +{File "%TEMP%/utf-check-651-1-94-1.jnk" has 8202 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 652 utf-check-652-1-95-0.jnk \ +{File "%TEMP%/utf-check-652-1-95-0.jnk" has 8202 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 653 utf-check-653-1-95-1.jnk \ +{File "%TEMP%/utf-check-653-1-95-1.jnk" has 8203 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 654 utf-check-654-1-96-0.jnk \ +{File "%TEMP%/utf-check-654-1-96-0.jnk" has 8197 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 655 utf-check-655-1-96-1.jnk \ +{File "%TEMP%/utf-check-655-1-96-1.jnk" has 8198 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 656 utf-check-656-1-97-0.jnk \ +{File "%TEMP%/utf-check-656-1-97-0.jnk" has 8198 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 657 utf-check-657-1-97-1.jnk \ +{File "%TEMP%/utf-check-657-1-97-1.jnk" has 8199 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 658 utf-check-658-1-98-0.jnk \ +{File "%TEMP%/utf-check-658-1-98-0.jnk" has 8198 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 659 utf-check-659-1-98-1.jnk \ +{File "%TEMP%/utf-check-659-1-98-1.jnk" has 8199 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 660 utf-check-660-1-99-0.jnk \ +{File "%TEMP%/utf-check-660-1-99-0.jnk" has 8199 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 661 utf-check-661-1-99-1.jnk \ +{File "%TEMP%/utf-check-661-1-99-1.jnk" has 8200 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 662 utf-check-662-1-100-0.jnk \ +{File "%TEMP%/utf-check-662-1-100-0.jnk" has 8200 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 663 utf-check-663-1-100-1.jnk \ +{File "%TEMP%/utf-check-663-1-100-1.jnk" has 8201 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 664 utf-check-664-1-101-0.jnk \ +{File "%TEMP%/utf-check-664-1-101-0.jnk" has 8201 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 665 utf-check-665-1-101-1.jnk \ +{File "%TEMP%/utf-check-665-1-101-1.jnk" has 8202 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 666 utf-check-666-1-102-0.jnk \ +{File "%TEMP%/utf-check-666-1-102-0.jnk" has 8201 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 667 utf-check-667-1-102-1.jnk \ +{File "%TEMP%/utf-check-667-1-102-1.jnk" has 8202 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 668 utf-check-668-1-103-0.jnk \ +{File "%TEMP%/utf-check-668-1-103-0.jnk" has 8202 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 669 utf-check-669-1-103-1.jnk \ +{File "%TEMP%/utf-check-669-1-103-1.jnk" has 8203 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 670 utf-check-670-1-104-0.jnk \ +{File "%TEMP%/utf-check-670-1-104-0.jnk" has 8198 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 671 utf-check-671-1-104-1.jnk \ +{File "%TEMP%/utf-check-671-1-104-1.jnk" has 8199 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 672 utf-check-672-1-105-0.jnk \ +{File "%TEMP%/utf-check-672-1-105-0.jnk" has 8199 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 673 utf-check-673-1-105-1.jnk \ +{File "%TEMP%/utf-check-673-1-105-1.jnk" has 8200 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 674 utf-check-674-1-106-0.jnk \ +{File "%TEMP%/utf-check-674-1-106-0.jnk" has 8199 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 675 utf-check-675-1-106-1.jnk \ +{File "%TEMP%/utf-check-675-1-106-1.jnk" has 8200 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 676 utf-check-676-1-107-0.jnk \ +{File "%TEMP%/utf-check-676-1-107-0.jnk" has 8200 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 677 utf-check-677-1-107-1.jnk \ +{File "%TEMP%/utf-check-677-1-107-1.jnk" has 8201 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 678 utf-check-678-1-108-0.jnk \ +{File "%TEMP%/utf-check-678-1-108-0.jnk" has 8201 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 679 utf-check-679-1-108-1.jnk \ +{File "%TEMP%/utf-check-679-1-108-1.jnk" has 8202 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 680 utf-check-680-1-109-0.jnk \ +{File "%TEMP%/utf-check-680-1-109-0.jnk" has 8202 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 681 utf-check-681-1-109-1.jnk \ +{File "%TEMP%/utf-check-681-1-109-1.jnk" has 8203 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 682 utf-check-682-1-110-0.jnk \ +{File "%TEMP%/utf-check-682-1-110-0.jnk" has 8202 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 683 utf-check-683-1-110-1.jnk \ +{File "%TEMP%/utf-check-683-1-110-1.jnk" has 8203 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 684 utf-check-684-1-111-0.jnk \ +{File "%TEMP%/utf-check-684-1-111-0.jnk" has 8203 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 685 utf-check-685-1-111-1.jnk \ +{File "%TEMP%/utf-check-685-1-111-1.jnk" has 8204 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 686 utf-check-686-1-112-0.jnk \ +{File "%TEMP%/utf-check-686-1-112-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 687 utf-check-687-1-112-1.jnk \ +{File "%TEMP%/utf-check-687-1-112-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 688 utf-check-688-1-113-0.jnk \ +{File "%TEMP%/utf-check-688-1-113-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 689 utf-check-689-1-113-1.jnk \ +{File "%TEMP%/utf-check-689-1-113-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 690 utf-check-690-1-114-0.jnk \ +{File "%TEMP%/utf-check-690-1-114-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 691 utf-check-691-1-114-1.jnk \ +{File "%TEMP%/utf-check-691-1-114-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 692 utf-check-692-1-115-0.jnk \ +{File "%TEMP%/utf-check-692-1-115-0.jnk" has 5 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 693 utf-check-693-1-115-1.jnk \ +{File "%TEMP%/utf-check-693-1-115-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 694 utf-check-694-1-116-0.jnk \ +{File "%TEMP%/utf-check-694-1-116-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 695 utf-check-695-1-116-1.jnk \ +{File "%TEMP%/utf-check-695-1-116-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 696 utf-check-696-1-117-0.jnk \ +{File "%TEMP%/utf-check-696-1-117-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 697 utf-check-697-1-117-1.jnk \ +{File "%TEMP%/utf-check-697-1-117-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 698 utf-check-698-1-118-0.jnk \ +{File "%TEMP%/utf-check-698-1-118-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 699 utf-check-699-1-118-1.jnk \ +{File "%TEMP%/utf-check-699-1-118-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 700 utf-check-700-1-119-0.jnk \ +{File "%TEMP%/utf-check-700-1-119-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 701 utf-check-701-1-119-1.jnk \ +{File "%TEMP%/utf-check-701-1-119-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 702 utf-check-702-1-120-0.jnk \ +{File "%TEMP%/utf-check-702-1-120-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 703 utf-check-703-1-120-1.jnk \ +{File "%TEMP%/utf-check-703-1-120-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 704 utf-check-704-1-121-0.jnk \ +{File "%TEMP%/utf-check-704-1-121-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 705 utf-check-705-1-121-1.jnk \ +{File "%TEMP%/utf-check-705-1-121-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 706 utf-check-706-1-122-0.jnk \ +{File "%TEMP%/utf-check-706-1-122-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 707 utf-check-707-1-122-1.jnk \ +{File "%TEMP%/utf-check-707-1-122-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 708 utf-check-708-1-123-0.jnk \ +{File "%TEMP%/utf-check-708-1-123-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 709 utf-check-709-1-123-1.jnk \ +{File "%TEMP%/utf-check-709-1-123-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 710 utf-check-710-1-124-0.jnk \ +{File "%TEMP%/utf-check-710-1-124-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 711 utf-check-711-1-124-1.jnk \ +{File "%TEMP%/utf-check-711-1-124-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 712 utf-check-712-1-125-0.jnk \ +{File "%TEMP%/utf-check-712-1-125-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 713 utf-check-713-1-125-1.jnk \ +{File "%TEMP%/utf-check-713-1-125-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 714 utf-check-714-1-126-0.jnk \ +{File "%TEMP%/utf-check-714-1-126-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 715 utf-check-715-1-126-1.jnk \ +{File "%TEMP%/utf-check-715-1-126-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 716 utf-check-716-1-127-0.jnk \ +{File "%TEMP%/utf-check-716-1-127-0.jnk" has 7 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 717 utf-check-717-1-127-1.jnk \ +{File "%TEMP%/utf-check-717-1-127-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 718 utf-check-718-1-128-0.jnk \ +{File "%TEMP%/utf-check-718-1-128-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 719 utf-check-719-1-128-1.jnk \ +{File "%TEMP%/utf-check-719-1-128-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 720 utf-check-720-1-129-0.jnk \ +{File "%TEMP%/utf-check-720-1-129-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 721 utf-check-721-1-129-1.jnk \ +{File "%TEMP%/utf-check-721-1-129-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 722 utf-check-722-1-130-0.jnk \ +{File "%TEMP%/utf-check-722-1-130-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 723 utf-check-723-1-130-1.jnk \ +{File "%TEMP%/utf-check-723-1-130-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 724 utf-check-724-1-131-0.jnk \ +{File "%TEMP%/utf-check-724-1-131-0.jnk" has 4 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 725 utf-check-725-1-131-1.jnk \ +{File "%TEMP%/utf-check-725-1-131-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 726 utf-check-726-1-132-0.jnk \ +{File "%TEMP%/utf-check-726-1-132-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 727 utf-check-727-1-132-1.jnk \ +{File "%TEMP%/utf-check-727-1-132-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 728 utf-check-728-1-133-0.jnk \ +{File "%TEMP%/utf-check-728-1-133-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 729 utf-check-729-1-133-1.jnk \ +{File "%TEMP%/utf-check-729-1-133-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 730 utf-check-730-1-134-0.jnk \ +{File "%TEMP%/utf-check-730-1-134-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 731 utf-check-731-1-134-1.jnk \ +{File "%TEMP%/utf-check-731-1-134-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 732 utf-check-732-1-135-0.jnk \ +{File "%TEMP%/utf-check-732-1-135-0.jnk" has 5 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 733 utf-check-733-1-135-1.jnk \ +{File "%TEMP%/utf-check-733-1-135-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 734 utf-check-734-1-136-0.jnk \ +{File "%TEMP%/utf-check-734-1-136-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 735 utf-check-735-1-136-1.jnk \ +{File "%TEMP%/utf-check-735-1-136-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 736 utf-check-736-1-137-0.jnk \ +{File "%TEMP%/utf-check-736-1-137-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 737 utf-check-737-1-137-1.jnk \ +{File "%TEMP%/utf-check-737-1-137-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 738 utf-check-738-1-138-0.jnk \ +{File "%TEMP%/utf-check-738-1-138-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 739 utf-check-739-1-138-1.jnk \ +{File "%TEMP%/utf-check-739-1-138-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 740 utf-check-740-1-139-0.jnk \ +{File "%TEMP%/utf-check-740-1-139-0.jnk" has 5 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 741 utf-check-741-1-139-1.jnk \ +{File "%TEMP%/utf-check-741-1-139-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 742 utf-check-742-1-140-0.jnk \ +{File "%TEMP%/utf-check-742-1-140-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 743 utf-check-743-1-140-1.jnk \ +{File "%TEMP%/utf-check-743-1-140-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 744 utf-check-744-1-141-0.jnk \ +{File "%TEMP%/utf-check-744-1-141-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 745 utf-check-745-1-141-1.jnk \ +{File "%TEMP%/utf-check-745-1-141-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 746 utf-check-746-1-142-0.jnk \ +{File "%TEMP%/utf-check-746-1-142-0.jnk" has 5 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 747 utf-check-747-1-142-1.jnk \ +{File "%TEMP%/utf-check-747-1-142-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 748 utf-check-748-1-143-0.jnk \ +{File "%TEMP%/utf-check-748-1-143-0.jnk" has 5 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 749 utf-check-749-1-143-1.jnk \ +{File "%TEMP%/utf-check-749-1-143-1.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 750 utf-check-750-1-144-0.jnk \ +{File "%TEMP%/utf-check-750-1-144-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 751 utf-check-751-1-144-1.jnk \ +{File "%TEMP%/utf-check-751-1-144-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 752 utf-check-752-1-145-0.jnk \ +{File "%TEMP%/utf-check-752-1-145-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 753 utf-check-753-1-145-1.jnk \ +{File "%TEMP%/utf-check-753-1-145-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 754 utf-check-754-1-146-0.jnk \ +{File "%TEMP%/utf-check-754-1-146-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 755 utf-check-755-1-146-1.jnk \ +{File "%TEMP%/utf-check-755-1-146-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 756 utf-check-756-1-147-0.jnk \ +{File "%TEMP%/utf-check-756-1-147-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 757 utf-check-757-1-147-1.jnk \ +{File "%TEMP%/utf-check-757-1-147-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 758 utf-check-758-1-148-0.jnk \ +{File "%TEMP%/utf-check-758-1-148-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 759 utf-check-759-1-148-1.jnk \ +{File "%TEMP%/utf-check-759-1-148-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 760 utf-check-760-1-149-0.jnk \ +{File "%TEMP%/utf-check-760-1-149-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 761 utf-check-761-1-149-1.jnk \ +{File "%TEMP%/utf-check-761-1-149-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 762 utf-check-762-1-150-0.jnk \ +{File "%TEMP%/utf-check-762-1-150-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 763 utf-check-763-1-150-1.jnk \ +{File "%TEMP%/utf-check-763-1-150-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 764 utf-check-764-1-151-0.jnk \ +{File "%TEMP%/utf-check-764-1-151-0.jnk" has 7 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 765 utf-check-765-1-151-1.jnk \ +{File "%TEMP%/utf-check-765-1-151-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 766 utf-check-766-1-152-0.jnk \ +{File "%TEMP%/utf-check-766-1-152-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 767 utf-check-767-1-152-1.jnk \ +{File "%TEMP%/utf-check-767-1-152-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 768 utf-check-768-1-153-0.jnk \ +{File "%TEMP%/utf-check-768-1-153-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 769 utf-check-769-1-153-1.jnk \ +{File "%TEMP%/utf-check-769-1-153-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 770 utf-check-770-1-154-0.jnk \ +{File "%TEMP%/utf-check-770-1-154-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 771 utf-check-771-1-154-1.jnk \ +{File "%TEMP%/utf-check-771-1-154-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 772 utf-check-772-1-155-0.jnk \ +{File "%TEMP%/utf-check-772-1-155-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 773 utf-check-773-1-155-1.jnk \ +{File "%TEMP%/utf-check-773-1-155-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 774 utf-check-774-1-156-0.jnk \ +{File "%TEMP%/utf-check-774-1-156-0.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 775 utf-check-775-1-156-1.jnk \ +{File "%TEMP%/utf-check-775-1-156-1.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 776 utf-check-776-1-157-0.jnk \ +{File "%TEMP%/utf-check-776-1-157-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 777 utf-check-777-1-157-1.jnk \ +{File "%TEMP%/utf-check-777-1-157-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 778 utf-check-778-1-158-0.jnk \ +{File "%TEMP%/utf-check-778-1-158-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 779 utf-check-779-1-158-1.jnk \ +{File "%TEMP%/utf-check-779-1-158-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 780 utf-check-780-1-159-0.jnk \ +{File "%TEMP%/utf-check-780-1-159-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 781 utf-check-781-1-159-1.jnk \ +{File "%TEMP%/utf-check-781-1-159-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 782 utf-check-782-1-160-0.jnk \ +{File "%TEMP%/utf-check-782-1-160-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 783 utf-check-783-1-160-1.jnk \ +{File "%TEMP%/utf-check-783-1-160-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 784 utf-check-784-1-161-0.jnk \ +{File "%TEMP%/utf-check-784-1-161-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 785 utf-check-785-1-161-1.jnk \ +{File "%TEMP%/utf-check-785-1-161-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 786 utf-check-786-1-162-0.jnk \ +{File "%TEMP%/utf-check-786-1-162-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 787 utf-check-787-1-162-1.jnk \ +{File "%TEMP%/utf-check-787-1-162-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 788 utf-check-788-1-163-0.jnk \ +{File "%TEMP%/utf-check-788-1-163-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 789 utf-check-789-1-163-1.jnk \ +{File "%TEMP%/utf-check-789-1-163-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 790 utf-check-790-1-164-0.jnk \ +{File "%TEMP%/utf-check-790-1-164-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 791 utf-check-791-1-164-1.jnk \ +{File "%TEMP%/utf-check-791-1-164-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 792 utf-check-792-1-165-0.jnk \ +{File "%TEMP%/utf-check-792-1-165-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 793 utf-check-793-1-165-1.jnk \ +{File "%TEMP%/utf-check-793-1-165-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 794 utf-check-794-1-166-0.jnk \ +{File "%TEMP%/utf-check-794-1-166-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 795 utf-check-795-1-166-1.jnk \ +{File "%TEMP%/utf-check-795-1-166-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 796 utf-check-796-1-167-0.jnk \ +{File "%TEMP%/utf-check-796-1-167-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 797 utf-check-797-1-167-1.jnk \ +{File "%TEMP%/utf-check-797-1-167-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 798 utf-check-798-1-168-0.jnk \ +{File "%TEMP%/utf-check-798-1-168-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 799 utf-check-799-1-168-1.jnk \ +{File "%TEMP%/utf-check-799-1-168-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 800 utf-check-800-1-169-0.jnk \ +{File "%TEMP%/utf-check-800-1-169-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 801 utf-check-801-1-169-1.jnk \ +{File "%TEMP%/utf-check-801-1-169-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 802 utf-check-802-1-170-0.jnk \ +{File "%TEMP%/utf-check-802-1-170-0.jnk" has 7 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 803 utf-check-803-1-170-1.jnk \ +{File "%TEMP%/utf-check-803-1-170-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 804 utf-check-804-1-171-0.jnk \ +{File "%TEMP%/utf-check-804-1-171-0.jnk" has 7 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 805 utf-check-805-1-171-1.jnk \ +{File "%TEMP%/utf-check-805-1-171-1.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 806 utf-check-806-1-172-0.jnk \ +{File "%TEMP%/utf-check-806-1-172-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 807 utf-check-807-1-172-1.jnk \ +{File "%TEMP%/utf-check-807-1-172-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 808 utf-check-808-1-173-0.jnk \ +{File "%TEMP%/utf-check-808-1-173-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 809 utf-check-809-1-173-1.jnk \ +{File "%TEMP%/utf-check-809-1-173-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 810 utf-check-810-1-174-0.jnk \ +{File "%TEMP%/utf-check-810-1-174-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 811 utf-check-811-1-174-1.jnk \ +{File "%TEMP%/utf-check-811-1-174-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 812 utf-check-812-1-175-0.jnk \ +{File "%TEMP%/utf-check-812-1-175-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 813 utf-check-813-1-175-1.jnk \ +{File "%TEMP%/utf-check-813-1-175-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 814 utf-check-814-1-176-0.jnk \ +{File "%TEMP%/utf-check-814-1-176-0.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 815 utf-check-815-1-176-1.jnk \ +{File "%TEMP%/utf-check-815-1-176-1.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 816 utf-check-816-1-177-0.jnk \ +{File "%TEMP%/utf-check-816-1-177-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 817 utf-check-817-1-177-1.jnk \ +{File "%TEMP%/utf-check-817-1-177-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 818 utf-check-818-1-178-0.jnk \ +{File "%TEMP%/utf-check-818-1-178-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 819 utf-check-819-1-178-1.jnk \ +{File "%TEMP%/utf-check-819-1-178-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 820 utf-check-820-1-179-0.jnk \ +{File "%TEMP%/utf-check-820-1-179-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 821 utf-check-821-1-179-1.jnk \ +{File "%TEMP%/utf-check-821-1-179-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 822 utf-check-822-1-180-0.jnk \ +{File "%TEMP%/utf-check-822-1-180-0.jnk" has 9 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 823 utf-check-823-1-180-1.jnk \ +{File "%TEMP%/utf-check-823-1-180-1.jnk" has 10 bytes. +Starts with UTF-8 BOM: yes +Starts with UTF-16 BOM: no +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 824 utf-check-824-2-0-0.jnk \ +{File "%TEMP%/utf-check-824-2-0-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 825 utf-check-825-2-0-1.jnk \ +{File "%TEMP%/utf-check-825-2-0-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 826 utf-check-826-2-1-0.jnk \ +{File "%TEMP%/utf-check-826-2-1-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 827 utf-check-827-2-1-1.jnk \ +{File "%TEMP%/utf-check-827-2-1-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 828 utf-check-828-2-2-0.jnk \ +{File "%TEMP%/utf-check-828-2-2-0.jnk" has 4 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 829 utf-check-829-2-2-1.jnk \ +{File "%TEMP%/utf-check-829-2-2-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 830 utf-check-830-2-3-0.jnk \ +{File "%TEMP%/utf-check-830-2-3-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 831 utf-check-831-2-3-1.jnk \ +{File "%TEMP%/utf-check-831-2-3-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 832 utf-check-832-2-4-0.jnk \ +{File "%TEMP%/utf-check-832-2-4-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 833 utf-check-833-2-4-1.jnk \ +{File "%TEMP%/utf-check-833-2-4-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 834 utf-check-834-2-5-0.jnk \ +{File "%TEMP%/utf-check-834-2-5-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 835 utf-check-835-2-5-1.jnk \ +{File "%TEMP%/utf-check-835-2-5-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 836 utf-check-836-2-6-0.jnk \ +{File "%TEMP%/utf-check-836-2-6-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 837 utf-check-837-2-6-1.jnk \ +{File "%TEMP%/utf-check-837-2-6-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 838 utf-check-838-2-7-0.jnk \ +{File "%TEMP%/utf-check-838-2-7-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 839 utf-check-839-2-7-1.jnk \ +{File "%TEMP%/utf-check-839-2-7-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 840 utf-check-840-2-8-0.jnk \ +{File "%TEMP%/utf-check-840-2-8-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 841 utf-check-841-2-8-1.jnk \ +{File "%TEMP%/utf-check-841-2-8-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 842 utf-check-842-2-9-0.jnk \ +{File "%TEMP%/utf-check-842-2-9-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 843 utf-check-843-2-9-1.jnk \ +{File "%TEMP%/utf-check-843-2-9-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 844 utf-check-844-2-10-0.jnk \ +{File "%TEMP%/utf-check-844-2-10-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 845 utf-check-845-2-10-1.jnk \ +{File "%TEMP%/utf-check-845-2-10-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 846 utf-check-846-2-11-0.jnk \ +{File "%TEMP%/utf-check-846-2-11-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 847 utf-check-847-2-11-1.jnk \ +{File "%TEMP%/utf-check-847-2-11-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 848 utf-check-848-2-12-0.jnk \ +{File "%TEMP%/utf-check-848-2-12-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 849 utf-check-849-2-12-1.jnk \ +{File "%TEMP%/utf-check-849-2-12-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 850 utf-check-850-2-13-0.jnk \ +{File "%TEMP%/utf-check-850-2-13-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 851 utf-check-851-2-13-1.jnk \ +{File "%TEMP%/utf-check-851-2-13-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 852 utf-check-852-2-14-0.jnk \ +{File "%TEMP%/utf-check-852-2-14-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 853 utf-check-853-2-14-1.jnk \ +{File "%TEMP%/utf-check-853-2-14-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 854 utf-check-854-2-15-0.jnk \ +{File "%TEMP%/utf-check-854-2-15-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 855 utf-check-855-2-15-1.jnk \ +{File "%TEMP%/utf-check-855-2-15-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 856 utf-check-856-2-16-0.jnk \ +{File "%TEMP%/utf-check-856-2-16-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 857 utf-check-857-2-16-1.jnk \ +{File "%TEMP%/utf-check-857-2-16-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 858 utf-check-858-2-17-0.jnk \ +{File "%TEMP%/utf-check-858-2-17-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 859 utf-check-859-2-17-1.jnk \ +{File "%TEMP%/utf-check-859-2-17-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 860 utf-check-860-2-18-0.jnk \ +{File "%TEMP%/utf-check-860-2-18-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 861 utf-check-861-2-18-1.jnk \ +{File "%TEMP%/utf-check-861-2-18-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 862 utf-check-862-2-19-0.jnk \ +{File "%TEMP%/utf-check-862-2-19-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 863 utf-check-863-2-19-1.jnk \ +{File "%TEMP%/utf-check-863-2-19-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 864 utf-check-864-2-20-0.jnk \ +{File "%TEMP%/utf-check-864-2-20-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 865 utf-check-865-2-20-1.jnk \ +{File "%TEMP%/utf-check-865-2-20-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 866 utf-check-866-2-21-0.jnk \ +{File "%TEMP%/utf-check-866-2-21-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 867 utf-check-867-2-21-1.jnk \ +{File "%TEMP%/utf-check-867-2-21-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 868 utf-check-868-2-22-0.jnk \ +{File "%TEMP%/utf-check-868-2-22-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 869 utf-check-869-2-22-1.jnk \ +{File "%TEMP%/utf-check-869-2-22-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 870 utf-check-870-2-23-0.jnk \ +{File "%TEMP%/utf-check-870-2-23-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 871 utf-check-871-2-23-1.jnk \ +{File "%TEMP%/utf-check-871-2-23-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 872 utf-check-872-2-24-0.jnk \ +{File "%TEMP%/utf-check-872-2-24-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 873 utf-check-873-2-24-1.jnk \ +{File "%TEMP%/utf-check-873-2-24-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 874 utf-check-874-2-25-0.jnk \ +{File "%TEMP%/utf-check-874-2-25-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 875 utf-check-875-2-25-1.jnk \ +{File "%TEMP%/utf-check-875-2-25-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 876 utf-check-876-2-26-0.jnk \ +{File "%TEMP%/utf-check-876-2-26-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 877 utf-check-877-2-26-1.jnk \ +{File "%TEMP%/utf-check-877-2-26-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 878 utf-check-878-2-27-0.jnk \ +{File "%TEMP%/utf-check-878-2-27-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 879 utf-check-879-2-27-1.jnk \ +{File "%TEMP%/utf-check-879-2-27-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 880 utf-check-880-2-28-0.jnk \ +{File "%TEMP%/utf-check-880-2-28-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 881 utf-check-881-2-28-1.jnk \ +{File "%TEMP%/utf-check-881-2-28-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 882 utf-check-882-2-29-0.jnk \ +{File "%TEMP%/utf-check-882-2-29-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 883 utf-check-883-2-29-1.jnk \ +{File "%TEMP%/utf-check-883-2-29-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 884 utf-check-884-2-30-0.jnk \ +{File "%TEMP%/utf-check-884-2-30-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 885 utf-check-885-2-30-1.jnk \ +{File "%TEMP%/utf-check-885-2-30-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 886 utf-check-886-2-31-0.jnk \ +{File "%TEMP%/utf-check-886-2-31-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 887 utf-check-887-2-31-1.jnk \ +{File "%TEMP%/utf-check-887-2-31-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 888 utf-check-888-2-32-0.jnk \ +{File "%TEMP%/utf-check-888-2-32-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 889 utf-check-889-2-32-1.jnk \ +{File "%TEMP%/utf-check-889-2-32-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 890 utf-check-890-2-33-0.jnk \ +{File "%TEMP%/utf-check-890-2-33-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 891 utf-check-891-2-33-1.jnk \ +{File "%TEMP%/utf-check-891-2-33-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 892 utf-check-892-2-34-0.jnk \ +{File "%TEMP%/utf-check-892-2-34-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 893 utf-check-893-2-34-1.jnk \ +{File "%TEMP%/utf-check-893-2-34-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 894 utf-check-894-2-35-0.jnk \ +{File "%TEMP%/utf-check-894-2-35-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 895 utf-check-895-2-35-1.jnk \ +{File "%TEMP%/utf-check-895-2-35-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 896 utf-check-896-2-36-0.jnk \ +{File "%TEMP%/utf-check-896-2-36-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 897 utf-check-897-2-36-1.jnk \ +{File "%TEMP%/utf-check-897-2-36-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 898 utf-check-898-2-37-0.jnk \ +{File "%TEMP%/utf-check-898-2-37-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 899 utf-check-899-2-37-1.jnk \ +{File "%TEMP%/utf-check-899-2-37-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 900 utf-check-900-2-38-0.jnk \ +{File "%TEMP%/utf-check-900-2-38-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 901 utf-check-901-2-38-1.jnk \ +{File "%TEMP%/utf-check-901-2-38-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 902 utf-check-902-2-39-0.jnk \ +{File "%TEMP%/utf-check-902-2-39-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 903 utf-check-903-2-39-1.jnk \ +{File "%TEMP%/utf-check-903-2-39-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 904 utf-check-904-2-40-0.jnk \ +{File "%TEMP%/utf-check-904-2-40-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 905 utf-check-905-2-40-1.jnk \ +{File "%TEMP%/utf-check-905-2-40-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 906 utf-check-906-2-41-0.jnk \ +{File "%TEMP%/utf-check-906-2-41-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 907 utf-check-907-2-41-1.jnk \ +{File "%TEMP%/utf-check-907-2-41-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 908 utf-check-908-2-42-0.jnk \ +{File "%TEMP%/utf-check-908-2-42-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 909 utf-check-909-2-42-1.jnk \ +{File "%TEMP%/utf-check-909-2-42-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 910 utf-check-910-2-43-0.jnk \ +{File "%TEMP%/utf-check-910-2-43-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 911 utf-check-911-2-43-1.jnk \ +{File "%TEMP%/utf-check-911-2-43-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 912 utf-check-912-2-44-0.jnk \ +{File "%TEMP%/utf-check-912-2-44-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 913 utf-check-913-2-44-1.jnk \ +{File "%TEMP%/utf-check-913-2-44-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 914 utf-check-914-2-45-0.jnk \ +{File "%TEMP%/utf-check-914-2-45-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 915 utf-check-915-2-45-1.jnk \ +{File "%TEMP%/utf-check-915-2-45-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 916 utf-check-916-2-46-0.jnk \ +{File "%TEMP%/utf-check-916-2-46-0.jnk" has 14 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 917 utf-check-917-2-46-1.jnk \ +{File "%TEMP%/utf-check-917-2-46-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 918 utf-check-918-2-47-0.jnk \ +{File "%TEMP%/utf-check-918-2-47-0.jnk" has 16 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 919 utf-check-919-2-47-1.jnk \ +{File "%TEMP%/utf-check-919-2-47-1.jnk" has 17 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 920 utf-check-920-2-48-0.jnk \ +{File "%TEMP%/utf-check-920-2-48-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 921 utf-check-921-2-48-1.jnk \ +{File "%TEMP%/utf-check-921-2-48-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 922 utf-check-922-2-49-0.jnk \ +{File "%TEMP%/utf-check-922-2-49-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 923 utf-check-923-2-49-1.jnk \ +{File "%TEMP%/utf-check-923-2-49-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 924 utf-check-924-2-50-0.jnk \ +{File "%TEMP%/utf-check-924-2-50-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 925 utf-check-925-2-50-1.jnk \ +{File "%TEMP%/utf-check-925-2-50-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 926 utf-check-926-2-51-0.jnk \ +{File "%TEMP%/utf-check-926-2-51-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 927 utf-check-927-2-51-1.jnk \ +{File "%TEMP%/utf-check-927-2-51-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 928 utf-check-928-2-52-0.jnk \ +{File "%TEMP%/utf-check-928-2-52-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 929 utf-check-929-2-52-1.jnk \ +{File "%TEMP%/utf-check-929-2-52-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 930 utf-check-930-2-53-0.jnk \ +{File "%TEMP%/utf-check-930-2-53-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 931 utf-check-931-2-53-1.jnk \ +{File "%TEMP%/utf-check-931-2-53-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 932 utf-check-932-2-54-0.jnk \ +{File "%TEMP%/utf-check-932-2-54-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 933 utf-check-933-2-54-1.jnk \ +{File "%TEMP%/utf-check-933-2-54-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 934 utf-check-934-2-55-0.jnk \ +{File "%TEMP%/utf-check-934-2-55-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 935 utf-check-935-2-55-1.jnk \ +{File "%TEMP%/utf-check-935-2-55-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 936 utf-check-936-2-56-0.jnk \ +{File "%TEMP%/utf-check-936-2-56-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 937 utf-check-937-2-56-1.jnk \ +{File "%TEMP%/utf-check-937-2-56-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 938 utf-check-938-2-57-0.jnk \ +{File "%TEMP%/utf-check-938-2-57-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 939 utf-check-939-2-57-1.jnk \ +{File "%TEMP%/utf-check-939-2-57-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 940 utf-check-940-2-58-0.jnk \ +{File "%TEMP%/utf-check-940-2-58-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 941 utf-check-941-2-58-1.jnk \ +{File "%TEMP%/utf-check-941-2-58-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 942 utf-check-942-2-59-0.jnk \ +{File "%TEMP%/utf-check-942-2-59-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 943 utf-check-943-2-59-1.jnk \ +{File "%TEMP%/utf-check-943-2-59-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 944 utf-check-944-2-60-0.jnk \ +{File "%TEMP%/utf-check-944-2-60-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 945 utf-check-945-2-60-1.jnk \ +{File "%TEMP%/utf-check-945-2-60-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 946 utf-check-946-2-61-0.jnk \ +{File "%TEMP%/utf-check-946-2-61-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 947 utf-check-947-2-61-1.jnk \ +{File "%TEMP%/utf-check-947-2-61-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 948 utf-check-948-2-62-0.jnk \ +{File "%TEMP%/utf-check-948-2-62-0.jnk" has 14 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 949 utf-check-949-2-62-1.jnk \ +{File "%TEMP%/utf-check-949-2-62-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 950 utf-check-950-2-63-0.jnk \ +{File "%TEMP%/utf-check-950-2-63-0.jnk" has 16 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 951 utf-check-951-2-63-1.jnk \ +{File "%TEMP%/utf-check-951-2-63-1.jnk" has 17 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 952 utf-check-952-2-64-0.jnk \ +{File "%TEMP%/utf-check-952-2-64-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 953 utf-check-953-2-64-1.jnk \ +{File "%TEMP%/utf-check-953-2-64-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 954 utf-check-954-2-65-0.jnk \ +{File "%TEMP%/utf-check-954-2-65-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 955 utf-check-955-2-65-1.jnk \ +{File "%TEMP%/utf-check-955-2-65-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 956 utf-check-956-2-66-0.jnk \ +{File "%TEMP%/utf-check-956-2-66-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 957 utf-check-957-2-66-1.jnk \ +{File "%TEMP%/utf-check-957-2-66-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 958 utf-check-958-2-67-0.jnk \ +{File "%TEMP%/utf-check-958-2-67-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 959 utf-check-959-2-67-1.jnk \ +{File "%TEMP%/utf-check-959-2-67-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 960 utf-check-960-2-68-0.jnk \ +{File "%TEMP%/utf-check-960-2-68-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 961 utf-check-961-2-68-1.jnk \ +{File "%TEMP%/utf-check-961-2-68-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 962 utf-check-962-2-69-0.jnk \ +{File "%TEMP%/utf-check-962-2-69-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 963 utf-check-963-2-69-1.jnk \ +{File "%TEMP%/utf-check-963-2-69-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 964 utf-check-964-2-70-0.jnk \ +{File "%TEMP%/utf-check-964-2-70-0.jnk" has 14 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 965 utf-check-965-2-70-1.jnk \ +{File "%TEMP%/utf-check-965-2-70-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 966 utf-check-966-2-71-0.jnk \ +{File "%TEMP%/utf-check-966-2-71-0.jnk" has 16 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 967 utf-check-967-2-71-1.jnk \ +{File "%TEMP%/utf-check-967-2-71-1.jnk" has 17 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 968 utf-check-968-2-72-0.jnk \ +{File "%TEMP%/utf-check-968-2-72-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 969 utf-check-969-2-72-1.jnk \ +{File "%TEMP%/utf-check-969-2-72-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 970 utf-check-970-2-73-0.jnk \ +{File "%TEMP%/utf-check-970-2-73-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 971 utf-check-971-2-73-1.jnk \ +{File "%TEMP%/utf-check-971-2-73-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 972 utf-check-972-2-74-0.jnk \ +{File "%TEMP%/utf-check-972-2-74-0.jnk" has 14 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 973 utf-check-973-2-74-1.jnk \ +{File "%TEMP%/utf-check-973-2-74-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 974 utf-check-974-2-75-0.jnk \ +{File "%TEMP%/utf-check-974-2-75-0.jnk" has 16 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 975 utf-check-975-2-75-1.jnk \ +{File "%TEMP%/utf-check-975-2-75-1.jnk" has 17 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 976 utf-check-976-2-76-0.jnk \ +{File "%TEMP%/utf-check-976-2-76-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 977 utf-check-977-2-76-1.jnk \ +{File "%TEMP%/utf-check-977-2-76-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 978 utf-check-978-2-77-0.jnk \ +{File "%TEMP%/utf-check-978-2-77-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 979 utf-check-979-2-77-1.jnk \ +{File "%TEMP%/utf-check-979-2-77-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 980 utf-check-980-2-78-0.jnk \ +{File "%TEMP%/utf-check-980-2-78-0.jnk" has 16 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 981 utf-check-981-2-78-1.jnk \ +{File "%TEMP%/utf-check-981-2-78-1.jnk" has 17 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 982 utf-check-982-2-79-0.jnk \ +{File "%TEMP%/utf-check-982-2-79-0.jnk" has 18 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 983 utf-check-983-2-79-1.jnk \ +{File "%TEMP%/utf-check-983-2-79-1.jnk" has 19 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 984 utf-check-984-2-80-0.jnk \ +{File "%TEMP%/utf-check-984-2-80-0.jnk" has 16388 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 985 utf-check-985-2-80-1.jnk \ +{File "%TEMP%/utf-check-985-2-80-1.jnk" has 16389 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 986 utf-check-986-2-81-0.jnk \ +{File "%TEMP%/utf-check-986-2-81-0.jnk" has 16390 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 987 utf-check-987-2-81-1.jnk \ +{File "%TEMP%/utf-check-987-2-81-1.jnk" has 16391 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 988 utf-check-988-2-82-0.jnk \ +{File "%TEMP%/utf-check-988-2-82-0.jnk" has 16390 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 989 utf-check-989-2-82-1.jnk \ +{File "%TEMP%/utf-check-989-2-82-1.jnk" has 16391 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 990 utf-check-990-2-83-0.jnk \ +{File "%TEMP%/utf-check-990-2-83-0.jnk" has 16392 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 991 utf-check-991-2-83-1.jnk \ +{File "%TEMP%/utf-check-991-2-83-1.jnk" has 16393 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 992 utf-check-992-2-84-0.jnk \ +{File "%TEMP%/utf-check-992-2-84-0.jnk" has 16394 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 993 utf-check-993-2-84-1.jnk \ +{File "%TEMP%/utf-check-993-2-84-1.jnk" has 16395 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 994 utf-check-994-2-85-0.jnk \ +{File "%TEMP%/utf-check-994-2-85-0.jnk" has 16396 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 995 utf-check-995-2-85-1.jnk \ +{File "%TEMP%/utf-check-995-2-85-1.jnk" has 16397 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 996 utf-check-996-2-86-0.jnk \ +{File "%TEMP%/utf-check-996-2-86-0.jnk" has 16396 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 997 utf-check-997-2-86-1.jnk \ +{File "%TEMP%/utf-check-997-2-86-1.jnk" has 16397 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 998 utf-check-998-2-87-0.jnk \ +{File "%TEMP%/utf-check-998-2-87-0.jnk" has 16398 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 999 utf-check-999-2-87-1.jnk \ +{File "%TEMP%/utf-check-999-2-87-1.jnk" has 16399 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1000 utf-check-1000-2-88-0.jnk \ +{File "%TEMP%/utf-check-1000-2-88-0.jnk" has 16390 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1001 utf-check-1001-2-88-1.jnk \ +{File "%TEMP%/utf-check-1001-2-88-1.jnk" has 16391 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1002 utf-check-1002-2-89-0.jnk \ +{File "%TEMP%/utf-check-1002-2-89-0.jnk" has 16392 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1003 utf-check-1003-2-89-1.jnk \ +{File "%TEMP%/utf-check-1003-2-89-1.jnk" has 16393 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1004 utf-check-1004-2-90-0.jnk \ +{File "%TEMP%/utf-check-1004-2-90-0.jnk" has 16392 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1005 utf-check-1005-2-90-1.jnk \ +{File "%TEMP%/utf-check-1005-2-90-1.jnk" has 16393 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1006 utf-check-1006-2-91-0.jnk \ +{File "%TEMP%/utf-check-1006-2-91-0.jnk" has 16394 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1007 utf-check-1007-2-91-1.jnk \ +{File "%TEMP%/utf-check-1007-2-91-1.jnk" has 16395 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1008 utf-check-1008-2-92-0.jnk \ +{File "%TEMP%/utf-check-1008-2-92-0.jnk" has 16396 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1009 utf-check-1009-2-92-1.jnk \ +{File "%TEMP%/utf-check-1009-2-92-1.jnk" has 16397 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1010 utf-check-1010-2-93-0.jnk \ +{File "%TEMP%/utf-check-1010-2-93-0.jnk" has 16398 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1011 utf-check-1011-2-93-1.jnk \ +{File "%TEMP%/utf-check-1011-2-93-1.jnk" has 16399 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1012 utf-check-1012-2-94-0.jnk \ +{File "%TEMP%/utf-check-1012-2-94-0.jnk" has 16398 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1013 utf-check-1013-2-94-1.jnk \ +{File "%TEMP%/utf-check-1013-2-94-1.jnk" has 16399 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1014 utf-check-1014-2-95-0.jnk \ +{File "%TEMP%/utf-check-1014-2-95-0.jnk" has 16400 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1015 utf-check-1015-2-95-1.jnk \ +{File "%TEMP%/utf-check-1015-2-95-1.jnk" has 16401 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1016 utf-check-1016-2-96-0.jnk \ +{File "%TEMP%/utf-check-1016-2-96-0.jnk" has 16390 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1017 utf-check-1017-2-96-1.jnk \ +{File "%TEMP%/utf-check-1017-2-96-1.jnk" has 16391 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1018 utf-check-1018-2-97-0.jnk \ +{File "%TEMP%/utf-check-1018-2-97-0.jnk" has 16392 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1019 utf-check-1019-2-97-1.jnk \ +{File "%TEMP%/utf-check-1019-2-97-1.jnk" has 16393 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1020 utf-check-1020-2-98-0.jnk \ +{File "%TEMP%/utf-check-1020-2-98-0.jnk" has 16392 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1021 utf-check-1021-2-98-1.jnk \ +{File "%TEMP%/utf-check-1021-2-98-1.jnk" has 16393 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1022 utf-check-1022-2-99-0.jnk \ +{File "%TEMP%/utf-check-1022-2-99-0.jnk" has 16394 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1023 utf-check-1023-2-99-1.jnk \ +{File "%TEMP%/utf-check-1023-2-99-1.jnk" has 16395 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1024 utf-check-1024-2-100-0.jnk \ +{File "%TEMP%/utf-check-1024-2-100-0.jnk" has 16396 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1025 utf-check-1025-2-100-1.jnk \ +{File "%TEMP%/utf-check-1025-2-100-1.jnk" has 16397 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1026 utf-check-1026-2-101-0.jnk \ +{File "%TEMP%/utf-check-1026-2-101-0.jnk" has 16398 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1027 utf-check-1027-2-101-1.jnk \ +{File "%TEMP%/utf-check-1027-2-101-1.jnk" has 16399 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1028 utf-check-1028-2-102-0.jnk \ +{File "%TEMP%/utf-check-1028-2-102-0.jnk" has 16398 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1029 utf-check-1029-2-102-1.jnk \ +{File "%TEMP%/utf-check-1029-2-102-1.jnk" has 16399 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1030 utf-check-1030-2-103-0.jnk \ +{File "%TEMP%/utf-check-1030-2-103-0.jnk" has 16400 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1031 utf-check-1031-2-103-1.jnk \ +{File "%TEMP%/utf-check-1031-2-103-1.jnk" has 16401 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1032 utf-check-1032-2-104-0.jnk \ +{File "%TEMP%/utf-check-1032-2-104-0.jnk" has 16392 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1033 utf-check-1033-2-104-1.jnk \ +{File "%TEMP%/utf-check-1033-2-104-1.jnk" has 16393 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1034 utf-check-1034-2-105-0.jnk \ +{File "%TEMP%/utf-check-1034-2-105-0.jnk" has 16394 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1035 utf-check-1035-2-105-1.jnk \ +{File "%TEMP%/utf-check-1035-2-105-1.jnk" has 16395 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1036 utf-check-1036-2-106-0.jnk \ +{File "%TEMP%/utf-check-1036-2-106-0.jnk" has 16394 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1037 utf-check-1037-2-106-1.jnk \ +{File "%TEMP%/utf-check-1037-2-106-1.jnk" has 16395 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1038 utf-check-1038-2-107-0.jnk \ +{File "%TEMP%/utf-check-1038-2-107-0.jnk" has 16396 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1039 utf-check-1039-2-107-1.jnk \ +{File "%TEMP%/utf-check-1039-2-107-1.jnk" has 16397 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1040 utf-check-1040-2-108-0.jnk \ +{File "%TEMP%/utf-check-1040-2-108-0.jnk" has 16398 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1041 utf-check-1041-2-108-1.jnk \ +{File "%TEMP%/utf-check-1041-2-108-1.jnk" has 16399 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1042 utf-check-1042-2-109-0.jnk \ +{File "%TEMP%/utf-check-1042-2-109-0.jnk" has 16400 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1043 utf-check-1043-2-109-1.jnk \ +{File "%TEMP%/utf-check-1043-2-109-1.jnk" has 16401 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1044 utf-check-1044-2-110-0.jnk \ +{File "%TEMP%/utf-check-1044-2-110-0.jnk" has 16400 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1045 utf-check-1045-2-110-1.jnk \ +{File "%TEMP%/utf-check-1045-2-110-1.jnk" has 16401 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1046 utf-check-1046-2-111-0.jnk \ +{File "%TEMP%/utf-check-1046-2-111-0.jnk" has 16402 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1047 utf-check-1047-2-111-1.jnk \ +{File "%TEMP%/utf-check-1047-2-111-1.jnk" has 16403 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1048 utf-check-1048-2-112-0.jnk \ +{File "%TEMP%/utf-check-1048-2-112-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1049 utf-check-1049-2-112-1.jnk \ +{File "%TEMP%/utf-check-1049-2-112-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1050 utf-check-1050-2-113-0.jnk \ +{File "%TEMP%/utf-check-1050-2-113-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1051 utf-check-1051-2-113-1.jnk \ +{File "%TEMP%/utf-check-1051-2-113-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1052 utf-check-1052-2-114-0.jnk \ +{File "%TEMP%/utf-check-1052-2-114-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1053 utf-check-1053-2-114-1.jnk \ +{File "%TEMP%/utf-check-1053-2-114-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1054 utf-check-1054-2-115-0.jnk \ +{File "%TEMP%/utf-check-1054-2-115-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1055 utf-check-1055-2-115-1.jnk \ +{File "%TEMP%/utf-check-1055-2-115-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1056 utf-check-1056-2-116-0.jnk \ +{File "%TEMP%/utf-check-1056-2-116-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1057 utf-check-1057-2-116-1.jnk \ +{File "%TEMP%/utf-check-1057-2-116-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1058 utf-check-1058-2-117-0.jnk \ +{File "%TEMP%/utf-check-1058-2-117-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1059 utf-check-1059-2-117-1.jnk \ +{File "%TEMP%/utf-check-1059-2-117-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1060 utf-check-1060-2-118-0.jnk \ +{File "%TEMP%/utf-check-1060-2-118-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1061 utf-check-1061-2-118-1.jnk \ +{File "%TEMP%/utf-check-1061-2-118-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1062 utf-check-1062-2-119-0.jnk \ +{File "%TEMP%/utf-check-1062-2-119-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1063 utf-check-1063-2-119-1.jnk \ +{File "%TEMP%/utf-check-1063-2-119-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1064 utf-check-1064-2-120-0.jnk \ +{File "%TEMP%/utf-check-1064-2-120-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1065 utf-check-1065-2-120-1.jnk \ +{File "%TEMP%/utf-check-1065-2-120-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1066 utf-check-1066-2-121-0.jnk \ +{File "%TEMP%/utf-check-1066-2-121-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1067 utf-check-1067-2-121-1.jnk \ +{File "%TEMP%/utf-check-1067-2-121-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1068 utf-check-1068-2-122-0.jnk \ +{File "%TEMP%/utf-check-1068-2-122-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1069 utf-check-1069-2-122-1.jnk \ +{File "%TEMP%/utf-check-1069-2-122-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1070 utf-check-1070-2-123-0.jnk \ +{File "%TEMP%/utf-check-1070-2-123-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1071 utf-check-1071-2-123-1.jnk \ +{File "%TEMP%/utf-check-1071-2-123-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1072 utf-check-1072-2-124-0.jnk \ +{File "%TEMP%/utf-check-1072-2-124-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1073 utf-check-1073-2-124-1.jnk \ +{File "%TEMP%/utf-check-1073-2-124-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1074 utf-check-1074-2-125-0.jnk \ +{File "%TEMP%/utf-check-1074-2-125-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1075 utf-check-1075-2-125-1.jnk \ +{File "%TEMP%/utf-check-1075-2-125-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1076 utf-check-1076-2-126-0.jnk \ +{File "%TEMP%/utf-check-1076-2-126-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1077 utf-check-1077-2-126-1.jnk \ +{File "%TEMP%/utf-check-1077-2-126-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1078 utf-check-1078-2-127-0.jnk \ +{File "%TEMP%/utf-check-1078-2-127-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1079 utf-check-1079-2-127-1.jnk \ +{File "%TEMP%/utf-check-1079-2-127-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1080 utf-check-1080-2-128-0.jnk \ +{File "%TEMP%/utf-check-1080-2-128-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1081 utf-check-1081-2-128-1.jnk \ +{File "%TEMP%/utf-check-1081-2-128-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1082 utf-check-1082-2-129-0.jnk \ +{File "%TEMP%/utf-check-1082-2-129-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1083 utf-check-1083-2-129-1.jnk \ +{File "%TEMP%/utf-check-1083-2-129-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1084 utf-check-1084-2-130-0.jnk \ +{File "%TEMP%/utf-check-1084-2-130-0.jnk" has 4 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1085 utf-check-1085-2-130-1.jnk \ +{File "%TEMP%/utf-check-1085-2-130-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1086 utf-check-1086-2-131-0.jnk \ +{File "%TEMP%/utf-check-1086-2-131-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1087 utf-check-1087-2-131-1.jnk \ +{File "%TEMP%/utf-check-1087-2-131-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1088 utf-check-1088-2-132-0.jnk \ +{File "%TEMP%/utf-check-1088-2-132-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1089 utf-check-1089-2-132-1.jnk \ +{File "%TEMP%/utf-check-1089-2-132-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1090 utf-check-1090-2-133-0.jnk \ +{File "%TEMP%/utf-check-1090-2-133-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1091 utf-check-1091-2-133-1.jnk \ +{File "%TEMP%/utf-check-1091-2-133-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1092 utf-check-1092-2-134-0.jnk \ +{File "%TEMP%/utf-check-1092-2-134-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1093 utf-check-1093-2-134-1.jnk \ +{File "%TEMP%/utf-check-1093-2-134-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1094 utf-check-1094-2-135-0.jnk \ +{File "%TEMP%/utf-check-1094-2-135-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1095 utf-check-1095-2-135-1.jnk \ +{File "%TEMP%/utf-check-1095-2-135-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1096 utf-check-1096-2-136-0.jnk \ +{File "%TEMP%/utf-check-1096-2-136-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1097 utf-check-1097-2-136-1.jnk \ +{File "%TEMP%/utf-check-1097-2-136-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1098 utf-check-1098-2-137-0.jnk \ +{File "%TEMP%/utf-check-1098-2-137-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1099 utf-check-1099-2-137-1.jnk \ +{File "%TEMP%/utf-check-1099-2-137-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1100 utf-check-1100-2-138-0.jnk \ +{File "%TEMP%/utf-check-1100-2-138-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1101 utf-check-1101-2-138-1.jnk \ +{File "%TEMP%/utf-check-1101-2-138-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1102 utf-check-1102-2-139-0.jnk \ +{File "%TEMP%/utf-check-1102-2-139-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1103 utf-check-1103-2-139-1.jnk \ +{File "%TEMP%/utf-check-1103-2-139-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1104 utf-check-1104-2-140-0.jnk \ +{File "%TEMP%/utf-check-1104-2-140-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1105 utf-check-1105-2-140-1.jnk \ +{File "%TEMP%/utf-check-1105-2-140-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1106 utf-check-1106-2-141-0.jnk \ +{File "%TEMP%/utf-check-1106-2-141-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1107 utf-check-1107-2-141-1.jnk \ +{File "%TEMP%/utf-check-1107-2-141-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1108 utf-check-1108-2-142-0.jnk \ +{File "%TEMP%/utf-check-1108-2-142-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1109 utf-check-1109-2-142-1.jnk \ +{File "%TEMP%/utf-check-1109-2-142-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1110 utf-check-1110-2-143-0.jnk \ +{File "%TEMP%/utf-check-1110-2-143-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1111 utf-check-1111-2-143-1.jnk \ +{File "%TEMP%/utf-check-1111-2-143-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1112 utf-check-1112-2-144-0.jnk \ +{File "%TEMP%/utf-check-1112-2-144-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1113 utf-check-1113-2-144-1.jnk \ +{File "%TEMP%/utf-check-1113-2-144-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1114 utf-check-1114-2-145-0.jnk \ +{File "%TEMP%/utf-check-1114-2-145-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1115 utf-check-1115-2-145-1.jnk \ +{File "%TEMP%/utf-check-1115-2-145-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1116 utf-check-1116-2-146-0.jnk \ +{File "%TEMP%/utf-check-1116-2-146-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1117 utf-check-1117-2-146-1.jnk \ +{File "%TEMP%/utf-check-1117-2-146-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1118 utf-check-1118-2-147-0.jnk \ +{File "%TEMP%/utf-check-1118-2-147-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1119 utf-check-1119-2-147-1.jnk \ +{File "%TEMP%/utf-check-1119-2-147-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1120 utf-check-1120-2-148-0.jnk \ +{File "%TEMP%/utf-check-1120-2-148-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1121 utf-check-1121-2-148-1.jnk \ +{File "%TEMP%/utf-check-1121-2-148-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1122 utf-check-1122-2-149-0.jnk \ +{File "%TEMP%/utf-check-1122-2-149-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1123 utf-check-1123-2-149-1.jnk \ +{File "%TEMP%/utf-check-1123-2-149-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1124 utf-check-1124-2-150-0.jnk \ +{File "%TEMP%/utf-check-1124-2-150-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1125 utf-check-1125-2-150-1.jnk \ +{File "%TEMP%/utf-check-1125-2-150-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1126 utf-check-1126-2-151-0.jnk \ +{File "%TEMP%/utf-check-1126-2-151-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1127 utf-check-1127-2-151-1.jnk \ +{File "%TEMP%/utf-check-1127-2-151-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1128 utf-check-1128-2-152-0.jnk \ +{File "%TEMP%/utf-check-1128-2-152-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1129 utf-check-1129-2-152-1.jnk \ +{File "%TEMP%/utf-check-1129-2-152-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1130 utf-check-1130-2-153-0.jnk \ +{File "%TEMP%/utf-check-1130-2-153-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1131 utf-check-1131-2-153-1.jnk \ +{File "%TEMP%/utf-check-1131-2-153-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1132 utf-check-1132-2-154-0.jnk \ +{File "%TEMP%/utf-check-1132-2-154-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1133 utf-check-1133-2-154-1.jnk \ +{File "%TEMP%/utf-check-1133-2-154-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1134 utf-check-1134-2-155-0.jnk \ +{File "%TEMP%/utf-check-1134-2-155-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1135 utf-check-1135-2-155-1.jnk \ +{File "%TEMP%/utf-check-1135-2-155-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1136 utf-check-1136-2-156-0.jnk \ +{File "%TEMP%/utf-check-1136-2-156-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1137 utf-check-1137-2-156-1.jnk \ +{File "%TEMP%/utf-check-1137-2-156-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1138 utf-check-1138-2-157-0.jnk \ +{File "%TEMP%/utf-check-1138-2-157-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1139 utf-check-1139-2-157-1.jnk \ +{File "%TEMP%/utf-check-1139-2-157-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1140 utf-check-1140-2-158-0.jnk \ +{File "%TEMP%/utf-check-1140-2-158-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1141 utf-check-1141-2-158-1.jnk \ +{File "%TEMP%/utf-check-1141-2-158-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1142 utf-check-1142-2-159-0.jnk \ +{File "%TEMP%/utf-check-1142-2-159-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1143 utf-check-1143-2-159-1.jnk \ +{File "%TEMP%/utf-check-1143-2-159-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1144 utf-check-1144-2-160-0.jnk \ +{File "%TEMP%/utf-check-1144-2-160-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1145 utf-check-1145-2-160-1.jnk \ +{File "%TEMP%/utf-check-1145-2-160-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1146 utf-check-1146-2-161-0.jnk \ +{File "%TEMP%/utf-check-1146-2-161-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1147 utf-check-1147-2-161-1.jnk \ +{File "%TEMP%/utf-check-1147-2-161-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1148 utf-check-1148-2-162-0.jnk \ +{File "%TEMP%/utf-check-1148-2-162-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1149 utf-check-1149-2-162-1.jnk \ +{File "%TEMP%/utf-check-1149-2-162-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1150 utf-check-1150-2-163-0.jnk \ +{File "%TEMP%/utf-check-1150-2-163-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1151 utf-check-1151-2-163-1.jnk \ +{File "%TEMP%/utf-check-1151-2-163-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1152 utf-check-1152-2-164-0.jnk \ +{File "%TEMP%/utf-check-1152-2-164-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1153 utf-check-1153-2-164-1.jnk \ +{File "%TEMP%/utf-check-1153-2-164-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1154 utf-check-1154-2-165-0.jnk \ +{File "%TEMP%/utf-check-1154-2-165-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1155 utf-check-1155-2-165-1.jnk \ +{File "%TEMP%/utf-check-1155-2-165-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1156 utf-check-1156-2-166-0.jnk \ +{File "%TEMP%/utf-check-1156-2-166-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1157 utf-check-1157-2-166-1.jnk \ +{File "%TEMP%/utf-check-1157-2-166-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1158 utf-check-1158-2-167-0.jnk \ +{File "%TEMP%/utf-check-1158-2-167-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1159 utf-check-1159-2-167-1.jnk \ +{File "%TEMP%/utf-check-1159-2-167-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1160 utf-check-1160-2-168-0.jnk \ +{File "%TEMP%/utf-check-1160-2-168-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1161 utf-check-1161-2-168-1.jnk \ +{File "%TEMP%/utf-check-1161-2-168-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1162 utf-check-1162-2-169-0.jnk \ +{File "%TEMP%/utf-check-1162-2-169-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1163 utf-check-1163-2-169-1.jnk \ +{File "%TEMP%/utf-check-1163-2-169-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1164 utf-check-1164-2-170-0.jnk \ +{File "%TEMP%/utf-check-1164-2-170-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1165 utf-check-1165-2-170-1.jnk \ +{File "%TEMP%/utf-check-1165-2-170-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1166 utf-check-1166-2-171-0.jnk \ +{File "%TEMP%/utf-check-1166-2-171-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1167 utf-check-1167-2-171-1.jnk \ +{File "%TEMP%/utf-check-1167-2-171-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1168 utf-check-1168-2-172-0.jnk \ +{File "%TEMP%/utf-check-1168-2-172-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1169 utf-check-1169-2-172-1.jnk \ +{File "%TEMP%/utf-check-1169-2-172-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1170 utf-check-1170-2-173-0.jnk \ +{File "%TEMP%/utf-check-1170-2-173-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1171 utf-check-1171-2-173-1.jnk \ +{File "%TEMP%/utf-check-1171-2-173-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1172 utf-check-1172-2-174-0.jnk \ +{File "%TEMP%/utf-check-1172-2-174-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1173 utf-check-1173-2-174-1.jnk \ +{File "%TEMP%/utf-check-1173-2-174-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1174 utf-check-1174-2-175-0.jnk \ +{File "%TEMP%/utf-check-1174-2-175-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1175 utf-check-1175-2-175-1.jnk \ +{File "%TEMP%/utf-check-1175-2-175-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1176 utf-check-1176-2-176-0.jnk \ +{File "%TEMP%/utf-check-1176-2-176-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1177 utf-check-1177-2-176-1.jnk \ +{File "%TEMP%/utf-check-1177-2-176-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1178 utf-check-1178-2-177-0.jnk \ +{File "%TEMP%/utf-check-1178-2-177-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1179 utf-check-1179-2-177-1.jnk \ +{File "%TEMP%/utf-check-1179-2-177-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1180 utf-check-1180-2-178-0.jnk \ +{File "%TEMP%/utf-check-1180-2-178-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1181 utf-check-1181-2-178-1.jnk \ +{File "%TEMP%/utf-check-1181-2-178-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1182 utf-check-1182-2-179-0.jnk \ +{File "%TEMP%/utf-check-1182-2-179-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1183 utf-check-1183-2-179-1.jnk \ +{File "%TEMP%/utf-check-1183-2-179-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1184 utf-check-1184-2-180-0.jnk \ +{File "%TEMP%/utf-check-1184-2-180-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: yes +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1185 utf-check-1185-2-180-1.jnk \ +{File "%TEMP%/utf-check-1185-2-180-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: yes +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: yes +Has flag LOOK_LONE_CR: yes +Has flag LOOK_LF: yes +Has flag LOOK_LONE_LF: yes +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1186 utf-check-1186-3-0-0.jnk \ +{File "%TEMP%/utf-check-1186-3-0-0.jnk" has 2 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: reversed +Looks like UTF-16: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: no +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1187 utf-check-1187-3-0-1.jnk \ +{File "%TEMP%/utf-check-1187-3-0-1.jnk" has 3 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: reversed +Looks like UTF-8: yes +Has flag LOOK_NUL: no +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1188 utf-check-1188-3-1-0.jnk \ +{File "%TEMP%/utf-check-1188-3-1-0.jnk" has 4 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1189 utf-check-1189-3-1-1.jnk \ +{File "%TEMP%/utf-check-1189-3-1-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1190 utf-check-1190-3-2-0.jnk \ +{File "%TEMP%/utf-check-1190-3-2-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1191 utf-check-1191-3-2-1.jnk \ +{File "%TEMP%/utf-check-1191-3-2-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1192 utf-check-1192-3-3-0.jnk \ +{File "%TEMP%/utf-check-1192-3-3-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1193 utf-check-1193-3-3-1.jnk \ +{File "%TEMP%/utf-check-1193-3-3-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1194 utf-check-1194-3-4-0.jnk \ +{File "%TEMP%/utf-check-1194-3-4-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1195 utf-check-1195-3-4-1.jnk \ +{File "%TEMP%/utf-check-1195-3-4-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1196 utf-check-1196-3-5-0.jnk \ +{File "%TEMP%/utf-check-1196-3-5-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1197 utf-check-1197-3-5-1.jnk \ +{File "%TEMP%/utf-check-1197-3-5-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1198 utf-check-1198-3-6-0.jnk \ +{File "%TEMP%/utf-check-1198-3-6-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1199 utf-check-1199-3-6-1.jnk \ +{File "%TEMP%/utf-check-1199-3-6-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1200 utf-check-1200-3-7-0.jnk \ +{File "%TEMP%/utf-check-1200-3-7-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1201 utf-check-1201-3-7-1.jnk \ +{File "%TEMP%/utf-check-1201-3-7-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1202 utf-check-1202-3-8-0.jnk \ +{File "%TEMP%/utf-check-1202-3-8-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1203 utf-check-1203-3-8-1.jnk \ +{File "%TEMP%/utf-check-1203-3-8-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1204 utf-check-1204-3-9-0.jnk \ +{File "%TEMP%/utf-check-1204-3-9-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1205 utf-check-1205-3-9-1.jnk \ +{File "%TEMP%/utf-check-1205-3-9-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1206 utf-check-1206-3-10-0.jnk \ +{File "%TEMP%/utf-check-1206-3-10-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1207 utf-check-1207-3-10-1.jnk \ +{File "%TEMP%/utf-check-1207-3-10-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1208 utf-check-1208-3-11-0.jnk \ +{File "%TEMP%/utf-check-1208-3-11-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1209 utf-check-1209-3-11-1.jnk \ +{File "%TEMP%/utf-check-1209-3-11-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1210 utf-check-1210-3-12-0.jnk \ +{File "%TEMP%/utf-check-1210-3-12-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1211 utf-check-1211-3-12-1.jnk \ +{File "%TEMP%/utf-check-1211-3-12-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1212 utf-check-1212-3-13-0.jnk \ +{File "%TEMP%/utf-check-1212-3-13-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1213 utf-check-1213-3-13-1.jnk \ +{File "%TEMP%/utf-check-1213-3-13-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1214 utf-check-1214-3-14-0.jnk \ +{File "%TEMP%/utf-check-1214-3-14-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1215 utf-check-1215-3-14-1.jnk \ +{File "%TEMP%/utf-check-1215-3-14-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1216 utf-check-1216-3-15-0.jnk \ +{File "%TEMP%/utf-check-1216-3-15-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1217 utf-check-1217-3-15-1.jnk \ +{File "%TEMP%/utf-check-1217-3-15-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1218 utf-check-1218-3-16-0.jnk \ +{File "%TEMP%/utf-check-1218-3-16-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1219 utf-check-1219-3-16-1.jnk \ +{File "%TEMP%/utf-check-1219-3-16-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1220 utf-check-1220-3-17-0.jnk \ +{File "%TEMP%/utf-check-1220-3-17-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1221 utf-check-1221-3-17-1.jnk \ +{File "%TEMP%/utf-check-1221-3-17-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1222 utf-check-1222-3-18-0.jnk \ +{File "%TEMP%/utf-check-1222-3-18-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1223 utf-check-1223-3-18-1.jnk \ +{File "%TEMP%/utf-check-1223-3-18-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1224 utf-check-1224-3-19-0.jnk \ +{File "%TEMP%/utf-check-1224-3-19-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1225 utf-check-1225-3-19-1.jnk \ +{File "%TEMP%/utf-check-1225-3-19-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1226 utf-check-1226-3-20-0.jnk \ +{File "%TEMP%/utf-check-1226-3-20-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1227 utf-check-1227-3-20-1.jnk \ +{File "%TEMP%/utf-check-1227-3-20-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1228 utf-check-1228-3-21-0.jnk \ +{File "%TEMP%/utf-check-1228-3-21-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1229 utf-check-1229-3-21-1.jnk \ +{File "%TEMP%/utf-check-1229-3-21-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1230 utf-check-1230-3-22-0.jnk \ +{File "%TEMP%/utf-check-1230-3-22-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1231 utf-check-1231-3-22-1.jnk \ +{File "%TEMP%/utf-check-1231-3-22-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1232 utf-check-1232-3-23-0.jnk \ +{File "%TEMP%/utf-check-1232-3-23-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1233 utf-check-1233-3-23-1.jnk \ +{File "%TEMP%/utf-check-1233-3-23-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1234 utf-check-1234-3-24-0.jnk \ +{File "%TEMP%/utf-check-1234-3-24-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1235 utf-check-1235-3-24-1.jnk \ +{File "%TEMP%/utf-check-1235-3-24-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1236 utf-check-1236-3-25-0.jnk \ +{File "%TEMP%/utf-check-1236-3-25-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1237 utf-check-1237-3-25-1.jnk \ +{File "%TEMP%/utf-check-1237-3-25-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1238 utf-check-1238-3-26-0.jnk \ +{File "%TEMP%/utf-check-1238-3-26-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1239 utf-check-1239-3-26-1.jnk \ +{File "%TEMP%/utf-check-1239-3-26-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1240 utf-check-1240-3-27-0.jnk \ +{File "%TEMP%/utf-check-1240-3-27-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1241 utf-check-1241-3-27-1.jnk \ +{File "%TEMP%/utf-check-1241-3-27-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1242 utf-check-1242-3-28-0.jnk \ +{File "%TEMP%/utf-check-1242-3-28-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1243 utf-check-1243-3-28-1.jnk \ +{File "%TEMP%/utf-check-1243-3-28-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1244 utf-check-1244-3-29-0.jnk \ +{File "%TEMP%/utf-check-1244-3-29-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1245 utf-check-1245-3-29-1.jnk \ +{File "%TEMP%/utf-check-1245-3-29-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1246 utf-check-1246-3-30-0.jnk \ +{File "%TEMP%/utf-check-1246-3-30-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1247 utf-check-1247-3-30-1.jnk \ +{File "%TEMP%/utf-check-1247-3-30-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1248 utf-check-1248-3-31-0.jnk \ +{File "%TEMP%/utf-check-1248-3-31-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1249 utf-check-1249-3-31-1.jnk \ +{File "%TEMP%/utf-check-1249-3-31-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1250 utf-check-1250-3-32-0.jnk \ +{File "%TEMP%/utf-check-1250-3-32-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1251 utf-check-1251-3-32-1.jnk \ +{File "%TEMP%/utf-check-1251-3-32-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1252 utf-check-1252-3-33-0.jnk \ +{File "%TEMP%/utf-check-1252-3-33-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1253 utf-check-1253-3-33-1.jnk \ +{File "%TEMP%/utf-check-1253-3-33-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1254 utf-check-1254-3-34-0.jnk \ +{File "%TEMP%/utf-check-1254-3-34-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1255 utf-check-1255-3-34-1.jnk \ +{File "%TEMP%/utf-check-1255-3-34-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1256 utf-check-1256-3-35-0.jnk \ +{File "%TEMP%/utf-check-1256-3-35-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1257 utf-check-1257-3-35-1.jnk \ +{File "%TEMP%/utf-check-1257-3-35-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1258 utf-check-1258-3-36-0.jnk \ +{File "%TEMP%/utf-check-1258-3-36-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1259 utf-check-1259-3-36-1.jnk \ +{File "%TEMP%/utf-check-1259-3-36-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1260 utf-check-1260-3-37-0.jnk \ +{File "%TEMP%/utf-check-1260-3-37-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1261 utf-check-1261-3-37-1.jnk \ +{File "%TEMP%/utf-check-1261-3-37-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1262 utf-check-1262-3-38-0.jnk \ +{File "%TEMP%/utf-check-1262-3-38-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1263 utf-check-1263-3-38-1.jnk \ +{File "%TEMP%/utf-check-1263-3-38-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1264 utf-check-1264-3-39-0.jnk \ +{File "%TEMP%/utf-check-1264-3-39-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1265 utf-check-1265-3-39-1.jnk \ +{File "%TEMP%/utf-check-1265-3-39-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1266 utf-check-1266-3-40-0.jnk \ +{File "%TEMP%/utf-check-1266-3-40-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1267 utf-check-1267-3-40-1.jnk \ +{File "%TEMP%/utf-check-1267-3-40-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1268 utf-check-1268-3-41-0.jnk \ +{File "%TEMP%/utf-check-1268-3-41-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1269 utf-check-1269-3-41-1.jnk \ +{File "%TEMP%/utf-check-1269-3-41-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1270 utf-check-1270-3-42-0.jnk \ +{File "%TEMP%/utf-check-1270-3-42-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1271 utf-check-1271-3-42-1.jnk \ +{File "%TEMP%/utf-check-1271-3-42-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1272 utf-check-1272-3-43-0.jnk \ +{File "%TEMP%/utf-check-1272-3-43-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1273 utf-check-1273-3-43-1.jnk \ +{File "%TEMP%/utf-check-1273-3-43-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1274 utf-check-1274-3-44-0.jnk \ +{File "%TEMP%/utf-check-1274-3-44-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1275 utf-check-1275-3-44-1.jnk \ +{File "%TEMP%/utf-check-1275-3-44-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1276 utf-check-1276-3-45-0.jnk \ +{File "%TEMP%/utf-check-1276-3-45-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1277 utf-check-1277-3-45-1.jnk \ +{File "%TEMP%/utf-check-1277-3-45-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1278 utf-check-1278-3-46-0.jnk \ +{File "%TEMP%/utf-check-1278-3-46-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1279 utf-check-1279-3-46-1.jnk \ +{File "%TEMP%/utf-check-1279-3-46-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1280 utf-check-1280-3-47-0.jnk \ +{File "%TEMP%/utf-check-1280-3-47-0.jnk" has 16 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1281 utf-check-1281-3-47-1.jnk \ +{File "%TEMP%/utf-check-1281-3-47-1.jnk" has 17 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1282 utf-check-1282-3-48-0.jnk \ +{File "%TEMP%/utf-check-1282-3-48-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1283 utf-check-1283-3-48-1.jnk \ +{File "%TEMP%/utf-check-1283-3-48-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1284 utf-check-1284-3-49-0.jnk \ +{File "%TEMP%/utf-check-1284-3-49-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1285 utf-check-1285-3-49-1.jnk \ +{File "%TEMP%/utf-check-1285-3-49-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1286 utf-check-1286-3-50-0.jnk \ +{File "%TEMP%/utf-check-1286-3-50-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1287 utf-check-1287-3-50-1.jnk \ +{File "%TEMP%/utf-check-1287-3-50-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1288 utf-check-1288-3-51-0.jnk \ +{File "%TEMP%/utf-check-1288-3-51-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1289 utf-check-1289-3-51-1.jnk \ +{File "%TEMP%/utf-check-1289-3-51-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1290 utf-check-1290-3-52-0.jnk \ +{File "%TEMP%/utf-check-1290-3-52-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1291 utf-check-1291-3-52-1.jnk \ +{File "%TEMP%/utf-check-1291-3-52-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1292 utf-check-1292-3-53-0.jnk \ +{File "%TEMP%/utf-check-1292-3-53-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1293 utf-check-1293-3-53-1.jnk \ +{File "%TEMP%/utf-check-1293-3-53-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1294 utf-check-1294-3-54-0.jnk \ +{File "%TEMP%/utf-check-1294-3-54-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1295 utf-check-1295-3-54-1.jnk \ +{File "%TEMP%/utf-check-1295-3-54-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1296 utf-check-1296-3-55-0.jnk \ +{File "%TEMP%/utf-check-1296-3-55-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1297 utf-check-1297-3-55-1.jnk \ +{File "%TEMP%/utf-check-1297-3-55-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1298 utf-check-1298-3-56-0.jnk \ +{File "%TEMP%/utf-check-1298-3-56-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1299 utf-check-1299-3-56-1.jnk \ +{File "%TEMP%/utf-check-1299-3-56-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1300 utf-check-1300-3-57-0.jnk \ +{File "%TEMP%/utf-check-1300-3-57-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1301 utf-check-1301-3-57-1.jnk \ +{File "%TEMP%/utf-check-1301-3-57-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1302 utf-check-1302-3-58-0.jnk \ +{File "%TEMP%/utf-check-1302-3-58-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1303 utf-check-1303-3-58-1.jnk \ +{File "%TEMP%/utf-check-1303-3-58-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1304 utf-check-1304-3-59-0.jnk \ +{File "%TEMP%/utf-check-1304-3-59-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1305 utf-check-1305-3-59-1.jnk \ +{File "%TEMP%/utf-check-1305-3-59-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1306 utf-check-1306-3-60-0.jnk \ +{File "%TEMP%/utf-check-1306-3-60-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1307 utf-check-1307-3-60-1.jnk \ +{File "%TEMP%/utf-check-1307-3-60-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1308 utf-check-1308-3-61-0.jnk \ +{File "%TEMP%/utf-check-1308-3-61-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1309 utf-check-1309-3-61-1.jnk \ +{File "%TEMP%/utf-check-1309-3-61-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1310 utf-check-1310-3-62-0.jnk \ +{File "%TEMP%/utf-check-1310-3-62-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1311 utf-check-1311-3-62-1.jnk \ +{File "%TEMP%/utf-check-1311-3-62-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1312 utf-check-1312-3-63-0.jnk \ +{File "%TEMP%/utf-check-1312-3-63-0.jnk" has 16 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1313 utf-check-1313-3-63-1.jnk \ +{File "%TEMP%/utf-check-1313-3-63-1.jnk" has 17 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1314 utf-check-1314-3-64-0.jnk \ +{File "%TEMP%/utf-check-1314-3-64-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1315 utf-check-1315-3-64-1.jnk \ +{File "%TEMP%/utf-check-1315-3-64-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1316 utf-check-1316-3-65-0.jnk \ +{File "%TEMP%/utf-check-1316-3-65-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1317 utf-check-1317-3-65-1.jnk \ +{File "%TEMP%/utf-check-1317-3-65-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1318 utf-check-1318-3-66-0.jnk \ +{File "%TEMP%/utf-check-1318-3-66-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1319 utf-check-1319-3-66-1.jnk \ +{File "%TEMP%/utf-check-1319-3-66-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1320 utf-check-1320-3-67-0.jnk \ +{File "%TEMP%/utf-check-1320-3-67-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1321 utf-check-1321-3-67-1.jnk \ +{File "%TEMP%/utf-check-1321-3-67-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1322 utf-check-1322-3-68-0.jnk \ +{File "%TEMP%/utf-check-1322-3-68-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1323 utf-check-1323-3-68-1.jnk \ +{File "%TEMP%/utf-check-1323-3-68-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1324 utf-check-1324-3-69-0.jnk \ +{File "%TEMP%/utf-check-1324-3-69-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1325 utf-check-1325-3-69-1.jnk \ +{File "%TEMP%/utf-check-1325-3-69-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1326 utf-check-1326-3-70-0.jnk \ +{File "%TEMP%/utf-check-1326-3-70-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1327 utf-check-1327-3-70-1.jnk \ +{File "%TEMP%/utf-check-1327-3-70-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1328 utf-check-1328-3-71-0.jnk \ +{File "%TEMP%/utf-check-1328-3-71-0.jnk" has 16 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1329 utf-check-1329-3-71-1.jnk \ +{File "%TEMP%/utf-check-1329-3-71-1.jnk" has 17 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1330 utf-check-1330-3-72-0.jnk \ +{File "%TEMP%/utf-check-1330-3-72-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1331 utf-check-1331-3-72-1.jnk \ +{File "%TEMP%/utf-check-1331-3-72-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1332 utf-check-1332-3-73-0.jnk \ +{File "%TEMP%/utf-check-1332-3-73-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1333 utf-check-1333-3-73-1.jnk \ +{File "%TEMP%/utf-check-1333-3-73-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1334 utf-check-1334-3-74-0.jnk \ +{File "%TEMP%/utf-check-1334-3-74-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1335 utf-check-1335-3-74-1.jnk \ +{File "%TEMP%/utf-check-1335-3-74-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1336 utf-check-1336-3-75-0.jnk \ +{File "%TEMP%/utf-check-1336-3-75-0.jnk" has 16 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1337 utf-check-1337-3-75-1.jnk \ +{File "%TEMP%/utf-check-1337-3-75-1.jnk" has 17 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1338 utf-check-1338-3-76-0.jnk \ +{File "%TEMP%/utf-check-1338-3-76-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1339 utf-check-1339-3-76-1.jnk \ +{File "%TEMP%/utf-check-1339-3-76-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1340 utf-check-1340-3-77-0.jnk \ +{File "%TEMP%/utf-check-1340-3-77-0.jnk" has 14 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1341 utf-check-1341-3-77-1.jnk \ +{File "%TEMP%/utf-check-1341-3-77-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1342 utf-check-1342-3-78-0.jnk \ +{File "%TEMP%/utf-check-1342-3-78-0.jnk" has 16 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1343 utf-check-1343-3-78-1.jnk \ +{File "%TEMP%/utf-check-1343-3-78-1.jnk" has 17 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1344 utf-check-1344-3-79-0.jnk \ +{File "%TEMP%/utf-check-1344-3-79-0.jnk" has 18 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1345 utf-check-1345-3-79-1.jnk \ +{File "%TEMP%/utf-check-1345-3-79-1.jnk" has 19 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1346 utf-check-1346-3-80-0.jnk \ +{File "%TEMP%/utf-check-1346-3-80-0.jnk" has 16388 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1347 utf-check-1347-3-80-1.jnk \ +{File "%TEMP%/utf-check-1347-3-80-1.jnk" has 16389 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1348 utf-check-1348-3-81-0.jnk \ +{File "%TEMP%/utf-check-1348-3-81-0.jnk" has 16390 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1349 utf-check-1349-3-81-1.jnk \ +{File "%TEMP%/utf-check-1349-3-81-1.jnk" has 16391 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1350 utf-check-1350-3-82-0.jnk \ +{File "%TEMP%/utf-check-1350-3-82-0.jnk" has 16390 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1351 utf-check-1351-3-82-1.jnk \ +{File "%TEMP%/utf-check-1351-3-82-1.jnk" has 16391 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1352 utf-check-1352-3-83-0.jnk \ +{File "%TEMP%/utf-check-1352-3-83-0.jnk" has 16392 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1353 utf-check-1353-3-83-1.jnk \ +{File "%TEMP%/utf-check-1353-3-83-1.jnk" has 16393 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1354 utf-check-1354-3-84-0.jnk \ +{File "%TEMP%/utf-check-1354-3-84-0.jnk" has 16394 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1355 utf-check-1355-3-84-1.jnk \ +{File "%TEMP%/utf-check-1355-3-84-1.jnk" has 16395 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1356 utf-check-1356-3-85-0.jnk \ +{File "%TEMP%/utf-check-1356-3-85-0.jnk" has 16396 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1357 utf-check-1357-3-85-1.jnk \ +{File "%TEMP%/utf-check-1357-3-85-1.jnk" has 16397 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1358 utf-check-1358-3-86-0.jnk \ +{File "%TEMP%/utf-check-1358-3-86-0.jnk" has 16396 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1359 utf-check-1359-3-86-1.jnk \ +{File "%TEMP%/utf-check-1359-3-86-1.jnk" has 16397 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1360 utf-check-1360-3-87-0.jnk \ +{File "%TEMP%/utf-check-1360-3-87-0.jnk" has 16398 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1361 utf-check-1361-3-87-1.jnk \ +{File "%TEMP%/utf-check-1361-3-87-1.jnk" has 16399 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1362 utf-check-1362-3-88-0.jnk \ +{File "%TEMP%/utf-check-1362-3-88-0.jnk" has 16390 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1363 utf-check-1363-3-88-1.jnk \ +{File "%TEMP%/utf-check-1363-3-88-1.jnk" has 16391 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1364 utf-check-1364-3-89-0.jnk \ +{File "%TEMP%/utf-check-1364-3-89-0.jnk" has 16392 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1365 utf-check-1365-3-89-1.jnk \ +{File "%TEMP%/utf-check-1365-3-89-1.jnk" has 16393 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1366 utf-check-1366-3-90-0.jnk \ +{File "%TEMP%/utf-check-1366-3-90-0.jnk" has 16392 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1367 utf-check-1367-3-90-1.jnk \ +{File "%TEMP%/utf-check-1367-3-90-1.jnk" has 16393 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1368 utf-check-1368-3-91-0.jnk \ +{File "%TEMP%/utf-check-1368-3-91-0.jnk" has 16394 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1369 utf-check-1369-3-91-1.jnk \ +{File "%TEMP%/utf-check-1369-3-91-1.jnk" has 16395 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1370 utf-check-1370-3-92-0.jnk \ +{File "%TEMP%/utf-check-1370-3-92-0.jnk" has 16396 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1371 utf-check-1371-3-92-1.jnk \ +{File "%TEMP%/utf-check-1371-3-92-1.jnk" has 16397 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1372 utf-check-1372-3-93-0.jnk \ +{File "%TEMP%/utf-check-1372-3-93-0.jnk" has 16398 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1373 utf-check-1373-3-93-1.jnk \ +{File "%TEMP%/utf-check-1373-3-93-1.jnk" has 16399 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1374 utf-check-1374-3-94-0.jnk \ +{File "%TEMP%/utf-check-1374-3-94-0.jnk" has 16398 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1375 utf-check-1375-3-94-1.jnk \ +{File "%TEMP%/utf-check-1375-3-94-1.jnk" has 16399 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1376 utf-check-1376-3-95-0.jnk \ +{File "%TEMP%/utf-check-1376-3-95-0.jnk" has 16400 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1377 utf-check-1377-3-95-1.jnk \ +{File "%TEMP%/utf-check-1377-3-95-1.jnk" has 16401 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1378 utf-check-1378-3-96-0.jnk \ +{File "%TEMP%/utf-check-1378-3-96-0.jnk" has 16390 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1379 utf-check-1379-3-96-1.jnk \ +{File "%TEMP%/utf-check-1379-3-96-1.jnk" has 16391 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1380 utf-check-1380-3-97-0.jnk \ +{File "%TEMP%/utf-check-1380-3-97-0.jnk" has 16392 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1381 utf-check-1381-3-97-1.jnk \ +{File "%TEMP%/utf-check-1381-3-97-1.jnk" has 16393 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1382 utf-check-1382-3-98-0.jnk \ +{File "%TEMP%/utf-check-1382-3-98-0.jnk" has 16392 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1383 utf-check-1383-3-98-1.jnk \ +{File "%TEMP%/utf-check-1383-3-98-1.jnk" has 16393 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1384 utf-check-1384-3-99-0.jnk \ +{File "%TEMP%/utf-check-1384-3-99-0.jnk" has 16394 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1385 utf-check-1385-3-99-1.jnk \ +{File "%TEMP%/utf-check-1385-3-99-1.jnk" has 16395 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1386 utf-check-1386-3-100-0.jnk \ +{File "%TEMP%/utf-check-1386-3-100-0.jnk" has 16396 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1387 utf-check-1387-3-100-1.jnk \ +{File "%TEMP%/utf-check-1387-3-100-1.jnk" has 16397 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1388 utf-check-1388-3-101-0.jnk \ +{File "%TEMP%/utf-check-1388-3-101-0.jnk" has 16398 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1389 utf-check-1389-3-101-1.jnk \ +{File "%TEMP%/utf-check-1389-3-101-1.jnk" has 16399 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1390 utf-check-1390-3-102-0.jnk \ +{File "%TEMP%/utf-check-1390-3-102-0.jnk" has 16398 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1391 utf-check-1391-3-102-1.jnk \ +{File "%TEMP%/utf-check-1391-3-102-1.jnk" has 16399 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1392 utf-check-1392-3-103-0.jnk \ +{File "%TEMP%/utf-check-1392-3-103-0.jnk" has 16400 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1393 utf-check-1393-3-103-1.jnk \ +{File "%TEMP%/utf-check-1393-3-103-1.jnk" has 16401 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1394 utf-check-1394-3-104-0.jnk \ +{File "%TEMP%/utf-check-1394-3-104-0.jnk" has 16392 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1395 utf-check-1395-3-104-1.jnk \ +{File "%TEMP%/utf-check-1395-3-104-1.jnk" has 16393 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1396 utf-check-1396-3-105-0.jnk \ +{File "%TEMP%/utf-check-1396-3-105-0.jnk" has 16394 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1397 utf-check-1397-3-105-1.jnk \ +{File "%TEMP%/utf-check-1397-3-105-1.jnk" has 16395 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1398 utf-check-1398-3-106-0.jnk \ +{File "%TEMP%/utf-check-1398-3-106-0.jnk" has 16394 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1399 utf-check-1399-3-106-1.jnk \ +{File "%TEMP%/utf-check-1399-3-106-1.jnk" has 16395 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1400 utf-check-1400-3-107-0.jnk \ +{File "%TEMP%/utf-check-1400-3-107-0.jnk" has 16396 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1401 utf-check-1401-3-107-1.jnk \ +{File "%TEMP%/utf-check-1401-3-107-1.jnk" has 16397 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1402 utf-check-1402-3-108-0.jnk \ +{File "%TEMP%/utf-check-1402-3-108-0.jnk" has 16398 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1403 utf-check-1403-3-108-1.jnk \ +{File "%TEMP%/utf-check-1403-3-108-1.jnk" has 16399 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1404 utf-check-1404-3-109-0.jnk \ +{File "%TEMP%/utf-check-1404-3-109-0.jnk" has 16400 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1405 utf-check-1405-3-109-1.jnk \ +{File "%TEMP%/utf-check-1405-3-109-1.jnk" has 16401 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1406 utf-check-1406-3-110-0.jnk \ +{File "%TEMP%/utf-check-1406-3-110-0.jnk" has 16400 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1407 utf-check-1407-3-110-1.jnk \ +{File "%TEMP%/utf-check-1407-3-110-1.jnk" has 16401 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1408 utf-check-1408-3-111-0.jnk \ +{File "%TEMP%/utf-check-1408-3-111-0.jnk" has 16402 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1409 utf-check-1409-3-111-1.jnk \ +{File "%TEMP%/utf-check-1409-3-111-1.jnk" has 16403 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: yes +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1410 utf-check-1410-3-112-0.jnk \ +{File "%TEMP%/utf-check-1410-3-112-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1411 utf-check-1411-3-112-1.jnk \ +{File "%TEMP%/utf-check-1411-3-112-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1412 utf-check-1412-3-113-0.jnk \ +{File "%TEMP%/utf-check-1412-3-113-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1413 utf-check-1413-3-113-1.jnk \ +{File "%TEMP%/utf-check-1413-3-113-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1414 utf-check-1414-3-114-0.jnk \ +{File "%TEMP%/utf-check-1414-3-114-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1415 utf-check-1415-3-114-1.jnk \ +{File "%TEMP%/utf-check-1415-3-114-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1416 utf-check-1416-3-115-0.jnk \ +{File "%TEMP%/utf-check-1416-3-115-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1417 utf-check-1417-3-115-1.jnk \ +{File "%TEMP%/utf-check-1417-3-115-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1418 utf-check-1418-3-116-0.jnk \ +{File "%TEMP%/utf-check-1418-3-116-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1419 utf-check-1419-3-116-1.jnk \ +{File "%TEMP%/utf-check-1419-3-116-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1420 utf-check-1420-3-117-0.jnk \ +{File "%TEMP%/utf-check-1420-3-117-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1421 utf-check-1421-3-117-1.jnk \ +{File "%TEMP%/utf-check-1421-3-117-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1422 utf-check-1422-3-118-0.jnk \ +{File "%TEMP%/utf-check-1422-3-118-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1423 utf-check-1423-3-118-1.jnk \ +{File "%TEMP%/utf-check-1423-3-118-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1424 utf-check-1424-3-119-0.jnk \ +{File "%TEMP%/utf-check-1424-3-119-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1425 utf-check-1425-3-119-1.jnk \ +{File "%TEMP%/utf-check-1425-3-119-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1426 utf-check-1426-3-120-0.jnk \ +{File "%TEMP%/utf-check-1426-3-120-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1427 utf-check-1427-3-120-1.jnk \ +{File "%TEMP%/utf-check-1427-3-120-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1428 utf-check-1428-3-121-0.jnk \ +{File "%TEMP%/utf-check-1428-3-121-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1429 utf-check-1429-3-121-1.jnk \ +{File "%TEMP%/utf-check-1429-3-121-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1430 utf-check-1430-3-122-0.jnk \ +{File "%TEMP%/utf-check-1430-3-122-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1431 utf-check-1431-3-122-1.jnk \ +{File "%TEMP%/utf-check-1431-3-122-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1432 utf-check-1432-3-123-0.jnk \ +{File "%TEMP%/utf-check-1432-3-123-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1433 utf-check-1433-3-123-1.jnk \ +{File "%TEMP%/utf-check-1433-3-123-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1434 utf-check-1434-3-124-0.jnk \ +{File "%TEMP%/utf-check-1434-3-124-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1435 utf-check-1435-3-124-1.jnk \ +{File "%TEMP%/utf-check-1435-3-124-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1436 utf-check-1436-3-125-0.jnk \ +{File "%TEMP%/utf-check-1436-3-125-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1437 utf-check-1437-3-125-1.jnk \ +{File "%TEMP%/utf-check-1437-3-125-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1438 utf-check-1438-3-126-0.jnk \ +{File "%TEMP%/utf-check-1438-3-126-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1439 utf-check-1439-3-126-1.jnk \ +{File "%TEMP%/utf-check-1439-3-126-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1440 utf-check-1440-3-127-0.jnk \ +{File "%TEMP%/utf-check-1440-3-127-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1441 utf-check-1441-3-127-1.jnk \ +{File "%TEMP%/utf-check-1441-3-127-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1442 utf-check-1442-3-128-0.jnk \ +{File "%TEMP%/utf-check-1442-3-128-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1443 utf-check-1443-3-128-1.jnk \ +{File "%TEMP%/utf-check-1443-3-128-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1444 utf-check-1444-3-129-0.jnk \ +{File "%TEMP%/utf-check-1444-3-129-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1445 utf-check-1445-3-129-1.jnk \ +{File "%TEMP%/utf-check-1445-3-129-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1446 utf-check-1446-3-130-0.jnk \ +{File "%TEMP%/utf-check-1446-3-130-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1447 utf-check-1447-3-130-1.jnk \ +{File "%TEMP%/utf-check-1447-3-130-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1448 utf-check-1448-3-131-0.jnk \ +{File "%TEMP%/utf-check-1448-3-131-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1449 utf-check-1449-3-131-1.jnk \ +{File "%TEMP%/utf-check-1449-3-131-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1450 utf-check-1450-3-132-0.jnk \ +{File "%TEMP%/utf-check-1450-3-132-0.jnk" has 4 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1451 utf-check-1451-3-132-1.jnk \ +{File "%TEMP%/utf-check-1451-3-132-1.jnk" has 5 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1452 utf-check-1452-3-133-0.jnk \ +{File "%TEMP%/utf-check-1452-3-133-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1453 utf-check-1453-3-133-1.jnk \ +{File "%TEMP%/utf-check-1453-3-133-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1454 utf-check-1454-3-134-0.jnk \ +{File "%TEMP%/utf-check-1454-3-134-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1455 utf-check-1455-3-134-1.jnk \ +{File "%TEMP%/utf-check-1455-3-134-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1456 utf-check-1456-3-135-0.jnk \ +{File "%TEMP%/utf-check-1456-3-135-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1457 utf-check-1457-3-135-1.jnk \ +{File "%TEMP%/utf-check-1457-3-135-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1458 utf-check-1458-3-136-0.jnk \ +{File "%TEMP%/utf-check-1458-3-136-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1459 utf-check-1459-3-136-1.jnk \ +{File "%TEMP%/utf-check-1459-3-136-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1460 utf-check-1460-3-137-0.jnk \ +{File "%TEMP%/utf-check-1460-3-137-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1461 utf-check-1461-3-137-1.jnk \ +{File "%TEMP%/utf-check-1461-3-137-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1462 utf-check-1462-3-138-0.jnk \ +{File "%TEMP%/utf-check-1462-3-138-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1463 utf-check-1463-3-138-1.jnk \ +{File "%TEMP%/utf-check-1463-3-138-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1464 utf-check-1464-3-139-0.jnk \ +{File "%TEMP%/utf-check-1464-3-139-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1465 utf-check-1465-3-139-1.jnk \ +{File "%TEMP%/utf-check-1465-3-139-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1466 utf-check-1466-3-140-0.jnk \ +{File "%TEMP%/utf-check-1466-3-140-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1467 utf-check-1467-3-140-1.jnk \ +{File "%TEMP%/utf-check-1467-3-140-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1468 utf-check-1468-3-141-0.jnk \ +{File "%TEMP%/utf-check-1468-3-141-0.jnk" has 6 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1469 utf-check-1469-3-141-1.jnk \ +{File "%TEMP%/utf-check-1469-3-141-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1470 utf-check-1470-3-142-0.jnk \ +{File "%TEMP%/utf-check-1470-3-142-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1471 utf-check-1471-3-142-1.jnk \ +{File "%TEMP%/utf-check-1471-3-142-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1472 utf-check-1472-3-143-0.jnk \ +{File "%TEMP%/utf-check-1472-3-143-0.jnk" has 6 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1473 utf-check-1473-3-143-1.jnk \ +{File "%TEMP%/utf-check-1473-3-143-1.jnk" has 7 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1474 utf-check-1474-3-144-0.jnk \ +{File "%TEMP%/utf-check-1474-3-144-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1475 utf-check-1475-3-144-1.jnk \ +{File "%TEMP%/utf-check-1475-3-144-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1476 utf-check-1476-3-145-0.jnk \ +{File "%TEMP%/utf-check-1476-3-145-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1477 utf-check-1477-3-145-1.jnk \ +{File "%TEMP%/utf-check-1477-3-145-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1478 utf-check-1478-3-146-0.jnk \ +{File "%TEMP%/utf-check-1478-3-146-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1479 utf-check-1479-3-146-1.jnk \ +{File "%TEMP%/utf-check-1479-3-146-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1480 utf-check-1480-3-147-0.jnk \ +{File "%TEMP%/utf-check-1480-3-147-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1481 utf-check-1481-3-147-1.jnk \ +{File "%TEMP%/utf-check-1481-3-147-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1482 utf-check-1482-3-148-0.jnk \ +{File "%TEMP%/utf-check-1482-3-148-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1483 utf-check-1483-3-148-1.jnk \ +{File "%TEMP%/utf-check-1483-3-148-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1484 utf-check-1484-3-149-0.jnk \ +{File "%TEMP%/utf-check-1484-3-149-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1485 utf-check-1485-3-149-1.jnk \ +{File "%TEMP%/utf-check-1485-3-149-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1486 utf-check-1486-3-150-0.jnk \ +{File "%TEMP%/utf-check-1486-3-150-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1487 utf-check-1487-3-150-1.jnk \ +{File "%TEMP%/utf-check-1487-3-150-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1488 utf-check-1488-3-151-0.jnk \ +{File "%TEMP%/utf-check-1488-3-151-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1489 utf-check-1489-3-151-1.jnk \ +{File "%TEMP%/utf-check-1489-3-151-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1490 utf-check-1490-3-152-0.jnk \ +{File "%TEMP%/utf-check-1490-3-152-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1491 utf-check-1491-3-152-1.jnk \ +{File "%TEMP%/utf-check-1491-3-152-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1492 utf-check-1492-3-153-0.jnk \ +{File "%TEMP%/utf-check-1492-3-153-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1493 utf-check-1493-3-153-1.jnk \ +{File "%TEMP%/utf-check-1493-3-153-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1494 utf-check-1494-3-154-0.jnk \ +{File "%TEMP%/utf-check-1494-3-154-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1495 utf-check-1495-3-154-1.jnk \ +{File "%TEMP%/utf-check-1495-3-154-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1496 utf-check-1496-3-155-0.jnk \ +{File "%TEMP%/utf-check-1496-3-155-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1497 utf-check-1497-3-155-1.jnk \ +{File "%TEMP%/utf-check-1497-3-155-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1498 utf-check-1498-3-156-0.jnk \ +{File "%TEMP%/utf-check-1498-3-156-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1499 utf-check-1499-3-156-1.jnk \ +{File "%TEMP%/utf-check-1499-3-156-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1500 utf-check-1500-3-157-0.jnk \ +{File "%TEMP%/utf-check-1500-3-157-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1501 utf-check-1501-3-157-1.jnk \ +{File "%TEMP%/utf-check-1501-3-157-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1502 utf-check-1502-3-158-0.jnk \ +{File "%TEMP%/utf-check-1502-3-158-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1503 utf-check-1503-3-158-1.jnk \ +{File "%TEMP%/utf-check-1503-3-158-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1504 utf-check-1504-3-159-0.jnk \ +{File "%TEMP%/utf-check-1504-3-159-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1505 utf-check-1505-3-159-1.jnk \ +{File "%TEMP%/utf-check-1505-3-159-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1506 utf-check-1506-3-160-0.jnk \ +{File "%TEMP%/utf-check-1506-3-160-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1507 utf-check-1507-3-160-1.jnk \ +{File "%TEMP%/utf-check-1507-3-160-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1508 utf-check-1508-3-161-0.jnk \ +{File "%TEMP%/utf-check-1508-3-161-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1509 utf-check-1509-3-161-1.jnk \ +{File "%TEMP%/utf-check-1509-3-161-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1510 utf-check-1510-3-162-0.jnk \ +{File "%TEMP%/utf-check-1510-3-162-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1511 utf-check-1511-3-162-1.jnk \ +{File "%TEMP%/utf-check-1511-3-162-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1512 utf-check-1512-3-163-0.jnk \ +{File "%TEMP%/utf-check-1512-3-163-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1513 utf-check-1513-3-163-1.jnk \ +{File "%TEMP%/utf-check-1513-3-163-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1514 utf-check-1514-3-164-0.jnk \ +{File "%TEMP%/utf-check-1514-3-164-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1515 utf-check-1515-3-164-1.jnk \ +{File "%TEMP%/utf-check-1515-3-164-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1516 utf-check-1516-3-165-0.jnk \ +{File "%TEMP%/utf-check-1516-3-165-0.jnk" has 8 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1517 utf-check-1517-3-165-1.jnk \ +{File "%TEMP%/utf-check-1517-3-165-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1518 utf-check-1518-3-166-0.jnk \ +{File "%TEMP%/utf-check-1518-3-166-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1519 utf-check-1519-3-166-1.jnk \ +{File "%TEMP%/utf-check-1519-3-166-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1520 utf-check-1520-3-167-0.jnk \ +{File "%TEMP%/utf-check-1520-3-167-0.jnk" has 8 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1521 utf-check-1521-3-167-1.jnk \ +{File "%TEMP%/utf-check-1521-3-167-1.jnk" has 9 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1522 utf-check-1522-3-168-0.jnk \ +{File "%TEMP%/utf-check-1522-3-168-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1523 utf-check-1523-3-168-1.jnk \ +{File "%TEMP%/utf-check-1523-3-168-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1524 utf-check-1524-3-169-0.jnk \ +{File "%TEMP%/utf-check-1524-3-169-0.jnk" has 10 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1525 utf-check-1525-3-169-1.jnk \ +{File "%TEMP%/utf-check-1525-3-169-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1526 utf-check-1526-3-170-0.jnk \ +{File "%TEMP%/utf-check-1526-3-170-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1527 utf-check-1527-3-170-1.jnk \ +{File "%TEMP%/utf-check-1527-3-170-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1528 utf-check-1528-3-171-0.jnk \ +{File "%TEMP%/utf-check-1528-3-171-0.jnk" has 10 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1529 utf-check-1529-3-171-1.jnk \ +{File "%TEMP%/utf-check-1529-3-171-1.jnk" has 11 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1530 utf-check-1530-3-172-0.jnk \ +{File "%TEMP%/utf-check-1530-3-172-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1531 utf-check-1531-3-172-1.jnk \ +{File "%TEMP%/utf-check-1531-3-172-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1532 utf-check-1532-3-173-0.jnk \ +{File "%TEMP%/utf-check-1532-3-173-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1533 utf-check-1533-3-173-1.jnk \ +{File "%TEMP%/utf-check-1533-3-173-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1534 utf-check-1534-3-174-0.jnk \ +{File "%TEMP%/utf-check-1534-3-174-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1535 utf-check-1535-3-174-1.jnk \ +{File "%TEMP%/utf-check-1535-3-174-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1536 utf-check-1536-3-175-0.jnk \ +{File "%TEMP%/utf-check-1536-3-175-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1537 utf-check-1537-3-175-1.jnk \ +{File "%TEMP%/utf-check-1537-3-175-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1538 utf-check-1538-3-176-0.jnk \ +{File "%TEMP%/utf-check-1538-3-176-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1539 utf-check-1539-3-176-1.jnk \ +{File "%TEMP%/utf-check-1539-3-176-1.jnk" has 15 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1540 utf-check-1540-3-177-0.jnk \ +{File "%TEMP%/utf-check-1540-3-177-0.jnk" has 12 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1541 utf-check-1541-3-177-1.jnk \ +{File "%TEMP%/utf-check-1541-3-177-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1542 utf-check-1542-3-178-0.jnk \ +{File "%TEMP%/utf-check-1542-3-178-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1543 utf-check-1543-3-178-1.jnk \ +{File "%TEMP%/utf-check-1543-3-178-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1544 utf-check-1544-3-179-0.jnk \ +{File "%TEMP%/utf-check-1544-3-179-0.jnk" has 12 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1545 utf-check-1545-3-179-1.jnk \ +{File "%TEMP%/utf-check-1545-3-179-1.jnk" has 13 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1546 utf-check-1546-3-180-0.jnk \ +{File "%TEMP%/utf-check-1546-3-180-0.jnk" has 14 bytes. +Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +utf-check 1547 utf-check-1547-3-180-1.jnk \ +{File "%TEMP%/utf-check-1547-3-180-1.jnk" has 15 bytes. 
+Starts with UTF-8 BOM: no +Starts with UTF-16 BOM: no +Looks like UTF-8: no +Has flag LOOK_NUL: yes +Has flag LOOK_CR: no +Has flag LOOK_LONE_CR: no +Has flag LOOK_LF: no +Has flag LOOK_LONE_LF: no +Has flag LOOK_CRLF: no +Has flag LOOK_LONG: no +Has flag LOOK_INVALID: yes +Has flag LOOK_ODD: no +Has flag LOOK_SHORT: no} + +############################ END GENERATED SECTION ############################ + +deleteTestFiles $tempPath 100 ADDED test/utf16be.txt Index: test/utf16be.txt ================================================================== --- test/utf16be.txt +++ test/utf16be.txt @@ -0,0 +1,23 @@ +This file contains utf-16be text. +The purpose of including this file in the Fossil +repository is to provide the ability to test Fossil's +handling of UTF-16 using its own repository. + +Browsing to this file in the web interface should display the file as +text on the screen. + +When there are changes to this file those changes should show +up in the "diff" output and be properly displayed on the +screen. + +Test procedures: + + 1. Verify that this file is correctly displayed using the /artifact + webpage. + + 2. Verify that this file is correctly displayed by the /doc webpage. + + 3. Verify that changes to it are correctly displayed by the /fdiff webpage. + + 4. Verify that the "fossil diff" command correctly displays changes + in this file. Do the same with the --tk option. ADDED test/utf16le.txt Index: test/utf16le.txt ================================================================== --- test/utf16le.txt +++ test/utf16le.txt @@ -0,0 +1,23 @@ +This file contains utf-16le text. +The purpose of including this file in the Fossil +repository is to provide the ability to test Fossil's +handling of UTF-16 using its own repository. + +Browsing to this file in the web interface should display the file as +text on the screen. + +When there are changes to this file those changes should show +up in the "diff" output and be properly displayed on the +screen. + +Test procedures: + + 1. Verify that this file is correctly displayed using the /artifact + webpage. + + 2. Verify that this file is correctly displayed by the /doc webpage. + + 3. Verify that changes to it are correctly displayed by the /fdiff webpage. + + 4. Verify that the "fossil diff" command correctly displays changes + in this file. Do the same with the --tk option. ADDED test/valgrind-www.tcl Index: test/valgrind-www.tcl ================================================================== --- test/valgrind-www.tcl +++ test/valgrind-www.tcl @@ -0,0 +1,57 @@ +#!/usr/bin/tclsh +# +# Run this script in an open Fossil checkout at the top-level with a +# fresh build of Fossil itself. This script will run fossil on hundreds +# of different web pages looking for memory allocation problems using +# valgrind. Valgrind output appears on stderr. Suggested test scenario: +# +# make +# tclsh valgrind-www.tcl 2>&1 | tee valgrind-out.txt +# +# Then examine the valgrind-out.txt file for issues.
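+# +# A quick way to scan that report for the most common findings (only a +# suggested sketch; the exact message strings vary with the valgrind +# version in use) is an ordinary grep over the captured output: +# +#   grep -nE "Invalid (read|write)|definitely lost" valgrind-out.txt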
+# +proc run_query {url} { + set fd [open q.txt w] + puts $fd "GET $url HTTP/1.0\r\n\r" + close $fd + set msg {} + catch {exec valgrind ./fossil test-http @ stderr} msg + return $msg +} +set todo {} +foreach url { + /home + /timeline + /brlist + /taglist + /reportlist + /setup + /dir + /wcontent +} { + set seen($url) 1 + set pending($url) 1 +} +set limit 1000 +set npending [llength [array names pending]] +proc get_pending {} { + global pending npending + set res [lindex [array names pending] [expr {int(rand()*$npending)}]] + unset pending($res) + incr npending -1 + return $res +} +for {set i 0} {$npending>0 && $i<$limit} {incr i} { + set url [get_pending] + puts "====== ([expr {$i+1}]) $url ======" + set x [run_query $url] + while {[regexp {<[aA] .*?href="(/[a-z].*?)".*?>(.*)$} $x all url tail]} { + set u2 [string map {< < > > " \" & &} $url] + if {![info exists seen($u2)]} { + set pending($u2) 1 + set seen($u2) 1 + incr npending + } + set x $tail + } +} Index: tools/cvs2fossil/changeset ================================================================== --- tools/cvs2fossil/changeset +++ tools/cvs2fossil/changeset @@ -14,11 +14,11 @@ # # ## ### ##### ######## ############# ##################### ## Helper application, debugging of cvs2fossil. This application ## extracts all information about a changeset and writes it nicely ## formatted to stdout. The changeset is specified by its internal -## numerical id. +## numerical id. # # ## ### ##### ######## ############# ##################### ## Requirements, extended package management for local packages. lappend auto_path [file join [file dirname [info script]] lib] Index: tools/cvs2fossil/lib/c2f_pbreakacycle.tcl ================================================================== --- tools/cvs2fossil/lib/c2f_pbreakacycle.tcl +++ tools/cvs2fossil/lib/c2f_pbreakacycle.tcl @@ -270,11 +270,11 @@ set mins $minsa($item) set maxp $maxp($item) # Note that for the min successor position "" represents # +infinity integrity assert { - ($mins eq "") || ($maxp < $mins) + ($mins eq "") || ($maxp < $mins) } {Item <$item> is backward at file level ($maxp >= $mins)} } # Save the limits for the splitter, and compute the border at # which to split as the minimum of all minimal successor Index: tools/cvs2fossil/lib/c2f_prev.tcl ================================================================== --- tools/cvs2fossil/lib/c2f_prev.tcl +++ tools/cvs2fossil/lib/c2f_prev.tcl @@ -1179,11 +1179,11 @@ typevariable mytchangesets -array { sym::branch {} sym::tag {} rev {} } - + typevariable myitemmap -array {} ; # Map from items (tagged) # to the list of changesets # containing it. Each item # can be used by only one # changeset. Index: tools/cvs2fossil/lib/mem.tcl ================================================================== --- tools/cvs2fossil/lib/mem.tcl +++ tools/cvs2fossil/lib/mem.tcl @@ -48,12 +48,12 @@ variable lmba variable mid struct::list assign [minfo] _ _ _ cba _ mba - set dc [expr $cba - $lcba] ; set lcba $cba - set dm [expr $mba - $lmba] ; set lmba $mba + set dc [expr $cba - $lcba] ; set lcba $cba + set dm [expr $mba - $lmba] ; set lmba $mba # projection: 1 2 3 4 5 6 7 6 8 10 return "[F [incr mid]] | [F $cba] | [F $dc] | [F $mba] | [F $dm] |=| " } ADDED tools/fossil-autocomplete.bash Index: tools/fossil-autocomplete.bash ================================================================== --- tools/fossil-autocomplete.bash +++ tools/fossil-autocomplete.bash @@ -0,0 +1,15 @@ +# Command name completion for Fossil. 
+# Mailing-list contribution by Stuart Rackham. +function _fossil() { + local cur commands + cur=${COMP_WORDS[COMP_CWORD]} + commands=$(fossil help --all) + if [ $COMP_CWORD -eq 1 ] || [ ${COMP_WORDS[1]} = help ]; then + # Command name completion for 1st argument or 2nd if help command. + COMPREPLY=( $(compgen -W "$commands" $cur) ) + else + # File name completion for other arguments. + COMPREPLY=( $(compgen -f $cur) ) + fi +} +complete -o default -F _fossil fossil f Index: tools/fossil_chat.tcl ================================================================== --- tools/fossil_chat.tcl +++ tools/fossil_chat.tcl @@ -84,11 +84,11 @@ catch {close $SOCKET} if {[catch { if {$::PROXYHOST ne {}} { set SOCKET [socket $::PROXYHOST $::PROXYPORT] puts $SOCKET "CONNECT $::SERVERHOST:$::SERVERPORT HTTP/1.1" - puts $SOCKET "Host: $::SERVERHOST:$::SERVERPORT" + puts $SOCKET "Host: $::SERVERHOST:$::SERVERPORT" puts $SOCKET "" } else { set SOCKET [socket $::SERVERHOST $::SERVERPORT] } fconfigure $SOCKET -translation binary -blocking 0 @@ -142,11 +142,11 @@ # proc delete_files {} { global FILES .mb.files delete 3 end array unset FILES - .mb.files entryconfigure 1 -state disabled + .mb.files entryconfigure 1 -state disabled } # Prompt the user to select a file from the disk. Then send that # file to all chat participants. # @@ -188,11 +188,11 @@ if {![info exists prior] || $filename!=$prior} { .mb.files add command -label "Save \"$filename\"" \ -command [list save_file $filename] } set FILES($filename) $data - .mb.files entryconfigure 1 -state active + .mb.files entryconfigure 1 -state active set time [clock format [clock seconds] -format {%H:%M} -gmt 1] .msg.t insert end "\[$time $from\] " meta "File: \"$filename\"\n" norm .msg.t see end } @@ -232,11 +232,11 @@ } elseif {$cmd=="file"} { if {[info commands handle_file]=="handle_file"} { handle_file [lindex $line 1] [lindex $line 2] [lindex $line 3] } } -} +} # Handle a broken socket connection # proc disconnect {} { global SOCKET ADDED tools/fossilwiki Index: tools/fossilwiki ================================================================== --- tools/fossilwiki +++ tools/fossilwiki @@ -0,0 +1,132 @@ +#!/usr/bin/perl +# vim: cin : + +$repofile = shift; +$repocmd = ''; +$repocmd = "-R $repofile" if -f $repofile; +$mainpage = ''; + +@rep = (); +if ( ! 
-f $repofile ) +{ + @rep = `fossil info | grep 'project-name'`; +} +else +{ + @rep = `fossil info $repofile | grep 'project-name'`; +} + +$mainpage = $rep[0]; +chomp $mainpage; +$mainpage =~ s/^project-name:\s+//; + + +@pages = `fossil wiki list $repocmd`; + +%pages = (); +foreach $page ( @pages ) +{ + chomp $page; + $text = `fossil wiki export "$page" $repocmd`; + $pages{$page} = $text; +} + +@orphans = (); +@nointernals = (); +@terminals = (); +@empties = (); +%badlinks = (); +%alllinks = (); +%links = (); +foreach $page ( keys %pages ) +{ + my @links = (); + my $text = $pages{$page}; + while ( $text =~ m/\[([^][]+)\]/g ) + { + push @links,$1; + } + + $numlinks = $#links; + + if (@links == ()) + { + push @terminals, $page; + } + else + { + my @internals = grep { $_ !~ /(http:)|(mailto:)|(https:)/ } @links; + if (@internals == ()) + { + push @nointernals, $page; + } + else + { + @{$links{$page}{'links'}} = map {my ($a,$b) = split /\|/; $a;} @internals; + foreach $internal ( @internals ) + { + my ($int_link, $display) = split /\|/, $internal; + ${$links{$int_link}{'refs'}}++; + $alllinks{$int_link} = 1; + } + } + } + + if ($text eq '' || $text =~ m/^Empty Page<\/i>/) + { + chomp $tail; + my ($head, $tail) = split /\/i>/ , $text; + if ($tail =~ m/^\s*$/s) { + push @empties, $page; + } + } +} +foreach $page ( keys %links ) +{ + if ($page ne $mainpage && (${$links{$page}{'refs'}} == 0)) + { + push @orphans, $page; + } +} +foreach $link (keys %alllinks ) +{ + if (! exists($pages{$link}) && $link !~ /^\./ && $link !~ /^\//) + { + $badlinks{$link} = 1; + } +} +foreach $empty ( @empties ) +{ + print ("empty: '$empty'\n"); +} +foreach $nointernals ( @nointernals ) +{ + print ("nointernals: '$nointernals'\n"); +} +foreach $terminal ( @terminals ) +{ + print ("terminal: '$terminal'\n"); +} +foreach $orphan ( @orphans ) +{ + print ("orphan: '$orphan'\n"); +} +foreach $link ( keys %badlinks ) +{ + print ("badlink: '$link'\n"); +} +foreach $page ( sort keys %links ) +{ + my @links = @{$links{$page}{'links'}}; + if (@links != ()) + { + if ($page eq $mainpage) + { + print "links: *** '$page' *** -> ", join (", ", @links), "\n"; + } + else + { + print "links: '$page' -> ", join (", ", @links), "\n"; + } + } +} ADDED win/Makefile.PellesCGMake Index: win/Makefile.PellesCGMake ================================================================== --- win/Makefile.PellesCGMake +++ win/Makefile.PellesCGMake @@ -0,0 +1,194 @@ +# +############################################################################## +# WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl") +############################################################################## +# +# This file is automatically generated. Instead of editing this +# file, edit "makemake.tcl" then run "tclsh makemake.tcl" +# to regenerate this file. +# +# HowTo +# ----- +# +# This is a Makefile to compile fossil with PellesC from +# http://www.smorgasbordet.com/pellesc/index.htm +# In addition to the Compiler envrionment, you need +# gmake from http://sourceforge.net/projects/unxutils/, Pelles make version +# couldn't handle the complex dependencies in this build +# zlib sources +# Then you do +# 1. create a directory PellesC in the project root directory +# 2. Change the variables PellesCDir/ZLIBSRCDIR to the path of your installation +# 3. open a dos prompt window and change working directory into PellesC (step 1) +# 4. 
run gmake -f ..\win\Makefile.PellesCGMake +# +# this file is tested with +# PellesC 5.00.13 +# gmake 3.80 +# zlib sources 1.2.5 +# Windows XP SP 2 +# and +# PellesC 6.00.4 +# gmake 3.80 +# zlib sources 1.2.5 +# Windows 7 Home Premium +# + +# +PellesCDir=c:\Programme\PellesC + +# Select between 32/64 bit code, default is 32 bit +#TARGETVERSION=64 + +ifeq ($(TARGETVERSION),64) +# 64 bit version +TARGETMACHINE_CC=amd64 +TARGETMACHINE_LN=amd64 +TARGETEXTEND=64 +else +# 32 bit version +TARGETMACHINE_CC=x86 +TARGETMACHINE_LN=ix86 +TARGETEXTEND= +endif + +# define the project directories +B=.. +SRCDIR=$(B)/src/ +WINDIR=$(B)/win/ +ZLIBSRCDIR=../../zlib/ + +# define linker command and options +LINK=$(PellesCDir)/bin/polink.exe +LINKFLAGS=-subsystem:console -machine:$(TARGETMACHINE_LN) /LIBPATH:$(PellesCDir)\lib\win$(TARGETEXTEND) /LIBPATH:$(PellesCDir)\lib kernel32.lib advapi32.lib delayimp$(TARGETEXTEND).lib Wsock32.lib Crtmt$(TARGETEXTEND).lib + +# define standard C-compiler and flags, used to compile +# the fossil binary. Some special definitions follow for +# special files follow +CC=$(PellesCDir)\bin\pocc.exe +DEFINES=-D_pgmptr=g.argv[0] +CCFLAGS=-T$(TARGETMACHINE_CC)-coff -Ot -W2 -Gd -Go -Ze -MT $(DEFINES) +INCLUDE=/I $(PellesCDir)\Include\Win /I $(PellesCDir)\Include /I $(ZLIBSRCDIR) /I $(SRCDIR) + +# define commands for building the windows resource files +RESOURCE=fossil.res +RC=$(PellesCDir)\bin\porc.exe +RCFLAGS=$(INCLUDE) -D__POCC__=1 -D_M_X$(TARGETVERSION) + +# define the special utilities files, needed to generate +# the automatically generated source files +UTILS=translate.exe mkindex.exe makeheaders.exe mkbuiltin.exe +UTILS_OBJ=$(UTILS:.exe=.obj) +UTILS_SRC=$(foreach uf,$(UTILS),$(SRCDIR)$(uf:.exe=.c)) + +# define the SQLite files, which need special flags on compile +SQLITESRC=sqlite3.c +ORIGSQLITESRC=$(foreach sf,$(SQLITESRC),$(SRCDIR)$(sf)) +SQLITEOBJ=$(foreach sf,$(SQLITESRC),$(sf:.c=.obj)) +SQLITEDEFINES=-DNDEBUG=1 -DSQLITE_OMIT_LOAD_EXTENSION=1 -DSQLITE_ENABLE_LOCKING_STYLE=0 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 -DSQLITE_OMIT_DEPRECATED -DSQLITE_ENABLE_EXPLAIN_COMMENTS -DSQLITE_ENABLE_FTS4 -DSQLITE_ENABLE_FTS3_PARENTHESIS -DSQLITE_ENABLE_DBSTAT_VTAB -DSQLITE_ENABLE_JSON1 -DSQLITE_ENABLE_FTS5 -DSQLITE_WIN32_NO_ANSI + +# define the SQLite shell files, which need special flags on compile +SQLITESHELLSRC=shell.c +ORIGSQLITESHELLSRC=$(foreach sf,$(SQLITESHELLSRC),$(SRCDIR)$(sf)) +SQLITESHELLOBJ=$(foreach sf,$(SQLITESHELLSRC),$(sf:.c=.obj)) +SQLITESHELLDEFINES=-Dmain=sqlite3_shell -DSQLITE_OMIT_LOAD_EXTENSION=1 -DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) -DSQLITE_SHELL_DBNAME_PROC=fossil_open -Daccess=file_access -Dsystem=fossil_system -Dgetenv=fossil_getenv -Dfopen=fossil_fopen + +# define the th scripting files, which need special flags on compile +THSRC=th.c th_lang.c +ORIGTHSRC=$(foreach sf,$(THSRC),$(SRCDIR)$(sf)) +THOBJ=$(foreach sf,$(THSRC),$(sf:.c=.obj)) + +# define the zlib files, needed by this compile +ZLIBSRC=adler32.c compress.c crc32.c deflate.c gzclose.c gzlib.c gzread.c gzwrite.c infback.c inffast.c inflate.c inftrees.c trees.c uncompr.c zutil.c +ORIGZLIBSRC=$(foreach sf,$(ZLIBSRC),$(ZLIBSRCDIR)$(sf)) +ZLIBOBJ=$(foreach sf,$(ZLIBSRC),$(sf:.c=.obj)) + +# define all fossil sources, using the standard compile and +# source generation. 
These are all files in SRCDIR, which are not +# mentioned as special files above: +ORIGSRC=$(filter-out $(UTILS_SRC) $(ORIGTHSRC) $(ORIGSQLITESRC) $(ORIGSQLITESHELLSRC),$(wildcard $(SRCDIR)*.c)) +SRC=$(subst $(SRCDIR),,$(ORIGSRC)) +TRANSLATEDSRC=$(SRC:.c=_.c) +TRANSLATEDOBJ=$(TRANSLATEDSRC:.c=.obj) + +# main target file is the application +APPLICATION=fossil.exe + +# define the standard make target +.PHONY: default +default: page_index.h builtin_data.h headers $(APPLICATION) + +# symbolic target to generate the source generate utils +.PHONY: utils +utils: $(UTILS) + +# link utils +$(UTILS) version.exe: %.exe: %.obj + $(LINK) $(LINKFLAGS) -out:"$@" $< + +# compiling standard fossil utils +$(UTILS_OBJ): %.obj: $(SRCDIR)%.c + $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" + +# compile special windows utils +version.obj: $(SRCDIR)mkversion.c + $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" + +# generate the translated c-source files +$(TRANSLATEDSRC): %_.c: $(SRCDIR)%.c translate.exe + translate.exe $< >$@ + +# generate the index source, containing all web references,.. +page_index.h: $(TRANSLATEDSRC) mkindex.exe + mkindex.exe $(TRANSLATEDSRC) >$@ + +builtin_data.h: $(EXTRA_FILES) mkbuiltin.exe + mkbuiltin.exe --prefix $(SRCDIR)/ $(EXTRA_FILES) >$@ + +# extracting version info from manifest +VERSION.h: version.exe ..\manifest.uuid ..\manifest ..\VERSION + version.exe ..\manifest.uuid ..\manifest ..\VERSION >$@ + +# generate the simplified headers +headers: makeheaders.exe page_index.h builtin_data.h VERSION.h ../src/sqlite3.h ../src/th.h VERSION.h + makeheaders.exe $(foreach ts,$(TRANSLATEDSRC),$(ts):$(ts:_.c=.h)) ../src/sqlite3.h ../src/th.h VERSION.h + echo Done >$@ + +# compile C sources with relevant options + +$(TRANSLATEDOBJ): %_.obj: %_.c %.h + $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" + +$(SQLITEOBJ): %.obj: $(SRCDIR)%.c $(SRCDIR)%.h + $(CC) $(CCFLAGS) $(SQLITEDEFINES) $(INCLUDE) "$<" -Fo"$@" + +$(SQLITESHELLOBJ): %.obj: $(SRCDIR)%.c + $(CC) $(CCFLAGS) $(SQLITESHELLDEFINES) $(INCLUDE) "$<" -Fo"$@" + +$(THOBJ): %.obj: $(SRCDIR)%.c $(SRCDIR)th.h + $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" + +$(ZLIBOBJ): %.obj: $(ZLIBSRCDIR)%.c + $(CC) $(CCFLAGS) $(INCLUDE) "$<" -Fo"$@" + +# create the windows resource with icon and version info +$(RESOURCE): %.res: ../win/%.rc ../win/*.ico + $(RC) $(RCFLAGS) $< -Fo"$@" + +# link the application +$(APPLICATION): $(TRANSLATEDOBJ) $(SQLITEOBJ) $(SQLITESHELLOBJ) $(THOBJ) $(ZLIBOBJ) headers $(RESOURCE) + $(LINK) $(LINKFLAGS) -out:"$@" $(TRANSLATEDOBJ) $(SQLITEOBJ) $(SQLITESHELLOBJ) $(THOBJ) $(ZLIBOBJ) $(RESOURCE) + +# cleanup + +.PHONY: clean +clean: + del /F $(TRANSLATEDOBJ) $(SQLITEOBJ) $(THOBJ) $(ZLIBOBJ) $(UTILS_OBJ) version.obj + del /F $(TRANSLATEDSRC) + del /F *.h headers + del /F $(RESOURCE) + +.PHONY: clobber +clobber: clean + del /F *.exe + ADDED win/Makefile.dmc Index: win/Makefile.dmc ================================================================== --- win/Makefile.dmc +++ win/Makefile.dmc @@ -0,0 +1,844 @@ +# +############################################################################## +# WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl") +############################################################################## +# +# This file is automatically generated. Instead of editing this +# file, edit "makemake.tcl" then run "tclsh makemake.tcl" +# to regenerate this file. +# +B = .. +SRCDIR = $B\src +OBJDIR = . +O = .obj +E = .exe + + +# Maybe DMDIR, SSL or INCL needs adjustment +DMDIR = c:\DM +INCL = -I. 
-I$(SRCDIR) -I$B\win\include -I$(DMDIR)\extra\include + +#SSL = -DFOSSIL_ENABLE_SSL=1 +SSL = + +CFLAGS = -o +BCC = $(DMDIR)\bin\dmc $(CFLAGS) +TCC = $(DMDIR)\bin\dmc $(CFLAGS) $(DMCDEF) $(SSL) $(INCL) +LIBS = $(DMDIR)\extra\lib\ zlib wsock32 advapi32 + +SQLITE_OPTIONS = -DNDEBUG=1 -DSQLITE_OMIT_LOAD_EXTENSION=1 -DSQLITE_ENABLE_LOCKING_STYLE=0 -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 -DSQLITE_OMIT_DEPRECATED -DSQLITE_ENABLE_EXPLAIN_COMMENTS -DSQLITE_ENABLE_FTS4 -DSQLITE_ENABLE_FTS3_PARENTHESIS -DSQLITE_ENABLE_DBSTAT_VTAB -DSQLITE_ENABLE_JSON1 -DSQLITE_ENABLE_FTS5 + +SHELL_OPTIONS = -Dmain=sqlite3_shell -DSQLITE_OMIT_LOAD_EXTENSION=1 -DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) -DSQLITE_SHELL_DBNAME_PROC=fossil_open -Daccess=file_access -Dsystem=fossil_system -Dgetenv=fossil_getenv -Dfopen=fossil_fopen + +SRC = add_.c allrepo_.c attach_.c bag_.c bisect_.c blob_.c branch_.c browse_.c builtin_.c bundle_.c cache_.c captcha_.c cgi_.c checkin_.c checkout_.c clearsign_.c clone_.c comformat_.c configure_.c content_.c db_.c delta_.c deltacmd_.c descendants_.c diff_.c diffcmd_.c doc_.c encode_.c event_.c export_.c file_.c finfo_.c foci_.c fusefs_.c glob_.c graph_.c gzip_.c http_.c http_socket_.c http_ssl_.c http_transport_.c import_.c info_.c json_.c json_artifact_.c json_branch_.c json_config_.c json_diff_.c json_dir_.c json_finfo_.c json_login_.c json_query_.c json_report_.c json_status_.c json_tag_.c json_timeline_.c json_user_.c json_wiki_.c leaf_.c loadctrl_.c login_.c lookslike_.c main_.c manifest_.c markdown_.c markdown_html_.c md5_.c merge_.c merge3_.c moderate_.c name_.c path_.c piechart_.c pivot_.c popen_.c pqueue_.c printf_.c publish_.c purge_.c rebuild_.c regexp_.c report_.c rss_.c schema_.c search_.c setup_.c sha1_.c shun_.c sitemap_.c skins_.c sqlcmd_.c stash_.c stat_.c statrep_.c style_.c sync_.c tag_.c tar_.c th_main_.c timeline_.c tkt_.c tktsetup_.c undo_.c unicode_.c update_.c url_.c user_.c utf8_.c util_.c verify_.c vfile_.c wiki_.c wikiformat_.c winfile_.c winhttp_.c wysiwyg_.c xfer_.c xfersetup_.c zip_.c + +OBJ = $(OBJDIR)\add$O $(OBJDIR)\allrepo$O $(OBJDIR)\attach$O $(OBJDIR)\bag$O $(OBJDIR)\bisect$O $(OBJDIR)\blob$O $(OBJDIR)\branch$O $(OBJDIR)\browse$O $(OBJDIR)\builtin$O $(OBJDIR)\bundle$O $(OBJDIR)\cache$O $(OBJDIR)\captcha$O $(OBJDIR)\cgi$O $(OBJDIR)\checkin$O $(OBJDIR)\checkout$O $(OBJDIR)\clearsign$O $(OBJDIR)\clone$O $(OBJDIR)\comformat$O $(OBJDIR)\configure$O $(OBJDIR)\content$O $(OBJDIR)\db$O $(OBJDIR)\delta$O $(OBJDIR)\deltacmd$O $(OBJDIR)\descendants$O $(OBJDIR)\diff$O $(OBJDIR)\diffcmd$O $(OBJDIR)\doc$O $(OBJDIR)\encode$O $(OBJDIR)\event$O $(OBJDIR)\export$O $(OBJDIR)\file$O $(OBJDIR)\finfo$O $(OBJDIR)\foci$O $(OBJDIR)\fusefs$O $(OBJDIR)\glob$O $(OBJDIR)\graph$O $(OBJDIR)\gzip$O $(OBJDIR)\http$O $(OBJDIR)\http_socket$O $(OBJDIR)\http_ssl$O $(OBJDIR)\http_transport$O $(OBJDIR)\import$O $(OBJDIR)\info$O $(OBJDIR)\json$O $(OBJDIR)\json_artifact$O $(OBJDIR)\json_branch$O $(OBJDIR)\json_config$O $(OBJDIR)\json_diff$O $(OBJDIR)\json_dir$O $(OBJDIR)\json_finfo$O $(OBJDIR)\json_login$O $(OBJDIR)\json_query$O $(OBJDIR)\json_report$O $(OBJDIR)\json_status$O $(OBJDIR)\json_tag$O $(OBJDIR)\json_timeline$O $(OBJDIR)\json_user$O $(OBJDIR)\json_wiki$O $(OBJDIR)\leaf$O $(OBJDIR)\loadctrl$O $(OBJDIR)\login$O $(OBJDIR)\lookslike$O $(OBJDIR)\main$O $(OBJDIR)\manifest$O $(OBJDIR)\markdown$O $(OBJDIR)\markdown_html$O $(OBJDIR)\md5$O $(OBJDIR)\merge$O $(OBJDIR)\merge3$O $(OBJDIR)\moderate$O $(OBJDIR)\name$O $(OBJDIR)\path$O $(OBJDIR)\piechart$O $(OBJDIR)\pivot$O $(OBJDIR)\popen$O 
$(OBJDIR)\pqueue$O $(OBJDIR)\printf$O $(OBJDIR)\publish$O $(OBJDIR)\purge$O $(OBJDIR)\rebuild$O $(OBJDIR)\regexp$O $(OBJDIR)\report$O $(OBJDIR)\rss$O $(OBJDIR)\schema$O $(OBJDIR)\search$O $(OBJDIR)\setup$O $(OBJDIR)\sha1$O $(OBJDIR)\shun$O $(OBJDIR)\sitemap$O $(OBJDIR)\skins$O $(OBJDIR)\sqlcmd$O $(OBJDIR)\stash$O $(OBJDIR)\stat$O $(OBJDIR)\statrep$O $(OBJDIR)\style$O $(OBJDIR)\sync$O $(OBJDIR)\tag$O $(OBJDIR)\tar$O $(OBJDIR)\th_main$O $(OBJDIR)\timeline$O $(OBJDIR)\tkt$O $(OBJDIR)\tktsetup$O $(OBJDIR)\undo$O $(OBJDIR)\unicode$O $(OBJDIR)\update$O $(OBJDIR)\url$O $(OBJDIR)\user$O $(OBJDIR)\utf8$O $(OBJDIR)\util$O $(OBJDIR)\verify$O $(OBJDIR)\vfile$O $(OBJDIR)\wiki$O $(OBJDIR)\wikiformat$O $(OBJDIR)\winfile$O $(OBJDIR)\winhttp$O $(OBJDIR)\wysiwyg$O $(OBJDIR)\xfer$O $(OBJDIR)\xfersetup$O $(OBJDIR)\zip$O $(OBJDIR)\shell$O $(OBJDIR)\sqlite3$O $(OBJDIR)\th$O $(OBJDIR)\th_lang$O + + +RC=$(DMDIR)\bin\rcc +RCFLAGS=-32 -w1 -I$(SRCDIR) /D__DMC__ + +APPNAME = $(OBJDIR)\fossil$(E) + +all: $(APPNAME) + +$(APPNAME) : translate$E mkindex$E codecheck1$E headers $(OBJ) $(OBJDIR)\link + cd $(OBJDIR) + codecheck1$E $(SRC) + $(DMDIR)\bin\link @link + +$(OBJDIR)\fossil.res: $B\win\fossil.rc + $(RC) $(RCFLAGS) -o$@ $** + +$(OBJDIR)\link: $B\win\Makefile.dmc $(OBJDIR)\fossil.res + +echo add allrepo attach bag bisect blob branch browse builtin bundle cache captcha cgi checkin checkout clearsign clone comformat configure content db delta deltacmd descendants diff diffcmd doc encode event export file finfo foci fusefs glob graph gzip http http_socket http_ssl http_transport import info json json_artifact json_branch json_config json_diff json_dir json_finfo json_login json_query json_report json_status json_tag json_timeline json_user json_wiki leaf loadctrl login lookslike main manifest markdown markdown_html md5 merge merge3 moderate name path piechart pivot popen pqueue printf publish purge rebuild regexp report rss schema search setup sha1 shun sitemap skins sqlcmd stash stat statrep style sync tag tar th_main timeline tkt tktsetup undo unicode update url user utf8 util verify vfile wiki wikiformat winfile winhttp wysiwyg xfer xfersetup zip shell sqlite3 th th_lang > $@ + +echo fossil >> $@ + +echo fossil >> $@ + +echo $(LIBS) >> $@ + +echo. 
>> $@ + +echo fossil >> $@ + +translate$E: $(SRCDIR)\translate.c + $(BCC) -o$@ $** + +makeheaders$E: $(SRCDIR)\makeheaders.c + $(BCC) -o$@ $** + +mkindex$E: $(SRCDIR)\mkindex.c + $(BCC) -o$@ $** + +mkbuiltin$E: $(SRCDIR)\mkbuiltin.c + $(BCC) -o$@ $** + +mkversion$E: $(SRCDIR)\mkversion.c + $(BCC) -o$@ $** + +codecheck1$E: $(SRCDIR)\codecheck1.c + $(BCC) -o$@ $** + +$(OBJDIR)\shell$O : $(SRCDIR)\shell.c + $(TCC) -o$@ -c $(SHELL_OPTIONS) $(SQLITE_OPTIONS) $(SHELL_CFLAGS) $** + +$(OBJDIR)\sqlite3$O : $(SRCDIR)\sqlite3.c + $(TCC) -o$@ -c $(SQLITE_OPTIONS) $(SQLITE_CFLAGS) $** + +$(OBJDIR)\th$O : $(SRCDIR)\th.c + $(TCC) -o$@ -c $** + +$(OBJDIR)\th_lang$O : $(SRCDIR)\th_lang.c + $(TCC) -o$@ -c $** + +$(OBJDIR)\cson_amalgamation.h : $(SRCDIR)\cson_amalgamation.h + cp $@ $@ + +VERSION.h : mkversion$E $B\manifest.uuid $B\manifest $B\VERSION + +$** > $@ + +page_index.h: mkindex$E $(SRC) + +$** > $@ + +builtin_data.h: mkbuiltin$E $(EXTRA_FILES) + mkbuiltin$E --prefix $(SRCDIR)/ $(EXTRA_FILES) > $@ + +clean: + -del $(OBJDIR)\*.obj + -del *.obj *_.c *.h *.map + +realclean: + -del $(APPNAME) translate$E mkindex$E makeheaders$E mkversion$E codecheck1$E mkbuiltin$E + +$(OBJDIR)\json$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_artifact$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_branch$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_config$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_diff$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_dir$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_finfo$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_login$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_query$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_report$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_status$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_tag$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_timeline$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_user$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_wiki$O : $(SRCDIR)\json_detail.h + + + +$(OBJDIR)\add$O : add_.c add.h + $(TCC) -o$@ -c add_.c + +add_.c : $(SRCDIR)\add.c + +translate$E $** > $@ + +$(OBJDIR)\allrepo$O : allrepo_.c allrepo.h + $(TCC) -o$@ -c allrepo_.c + +allrepo_.c : $(SRCDIR)\allrepo.c + +translate$E $** > $@ + +$(OBJDIR)\attach$O : attach_.c attach.h + $(TCC) -o$@ -c attach_.c + +attach_.c : $(SRCDIR)\attach.c + +translate$E $** > $@ + +$(OBJDIR)\bag$O : bag_.c bag.h + $(TCC) -o$@ -c bag_.c + +bag_.c : $(SRCDIR)\bag.c + +translate$E $** > $@ + +$(OBJDIR)\bisect$O : bisect_.c bisect.h + $(TCC) -o$@ -c bisect_.c + +bisect_.c : $(SRCDIR)\bisect.c + +translate$E $** > $@ + +$(OBJDIR)\blob$O : blob_.c blob.h + $(TCC) -o$@ -c blob_.c + +blob_.c : $(SRCDIR)\blob.c + +translate$E $** > $@ + +$(OBJDIR)\branch$O : branch_.c branch.h + $(TCC) -o$@ -c branch_.c + +branch_.c : $(SRCDIR)\branch.c + +translate$E $** > $@ + +$(OBJDIR)\browse$O : browse_.c browse.h + $(TCC) -o$@ -c browse_.c + +browse_.c : $(SRCDIR)\browse.c + +translate$E $** > $@ + +$(OBJDIR)\builtin$O : builtin_.c builtin.h + $(TCC) -o$@ -c builtin_.c + +builtin_.c : $(SRCDIR)\builtin.c + +translate$E $** > $@ + +$(OBJDIR)\bundle$O : bundle_.c bundle.h + $(TCC) -o$@ -c bundle_.c + +bundle_.c : $(SRCDIR)\bundle.c + +translate$E $** > $@ + +$(OBJDIR)\cache$O : cache_.c cache.h + $(TCC) -o$@ -c cache_.c + +cache_.c : $(SRCDIR)\cache.c + +translate$E $** > $@ + +$(OBJDIR)\captcha$O : captcha_.c captcha.h + $(TCC) -o$@ -c captcha_.c + +captcha_.c : $(SRCDIR)\captcha.c + +translate$E $** > $@ + +$(OBJDIR)\cgi$O : cgi_.c cgi.h + $(TCC) -o$@ -c cgi_.c + +cgi_.c : $(SRCDIR)\cgi.c + +translate$E $** > $@ + +$(OBJDIR)\checkin$O 
: checkin_.c checkin.h + $(TCC) -o$@ -c checkin_.c + +checkin_.c : $(SRCDIR)\checkin.c + +translate$E $** > $@ + +$(OBJDIR)\checkout$O : checkout_.c checkout.h + $(TCC) -o$@ -c checkout_.c + +checkout_.c : $(SRCDIR)\checkout.c + +translate$E $** > $@ + +$(OBJDIR)\clearsign$O : clearsign_.c clearsign.h + $(TCC) -o$@ -c clearsign_.c + +clearsign_.c : $(SRCDIR)\clearsign.c + +translate$E $** > $@ + +$(OBJDIR)\clone$O : clone_.c clone.h + $(TCC) -o$@ -c clone_.c + +clone_.c : $(SRCDIR)\clone.c + +translate$E $** > $@ + +$(OBJDIR)\comformat$O : comformat_.c comformat.h + $(TCC) -o$@ -c comformat_.c + +comformat_.c : $(SRCDIR)\comformat.c + +translate$E $** > $@ + +$(OBJDIR)\configure$O : configure_.c configure.h + $(TCC) -o$@ -c configure_.c + +configure_.c : $(SRCDIR)\configure.c + +translate$E $** > $@ + +$(OBJDIR)\content$O : content_.c content.h + $(TCC) -o$@ -c content_.c + +content_.c : $(SRCDIR)\content.c + +translate$E $** > $@ + +$(OBJDIR)\db$O : db_.c db.h + $(TCC) -o$@ -c db_.c + +db_.c : $(SRCDIR)\db.c + +translate$E $** > $@ + +$(OBJDIR)\delta$O : delta_.c delta.h + $(TCC) -o$@ -c delta_.c + +delta_.c : $(SRCDIR)\delta.c + +translate$E $** > $@ + +$(OBJDIR)\deltacmd$O : deltacmd_.c deltacmd.h + $(TCC) -o$@ -c deltacmd_.c + +deltacmd_.c : $(SRCDIR)\deltacmd.c + +translate$E $** > $@ + +$(OBJDIR)\descendants$O : descendants_.c descendants.h + $(TCC) -o$@ -c descendants_.c + +descendants_.c : $(SRCDIR)\descendants.c + +translate$E $** > $@ + +$(OBJDIR)\diff$O : diff_.c diff.h + $(TCC) -o$@ -c diff_.c + +diff_.c : $(SRCDIR)\diff.c + +translate$E $** > $@ + +$(OBJDIR)\diffcmd$O : diffcmd_.c diffcmd.h + $(TCC) -o$@ -c diffcmd_.c + +diffcmd_.c : $(SRCDIR)\diffcmd.c + +translate$E $** > $@ + +$(OBJDIR)\doc$O : doc_.c doc.h + $(TCC) -o$@ -c doc_.c + +doc_.c : $(SRCDIR)\doc.c + +translate$E $** > $@ + +$(OBJDIR)\encode$O : encode_.c encode.h + $(TCC) -o$@ -c encode_.c + +encode_.c : $(SRCDIR)\encode.c + +translate$E $** > $@ + +$(OBJDIR)\event$O : event_.c event.h + $(TCC) -o$@ -c event_.c + +event_.c : $(SRCDIR)\event.c + +translate$E $** > $@ + +$(OBJDIR)\export$O : export_.c export.h + $(TCC) -o$@ -c export_.c + +export_.c : $(SRCDIR)\export.c + +translate$E $** > $@ + +$(OBJDIR)\file$O : file_.c file.h + $(TCC) -o$@ -c file_.c + +file_.c : $(SRCDIR)\file.c + +translate$E $** > $@ + +$(OBJDIR)\finfo$O : finfo_.c finfo.h + $(TCC) -o$@ -c finfo_.c + +finfo_.c : $(SRCDIR)\finfo.c + +translate$E $** > $@ + +$(OBJDIR)\foci$O : foci_.c foci.h + $(TCC) -o$@ -c foci_.c + +foci_.c : $(SRCDIR)\foci.c + +translate$E $** > $@ + +$(OBJDIR)\fusefs$O : fusefs_.c fusefs.h + $(TCC) -o$@ -c fusefs_.c + +fusefs_.c : $(SRCDIR)\fusefs.c + +translate$E $** > $@ + +$(OBJDIR)\glob$O : glob_.c glob.h + $(TCC) -o$@ -c glob_.c + +glob_.c : $(SRCDIR)\glob.c + +translate$E $** > $@ + +$(OBJDIR)\graph$O : graph_.c graph.h + $(TCC) -o$@ -c graph_.c + +graph_.c : $(SRCDIR)\graph.c + +translate$E $** > $@ + +$(OBJDIR)\gzip$O : gzip_.c gzip.h + $(TCC) -o$@ -c gzip_.c + +gzip_.c : $(SRCDIR)\gzip.c + +translate$E $** > $@ + +$(OBJDIR)\http$O : http_.c http.h + $(TCC) -o$@ -c http_.c + +http_.c : $(SRCDIR)\http.c + +translate$E $** > $@ + +$(OBJDIR)\http_socket$O : http_socket_.c http_socket.h + $(TCC) -o$@ -c http_socket_.c + +http_socket_.c : $(SRCDIR)\http_socket.c + +translate$E $** > $@ + +$(OBJDIR)\http_ssl$O : http_ssl_.c http_ssl.h + $(TCC) -o$@ -c http_ssl_.c + +http_ssl_.c : $(SRCDIR)\http_ssl.c + +translate$E $** > $@ + +$(OBJDIR)\http_transport$O : http_transport_.c http_transport.h + $(TCC) -o$@ -c 
http_transport_.c + +http_transport_.c : $(SRCDIR)\http_transport.c + +translate$E $** > $@ + +$(OBJDIR)\import$O : import_.c import.h + $(TCC) -o$@ -c import_.c + +import_.c : $(SRCDIR)\import.c + +translate$E $** > $@ + +$(OBJDIR)\info$O : info_.c info.h + $(TCC) -o$@ -c info_.c + +info_.c : $(SRCDIR)\info.c + +translate$E $** > $@ + +$(OBJDIR)\json$O : json_.c json.h + $(TCC) -o$@ -c json_.c + +json_.c : $(SRCDIR)\json.c + +translate$E $** > $@ + +$(OBJDIR)\json_artifact$O : json_artifact_.c json_artifact.h + $(TCC) -o$@ -c json_artifact_.c + +json_artifact_.c : $(SRCDIR)\json_artifact.c + +translate$E $** > $@ + +$(OBJDIR)\json_branch$O : json_branch_.c json_branch.h + $(TCC) -o$@ -c json_branch_.c + +json_branch_.c : $(SRCDIR)\json_branch.c + +translate$E $** > $@ + +$(OBJDIR)\json_config$O : json_config_.c json_config.h + $(TCC) -o$@ -c json_config_.c + +json_config_.c : $(SRCDIR)\json_config.c + +translate$E $** > $@ + +$(OBJDIR)\json_diff$O : json_diff_.c json_diff.h + $(TCC) -o$@ -c json_diff_.c + +json_diff_.c : $(SRCDIR)\json_diff.c + +translate$E $** > $@ + +$(OBJDIR)\json_dir$O : json_dir_.c json_dir.h + $(TCC) -o$@ -c json_dir_.c + +json_dir_.c : $(SRCDIR)\json_dir.c + +translate$E $** > $@ + +$(OBJDIR)\json_finfo$O : json_finfo_.c json_finfo.h + $(TCC) -o$@ -c json_finfo_.c + +json_finfo_.c : $(SRCDIR)\json_finfo.c + +translate$E $** > $@ + +$(OBJDIR)\json_login$O : json_login_.c json_login.h + $(TCC) -o$@ -c json_login_.c + +json_login_.c : $(SRCDIR)\json_login.c + +translate$E $** > $@ + +$(OBJDIR)\json_query$O : json_query_.c json_query.h + $(TCC) -o$@ -c json_query_.c + +json_query_.c : $(SRCDIR)\json_query.c + +translate$E $** > $@ + +$(OBJDIR)\json_report$O : json_report_.c json_report.h + $(TCC) -o$@ -c json_report_.c + +json_report_.c : $(SRCDIR)\json_report.c + +translate$E $** > $@ + +$(OBJDIR)\json_status$O : json_status_.c json_status.h + $(TCC) -o$@ -c json_status_.c + +json_status_.c : $(SRCDIR)\json_status.c + +translate$E $** > $@ + +$(OBJDIR)\json_tag$O : json_tag_.c json_tag.h + $(TCC) -o$@ -c json_tag_.c + +json_tag_.c : $(SRCDIR)\json_tag.c + +translate$E $** > $@ + +$(OBJDIR)\json_timeline$O : json_timeline_.c json_timeline.h + $(TCC) -o$@ -c json_timeline_.c + +json_timeline_.c : $(SRCDIR)\json_timeline.c + +translate$E $** > $@ + +$(OBJDIR)\json_user$O : json_user_.c json_user.h + $(TCC) -o$@ -c json_user_.c + +json_user_.c : $(SRCDIR)\json_user.c + +translate$E $** > $@ + +$(OBJDIR)\json_wiki$O : json_wiki_.c json_wiki.h + $(TCC) -o$@ -c json_wiki_.c + +json_wiki_.c : $(SRCDIR)\json_wiki.c + +translate$E $** > $@ + +$(OBJDIR)\leaf$O : leaf_.c leaf.h + $(TCC) -o$@ -c leaf_.c + +leaf_.c : $(SRCDIR)\leaf.c + +translate$E $** > $@ + +$(OBJDIR)\loadctrl$O : loadctrl_.c loadctrl.h + $(TCC) -o$@ -c loadctrl_.c + +loadctrl_.c : $(SRCDIR)\loadctrl.c + +translate$E $** > $@ + +$(OBJDIR)\login$O : login_.c login.h + $(TCC) -o$@ -c login_.c + +login_.c : $(SRCDIR)\login.c + +translate$E $** > $@ + +$(OBJDIR)\lookslike$O : lookslike_.c lookslike.h + $(TCC) -o$@ -c lookslike_.c + +lookslike_.c : $(SRCDIR)\lookslike.c + +translate$E $** > $@ + +$(OBJDIR)\main$O : main_.c main.h + $(TCC) -o$@ -c main_.c + +main_.c : $(SRCDIR)\main.c + +translate$E $** > $@ + +$(OBJDIR)\manifest$O : manifest_.c manifest.h + $(TCC) -o$@ -c manifest_.c + +manifest_.c : $(SRCDIR)\manifest.c + +translate$E $** > $@ + +$(OBJDIR)\markdown$O : markdown_.c markdown.h + $(TCC) -o$@ -c markdown_.c + +markdown_.c : $(SRCDIR)\markdown.c + +translate$E $** > $@ + +$(OBJDIR)\markdown_html$O : 
markdown_html_.c markdown_html.h + $(TCC) -o$@ -c markdown_html_.c + +markdown_html_.c : $(SRCDIR)\markdown_html.c + +translate$E $** > $@ + +$(OBJDIR)\md5$O : md5_.c md5.h + $(TCC) -o$@ -c md5_.c + +md5_.c : $(SRCDIR)\md5.c + +translate$E $** > $@ + +$(OBJDIR)\merge$O : merge_.c merge.h + $(TCC) -o$@ -c merge_.c + +merge_.c : $(SRCDIR)\merge.c + +translate$E $** > $@ + +$(OBJDIR)\merge3$O : merge3_.c merge3.h + $(TCC) -o$@ -c merge3_.c + +merge3_.c : $(SRCDIR)\merge3.c + +translate$E $** > $@ + +$(OBJDIR)\moderate$O : moderate_.c moderate.h + $(TCC) -o$@ -c moderate_.c + +moderate_.c : $(SRCDIR)\moderate.c + +translate$E $** > $@ + +$(OBJDIR)\name$O : name_.c name.h + $(TCC) -o$@ -c name_.c + +name_.c : $(SRCDIR)\name.c + +translate$E $** > $@ + +$(OBJDIR)\path$O : path_.c path.h + $(TCC) -o$@ -c path_.c + +path_.c : $(SRCDIR)\path.c + +translate$E $** > $@ + +$(OBJDIR)\piechart$O : piechart_.c piechart.h + $(TCC) -o$@ -c piechart_.c + +piechart_.c : $(SRCDIR)\piechart.c + +translate$E $** > $@ + +$(OBJDIR)\pivot$O : pivot_.c pivot.h + $(TCC) -o$@ -c pivot_.c + +pivot_.c : $(SRCDIR)\pivot.c + +translate$E $** > $@ + +$(OBJDIR)\popen$O : popen_.c popen.h + $(TCC) -o$@ -c popen_.c + +popen_.c : $(SRCDIR)\popen.c + +translate$E $** > $@ + +$(OBJDIR)\pqueue$O : pqueue_.c pqueue.h + $(TCC) -o$@ -c pqueue_.c + +pqueue_.c : $(SRCDIR)\pqueue.c + +translate$E $** > $@ + +$(OBJDIR)\printf$O : printf_.c printf.h + $(TCC) -o$@ -c printf_.c + +printf_.c : $(SRCDIR)\printf.c + +translate$E $** > $@ + +$(OBJDIR)\publish$O : publish_.c publish.h + $(TCC) -o$@ -c publish_.c + +publish_.c : $(SRCDIR)\publish.c + +translate$E $** > $@ + +$(OBJDIR)\purge$O : purge_.c purge.h + $(TCC) -o$@ -c purge_.c + +purge_.c : $(SRCDIR)\purge.c + +translate$E $** > $@ + +$(OBJDIR)\rebuild$O : rebuild_.c rebuild.h + $(TCC) -o$@ -c rebuild_.c + +rebuild_.c : $(SRCDIR)\rebuild.c + +translate$E $** > $@ + +$(OBJDIR)\regexp$O : regexp_.c regexp.h + $(TCC) -o$@ -c regexp_.c + +regexp_.c : $(SRCDIR)\regexp.c + +translate$E $** > $@ + +$(OBJDIR)\report$O : report_.c report.h + $(TCC) -o$@ -c report_.c + +report_.c : $(SRCDIR)\report.c + +translate$E $** > $@ + +$(OBJDIR)\rss$O : rss_.c rss.h + $(TCC) -o$@ -c rss_.c + +rss_.c : $(SRCDIR)\rss.c + +translate$E $** > $@ + +$(OBJDIR)\schema$O : schema_.c schema.h + $(TCC) -o$@ -c schema_.c + +schema_.c : $(SRCDIR)\schema.c + +translate$E $** > $@ + +$(OBJDIR)\search$O : search_.c search.h + $(TCC) -o$@ -c search_.c + +search_.c : $(SRCDIR)\search.c + +translate$E $** > $@ + +$(OBJDIR)\setup$O : setup_.c setup.h + $(TCC) -o$@ -c setup_.c + +setup_.c : $(SRCDIR)\setup.c + +translate$E $** > $@ + +$(OBJDIR)\sha1$O : sha1_.c sha1.h + $(TCC) -o$@ -c sha1_.c + +sha1_.c : $(SRCDIR)\sha1.c + +translate$E $** > $@ + +$(OBJDIR)\shun$O : shun_.c shun.h + $(TCC) -o$@ -c shun_.c + +shun_.c : $(SRCDIR)\shun.c + +translate$E $** > $@ + +$(OBJDIR)\sitemap$O : sitemap_.c sitemap.h + $(TCC) -o$@ -c sitemap_.c + +sitemap_.c : $(SRCDIR)\sitemap.c + +translate$E $** > $@ + +$(OBJDIR)\skins$O : skins_.c skins.h + $(TCC) -o$@ -c skins_.c + +skins_.c : $(SRCDIR)\skins.c + +translate$E $** > $@ + +$(OBJDIR)\sqlcmd$O : sqlcmd_.c sqlcmd.h + $(TCC) -o$@ -c sqlcmd_.c + +sqlcmd_.c : $(SRCDIR)\sqlcmd.c + +translate$E $** > $@ + +$(OBJDIR)\stash$O : stash_.c stash.h + $(TCC) -o$@ -c stash_.c + +stash_.c : $(SRCDIR)\stash.c + +translate$E $** > $@ + +$(OBJDIR)\stat$O : stat_.c stat.h + $(TCC) -o$@ -c stat_.c + +stat_.c : $(SRCDIR)\stat.c + +translate$E $** > $@ + +$(OBJDIR)\statrep$O : statrep_.c statrep.h + $(TCC) 
-o$@ -c statrep_.c + +statrep_.c : $(SRCDIR)\statrep.c + +translate$E $** > $@ + +$(OBJDIR)\style$O : style_.c style.h + $(TCC) -o$@ -c style_.c + +style_.c : $(SRCDIR)\style.c + +translate$E $** > $@ + +$(OBJDIR)\sync$O : sync_.c sync.h + $(TCC) -o$@ -c sync_.c + +sync_.c : $(SRCDIR)\sync.c + +translate$E $** > $@ + +$(OBJDIR)\tag$O : tag_.c tag.h + $(TCC) -o$@ -c tag_.c + +tag_.c : $(SRCDIR)\tag.c + +translate$E $** > $@ + +$(OBJDIR)\tar$O : tar_.c tar.h + $(TCC) -o$@ -c tar_.c + +tar_.c : $(SRCDIR)\tar.c + +translate$E $** > $@ + +$(OBJDIR)\th_main$O : th_main_.c th_main.h + $(TCC) -o$@ -c th_main_.c + +th_main_.c : $(SRCDIR)\th_main.c + +translate$E $** > $@ + +$(OBJDIR)\timeline$O : timeline_.c timeline.h + $(TCC) -o$@ -c timeline_.c + +timeline_.c : $(SRCDIR)\timeline.c + +translate$E $** > $@ + +$(OBJDIR)\tkt$O : tkt_.c tkt.h + $(TCC) -o$@ -c tkt_.c + +tkt_.c : $(SRCDIR)\tkt.c + +translate$E $** > $@ + +$(OBJDIR)\tktsetup$O : tktsetup_.c tktsetup.h + $(TCC) -o$@ -c tktsetup_.c + +tktsetup_.c : $(SRCDIR)\tktsetup.c + +translate$E $** > $@ + +$(OBJDIR)\undo$O : undo_.c undo.h + $(TCC) -o$@ -c undo_.c + +undo_.c : $(SRCDIR)\undo.c + +translate$E $** > $@ + +$(OBJDIR)\unicode$O : unicode_.c unicode.h + $(TCC) -o$@ -c unicode_.c + +unicode_.c : $(SRCDIR)\unicode.c + +translate$E $** > $@ + +$(OBJDIR)\update$O : update_.c update.h + $(TCC) -o$@ -c update_.c + +update_.c : $(SRCDIR)\update.c + +translate$E $** > $@ + +$(OBJDIR)\url$O : url_.c url.h + $(TCC) -o$@ -c url_.c + +url_.c : $(SRCDIR)\url.c + +translate$E $** > $@ + +$(OBJDIR)\user$O : user_.c user.h + $(TCC) -o$@ -c user_.c + +user_.c : $(SRCDIR)\user.c + +translate$E $** > $@ + +$(OBJDIR)\utf8$O : utf8_.c utf8.h + $(TCC) -o$@ -c utf8_.c + +utf8_.c : $(SRCDIR)\utf8.c + +translate$E $** > $@ + +$(OBJDIR)\util$O : util_.c util.h + $(TCC) -o$@ -c util_.c + +util_.c : $(SRCDIR)\util.c + +translate$E $** > $@ + +$(OBJDIR)\verify$O : verify_.c verify.h + $(TCC) -o$@ -c verify_.c + +verify_.c : $(SRCDIR)\verify.c + +translate$E $** > $@ + +$(OBJDIR)\vfile$O : vfile_.c vfile.h + $(TCC) -o$@ -c vfile_.c + +vfile_.c : $(SRCDIR)\vfile.c + +translate$E $** > $@ + +$(OBJDIR)\wiki$O : wiki_.c wiki.h + $(TCC) -o$@ -c wiki_.c + +wiki_.c : $(SRCDIR)\wiki.c + +translate$E $** > $@ + +$(OBJDIR)\wikiformat$O : wikiformat_.c wikiformat.h + $(TCC) -o$@ -c wikiformat_.c + +wikiformat_.c : $(SRCDIR)\wikiformat.c + +translate$E $** > $@ + +$(OBJDIR)\winfile$O : winfile_.c winfile.h + $(TCC) -o$@ -c winfile_.c + +winfile_.c : $(SRCDIR)\winfile.c + +translate$E $** > $@ + +$(OBJDIR)\winhttp$O : winhttp_.c winhttp.h + $(TCC) -o$@ -c winhttp_.c + +winhttp_.c : $(SRCDIR)\winhttp.c + +translate$E $** > $@ + +$(OBJDIR)\wysiwyg$O : wysiwyg_.c wysiwyg.h + $(TCC) -o$@ -c wysiwyg_.c + +wysiwyg_.c : $(SRCDIR)\wysiwyg.c + +translate$E $** > $@ + +$(OBJDIR)\xfer$O : xfer_.c xfer.h + $(TCC) -o$@ -c xfer_.c + +xfer_.c : $(SRCDIR)\xfer.c + +translate$E $** > $@ + +$(OBJDIR)\xfersetup$O : xfersetup_.c xfersetup.h + $(TCC) -o$@ -c xfersetup_.c + +xfersetup_.c : $(SRCDIR)\xfersetup.c + +translate$E $** > $@ + +$(OBJDIR)\zip$O : zip_.c zip.h + $(TCC) -o$@ -c zip_.c + +zip_.c : $(SRCDIR)\zip.c + +translate$E $** > $@ + +headers: makeheaders$E page_index.h builtin_data.h VERSION.h + +makeheaders$E add_.c:add.h allrepo_.c:allrepo.h attach_.c:attach.h bag_.c:bag.h bisect_.c:bisect.h blob_.c:blob.h branch_.c:branch.h browse_.c:browse.h builtin_.c:builtin.h bundle_.c:bundle.h cache_.c:cache.h captcha_.c:captcha.h cgi_.c:cgi.h checkin_.c:checkin.h checkout_.c:checkout.h 
clearsign_.c:clearsign.h clone_.c:clone.h comformat_.c:comformat.h configure_.c:configure.h content_.c:content.h db_.c:db.h delta_.c:delta.h deltacmd_.c:deltacmd.h descendants_.c:descendants.h diff_.c:diff.h diffcmd_.c:diffcmd.h doc_.c:doc.h encode_.c:encode.h event_.c:event.h export_.c:export.h file_.c:file.h finfo_.c:finfo.h foci_.c:foci.h fusefs_.c:fusefs.h glob_.c:glob.h graph_.c:graph.h gzip_.c:gzip.h http_.c:http.h http_socket_.c:http_socket.h http_ssl_.c:http_ssl.h http_transport_.c:http_transport.h import_.c:import.h info_.c:info.h json_.c:json.h json_artifact_.c:json_artifact.h json_branch_.c:json_branch.h json_config_.c:json_config.h json_diff_.c:json_diff.h json_dir_.c:json_dir.h json_finfo_.c:json_finfo.h json_login_.c:json_login.h json_query_.c:json_query.h json_report_.c:json_report.h json_status_.c:json_status.h json_tag_.c:json_tag.h json_timeline_.c:json_timeline.h json_user_.c:json_user.h json_wiki_.c:json_wiki.h leaf_.c:leaf.h loadctrl_.c:loadctrl.h login_.c:login.h lookslike_.c:lookslike.h main_.c:main.h manifest_.c:manifest.h markdown_.c:markdown.h markdown_html_.c:markdown_html.h md5_.c:md5.h merge_.c:merge.h merge3_.c:merge3.h moderate_.c:moderate.h name_.c:name.h path_.c:path.h piechart_.c:piechart.h pivot_.c:pivot.h popen_.c:popen.h pqueue_.c:pqueue.h printf_.c:printf.h publish_.c:publish.h purge_.c:purge.h rebuild_.c:rebuild.h regexp_.c:regexp.h report_.c:report.h rss_.c:rss.h schema_.c:schema.h search_.c:search.h setup_.c:setup.h sha1_.c:sha1.h shun_.c:shun.h sitemap_.c:sitemap.h skins_.c:skins.h sqlcmd_.c:sqlcmd.h stash_.c:stash.h stat_.c:stat.h statrep_.c:statrep.h style_.c:style.h sync_.c:sync.h tag_.c:tag.h tar_.c:tar.h th_main_.c:th_main.h timeline_.c:timeline.h tkt_.c:tkt.h tktsetup_.c:tktsetup.h undo_.c:undo.h unicode_.c:unicode.h update_.c:update.h url_.c:url.h user_.c:user.h utf8_.c:utf8.h util_.c:util.h verify_.c:verify.h vfile_.c:vfile.h wiki_.c:wiki.h wikiformat_.c:wikiformat.h winfile_.c:winfile.h winhttp_.c:winhttp.h wysiwyg_.c:wysiwyg.h xfer_.c:xfer.h xfersetup_.c:xfersetup.h zip_.c:zip.h $(SRCDIR)\sqlite3.h $(SRCDIR)\th.h VERSION.h $(SRCDIR)\cson_amalgamation.h + @copy /Y nul: headers ADDED win/Makefile.mingw Index: win/Makefile.mingw ================================================================== --- win/Makefile.mingw +++ win/Makefile.mingw @@ -0,0 +1,2139 @@ +#!/usr/bin/make +# +############################################################################## +# WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl") +############################################################################## +# +# This file is automatically generated. Instead of editing this +# file, edit "makemake.tcl" then run "tclsh makemake.tcl" +# to regenerate this file. +# +# This is a makefile for use on Cygwin/Darwin/FreeBSD/Linux/Windows using +# MinGW or MinGW-w64. +# +# Some of the special options which can be passed to make +# USE_WINDOWS=1 if building under a windows command prompt +# X64=1 if using an unprefixed 64-bit mingw compiler +# + +#### Select one of MinGW, MinGW-w64 (32-bit) or MinGW-w64 (64-bit) compilers. +# By default, this is an empty string (i.e. use the native compiler). +# +PREFIX = +# PREFIX = mingw32- +# PREFIX = i686-pc-mingw32- +# PREFIX = i686-w64-mingw32- +# PREFIX = x86_64-w64-mingw32- + +#### The toplevel directory of the source tree. Fossil can be built +# in a directory that is separate from the source tree. Just change +# the following to point from the build directory to the src/ folder. 
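+#
+# For example, when the build directory is a sibling of a checkout named
+# "fossil" (a hypothetical layout; adjust the path to your setup), this
+# could be set along the lines of:
+#
+#     SRCDIR = ../fossil/src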
+# +SRCDIR = src + +#### The directory into which object code files should be written. +# +OBJDIR = wbld + +#### C Compiler and options for use in building executables that +# will run on the platform that is doing the build. This is used +# to compile code-generator programs as part of the build process. +# See TCC below for the C compiler for building the finished binary. +# +BCC = gcc + +#### Enable compiling with debug symbols (much larger binary) +# +# FOSSIL_ENABLE_SYMBOLS = 1 + +#### Enable JSON (http://www.json.org) support using "cson" +# +# FOSSIL_ENABLE_JSON = 1 + +#### Enable HTTPS support via OpenSSL (links to libssl and libcrypto) +# +# FOSSIL_ENABLE_SSL = 1 + +#### Automatically build OpenSSL when building Fossil (causes rebuild +# issues when building incrementally). +# +# FOSSIL_BUILD_SSL = 1 + +#### Enable relative paths in external diff/gdiff +# +# FOSSIL_ENABLE_EXEC_REL_PATHS = 1 + +#### Enable legacy treatment of mv/rm (skip checkout files) +# +# FOSSIL_ENABLE_LEGACY_MV_RM = 1 + +#### Enable TH1 scripts in embedded documentation files +# +# FOSSIL_ENABLE_TH1_DOCS = 1 + +#### Enable hooks for commands and web pages via TH1 +# +# FOSSIL_ENABLE_TH1_HOOKS = 1 + +#### Enable scripting support via Tcl/Tk +# +# FOSSIL_ENABLE_TCL = 1 + +#### Load Tcl using the stubs library mechanism +# +# FOSSIL_ENABLE_TCL_STUBS = 1 + +#### Load Tcl using the private stubs mechanism +# +# FOSSIL_ENABLE_TCL_PRIVATE_STUBS = 1 + +#### Use 'system' SQLite +# +# USE_SYSTEM_SQLITE = 1 + +#### Use the miniz compression library +# +# FOSSIL_ENABLE_MINIZ = 1 + +#### Use the Tcl source directory instead of the install directory? +# This is useful when Tcl has been compiled statically with MinGW. +# +FOSSIL_TCL_SOURCE = 1 + +#### Check if the workaround for the MinGW command line handling needs to +# be enabled by default. This check may be somewhat fragile due to the +# use of "findstring". +# +ifndef MINGW_IS_32BIT_ONLY +ifeq (,$(findstring w64-mingw32,$(PREFIX))) +MINGW_IS_32BIT_ONLY = 1 +endif +endif + +#### The directories where the zlib include and library files are located. +# +ZINCDIR = $(SRCDIR)/../compat/zlib +ZLIBDIR = $(SRCDIR)/../compat/zlib + +#### Make an attempt to detect if Fossil is being built for the x64 processor +# architecture. This check may be somewhat fragile due to "findstring". +# +ifndef X64 +ifneq (,$(findstring x86_64-w64-mingw32,$(PREFIX))) +X64 = 1 +endif +endif + +#### Determine if the optimized assembly routines provided with zlib should be +# used, taking into account whether zlib is actually enabled and the target +# processor architecture. +# +ifndef X64 +SSLCONFIG = mingw +ifndef FOSSIL_ENABLE_MINIZ +ZLIBCONFIG = LOC="-DASMV -DASMINF" OBJA="inffas86.o match.o" +LIBTARGETS = $(ZLIBDIR)/inffas86.o $(ZLIBDIR)/match.o +else +ZLIBCONFIG = +LIBTARGETS = +endif +else +SSLCONFIG = mingw64 +ZLIBCONFIG = +LIBTARGETS = +endif + +#### Disable creation of the OpenSSL shared libraries. Also, disable support +# for both SSLv2 and SSLv3 (i.e. thereby forcing the use of TLS). +# +SSLCONFIG += no-ssl2 no-ssl3 no-shared + +#### When using zlib, make sure that OpenSSL is configured to use the zlib +# that Fossil knows about (i.e. the one within the source tree). +# +ifndef FOSSIL_ENABLE_MINIZ +SSLCONFIG += --with-zlib-lib=$(PWD)/$(ZLIBDIR) --with-zlib-include=$(PWD)/$(ZLIBDIR) zlib +endif + +#### The directories where the OpenSSL include and library files are located. 
+# The recommended usage here is to use the Sysinternals junction tool +# to create a hard link between an "openssl-1.x" sub-directory of the +# Fossil source code directory and the target OpenSSL source directory. +# +OPENSSLDIR = $(SRCDIR)/../compat/openssl-1.0.2f +OPENSSLINCDIR = $(OPENSSLDIR)/include +OPENSSLLIBDIR = $(OPENSSLDIR) + +#### Either the directory where the Tcl library is installed or the Tcl +# source code directory resides (depending on the value of the macro +# FOSSIL_TCL_SOURCE). If this points to the Tcl install directory, +# this directory must have "include" and "lib" sub-directories. If +# this points to the Tcl source code directory, this directory must +# have "generic" and "win" sub-directories. The recommended usage +# here is to use the Sysinternals junction tool to create a hard +# link between a "tcl-8.x" sub-directory of the Fossil source code +# directory and the target Tcl directory. This removes the need to +# hard-code the necessary paths in this Makefile. +# +TCLDIR = $(SRCDIR)/../compat/tcl-8.6 + +#### The Tcl source code directory. This defaults to the same value as +# TCLDIR macro (above), which may not be correct. This value will +# only be used if the FOSSIL_TCL_SOURCE macro is defined. +# +TCLSRCDIR = $(TCLDIR) + +#### The Tcl include and library directories. These values will only be +# used if the FOSSIL_TCL_SOURCE macro is not defined. +# +TCLINCDIR = $(TCLDIR)/include +TCLLIBDIR = $(TCLDIR)/lib + +#### Tcl: Which Tcl library do we want to use (8.4, 8.5, 8.6, etc)? +# +ifdef FOSSIL_ENABLE_TCL_STUBS +ifndef FOSSIL_ENABLE_TCL_PRIVATE_STUBS +LIBTCL = -ltclstub86 +endif +TCLTARGET = libtclstub86.a +else +LIBTCL = -ltcl86 +TCLTARGET = binaries +endif + +#### C Compile and options for use in building executables that +# will run on the target platform. This is usually the same +# as BCC, unless you are cross-compiling. This C compiler builds +# the finished binary for fossil. The BCC compiler above is used +# for building intermediate code-generator tools. +# +TCC = $(PREFIX)gcc -Wall + +#### Add the necessary command line options to build with debugging +# symbols, if enabled. +# +ifdef FOSSIL_ENABLE_SYMBOLS +TCC += -g +else +TCC += -Os +endif + +#### When not using the miniz compression library, zlib is required. +# +ifndef FOSSIL_ENABLE_MINIZ +TCC += -L$(ZLIBDIR) -I$(ZINCDIR) +endif + +#### Compile resources for use in building executables that will run +# on the target platform. +# +RCC = $(PREFIX)windres -I$(SRCDIR) + +ifndef FOSSIL_ENABLE_MINIZ +RCC += -I$(ZINCDIR) +endif + +# With HTTPS support +ifdef FOSSIL_ENABLE_SSL +TCC += -L$(OPENSSLLIBDIR) -I$(OPENSSLINCDIR) +RCC += -I$(OPENSSLINCDIR) +endif + +# With Tcl support +ifdef FOSSIL_ENABLE_TCL +ifdef FOSSIL_TCL_SOURCE +TCC += -L$(TCLSRCDIR)/win -I$(TCLSRCDIR)/generic -I$(TCLSRCDIR)/win +RCC += -I$(TCLSRCDIR)/generic -I$(TCLSRCDIR)/win +else +TCC += -L$(TCLLIBDIR) -I$(TCLINCDIR) +RCC += -I$(TCLINCDIR) +endif +endif + +# With miniz (i.e. 
instead of zlib) +ifdef FOSSIL_ENABLE_MINIZ +TCC += -DFOSSIL_ENABLE_MINIZ=1 +RCC += -DFOSSIL_ENABLE_MINIZ=1 +endif + +# With MinGW command line handling workaround +ifdef MINGW_IS_32BIT_ONLY +TCC += -DBROKEN_MINGW_CMDLINE=1 +RCC += -DBROKEN_MINGW_CMDLINE=1 +endif + +# With HTTPS support +ifdef FOSSIL_ENABLE_SSL +TCC += -DFOSSIL_ENABLE_SSL=1 +RCC += -DFOSSIL_ENABLE_SSL=1 +endif + +# With relative paths in external diff/gdiff +ifdef FOSSIL_ENABLE_EXEC_REL_PATHS +TCC += -DFOSSIL_ENABLE_EXEC_REL_PATHS=1 +RCC += -DFOSSIL_ENABLE_EXEC_REL_PATHS=1 +endif + +# With legacy treatment of mv/rm +ifdef FOSSIL_ENABLE_LEGACY_MV_RM +TCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1 +RCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1 +endif + +# With TH1 embedded docs support +ifdef FOSSIL_ENABLE_TH1_DOCS +TCC += -DFOSSIL_ENABLE_TH1_DOCS=1 +RCC += -DFOSSIL_ENABLE_TH1_DOCS=1 +endif + +# With TH1 hook support +ifdef FOSSIL_ENABLE_TH1_HOOKS +TCC += -DFOSSIL_ENABLE_TH1_HOOKS=1 +RCC += -DFOSSIL_ENABLE_TH1_HOOKS=1 +endif + +# With Tcl support +ifdef FOSSIL_ENABLE_TCL +TCC += -DFOSSIL_ENABLE_TCL=1 +RCC += -DFOSSIL_ENABLE_TCL=1 +# Either statically linked or via stubs +ifdef FOSSIL_ENABLE_TCL_STUBS +TCC += -DFOSSIL_ENABLE_TCL_STUBS=1 -DUSE_TCL_STUBS +RCC += -DFOSSIL_ENABLE_TCL_STUBS=1 -DUSE_TCL_STUBS +ifdef FOSSIL_ENABLE_TCL_PRIVATE_STUBS +TCC += -DFOSSIL_ENABLE_TCL_PRIVATE_STUBS=1 +RCC += -DFOSSIL_ENABLE_TCL_PRIVATE_STUBS=1 +endif +else +TCC += -DSTATIC_BUILD +RCC += -DSTATIC_BUILD +endif +endif + +# With JSON support +ifdef FOSSIL_ENABLE_JSON +TCC += -DFOSSIL_ENABLE_JSON=1 +RCC += -DFOSSIL_ENABLE_JSON=1 +endif + +#### The option -static has no effect on MinGW(-w64), only dynamic +# executables can be built when linking with MSVCRT. OpenSSL +# (optional) and zlib (required) however are always linked in +# statically. Therefore, the FOSSIL_DYNAMIC_BUILD option does +# not really apply to MinGW (i.e. since ALL external libraries +# are NOT linked dynamically). +# +# LIB = -static + +#### MinGW: If available, use the Unicode capable runtime startup code. +# +ifndef MINGW_IS_32BIT_ONLY +LIB += -municode +endif + +#### SQLite: If enabled, use the system SQLite library. +# +ifdef USE_SYSTEM_SQLITE +LIB += -lsqlite3 +endif + +#### OpenSSL: Add the necessary libraries required, if enabled. +# +ifdef FOSSIL_ENABLE_SSL +LIB += -lssl -lcrypto -lgdi32 +endif + +#### Tcl: Add the necessary libraries required, if enabled. +# +ifdef FOSSIL_ENABLE_TCL +LIB += $(LIBTCL) +endif + +#### Extra arguments for linking the finished binary. Fossil needs +# to link against the Z-Lib compression library. There are no +# other mandatory dependencies. +# +LIB += -lmingwex + +#### When not using the miniz compression library, zlib is required. +# +ifndef FOSSIL_ENABLE_MINIZ +LIB += -lz +endif + +#### These libraries MUST appear in the same order as they do for Tcl +# or linking with it will not work (exact reason unknown). +# +ifdef FOSSIL_ENABLE_TCL +ifdef FOSSIL_ENABLE_TCL_STUBS +LIB += -lkernel32 -lws2_32 +else +LIB += -lnetapi32 -lkernel32 -luser32 -ladvapi32 -lws2_32 +endif +else +LIB += -lkernel32 -lws2_32 +endif + +#### Tcl shell for use in running the fossil test suite. This is only +# used for testing. +# +TCLSH = tclsh + +#### Nullsoft installer MakeNSIS location +# +MAKENSIS = "$(PROGRAMFILES)\NSIS\MakeNSIS.exe" + +#### Inno Setup executable location +# +INNOSETUP = "$(PROGRAMFILES)\Inno Setup 5\ISCC.exe" + +#### Include a configuration file that can override any one of these settings. 
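+#
+# For example, a config.w32 in the directory from which make is invoked
+# could carry overrides such as (illustrative only; any of the option
+# variables documented above may be set this way):
+#
+#     FOSSIL_ENABLE_JSON = 1
+#     FOSSIL_ENABLE_SSL = 1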
+# +-include config.w32 + +# STOP HERE +# You should not need to change anything below this line +#-------------------------------------------------------- +XTCC = $(TCC) $(CFLAGS) -I. -I$(SRCDIR) + +SRC = \ + $(SRCDIR)/add.c \ + $(SRCDIR)/allrepo.c \ + $(SRCDIR)/attach.c \ + $(SRCDIR)/bag.c \ + $(SRCDIR)/bisect.c \ + $(SRCDIR)/blob.c \ + $(SRCDIR)/branch.c \ + $(SRCDIR)/browse.c \ + $(SRCDIR)/builtin.c \ + $(SRCDIR)/bundle.c \ + $(SRCDIR)/cache.c \ + $(SRCDIR)/captcha.c \ + $(SRCDIR)/cgi.c \ + $(SRCDIR)/checkin.c \ + $(SRCDIR)/checkout.c \ + $(SRCDIR)/clearsign.c \ + $(SRCDIR)/clone.c \ + $(SRCDIR)/comformat.c \ + $(SRCDIR)/configure.c \ + $(SRCDIR)/content.c \ + $(SRCDIR)/db.c \ + $(SRCDIR)/delta.c \ + $(SRCDIR)/deltacmd.c \ + $(SRCDIR)/descendants.c \ + $(SRCDIR)/diff.c \ + $(SRCDIR)/diffcmd.c \ + $(SRCDIR)/doc.c \ + $(SRCDIR)/encode.c \ + $(SRCDIR)/event.c \ + $(SRCDIR)/export.c \ + $(SRCDIR)/file.c \ + $(SRCDIR)/finfo.c \ + $(SRCDIR)/foci.c \ + $(SRCDIR)/fusefs.c \ + $(SRCDIR)/glob.c \ + $(SRCDIR)/graph.c \ + $(SRCDIR)/gzip.c \ + $(SRCDIR)/http.c \ + $(SRCDIR)/http_socket.c \ + $(SRCDIR)/http_ssl.c \ + $(SRCDIR)/http_transport.c \ + $(SRCDIR)/import.c \ + $(SRCDIR)/info.c \ + $(SRCDIR)/json.c \ + $(SRCDIR)/json_artifact.c \ + $(SRCDIR)/json_branch.c \ + $(SRCDIR)/json_config.c \ + $(SRCDIR)/json_diff.c \ + $(SRCDIR)/json_dir.c \ + $(SRCDIR)/json_finfo.c \ + $(SRCDIR)/json_login.c \ + $(SRCDIR)/json_query.c \ + $(SRCDIR)/json_report.c \ + $(SRCDIR)/json_status.c \ + $(SRCDIR)/json_tag.c \ + $(SRCDIR)/json_timeline.c \ + $(SRCDIR)/json_user.c \ + $(SRCDIR)/json_wiki.c \ + $(SRCDIR)/leaf.c \ + $(SRCDIR)/loadctrl.c \ + $(SRCDIR)/login.c \ + $(SRCDIR)/lookslike.c \ + $(SRCDIR)/main.c \ + $(SRCDIR)/manifest.c \ + $(SRCDIR)/markdown.c \ + $(SRCDIR)/markdown_html.c \ + $(SRCDIR)/md5.c \ + $(SRCDIR)/merge.c \ + $(SRCDIR)/merge3.c \ + $(SRCDIR)/moderate.c \ + $(SRCDIR)/name.c \ + $(SRCDIR)/path.c \ + $(SRCDIR)/piechart.c \ + $(SRCDIR)/pivot.c \ + $(SRCDIR)/popen.c \ + $(SRCDIR)/pqueue.c \ + $(SRCDIR)/printf.c \ + $(SRCDIR)/publish.c \ + $(SRCDIR)/purge.c \ + $(SRCDIR)/rebuild.c \ + $(SRCDIR)/regexp.c \ + $(SRCDIR)/report.c \ + $(SRCDIR)/rss.c \ + $(SRCDIR)/schema.c \ + $(SRCDIR)/search.c \ + $(SRCDIR)/setup.c \ + $(SRCDIR)/sha1.c \ + $(SRCDIR)/shun.c \ + $(SRCDIR)/sitemap.c \ + $(SRCDIR)/skins.c \ + $(SRCDIR)/sqlcmd.c \ + $(SRCDIR)/stash.c \ + $(SRCDIR)/stat.c \ + $(SRCDIR)/statrep.c \ + $(SRCDIR)/style.c \ + $(SRCDIR)/sync.c \ + $(SRCDIR)/tag.c \ + $(SRCDIR)/tar.c \ + $(SRCDIR)/th_main.c \ + $(SRCDIR)/timeline.c \ + $(SRCDIR)/tkt.c \ + $(SRCDIR)/tktsetup.c \ + $(SRCDIR)/undo.c \ + $(SRCDIR)/unicode.c \ + $(SRCDIR)/update.c \ + $(SRCDIR)/url.c \ + $(SRCDIR)/user.c \ + $(SRCDIR)/utf8.c \ + $(SRCDIR)/util.c \ + $(SRCDIR)/verify.c \ + $(SRCDIR)/vfile.c \ + $(SRCDIR)/wiki.c \ + $(SRCDIR)/wikiformat.c \ + $(SRCDIR)/winfile.c \ + $(SRCDIR)/winhttp.c \ + $(SRCDIR)/wysiwyg.c \ + $(SRCDIR)/xfer.c \ + $(SRCDIR)/xfersetup.c \ + $(SRCDIR)/zip.c + +EXTRA_FILES = \ + $(SRCDIR)/../skins/aht/details.txt \ + $(SRCDIR)/../skins/black_and_white/css.txt \ + $(SRCDIR)/../skins/black_and_white/details.txt \ + $(SRCDIR)/../skins/black_and_white/footer.txt \ + $(SRCDIR)/../skins/black_and_white/header.txt \ + $(SRCDIR)/../skins/blitz/css.txt \ + $(SRCDIR)/../skins/blitz/details.txt \ + $(SRCDIR)/../skins/blitz/footer.txt \ + $(SRCDIR)/../skins/blitz/header.txt \ + $(SRCDIR)/../skins/blitz/ticket.txt \ + $(SRCDIR)/../skins/blitz_no_logo/css.txt \ + $(SRCDIR)/../skins/blitz_no_logo/details.txt \ + 
$(SRCDIR)/../skins/blitz_no_logo/footer.txt \ + $(SRCDIR)/../skins/blitz_no_logo/header.txt \ + $(SRCDIR)/../skins/blitz_no_logo/ticket.txt \ + $(SRCDIR)/../skins/default/css.txt \ + $(SRCDIR)/../skins/default/details.txt \ + $(SRCDIR)/../skins/default/footer.txt \ + $(SRCDIR)/../skins/default/header.txt \ + $(SRCDIR)/../skins/eagle/css.txt \ + $(SRCDIR)/../skins/eagle/details.txt \ + $(SRCDIR)/../skins/eagle/footer.txt \ + $(SRCDIR)/../skins/eagle/header.txt \ + $(SRCDIR)/../skins/enhanced1/css.txt \ + $(SRCDIR)/../skins/enhanced1/details.txt \ + $(SRCDIR)/../skins/enhanced1/footer.txt \ + $(SRCDIR)/../skins/enhanced1/header.txt \ + $(SRCDIR)/../skins/khaki/css.txt \ + $(SRCDIR)/../skins/khaki/details.txt \ + $(SRCDIR)/../skins/khaki/footer.txt \ + $(SRCDIR)/../skins/khaki/header.txt \ + $(SRCDIR)/../skins/original/css.txt \ + $(SRCDIR)/../skins/original/details.txt \ + $(SRCDIR)/../skins/original/footer.txt \ + $(SRCDIR)/../skins/original/header.txt \ + $(SRCDIR)/../skins/plain_gray/css.txt \ + $(SRCDIR)/../skins/plain_gray/details.txt \ + $(SRCDIR)/../skins/plain_gray/footer.txt \ + $(SRCDIR)/../skins/plain_gray/header.txt \ + $(SRCDIR)/../skins/rounded1/css.txt \ + $(SRCDIR)/../skins/rounded1/details.txt \ + $(SRCDIR)/../skins/rounded1/footer.txt \ + $(SRCDIR)/../skins/rounded1/header.txt \ + $(SRCDIR)/../skins/xekri/css.txt \ + $(SRCDIR)/../skins/xekri/details.txt \ + $(SRCDIR)/../skins/xekri/footer.txt \ + $(SRCDIR)/../skins/xekri/header.txt \ + $(SRCDIR)/diff.tcl \ + $(SRCDIR)/markdown.md + +TRANS_SRC = \ + $(OBJDIR)/add_.c \ + $(OBJDIR)/allrepo_.c \ + $(OBJDIR)/attach_.c \ + $(OBJDIR)/bag_.c \ + $(OBJDIR)/bisect_.c \ + $(OBJDIR)/blob_.c \ + $(OBJDIR)/branch_.c \ + $(OBJDIR)/browse_.c \ + $(OBJDIR)/builtin_.c \ + $(OBJDIR)/bundle_.c \ + $(OBJDIR)/cache_.c \ + $(OBJDIR)/captcha_.c \ + $(OBJDIR)/cgi_.c \ + $(OBJDIR)/checkin_.c \ + $(OBJDIR)/checkout_.c \ + $(OBJDIR)/clearsign_.c \ + $(OBJDIR)/clone_.c \ + $(OBJDIR)/comformat_.c \ + $(OBJDIR)/configure_.c \ + $(OBJDIR)/content_.c \ + $(OBJDIR)/db_.c \ + $(OBJDIR)/delta_.c \ + $(OBJDIR)/deltacmd_.c \ + $(OBJDIR)/descendants_.c \ + $(OBJDIR)/diff_.c \ + $(OBJDIR)/diffcmd_.c \ + $(OBJDIR)/doc_.c \ + $(OBJDIR)/encode_.c \ + $(OBJDIR)/event_.c \ + $(OBJDIR)/export_.c \ + $(OBJDIR)/file_.c \ + $(OBJDIR)/finfo_.c \ + $(OBJDIR)/foci_.c \ + $(OBJDIR)/fusefs_.c \ + $(OBJDIR)/glob_.c \ + $(OBJDIR)/graph_.c \ + $(OBJDIR)/gzip_.c \ + $(OBJDIR)/http_.c \ + $(OBJDIR)/http_socket_.c \ + $(OBJDIR)/http_ssl_.c \ + $(OBJDIR)/http_transport_.c \ + $(OBJDIR)/import_.c \ + $(OBJDIR)/info_.c \ + $(OBJDIR)/json_.c \ + $(OBJDIR)/json_artifact_.c \ + $(OBJDIR)/json_branch_.c \ + $(OBJDIR)/json_config_.c \ + $(OBJDIR)/json_diff_.c \ + $(OBJDIR)/json_dir_.c \ + $(OBJDIR)/json_finfo_.c \ + $(OBJDIR)/json_login_.c \ + $(OBJDIR)/json_query_.c \ + $(OBJDIR)/json_report_.c \ + $(OBJDIR)/json_status_.c \ + $(OBJDIR)/json_tag_.c \ + $(OBJDIR)/json_timeline_.c \ + $(OBJDIR)/json_user_.c \ + $(OBJDIR)/json_wiki_.c \ + $(OBJDIR)/leaf_.c \ + $(OBJDIR)/loadctrl_.c \ + $(OBJDIR)/login_.c \ + $(OBJDIR)/lookslike_.c \ + $(OBJDIR)/main_.c \ + $(OBJDIR)/manifest_.c \ + $(OBJDIR)/markdown_.c \ + $(OBJDIR)/markdown_html_.c \ + $(OBJDIR)/md5_.c \ + $(OBJDIR)/merge_.c \ + $(OBJDIR)/merge3_.c \ + $(OBJDIR)/moderate_.c \ + $(OBJDIR)/name_.c \ + $(OBJDIR)/path_.c \ + $(OBJDIR)/piechart_.c \ + $(OBJDIR)/pivot_.c \ + $(OBJDIR)/popen_.c \ + $(OBJDIR)/pqueue_.c \ + $(OBJDIR)/printf_.c \ + $(OBJDIR)/publish_.c \ + $(OBJDIR)/purge_.c \ + $(OBJDIR)/rebuild_.c \ + $(OBJDIR)/regexp_.c \ + 
$(OBJDIR)/report_.c \ + $(OBJDIR)/rss_.c \ + $(OBJDIR)/schema_.c \ + $(OBJDIR)/search_.c \ + $(OBJDIR)/setup_.c \ + $(OBJDIR)/sha1_.c \ + $(OBJDIR)/shun_.c \ + $(OBJDIR)/sitemap_.c \ + $(OBJDIR)/skins_.c \ + $(OBJDIR)/sqlcmd_.c \ + $(OBJDIR)/stash_.c \ + $(OBJDIR)/stat_.c \ + $(OBJDIR)/statrep_.c \ + $(OBJDIR)/style_.c \ + $(OBJDIR)/sync_.c \ + $(OBJDIR)/tag_.c \ + $(OBJDIR)/tar_.c \ + $(OBJDIR)/th_main_.c \ + $(OBJDIR)/timeline_.c \ + $(OBJDIR)/tkt_.c \ + $(OBJDIR)/tktsetup_.c \ + $(OBJDIR)/undo_.c \ + $(OBJDIR)/unicode_.c \ + $(OBJDIR)/update_.c \ + $(OBJDIR)/url_.c \ + $(OBJDIR)/user_.c \ + $(OBJDIR)/utf8_.c \ + $(OBJDIR)/util_.c \ + $(OBJDIR)/verify_.c \ + $(OBJDIR)/vfile_.c \ + $(OBJDIR)/wiki_.c \ + $(OBJDIR)/wikiformat_.c \ + $(OBJDIR)/winfile_.c \ + $(OBJDIR)/winhttp_.c \ + $(OBJDIR)/wysiwyg_.c \ + $(OBJDIR)/xfer_.c \ + $(OBJDIR)/xfersetup_.c \ + $(OBJDIR)/zip_.c + +OBJ = \ + $(OBJDIR)/add.o \ + $(OBJDIR)/allrepo.o \ + $(OBJDIR)/attach.o \ + $(OBJDIR)/bag.o \ + $(OBJDIR)/bisect.o \ + $(OBJDIR)/blob.o \ + $(OBJDIR)/branch.o \ + $(OBJDIR)/browse.o \ + $(OBJDIR)/builtin.o \ + $(OBJDIR)/bundle.o \ + $(OBJDIR)/cache.o \ + $(OBJDIR)/captcha.o \ + $(OBJDIR)/cgi.o \ + $(OBJDIR)/checkin.o \ + $(OBJDIR)/checkout.o \ + $(OBJDIR)/clearsign.o \ + $(OBJDIR)/clone.o \ + $(OBJDIR)/comformat.o \ + $(OBJDIR)/configure.o \ + $(OBJDIR)/content.o \ + $(OBJDIR)/db.o \ + $(OBJDIR)/delta.o \ + $(OBJDIR)/deltacmd.o \ + $(OBJDIR)/descendants.o \ + $(OBJDIR)/diff.o \ + $(OBJDIR)/diffcmd.o \ + $(OBJDIR)/doc.o \ + $(OBJDIR)/encode.o \ + $(OBJDIR)/event.o \ + $(OBJDIR)/export.o \ + $(OBJDIR)/file.o \ + $(OBJDIR)/finfo.o \ + $(OBJDIR)/foci.o \ + $(OBJDIR)/fusefs.o \ + $(OBJDIR)/glob.o \ + $(OBJDIR)/graph.o \ + $(OBJDIR)/gzip.o \ + $(OBJDIR)/http.o \ + $(OBJDIR)/http_socket.o \ + $(OBJDIR)/http_ssl.o \ + $(OBJDIR)/http_transport.o \ + $(OBJDIR)/import.o \ + $(OBJDIR)/info.o \ + $(OBJDIR)/json.o \ + $(OBJDIR)/json_artifact.o \ + $(OBJDIR)/json_branch.o \ + $(OBJDIR)/json_config.o \ + $(OBJDIR)/json_diff.o \ + $(OBJDIR)/json_dir.o \ + $(OBJDIR)/json_finfo.o \ + $(OBJDIR)/json_login.o \ + $(OBJDIR)/json_query.o \ + $(OBJDIR)/json_report.o \ + $(OBJDIR)/json_status.o \ + $(OBJDIR)/json_tag.o \ + $(OBJDIR)/json_timeline.o \ + $(OBJDIR)/json_user.o \ + $(OBJDIR)/json_wiki.o \ + $(OBJDIR)/leaf.o \ + $(OBJDIR)/loadctrl.o \ + $(OBJDIR)/login.o \ + $(OBJDIR)/lookslike.o \ + $(OBJDIR)/main.o \ + $(OBJDIR)/manifest.o \ + $(OBJDIR)/markdown.o \ + $(OBJDIR)/markdown_html.o \ + $(OBJDIR)/md5.o \ + $(OBJDIR)/merge.o \ + $(OBJDIR)/merge3.o \ + $(OBJDIR)/moderate.o \ + $(OBJDIR)/name.o \ + $(OBJDIR)/path.o \ + $(OBJDIR)/piechart.o \ + $(OBJDIR)/pivot.o \ + $(OBJDIR)/popen.o \ + $(OBJDIR)/pqueue.o \ + $(OBJDIR)/printf.o \ + $(OBJDIR)/publish.o \ + $(OBJDIR)/purge.o \ + $(OBJDIR)/rebuild.o \ + $(OBJDIR)/regexp.o \ + $(OBJDIR)/report.o \ + $(OBJDIR)/rss.o \ + $(OBJDIR)/schema.o \ + $(OBJDIR)/search.o \ + $(OBJDIR)/setup.o \ + $(OBJDIR)/sha1.o \ + $(OBJDIR)/shun.o \ + $(OBJDIR)/sitemap.o \ + $(OBJDIR)/skins.o \ + $(OBJDIR)/sqlcmd.o \ + $(OBJDIR)/stash.o \ + $(OBJDIR)/stat.o \ + $(OBJDIR)/statrep.o \ + $(OBJDIR)/style.o \ + $(OBJDIR)/sync.o \ + $(OBJDIR)/tag.o \ + $(OBJDIR)/tar.o \ + $(OBJDIR)/th_main.o \ + $(OBJDIR)/timeline.o \ + $(OBJDIR)/tkt.o \ + $(OBJDIR)/tktsetup.o \ + $(OBJDIR)/undo.o \ + $(OBJDIR)/unicode.o \ + $(OBJDIR)/update.o \ + $(OBJDIR)/url.o \ + $(OBJDIR)/user.o \ + $(OBJDIR)/utf8.o \ + $(OBJDIR)/util.o \ + $(OBJDIR)/verify.o \ + $(OBJDIR)/vfile.o \ + $(OBJDIR)/wiki.o \ + $(OBJDIR)/wikiformat.o \ + $(OBJDIR)/winfile.o \ 
+ $(OBJDIR)/winhttp.o \ + $(OBJDIR)/wysiwyg.o \ + $(OBJDIR)/xfer.o \ + $(OBJDIR)/xfersetup.o \ + $(OBJDIR)/zip.o + +APPNAME = fossil.exe +APPTARGETS = + +#### If the USE_WINDOWS variable exists, it is assumed that we are building +# inside of a Windows-style shell; otherwise, it is assumed that we are +# building inside of a Unix-style shell. Note that the "move" command is +# broken when attempting to use it from the Windows shell via MinGW make +# because the SHELL variable is only used for certain commands that are +# recognized internally by make. +# +ifdef USE_WINDOWS +TRANSLATE = $(subst /,\,$(OBJDIR)/translate.exe) +MAKEHEADERS = $(subst /,\,$(OBJDIR)/makeheaders.exe) +MKINDEX = $(subst /,\,$(OBJDIR)/mkindex.exe) +MKBUILTIN = $(subst /,\,$(OBJDIR)/mkbuiltin.exe) +MKVERSION = $(subst /,\,$(OBJDIR)/mkversion.exe) +CODECHECK1 = $(subst /,\,$(OBJDIR)/codecheck1.exe) +CAT = type +CP = copy +GREP = find +MV = copy +RM = del /Q +MKDIR = -mkdir +RMDIR = rmdir /S /Q +else +TRANSLATE = $(OBJDIR)/translate.exe +MAKEHEADERS = $(OBJDIR)/makeheaders.exe +MKINDEX = $(OBJDIR)/mkindex.exe +MKBUILTIN = $(OBJDIR)/mkbuiltin.exe +MKVERSION = $(OBJDIR)/mkversion.exe +CODECHECK1 = $(OBJDIR)/codecheck1.exe +CAT = cat +CP = cp +GREP = grep +MV = mv +RM = rm -f +MKDIR = -mkdir -p +RMDIR = rm -rf +endif + +all: $(OBJDIR) $(APPNAME) + +$(OBJDIR)/fossil.o: $(SRCDIR)/../win/fossil.rc $(OBJDIR)/VERSION.h +ifdef USE_WINDOWS + $(CAT) $(subst /,\,$(SRCDIR)\miniz.c) | $(GREP) "define MZ_VERSION" > $(subst /,\,$(OBJDIR)\minizver.h) + $(CP) $(subst /,\,$(SRCDIR)\..\win\fossil.rc) $(subst /,\,$(OBJDIR)) + $(CP) $(subst /,\,$(SRCDIR)\..\win\fossil.ico) $(subst /,\,$(OBJDIR)) + $(CP) $(subst /,\,$(SRCDIR)\..\win\fossil.exe.manifest) $(subst /,\,$(OBJDIR)) +else + $(CAT) $(SRCDIR)/miniz.c | $(GREP) "define MZ_VERSION" > $(OBJDIR)/minizver.h + $(CP) $(SRCDIR)/../win/fossil.rc $(OBJDIR) + $(CP) $(SRCDIR)/../win/fossil.ico $(OBJDIR) + $(CP) $(SRCDIR)/../win/fossil.exe.manifest $(OBJDIR) +endif + $(RCC) $(OBJDIR)/fossil.rc -o $(OBJDIR)/fossil.o + +install: $(OBJDIR) $(APPNAME) +ifdef USE_WINDOWS + $(MKDIR) $(subst /,\,$(INSTALLDIR)) + $(MV) $(subst /,\,$(APPNAME)) $(subst /,\,$(INSTALLDIR)) +else + $(MKDIR) $(INSTALLDIR) + $(MV) $(APPNAME) $(INSTALLDIR) +endif + +$(OBJDIR): +ifdef USE_WINDOWS + $(MKDIR) $(subst /,\,$(OBJDIR)) +else + $(MKDIR) $(OBJDIR) +endif + +$(TRANSLATE): $(SRCDIR)/translate.c + $(BCC) -o $@ $(SRCDIR)/translate.c + +$(MAKEHEADERS): $(SRCDIR)/makeheaders.c + $(BCC) -o $@ $(SRCDIR)/makeheaders.c + +$(MKINDEX): $(SRCDIR)/mkindex.c + $(BCC) -o $@ $(SRCDIR)/mkindex.c + +$(MKBUILTIN): $(SRCDIR)/mkbuiltin.c + $(BCC) -o $@ $(SRCDIR)/mkbuiltin.c + +$(MKVERSION): $(SRCDIR)/mkversion.c + $(BCC) -o $@ $(SRCDIR)/mkversion.c + +$(CODECHECK1): $(SRCDIR)/codecheck1.c + $(BCC) -o $@ $(SRCDIR)/codecheck1.c + +# WARNING. DANGER. Running the test suite modifies the repository the +# build is done from, i.e. the checkout belongs to. Do not sync/push +# the repository after running the tests. +test: $(OBJDIR) $(APPNAME) + $(TCLSH) $(SRCDIR)/../test/tester.tcl $(APPNAME) + +$(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(MKVERSION) + $(MKVERSION) $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(SRCDIR)/../VERSION >$@ + +# The USE_SYSTEM_SQLITE variable may be undefined, set to 0, or set +# to 1. If it is set to 1, then there is no need to build or link +# the sqlite3.o object. Instead, the system SQLite will be linked +# using -lsqlite3. 
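#
# As a rough usage sketch (assuming a MinGW shell at the top of the source
# tree and a system libsqlite3 visible to the linker), the variable can
# simply be passed on the make command line:
#
#   make -f win/Makefile.mingw USE_SYSTEM_SQLITE=1
#
# Leaving it undefined (or setting it to 0) builds the bundled sqlite3.o
# instead, as the SQLITE3_OBJ.* selection just below shows.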
+SQLITE3_OBJ.1 = +SQLITE3_OBJ.0 = $(OBJDIR)/sqlite3.o +SQLITE3_OBJ. = $(SQLITE3_OBJ.0) + +# The FOSSIL_ENABLE_MINIZ variable may be undefined, set to 0, or +# set to 1. If it is set to 1, the miniz library included in the +# source tree should be used; otherwise, it should not. +MINIZ_OBJ.0 = +MINIZ_OBJ.1 = $(OBJDIR)/miniz.o +MINIZ_OBJ. = $(MINIZ_OBJ.0) + + +EXTRAOBJ = \ + $(SQLITE3_OBJ.$(USE_SYSTEM_SQLITE)) \ + $(MINIZ_OBJ.$(FOSSIL_ENABLE_MINIZ)) \ + $(OBJDIR)/shell.o \ + $(OBJDIR)/th.o \ + $(OBJDIR)/th_lang.o \ + $(OBJDIR)/th_tcl.o \ + $(OBJDIR)/cson_amalgamation.o + + +zlib: + $(MAKE) -C $(ZLIBDIR) PREFIX=$(PREFIX) $(ZLIBCONFIG) -f win32/Makefile.gcc libz.a + +clean-zlib: + $(MAKE) -C $(ZLIBDIR) PREFIX=$(PREFIX) -f win32/Makefile.gcc clean + +$(ZLIBDIR)/inffas86.o: + $(TCC) -c -o $@ -DASMINF -I$(ZLIBDIR) -O3 $(ZLIBDIR)/contrib/inflate86/inffas86.c + +$(ZLIBDIR)/match.o: + $(TCC) -c -o $@ -DASMV $(ZLIBDIR)/contrib/asm686/match.S + + +ifndef FOSSIL_ENABLE_MINIZ +LIBTARGETS += zlib +endif + +openssl: $(LIBTARGETS) + cd $(OPENSSLLIBDIR);./Configure --cross-compile-prefix=$(PREFIX) $(SSLCONFIG) + $(MAKE) -C $(OPENSSLLIBDIR) build_libs + +clean-openssl: + $(MAKE) -C $(OPENSSLLIBDIR) clean + +tcl: + cd $(TCLSRCDIR)/win;./configure + $(MAKE) -C $(TCLSRCDIR)/win $(TCLTARGET) + +clean-tcl: + $(MAKE) -C $(TCLSRCDIR)/win distclean + +APPTARGETS += $(LIBTARGETS) + +ifdef FOSSIL_BUILD_SSL +APPTARGETS += openssl +endif + +$(APPNAME): $(APPTARGETS) $(OBJDIR)/headers $(CODECHECK1) $(OBJ) $(EXTRAOBJ) $(OBJDIR)/fossil.o + $(CODECHECK1) $(TRANS_SRC) + $(TCC) -o $@ $(OBJ) $(EXTRAOBJ) $(OBJDIR)/fossil.o $(LIB) + +# This rule prevents make from using its default rules to try build +# an executable named "manifest" out of the file named "manifest.c" +# +$(SRCDIR)/../manifest: + # noop + +clean: +ifdef USE_WINDOWS + $(RM) $(subst /,\,$(APPNAME)) + $(RMDIR) $(subst /,\,$(OBJDIR)) +else + $(RM) $(APPNAME) + $(RMDIR) $(OBJDIR) +endif + +setup: $(OBJDIR) $(APPNAME) + $(MAKENSIS) ./setup/fossil.nsi + +innosetup: $(OBJDIR) $(APPNAME) + $(INNOSETUP) ./setup/fossil.iss -DAppVersion=$(shell $(CAT) ./VERSION) + +$(OBJDIR)/page_index.h: $(TRANS_SRC) $(MKINDEX) + $(MKINDEX) $(TRANS_SRC) >$@ + +$(OBJDIR)/builtin_data.h: $(MKBUILTIN) $(EXTRA_FILES) + $(MKBUILTIN) --prefix $(SRCDIR)/ $(EXTRA_FILES) >$@ + +$(OBJDIR)/headers: $(OBJDIR)/page_index.h $(OBJDIR)/builtin_data.h $(MAKEHEADERS) $(OBJDIR)/VERSION.h + $(MAKEHEADERS) $(OBJDIR)/add_.c:$(OBJDIR)/add.h \ + $(OBJDIR)/allrepo_.c:$(OBJDIR)/allrepo.h \ + $(OBJDIR)/attach_.c:$(OBJDIR)/attach.h \ + $(OBJDIR)/bag_.c:$(OBJDIR)/bag.h \ + $(OBJDIR)/bisect_.c:$(OBJDIR)/bisect.h \ + $(OBJDIR)/blob_.c:$(OBJDIR)/blob.h \ + $(OBJDIR)/branch_.c:$(OBJDIR)/branch.h \ + $(OBJDIR)/browse_.c:$(OBJDIR)/browse.h \ + $(OBJDIR)/builtin_.c:$(OBJDIR)/builtin.h \ + $(OBJDIR)/bundle_.c:$(OBJDIR)/bundle.h \ + $(OBJDIR)/cache_.c:$(OBJDIR)/cache.h \ + $(OBJDIR)/captcha_.c:$(OBJDIR)/captcha.h \ + $(OBJDIR)/cgi_.c:$(OBJDIR)/cgi.h \ + $(OBJDIR)/checkin_.c:$(OBJDIR)/checkin.h \ + $(OBJDIR)/checkout_.c:$(OBJDIR)/checkout.h \ + $(OBJDIR)/clearsign_.c:$(OBJDIR)/clearsign.h \ + $(OBJDIR)/clone_.c:$(OBJDIR)/clone.h \ + $(OBJDIR)/comformat_.c:$(OBJDIR)/comformat.h \ + $(OBJDIR)/configure_.c:$(OBJDIR)/configure.h \ + $(OBJDIR)/content_.c:$(OBJDIR)/content.h \ + $(OBJDIR)/db_.c:$(OBJDIR)/db.h \ + $(OBJDIR)/delta_.c:$(OBJDIR)/delta.h \ + $(OBJDIR)/deltacmd_.c:$(OBJDIR)/deltacmd.h \ + $(OBJDIR)/descendants_.c:$(OBJDIR)/descendants.h \ + $(OBJDIR)/diff_.c:$(OBJDIR)/diff.h \ + $(OBJDIR)/diffcmd_.c:$(OBJDIR)/diffcmd.h \ + 
$(OBJDIR)/doc_.c:$(OBJDIR)/doc.h \ + $(OBJDIR)/encode_.c:$(OBJDIR)/encode.h \ + $(OBJDIR)/event_.c:$(OBJDIR)/event.h \ + $(OBJDIR)/export_.c:$(OBJDIR)/export.h \ + $(OBJDIR)/file_.c:$(OBJDIR)/file.h \ + $(OBJDIR)/finfo_.c:$(OBJDIR)/finfo.h \ + $(OBJDIR)/foci_.c:$(OBJDIR)/foci.h \ + $(OBJDIR)/fusefs_.c:$(OBJDIR)/fusefs.h \ + $(OBJDIR)/glob_.c:$(OBJDIR)/glob.h \ + $(OBJDIR)/graph_.c:$(OBJDIR)/graph.h \ + $(OBJDIR)/gzip_.c:$(OBJDIR)/gzip.h \ + $(OBJDIR)/http_.c:$(OBJDIR)/http.h \ + $(OBJDIR)/http_socket_.c:$(OBJDIR)/http_socket.h \ + $(OBJDIR)/http_ssl_.c:$(OBJDIR)/http_ssl.h \ + $(OBJDIR)/http_transport_.c:$(OBJDIR)/http_transport.h \ + $(OBJDIR)/import_.c:$(OBJDIR)/import.h \ + $(OBJDIR)/info_.c:$(OBJDIR)/info.h \ + $(OBJDIR)/json_.c:$(OBJDIR)/json.h \ + $(OBJDIR)/json_artifact_.c:$(OBJDIR)/json_artifact.h \ + $(OBJDIR)/json_branch_.c:$(OBJDIR)/json_branch.h \ + $(OBJDIR)/json_config_.c:$(OBJDIR)/json_config.h \ + $(OBJDIR)/json_diff_.c:$(OBJDIR)/json_diff.h \ + $(OBJDIR)/json_dir_.c:$(OBJDIR)/json_dir.h \ + $(OBJDIR)/json_finfo_.c:$(OBJDIR)/json_finfo.h \ + $(OBJDIR)/json_login_.c:$(OBJDIR)/json_login.h \ + $(OBJDIR)/json_query_.c:$(OBJDIR)/json_query.h \ + $(OBJDIR)/json_report_.c:$(OBJDIR)/json_report.h \ + $(OBJDIR)/json_status_.c:$(OBJDIR)/json_status.h \ + $(OBJDIR)/json_tag_.c:$(OBJDIR)/json_tag.h \ + $(OBJDIR)/json_timeline_.c:$(OBJDIR)/json_timeline.h \ + $(OBJDIR)/json_user_.c:$(OBJDIR)/json_user.h \ + $(OBJDIR)/json_wiki_.c:$(OBJDIR)/json_wiki.h \ + $(OBJDIR)/leaf_.c:$(OBJDIR)/leaf.h \ + $(OBJDIR)/loadctrl_.c:$(OBJDIR)/loadctrl.h \ + $(OBJDIR)/login_.c:$(OBJDIR)/login.h \ + $(OBJDIR)/lookslike_.c:$(OBJDIR)/lookslike.h \ + $(OBJDIR)/main_.c:$(OBJDIR)/main.h \ + $(OBJDIR)/manifest_.c:$(OBJDIR)/manifest.h \ + $(OBJDIR)/markdown_.c:$(OBJDIR)/markdown.h \ + $(OBJDIR)/markdown_html_.c:$(OBJDIR)/markdown_html.h \ + $(OBJDIR)/md5_.c:$(OBJDIR)/md5.h \ + $(OBJDIR)/merge_.c:$(OBJDIR)/merge.h \ + $(OBJDIR)/merge3_.c:$(OBJDIR)/merge3.h \ + $(OBJDIR)/moderate_.c:$(OBJDIR)/moderate.h \ + $(OBJDIR)/name_.c:$(OBJDIR)/name.h \ + $(OBJDIR)/path_.c:$(OBJDIR)/path.h \ + $(OBJDIR)/piechart_.c:$(OBJDIR)/piechart.h \ + $(OBJDIR)/pivot_.c:$(OBJDIR)/pivot.h \ + $(OBJDIR)/popen_.c:$(OBJDIR)/popen.h \ + $(OBJDIR)/pqueue_.c:$(OBJDIR)/pqueue.h \ + $(OBJDIR)/printf_.c:$(OBJDIR)/printf.h \ + $(OBJDIR)/publish_.c:$(OBJDIR)/publish.h \ + $(OBJDIR)/purge_.c:$(OBJDIR)/purge.h \ + $(OBJDIR)/rebuild_.c:$(OBJDIR)/rebuild.h \ + $(OBJDIR)/regexp_.c:$(OBJDIR)/regexp.h \ + $(OBJDIR)/report_.c:$(OBJDIR)/report.h \ + $(OBJDIR)/rss_.c:$(OBJDIR)/rss.h \ + $(OBJDIR)/schema_.c:$(OBJDIR)/schema.h \ + $(OBJDIR)/search_.c:$(OBJDIR)/search.h \ + $(OBJDIR)/setup_.c:$(OBJDIR)/setup.h \ + $(OBJDIR)/sha1_.c:$(OBJDIR)/sha1.h \ + $(OBJDIR)/shun_.c:$(OBJDIR)/shun.h \ + $(OBJDIR)/sitemap_.c:$(OBJDIR)/sitemap.h \ + $(OBJDIR)/skins_.c:$(OBJDIR)/skins.h \ + $(OBJDIR)/sqlcmd_.c:$(OBJDIR)/sqlcmd.h \ + $(OBJDIR)/stash_.c:$(OBJDIR)/stash.h \ + $(OBJDIR)/stat_.c:$(OBJDIR)/stat.h \ + $(OBJDIR)/statrep_.c:$(OBJDIR)/statrep.h \ + $(OBJDIR)/style_.c:$(OBJDIR)/style.h \ + $(OBJDIR)/sync_.c:$(OBJDIR)/sync.h \ + $(OBJDIR)/tag_.c:$(OBJDIR)/tag.h \ + $(OBJDIR)/tar_.c:$(OBJDIR)/tar.h \ + $(OBJDIR)/th_main_.c:$(OBJDIR)/th_main.h \ + $(OBJDIR)/timeline_.c:$(OBJDIR)/timeline.h \ + $(OBJDIR)/tkt_.c:$(OBJDIR)/tkt.h \ + $(OBJDIR)/tktsetup_.c:$(OBJDIR)/tktsetup.h \ + $(OBJDIR)/undo_.c:$(OBJDIR)/undo.h \ + $(OBJDIR)/unicode_.c:$(OBJDIR)/unicode.h \ + $(OBJDIR)/update_.c:$(OBJDIR)/update.h \ + $(OBJDIR)/url_.c:$(OBJDIR)/url.h \ + $(OBJDIR)/user_.c:$(OBJDIR)/user.h \ 
+ $(OBJDIR)/utf8_.c:$(OBJDIR)/utf8.h \ + $(OBJDIR)/util_.c:$(OBJDIR)/util.h \ + $(OBJDIR)/verify_.c:$(OBJDIR)/verify.h \ + $(OBJDIR)/vfile_.c:$(OBJDIR)/vfile.h \ + $(OBJDIR)/wiki_.c:$(OBJDIR)/wiki.h \ + $(OBJDIR)/wikiformat_.c:$(OBJDIR)/wikiformat.h \ + $(OBJDIR)/winfile_.c:$(OBJDIR)/winfile.h \ + $(OBJDIR)/winhttp_.c:$(OBJDIR)/winhttp.h \ + $(OBJDIR)/wysiwyg_.c:$(OBJDIR)/wysiwyg.h \ + $(OBJDIR)/xfer_.c:$(OBJDIR)/xfer.h \ + $(OBJDIR)/xfersetup_.c:$(OBJDIR)/xfersetup.h \ + $(OBJDIR)/zip_.c:$(OBJDIR)/zip.h \ + $(SRCDIR)/sqlite3.h \ + $(SRCDIR)/th.h \ + $(OBJDIR)/VERSION.h + echo Done >$(OBJDIR)/headers + +$(OBJDIR)/headers: Makefile + +Makefile: + +$(OBJDIR)/add_.c: $(SRCDIR)/add.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/add.c >$@ + +$(OBJDIR)/add.o: $(OBJDIR)/add_.c $(OBJDIR)/add.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/add.o -c $(OBJDIR)/add_.c + +$(OBJDIR)/add.h: $(OBJDIR)/headers + +$(OBJDIR)/allrepo_.c: $(SRCDIR)/allrepo.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/allrepo.c >$@ + +$(OBJDIR)/allrepo.o: $(OBJDIR)/allrepo_.c $(OBJDIR)/allrepo.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/allrepo.o -c $(OBJDIR)/allrepo_.c + +$(OBJDIR)/allrepo.h: $(OBJDIR)/headers + +$(OBJDIR)/attach_.c: $(SRCDIR)/attach.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/attach.c >$@ + +$(OBJDIR)/attach.o: $(OBJDIR)/attach_.c $(OBJDIR)/attach.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/attach.o -c $(OBJDIR)/attach_.c + +$(OBJDIR)/attach.h: $(OBJDIR)/headers + +$(OBJDIR)/bag_.c: $(SRCDIR)/bag.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/bag.c >$@ + +$(OBJDIR)/bag.o: $(OBJDIR)/bag_.c $(OBJDIR)/bag.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/bag.o -c $(OBJDIR)/bag_.c + +$(OBJDIR)/bag.h: $(OBJDIR)/headers + +$(OBJDIR)/bisect_.c: $(SRCDIR)/bisect.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/bisect.c >$@ + +$(OBJDIR)/bisect.o: $(OBJDIR)/bisect_.c $(OBJDIR)/bisect.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/bisect.o -c $(OBJDIR)/bisect_.c + +$(OBJDIR)/bisect.h: $(OBJDIR)/headers + +$(OBJDIR)/blob_.c: $(SRCDIR)/blob.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/blob.c >$@ + +$(OBJDIR)/blob.o: $(OBJDIR)/blob_.c $(OBJDIR)/blob.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/blob.o -c $(OBJDIR)/blob_.c + +$(OBJDIR)/blob.h: $(OBJDIR)/headers + +$(OBJDIR)/branch_.c: $(SRCDIR)/branch.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/branch.c >$@ + +$(OBJDIR)/branch.o: $(OBJDIR)/branch_.c $(OBJDIR)/branch.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/branch.o -c $(OBJDIR)/branch_.c + +$(OBJDIR)/branch.h: $(OBJDIR)/headers + +$(OBJDIR)/browse_.c: $(SRCDIR)/browse.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/browse.c >$@ + +$(OBJDIR)/browse.o: $(OBJDIR)/browse_.c $(OBJDIR)/browse.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/browse.o -c $(OBJDIR)/browse_.c + +$(OBJDIR)/browse.h: $(OBJDIR)/headers + +$(OBJDIR)/builtin_.c: $(SRCDIR)/builtin.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/builtin.c >$@ + +$(OBJDIR)/builtin.o: $(OBJDIR)/builtin_.c $(OBJDIR)/builtin.h $(OBJDIR)/builtin_data.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/builtin.o -c $(OBJDIR)/builtin_.c + +$(OBJDIR)/builtin.h: $(OBJDIR)/headers + +$(OBJDIR)/bundle_.c: $(SRCDIR)/bundle.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/bundle.c >$@ + +$(OBJDIR)/bundle.o: $(OBJDIR)/bundle_.c $(OBJDIR)/bundle.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/bundle.o -c $(OBJDIR)/bundle_.c + +$(OBJDIR)/bundle.h: $(OBJDIR)/headers + +$(OBJDIR)/cache_.c: $(SRCDIR)/cache.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/cache.c >$@ + +$(OBJDIR)/cache.o: $(OBJDIR)/cache_.c $(OBJDIR)/cache.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/cache.o -c $(OBJDIR)/cache_.c 
+ +$(OBJDIR)/cache.h: $(OBJDIR)/headers + +$(OBJDIR)/captcha_.c: $(SRCDIR)/captcha.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/captcha.c >$@ + +$(OBJDIR)/captcha.o: $(OBJDIR)/captcha_.c $(OBJDIR)/captcha.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/captcha.o -c $(OBJDIR)/captcha_.c + +$(OBJDIR)/captcha.h: $(OBJDIR)/headers + +$(OBJDIR)/cgi_.c: $(SRCDIR)/cgi.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/cgi.c >$@ + +$(OBJDIR)/cgi.o: $(OBJDIR)/cgi_.c $(OBJDIR)/cgi.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/cgi.o -c $(OBJDIR)/cgi_.c + +$(OBJDIR)/cgi.h: $(OBJDIR)/headers + +$(OBJDIR)/checkin_.c: $(SRCDIR)/checkin.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/checkin.c >$@ + +$(OBJDIR)/checkin.o: $(OBJDIR)/checkin_.c $(OBJDIR)/checkin.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/checkin.o -c $(OBJDIR)/checkin_.c + +$(OBJDIR)/checkin.h: $(OBJDIR)/headers + +$(OBJDIR)/checkout_.c: $(SRCDIR)/checkout.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/checkout.c >$@ + +$(OBJDIR)/checkout.o: $(OBJDIR)/checkout_.c $(OBJDIR)/checkout.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/checkout.o -c $(OBJDIR)/checkout_.c + +$(OBJDIR)/checkout.h: $(OBJDIR)/headers + +$(OBJDIR)/clearsign_.c: $(SRCDIR)/clearsign.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/clearsign.c >$@ + +$(OBJDIR)/clearsign.o: $(OBJDIR)/clearsign_.c $(OBJDIR)/clearsign.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/clearsign.o -c $(OBJDIR)/clearsign_.c + +$(OBJDIR)/clearsign.h: $(OBJDIR)/headers + +$(OBJDIR)/clone_.c: $(SRCDIR)/clone.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/clone.c >$@ + +$(OBJDIR)/clone.o: $(OBJDIR)/clone_.c $(OBJDIR)/clone.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/clone.o -c $(OBJDIR)/clone_.c + +$(OBJDIR)/clone.h: $(OBJDIR)/headers + +$(OBJDIR)/comformat_.c: $(SRCDIR)/comformat.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/comformat.c >$@ + +$(OBJDIR)/comformat.o: $(OBJDIR)/comformat_.c $(OBJDIR)/comformat.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/comformat.o -c $(OBJDIR)/comformat_.c + +$(OBJDIR)/comformat.h: $(OBJDIR)/headers + +$(OBJDIR)/configure_.c: $(SRCDIR)/configure.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/configure.c >$@ + +$(OBJDIR)/configure.o: $(OBJDIR)/configure_.c $(OBJDIR)/configure.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/configure.o -c $(OBJDIR)/configure_.c + +$(OBJDIR)/configure.h: $(OBJDIR)/headers + +$(OBJDIR)/content_.c: $(SRCDIR)/content.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/content.c >$@ + +$(OBJDIR)/content.o: $(OBJDIR)/content_.c $(OBJDIR)/content.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/content.o -c $(OBJDIR)/content_.c + +$(OBJDIR)/content.h: $(OBJDIR)/headers + +$(OBJDIR)/db_.c: $(SRCDIR)/db.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/db.c >$@ + +$(OBJDIR)/db.o: $(OBJDIR)/db_.c $(OBJDIR)/db.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/db.o -c $(OBJDIR)/db_.c + +$(OBJDIR)/db.h: $(OBJDIR)/headers + +$(OBJDIR)/delta_.c: $(SRCDIR)/delta.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/delta.c >$@ + +$(OBJDIR)/delta.o: $(OBJDIR)/delta_.c $(OBJDIR)/delta.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/delta.o -c $(OBJDIR)/delta_.c + +$(OBJDIR)/delta.h: $(OBJDIR)/headers + +$(OBJDIR)/deltacmd_.c: $(SRCDIR)/deltacmd.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/deltacmd.c >$@ + +$(OBJDIR)/deltacmd.o: $(OBJDIR)/deltacmd_.c $(OBJDIR)/deltacmd.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/deltacmd.o -c $(OBJDIR)/deltacmd_.c + +$(OBJDIR)/deltacmd.h: $(OBJDIR)/headers + +$(OBJDIR)/descendants_.c: $(SRCDIR)/descendants.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/descendants.c >$@ + +$(OBJDIR)/descendants.o: $(OBJDIR)/descendants_.c $(OBJDIR)/descendants.h 
$(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/descendants.o -c $(OBJDIR)/descendants_.c + +$(OBJDIR)/descendants.h: $(OBJDIR)/headers + +$(OBJDIR)/diff_.c: $(SRCDIR)/diff.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/diff.c >$@ + +$(OBJDIR)/diff.o: $(OBJDIR)/diff_.c $(OBJDIR)/diff.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/diff.o -c $(OBJDIR)/diff_.c + +$(OBJDIR)/diff.h: $(OBJDIR)/headers + +$(OBJDIR)/diffcmd_.c: $(SRCDIR)/diffcmd.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/diffcmd.c >$@ + +$(OBJDIR)/diffcmd.o: $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/diffcmd.o -c $(OBJDIR)/diffcmd_.c + +$(OBJDIR)/diffcmd.h: $(OBJDIR)/headers + +$(OBJDIR)/doc_.c: $(SRCDIR)/doc.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/doc.c >$@ + +$(OBJDIR)/doc.o: $(OBJDIR)/doc_.c $(OBJDIR)/doc.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/doc.o -c $(OBJDIR)/doc_.c + +$(OBJDIR)/doc.h: $(OBJDIR)/headers + +$(OBJDIR)/encode_.c: $(SRCDIR)/encode.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/encode.c >$@ + +$(OBJDIR)/encode.o: $(OBJDIR)/encode_.c $(OBJDIR)/encode.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/encode.o -c $(OBJDIR)/encode_.c + +$(OBJDIR)/encode.h: $(OBJDIR)/headers + +$(OBJDIR)/event_.c: $(SRCDIR)/event.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/event.c >$@ + +$(OBJDIR)/event.o: $(OBJDIR)/event_.c $(OBJDIR)/event.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/event.o -c $(OBJDIR)/event_.c + +$(OBJDIR)/event.h: $(OBJDIR)/headers + +$(OBJDIR)/export_.c: $(SRCDIR)/export.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/export.c >$@ + +$(OBJDIR)/export.o: $(OBJDIR)/export_.c $(OBJDIR)/export.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/export.o -c $(OBJDIR)/export_.c + +$(OBJDIR)/export.h: $(OBJDIR)/headers + +$(OBJDIR)/file_.c: $(SRCDIR)/file.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/file.c >$@ + +$(OBJDIR)/file.o: $(OBJDIR)/file_.c $(OBJDIR)/file.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/file.o -c $(OBJDIR)/file_.c + +$(OBJDIR)/file.h: $(OBJDIR)/headers + +$(OBJDIR)/finfo_.c: $(SRCDIR)/finfo.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/finfo.c >$@ + +$(OBJDIR)/finfo.o: $(OBJDIR)/finfo_.c $(OBJDIR)/finfo.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/finfo.o -c $(OBJDIR)/finfo_.c + +$(OBJDIR)/finfo.h: $(OBJDIR)/headers + +$(OBJDIR)/foci_.c: $(SRCDIR)/foci.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/foci.c >$@ + +$(OBJDIR)/foci.o: $(OBJDIR)/foci_.c $(OBJDIR)/foci.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/foci.o -c $(OBJDIR)/foci_.c + +$(OBJDIR)/foci.h: $(OBJDIR)/headers + +$(OBJDIR)/fusefs_.c: $(SRCDIR)/fusefs.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/fusefs.c >$@ + +$(OBJDIR)/fusefs.o: $(OBJDIR)/fusefs_.c $(OBJDIR)/fusefs.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/fusefs.o -c $(OBJDIR)/fusefs_.c + +$(OBJDIR)/fusefs.h: $(OBJDIR)/headers + +$(OBJDIR)/glob_.c: $(SRCDIR)/glob.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/glob.c >$@ + +$(OBJDIR)/glob.o: $(OBJDIR)/glob_.c $(OBJDIR)/glob.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/glob.o -c $(OBJDIR)/glob_.c + +$(OBJDIR)/glob.h: $(OBJDIR)/headers + +$(OBJDIR)/graph_.c: $(SRCDIR)/graph.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/graph.c >$@ + +$(OBJDIR)/graph.o: $(OBJDIR)/graph_.c $(OBJDIR)/graph.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/graph.o -c $(OBJDIR)/graph_.c + +$(OBJDIR)/graph.h: $(OBJDIR)/headers + +$(OBJDIR)/gzip_.c: $(SRCDIR)/gzip.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/gzip.c >$@ + +$(OBJDIR)/gzip.o: $(OBJDIR)/gzip_.c $(OBJDIR)/gzip.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/gzip.o -c $(OBJDIR)/gzip_.c + +$(OBJDIR)/gzip.h: $(OBJDIR)/headers + +$(OBJDIR)/http_.c: 
$(SRCDIR)/http.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/http.c >$@ + +$(OBJDIR)/http.o: $(OBJDIR)/http_.c $(OBJDIR)/http.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/http.o -c $(OBJDIR)/http_.c + +$(OBJDIR)/http.h: $(OBJDIR)/headers + +$(OBJDIR)/http_socket_.c: $(SRCDIR)/http_socket.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/http_socket.c >$@ + +$(OBJDIR)/http_socket.o: $(OBJDIR)/http_socket_.c $(OBJDIR)/http_socket.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/http_socket.o -c $(OBJDIR)/http_socket_.c + +$(OBJDIR)/http_socket.h: $(OBJDIR)/headers + +$(OBJDIR)/http_ssl_.c: $(SRCDIR)/http_ssl.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/http_ssl.c >$@ + +$(OBJDIR)/http_ssl.o: $(OBJDIR)/http_ssl_.c $(OBJDIR)/http_ssl.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/http_ssl.o -c $(OBJDIR)/http_ssl_.c + +$(OBJDIR)/http_ssl.h: $(OBJDIR)/headers + +$(OBJDIR)/http_transport_.c: $(SRCDIR)/http_transport.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/http_transport.c >$@ + +$(OBJDIR)/http_transport.o: $(OBJDIR)/http_transport_.c $(OBJDIR)/http_transport.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/http_transport.o -c $(OBJDIR)/http_transport_.c + +$(OBJDIR)/http_transport.h: $(OBJDIR)/headers + +$(OBJDIR)/import_.c: $(SRCDIR)/import.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/import.c >$@ + +$(OBJDIR)/import.o: $(OBJDIR)/import_.c $(OBJDIR)/import.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/import.o -c $(OBJDIR)/import_.c + +$(OBJDIR)/import.h: $(OBJDIR)/headers + +$(OBJDIR)/info_.c: $(SRCDIR)/info.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/info.c >$@ + +$(OBJDIR)/info.o: $(OBJDIR)/info_.c $(OBJDIR)/info.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/info.o -c $(OBJDIR)/info_.c + +$(OBJDIR)/info.h: $(OBJDIR)/headers + +$(OBJDIR)/json_.c: $(SRCDIR)/json.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json.c >$@ + +$(OBJDIR)/json.o: $(OBJDIR)/json_.c $(OBJDIR)/json.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json.o -c $(OBJDIR)/json_.c + +$(OBJDIR)/json.h: $(OBJDIR)/headers + +$(OBJDIR)/json_artifact_.c: $(SRCDIR)/json_artifact.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_artifact.c >$@ + +$(OBJDIR)/json_artifact.o: $(OBJDIR)/json_artifact_.c $(OBJDIR)/json_artifact.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_artifact.o -c $(OBJDIR)/json_artifact_.c + +$(OBJDIR)/json_artifact.h: $(OBJDIR)/headers + +$(OBJDIR)/json_branch_.c: $(SRCDIR)/json_branch.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_branch.c >$@ + +$(OBJDIR)/json_branch.o: $(OBJDIR)/json_branch_.c $(OBJDIR)/json_branch.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_branch.o -c $(OBJDIR)/json_branch_.c + +$(OBJDIR)/json_branch.h: $(OBJDIR)/headers + +$(OBJDIR)/json_config_.c: $(SRCDIR)/json_config.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_config.c >$@ + +$(OBJDIR)/json_config.o: $(OBJDIR)/json_config_.c $(OBJDIR)/json_config.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_config.o -c $(OBJDIR)/json_config_.c + +$(OBJDIR)/json_config.h: $(OBJDIR)/headers + +$(OBJDIR)/json_diff_.c: $(SRCDIR)/json_diff.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_diff.c >$@ + +$(OBJDIR)/json_diff.o: $(OBJDIR)/json_diff_.c $(OBJDIR)/json_diff.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_diff.o -c $(OBJDIR)/json_diff_.c + +$(OBJDIR)/json_diff.h: $(OBJDIR)/headers + +$(OBJDIR)/json_dir_.c: $(SRCDIR)/json_dir.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_dir.c >$@ + +$(OBJDIR)/json_dir.o: $(OBJDIR)/json_dir_.c $(OBJDIR)/json_dir.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_dir.o -c $(OBJDIR)/json_dir_.c + +$(OBJDIR)/json_dir.h: $(OBJDIR)/headers + +$(OBJDIR)/json_finfo_.c: 
$(SRCDIR)/json_finfo.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_finfo.c >$@ + +$(OBJDIR)/json_finfo.o: $(OBJDIR)/json_finfo_.c $(OBJDIR)/json_finfo.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_finfo.o -c $(OBJDIR)/json_finfo_.c + +$(OBJDIR)/json_finfo.h: $(OBJDIR)/headers + +$(OBJDIR)/json_login_.c: $(SRCDIR)/json_login.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_login.c >$@ + +$(OBJDIR)/json_login.o: $(OBJDIR)/json_login_.c $(OBJDIR)/json_login.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_login.o -c $(OBJDIR)/json_login_.c + +$(OBJDIR)/json_login.h: $(OBJDIR)/headers + +$(OBJDIR)/json_query_.c: $(SRCDIR)/json_query.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_query.c >$@ + +$(OBJDIR)/json_query.o: $(OBJDIR)/json_query_.c $(OBJDIR)/json_query.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_query.o -c $(OBJDIR)/json_query_.c + +$(OBJDIR)/json_query.h: $(OBJDIR)/headers + +$(OBJDIR)/json_report_.c: $(SRCDIR)/json_report.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_report.c >$@ + +$(OBJDIR)/json_report.o: $(OBJDIR)/json_report_.c $(OBJDIR)/json_report.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_report.o -c $(OBJDIR)/json_report_.c + +$(OBJDIR)/json_report.h: $(OBJDIR)/headers + +$(OBJDIR)/json_status_.c: $(SRCDIR)/json_status.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_status.c >$@ + +$(OBJDIR)/json_status.o: $(OBJDIR)/json_status_.c $(OBJDIR)/json_status.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_status.o -c $(OBJDIR)/json_status_.c + +$(OBJDIR)/json_status.h: $(OBJDIR)/headers + +$(OBJDIR)/json_tag_.c: $(SRCDIR)/json_tag.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_tag.c >$@ + +$(OBJDIR)/json_tag.o: $(OBJDIR)/json_tag_.c $(OBJDIR)/json_tag.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_tag.o -c $(OBJDIR)/json_tag_.c + +$(OBJDIR)/json_tag.h: $(OBJDIR)/headers + +$(OBJDIR)/json_timeline_.c: $(SRCDIR)/json_timeline.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_timeline.c >$@ + +$(OBJDIR)/json_timeline.o: $(OBJDIR)/json_timeline_.c $(OBJDIR)/json_timeline.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_timeline.o -c $(OBJDIR)/json_timeline_.c + +$(OBJDIR)/json_timeline.h: $(OBJDIR)/headers + +$(OBJDIR)/json_user_.c: $(SRCDIR)/json_user.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_user.c >$@ + +$(OBJDIR)/json_user.o: $(OBJDIR)/json_user_.c $(OBJDIR)/json_user.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_user.o -c $(OBJDIR)/json_user_.c + +$(OBJDIR)/json_user.h: $(OBJDIR)/headers + +$(OBJDIR)/json_wiki_.c: $(SRCDIR)/json_wiki.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_wiki.c >$@ + +$(OBJDIR)/json_wiki.o: $(OBJDIR)/json_wiki_.c $(OBJDIR)/json_wiki.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_wiki.o -c $(OBJDIR)/json_wiki_.c + +$(OBJDIR)/json_wiki.h: $(OBJDIR)/headers + +$(OBJDIR)/leaf_.c: $(SRCDIR)/leaf.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/leaf.c >$@ + +$(OBJDIR)/leaf.o: $(OBJDIR)/leaf_.c $(OBJDIR)/leaf.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/leaf.o -c $(OBJDIR)/leaf_.c + +$(OBJDIR)/leaf.h: $(OBJDIR)/headers + +$(OBJDIR)/loadctrl_.c: $(SRCDIR)/loadctrl.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/loadctrl.c >$@ + +$(OBJDIR)/loadctrl.o: $(OBJDIR)/loadctrl_.c $(OBJDIR)/loadctrl.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/loadctrl.o -c $(OBJDIR)/loadctrl_.c + +$(OBJDIR)/loadctrl.h: $(OBJDIR)/headers + +$(OBJDIR)/login_.c: $(SRCDIR)/login.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/login.c >$@ + +$(OBJDIR)/login.o: $(OBJDIR)/login_.c $(OBJDIR)/login.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/login.o -c $(OBJDIR)/login_.c + +$(OBJDIR)/login.h: $(OBJDIR)/headers + 
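#
# Every source file above and below gets the same generated three-rule
# group: translate the file, compile the translated copy, and take its
# header from the single "headers" step. As a sketch only, a hypothetical
# source file foo.c would produce:
#
#   $(OBJDIR)/foo_.c: $(SRCDIR)/foo.c $(TRANSLATE)
#   	$(TRANSLATE) $(SRCDIR)/foo.c >$@
#
#   $(OBJDIR)/foo.o: $(OBJDIR)/foo_.c $(OBJDIR)/foo.h $(SRCDIR)/config.h
#   	$(XTCC) -o $(OBJDIR)/foo.o -c $(OBJDIR)/foo_.c
#
#   $(OBJDIR)/foo.h: $(OBJDIR)/headers
#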
+$(OBJDIR)/lookslike_.c: $(SRCDIR)/lookslike.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/lookslike.c >$@ + +$(OBJDIR)/lookslike.o: $(OBJDIR)/lookslike_.c $(OBJDIR)/lookslike.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/lookslike.o -c $(OBJDIR)/lookslike_.c + +$(OBJDIR)/lookslike.h: $(OBJDIR)/headers + +$(OBJDIR)/main_.c: $(SRCDIR)/main.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/main.c >$@ + +$(OBJDIR)/main.o: $(OBJDIR)/main_.c $(OBJDIR)/main.h $(OBJDIR)/page_index.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/main.o -c $(OBJDIR)/main_.c + +$(OBJDIR)/main.h: $(OBJDIR)/headers + +$(OBJDIR)/manifest_.c: $(SRCDIR)/manifest.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/manifest.c >$@ + +$(OBJDIR)/manifest.o: $(OBJDIR)/manifest_.c $(OBJDIR)/manifest.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/manifest.o -c $(OBJDIR)/manifest_.c + +$(OBJDIR)/manifest.h: $(OBJDIR)/headers + +$(OBJDIR)/markdown_.c: $(SRCDIR)/markdown.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/markdown.c >$@ + +$(OBJDIR)/markdown.o: $(OBJDIR)/markdown_.c $(OBJDIR)/markdown.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/markdown.o -c $(OBJDIR)/markdown_.c + +$(OBJDIR)/markdown.h: $(OBJDIR)/headers + +$(OBJDIR)/markdown_html_.c: $(SRCDIR)/markdown_html.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/markdown_html.c >$@ + +$(OBJDIR)/markdown_html.o: $(OBJDIR)/markdown_html_.c $(OBJDIR)/markdown_html.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/markdown_html.o -c $(OBJDIR)/markdown_html_.c + +$(OBJDIR)/markdown_html.h: $(OBJDIR)/headers + +$(OBJDIR)/md5_.c: $(SRCDIR)/md5.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/md5.c >$@ + +$(OBJDIR)/md5.o: $(OBJDIR)/md5_.c $(OBJDIR)/md5.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/md5.o -c $(OBJDIR)/md5_.c + +$(OBJDIR)/md5.h: $(OBJDIR)/headers + +$(OBJDIR)/merge_.c: $(SRCDIR)/merge.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/merge.c >$@ + +$(OBJDIR)/merge.o: $(OBJDIR)/merge_.c $(OBJDIR)/merge.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/merge.o -c $(OBJDIR)/merge_.c + +$(OBJDIR)/merge.h: $(OBJDIR)/headers + +$(OBJDIR)/merge3_.c: $(SRCDIR)/merge3.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/merge3.c >$@ + +$(OBJDIR)/merge3.o: $(OBJDIR)/merge3_.c $(OBJDIR)/merge3.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/merge3.o -c $(OBJDIR)/merge3_.c + +$(OBJDIR)/merge3.h: $(OBJDIR)/headers + +$(OBJDIR)/moderate_.c: $(SRCDIR)/moderate.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/moderate.c >$@ + +$(OBJDIR)/moderate.o: $(OBJDIR)/moderate_.c $(OBJDIR)/moderate.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/moderate.o -c $(OBJDIR)/moderate_.c + +$(OBJDIR)/moderate.h: $(OBJDIR)/headers + +$(OBJDIR)/name_.c: $(SRCDIR)/name.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/name.c >$@ + +$(OBJDIR)/name.o: $(OBJDIR)/name_.c $(OBJDIR)/name.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/name.o -c $(OBJDIR)/name_.c + +$(OBJDIR)/name.h: $(OBJDIR)/headers + +$(OBJDIR)/path_.c: $(SRCDIR)/path.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/path.c >$@ + +$(OBJDIR)/path.o: $(OBJDIR)/path_.c $(OBJDIR)/path.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/path.o -c $(OBJDIR)/path_.c + +$(OBJDIR)/path.h: $(OBJDIR)/headers + +$(OBJDIR)/piechart_.c: $(SRCDIR)/piechart.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/piechart.c >$@ + +$(OBJDIR)/piechart.o: $(OBJDIR)/piechart_.c $(OBJDIR)/piechart.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/piechart.o -c $(OBJDIR)/piechart_.c + +$(OBJDIR)/piechart.h: $(OBJDIR)/headers + +$(OBJDIR)/pivot_.c: $(SRCDIR)/pivot.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/pivot.c >$@ + +$(OBJDIR)/pivot.o: $(OBJDIR)/pivot_.c $(OBJDIR)/pivot.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/pivot.o -c 
$(OBJDIR)/pivot_.c + +$(OBJDIR)/pivot.h: $(OBJDIR)/headers + +$(OBJDIR)/popen_.c: $(SRCDIR)/popen.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/popen.c >$@ + +$(OBJDIR)/popen.o: $(OBJDIR)/popen_.c $(OBJDIR)/popen.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/popen.o -c $(OBJDIR)/popen_.c + +$(OBJDIR)/popen.h: $(OBJDIR)/headers + +$(OBJDIR)/pqueue_.c: $(SRCDIR)/pqueue.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/pqueue.c >$@ + +$(OBJDIR)/pqueue.o: $(OBJDIR)/pqueue_.c $(OBJDIR)/pqueue.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/pqueue.o -c $(OBJDIR)/pqueue_.c + +$(OBJDIR)/pqueue.h: $(OBJDIR)/headers + +$(OBJDIR)/printf_.c: $(SRCDIR)/printf.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/printf.c >$@ + +$(OBJDIR)/printf.o: $(OBJDIR)/printf_.c $(OBJDIR)/printf.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/printf.o -c $(OBJDIR)/printf_.c + +$(OBJDIR)/printf.h: $(OBJDIR)/headers + +$(OBJDIR)/publish_.c: $(SRCDIR)/publish.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/publish.c >$@ + +$(OBJDIR)/publish.o: $(OBJDIR)/publish_.c $(OBJDIR)/publish.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/publish.o -c $(OBJDIR)/publish_.c + +$(OBJDIR)/publish.h: $(OBJDIR)/headers + +$(OBJDIR)/purge_.c: $(SRCDIR)/purge.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/purge.c >$@ + +$(OBJDIR)/purge.o: $(OBJDIR)/purge_.c $(OBJDIR)/purge.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/purge.o -c $(OBJDIR)/purge_.c + +$(OBJDIR)/purge.h: $(OBJDIR)/headers + +$(OBJDIR)/rebuild_.c: $(SRCDIR)/rebuild.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/rebuild.c >$@ + +$(OBJDIR)/rebuild.o: $(OBJDIR)/rebuild_.c $(OBJDIR)/rebuild.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/rebuild.o -c $(OBJDIR)/rebuild_.c + +$(OBJDIR)/rebuild.h: $(OBJDIR)/headers + +$(OBJDIR)/regexp_.c: $(SRCDIR)/regexp.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/regexp.c >$@ + +$(OBJDIR)/regexp.o: $(OBJDIR)/regexp_.c $(OBJDIR)/regexp.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/regexp.o -c $(OBJDIR)/regexp_.c + +$(OBJDIR)/regexp.h: $(OBJDIR)/headers + +$(OBJDIR)/report_.c: $(SRCDIR)/report.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/report.c >$@ + +$(OBJDIR)/report.o: $(OBJDIR)/report_.c $(OBJDIR)/report.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/report.o -c $(OBJDIR)/report_.c + +$(OBJDIR)/report.h: $(OBJDIR)/headers + +$(OBJDIR)/rss_.c: $(SRCDIR)/rss.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/rss.c >$@ + +$(OBJDIR)/rss.o: $(OBJDIR)/rss_.c $(OBJDIR)/rss.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/rss.o -c $(OBJDIR)/rss_.c + +$(OBJDIR)/rss.h: $(OBJDIR)/headers + +$(OBJDIR)/schema_.c: $(SRCDIR)/schema.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/schema.c >$@ + +$(OBJDIR)/schema.o: $(OBJDIR)/schema_.c $(OBJDIR)/schema.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/schema.o -c $(OBJDIR)/schema_.c + +$(OBJDIR)/schema.h: $(OBJDIR)/headers + +$(OBJDIR)/search_.c: $(SRCDIR)/search.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/search.c >$@ + +$(OBJDIR)/search.o: $(OBJDIR)/search_.c $(OBJDIR)/search.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/search.o -c $(OBJDIR)/search_.c + +$(OBJDIR)/search.h: $(OBJDIR)/headers + +$(OBJDIR)/setup_.c: $(SRCDIR)/setup.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/setup.c >$@ + +$(OBJDIR)/setup.o: $(OBJDIR)/setup_.c $(OBJDIR)/setup.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/setup.o -c $(OBJDIR)/setup_.c + +$(OBJDIR)/setup.h: $(OBJDIR)/headers + +$(OBJDIR)/sha1_.c: $(SRCDIR)/sha1.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/sha1.c >$@ + +$(OBJDIR)/sha1.o: $(OBJDIR)/sha1_.c $(OBJDIR)/sha1.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/sha1.o -c $(OBJDIR)/sha1_.c + +$(OBJDIR)/sha1.h: $(OBJDIR)/headers + +$(OBJDIR)/shun_.c: 
$(SRCDIR)/shun.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/shun.c >$@ + +$(OBJDIR)/shun.o: $(OBJDIR)/shun_.c $(OBJDIR)/shun.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/shun.o -c $(OBJDIR)/shun_.c + +$(OBJDIR)/shun.h: $(OBJDIR)/headers + +$(OBJDIR)/sitemap_.c: $(SRCDIR)/sitemap.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/sitemap.c >$@ + +$(OBJDIR)/sitemap.o: $(OBJDIR)/sitemap_.c $(OBJDIR)/sitemap.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/sitemap.o -c $(OBJDIR)/sitemap_.c + +$(OBJDIR)/sitemap.h: $(OBJDIR)/headers + +$(OBJDIR)/skins_.c: $(SRCDIR)/skins.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/skins.c >$@ + +$(OBJDIR)/skins.o: $(OBJDIR)/skins_.c $(OBJDIR)/skins.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/skins.o -c $(OBJDIR)/skins_.c + +$(OBJDIR)/skins.h: $(OBJDIR)/headers + +$(OBJDIR)/sqlcmd_.c: $(SRCDIR)/sqlcmd.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/sqlcmd.c >$@ + +$(OBJDIR)/sqlcmd.o: $(OBJDIR)/sqlcmd_.c $(OBJDIR)/sqlcmd.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/sqlcmd.o -c $(OBJDIR)/sqlcmd_.c + +$(OBJDIR)/sqlcmd.h: $(OBJDIR)/headers + +$(OBJDIR)/stash_.c: $(SRCDIR)/stash.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/stash.c >$@ + +$(OBJDIR)/stash.o: $(OBJDIR)/stash_.c $(OBJDIR)/stash.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/stash.o -c $(OBJDIR)/stash_.c + +$(OBJDIR)/stash.h: $(OBJDIR)/headers + +$(OBJDIR)/stat_.c: $(SRCDIR)/stat.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/stat.c >$@ + +$(OBJDIR)/stat.o: $(OBJDIR)/stat_.c $(OBJDIR)/stat.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/stat.o -c $(OBJDIR)/stat_.c + +$(OBJDIR)/stat.h: $(OBJDIR)/headers + +$(OBJDIR)/statrep_.c: $(SRCDIR)/statrep.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/statrep.c >$@ + +$(OBJDIR)/statrep.o: $(OBJDIR)/statrep_.c $(OBJDIR)/statrep.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/statrep.o -c $(OBJDIR)/statrep_.c + +$(OBJDIR)/statrep.h: $(OBJDIR)/headers + +$(OBJDIR)/style_.c: $(SRCDIR)/style.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/style.c >$@ + +$(OBJDIR)/style.o: $(OBJDIR)/style_.c $(OBJDIR)/style.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/style.o -c $(OBJDIR)/style_.c + +$(OBJDIR)/style.h: $(OBJDIR)/headers + +$(OBJDIR)/sync_.c: $(SRCDIR)/sync.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/sync.c >$@ + +$(OBJDIR)/sync.o: $(OBJDIR)/sync_.c $(OBJDIR)/sync.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/sync.o -c $(OBJDIR)/sync_.c + +$(OBJDIR)/sync.h: $(OBJDIR)/headers + +$(OBJDIR)/tag_.c: $(SRCDIR)/tag.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/tag.c >$@ + +$(OBJDIR)/tag.o: $(OBJDIR)/tag_.c $(OBJDIR)/tag.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/tag.o -c $(OBJDIR)/tag_.c + +$(OBJDIR)/tag.h: $(OBJDIR)/headers + +$(OBJDIR)/tar_.c: $(SRCDIR)/tar.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/tar.c >$@ + +$(OBJDIR)/tar.o: $(OBJDIR)/tar_.c $(OBJDIR)/tar.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/tar.o -c $(OBJDIR)/tar_.c + +$(OBJDIR)/tar.h: $(OBJDIR)/headers + +$(OBJDIR)/th_main_.c: $(SRCDIR)/th_main.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/th_main.c >$@ + +$(OBJDIR)/th_main.o: $(OBJDIR)/th_main_.c $(OBJDIR)/th_main.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/th_main.o -c $(OBJDIR)/th_main_.c + +$(OBJDIR)/th_main.h: $(OBJDIR)/headers + +$(OBJDIR)/timeline_.c: $(SRCDIR)/timeline.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/timeline.c >$@ + +$(OBJDIR)/timeline.o: $(OBJDIR)/timeline_.c $(OBJDIR)/timeline.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/timeline.o -c $(OBJDIR)/timeline_.c + +$(OBJDIR)/timeline.h: $(OBJDIR)/headers + +$(OBJDIR)/tkt_.c: $(SRCDIR)/tkt.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/tkt.c >$@ + +$(OBJDIR)/tkt.o: $(OBJDIR)/tkt_.c $(OBJDIR)/tkt.h 
$(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/tkt.o -c $(OBJDIR)/tkt_.c + +$(OBJDIR)/tkt.h: $(OBJDIR)/headers + +$(OBJDIR)/tktsetup_.c: $(SRCDIR)/tktsetup.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/tktsetup.c >$@ + +$(OBJDIR)/tktsetup.o: $(OBJDIR)/tktsetup_.c $(OBJDIR)/tktsetup.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/tktsetup.o -c $(OBJDIR)/tktsetup_.c + +$(OBJDIR)/tktsetup.h: $(OBJDIR)/headers + +$(OBJDIR)/undo_.c: $(SRCDIR)/undo.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/undo.c >$@ + +$(OBJDIR)/undo.o: $(OBJDIR)/undo_.c $(OBJDIR)/undo.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/undo.o -c $(OBJDIR)/undo_.c + +$(OBJDIR)/undo.h: $(OBJDIR)/headers + +$(OBJDIR)/unicode_.c: $(SRCDIR)/unicode.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/unicode.c >$@ + +$(OBJDIR)/unicode.o: $(OBJDIR)/unicode_.c $(OBJDIR)/unicode.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/unicode.o -c $(OBJDIR)/unicode_.c + +$(OBJDIR)/unicode.h: $(OBJDIR)/headers + +$(OBJDIR)/update_.c: $(SRCDIR)/update.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/update.c >$@ + +$(OBJDIR)/update.o: $(OBJDIR)/update_.c $(OBJDIR)/update.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/update.o -c $(OBJDIR)/update_.c + +$(OBJDIR)/update.h: $(OBJDIR)/headers + +$(OBJDIR)/url_.c: $(SRCDIR)/url.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/url.c >$@ + +$(OBJDIR)/url.o: $(OBJDIR)/url_.c $(OBJDIR)/url.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/url.o -c $(OBJDIR)/url_.c + +$(OBJDIR)/url.h: $(OBJDIR)/headers + +$(OBJDIR)/user_.c: $(SRCDIR)/user.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/user.c >$@ + +$(OBJDIR)/user.o: $(OBJDIR)/user_.c $(OBJDIR)/user.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/user.o -c $(OBJDIR)/user_.c + +$(OBJDIR)/user.h: $(OBJDIR)/headers + +$(OBJDIR)/utf8_.c: $(SRCDIR)/utf8.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/utf8.c >$@ + +$(OBJDIR)/utf8.o: $(OBJDIR)/utf8_.c $(OBJDIR)/utf8.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/utf8.o -c $(OBJDIR)/utf8_.c + +$(OBJDIR)/utf8.h: $(OBJDIR)/headers + +$(OBJDIR)/util_.c: $(SRCDIR)/util.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/util.c >$@ + +$(OBJDIR)/util.o: $(OBJDIR)/util_.c $(OBJDIR)/util.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/util.o -c $(OBJDIR)/util_.c + +$(OBJDIR)/util.h: $(OBJDIR)/headers + +$(OBJDIR)/verify_.c: $(SRCDIR)/verify.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/verify.c >$@ + +$(OBJDIR)/verify.o: $(OBJDIR)/verify_.c $(OBJDIR)/verify.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/verify.o -c $(OBJDIR)/verify_.c + +$(OBJDIR)/verify.h: $(OBJDIR)/headers + +$(OBJDIR)/vfile_.c: $(SRCDIR)/vfile.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/vfile.c >$@ + +$(OBJDIR)/vfile.o: $(OBJDIR)/vfile_.c $(OBJDIR)/vfile.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/vfile.o -c $(OBJDIR)/vfile_.c + +$(OBJDIR)/vfile.h: $(OBJDIR)/headers + +$(OBJDIR)/wiki_.c: $(SRCDIR)/wiki.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/wiki.c >$@ + +$(OBJDIR)/wiki.o: $(OBJDIR)/wiki_.c $(OBJDIR)/wiki.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/wiki.o -c $(OBJDIR)/wiki_.c + +$(OBJDIR)/wiki.h: $(OBJDIR)/headers + +$(OBJDIR)/wikiformat_.c: $(SRCDIR)/wikiformat.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/wikiformat.c >$@ + +$(OBJDIR)/wikiformat.o: $(OBJDIR)/wikiformat_.c $(OBJDIR)/wikiformat.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/wikiformat.o -c $(OBJDIR)/wikiformat_.c + +$(OBJDIR)/wikiformat.h: $(OBJDIR)/headers + +$(OBJDIR)/winfile_.c: $(SRCDIR)/winfile.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/winfile.c >$@ + +$(OBJDIR)/winfile.o: $(OBJDIR)/winfile_.c $(OBJDIR)/winfile.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/winfile.o -c $(OBJDIR)/winfile_.c + 
+$(OBJDIR)/winfile.h: $(OBJDIR)/headers + +$(OBJDIR)/winhttp_.c: $(SRCDIR)/winhttp.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/winhttp.c >$@ + +$(OBJDIR)/winhttp.o: $(OBJDIR)/winhttp_.c $(OBJDIR)/winhttp.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/winhttp.o -c $(OBJDIR)/winhttp_.c + +$(OBJDIR)/winhttp.h: $(OBJDIR)/headers + +$(OBJDIR)/wysiwyg_.c: $(SRCDIR)/wysiwyg.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/wysiwyg.c >$@ + +$(OBJDIR)/wysiwyg.o: $(OBJDIR)/wysiwyg_.c $(OBJDIR)/wysiwyg.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/wysiwyg.o -c $(OBJDIR)/wysiwyg_.c + +$(OBJDIR)/wysiwyg.h: $(OBJDIR)/headers + +$(OBJDIR)/xfer_.c: $(SRCDIR)/xfer.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/xfer.c >$@ + +$(OBJDIR)/xfer.o: $(OBJDIR)/xfer_.c $(OBJDIR)/xfer.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/xfer.o -c $(OBJDIR)/xfer_.c + +$(OBJDIR)/xfer.h: $(OBJDIR)/headers + +$(OBJDIR)/xfersetup_.c: $(SRCDIR)/xfersetup.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/xfersetup.c >$@ + +$(OBJDIR)/xfersetup.o: $(OBJDIR)/xfersetup_.c $(OBJDIR)/xfersetup.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/xfersetup.o -c $(OBJDIR)/xfersetup_.c + +$(OBJDIR)/xfersetup.h: $(OBJDIR)/headers + +$(OBJDIR)/zip_.c: $(SRCDIR)/zip.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/zip.c >$@ + +$(OBJDIR)/zip.o: $(OBJDIR)/zip_.c $(OBJDIR)/zip.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/zip.o -c $(OBJDIR)/zip_.c + +$(OBJDIR)/zip.h: $(OBJDIR)/headers + +SQLITE_OPTIONS = -DNDEBUG=1 \ + -DSQLITE_OMIT_LOAD_EXTENSION=1 \ + -DSQLITE_ENABLE_LOCKING_STYLE=0 \ + -DSQLITE_THREADSAFE=0 \ + -DSQLITE_DEFAULT_FILE_FORMAT=4 \ + -DSQLITE_OMIT_DEPRECATED \ + -DSQLITE_ENABLE_EXPLAIN_COMMENTS \ + -DSQLITE_ENABLE_FTS4 \ + -DSQLITE_ENABLE_FTS3_PARENTHESIS \ + -DSQLITE_ENABLE_DBSTAT_VTAB \ + -DSQLITE_ENABLE_JSON1 \ + -DSQLITE_ENABLE_FTS5 \ + -DSQLITE_WIN32_NO_ANSI \ + -D_HAVE__MINGW_H \ + -DSQLITE_USE_MALLOC_H \ + -DSQLITE_USE_MSIZE + +SHELL_OPTIONS = -Dmain=sqlite3_shell \ + -DSQLITE_OMIT_LOAD_EXTENSION=1 \ + -DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) \ + -DSQLITE_SHELL_DBNAME_PROC=fossil_open \ + -Daccess=file_access \ + -Dsystem=fossil_system \ + -Dgetenv=fossil_getenv \ + -Dfopen=fossil_fopen + +MINIZ_OPTIONS = -DMINIZ_NO_STDIO \ + -DMINIZ_NO_TIME \ + -DMINIZ_NO_ARCHIVE_APIS + +$(OBJDIR)/sqlite3.o: $(SRCDIR)/sqlite3.c $(SRCDIR)/../win/Makefile.mingw + $(XTCC) $(SQLITE_OPTIONS) $(SQLITE_CFLAGS) -c $(SRCDIR)/sqlite3.c -o $@ + +$(OBJDIR)/cson_amalgamation.o: $(SRCDIR)/cson_amalgamation.c + $(XTCC) -c $(SRCDIR)/cson_amalgamation.c -o $@ + +$(OBJDIR)/json.o $(OBJDIR)/json_artifact.o $(OBJDIR)/json_branch.o $(OBJDIR)/json_config.o $(OBJDIR)/json_diff.o $(OBJDIR)/json_dir.o $(OBJDIR)/jsos_finfo.o $(OBJDIR)/json_login.o $(OBJDIR)/json_query.o $(OBJDIR)/json_report.o $(OBJDIR)/json_status.o $(OBJDIR)/json_tag.o $(OBJDIR)/json_timeline.o $(OBJDIR)/json_user.o $(OBJDIR)/json_wiki.o : $(SRCDIR)/json_detail.h + +$(OBJDIR)/shell.o: $(SRCDIR)/shell.c $(SRCDIR)/sqlite3.h $(SRCDIR)/../win/Makefile.mingw + $(XTCC) $(SHELL_OPTIONS) $(SHELL_CFLAGS) -c $(SRCDIR)/shell.c -o $@ + +$(OBJDIR)/th.o: $(SRCDIR)/th.c + $(XTCC) -c $(SRCDIR)/th.c -o $@ + +$(OBJDIR)/th_lang.o: $(SRCDIR)/th_lang.c + $(XTCC) -c $(SRCDIR)/th_lang.c -o $@ + +$(OBJDIR)/th_tcl.o: $(SRCDIR)/th_tcl.c + $(XTCC) -c $(SRCDIR)/th_tcl.c -o $@ + +$(OBJDIR)/miniz.o: $(SRCDIR)/miniz.c + $(XTCC) $(MINIZ_OPTIONS) -c $(SRCDIR)/miniz.c -o $@ + ADDED win/Makefile.mingw.mistachkin Index: win/Makefile.mingw.mistachkin ================================================================== --- win/Makefile.mingw.mistachkin +++ win/Makefile.mingw.mistachkin @@ 
-0,0 +1,2139 @@ +#!/usr/bin/make +# +############################################################################## +# WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl") +############################################################################## +# +# This file is automatically generated. Instead of editing this +# file, edit "makemake.tcl" then run "tclsh makemake.tcl" +# to regenerate this file. +# +# This is a makefile for use on Cygwin/Darwin/FreeBSD/Linux/Windows using +# MinGW or MinGW-w64. +# +# Some of the special options which can be passed to make +# USE_WINDOWS=1 if building under a windows command prompt +# X64=1 if using an unprefixed 64-bit mingw compiler +# + +#### Select one of MinGW, MinGW-w64 (32-bit) or MinGW-w64 (64-bit) compilers. +# By default, this is an empty string (i.e. use the native compiler). +# +PREFIX = +# PREFIX = mingw32- +# PREFIX = i686-pc-mingw32- +# PREFIX = i686-w64-mingw32- +# PREFIX = x86_64-w64-mingw32- + +#### The toplevel directory of the source tree. Fossil can be built +# in a directory that is separate from the source tree. Just change +# the following to point from the build directory to the src/ folder. +# +SRCDIR = src + +#### The directory into which object code files should be written. +# +OBJDIR = wbld + +#### C Compiler and options for use in building executables that +# will run on the platform that is doing the build. This is used +# to compile code-generator programs as part of the build process. +# See TCC below for the C compiler for building the finished binary. +# +BCC = gcc + +#### Enable compiling with debug symbols (much larger binary) +# +# FOSSIL_ENABLE_SYMBOLS = 1 + +#### Enable JSON (http://www.json.org) support using "cson" +# +FOSSIL_ENABLE_JSON = 1 + +#### Enable HTTPS support via OpenSSL (links to libssl and libcrypto) +# +FOSSIL_ENABLE_SSL = 1 + +#### Automatically build OpenSSL when building Fossil (causes rebuild +# issues when building incrementally). +# +# FOSSIL_BUILD_SSL = 1 + +#### Enable relative paths in external diff/gdiff +# +# FOSSIL_ENABLE_EXEC_REL_PATHS = 1 + +#### Enable legacy treatment of mv/rm (skip checkout files) +# +FOSSIL_ENABLE_LEGACY_MV_RM = 1 + +#### Enable TH1 scripts in embedded documentation files +# +FOSSIL_ENABLE_TH1_DOCS = 1 + +#### Enable hooks for commands and web pages via TH1 +# +FOSSIL_ENABLE_TH1_HOOKS = 1 + +#### Enable scripting support via Tcl/Tk +# +FOSSIL_ENABLE_TCL = 1 + +#### Load Tcl using the stubs library mechanism +# +FOSSIL_ENABLE_TCL_STUBS = 1 + +#### Load Tcl using the private stubs mechanism +# +FOSSIL_ENABLE_TCL_PRIVATE_STUBS = 1 + +#### Use 'system' SQLite +# +# USE_SYSTEM_SQLITE = 1 + +#### Use the miniz compression library +# +# FOSSIL_ENABLE_MINIZ = 1 + +#### Use the Tcl source directory instead of the install directory? +# This is useful when Tcl has been compiled statically with MinGW. +# +FOSSIL_TCL_SOURCE = 1 + +#### Check if the workaround for the MinGW command line handling needs to +# be enabled by default. This check may be somewhat fragile due to the +# use of "findstring". +# +ifndef MINGW_IS_32BIT_ONLY +ifeq (,$(findstring w64-mingw32,$(PREFIX))) +MINGW_IS_32BIT_ONLY = 1 +endif +endif + +#### The directories where the zlib include and library files are located. +# +ZINCDIR = $(SRCDIR)/../compat/zlib +ZLIBDIR = $(SRCDIR)/../compat/zlib + +#### Make an attempt to detect if Fossil is being built for the x64 processor +# architecture. This check may be somewhat fragile due to "findstring". 
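#
# For illustration (assuming the x86_64-w64-mingw32 cross-toolchain is
# installed and this makefile is invoked from the top of the source tree,
# like the stock win/Makefile.mingw), selecting a 64-bit compiler prefix is
# enough: the findstring checks then set X64 = 1 and leave
# MINGW_IS_32BIT_ONLY undefined automatically, so an explicit X64=1 is only
# needed with an unprefixed 64-bit compiler:
#
#   make -f win/Makefile.mingw.mistachkin PREFIX=x86_64-w64-mingw32-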
+# +ifndef X64 +ifneq (,$(findstring x86_64-w64-mingw32,$(PREFIX))) +X64 = 1 +endif +endif + +#### Determine if the optimized assembly routines provided with zlib should be +# used, taking into account whether zlib is actually enabled and the target +# processor architecture. +# +ifndef X64 +SSLCONFIG = mingw +ifndef FOSSIL_ENABLE_MINIZ +ZLIBCONFIG = LOC="-DASMV -DASMINF" OBJA="inffas86.o match.o" +LIBTARGETS = $(ZLIBDIR)/inffas86.o $(ZLIBDIR)/match.o +else +ZLIBCONFIG = +LIBTARGETS = +endif +else +SSLCONFIG = mingw64 +ZLIBCONFIG = +LIBTARGETS = +endif + +#### Disable creation of the OpenSSL shared libraries. Also, disable support +# for both SSLv2 and SSLv3 (i.e. thereby forcing the use of TLS). +# +SSLCONFIG += no-ssl2 no-ssl3 no-shared + +#### When using zlib, make sure that OpenSSL is configured to use the zlib +# that Fossil knows about (i.e. the one within the source tree). +# +ifndef FOSSIL_ENABLE_MINIZ +SSLCONFIG += --with-zlib-lib=$(PWD)/$(ZLIBDIR) --with-zlib-include=$(PWD)/$(ZLIBDIR) zlib +endif + +#### The directories where the OpenSSL include and library files are located. +# The recommended usage here is to use the Sysinternals junction tool +# to create a hard link between an "openssl-1.x" sub-directory of the +# Fossil source code directory and the target OpenSSL source directory. +# +OPENSSLDIR = $(SRCDIR)/../compat/openssl-1.0.2f +OPENSSLINCDIR = $(OPENSSLDIR)/include +OPENSSLLIBDIR = $(OPENSSLDIR) + +#### Either the directory where the Tcl library is installed or the Tcl +# source code directory resides (depending on the value of the macro +# FOSSIL_TCL_SOURCE). If this points to the Tcl install directory, +# this directory must have "include" and "lib" sub-directories. If +# this points to the Tcl source code directory, this directory must +# have "generic" and "win" sub-directories. The recommended usage +# here is to use the Sysinternals junction tool to create a hard +# link between a "tcl-8.x" sub-directory of the Fossil source code +# directory and the target Tcl directory. This removes the need to +# hard-code the necessary paths in this Makefile. +# +TCLDIR = $(SRCDIR)/../compat/tcl-8.6 + +#### The Tcl source code directory. This defaults to the same value as +# TCLDIR macro (above), which may not be correct. This value will +# only be used if the FOSSIL_TCL_SOURCE macro is defined. +# +TCLSRCDIR = $(TCLDIR) + +#### The Tcl include and library directories. These values will only be +# used if the FOSSIL_TCL_SOURCE macro is not defined. +# +TCLINCDIR = $(TCLDIR)/include +TCLLIBDIR = $(TCLDIR)/lib + +#### Tcl: Which Tcl library do we want to use (8.4, 8.5, 8.6, etc)? +# +ifdef FOSSIL_ENABLE_TCL_STUBS +ifndef FOSSIL_ENABLE_TCL_PRIVATE_STUBS +LIBTCL = -ltclstub86 +endif +TCLTARGET = libtclstub86.a +else +LIBTCL = -ltcl86 +TCLTARGET = binaries +endif + +#### C Compile and options for use in building executables that +# will run on the target platform. This is usually the same +# as BCC, unless you are cross-compiling. This C compiler builds +# the finished binary for fossil. The BCC compiler above is used +# for building intermediate code-generator tools. +# +TCC = $(PREFIX)gcc -Wall + +#### Add the necessary command line options to build with debugging +# symbols, if enabled. +# +ifdef FOSSIL_ENABLE_SYMBOLS +TCC += -g +else +TCC += -Os +endif + +#### When not using the miniz compression library, zlib is required. 
+# +ifndef FOSSIL_ENABLE_MINIZ +TCC += -L$(ZLIBDIR) -I$(ZINCDIR) +endif + +#### Compile resources for use in building executables that will run +# on the target platform. +# +RCC = $(PREFIX)windres -I$(SRCDIR) + +ifndef FOSSIL_ENABLE_MINIZ +RCC += -I$(ZINCDIR) +endif + +# With HTTPS support +ifdef FOSSIL_ENABLE_SSL +TCC += -L$(OPENSSLLIBDIR) -I$(OPENSSLINCDIR) +RCC += -I$(OPENSSLINCDIR) +endif + +# With Tcl support +ifdef FOSSIL_ENABLE_TCL +ifdef FOSSIL_TCL_SOURCE +TCC += -L$(TCLSRCDIR)/win -I$(TCLSRCDIR)/generic -I$(TCLSRCDIR)/win +RCC += -I$(TCLSRCDIR)/generic -I$(TCLSRCDIR)/win +else +TCC += -L$(TCLLIBDIR) -I$(TCLINCDIR) +RCC += -I$(TCLINCDIR) +endif +endif + +# With miniz (i.e. instead of zlib) +ifdef FOSSIL_ENABLE_MINIZ +TCC += -DFOSSIL_ENABLE_MINIZ=1 +RCC += -DFOSSIL_ENABLE_MINIZ=1 +endif + +# With MinGW command line handling workaround +ifdef MINGW_IS_32BIT_ONLY +TCC += -DBROKEN_MINGW_CMDLINE=1 +RCC += -DBROKEN_MINGW_CMDLINE=1 +endif + +# With HTTPS support +ifdef FOSSIL_ENABLE_SSL +TCC += -DFOSSIL_ENABLE_SSL=1 +RCC += -DFOSSIL_ENABLE_SSL=1 +endif + +# With relative paths in external diff/gdiff +ifdef FOSSIL_ENABLE_EXEC_REL_PATHS +TCC += -DFOSSIL_ENABLE_EXEC_REL_PATHS=1 +RCC += -DFOSSIL_ENABLE_EXEC_REL_PATHS=1 +endif + +# With legacy treatment of mv/rm +ifdef FOSSIL_ENABLE_LEGACY_MV_RM +TCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1 +RCC += -DFOSSIL_ENABLE_LEGACY_MV_RM=1 +endif + +# With TH1 embedded docs support +ifdef FOSSIL_ENABLE_TH1_DOCS +TCC += -DFOSSIL_ENABLE_TH1_DOCS=1 +RCC += -DFOSSIL_ENABLE_TH1_DOCS=1 +endif + +# With TH1 hook support +ifdef FOSSIL_ENABLE_TH1_HOOKS +TCC += -DFOSSIL_ENABLE_TH1_HOOKS=1 +RCC += -DFOSSIL_ENABLE_TH1_HOOKS=1 +endif + +# With Tcl support +ifdef FOSSIL_ENABLE_TCL +TCC += -DFOSSIL_ENABLE_TCL=1 +RCC += -DFOSSIL_ENABLE_TCL=1 +# Either statically linked or via stubs +ifdef FOSSIL_ENABLE_TCL_STUBS +TCC += -DFOSSIL_ENABLE_TCL_STUBS=1 -DUSE_TCL_STUBS +RCC += -DFOSSIL_ENABLE_TCL_STUBS=1 -DUSE_TCL_STUBS +ifdef FOSSIL_ENABLE_TCL_PRIVATE_STUBS +TCC += -DFOSSIL_ENABLE_TCL_PRIVATE_STUBS=1 +RCC += -DFOSSIL_ENABLE_TCL_PRIVATE_STUBS=1 +endif +else +TCC += -DSTATIC_BUILD +RCC += -DSTATIC_BUILD +endif +endif + +# With JSON support +ifdef FOSSIL_ENABLE_JSON +TCC += -DFOSSIL_ENABLE_JSON=1 +RCC += -DFOSSIL_ENABLE_JSON=1 +endif + +#### The option -static has no effect on MinGW(-w64), only dynamic +# executables can be built when linking with MSVCRT. OpenSSL +# (optional) and zlib (required) however are always linked in +# statically. Therefore, the FOSSIL_DYNAMIC_BUILD option does +# not really apply to MinGW (i.e. since ALL external libraries +# are NOT linked dynamically). +# +# LIB = -static + +#### MinGW: If available, use the Unicode capable runtime startup code. +# +ifndef MINGW_IS_32BIT_ONLY +LIB += -municode +endif + +#### SQLite: If enabled, use the system SQLite library. +# +ifdef USE_SYSTEM_SQLITE +LIB += -lsqlite3 +endif + +#### OpenSSL: Add the necessary libraries required, if enabled. +# +ifdef FOSSIL_ENABLE_SSL +LIB += -lssl -lcrypto -lgdi32 +endif + +#### Tcl: Add the necessary libraries required, if enabled. +# +ifdef FOSSIL_ENABLE_TCL +LIB += $(LIBTCL) +endif + +#### Extra arguments for linking the finished binary. Fossil needs +# to link against the Z-Lib compression library. There are no +# other mandatory dependencies. +# +LIB += -lmingwex + +#### When not using the miniz compression library, zlib is required. 
+# +ifndef FOSSIL_ENABLE_MINIZ +LIB += -lz +endif + +#### These libraries MUST appear in the same order as they do for Tcl +# or linking with it will not work (exact reason unknown). +# +ifdef FOSSIL_ENABLE_TCL +ifdef FOSSIL_ENABLE_TCL_STUBS +LIB += -lkernel32 -lws2_32 +else +LIB += -lnetapi32 -lkernel32 -luser32 -ladvapi32 -lws2_32 +endif +else +LIB += -lkernel32 -lws2_32 +endif + +#### Tcl shell for use in running the fossil test suite. This is only +# used for testing. +# +TCLSH = tclsh + +#### Nullsoft installer MakeNSIS location +# +MAKENSIS = "$(PROGRAMFILES)\NSIS\MakeNSIS.exe" + +#### Inno Setup executable location +# +INNOSETUP = "$(PROGRAMFILES)\Inno Setup 5\ISCC.exe" + +#### Include a configuration file that can override any one of these settings. +# +-include config.w32 + +# STOP HERE +# You should not need to change anything below this line +#-------------------------------------------------------- +XTCC = $(TCC) $(CFLAGS) -I. -I$(SRCDIR) + +SRC = \ + $(SRCDIR)/add.c \ + $(SRCDIR)/allrepo.c \ + $(SRCDIR)/attach.c \ + $(SRCDIR)/bag.c \ + $(SRCDIR)/bisect.c \ + $(SRCDIR)/blob.c \ + $(SRCDIR)/branch.c \ + $(SRCDIR)/browse.c \ + $(SRCDIR)/builtin.c \ + $(SRCDIR)/bundle.c \ + $(SRCDIR)/cache.c \ + $(SRCDIR)/captcha.c \ + $(SRCDIR)/cgi.c \ + $(SRCDIR)/checkin.c \ + $(SRCDIR)/checkout.c \ + $(SRCDIR)/clearsign.c \ + $(SRCDIR)/clone.c \ + $(SRCDIR)/comformat.c \ + $(SRCDIR)/configure.c \ + $(SRCDIR)/content.c \ + $(SRCDIR)/db.c \ + $(SRCDIR)/delta.c \ + $(SRCDIR)/deltacmd.c \ + $(SRCDIR)/descendants.c \ + $(SRCDIR)/diff.c \ + $(SRCDIR)/diffcmd.c \ + $(SRCDIR)/doc.c \ + $(SRCDIR)/encode.c \ + $(SRCDIR)/event.c \ + $(SRCDIR)/export.c \ + $(SRCDIR)/file.c \ + $(SRCDIR)/finfo.c \ + $(SRCDIR)/foci.c \ + $(SRCDIR)/fusefs.c \ + $(SRCDIR)/glob.c \ + $(SRCDIR)/graph.c \ + $(SRCDIR)/gzip.c \ + $(SRCDIR)/http.c \ + $(SRCDIR)/http_socket.c \ + $(SRCDIR)/http_ssl.c \ + $(SRCDIR)/http_transport.c \ + $(SRCDIR)/import.c \ + $(SRCDIR)/info.c \ + $(SRCDIR)/json.c \ + $(SRCDIR)/json_artifact.c \ + $(SRCDIR)/json_branch.c \ + $(SRCDIR)/json_config.c \ + $(SRCDIR)/json_diff.c \ + $(SRCDIR)/json_dir.c \ + $(SRCDIR)/json_finfo.c \ + $(SRCDIR)/json_login.c \ + $(SRCDIR)/json_query.c \ + $(SRCDIR)/json_report.c \ + $(SRCDIR)/json_status.c \ + $(SRCDIR)/json_tag.c \ + $(SRCDIR)/json_timeline.c \ + $(SRCDIR)/json_user.c \ + $(SRCDIR)/json_wiki.c \ + $(SRCDIR)/leaf.c \ + $(SRCDIR)/loadctrl.c \ + $(SRCDIR)/login.c \ + $(SRCDIR)/lookslike.c \ + $(SRCDIR)/main.c \ + $(SRCDIR)/manifest.c \ + $(SRCDIR)/markdown.c \ + $(SRCDIR)/markdown_html.c \ + $(SRCDIR)/md5.c \ + $(SRCDIR)/merge.c \ + $(SRCDIR)/merge3.c \ + $(SRCDIR)/moderate.c \ + $(SRCDIR)/name.c \ + $(SRCDIR)/path.c \ + $(SRCDIR)/piechart.c \ + $(SRCDIR)/pivot.c \ + $(SRCDIR)/popen.c \ + $(SRCDIR)/pqueue.c \ + $(SRCDIR)/printf.c \ + $(SRCDIR)/publish.c \ + $(SRCDIR)/purge.c \ + $(SRCDIR)/rebuild.c \ + $(SRCDIR)/regexp.c \ + $(SRCDIR)/report.c \ + $(SRCDIR)/rss.c \ + $(SRCDIR)/schema.c \ + $(SRCDIR)/search.c \ + $(SRCDIR)/setup.c \ + $(SRCDIR)/sha1.c \ + $(SRCDIR)/shun.c \ + $(SRCDIR)/sitemap.c \ + $(SRCDIR)/skins.c \ + $(SRCDIR)/sqlcmd.c \ + $(SRCDIR)/stash.c \ + $(SRCDIR)/stat.c \ + $(SRCDIR)/statrep.c \ + $(SRCDIR)/style.c \ + $(SRCDIR)/sync.c \ + $(SRCDIR)/tag.c \ + $(SRCDIR)/tar.c \ + $(SRCDIR)/th_main.c \ + $(SRCDIR)/timeline.c \ + $(SRCDIR)/tkt.c \ + $(SRCDIR)/tktsetup.c \ + $(SRCDIR)/undo.c \ + $(SRCDIR)/unicode.c \ + $(SRCDIR)/update.c \ + $(SRCDIR)/url.c \ + $(SRCDIR)/user.c \ + $(SRCDIR)/utf8.c \ + $(SRCDIR)/util.c \ + $(SRCDIR)/verify.c \ + 
$(SRCDIR)/vfile.c \ + $(SRCDIR)/wiki.c \ + $(SRCDIR)/wikiformat.c \ + $(SRCDIR)/winfile.c \ + $(SRCDIR)/winhttp.c \ + $(SRCDIR)/wysiwyg.c \ + $(SRCDIR)/xfer.c \ + $(SRCDIR)/xfersetup.c \ + $(SRCDIR)/zip.c + +EXTRA_FILES = \ + $(SRCDIR)/../skins/aht/details.txt \ + $(SRCDIR)/../skins/black_and_white/css.txt \ + $(SRCDIR)/../skins/black_and_white/details.txt \ + $(SRCDIR)/../skins/black_and_white/footer.txt \ + $(SRCDIR)/../skins/black_and_white/header.txt \ + $(SRCDIR)/../skins/blitz/css.txt \ + $(SRCDIR)/../skins/blitz/details.txt \ + $(SRCDIR)/../skins/blitz/footer.txt \ + $(SRCDIR)/../skins/blitz/header.txt \ + $(SRCDIR)/../skins/blitz/ticket.txt \ + $(SRCDIR)/../skins/blitz_no_logo/css.txt \ + $(SRCDIR)/../skins/blitz_no_logo/details.txt \ + $(SRCDIR)/../skins/blitz_no_logo/footer.txt \ + $(SRCDIR)/../skins/blitz_no_logo/header.txt \ + $(SRCDIR)/../skins/blitz_no_logo/ticket.txt \ + $(SRCDIR)/../skins/default/css.txt \ + $(SRCDIR)/../skins/default/details.txt \ + $(SRCDIR)/../skins/default/footer.txt \ + $(SRCDIR)/../skins/default/header.txt \ + $(SRCDIR)/../skins/eagle/css.txt \ + $(SRCDIR)/../skins/eagle/details.txt \ + $(SRCDIR)/../skins/eagle/footer.txt \ + $(SRCDIR)/../skins/eagle/header.txt \ + $(SRCDIR)/../skins/enhanced1/css.txt \ + $(SRCDIR)/../skins/enhanced1/details.txt \ + $(SRCDIR)/../skins/enhanced1/footer.txt \ + $(SRCDIR)/../skins/enhanced1/header.txt \ + $(SRCDIR)/../skins/khaki/css.txt \ + $(SRCDIR)/../skins/khaki/details.txt \ + $(SRCDIR)/../skins/khaki/footer.txt \ + $(SRCDIR)/../skins/khaki/header.txt \ + $(SRCDIR)/../skins/original/css.txt \ + $(SRCDIR)/../skins/original/details.txt \ + $(SRCDIR)/../skins/original/footer.txt \ + $(SRCDIR)/../skins/original/header.txt \ + $(SRCDIR)/../skins/plain_gray/css.txt \ + $(SRCDIR)/../skins/plain_gray/details.txt \ + $(SRCDIR)/../skins/plain_gray/footer.txt \ + $(SRCDIR)/../skins/plain_gray/header.txt \ + $(SRCDIR)/../skins/rounded1/css.txt \ + $(SRCDIR)/../skins/rounded1/details.txt \ + $(SRCDIR)/../skins/rounded1/footer.txt \ + $(SRCDIR)/../skins/rounded1/header.txt \ + $(SRCDIR)/../skins/xekri/css.txt \ + $(SRCDIR)/../skins/xekri/details.txt \ + $(SRCDIR)/../skins/xekri/footer.txt \ + $(SRCDIR)/../skins/xekri/header.txt \ + $(SRCDIR)/diff.tcl \ + $(SRCDIR)/markdown.md + +TRANS_SRC = \ + $(OBJDIR)/add_.c \ + $(OBJDIR)/allrepo_.c \ + $(OBJDIR)/attach_.c \ + $(OBJDIR)/bag_.c \ + $(OBJDIR)/bisect_.c \ + $(OBJDIR)/blob_.c \ + $(OBJDIR)/branch_.c \ + $(OBJDIR)/browse_.c \ + $(OBJDIR)/builtin_.c \ + $(OBJDIR)/bundle_.c \ + $(OBJDIR)/cache_.c \ + $(OBJDIR)/captcha_.c \ + $(OBJDIR)/cgi_.c \ + $(OBJDIR)/checkin_.c \ + $(OBJDIR)/checkout_.c \ + $(OBJDIR)/clearsign_.c \ + $(OBJDIR)/clone_.c \ + $(OBJDIR)/comformat_.c \ + $(OBJDIR)/configure_.c \ + $(OBJDIR)/content_.c \ + $(OBJDIR)/db_.c \ + $(OBJDIR)/delta_.c \ + $(OBJDIR)/deltacmd_.c \ + $(OBJDIR)/descendants_.c \ + $(OBJDIR)/diff_.c \ + $(OBJDIR)/diffcmd_.c \ + $(OBJDIR)/doc_.c \ + $(OBJDIR)/encode_.c \ + $(OBJDIR)/event_.c \ + $(OBJDIR)/export_.c \ + $(OBJDIR)/file_.c \ + $(OBJDIR)/finfo_.c \ + $(OBJDIR)/foci_.c \ + $(OBJDIR)/fusefs_.c \ + $(OBJDIR)/glob_.c \ + $(OBJDIR)/graph_.c \ + $(OBJDIR)/gzip_.c \ + $(OBJDIR)/http_.c \ + $(OBJDIR)/http_socket_.c \ + $(OBJDIR)/http_ssl_.c \ + $(OBJDIR)/http_transport_.c \ + $(OBJDIR)/import_.c \ + $(OBJDIR)/info_.c \ + $(OBJDIR)/json_.c \ + $(OBJDIR)/json_artifact_.c \ + $(OBJDIR)/json_branch_.c \ + $(OBJDIR)/json_config_.c \ + $(OBJDIR)/json_diff_.c \ + $(OBJDIR)/json_dir_.c \ + $(OBJDIR)/json_finfo_.c \ + $(OBJDIR)/json_login_.c \ + 
$(OBJDIR)/json_query_.c \ + $(OBJDIR)/json_report_.c \ + $(OBJDIR)/json_status_.c \ + $(OBJDIR)/json_tag_.c \ + $(OBJDIR)/json_timeline_.c \ + $(OBJDIR)/json_user_.c \ + $(OBJDIR)/json_wiki_.c \ + $(OBJDIR)/leaf_.c \ + $(OBJDIR)/loadctrl_.c \ + $(OBJDIR)/login_.c \ + $(OBJDIR)/lookslike_.c \ + $(OBJDIR)/main_.c \ + $(OBJDIR)/manifest_.c \ + $(OBJDIR)/markdown_.c \ + $(OBJDIR)/markdown_html_.c \ + $(OBJDIR)/md5_.c \ + $(OBJDIR)/merge_.c \ + $(OBJDIR)/merge3_.c \ + $(OBJDIR)/moderate_.c \ + $(OBJDIR)/name_.c \ + $(OBJDIR)/path_.c \ + $(OBJDIR)/piechart_.c \ + $(OBJDIR)/pivot_.c \ + $(OBJDIR)/popen_.c \ + $(OBJDIR)/pqueue_.c \ + $(OBJDIR)/printf_.c \ + $(OBJDIR)/publish_.c \ + $(OBJDIR)/purge_.c \ + $(OBJDIR)/rebuild_.c \ + $(OBJDIR)/regexp_.c \ + $(OBJDIR)/report_.c \ + $(OBJDIR)/rss_.c \ + $(OBJDIR)/schema_.c \ + $(OBJDIR)/search_.c \ + $(OBJDIR)/setup_.c \ + $(OBJDIR)/sha1_.c \ + $(OBJDIR)/shun_.c \ + $(OBJDIR)/sitemap_.c \ + $(OBJDIR)/skins_.c \ + $(OBJDIR)/sqlcmd_.c \ + $(OBJDIR)/stash_.c \ + $(OBJDIR)/stat_.c \ + $(OBJDIR)/statrep_.c \ + $(OBJDIR)/style_.c \ + $(OBJDIR)/sync_.c \ + $(OBJDIR)/tag_.c \ + $(OBJDIR)/tar_.c \ + $(OBJDIR)/th_main_.c \ + $(OBJDIR)/timeline_.c \ + $(OBJDIR)/tkt_.c \ + $(OBJDIR)/tktsetup_.c \ + $(OBJDIR)/undo_.c \ + $(OBJDIR)/unicode_.c \ + $(OBJDIR)/update_.c \ + $(OBJDIR)/url_.c \ + $(OBJDIR)/user_.c \ + $(OBJDIR)/utf8_.c \ + $(OBJDIR)/util_.c \ + $(OBJDIR)/verify_.c \ + $(OBJDIR)/vfile_.c \ + $(OBJDIR)/wiki_.c \ + $(OBJDIR)/wikiformat_.c \ + $(OBJDIR)/winfile_.c \ + $(OBJDIR)/winhttp_.c \ + $(OBJDIR)/wysiwyg_.c \ + $(OBJDIR)/xfer_.c \ + $(OBJDIR)/xfersetup_.c \ + $(OBJDIR)/zip_.c + +OBJ = \ + $(OBJDIR)/add.o \ + $(OBJDIR)/allrepo.o \ + $(OBJDIR)/attach.o \ + $(OBJDIR)/bag.o \ + $(OBJDIR)/bisect.o \ + $(OBJDIR)/blob.o \ + $(OBJDIR)/branch.o \ + $(OBJDIR)/browse.o \ + $(OBJDIR)/builtin.o \ + $(OBJDIR)/bundle.o \ + $(OBJDIR)/cache.o \ + $(OBJDIR)/captcha.o \ + $(OBJDIR)/cgi.o \ + $(OBJDIR)/checkin.o \ + $(OBJDIR)/checkout.o \ + $(OBJDIR)/clearsign.o \ + $(OBJDIR)/clone.o \ + $(OBJDIR)/comformat.o \ + $(OBJDIR)/configure.o \ + $(OBJDIR)/content.o \ + $(OBJDIR)/db.o \ + $(OBJDIR)/delta.o \ + $(OBJDIR)/deltacmd.o \ + $(OBJDIR)/descendants.o \ + $(OBJDIR)/diff.o \ + $(OBJDIR)/diffcmd.o \ + $(OBJDIR)/doc.o \ + $(OBJDIR)/encode.o \ + $(OBJDIR)/event.o \ + $(OBJDIR)/export.o \ + $(OBJDIR)/file.o \ + $(OBJDIR)/finfo.o \ + $(OBJDIR)/foci.o \ + $(OBJDIR)/fusefs.o \ + $(OBJDIR)/glob.o \ + $(OBJDIR)/graph.o \ + $(OBJDIR)/gzip.o \ + $(OBJDIR)/http.o \ + $(OBJDIR)/http_socket.o \ + $(OBJDIR)/http_ssl.o \ + $(OBJDIR)/http_transport.o \ + $(OBJDIR)/import.o \ + $(OBJDIR)/info.o \ + $(OBJDIR)/json.o \ + $(OBJDIR)/json_artifact.o \ + $(OBJDIR)/json_branch.o \ + $(OBJDIR)/json_config.o \ + $(OBJDIR)/json_diff.o \ + $(OBJDIR)/json_dir.o \ + $(OBJDIR)/json_finfo.o \ + $(OBJDIR)/json_login.o \ + $(OBJDIR)/json_query.o \ + $(OBJDIR)/json_report.o \ + $(OBJDIR)/json_status.o \ + $(OBJDIR)/json_tag.o \ + $(OBJDIR)/json_timeline.o \ + $(OBJDIR)/json_user.o \ + $(OBJDIR)/json_wiki.o \ + $(OBJDIR)/leaf.o \ + $(OBJDIR)/loadctrl.o \ + $(OBJDIR)/login.o \ + $(OBJDIR)/lookslike.o \ + $(OBJDIR)/main.o \ + $(OBJDIR)/manifest.o \ + $(OBJDIR)/markdown.o \ + $(OBJDIR)/markdown_html.o \ + $(OBJDIR)/md5.o \ + $(OBJDIR)/merge.o \ + $(OBJDIR)/merge3.o \ + $(OBJDIR)/moderate.o \ + $(OBJDIR)/name.o \ + $(OBJDIR)/path.o \ + $(OBJDIR)/piechart.o \ + $(OBJDIR)/pivot.o \ + $(OBJDIR)/popen.o \ + $(OBJDIR)/pqueue.o \ + $(OBJDIR)/printf.o \ + $(OBJDIR)/publish.o \ + $(OBJDIR)/purge.o \ + $(OBJDIR)/rebuild.o \ + 
$(OBJDIR)/regexp.o \ + $(OBJDIR)/report.o \ + $(OBJDIR)/rss.o \ + $(OBJDIR)/schema.o \ + $(OBJDIR)/search.o \ + $(OBJDIR)/setup.o \ + $(OBJDIR)/sha1.o \ + $(OBJDIR)/shun.o \ + $(OBJDIR)/sitemap.o \ + $(OBJDIR)/skins.o \ + $(OBJDIR)/sqlcmd.o \ + $(OBJDIR)/stash.o \ + $(OBJDIR)/stat.o \ + $(OBJDIR)/statrep.o \ + $(OBJDIR)/style.o \ + $(OBJDIR)/sync.o \ + $(OBJDIR)/tag.o \ + $(OBJDIR)/tar.o \ + $(OBJDIR)/th_main.o \ + $(OBJDIR)/timeline.o \ + $(OBJDIR)/tkt.o \ + $(OBJDIR)/tktsetup.o \ + $(OBJDIR)/undo.o \ + $(OBJDIR)/unicode.o \ + $(OBJDIR)/update.o \ + $(OBJDIR)/url.o \ + $(OBJDIR)/user.o \ + $(OBJDIR)/utf8.o \ + $(OBJDIR)/util.o \ + $(OBJDIR)/verify.o \ + $(OBJDIR)/vfile.o \ + $(OBJDIR)/wiki.o \ + $(OBJDIR)/wikiformat.o \ + $(OBJDIR)/winfile.o \ + $(OBJDIR)/winhttp.o \ + $(OBJDIR)/wysiwyg.o \ + $(OBJDIR)/xfer.o \ + $(OBJDIR)/xfersetup.o \ + $(OBJDIR)/zip.o + +APPNAME = fossil.exe +APPTARGETS = + +#### If the USE_WINDOWS variable exists, it is assumed that we are building +# inside of a Windows-style shell; otherwise, it is assumed that we are +# building inside of a Unix-style shell. Note that the "move" command is +# broken when attempting to use it from the Windows shell via MinGW make +# because the SHELL variable is only used for certain commands that are +# recognized internally by make. +# +ifdef USE_WINDOWS +TRANSLATE = $(subst /,\,$(OBJDIR)/translate.exe) +MAKEHEADERS = $(subst /,\,$(OBJDIR)/makeheaders.exe) +MKINDEX = $(subst /,\,$(OBJDIR)/mkindex.exe) +MKBUILTIN = $(subst /,\,$(OBJDIR)/mkbuiltin.exe) +MKVERSION = $(subst /,\,$(OBJDIR)/mkversion.exe) +CODECHECK1 = $(subst /,\,$(OBJDIR)/codecheck1.exe) +CAT = type +CP = copy +GREP = find +MV = copy +RM = del /Q +MKDIR = -mkdir +RMDIR = rmdir /S /Q +else +TRANSLATE = $(OBJDIR)/translate.exe +MAKEHEADERS = $(OBJDIR)/makeheaders.exe +MKINDEX = $(OBJDIR)/mkindex.exe +MKBUILTIN = $(OBJDIR)/mkbuiltin.exe +MKVERSION = $(OBJDIR)/mkversion.exe +CODECHECK1 = $(OBJDIR)/codecheck1.exe +CAT = cat +CP = cp +GREP = grep +MV = mv +RM = rm -f +MKDIR = -mkdir -p +RMDIR = rm -rf +endif + +all: $(OBJDIR) $(APPNAME) + +$(OBJDIR)/fossil.o: $(SRCDIR)/../win/fossil.rc $(OBJDIR)/VERSION.h +ifdef USE_WINDOWS + $(CAT) $(subst /,\,$(SRCDIR)\miniz.c) | $(GREP) "define MZ_VERSION" > $(subst /,\,$(OBJDIR)\minizver.h) + $(CP) $(subst /,\,$(SRCDIR)\..\win\fossil.rc) $(subst /,\,$(OBJDIR)) + $(CP) $(subst /,\,$(SRCDIR)\..\win\fossil.ico) $(subst /,\,$(OBJDIR)) + $(CP) $(subst /,\,$(SRCDIR)\..\win\fossil.exe.manifest) $(subst /,\,$(OBJDIR)) +else + $(CAT) $(SRCDIR)/miniz.c | $(GREP) "define MZ_VERSION" > $(OBJDIR)/minizver.h + $(CP) $(SRCDIR)/../win/fossil.rc $(OBJDIR) + $(CP) $(SRCDIR)/../win/fossil.ico $(OBJDIR) + $(CP) $(SRCDIR)/../win/fossil.exe.manifest $(OBJDIR) +endif + $(RCC) $(OBJDIR)/fossil.rc -o $(OBJDIR)/fossil.o + +install: $(OBJDIR) $(APPNAME) +ifdef USE_WINDOWS + $(MKDIR) $(subst /,\,$(INSTALLDIR)) + $(MV) $(subst /,\,$(APPNAME)) $(subst /,\,$(INSTALLDIR)) +else + $(MKDIR) $(INSTALLDIR) + $(MV) $(APPNAME) $(INSTALLDIR) +endif + +$(OBJDIR): +ifdef USE_WINDOWS + $(MKDIR) $(subst /,\,$(OBJDIR)) +else + $(MKDIR) $(OBJDIR) +endif + +$(TRANSLATE): $(SRCDIR)/translate.c + $(BCC) -o $@ $(SRCDIR)/translate.c + +$(MAKEHEADERS): $(SRCDIR)/makeheaders.c + $(BCC) -o $@ $(SRCDIR)/makeheaders.c + +$(MKINDEX): $(SRCDIR)/mkindex.c + $(BCC) -o $@ $(SRCDIR)/mkindex.c + +$(MKBUILTIN): $(SRCDIR)/mkbuiltin.c + $(BCC) -o $@ $(SRCDIR)/mkbuiltin.c + +$(MKVERSION): $(SRCDIR)/mkversion.c + $(BCC) -o $@ $(SRCDIR)/mkversion.c + +$(CODECHECK1): $(SRCDIR)/codecheck1.c + $(BCC) -o $@ 
$(SRCDIR)/codecheck1.c + +# WARNING. DANGER. Running the test suite modifies the repository the +# build is done from, i.e. the checkout belongs to. Do not sync/push +# the repository after running the tests. +test: $(OBJDIR) $(APPNAME) + $(TCLSH) $(SRCDIR)/../test/tester.tcl $(APPNAME) + +$(OBJDIR)/VERSION.h: $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(MKVERSION) + $(MKVERSION) $(SRCDIR)/../manifest.uuid $(SRCDIR)/../manifest $(SRCDIR)/../VERSION >$@ + +# The USE_SYSTEM_SQLITE variable may be undefined, set to 0, or set +# to 1. If it is set to 1, then there is no need to build or link +# the sqlite3.o object. Instead, the system SQLite will be linked +# using -lsqlite3. +SQLITE3_OBJ.1 = +SQLITE3_OBJ.0 = $(OBJDIR)/sqlite3.o +SQLITE3_OBJ. = $(SQLITE3_OBJ.0) + +# The FOSSIL_ENABLE_MINIZ variable may be undefined, set to 0, or +# set to 1. If it is set to 1, the miniz library included in the +# source tree should be used; otherwise, it should not. +MINIZ_OBJ.0 = +MINIZ_OBJ.1 = $(OBJDIR)/miniz.o +MINIZ_OBJ. = $(MINIZ_OBJ.0) + + +EXTRAOBJ = \ + $(SQLITE3_OBJ.$(USE_SYSTEM_SQLITE)) \ + $(MINIZ_OBJ.$(FOSSIL_ENABLE_MINIZ)) \ + $(OBJDIR)/shell.o \ + $(OBJDIR)/th.o \ + $(OBJDIR)/th_lang.o \ + $(OBJDIR)/th_tcl.o \ + $(OBJDIR)/cson_amalgamation.o + + +zlib: + $(MAKE) -C $(ZLIBDIR) PREFIX=$(PREFIX) $(ZLIBCONFIG) -f win32/Makefile.gcc libz.a + +clean-zlib: + $(MAKE) -C $(ZLIBDIR) PREFIX=$(PREFIX) -f win32/Makefile.gcc clean + +$(ZLIBDIR)/inffas86.o: + $(TCC) -c -o $@ -DASMINF -I$(ZLIBDIR) -O3 $(ZLIBDIR)/contrib/inflate86/inffas86.c + +$(ZLIBDIR)/match.o: + $(TCC) -c -o $@ -DASMV $(ZLIBDIR)/contrib/asm686/match.S + + +ifndef FOSSIL_ENABLE_MINIZ +LIBTARGETS += zlib +endif + +openssl: $(LIBTARGETS) + cd $(OPENSSLLIBDIR);./Configure --cross-compile-prefix=$(PREFIX) $(SSLCONFIG) + $(MAKE) -C $(OPENSSLLIBDIR) build_libs + +clean-openssl: + $(MAKE) -C $(OPENSSLLIBDIR) clean + +tcl: + cd $(TCLSRCDIR)/win;./configure + $(MAKE) -C $(TCLSRCDIR)/win $(TCLTARGET) + +clean-tcl: + $(MAKE) -C $(TCLSRCDIR)/win distclean + +APPTARGETS += $(LIBTARGETS) + +ifdef FOSSIL_BUILD_SSL +APPTARGETS += openssl +endif + +$(APPNAME): $(APPTARGETS) $(OBJDIR)/headers $(CODECHECK1) $(OBJ) $(EXTRAOBJ) $(OBJDIR)/fossil.o + $(CODECHECK1) $(TRANS_SRC) + $(TCC) -o $@ $(OBJ) $(EXTRAOBJ) $(OBJDIR)/fossil.o $(LIB) + +# This rule prevents make from using its default rules to try build +# an executable named "manifest" out of the file named "manifest.c" +# +$(SRCDIR)/../manifest: + # noop + +clean: +ifdef USE_WINDOWS + $(RM) $(subst /,\,$(APPNAME)) + $(RMDIR) $(subst /,\,$(OBJDIR)) +else + $(RM) $(APPNAME) + $(RMDIR) $(OBJDIR) +endif + +setup: $(OBJDIR) $(APPNAME) + $(MAKENSIS) ./setup/fossil.nsi + +innosetup: $(OBJDIR) $(APPNAME) + $(INNOSETUP) ./setup/fossil.iss -DAppVersion=$(shell $(CAT) ./VERSION) + +$(OBJDIR)/page_index.h: $(TRANS_SRC) $(MKINDEX) + $(MKINDEX) $(TRANS_SRC) >$@ + +$(OBJDIR)/builtin_data.h: $(MKBUILTIN) $(EXTRA_FILES) + $(MKBUILTIN) --prefix $(SRCDIR)/ $(EXTRA_FILES) >$@ + +$(OBJDIR)/headers: $(OBJDIR)/page_index.h $(OBJDIR)/builtin_data.h $(MAKEHEADERS) $(OBJDIR)/VERSION.h + $(MAKEHEADERS) $(OBJDIR)/add_.c:$(OBJDIR)/add.h \ + $(OBJDIR)/allrepo_.c:$(OBJDIR)/allrepo.h \ + $(OBJDIR)/attach_.c:$(OBJDIR)/attach.h \ + $(OBJDIR)/bag_.c:$(OBJDIR)/bag.h \ + $(OBJDIR)/bisect_.c:$(OBJDIR)/bisect.h \ + $(OBJDIR)/blob_.c:$(OBJDIR)/blob.h \ + $(OBJDIR)/branch_.c:$(OBJDIR)/branch.h \ + $(OBJDIR)/browse_.c:$(OBJDIR)/browse.h \ + $(OBJDIR)/builtin_.c:$(OBJDIR)/builtin.h \ + $(OBJDIR)/bundle_.c:$(OBJDIR)/bundle.h \ + 
$(OBJDIR)/cache_.c:$(OBJDIR)/cache.h \ + $(OBJDIR)/captcha_.c:$(OBJDIR)/captcha.h \ + $(OBJDIR)/cgi_.c:$(OBJDIR)/cgi.h \ + $(OBJDIR)/checkin_.c:$(OBJDIR)/checkin.h \ + $(OBJDIR)/checkout_.c:$(OBJDIR)/checkout.h \ + $(OBJDIR)/clearsign_.c:$(OBJDIR)/clearsign.h \ + $(OBJDIR)/clone_.c:$(OBJDIR)/clone.h \ + $(OBJDIR)/comformat_.c:$(OBJDIR)/comformat.h \ + $(OBJDIR)/configure_.c:$(OBJDIR)/configure.h \ + $(OBJDIR)/content_.c:$(OBJDIR)/content.h \ + $(OBJDIR)/db_.c:$(OBJDIR)/db.h \ + $(OBJDIR)/delta_.c:$(OBJDIR)/delta.h \ + $(OBJDIR)/deltacmd_.c:$(OBJDIR)/deltacmd.h \ + $(OBJDIR)/descendants_.c:$(OBJDIR)/descendants.h \ + $(OBJDIR)/diff_.c:$(OBJDIR)/diff.h \ + $(OBJDIR)/diffcmd_.c:$(OBJDIR)/diffcmd.h \ + $(OBJDIR)/doc_.c:$(OBJDIR)/doc.h \ + $(OBJDIR)/encode_.c:$(OBJDIR)/encode.h \ + $(OBJDIR)/event_.c:$(OBJDIR)/event.h \ + $(OBJDIR)/export_.c:$(OBJDIR)/export.h \ + $(OBJDIR)/file_.c:$(OBJDIR)/file.h \ + $(OBJDIR)/finfo_.c:$(OBJDIR)/finfo.h \ + $(OBJDIR)/foci_.c:$(OBJDIR)/foci.h \ + $(OBJDIR)/fusefs_.c:$(OBJDIR)/fusefs.h \ + $(OBJDIR)/glob_.c:$(OBJDIR)/glob.h \ + $(OBJDIR)/graph_.c:$(OBJDIR)/graph.h \ + $(OBJDIR)/gzip_.c:$(OBJDIR)/gzip.h \ + $(OBJDIR)/http_.c:$(OBJDIR)/http.h \ + $(OBJDIR)/http_socket_.c:$(OBJDIR)/http_socket.h \ + $(OBJDIR)/http_ssl_.c:$(OBJDIR)/http_ssl.h \ + $(OBJDIR)/http_transport_.c:$(OBJDIR)/http_transport.h \ + $(OBJDIR)/import_.c:$(OBJDIR)/import.h \ + $(OBJDIR)/info_.c:$(OBJDIR)/info.h \ + $(OBJDIR)/json_.c:$(OBJDIR)/json.h \ + $(OBJDIR)/json_artifact_.c:$(OBJDIR)/json_artifact.h \ + $(OBJDIR)/json_branch_.c:$(OBJDIR)/json_branch.h \ + $(OBJDIR)/json_config_.c:$(OBJDIR)/json_config.h \ + $(OBJDIR)/json_diff_.c:$(OBJDIR)/json_diff.h \ + $(OBJDIR)/json_dir_.c:$(OBJDIR)/json_dir.h \ + $(OBJDIR)/json_finfo_.c:$(OBJDIR)/json_finfo.h \ + $(OBJDIR)/json_login_.c:$(OBJDIR)/json_login.h \ + $(OBJDIR)/json_query_.c:$(OBJDIR)/json_query.h \ + $(OBJDIR)/json_report_.c:$(OBJDIR)/json_report.h \ + $(OBJDIR)/json_status_.c:$(OBJDIR)/json_status.h \ + $(OBJDIR)/json_tag_.c:$(OBJDIR)/json_tag.h \ + $(OBJDIR)/json_timeline_.c:$(OBJDIR)/json_timeline.h \ + $(OBJDIR)/json_user_.c:$(OBJDIR)/json_user.h \ + $(OBJDIR)/json_wiki_.c:$(OBJDIR)/json_wiki.h \ + $(OBJDIR)/leaf_.c:$(OBJDIR)/leaf.h \ + $(OBJDIR)/loadctrl_.c:$(OBJDIR)/loadctrl.h \ + $(OBJDIR)/login_.c:$(OBJDIR)/login.h \ + $(OBJDIR)/lookslike_.c:$(OBJDIR)/lookslike.h \ + $(OBJDIR)/main_.c:$(OBJDIR)/main.h \ + $(OBJDIR)/manifest_.c:$(OBJDIR)/manifest.h \ + $(OBJDIR)/markdown_.c:$(OBJDIR)/markdown.h \ + $(OBJDIR)/markdown_html_.c:$(OBJDIR)/markdown_html.h \ + $(OBJDIR)/md5_.c:$(OBJDIR)/md5.h \ + $(OBJDIR)/merge_.c:$(OBJDIR)/merge.h \ + $(OBJDIR)/merge3_.c:$(OBJDIR)/merge3.h \ + $(OBJDIR)/moderate_.c:$(OBJDIR)/moderate.h \ + $(OBJDIR)/name_.c:$(OBJDIR)/name.h \ + $(OBJDIR)/path_.c:$(OBJDIR)/path.h \ + $(OBJDIR)/piechart_.c:$(OBJDIR)/piechart.h \ + $(OBJDIR)/pivot_.c:$(OBJDIR)/pivot.h \ + $(OBJDIR)/popen_.c:$(OBJDIR)/popen.h \ + $(OBJDIR)/pqueue_.c:$(OBJDIR)/pqueue.h \ + $(OBJDIR)/printf_.c:$(OBJDIR)/printf.h \ + $(OBJDIR)/publish_.c:$(OBJDIR)/publish.h \ + $(OBJDIR)/purge_.c:$(OBJDIR)/purge.h \ + $(OBJDIR)/rebuild_.c:$(OBJDIR)/rebuild.h \ + $(OBJDIR)/regexp_.c:$(OBJDIR)/regexp.h \ + $(OBJDIR)/report_.c:$(OBJDIR)/report.h \ + $(OBJDIR)/rss_.c:$(OBJDIR)/rss.h \ + $(OBJDIR)/schema_.c:$(OBJDIR)/schema.h \ + $(OBJDIR)/search_.c:$(OBJDIR)/search.h \ + $(OBJDIR)/setup_.c:$(OBJDIR)/setup.h \ + $(OBJDIR)/sha1_.c:$(OBJDIR)/sha1.h \ + $(OBJDIR)/shun_.c:$(OBJDIR)/shun.h \ + $(OBJDIR)/sitemap_.c:$(OBJDIR)/sitemap.h \ + 
$(OBJDIR)/skins_.c:$(OBJDIR)/skins.h \ + $(OBJDIR)/sqlcmd_.c:$(OBJDIR)/sqlcmd.h \ + $(OBJDIR)/stash_.c:$(OBJDIR)/stash.h \ + $(OBJDIR)/stat_.c:$(OBJDIR)/stat.h \ + $(OBJDIR)/statrep_.c:$(OBJDIR)/statrep.h \ + $(OBJDIR)/style_.c:$(OBJDIR)/style.h \ + $(OBJDIR)/sync_.c:$(OBJDIR)/sync.h \ + $(OBJDIR)/tag_.c:$(OBJDIR)/tag.h \ + $(OBJDIR)/tar_.c:$(OBJDIR)/tar.h \ + $(OBJDIR)/th_main_.c:$(OBJDIR)/th_main.h \ + $(OBJDIR)/timeline_.c:$(OBJDIR)/timeline.h \ + $(OBJDIR)/tkt_.c:$(OBJDIR)/tkt.h \ + $(OBJDIR)/tktsetup_.c:$(OBJDIR)/tktsetup.h \ + $(OBJDIR)/undo_.c:$(OBJDIR)/undo.h \ + $(OBJDIR)/unicode_.c:$(OBJDIR)/unicode.h \ + $(OBJDIR)/update_.c:$(OBJDIR)/update.h \ + $(OBJDIR)/url_.c:$(OBJDIR)/url.h \ + $(OBJDIR)/user_.c:$(OBJDIR)/user.h \ + $(OBJDIR)/utf8_.c:$(OBJDIR)/utf8.h \ + $(OBJDIR)/util_.c:$(OBJDIR)/util.h \ + $(OBJDIR)/verify_.c:$(OBJDIR)/verify.h \ + $(OBJDIR)/vfile_.c:$(OBJDIR)/vfile.h \ + $(OBJDIR)/wiki_.c:$(OBJDIR)/wiki.h \ + $(OBJDIR)/wikiformat_.c:$(OBJDIR)/wikiformat.h \ + $(OBJDIR)/winfile_.c:$(OBJDIR)/winfile.h \ + $(OBJDIR)/winhttp_.c:$(OBJDIR)/winhttp.h \ + $(OBJDIR)/wysiwyg_.c:$(OBJDIR)/wysiwyg.h \ + $(OBJDIR)/xfer_.c:$(OBJDIR)/xfer.h \ + $(OBJDIR)/xfersetup_.c:$(OBJDIR)/xfersetup.h \ + $(OBJDIR)/zip_.c:$(OBJDIR)/zip.h \ + $(SRCDIR)/sqlite3.h \ + $(SRCDIR)/th.h \ + $(OBJDIR)/VERSION.h + echo Done >$(OBJDIR)/headers + +$(OBJDIR)/headers: Makefile + +Makefile: + +$(OBJDIR)/add_.c: $(SRCDIR)/add.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/add.c >$@ + +$(OBJDIR)/add.o: $(OBJDIR)/add_.c $(OBJDIR)/add.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/add.o -c $(OBJDIR)/add_.c + +$(OBJDIR)/add.h: $(OBJDIR)/headers + +$(OBJDIR)/allrepo_.c: $(SRCDIR)/allrepo.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/allrepo.c >$@ + +$(OBJDIR)/allrepo.o: $(OBJDIR)/allrepo_.c $(OBJDIR)/allrepo.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/allrepo.o -c $(OBJDIR)/allrepo_.c + +$(OBJDIR)/allrepo.h: $(OBJDIR)/headers + +$(OBJDIR)/attach_.c: $(SRCDIR)/attach.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/attach.c >$@ + +$(OBJDIR)/attach.o: $(OBJDIR)/attach_.c $(OBJDIR)/attach.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/attach.o -c $(OBJDIR)/attach_.c + +$(OBJDIR)/attach.h: $(OBJDIR)/headers + +$(OBJDIR)/bag_.c: $(SRCDIR)/bag.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/bag.c >$@ + +$(OBJDIR)/bag.o: $(OBJDIR)/bag_.c $(OBJDIR)/bag.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/bag.o -c $(OBJDIR)/bag_.c + +$(OBJDIR)/bag.h: $(OBJDIR)/headers + +$(OBJDIR)/bisect_.c: $(SRCDIR)/bisect.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/bisect.c >$@ + +$(OBJDIR)/bisect.o: $(OBJDIR)/bisect_.c $(OBJDIR)/bisect.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/bisect.o -c $(OBJDIR)/bisect_.c + +$(OBJDIR)/bisect.h: $(OBJDIR)/headers + +$(OBJDIR)/blob_.c: $(SRCDIR)/blob.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/blob.c >$@ + +$(OBJDIR)/blob.o: $(OBJDIR)/blob_.c $(OBJDIR)/blob.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/blob.o -c $(OBJDIR)/blob_.c + +$(OBJDIR)/blob.h: $(OBJDIR)/headers + +$(OBJDIR)/branch_.c: $(SRCDIR)/branch.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/branch.c >$@ + +$(OBJDIR)/branch.o: $(OBJDIR)/branch_.c $(OBJDIR)/branch.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/branch.o -c $(OBJDIR)/branch_.c + +$(OBJDIR)/branch.h: $(OBJDIR)/headers + +$(OBJDIR)/browse_.c: $(SRCDIR)/browse.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/browse.c >$@ + +$(OBJDIR)/browse.o: $(OBJDIR)/browse_.c $(OBJDIR)/browse.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/browse.o -c $(OBJDIR)/browse_.c + +$(OBJDIR)/browse.h: $(OBJDIR)/headers + +$(OBJDIR)/builtin_.c: $(SRCDIR)/builtin.c $(TRANSLATE) 
+ $(TRANSLATE) $(SRCDIR)/builtin.c >$@ + +$(OBJDIR)/builtin.o: $(OBJDIR)/builtin_.c $(OBJDIR)/builtin.h $(OBJDIR)/builtin_data.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/builtin.o -c $(OBJDIR)/builtin_.c + +$(OBJDIR)/builtin.h: $(OBJDIR)/headers + +$(OBJDIR)/bundle_.c: $(SRCDIR)/bundle.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/bundle.c >$@ + +$(OBJDIR)/bundle.o: $(OBJDIR)/bundle_.c $(OBJDIR)/bundle.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/bundle.o -c $(OBJDIR)/bundle_.c + +$(OBJDIR)/bundle.h: $(OBJDIR)/headers + +$(OBJDIR)/cache_.c: $(SRCDIR)/cache.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/cache.c >$@ + +$(OBJDIR)/cache.o: $(OBJDIR)/cache_.c $(OBJDIR)/cache.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/cache.o -c $(OBJDIR)/cache_.c + +$(OBJDIR)/cache.h: $(OBJDIR)/headers + +$(OBJDIR)/captcha_.c: $(SRCDIR)/captcha.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/captcha.c >$@ + +$(OBJDIR)/captcha.o: $(OBJDIR)/captcha_.c $(OBJDIR)/captcha.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/captcha.o -c $(OBJDIR)/captcha_.c + +$(OBJDIR)/captcha.h: $(OBJDIR)/headers + +$(OBJDIR)/cgi_.c: $(SRCDIR)/cgi.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/cgi.c >$@ + +$(OBJDIR)/cgi.o: $(OBJDIR)/cgi_.c $(OBJDIR)/cgi.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/cgi.o -c $(OBJDIR)/cgi_.c + +$(OBJDIR)/cgi.h: $(OBJDIR)/headers + +$(OBJDIR)/checkin_.c: $(SRCDIR)/checkin.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/checkin.c >$@ + +$(OBJDIR)/checkin.o: $(OBJDIR)/checkin_.c $(OBJDIR)/checkin.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/checkin.o -c $(OBJDIR)/checkin_.c + +$(OBJDIR)/checkin.h: $(OBJDIR)/headers + +$(OBJDIR)/checkout_.c: $(SRCDIR)/checkout.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/checkout.c >$@ + +$(OBJDIR)/checkout.o: $(OBJDIR)/checkout_.c $(OBJDIR)/checkout.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/checkout.o -c $(OBJDIR)/checkout_.c + +$(OBJDIR)/checkout.h: $(OBJDIR)/headers + +$(OBJDIR)/clearsign_.c: $(SRCDIR)/clearsign.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/clearsign.c >$@ + +$(OBJDIR)/clearsign.o: $(OBJDIR)/clearsign_.c $(OBJDIR)/clearsign.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/clearsign.o -c $(OBJDIR)/clearsign_.c + +$(OBJDIR)/clearsign.h: $(OBJDIR)/headers + +$(OBJDIR)/clone_.c: $(SRCDIR)/clone.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/clone.c >$@ + +$(OBJDIR)/clone.o: $(OBJDIR)/clone_.c $(OBJDIR)/clone.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/clone.o -c $(OBJDIR)/clone_.c + +$(OBJDIR)/clone.h: $(OBJDIR)/headers + +$(OBJDIR)/comformat_.c: $(SRCDIR)/comformat.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/comformat.c >$@ + +$(OBJDIR)/comformat.o: $(OBJDIR)/comformat_.c $(OBJDIR)/comformat.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/comformat.o -c $(OBJDIR)/comformat_.c + +$(OBJDIR)/comformat.h: $(OBJDIR)/headers + +$(OBJDIR)/configure_.c: $(SRCDIR)/configure.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/configure.c >$@ + +$(OBJDIR)/configure.o: $(OBJDIR)/configure_.c $(OBJDIR)/configure.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/configure.o -c $(OBJDIR)/configure_.c + +$(OBJDIR)/configure.h: $(OBJDIR)/headers + +$(OBJDIR)/content_.c: $(SRCDIR)/content.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/content.c >$@ + +$(OBJDIR)/content.o: $(OBJDIR)/content_.c $(OBJDIR)/content.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/content.o -c $(OBJDIR)/content_.c + +$(OBJDIR)/content.h: $(OBJDIR)/headers + +$(OBJDIR)/db_.c: $(SRCDIR)/db.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/db.c >$@ + +$(OBJDIR)/db.o: $(OBJDIR)/db_.c $(OBJDIR)/db.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/db.o -c $(OBJDIR)/db_.c + +$(OBJDIR)/db.h: $(OBJDIR)/headers + 
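+# NOTE (editorial): each remaining source file repeats the three-rule pattern
+# shown above (e.g. for db.c): run the file through $(TRANSLATE), compile the
+# translated copy with $(XTCC), and make its generated header depend on the
+# single $(OBJDIR)/headers stamp target, which runs makeheaders once for all
+# files.  As a purely hypothetical sketch, a new source file "newfile.c"
+# would follow the same shape (recipe lines tab-indented in real rules):
+#
+#   $(OBJDIR)/newfile_.c: $(SRCDIR)/newfile.c $(TRANSLATE)
+#           $(TRANSLATE) $(SRCDIR)/newfile.c >$@
+#
+#   $(OBJDIR)/newfile.o: $(OBJDIR)/newfile_.c $(OBJDIR)/newfile.h $(SRCDIR)/config.h
+#           $(XTCC) -o $(OBJDIR)/newfile.o -c $(OBJDIR)/newfile_.c
+#
+#   $(OBJDIR)/newfile.h: $(OBJDIR)/headers
+#
+# In practice such rules are regenerated from src/makemake.tcl (as the
+# win/Makefile.msc header later in this change notes) rather than edited by
+# hand, and the new file would also need entries in SRC, TRANS_SRC, OBJ and
+# the $(MAKEHEADERS) argument list above.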
+$(OBJDIR)/delta_.c: $(SRCDIR)/delta.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/delta.c >$@ + +$(OBJDIR)/delta.o: $(OBJDIR)/delta_.c $(OBJDIR)/delta.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/delta.o -c $(OBJDIR)/delta_.c + +$(OBJDIR)/delta.h: $(OBJDIR)/headers + +$(OBJDIR)/deltacmd_.c: $(SRCDIR)/deltacmd.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/deltacmd.c >$@ + +$(OBJDIR)/deltacmd.o: $(OBJDIR)/deltacmd_.c $(OBJDIR)/deltacmd.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/deltacmd.o -c $(OBJDIR)/deltacmd_.c + +$(OBJDIR)/deltacmd.h: $(OBJDIR)/headers + +$(OBJDIR)/descendants_.c: $(SRCDIR)/descendants.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/descendants.c >$@ + +$(OBJDIR)/descendants.o: $(OBJDIR)/descendants_.c $(OBJDIR)/descendants.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/descendants.o -c $(OBJDIR)/descendants_.c + +$(OBJDIR)/descendants.h: $(OBJDIR)/headers + +$(OBJDIR)/diff_.c: $(SRCDIR)/diff.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/diff.c >$@ + +$(OBJDIR)/diff.o: $(OBJDIR)/diff_.c $(OBJDIR)/diff.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/diff.o -c $(OBJDIR)/diff_.c + +$(OBJDIR)/diff.h: $(OBJDIR)/headers + +$(OBJDIR)/diffcmd_.c: $(SRCDIR)/diffcmd.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/diffcmd.c >$@ + +$(OBJDIR)/diffcmd.o: $(OBJDIR)/diffcmd_.c $(OBJDIR)/diffcmd.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/diffcmd.o -c $(OBJDIR)/diffcmd_.c + +$(OBJDIR)/diffcmd.h: $(OBJDIR)/headers + +$(OBJDIR)/doc_.c: $(SRCDIR)/doc.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/doc.c >$@ + +$(OBJDIR)/doc.o: $(OBJDIR)/doc_.c $(OBJDIR)/doc.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/doc.o -c $(OBJDIR)/doc_.c + +$(OBJDIR)/doc.h: $(OBJDIR)/headers + +$(OBJDIR)/encode_.c: $(SRCDIR)/encode.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/encode.c >$@ + +$(OBJDIR)/encode.o: $(OBJDIR)/encode_.c $(OBJDIR)/encode.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/encode.o -c $(OBJDIR)/encode_.c + +$(OBJDIR)/encode.h: $(OBJDIR)/headers + +$(OBJDIR)/event_.c: $(SRCDIR)/event.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/event.c >$@ + +$(OBJDIR)/event.o: $(OBJDIR)/event_.c $(OBJDIR)/event.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/event.o -c $(OBJDIR)/event_.c + +$(OBJDIR)/event.h: $(OBJDIR)/headers + +$(OBJDIR)/export_.c: $(SRCDIR)/export.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/export.c >$@ + +$(OBJDIR)/export.o: $(OBJDIR)/export_.c $(OBJDIR)/export.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/export.o -c $(OBJDIR)/export_.c + +$(OBJDIR)/export.h: $(OBJDIR)/headers + +$(OBJDIR)/file_.c: $(SRCDIR)/file.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/file.c >$@ + +$(OBJDIR)/file.o: $(OBJDIR)/file_.c $(OBJDIR)/file.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/file.o -c $(OBJDIR)/file_.c + +$(OBJDIR)/file.h: $(OBJDIR)/headers + +$(OBJDIR)/finfo_.c: $(SRCDIR)/finfo.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/finfo.c >$@ + +$(OBJDIR)/finfo.o: $(OBJDIR)/finfo_.c $(OBJDIR)/finfo.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/finfo.o -c $(OBJDIR)/finfo_.c + +$(OBJDIR)/finfo.h: $(OBJDIR)/headers + +$(OBJDIR)/foci_.c: $(SRCDIR)/foci.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/foci.c >$@ + +$(OBJDIR)/foci.o: $(OBJDIR)/foci_.c $(OBJDIR)/foci.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/foci.o -c $(OBJDIR)/foci_.c + +$(OBJDIR)/foci.h: $(OBJDIR)/headers + +$(OBJDIR)/fusefs_.c: $(SRCDIR)/fusefs.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/fusefs.c >$@ + +$(OBJDIR)/fusefs.o: $(OBJDIR)/fusefs_.c $(OBJDIR)/fusefs.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/fusefs.o -c $(OBJDIR)/fusefs_.c + +$(OBJDIR)/fusefs.h: $(OBJDIR)/headers + +$(OBJDIR)/glob_.c: $(SRCDIR)/glob.c $(TRANSLATE) + $(TRANSLATE) 
$(SRCDIR)/glob.c >$@ + +$(OBJDIR)/glob.o: $(OBJDIR)/glob_.c $(OBJDIR)/glob.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/glob.o -c $(OBJDIR)/glob_.c + +$(OBJDIR)/glob.h: $(OBJDIR)/headers + +$(OBJDIR)/graph_.c: $(SRCDIR)/graph.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/graph.c >$@ + +$(OBJDIR)/graph.o: $(OBJDIR)/graph_.c $(OBJDIR)/graph.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/graph.o -c $(OBJDIR)/graph_.c + +$(OBJDIR)/graph.h: $(OBJDIR)/headers + +$(OBJDIR)/gzip_.c: $(SRCDIR)/gzip.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/gzip.c >$@ + +$(OBJDIR)/gzip.o: $(OBJDIR)/gzip_.c $(OBJDIR)/gzip.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/gzip.o -c $(OBJDIR)/gzip_.c + +$(OBJDIR)/gzip.h: $(OBJDIR)/headers + +$(OBJDIR)/http_.c: $(SRCDIR)/http.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/http.c >$@ + +$(OBJDIR)/http.o: $(OBJDIR)/http_.c $(OBJDIR)/http.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/http.o -c $(OBJDIR)/http_.c + +$(OBJDIR)/http.h: $(OBJDIR)/headers + +$(OBJDIR)/http_socket_.c: $(SRCDIR)/http_socket.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/http_socket.c >$@ + +$(OBJDIR)/http_socket.o: $(OBJDIR)/http_socket_.c $(OBJDIR)/http_socket.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/http_socket.o -c $(OBJDIR)/http_socket_.c + +$(OBJDIR)/http_socket.h: $(OBJDIR)/headers + +$(OBJDIR)/http_ssl_.c: $(SRCDIR)/http_ssl.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/http_ssl.c >$@ + +$(OBJDIR)/http_ssl.o: $(OBJDIR)/http_ssl_.c $(OBJDIR)/http_ssl.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/http_ssl.o -c $(OBJDIR)/http_ssl_.c + +$(OBJDIR)/http_ssl.h: $(OBJDIR)/headers + +$(OBJDIR)/http_transport_.c: $(SRCDIR)/http_transport.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/http_transport.c >$@ + +$(OBJDIR)/http_transport.o: $(OBJDIR)/http_transport_.c $(OBJDIR)/http_transport.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/http_transport.o -c $(OBJDIR)/http_transport_.c + +$(OBJDIR)/http_transport.h: $(OBJDIR)/headers + +$(OBJDIR)/import_.c: $(SRCDIR)/import.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/import.c >$@ + +$(OBJDIR)/import.o: $(OBJDIR)/import_.c $(OBJDIR)/import.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/import.o -c $(OBJDIR)/import_.c + +$(OBJDIR)/import.h: $(OBJDIR)/headers + +$(OBJDIR)/info_.c: $(SRCDIR)/info.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/info.c >$@ + +$(OBJDIR)/info.o: $(OBJDIR)/info_.c $(OBJDIR)/info.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/info.o -c $(OBJDIR)/info_.c + +$(OBJDIR)/info.h: $(OBJDIR)/headers + +$(OBJDIR)/json_.c: $(SRCDIR)/json.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json.c >$@ + +$(OBJDIR)/json.o: $(OBJDIR)/json_.c $(OBJDIR)/json.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json.o -c $(OBJDIR)/json_.c + +$(OBJDIR)/json.h: $(OBJDIR)/headers + +$(OBJDIR)/json_artifact_.c: $(SRCDIR)/json_artifact.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_artifact.c >$@ + +$(OBJDIR)/json_artifact.o: $(OBJDIR)/json_artifact_.c $(OBJDIR)/json_artifact.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_artifact.o -c $(OBJDIR)/json_artifact_.c + +$(OBJDIR)/json_artifact.h: $(OBJDIR)/headers + +$(OBJDIR)/json_branch_.c: $(SRCDIR)/json_branch.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_branch.c >$@ + +$(OBJDIR)/json_branch.o: $(OBJDIR)/json_branch_.c $(OBJDIR)/json_branch.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_branch.o -c $(OBJDIR)/json_branch_.c + +$(OBJDIR)/json_branch.h: $(OBJDIR)/headers + +$(OBJDIR)/json_config_.c: $(SRCDIR)/json_config.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_config.c >$@ + +$(OBJDIR)/json_config.o: $(OBJDIR)/json_config_.c $(OBJDIR)/json_config.h $(SRCDIR)/config.h + $(XTCC) -o 
$(OBJDIR)/json_config.o -c $(OBJDIR)/json_config_.c + +$(OBJDIR)/json_config.h: $(OBJDIR)/headers + +$(OBJDIR)/json_diff_.c: $(SRCDIR)/json_diff.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_diff.c >$@ + +$(OBJDIR)/json_diff.o: $(OBJDIR)/json_diff_.c $(OBJDIR)/json_diff.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_diff.o -c $(OBJDIR)/json_diff_.c + +$(OBJDIR)/json_diff.h: $(OBJDIR)/headers + +$(OBJDIR)/json_dir_.c: $(SRCDIR)/json_dir.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_dir.c >$@ + +$(OBJDIR)/json_dir.o: $(OBJDIR)/json_dir_.c $(OBJDIR)/json_dir.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_dir.o -c $(OBJDIR)/json_dir_.c + +$(OBJDIR)/json_dir.h: $(OBJDIR)/headers + +$(OBJDIR)/json_finfo_.c: $(SRCDIR)/json_finfo.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_finfo.c >$@ + +$(OBJDIR)/json_finfo.o: $(OBJDIR)/json_finfo_.c $(OBJDIR)/json_finfo.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_finfo.o -c $(OBJDIR)/json_finfo_.c + +$(OBJDIR)/json_finfo.h: $(OBJDIR)/headers + +$(OBJDIR)/json_login_.c: $(SRCDIR)/json_login.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_login.c >$@ + +$(OBJDIR)/json_login.o: $(OBJDIR)/json_login_.c $(OBJDIR)/json_login.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_login.o -c $(OBJDIR)/json_login_.c + +$(OBJDIR)/json_login.h: $(OBJDIR)/headers + +$(OBJDIR)/json_query_.c: $(SRCDIR)/json_query.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_query.c >$@ + +$(OBJDIR)/json_query.o: $(OBJDIR)/json_query_.c $(OBJDIR)/json_query.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_query.o -c $(OBJDIR)/json_query_.c + +$(OBJDIR)/json_query.h: $(OBJDIR)/headers + +$(OBJDIR)/json_report_.c: $(SRCDIR)/json_report.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_report.c >$@ + +$(OBJDIR)/json_report.o: $(OBJDIR)/json_report_.c $(OBJDIR)/json_report.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_report.o -c $(OBJDIR)/json_report_.c + +$(OBJDIR)/json_report.h: $(OBJDIR)/headers + +$(OBJDIR)/json_status_.c: $(SRCDIR)/json_status.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_status.c >$@ + +$(OBJDIR)/json_status.o: $(OBJDIR)/json_status_.c $(OBJDIR)/json_status.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_status.o -c $(OBJDIR)/json_status_.c + +$(OBJDIR)/json_status.h: $(OBJDIR)/headers + +$(OBJDIR)/json_tag_.c: $(SRCDIR)/json_tag.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_tag.c >$@ + +$(OBJDIR)/json_tag.o: $(OBJDIR)/json_tag_.c $(OBJDIR)/json_tag.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_tag.o -c $(OBJDIR)/json_tag_.c + +$(OBJDIR)/json_tag.h: $(OBJDIR)/headers + +$(OBJDIR)/json_timeline_.c: $(SRCDIR)/json_timeline.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_timeline.c >$@ + +$(OBJDIR)/json_timeline.o: $(OBJDIR)/json_timeline_.c $(OBJDIR)/json_timeline.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_timeline.o -c $(OBJDIR)/json_timeline_.c + +$(OBJDIR)/json_timeline.h: $(OBJDIR)/headers + +$(OBJDIR)/json_user_.c: $(SRCDIR)/json_user.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_user.c >$@ + +$(OBJDIR)/json_user.o: $(OBJDIR)/json_user_.c $(OBJDIR)/json_user.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_user.o -c $(OBJDIR)/json_user_.c + +$(OBJDIR)/json_user.h: $(OBJDIR)/headers + +$(OBJDIR)/json_wiki_.c: $(SRCDIR)/json_wiki.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/json_wiki.c >$@ + +$(OBJDIR)/json_wiki.o: $(OBJDIR)/json_wiki_.c $(OBJDIR)/json_wiki.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/json_wiki.o -c $(OBJDIR)/json_wiki_.c + +$(OBJDIR)/json_wiki.h: $(OBJDIR)/headers + +$(OBJDIR)/leaf_.c: $(SRCDIR)/leaf.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/leaf.c >$@ + 
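+# NOTE (editorial): a generic sketch of the computed-variable-name idiom used
+# earlier to pick optional objects (the SQLITE3_OBJ.* / MINIZ_OBJ.* variables
+# feeding EXTRAOBJ).  The names below are hypothetical and only restate the
+# shape of that mechanism: one variable per possible value of the switch,
+# plus an empty-suffix fallback for when the switch is undefined:
+#
+#   FEATURE_OBJ.1 = $(OBJDIR)/feature.o      # used when FEATURE=1
+#   FEATURE_OBJ.0 =                          # used when FEATURE=0
+#   FEATURE_OBJ.  = $(FEATURE_OBJ.0)         # used when FEATURE is undefined
+#
+#   EXTRAOBJ += $(FEATURE_OBJ.$(FEATURE))
+#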
+$(OBJDIR)/leaf.o: $(OBJDIR)/leaf_.c $(OBJDIR)/leaf.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/leaf.o -c $(OBJDIR)/leaf_.c + +$(OBJDIR)/leaf.h: $(OBJDIR)/headers + +$(OBJDIR)/loadctrl_.c: $(SRCDIR)/loadctrl.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/loadctrl.c >$@ + +$(OBJDIR)/loadctrl.o: $(OBJDIR)/loadctrl_.c $(OBJDIR)/loadctrl.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/loadctrl.o -c $(OBJDIR)/loadctrl_.c + +$(OBJDIR)/loadctrl.h: $(OBJDIR)/headers + +$(OBJDIR)/login_.c: $(SRCDIR)/login.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/login.c >$@ + +$(OBJDIR)/login.o: $(OBJDIR)/login_.c $(OBJDIR)/login.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/login.o -c $(OBJDIR)/login_.c + +$(OBJDIR)/login.h: $(OBJDIR)/headers + +$(OBJDIR)/lookslike_.c: $(SRCDIR)/lookslike.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/lookslike.c >$@ + +$(OBJDIR)/lookslike.o: $(OBJDIR)/lookslike_.c $(OBJDIR)/lookslike.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/lookslike.o -c $(OBJDIR)/lookslike_.c + +$(OBJDIR)/lookslike.h: $(OBJDIR)/headers + +$(OBJDIR)/main_.c: $(SRCDIR)/main.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/main.c >$@ + +$(OBJDIR)/main.o: $(OBJDIR)/main_.c $(OBJDIR)/main.h $(OBJDIR)/page_index.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/main.o -c $(OBJDIR)/main_.c + +$(OBJDIR)/main.h: $(OBJDIR)/headers + +$(OBJDIR)/manifest_.c: $(SRCDIR)/manifest.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/manifest.c >$@ + +$(OBJDIR)/manifest.o: $(OBJDIR)/manifest_.c $(OBJDIR)/manifest.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/manifest.o -c $(OBJDIR)/manifest_.c + +$(OBJDIR)/manifest.h: $(OBJDIR)/headers + +$(OBJDIR)/markdown_.c: $(SRCDIR)/markdown.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/markdown.c >$@ + +$(OBJDIR)/markdown.o: $(OBJDIR)/markdown_.c $(OBJDIR)/markdown.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/markdown.o -c $(OBJDIR)/markdown_.c + +$(OBJDIR)/markdown.h: $(OBJDIR)/headers + +$(OBJDIR)/markdown_html_.c: $(SRCDIR)/markdown_html.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/markdown_html.c >$@ + +$(OBJDIR)/markdown_html.o: $(OBJDIR)/markdown_html_.c $(OBJDIR)/markdown_html.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/markdown_html.o -c $(OBJDIR)/markdown_html_.c + +$(OBJDIR)/markdown_html.h: $(OBJDIR)/headers + +$(OBJDIR)/md5_.c: $(SRCDIR)/md5.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/md5.c >$@ + +$(OBJDIR)/md5.o: $(OBJDIR)/md5_.c $(OBJDIR)/md5.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/md5.o -c $(OBJDIR)/md5_.c + +$(OBJDIR)/md5.h: $(OBJDIR)/headers + +$(OBJDIR)/merge_.c: $(SRCDIR)/merge.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/merge.c >$@ + +$(OBJDIR)/merge.o: $(OBJDIR)/merge_.c $(OBJDIR)/merge.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/merge.o -c $(OBJDIR)/merge_.c + +$(OBJDIR)/merge.h: $(OBJDIR)/headers + +$(OBJDIR)/merge3_.c: $(SRCDIR)/merge3.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/merge3.c >$@ + +$(OBJDIR)/merge3.o: $(OBJDIR)/merge3_.c $(OBJDIR)/merge3.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/merge3.o -c $(OBJDIR)/merge3_.c + +$(OBJDIR)/merge3.h: $(OBJDIR)/headers + +$(OBJDIR)/moderate_.c: $(SRCDIR)/moderate.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/moderate.c >$@ + +$(OBJDIR)/moderate.o: $(OBJDIR)/moderate_.c $(OBJDIR)/moderate.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/moderate.o -c $(OBJDIR)/moderate_.c + +$(OBJDIR)/moderate.h: $(OBJDIR)/headers + +$(OBJDIR)/name_.c: $(SRCDIR)/name.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/name.c >$@ + +$(OBJDIR)/name.o: $(OBJDIR)/name_.c $(OBJDIR)/name.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/name.o -c $(OBJDIR)/name_.c + +$(OBJDIR)/name.h: $(OBJDIR)/headers + +$(OBJDIR)/path_.c: 
$(SRCDIR)/path.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/path.c >$@ + +$(OBJDIR)/path.o: $(OBJDIR)/path_.c $(OBJDIR)/path.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/path.o -c $(OBJDIR)/path_.c + +$(OBJDIR)/path.h: $(OBJDIR)/headers + +$(OBJDIR)/piechart_.c: $(SRCDIR)/piechart.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/piechart.c >$@ + +$(OBJDIR)/piechart.o: $(OBJDIR)/piechart_.c $(OBJDIR)/piechart.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/piechart.o -c $(OBJDIR)/piechart_.c + +$(OBJDIR)/piechart.h: $(OBJDIR)/headers + +$(OBJDIR)/pivot_.c: $(SRCDIR)/pivot.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/pivot.c >$@ + +$(OBJDIR)/pivot.o: $(OBJDIR)/pivot_.c $(OBJDIR)/pivot.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/pivot.o -c $(OBJDIR)/pivot_.c + +$(OBJDIR)/pivot.h: $(OBJDIR)/headers + +$(OBJDIR)/popen_.c: $(SRCDIR)/popen.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/popen.c >$@ + +$(OBJDIR)/popen.o: $(OBJDIR)/popen_.c $(OBJDIR)/popen.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/popen.o -c $(OBJDIR)/popen_.c + +$(OBJDIR)/popen.h: $(OBJDIR)/headers + +$(OBJDIR)/pqueue_.c: $(SRCDIR)/pqueue.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/pqueue.c >$@ + +$(OBJDIR)/pqueue.o: $(OBJDIR)/pqueue_.c $(OBJDIR)/pqueue.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/pqueue.o -c $(OBJDIR)/pqueue_.c + +$(OBJDIR)/pqueue.h: $(OBJDIR)/headers + +$(OBJDIR)/printf_.c: $(SRCDIR)/printf.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/printf.c >$@ + +$(OBJDIR)/printf.o: $(OBJDIR)/printf_.c $(OBJDIR)/printf.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/printf.o -c $(OBJDIR)/printf_.c + +$(OBJDIR)/printf.h: $(OBJDIR)/headers + +$(OBJDIR)/publish_.c: $(SRCDIR)/publish.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/publish.c >$@ + +$(OBJDIR)/publish.o: $(OBJDIR)/publish_.c $(OBJDIR)/publish.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/publish.o -c $(OBJDIR)/publish_.c + +$(OBJDIR)/publish.h: $(OBJDIR)/headers + +$(OBJDIR)/purge_.c: $(SRCDIR)/purge.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/purge.c >$@ + +$(OBJDIR)/purge.o: $(OBJDIR)/purge_.c $(OBJDIR)/purge.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/purge.o -c $(OBJDIR)/purge_.c + +$(OBJDIR)/purge.h: $(OBJDIR)/headers + +$(OBJDIR)/rebuild_.c: $(SRCDIR)/rebuild.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/rebuild.c >$@ + +$(OBJDIR)/rebuild.o: $(OBJDIR)/rebuild_.c $(OBJDIR)/rebuild.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/rebuild.o -c $(OBJDIR)/rebuild_.c + +$(OBJDIR)/rebuild.h: $(OBJDIR)/headers + +$(OBJDIR)/regexp_.c: $(SRCDIR)/regexp.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/regexp.c >$@ + +$(OBJDIR)/regexp.o: $(OBJDIR)/regexp_.c $(OBJDIR)/regexp.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/regexp.o -c $(OBJDIR)/regexp_.c + +$(OBJDIR)/regexp.h: $(OBJDIR)/headers + +$(OBJDIR)/report_.c: $(SRCDIR)/report.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/report.c >$@ + +$(OBJDIR)/report.o: $(OBJDIR)/report_.c $(OBJDIR)/report.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/report.o -c $(OBJDIR)/report_.c + +$(OBJDIR)/report.h: $(OBJDIR)/headers + +$(OBJDIR)/rss_.c: $(SRCDIR)/rss.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/rss.c >$@ + +$(OBJDIR)/rss.o: $(OBJDIR)/rss_.c $(OBJDIR)/rss.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/rss.o -c $(OBJDIR)/rss_.c + +$(OBJDIR)/rss.h: $(OBJDIR)/headers + +$(OBJDIR)/schema_.c: $(SRCDIR)/schema.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/schema.c >$@ + +$(OBJDIR)/schema.o: $(OBJDIR)/schema_.c $(OBJDIR)/schema.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/schema.o -c $(OBJDIR)/schema_.c + +$(OBJDIR)/schema.h: $(OBJDIR)/headers + +$(OBJDIR)/search_.c: $(SRCDIR)/search.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/search.c 
>$@ + +$(OBJDIR)/search.o: $(OBJDIR)/search_.c $(OBJDIR)/search.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/search.o -c $(OBJDIR)/search_.c + +$(OBJDIR)/search.h: $(OBJDIR)/headers + +$(OBJDIR)/setup_.c: $(SRCDIR)/setup.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/setup.c >$@ + +$(OBJDIR)/setup.o: $(OBJDIR)/setup_.c $(OBJDIR)/setup.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/setup.o -c $(OBJDIR)/setup_.c + +$(OBJDIR)/setup.h: $(OBJDIR)/headers + +$(OBJDIR)/sha1_.c: $(SRCDIR)/sha1.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/sha1.c >$@ + +$(OBJDIR)/sha1.o: $(OBJDIR)/sha1_.c $(OBJDIR)/sha1.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/sha1.o -c $(OBJDIR)/sha1_.c + +$(OBJDIR)/sha1.h: $(OBJDIR)/headers + +$(OBJDIR)/shun_.c: $(SRCDIR)/shun.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/shun.c >$@ + +$(OBJDIR)/shun.o: $(OBJDIR)/shun_.c $(OBJDIR)/shun.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/shun.o -c $(OBJDIR)/shun_.c + +$(OBJDIR)/shun.h: $(OBJDIR)/headers + +$(OBJDIR)/sitemap_.c: $(SRCDIR)/sitemap.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/sitemap.c >$@ + +$(OBJDIR)/sitemap.o: $(OBJDIR)/sitemap_.c $(OBJDIR)/sitemap.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/sitemap.o -c $(OBJDIR)/sitemap_.c + +$(OBJDIR)/sitemap.h: $(OBJDIR)/headers + +$(OBJDIR)/skins_.c: $(SRCDIR)/skins.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/skins.c >$@ + +$(OBJDIR)/skins.o: $(OBJDIR)/skins_.c $(OBJDIR)/skins.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/skins.o -c $(OBJDIR)/skins_.c + +$(OBJDIR)/skins.h: $(OBJDIR)/headers + +$(OBJDIR)/sqlcmd_.c: $(SRCDIR)/sqlcmd.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/sqlcmd.c >$@ + +$(OBJDIR)/sqlcmd.o: $(OBJDIR)/sqlcmd_.c $(OBJDIR)/sqlcmd.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/sqlcmd.o -c $(OBJDIR)/sqlcmd_.c + +$(OBJDIR)/sqlcmd.h: $(OBJDIR)/headers + +$(OBJDIR)/stash_.c: $(SRCDIR)/stash.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/stash.c >$@ + +$(OBJDIR)/stash.o: $(OBJDIR)/stash_.c $(OBJDIR)/stash.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/stash.o -c $(OBJDIR)/stash_.c + +$(OBJDIR)/stash.h: $(OBJDIR)/headers + +$(OBJDIR)/stat_.c: $(SRCDIR)/stat.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/stat.c >$@ + +$(OBJDIR)/stat.o: $(OBJDIR)/stat_.c $(OBJDIR)/stat.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/stat.o -c $(OBJDIR)/stat_.c + +$(OBJDIR)/stat.h: $(OBJDIR)/headers + +$(OBJDIR)/statrep_.c: $(SRCDIR)/statrep.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/statrep.c >$@ + +$(OBJDIR)/statrep.o: $(OBJDIR)/statrep_.c $(OBJDIR)/statrep.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/statrep.o -c $(OBJDIR)/statrep_.c + +$(OBJDIR)/statrep.h: $(OBJDIR)/headers + +$(OBJDIR)/style_.c: $(SRCDIR)/style.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/style.c >$@ + +$(OBJDIR)/style.o: $(OBJDIR)/style_.c $(OBJDIR)/style.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/style.o -c $(OBJDIR)/style_.c + +$(OBJDIR)/style.h: $(OBJDIR)/headers + +$(OBJDIR)/sync_.c: $(SRCDIR)/sync.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/sync.c >$@ + +$(OBJDIR)/sync.o: $(OBJDIR)/sync_.c $(OBJDIR)/sync.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/sync.o -c $(OBJDIR)/sync_.c + +$(OBJDIR)/sync.h: $(OBJDIR)/headers + +$(OBJDIR)/tag_.c: $(SRCDIR)/tag.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/tag.c >$@ + +$(OBJDIR)/tag.o: $(OBJDIR)/tag_.c $(OBJDIR)/tag.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/tag.o -c $(OBJDIR)/tag_.c + +$(OBJDIR)/tag.h: $(OBJDIR)/headers + +$(OBJDIR)/tar_.c: $(SRCDIR)/tar.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/tar.c >$@ + +$(OBJDIR)/tar.o: $(OBJDIR)/tar_.c $(OBJDIR)/tar.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/tar.o -c $(OBJDIR)/tar_.c + +$(OBJDIR)/tar.h: 
$(OBJDIR)/headers + +$(OBJDIR)/th_main_.c: $(SRCDIR)/th_main.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/th_main.c >$@ + +$(OBJDIR)/th_main.o: $(OBJDIR)/th_main_.c $(OBJDIR)/th_main.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/th_main.o -c $(OBJDIR)/th_main_.c + +$(OBJDIR)/th_main.h: $(OBJDIR)/headers + +$(OBJDIR)/timeline_.c: $(SRCDIR)/timeline.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/timeline.c >$@ + +$(OBJDIR)/timeline.o: $(OBJDIR)/timeline_.c $(OBJDIR)/timeline.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/timeline.o -c $(OBJDIR)/timeline_.c + +$(OBJDIR)/timeline.h: $(OBJDIR)/headers + +$(OBJDIR)/tkt_.c: $(SRCDIR)/tkt.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/tkt.c >$@ + +$(OBJDIR)/tkt.o: $(OBJDIR)/tkt_.c $(OBJDIR)/tkt.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/tkt.o -c $(OBJDIR)/tkt_.c + +$(OBJDIR)/tkt.h: $(OBJDIR)/headers + +$(OBJDIR)/tktsetup_.c: $(SRCDIR)/tktsetup.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/tktsetup.c >$@ + +$(OBJDIR)/tktsetup.o: $(OBJDIR)/tktsetup_.c $(OBJDIR)/tktsetup.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/tktsetup.o -c $(OBJDIR)/tktsetup_.c + +$(OBJDIR)/tktsetup.h: $(OBJDIR)/headers + +$(OBJDIR)/undo_.c: $(SRCDIR)/undo.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/undo.c >$@ + +$(OBJDIR)/undo.o: $(OBJDIR)/undo_.c $(OBJDIR)/undo.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/undo.o -c $(OBJDIR)/undo_.c + +$(OBJDIR)/undo.h: $(OBJDIR)/headers + +$(OBJDIR)/unicode_.c: $(SRCDIR)/unicode.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/unicode.c >$@ + +$(OBJDIR)/unicode.o: $(OBJDIR)/unicode_.c $(OBJDIR)/unicode.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/unicode.o -c $(OBJDIR)/unicode_.c + +$(OBJDIR)/unicode.h: $(OBJDIR)/headers + +$(OBJDIR)/update_.c: $(SRCDIR)/update.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/update.c >$@ + +$(OBJDIR)/update.o: $(OBJDIR)/update_.c $(OBJDIR)/update.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/update.o -c $(OBJDIR)/update_.c + +$(OBJDIR)/update.h: $(OBJDIR)/headers + +$(OBJDIR)/url_.c: $(SRCDIR)/url.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/url.c >$@ + +$(OBJDIR)/url.o: $(OBJDIR)/url_.c $(OBJDIR)/url.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/url.o -c $(OBJDIR)/url_.c + +$(OBJDIR)/url.h: $(OBJDIR)/headers + +$(OBJDIR)/user_.c: $(SRCDIR)/user.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/user.c >$@ + +$(OBJDIR)/user.o: $(OBJDIR)/user_.c $(OBJDIR)/user.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/user.o -c $(OBJDIR)/user_.c + +$(OBJDIR)/user.h: $(OBJDIR)/headers + +$(OBJDIR)/utf8_.c: $(SRCDIR)/utf8.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/utf8.c >$@ + +$(OBJDIR)/utf8.o: $(OBJDIR)/utf8_.c $(OBJDIR)/utf8.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/utf8.o -c $(OBJDIR)/utf8_.c + +$(OBJDIR)/utf8.h: $(OBJDIR)/headers + +$(OBJDIR)/util_.c: $(SRCDIR)/util.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/util.c >$@ + +$(OBJDIR)/util.o: $(OBJDIR)/util_.c $(OBJDIR)/util.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/util.o -c $(OBJDIR)/util_.c + +$(OBJDIR)/util.h: $(OBJDIR)/headers + +$(OBJDIR)/verify_.c: $(SRCDIR)/verify.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/verify.c >$@ + +$(OBJDIR)/verify.o: $(OBJDIR)/verify_.c $(OBJDIR)/verify.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/verify.o -c $(OBJDIR)/verify_.c + +$(OBJDIR)/verify.h: $(OBJDIR)/headers + +$(OBJDIR)/vfile_.c: $(SRCDIR)/vfile.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/vfile.c >$@ + +$(OBJDIR)/vfile.o: $(OBJDIR)/vfile_.c $(OBJDIR)/vfile.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/vfile.o -c $(OBJDIR)/vfile_.c + +$(OBJDIR)/vfile.h: $(OBJDIR)/headers + +$(OBJDIR)/wiki_.c: $(SRCDIR)/wiki.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/wiki.c >$@ + 
+$(OBJDIR)/wiki.o: $(OBJDIR)/wiki_.c $(OBJDIR)/wiki.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/wiki.o -c $(OBJDIR)/wiki_.c + +$(OBJDIR)/wiki.h: $(OBJDIR)/headers + +$(OBJDIR)/wikiformat_.c: $(SRCDIR)/wikiformat.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/wikiformat.c >$@ + +$(OBJDIR)/wikiformat.o: $(OBJDIR)/wikiformat_.c $(OBJDIR)/wikiformat.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/wikiformat.o -c $(OBJDIR)/wikiformat_.c + +$(OBJDIR)/wikiformat.h: $(OBJDIR)/headers + +$(OBJDIR)/winfile_.c: $(SRCDIR)/winfile.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/winfile.c >$@ + +$(OBJDIR)/winfile.o: $(OBJDIR)/winfile_.c $(OBJDIR)/winfile.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/winfile.o -c $(OBJDIR)/winfile_.c + +$(OBJDIR)/winfile.h: $(OBJDIR)/headers + +$(OBJDIR)/winhttp_.c: $(SRCDIR)/winhttp.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/winhttp.c >$@ + +$(OBJDIR)/winhttp.o: $(OBJDIR)/winhttp_.c $(OBJDIR)/winhttp.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/winhttp.o -c $(OBJDIR)/winhttp_.c + +$(OBJDIR)/winhttp.h: $(OBJDIR)/headers + +$(OBJDIR)/wysiwyg_.c: $(SRCDIR)/wysiwyg.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/wysiwyg.c >$@ + +$(OBJDIR)/wysiwyg.o: $(OBJDIR)/wysiwyg_.c $(OBJDIR)/wysiwyg.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/wysiwyg.o -c $(OBJDIR)/wysiwyg_.c + +$(OBJDIR)/wysiwyg.h: $(OBJDIR)/headers + +$(OBJDIR)/xfer_.c: $(SRCDIR)/xfer.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/xfer.c >$@ + +$(OBJDIR)/xfer.o: $(OBJDIR)/xfer_.c $(OBJDIR)/xfer.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/xfer.o -c $(OBJDIR)/xfer_.c + +$(OBJDIR)/xfer.h: $(OBJDIR)/headers + +$(OBJDIR)/xfersetup_.c: $(SRCDIR)/xfersetup.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/xfersetup.c >$@ + +$(OBJDIR)/xfersetup.o: $(OBJDIR)/xfersetup_.c $(OBJDIR)/xfersetup.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/xfersetup.o -c $(OBJDIR)/xfersetup_.c + +$(OBJDIR)/xfersetup.h: $(OBJDIR)/headers + +$(OBJDIR)/zip_.c: $(SRCDIR)/zip.c $(TRANSLATE) + $(TRANSLATE) $(SRCDIR)/zip.c >$@ + +$(OBJDIR)/zip.o: $(OBJDIR)/zip_.c $(OBJDIR)/zip.h $(SRCDIR)/config.h + $(XTCC) -o $(OBJDIR)/zip.o -c $(OBJDIR)/zip_.c + +$(OBJDIR)/zip.h: $(OBJDIR)/headers + +SQLITE_OPTIONS = -DNDEBUG=1 \ + -DSQLITE_OMIT_LOAD_EXTENSION=1 \ + -DSQLITE_ENABLE_LOCKING_STYLE=0 \ + -DSQLITE_THREADSAFE=0 \ + -DSQLITE_DEFAULT_FILE_FORMAT=4 \ + -DSQLITE_OMIT_DEPRECATED \ + -DSQLITE_ENABLE_EXPLAIN_COMMENTS \ + -DSQLITE_ENABLE_FTS4 \ + -DSQLITE_ENABLE_FTS3_PARENTHESIS \ + -DSQLITE_ENABLE_DBSTAT_VTAB \ + -DSQLITE_ENABLE_JSON1 \ + -DSQLITE_ENABLE_FTS5 \ + -DSQLITE_WIN32_NO_ANSI \ + -D_HAVE__MINGW_H \ + -DSQLITE_USE_MALLOC_H \ + -DSQLITE_USE_MSIZE + +SHELL_OPTIONS = -Dmain=sqlite3_shell \ + -DSQLITE_OMIT_LOAD_EXTENSION=1 \ + -DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) \ + -DSQLITE_SHELL_DBNAME_PROC=fossil_open \ + -Daccess=file_access \ + -Dsystem=fossil_system \ + -Dgetenv=fossil_getenv \ + -Dfopen=fossil_fopen + +MINIZ_OPTIONS = -DMINIZ_NO_STDIO \ + -DMINIZ_NO_TIME \ + -DMINIZ_NO_ARCHIVE_APIS + +$(OBJDIR)/sqlite3.o: $(SRCDIR)/sqlite3.c $(SRCDIR)/../win/Makefile.mingw.mistachkin + $(XTCC) $(SQLITE_OPTIONS) $(SQLITE_CFLAGS) -c $(SRCDIR)/sqlite3.c -o $@ + +$(OBJDIR)/cson_amalgamation.o: $(SRCDIR)/cson_amalgamation.c + $(XTCC) -c $(SRCDIR)/cson_amalgamation.c -o $@ + +$(OBJDIR)/json.o $(OBJDIR)/json_artifact.o $(OBJDIR)/json_branch.o $(OBJDIR)/json_config.o $(OBJDIR)/json_diff.o $(OBJDIR)/json_dir.o $(OBJDIR)/jsos_finfo.o $(OBJDIR)/json_login.o $(OBJDIR)/json_query.o $(OBJDIR)/json_report.o $(OBJDIR)/json_status.o $(OBJDIR)/json_tag.o $(OBJDIR)/json_timeline.o $(OBJDIR)/json_user.o $(OBJDIR)/json_wiki.o : 
$(SRCDIR)/json_detail.h + +$(OBJDIR)/shell.o: $(SRCDIR)/shell.c $(SRCDIR)/sqlite3.h $(SRCDIR)/../win/Makefile.mingw.mistachkin + $(XTCC) $(SHELL_OPTIONS) $(SHELL_CFLAGS) -c $(SRCDIR)/shell.c -o $@ + +$(OBJDIR)/th.o: $(SRCDIR)/th.c + $(XTCC) -c $(SRCDIR)/th.c -o $@ + +$(OBJDIR)/th_lang.o: $(SRCDIR)/th_lang.c + $(XTCC) -c $(SRCDIR)/th_lang.c -o $@ + +$(OBJDIR)/th_tcl.o: $(SRCDIR)/th_tcl.c + $(XTCC) -c $(SRCDIR)/th_tcl.c -o $@ + +$(OBJDIR)/miniz.o: $(SRCDIR)/miniz.c + $(XTCC) $(MINIZ_OPTIONS) -c $(SRCDIR)/miniz.c -o $@ + ADDED win/Makefile.msc Index: win/Makefile.msc ================================================================== --- win/Makefile.msc +++ win/Makefile.msc @@ -0,0 +1,1748 @@ +# +############################################################################## +# WARNING: DO NOT EDIT, AUTOMATICALLY GENERATED FILE (SEE "src/makemake.tcl") +############################################################################## +# +# This Makefile will only function correctly if used from a sub-directory +# that is a direct child of the top-level directory for this project. +# +!if !exist("..\.fossil-settings") +!error "Please change the current directory to the one containing this file." +!endif + +# +# This file is automatically generated. Instead of editing this +# file, edit "makemake.tcl" then run "tclsh makemake.tcl" +# to regenerate this file. +# +B = .. +SRCDIR = $B\src +OBJDIR = . +OX = . +O = .obj +E = .exe +P = .pdb + +# Perl is only necessary if OpenSSL support is enabled and it must +# be built from source code. The PERLDIR variable should point to +# the directory containing the main Perl binary (i.e. "perl.exe"). +PERLDIR = C:\Perl\bin +PERL = perl.exe + +# Enable debugging symbols? +!ifndef DEBUG +DEBUG = 0 +!endif + +# Build the OpenSSL libraries? +!ifndef FOSSIL_BUILD_SSL +FOSSIL_BUILD_SSL = 0 +!endif + +# Build the included zlib library? +!ifndef FOSSIL_BUILD_ZLIB +FOSSIL_BUILD_ZLIB = 1 +!endif + +# Link everything except SQLite dynamically? +!ifndef FOSSIL_DYNAMIC_BUILD +FOSSIL_DYNAMIC_BUILD = 0 +!endif + +# Enable relative paths in external diff/gdiff? +!ifndef FOSSIL_ENABLE_EXEC_REL_PATHS +FOSSIL_ENABLE_EXEC_REL_PATHS = 0 +!endif + +# Enable the JSON API? +!ifndef FOSSIL_ENABLE_JSON +FOSSIL_ENABLE_JSON = 0 +!endif + +# Enable legacy treatment of the mv/rm commands? +!ifndef FOSSIL_ENABLE_LEGACY_MV_RM +FOSSIL_ENABLE_LEGACY_MV_RM = 0 +!endif + +# Enable use of miniz instead of zlib? +!ifndef FOSSIL_ENABLE_MINIZ +FOSSIL_ENABLE_MINIZ = 0 +!endif + +# Enable OpenSSL support? +!ifndef FOSSIL_ENABLE_SSL +FOSSIL_ENABLE_SSL = 0 +!endif + +# Enable the Tcl integration subsystem? +!ifndef FOSSIL_ENABLE_TCL +FOSSIL_ENABLE_TCL = 0 +!endif + +# Enable TH1 scripts in embedded documentation files? +!ifndef FOSSIL_ENABLE_TH1_DOCS +FOSSIL_ENABLE_TH1_DOCS = 0 +!endif + +# Enable TH1 hooks for commands and web pages? +!ifndef FOSSIL_ENABLE_TH1_HOOKS +FOSSIL_ENABLE_TH1_HOOKS = 0 +!endif + +# Enable support for Windows XP with Visual Studio 201x? +!ifndef FOSSIL_ENABLE_WINXP +FOSSIL_ENABLE_WINXP = 0 +!endif + +!if $(FOSSIL_ENABLE_SSL)!=0 +SSLDIR = $(B)\compat\openssl-1.0.2f +SSLINCDIR = $(SSLDIR)\inc32 +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +SSLLIBDIR = $(SSLDIR)\out32dll +!else +SSLLIBDIR = $(SSLDIR)\out32 +!endif +SSLLFLAGS = /nologo /opt:ref /debug +SSLLIB = ssleay32.lib libeay32.lib user32.lib gdi32.lib +!if "$(PLATFORM)"=="amd64" || "$(PLATFORM)"=="x64" +!message Using 'x64' platform for OpenSSL... +# BUGBUG (OpenSSL): Using "no-ssl*" here breaks the build. 
+# SSLCONFIG = VC-WIN64A no-asm no-ssl2 no-ssl3 +SSLCONFIG = VC-WIN64A no-asm +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +SSLCONFIG = $(SSLCONFIG) shared +!else +SSLCONFIG = $(SSLCONFIG) no-shared +!endif +SSLSETUP = ms\do_win64a.bat +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +SSLNMAKE = ms\ntdll.mak all +!else +SSLNMAKE = ms\nt.mak all +!endif +# BUGBUG (OpenSSL): Using "OPENSSL_NO_SSL*" here breaks dynamic builds. +!if $(FOSSIL_DYNAMIC_BUILD)==0 +SSLCFLAGS = -DOPENSSL_NO_SSL2 -DOPENSSL_NO_SSL3 +!endif +!elseif "$(PLATFORM)"=="ia64" +!message Using 'ia64' platform for OpenSSL... +# BUGBUG (OpenSSL): Using "no-ssl*" here breaks the build. +# SSLCONFIG = VC-WIN64I no-asm no-ssl2 no-ssl3 +SSLCONFIG = VC-WIN64I no-asm +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +SSLCONFIG = $(SSLCONFIG) shared +!else +SSLCONFIG = $(SSLCONFIG) no-shared +!endif +SSLSETUP = ms\do_win64i.bat +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +SSLNMAKE = ms\ntdll.mak all +!else +SSLNMAKE = ms\nt.mak all +!endif +# BUGBUG (OpenSSL): Using "OPENSSL_NO_SSL*" here breaks dynamic builds. +!if $(FOSSIL_DYNAMIC_BUILD)==0 +SSLCFLAGS = -DOPENSSL_NO_SSL2 -DOPENSSL_NO_SSL3 +!endif +!else +!message Assuming 'x86' platform for OpenSSL... +# BUGBUG (OpenSSL): Using "no-ssl*" here breaks the build. +# SSLCONFIG = VC-WIN32 no-asm no-ssl2 no-ssl3 +SSLCONFIG = VC-WIN32 no-asm +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +SSLCONFIG = $(SSLCONFIG) shared +!else +SSLCONFIG = $(SSLCONFIG) no-shared +!endif +SSLSETUP = ms\do_ms.bat +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +SSLNMAKE = ms\ntdll.mak all +!else +SSLNMAKE = ms\nt.mak all +!endif +# BUGBUG (OpenSSL): Using "OPENSSL_NO_SSL*" here breaks dynamic builds. +!if $(FOSSIL_DYNAMIC_BUILD)==0 +SSLCFLAGS = -DOPENSSL_NO_SSL2 -DOPENSSL_NO_SSL3 +!endif +!endif +!endif + +!if $(FOSSIL_ENABLE_TCL)!=0 +TCLDIR = $(B)\compat\tcl-8.6 +TCLSRCDIR = $(TCLDIR) +TCLINCDIR = $(TCLSRCDIR)\generic +!endif + +# zlib options +ZINCDIR = $(B)\compat\zlib +ZLIBDIR = $(B)\compat\zlib + +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +ZLIB = zdll.lib +!else +ZLIB = zlib.lib +!endif + +INCL = /I. 
/I$(SRCDIR) /I$B\win\include + +!if $(FOSSIL_ENABLE_MINIZ)==0 +INCL = $(INCL) /I$(ZINCDIR) +!endif + +!if $(FOSSIL_ENABLE_SSL)!=0 +INCL = $(INCL) /I$(SSLINCDIR) +!endif + +!if $(FOSSIL_ENABLE_TCL)!=0 +INCL = $(INCL) /I$(TCLINCDIR) +!endif + +CFLAGS = /nologo +LDFLAGS = + +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +LDFLAGS = $(LDFLAGS) /MANIFEST +!else +LDFLAGS = $(LDFLAGS) /NODEFAULTLIB:msvcrt /MANIFEST:NO +!endif + +!if $(FOSSIL_ENABLE_WINXP)!=0 +XPCFLAGS = $(XPCFLAGS) /D_USING_V110_SDK71_=1 +CFLAGS = $(CFLAGS) $(XPCFLAGS) +!if "$(PLATFORM)"=="amd64" || "$(PLATFORM)"=="x64" +XPLDFLAGS = $(XPLDFLAGS) /SUBSYSTEM:CONSOLE,5.02 +!else +XPLDFLAGS = $(XPLDFLAGS) /SUBSYSTEM:CONSOLE,5.01 +!endif +LDFLAGS = $(LDFLAGS) $(XPLDFLAGS) +!endif + +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +!if $(DEBUG)!=0 +CRTFLAGS = /MDd +!else +CRTFLAGS = /MD +!endif +!else +!if $(DEBUG)!=0 +CRTFLAGS = /MTd +!else +CRTFLAGS = /MT +!endif +!endif + +!if $(DEBUG)!=0 +CFLAGS = $(CFLAGS) /Zi $(CRTFLAGS) /Od +LDFLAGS = $(LDFLAGS) /DEBUG +!else +CFLAGS = $(CFLAGS) $(CRTFLAGS) /O2 +!endif + +BCC = $(CC) $(CFLAGS) +TCC = $(CC) /c $(CFLAGS) $(MSCDEF) $(INCL) +RCC = $(RC) /D_WIN32 /D_MSC_VER $(MSCDEF) $(INCL) +MTC = mt +LIBS = ws2_32.lib advapi32.lib +LIBDIR = + +!if $(FOSSIL_DYNAMIC_BUILD)!=0 +TCC = $(TCC) /DFOSSIL_DYNAMIC_BUILD=1 +RCC = $(RCC) /DFOSSIL_DYNAMIC_BUILD=1 +!endif + +!if $(FOSSIL_ENABLE_MINIZ)==0 +LIBS = $(LIBS) $(ZLIB) +LIBDIR = $(LIBDIR) /LIBPATH:$(ZLIBDIR) +!endif + +!if $(FOSSIL_ENABLE_MINIZ)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_MINIZ=1 +RCC = $(RCC) /DFOSSIL_ENABLE_MINIZ=1 +!endif + +!if $(FOSSIL_ENABLE_JSON)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_JSON=1 +RCC = $(RCC) /DFOSSIL_ENABLE_JSON=1 +!endif + +!if $(FOSSIL_ENABLE_SSL)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_SSL=1 +RCC = $(RCC) /DFOSSIL_ENABLE_SSL=1 +LIBS = $(LIBS) $(SSLLIB) +LIBDIR = $(LIBDIR) /LIBPATH:$(SSLLIBDIR) +!endif + +!if $(FOSSIL_ENABLE_EXEC_REL_PATHS)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_EXEC_REL_PATHS=1 +RCC = $(RCC) /DFOSSIL_ENABLE_EXEC_REL_PATHS=1 +!endif + +!if $(FOSSIL_ENABLE_LEGACY_MV_RM)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_LEGACY_MV_RM=1 +RCC = $(RCC) /DFOSSIL_ENABLE_LEGACY_MV_RM=1 +!endif + +!if $(FOSSIL_ENABLE_TH1_DOCS)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_TH1_DOCS=1 +RCC = $(RCC) /DFOSSIL_ENABLE_TH1_DOCS=1 +!endif + +!if $(FOSSIL_ENABLE_TH1_HOOKS)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_TH1_HOOKS=1 +RCC = $(RCC) /DFOSSIL_ENABLE_TH1_HOOKS=1 +!endif + +!if $(FOSSIL_ENABLE_TCL)!=0 +TCC = $(TCC) /DFOSSIL_ENABLE_TCL=1 +RCC = $(RCC) /DFOSSIL_ENABLE_TCL=1 +TCC = $(TCC) /DFOSSIL_ENABLE_TCL_STUBS=1 +RCC = $(RCC) /DFOSSIL_ENABLE_TCL_STUBS=1 +TCC = $(TCC) /DFOSSIL_ENABLE_TCL_PRIVATE_STUBS=1 +RCC = $(RCC) /DFOSSIL_ENABLE_TCL_PRIVATE_STUBS=1 +TCC = $(TCC) /DUSE_TCL_STUBS=1 +RCC = $(RCC) /DUSE_TCL_STUBS=1 +!endif + +SQLITE_OPTIONS = /DNDEBUG=1 \ + /DSQLITE_OMIT_LOAD_EXTENSION=1 \ + /DSQLITE_ENABLE_LOCKING_STYLE=0 \ + /DSQLITE_THREADSAFE=0 \ + /DSQLITE_DEFAULT_FILE_FORMAT=4 \ + /DSQLITE_OMIT_DEPRECATED \ + /DSQLITE_ENABLE_EXPLAIN_COMMENTS \ + /DSQLITE_ENABLE_FTS4 \ + /DSQLITE_ENABLE_FTS3_PARENTHESIS \ + /DSQLITE_ENABLE_DBSTAT_VTAB \ + /DSQLITE_ENABLE_JSON1 \ + /DSQLITE_ENABLE_FTS5 \ + /DSQLITE_WIN32_NO_ANSI + +SHELL_OPTIONS = /Dmain=sqlite3_shell \ + /DSQLITE_OMIT_LOAD_EXTENSION=1 \ + /DUSE_SYSTEM_SQLITE=$(USE_SYSTEM_SQLITE) \ + /DSQLITE_SHELL_DBNAME_PROC=fossil_open \ + /Daccess=file_access \ + /Dsystem=fossil_system \ + /Dgetenv=fossil_getenv \ + /Dfopen=fossil_fopen + +MINIZ_OPTIONS = /DMINIZ_NO_STDIO \ + /DMINIZ_NO_TIME \ + /DMINIZ_NO_ARCHIVE_APIS + +SRC = add_.c \ + allrepo_.c \ + attach_.c \ + 
bag_.c \ + bisect_.c \ + blob_.c \ + branch_.c \ + browse_.c \ + builtin_.c \ + bundle_.c \ + cache_.c \ + captcha_.c \ + cgi_.c \ + checkin_.c \ + checkout_.c \ + clearsign_.c \ + clone_.c \ + comformat_.c \ + configure_.c \ + content_.c \ + db_.c \ + delta_.c \ + deltacmd_.c \ + descendants_.c \ + diff_.c \ + diffcmd_.c \ + doc_.c \ + encode_.c \ + event_.c \ + export_.c \ + file_.c \ + finfo_.c \ + foci_.c \ + fusefs_.c \ + glob_.c \ + graph_.c \ + gzip_.c \ + http_.c \ + http_socket_.c \ + http_ssl_.c \ + http_transport_.c \ + import_.c \ + info_.c \ + json_.c \ + json_artifact_.c \ + json_branch_.c \ + json_config_.c \ + json_diff_.c \ + json_dir_.c \ + json_finfo_.c \ + json_login_.c \ + json_query_.c \ + json_report_.c \ + json_status_.c \ + json_tag_.c \ + json_timeline_.c \ + json_user_.c \ + json_wiki_.c \ + leaf_.c \ + loadctrl_.c \ + login_.c \ + lookslike_.c \ + main_.c \ + manifest_.c \ + markdown_.c \ + markdown_html_.c \ + md5_.c \ + merge_.c \ + merge3_.c \ + moderate_.c \ + name_.c \ + path_.c \ + piechart_.c \ + pivot_.c \ + popen_.c \ + pqueue_.c \ + printf_.c \ + publish_.c \ + purge_.c \ + rebuild_.c \ + regexp_.c \ + report_.c \ + rss_.c \ + schema_.c \ + search_.c \ + setup_.c \ + sha1_.c \ + shun_.c \ + sitemap_.c \ + skins_.c \ + sqlcmd_.c \ + stash_.c \ + stat_.c \ + statrep_.c \ + style_.c \ + sync_.c \ + tag_.c \ + tar_.c \ + th_main_.c \ + timeline_.c \ + tkt_.c \ + tktsetup_.c \ + undo_.c \ + unicode_.c \ + update_.c \ + url_.c \ + user_.c \ + utf8_.c \ + util_.c \ + verify_.c \ + vfile_.c \ + wiki_.c \ + wikiformat_.c \ + winfile_.c \ + winhttp_.c \ + wysiwyg_.c \ + xfer_.c \ + xfersetup_.c \ + zip_.c + +EXTRA_FILES = $(SRCDIR)\../skins/aht/details.txt \ + $(SRCDIR)\../skins/black_and_white/css.txt \ + $(SRCDIR)\../skins/black_and_white/details.txt \ + $(SRCDIR)\../skins/black_and_white/footer.txt \ + $(SRCDIR)\../skins/black_and_white/header.txt \ + $(SRCDIR)\../skins/blitz/css.txt \ + $(SRCDIR)\../skins/blitz/details.txt \ + $(SRCDIR)\../skins/blitz/footer.txt \ + $(SRCDIR)\../skins/blitz/header.txt \ + $(SRCDIR)\../skins/blitz/ticket.txt \ + $(SRCDIR)\../skins/blitz_no_logo/css.txt \ + $(SRCDIR)\../skins/blitz_no_logo/details.txt \ + $(SRCDIR)\../skins/blitz_no_logo/footer.txt \ + $(SRCDIR)\../skins/blitz_no_logo/header.txt \ + $(SRCDIR)\../skins/blitz_no_logo/ticket.txt \ + $(SRCDIR)\../skins/default/css.txt \ + $(SRCDIR)\../skins/default/details.txt \ + $(SRCDIR)\../skins/default/footer.txt \ + $(SRCDIR)\../skins/default/header.txt \ + $(SRCDIR)\../skins/eagle/css.txt \ + $(SRCDIR)\../skins/eagle/details.txt \ + $(SRCDIR)\../skins/eagle/footer.txt \ + $(SRCDIR)\../skins/eagle/header.txt \ + $(SRCDIR)\../skins/enhanced1/css.txt \ + $(SRCDIR)\../skins/enhanced1/details.txt \ + $(SRCDIR)\../skins/enhanced1/footer.txt \ + $(SRCDIR)\../skins/enhanced1/header.txt \ + $(SRCDIR)\../skins/khaki/css.txt \ + $(SRCDIR)\../skins/khaki/details.txt \ + $(SRCDIR)\../skins/khaki/footer.txt \ + $(SRCDIR)\../skins/khaki/header.txt \ + $(SRCDIR)\../skins/original/css.txt \ + $(SRCDIR)\../skins/original/details.txt \ + $(SRCDIR)\../skins/original/footer.txt \ + $(SRCDIR)\../skins/original/header.txt \ + $(SRCDIR)\../skins/plain_gray/css.txt \ + $(SRCDIR)\../skins/plain_gray/details.txt \ + $(SRCDIR)\../skins/plain_gray/footer.txt \ + $(SRCDIR)\../skins/plain_gray/header.txt \ + $(SRCDIR)\../skins/rounded1/css.txt \ + $(SRCDIR)\../skins/rounded1/details.txt \ + $(SRCDIR)\../skins/rounded1/footer.txt \ + $(SRCDIR)\../skins/rounded1/header.txt \ + 
$(SRCDIR)\../skins/xekri/css.txt \ + $(SRCDIR)\../skins/xekri/details.txt \ + $(SRCDIR)\../skins/xekri/footer.txt \ + $(SRCDIR)\../skins/xekri/header.txt \ + $(SRCDIR)\diff.tcl \ + $(SRCDIR)\markdown.md + +OBJ = $(OX)\add$O \ + $(OX)\allrepo$O \ + $(OX)\attach$O \ + $(OX)\bag$O \ + $(OX)\bisect$O \ + $(OX)\blob$O \ + $(OX)\branch$O \ + $(OX)\browse$O \ + $(OX)\builtin$O \ + $(OX)\bundle$O \ + $(OX)\cache$O \ + $(OX)\captcha$O \ + $(OX)\cgi$O \ + $(OX)\checkin$O \ + $(OX)\checkout$O \ + $(OX)\clearsign$O \ + $(OX)\clone$O \ + $(OX)\comformat$O \ + $(OX)\configure$O \ + $(OX)\content$O \ + $(OX)\cson_amalgamation$O \ + $(OX)\db$O \ + $(OX)\delta$O \ + $(OX)\deltacmd$O \ + $(OX)\descendants$O \ + $(OX)\diff$O \ + $(OX)\diffcmd$O \ + $(OX)\doc$O \ + $(OX)\encode$O \ + $(OX)\event$O \ + $(OX)\export$O \ + $(OX)\file$O \ + $(OX)\finfo$O \ + $(OX)\foci$O \ + $(OX)\fusefs$O \ + $(OX)\glob$O \ + $(OX)\graph$O \ + $(OX)\gzip$O \ + $(OX)\http$O \ + $(OX)\http_socket$O \ + $(OX)\http_ssl$O \ + $(OX)\http_transport$O \ + $(OX)\import$O \ + $(OX)\info$O \ + $(OX)\json$O \ + $(OX)\json_artifact$O \ + $(OX)\json_branch$O \ + $(OX)\json_config$O \ + $(OX)\json_diff$O \ + $(OX)\json_dir$O \ + $(OX)\json_finfo$O \ + $(OX)\json_login$O \ + $(OX)\json_query$O \ + $(OX)\json_report$O \ + $(OX)\json_status$O \ + $(OX)\json_tag$O \ + $(OX)\json_timeline$O \ + $(OX)\json_user$O \ + $(OX)\json_wiki$O \ + $(OX)\leaf$O \ + $(OX)\loadctrl$O \ + $(OX)\login$O \ + $(OX)\lookslike$O \ + $(OX)\main$O \ + $(OX)\manifest$O \ + $(OX)\markdown$O \ + $(OX)\markdown_html$O \ + $(OX)\md5$O \ + $(OX)\merge$O \ + $(OX)\merge3$O \ + $(OX)\moderate$O \ + $(OX)\name$O \ + $(OX)\path$O \ + $(OX)\piechart$O \ + $(OX)\pivot$O \ + $(OX)\popen$O \ + $(OX)\pqueue$O \ + $(OX)\printf$O \ + $(OX)\publish$O \ + $(OX)\purge$O \ + $(OX)\rebuild$O \ + $(OX)\regexp$O \ + $(OX)\report$O \ + $(OX)\rss$O \ + $(OX)\schema$O \ + $(OX)\search$O \ + $(OX)\setup$O \ + $(OX)\sha1$O \ + $(OX)\shell$O \ + $(OX)\shun$O \ + $(OX)\sitemap$O \ + $(OX)\skins$O \ + $(OX)\sqlcmd$O \ + $(OX)\sqlite3$O \ + $(OX)\stash$O \ + $(OX)\stat$O \ + $(OX)\statrep$O \ + $(OX)\style$O \ + $(OX)\sync$O \ + $(OX)\tag$O \ + $(OX)\tar$O \ + $(OX)\th$O \ + $(OX)\th_lang$O \ + $(OX)\th_main$O \ + $(OX)\th_tcl$O \ + $(OX)\timeline$O \ + $(OX)\tkt$O \ + $(OX)\tktsetup$O \ + $(OX)\undo$O \ + $(OX)\unicode$O \ + $(OX)\update$O \ + $(OX)\url$O \ + $(OX)\user$O \ + $(OX)\utf8$O \ + $(OX)\util$O \ + $(OX)\verify$O \ + $(OX)\vfile$O \ + $(OX)\wiki$O \ + $(OX)\wikiformat$O \ + $(OX)\winfile$O \ + $(OX)\winhttp$O \ + $(OX)\wysiwyg$O \ + $(OX)\xfer$O \ + $(OX)\xfersetup$O \ + $(OX)\zip$O \ +!if $(FOSSIL_ENABLE_MINIZ)!=0 + $(OX)\miniz$O \ +!endif + $(OX)\fossil.res + + +APPNAME = $(OX)\fossil$(E) +PDBNAME = $(OX)\fossil$(P) +APPTARGETS = + +all: $(OX) $(APPNAME) + +zlib: + @echo Building zlib from "$(ZLIBDIR)"... +!if $(FOSSIL_ENABLE_WINXP)!=0 + @pushd "$(ZLIBDIR)" && $(MAKE) /f win32\Makefile.msc $(ZLIB) "CC=cl $(XPCFLAGS)" "LD=link $(XPLDFLAGS)" && popd +!else + @pushd "$(ZLIBDIR)" && $(MAKE) /f win32\Makefile.msc $(ZLIB) && popd +!endif + +!if $(FOSSIL_ENABLE_SSL)!=0 +openssl: + @echo Building OpenSSL from "$(SSLDIR)"... 
+!if "$(PERLDIR)" != "" + @set PATH=$(PERLDIR);$(PATH) +!endif + @pushd "$(SSLDIR)" && $(PERL) Configure $(SSLCONFIG) && popd + @pushd "$(SSLDIR)" && call $(SSLSETUP) && popd +!if $(FOSSIL_ENABLE_WINXP)!=0 + @pushd "$(SSLDIR)" && $(MAKE) /f $(SSLNMAKE) "CC=cl $(SSLCFLAGS) $(XPCFLAGS)" "LFLAGS=$(SSLLFLAGS) $(XPLDFLAGS)" && popd +!else + @pushd "$(SSLDIR)" && $(MAKE) /f $(SSLNMAKE) "CC=cl $(SSLCFLAGS)" && popd +!endif +!endif + +!if $(FOSSIL_ENABLE_MINIZ)==0 +!if $(FOSSIL_BUILD_ZLIB)!=0 +APPTARGETS = $(APPTARGETS) zlib +!endif +!endif + +!if $(FOSSIL_ENABLE_SSL)!=0 +!if $(FOSSIL_BUILD_SSL)!=0 +APPTARGETS = $(APPTARGETS) openssl +!endif +!endif + +$(APPNAME) : $(APPTARGETS) translate$E mkindex$E codecheck1$E headers $(OBJ) $(OX)\linkopts + cd $(OX) + codecheck1$E $(SRC) + link $(LDFLAGS) /OUT:$@ $(LIBDIR) Wsetargv.obj fossil.res @linkopts + if exist $@.manifest \ + $(MTC) -nologo -manifest $@.manifest -outputresource:$@;1 + +$(OX)\linkopts: $B\win\Makefile.msc + echo $(OX)\add.obj > $@ + echo $(OX)\allrepo.obj >> $@ + echo $(OX)\attach.obj >> $@ + echo $(OX)\bag.obj >> $@ + echo $(OX)\bisect.obj >> $@ + echo $(OX)\blob.obj >> $@ + echo $(OX)\branch.obj >> $@ + echo $(OX)\browse.obj >> $@ + echo $(OX)\builtin.obj >> $@ + echo $(OX)\bundle.obj >> $@ + echo $(OX)\cache.obj >> $@ + echo $(OX)\captcha.obj >> $@ + echo $(OX)\cgi.obj >> $@ + echo $(OX)\checkin.obj >> $@ + echo $(OX)\checkout.obj >> $@ + echo $(OX)\clearsign.obj >> $@ + echo $(OX)\clone.obj >> $@ + echo $(OX)\comformat.obj >> $@ + echo $(OX)\configure.obj >> $@ + echo $(OX)\content.obj >> $@ + echo $(OX)\cson_amalgamation.obj >> $@ + echo $(OX)\db.obj >> $@ + echo $(OX)\delta.obj >> $@ + echo $(OX)\deltacmd.obj >> $@ + echo $(OX)\descendants.obj >> $@ + echo $(OX)\diff.obj >> $@ + echo $(OX)\diffcmd.obj >> $@ + echo $(OX)\doc.obj >> $@ + echo $(OX)\encode.obj >> $@ + echo $(OX)\event.obj >> $@ + echo $(OX)\export.obj >> $@ + echo $(OX)\file.obj >> $@ + echo $(OX)\finfo.obj >> $@ + echo $(OX)\foci.obj >> $@ + echo $(OX)\fusefs.obj >> $@ + echo $(OX)\glob.obj >> $@ + echo $(OX)\graph.obj >> $@ + echo $(OX)\gzip.obj >> $@ + echo $(OX)\http.obj >> $@ + echo $(OX)\http_socket.obj >> $@ + echo $(OX)\http_ssl.obj >> $@ + echo $(OX)\http_transport.obj >> $@ + echo $(OX)\import.obj >> $@ + echo $(OX)\info.obj >> $@ + echo $(OX)\json.obj >> $@ + echo $(OX)\json_artifact.obj >> $@ + echo $(OX)\json_branch.obj >> $@ + echo $(OX)\json_config.obj >> $@ + echo $(OX)\json_diff.obj >> $@ + echo $(OX)\json_dir.obj >> $@ + echo $(OX)\json_finfo.obj >> $@ + echo $(OX)\json_login.obj >> $@ + echo $(OX)\json_query.obj >> $@ + echo $(OX)\json_report.obj >> $@ + echo $(OX)\json_status.obj >> $@ + echo $(OX)\json_tag.obj >> $@ + echo $(OX)\json_timeline.obj >> $@ + echo $(OX)\json_user.obj >> $@ + echo $(OX)\json_wiki.obj >> $@ + echo $(OX)\leaf.obj >> $@ + echo $(OX)\loadctrl.obj >> $@ + echo $(OX)\login.obj >> $@ + echo $(OX)\lookslike.obj >> $@ + echo $(OX)\main.obj >> $@ + echo $(OX)\manifest.obj >> $@ + echo $(OX)\markdown.obj >> $@ + echo $(OX)\markdown_html.obj >> $@ + echo $(OX)\md5.obj >> $@ + echo $(OX)\merge.obj >> $@ + echo $(OX)\merge3.obj >> $@ + echo $(OX)\moderate.obj >> $@ + echo $(OX)\name.obj >> $@ + echo $(OX)\path.obj >> $@ + echo $(OX)\piechart.obj >> $@ + echo $(OX)\pivot.obj >> $@ + echo $(OX)\popen.obj >> $@ + echo $(OX)\pqueue.obj >> $@ + echo $(OX)\printf.obj >> $@ + echo $(OX)\publish.obj >> $@ + echo $(OX)\purge.obj >> $@ + echo $(OX)\rebuild.obj >> $@ + echo $(OX)\regexp.obj >> $@ + echo $(OX)\report.obj >> $@ + echo 
$(OX)\rss.obj >> $@ + echo $(OX)\schema.obj >> $@ + echo $(OX)\search.obj >> $@ + echo $(OX)\setup.obj >> $@ + echo $(OX)\sha1.obj >> $@ + echo $(OX)\shell.obj >> $@ + echo $(OX)\shun.obj >> $@ + echo $(OX)\sitemap.obj >> $@ + echo $(OX)\skins.obj >> $@ + echo $(OX)\sqlcmd.obj >> $@ + echo $(OX)\sqlite3.obj >> $@ + echo $(OX)\stash.obj >> $@ + echo $(OX)\stat.obj >> $@ + echo $(OX)\statrep.obj >> $@ + echo $(OX)\style.obj >> $@ + echo $(OX)\sync.obj >> $@ + echo $(OX)\tag.obj >> $@ + echo $(OX)\tar.obj >> $@ + echo $(OX)\th.obj >> $@ + echo $(OX)\th_lang.obj >> $@ + echo $(OX)\th_main.obj >> $@ + echo $(OX)\th_tcl.obj >> $@ + echo $(OX)\timeline.obj >> $@ + echo $(OX)\tkt.obj >> $@ + echo $(OX)\tktsetup.obj >> $@ + echo $(OX)\undo.obj >> $@ + echo $(OX)\unicode.obj >> $@ + echo $(OX)\update.obj >> $@ + echo $(OX)\url.obj >> $@ + echo $(OX)\user.obj >> $@ + echo $(OX)\utf8.obj >> $@ + echo $(OX)\util.obj >> $@ + echo $(OX)\verify.obj >> $@ + echo $(OX)\vfile.obj >> $@ + echo $(OX)\wiki.obj >> $@ + echo $(OX)\wikiformat.obj >> $@ + echo $(OX)\winfile.obj >> $@ + echo $(OX)\winhttp.obj >> $@ + echo $(OX)\wysiwyg.obj >> $@ + echo $(OX)\xfer.obj >> $@ + echo $(OX)\xfersetup.obj >> $@ + echo $(OX)\zip.obj >> $@ +!if $(FOSSIL_ENABLE_MINIZ)!=0 + echo $(OX)\miniz.obj >> $@ +!endif + echo $(LIBS) >> $@ + +$(OX): + @-mkdir $@ + +translate$E: $(SRCDIR)\translate.c + $(BCC) $** + +makeheaders$E: $(SRCDIR)\makeheaders.c + $(BCC) $** + +mkindex$E: $(SRCDIR)\mkindex.c + $(BCC) $** + +mkbuiltin$E: $(SRCDIR)\mkbuiltin.c + $(BCC) $** + +mkversion$E: $(SRCDIR)\mkversion.c + $(BCC) $** + +codecheck1$E: $(SRCDIR)\codecheck1.c + $(BCC) $** + +$(OX)\shell$O : $(SRCDIR)\shell.c $B\win\Makefile.msc + $(TCC) /Fo$@ $(SHELL_OPTIONS) $(SQLITE_OPTIONS) $(SHELL_CFLAGS) -c $(SRCDIR)\shell.c + +$(OX)\sqlite3$O : $(SRCDIR)\sqlite3.c $B\win\Makefile.msc + $(TCC) /Fo$@ -c $(SQLITE_OPTIONS) $(SQLITE_CFLAGS) $(SRCDIR)\sqlite3.c + +$(OX)\th$O : $(SRCDIR)\th.c + $(TCC) /Fo$@ -c $** + +$(OX)\th_lang$O : $(SRCDIR)\th_lang.c + $(TCC) /Fo$@ -c $** + +$(OX)\th_tcl$O : $(SRCDIR)\th_tcl.c + $(TCC) /Fo$@ -c $** + +$(OX)\miniz$O : $(SRCDIR)\miniz.c + $(TCC) /Fo$@ -c $(MINIZ_OPTIONS) $(SRCDIR)\miniz.c + +VERSION.h : mkversion$E $B\manifest.uuid $B\manifest $B\VERSION + $** > $@ +$(OX)\cson_amalgamation$O : $(SRCDIR)\cson_amalgamation.c + $(TCC) /Fo$@ /c $** + +page_index.h: mkindex$E $(SRC) + $** > $@ + +builtin_data.h: mkbuiltin$E $(EXTRA_FILES) + mkbuiltin$E --prefix $(SRCDIR)/ $(EXTRA_FILES) > $@ + +clean: + del $(OX)\*.obj 2>NUL + del *.obj 2>NUL + del *_.c 2>NUL + del *.h 2>NUL + del *.ilk 2>NUL + del *.map 2>NUL + del *.res 2>NUL + del headers 2>NUL + del linkopts 2>NUL + del vc*.pdb 2>NUL + +realclean: clean + del $(APPNAME) 2>NUL + del $(PDBNAME) 2>NUL + del translate$E 2>NUL + del translate$P 2>NUL + del mkindex$E 2>NUL + del mkindex$P 2>NUL + del makeheaders$E 2>NUL + del makeheaders$P 2>NUL + del mkversion$E 2>NUL + del mkversion$P 2>NUL + del codecheck1$E 2>NUL + del codecheck1$P 2>NUL + del mkbuiltin$E 2>NUL + del mkbuiltin$P 2>NUL + +$(OBJDIR)\json$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_artifact$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_branch$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_config$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_diff$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_dir$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_finfo$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_login$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_query$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_report$O : $(SRCDIR)\json_detail.h 
+$(OBJDIR)\json_status$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_tag$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_timeline$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_user$O : $(SRCDIR)\json_detail.h +$(OBJDIR)\json_wiki$O : $(SRCDIR)\json_detail.h + +$(OX)\add$O : add_.c add.h + $(TCC) /Fo$@ -c add_.c + +add_.c : $(SRCDIR)\add.c + translate$E $** > $@ + +$(OX)\allrepo$O : allrepo_.c allrepo.h + $(TCC) /Fo$@ -c allrepo_.c + +allrepo_.c : $(SRCDIR)\allrepo.c + translate$E $** > $@ + +$(OX)\attach$O : attach_.c attach.h + $(TCC) /Fo$@ -c attach_.c + +attach_.c : $(SRCDIR)\attach.c + translate$E $** > $@ + +$(OX)\bag$O : bag_.c bag.h + $(TCC) /Fo$@ -c bag_.c + +bag_.c : $(SRCDIR)\bag.c + translate$E $** > $@ + +$(OX)\bisect$O : bisect_.c bisect.h + $(TCC) /Fo$@ -c bisect_.c + +bisect_.c : $(SRCDIR)\bisect.c + translate$E $** > $@ + +$(OX)\blob$O : blob_.c blob.h + $(TCC) /Fo$@ -c blob_.c + +blob_.c : $(SRCDIR)\blob.c + translate$E $** > $@ + +$(OX)\branch$O : branch_.c branch.h + $(TCC) /Fo$@ -c branch_.c + +branch_.c : $(SRCDIR)\branch.c + translate$E $** > $@ + +$(OX)\browse$O : browse_.c browse.h + $(TCC) /Fo$@ -c browse_.c + +browse_.c : $(SRCDIR)\browse.c + translate$E $** > $@ + +$(OX)\builtin$O : builtin_.c builtin.h + $(TCC) /Fo$@ -c builtin_.c + +builtin_.c : $(SRCDIR)\builtin.c + translate$E $** > $@ + +$(OX)\bundle$O : bundle_.c bundle.h + $(TCC) /Fo$@ -c bundle_.c + +bundle_.c : $(SRCDIR)\bundle.c + translate$E $** > $@ + +$(OX)\cache$O : cache_.c cache.h + $(TCC) /Fo$@ -c cache_.c + +cache_.c : $(SRCDIR)\cache.c + translate$E $** > $@ + +$(OX)\captcha$O : captcha_.c captcha.h + $(TCC) /Fo$@ -c captcha_.c + +captcha_.c : $(SRCDIR)\captcha.c + translate$E $** > $@ + +$(OX)\cgi$O : cgi_.c cgi.h + $(TCC) /Fo$@ -c cgi_.c + +cgi_.c : $(SRCDIR)\cgi.c + translate$E $** > $@ + +$(OX)\checkin$O : checkin_.c checkin.h + $(TCC) /Fo$@ -c checkin_.c + +checkin_.c : $(SRCDIR)\checkin.c + translate$E $** > $@ + +$(OX)\checkout$O : checkout_.c checkout.h + $(TCC) /Fo$@ -c checkout_.c + +checkout_.c : $(SRCDIR)\checkout.c + translate$E $** > $@ + +$(OX)\clearsign$O : clearsign_.c clearsign.h + $(TCC) /Fo$@ -c clearsign_.c + +clearsign_.c : $(SRCDIR)\clearsign.c + translate$E $** > $@ + +$(OX)\clone$O : clone_.c clone.h + $(TCC) /Fo$@ -c clone_.c + +clone_.c : $(SRCDIR)\clone.c + translate$E $** > $@ + +$(OX)\comformat$O : comformat_.c comformat.h + $(TCC) /Fo$@ -c comformat_.c + +comformat_.c : $(SRCDIR)\comformat.c + translate$E $** > $@ + +$(OX)\configure$O : configure_.c configure.h + $(TCC) /Fo$@ -c configure_.c + +configure_.c : $(SRCDIR)\configure.c + translate$E $** > $@ + +$(OX)\content$O : content_.c content.h + $(TCC) /Fo$@ -c content_.c + +content_.c : $(SRCDIR)\content.c + translate$E $** > $@ + +$(OX)\db$O : db_.c db.h + $(TCC) /Fo$@ -c db_.c + +db_.c : $(SRCDIR)\db.c + translate$E $** > $@ + +$(OX)\delta$O : delta_.c delta.h + $(TCC) /Fo$@ -c delta_.c + +delta_.c : $(SRCDIR)\delta.c + translate$E $** > $@ + +$(OX)\deltacmd$O : deltacmd_.c deltacmd.h + $(TCC) /Fo$@ -c deltacmd_.c + +deltacmd_.c : $(SRCDIR)\deltacmd.c + translate$E $** > $@ + +$(OX)\descendants$O : descendants_.c descendants.h + $(TCC) /Fo$@ -c descendants_.c + +descendants_.c : $(SRCDIR)\descendants.c + translate$E $** > $@ + +$(OX)\diff$O : diff_.c diff.h + $(TCC) /Fo$@ -c diff_.c + +diff_.c : $(SRCDIR)\diff.c + translate$E $** > $@ + +$(OX)\diffcmd$O : diffcmd_.c diffcmd.h + $(TCC) /Fo$@ -c diffcmd_.c + +diffcmd_.c : $(SRCDIR)\diffcmd.c + translate$E $** > $@ + +$(OX)\doc$O : doc_.c doc.h + $(TCC) /Fo$@ -c doc_.c + 
+doc_.c : $(SRCDIR)\doc.c + translate$E $** > $@ + +$(OX)\encode$O : encode_.c encode.h + $(TCC) /Fo$@ -c encode_.c + +encode_.c : $(SRCDIR)\encode.c + translate$E $** > $@ + +$(OX)\event$O : event_.c event.h + $(TCC) /Fo$@ -c event_.c + +event_.c : $(SRCDIR)\event.c + translate$E $** > $@ + +$(OX)\export$O : export_.c export.h + $(TCC) /Fo$@ -c export_.c + +export_.c : $(SRCDIR)\export.c + translate$E $** > $@ + +$(OX)\file$O : file_.c file.h + $(TCC) /Fo$@ -c file_.c + +file_.c : $(SRCDIR)\file.c + translate$E $** > $@ + +$(OX)\finfo$O : finfo_.c finfo.h + $(TCC) /Fo$@ -c finfo_.c + +finfo_.c : $(SRCDIR)\finfo.c + translate$E $** > $@ + +$(OX)\foci$O : foci_.c foci.h + $(TCC) /Fo$@ -c foci_.c + +foci_.c : $(SRCDIR)\foci.c + translate$E $** > $@ + +$(OX)\fusefs$O : fusefs_.c fusefs.h + $(TCC) /Fo$@ -c fusefs_.c + +fusefs_.c : $(SRCDIR)\fusefs.c + translate$E $** > $@ + +$(OX)\glob$O : glob_.c glob.h + $(TCC) /Fo$@ -c glob_.c + +glob_.c : $(SRCDIR)\glob.c + translate$E $** > $@ + +$(OX)\graph$O : graph_.c graph.h + $(TCC) /Fo$@ -c graph_.c + +graph_.c : $(SRCDIR)\graph.c + translate$E $** > $@ + +$(OX)\gzip$O : gzip_.c gzip.h + $(TCC) /Fo$@ -c gzip_.c + +gzip_.c : $(SRCDIR)\gzip.c + translate$E $** > $@ + +$(OX)\http$O : http_.c http.h + $(TCC) /Fo$@ -c http_.c + +http_.c : $(SRCDIR)\http.c + translate$E $** > $@ + +$(OX)\http_socket$O : http_socket_.c http_socket.h + $(TCC) /Fo$@ -c http_socket_.c + +http_socket_.c : $(SRCDIR)\http_socket.c + translate$E $** > $@ + +$(OX)\http_ssl$O : http_ssl_.c http_ssl.h + $(TCC) /Fo$@ -c http_ssl_.c + +http_ssl_.c : $(SRCDIR)\http_ssl.c + translate$E $** > $@ + +$(OX)\http_transport$O : http_transport_.c http_transport.h + $(TCC) /Fo$@ -c http_transport_.c + +http_transport_.c : $(SRCDIR)\http_transport.c + translate$E $** > $@ + +$(OX)\import$O : import_.c import.h + $(TCC) /Fo$@ -c import_.c + +import_.c : $(SRCDIR)\import.c + translate$E $** > $@ + +$(OX)\info$O : info_.c info.h + $(TCC) /Fo$@ -c info_.c + +info_.c : $(SRCDIR)\info.c + translate$E $** > $@ + +$(OX)\json$O : json_.c json.h + $(TCC) /Fo$@ -c json_.c + +json_.c : $(SRCDIR)\json.c + translate$E $** > $@ + +$(OX)\json_artifact$O : json_artifact_.c json_artifact.h + $(TCC) /Fo$@ -c json_artifact_.c + +json_artifact_.c : $(SRCDIR)\json_artifact.c + translate$E $** > $@ + +$(OX)\json_branch$O : json_branch_.c json_branch.h + $(TCC) /Fo$@ -c json_branch_.c + +json_branch_.c : $(SRCDIR)\json_branch.c + translate$E $** > $@ + +$(OX)\json_config$O : json_config_.c json_config.h + $(TCC) /Fo$@ -c json_config_.c + +json_config_.c : $(SRCDIR)\json_config.c + translate$E $** > $@ + +$(OX)\json_diff$O : json_diff_.c json_diff.h + $(TCC) /Fo$@ -c json_diff_.c + +json_diff_.c : $(SRCDIR)\json_diff.c + translate$E $** > $@ + +$(OX)\json_dir$O : json_dir_.c json_dir.h + $(TCC) /Fo$@ -c json_dir_.c + +json_dir_.c : $(SRCDIR)\json_dir.c + translate$E $** > $@ + +$(OX)\json_finfo$O : json_finfo_.c json_finfo.h + $(TCC) /Fo$@ -c json_finfo_.c + +json_finfo_.c : $(SRCDIR)\json_finfo.c + translate$E $** > $@ + +$(OX)\json_login$O : json_login_.c json_login.h + $(TCC) /Fo$@ -c json_login_.c + +json_login_.c : $(SRCDIR)\json_login.c + translate$E $** > $@ + +$(OX)\json_query$O : json_query_.c json_query.h + $(TCC) /Fo$@ -c json_query_.c + +json_query_.c : $(SRCDIR)\json_query.c + translate$E $** > $@ + +$(OX)\json_report$O : json_report_.c json_report.h + $(TCC) /Fo$@ -c json_report_.c + +json_report_.c : $(SRCDIR)\json_report.c + translate$E $** > $@ + +$(OX)\json_status$O : json_status_.c json_status.h + 
$(TCC) /Fo$@ -c json_status_.c + +json_status_.c : $(SRCDIR)\json_status.c + translate$E $** > $@ + +$(OX)\json_tag$O : json_tag_.c json_tag.h + $(TCC) /Fo$@ -c json_tag_.c + +json_tag_.c : $(SRCDIR)\json_tag.c + translate$E $** > $@ + +$(OX)\json_timeline$O : json_timeline_.c json_timeline.h + $(TCC) /Fo$@ -c json_timeline_.c + +json_timeline_.c : $(SRCDIR)\json_timeline.c + translate$E $** > $@ + +$(OX)\json_user$O : json_user_.c json_user.h + $(TCC) /Fo$@ -c json_user_.c + +json_user_.c : $(SRCDIR)\json_user.c + translate$E $** > $@ + +$(OX)\json_wiki$O : json_wiki_.c json_wiki.h + $(TCC) /Fo$@ -c json_wiki_.c + +json_wiki_.c : $(SRCDIR)\json_wiki.c + translate$E $** > $@ + +$(OX)\leaf$O : leaf_.c leaf.h + $(TCC) /Fo$@ -c leaf_.c + +leaf_.c : $(SRCDIR)\leaf.c + translate$E $** > $@ + +$(OX)\loadctrl$O : loadctrl_.c loadctrl.h + $(TCC) /Fo$@ -c loadctrl_.c + +loadctrl_.c : $(SRCDIR)\loadctrl.c + translate$E $** > $@ + +$(OX)\login$O : login_.c login.h + $(TCC) /Fo$@ -c login_.c + +login_.c : $(SRCDIR)\login.c + translate$E $** > $@ + +$(OX)\lookslike$O : lookslike_.c lookslike.h + $(TCC) /Fo$@ -c lookslike_.c + +lookslike_.c : $(SRCDIR)\lookslike.c + translate$E $** > $@ + +$(OX)\main$O : main_.c main.h + $(TCC) /Fo$@ -c main_.c + +main_.c : $(SRCDIR)\main.c + translate$E $** > $@ + +$(OX)\manifest$O : manifest_.c manifest.h + $(TCC) /Fo$@ -c manifest_.c + +manifest_.c : $(SRCDIR)\manifest.c + translate$E $** > $@ + +$(OX)\markdown$O : markdown_.c markdown.h + $(TCC) /Fo$@ -c markdown_.c + +markdown_.c : $(SRCDIR)\markdown.c + translate$E $** > $@ + +$(OX)\markdown_html$O : markdown_html_.c markdown_html.h + $(TCC) /Fo$@ -c markdown_html_.c + +markdown_html_.c : $(SRCDIR)\markdown_html.c + translate$E $** > $@ + +$(OX)\md5$O : md5_.c md5.h + $(TCC) /Fo$@ -c md5_.c + +md5_.c : $(SRCDIR)\md5.c + translate$E $** > $@ + +$(OX)\merge$O : merge_.c merge.h + $(TCC) /Fo$@ -c merge_.c + +merge_.c : $(SRCDIR)\merge.c + translate$E $** > $@ + +$(OX)\merge3$O : merge3_.c merge3.h + $(TCC) /Fo$@ -c merge3_.c + +merge3_.c : $(SRCDIR)\merge3.c + translate$E $** > $@ + +$(OX)\moderate$O : moderate_.c moderate.h + $(TCC) /Fo$@ -c moderate_.c + +moderate_.c : $(SRCDIR)\moderate.c + translate$E $** > $@ + +$(OX)\name$O : name_.c name.h + $(TCC) /Fo$@ -c name_.c + +name_.c : $(SRCDIR)\name.c + translate$E $** > $@ + +$(OX)\path$O : path_.c path.h + $(TCC) /Fo$@ -c path_.c + +path_.c : $(SRCDIR)\path.c + translate$E $** > $@ + +$(OX)\piechart$O : piechart_.c piechart.h + $(TCC) /Fo$@ -c piechart_.c + +piechart_.c : $(SRCDIR)\piechart.c + translate$E $** > $@ + +$(OX)\pivot$O : pivot_.c pivot.h + $(TCC) /Fo$@ -c pivot_.c + +pivot_.c : $(SRCDIR)\pivot.c + translate$E $** > $@ + +$(OX)\popen$O : popen_.c popen.h + $(TCC) /Fo$@ -c popen_.c + +popen_.c : $(SRCDIR)\popen.c + translate$E $** > $@ + +$(OX)\pqueue$O : pqueue_.c pqueue.h + $(TCC) /Fo$@ -c pqueue_.c + +pqueue_.c : $(SRCDIR)\pqueue.c + translate$E $** > $@ + +$(OX)\printf$O : printf_.c printf.h + $(TCC) /Fo$@ -c printf_.c + +printf_.c : $(SRCDIR)\printf.c + translate$E $** > $@ + +$(OX)\publish$O : publish_.c publish.h + $(TCC) /Fo$@ -c publish_.c + +publish_.c : $(SRCDIR)\publish.c + translate$E $** > $@ + +$(OX)\purge$O : purge_.c purge.h + $(TCC) /Fo$@ -c purge_.c + +purge_.c : $(SRCDIR)\purge.c + translate$E $** > $@ + +$(OX)\rebuild$O : rebuild_.c rebuild.h + $(TCC) /Fo$@ -c rebuild_.c + +rebuild_.c : $(SRCDIR)\rebuild.c + translate$E $** > $@ + +$(OX)\regexp$O : regexp_.c regexp.h + $(TCC) /Fo$@ -c regexp_.c + +regexp_.c : $(SRCDIR)\regexp.c + 
translate$E $** > $@ + +$(OX)\report$O : report_.c report.h + $(TCC) /Fo$@ -c report_.c + +report_.c : $(SRCDIR)\report.c + translate$E $** > $@ + +$(OX)\rss$O : rss_.c rss.h + $(TCC) /Fo$@ -c rss_.c + +rss_.c : $(SRCDIR)\rss.c + translate$E $** > $@ + +$(OX)\schema$O : schema_.c schema.h + $(TCC) /Fo$@ -c schema_.c + +schema_.c : $(SRCDIR)\schema.c + translate$E $** > $@ + +$(OX)\search$O : search_.c search.h + $(TCC) /Fo$@ -c search_.c + +search_.c : $(SRCDIR)\search.c + translate$E $** > $@ + +$(OX)\setup$O : setup_.c setup.h + $(TCC) /Fo$@ -c setup_.c + +setup_.c : $(SRCDIR)\setup.c + translate$E $** > $@ + +$(OX)\sha1$O : sha1_.c sha1.h + $(TCC) /Fo$@ -c sha1_.c + +sha1_.c : $(SRCDIR)\sha1.c + translate$E $** > $@ + +$(OX)\shun$O : shun_.c shun.h + $(TCC) /Fo$@ -c shun_.c + +shun_.c : $(SRCDIR)\shun.c + translate$E $** > $@ + +$(OX)\sitemap$O : sitemap_.c sitemap.h + $(TCC) /Fo$@ -c sitemap_.c + +sitemap_.c : $(SRCDIR)\sitemap.c + translate$E $** > $@ + +$(OX)\skins$O : skins_.c skins.h + $(TCC) /Fo$@ -c skins_.c + +skins_.c : $(SRCDIR)\skins.c + translate$E $** > $@ + +$(OX)\sqlcmd$O : sqlcmd_.c sqlcmd.h + $(TCC) /Fo$@ -c sqlcmd_.c + +sqlcmd_.c : $(SRCDIR)\sqlcmd.c + translate$E $** > $@ + +$(OX)\stash$O : stash_.c stash.h + $(TCC) /Fo$@ -c stash_.c + +stash_.c : $(SRCDIR)\stash.c + translate$E $** > $@ + +$(OX)\stat$O : stat_.c stat.h + $(TCC) /Fo$@ -c stat_.c + +stat_.c : $(SRCDIR)\stat.c + translate$E $** > $@ + +$(OX)\statrep$O : statrep_.c statrep.h + $(TCC) /Fo$@ -c statrep_.c + +statrep_.c : $(SRCDIR)\statrep.c + translate$E $** > $@ + +$(OX)\style$O : style_.c style.h + $(TCC) /Fo$@ -c style_.c + +style_.c : $(SRCDIR)\style.c + translate$E $** > $@ + +$(OX)\sync$O : sync_.c sync.h + $(TCC) /Fo$@ -c sync_.c + +sync_.c : $(SRCDIR)\sync.c + translate$E $** > $@ + +$(OX)\tag$O : tag_.c tag.h + $(TCC) /Fo$@ -c tag_.c + +tag_.c : $(SRCDIR)\tag.c + translate$E $** > $@ + +$(OX)\tar$O : tar_.c tar.h + $(TCC) /Fo$@ -c tar_.c + +tar_.c : $(SRCDIR)\tar.c + translate$E $** > $@ + +$(OX)\th_main$O : th_main_.c th_main.h + $(TCC) /Fo$@ -c th_main_.c + +th_main_.c : $(SRCDIR)\th_main.c + translate$E $** > $@ + +$(OX)\timeline$O : timeline_.c timeline.h + $(TCC) /Fo$@ -c timeline_.c + +timeline_.c : $(SRCDIR)\timeline.c + translate$E $** > $@ + +$(OX)\tkt$O : tkt_.c tkt.h + $(TCC) /Fo$@ -c tkt_.c + +tkt_.c : $(SRCDIR)\tkt.c + translate$E $** > $@ + +$(OX)\tktsetup$O : tktsetup_.c tktsetup.h + $(TCC) /Fo$@ -c tktsetup_.c + +tktsetup_.c : $(SRCDIR)\tktsetup.c + translate$E $** > $@ + +$(OX)\undo$O : undo_.c undo.h + $(TCC) /Fo$@ -c undo_.c + +undo_.c : $(SRCDIR)\undo.c + translate$E $** > $@ + +$(OX)\unicode$O : unicode_.c unicode.h + $(TCC) /Fo$@ -c unicode_.c + +unicode_.c : $(SRCDIR)\unicode.c + translate$E $** > $@ + +$(OX)\update$O : update_.c update.h + $(TCC) /Fo$@ -c update_.c + +update_.c : $(SRCDIR)\update.c + translate$E $** > $@ + +$(OX)\url$O : url_.c url.h + $(TCC) /Fo$@ -c url_.c + +url_.c : $(SRCDIR)\url.c + translate$E $** > $@ + +$(OX)\user$O : user_.c user.h + $(TCC) /Fo$@ -c user_.c + +user_.c : $(SRCDIR)\user.c + translate$E $** > $@ + +$(OX)\utf8$O : utf8_.c utf8.h + $(TCC) /Fo$@ -c utf8_.c + +utf8_.c : $(SRCDIR)\utf8.c + translate$E $** > $@ + +$(OX)\util$O : util_.c util.h + $(TCC) /Fo$@ -c util_.c + +util_.c : $(SRCDIR)\util.c + translate$E $** > $@ + +$(OX)\verify$O : verify_.c verify.h + $(TCC) /Fo$@ -c verify_.c + +verify_.c : $(SRCDIR)\verify.c + translate$E $** > $@ + +$(OX)\vfile$O : vfile_.c vfile.h + $(TCC) /Fo$@ -c vfile_.c + +vfile_.c : $(SRCDIR)\vfile.c + 
translate$E $** > $@ + +$(OX)\wiki$O : wiki_.c wiki.h + $(TCC) /Fo$@ -c wiki_.c + +wiki_.c : $(SRCDIR)\wiki.c + translate$E $** > $@ + +$(OX)\wikiformat$O : wikiformat_.c wikiformat.h + $(TCC) /Fo$@ -c wikiformat_.c + +wikiformat_.c : $(SRCDIR)\wikiformat.c + translate$E $** > $@ + +$(OX)\winfile$O : winfile_.c winfile.h + $(TCC) /Fo$@ -c winfile_.c + +winfile_.c : $(SRCDIR)\winfile.c + translate$E $** > $@ + +$(OX)\winhttp$O : winhttp_.c winhttp.h + $(TCC) /Fo$@ -c winhttp_.c + +winhttp_.c : $(SRCDIR)\winhttp.c + translate$E $** > $@ + +$(OX)\wysiwyg$O : wysiwyg_.c wysiwyg.h + $(TCC) /Fo$@ -c wysiwyg_.c + +wysiwyg_.c : $(SRCDIR)\wysiwyg.c + translate$E $** > $@ + +$(OX)\xfer$O : xfer_.c xfer.h + $(TCC) /Fo$@ -c xfer_.c + +xfer_.c : $(SRCDIR)\xfer.c + translate$E $** > $@ + +$(OX)\xfersetup$O : xfersetup_.c xfersetup.h + $(TCC) /Fo$@ -c xfersetup_.c + +xfersetup_.c : $(SRCDIR)\xfersetup.c + translate$E $** > $@ + +$(OX)\zip$O : zip_.c zip.h + $(TCC) /Fo$@ -c zip_.c + +zip_.c : $(SRCDIR)\zip.c + translate$E $** > $@ + +fossil.res : $B\win\fossil.rc + $(RCC) /fo $@ $** + +headers: makeheaders$E page_index.h builtin_data.h VERSION.h + makeheaders$E add_.c:add.h \ + allrepo_.c:allrepo.h \ + attach_.c:attach.h \ + bag_.c:bag.h \ + bisect_.c:bisect.h \ + blob_.c:blob.h \ + branch_.c:branch.h \ + browse_.c:browse.h \ + builtin_.c:builtin.h \ + bundle_.c:bundle.h \ + cache_.c:cache.h \ + captcha_.c:captcha.h \ + cgi_.c:cgi.h \ + checkin_.c:checkin.h \ + checkout_.c:checkout.h \ + clearsign_.c:clearsign.h \ + clone_.c:clone.h \ + comformat_.c:comformat.h \ + configure_.c:configure.h \ + content_.c:content.h \ + db_.c:db.h \ + delta_.c:delta.h \ + deltacmd_.c:deltacmd.h \ + descendants_.c:descendants.h \ + diff_.c:diff.h \ + diffcmd_.c:diffcmd.h \ + doc_.c:doc.h \ + encode_.c:encode.h \ + event_.c:event.h \ + export_.c:export.h \ + file_.c:file.h \ + finfo_.c:finfo.h \ + foci_.c:foci.h \ + fusefs_.c:fusefs.h \ + glob_.c:glob.h \ + graph_.c:graph.h \ + gzip_.c:gzip.h \ + http_.c:http.h \ + http_socket_.c:http_socket.h \ + http_ssl_.c:http_ssl.h \ + http_transport_.c:http_transport.h \ + import_.c:import.h \ + info_.c:info.h \ + json_.c:json.h \ + json_artifact_.c:json_artifact.h \ + json_branch_.c:json_branch.h \ + json_config_.c:json_config.h \ + json_diff_.c:json_diff.h \ + json_dir_.c:json_dir.h \ + json_finfo_.c:json_finfo.h \ + json_login_.c:json_login.h \ + json_query_.c:json_query.h \ + json_report_.c:json_report.h \ + json_status_.c:json_status.h \ + json_tag_.c:json_tag.h \ + json_timeline_.c:json_timeline.h \ + json_user_.c:json_user.h \ + json_wiki_.c:json_wiki.h \ + leaf_.c:leaf.h \ + loadctrl_.c:loadctrl.h \ + login_.c:login.h \ + lookslike_.c:lookslike.h \ + main_.c:main.h \ + manifest_.c:manifest.h \ + markdown_.c:markdown.h \ + markdown_html_.c:markdown_html.h \ + md5_.c:md5.h \ + merge_.c:merge.h \ + merge3_.c:merge3.h \ + moderate_.c:moderate.h \ + name_.c:name.h \ + path_.c:path.h \ + piechart_.c:piechart.h \ + pivot_.c:pivot.h \ + popen_.c:popen.h \ + pqueue_.c:pqueue.h \ + printf_.c:printf.h \ + publish_.c:publish.h \ + purge_.c:purge.h \ + rebuild_.c:rebuild.h \ + regexp_.c:regexp.h \ + report_.c:report.h \ + rss_.c:rss.h \ + schema_.c:schema.h \ + search_.c:search.h \ + setup_.c:setup.h \ + sha1_.c:sha1.h \ + shun_.c:shun.h \ + sitemap_.c:sitemap.h \ + skins_.c:skins.h \ + sqlcmd_.c:sqlcmd.h \ + stash_.c:stash.h \ + stat_.c:stat.h \ + statrep_.c:statrep.h \ + style_.c:style.h \ + sync_.c:sync.h \ + tag_.c:tag.h \ + tar_.c:tar.h \ + th_main_.c:th_main.h \ + 
timeline_.c:timeline.h \ + tkt_.c:tkt.h \ + tktsetup_.c:tktsetup.h \ + undo_.c:undo.h \ + unicode_.c:unicode.h \ + update_.c:update.h \ + url_.c:url.h \ + user_.c:user.h \ + utf8_.c:utf8.h \ + util_.c:util.h \ + verify_.c:verify.h \ + vfile_.c:vfile.h \ + wiki_.c:wiki.h \ + wikiformat_.c:wikiformat.h \ + winfile_.c:winfile.h \ + winhttp_.c:winhttp.h \ + wysiwyg_.c:wysiwyg.h \ + xfer_.c:xfer.h \ + xfersetup_.c:xfersetup.h \ + zip_.c:zip.h \ + $(SRCDIR)\sqlite3.h \ + $(SRCDIR)\th.h \ + VERSION.h \ + $(SRCDIR)\cson_amalgamation.h + @copy /Y nul: headers ADDED win/buildmsvc.bat Index: win/buildmsvc.bat ================================================================== --- win/buildmsvc.bat +++ win/buildmsvc.bat @@ -0,0 +1,305 @@ +@ECHO OFF + +:: +:: buildmsvc.bat -- +:: +:: This batch file attempts to build Fossil using the latest version +:: Microsoft Visual Studio installed on this machine. +:: +:: + +SETLOCAL + +REM SET __ECHO=ECHO +REM SET __ECHO2=ECHO +IF NOT DEFINED _AECHO (SET _AECHO=REM) +IF NOT DEFINED _CECHO (SET _CECHO=REM) +IF NOT DEFINED _VECHO (SET _VECHO=REM) + +REM +REM NOTE: Setup local environment variables that point to the root directory +REM of the Fossil source checkout and to the directory containing this +REM build tool. +REM +SET ROOT=%~dp0\.. +SET ROOT=%ROOT:\\=\% + +%_VECHO% Root = '%ROOT%' + +SET TOOLS=%~dp0 +SET TOOLS=%TOOLS:~0,-1% + +%_VECHO% Tools = '%TOOLS%' + +REM +REM Visual C++ ???? +REM +IF DEFINED VCINSTALLDIR IF EXIST "%VCINSTALLDIR%" ( + %_AECHO% Build environment appears to be setup. + GOTO skip_setupVisualStudio +) + +REM +REM Visual Studio ???? +REM +IF DEFINED VSVARS32 IF EXIST "%VSVARS32%" ( + %_AECHO% Build environment batch file manually overridden to "%VSVARS32%"... + GOTO skip_detectVisualStudio +) + +REM +REM Visual Studio 2015 +REM +IF NOT DEFINED VS140COMNTOOLS GOTO skip_detectVisualStudio2015 +SET VSVARS32=%VS140COMNTOOLS%\vsvars32.bat +IF EXIST "%VSVARS32%" ( + %_AECHO% Using Visual Studio 2015... + GOTO skip_detectVisualStudio +) +:skip_detectVisualStudio2015 + +REM +REM Visual Studio 2013 +REM +IF NOT DEFINED VS120COMNTOOLS GOTO skip_detectVisualStudio2013 +SET VSVARS32=%VS120COMNTOOLS%\vsvars32.bat +IF EXIST "%VSVARS32%" ( + %_AECHO% Using Visual Studio 2013... + GOTO skip_detectVisualStudio +) +:skip_detectVisualStudio2013 + +REM +REM Visual Studio 2012 +REM +IF NOT DEFINED VS110COMNTOOLS GOTO skip_detectVisualStudio2012 +SET VSVARS32=%VS110COMNTOOLS%\vsvars32.bat +IF EXIST "%VSVARS32%" ( + %_AECHO% Using Visual Studio 2012... + GOTO skip_detectVisualStudio +) +:skip_detectVisualStudio2012 + +REM +REM Visual Studio 2010 +REM +IF NOT DEFINED VS100COMNTOOLS GOTO skip_detectVisualStudio2010 +SET VSVARS32=%VS100COMNTOOLS%\vsvars32.bat +IF EXIST "%VSVARS32%" ( + %_AECHO% Using Visual Studio 2010... + GOTO skip_detectVisualStudio +) +:skip_detectVisualStudio2010 + +REM +REM Visual Studio 2008 +REM +IF NOT DEFINED VS90COMNTOOLS GOTO skip_detectVisualStudio2008 +SET VSVARS32=%VS90COMNTOOLS%\vsvars32.bat +IF EXIST "%VSVARS32%" ( + %_AECHO% Using Visual Studio 2008... + GOTO skip_detectVisualStudio +) +:skip_detectVisualStudio2008 + +REM +REM Visual Studio 2005 +REM +IF NOT DEFINED VS80COMNTOOLS GOTO skip_detectVisualStudio2005 +SET VSVARS32=%VS80COMNTOOLS%\vsvars32.bat +IF EXIST "%VSVARS32%" ( + %_AECHO% Using Visual Studio 2005... 
+ GOTO skip_detectVisualStudio +) +:skip_detectVisualStudio2005 + +REM +REM Visual Studio 2003 +REM +IF NOT DEFINED VS71COMNTOOLS GOTO skip_detectVisualStudio2003 +SET VSVARS32=%VS71COMNTOOLS%\vsvars32.bat +IF EXIST "%VSVARS32%" ( + %_AECHO% Using Visual Studio 2003... + GOTO skip_detectVisualStudio +) +:skip_detectVisualStudio2003 + +REM +REM Visual Studio 2002 +REM +IF NOT DEFINED VS70COMNTOOLS GOTO skip_detectVisualStudio2002 +SET VSVARS32=%VS70COMNTOOLS%\vsvars32.bat +IF EXIST "%VSVARS32%" ( + %_AECHO% Using Visual Studio 2002... + GOTO skip_detectVisualStudio +) +:skip_detectVisualStudio2002 + +REM +REM NOTE: If we get to this point, no Visual Studio build environment batch +REM files were found. +REM +ECHO No Visual Studio build environment batch files were found. +GOTO errors + +REM +REM NOTE: At this point, the appropriate Visual Studio version should be +REM selected. +REM +:skip_detectVisualStudio + +REM +REM NOTE: Remove any double-backslash sequences that may be present in the +REM selected Visual Studio common tools path. This is not strictly +REM necessary; however, it makes reading the output easier. +REM +SET VSVARS32=%VSVARS32:\\=\% + +%_VECHO% VsVars32 = '%VSVARS32%' + +REM +REM NOTE: After this point, a clean ERRORLEVEL is required; therefore, make +REM sure it is reset now. +REM +CALL :fn_ResetErrorLevel + +REM +REM NOTE: Attempt to call the selected batch file to setup the environment +REM variables for building with MSVC. +REM +%__ECHO3% CALL "%VSVARS32%" + +IF ERRORLEVEL 1 ( + ECHO Visual Studio build environment batch file "%VSVARS32%" failed. + GOTO errors +) + +REM +REM NOTE: After this point, the environment should already be setup for +REM building with MSVC. +REM +:skip_setupVisualStudio + +%_VECHO% VcInstallDir = '%VCINSTALLDIR%' + +REM +REM NOTE: Attempt to create the build output directory, if necessary. +REM +IF NOT EXIST "%ROOT%\msvcbld" ( + %__ECHO% MKDIR "%ROOT%\msvcbld" + + IF ERRORLEVEL 1 ( + ECHO Could not make directory "%ROOT%\msvcbld". + GOTO errors + ) +) + +REM +REM NOTE: Attempt to change to the created build output directory so that +REM the generated files will be placed there. +REM +%__ECHO2% PUSHD "%ROOT%\msvcbld" + +IF ERRORLEVEL 1 ( + ECHO Could not change to directory "%ROOT%\msvcbld". + GOTO errors +) + +REM +REM NOTE: If requested, setup the build environment to refer to the Windows +REM SDK v7.1A, which is required if the binaries are being built with +REM Visual Studio 201x and need to work on Windows XP. +REM +IF DEFINED USE_V110SDK71A ( + %_AECHO% Forcing use of the Windows SDK v7.1A... + CALL :fn_UseV110Sdk71A +) + +%_VECHO% Path = '%PATH%' +%_VECHO% Include = '%INCLUDE%' +%_VECHO% Lib = '%LIB%' +%_VECHO% NmakeArgs = '%NMAKE_ARGS%' + +REM +REM NOTE: Attempt to execute NMAKE for the Fossil MSVC makefile, passing +REM anything extra from our command line along (e.g. extra options). +REM +%__ECHO% nmake /f "%TOOLS%\Makefile.msc" %NMAKE_ARGS% %* + +IF ERRORLEVEL 1 ( + GOTO errors +) + +REM +REM NOTE: Attempt to restore the previously saved directory. +REM +%__ECHO2% POPD + +IF ERRORLEVEL 1 ( + ECHO Could not restore directory. 
+ GOTO errors +) + +GOTO no_errors + +:fn_UseV110Sdk71A + IF "%PROCESSOR_ARCHITECTURE%" == "x86" GOTO set_v110Sdk71A_x86 + SET PFILES_SDK71A=%ProgramFiles(x86)% + GOTO set_v110Sdk71A_done + :set_v110Sdk71A_x86 + SET PFILES_SDK71A=%ProgramFiles% + :set_v110Sdk71A_done + SET PATH=%PFILES_SDK71A%\Microsoft SDKs\Windows\7.1A\Bin;%PATH% + SET INCLUDE=%PFILES_SDK71A%\Microsoft SDKs\Windows\7.1A\Include;%INCLUDE% + IF "%PLATFORM%" == "x64" ( + SET LIB=%PFILES_SDK71A%\Microsoft SDKs\Windows\7.1A\Lib\x64;%LIB% + ) ELSE ( + SET LIB=%PFILES_SDK71A%\Microsoft SDKs\Windows\7.1A\Lib;%LIB% + ) + CALL :fn_UnsetVariable PFILES_SDK71A + SET NMAKE_ARGS=%NMAKE_ARGS% FOSSIL_ENABLE_WINXP=1 + GOTO :EOF + +:fn_UnsetVariable + SETLOCAL + SET VALUE=%1 + IF DEFINED VALUE ( + SET VALUE= + ENDLOCAL + SET %VALUE%= + ) ELSE ( + ENDLOCAL + ) + CALL :fn_ResetErrorLevel + GOTO :EOF + +:fn_ResetErrorLevel + VERIFY > NUL + GOTO :EOF + +:fn_SetErrorLevel + VERIFY MAYBE 2> NUL + GOTO :EOF + +:usage + ECHO. + ECHO Usage: %~nx0 [...] + ECHO. + GOTO errors + +:errors + CALL :fn_SetErrorLevel + ENDLOCAL + ECHO. + ECHO Build failure, errors were encountered. + GOTO end_of_file + +:no_errors + CALL :fn_ResetErrorLevel + ENDLOCAL + ECHO. + ECHO Build success, no errors were encountered. + GOTO end_of_file + +:end_of_file +%__ECHO% EXIT /B %ERRORLEVEL% ADDED win/fossil.exe.manifest Index: win/fossil.exe.manifest ================================================================== --- win/fossil.exe.manifest +++ win/fossil.exe.manifest @@ -0,0 +1,43 @@ + + + + + Simple, high-reliability, distributed software configuration management system. + + + + + + + + + + + + + + + + + + + + + + + + + true + + + + + + + + ADDED win/fossil.ico Index: win/fossil.ico ================================================================== --- win/fossil.ico +++ win/fossil.ico cannot compute difference between binary files ADDED win/fossil.rc Index: win/fossil.rc ================================================================== --- win/fossil.rc +++ win/fossil.rc @@ -0,0 +1,187 @@ +/* +** Copyright (c) 2012 D. Richard Hipp +** +** This program is free software; you can redistribute it and/or +** modify it under the terms of the Simplified BSD License (also +** known as the "2-Clause License" or "FreeBSD License".) + +** This program is distributed in the hope that it will be useful, +** but without any warranty; without even the implied warranty of +** merchantability or fitness for a particular purpose. +** +** Author contact information: +** drh@hwaci.com +** http://www.hwaci.com/drh/ +** +******************************************************************************* +** +** This file contains resource information for the executable on Windows. 
+*/ + +#if !defined(_WIN32_WCE) +#include "winresrc.h" +#else +#include "windows.h" +#endif /* !defined(_WIN32_WCE) */ + +#if !defined(VS_FF_NONE) +# define VS_FF_NONE 0x00000000L +#endif /* !defined(VS_FF_NONE) */ + +#include "VERSION.h" +#define _RC_COMPILE_ +#include "config.h" +#include "sqlite3.h" + +#if defined(FOSSIL_ENABLE_MINIZ) +#if defined(__MINGW32__) +#include "minizver.h" +#else +#define MINIZ_HEADER_FILE_ONLY +#include "miniz.c" +#endif /* defined(__MINGW32__) */ +#else +#include "zlib.h" +#endif /* defined(FOSSIL_ENABLE_MINIZ) */ + +#if defined(FOSSIL_ENABLE_SSL) +#include "openssl/opensslv.h" +#endif /* defined(FOSSIL_ENABLE_SSL) */ + +#if defined(FOSSIL_ENABLE_TCL) +#include "tcl.h" +#endif /* defined(FOSSIL_ENABLE_TCL) */ + +#if defined(FOSSIL_ENABLE_JSON) +#include "json_detail.h" +#endif /* defined(FOSSIL_ENABLE_JSON) */ + +/* + * English (U.S.) resources + */ + +#if defined(_WIN32) +LANGUAGE LANG_ENGLISH, SUBLANG_ENGLISH_US +#pragma code_page(1252) +#endif /* defined(_WIN32) */ + +/* + * Icon + */ + +#define IDI_FOSSIL 8001 + +IDI_FOSSIL ICON "fossil.ico" + +/* + * Version + */ + +VS_VERSION_INFO VERSIONINFO + FILEVERSION RELEASE_RESOURCE_VERSION + PRODUCTVERSION RELEASE_RESOURCE_VERSION + FILEFLAGSMASK VS_FFI_FILEFLAGSMASK +#if defined(_DEBUG) + FILEFLAGS VS_FF_DEBUG +#else + FILEFLAGS VS_FF_NONE +#endif /* defined(_DEBUG) */ + FILEOS VOS__WINDOWS32 + FILETYPE VFT_APP + FILESUBTYPE VFT2_UNKNOWN +BEGIN + BLOCK "StringFileInfo" + BEGIN + BLOCK "040904b0" + BEGIN + VALUE "CompanyName", "Fossil Development Team\0" + VALUE "FileDescription", "Fossil is a simple, high-reliability, distributed software configuration management system.\0" + VALUE "ProductName", "Fossil\0" + VALUE "ProductVersion", "Fossil " RELEASE_VERSION " " MANIFEST_VERSION " " MANIFEST_DATE " UTC\0" + VALUE "FileVersion", "Fossil " RELEASE_VERSION " " MANIFEST_VERSION " " MANIFEST_DATE " UTC\0" + VALUE "InternalName", "fossil\0" + VALUE "LegalCopyright", "Copyright © " MANIFEST_YEAR " by D. Richard Hipp. 
All rights reserved.\0" + VALUE "OriginalFilename", "fossil.exe\0" + VALUE "CompilerName", COMPILER_NAME "\0" + VALUE "SQLiteVersion", "SQLite " SQLITE_VERSION " " SQLITE_SOURCE_ID "\0" +#if defined(FOSSIL_DYNAMIC_BUILD) + VALUE "DynamicBuild", "yes\0" +#else + VALUE "DynamicBuild", "no\0" +#endif +#if defined(FOSSIL_ENABLE_MINIZ) + VALUE "MinizVersion", "miniz " MZ_VERSION "\0" +#else + VALUE "ZlibVersion", "zlib " ZLIB_VERSION "\0" +#endif /* defined(FOSSIL_ENABLE_MINIZ) */ +#if defined(BROKEN_MINGW_CMDLINE) + VALUE "CommandLineIsUnicode", "No\0" +#else + VALUE "CommandLineIsUnicode", "Yes\0" +#endif /* defined(BROKEN_MINGW_CMDLINE) */ +#if defined(FOSSIL_ENABLE_SSL) + VALUE "SslEnabled", "Yes, " OPENSSL_VERSION_TEXT "\0" +#endif /* defined(FOSSIL_ENABLE_SSL) */ +#if defined(FOSSIL_ENABLE_LEGACY_MV_RM) + VALUE "LegacyMvRm", "Yes\0" +#else + VALUE "LegacyMvRm", "No\0" +#endif /* defined(FOSSIL_ENABLE_LEGACY_MV_RM) */ +#if defined(FOSSIL_ENABLE_EXEC_REL_PATHS) + VALUE "ExecRelPaths", "Yes\0" +#else + VALUE "ExecRelPaths", "No\0" +#endif /* defined(FOSSIL_ENABLE_EXEC_REL_PATHS) */ +#if defined(FOSSIL_ENABLE_TH1_DOCS) + VALUE "Th1Docs", "Yes\0" +#else + VALUE "Th1Docs", "No\0" +#endif /* defined(FOSSIL_ENABLE_TH1_DOCS) */ +#if defined(FOSSIL_ENABLE_TH1_HOOKS) + VALUE "Th1Hooks", "Yes\0" +#else + VALUE "Th1Hooks", "No\0" +#endif /* defined(FOSSIL_ENABLE_TH1_HOOKS) */ +#if defined(FOSSIL_ENABLE_TCL) + VALUE "TclEnabled", "Yes, Tcl " TCL_PATCH_LEVEL "\0" +#if defined(USE_TCL_STUBS) + VALUE "UseTclStubsEnabled", "Yes\0" +#else + VALUE "UseTclStubsEnabled", "No\0" +#endif /* defined(USE_TCL_STUBS) */ +#if defined(FOSSIL_ENABLE_TCL_STUBS) + VALUE "TclStubsEnabled", "Yes\0" +#else + VALUE "TclStubsEnabled", "No\0" +#endif /* defined(FOSSIL_ENABLE_TCL_STUBS) */ +#if defined(FOSSIL_ENABLE_TCL_PRIVATE_STUBS) + VALUE "TclPrivateStubsEnabled", "Yes\0" +#else + VALUE "TclPrivateStubsEnabled", "No\0" +#endif /* defined(FOSSIL_ENABLE_TCL_PRIVATE_STUBS) */ +#endif /* defined(FOSSIL_ENABLE_TCL) */ +#if defined(FOSSIL_ENABLE_JSON) + VALUE "JsonEnabled", "Yes, cson " FOSSIL_JSON_API_VERSION "\0" +#endif /* defined(FOSSIL_ENABLE_JSON) */ + VALUE "MarkdownEnabled", "Yes\0" + END + END + BLOCK "VarFileInfo" + BEGIN + VALUE "Translation", 0x409, 0x4b0 + END +END + +/* + * This embedded manifest is needed for Windows 8.1. + */ + +#ifndef RT_MANIFEST +#define RT_MANIFEST 24 +#endif /* RT_MANIFEST */ + +#ifndef CREATEPROCESS_MANIFEST_RESOURCE_ID +#define CREATEPROCESS_MANIFEST_RESOURCE_ID 1 +#endif /* CREATEPROCESS_MANIFEST_RESOURCE_ID */ + +CREATEPROCESS_MANIFEST_RESOURCE_ID RT_MANIFEST "fossil.exe.manifest" ADDED win/include/dirent.h Index: win/include/dirent.h ================================================================== --- win/include/dirent.h +++ win/include/dirent.h @@ -0,0 +1,838 @@ +/* + * dirent.h - dirent API for Microsoft Visual Studio + * + * Copyright (C) 2006-2012 Toni Ronkko + * + * Permission is hereby granted, free of charge, to any person obtaining + * a copy of this software and associated documentation files (the + * ``Software''), to deal in the Software without restriction, including + * without limitation the rights to use, copy, modify, merge, publish, + * distribute, sublicense, and/or sell copies of the Software, and to + * permit persons to whom the Software is furnished to do so, subject to + * the following conditions: + * + * The above copyright notice and this permission notice shall be included + * in all copies or substantial portions of the Software. 
+ * + * THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS + * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. + * IN NO EVENT SHALL TONI RONKKO BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. + * + * $Id: dirent.h,v 1.20 2014/03/19 17:52:23 tronkko Exp $ + */ +#ifndef DIRENT_H +#define DIRENT_H + +/* + * Define architecture flags so we don't need to include windows.h. + * Avoiding windows.h makes it simpler to use windows sockets in conjunction + * with dirent.h. + */ +#if !defined(_68K_) && !defined(_MPPC_) && !defined(_X86_) && !defined(_IA64_) && !defined(_AMD64_) && defined(_M_IX86) +# define _X86_ +#endif +#if !defined(_68K_) && !defined(_MPPC_) && !defined(_X86_) && !defined(_IA64_) && !defined(_AMD64_) && defined(_M_AMD64) +#define _AMD64_ +#endif + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +/* Indicates that d_type field is available in dirent structure */ +#define _DIRENT_HAVE_D_TYPE + +/* Indicates that d_namlen field is available in dirent structure */ +#define _DIRENT_HAVE_D_NAMLEN + +/* Entries missing from MSVC 6.0 */ +#if !defined(FILE_ATTRIBUTE_DEVICE) +# define FILE_ATTRIBUTE_DEVICE 0x40 +#endif + +/* File type and permission flags for stat() */ +#if !defined(S_IFMT) +# define S_IFMT _S_IFMT /* File type mask */ +#endif +#if !defined(S_IFDIR) +# define S_IFDIR _S_IFDIR /* Directory */ +#endif +#if !defined(S_IFCHR) +# define S_IFCHR _S_IFCHR /* Character device */ +#endif +#if !defined(S_IFFIFO) +# define S_IFFIFO _S_IFFIFO /* Pipe */ +#endif +#if !defined(S_IFREG) +# define S_IFREG _S_IFREG /* Regular file */ +#endif +#if !defined(S_IREAD) +# define S_IREAD _S_IREAD /* Read permission */ +#endif +#if !defined(S_IWRITE) +# define S_IWRITE _S_IWRITE /* Write permission */ +#endif +#if !defined(S_IEXEC) +# define S_IEXEC _S_IEXEC /* Execute permission */ +#endif +#if !defined(S_IFIFO) +# define S_IFIFO _S_IFIFO /* Pipe */ +#endif +#if !defined(S_IFBLK) +# define S_IFBLK 0 /* Block device */ +#endif +#if !defined(S_IFLNK) +# define S_IFLNK 0 /* Link */ +#endif +#if !defined(S_IFSOCK) +# define S_IFSOCK 0 /* Socket */ +#endif + +#if defined(_MSC_VER) +# define S_IRUSR S_IREAD /* Read user */ +# define S_IWUSR S_IWRITE /* Write user */ +# define S_IXUSR 0 /* Execute user */ +# define S_IRGRP 0 /* Read group */ +# define S_IWGRP 0 /* Write group */ +# define S_IXGRP 0 /* Execute group */ +# define S_IROTH 0 /* Read others */ +# define S_IWOTH 0 /* Write others */ +# define S_IXOTH 0 /* Execute others */ +#endif + +/* Maximum length of file name */ +#if !defined(PATH_MAX) +# define PATH_MAX MAX_PATH +#endif +#if !defined(FILENAME_MAX) +# define FILENAME_MAX MAX_PATH +#endif +#if !defined(NAME_MAX) +# define NAME_MAX FILENAME_MAX +#endif + +/* File type flags for d_type */ +#define DT_UNKNOWN 0 +#define DT_REG S_IFREG +#define DT_DIR S_IFDIR +#define DT_FIFO S_IFIFO +#define DT_SOCK S_IFSOCK +#define DT_CHR S_IFCHR +#define DT_BLK S_IFBLK +#define DT_LNK S_IFLNK + +/* Macros for converting between st_mode and d_type */ +#define IFTODT(mode) ((mode) & S_IFMT) +#define DTTOIF(type) (type) + +/* + * File type macros. 
Note that block devices, sockets and links cannot be + * distinguished on Windows and the macros S_ISBLK, S_ISSOCK and S_ISLNK are + * only defined for compatibility. These macros should always return false + * on Windows. + */ +#define S_ISFIFO(mode) (((mode) & S_IFMT) == S_IFIFO) +#define S_ISDIR(mode) (((mode) & S_IFMT) == S_IFDIR) +#define S_ISREG(mode) (((mode) & S_IFMT) == S_IFREG) +#define S_ISLNK(mode) (((mode) & S_IFMT) == S_IFLNK) +#define S_ISSOCK(mode) (((mode) & S_IFMT) == S_IFSOCK) +#define S_ISCHR(mode) (((mode) & S_IFMT) == S_IFCHR) +#define S_ISBLK(mode) (((mode) & S_IFMT) == S_IFBLK) + +/* Return the exact length of d_namlen without zero terminator */ +#define _D_EXACT_NAMLEN(p) ((p)->d_namlen) + +/* Return number of bytes needed to store d_namlen */ +#define _D_ALLOC_NAMLEN(p) (PATH_MAX) + + +#ifdef __cplusplus +extern "C" { +#endif + + +/* Wide-character version */ +struct _wdirent { + long d_ino; /* Always zero */ + unsigned short d_reclen; /* Structure size */ + size_t d_namlen; /* Length of name without \0 */ + int d_type; /* File type */ + wchar_t d_name[PATH_MAX]; /* File name */ +}; +typedef struct _wdirent _wdirent; + +struct _WDIR { + struct _wdirent ent; /* Current directory entry */ + WIN32_FIND_DATAW data; /* Private file data */ + int cached; /* True if data is valid */ + HANDLE handle; /* Win32 search handle */ + wchar_t *patt; /* Initial directory name */ +}; +typedef struct _WDIR _WDIR; + +static _WDIR *_wopendir (const wchar_t *dirname); +static struct _wdirent *_wreaddir (_WDIR *dirp); +static int _wclosedir (_WDIR *dirp); +static void _wrewinddir (_WDIR* dirp); + + +/* For compatibility with Symbian */ +#define wdirent _wdirent +#define WDIR _WDIR +#define wopendir _wopendir +#define wreaddir _wreaddir +#define wclosedir _wclosedir +#define wrewinddir _wrewinddir + + +/* Multi-byte character versions */ +struct dirent { + long d_ino; /* Always zero */ + unsigned short d_reclen; /* Structure size */ + size_t d_namlen; /* Length of name without \0 */ + int d_type; /* File type */ + char d_name[PATH_MAX]; /* File name */ +}; +typedef struct dirent dirent; + +struct DIR { + struct dirent ent; + struct _WDIR *wdirp; +}; +typedef struct DIR DIR; + +static DIR *opendir (const char *dirname); +static struct dirent *readdir (DIR *dirp); +static int closedir (DIR *dirp); +static void rewinddir (DIR* dirp); + + +/* Internal utility functions */ +static WIN32_FIND_DATAW *dirent_first (_WDIR *dirp); +static WIN32_FIND_DATAW *dirent_next (_WDIR *dirp); + +static int dirent_mbstowcs_s( + size_t *pReturnValue, + wchar_t *wcstr, + size_t sizeInWords, + const char *mbstr, + size_t count); + +static int dirent_wcstombs_s( + size_t *pReturnValue, + char *mbstr, + size_t sizeInBytes, + const wchar_t *wcstr, + size_t count); + +static void dirent_set_errno (int error); + +/* + * Open directory stream DIRNAME for read and return a pointer to the + * internal working area that is used to retrieve individual directory + * entries. 
+ */ +static _WDIR* +_wopendir( + const wchar_t *dirname) +{ + _WDIR *dirp = NULL; + int error; + + /* Must have directory name */ + if (dirname == NULL || dirname[0] == '\0') { + dirent_set_errno (ENOENT); + return NULL; + } + + /* Allocate new _WDIR structure */ + dirp = (_WDIR*) malloc (sizeof (struct _WDIR)); + if (dirp != NULL) { + DWORD n; + + /* Reset _WDIR structure */ + dirp->handle = INVALID_HANDLE_VALUE; + dirp->patt = NULL; + dirp->cached = 0; + + /* Compute the length of full path plus zero terminator */ + n = GetFullPathNameW (dirname, 0, NULL, NULL); + + /* Allocate room for absolute directory name and search pattern */ + dirp->patt = (wchar_t*) malloc (sizeof (wchar_t) * n + 16); + if (dirp->patt) { + + /* + * Convert relative directory name to an absolute one. This + * allows rewinddir() to function correctly even when current + * working directory is changed between opendir() and rewinddir(). + */ + n = GetFullPathNameW (dirname, n, dirp->patt, NULL); + if (n > 0) { + wchar_t *p; + + /* Append search pattern \* to the directory name */ + p = dirp->patt + n; + if (dirp->patt < p) { + switch (p[-1]) { + case '\\': + case '/': + case ':': + /* Directory ends in path separator, e.g. c:\temp\ */ + /*NOP*/; + break; + + default: + /* Directory name doesn't end in path separator */ + *p++ = '\\'; + } + } + *p++ = '*'; + *p = '\0'; + + /* Open directory stream and retrieve the first entry */ + if (dirent_first (dirp)) { + /* Directory stream opened successfully */ + error = 0; + } else { + /* Cannot retrieve first entry */ + error = 1; + dirent_set_errno (ENOENT); + } + + } else { + /* Cannot retrieve full path name */ + dirent_set_errno (ENOENT); + error = 1; + } + + } else { + /* Cannot allocate memory for search pattern */ + error = 1; + } + + } else { + /* Cannot allocate _WDIR structure */ + error = 1; + } + + /* Clean up in case of error */ + if (error && dirp) { + _wclosedir (dirp); + dirp = NULL; + } + + return dirp; +} + +/* + * Read next directory entry. The directory entry is returned in dirent + * structure in the d_name field. Individual directory entries returned by + * this function include regular files, sub-directories, pseudo-directories + * "." and ".." as well as volume labels, hidden files and system files. + */ +static struct _wdirent* +_wreaddir( + _WDIR *dirp) +{ + WIN32_FIND_DATAW *datap; + struct _wdirent *entp; + + /* Read next directory entry */ + datap = dirent_next (dirp); + if (datap) { + size_t n; + DWORD attr; + + /* Pointer to directory entry to return */ + entp = &dirp->ent; + + /* + * Copy file name as wide-character string. If the file name is too + * long to fit in to the destination buffer, then truncate file name + * to PATH_MAX characters and zero-terminate the buffer. + */ + n = 0; + while (n + 1 < PATH_MAX && datap->cFileName[n] != 0) { + entp->d_name[n] = datap->cFileName[n]; + n++; + } + dirp->ent.d_name[n] = 0; + + /* Length of file name excluding zero terminator */ + entp->d_namlen = n; + + /* File type */ + attr = datap->dwFileAttributes; + if ((attr & FILE_ATTRIBUTE_DEVICE) != 0) { + entp->d_type = DT_CHR; + } else if ((attr & FILE_ATTRIBUTE_DIRECTORY) != 0) { + entp->d_type = DT_DIR; + } else { + entp->d_type = DT_REG; + } + + /* Reset dummy fields */ + entp->d_ino = 0; + entp->d_reclen = sizeof (struct _wdirent); + + } else { + + /* Last directory entry read */ + entp = NULL; + + } + + return entp; +} + +/* + * Close directory stream opened by opendir() function. 
This invalidates the + * DIR structure as well as any directory entry read previously by + * _wreaddir(). + */ +static int +_wclosedir( + _WDIR *dirp) +{ + int ok; + if (dirp) { + + /* Release search handle */ + if (dirp->handle != INVALID_HANDLE_VALUE) { + FindClose (dirp->handle); + dirp->handle = INVALID_HANDLE_VALUE; + } + + /* Release search pattern */ + if (dirp->patt) { + free (dirp->patt); + dirp->patt = NULL; + } + + /* Release directory structure */ + free (dirp); + ok = /*success*/0; + + } else { + /* Invalid directory stream */ + dirent_set_errno (EBADF); + ok = /*failure*/-1; + } + return ok; +} + +/* + * Rewind directory stream such that _wreaddir() returns the very first + * file name again. + */ +static void +_wrewinddir( + _WDIR* dirp) +{ + if (dirp) { + /* Release existing search handle */ + if (dirp->handle != INVALID_HANDLE_VALUE) { + FindClose (dirp->handle); + } + + /* Open new search handle */ + dirent_first (dirp); + } +} + +/* Get first directory entry (internal) */ +static WIN32_FIND_DATAW* +dirent_first( + _WDIR *dirp) +{ + WIN32_FIND_DATAW *datap; + + /* Open directory and retrieve the first entry */ + dirp->handle = FindFirstFileW (dirp->patt, &dirp->data); + if (dirp->handle != INVALID_HANDLE_VALUE) { + + /* a directory entry is now waiting in memory */ + datap = &dirp->data; + dirp->cached = 1; + + } else { + + /* Failed to re-open directory: no directory entry in memory */ + dirp->cached = 0; + datap = NULL; + + } + return datap; +} + +/* Get next directory entry (internal) */ +static WIN32_FIND_DATAW* +dirent_next( + _WDIR *dirp) +{ + WIN32_FIND_DATAW *p; + + /* Get next directory entry */ + if (dirp->cached != 0) { + + /* A valid directory entry already in memory */ + p = &dirp->data; + dirp->cached = 0; + + } else if (dirp->handle != INVALID_HANDLE_VALUE) { + + /* Get the next directory entry from stream */ + if (FindNextFileW (dirp->handle, &dirp->data) != FALSE) { + /* Got a file */ + p = &dirp->data; + } else { + /* The very last entry has been processed or an error occured */ + FindClose (dirp->handle); + dirp->handle = INVALID_HANDLE_VALUE; + p = NULL; + } + + } else { + + /* End of directory stream reached */ + p = NULL; + + } + + return p; +} + +/* + * Open directory stream using plain old C-string. + */ +static DIR* +opendir( + const char *dirname) +{ + struct DIR *dirp; + int error; + + /* Must have directory name */ + if (dirname == NULL || dirname[0] == '\0') { + dirent_set_errno (ENOENT); + return NULL; + } + + /* Allocate memory for DIR structure */ + dirp = (DIR*) malloc (sizeof (struct DIR)); + if (dirp) { + wchar_t wname[PATH_MAX]; + size_t n; + + /* Convert directory name to wide-character string */ + error = dirent_mbstowcs_s (&n, wname, PATH_MAX, dirname, PATH_MAX); + if (!error) { + + /* Open directory stream using wide-character name */ + dirp->wdirp = _wopendir (wname); + if (dirp->wdirp) { + /* Directory stream opened */ + error = 0; + } else { + /* Failed to open directory stream */ + error = 1; + } + + } else { + /* + * Cannot convert file name to wide-character string. This + * occurs if the string contains invalid multi-byte sequences or + * the output buffer is too small to contain the resulting + * string. + */ + error = 1; + } + + } else { + /* Cannot allocate DIR structure */ + error = 1; + } + + /* Clean up in case of error */ + if (error && dirp) { + free (dirp); + dirp = NULL; + } + + return dirp; +} + +/* + * Read next directory entry. 
+ * + * When working with text consoles, please note that file names returned by + * readdir() are represented in the default ANSI code page while any output to + * console is typically formatted on another code page. Thus, non-ASCII + * characters in file names will not usually display correctly on console. The + * problem can be fixed in two ways: (1) change the character set of console + * to 1252 using chcp utility and use Lucida Console font, or (2) use + * _cprintf function when writing to console. The _cprinf() will re-encode + * ANSI strings to the console code page so many non-ASCII characters will + * display correcly. + */ +static struct dirent* +readdir( + DIR *dirp) +{ + WIN32_FIND_DATAW *datap; + struct dirent *entp; + + /* Read next directory entry */ + datap = dirent_next (dirp->wdirp); + if (datap) { + size_t n; + int error; + + /* Attempt to convert file name to multi-byte string */ + error = dirent_wcstombs_s( + &n, dirp->ent.d_name, PATH_MAX, datap->cFileName, PATH_MAX); + + /* + * If the file name cannot be represented by a multi-byte string, + * then attempt to use old 8+3 file name. This allows traditional + * Unix-code to access some file names despite of unicode + * characters, although file names may seem unfamiliar to the user. + * + * Be ware that the code below cannot come up with a short file + * name unless the file system provides one. At least + * VirtualBox shared folders fail to do this. + */ + if (error && datap->cAlternateFileName[0] != '\0') { + error = dirent_wcstombs_s( + &n, dirp->ent.d_name, PATH_MAX, + datap->cAlternateFileName, PATH_MAX); + } + + if (!error) { + DWORD attr; + + /* Initialize directory entry for return */ + entp = &dirp->ent; + + /* Length of file name excluding zero terminator */ + entp->d_namlen = n - 1; + + /* File attributes */ + attr = datap->dwFileAttributes; + if ((attr & FILE_ATTRIBUTE_DEVICE) != 0) { + entp->d_type = DT_CHR; + } else if ((attr & FILE_ATTRIBUTE_DIRECTORY) != 0) { + entp->d_type = DT_DIR; + } else { + entp->d_type = DT_REG; + } + + /* Reset dummy fields */ + entp->d_ino = 0; + entp->d_reclen = sizeof (struct dirent); + + } else { + /* + * Cannot convert file name to multi-byte string so construct + * an errornous directory entry and return that. Note that + * we cannot return NULL as that would stop the processing + * of directory entries completely. + */ + entp = &dirp->ent; + entp->d_name[0] = '?'; + entp->d_name[1] = '\0'; + entp->d_namlen = 1; + entp->d_type = DT_UNKNOWN; + entp->d_ino = 0; + entp->d_reclen = 0; + } + + } else { + /* No more directory entries */ + entp = NULL; + } + + return entp; +} + +/* + * Close directory stream. + */ +static int +closedir( + DIR *dirp) +{ + int ok; + if (dirp) { + + /* Close wide-character directory stream */ + ok = _wclosedir (dirp->wdirp); + dirp->wdirp = NULL; + + /* Release multi-byte character version */ + free (dirp); + + } else { + + /* Invalid directory stream */ + dirent_set_errno (EBADF); + ok = /*failure*/-1; + + } + return ok; +} + +/* + * Rewind directory stream to beginning. 
+ */ +static void +rewinddir( + DIR* dirp) +{ + /* Rewind wide-character string directory stream */ + _wrewinddir (dirp->wdirp); +} + +/* Convert multi-byte string to wide character string */ +static int +dirent_mbstowcs_s( + size_t *pReturnValue, + wchar_t *wcstr, + size_t sizeInWords, + const char *mbstr, + size_t count) +{ + int error; + +#if defined(_MSC_VER) && _MSC_VER >= 1400 + + /* Microsoft Visual Studio 2005 or later */ + error = mbstowcs_s (pReturnValue, wcstr, sizeInWords, mbstr, count); + +#else + + /* Older Visual Studio or non-Microsoft compiler */ + size_t n; + + /* Convert to wide-character string (or count characters) */ + n = mbstowcs (wcstr, mbstr, sizeInWords); + if (!wcstr || n < count) { + + /* Zero-terminate output buffer */ + if (wcstr && sizeInWords) { + if (n >= sizeInWords) { + n = sizeInWords - 1; + } + wcstr[n] = 0; + } + + /* Length of resuting multi-byte string WITH zero terminator */ + if (pReturnValue) { + *pReturnValue = n + 1; + } + + /* Success */ + error = 0; + + } else { + + /* Could not convert string */ + error = 1; + + } + +#endif + + return error; +} + +/* Convert wide-character string to multi-byte string */ +static int +dirent_wcstombs_s( + size_t *pReturnValue, + char *mbstr, + size_t sizeInBytes, /* max size of mbstr */ + const wchar_t *wcstr, + size_t count) +{ + int error; + +#if defined(_MSC_VER) && _MSC_VER >= 1400 + + /* Microsoft Visual Studio 2005 or later */ + error = wcstombs_s (pReturnValue, mbstr, sizeInBytes, wcstr, count); + +#else + + /* Older Visual Studio or non-Microsoft compiler */ + size_t n; + + /* Convert to multi-byte string (or count the number of bytes needed) */ + n = wcstombs (mbstr, wcstr, sizeInBytes); + if (!mbstr || n < count) { + + /* Zero-terminate output buffer */ + if (mbstr && sizeInBytes) { + if (n >= sizeInBytes) { + n = sizeInBytes - 1; + } + mbstr[n] = '\0'; + } + + /* Lenght of resulting multi-bytes string WITH zero-terminator */ + if (pReturnValue) { + *pReturnValue = n + 1; + } + + /* Success */ + error = 0; + + } else { + + /* Cannot convert string */ + error = 1; + + } + +#endif + + return error; +} + +/* Set errno variable */ +static void +dirent_set_errno( + int error) +{ +#if defined(_MSC_VER) && _MSC_VER >= 1400 + + /* Microsoft Visual Studio 2005 and later */ + _set_errno (error); + +#else + + /* Non-Microsoft compiler or older Microsoft compiler */ + errno = error; + +#endif +} + + +#ifdef __cplusplus +} +#endif +#endif /*DIRENT_H*/ + ADDED win/include/unistd.h Index: win/include/unistd.h ================================================================== --- win/include/unistd.h +++ win/include/unistd.h @@ -0,0 +1,47 @@ +#ifndef _UNISTD_H +#define _UNISTD_H 1 + +/* This file intended to serve as a drop-in replacement for + * unistd.h on Windows + * Please add functionality as neeeded + */ + +#include +#include +#define srandom srand +#define random rand +#if defined(__DMC__) +#endif + +#if defined(_WIN32) +#define _CRT_SECURE_NO_WARNINGS 1 + +#ifndef F_OK +#define F_OK 0 +#endif /* not F_OK */ + +#ifndef X_OK +#define X_OK 1 +#endif /* not X_OK */ + +#ifndef R_OK +#define R_OK 2 +#endif /* not R_OK */ + +#ifndef W_OK +#define W_OK 4 +#endif /* not W_OK */ + +#define S_ISDIR(m) (((m) & S_IFMT) == S_IFDIR) +#define S_ISREG(m) (((m) & S_IFMT) == S_IFREG) + + + +#endif + +#define access _access +#define ftruncate _chsize + +#define ssize_t int + +#endif /* unistd.h */ ADDED www/adding_code.wiki Index: www/adding_code.wiki ================================================================== --- 
www/adding_code.wiki +++ www/adding_code.wiki @@ -0,0 +1,215 @@ +Adding Features To Fossil + +

        1.0 Introduction

        + +This article provides a brief overview of how to write new code that extends +or enhances Fossil. + +

        2.0 Programming Language

        + +Fossil is written in C-89. There are specific [./style.wiki | style guidelines] +that are required for any new code that will be accepted into the Fossil core. +But, of course, if you are writing an extension just for yourself, you can +use any programming style you want. + +The source code for Fossil is not sent directly into the C compiler. +There are three separate code [./makefile.wiki#preprocessing|preprocessors] +that run over the code first. + + 1. The mkindex preprocessor scans all regular source files looking + for special comments that contain "help" text and which identify routines + that implement specific commands or which generate particular web pages. + + 2. The makeheaders preprocessor generates all the ".h" files + automatically. Fossil programmers write ".c" files only and let the + makeheaders preprocessor create the ".h" files. + + 3. The translate preprocessor converts source code lines that + begin with "@" into string literals, or into print statements that + generate web page output, depending on context. + +The [./makefile.wiki|Makefile] for Fossil takes care of running these +preprocessors with all the right arguments and in the right order. So it is +not necessary to understand the details of how these preprocessors work. +(Though, the sources for all three preprocessors are included in the source +tree and are well commented, if you want to dig deeper.) It is only necessary +to know that these preprocessors exist and hence will affect the way you +write code. + +
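For illustration, the translate step rewrites a source line such as

    @ <h1>Hello</h1>

into an ordinary C statement that prints the text, roughly of the form

    cgi_printf("<h1>Hello</h1>\n");

(The name of the generated print call is an assumption made here for illustration only; the header comment in src/translate.c is the authoritative description of the actual output.)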

        3.0 Adding New Source Code Files

        + +New source code files are added in the "src/" subdirectory of the Fossil +source tree. Suppose one wants to add a new source code file named +"xyzzy.c". The first step is to add this file to the various makefiles. +Do so by editing the file src/makemake.tcl and adding "xyzzy" (without +the final ".c") to the list of source modules at the top of that script. +Save the result and then run the makemake.tcl script using a TCL +interpreter. The command to run the makemake.tcl script is: + + tclsh makemake.tcl + +The working directory must be src/ when the command above is run. +Note that TCL is not normally required to build Fossil, but +it is required for this step. If you do not have a TCL interpreter on +your system already, they are easy to install. A popular choice is the +[http://www.activestate.com/activetcl|Active Tcl] installation from +ActiveState. + +After the makefiles have been updated, create the xyzzy.c source file +from the following template: + +
        +/* +** Copyright boilerplate goes here. +***************************************************** +** High-level description of what this module does +** goes here. +*/ +#include "config.h" +#include "xyzzy.h" + +#if INTERFACE +/* Exported object (structure) definitions or #defines +** go here */ +#endif /* INTERFACE */ + +/* New code goes here */ +
        + +Note in particular the #include "xyzzy.h" line near the top. +The "xyzzy.h" file is automatically generated by makeheaders. Every +normal Fossil source file must have a #include at the top that imports +its private header file. (Some source files, such as "sqlite3.c", are +exceptions to this rule. Don't worry about those exceptions. The +files you write will require this #include line.) + +The "#if INTERFACE ... #endif" section is optional and is only needed +if there are structure definitions or typedefs or macros that need to +be used by other source code files. The makeheaders preprocessor +uses definitions in the INTERFACE section to help it generate header +files. See [../src/makeheaders.html | makeheaders.html] for additional +information. + +After creating a template file such as shown above, and after updating +the makefiles, you should be able to recompile Fossil and have it include +your new source file, even before your source file contains any code. +It is recommended that you try this. + +Be sure to [/help/add|fossil add] your new source file to the self-hosting +Fossil repository and then [/help/commit|commit] your changes! + +
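Putting the steps of this section together, a typical session for adding the hypothetical xyzzy.c might look like this (the makemake.tcl edit itself is done in a text editor):

    cd src
    # edit makemake.tcl: add "xyzzy" to the list of source modules
    tclsh makemake.tcl
    cd ..
    ./configure; make        # or: make -f Makefile.classic
    fossil add src/xyzzy.c
    fossil commit

All of the commands shown are described in this article or in the build documentation; only the module name "xyzzy" is invented.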

        4.0 Creating A New Command

        + +By "commands" we mean the keywords that follow "fossil" when invoking +Fossil from the command-line. So, for example, in + + fossil diff xyzzy.c + +The "command" is "diff". Commands may optionally be followed by +arguments and/or options. To create new commands in Fossil, add code +(either to an existing source file, or to a new source file created as +described above) according to the following template: + +
        +/* +** COMMAND: xyzzy +** +** Help text goes here. +*/ +void xyzzy_cmd(void){ + /* Implement the command here */ + fossil_print("Hello, World!\n"); +} +
        + +The example above creates a new command named "xyzzy" that prints the +message "Hello, World!" on the console. This command is a normal command +that will show up in the list of commands from [/help/help|fossil help]. +If you add an asterisk to the end of the command name, like this: + +
        +** COMMAND: xyzzy* +
        + +Then the command will only show up if you add the "--all" option to +[/help/help|fossil help]. Or, if the command name starts with +"test" then the command will be considered experimental and will only +show up when the --test option is used with [/help/help|fossil help]. + +The example above is a fully functioning Fossil command. You can add +the text shown to an existing Fossil source file, recompile, and then test +it out by typing: + + ./fossil xyzzy
        + ./fossil help xyzzy
        + ./fossil xyzzy --help
        + +The name of the C function that implements the command can be anything +you like (as long as it does not collide with some other symbol in the +Fossil code) but it is traditional to name the function +"commandname_cmd", as is done in the example. + +You could also use "printf()" instead of "fossil_print()" to generate +the output text, if desired. But "fossil_print()" is recommended as +it has extra logic to insert \r characters at the right times on +Windows systems. + +Once you have the command running, you can then start adding code to +make it do useful things. There are lots of utility functions in +Fossil for parsing command-line options and for +opening and accessing and manipulating the repository and +the working check-out. Study implementations of existing commands +to get an idea of how things are done. You can easily find the implementations +of existing commands by searching for "COMMAND: name" in the +files of the "src/" directory. + +
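For reference, a complete (if trivial) new source file that combines the file template from section 3.0 with the command template above would look like the following; everything here is assembled from fragments already shown, with placeholder help text:

    /*
    ** Copyright boilerplate goes here.
    *****************************************************
    ** A toy module implementing the "xyzzy" command.
    */
    #include "config.h"
    #include "xyzzy.h"

    /*
    ** COMMAND: xyzzy
    **
    ** Help text for the xyzzy command goes here.
    */
    void xyzzy_cmd(void){
      fossil_print("Hello, World!\n");
    }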

        5.0 Creating A New Web Page

        + +As with commands, new webpages can be added simply by inserting a function +that generates the webpage together with a special header comment. A +template follows: + +
        +/* +** WEBPAGE: helloworld +*/ +void helloworld_page(void){ + style_header("Hello World!"); + @

        Hello, World!

        + style_footer(); +} +
        + +Add the code above to a new or existing Fossil source code file, then +recompile fossil, run [/help/ui|fossil ui], and then enter +"http://localhost:8080/helloworld" in your web browser and the routine +above will generate a web page that says "Hello World." +It really is that simple. + +The special "WEBPAGE:" comment is picked up by the "mkindex" preprocessor +and used to generate a table that maps the "helloworld" webpage name +into a pointer to the "helloworld_page()" function. The function that +implements a webpage can be named anything you like (as long as it does +not collide with another name) but the traditional name is +"pagename_page". + +HTML pages begin with a call to style_header() and end with the call to +style_footer(). Content is generated by the "@" lines that are translated +(by the "translate" preprocessor) into printf-like code that generates the +content of the webpage. Different techniques are used to generate +non-HTML content. In the unlikely event that you need to generate +non-HTML content, look at existing webpage implementations +(ex: "logo" or "style.css") to see how that is done. + +There are lots of other things that a real web-page implementation will +need to do, such as verifying user credentials, parsing query parameters, +and interacting with the repository. But now that you have the general +idea of how webpages are implemented, you can look at the many other +webpage implementations already built into Fossil to see how all that +works. + +
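Because the @ lines become ordinary print-like statements after preprocessing, they can be interleaved with normal C control flow. The following is a hypothetical sketch, not an existing Fossil page, and the %d(...) substitution inside the @ line is an assumption about the translate preprocessor; check an existing page implementation before relying on it:

    /*
    ** WEBPAGE: counting
    */
    void counting_page(void){
      int i;
      style_header("Counting");
      for(i=1; i<=3; i++){
        @ <p>This is paragraph number %d(i).</p>
      }
      style_footer();
    }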

        6.0 See Also

        + + * [./makefile.wiki|The Fossil Build Process] + * [./tech_overview.wiki|A Technical Overview Of Fossil] + * [./contribute.wiki|Contributing To The Fossil Project] ADDED www/antibot.wiki Index: www/antibot.wiki ================================================================== --- www/antibot.wiki +++ www/antibot.wiki @@ -0,0 +1,147 @@ +Defense Against Spiders + +The website presented by a Fossil server has many hyperlinks. +Even a modest project can have millions of pages in its +tree, and many of those pages (for example diffs and annotations +and ZIP archive of older check-ins) can be expensive to compute. +If a spider or bot tries to walk a website implemented by +Fossil, it can present a crippling bandwidth and CPU load. + +The website presented by a Fossil server is intended to be used +interactively by humans, not walked by spiders. This article +describes the techniques used by Fossil to try to welcome human +users while keeping out spiders. + +

        The "hyperlink" user capability

        + +Every Fossil web session has a "user". For random passers-by on the internet +(and for spiders) that user is "nobody". The "anonymous" user is also +available for humans who do not wish to identify themselves. The difference +is that "anonymous" requires a login (using a password supplied via +a CAPTCHA) whereas "nobody" does not require a login. +The site administrator can also create logins with +passwords for specific individuals. + +The "h" or "hyperlink" capability is a permission that can be granted +to users that enables the display of hyperlinks. Most of the hyperlinks +generated by Fossil are suppressed if this capability is missing. So +one simple defense against spiders is to disable the "h" permission for +the "nobody" user. This means that users must log in (perhaps as +"anonymous") before they can see any of the hyperlinks. Spiders do not +normally attempt to log into websites and will therefore +not see most of the hyperlinks and will not try to walk the millions of +historical check-ins and diffs available on a Fossil-generated website. + +If the "h" capability is missing from user "nobody" but is present for +user "anonymous", then a message automatically appears at the top of each +page inviting the user to log in as anonymous in order to activate hyperlinks. + +Removing the "h" capability from user "nobody" is an effective means +of preventing spiders from walking a Fossil-generated website. But +it can also be annoying to humans, since it requires them to log in. +Hence, Fossil provides other techniques for blocking spiders which +are less cumbersome to humans. + +

        Automatic hyperlinks based on UserAgent

        + +Fossil has the ability to selectively enable hyperlinks for users +that lack the "h" capability based on their UserAgent string in the +HTTP request header and on the browser's ability to run Javascript. + +The UserAgent string is a text identifier that is included in the header +of most HTTP requests that identifies the specific maker and version of +the browser (or spider) that generated the request. Typical UserAgent +strings look like this: + +
          +
        • Mozilla/5.0 (Windows NT 6.1; rv:19.0) Gecko/20100101 Firefox/19.0 +
        • Mozilla/4.0 (compatible; MSIE 8.0; Windows_NT 5.1; Trident/4.0) +
        • Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) +
        • Wget/1.12 (openbsd4.9) +
        + +The first two UserAgent strings above identify Firefox 19 and +Internet Explorer 8.0, both running on Windows NT. The third +example is the spider used by Google to index the internet. +The fourth example is the "wget" utility running on OpenBSD. +Thus the first two UserAgent strings above identify the requestor +as human whereas the second two identify the requestor as a spider. +Note that the UserAgent string is completely under the control +of the requestor and so a malicious spider can forge a UserAgent +string that makes it look like a human. But most spiders truly +seem to desire to "play nicely" on the internet and are quite open +about the fact that they are a spider. And so the UserAgent string +provides a good first-guess about whether or not a request originates +from a human or a spider. + +In Fossil, under the Admin/Access menu, there is a setting entitled +"Enable hyperlinks for "nobody" based on User-Agent and Javascript". +If this setting is enabled, and if the UserAgent string looks like a +human and not a spider, then Fossil will enable hyperlinks even if +the "h" capability is omitted from the user permissions. This setting +gives humans easy access to the hyperlinks while preventing spiders +from walking the millions of pages on a typical Fossil site. + +But the hyperlinks are not enabled directly with the setting above. +Instead, the HTML code that is generated contains anchor tags ("<a>") +without "href=" attributes. Then, javascript code is added to the +end of the page that goes back and fills in the "href=" attributes of +the anchor tags with the hyperlink targets, thus enabling the hyperlinks. +This extra step of using javascript to enable the hyperlink targets +is a security measure against spiders that forge a human-looking +UserAgent string. Most spiders do not bother to run javascript and +so to the spider the empty anchor tag will be useless. But all modern +web browsers implement javascript, so hyperlinks will appear +normally for human users. + +
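Schematically, the page that is served looks something like the following; this is an illustration of the idea only, not Fossil's actual generated markup:

    <a id="ln42">check-in 42</a>
    ...
    <script>
      /* added at the end of the page: fill in the real target */
      document.getElementById("ln42").href = "/info/42";
    </script>

A spider that never runs the script sees only an anchor with no destination, while an ordinary browser ends up with a working hyperlink.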

        Further defenses

        + +Recently (as of this writing, in the spring of 2013) the Fossil server +on the SQLite website ([http://www.sqlite.org/src/]) has been hit repeatedly +by Chinese spiders that use forged UserAgent strings to make them look +like normal web browsers and which interpret javascript. We do not +believe these attacks to be nefarious since SQLite is public domain +and the attackers could obtain all information they ever wanted to +know about SQLite simply by cloning the repository. Instead, we +believe these "attacks" are coming from "script kiddies". But regardless +of whether or not malice is involved, these attacks do present +an unnecessary load on the server which reduces the responsiveness of +the SQLite website for well-behaved and socially responsible users. +For this reason, additional defenses against +spiders have been put in place. + +On the Admin/Access page of Fossil, just below the +"Enable hyperlinks for "nobody" based on User-Agent and Javascript" +setting, there are now two additional subsettings that can be optionally +enabled to control hyperlinks. + +The first subsetting waits to run the +javascript that sets the "href=" attributes on anchor tags until after +at least one "mouseover" event has been detected on the <body> +element of the page. The thinking here is that spiders will not be +simulating mouse motion and so no mouseover events will ever occur and +hence the hyperlinks will never become enabled for spiders. + +The second new subsetting is a delay (in milliseconds) before setting +the "href=" attributes on anchor tags. The default value for this +delay is 10 milliseconds. The idea here is that a spider will try to +render the page immediately, and will not wait for delayed scripts +to be run, thus will never enable the hyperlinks. + +These two subsettings can be used separately or together. If used together, +then the delay timer does not start until after the first mouse movement +is detected. + +

        The ongoing struggle

        + +Fossil currently does a very good job of providing easy access to humans +while keeping out troublesome robots and spiders. However, spiders and +bots continue to grow more sophisticated, requiring ever more advanced +defenses. This "arms race" is unlikely to ever end. The developers of +Fossil will continue to try to improve the spider defenses of Fossil, so +check back from time to time for the latest releases and updates. + +Readers of this page who have suggestions on how to improve the spider +defenses in Fossil are invited to submit their ideas to the Fossil Users +mailing list: +[mailto:fossil-users@lists.fossil-scm.org | fossil-users@lists.fossil-scm.org]. ADDED www/apple-touch-icon.png Index: www/apple-touch-icon.png ================================================================== --- www/apple-touch-icon.png +++ www/apple-touch-icon.png cannot compute difference between binary files ADDED www/background.jpg Index: www/background.jpg ================================================================== --- www/background.jpg +++ www/background.jpg cannot compute difference between binary files ADDED www/blame.wiki Index: www/blame.wiki ================================================================== --- www/blame.wiki +++ www/blame.wiki @@ -0,0 +1,62 @@ +The Annotate Algorithm + +

        1.0 Introduction

        + +The [/help?cmd=annotate|fossil annotate], +[/help?cmd=blame|fossil blame], and +[/help?cmd=praise|fossil praise] commands, and the +[/help?cmd=/annotate|/annotate], +[/help?cmd=/blame|/blame], and +[/help?cmd=/praise|/praise] web pages are all used to show the most +recent check-in that modified each line of a particular file. +This article overviews the algorithm used to compute the annotation +for a file in Fossil. + +

        2.0 Algorithm

        + +
          +
        1. Locate the check-in that contains the file that is to be + annotated. Call this check-in C0. +
        2. Find all direct ancestors of C0. A direct ancestor is the closure + of the primary parent of C0. Merged-in branches are not part of + the direct ancestors of C0. +
        3. Prune the list of ancestors of C0 so that it contains only + check-ins in which the file to be annotated was modified. +
        4. Load the complete text of the file to be annotated from check-in C0. + Call this version of the file F0. +
        5. Parse F0 into lines. Mark each line as "unchanged". +
        6. For each ancestor of C0 on the pruned list (call the ancestor CX), + beginning with the most + recent ancestor and moving toward the oldest ancestor, do the + following steps: +
            +
          1. Load the text for the file to be annotated as it existed in check-in CX. + Call this text FX. +
          2. Compute a diff going from FX to F0. +
          3. For each line of F0 that is changed in the diff and which was previously + marked "unchanged", update the mark to indicate that the line + was modified by CX. +
          +
        7. Show each line of F0 together with its change mark, appropriately + formatted. (A sketch of this procedure in C appears after this list.) +
        + +
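The numbered steps above can be written out compactly in C. The helper routines named below are hypothetical placeholders for Fossil's real internals (ancestor traversal, artifact loading, and the diff engine); the sketch only mirrors the control flow of the algorithm:

    #include <stdlib.h>

    /* Hypothetical helpers standing in for Fossil's internals */
    extern int         pruned_ancestor_count(const char *zC0, const char *zFile);
    extern const char *pruned_ancestor(int i);            /* most recent first */
    extern char       *file_at_checkin(const char *zFile, const char *zCkin);
    extern int         line_changed_in_diff(const char *zFX, const char *zF0, int iLn);

    /* azMark[i] receives the check-in blamed for line i of F0;
    ** a NULL entry means the line is still marked "unchanged". */
    void annotate_file(const char *zFile, const char *zC0,
                       const char **azMark, int nLine){
      char *zF0 = file_at_checkin(zFile, zC0);         /* steps 1 and 4 */
      int nAnc = pruned_ancestor_count(zC0, zFile);    /* steps 2 and 3 */
      int i, j;
      for(j=0; j<nLine; j++) azMark[j] = 0;            /* step 5 */
      for(i=0; i<nAnc; i++){                           /* step 6 */
        const char *zCX = pruned_ancestor(i);
        char *zFX = file_at_checkin(zFile, zCX);       /* step 6.1 */
        for(j=0; j<nLine; j++){
          if( azMark[j]==0 && line_changed_in_diff(zFX, zF0, j) ){  /* 6.2 and 6.3 */
            azMark[j] = zCX;
          }
        }
        free(zFX);
      }
      free(zF0);
      /* step 7: the caller prints each line of F0 next to its azMark[] entry */
    }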

        3.0 Discussion and Notes

        + +The time-consuming part of this algorithm is step 6b - computing the +diff from all historical versions of the file to the version of the file +under analysis. For a large file that has many historical changes, this +can take several seconds. For this reason, the default +[/help?cmd=/annotate|/annotate] webpage only shows those lines that were +changed by the 20 most recent modifications to the file. This allows +the loop on step 6 to terminate after only 19 diffs instead of the hundreds +or thousands of diffs that might be required for a frequently modified file. + +As currently implemented (as of 2015-12-12) the annotate algorithm does not +follow files across name changes. File name change information is +available in the database, and so the algorithm could be enhanced to follow +files across name changes by modifications to step 3. + +Step 2 is interesting in that it is +[/artifact/6cb824a0417?ln=196-201 | implemented] using a +[https://www.sqlite.org/lang_with.html#recursivecte|recursive common table expression]. Index: www/branching.wiki ================================================================== --- www/branching.wiki +++ www/branching.wiki @@ -1,9 +1,7 @@ -Fossil Documentation -

        -Branching, Forking, Merging, and Tagging -

        +Branching, Forking, Merging, and Tagging +

        Background

        In a simple and perfect world, the development of a project would proceed linearly, as shown in figure 1.
      @@ -13,70 +11,74 @@
      Each circle represents a check-in. For the sake of clarity, the check-ins are given small consecutive numbers. In a real system, of course, the check-in numbers would be 40-character SHA1 hashes since it is not possible -to allocate collision-free sequential numbers is a distributed system. -But sequential numbers are easier to read, so we will substitute them for +to allocate collision-free sequential numbers in a distributed system. +But as sequential numbers are easier to read, we will substitute them for the 40-character SHA1 hashes in this document. -The arrows in figure 1 show evolution of the project. The initial +The arrows in figure 1 show the evolution of a project. The initial check-in is 1. Check-in 2 is derived from 1. In other words, check-in 2 was created by making edits to check-in 1 and then committing those edits. We say that 2 is a child of 1 -and that 1 is a parent of 2. +and that 1 is a parent of 2. Check-in 3 is derived from check-in 2, making 3 a child of 2. We say that 3 is a descendant of both 1 and 2 and that 1 -and 2 are both ancestors of 3. +and 2 are both ancestors of 3. + + +

      DAGs

      -We call the graph of check-ins a tree. Check-in 1 is the root -since it has no ancestors. Check-in 4 is a leaf of the tree since -it has no descendants. (We will give a more precise in the definition of -"leaf" later.) +The graph of check-ins is a +[http://en.wikipedia.org/wiki/Directed_acyclic_graph | directed acyclic graph] +commonly shortened to DAG. Check-in 1 is the root of the DAG +since it has no ancestors. Check-in 4 is a leaf of the DAG since +it has no descendants. (We will give a more precise definition later of +"leaf.") Alas, reality often interferes with the simple linear development of a project. Suppose two programmers make independent modifications to check-in 2. -After both changes are checked in, we have a check-in graph that looks -like figure 2: +After both changes are committed, the check-in graph looks like figure 2:

      Figure 2
      The graph in figure 2 has two leaves: check-ins 3 and 4. Check-in 2 has -two children, check-ins 3 and 4. We call this stituation a fork. - -Fossil tries to prevent forks. Suppose the two programmers who were -editing check-in 2 are named Alice and Bob. Suppose Alice finished her -edits first and did a commit, resulting in check-in 3. Later, when Bob -tried to commit his changes, fossil would try to verify that check-in 2 -was still a leaf. Fossil would see that check-in 3 had occurred and would -abort Bob's commit attempt with a message "would fork". This allows Bob -to do a "fossil update" which would pull in Alices changes and merge them -together with his own changes. After merging, Bob could then commit -check-in 4 as a child of check-in 3 and the result would be a linear graph -as shown in figure 1. This is how CVS works. This is also how fossil -works in "autosync" mode. - -But it might be that Bob is off-network when he does his commit, so he +two children, check-ins 3 and 4. We call this state a fork. + +Fossil tries to prevent forks. Suppose two programmers named Alice and +Bob are each editing check-in 2 separately. Alice finishes her edits +first and commits her changes, resulting in check-in 3. Later, when Bob +attempts to commit his changes, fossil verifies that check-in 2 is still +a leaf. Fossil sees that check-in 3 has occurred and aborts Bob's commit +attempt with a message "would fork." This allows Bob to do a "fossil +update" which pulls in Alice's changes, merging them into his own +changes. After merging, Bob commits check-in 4 as a child of check-in 3. +The result is a linear graph as shown in figure 1. This is how CVS +works. This is also how fossil works in [./concepts.wiki#workflow | +"autosync"] mode. + +But perhaps Bob is off-network when he does his commit, so he has no way of knowing that Alice has already committed her changes. -Or, it could be that Bob has turned off "autosync" mode in SQLite. Or, -maybe Bob just doesn't want to merge in Alices changes before he has -saved his own, so he forces the commit to occur using the "--force" option -to the fossil commit command. For whatever reason, two commits against -check-in 2 have occurred and now the tree has two leaves. +Or, it could be that Bob has turned off "autosync" mode in Fossil. Or, +maybe Bob just doesn't want to merge in Alice's changes before he has +saved his own, so he forces the commit to occur using the "--allow-fork" +option to the fossil commit command. For any of these reasons, +two commits against check-in 2 have occurred and now the DAG has two leaves. So which version of the project is the "latest" in the sense of having the most features and the most bug fixes? When there is more than one leaf in the graph, you don't really know. So we like to have graphs with a single leaf. To resolve this situation, Alice can use the fossil merge command to merge in Bob's changes in her local copy of check-in 3. Then she -can commit the results as check-in 5. This results in a tree as shown +can commit the results as check-in 5. This results in a DAG as shown in figure 3.

      @@ -84,11 +86,11 @@
      Check-in 5 is a child of check-in 3 because it was created by editing check-in 3. But check-in 5 also inherits the changes from check-in 4 by virtue of the merge. So we say that check-in 5 is a merge child -of check-in 4 and that it is a direct child of check-in 3. +of check-in 4 and that it is a direct child of check-in 3. The graph is now back to a single leaf (check-in 5). We have already seen that if fossil is in autosync mode then Bob would have been warned about the potential fork the first time he tried to commit check-in 4. If Bob had updated his local check-out to merge in @@ -97,31 +99,31 @@ in figure 1. Really the graph of figure 1 is a subset of figure 3. Hold your hand over the check-in 4 circle of figure 3 and then figure 3 looks exactly like figure 1 (except that the leaf has a different check-in number, but that is just a notational difference - the two check-ins have exactly the same content). In other words, figure 3 is really a superset -of figure 1. The check-in 4 of figure 3 captures addition state which +of figure 1. The check-in 4 of figure 3 captures additional state which is omitted from figure 1. Check-in 4 of figure 3 holds a copy of Bob's local checkout before he merged in Alice's changes. That snapshot -of Bob's changes independent of Alice's changes is omitted from figure 1. +of Bob's changes, which is independent of Alice's changes, is omitted from figure 1. Some people say that the approach taken in figure 3 is better because it preserves this extra intermediate state. Others say that the approach taken in figure 1 is better because it is much easier to visualize a -linear line of development and because the the merging happens automatically +linear line of development and because the merging happens automatically instead of as a separate manual step. We will not take sides in that debate. We will simply point out that fossil enables you to do it either way.

      Forking Versus Branching

      -Having more than one leaf in the check-in tree is usually -considered undesirable, and so forks are usually either avoided entirely, +Having more than one leaf in the check-in DAG is called a "fork." This +is usually undesirable and either avoided entirely, as in figure 1, or else quickly resolved as shown in figure 3. But sometimes, one does want to have multiple leaves. For example, a project might have one leaf that is the latest version of the project under development and another leaf that is the latest version that has been tested. -When multiple leaves are desirable, we call the phenomenon branching +When multiple leaves are desirable, we call this branching instead of forking. Figure 4 shows an example of a project where there are two branches, one for development work and another for testing.
      @@ -129,36 +131,37 @@
      Figure 4
      The hypothetical scenario of figure 4 is this: The project starts and -progresses to a point where (at check-in 2) +progresses to a point where (at check-in 2) it is ready to enter testing for its first release. In a real project, of course, there might be hundreds or thousands of check-ins before a project reaches this point, but for simplicity of presentation we will say that the project is ready after check-in 2. The project then splits into two branches that are used by separate teams. The testing team, using the blue branch, finds and fixes a few bugs. This is shown by check-ins 6 and 9. Meanwhile the development -team, working on the top uncolored branch, +team, working on the top uncolored branch, is busy adding features for the second release. Of course, the development team would like to take advantage of the bug fixes implemented by the testing team. So periodically, the changes in the test branch are merged into the dev branch. This is shown by the dashed merge arrows between check-ins 6 and 7 and between check-ins 9 and 10. In both figures 2 and 4, check-in 2 has two children. In figure 2, -we called this a "fork". In diagram 4, we call it a "branch". What is +we call this a "fork." In diagram 4, we call it a "branch." What is the difference? As far as the internal fossil data structures are concerned, there is no difference. The distinction is in the intent. In figure 2, the fact that check-in 2 has multiple children is an accident that stems from concurrent development. In figure 4, giving check-in 2 multiple children is a deliberate act. So, to a good approximation, we define forking to be by accident and branching to be by intent. Apart from that, they are the same. +

      Tags And Properties

      Tags and properties are used in fossil to help express the intent, and thus to distinguish between forks and branches. Figure 5 shows the same scenario as figure 4 but with tags and properties added: @@ -171,94 +174,98 @@ A tag is a name that is attached to a check-in. A property is a name/value pair. Internally, fossil implements tags as properties with a NULL value. So, tags and properties really are much the same thing, and henceforth we will use the word "tag" -to mean either a tag or a property. +to mean either a tag or a property. -A tag can be a one-time tag, a propagating tag or a cancellation tag. +A tag can be a one-time tag, a propagating tag or a cancellation tag. A one-time tag only applies to the check-in to which it is attached. A propagating tag applies to the check-in to which it is attached and also to all direct descendants of that check-in. A direct descendant -is a descendant through direct children. Tags propagation does not +is a descendant through direct children. Tag propagation does not cross merges. Tag propagation also stops as soon as it encounters another check-in with the same tag. A cancellation tag is attached to a single check-in in order to either override a one-time tag that was previously placed on that same check-in, or to block tag propagation from an ancestor. -Every repository is created with a single empty check-in that has two -propagating tags. In figure 5, that initial empty check-in is check-in 1. -The branch tag tells (by its value) -what branch the check-in is a member of. -The default branch is called "trunk". All tags that begin with "sym-" +The initial check-in of every repository has two propagating tags. In +figure 5, that initial check-in is check-in 1. The branch tag +tells (by its value) what branch the check-in is a member of. +The default branch is called "trunk." All tags that begin with "sym-" are symbolic name tags. When a symbolic name tag is attached to a check-in, that allows you to refer to that check-in by its symbolic name rather than by its 40-character SHA1 hash name. When a symbolic name tag propagates (as does the sym-trunk tag) then referring to that name is the same as referring to the most recent check-in with that name. -Thus the two tags on check-in one cause all decendents to be in the -"trunk" branch and to have the symbolic name "trunk". +Thus the two tags on check-in 1 cause all descendants to be in the +"trunk" branch and to have the symbolic name "trunk." Check-in 4 has a branch tag which changes the name of the branch -to "test". The branch tag on check-in 4 propagates to check-ins 6 and 9. +to "test." The branch tag on check-in 4 propagates to check-ins 6 and 9. But because tag propagation does not follow merge links, the branch=test tag does not propagate to check-ins 7, 8, or 10. Note also that the branch tag on check-in 4 blocks the propagation of branch=trunk so that it cannot reach check-ins 6 or 9. This causes check-ins 4, 6, and 9 to be in the "test" branch and all others to be in the "trunk" branch. Check-in 4 also has a sym-test tag, which gives the symbolic name "test" to check-ins 4, 6, and 9. Because tags do not propagate across merges, check-ins 7, 8, and 10 do not inherit the sym-test tag and -are hence not known by the name "test". -To prevent the sym-trunk tag from propagating from check-in 1 -into check-ins 4, 6, and 9, there is a cancellation tag for -sym-trunk on check-in 4. The net effect of all of this is that +are hence not known by the name "test." 
+To prevent the sym-trunk tag from propagating from check-in 1 +into check-ins 4, 6, and 9, there is a cancellation tag for +sym-trunk on check-in 4. The net effect is that check-ins on the trunk go by the symbolic name of "trunk" and check-ins -that are on the test branch go by the symbolic name "test". +on the test branch go by the symbolic name "test." The bgcolor=blue tag on check-in 4 causes the background color of timelines to be blue for check-in 4 and its direct descendants. Figure 5 also shows two one-time tags on check-in 9. (The diagram does not make a graphical distinction between one-time and propagating tags.) The sym-release-1.0 tag means that check-in 9 can be referred to -using the more meaningful name "release-1.0". The closed tag means -that check-in 9 is a "closed leaf". A closed leaf is a leaf that intended -to never have any direct children. +using the more meaningful name "release-1.0." The closed tag means +that check-in 9 is a "closed leaf." A closed leaf is a leaf that should +never have direct children.

      Review Of Terminology

      - -Here is a list of definitions of key terms: -
      Branch
      -

      A branch is a set of check-ins that have the same value for their -branch property.

      +

      A branch is a set of check-ins with the same value for their +"branch" property.

      Leaf
      -

      A leaf is a check-in that has no children in the same branch.

      +

      A leaf is a check-in with no children in the same branch.

      Closed Leaf
      -

      A closed leaf is leaf that has the closed tag. Such leaves -are intented to never be extended with descendents and hence are omitted +

      A closed leaf is any leaf with the closed tag. These leaves +are intended to never be extended with descendants and hence are omitted from lists of leaves in the command-line and web interface.

      Open Leaf

      A open leaf is a leaf that is not closed.

      Fork
      -

      A fork occurs when a check-in has two or more direct (non-merge) +

      A fork is when a check-in has two or more direct (non-merge) children in the same branch.

      Branch Point

      A branch point occurs when a check-in has two or more direct (non-merge) -children in the different branches. A branch point is similar to a fork, +children in different branches. A branch point is similar to a fork, except that the children are in different branches.

      Check-in 4 of figure 3 is not a leaf because it has a child (check-in 5) in the same branch. Check-in 9 of figure 5 also has a child (check-in 10) but that child is in a different branch, so check-in 9 is a leaf. Because -of the closed tag check-in 9, it is a closed leaf. +of the closed tag on check-in 9, it is a closed leaf. Check-in 2 of figure 3 is considered a "fork" because it has two children in the same branch. Check-in 2 of figure 5 also has two children, but each child is in a different branch, hence in -figure 5, check-in 2 is considered a "branch point". +figure 5, check-in 2 is considered a "branch point." + +

      Differences With Other DVCSes

      + +Fossil keeps all check-ins on a single DAG. Branches are identified with +tags. This means that check-ins can be freely moved between branches +simply by altering their tags. + +Most other DVCSes maintain a separate DAG for each branch. Index: www/bugtheory.wiki ================================================================== --- www/bugtheory.wiki +++ www/bugtheory.wiki @@ -1,7 +1,7 @@ -Fossil Documentation -

      Bug-Tracking In Fossil

      +Bug-Tracking In Fossil +

      Introduction

      A bug-report in fossil is called a "ticket". Tickets are tracked separately from code check-ins. Some other distributed bug-tracking systems store tickets as files within Index: www/build-icons/mac.gif ================================================================== --- www/build-icons/mac.gif +++ www/build-icons/mac.gif cannot compute difference between binary files ADDED www/build-icons/openbsd.gif Index: www/build-icons/openbsd.gif ================================================================== --- www/build-icons/openbsd.gif +++ www/build-icons/openbsd.gif cannot compute difference between binary files Index: www/build.wiki ================================================================== --- www/build.wiki +++ www/build.wiki @@ -1,74 +1,213 @@ -Building and Installing Fossil - - -

      This page describes how to build and install Fossil. The -whole process is designed to be very easy.

      +Compiling and Installing Fossil

      0.0 Using A Pre-compiled Binary

      -

      You can skip steps 1.0 and 2.0 below by downloading -a pre-compiled binary -appropriate for your platform. If you use a pre-compiled binary -jump immediate to step 3.0.

      +

      Released versions of fossil come with +pre-compiled binaries and +a source archive for that release. You can thus skip the following if you +want to run or build a release version of fossil. Just download +the appropriate package from the downloads page +and put it on your $PATH. +To uninstall, simply delete the binary. +To upgrade from an older release, just overwrite the older binary with +the newer one.

      + +

      0.1 Executive Summary

      + +

      Building and installing is very simple. Three steps:

      + +
        +
      1. Download and unpack a source tarball or ZIP. +
      2. ./configure; make +
      3. Move the resulting "fossil" or "fossil.exe" executable to someplace on +your $PATH. +
      + +
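For example, on a Unix-like system the whole process might look like the following transcript; the tarball name is hypothetical, so substitute the name of whatever release or snapshot you actually downloaded:

    tar xzf fossil-src-1.34.tar.gz
    cd fossil-src-1.34
    ./configure
    make
    mv fossil ~/bin/        # any directory on your $PATH will do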


      1.0 Obtaining The Source Code

      -

      Fossil is self-hosting, so you can obtain a ZIP archive containing -a snapshot of the latest version directly from fossil's own fossil -repository. Follow these steps:

      +

      Fossil is self-hosting, so you can obtain a ZIP archive or tarball +containing a snapshot of the latest version directly from +Fossil's own fossil repository. Additionally, source archives of +released versions of +fossil are available from the downloads page. +To obtain a development version of fossil, follow these steps:

        -
      1. Pointer your web browser at +

      2. Point your web browser to -http://www.fossil-scm.org/. Click on the "Login" menu button.

      3. - -
      4. Log in as anonymous. The password is shown on screen. -The reason for requiring this login is to prevent spiders from -walking the entire website, downloading ZIP archives -of every historical version, and thereby soaking up all our bandwidth.

      5. - -
      6. Click on the -Timeline or -Leaves link at -the top of the page.

      7. - -
      8. Select a version of of fossil you want to download. Click on its -link. Note that you must successfully log in as "anonymous" in step 1 -above in order to see the link to the detailed version information.

      9. - -
      10. Finally, click on the -"Zip Archive" link. This link will build a ZIP archive of the -complete source code and download it to your browser. +http://www.fossil-scm.org/.

      11. + +
      12. Click on the +Timeline +link at the top of the page.

      13. + +
      14. Select a version of Fossil you want to download. The latest +version on the trunk branch is usually a good choice. Click on its +link.

      15. + +
      16. Finally, click on one of the +"Zip Archive" or "Tarball" links, according to your preference. +These links will build a ZIP archive or a gzip-compressed tarball of the +complete source code and download it to your computer.

      + +

      Aside: Is it really safe to use an unreleased development version of +the Fossil source code?

      + +Yes! Any check-in on the +[/timeline?t=trunk | trunk branch] of the Fossil +[http://fossil-scm.org/fossil/timeline | Fossil self-hosting repository] +will work fine. (Dodgy code is always on a branch.) In the unlikely +event that you pick a version with a serious bug, it still won't +clobber your files. Fossil uses several +[./selfcheck.wiki | self-checks] prior to committing any +repository change that prevent loss-of-work due to bugs. + +The Fossil [./selfhost.wiki | self-hosting repositories], especially +the one at [http://www.fossil-scm.org/fossil], usually run a version +of trunk that is less than a week or two old. Look at the bottom +left-hand corner of this screen (to the right of "This page was +generated in...") to see exactly which version of Fossil is +rendering this page. It is always safe to use whatever version +of the Fossil code you find running on the main Fossil website.

      2.0 Compiling

      -

      Follow these steps to compile:

      -
        -
      1. -

        Create a directory to hold the source code. Then unzip the -ZIP archive you downloaded into that directory. You should be -in the top-level folder of that directory

      2. - -
      3. (Optional:) -Edit the Makefile to set it up like you want. You probably do not -need to do anything. Do not be intimidated: There are less than 10 -variables in the makefile that can be changed. The whole Makefile -is only a few dozen lines long and most of those lines are comments.

        - -
      4. Type "make" +

      5. +

        Unpack the ZIP or tarball you downloaded, then +cd into the directory created.

      6. + +
      7. (Optional, Unix only) +Run ./configure to construct a makefile. + +
          +
        1. +If you do not have the OpenSSL library installed on your system, then +add --with-openssl=none to omit the https functionality. + +

        2. +To build a statically linked binary (suitable for use inside a chroot +jail) add the --static option. + +

        3. +To enable the native [./th1.md#tclEval | Tcl integration feature] feature, +add the --with-tcl=1 and --with-tcl-private-stubs=1 options. + +

        4. +Other configuration options can be seen by running +./configure --help +

        + +
      8. Run "make" to build the "fossil" or "fossil.exe" executable. +The details depend on your platform and compiler. + +

          +
        1. Unix → the configure-generated Makefile should work on +all Unix and Unix-like systems. Simply type "make". + +

        2. Unix without running "configure" → if you prefer to avoid +running configure, you can also use: make -f Makefile.classic. You may +want to make minor edits to Makefile.classic to configure the build for your +system. + +

        3. MinGW 3.x (not 4.x) / MinGW-w64 → Use the MinGW makefile: +"make -f win/Makefile.mingw". On a Windows box you will need either +Cygwin or Msys as build environment. On Cygwin, Linux or Darwin you may want +to make minor edits to win/Makefile.mingw to configure the cross-compile +environment. + +To enable the native [./th1.md#tclEval | Tcl integration feature], use a +command line like the following (all on one line): + +make -f win/Makefile.mingw FOSSIL_ENABLE_TCL=1 FOSSIL_ENABLE_TCL_STUBS=1 FOSSIL_ENABLE_TCL_PRIVATE_STUBS=1 + +Alternatively, ./configure may now be used to create a Makefile +suitable for use with MinGW; however, options passed to configure that are +not applicable on Windows may cause the configuration or compilation to fail +(e.g. fusefs, internal-sqlite, etc). + +HINT: Do not use MinGW-4.x, it may compile but the Fossil binary +will not work correctly, see +[https://www.fossil-scm.org/index.html/tktview/18cff45a4e210430e24c | ticket]. + +

        4. MSVC → Use the MSVC makefile. First +change to the "win/" subdirectory ("cd win") then run +"nmake /f Makefile.msc".

          Alternatively, the batch +file "win\buildmsvc.bat" may be used and it will attempt to +detect and use the latest installed version of MSVC.

          To enable +the optional OpenSSL support, +first download the official +source code for OpenSSL and extract it to an appropriately named +"openssl-X.Y.ZA" subdirectory within the local +[/tree?ci=trunk&name=compat | compat] directory (e.g. +"compat/openssl-1.0.2f"), then make sure that some recent +Perl binaries are installed locally, +and finally run one of the following commands: +

          +nmake /f Makefile.msc FOSSIL_ENABLE_SSL=1 FOSSIL_BUILD_SSL=1 PERLDIR=C:\full\path\to\Perl\bin
          +
          +
          +buildmsvc.bat FOSSIL_ENABLE_SSL=1 FOSSIL_BUILD_SSL=1 PERLDIR=C:\full\path\to\Perl\bin
          +
          +To enable the optional native [./th1.md#tclEval | Tcl integration feature], +run one of the following commands or add the "FOSSIL_ENABLE_TCL=1" +argument to one of the other NMAKE command lines: +
          +nmake /f Makefile.msc FOSSIL_ENABLE_TCL=1
          +
          +
          +buildmsvc.bat FOSSIL_ENABLE_TCL=1
          +
          + +
        5. Cygwin → The same as other Unix-like systems. It is +recommended to configure using: "configure --disable-internal-sqlite", +making sure you have the "libsqlite3-devel" , "zlib-devel" and +"openssl-devel" packages installed first. +

      3.0 Installing

        -
      1. -

        The finished binary is named "fossil". Put this binary in a +

      2. +

        The finished binary is named "fossil" (or "fossil.exe" on Windows). +Put this binary in a directory that is somewhere on your PATH environment variable. It does not matter where.

      3. (Optional:) To uninstall, just delete the binary.

      + +
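        As an example of step 2 above on a Unix-like system, one common
        (but not required) choice of directory is /usr/local/bin:

            sudo cp fossil /usr/local/bin/

        Any other directory that is already on your PATH works just as well.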

      4.0 Additional Considerations

      + +
        +
      • + If the makefiles that come with Fossil do not work for + you, or for some other reason you want to know how to build + Fossil manually, then refer to the + [./makefile.wiki | Fossil Build Process] document which describes + in detail what the makefiles do behind the scenes. + +

      • + The fossil executable is self-contained and stand-alone and usually + requires no special libraries or other software to be installed. However, + the "--tk" option to the [/help/diff|diff command] requires that Tcl/Tk + be installed on the local machine. You can get Tcl/Tk from + [http://www.activestate.com/activetcl|ActiveState]. + +

      • + To build on older Macs (circa 2002, MacOS 10.2) edit the Makefile + generated by configure to add the following lines: +

        +  TCC += -DSQLITE_WITHOUT_ZONEMALLOC
        +  TCC += -D_BSD_SOURCE
        +  TCC += -DWITHOUT_ICONV
        +  TCC += -Dsocklen_t=int
        +  TCC += -DSQLITE_MAX_MMAP_SIZE=0
        +
        +
      ADDED www/changes.wiki Index: www/changes.wiki ================================================================== --- www/changes.wiki +++ www/changes.wiki @@ -0,0 +1,762 @@ +Change Log + +

      Changes for Version 1.35 (2016-00-00)

      + + * Enable symlinks by default on all non-Windows platforms. + * Rework the [/help?cmd=/setup_ulist|/setup_ulist page] (the User List page) + to display all users + in a click-to-sort table. + * Fix backslash-octal escape on filenames while importing from git + * When markdown documents begin with <h1> HTML elements, use that + header as the document title. + * Added the [/help?cmd=/bigbloblist|/bigbloblist page]. + * Enhance the [/help?cmd=/finfo|/finfo page] so that when it is showing + the ancestors of a particular file version, it only shows direct + ancestors and omits changes on branches, thus making it show the same set + of ancestors that are used for [/help?cmd=/blame|/blame]. + * Added the --page option to the [/help?cmd=ui|fossil ui] command + * Added the [/help?cmd=bisect|fossil bisect ui] command + * Enhanced the [/help?cmd=diff|fossil diff] command so that it accepts + directory names as arguments and computes diffs on all files contained + within those directories. + * Fix the [/help?cmd=add|fossil add] command so that it shows "SKIP" for + files added that were already under management. + * TH1 enhancements: +
      • Add [array exists] command.
      • +
      • Add minimal [array names] command.
      • +
      • Add tcl_platform(engine) and tcl_platform(platform) array + elements.
      • +
      + * Get autosetup working with MinGW. + * Fix autosetup detection of zlib in the source tree. + * Added autosetup detection of OpenSSL when it may be present under the + "compat" subdirectory of the source tree. + +

      Changes for Version 1.34 (2015-11-02)

      + + * Make the [/help?cmd=clean|fossil clean] command undoable for files less + than 10MiB. + * Update internal Unicode character tables, used in regular expression + handling, from version 7.0 to 8.0. + * Add the new [/help?cmd=amend|amend] command which is used to modify + tags of a "check-in". + * Fix bug in [/help?cmd=import|import] command, handling version 3 of + the svndump format for subversion. + * Add the [/help?cmd=all|all cache] command. + * TH1 enhancements: +
      • Add minimal [lsearch] command. Only exact + case-sensitive matching is supported.
      • +
      • Add the [glob_match], [markdown], + [dir], and [encode64] commands.
      • +
      • Add the [tclIsSafe] and [tclMakeSafe] commands to + the Tcl integration subsystem.
      • +
      • Add 'double', 'integer', and 'list' classes to the + [string is] command.
      • +
      + * Add the --undo option to the [/help?cmd=diff|diff] command. + * Build-in Antirez's "linenoise" command-line editing library for use with + the [/help?cmd=sqlite3|fossil sql] command on Unix platforms. + * Add [/help?cmd=stash|stash cat] as an alias for the + [/help?cmd=stash|stash show] command. + * Automatically pull before [/help?cmd=merge|fossil merge] when auto-sync + is enabled. + * Fix --hard option to [/help?cmd=mv|fossil mv] and [/help?cmd=rm|fossil rm] + to enable them to work properly with certain relative paths. + * Change the mimetype for ".n" and ".man" files to text/plain. + * Display improvements in the [/help?cmd=bisect|fossil bisect chart] command. + * Updated the built-in SQLite to version 3.9.1 and activated JSON1 and FTS5 + support (both currently unused within Fossil). + +

      Changes for Version 1.33 (2015-05-23)

      + * Improved fork detection on [/help?cmd=update|fossil update], + [/help?cmd=status|fossil status] and related commands. + * Change the default skin to what used to be called "San Francisco Modern". + * Add the [/repo-tabsize] web page + * Add [/help?cmd=import|fossil import --svn], for importing a subversion + repository into fossil which was exported using "svnadmin dump". + * Add the "--compress-only" option to [/help?cmd=rebuild|fossil rebuild]. + * Use a pie chart on the [/reports?view=byuser] page. + * Enhanced [/help?cmd=clean|fossil clean --verily] so that it ignores + keep-glob and ignore-glob settings. Added the -x alias for --verily. + * Add the --soft and --hard options to [/help?cmd=rm|fossil rm] and + [/help?cmd=mv|fossil mv]. The default is still --soft, but that is + now configurable at compile-time or by the mv-rm-files setting. + * Improved ability to [./customgraph.md|customize the timeline graph]. + * Improvements to the [/sitemap] page. + * Automatically adjust the [/help?cmd=timeline|CLI timeline] to the terminal + width on Linux. + * Added [info commands] and [info vars] commands to TH1. + These commands perform the same function as their Tcl counterparts, + except they do not accept a pattern argument. + * Fix some obscure issues with TH1 expression processing. + * Fix titles in search results for documents that are not wiki, markdown, + or HTML. + * Formally translate TH1 to Tcl return codes and vice-versa, where + necessary, in the Tcl integration subsystem. + * Add [/help?cmd=leaves|fossil leaves -multiple], for finding multiple + leaves on the same branch. + * Added the "Blitz" skin option. + * Removed the ".fossil-settings/keep-glob" file. It should not have been + checked into the repository. + * Update the built-in SQLite to version 3.8.10.2. + * Make [/help?cmd=open|fossil open] honor ".fossil-settings/allow-symlinks". + * Allow [/help?cmd=add|fossil add] to be used on symlinks to nonexistent or + unreadable files in the same way as [/help?cmd=addremove|fossil addremove]. + * Added fork warning to be issued if sync produced a fork + * Update the [/help?cmd=/info|info] page to report when a file becomes a + symlink. Additionally show the UUID for files whose types have changed + without changing contents or symlink target. + * Have [/help?cmd=changes|fossil changes] and + [/help?cmd=status|fossil status] report when executable or symlink status + changes on otherwise unmodified files. + * Permit filtering weekday and file [/help?cmd=/reports|reports] by user. + Also ensure the user parameter is preserved when changing types. Add a + field for direct entry of the user name to each applicable report. + * Create parent directories of [/help?cmd=settings|empty-dirs] if they don't + already exist. + * Inhibit timeline links to wiki pages that have been deleted. + +

      Changes for Version 1.32 (2015-03-14)

      + * When creating a new repository using [/help?cmd=init|fossil init], ensure + that the new repository is fully compatible with historical versions of + Fossil by having a valid manifest as RID 1. + * Anti-aliased rendering of arrowheads on timeline graphs. + * Added vi/less-style key bindings to the --tk diff GUI. + * Documentation updates to fix spellings and change all "checkins" to + "check-ins". + * Add the --repolist option to server commands such as + [/help?cmd=server|fossil server] or [/help?cmd=http|fossil http]. + * Added the "Xekri" skin. + * Enhance the "ln=" query parameter on artifact displays to accept multiple + ranges, separated by spaces (or "+" when URL-encoded). + * Added [/help?cmd=forget|fossil forget] as an alias for + [/help?cmd=rm|fossil rm]. + +

      Changes For Version 1.31 (2015-02-23)

      + * Change the auxiliary schema by adding the MLINK.ISAUX and MLINK.PMID + columns to the schema, to support better drawing of file change graphs. + A [/help?cmd=rebuild|fossil rebuild] is recommended, but is not required, + so that the new graph drawing logic can work effectively. + * Added [/search|search] over Check-in comments, Documents, Tickets and + Wiki. Disabled by default. The search can be either a full-scan or it + can use an index that is kept up-to-date automatically. The new + /srchsetup web-page and the [/help?cmd=fts-config|fts-config] command + were added to help configure the search capability. Expect further + enhancements to the search capabilities in subsequent releases. + * Added form elements to some submenus (in particular the /timeline) + for easier operation. + * Added the --ifneeded option to [/help?cmd=rebuild|fossil rebuild]. + * Added "override skins" using the "skin:" line of the CGI script or + using the --skin LABEL option on the [/help?cmd=server|server], + [/help?cmd=ui|ui], or [/help?cmd=http|http] commands. + * Embedded html documents that begin with + <doc class="fossil-doc"> are displayed with standard + headers and footers added. + * Allow <div style='...'> markup in [/wiki_rules|wiki]. + * Renamed "Events" to "Technical Notes", while updating the technote + display and control pages. Add support for technotes as plain text + or as Markdown. + * Added the [/md_rules] pages containing summary instructions on the + Markdown format. + * Added the --repolist and --nojail options to the various server commands + (ex: [/help?cmd=server|fossil server]). + * Added the [/help?cmd=all|fossil all add] subcommand to "fossil all". + * Improvements to the /login page. Some hyperlinks to pages that require + "anonymous" privileges are displayed even if the current user is "nobody" + but automatically redirect to /login. + * The [/help?cmd=/doc|/doc] web-page will now try to deliver the file + "404.md" from the top-level directory (if such a file exists) in + place of its built-in 404 text. + * Download of Tarballs and ZIP Archives by user "nobody" is now enabled + by default in new repositories. + * Enhancements to the table sorting controls. More display tables + are now sortable. + * Add IPv6 support to [/help?cmd=sync|fossil sync] and + [/help?cmd=clone|fossil clone] + * Add more skins such as "San Francisco Modern" and "Eagle". + * During shutdown, check to see if the check-out database (".fslckout") + contains a lot of free space, and if it does, VACUUM it. + * Added the [/mimetype_list] page. + * Added the [/hash-collisions] page. + * Allow the use of Common Table Expressions in the SQL that defines + ticket reports. + * Break out the components (css, footer, and header) for the + various built-in skins into separate files in the source tree. + +

      Changes For Version 1.30 (2015-01-19)

      + * Added the [/help?cmd=bundle|fossil bundle] command. + * Added the [/help?cmd=purge|fossil purge] command. + * Added the [/help?cmd=publish|fossil publish] command. + * Added the [/help?cmd=unpublished|fossil unpublished] command. + * Enhance the [/tree] webpage to show the age of each file with the option + to sort by age. + * Enhance the [/brlist] webpage to show additional information about each branch + and to be sortable by clicking on column headers. + * Add support for Docker. Just install docker and type + "sudo docker run -d -p 8080:8080 nijtmans/fossil" to get it running. + * Add the [/help/fusefs|fossil fusefs DIRECTORY] command that mounts a + Fuse Filesystem at the given DIRECTORY and populates it with read-only + copies of all historical check-ins. This only works on systems that + support FuseFS. + * Add the administrative log that records all configuration. + * Added the [/sitemap] webpage. + * Added the [/bloblist] web page. + * Let [/help?cmd=new|fossil new] no longer create an initial empty commit + by default. The first commit after checking out an empty repository will + become the initial commit. + * Added the [/help?cmd=all|fossil all dbstat] and + [/help?cmd=all|fossil all info] commands. + * Update SQLite to version 3.8.8. + * Added the --verily option to the [/help?cmd=clean|fossil clean] command. + * Add the "autosync-tries" setting to control the number of autosync attempts + before returning an error. + * Added a compile-time option (--with-miniz) to build using miniz instead + of zlib. Disabled by default. + * Support customization of commands and webpages, including the ability to + add new ones, via the "TH1 hooks" feature. Disabled by default. Enabled + via a compile-time option. + * Add the [checkout], [render], [styleHeader], [styleFooter], + [trace], [getParameter], [setParameter], [artifact], and + [globalState] commands to TH1, primarily for use by TH1 hooks. + * Automatically adjust the width of command-line timeline output according to the + detected width of the terminal. + * Prompt the user to optionally fix invalid UTF-8 at check-in. + * Added a line-number toggle option to the [/help?cmd=/info|/info] + and [/help?cmd=/artifact|/artifact] pages. + * Most commands now issue errors rather than silently ignoring unrecognized + command-line options. + * Use full 40-character SHA1 hashes (instead of abbreviations) in most + internal URLs. + * The "ssh:" sync method on Windows now uses "plink.exe" instead of "ssh" as + the secure-shell client program. + * Prevent a partial clone when the connection is lost. + * Make the distinction between 301 and 302 redirects. + * Allow commits against a closed check-in as long as the commit goes onto + a different branch. + * Improved cache control in the web interface reduces unnecessary requests + for common resources like the page logo and CSS. + * Fix a rare and long-standing sync protocol bug + that would silently prevent the sync from running to completion. Before + this bug-fix it was sometimes necessary to do "fossil sync --verily" to + get two repositories in sync. + * Add the [/finfo?name=src/foci.c|files_of_checkin] virtual table - useful + for ad hoc queries in the [/help?cmd=sqlite3|fossil sql] interface, + and also used internally. + * Added the "$secureurl" TH1 variable for use in headers and footers. + * (Internal:) Add the ability to include resources as separate files in the + source tree that are converted into constant byte arrays in the compiled + binary. 
Use this feature to store the Tk script that implements the --tk + diff option in a separate file for easier editing. + * (Internal:) Implement a system of compile-time checks to help ensure + the correctness of printf-style formatting strings. + * Fix CVE-2014-3566, also known as the POODLE SSL 3.0 vulnerability. + * Numerous documentation fixes and improvements. + * Other obscure and minor bug fixes - see the timeline for details. + +

      Changes For Version 1.29 (2014-06-12)

      + * Add the ability to display content, diffs and annotations for UTF16 + text files in the web interface. + * Add the "SaveAs..." and "Invert" buttons + to the graphical diff display that results + from using the --tk option with the [/help/diff | fossil diff] command. + * The [/reports] page now requires Read ("o") permissions. The "byweek" + report now properly propagates the selected year through the event type + filter links. + * The [/help/info | info command] now shows leaf status of the checkout. + * Add support for tunneling https through a http proxy (Ticket [e854101c4f]). + * Add option --empty to the "[/help?cmd=open | fossil open]" command. + * Enhanced [/help?cmd=/fileage|the fileage page] to support a glob parameter. + * Add -w|--ignore-all-space and -Z|--ignore-trailing-space options to + [/help?cmd=annotate|fossil annotate], [/help?cmd=blame|fossil blame], + [/help?cmd=diff|fossil (g)diff], [/help?cmd=stash|fossil stash diff]. + * Add --strip-trailing-cr option to [/help?cmd=diff|fossil (g)diff] and + [/help?cmd=stash|fossil stash diff]. + * Add button "Ignore Whitespace" to /annotate, /blame, /ci, /fdiff + and /vdiff UI pages. + * Enhance [/reports?view=byweekday|/reports] with a "byweekday" view. + * Enhance the [/help?cmd=cat|fossil cat] command so that it works outside + of a checkout when using the -R command-line option. + * Use full-length SHA1 hashes, not abbreviations, in most hyperlinks. + * Correctly render the <title> markup on wiki pages in the + [/help?cmd=/artifact|/artifact] webpage. + * Enhance the [/help?cmd=whatis|fossil whatis] command to report on attachments + and cluster artifacts. Added the [/help?cmd=test-whatis-all] command for + testing purposes. + * Add support for HTTP Basic Authentication on [/help?cmd=clone|clone] and + [/help?cmd=sync|sync]. + * Fix the [/help?cmd=stash|stash] so that it remembers added files and re-adds + them when the stash is applied. + * Fix the server so that it avoids writing to the database (and thus avoids + database locking issues) on a + [/help?cmd=pull|pull] or [/help?cmd=clone|clone]. + * Add support for [./server.wiki#loadmgmt|server load management] using both + a cache of expensive pages (the [/help?cmd=cache|fossil cache] command) + and by rejecting expensive page requests when the server load average is too + high. + * Add the [/help?cmd=praise|fossil praise] command as an alias for + [/help?cmd=blame|fossil blame] for subversion compatibility. + * Enhance the [/help?cmd=test-diff|fossil test-diff] command with -y or --tk + options so that it shows both filenames above their respective columns in + the side-by-side diff output. + * Issue a warning if a [/help?cmd=add|fossil add] command tries to add a file + that matches the ignore-glob. + * Add option -W|--width to "[/help?cmd=stash|fossil stash ls]" + and "[/help?cmd=leaves|fossil leaves]" commands. + * Enhance support for running as the root user. Now works on Haiku. + * Added the -empty option to [/help?cmd=new|fossil new], which + causes it to not create an initial empty commit. The first commit after + checking out a repo created this way will become the initial commit. + * Enhance sync operations by committing each round-trip to minimize number + of retransmits when autosync fails. Include option for + [/help?cmd=update| fossil update] and [/help?cmd=merge| fossil merge] to + continue even if missing content. + * Minor portability fixes for platforms where the char type is unsigned + by default. + +

      Changes For Version 1.28 (2014-01-27)

      + * Enhance [/help?cmd=/reports | /reports] to support event type filtering. + * When cloning a repository, the user name passed via the URL (if any) + is now used as the default local admin user's name. + * Enhance the SSH transport mechanism so that it runs a single instance of + the "fossil" executable on the remote side, obviating the need for a shell + on the remote side. Some users may need to add the "?fossil=/path/to/fossil" + query parameter to "ssh:" URIs if their fossil binary is not in a standard + place. + * Add the "[/help?cmd=blame | fossil blame]" command that works just like + "fossil annotate" but uses a different output format that includes the + user who made each changes and omits line numbers. + * Add the "Tarball and ZIP-archive Prefix" configuration parameter under + Admin/Configuration. + * Fix CGI processing so that it works on web servers that do not + supply REQUEST_URI. + * Add options --dirsonly, --emptydirs, and --allckouts to the + "[/help?cmd=clean | fossil clean]" command. + * Ten-fold performance improvement in large "fossil blame" or + "fossil annotate" commands. + * Add option -W|--width and --offset to "[/help?cmd=timeline | fossil timeline]" + and "[/help?cmd=finfo | fossil finfo]" commands. + * Option -n|--limit of "[/help?cmd=timeline | fossil timeline]" now + specifies the number of entries, just like all other commands which + have the -n|--limit option. The various timeline-related functions + now output "--- ?? limit (??) reached ---" at the end whenever + appropriate. Use "-n 0" if no limit is desired. + * Fix handling of password embedded in Fossil URL. + * New --once option to [/help?cmd=clone | fossil clone] command + which does not store the URL or password when cloning. + * Modify [/help?cmd=ui | fossil ui] to respect "default user" in an open + repository. + * Fossil now hides check-ins that have the "hidden" tag in timeline webpages. + * Enhance /ci_edit page to add the "hidden" tag to check-ins. + * Advanced possibilities for commit and ticket change notifications over + http using TH1 scripting. + * Add --sha1sum and --integrate options + to the "[/help?cmd=commit | fossil commit]" command. + * Add the "clean" and "extra" subcommands to the + "[/help?cmd=all | fossil all]" command + * Add the --whatif option to "[/help?cmd=clean|fossil clean]" that works the + same as "--dry-run", + so that the name does not collide with the --dry-run option of "fossil all". + * Provide a configuration option to show dates on the web timeline + as "YYMMMDD HH:MM" + * Add an option to the "stats" webpage that allows an administrator to see + the current repository schema. + * Enhancements to the "[/help?cmd=/vdiff|/vdiff]" webpage for more difference + display options. + * Added the "[/tree?ci=trunk&expand | /tree]" webpage as an alternative + to "/dir" and make it the default way of showing file lists. + * Send gzipped HTTP responses to clients that support it. + +

      Changes For Version 1.27 (2013-09-11)

      + * Enhance the [/help?cmd=changes | fossil changes], + [/help?cmd=clean | fossil clean], [/help?cmd=extras | fossil extras], + [/help?cmd=ls | fossil ls] and [/help?cmd=status | fossil status] commands + to restrict operation to files and directories named on the command-line. + * New --integrate option to [/help?cmd=merge | fossil merge], which + automatically closes the merged branch when committing. + * Renamed /stats_report page to [/reports]. Graph width is now + relative, not absolute. + * Added yw=YYYY-WW (year-week) filter to timeline to limit the results + to a specific year and calendar week number, e.g. [/timeline?yw=2013-01]. + * Updates to SQLite to prevent opening a repository file using file descriptors + 1 or 2 on Unix. This fixes a bug under which an assertion failure could + overwrite part of a repository database file, corrupting it. + * Added support for unlimited line lengths in side-by-side diffs. + * New --close option to [/help?cmd=commit | fossil commit], which + immediately closes the branch being committed. + * Added chart option to [/help?cmd=bisect | fossil bisect]. + * Improvements to the "human or bot?" determination. + * Reports errors about missing CGI-standard environment variables for HTTP + servers which do not support them. + * Minor improvements to sync support on Windows. + * Added --scgi option to [/help?cmd=server | fossil server]. + * Internal improvements to the sync process. + * The internals of the JSON API are now MIT-licensed, so downstream + users/packagers are no longer affected by the "do no evil" license + clause. + +

      Changes For Version 1.26 (2013-06-18)

      + * The argument to the --port option for the [/help?cmd=ui | fossil ui] and + [/help?cmd=server | fossil server] commands can take an IP address in addition + to the port number, causing Fossil to bind to just that one IP address. + * After prompting for a password, also ask if that password should be + remembered. + * Performance improvements to the diff engine. + * Fix the side-by-side diff engine to work better with multi-byte Unicode text. + * Color-coding in the web-based annotation (blame) display. Fix the annotation + engine so that it is no longer confused by time-warps. + * The markdown formatter is now available by default and can be used for + tickets, wiki, and embedded documentation. + * Add subcommands "fossil bisect log" and "fossil bisect status" to the + [/help?cmd=bisect | fossil bisect] command, as well as other bisect enhancements. + * Enhanced defenses that prevent spiders from using excessive CPU and bandwidth. + * Consistent use of the -n or --dry-run command line options. + * Win32: Fossil now understands Cygwin paths containing one or more of + the characters "*:<>?|. Those are normally forbidden in + win32. This means that the win32 fossil.exe is better usable in a Cygwin + environment. See + [http://cygwin.com/cygwin-ug-net/using-specialnames.html#pathnames-specialchars]. + * Cygwin: Fossil now understands win32 absolute paths starting with a drive + letter everywhere. The default value of the "case-sensitive" setting is + now FALSE, except when case-sensitivity is enabled in the Windows kernel. + See + [http://cygwin.com/cygwin-ug-net/using-specialnames.html#pathnames-casesensitive] + * Enhancements to /timeline.rss, adding more flags for filtering + results, including the ability to subscribe to changes made + to individual tickets. For example: [/timeline.rss?y=t&tkt=12fceeec82]. + * Improved handling of the differences between case-sensitive and + case-insensitive filesystems. + * JSON API: added the 'status' command to report local checkout status. + * Fixes to the --args support and documented this feature in the help. + * Added [/stats_report] page. + * Added ym=YYYY-MM filter to the [/timeline?ym=2013-06]. + * Fixed: config reset now re-installs default ticket report format. + * ssh:// and file:// protocols now ignore proxy settings. + * Added [/hash-color-test] web page. + * Cherry-pick merges are recorded internally (though no yet displayed on the + timeline graph.) + * Bring in the latest versions of SQLite, zlib, and autosetup from upstream. + +

      Changes For Version 1.25 (2013-02-16)

      + * Enhancements to ticket processing. There are now two tables: TICKET and + TICKETCHNG. There is one row in TICKETCHNG for each ticket artifact. + Fields from ticket artifacts go into either or both of TICKET and + TICKETCHNG, whichever contain matching column names. Default ticket + edit and viewing scripts are updated to use TICKETCHNG. The TH1 + scripting language is enhanced to support this, including the new + "query" command for doing SQL queries against the repository database. + All changes should be backwards compatible. + * Add the ability to moderate ticket and wiki changes. Unmoderated changes + do not sync and may be deleted by the moderator if found to contain spam + or other objectionable content. + * Add javascript so that clicking on a node of the timeline graph selects + that node. Then clicking on a second node shows a diff between the + two nodes. Clicking on the selected node unselects it. + * Warn of unresolved merge conflicts in "fossil status" and disallow + commits of unresolved conflicts unless the --allow-conflict option + is used. + * Add javascript so that clicking on column headers in a ticket report + sorts by the indicated column. + * Add the "fossil cat" command which is basically an alias for + "fossil finfo -p". + * Hyperlinks with the class "button" are rendered as submenu buttons + on embedded documentation. + * The check-in comment editor on Windows now defaults to NotePad.exe. + * Correctly deal with BOMs in check-in comments. Also attempt to convert + check-in comments to UTF8 from other encodings. + * Allow the deletion of multiple stash entries using multiple arguments + to the "fossil stash rm" command. + * Enhance the "fossil server DIRECTORY" command to serve static content + files contained in DIRECTORY. For security, only files with a + recognized suffix (such as *.html, *.jpg, *.txt, etc) will be delivered + as static content, and *.fossil files are not on the list of recognized + suffixes. There are additional restrictions on the names of the files. + * Allow the "fossil ui" command to specify a directory as long as the + the --notfound option is used. + * Add a configuration option that causes timeline messages to be rendered + as text/x-fossil-plain (which is the same as text/plain except that + hyperlinks inside of [...] are decorated.) + * Only decorate [...] in check-in comments and tickets + if the contented text really is a valid hyperlink target. + * Improvements to the side-by-side diff algorithm, for a more + human-friendly display in some complex cases. + * Added [utime] and [stime] commands to TH1. These + commands can be used for things such as displaying the page rendering + time in the footer. + * Add the ability to pass command-line options of "fossil rebuild" to + "fossil all rebuild". + * Add the --deanalyze option to "fossil rebuild" (and "fossil all rebuild") + * Do not run the graphical merging tool nor leave merge-droppings after a + dry-run merge. Display an improved merge-summary message at the end of + the merge. + * Add options to "fossil commit" to override the various sanity checks. + Options added: --allow-empty, --allow-fork, --allow-older, and + --allow-conflict. + * Optionally require a CAPTCHA (controlled by a setting on the + Admin/Access webpage) when a user who is not logged in tries to + edit wiki, or a ticket, or an attachment. + * Improvements to the "ssh://" sync protocol, to help it move past + noisy motd comments. 
+ * Add the uf=FILE-SHA1-HASH query parameter to the timeline, causing the + timeline to show only check-ins that contain the specific file identified + by FILE-SHA1-HASH. ("uf" stands for "uses file".) + * Enhance the file change annotator so that it follows the file across + name changes. + * Fix the server-side of the sync protocol so that it will not generate + a delta loop when a file changes from its original state, through two + or more intermediate states, and back to the original state, all within + a single sync. + * Show much less output during a sync operation, unless the --verbose + option is used. + * Set the action= attribute of <form> elements using javascript, + as an additional defense against spam-bots. + * Disallow invalid UTF8 characters (such as characters in the surrogate + pair range) in filenames. + * Judge the UserAgent strings issued by the NetSurf webbrowser to be + coming from a human, not from a bot. + * Add the zlib sources to the Fossil source tree (under compat/zlib) and + use those sources when compiling on (Windows) systems that do not have + a zlib library installed by default. + * Prompt the user with the option to convert non-UTF8 files into UTF8 + when committing. + * Allow the characters *[]? in filenames. + * Allow the --context option on diff commands to have a value of 0. + * Added the "dbstat" command. + * Enhanced "fossil merge" so that if the VERSION argument is omitted, Fossil + tries to merge any forks of the current branch. + * Improved detection of forks in a commit race. + * Added the --analyze option to "fossil rebuild". + +

      Changes For Version 1.24 (2012-10-22)

      + * Added support for WYSIWYG editing of wiki pages. WYSIWYG is turned off + by default and can be turned on by setting a configuration option. + * Allow style= attribute to occur in HTML markup on wiki pages. + * Added the --tk option to the "fossil diff" and "fossil stash diff" + commands, causing color-coded diff output to be displayed in a Tcl/Tk + GUI window. This option only works if Tcl/Tk is installed on the + host. + * On Windows, make the "gdiff" command default to use WinDiff.exe. + * Update the "fossil stash" command so that it always prompts for a + comment if the -m option is omitted. + * Enhance the timeline webpages so that a=, b=, c=, d=, p=, and dp= + query parameters (and others) can all accept any valid check-in name + (such as branch names or labels) instead of just SHA1 hashes. + * Added the "fossil stash show" command. + * Added the "fileage" webpage with links to this page from the check-in + information page and from the file browser. + * Added --age and -t options to the "fossil ls" command. + * Added the --setmtime option to "fossil update". When used, the mtime + of all managed files is set to the time when the most recent version of + the file was checked in. + * Changed the "vdiff" webpage to show the complete text of files that + were added or removed (the equivalent of using the -N or --newfile + options with the "fossil diff" command-line.) + * Added the --temp option to "fossil clean" and "fossil extra", causing + those commands to only look at temporary files generated by Fossil, + such as merge-conflict reports or aborted check-in messages. + * Enhance the raw page download so that it can guess the mimetype of + attachments based on the filename. + * Change the behavior of the from= and to= query parameters on the + timeline page so that by default the path between the two specified + check-ins avoids merges. + * Add the --baseurl option to "fossil server" and "fossil http" commands, + so that those commands can be used with reverse proxies. + * If unable to determine the command-line user, do not guess. Instead + issue an error message. This helps prevent check-ins from accidentally + occurring under the wrong username. + * Include branch information in the output of file change listings + (the "finfo" webpage). + * Make the simplified view of file history, rather than the full view, + the default. + * In the "fossil configuration" command, allow the "css" option for + synchronizing, importing, or exporting just the CSS file. This makes + it easier to share CSS files across repositories by exporting from + one and importing to another. + * Add the (unsupported) "fossil test-orphans" command. + * Add the --template option to the "fossil init" command, to facilitate + creating new repositories based on a template repository. + * Add the diff-binary setting, which if enabled causes binary files to + be passed to the "gdiff" command for it to deal with, rather than simply + printing a "cannot diff binary files" error. + * Add the --unified option to the "fossil diff" command to force a unified + diff even if the --tk option (which normally implies a side-by-side diff) + is used. + * Present a choice of nearby branches and versions to diff against on the + check-in information page. + * Add the --force option to the "fossil merge" command that will force the + merge to occur even if it would be a no-op. This is sometimes useful for + documentation purposes. + * Add another built-in skin: "Enhanced Default". Other minor tweaks to + the existing skins.
+ * Add the "urllist" webpage, showing a list of URLs by which a server + instance of Fossil has been accessed. Requires "Administrator" privileges. + A link is on the "Setup" main page. + * Enable dynamic loading of the Tcl runtime for installations that want + to use Tcl as part of their configuration. This reduces the size of + the Fossil binary and allows any version of Tcl 8.4 or later to be used. + * Merge the latest SQLite changes from upstream. + * Lots of minor bug fixes. + +

      Changes For Version 1.23 (2012-08-08)

      + * The default checkout database name is now ".fslckout" instead of + "_FOSSIL_" on Unix. Both names continue to work. + * Added the "fossil all changes" command + * Added the --ckout option to the "fossil all list" command + * Added the "public-pages" glob pattern that can be configured to allow + anonymous users to see embedded documentation on sites where source + code should not be accessible to anonymous users. + * Allow multiple --tag options on the same "fossil commit" command. + * Change the meaning of the --bgcolor option for "fossil commit" to only + change the color for that one commit. The new --branchcolor option + is available to set a persistent background color. + * Add the branch= query parameter to the vdiff page and the --branch option + to the "fossil diff" command. + * Check-in names of the form "root:BRANCH" now refer to the origin of + the branch. Hence to see all changes in a branch, use + "fossil diff --from root:BRANCH --to BRANCH". The --branch option on + the diff command is an alias for the same. + * Add the ability to configure ad-units to be displayed between the menu + bar and the content. + * Add the ability to set a background image as part of server configuration. + * Allow partial commits of cherrypick merges. + * Updates against an uncommitted merge are now a warning, not a fatal error. + * Prompt the user to continue if a check-in comment is unedited. + * Fixes to case sensitivity settings with the /dir webpage. + * Repositories now try to remember the locations of all checkouts and + web-access URLs and display this information with the + "fossil info $REPO" command. + * Improved defense against spiders: The src= attribute of + <a> elements is set using javascript after the page loads. + * Enhanced formatting of the user list page. + * If a file named in "fossil add" is missing, that is now a warning instead + of a fatal error. + * Fix side-by-side diff so that it displays correctly with + multi-byte UTF8 characters. + * Performance improvements in the diff logic. + * Other performance tweaks and documentation updates. + +

      Changes For Version 1.22 (2012-03-17)

      + * Greatly improved "diff" processing including the new --brief option, + partial line matching, colorized in-line diffs, and better performance. + * Promote "allow-symlinks" to a versionable setting + * Harden the CGI processing logic against DOS attacks + * Add the ability to run TH1 scripts after sync requests + * Store the repository name in _FOSSIL_ as it is typed in the "open" command, + possibly as a relative pathname. + * Make ".fslckout" the alternative name for the "_FOSSIL_" file. + * Change the "ssh:" transfer method to allow all access regardless of + user permission. + * Improvements to the timeline messages associated with tag changes. + (Requires a "[/help/rebuild | fossil rebuild]" to take effect.) + * Various additions and fixes for the JSON API. + * Improved merge-with-rename handling. + * --cherrypick merges use their origin's commit message by default. + * Added support for multiple concurrent logins per user. + * Update to use SQLite version 3.7.11. + * Various minor bug fixes. + +

      Changes For Version 1.21 (2011-12-13)

      + * Added side-by-side diffs in the command-line interface + * Automatically enable hyperlinks if the UserAgent string in the + HTTP header suggests that the requestor is a human and not a bot. + * Show only commonly used commands with "fossil help". Use + "fossil help --all" to see the complete list now. + * Improvements to the "stash" command: (1) Stash all files, not just + those below the working directory. (2) Add the --detail option to + "list". (3) Confirm before "drop --all". (4) Add the "help" + subcommand. + * Add an Admin/Access setting to change the number of octets of the + IP address that are saved in login cookies - allowing this setting + to be changed to zero + * Promote the "test-md5sum" command to "md5sum". + * Added the "whatis" command. + * Stop showing the server-code in status outputs - it is no longer used + for anything. + * Added a compile-time option (--with-tcl) to build in full Tcl scripting + support via integration with TH1. + * Merged the JSON branch into trunk. Disabled by default. Enabled + by a compile-time option. Probably it will be enabled by default + in some future release. + * Update to use SQLite version 3.7.9 plus the alignment fix for Sparc. + +

      Changes For Version 1.20 (2011-10-21)

      + * Added side-by-side diffs in HTML interface. [0bde74ea1e] + * Added support for symlinks. (Controlled by "allow-symlinks" setting, + off by default). [e4f1c1fe95] + * Fixed CLI annotate to show the proper file version in case there + are multiple equal versions in history. [e161670939] + * Timeline now shows tag changes (requires rebuild).[87540ed6e6] + * Fixed annotate to show "more relevant" versions of lines in + some cases. [e161670939] + * New command: ticket history. [98a855c508] + * Disabled SSLv2 in HTTPS client.[ea1d369d23] + * Fixed constant prompting regarding previously-saved SSL + certificates. [636804745b] + * Other SSL improvements. + * Added -R REPOFILE support to several more CLI commands. [e080560378] + * Generated tarballs now have constant timestamps, so they are + always identical for any given check-in. [e080560378] + * A number of minor HTML-related tweaks and fixes. + * Added --args FILENAME global CLI argument to import arbitrary + CLI arguments from a file (e.g. long file lists). [e080560378] + * Fixed significant memory leak in annotation of files with long + histories.[9929bab702] + * Added warnings when a merge operation overwrites local copies + (UNDO is available, but previously this condition normally went + silently unnoticed). [39f979b08c] + * Improved performance when adding many files. [a369dc7721] + * Improve merges which contain many file renames. [0b93b0f958] + * Added protection against timing attacks. [d4a341b49d] + * Firefox now remembers filled fields when returning to forms. [3fac77d7b0] + * Added the --stats option to the rebuild command. [f25e5e53c4] + * RSS feed now passes validation. [ce354d0a9f] + * Show overridden user when entering commit comment. [ce354d0a9f] + * Made rebuilding from web interface silent. [ce354d0a9f] + * Now works on MSVC with repos >2GB. [6092935ff2] + * A number of code cleanups to resolve warnings from various compilers. + * Update the built-in SQLite to version 3.7.9 beta. + +

      Changes For Version 1.19 (2011-09-02)

      + * Added a ./configure script based on autosetup. + * Added the "[/help/winsrv | fossil winsrv]" command + for creating a Fossil service on Windows systems. + * Added "versionable settings" where settings that affect + the local tree can be stored in versioned files in the + .fossil-settings directory. + * Background colors for branches are chosen automatically if no + color is specified by the user. + * The status, changes and extras commands now show + pathnames relative to the current working directory, + unless overridden by command line options or the + "relative-paths" setting.
      WARNING: This + change will break scripts which rely on the current + output when the current working directory is not the + repository root. + * Added "empty-dirs" versionable setting. + * Added support for client-side SSL certificates with "ssl-identity" + setting and --ssl-identity option. + * Added "ssl-ca-location" setting to specify trusted root + SSL certificates. + * Added the --case-sensitive BOOLEAN command-line option to many commands. + Defaults to true for Unix and false for Windows. + * Added the "Color-Test" submenu button on the branch list web page. + * Compatibility improvements to the git-export feature. + * Performance improvements on SHA1 checksums + * Update to the latest SQLite version 3.7.8 alpha. + * Fix the tarball generator to work with very long pathnames + +

      Changes For Version 1.18 (2011-07-14)

      + * Added this Change Log + * Added sequential version numbering + * Added an optional configure script - the Makefile still works for most + systems. + * Improvements to the "annotate" algorithm: only search primary + ancestors and ignore branches. + * Update the "scrub" command to remove traces of login-groups and + subrepositories. + * Added the --type option to the "fossil tag find" command. + * In contexts where only a check-in makes sense, resolve branch and + tag names to checkins only, never events or other artifacts. + * Improved display of file renames on a diff. A rebuild is required + to take full advantage of this change. + * Update the built-in SQLite to version 3.7.7. ADDED www/checkin.wiki Index: www/checkin.wiki ================================================================== --- www/checkin.wiki +++ www/checkin.wiki @@ -0,0 +1,89 @@ +Check-in Checklist + +

      Always run the following checklist prior to every +check-in or commit to the Fossil repository:

      + +Before every check-in: + + 0. fossil user default → your username is correct. + + 1. fossil diff → +
        +
      1. No stray changes +
      2. All changes comply with the license +
      3. All inputs are scrubbed before use +
      4. No injection attacks via %s formats +
      + + 2. fossil extra → no unmanaged files need to be added. + + 3. The check-in will go onto the desired branch. + → Check-ins to trunk normally require approval from + the lead programmer (drh). + + 4. auto-sync is on, or the system clock is verified + + 5. If source files have been added or removed, ensure all makefiles + and configure scripts have been updated accordingly. + +Before every check-in to trunk: + + 6. No compiler warnings on the development machine. + + 7. The fossil executable that results from a build actually works. + +
      +

      Commentary

      + +Before you go ahead and push content back to the servers, make sure +that the username you are using by default matches your username +within the project. Also remember to enable the localauth setting +if you intend to make changes via a locally served web UI. + +Item 1 is the most important step. Consider using gdiff +instead of diff if you have a graphical differ configured. Or +use the command-line option --tk. Also consider the -N +command-line option to show the complete text of newly added files. +The recommended command for completing checklist item 1 is: + + fossil diff --tk -N + +Look carefully at every changed line in item 1. +Make sure that you are not about to commit unrelated changes. +If there are two or more unrelated changes present, consider +breaking up the commit into two or more separate commits. +Always make 100% sure that all changes are compatible with the +BSD license, that you have the authority to commit the code in accordance +with the [/doc/trunk/www/copyright-release.html | CLA] that you have +signed and have on file, and that +no NDAs, copyrights, patents, or trademarks are infringed by the check-in. +Also check very carefully to make sure that +you are not introducing security vulnerabilities. Pay particular attention +to the possibility of SQL or HTML injection attacks. + +Item 2 verifies that you have not added source files but failed to +do the necessary "fossil add" to manage them. If you commit +without bringing the new file under source control, the check-in will +be broken. That, in turn, can cause complications far in the future +when we are bisecting for a bug. + +For item 3, run "fossil status" or the equivalent to +make sure your changes are going into the branch you think they are. +Also double-check the branch name when entering change comments. +Never check into trunk unless you are expressly authorized to do so. + +For Item 4, if you are on-network, make sure auto-sync is enabled. This +will minimize the risk of an unintended fork. It will also give you a +warning if your system clock is set incorrectly. If you are off-network, +make sure that your system clock is correct or at least close to correct +so that your check-in does not appear out-of-sequence on timelines. +On-network commits are preferred, especially for trunk commits. + +Items 6 and 7 help to ensure that check-ins on the trunk always work. +Knowing that the trunk always works makes bisecting much easier. Items +6 and 7 are recommended for all check-ins, even those that are on a branch. +But they are especially important for trunk check-ins. We desire that +all trunk check-ins work at all times. Any experimental, unstable, or +questionable changes should go on a branch and be merged into trunk after +further testing. ADDED www/checkin_names.wiki Index: www/checkin_names.wiki ================================================================== --- www/checkin_names.wiki +++ www/checkin_names.wiki @@ -0,0 +1,243 @@ +Check-in Names + + + +
      +

      Executive Summary

      +

      A check-in can be identified using any of the following +names: +

        +
      • SHA1 hash prefix +
      • Tag or branchname +
      • Timestamp: YYYY-MM-DD HH:MM:SS +
      • tag-name : timestamp +
      • root : branchname +
      • Special names: +
          +
        • tip +
        • current +
        • next +
        • previous or prev +
        • ckout for embedded docs +
        +
      +
      +Many Fossil [/help|commands] and [./webui.wiki | web-interface] URLs accept +check-in names as an argument. For example, the "[/help/info|info]" command +accepts an optional check-in name to identify the specific check-in +about which information is desired: + +
      +fossil info checkin-name +
      + +You are perhaps reading this page from the following URL: + +
      +http://www.fossil-scm.org/fossil/doc/trunk/www/checkin_names.wiki +
      + +The URL above is an example of an [./embeddeddoc.wiki | embedded documentation] +page in Fossil. The bold term of the pathname is a check-in name that +determines which version of the documentation to display. + +Fossil provides a variety of ways to specify a check-in. This +document describes the various methods. + +

      Canonical Check-in Name

      + +The canonical name of a check-in is the SHA1 hash of its +[./fileformat.wiki#manifest | manifest] expressed as a 40-character +lowercase hexadecimal number. For example: + +
      +fossil info e5a734a19a9826973e1d073b49dc2a16aa2308f9
      +
      + +The full 40-character SHA1 hash is unwieldy to remember and type, though, +so Fossil also accepts a unique prefix of the hash, using any combination +of upper and lower case letters, as long as the prefix is at least 4 +characters long. Hence the following commands all +accomplish the same thing as the above: + +
      +fossil info e5a734a19a9
      +fossil info E5a734A
      +fossil info e5a7
      +
      + +Many web-interface screens identify check-ins by a 10- or 16-character +prefix of the canonical name. + +

      Tags And Branch Names

      + +Using a tag or branch name where a check-in name is expected causes +Fossil to choose the most recent check-in with that tag or branch name. +So, for example, as of this writing the most recent check-in that +is tagged with "release" is [d0753799e44]. +So the command: + +
      +fossil info release
      +
      + +Results in the following output: + +
      +uuid:         d0753799e447b795933e9f266233767d84aa1d84 2010-11-01 14:23:35 UTC
      +parent:       4e1241f3236236187ad2a8f205323c05b98c9895 2010-10-31 21:51:11 UTC
      +child:        4a094f46ade70bd9d1e4ffa48cbe94b4d3750aef 2010-11-01 18:52:37 UTC
      +child:        f4033ec09ee6bb2a73fa588c217527a1f311bd27 2010-11-01 23:38:34 UTC
      +tags:         trunk, release
      +comment:      Fix a typo in the file format documentation reported on the
      +              Tcl/Tk chatroom. (user: drh)
      +
      + +There are multiple check-ins that are tagged with "release" but +(as of this writing) the [d0753799e44] +check-in is the most recent, so it is the one that is selected. + +Note that unlike other DVCSes, a "branch" in Fossil +is not anything special; it is simply a sequence of check-ins that +share a common tag. So the same mechanism that resolves tag names +also resolves branch names. + +Note also that there can (in theory) be an ambiguity between tag names +and canonical names. Suppose, for example, you had a check-in with +the canonical name deed28aa99a835f01fa06d5b4a41ecc2121bf419 and you +also happened to have tagged a different check-in with "deed2". If +you use the "deed2" name, does it choose the canonical name or the tag +name? In such cases, you can prefix the tag name with "tag:". +For example: + +
      +fossil info tag:deed2 +
      + +The "tag:deed2" name will refer to the most recent check-in +tagged with "deed2" not to the +check-in whose canonical name begins with "deed2". + +

      Whole Branches

      + +Usually when a branch name is specified, it means the latest check-in on +that branch. But for some commands (ex: [/help/purge|purge]) a branch name +as the argument means the earliest connected check-in on the branch. This +seems confusing when being explained here, but it works out to be intuitive +in practice. + +For example, the command "fossil purge XYZ" means to purge the check-in XYZ +and all of its descendants. But when XYZ is in the form of a branch name, one +generally wants to purge the entire branch, not just the last check-in on the +branch. And so for this reason, commands like purge will interpret a branch +name to be the first check-in of the branch rather than the last. If there +are two or more branches with the same name, then these commands will select +the first check-in of the branch that has the most recent check-in. What +happens is that Fossil searches for the most recent check-in with the given +tag, just as it always does. But if that tag is a branch name, it then walks +back down the branch looking for the first check-in of that branch. + +Again, this behavior only occurs on a few commands where it makes sense. + +
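      For instance, assuming a branch named "xyzzy" exists (the name here is
      purely illustrative), the command

          fossil purge xyzzy

      removes the first check-in of the xyzzy branch together with everything
      descended from it, which is effectively the entire branch.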

      Timestamps

      + +A timestamp in one of the formats shown below means the most recent +check-in that occurs no later than the timestamp given: + + * YYYY-MM-DD + * YYYY-MM-DD HH:MM + * YYYY-MM-DD HH:MM:SS + * YYYY-MM-DD HH:MM:SS.SSS + +The space between the day and the time can optionally be +replaced by an uppercase T and the entire timestamp can +optionally be followed by "z" or "Z". In the fourth +form with fractional seconds, any number of digits may follow the +decimal point, though due to precision limits only the first three +digits will be significant. + +In its default configuration, Fossil interprets and displays all dates +in Universal Coordinated Time (UTC). This tends to work the best for +distributed projects where participants are scattered around the globe. +But there is an option on the Admin/Timeline page of the web-interface to +switch to local time. The "Z" suffix on a timestamp check-in +name is meaningless if Fossil is in the default mode of using UTC for +everything, but if Fossil has been switched to local time mode, then the +"Z" suffix means to interpret that particular timestamp using +UTC instead of local time. + +For an example of how timestamps are useful, +consider the homepage for the Fossil website itself: + +
      +http://www.fossil-scm.org/fossil/doc/trunk/www/index.wiki +
      + +The bold component of that URL is a check-in name. To see what the +Fossil website looked like on January 1, 2009, one has merely to change +the URL to the following: + +
      +http://www.fossil-scm.org/fossil/doc/2009-01-01/www/index.wiki +
      + +
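      The same timestamp forms work anywhere a check-in name is accepted on the
      command line. For example (the date is only illustrative):

          fossil update 2009-01-01

      This updates the working check-out to the most recent check-in that is
      not more recent than the given date.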

      Tag And Timestamp

      + +A check-in name can also take the form of a tag or branch name followed by +a colon and then a timestamp. The combination means to take the most +recent check-in with the given tag or branch which is not more recent than +the timestamp. So, for example: + +
      +fossil update trunk:2010-07-01T14:30 +
      + +Would cause Fossil to update the working check-out to be the most recent +check-in on the trunk that is not more recent than 14:30 (UTC) on +July 1, 2010. + +

      Root Of A Branch

      + +A branch name that begins with the "root:" prefix refers to the +last check-in in the parent branch prior to the beginning of the branch. +Such a label is useful, for example, in computing all diffs for a single +branch. The following example will show all changes in the hypothetical +branch "xyzzy": + +
      +fossil diff --from root:xyzzy --to xyzzy +
      + + +

      Special Tags

      + +The tag "tip" means the most recent check-in. The "tip" tag is roughly +equivalent to the timestamp tag "5000-01-01". + +If the command is being run from a working check-out (not against a bare +repository) then a few extra tags apply. The "current" tag means the +current check-out. The "next" tag means the youngest child of the +current check-out. And the "previous" or "prev" tag means the primary +(non-merge) parent of the current check-out. + +For embedded documentation, the tag "ckout" means the version as present in +the local source tree on disk, provided that the web server is started using +"fossil ui" or "fossil server" from within the source tree. This tag can be +used to preview local changes to documentation before committing them. It does +not apply to CLI commands. + +

      Additional Examples

      + +To view the changes in the most recent check-in prior to the version currently +checked out: + +
      +fossil diff --from previous --to current
      +
      + +Suppose you are in the habit of tagging each release with a "release" tag. +Then to see everything that has changed on the trunk since the last release: + +
      +fossil diff --from release --to trunk
      +
      DELETED www/cmd_add.wiki Index: www/cmd_add.wiki ================================================================== --- www/cmd_add.wiki +++ www/cmd_add.wiki @@ -1,65 +0,0 @@ -

      add

      - -The often used add command is how you tell fossil to -include a (usually new) file in the repository. - -fossil is designed to manage artifacts whose role is being -"source" for something, most probably software program code or other -text. One can imagine all kinds of ways to let fossil know just what -constitutes a source; the simplest and most direct way it -actually finds out is when you give it the - fossil add path command. - -It's reasonable to think of -the [./cmd_import.wiki | import] -and [./cmd_clone.wiki | clone] -commands as very high-powered versions of the add -command that are combined with system level file movement and -networking functions. Not particularly accurate, but reasonable. - -Typing  fossil add myfile causes fossil to put -myfile into the repository at the next -commit—provided you issue it from within the source -tree, of course. - -By contrast,  fossil add mydirectory will add -all of the files in mydirectory, and -all of its sub-directories. In other words, adding a directory will -recursively add all of the directory's file system descendants to the -repository. This was an oft-requested feature, recently implemented. -It is very flexible. Only when you add a directory do you get the -recursive behavior. If you are globbing a subset of files, you won't -get the recursion. - -Realize that the repository is not changed by the -add command, but by the  commit command. -add myfile tells fossil to "mark" -myfile as part of the repository. Only commands which actually -manipulate the content of the repository can physically put source -artifacts into (or remove them from) the repository. - -Just to keep things symmetric, there are also commands that can -manipulate the repository without affecting the checked-out sources -(see [./cmd_pull.wiki | fossil pull], for instance.) - -It's worthwhile reiterating that fossil is storing the content -of source artifacts and the names of the artifacts in their "native -habitat", a sequence of "temporal slices" (aka "versions") of the -state of the whole system, and a set of unique identifiers. When you -add a file to a repository, the path to the file is a part of -the name of the file. There is a mis-match between the file -system's idea of a directory (a file containing pointers to files) and -fossil's idea (a substring of the name of the artifact.) The names of -the artifacts specify their relative locations because of the way the -file system interprets them. If you don't keep this in mind, you may -fool yourself into thinking fossil somehow "stores -directories." It doesn't, and believing it does will eventually -confuse you. - -See also: [./cmd_rm.wiki | fossil rm], -[./cmd_import.wiki | fossil import], -[./cmd_clone.wiki | fossil clone], -[./cmd_commit.wiki | fossil commit], -[./cmd_pull.wiki | fossil pull], -[./cmd_settings.wiki | fossil setting] (async), -[./reference.wiki | Reference] DELETED www/cmd_all.wiki Index: www/cmd_all.wiki ================================================================== --- www/cmd_all.wiki +++ www/cmd_all.wiki @@ -1,59 +0,0 @@ -

      all

      - -The all command will let you perform (some) commands on -all of your repositories, and provides a way of finding all -of your repositories as well. - -There are some commands you might especially want to perform on every -repository you've got, once in a while. fossil all - includes four of the most likely as sub-commands: -[./cmd_pull.wiki | pull], -[./cmd_push.wiki | push], -[./cmd_rebuild.wiki | rebuild] and -[./cmd_sync.wiki | sync]. - -Follow the links to find out what each of those do, and then a moment -of thought will tell you why you might want to have them available for -all repositories. - -Certainly you'll want your repositories all rebuilt when you upgrade -fossil after there has been a change in the repository -structure. For the others, it depends. Usually you would want -across-the-board versions if you've been "off Net" for a while, and -have commits to multiple repositories than you need to share, or want -to get the repository changes that have been made by others, or both. - -The last sub-command provided by all is "list." - -While the other sub-commands give you a way to conveniently take care -of all of your repositories for some common tasks, the -list provides a way to take care of any subset of your -repositories in any way you want. It provides a list of all of your -repositories' locations. fossil all list -outputs a one-per-line listing of the path for each of your -repositories. With that in hand, you can easily script just about any -repository manipulations you want. - -Or, you could just jog your memory. - -The all command uses the .fossil file in the home -directory to find all of your repositories, so you can mess it up by -moving your repositories around. This is easy to do inadvertently if -you have a cavalier attitude about repos, but you'll know pretty -quickly that you've done it—many commands you try to use from -inside of a checkout won't work correctly. The .fossil file is -an sqlite db file which fossil uses to keeping track of -repository locations. Advice: if you move your repositories around, -let fossil know you did; -[./cmd_close.wiki | close] them before you move -them, and then [./cmd_open.wiki | open] them from -their new locations. - -See also: [./cmd_pull.wiki | fossil pull], -[./cmd_push.wiki | fossil push], -[./cmd_rebuild.wiki | fossil rebuild], -[./cmd_sync.wiki | fossil sync], -[./cmd_open.wiki | fossil open], -[./cmd_close.wiki | fossil close], -[./reference.wiki | Reference], -[http://www.sqlite.org | SQLite] DELETED www/cmd_cgi.wiki Index: www/cmd_cgi.wiki ================================================================== --- www/cmd_cgi.wiki +++ www/cmd_cgi.wiki @@ -1,19 +0,0 @@ -

      cgi

      - -cgi is the command that tells fossil it is running as a -web-page supplier for an external http server. (For you web-miesters, -the "cgi" is actually unnecessary if your web environment is set up in -a normal fashion.) - -This is the command you will probably use if you want to make a -moderate-to-high hit rate public repository (like the fossil -project's self-hosted repository) but you'll be using it in the -shebang line. - -If you need lower level access to the pages fossil generates, -you'll want to look at the [./cmd_http.wiki | http] -command. - -See also: [./cmd_http.wiki | fossil http], -[./concepts.wiki#saserv | Concepts (setting up a server)], -[./reference.wiki | Reference] DELETED www/cmd_changes.wiki Index: www/cmd_changes.wiki ================================================================== --- www/cmd_changes.wiki +++ www/cmd_changes.wiki @@ -1,19 +0,0 @@ -

      changes

      - -The changes command is informational, it doesn't do -anything to a checked-out project, but it tells you something about -it. - -This is simply a quick way to get a list of the files which are -different in the source tree (the checkout) and the repository. - -There is a bit more information (was a file edited, added or -removed?, for instance). - -The same information will be displayed if you -[./cmd_status.wiki |  fossil status ], -except there will be some additional repository information displayed -first. - -See also: [./cmd_status.wiki | fossil status], -[./reference.wiki | Reference] DELETED www/cmd_checkout.wiki Index: www/cmd_checkout.wiki ================================================================== --- www/cmd_checkout.wiki +++ www/cmd_checkout.wiki @@ -1,50 +0,0 @@ -

      checkout

      - -The checkout command is how a project version goes from -the repository to the chosen project directory. - -Without going into detail about getting/opening a repository, once you -have a repository and a place in which the repository has been -opened, you can "check out" a "version" of the files which make up the -repository at somewhen. - -The term "checkout" is traditional in source management systems, but a -bit of an anachronism in a distributed system like fossil. -"Checking out" a version of a project means getting all of the source -artifacts out into the standard environment---currently the -shell/file-system. - -Traditionally, the version is some "incrementing" code like -v1.3.2rcQuink or f451 or something. In distributed SCM systems it's -some absolutely unique identifier, usually the result of a one-way -hash (SHA1, in fossil's case.) The fossil term for these is -artifact IDs. - -fossil checkout  id will check out the -version corresponding to id into the source tree. - -checkout requires you to pick a precise version to put into -the "on-disk" source tree, and leaves any edited files which are already -in the tree intact. - -update, on the other hand, merges edits into the -version you choose (if you choose one; you can default the version.) - -Since a version is required, and fossil's artifact IDs are -fairly long, there are two good ways to refer to the version. You can -use a unique proper prefix of the version (six or eight characters is -more than enough in most cases) or you can [./cmd_tag.wiki | -tag] your check-ins and use the tags for checkouts, reverting, -branching (tags are the best way to branch) and so forth. Both -methods work throughout fossil. - -See also [./cmd_tag.wiki | fossil tag], -[./cmd_revert.wiki | fossil revert], -[./cmd_update.wiki | fossil update], -[./cmd_push.wiki | fossil push], -[./cmd_pull.wiki | fossil pull], -[./cmd_clone.wiki | fossil clone], -[./cmd_open.wiki | fossil open], -[./cmd_close.wiki | fossil close], -[./cmd_new.wiki | fossil new], -[./reference.wiki | Reference] DELETED www/cmd_extra.wiki Index: www/cmd_extra.wiki ================================================================== --- www/cmd_extra.wiki +++ www/cmd_extra.wiki @@ -1,38 +0,0 @@ -

      extra

      - -The extra command is informational, it doesn't do anything to -a checked-out project, but it tells you something about it. - -Extra files are files that exist in a checked-out project, but don't belong to -the repository. - -The fossil extra command will get you a list of these files. - -This is convenient for figuring out if you've -[./cmd_add.wiki | add]ed every file that needs to be - -in the repository before you do a commit. It will also tell you what -will be removed if you [./cmd_clean.wiki | clean] -the project. - -Suppose, for example, you have a "noodle.src" file as a scratch pad -for source code, and you don't want to include your latest -hare-brained ideas in the repository? You don't add it -to the repository, of course—though there are ways you might add -it unintentionally. If your project is big, and you want to find -noodle.src, and anything else that isn't under source control within -the project directories, then fossil extra will -give you a list. - -If you don't think this is all that useful, then you've never had to write -a shell script that only affects project files and leaves everything -else alone. ;) - -The extra command is almost, but not quite entirely, the exact -opposite of the [./cmd_ls.wiki | ls] command. - -See also: [./cmd_status.wiki | fossil status], -[./cmd_ls.wiki | fossil ls], -[./cmd_changes.wiki | fossil changes], -[./cmd_clean.wiki | fossil clean], -[./reference.wiki | Reference] DELETED www/cmd_ls.wiki Index: www/cmd_ls.wiki ================================================================== --- www/cmd_ls.wiki +++ www/cmd_ls.wiki @@ -1,55 +0,0 @@ -

      ls

      - -The ls* command is informational, it doesn't do anything to -a checked-out project, but it tells you something about it. - -A project consists of a "source tree" of "artifacts" (see [./concepts.wiki | Fossil concepts].) -From a practical standpoint this is a set of files and directories rooted -at a main project directory. The files that are under source control aren't -particularly distinguishable from those that aren't. The ls and -extra commands provide this information. - -fossil ls produces a listing of the files which are under source -control and their status within the repository. The output is a simple -list of STATUS/filepath pairs on separate lines. The status of a file will -likely be one of ADDED, UNCHANGED, UPDATED, or DELETED. * - -It's important to realize that this is the status relative to the repository, -it's the status as fossil sees it and has nothing to do with -filesystem status. If you're new to source-management/version-control -systems, you'll probably get bit by this concept-bug at least once. - -To really see the difference, issue an ls before and after doing -a [./cmd_commit.wiki | commit]. Before, the status of files may be any of the three, -but after committing changes the status will be UNCHANGED "across -the board." - -By way of example, here's what I see if I fossil ls in the -directory where I have checked out my testing repository: -
      -    $ fossil ls
      -    ADDED     feegboing
      -    UNCHANGED fossil_docs.txt
      -    DELETED   nibcrod
      -
      -But if I do a simple ls, what I get is -
      -    $ ls
      -    feegboing  fossil_docs.txt  manifest.uuid  noodle.txt
      -    _FOSSIL_   manifest         nibcrod
      -
      - -The ls command is almost, but not quite entirely, the exact -opposite of the -[./cmd_extra.wiki | extra command]. - -Notes: - * If you come from the Windows world, it will help to know that 'ls' is the usual unix command for listing a directory. - * There are more states for a file to be in than those listed, including MISSING, EDITED, RENAMED and a couple of others. - -See also: [./cmd_add.wiki | fossil add], -[./rm.wiki | fossil rm], -[./cmd_extra.wiki | fossil extra], -[./cmd_commit.wiki | fossil commit], -[./concepts.wiki | Fossil concepts], -[./reference.wiki | Reference] DELETED www/cmd_mv.wiki Index: www/cmd_mv.wiki ================================================================== --- www/cmd_mv.wiki +++ www/cmd_mv.wiki @@ -1,24 +0,0 @@ -

      mv | rename

      - -The mv (alias "rename") command tells -fossil that a file has gone from one external name to another -without changing content. - -You could do this by renaming the file in the file system, -[./cmd_rm.wiki | deleting] the old name from the project, and -[./cmd_add.wiki | adding] the new name. But you would lose the -continuity of the content's history that way. Using -mv makes the name change a part of the history -maintained by fossil. You will, of course, need a good -comment somewhere (say, the commit comment) if you want to -remember why you changed the name... fossil -only maintains history, it doesn't (yet) explain it. - -mv is much like the [./cmd_rm.wiki | rm] -command, in that it manipulates fossil's "idea" of what is -part of the project. The difference is that mv assumes -you have actually made some change to the file system. - -See also: [./cmd_rm.wiki | fossil rm], -[./cmd_add.wiki | fossil add], -[./reference.wiki | Reference] DELETED www/cmd_new.wiki Index: www/cmd_new.wiki ================================================================== --- www/cmd_new.wiki +++ www/cmd_new.wiki @@ -1,33 +0,0 @@ -

      new

      - -The new command allows you to create a brand new -repository. - -Pragmatically, this means that an SQLite database is created with -whatever name you specified, and set up with the appropriate tables -and initial data. - -There's not much to new, it's what happens afterward that -gets a project going: - - Once you have a new repository file, you need to create and cd to a - directory in which you will store your files, or move into an - existing directory which contains the files for a project. - - Then, you need to [./cmd_open.wiki | open] the new - repository, and get the server running so you can set up the project - name and so forth. - - Finally, you'll [./cmd_add.wiki | add] files to it. If - you are adding exisiting files, you can add them individually, via - globbing from the shell, or by adding the directory (which will add - all of the directory's file-system descendants recursively.) - -But you can't do all that until you create a repository file with -new. - -See also: -[./cmd_open.wiki | fossil open], -[./cmd_add.wiki | fossil add], -[./cmd_server.wiki | fossil ui], -[./reference.wiki | Reference] DELETED www/cmd_rm.wiki Index: www/cmd_rm.wiki ================================================================== --- www/cmd_rm.wiki +++ www/cmd_rm.wiki @@ -1,40 +0,0 @@ -

      del | rm

      - -The del (alias rm) command takes a "file" -out of a project. - -It does not delete the file from the repository, it does -not remove the file from the file system on disk. It tells -fossil that the file is no longer a part of the project for -which fossil is maintaining the sources. - -For example, if you have a nice, clean source tree and use the -[./cmd_extra.wiki | extra] command on it, you won't -get any output. If you then rm some file and commit -the change, that file will be listed by the extra -command. - -The file is still on the disk, and it is still in the repository. -But the file is not part of the project -anymore. Further changes to the file will not be checked in unless -you [./cmd_add.wiki | add] the file again. - -It can initially be confusing to see a file that's been "deleted" -still showing up in the files list in the repository, but remember -that the files list currently* shows -all of the files that have ever been in the repository because -fossil is a source control system and therefore keeps a record -of the history of a project. - -To get a list of the files only in the current version of the -project, use the [./cmd_ls.wiki | ls] command. - -The del command is the logical opposite of the -[./cmd_add.wiki | add] command, in its single-file-add -form. - -*version 7c281b629a on 20081220 - -See also: [./cmd_add.wiki | fossil add], -[./cmd_ls.wiki | fossil ls], -[./reference.wiki | Reference] DELETED www/cmd_status.wiki Index: www/cmd_status.wiki ================================================================== --- www/cmd_status.wiki +++ www/cmd_status.wiki @@ -1,53 +0,0 @@ -

      status

      - -The status command is informational, it doesn't do anything to -a checked-out project, but it tells you something about it. - -Running  fossil status  currently prefixes -the output of the [./cmd_changes.wiki | changes] command -with information about the repository and checkout. The information -is in the form of the [./concepts.wiki#aidex | Artifact ID]s of the -server code, the checkout, and the parent (of, I think the -checkout.) - -This is useful for getting an at-a-glance view of the state of your -project, especially in a situation where you need the artifact IDs. - -Here is what I get when I issue a status on my local -version of the fossil repository as I write this: - -
      -   $ fossil status
      -   repository:   /home/me/myclone.fossil
      -   local-root:   /home/me/fossil/
      -   server-code:  99d6c9cf3f262720579db177503812814d712fc7
      -   checkout:     a8c3a7ea9249281e0a1fb55fb31d2ad57844f848
      -   parent:       21cecd209f7201f17e8a784c0d8f735603d440ae
      -   EDITED   www/cmd_.wiki-template
      -   EDITED   www/cmd_add.wiki
      -   EDITED   www/cmd_all.wiki
      -   EDITED   www/cmd_extra.wiki
      -   EDITED   www/cmd_ls.wiki
      -   EDITED   www/cmd_update.wiki
      -   EDITED   www/index.wiki
      -   $
      -
      - -Once I actually make changes to the repository (say, a -[./cmd_commit.wiki | commit]) most of that will change—all -of those files showing as "EDITED" will be checked in and won't -show up, and the artifact IDs will reflect the new state of the -repository. - -If the only thing you want to see is which files in the checked-out -source tree have changed in some way, use the -[./cmd_changes.wiki | changes] command. - -If what you want is the files in the checked-out source tree which are -not part of the project, use the -[./cmd_extra.wiki | extra] command. - -See also: [./cmd_changes.wiki | fossil changes], -[./cmd_extra.wiki | fossil extra], -[./concepts.wiki | Fossil concepts], -[./reference.wiki | Reference] DELETED www/cmd_sync.wiki Index: www/cmd_sync.wiki ================================================================== --- www/cmd_sync.wiki +++ www/cmd_sync.wiki @@ -1,22 +0,0 @@ -

      sync

      - -The sync command [./cmd_pull.wiki | pull]s and -[./cmd_push.wiki | push]es repository changes simultaneously. - -This applies to repositories available via a URL, of course. If your -project is strictly local you can do all of the distributed stuff as -long as you are "serving" the repository via http in some fashion, but -it's probably pointless to do so. - -Assuming you aren't running fossil as a high-powered version of -[http://www.gnu.org/software/rcs | RCS], your use of sync -in your projects is up to you. fossil defaults to using a -[./cmd_setting.wiki | setting] of autosync -If you have cloned a repository you will automatically sync with the -original if you [./cmd_commit.wiki | commit] changes to your local -version unless you customize your configuration. - -See also: [./cmd_pull.wiki | fossil pull], -[./cmd_push.wiki | fossil push], -[./cmd_setting.wiki | fossil setting], -[./reference.wiki | Reference] DELETED www/cmd_update.wiki Index: www/cmd_update.wiki ================================================================== --- www/cmd_update.wiki +++ www/cmd_update.wiki @@ -1,33 +0,0 @@ -

      update

      - -What do you do if you have changes out on a repository and -you want them merged with your checkout? - -You use the update command. - -fossil can [./about_checkout.wiki | overwrite] any -changes you've made to your checkout, or it can -[./about_merge.wiki | merge] whatever changes have occurred -in the repo into your checkout. - -Update merges changes from the repository into your checkout. - -fossil uses a simple conflict resolution strategy for merges: -the latest change wins. - -Local intranet [./cmd_commit.wiki | commit]s -(by someone else) -or Net [./cmd_pull.wiki | pull]s from a server -will usually require a fossil update afterward. - -Local commits are likely to be made with -[./cmd_settings.wiki#autosync | automatic syncing] -set to "on", however, so if you don't use fossil for Net-wide -projects you may never have to use update. - -See also: [./cmd_pull.wiki | fossil pull], -[./cmd_commit.wiki | fossil commit], -[./cmd_settings.wiki#autosync | fossil setting] (autosync), -[./about_checkout.wiki | checkouts], -[./about_merge.wiki | merging], -[./reference.wiki | Reference] DELETED www/cmd_version.wiki Index: www/cmd_version.wiki ================================================================== --- www/cmd_version.wiki +++ www/cmd_version.wiki @@ -1,12 +0,0 @@ -

      version

      - -The version command is informational, it doesn't do -anything to a checked-out project, but it tells you something about -it. - -Issuing the version command will print out the short-form of the -artifact ID for the fossil executable. - -See also: [./cmd_status.wiki | fossil status], -[./cmd_info.wiki | fossil info], -[./reference.wiki | Reference] Index: www/concepts.wiki ================================================================== --- www/concepts.wiki +++ www/concepts.wiki @@ -11,11 +11,11 @@ There are many such systems in use today. Fossil strives to distinguish itself from the others by being extremely simple to setup and operate. This document is intended as a quick introduction to the concepts -behind fossil. +behind Fossil.

      2.0 Composition Of A Project

      A software project normally consists of a "source tree". @@ -22,35 +22,36 @@ A source tree is a hierarchy of files that are used to generate the end product. The source tree changes over time as the software grows and expands and as features are added and bugs are fixed. A snapshot of the source tree at any point in time is called a "version" or "revision" or a "baseline" of the product. -In fossil, we use the name "check-in". +In Fossil, we use the name "check-in". A "repository" is a database that contains copies of all historical check-ins for a project. Check-ins are normally stored in the repository in a highly space-efficient compressed format (delta encoding). But that is an implementation detail that you the user need not worry over. Think of the repository as a safe place where all your old check-ins are securely stored away and available for retrieval whenever you need them. -A repository in fossil is a single file on your disk. This file +A repository in Fossil is a single file on your disk. This file might be rather large (dozens or hundreds of megabytes for a large or long running project) but it is nevertheless just a file. You can move it around, rename it, write it out to a memory stick, or do anything else you normally do with files. -Each source tree that is controlled by fossil is associated with +Each source tree that is controlled by Fossil is associated with a single repository on the local disk drive. You can tie two or more source trees to a single repository if you want (though one tree per repository is the most common configuration.) So a single repository can be associated with many source trees, but each source tree is associated with only one repository. -Fossil source trees may not overlap. A fossil source tree is identified -by a file named "_FOSSIL_" in the root directory of the source tree. Every +Fossil source trees may not overlap. A Fossil source tree is identified +by a file named "_FOSSIL_" (or ".fslckout", but this article will always +use the name "_FOSSIL_") in the root directory of the source tree. Every file that is a sibling of _FOSSIL_ and every file in every subfolder is considered potentially a part of the source tree. The _FOSSIL_ file contains (among other things) the pathname of the repository with which the source tree is associated. On the other hand, the repository has no record of its source trees. So you are free to delete a source tree @@ -78,11 +79,11 @@

      2.1 Identification Of Artifacts

      A particular version of a particular file is called an "artifact". Each artifact has a universally unique name which is the -SHA1 hash of the content +SHA1 hash of the content of that file expressed as 40 characters of lower-case hexadecimal. Such a hash is referred to as the Artifact Identifier or Artifact ID for the artifact. The SHA1 algorithm is created with the purpose of providing a highly forgery-resistant identifier for a file. Given any file it is simple to find the artifact ID for that file. But given a @@ -97,13 +98,13 @@ 19dbf73078be9779edd6a0156195e610f81c94f9
      b4104959a67175f02d6b415480be22a239f1f077
      997c9d6ae03ad114b2b57f04e9eeef17dcb82788
      -When referring to an artifact using fossil, you can use a unique +When referring to an artifact using Fossil, you can use a unique prefix of the artifact ID that is four characters or longer. This saves -a lot of typing. When displaying artifact IDs, fossil will usually only +a lot of typing. When displaying artifact IDs, Fossil will usually only show the first 10 digits since that is normally enough to uniquely identify a file. Changing (or adding or removing) a single byte in a file results in a completely different artifact ID. And since the artifact ID is the name of @@ -122,19 +123,28 @@ artifacts and reconstruct the complete development history of a software project.
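      As an illustrative sketch of the prefix shorthand described above (and assuming the prefix is unique within the repository), one of the example artifact IDs could be passed to Fossil in abbreviated form:

          fossil info 19dbf730

      Any unique prefix of four or more characters is accepted wherever an artifact ID is expected.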

      2.2 Manifests

      -At the root of a source tree is a special file called the -"manifest". The manifest is a listing of all other files in +Associated with every check-in is a special file called the +[./fileformat.wiki#manifest| "manifest"]. The manifest is a +listing of all other files in that source tree. The manifest contains the (complete) artifact ID of the file and the name of the file as it appears on disk, and thus serves as a mapping from artifact ID to disk name. The artifact ID of the manifest is the identifier for the entire check-in. When -you look at a "timeline" of changes in fossil, the ID associated +you look at a "timeline" of changes in Fossil, the ID associated with each check-in or commit is really just the artifact ID of the manifest for that check-in. + +

      The manifest file is not normally a real file on disk. Instead, +the manifest is computed in memory by Fossil whenever it needs it. +However, the "fossil setting manifest on" command will cause the +manifest file to be materialized to disk, if desired. Both Fossil +itself and SQLite cause the manifest file to be materialized to disk +so that the makefiles for these projects can read the manifest and +embed version information in generated binaries.

      Fossil automatically generates a manifest whenever you "commit" a new check-in. So this is not something that you, the developer, need to worry with. The format of a manifest is intentionally designed to be simple to parse, so that if @@ -159,59 +169,63 @@

    • A repository keeps a record of historical check-ins.
    • Repositories share their changes using push, pull, sync, and clone.
    • A particular version of a particular file is an artifact that is identified by an artifact ID.
    • -
    • Artifacts tracked by fossil are inherently immutable.
    • +
    • Artifacts tracked by Fossil are inherently immutable.
    • Fossil automatically generates a manifest file that identifies every artifact in a check-in.
    • The artifact ID of the manifest is the identifier of the check-in.
    • 3.0 Fossil - The Program

      -Fossil is software. The implementation of fossil is in the form -of a single executable named "fossil" (or "fossil.exe" on windows). -To install fossil on your system, +Fossil is software. The implementation of Fossil is in the form +of a single executable named "fossil" (or "fossil.exe" on Windows). +To install Fossil on your system, all you have to do is obtain a copy of this one executable file (either by downloading a pre-compiled version or [./build.wiki | compiling it yourself]) and then putting that file somewhere on your PATH. Fossil is completely self-contained. It is not necessary to -install any other software in order to use fossil. You do not need +install any other software in order to use Fossil. You do not need CVS, gzip, diff, rsync, Python, Perl, Tcl, Java, apache, PostgreSQL, MySQL, SQLite, patch, or any similar software on your system in order to use -fossil effectively. You will want to have some kind of text editor +Fossil effectively. You will want to have some kind of text editor for entering check-in comments. Fossil will use whatever text editor is identified by your VISUAL environment variable. Fossil will also use GPG to clearsign your manifests if you happen to have it installed, -but fossil will skip that step if GPG missing from your system. -You can optionally set up fossil to use external "diff" programs, -though fossil has an excellent built-in "diff" algorithm that works -fine for most people. - -To uninstall fossil, simply delete the executable. - -To upgrade an older version of fossil to a newer version, just +but Fossil will skip that step if GPG missing from your system. +You can optionally set up Fossil to use external "diff" programs, +though Fossil has an excellent built-in "diff" algorithm that works +fine for most people. If you happen to have Tcl/Tk installed on your +system, Fossil will use it to generate a graphical "diff" display when +you use the --tk option to the "diff" command, but this too is entirely +optional. + + +To uninstall Fossil, simply delete the executable. + +To upgrade an older version of Fossil to a newer version, just replace the old executable with the new one. You might need to run "fossil all rebuild" to restructure your repositories after an upgrade. Running "all rebuild" never hurts, so when upgrading it is a good policy to run it even if it is not strictly necessary. -To use fossil, simply type the name of the executable in your +To use Fossil, simply type the name of the executable in your shell, followed by one of the various built-in commands and arguments appropriate for that command. For example:
      fossil help
      In the next section, when we say things like "use the help command" we mean to use the command name "help" as the first -token after the name of the fossil executable, as shown above. +token after the name of the Fossil executable, as shown above.

      4.0 Workflow

      @@ -221,24 +235,24 @@ Autosync mode is reminiscent of CVS or SVN in that it automatically keeps your changes in synchronization with your co-workers through the use of a central server. The manual-merge mode is the standard workflow for GIT or Mercurial in that your local repository develops independently of your coworkers and you share and merge your changes manually. -An interesting feature of fossil is that it supports both autosync +An interesting feature of Fossil is that it supports both autosync and manual-merge work flows. -The default setting for fossil is to be in autosync mode. You +The default setting for Fossil is to be in autosync mode. You can change the autosync setting or check the current autosync setting using commands like:
      fossil setting autosync on
      fossil setting autosync off
      fossil settings
      -By default, fossil runs with autosync mode turned on. The +By default, Fossil runs with autosync mode turned on. The authors finds that projects run more smoothly in autosync mode since autosync helps to prevent pointless forking and merge and helps keeps all collaborators working on exactly the same code rather than on their own personal forks of the code. In the author's view, manual-merge mode should be reserved for disconnected operation. @@ -276,11 +290,11 @@
    • Create a new check-in using the commit command. You will be prompted for a check-in comment and also for your GPG key if you have GPG installed. The commit copies the edits you have made in your local source -tree into your local repository. After your commit completes, fossil will +tree into your local repository. After your commit completes, Fossil will automatically push your changes back to the server you cloned from or whatever server you most recently synced with.
    • @@ -359,11 +373,11 @@ local repository.
    • Once changes are in your local repository, use -use the update command to merge them to your local source tree. +the update command to merge them to your local source tree. If you merge in some changes and find that the changes do not work out or are not to your liking, you can back out the changes using the undo command.
    • @@ -381,75 +395,32 @@

      5.0 Setting Up A Fossil Server

      With other configuration management software, setting up a server is a lot of work and normally takes time, patience, and a lot of system knowledge. Fossil is designed to avoid this frustration. Setting up -a server with fossil is ridiculously easy. You have three options:

      +a server with Fossil is ridiculously easy. You have four options:

        -
      1. Setting up a stand-alone server - -From within your source tree just use the server command and -fossil will start listening for incoming requests on TCP port 8080. -You can point your web browser at -http://localhost:8080/ and begin exploring. Or your coworkers -can do pushes or pulls against your server. Use the --port -option to the server command to specify a different TCP port. If -you do not have a local source tree, use the -R command-line -option to specify the repository file. - -A stand-alone server is a great way to set of transient connections -between coworkers for doing quick pushes or pulls. But you can also -set up a permanent stand-alone server if you prefer. Just make -arrangements for fossil to be launched with appropriate arguments -after every reboot. - -If you just want a server to browse the built-in fossil website -locally, use the ui command in place of server. The -ui command starts up a local server too, but it also takes -the additional step of automatically launching your webbrowser and -pointing at the new server. -
      2. - -
      3. Setting up a CGI server - -If you have a web-server running on your machine already, you can -set up fossil to be run from CGI. Simply create an executable script -that looks something like this: - -
        -#!/usr/local/bin/fossil
        -repository: /home/me/bigproject.fossil
        -
        - -Edit this script to use whatever pathnames are appropriate for -your project. Then point your web browser at the script and off you -go. The [./selfhost.wiki | self-hosting fossil repositories] are -all set up this way.
      4. - -
      5. Setting up an inetd server - -If you have inetd or xinetd running on your system, you can set -those services up to launch fossil to deal with inbound TCP/IP connections -on whatever port you want. Set up inetd or xinetd to launch fossil -like this: - -
        -/usr/local/bin/fossil http /home/me/bigproject.fossil
        -
        - -As before, change the filenames to whatever is appropriate for -your system. You can have fossil run as any user that has write -permission on the repository and on the directory that contains the -repository. But it is safer to run fossil as root. When fossil -sees that it is running as root, it automatically puts itself into -a chroot jail and -drops all privileges prior to reading any information from the client. -Since fossil is a stand-alone program, you do not need to put anything -in the chroot jail with fossil in order for it to do its job. -
      6. -
      +
    • Stand-alone server. +Simply run the [/help?cmd=server|fossil server] or +[/help?cmd=ui|fossil ui] command from the command-line. + +

    • CGI. +Install a 2-line CGI script on a CGI-enabled web-server like Apache. + +

    • SCGI. +Start an SCGI server using the +[/help?cmd=server| fossil server --scgi] command for handling +SCGI requests from web-servers like Nginx. + +

    • Inetd or Stunnel. +Configure programs like inetd, xinetd, or stunnel to hand off HTTP requests +directly to the [/help?cmd=http|fossil http] command. + + +See the [./server.wiki | How To Configure A Fossil Server] document +for details.
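      As a minimal sketch of the CGI option (the pathnames are placeholders and should be adjusted for your system), the 2-line CGI script looks like this:

          #!/usr/local/bin/fossil
          repository: /home/me/bigproject.fossil

      Make the script executable and point your web-server (and browser) at it; Fossil handles the rest.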

      6.0 Review Of Key Concepts

      • The fossil program is a self-contained stand-alone executable. ADDED www/contribute.wiki Index: www/contribute.wiki ================================================================== --- www/contribute.wiki +++ www/contribute.wiki @@ -0,0 +1,83 @@ +Contributing To Fossil + +Users are encouraged to contribute enhancements back to the Fossil +project. This note outlines some of the procedures for making +useful contributions. + +

        1.0 Contributor Agreement

        + +In order to accept your contributions, we must have a +[./copyright-release.pdf | Contributor Agreement (PDF)] +(or [./copyright-release.html | as HTML]) on file for you. We require +this in order to maintain clear title to the Fossil code and prevent +the introduction of code with incompatible licenses or other entanglements +that might cause legal problems for Fossil users. Many larger companies +and other lawyer-rich organizations require this as a precondition to using +Fossil. + +If you do not wish to submit a Contributor Agreement, we would still +welcome your suggestions and example code, but we will not use your code +directly - we will be forced to re-implement your changes from scratch which +might take longer. + +

        2.0 Submitting Patches

        + +Suggested changes or bug fixes can be submitted by creating a patch +against the current source tree. Email patches to +drh@sqlite.org. Be sure to +describe in detail what the patch does and which version of Fossil +it is written against. + +A contributor agreement is not strictly necessary to submit a patch. +However, without a contributor agreement on file, your patch will be +used for reference only - it will not be applied to the code. This +may delay acceptance of your patch. + +Your patches or changes might not be accepted even if you do have +a contributor agreement on file. Please do not take this personally +or as an affront to your coding ability. Sometimes patches are rejected +because they seem to be taking the project in a direction that the +architect does not want to go. Or, there might be an alternative +implementation of the same feature being prepared separately. + +
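        As a rough sketch (the file name is arbitrary), a patch against the current source tree can be produced from within an open check-out of the Fossil sources with:

            fossil diff > my-change.patch

        The "checkout:" line in the output of "fossil status" identifies the baseline the patch was made against, which is worth mentioning in your email.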

        3.0 Check-in Privileges

        + +Check-in privileges are granted on a case-by-case basis. Your chances +of getting check-in privileges are much improved if you have a history +of submitting quality patches and/or making thoughtful posts on the +[http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/ | mailing list]. +A contributor agreement is, of course, a prerequisite for check-in +privileges.

        + +Contributors are asked to make all non-trivial changes on a branch. The +Fossil Architect (Richard Hipp) will merge changes onto the trunk.
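        A sketch of the usual way to put such work on its own branch (the branch name is hypothetical):

            fossil commit --branch my-feature -m "Describe the change here."

        This creates the branch at commit time and leaves the trunk untouched until the change is reviewed and merged.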

        + +Contributors are required to follow the +[./checkin.wiki | pre-checkin checklist] prior to every check-in to +the Fossil self-hosting repository. This checklist is short and succinct +and should only require a few seconds to follow. Contributors +should print out a copy of the pre-checkin checklist and keep +it on a notecard beside their workstations, for quick reference. + +Contributors should review the +[./style.wiki | Coding Style Guidelines] and mimic the coding style +used throughout the rest of the Fossil source code. Your code should +blend in. A third-party reader should be unable to distinguish your +code from any other code in the source corpus. + +

        4.0 Testing

        + +Fossil has the beginnings of a +[../test/release-checklist.wiki | release checklist] but this is an +area that needs further work. (Your contributions here are welcomed!) +Contributors with check-in privileges are expected to run the release +checklist on any major changes they contribute, and if appropriate expand +the checklist and/or the automated test scripts to cover their additions. + + +

        5.0 See Also

        + + * [./build.wiki | How To Compile And Install Fossil] + * [./makefile.wiki | The Fossil Build Process] + * [./tech_overview.wiki | A Technical Overview of Fossil] + * [./adding_code.wiki | Adding Features To Fossil] Index: www/copyright-release.html ================================================================== --- www/copyright-release.html +++ www/copyright-release.html @@ -1,63 +1,108 @@

        -Assignment of Copyright for
        -Contributions To The Fossil SCM +Fossil SCM Contributor Agreement

        -This Agreement is made between Hipp, Wyrick & Company, Inc., a -Georgia corporation with headquarters at 6200 Maple Cove Lane, Charlotte, NC, -(hereafter "Hwaci") -and NAME AND ADDRESS OF PROGRAMMER (hereafter -"Programmer"). -

        - -

        For valuable consideration, receipt and sufficiency of which are hereby -acknowledged, Programmer and Hwaci agree as follows: -

        +This agreement applies to your contribution of material to the +Fossil Software Configuration Management System ("Fossil") that is +managed by Hipp, Wyrick & Company, Inc. ("Hwaci") and +sets out the intellectual property rights you grant to Hwaci in the +contributed material. +The terms "contribution" and "contributed material" mean any source code, +object code, patch, tool, sample, graphic, specification, manual, +documentation, or any other material posted, submitted, or uploaded by +you to the Fossil project. + The term "you" means the person identified +and signing at the bottom of this document. If your contribution +is on behalf of a company, the term "you" also means the company +identified in the signature area below.
        1. -Programmer does hereby sell, assign, and transfer to Hwaci, its successors -and assigns, the entire right, title and interest in and to the copyright -in Fossil Software Configuration Management System (hereafter "Fossil") -or any portions thereof whether created before or after this agreement -and any registrations and copyright applications relating thereto and any -renewals and extensions thereof, and in and to all works based upon, -derived from, or incorporating Fossil, and in and to all income, -royalties, damages, claims and payments now or hereafter due or payable -with respect thereto, and in and to all causes of action, either in law -or in equity for past, present, or future infringement based on the -copyrights, and in and to all rights corresponding to the foregoing -throughout the world. +With respect to any worldwide copyrights, or copyright applications and +registrations, in your contribution: +

            +
          • You hereby assign to Hwaci joint ownership, and to the extent that + such assignment is or becomes invalid, ineffective or unenforceable, + you hereby grant to Hwaci a perpetual, irrevocable, non-exclusive, + worldwide, no-charge, royalty-free, unrestricted license to exercise + all rights under those copyrights, including the right to sublicense. +
          • You agree that both you and Hwaci can do all things in relation to your + contribution as if each of us were the sole owners, and if one of us + makes a derivative work of your contribution, the one who makes + (or has made) the derivative work will be the sole owner of that + derivative work. +
          • You agree that you will not assert any moral rights in your + contribution against Hwaci, Hwaci's licensees or transferees, or + any other user or consumer of your contribution. +
          • You agree that Hwaci may register a copyright in your contribution and + exercise all ownership rights associated with it. +
          • You agree that neither you nor Hwaci has any duty to consult with, + obtain the consent of, or pay or render an accounting to the other + for any use or distribution of your contribution. +
          + +
        2. +With respect to any patents you own, or that you can license without payment +to any third party, and which are relevant to your contribution, you hereby +grant to Hwaci a perpetual, irrevocable, non-exclusive, worldwide, no-charge, +royalty-free license to +make, have made, use, sell, offer to sell, import, and otherwise +transfer your contribution in whole or in part, alone or in +combination with or included in any product, work or materials arising +out of the Fossil project, and to sublicense these same rights. + +

        3. +Except as set out above, you keep all right, title, and interest in your +contribution. The rights that you grant to Hwaci under this agreement +are effective on the date that you first submitted your contribution to the +Fossil project, even if your submission took place before the date that +you sign this agreement.

        4. -Programmer hereby represents and -warrants to Licensee that Programmer is the owner of those portions of -the Fossil software developed and produced by Programmer or -otherwise has the right to grant to Hwaci the rights set forth in -this Agreement. +You represent and warrant the following: +

            +
          • Your contribution is an original work and that you can legally + grant the rights set out in this agreement. +
          • Your contribution does not, to the best of your knowledge and belief, + violate any third party's copyrights, trademarks, patents, + or other intellectual property rights. +
          • You are authorized to sign this agreement on behalf of your + company (if applicable). +
        -

        -In witness whereof, the parties have executed this Agreement, -effective this YYYY-MM-DD - -

        - -
        -HWACI
        -By:
        -

         

        -Print Name:
        -

         

        -Title:
        -

         

        -
        -PROGRAMMER
        -By:
        -

         

        -Print Name:
        -

         

        -Title:
        -

         

        -
        +

        By filling in the following information and signing your name, +you agree to be bound by all of the terms +set forth in this agreement. Please print clearly.

        + +
        +

        + + + + + +
        Your name & email: + +  

          + +

        Company name:
        (if applicable)
        + +  

          + + 

        Postal address: + + +

         

         

         

        + +
        Signature: 

         

        Date: 

         

        +

        + +

        Send completed forms to: +

        +Hipp, Wyrick & Company, Inc.
        +6200 Maple Cove Lane
        +Charlotte, NC 28269-1086
        +USA +

        ADDED www/copyright-release.pdf Index: www/copyright-release.pdf ================================================================== --- www/copyright-release.pdf +++ www/copyright-release.pdf cannot compute difference between binary files Index: www/custom_ticket.wiki ================================================================== --- www/custom_ticket.wiki +++ www/custom_ticket.wiki @@ -1,7 +1,8 @@ +Customizing The Ticket System -

        Customizing The Ticket System

        +

        Introduction

        This guide will explain how to add the "assigned_to" and "submitted_by" fields to the ticket system in Fossil, as well as making the system more useful. You must have "admin" access to the repository to implement these instructions.

        ADDED www/customgraph.md Index: www/customgraph.md ================================================================== --- www/customgraph.md +++ www/customgraph.md @@ -0,0 +1,228 @@ +# Customizing the Timeline Graph + +Beginning with version 1.33, Fossil gives users and skin authors significantly +more control over the look and feel of the timeline graph. + +## Basic Style Options + +Fossil includes several options for changing the graph's style without having +to delve into CSS. These can be found in the details.txt file of your skin or +under Admin/Skins/Details in the web UI. + +* ###`timeline-arrowheads` + + Set this to `0` to hide arrowheads on primary child lines. + +* ###`timeline-circle-nodes` + + Set this to `1` to make check-in nodes circular instead of square. + +* ###`timeline-color-graph-lines` + + Set this to `1` to colorize primary child lines. + +* ###`white-foreground` + + Set this to `1` if your skin uses white (or any light color) text. + This tells Fossil to generate darker background colors for branches. + + +## Advanced Styling + +If the above options aren't enough for you, it's time to get your hands dirty +with CSS. To get started, I recommend first copying all the [graph-related CSS +rules](#default-css) to your stylesheet. Then it's simply a matter of making +the necessary changes to achieve the look you want. So, next, let's look at the +various graph elements and what purpose they serve. + +Each element used to construct the timeline graph falls into one of two +categories: visible elements and positioning elements. We'll start with the +latter, less obvious type. + +## Positioning Elements + +These elements aren't intended to be seen. They're only used to help position +the graph and its visible elements. + +* ###`.tl-canvas` + + Set the left and right margins on this class to give the desired amount + of space between the graph and its adjacent columns in the timeline. + + #### Additional Classes + + * `.sel`: See [`.tl-node`](#tl-node) for more information. + +* ###`.tl-rail` + + Think of rails as invisible vertical lines on which check-in nodes are + placed. The more simultaneous branches in a graph, the more rails required + to draw it. Setting the `width` property on this class determines the + maximum spacing between rails. This spacing is automatically reduced as + the number of rails increases. If you change the `width` of `.tl-node` + elements, you'll probably need to change this value, too. + +* ###`.tl-mergeoffset` + + A merge line often runs vertically right beside a primary child line. This + class's `width` property specifies the maximum spacing between the two. + Setting this value to `0` will eliminate the vertical merge lines. + Instead, the merge arrow will extend directly off the primary child line. + As with rail spacing, this is also adjusted automatically as needed. + +* ###`.tl-nodemark` + + In the timeline table, the second cell in each check-in row contains an + invisible div with this class. These divs are used to determine the + vertical position of the nodes. By setting the `margin-top` property, + you can adjust this position. + +## Visible Elements + +These are the elements you can actually see on the timeline graph: the nodes, +arrows, and lines. Each of these elements may also have additional classes +attached to them, depending on their context. + +* ###`.tl-node` + + A node exists for each check-in in the timeline. + + #### Additional Classes + + * `.leaf`: Specifies that the check-in is a leaf (i.e. 
that it has no + children in the same branch). + + * `.merge`: Specifies that the check-in contains a merge. + + * `.sel`: When the user clicks a node to designate it as the beginning + of a diff, this class is added to both the node itself and the + [`.tl-canvas`](#tl-canvas) element. The class is removed from both + elements when the node is clicked again. + +* ###`.tl-arrow` + + Arrows point from parent nodes to their children. Technically, this + class is just for the arrowhead. The rest of the arrow is composed + of [`.tl-line`](#tl-line) elements. + + There are six additional classes that are used to distinguish the different + types of arrows. However, only these combinations are valid: + + * `.u`: Up arrow that points to a child from its primary parent. + + * `.u.sm`: Smaller up arrow, used when there is limited space between + parent and child nodes. + + * `.merge.l` or `.merge.r`: Merge arrow pointing either to the left or + right. + + * `.warp`: A timewarped arrow (always points to the right), used when a + misconfigured clock makes a check-in appear to have occurred before its + parent ([example](https://www.sqlite.org/src/timeline?c=2010-09-29&nd)). + +* ###`.tl-line` + + Along with arrows, lines connect parent and child nodes. Line thickness is + determined by the `width` property, regardless of whether the line is + horizontal or vertical. You can also use borders to create special line + styles. Here's a CSS snippet for making dotted merge lines: + + .tl-line.merge { + width: 0; + background: transparent; + border: 0 dotted #000; + } + .tl-line.merge.h { + border-top-width: 1px; + } + .tl-line.merge.v { + border-left-width: 1px; + } + + #### Additional Classes + + * `.merge`: A merge line. + + * `.h` or `.v`: Horizontal or vertical. + + * `.warp`: A timewarped line. + + +## Default Timeline Graph CSS + + .tl-canvas { + margin: 0 6px 0 10px; + } + .tl-rail { + width: 18px; + } + .tl-mergeoffset { + width: 2px; + } + .tl-nodemark { + margin-top: 5px; + } + .tl-node { + width: 10px; + height: 10px; + border: 1px solid #000; + background: #fff; + cursor: pointer; + } + .tl-node.leaf:after { + content: ''; + position: absolute; + top: 3px; + left: 3px; + width: 4px; + height: 4px; + background: #000; + } + .tl-node.sel:after { + content: ''; + position: absolute; + top: 2px; + left: 2px; + width: 6px; + height: 6px; + background: red; + } + .tl-arrow { + width: 0; + height: 0; + transform: scale(.999); + border: 0 solid transparent; + } + .tl-arrow.u { + margin-top: -1px; + border-width: 0 3px; + border-bottom: 7px solid #000; + } + .tl-arrow.u.sm { + border-bottom: 5px solid #000; + } + .tl-line { + background: #000; + width: 2px; + } + .tl-arrow.merge { + height: 1px; + border-width: 2px 0; + } + .tl-arrow.merge.l { + border-right: 3px solid #000; + } + .tl-arrow.merge.r { + border-left: 3px solid #000; + } + .tl-line.merge { + width: 1px; + } + .tl-arrow.warp { + margin-left: 1px; + border-width: 3px 0; + border-left: 7px solid #600000; + } + .tl-line.warp { + background: #600000; + } ADDED www/customskin.md Index: www/customskin.md ================================================================== --- www/customskin.md +++ www/customskin.md @@ -0,0 +1,241 @@ +Theming +======= + +Every HTML page generated by Fossil has the following basic structure: + + +
        + + + +
        Header
        +Fossil-Generated Content
        Footer
        + +The header and footer control the "look" of Fossil pages. Those +two sections can be customized separately for each repository to +develop a new theme. + +The header will normally look something like this: + + + ... + + ... top banner and menu bar ... +
        + +And the footer will look something like this: + +
+ ... bottom material ... + + + +The <head> element in the header will normally reference the +/style.css CSS file that Fossil stores internally. (The $stylesheet_url +TH1 variable, described below, is useful for accomplishing this.)

+ +The middle "content" section comprises the bulk of most pages and +contains the actual Fossil-generated data +that the user is interested in seeing. The text of this content +section is not normally configurable. The content text can be styled +using CSS, but is otherwise fixed. Hence it is the header and footer +and the CSS that determine the look of a repository. +We call the bundle of built-in CSS, header, and footer a "skin".

+ +Built-in Skins +-------------- + +Fossil comes with several built-in skins. The sources to these built-ins can +be found in the Fossil source tree under the skins/ folder. The skins/ +folder contains a separate subfolder for each built-in skin, with each +subfolder holding four files, "css.txt", "details.txt", +"footer.txt", and "header.txt", +that describe the CSS, rendering options, +footer, and header for that skin, respectively.

+ +The skin of a repository can be changed to any of the built-in skins using +the web interface by going to the /setup_skin web page (requires Admin +privileges) and clicking the appropriate button. Or, the --skin command +line option can be used for the +[fossil ui](../../../help?cmd=ui) or +[fossil server](../../../help?cmd=server) commands to force that particular +instance of Fossil to use the specified built-in skin.

+ +Sharing Skins +------------- + +The skin of a repository is not part of the versioned state and does not +"push" or "pull" like checked-in files. The skin is local to the +repository. However, skins can be shared between repositories using +the [fossil config](../../../help?cmd=configuration) command. +The "fossil config push skin" command will send the local skin to a remote +repository and the "fossil config pull skin" command will import a skin +from a remote repository. The "fossil config export skin FILENAME" +will export the skin for a repository into a file FILENAME. This file +can then be imported into a different repository using the +"fossil config import FILENAME" command. Unlike "push" and "pull", +the "export" and "import" commands are able to move skins between +repositories for different projects. So, for example, if you have a +group of related repositories, you can develop a skin for one of them, +then get a consistent look across all the repositories by exporting +the skin from the first repository and importing it into all the others.

+ +The file generated by "fossil config export" could be checked into +one of your repositories and versioned, if desired. This will not +automatically change the skin when looking backwards in time, but it +will provide an historical record of what the skin used to be and +allow the historical look of the repositories to be recreated if +necessary.

+ +When cloning a repository, the skin of the new repository is initialized to +the skin of the repository from which it was cloned.

+ +Header And Footer Processing +---------------------------- + +The header.txt and footer.txt files of a skin are merely the HTML text +of the header and footer. Except, before being prepended and appended to +the content, the header and footer text are run through a +[TH1 interpreter](./th1.md) that might adjust the text as follows: + + * All text within <th1>...</th1> is elided from the + output and that text is instead run as a TH1 script.
That TH1 + script has the opportunity to insert new text in place of itself, + or to inhibit or enable the output of subsequent text. + + * Text for the form "$NAME" or "$<NAME>" is replace with + the value of the TH1 variable NAME. + +For example, the following is the first few lines of a typical +header file: + + + + + $<project_name>: $<title> + + + + +After variables are substituted by TH1, the final header text +delivered to the web browser might look something like this: + + + + + Fossil: Timeline + + + + +The same TH1 interpreter is used for both the header and the footer +and for all scripts contained within them both. Hence, any global +TH1 variables that are set by the header are available to the footer. + +TH1 Variables +------------- + +Before expanding the TH1 within the header and footer, Fossil first +initializes a number of TH1 variables to values that depend on +respository settings and the specific page being generated. + + * **project_name** - The project_name variable is filled with the + name of the project as configured under the Admin/Configuration + menu. + + * **title** - The title variable holds the title of the page being + generated. + + The title variable is special in that it is deleted after + the header script runs and before the footer script. This is + necessary to avoid a conflict with a variable by the same name used + in my ticket-screen scripts. + + * **baseurl** - The root of the URL namespace for this server. + + * **secureurl** - The same as $baseurl except that if the scheme is + "http:" it is changed to "https:" + + * **home** - The $baseurl without the scheme and hostname. For example, + if the $baseurl is "http://projectX.com/cgi-bin/fossil" then the + $home will be just "/cgi-bin/fossil". + + * **index_page** - The landing page URI as + specified by the Admin/Configuration setup page. + + * **current_page** - The name of the page currently being processed, + without the leading "/" and without query parameters. + Examples: "timeline", "doc/trunk/README.txt", "wiki". + + * **csrf_token** - A token used to prevent cross-site request forgery. + + * **release_version** - The release version of Fossil. Ex: "1.31" + + * **manifest_version** - A prefix on the SHA1 check-in hash of the + specific version of fossil that is running. Ex: "\[47bb6432a1\]" + + * **manifest_date** - The date of the source-code check-in for the + version of fossil that is running. + + * **compiler_name** - The name and version of the compiler used to + build the fossil executable. + + * **login** - This variable only exists if the user has logged in. + The value is the username of the user. + + * **stylesheet_url** - A URL for the internal style-sheet maintained + by Fossil. + + * **log\_image\_url** - A URL for the logo image for this project, as + configured on the Admin/Logo page. + + * **background\_image\_url** - A URL for a background image for this + project, as configured on the Admin/Logo page. + +All of the above are variables in the sense that either the header or the +footer is free to change or erase them. But they should probably be treated +as constants. New predefined values are likely to be added in future +releases of Fossil. + +Suggested Skin Customization Procedure +-------------------------------------- + +Developers are free, of course, to develop new skins using any method they +want, but the following is a technique that has worked well in the past and +can serve as a starting point for future work: + + 1. 
Select a built-in skin that is closest to the desired look. Make + copies of the css, details, footer, and header into files named "css.txt", + "details.txt", + "footer.txt", and "header.txt" in some temporary directory. + + If the Fossil source code is available, then these four files can + be copied directly out of one of the subdirectories under skins. If + sources are not easily at hand, then a copy/paste out of the + CSS, details, footer, and header editing screens under the Admin menu will + work just as well. The important point is that the four files + be named exactly "css.txt", "details.txt", "footer.txt", and "header.txt" and that + they all be in the same directory. + + 2. Run the [fossil ui](../../../help?cmd=ui) command with an extra + option "--skin SKINDIR" where SKINDIR is the name of the directory + in which the four txt files were stored in step 1. This will bring + up the Fossil website using the files in SKINDIR. + + 3. Edit the four txt files in SKINDIR. After making each small change, + press Reload on the web browser to see the effect of that change. + Iterate until the desired look is achieved. + + 4. Copy/paste the resulting css.txt, details.txt, + header.txt, and footer.txt files + into the CSS, details, header, and footer configuration screens + under the Admin/Skins menu. + +See Also +-------- + +* [Customizing the Timeline Graph](customgraph.md) Index: www/delta_encoder_algorithm.wiki ================================================================== --- www/delta_encoder_algorithm.wiki +++ www/delta_encoder_algorithm.wiki @@ -1,9 +1,8 @@ +Fossil Delta Encoding Algorithm -

        -Fossil Delta Encoding Algorithm -

        +

        Abstract

        A key component for the efficient storage of multiple revisions of a file in fossil repositories is the use of delta-compression, i.e. to store only the changes between revisions instead of the whole file.

        Index: www/delta_format.wiki ================================================================== --- www/delta_format.wiki +++ www/delta_format.wiki @@ -1,9 +1,8 @@ +Fossil Delta Format -

        -Fossil Delta Format -

        +

        Abstract

        Fossil achieves efficient storage and low-bandwidth synchronization through the use of delta-compression. Instead of storing or transmitting the complete content of an artifact, fossil stores or transmits only the changes relative to a related artifact. @@ -67,11 +66,11 @@

        1.3 Segment-List

        The segment-list of a delta describes how to create the target from the original by a combination of inserting literal byte-sequences and -copying ranges of bytes from the original. This is there the +copying ranges of bytes from the original. This is where the compression takes place, by encoding the large common parts of original and target in small copy instructions.

        The target is constructed from beginning to end, with the data generated by each instruction appended after the data of all previous @@ -156,11 +155,11 @@   5x@Kt, Copy 380 @ 1336   6:pieces Literal 6 'pieces'   79@Qt, Copy 457 @ 1720   F: Example: eskilLiteral 15 ' Example: eskil'   ~E@Y0, Copy 4046 @ 2176 -Trailer2zMM3E Ckecksum -1101438770 +Trailer2zMM3E Checksum -1101438770

        The unified diff behind the above delta is

        @@ -214,11 +213,11 @@
         
        • Pure text files generate a pure text delta.
        • Binary files generate a delta that may contain some binary data.
        • -
        • The delta encoding does not attempt to compress the content +
        • The delta encoding does not attempt to compress the content. It was considered to be much more sensible to do compression using a separate general-purpose compression library, like zlib.
        Index: www/embeddeddoc.wiki ================================================================== --- www/embeddeddoc.wiki +++ www/embeddeddoc.wiki @@ -1,7 +1,7 @@ -Managing Project Documentation -

        Managing Project Documentation

        +Project Documentation +

        Project Documentation

        Fossil provides a built-in wiki that can be used to store the documentation for a project. This is sufficient for many projects. If your project is well-served by wiki documentation, then you @@ -63,31 +63,50 @@ Finally, the <filename> element of the URL is the pathname of the documentation file relative to the root of the source tree. The mimetype (and thus the rendering) of documentation files is -determined by the file suffix. Fossil currently understands 192 -different file suffixes, including all the popular ones such as -".css", ".gif", ".htm", ".html", ".jpg", ".jpeg", ".png", and ".txt". +determined by the file suffix. Fossil currently understands +[/mimetype_list|many different file suffixes], +including all the popular ones such as ".css", ".gif", ".htm", +".html", ".jpg", ".jpeg", ".png", and ".txt". Documentation files whose names end in ".wiki" use the [/wiki_rules | same markup as wiki pages] - a safe subset of HTML together with some wiki rules for paragraph -breaks, lists, and hyperlinks. The ".wiki" and ".txt" pages +breaks, lists, and hyperlinks. +Documentation files ending in ".md" or ".markdown" use the +[/md_rules | Markdown markup langauge]. +Documentation files ending in ".txt" are plain text. +Wiki, markdown, and plain text documentation files are rendered with the standard fossil header and footer added. -All other mimetypes are delivered directly to the requesting +Most other mimetypes are delivered directly to the requesting web browser without interpretation, additions, or changes. + +Files with the mimetype "text/html" (the .html or .htm suffix) are +usually rendered directly to the browser without interpretation. +However, if the file begins with a <div> element like this: + + <div class='fossil-doc' data-title='Title Text'> + +Then the standard Fossil header and footer are added to the document +prior to being displayed. The "class='fossil-doc'" attribute is +required for this to occur. The "data-title='...'" attribute is +optional, but if it is present the text will become the title displayed +in the Fossil header. An example of this can be seen in the text +of the [/artifact/84b4b3d041d93a?txt=1 | Index Of Fossil Documentation] +document.
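As an illustration only (the file name and page title below are placeholders, not part of the Fossil sources), such an HTML documentation file could be created and placed under management from a shell like this:

    cat > www/notes.html <<'EOF'
    <div class='fossil-doc' data-title='Project Notes'>
    <p>Body of the page goes here.</p>
    </div>
    EOF
    fossil add www/notes.html

Because the file begins with the special <div> described above, requesting it through the /doc URLs will wrap it in the repository's standard header and footer.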

        Examples

        This file that you are currently reading is an example of embedded documentation. The name of this file in the fossil source tree is "www/embeddeddoc.wiki". You are perhaps looking at this file using the URL: - [http://www.fossil-scm.org/index.html/doc/tip/www/embeddeddoc.wiki]. + [http://www.fossil-scm.org/index.html/doc/trunk/www/embeddeddoc.wiki]. The first part of this path, the "[http://www.fossil-scm.org/index.html]", is the base URL. You might have originally typed: [http://www.fossil-scm.org/]. The web server at the www.fossil-scm.org site automatically redirects such links by appending "index.html". The @@ -98,23 +117,23 @@
         #!/usr/bin/fossil
         repository: /fossil/fossil.fossil
         
        -This is one of three ways to set up a -fossil web server. - -The "/tip/" part of the URL tells fossil to use -the documentation files from the check-in that was checked in most -recently. If you wanted to see an historical version of this document, -you could substitute the name of a check-in for "/tip/". -For example, to see the version of this document associated with -check-in [9be1b00392], simply replace the "/tip/" with +This is one of four ways to set up a +fossil web server. + +The "/trunk/" part of the URL tells fossil to use +the documentation files from the most recent trunk check-in. +If you wanted to see an historical version of this document, +you could substitute the name of a check-in for "/trunk/". +For example, to see the version of this document associated with +check-in [9be1b00392], simply replace the "/trunk/" with "/9be1b00392/". You can also substitute the symbolic name for a particular version or branch. For example, you might -replace "/tip/" with "/trunk/" to get the latest -version of this document in the "trunk" branch. The symbolic name +replace "/trunk/" with "/experimental/" to get the latest +version of this document in the "experimental" branch. The symbolic name can also be a date and time string in any of the following formats:

        • YYYY-MM-DD
        • YYYY-MM-DDTHH:MM @@ -135,12 +154,12 @@ The file that encodes this document is stored in the fossil source tree under the name "www/embeddeddoc.wiki" and so that name forms the last part of the URL for this document. As I sit writing this documentation file, I am testing my work by -running the "fossil server" command line and viewing +running the "fossil ui" command line and viewing http://localhost:8080/doc/ckout/www/embeddeddoc.wiki in Firefox. I am doing this even though I have not yet checked in the "www/embeddeddoc.wiki" file for the first time. Using the special "ckout" version identifier on the "/doc" page it is easy to make multiple changes to multiple files and see how they all look together before committing anything to the repository. ADDED www/event.wiki Index: www/event.wiki ================================================================== --- www/event.wiki +++ www/event.wiki @@ -0,0 +1,75 @@ +Technical Notes + +

          What Is A "Technote"?

          + +In Fossil, a "technical note" or "technote" (formerly called an "event") +is a special kind of [./wikitheory.wiki | wiki page] +that is associated with a point in time rather than having a page name. +Each technote causes a single entry to appear on the +[/timeline?y=e | Timeline Page]. +Clicking on the timeline link will display the text of the technote. +The wiki content, the timeline entry text, the +time of the technote, and the timeline background color can all be edited. + +As with check-ins, wiki, and tickets, all technotes automatically synchronize +to other repositories. Hence, technotes can be viewed, created, and edited +off-line. And the complete edit history for technotes is maintained +for auditing purposes. + +Possible uses for technotes include: + + * Milestones. Project milestones, such as releases or beta-test + cycles, can be recorded as technotes. The timeline entry for the technote + can be something simple like "Version 1.2.3" perhaps with a bright + color background to draw attention to the entry and the wiki content + can contain release notes, for example. + + * Blog Entries. Blog entries from developers describing the current + state of a project, or rational for various design decisions, or + roadmaps for future development, can be entered as technotes. + + * Process Checkpoints. For projects that have a formal process, + technotes can be used to record the completion or the initiation of + various process steps. For example, a technote can be used to record + the successful completion of a long-running test, perhaps with + performance results and details of where the test was run and who + ran it recorded in the wiki content. + + * News Articles. Significant occurrences in the lifecycle of + a project can be recorded as news articles using technotes. Perhaps the + domain name of the canonical website for a project changes, or new + server hardware is obtained. Such happenings are appropriate for + reporting as news. + + * Announcements. Changes to the composition of the development + team or acquisition of new project sponsors can be communicated as + announcements which can be implemented as technotes. + +No project is required to use technotes. But technotes can help many projects +stay better organized and provide a better historical record of the +development progress. + +

          Viewing Technotes

+ +Because technotes are considered a special kind of wiki, +users must have permission to read wiki in order to read technotes. +Enable the "j" permission under the /Setup/Users menu in order +to give specific users or user classes the ability to view wiki +and technotes. + +Technotes show up on the timeline. Click on the hyperlink beside the +technote title to see the complete text. + +

          Creating And Editing Technotes

+ +There is a hyperlink under the /wikihelp menu that can be used to create +new technotes. And there is a submenu hyperlink on technote displays for +editing existing technotes. + +Users must have check-in privileges (permission "i") in order to +create or edit technotes. In addition, users must have create-wiki +privilege (permission "f") to create new technotes and edit-wiki +privilege (permission "k") in order to edit existing technotes. + +Technote content may be formatted as [/wiki_rules | Fossil wiki], +[/md_rules | Markdown], or plain text. Index: www/faq.tcl ================================================================== --- www/faq.tcl +++ www/faq.tcl @@ -9,14 +9,15 @@ } faq { What GUIs are available for fossil? } { - The fossil executable comes with a web-based GUI built in. Just run: + The fossil executable comes with a [./webui.wiki | web-based GUI] built in. + Just run:
          - fossil ui REPOSITORY-FILENAME + fossil [/help/ui|ui] REPOSITORY-FILENAME
          And your default web browser should pop up and automatically point to the fossil interface. (Hint: You can omit the REPOSITORY-FILENAME if you are within an open check-out.) @@ -30,64 +31,94 @@ and Tagging document. } faq { - How do I create a new branch in fossil? + How do I create a new branch? } { There are lots of ways: - When you are checking in a new change using the commit + When you are checking in a new change using the [/help/commit|commit] command, you can add the option "--branch BRANCH-NAME" to - make the change be the founding check-in for a new branch. You can - also add the "--bgcolor COLOR" option to give the branch a - specific background color on timelines. + make the new check-in be the first check-in for a new branch. - If you want to create a new branch whose founding check-in is the + If you want to create a new branch whose initial content is the same as an existing check-in, use this command:
          - fossil branch new BRANCH-NAME BASIS + fossil [/help/branch|branch] new BRANCH-NAME BASIS
The BRANCH-NAME argument is the name of the new branch and the BASIS argument is the name of the check-in that the branch splits off from. If you already have a fork in your check-in tree and you want to convert that fork to a branch, you can do this from the web interface. First locate the check-in that you want to be - the founding check-in of your branch on the timeline and click on its + the initial check-in of your branch on the timeline and click on its link so that you are on the ci page. Then find the "edit" - link (near the "Commands:" label) and click on that. On the - "Edit Check-in" page, check the box beside "Branching:" and fill in + link (near the "Commands:" label) and click on that. On the + "Edit Check-in" page, check the box beside "Branching:" and fill in the name of your new branch to the right and press the "Apply Changes" button. } + +faq { + How do I tag a check-in? +} { + There are several ways: + + When you are checking in a new change using the [/help/commit|commit] + command, you can add a tag to that check-in using the + "--tag TAGNAME" command-line option. You can repeat the --tag + option to give a check-in multiple tags. Tags need not be unique. So, + for example, it is common to give every released version a "release" tag. + + If you want to add a tag to an existing check-in, you can use the + [/help/tag|tag] command. For example: + +
+ fossil [/help/tag|tag] add TAGNAME CHECK-IN +
+ + The CHECK-IN in the previous line can be any + [./checkin_names.wiki | valid check-in name format]. + + You can also add (and remove) tags from a check-in using the + [./webui.wiki | web interface]. First locate the check-in that you + want to tag on the timeline, then click on the link to go to the detailed + information page for that check-in. Then find the "edit" + link (near the "Commands:" label) and click on that. There are + controls on the edit page that allow new tags to be added and existing + tags to be removed. +} faq { How do I create a private branch that won't get pushed back to the main repository. } { - Use the --private command-line option on the + Use the --private command-line option on the commit command. The result will be a check-in which exists on - your local repository only and is never pushed to other repositories. - All descendents of a private check-in are also private. - + your local repository only and is never pushed to other repositories. + All descendants of a private check-in are also private. + Unless you specify something different using the --branch and/or --bgcolor options, the new private check-in will be put on a branch named "private" with an orange background color. - + You can merge from the trunk into your private branch in order to keep your private branch in sync with the latest changes on the trunk. Once you have everything in your private branch the way you want it, you can then merge your private branch back into the trunk and push. Only the final merge operation will appear in other repositories. It will seem as if all the changes that occurred on your private branch occurred in a single check-in. Of course, you can also keep your branch private forever simply by not merging the changes in the private branch back into the trunk. + + [./private.wiki | Additional information] } faq { How can I delete inappropriate content from my fossil repository? } { @@ -97,25 +128,33 @@ faq { How do I make a clone of the fossil self-hosting repository? } { Any of the following commands should work:
          -  fossil  clone  http://www.fossil-scm.org/  fossil.fossil
          - fossil clone http://www2.fossil-scm.org/ fossil.fossil
          - fossil clone http://www.hwaci.com/cgi-bin/fossil fossil.fossil + fossil [/help/clone|clone] http://www.fossil-scm.org/ fossil.fossil + fossil [/help/clone|clone] http://www2.fossil-scm.org/ fossil.fossil + fossil [/help/clone|clone] http://www3.fossil-scm.org/site.cgi fossil.fossil
          Once you have the repository cloned, you can open a local check-out as follows:
          -  mkdir src; cd src; fossil open ../fossil.fossil
          +  mkdir src; cd src; fossil [/help/open|open] ../fossil.fossil
             
          Thereafter you should be able to keep your local check-out up to date with the latest code in the public repository by typing:
          -  fossil update
          +  fossil [/help/update|update]
             
          } + +faq { + How do I import or export content from and to other version control systems? +} { + Please see [./inout.wiki | Import And Export] +} + + ############################################################################# # Code to actually generate the FAQ # puts "Fossil FAQ" Index: www/faq.wiki ================================================================== --- www/faq.wiki +++ www/faq.wiki @@ -4,24 +4,27 @@

          Note: See also Questions and Criticisms.

          1. What GUIs are available for fossil?
          2. What is the difference between a "branch" and a "fork"?
          3. -
          4. How do I create a new branch in fossil?
          5. -
          6. How do I create a private branch that won't get pushed back to the +
          7. How do I create a new branch?
          8. +
          9. How do I tag a check-in?
          10. +
          11. How do I create a private branch that won't get pushed back to the main repository.
          12. -
          13. How can I delete inappropriate content from my fossil repository?
          14. -
          15. How do I make a clone of the fossil self-hosting repository?
          16. +
          17. How can I delete inappropriate content from my fossil repository?
          18. +
          19. How do I make a clone of the fossil self-hosting repository?
          20. +
          21. How do I import or export content from and to other version control systems?

          (1) What GUIs are available for fossil?

          -
          The fossil executable comes with a web-based GUI built in. Just run: +
          The fossil executable comes with a [./webui.wiki | web-based GUI] built in. +Just run:
          -fossil ui REPOSITORY-FILENAME +fossil [/help/ui|ui] REPOSITORY-FILENAME
          And your default web browser should pop up and automatically point to the fossil interface. (Hint: You can omit the REPOSITORY-FILENAME if you are within an open check-out.)
        • @@ -32,49 +35,76 @@
          This is a big question - too big to answer in a FAQ. Please read the Branching, Forking, Merging, and Tagging document.
          -

          (3) How do I create a new branch in fossil?

          +

          (3) How do I create a new branch?

          There are lots of ways: -When you are checking in a new change using the commit +When you are checking in a new change using the [/help/commit|commit] command, you can add the option "--branch BRANCH-NAME" to -make the change be the founding check-in for a new branch. You can -also add the "--bgcolor COLOR" option to give the branch a -specific background color on timelines. +make the new check-in be the first check-in for a new branch. -If you want to create a new branch whose founding check-in is the +If you want to create a new branch whose initial content is the same as an existing check-in, use this command:
          -fossil branch new BRANCH-NAME BASIS +fossil [/help/branch|branch] new BRANCH-NAME BASIS
          The BRANCH-NAME argument is the name of the new branch and the BASIS argument is the name of the check-in that the branch splits off from. If you already have a fork in your check-in tree and you want to convert that fork to a branch, you can do this from the web interface. First locate the check-in that you want to be -the founding check-in of your branch on the timeline and click on its +the initial check-in of your branch on the timeline and click on its link so that you are on the ci page. Then find the "edit" link (near the "Commands:" label) and click on that. On the "Edit Check-in" page, check the box beside "Branching:" and fill in the name of your new branch to the right and press the "Apply Changes" button.
          -

          (4) How do I create a private branch that won't get pushed back to the +

          (4) How do I tag a check-in?

          + +
There are several ways: + +When you are checking in a new change using the [/help/commit|commit] +command, you can add a tag to that check-in using the +"--tag TAGNAME" command-line option. You can repeat the --tag +option to give a check-in multiple tags. Tags need not be unique. So, +for example, it is common to give every released version a "release" tag. + +If you want to add a tag to an existing check-in, you can use the +[/help/tag|tag] command. For example: + +
+fossil [/help/tag|tag] add TAGNAME CHECK-IN +
+ +The CHECK-IN in the previous line can be any +[./checkin_names.wiki | valid check-in name format]. + +You can also add (and remove) tags from a check-in using the +[./webui.wiki | web interface]. First locate the check-in that you +want to tag on the timeline, then click on the link to go to the detailed +information page for that check-in. Then find the "edit" +link (near the "Commands:" label) and click on that. There are +controls on the edit page that allow new tags to be added and existing +tags to be removed.
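As a concrete illustration (the tag name "release" and the check-in name "trunk" below are only examples), tagging the latest trunk check-in and then listing the tags attached to it might look like:

    fossil tag add release trunk
    fossil tag list trunk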
          + + +

          (5) How do I create a private branch that won't get pushed back to the main repository.

          Use the --private command-line option on the commit command. The result will be a check-in which exists on your local repository only and is never pushed to other repositories. -All descendents of a private check-in are also private. +All descendants of a private check-in are also private. Unless you specify something different using the --branch and/or --bgcolor options, the new private check-in will be put on a branch named "private" with an orange background color. @@ -84,33 +114,40 @@ then merge your private branch back into the trunk and push. Only the final merge operation will appear in other repositories. It will seem as if all the changes that occurred on your private branch occurred in a single check-in. Of course, you can also keep your branch private forever simply -by not merging the changes in the private branch back into the trunk.
          +by not merging the changes in the private branch back into the trunk. + +[./private.wiki | Additional information] - -

          (5) How can I delete inappropriate content from my fossil repository?

          + +

          (6) How can I delete inappropriate content from my fossil repository?

          See the article on [./shunning.wiki | "shunning"] for details.
          - -

          (6) How do I make a clone of the fossil self-hosting repository?

          + +

          (7) How do I make a clone of the fossil self-hosting repository?

          Any of the following commands should work:
          -fossil  clone  http://www.fossil-scm.org/  fossil.fossil
          -fossil clone http://www2.fossil-scm.org/ fossil.fossil
          -fossil clone http://www.hwaci.com/cgi-bin/fossil fossil.fossil +fossil [/help/clone|clone] http://www.fossil-scm.org/ fossil.fossil +fossil [/help/clone|clone] http://www2.fossil-scm.org/ fossil.fossil +fossil [/help/clone|clone] http://www3.fossil-scm.org/site.cgi fossil.fossil
          Once you have the repository cloned, you can open a local check-out as follows:
          -mkdir src; cd src; fossil open ../fossil.fossil
          +mkdir src; cd src; fossil [/help/open|open] ../fossil.fossil
           
          Thereafter you should be able to keep your local check-out up to date with the latest code in the public repository by typing:
          -fossil update
          +fossil [/help/update|update]
           
          + +

          (8) How do I import or export content from and to other version control systems?

          + +
          Please see [./inout.wiki | Import And Export]
          + Index: www/fileformat.wiki ================================================================== --- www/fileformat.wiki +++ www/fileformat.wiki @@ -28,12 +28,12 @@ The local state is not composed of artifacts and is not intended to be enduring. This document is concerned with global state only. Local state is only mentioned here in order to distinguish it from global state. Each artifact in the repository is named by its SHA1 hash. -No prefixes or meta information is added to a artifact before -its hash is computed. The name of a artifact in the repository +No prefixes or meta information is added to an artifact before +its hash is computed. The name of an artifact in the repository is exactly the same SHA1 hash that is computed by sha1sum on the file as it exists in your source tree.

          Some artifacts have a particular format which gives them special meaning to fossil. Fossil recognizes: @@ -42,24 +42,28 @@
        • [#manifest | Manifests]
        • [#cluster | Clusters]
        • [#ctrl | Control Artifacts]
        • [#wikichng | Wiki Pages]
        • [#tktchng | Ticket Changes]
        • -
        • [#artifact | Artifacts]
        • +
        • [#attachment | Attachments]
        • +
        • [#event | TechNotes]
        -These five artifact types are described in the sequel. +These seven artifact types are described in the following sections. In the current implementation (as of 2009-01-25) the artifacts that -make up a fossil repository are stored in in as delta- and zlib-compressed +make up a fossil repository are stored as delta- and zlib-compressed blobs in an SQLite database. This is an implementation detail and might change in a future release. For the purpose of this article "file format" means the format of the artifacts, not how the artifacts are stored on disk. It is the artifact format that is intended to be enduring. The specifics of how artifacts are stored on disk, though stable, is not intended to live as long as the artifact format. + +All of the artifacts can be extracted from a Fossil repository using +the "fossil deconstruct" command.
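For example, a deconstruction session might look something like the following sketch (the repository file name and the output directory are placeholders):

    fossil deconstruct -R project.fossil artifacts

Each extracted file is named after the hash of the artifact it contains, so the results can be compared directly against the artifact formats described below.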

        1.0 The Manifest

        A manifest defines a check-in or version of the project @@ -91,59 +95,78 @@ may contain no additional text or data beyond what is described here. Allowed cards in the manifest are as follows:
        +B baseline-manifest
        C checkin-comment
        D time-and-date-stamp
        -F filename SHA1-hash permissions old-name
        +F filename ?SHA1-hash? ?permissions? ?old-name?
        +N mimetype
        P SHA1-hash+
        +Q (+|-)SHA1-hash ?SHA1-hash?
        R repository-checksum
        -T (+|-|*)tag-name * ?value?
        +T (+|-|*)tag-name * ?value?
        U user-login
        Z manifest-checksum
        + +A manifest may optionally have a single B-card. The B-card specifies +another manifest that serves as the "baseline" for this manifest. A +manifest that has a B-card is called a delta-manifest and a manifest +that omits the B-card is a baseline-manifest. The other manifest +identified by the argument of the B-card must be a baseline-manifest. +A baseline-manifest records the complete contents of a check-in. +A delta-manifest records only changes from its baseline. A manifest must have exactly one C-card. The sole argument to the C-card is a check-in comment that describes the check-in that the manifest defines. The check-in comment is text. The following escape sequences are applied to the text: A space (ASCII 0x20) is represented as "\s" (ASCII 0x5C, 0x73). A -newline (ASCII 0x0a) is "\n" (ASCII 0x6C, x6E). A backslash +newline (ASCII 0x0a) is "\n" (ASCII 0x5C, x6E). A backslash (ASCII 0x5C) is represented as two backslashes "\\". Apart from space and newline, no other whitespace characters are allowed in the check-in comment. Nor are any unprintable characters allowed in the comment. A manifest must have exactly one D-card. The sole argument to the D-card is a date-time stamp in the ISO8601 format. The date and time should be in coordinated universal time (UTC). -The format is: +The format one of:
        -YYYY-MM-DDTHH:MM:SS +YYYY-MM-DDTHH:MM:SS
        +YYYY-MM-DDTHH:MM:SS.SSS
        -A manifest has zero or more F-cards. Each F-card defines a file -(other than the manifest itself) which is part of the check-in that -the manifest defines. There are two, three, or four arguments. -The first argument -is the pathname of the file in the check-in relative to the root -of the project file hierarchy. No ".." or "." directories are allowed -within the filename. Space characters are escaped as in C-card -comment text. Backslash characters and newlines are not allowed -within filenames. The directory separator character is a forward -slash (ASCII 0x2F). The second argument to the F-card is the -full 40-character lower-case hexadecimal SHA1 hash of the content -artifact. The optional 3rd argument defines any special access -permissions associated with the file. The only special code currently -defined is "x" which means that the file is executable. All files are -always readable and writable. This can be expressed by "w" permission -if desired but is optional. -The optional 4th argument is the name of the same file as it existed in -the parent check-in. If the name of the file is unchanged from its -parent, then the 4th argument is omitted. +A manifest has zero or more F-cards. Each F-card identifies a file +that is part of the check-in. There are one, two, three, or four +arguments. The first argument is the pathname of the file in the +check-in relative to the root of the project file hierarchy. No ".." +or "." directories are allowed within the filename. Space characters +are escaped as in C-card comment text. Backslash characters and +newlines are not allowed within filenames. The directory separator +character is a forward slash (ASCII 0x2F). The second argument to the +F-card is the full 40-character lower-case hexadecimal SHA1 hash of +the content artifact. The second argument is required for baseline +manifests but is optional for delta manifests. When the second +argument to the F-card is omitted, it means that the file has been +deleted relative to the baseline (files removed in baseline manifests +versions are not added as F-cards). The optional 3rd argument +defines any special access permissions associated with the file. This +can be defined as "x" to mean that the file is executable or "l" +(small letter ell) to mean a symlink. All files are always readable +and writable. This can be expressed by "w" permission if desired but +is optional. The file format might be extended with new permission +letters in the future. The optional 4th argument is the name of the +same file as it existed in the parent check-in. If the name of the +file is unchanged from its parent, then the 4th argument is omitted. + +A manifest has zero or one N-cards. The N-card specifies the mimetype for the +text in the comment of the C-card. If the N-card is omitted, a default mimetype +is used. A manifest has zero or one P-cards. Most manifests have one P-card. The P-card has a varying number of arguments that defines other manifests from which the current manifest is derived. Each argument is an 40-character lowercase @@ -153,10 +176,29 @@ Other arguments define manifests with which the first was merged to yield the current manifest. Most manifests have a P-card with a single argument. The first manifest in the project has no ancestors and thus has no P-card. +A manifest has zero or more Q-cards. A Q-card is similar to a P-card +in that it defines a predecessor to the current check-in. 
But +whereas a P-card defines the immediate ancestor or a merge +ancestor, the Q-card is used to identify a single check-in or a small +range of check-ins which were cherry-picked for inclusion in or +exclusion from the current manifest. The first argument of +the Q-card is the artifact ID of another manifest (the "target") +which has had its changes included or excluded in the current manifest. +The target is preceded by "+" or "-" to show inclusion or +exclusion, respectively. The optional second argument to the +Q-card is another manifest artifact ID which is the "baseline" +for the cherry-pick. If omitted, the baseline is the primary +parent of the target. The +changes included or excluded consist of all changes moving from +the baseline to the target. + +The Q-card was added to the interface specification on 2011-02-26. +Older versions of Fossil will reject manifests that contain Q-cards. + A manifest may optionally have a single R-card. The R-card has a single argument which is the MD5 checksum of all files in the check-in except the manifest itself. The checksum is expressed as 32-characters of lowercase hexadecimal. The checksum is computed as follows: For each file in the check-in (except for @@ -163,16 +205,17 @@ the manifest itself) in strict sorted lexicographical order, take the pathname of the file relative to the root of the repository, append a single space (ASCII 0x20), the size of the file in ASCII decimal, a single newline character (ASCII 0x0A), and the complete text of the file. -Compute the MD5 checksum of the the result. +Compute the MD5 checksum of the result. -A manifest might contain one or more T-cards used to set tags or -properties on the check-in. The format of the T-card is the same as +A manifest might contain one or more T-cards used to set +[./branching.wiki#tags | tags or properties] +on the check-in. The format of the T-card is the same as described in Control Artifacts section below, except that the -second argument is the single characcter "*" instead of an +second argument is the single character "*" instead of an artifact ID. The * in place of the artifact ID indicates that the tag or property applies to the current artifact. It is not possible to encode the current artifact ID as part of an artifact, since the act of inserting the artifact ID would change the artifact ID, hence a * is used to represent "self". T-cards are typically @@ -182,24 +225,24 @@ Each manifest has a single U-card. The argument to the U-card is the login of the user who created the manifest. The login name is encoded using the same character escapes as is used for the check-in comment argument to the C-card. -A manifest has an option Z-card as its last line. The argument +A manifest must have a single Z-card as its last line. The argument to the Z-card is a 32-character lowercase hexadecimal MD5 hash of all prior lines of the manifest up to and including the newline -character that immediately precedes the "Z". The Z-card is just +character that immediately precedes the "Z". The Z-card is a sanity check to prove that the manifest is well-formed and consistent. A sample manifest from Fossil itself can be seen [/artifact/28987096ac | here].
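To examine a manifest such as the sample linked above first-hand, the raw artifact text can be printed from within a check-out of the Fossil sources (a sketch; the hash prefix is the one from the link above):

    fossil artifact 28987096ac

The output is the literal card text described in this section, preceded by the PGP signature wrapper when the manifest is clearsigned.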

        2.0 Clusters

        -A cluster is a artifact that declares the existence of other artifacts. +A cluster is an artifact that declares the existence of other artifacts. Clusters are used during repository synchronization to help reduce network traffic. As such, clusters are an optimization and may be removed from a repository without loss or damage to the underlying project code. @@ -228,12 +271,11 @@ A cluster contains one or more "M" cards followed by a single "Z" line. Each M card has a single argument which is the artifact ID of another artifact in the repository. The Z card work exactly like the Z card of a manifest. The argument to the Z card is the lower-case hexadecimal representation of the MD5 checksum of all -prior cards in the cluster. Note that the Z card is required -on a cluster. +prior cards in the cluster. The Z-card is required. An example cluster from Fossil can be seen [/artifact/d03dbdd73a2a8 | here]. @@ -240,45 +282,46 @@

        3.0 Control Artifacts

        Control artifacts are used to assign properties to other artifacts within the repository. The basic format of a control artifact is the same as a manifest or cluster. A control artifact is a text -files divided into cards by newline characters. Each card has a +file divided into cards by newline characters. Each card has a single-character card type followed by arguments. Spaces separate the card type and the arguments. No surplus whitespace is allowed. -All cards must occur in strict lexigraphical order. +All cards must occur in strict lexicographical order. Allowed cards in a control artifact are as follows:
        D time-and-date-stamp
        -T (+|-|*)tag-name artifact-id ?value?
        +T (+|-|*)tag-name artifact-id ?value?
        U user-name
        Z checksum
        -A control artifact must have one D card and one Z card and +A control artifact must have one D card, one U card, one Z card and one or more T cards. No other cards or other text is allowed in a control artifact. Control artifacts might be PGP clearsigned. The D card and the Z card of a control artifact are the same as in a manifest. -The T card represents a "tag" or property that is applied to +The T card represents a [./branching.wiki#tags | tag or property] +that is applied to some other artifact. The T card has two or three values. The second argument is the 40 character lowercase artifact ID of the artifact to which the tag is to be applied. The first value is the tag name. The first character of the tag -is either "+", "-", or "*". A "+" means the tag should be added +is either "+", "-", or "*". The "+" means the tag should be added to the artifact. The "-" means the tag should be removed. The "*" character means the tag should be added to the artifact -and all direct descendants (but not branches) of the artifact down +and all direct descendants (but not descendants through a merge) down to but not including the first descendant that contains a -more recent "-" tag with the same name. +more recent "-", "*", or "+" tag with the same name. The optional third argument is the value of the tag. A tag -without a value is a boolean. +without a value is a Boolean. When two or more tags with the same name are applied to the same artifact, the tag with the latest (most recent) date is used. @@ -288,11 +331,11 @@ check-in user. The "date" tag overrides the check-in date. The "branch" tag sets the name of the branch that at check-in belongs to. Symbolic tags begin with the "sym-" prefix. The U card is the name of the user that created the control -artifact. The Z card is the usual artifact checksum. +artifact. The Z card is the usual required artifact checksum. An example control artifacts can be seen [/info/9d302ccda8 | here]. @@ -305,30 +348,34 @@ the following card types:
        D time-and-date-stamp
        L wiki-title
        +N mimetype
        P parent-artifact-id+
        U user-name
        W size \n text \n
        Z checksum
        The D card is the date and time when the wiki page was edited. The P card specifies the parent wiki pages, if any. The L card -gives the name of the wiki page. The U card specifies the login +gives the name of the wiki page. The optional N card specifies +the mimetype of the wiki text. If the N card is omitted, the +mimetype is assumed to be text/x-fossil-wiki. +The U card specifies the login of the user who made this edit to the wiki page. The Z card is -the usual checksum over the either artifact. +the usual checksum over the entire artifact and is required. The W card is used to specify the text of the wiki page. The argument to the W card is an integer which is the number of bytes of text in the wiki page. That text follows the newline character that terminates the W card. The wiki text is always followed by one extra newline. An example wiki artifact can be seen -[/artifact/7b2f5fd0e0 | here]. +[/artifact?name=7b2f5fd0e0&txt=1 | here].

        5.0 Ticket Changes

        A ticket-change artifact represents a change to a trouble ticket. @@ -342,15 +389,16 @@ Z checksum The D card is the usual date and time stamp and represents the point in time when the change was entered. The U card is the login of the -programmer who entered this change. The Z card is the checksum over +programmer who entered this change. The Z card is the required checksum over the entire artifact. -Every ticket has a unique ID. The ticket to which this change is applied -is specified by the K card. A ticket exists if it contains one or +Every ticket has a distinct ticket-id: +40-character lower-case hexadecimal number. +The ticket-id is given in the K-card. A ticket exists if it contains one or more changes. The first "change" to a ticket is what brings the ticket into existence. J cards specify changes to the "value" of "fields" in the ticket. If the value parameter of the J card is omitted, then the @@ -375,25 +423,27 @@

        6.0 Attachments

        An attachment artifact associates some other artifact that is the -attachment (the source artifact) with a ticket or wiki page to which +attachment (the source artifact) with a ticket or wiki page or +technical note to which the attachment is connected (the target artifact). The following cards are allowed on an attachment artifact:
        -A filename target ?source? -C comment
        +A filename target ?source?
        +C comment
        D time-and-date-stamp
        +N mimetype
        U user-name
        Z checksum
        The A card specifies a filename for the attachment in its first argument. -The second argument to the A card is the name -of the wiki page or ticket to which the attachment is connected. The +The second argument to the A card is the name of the wiki page or +ticket or technical note to which the attachment is connected. The third argument is either missing or else it is the 40-character artifact ID of the attachment itself. A missing third argument means that the attachment should be deleted. The C card is an optional comment describing what the attachment is about. @@ -400,156 +450,336 @@ The C card is optional, but there can only be one. A single D card is required to give the date and time when the attachment was applied. -A single U card gives the name of the user to added the attachment. +There may be zero or one N cards. The N card specifies the mimetype of the +comment text provided in the C card. If the N card is omitted, the C card +mimetype is taken to be text/plain. + +A single U card gives the name of the user who added the attachment. If an attachment is added anonymously, then the U card may be omitted. The Z card is the usual checksum over the rest of the attachment artifact. +The Z card is required. + + + +

        7.0 Technical Notes

        + +A technical note or "technote" artifact (formerly known as an "event" artifact) +associates a timeline comment and a page of text +(similar to a wiki page) with a point in time. Technotes can be used +to record project milestones, release notes, blog entries, process +checkpoints, or news articles. +The following cards are allowed on an technote artifact: + +
        +C comment
        +D time-and-date-stamp
        +E technote-time technote-id
        +N mimetype
        +P parent-artifact-id+
        +T +tag-name * ?value?
        +U user-name
        +W size \n text \n
        +Z checksum +
+ +The C card contains text that is displayed on the timeline for the +technote. The C card is optional, but there can only be one.

+ +A single D card is required to give the date and time when the +technote artifact was created. This is different from the time at which +the technote appears on the timeline.

+ +A single E card gives the time of the technote (the point on the timeline +where the technote is displayed) and a unique identifier for the technote. +When there are multiple artifacts with the same technote-id, the one with +the most recent D card is the only one used. The technote-id must be a +40-character lower-case hexadecimal string.

+ +The optional N card specifies the mimetype of the text of the technote +that is contained in the W card. If the N card is omitted, then the +W card text mimetype is assumed to be text/x-fossil, which is the +Fossil wiki format.

+ +The optional P card specifies a prior technote with the same technote-id +from which the current technote is an edit. The P card is a hint to the +system that it might be space efficient to store one technote as a delta of +the other.

+ +A technote might contain one or more T-cards used to set +[./branching.wiki#tags | tags or properties] +on the technote. The format of the T-card is the same as +described in the [#ctrl | Control Artifacts] section above, except that the +second argument is the single character "*" instead of an +artifact ID and the name is always prefaced by "+". +The * in place of the artifact ID indicates that +the tag or property applies to the current artifact. It is not +possible to encode the current artifact ID as part of an artifact, +since the act of inserting the artifact ID would change the artifact ID, +hence a * is used to represent "self". The "+" on the +name means that tags can only be added and they can only be non-propagating +tags. In a technote, T cards are normally used to set the background +display color for timelines.

+ +The optional U card gives the name of the user who entered the technote.

+ +A single W card provides wiki text for the document associated with the +technote. The format of the W card is exactly the same as for a +[#wikichng | wiki artifact].

+ +The Z card is the required checksum over the rest of the artifact. -

        7.0 Card Summary

        +

        8.0 Card Summary

        -The following table summaries the various kinds of cards that -appear on Fossil artifacts: +The following table summarizes the various kinds of cards that appear +on Fossil artifacts. A blank entry means that combination of card and +artifact is not legal. A number or range of numbers indicates the number +of times a card may (or must) appear in the corresponding artifact type. +e.g. a value of 1 indicates a required unique card and 1+ indicates that one +or more such cards are required. - + - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + - - - - - - + + + + + + + - - - - - - + + + + + + + - - - - - - + + + + + + + + + + + + + + + + + - - - - - - + + + + + + + + + + + + + + + + + - - - - - - + + + + + + + - - - - - - - + + + + + + + + - - - - - - + + + + + + + - - - - - - + + + + + + + - - - - - - + + + + + + +
        Card FormatUsed ByUsed By
        Manifest Cluster Control Wiki Ticket Attachment
        A filename target source     X
        C coment-textX    XTechnote
        A filename target ?source?     1 
        B baseline0-1*      
         * = Required for delta manifests
        C comment-text1    0-10-1
        D date-time-stampX XXXX
        F filename uuid permissions oldnameX     
        J name value    X 1 11111
        E technote-time technote-id      1
        F filename ?uuid? ?permissions? ?oldname?0+      
        J name ?value?    1+  
        K ticket-uuid    X     1  
        L wiki-title   X     1   
        M uuid X     1+     
        N mimetype0-1  0-1 0-10-1
        P uuid ...X  X  0-1  0-1  0-1
        Q (+|-)uuid ?uuid?0+      
        R md5sumX     0-1      
        T (+|*|-)tagname uuid valueX X   T (+|*|-)tagname uuid ?value?0+ 1+   0+
        U usernameX XXXX1 1110-10-1
        W size   X     1  1
        Z md5sumXXXXXX1111111
        + + + +

        9.0 Addenda

        + +This section contains additional information which may be useful when +implementing algorithms described above. + +

        R Card Hash Calculation

        + +Given a manifest file named MF, the following Bash shell code +demonstrates how to compute the value of the R card in that manifest. +This example uses manifest [28987096ac]. Lines starting with # are +shell input and other lines are output. This demonstration assumes that the +file versions represented by the input manifest are checked out +under the current directory. + +
        +# head MF
        +-----BEGIN PGP SIGNED MESSAGE-----
        +Hash: SHA1
        +
        +C Make\sthe\s"clearsign"\sPGP\ssigning\sdefault\sto\soff.
        +D 2010-02-23T15:33:14
        +F BUILD.txt 4f7988767e4e48b29f7eddd0e2cdea4555b9161c
        +F COPYRIGHT-GPL2.txt 06877624ea5c77efe3b7e39b0f909eda6e25a4ec
        +...
        +
        +# grep '^R ' MF
        +R c0788982781981c96a0d81465fec7192
        +
        +# for i in $(awk '/^F /{print $2}' MF); do \
        +  echo $i $(stat -c '%s' $i); \
        +  cat $i; \
        +done | md5sum
        +c0788982781981c96a0d81465fec7192  -
        +
        + +Minor caveats: the above demonstration will work only when none of the +filenames in the manifest are "fossilized" (encoded) because they contain +spaces. In that case the shell-generated hash would differ because the +stat calls will fail to find such files (which are output in encoded +form here). That approach also won't work for delta manifests. Calculating +the R-card for delta manifests requires traversing both the delta and its baseline in +lexical order of the files, preferring the delta's copy if both contain +a given file. ADDED www/fiveminutes.wiki Index: www/fiveminutes.wiki ================================================================== --- www/fiveminutes.wiki +++ www/fiveminutes.wiki @@ -0,0 +1,76 @@ +Up and running in 5 minutes as a single user + +

        +The following document was contributed by Gilles Ganault on 2013-01-08. + +


        + +

        Up and running in 5 minutes as a single user

        +

        This short document explains the main basic Fossil commands for a single +user, i.e. with no additional users, with no need to synchronize with some remote +repository, and no need for branching/forking.

        + +

        Create a new repository

        +

        fossil new c:\test.repo

        +

        This will create the new SQLite binary file that holds the repository, i.e. +files, tickets, wiki, etc. It can be located anywhere, although it's considered +best practice to keep it outside the work directory where you will work on files +after they've been checked out of the repository.

        + +

        Open the repository

        +

        cd c:\temp\test.fossil

        +

        fossil open c:\test.repo

        +

        This will check out the last revision of all the files in the repository, +if any, into the current work directory. In addition, it will create a binary +file _FOSSIL_ to keep track of changes (on non-Windows systems it is called +.fslckout).

        + +

        Add new files

        +

        fossil add .

        +

        To tell Fossil to add new files to the repository. The files aren't actually +added until you run "commit". When using ".", it tells Fossil +to add all the files in the current directory recursively, i.e. including all +the files in all the subdirectories.

        +

        Note: To tell Fossil to ignore some extensions:

        +

        fossil settings ignore-glob "*.o,*.obj,*.exe" --global

        + +

        Remove files that haven't been committed yet

        +

        fossil delete myfile.c

        +

        This will simply remove the item from the list of files that were previously +added through "fossil add".

        + +

        Check current status

        +

        fossil changes

        +

        This shows the list of changes that have been done and will be committed the +next time you run "fossil commit". It's a useful command to run before +running "fossil commit" just to check that things are OK before proceeding.

        + +

        Commit changes

        +

        To actually apply the pending changes to the repository, e.g. new files marked +for addition, checked-out files that have been edited and must be checked-in, +etc.

        + +

        fossil commit -m "Added stuff"

        + +If no file names are provided on the command-line then all changes will be checked in, +otherwise just the listed file(s) will be checked in. + +

        Compare two revisions of a file

        +

        If you wish to compare the last revision of a file and its checked out version +in your work directory:

        +

        fossil gdiff myfile.c

        +

        If you wish to compare two different revisions of a file in the repository:

        +

        fossil finfo myfile: Note the first hash, which is the UUID of the commit +when the file was committed

        +

        fossil gdiff --from UUID#1 --to UUID#2 myfile.c

        +

        Cancel changes and go back to previous revision

        +

        fossil revert myfile.c

        +

        Fossil does not prompt when reverting a file. It simply reminds the user about the +"undo" command, just in case the revert was a mistake.

        + + +

        Close the repository

        +

        fossil close

        +

        This will simply remove the _FOSSIL_ at the root of the work directory but +will not delete the files in the work directory. From then on, any use of "fossil" +will trigger an error since there is no longer any connection.

        ADDED www/foss-cklist.wiki Index: www/foss-cklist.wiki ================================================================== --- www/foss-cklist.wiki +++ www/foss-cklist.wiki @@ -0,0 +1,113 @@ +Checklist For Successful Open-Source Projects + + +

        This checklist is loosely derived from Tom "Spot" Callaway's Fail Score +blog post +http://spot.livejournal.com/308370.html (see also +[1] and +[2]). +Tom's original post assigned point scores to the various elements and +by adding together the individual points, the reader is supposed to be able +to judge the likelihood that the project will fail. +The point scores, and the items on the list, clearly reflect Tom's +biases and are not necessarily those of the larger open-source community. +Nevertheless, the policy of the Fossil shall be to strive for a perfect +score.

        + +

        This checklist is an inversion of Tom's original post in that it strives to +say what the project should do in order to succeed rather than what it +should not do to avoid failure. The point values are omitted.

        + +

        See also: +

        + +
        + +
          +
        1. The source code size is less than 100 MB, uncompressed. + +

        2. The project uses a Version Control System (VCS). +

            +
          1. The VCS has a working web interface. +
          2. There is documentation on how to use the VCS. +
          3. The VCS is general-purpose, not something hacked together for this + one project. +
          + +
        3. The project comes with documentation on how to build from source +and that documentation is lucid, correct, and up-to-date. + +

        4. The project is configurable using an autoconf-generated configure +script, or the equivalent, and does not require: +

            +
          1. Manually editing flat files +
          2. Editing code header files +
          + +
        5. The project should be buildable using commonly available and + standard tools like "make". + +

        6. The project does not depend on 3rd-party proprietary build tools. + +

        7. The project is able to dynamically link against standard libraries + such as zlib and libjpeg. +

            +
          1. The project does not ship with the sources to standard libraries. + (On the Fossil project, we will allow the SQLite library sources + to be included in the source tree as long as a system-installed + SQLite library can be used in its stead.) +
          2. The project does not use slightly modified versions of standard + libraries. Any required bug fixes in standard libraries are pushed + back upstream. +
          + + +
        8. "make install" works and can be configured to target any + directory of the installer's choosing. +

            +
          1. The project contains no unconfigurable hard-coded pathnames like + "/opt" or "/usr/local". +
          2. After installation, the source tree can be moved or deleted and + the application will continue working. +
          + + +
        9. The source code uses only \n for line endings, not \r\n. + +

        10. The code does not depend on any special compiler features or bugs. + +

        11. The project has a mailing list and announces releases on + the mailing list. + +

        12. The project has a bug tracker. + +

        13. The project has a website. + +

        14. Release version numbers are in the traditional X.Y or X.Y.Z format. + +

        15. Releases can be downloaded as tarball using + gzip or bzip2 compression. + +

        16. Releases unpack into a versioned top-level directory. + (ex: "projectname-1.2.3/"). + +

        17. A statement of license appears at the top of every source code file + and the complete text of the license is included in the source code + tarball. + +

        18. There are no incompatible licenses in the code. + +

        19. The project has not been blithely proclaimed "public domain" without +having gone through the tedious and exacting legal steps to actually put it +in the public domain. + +

        20. There is an accurate change log in the code and on the website. + +

        21. There is documentation in the code and on the website. +

        ADDED www/fossil-from-msvc.wiki Index: www/fossil-from-msvc.wiki ================================================================== --- www/fossil-from-msvc.wiki +++ www/fossil-from-msvc.wiki @@ -0,0 +1,51 @@ +

        Integrating Fossil in the Microsoft Express 2010 IDE

        + +Contributed by Gilles Ganault on 2013-05-24. + +The Express version of Visual Studio doesn't support add-in's and plug-in's, +but it's not an issue since it's still possible to use Fossil through the +External Tools menu and Fossil is a CLI application anyway: + +
          +
        1. Tools > Settings > Expert Settings
        2. +
        3. Tools > External Tools, where the items in this list map + to "External Tool X" that we'll add to our own Fossil + menu later:
        4. +
            +
          1. Rename the default "[New Tool 1]" to eg. + "Commit"   2. +
          2. +
          3. Change Command to where Fossil is located eg. + "c:\fossil.exe"
          4. +
          5. Change Arguments to the required command, eg. + "commit -m". + The user will be prompted to type the comment that Commit expects
          6. +
          7. Set "Initial Directory" to point it to the work directory + where the source files are currently checked out + by Fossil (eg. c:\Workspace). It's also possible to use system + variables such as "$(ProjectDir)" instead of hard-coding the path
          8. +
          9. Check "Prompt for arguments", since Commit + requires typing a comment. Useless for commands like Changes + that don't require arguments
          10. +
          11. Uncheck "Close on Exit", so we can see what Fossil says + before closing the DOS box. Note that "Use Output Window" + will display the output in a child window within the IDE instead of + opening a DOS box
          12. +
          13. Click on OK
          14. +
          +
        5. Tools > Customize > Commands
        6. +
            +
          1. With "Menu bar = Menu Bar" selected, click on "Add + New Menu". A new "Fossil" menu is displayed in the + IDE's menu bar
          2. +
          3. Click on "Modify Selection" to rename it + "Fossil", and...
          4. +
          5. Use the "Move Down" button to move it lower in + the list
          6. +
          +
        7. Still in Customize dialog: In the "Menu bar" combo, select + the new Fossil menu you just created, and Click on "Add Command...": + From Categories, select Tools, and select "External Command 1". + Click on Close. It's unfortunate that the IDE doesn't say which command + maps to "External Command X".
        8. +
        ADDED www/fossil-v-git.wiki Index: www/fossil-v-git.wiki ================================================================== --- www/fossil-v-git.wiki +++ www/fossil-v-git.wiki @@ -0,0 +1,315 @@ +Fossil Versus Git + +

        1.0 Don't Stress!

        + +If you start out using one DVCS and later decide you like the other better, +you can easily [./inout.wiki | move your content]¹. + +Fossil and [http://git-scm.com | Git] are very similar in many respects, +but there are also important differences. +See the table below for +a high-level summary and the text that follows for more details. + +Keep in mind that you are reading this on a Fossil website, +so the information here +might be biased in favor of Fossil. Ask around with people who have +used both Fossil and Git for other opinions. + +¹Git does not support +wiki, tickets, or tech-notes, so those elements will not transfer when +exporting from Fossil to Git. + +

        2.0 Executive Summary:

        + +
        + + + + + + + + + + + + + + +
        GITFOSSIL
        File versioning onlyVersioning, Tickets, Wiki, and Technotes
        Ad-hoc, pile-of-files key/value databaseRelational SQL database
        Bazaar-style developmentCathedral-style development
        Designed for Linux developmentDesigned for SQLite development
        Lots of little toolsStand-alone executable
        One check-out per repositoryMany check-outs per repository
        Remembers what you should have doneRemembers what you actually did
        GPLBSD
        + +

        3.0 Discussion

        + +

        3.1 Feature Set

        + +Git provides file versioning services only, whereas Fossil adds an +integrated [./wikitheory.wiki | wiki], +[./bugtheory.wiki | ticketing & bug tracking], +[./embeddeddoc.wiki | embedded documentation], and +[./event.wiki | Technical notes]. +These additional capabilities are available for Git as 3rd-party and/or +user-installed add-ons, but with Fossil they are integrated into +the design. One way to describe Fossil is that it is +"[https://github.com/ | github]-in-a-box". + +If you clone Git's self-hosting repository you get just Git's source code. +If you clone Fossil's self-hosting repository, you get the entire +Fossil website - source code, documentation, ticket history, and so forth. + +For developers who choose to self-host projects (rather than using a +3rd-party service such as GitHub) Fossil is much easier to set up, since +the stand-alone Fossil executable together with a 2-line CGI script +suffice to instantiate a full-featured developer website. To accomplish +the same using Git requires locating, installing, configuring, integrating, +and managing a wide assortment of separate tools. Standing up a developer +website using Fossil can be done in minutes, whereas doing the same using +Git requires hours or days. + +

        3.2 Database

        + +The baseline data structures for Fossil and Git are the same (modulo +formatting details). Both systems store check-ins as immutable +objects referencing their immediate ancestors and named by their SHA1 hash. + +The difference is that Git stores its objects as individual files +in the ".git" folder or compressed into +bespoke "pack-files", whereas Fossil stores its objects in a +relational ([https://www.sqlite.org/|SQLite]) database file. To put it +another way, Git uses an ad-hoc pile-of-files key/value database whereas +Fossil uses a proven, general-purpose SQL database. This +difference is more than an implementation detail. It +has important consequences. + +With Git, one can easily locate the ancestors of a particular check-in +by following the pointers embedded the check-in object, but it is +difficult to go the other direction and locate the descendants of a +check-in. It is so difficult, in fact, that neither native Git nor +GitHub provide this capability. With Git, if you are looking at some +historical check-in then you cannot ask +"what came next" or "what are the children of this check-in". + +Fossil, on the other hand, parses essential information about check-ins +(parents, children, committers, comments, files changed, etc.) +into a relational database that can be easily +queried using concise SQL statements to find both ancestors and +descendents of a check-in. + +Leaf check-ins in Git that lack a "ref" become "detached", making them +difficult to locate and subject to garbage collection. This +"detached head" problem has caused untold grief for countless +Git users. With Fossil, all check-ins are easily located using +a variety of attributes (parents, children, committer, date, full-text +search of the check-in comment) and so detached heads are simply not possible. + +The ease with which check-ins can be located and queried in Fossil +has resulted in a huge variety of reports and status screens +([./webpage-ex.md|examples]) that show project state +in ways that help developers +maintain enhanced awareness and comprehension +and avoid errors. + +

        3.3 Cathedral vs. Bazaar

        + +Fossil and Git promote different development styles. Git promotes a +"bazaar" development style in which numerous anonymous developers make +small and sometimes haphazard contributions. Fossil +promotes a "cathedral" development model in which the project is +closely supervised by an highly engaged architect and implemented by +a clique of developers. + +Nota Bene: This is not to say that Git cannot be used for cathedral-style +development or that Fossil cannot be used for bazaar-style development. +They can be. But those modes are not their design intent nor the their +low-friction path. + +Git encourages a style in which individual developers work in relative +isolation, maintaining their +own branches and the occasionally rebasing and pushing selected changes up +to the main repository. Developers using Git often have their own +private branches that nobody else ever sees. Work becomes siloed. +This is exactly what one wants when doing bazaar-style development. + +Fossil, in contrast, strives to keep all changes from all contributors +mirrored in the main repository (in separate branches) at all times. +Work in progress from one developer is readily visible to all other +developers and to the project leader, well before the code is ready +to integrate. Fossil places a lot of emphasis on reporting the state +of the project, and the changes underway by all developers, so that +all developers and especially the project leader can maintain a better +mental picture of what is happening, and better situational awareness. + +

        3.4 Linux vs. SQLite

        + +Git was specifically designed to support the development of Linux. +Fossil was specifically designed to support the development of SQLite. + +Both SQLite and Linux are important pieces of software. +SQLite is found on far more systems than Linux. (Almost every Linux +system uses SQLite, but there are many non-Linux systems such as +iPhones, PlayStations, and Windows PC that use SQLite.) On the other +hand, for those systems that do use Linux, Linux is a far more important +component. + +Linux uses a bazaar-style development model. There are thousands and +thousands of contributors, most of whom do not know each others names. +Git is designed for this scenario. + +SQLite uses cathedral-style development. 95% of the code in SQLite +comes from just three programmers, 64% from just the lead developer. +And all SQLite developers know each other well and interact daily. +Fossil is designed for this development model. + +

        3.5 Lots of little tools vs. Self-contained system

        + +Git consists of many small tools, each doing one small part of the job, +which can be recombined (by experts) to perform powerful operations. +Git has a lot of complexity and many dependencies and requires an "installer" +script or program to get it running. + +Fossil is a single self-contained stand-alone executable with hardly +any dependencies. Fossil can be (and often is) run inside a +minimally configured chroot jail. To install Fossil, +one merely puts the executable on $PATH. + +The designer of Git says that the unix philosophy is to have lots of +small tools that collaborate to get the job done. The designer of +Fossil says that the unix philosophy is "it just works". Both +individuals have written their DVCSes to reflect their own view +of the "unix philosophy". + +

        3.6 One vs. Many Check-outs per Repository

        + +A "repository" in Git is a pile-of-files in the ".git" subdirectory +of a single check-out. The check-out and the repository are inseperable. + +With Fossil, a "repository" is a single SQLite database file +that can be stored anywhere. There +can be multiple active check-outs from the same repository, perhaps +open on different branches or on different snapshots of the same branch. +Long-running tests or builds can be running in one check-out while +changes are being committed in another. + +

        3.7 What you should have done vs. What you actually did

        + +Git puts a lot of emphasis on maintaining +a "clean" check-in history. Extraneous and experimental branches by +individual developers often never make it into the main repository. And +branches are often rebased before being pushed, to make +it appear as if development had been linear. Git strives to record what +the development of a project should have looked like had there been no +mistakes. + +Fossil, in contrast, puts more emphasis on recording exactly what happened, +including all of the messy errors, dead-ends, experimental branches, and +so forth. One might argue that this +makes the history of a Fossil project "messy". But another point of view +is that this makes the history "accurate". In actual practice, the +superior reporting tools available in Fossil mean that the added "mess" +is not a factor. + +One commentator has mused that Git records history according to +the victors, whereas Fossil records history as it actually happened. + +

        3.8 GPL vs. BSD

        + +Git is covered by the GPL license whereas Fossil is covered by +a two-clause BSD license. + +Consider the difference between GPL and BSD licenses: GPL is designed +to make writing easier at the expense of making reading harder. BSD is +designed to make reading easier and the expense of making writing harder. + +To a first approximation, the GPL license grants the right to read +source code to anyone who promises to give back enhancements. In other +words, the act of reading GPL source code (a prerequiste for making changes) +implies acceptance of the license which requires updates to be contributed +back under the same license. (The details are more complex, but the +foregoing captures the essence of the idea.) A big advantage of the GPL +is that anybody can contribute to the code without having to sign additional +legal documentation because they have implied their acceptance of the GPL +license by the very act of reading the source code. This means that a GPL +project can legally accept anonymous and drive-by patches. + +The BSD licenses, on the other hand, make reading much easier than the GPL, +because the reader need not surrender proprietary interest +in their own enhancements. On the flip side, BSD and similarly licensed +projects must obtain legal affidavits from authors before +new content can be added into the project. Anonymous and drive-by +patches cannot be accepted. This makes signing up new contributors for +BSD licensed projects harder. + +The licenses on the implementations of Git and Fossil only apply to the +implementations themselves, not to the projects which the systems store. +Nevertheless, one can see a more GPL-oriented world-view in Git and a +more BSD-oriented world-view in Fossil. Git encourages anonymous contributions +and siloed development, which are hallmarks of the GPL/bazaar approach to +software, whereas Fossil encourages a more tightly collaborative, +cliquish, cathedral-style approach more typical of BSD-licensed projects. + +

        4.0 Missing Features

        + +Most of the capabilities found in Git are also available in Fossil and +the other way around. For example, both systems have local check-outs, +remote repositories, push/pull/sync, bisect capabilities, and a "stash". +Both systems store project history as a directed acyclic graph (DAG) +of immutable check-in objects. + +But there are a few capabilities in one system that are missing from the +other. + +

        4.1 Features found in Fossil but missing from Git

        + + * The ability to show descendents of a check-in. + + Both Git and Fossil can easily find the ancestors of a check-in. But + only Fossil shows the descendents. (It is possible to find the + descendents of a check-in in Git using the log, but that is sufficiently + difficult that nobody ever actually does it.) + + * Wiki, Embedded documentation, Trouble-tickets, and Tech-Notes + + Git only provides versioning of source code. Fossil strives to provide + other related configuration management services as well. + + * Named branches + + Branches in Fossil have persistent names that are propagated + to collaborators via [/help?cmd=push|push] and [/help?cmd=pull|pull]. + All developers see the same name on the same branch. Git, in contrast, + uses only local branch names, so developers working on the + same project can (and frequently do) use a different name for the + same branch. + + * The [/help?cmd=all|fossil all] command + + Fossil keeps track of all repositories and check-outs and allows + operations over all of them with a single command. For example, in + Fossil is possible to request a pull of all repositories on a laptop + from their respective servers, prior to taking the laptop off network. + Or it is possible to do "fossil all status" to see if there are any + uncommitted changes that were overlooked prior to the end of the workday. + + * The [/help?cmd=ui|fossil ui] command + + Fossil supports an integrated web interface. Some of the same features + are available using third-party add-ons for Git, but they do not provide + nearly as many features and they are not nearly as convenient to use. + + +

        4.2 Features found in Git but missing from Fossil

        + + * Rebase + + Because of its emphasis on recording history exactly as it happened, + rather than as we would have liked it to happen, Fossil deliberately + does not provide a "rebase" command. One can rebase manually in Fossil, + with sufficient perserverence, but it not something that can be done with + a single command. + + * Push or pull a single branch + + The [/help?cmd=push|fossil push], [/help?cmd=pull|fossil pull], and + [/help?cmd=sync|fossil sync] commands do not provide the capability to + push or pull individual branches. Pushing and pulling in Fossil is + all or nothing. This is in keeping with Fossil's emphasis on maintaining + a complete record and on sharing everything between all developers. Index: www/fossil_logo_small.gif ================================================================== --- www/fossil_logo_small.gif +++ www/fossil_logo_small.gif cannot compute difference between binary files ADDED www/fossil_prompt.sh Index: www/fossil_prompt.sh ================================================================== --- www/fossil_prompt.sh +++ www/fossil_prompt.sh @@ -0,0 +1,64 @@ + +#------------------------------------------------------------------------- +# get_fossil_data() +# +# If the current directory is part of a fossil checkout, then populate +# a series of global variables based on the current state of that +# checkout. Variables are populated based on the output of the [fossil info] +# command. +# +# If the current directory is not part of a fossil checkout, set global +# variable $fossil_info_project_name to an empty string and return. +# +function get_fossil_data() { + fossil_info_project_name="" + eval `get_fossil_data2` +} +function get_fossil_data2() { + fossil info 2> /dev/null | sed 's/"//g'|grep "^[^ ]*:" | while read LINE ; do + local field=`echo $LINE | sed 's/:.*$//' | sed 's/-/_/'` + local value=`echo $LINE | sed 's/^[^ ]*: *//'` + echo fossil_info_${field}=\"${value}\" + done +} + +#------------------------------------------------------------------------- +# set_prompt() +# +# Set the PS1 variable. If the current directory is part of a fossil +# checkout then the prompt contains information relating to the state +# of the checkout. +# +# Otherwise, if the current directory is not part of a fossil checkout, it +# is set to a fairly standard bash prompt containing the host name, user +# name and current directory. +# +function set_prompt() { + get_fossil_data + if [ -n "$fossil_info_project_name" ] ; then + project=$fossil_info_project_name + checkout=`echo $fossil_info_checkout | sed 's/^\(........\).*/\1/'` + date=`echo $fossil_info_checkout | sed 's/^[^ ]* *..//' | sed 's/:[^:]*$//'` + tags=$fossil_info_tags + local_root=`echo $fossil_info_local_root | sed 's/\/$//'` + local=`pwd | sed "s*${local_root}**" | sed "s/^$/\//"` + + # Color the first part of the prompt blue if this is a clean checkout. + # Or red if it has been edited in any way at all. Set $c1 to the escape + # sequence required to change the type to the required color. And $c2 + # to the sequence that changes it back. 
+ # + if [ -n "`fossil chang`" ] ; then + c1="\[\033[1;31m\]" # red + else + c1="\[\033[1;34m\]" # blue + fi + c2="\[\033[0m\]" + + PS1="$c1${project}.${tags}$c2 ${checkout} (${date}):${local}$c1\$$c2 " + else + PS1="\u@\h:\w\$ " + fi +} + +PROMPT_COMMAND=set_prompt ADDED www/fossil_prompt.wiki Index: www/fossil_prompt.wiki ================================================================== --- www/fossil_prompt.wiki +++ www/fossil_prompt.wiki @@ -0,0 +1,23 @@ +Fossilized Bash Prompt +

        2013-02-21

        + +Dan Kennedy has contributed a +[./fossil_prompt.sh?mimetype=text/plain | bash script] +that manipulates the bash prompt to show the status of +the Fossil repository that the user is currently visiting. +The prompt shows the branch, version, and timestamp for the +current checkout, and the prompt changes colors from blue to +red when there are uncommitted changes. + +To try out this script, simply download it from the link above, then +type: + +
        +. fossil_prompt.sh
        +
        + +For a permanent installation, you can graft the code into your +.bashrc file in your home directory. + +The code is very simple (only 32 non-comment lines, as of this writing) +and hence easy to customized. ADDED www/hacker-howto.wiki Index: www/hacker-howto.wiki ================================================================== --- www/hacker-howto.wiki +++ www/hacker-howto.wiki @@ -0,0 +1,14 @@ +Fossil Hackers How-To + +The following links are of interest to programmers who want to modify +or enhance Fossil. Ordinary users can safely ignore this information. + + * [./build.wiki | How To Compile And Install Fossil] + * [./customskin.md | Theming Fossil] + * [./makefile.wiki | The Fossil Build Process] + * [./tech_overview.wiki | A Technical Overview of Fossil] + * [./adding_code.wiki | Adding Features To Fossil] + * [./contribute.wiki|Contributing Code Or Enhancements To The Fossil Project] + * [./style.wiki | Coding Style Guidelines] + * [./checkin.wiki | Pre-checkin Checklist] + * [../test/release-checklist.wiki | Release Checklist] ADDED www/hints.wiki Index: www/hints.wiki ================================================================== --- www/hints.wiki +++ www/hints.wiki @@ -0,0 +1,61 @@ +Fossil Tips And Usage Hints + + 1. Click on nodes of any timeline graph to see diffs between the two + selected versions. + + 2. Add the "--tk" option to "[/help?cmd=diff | fossil diff]" commands + to get a pop-up + window containing a complete side-by-side diff. (NB: The pop-up + window is run as a separate Tcl/Tk process, so you will need to + have Tcl/Tk installed on your machine for this to work. Visit + [http://www.activestate.com/activetcl] to for a quick download of + Tcl/Tk if you do not already have it on your system.) + + 3. The "[/help/clean | fossil clean -x]" command is a great + alternative to "make clean". You can use "[/help/clean | fossil clean -f]" + as a slightly safer alternative if the "ignore-glob" setting is + not set. WARNING: make sure you did a "fossil add" for all source-files + you plan to commit, otherwise those files will be deleted without warning. + + 4. Use "[/help?cmd=all | fossil all changes]" to look for any uncommitted + edits in any of your Fossil projects. Use + "[/help?cmd=all | fossil all pull]" on your laptop + prior to going off network (for example, on a long plane ride) + to make sure you have all the latest content locally. Then run + "[/help/all|fossil all push]" when you get back online to upload + your changes. + + 5. To see an entire timeline, type "all" into the "Max:" entry box. + + 6. You can manually add a "c=CHECKIN" query parameter to the timeline + URL to get a snapshot of what was going on about the time of some + check-in. The "CHECKIN" can be + [./checkin_names.wiki | any valid check-in or version name], including + tags, branch names, and dates. For example, to see what was going + on in the Fossil repository on 2008-01-01, visit + [http://www.fossil-scm.org/fossil/timeline?c=2008-01-01]. + + 7. Further to the previous two hints, there are lots of query parameters + that you can add to timeline pages. The available query parameters + are tersely documented [/help?cmd=/timeline | here]. + + 8. You can run "[/help?cmd=test-diff | fossil test-diff --tk $file1 $file2]" + to get a pop-up window with side-by-side diffs of two files, even if + neither of the two files is part of any Fossil repository. Note that + this command is "test-diff", not "diff". + + 9. 
On web pages showing the content of a file (for example + [http://www.fossil-scm.org/fossil/artifact/c7dd1de9f]) you can manually + add a query parameter of the form "ln=FROM,TO" to the URL that + will cause the range of lines indicated to be highlighted. This + is useful in pointing out a few lines of code using a hyperlink + in an email or text message. Example: + [http://www.fossil-scm.org/fossil/artifact/c7dd1de9f?ln=28,30]. + Adding the "ln" query parameter without any argument simply turns + on line numbers. This feature only works right with files with + a mimetype of text/plain, of course. + + 10. When editing documentation to be checked in as managed files, you can + preview what the documentation will look like by using the special + "ckout" branch name in the "doc" URL while running "fossil ui". + See the [./embeddeddoc.wiki | embedded documentation] for details. Index: www/index.wiki ================================================================== --- www/index.wiki +++ www/index.wiki @@ -1,141 +1,154 @@ -Fossil Home Page - - -

        - -Fossil: -Simple, high-reliability, distributed software configuration management - -

        - - -

        Why Use Fossil?

        - - -
        - - - -
        -

        Quick Links

        +Home + +

        What Is Fossil?

        + +
          -
        • [./quickstart.wiki | Quick Start]
        • [http://www.fossil-scm.org/download.html | Download] +
        • [./quickstart.wiki | Quick Start]
        • [./build.wiki | Install] -
        • [../COPYRIGHT-GPL2.txt | License] -
        • [/timeline | Recent changes] +
        • [../COPYRIGHT-BSD2.txt | License]
        • [./faq.wiki | FAQ] +
        • [./changes.wiki | Change Log] +
        • [./hacker-howto.wiki | Hacker How-To] +
        • [./hints.wiki | Tip & Hints] +
        • [./permutedindex.html | Documentation Index] +
        • [http://www.fossil-scm.org/schimpf-book/home | Jim Schimpf's book]
        • Mailing list
            -
          • [http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users | sign-up] +
          • [http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/fossil-users | sign-up]
          • [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org | archives] -
              +
          -
        -
        -
        - -There are plenty of open-source version control systems available on the -internet these days. What makes Fossil worthy of attention? - - 1. Bug Tracking And Wiki - + + +

        Fossil is a simple, high-reliability, distributed software configuration +management system with these advanced features: + + 1. Integrated Bug Tracking, Wiki, and Technotes - In addition to doing [./concepts.wiki | distributed version control] like Git and Mercurial, - Fossil also supports [./bugtheory.wiki | distributed bug tracking] and - [./wikitheory.wiki | distributed wiki] all in a single - integrated package. - - 2. Web Interface - - Fossil has a built-in and easy-to-use [./webui.wiki | web interface] - that simplifies project tracking and promotes situational awareness. - Simply type "fossil ui" from within any check-out and Fossil - automatically opens your web browser in a page that gives detailed - history and status information on that project. - - 3. Autosync - + Fossil also supports [./bugtheory.wiki | bug tracking], + [./wikitheory.wiki | wiki], and [./event.wiki | technotes]. + + 2. Built-in Web Interface - + Fossil has a built-in and intuitive [./webui.wiki | web interface] + with a rich assortment of information pages + ([./webpage-ex.md|examples]) designed to promote situational awareness. + + This entire website¹ + is just a running instance of Fossil. The pages you see here + are all [./wikitheory.wiki | wiki] or + [./embeddeddoc.wiki | embedded documentation]. + When you clone Fossil from one of its + [./selfhost.wiki | self-hosting repositories], + you get more than just source code - you get this entire website. + (¹except the + [http://www.fossil-scm.org/download.html | download] page) + + 3. Self-Contained - + Fossil is a single self-contained stand-alone executable. + To install, simply download a + precompiled binary + for Linux, Mac, OpenBSD, or Windows and put it on your $PATH. + [./build.wiki | Easy-to-compile source code] is also available. + + 4. Simple Networking - + No custom protocols or TCP ports. + Fossil uses ordinary HTTP (or HTTPS or SSH) + for network communications, so it works fine from behind + restrictive firewalls, including [./quickstart.wiki#proxy|proxies]. + The protocol is + [./stats.wiki | bandwidth efficient] to the point that Fossil can be + used comfortably over dial-up. + + 5. CGI/SCGI Enabled - No server is required, but if you want to + set one up, Fossil supports four easy + [./server.wiki | server configurations]. + + 6. Autosync - Fossil supports [./concepts.wiki#workflow | "autosync" mode] which helps to keep projects moving - forward by reducing the amount of needless + forward by reducing the amount of needless [./branching.wiki | forking and merging] often associated with distributed projects. - 4. Self-Contained - - Fossil is a single stand-alone executable that contains everything - needed to do configuration management. - Installation is trivial: simply download a - precompiled binary - for Linux, Mac, or Windows and put it on your $PATH. - [./build.wiki | Easy-to-compile source code] is available for - users on other platforms. Fossil sources are also mostly self-contained, - requiring only the "zlib" library and the standard C library to build. - - 5. Simple Networking - - Fossil uses plain old HTTP (with - [./quickstart.wiki#proxy | proxy support]) - for all network communications, meaning that it works fine from behind - restrictive firewalls. The protocol is - [./stats.wiki | bandwidth efficient] to the point that Fossil can be - used comfortably over a dial-up internet connection. - - 6. CGI Enabled - - No server is required to use fossil. But a - server does make collaboration easier. 
Fossil supports three different - yet simple [./quickstart.wiki#serversetup | server configurations]. - The most popular is a 2-line CGI script. This is the approach - used by the [./selfhost.wiki | self-hosting fossil repositories]. - - 7. Robust & Reliable - - Fossil stores content in an SQLite database so that transactions are - atomic even if interrupted by a power loss or system crash. Furthermore, - automatic [./selfcheck.wiki | self-checks] verify that all aspects of - the repository are consistent prior to each commit. In over two years - of operation, no work has ever been lost after having been committed to - a Fossil repository. + 7. Robust & Reliable - + Fossil stores content using an [./fileformat.wiki | enduring file format] + in an SQLite database so that transactions are + atomic even if interrupted by a power loss or system crash. + Automatic [./selfcheck.wiki | self-checks] verify that all aspects of + the repository are consistent prior to each commit. + + 8. Free and Open-Source - Uses the [../COPYRIGHT-BSD2.txt|2-clause BSD license].


        Links For Fossil Users:

        - * [./reviews.wiki | Testimonials] from satisfied fossil users. + * "Fuel" is cross-platform GUI front-end for Fossil + written in Qt. [http://fuelscm.org/]. + Fuel is an independent project run by a different group of + developers. + * [./reviews.wiki | Testimonials] from satisfied Fossil users and + [./quotes.wiki | Quotes] about Fossil and other DVCSes. * [./faq.wiki | FAQ] - * The [./concepts.wiki | concepts] behind fossil - * [./quickstart.wiki | Quick Start] guide to using fossil - * [./qandc.wiki | Questions & Criticisms] directed at fossil. - * [./build.wiki | Building And Installing] + * The [./concepts.wiki | concepts] behind Fossil. + * [./quickstart.wiki | Quick Start] guide to using Fossil. + * [./qandc.wiki | Questions & Criticisms] directed at Fossil. + * [./build.wiki | Compiling and Installing] * Fossil supports [./embeddeddoc.wiki | embedded documentation] that is versioned along with project source code. - * Fossil uses an [./fileformat.wiki | enduring file format] that is + * Fossil uses an [./fileformat.wiki | enduring file format] that is designed to be readable, searchable, and extensible by people not yet born. * A tutorial on [./branching.wiki | branching], what it means and how - to do it using fossil. + to do it using Fossil. * The [./selfcheck.wiki | automatic self-check] mechanism helps insure project integrity. * Fossil contains a [./wikitheory.wiki | built-in wiki]. + * An [./event.wiki | Event] is a special kind of wiki page associated + with a point in time rather than a name. + * [./settings.wiki | Settings] control the behaviour of Fossil. + * [./ssl.wiki | Use SSL] to encrypt communication with the server. * There is a - [http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users | mailing list] (with publically readable - [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org | archives] - available for discussing fossil issues. + [http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users | mailing list] + (with publicly readable + [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org | archives]) + available for discussing Fossil issues. * [./stats.wiki | Performance statistics] taken from real-world projects - hosted on fossil. - * How to [./shunning.wiki | delete content] from a fossil repository. + hosted on Fossil. + * How to [./shunning.wiki | delete content] from a Fossil repository. * How Fossil does [./password.wiki | password management]. - * Some (unfinished but expanding) extended - [./reference.wiki | reference documentation] for the fossil command line. + * On-line [/help | help]. * Documentation on the - [http://www.sqliteconcepts.org/THManual.pdf | TH1 Script Language] used - to configure the ticketing subsystem. + [http://www.sqliteconcepts.org/THManual.pdf | TH1 scripting language], + used to customize [./custom_ticket.wiki | ticketing], and several other + subsystems, including [./customskin.md | theming]. + * List of [./th1.md | TH1 commands provided by Fossil itself] that expose + its key functionality to TH1 scripts. + * A free hosting server for Fossil repositories is available at + [http://chiselapp.com/]. * How to [./server.wiki | set up a server] for your repository. * Customizing the [./custom_ticket.wiki | ticket system]. + * Methods to [./checkin_names.wiki | identify a specific check-in]. + * [./inout.wiki | Import and export] from and to Git. + * [./fossil-v-git.wiki | Fossil versus Git]. 
+ * [./fiveminutes.wiki | Up and running in 5 minutes as a single user] + (contributed by Gilles Ganault on 2013-01-08). + * [./antibot.wiki | How Fossil defends against abuse by spiders and bots]. -

        Links For Fossil Developer:

        +

        Links For Fossil Developers:

        + * [./contribute.wiki | Contributing] code or documentation to the + Fossil project. * [./theory1.wiki | Thoughts On The Design Of Fossil]. * [./pop.wiki | Principles Of Operation] + * [./tech_overview.wiki | A Technical Overview Of Fossil]. * The [./fileformat.wiki | file format] used by every content file stored in the repository. * The [./delta_format.wiki | format of deltas] used to efficiently store changes between file revisions. * The [./delta_encoder_algorithm.wiki | encoder algorithm] used to efficiently generate deltas. * The [./sync.wiki | synchronization protocol]. ADDED www/inout.wiki Index: www/inout.wiki ================================================================== --- www/inout.wiki +++ www/inout.wiki @@ -0,0 +1,63 @@ +Import And Export + +Fossil has the ability to import and export repositories from and to +[http://git-scm.com/ | Git]. And since most other version control +systems will also import/export from Git, that means that you can +import/export a Fossil repository to most version control systems using +Git as an intermediary. + +

        Git → Fossil

        + +To import a Git repository into Fossil, run commands like this: + +
        +cd git-repo
        +git fast-export --all | fossil import --git new-repo.fossil
        +
        + +In other words, simply pipe the output of the "git fast-export" command +into the "fossil import --git" command. The 3rd argument to the "fossil import" +command is the name of a new Fossil repository that is created to hold the Git +content. + +The --git option is not actually required. The git-fast-export file format +is currently the only VCS interchange format that Fossil understands. But +future versions of Fossil might be enhanced to understand other VCS +interchange formats, and so for compatibility, use of the +--git option is recommended. + +

        Fossil → Git

        + +To convert a Fossil repository into a Git repository, run commands like +this: + +
        +git init new-repo
        +cd new-repo
        +fossil export --git ../repo.fossil | git fast-import
        +
        + +In other words, create a new Git repository, then pipe the output from the +"fossil export --git" command into the "git fast-import" command. + +Note that the "fossil export --git" command only exports the versioned files. +Tickets and wiki and events are not exported, since Git does not understand +those concepts. + +As with the "import" command, the --git option is not required +since the git-fast-export file format is currently the only VCS interchange +format that Fossil will generate. However, +future versions of Fossil might add the ability to generate other +VCS interchange formats, and so for compatibility, the use of the --git +option recommended. + +An anonymous user sends this comment: + +
        +The main Fossil branch is called "trunk", while the main git branch is +called "master". After you've exported your FOSSIL repo to git, you won't +see any files and gitk will complain about a missing "HEAD". You can +resolve this problem by merging "trunk" with "master" +(first verify using git status that you are on the "master" branch): +git merge trunk +
        ADDED www/makefile.wiki Index: www/makefile.wiki ================================================================== --- www/makefile.wiki +++ www/makefile.wiki @@ -0,0 +1,268 @@ +The Fossil Build Process + +

        1.0 Introduction

        + +The build process for Fossil is tricky in that the source code +needs to be processed by three different preprocessor programs +before it is compiled. Most users will download a +[http://www.fossil-scm.org/download.html | precompiled binary] +so this is of no consequence to them, and even those who +want to compile the code themselves can use one of the +[./build.wiki | existing makefiles]. +So must people do not need to be concerned with the +build complexities of Fossil. But hard-core developers who desire +a deep understanding of how Fossil is put together can benefit +from reviewing this article. + + +

        2.0 Source Code Tour

        + +The source code for Fossil is found in the +[/dir?ci=trunk&name=src | src/] subdirectory of the +source tree. The src/ subdirectory contains all code, including +the code for the separate preprocessor programs. + +Each preprocessor program is a separate C program implemented in +a single file of C source code. The three preprocessor programs +are: + + 1. mkindex.c + 2. translate.c + 3. makeheaders.c + +Fossil makes use of [http://www.sqlite.org/ | SQLite] for on-disk +storage. The SQLite implementation is contained in three source +code files that do not participate in the preprocessing steps. +These three files that implement SQLite are: + + 4. sqlite3.c + 5. sqlite3.h + 6. shell.c + +The sqlite3.c and sqlite3.h source files are byte-for-byte copies of a +standard [http://www.sqlite.org/amalgamation.html | amalgamation]. +The shell.c source file is code for the SQLite +[http://www.sqlite.org/sqlite.html | command-line shell] that is used +to help implement the [/help/sqlite3 | fossil sql] command. The +shell.c source file is also a byte-for-byte copy of the +shell.c file from the SQLite release. + +The TH1 script engine is implemented using files: + + 7. th.c + 8. th.h + +These two files are imports like the SQLite source files, +and so are not preprocessed. + +The VERSION.h header file is generated from other information sources +using a small program called: + + 9. mkversion.c + +The builtin_data.h header file contains the definitions of C-language +byte-array constants that contain various resources such as scripts and +images. The builtin_data.h header file is generate from the original +resource files using a small program called: + + 10 mkbuiltin.c + +The src/ subdirectory also contains documentation about the +makeheaders preprocessor program: + + 11. [../src/makeheaders.html | makeheaders.html] + +Click on the link to read this documentation. In addition there is +a [http://www.tcl-lang.org/ | Tcl] script used to build the various makefiles: + + 12. makemake.tcl + +Running this Tcl script will automatically regenerate all makefiles. +In order to add a new source file to the Fossil implementation, simply +edit makemake.tcl to add the new filename, then rerun the script, and +all of the makefiles for all targets will be rebuild. + +Finally, there is one of the makefiles generated by makemake.tcl: + + 13. main.mk + +The main.mk makefile is invoked from the Makefile in the top-level +directory. The main.mk is generated by makemake.tcl and should not +be hand edited. Other makefiles generated by makemake.tcl are in +other subdirectories (currently all in the win/ subdirectory). + +All the other files in the src/ subdirectory (79 files at the time of +this writing) are C source code files that are subject to the +preprocessing steps described below. In the sequel, we will call these +other files "src.c" in order to have a convenient name. The reader +should understand that whenever "src.c" or "src.h" is used in the text +that follows, we really mean all (79) other source files other than +the exceptions described above. + +

        3.0 Automatically generated files

        + +The "VERSION.h" header file contains some C preprocessor macros that +identify the version of Fossil that is to be build. The VERSION.h file is +generated automatically from information extracted from the "manifest", +"manifest.uuid", and "VERSION" source files in the root directory of the +source tree. +(The "manifest" and "manifest.uuid" files are automatically generated and +updated by Fossil itself. See the [/help/setting | fossil set manifest] +command for additional information.) + +The VERSION.h header file is generated by +a C program: src/mkversion.c. +To run the VERSION.h generator, first compile the src/mkversion.c + source file into a command-line program (named "mkversion.exe") +then run: + +
        +mkversion.exe manifest.uuid manifest VERSION >VERSION.h
        +
        + +The pathnames in the above command might need to be adjusted to get the +directories right. The point is that the manifest.uuid, manifest, and +VERSION files +in the root of the source tree are the three arguments and +the generated VERSION.h file appears on standard output. + +The builtin_data.h header file is generated by a C program: src/mkbuiltin.c. +The builtin_data.h file contains C-langauge byte-array definitions for +the content of resource files used by Fossil. To generate the +builtin_data.h file, first compile the mkbuiltin.c program, then run: + +
        +mkbuiltin.exe diff.tcl OtherFiles... >builtin_data.h
        +
        + +At the time of this writing, the "diff.tcl" script (a Tcl/Tk script used +to generate implement --tk option on the diff command) is the only resource +file processed using mkbuiltin.exe. However, new resources will likely be +added using this facility in future versions of Fossil. + + +

        4.0 Preprocessing

        + +There are three preprocessors for the Fossil sources. The mkindex +and translate preprocessors can be run in any order. The makeheaders +preprocessor must be run after translate. + +

        4.1 The mkindex preprocessor

        + +The mkindex program scans the "src.c" source files looking for special +comments that identify routines that implement various Fossil commands, +web interface methods, and help text comments. The mkindex program +generates some C code that Fossil uses in order to dispatch commands and +HTTP requests and to show on-line help. Compile the mkindex program +from the mkindex.c source file. Then run: + +
        +./mkindex src.c >page_index.h
        +
        + +Note that "src.c" in the above is a stand-in for the (79) regular source +files of Fossil - all source files except for the exceptions described in +section 2.0 above. + +The output of the mkindex program is a header file that is #include-ed by +the main.c source file during the final compilation step. + +

        4.2 The translate preprocessor

        + +The translate preprocessor looks for lines of source code that begin +with "@" and converts those lines into string constants or (depending on +context) into special "printf" operations for generating the output of +an HTTP request. The translate preprocessor is a simple C program whose +sources are in the translate.c source file. The translate preprocess +is run on each of the other ordinary source files separately, like this: + +
        +./translate src.c >src_.c
        +
        + +In this case, the "src.c" file represents any single source file from the +set of ordinary source files as described in section 2.0 above. Note that +each source file is translated separately. By convention, the names of +the translated source files are the names of the input sources with a +single "_" character at the end. But a new makefile can use any naming +convention it wants - the "_" is not critical to the build process. + +After being translated, the output files (the "src_.c" files) should be +used for all subsequent preprocessing and compilation steps. + +

        4.3 The makeheaders preprocessor

        + +For each C source module "src.c", there is an automatically generated +header module "src.h" that contains all of the datatype and procedure +declarations needed by the source module. These header files are generated +automatically by the makeheaders program. The sources to makeheaders +are contained in a single file "makeheaders.c". Additional documentation +on makeheaders can be found in [../src/makeheaders.html | src/makeheaders.html]. + +The makeheaders program is run once. It scans all inputs source files and +generates header files for each one. Note that the sqlite3.c and shell.c +source files are not scanned by makeheaders. Makeheaders only runs over +"ordinary" source files, not the exceptional source files. However, +makeheaders also uses some extra header files as input. The general format +is like this: + +
        +makeheaders src_.c:src.h sqlite3.h th.h VERSION.h
        +
        + +In the example above the "src_.c" and "src.h" names represent all of the +(79) ordinary C source files, each as a separate argument. + +
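+ +As a purely illustrative expansion of that command (the file names below
+are assumptions chosen for the example, not the complete list), the real
+invocation has one NAME_.c:NAME.h pair for every ordinary source file,
+followed by the extra header files:
+
+makeheaders add_.c:add.h blob_.c:blob.h checkin_.c:checkin.h \
+            update_.c:update.h sqlite3.h th.h VERSION.h
+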

        5.0 Compilation

        + +After all generated files have been created and all ordinary source files +have been preprocessed, the generated and preprocessed files can be +combined into a single executable using a C compiler. This can be done +all at once, or each preprocessed source file can be compiled into a +separate object code file and the resulting object code files linked +together in a final step. + +Some files require special C-preprocessor macro definitions. +When compiling sqlite3.c, the following macros are recommended: + + * -DSQLITE_OMIT_LOAD_EXTENSION=1 + * -DSQLITE_ENABLE_DBSTAT_VTAB=1 + * -DSQLITE_ENABLE_FTS4=1 + * -DSQLITE_ENABLE_LOCKING_STYLE=0 + * -DSQLITE_THREADSAFE=0 + * -DSQLITE_DEFAULT_FILE_FORMAT=4 + * -DSQLITE_ENABLE_EXPLAIN_COMMENTS=1 + +The first three symbol definitions above are required; the others are merely +recommended. Extension loading is omitted as a security measure. The dbstat +virtual table is needed for the [/help?cmd=/repo-tabsize|/repo-tabsize] page. +FTS4 is needed for the search feature. Fossil is single-threaded so mutexing +is disabled in SQLite as a performance enhancement. The +SQLITE_ENABLE_EXPLAIN_COMMENTS option makes the output of "EXPLAIN" queries +in the "[/help?cmd=sqlite3|fossil sql]" command much more readable. + +When compiling the shell.c source file, these macros are required: + + * -Dmain=sqlite3_main + * -DSQLITE_OMIT_LOAD_EXTENSION=1 + +The "main()" routine in the shell must be changed into sqlite3_main() +to prevent it from colliding with the real main() in Fossil, and to give +Fossil an entry point to jump to when the +[/help/sqlite3 | fossil sql] command is invoked. + +All the other source code files can be compiled without any special +options. + +
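+ +As a rough sketch of those special-case compiles (this assumes gcc and the
+translated file names described in section 4.2; the object-file names are
+illustrative only, and a real build should follow src/main.mk), the commands
+might look like:
+
+gcc -c -DSQLITE_OMIT_LOAD_EXTENSION=1 -DSQLITE_ENABLE_DBSTAT_VTAB=1 \
+       -DSQLITE_ENABLE_FTS4=1 -DSQLITE_ENABLE_LOCKING_STYLE=0 \
+       -DSQLITE_THREADSAFE=0 -DSQLITE_DEFAULT_FILE_FORMAT=4 \
+       -DSQLITE_ENABLE_EXPLAIN_COMMENTS=1 sqlite3.c -o sqlite3.o
+gcc -c -Dmain=sqlite3_main -DSQLITE_OMIT_LOAD_EXTENSION=1 shell.c -o shell.o
+gcc -c main_.c -o main.o
+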

        6.0 Linkage

        + +Fossil needs to be linked against [http://www.zlib.net | zlib]. If +the HTTPS option is enabled, then it will also need to link against +the appropriate SSL implementation. And, of course, Fossil needs to +link against the standard C library. No other libraries or external +dependencies are used. + +
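+ +As an illustrative sketch only (the object-file list and library names are
+assumptions, and OpenSSL is just one possible SSL implementation), the final
+link step might resemble:
+
+gcc *.o -o fossil -lz                    # HTTPS disabled: only zlib is needed
+gcc *.o -o fossil -lz -lssl -lcrypto     # HTTPS enabled, linking against OpenSSL
+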

        7.0 See Also

        + + * [./tech_overview.wiki | A Technical Overview Of Fossil] + * [./adding_code.wiki | How To Add Features To Fossil] Index: www/mkdownload.tcl ================================================================== --- www/mkdownload.tcl +++ www/mkdownload.tcl @@ -1,79 +1,88 @@ #!/usr/bin/tclsh # -# Run this script to build the "download" page on standard output. -# -# -puts \ -{ - -Fossil: Downloads - - - -
        - -
        Fossil Downloads
        -
        - +# Run this script to build the "download.html" page. Also generate +# the fossil_download_checksums.html page. +# +# +set out [open download.html w] +fconfigure $out -encoding utf-8 -translation lf +puts $out \ +{ + + + + Fossil: Download + + + + + +
        +

        Fossil

        Download
        +
        +

        -

        -Click on links below to download prebuilt binaries and source tarballs for -recent versions of Fossil. -The historical source code is also available in the -self-hosting -Fossil repositories. -

        - -

        -Important Note: -After upgrading to a newer version of fossil, it is always a good idea -to run: -

        -fossil all rebuild
        -
        -Running "rebuild" this way is not always necessary, but it never hurts. -

        +
        } +puts $out \ +"To install Fossil → download the stand-alone executable" +puts $out \ +{and put it on your $PATH. +

        +RPMs available + +here. +Cryptographic checksums for download files are +here. +

        +
        } # Find all all unique timestamps. # foreach file [glob -nocomplain download/fossil-*.zip] { - if {[regexp {(\d+).zip$} $file all datetime] - && [string length $datetime]==14} { - set adate($datetime) 1 - } -} - -# Do all dates from newest to oldest -# -foreach datetime [lsort -decr [array names adate]] { - set dt [string range $datetime 0 3]-[string range $datetime 4 5]- - append dt "[string range $datetime 6 7] " - append dt "[string range $datetime 8 9]:[string range $datetime 10 11]:" - append dt "[string range $datetime 12 13]" - set link [string map {{ } +} $dt] - set hr http://www.fossil-scm.org/fossil/timeline?c=$link&y=ci - puts "" - - foreach {prefix suffix img desc} { - fossil-linux-x86 zip linux.gif {Linux x86} - fossil-linux-amd64 zip linux64.gif {Linux x86_64} - fossil-macosx-x86 zip mac.gif {Mac 10.5 x86} - fossil-w32 zip win32.gif {Windows} - fossil-src tar.gz src.gif {Source Tarball} - } { - set filename download/$prefix-$datetime.$suffix + if {[regexp -- {-(\d\.\d+).zip$} $file all version]} { + set avers($version) 1 + } +} + +# Do all versions from newest to oldest +# +foreach vers [lsort -decr -real [array names avers]] { + set hr "/fossil/timeline?c=version-$vers;y=ci" + puts $out "" + puts $out "" + + foreach {prefix img desc} { + fossil-linux-x86 linux.gif {Linux 3.x x86} + fossil-macosx-x86 mac.gif {Mac 10.x x86} + fossil-openbsd-x86 openbsd.gif {OpenBSD 5.x x86} + fossil-w32 win32.gif {Windows} + fossil-src src.gif {Source Tarball} + } { + set basename download/$prefix-$vers + set filename $basename.tar.gz + if {![file exists $basename.tar.gz]} { + set filename $basename.zip + } if {[file exists $filename]} { set size [file size $filename] set units bytes if {$size>1024*1024} { set size [format %.2f [expr {$size/(1024.0*1024.0)}]] @@ -80,20 +89,50 @@ set units MiB } elseif {$size>1024} { set size [format %.2f [expr {$size/(1024.0)}]] set units KiB } - puts "" + puts $out "" } else { - puts "" + puts $out "" } } - puts "" + puts $out "" + if {[file exists download/releasenotes-$vers.html]} { + puts $out "" + } } -puts "" +puts $out "" -puts {

        " - puts "Fossil snapshot as of $dt
        " - puts "

        " + puts $out "
        Version $vers
        " + puts $out "
        " - puts "
        $desc

        " - puts "$size $units
        " + puts $out "
        $desc

        " + puts $out "$size $units
          
        " + set rn [open download/releasenotes-$vers.html] + fconfigure $rn -encoding utf-8 + puts $out "[read $rn]" + close $rn + puts $out "


        +puts $out {
    • } + +close $out + +# Generate the checksum page +# +set out [open fossil_download_checksums.html w] +fconfigure $out -encoding utf-8 -translation lf +puts $out { +Fossil Download Checksums + +

      Checksums For Fossil Downloads

      +

      The following table shows the SHA1 checksums for the precompiled +binaries available on the +Fossil website.

      +
      }
      +
      +foreach file [lsort [glob -nocomplain download/fossil-*.zip]] {
      +  set sha1sum [lindex [exec sha1sum $file] 0]
      +  puts $out "$sha1sum   [file tail $file]"
      +}
      +puts $out {
      } +close $out ADDED www/mkindex.tcl Index: www/mkindex.tcl ================================================================== --- www/mkindex.tcl +++ www/mkindex.tcl @@ -0,0 +1,115 @@ +#!/bin/sh +# +# Run this TCL script to generate a WIKI page that contains a +# permuted index of the various documentation files. +# +# tclsh mkindex.tcl >permutedindex.html +# + +set doclist { + adding_code.wiki {Adding New Features To Fossil} + adding_code.wiki {Hacking Fossil} + antibot.wiki {Defense against Spiders and Bots} + blame.wiki {The Annotate/Blame Algorithm Of Fossil} + branching.wiki {Branching, Forking, Merging, and Tagging} + bugtheory.wiki {Bug Tracking In Fossil} + build.wiki {Compiling and Installing Fossil} + checkin_names.wiki {Check-in And Version Names} + checkin.wiki {Check-in Checklist} + changes.wiki {Fossil Changelog} + copyright-release.html {Contributor License Agreement} + concepts.wiki {Fossil Core Concepts} + contribute.wiki {Contributing Code or Documentation To The Fossil Project} + customgraph.md {Theming: Customizing the Timeline Graph} + customskin.md {Theming: Customizing The Appearance of Web Pages} + custom_ticket.wiki {Customizing The Ticket System} + delta_encoder_algorithm.wiki {Fossil Delta Encoding Algorithm} + delta_format.wiki {Fossil Delta Format} + embeddeddoc.wiki {Embedded Project Documentation} + event.wiki {Events} + faq.wiki {Frequently Asked Questions} + fileformat.wiki {Fossil File Format} + fiveminutes.wiki {Update and Running in 5 Minutes as a Single User} + foss-cklist.wiki {Checklist For Successful Open-Source Projects} + fossil-from-msvc.wiki {Integrating Fossil in the Microsoft Express 2010 IDE} + fossil-v-git.wiki {Fossil Versus Git} + hacker-howto.wiki {Hacker How-To} + hints.wiki {Fossil Tips And Usage Hints} + index.wiki {Home Page} + inout.wiki {Import And Export To And From Git} + makefile.wiki {The Fossil Build Process} + newrepo.wiki {How To Create A New Fossil Repository} + password.wiki {Password Management And Authentication} + pop.wiki {Principles Of Operations} + private.wiki {Creating, Syncing, and Deleting Private Branches} + qandc.wiki {Questions And Criticisms} + quickstart.wiki {Fossil Quick Start Guide} + quotes.wiki + {Quotes: What People Are Saying About Fossil, Git, and DVCSes in General} + ../test/release-checklist.wiki {Pre-Release Testing Checklist} + reviews.wiki {Reviews} + selfcheck.wiki {Fossil Repository Integrity Self Checks} + selfhost.wiki {Fossil Self Hosting Repositories} + server.wiki {How To Configure A Fossil Server} + settings.wiki {Fossil Settings} + shunning.wiki {Shunning: Deleting Content From Fossil} + stats.wiki {Performance Statistics} + style.wiki {Source Code Style Guidelines} + ssl.wiki {Using SSL with Fossil} + sync.wiki {The Fossil Sync Protocol} + tech_overview.wiki {A Technical Overview Of The Design And Implementation + Of Fossil} + tech_overview.wiki {SQLite Databases Used By Fossil} + th1.md {The TH1 Scripting Language} + tickets.wiki {The Fossil Ticket System} + theory1.wiki {Thoughts On The Design Of The Fossil DVCS} + webpage-ex.md {Webpage Examples} + webui.wiki {The Fossil Web Interface} + wikitheory.wiki {Wiki In Fossil} +} + +set permindex {} +set stopwords {fossil and a in of on the to are about used by for or} +foreach {file title} $doclist { + set n [llength $title] + regsub -all {\s+} $title { } title + lappend permindex [list $title $file] + for {set i 0} {$i<$n-1} {incr i} { + set prefix [lrange $title 0 $i] + set suffix [lrange $title [expr {$i+1}] end] + set firstword 
[string tolower [lindex $suffix 0]] + if {[lsearch $stopwords $firstword]<0} { + lappend permindex [list "$suffix — $prefix" $file] + } + } +} +set permindex [lsort -dict -index 0 $permindex] +set out [open permutedindex.html w] +fconfigure $out -encoding utf-8 -translation lf +puts $out \ +"
      " +puts $out { +
      +
      + + +
      +
      +

      Primary Documents:

      + + +

      Permuted Index:

      +
        } +foreach entry $permindex { + foreach {title file} $entry break + puts $out "
      • $title
      • " +} +puts $out "
      " Index: www/newrepo.wiki ================================================================== --- www/newrepo.wiki +++ www/newrepo.wiki @@ -1,6 +1,6 @@ -

      HOWTO: creating a new repository

      +How To Create A New Fossil Repository

      The [/doc/tip/www/quickstart.wiki|quickstart guide] explains how to get up and running with fossil. But once you're running, what can you do with it? This document will walk you through the process of creating a fossil repository, populating it with files, and then @@ -32,11 +32,11 @@ The ui command starts up a server (with an optional -port NUMBER argument) and launches a web browser pointing at the fossil server. From there it takes just a few moments to configure the repo. Most importantly, go to the Admin menu, then the Users link, and set your account name and password, and grant your account all access -priviledges. (I also like to grant Clone access to the anonymous user, +privileges. (I also like to grant Clone access to the anonymous user, but that's personal preference.) Once you are done, kill the fossil server (with Ctrl-C or equivalent) and close the browser window. @@ -63,11 +63,11 @@ information about your local repository. You can ignore it for all purposes, but be sure not to accidentally remove it or otherwise damage it - it belongs to fossil, not you. The next thing we need to do is add files to our repository. As it -happens, we have a few C source files laying around, which we'll +happens, we have a few C source files lying around, which we'll simply copy into our working directory. stephan@ludo:~/fossil/demo$ cp ../csnip/*.{c,h} . stephan@ludo:~/fossil/demo$ ls @@ -75,11 +75,11 @@ tokenize_path.c tokenize_path.h vappendf.c vappendf.h Fossil doesn't know about those files yet. Telling fossil about a new file is a two-step process. First we add the file -to the repo, then we commit the file. This is a familiar +to the repository, then we commit the file. This is a familiar process for anyone who's worked with SCM systems before: stephan@ludo:~/fossil/demo$ fossil add *.{c,h} stephan@ludo:~/fossil/demo$ fossil commit -m "egg" @@ -159,6 +159,6 @@ via the Admin/Users page mentioned above). You may always use the server or ui commands to browse a cloned repository. You can even edit create or wiki entries, etc., and they will be pushed to the remote side the next time you -push data to the the remote. +push data to the remote. Index: www/password.wiki ================================================================== --- www/password.wiki +++ www/password.wiki @@ -7,12 +7,12 @@ are not transmitted from one repository to another during a sync. Passwords are local configuration information that can (and usually does) vary from one repository to the next within the same project. Passwords are stored in the PW field of the USER table. -In older versions of Fossil (prior to -[/timeline?c=2010-01-10+20:56:30 | 2010-01-11]) the password +In older versions of Fossil (prior to +[/timeline?c=2010-01-10T20:56:30 | 2010-01-11]) the password is stored as cleartext. In newer versions of Fossil, the password can be either cleartext or an SHA1 hash (written as a 40-character lower-case hexadecimal number). If the USER.PW field contains a 40-character string, that string is assumed to be a SHA1 hash. If the size of USER.PW is anything other than 40 characters, then @@ -60,25 +60,25 @@ the web interface or direct SQL manipulation of the USER table. Note also that the password field is essentially ignored for the special users named "anonymous", "developer", "reader", and "nobody". It is not possible to authenticate as users "developer", "reader", or "nobody" and the authentication protocol -for "anonymous" use one-time captchas not persistent passwords. 
+for "anonymous" uses one-time captchas not persistent passwords.

      Web Interface Authentication

      When a user logs into Fossil using the web interface, the login name and password are sent in the clear to the server. The server then hashes the password and compares it against the value stored in USER.PW. If they match, the server sets a cookie on the client to record the login. This cookie contains a large amount of high-quality randomness -and is thus impossible to guess. The value of the cookie and the IP +and is thus intractable to guess. The value of the cookie and the IP address of the client is stored in the USER.COOKIE and USER.IPADDR fields -of the USER table on the server. +of the USER table on the server. The USER.CEXPIRE field holds an expiration date for the cookie, encoded as a julian day number. On all subsequent -HTTP requests, the cookie value is matched against the USER table to +HTTP requests, the cookie value is matched against the USER table to enable access to the repository. A login cookie will only work if the IP address matches. This feature is designed to make it more difficult for an attacker to sniff the cookie and take over the connection. A cookie-sniffing attack will only work @@ -103,12 +103,12 @@ over the wire, but that plan has not yet been set in code.

      Sync Protocol Authentication

      A different authentication mechanism is used when one repository wants -to sync (or push or pull or clone) another respository. When two -respositories are syncing, the one that initiates the transaction is +to sync (or push or pull or clone) another repository. When two +repositories are syncing, the one that initiates the transaction is the client and the repository that responds is the server. The client works by sending HTTP requests to the server with a method of "xfer" and a content-type of "application/x-fossil". The content is Zlib-compressed text consisting of "cards" of instructions. The first card of this content is a "login" card responsible for authentication. The login card contains @@ -137,12 +137,12 @@
       http://login:password@servername.org/path
       
For older clients, the password is used for the shared secret as stated -in the URL and with no encoding. -For newer clients, the shared secret is derived from the password +in the URL and with no encoding. +For newer clients, the shared secret is derived from the password by transforming the password using the SHA1 hash encoding described above. However, if the first character of the password is "*" (ASCII 0x2a) then the "*" is skipped and the rest of the password is used directly as the shared secret without the SHA1 encoding. ADDED www/permutedindex.html Index: www/permutedindex.html ================================================================== --- www/permutedindex.html +++ www/permutedindex.html @@ -0,0 +1,206 @@ +
      + +
      +
      + + +
      +
      +

      Primary Documents:

      + + +

      Permuted Index:

      +
      Index: www/pop.wiki ================================================================== --- www/pop.wiki +++ www/pop.wiki @@ -31,11 +31,11 @@ In many contexts, the name can be abbreviated to a unique prefix. A five- or six-character prefix usually suffices to uniquely identify a file.

• Because artifacts are named by their SHA1 hash, all artifacts -are immutable. Any change to the content of a artifact also +are immutable. Any change to the content of an artifact also changes the hash that forms the artifact's name, thus creating a new artifact. Both the old original version of the artifact and the new change are preserved under different names.

    • It is theoretically possible for two artifacts with different ADDED www/private.wiki Index: www/private.wiki ================================================================== --- www/private.wiki +++ www/private.wiki @@ -0,0 +1,94 @@ +Private Branches + +By default, everything you check into a Fossil repository is shared +to all clones of that repository. In Fossil, you don't push and pull +individual branches; you push and pull everything all at once. + +But sometimes users want to keep some private work that is not +shared with others. This might be a preliminary or experimental change +that needs further refinement before it is shared and which might never +be shared at all. To do this in Fossil, simply commit the change with +the --private command-line option: + +

      +fossil commit --private
      +
      + +The --private option causes Fossil to put the check-in in a new branch +named "private". That branch will not participate in subsequent clone, +sync, push, or pull operations. The branch will remain on the one local +repository where it was created. Note that you only use the --private +option for the first check-in that creates the private branch. +Additional checkins into the private branch remain private automatically. + +

      Publishing Private Changes

      + +After additional work, one might desire to publish the changes associated +with a private branch. The usual way to do this is to merge those +changes into a public branch. For example: + +
      +fossil update trunk
      +fossil merge private
      +fossil commit
      +
      + +The private branch remains private. (There is no way to convert a private +branch into a public branch.) But all of the changes associated with +the private branch are now folded into the public branch and are hence +visible to other users of the project. + +

      Syncing Private Branches

      + +A private branch normally stays on the one repository where it was +originally created. But sometimes you want to share private branches +with another repository. For example, you might be building a cross-platform +application and have separate repositories on your Windows laptop, +your Linux desktop, and your iMac. You can transfer private branches +between these machines by using the --private option on the "sync", +"push", "pull", and "clone" commands. For example, if you are running +"fossil server" on your Linux box and you want to clone that repository +to your Mac, including all private branches, use: + +
      +fossil clone --private http://user@linux.localnetwork:8080/ mac-clone.fossil
      +
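+ +The same placeholder URL can be reused for later transfers; for example
+(a sketch only), subsequent private-branch syncs between the two machines
+could use:
+
+fossil sync --private http://user@linux.localnetwork:8080/
+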
+ +You'll have to supply a username and password in order for this to work. +Fossil will not clone (or sync) private branches anonymously. Furthermore, +you have to enable the "Private" capability (the "x" capability) in order +to enable private branch syncing. The Admin ("a") and Superuser ("s") do +not imply Private ("x"); you'll have to set "x" in addition to +"a" or "s". This is a little extra work, but there is a purpose here: +the interface is designed to make it very difficult to accidentally +sync a private branch to a public server. It is highly recommended that +you leave the "x" capability turned off on all repositories used for +collaboration (repositories to which many people push and pull) and +only enable "x" for local repositories when you need to share private +branches. + +Private branch sync only works if you use the --private command-line option. +Private branches are never synced via the auto-sync mechanism. Once +again, this restriction is designed to make it hard to accidentally +push private branches beyond their intended audience. + +

      Purging Private Branches

      + +You can remove all private branches from a repository using this command: + +
      +fossil scrub --private
      +
      + +Note that the above is a permanent and irreversible change. You will +be asked to confirm before continuing. Once the private branches are +removed, they cannot be retrieved (unless you have synced them to another +repository.) So be careful with the command. + +

      Additional Notes

+ +All of the features above apply to all private branches in a +single repository at once. There is no mechanism in Fossil (currently) +that allows you to push, pull, clone, sync, or scrub an individual +private branch within a repository that contains multiple private +branches. Index: www/qandc.wiki ================================================================== --- www/qandc.wiki +++ www/qandc.wiki @@ -38,11 +38,11 @@ code while off network, then sync your changes later. With Trac, you can only view and edit tickets and wiki while you are connected to the server.
    • Fossil is lightweight and fully self-contained. It is very easy to setup on a low-resource machine. Fossil does not require an - administator.
    • + administrator.
    • Fossil integrates code versioning into the same repository with wiki and tickets. There is nothing extra to add or install. Fossil is an all-in-one turnkey solution.
    • @@ -63,19 +63,19 @@ Other projects are also adopting fossil. But fossil does not yet have the massive user base of git or mercurial. Fossil looks like the bug tracker that would be in your -Linksys Router's administration screen.

      +Linksys Router's administration screen.

      I take a pragmatic approach to software: form follows function. To me, it is more important to have a reliable, fast, efficient, enduring, and simple DVCS than one that looks pretty.

      -

      On the other hand, if you have patches that improve the apparance -of Fossil without seriously compromising its reliablity, performance, +

      On the other hand, if you have patches that improve the appearance +of Fossil without seriously compromising its reliability, performance, and/or maintainability, I will be happy to accept them. Fossil is self-hosting. Send email to request a password that will let you push to the main fossil repository.

      @@ -146,11 +146,13 @@ You do not need any other packages (diff, patch, merge, cvs, svn, rcs, git, python, perl, tcl, apache, sqlite, and so forth) in order to run fossil. Fossil runs just fine in a chroot jail all by itself. And the self-contained fossil -executable is much less than 1MB in size. +executable is much less than 1MB in size. (Update 2015-01-12: Fossil has +grown in the years since the previous sentence was written but is still +much less than 2MB according to "size" when compiled using -Os on x64 Linux.) Fossil is the very opposite of bloat.

      Index: www/quickstart.wiki ================================================================== --- www/quickstart.wiki +++ www/quickstart.wiki @@ -1,75 +1,162 @@ Fossil Quick Start Guide -

      Fossil Quick Start

      This is a guide to get you started using fossil quickly and painlessly.

      -

      Installing

      +

      Installing

      Fossil is a single self-contained C program. You need to either download a precompiled binary - or build it yourself from sources. + or compile it yourself from sources. Install fossil by putting the fossil binary - someplace on your PATH environment variable.

      + someplace on your $PATH.

      -
      -

      Cloning An Existing Repository

      +

      General Work Flow

      + +

      Fossil works with repository files (a database with the project's + complete history) and with checked-out local trees (the working directory + you use to do your work). + The workflow looks like this:

      + +
        +
      • Create or clone a repository file. ([/help/init|fossil init] or + [/help/clone | fossil clone]) +
      • Check out a local tree. ([/help/open | fossil open]) +
      • Perform operations on the repository (including repository + configuration). +
      + +

      The following sections will give you a brief overview of these + operations.

      + +

      Starting A New Project

      + +

      To start a new project with fossil, create a new empty repository + this way: ([/help/init | more info])

      + +
      + fossil init repository-filename +
      + +

      Cloning An Existing Repository

      Most fossil operations interact with a repository that is on the local disk drive, not on a remote system. Hence, before accessing a remote repository it is necessary to make a local copy of that repository. Making a local copy of a remote repository is called "cloning".

      -

      Clone a remote repository as follows:

      +

      Clone a remote repository as follows: ([/help/clone | more info])

      fossil clone URL repository-filename
      -

      The URL above is the http URL for the fossil repository - you want to clone, and it may include a "user:password" part, e.g. - http://drh:secret@www.fossil-scm.org/fossil. You can - call the new repository anything you want - there are no naming - restrictions. As an example, you can clone the fossil repository - this way:

      +

      The URL specifies the fossil repository + you want to clone. The repository-filename is the new local + filename into which the cloned repository will be written. For + example:

      fossil clone http://www.fossil-scm.org/ myclone.fossil
      -

      The new local copy of the respository is stored in a single file, - which in the example above is named "myclone.fossil". - You can name your repositories anything you want. The ".fossil" suffix - is not required.

      - -

      Note: If you are behind a restrictive firewall, you might need - to specify an HTTP proxy to use.

      - -

      Starting A New Project

      - -

      To start a new project with fossil, create a new empty repository - this way:

      +

      If the remote repository requires a login, include a + userid in the URL like this: + +

      + fossil clone http://userid@www.fossil-scm.org/ myclone.fossil +
      + + +

      You will be prompted separately for the password. + Use "%HH" escapes for special characters in the userid. + Examples: "%40" in place of "@" and "%2F" in place of "/". + +
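+ +For example (the userid below is purely hypothetical), an email-style
+ login containing "@" would be escaped like this:
+
+ fossil clone http://jane.doe%40example.com@www.fossil-scm.org/ myclone.fossil
+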

      If you are behind a restrictive firewall, you might need + to specify an HTTP proxy.

      + +

      A Fossil repository is a single disk file. Instead of cloning, + you can just make a copy of the repository file (for example, using + "scp"). Note, however, that the repository file contains auxiliary + information above and beyond the versioned files, including some + sensitive information such as password hashes and email addresses. If you + want to share Fossil repositories directly, consider running the + [/help/scrub|fossil scrub] command to remove sensitive information + before transmitting the file. + +
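+ +A minimal sketch of that direct-copy workflow (the file and host names
+ below are placeholders):
+
+ cp project.fossil shared-copy.fossil
+ fossil scrub shared-copy.fossil
+ scp shared-copy.fossil user@example.com:
+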

      Importing From Another Version Control System

      + +

      Rather than start a new project, or clone an existing Fossil project, + you might prefer to + import an existing Git project + into Fossil using the [/help/import | fossil import] command. + +
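+ +For example, a Git project can typically be converted by piping a
+ fast-export stream into fossil import (the path names below are
+ placeholders; see the linked page for details):
+
+ cd existing-git-project
+ git fast-export --all | fossil import --git ../project.fossil
+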

      Checking Out A Local Tree

      + +

      To work on a project in fossil, you need to check out a local + copy of the source tree. Create the directory you want to be + the root of your tree and cd into that directory. Then + do this: ([/help/open | more info])

      - fossil new repository-filename + fossil open repository-filename
      -

      Configuring Your Local Repository

      +

      This leaves you with the newest version of the tree + checked out. + From anywhere underneath the root of your local tree, you + can type commands like the following to find out the status of + your local tree:

      + +
      + [/help/info | fossil info]
      + [/help/status | fossil status]
      + [/help/changes | fossil changes]
      + [/help/diff | fossil diff]
      + [/help/timeline | fossil timeline]
      + [/help/ls | fossil ls]
      + [/help/branch | fossil branch]
      +
      + +

      Note that Fossil allows you to make multiple check-outs in + separate directories from the same repository. This enables you, + for example, to do builds from multiple branches or versions at + the same time without having to generate extra clones.
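+ +For instance (the directory, repository, and tag names below are
+ placeholders only), two check-outs of the same repository might be
+ opened like this:
+
+ mkdir trunk-work release-work
+ cd trunk-work && fossil open ../project.fossil trunk
+ cd ../release-work && fossil open ../project.fossil release-1.0
+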

      + +

      To switch a checkout between different versions and branches, + use:

      + +
      + [/help/update | fossil update]
      + [/help/checkout | fossil checkout]
      +
      + +

      [/help/update | update] honors the "autosync" option and + does a "soft" switch, merging any local changes into the target + version, whereas [/help/checkout | checkout] does not + automatically sync and does a "hard" switch, overwriting local + changes if told to do so.

      + +

      Configuring Your Local Repository

      When you create a new repository, either by cloning an existing project or create a new project of your own, you usually want to do some - local configuration. This is easily accomplished using the webserver - that is built into fossil. Start the fossil webserver like this:

      + local configuration. This is easily accomplished using the web-server + that is built into fossil. Start the fossil webserver like this: + ([/help/ui | more info])

      fossil ui repository-filename
      + +

      You can omit the repository-filename from the command above + if you are inside a checked-out local tree.

      This starts a web server then automatically launches your web browser and makes it point to this web server. If your system has an unusual configuration, fossil might not be able to figure out how to start your web browser. In that case, first tell fossil @@ -84,172 +171,185 @@ should, change this after you create a few users.

      When you are finished configuring, just press Control-C or use the kill command to shut down the mini-server.

      -

      Checking Out A Local Tree

      - -

      To work on a project in fossil, you need to check out a local - copy of the source tree. Create the directory you want to be - the root of your tree and cd into that directory. Then - do this:

      - -
      - fossil open repository-filename -
      - -

      This leaves you with the newest version of the tree - checked out. - From anywhere underneath the root of your local tree, you - can type commands like the following to find out the status of - your local tree:

      - -
      - fossil info
      - fossil status
      - fossil changes
      - fossil timeline
      - fossil leaves
      - fossil ls
      - fossil branch list
      -
      - -

      Making Changes

      +

      Making Changes

      To add new files to your project, or remove old files, use these commands:

      - fossil add file...
      - fossil rm file... + [/help/add | fossil add] file...
      + [/help/rm | fossil rm] file...
      + [/help/addremove | fossil addremove] file...

      You can also edit files freely. Once you are ready to commit your changes, type:

      - fossil commit + [/help/commit | fossil commit]

      You will be prompted for check-in comments using whatever editor - is specified by your VISUAL or EDITOR environment variable. If you - have GPG installed, you may be prompted for your GPG passphrase so - that the check-in can be signed with your GPG signature. After - this your changes will be checked in.

      - -

      Sharing Changes

      - -

      The changes you commit are only on your local repository. + is specified by your VISUAL or EDITOR environment variable.

      + + In the default configuration, the [/help/commit|commit] + command will also automatically [/help/push|push] your changes, but that + feature can be disabled. (More information about + [./concepts.wiki#workflow|autosync] and how to disable it.) + Remember that your coworkers can not see your changes until you + commit and push them.
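+ +As a quick sketch, autosync can be switched off (or back on) with the
+ setting command:
+
+ fossil setting autosync off
+ fossil setting autosync on
+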

      + +

      Sharing Changes

      + +

      When [./concepts.wiki#workflow|autosync] is turned off, + the changes you [/help/commit | commit] are only + on your local repository. To share those changes with other repositories, do:

      - fossil push URL + [/help/push | fossil push] URL

      Where URL is the http: URL of the server repository you want to share your changes with. If you omit the URL argument, fossil will use whatever server you most recently synced with.

      -

      The push command only sends your changes to others. To - Receive changes from others, use pull. Or go both ways at - once using sync:

      +

The [/help/push | push] command only sends your changes to others. To + receive changes from others, use [/help/pull | pull]. Or go both ways at + once using [/help/sync | sync]:

      - fossil pull URL
      - fossil sync URL + [/help/pull | fossil pull] URL
      + [/help/sync | fossil sync] URL

      When you pull in changes from others, they go into your repository, not into your checked-out local tree. To get the changes into your - local tree, use update:

      - -
      - fossil update AID -
      - -

      The AID is some unique abbreviation to the 40-character - artifact identifier (AID) for a particular check-in. If you omit - the AID fossil moves you to the - leaf version of the branch your are currently on. If your branch - has multiple leaves, you get an error - you'll have to specify the - leaf you want using a AID argument.

      - -

      Branching And Merging

      - -

      You can create branches by doing multiple commits off of the - same base version. To merge to branches back together, first - update to the leaf of one branch. Then do a merge - of the leaf of the other branch:

      - -
      - fossil merge AID -
      - -

      Test to make sure your merge didn't mess up the code, then - commit and possibly also push your changes. Remember - that nobody else can see your changes until you commit and - if other are using a different repository you will also need to - push.

      - - -

      Setting Up A Server

      - -

      The easiest way to set up a server is:

      - -
      - fossil server repository-filename -
      - -

      Or - -

      - fossil ui repository-filename -
      - -

      The difference between these two command is that ui - attempts to automatically start your web browser pointing at the - server whereas server does not. - You can omit the repository-filename if you are within - a checked-out local tree. This server uses port 8080 by default - but you can specify a different port using the -port command.

      - -

      Command-line servers like this are useful when two people want - to share a repository on temporary or ad-hoc basis. For a more - permanent installation, you should use either the CGI server or the - inetd server. - - To use the CGI server, create a CGI script that - looks something like this:

      - -
      - #!/usr/local/bin/fossil
      - repository: /home/proj1/repos1.fossil -
      - -

      Adjust the paths in this CGI script to match your installation - and make sure both repository file and the directory that contains it - are readable and writable by the user that CGI scripts run as. - Then point clients at the CGI script. That's all there is to it!

      - - -

      You can also run fossil from inetd or xinetd. For an inetd - installation, make an entry in /etc/inetd.conf that looks something - like this:

      - -
      - 80 stream tcp nowait.1000 root /usr/bin/fossil \
      - /usr/bin/fossil http /home/proj1/repos1.fossil -
      - -

      Adjust the paths to suit your installation, of course. Notice that - fossil runs as root. This is not required - you can run it as an - unprivileged user. But it is more secure to run fossil as root. - When you do run fossil as root, it automatically puts itself in a - chroot jail in the same directory as the repository, then drops - root privileges prior to reading any information from the request.

      - -

      HTTP Proxies

      + local tree, use [/help/update | update]:

      + +
      + [/help/update | fossil update] VERSION +
      + +

The VERSION can be the name of a branch or tag or any + abbreviation to the 40-character + artifact identifier for a particular check-in, or it can be a + date/time stamp. ([./checkin_names.wiki | more info]) + If you omit + the VERSION, then fossil moves you to the + latest version of the branch you are currently on.

      + +

      The default behavior is for [./concepts.wiki#workflow|autosync] to + be turned on. That means that a [/help/pull|pull] automatically occurs + when you run [/help/update|update] and a [/help/push|push] happens + automatically after you [/help/commit|commit]. So in normal practice, + the push, pull, and sync commands are rarely used. But it is important + to know about them, all the same.

      + +
      + [/help/checkout | fossil checkout] VERSION +
      + +

      Is similar to update except that it does not honor the autosync + setting, nor does it merge in local changes - it prefers to overwrite + them and fails if local changes exist unless the --force + flag is used.

      + +

      Branching And Merging

      + +

Use the --branch option to the [/help/commit | commit] command + to start a new branch. Note that in Fossil, branches are normally + created when you commit, not before you start editing. You can + use the [/help/branch | branch new] command to create a new branch + before you start editing, if you want, but most people just wait + until they are ready to commit. + + To merge two branches back together, first + [/help/update | update] to the branch you want to merge into. + Then do a [/help/merge|merge] of the other branch whose changes you want + to incorporate. For example, to merge "featureX" changes into "trunk" + do this:

      + +
      + fossil [/help/update|update] trunk
      + fossil [/help/merge|merge] featureX
      + # make sure the merge didn't break anything...
      + fossil [/help/commit|commit] +
      + +

      The argument to the [/help/merge|merge] command can be any of the + version identifier forms that work for [/help/update|update]. + ([./checkin_names.wiki|more info].) + The merge command has options to cherrypick individual + changes, or to back out individual changes, if you don't want to + do a full merge.

      + + The merge command puts all changes in your working check-out. + No changes are made to the repository. + You must run [/help/commit|commit] separately + to add the merge changes into your repository to make them persistent + and so that your coworkers can see them. + But before you do that, you will normally want to run a few tests + to verify that the merge didn't cause logic breaks in your code. + + The same branch can be merged multiple times without trouble. Fossil + automatically keeps up with things and avoids conflicts when doing + multiple merges. So even if you have merged the featureX branch + into trunk previously, you can do so again and Fossil will automatically + know to pull in only those changes that have occurred since the previous + merge. + +

If a merge or update doesn't work out (perhaps something breaks or + there are many merge conflicts) then you can back out of it using:

      + +
      + [/help/undo | fossil undo] +
      + +

      This will back out the changes that the merge or update made to the + working checkout. There is also a [/help/redo|redo] command if you undo by + mistake. Undo and redo only work for changes that have + not yet been checked in using commit and there is only a single + level of undo/redo.

      + + +

      Setting Up A Server

      + +

      Fossil can act as a stand-alone web server using one of these + commands:

      + +
      + [/help/server | fossil server] repository-filename
      + [/help/ui | fossil ui] repository-filename +
      + +

The repository-filename can be omitted when these commands + are run from within an open check-out, which is a particularly useful + shortcut for the fossil ui command. + +

The ui command is intended for accessing the web interface + from a local desktop. The ui command binds to the loopback IP + address only (and thus makes the web interface visible only on the + local machine) and it automatically starts your web browser pointing at the + server. For cross-machine collaboration, use the server command, + which binds on all IP addresses and does not try to start a web browser.

      + +

      Servers are also easily configured as: +

        +
      • [./server.wiki#inetd|inetd/xinetd] +
      • [./server.wiki#cgi|CGI] +
      • [./server.wiki#scgi|SCGI] +
      + +

      The [./selfhost.wiki | self-hosting fossil repositories] use + CGI. + + +
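+ +As a sketch of the CGI approach (the paths below are illustrative; see
+ the server documentation linked above for specifics), the CGI script is
+ just a short text file naming the fossil binary and the repository:
+
+ #!/usr/bin/fossil
+ repository: /home/www/project.fossil
+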

      HTTP Proxies

      If you are behind a restrictive firewall that requires you to use an HTTP proxy to reach the internet, then you can configure the proxy in three different ways. You can tell fossil about your proxy using a command-line option on commands that use the network, @@ -259,11 +359,11 @@ fossil clone URL --proxy Proxy-URL

      It is annoying to have to type in the proxy URL every time you sync your project, though, so you can make the proxy configuration - persistent using the setting command:

      + persistent using the [/help/setting | setting] command:

      fossil setting proxy Proxy-URL
      @@ -288,19 +388,13 @@
      fossil sync http://192.168.1.36:8080/ --proxy off
      -

      More Hints

      - -

      Try these commands:

      - -
      - fossil help
      - fossil test-commands -
      +

      More Hints

      + +

      A [/help | complete list of commands] is available, as is the + [./hints.wiki|helpful hints] document. See the + [./permutedindex.html#pindex|permuted index] for additional + documentation.

      Explore and have fun!

      - - -
      - ADDED www/quotes.wiki Index: www/quotes.wiki ================================================================== --- www/quotes.wiki +++ www/quotes.wiki @@ -0,0 +1,168 @@ +What People Are Saying + +The following are collected quotes from various forums and blogs about +Fossil, Git, and DVCSes in general. This collection is put together +by the creator of Fossil, so of course there is selection bias... + +

      On The Usability Of Git:

      + +
        +
      1. Git approaches the usability of iptables, which is to say, utterly +unusable unless you have the manpage tattooed on you arm. + +
        +by mml at [http://news.ycombinator.com/item?id=1433387] +
        + +
      2. It's simplest to think of the state of your [git] repository +as a point in a high-dimensional "code-space", in which branches are +represented as n-dimensional membranes, mapping the spatial loci of +successive commits onto the projected manifold of each cloned +repository. + +
        +At [http://tartley.com/?p=1267] +
        + +
      3. Git is not a Prius. Git is a Model T. +Its plumbing and wiring sticks out all over the place. +You have to be a mechanic to operate it successfully or you'll be +stuck on the side of the road when it breaks down. +And it will break down. + +
        +Nick Farina at [http://nfarina.com/post/9868516270/git-is-simpler] +
        + +
      4. Initial revision of "git", The information manager from hell + +
        +Linus Torvalds - 2005-04-07 22:13:13
        +Commit comment on the very first source-code check-in for git +
        + +
      5. I've been experimenting a lot with git at work. +Damn, it's complicated. +It has things to trip you up with that sane people just wouldn't ever both with +including the ability to allow you to commit stuff in such a way that you can't find +it again afterwards (!!!) +Demented workflow complexity on acid? +

        * dkf really wishes he could use fossil instead

        +
        +by Donal K. Fellow (dkf) on the Tcl/Tk chatroom, 2013-04-09. +
        + +
      6. Klingon Code Warriors embrace Git; we enjoy arbitrary conflicts. Git is not for the weak and feeble. TODAY IS A GOOD DAY TO CODE. + +
        +teastain at [http://www.reddit.com/r/programming/comments/xpitj/10_things_i_hate_about_git/c5oj4fk] +
        + +
      7. [G]it is designed to forget things. + +
        +[http://www.cs.cmu.edu/~davide/howto/git_lose.html] +
        + +
      8. [I]n nearly 31 years of using a computer i have, in total, lost more data to git +(while following the instructions!!!) than any other single piece of software. + +
        +Stephen Beal on the [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg17181.html|Fossil mailing list] + 2014-09-01. +
        + +
      9. If programmers _really_ wanted to help scientists, they'd build a version control +system that was more usable than Git. + +
        +Tweet by Greg Wilson @gvwilson on 2015-02-22 17:47 +
        + +
      10. + +
        Randall Munroe. [http://xkcd.com/1597/]
        + +
      + +

      On The Usability Of Fossil:

      + +
        +
      1. +Fossil mesmerizes me with simplicity especially after I struggled to +get a bug-tracking system to work with mercurial. + +
        +rawjeev at [http://stackoverflow.com/questions/156322/what-do-people-think-of-the-fossil-dvcs] +
        + +
      2. Fossil is the best thing to happen +to my development workflow this year, as I am pretty sure that using +Git has resulted in the premature death of too many of my brain cells. +I'm glad to be able to replace Git in every place that I possibly can +with Fossil. + +
        +Joe Prostko at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg16716.html] +
        + +
      3. Fossil is awesome!!! I have never seen an app like that before, +such simplicity and flexibility!!! + +
        +zengr at [http://stackoverflow.com/questions/138621/best-version-control-for-lone-developer] +
        + +
      4. This is my favourite VCS. I can carry it on a USB. And it's a complete system, with it's own +server, ticketing system, Wiki pages, and a very, very helpful timeline visualization. And +the entire program in a single file! + +
        +thunderbong commenting on hacker news: [https://news.ycombinator.com/item?id=9131619] +
        + + +
      + + +

      On Git Versus Fossil

      + +
        +
      1. +Just want to say thanks for fossil making my life easier.... +Also [for] not having a misanthropic command line interface. + +
        +Joshua Paine at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg02736.html] +
        + +
      2. We use it at a large university to manage code that small teams write. +The runs everywhere, ease of installation and portability is something that +seems to be a good fit with the environment we have (highly ditrobuted, +sometimes very restrictive firewalls, OSX/Win/Linux). We are happy with it +and teaching a Msc/Phd student (read complete novice) fossil has just +been a smoother ride than Git was. + +
        +viablepanic at [http://www.reddit.com/r/programming/comments/bxcto/why_not_fossil_scm/] +
        + +
      3. In the fossil community - and hence in fossil itself - development history +is pretty much sacrosanct. The very name "fossil" was to chosen to +reflect the unchanging nature of things in that history. + +

        In git (or rather, the git community), the development history is part of +the published aspect of the project, so it provides tools for rearranging +that history so you can present what you "should" have done rather +than what you actually did. + +

        +Mike Meyer on the Fossil mailing list, 2011-10-04 +
        + +
      4. github is such a pale shadow of what fossil does. + +
        +dkf on the Tcl chatroom, 2013-12-06 +
        +
      DELETED www/reference.wiki Index: www/reference.wiki ================================================================== --- www/reference.wiki +++ www/reference.wiki @@ -1,698 +0,0 @@ -

      Command Line Interface Reference

      - - This is an easy introduction to the fossil command line interface - (cli). It assumes some familiarity with using the command line, and - with Source Code Maintenence (SCM) systems—but not too - much. - - If you are trying to find information about fossil's web - capabilities, see the Fossil Home and - Fossil Wiki pages for pointers. - -

      Things to note

      - * Fossil cli commands do not use special delimeters, they use - spaces. This is traditional with VCS/SCM. Some options to - fossil commands do use special delimiters, particularly the - '-' (hyphen, or dash) character. This is very similar to Tcl. - Think of fossil as a shell you invoke and feed a command to, - including any options, and it will make more sense. - - * Any fossil command is acceptable once enough of it has been - entered to make the intent unambiguous. 'clo' is a proper prefix of - both the 'clone' and 'close' commands, for instance, but 'clon' is - enough to make the intent—the 'clone' - command—unambiguous. - - * Pragmatically, a [./concepts.wiki#keyconc | version] - in fossil is a 40-character long string of hexadecimal. - fossil will be able to figure out which version you want - with any distinct prefix of that string which is at - least four characters long. Commands which require a - version are looking for the string, a distinct prefix of the - string, or a tag. - - * SCM in a distributed environment can be a bit confusing with - regard to branching, merging, and versions in general. See the - [./branching.wiki | explanation of branching] and it will all make - much more sense. - - * Op.Ed. An excellent way to learn to use fossil - effectively is to - [./quickstart.wiki#fslclone | clone the repository for fossil] - itself. You can then poke around using the fossil ui - command, and look things up with no connection worries. You can - set up test repositories and try things out on-the-fly to see how - they work, using their own ui's. The CLI will far easier to - understand if you can run a repository, watch it in a browser, and - hack around with it in a simplified environment (your tests) with - guaranteed and fast access to the sources & docs (your cloned fossil - repository). -


      - You should probably start interacting with fossil at the command - line by asking it what it can - do:    ˆ - - $ fossil help
      -Usage: fossil help COMMAND.
      -Available COMMANDs:
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      add*co*httprebuildsync*
      all*commitinforeconstructtag
      branchconfigurationleavesredotimeline
      cgi*deconstructls*rename*ui
      changes*del*mergerevertundo
      checkout*descendantsmv*rm*unset
      cidiffnew*rstatsupdate*
      cleanextra*openserveruser
      clonegdiffpullsettingsversion*
      closehelppushstatus*wiki
      -This is fossil version [a89b436bc9] 2009-02-11 05:00:02 UTC
      -
      -What follows is a survey of what you get if you type -fossil help command for all of the -commands listed above. There are links to individual pages for each -of them; pages with content (commands marked with a '*' are done) go -into the reason for a command in a bit more depth than the program help. -
      -
      ˆ - Usage: fossil add FILE... - Make arrangements to add one or more files to the current checkout - at the next commit. - -
      ˆ - Usage: fossil all (list|pull|push|rebuild|sync) - The ~/.fossil file records the location of all repositories for a - user. This command performs certain operations on all repositories - that can be useful before or after a period of disconnection operation. - Available operations are: - - list Display the location of all repositories - - pull Run a "pull" operation on all repositories - - push Run a "push" on all repositories - - rebuild Rebuild on all repositories - - sync Run a "sync" on all repositories - - Respositories are automatically added to the set of known repositories - when one of the following commands against the repository: clone, info, - pull, push, or sync - -
      ˆ - Usage: fossil branch SUBCOMMAND ... ?-R|--repository FILE? - -Run various subcommands on the branches of the open repository or -of the repository identified by the -R or --repository option. - - fossil branch new BRANCH-NAME BASIS ?-bgcolor COLOR? - - Create a new branch BRANCH-NAME off of check-in BASIS. - You can optionally give the branch a default color. - - fossil branch list - - List all branches - -
      ˆ - Usage: fossil cgi SCRIPT - The SCRIPT argument is the name of a file that is the CGI script - that is being run. The command name, "cgi", may be omitted if - the GATEWAY_INTERFACE environment variable is set to "CGI" (which - should always be the case for CGI scripts run by a webserver.) The - SCRIPT file should look something like this: - - #!/usr/bin/fossil - repository: /home/somebody/project.db - - The second line defines the name of the repository. After locating - the repository, fossil will generate a webpage on stdout based on - the values of standard CGI environment variables. - -
      ˆ - Usage: fossil changes - Report on the edit status of all files in the current checkout. - See also the "status" and "extra" commands. - -
      ˆ - Usage: fossil checkout VERSION ?-f|--force? - Check out a version specified on the command-line. This command - will not overwrite edited files in the current checkout unless - the --force option appears on the command-line. - - See also the "update" command. - -
      ˆ - Usage: fossil commit ?-m COMMENT? ?--nosign? ?FILE...? fossil ci ... (as above) - - Create a new version containing all of the changes in the current - checkout. You will be prompted to enter a check-in comment unless - the "-m" option is used to specify a comment line. You will be - prompted for your GPG passphrase in order to sign the new manifest - unless the "--nosign" options is used. All files that have - changed will be committed unless some subset of files is specified - on the command line. - -
      ˆ - Usage: fossil clean ?-all? - Delete all "extra" files in the source tree. "Extra" files are - files that are not officially part of the checkout. See also - the "extra" command. This operation cannot be undone. - - You will be prompted before removing each file. If you are - sure you wish to remove all "extra" files you can specify the - optional -all flag. - -
      ˆ - Usage: fossil clone URL FILENAME - Make a clone of a repository specified by URL in the local - file named FILENAME. - -
      ˆ - Usage: fossil close ?-f|--force? - The opposite of "open". Close the current database connection. - Require a -f or --force flag if there are unsaved changed in the - current check-out. - -
      ˆ - Usage: fossil configuration METHOD ... - Where METHOD is one of: export import merge pull push reset. All methods - accept the -R or --repository option to specific a repository. - - fossil configuration export AREA FILENAME - - Write to FILENAME exported configuraton information for AREA. - AREA can be one of: all ticket skin project - - fossil configuration import FILENAME - - Read a configuration from FILENAME, overwriting the current - configuration. - - fossil configuration merge FILENAME - - Read a configuration from FILENAME and merge its values into - the current configuration. Existing values take priority over - values read from FILENAME. - - fossil configuration pull AREA ?URL? - - Pull and install the configuration from a different server - identified by URL. If no URL is specified, then the default - server is used. - fossil configuration push AREA ?URL? - - Push the local configuration into the remote server identified - by URL. Admin privilege is required on the remote server for - this to work. - - fossil configuration reset AREA - - Restore the configuration to the default. AREA as above. - - WARNING: Do not import, merge, or pull configurations from an untrusted - source. The inbound configuration is not checked for safety and can - introduce security vulnerabilities. - -
      ˆ - COMMAND: deconstruct - Usage: fossil deconstruct ?-R|--repository REPOSITORY? DESTINATION - Populates the indicated DESTINATION directory with copies of all - artifcats contained within the repository. Artifacts are named AA/bbbbb - where AA is the first 2 characters of the artifact ID and bbbbb is the - remaining 38 characters. - -
      ˆ - Usage: fossil rm FILE... or: fossil del FILE... - Remove one or more files from the tree. - -
      ˆ - Usage: fossil descendants ?CHECKIN-ID? - Find all leaf descendants of the check-in specified or if the argument - is omitted, of the check-in currently checked out. - -
      ˆ - Usage: fossil diff|gdiff ?-i? ?-r REVISION? FILE... - Show the difference between the current version of a file (as it - exists on disk) and that same file as it was checked out. - - diff will show a textual diff while gdiff will attempt to run a - graphical diff command that you have setup. If the choosen command - is not yet configured, the internal textual diff command will be - used. - - If -i is supplied for either diff or gdiff, the internal textual - diff command will be executed. - - Here are a few external diff command settings, for example: - - fossil setting diff-command diff - - fossil setting gdiff-command tkdiff - fossil setting gdiff-command eskill22 - fossil setting gdiff-command tortoisemerge - fossil setting gdiff-command meld - fossil setting gdiff-command xxdiff - fossil setting gdiff-command kdiff3 - -
      ˆ - Usage: fossil extra - Print a list of all files in the source tree that are not part of - the current checkout. See also the "clean" command. - -
      ˆ - Usage: fossil help COMMAND - Display information on how to use COMMAND - -
      ˆ - Usage: fossil http REPOSITORY - Handle a single HTTP request appearing on stdin. The resulting webpage - is delivered on stdout. This method is used to launch an HTTP request - handler from inetd, for example. The argument is the name of the repository. - -
      ˆ - Usage: fossil info ?ARTIFACT-ID|FILENAME? - With no arguments, provide information about the current tree. - If an argument is specified, provide information about the object - in the respository of the current tree that the argument refers - to. Or if the argument is the name of a repository, show - information about that repository. - -
      ˆ - Usage: fossil leaves - Find leaves of all branches. - -
      ˆ - Usage: fossil ls - Show the names of all files in the current checkout - -
      ˆ - Usage: fossil merge VERSION - The argument is a version that should be merged into the current - checkout. - Only file content is merged. The result continues to use the - file and directory names from the current check-out even if those - names might have been changed in the branch being merged in. - -
      ˆ - Usage: fossil mv|rename OLDNAME NEWNAME or: fossil mv|rename OLDNAME... DIR - - Move or rename one or more files within the tree - - This command does not rename the files on disk. All this command does is - record the fact that filenames have changed so that appropriate notations - can be made at the next commit/checkin. - -
      ˆ - Usage: fossil new FILENAME - - Create a repository for a new project in the file named FILENAME. - This command is distinct from "clone". The "clone" command makes - a copy of an existing project. This command starts a new project. - -
      ˆ - Usage: fossil open FILENAME - Open a connection to the local repository in FILENAME. A checkout - for the repository is created with its root at the working directory. - See also the "close" command. - -
      ˆ - Usage: fossil rstats - - Deliver a report of the repository statistics for the - current checkout. - -
      ˆ - Usage: fossil pull ?URL? ?-R|--respository REPOSITORY? - Pull changes in a remote repository into the local repository. - The repository is identified by the -R or --repository option. - If there is no such option then the open repository is used. - The URL of the remote server is specified on the command line - If no URL is specified then the URL used by the most recent - "pull", "push", or "sync" command is used. - - The URL is of the following form: - - http://USER@HOST:PORT/PATH - - The "USER@" and ":PORT" substrings are optional. - The "USER" substring specifies the login user. You will be - prompted for the password on the command-line. The PORT - specifies the TCP port of the server. The default port is - 80. - -
      ˆ - Usage: fossil push ?URL? ?-R|--repository REPOSITORY? - Push changes in the local repository over into a remote repository. - See the "pull" command for additional information. - -
      ˆ - Usage: fossil rebuild REPOSITORY - Reconstruct the named repository database from the core - records. Run this command after updating the fossil - executable in a way that changes the database schema. - -
      ˆ - COMMAND: reconstruct - Usage: fossil reconstruct REPOSITORY ORIGIN - Creates the REPOSITORY and populates it with the artifacts in the - indicated ORIGIN directory. - -
      ˆ - Usage: fossil redo ?FILENAME...? - Redo the an update or merge operation that has been undone by the - undo command. If FILENAME is specified then restore the changes - associated with the named file(s) but otherwise leave the update - or merge undone. - - A single level of undo/redo is supported. The undo/redo stack - is cleared by the commit and checkout commands. - -
      ˆ - Usage: fossil revert ?--yes? ?-r CHECKIN? FILE - Revert to the current repository version of FILE, or to - the version associated with check-in CHECKIN if the -r flag - appears. This command will confirm your operation unless the - file is missing or the --yes option is used. - -
      ˆ - Usage: fossil server ?-P|--port TCPPORT? ?REPOSITORY? Or: fossil ui ?-P|--port TCPPORT? ?REPOSITORY? - - Open a socket and begin listening and responding to HTTP requests on - TCP port 8080, or on any other TCP port defined by the -P or - --port option. The optional argument is the name of the repository. - The repository argument may be omitted if the working directory is - within an open checkout. - - The "ui" command automatically starts a web browser after initializing - the web server. - -
      ˆ - COMMAND: settings - COMMAND: unset - Usage: fossil setting ?PROPERTY? ?VALUE? ?-global? - fossil unset PROPERTY ?-global? - - The "setting" command with no arguments lists all properties and their - values. With just a property name it shows the value of that property. - With a value argument it changes the property for the current repository. - - The "unset" command clears a property setting. - - autosync If enabled, automatically pull prior to - commit or update and automatically push - after commit or tag or branch creation. - - diff-command External command to run when performing a diff. - If undefined, the internal text diff will be used. - - editor Text editor command used for check-in comments. - - http-port The TCP/IP port number to use by the "server" - and "ui" commands. Default: 8080 - - gdiff-command External command to run when performing a graphical - diff. If undefined, text diff will be used. - - localauth If enabled, require that HTTP connections from - 127.0.0.1 be authenticated by password. If - false, all HTTP requests from localhost have - unrestricted access to the repository. - - clearsign When enabled (the default), fossil will attempt to - sign all commits with gpg. When disabled, commits will - be unsigned. - - pgp-command Command used to clear-sign manifests at check-in. - The default is "gpg --clearsign -o ". - - mtime-changes Use file modification times (mtimes) to detect when - files have been modified. - - proxy URL of the HTTP proxy. If undefined or "on" then - the "http_proxy" environment variable is consulted. - If the http_proxy environment variable is undefined - then a direct HTTP connection is used. - - web-browser A shell command used to launch your preferred - web browser when given a URL as an argument. - Defaults to "start" on windows, "open" on Mac, - and "firefox" on Unix. - -
      ˆ - Usage: fossil status - Report on the status of the current checkout. - -
      ˆ - Usage: fossil sync ?URL? ?-R|--repository REPOSITORY? - Synchronize the local repository with a remote repository. This is - the equivalent of running both "push" and "pull" at the same time. - See the "pull" command for additional information. - -
      ˆ - Usage: fossil tag SUBCOMMAND ... - Run various subcommands to control tags and properties - - fossil tag add ?--raw? TAGNAME CHECK-IN ?VALUE? - - Add a new tag or property to CHECK-IN. The tag will - be usable instead of a CHECK-IN in commands such as - update and merge. - - fossil tag branch ?--raw? ?--nofork? TAGNAME CHECK-IN ?VALUE? - - A fork will be created so that the new checkin - is a sibling of CHECK-IN and identical to it except - for a generated comment. Then the new tag will - be added to the new checkin and propagated to - all direct children. Additionally all symbolic - tags of that checkin inherited from CHECK-IN will - be cancelled. - - However, if the option --nofork is given, no - fork will be created and the tag/property will be - added to CHECK-IN directly. No tags will be canceled. - - fossil tag cancel ?--raw? TAGNAME CHECK-IN - - Remove the tag TAGNAME from CHECK-IN, and also remove - the propagation of the tag to any descendants. - - fossil tag find ?--raw? TAGNAME - - List all check-ins that use TAGNAME - - fossil tag list ?--raw? ?CHECK-IN? - - List all tags, or if CHECK-IN is supplied, list - all tags and their values for CHECK-IN. - - The option --raw allows the manipulation of all types of - tags used for various internal purposes in fossil. You - should not use this option to make changes unless you are - sure what you are doing. - - If you need to use a tagname that might be confused with - a hexadecimal check-in or artifact ID, you can explicitly - disambiguate it by prefixing it with "tag:". For instance: - - fossil update decaf - - will be taken as an artifact or check-in ID and fossil will - probably complain that no such revision was found. However - - fossil update tag:decaf - - will assume that "decaf" is a tag/branch name. - -
      ˆ - Usage: fossil timeline ?WHEN? ?CHECK-IN|DATETIME? ?-n|--count N? - Print a summary of activity going backwards in date and time - specified or from the current date and time if no arguments - are given. Show as many as N (default 20) check-ins. The - WHEN argument can be any unique abbreviation of one of these - keywords: - - before - after - descendants | children - ancestors | parents - - The CHECK-IN can be any unique prefix of 4 characters or more. - The DATETIME should be in the ISO8601 format. For - examples: "2007-08-18 07:21:21". You can also say "current" - for the current version or "now" for the current time. - -
      ˆ - Usage: fossil undo ?FILENAME...? - Undo the most recent update or merge operation. If FILENAME is - specified then restore the content of the named file(s) but otherwise - leave the update or merge in effect. - - A single level of undo/redo is supported. The undo/redo stack - is cleared by the commit and checkout commands. - -
      ˆ - Usage: fossil update ?VERSION? ?--latest? - The optional argument is a version that should become the current - version. If the argument is omitted, then use the leaf of the - tree that begins with the current version, if there is only a single leaf. If there are a multiple leaves, the latest is used - if the --latest flag is present. - - This command is different from the "checkout" in that edits are - not overwritten. Edits are merged into the new version. - -
      ˆ - Usage: fossil user SUBCOMMAND ... ?-R|--repository FILE? - Run various subcommands on users of the open repository or of - the repository identified by the -R or --repository option. - - fossil user capabilities USERNAME ?STRING? - - Query or set the capabilities for user USERNAME - - fossil user default ?USERNAME? - - Query or set the default user. The default user is the - user for command-line interaction. - - fossil user list - - List all users known to the repository - - fossil user new ?USERNAME? - - Create a new user in the repository. Users can never be - deleted. They can be denied all access but they must continue - to exist in the database. - - fossil user password USERNAME - - Change the web access password for a user. - -
      ˆ - Usage: fossil version - Print the source code version number for the fossil executable. - -
      ˆ - Usage: fossil wiki (export|create|commit|list) WikiName - Run various subcommands to fetch wiki entries. - - fossil wiki export PAGENAME ?FILE? - - Sends the latest version of the PAGENAME wiki - entry to the given file or standard output. - - fossil wiki commit PAGENAME ?FILE? - - Commit changes to a wiki page from FILE or from standard. - - fossil wiki create PAGENAME ?FILE? - - Create a new wiki page with initial content taken from - FILE or from standard input. - - fossil wiki list - - Lists all wiki entries, one per line, ordered - case-insentively by name. - - TODOs: - - fossil wiki export ?-u ARTIFACT? WikiName ?FILE? - - Outputs the selected version of WikiName. - - fossil wiki delete ?-m MESSAGE? WikiName - - The same as deleting a file entry, but i don't know if fossil - supports a commit message for Wiki entries. - - fossil wiki ?-u? ?-d? ?-s=[|]? list - - Lists the artifact ID and/or Date of last change along with - each entry name, delimited by the -s char. - - fossil wiki diff ?ARTIFACT? ?-f infile[=stdin]? EntryName - - Diffs the local copy of a page with a given version (defaulting - to the head version). - -
      - -
      ˆ - -

      Caveats

      - This is not actually a reference, it's the start of a reference. - There are wikilinks to uncreated pages for the commands. This was - created by running the fossil help for each command listed by running - fossil help... Duplicate commands are only listed once (I - think). There are several bits of fossil that are not addressed - in the help for commands (special wiki directories, special users, etc.) - so they are (currently) not addressed here. Clarity and brevity may be - sacrificed for expediency at the authors indiscretion. All spelling and - grammatical mistakes are somebody elses fault. void * - prohibited where __C_PLUS_PLUS__ . Title and taxes extra. - Not valid in Hooptigonia. Index: www/reviews.wiki ================================================================== --- www/reviews.wiki +++ www/reviews.wiki @@ -1,6 +1,17 @@ -

      What People Are Saying About Fossil

      +Reviews +External links: + + * [http://nixtu.blogspot.com/2010/03/fossil-dvcs-on-go-first-impressions.html | + Fossil DVCS on the Go - First Impressions] + * [http://blog.mired.org/2011/02/fossil-sweet-spot-in-vcs-space.html | + Fossil - a sweet spot in the VCS space] by Mike Meyer. + * [http://blog.s11n.net/?p=72|Four reasons to take a closer look at the Fossil SCM] by Stephan Beal + +See Also: + + * [./quotes.wiki | Short Quotes on Fossil, Git, And DVCSes] Daniel writes on 2009-01-06:
      The reasons I use fossil are that it's the only version control I @@ -7,11 +18,28 @@ have found that I can get working through the VERY annoying MS firewalls at work.. (albeit through an ntlm proxy) and I just love single .exe applications!
      -Stephen Beal writes on 2009-01-11: + +Joshua Paine on 2010-10-22: + +
      +With one of my several hats on, I'm in a small team using git. Another +team member just checked some stuff into trunk that should have been on +a branch. Nothing else had happened since, so in fossil I would have +just edited that commit and put it on a new branch. In git that can't +actually be done without danger once other people have pulled, so I had +to create a new commit rolling back the changes, then branch and cherry +pick the earlier changes, then figure out how to make my new branch +shared instead of private. Just want to say thanks for fossil making my +life easier on most of my projects, and being able to move commits to +another branch after the fact and shared-by-default branches are good +features. Also not having a misanthropic command line interface. +
      + +Stephan Beal writes on 2009-01-11:
      Sometime in late 2007 I came across a link to fossil on sqlite.org. It was a good thing I bookmarked it, because I was never able to find the @@ -104,12 +132,12 @@ In summary: I remember my first reaction to fossil being, "this will be an -excellent solution for small projects [like the dozens we've all got +excellent solution for small projects (like the dozens we've all got sitting on our hard drives but which don't justify the hassle of -version control]." A year of daily use in over 15 source trees has +version control)." A year of daily use in over 15 source trees has confirmed that, and I continue to heartily recommend fossil to other developers I know who also have their own collection of "unhosted" pet projects.
ADDED www/scgi.wiki Index: www/scgi.wiki ================================================================== --- www/scgi.wiki +++ www/scgi.wiki @@ -0,0 +1,26 @@ +Fossil SCGI + +To run Fossil using SCGI, start the [/help/server|fossil server] command +with the --scgi command-line option. You will probably also want to +specify an alternative TCP/IP port using --port. For example: + +
      +fossil server $REPOSITORY --port 9000 --scgi
      +
      + +Then configure your SCGI-aware web-server to send SCGI requests to port +9000 on the machine where Fossil is running. A typical configuration for +this in Nginx is: + +
      +location ~ ^/demo_project/ {
      +    include scgi_params;
      +    scgi_pass localhost:9000;
      +    scgi_param SCRIPT_NAME "/demo_project";
      +}
      +
      + +Note that Nginx does not normally send either the PATH_INFO or SCRIPT_NAME +variables via SCGI, but Fossil needs one or the other. So the configuration +above needs to add SCRIPT_NAME. If you do not do this, Fossil returns an +error. Index: www/selfcheck.wiki ================================================================== --- www/selfcheck.wiki +++ www/selfcheck.wiki @@ -13,12 +13,14 @@ part to the defensive measures described here, no data has been lost. The integrity checks are doing their job well.

      Atomic Check-ins With Rollback

-The fossil repository is an -SQLite version 3 database file. +The fossil repository is stored in an +SQLite database file. +([./tech_overview.wiki | Additional information] about the repository +file format.) SQLite is very mature and stable and has been in widespread use for many years, so we are confident it will not cause repository corruption. SQLite databases do not corrupt even if a program or system crash or power failure occurs in the middle of the update. If some kind of crash @@ -25,27 +27,28 @@ does occur in the middle of a change, then all the changes are rolled back the next time that the database is accessed. A check-in operation in fossil makes many changes to the repository database. But all these changes happen within a single transaction. -If something goes wrong in the middle of the commit, then the transaction +If something goes wrong in the middle of the commit, even if that something +is a power failure or OS crash, then the transaction is rolled back and the database is unchanged.

      Verification Of Delta Encodings Prior To Transaction Commit

      -The content files that comprise the global state of a fossil respository +The content files that comprise the global state of a fossil repository are stored in the repository as a tree. The leaves of the tree are stored as zlib-compressed BLOBs. Interior nodes are deltas from their -decendants. A lot of encoding is going on. There is +descendants. A lot of encoding is going on. There is zlib-compression which is relatively well-tested but still might cause corruption if used improperly. And there is the relatively new delta-encoding mechanism designed expressly for fossil. We want to make sure that bugs in these encoding mechanisms do not lead to loss of data. To increase our confidence that everything in the repository is -recoverable, fossil makes sure it can extract an exact replicate +recoverable, fossil makes sure it can extract an exact replica of every content file that it changes just prior to transaction commit. So during the course of check-in (or other repository operation) many different files in the repository might be modified. Some files are simply compressed. Other files are delta encoded and then compressed. @@ -64,14 +67,14 @@ files.

      Checksum Over All Files In A Check-in

      Manifest artifacts that define a check-in have two fields (the -R-card and Z-card) that record MD5 hashs of the manifest itself +R-card and Z-card) that record MD5 hashes of the manifest itself and of all other files in the manifest. Prior to any check-in commit, these checksums are verified to ensure that the check-in -checked in agrees exactly with what is on disk. Similarly, +agrees exactly with what is on disk. Similarly, the repository checksum is verified after a checkout to make sure that the entire repository was checked out correctly. Note that these added checks use a different hash (MD5 instead of SHA1) in order to avoid common-mode failures in the hash algorithm implementation. Index: www/selfhost.wiki ================================================================== --- www/selfhost.wiki +++ www/selfhost.wiki @@ -1,14 +1,13 @@ Fossil Self-Hosting Repositories -

      Fossil Self-Hosting Repositories

      Fossil has self-hosted since 2007-07-21. As of this writing (2009-08-24) there are three publicly accessible repositories for the Fossil source code: 1. [http://www.fossil-scm.org/] - 2. [http://www.hwaci.com/cgi-bin/fossil] - 3. [http://www2.fossil-scm.org/] + 2. [http://www2.fossil-scm.org/] + 3. [http://www3.fossil-scm.org/site.cgi] The canonical repository is (1). Repositories (2) and (3) automatically stay in synchronization with (1) via a cron job that invokes @@ -19,15 +18,15 @@ Changes (such as new tickets or wiki or check-ins) can be implemented on any of the three servers and those changes automatically propagate to the other two servers. Server (1) runs as a CGI script on a -Linode 720 located in Dallas, TX +Linode 1024 located in Dallas, TX - on the same virtual machine that hosts SQLite and over a -dozen other smaller projects. This demonstrates that Fossil does not -require much server power. +dozen other smaller projects. This demonstrates that Fossil can run on +a low-power host processor. Multiple fossil-based projects can easily be hosted on the same machine, even if that machine is itself one of several dozen virtual machines on single physical box. The CGI script that runs the canonical Fossil self-hosting repository is as follows: @@ -34,33 +33,34 @@
       #!/usr/bin/fossil
       repository: /fossil/fossil.fossil
       
      -Server (2) runs as a CGI script on a shared hosting account at -Hurricane Electric in San Jose and -Fremont, CA. This server demonstrates the ability of +Server (3) runs as a CGI script on a shared hosting account at +Hurricane Electric in Fremont, CA. +This server demonstrates the ability of Fossil to run on an economical shared-host web account with no privileges beyond port 80 HTTP access and CGI. It is not necessary -to have a dedicated server to run Fossil. As far as we are aware, +to have a dedicated computer with administrator privileges to run Fossil. +As far as we are aware, Fossil is the only full-featured configuration management system that can run in such a restricted environment. The CGI script that runs on the Hurricane Electric server is the same as the CGI script shown above, -except that the pathnames are modified to suite the environment: +except that the pathnames are modified to suit the environment:
       #!/home/hwaci/bin/fossil
       repository: /home/hwaci/fossil/fossil.fossil
       
      -Server (2) is synchronized with the canonical server (1) by running +Server (3) is synchronized with the canonical server (1) by running the following command via cron:
       /home/hwaci/bin/fossil sync -R /home/hwaci/fossil/fossil.fossil
       
      -Server (3) is a -Linode 360 located in Atlanta, GA +Server (2) is a +Linode 512 located in Newark, NJ and set up just like the canonical server (1) with the addition of a -cron job for synchronization as in server (2). +cron job for synchronization as in server (3). Index: www/server.wiki ================================================================== --- www/server.wiki +++ www/server.wiki @@ -1,115 +1,365 @@ - -

      Server Setup Guide

      -

      This guide is intended to help guide you in setting up a Fossil server.

      - -

      Standalone server

      -The easiest way to set up a Fossil server is to use the server or -ui command. Assuming the repository you are interested in serving is -in the file "repo.fossil", you can use either of these commands to -start Fossil as a server: -
        -
      • fossil server repo.fossil -
      • fossil ui repo.fossil -
      - -

      -Both of these commands start a Fossil server on port 8080 on the local machine, -which can be accessed with the URL: http://localhost:8080/ using any -handy web browser. The difference between the two commands is that "ui", in -addition to starting the Fossil server, also starts a web browser and points it -to the URL mentioned above. -

      -

      -NOTES: -

        -
      1. The option "--port NNN" will start the server on port "NNN" instead of 8080. -
      2. If port 8080 is already being used (perhaps by another Fossil server), then -Fossil will use the next available port number. -
      3. Starting either command from within an "open" Fossil checkout will start a -server using the repository corresponding to the checkout. -
      4. This is an excellent option for quickly sharing with coworkers on a small network. -
      5. A huge advantage to this deployment scenario is that no special "root" or "administrator" access is required in order to share the repository. -
      -

      -
      - -

      Fossil as an ''inetd'' service

      -

      -Modify your /etc/inetd.conf (on Linux, modify as appropriate for your platform) so it contains a line like this: -

      - -12345 stream tcp nowait.1000 root /path-to/fossil /path-to/fossil http /other-path-to/repository - -
      -In this example, you are telling "inetd" that when an incoming connection on port "12345" happens, it should launch the binary "/path-to/fossil". Obviously you will need to modify the "path-to" parts as appropriate for your particular setup. -

      -

      -This is a more complex setup than the "standalone" server, but it has the advantage of only using system resources when an actual connection is attempted. If no-one ever connects to that port, a Fossil server will not (automatically) run. It has the disadvantage of requiring "root" access and therefore may not normally be available to lower-priced "shared" servers on the internet. -

      -
      - -

      Fossil as a ''CGI script''

      -

      -This is the most flexible and most likely to be widely usable of these deployment scenarios. In order for it to work, you must have the ability to install "CGI scripts" on the server you are interested in using. -

      -
      - -

      One script per repository

      -

      -Create a script (let's call it 'repo') in your CGI directory which has content like this: -

      -#!/path-to/fossil -repository: /path-to-repo/repository -
      -

      -

      -It may be necessary to set permissions properly, or to modify an ".htaccess" file or other server-specific things like that. Consult with your server provider if you need that sort of assistance. -

      -

      -Once the script is set up correctly, and assuming your server is also set correctly, you should be able to access your repository with a URL like: http://mydomain.org/cgi-bin/repo (assuming the "repo" script is accessible under "cgi-bin", which would be a typical deployment on Apache for instance). -

      -
      - -

      Serving multiple repositories with one script

      -

      -This scenario is almost identical to the previous one. However, here we will assume you have multiple repositories, in one directory (call it 'fossils'). So as before, create a script (again, 'repo'): -

      -#!/path-to/fossil -directory: /path-to-repo/fossils -notfound: http://url-to-go-to-if-repo-not-found/ -
      -

      -

      -Once deployed, a URL like: http://mydomain.org/cgi-bin/repo/XYZ will serve up the repository "fossils/XYX" (if it exists). This makes serving multiple projects on one server pretty painless. -

      +How To Configure A Fossil Server +

      Introduction

      +

      A server is not necessary to use Fossil, but a server does help in collaborating with +peers. A Fossil server also works well as a complete website for a project. +For example, the complete [https://www.fossil-scm.org/] website, including the +page you are now reading (but excepting the +[https://www.fossil-scm.org/download.html|download page]), +is just a Fossil server displaying the content of the +self-hosting repository for Fossil.

      +

      This article is a guide for setting up your own Fossil server.

      +

      Overview

      +There are basically four ways to set up a Fossil server: +
        +
      1. A stand-alone server +
      2. Using inetd or xinetd or stunnel +
      3. CGI +
      4. SCGI (a.k.a. SimpleCGI) +
      +Each of these can serve either a single repository, or a directory hierarchy +containing many repositories with names ending in ".fossil". +
      + +

      Standalone server

      +The easiest way to set up a Fossil server is to use either the +[/help/server|server] or the [/help/ui|ui] commands: +
        +
      • fossil server REPOSITORY +
      • fossil ui REPOSITORY +
      +

+The REPOSITORY argument is either the name of the repository file, or +a directory containing many repositories. +Both of these commands start a Fossil server, usually on TCP port 8080, though +a higher numbered port might also be used if 8080 is already occupied. You can +access these using URLs of the form http://localhost:8080/, or if +REPOSITORY is a directory, URLs of the form +http://localhost:8080/repo/ where repo is the base +name of the repository file without the ".fossil" suffix. +The difference between "ui" and "server" is that "ui" will +also start a web browser and point it +to the URL mentioned above, and the "ui" command binds to +the loopback IP address (127.0.0.1) only, so that the "ui" command cannot be +used to serve content to a different machine. +

      +

      +If one of the commands above is run from within an open checkout, +then the REPOSITORY argument can be omitted and the checkout is used as +the repository. +

      +

      +Both commands have additional command-line options that can be used to refine +their behavior. See the [/help/server|online documentation] for an overview. +
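For illustration only (the port number, directory, and repository name below are hypothetical), serving a directory full of repositories on an explicit port, or opening a single repository in a browser, might look like:

   fossil server /home/fossil/repos --port 8081
   fossil ui ~/src/demo.fossil

With the first command, a repository stored as /home/fossil/repos/demo.fossil would be reachable at http://localhost:8081/demo/.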

      +
      + +

      Fossil as an inetd/xinetd or stunnel service

      +

      +A Fossil server can be launched on-demand by inetd or xinetd using +the [/help/http|fossil http] command. To launch Fossil from inetd, modify +your inetd configuration file (typically "/etc/inetd.conf") to contain a +line something like this: +

      +
      +80 stream tcp nowait.1000 root /usr/bin/fossil /usr/bin/fossil http /home/fossil/repo.fossil
      +
      +
+In this example, you are telling "inetd" that when an incoming connection +appears on TCP port "80", it should launch the binary "/usr/bin/fossil" +program with the arguments shown. +Obviously you will +need to modify the pathnames for your particular setup. +The final argument is either the name of the fossil repository to be served, +or a directory containing multiple repositories. +

      +

      +If you use a non-standard TCP port on +systems where the port-specification must be a symbolic name and cannot be +numeric, add the desired name and port to /etc/services. For example, if +you want your Fossil server running on TCP port 12345 instead of 80, you +will need to add: +

      +
      +fossil          12345/tcp  #fossil server
      +
      +
      +and use the symbolic name ('fossil' in this example) instead of the numeral ('12345') +in inetd.conf. For details, see the relevant section in your system's documentation, e.g. +the [https://www.freebsd.org/doc/en/books/handbook/network-inetd.html|FreeBSD Handbook] in +case you use FreeBSD. +
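For illustration, reusing the hypothetical paths from the earlier inetd example, the corresponding /etc/inetd.conf entry would then refer to the service by its symbolic name rather than a port number:

   fossil stream tcp nowait.1000 root /usr/bin/fossil /usr/bin/fossil http /home/fossil/repo.fossil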

      +

      +If your system is running xinetd, then the configuration is likely to be +in the file "/etc/xinetd.conf" or in a subfile of "/etc/xinetd.d". +An xinetd configuration file will appear like this:

      +
      +
      +service http
      +{
      +  port = 80
      +  socket_type = stream
      +  wait = no
      +  user = root
      +  server = /usr/bin/fossil
      +  server_args = http /home/fossil/repos/
      +}
      +
      +
      +

      +The xinetd example above has Fossil configured to serve multiple +repositories, contained under the "/home/fossil/repos/" directory. +

      +

      +In both cases notice that Fossil was launched as root. This is not required, +but if it is done, then Fossil will automatically put itself into a chroot +jail for the user who owns the fossil repository before reading any information +off of the wire. +

      +

      +Inetd or xinetd must be enabled, and must be (re)started whenever their configuration +changes - consult your system's documentation for details. +

      +

      +[https://www.stunnel.org/ | Stunnel version 5] is an inetd-like process that +accepts and decodes SSL-encrypted connections. Fossil can be run directly from +stunnel in a manner similar to inetd and xinetd. This can be used to provide +a secure link to a Fossil project. The configuration needed to get stunnel5 +to invoke Fossil is very similar to the inetd and xinetd examples shown above. +The relevant parts of an stunnel configuration might look something +like the following: +

      
      +[https]
      +accept       = www.ubercool-project.org:443
      +TIMEOUTclose = 0
      +exec         = /usr/bin/fossil
      +execargs     = /usr/bin/fossil http /home/fossil/ubercool.fossil --https
      +
      +See the stunnel5 documentation for further details about the /etc/stunnel/stunnel.conf +configuration file. Note that the [/help/http|fossil http] command should include +the --https option to let Fossil know to use "https" instead of "http" as the scheme +on generated hyperlinks. +

      +Using inetd or xinetd or stunnel is a more complex setup +than the "standalone" server, but it has the +advantage of only using system resources when an actual connection is +attempted. If no-one ever connects to that port, a Fossil server will +not (automatically) run. It has the disadvantage of requiring "root" access +and therefore may not normally be available to lower-priced "shared" servers +on the internet. +

      +
      + +

      Fossil as CGI

      +

      +A Fossil server can also be run from an ordinary web server as a CGI program. +This feature allows Fossil to be seamlessly integrated into a larger website. +CGI is how the [./selfhost.wiki | self-hosting fossil repositories] are +implemented. +

      +

      +To run Fossil as CGI, create a CGI script (here called "repo") in the CGI directory +of your web server and having content like this: +

      +#!/usr/bin/fossil
      +repository: /home/fossil/repo.fossil
      +
      +

      + +

      +As always, adjust your paths appropriately. +It may be necessary to set permissions properly, or to modify an ".htaccess" +file or make other server-specific changes. Consult the documentation +for your particular web server. In particular, the following permissions are +normally required (but, again, may be different for a particular +configuration): + +

        +
      • The Fossil binary must be readable/executable, and ALL directories leading up to it +must be readable by the process which executes the CGI.
      • +
      • ALL directories leading to the CGI script must also be readable and the CGI +script itself must be executable for the user under which it will run (which often differs +from the one running the web server - consult your site's documentation or administrator).
      • +
      • The repository file AND the directory containing it must be writable by the same account +which executes the Fossil binary (again, this might differ from the WWW user). The directory +needs to be writable so that sqlite can write its journal files.
      • +
      • Fossil must be able to create temporary files, the default directory +for which depends on the OS. When the CGI process is operating within +a chroot, ensure that this directory exists and is readable/writeable +by the user who executes the Fossil binary.
      • +
      +
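As a rough sketch only (the "www-data" account and every path below are assumptions; substitute the user and locations from your own configuration), the permissions described above might be established like this:

   # Fossil binary and CGI script readable and executable
   chmod 755 /usr/bin/fossil /var/www/cgi-bin/repo
   # repository file and its containing directory writable by the CGI user
   chown www-data /home/fossil /home/fossil/repo.fossil
   chmod 775 /home/fossil
   chmod 664 /home/fossil/repo.fossil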

      + +

      +Once the script is set up correctly, and assuming your server is also set +correctly, you should be able to access your repository with a URL like: +http://mydomain.org/cgi-bin/repo (assuming the "repo" script is +accessible under "cgi-bin", which would be a typical deployment on Apache +for instance). +

      +

      +To serve multiple repositories from a directory using CGI, use the "directory:" +tag in the CGI script rather than "repository:". You might also want to add +a "notfound:" tag to tell where to redirect if the particular repository requested +by the URL is not found: +

      +#!/usr/bin/fossil
      +directory: /home/fossil/repos
      +notfound: http://url-to-go-to-if-repo-not-found/
      +
      +

      +

      +Once deployed, a URL like: http://mydomain.org/cgi-bin/repo/XYZ +will serve up the repository "/home/fossil/repos/XYZ.fossil" (if it exists). +

      +
      + + +

      Fossil as SCGI

      + +

      +The [/help/server|fossil server] command, described above as a way of +starting a stand-alone web server, can also be used for SCGI. Simply add +the --scgi command-line option and the stand-alone server will interpret +and respond to the SimpleCGI or SCGI protocol rather than raw HTTP. This can +be used in combination with a webserver (such as [http://nginx.org|Nginx]) +that does not support CGI. A typical Nginx configuration to support SCGI +with Fossil would look something like this: +

      +location /demo_project/ {
      +    include scgi_params;
      +    scgi_pass localhost:9000;
      +    scgi_param SCRIPT_NAME "/demo_project";
      +}
      +
      +

      +Note that Fossil requires the SCRIPT_NAME variable +in order to function properly, but Nginx does not provide this +variable by default. +So it is necessary to provide the SCRIPT_NAME parameter in the configuration. +Failure to do this will cause Fossil to return an error. +

      +

      +All of the features of the stand-alone server mode described above, +such as the ability to serve a directory full of Fossil repositories +rather than just a single repository, work the same way in SCGI mode. +

      +

      +For security, it is probably a good idea to add the --localhost option +to the [/help/server|fossil server] command to prevent Fossil from accepting +off-site connections. And one might want to specify the listening TCP port +number, rather than letting Fossil choose one for itself, just to avoid +ambiguity. A typical command to start a Fossil SCGI server +would be something like this: +

      +fossil server $REPOSITORY --scgi --localhost --port 9000
      +

      Securing a repository with SSL

      -Using either of the CGI script approaches, it is trivial to use SSL to secure the server. Simply set up the Fossil CGI scripts etc. as above, but modify the Apache (or IIS, etc.) server to require SSL (that is, a URL with "https://") in order to access the CGI script directory. This may also be accomplished (on Apache, at least) using appropriate ".htaccess" rules. -

      -

      -If you are using "inetd" to serve your repository, then you simply need to add "/usr/bin/stunnel" (perhaps on a different path, depending on your setup) before the command line to launch Fossil. -

      -

      -At this stage, the standalone server (e.g. "fossil server") does not support SSL. -

      -
      - -

      Various security concerns with hosted repositories

      -

      -There are two main concerns relating to usage of Fossil for sharing sensitive information (source or any other data): -

        -
      • Interception of the Fossil synchronization stream, thereby capturing data, and -
      Direct access to the Fossil repository on the server -

      -

      -Regarding the first, it is adequate to secure the server using SSL, and disallowing any non-SSL access. The data stream will be encrypted by the HTTPS protocol, rendering the data reasonably secure. The truly paranoid may wish to deploy ssh encrypted tunnels, but that is quite a bit more difficult and cumbersome to set up (particularly for a larger number of users). -

      -

      -As far as direct access to the repository, the same steps must be taken as for any other internet-facing data-store. Access passwords to any disk-accessing accounts should be strong (and preferably changed from time to time). However, the data in the repository itself are not encrypted (unless you save encrypted data yourself), and so the system administrators of your server will be able to access your data (as with any hosting service setup). The only workaround in this case is to host the server yourself, in which case you will need to allocate resources to deal with administration issues. -

      - -
      - - +Using either CGI or SCGI, it is trivial to use SSL to +secure the server. Simply set up the Fossil CGI scripts etc. as above, +but modify the Apache (or IIS, etc.) server to require SSL (that is, a +URL with "https://") in order to access the CGI script directory. This +may also be accomplished (on Apache, at least) using appropriate +".htaccess" rules. +
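As one possible sketch of such a rule (this assumes Apache with mod_rewrite enabled; it is not the only way to require SSL), an ".htaccess" file in the CGI directory could redirect plain HTTP requests to their HTTPS equivalents:

   RewriteEngine On
   RewriteCond %{HTTPS} !=on
   RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]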

      +

      +If you are using "inetd" to serve your repository, then you simply need +to add "/usr/bin/stunnel" (perhaps on a different path, depending on your +setup) before the command line to launch Fossil. +

      +

      +At this stage, the standalone server (e.g. "fossil server") does not +support SSL. +

      +

      +For more information, see Using SSL with Fossil. +

      +
      + + +

      Managing Server Load

      +

      +A Fossil server is very efficient and normally presents a very light +load on the server. +The Fossil [./selfhost.wiki | self-hosting server] is a 1/24th slice VM at +[http://www.linode.com | Linode.com] hosting 65 other repositories in +addition to Fossil (and including some very high-traffic sites such +as [http://www.sqlite.org] and [http://system.data.sqlite.org]) and +it has a typical load of 0.05 to 0.1. A single HTTP request to Fossil +normally takes less than 10 milliseconds of CPU time to complete. So +requests can be arriving at a continuous rate of 20 or more per second +and the CPU can still be mostly idle. +

      +However, there are some Fossil web pages that can consume large +amounts of CPU time, especially on repositories with a large number +of files or with long revision histories. High CPU usage pages include +[/help?cmd=/zip | /zip], [/help?cmd=/tarball | /tarball], +[/help?cmd=/annotate | /annotate] and others. On very large repositories, +these commands can take 15 seconds or more of CPU time. +If these kinds of requests arrive too quickly, the load average on the +server can grow dramatically, making the server unresponsive. +

      +Fossil provides two capabilities to help avoid server overload problems +due to excessive requests to expensive pages: +

        +
      1. An optional cache is available that remembers the 10 most recently + requested /zip or /tarball pages and returns the precomputed answer + if the same page is requested again. +

      2. Page requests can be configured to fail with a + [http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.5.3 | "503 Server Overload"] + HTTP error if an expensive request is received while the host load + average is too high. +

      +Both of these load-control mechanisms are turned off by default, but they +are recommended for high-traffic sites. +

      +The webpage cache is activated using the [/help?cmd=cache|fossil cache init] +command-line on the server. Add a -R option to specify the specific repository +for which to enable caching. If running this command as root, be sure to +"chown" the cache database (which is a separate file in the same directory +and with the same name as the repository but with the suffix changed to ".cache") +to give it write permission for the userid of the webserver. +
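A minimal sketch, assuming a hypothetical repository at /home/www/repo.fossil served by a web server running as the "www-data" user:

   fossil cache init -R /home/www/repo.fossil
   chown www-data /home/www/repo.cache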

+To activate the server load control feature, +visit the /Admin/Access setup page in the administrative web +interface and, in the "Server Load Average Limit" box, +enter the load average threshold above which "503 Server +Overload" replies will be issued for expensive requests. On the +self-hosting Fossil server, that value is set to 1.5. But you could easily +set it higher on a multi-core server. +

      +The maximum load average can also be set on the command line using +commands like this: +

      +fossil set max-loadavg 1.5
      +fossil all set max-loadavg 1.5
      +
      +The second form is especially useful for changing the maximum load average +simultaneously on a large number of repositories. +

      +Note that this load-average limiting feature is only available on operating +systems that support the "getloadavg()" API. Most modern Unix systems have +this interface, but Windows does not, so the feature will not work on Windows. +Note also that Linux implements "getloadavg()" by accessing the "/proc/loadavg" +file in the "proc" virtual filesystem. If you are running a Fossil instance +inside a chroot() jail on Linux, you will need to make the "/proc" file +system available inside that jail in order for this feature to work. On +the self-hosting Fossil repository, this was accomplished by adding a line +to the "/etc/mtab" or "/etc/fstab" file that looks like: +

      +chroot_jail_proc /home/www/proc proc r 0 0
      +
      +Pathnames should be adjusted for individual systems, of course. +

      +To see if the load-average limiter is functional, visit the [/test_env] page +of the server to view the current load average. If the value for the load +average is greater than zero, that means that it is possible to activate +the load-average limiter on that repository. If the load average shows +exactly "0.0", then that means that Fossil is unable to find the load average +(either because it is in a chroot() jail without /proc access, or because +it is running on a system that does not support "getloadavg()") and so the +load-average limiter will not function. + +

      ADDED www/settings.wiki Index: www/settings.wiki ================================================================== --- www/settings.wiki +++ www/settings.wiki @@ -0,0 +1,62 @@ +Fossil Settings + +

      Using Fossil Settings

      + +Settings control the behaviour of fossil. They are set with the +fossil settings command, or through the web interface in +the Settings page in the Admin section. + +For a list of all settings, view the Settings page, or type +fossil help settings from the command line. + + +

      Repository settings

      + +Settings are set on a per-repository basis. When you clone a repository, +a subset of settings are copied to your local repository. + +If you make a change to a setting on your local repository, it is not +synced back to the server when you push or sync. If +you make a change on the server, you need to manually make the change on +all repositories which are cloned from this repository. + +You can also set a setting globally on your local machine. The value +will be used for all repositories cloned to your machine, unless +overridden explicitly in a particular repository. Global settings can be +set by using the -global option on the fossil settings +command. + +
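For example (the particular settings and values here are purely illustrative), a property can be set for the current repository only, or globally for every repository on the machine:

   fossil settings autosync 0
   fossil settings editor vim -global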

      "Versionable" settings

+ +Most of the settings control the behaviour of fossil on your local +machine, largely acting to reflect your preference on how you want to +use Fossil, how you communicate with the server, or options for hosting +a repository on the web. + +However, for historical reasons, some settings affect how you work with +versioned files. These are allow-symlinks, +binary-glob, crnl-glob, empty-dirs, +encoding-glob, ignore-glob, keep-glob and +manifest. The most important is ignore-glob, which +specifies which files should be ignored when looking for unmanaged files +with the extras command. + +Because these options can change over time, and because replicating such +changes by hand is inconvenient, these settings are "versionable". As well as being +able to be set using the settings command or the web interface, +you can create versioned files in the .fossil-settings +subdirectory of the check-out root, named with the setting name. +The contents of the file are the +value of the setting, and these files are checked in, committed, merged, +and so on, as with any other file. + +Where a setting is a list of values, such as ignore-glob, you +can use a newline as a separator as well as a comma. + +For example, to set the list of ignored files, create a +.fossil-settings/ignore-glob file where each line contains a +glob for ignored files. + +If you set the value of a setting using the settings command as +well as a versioned file, the versioned setting will take precedence. A +warning will be displayed. Index: www/shunning.wiki ================================================================== --- www/shunning.wiki +++ www/shunning.wiki @@ -34,47 +34,50 @@ "rebuild" command.

      Shunning lists are local state

      The shunning list is part of the local state of a Fossil repository. -In other words, shunning does not propagate using the normal "sync" -mechanism. An artifact can be +In other words, shunning does not propagate to a remote repository +using the normal "sync" mechanism. An artifact can be shunned from one repository but be allowed to exist in another. The fact that the shunning list does not propagate is a security feature. If the -shunning list propagated then a malecious user (or +shunning list propagated then a malicious user (or a bug in the fossil code) might introduce a shun record that would -propagate through all respositories in a network and permanently +propagate through all repositories in a network and permanently destroy vital information. By refusing to propagate the shunning list, -Fossil insures that no remote user will ever be able to remove +Fossil ensures that no remote user will ever be able to remove information from your personal repositories without your permission. -The shunning list does not propagate by the normal "sync" mechanism, +The shunning list does not propagate to a remote repository +by the normal "sync" mechanism, but it is still possible to copy shuns from one repository to another using the "configuration" command: fossil configuration pull shun remote-url
      fossil configuration push shun remote-url The two command above will pull or push shunning lists from or to the remote-url indicated and merge the lists on the receiving end. "Admin" privilege on the remote server is required in order to -push a shun list. +push a shun list. In contrast, the shunning list will be automatically +received by default as part of a normal client "pull" operation unless +disabled by the "auto-shun" setting. -Note that the shunning list remains in the respository even after the +Note that the shunning list remains in the repository even after the shunned artifact has been removed. This is to prevent the artifact from being reintroduced into the repository the next time it syncs with another repository that has not shunned the artifact.

      Managing the shunning list

      The complete shunning list for a repository can be viewed by a user -with "admin" privilege on the "/shunned" URL of the web interface to Fossil. +with "admin" privilege on the "/shun" URL of the web interface to Fossil. That URL is accessible under the "Admin" button on the default menu bar. Items can be added to or removed from the shunning list. "Sync" operations are inhibited as soon as the artifact is added to the shunning list, but the content of the artifact is not actually removed -from the responstory until the next time the repository is rebuilt. +from the repository until the next time the repository is rebuilt. When viewing individual artifacts with the web interface, "admin" users will usually see a "Shun" option in the submenu that will take them directly to the shunning page and enable that artifact to be shunned with a single additional mouse click. ADDED www/ssl.wiki Index: www/ssl.wiki ================================================================== --- www/ssl.wiki +++ www/ssl.wiki @@ -0,0 +1,36 @@ +SSL and Fossil + +

      Using SSL with Fossil

+ +If you are storing sensitive information in your repository, you should use SSL to encrypt all communications. This will protect the credentials used to access the server, as well as preventing eavesdropping on the contents of your repository. + +To host a repository with SSL, you need to use a web server which supports SSL in front of the Fossil server. You can host it using the CGI option or by proxying Fossil's built-in HTTP server. + +Your fossil client must be built with SSL support. The configure script will attempt to find OpenSSL on your system, but if necessary, you can specify the location with the --with-openssl option. Type ./configure --help for details. + +Make sure the URL you clone from uses the https: scheme to ensure you're using SSL. If your server is configured to serve the repository from http as well as https, it's easy to accidentally use unencrypted HTTP if you forget the all-important 's'. + +
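For example (the host name and file names are placeholders), cloning over SSL simply means giving an https: URL:

   fossil clone https://example.org/cgi-bin/repo project.fossil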

      Certificates

+ +To verify the identity of a server, SSL uses certificates. Fossil needs to know which certificates you trust. + +If you are using a self-signed certificate, you'll be asked if you want to accept the certificate the first time you communicate with the server. Verify that the certificate fingerprint is correct, then answer "always" to remember your decision. + +If you are using a certificate signed by a certificate authority, you need to specify the certificates you trust with the ssl-ca-location setting. Set this globally with the -global option for convenience. + +This should be set to the location of a file containing all the PEM encoded certificates you trust. You can obtain a certificate using a web browser, for example, Firefox, or just refer to your system's trusted CA roots, which are usually stored somewhere in /etc. + +
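A minimal sketch, assuming your trusted CA bundle lives at the common but system-dependent location /etc/ssl/certs/ca-certificates.crt:

   fossil settings ssl-ca-location /etc/ssl/certs/ca-certificates.crt -global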

      Client side certificates

      + +You can also use client side certificates to add an extra layer of authentication, over and above Fossil's built in user management. If you are particularly paranoid, you'll want to use this to remove the ability of anyone on the internet from making any request to Fossil. Without presenting a valid client side certificate, the web server won't invoke the fossil CGI handler. + +Configure your server to request a client side certificate, and set up a certificate authority to sign your client certificates. For each person who needs to access the repository, create a private key and certificate signed with that CA. + +The PEM encoded private key and certificate should be stored in a single file, simply by concatenating the key and certificate files. Specify the location of this file with the ssl-identity setting, or the --ssl-identity option to the clone command. + +If you've password protected the private key, the password will be requested every time you connect to the server. This password is not stored by fossil, as doing so would defeat the purpose of having a password. + +If you attempt to connect to a server which requests a client certificate, but don't provide one, fossil will show an error message which explains what to do to authenticate with the server. + Index: www/stats.wiki ================================================================== --- www/stats.wiki +++ www/stats.wiki @@ -2,164 +2,149 @@

      Performance Statistics

The questions will inevitably arise: How does Fossil perform? Does it use a lot of disk space or bandwidth? Is it scalable? -In an attempt to answers these questions, this report looks at five +In an attempt to answer these questions, this report looks at several projects that use fossil for configuration management and examines how well they are working. The following table is a summary of the results. +(Last updated on 2015-02-28.) Explanation and analysis follows the table. - - + - - - - - - - - - - - - - + + + + + + + + + + + + + + +
      Project Number Of Artifacts Number Of Check-insProject Duration
      (as of 2009-08-23)
      Average Check-ins Per DayProject Duration
      (as of 2015-02-28)
      Uncompressed Size Repository Size Compression Ratio Clone Bandwidth
      SQLite -28643 -6755 -3373 days
      9.24 yrs -
      2.00 -1.27 GB -35.4 MB -35:1 -982 KB up
      12.4 MB down -
      Fossil -4981 -1272 -764 days
      2.1 yrs -
      1.66 -144 MB -8.74 MB -16:1 -128 KB up
      4.49 MB down -
      SLT -2062 -67 -266 days -0.25 -1.76 GB -147 MB -11:1 -1.1 MB up
      141 MB down -
      TH3 -1999 -429 -331 days -1.30 -70.5 MB -6.3 MB -11:1 -55 KB up
      4.66 MB down -
      SQLite Docs -1787 -444 -650 days
      1.78 yrs -
      0.68 -43 MB -4.9 MB -8:1 -46 KB up
      3.35 MB down +
      [http://www.sqlite.org/src/timeline | SQLite] +56089 +14357 +5388 days
      14.75 years +
      3.4 GB +45.5 MB +73:1 +29.9 MB +
      [http://core.tcl-lang.org/tcl/timeline | TCL] +139662 +18125 +6183 days
      16.93 years +
      6.6 GB +192.6 MB +34:1 +117.1 MB +
      [/timeline | Fossil] +29937 +8238 +2779 days
      7.61 years +
      2.3 GB +34.0 MB +67:1 +21.5 MB +
      [http://www.sqlite.org/slt/timeline | SLT] +2278 +136 +2282 days
      6.25 years +
      2.0 GB +144.7 MB +13:1 +142.1 MB +
      [http://www.sqlite.org/th3.html | TH3] +8729 +2406 +2347 days
      6.43 years +
      386 MB +15.2 MB +25:1 +12.6 MB +
      [http://www.sqlite.org/docsrc/timeline | SQLite Docs] +5503 +1631 +2666 days
      7.30 years +
      194 MB +10.9 MB +17:1 +8.37 MB
      -

      The Five Projects

      - -The five projects listed above were chosen because they have been in -existance for a long time (relative to the age of fossil) or because -they have larges amounts of content. The most important project using -fossil is SQLite. Fossil itself -is built on top of SQLite and so obviously SQLite has to predate fossil. -SQLite was originally versioned using CVS, but recently the entire 9-year -and 320-MB CVS history of SQLite was converted over to Fossil. This is -an important datapoint because it demonstrates fossil's ability to manage -a significant and long-running project. -The next-longest running fossil project is fossil itself, at 2.1 years. -The documentation for SQLite -(identified above as "SQLite Docs") was split off of the main SQLite -source tree and into its own fossil repository about 1.75 years ago. -The "SQL Logic Test" or "SLT" project is a massive -collection of SQL statements and their output used to compare the -processing of SQLite against MySQL, PostgreSQL, Microsoft SQL Server, -and Oracle. -Finally "TH3" is a proprietary set of test cases for SQLite used to give -100% branch test coverage of SQLite on embedded platforms. All projects -except for TH3 are open-source. -

      Measured Attributes

-In fossil, every version of every file, every wiki page, every change to +In Fossil, every version of every file, every wiki page, every change to every ticket, and every check-in is a separate "artifact". One way to -think of a fossil project is as a bag of artifacts. Of course, there is -a lot more than this going on in fossil. Many of the artifacts have meaning +think of a Fossil project is as a bag of artifacts. Of course, there is +a lot more than this going on in Fossil. Many of the artifacts have meaning and are related to other artifacts. But at a low level (for example when synchronizing two instances of the same project) the only thing that matters is the unordered collection of artifacts. In fact, one of the key -characteristics of fossil is that the entire project history can be +characteristics of Fossil is that the entire project history can be reconstructed simply by scanning the artifacts in an arbitrary order. The number of check-ins is the number of times that the "commit" command has been run. A single check-in might change 3 or 4 files, or it might -change several dozen different files. Regardless of the number of files +change dozens or hundreds of files. Regardless of the number of files changed, it still only counts as one check-in. The "Uncompressed Size" is the total size of all the artifacts within -the fossil repository assuming they were all uncompressed and stored +the repository assuming they were all uncompressed and stored separately on the disk. Fossil makes use of delta compression between related versions of the same file, and then uses zlib compression on the resulting deltas. The total resulting repository size is shown after the uncompressed -size. +size. On the right end of the table, we show the "Clone Bandwidth". This is the -total number of bytes sent from client to server ("uplink") and from server -back to client ("downlink") in order to clone a repository. These byte counts -include HTTP protocol overhead. +total number of bytes sent from server back to the client. The number of +bytes sent from client to server is negligible in comparison. +These byte counts include HTTP protocol overhead. In the table and throughout this article, "GB" means gigabytes (10^9 bytes) not gibibytes (2^30 bytes). Similarly, "MB" and "KB" mean megabytes and kilobytes, not mebibytes and kibibytes. -

      Analysis And Supplimental Data

      +

      Analysis And Supplemental Data

Perhaps the two most interesting datapoints in the above table are SQLite and SLT. SQLite is a long-running project with long revision chains. -Some of the files in SQLite have been edited close to a thousand times. +Some of the files in SQLite have been edited over a thousand times. Each of these edits is stored as a delta, and hence the SQLite project -gets excellent 35:1 compression. SLT, on the other hand, consists of +gets excellent 73:1 compression. SLT, on the other hand, consists of many large (megabyte-sized) SQL scripts that have one or maybe two -versions. There is very little delta compression occurring and so the +edits each. There is very little delta compression occurring and so the overall repository compression ratio is much lower. Note also that quite a bit more bandwidth is required to clone SLT than SQLite. For the first nine years of its development, SQLite was versioned by CVS. The resulting CVS repository measured over 320MB in size. So, the -developers were -pleasently surprised to see that this entire project could be cloned in -fossil using only about 13MB of network traffic. The "sync" protocol +developers were surprised to see that the equivalent Fossil project (the +first nine years of SQLite) would clone with only 13MB of bandwidth. +The "sync" protocol used by fossil has turned out to be surprisingly efficient. A typical -check-in on SQLite might use 3 or 4KB of network bandwidth total. Hardly -worth measuring. The sync protocol is efficient enough that, once cloned, -fossil could easily be used over a dial-up connection. +check-in on SQLite might use 3 or 4KB of network bandwidth. +For example, the [04eef9522386a59e] check-in used a single HTTP request +of 2099 bytes and got back a reply of 1116 bytes. +The sync protocol is efficient enough that, once cloned, +Fossil can easily be used over a dial-up connection. ADDED www/style.wiki Index: www/style.wiki ================================================================== --- www/style.wiki +++ www/style.wiki @@ -0,0 +1,121 @@ +Coding Style + +Fossil source code should follow the style guidelines below. + +General points: + + 10. No line of code exceeds 80 characters in length. (Occasional + exceptions are made for HTML text on @-lines.) + + 11. There are no tab characters. + + 12. Line terminators are \n only. Do not use a \r\n line terminator. + + 13. 2-space indentation is used. Remember: No tabs. + + 14. Comments contain no spelling or grammatical errors. (Abbreviations + and sentence fragments are acceptable when trying to fit a comment + on a single line as long as the meaning is clear.) + + 15. The tone of comments is professional and courteous. Comments + contain no profanity, obscenity, or innuendo. + + 16. All C-code conforms to ANSI C-89. + + 17. All comments and identifiers are in English. + + 18. The program is single-threaded. Do not use threads. + (One exception to this is the HTTP server implementation for Windows, + which we do not know how to implement without the use of threads.) + + +C preprocessor macros: + + 20. The purpose of every preprocessor macro is clearly explained in a + comment associated with its definition. + + 21. Every preprocessor macro is used at least once. + + 22. The names of preprocessor macros clearly reflect their use. + + 23. Assumptions about the relative values of related macros are + verified by asserts. Example: assert(READ_LOCK+1==WRITE_LOCK); + + +Function header comments: + + 30.
Every function has a header comment describing the purpose and use + of the function. + + 31. Function header comment defines the behavior of the function in + sufficient detail to allow the function to be re-implemented from + scratch without reference to the original code. + + 32. Functions that perform dynamic memory allocation (either directly + or indirectly via subfunctions) say so in their header comments. + + +Function bodies: + +
        +
      1. The name of a function clearly reflects its purpose. + +
      2. Automatic variables are small, not large objects or arrays. Avoid + excessive stack usage. + +
      3. The check-list items for functions also apply to major subsections + within a function. + +
      4. All code subblocks are enclosed in {...}. + + +
      5. assert() macros are used as follows: +
          + +
        1. Function preconditions are clearly stated and verified by asserts. + +
        2. Invariants are identified by asserts. +
        + +
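To make the checklist above concrete, here is a short, purely hypothetical sketch written in the style this document describes (the function and variable names are invented for illustration only):

    #include <assert.h>

    /*
    ** Return the number of bytes in the zero-terminated string zName,
    ** excluding the terminator.  zName must not be NULL.
    */
    static int name_length(const char *zName){
      int nByte = 0;          /* Number of bytes counted so far */
      assert( zName!=0 );     /* Precondition, verified by an assert */
      while( zName[nByte] ) nByte++;
      return nByte;
    }

Note the 2-space indentation, the header comment that fully specifies the behavior, and the related names zName and nByte.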
      + + +Class (struct) declarations: + + 50. The purpose and use of every class (a.k.a. structure) is clearly defined + in the header comment of its declaration. + + 51. The purpose and use of every class member is clearly defined either + in the header comment of the class declaration or when the member is + declared or both. + + 52. The names of class members clearly reflect their use. + + 53. Invariants for classes are clearly defined. + +Variables and class instances: + + 60. The purpose and use of every variable is defined by a comment at the + variable definition. + + 61. The names of variables clearly reflect their use. + + 62. Related variables have related names. (ex: aSavepoint and nSavepoint.) + + 63. Variables have minimum practical scope. + + 64. Automatic variables are small, not large objects or arrays. + + 65. Constants are "const". + + 66. Invariants on variables or groups of variables are defined and + tested by asserts. + + 67. When a variable that refers to the same value is used within + multiple scopes, the same name is used in all cases. + + 68. When variables refer to different values, different names are used + even when the names are in different scopes. + + 69. Variable names with wide scope are sufficiently distinctive to allow + searching for them using grep. Index: www/sync.wiki ================================================================== --- www/sync.wiki +++ www/sync.wiki @@ -1,6 +1,6 @@ -

      The Fossil Sync Protocol

      +The Fossil Sync Protocol

      Fossil supports commands push, pull, and sync for transferring information from one repository to another. The command is run on the client repository. A URL for the server repository is specified as part of the command. This document describes what happens @@ -26,12 +26,12 @@ shared to a few hundred.

      Each repository also has local state. The local state determines the web-page formatting preferences, authorized users, ticket formats, and similar information that varies from one repository to another. -The local state is not transfered by the push, pull, -and sync command, though some local state is transfered during +The local state is not transferred by the push, pull, +and sync command, though some local state is transferred during a clone in order to initialize the local state of the new repository. The configuration push and configuration pull commands can be used to send or receive local state.

      @@ -124,11 +124,11 @@ section details the "x-fossil" content type.

      3.1 Line-oriented Format

      The x-fossil content type consists of zero or more "cards". Cards -are separate by the newline character ("\n"). Leading and trailing +are separated by the newline character ("\n"). Leading and trailing whitespace on a card is ignored. Blank cards are ignored.

      Each card is divided into zero or more space separated tokens. The first token on each card is the operator. Subsequent tokens are arguments. The set of operators understood by servers is slightly @@ -159,15 +159,21 @@

      Privileges are cumulative. There can be multiple successful login cards. The session privileges are the bit-wise OR of the privileges of each individual login.

      -

      3.3 File Cards

      +

      3.3 File and Compressed File Cards

      -

      Artifacts are transferred using "file" cards. (The name "file" -card comes from the fact that most artifacts correspond to files.) -File cards come in two different formats depending +

      Artifacts are transferred using either "file" cards, or "cfile" cards. +(The name "file" card comes from the fact that most artifacts correspond to +files, and "cfile" is just a compressed file.) +

      + +

      3.3.1 File Cards

      + +

      For sync protocols, artifacts are transferred using "file" +cards. File cards come in two different formats depending on whether the artifact is sent directly or as a delta from some other artifact.

      file artifact-id size \n content
      @@ -176,11 +182,11 @@

File cards are different from all other cards in that they are followed by in-line "payload" data. The content of the artifact or the artifact delta consists of the first size bytes of the x-fossil content that immediately follow the newline that -terminates the file card. No other cards have this characteristic. +terminates the file card. Only file and cfile cards have this characteristic.

      The first argument of a file card is the ID of the artifact that is being transferred. The artifact ID is the lower-case hexadecimal representation of the SHA1 hash of the artifact. @@ -194,27 +200,58 @@

      File cards are sent in both directions: client to server and server to client. A delta might be sent before the source of the delta, so both client and server should remember deltas and be able to apply them when their source arrives.
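As a purely illustrative example (the artifact ID below is a placeholder, not the real SHA1 hash of the content shown), a direct file card carrying a 14-byte artifact would look like:

    file 2f0a54d9deadbeef00112233445566778899aabb 14
    hello, world!

The 14 bytes of payload ("hello, world!" plus its trailing newline) immediately follow the newline that terminates the card.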

      +

      3.3.2 Compressed File Cards

      + +

      A client that sends a clone protocol version "3" or greater will +receive artifacts as "cfile" cards while cloning. This card was +introduced to improve the speed of the transfer of content by sending the +compressed artifact directly from the server database to the client.

      + +

Compressed File cards are similar to File cards, sharing the same +in-line "payload" data characteristics and also the same treatment of +direct content or delta content. They come in two different formats +depending on whether the artifact is sent directly or as a delta from +some other artifact.

      + +
      +cfile artifact-id usize csize \n content
      +cfile artifact-id delta-artifact-id usize csize \n content
      +
      + +

      The first argument of the cfile card is the ID of the artifact that +is being transferred. The artifact ID is the lower-case hexadecimal +representation of the SHA1 hash of the artifact. The second argument of +the cfile card is the original size in bytes of the artifact. The last +argument of the cfile card is the number of compressed bytes of payload +that immediately follow the cfile card. If the cfile card has only +three arguments, that means the payload is the complete content of the +artifact. If the cfile card has four arguments, then the payload is a +delta and the second argument is the ID of another artifact that is the +source of the delta and the third argument is the original size of the +delta artifact.

      + +

      Unlike file cards, cfile cards are only sent in one direction during a +clone from server to client for clone protocol version "3" or greater.

      +

      3.4 Push and Pull Cards

      -

      Among of the first cards in a client-to-server message are -the push and pull cards. The push card tell the server that -the client is pushing content. The pull card tell the server +

      Among the first cards in a client-to-server message are +the push and pull cards. The push card tells the server that +the client is pushing content. The pull card tells the server that the client wants to pull content. In the event of a sync, both cards are sent. The format is as follows:

      push servercode projectcode
      pull servercode projectcode

      The servercode argument is the repository ID for the -client. The server will only allow the transaction to proceed -if the servercode is different from its own servercode. This -prevents a sync-loop. The projectcode is the identifier +client. The projectcode is the identifier of the software project that the client repository contains. The projectcode for the client and server must match in order for the transaction to proceed.

      The server will also send a push card back to the client @@ -223,21 +260,63 @@

      3.5 Clone Cards

      A clone card works like a pull card in that it is sent from client to server in order to tell the server that the client -wants to pull content. But unlike the pull card, the clone -card has no arguments.

      +wants to pull content. The clone card comes in two formats. Older +clients use the no-argument format and newer clients use the +two-argument format.

      + +
      +clone
      +clone protocol-version sequence-number +
      + +

      3.5.1 Protocol 3

      + +

The latest clients send a two-argument clone message with a +protocol version of "3". (Future versions of Fossil might use larger +protocol version numbers.) Version "3" of the protocol enhanced version +"2" by introducing the "cfile" card, which is intended to speed up clone +operations. Instead of sending "file" cards, the server will send "cfile" +cards.

      + +

      3.5.2 Protocol 2

      + +

      The sequence-number sent is the number +of artifacts received so far. For the first clone message, the +sequence number is 0. The server will respond by sending file +cards for some number of artifacts up to the maximum message size. + +

      The server will also send a single "clone_seqno" card to the client +so that the client can know where the server left off.

      -clone +clone_seqno sequence-number
      -

      In response to a clone message, the server also sends the client +

      The clone message in subsequent HTTP requests for the same clone +operation will use the sequence-number from the +clone_seqno of the previous reply.

      + +

      In response to an initial clone message, the server also sends the client a push message so that the client can discover the projectcode for this project.
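As a rough illustration (all of the codes and numbers below are placeholders), the first request of a protocol version "3" clone might contain:

    clone 3 0

The server's reply would then contain a push card (so that the client learns the projectcode), a series of cfile cards carrying artifacts up to the maximum message size, and finally a card such as:

    clone_seqno 4500

The next client request continues from that point with:

    clone 3 4500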

      +

      3.5.3 Legacy Protocol

      + +

      Older clients send a clone card with no argument. The server responds +to a blank clone card by sending an "igot" card for every artifact in the +repository. The client will then issue "gimme" cards to pull down all the +content it needs. + +

      The legacy protocol works well for smaller repositories (50MB with 50,000 +artifacts) but is too slow and unwieldy for larger repositories. +The version 2 protocol is an effort to improve performance. Further +performance improvements with higher-numbered clone protocols are +possible in future versions of Fossil. +

      3.6 Igot Cards

      An igot card can be sent from either client to server or from server to client in order to indicate that the sender holds a copy of a particular artifact. The format is:

      @@ -287,15 +366,120 @@ a cookie from another server. (Typically the server will embed its servercode as part of the cookie.)

      3.9 Request-Configuration Cards

      -TBD... +

      A request-configuration or "reqconfig" card is sent from client to +server in order to request that the server send back "configuration" +data. "Configuration" data is information about users or website +appearance or other administrative details which are not part of the +persistent and versioned state of the project. For example, the "name" +of the project, the default Cascading Style Sheet (CSS) for the web-interface, +and the project logo displayed on the web-interface are all configuration +data elements. + +

      The reqconfig card is normally sent in response to the +"fossil configuration pull" command. The format is as follows: + +

      +reqconfig configuration-name +
      + +

      As of [/timeline?r=trunk&c=2015-03-19+03%3A57%3A46&n=20|2015-03-19], the configuration-name must be one of the +following values: + +

      + +
      +
        +
      • css +
      • header +
      • footer +
      • logo-mimetype +
      • logo-image +
      • background-mimetype +
      • background-image +
      • index-page +
      • timeline-block-markup +
      • timeline-max-comment +
      • timeline-plaintext +
          +
        • adunit +
        • adunit-omit-if-admin +
        • adunit-omit-if-user +
        • white-foreground +
        • project-name +
        • short-project-name +
        • project-description +
        • index-page +
        • manifest +
        • binary-glob +
        • clean-glob +
            +
          • ignore-glob +
          • keep-glob +
          • crnl-glob +
          • encoding-glob +
          • empty-dirs +
          • allow-symlinks +
          • dotfiles +
          • ticket-table +
          • ticket-common +
          • ticket-change +
          • ticket-newpage +
              +
            • ticket-viewpage +
            • ticket-editpage +
            • ticket-reportlist +
            • ticket-report-template +
            • ticket-key-template +
            • ticket-title-expr +
            • ticket-closed-expr +
            • @reportfmt +
            • @user +
            • @concealed +
            • @shun +
            + +

            New configuration-names are likely to be added in future releases of +Fossil. If the server receives a configuration-name that it does not +understand, the entire reqconfig card is silently ignored. The reqconfig +card might also be ignored if the user lacks sufficient privilege to +access the requested information. + +

            The configuration-names that begin with an alphabetic character refer +to values in the "config" table of the server database. For example, +the "logo-image" configuration item refers to the project logo image +that is configured on the Admin page of the [./webui.wiki | web-interface]. +The value of the configuration item is returned to the client using a +"config" card. + +

            If the configuration-name begins with "@", that refers to a class of +values instead of a single value. The content of these configuration items +is returned in a "config" card that contains pure SQL text that is +intended to be evaluated by the client. + +

            The @user and @concealed configuration items contain sensitive information +and are ignored for clients without sufficient privilege.

            3.10 Configuration Cards

            -TBD... +

            A "config" card is used to send configuration information from client +to server (in response to a "fossil configuration push" command) or +from server to client (in response to a "fossil configuration pull" or +"fossil clone" command). The format is as follows: + +

            +config configuration-name size \n content +
            + +

            The server will only accept a config card if the user has +"Admin" privilege. A client will only accept a config card if +it had sent a corresponding reqconfig card in its request. + +

            The content of the configuration item is used to overwrite the +corresponding configuration data in the receiver.
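For example, in a purely illustrative exchange (the size shown is made up), a client pulling the project style sheet might send:

    reqconfig css

and an authorized server would answer with a card along the lines of:

    config css 1204
    ...1204 bytes of CSS text...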

            3.11 Error Cards

            If the server discovers anything wrong with a request, it generates an error card in its reply. When the client sees the error card, @@ -325,14 +509,14 @@

            If either the client or the server sees a card that is not described above, then it generates an error and aborts.

            4.0 Phantoms And Clusters

            -

            When a repository knows that a artifact exists and knows the ID of +

            When a repository knows that an artifact exists and knows the ID of that artifact, but it does not know the artifact content, then it stores that artifact as a "phantom". A repository will typically create a phantom when -it receives an igot card for a artifact that it does not hold or when it +it receives an igot card for an artifact that it does not hold or when it receives a file card that references a delta source that it does not hold. When a server is generating its reply or when a client is generating a new request, it will usually send gimme cards for every phantom that it holds.

            @@ -346,11 +530,11 @@ single argument. No extra whitespace and no trailing or leading whitespace is allowed. All cards in the cluster must occur in strict lexicographical order.

            A cluster consists of one or more "M" cards followed by a single -"Z" card. Each M card holds an argument which is a artifact ID for an +"Z" card. Each M card holds an argument which is an artifact ID for an artifact in the repository. The Z card has a single argument which is the lower-case hexadecimal representation of the MD5 checksum of all preceding M cards up to and included the newline character that occurred just before the Z that starts the Z card.

            ADDED www/tech_overview.wiki Index: www/tech_overview.wiki ================================================================== --- www/tech_overview.wiki +++ www/tech_overview.wiki @@ -0,0 +1,345 @@ +Technical Overview +

            +A Technical Overview
            Of The Design And Implementation
            Of Fossil +

            + +

            1.0 Introduction

            + +At its lowest level, a Fossil repository consists of an unordered set +of immutable "artifacts". You might think of these artifacts as "files", +since in many cases the artifacts are exactly that. +But other "control artifacts" +are also included in the mix. These control artifacts define the relationships +between artifacts - which files go together to form a particular +version of the project, who checked in that version and when, what was +the check-in comment, what wiki pages are included with the project, what +are the edit histories of each wiki page, what bug reports or tickets are +included, who contributed to the evolution of each ticket, and so forth. +This low-level file format is called the "global state" of +the repository, since this is the information that is synced to peer +repositories using push and pull operations. The low-level file format +is also called "enduring" since it is intended to last for many years. +The details of the low-level, enduring, global file format +are [./fileformat.wiki | described separately]. + +This article is about how Fossil is currently implemented. Instead of +dealing with vague abstractions of "enduring file formats" as the +[./fileformat.wiki | other document] does, this article provides +some detail on how Fossil actually stores information on disk. + +

            2.0 Three Databases

+ +Fossil stores state information in +[http://www.sqlite.org/ | SQLite] database files. +SQLite keeps an entire relational database, including multiple tables and +indices, in a single disk file. The SQLite library allows the database +files to be efficiently queried and updated using the industry-standard +SQL language. SQLite updates are atomic, so even in the event of +a system crash or power failure the repository content is protected. + +Fossil uses three separate classes of SQLite databases: + +
              +
            1. The configuration database +
            2. Repository databases +
            3. Checkout databases +
            + +The configuration database is a one-per-user database that holds +global configuration information used by Fossil. There is one +repository database per project. The repository database is the +file that people are normally referring to when they say +"a Fossil repository". The checkout database is found in the working +checkout for a project and contains state information that is unique +to that working checkout. + +Fossil does not always use all three database files. The web interface, +for example, typically only uses the repository database. And the +[/help/all | fossil setting] command only opens the configuration database +when the --global option is used. But other commands use all three +databases at once. For example, the [/help/status | fossil status] +command will first locate the checkout database, then use the checkout +database to find the repository database, then open the configuration +database. Whenever multiple databases are used at the same time, +they are all opened on the same SQLite database connection using +SQLite's [http://www.sqlite.org/lang_attach.html | ATTACH] command. + +The chart below provides a quick summary of how each of these +database files are used by Fossil, with detailed discussion following. + +
            + + + + + +
            +

            Configuration Database
            "~/.fossil"

            +
              +
            • Global [/help/setting |settings] +
            • List of active repositories used by the [/help/all | all] command +
            +
            +

            Repository Database
            "project.fossil"

            +
              +
            • [./fileformat.wiki | Global state of the project] + encoded using delta-compression +
            • Local [/help/setting|settings] +
            • Web interface display preferences +
            • User credentials and permissions +
            • Metadata about the global state to facilitate rapid + queries +
            +
            +

            Checkout Database
            "_FOSSIL_"

            +
              +
            • The repository database used by this checkout +
            • The version currently checked out +
            • Other versions [/help/merge | merged] in but not + yet [/help/commit | committed] +
            • Changes from the [/help/add | add], [/help/delete | delete], + and [/help/rename | rename] commands that have not yet been committed +
            • "mtime" values and other information used to efficiently detect + local edits +
            • The "[/help/stash | stash]" +
            • Information needed to "[/help/undo|undo]" or "[/help/redo|redo]" +
            +
            +
            + +

            2.1 The Configuration Database

            + +The configuration database holds cross-repository preferences and a list of all +repositories for a single user. + +The [/help/setting | fossil setting] command can be used to specify various +operating parameters and preferences for Fossil repositories. Settings can +apply to a single repository, or they can apply globally to all repositories +for a user. If both a global and a repository value exists for a setting, +then the repository-specific value takes precedence. All of the settings +have reasonable defaults, and so many users will never need to change them. +But if changes to settings are desired, the configuration database provides +a way to change settings for all repositories with a single command, rather +than having to change the setting individually on each repository. + +The configuration database also maintains a list of repositories. This +list is used by the [/help/all | fossil all] command in order to run various +operations such as "sync" or "rebuild" on all repositories managed by a user. + +On Unix systems, the configuration database is named ".fossil" and is +located in the user's home directory. On Windows, the configuration +database is named "_fossil" (using an underscore as the first character +instead of a dot) and is located in the directory specified by the +LOCALAPPDATA, APPDATA, or HOMEPATH environment variables, in that order. + +You can override this default location by defining the environment +variable FOSSIL_HOME pointing to an appropriate (writable) directory. + +
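For example (the values are hypothetical), a setting can be changed once for every repository a user owns, and the repository list can then be exercised, with commands along these lines:

    fossil settings autosync off --global
    fossil all sync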

            2.2 Repository Databases

+ +The repository database is the file that is commonly referred to as +"the repository". This is because the repository database contains, +among other things, the complete revision, ticket, and wiki history for +a project. It is customary to name the repository database after the +name of the project, with a ".fossil" suffix. For example, the repository +database for the self-hosting Fossil repository is called "fossil.fossil" +and the repository database for SQLite is called "sqlite.fossil". + +

            2.2.1 Global Project State

+ +The bulk of the repository database (typically 75 to 85%) consists +of the artifacts that comprise the +[./fileformat.wiki | enduring, global, shared state] of the project. +The artifacts are stored as BLOBs, compressed using +[http://www.zlib.net/ | zlib compression] and, where applicable, +using [./delta_encoder_algorithm.wiki | delta compression]. +The combination of zlib and delta compression results in a considerable +space savings. For the SQLite project, at the time of this writing, +the total size of all artifacts is over 2.0 GB, but thanks to the +combined zlib and delta compression, that content only takes up +32 MB of space in the repository database, for a compression ratio +of about 64:1. The average size of a content BLOB in the database +is around 500 bytes. + +Note that the zlib and delta compression is not an inherent part of the +Fossil file format; it is just an optimization. +The enduring file format for Fossil is the unordered +set of artifacts. The compression techniques are just a detail of +how the current implementation of Fossil happens to store these artifacts +efficiently on disk. + +All of the original uncompressed and undeltaed artifacts can be extracted +from a Fossil repository database using +the [/help/deconstruct | fossil deconstruct] +command. Individual artifacts can be extracted using the +[/help/artifact | fossil artifact] command. +When accessing the repository database using raw SQL and the +[/help/sqlite3 | fossil sql] command, the extension function +"content()" with a single argument, which is the SHA1 hash +of an artifact, will return the complete undeltaed and uncompressed +content of that artifact. + +Going the other way, the [/help/reconstruct | fossil reconstruct] +command will scan a directory hierarchy and add all files found to +a new repository database. The [/help/import | fossil import] command +works by reading the input git-fast-export stream and using it to construct +corresponding artifacts which are then written into the repository database. + +
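For instance (the hash shown is a placeholder), a single artifact can be inspected or extracted roughly like this:

    fossil artifact 9a2f7c51e0b4a7e3d6c8190f2b3d4e5f6a7b8c9d artifact-content.txt
    fossil sql
    sqlite> SELECT length(content('9a2f7c51e0b4a7e3d6c8190f2b3d4e5f6a7b8c9d'));

The first command writes the undeltaed, uncompressed artifact to a file; the second uses the content() SQL function from within the "fossil sql" shell.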

            2.2.2 Project Metadata

+ +The global project state information in the repository database is +supplemented by computed metadata that makes querying the project state +more efficient. Metadata includes information such as the following: + + * The names for all files found in any check-in. + * All check-ins that modify a given file. + * Parents and children of each check-in. + * Potential timeline rows. + * The names of all symbolic tags and the check-ins they apply to. + * The names of all wiki pages and the artifacts that comprise each + wiki page. + * Attachments and the wiki pages or tickets they apply to. + * Current content of each ticket. + * Cross-references between tickets, check-ins, and wiki pages. + +The metadata is held in various SQL tables in the repository database. +The metadata is designed to facilitate queries for the various timelines and +reports that Fossil generates. +As the functionality of Fossil evolves, +the schema for the metadata can and does change. +But schema changes do not invalidate the repository. Remember that the +metadata contains no new information - only information that has been +extracted from the canonical artifacts and saved in a more useful form. +Hence, when the metadata schema changes, the prior metadata can be discarded +and the entire metadata corpus can be recomputed from the canonical +artifacts. That is what the +[/help/rebuild | fossil rebuild] command does. + +

            2.2.3 Display And Processing Preferences

            + +The repository database also holds information used to help format +the display of web pages and configuration settings that override the +global configuration settings for the specific repository. All of +this information (and the user credentials and privileges too) is +local to each repository database; it is not shared between repositories +by [/help/sync | fossil sync]. That is because it is entirely reasonable +that two different websites for the same project might have completely +different display preferences and user communities. One instance of the +project might be a fork of the other, for example, which pulls from the +other but never pushes and extends the project in ways that the keepers of +the other website disapprove of. + +Display and processing information includes the following: + + * The name and description of the project + * The CSS file, header, and footer used by all web pages + * The project logo image + * Fields of tickets that are considered "significant" and which are + therefore collected from artifacts and made available for display + * Templates for screens to view, edit, and create tickets + * Ticket report formats and display preferences + * Local values for [/help/setting | settings] that override the + global values defined in the per-user configuration database. + +Though the display and processing preferences do not move between +repository instances using [/help/sync | fossil sync], this information +can be shared between repositories using the +[/help/config | fossil config push] and +[/help/config | fossil config pull] commands. +The display and processing information is also copied into new +repositories when they are created using +[/help/clone | fossil clone]. + +

            2.2.4 User Credentials And Privileges

+ +Just because two development teams are collaborating on a project and allow +push and/or pull between their repositories does not mean that they +trust each other enough to share passwords and access privileges. +Hence the names and emails and passwords and privileges of users are +considered private information that is kept locally in each repository. + +Each repository database has a table holding the username, privileges, +and login credentials for users authorized to interact with that particular +database. In addition, there is a table named "concealed" that maps the +SHA1 hash of each user's email address back into their true email address. +The concealed table allows just the SHA1 hash of email addresses to +be stored in tickets, and thus prevents actual email addresses from falling +into the hands of spammers who happen to clone the repository. + +The content of the user and concealed tables can be pushed and pulled using the +[/help/config | fossil config push] and +[/help/config | fossil config pull] commands with "user" or +"email" as the AREA argument, but only if you have administrative +privileges on the remote repository. + +

            2.2.5 Shunned Artifact List

            + +The set of canonical artifacts for a project - the global state for the +project - is intended to be an append-only database. In other words, +new artifacts can be added but artifacts can never be removed. But +it sometimes happens that inappropriate content is mistakenly or +maliciously added to a repository. The only way to get rid of +the undesired content is to [./shunning.wiki | "shun"] it. +The "shun" table in the repository database records the SHA1 hash of +all shunned artifacts. + +The shun table can be pushed or pulled using +the [/help/config | fossil config] command with the "shun" AREA argument. +The shun table is also copied during a [/help/clone | clone]. + +

            2.3 Checkout Databases

            + +Unlike several other popular DVCSes, Fossil allows a single repository +to have multiple working checkouts. Each working checkout has a single +database in its root directory that records the state of that checkout. +The checkout database is named "_FOSSIL_" by default, but can be renamed +to ".fslckout" if desired. (Future versions of Fossil might make +".fslckout" the default name.) The checkout database records information +such as the following: + + * The name of the repository database file. + * The version that is currently checked out. + * Files that have been [/help/add | added], + [/help/rm | removed], or [/help/mv | renamed] but not + yet committed. + * The mtime and size of files as they were originally checked out, + in order to expedite checking which files have been edited. + * Other check-ins that have been [/help/merge | merged] into the + working checkout but not yet committed. + * Copies of files prior to the most recent undoable operation - needed to + implement the [/help/undo | undo] and [/help/redo | redo] commands. + * The [/help/stash | stash]. + * State information for the [/help/bisect | bisect] command. + +For Fossil commands that run from within a working checkout, the +first thing that happens is that Fossil locates the checkout database. +Fossil first looks in the current directory. If not found there, it +looks in the parent directory. If not found there, the parent of the +parent. And so forth until either the checkout database is found +or the search reaches the root of the filesystem. (In the latter case, +Fossil returns an error, of course.) Once the checkout database is +located, it is used to locate the repository database. + +Notice that the checkout database contains a pointer to the repository +database but that the repository database has no record of the checkout +databases. That means that a working checkout directory tree can be +freely renamed or copied or deleted without consequence. But the +repository database file, on the other hand, has to stay in the same +place with the same name or else the open checkout databases will not +be able to find it. + +A checkout database is created by the [/help/open | fossil open] command. +A checkout database is deleted by [/help/close | fossil close]. The +fossil close command really isn't needed; one can accomplish the same +thing simply by deleting the checkout database. + +Note that the stash, the undo stack, and the state of the bisect command +are all contained within the checkout database. That means that the +fossil close command will delete all stash content, the undo stack, and +the bisect state. The close command is not undoable. Use it with care. + +
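For example (the paths and the branch name are hypothetical), two independent checkouts of the same repository can coexist:

    mkdir ~/work/trunk-co && cd ~/work/trunk-co
    fossil open ~/museum/project.fossil trunk
    mkdir ~/work/test-co && cd ~/work/test-co
    fossil open ~/museum/project.fossil experimental

Each directory gets its own _FOSSIL_ (or .fslckout) database pointing back at the same project.fossil file.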

            3.0 See Also

+ + * [./makefile.wiki | The Fossil Build Process] + * [./contribute.wiki | How To Contribute Code To Fossil] + * [./adding_code.wiki | Adding New Features To Fossil] ADDED www/th1.md Index: www/th1.md ================================================================== --- www/th1.md +++ www/th1.md @@ -0,0 +1,670 @@ +TH1 Scripts +=========== + +TH1 is a very small scripting language used to help generate web-page +content in Fossil. + +Origins +------- + +TH1 began as a minimalist re-implementation of the Tcl scripting language. +There was a need to test the SQLite library on Symbian phones, but at that +time all of the test cases for SQLite were written in Tcl and Tcl could not +be easily compiled on the SymbianOS. So TH1 was developed as a cut-down +version of Tcl that would facilitate running the SQLite test scripts on +SymbianOS. + +The testing of SQLite on SymbianOS was eventually accomplished by other +means. But Fossil was first being designed at about the same time. +Early prototypes of Fossil were written in pure Tcl. But as the development +shifted toward the use of C-code, the need arose to have a Tcl-like +scripting language to help with code generation. TH1 was small and +light-weight and used minimal resources and seemed ideally suited for the +task. + +The name "TH1" stands for "Test Harness 1", since that was its original purpose. + +Overview +-------- + +TH1 is a string-processing language. All values are strings. Any numerical +operations are accomplished by converting from string to numeric, performing +the computation, then converting the result back into a string. (This might +seem inefficient, but it is faster than people imagine, and numeric +computations do not come up very often for the kinds of work that TH1 does, +so it has never been a factor.) + +A TH1 script consists of a sequence of commands. +Each command is terminated by the first (unescaped) newline or ";" character. +The text of the command (excluding the newline or semicolon terminator) +is broken into space-separated tokens. The first token is the command +name and subsequent tokens are the arguments. In this sense, TH1 syntax +is similar to the familiar command-line shell syntax. + +A token is any sequence of characters other than whitespace and semicolons. +Or, all text within double-quotes is a single token even if it includes +whitespace and semicolons. Or, all text within nested {...} pairs is a +single token. + +The nested {...} form of tokens is important because it allows TH1 commands +to have an appearance similar to C/C++. It is important to remember, though, +that a TH1 script is really just a list of text commands, not a context-free +language with a grammar like C/C++. This can be confusing to long-time +C/C++ programmers because TH1 does look a lot like C/C++. But the semantics +of TH1 are closer to FORTH or Lisp than they are to C. + +Consider the "if" command in TH1. + + if {$current eq "dev"} { + puts "hello" + } else { + puts "world" + } + +The example above is a single command. The first token, and the name +of the command, is "if". +The second token is '$current eq "dev"' - an expression. (The outer {...} +are removed from each token by the command parser.) The third token +is the 'puts "hello"', with its whitespace and newlines. The fourth token +is "else". And the fifth and last token is 'puts "world"'.
+ +The "if" command works by evaluating its first argument (the second token) +as an expression, and if that expression is true, evaluating its +second argument (the third token) as a TH1 script. +If the expression is false and the third argument is "else" then +the fourth argument is evaluated as a TH1 script. + +So, you see, even though the example above spans five lines, it is really +just a single command. + +Summary of Core TH1 Commands +---------------------------- + +The original Tcl language, after which TH1 is modeled, has a very rich +repertoire of commands. TH1, as it is designed to be minimalist and +embedded, has a greatly reduced command set. The following bullets +summarize the commands available in TH1: + + * array exists VARNAME + * array names VARNAME + * break + * catch SCRIPT ?VARIABLE? + * continue + * error ?STRING? + * expr EXPR + * for INIT-SCRIPT TEST-EXPR NEXT-SCRIPT BODY-SCRIPT + * if EXPR SCRIPT (elseif EXPR SCRIPT)* ?else SCRIPT? + * info commands + * info exists VARNAME + * info vars + * lindex LIST INDEX + * list ARG ... + * llength LIST + * lsearch LIST STRING + * proc NAME ARG-LIST BODY-SCRIPT + * rename OLD NEW + * return ?-code CODE? ?VALUE? + * set VARNAME VALUE + * string compare STR1 STR2 + * string first NEEDLE HAYSTACK ?START-INDEX? + * string is CLASS STRING + * string last NEEDLE HAYSTACK ?START-INDEX? + * string length STRING + * string range STRING FIRST LAST + * string repeat STRING COUNT + * unset VARNAME + * uplevel ?LEVEL? SCRIPT + * upvar ?FRAME? OTHERVAR MYVAR ?OTHERVAR MYVAR? + +All of the above commands work as in the original Tcl. Refer to the +Tcl documentation +for details. + +Summary of Core TH1 Variables +----------------------------- + + * tcl\_platform(engine) -- _This will always have the value "TH1"._ + * tcl\_platform(platform) -- _This will have the value "windows" or "unix"._ + * th\_stack\_trace -- _This will contain error stack information._ + +TH1 Extended Commands +--------------------- + +There are many new commands added to TH1 and used to access the special +features of Fossil. The following is a summary of the extended commands: + + * anoncap + * anycap + * artifact + * checkout + * combobox + * date + * decorate + * dir + * enable\_output + * encode64 + * getParameter + * glob\_match + * globalState + * hascap + * hasfeature + * html + * htmlize + * http + * httpize + * insertCsrf + * linecount + * markdown + * puts + * query + * randhex + * redirect + * regexp + * reinitialize + * render + * repository + * searchable + * setParameter + * setting + * styleHeader + * styleFooter + * tclEval + * tclExpr + * tclInvoke + * tclIsSafe + * tclMakeSafe + * tclReady + * trace + * stime + * utime + * verifyCsrf + * wiki + +Each of the commands above is documented by a block comment above its +implementation in the th\_main.c or th\_tcl.c source files. + +All commands starting with "tcl", with the exception of "tclReady", +require the Tcl integration subsystem be included at compile-time. +Additionally, the "tcl" repository setting must be enabled at runtime +in order to successfully make use of these commands. + +TH1 anoncap Command +----------------------------------------- + + * anoncap STRING... + +Returns true if the anonymous user has all of the capabilities listed +in STRING. + +TH1 anycap Command +--------------------------------------- + + * anycap STRING + +Returns true if the current user has any one of the capabilities +listed in STRING.
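As a small, hypothetical illustration (assuming "i" and "s" are the check-in and setup capability letters), these commands are typically used to guard generated page content:

    if {[anycap is]} {
      puts "check-in or setup capability detected"
    }

The same pattern works with anoncap to test what the anonymous user would be permitted to do.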
+ +TH1 artifact Command +------------------------------------------- + + * artifact ID ?FILENAME? + +Attempts to locate the specified artifact and return its contents. An +error is generated if the repository is not open or the artifact cannot +be found. + +TH1 checkout Command +------------------------------------------- + + * checkout ?BOOLEAN? + +Return the fully qualified directory name of the current checkout or an +empty string if it is not available. Optionally, it will attempt to find +the current checkout, opening the configuration ("user") database and the +repository as necessary, if the boolean argument is non-zero. + +TH1 combobox Command +------------------------------------------- + + * combobox NAME TEXT-LIST NUMLINES + +Generates and emits an HTML combobox. NAME is both the name of the +CGI parameter and the name of a variable that contains the currently +selected value. TEXT-LIST is a list of possible values for the +combobox. NUMLINES is 1 for a true combobox. If NUMLINES is greater +than one then the display is a listbox with the number of lines given. + +TH1 date Command +----------------------------------- + + * date ?-local? + +Return a strings which is the current time and date. If the -local +option is used, the date appears using localtime instead of UTC. + +TH1 decorate Command +------------------------------------------- + + * decorate STRING + +Renders STRING as wiki content; however, only links are handled. No +other markup is processed. + +TH1 dir Command +------------------------------------------- + + * dir CHECKIN ?GLOB? ?DETAILS? + +Returns a list containing all files in CHECKIN. If GLOB is given only +the files matching the pattern GLOB within CHECKIN will be returned. +If DETAILS is non-zero, the result will be a list-of-lists, with each +element containing at least three elements: the file name, the file +size (in bytes), and the file last modification time (relative to the +time zone configured for the repository). + +TH1 enable\_output Command +------------------------------------------------------ + + * enable\_output BOOLEAN + +Enable or disable sending output when the combobox, puts, or wiki +commands are used. + +TH1 encode64 Command +------------------------------------------- + + * encode64 STRING + +Encode the specified string using Base64 and return the result. + +TH1 getParameter Command +--------------------------------------------------- + + * getParameter NAME ?DEFAULT? + +Returns the value of the specified query parameter or the specified +default value when there is no matching query parameter. + +TH1 glob\_match Command +------------------------------------------------ + + * glob\_match ?-one? ?--? patternList string + +Checks the string against the specified glob pattern -OR- list of glob +patterns and returns non-zero if there is a match. + +TH1 globalState Command +------------------------------------------------- + + * globalState NAME ?DEFAULT? + +Returns a string containing the value of the specified global state +variable -OR- the specified default value. The supported items are: + + 1. **checkout** -- _Active local checkout directory, if any._ + 1. **configuration** -- _Active configuration database file name, if any._ + 1. **executable** -- _Fully qualified executable file name._ + 1. **flags** -- _TH1 initialization flags._ + 1. **log** -- _Error log file name, if any._ + 1. **repository** -- _Active local repository file name, if any._ + 1. **top** -- _Base path for the active server instance, if applicable._ + 1. 
**user** -- _Active user name, if any._ + 1. **vfs** -- _SQLite VFS in use, if overridden._ + +Attempts to query for unsupported global state variables will result +in a script error. Additional global state variables may be exposed +in the future. + +TH1 hascap Command +--------------------------------------- + + * hascap STRING... + +Returns true if the current user has all of the capabilities listed +in STRING. + +TH1 hasfeature Command +----------------------------------------------- + + * hasfeature STRING + +Returns true if the binary has the given compile-time feature enabled. +The possible features are: + + 1. **ssl** -- _Support for the HTTPS transport._ + 1. **legacyMvRm** -- _Support for legacy mv/rm command behavior._ + 1. **execRelPaths** -- _Use relative paths with external diff/gdiff._ + 1. **th1Docs** -- _Support for TH1 in embedded documentation._ + 1. **th1Hooks** -- _Support for TH1 command and web page hooks._ + 1. **tcl** -- _Support for Tcl integration._ + 1. **useTclStubs** -- _Tcl stubs enabled in the Tcl headers._ + 1. **tclStubs** -- _Uses Tcl stubs (i.e. linking with stubs library)._ + 1. **tclPrivateStubs** -- _Uses Tcl private stubs (i.e. header-only)._ + 1. **json** -- _Support for the JSON APIs._ + 1. **markdown** -- _Support for Markdown documentation format._ + 1. **unicodeCmdLine** -- _The command line arguments are Unicode._ + 1. **dynamicBuild** -- _Dynamically linked to libraries._ + +Specifying an unknown feature will return a value of false, it will not +raise a script error. + +TH1 html Command +----------------------------------- + + * html STRING + +Outputs the STRING escaped for HTML. + +TH1 htmlize Command +----------------------------------------- + + * htmlize STRING + +Escape all characters of STRING which have special meaning in HTML. +Returns the escaped string. + +TH1 http Command +----------------------------------- + + * http ?-asynchronous? ?--? url ?payload? + +Performs an HTTP or HTTPS request for the specified URL. If a +payload is present, it will be interpreted as text/plain and +the POST method will be used; otherwise, the GET method will +be used. Upon success, if the -asynchronous option is used, an +empty string is returned as the result; otherwise, the response +from the server is returned as the result. Synchronous requests +are not currently implemented. + +TH1 httpize Command +----------------------------------------- + + * httpize STRING + +Escape all characters of STRING which have special meaning in URI +components. Returns the escaped string. + +TH1 insertCsrf Command +----------------------------------------------- + + * insertCsrf + +While rendering a form, call this command to add the Anti-CSRF token +as a hidden element of the form. + +TH1 linecount Command +--------------------------------------------- + + * linecount STRING MAX MIN + +Returns one more than the number of \n characters in STRING. But +never returns less than MIN or more than MAX. + +TH1 markdown Command +--------------------------------------------- + + * markdown STRING + +Renders the input string as markdown. The result is a two-element list. +The first element contains the body, rendered as HTML. The second element +is the text-only title string. + +TH1 puts Command +----------------------------------- + + * puts STRING + +Outputs the STRING unchanged. + +TH1 query Command +------------------------------------- + + * query SQL CODE + +Runs the SQL query given by the SQL argument. For each row in the result +set, run CODE. 
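As a small, hypothetical sketch (the table and column names are illustrative only; the $who substitution and the per-row result variables follow the rules described in the next paragraph):

    set who alice
    query {SELECT cmt FROM hypothetical_events WHERE author=$who LIMIT 5} {
      html "<li>[htmlize $cmt]</li>"
    }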
+ +In SQL, parameters such as $var are filled in using the value of variable +"var". Result values are stored in variables with the column name prior +to each invocation of CODE. + +TH1 randhex Command +----------------------------------------- + + * randhex N + +Returns a string of N*2 random hexadecimal digits with N<50. If N is +omitted, use a value of 10. + +TH1 redirect Command +------------------------------------------- + + * redirect URL + +Issues an HTTP redirect (302) to the specified URL and then exits the +process. + +TH1 regexp Command +--------------------------------------- + + * regexp ?-nocase? ?--? exp string + +Checks the string against the specified regular expression and returns +non-zero if it matches. If the regular expression is invalid or cannot +be compiled, an error will be generated. + +TH1 reinitialize Command +--------------------------------------------------- + + * reinitialize ?FLAGS? + +Reinitializes the TH1 interpreter using the specified flags. + +TH1 render Command +--------------------------------------- + + * render STRING + +Renders the TH1 template and writes the results. + +TH1 repository Command +----------------------------------------------- + + * repository ?BOOLEAN? + +Returns the fully qualified file name of the open repository or an empty +string if one is not currently open. Optionally, it will attempt to open +the repository if the boolean argument is non-zero. + +TH1 searchable Command +----------------------------------------------- + + * searchable STRING... + +Return true if searching in any of the document classes identified +by STRING is enabled for the repository and user has the necessary +capabilities to perform the search. The possible document classes +are: + + 1. **c** -- _Check-in comments_ + 1. **d** -- _Embedded documentation_ + 1. **t** -- _Tickets_ + 1. **w** -- _Wiki_ + +To be clear, only one of the document classes identified by each STRING +needs to be searchable in order for that argument to be true. But all +arguments must be true for this routine to return true. Hence, to see +if ALL document classes are searchable: + + if {[searchable c d t w]} {...} + +But to see if ANY document class is searchable: + + if {[searchable cdtw]} {...} + +This command is useful for enabling or disabling a "Search" entry on the +menu bar. + +TH1 setParameter Command +--------------------------------------------------- + + * setParameter NAME VALUE + +Sets the value of the specified query parameter. + +TH1 setting Command +----------------------------------------- + + * setting name + +Gets and returns the value of the specified setting. + +TH1 styleHeader Command +------------------------------------------------- + + * styleHeader TITLE + +Render the configured style header. + +TH1 styleFooter Command +------------------------------------------------- + + * styleFooter + +Render the configured style footer. + +TH1 tclEval Command +----------------------------------------- + +**This command requires the Tcl integration feature.** + + * tclEval arg ?arg ...? + +Evaluates the Tcl script and returns its result verbatim. If a Tcl script +error is generated, it will be transformed into a TH1 script error. The +Tcl interpreter will be created automatically if it has not been already. + +TH1 tclExpr Command +----------------------------------------- + +**This command requires the Tcl integration feature.** + + * tclExpr arg ?arg ...? + +Evaluates the Tcl expression and returns its result verbatim. 
If a Tcl +script error is generated, it will be transformed into a TH1 script +error. The Tcl interpreter will be created automatically if it has not +been already. + +TH1 tclInvoke Command +--------------------------------------------- + +**This command requires the Tcl integration feature.** + + * tclInvoke command ?arg ...? + +Invokes the Tcl command using the supplied arguments. No additional +substitutions are performed on the arguments. The Tcl interpreter +will be created automatically if it has not been already. + +TH1 tclIsSafe Command +--------------------------------------------- + +**This command requires the Tcl integration feature.** + + * tclIsSafe + +Returns non-zero if the Tcl interpreter is "safe". The Tcl interpreter +will be created automatically if it has not been already. + +TH1 tclMakeSafe Command +--------------------------------------------- + +**This command requires the Tcl integration feature.** + + * tclMakeSafe + +Forces the Tcl interpreter into "safe" mode by removing all "unsafe" +commands and variables. This operation cannot be undone. The Tcl +interpreter will remain "safe" until the process terminates. The Tcl +interpreter will be created automatically if it has not been already. + +TH1 tclReady Command +------------------------------------------- + + * tclReady + +Returns true if the binary has the Tcl integration feature enabled and it +is currently available for use by TH1 scripts. + +TH1 trace Command +------------------------------------- + + * trace STRING + +Generates a TH1 trace message if TH1 tracing is enabled. + +TH1 stime Command +------------------------------------- + + * stime + +Returns the number of microseconds of CPU time consumed by the current +process in system space. + +TH1 utime Command +------------------------------------- + + * utime + +Returns the number of microseconds of CPU time consumed by the current +process in user space. + +TH1 verifyCsrf Command +----------------------------------------------- + + * verifyCsrf + +Before using the results of a form, first call this command to verify +that this Anti-CSRF token is present and is valid. If the Anti-CSRF token +is missing or is incorrect, that indicates a cross-site scripting attack. +If the event of an attack is detected, an error message is generated and +all further processing is aborted. + +TH1 wiki Command +----------------------------------- + + * wiki STRING + +Renders STRING as wiki content. + +Tcl Integration Commands +------------------------ + +When the Tcl integration subsystem is enabled, several commands are added +to the Tcl interpreter. They are used to allow Tcl scripts access to the +Fossil functionality provided via TH1. The following is a summary of the +Tcl commands: + + * th1Eval + * th1Expr + +Tcl th1Eval Command +----------------------------------------- + +**This command requires the Tcl integration feature.** + + * th1Eval arg + +Evaluates the TH1 script and returns its result verbatim. If a TH1 script +error is generated, it will be transformed into a Tcl script error. + +Tcl th1Expr Command +----------------------------------------- + +**This command requires the Tcl integration feature.** + + * th1Expr arg + +Evaluates the TH1 expression and returns its result verbatim. If a TH1 +script error is generated, it will be transformed into a Tcl script error. 
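To give a flavor of how these commands combine in practice, here is a minimal TH1 fragment of the kind that might appear in a page or skin template. It is only a sketch: the "status" parameter name and the list of status values are illustrative, not part of the interface documented above.

    set status [getParameter status Open]
    combobox status {Open Fixed Closed} 1
    insertCsrf

The first line recalls the visitor's previous choice from the "status" query parameter (defaulting to "Open"), the second redraws a one-line drop-down with that value selected, and the third adds the anti-CSRF token to the enclosing form. Output from fragments like this can be suppressed with enable_output 0 and turned back on with enable_output 1.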
Index: www/theory1.wiki ================================================================== --- www/theory1.wiki +++ www/theory1.wiki @@ -74,11 +74,11 @@ Another way to think of the relational tables in a Fossil repository is as an index for the artifacts. Without the relational tables, to generate a report like a timeline would require scanning every artifact - the equivalent of a full table scan. The relational tables hold pointers -the the relevant artifacts in presorted order so that generating a timeline +the relevant artifacts in presorted order so that generating a timeline is much more efficient. So like an index in a relational database, the relational tables in an Fossil repository do not add any new information, they merely make the information in the artifacts faster and easier to look up. @@ -95,11 +95,11 @@ as its scripting language. This misunderstanding likely arises because people fail to appreciate that SQL is a programming language. People are taught that SQL is a "query language" as if that were somehow different from a -"programming language". But they really are two different favors of the +"programming language". But they really are two different flavors of the same thing. I find that people do better with SQL if they think of SQL as a programming language and each statement of SQL is a separate program. SQL is a peculiar programming language in that one uses SQL to specify what to compute whereas in most other programming languages one specifies how ADDED www/tickets.wiki Index: www/tickets.wiki ================================================================== --- www/tickets.wiki +++ www/tickets.wiki @@ -0,0 +1,199 @@ +The Fossil Ticket System + +
+1.0 File Format
            + +At its lowest level, the tickets of Fossil consist solely of +[./fileformat.wiki#tktchng | ticket change artifacts]. +Each ticket change artifact corresponds to a single change +to a ticket. The act of creating a ticket is considered a +change. + +Each ticket change artifact contains the following information: + +
+ • The ID of the ticket that was changed
+ • The timestamp for when the change occurred
+ • The user who made the change
+ • A list of key/value pairs that show what changed in the ticket
            + +To determine the current state of a particular ticket, Fossil orders +the change artifacts for that ticket from oldest to most recent, +then applies each change in timestamp order. + +On each change artifact, there are one or more key/value pairs that +implement the change. The key corresponds to a field of the ticket +that is modified. The value may either replace the earlier value for +that key, or the value may be appended to the prior value. + +
+2.0 Ticket Tables
+ +The low-level artifact format for ticket content is tedious and +cumbersome to access in real time. To facilitate reporting and display +of tickets, the low-level artifact information is collected and +summarized in a pair of SQL tables in each local repository. Display +and reporting of tickets is accomplished by querying these two tables.
+ +Note that only the low-level ticket change artifacts are synced. The +content of the two ticket tables can always be reconstructed from the +ticket change artifacts. And, indeed, the reconstruction of the ticket +tables from low-level artifacts happens automatically whenever new +ticket change artifacts are received by the system. The important point +to remember is that display of tickets is accomplished using SQL tables +but that recording and syncing of ticket information is accomplished using +ticket change artifacts. + +
+2.1 Ticket Table Schema
            + +The two ticket tables are called TICKET and TICKETCHNG. +The default schema (as of this writing) for these two tables is shown +below: + +
            +CREATE TABLE ticket( + -- Do not change any column that begins with tkt_ + tkt_id INTEGER PRIMARY KEY, + tkt_uuid TEXT UNIQUE, + tkt_mtime DATE, + tkt_ctime DATE, + -- Add as many fields as required below this line + type TEXT, + status TEXT, + subsystem TEXT, + priority TEXT, + severity TEXT, + foundin TEXT, + private_contact TEXT, + resolution TEXT, + title TEXT, + comment TEXT +); +CREATE TABLE ticketchng( + -- Do not change any column that begins with tkt_ + tkt_id INTEGER REFERENCES ticket, + tkt_rid INTEGER REFERENCES blob, + tkt_mtime DATE, + -- Add as many fields as required below this line + login TEXT, + username TEXT, + mimetype TEXT, + icomment TEXT +); +CREATE INDEX ticketchng_idx1 ON ticketchng(tkt_id, tkt_mtime); +
            + +Generally speaking, there is one row in the TICKETCHNG table for each +change to each ticket. In other words, there is one row in the +TICKETCHNG table for each low-level ticket change artifact. The +TICKET table, on the other hand, contains a summary of the current +status of each ticket. + +Fields of the TICKET and TICKETCHNG tables that begin with "tkt_" are +used internally by Fossil. The logic inside of Fossil that converts +ticket change artifacts into row data for the two ticket tables expects +the "tkt_" fields to always be present. All of the other fields of the +TICKET and TICKETCHNG tables are "user defined" in the sense that they +can be anything the administrator of the system wants them to be. The +user-defined fields should correspond to keys in the key/value pairs of +the ticket change artifacts. + +The tkt_id fields of TICKET and TICKETCHNG are an integer key +used to uniquely identify the ticket to which the row belongs. These +keys are for internal use only and may change when doing a "fossil rebuild". + +The tkt_uuid field is the unique hexadecimal identifier for the ticket. +Ticket identifiers appear to be SHA1 hash strings, but they +are not really the hash of any identifiable artifact. They are +just random hexadecimal numbers. When creating a new ticket, Fossil uses +a (high-quality) pseudo-random number generator to create the ticket +number. The ticket numbers are large so that the chance of collision +between any two tickets is vanishingly small. + +The tkt_mtime field of TICKET shows the time (as a Julian day number) +of the most recent ticket change artifact for that ticket. The +tkt_mtime field of TICKETCHNG shows the timestamp on the ticket +change artifact that the TICKETCHNG row refers to. The +tkt_ctime field of TICKET is the time of the oldest ticket change +artifact for that ticket, thus holding the time that the ticket was +created. + +The tkt_rid field of TICKETCHNG is the integer primary key in the +BLOB table of the ticket change artifact that gave rise to the row in the +TICKETCHNG table. + +All the other fields of the TICKET and TICKETCHNG tables are available +for customization for individual projects. None of the remaining fields +are required, but all of them are needed in order to use the default +ticket creating, viewing, and editing scripts. It is recommended that +the other fields be retained and that customizations be restricted to +adding new fields above and beyond the default. + +
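As an illustration of how the default schema gets used, a report or page script could render the open tickets with the TH1 query, html, and htmlize commands documented earlier in this change. This is only a sketch: the "Open" status value and the choice of columns simply follow the default schema shown above.

    html "<ul>"
    query {SELECT title FROM ticket WHERE status='Open'
           ORDER BY tkt_mtime DESC} {
      html "<li>"
      html [htmlize $title]
      html "</li>"
    }
    html "</ul>"

Because TICKET and TICKETCHNG are ordinary SQLite tables, the same SELECT can also be tried directly against a copy of the repository database when experimenting with schema customizations.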
+2.2 Translating Artifacts To Tables
+ +Each row in the TICKETCHNG table corresponds to a single ticket change +artifact. The tkt_id field is the integer primary key of the TICKET +table entry for the corresponding ticket. The tkt_rid field is the +integer primary key for the BLOB table entry that contains the low-level +artifact text. The tkt_mtime field is the timestamp on the ticket +change artifact, expressed as a Julian day number. If the ticket +change artifact contains a key/value pair where the key is "login", +then the corresponding value is stored in the login field of the +TICKETCHNG table. The same is true for the "username", "mimetype", and +"icomment" fields. Any time there is a key/value pair in the ticket +change artifact and the key corresponds to the name of a field in the +TICKETCHNG table, then the value of that key/value pair is stored in +the TICKETCHNG table. If the TICKETCHNG table has a field for which there +is no corresponding key/value pair in the artifact, then that field of +the TICKETCHNG table is NULL. If there are key/value pairs in the +artifact that have no corresponding field in the TICKETCHNG table, those +key/value pairs are silently ignored.
+ +Each row in the TICKET table records the overall status of a ticket. +The tkt_id field is a unique integer primary key for the ticket. +The tkt_uuid field is the global ticket identifier - a large random +hexadecimal constant. The tkt_mtime and tkt_ctime fields hold the +times of the most recent and the oldest ticket change artifacts for +this ticket, respectively.
+ +To reconstruct the TICKET table, the ticket change +artifacts are visited in timestamp order. As each ticket change artifact is +visited, its key/value pairs are examined. For any key/value pair in +which the key is the same as a field in the TICKET table, the value +of that pair either replaces or is appended to the previous value +of the corresponding field in the TICKET table. Whether a value is +replaced or appended is determined by markings in the ticket change +artifact itself. Most fields are usually replaced. (For example, changing +the status from "Open" to "Fixed" would involve a key/value pair +"status/Fixed" with the replace attribute set). The main exception +is the "comment" field, which is usually appended with new comment +text.
+ +Note that the replace-or-append mark on ticket change artifacts is +only used by the TICKET table. Since the initial value of all fields +in the TICKETCHNG table is NULL, the replace-or-append mark makes no +difference there. + +
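The row-per-change layout also makes it easy to read the history of one ticket back out in the same oldest-first order used for the reconstruction described above. Here is a sketch in TH1: the uuid variable is hypothetical and would hold the full hexadecimal identifier of the ticket, and the column names follow the default schema.

    query {SELECT c.login, c.icomment
             FROM ticketchng AS c, ticket AS t
            WHERE t.tkt_uuid=$uuid AND c.tkt_id=t.tkt_id
            ORDER BY c.tkt_mtime} {
      html "<p><i>"
      html [htmlize $login]
      html "</i>: "
      html [htmlize $icomment]
      html "</p>"
    }

The $uuid reference inside the SQL is filled in by the query command itself, as described in the TH1 documentation, so no string concatenation is needed.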
+2.3 Old-Style versus New-Style Tickets
+ +Older versions of Fossil +(before [/timeline?c=2012-11-27T16:26:29 | 2012-11-27]) +only supported the TICKET table. +In this older style, new comments were added to tickets by using +the append-value feature on the comment field. Thus the TICKET.COMMENT +field contains the complete text of all user comments already appended +together and ready for display.
+ +A problem with the old approach is that all comment text had to +be in the same format. In other words, all comment text had to be +either plaintext or wiki or HTML. It was not possible for some comments +to be in HTML and others to be plaintext. Some site administrators wanted the +ability to mix plaintext, wiki, and HTML comments and display each +comment according to its chosen format. Hence, Fossil was enhanced to +support the "new-style" tickets.
+ +The TICKETCHNG table was added to support new-style tickets. In the new +style, comment text is stored with the "icomment" (for "Incremental Comment") +key and appears separately, and with its own mimetype, in multiple rows +of the TICKETCHNG table. It then falls to the TH1 script code on the +View Ticket Page to query the TICKETCHNG table and extract and format +the various comments in timestamp order.
ADDED www/uitest.html Index: www/uitest.html ================================================================== --- www/uitest.html +++ www/uitest.html @@ -0,0 +1,129 @@ +Fossil UI Test
+Test frame for the Fossil server at ???.
+ • 0 of 0 pages checked so far.
+ • Current page:
+ • Press "Begin" to begin testing
+Begin  Previous  Next  Test Passes
+Testing Complete!
            + ADDED www/webpage-ex.md Index: www/webpage-ex.md ================================================================== --- www/webpage-ex.md +++ www/webpage-ex.md @@ -0,0 +1,123 @@ +Web-Page Examples +================= + +Here are just a few examples of the many web pages supported +by Fossil. Follow hyperlinks on the examples below to see many +other examples. + + + * Example + 100 most recent check-ins. + + * Example + All changes to the src/file.c source file. + + * Example + All check-ins using a particular version of the src/file.c + source file. + + * Example + Check-ins proximate to an historical point in time (2014-01-01). + + * Example + The previous example augmented with file changes. + + * Example + First 25 check-ins after 1970-01-01. (The first 25 check-ins of + the project.) + + * Example + All check-ins of the "svn-import" branch together with check-ins + that merge with that branch. + + * Example + All check-ins of the "svn-import" branch only. + + * Example + 100 most recent check-ins color coded by committer rather than by branch. + + * Example + All check-ins on the most direct path from + version-1.27 to version-1.28 + + * Example + Show check-ins that contain file name changes + + * Example + Show check-ins circa 2014-01-08 by user "drh". + + (Hint: In the pages above, click the graph nodes + for any two check-ins or files to see a diff.) + + + * Example + Full-text search for "interesting pages". + + * Example + All files for a particular check-in (daff9d20621480) + + * Example + All files for the latest check-in on a branch (trunk) sorted by + last modification time. + + * Example + Age of all files in the latest checking for branch "svn-import". + + * Example + Table of branches. (Click on column headers to sort.) + + * Example + Overall repository status. + + * Example + Number of check-ins per committer. + + * Example + Number of check-ins for each source file. + (Click on column headers to sort.) + + * + Example + Most recent change to each line of a particular source file in a + particular check-in. + + * Example + List of tags on check-ins. + + * Example + The largest objects in the repository. + + * Example + SHA1 prefix collisions Index: www/webui.wiki ================================================================== --- www/webui.wiki +++ www/webui.wiki @@ -1,41 +1,55 @@ -
The Fossil Web Interface
            +The Fossil Web Interface -One of the innovative features of fossil is its built-in web interface. +One of the innovative features of Fossil is its built-in web interface. This web interface provides everything you need to run a software development project: * [./bugtheory.wiki | Ticketing and bug tracking] * [./wikitheory.wiki | Wiki] * [./embeddeddoc.wiki | On-line documentation] * Status information * Timelines + * Graphs of revision and branching history + * [./event.wiki | Technical notes] * File and version lists and differences + * Download historical versions as ZIP archives * Historical change data - * Links to download historical versions as ZIP archives + * Add and remove tags on check-ins + * Move check-ins between branches + * Revise check-in comments + * Manage user credentials and access permissions + * And so forth... (some [./webpage-ex.md|examples]) -You get all of this, and more, for free when you use fossil. +You get all of this, and more, for free when you use Fossil. There are no extra programs to install or setup. Everything you need is already pre-configured and built into the -self-contained, stand-alone fossil executable. +self-contained, stand-alone Fossil executable. As an example of how useful this web interface can be, -the entire [./index.wiki | fossil website] (except for the -[http://www.fossil-scm.org/download.html | precompiled binary download page]), +the entire [./index.wiki | Fossil website] (except for the +[http://www.fossil-scm.org/download.html | download page]), including the document you are now reading, -is rendered using the stock fossil web interface, with no enhancements, +is rendered using the Fossil web interface, with no enhancements, and little customization. -Note also that because fossil is a distribute system, you can run +
+Key point: The Fossil website is just a running instance +of Fossil! +
            + +Note also that because Fossil is a distributed system, you can run the web interface on your local machine while off network (for example, while on an airplane) including making changes to wiki pages and/or trouble ticket, then synchronize with your -co-workers after you reconnect. +co-workers after you reconnect. When you clone a Fossil repository, you +don't just get the project source code, you get the entire project +management website.
Drop-Dead Simple Startup
            -To start using the built-in fossil web interface on an existing fossil +To start using the built-in Fossil web interface on an existing Fossil repository, simply type this: fossil ui existing-repository.fossil Substitute the name of your repository, of course. @@ -45,30 +59,33 @@ from within an open check-out, you can omit the repository name: fossil ui The latter case is a very useful short-cut when you are working on a -fossil project and you want to quickly do some work with the web interface. -Notice that fossil automatically finds an unused TCP port to run the -server own and automatically points your web browser to the correct +Fossil project and you want to quickly do some work with the web interface. +Notice that Fossil automatically finds an unused TCP port to run the +server on and automatically points your web browser to the correct URL. So there is never any fumbling around trying to find an open port or to type arcane strings into your browser URL entry box. The interface just pops right up, ready to run. -The fossil web interface is also very easy to setup and run on a -network server, as either a CGI program or from inetd. Details on how +The Fossil web interface is also very easy to setup and run on a +network server, as either a CGI program or from inetd, or as an +SCGI server. Details on how to do that are described further below.
Things To Do Using The Web Interface
            You can view timelines of changes to the project. The default "Timeline" link on the menu bar takes you to a page that shows the 20 -most recent check-ins, wiki page edits, and ticket/bug-report changes. +most recent check-ins, wiki page edits, ticket/bug-report changes, +and/or blog entries. This gives a very useful snapshot of what has been happening lately on the project. You can click to go further back in time, if needed. Or follow hyperlinks to see details, including diffs and annotated diffs, -of individual check-ins, wiki page edits, or ticket changes. +of individual check-ins, wiki page edits, ticket changes, and +blog edits. You can view and edit tickets and bug reports by following the "Tickets" link on the menu bar. Fossil is backed by an SQL database, so users with appropriate permissions can write new ticket report formats based on SQL query statements. @@ -86,17 +103,17 @@ [/wiki_rules | wiki formatting rules] so you won't have to spend a lot of time learning a new markup language. And, as with tickets, all of your edits will automatically merge with those of your co-workers when your repository synchronizes. -You can view summary reports of leaves and branches in the -check-in graph by visiting the "Leaves" or "Branches" links on the +You can view summary reports of branches in the +check-in graph by visiting the "Branches" link on the menu bar. From those pages you can follow hyperlinks to get additional details. These screens allow you to easily keep track of what is going on with separate subteams within your project team. -The "Files" link on the menu allows you to browse though the file +The "Files" link on the menu allows you to browse through the file hierarchy of the project and to view complete changes histories on individual files, with hyperlinks to the check-ins that made those changes, and with diffs and annotated diffs between versions. The web interface supports [./embeddeddoc.wiki | embedded documentation]. @@ -116,14 +133,15 @@ up as the "Home" page can be changed. It is often useful to set the "Home" page to be a wiki page or an embedded document.
Installing On A Network Server
            -When you create a new fossil project and after you have configured it +When you create a new Fossil project and after you have configured it like you want it using the web interface, you can make the project available to a distributed team by simply copying the single -repository file up to a web server that supports CGI. Just put the +repository file up to a web server that supports CGI or SCGI. To +run Fossil as CGI, just put the sample-project.fossil file in a directory where CGI scripts have both read and write permission on the file and the directory that contains the file, then add a CGI script that looks something like this: @@ -130,18 +148,22 @@ #!/usr/local/bin/fossil repository: /home/www/sample-project.fossil Adjust the script above so that the paths are correct for your system, -of course, and also make sure the fossil binary is installed on the server. +of course, and also make sure the Fossil binary is installed on the server. But that is all you have to do. You now have everything you need to host a distributed software development project in less than five minutes using a two-line CGI script. -You don't have a CGI-capable web server running on your server machine? -Not a problem. The fossil interface can also be launched via inetd or -xinetd. An inetd configuration line sufficient to launch the fossil +Instructions for setting up an SCGI server are +[./scgi.wiki | available separately]. + +You don't have a CGI- or SCGI-capable web server running on your +server machine? +Not a problem. The Fossil interface can also be launched via inetd or +xinetd. An inetd configuration line sufficient to launch the Fossil web interface looks like this: 80 stream tcp nowait.1000 root /usr/local/bin/fossil \ /usr/local/bin/fossil http /home/www/sample-project.fossil Index: www/wikitheory.wiki ================================================================== --- www/wikitheory.wiki +++ www/wikitheory.wiki @@ -1,18 +1,20 @@ -
Wiki In [./index.wiki | Fossil]
+Wiki In Fossil
+
Introduction
            Fossil uses [/wiki_rules | wiki markup] for many things: * Stand-alone wiki pages. * Description and comments in [./bugtheory.wiki | bug reports]. * Check-in comments. * [./embeddeddoc.wiki | Embedded documentation] files whose - name ends in "wiki". + name ends in ".wiki". + * [./event.wiki | Technical notes]. The [/wiki_rules | formatting rules] for fossil wiki are designed to be simple and intuitive. The idea is that wiki provides -paragraph breaks, numbered and bulletted lists, and hyperlinking for +paragraph breaks, numbered and bulleted lists, and hyperlinking for simple documents together with a safe subset of HTML for more complex formatting tasks. Some commentators feel that the use of HTML is a mistake and that fossil should use the markup language of the @@ -29,10 +31,14 @@ 3. Where the fossil wiki markup language is insufficient, HTML is used. HTML is a standard language familiar to most programmers so there is nothing new to learn. And, though cumbersome, the HTML does not need to be used very often so is not a burden. +UPDATE: Since 2012, Fossil also contains a [/md_rules | Markdown] +rendering engine. Markdown can optionally be used to format +[./embeddeddoc.wiki | embedded documents], wiki pages, +[./event.wiki | technical notes], and bug report text.
Stand-alone Wiki Pages
            Each wiki page has its own revision history which is independent of the sequence of check-ins (check-ins). Wiki pages can branch and merge @@ -55,23 +61,23 @@
Embedded Documentation
            Files in the source tree that use the ".wiki" suffix can be accessed and displayed using special URLs to the fossil server. This allows project documentation to be stored in the source tree and accessed -online. (Details are descripted [./embeddeddoc.wiki | separately].) +online. (Details are described [./embeddeddoc.wiki | separately].) -Some project prefer to store their documentation in wiki. There is nothing +Some projects prefer to store their documentation in wiki. There is nothing wrong with that. But other projects prefer to keep documentation as part of the source tree, so that it is versioned along with the source tree and so that only developers with check-in privileges can change it. Embedded documentation serves this latter purpose. Both forms of documentation use the exact same wiki markup language. Some projects may choose to use both forms of documentation at the same time. Because the same -format is used, it is trival to move file from wiki to embedded documentation +format is used, it is trivial to move a file from wiki to embedded documentation or back again as the project evolves.
Bug-reports and check-in comments
            The comments on check-ins and the text in the descriptions of bug reports both use wiki formatting. Exactly the same set of formatting rules apply. There is never a need to learn one formatting language for documentation and a different markup for bugs or for check-in comments. ADDED www/xkcd-git.gif Index: www/xkcd-git.gif ================================================================== --- www/xkcd-git.gif +++ www/xkcd-git.gif cannot compute difference between binary files